\begin{document}
\title[Index of a finitistic space]{Index of a finitistic space and a generalization of the topological central point theorem}
\author{Satya Deo*}
\thanks{* While this work was done, the author was supported by the DST (Govt of India) Research Grant sanction letter number SR/S4/MS:567/09 dated 18.02.2010}
\date{}
\address{Harish-Chandra Research Institute, \\
Chhatnag Road, Jhusi,\\
Allahabad 211 019, India. }
\email{[email protected], [email protected] }
\subjclass[2000]{}
\keywords{}
\begin{abstract}
In this paper we prove that if $G$ is a $p$-torus (resp. torus) group acting without fixed points on a finitistic space $X$ (resp. with finitely many orbit types), then the $G$-index $i_G(X) < \infty$. Using this $G$-index we obtain a generalization of the Central Point Theorem and also of the Tverberg Theorem for any $d$-dimensional Hausdorff space.
\end{abstract}
\vskip 1in
\maketitle
\section{Introduction}
Let $X$ be a topological space with an involution $T$ and let $\mathbb S^n$ denote the $n$-sphere with the antipodal involution. The $\mathbb Z_2$-index of the $\mathbb Z_2$-space $X$ is defined as the smallest integer $n$ such that there is an equivariant map $X\to \mathbb S^n$ (see \cite{Mat}, p.95). The Borsuk-Ulam theorem asserts that the $\mathbb Z_2$-index of $\mathbb S^n$ is $n$. The idea of this $\mathbb Z_2$-index has been generalized in many ways, first for any finite group (see \cite{con}) and then for arbitrary $G$-spaces where $G$ is a compact Lie group acting on a space $X$ (see \cite{Mat}, \cite{vol}). Its many applications, in proving nonembedding results such as the van Kampen-Flores theorem and results such as the Central Point Theorem and the Tverberg Theorem, make it a powerful tool in topological combinatorics. The idea of an ``ideal-index'' of a $G$-space $X$, using the spectral sequence arising from the principal $G$-bundle map of the Borel construction, has proved to be quite effective (see \cite {vol}, \cite{kar}). This ideal-index defines very naturally several numerical indices, in particular the numerical $G$-index $i_G(X)$ of a $G$-space $X$. If $f:X\to Y $ is a $G$-map then it follows easily that $i_G(X)\leq i_G(Y).$ This result, combined with the $G$-index of the deleted join with its natural group actions, is utilized in showing the nonexistence of a $G$-map from a $G$-space $X$ to a $G$-space $Y$. However, in doing so it is necessary to know that the $G$-index of $Y$ is finite. The first aim of this paper is to show that if $G$ is a $p$-torus (resp. torus) group acting without fixed points on a finitistic space $X$ (resp. with finitely many orbit types), then the $G$-index $i_G(X) < \infty$. This is done in Section 3. Then, as an application, we obtain in Section 5 a generalization of the Central Point Theorem and also of the Tverberg Theorem for any $d$-dimensional Hausdorff space. The cohomology rings of the classifying spaces of these groups have a neat presentation with a polynomial part in several variables, which can be used as a multiplicative set in the arguments below.
\section{Preliminaries}
Let $G$ be a compact Lie group acting continuously on a topological space $X$. Suppose $B_G$ denotes the classifying space of $G$ and $E_G\to B_G$ is the principal $G$-bundle of $G$. We have two continuous maps $\pi_1,\;\pi_2$ defined by the Borel construction
$$\begin{array}{ccccc}
X& \leftarrow & X\times E_G & \rightarrow & E_G\\
\downarrow & & \downarrow & & \downarrow\\
X\backslash G & \stackrel{\pi_2}\leftarrow & X\times_G E_G & \stackrel{\pi_1}\rightarrow & B_G
\end{array}$$
Let $H^*$ denote the Alexander-Spanier cohomology and $H^{*}_{G}(X)=H^*(X\times_GE_G),\;H^{*}_{G}(pt)=H^{*}(B_G)$ denote the corresponding equivariant cohomology of the $G$-space $X$ and of the trivial $G$-space $\{pt\}$ consisting of a point. Then $\Lambda^*=H^{*}_G(pt)$ is a commutative ring with identity and $H^{*}_{G}(X)$ becomes a $\Lambda^*$-module via the ring homomorphism induced by the equivariant map $X\to\{pt\}$. The Serre spectral sequence of the bundle map $\pi_1:X_G\to B_G$ with fibre $X$ defines a sequence of homomorphisms
$$\Lambda^*\to E^{*0}_{2}\to\cdots\to E^{*0}_{r}\to\cdots\to E^{*0}_{\infty}\subset H^{*}_{G}(X)$$
The kernel of the composite map $\Lambda^*\to E^{*0}_{\infty}$ or $\Lambda^*\to H^{*}_{G}(X)$, denoted by $\hbox{Ind}^{G}(X)$, is called the ``ideal index'' of the $G$-space $X$. The kernel of the homomorphism $\Lambda^*\to E^{*0}_{r+1}$, denoted by $_r\hbox{Ind}^{G}(X)$, is called the ${r}^{th}$ filtration of the ideal index $\hbox{Ind}^G(X)= _{\infty}\hbox{Ind}^G(X)$. Now we have (see \cite{vol})
{\Def Let $\alpha (\neq 0)\in\hbox{Ind}^G(X)$ and $i_X(\alpha)$ denote the smallest $r$ such that $\alpha\in\; _r\hbox{Ind}^G(X)$. Then $\min \{r\;|\; i_X(\alpha)=r$ and $\alpha (\neq 0)\in\hbox{Ind}^G(X)\}$ is called the $G$-index of the $G$-space $X$ and is denoted by $i_G(X)$. Similarly, $\min \{r\;|\; i_X(\alpha)=r$ and $\alpha (\neq 0)\in\hbox{Ind}^G(X)$ and is not a zero-divisor in $\Lambda^*\}$ is yet another $G$-index of $X$ and is denoted by $i'_G(X)$. }
Recall that Yang's homology index, $\hbox{ind}_G(X)$, defined by using the equivariant homology theory with $\mathbb Z_2$-coefficients, is a well-known numerical $\mathbb Z_2$-index \cite{yan}. Clearly $i_G(X)\leq i'_G(X)$. If Ind$^G(X)=0$, we put $i_G(X)=\infty$. Note that if $X$ has a fixed point, then there is a monomorphism $\Lambda^*\to H^{*}_{G}(X)$ and hence $i_G(X)=\infty$. If $f:X\to Y$ is a $G$-map, then $i_G(X)\leq i_G(Y)$. Also, if $G$ acts on the sphere $\mathbb S^k$ without fixed points, then $i_G(\mathbb S^k)=k+1$. It is natural to ask for which $G$-spaces $X$, with $G$ acting without fixed points, the index $i_G(X)$ is finite. It must be pointed out that the index $i_G(X)$ of a $G$-space $X$ depends essentially on the action of $G$ on $X$: e.g., when the circle group $G=\mathbb S^1$ acts on an $n$-sphere $\mathbb S^n$ with fixed points, then $i_G(\mathbb S^n)=\infty$, but when it acts without fixed points then $i_G(\mathbb S^n)=n+1.$
Here we show that if the group $G$ is a $p$-torus $\mathbb Z^{r}_{p}$ or a torus $T^r$ (acting with finitely many orbit types) and $X$ is a finitistic $G$-space, then with suitable coefficient groups for the cohomology, $i_G(X)$ and $i'_G(X)$ both are finite. The result is already known when $X$ is a compact $G$-space or $X$ is a paracompact $G$-space of finite cohomological dimension and $G$ is acting with finitely many orbit types (see \cite{vol}, Section 3).
Let us recall (see ~\cite{Br}, \cite{deo-tri}) that a paracompact Hausdorff space $X$ is said to be {\bf finitistic} if each open cover of $X$ has a finite dimensional open refinement. Every compact space is trivially finitistic and every paracompact space of finite covering dimension is also clearly finitistic. On the other hand, there are numerous finitistic spaces which are neither compact nor of finite cohomological dimension, e.g., the product space $X=(\prod^{\infty}_{n=1}\mathbb S^n)\times\mathbb R^k$. Finitistic spaces are the most general class of spaces suitable for cohomology theory of topological transformation groups (\cite{All},\cite{Br}). Hence it is natural to ask whether or not the index $i_G(X)$ is finite when $G$ is acting on a finitistic space $X$ without fixed points.
\section{Index of a finitistic $G$-space is finite}
Suppose the group $G=\mathbb Z^{r}_{p}$ or $T^r$ and the coefficients are in the fields $k=\mathbb Z_p$ or $\mathbb Q$ respectively. Then we know that (see \cite{Hsi}, p.45)
\begin{eqnarray*}
H^*(B_G,k) & = & k[t_1,t_2,\cdots ,t_r],\; \deg t_i=2 (\hbox{resp.}\;1),\\
& & \;\hbox{when}\; G=T^r (\hbox{resp.}\; \mathbb Z^{r}_{2}),\; k=\mathbb Q (\hbox{resp.}\; \mathbb Z_2).\\
H^*(B_G,k) & = & k[t_1,t_2,\cdots ,t_r]\otimes \Lambda[v_1,\cdots, v_r],\; \deg v_i=1,\;\deg t_i=2,\;t_i=\beta (v_i)\\
& & \hbox{when}\;G=\mathbb Z^{r}_{p},\; k=\mathbb Z_p\;\hbox{and $p$ is odd}.
\end{eqnarray*}
Suppose $G$ acts on a finitistic space $X$ without fixed points. For each $x\in X$, the inclusion map $ G(x)\to X$ induces a continuous map $j_x: G(x)_G\to X_G$ of the Borel constructions. Let $j^{*}_{x}: H^{*}_{G}(X)\to H^{*}_{G}(G(x))$ denote the induced homomorphism in equivariant cohomology. Also, the constant $G$-map $G(x)\to \{pt\}$ induces a homomorphism $H^*(B_G)\to H^{*}_{G}(G(x))=H^*(B_{G_x})$ for each $x\in X$. Let $S=R-\{0\}$ be the multiplicative subset where $R$ is the polynomial part of the ring $H^*(B_G, k)$. If $G$ acts on $X$ without fixed points, then $G_x$ is a proper subgroup of $G$ for each $x\in X$, and so from the cohomology algebra mentioned above it follows that for each $x$ there is an $s\in S$ which goes to zero under the map $H^*(B_G)\to H^*(B_{G_x})$. Note that $H^*(B_{G_x})$ depends only on the conjugacy class of $G_x$ in $G$.
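To illustrate how such an element $s$ arises, consider (purely as an illustration) the case $G=T^2$, $k=\mathbb Q$ and an isotropy group $G_x=\mathbb S^1\times\{1\}$. With suitable choices of the generators, the restriction map is the ring homomorphism
$$H^*(B_{T^2},\mathbb Q)=\mathbb Q[t_1,t_2]\to H^*(B_{\mathbb S^1},\mathbb Q)=\mathbb Q[t_1],\;\;t_1\mapsto t_1,\;t_2\mapsto 0,$$
so the element $s=t_2\in S$ goes to zero on this orbit type.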
If $G=\mathbb Z^{r}_{p}$, then trivially there are only finitely many conjugacy classes of $G_x$, but if $G=T^r$, we {\bf assume} that there are only finitely many orbit types in $X$, and hence in this case also there are only finitely many conjugacy classes of $G_x$ in $G$. Choosing one $s_i\in S$ for each conjugacy class $G_{x_i},\;i=1,2,\cdots ,n$, we find that the product $s=s_1 s_2 \cdots s_n\in S$ has the property that $s$ goes to zero under the map $H^*(B_G)\to H^*(B_{G_x})\;\;\forall\; x\in X$. For the element $\pi_{1}^{*}(s)\in H^{*}_{G}(X)$, we will show that $(\pi^{*}_{1}(s))^N=0$ for some integer $N$, which will prove that $i_G(X)<\infty$.
Let us call the element $j^{*}_{x}(\pi^{*}_{1}(s))$ the {\bf restriction} of the element $s$ to the orbit $G(x)$ and denote it by $s|G(x)$. Since $s|G(x)=0$, we find by the tautness property of the Alexander-Spanier cohomology that there is an invariant open neighborhood $U(x)$ of $x$ in $X$ such that $s|U(x)=0$. Thus we now have an open covering $\{U(x)\}$ of the space $X$. Since $X$ is finitistic, we find from \cite{deo-tri} that the orbit space $X/G$ is also finitistic. Hence the open covering $\{q(U(x))\}$ has a finite dimensional open refinement, say $\mathcal V=\{V_{\alpha}\;|\;\alpha\in\mathcal{A}\}$. Here $q:X\to X/G$ is the quotient map. Let $\{f_{\alpha}\;|\;\alpha\in\mathcal{A}\}$ be a partition of unity subordinate to $\mathcal V$ and $\mathcal W=\{W_{\alpha}\;|\;W_{\alpha}=f^{-1}_{\alpha}(0,\; 1]\subset V_{\alpha}\}$.
Then $\mathcal W$ is a finite-dimensional locally finite refinement of $\mathcal V$. Let $\{X_{\alpha}\;|\;\alpha\in\mathcal A\}$ be a shrinking of $\{W_{\alpha}\}$ which means $\{X_{\alpha}\;|\;\alpha\in\mathcal A\}$ is a locally finite, finite-dimensional covering of $X/G$ such that $\overline{X}_{\alpha}\subset W_{\alpha}$ for each $\alpha\in\mathcal A$. Put
$$Z_{\alpha}=q^{-1}(X_{\alpha}) \;\hbox{and}\;Y_{\alpha}=q^{-1}(W_{\alpha}).$$
Then $\{Z_{\alpha}\}$ and $\{Y_{\alpha}\}$ both are finite-dimensional locally finite coverings of the space $X$ consisting of invariant open sets so that $\{Z_{\alpha}\}$ is a shrinking of $\{Y_{\alpha}\}$.
Now for each $\alpha\in\mathcal A$, let $\nu_{\alpha}:X / G\to [0,1]$ be a continuous function ($X/G$ is normal) such that $\nu_{\alpha}(\overline{X}_{\alpha})=1$ and $\nu_{\alpha}(X/G-W_{\alpha})=0$. Let $\mu_{\alpha}$ be the composite map $\nu_{\alpha}\circ q: X\to [0,1]$. Then $\mu_{\alpha}(\overline{Z}_{\alpha})=1$ and $\mu_{\alpha}(X-Y_{\alpha})=0.$
Now for any finite nonempty subset $F$ of $\mathcal A$, let
$$U(F)=\{x\in X\;|\;\min_{\alpha\in F}\mu_{\alpha}(x)>\max_{\alpha\notin F}\mu_{\alpha}(x)\}$$
Then $U(F)\subseteq\bigcap_{\alpha\in F}Y_{\alpha}$, and because of the finite dimensionality of the covering $\{Y_{\alpha}\}$, there is an integer $n$ such that $U(F)=\phi$ whenever $|F|\geq n$. Since $\{W_{\alpha}\}$ is locally finite and is a covering, $\{U(F)\;|\;F\subset\mathcal A\;\hbox{and}\;|F|<\infty\}$ is an invariant open cover of $X$. Moreover $|F_1|=|F_2|$ and $F_1\neq F_2$ implies $U(F_1)\bigcap U(F_2)=\phi$ because otherwise for some $x\in U(F_1)\bigcap U(F_2)$, we would have
\begin{eqnarray*}
\min_{\alpha\in F_1}\mu_{\alpha}(x) & > & \max_{\alpha\notin F_1}\mu_{\alpha}(x)\\
&\geq & \min_{\alpha\in F_2}\mu_{\alpha}(x)\\
& > & \max_{\alpha\notin F_2}\mu_{\alpha}(x)\\
&\geq & \min_{\alpha\in F_1}\mu_{\alpha}(x),
\end{eqnarray*}
a contradiction.
Let $U_i=\bigcup\{U(F)\;|\;|F|=i\}$. Then $\{U_1, U_2,\cdots , U_n\}$ is a covering of $X$ by invariant open sets. Since, for each $i$, $U_i$ is a disjoint union of the sets $U(F)$ with $|F|=i$, we have
$$H^{*}_{G}(U_i)=\prod_{|F|=i}H^{*}_{G}(U(F))$$
Also, since $U(F)\subset Y_{\alpha}\subset q^{-1}(V_{\alpha})$ for any $\alpha\in F$, and each $q^{-1}(V_{\alpha})$ is contained in some $U(x)$, we find that $s|U(F)=0$ for each $F$, and therefore $s|U_i=0\;\;\forall\; i=1,2,\cdots ,n$. Considering the pair $(X,U_i)$ we observe that $\pi^{*}_{1}(s)$ comes from a class in $H^{*}_{G}(X, U_i)$, and hence the product $\pi^{*}_{1}(s^n)= (\pi^{*}_{1}(s))^n$ lies in the image of $H^{*}_{G}(X, U_1\cup\cdots\cup U_n)=H^{*}_{G}(X, X)=0$. Therefore $(\pi^{*}_{1}(s))^n$ is zero in $H^{*}_{G}(X)$. We have thus proved the following.
{\Thm Suppose $G=\mathbb Z^{r}_{p}$ (resp. $T^r$) is a $p$-torus (resp. torus) acting on a finitistic space $X$ without fixed points. If $G=T^r$, we assume further that $G$ acts on $X$ with finitely many orbit types. Then, with coefficients in $\mathbb Z_p$ (resp. $\mathbb Q$), the index $i_G(X)$ of $X$ is finite.}
The following result follows easily from definitions (see ~\cite{vol}).
{\Prop Let $X$ and $Y$ be two $G$-spaces.
\begin{enumerate}
\item[(i)] If $\tilde{H}^i(X)=0$ for $i< N$, then $i_G(X)\geq N+1$.
\item[(ii)] If $H^i(Y)=0$ for $i>N-1$ and $i_G(Y)<\infty$, then $i_G(Y)\leq N$.
\end{enumerate}}
We already know that if there is a $G$-map $f:X\to Y$, then $i_G(X)\leq i_G(Y)$. Hence we have the following:
{\Cor Let $G$ be a $p$-torus acting on spaces $X,\;Y$ without fixed points. Suppose $\tilde{H}^i(X,\mathbb Z_p)=0$ for $i<N$ and $Y$ is a finitistic space such that $H^i(Y,\mathbb Z_p)=0$ for $i>N-1$. Then there is no $G$-map from $X$ to $Y$.}
{\Cor Let $G$ be a torus acting on spaces $X$ and $Y$ without fixed points. Assume that $H^i(X,\mathbb Q)=0$ for $i< N$ and $Y$ is finitistic having only finitely many orbit types such that $H^i(Y,\mathbb Q)=0$ for $i>N-1$. Then there is no $G$-map from $X$ to $Y$.}
{\Remark The condition that the space $X$ is finitistic in the above theorem is needed. For example, take any group $G$ and let $E_G\to B_G$ be the principal $G$-bundle with $E_G$ as the bundle space. Then $G$ acts on $E_G$ without fixed points (freely) and so the number of orbit types is finite. Since the space $(E_G)_G$ is paracompact and the fibres of $p:(E_G)_G\to B_G$ are acyclic, the Vietoris-Begle theorem implies that $p^*:H^*(B_G)\to H^{*}_{G}(E_G)$ is an isomorphism, i.e., $i_G(E_G)= \infty$. The space $E_G$ is clearly nonfinitistic.}
{\Remark The above theorem is true not only for the groups $\mathbb Z_p^r$ and $T^r$, but also for any compact connected Lie group $G$, provided the coefficients are taken in a field $k$ of characteristic zero, e.g., $k=\mathbb Q.$ In this case
$$ {\rm ker}\, p^* = \bigcap_{x\in X} {\rm ker}\, j_x^*p^*=\bigcap_{i=1}^{m}{\rm ker}\, j_{x_i}^*p^*, $$
where the second equality holds because the number of orbit types is finite. For the first equality, note that $\alpha \in {\rm ker}\, j_x^*p^*$ for each $x\in X$ means $p^*(\alpha)\in {\rm ker}\, j_x^*$ for each $x\in X,$ and hence there is an integer $n$ such that $(p^*(\alpha))^n =0,$ i.e., $\alpha^n\in {\rm ker}\, p^*.$ But ${\rm ker}\, p^*$ is a prime ideal, and so $\alpha \in {\rm ker}\, p^*.$
Now if ${\rm ker}\, p^*=0,$ then $\bigcap_{i=1}^m {\rm ker}\, j_{x_i}^*p^*=0$. Hence there is an index $k$ such that ${\rm ker}\, j_{x_k}^*p^* =0.$ This means the map $H^*(B_G)\to H^*(B_{G_{x_k}})$ is injective, i.e., $G_{x_k}=G,$ which means $x_k$ is a fixed point, a contradiction. Hence $i_G(X)<\infty$. }
\section{A generalization of the topological central point theorem}
The classical central point theorem says that ``{\it given any point set $X$ having $m=(d+1)(r-1)+1$ points in $\mathbb R^d$, there exists a point $x$ in $\mathbb R^d$ which is contained in the convex hull of every subset $F$ of $X$ having at least $d(r-1)+1$ points.}'' The point $x$ is called a central point of $X$. This theorem has a nontrivial generalization known as the Tverberg Theorem which says that ``{\it given any finite set $X$ in $\mathbb R^d$ having $m=(d+1)(r-1)+1$ points, we can partition the set $X$ into $r$ subsets $X_1, X_2,\cdots , X_r$ such that} $$\bigcap^{r}_{i=1}\hbox{conv}(X_i)\neq \phi.\hbox{''}$$
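To fix ideas, consider the smallest nontrivial instance $d=2$, $r=3$ (we record it only to illustrate the numbers involved): here
$$m=(d+1)(r-1)+1=7,\qquad d(r-1)+1=5,$$
so any $7$ points in the plane admit a central point lying in the convex hull of every $5$ of them, and, by the Tverberg Theorem, they can be partitioned into $3$ subsets whose convex hulls have a common point.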
Both of these theorems have been generalized in various directions (see \cite{Br}, \cite{Sar}, etc.). The following is a recent generalization of the central point theorem~\cite{kar}.
{\Thm {\bf (Topological Central Point Theorem)} \\ Let $m=(d+1)(r-1)$ and $\Delta^m$ be the $m$-simplex. Suppose $W$ is any $d$-dimensional metric space and $f:\Delta^m\to W$ is any continuous map. Then
$$\bigcap_{\dim F=d(r-1)}f(F)\neq \phi$$
where $F$ runs over all faces of $\Delta^m$ of dimension $d(r-1)$.}
The following topological Tverberg theorem is also proved in the same paper ~\cite{kar} for the case when $r$ is a prime power. For general $r$ the theorem is still open. In contrast to this, the topological central point theorem is true for any $r$.
{\Thm {\bf (Topological Tverberg Theorem)}\\ Let $m=(d+1)r-1$ where $r$ is a prime power and let $\Delta^m$ be an $m$-dimensional simplex. Suppose $f:\Delta^m\to W$ is any continuous map to a $d$-dimensional metric space $W$. Then there exist $r$ disjoint faces $F_1,\cdots , F_r$ of $\Delta^m$ such that $$\bigcap^{r}_{i=1}f(F_i)\neq \phi .$$}
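As a sample instance of the above statement, with the numbers exactly as stated there, take $d=2$ and $r=2$: then $m=(d+1)r-1=5$, and every continuous map $f:\Delta^5\to W$ into a $2$-dimensional metric space $W$ admits two disjoint faces $F_1, F_2$ of $\Delta^5$ with $f(F_1)\cap f(F_2)\neq\phi$.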
In what follows we will generalize both of the above theorems for continuous maps $f:\Delta^m\to W$ where $W$ is any Hausdorff space of dimension $d$, not necessarily a metric space.
\section{Generalization to $d$-dimensional Hausdorff spaces}
Suppose $\dim X$ denotes the covering dimension of a space $X$~\cite{Pea}. We know that if $A$ is a closed subspace of $X$, then $\dim A\leq \dim X$. If $P$ is any $m$-dimensional polyhedron then $\dim P=m$. We also point out that both of the above theorems are proved in \cite{kar} with full detail when the space $W$ is a polyhedron of dimension $d$, and the proof uses the $G$-index of a $G$-space $X$ as discussed earlier in this paper for $G=\mathbb Z_2$ or $G=(\mathbb Z_p)^{r}$.
Let us first mention that the topological central point theorem is not valid when the space $W$ is an arbitrary $d$-dimensional space. As an easy example, let us take the wedge (one-point union with $p$ as the common point) $W=\Delta^d\vee\Delta^m$ where $\Delta^d$ is the usual $d$-simplex and $\Delta^m$ is the usual $m$-simplex with the indiscrete topology on $\Delta^m-\{p\}$. Note that $\Delta^m$ with this topology is $0$-dimensional but not Hausdorff. Then the central point theorem cannot be true for the continuous map $f:\Delta^m\to W$ where $f:\Delta^m\to\Delta^d\vee\Delta^m$ is the obvious inclusion map into the second part, because the intersection of all $d(r-1)$-dimensional faces of $\Delta^m$ is clearly empty.
{We have now}
{\Thm Let $f:\Delta^m\to W$ be a continuous map where $m=(d+1)(r-1)$ and $W$ is a $d$-dimensional Hausdorff space. If the topological central point theorem is true for $d$-dimensional polyhedra, then it is true for any Hausdorff space $W$ of dimension $d$.}
\begin{proof}
Since the map $f:\Delta^m\to W$ is continuous and $W$ is Hausdorff, $f(\Delta^m)$ is a compact closed subspace of $W$. Hence $\dim f(\Delta^m)\leq d$. Moreover, since $f:\Delta^m\to f(\Delta^m)$ is a perfect map, viz., the inverse image $f^{-1}(p)$ is compact for each $p\in f(\Delta^m)$, the space $f(\Delta^m)$ is also metrizable (see~\cite{Pea} p. 96, Cor 5.8). Thus the space $M=f(\Delta^m)$ is a compact metric space of dimension at most $d$. Let us write the map $f$ as the composite
$$\Delta^m\stackrel{f'}\to M\stackrel{i}\to W$$
Since $M$ is a compact metric space of dimension at most $d$, given any $\epsilon > 0$, we can find a polyhedron $P_{\epsilon}$ of dimension at most $d$ and a map $g_{\epsilon}: M\to P_{\epsilon}$ (see~\cite{Eng}, p 322, 6.P.51) such that $\hbox{diam}( g^{-1}_{\epsilon}(p))\leq \epsilon$ for each $p\in P_{\epsilon}$.
\begin{picture}(300,135)(20,-40)
\put(70,30){\makebox(0,0)[t]{$\Delta^m$}}
\put(105,40){\makebox(0,0)[t]{$f'$}}
\put(145,30){\makebox(0,0)[t]{$M$}}
\put(85,25){\vector(1,0){50}}
\put(160,30){\vector(3,1){50}} \put(160,20){\vector(3,-1){50}}
\put(185,40){\makebox(0,0)[b]{$i$}}
\put(185,8){\makebox(0,0)[t]{$g_{\epsilon}$}}
\put(220,50){\makebox(0,0)[t]{$W$}}
\put(220,4){\makebox(0,0)[t]{$P_{\epsilon}$}}
\end{picture}
Now we know that the theorem is true for the map $g_{\epsilon}\circ f':\Delta^m\to P_{\epsilon}$ for any given $\epsilon$. This means $\exists\; p\in P_{\epsilon}$ such that
$$ p\in\bigcap g_{\epsilon}\circ f'(F),$$
where $F$ runs over all faces of $\Delta^m$ of dimension $d(r-1)$. This implies $\cap f'(F)\neq\phi$. To see this suppose $F_1, F_2,\cdots ,F_k$ are all the $d(r-1)$-dimensional faces of $\Delta^m$. Suppose $f'(F_1),\cdots ,f'(F_n),\;n<k,$ intersect and there exists an image set $f'(F_{n+1})$ which is disjoint from $f'(F_1)\cap\cdots\cap f'(F_n)$. Let $$\delta=d(f'(F_{n+1}),\; \bigcap^{n}_{i=1}f'(F_i))>0.$$
Choose $\epsilon<\frac{\delta}{2}$, a polyhedron $P_{\epsilon}$ and a continuous map $g_{\epsilon}: M\to P_{\epsilon}$ such that $\hbox{diam}(g^{-1}_{\epsilon}(p)) <\epsilon\; \forall\;p\in P_{\epsilon}$. Since $\cap_{i\leq k}g_{\epsilon}\circ f'(F_i)\neq \phi$, we find that $\cap_{i\leq n+1}g_{\epsilon}\circ f'(F_i)\neq \phi$, and so there is a point $x_1\in \cap^{n}_{i=1}f'(F_i)$ and a point $x_2\in f'(F_{n+1})$ such that $g_{\epsilon}(x_1)=g_{\epsilon}(x_2)=p\in P_{\epsilon}$. But this is a contradiction to the fact that $\hbox{diam}(g^{-1}_{\epsilon}(p)) <\epsilon$. Hence $\cap^{k}_{i=1}f'(F_i)\neq \phi$, and this means $\cap f(F)\neq\phi$.
\end{proof}
By a similar argument one can prove the Topological Tverberg Theorem also. Since the topological central point theorem and the topological Tverberg theorem both are already known for polyhedra ~\cite{kar}, we have the following:
{\Cor {\bf (The Topological Central Point Theorem for Hausdorff spaces)}:
Let $m=(d+1)(r-1)$ and $\Delta^m$ be an $m$-simplex. Suppose $W$ is a $d$-dimensional Hausdorff space and $f:\Delta^m\to W$ is a continuous map. If $F$ runs over all the faces of $\Delta^m$ of dimension $d(r-1)$, then
$$\bigcap_{\dim F=d(r-1)} f(F)\neq\phi.$$}
{\Cor {\bf (The Topological Tverberg Theorem for Hausdorff Spaces)}:
Let $m=(d+1)r-1$ where $r$ is a prime power and $\Delta^m$ be an $m$-simplex. Let $f:\Delta^m\to W$ be a continuous map into a $d$-dimensional Hausdorff space $W$. Then there exist $r$ disjoint faces $F_1,\cdots , F_r$ of $\Delta^m$ such that
$$\bigcap^{r}_{i=1}f(F_i)\neq\phi.$$}
\noindent {\bf Acknowledgement : } The author is thankful to R. N. Karasev for a few e-mail discussions on his paper~\cite{kar} and also to the referee for some useful remarks.
\end{document}
\begin{document}
\maketitle
\tableofcontents
\addcontentsline{toc}{section}{Glossary}
\centerline{{\large{G}}{\small{LOSSARY}}}
\noindent{\bf Nonsingular dynamical system:} Let $(X,\mathcal B, \mu)$ be a
standard Borel
space equipped with a $\sigma$-finite measure. A Borel map $T:X\to X$ is a {\it nonsingular
transformation}\index{nonsingular transformation} of $X$ if
for any $N\in \mathcal B$, $\mu(T^{-1}N)=0 \text { if
and only if } \mu(N)=0.$
In this case the measure $\mu$ is called {\it
quasi-invariant\,}\index{quasi-invariant} for $T$; and the quadruple $(X,\mathcal B,\mu,T)$ is called a {\it nonsingular dynamical system}. If $\mu(A)=\mu(T^{-1}A)$ for all $A\in\mathcal B$ then $\mu$ is said to be
{\it invariant } under $T$ or, equivalently, $T$ is
{\it measure-preserving}.
\noindent{\bf Conservativity:}
$T$ is {\it
conservative}\index{conservative} if for all sets $A$ of positive
measure there exists an
integer $n>0$ such that $\mu(A\cap T^{-n}A)>0$.
\noindent{\bf Ergodicity:} $T$ is {\it ergodic} if every measurable subset $A$ of
$X$ that is invariant under $T$ (i.e., $T^{-1}A= A$) is either $\mu$-null or $\mu$-conull.
Equivalently, every Borel function $f:X\to \mathbb R$ such that $f\circ T=f$ is constant a.e.
\noindent{\bf Types II, II$_1$, II$_\infty$ and III:} Suppose that $\mu$ is non-atomic and $T$ is invertible and ergodic (and hence conservative). If there exists a
$\sigma$-finite measure $\nu$ on $\mathcal B$ which is equivalent to $\mu$
and invariant under $T$ then $T$ is said {\it to be of type II}. It
is easy to see that $\nu$ is unique up to scaling. If $\nu$ is finite
then $T$ is {\it of type $II_1$}. If $\nu$ is infinite then $T$ is of
type $II_\infty$.
If $T$ is not of type $II$ then $T$ is said {\it to
be of type $III$}.
\section{Definition of the subject}
\label{S:Introduction}
An abstract measurable dynamical system consists of a set $X$ (phase space) with a transformation $T:X\to X$ (evolution law or time)
and a finite or $\sigma$-finite measure $\mu$ on $X$ that specifies a class of negligible subsets. Nonsingular ergodic theory studies systems where $T$ respects $\mu$ in a weak sense: the transformation preserves only the class of negligible subsets but it may not preserve $\mu$.
This survey is about dynamics and invariants of nonsingular systems.
Such systems model `non-equilibrium' situations in which events that are impossible at some time remain impossible at any other time.
Of course, the first question that arises is whether it is possible to find an equivalent invariant measure, i.e., to pass to a hidden equilibrium without changing the negligible subsets. It turns out that there exist systems which do not admit an equivalent invariant finite or even $\sigma$-finite measure. They are of our primary interest here.
In a way (in the sense of Baire category) most systems are like that.
Nonsingular dynamical systems arise naturally in various fields of mathematics: topological and smooth dynamics, probability theory, random walks, theory of numbers, von Neumann algebras, unitary representations of groups, mathematical physics and so on. They also can appear in the study of probability preserving systems: some criteria of mild mixing and distality, a problem of Furstenberg on disjointness, etc. We briefly discuss this in \S\,\ref{applic}. Nonsingular ergodic theory studies all of them from a general point of view:
\begin{itemize}
\item[---] What is the qualitative nature of the dynamics?
\item[---] What are the orbits?
\item[---] Which properties are typical within a class of systems?
\item[---] How do we find computable invariants to compare or distinguish
various systems?
\end{itemize}
Typically there are two kinds of results: some are extensions to
nonsingular systems of theorems for finite measure-preserving
transformations (for instance, \S\,2, \S\,4, \S\,12) and the other are about
new properly `nonsingular' phenomena (see \S\,5--\S\,9).
Philosophically
speaking, the dynamics of nonsingular systems is more diverse compared
with that of their finite measure-preserving counterparts.
That is why it is
usually easier to construct counterexamples than to develop a general
theory. While infinite measure preserving transformations are not the main subject of this survey, we cover them partially as they are also nonsingular systems and arise often as natural examples or counterexamples in the nonsingular setting.
Because of shortage of space we concentrate mainly on
invertible transformations, and we have not included as many references as we had wished. General group or semigroup actions are practically not considered here (with some exceptions in \S\,\ref{applic} devoted to applications). A number of open problems are scattered through the entire text.
We thank J. Aaronson, J. R. Choksi, V. Ya. Golodets, M. Lema\'nczyk, F. Parreau, E. Roy for useful remarks to the first edition of this survey.
Many new results related to nonsingular dynamical systems have appeared since the
release of the first edition.
The second edition is enlarged essentially to cover (partially) this
progress.
In particular, we added new \S7 and \S9 and totally rewrote \S8.
More than 100 new references have been added.
\section{Introduction and Basic Results}
This section includes the basic results involving conservativity and ergodicity as well as some direct nonsingular counterparts of the basic machinery from classic ergodic theory: mean and pointwise ergodic theorems, Rokhlin lemma, ergodic decomposition, generators, Glimm-Effros theorem and special representation of nonsingular flows. The historically first example of a transformation of type $III$
(due to Ornstein) is also given here with full proof.
\subsection{Nonsingular transformations}
In this paper we will consider mainly {\it invertible} nonsingular
transformations, i.e., those which are bijections when restricted to
an invariant Borel subset of full measure. Thus when we refer to a nonsingular dynamical system
$(X,\mathcal B,\mu, T)$ we shall assume that $T$ is an invertible nonsingular transformation (unless the contrary is specified explicitly). Of course, each measure
$\nu$ on $\mathcal B$ which is {\it equivalent} to $\mu$, i.e., $\mu$ and
$\nu$ have the same null sets, is also quasi-invariant under $T$. In
particular, since $\mu$ is $\sigma$-finite, $T$ admits an equivalent
quasi-invariant probability measure.
For each $i\in\mathbb Z$, we denote by $\omega_i^\mu$ or $\omega_i$
the Radon-Nikodym derivative ${d(\mu\circ T^i)}/{d\mu}\in L^1(X,\mu)$.
The derivatives satisfy the cocycle equation
$\omega_{i+j}(x)=\omega_i(x) \omega_j(T^ix)$ for a.e. $x$ and all
$i,j\in\mathbb Z$.
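This identity is just the chain rule for Radon-Nikodym derivatives; we record the routine verification for the reader's convenience: for a.e. $x$,
$$
\omega_{i+j}(x)=\frac{d(\mu\circ T^{i+j})}{d\mu}(x)
=\frac{d(\mu\circ T^{i+j})}{d(\mu\circ T^{i})}(x)\cdot\frac{d(\mu\circ T^{i})}{d\mu}(x)
=\omega_j(T^ix)\,\omega_i(x),
$$
since $(\mu\circ T^{i+j})(A)=(\mu\circ T^{j})(T^iA)$ for every $A\in\mathcal B$.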
\subsection {Basic properties of conservativity and ergodicity}
\label{S:Recurrence}
A measurable set $W$ is said to be
{\it wandering}\index{ wandering} if for all $i, j \geq 0$ with $i\neq j$,
$T^{-i}W\cap
T^{-j}W=\emptyset$.
Clearly, if $T$ has a wandering set of positive measure then
it cannot be conservative. A nonsingular transformation $T$ is {\it
incompressible}\index{incompressible} if whenever $T^{-1}C\subset C$,
then $\mu(C\setminus
T^{-1}C)=0$.
\begin{prop} {\rm(see e.g. \cite{K85})}\label{recurrentequivalences}
Let $(X,\mathcal B,\mu,T)$ be a
nonsingular dynamical system. The following are equivalent:
\begin{enumerate}
\item[(i)] $T$ is conservative.
\item[(ii)] For every measurable set $A$,
$
\mu(A\setminus
\bigcup_{n=1}^{\infty}T^{-n}A)=0.
$
\item[(iii)] $T$ is incompressible.
\item[(iv)] Every wandering set for $T$ is null.
\item[(v)] $\sum_{i=0}^{+\infty}\omega_i(x)=\infty$ at a.e. $x$ (provided that $\mu(X)<\infty$).
\end{enumerate}
\end{prop}
Since any finite measure-preserving transformation is
incompressible, we deduce that it is conservative. This is the
statement of the classical Poincar\'e recurrence lemma.
If $T$ is a conservative nonsingular transformation of $(X,\mathcal B,\mu)$
and $A\in\mathcal B$ a subset of positive measure, we can define an {\it
induced transformation} $T_A$ of the space $(A,\mathcal B\cap
A,\mu\restriction A)$ by setting $T_Ax:=T^nx$ if $n=n(x)$ is the
smallest natural number such that $T^nx\in A$. $T_A$ is also
conservative. As shown in \cite[5.2]{ST1}, if $\mu(X)=1$ and $T$ is conservative and ergodic,
$\int_A \sum_{i=0}^{n(x)-1} \omega_i(x)\ d\mu(x)=1$, which is a nonsingular version of the
well-known Kac formula.
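In the special case when $T$ preserves $\mu$, all $\omega_i\equiv 1$ and this identity reduces to the classical Kac formula $\int_A n(x)\,d\mu(x)=1$, i.e., the expected return time to $A$ with respect to the normalized measure $\mu\restriction A/\mu(A)$ equals $1/\mu(A)$.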
\begin{theorem}[Hopf Decomposition, see e.g. \cite{Aa}] \label{T:hopf}
Let $T$ be a nonsingular transformation.
Then there exist disjoint invariant sets $C,D\in\mathcal B$ such that
$X=C\sqcup D$, $T$ restricted to $C$ is conservative, and $D=
\bigsqcup_{n=-\infty}^\infty T^nW$, where $W$ is a wandering set.
If $f\in L^1(X,\mu)$, $f>0$, then
$
C=\{x: \sum_{i=0}^{+\infty} f(T^ix) \omega_i(x) =\infty \text{ a.e.}\}$
and $D=\{x: \sum_{i=0}^{+\infty} f(T^ix) \omega_i(x) <\infty \text{ a.e.}\}.
$
\end{theorem}
The set $C$ is called the {\it conservative part} of $T$ and $D$ is
called the {\it dissipative part} of $T$.
If $D$ is of positive measure we call $T$ {\it dissipative}.
If $D$ is of full measure we call $T$ {\it totally dissipative}.
If $T$ is ergodic and $\mu$ is non-atomic then
$T$ is automatically conservative. The translation by 1 on the group
$\mathbb Z$ furnished with the counting measure is an example of an
ergodic non-conservative (infinite measure-preserving) transformation.
\begin{prop}\label{fmpergodic} Let $(X,\mathcal B,\mu,T)$ be a nonsingular
dynamical
system.
The following are equivalent:
\begin{enumerate}
\item[(i)] $T$ is conservative and ergodic.
\item[(ii)] For every set $A$ of positive measure, $\mu(X\setminus
\bigcup_{n=1}^{\infty}T^{-n}A)=0$. (In this case we will say $A$
sweeps out.)
\item[(iii)] For every measurable set $A$ of positive measure and for a.e.
$x\in X$ there exists an integer $n>0$ such that
$
T^{n}x\in A.
$
\item[(iv)] For all sets $A$ and $B$ of positive measure there exists an
integer $n>0$ such that $\mu(T^{-n}A\cap B)>0$.
\item[(v)] If $A$ is such that $T^{-1}A\subset A$, then $\mu(A)=0$ or $\mu(A^c)=0$.
\end{enumerate}
\end{prop}
A set $W$ of positive measure is said to be {\it weakly wandering} if there is a sequence
$n_i\to\infty$ such that $T^{n_i}W\cap T^{n_j}W=\emptyset$ for all $i\neq j$.
Clearly, a finite measure-preserving transformation cannot have a weakly wandering set.
Hajian and Kakutani \cite{HK} showed that a nonsingular transformation $T$
is of type $II_1$ if and only if $T$ does not have a weakly wandering set.
This survey is mainly about systems of type $III$.
For some time it was not quite obvious whether such systems exist at
all. The historically first example was constructed by Ornstein in
1960.
\begin{ex} (Ornstein \cite{O60})\label{OrnEx}
Let $A_n=\{0,1,\dots,n\}$, $\nu_n(0)=0.5$ and $\nu_n(i)=1/(2n)$ for
$0<i\le n$ and all $n\in\mathbb N$. Denote by $(X,\mu)$ the infinite
product probability space $\bigotimes_{n=1}^\infty(A_n,\nu_n)$. Of
course, $\mu$ is non-atomic. A point of $X$ is an infinite sequence
$x=(x_n)_{n=1}^\infty$ with $x_n\in A_n$ for all $n$. Given $a_1\in
A_1,\dots,a_n\in A_n$, we denote the cylinder
$\{x=(x_i)_{i=1}^\infty\in X : x_1=a_1,\dots, x_n=a_n\}$ by $[a_1,\dots,a_n]$.
Define a Borel map $T:X\to X$ by setting
\begin{eqnarray}
\label{odometer}
(Tx)_i=
\begin{cases}
0, & \text{if }i<l(x)\\
x_i+1, &\text{if }i=l(x)\\
x_i, &\text{if }i>l(x),
\end{cases}
\end{eqnarray}
where $l(x)$ is the smallest number $l$ such that $x_l\ne l$. It is
easy to verify that $T$ is a nonsingular transformation of $(X,\mu)$
and
$$
\omega^\mu_1(x)=\prod_{n=1}^\infty\frac{\nu_n((Tx)_n)}{\nu_n(x_n)}=
\begin{cases}
(l(x)-1)!/l(x), &\text{if } x_{l(x)}=0\\
(l(x)-1)!, &\text{if } x_{l(x)}\ne 0.
\end{cases}
$$
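This formula follows by a direct computation: only the coordinates $n\le l(x)$ of $x$ are changed by $T$; for $n<l(x)$ the $n$-th coordinate jumps from its maximal value $n$ to $0$, contributing the factor $\nu_n(0)/\nu_n(n)=n$, while the coordinate $l(x)$ increases by $1$. Hence
$$
\omega^\mu_1(x)=\frac{\nu_{l(x)}(x_{l(x)}+1)}{\nu_{l(x)}(x_{l(x)})}\prod_{n=1}^{l(x)-1}\frac{\nu_n(0)}{\nu_n(n)}
=\frac{\nu_{l(x)}(x_{l(x)}+1)}{\nu_{l(x)}(x_{l(x)})}\,(l(x)-1)!,
$$
and the first fraction equals $1/l(x)$ if $x_{l(x)}=0$ and $1$ otherwise.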
We prove that $T$ is of type III by contradiction. Suppose that
there exists a $T$-invariant $\sigma$-finite measure $\nu$ equivalent
to $\mu$.
Let $\varphi:=d\mu/d\nu$. Then
\begin{eqnarray} \label{formula}
\omega^\mu_i(x)=\varphi(x)\varphi(T^ix)^{-1}\text{ \ for a.a. $x\in X$ and
all $i\in\mathbb Z$}.
\end{eqnarray}
Fix a real $C>1$ such that the set $E_C:=\varphi^{-1}([C^{-1},C])\subset
X$ is of positive measure. By a standard approximation argument, for
each sufficiently large $n$, there is a cylinder $[a_1,\dots,a_n]$
such that $\mu(E_C\cap[a_1,\dots,a_n])>0.9\mu([a_1,\dots,a_n])$. Since
$\nu_{n+1}(0)=0.5$, it follows that
$\mu(E_C\cap[a_1,\dots,a_n,0])>0.8\mu([a_1,\dots,a_n,0])$. Moreover,
by the pigeon hole principle there is $0<i\le n+1$ with
$\mu(E_C\cap[a_1,\dots,a_n,i])>0.8\mu([a_1,\dots,a_n,i])$. Find
$N_n>0$ such that $T^{N_n}[a_1,\dots,a_n,0]=[a_1,\dots,a_n,i]$. Since
$\omega^{\mu}_{N_n}$ is constant on $[a_1,\dots,a_n,0]$, there is a
subset $E_0\subset E_C\cap[a_1,\dots,a_n,0]$ of positive measure such
that $T^{N_n}E_0\subset E_C\cap
[a_1,\dots,a_n,i]$. Moreover, $\omega^\mu_{N_n}(x)=\nu_{n+1}(i)/\nu_{n+1}(0)=(n+1)^{-1}$ for
a.a. $x\in [a_1,\dots,a_n,0]$. On the other hand, we deduce
from~(\ref{formula})
that $\omega^\mu_{N_n}(x)\ge C^{-2}$ for all $x\in E_0$, a contradiction.
\end{ex}
\subsection {Mean and pointwise ergodic theorems. Rokhlin lemma}
\label{S:ErgodicTheorem}
Let $(X,\mathcal B,\mu, T)$ be a nonsingular dynamical system.
Define a unitary operator $U_T$ of $L^2(X,\mu)$ by setting
\begin{align}\label{koopman oper}
U_Tf:=\sqrt{\omega_1}\cdot f\circ T.
\end{align}
We note that $U_T$ preserves the cone of positive functions $L^2_+(X,\mu)$. Conversely,
every positive unitary operator in $L^2(X,\mu)$ that preserves $L^2_+(X,\mu)$ equals $U_T$ for a $\mu$-nonsingular transformation $T$.
We call $U_T$ the {\it Koopman operator} generated by $T$.
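That $U_T$ is indeed an isometry is a one-line computation with the Radon-Nikodym derivative: for $f\in L^2(X,\mu)$,
$$
\|U_Tf\|^2=\int_X \omega_1(x)\,|f(Tx)|^2\,d\mu(x)=\int_X |f(Tx)|^2\,d(\mu\circ T)(x)=\int_X|f|^2\,d\mu=\|f\|^2,
$$
and $U_T$ is invertible with $U_T^{-1}=U_{T^{-1}}$, since $T$ is invertible.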
\begin{theorem}
[von Neumann mean Ergodic Theorem, see e.g. \cite{Aa}] $T$ has no $\mu$-absolutely continuous $T$-invariant probability if and only if $n^{-1}\sum_{i=0}^{n-1}U_T^i\to 0$ in the strong operator topology.
\end{theorem}
\begin{proof}
Let $P$ denote the orthogonal projector in $L^2(X,\mu)$ onto the subspace of $U_T$-invariant vectors.
By the well-known fact from the theory of Hilbert spaces,
$n^{-1}\sum_{i=0}^{n-1}U_T^i\to P$ in the strong operator topology.
Then $P\ne 0$ if and only if there is $f\in L^2(X,\mu)$ such that $f\ne 0$ and $U_Tf=f$.
Of course, $U_T|f|=|f|$.
We now define a non-trivial finite measure $\lambda\prec\mu$ by setting $\frac{d\lambda}{d\mu}:=|f|^2$.
It is straightforward to verify that $\lambda$ is invariant under $T$.
\end{proof}
Denote by ${\mathcal I}$ the sub-$\sigma$-algebra of $T$-invariant sets. Let ${\mathbb E}_\mu[.|{\mathcal I}]$ stand for the conditional expectation with respect to ${\mathcal I}$.
Note that if $T$ is ergodic, then ${\mathbb E}_\mu[f|{\mathcal I}]=\int f\,d\mu$.
Now we state a nonsingular analogue of Birkhoff's pointwise ergodic theorem, due to Hurewicz \cite{Hur} and in
the form stated by Halmos \cite{Hal46}.
\begin{theorem}
[Hurewicz pointwise Ergodic Theorem]\label{HpET} If $T$ is conservative,
$\mu(X)=1$,
$f , g\in\ L^1(X,\mu)$ and $g>0$, then
$$
\frac{\sum_{i=0}^{n-1} f(T^ix) \omega_i(x) } {\sum_{i=0}^{n-1} g(T^ix) \omega_i(x)} \to \frac{{\mathbb E}_\mu[f|{\mathcal I}](x)}
{{\mathbb E}_\mu[g|{\mathcal I}](x)} \ \text{ as }n\to\infty\text{ for a.e. }x.
$$
\end{theorem}
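When $T$ preserves $\mu$ and we take $g\equiv 1$, all the derivatives $\omega_i$ are identically $1$ and the theorem reduces to the classical Birkhoff pointwise ergodic theorem:
$$
\frac1n\sum_{i=0}^{n-1}f(T^ix)\to {\mathbb E}_\mu[f|{\mathcal I}](x)\ \text{ as }n\to\infty\text{ for a.e. }x.
$$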
There is a nonsingular version of the subadditive ergodic theorem of Kingman \cite{ST1}.
Let $T$ be a nonsingular transformation and let $(\omega_n)$ be its sequence of Radon-Nikodym derivatives.
A sequence of functions $(f_n)$ is said to be {\it{subadditive}} if $f_{n+m}\leq f_m+(f_n\circ T^m)\,\omega_m$ for all $n,m\ge 0$.
Since $(f_n)$ is subadditive, one can verify that the following limit
\[
{\mathbb S}_\mu[(f_n)](x):=\lim_{n\to\infty}\frac{1}{n} {\mathbb E}_\mu [f_n|{\mathcal I}](x)
\]
exists almost everywhere.
\begin{theorem}[Nonsingular subadditive Ergodic Theorem]\label{sat}
If $T$ is conservative,
$\mu(X)=1$, $(f_n)$ is a subadditive sequence of integrable functions,
$ g\in\ L^1(X,\mu)$ and $g>0$, then
$$
\frac{f_n(x) } {\sum_{i=0}^{n-1} g(T^ix) \omega_i(x)} \to \frac{{\mathbb S}_\mu[(f_n)](x)}
{{\mathbb E}_\mu[g|{\mathcal I}](x)} \ \text{ as }n\to\infty\text{ for a.e. }x.
$$
\end{theorem}
Of course, Theorem~\ref{HpET} follows from Theorem~\ref{sat} if we set $f_n(x):=
\sum_{i=0}^{n-1} f(T^ix) \omega_i(x)$ for all $n>0 $ and $x\in X$.
A transformation $T$ is {\it aperiodic } if the $T$-orbit of a.e. point from $X$ is infinite. The following classical statement can be deduced easily from Proposition \ref{recurrentequivalences}.
\begin{lemma}[Rokhlin's lemma \cite{F70}]\label{Rol} Let $T$ be an aperiodic nonsingular transformation of a standard probability space $(X,\mu)$.
For each $\varepsilon>0$ and
integer $N>1$ there exists a measurable set $A$ such that the sets $A, TA,\dots,T^{N-1}A$
are disjoint and $\mu(A\cup TA\cup\cdots\cup T^{N-1}A)>1-\varepsilon$.
\end{lemma}
This lemma was refined later (for ergodic transformations) by Lehrer and Weiss
as follows.
\begin{theorem}[$\varepsilon$-free Rokhlin lemma \cite{LeW}] \label{epsilon free} Let $T$ be ergodic and $\mu$ non-atomic. Then for a subset $B\subset X$ and any $N$ for which $\bigcup_{k=0}^\infty T^{-kN}(X\setminus B)=X$, there is a set $A$ such that
the sets $A, TA,\dots,T^{N-1}A$
are disjoint and $A\cup TA\cup\cdots\cup T^{N-1}A\supset B$.
\end{theorem}
The condition $\bigcup_{k=0}^\infty T^{-kN}(X\setminus B)=X$ holds of course for each $B\ne X$ if $T$ is {\it totally ergodic}, i.e., $T^p$ is ergodic for any $p$, or if $N$ is prime.
We now state a nonsingular version of Alpern's lemma which is a generalization of
Lemma~\ref{Rol}.
\begin{theorem}[Alpern's lemma \cite{AP}] Let $T$ be an aperiodic nonsingular transformation of a standard probability space $(X,\mu)$.
Let $\pi=(\pi_1,\pi_2,\dots)$ be a probability vector
such that
$\{k\mid \pi_k>0\}$ is a relatively prime set of integers.
Then there is a measurable partition $P=\{P_{k,i}\mid k>0, i=1,\dots,k\}$ of $X$ satisfying
\begin{enumerate}
\item[(a)] $TP_{k,i}=P_{k,i+1}$ for each $k$ and every $i<k$ and
\item[(b)] $\sum_{i=1}^k\mu(P_{k,i})=\pi_k$
for each $k$.
\end{enumerate}
\end{theorem}
\subsection{Ergodic decomposition}
A proof of the following theorem may be found in \cite[2.2.8]{Aa} and \cite[\S 6]{Sc}.
\begin{theorem}[Ergodic Decomposition Theorem]\label{T:ergodicdecomp}
Let $T$ be a conservative nonsingular transformation on a standard probability space
$(X,\mathcal{B},\mu)$. Then there exists a standard probability space $(Y,\nu,\mathcal A)$
and a family of probability measures $\mu_y$ on $(X,\mathcal{B})$, for $y\in Y$, such that
\begin{enumerate}
\item[(i)] For each $A\in\mathcal{B}$ the map $y\mapsto \mu_y(A)$ is Borel and for each $A\in\mathcal{B}$
\[
\mu(A)=\int \mu_y(A) d\nu(y).
\]
\item[(ii)] For $y,y^\prime\in Y$ the measures $\mu_y$ and $\mu_{y^\prime}$ are mutually singular.
\item[(iii)] For each $y\in Y$ the transformation $T$ is nonsingular, conservative and ergodic
on $(X,\mathcal{B},\mu_y)$.
\item[(iv)] For each $y\in Y$,
$\omega^{\mu_y}_1=\omega^\mu_1 \
\mu_y\text{-a.e.}
$
\item[(v)] (Uniqueness) If there exists another probability space $(Y^\prime,\nu^\prime,\mathcal A^\prime)$
and a family of probability measures $\mu^\prime_{y'}$ on $(X,\mathcal{B})$, for $y'\in Y'$, satisfying
\rm{(i)-(iv)}, then there exists a measure-preserving isomorphism $\theta:Y\to Y^\prime$ such that
$\mu_y = \mu^\prime_{\theta y}$ for $\nu$-a.e. $y$.
\end{enumerate}
\end{theorem}
It follows that if $T$ preserves an equivalent $\sigma$-finite measure
then the system $(X,\mathcal{B},\mu_y,T)$ is of type $II$ for a.a. $y$.
The space $(Y,\nu,\mathcal A)$ is called {\it the space of $T$-ergodic components}.
\subsection{Generators}\label{generators}
A $\sigma$-algebra $\mathcal F$ is called a {\it generator} for a nonsingular transformation $T$ on a standard probability space $(X,\mathcal B,\mu)$, if $\bigvee_{n=-\infty}^\infty T^n{\mathcal F}=\mathcal B$.
It was shown in \cite{R65} and \cite{wP66} that $T$ has a {\it countable generator}, i.e., a countable partition $ P$ of $X$ so that the $\sigma$-algebra of $P$-measurable
sets is a generator for $T$.
It was refined by Krengel \cite{Kr70}: if $T$ is of type $II_\infty$ or $III$ then there exists $P$ consisting of two sets only.
Moreover, given a sub-$\sigma$-algebra $\mathcal F\subset\mathcal B$ such that $\mathcal F\subset T\mathcal F$ and $\bigcup_{k>0}T^k\mathcal F=\mathcal B$, the set $\{A\in \mathcal F\mid (A,X\setminus A)\text{ is a generator of }T\}$ is dense in $\mathcal F$. It follows, in particular, that $T$ is isomorphic to the shift on $\{0,1\}^\mathbb Z$ equipped with a quasi-invariant probability measure.
\subsection{The Glimm-Effros Theorem}\label{S:Glimm}
The classical Bogolyubov-Krylov theorem states that each homeomorphism of a compact space admits an ergodic invariant probability measure~\cite{CFS}. The following statement by Glimm \cite{Gli} and Effros \cite{Eff} is a ``nonsingular'' analogue of that theorem. (We consider here only a particular case of $\mathbb Z$-actions.)
\begin{theorem}\label{T:Glimm} Let $X$ be a Polish space and $T:X\to X$ an aperiodic homeomorphism. Then the following are equivalent:
\begin{itemize}
\item[(i)] $T$ has a recurrent point $x$, i.e., $x=\lim_{i\to\infty}T^{n_i}x$ for a sequence $n_1<n_2<\cdots$.
\item[(ii)]
There is an orbit of $T$ which is not locally closed.
\item[(iii)] There is no Borel set which intersects each orbit of $T$ exactly once.
\item[(iv)]
There is a continuous probability Borel measure $\mu$ on $X$ such that $(X,\mu,T)$ is an ergodic nonsingular system.
\end{itemize}
\end{theorem}
A natural question arises: under the conditions of the theorem, how many such $\mu$ can exist? It turns out that there is a wealth of such measures. To state a corresponding result we first give an important definition.
\begin{defn}\label{orbit equivalent}
Two
nonsingular
systems $(X,\mathcal{B},\mu,T)$ and
$(X',\mathcal{B}',\mu',T')$ are called {\it orbit equivalent} if there is a
one-to-one bi-measurable map $\varphi:X\to X'$ with $\mu'\circ\varphi\sim\mu$ and
such that $\varphi$ maps the $T$-orbit of $x$ onto the $T'$-orbit of $\varphi(x)$ for a.a. $x\in X$.
\end{defn}
The following theorem was proved in \cite{KW0}, \cite{Sc4} and \cite{Kr7}.
\begin{theorem}
Let $(X,T)$ be as in Theorem~\ref{T:Glimm}. Then for each ergodic dynamical system $(Y,\mathcal C,\nu, S)$ of type II$_\infty$ or III, there exist uncountably many mutually disjoint Borel measures $\mu$ on $X$ such that
$(X,T,\mathcal{B},\mu)$ is orbit equivalent to $(Y,\mathcal C,\nu,S)$.
\end{theorem}
On the other hand, $T$ may not have any finite invariant measure.
The first such example appeared in \cite{ehw}.
We present a simpler one.
\begin{ex}
Let $T$ be an irrational rotation on the circle $\mathbb T$ and let $K$ be a nowhere dense closed subset of $\mathbb T$ of positive Lebesgue measure.
Let $X$ be the complement of the $T$-orbit $\bigcup_{n\in\mathbb Z}T^nK$ of $K$.
Then $X$ is a $T$-invariant $G_\delta$-subset of zero Lebesgue measure.
Hence $X$ is Polish in the induced topology and $T\restriction X$ is an aperiodic homeomorphism of $X$.
Since $T$ is minimal, $X$ is dense in $\mathbb T$ and the $(T\restriction X)$-orbit of each point of $X$ is dense in $X$.
Hence every point is recurrent.
By Theorem~\ref{T:Glimm}, there exists a continuous ergodic nonsingular probability Borel measure $\lambda$ on $X$.
If it is invariant under $T\restriction X$ then $\lambda$ can be considered also as a finite $T$-invariant measure on $\mathbb T$.
Since $T$ is uniquely ergodic, $\lambda$ is the Lebesgue measure.
However $X$ is of zero Lebesgue measure, a contradiction.
\end{ex}
Let $T$ be an aperiodic Borel transformation of a standard Borel space $X$. Denote by $\mathcal M(T)$ the set of all ergodic $T$-nonsingular continuous measures on $X$.
Given $\mu\in\mathcal M(T)$, let $N(\mu)$ denote the family of all Borel $\mu$-null subsets. Shelah and Weiss showed \cite{SWe} that $\bigcap_{\mu\in\mathcal M(T)}N(\mu)$ coincides with the collection of all Borel $T$-wandering sets.
\subsection{Minimal Radon uniquely ergodic models for infinite measure preserving transformations}
We first note that there is only one, up to a homeomorphism, locally compact non-compact Cantor (i.e., zero-dimensional, perfect, metrizable) set.
Denote it by $C$.
We recall that a Borel measure on $C$ is called {\it Radon} if it is finite on every compact subset of $C$.
The following is an infinite version of the well known Jewett-Krieger theorem.
\begin{theorem}[Yuasa, \cite{Yuasa1}]
Let $T$ be an ergodic measure preserving transformation of the standard infinite $\sigma$-finite
measure space $(X,\mu)$.
Then there exists a minimal homeomorphism $R$ of $C$ that admits a unique, up to scaling, $R$-invariant Radon measure $\nu$ such that $(X,\mu,T)$ is isomorphic to $(C,\nu,R)$.
\end{theorem}
Yuasa proved a relative version of this theorem in \cite{Yuasa2}.
Similar results on {\it strictly ergodic models} for ergodic systems of type $III$ are unknown yet.
\subsection{Special representations of ergodic flows}
Nonsingular flows (=$\mathbb R$-actions) appear naturally in the study of orbit equivalence for systems of type $III$ (see Section~\ref{orbits}).
Here we record some basic notions related to nonsingular flows.
Let $(X,\mathcal{B},\mu)$ be a standard Borel space with a $\sigma$-finite measure $\mu$ on $\mathcal{B}$.
A nonsingular {\it flow} on $(X,\mu)$ is a Borel map $S:X\times\mathbb R\ni(x,t)\mapsto S_tx\in X$ such that $S_tS_s=S_{t+s}$
for all $s,t\in\mathbb R$ and each $S_t$ is a nonsingular transformation of $(X,\mu)$. Conservativity and ergodicity for flows are defined in a similar way as for transformations.
A very useful example of a flow is a flow built under a function. Let $(X,\mathcal{B},\mu,T)$ be a nonsingular dynamical system
and $f$ a positive Borel function on $X$ such
that $\sum_{i=0}^\infty f(T^ix)=\sum_{i=0}^\infty f(T^{-i}x)=\infty$ for all $x\in X$. Set
$X^{f}:=\{(x,s): x\in X, 0\leq s <f(x)\}.$
Define $\mu^{f}$ to be the restriction of the product measure $\mu\times\text{Leb}$ on $X\times\mathbb R$ to $X^{f}$ and
define, for
$t \geq 0$,
\[S^{f}_{t}(x,s) : = \Big(T^n x,\,s+t-\sum_{i=0}^{n-1}f(T^ix)\Big),\]
where $n$ is the unique integer that satisfies
\[
\sum_{i=0}^{n-1}f(T^ix)\leq s+t< \sum_{i=0}^{n}f(T^ix).
\]
A similar definition applies when $t<0$. In particular, when $s+t<f(x)$, $S^f_t(x,s)=(x,s+t)$,
so that the flow moves the point $(x,s)$ up $t$ units, and when it reaches $(x,f(x))$ it is sent
to $(Tx,0)$. It can be shown that $S^f=(S^f_t)_{t\in\mathbb R}$ is a free $\mu^{f}$-nonsingular flow and that it preserves $\mu^{f}$ if and only if $T$ preserves $\mu$ \cite{Nad}.
It is called the {\it flow built under the function $f$ with the base transformation $T$}. Of course, $S^f$ is conservative or ergodic if and only if so is $T$.
Two flows $S=(S_t)_{t\in\mathbb R}$ on $(X,\mathcal{B},\mu)$ and $V=(V_t)_{t\in\mathbb R}$ on $(Y,\mathcal C,\nu)$
are said to be {\it isomorphic} if there exist invariant co-null sets $X^\prime\subset X$ and $Y^\prime\subset Y$ and an invertible nonsingular map $\rho:X^\prime\to Y^\prime$ that intertwines the actions of
the flows: $\rho\circ S_t=V_t\circ\rho$ on $X^\prime$ for all $t$. The following nonsingular version of the Ambrose--Kakutani representation theorem was proved by Krengel \cite{Kre69} and Kubo \cite{Ku}.
\begin{theorem}\label{ambrose} Let $S$ be a free nonsingular flow. Then it is isomorphic to a flow built under a function.
\end{theorem}
Rudolph showed that in the Ambrose-Kakutani theorem one can choose the function $f$ to take only two values. Krengel \cite{Kre76} showed that this can also be assumed in the nonsingular case.
\section{Panorama of Examples}
\label{S:BasicExample}
This section is devoted entirely to examples of nonsingular systems. We describe here the most popular (and simple) constructions of nonsingular systems: product odometers, nonsingular Markov odometers, tower transformations, rank-one and finite rank systems, nonsingular Bernoulli and Markov shifts.
\subsection{Nonsingular product odometers}\label{SS:odometerdef}
Given a sequence $m_n$ of natural numbers, we let
$A_n:=\{0,1,\dots,m_n-1\}$. Let $\nu_n$ be a probability on $A_n$ and $\nu_n(a)>0$ for all $a\in A_n$. Consider now the infinite product probability space
$(X,\mu):=\bigotimes_{n=1}^\infty(A_n,\nu_n)$.
Assume that $\prod_{n=1}^\infty\max\{\nu_n(a)\mid a\in A_n\}=0$. Then $\mu$
is non-atomic. Given $a_1\in A_1,\dots,a_n\in A_n$, we denote by $[a_1,\dots,a_n]$ the cylinder $\{x=(x_i)_{i>0}\mid x_1=a_1,\dots,x_n=a_n\}$. If $x\ne (m_1-1,m_2-1,\dots)$, we let $l(x)$ be the smallest number $l$ such that the $l$-th coordinate of $x$ is not $m_l-1$.
We define a Borel map $T:X\to X$ by (\ref{odometer}) if $x\ne (m_1-1,m_2-1,\dots)$ and put $Tx:=(0,0,\dots)$ if $x=(m_1-1,m_2-1,\dots)$. Of course, $T$ is isomorphic to a rotation on a compact monothetic totally disconnected Abelian group. It is easy to check that $T$ is $\mu$-nonsingular and
$$
\omega_1^\mu(x)=\prod_{n=1}^\infty\frac{\nu_n((Tx)_n)}{\nu_n(x_n)}=
\frac{\nu_{l(x)}(x_{l(x)}+1)}
{\nu_{l(x)}(x_{l(x)})}\prod_{n=1}^{l(x)-1}\frac{\nu_n(0)}{\nu_n(m_n-1)}
$$ for a.a. $x=(x_n)_{n>0}\in X$.
It is also easy to verify that $T$ is ergodic. It is called the {\it
nonsingular product odometer} associated to $(m_n,\nu_n)_{n=1}^ \infty$. We note
that Ornstein's transformation (Example \ref{OrnEx}) is a nonsingular product
odometer.
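A standard concrete instance (recorded here only for illustration) is obtained by taking $m_n=2$, $\nu_n(0)=p$ and $\nu_n(1)=1-p$ for all $n$, with a fixed $0<p<1$. In this case $x_{l(x)}=0$ and the above formula becomes
$$
\omega_1^\mu(x)=\frac{1-p}{p}\Big(\frac{p}{1-p}\Big)^{l(x)-1},
$$
so the Radon-Nikodym derivative takes only countably many values, all of them powers of $\frac{p}{1-p}$.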
\subsection{Markov odometers}\label{MarkOd}
We define Markov odometers as in \cite{DoH1}. An ordered Bratteli
diagram $B$ \cite{HPS} consists of
\begin{enumerate}
\item[(i)]
a vertex set $V$ which is a disjoint union of finite sets $V^{(n)}$, $n\ge
0$, where $V^{(0)}$ is a singleton;
\item[(ii)]
an edge set $E$ which is a disjoint union of finite sets $E^{(n)}$, $n>0$;
\item[(iii)]
source mappings $s_n:E^{(n)}\to V^{(n-1)}$ and range mappings
$r_n:E^{(n)}\to V^{(n)}$ such that $s_n^{-1}(v)\ne\emptyset$ for all $v\in
V^{(n-1)}$ and $r_n^{-1}(v)\ne\emptyset$ for all $v\in V^{(n)}$, $n>0$;
\item[(iv)]
a partial order on $E$ so that $e,e'\in E$ are comparable if and only if
$e,e'\in E^{(n)}$ for some $n$ and $r_n(e)=r_n(e')$.
\end{enumerate}
A {\it Bratteli compactum} $X_B$ of the diagram $B$ is the space of
infinite paths
$$
\{x=(x_n)_{n>0}\mid x_n\in E^{(n)} \text{ and }r(x_n)=s(x_{n+1})\}
$$
on $B$. $X_B$ is equipped with the natural topology induced by the product
topology on $\prod_{n>0}E^{(n)}$. We will assume always that the diagram is
{\it essentially simple}, i.e., there is only one infinite path
$x_{\max}=(x_n)_{n>0}$ with $x_n$ maximal for all $n$ and only one
$x_{\min}=(x_n)_{n>0}$ with $x_n$ minimal for all $n$. The {\it
Bratteli-Vershik} map $T_B:X_B\to X_B$ is defined as follows:
$T_Bx_{\max}:=x_{\min}$. If $x=(x_n)_{n>0}\ne x_{\max}$ then let $k$ be the
smallest number such that $x_k$ is not maximal. Let $y_k$ be a successor of
$x_k$. Let $(y_1,\dots,y_k)$ be the unique path such that
$y_1,\dots, y_{k-1}$ are all minimal. Then we let
$T_Bx:=(y_1,\dots,y_k,x_{k+1},x_{k+2},\dots)$. It is easy to see that $T_B$
is a homeomorphism of $X_B$. Suppose that we are given a sequence
$P^{(n)}=\big(P^{(n)}_{v,e}\big)_{(v,e)\in V^{(n-1)}\times E^{(n)}}$ of stochastic
matrices, i.e.,
\begin{enumerate}
\item[(i)]
$P^{(n)}_{v,e}>0$ if and only if $v=s_n(e)$ and
\item[(ii)] $\sum_{\{e\in E^{(n)}\mid s_n(e)=v\}}P^{(n)}_{v,e}=1$ for each
$v\in V^{(n-1)}$.
\end{enumerate}
For $e_1\in E^{(1)},\dots, e_n\in E^{(n)}$, let $[e_1,\dots,e_n]$ denote
the cylinder $\{x=(x_j)_{j>0}\mid x_1=e_1,\dots, x_n=e_n\}$. Then we define
a {\it Markov measure} on $X_B$ by setting
$$
\mu_{P}([e_1,\dots,e_n])=P^{(1)}_{s_1(e_1),e_1}P^{(2)}_{s_2(e_2),e_2}\cdots
P^{(n)}_{s_n(e_n),e_n}
$$
for each cylinder $[e_1,\dots,e_n]$. The dynamical system $(X_B,\mu_{P},
T_B)$ is called a {\it Markov odometer}. It is easy to see that every
nonsingular product odometer is a Markov odometer where the corresponding $V^{(n)}$
are all singletons.
\subsection{Tower transformations}\label{tower} This construction is a discrete analogue of the flow built under a function. Given a nonsingular dynamical system
$(X,\mu,T)$ and a measurable map $f:X\to\mathbb N$, we define a new dynamical system $(X^f,\mu^f,T^f)$ by setting
$$
\begin{aligned}
X^f &:=\{(x,i)\in X\times\mathbb Z_+\mid 0\le i< f(x)\},\\
d\mu^f(x,i) &:=d\mu(x) \ \text{and}\\
T^f(x,i) &:=
\begin{cases}
(x,i+1), & \text{if }i+1<f(x)\\
(Tx,0), & \text{ otherwise.}
\end{cases}
\end{aligned}
$$
Then $T^f$ is $\mu^f$-nonsingular and $(d\mu^f\circ T^f/d\mu^f)(x,i)=(d\mu\circ T/d\mu)(x)$ for a.a. $(x,i)\in X^f$.
This transformation is called the (Kakutani) {\it tower over $T$ with height function} $f$. It is easy to check that $T^f$ is conservative if and only if $T$ is conservative; $T^f$ is ergodic if and only if $T$ is ergodic; $T^f$ is of type $III$ if and only if $T$ is of type $III$. Moreover, the induced transformation $(T^f)_{X\times \{0\}}$ is isomorphic to $T$. Given a subset $A\subset X$ of positive measure, $T$ is the tower over the induced transformation $T_A$ with the first return time to $A$ as the height function.
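As a simple illustration, if $f\equiv 2$ then $X^f=X\times\{0,1\}$ and
$$
T^f(x,0)=(x,1),\qquad T^f(x,1)=(Tx,0),
$$
so that $(T^f)^2$ restricted to $X\times\{0\}$ is a copy of $T$, in accordance with the statement about the induced transformation above.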
\subsection{Rank-one transformations. Chac\'on maps. Finite rank}\label{S:rankone}
The definition
uses the process of ``cutting and stacking.''
We construct by induction a sequence of columns $C_n$. A {\it column} $C_n$ consists of a
finite sequence of bounded intervals (left-closed, right-open) $C_n=\{I_{n,0},\dots,I_{n,h_n-1}\}$ of
{\it height} $h_n$. A column $C_n$
determines a {\it column map} $T_{C_n}$ that sends each interval $I_{n,i}$ to the interval above it
$I_{n,i+1}$ by the unique orientation-preserving affine map between the intervals. $T_{C_n}$ remains undefined on the top interval $I_{n,h_n-1}$. Set $C_0=\{[0,1)\}$ and let $\{r_n \geq 2\}$ be a sequence of positive integers, let $\{s_{n}\}$ be a sequence of functions $s_n:\{0,\ldots, r_n-1\}\to \mathbb N_0$, and let $\{w_{n}\}$ be a sequence of probability vectors on $\{0,\ldots, r_n-1\}$. If $C_n$ has been defined, column $C_{n+1}$ is defined as follows. First ``cut'' (i.e., subdivide) each interval $I_{n,i}$ in $C_n$ into $r_n$ subintervals $I_{n,i}[j], j=0,\ldots,r_n-1$,
whose lengths are in the proportions $w_n(0) : w_n(1) : \cdots : w_n(r_n-1)$. Next place, for each
$j=0,\ldots, r_n-1$, $s_{n}(j)$
new subintervals above $I_{n,h_n-1}[j]$, all of the same length as $I_{n,h_n-1}[j]$. Denote these intervals, called {\it spacers}, by
$S_{n,0}[j], \ldots, S_{n,s_{n}(j)-1}[j]$. This yields $r_n$ subcolumns; for each $j\in\{0,\ldots,r_n-1\}$ the $j$-th subcolumn
consists of the subintervals
\[I_{n,0}[j],\ldots, I_{n,h_n-1}[j]\text{\ followed by the spacers\ } S_{n,0}[j], \ldots, S_{n,s_{n}(j)-1}[j].
\]
Finally each subcolumn is stacked from left to right so that the top subinterval in subcolumn $j$ is sent to the bottom subinterval in subcolumn $j+1$, for $j=0,\ldots,r_n-2$ (by the unique orientation-preserving affine map between the intervals).
For example, $S_{n,s_{n}(0)-1}[0]$ is sent to $I_{n,0}[1]$. This defines a new column $C_{n+1}$ and new column map $T_{C_{n+1}}$, which remains undefined on its top subinterval. Let $X$ be the union of all intervals in all columns and let $\mu$ be Lebesgue measure restricted to $X$.
We assume that as $n\to\infty$ the maximal length of the intervals in $C_n$
converges to $0$, so we may define a transformation $T$ of $(X,\mu)$ by $Tx:=\lim_{n\to\infty} T_{C_{n}}x$.
One can verify that $T$ is well-defined a.e. and that it is nonsingular and ergodic. $T$ is said to be the {\it rank-one} transformation associated with $(r_n,w_n,s_{n})_{n=1}^\infty$. If all the probability vectors $w_n$ are uniform the resulting transformation is measure-preserving. The measure is infinite ($\sigma$-finite) if and only if the total mass of the spacers is infinite. In the case $r_n=3$ and $s_n(0)=s_n(2)=0$, $s_n(1)=1$ for all $n\geq 0$, the associated rank-one transformation is called a {\it nonsingular Chac\'on map}.
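The bookkeeping of a single step of the construction can be summarized in the following schematic Python sketch (an illustration only; a column is represented by a list of intervals $(\text{left endpoint},\text{length})$ from bottom to top, and spacer intervals are taken from an unused part of the line).
\begin{verbatim}
def cut_and_stack(column, r, w, s, free):
    """One step C_n -> C_{n+1}: cut in proportions w, add s[j] spacers."""
    new_column = []
    for j in range(r):
        offset = sum(w[:j])
        subcolumn = [(a + offset * length, w[j] * length)
                     for (a, length) in column]       # j-th piece of each level
        width = w[j] * column[0][1]
        for _ in range(s[j]):                          # spacers on top
            subcolumn.append((free, width))
            free += width
        new_column.extend(subcolumn)                   # stack left to right
    return new_column, free

# a Chacon-type step: r = 3, one spacer over the middle subcolumn
C1, free = cut_and_stack([(0.0, 1.0)], 3, [0.25, 0.5, 0.25], [0, 1, 0], 1.0)
print(len(C1), C1)   # height 4
\end{verbatim}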
It is easy to see that every nonsingular product odometer is of rank-one (the corresponding maps $s_n$ are all trivial).
Each rank-one map $T$ is a tower over a nonsingular product odometer (to obtain such an odometer reduce $T$ to a column $C_n$).
A rank $N$ transformation is defined in a similar way. A nonsingular transformation $T$ is said to be of
{\it rank $N$ or less} if at each stage of its construction there exist $N$ disjoint columns, the levels of the columns generate the $\sigma$-algebra and the Radon-Nikodym derivative of $T$ is constant on each non-top level of every column. $T$ is said to be of {\it rank $N$} if it is of rank $N$ or less and not of
rank $N-1$ or less. A rank $N$ transformation, $N\geq 2$, need not be ergodic.
\subsection{Nonsingular Bernoulli shifts}\label{S:hamachi}
A {\it nonsingular Bernoulli} transformation is a transformation $T$ such that there exists a generating $\sigma$-algebra $\mathcal P$ for $T$ (see \S\,\ref{generators}) such that the $\sigma$-algebras $T^n{\mathcal P}$, $n\in\mathbb Z$, are mutually independent.
Thus we may think that $T$ is the left shift on the probability space $(X,\mu)=(A^{\mathbb Z},\bigotimes_{n\in\mathbb Z}\mu_n)$, where $A$ is a standard Borel space and $(\mu_n)_{n\in\mathbb Z}$ is a sequence
of probability measures on $A$.
We will always assume that
$\mu$ is nonatomic.
It follows from Kakutani's criterion for equivalence of infinite product measures \cite{Kak} that $\mu$ is nonsingular if and only if $\mu_n\sim\mu_{n+1}$ for each $n$ and
\begin{align}\label{ffff}
\sum_{n\in\mathbb Z} H^2(\mu_n,\mu_{n+1})<\infty,
\end{align}
where $H^2(\mu,\nu)$ denotes the {\it Hellinger distance} defined by
$H^2(\mu,\nu):=1-\int\sqrt{\frac{d\mu}{d\xi}\frac{d\nu}{d\xi}}\,d\xi$, where $\xi$ is a probability measure
such that $\mu\prec\xi$ and $\nu\prec\xi$.
If (\ref{ffff}) is satisfied then for a.e. $x=(x_n)_{n\in\mathbb Z}\in X$,
$$
\omega^\mu_1(x)=\prod_{n\in\mathbb Z}\frac{\mu_{n-1}(x_n)}{\mu_n(x_n)}.
$$
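As a numerical illustration of this criterion (with made-up marginals), the Python sketch below computes the Hellinger distances for measures on $\{0,1\}$ with $\mu_n(0)=\tfrac12+c/n$ and confirms that their sum stays bounded, so the corresponding Bernoulli shift is nonsingular.
\begin{verbatim}
from math import sqrt

def hellinger_sq(p, q):
    """H^2 of two measures on {0,1} given by p = mu(0), q = nu(0)."""
    return 1.0 - (sqrt(p * q) + sqrt((1 - p) * (1 - q)))

c = 0.1
mu0 = [0.5 + c / n for n in range(1, 10001)]
total = sum(hellinger_sq(mu0[i], mu0[i + 1]) for i in range(len(mu0) - 1))
print(total)   # the partial sums are bounded, so the criterion is satisfied
\end{verbatim}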
\subsection{Nonsingular Markov shifts}\label{S:MarkSh} Let $A$ be a finite set and let $M=(M(a,b))_{a,b\in A}$ be a 0-1-valued $A\times A$-matrix.
We let $X_M:=\{x=(x_i)_{i\in\mathbb Z}\in A^{\mathbb Z}\mid M(x_i,x_{i+1})>0\text{ for all }i\in\mathbb Z\}$.
Let $T$ denote the restriction of the left shift to $X_M$.
Then $T$ is a
shift of finite type.
Given two integers $i\le j$ and a finite sequence $a=(a_l)_{l=i}^j$ of elements from $A$ such that $M(a_l,a_{l+1})=1$ for $l=i,\dots,j-1$, we denote by $[a]_i^j$ the cylinder $\{x\in X_M\mid x_l=a_l\text{ for all }l=i,\dots,j\}$.
Suppose that there is a sequence of probability measures $(\pi_n)_{n\in\mathbb Z}$ on $A$ and a sequence $(P_n)_{n\in\mathbb Z}$ of row-stochastic $A\times A$-matrices such that
$\pi_nP_n=\pi_{n+1}$ and $P_n(a,b)>0$ if and only if $M(a,b)>0$ for each $n\in\mathbb Z$.
Then there is a unique probability measure $\mu$ on $X_M$ such that for every cylinder
$[a]_i^j$ in $X_M$,
$$
\mu([a]_i^j)=\pi_i(a_i)P_i(a_i,a_{i+1})\cdots P_{j-1}(a_{j-1},a_j).
$$
It is called the Markov measure on $X_M$ generated by $(\pi_n,P_n)_{n\in\mathbb Z}$.
By analogy with \cite{Kak}, using \cite{KLS} one can find necessary and sufficient conditions for $T$ to be $\mu$-nonsingular.
Then the system $(X_M,T,\mu)$ is called a {\it nonsingular Markov shift}.
It is a natural generalization of nonsingular Bernoulli shifts.
By a standard computation,
$$
\omega^\mu_1(x)=\lim_{n\to+\infty}\frac{\pi_{-n-1}(x_{-n})}{\pi_{-n}(x_{-n})}\prod_{j=-n}^n\frac{P_{j-1}(x_j,x_{j+1})}{P_j(x_j,x_{j+1})}.
$$
We also mention here so-called {\it infinite Markov shifts}, i.e., Markov transformations preserving an infinite $\sigma$-finite measure (see \cite{KP} and \S4.5 from \cite{Aa}).
Let $A=\mathbb Z$ and let $P=(P(a,b))_{a,b\in A}$ be a row stochastic $A\times A$-matrix.
Suppose that $P$ is irreducible, i.e., for each pair $a,b\in A$, there is $n>0$ with $P^n(a,b)>0$.
Suppose that there is a strictly positive function $\pi:A\to\mathbb R_+^*$ such that
$\sum_{a\in A}\pi(a)=\infty$ and
$\sum_{a\in A}\pi(a)P(a,b)=\pi(b)$ for all $b\in A$.
Let $X_P:=\{x=(x_i)_{i\in\mathbb Z}\in A^{\mathbb Z}\mid P(x_i,x_{i+1})>0\text{ for all }i\in\mathbb Z\}$ and let $T$ denote the restriction of the left shift to $X_P$.
Define a measure $\mu$ on $X_P$ by setting
$$
\mu([a]_i^j)=\pi(a_i)P(a_i,a_{i+1})\cdots P(a_{j-1},a_j).
$$
Then $\mu$ is infinite and $\sigma$-finite and $T$ preserves $\mu$.
By \cite{KP}, if $\sum_{n=1}^\infty P^n(0,0)=\infty$ then $T$ is ergodic.
We call the system $(X_P,\mu,T)$ {\it the infinite Markov shift associated with $(P,\pi)$}.
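For the simple random walk on $\mathbb Z$ (that is, $P(a,a\pm1)=\tfrac12$ and $\pi\equiv1$) one has $P^{2k}(0,0)=\binom{2k}{k}4^{-k}\sim c/\sqrt k$, so the series above diverges and the associated infinite Markov shift is ergodic. A small Python sketch of this computation (for illustration only):
\begin{verbatim}
p = 1.0        # p = P^{2k}(0,0), starting from k = 0
partial = 0.0
for k in range(1, 20001):
    p *= (2 * k - 1) / (2 * k)    # binom(2k,k)/4^k = prod_{j<=k} (2j-1)/(2j)
    partial += p
print(partial)  # roughly 2*sqrt(20000/pi); it keeps growing with the cutoff
\end{verbatim}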
\subsection{Nonsingular Poisson suspensions}\label{NPT}
Let $(X,\mathcal B,\mu)$ be a $\sigma$-finite Lebesgue space with an infinite non-atomic measure.
Let $\mathcal B_0\subset\mathcal B$ be the collection of subsets of finite measure.
Denote by $X^*$ the space of measures $\omega$ of the form
$\omega=\sum_{i\in I}{\delta_{x_i}}$, where $I$ is a countable set.
Endow $X^*$ with the smallest $\sigma$-algebra $\mathcal B^*$ such that the maps
$$
N_A:X^*\ni\omega\mapsto\omega(A)\in\mathbb Z_+\sqcup\{\infty\}
$$
are all measurable, $A\in\mathcal B_0.$
Let $\mu^*$ be the (only) probability measure on $\mathcal B^*$
such that
\begin{itemize}
\item
$\mu^*\circ N_A^{-1}$ has the Poisson distribution with parameter $\mu(A)$, $A\in\mathcal B_0$ and
\item
if $A,B\in\mathcal B_0$ and $A\cap B=\emptyset$ then the maps
$N_A$ and $N_B$ are independent.
\end{itemize}
Let $T$ be a nonsingular invertible transformation of $(X,\mathcal B,\mu)$.
Define a transformation $T_*:X^*\to X^*$ by setting: $T_*\omega:=\omega\circ T^{-1}$.
If $T_*$ is $\mu^*$-nonsingular then $(X^*,\mathcal B^*,\mu^*, T_*)$ is called {\it the nonsingular Poisson suspension} of $(X,\mathcal B,\mu, T)$ \cite{DaKoRo1}.
\begin{theorem}[\cite{DaKoRo1}]\label{nonsing-P}
$T_*$ is $\mu^*$-nonsingular if and only if
$\sqrt{\frac{d\mu\circ T}{d\mu}}-1\in L^2(\mu)$.
If $\frac{d\mu\circ T}{d\mu}-1\in L^1(\mu)$ then $T_*$ is $\mu^*$-nonsingular and
$$
\frac{d\mu^*\circ T_*}{d\mu^*}(\omega)=e^{-\int_X(\frac{d\mu\circ T}{d\mu}-1)d\mu}
\prod_{\omega(\{x\})=1}\frac{d\mu\circ T}{d\mu}(x)
$$
at a.e. $\omega\in X^*$.
\end{theorem}
\subsection{Nonsingular Gaussian transformations}\label{gaus}
Let $\gamma$ stand for the normalized Gaussian measure on $\mathbb R$:
$d\gamma(t)=\frac1{\sqrt{2\pi}}e^{-\frac{t^2}2}dt$.
Let $(X,\mu):=(\mathbb R,\gamma)^{\mathbb N}$.
Denote by $\mathcal O$ the group of orthogonal operators in a separable infinite dimensional real Hilbert space $\mathcal H$.
Fix an orthonormal base in $\mathcal H$.
Then we can identify $\mathcal H$ with $\ell^2(\mathbb N)$ and every operator from $\mathcal O$ with
an infinite $\mathbb N\times\mathbb N$-matrix.
Every $O\in\mathcal O$ determines a Borel transformation $T_O:X\to X$ by the formula
$$
T_Ox:=((T_Ox)_n)_{n\in\mathbb N}, \text{ where } (T_Ox)_n:=\sum_{m=1}^\infty O_{n,m}x_m,\ n\in\mathbb N.
$$
More precisely, $T_O$ is defined $\mu$-almost everywhere and it is invertible (mod 0).
Moreover, $T_O$ preserves $\mu$.
The set $\{T_O\mid O\in\mathcal O\}$
is exactly the family of classical (probability preserving) Gaussian transformations, which are
well studied in ergodic theory.
A wider class of nonsingular Gaussian transformations is defined by Arano, Isono and Marrakchi in \cite{AIM} (see also \cite{DanL22} for a different but equivalent presentation of their concepts).
For each $y\in X$, consider a transformation $S_y:X\ni x\mapsto x+y\in X$.
By the Cameron-Martin theorem, $S_y$ is $\mu$-nonsingular if and only if $y\in\ell^2(\mathbb N)$ and
$$
\frac{d\mu\circ S_{y}^{-1}}{d\mu}(x)=e^{-\frac12\|y\|_2^2+\sum_{n=1}^\infty x_ny_n}\text{\ at $\mu$-a.e. $x=(x_n)_{n=1}^\infty\in X$ and $y=(y_n)_{n=1}^\infty\in\ell^2$.}
$$
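For the reader's convenience, note that in one coordinate this is the elementary computation
$$
\frac{d\gamma\circ S_y^{-1}}{d\gamma}(x)=\frac{e^{-(x-y)^2/2}}{e^{-x^2/2}}=e^{xy-\frac{y^2}2},\qquad x,y\in\mathbb R,
$$
and taking the product over the coordinates yields the exponent $-\frac12\|y\|_2^2+\sum_{n=1}^\infty x_ny_n$ above; the requirement $y\in\ell^2(\mathbb N)$ is what makes the infinite product converge.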
It is easy to see that $S_y$ is totally dissipative.
It is straightforward to verify that $T_OS_yT_O^{-1}=S_{Oy}$.
Denote the affine group $ \mathcal H\rtimes \mathcal O$ of $\mathcal H$ by Aff$\,\mathcal H$.
Then for each $(h,O)\in \text{Aff}\,\mathcal H$, consider a transformation
$G_{h,O}:=S_hT_O$ of $(X,\mu)$.
It is called {\it a nonsingular Gaussian transformation}.
Thus, each nonsingular Gaussian transformation is a composition of a classical probability preserving Gaussian transformation and a nonsingular totally dissipative ``rotation''.
It is obvious that
$$
\frac{d\mu\circ G_{h,O}^{-1}}{d\mu}(x)=\frac{d\mu\circ S_{h}^{-1}}{d\mu}(x)=e^{-\frac12\|h\|_2^2+\sum_{n=1}^\infty x_nh_n}\text{\ at $\mu$-a.e. $x=(x_n)_{n=1}^\infty\in X$.}
$$
\subsection{IDPFT transformations}
\begin{defn}[\cite{DanL18}]
Let $T_n$ be an ergodic nonsingular invertible transformation of a standard probability space $(X_n,\frak B_n,\mu_n)$ for each $n\in\mathbb N$.
Denote by $T$ the infinite direct product of $T_n$, $n\in\mathbb N$, acting on the infinite product space $(X,\frak B,\mu):=\bigotimes_{n\in\mathbb N}(X_n,\frak B_n,\mu_n)$.
If $T$ is $\mu$-nonsingular and each $T_n$ is of finite type, i.e., there exists a
$\mu_n$-equivalent probability measure $\nu_n$ which is invariant under $T_n$ for each $n\in\mathcal{B}bb N$
then $T$ is called an {\it infinite direct product of finite types (IDPFT).}
\end{defn}
Let $\varphi_n:=\frac{d\mu_n}{d\nu_n}$.
It follows from the Kakutani criterion \cite{Kak} that $T$ is $\mu$-nonsingular if and only if
$\prod_{n=1}^\infty\int_{X_n}\sqrt{\varphi_n\cdot\varphi_n\circ T_n}\,d\nu_n>0$.
Moreover, $\mu$ is mutually singular with the $T$-invariant probability $ \nu:=\bigotimes_{n\in\mathbb N}\nu_n$ if and only if $\prod_{n=1}^\infty\int_{X_n}\sqrt{\varphi_n}\,d\nu_n=0$.
If $T$ is $\mu$-nonsingular then
$$
\frac{d\mu\circ T}{d\mu}(x)=\prod_{n=1}^\infty\frac{d\mu_n\circ T_n}{d\mu_n}(x_n)\ \text{\ at a.e. }x=(x_n)_{n=1}^\infty\in X
.$$
\subsection{Natural extensions of nonsingular endomorphisms}
Let $(X,\mathcal B,\mu)$ be a $\sigma$-finite standard measure space.
A {\it nonsingular endomorphism} is a measurable map $R:X\to X$ such that $\mu(A)=0$ if and only if $\mu(R^{-1}A)=0$.
Suppose that $\mu$ is $\sigma$-finite on $R^{-1}\mathcal B$.
We define the {\it Radon-Nikodym derivative} $\omega_1^\mu$ of $R$ by setting
$\omega_1^\mu=\frac{d\mu}{d\mu\circ R^{-1}}\circ R$.
It was shown in \cite{Si} and \cite{ST2}
that there exists a $\sigma$-finite standard measure space $(X^*,\mathcal B^*,\mu^*)$, an invertible $\mu^*$-nonsingular transformation $R^*$ and a Borel map $\pi:X^*\to X$
such that the following hold: $\mu^*\circ\pi^{-1}=\mu$, $\pi R^*=R\pi$, $\omega_1^{\mu^*}$ is $\pi^{-1}(\mathcal B)$-measurable and $\bigvee_{n>0}(R^*)^n\pi^{-1}(\mathcal B)=\mathcal B^*$.
The dynamical system $(X^*,\mathcal B^*,R^*,\mu^*)$ is defined uniquely (up to a natural isomorphism) and called {\it the natural extension of $R$}.
It coincides with the standard Rokhlin definition of the natural extension in the case where $R$ preserves $\mu$ and $\mu$ is finite.
\begin{theorem}[\cite{Si}, \cite{ST2}] $R^*$ is conservative if and only if $R$ is $\mu$-recurrent, i.e.,
$$
\sum_{i\ge 0}h\circ R^i\omega_i=+\infty\quad \text{a.e., where $\omega_i=\prod_{j=0}^{i-1}\omega_1^{\mu}\circ R^j$}
$$
for each integrable function $h>0$.
Moreover, if $R$ is $\mu$-recurrent then $R^*$ is ergodic if and only if $R$ is ergodic.
\end{theorem}
Let $R$ be a nonsingular one-sided Bernoulli shift $(X,\mu)=\bigotimes_{n=1}^\infty (A,\mu_n)$.
Then the natural extension of $R$ is isomorphic to the two-sided nonsingular Bernoulli shift $T$ on
$(X^*,\mu^*)=\bigotimes_{n=-\infty}^\infty (A,\mu_n^*)$, where $\mu_n^*=\mu_n$ if $n>0$
and $\mu_n^*=\mu_1$ if $n\le 0$.
The corresponding projection $\pi:X^*\to X$ is the natural projection, i.e., $\pi(\dots,a_{-1},a_0,a_1,a_2,\dots):=(a_1,a_2,\dots)$.
\section{Topological groups Aut$(X,\mu)$, Aut$_2(X,\mu)$ and Aut$_1(X,\mu)$}\label{topological group}
\subsection{} Let $(X,\mathcal B,\mu)$ be a standard probability space and let Aut$(X,\mu)$ denote the group of all nonsingular transformations
of $X$. Let $\nu$ be a finite or $\sigma$-finite measure equivalent to $\mu$; the subgroup of the $\nu$-preserving transformations is denoted by Aut$_0(X,\nu)$. Then Aut$(X,\mu)$ is a simple group \cite{Eig} and it has no outer automorphisms \cite{Eig2}. Ryzhikov showed \cite{Ry} that every element of this group is a product of three involutions (i.e., transformations of order~2). Moreover, a nonsingular transformation is a product of two involutions if and only if it is conjugate to its inverse by an involution.
Inspired by \cite{Hal}, Ionescu Tulcea \cite{IT65} and Chacon and Friedman \cite{ChF} introduced the {\it weak} and the {\it uniform} topologies respectively on Aut$(X,\mu)$. The weak one---we denote it by $d_w$---is induced from the weak operator topology on the group of unitary operators in $L^2(X,\mu)$ by the embedding $T\mapsto U_T$ (see \S\,\ref{S:ErgodicTheorem}). Then $(\text{Aut$(X,\mu)$}, d_w)$ is a Polish topological group and Aut$_0(X,\nu)$ is a closed subgroup of Aut$(X,\mu)$.
This topology will not be affected if we replace $\mu$ with any equivalent measure. We note that $T_n$ weakly converges to $T$ if and only if $\mu(T_{n}^{-1}A\bigtriangleup T^{-1}A)\to 0$ for each $A\in\mathcal B$ and
$d(\mu\circ {T_n})/d\mu\to d(\mu\circ T)/d\mu$ in $L^1(X,\mu)$.
For each $p\ge 1$, one can also embed Aut$(X,\mu)$ into the isometry
group of $L^p(X,\mu)$ via a formula similar to (\ref{koopman oper}) but
with another power of the Radon-Nikodym derivative in it. The strong
operator topology on the isometry group induces the very same weak
topology on Aut$(X,\mu)$ for all $p\ge 1$ \cite{CK79}.
Danilenko showed in \cite{Dan} that $(\text{Aut$(X,\mu)$}, d_w)$ is contractible. It follows easily from the Rokhlin lemma that periodic transformations are dense in Aut$(X,\mu)$.
It is natural to ask which properties of nonsingular transformations are typical in the sense of Baire category. The following technical lemma
(see \cite{F70}, \cite{CK79}) is an indispensable tool when considering such problems.
\begin{lemma} \label{conjugacy} The conjugacy class of each aperiodic transformation $T$ is dense in {\rm Aut}$(X,\mu)$ endowed with the weak topology.
\end{lemma}
Using this lemma and the Hurewicz ergodic theorem Choksi and Kakutani \cite{CK79} proved that
the ergodic transformations form a dense
$ G_\delta$ in Aut$(X,\mu)$. The same holds for the subgroup Aut$_0(X,\nu)$ (\cite{S71} and \cite{CK79}). Combined with \cite{IT65}
the above implies that the ergodic transformations of type $III$ form a dense $G_\delta$ in Aut$(X,\mu)$. For further refinement of this statement we refer to Section~\ref{orbits}.
Since the map $T\mapsto T\times \cdots\times T$\,($p$ times) from Aut$(X,\mu)$ to Aut$(X^p,\mu^{\otimes p})$ is continuous for each $p>0$, we deduce that the set $\mathcal E_\infty$ of transformations with infinite ergodic index (which means that $T\times \cdots\times T$\,($p$ times) is ergodic for each $p>0$)
is a $G_\delta$ in Aut$(X,\mu)$. It is non-empty by \cite{KP}. Since this $\mathcal E_\infty$ is invariant under conjugacy, it is dense in Aut$(X,\mu)$ by Lemma~\ref{conjugacy}. Thus we obtain that $\mathcal E_\infty$ is a dense $G_\delta$. In a similar way one can show that
$\mathcal E_\infty\cap\text{Aut}_0(X,\nu)$ is a dense $G_\delta$ in Aut$_0(X,\nu)$ (see also \cite{S71}, \cite{CK79}, \cite{CN00} for original proofs of these claims).
A nonsingular transformation $T$ is called {\it rigid} if $T^{n_k}\to\text{Id}$ weakly for some sequence $n_k\to\infty$.
The rigid transformations form a dense $G_\delta$ in Aut$(X,\mu)$.
It follows that the set of multiply recurrent nonsingular transformations is residual \cite{AgSi}.
A finer result was established in \cite{DanS}: the set of polynomially recurrent transformations in
Aut$_0(X,\nu)$ is residual in Aut$_0(X,\nu)$.
For the definition of multiple and polynomial recurrence we refer to \S\,\ref{MPR} below.
Given $T\in\text{Aut}(X,\mu)$, we denote the {\it centralizer} $\{S\in \text{Aut}(X,\mu)\mid ST=TS\}$ of $T$ by $C(T)$. Of course, $C(T)$ is a closed subgroup of Aut$(X,\mu)$ and $C(T)\supset\{T^n\mid n\in\mathbb Z\}$.
In a similar way, if
$T\in \text{Aut$_0(X,\nu)$}$, the {\it measure preserving centralizer}
$C_0(T):=\text{Aut$_0(X,\nu)$}\cap C(T)$
of $T$
is a weakly closed subgroup of \text{Aut$_0(X,\nu)$}.
The following problems solved (by several authors) for probability preserving systems are still open for the nonsingular case. Are the properties:
\begin{itemize}
\item[(i)] $T$ has a square root;
\item[(ii)] $T$ embeds into a flow;
\item[(iii)] $T$ has a non-trivial invariant sub-$\sigma$-algebra;
\item[(iv)] $C(T)$ contains a torus of arbitrary dimension
\end{itemize}
typical (residual) in Aut$(X,\mu)$ or Aut$_0(X,\nu)$?
The {\it uniform} topology on Aut$(X,\mu)$, finer than $d_w$, is defined by the metric
\[
d_u(T,S) =\mu(\{x: Tx\neq Sx\}) +\mu(\{x: T^{-1}x\neq S^{-1}x\}).
\]
This topology is also given by a complete metric.
It depends only on the measure class of $\mu$. However, the uniform topology is not separable and that is why it is of less importance in ergodic theory.
We refer to \cite{ChF}, \cite{F70}, \cite{CK79} and
\cite{CP83} for the properties of $d_u$.
\subsection{}
Suppose now that $\mu(X)=\infty$ but $\mu$ is $\sigma$-finite.
We now let
$$
\begin{aligned}
\text{Aut}_2(X,\mu)&:=\bigg\{T\in\text{Aut}(X,\mu)\mid \sqrt{\frac{d\mu\circ T}{d\mu}}-1\in L^2(\mu)\bigg\}\quad
\text{and}\\
\text{Aut}_1(X,\mu)&:=\{T\in\text{Aut}(X,\mu)\mid \frac{d\mu\circ T}{d\mu}-1\in L^1(\mu)\}.
\end{aligned}
$$
These groups appear naturally in the study of nonsingular Poisson suspensions (see~(\ref{NPT})).
Define a topology $d_2$ on $\text{Aut}_2(X,\mu)$ by setting that a sequence $(T_n)_{n=1}^\infty$
converges to $T$ in $d_2$ if
$T_n\to T$ weakly and $\|\sqrt{\frac{d\mu\circ T_n}{d\mu}}-\sqrt{\frac{d\mu\circ T}{d\mu}}\|_2\to 0$
as $n\to\infty$.
In a similar way one can define a topology $d_1$ on $\text{Aut}_1(X,\mu)$:
a sequence $(T_n)_{n=1}^\infty$
converges to $T$ in $d_1$ if $T_n\to T$ weakly and $\|{\frac{d\mu\circ T_n}{d\mu}}-{\frac{d\mu\circ T}{d\mu}}\|_1\to 0$
as $n\to\infty$.
\begin{theorem}\cite{DaKoRo1}
\begin{itemize}
\item $(\text{Aut}_2(X,\mu), d_2)$ is a Polish group.
\item $(\text{Aut}_1(X,\mu), d_1)$ is a Polish group.
\item The mapping $\chi:\text{Aut}_1(X,\mu)\ni T\mapsto\int_X(\frac{d\mu\circ T}{d\mu}-1)d\mu\in\mathbb R$ is a
continuous onto homomorphism.
\item
The short exact sequence
$$
\{1\}\to\ker\chi\to\text{Aut}_1(X,\mu) \overset{\chi}\to\mathbb R\to\{1\}
$$
splits.
\item $\text{Aut}_1(X,\mu)$ is a dense and meager subgroup of $(\text{Aut}_2(X,\mu),d_2)$.
\item
$\chi$ is not $d_2$-continuous.
\item
the group $\{T_*\mid T\in \text{Aut}_2(X,\mu)\}$ is weakly closed in $\text{Aut}(X^*,\mu^*)$.
\item
the groups $ (\text{Aut}_2(X,\mu),d_2)$ and $(\ker\chi,d_1)$ have the Rokhlin property (i.e. there exists a dense conjugacy class in each of these groups).
\item
The group $ (\text{Aut}_1(X,\mu),d_1)$ does not have the Rokhlin property.
\end{itemize}
\end{theorem}
\section{Orbit theory }\label{orbits}
Orbit theory is, in a sense, the most complete part of nonsingular ergodic theory. We present here Krieger's seminal theorem on orbit classification of ergodic nonsingular transformations in terms of ratio sets and associated flows. Examples of transformations of various types $III_\lambda$, $0\le \lambda\le 1$, are given.
``Almost continuous'' refinement of the orbit equivalence is also considered here.
Next, we consider the outer conjugacy problem for automorphisms of the orbit equivalence relations. This problem is solved in terms of a simple complete system of invariants. We discuss also a general theory of cocycles (of nonsingular systems) taking values in locally compact Polish groups and present an important orbit classification theorem for cocycles. This theorem is an analogue of the aforementioned result of Krieger.
We complete the section by considering ITPFI-systems and their relation to AT-flows.
\subsection{Full groups. Ratio set and types $III_\lambda$,
$0\le \lambda\le 1$
}
Let $T$ be a nonsingular transformation of a standard probability space
$(X,\mathcal B,\mu)$. Denote by Orb$_T(x)$ the $T$-orbit of $x$, i.e.,
Orb$_T(x)=\{T^nx\mid n\in\mathbb Z\}$. The {\it full group} $[T]$ of $T$
consists of all transformations $S\in\text{Aut}(X,\mu)$ such that
$Sx\in\text{Orb}_T(x)$ for a.a. $x$. If $T$ is ergodic then $[T]$ is topologically simple (or even algebraically simple if $T$ is not of type $II_\infty$) \cite{Eig}. It is easy to see
that $[T]$ endowed with the uniform topology $d_u$ is a Polish group. If $T$ is ergodic then $([T],d_u)$ is
contractible \cite{Dan}.
The {\it ratio set\,} $r(T)$ of $T$ was defined by Krieger [Kr70] and as we shall see below it is the key concept in the orbit
classification (see Definition~\ref{orbit equivalent}). The ratio set is a subset of $[0,+\infty)$ defined as follows: $t\in
r(T)$ if and only if for every $A\in\mathcal B$ of positive measure and each
$\varepsilon>0$ there is a subset $B\subset A$ of positive measure and an
integer $k\ne 0$ such that $T^kB\subset A$ and $| \omega_k^\mu(x)-t|<\varepsilon$ for all $x\in B$. It is easy to verify that
$r(T)$ depends only on the equivalence class of $\mu$ and not on $\mu$
itself.
A basic
fact is that $1\in r(T)$ if and only if $T$ is conservative. Assume now $T$ to be
conservative and ergodic. Then $r(T)\cap(0,+\infty)$ is a closed subgroup
of the multiplicative group $(0,+\infty)$. Hence $r(T)$ is one of the
following sets:
\begin{enumerate}
\item[(i)]
$\{1\}$;
\item[(ii)]
$\{0,1\}$; in this case we say that $T$ is of {\it type $III_0$},
\item[(iii)]
$\{\lambda^n\mid n\in\mathbb Z\}\cup\{0\}$ for $0<\lambda<1$; then we say that $T$ is of {\it type $III_\lambda$},
\item[(iv)]
$[0,+\infty)$; then we say that $T$ is of {\it type III$_1$}.
\end{enumerate}
Krieger showed that $r(T)=\{1\}$ if and only if $T$ is of type $II$.
Hence we obtain a further subdivision of type $III$ into subtypes $III_0$, $III_\lambda$, $0<\lambda<1$, and $III_1$.
\begin{ex}\label{type III}
(i) Fix $\lambda\in(0,1)$. Let
$\nu_n(0):=1/(1+\lambda)$ and $\nu_n(1):=\lambda/(1+\lambda)$ for all
$n=1,2,\dots$. Let $T$ be the nonsingular product odometer associated with the sequence $(2,\nu_n)_{n=1}^\infty$ (see \S\,\ref{SS:odometerdef}). We claim that $T$ is of type $III_\lambda$.
Indeed, the group $\Sigma$ of finite permutations of $\mathbb N$ acts on $X$
by $(\sigma x)_n=x_{\sigma^{-1}(n)}$, for all $n\in\mathbb N$,
$\sigma\in\Sigma$ and $x=(x_n)_{n=1}^\infty\in X$. This action preserves
$\mu$. Moreover, it is ergodic by the Hewitt-Savage 0-1 law. It remains to
notice that $(d\mu\circ T/d\mu)(x)=\lambda$ on the cylinder $[0]$ which is of positive measure.
(ii) Fix positive reals $\rho_1$ and $\rho_2$ such
that $\log\rho_1$ and $\log\rho_2$ are rationally independent. Let
$\nu_n(0):=1/(1+\rho_1+\rho_2)$, $\nu_n(1):=\rho_1/(1+\rho_1+\rho_2)$ and
$\nu_n(2):=\rho_2/(1+\rho_1+\rho_2)$ for all $n=1,2,\dots$. Then the
nonsingular product odometer associated with the sequence
$(3,\nu_n)_{n=1}^\infty$ is of type $III_1$. This can be shown in a similar way as (i).
(iii) Partition $\mathbb N$ into two infinite subsets $A$ and $B$.
Fix a sequence $(\varepsilon_n)_{n\in B}$ of positive reals such that $\varepsilon_n<0.4$ for all $n\in B$ and $ \sum_{n\in B}\varepsilon_n<\infty$.
We now let $\nu_n(0):=0.5$ if $n\in A$, $\nu_n(0):=1-\varepsilon_n$ if $n\in B$, and $\nu_n(1):=1-\nu_n(0)$ for all $n$.
Then the
nonsingular product odometer associated with the sequence
$(2,\nu_n)_{n=1}^\infty$ is of type $II_\infty$.
This follows from \cite{Moo}.
\end{ex}
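To see the claim of (i) on concrete cylinders, the following Python sketch (purely illustrative) applies the odometer (addition of $1$ with carry) to finite words and evaluates the corresponding Radon-Nikodym derivative, which is always an integer power of $\lambda$; in particular it equals $\lambda$ on the cylinder $[0]$.
\begin{verbatim}
lam = 0.4
nu = {0: 1 / (1 + lam), 1: lam / (1 + lam)}   # the same measure in each coordinate

def odometer_step(x):
    """Add 1 with carry to a finite 0-1 word (assumed not to be all 1's)."""
    y = list(x)
    for i, digit in enumerate(y):
        if digit == 0:
            y[i] = 1
            return y
        y[i] = 0
    raise ValueError("word of all 1's")

def rn_derivative(x):
    y = odometer_step(x)
    d = 1.0
    for xn, yn in zip(x, y):
        d *= nu[yn] / nu[xn]
    return d

print(rn_derivative([0, 1, 1]))   # lambda        (a point of the cylinder [0])
print(rn_derivative([1, 1, 0]))   # lambda**(-1)  (two leading 1's are carried)
\end{verbatim}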
A nonsingular product odometer of type $III_0$ will be constructed in Example~\ref{IIIzero}
below.
\subsection{Maharam extension, associated flow and orbit classification of
type $III$ systems
}\label{S:maharam}
On $X\times\mathbb R$ with the $\sigma$-finite measure $\mu\times\kappa$,
where $d\kappa(y)=\exp(y)dy$, consider the transformation
$$
\widetilde T(x,y):=(Tx,y-\log\frac{d\mu\circ T}{d\mu}(x)).
$$
We call it the {\it Maharam extension} of $T$ (see \cite{Mah}, where these
transformations were introduced). It is measure-preserving and it commutes
with the flow $S_t(x,y):=(x,y+t)$, $t\in\mathbb R$.
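Indeed, with the convention $(\mu\circ T)(A)=\mu(TA)$ and writing $\omega:=\frac{d\mu\circ T}{d\mu}$, one has $\kappa(B+c)=e^c\kappa(B)$ for every Borel $B\subset\mathbb R$ and $c\in\mathbb R$, whence for all $A\in\mathcal B$ and Borel $B\subset\mathbb R$,
$$
(\mu\times\kappa)\big(\widetilde T^{-1}(A\times B)\big)=\int_{T^{-1}A}\kappa\big(B+\log\omega(x)\big)\,d\mu(x)
=\kappa(B)\,(\mu\circ T)(T^{-1}A)=\mu(A)\kappa(B),
$$
which is the asserted invariance of $\mu\times\kappa$.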
It is conservative if and
only if $T$ is conservative \cite{Mah}.
However $\widetilde T$ is not
necessarily ergodic when $T$ is ergodic.
Let $(Z,\nu)$ denote the space of $\widetilde T$-ergodic
components. Then $(S_t)_{t\in\mathbb R}$ acts nonsingularly
on this space. The restriction of $(S_t)_{t\in\mathbb R}$ to $(Z,\nu)$ is
called the {\it associated flow} of $T$. The associated flow is ergodic
if and only if $T$ is ergodic. It is easy to verify that the isomorphism class of
the associated flow is an invariant of the orbit equivalence of the
underlying system.
\begin{prop}[\cite{HO81}]
\begin{enumerate}
\item[(i)]
$T$ is of type $II$ if and only if its associated flow is the translation
on $\mathbb R$, i.e., $x\mapsto x+t$, $x,t\in\mathbb R$,
\item[(ii)]
$T$ is of type $III_\lambda$, $0<\lambda<1$, if and only if its associated
flow is the periodic flow on the interval $[0,-\log\lambda)$, i.e.,
$x\mapsto x+t\mod(-\log\lambda)$,
\item[(iii)]
$T$ is of type $III_1$ if and only if its associated flow is the trivial
flow on a singleton or, equivalently, $\widetilde T$ is ergodic,
\item[(iv)]
$T$ is of type $III_0$ if and only if its associated flow is nontransitive.
\end{enumerate}
\end{prop}
\begin{ex}\label{IIIzero} Let $A_n=\{0,1,\dots,2^{2^n}\}$ and $\nu_n(0)=0.5$
and $\nu_n(i)=0.5\cdot 2^{-2^n}$ for all $0<i\le 2^{2^n}$. Let $T$ be the
nonsingular product odometer associated with $(2^{2^n}+1,\nu_n)_{n=0}^\infty$.
It is straightforward that the associated flow of $T$ is the
flow built under the constant function $1$ with the probability preserving 2-adic product odometer (associated with $(2,\kappa_n)_{n=1}^\infty$, $\kappa_n(0)=\kappa_n(1)=0.5$) as the
base transformation. In particular, $T$ is of type $III_0$.
\end{ex}
A natural problem arises: to compute Krieger's type (or the ratio set) for the nonsingular product odometers---the simplest class of nonsingular systems. Some partial progress was achieved in \cite{AW}, \cite{Moo}, \cite{Os2}, \cite{DKQ}, etc. However in the general setting this problem remains open.
The map $\Psi:\text{Aut}(X,\mu)\ni T\mapsto\widetilde
T\in\text{Aut}(X\times\mathbb R,\mu\times\kappa)$ is a continuous group
homomorphism. Since the set $\mathcal E$ of ergodic
transformations on $X\times\mathbb R$ is a $G_\delta$ in Aut$(X\times\mathbb
R,\mu\times\kappa)$ (See \S\,\ref{topological group}), the subset $\Psi^{-1}(\mathcal E)$
of type $III_1$ ergodic transformations on $X$ is also $G_\delta$. The
latter subset is non-empty in view of Example~\ref{type III}(ii). Since it is invariant under
conjugacy, we deduce from Lemma~\ref{conjugacy} that the set of ergodic transformations of type $III_1$ is a dense $G_\delta$
in (Aut$(X,\mu),d_w)$ (\cite{PaS}, \cite{CHP}).
Now we state the main result of this section---Krieger's theorem on orbit
classification for ergodic transformations of type $III$. It is a far
reaching generalization of the basic result by H. Dye: any two ergodic
probability preserving transformations are orbit equivalent \cite{dye}.
\begin{theorem}[Orbit equivalence for type $III$ systems
\cite{Kr1}---\cite{Kr76}]\label{T:2.4} Two ergodic transformations of type $III$ are
orbit equivalent if and only if their associated flows are isomorphic. In
particular, for a fixed
$0<\lambda\le 1$, any two ergodic transformations of type $III_\lambda$ are orbit equivalent.
\end{theorem}
The original proof of this theorem is rather complicated. Simpler treatment
of it can be found in \cite{HO81} and \cite{KW}.
We also note that every free ergodic flow can be realized as the
associated flow of a type $III_0$ transformation.
However it is somewhat
easier to construct a $\mathbb Z^2$-action of type $III_0$ whose associated
flow is the given one. For this, we take an ergodic nonsingular
transformation $Q$ on a probability space $(Z,\mathcal B,\lambda)$ and a
measure-preserving transformation $R$ of an infinite $\sigma$-finite
measure space $(Y,\mathcal F,\nu)$ such that there is a continuous
homomorphism $\pi:\mathbb R\to C(R)$ with
$(d\nu\circ\pi(t)/d\nu)(y)=\exp(t)$ for a.a. $y$ (for instance, take a type
$III_1$ transformation $T$ and put $R:=\widetilde T$ and $\pi(t):=S_t$).
Let $\varphi:Z\to \mathbb R$ be a Borel map with $\inf_Z\varphi>0$. Define two
transformations $R_0$ and $Q_0$ of $(Z\times Y,\lambda\times\nu)$ by
setting:
$$
R_0(x,y):=(x,Ry), \ \ Q_0(x,y)=(Qx, U_xy),
$$
where $U_x=\pi(\varphi(x)-\log(d\lambda\circ Q/d\lambda)(x))$. Notice that $R_0$ and
$Q_0$ commute. The corresponding $\mathbb Z^2$-action generated by these
transformations is ergodic. Take any transformation $V\in
\text{Aut}(Z\times Y,\lambda\times\nu)$ whose orbits coincide with the
orbits of the $\mathbb Z^2$-action. (According to \cite{CFW}, any ergodic
nonsingular action of any countable amenable group is orbit equivalent to
a single transformation.) It is now easy to
verify that the associated flow of $V$ is the special flow built under
$\varphi\circ Q^{-1}$ with the base transformation $Q^{-1}$.
Then $V$ is of type $III_0$.
Since $Q$ and
$\varphi$ are arbitrary, we deduce the following from Theorem~\ref{ambrose}.
\begin{theorem} \label{T:2.5} Every nontransitive ergodic flow is an associated flow of
an ergodic transformation of type $III_0$.
\end{theorem}
In \cite{Kr76} Krieger introduced a map $\Phi$ as follows. Let $T$ be an
ergodic transformation of type $III_0$. Then the associated flow of $T$ is
a flow built under function with a base transformation $\Phi(T)$. We note
that the orbit equivalence class of $\Phi(T)$ is well defined by the orbit
equivalent class of $T$. If $\Phi^n(T)$ fails to be of type $III_0$ for
some $1\ell^1e n<\infty$ then $T$ is said to {\it belong to Krieger's
hierarchy}. For instance, the transformation constructed in Example~\ref{IIIzero} belongs to Krieger's hierarchy. Connes gave in \cite{Co} an example of $T$ such that $\Phi(T)$
is orbit equivalent to $T$ (see also \cite{HO81} and \cite{GiS}). Hence $T$
is not in
Krieger's hierarchy.
\subsection{Almost continuous orbit equivalence}
In this subsection, by a dynamical system we mean a quadruple $(X,\tau,\mu,T)$, where $(X,\tau)$ is a Polish space, $\mu$ is a non-atomic Borel measure of full support, $T$ is a nonsingular ergodic homeomorphism of $X$ such that the function $\omega_1:X\to\mathbb R$ is continuous (has a continuous version).
\begin{defn} Two dynamical systems $(X,\tau,\mu,T)$ and $(X',\tau',\mu',T')$ are {\it almost continuously orbit equivalent} if there are dense invariant $G_\delta$ subsets $X_0\subset X$ and $X_0'\subset X'$ of full measure and a homeomorphism $\varphi:X_0\to X_0'$ such that
\begin{itemize}
\item $\varphi(\{T^nx\mid n\in\mathbb Z\})=\{(T')^n\varphi(x)\mid n\in\mathbb Z\}$ at every $x\in X_0$,
\item
$\mu\circ\varphi^{-1}\sim\mu'$ and the Radon-Nikodym derivative
$\frac{d\mu\circ\varphi^{-1}}{d\mu'}$ is (can be chosen) continuous,
\item
letting $S:=\varphi^{-1}T'\varphi$ we have $Tx=S^{n(x)}x$ and $Sx=T^{m(x)}x$, where $n$ and $m$ are continuous on $X_0$.
\end{itemize}
\end{defn}
We note that if $X$ and $X'$ are infinite product spaces, $T$ and $T'$ preserve $\mu$ and $\mu'$ respectively and we omit the requirement that $X_0$ and $X_0'$ are $G_\delta$, then the above definition of $\varphi$ is equivalent to the ``finitary'' equivalence from the celebrated work of Keane and Smorodinsky \cite{KS}.
It was shown by del Junco and {\c S}ahin \cite{dJS} that any two ergodic probability preserving homeomorphisms
of Polish spaces are almost continuously orbit equivalent.
The same is
true for
any ergodic homeomorphisms preserving infinite
$\sigma$-finite local measures \cite{dJS}.
In \cite{DadJ}, a topological analogue
$r_{\text{top}}(T)$ of $r(T)$ was introduced.
It is a closed subgroup of
$\mathbb R$
which contains $r(T)$ and it is invariant under the almost continuous orbit equivalence.
In \cite{DadJ}, two type
$III$
homeomorphisms were constructed which are measure-theoretically orbit equivalent but not almost continuously orbit equivalent (their $r_{\text{top}}$-invariants are different).
\begin{theorem}[\cite{DadJ}]
Let
$(X,\tau,\mu,T)$ and $(X',\tau',\mu',T')$
be ergodic non-singular homeomorphisms of Polish spaces.
If the two systems are either
\begin{itemize}
\item[{\rm (i)}]
of type
$III_\lambda$ with $0<\lambda<1$
and $r_{\text{top}}(T)=r_{\text{top}}(T')=\log\lambda\cdot\mathbb Z$
or
\item[{\rm (ii)}]
of type
$III_1$
\end{itemize}
then they are almost continuously orbit equivalent.
\end{theorem}
Characterization of almost continuous orbit equivalence for homeomorphisms of type $III_0$ remains an open problem.
\subsection{Normalizer of the full group. Outer conjugacy problem
}
Let
$$
N[T]=\{R\in\text{Aut}(X,\mu)\mid R[T]R^{-1}=[T]\},
$$
i.e., $N[T]$ is the {\it normalizer} of the full group $[T]$ in
Aut$(X,\mu)$. We note that a transformation $R$ belongs to $N[T]$ if and
only if $R(\text{Orb}_T(x))=\text{Orb}_T(Rx)$ for a.a. $x$. To define a
topology on $N[T]$ consider the $T$-orbit equivalence relation $\mathcal
R_T\subset X\times X$ and a $\sigma$-finite measure $\mu_\mathcal R$ on $\mathcal
R_T$ given by $\mu_{\mathcal
R_T}=\int_X\sum_{y\in\text{Orb}_T(x)}\delta_{(x,y)}d\mu(x)$. For $R\in
N[T]$, we define a transformation $i(R)\in\text{Aut}(\mathcal R_T,\mu_{\mathcal
R_T})$ by setting $i(R)(x,y):=(Rx,Ry)$. Then the map $R\mapsto i(R)$ is an
embedding of $N[T]$ into Aut$(\mathcal R_T,\mu_{\mathcal R_T})$. Denote by $\tau$
the topology on $N[T]$ induced by the weak topology on Aut$(\mathcal
R_T,\mu_{\mathcal R_T})$ via $i$ \cite{Dan}. Then $(N[T],\tau)$ is a Polish group. A
sequence $R_n$ converges to $R$ in $(N[T],\tau)$ if $R_n\to R$ weakly (in
Aut$(X,\mu)$) and $R_nTR_n^{-1}\to RTR^{-1}$ uniformly (in $[T]$).
Given $R\in N[T]$, denote by $\widetilde R$ the Maharam extension of $R$.
Then $\widetilde R\in N[\widetilde T]$ and it commutes with $(S_t)_{t\in
\mathbb R}$. Hence it defines a nonsingular transformation mod\,$R$ on the
space $(Z,\nu)$ of the associated flow $W=(W_t)_{t\in\mathbb R}$ of $T$.
Moreover, mod\,$R$ belongs to the centralizer $C(W)$ of $W$ in
Aut$(Z,\nu)$. Note that $C(W)$ is a closed subgroup of
(Aut$(Z,\nu),d_w)$.
Let $T$ be of type $II_\infty$ and let $\mu'$ be the invariant
$\sigma$-finite measure equivalent to $\mu$. If $R\in N[T]$ then it is easy
to see that the Radon-Nikodym derivative $d\mu'\circ R/d\mu'$ is invariant
under $T$. Hence it is constant, say $c$. Then mod$\,R=\log c$.
\begin{theorem}[\cite{HO81}, \cite{Ham}]\label{T:2.6} If $T$ is of type $III$ then
the map $\mod:N[T]\to C(W)$ is a continuous onto homomorphism. The kernel
of this homomorphism is the $\tau$-closure of $[T]$. Hence the quotient
group $N[T]/\overline{[T]}^\tau$ is (topologically) isomorphic to $C(W)$.
In particular, $\overline{[T]}^\tau$ is co-compact in $N[T]$ if and only if
$W$ is a finite measure-preserving flow with a pure point spectrum.
\end{theorem}
The following theorem describes the homotopical structure of normalizers.
\begin{theorem}[\cite{Dan}]\label{T:2.7} Let $T$ be of type $II$ or
$III_\lambda$, $0\le\lambda<1$. The group $\overline{[T]}^\tau$ is
contractible. $N[T]$ is homotopically equivalent to $C(W)$. In particular,
$N[T]$ is contractible if $T$ is of type $II$.
If $T$ is of type
$III_\lambda$ with $0<\lambda<1$ then $\pi_1(N[T])=\mathbb Z$.
\end{theorem}
The {\it outer period} $p(R)$ of $R\in N[T]$ is the smallest positive
integer $n$ such that $R^n\in[T]$. We write $p(R)=0$ if no such $n$ exists.
Two transformations $R$ and $R'$ in $N[T]$ are called {\it outer conjugate}
if there are transformations $V\in N[T]$ and $S\in[T]$ such that
$VRV^{-1}=R'S$. The following theorem provides convenient (for
verification) necessary and sufficient conditions for the outer conjugacy.
\begin{theorem}[\cite{CK} for type $II$ and \cite{BG} for type
$III$]\label{T:2.8} Transformations $R,R'\in N[T]$ are outer conjugate if and only if
$p(R)=p(R')$ and $\mod R$ is conjugate to $\mod R'$ in the centralizer of
the associated flow of $T$.
\end{theorem}
We note that in the case $T$ is of type $II$, the second condition in the
theorem is just mod\,$R=\text{mod\,} R'$.
It is always satisfied when $T$ is of
type $II_1$.
\subsection{Cocycles of dynamical systems. Weak equivalence of cocycles
}\label{cocycles}
Let $G$ be a locally compact Polish group and $\lambda_G$ a left Haar
measure on $G$. A Borel map $\varphi:X\to G$ is called a {\it cocycle} of $T$.
Two cocycles $\varphi$ and $\varphi'$ are
{\it cohomologous} if there is a Borel map $b:X\to G$ such that
$$
\varphi'(x)=b(Tx)^{-1}\varphi(x)b(x)
$$
for a.a. $x\in X$. A cocycle cohomologous to the trivial one is called a {\it
coboundary}. Given a dense subgroup $G'\subset G$, every cocycle is
cohomologous to a cocycle with values in $G'$ \cite{GS}. Each cocycle
$\varphi$ extends to a (unique) map $\alpha_\varphi:\mathcal R_T\to G$ such that
$\alpha_\varphi(Tx,x)=\varphi(x)$ for a.a. $x$ and
$\alpha_\varphi(x,y)\alpha_\varphi(y,z)=\alpha_\varphi(x,z)$ for a.a.
$(x,y),(y,z)\in\mathcal R_T$. $\alpha_\varphi$ is called the {\it cocycle of $\mathcal
R_T$ generated by $\varphi$}. Moreover, $\varphi$ and $\varphi'$ are cohomologous
via $b$ as above if and only if $\alpha_\varphi$ and $\alpha_{\varphi'}$ are {\it
cohomologous} via $b$, i.e., $
\alpha_\varphi(x,y)=b(x)^{-1}\alpha_{\varphi'}(x,y)b(y) $ for $\mu_{\mathcal
R_T}$-a.a. $(x,y)\in \mathcal R_T$.
The following notion was introduced by Golodets and Sinelshchikov \cite{GS3}, \cite{GS}: two cocycles $\varphi$ and $\varphi'$ are {\it
weakly equivalent} if there is a transformation $R\in N[T]$ such that the
cocycles $\alpha_\varphi$ and $\alpha_{\varphi'}\circ (R\times R)$ of $\mathcal R_T$ are
cohomologous.
Let $\mathcal M(X,G)$ denote the set of Borel maps from $X$ to
$G$. It is a Polish group when endowed with the topology of convergence in
measure. Since $T$ is ergodic, it is easy to deduce from Rokhlin's lemma
that the cohomology class of any cocycle is dense in $\mathcal M(X,G)$. Given
$\varphi\in\mathcal M(X,G)$, we define the $\varphi$-{\it skew product extension}
$T_\varphi$ of $T$ acting on $(X\times G,\mu\times\lambda_G)$ by setting
$T_\varphi(x,g):=(Tx,\varphi(x)g)$. Thus the Maharam extension is (isomorphic to) the
skew product extension given by the Radon-Nikodym cocycle $\omega_1$ with values in $\mathbb R_+^*$.
We now specify some basic classes of
cocycles \cite{Sc}, \cite{BG2}, \cite{GS}, \cite{Dan2}:
\begin{enumerate}
\item[(i)] $\varphi$ is called {\it transient} if $T_\varphi$ is totally dissipative,
\item[(ii)] $\varphi$ is called {\it recurrent} if $T_\varphi$ is conservative
(equivalently, $\varphi$ is not transient),
\item[(iii)] $\varphi$ {\it has dense range in $G$} if $T_\varphi$ is
ergodic.
\item[(iv)] $\varphi$ is called {\it regular} if $\varphi$ cobounds with dense
range into a closed subgroup $H$ of $G$ (then $H$ is defined up to
conjugacy).
\end{enumerate}
These properties are invariant under the cohomology and the weak
equivalence. The Radon-Nikodym cocycle $\omega_1$ is a
coboundary if and only if $T$ is of type $II$. It is regular if and only if
$T$ is of type $II$ or $III_\lambda$, $0<\lambda\le 1$. It has dense range
(in the multiplicative group $\mathbb R_+^*$) if and only if $T$ is of type
$III_1$. Notice that $\omega_1$
is never transient (since $T$ is conservative).
In case $G$ is Abelian, Schmidt introduced in \cite{Sc5} an invariant $R(\varphi):=\{g\in G\mid \varphi-g\text{\ is recurrent}\}$. He showed in particular that
\begin{itemize}
\item[(i)]
$R(\varphi)$ is a cohomology invariant,
\item[(ii)]
$R(\varphi)$ is a Borel set in $G$,
\item[(iii)] $R(\log\omega_1)=\{0\}$ for each aperiodic conservative $T$,
\item[(iv)] there are cocycles $\varphi$ such that $R(\varphi)$ and $G\setminus R(\varphi)$ are dense in $G$,
\item[(v)] if $\mu(X)=1$, $\mu\circ T=\mu$ and $\varphi:X\to\mathbb R$ is integrable then $R(\varphi)=\{\int\varphi\, d\mu\}$.
\end{itemize}
We note that (v) follows from Atkinson's theorem \cite{Atk}. A nonsingular version of this theorem was established in \cite{Ull}: if $T$ is ergodic and $\mu$-nonsingular and $f\in L^1(\mu)$ then $$
\liminf_{n\to\infty}\bigg|\sum_{j=0}^{n-1}f(T^jx)\omega_j(x)\bigg|=0\text{ \ for a.a. }x
$$
if and only if
$\int f\,d\mu=0$.
Since $T_\varphi$ commutes with the action of $G$ on $X\times G$ by inverted
right translations along the second coordinate, this action induces an ergodic $G$-action
$W_\varphi=(W_\varphi(g))_{g\in G}$ on the space $(Z,\nu)$ of
$T_\varphi$-ergodic components. It is called the {\it Mackey range (or Poincar{\'e} flow)} of $\varphi$
\cite{Mac}, \cite{FM}, \cite{Sc}, \cite{Zim3}. We note that $\varphi$ is regular (and cobounds with
dense range into $H\subset G$) if and only if $W_\varphi$ is transitive (and
$H$ is the stabilizer of a point $z\in Z$, i.e., $H=\{g\in G\mid
W_\varphi(g)z=z\}$). Hence every cocycle taking values in a compact
group is regular.
It is often useful to consider the {\it double cocycle} $\varphi_0:=\varphi\times
\omega_1$ instead of $\varphi$. It takes values in the group $G\times\mathbb
R^*_+$. Since $T_{\varphi_0}$ is exactly the Maharam extension of $T_\varphi$, it
follows from \cite{Mah} that $\varphi_0$ is transient or recurrent if and only
if $\varphi$ is transient or recurrent respectively.
\begin{theorem}[Orbit classification of cocycles \cite{GS}]\label{T:2.9} Let
$\varphi,\varphi':X\to G$ be two recurrent cocycles of an ergodic transformation
$T$. They are weakly equivalent if and only if their Mackey ranges
$W_{\varphi_0}$ and $W_{\varphi_0'}$ are isomorphic.
\end{theorem}
Another proof of this theorem was presented in \cite{Fed}.
\begin{theorem}\label{T:2.10} Let $T$ be an ergodic nonsingular transformation.
Then there is a cocycle of $T$ with dense range in $G$ if and only if $G$ is
amenable.
\end{theorem}
It follows that if $G$ is amenable then the subset of cocycles of $T$ with dense
range in $G$ is a dense $G_\delta$ in $\mathcal M(X,G)$ (just adapt the
argument following Example~\ref{IIIzero}). The `only if' part of Theorem~\ref{T:2.10} was
established in \cite{Zim}. The `if' part was considered by many authors in
particular cases: $G$ is compact \cite{Zim2}, $G$ is solvable or amenable
almost connected \cite{GS2}, etc. The
general case was proved in \cite{GS3} and \cite{Her} (see also a recent
treatment in \cite{AaW}).
We note that the ``if'' part in Theorem~\ref{T:2.10} can be refined in the case where $G$ is a compactly generated Abelian group.
\begin{theorem}[\cite{Dan21}]
Let $T$ be an ergodic nonsingular transformation.
If $G$ is a compactly generated [FIA]-group then there is a bounded ergodic cocycle $\varphi$ of $T$ with values in $G$.
\end{theorem}
We recall that a locally compact Polish group $G$ is called a {\it [FIA]-group} if
the group of inner automorphisms of $G$ is relatively compact in the group of all automorphisms of $G$ furnished with the natural topology \cite{GrMo}.
Of course, each Abelian group is [FIA].
The cocycle $\varphi$ is {\it bounded} if there is a compact subset $K$ in $G$ such that $\varphi$ takes values in $K$.
Theorem~\ref{T:2.5} is a particular case of the following result.
\begin{theorem}[\cite{GS4}, \cite{Fed}, \cite{AEG}]\label{T:2.11} Let $G$ be amenable.
Let $V$ be an ergodic nonsingular action of $G\times\mathbb R_+^*$.
Then there is an ergodic nonsingular transformation $T$ and a recurrent cocycle
$\varphi$ of $T$ with values in $G$ such that $V$ is isomorphic to the Mackey
range of the double cocycle $\varphi_0$.
\end{theorem}
Given a cocycle $\varphi\in\mathcal M(X,G)$ of $T$, we say that a transformation
$R\in N[T]$ is {\it compatible with} $\varphi$ if the cocycles $\alpha_\varphi$
and $\alpha_\varphi\circ(R\times R)$ of $\mathcal R_T$ are cohomologous. Denote by
$D(T,\varphi)$ the group of all such $R$. It has a natural Polish topology
which is stronger than $\tau$ \cite{DaG}. Since $[T]$ is a normal subgroup
in $D(T,\varphi)$, one can consider the outer conjugacy equivalence relation
inside $D(T,\varphi)$. It is called {\it $\varphi$-outer conjugacy}. Suppose that
$G$ is Abelian. Then an analogue of Theorem~\ref{T:2.8} for the $\varphi$-outer
conjugacy is established in \cite{DaG}.
Also, the cocycles $\varphi$ with $D(T,\varphi)=N[T]$ are
described there.
\subsection{ITPFI transformations and AT-flows}\label{AT-f}
A nonsingular transformation $T$ is called {\it ITPFI\footnote{This
abbreviates `infinite tensor product of factors of type $I$'; the term comes from the
theory of von Neumann algebras.}} if it is orbit equivalent to a nonsingular product odometer
(associated to a sequence $(m_n,\nu_n)_{n=1}^\infty$, see \S\,\ref{SS:odometerdef}). If the sequence
$m_n$ can be chosen bounded then $T$ is called {\it ITPFI of bounded type}. If $m_n=2$ for all $n$
then $T$ is called {\it ITPFI$_2$}. By \cite{GiS2}, every ITPFI-transformation of bounded type is
ITPFI$_2$.
In view of Theorem~\ref{T:2.4} and Example~\ref{type III},
every ergodic transformation of type $II$ or $III_\lambda$ with $0<\lambda\le 1$
is ITPFI$_2$.
A remarkable characterization of ITPFI transformations in terms
of their associated flows was obtained by Connes and Woods \cite{CoW}. We
first single out a class of ergodic flows. A nonsingular flow
$V=(V_t)_{t\in\mathbb R}$ on a space $(\Omega,\nu)$ is called {\it approximately
transitive (AT)} if given $\varepsilon>0$ and $f_1,\dots,f_n\in L^1_+(\Omega,\nu)$,
there exist $f\in L^1_+(\Omega,\nu)$ and $\lambda_1,\dots,\lambda_n\in
L^1_+(\mathbb R,dt)$ such that
$$
\left\|f_j-\int_{\mathbb R} f\circ V_t\frac{d\nu\circ
V_t}{d\nu}\lambda_j(t)\,dt\right\|_1<\varepsilon
$$
for all $1\le j\le n$. A flow built under a constant ceiling function with
a funny rank-one \cite{Fer} probability preserving base transformation is AT \cite{CoW}. In
particular, each ergodic finite measure-preserving flow with a pure point
spectrum is AT.
\begin{theorem}[\cite{CoW}] \label{2.12}
An ergodic nonsingular transformation
is ITPFI if and only if its associated flow is AT.
\end{theorem}
The original proof of this theorem was given in the framework of von
Neumann algebras theory.
A simpler, purely measure theoretical proof was
given later in \cite{Haw} (the `only if' part) and \cite{Ham2} (the `if'
part).
It follows from Theorem~\ref{2.12} that every ergodic flow with pure point
spectrum is the associated flow of an ITPFI transformation.
This was refined recently
in \cite{BeVa2}: every ergodic flow with a pure point
spectrum is the associated flow of an ITPFI$_2$ transformation.
This fact was proved earlier in \cite{HO2} only for
flows whose spectrum is $\theta\Gamma$, where $\Gamma$ is a subgroup of $\mathbb Q$
and $\theta\in\mathbb R$.
The existence of ITPFI transformations which are not of bounded type was shown in
\cite{Kr5}.
Krieger introduced an invariant for the orbit equivalence, called {\it property A},
and showed that each product odometer satisfies property A.
He also constructed an ergodic nonsingular transformation which does not satisfy this property \cite{Kr5}.
Hence this transformation is not ITPFI.
Though not every ergodic transformation is orbit equivalent to a nonsingular product odometer, a ``weaker'' form of this statement holds.
\begin{theorem}[\cite{DoH1}]\label{2.13} Each ergodic nonsingular transformation
is orbit equivalent to a Markov odometer (see \S\ref{MarkOd}).
\end{theorem}
In \cite{DoH2}, an explicit example of a non-ITPFI ergodic Markov odometer (not satisfying property A) was
constructed.
Later Munteanu in \cite{Mu12} exhibited an ergodic non-ITPFI transformation satisfying property A.
In \cite{JM}, an explicit example was constructed of a non-AT nonsingular flow $W$ built under a function and over a nonsingular product odometer.
Hence every nonsingular ergodic transformation whose associated flow is isomorphic to $W$
is non-ITPFI.
\section{Mixing notions and multiple recurrence}\label{mixing}
The study of mixing and multiple recurrence is a central topic in classical ergodic theory \cite{CFS}, \cite{Fu}. Unfortunately, these notions are considerably less `smooth' in the world of nonsingular systems, where the very concepts of mixing and multiple recurrence are ambiguous and not yet well understood. Below we discuss nonsingular systems possessing a surprising diversity of such properties that seem equivalent but are in fact different.
\subsection{Weak mixing}\label{S:weakmixing}
Let $T$ be an ergodic conservative nonsingular transformation. A number
$\lambda\in\mathbb C$ is an {\it $ L^\infty $-eigenvalue} for $T$ if there
exists a nonzero $f\in L^\infty$ so that $f\circ T=\lambda f$ a.e. It
follows that $|\lambda|=1$ and $f$ has constant modulus, which we assume to
be $1$. Denote by $e(T)$ the set of all $L^\infty$-eigenvalues of $T$. $T$
is said to be {\it weakly mixing} if $e(T)=\{1\}$. We refer to \cite[Theorem~2.7.1]{Aa} for a proof of the following ergodic
multiplier theorem of Keane: given an
multiplier theorem: given an
ergodic probability preserving transformation $S$, the product
transformation $T\times S$ is ergodic if and only if $\sigma_S(e(T))=0$,
where $\sigma_S$ denotes the measure of (reduced) maximal spectral type of
the unitary $U_S$ (see (\ref{koopman oper})). It follows that $T$ is weakly
mixing if and only if $T\times S$ is ergodic for every ergodic probability
preserving $S$.
While in the finite measure-preserving case this implies that $T\times T$ is ergodic, it was shown in \cite{ALW} that there exists a weakly mixing nonsingular $T$ with $T\times T$ not conservative, hence not ergodic. In \cite{AFS}, a weakly mixing $T$ was constructed with $T\times T$ conservative but not ergodic. A nonsingular transformation $T$ is said to be {\it weakly doubly ergodic} (originally called {\it doubly ergodic} in \cite{BFMS01}) if for all sets
of positive measure $A$ and $B$ there exists an integer $n>0$ such that $\mu(A\cap T^{-n}A)>0$ and $\mu(A\cap T^{-n}B)>0$.
Furstenberg \cite{Fu} showed that for finite measure-preserving transformations weak double ergodicity is equivalent to weak mixing.
In \cite{AFS,BFMS01} it is shown that for nonsingular transformations weak mixing does not imply weak double ergodicity and weak double ergodicity does not imply that $T\times T$ is ergodic.
$T$ is said to have {\it ergodic index $k$} if the Cartesian product of $k$ copies of $T$ is ergodic but the product of $k+1$ copies of $T$ is not ergodic. If all finite Cartesian products of $T$ are ergodic then $T$ is said to have {\it infinite ergodic index}.
In a similar way one can define the {\it conservative index} of $T$.
Kakutani and Parry \cite{KP} constructed, for each $k\in\mathbb N\cup\{\infty\}$, an infinite Markov shift of ergodic index $k$.
We note that for each infinite Markov shift, the ergodic index coincides with the conservative index.
Infinite measure preserving rank-one transformations of an arbitrary ergodic index $k$ and infinite conservative index were constructed in \cite{AS15} and \cite{Dan2016}.
A stronger property is
{\it power weak mixing}, which requires that for all nonzero integers $k_1,\ldots,k_r$ the product
$T^{k_1}\times\cdots\times T^{k_r}$ is ergodic \cite{DGMS}. The following examples were constructed in \cite{afs01}, \cite{d01}, \cite{d04}, \cite{DGMS}:
\begin{itemize}
\item[(i)] power weakly mixing rank-one transformations,
\item[(ii)] non-power weakly mixing rank-one transformations with infinite ergodic index,
\item[(iii)] non-power weakly mixing rank-one transformations with infinite ergodic index and such that $T^{k_1}\times\cdots\times T^{k_r}$ are all conservative, $k_1,\ldots,k_r\in\mathbb Z$,
\end{itemize}
of types $II_\infty$ and $III$ (and various subtypes of $III$, see Section~\ref{orbits}).
Thus we have the following scale of properties (equivalent to weak mixing in the probability preserving case), where every next property is strictly stronger than the previous ones:
$$
\begin{aligned}
\text{
$T$ is weakly mixing } &\text{$\Leftarrow$ $T$ is weakly doubly ergodic $\Leftarrow$
$T\times T$ is ergodic} \\&\text{$\Leftarrow$ $T\times T\times T$ is ergodic }\Leftarrow\cdots\\ &\text{$\Leftarrow$ $T$ has infinite ergodic index $\Leftarrow$ $ T$ is power weakly mixing}.
\end{aligned}
$$
There is a rank-one weakly doubly ergodic $T$ such that $T\times T$ is
nonconservative \cite{BFMS01},
and there is a rank-one weakly doubly ergodic $T$ such that $T\times T$ is
conservative but not ergodic \cite{LS}.
We also mention an example of a power weakly mixing transformation of type $II_\infty$ which embeds into a rank-one flow \cite{DanSol}.
This result was sharpened in \cite{DanPa}: there is an infinite measure preserving rank-one flow $(R_t)_{t\in\mathbb R}$ such that for each $t\ne 0$, the transformation $R_t$ has infinite ergodic index.
Several of these notions have been studied in the context of nonsingular actions of locally compact groups by Glasner and Weiss \cite{GlWe16}; we mention one condition that has not yet been discussed, though we do so only in the context of transformations. A nonsingular transformation $T$ on a probability space is said to be
{\it ergodic with isometric coefficients} if every factor map
onto an isometry of
a (separable) metric space
is constant a.e. Glasner and Weiss show that if $T\times T$ is ergodic (i.e., $T$ is doubly ergodic), then $T$ is ergodic with isometric coefficients, and that if $T$ is ergodic with isometric coefficients, then it is weakly mixing. In \cite{LS} it is shown that if $T$ is weakly doubly ergodic, then it is
ergodic with isometric coefficients. In \cite{GlWe16} there is an example of a $T$ that is
ergodic with isometric coefficients but $T\times T$ is not conservative, hence not ergodic.
Further conditions related to weak mixing (in the case
of infinite measure-preserving transformations) are discussed in the survey \cite{AdSi18}.
\subsection{Rational ergodicity and rational weak mixing}
Let $T$ be a conservative, ergodic measure-preserving transformation of a $\sigma$-finite measure space
$(X,\mathcal B, \mu)$.
For a function $f:X\to\mathbb R$, let $S_n(f)=\sum_{k=0}^{n-1}f\circ T^k.$
$T$ is called {\it rationally ergodic} \cite{Aa77} if there is a subset $F\in\mathcal B$, $0<\mu(F)<\infty$, satisfying a {\it R\'enyi inequality},
i.e., there exists a constant $M>0$ such that for all $n\geq 1$,
\[\int_F(S_n(\mathbb I_F))^2\ d\mu\leq M\left(\int_F S_n(\mathbb I_F)\ d\mu\right)^2.\]
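For orientation (an elementary observation rather than a result from \cite{Aa77}): if $\mu(X)=1$, $T$ preserves $\mu$ and we take $F=X$, then $S_n(\mathbb I_X)\equiv n$, so
\[\int_X(S_n(\mathbb I_X))^2\ d\mu=n^2=\left(\int_X S_n(\mathbb I_X)\ d\mu\right)^2,\]
and the R\'enyi inequality holds with $M=1$; thus every ergodic probability preserving transformation is (trivially) rationally ergodic.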
We now set
\[u_k(F):=\frac{\mu(F\cap T^{-k}F)}{\mu(F)^2}\ \text{ and }\ a_n(F):=\sum_{k=0}^{n-1}u_k(F).\]
\begin{theorem} [\cite{Aa77}]\label{rat} If $T$ is rationally ergodic and $F$ satisfies the R\'enyi inequality then for all measurable sets $A$ and $B$ contained in $F$,
$$
\lim_{n\to\infty}\frac{1}{a_n(F)}\sum_{k=0}^{n-1}\mu(A\cap T^{-k}B)=\mu(A)\mu(B).
$$
\end{theorem}
An ergodic conservative transformation $T$ is {\it rationally weakly mixing} \cite{Aa13} if there exists a measurable set $F$ of positive finite measure such that for all measurable sets $A$ and $B$ contained in $F$ we have
\begin{align}\label{raterg}
\lim_{n\to\infty}\frac{1}{a_n(F)}\sum_{k=0}^{n-1}|\mu(A\cap T^{-k}B)-\mu(A)\mu(B)u_k(F)|=0,\end{align}
where
$u_k(F)$ and $a_n(F)$ are defined as above.
When $\mu(X)=1$ and we let $F=X$, then $a_n(F)=n$ and \eqref{raterg} becomes the
condition equivalent to the weak mixing property for a finite measure-preserving transformation. In infinite measure, however, the rational weak mixing condition is not equivalent to weak mixing as we shall see. If in \eqref{raterg} we drop the absolute values then this condition defines the notion of {\it weak rational ergodicity} \cite{Aa80}.
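To spell this out (a direct substitution, not a new claim): with $\mu(X)=1$ and $F=X$ one has $u_k(X)=\mu(X\cap T^{-k}X)=1$ and $a_n(X)=n$, so \eqref{raterg} reduces to
\[\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\big|\mu(A\cap T^{-k}B)-\mu(A)\mu(B)\big|=0,\]
which is the classical averaging characterization of weak mixing for probability preserving transformations.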
Theorem~\ref{rat} thus shows that rational ergodicity implies weak rational ergodicity.
If in \eqref{raterg} the sequence $(n)$ is replaced by a subsequence $(n_i)$ we say that $T$
is {\it subsequence rationally weakly mixing}. {\it Subsequence weak rational ergodicity} is defined in a similar way.
The transformation $T$ is {\it boundedly rationally ergodic} \cite{Aa80} if
\[\sup_{n\geq 1}{\bigg\|}\frac{S_n(\mathbb {I}_F)}{a_n(F)}{\bigg\|}_\infty<\infty.\]
The notions of {\it subsequence boundedly rationally ergodic} and {\it subsequence rationally ergodic} are defined when the
sequence $(n)$ is replaced by a subsequence $(n_i)$. It can be seen from the definition that bounded rational ergodicity implies
rational ergodicity (and similarly for the subsequence versions). Aaronson \cite{Aa80} showed that rational ergodicity does not imply bounded rational ergodicity, and more recently it was shown by Adams and Silva \cite{AS16} that weak rational ergodicity does not imply rational ergodicity.
The following theorem was proved in \cite{Aa80} for the weakly rationally ergodic transformations.
The same argument works in the more general case.
\begin{theorem}\label{Squa} Each subsequence weakly rationally ergodic transformation $T$ of $(X,\mu)$ is non-squashable, i.e.,
each nonsingular transformation commuting with $T$ preserves $\mu$.
\end{theorem}
There are several examples of rationally ergodic transformations which are infinite Markov shifts, see \cite{Aa}.
More recently, it was shown in \cite{DGPS, Bozgan}
that rank-one (infinite measure-preserving) transformations are subsequence boundedly rationally ergodic. The first version of \cite{Bozgan} has a proof that the
rank-one transformations are subsequence weakly rationally ergodic; a simpler proof was found in \cite{D16}, where this property is also established for the class of funny rank one transformations and the class of ergodic transformations of {\it balanced finite rank}.
(A transformation is called of balanced finite rank if
it is of finite rank and the
bases of the Rokhlin towers on the $n$-th step of the cutting-and-stacking inductive construction have asymptotically comparable measures as $n\to\infty$.)
Therefore all these transformations are non-squashable in view of Theorem~\ref{Squa}.
The
rank-one transformations for which the sequence of cuts $(r_n)_{n=1}^\infty$ is bounded are boundedly rationally ergodic \cite{DGPS, Bozgan}; a
stronger condition was shown in \cite{AKW}.
As for the examples of rationally weakly mixing transformations,
Aaronson \cite{Aa13} shows that Markov shifts with certain conditions on their associated renewal sequences are rationally weakly mixing, and Dai et al \cite{DGPS} give rank-one examples.
Subsequence rational weak mixing and rational weak mixing for products of powers have been studied in
\cite{Aa13} and \cite{AdRigid}.
We have the following implications for rational weak mixing.
\begin{theorem} [\cite{Aa13}] If a transformation is subsequence rationally weakly mixing, then it is weakly mixing.
\end{theorem}
\begin {theorem} [\cite{Bozgan}] If a transformation is rationally weakly mixing, then it is weakly doubly ergodic.
\end{theorem}
It is an open problem whether weak double ergodicity implies rational weak mixing.
Aaronson \cite{Aa13} asked if weak rational ergodicity plus weak mixing imply
rational weak mixing.
This was answered in the negative in \cite{DGPS}, where an example was constructed of a weakly mixing rationally ergodic rank-one transformation that is not rationally weakly mixing.
We also mention an example of a weakly mixing, rationally ergodic and Koopman mixing (or zero type, see \S\ref{Mix} for the definition) transformation that is not subsequence rationally weakly mixing \cite{A16}.
The set of transformations that are subsequence rationally weakly mixing is residual \cite{Aa13}, while the set
of rationally weakly mixing transformations is meagre \cite{Aa13}.
Since the set of power weakly mixing rank-one transformations is residual,
and the rank-one transformations are subsequence boundedly rationally ergodic, there exist rank one transformations
that are power weakly mixing and subsequence boundedly rationally ergodic but not rationally weakly mixing.
\subsection{Mixing, zero type}\label{Mix}
We now consider several attempts to define (strong) mixing for nonsingular maps.
Probably the first notion of mixing for infinite measure preserving systems was proposed by Hopf in \cite{Ho37}. The idea was to show an asymptotic rate for the sequence $\mu(A\cap T^{-n}B)$ for a large class of finite measure sets $A, B$.
More precisely, a transformation $T$ is {\it mixing for a ring $\mathcal R$} (called now {\it Krickeberg mixing}),
where $\mathcal R$ is a ring of sets of finite measure that is invariant under $T$ and generates the entire $\sigma$-algebra,
if there is a sequence $(\rho_n)_{n=1}^\infty$ such that for all $A,B\in\mathcal R$ we have
\[\lim_{n\to\infty}\rho_n\mu(A\cap T^{-n}B)=\mu(A)\mu(B).\]
Hopf proved such a property for an infinite measure-preserving transformation defined on $\mathbb R^+\times [0,1]$ that is now called an infinite random walk, with $\mathcal R$ being the ring of Riemann measurable subsets.
If $\mathcal R$ is the ring of all subsets of finite measure then there are no
Krickeberg $\mathcal R$-mixing transformations
because of the existence of weakly wandering sets.
We note that the above (purely measure theoretical) definition of $\mathcal R$-mixing is due to Friedman \cite{F76}, who
extended
Krickeberg's definition \cite{Kr67} given for continuous transformations of topological spaces endowed with a measure.
In recent years there have been several works establishing this version of mixing and computing mixing rates for various transformations. Melbourne and Terhesiu \cite{MT12} have verified mixing for a large class of maps including AFN maps with indifferent fixed points; these methods were extended to invertible transformations by Melbourne \cite{M15} and to additional maps by Gou\"ezel \cite{Go11}. Recently, Dolgopyat and N\'{a}ndori \cite{DN19} have shown Krickeberg mixing for a class of special flows; other recent work appeared in \cite{BMT19}.
Another approach to mixing was proposed by Krengel and Sucheston \cite{KS69}
for nonsingular maps.
Given a sequence of measurable sets $\{A_n\}$ let $\sigma_k(\{A_n\})$ denote the $\sigma$-algebra generated by $A_k, A_{k+1},\ldots$. A sequence $\{A_n\}$ is said to be {\it remotely trivial}
if $\bigcap_{k=0}^\infty \sigma_k(\{A_n\}) = \{\emptyset, X\}$ mod $\mu$, and it is
{\it semi-remotely trivial} if every subsequence contains a further subsequence that is remotely trivial.
A nonsingular transformation $T$
of a $\sigma$-finite measure space is called {\it mixing} if for every set $A$ of finite measure the sequence
$\{T^{-n}A\}$ is semi-remotely trivial, and {\it completely mixing} if $\{T^{-n}A\}$ is semi-remotely trivial
for all measurable sets $A$.
Krengel and Sucheston show that $T$ is
completely mixing if and only if it is type $II_1$ and mixing for the equivalent finite invariant measure.
Thus there are no type $III$ and $II_\infty$ completely mixing nonsingular transformations on probability spaces. We note that this definition of mixing in infinite measure spaces depends on the choice of measure inside the equivalence class (but it is independent if we replace the measure by an equivalent measure with the same collection of sets of finite measure).
Hajian and Kakutani showed \cite{HK} that
an ergodic infinite measure-preserving transformation $T$ is either of {\it zero type}:
$\lim_{n\to\infty}\mu(T^{-n}A\cap A)=0$ for all sets $A$ of finite measure, or of
{\it positive type}: $\limsup_{n\to\infty}\mu(T^{-n}A\cap A)>0$ for all subsets $A$ of finite positive measure.
It turns out that $T$ is mixing if and only if it is of zero type \cite{KS69}.
We note that in infinite measure, mixing implies mixing of all orders, i.e., if a measure preserving $T$ is of zero type
then $\mu( T^{n_1}A_1\cap\cdots\cap T^{n_k}A_k)\to 0$ for each $k$ and all subsets $A_1,\dots,A_k$ of finite measure whenever $|n_i-n_j|\to\infty$ if $i\ne j$.
For $0\le\alpha\le 1$ Kakutani suggested a related definition of $\alpha$-type: an infinite measure preserving transformation is {\it of $\alpha$-type} if $\limsup_{n\to\infty} \mu(A\cap T^nA)=\alpha\mu(A)$ for every subset $A$ of finite measure. In \cite{OH}
examples of ergodic transformations of each $\alpha$-type, as well as a transformation that is not of any $\alpha$-type, were constructed.
It was shown in \cite{Dan2016} and \cite{loh} that for each pair $k\le n$, there exists a mixing rank-one infinite measure preserving transformation of ergodic index $k$
and conservative index $n$.
Rigid infinite measure preserving rank-one transformations of arbitrary ergodic index were constructed in \cite{Dan2016}.
Of course, rigidity implies infinite conservative index.
We now isolate an important class of concrete rank-one transformations and examine mixing properties within this class.
Let $T$ be a rank-one transformation associated with a sequence $(r_n,w_n,s_n)_{n=1}^\infty$.
If $w_n(0)=w_n(1)=\cdots= w_n(r_n-1)$ and $s_n(j)=z_n+j$ for $j=0,\dots,r_{n}-1$
then $T$ is called {\it a high staircase} (also called a tower staircase in \cite{BFMS01}).
It was shown in \cite{BFMS01} that each high staircase is weakly doubly ergodic and hence weakly mixing.
However there exist high staircases whose Cartesian square is not ergodic \cite{BFMS01}.
As for the mixing of the high staircases, the following theorem was proved in \cite{DanR2}.
It is an infinite analogue of the Adams solution \cite{Ada} of the Smorodinsky conjecture.
\begin{theorem} If $\lim_{n\to\infty}\frac{r_n^2}{r_1\cdots r_{n-1}}=0$ and $\sum_{n=1}^\infty\frac{z_n}{h_n}=\infty$ then the associated high staircase is infinite measure preserving and mixing.
\end{theorem}
Mixing high staircases which are power weakly mixing
were constructed in \cite{DanR2}.
We note that mixing (zero type) does not imply either ergodicity or conservativity in the category of infinite measure-preserving transformations.
Indeed, a translation on $\mathbb R$ endowed with the Lebesgue measure
is non-ergodic, totally dissipative but of zero type.
It may seem that mixing plus ergodicity together are stronger than any kind of nonsingular weak mixing considered above.
However, it is not the case: if $T$ is a weakly mixing infinite measure-preserving transformation of zero type and $S$ is an ergodic probability preserving transformation then $T\times S$ is ergodic and of zero type. On the other hand, the $L^\infty$-spectrum $e(T\times S)$ is nontrivial, i.e., $T\times S$ is not weakly mixing, whenever $S$ is not weakly mixing. We also note that
there exist rank-one infinite measure-preserving transformations $T$ of zero type such that $T\times T$ is not conservative (hence not ergodic) \cite{AFS}.
In contrast to that,
if $T$ is of positive type then all of its finite Cartesian products are conservative \cite{AN00}. Another result that suggests that there is no good definition of mixing in the infinite measure-preserving case was proved in
\cite{james}. It is shown there that while the mixing finite measure-preserving transformations are
measurably sensitive, there exists no infinite measure-preserving system that
is measurably sensitive. (Measurable sensitivity is a measurable version of the strong sensitive dependence on initial conditions---a concept from topological theory of chaos.)
The Krengel-Sucheston concept of mixing (or the Hajian-Kakutani zero type) considered above for infinite measure-preserving systems extends naturally to nonsingular transformations $T$
of $(X,\mathcal B,\mu)$ without finite absolutely continuous invariant measure in the following way: we say that
$T$ is {\it Koopman mixing} (or {\it of zero type}) if the maximal spectral type of the Koopman operator $U_T$ generated by $T$ (see (\ref{koopman oper})) is a Rajchman measure, i.e.,
$\int_Xf\cdot U_T^nf\,d\mu\to 0$ for each $f\in L^2(X,\mu)$.
It is easy to see that this definition of mixing will not be affected if we replace $\mu$ with an equivalent measure.
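To make the connection with the Hajian-Kakutani notion explicit (a one-line computation): if $T$ preserves $\mu$ then $U_Tf=f\circ T$, so for every set $A$ of finite measure
\[\int_X\mathbb I_A\cdot U_T^n\mathbb I_A\,d\mu=\mu(A\cap T^{-n}A),\]
and the Rajchman condition, restricted to indicators of sets of finite measure, is precisely the zero type property.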
Examples of Koopman mixing rank-one transformations of type $III$ were constructed in \cite{d01}.
More recently, Lenci \cite{Le1} introduced a new notion of mixing for infinite measure-preserving maps
that is motivated by statistical mechanics and uses global observables. The definition is with respect
to a collection of sets, global observables and local observables. We choose a family $\mathcal V$ of measurable sets of finite measure so that it contains sets $V_1\subset V_2\subset \cdots $ such that
$\bigcup_i V_i =X$. We also have a subspace $\mathcal G$ of $L^\infty$ functions (called global observables),
and a subspace $\mathcal L$ of $L^1$ functions (called local observables). There is also a condition on the growth rate of the measure of $\mathcal V$-elements under iteration by $T$. Then Lenci defines an {\it infinite volume average} for elements $F$ of $\mathcal G$ by
\[\bar{\mu}(F)=\lim_{V\to X}\frac{1}{\mu(V)}\int_V F \ d\mu.\]
By this limit we mean that for every neighborhood of $\bar{\mu}(F)$, there is a number $M>0$ so that
when $\mu(V)>M$ for a set $V$ in $\mathcal V$, then $\frac{1}{\mu(V)}\int_VF\,d\mu$ is in the neighborhood.
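A toy illustration (the choices of $X$, $\mathcal V$ and $F$ below are made up purely for concreteness and are not taken from \cite{Le1}): let $X=\mathbb R$ with Lebesgue measure $\mu$, let $\mathcal V=\{[-n,n]\mid n\in\mathbb N\}$ and $F(x)=\sin^2 x$; then
\[\bar{\mu}(F)=\lim_{n\to\infty}\frac{1}{2n}\int_{-n}^{n}\sin^2 x\ dx=\frac12.\]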
He shows that under the above conditions,
$\bar{\mu}(F\circ T^n)=\bar{\mu}(F)$.
Then he defines several notions of what he calls infinite volume mixing \cite{Le1}; we mention three here.
The transformation $T$ is said to be {\it global-local mixing-1} if for all $F$ in $\mathcal G$ and all $g$ in $\mathcal L$ with $\int g\ d\mu=0$, we have
\[\lim_{n\to\infty} \int (F\circ T^n)g\ d\mu= 0.\] The transformation $T$ is said to be {\it global-local mixing-2} if for all $F$ in $\mathcal G$ and all $g$ in $\mathcal L$ we have
\[\lim_{n\to\infty} \int (F\circ T^n)g\ d\mu= \bar{\mu}(F) \int g\ d\mu.\]
The transformation is said to be
{\it global-global mixing} if
for all $F, G$ in $\mathcal G$ we have
\[\lim_{n\to\infty} \bar{\mu}\big((F\circ T^n)\,G\big)= \bar{\mu}(F)\, \bar{\mu}(G).\]
Lenci proves in \cite{Le3} that if $T$ is an infinite measure-preserving K-automorphism then
$T$ is global-local mixing-1 for any choice
of $\mathcal V$ satisfying the measure growth condition, for $\mathcal L=L^1$, and for $\mathcal G$
taken to be the closure in $L^1$ of $T^n\mathcal F $, where $\mathcal F$ is as in the definition
of the K-automorphism (see Subsection~\ref{Kprop}).
Infinite mixing has been shown for other examples \cite{Le1}, in particular
for uniformly expanding maps of the interval \cite{Le17}, and for one-dimensional maps with
an indifferent fixed point \cite{BGL18}.
\subsection{$K$-property}\label{Kprop}
A nonsingular transformation $T$ of $(X,\mathcal B,\mu)$ is
called {\it K-automorp\-hism} \cite{ST2} if there exists a
sub-$\sigma$-algebra $\mathcal F\subset\mathcal B$ such that $T^{-1}\mathcal
F\subset\mathcal F$, $\bigcap_{k\ge 0}T^{-k}\mathcal F=\{\emptyset,X\}$,
$\bigvee_{k=0}^{+\infty}T^k\mathcal F=\mathcal B$ and the Radon-Nikodym
derivative $\omega_1^\mu$ is $\mathcal F$-measu\-rable (see also
\cite{wP65} for the case when $T$ is of type $II_\infty$; the authors in \cite{ST2} required $T$ to be conservative).
If $R$ is a nonsingular endomorphism on $(X,\mathcal B,\mu)$ then the natural extension of $R$ is a $K$-automorphism if and only if $R$ is exact, i.e., $\bigwedge_{n=1}^\infty R^{-n}\mathcal B=\{\emptyset, X\}$ mod 0.
It follows from the Kolmogorov 0-1 theorem that a nonsingular Bernoulli shift from the generalized Krengel class (see \S\,\ref{S:hamachi}) is a
$K$-automorphism.
Parry \cite{wP65} showed that a type $II_\infty$ $K$-automorphism is either dissipative or ergodic.
Krengel \cite{Kre70} proved the same for the Krengel class of Bernoulli nonsingular shifts, and finally Silva and Thieullen extended this result to the nonsingular $K$-automorphisms \cite{ST2}.
It is
also shown in \cite{ST2} that if $T$ is a
nonsingular $K$-automorphism, for any ergodic nonsingular transformation $S$, if $S\times T$ is
conservative, then it is ergodic. This property is called {\it sharply weak mixing} in \cite{DanL22}.
It follows that a conservative nonsingular $K$-automorphism is weakly mixing. However, it does not necessarily have infinite ergodic index \cite{KP}.
Krengel and Sucheston \cite{KS69} showed that an infinite measure-preserving conservative
$K$-automorphism is mixing.
In \cite{EKRSS} it is shown that if $T$ is a rank-one transformation with $(r_n)$ bounded and
$s_{n-1}(r_{n-1}-1)\geq h_n/2$ for all $n\geq 1$ (so the space is of infinite measure; an example of such a $T$ is the infinite Chac\'on map), then there exists a rank-one transformation $S$ such that $T\times S$ is conservative but not ergodic, so $T$ is not sharply weak mixing.
\subsection{Multiple and polynomial recurrence}\label{MPR}
Let $p$ be a positive integer.
A nonsingular transformation $T$
is called $p$-{\it recurrent} if for every subset
$B$ of positive measure there exists a positive integer $k$
such that
$$
\mu(B\cap T^{-k}B\cap \dots \cap T^{-kp}B)>0.
$$
If $T$ is $p$-recurrent for any $p>0$, then it is called
{\it multiply recurrent}.
It is easy to see that $T$ is 1-recurrent if and only if it is conservative.
Clearly, if $T$ is rigid then it is multiply recurrent.
Furstenberg showed \cite{Fu} that every finite measure-preserving
transformation is multiply recurrent.
In contrast to that,
Eigen, Hajian and Halverson \cite{ehh} constructed for any $p\in\mathbb
N\cup\{\infty\}$, a nonsingular product odometer of type $II_\infty$ which is $p$-recurrent but not $(p+1)$-recurrent.
Aaronson and Nakada showed in \cite{AN00} that an infinite measure
preserving Markov shift $T$ is $p$-recurrent if and only if the product
$T\times \cdots\times T$ ($p$ times) is conservative. It
follows from this and \cite{ALW} that in the class of ergodic Markov
shifts, infinite ergodic index implies multiple recurrence. However, in
general this is not true. It was shown in \cite{afs01}, \cite{Ketal03} and \cite{DanS} that for each $p\in\mathbb N\cup\{\infty\}$ there exist
\begin{itemize}
\item[(i)]
power weakly mixing rank-one transformations and
\item[(ii)] non-power weakly mixing rank-one transformations with infinite ergodic index
\end{itemize}
which are $p$-recurrent but not $(p+1)$-recurrent (the latter holds when $p\ne\infty$, of course).
A subset $A$ is called $p$-{\it wandering} if $\mu(A\cap T^kA\cap\cdots\cap T^{pk}A)=0$ for each $k$. Aaronson and Nakada established in \cite{AN00} a $p$-analogue of Hopf decomposition (see Theorem~\ref{T:hopf}).
\begin{prop} If $(X,\mathcal B,\mu,T)$ is a conservative aperiodic nonsingular dynamical system and $p\in\mathbb N$ then $X=C_p\sqcup D_p$, where $C_p$ and $D_p$ are $T$-invariant disjoint subsets, $D_p$ is a countable union of $p$-wandering sets, $T\restriction C_p$ is $p$-recurrent and $\sum_{k=1}^\infty\mu(B\cap T^{-k}B\cap\cdots\cap T^{-pk}B)=\infty$ for every $B\subset C_p$.
\end{prop}
Let $T$ be an infinite measure-preserving transformation and let $\mathcal F$
be a $\sigma$-finite factor (i.e., invariant subalgebra) of $T$. Inoue \cite{Ino}
showed that for each $p>0$, if $T\restriction \mathcal F$
is $p$-recurrent then so is $T$ provided that the extension
$T\to T \restriction \mathcal F$ is isometric. It is unknown yet whether
the latter assumption can be dropped. However, partial progress was
achieved in \cite{Mey}: if $T\restriction\mathcal F$ is multiply
recurrent then so is $T$.
Let $\mathcal P:=\{q\in\mathbb Q[t]\mid q(\mathbb Z)\subset\mathbb Z \text{ and }
q(0)=0\}$. An ergodic conservative nonsingular transformation $T$ is called
$p$-{\it polynomially recurrent} if for every
$q_1,\dots,q_p\in\mathcal
P$ and every subset $B$ of positive measure there exists $k\in\mathbb N$ with
$$
\mu(B\cap T^{q_1(k)}B\cap\dots\cap T^{q_p(k)}B)>0.
$$
If $T$ is $p$-polynomially recurrent
for every $p\in\mathbb N$
then it is called {\it polynomially recurrent}. Furstenberg's theorem on multiple recurrence was significantly strengthened in \cite{BeL}, where it was shown that every finite measure-preserving transformation is polynomially recurrent.
However, Danilenko and Silva \cite{DanS} constructed
\begin{itemize}
\item[(i)] type $II_\infty$ transformations $T$ which are $p$-polynomially recurrent but not $(p+1)$-polynomially recurrent (for each fixed $p\in\mathbb N$),
\item[(ii)] polynomially recurrent transformations $T$ of type $II_\infty$,
\item[(iii)] rigid (and hence multiply recurrent) type $II_\infty$ transformations $T$ which are not polynomially recurrent.
\end{itemize}
Moreover, such $T$ can be chosen inside the class of rank-one transformations with infinite ergodic index.
\section{Dynamical properties of IDPFT systems}
Let $T_n$ be an ergodic transformation of a standard probability space $(X_n,\mu_n)$ and let $\nu_n$ be a $T_n$-invariant probability measure equivalent to $\mu_n$,
for each $n\in\mathbb N$.
Assume that the transformation $T=\bigotimes_{n=1}^\infty T_n$ is nonsingular
with respect to the infinite product measure
$\mu:=\bigotimes_{n=1}^\infty\mu_n$.
In other words,
$(X,\mu, T)$ is an IDPFT dynamical system.
\begin{theorem}[\cite{DanL18}] The transformation $T$ is either conservative or totally dissipative.
Moreover,
if $S$ is an ergodic conservative nonsingular transformation then the direct product
$T\times S$ is either conservative or totally dissipative.
\end{theorem}
\begin{theorem}[\cite{DanL18}]
Let $(X_n,\nu_n,T_n)$
be mildly mixing (see \S\ref{mild mixing} for the definition) for each $n>0$.
If $T$ is $\mu$-conservative then $T$ is sharply weak mixing.
\end{theorem}
Examples of rigid ergodic but not weakly mixing IDPFT transformations of Krieger type $III_\lambda$, for each $\lambda\in(0,1)$, were constructed in \cite{DanL18}.
Some families of 0-type IDPFT transformations of type $III_1$ appeared in \cite{DanL18}
and of all possible Krieger types
in \cite{DaKo}.
\begin{theorem}[\cite{DaKo}]
Let $K\in\{III_\lambda\mid 0\le \lambda\le 1\}\sqcup\{II_\infty\}$.
Then there is a 0-type weakly mixing IDPFT transformation
of type $K$.
\end{theorem}
\section{Dynamical properties of nonsingular Bernoulli and Markov shifts}
We will use below the notation introduced in \S\ref{S:hamachi}.
Thus,
$T$ stands for the nonsingular Bernoulli shift on the space $(X,\mu)=(A^{\mathbb Z},\bigotimes_{n\in\mathbb Z}\mu_n)$ associated with a sequence of probability measures $(\mu_n)_{n\in\mathbb Z}$ on $A$.
\subsection{Krengel class}
Nonsingular Bernoulli shifts appeared originally in Krengel's work
\cite{Kre70}.
He introduced there a class of nonsingular shifts for which $A=\{0,1\}$ and $\mu_n$ is the equidistribution on $\{0,1\}$ for all $n\le 1$.
We will call it {\it the Krengel class}.
Krengel showed that this class contains totally dissipative transformations.
He also used an inductive procedure to construct the sequence $(\mu_n)_{n=1}^\infty$ in such a way that the corresponding Bernoulli shift is ergodic conservative and not of type $II_1$.
Krengel conjectured that this shift is in fact of type $III$.
In \cite{Ham3}, Hamachi showed that Krengel's class contains ergodic conservative nonsingular Bernoulli shifts of type $III$.
This was further refined by Kosloff who constructed type $III_1$ ergodic conservative shifts belonging to Krengel's class \cite{Ko11}.
In \cite{Ko13}
Kosloff constructed a nonsingular Bernoulli shift of type $III_1$ (and belonging to the Krengel class) which is power weakly mixing.
Weiss asked about possible Krieger's types for the nonsingular Bernoulli shifts.
Answering his question, Kosloff proved in
a subsequent paper \cite{Ko14} that each conservative Bernoulli shift from the Krengel class is ergodic and either of type $II_1$ or of type $III_1$.
In particular, the non-type-$II_1$ conservative Bernoulli shifts constructed in \cite{Kre70}, \cite{Ham3} and \cite{Ko11} are indeed all of type $III_1$.
\subsection{Generalized Krengel class}
Kosloff's result from \cite{Ko14} was further extended in \cite{DanL18}.
We say that a nonsingular Bernoulli shift belongs to {\it the generalized Krengel class} if $A=\{0,1\}$ and $\mu_n=\mu_1$ for each $n\le 0$.
We note that these transformations
are the natural extensions of the one-sided nonsingular Bernoulli shifts
defined on $(A^{\mathbb N}, \bigotimes_{n>0}\mu_n)$.
Every shift from the generalized Krengel class is a $K$-automorphism.
\begin{theorem}[On types of nonsingular Bernoulli shifts from the generalized Krengel class \cite{Ko14}, \cite{DanL18}]\label{BernShiftType}
Let $A=\{0,1\}$ and let $T$ be a nonsingular Bernoulli shift on $(A^{\mathbb Z},\bigotimes_{n\in\mathbb Z}\mu_n)$ from the generalized Krengel class.
\begin{enumerate}
\item[(i)] If $\sum_{n>0}(\mu_n(0)-\mu_1(0))^2<\infty$ then $\mu$ is equivalent to
$\bigotimes_{n\in\mathbb Z}\mu_1$ and hence $T$ is of type $II_1$.
\item[(ii)] If $\sum_{n>0}(\mu_n(0)-\mu_1(0))^2=\infty$ and $T$ is conservative then $T$ is ergodic of type $III_1$.
Moreover, the Maharam extension of $ T$ is a weakly mixing $K$-automorphism.
\end{enumerate}
\end{theorem}
Thus, the Krieger type of a nonsingular Bernoulli shift from the generalized Krengel class is never $III_\lambda$ with $0\le\lambda<1$.
In \cite{VaWa}
Vaes and Wahl, answering a question from \cite{DanL18}, found a convenient condition
for a nonsingular Bernoulli shift from the generalized Krengel class to be conservative.
Utilizing that condition they
constructed, for each $\lambda\in (0,1)$, an explicit example of a
power weakly mixing nonsingular Bernoulli shift of type $III_1$ with $\mu_1(0)=\lambda$.
We note that the previously known Bernoulli shifts of type $III_1$
were constructed via involved inductive procedures.
Vaes and Wahl also provided an example of type $III_1$ Bernoulli shift with finite ergodic index (less than 73) in \cite{VaWa}.
Modifying their example, Kosloff and Soo showed that there are nonsingular Bernoulli shifts
of type $III_1$ of arbitrary ergodic index.
\begin{theorem}[\cite{KoSo2}]
Let $A=\{0,1\}$ and $c >0$.
Let
$
\mu_n^c(0):=\frac 12+\frac c{\sqrt n}1_{\{n\in\mathbb N\mid \sqrt n>2c\}}
$
for each $n\in\mathbb Z$.
There exists $D >\frac16$ such that the Bernoulli shift $T$ on
$\bigotimes_{n\in\mathbb Z}(A,\mu^c_n)$
is ergodic of type $III_1$ for all $c < D$ and totally dissipative for all $c > D$.
In addition, if $k\in \mathbb N$ and $\frac D{\sqrt{k+1}}\le c< \frac D{\sqrt k}$ then $T$
is of ergodic index $k$.
\end{theorem}
\subsection{General nonsingular Bernoulli shifts}
The study of general nonsingular Bernoulli shifts was initiated by Kosloff
in \cite{Ko13}.
\begin{theorem}[Mixing of nonsingular Bernoulli shifts \cite{Ko13}]
Let $\#A=2$ and let $(\mu_n)_{n\in\mathbb Z}$ be a sequence of probabilities on $A$ such that
{\rm (\ref{ffff})} holds.
Then
$T$ is
either of type $II_1$ and mixing (with respect to the equivalent invariant probability measure) or of zero type.
\end{theorem}
In \cite{Ko18}, Kosloff noticed that under some natural conditions, conservativity of Bernoulli shifts implies ergodicity.
His proof was based essentially on the Hurewicz ergodic theorem
and properties of the tail equivalence relation on $A^{\mathbb Z}$.
Danilenko \cite{Dan2018} refined his results by exploiting the interplay between $T$
and the measurable equivalence relation on $A^{\mathbb Z}$ generated by the finite permutations of coordinates.
\begin{theorem}[Weak mixing of conservative nonsingular Bernoulli shifts]\label{BernWeak} Let $A$ be finite.
\begin{enumerate}
\item[(i)]
If $\inf_{n\in\mathbb Z}\min_{a\in A}\mu_n(a)>0$ and $T$ is conservative then
$T$ is weakly mixing $($see \cite{Ko18}, \cite{Dan2018}$)$.
\item[(ii)] If $\# A=2$ and $\inf_{n\in\mathbb Z}\min_{a\in A}\log|\frac{\mu_n(a)}{\mu_{n+1}(a)}|>0$ and $T$ is conservative then
$T$ is weakly mixing \cite{Dan2018}.
\item[(iii)]
Under condition
{\rm (i)} or
{\rm (ii)}, if
$T\times\cdots\times T$ ($p$ times)
is conservative for some
$p \ge 1$
then
$T\times\cdots\times T$ ($p$ times)
is weakly mixing \cite{Dan2018}.
\end{enumerate}
\end{theorem}
The techniques from \cite{Ko18} and \cite{Dan2018} were further elaborated in \cite{BjKoVa}
to prove the following refinement of Theorem~\ref{BernWeak}.
\begin{theorem}\label{th7.4}
Let $A=\{0,1\}$, let $T$ be a nonsingular Bernoulli shift on $(A^{\mathbb Z},\bigotimes_{n\in\mathbb Z}\mu_n)$ and suppose that $\mu=\bigotimes_{n\in\mathbb Z}\mu_n$ is nonatomic.
If $T$ is not totally dissipative then $T$ is weakly mixing and its type is given as follows.
\begin{enumerate}
\item[(i)]
If there is $\lambda\in(0,1)$ such that $\sum_{n\in\mathbb Z}(\mu_n(0)-\lambda)^2<+\infty$
then $T$ is of type $II_1$.
\item[(ii)]
If $\lim_{n\to\infty}\mu_n(0)=\lambda\in(0,1)$ exists and $\sum_{n\in\mathbb Z} (\mu_n(0)-\lambda)^2=+\infty$ then $T$ is of type $III_1$.
\item[(iii)]
If either $\lim_{n\to\infty}\mu_n(0)=0$ or $\lim_{n\to\infty}\mu_n(0)=1$ then $T$ is of type $III$.
\item[(iv)]
If the sequence $(\mu_n(0))_{n}$ does not converge as $n\to\infty$
then $T$ is of type $III_1$.
\end{enumerate}
\end{theorem}
It follows, in particular, that the nonsingular Bernoulli shifts on $\{0,1\}^{\mathbb Z}$ are never of type $II_\infty$.
It is still an open problem which subtypes of $III$ are realized in the case (iii).
An example of $T$ of type $III_1$ with $\lim_{n\to\infty}\mu_n(0)=0$ was constructed in \cite{BeVa}.
Is there $T$ of type $III_0$ satisfying~(iii)?
However, the situation is different if we consider $A=[0,1]$
and probability measures $(\mu_n)_{n\in\mathbb Z}$ on $A$ with infinite support.
In \cite{KoSo}, for each $\lambda\in(0,1)$, an example of a type $III_\lambda$ Bernoulli shift was constructed with $\mu_n\sim \text{Leb}$ for each $n$.
Examples of nonsingular Bernoulli shifts of each possible Krieger's type were given in a later paper \cite{BeVa}.
An alternative proof of this result appeared in a recent work \cite{DaKo}.
\begin{theorem}[\cite{DaKo}]\label{I+Z}
Let $K\in\{III_\lambda\mid 0\le \lambda\le 1\}\sqcup\{II_\infty\}$.
Then there is a sequence $(\mu_n)_{n\in\mathbb Z}$ of probability measures on $[0,1]$ such that $\mu_n\sim\text{{\rm Leb}}$ for each $n\in\mathbb N$, the Bernoulli shift $T$ on $([0,1]^{\mathbb Z},\bigotimes_{n\in\mathbb Z}\mu_n)$
is weakly mixing, IDPFT, and of Krieger type $K$.
\end{theorem}
We also note that if there is $C>1$ such that $C^{-1}\le \frac{d\mu_n}{d\mu_0}\le C$ for all $n\in\mathbb Z$ then
$T$ can be neither of type $III_0$ nor of type $II_\infty$ \cite{BeVa}.
The following result is an analog of Theorem~\ref{th7.4} for Bernoulli shifts with
general state spaces.
\begin{theorem}[\cite{BeVa2}]
Let $(\mu_n)_{n\in\mathbb Z}$ be a sequence of mutually equivalent probabilities on a
standard Borel space $A$ and let $T$ be the Bernoulli shift on
$(X,\mu)=\bigotimes_{n\in\mathbb Z}(A,\mu_n)$.
If $T$ is nonsingular and conservative then $T$ is weakly mixing and the following holds.
\begin{enumerate}
\item[(i)]
$T$ is of type $II_1$ if and only if there exists a probability measure
$\nu\sim\mu_0$ such that
$\nu^{\otimes\mathbb Z}\sim\mu$.
\item[(ii)]
$T$ is of type $II_\infty$ if and only if
there exists a $\sigma$-finite measure
$\nu\sim\mu_0$
and Borel subsets $B_n\subset A$
such that
$\nu(B_n)<\infty$ for each $n\in\mathbb Z$ and
$$
\sum_{n\in\mathbb Z}\mu_n(A\setminus B_n)<\infty, \quad
\sum_{n\in\mathbb Z}\nu(A\setminus B_n)=\infty, \quad
\sum_{n\in\mathbb Z} H^2(\mu_n, \nu(B_n)^{-1}\nu\restriction B_n)<\infty.
$$
\item[(iii)]
$T$ is of type $III$ in all other cases.
\end{enumerate}
\end{theorem}
The next very natural question to ask is: which ergodic flows arise as the associated flow of nonsingular Bernoulli shifts of type $III_0$?
\begin{defn} Let $V=(V_t)_{t\in\mathbb R}$ be a nonsingular flow on a standard probability space $(X,\nu)$.
Given $n>1$, consider two mutually commuting nonsingular flows $U=(U_t)_{t\in\mathbb R}$ and $D=(D_t)_{t\in\mathbb R}$ on the product space
$(X,\nu)^{\otimes n}$:
$$
U_t(x_1,\dots,x_n):=(V_tx_1,x_2,\dots, x_n), \ D_t(x_1,\dots,x_n):=(V_tx_1,V_tx_2,\dots, V_tx_n),
$$
for each $t\in\mathbb R$ and $(x_1,\dots, x_n)\in X^n$.
The restriction of $U$ to the $\sigma$-algebra of $D$-invariant subsets in $X^n$
is a well defined nonsingular flow.
It is called the {\it joint flow of $n$ copies of $V$}.
The flow $V$ is called {\it infinitely divisible} if for each $n>1$, there is a flow $W$ such
that $V$ is isomorphic to the joint flow of $n$ copies of $W$.
\end{defn}
If $V$ is an AT-flow associated with an ITPFI$_2$-transformation (see Section~\ref{AT-f}) then $V$
is infinitely divisible.
\begin{theorem}[\cite{BeVa2}]\label{recent}
\begin{enumerate}
\item[(i)]
Let $(\mu_n)_{n\in\mathbb Z}$ be a sequence of mutually equivalent probabilities on a
standard Borel space $A$ and let $T$ be the Bernoulli shift on
$\bigotimes_{n\in\mathbb Z}(A,\mu_n)$.
If $T$ is nonsingular and conservative then the associated flow of $T$
is infinitely divisible.
For each ergodic probability preserving transformation $S$, the product $T\times S$ is ergodic and the associated flows of $T$ and $T\times S$ are isomorphic.
\item[(ii)]
Conversely, let $V$ be an AT-flow associated with an
ITPFI$_2$-transformation.
Then there is a weakly mixing nonsingular Bernoulli shift whose associated flow is isomorphic to $V$.
\end{enumerate}
\end{theorem}
We note that formally Berendschot and Vaes proved in \cite{BeVa2} a stronger claim than (ii).
They introduced a concept of {\it Poisson flow} as the tail boundary of certain nonhomogeneous random walks on $\mathbb R$ (see \S\ref{brw} below).
Every Poisson flow is infinitely divisible.
Every AT-flow associated with an ITPFI$_2$-transformation is Poisson.
It is shown in \cite{BeVa2} that Theorem~\ref{recent}(ii) holds for each Poisson flow $V$.
However, the following problems remain open:
\begin{itemize}
\item
Is each Poisson flow an AT-flow associated
with an ITPFI$_2$-transformation?
\item
Is each infinitely divisible flow Poisson?
Is each infinitely divisible AT-flow Poisson?
\end{itemize}
\subsection{Bernoulli factors of nonsingular Bernoulli shifts}
\begin{defn}Let $T$ be a nonsingular Bernoulli shift on $(X,\mu):=\bigotimes_{n\in\mathbb Z}(A,\mu_n)$.
Suppose that there is another sequence $(\nu_n)_{n\in\mathbb Z}$ of probabilities on $A$
such that $T$ is nonsingular on the product space $(X,\nu):=\bigotimes_{n\in\mathbb Z}(A,\nu_n)$.
If there is a map $\pi:A^{\mathbb Z}\to A^{\mathbb Z}$
such that $\pi T=T\pi$ and the measure $\mu\circ \pi^{-1}$ is equivalent to $\nu$ then $(X,\nu,T)$ is called {\it a factor} of $(X,\mu,T)$.
\end{defn}
We assume that the factors are non-trivial.
What is the relation between Krieger's types of $(X,\nu,T)$ and $(X,\mu,T)$?
\begin{theorem}[Bernoulli factors of type $II_1$ \cite{KoSo2}]
Let $A$ be finite and
$$
1>\inf_{n\in\mathbb Z}\min_{a\in A}\mu_n(a)>0.
$$
Then there is a probability measure $\rho$ on $A$
such that $(X,\rho^{\otimes\mathbb Z}, T)$ is a factor of $(X,\mu,T)$.
\end{theorem}
\begin{theorem}[ \cite{KoSo2}]
Let $\lambda,\lambda'\in(0,1]$.
There exists a type $III_\lambda$ Bernoulli shift
which has a type $III_{\lambda'}$ Bernoulli shift as a factor
in each of the two cases:
\begin{enumerate}
\item[(i)]
$\lambda'<\lambda=1$,
\item[(ii)]
$\lambda<\lambda'$.
\end{enumerate}
\end{theorem}
\subsection{Nonsingular Markov shifts}
An analog of Theorem~\ref{BernWeak}(i) holds also for nonsingular Markov shifts (see \cite{Ko18}, \cite{Dan2018}).
\begin{theorem}[Weak mixing of conservative nonsingular Markov shifts \cite{Ko18}, \cite{Dan2018}]
Let $A$ be finite and let
$M=(M(a,b))_{a,b\in A}$ be a 0-1-valued $A\times A$-matrix.
Suppose that $M$ is primitive, i.e., there is $n>0$ such that all the entries of $M^n$ are strictly positive.
Let $(X_M,T,\mu)$ be a nonsingular Markov shift and let $\mu$ be generated by a sequence $(\pi_n,P_n)_{n\in\mathbb Z}$ as in {\rm \S\ref{S:MarkSh}.}
Suppose that $\mu$ is nonatomic and that $\pi_n$ is fully supported on $A$ for each $n$.
If $\inf\{P_n(a,b)\mid n\in\mathbb Z, M(a,b)=1\}>0$ and $T$ is conservative then
$T$ is weakly mixing.
\end{theorem}
We isolate a class of nonsingular Markov shifts for which $P_n=P_1$ and $\pi_n=\pi_1$ for all $n\le 0$ and call it {\it the Markov-Krengel} class.
Each shift from this class is the natural extension of the corresponding one-sided nonsingular Markov shift \cite{DanL18}.
There is an analog of Theorem~\ref{BernShiftType} for the Markov-Krengel shifts.
\begin{theorem}[\cite{DanL18}]
Let $M^*=\begin{pmatrix}
1 & 1\\
1& 1
\end{pmatrix}$,
$P_n$ be a bistochastic matrix for each $n\in\mathbb Z$, $P_k=\begin{pmatrix}
0.5 & 0.5\\
0.5& 0.5
\end{pmatrix}$ and $\pi_k=(0.5, 0.5)$ for each $k\le 0$.
Let the corresponding Markov-Krengel shift $(X_{M^*},T,\mu)$ be nonsingular and conservative.
Then either $T$ is of type $II_1$ (if $\sum_{n>0}|P_n(0,0)-0.5|<\infty$) or $III_1$ (otherwise).
In the latter case the Maharam extension of $ T$ is a weakly mixing $K$-automorphism.
Moreover, if $\mu$ is equivalent to a Bernoulli (i.e., infinite product) measure then $T$ is of type $II_1$.
\end{theorem}
Concrete examples of Markov-Krengel shifts $(X_{M},\mu,T)$ of type $III_1$
such that $\mu$ is not equivalent to a Bernoulli measure were constructed in \cite{DanL18}
and \cite{Ko15}.
Recently Avraham-Re'em extended and refined the aforementioned results on Markov shifts
\cite{AR}.
To state his results, we introduce some notation.
Given $n\in\mathbb Z$ and $a,b\in A$ we let
$$
\widehat P_n(a,b)=
\begin{cases}\frac{\pi_{n-1}(b)}{\pi_n(a)}P_{n-1}(b,a)&\text{if }\pi_n(a)\ne 0,\\
0&\text{otherwise.}
\end{cases}
$$
For a stochastic $A\times A$-matrix $Q$, let $\lambda$ be the distribution on $A$ such that
$\lambda Q=\lambda$.
We let $\widehat Q(a,b):=\frac{\lambda(b)}{\lambda(a)}Q(b,a)$.
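For orientation (a standard fact about finite Markov chains rather than a result of \cite{AR}): $\widehat Q$ is the transition matrix of the time reversal of the stationary chain with transition matrix $Q$ and stationary distribution $\lambda$, and $\widehat P_n$ plays the analogous role for the nonhomogeneous chain determined by $(\pi_n,P_n)_{n\in\mathbb Z}$.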
\begin{theorem}[\cite{AR}]\label{ART}
Let $A$ be finite and let
$M=(M(a,b))_{a,b\in A}$ be a primitive 0-1-valued $A\times A$-matrix.
Let a measure $\mu$ on $X_M$ be generated by a sequence $(\pi_n,P_n)_{n\in\mathbb Z}$ as in {\rm \S\ref{S:MarkSh}} with
$\inf\{P_n(a,b)\mid n\in\mathbb Z, M(a,b)=1\}>0$.
Suppose that
the Markov shift
$(X_M,T,\mu)$ is nonsingular and conservative. Then the following holds.
\begin{enumerate}
\item[(i)]
If $\lim_{n\to\infty}P_n$ does not exist then
$T$ is of type $III_1$.
\item[(ii)]
If the limits $P_+:=\lim_{n\to+\infty}P_n$ and $P_-:=\lim_{n\to-\infty}P_n$ exist then
$P_+=P_-$.
\item [(iii)]
If $A=\{0,1\}$ and the limit $Q:=\lim_{n\to\infty}P_n$ exists then $T$ is either of type $II_1$ or $II_\infty$.
More precisely, $T$ is of type $II_1$ if and only if
$$
\sum_{n\in\mathbb Z}\sum_{a,b,a',b'\in A}\bigg(\sqrt{\widehat P_n(a,b)P_n(a',b')}-\sqrt{\widehat Q(a,b)Q(a',b')}\bigg)^2<\infty.
$$
The corresponding equivalent invariant probability measure (if it exists) is the Markov measure
defined by $Q$ and the distribution $\lambda$ on $A$ satisfying $\lambda Q=\lambda$.
\end{enumerate}
\end{theorem}
It was also shown in \cite{AR} that an analogue of Theorem~\ref{ART}(iii) holds also for the golden mean Markov shift, for which
$M=\begin{pmatrix} 1 &0 &1\\
1&0&1\\
0&1&0
\end{pmatrix}$.
\section{Dynamical properties of nonsingular Poisson suspensions and nonsingular Gaussian
transformations}
\subsection{Nonsingular Poisson suspensions.}
Let $(X,\mathcal B,\mu)$ be a $\sigma$-finite infinite standard measure space and let
$T\in\text{Aut}_2(X,\mu)$.
Then the Poisson suspension $(X^*,\mu^*,T_*)$ is well defined by Theorem~\ref{nonsing-P}.
The first problem to consider is to find out when $T_*$ admits an absolutely continuous
invariant probability measure.
A satisfactory solution of this problem is obtained in \cite{DaKoRo1}.
\begin{theorem} The following are equivalent:
\begin{itemize}
\item
there exists a $T_*$-invariant probability measure $\rho\prec\mu_*$,
\item
$\sup_{n\in\mathbb Z}\|\sqrt{\frac{d\mu\circ T^n}{d\mu}}-1\|_2<\infty$,
\item
there is a measure $\kappa\prec\mu$ such that $\sqrt{\frac{d\kappa}{d\mu}}-1\in L^2(\mu)$
and $\kappa\circ T=\kappa$.
\end{itemize}
\end{theorem}
It follows that there exist ``properly'' nonsingular Poisson suspensions, i.e., suspensions that do not admit an absolutely continuous invariant measure.
The next problem is to find out when $T_*$ is dissipative and when it is conservative.
\begin{theorem}
\begin{enumerate}
\item[(i)]
If
$\sum_{n\in\mathbb Z}e^{-\frac12\|\sqrt{\frac{d\mu\circ T^n}{d\mu}}-1\|_2^2}<\infty$
then $T_*$ is totally dissipative \cite{DaKoRo1}.
\item[(ii)]
Let $T\in \text{Aut}_1(X,\mu)$, $\chi (T)= 0$ and
$\bigg(\frac{d\mu}{d\mu\circ T^{-n}}\bigg)^2-1\in L^1(X,\mu)$
for each $n\in\mathbb N$.
If there is a sequence $(b_n)_{n=1}^\infty$
of positive reals such that
$\sum_{n=1}^\infty b_n=\infty$ but
$$
\sum_{n=1}^\infty b_n^2e^{\int_X\big(\big(\frac{d\mu}{d\mu\circ T^{-n}}\big)^2-1\big)d\mu}<\infty
$$
then $T_*$ is conservative \cite{DaKoRo2}.
\end{enumerate}
\end{theorem}
We also note that if $T\in \text{Aut}_1(X,\mu)$ and $\chi (T)\ne 0$ then $T$ is not conservative
\cite{DaKoRo1}.
Moreover, $T_*$ is totally dissipative \cite{DaKoRo2}.
\begin{theorem}[\cite{DaKoRo1}] If $T$ is of 0-type and there is no $T$-invariant measure
$\kappa\prec\mu$ such that $\sqrt{\frac{d\kappa}{d\mu}}-1\in L^2(\mu)$ then $T_*$ is of 0-type.
\end{theorem}
The following theorem is proved via Baire category tools.
\begin{theorem}[\cite{DaKoRo2}]
The set
$$
\{T\in \text{Aut}_2(X,\mu)\mid \text{$T$ and $T_*$ are both ergodic and of type $III_1$ }\}
$$
is a dense $G_\delta$ in $(\text{Aut}_2(X,\mu),d_2)$.
The set
$$
\{T\in \ker\chi \mid \text{$T$ and $T_*$ are both ergodic and of type $III_1$ }\}
$$
is a dense $G_\delta$ in $(\ker\chi,d_1)$.
\end{theorem}
It is easy to see that if $T\in \text{Aut}_2(X,\mu)$ then $T\in \text{Aut}_2(X,t\cdot\mu)$ for each $t>0$.
Hence $T_*$ is $(t\cdot\mu)^*$-nonsingular for each $t>0$.
However the dynamical properties of the systems $(X^*, T_*, (t\cdot\mu)^*)$ depend heavily on the choice of $t$.
This is illustrated by a concrete example constructed in \cite{DaKoRo2}.
\begin{ex}[Phase transition] There is a totally dissipative transformation $T\in\text{Aut}_2(X,\mu)$ and $t_0>0$ such that the dynamical system
$(X^*, T_*, (t\cdot\mu)^*)$ is ergodic of type $III_1$ for each $t\in(0,t_0)$
and totally dissipative for each $t>t_0$.
\end{ex}
The point $t_0$ can be interpreted as a ``bifurcation point''.
We note that if $T\in\text{Aut}_2(X,\mu)$ and $T$ is totally dissipative
then $T_*$ is isomorphic to a nonsingular Bernoulli shift.
Therefore the following theorem was proved in \cite{DaKo}
simultaneously with~Theorem~\ref{I+Z}.
\begin{theorem} Let $K\in\{III_\lambda\mid 0\le \lambda\le 1\}\sqcup\{II_\infty\}$.
Then there is a totally dissipative
transformation $T\in\text{Aut}_2(X,\mu)$ such that
$(X^*, T_*, \mu^*)$ is weakly mixing of type $K$.
\end{theorem}
This theorem is further refined in type $III_0$ as follows (cf. Theorem~\ref{recent}).
\begin{theorem}[\cite{BeVa2}]
Let $V$ be an AT-flow associated with an
ITPFI$_2$-transformation.
Then there is
a totally dissipative
transformation $T\in\text{Aut}_2(X,\mu)$ such that
$T_*$ is
weakly mixing and the associated flow of $T_*$ is isomorphic to $V$.
\end{theorem}
\subsection{Nonsingular Gaussian transformations.}
Let $\mathcal H$ be a separable infinite dimensional real Hilbert space.
Let $(h,O)\in\text{Aff}\,\mathcal H$.
For $n\in\mathbb Z$, define a vector $h^{(n)}\in\mathcal H$ by setting $(h,O)^n=(h^{(n)},O^n)$.
We first consider when $G_{h,O}$ admits an equivalent
invariant probability measure.
\begin{theorem}[ \cite{AIM}, \cite{DanL22}] Let $(X,\mu)$ denote the space of $G_{h,O}$.
The following are equivalent:
\begin{itemize}
\item
there exists a $G_{h,O}$-invariant probability measure $\rho\sim\mu$,
\item
$h$ is an $O$-coboundary, i.e. there is $a\in\mathcal H$ such that $h=a-Oa$,
\item
the affine operator $(h,O)\in\text{{\rm Aff}}\,\mathcal H$ on $\mathcal H$ has a fixed point,
\item
the sequence $(h^{(n)})_{n\in\mathbb Z}$ is bounded in $\mathcal H$.
\end{itemize}
\end{theorem}
The following dichotomy for nonsingular Gaussian transformations was established in \cite{AIM} (see also \cite{DanL22}).
\begin{theorem} If $O$ has no nontrivial fixed vectors in $\mathcal H$ then
$G_{h,O}$ is either conservative or totally dissipative.
\end{theorem}
In \cite{AIM}, for each $(h,O)\in\text{{\rm Aff}}\,\mathcal H$, the authors consider a one-parametric family of nonsingular Gaussian transformations $G_{\theta h,O}$, $\theta\in(0,+\infty)$.
\begin{defn} The {\it Poincar\'e exponent} of $(h,O)$ is
$$
\delta_{h,O}:=\inf\{\alpha>0\mid \sum_{n=1}^\infty e^{-\alpha \|h^{(n)}\|^2}<\infty\}\in[0,+\infty].
$$
\end{defn}
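Some illustrative computations (the growth rates below are chosen purely for concreteness): if $\|h^{(n)}\|^2$ stays bounded then every term of the series is bounded away from $0$ and $\delta_{h,O}=+\infty$; if $\|h^{(n)}\|^2=\beta\log(n+1)$ for some $\beta>0$ then the series is $\sum_{n\ge 1}(n+1)^{-\alpha\beta}$, which converges exactly when $\alpha\beta>1$, so $\delta_{h,O}=1/\beta$; and if $\|h^{(n)}\|^2$ grows linearly then the series converges for every $\alpha>0$ and $\delta_{h,O}=0$.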
The following theorem demonstrates a phase transition from conservativity to total
dissipativity.
\begin{theorem}[\cite{AIM}]
Let $(h,O)\in\text{{\rm Aff}}\,\mathcal H$.
There exists $\theta_{diss}\in[0,+\infty]$ such that $G_{\theta h,O}$ is conservative for all $\theta<\theta_{diss}$ and totally dissipative for all $\theta>\theta_{diss}$.
Moreover,
$$
\sqrt{2\delta_{h,O}}\le \theta_{diss}\le 2\sqrt{2\delta_{h,O}}.
$$
\end{theorem}
This result can be strengthened under additional assumptions on $O$.
\begin{theorem}[\cite{AIM}, \cite{DanL22}] Let $(h,O)\in\text{{\rm Aff}}\,\mathcal H$.
Suppose that $h$ is not an $O$-coboundary
and the $\mu$-preserving transformation $G_{0,O}$ is mildly mixing.
Then for each $\theta<\theta_{diss}$, the nonsingular Gaussian transformation $G_{\theta h,O}$
is weakly mixing, IDPFT, and of Krieger type $III_1$.
\end{theorem}
This theorem is generalized in a further work \cite{MaVa} for some cases in which
$G_{0,O}$ is weakly mixing but not mildly mixing.
In these cases $G_{0,O}$ has ``mixing subsequences'' along which
the cocycle $(h^{(n)})_n$ is ``proper''.
In particular, an example of a rigid weakly mixing Gaussian transformation $G_{0,O}$ is constructed
such that the nonsingular Gaussian transformation $G_{h,O}$ is ergodic of type $III_1$ for some $h\in\mathcal H$.
Currently, the only known examples of ergodic nonsingular Gaussian transformations are either of type $III_1$ or $II_1$.
Therefore,
though there is an obvious analogy between the theory of nonsingular Gaussian transformations and the theory of nonsingular Poisson suspensions, the latter theory looks more diverse.
We illustrate this by the following theorem.
\begin{theorem}[\cite{Da23}]
For each $K\in\{III_\lambda\mid 0\le \lambda\le 1\}\sqcup\{II_\infty\}$,
there is a transformation $T\in\text{{\rm Aut}}_2(Y,\nu)$ such that
the nonsingular Poisson suspension $T_*$ is of 0-type, weakly mixing and of Krieger type $K$,
while the nonsingular Gaussian transformation $G_{\sqrt{\frac{d\nu\circ T}{d\nu}}-1, U_T}$
is of 0-type, weakly mixing and of Krieger type $III_1$.
The unitary Koopman operators of the two nonsingular transformations are unitarily equivalent.
\end{theorem}
\section {Spectral theory for nonsingular systems
}
While the spectral theory for probability preserving systems is developed
in depth, the spectral theory of nonsingular systems is still in its
infancy. We discuss below some problems related to $L^\infty$-spectrum
which may be regarded as an analogue of the discrete spectrum. We also
include results on computation of the maximal spectral type of the
`nonsingular' Koopman operator for rank-one nonsingular transformations.
\subsection{$L^\infty$-spectrum and groups of quasi-invariance}\label{9.1}
Let $T$ be an ergodic nonsingular transformation of $(X,\mathcal
B,\mu)$. A number $\lambda\in\mathbb T$ belongs to the $L^\infty$-spectrum
$e(T)$ of $T$ if there is a function $f\in L^\infty(X,\mu)$ with $f\circ
T=\lambda f$. Such an $f$ is called an $L^\infty$-{\it eigenfunction} of $T$
corresponding to $\lambda$. Denote by $\mathcal E(T)$ the group of all
$L^\infty$-eigenfunctions of absolute value $1$. It is a Polish group when
endowed with the topology of convergence in measure. If $T$ is of type $II_1$
then the $L^\infty$-eigenfunctions are $L^2(\mu')$-eigenfunctions of $T$,
where $\mu'$ is an equivalent invariant probability measure. Hence $e(T)$
is countable. Osikawa constructed in \cite{Os} the first examples of ergodic
nonsingular transformations with uncountable $e(T)$.
We state now a nonsingular version of the von Neumann-Halmos discrete
spectrum theorem. Let $Q\subset \mathbb T$ be a countable infinite subgroup.
Let $K$ be a compact dual of $Q_d$, where $Q_d$ denotes $Q$ with the
discrete topology. Let $k_0\in K$ be the element defined by $k_0(q)=q$ for
all $q\in Q$. Let $R:K\to K$ be defined by $Rk=k+k_0$. The system $(K,R)$
is called a {\it compact group rotation}. The following theorem was proved in \cite{AN87}.
\begin{theorem}\label{T:4.1} Assume that the $L^\infty$-eigenfunctions
of $T$ generate the entire $\sigma$-algebra $\mathcal B$. Then $T$ is
isomorphic to a compact group rotation equipped with an ergodic
quasi-invariant measure.
\end{theorem}
A natural question arises: which subgroups of $\mathbb T$ can appear as $e(T)$
for an ergodic $T$?
\begin{theorem}[\cite{MooS}, \cite{Aa}]\label{T:4.2} $e(T)$ is a Borel subset of
$\mathbb T$ and carries a unique Polish topology which is stronger than the usual
topology on $\mathbb T$. The Borel structure of $e(T)$ under this topology
agrees with the Borel structure inherited from $\mathbb T$. There is a Borel
map $\psi:e(T)\ni\lambda\mapsto\psi_\lambda\in\mathcal E(T)$ such that
$\psi_\lambda\circ T=\lambda\psi_\lambda$ for each
$\lambda$. Moreover, $e(T)$ is of Lebesgue measure $0$ and it can have an
arbitrary Hausdorff dimension.
\end{theorem}
A proper Borel subgroup $E$ of $\mathbb T$ is called
\begin{enumerate}
\item[(i)]
{\it weak Dirichlet} if $\limsup_{n\to\infty}\widehat\lambda(n)=1$ for each
finite complex measure $\lambda$ supported on $E$;
\item[(ii)] {\it saturated} if $\limsup_{n\to\infty}|\widehat\lambda(n)|\ge
|\lambda(E)|$ for each finite complex measure $\lambda$ on $\mathbb T$,
\end{enumerate}
where $\widehat\lambda(n)$ denotes the $n$-th Fourier coefficient of
$\lambda$.
Every countable subgroup of $\mathbb T$ is saturated.
\begin{theorem}\label{T:4.3} $e(T)$ is $\sigma$-compact in the usual
topology on $\mathbb T$ \cite{HMP} and saturated (\cite{Mel}, \cite{HMP}).
\end{theorem}
It follows that $e(T)$ is weak Dirichlet (this fact was established earlier
in \cite{Sc3}).
It is not known if every Polish group continuously embedded in $\mathbb T$
as a $\sigma$-compact saturated group is the eigenvalue group of some
ergodic nonsingular transformation. This is the case for the so-called
$H_2$-groups and the groups of quasi-invariance of measures on $\mathbb T$
(see below). Given a sequence $n_j$ of positive integers and a sequence
$a_j\ge 0$, the set of all $z\in\mathbb T$ such that $\sum_{j=1}^\infty
a_j|1-z^{n_j}|^2<\infty$ is a group. It is called an $H_2$-{\it group}.
Every $H_2$-group is Polish in an intrinsic topology stronger than the
usual circle topology.
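For instance (an illustration only): if $n_j=2^j$ and $a_j=1$ for all $j$, then every root of unity $z$ of order $2^m$ satisfies $z^{n_j}=1$ for all $j\ge m$, so only finitely many terms of $\sum_{j}a_j|1-z^{n_j}|^2$ are nonzero and $z$ belongs to the corresponding $H_2$-group.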
\begin{theorem}[\cite{HMP}]\label{T:4.4}
\begin{itemize}
\item[(i)] Every $H_2$-group is a saturated (and hence weak Dirichlet)
$\sigma$-compact subset of $\mathbb T$.
\item[(ii)]
If $\sum_{j=0}^\infty a_j=+\infty$ then the corresponding $H_2$-group is a
proper subgroup of $\mathbb T$.
\item[(iii)] If $\sum_{j=0}^\infty a_j(n_j/n_{j+1})^2<\infty$ then
the corresponding $H_2$-group is uncountable.
\item[(iv)] Any $H_2$-group is
$e(T)$ for an ergodic nonsingular compact group rotation $T$.
\end{itemize}
\end{theorem}
It is an open problem whether every eigenvalue group $e(T)$ is an
$H_2$-group. It is known however that $e(T)$ is close `to be an $H_2$-group':
if a compact subset $L\subset\mathbb T$ is disjoint from $e(T)$ then there is an
$H_2$-group containing $e(T)$ and disjoint from $L$.
\begin{ex}[\cite{AN87}, see also \cite{Os}]\label{4.5} Let $(X,\mu,T)$ be the nonsingular
product odometer associated to a sequence $(2,\nu_j)_{j=1}^\infty$.
Let $n_j$ be a sequence of positive integers such that $n_j>\sum_{i<j}n_i$
for all $j$. For $x\in X$, we put $h(x):=n_{l(x)}-\sum_{j<l(x)}n_j$. Then
$h$ is a Borel map from $X$ to the positive integers. Let $S$ be the tower
over $T$ with height function $h$ (see \S\,\ref{tower}). Then $e(S)$ is the
$H_2$-group of all $z\in\mathbb T$ with $\sum_{j=1}^\infty
\nu_j(0)\nu_j(1)|1-z^{n_j}|^2<\infty$.
\end{ex}
It was later shown in \cite{HMP} that if $\sum_{j=1}^\infty
\nu_j(0)\nu_j(1)(n_j/n_{j+1})^2<\infty$ then the $L^\infty$-eigenfunctions of $S$
generate the entire $\sigma$-algebra, i.e., $S$ is isomorphic (measure
theoretically) to a nonsingular compact group rotation.
Let $\mu$ be a finite measure on $\mathbb T$. Let $H(\mu):=\{z\in\mathbb
T\mid \delta_z*\mu\sim\mu\}$, where $*$ means the convolution of measures.
Then $H(\mu)$ is a group called the {\it group of quasi-invariance of}
$\mu$. It has a Polish topology whose Borel sets agree with the Borel sets
which $H(\mu)$ inherits from $\mathbb T$ and the injection map of $H(\mu)$
into $\mathbb T$ is continuous. This topology is induced by the weak
operator topology on the unitary group in the Hilbert space $L^2(\mathbb
T,\mu)$ via the map $H(\mu)\ni z\mapsto U_z$, $(U_zf)(x)=\sqrt{(d(\delta_z
* \mu)/d\mu)(x)}f(xz)$ for $f\in L^2(\mathbb T,\mu)$.
Moreover, $H(\mu)$ is saturated \cite{HMP}. If $\mu(H(\mu))>0$ then either
$H(\mu)$ is countable or $\mu$ is equivalent to $\lambda_\mathbb T$
\cite{ManN}.
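Two elementary examples may help to fix these notions: $H(\lambda_\mathbb T)=\mathbb T$ since Lebesgue measure is translation invariant, while if $\mu$ is a purely atomic probability measure whose set of atoms is exactly a countable subgroup $G\subset\mathbb T$ then $\delta_z*\mu$ is purely atomic with set of atoms $zG$, so $\delta_z*\mu\sim\mu$ if and only if $z\in G$, i.e. $H(\mu)=G$; this is consistent with the dichotomy just quoted from \cite{ManN}.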
\begin{theorem}[\cite{AN87}]\label{T:4.6}
Let $\mu$ be ergodic with respect to
the $H(\mu)$-action by translations on $\mathbb T$. Then there is a compact
group rotation $(K,R)$ and a finite measure on $K$ quasi-invariant and
ergodic under $R$ such that $e(R)=H(\mu)$. Moreover, there is a continuous
one-to-one homomorphism $\psi:e(R)\to \mathcal E(R)$ such that $\psi_\lambda \circ
R=\lambda\psi_\lambda$ for all $\lambda\in e(R)$.
\end{theorem}
It was shown by Aaronson and
Nadkarni \cite{AN87} that if $n_1=1$ and $n_j=a_ja_{j-1}\cdots a_1$ for
positive integers $a_j\ge 2$ with $\sum_{j=1}^\infty a_j^{-1}<\infty$ then the
transformation $S$ from Example~\ref{4.5} does not admit a continuous
homomorphism $\psi:e(S)\to \mathcal E(S)$ with $\psi_\lambda \circ
S=\lambda\psi_\lambda$ for all $\lambda\in e(S)$. Hence $e(S)\ne H(\mu)$
for any measure $\mu$ satisfying the conditions of Theorem~\ref{T:4.6}.
Assume that $T$ is an ergodic nonsingular compact group rotation. Let
$\mathcal B_0$ be the $\sigma$-algebra generated by a sub-collection of
eigenfunctions. Then $\mathcal B_0$ is invariant under $T$ and hence a
factor (see \S\ref{S:joinings}) of $T$. It is not known if every factor of $T$ is of this form. It
is not even known whether every factor of $T$ must have non-trivial
eigenvalues.
\subsection{Koopman unitary operator for a nonsingular system}
Let $(X,\mathcal B,\mu,T)$ be a nonsingular dynamical system. In this subsection we
consider spectral properties of the Koopman operator $U_T$ defined by~(\ref{koopman oper}). First, we note that the spectrum of
$T$ is the entire circle $\mathbb T$ \cite{Nad2}. Next, if $U_T$ has an
eigenvector then $T$ is of type $II_1$. Indeed, if there are
$\lambda\in\mathbb T$ and $0\ne f\in L^2(X,\mu)$ with $U_Tf=\lambda f$ then the
measure $\nu$, $d\nu(x):=|f(x)|^2d\mu(x)$, is finite, $T$-invariant and
equivalent to $\mu$. Hence if $T$ is of type $III$ or $II_\infty$ then the
maximal spectral type $\sigma_T$ of $U_T$ is continuous. Another `restriction' on $\sigma_T$ was found in \cite{Roy2}: no Fo\"{\i}a\c{s}-Str\u{a}til\u{a} measure is absolutely continuous with respect to $\sigma_T$ if $T$ is of type $II_\infty$. We recall that
a symmetric measure $\sigma$ on $\mathbb T$ possesses the {\it Fo\"{\i}a\c{s}-Str\u{a}til\u{a} property} if for each ergodic probability preserving system $(Y,\nu,S)$ and $f\in L^2(Y,\nu)$, if $\sigma$ is the spectral measure of $f$ then $f$ is a Gaussian random variable \cite{LPT}. For instance, measures supported on Kronecker sets possess this property.
As we have noted in \S\ref{mixing}, mixing (0-type) is an $L^2$-spectral property for nonsingular transformations.
Also, if $T$ is infinite measure-preserving then $T$ is mixing if and only if
$n^{-1}\sum_{i=0}^{n-1}U_T^{k_i}\to 0$ in the strong operator topology for each strictly increasing sequence $k_1<k_2<\cdots$ \cite{KS69}.
This generalizes a well known theorem of Blum and Hanson for probability preserving maps. For comparison, we note that ergodicity is not an $L^2$-spectral property of infinite measure-preserving systems.
Now let $T$ be a rank-one nonsingular transformation associated with a sequence $(r_n,w_n,s_n)_{n=1}^\infty$ as in \S\ref{S:rankone}.
\begin{theorem}[\cite{HMP}, \cite{ChN}]\label{T:4.7} The spectral multiplicity
of $U_T$ is $1$ and the maximal spectral type $\sigma_T$ of $U_T$ (up to a
discrete measure in the case $T$ is of type II$_1$) is the weak limit of the measures $\rho_k$ defined as
follows:
$$
d\rho_k(z)=\prod_{j=1}^k w_j(0)|P_j(z)|^2\,dz,
$$
where $P_j(z):=1+\sqrt{w_j(1)/w_j(0)}z^{-R_{1,j}}+
\cdots+\sqrt{w_j(r_j-1)/w_j(0)}z^{-R_{r_j-1,j}}$, $z\in\mathbb T$,
$R_{i,j}:=ih_{j-1}+s_j(0)+\cdots+s_j(i)$, $1\le i\le r_j-1$, and $h_j$ is the height of the $j$-th column.
\end{theorem}
Thus the maximal spectral type of $U_T$ is given by a so-called {\it
generalized Riesz product}. We refer the reader to \cite{HMP}, \cite{HMP1},
\cite{ChN}, \cite{Nad} for a detailed study of Riesz products:
their convergence, mutual singularity, singularity to $\lambda_\mathbb T$,
etc.
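In the simplest case Theorem~\ref{T:4.7} recovers the classical Riesz products. Suppose, say, that $r_j=2$, $w_j(0)=w_j(1)=1/2$ and $s_j\equiv 0$ for all $j$ (so that $T$ is finite measure-preserving and $h_j=2h_{j-1}$). Then $R_{1,j}=h_{j-1}$ and, for $z=e^{it}$,
$$
w_j(0)|P_j(z)|^2=\tfrac12|1+z^{-h_{j-1}}|^2=1+\cos(h_{j-1}t),
$$
so the measures $\rho_k$ are exactly the partial products of the classical Riesz product $\prod_{j\ge 1}(1+\cos(h_{j-1}t))\,dt$.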
It was shown in \cite{AN87} that $H(\sigma_T)\supset e(T)$ for any ergodic
nonsingular transformation $T$. Moreover, $\sigma_T$ is ergodic under the
action of $e(T)$ by translations if $T$ is isomorphic to an ergodic
nonsingular compact group rotation. However it is not known:
\begin{enumerate}
\item[(i)]
Whether $H(\sigma_T)= e(T)$ for all ergodic $T$?
\item[(ii)]
Whether ergodicity of $\sigma_T$ under $e(T)$ implies that $T$ is an
ergodic compact group rotation?
\end{enumerate}
The first claim of Theorem~\ref{T:4.7} extends to the rank $N$ nonsingular systems
as follows: if $T$ is an ergodic nonsingular transformation of rank $N$
then the spectral multiplicity of
$U_T$ is bounded by $N$ (as in the finite measure-preserving case). It is
not known whether this claim is true for a more general class of transformations which are defined as rank $N$ but without the assumption that the Radon-Nikodym cocycle is constant on the tower levels.
Danilenko and Ryzhikov showed in \cite{DanR1} that
for each subset $E\subset\mathbb N$, there is an ergodic conservative infinite measure-preserving transformation $T$ such that the set of essential values of the multiplicity function of $U_T$ is $E$.
In a subsequent paper \cite{DanR2} they sharpened this result: for each subset $E\subset\mathbb N\cup\{\infty\}$, there is a mixing ergodic conservative infinite measure-preserving transformation $T$ such that the set of essential values of the multiplicity function of $U_T$ is $E$.
We note that the analogous realization problem for spectral multiplicities of ergodic probability preserving transformations is still open \cite{Dan27S}.
In \cite{DanR1}, a mixing rank-one infinite measure-preserving transformation $T$ was constructed such that the measures $\sigma_T,\sigma_T*\sigma_T, \sigma_T*\sigma_T*\sigma_T,\dots$ on $\mathbb T$ are mutually disjoint.
Hence the unitary operator $U_T\oplus U_T^{\odot 2}\oplus U_T^{\odot 3}\oplus\cdots$ has a simple spectrum.
El Abdalaoui and Nadkarni constructed an ergodic nonsingular transformation whose spectrum has Lebesgue component of multiplicity one \cite{AbNa}.
The problem of existence of an ergodic nonsingular transformation with a simple Lebesgue spectrum is still open.
\subsection{ Koopman unitary operators associated with nonsingular Poisson transformations}
We recall a very important structural feature of Poisson spaces (see \cite{Ner}):
there is a canonical isometry between $L^2(\mu^*)$
and the symmetric Fock space $F(L^2(\mu))$ over $L^2(\mu)$.
We recall that
$F(L^2(\mu)):=\bigoplus_{n=0}^\infty L^2(\mu)^{\odot n}$,
where $L^2(\mu)^{\odot 0}=\mathbb C$
and each factor $L^2(\mu)^{\odot n}$ is equipped with the normalized scalar product
$n!\langle.,.\rangle_{L^2(\mu)^{\odot n}}$.
There is an exponential map $\mathcal E:L^2(\mu)\to F(L^2(\mu))$, given by
$$
\mathcal E(f):=\bigoplus_{n=0}^\infty\frac1{n!}f^{\odot n}.
$$
The family $\{\mathcal E(f)\mid f\in L^2(\mu)\}$ is total in $F(L^2(\mu))$.
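With these conventions one checks directly that, for $f,g\in L^2(\mu)$,
$$
\langle\mathcal E(f),\mathcal E(g)\rangle=\sum_{n=0}^\infty\frac{1}{(n!)^2}\,n!\langle f,g\rangle^n=\sum_{n=0}^\infty\frac{\langle f,g\rangle^n}{n!}=e^{\langle f,g\rangle},
$$
an identity which is useful when computing with the representation $W$ introduced below.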
Let Aff$_\mathbb R( L^2(\mu))$ be the group of invertible affine operators of $L^2(\mu)$ that leave invariant the $\mathbb R$-subspace $L^2_\mathbb R(\mu)$.
Then Aff$_\mathbb R( L^2(\mu))=L^2_\mathbb R(\mu)\rtimes \mathcal U_{\mathbb R}(L^2(\mu))$, where
$\mathcal U_{\mathbb R}(L^2(\mu))$ is the group of unitary operators that leave invariant
$L^2_\mathbb R(\mu)$.
The Weil unitary representation $W$ of Aff$_\mathbb R( L^2(\mu))$ is well defined on $F(L^2(\mu))$ via the formula
$$
W(f,V)\mathcal E(h):=e^{-\frac12\|f\|_2^2-\langle f,Vh\rangle}\mathcal E(f+Vh),\qquad h\in L^2(\mu).
$$
Given $T\in\text{Aut}_2(X,\mu)$, we have that
$\Big(\sqrt{\frac{d\mu\circ T}{d\mu}}-1,U_T\Big)\in \text{Aff}_\mathbb R( L^2(\mu))$.
\begin{theorem}[Koopman operator associated with $T_*$ \cite{DaKoRo1}]\label{KoopmanP}
If $T\in\text{Aut}_2(X,\mu)$ then under the canonical identification of $L^2(\mu^*)$ and $F(L^2(\mu))$,
$$
U_{T_*}= W\bigg(\sqrt{\frac{d\mu\circ T}{d\mu}}-1,U_T\bigg).
$$
\end{theorem}
\subsection{ Koopman unitary operators associated with nonsingular Gaussian transformations}
Let $\mathcal H$ be a separable infinite dimensional real Hilbert space.
Denote by $(X,\mu)$ the probability space where the nonsingular Gaussian transformations
$G_{h,O}$ for all $(h,O)\in\text{Aff}\,\mathcal H$ are defined (see \S\ref{gaus}).
It is well known that there is a canonical isometry between $L^2(X,\mu)$
and the symmetric Fock space $F(\mathcal H)$.
\begin{theorem}[Koopman operator associated with $G_{h,O}$ \cite{DanL22}]\label{KoopmanG}
If $(h,O)\in\text{Aff}\,\mathcal H$ then under the canonical identification of $L^2(X,\mu)$ and $F(\mathcal H)$,
$$
U_{G_{h,O}}= W\bigg(\frac12h,O\bigg).
$$
\end{theorem}
It follows from Theorems~\ref{KoopmanP} and \ref{KoopmanG} that each nonsingular Poisson transformation $T_*$ is spectrally equivalent to the nonsingular Gaussian
transformation $G_{2\big(\sqrt{\frac{d\mu\circ T}{d\mu}}-1\big),U_T}$.
\section{Entropy and other invariants}
Let $T$ be an ergodic conservative nonsingular transformation of a
standard probability space $(X,\mathcal B,\mu)$. If $\mathcal P$ is a finite
partition of $X$, we define the entropy $H(\mathcal P)$ of $\mathcal P$ as $H(\mathcal
P)=-\sum_{P\in\mathcal P}\mu(P)\log\mu(P)$. In the study of measure-preserving
systems the classical (Kolmogorov-Sinai) entropy proved to be a very useful
invariant for isomorphism \cite{CFS}. The key fact of the theory is that if $\mu\circ
T=\mu$ then the limit $\lim_{n\to\infty}n^{-1}H(\bigvee_{i=1}^n
T^{-i}\mathcal P)$ exists for every $\mathcal P$. However if $T$ does not preserve
$\mu$, the limit may no longer exist. Some efforts have been made to extend
the use of entropy and similar invariants to the nonsingular domain.
These include Krengel's entropy of conservative measure-preserving maps and its extension to nonsingular maps, Parry's entropy and Parry's nonsingular version of Shannon-McMillan-Breiman theorem, Poisson entropy, critical dimension by Mortiss and Dooley, etc. Unfortunately, these invariants are less informative than their classical counterparts and they are more difficult to compute.
\subsection{Krengel's and Parry's entropies}\label{Krengel}
Let $S$ be a conservative measure-preserving transformation of a $\sigma$-finite measure space
$(Y,\mathcal E,\nu)$. The {\it Krengel entropy} \cite{Kre1} of $S$ is defined
by
$$
h_{\text{Kr}}(S)=\sup\{\nu(E)h(S_E)\mid 0<\nu(E)<+\infty\},
$$
where $h(S_E)$ is the Kolmogorov-Sinai entropy of $S_E$.
It follows from Abramov's formula for the entropy of an induced transformation
that $h_{\text{Kr}}(S)=\nu(E)h(S_E)$ whenever $E$ {\it sweeps out}, i.e.,
$\bigcup_{i\ge 0}S^{-i}E=Y$. A generic transformation from Aut$_0(X,\mu)$
has entropy 0. Krengel raised a question in \cite{Kre1}: does there exist a
zero entropy infinite measure-preserving $S$ and a zero entropy finite
measure-preserving $R$ such that $h_{\text{Kr}}(S\times R)>0$? This problem
was solved in \cite{DaR} (a special case was announced by Silva and Thieullen in
an October 1995 AMS conference (unpublished)):
\begin{enumerate}
\item[(i)]
if $h_{\text{Kr}}(S)=0$ and $R$ is distal then $h_{\text{Kr}}(S\times
R)=0$;
\item[(ii)]
if $R$ is not distal then there is a rank-one transformation $S$ with
$h_{\text{Kr}}(S\times R)=\infty$.
\end{enumerate}
We also note that if a conservative $S\in\text{Aut}_0(X,\mu)$ is squashable, i.e., it commutes with
another transformation $R$ such that $\mu\circ R=c\mu$ for a constant $c\ne
1$, then $h_{\text{Kr}}(S)$ is either 0 or $\infty$ \cite{ST2}.
Now let $T$
be a type $III$ ergodic transformation of
$(X,\mathcal B,\mu)$. Silva and Thieullen define an entropy $h^*(T)$ of $T$ by
setting $h^*(T):=h_{\text{Kr}}(\widetilde T)$, where $\widetilde T$ is the
Maharam extension of $T$ (see \S\,\ref{S:maharam}). Since $\widetilde T$ commutes
with transformations which `multiply' $\widetilde T$-invariant measure,
it follows that $h^*(T)$ is either 0 or $\infty$.
Let $T$ be the standard
$III_\lambda$-odometer from Example~\ref{type III}(i). Then $h^*(T)=0$. The same is true
for a so-called ternary product odometer associated with the sequence $(3,\nu_n)_{n=1}^\infty$, where $\nu_n(0)=\nu_n(2)
=\lambda/(1+2\lambda)$ and $\nu_n(1)=1/(1+2\lambda)$
\cite{ST2}. It is not known
however whether every ergodic nonsingular product odometer has
zero entropy. On the other hand, it was shown in
\cite{ST2} that $h^*(T)=\infty$ for every $K$-automorphism.
The Parry entropy \cite{Pa3} of $S$ is defined by
$$
h_{\text{Pa}}(S):=\sup\{H(S^{-1}\mathfrak F\,|\,\mathfrak F)\mid \mathfrak F\text{ is a $\sigma$-finite subalgebra of $\mathcal E$ such that $\mathfrak F\subset S^{-1}\mathfrak F$}\}.
$$
Parry showed \cite{Pa3} that $h_{\text{Pa}}(S)\le h_{\text{Kr}}(S)$.
It is still an open question whether the two entropies coincide. This is the case when $S$ is of rank one (since $h_{\text{Kr}}(S)=0$) and when $S$ is quasi-finite \cite{Pa3}. The transformation $S$ is called {\it quasi-finite} if there exists a subset $A\subset Y$ of finite measure such that the first return time partition $(A_n)_{n>0}$ of $A$ has finite entropy. We recall that $x\in A_n\iff n\text{ is the smallest positive integer such that }S^nx\in A$. An example of a non-quasi-finite ergodic infinite measure-preserving transformation was constructed in \cite{AaP}.
A natural question is about existence of the maximal invariant $\sigma$-finite subalgebra of zero (Krengel or Parry) entropy.
Such an algebra is called {\it the Krengel-Pinsker or the Parry-Pinsker} factor of $T$ respectively.
Existence of the Krengel-Pinsker factors was proved in \cite{AaP} for a special class
of quasi-finite transformations called {\it LLB}.
This result was extended in \cite{Jan} in the following way.
\begin{theorem} Let $T$ be an ergodic quasi-finite transformation.
Then either $T$ admits a Krengel-Pinsker factor, which is then also the Parry-Pinsker and the Poisson-Pinsker factor of $T$ (see the next subsection), or $T$ is remotely infinite, i.e.,
there exists a sub-$\sigma$-algebra $\mathcal F\subset\mathcal B$ such that
$T^{-1}\mathcal F\subset\mathcal F$, $\bigvee_{n>0}T^n\mathcal F=\mathcal B$ and the subalgebra $\bigwedge_{n>0}T^{-n}\mathcal F$ does not contain subsets of positive finite measure.
\end{theorem}
\subsection{Poisson entropy}\label{Poisson}
Poisson entropy for infinite measure-preserving transformations was introduced in \cite{Roy}.
Let $(X,\mu)$ be an infinite $\sigma$-finite space and let $T$ be a $\mu$-preserving invertible transformation of $X$.
The Poisson suspension $T_*$ of $T$ is well defined on a probability space $(X^*,\mu^*)$ and
$\mu^*\circ T_*=\mu^*$ (see \S\ref{NPT}).
It is ergodic if and only if $T$ has no invariant sets of finite positive measure.
It follows from Theorem~\ref{KoopmanP} that $U_{ T_*}$ is the `exponent' of $U_T$.
Hence, the maximal spectral type of $U_{ T_*}$ is $\sum_{n\ge 0}(n!)^{-1}(\sigma_T)^{*n}$, where $\sigma_T$ is a measure of the maximal spectral type of $U_T$.
Now {\it the Poisson entropy} $h_{\text{Po}}(T)$ of $T$ is $h(T_*)$.
The main question is whether $h_{\text{Po}}(T)$ coincides with $h_{\text{Pa}}(T)$ or
$h_{\text{Kr}}(T)$.
It was shown in \cite{Jan} that $h_{\text{Pa}}(T)\le h_{\text{Po}}(T)$.
If $T$ is quasi-finite or rank one then the three entropies of $T$ coincide \cite{Jan}.
If $T$ is the infinite Markov shift associated with a pair $(P,\pi)$ for recurrent and irreducible
$P$ (see \S\ref{S:MarkSh}) then
$$
h_{\text{Kr}}(T)= h_{\text{Pa}}(T)= h_{\text{Po}}(T)=-\sum_{a\in A}\pi(a)\sum_{b\in A}P(a,b)\log P(a,b).
$$
If $\sigma_T$ is singular or $U_T$ has finite multiplicity then $h_{\text{Po}}(T)=0$ \cite{Jan}.
It was also shown in \cite{Jan} that given a nontrivial invariant $\sigma$-finite algebra $\mathcal F$ of $\mathcal B$, the natural $\mathcal F$-relative version of Poisson entropy coincides with the relative (Krengel) entropy defined in \cite{DaR}.
Hence if Krengel's and the Poisson entropies coincide on $T\restriction\mathcal F$ for some $\mathcal F$ then $h_{\text{Kr}}(T)=h_{\text{Po}}(T)$.
On the other hand, Janvresse and de la Rue constructed an ergodic conservative infinite measure-preserving transformation $T$ such that $h_{\text{Kr}}(T)=0$ but $h_{\text{Po}}(T)>0$ \cite{Jande}.
\begin{defn}
An ergodic measure-preserving transformation $T$ of a $\sigma$-finite measure space
$(X,\mathcal B,\mu)$
is said {\it to have totally positive Poisson entropy} if for each $\sigma$-finite $T$-invariant sub-$\sigma$-algebra ${\mathcal F}\subset {\mathcal B}$, the Poisson entropy of the system $(X,\mathcal F,\mu\restriction\mathcal F,T)$ is strictly positive.
\end{defn}
We note that the Poisson suspension of the system $(X,\mathcal F,\mu\restriction\mathcal F,T)$ from the above definition
is canonically a factor of the Poisson suspension $(X^*,\mu^*,T_*)$ of $(X,\mathcal B,\mu,T)$.
Such factors of $T_*$ are called Poissonian.
Roy showed in \cite{Roy3} that
if $T$ has totally positive Poisson entropy then $T$ is of zero type.
\begin{theorem}[Existence of the Poisson-Pinsker factor \cite{Roy3}] Let $T$ be an ergodic measure-preserving transformation of an infinite $\sigma$-finite measure space $(X,\mathcal B,\mu)$.
Then either $T$ has totally positive entropy and $T_*$ is CPE or there is a $\sigma$-finite $T$-invariant sub-$\sigma$-algebra ${\mathcal F}\subset {\mathcal B}$ such that the Poisson suspension of
$(X,\mathcal F,\mu\restriction\mathcal F,T)$ is the Pinsker factor of $T_*$.
\end{theorem}
If $T$ has totally positive entropy then the maximal spectral type of $T$ is Lebesgue with countable multiplicity.
If $h_{\text{Po}}(T)>0$ and $T$ possesses a Poisson-Pinsker factor then
the maximal spectral type of $T$ in the orthocomplement to the Poisson-Pinsker factor is Lebesgue with countable multiplicity \cite{Roy3}.
\subsection{Parry's generalization of the Shannon-McMillan-Breiman theorem}
Let $T$ be an ergodic transformation of a standard non-atomic probability
space $(X,\mathcal B,\mu)$. Suppose that $f\circ T\in L^1(X,\mu)$ if and only
if $f\in L^1(X,\mu)$. This means that there is $K>0$ such that
$K^{-1}<\frac{d\mu\circ T}{d\mu}(x)<K$ for a.a. $x$. Let $\mathcal P$ be a
finite partition of $X$. Denote by $C_n(x)$ the atom of $\bigvee_{i=0}^n
T^{-i}\mathcal P$ which contains $x$. We put $\omega_{-1}=0$. Parry shows in \cite{Par2} that
\begin{multline*}
\frac{\sum_{j=0}^n\log\mu(C_{n-j}(T^jx))(\omega_j(x)-\omega_{j-1}(x))}
{\sum_{j=0}^n\omega_j(x)}\to\\
H\bigg(\mathcal P\mid\bigvee_{i=1}^\infty
T^{-i}\mathcal P\bigg)-\int_X\log E\bigg(\frac{d\mu\circ
T}{d\mu}\mid\bigvee_{i=0}^\infty T^{-i}\mathcal P\bigg)\,d\mu
\end{multline*}
for a.a. $x$. Parry also shows that under the aforementioned conditions on
$T$,
\[
\frac 1n\bigg(\sum_{j=0}^nH\bigg(\bigvee_{i=0}^j T^{-i}\mathcal
P\bigg)-\sum_{j=0}^{n-1}H\bigg(\bigvee_{i=1}^{j+1} T^{-i}\mathcal
P\bigg)\bigg)\to H\bigg(\mathcal P\mid\bigvee_{i=1}^\infty T^{-i}\mathcal P\bigg).
\]
\subsection{Critical dimension} The critical dimension introduced
by Mortiss \cite{Mor} measures the order of growth for sums of
Radon-Nikodym derivatives. Let $(X,\mathcal B,\mu,T)$ be an ergodic nonsingular dynamical system.
Given $\delta>0$, let
\begin{align}
X_\delta &:=\{x\in
X\mid\liminf_{n\to\infty}\frac{\sum_{i=0}^{n-1}\omega_i(x)}{n^\delta}>0\}
\text{ and}\\
X^\delta &:=\{x\in
X\mid\liminf_{n\to\infty}\frac{\sum_{i=0}^{n-1}\omega_i(x)}{n^\delta}=0\}.
\end{align}
Then $X_\delta$ and $X^\delta$ are $T$-invariant subsets.
\begin{defn} [\cite{Mor}, \cite{DooM}]
The {\it lower critical dimension} $\alpha(T)$ of $T$ is
$\sup\{\delta\mid\mu(X_\delta)=1\}$. The {\it upper critical dimension}
$\beta(T)$ of $T$ is $\inf\{\delta\mid\mu(X^\delta)=1\}$.
\end{defn}
It was shown in \cite{DooM} that the lower and upper critical dimensions
are invariants for isomorphism of nonsingular systems. Notice also that
$$
\alpha(T)=\liminf_{n\to\infty}\frac{\log(\sum_{i=1}^n\omega_i(x))}{\log
n}\text{ and
}\beta(T)=\limsup_{n\to\infty}\frac{\log(\sum_{i=1}^n\omega_i(x))}{\log n}.
$$
Moreover, $0\le\alpha(T)\le\beta(T)\le 1$. If $T$ is of type $II_1$ then
$\alpha(T)=\beta(T)=1$. If $T$ is the standard $III_\lambda$-odometer from
Example~\ref{type III} then
$\alpha(T)=\beta(T)=\log(1+\lambda)-\frac{\lambda}{1+\lambda}\log\lambda$.
\begin{theorem}\label{5.2}
\begin{enumerate}
\item[(i)]
For every $\lambda\in[0,1]$ and every $c\in[0,1]$ there exists a
nonsingular product odometer of type $III_\lambda$ with critical dimension equal
to $c$ \cite{Mor1}.
\item[(ii)]
For every $c\in[0,1]$ there exists a nonsingular product odometer of type
$II_\infty$ with critical dimension equal to $c$ \cite{DooM}.
\end{enumerate}
\end{theorem}
Let $T$ be the nonsingular product odometer associated with a sequence $(m_n,\nu_n)_{n=1}^\infty$. Let $s(n)=m_1\cdots m_n$
and let $H(\mathcal P_n)$ denote the entropy of the partition generated by the first $n$
coordinates with respect to $\mu$. We now state a nonsingular version of the
Shannon-McMillan-Breiman theorem for $T$ from \cite{DooM}.
\begin{theorem}\label{5.3} Suppose that the sequence $(m_i)_{i\ge1}$ is bounded. Then
\begin{enumerate}
\item[(i)]
$\alpha(T)=\liminf_{n\to\infty}\frac{-\sum_{i=1}^n\log \nu_i(x_i)}{\log
s(n)}=\liminf_{n\to\infty}\frac{H(\mathcal P_n)}{\log s(n)}$ \ and
\item[(ii)]
$\beta(T)=\limsup_{n\to\infty}\frac{-\sum_{i=1}^n\log \nu_i(x_i)}{\log
s(n)}=\limsup_{n\to\infty}\frac{H(\mathcal P_n)}{\log s(n)}$
\end{enumerate}
for a.a. $x=(x_i)_{i\ge 1}\in X$.
\end{theorem}
It follows that in the case when $\alpha(T)=\beta(T)$, the critical
dimension coincides with $\lim_{n\to\infty}\frac{H(\mathcal P_n)}{\log s(n)}$.
In \cite{Mor1} this expression (when it exists) was called {\it AC-entropy}
(average coordinate). It also follows from Theorem~\ref{5.3} that if $T$ is a product
odometer of bounded type then $\alpha(T^{-1})=\alpha(T)$
and $\beta(T^{-1})=\beta(T)$. In \cite{DooM2}, Theorem~\ref{5.3} was extended to a subclass of Markov
odometers.
Those results were further extended to so-called $G$-measures on product spaces \cite{MDoo} and a class of Bratteli-Vershik systems with multiple edges \cite{DooH}.
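As a quick consistency check of Theorem~\ref{5.3} (with the entropy and the logarithm taken in the same base), consider the product odometer with $m_n=2$ and $\nu_n=\nu_1$ for all $n$: then $H(\mathcal P_n)=nH(\nu_1)$ and $\log s(n)=n\log 2$, so $\alpha(T)=\beta(T)=H(\nu_1)/\log 2$; in particular $\alpha(T)=\beta(T)=1$ when $\nu_1=(1/2,1/2)$, i.e. when $T$ is of type $II_1$, in agreement with the remark preceding Theorem~\ref{5.2}.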
The critical dimensions for nonsingular Bernoulli shifts (see \S\,\ref{S:hamachi}) were investigated in \cite{DooM3}:
\begin{theorem} For any $\varepsilon>0$, there exists a nonsingular Bernoulli shift $S$ from the Krengel class
with $\alpha(S)<\varepsilon$ and $\beta(S)>1-\varepsilon$.
\end{theorem}
\subsection{Nonsingular restricted orbit equivalence}
In \cite{Mor2} Mortiss initiated the study of a nonsingular version of Rudolph's restricted orbit equivalence \cite{Ru1}. This work is still in its early stages and does not yet deal with any form of entropy. However she introduced nonsingular orderings of orbits, defined sizes and showed that much of the basic machinery still works in the nonsingular setting.
\section{Nonsingular joinings and factors}\label{S:joinings}
The theory of joinings is a powerful tool to study probability preserving
systems and to construct striking counterexamples.
It is interesting to study what part
of this machinery can be extended to the nonsingular case.
However, there are some principal obstacles for such extensions:
\begin{itemize}
\item there are too many quasi-invariant measures in view of the Glimm-Effros theorem (see Theorem~\ref{T:Glimm});
\item
ergodic components of a non-ergodic joining need not be joinings of the original systems.
\end{itemize}
There are several ways to bypass these obstacles.
The principal idea is always to select an appropriate (rather narrow) class of quasi-invariant measures under consideration or to impose some restrictions on the structure of the joinings.
This approach led to some
progress in understanding $2$-fold joinings and constructing
prime systems of any Krieger type. As far as we know the higher-fold
nonsingular joinings have not been considered so far. It turned out however
that an alternative coding technique, predating joinings in studying the
centralizer and factors of the classical measure-preserving Chac{\'o}n maps, can be used as well to
classify factors of Cartesian products of some nonsingular Chac{\'o}n maps.
\subsection{Joinings, nonsingular MSJ and simplicity} In this subsection all measures are probability measures. A {\it nonsingular joining } of two nonsingular systems $(X_1,\mathcal B_1,\mu_1, T_1)$ and $ (X_2,\mathcal B_2,\mu_2, T_2)$ is a measure ${\hat \mu}$ on the product space $(X_1\times X_2,\mathcal B_1\otimes \mathcal B_2)$ that is nonsingular for $T_1\times T_2$ and satisfies:
${\hat \mu}(A\times X_2)=\mu_1(A)$ and ${\hat \mu}(X_1\times B)=\mu_2(B)$ for all $A\in\mathcal B_1$ and $B\in\mathcal B_2$. Clearly, the product $\mu_1\times\mu_2$ is a nonsingular joining. Given a transformation $S\in C(T)$, the measure $\mu_S$ given by $\mu_S(A\times B):=\mu(A\cap S^{-1}B)$ is a nonsingular joining of $(X,\mu,T)$ and $(X,\mu\circ S^{-1},T)$. It is called
a {\it graph}-joining since it is supported on the graph of $S$. Another important kind of joinings that we are going to define now is related to factors of dynamical systems. Recall that
given a nonsingular system
$ (X,{\mathcal {B}} , \mu ,T)$, a sub-$\sigma$-algebra $\mathcal A$ of $\mathcal B$
such that $T^{-1}({\mathcal A}) = {\mathcal A} \mod\mu$ is called a {\it factor} of $T$. There is another, equivalent, definition. A nonsingular
dynamical system $(Y,\mathcal C,\nu,S)$ is called a factor of $T$ if
there exists a measure-preserving map $\varphi : X \to
Y$, called a {\it factor map}, with $\varphi
T = S\varphi$ a.e. (If $\varphi$ is only nonsingular,
$\nu$ may be replaced with the equivalent measure $\mu \circ \varphi^{-1}$, for which $\varphi$ is measure-preserving.) Indeed, the sub-$\sigma$-algebra
$\varphi^{-1}(\mathcal C)\subset\mathcal B$ is $T$-invariant and, conversely, any $T$-invariant
sub-$\sigma$-algebra of $\mathcal B$ defines a factor map by immanent properties of standard probability spaces, see e.g. \cite{Aa}.
If $\varphi$ is a factor map as above, then $\mu$
has a disintegration with respect to $\varphi$, i.e.,
$\mu \ = \ \int \mu_y d\nu (y)$ for
a measurable map
$y\mapsto \mu_{y}$ from $Y$ to the probability measures on $X$ so that $\mu_y(\varphi^{-1}( y))=1$, the measure
$\mu_{S\varphi(x)}\circ T $ is equivalent to
$\mu_{\varphi(x)}$ and
\begin{equation}\label{10-1}
\frac{d\mu \circ
T}{d\mu} (x)
= \frac{d\nu \circ
S}{d\nu} (\varphi(x)) {\frac{d\mu_{S\varphi(x)}\circ
T}{d\mu_{\varphi(x)}}} (x)
\end{equation}
for a.e. $x\in X. $
Define now
the {\it relative product}
${\hat \mu} = \mu \times_{\varphi} \mu$ on $X \times X$ by setting
$
{\hat \mu} = \int \mu_y \times \mu_y \, d\nu (y).
$
Then it is easy to deduce from (\ref{10-1}) that
${\hat \mu}$ is a nonsingular self-joining of $T$.
We note however that the above definition of joining is too general to be satisfactory (as we noted in the introduction to this section).
It does not reduce to the classical definition when we consider probability preserving systems.
Indeed, the following result was proved in \cite{RS89}.
\begin{theorem}\label{T:njext} Let $(X_1,\mathcal B_1,\mu_1, T_1)$ and $ (X_2,\mathcal B_2,\mu_2, T_2)$ be two finite measure-preserving systems such that $T_1 \times T_2$ is ergodic. Then for every $\lambda$, $0 < \lambda < 1$, there exists a nonsingular joining ${\hat \mu}$ of $\mu_1$ and $\mu_2$ such that $(T_1 \times T_2, {\hat \mu})$ is ergodic and of type $III_\lambda$.
\end{theorem}
It is not known however if the nonsingular joining ${\hat \mu}$ can be chosen in every orbit equivalence class. In view of the above, Rudolph and Silva \cite{RS89}
isolate an important subclass of joinings.
It is used in the definition of a nonsingular version of minimal self-joinings.
\begin{defn} \label{E:rationaleq}
\begin{itemize}
\item[(i)] A nonsingular joining ${\hat \mu}$ of $(X_1,\mu_1,T_1)$ and
$(X_2,\mu_2,T_2)$ is {\it rational} if there exist
measurable
functions $c^{1}:X_1\to \mathbb R_+$ and $c^{2}:X_2\to\mathbb R_+$ such that
$$
{\hat \omega}^{{\hat \mu}}_1 (x_1, x_2) = \omega^{\mu_1}_1 (x_1 )
\omega^{\mu_2}_1 (x_2 ) c^{1} (x_1 ) = \omega^{\mu_1}_1 (x_1 ) \omega^{\mu_2}_1 (x_2 )
c^{2}
(x_2 )\quad {\hat \mu} \ a.e.
$$
\item[(ii)]
A nonsingular dynamical system $ (X,{\mathcal {B}} , \mu, T) $ has {\it minimal self-joinings (MSJ)} over a class $\mathcal M$ of probability measures equivalent to $\mu$, if for every $\mu_1, \mu_2\in\mathcal M$, for every rational joining ${\hat \mu}$ of $\mu_1, \mu_2$, a.e. ergodic component of ${\hat \mu}$ is
either the product of its marginals or the graph-joining supported on $T^j$ for some $j\in\mathbb Z$.
\end{itemize}
\end{defn}
Clearly, product measure, graph-joinings and the relative products are all rational joinings. Moreover, a rational joining of finite measure-preserving systems
is measure-preserving and a rational joining of type $II_1$'s is of type $II_1$ \cite{RS89}. Thus we obtain the finite measure-preserving theory as a special case. As for the definition of MSJ, it depends on a class $\mathcal M$ of equivalent measures. In the finite measure-preserving case $\mathcal M=\{\mu\}$. However, in the nonsingular case no particular measure is distinguished. We note also that Definition~\ref{E:rationaleq}(ii) involves some restrictions on all rational joinings and not only ergodic ones as in the finite measure-preserving case. The reason is that an ergodic component of a nonsingular joining need not be a joining of measures equivalent to the original ones \cite{a87}. For finite measure-preserving transformations, MSJ over $\{\mu\}$ is the same as the usual 2-fold MSJ \cite{JR87}.
A nonsingular transformation $T$ on $(X,\mathcal B,\mu)$ is called {\it prime} if its only factors are $\mathcal B$ and $\{X,\emptyset\}$ mod\,$\mu$.
A (nonempty) class $\mathcal M$ of probability measures equivalent to $\mu$ is said to be
{\it centralizer stable} if
for each $S\in C(T)$ and $\mu_1\in\mathcal M$, the measure $\mu_1\circ S$ is in $\mathcal M$.
\begin{theorem}[\cite{RS89}] \label{T:msj}Let $ (X,{\mathcal {B}} , \mu, T) $ be an ergodic non-atomic dynamical system such that $T$ has MSJ over a class $\mathcal M$
that is centralizer stable. Then $T$ is prime and the centralizer of $T$ consists of the powers of $T$.
\end{theorem}
A question that arises is whether such nonsingular
dynamical system (not of type $II_1$) exist. Expanding on Ornstein's
original construction from \cite{Ornst},
Rudolph and Silva
construct in \cite{RS89}, for each $0\leq \lambda\leq 1$,
a nonsingular rank-one transformation
$T_\lambda$ that is of type $III_\lambda$ and that has MSJ over a class
$\mathcal M$ that is
centralizer stable. Type $II_\infty$ examples with analogous properties were also constructed there. In this connection it is worth mentioning the example by
Aaronson and Nadkarni \cite{AN87} of $II_\infty$ ergodic transformations
that have no factor algebras on which the invariant measure is $\sigma$-finite (except for the entire one); however these transformations are not prime.
A more general notion than MSJ, called {\it graph self-joinings (GSJ)}, was
introduced in \cite{SW92}: just replace the words ``on $T^j$ for some
$j\in\mathbb Z$'' in Definition~\ref{E:rationaleq}(ii) with ``on $S$ for some
element $S\in C(T)$''. For finite measure-preserving transformations, GSJ
over $\{\mu\}$ is the same as the usual 2-fold simplicity \cite{JR87}. The
famous Veech theorem on factors of 2-fold simple maps (see \cite{JR87})
was
extended to nonsingular systems in \cite{SW92} as follows: if a system
$(X,\mathcal B,\mu,T)$ has GSJ then for every non-trivial factor $\mathcal A$ of
$T$ there exists a locally compact subgroup $H$ in $C(T)$ (equipped with
the weak topology) which acts smoothly (i.e., the partition into $H$-orbits
is measurable) and such that $\mathcal A=\{B\in\mathcal B\mid \mu(hB\triangle B)=0
\text{ for all } h\in H\}$. It follows that there is a cocycle $\varphi$ from
$(X,\mathcal A,\mu\restriction\mathcal A)$ to $H$ such that $T$ is
isomorphic to the $\varphi$-skew product extension $(T\restriction A)_\varphi$
(see \S\,6.4). Of course, the ergodic nonsingular product odometers and, more
generally, ergodic nonsingular compact group rotation (see \S\,\ref{9.1}) have
GSJ. However, except for this trivial case (the Cartesian square is
non-ergodic) plus the systems with MSJ from \cite{RS89}, no examples of
type $III$ systems with GSJ are known. In particular, no smooth examples have
been constructed so far. This is in sharp contrast with the finite measure
preserving case where abundance of simple (or close to simple) systems are
known (see \cite{JR87}, \cite{Tho}, \cite{Dan26}).
\subsection{Nonsingular coding and factors of Cartesian products of nonsingular maps}
As we have already noticed above, the nonsingular MSJ theory was developed in \cite{RS89} only for 2-fold self-joinings. The reasons for this were technical problems with extending the notion of rational joinings from $2$-fold to $n$-fold self-joinings. However while the $2$-fold nonsingular MSJ or GSJ properties of $T$ are sufficient to control the centralizer and the factors of $T$, it is not clear whether
it implies anything about the factors or centralizer of $T\times T$. Indeed, to control them one needs to know the $4$-fold joinings of $T$.
However even in the finite measure-preserving case it is a long standing open question whether $2$-fold MSJ implies $n$-fold MSJ. That is why del Junco and Silva \cite{JS03} apply an alternative---nonsingular coding---techniques to classify the factors of Cartesian products of nonsingular Chac\'on maps.
The techniques were originally used in \cite{j78} to show that the classical Chac\'on map is prime and has trivial centralizer. They were extended to nonsingular systems in \cite{JS}.
For each $0<\lambda<1$ we denote by $T_{\lambda}$
the Chac\'on map (see \S\,\ref{S:rankone}) corresponding to the sequence
of probability
vectors $w_n=
({\lambda}/({1+2\lambda}),
{1}/({1+2\lambda}),
{\lambda} /({1+2\lambda}))$ for all $n>0$. One can verify that the maps $T_{\lambda}$
are of type $III_\lambda$. (The classical Chac\'on map corresponds to $\lambda=1$.) All of these transformations are defined on the same standard Borel space $(X,\mathcal B)$.
These transformations were shown to be power weakly mixing in \cite{afs01}.
The centralizer of any finite Cartesian product of
nonsingular Chac\'on maps is computed in the following theorem.
\begin {theorem}[\cite{JS03}]{\label{secondthm}} Let
$0<\lambda_{1}<\ldots <\lambda_{k} \leq 1$ and $n_{1},\ldots, n_{k}$
be positive integers. Then the centralizer of the Cartesian product
$T_{\lambda_1}^{\otimes n_1} \times \ldots\times
T_{\lambda_k}^{\otimes n_k} $
is generated by maps of the form $U_1 \times \ldots \times U_k$,
where each $U_i$, acting on the $n_{i}$-dimensional
product space $X^{n_i}$, is a Cartesian
product of
powers of $T_{\lambda_{i}}$ or a co-ordinate permutation on
$X^{n_i}$.
\end{theorem}
Let $\pi$ denote the permutation on $X\times X$ defined by $\pi(x,y)=(y,x)$ and let ${\mathcal {B}}^{2\odot}$ denote the symmetric factor, i.e.,
${\mathcal {B}}^{2\odot}=\{A\in\mathcal B\otimes\mathcal B\mid \pi(A)=A\}$.
The following theorem
classifies the factors of the Cartesian product of any
two nonsingular type $III_{\lambda}$, $0<\lambda < 1$, or type
$II_{1}$ Chac\'on maps.
\begin{theorem}[\cite{JS03}]{\label{firstthm} } Let
$ T_{\lambda_{1}}$
and $T_{\lambda_{2}}$
be two nonsingular Chac\'on systems. Let ${\mathcal F}$ be a factor algebra of
$T_{\lambda_{1}} \times T_{\lambda_{2}}$.
\begin{itemize}
\item[(i)] If $\lambda_{1}\neq \lambda_{2}$ then
${\mathcal F}$ is equal $\mod 0$ to one of the four
algebras
${\mathcal {B}}\otimes \mathcal {B}$, ${\mathcal {B}}\otimes {\mathcal N}$, ${\mathcal N}
\otimes \mathcal {B}$, or ${\mathcal N}\otimes {\mathcal N}$, where ${\mathcal N}=\{\emptyset, X\}$.
\item[(ii)] If $\lambda_{1} = \lambda_{2}$ then
${\mathcal F}$ is equal $\mod 0$ to one of the
following algebras
${\mathcal {B}}\otimes \mathcal {B}$, ${\mathcal {B}}\otimes {\mathcal N}$, ${\mathcal N}\otimes \mathcal {B}$,
${{\mathcal N}}\otimes{{\mathcal N}}$,
or $(T^m\times Id){\mathcal {B}}^{2\odot}$ for some integer $m$.
\end{itemize}
\end{theorem}
It is not hard to obtain type $III_{1}$ examples of Chac\'on maps for which the previous two theorems hold.
However the construction of type
$II_{\infty}$ and type $III_{0}$ nonsingular Chac\'on transformations is
more subtle as it needs the choice of $w_n$ to vary with $n$. In
\cite{hs00}, Hamachi and Silva construct type $III_0$ and type $II_\infty$
examples, however the only property proved for these maps is ergodicity of
their Cartesian square. More recently, Danilenko \cite{d04} has shown that
all of them (in fact, a wider class of nonsingular Chac\'on maps of all
types) are power weakly mixing.
In \cite{cep},
Choksi, Eigen and Prasad asked whether there exists a zero
entropy, finite measure-preserving mixing automorphism $S$, and a
nonsingular type $III$ automorphism $T$, such that $T\times S$ has no
Bernoulli factors.
Theorem~\ref{firstthm} provides a partial answer (with mild mixing instead of mixing) to this question:
if $S$ is the finite measure-preserving Chac\'on
map and $T$ is a nonsingular Chac\'on map as
above, the factors of $T\times S$ are only the trivial
ones,
so $T\times S$ has no Bernoulli factors.
\subsection{Joinings and MSJ for infinite measure-preserving systems}
Adams, Friedman and Silva introduced in \cite{AFS} an infinite version of Chac\'on map $T$ as a rank-one transformation associated with $(r_n,\omega_n,s_n)_{n=1}^\infty$ such that
$r_n=3$, $\omega_n(0)=\omega_n(1)=\omega_n(2)$, $s_n(0)=0$, $s_n(1)=1$ and $s_n(2)=3h_n+1$ for each $n>0$. This is called the {\it infinite Chac\'on} transformation.
Let $(X,\mu)$ be the space of $T$.
Of course, $\mu(X)=\infty$. This transformation has infinite ergodic index \cite{AFS}, is not power weakly mixing and not multiply recurrent \cite{Ketal03}, and has trivial centralizer \cite{JanRR}.
For each $d>0$, Janvresse, de la Rue and Roy investigated $T^{\times d}$-invariant measures on $X^d$ which are
{\it boundedly finite}.
This means that for any choice of $d$ levels of each tower of the inductive construction, the measure of the Cartesian product of these levels is finite.
The product $\bigotimes_{n=1}^d\mu$ and graph-joinings, i.e., measures of the form $(A_1,\dots, A_d)\mapsto \mu(S_1^{-1}A_1\cap\dots \cap S_d^{-1}A_d)$ for some transformations $S_1,\dots, S_d\in C(T)$, are boundedly finite.
Moreover, $T$ itself is uniquely ergodic in the sense that there is only one (up to scaling) boundedly finite $T$-invariant measure.
It was shown in \cite{JanRR} that each ergodic $T^{\times d}$-invariant boundedly finite measure is a direct product of so-called {\it diagonal measures}.
Unlike the finite measure-preserving case, the class of diagonal measures does not reduce to the graph-joinings (with $S_1,\dots, S_d$ being the powers of $T$).
It contains so-called {\it weird} measures whose marginals are singular to $\mu$.
As a corollary, it was proved that $C(T)=\{T^n\mid n\in\mathbb Z\}$ \cite{JanRR}.
Some of the weird measures are totally dissipative (supported on a single orbit) and some of them are conservative.
Danilenko showed in \cite{Dan*} that there is a conservative $T\times T$-invariant boundedly finite measure with absolutely continuous marginals whose ergodic components are all weird.
This phenomenon is impossible for another infinite version $T$ of the Chac\'on map constructed in \cite{JanRR2}.
Its construction mimics the construction of the classical Chac\'on map so closely that it gives a $\mu$-conull subset $X_\infty$ such that for each $d\ge 1$, each ergodic $T^{\times d}$-invariant measure supported on $X_\infty^d$ is the direct product of several copies of $\mu$ and the graph-joinings generated by powers of $T$.
As a corollary, we obtain that each boundedly finite $d$-fold self-joining of $T$ (the marginals of a self-joining are absolutely continuous) is a convex combination of countably many ergodic joinings.
In \cite{Dan*}, the problems studied in \cite{JanRR} are considered from a different point of view.
Let $T$ be a homeomorphism of a locally compact Cantor space $X$.
We assume that $T$ is Radon uniquely ergodic, i.e., there is only one (up to scaling) Radon $T$-invariant measure $\mu$ on $X$.
A {\it $d$-fold Radon self-joining of $T$} is a Radon measure on $X^d$ whose marginals (which may be non-sigma-finite) are equivalent to $\mu$.
We consider only Radon invariant measures, define Radon $d$-fold MSJ and Radon disjointness.
Of course, each ergodic component of a nonergodic Radon joining is Radon. However, it need not be a joining.
Then the $(C,F)$-construction (see \cite{d01} and \cite{Dan26}) is used to produce a number of rank-one homeomorphisms of $X$ whose ergodic joinings are explicitly described.
The weird measures from \cite{JanRR} appear now as a quasi-graph Radon measures, i.e., they are graphs of equivariant maps whose domain and range are meager (and of zero measure) subsets of $X$.
An uncountable family of pairwise Radon disjoint, infinite Chac\'on like, Radon uniquely ergodic homeomorphisms with Radon MSJ is also constructed there.
Moreover, every transformation of this family is Radon disjoint with its inverse \cite{Dan*}.
\section{Smooth nonsingular transformations}
Diffeomorphisms of smooth manifolds equipped with smooth measures are commonly considered as physically natural examples of dynamical systems. Therefore the construction of smooth models for various dynamical properties is a well established problem of the modern (probability preserving) ergodic theory. Unfortunately, the corresponding `nonsingular' counterpart of this problem is almost unexplored. We survey here several interesting facts related to the topic.
For $r\in\mathbb N\cup\{\infty\}$, denote by Diff$^r_+(\mathbb T)$ the group of
orientation preserving $C^r$-diffeomorphisms of the circle $\mathbb T$. Endow
this set with the natural Polish topology. Fix $T\in\text{Diff}_+^r(\mathbb
T)$. Since $\mathbb T=\mathbb R/\mathbb Z$, there exists a $C^1$-function
$f:\mathbb R\to\mathbb R$ such that $T(x+\mathbb Z)=f(x)+\mathbb Z$ for all $x\in\mathbb R$.
The {\it rotation number} $\rho(T)$ of $T$ is the limit
$\lim_{n\to\infty}\frac1n(\underbrace{f\circ\cdots\circ f}_{n\text{
times}})(x)\pmod 1$. The limit exists and does not depend on the choice of
$x$ and $f$. It is obvious that $T$ is nonsingular with respect to
Lebesgue measure $\lambda_\mathbb T$. Moreover, if $T\in \text{Diff}_+^r(\mathbb
T)$ and $\rho(T)$ is irrational then the dynamical system $(\mathbb
T,\lambda_\mathbb T,T)$ is ergodic \cite{CFS}. It is interesting to ask which
Krieger type such systems can have.
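The simplest example is an irrational rotation itself: for $T_\alpha(x+\mathbb Z)=x+\alpha+\mathbb Z$ one may take $f(x)=x+\alpha$, so that $(f\circ\cdots\circ f)(x)=x+n\alpha$ and $\rho(T_\alpha)=\alpha$; since $\lambda_\mathbb T$ is $T_\alpha$-invariant, such a system is of type $II_1$, so the interesting cases are the diffeomorphisms which are not measure theoretically conjugate to rotations.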
Katznelson showed in \cite{Kat} that the subset of type $III$
$C^\infty$-diffeomorphisms
and the subset of type $II_\infty$ $C^\infty$-diffeomorphisms are dense
in Diff$_+^\infty(\mathbb T)$.
Hawkins and Schmidt refined the idea of Katznelson from \cite{Kat} to
construct, for every irrational number $\alpha\in[0,1)$ which is not of
constant type (i.e., in whose continued fraction expansion the denominators
are not bounded) a transformation $T\in \text{Diff}_+^2(\mathbb T)$ which is
of type $III_1$ and $\rho(T)=\alpha$ \cite{HaS}. It should be
mentioned that class $C^2$ in the construction is essential, since it
follows from a remarkable result of Herman that if
$T\in\text{Diff}_+^3(\mathbb T)$ then under some condition on $\alpha$ (which
determines a set of full Lebesgue measure), $T$ is measure theoretically
(and topologically) conjugate to a rotation by $\rho(T)$ \cite{Her2}. Hence $T$ is of
type $II_1$.
In \cite{Haw3}, Hawkins shows that every smooth paracompact manifold of
dimension~$\geq 3$ admits a type $III_\lambda$ diffeomorphism for every
$\lambda\in[0,1]$. This extends a result of Herman \cite{Her} on the
existence of type $III_1$ diffeomorphisms in the same circumstances.
It is also of interest to ask: which free ergodic flows are
associated with smooth dynamical systems of type $III_0$? Hawkins proved
that any free ergodic $C^\infty$-flow on a smooth,
connected, paracompact manifold is the associated flow for a
$C^\infty$-diffeomorphism on another manifold (of higher dimension)
\cite{Haw4}.
A nice result was obtained in \cite{Kat2}: if $T\in\text{Diff}_+^2(\mathbb T)$
and the rotation number of $T$ has unbounded continued fraction
coefficients then $(\mathbb T, \lambda_\mathbb T, T)$ is ITPFI.
Moreover, a
converse also holds: given a nonsingular product odometer $R$, the set of
orientation-preserving $C^\infty$-diffeomorphisms of the circle which are
orbit equivalent to $R$ is $C^\infty$-dense in the Polish space of all
$C^\infty$-orientation-preserving diffeomorphisms with irrational rotation
numbers.
In contrast to that, Hawkins constructs in \cite{Haw2} a type
$III_0$ $C^\infty$-diffeomorphism of the 4-dimensional torus which is not
ITPFI.
Examples of $n$-to-1 conservative ergodic nonsingular C$^\infty$-endomorphisms on the 2-torus, not admitting an equivalent $\sigma$-finite invariant measure, were constructed in \cite{HS91}.
In \cite{AB07} it is shown that a C$^1$ generic expanding map of $\mathbb T$ has no absolutely continuous $\sigma$-finite invariant measure.
Kosloff in \cite{Ko15} showed that $\mathbb T^2$ admits a $C^1$ Anosov diffeomorphism of type $III_1$ with respect to Lebesgue measure.
We recall that this phenomenon is impossible in the class of conservative $C^{1+\alpha}$
Anosov diffeomorphisms because by a theorem of Gurevich and Oseledets, every such transformation is of type $II_1$ (with respect to Lebesgue measure).
In a later work \cite{Ko20}, he extended this result to $\mathbb T^d$ for every $d>3$.
The case $d=3$ remains open.
\section{Miscellaneous topics}
Let $T$ be an ergodic measure-preserving transformation of an infinite $\sigma$-finite nonatomic measure space
$(X,\mathcal B,\mu)$.
\subsection{On normalizing constants for the ergodic theorem}\label{NC}
Replacing $\mu$ with an equivalent probability measure one can deduce from the Hurewicz ergodic theorem that the average
$\frac{1}{n}\sum_{i=0}^{n-1}f(T^ix)$ converges to $0$ a.e.
for each function $f\in L^1(X,\mu)$.
In view of that, a natural question arises: is there a sequence of positive numbers $(a_n)_{n=1}^\infty$ such that
\begin{align}\label{aarn}
\frac{1}{a_n}\sum_{i=0}^{n-1}f(T^ix) \to \int f d\mu
\ \text{a.e.}
\end{align}
for each $f\in L^1(X,\mu)$?
Aaronson answered this question negatively in \cite{Aa77b}.
He showed that if
there is a sequence of positive numbers $(a_n)_{n=1}^\infty$ and a single integrable function $f\geq 0$ with $ \int f d\mu>0$ such that (\ref{aarn}) holds then
$\mu(X)<\infty$.
Thus, no normalizing constants in the ergodic theorem for infinite measure-preserving transformations exist.
Since then other forms of convergence of ergodic averages for a given sequence have been studied, for which the reader may refer to \cite{Aa81}, \cite{Aa}, \cite{ThZw06}, \cite{AZ14} and the references therein.
\subsection{Around King's weak closure theorem} We recall that if $S$ is a probability preserving rank-one map then $C(S)$ is the weak closure of the set $\{S^n\mid n\in\mathbb Z\}$ (King's theorem \cite{King}).
It is still unclear whether this theorem extends to the infinite measure-preserving rank-one transformations.
However,
there are some classes of infinite rank-one maps for which it is true: zero type maps and partially bounded maps.
\begin{defn} A {\it $\sigma$-finite self-joining (of order 2)} of $T$ is a $\sigma$-finite $T\times T$-invariant measure $\lambda$ on $(X\times X,\mathcal B\otimes\mathcal B)$ such that $\lambda(A\times X)=\lambda(X\times A)=\mu (A)$ for all $A\in\mathcal B$ of finite measure.
If for each ergodic $\sigma$-finite self-joining $\lambda$ of $T$, there is $n\in\mathbb Z$ such that
$\lambda(A\times B)=\mu(A\cap T^{-n}B)$ then $T$ is said to have {\it minimal $\sigma$-finite self-joinings (of order 2)}.
\end{defn}
The above concept of MSJ makes it possible to control the $\mu$-preserving centralizer $C_0(T)$ of $T$:
if $T$ has MSJ then $C_0(T)=\{T^n\mid n\in\mathbb Z\}$.
It was shown by Ryzhikov and Thouvenot \cite{RyT} that each zero type transformation of rank one has $\sigma$-finite MSJ.
Since the rank-one transformations are non-squashable, it follows that the centralizer of each zero type rank-one transformation is just its powers.
\begin{defn} Let $T$ be a rank-one transformation associated with $(r_n,\omega_n,s_n)_{n=1}^\infty$.
It is called {\it partially bounded} if there is $L>0$ such that $r_n\le L$, $\omega_n(0)=\cdots=\omega_n(r_n-1)$, $\max_{0\le i<j<r_n-1}|s_n(i)-s_n(j)|<L$, $s_n(r_n-1)=0$ and $\min_{0\le i<r_n-1}s_n(i)\ge h_n$ for each $n>0$
\cite{Gaeb}.
\end{defn}
It was shown in \cite{Gaeb} that for each partially bounded transformation, the centralizer consists of just the powers.
Of course, the family of partially bounded transformations does not intersect the set of zero type rank-one maps.
\subsection{Asymmetry and Bergelson's question}
We say that $T$ is {\it asymmetric} if $T$ is not isomorphic to $T^{-1}$.
Explicit examples of asymmetric infinite rank-one transformations are constructed in \cite{Ry2} and
\cite{DanR3} (see there for asymmetric maps which embed into a flow).
It was shown in \cite{Gaeb} that if $T$ is a partially bounded rank-one transformation
then $T$ is isomorphic to $T^{-1}$ if and only if $s_n(i)=s_n(r_n-2-i)$ for all $i=0,\dots, r_n-2$ eventually in $n$.
Bergelson asked: is there $T$ of infinite ergodic index such that
$T\times T^{-1}$ is not ergodic?
Of course, such a $T$ is asymmetric.
The question is still open.
However some partial progress was achieved in \cite{clancy} and \cite{Dan2016}.
In \cite{clancy}, an example of a rank-one $T$ was constructed such that $T\times T$ is ergodic but $T\times T^{-1}$ is not.
Similar examples appeared also in \cite{Dan2016}.
However they do not answer Bergelson's question because $T$ has ergodic index 2 in these examples.
It was also shown in \cite{Dan2016} that within the class of infinite Markov shifts, the answer to Bergelson's question is negative.
As for the rank-one transformations, it was shown in \cite{clancy} that $T\times T^{-1}$ is always conservative.
\subsection{Ergodicity of powers}
Let $T$ be a rank-one infinite measure-preserving transformation associated with
$(r_n,\omega_n,s_n)_{n=1}^\infty$. (Hence $\omega_n(0)=\cdots=\omega_n(r_n-1)$.)
For each $n>0$, we denote by $C_n$ the set of bottom levels of all copies of $(n-1)$-th tower in the $n$-th tower.
Thus, formally, $C_n:=\{0\}\cup \{jh_{n-1}+\sum_{i=0}^{j-1}s_n(i)\mid j=1,\dots, r_n-1\}$.
The following was proved in \cite{DanAnti}.
\begin{theorem}[Ergodicity of powers of rank-one transformations]
\begin{itemize}
\item[(i)]
If $T^d$ is ergodic then for each divisor $p>1$ of $d$ there are infinitely many $n$ such that some $c\in C_n$ is not divisible by $p$.
\item[(ii)]
If $(r_n)_{n=1}^\infty$ is bounded and for each divisor $p>1$ of a positive integer $d$
there are infinitely many $n$ such that
$p$ does not divide some $c\in C_n$, then $T^d$ is ergodic.
\item[(iii)]
If the sequence $(r_n)_{n=1}^\infty$ is bounded
then $T$ is totally ergodic if and only if
for each $d>1$, there are infinitely many $n>0$
such that some element $c\in C_n$ is not divisible by $d$.
\end{itemize}
\end{theorem}
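A toy example (included only to illustrate how the criterion is applied): let $r_n=2$, $s_n(0)=0$ and $s_n(1)=h_{n-1}$ for all $n$, so that $h_n=3h_{n-1}$ and, normalizing $h_0=1$, $C_n=\{0,3^{n-1}\}$. For $p=3$ only $n=1$ produces an element of $C_n$ not divisible by $p$, so by (i) no power $T^d$ with $3\mid d$ is ergodic; on the other hand, if $\gcd(d,3)=1$ then no divisor $p>1$ of $d$ divides $3^{n-1}$ for any $n$, so $T^d$ is ergodic by (ii).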
\subsection{Rigidity sequences}
Adams proved in \cite{AdRigid} that if $S$ is an ergodic probability preserving transformation such that $S^{n_k}\to\text{Id}$ weakly for some sequence $(n_k)_{k=1}^\infty$ of positive integers then there exists a {\it power rationally weakly
mixing} infinite measure-preserving transformation $T$ such that $T^{n_k}\to\text{Id}$ weakly.
The converse assertion is obvious---just pass to the Poisson suspension.
(An infinite measure-preserving $T$ is called power rationally weakly
mixing if for each finite sequence of non-zero integers $l_1,\dots,l_k$, the product transformation $T^{l_1}\times\cdots\times T^{l_k}$ is rationally weakly mixing.)
Thus the class of rigidity sequences for the ergodic probability preserving transformations coincides with the class of rigidity sequences for the ergodic infinite measure-preserving ones.
\subsection{Directional recurrence} Given an ergodic infinite measure-preserving $\mathbb Z^d$-action
$T=(T_g)_{g\in\mathbb Z^d}$, it seems natural to study the dynamics of the individual transformations $T_g$ as $g$ runs over $\mathbb Z^d$.
For instance, one of the natural questions is to describe the set
of those $g\in\mathbb Z^d$
such that $T_g$ is recurrent.
Since $T_g$ is recurrent if and only if $T_{ng}$ is recurrent for every $n\in\mathbb Z\setminus\{0\}$, it makes sense to talk about {\it recurrent directions} in $\mathbb Z^d$ or {\it rational recurrent directions} in the group $\mathbb R^d$ containing $\mathbb Z^d$ as a standard lattice.
Following Milnor's general idea of directional dynamics \cite{Mil}, Johnson and {\c S}ahin
introduced in \cite{JS} a concept of directional recurrence of $T$ along an arbitrary (including irrational) direction in $\mathbb R^d$.
They showed that the set $R(T)$ of recurrent directions is a $G_\delta$-subset, produced
examples of $T$ with trivial and non-trivial $R(T)$ and asked for a description of all possible $R(T)$
as $T$ runs over the ergodic $\mathbb Z^d$-actions.
Some partial answers were obtained
in \cite{Dan2017}: given a $G_\delta$-subset $\Delta$ of the real projective space $P(\mathbb R^d)$ and a countable subset $D\subset\Delta$, there is a rank-one action $T$ with $D\subset R(T)\subset\Delta$.
However, in general, the problem remains open.
\section{Applications. Connections with other fields}\label{applic}
In this---final---section we shed light on numerous mathematical sources of nonsingular systems. They come from the theory of stochastic processes, random walks, locally compact Cantor systems, horocycle flows on hyperbolic surfaces, von Neumann algebras, statistical mechanics, representation theory for groups and anticommutation relations, etc. We also note that such systems sometimes appear in the context of probability preserving dynamics (see also a criterium of distality in terms of the Krengel entropy in \S\,\ref{Krengel}).
\subsection{Mild mixing}\label{mild mixing}
An ergodic finite measure-preserving dynamical system $(X,\mathcal B,\mu,T)$ is called {\it mildly mixing} if for each non-trivial $T$-invariant $\sigma$-algebra $\mathcal A\subset\mathcal B$, the restriction $T\restriction\mathcal A$ is not rigid.
For equivalent definitions and extensions to actions of locally compact groups we refer to \cite{Aa} and \cite{sw}.
There is an interesting criterion for mild mixing that involves nonsingular systems: $T$ is mildly mixing if and only if for each ergodic nonsingular transformation $S$, the product $T\times S$ is ergodic \cite{fw}.
Furthermore, $T$ is mildly mixing if and only if for each ergodic nonsingular transformation $S$, the product
$T\times S$ is orbit equivalent to $S$ \cite{HS97}.
In particular, the associated flows of $S$ and $T\times S$ are isomorphic.
Moreover, if $R$ is a nonsingular transformation such that $R\times S$ is ergodic for any ergodic nonsingular $S$ then $R$ is of type $II_1$ (and mildly mixing) \cite{sw}.
In this context we note that
for every ergodic infinite measure-preserving transformation $T$
there is an ergodic Markov shift $S$ such that $T\times S $ is not conservative, hence not ergodic \cite{ALW}; also
that $S$ can be chosen to be rank-one and rigid \cite{EKRSS}.
\subsection{Ergodicity of Gaussian cocycles}
Let $T$ be an ergodic (equivalently, weakly mixing) measure preserving Gaussian transformation on a standard probability space $(X,\frak B,\mu)$ and let $H$ be the corresponding invariant Gaussian subspace of the real Hilbert space $L^2_0(X,\mu):=L^2(X,\mu)\ominus\mathcal{B}bb R$.
The following conjecture was stated in \cite{LeLeSk}:
for each function $f\in H$, either $f$ is a $T$-coboundary (equivalently, a Gaussian coboundary, i.e. the transfer function belongs to $H$) or the skew product transformation $T_f$ acting on the product space $(X\times\mathbb R,\mu\times\text{Leb})$ is ergodic.
We note that $T_f$ preserves the infinite measure $\mu\times\text{Leb}$.
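For the reader's convenience, we recall that, under the standard convention for skew products, $T_f$ is given by
$$
T_f(x,t)=(Tx,\,t+f(x)),\qquad (x,t)\in X\times\mathbb R,
$$
so that the invariance of $\mu\times\text{Leb}$ follows from the $T$-invariance of $\mu$ and the translation invariance of Lebesgue measure.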
The affirmative answer was obtained in \cite{DanL22} (and independently in \cite{MaVa}) under an assumption that $T$ is mildly mixing.
Some examples of ergodic $T_f$ with rigid weakly mixing $T$ were constructed in \cite{MaVa}.
However, in the general setting, the problem remains open.
\subsection{Disjointness and Furstenberg's class $\mathcal W^\perp$} Two probability preserving systems $(X,\mu,T)$ and $(Y,\nu,S)$ are called
{\it disjoint} if $\mu\times\nu$ is the only $T\times S$-invariant
probability measure on $X\times Y$ whose coordinate projections
are $\mu$ and $\nu$ respectively. In \cite{Fur}, Furstenberg
initiated the study of the class $\mathcal W^\perp$ of transformations
disjoint from all weakly mixing ones.
Let $\mathcal D$ denote the class of distal transformations
and $\mathcal M(\mathcal W^\perp)$ the class of multipliers
of $\mathcal W^\perp$ (for the definitions see \cite{Glas}).
Then $\mathcal D\subset\mathcal M(\mathcal W^\perp)\subset \mathcal W^\perp$.
In \cite{LemP} and \cite{DanL} it was shown, by constructing
explicit examples, that these inclusions are strict. We record this
fact here because nonsingular ergodic theory
was the key ingredient in the arguments of those two papers, which pertain to
the theory of probability preserving systems.
The examples
are of the form $T_{\varphi,S}(x,y)=(Tx,S_{\varphi(x)}y)$, where $T$ is an
ergodic rotation on $(X,\mu)$, $(S_g)_{g\in G}$ a mildly mixing action of a
locally compact group $G$ on $Y$ and $\varphi:X\to G$ a measurable map.
Let $W_\varphi$ denote the Mackey action of $G$ associated with $\varphi$ and let $(Z,\kappa)$ be the space of this action.
The key observation is that there exists an affine isomorphism between the simplex
of $T_{\varphi,S}$-invariant probability measures whose marginal on $X$ is $\mu$ and the simplex of $W_\varphi\times S$-quasi-invariant probability measures whose marginal on $Z$ is $\kappa$ and whose Radon-Nikodym cocycle is measurable with respect to $Z$. This is a far reaching generalization of Furstenberg's theorem on relative unique ergodicity of ergodic compact group extensions.
\subsection{Symmetric stable and infinitely divisible stationary processes}
Rosinski in \cite{Ros} established a remarkable connection between structural studies of stationary stochastic processes and ergodic theory of nonsingular transformations (and flows). For simplicity we consider only real processes in discrete time. Let $X=(X_n)_{n\in\mathbb Z}$ be a measurable stationary symmetric $\alpha$-stable (S$\alpha$S) process, $0<\alpha<2$. This means that any linear combination $\sum_{k=1}^n a_kX_{j_k}$, $j_k\in\mathbb Z$, $a_k\in\mathbb R$ has an S$\alpha$S-distribution. (The case $\alpha=2$ corresponds to Gaussian processes.) Then the process admits a spectral representation
\begin{equation}\ell^1ambdabel{integral}
X_n=\int_Yf_n(y)\,M(dy), \ n\in\mathbb Z,
\end{equation}
where $f_n\in L^\alpha(Y,\mu)$ for a standard $\sigma$-finite measure space $(Y,\mathcal B,\mu)$ and $M$ is an independently scattered random measure on $\mathcal B$ such that $E\exp{(iuM(A))}=\exp{(-|u|^\alpha\mu(A))}$ for every $A\in\mathcal B$ of finite measure. By \cite{Ros}, one can choose the kernel $(f_n)_{n\in\mathbb Z}$ in a special way: there are a $\mu$-nonsingular transformation $T$ and measurable maps $\varphi:Y\to\{-1,1\}$ and $f\in L^\alpha(Y,\mu)$ such that
$f_n=U^nf$, $n\in\mathbb Z$, where $U$ is the isometry of $L^\alpha(Y,\mu)$ given by $Ug=\varphi\cdot({d\mu\circ T}/{d\mu})^{1/\alpha}\cdot g\circ T$.
If, in addition, the smallest $T$-invariant $\sigma$-algebra containing $f^{-1}(\mathcal B_{\mathbb R})$ coincides with $\mathcal B$ and Supp$\{f\circ T^n:n\in\mathbb Z\}=Y$ then the pair $(T,\varphi)$ is called minimal. It turns out that minimal pairs always exist. Moreover, two minimal pairs $(T,\varphi)$ and $(T',\varphi')$ representing the same S$\alpha$S process are equivalent in some natural sense \cite{Ros}. Then one can relate ergodic-theoretical properties of $(T,\varphi)$ to probabilistic properties of $(X_n)_{n\in\mathbb Z}$. For instance, let $Y=C\sqcup D$ be the Hopf decomposition of $Y$ (see Theorem~\ref{T:hopf}). We let $X_n^D:=\int_Df_n(y)\,M(dy)$ and $X_n^C:=\int_Cf_n(y)\,M(dy)$.
Then we obtain a unique (in distribution) decomposition of $X$ into the sum
$X^D+X^C$ of two independent stationary S$\alpha$S-processes.
Another kind of decomposition was considered in \cite{Sam}. Let $P$ be the largest invariant subset of $Y$ such that $T\restriction P$ has a finite invariant measure. Partitioning $Y$ into $P$ and $N:=Y\setminus P$ and restricting the integration in (\ref{integral}) to $P$ and $N$ we obtain a unique (in distribution) decomposition of $X$ into the sum $X^P+X^N$ of two independent stationary S$\alpha$S-processes. Notice that the process $X$ is ergodic if and only if $\mu(P)=0$.
Roy considered a more general class of {\it infinitely divisible (ID)} stationary processes \cite{Roy1}. Using Maruyama's representation of the characteristic function of an ID process $X$ without Gaussian part he singled out the L\'evy measure $Q$ of $X$. Then $Q$ is a shift invariant $\sigma$-finite measure on $\mathbb R^\mathbb Z$. Decomposing the dynamical system $(\mathbb R^\mathbb Z,\tau,Q)$ in various natural ways (Hopf decomposition, 0-type and positive type, so-called `rigidity free' part and its complement) he obtains corresponding decompositions for the process $X$. Here $\tau$ stands for the shift on $\mathbb R^\mathbb Z$.
\subsection{Poisson suspensions of infinite measure preserving transformations}
Poisson suspensions over infinite measure-preserving transformations (see \S\,\ref{NPT} or \cite{Roy}, \cite{Roy2})
are widely used in statistical mechanics to model ideal gas, Lorentz gas, etc (see \cite{CFS}).
Together with the Gaussian dynamical systems
they are also an important source of examples and counterexamples in ergodic theory.
Due to a close
similarity with the well studied Gaussian systems, a natural question arises: are there ergodic Poisson suspensions whose ergodic self-joinings are Poissonian?
Such suspensions are called PAP.
They are the analogues of the GAG systems in the theory of Gaussian systems \cite{LPT}.
Janvresse, de la Rue and Roy constructed a PAP suspension in \cite{Jan2017} (see also \cite{ParR}).
The example of an infinite measure-preserving $T$ with ``minimal self-joinings'' from \cite{JanRR2} plays a crucial role in their construction.
We also mention a result of Meyerovitch \cite{Mey1} related to weak mixing of infinite measure-preserving systems and Poisson suspensions: if $T$ is a conservative ergodic infinite measure-preserving transformation then the direct product of $T$ with the Poisson suspension
$T_*$ of $T$ is ergodic.
\subsection{Recurrence of random walks with non-stationary increments}
Using nonsingular ergodic theory one can introduce the notion of recurrence for random walks obtained from certain non-stationary processes. Let $T$ be an ergodic nonsingular transformation of a standard probability space $(X,\mathcal B,\mu)$ and let $f:X\to\mathbb R^n$ be a measurable function. Define for $m\ge 1$, $Y_m:X\to\mathbb R^n$ by $Y_m:=\sum_{n=0}^{m-1} f\circ T^n$. In other words, $(Y_m)_{m\ge 1}$ is the random walk associated with the (non-stationary) process $(f\circ T^n)_{n\ge 0}$. Let us call this random walk {\it recurrent} if the cocycle $f$ of $T$ is recurrent (see \S\,\ref{cocycles}). It was shown in \cite{Sc5} that in the case $\mu\circ T=\mu$, i.e., when the process is stationary, this definition is equivalent to the standard one.
\subsection{Boundaries of random walks}\ell^1ambdabel{brw}
Boundaries of random walks on groups retain valuable information on the underlying groups (amenability, entropy, etc.) and enable one to obtain integral representation for harmonic functions of the random walk \cite{Zim}, \cite{Zim2}, \cite{KV}.
Let $G$ be a locally compact group and $\nu$ a probability measure on $G$. Let $T$ denote the (one-sided) shift on the probability space $(X,\mathcal B_X,\mu):=(G,\mathcal B_G,\nu)^{\mathbb Z_+}$ and $\varphi:X\to G$ a measurable map defined by $(y_0,y_1,\dots)\mapsto y_0$. Let $T_\varphi$ be the $\varphi$-skew product extension of $T$ acting on the space $(X\times G,\mu\times\lambda_G)$ (for non-invertible transformations the skew product extension is defined in the very same way as for invertible ones, see \S\,\ref{cocycles}). Then $T_\varphi$ is isomorphic to the {\it homogeneous random walk} on $G$ with jump probability $\nu$. Let $\mathcal I(T_\varphi)$ denote the sub-$\sigma$-algebra of $T_\varphi$-invariant sets and let $\mathcal F(T_\varphi):=\bigcap_{n>0}T_\varphi^{-n}(\mathcal B_X\otimes\mathcal B_G)$. The former is called the {\it Poisson boundary} of $T_\varphi$ and the latter one is called the {\it tail boundary} of $T_\varphi$. Notice that a nonsingular action of $G$ by inverted right translations along the second coordinate is well defined on each of the two boundaries. The two boundaries (or, more precisely, the $G$-actions on them) are ergodic. The Poisson boundary is the Mackey range of $\varphi$ (as a cocycle of $T$). Hence the Poisson boundary is amenable \cite{Zim}. If the support of $\nu$ generates a dense subgroup of $G$ then the corresponding Poisson boundary is weakly mixing \cite{AaL}. As for the tail boundary, we first note that it can be defined for a wider family of {\it non-homogeneous} random walks. This means that the jump probability $\nu$ is no longer fixed and a sequence $(\nu_n)_{n>0}$ of probability measures on $G$ is considered instead. Now let $(X,\mathcal B_X,\mu):=\prod_{n>0}(G,\mathcal B_G,\nu_n)$. The one-sided shift on $X$ may not be nonsingular now. Instead of it, we consider the tail equivalence relation $\mathcal R$ on $X$ and a cocycle $\alpha:\mathcal R\to G$ given by $\alpha(x,y)=x_1\cdots x_ny_n^{-1}\cdots y_1^{-1}$, where $x=(x_i)_{i>0}$ and $y=(y_i)_{i>0}$ are $\mathcal R$-equivalent and $n$ is the smallest integer such that $x_i=y_i$ for all $i>n$. The tail boundary of the random walk on $G$ with time dependent jump probabilities $(\nu_n)_{n>0}$ is the Mackey $G$-action associated with $\alpha$. In the case of homogeneous random walks this definition is equivalent to the initial one. Connes and Woods showed \cite{CW} that the tail boundary is always amenable and AT. It is unknown whether the converse holds for general $G$. However it is true for $G=\mathbb R$ and $G=\mathbb Z$: the class of AT-flows coincides with the class of tail boundaries of the random walks on $\mathbb R$ and a similar statement holds for $\mathbb Z$ \cite{CW}.
Jaworski showed \cite{Ja} that if $G$ is countable and a random walk is homogeneous then the tail boundary of the random walk possesses a so-called SAT-property (which is stronger than AT).
\subsection{Stationary actions} Let $T=(T_g)_{g\in G}$ be a continuous action of a countable group $G$ on a compact metrizable space $X$.
By the Markov--Kakutani theorem, if $G$ is amenable then there is an invariant Borel probability measure on $X$.
If $G$ is non-amenable such a measure does not necessarily exist.
However, if $\kappa$ is a probability measure on $G$ whose support generates $G$ as a semigroup then there is always a $T$-quasi-invariant probability measure $\mu$ on $X$ such that
$$
\sum_{g\in G}\kappa(g)\frac{d\mu\circ T_g^{-1}}{d\mu}(x)=1\ \ \text{for a.e. }x\in X.
$$
$\mu$ is called a {\it $\kappa$-stationary} measure.
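Purely as an illustration of this identity (and leaving aside for a moment the requirement on the support of $\kappa$), if $\kappa=\delta_{g_0}$ for some $g_0\in G$ then the equation reduces to
$$
\frac{d\mu\circ T_{g_0}^{-1}}{d\mu}(x)=1\quad\text{for a.e. }x\in X,
$$
i.e. $\mu$ is $T_{g_0}$-invariant; thus $\kappa$-stationarity may be viewed as an averaged form of invariance.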
For a deep theory of stationary actions and its applications we refer to
\cite{FurG}, \cite{Furman} and references therein.
However, if
$G$ is Abelian or, more generally, nilpotent, then every $\kappa$-stationary measure is in fact $T$-invariant.
Thus, there are no stationary $\mathbb Z$-actions except for the probability preserving ones.
That is why we do not discuss them in this survey.
\subsection{Classifying $\sigma$-finite ergodic invariant measures}
The description of ergodic finite invariant measures for topological (or, more generally, standard Borel) systems is a well established problem in the classical ergodic theory \cite{CFS}. On the other hand, it seems impossible to obtain any useful information about the system by analyzing the set of all ergodic quasi-invariant (or just $\sigma$-finite invariant) measures because this set is wildly huge (see \S\,\ref{S:Glimm}). The situation changes if we impose some restrictions on the measures.
For instance, if the system under question is a homeomorphism (or a topological flow) defined on a locally compact Polish space then it is natural to consider the class of ($\sigma$-finite) invariant Radon measures, i.e., measures taking finite values on the compact subsets. We give two examples.
First, the seminal results of Giordano, Putnam and Skau on the topological orbit equivalence of compact Cantor minimal systems were extended to locally compact Cantor minimal (l.c.c.m.) systems in \cite{Dan27} and \cite{Mat}. Given a l.c.c.m. system $X$, we denote by $\mathcal M(X)$ and $\mathcal M_1(X)$ the set of invariant Radon measures and the set of invariant probability measures on $X$. Notice that $\mathcal M_1(X)$ may be empty \cite{Dan27}. It was shown in \cite{Mat} that two systems $X$ and $X'$ are topologically orbit equivalent if and only if there is a homeomorphism of $X$ onto $X'$ which maps bijectively $\mathcal M(X)$ onto $\mathcal M(X')$ and $\mathcal M_1(X)$ onto $\mathcal M_1(X')$.
Thus $\mathcal M(X)$ retains important information about the system---it is `responsible' for the topological orbit equivalence of the underlying systems. Uniquely ergodic l.c.c.m. systems (with a unique, up to scaling, infinite invariant Radon measure) were constructed in \cite{Dan27}.
The second example is related to study of the smooth horocycle flows on tangent bundles of hyperbolic surfaces.
Let $\mathbb D$ be the open disk equipped with the hyperbolic metric
$|dz|/(1-|z|^2)$ and let M\"{o}b$(\mathbb D)$ denote the group of M\"{o}bius
transformations of $\mathbb D$.
A hyperbolic surface can be written in the form
$M:=\Gamma\backslash\text{M\"{o}b}(\mathbb D)$,
where $\Gamma$ is a torsion free discrete subgroup of M\"{o}b$(\mathbb D)$.
Suppose that $\Gamma$ is a nontrivial normal subgroup of a lattice
$\Gamma_0$ in M\"{o}b$(\mathbb D)$.
Then $M$ is a regular cover of the finite volume surface
$M_0:=\Gamma_0\backslash\text{M\"{o}b}(\mathbb D)$.
The group of deck transformations $G=\Gamma_0/\Gamma$ is finitely generated.
The horocycle flow $(h_t)_{t\in\mathbb R}$ and the geodesic flow
$(g_t)_{t\in\mathbb R}$ defined on the unit tangent bundle $T^1(\mathbb D)$
descend naturally to flows, say $h$ and $g$, on $T^1(M)$. We consider the
problem of classification of the $h$-invariant Radon measures on $M$.
According to Ratner, $h$ has no finite invariant measures on $M$ if $G$ is
infinite (except for measures supported on closed orbits). However there
are infinite invariant Radon measures, for instance the volume measure. In
the case when $G$ is free Abelian and $\Gamma_0$ is co-compact, every
homomorphism $\varphi:G\to\mathbb R$ determines a unique up to scaling ergodic
invariant Radon measure (e.i.r.m.) $m$ on $T^1(M)$ such that $m\circ
dD=\exp(\varphi(D))m$ for all $D\in G$ \cite{BabL} and every e.i.r.m. arises
this way \cite{Sar}. Moreover all these measures are quasi-invariant under
$g$. In the general case, an interesting bijection is established in
\cite{LedS} between the e.i.r.m. which are quasi-invariant under $g$ and
the `non-trivial minimal' positive eigenfunctions of the hyperbolic
Laplacian on $M$.
\subsection{Von Neumann algebras} There is a fascinating and productive interplay between nonsingular ergodic theory and von Neumann algebras. The two theories alternately influenced the development of each other. Let $(X,\mathcal B,\mu,T)$ be a nonsingular dynamical system. Given $\varphi\in L^\infty(X,\mu)$ and $j\in \mathbb Z$, we define operators $A_\varphi$ and $U_j$ on the Hilbert space $L^2(X\times\mathbb Z,\mu\times\nu)$, where $\nu$ denotes the counting measure on $\mathbb Z$, by setting
$$
(A_\varphi f)(x,i) := \varphi(T^ix)f(x,i),\ (U_j f)(x,i) := f(x,i-j)
$$
Then $U_jA_\varphi U^*_j=A_{\varphi\circ T^j}$. Denote by $\mathcal M$
the von Neumann algebra (i.e., the weak closure of the $*$-algebra) generated by $A_\varphi$, $\varphi\in L^\infty(X,\mu)$, and $U_j$, $j\in\mathbb Z$. If $T$ is ergodic and aperiodic then $\mathcal M$ is a factor, i.e., $\mathcal M\cap\mathcal M'=\mathbb C 1$, where $\mathcal M'$ denotes the algebra of bounded operators commuting with $\mathcal M$. It is called a {\it Krieger factor}. The Murray--von Neumann--Connes type of $\mathcal M$ is exactly the Krieger type of $T$. The flow of weights of $\mathcal M$ is isomorphic to the associated flow of $T$.
Two Krieger factors are isomorphic if and only if the underlying dynamical systems are orbit equivalent \cite{Kr76}. Moreover, a number of important problems in the theory of von Neumann algebras such as classification of subfactors, computation of the flow of weights and Connes' invariants, outer conjugacy for automorphisms, etc. are intimately related to the corresponding problems in nonsingular orbit theory. We refer to \cite{Moo1}, \cite{FM}, \cite{GiS}, \cite{GiS2}, \cite{HamKos}, \cite{DanH} for details.
\subsection{Representations of CAR} Representations of canonical anticommutation relations (CAR) is one of the most elegant and useful chapters of mathematical physics, providing a natural language for many body quantum physics and quantum field theory. By a representation of CAR we mean a sequence of bounded linear operators $a_1,a_2,\dots$ in a separable Hilbert space $\mathcal K$ such that
$
a_ja_k+a_ka_j=0 \ \text{and }a_ja_k^*+a_k^*a_j=\delta_{j,k}.
$
Consider $\{0,1\}$ as a group with addition mod\,2. Then $X=\{0,1\}^\mathbb N$ is a compact Abelian group. Let $\Gamma:=\{x=(x_1, x_2,\dots):\lim_{n\to\infty}x_n=0\}$. Then $\Gamma$ is a dense countable subgroup of $X$. It is generated by the elements $\delta_k$ whose $k$-th coordinate is 1 and all other coordinates are 0. $\Gamma$ acts on $X$ by translations. Let $\mu$ be an ergodic $\Gamma$-quasi-invariant measure on $X$. Let $(C_k)_{k\ge 1}$ be Borel maps from $X$ to the group of unitary operators in a Hilbert space $\mathcal H$ satisfying $C_k^*(x)=C_k(x+\delta_k)$, $C_k(x)C_l(x+\delta_l)=C_l(x)C_k(x+\delta_k)$, $k\ne l$ for a.a. $x$.
In other words, $(C_k)_{k\ge 1}$ defines a cocycle of the $\Gamma$-action.
We now put $\widetilde{\mathcal H}:=L^2(X,\mu)\otimes \mathcal H$ and
define operators $a_k$ in $\widetilde{\mathcal H}$ by setting
$$
(a_kf)(x)=(-1)^{x_1+\cdots+x_{k-1}}(1-x_k)C_k(x)\sqrt{\frac
{d\mu\circ\delta_k}{d\mu}(x)}f(x+\delta_k),
$$
where $f:X\to\mathcal H$ is an element of $\widetilde{\mathcal H}$ and $x=(x_1,x_2,\dots)\in X$. It is easy to verify that $a_1,a_2,\dots$ is a representation of CAR. The converse was established in \cite{GorW} and \cite{Golod}: every factor-representation (this means that the von Neumann algebra generated by all $a_k$ is a factor) of CAR can be represented as above for some ergodic measure $\mu$, Hilbert space $\mathcal H$ and a $\Gamma$-cocycle $(C_k)_{k\ge 1}$. Moreover, using nonsingular ergodic theory Golodets \cite{Golod} constructed for each $k=2,3,\dots,\infty$, an irreducible representation of CAR such that dim\,$\mathcal H=k$. This answered a question of G{\aa}rding and Wightman \cite{GorW} who considered only the case $k=1$.
\subsection{Unitary representations of locally compact groups}
Nonsingular actions appear in a systematic way in the theory of unitary representations of groups. Let $G$ be a locally compact second countable group and $H$ a closed normal subgroup of $G$. Suppose that $H$ is commutative (or, more generally, of type I, see \cite{Dix}).
Then the natural action of $G$ by conjugation on $H$ induces a Borel $G$-action, say $\alpha$, on the dual space $\widehat H$---the set of unitarily equivalent classes of irreducible unitary representations of $H$. If now $U=(U_g)_{g\in G}$ is a unitary representation of $G$ in a separable Hilbert space then by applying Stone decomposition theorem to $U\restriction H$ one can deduce that $\alpha$ is nonsingular with respect to a measure $\mu$ of the `maximal spectral type' for $U\restriction H$ on $\widehat H$. Moreover, if $U$ is irreducible then $\alpha$ is ergodic. Whenever $\mu$ is fixed, we obtain a one-to-one correspondence between the set of cohomology classes of irreducible cocycles for $\alpha$ with values in the unitary group on a Hilbert space
$\mathcal H$ and the subset of $\widehat G$ consisting of classes of those unitary representations $V$ for which the measure associated to $V\restriction H$ is equivalent to $\mu$. This correspondence is used in both directions. From information about cocycles we can deduce facts about representations and vice versa \cite{Kir}, \cite{Dix}.
\begin{comment}
\section{Further Directions}
While some of the results that we have cited for nonsingular $\mathbb Z$-actions extend to actions of locally compact Polish groups (or subclasses of Abelian or amenable ones), many natural questions remain open in the general setting. For instance: what is the Rokhlin lemma, or the pointwise ergodic theorem (for some obstacles towards extension of the ratio ergodic theorem to nonsingular actions of arbitrary amenable groups see \cite{Hoch}; a weak version of this theorem was proved recently in \cite{Dan2018}), or the definition of entropy for nonsingular actions of general countable amenable groups? The theory of abstract nonsingular equivalence relations \cite{FM} or, more generally, nonsingular groupoids \cite{Rams} and polymorphisms \cite{Ver} is also a beautiful part of nonsingular ergodic theory that has nice applications: description of semifinite traces of AF-algebras, classification of factor representations of the infinite symmetric group \cite{VeK}, path groups \cite{AlbVer}, etc. Nonsingular ergodic theory is getting even more sophisticated when we pass from $\mathbb Z$-actions to noninvertible endomorphisms or, more generally, semigroup actions (see \cite{Aa} and references therein).
\end{comment}
\end{document}
|
\begin{document}
\title[Multiplicity results]{Multiplicity results for fractional Laplace problems with critical growth}
\thanks{The first author was supported by {\it Coordena\c c\~ao de Aperfei\c coamento de Pessoal de N\'ivel Superior} (CAPES) through the fellowship 33003017003P5--PNPD20131750--UNICAMP/MATEM\'ATICA. The second and the third authors were supported by the INdAM-GNAMPA Project 2016 {\it Problemi variazionali su variet\`a riemanniane e gruppi di Carnot}, by the DiSBeF Research Project 2015 {\it Fenomeni non-locali: modelli e applicazioni} and by the DiSPeA Research Project 2016 {\it Implementazione e testing di modelli di fonti energetiche ambientali per reti di sensori senza fili autoalimentate}. The third author was supported by the ERC grant $\epsilon$ ({\it Elliptic Pde's and Symmetry of Interfaces and Layers for Odd Nonlinearities}).}
\author[A. Fiscella]{Alessio Fiscella}
\address{Departamento de Matem\'atica, Universidade Estadual de Campinas, IMECC,
Rua S\'ergio Buarque de Holanda 651, SP CEP 13083--859 Campinas, BRAZIL}
\email{\tt [email protected]}
\author[G. Molica Bisci]{Giovanni Molica Bisci}
\address{Dipartimento PAU,
Universit\`a `Mediterranea' di Reggio Calabria,
Via Melissari 24, 89124 Reggio Calabria, Italy}
\email{\tt [email protected]}
\author[R. Servadei]{Raffaella Servadei}
\address{Dipartimento di Scienze Pure e Applicate (DiSPeA), Universit\`a degli Studi di Urbino
`Carlo Bo', Piazza della Repubblica 13, 61029 Urbino (Pesaro e Urbino), Italy}
\email{\tt [email protected]}
\keywords{Fractional Laplacian, critical nonlinearities, best fractional critical Sobolev constant, variational techniques, integrodifferential
operators.\\
\phantom{aa} 2010 AMS Subject Classification: Primary:
49J35, 35A15, 35S15; Secondary: 47G20, 45G05.}
\begin{abstract}
This paper deals with multiplicity and bifurcation results for nonlinear problems driven by the fractional Laplace operator $(-\Delta)^s$ and involving a critical Sobolev term. In particular, we consider
$$\left\{
\begin{array}{ll}
(-\Delta)^su=\gamma\left|u\right|^{2^*-2}u+f(x,u) & \mbox{in } \Omega\\
u=0 & \mbox{in } \mathbb R^n\setminus \Omega,
\end{array}\right.$$
where $\Omega\subset\mathbb R^n$ is an open bounded set with continuous boundary, $n>2s$ with $s\in(0,1)$, $\gamma$ is a positive real parameter, $2^*=2n/(n-2s)$ is the fractional critical Sobolev exponent and $f$ is a Carath\'{e}odory function satisfying different subcritical conditions.
\end{abstract}
\maketitle
\section{Introduction}
Recently, the interest in nonlocal fractional Laplacian equations involving a critical term has grown considerably.
Concerning existence results for this kind of problem, a positive answer has been given in the recent papers \cite{colorado2, fv, ms, sY, servadeivaldinociBN, servadeivaldinociBNLOW}: in all these works well known existence results for the classical Laplace operator were extended to the nonlocal fractional setting. A natural question is whether it is possible to get more than one non--trivial solution, that is, a multiplicity result. In the literature only a few attempts have been made to answer this question; in particular we refer to the very recent papers \cite{fms, pereira}, which give a bifurcation result.
Motivated by the above papers, here we deal with the following problem
\begin{equation}\label{P}
\left\{\begin{array}{ll}
(-\Delta)^su=\gamma\left|u\right|^{2^*-2}u+f(x,u) & \mbox{in } \Omega\\
u=0 & \mbox{in } \mathbb{R}^{n}\setminus\Omega,
\end{array}
\right.
\end{equation}
where $s\in(0,1)$ is fixed, $n>2s$, $\Omega\subset\mathbb{R}^{n}$ is an open and bounded set with continuous boundary, $2^*=2n/(n-2s)$ and $(-\Delta)^s$ is the fractional Laplace operator, which may be defined (up to a normalizing constant) as follows
\begin{equation}\label{delta}
(-\Delta)^su(x)=\int_{\mathbb{R}^{n}}\frac{2u(x)-u(x+y)-u(x-y)}{\left|y\right|^{n+2s}}\,dy,\quad x\in\mathbb R^n\,,
\end{equation}
as defined in \cite{valpal} (see this paper and the references therein for further details on the fractional Laplacian).
Concerning the nonlinearity in \eqref{P}, in the present work we assume that $f:\Omega\times\mathbb R\rightarrow\mathbb R$ is a {\em Carath\'{e}odory function} satisfying the following condition
\begin{equation}\label{f1}
\sup\Big\{\left|f(x,t)\right|:\;\;x\in\Omega,\;\;\left|t\right|\leq M\Big\}<+\infty \textit{ for any }M>0.
\end{equation}
The main aim of the present paper is to establish bifurcation results for \eqref{P}. For this, we require $f(x,t)$ to be odd in $t$, i.e.
\begin{equation}\label{odd}
f(x,t)=-f(x,-t)\quad \mbox{for any}\,\,\, t\in \mathbb R \mbox{ and a.e.}\,\,\,x\in \Omega\,,
\end{equation}
in order to apply the symmetric version of the Mountain Pass Theorem due to Ambrosetti and Rabinowitz (see \cite{ar}). However, with respect to the classical case presented in \cite{ar}, we use a condition weaker than the usual Ambrosetti--Rabinowitz one, in order to overcome the lack of compactness at the critical level $L^{2^*}(\Omega)$. Thus, we assume that $f$ and its primitive $F$, defined as
\begin{equation}\label{F}
F(x,t)=\int^t_0f(x,\tau)d\tau\,,
\end{equation}
satisfy
\begin{equation}\label{f2}
\lim_{\left|t\right|\rightarrow +\infty}\frac{f(x,t)}{\left|t\right|^{2^*-1}}=0 \textit{ uniformly a.e. in }\Omega;
\end{equation}
\begin{equation}\label{f3}
\begin{gathered}
\textit{there exist } \sigma\in[0,2)\textit{ and }a_1, a_2>0\textit{ such that}\\
\frac{1}{2}\,f(x,t)t-F(x,t)\geq-a_1-a_2\left|t\right|^\sigma \textit{ for any }t\in\mathbb{R} \textit{ and a.e. } x\in\Omega;\end{gathered}
\end{equation}
\begin{equation}\label{f4}
\begin{gathered}
\textit{there exist } \theta\in(2,2^*)\textit{ and }b_1, b_2>0\textit{ such that}\\
F(x,t)\leq b_1\left|t\right|^\theta+b_2 \textit{ for any }t\in\mathbb{R}\textit{ and a.e. }x\in\Omega;
\end{gathered}
\end{equation}
\begin{equation}\label{f5}
\begin{gathered}
\textit{there exist } c_1>0, h_1\in L^1(\Omega)\textit{ and }\Omega_0\subset\Omega\textit{ with }\left|\Omega_0\right|>0\textit{ such that}\\
F(x,t)\geq -h_1(x)\left|t\right|^2-c_1 \textit{ for any }t\in\mathbb{R}\textit{ and a.e. }x\in\Omega\textit{ and}\\
\liminf_{\left|t\right|\rightarrow +\infty}\frac{F(x,t)}{\left|t\right|^2}=+\infty \textit{ uniformly a.e. in }\Omega_0.
\end{gathered}
\end{equation}
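We note, purely as an illustration (this example is not needed in the sequel), that all of the above assumptions are satisfied by the model nonlinearity
$$
f(x,t)=|t|^{\theta-2}t,\qquad \theta\in(2,2^*),
$$
whose primitive is $F(x,t)=|t|^{\theta}/\theta$: conditions \eqref{f1}, \eqref{odd}, \eqref{f2} and \eqref{f4} are immediate, \eqref{f3} holds since $\frac12 f(x,t)t-F(x,t)=\big(\frac12-\frac1\theta\big)|t|^\theta\geq 0$, and \eqref{f5} holds with $h_1\equiv0$ and $\Omega_0=\Omega$, since $F(x,t)/|t|^2=|t|^{\theta-2}/\theta\to+\infty$ as $|t|\to+\infty$.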
We are now ready to state our first result.
\begin{theorem}\label{a}
Let $s\in (0,1)$, $n>2s$, $\Omega$ be an open bounded subset of $\mathbb R^n$ with continuous boundary, and let $f$ be a function satisfying assumptions \eqref{f1}, \eqref{odd}, \eqref{f2}, \eqref{f3}--\eqref{f5}.
Then, for any $k\in\mathbb{N}$ there exists $\gamma_k\in(0,+\infty]$ such that \eqref{P} admits at least $k$ pairs of non--trivial solutions for any $\gamma\in(0,\gamma_k)$.
\end{theorem}
In the next result we establish a multiplicity result for \eqref{P} without assuming that the primitive $F$ satisfies a general subcritical growth condition like \eqref{f4}. However, we need a stronger condition than \eqref{f5}. Namely, given $j, k\in\mathbb{N}$ with $j\leq k$, we consider the following variants of \eqref{f4} and \eqref{f5}
\begin{equation}\label{ft5}
\begin{gathered}
\textit{there exists a measurable function }a:\Omega\rightarrow\mathbb R\textit{ such that}\\
\limsup_{t\rightarrow0}2\frac{F(x,t)}{\left|t\right|^2}=a(x)\textit{ uniformly a.e. in }\Omega,\\
a(x)\leq\lambda_j\textit{ a.e. in }\Omega\textit{ and }a(x)<\lambda_j\textit{ on a set of positive measure contained in } \Omega;
\end{gathered}
\end{equation}
\begin{equation}\label{ft4}
\begin{gathered}
\textit{there exists } B>0\textit{ such that}\\
F(x,t)\geq \lambda_k\frac{\left|t\right|^2}{2}-B \textit{ for any }t\in\mathbb{R}\textit{ and a.e. }x\in\Omega,\end{gathered}
\end{equation}
where $\lambda_j\leq\lambda_k$ are eigenvalues of $(-\Delta)^s$, as recalled in Section \ref{sec main}.
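For instance (a simple model example, given only for the reader's convenience), assumptions \eqref{f1}, \eqref{odd}, \eqref{f2}, \eqref{f3}, \eqref{ft5} and \eqref{ft4} are all satisfied by the odd function
$$
f(x,t)=\begin{cases}
\lambda_k\,t^3 & \mbox{if } |t|\leq 1\\
\lambda_k\,t & \mbox{if } |t|>1,
\end{cases}
$$
whose primitive equals $F(x,t)=\lambda_k t^4/4$ for $|t|\leq1$ and $F(x,t)=\lambda_k t^2/2-\lambda_k/4$ for $|t|>1$. Indeed, \eqref{ft4} holds with $B=\lambda_k/4$, \eqref{ft5} holds with $a\equiv0<\lambda_1\leq\lambda_j$, and $\frac12 f(x,t)t-F(x,t)\geq0$ for every $t$, so that \eqref{f3} is satisfied as well.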
With the above conditions we can still apply the Mountain Pass Theorem given in \cite{ar}, obtaining the following result:
\begin{theorem}\label{b}
Let $s\in (0,1)$, $n>2s$, $\Omega$ be an open bounded subset of $\mathbb R^n$ with continuous boundary. Let $j$, $k\in\mathbb{N}$, with $j\leq k$, and let $f$ be a function satisfying assumptions \eqref{f1}, \eqref{odd}, \eqref{f2}, \eqref{f3}, \eqref{ft5} and \eqref{ft4}.
Then, there exists $\gamma_{k,j}\in(0,+\infty]$ such that \eqref{P} admits at least $k-j+1$ pairs of non--trivial solutions for any $\gamma\in(0,\gamma_{k,j})$.
\end{theorem}
A natural question is to investigate what happens when $f$ has no symmetry. In this case it is still possible to get a multiplicity result, by studying two truncated problems related to \eqref{P}. These auxiliary problems are still variational and, by using the Mountain Pass Theorem, we get at least two solutions of different sign for them, as stated in the following result:
\begin{theorem}\label{c}
Let $s\in (0,1)$, $n>2s$, $\Omega$ be an open bounded subset of $\mathbb R^n$ with continuous boundary. Let $f$ satisfy $f(x,0)=0$, \eqref{f1}, \eqref{f2}, \eqref{f3}, \eqref{ft5} and \eqref{ft4} with $j=k=1$.
Then, there exists $\gamma_1>0$ such that \eqref{P} admits a non--trivial non--negative and a non--trivial non--positive solution for any $\gamma\in(0,\gamma_1)$.
\end{theorem}
The main tools used in order to prove Theorem~\ref{a}--Theorem~\ref{c} are variational and topological methods, together with a suitable decomposition, based on the eigenvalues of the fractional Laplace operator, of the functional space $X_0^s(\Omega)$ in which we look for solutions of problem~\eqref{P}.
An interesting open problem is to prove the main results of the present paper in a more general framework, like the one given in the following problem:
\begin{equation}\label{Pgen}
\leqslantslantft\{\begin{array}{ll}
-\mathcal L^p_K u=\gamma\left|u\right|^{p^*-2}u+f(x,u) & \mbox{in } \Omega\\
u=0 & \mbox{in } \mathbb{R}^{n}\setminus\Omega\,,
\end{array}
\right.
\end{equation}
where $\Omega\subset\mathbb R^n$ is an open and bounded set with continuous boundary, $n>ps\geq 2s$, $p^*=pn/(n-ps)$ and $\mathcal L^p_K$ is a nonlocal operator defined as follows:
$$\mathcal L^p_K u(x)=2\lim_{\varepsilon\searrow0}\int_{\mathbb{R}^{n}\setminus B_{\varepsilon}(x)}|u(x)-u(y)|^{p-2}(u(x)-u(y))K(x-y)dy,\quad x\in\mathbb R^n\,.$$
Here, the kernel $K:\mathbb{R}^{n}\setminus \left\{0\right\}\rightarrow(0,+\infty)$ is a
{\em measurable function} for which
\begin{equation}\label{K1}
mK\in L^{1}(\mathbb{R}^{n}),\textit{ with }\; m(x)=\min\left\{\left|x\right|^{p},1\right\};
\end{equation}
\begin{equation}\label{K2}
\textit{there exists}\;\; \theta >0\;\textit{such that }\;K(x)\geq\theta\left|x\right|^{-(n+ps)}\;\textit{for any } x\in\mathbb{R}^{n}\setminus\{0\},
\end{equation}
hold true. A model for $\mathcal L^p_K$ is given by the fractional $p$-Laplacian $(-\Delta)^s_p$, which (up to normalization factors) may be defined for any $x\in\mathbb{R}^{n}$ as
$$(-\Delta)^s_p u(x)=2\lim_{\varepsilon\searrow0}\int_{\mathbb R^n\setminus B_{\varepsilon}(x)}\frac{\left|u(x)-u(y)\right|^{p-2}(u(x)-u(y))}{|x-y|^{n+sp}}\,dy\,.$$
For problem~\eqref{Pgen} the appropriate functional space in which to look for solutions is $X^{s,\,p}_0(\Omega)$, defined as
$$X^{s,\,p}_0(\Omega)=\{g\in X^{s,\,p}(\Omega) : g=0 \mbox{ a.e. in }
\mathbb R^n\setminus\Omega\}.$$
Here $X^{s,\,p}(\Omega)$ denotes the linear space of Lebesgue
measurable functions $u:\mathbb R^n\rightarrow\mathbb R$ whose restrictions to $\Omega$ belong to $L^p(\Omega)$ and such that
\[\mbox{the map }
(x,y)\mapsto |u(x)-u(y)|^p K(x-y) \mbox{ is in } L^1\big(Q,dxdy\big),
\]
where $Q=\mathbb R^n \times \mathbb R^n\setminus\left((\mathbb R^n\setminus\Omega)\times(\mathbb R^n\setminus\Omega)\right)$. It is readily seen that $X^{s,\,p}_0(\Omega)$ is a Banach space endowed with the following norm
\begin{equation}\label{norma2}
\|u\|_{s,p}=
\Big(\iint_{\mathbb R^n \times \mathbb R^n}
|u(x)-u(y)|^pK(x-y)\,dx\,dy\Big)^{1/p}\,.
\end{equation}
When $p=2$ and $K(x)=\left|x\right|^{-(n+2s)}$ the space $X^{s,\,p}_0(\Omega)$ coincides with $X_0^s(\Omega)$ defined in \eqref{spazio} (see \cite[Lemma~5]{svmountain}). In such a case the statements of Theorem~\ref{a}--Theorem~\ref{c} are still valid and their proofs can be performed exactly with the same arguments considered in the model case of the fractional Laplace operator $(-\Delta)^s$.
In order to treat problem~\eqref{Pgen} when $p\not =2$ we have to adapt in a suitable way the arguments used for studying \eqref{P}. In this case the main difficulty is to understand how to decompose the space $X^{s,\,p}_0(\Omega)$. Indeed, when $p\not=2$ the full spectrum of $(-\Delta)^s_p$ and of $\mathcal L^p_K$ is still largely unknown, even if some important properties of the first eigenvalue and of the higher order (variational) eigenvalues have been established in \cite{fp, ll}.
We recall that in \cite{bartolomolica} the authors proposed a definition of quasi--eigenvalues for $(-\Delta)^s_p$ and, using them, considered a suitable decomposition of $X^{s,\,p}_0(\Omega)$ which reduces to the known one when $p=2$.
The paper is organized as follows.
In Section~\ref{sec variational} we introduce the variational
formulation of the problem under consideration. Section~\ref{sec palais} is devoted to the proof of the compactness property for problem~\eqref{P}.
In Section~\ref{sec main} we conclude the proofs of Theorem~\ref{a}--Theorem~\ref{c}.
\section{Variational setting}\label{sec variational}
Problem~\eqref{P} has a variational structure and the natural space in which to look for solutions is the homogeneous fractional Sobolev space $H^s_0(\Omega)$. In order to study \eqref{P} it is important to encode the ``boundary condition'' $u=0$ in
$\mathbb{R}^n\setminus\Omega$ (which is different from the classical case of the Laplacian, where one requires $u=0$ on $\partial \Omega$) in the weak formulation,
taking also into account that the interaction between $\Omega$ and its complement in $\mathbb{R}^n$ gives a positive contribution
in the so-called {\em Gagliardo norm} given as
\begin{equation}\label{norma}
\left\|u\right\|_{H^s(\mathbb R^n)}=\left\|u\right\|_{L^2(\mathbb R^n)}+\Big(\iint_{\mathbb R^n \times \mathbb R^n} \frac{|u(x)-u(y)|^2}{\left|x-y\right|^{n+2s}}\,dx\,dy\Big)^{1/2}.
\end{equation}
The functional space that takes into account this boundary condition will be denoted by $X_0^s(\Omega)$ and it is defined as
\begin{equation}\label{spazio}
X_0^s(\Omega)=\big\{u\in H^s(\mathbb R^n):\,u=0\mbox{ a.e. in } \mathbb R^n\setminus \Omega\big\}.
\end{equation}
We refer to \cite{svmountain, servadeivaldinociBN} for a general definition of $X_0^s(\Omega)$ and its properties.
We also would like to point out that, when $\partial\Omega$ is continuous, by \cite[Theorem~6]{fsv} the space $X_0^s(\Omega)$ can be seen as the closure of $C^\infty_0(\Omega)$ with respect to the norm \eqref{norma}. This last point will play a crucial role in the proof of the compactness condition for the energy functional related to \eqref{P}.
In $X_0^s(\Omega)$ we can consider the following norm
\begin{equation}\label{normax0}
\left\|u\right\|_{X_0^s(\Omega)}=\Big(\iint_{\mathbb R^n \times \mathbb R^n} \frac{|u(x)-u(y)|^2}{\left|x-y\right|^{n+2s}}\,dx\,dy\Big)^{1/2},
\end{equation}
which is equivalent to the usual one defined in \eqref{norma} (see \cite[Lemma~6]{svmountain}).
We also recall that $(X_0^s(\Omega),\left\|\cdot\right\|_{X_0^s(\Omega)})$ is a Hilbert space, with the scalar product defined as
\begin{equation}\label{prodottoscalare}
\left\langle u,v\right\rangle_{X_0^s(\Omega)}=\iint_{\mathbb R^n \times \mathbb R^n} \frac{(u(x)-u(y))(v(x)-v(y))}{\left|x-y\right|^{n+2s}}\,dx\,dy.
\end{equation}
{\it From now on, in order to simplify the notation, we will denote $\|\cdot\|_{X_0^s(\Omega)}$ and $\left\langle \cdot,\cdot\right\rangle_{X_0^s(\Omega)}$ by $\|\cdot\|$ and $\left\langle \cdot,\cdot\right\rangle$ respectively, and $\|\cdot\|_{L^q(\Omega)}$ by $\|\cdot\|_q$ for any $q\in[1,+\infty]$.}
A function $u\in X_0^s(\Omega)$ is said to be a ({\em weak}) {\em solution of problem}~\eqref{P} if $u$ satisfies the following weak formulation
\begin{equation}\label{wf}
\left\langle u,\varphi\right\rangle
=\displaystyle\gamma\int_{\Omega} \left|u(x)\right|^{2^*-2}u(x)\varphi(x)dx+\int_{\Omega} f(x,u(x))\varphi(x) dx,
\end{equation}
for any $\varphi \in X_0^s(\Omega)$.
We observe that \eqref{wf} represents the Euler--Lagrange equation of the functional~$\mathcal J_\gamma:X_0^s(\Omega)\to \mathbb R$ defined as
\begin{equation}\label{Jgamma}
\mathcal J_\gamma(u)=\frac 1 2 \left\|u\right\|^2-\frac{\gamma}{2^*}\left\|u\right\|^{2^*}_{2^*}
-\int_\Omega F(x,u(x))\,dx\,,
\end{equation}
where $F$ is as in \eqref{F}. It is easily seen that $\mathcal J_\gamma$ is well defined thanks to \eqref{f1}--\eqref{f2} and \cite[Lemma~6]{svmountain}.
Moreover, $\mathcal J_\gamma\in C^1(X_0^s(\Omega))$, so that critical points of $\mathcal J_\gamma$ are exactly the solutions of
\eqref{wf}, that is, weak solutions of \eqref{P}.
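More precisely, we record here the expression of the Gateaux derivative of $\mathcal J_\gamma$ (it follows directly from \eqref{Jgamma} and will be used repeatedly in the sequel): for any $u$, $\varphi\in X_0^s(\Omega)$
$$
\mathcal J'_\gamma(u)(\varphi)=\langle u,\varphi\rangle-\gamma\int_\Omega|u(x)|^{2^*-2}u(x)\varphi(x)\,dx-\int_\Omega f(x,u(x))\varphi(x)\,dx\,,
$$
so that $u$ is a weak solution of \eqref{P} if and only if $\mathcal J'_\gamma(u)=0$.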
The proofs of Theorem~\ref{a} and Theorem~\ref{b} are mainly based on variational and topological methods.
Precisely, here we will use the following version of the symmetric Mountain Pass Theorem (see \cite{ar, bartolo, silva}).
\begin{theorem}[Abstract critical point theorem]\label{abs}
Let $E=V\oplus X$, where $E$ is a real Banach space and $V$ is finite dimensional. Suppose that $\mathcal I\in C^1(E,\mathbb{R})$ is a functional satisfying the following conditions:
\begin{itemize}
\item [$(I_1)$] $\mathcal I(u)=\mathcal I(-u)$ and $\mathcal I(0)=0$;
\item [$(I_2)$] there exists a constant $\rho>0$ such that $\mathcal I|_{\partial B_\rho\cap X}\geq 0$;
\item [$(I_3)$] there exists a subspace $W\subset E$ with $\mbox{dim } V<\mbox{dim } W<+\infty$ and there is $M>0$ such that ${\displaystyle \max_{u\in W}\mathcal I(u)<M}$;
\item [$(I_4)$] considering $M>0$ from $(I_3)$, $\mathcal I(u)$ satisfies $(PS)_c$ condition for $0\leq c\leq M$.
\end{itemize}
Then, there exist at least $\mbox{dim }W-\mbox{dim }V$ pairs of non--trivial critical points of $\mathcal I$.
\end{theorem}
In order to prove our main results, the idea consists in applying Theorem~\ref{abs} to the functional $\mathcal J_\gamma$. To this purpose,
note that when $f$ is odd in $t$, $\mathcal J_\gamma$ is even and also $\mathcal J_\gamma(0)=0$.
{\it Thus, condition~$(I_1)$ of Theorem~\ref{abs} is always verified by $\mathcal J_\gamma$ and we will not recall it in the sequel.}
For the proof of Theorem~\ref{c} we will use the following version of the Mountain Pass Theorem (see \cite{silva}):
\begin{theorem}\label{abs2}
Let $E$ be a real Banach space. Suppose that $\mathcal I\in C^1(E,\mathbb{R})$ is a functional satisfying the following conditions:
\begin{itemize}
\item [$(I_1)$] $\mathcal I(0)=0$;
\item [$(I_2)$] there exists a constant $\rho>0$ such that $\mathcal I|_{\partial B_\rho}\geq 0$;
\item [$\widehat{(I_3)}$] there exist $v_1\in\partial B_1$ and $M>0$ such that ${\displaystyle \sup_{t\geq0}\mathcal I(tv_1)\leq M}$;
\item [$(I_4)$] considering $M>0$ from $\widehat{(I_3)}$, $\mathcal I(u)$ satisfies $(PS)_c$ condition for $0\leq c\leq M$.
\end{itemize}
Then, $\mathcal I$ possesses a non--trivial critical point.
\end{theorem}
\section{The Palais--Smale condition}\label{sec palais}
In this section we verify that the functional $\mathcal J_\gamma$ satisfies the $(PS)_c$ condition below a suitable level. For this, we use some preliminary estimates concerning the nonlinearity $f$ and its primitive $F$. By \eqref{f1} and \eqref{f2}, for any $\varepsilon>0$ there exists a constant $C_\varepsilon>0$ such that
\begin{equation}\label{sottocritico}
\left|f(x,t)t\right|\leq C_\varepsilon+\varepsilon\left|t\right|^{2^*}\quad\mbox{for any }t\in\mathbb{R}\mbox{ and a.e. }x\in\Omega
\end{equation}
and
\begin{equation}\label{sottocritico2}
\left|F(x,t)\right|\leq C_\varepsilon+\frac{\varepsilon}{2^*}\left|t\right|^{2^*}\quad\mbox{for any }t\in\mathbb{R}\mbox{ and a.e. }x\in\Omega.
\end{equation}
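For completeness, we briefly sketch the standard argument behind these bounds: by \eqref{f2}, for any $\varepsilon>0$ there exists $M_\varepsilon>0$ such that $|f(x,t)|\leq\varepsilon|t|^{2^*-1}$ whenever $|t|>M_\varepsilon$, while \eqref{f1} bounds $|f(x,t)|$ for $|t|\leq M_\varepsilon$; hence
$$
|f(x,t)t|\leq M_\varepsilon\sup\Big\{|f(x,\tau)|:\;x\in\Omega,\;|\tau|\leq M_\varepsilon\Big\}+\varepsilon|t|^{2^*}\quad\mbox{for any }t\in\mathbb{R}\mbox{ and a.e. }x\in\Omega,
$$
which gives \eqref{sottocritico} with $C_\varepsilon:=M_\varepsilon\sup\{|f(x,\tau)|:x\in\Omega,\,|\tau|\leq M_\varepsilon\}$, and \eqref{sottocritico2} then follows by integrating in $t$.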
We recall that $\left\{u_{j}\right\}_{j\in\mathbb{N}}\subset X_0^s(\Omega)$ is a {\it Palais--Smale sequence for $\mathcal J_\gamma$ at level $c\in\mathbb{R}$} (in short $(PS)_c$ sequence) if
\begin{equation}\label{ps1}
\mathcal J_\gamma(u_j)\rightarrow c\quad\mbox{and}\quad\mathcal J'_\gamma(u_j)\rightarrow0\quad\mbox{as}\;j\rightarrow +\infty.
\end{equation}
We say that $\mathcal J_\gamma$ {\it satisfies the Palais--Smale condition at level} $c$ if any Palais--Smale sequence
$\left\{u_{j}\right\}_{j\in\mathbb{N}}$ at level $c$ admits a convergent subsequence in $X_0^s(\Omega)$.
As usual, we first prove the boundedness of the $(PS)_c$ sequence.
\begin{lemma}\label{bound}
Let $f$ satisfy \eqref{f1}, \eqref{f2} and \eqref{f3}. For any $\gamma>0$, let $c>0$ and $\left\{u_{j}\right\}_{j\in\mathbb{N}}$ be a $(PS)_c$ sequence for $\mathcal J_\gamma$.
Then, $\left\{u_{j}\right\}_{j\in\mathbb{N}}$ is bounded in $X_0^s(\Omega)$.
\end{lemma}
\begin{proof}
Fix $\gamma>0$. By \eqref{ps1} there exists $C>0$ such that
\begin{equation}\label{3.10}
\left|\mathcal J_\gamma(u_j)\right|\leq C\quad\mbox{and}\quad\left| \mathcal J'_\gamma(u_j)\left(\frac{u_j}{\left\|u_j\right\|}\right)\right|\leq C
\quad\mbox{for any }j\in\mathbb{N}.
\end{equation}
Moreover, by \eqref{f3} and the H\"{o}lder inequality we have
\begin{equation}\label{3.11}
\mathcal J_\gamma(u_j)-\frac{1}{2}\mathcal J'_\gamma(u_j)(u_j)\geq\frac{s\gamma}{n}\left\|u_j\right\|^{2^*}_{2^*}-a_1\left|\Omega\right|-a_2\left|\Omega\right|^{\frac{2^*-\sigma}{2^*}}\left\|u_j\right\|^{\sigma}_{2^*}.
\end{equation}
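Indeed, writing out $\mathcal J_\gamma$ and $\mathcal J'_\gamma$ explicitly (we add this intermediate step for the reader's convenience), we have
$$
\mathcal J_\gamma(u_j)-\frac{1}{2}\mathcal J'_\gamma(u_j)(u_j)=\gamma\Big(\frac12-\frac{1}{2^*}\Big)\left\|u_j\right\|^{2^*}_{2^*}+\int_\Omega\Big(\frac12 f(x,u_j(x))u_j(x)-F(x,u_j(x))\Big)dx\,,
$$
with $\frac12-\frac{1}{2^*}=\frac{s}{n}$; then \eqref{f3} and the H\"{o}lder inequality applied to $\int_\Omega|u_j(x)|^\sigma dx$ yield \eqref{3.11}.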
From Young's inequality with exponents $p=2^*/\sigma$ and $q=2^*/(2^*-\sigma)$ we also get
\[\left\|u_j\right\|^{\sigma}_{2^*}\leq\delta\left\|u_j\right\|^{2^*}_{2^*}+C_\delta,
\]
for suitable $\delta$, $C_\delta>0$. The last inequality combined with \eqref{3.10} and \eqref{3.11} says that
\begin{equation}\label{3.12}
\left\|u_j\right\|^{2^*}_{2^*}\leq C'\Big(\left\|u_j\right\|+1\Big),
\end{equation}
for another positive constant $C'$.
Now, by \eqref{sottocritico2}, \eqref{3.10} and \eqref{3.12} we obtain
\[C\geq\mathcal J_\gamma(u_j)\geq\frac{1}{2}\left\|u_j\right\|^2-\left(\frac{C'\gamma}{2^*}+\frac{C'\varepsilon}{2^*}\right)\left(1+\left\|u_j\right\|\right)-C_\varepsilon\left|\Omega\right|,
\]
which gives the boundedness of $\left\{u_{j}\right\}_{j\in\mathbb{N}}$ in $X_0^s(\Omega)$.
\end{proof}
Now, we can prove the relative compactness of $(PS)_c$ sequences below a suitable level. Here, we must pay attention to the lack of compactness of the embedding of $X_0^s(\Omega)$ into $L^{2^*}(\Omega)$.
\begin{lemma}\label{palais}
Let $f$ satisfy \eqref{f1}, \eqref{f2} and \eqref{f3}.
Then, for any $M>0$ there exists $\gamma^*>0$ such that $\mathcal J_\gamma$ satisfies the $(PS)_c$ condition for any $c\leq M$, provided $0<\gamma<\gamma^*$.
\end{lemma}
\begin{proof}
Fix $M>0$. We set
\begin{equation}\label{gamma}
\gamma^*=\min\left\{S(n,s), \left[\left(S(n,s)\right)^{\frac{n}{2s}}\left(\frac{s}{n(M+A)}\right)^{\frac{2^*}{2^*-\sigma}}\right]^{\frac{1}{n/2s-2^*/(2^*-\sigma)}}\right\}
\end{equation}
with
\begin{equation}\label{A}
A=a_1\left|\Omega\right|+a_2\left|\Omega\right|^{\frac{2^*-\sigma}{2^*}}\,,
\end{equation}
where $a_1$, $a_2$, $\sigma$ are the constants given in \eqref{f3}, while $S(n,s)$ is the best constant of the fractional Sobolev embedding (see \cite[Lemma~6]{svmountain}) defined as
\begin{equation}\label{Sns}
S(n,s)=\inf_{v\in H^s(\mathbb R^n)\setminus\left\{0\right\}}\frac{\displaystyle{\iint_{\mathbb R^n \times \mathbb R^n}\frac{\left|v(x)-v(y)\right|^2}{\left|x-y\right|^{n+2s}}dx dy}}{\displaystyle{\left(\int_{\mathbb R^n}\left|v(x)\right|^{2^*}dx\right)^{2/2^*}}}>0.
\end{equation}
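In particular, by the definition of $S(n,s)$ and \eqref{normax0}, for every $v\in X_0^s(\Omega)$ one has (an equivalent formulation of the fractional Sobolev inequality that will be used implicitly below)
$$
S(n,s)\left\|v\right\|^2_{2^*}\leq\left\|v\right\|^2\,.
$$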
Given $\gamma<\gamma^*$ and $c\leq M$, let us consider a $(PS)_c$ sequence $\left\{u_{j}\right\}_{j\in\mathbb{N}}$ for $\mathcal J_\gamma$.
Since by Lemma~\ref{bound} we have that $\left\{u_{j}\right\}_{j\in\mathbb{N}}$ is bounded in $X_0^s(\Omega)$, by applying also \cite[Lemma~8]{sv} and \cite[Theorem~IV.9]{brezis}, there exists $u\in X_0^s(\Omega)$ such that, up to a subsequence,
\begin{equation}\label{ujconvX0}
u_j \rightharpoonup u \quad \mbox{weakly in}\,\, X_0^s(\Omega),
\end{equation}
\begin{equation}\label{ujconvLq}
u_j \to u \quad \mbox{in}\,\, L^q(\Omega),
\end{equation}
with $q\in[1,2^*)$ and
\begin{equation}\label{ujconvae}
u_j \to u \quad \mbox{a.e in}\,\, \Omega,
\end{equation}
as $j\to +\infty$.
Now, we claim that
\begin{equation}\label{claim}
\left\|u_j\right\|^2\to\left\|u\right\|^2\quad\mbox{as }j\to +\infty,
\end{equation}
which easily implies that $u_j\to u$ in $X_0^s(\Omega)$ as $j\to +\infty$, thanks to \eqref{ujconvX0}.
First of all, from Prokhorov's Theorem we deduce the existence of two positive measures $\mu$ and $\nu$ on $\mathbb{R}^n$ such that
\begin{equation}\label{convergenza misure}
\left|(-\Delta)^{s/2} u_j(x)\right|^2 dx\stackrel{*}{\rightharpoonup}\mu\quad\mbox{and}\quad\left|u_j(x)\right|^{2^*}dx\rightharpoonup\nu\quad\mbox{in }\mathcal M(\mathbb R^n)
\end{equation}
as $j\to +\infty$.
By \cite[Theorem~6]{fsv}, thanks to our assumptions on $\partial\Omega$, it is easy to see that $X_0^s(\Omega)$ can also be defined as the closure of $C^\infty_0(\Omega)$ with respect to the norm \eqref{norma}. Hence, $X_0^s(\Omega)$ is consistent with the functional space introduced in \cite{palatuccipisante}.
Thus, by \cite[Theorem~2]{palatuccipisante} we obtain an at most countable set of distinct points $\left\{x_i\right\}_{i\in J}$,
non--negative numbers $\left\{\nu_i\right\}_{i\in J}$, $\left\{\mu_i\right\}_{i\in J}$ and a positive measure $\widetilde{\mu}$,
with $Supp\; \widetilde{\mu}\subset\overline{\Omega}$, such that
\begin{equation}\label{nu}
\nu=\left|u(x)\right|^{2^*}dx+\sum_{i\in J} \nu_i\delta_{x_i}, \quad \mu=\left|(-\Delta)^{s/2} u(x)\right|^2 dx+\widetilde{\mu}+\sum_{i\in J}\mu_i\delta_{x_i},
\end{equation}
and
\begin{equation}\label{mu}
\nu_i\leq S(n,s)^{-\frac{2^*}{2}}\mu^{\frac{2^*}{2}}_{i}
\end{equation}
for any $i\in J$, where $S(n,s)$ is the constant given in \eqref{Sns}.
Now, in order to prove \eqref{claim} we proceed by steps.
\begin{step}\label{step1}
Fix $i_0\in J$. Then, either $\nu_{i_0}=0$ or
\begin{equation}\label{step}
\nu_{i_0}\geq\displaystyle\left[\frac{S(n,s)}{\gamma}\right]^{n/2s}.
\end{equation}
\end{step}
\begin{proof}
Let $\psi\in C^{\infty}_{0}(\mathbb{R}^n,[0,1])$ be such that $\psi\equiv 1$ in $B(0,1)$ and $\psi\equiv0$ in $\mathbb{R}^n\setminus B(0,2)$. For any $\delta\in(0,1)$ we set $$\psi_{\delta,i_0}(x)=\psi\Big((x-x_{i_0})/\delta\Big)\,.$$
Clearly the sequence $\left\{\psi_{\delta,i_0} u_j\right\}_{j\in\mathbb{N}}$ is bounded in $X_0^s(\Omega)$ by Lemma~\ref{bound}, and so by \eqref{ps1} it follows that
$$\mathcal J'_\gamma(u_j)(\psi_{\delta,i_0} u_j)\to 0$$ as $j\to +\infty$. In other words
\begin{equation}\label{that is}
\begin{alignedat}{2}
o(1)+&\iint_{\mathbb R^n \times \mathbb R^n}\frac{\big(u_j(x)-u_j(y)\big)\big(\psi_{\delta,i_0}(x)u_j(x)-\psi_{\delta,i_0}(y)u_j(y)\big)}
{\left|x-y\right|^{n+2s}}\,dxdy\\
&=\gamma\int_\Omega \left|u_j(x)\right|^{2^*}\psi_{\delta,i_0}(x)dx+\int_\Omega f(x, u_j(x))\psi_{\delta,i_0}(x)u_j(x)dx,
\end{alignedat}
\end{equation}
as $j\to +\infty$.
By \cite[Proposition~3.6]{valpal} and taking into account the definition of $(-\Delta)^s$ given in \eqref{delta}, we know that for any $v\in X_0^s(\Omega)$
$$\iint_{\mathbb R^n \times \mathbb R^n}\frac{|v(x)-v(y)|^2}{\left|x-y\right|^{n+2s}}\, dxdy=\int_{\mathbb{R}^n}\left|(-\Delta)^{s/2}v(x)\right|^2dx\,.$$
By polarization of the above identity, for any $v, w\in X_0^s(\Omega)$ we obtain
\begin{equation}\label{deriv}
\iint_{\mathbb R^n \times \mathbb R^n} \frac{(v(x)-v(y))(w(x)-w(y))}{\left|x-y\right|^{n+2s}}dxdy=\int_{\mathbb{R}^n}(-\Delta)^{s/2}v(x)(-\Delta)^{s/2}w(x)dx.
\end{equation}
Furthermore, for any $v, w\in X_0^s(\Omega)$ we have
\begin{equation}\label{prod}
(-\Delta)^{s/2}(v w)=v(-\Delta)^{s/2}w+w(-\Delta)^{s/2}v-2I_{s/2}(v,w),
\end{equation}
where the last term is defined, in the principal value sense, as follows
\[I_{s/2}(v,w)(x)=P.V.\int_{\mathbb{R}^n}\frac{(v(x)-v(y))(w(x)-w(y))}{\left|x-y\right|^{n+s}}\,dy
\]
for any $x\in\mathbb R^n$.
Thus, by \eqref{deriv} and \eqref{prod} the integral on the left--hand side of \eqref{that is} becomes
\begin{equation}\label{that2}
\begin{aligned}
&\iint_{\mathbb R^n \times \mathbb R^n}\frac{\big(u_j(x)-u_j(y)\big)\big(\psi_{\delta,i_0}(x)u_j(x)-\psi_{\delta,i_0}(y)u_j(y)\big)}{\left|x-y\right|^{n+2s}}\,dxdy\\
& \quad = \int_{\mathbb{R}^n}(-\Delta)^{s/2}u_j(x)(-\Delta)^{s/2}(\psi_{\delta,i_0}u_j)(x)dx\\
& \quad =\int_{\mathbb{R}^n}u_j(x)(-\Delta)^{s/2}u_j(x)(-\Delta)^{s/2}\psi_{\delta,i_0}(x)dx\\
&\quad\quad+\int_{\mathbb{R}^n}\left|(-\Delta)^{s/2}u_j(x)\right|^2\psi_{\delta,i_0}(x)dx\\
&\quad\quad-2\int_{\mathbb{R}^n}(-\Delta)^{s/2}u_j(x)\int_{\mathbb{R}^n}\frac{(u_j(x)-u_j(y))(\psi_{\delta,i_0}(x)-\psi_{\delta,i_0}(y))}{\left|x-y\right|^{n+s}}dxdy.
\end{aligned}
\end{equation}
By \cite[Lemma~2.8 and Lemma~2.9]{colorado2} we have
\begin{equation}\label{2.8}
\lim_{\delta\rightarrow0}\lim_{j\rightarrow +\infty}\left|\int_{\mathbb{R}^n}u_j(x)(-\Delta)^{s/2}u_j(x)(-\Delta)^{s/2}\psi_{\delta,i_0}(x)dx\right|=0
\end{equation}
and
\begin{equation}\label{2.9}
\lim_{\delta\rightarrow0}\lim_{j\rightarrow +\infty}\left|\int_{\mathbb{R}^n}(-\Delta)^{s/2}u_j(x)\int_{\mathbb{R}^n}\frac{(u_j(x)-u_j(y))
(\psi_{\delta,i_0}(x)-\psi_{\delta,i_0}(y))}{\left|x-y\right|^{n+s}}dxdy\right|=0.
\end{equation}
Then, by combining \eqref{that2}--\eqref{2.9} and \eqref{convergenza misure}--\eqref{nu} we get
\begin{equation}\label{that3}
\lim_{\delta\rightarrow0}\lim_{j\rightarrow +\infty}\iint_{\mathbb R^n \times \mathbb R^n}
\frac{\big(u_j(x)-u_j(y)\big)\big(\psi_{\delta,i_0}(x)u_j(x)-\psi_{\delta,i_0}(y)u_j(y)\big)}
{\left|x-y\right|^{n+2s}}dxdy\geq\mu_{i_0}.
\end{equation}
On the other hand, by \eqref{sottocritico} and the Dominated Convergence Theorem we get
\begin{equation*}
\int_{B(x_{i_0},2\delta)} f(x, u_j(x))u_j(x)\psi_{\delta,i_0}(x)dx\rightarrow\int_{B(x_{i_0},2\delta)} f(x, u(x))u(x)\psi_{\delta,i_0}(x)dx\quad\mbox{as }j\rightarrow +\infty,
\end{equation*}
and so by sending $\delta\rightarrow0$ we observe that
\begin{equation}\label{term f}
\lim_{\delta\rightarrow0}\lim_{j\rightarrow +\infty}\int_{B(x_{i_0},2\delta)} f(x, u_j(x))u_j(x)\psi_{\delta,i_0}(x)dx=0.
\end{equation}
Furthermore, by \eqref{convergenza misure} it follows that
\[\int_\Omega \left|u_j(x)\right|^{2^*}\psi_{\delta,i_0}(x)dx\to\int_\Omega \psi_{\delta,i_0}(x)d\nu\quad\mbox{as }j\to +\infty\,.
\]
Finally, by combining this last formula with \eqref{that is}, \eqref{that3} and \eqref{term f} we get
\begin{equation*}
\nu_{i_0}\geq\frac{\mu_{i_0}}{\gamma}.
\end{equation*}
Thus, from this and \eqref{mu} with $i=i_0$ we have that
$$\nu_{i_0}\geq \frac{\nu_{i_0}^{2/2^*}S(n,s)}{\gamma}\,,$$
which yields, since $1-2/2^*=2s/n$, that
either $\nu_{i_0}=0$ or $\nu_{i_0}$ verifies \eqref{step}. This ends the proof of Step~\ref{step1}.
\end{proof}
\begin{step}\label{s2}
The estimate \eqref{step} cannot hold, hence $\nu_{i_0}=0$.
\end{step}
\begin{proof}
For this, it is enough to see that
\begin{equation}\label{claim2}
\int_\Omega d\nu<\left[\frac{S(n,s)}{\gamma}\right]^{\frac{n}{2s}}\,.
\end{equation}
To prove \eqref{claim2}, let us consider two cases. First of all, assume that
\begin{equation}\label{v<1}
{\displaystyle \int_\Omega d\nu\leq 1}\,.
\end{equation}
Since $\gamma<\gamma^*$ and $\gamma^*<S(n,s)$ by \eqref{gamma}, we have
\[1<\left(\frac{S(n,s)}{\gamma}\right)^{\frac{n}{2s}},
\]
from which \eqref{claim2} immediately follows, thanks to \eqref{v<1}.
Now, assume that ${\displaystyle \int_\Omega d\nu>1}$. Since $\left\{u_{j}\right\}_{j\in\mathbb{N}}$ is a $(PS)_c$ sequence for $\mathcal J_\gamma$, arguing as in Lemma~\ref{bound} (see formula~\eqref{3.11}) we get
\begin{equation}\label{3.11'}
\mathcal J_\gamma(u_j)-\frac{1}{2}\mathcal J'_\gamma(u_j)(u_j)\geq\frac{s\gamma}{n}\left\|u_j\right\|^{2^*}_{2^*}
-a_1\left|\Omega\right|-a_2\left|\Omega\right|^{\frac{2^*-\sigma}{2^*}}\left\|u_j\right\|^{\sigma}_{2^*}\,.
\end{equation}
By sending $j\rightarrow +\infty$ in \eqref{3.11'} and using \eqref{ps1}, \eqref{convergenza misure} we obtain
$$\begin{aligned}
\frac{s\gamma}{n}\int_\Omega d\nu & \leq c+a_1\left|\Omega\right|+a_2\left|\Omega\right|^{\frac{2^*-\sigma}{2^*}}\Big(\int_\Omega d\nu\Big)^{\frac{\sigma}{2^*}}\\
& \leq\left(M+a_1\left|\Omega\right|+a_2\left|\Omega\right|^{\frac{2^*-\sigma}{2^*}}\right)\Big(\int_\Omega d\nu\Big)^{\frac{\sigma}{2^*}}\\
& = (M+A)\Big(\int_\Omega d\nu\Big)^{\frac{\sigma}{2^*}}\,,
\end{aligned}$$
thanks to the choice of $c\leq M$, the fact that $\int_\Omega d\nu>1$ and the definition of $A$ given in \eqref{A}. Hence we get
\begin{equation}\label{addraffy222}
\int_\Omega d\nu\leq \left[\frac{n(M+A)}{s\gamma}\right]^{\frac{2^*}{2^*-\sigma}}\,.
\end{equation}
By \eqref{gamma} and the fact that $\gamma<\gamma^*$ we know that
$$\gamma < \left[\left(S(n,s)\right)^{\frac{n}{2s}}\left(\frac{s}{n(M+A)}\right)^
{\frac{2^*}{2^*-\sigma}}\right]^{\frac{1}{n/2s-2^*/(2^*-\sigma)}}\,,$$
that is
$$\gamma^{\frac{n}{2s}-\frac{2^*}{2^*-\sigma}} < \left(S(n,s)\right)^{\frac{n}{2s}}\left(\frac{s}{n(M+A)}\right)^
{\frac{2^*}{2^*-\sigma}}\,,$$
which yields
\[\left[\frac{n(M+A)}{s\gamma}\right]^{\frac{2^*}{2^*-\sigma}}
<\left(\frac{S(n,s)}{\gamma}\right)^{\frac{n}{2s}}\,.
\]
From this and \eqref{addraffy222}
we get \eqref{claim2}. Thus, the proof of Step~\ref{s2} is complete and $\nu_{i_0}=0$.
\end{proof}
\begin{step}\label{s3}
Claim \eqref{claim} holds true.
\end{step}
\begin{proof}
Since $i_0$ was arbitrary in Step~\ref{step1}, we deduce that $\nu_i=0$ for any $i\in J$. As a consequence, also from \eqref{convergenza misure} and \eqref{nu} it follows that
$u_j\to u$ in $L^{2^*}(\Omega)$ as $j\rightarrow +\infty$.
Thus, by \eqref{sottocritico}, the fact that
\begin{equation}\label{ps1ADD}
\mathcal J'_\gamma(u_j)\rightarrow0\quad\mbox{as}\;j\rightarrow +\infty
\end{equation}
(since $\left\{u_{j}\right\}_{j\in\mathbb{N}}$ is a $(PS)_c$ sequence for $\mathcal J_\gamma$)
and the Dominated Convergence Theorem, we have
\begin{equation}\label{4.9}
\lim_{j\to +\infty}\left\|u_j\right\|^2=\gamma\int_\Omega\left|u(x)\right|^{2^*}dx+\int_\Omega f(x,u(x))u(x)dx.
\end{equation}
Moreover, by remembering that $u_j\rightharpoonup u$ in $X_0^s(\Omega)$ and by using again \eqref{sottocritico}, \eqref{ps1ADD}
and the Dominated Convergence Theorem, we have
\begin{equation}\label{4.10}
\left\langle u,\varphi\right\rangle=\gamma\int_\Omega \left|u(x)\right|^{2^*-2}u(x)\varphi(x)dx+\int_\Omega f(x, u(x))\varphi(x)dx,
\end{equation}
for any $\varphi\in X_0^s(\Omega)$. Thus, by combining \eqref{4.9} and \eqref{4.10} with $\varphi=u$ we get the claim \eqref{claim}, concluding the proof of Step~\ref{s3}.
\end{proof}
Hence, the proof of Lemma~\ref{palais} is complete.
\end{proof}
\section{Main theorems}\label{sec main}
This section is devoted to the proof of the main results of the paper. In particular here we study the geometry of the functional~$\mathcal J_\gamma$.
At first, we need some notation. In what follows $\big\{ \lambda_j\big\}_{{j\in\mathbb N}}$ denotes the sequence of the eigenvalues of the following problem
\begin{equation}\label{problemaautovalori}
\left\{\begin{array}{ll}
(-\Delta)^s u=\lambda\, u & \mbox{in } \Omega\\
u=0 & \mbox{in } \mathbb R^n\setminus \Omega,
\end{array}\right.
\end{equation}
with
\begin{equation}\label{lambdacrescente}
0<\lambda_1<\lambda_2\leq \dots \leq \lambda_j\leq \lambda_{j+1}\leq \dots
\end{equation}
$${\mbox{$\lambda_j\to +\infty$ as $ j\to +\infty,$}}$$
and with $e_j$ as eigenfunction corresponding to $\lambda_j$.
Also, we choose $\big\{e_j\big\}_{j\in\mathbb N}$ normalized in such a way that
this sequence provides an orthonormal basis of $L^2(\Omega)$ and an orthogonal basis of $X_0^s(\Omega)$. For a complete study of the spectrum of the fractional Laplace operator~$(-\Delta)^s$ we refer to \cite[Proposition~2.3]{sY}, \cite[Proposition~9 and Appendix~A]{svlinking} and \cite[Proposition~4]{servadeivaldinociBNLOW}.
Along the paper, for any $j\in \mathbb N $ we also set
\[\mathbb{P}_{j+1}=\left\{u\in X_0^s(\Omega):\,\,\left\langle u,e_i\right\rangle=0\quad\mbox{for any } i=1,\ldots,j \right\}\quad(\mbox{with }\mathbb{P}_1=X_0^s(\Omega)),
\]
as defined also in \cite[Proposition~9 and Appendix~A]{svlinking}, while $$\mathbb H_j=\mbox{span}\left\{e_1,\ldots,e_j\right\}$$ will denote the linear subspace generated by the first $j$ eigenfunctions of $(-\Delta)^s$.
It is immediate to observe that $\mathbb P_{j+1}=\mathbb H^{\bot}_j$ with respect to the scalar product in $X_0^s(\Omega)$ defined as in formula~\eqref{prodottoscalare}. Thus, since $X_0^s(\Omega)$ is a Hilbert space (see \cite[Lemma~7]{svmountain} and \eqref{prodottoscalare}), we can write it as a direct sum as follows
$$X_0^s(\Omega)=\mathbb H_j\oplus\mathbb P_{j+1}$$
for any $j\in \mathbb N$\,.
Moreover, since $\big\{e_j\big\}_{j\in\mathbb N}$ is an orthogonal basis of $X_0^s(\Omega)$, it is easy to see that for any $j\in \mathbb N$
$$\mathbb P_{j+1}=\overline{\mbox{span}\left\{e_i:\,\,i\geq j+1\right\}}.$$
Now, before studying and proving the geometric features for $\mathcal J_\gamma$ we need a stronger version of the classical Sobolev embedding. Here the constant of the embedding can be chosen and controlled a priori.
\begin{lemma}\label{lem4.1}
Let $r\in[2,2^*)$ and $\delta>0$.
Then, there exists $j\in\mathbb{N}$ such that $\left\|u\right\|^r_r\leq\delta\left\|u\right\|^r$ for any $u\in\mathbb{P}_{j+1}$.
\end{lemma}
\begin{proof}
By contradiction, we suppose that there exists $\delta>0$ such that for any $j\in\mathbb{N}$ there exists $u_j\in\mathbb{P}_{j+1}$ which verifies $\left\|u_j\right\|^r_r>\delta\left\|u_j\right\|^r$. Considering $v_j=u_j/\left\|u_j\right\|_r$, we have that $v_j\in \mathbb{P}_{j+1}$,
\begin{equation}\label{normavjr}
\left\|v_j\right\|_r=1
\end{equation}
and $\left\|v_j\right\|^r<1/\delta$ for any $j\in\mathbb{N}$. Thus, the sequence $\left\{v_j\right\}_{j\in\mathbb{N}}$ is bounded in $X_0^s(\Omega)$ and we may suppose that there exists $v\in X_0^s(\Omega)$ such that, up to a subsequence,
$$v_j\rightharpoonup v \quad \mbox{in} \quad X_0^s(\Omega)$$
and
\begin{equation}\label{addconvr}
v_j\to v \quad \mbox{in} \quad L^r(\Omega)
\end{equation}
as $j\to +\infty$. Hence, by \eqref{normavjr} and \eqref{addconvr} we deduce that
\begin{equation}\label{normavr=1}
\left\|v\right\|_r=1\,.
\end{equation}
Moreover, since $\left\{e_j\right\}_{j\in\mathbb{N}}$ is an orthogonal basis of $X_0^s(\Omega)$ by \cite[Proposition~9]{svlinking}, we can write $v$ as follows
$$v=\sum^\infty_{j=1}\left\langle v, e_j\right\rangle e_j\,.$$
Now, given $k\in\mathbb{N}$ we have $\left\langle v_j, e_k\right\rangle=0$ for any $j\geq k$, since $v_j\in\mathbb{P}_{j+1}$. From this we deduce that $\left\langle v, e_k\right\rangle=0$ for any $k\in\mathbb N$, which clearly implies that $v\equiv 0$. On the other hand, this contradicts \eqref{normavr=1}. Hence, Lemma~\ref{lem4.1} holds true.
\end{proof}
\subsection{Geometric setting for Theorem~\ref{a}}\label{sec a}
In order to prove Theorem~\ref{a}, we just have to verify that the energy functional $\mathcal J_\gamma$ satisfies $(I_2)$ and $(I_3)$ of Theorem~\ref{abs}. For this we will consider $V= \mathbb H_j$ and $X=\mathbb{P}_{j+1}$, with $j\in\mathbb{N}$ chosen as in the following result:
\begin{lemma}\label{lem4.2}
Let $f$ satisfy \eqref{f4}.
Then, there exist $\widetilde{\gamma}>0$, $j\in\mathbb{N}$ and $\rho$, $\alpha>0$ such that $\mathcal J_\gamma(u)\geq\alpha$, for any $u\in\mathbb{P}_{j+1}$ with $\left\|u\right\|=\rho$, and $0<\gamma<\widetilde{\gamma}$.
\end{lemma}
\begin{proof}
Take $\gamma>0$.
By \eqref{f4} and \cite[Lemma~6]{svmountain} we get a suitable constant $c>0$ such that
\begin{equation}\label{add22222}
\mathcal J_\gamma(u)\geq\frac{1}{2}\left\|u\right\|^2-b_1\left\|u\right\|^\theta_\theta-b_2\left|\Omega\right|-\gamma c\left\|u\right\|^{2^*},
\end{equation}
for any $u\in X_0^s(\Omega)$.
Let $\delta>0$ be a parameter to be fixed later on. By \eqref{add22222} and Lemma~\ref{lem4.1} there exists $j\in\mathbb{N}$ such that
\begin{equation}\label{geo1}
\mathcal J_\gamma(u)\geq\left\|u\right\|^2\left(\frac{1}{2}-b_1\delta\left\|u\right\|^{\theta-2}\right)-b_2\left|\Omega\right|-\gamma c\left\|u\right\|^{2^*},
\end{equation}
for any $u\in\mathbb{P}_{j+1}$.
Now, consider $\left\|u\right\|=\rho=\rho(\delta)$, with $\rho$ such that $b_1\delta\rho^{\theta-2}=1/4$, so that
$$\mathcal J_\gamma(u)\geq \frac 1 4 \rho^2-b_2|\Omega|-\gamma c\rho^{2^*}$$
for any $u\in\mathbb{P}_{j+1}$, thanks to \eqref{geo1}.
Now, observe that $\rho(\delta)\rightarrow +\infty$ as $\delta\rightarrow0$, since $\theta>2$. Hence, we can choose $\delta$ sufficiently small such that $\rho^2 /4-b_2\left|\Omega\right|\geq\rho^2 /8$, which yields
\[\mathcal J_\gamma(u)\geq\frac{1}{8}\rho^2-\gamma c\rho^{2^*},
\]
for any $u\in\mathbb{P}_{j+1}$ with $\left\|u\right\|=\rho$.
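Explicitly, the choice of $\rho$ made above corresponds to $\rho(\delta)=\left(4b_1\delta\right)^{-1/(\theta-2)}$, and the requirement $\rho^2/4-b_2\left|\Omega\right|\geq\rho^2/8$ simply amounts to $\rho^2\geq 8b_2\left|\Omega\right|$.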
Finally, let $\widetilde{\gamma}>0$ be such that $\frac{1}{8}\rho^2-\widetilde{\gamma} c\rho^{2^*}=\alpha>0$. Then we get
$$\mathcal J_\gamma(u)\geq \mathcal J_{\widetilde{\gamma}}(u)\geq \alpha$$
for any $u\in\mathbb{P}_{j+1}$ with $\|u\|=\rho$ and any $\gamma\in (0,\widetilde{\gamma})$\,,
concluding the proof.
\end{proof}
\begin{lemma}\label{lem4.3}
Let $f$ satisfy \eqref{f5} and let $l\in\mathbb{N}$.
Then, there exist a subspace $W$ of $X_0^s(\Omega)$ and a constant $M_l>0$, independent of $\gamma$, such that $\mbox{dim } W=l$ and ${\displaystyle \max_{u\in W}\mathcal J_0(u)<M_l}$.
\end{lemma}
\begin{proof}
Here we can argue exactly as in \cite[Lemma~4.3]{silva}, where the classical case of the Laplacian was considered. For this, we can also use the properties of the eigenfunctions of $(-\Delta)^s$ (see \cite{svlinking}).
\end{proof}
\begin{proof}[\bf Proof of Theorem~\ref{a}]
By Lemma~\ref{lem4.2} we find $j\in\mathbb{N}$ and $\widetilde{\gamma}>0$ such that $\mathcal J_\gamma$ satisfies $(I_2)$ in $X=\mathbb P_{j+1}$, for any $0<\gamma<\widetilde{\gamma}$. Moreover, by Lemma~\ref{lem4.3}, for any $k\in \mathbb N$ there is a subspace $W\subset X_0^s(\Omega)$ with $\mbox{dim }W=k+j$ and such that $\mathcal J_\gamma$ satisfies $(I_3)$ with $M=M_{j+k}>0$ for any $\gamma>0$, since $\mathcal J_\gamma\leq\mathcal J_0$.
Finally, we note that by Lemma~\ref{palais}, considering $\widetilde{\gamma}$ smaller if necessary, we have that $\mathcal J_\gamma$ satisfies $(I_4)$ for any $0<\gamma<\widetilde{\gamma}$. Thus, we may apply Theorem~\ref{abs} to conclude that $\mathcal J_\gamma$ admits $k$ pairs of non--trivial critical points for $\gamma>0$ sufficiently small. Hence, Theorem~\ref{a} is proved.
\end{proof}
\subsection{Geometric setting for Theorem~\ref{b}}\label{sec b}
We apply again Theorem~\ref{abs} to the functional~$\mathcal J_\gamma$. By considering $\lambda_j\leq\lambda_k$ as in \eqref{ft5} and \eqref{ft4}, we have two cases. When $j=1$ we set $V=\left\{0\right\}$, so $X=X_0^s(\Omega)$: note that this is consistent with the situation $\mathbb{P}_1=X_0^s(\Omega)$. If instead $j>1$, we consider $X=\mathbb{P}_j$ and $V=\mathbb H_{j-1}$. Moreover, we set $W=\mathbb H_k$ as a subspace of $X_0^s(\Omega)$ in $(I_3)$.
Now, in order to verify the geometric assumptions $(I_2)$ and $(I_3)$ in Theorem~\ref{abs} we consider here two different characterizations of the eigenvalues of $(-\Delta)^s$. That is, for any $j\in\mathbb{N}$ by \cite[Proposition~9]{svlinking} we have that
\begin{equation}\label{min}
\lambda_j=\min_{u\in\mathbb{P}_j\setminus\{0\}}\frac{\left\|u\right\|^2}{\left\|u\right\|^2_2},
\end{equation}
while from \cite[Proposition~2.3]{sY} we know that
\begin{equation}\label{max}
\lambda_j=\max_{u\in\mathbb H_j\setminus\{0\}}\frac{\left\|u\right\|^2}{\left\|u\right\|^2_2}.
\end{equation}
Moreover, we need the following technical lemma:
\begin{lemma}\label{lemtec}
Let $a:\Omega\rightarrow\mathbb{R}$ be the measurable function given in \eqref{ft5}. Then, there exists $\beta>0$ such that for any $u\in\mathbb{P}_j$
$$\left\|u\right\|^2-\int_\Omega a(x)\left|u(x)\right|^2dx\geq\beta\left\|u\right\|^2_2\,.$$
\end{lemma}
\begin{proof}
We argue by contradiction and we suppose that for any $i\in\mathbb{N}$ there exists $u_i\in\mathbb{P}_j$ such that
\begin{equation}\label{AddRaffy}
\left\|u_i\right\|^2-\int_\Omega a(x)\left|u_i(x)\right|^2dx<\frac{1}{i}\left\|u_i\right\|^2_2.
\end{equation}
Let $v_i=u_i/\left\|u_i\right\|_2$. Of course, $v_i\in \mathbb{P}_j$ and
\begin{equation}\label{vi}
\left\|v_i\right\|_2=1
\end{equation}
for any $i\in \mathbb N$. By \eqref{ft5}, \eqref{min}, \eqref{AddRaffy} and \eqref{vi} we get
\begin{equation}\label{4.5}
\begin{aligned}
\lambda_j & \leq\left\|v_i\right\|^2\\
& <\int_\Omega a(x)\left|v_i(x)\right|^2 dx+\frac{1}{i}\\
& \leq\lambda_j\int_\Omega \left|v_i(x)\right|^2 dx+\frac{1}{i}\\
& \leq\lambda_j+\frac{1}{i}
\end{aligned}
\end{equation}
for any $i\in \mathbb N$. From this, we have that $\left\{v_i\right\}_{i\in\mathbb{N}}$ is a bounded sequence in $X_0^s(\Omega)$. Therefore, by applying \cite[Lemma~8]{svmountain} and \cite[Theorem~IV.9]{brezis} there exists $v\in X_0^s(\Omega)$ such that, up to a subsequence, $v_i$ converges to $v$ weakly in $X_0^s(\Omega)$, strongly in $L^2(\Omega)$ and a.e. in $\Omega$ as $i\to +\infty$ and $\left|v_i\right|\leq h\in L^2(\Omega)$ a.e. in $\Omega$. Thus, by \eqref{vi} we know that $\left\|v\right\|_2=1$, so that $v$ is almost everywhere different from zero in $\Omega$, i.e.
\begin{equation}\label{vnot0}
v\not \equiv 0 \quad \mbox{in} \quad \Omega\,.
\end{equation}
By sending $i\rightarrow +\infty$ in \eqref{4.5} and using the Dominated Convergence Theorem and \eqref{AddRaffy}, we get
\begin{equation}\label{4.6}
\int_\Omega \big(\lambda_j-a(x)\big)\left|v(x)\right|^2 dx=0.
\end{equation}
Then, \eqref{ft5}, \eqref{vnot0} and \eqref{4.6} imply that
\[a(x)=\lambda_j\quad\mbox{a.e. in }\Omega, \]
which contradicts the assumption \eqref{ft5}. Hence, Lemma~\ref{lemtec} holds true.
\end{proof}
Now we are ready to prove that $\mathcal J_\gamma$ satisfies $(I_2)$ and $(I_3)$ of Theorem~\ref{abs}.
\begin{lemma}\label{lem5.1}
Let $f$ satisfy \eqref{f1}, \eqref{f2} and \eqref{ft5}.
Then, for any $\gamma>0$ there exist $\rho$, $\alpha>0$ such that $\mathcal J_\gamma(u)\geq\alpha$ for any $u\in\mathbb{P}_j$ with $\left\|u\right\|=\rho$.
\end{lemma}
\begin{proof}
Fix $\gamma>0$. By \eqref{f1}, \eqref{f2} and \eqref{ft5}, for any $\varepsilon>0$ there exists $C_\varepsilon>0$ such that
\begin{equation}\label{4.7}
\left|F(x,t)\right|\leq\frac{C_\varepsilon}{2^*}\left|t\right|^{2^*}+\frac{a(x)+\varepsilon}{2}\left|t\right|^2,
\end{equation}
for any $t\in\mathbb{R}$ and a.e. $x\in\Omega$.
Now, let $\beta>0$ be as in Lemma~\ref{lemtec} and $\varepsilon'>0$ be such that $\beta-\varepsilon'\lambda_j>0$. Thus, by \eqref{ft5} and Lemma~\ref{lemtec}, we have
\begin{equation*}
\begin{aligned}
\left\|u\right\|^2-\int_\Omega a(x)\left|u(x)\right|^2 dx
&=\frac{1+\varepsilon'}{1+\varepsilon'}\left(\left\|u\right\|^2-\int_\Omega a(x)\left|u(x)\right|^2 dx\right)\\
&=\frac{\varepsilon'}{1+\varepsilon'}\left\|u\right\|^2+\frac{1}{1+\varepsilon'}\left(\left\|u\right\|^2-\int_\Omega a(x)\left|u(x)\right|^2 dx-\varepsilon'\int_\Omega a(x)\left|u(x)\right|^2 dx\right)\\
&\geq\frac{\varepsilon'}{1+\varepsilon'}\left\|u\right\|^2+\frac{1}{1+\varepsilon'}\left(\beta\left\|u\right\|^2_2-\varepsilon'\int_\Omega a(x)\left|u(x)\right|^2 dx\right)\\
&\geq\frac{\varepsilon'}{1+\varepsilon'}\left\|u\right\|^2+\frac{1}{1+\varepsilon'}\int_\Omega(\beta-\varepsilon'\lambda_j)\left|u(x)\right|^2 dx\\
&\geq\frac{\varepsilon'}{1+\varepsilon'}\left\|u\right\|^2
\end{aligned}
\end{equation*}
for any $u\in\mathbb{P}_j$.
From this and by \eqref{4.7} we get
\begin{equation*}
\begin{aligned}
\mathcal J_\gamma(u)&=\frac 1 2 \left\|u\right\|^2-\frac{\gamma}{2^*}\left\|u\right\|^{2^*}_{2^*}
-\int_\Omega F(x,u(x))\,dx\\
&\geq\frac{1}{2}\left(\left\|u\right\|^2-\int_\Omega a(x)\left|u(x)\right|^2 dx\right)-\frac{1}{2^*}(\gamma+C_\varepsilon)\left\|u\right\|^{2^*}_{2^*}-\frac{\varepsilon}{2}\left\|u\right\|^2_2\\
&\geq\frac{\varepsilon'}{2(1+\varepsilon')}\left\|u\right\|^2-\frac{1}{2^*}(\gamma+C_\varepsilon)\left\|u\right\|^{2^*}_{2^*}-\frac{\varepsilon}{2}\left\|u\right\|^2_2
\end{aligned}
\end{equation*}
for any $u\in\mathbb{P}_j$.
Thus, by \cite[Lemma~6]{svmountain} and taking $\varepsilon>0$ sufficiently small, there exist constants $K$, $C>0$ such that
\begin{equation}\label{bisci}
\mathcal J_\gamma(u)\geq K\rho^2-C\rho^{2^*}
\end{equation}
for any $u\in\mathbb{P}_j$ with $\left\|u\right\|=\rho$. By taking $\rho>0$ small enough, \eqref{bisci} gives that
$$\mathcal J_\gamma(u)\geq \alpha$$
for a suitable $\alpha>0$, since $2^*>2$.
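For instance, one admissible choice is $\rho=\left(K/(2C)\right)^{1/(2^*-2)}$, for which \eqref{bisci} gives $\mathcal J_\gamma(u)\geq K\rho^2/2$, so that one can take $\alpha=K\rho^2/2$.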
\end{proof}
\begin{lemma}\label{lem5.2}
Let $f$ satisfy \eqref{ft4}.
Then, for any $\gamma>0$ there exists a constant $M>0$, independent of $\gamma$, such that ${\displaystyle \max_{u\in\mathbb H_k}\mathcal J_\gamma(u)<M}$.
\end{lemma}
\begin{proof}
Fix $\gamma>0$.
By \eqref{ft4} and \eqref{max}, for any $u\in\mathbb H_k\setminus\{0\}$ we have
$$\begin{aligned}
\mathcal J_\gamma(u) & \leq\frac{1}{2}\left\|u\right\|^2-\frac{\lambda_k}{2}\left\|u\right\|^2_2-
\frac{\gamma}{2^*}\left\|u\right\|^{2^*}_{2^*}+B\left|\Omega\right|\\
& \leq B\left|\Omega\right|-\frac{\gamma}{2^*}\left\|u\right\|^{2^*}_{2^*}\\
&<B\left|\Omega\right|,
\end{aligned}$$
concluding the proof of Lemma~\ref{lem5.2}.
\end{proof}
\begin{proof}[\bf Proof of Theorem~\ref{b}]
By Lemma~\ref{palais}, Lemma~\ref{lem5.1} and Lemma~\ref{lem5.2}, there is $\gamma^*>0$ sufficiently small such that $\mathcal J_\gamma$ satisfies $(I_2)-(I_4)$ of Theorem~\ref{abs} for any $\gamma\in(0,\gamma^*)$. By recalling that $\mathbb{P}_j=\mathbb H^\bot_{j-1}$, we get that $\mbox{codim}~\mathbb{P}_j=j-1$. Hence, by Theorem~\ref{abs} we conclude that $\mathcal J_\gamma$ admits $k-j+1$ pairs of non--trivial critical points for any $\gamma\in(0,\gamma^*)$. Then, the proof of Theorem~\ref{b} is complete.
\end{proof}
\begin{remark}
We would like to point out that when $j=1$ we can also replace \eqref{ft4} with \eqref{f5} and Theorem~\ref{b} still holds true. Indeed, we can argue exactly as in the proof of Theorem~\ref{a}, by using Lemma~\ref{lem5.1} $($with $\mathbb{P}_1=X_0^s(\Omega)$$)$ instead of Lemma~\ref{lem4.2}.
\end{remark}
\subsection{Proof of Theorem~\ref{c}}\label{sec c}
We first show that problem~\eqref{P} possesses a non--trivial non--negative solution. For this, it is sufficient to study the following problem
\begin{equation}\label{Ptronc}
\left\{\begin{array}{lll}
(-\Delta)^su=\gamma u^{2^*-1}+\widetilde{f}(x,u) & \mbox{in } \Omega\\
u\geq0 & \mbox{in } \Omega\\
u=0 & \mbox{in } \mathbb{R}^{n}\setminus\Omega\,,
\end{array}
\right.
\end{equation}
where
\begin{equation}\label{ftilde}
\widetilde{f}(x,t)=\begin{cases}
f(x,t) & \mbox{if } t>0\\
0 & \mbox{if } t\leq0.
\end{cases}
\end{equation}
Indeed, a non-trivial solution of \eqref{Ptronc} is a non-trivial non-negative solution of \eqref{P}.
The energy functional associated with \eqref{Ptronc} is given by
\begin{equation}\label{Jtilde}
\widetilde{\mathcal J}_\gamma(u)=\frac 1 2 \left\|u\right\|^2-\frac{\gamma}{2^*}\int_\Omega (u(x))^{2^*}\,dx
-\int_\Omega \widetilde{F}(x,u(x))\,dx,
\end{equation}
where
$$\widetilde{F}(x,t)=\int^t_0\widetilde{f}(x,\tau)d\tau\,.$$
We point out that the truncated function $\widetilde{f}$ still verifies \eqref{f1}, \eqref{f2}, \eqref{f3} and \eqref{ft5}, while \eqref{ft4} holds true for $\widetilde{f}$ for any $t\geq0$ but not for any $t<0$. This point must be taken into account in our proof.
Indeed, in order to apply Theorem~\ref{abs2}, we immediately note that $\widetilde{\mathcal J}_\gamma$ still verifies $(I_2)$ by Lemma~\ref{lem5.1} with $\mathbb{P}_1=X_0^s(\Omega)$. In order to prove $\widehat{(I_3)}$ of Theorem~\ref{abs2} we have to proceed as follows.
Let $e_1$ be the eigenfunction of $(-\Delta)^s$ associated to $\lambda_1$. Since $e_1$ is positive by \cite[Corollary~8]{servadeivaldinociRego}, by \eqref{ftilde} it follows that $\widetilde{F}(x,te_1(x))=F(x,te_1(x))$ for any $t>0$ and for a.e. $x\in\Omega$. Thus, we can use \eqref{ft4} and get for any $t>0$
$$\begin{aligned}
\widetilde{\mathcal J}_\gamma(te_1)& = \frac 1 2 \left\|te_1\right\|^2-\frac{\gamma}{2^*}\left\|te_1\right\|^{2^*}_{2^*}
-\int_\Omega\widetilde{F}(x,te_1(x))\,dx\\
& \leq\frac{t^2}{2}\left\|e_1\right\|^2-\frac{t^2}{2}\lambda_1\left\|e_1\right\|^2_2+B\left|\Omega\right|\\
&=B\left|\Omega\right|,
\end{aligned}$$
thanks to the characterization of $e_1$ given in \cite[Proposition~9]{svlinking}. From this, $\widetilde{\mathcal J}_\gamma$ satisfies $\widehat{(I_3)}$ for any $\gamma>0$.
Now it remains to verify $(I_4)$ of Theorem~\ref{abs2}: for this it is enough to argue as in the proof of Lemma~\ref{bound} and Lemma~\ref{palais} (note that for these lemmas we just need assumptions \eqref{f1}, \eqref{f2} and \eqref{f3}).
Finally, all the assumptions of Theorem~\ref{abs2} are satisfied by $\widetilde{\mathcal J}_\gamma$ and so we can conclude that for any $\gamma\in(0,\gamma^*)$, $\widetilde{\mathcal J}_\gamma$ has a non--trivial critical point which is a non--trivial non--negative solution for \eqref{P}. In a similar way, with small modifications, it is possible to prove the existence of a non--trivial non--positive solution for \eqref{P}. This ends the proof of Theorem~\ref{c}.
\end{document}
|
\begin{document}
\begin{abstract}
The ``equation-free'' approach has been proposed in recent years as
a general framework for developing multiscale methods to
efficiently capture the macroscale behavior of a system using only
the microscale models. In this paper, we take a close look at some
of the algorithms proposed under the ``equation-free'' umbrella, the
projective integrators and the patch dynamics. We discuss some very
simple examples in the context of the ``equation-free''
approach. These examples seem to indicate that while its general
philosophy is quite attractive and indeed similar to many other
approaches in concurrent multiscale modeling, there are severe
limitations to the specific implementation proposed by the
equation-free approach.
\end{abstract}
\maketitle
\section{Introduction}
\label{sec:inro}
The purpose of this note is to examine some of the basic issues
surrounding the ``equa\-tion\--free'' approach proposed
in~\cite{eqfree}, which has been pursued in recent years as a general
tool for multiscale, multi-physics modeling. To begin with, the
equa\-tion\--free approach is an example of concurrent coupling
technique. In contrast to sequential coupling techniques which
require establishing the macroscale equations through precomputing,
concurrent coupling techniques compute the required macroscale
quantities ``on-the-fly'' from microscopic models \cite{AB1, AB2}.
The most well-known example of such concurrent coupling techniques is
perhaps the Car-Parrinello molecular dynamics which computes the
atomic interaction forces ``on-the-fly'' by solving the electronic
structure problem~\cite{cpmd}. Other algorithms, such as the extended
multi-grid method \cite{Brandt} and the heterogeneous multiscale
method (HMM) \cite{hmm_cms} are all examples of the concurrent coupling
approach.
At a technical level, a key idea in the ``equation-free'' approach is
to make use of scale separation in the system. There are many
different ways of exploiting scale separation. In~\cite{chorin} and
\cite{cpmd}, time scale separation was used to artificially slow down
the time scale of the microscopic system. As for spatial scales,
homogenization-based methods (such as the ones that use representative
averaging volumes \cite{Bear}) and the quasicontinuum methods
\cite{QC} are all examples of algorithms that explore the separation
of spatial scales. Most closely related to the ``equation-free''
approach is perhaps the extended multi-grid method \cite{Brandt}. In
his review article \cite{Brandt}, Achi Brandt described ideas that can
be used to extend multi-grid techniques to deal with multiscale,
multi-physics problems in order to capture the macroscale behavior of
a system using microscopic models such as molecular dynamics. As is
common in multi-grid methods, the ideas of Brandt rely heavily on
mapping back and forth between the macro- and micro-states of the
system, through prolongation and restriction operators (which are
called respectively reconstruction and compression operators in HMM,
and lifting and restriction operators in the ``equation-free''
approach). Brandt realized that central to the efficiency of these
algorithms is the possibility of only performing microscopic
simulations in small samples for short periods of times, as a result
of the scale separation in the system. These ideas were later adopted
by both HMM and the ``equation-free'' approach. In fact, HMM and the
``equation-free'' approach are both alternative approaches with the
same motivation and objective.
\begin{center}
\begin{tabular}{l|l|l}
\hline
{} & {Macro to micro} & {micro to Macro} \\
\hline
Extended multi-grid & interpolation & restriction\\
\hline
HMM & reconstruction & compression \\
\hline
Equation-free & lifting & restriction \\
\hline
\end{tabular}
\end{center}
While the general philosophy of the ``equation-free''
approach is very similar to the extended multi-grid method and HMM,
the ``equation-free'' approach proposes its own ways of implementing
such a philosophy, in particular, ways of dealing with scale
separation. The basic idea is to use extrapolation in time and
interpolation in space. More precisely, two important building blocks
of the ``equation-free'' approach are:
\begin{enumerate}
\item \underline{Projective integrators:} (An ensemble of) the
microscale problems are solved for a short period of time using
small time steps. The time derivative of the macro variable is
computed from the results of the last few steps and then used to
advance the macro variable over a macro time step. It is easy to
see that such a procedure amounts to extrapolation, and indeed the
authors state in~\cite{geke}: \textit{``The reader might think that
these should be called `extrapolation methods,' but that name has
already been used [...]. Hence we call the proposed methods
projective integration methods.''}
\item \underline{The gap-tooth scheme:} The microscopic problem is
solved in small domains (the teeth) separated by large gaps. The
solution is averaged over each domain and then interpolated to give
the prediction over the gaps.
\end{enumerate}
The combination of these two ideas gives directly the so-called
``patch dynamics''~\cite{eqfree}.
Detailed understanding of the ``equation-free'' algorithms is made
difficult by the fact that the ``equation-free'' papers are generally
quite vague. The present note should be regarded as an attempt to pin
down some of these details. Indeed this was initially intended as a
regular journal article. But it soon became clear that there is
still substantial disagreement between our understanding of the
``equation-free'' approach and that of its developers. However, we
believe the simple examples that we discuss here do shed some light on
the ``equation-free'' approach and should be made available to a
larger audience in some form. We are grateful to Yannis Kevrekidis
for a detailed report on the earlier version of this note. Some of his
comments have been taken into account in this revised version. We
also welcome any discussion about the issues raised in this note, the
most important of which being: What really is the ``equation-free''
approach? Indeed our primary purpose of presenting this note is to
prompt such a discussion.
\section{Projective integrators for stochastic ODEs}
\label{sec:projective}
Projective integrators were proposed as a way of extrapolating the
solution of an explicit ODE solver for systems with multiple time
scales using large time steps. The basic idea is to run the
microscopic solver (using small time steps) for a number of steps, and
then estimate the time derivative and use that to extrapolate the
solution over a large time step \cite{geke}. For stiff ODEs, the
extrapolation step is applied to the whole system~\cite{geke}. For
general multiscale problems, the extrapolation step is applied only to
the slow variables~\cite{eqfree,hummer}.
In the case of stiff ODEs, projective integrators can give rise to
useful numerical sche\-mes, as was demonstrated in~\cite{geke}. In this
case, the idea becomes very close to the ones proposed by Eriksson
\textit{et. al} for developing explicit stiff ODE solvers~\cite{EJL}.
The objectives of the two papers are quite different: For Eriksson
\textit{et al.}, the objective is to find explicit and efficient stiff
ODE solvers. For Gear~\textit{et al.}, the objective is to deal with
general multiscale, multi-physics problems. However, in the general
case such as the case considered in \cite{hummer}, projective
integrators have serious limitations, as we now show.
Denote by $x$ the coarse variable of the system. The coarse
projective integrator proposed in \cite{hummer} performs the
following steps at each macro time step (of size~$\Delta t$):
\begin{enumerate}
\item Create an ensemble of~$N$ microscopic initial conditions
consistent with the known coarse variable~$x^n$ at time step~$n$.
\item Run the microscopic solver with these initial conditions for a
number of steps, say~$k$, with time step~$\delta t$. Denote the
corresponding values of the coarse variables as~$\tilde x_j(k \delta
t)$ where $j = 1, \ldots, N$.
\item Perform ensemble averaging to get an approximation to the coarse
variable. For example,
\begin{equation}
\bar x = \frac 1N \sum_{j=1}^N \tilde x_j (k\delta t)
\end{equation}
\item Use this value to extrapolate the coarse variable to a time step
of size~$\Delta t$.
\begin{equation}
\label{eq:extrapolsdeb}
x^{n+1} = x^n + {\Delta t} \frac{\bar x - x^n}{k\delta t}
\end{equation}
\end{enumerate}
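For concreteness, the following minimal sketch (in Python, with NumPy) implements steps (1)--(4) above for the scalar stochastic ODE example discussed next, taking an Euler--Maruyama discretization as a stand-in for the microscopic solver; the drift function \texttt{b} and all numerical parameter values are placeholders introduced here for illustration only.
\begin{verbatim}
import numpy as np

def coarse_projective_step(x_n, b, N, k, dt_micro, dt_macro, rng):
    # (1) lifting: the coarse variable is the state itself, so the
    #     ensemble is simply N copies of x^n
    x = np.full(N, x_n, dtype=float)
    # (2) micro evolution: k Euler-Maruyama steps of size dt_micro
    #     for dx = b(x) dt + dW (stand-in for the microscopic solver)
    for _ in range(k):
        x += b(x) * dt_micro + np.sqrt(dt_micro) * rng.standard_normal(N)
    # (3) restriction: ensemble average of the coarse variable
    x_bar = x.mean()
    # (4) extrapolation of the coarse variable over the macro step
    return x_n + dt_macro * (x_bar - x_n) / (k * dt_micro)

# example usage with b(x) = -x; note that N*k*dt_micro need not equal dt_macro
rng = np.random.default_rng(0)
x = 1.0
for n in range(10):
    x = coarse_projective_step(x, lambda y: -y, N=100, k=5,
                               dt_micro=1e-4, dt_macro=1e-1, rng=rng)
\end{verbatim}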
Now consider the simple case when the coarse variable obeys effectively
a stochastic ODE:
\begin{equation}
\label{eq:sdefast}
dx(t) = b(x(t)) dt + dW(t)
\end{equation}
Since, to $O(k\delta t)$, we have
\begin{equation}
\label{eq:EMstep}
\tilde x_j(k\delta t) - x^n = k \delta t\, b(x^n) +
\sqrt{k\delta t} \, \xi^n_j
\end{equation}
where $\{\xi_j^n\}, j=1, \cdots, N$ are $N$ independent Gaussian variables
with mean 0 and variance 1, (\ref{eq:extrapolsdeb}) becomes, to leading order
\begin{equation}
\label{eq:extrapolsde3}
x^{n+1} = x^n + \Delta t\, b(x^n) + \frac{\Delta t }{\sqrt{k\delta t}}\,
\frac1N \sum_{j=1}^N \xi_j^n.
\end{equation}
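Indeed, averaging \eqref{eq:EMstep} over the ensemble gives, to leading order, $\bar x - x^n = k\delta t\, b(x^n)+\sqrt{k\delta t}\,\frac1N\sum_{j=1}^N\xi_j^n$, and inserting this into \eqref{eq:extrapolsdeb} yields \eqref{eq:extrapolsde3}.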
(\ref{eq:extrapolsde3}) is equivalent in law to
\begin{equation}
\label{eq:extrapolsde4}
x^{n+1} = x^n + \Delta t\, b(x^n) +
\frac{\Delta t }{\sqrt{N k\delta t}} \, \xi^n,
\end{equation}
where $\xi^n$ is a Gaussian variable with mean 0 and
variance 1.
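This is because the average $\frac1N\sum_{j=1}^N\xi^n_j$ of $N$ independent standard Gaussians is itself Gaussian with mean 0 and variance $1/N$, hence has the same law as $N^{-1/2}\xi^n$.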
It is obvious from this that the effective dynamics produced by the
coarse projective integrator depends on the numerical parameters $N$,
$k$, $\partialelta t, $ and $\Delta t$. In particular, if $N k\partialelta t \gg
\Delta t$, then the noise term in \eqref{eq:sdefast} is lost in the
limit. If $N k\partialelta t \ll \Delta t$, then the noise term overwhelms
the drift term. In either case, one obtains a wrong prediction for
the effective dynamics of the coarse variable.
The only way to get a scheme consistent with~(\ref{eq:sdefast}) is to
choose the numerical parameters so that they precisely satisfy $N
k\delta t = \Delta t$. The reader should be aware, however, that this
choice is not advocated in~\cite{hummer} and is, in fact, quite
orthogonal to the original equation-free philosophy since it requires
knowing beforehand that (\ref{eq:sdefast}) is an SODE and not
something else. In addition, it is easy to see that using $N k\delta
t = \Delta t$ leads to no gain in efficiency: The total cost is
comparable to solving the microscopic problem in a brute force fashion
using $\delta t$ as the time step, since the size of the ensemble is
equal to the number of microscopic simulation time intervals
during a time duration of $\Delta t$: $ N = \Delta t/(k \delta t)$.
For the case when $N k\delta t \gg \Delta t$, one might think of using
the coarse projective integrators (or coarse molecular dynamics) as a
way of simulating the dynamics $dx/dt = b(x)$ in the context of
molecular dynamics simulations. In this case the unknown drift $b(x)$
is related to the gradient of the free energy and simulating $dx/dt =
b(x)$ is then a way to explore this free energy. Indeed, this appears
to be how the scheme was actually used in \cite{hummer}. The problem,
however, is that using $N k\delta t \gg \Delta t$ leads again to a
scheme which is no less expensive than a brute force solution of $N$
replicas of~(\ref{eq:sdefast}) using $\delta t$ as the time step.
The problem above seems to be intrinsic to projective integrators in
the context of SDEs because it is inherent to the fact that the
dynamics~(\ref{eq:sdefast}) is dominated by the noise on short time
scales and the extrapolation step in the projective integrators
amplifies these fluctuations. Averaging them out can only be done at a
cost which is at least comparable to the cost of a direct scheme.
\section{Patch dynamics}
Patch dynamics is proposed as a way of analyzing the macroscopic
dynamics of a system using microscopic models. Like the extended
multi-grid methods \cite{Brandt} and HMM \cite{hmm_cms, hmm-review},
it is formulated in such a way that scale separation can be exploited
to reduce computational cost.
The setup is as follows. We have a macroscale grid over the
computational domain. The grid size $\Delta x$ is chosen to resolve
the macroscale variations but not the microscale features in the
problem. Each grid point is surrounded by a small domain (the
``tooth''), the size of which (denoted by $h$) should be large enough
to sample the local microscale variations but can be much smaller than
the macroscale grid size if the macro and microscales are very much
separated.
Given a set of macroscale values at the macroscale grid points,
$\{U^n_j\}$, at the $n$-th time step $t_n = n \Delta t$ where $\Delta
t$ is the size of the macroscale time step, patch dynamics computes
the update of these values at the next macroscale time step,
$\{U^{n+1}_j \}$, using the following procedure:
\begin{enumerate}
\item \textit{Lifting:} From $\{U^n_j\}$, reconstruct a consistent
microscopic initial data, denoted by $\tilde u_0$.
\item \textit{Evolution:} Solve the original microscopic model with
this initial data $\tilde u_0$ over the small domains (the
``teeth'') for some time $\delta t$: $\tilde u_{\delta t} =
\mathcal{S}_{\delta t} \tilde u_0$.
\item \textit{Restriction:} Average the microscale solution $\tilde
u_{\delta t}$ over the small domains. The results are denoted by
$\{ \tilde U^n_{\delta t} \}$.
\item \textit{Extrapolation:} Compute the approximate derivative and
use it to predict $\{U^{n+1}_j \}$:
\begin{equation}
\label{eq:extrapol}
U^{n+1} = U^n + \Delta t \frac{\tilde U^n_{\delta t} - U^n}{\delta t}
\end{equation}
or more generally:
\begin{equation}
U^{n+1} = U^n + \Delta t \frac{\tilde U^n_{\delta t} - \tilde U^n_{\alpha
\delta t}} {(1-\alpha)\delta t}
\end{equation}
where $ \alpha$ is some numerical parameter between 0 and 1.
\end{enumerate}
There are very few examples of how to implement these steps in practice.
\cite{eqfree, SKD} suggest the following:
For the lifting operator, in the small domain around the macro grid
point $x_j$, use the approximate Taylor expansion:
\begin{equation}
\label{interp-1}
\tilde u_0 (x) = \sum_{k=0}^d \frac 1{k!} D_k (x- x_j)^k
\end{equation}
Here $D_k$ is some approximations to the derivatives of the macroscale
profile at $x_j$, for example:
\begin{equation}
\label{interp-2}
D_2 = \frac{U^n_{j+1} - 2 U^n_j + U^n_{j-1}}{\Delta x^2}, \quad
D_1 = \frac{U^n_{j+1} - U^n_{j-1}}{ 2 \Delta x}, \quad
D_0 = U^n_j - \frac 1{24} h^2 D_2
\end{equation}
Below we will consider the case when $d=2$.
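Note that, with $d=2$, the lifting \eqref{interp-1} is consistent with the macro state: averaging $\tilde u_0$ over the small domain of size $h$ centered at $x_j$ gives $D_0+\frac 1{24}h^2 D_2 = U^n_j$, which is precisely the reason for the correction term in the definition of $D_0$ in \eqref{interp-2}.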
When solving the microscale problem, \cite{SKD} suggest extending the
microscale domain to include some buffer regions in the hope that this
would allow the use of {\it any} boundary conditions for the
microscopic solver: By choosing sufficiently large buffers, the effect
would be as if the microscale problem is solved in the whole space
where and when averaging is performed. This introduces another
spatial scale $H$ which is the real size of the region on which
microscale problems are solved. (The parameter $h$, which is (much)
smaller than $H$, is the size of the domain over which averaging is
performed). In the following discussion, we will take $H$ to be
infinity.
Let us now examine this algorithm in more detail, using some very
simple examples. Let us first consider the heat equation
\begin{equation}
\partial_t u = \partial^2_{x} u
\end{equation}
For simplicity, let us assume $x_j = 0$.
Denote $\tilde u_0 = D_0 + D_1 x + \frac 12 D_2 x^2$.
We have
\begin{equation}
\mathcal{S}_{\delta t} \tilde u_0 (x) =
D_0 +D_1 x + D_2 (\frac 12 x^2 + \delta t)
\end{equation}
Denote by $\mathcal{A}_h$ the averaging operator over the small domain
(of size $h$), we have
\begin{equation}
\tilde U^n_{\delta t}= \mathcal{A}_h \mathcal{S}_{\delta t} \tilde u_0 (x) =
D_0 + D_2 \delta t +\frac 1{24} D_2 h^2
=U^n +D_2 \delta t
\end{equation}
Inserting this expression in~(\ref{eq:extrapol}) gives the familiar scheme:
\begin{equation}
U^{n+1} = U^n + \Delta t D_2
\end{equation}
as was shown in \cite{eqfree}.
This is both stable and consistent with the heat equation,
which is the right effective model at the large scale.
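(With $D_2$ given by \eqref{interp-2}, this is the standard central-difference/forward-Euler discretization of the heat equation, stable under the usual restriction $\Delta t\leq\Delta x^2/2$.)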
Now let us turn to the advection equation
\begin{equation}
\partial_t u + \partial_{x} u = 0
\end{equation}
In this case, we have
\begin{equation}
\mathcal{S}_{\delta t} \tilde u_0 (x) =
D_0 +D_1 (x -\delta t) + \frac 12 D_2 (x - \delta t)^2
\end{equation}
Hence,
\begin{equation}
\tilde U^n_{\delta t}= \mathcal{A}_h \mathcal{S}_{\delta t} \tilde u_0 (x)=
D_0 - D_1 \delta t +
\frac 12 D_2 \delta t^2 +\frac 1{24} D_2 h^2
= U^n -D_1 \delta t + \frac 12 D_2 \delta t^2
\end{equation}
and (\ref{eq:extrapol}) becomes
\begin{equation}
U^{n+1} = U^n +\Delta t (-D_1+ \frac 12 \delta t D_2)
\end{equation}
Since $\delta t \ll \Delta t$, the last term is much smaller than the other
terms, and we are left essentially with a scheme which is unstable under
the standard CFL condition that $\Delta t \sim \Delta x$:
\begin{equation}
U^{n+1} = U^n -\Delta t D_1
\end{equation}
due to the central character of $D_1$.
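Indeed, a von Neumann analysis of $U^{n+1}_j = U^n_j - \frac{\Delta t}{2\Delta x}\big(U^n_{j+1}-U^n_{j-1}\big)$ gives the amplification factor $g(\theta)=1-\mathrm{i}\,\frac{\Delta t}{\Delta x}\sin\theta$, so that $|g(\theta)|>1$ for every mode with $\sin\theta\neq0$, no matter how the ratio $\Delta t/\Delta x$ is chosen.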
Aside from the stability issue, there can also be problems with
consistency. Consider the following example:
\begin{equation}
\partial_t u = - \partial_x^4 u
\end{equation}
The macroscale model is obviously the same model. However, it is easy
to see that if we follow the patch dynamics procedure with $d=2$, we
would be solving $\partial_t U = 0$, which is obviously inconsistent
with the correct macroscale model.
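In fact, the fourth derivative of the quadratic lifting $\tilde u_0$ in \eqref{interp-1} vanishes identically, so $\mathcal{S}_{\delta t}\tilde u_0=\tilde u_0$, hence $\tilde U^n_{\delta t}=U^n$ and the extrapolation step \eqref{eq:extrapol} returns $U^{n+1}=U^n$.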
For these simple examples, the difficulties discussed above can be
fixed by using different reinitialization procedure for the
micro-solvers. For the example of the convection equation, one should
use one-sided interpolation schemes in the spirit of upwind
schemes. For the last example, one should use piecewise 4th order
polynomial interpolation. {\it But in general, finding such a
reinitialization procedure seems to be quite a daunting task, since
it depends on the nature of the unknown effective macroscale model}.
Imagine that the microscopic solver is molecular dynamics. The
reinitialization procedure has to take into account not only
consistency with the local macrostates of the system (which is the
only requirement for the extended multi-grid method and HMM), but also
the effective macroscale scale model (which is unknown) such as:
\begin{enumerate}
\item The order of the macroscale equation.
\item The direction of the wind, if the effective macroscale
equation turns out to be a first order PDE.
\item Other unforeseeable factors.
\end{enumerate}
Indeed it is not at all clear how patch dynamics would work if
molecular dynamics models are used to model macroscopic gas dynamics.
To overcome these problems, the ``equation-free'' developers propose
to design a number of numerical tests to find out the nature of the
effective macroscale equations. One such a procedure, the
``baby-bathwater scheme'' will be discussed in the next section.
However, in addition to the technical issues, it is not clear what set of
characteristics we are supposed to test on.
\section{The ``baby-bathwater scheme''}
As the last example shows, it is useful at times to know the order of
the highest order derivatives that appear in the effective macroscale
equation, even if we do not know all the details of the macroscale
model. An algorithm was proposed in \cite{baby} for this purpose.
The ``baby-bathwater scheme'', as it was called, promises to find the
highest order derivative in the effective macroscale model, by
performing simulations using the microscopic model: Assume that the
effective macroscale model is of the form
\begin{equation}
\partial_t U = F(U, \partial_x U, \cdots, \partial_x^m U)
\end{equation}
The objective is to find $m$.
This problem can be formulated abstractly as follows. Assume we have
a function $F=F(x_1, x_2, \cdots)$ and we know that it only depends on
finitely many variables: $F= F(x_1, x_2 \cdots, x_m)$. Assume that we
can evaluate $F$ at any given point, can we find the value of $m$
efficiently?
The basic idea used in \cite{baby} is that if $F$ depends truly
on the variable $x_j$, then the variance of $F$ as $x_j$ changes should
not vanish. The practical difficulty is how to use this idea
efficiently.
Without getting into the details of the algorithm presented in
\cite{baby}, it seems quite clear that there is no fool-proof
inductive procedure for finding $m$. Take an extreme case, say, $F=
F(x_1, x_{100})$. Without having the prior knowledge that $F$ may depend
on $x_{100}$, an inductive procedure would likely conclude that $F$ is
only a function of $x_1$.
This example is of course quite extreme, and most practical situations
are not like this. Nevertheless, it does raise some questions about
the robustness of the algorithm presented in \cite{baby}. There is a
much more serious concern, and this is associated with the well-known
phenomenon that the order of the effective macroscale model depends on
the scale we look at. A physically intuitive example is convection
and diffusion of tracer particles in the Rayleigh-B\'enard cells: At the
scale of the cells, the tracer particles are convected and diffusion
can be neglected. Hence the effective model is a first order
equation. At scales much larger than the size of the cells, diffusion
dominates. Hence the effective model is a second order equation.
This means that the output of the ``baby-bathwater scheme'' should
depend on the numerical parameter $\Delta$ in the scheme.
This behavior can be demonstrated rigorously using the well-known
results of Kesten and Papanicolaou \cite{K-P}. Consider the dynamics
of inertial particles in a stationary random force field:
\begin{equation}
\frac{d^2 x}{d t^2} = F(x)
\end{equation}
In phase space, we can write this as
\begin{equation}
\begin{cases}
\displaystyle \frac{d x}{d t} = v,\\[6pt]
\displaystyle \frac{d v}{d t} = F(x)
\end{cases}
\end{equation}
The density of the particles (in phase space)
obeys the Liouville equation:
\begin{equation}
\partial_t \rho + v \partial_x \rho + F(x) \partial_v \rho = 0
\end{equation}
However, if we consider the rescaled fields:
\begin{equation}
\begin{cases}
\displaystyle \frac{d x^\delta}{d t} = \frac1{\delta^2} v^\delta,\\[6pt]
\displaystyle \frac{d v^\delta}{d t} = \frac1{\delta} F(x^\delta)
\end{cases}
\end{equation}
it was shown in \cite{K-P} under some conditions on the random field
$F$ that as $\delta \rightarrow 0$, the process $v^\delta (\cdot)$
converges to a diffusion process. In other words, if we consider the
density of the particles in $v$ space, then in this scaling we have
\begin{equation}
\partial_t \rho + \partial_v
(b(v) \rho) = \frac 12 \partial_v^2 (a(v) \rho)
\end{equation}
for some functions $b(\cdot)$ and $a(\cdot)$, which is a second order
equation.
\section{Conclusions}
The idea of interrogating legacy codes as a control system is very
attractive and, to some extent, has already been commonly used in some
disciplines. For example, chemists use packages such as CHARMM and
AMBER as legacy codes to perform optimization tasks, e.g. to find
free energy surfaces and minimum free energy paths. Optimization
techniques such as the Nelder-Mead algorithm that use only function
values (not the derivatives) were designed with this kind of problem
in mind. One purpose of the work of Keller \textit{et al.} is to
extend bifurcation analysis tools to systems that are defined by
legacy codes \cite{Shroff}. The ``equation-free'' approach attempts
to extend such practices to another direction, namely the modeling of
macroscale spatial/temporal dynamics of systems defined by microscopic
models, in the form of legacy codes. While this seems very
attractive, the set of tools proposed under this umbrella are quite
far from being sufficient for reaching this objective. We have
discussed some of the technical difficulties in this note. This
discussion is certainly not exhaustive. It is only meant to be
illustrative.
\section*{Acknowledgments}
We are very grateful to the many people, including Yannis Kevrekidis,
who have read the draft of this paper at various stages and gave us
their comments. The work of W. E was supported in part by ONR grant
N00014-01-1-0674 and by DOE grant DOE DE-FG02-03ER25587. The work of
E. V.-E. was supported in part by NSF grants DMS02-09959 and
DMS02-39625 and by ONR grant N00014-04-1-0565.
\end{document}
|
\begin{document}
\title{Timelike minimal surfaces in the three-dimensional Heisenberg group}
\author[H.~Kiyohara]{Hirotaka Kiyohara}
\address{Department of Mathematics, Hokkaido University,
Sapporo, 060-0810, Japan}
\email{[email protected]}
\thanks{The first named author is supported by JST SPRING, Grant Number JPMJSP2119.}
\author[S.-P.~Kobayashi]{Shimpei Kobayashi}
\address{Department of Mathematics, Hokkaido University,
Sapporo, 060-0810, Japan}
\email{[email protected]}
\thanks{The second named author is partially supported by Kakenhi 18K03265.}
\subjclass[2020]{Primary~53A10, 58E20, Secondary~53C42}
\keywords{Minimal surfaces; Heisenberg group; timelike surfaces; loop groups;
the generalized Weierstrass type representation}
\date{\today}
\pagestyle{plain}
\begin{abstract}
Timelike surfaces in the three-dimensional Heisenberg group with left invariant
semi-Riemannian metric are studied. In particular, non-vertical timelike minimal
surfaces are characterized by the non-conformal Lorentz harmonic maps into
the de Sitter two sphere. On the basis of the characterization,
the generalized Weierstrass type representation
will be established through the loop group decompositions.
\end{abstract}
\maketitle
\section{Introduction}
Constant mean curvature surfaces in three-dimensional homogeneous spaces,
specifically Thurston's eight model spaces \cite{Thurston},
have been intensively studied in recent years.
One of the reasons is a seminal paper by Abresch-Rosenberg \cite{Abresch-Rosenberg:Acta},
where they introduced a quadratic differential, the so-called \textit{Abresch-Rosenberg differential}, analogous to the Hopf differential for surfaces in the space forms
and showed that it was holomorphic for a constant
mean curvature surface in various classes of three-dimensional homogeneous spaces,
such as the Heisenberg group ${\rm Nil}_3$, the product spaces $\mathbb S^2 \times \mathbb R$
and $\mathbb H^2 \times \mathbb R$, etc.; see \cite{Abresch-Rosenberg} for details.
It is evident that holomorphic quadratic differentials are fundamental
for the study of the global geometry of surfaces, \cite{Fer-Mira2}.
On the one hand, Berdinsky-Taimanov developed integral representations of surfaces in three-dimensional homogeneous spaces
by using the generating spinors and the nonlinear Dirac type equations,
\cite{Ber:Heisenberg, BT:Sur-Lie}. They were natural generalizations of
the classical Kenmotsu-Weierstrass representation
for surfaces in the Euclidean three-space.
Combining the Abresch-Rosenberg differential
and the nonlinear Dirac equation with generating spinors, in \cite{DIKAsian, DIKtop},
Dorfmeister, Inoguchi and the second named author of this paper have
established the loop group method for minimal surfaces in ${\rm Nil}_3$, where the following
left-invariant Riemannian metric has been
considered on ${\rm Nil}_3$:
\[
ds^2 = dx_1^2 + dx_2^2 + \left(dx_3 + \frac12 (x_2 dx_1 - x_1 dx_2)\right)^2.
\]
In particular, all non-vertical minimal surfaces in ${\rm Nil}_3$ have been constructed from
holomorphic data, which have been called the \textit{holomorphic potentials},
through the loop group decomposition, the so-called Iwasawa decomposition,
and the construction has been commonly called the \textit{generalized Weierstrass type representation}.
In this loop group method, the Lie group structure of ${\rm Nil}_3$ and
harmonicity of the left-translated normal Gauss map
of a non-vertical surface, which obviously took values in a hemisphere in
the Lie algebra of ${\rm Nil}_3$,
were essential tools.
To be more precise, a surface in ${\rm Nil}_3$ is minimal if and only if
the left-translated normal Gauss map is a non-conformal harmonic map
with respect to the \textit{hyperbolic metric} on the hemisphere, that is,
one considers the hemisphere as the hyperbolic two space not the two sphere
with standard metric.
Since the hyperbolic two space is one of the standard symmetric spaces and the
loop group method of
harmonic maps from a Riemann surface into a symmetric space have been
developed very well \cite{DPW}, thus we have obtained
the generalized Weierstrass type representation.
On the other hand, it is easy to see
that the three-dimensional Heisenberg group ${\rm Nil}_3$ can have the following
left-invariant \textit{semi-Riemannian} metrics:
\[
ds^2_{\pm} = \pm dx_1^2 + dx_2^2 \mp \left(dx_3 + \frac12 (x_2 dx_1 - x_1 dx_2)\right)^2.
\]
Moreover, in \cite{Rah} it has been shown that
the only left-invariant semi-Riemannian metrics on ${\rm Nil}_3$
with a $4$-dimensional isometry group are
the metrics $ds^2_{\mp}$.
Therefore a natural problem is the study of spacelike/timelike,
minimal/maximal surfaces in ${\rm Nil}_3$ with the above semi-Riemannian
metrics in terms of the generalized Weierstrass type representations.
In this paper we will consider
timelike surfaces in ${\rm Nil}_3$ with the semi-Riemannian metric $ds^2_{-}$.
For defining the Abresch-Rosenberg differential and the nonlinear Dirac equations
with generating spinors, the \textit{para-complex structure} on a timelike
surface is essential, and we will systematically develop the theory
of timelike surfaces using the para-complex structure, the Abresch-Rosenberg
differential and the nonlinear Dirac equations with generating spinors in
Section \ref{sc:timelikesurf}.
Then the first of the main results in this paper is Theorem \ref{thm:main}, where
non-vertical
timelike minimal surfaces in ${\rm Nil}_3$ will be characterized in terms of harmonicity
of the left-translated normal Gauss map.
To be more precise, the left-translated normal Gauss map of a timelike surface
takes values in
the lower half part of the de Sitter two sphere $\widetilde{\mathbb S}^2_{1-}
=\{(x_1, x_2, x_3) \in \mathfrak{nil}_3 = \mathbb L^3 \mid
- x_1^2 + x_2^2 + x_3^2=1, x_3< 0\}$, but it is not a Lorentz harmonic map into
$\widetilde{\mathbb S}^2_{1-}$ with respect to the standard metric on the de Sitter sphere. It will be shown that by combining two stereographic projections, the left-translated normal Gauss map can take values in
the upper half part of the de Sitter
two sphere with interchanging $x_1$ and $x_2$, that is, $\mathbb S^2_{1+} = \{(x_1, x_2, x_3) \in \mathbb L^3 \mid
x_1^2 -x_2^2 + x_3^2=1, x_3>0\}$, see Figure \ref{fig:desitter},
and it is
a non-conformal Lorentz harmonic map into $\mathbb S^2_{1+}$ if and only if the timelike surface is minimal, see Section \ref{gaussmap} for details.
Note that timelike minimal surfaces in $({\rm Nil}_3, ds^2_-)$
have been studied through the Weierstrass-Enneper type representation
and the Bj\"oring problem in \cite{IKOS, LMM, CDM, CMO:Bjorling, SSP, CMO:Minimal, CO, Magid, Kondreak}.
It has been known that timelike constant mean curvature surfaces in
the three-dimensional Minkowski space $\mathbb L^3$ could be characterized by
a Lorentz harmonic map into the de Sitter two space, \cite{Inoguchi, IT, DIT, BS:CMC,
BS:Cauchy}. In fact
the Lorentz harmonicity of the unit normal of a timelike surface in $\mathbb L^3$
is equivalent to constancy of the mean curvature.
Furthermore, the generalized Weierstrass type representation for
timelike non-zero constant mean curvature surfaces has been established in
\cite{DIT}. In Theorem \ref{thm:Sym1}, we will show
that two maps, which are given by the logarithmic derivative of
one parameter family of moving frames of a non-conformal Lorentz
harmonic map (the so-called \textit{extended frame})
into $\mathbb S^2_{1+}$ with respect to an additional parameter
(the so-called \textit{spectral parameter}),
define a timelike non-zero constant mean curvature
surface in $\mathbb L^3$ and a non-vertical timelike minimal surface in ${\rm Nil}_3$,
respectively.
From the view point of the loop group
construction of Lorentz harmonic maps, the construction in \cite{DIT}
is sufficient, however,
it is not enough for our study of timelike minimal surfaces in $({\rm Nil}_3, ds_-^2)$.
As we have mentioned above, for defining
the Abresch-Rosenberg differential and the nonlinear Dirac equation
with generating spinors the para-complex structure is essential.
Note that the para-complex structure has been used for the study of timelike surfaces
\cite{W:Lorentz, L:Lorentz}.
We can then show that
the Abresch-Rosenberg differential is para-holomorphic
if a timelike surface has constant mean curvature,
Theorem \ref{thm:constant}, which is analogous to the fundamental result
of Abresch-Rosenberg.
As a by-product of utilizing the para-complex structure, it is easy to
compare our construction with that of minimal surfaces in $({\rm Nil}_3, ds^2)$, where
the complex structure has been used. Moreover,
the generalized Weierstrass type representation can be understood in
a unified way: the Weierstrass data is just a
$2$ by $2$ matrix-valued para-holomorphic function,
and a loop group decomposition of
the solution of a para-holomorphic differential equation gives the
extended frame of a non-conformal Lorentz harmonic map into $\mathbb S^2_{1 +}$,
Theorem \ref{thm:Weierstrass}. One of the
difficulties is that one needs appropriate loop
group decompositions in the para-complex setting, that is,
Birkhoff and Iwasawa decompositions.
In Theorem \ref{thm:BIdecomposition}, by identifying
the double loop group of $\mathrm{SL}_2 \mathbb R$, that is, $\Lambda {\rm SL}_{2} \R_{\sigma} \times \Lambda {\rm SL}_{2} \R_{\sigma}$, with
the loop group of ${\rm SL}_2 \mathbb C^{\prime}$,
that is, $\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}$ (where $\mathbb C^{\prime}$ denotes the para-complex numbers), via a natural isomorphism,
we will obtain such decompositions.
Finally in Section \ref{sc:Example}, several examples will be shown by our loop group construction.
In particular \textit{B-scroll type minimal surfaces} in ${\rm Nil}_3$ will be established
in Section \ref{sbsc:Bscroll}.
In Appendix \ref{timelikemin}, we will
discuss timelike constant mean curvature surfaces in $\mathbb L^3$, and
in Appendix \ref{sc:nopara}, we will see
the correspondence between our construction and the construction without
the para-complex structure in \cite{DIT}.
\section{Timelike surfaces in ${\rm Nil}_3$}\label{sc:timelikesurf}
In this section we will consider timelike surfaces in ${\rm Nil}_3$.
In particular we will use the para-complex structure and
the nonlinear Dirac equation for timelike surfaces. Finally the Lax pair type
system for timelike surfaces will be given.
\subsection{${\rm Nil}_3$ with indefinite metrics}
The Heisenberg group is a $3$-dimensional Lie group
\[
{\rm Nil}_3(\tau) = (\mathbb R^3 (x_1,x_2,x_3), \cdot)
\]
for $\tau \neq 0$ with the multiplication
\[
(x_1,x_2,x_3) \cdot (y_1,y_2,y_3)
=
(x_1+y_1,x_2+y_2,x_3+y_3+\tau(x_1y_2-y_1x_2)).
\]
The unit element of ${\rm Nil}_3(\tau)$ is $(0,0,0)$.
The inverse element of $(x_1,x_2,x_3)$ is $(-x_1,-x_2,-x_3)$.
The groups ${\rm Nil}_3(\tau)$ and ${\rm Nil}_3(\tau')$ are isomorphic if $\tau \tau' \neq 0$.
The Lie algebra $\mathfrak{nil}_3$ of ${\rm Nil}_3(\tau)$ is $\mathbb R^3$ with the relations:
\[
[e_1,e_2] = 2\tau e_3, \quad [e_2,e_3] = [e_3,e_1] =0
\]
with respect to the normal basis $e_1=(1,0,0),e_2=(0,1,0),e_3=(0,0,1)$.
In this paper we consider the left invariant indefinite metric $ds_-^2$ for
${\rm Nil}_3$ as follows:
\begin{equation}\label{eq:indefinitemetric}
ds_-^2 = -(dx_1)^2 + (dx_2)^2 + \omega_{\tau} \otimes \omega_{\tau},
\end{equation}
where $\omega_{\tau} = dx_3 + \tau (x_2 dx_1 - x_1 dx_2)$.
Moreover, we fix the real parameter $\tau$ as $\tau =1/2$
for simplicity.
The vector fields $E_k \, (k=1,2,3)$ defined by
\[
E_1 = \partial_{x_1} - \frac{x_2}2 \partial_{x_3},\quad
E_2 = \partial_{x_2} + \frac{x_1}2 \partial_{x_3}\quad
{\rm and}\quad
E_3 = \partial_{x_3}
\]
are left invariant corresponding to $e_1,e_2,e_3$ and orthonormal to each other
with the timelike vector $E_1$ with respect to the metric $ds_-^2$.
The Levi-Civita connection $\nabla$ of $ds_-^2$ is given by
\[
\begin{matrix}
\nabla_{E_1} E_1 = 0, & \nabla_{E_1} E_2 = \frac12E_3, &\nabla_{E_1} E_3 = -\frac12E_2,\\
\nabla_{E_2} E_1 = -\frac12E_3,& \nabla_{E_2} E_2 = 0, &\nabla_{E_2} E_3 = -\frac12E_1,\\
\nabla_{E_3} E_1 = -\frac12E_2,& \nabla_{E_3} E_2 = -\frac12E_1,&\nabla_{E_3} E_3 = 0 .\\
\end{matrix}
\]
\subsection{Para-complex structure}
Let $\mathbb C^{\prime}$ be a real algebra spanned by $1$ and $i^{\prime}$ with following multiplication:
\[ (i^{\prime})^2=1, \quad 1\cdot i^{\prime} = i^{\prime} \cdot 1= i^{\prime}. \]
An element of the algebra $\mathbb C^{\prime} = \mathbb R 1\oplus \mathbb R i^{\prime}$ is called a
\textit{para-complex number}.
For a para-complex number $z$ we can uniquely express $z=x+yi^{\prime}$ with some $x, y \in \mathbb R$.
Similar to complex numbers, the real part $\operatorname{Re} z$,
the imaginary part $\operatorname {Im} z$
and the conjugate $\bar z$ of $z$ are defined by
\[ \operatorname{Re} z = x, \quad \operatorname {Im} z = y \quad {\rm and} \quad
\bar z = x - yi^{\prime}.
\]
For a para-complex number $z=x+yi^{\prime} \in \mathbb C^{\prime}$
there exists a para-complex number $w \in \mathbb C^{\prime}$ with $z^{1/2} =w$
if and only if
\begin{equation}\label{eq:root}
x+y \geq 0 \quad {\rm and} \quad x-y \geq 0.
\end{equation}
In particular ${i^{\prime}} ^{1/2}$ does not exist.
Moreover, for a para-complex number $z = x+ y i^{\prime} \in \mathbb C^{\prime}$,
there exists a para-complex number $w \in \mathbb C^{\prime}$
such that $z = e^w$ if and only if
\begin{equation}\label{eq:exp}
x+y > 0 \quad {\rm and} \quad x-y > 0.
\end{equation}
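These conditions can be checked directly: writing $w = a + b i^{\prime}$ with $a, b \in \mathbb R$, one has
\[
w^2 = a^2 + b^2 + 2ab\, i^{\prime}
\quad {\rm and} \quad
e^{w} = e^{a}\left( \cosh b + i^{\prime} \sinh b \right),
\]
so that $\operatorname{Re}(w^2) \pm \operatorname {Im}(w^2) = (a \pm b)^2 \geq 0$ and
$\operatorname{Re}(e^w) \pm \operatorname {Im}(e^w) = e^{a \pm b} > 0$; conversely, under \eqref{eq:root} (resp. \eqref{eq:exp}) these relations can be solved for $a$ and $b$.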
Let $M$ be an orientable connected 2-manifold,
$G$ a Lorentzian manifold
and $f:M \to G$ a timelike immersion, that is, the induced metric on $M$ is Lorentzian.
The induced Lorentzian metric defines a Lorentz conformal structure on $M$:
for a timelike surface
there exists a local para-complex coordinate system $z= x+yi^{\prime}$
such that the induced metric $I$ is given by
$I = e^u dz d\bar z = e^u \left((dx)^2-(dy)^2\right)$.
Then we can regard $M$ and $f$
as a Lorentz surface and a conformal immersion, respectively.
The coordinate system $z$ is called the \textit{conformal coordinate system}
and the function $e^u$ the conformal factor of the metric with respect to $z$.
For a para-complex coordinate system $z= x+yi^{\prime}$, the partial differentiations
are defined by
\[
\partial_z = \frac12(\partial_x + i^{\prime} \partial_y) \quad {\rm and} \quad
\partial_{\bar z} = \frac12(\partial_x - i^{\prime} \partial_y).
\]
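Note that $dz \, d\bar z = (dx)^2 - (dy)^2$ and
\[
\partial_z \partial_{\bar z} = \frac14\left( \partial_x^2 - \partial_y^2 \right),
\]
so that $\partial_z \partial_{\bar z}$ is a quarter of the wave operator, which is consistent with the terminology \textit{Lorentz harmonic} used below.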
\subsection{Structure equations}
Let $f:M \to {\rm Nil}_3$ be a conformal immersion from a Lorentz surface $M$ into ${\rm Nil}_3$.
Let us denote the inverse element of $f$ by $f^{-1}$.
Then the $1$-form $\alpha = f^{-1}df$ satisfies the Maurer-Cartan equation:
\begin{equation}\label{eq:MC}
d\alpha + \frac12 [\alpha\wedge\alpha] = 0.
\end{equation}
For a conformal coordinate $z = x + yi^{\prime}$
defined on a simply connected domain $\mathbb D \subset M$,
set $\Phi$ as
\[ \Phi = f^{-1}f_z. \]
The function $\Phi$ takes values
in the para-complexification $\mathfrak{nil}_3^{\mathbb C^{\prime}}$ of $\mathfrak{nil}_3$.
Then $\alpha$ is expressed as
\[ \alpha = \Phi dz + \overline{\Phi} d\bar z \]
and the Maurer-Cartan equation \eqref{eq:MC} as
\begin{equation}\label{eq:MC2}
\Phi_{\bar z} - \overline{\Phi}_z + [\overline{\Phi}, \Phi] =0.
\end{equation}
Denote the para-complex extension of
$ds_-^2 =g = \sum_{i, j}g_{ij} dx_i dx_j $ to $\mathfrak{nil}_3^{\mathbb C^{\prime}}$
by the same letter.
Then the conformality of $f$ is equivalent to
\[
g(\Phi, \Phi) =0, \quad
g(\Phi, \overline{\Phi}) >0.
\]
For the orthonormal basis $\{ e_1, e_2, e_3\}$ of $\mathfrak{nil}_3$
we can expand $\Phi$ as $\Phi = \phi_1 e_1 + \phi_2 e_2 + \phi_3 e_3$.
Then the conformality of $f$ can be represented as
\begin{equation}\label{eq:condition}
-(\phi_1)^2 +(\phi_2)^2 +(\phi_3)^2 =0, \quad
-\phi_1 \overline{\phi_1} +\phi_2 \overline{\phi_2} +\phi_3 \overline{\phi_3} =\frac12 e^u,
\end{equation}
for some function $u$.
The conformal factor is given by $e^u$.
Conversely, for a $\mathfrak{nil}_3^{\mathbb C^{\prime}}$-valued function $\Phi = \sum_{k=1}^3 \phi_k e_k$
on a simply connected domain $\mathbb D \subset M$
satisfying \eqref{eq:MC2} and \eqref{eq:condition},
there exists a unique conformal immersion $f:\mathbb D \to {\rm Nil}_3$
with the conformal factor $e^u$
satisfying $f^{-1} df = \Phi dz + \overline{\Phi} d\bar z$
for any initial condition in ${\rm Nil}_3$ given at some base point in $\mathbb D$.
Next we consider the structure equation for a timelike surface $f$, eventually specializing to mean curvature $0$.
For $f$ denote the unit normal vector field by $N$
and the mean curvature by $H$.
The tension field $\tau (f)$ of $f$ is given by
$\tau (f) = \operatorname{tr}(\nabla df)$,
where $\nabla df$ is the second fundamental form of $(f,N)$.
As is well known, the tension field of $f$ is related to the mean curvature and the unit normal by
\begin{equation}\label{eq:tension}
\tau(f) = 2HN.
\end{equation}
By left translating to $(0,0,0)$, this equation can be rephrased as
\begin{equation}\label{eq:streq}
\Phi _{\bar z}
+\overline{\Phi} _z
+ \left\{ \Phi , \overline{\Phi}\right\}
=
e^u Hf^{-1} N
\end{equation}
where $\{\cdot,\cdot\}$ is the bilinear symmetric map defined by
\[
\left\{ X , Y\right\}
=
\nabla_ {X} Y
+ \nabla_ {Y} X
\]
for $X,Y \in \mathfrak{nil}_3$.
In particular for a surface with the mean curvature 0,
we have
\begin{equation}\label{eq:minimalcondition}
\Phi_{\bar z}
+ \overline{\Phi} _z
+ \left\{ \Phi , \overline{\Phi}\right\}=0.
\end{equation}
Conversely, for a $\mathfrak{nil}_3^{\mathbb C^{\prime}}$-valued function $\Phi = \sum_{k=1}^3 \phi_k e_k$
satisfying \eqref{eq:MC2}, \eqref{eq:condition} and \eqref{eq:minimalcondition}
on a simply connected domain $\mathbb D$,
there exists a conformal timelike surface $f:\mathbb D \to {\rm Nil}_3$
with the mean curvature $0$
and the conformal factor $e^u$
satisfying $f^{-1} df = \Phi dz + \overline{\Phi} d\bar z $
for any initial condition in ${\rm Nil}_3$ given at some base point in $\mathbb D$.
\subsection{Nonlinear Dirac equation for timelike surfaces}\label{sec:deracequation}
Let us consider the conformality condition of an immersion $f$.
We first prove the following lemma:
\begin{Lemma}\label{lem:squreroot}
If the product $xy$ of two para-complex numbers $x, y \in \mathbb C^{\prime}$ has
a square root, then there exists $\epsilon \in \{\pm 1, \pm i^{\prime}\}$
such that $\epsilon x$ and $\epsilon y$ have square roots.
\end{Lemma}
\begin{proof}
By the assumption,
\[
\operatorname{Re} (x y) \pm \operatorname {Im} (x y) \geq 0
\]
holds, and a simple computation shows that it is equivalent to
\[
(\operatorname{Re} (x) \pm \operatorname {Im} (x))(\operatorname{Re} (y) \pm \operatorname {Im} (y)) \geq 0.
\]
Then the claim follows.
\end{proof}
The first condition in \eqref{eq:condition} can be rephrased as
\begin{equation}
\phi_3^2 = (\phi_1 + i^{\prime} \phi_2)(\phi_1 - i^{\prime} \phi_2).
\end{equation}
Hence, by Lemma \ref{lem:squreroot}, there exists
$\epsilon \in \{\pm 1, \pm i^{\prime} \}$ such that
$\epsilon(\phi_1 + i^{\prime} \phi_2)$ and $\epsilon(\phi_1 - i^{\prime} \phi_2)$
have square roots. Therefore there exist para-complex functions $\overline{\psi_2}$
and $\psi_1$
such that
\[
\phi_1 + i^{\prime} \phi_2 = 2 \epsilon \overline{\psi_2}^2, \quad
\phi_1 - i^{\prime} \phi_2 = 2 \epsilon {\psi_1}^2
\]
hold. Then $\phi_3$ can be rephrased as $\phi_3 = 2 \psi_1 \overline{\psi_2}$.
Let us compute the second condition in \eqref{eq:condition} by using
$\{\psi_1, \overline{\psi_2}\}$ as
\[
- \phi_1 \overline{\phi_1} + \phi_2 \overline{\phi_2} + \phi_3 \overline{\phi_3}
= -2 \epsilon \bar \epsilon
(\psi_1 \overline{\psi_1} - \epsilon \bar \epsilon\psi_2 \overline{\psi_2})^2.
\]
Since we have assumed that the left hand side is positive, $\epsilon$
takes values in
\[
\epsilon \in \{\pm i^{\prime}\}.
\]
Therefore without loss of generality, we have
\begin{equation}\label{eq:A}
\phi_1 = \epsilon \left((\overline{\psi_2})^2 +(\psi_1)^2\right), \:
\phi_2 = \epsilon i^{\prime} \left((\overline{\psi_2})^2 -(\psi_1)^2\right), \:
\phi_3 = 2 \psi_1 \overline{\psi_2}.
\end{equation}
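A direct computation shows that \eqref{eq:A} automatically satisfies the first condition in \eqref{eq:condition}: since $\epsilon^2 = (i^{\prime})^2 =1$,
\[
-(\phi_1)^2 + (\phi_2)^2 + (\phi_3)^2
= -\left((\overline{\psi_2})^2 + (\psi_1)^2\right)^2
+ \left((\overline{\psi_2})^2 - (\psi_1)^2\right)^2
+ 4 (\psi_1)^2 (\overline{\psi_2})^2 = 0,
\]
while, for $\epsilon = \pm i^{\prime}$, the second condition reads $\frac12 e^u = 2\left(\psi_2\overline{\psi_2} + \psi_1\overline{\psi_1}\right)^2$.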
Then the normal Gauss map $f^{-1}N$ can be represented in terms of the functions $\psi_1$ and $\psi_2$:
\begin{equation}\label{eq:normalgaussmap}
f^{-1} N = 2 e^{-u/2}
\left(
- \epsilon \left( \psi_1 \psi_2 - \overline {\psi_1} \overline {\psi_2} \right) e_1
+ \epsilon i^{\prime} \left( \psi_1 \psi_2 + \overline {\psi_1} \overline {\psi_2} \right) e_2
-\left( \psi_2 \overline {\psi_2} - \psi_1 \overline {\psi_1} \right) e_3
\right),
\end{equation}
where $e^{u/2} = 2(\psi_2 \overline {\psi_2} + \psi_1 \overline {\psi_1})$.
We can see that,
using the functions $(\psi_1, \psi_2)$,
the structure equations \eqref{eq:MC2} and \eqref{eq:streq}
are equivalent to the following nonlinear Dirac equation:
\begin{equation}\label{eq:Diracequation}
\left(
\begin{array}{ll}
\partial_z \psi_2 + \mathcal U \psi_1\\
-\partial_{\overline z} \psi_1 + \mathcal V \psi_2
\end{array}
\right)
=
\left(
\begin{array}{ll}
0\\0
\end{array}
\right).
\end{equation}
Here the Dirac potentials $\mathcal U$ and $\mathcal V$ are given by
\begin{equation}\label{eq:potential1}
\mathcal U = \mathcal V =
-\frac{H}2 e^{u/2}+ \frac{i^{\prime}}4 h
\end{equation}
where
\[
e^{u/2}=
2\left(\psi_2 \overline{\psi_2} + \psi_1 \overline{\psi_1} \right)
\quad {\rm and} \quad
h =2\left( \psi_2 \overline{\psi_2}-\psi_1 \overline{\psi_1} \right).
\]
\begin{Remark}
\mbox{}
\begin{enumerate}
\item
Without loss of generality,
we can assume that $\psi_2 \overline{\psi_2} + \psi_1 \overline{\psi_1}$ is positive,
if necessary by replacing $( \psi_1, \psi_2 )$ by $( -i^{\prime} \psi_1, i^{\prime} \psi_2 )$.
\item To prove the equations \eqref{eq:MC2} and \eqref{eq:streq}
from the nonlinear Dirac equation \eqref{eq:Diracequation} with \eqref{eq:potential1},
the functions $e^{u/2}$ and $h$ in \eqref{eq:potential1} and solutions $\psi_k \:(k=1,2)$
have to satisfy the relations
\[
e^{u/2} = 2 (\psi_2 \overline{\psi_2} + \psi_1 \overline{\psi_1}),
\quad
h = 2(\psi_2 \overline{\psi_2} - \psi_1 \overline{\psi_1}).
\]
\end{enumerate}
\end{Remark}
For a timelike surface with the constant mean curvature $H=0$,
the Dirac potential takes purely imaginary values.
Then, by using \eqref{eq:normalgaussmap},
we have the following lemma.
\begin{Lemma}
Let $f:\mathbb D \to ({\rm Nil}_3, ds_-^2)$ be a timelike surface with
constant mean curvature $H=0$.
Then the following statements are equivalent:
\begin{enumerate}
\item The Dirac potential $\mathcal U$ is not invertible at $p \in \mathbb D$.
\item The function $h$ is equal to zero at $p \in \mathbb D$.
\item $E_3$ is tangent to $f$ at $p \in \mathbb D$.
\end{enumerate}
\end{Lemma}
\begin{Remark}
The equivalence between $(2)$ and $(3)$ holds regardless of the value of $H$.
In general,
$\mathcal U$ is invertible if and only if
$(\mathbb Re \mathcal U)^2 - (\operatorname {Im} \mathcal U)^2 \neq 0$.
\end{Remark}
Hereafter we will exclude the points where $\mathcal U$ is not invertible,
that is, we will restrict ourselves to the case of
\begin{equation}\label{eq:assumption1}
\left( {\rm Re}\, \mathcal U \right)^2 - \left( {\rm Im}\, \mathcal U \right)^2 \neq 0.
\end{equation}
Then, by using \eqref{eq:exp},
the Dirac potentials can be written as
\begin{equation}\label{eq:potential3}
\mathcal U =\mathcal V = \tilde \epsilon e^{w/2}
\end{equation}
for some $\mathbb C^{\prime}$-valued function $w$ and $\tilde \epsilon \in \{\pm 1, \pm i^{\prime} \}$.
In particular,
if the mean curvature is zero
and the function $h$ has positive values,
then $\tilde \epsilon = i^{\prime}$.
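Indeed, in this case $\mathcal U = \frac{i^{\prime}}{4} h$ with $h>0$, and since $h/4$ satisfies \eqref{eq:exp}, one can write $\mathcal U = i^{\prime} e^{w/2}$ with $e^{w/2} = h/4$, that is, $\tilde \epsilon = i^{\prime}$ and $e^{w} = h^2/16$.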
\subsection{Hopf differential and an associated quadratic differential}
The Hopf differential $A dz^2$
is the $(2,0)$-part of the second fundamental form
for $f$, that is,
\[A = g (\nabla _{\partial_z} f_z, N). \]
A straightforward computation shows that
the coefficient function $A$ is rephrased in terms of $\psi_k$ as follows:
\[
A = 2 \{ \psi_1 (\overline{\psi_2})_z - \overline{\psi_2}(\psi_1)_z \}
- 4 i^{\prime} \psi_1^2 (\overline{\psi_2})^2.
\]
Next we define a para-complex valued function $B$ by
\begin{equation}\label{eq:ARdiff}
B = \frac14 (2 H - i^{\prime}) \tilde A, \quad \mbox{where}\quad \tilde A = A -
\frac{\phi_3^2}{2 H - i^{\prime}}.
\end{equation}
Here $A$ and $\phi_3$ are the Hopf differential and the $e_3$-component of $f^{-1}f_z$ for $f$.
It is easy to check that
the quadratic differential $B dz^2$ is globally defined,
and it will be called the \textit{Abresch-Rosenberg differential}.
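For example, in the minimal case $H=0$, combining \eqref{eq:ARdiff} with the above expressions of $A$ and $\phi_3$, the terms quartic in the spinors cancel and one obtains
\[
B = -\frac{i^{\prime}}{2}\left( \psi_1 (\overline{\psi_2})_z - \overline{\psi_2} (\psi_1)_z \right).
\]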
\subsection{Lax pair for timelike surfaces}
The nonlinear Dirac equation can be represented
in terms of the Lax pair type system.
\begin{Theorem}
Let $\mathbb D$ be a simply connected domain in $\mathbb C^{\prime}$ and $f: \mathbb D
\to {\rm Nil}_3$ a conformal timelike immersion
for which the Dirac potential $\mathcal U$ satisfies
\eqref{eq:assumption1}.
Then the vector $\widetilde \psi
=( \psi_1, \psi_2)$ satisfies the system of equations
\begin{equation}\label{eq:Laxpair}
\widetilde \psi_{z} = \widetilde \psi \widetilde U, \quad
\widetilde \psi_{\bar z} = \widetilde \psi \widetilde V,
\end{equation}
where
\begin{align}
\label{eq:tildeU1}
\widetilde U & =
\begin{pmatrix}
\frac12 w_z + \frac12 H_z \tilde \epsilon e^{-w/2} e^{u/2} &
- \tilde \epsilon e^{w/2}\\
B \tilde \epsilon e^{-w/2}& 0
\end{pmatrix},
\\
\label{eq:tildeV1}
\widetilde V & =
\begin{pmatrix}
0 & - \overline B \tilde \epsilon e^{-w/2}\\
\tilde \epsilon e^{w/2}& \frac12 w_{\bar z} + \frac 1 2 H_{\bar z} \tilde \epsilon e^{-w/2} e^{u/2}
\end{pmatrix}.
\end{align}
Here, $\tilde \epsilon \in \{\pm1, \pm i^{\prime}\}$ is the number determined by \eqref{eq:potential3}.
Conversely, every solution $\widetilde{\psi}$ to \eqref{eq:Laxpair}
with \eqref{eq:potential3} and \eqref{eq:potential1}
is a solution of the nonlinear Dirac equation \eqref{eq:Diracequation} with \eqref{eq:potential1}.
\end{Theorem}
\begin{proof}
By computing the derivative of the Dirac
potential $\tilde \epsilon e^{w/2}$ with respect to $z$,
we have
\[
\frac12 w_z \tilde \epsilon e^{w/2}
=
- \frac12 H_z e^{u/2}
+ 2 i^{\prime} H \psi_1\psi_2 (\overline \psi_2)^2
-\frac{2H -i^{\prime}}2 \psi_2 (\overline \psi_2)_z
-\frac{2H +i^{\prime}}2 \overline{\psi_1} (\psi_1)_z.
\]
Multiplying the equation above by $\psi_1$
and using the function $B$ defined in \eqref{eq:ARdiff},
we derive
\[
(\psi_1)_z
=
\left( \frac12 w_z + \frac12 H_z \tilde \epsilon e^{-w/2}e^{u/2} \right) \psi_1
+
B \tilde \epsilon e^{-w/2} \psi_2.
\]
The derivative of $\psi_2$ with respect to $z$ is given by the nonlinear Dirac equation.
Thus we obtain the first equation of \eqref{eq:Laxpair}.
We can derive the second equation of \eqref{eq:Laxpair} in a similar way by differentiating the potential with respect to $\bar z$.
Conversely, if the vector $\widetilde \psi = ( \psi_1, \psi_2)$
is a solution of \eqref{eq:Laxpair},
then the equations for $(\psi_1)_{\bar z}$ and $(\psi_2)_{z}$ in \eqref{eq:Laxpair}
are exactly the nonlinear Dirac equation \eqref{eq:Diracequation}.
\end{proof}
The compatibility condition of the above system is
\begin{gather}
\frac12 w_{z \bar z}
+ e^{w}
-B \overline{B} e^{-w}
+ \frac12 (H_{z \bar z} + p) \tilde \epsilon e^{-w/2} e^{u/2} =0,\label{GaussEquation} \\
\overline B_{z} \tilde \epsilon e^{-w/2}
=
- \frac12 \overline B H_{z} e^{-w} e^{u/2}
- \frac12 H_{\bar z} e^{u/2}, \label{Codazzi1}\\
B_{\bar z} \tilde \epsilon e^{-w/2} = -\frac12 B H_{\bar z} e^{-w} e^{u/2} - \frac12 H_{z} e^{u/2}, \label{Codazzi2}
\end{gather}
where $p = H_{z}(-w/2 + u/2)_{\bar z}$ for the (1,1)-entry and $p = H_{\bar z}(-w/2 + u/2)_{z}$ for the (2,2)-entry.
From the above compatibility conditions we have the following:
\begin{Theorem}\label{thm:constant}
For a constant mean curvature timelike surface in ${\rm Nil}_3$
whose Dirac potential is invertible everywhere,
the Abresch-Rosenberg differential is para-holomorphic.
\end{Theorem}
\begin{Remark}
To obtain a timelike immersion for solutions $w, B$ and $H$ of the compatibility condition
\eqref{GaussEquation}, \eqref{Codazzi1} and \eqref{Codazzi2},
a solution $\widetilde \psi =( \psi_1, \psi_2)$ of \eqref{eq:Laxpair} has to satisfy
\[
\tilde \epsilon e^{w/2} = -H(\psi_2 \overline{\psi_2} + \psi_1 \overline{\psi_1})
+ \frac{i^{\prime}}2 (\psi_2 \overline{\psi_2} - \psi_1 \overline{\psi_1}).
\]
This gives an overdetermined system and it seems not easy to find
a general solution for arbitrary $H$; however, for minimal surfaces we will show that
it is automatically satisfied.
\end{Remark}
\section{Timelike minimal surfaces in ${\rm Nil}_3$}
A timelike surface in ${\rm Nil}_3$ with the constant mean curvature $H=0$
is called a timelike minimal surface.
By Theorem \ref{thm:constant},
the Abresch-Rosenberg differential for a timelike minimal surface
is para-holomorphic.
For example
the triple $B=0, H=0$ and $e^w=16 / (1+16z \bar z)^2$
is a solution of the compatibility condition
\eqref{GaussEquation}, \eqref{Codazzi1} and \eqref{Codazzi2}.
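Indeed, for $B=0$ and $H=0$ the compatibility condition reduces to the single Liouville type equation $\frac12 w_{z \bar z} + e^{w}=0$, and $e^{w}=16 / (1+16z \bar z)^2$ satisfies it, since $\frac12 w_{z \bar z} = -16/(1+16 z \bar z)^2$ wherever $1 + 16 z \bar z$ is positive.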
In fact these are derived from a horizontal plane
\begin{equation}\label{eq:horizontalplane}
f(z) = \left( \frac{2i^{\prime}(z- \bar z)} {1+z \bar z}, \frac{2(z + \bar z)} {1 + z \bar z}, 0 \right).
\end{equation}
Thus the horizontal plane \eqref{eq:horizontalplane} is a timelike minimal surface in ${\rm Nil}_3$.
We will give examples of timelike minimal surfaces in Section \ref{sc:Example}.
In this section we characterize timelike minimal surfaces
in terms of the normal Gauss map.
\subsection{The normal Gauss map} \label{gaussmap}
For a timelike surface in ${\rm Nil}_3$, the normal Gauss map is given by \eqref{eq:normalgaussmap}.
Clearly it takes values in the de Sitter two sphere $\widetilde{\mathbb{S}}^2_1 \subset \mathfrak{nil}_3$:
\[
\widetilde{\mathbb{S}}^2_1 = \left\{
x_1 e_1+ x_2 e_2 + x_3 e_3 \in \mathfrak{nil}_3 \mid
-x_1^2 + x_2^2 +x_3^2 =1
\right\}.
\]
From now on we will assume that the function $h$ takes positive values,
that is,
the image of the normal Gauss map is in the lower half part of the de Sitter two sphere.
Moreover,
we assume that the timelike surface has the pair of functions $(\psi_1, \psi_2)$
of the formula \eqref{eq:A} with $\epsilon = i^{\prime}$.
If the function $h$ takes negative values,
or if the functions $(\psi_1, \psi_2)$ are given
with $\epsilon = -i^{\prime}$,
then by an argument similar to the case of $h>0$ and $\epsilon = i^{\prime}$
we can obtain the same results.
The normal Gauss map $f^{-1}N$ can be considered
as a map into another de Sitter two sphere
in the Minkowski space
\[
\mathbb{S}^2_1 = \left\{
(x_1, x_2 , x_3 ) \in \mathbb L^3 \mid
x_1^2 - x_2^2 +x_3^2 =1
\right\}\subset \mathbb L^3_{(+, -, +)}
\]
through the stereographic projections
from $(0, 0, 1) \in \widetilde{\mathbb{S}}^2_1 \subset \mathfrak{nil}_3$:
\[
\pi_{\mathfrak{nil}}^+ : \mathfrak{nil}_3 \supset \widetilde{\mathbb{S}}^2_1 \ni x_1 e_1 + x_2 e_2 +x_3 e_3
\mapsto
\left( \frac{x_1}{1-x_3}, \frac{x_2}{1-x_3}, 0\right) = \frac{x_1}{1-x_3} + i^{\prime} \frac{x_2}{1-x_3} \in \mathbb C^{\prime}
\]
and from $(0, 0, -1) \in \mathbb{S}^2_1 \subset \mathbb L^3_{(+, -, +)}$:
\[
\pi_{\mathbb L^3}^- : \mathbb L^3_{(+, -, +)} \supset \mathbb{S}^2_1 \ni (x_1, x_2, x_3)
\mapsto
\left( \frac{x_1}{1+x_3}, \frac{x_2}{1+x_3}, 0\right) = \frac{x_1}{1+x_3} + i^{\prime} \frac{x_2}{1+x_3} \in \mathbb C^{\prime}.
\]
In particular, the inverse map $(\pi _ {\mathbb L^3}^-) ^{-1}$ is given by
\[
(\pi _ {\mathbb L^3}^-) ^{-1}(g) = \left( \frac{2\operatorname{Re} g}{1+ g \overline{g}}, \frac{2\operatorname {Im} g}{1+ g \overline{g}}, \frac{1- g \overline{g}}{1+ g \overline{g}} \right)
\]
for $g = \left( \operatorname{Re} g, \operatorname {Im} g, 0 \right) \in \mathbb C^{\prime}$.
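One checks directly that the right hand side lies on $\mathbb{S}^2_1$: since $g\overline{g} = (\operatorname{Re} g)^2 - (\operatorname {Im} g)^2$,
\[
\left(\frac{2\operatorname{Re} g}{1+ g \overline{g}}\right)^2
- \left(\frac{2\operatorname {Im} g}{1+ g \overline{g}}\right)^2
+ \left(\frac{1- g \overline{g}}{1+ g \overline{g}}\right)^2
= \frac{4 g \overline{g} + (1- g \overline{g})^2}{(1+ g \overline{g})^2} =1,
\]
provided $1 + g\overline{g} \neq 0$.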
Since the normal Gauss map takes values
in the lower half of the de Sitter two sphere in $\mathfrak{nil}_3$,
the image under the projection $\pi_{\mathfrak{nil}}^+$ is
in the region enclosed by four hyperbolas, see Figure \ref{fig:desitter}.
Two of the four hyperbolas correspond to the vertical points,
that is, the points where $h$ vanishes,
and the others correspond to the infinite-points,
that is, the points where the first fundamental form degenerates.
Since the first and second signs of the metrics of ${\rm Nil}_3$ and $\mathbb L^3_{(+, -, +)}$ are interchanged,
the image of each hyperbola under the inverse map $(\pi^{-}_{\mathbb L^3})^{-1}$
plays the other role.
\begin{figure}
\caption{The upper half part of the de Sitter two sphere $\mathbb S^2_1$ (left)
and its stereographic projection (middle), and
the stereographic projection of the lower half part of $\widetilde{\mathbb S}^2_{1}$ (right).}
\label{fig:desitter}
\end{figure}
Define a map $g$ by the composition of the stereographic projection $\pi_{\mathfrak{nil}}^+$ with $f^{-1}N$,
and then we obtain
\[
g = i^{\prime} \frac{\overline{\psi_1}}{\psi_2} \in \mathbb C^{\prime}.
\]
Thus the normal Gauss map can be represented as
\[
f^{-1} N = \frac1{1-g \overline{g}}\left( 2 \operatorname{Re} (g) e_1 + 2 \operatorname {Im} (g)e_2 - (1+g \overline{g})e_3\right)
\]
and
\begin{equation}\label{eq:A++}
(\pi _ {\mathbb L^3}^-) ^{-1} \circ \pi_{\mathfrak{nil}}^+ \circ f^{-1}N
=
\frac{1} {\psi_2 \overline{\psi_2} - \psi_1 \overline{\psi_1}}
\left( -2 \operatorname {Im}(\psi_1 \psi_2), 2 \operatorname{Re}(\psi_1 \psi_2), \psi_2 \overline{\psi_2} + \psi_1 \overline{\psi_1} \right).
\end{equation}
Let $\mathfrak{su}_{1, 1}^{\prime}$ be \textit{the special para-unitary Lie algebra} defined by
\[
\mathfrak{su}_{1, 1}^{\prime} = \left\{
\begin{pmatrix}
ai^{\prime} & \bar b\\
b & -ai^{\prime}
\end{pmatrix}
\mid a \in \mathbb R, b \in \mathbb C^{\prime}
\right\}
\]
with the usual commutator of the matrices.
We assign the following indefinite product on $\mathfrak{su}_{1, 1}^{\prime}$:
\[
\langle X,Y \rangle := 2 {\rm tr} (XY).
\]
Then we can identify the Lie algebra $\mathfrak{su}_{1, 1}^{\prime}$ with $\mathbb L^3_{(+, -, +)}$ isometrically by
\begin{equation} \label{eq:identification1}
\mathfrak{su}_{1, 1}^{\prime} \ni \frac12
\begin{pmatrix}
r i^{\prime}& -p - q i^{\prime} \\
-p + q i^{\prime} & - r i^{\prime}
\end{pmatrix}
\longleftrightarrow
(p, q, r) \in \mathbb L^3_{(+, -, +)}.
\end{equation}
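This identification is indeed isometric: for $X = \frac12 \left( \begin{smallmatrix} r i^{\prime} & -p - q i^{\prime} \\ -p + q i^{\prime} & - r i^{\prime} \end{smallmatrix}\right)$ a direct computation gives
\[
\langle X, X \rangle = 2\, {\rm tr} (X^2) = p^2 - q^2 + r^2,
\]
which is the quadratic form of $\mathbb L^3_{(+, -, +)}$.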
Let ${\rm SU}_{1, 1}^{\prime}$ be the \textit{special para-unitary group} of degree two
corresponding to $\mathfrak{su}_{1, 1}^{\prime}$:
\[
{\rm SU}_{1, 1}^{\prime}= \left\{
\begin{pmatrix}
\alpha & \beta \\
\bar \beta &\bar \alpha
\end{pmatrix}
\mid
\alpha, \beta\in \mathbb C^{\prime},
\alpha \bar \alpha -\beta \bar \beta =1
\right\}.
\]
By the identification \eqref{eq:identification1}, the represented normal Gauss map \eqref{eq:A++} is equal to
\[
(\pi _ {\mathbb L^3}^-) ^{-1} \circ \pi_{\mathfrak{nil}}^+ \circ f^{-1}N= \frac{i^{\prime}}{2} \operatorname{Ad}(F)
\begin{pmatrix}
1&0\\
0&-1
\end{pmatrix},
\]
where $F$ is a ${\rm SU}_{1, 1}^{\prime}$-valued map defined by
\begin{equation}\label{eq:framingF}
F=
\frac{1}{\sqrt{\psi_2 \overline{\psi_2} - \psi_1 \overline{\psi_1}}}
\begin{pmatrix}
\overline{\psi_2} & \overline{\psi_1}\\
\psi_1 & \psi_2
\end{pmatrix}.
\end{equation}
The ${\rm SU}_{1, 1}^{\prime}$-valued function $F$ defined as above
is called a \textit{frame} of the normal Gauss map $f^{-1}N$.
\begin{Remark}
In general a frame of the normal Gauss map $f^{-1} N$ is not unique:
for a given frame $F$, there is a freedom of an ${\rm SU}_{1, 1}^{\prime}$-valued initial
condition $F_0$ and a ${\rm U}_1^{\prime}$-valued map $k$ such that $F_0F k$ is another frame.
In this paper we use the particular frame in \eqref{eq:framingF}, since
an arbitrary choice of initial condition does not correspond to
a given timelike surface $f$.
\end{Remark}
\subsection{Characterization of timelike minimal surfaces}
Let $F$ be the frame defined in \eqref{eq:framingF} of the normal
Gauss map $f^{-1} N$.
By taking the gauge transformation
\[
F \mapsto F
\begin{pmatrix}
e^{-w/4}&0\\
0&e^{-w/4}
\end{pmatrix},
\]
we can see the system \eqref{eq:Laxpair} is equivalent to the matrix differential equations
\begin{equation}\label{eq:Laxpair2}
F_{z} = F U, \quad
F_{\bar z} = F V,
\end{equation}
where
\begin{align*}
U & =
\begin{pmatrix}
\frac14 w_{z} + \frac 1 2 H_z \tilde \epsilon e^{-w/2} e^{u/2}& - \tilde \epsilon e^{w/2} \\
B \tilde \epsilon e^{-w/2}& -\frac14w_{z}
\end{pmatrix},\\
V & =
\begin{pmatrix}
-\frac14w_{\bar z} & - \bar B \tilde \epsilon e^{-w/2}\\
\tilde \epsilon e^{w/2}& \frac14 w_{\bar z} + \frac 1 2 H_{\bar z} \tilde \epsilon e^{-w/2} e^{u/2}
\end{pmatrix}.
\end{align*}
We define a family of Maurer-Cartan forms $\alpha^{\mu}$
parameterized by $\mu \in \left\{ e^{i^{\prime} t} \,|\, t \in \mathbb R \right\}$ as follows:
\begin{equation}\label{eq:alphamu}
\alpha^\mu := U^\mu dz+ V^\mu d\bar z ,
\end{equation}
where
\begin{align}
U^\mu & =
\begin{pmatrix}\label{eq:Umu}
\frac14 w_{z} + \frac 1 2 H_z \tilde \epsilon e^{-w/2} e^{u/2}&
- \mu^{-1} \tilde \epsilon e^{w/2} \\
\mu^{-1} B \tilde \epsilon e^{-w/2}& -\frac14w_{z}
\end{pmatrix},\\
V^\mu & =
\begin{pmatrix}\label{eq:Vmu}
-\frac14w_{\bar z} &
- \mu \bar B \tilde \epsilon e^{-w/2}\\
\mu \tilde \epsilon e^{w/2}&
\frac14 w_{\bar z} + \frac 1 2 H_{\bar z} \tilde \epsilon e^{-w/2} e^{u/2}
\end{pmatrix}.
\end{align}
\begin{Theorem}\label{thm:main}
Let $f$ be a conformal timelike immersion from a simply connected domain
$\mathbb D \subset \mathbb C^{\prime}$ into ${\rm Nil}_3$
satisfying \eqref{eq:assumption1}.
Then the following conditions are mutually
equivalent$:$
\begin{enumerate}
\item $f$ is a timelike minimal surface.
\item The Dirac potential $\mathcal U =\tilde \epsilon e^{w/2} = - \frac{H}2 e^{u/2} + \frac{i^{\prime}}4h$ takes
purely imaginary values.
\item $d + \alpha^{\mu}$ defines a family of flat connections on
$\mathbb D \times {\rm SU}_{1, 1}^{\prime}$.
\item The normal Gauss map $f^{-1} N$ is a Lorentz harmonic
map into de Sitter two sphere $\mathbb{S}^2_1 \subset \mathbb L^3_{(+, -, +)}$.
\end{enumerate}
\end{Theorem}
\begin{proof}
The statement (3) holds if and only if
\begin{equation} \label{eq:flat}
(U^\mu)_{\bar z} - (V^\mu)_z +[V^\mu, U^\mu] = 0
\end{equation}
for all $\mu \in \left\{ e^{i^{\prime} t} \,|\, t \in \mathbb R \right\}$.
The coefficients of ${\mu}^{-1}, {\mu}^0$ and $\mu$ of \eqref{eq:flat} are as follows:
\begin{eqnarray}
&\mbox{$\mu^{-1}$-part:}\;\; \frac{1}{2} H_{\bar z} e^{u/2} =0,
\;\; B_{\bar z}+\frac{1}{2} B H_{\bar z} \tilde \epsilon e^{-w/2} e^{u/2}=0, \label{eq:codazzi1} \\
& \mbox{$\mu^{0}$-part:}\;\;
\frac{1}{2} w_{z \bar z} + e^{w} -B \overline{B} e^{-w}
+ \frac{1}{2}(H_{z \bar z}+p ) \tilde \epsilon e^{-w/2} e^{u/2} =0, \label{eq:structure}\\
&\mbox{$\mu$-part:} \;\;
\overline B_z+\frac{1}{2} \overline B H_z \tilde \epsilon e^{-w/2} e^{u/2}=0,
\;\;\frac{1}{2} H_z e^{u/2} =0,\label{eq:codazzi2}
\end{eqnarray}
where $p$ is $H_z (-w/2 +u/2)_{\bar z}$ for the $(1, 1)$-entry and
$H_{\bar z} (-w/2 +u/2)_{z}$ for the $(2, 2)$-entry, respectively.
Since the equation \eqref{eq:structure} is a structure equation for
the immersion $f$, it is always satisfied; in fact it is equivalent
to \eqref{GaussEquation}.
The equivalence of (1) and (2) is obvious.\\
We consider $(1) \Rightarrow (3)$.
Since $f$ is timelike minimal, by Theorem \ref{thm:constant},
the Abresch-Rosenberg differential $B dz^2$ is para-holomorphic.
Hence, the equations \eqref{eq:codazzi1}, \eqref{eq:structure} and \eqref{eq:codazzi2} hold.
Consequently, the statement $(3)$ holds.
Next we show $(3) \Rightarrow (1)$.
Assume that $d + \alpha^{\mu}$ is flat, that is, \eqref{eq:codazzi1},
\eqref{eq:structure} and \eqref{eq:codazzi2} are satisfied.
Then it is easy to see that $H$ is constant.
Furthermore, since $\alpha^\mu$ takes values in $\mathfrak{su}_{1, 1}^{\prime}$,
we can derive that the mean curvature $H$ is 0
by comparing the $(2,1)$-entry with the $(1,2)$-entry of $\alpha^\mu$.\\
Finally we consider the equivalence between (3) and (4).
The condition (3) is \eqref{eq:flat} and it can be rephrased as
\begin{equation}\label{eq:Lorentzharm}
d (* \alpha_{1}) + [\alpha_{0} \wedge *\alpha_1]=0,
\end{equation}
where $\alpha_0 = \alpha^{\prime}_{\mathfrak k} dz
+ \alpha^{\prime \prime}_{\mathfrak k} d\bar z $ and
$\alpha_1 = \alpha^{\prime}_{\mathfrak m} dz
+ \alpha^{\prime \prime}_{\mathfrak m} d\bar z $
and $\mathfrak{su}_{1, 1}^{\prime}$ has been decomposed as $\mathfrak{su}_{1, 1}^{\prime} = \mathfrak k + \mathfrak m$
with
\[
\mathfrak k = \left\{ \begin{pmatrix} i^{\prime} r & 0\\ 0 & -i^{\prime} r\end{pmatrix}\mid r \in \mathbb R
\right\}, \quad
\mathfrak m = \left\{ \begin{pmatrix} 0& - p - q i^{\prime} \\ - p + q i^{\prime} &0
\end{pmatrix}\mid p, q \in \mathbb R
\right\}.
\]
Moreover $*$ denotes the Hodge star operator defined by
\[
* d z = i^{\prime} dz, \quad
* d \bar z = -i^{\prime} d\bar z.
\]
It is known by \cite[Section 2.1]{Melko-Sterling} that the harmonicity condition
\eqref{eq:Lorentzharm} is
equivalent to the Lorentz harmonicity of the normal Gauss map $f^{-1} N =
\tfrac{i^{\prime}}{2} F \sigma_3 F^{-1}$
into the symmetric space $\mathbb S^2_1$.
Thus the equivalence between (3) and (4) follows.
\end{proof}
From Theorem \ref{thm:main}, we define the following$:$
\begin{Definition} \label{def:extendedframe}
\mbox{}
\begin{enumerate}
\item
For a timelike minimal surface $f$
in ${\rm Nil}_3$ with the frame $F$ in \eqref{eq:framingF}
of the normal Gauss map,
let $F^{\mu}$ be a ${\rm SU}_{1, 1}^{\prime}$-valued solution
of the matrix differential equation
$( F^{\mu} )^{-1} d F^{\mu} = \alpha^{\mu}$
with $F^{\mu} |_{\mu=1} = F$.
Then $F^{\mu}$ is called an \textit{extended frame} of the timelike minimal surface
$f$.
\item Let $\tilde F^{\mu}$ be a ${\rm SU}_{1, 1}^{\prime}$-valued solution of
$( \tilde F^{\mu} )^{-1} d\tilde F^{\mu} = \alpha^{\mu}$.
Then $\tilde F^{\mu}$ is called a \textit{general extended frame}.
\end{enumerate}
\end{Definition}
Note that an extended frame $F^{\mu}$ and a general extended frame $\tilde F^{\mu}$
differ by an initial condition $F_0$, $\tilde F^{\mu}=F_0F^{\mu}$, and
$F^{\mu}$ can be explicitly written as
\begin{equation}\label{eq:extendedframe}
F^\mu = \frac1{\sqrt{\psi_2(\mu) \overline{\psi_2(\mu)} - \psi_1(\mu) \overline{\psi_1(\mu)}}}
\begin{pmatrix}
\overline{\psi_2(\mu)} & \overline{\psi_1(\mu)}\\
\psi_1(\mu) & \psi_2(\mu)
\end{pmatrix}, \quad
\end{equation}
where $\psi_j(\mu =1)= \psi_j \;(j=1, 2)$ are the original generating spinors of
a timelike minimal surface $f$.
For a timelike minimal surface, the Maurer-Cartan form
$\alpha^{\mu} = U^{\mu} dz + V^{\mu} d \bar z$ of a general extended frame
$\tilde F^{\mu}$ can be written
explicitly as follows:
\begin{equation}\label{eq:UVmu}
U^{\mu} = \begin{pmatrix} \frac12 (\log h)_z& -\frac{i^{\prime}}{4} h \mu^{-1}\\4 {i^{\prime} } B h^{-1} \mu^{-1}& -\frac12 (\log h)_z\end{pmatrix},
\quad
V^{\mu} = \begin{pmatrix} -\frac12 (\log h)_{\bar z} & - 4 {i^{\prime} }\bar B h^{-1}\mu \\ \frac{i^{\prime}}{4} h\mu&\frac12 (\log h)_{\bar z}\end{pmatrix}.
\end{equation}
\section{Sym formula and duality between timelike minimal surfaces in three-dimensional Heisenberg group and timelike CMC surfaces in Minkowski space}\label{sc:Sym}
In this section we will derive an immersion formula for timelike minimal surfaces in ${\rm Nil}_3$ in terms of the extended frame, the so-called \textit{Sym formula}. Unlike
the integral representation formula, the so-called \textit{Weierstrass type representation}
\cite{LMM, SSP, CO, Kondreak}, the Sym formula will be given by the derivative of the extended frame with respect
to the spectral parameter.
We define a map $\Xi : \mathfrak{su}_{1, 1}^{\prime} \to \mathfrak{nil}_3$ by
\begin{equation}\label{eq:linearisom1}
\Xi \left(
x_1 \mathcal{E}_1 + x_2 \mathcal{E}_2 + x_3 \mathcal{E}_3
\right)
:=
x_1e_1 + x_2e_2 + x_3e_3
\end{equation}
where
\begin{equation}\label{eq:E}
\mathcal{E}_1 = \frac12
\begin{pmatrix}
0&-i^{\prime} \\
i^{\prime} &0
\end{pmatrix},
\quad
\mathcal{E}_2 =\frac12
\begin{pmatrix}
0&-1 \\
-1 &0
\end{pmatrix},
\quad
\mathcal{E}_3 = \frac12
\begin{pmatrix}
i^{\prime}&0\\
0&-i^{\prime}
\end{pmatrix}.
\end{equation}
Clearly, $\Xi$ is a linear isomorphism but not a Lie algebra isomorphism.
Moreover, define a map $\Xi_{\mathfrak{nil}} : \mathfrak{su}_{1, 1}^{\prime} \to {\rm Nil}_3$
as $\Xi_{\mathfrak{nil}} = \exp \circ \Xi$,
explicitly
\begin{equation}
\Xi_{\mathfrak{nil}}
\left(
\frac12
\begin{pmatrix}
x_3 i^{\prime} & -x_2 - x_1 i^{\prime}\\
-x_2 + x_1 i^{\prime} & -x_3 i^{\prime}
\end{pmatrix}
\right)
=
(x_1, x_2, x_3).
\end{equation}
Then we can obtain a family of timelike minimal surfaces in ${\rm Nil}_3$
from an extended frame of a timelike minimal surface.
\begin{Theorem} \label{thm:Sym1}
Let $\mathbb D$ be a simply connected domain in $\mathbb C^{\prime}$
and $F^\mu$ be an extended frame defined in
\eqref{eq:extendedframe} for some conformal timelike minimal surface on $\mathbb D$
for which the functions $\psi_1, \psi_2$ are given by the formula \eqref{eq:A} with $\epsilon = i^{\prime}$
and the function $h$ defined by \eqref{eq:potential1} has positive values on $\mathbb D$.
Define maps $f_{\mathbb L^3}$ and $N_{\mathbb L^3}$
respectively by
\begin{equation}\label{eq:SymMin}
f_{\mathbb L^3}=-i^{\prime} \mu (\partial_{\mu} F^{\mu}) (F^{\mu})^{-1}
- \frac{i^{\prime}}{2} \operatorname{Ad} (F^{\mu}) \sigma_3
\quad \mbox{and} \quad
N_{\mathbb L^3}= \frac{i^{\prime}}{2} \operatorname{Ad} (F^{\mu}) \sigma_3,
\end{equation}
where $\sigma_3 = \left( \begin{smallmatrix} 1 &0 \\ 0 & -1 \end{smallmatrix}\right)$.
Moreover, define a map $f^{\mu}:\mathbb{D}\to \mathrm{Nil}_3$ by
\begin{equation}\label{eq:symNil}
f^{\mu}:=\Xi_{\mathrm{nil}}\circ \hat{f}
\quad \mbox{with} \quad
\hat f =
(f_{\mathbb L^3})^o -\frac{i^{\prime}}{2} \mu (\partial_{\mu} f_{\mathbb L^3})^d,
\end{equation}
where the superscripts ``$o$'' and ``$d$'' denote the off-diagonal and
diagonal part,
respectively.
Then, for each $\mu \in \mathbb S^1_1 = \{e^{i^{\prime} t} \in \mathbb C^{\prime} \mid t \in \mathbb R \}$
the following statements hold$:$
\begin{enumerate}
\item The map $f^{\mu}$ is a
timelike minimal surface (possibly singular) in ${\rm Nil}_3$ and
$N_{\mathbb L^3}$ is the isometric image of the normal Gauss map of $f^{\mu}$.
Moreover, $f^{\mu} |_{\mu=1}$ and the original surface are the same up to a translation.
\item The map $f_{\mathbb L^3}$ is a timelike constant mean curvature
surface with mean curvature $H=1/2$ in $\mathbb L^3$
and $N_{\mathbb L^3}$ is the spacelike unit normal vector of $f_{\mathbb L^3}$.
\end{enumerate}
\end{Theorem}
\begin{proof}
Because of the continuity of the extended frame with respect to the parameter $\mu$,
$F^\mu$ can be represented in the form of
\[
F^\mu = \frac1{\sqrt{\psi_2(\mu) \overline{\psi_2(\mu)} - \psi_1(\mu) \overline{\psi_1(\mu)}}}
\begin{pmatrix}
\overline{\psi_2(\mu)} & \overline{\psi_1(\mu)}\\
\psi_1(\mu) & \psi_2(\mu)
\end{pmatrix}
\]
for some $\mathbb C^{\prime}$-valued functions $\psi_1(\mu)$ and $\psi_2(\mu)$ with $\psi_k(1) = \psi_k$ for $k=1, 2$.
Since $F^\mu$ satisfies the equations
\[F^\mu_z = F^\mu U^\mu, \quad F^\mu_{\bar z} = F^\mu V^\mu,\]
with \eqref{eq:Umu}, \eqref{eq:Vmu} and $H=0$,
by considering the gauge transformation
\[ F^\mu \mapsto F^\mu
\begin{pmatrix}
\mu^{-1/2}&0\\
0&\mu^{1/2}
\end{pmatrix},\]
it can be shown that
the deformation with respect to the parameter $\mu$ does not change the Dirac potential,
that is,
$\psi_2(\mu) \overline{\psi_2(\mu)} - \psi_1(\mu) \overline{\psi_1(\mu)}$ is independent of $\mu$.
Since $F^\mu$ is ${\rm SU}_{1, 1}^{\prime}$-valued,
a straightforward computation shows that
$i^{\prime} \mu (\partial _{\mu} F^\mu) (F^\mu)^{-1}$
and $N_{\mathbb L^3}$ take values in $\mathfrak{su}_{1, 1}^{\prime}$.
Hence $f_{\mathbb L^3}$ is a $\mathfrak{su}_{1, 1}^{\prime}$-valued map.
Therefore,
the diagonal entries of $i^{\prime} \mu (\partial_{\mu} f_{\mathbb L^3})$ take purely imaginary values
and the trace of $i^{\prime} \mu (\partial_{\mu} f_{\mathbb L^3})$ vanishes.
Thus $i^{\prime} \mu (\partial_{\mu} f_{\mathbb L^3})^d$ takes values in $\mathfrak{su}_{1, 1}^{\prime}$.
Next we compute $\partial_z \hat f$.
By the usual computations we obtain
\begin{eqnarray}
\partial_z f_{\mathbb L^3}
&=&
\partial_z
\left( -i^{\prime} \mu (\partial_{\mu} F^\mu) (F^\mu)^{-1}
- \frac{i^{\prime}}{2} \operatorname{Ad} (F^{\mu}) \sigma_3\right) \notag\\
&=&
\operatorname{Ad}(F^\mu)
\left( -i^{\prime} \mu (\partial_\mu U^\mu)
- \frac {i^{\prime}}2
\left[U^\mu,
\sigma_3
\right]
\right)\notag\\
&=&
-2\mu^{-1}e^{w/2} \operatorname{Ad}(F^\mu) \label{eq:zfmin}
\begin{pmatrix}
0&1\\
0&0
\end{pmatrix}\\
&=&
\mu^{-1}
\begin{pmatrix}
\psi_1(\mu) \overline{\psi_2(\mu)} & - (\overline{\psi_2(\mu)})^2\\
(\psi_1(\mu))^2 & -\psi_1(\mu) \overline{\psi_2(\mu)}
\end{pmatrix}\notag.
\end{eqnarray}
Then we have
\begin{eqnarray}
\partial_z f_{\mathbb L^3}
&=& \frac12
\begin{pmatrix}
\phi_3(\mu) & -\phi_2(\mu) - i^{\prime} \phi_1(\mu) \\
-\phi_2(\mu) + i^{\prime} \phi_1(\mu) & -\phi_3(\mu)
\end{pmatrix} \notag\\
&=&\label{eq:f_z}
\phi_1(\mu) \mathcal{E}_1 + \phi_2(\mu) \mathcal{E}_2 + i^{\prime} \phi_3(\mu) \mathcal{E}_3 \label{eq:fz}
\end{eqnarray}
with
\[
\phi_1(\mu) = \mu^{-1} i^{\prime} \left( (\overline{\psi_2(\mu)})^2 + (\psi_1(\mu))^2 \right),
\quad
\phi_2(\mu) = \mu^{-1} \left( (\overline{\psi_2(\mu)})^2 - (\psi_1(\mu))^2 \right)
\]
and
\[ \phi_3(\mu) = 2\mu^{-1} \psi_1(\mu) \overline{\psi_2(\mu)}.
\]
By using \eqref{eq:zfmin}, we can compute
\begin{eqnarray*}
\partial_z \left( -\frac{i^{\prime}}2 \mu \left( \partial_{\mu} f_{\mathbb L^3} \right) \right)
&=&
-\frac{i^{\prime}}2 \mu \partial_{\mu} (\partial_z f_{\mathbb L^3})\\
&=&
i^{\prime} e^{w/2} \mu (-\mu^{-2}) \operatorname{Ad}(F^\mu)
\begin{pmatrix}
0&1\\
0&0
\end{pmatrix}\\
&& \,+ i^{\prime} e^{w/2} \left[ i^{\prime} \mu^{-1} (-f_{\mathbb L^3} - N_{\mathbb L^3}), -\frac12 \mu e^{-w/2} \partial_z f_{\mathbb L^3} \right]\\
&=&
\frac{i^{\prime}}2 \partial_z f_{\mathbb L^3}
+ \left[f_{\mathbb L^3} + N_{\mathbb L^3}, \frac12 \partial_z f_{\mathbb L^3} \right].
\end{eqnarray*}
Using \eqref{eq:zfmin}, we have
\[
\left[f_{\mathbb L^3}, \frac12 \partial_z f_{\mathbb L^3} \right]^d
=\frac12 \left( \phi_2 (\mu) \int \phi_1 (\mu) dz - \phi_1 (\mu) \int \phi_2 (\mu) dz \right) \mathcal E_3
\]
and
\[
\left[ N_{\mathbb L^3}, \frac12 \partial_z f_{\mathbb L^3} \right]
= \frac{i^{\prime}}2 \partial_z f_{\mathbb L^3}.
\]
Consequently, we have
\[
\partial_z \left( -\frac{i^{\prime}}2 \mu \left( \partial_{\mu} f_{\mathbb L^3} \right) \right)^d
=
\left(
\phi_3(\mu) + \frac12 \left( \phi_2 (\mu) \int \phi_1 (\mu) dz - \phi_1 (\mu) \int \phi_2 (\mu) dz\right)
\right)
\mathcal E_3.
\]
Thus we obtain
\begin{eqnarray*}
\partial_z \hat f
&=&
\partial_z (f_{\mathbb L^3})^o
+ \partial_z \left( -\frac{i^{\prime}}2 \mu \left( \partial_{\mu} f_{\mathbb L^3} \right) \right)^d\\
&=&
\phi_1(\mu) \mathcal E_1 + \phi_2(\mu) \mathcal E_2
+
\left(
\phi_3(\mu) + \frac12 \left( \phi_2 (\mu) \int \phi_1 (\mu) dz - \phi_1 (\mu) \int \phi_2 (\mu) dz\right)
\right)
\mathcal E_3
\end{eqnarray*}
and then
\begin{equation}\label{eq:conclusion1}
(f^{\mu})^{-1} (\partial_z f^{\mu})
= \phi_1(\mu) e_1 + \phi_2(\mu) e_2 + \phi_3(\mu) e_3.
\end{equation}
The equation \eqref{eq:conclusion1} means that,
for $\mu = e^{i^{\prime} t}$ with sufficiently small $t \in \mathbb R$,
the map $f^\mu$ is conformal with the conformal parameter $z$
and the conformal factor $4(\psi_2(\mu) \overline{\psi_2(\mu)} + \psi_1(\mu) \overline{\psi_1(\mu)})^2$.
To complete the proof of $(1)$ we check the mean curvature and the normal Gauss map of $f^{\mu}$.
Since the Dirac potential of $f^\mu$ is the same as that of the original timelike minimal surface,
the mean curvature of $f^\mu$ is zero
for $\mu$ with $\psi_2(\mu) \overline{\psi_2(\mu)} + \psi_1(\mu) \overline{\psi_1(\mu)}$
nowhere vanishing on $\mathbb D$.
Using the map $\mathfrak{nil}_3 \supset \widetilde{\mathbb{S}}^2_1 \to \mathbb{S}^2_1 \subset \mathbb L^3_{(+,-,+)}$
defined in Section \ref{gaussmap},
the normal Gauss map of $f^\mu$ is converted into $N_{\mathbb L^3}$.
To prove $(2)$, see Appendix \ref{timelikemin}.
\end{proof}
\begin{Remark}
In the other cases, $h<0$ or $\epsilon = - i^{\prime}$,
we can obtain the same result as Theorem \ref{thm:Sym1}
by adapting the identification \eqref{eq:identification1} between $\mathfrak{su}_{1, 1}^{\prime}$ and $\mathbb L^3_{(+,-,+)}$
and the linear isomorphism \eqref{eq:linearisom1} from $\mathfrak{su}_{1, 1}^{\prime}$ to $\mathfrak{nil}_3$
appropriately.
For example,
when the original timelike minimal surface has $h>0$ and $\epsilon = - i^{\prime}$,
we should replace the identification \eqref{eq:identification1}
and the linear isomorphism \eqref{eq:linearisom1}, respectively,
by
\[
\mathfrak{su}_{1, 1}^{\prime} \ni \frac12
\begin{pmatrix}
r i^{\prime}& - (-p - q i^{\prime}) \\
- (-p + q i^{\prime}) & - r i^{\prime}
\end{pmatrix}
\longleftrightarrow
(p, q, r) \in \mathbb L^3_{(+, -, +)}
\]
and
\[
\Xi \left(
x_1 \mathcal{E}_1 + x_2 \mathcal{E}_2 + x_3 \mathcal{E}_3
\right)
:=
-x_1e_1 - x_2e_2 + x_3e_3
\]
where $\mathcal{E}_j\;(j=1, 2, 3)$ is
defined in \eqref{eq:E}.
\end{Remark}
In Theorem \ref{thm:Sym1}, we recover a given timelike minimal surface in
${\rm Nil}_3$ in terms of its generating spinors and the Sym formula.
More generally, we can construct timelike minimal surfaces
using a non-conformal Lorentz harmonic map into $\mathbb S^2_1$.
As we have seen in the proof of Theorem \ref{thm:main},
the harmonicity of a map $N$ into $\mathbb S^2_1$
can be expressed as
\[
d(* \alpha_1) + [\alpha_0 \wedge *\alpha_1] =0,
\]
where $\alpha$ is the Maurer-Cartan form of
the frame $\tilde{F} : \mathbb D \to {\rm SU}_{1, 1}^{\prime}$ of $N$ and
moreover, $\alpha = \alpha_0 + \alpha_1$ is the representation
in accordance with the decomposition $\mathfrak{su}_{1, 1}^{\prime} = \mathfrak k + \mathfrak m$.
Denote the $(1,0)$-part and $(0,1)$-part of $\alpha_1$ by
$\alpha_{1} {'}$ and $\alpha_{1} {''}$,
and define a $\mathfrak{su}_{1, 1}^{\prime}$-valued $1$-form $\alpha^{\mu}$ for each $\mu \in S^1_1$ by
\[
\alpha^{\mu} := \alpha_0 + \mu^{-1} \alpha_{1} {'} + \mu \alpha_1 {''}.
\]
Then $\alpha^{\mu}$ satisfies
\[
d \alpha^{\mu} + \frac12 [\alpha^{\mu} \wedge \alpha^{\mu}] =0
\]
for all $\mu \in S^1_1$,
and thus there exists $\tilde{F}^{\mu} : \mathbb D \to {\rm SU}_{1, 1}^{\prime}$
which is smooth with respect to the parameter $\mu$
and satisfies $(\tilde{F}^{\mu})^{-1} d \tilde{F}^{\mu} = \alpha^{\mu}$ for each $\mu$.
Thus $\tilde F^{\mu}$ is the extended frame of the harmonic map $N$.
Analogously to Theorem \ref{thm:Sym1}, we can show the following theorem:
\begin{Theorem} \label{thm:Sym2}
Let $\tilde{F}^\mu : \mathbb D \to {\rm SU}_{1, 1}^{\prime}$ be the extended frame of
a harmonic map $N$ into $\mathbb S^2_1$.
Assume that the coefficient function $a$ of $(1, 2)$-entry of $\alpha_1 {'}$ satisfies
$a \overline a <0$ on $\mathbb D$.
Define the maps $\tilde f_{\mathbb L^3}$, $\tilde N_{\mathbb L^3}$
and $\tilde f^{\mu}$ respectively by the Sym formulas in \eqref{eq:SymMin}
and \eqref{eq:symNil} with $F^{\mu}$
replaced by $\tilde F^{\mu}$.
Then, under the identification \eqref{eq:identification1} of $\mathfrak{su}_{1, 1}^{\prime}$ and $\mathbb L^3$
and the linear isomorphism \eqref{eq:linearisom1} from $\mathfrak{su}_{1, 1}^{\prime}$ to $\mathfrak {nil}_3$,
for each $\mu = e^{i^{\prime} t} \in \mathbb{S}^1_1$ the following
statements hold$:$
\begin{enumerate}
\item The map $\tilde f_{\mathbb L^3}$ is a timelike constant mean curvature
surface with mean curvature $H=1/2$ in $\mathbb L^3$ with the first fundamental form
$I=-16 a \overline a dz d\bar z$
and $\tilde N_{\mathbb L^3}$ is the spacelike unit normal vector of $\tilde f_{\mathbb L^3}$.
\item The map $\tilde f^{\mu}$ is a timelike minimal surface
(possibly singular) in ${\rm Nil}_3$
and
$\tilde N_{\mathbb L^3}$ is the isometric image of the normal Gauss map of $\tilde f^{\mu}$.
In particular, $\tilde{F}^\mu$ is an extended frame of
some timelike minimal surface $f$.
\end{enumerate}
\end{Theorem}
\begin{proof}
To prove the theorem, one needs to define generating spinors properly:
After gauging the extended frame, the upper right corner of $\alpha_1^{\prime}$
takes purely imaginary values, that is, $a$ can be assumed to be purely imaginary.
Define $h$ by $h = -4 i^{\prime} a$, and $\tilde \psi_1$ and $\tilde \psi_2$ by
putting
\[
\tilde F_{21} = \sqrt{2} \tilde \psi_1 h^{-1/2}, \quad
\tilde F_{22} = \sqrt{2} \tilde \psi_2 h^{-1/2},
\]
respectively. Then $\tilde \psi_1$ and $\tilde \psi_2$ are
generating spinors of the map $\tilde f^{\mu}$ and its angle function is exactly
$h = 2 \left(\tilde \psi_2 \overline{\tilde \psi_2} -\tilde \psi_1 \overline{\tilde \psi_1}\right)$.
\end{proof}
\section{Generalized Weierstrass type representation
for timelike minimal surfaces in ${\rm Nil}_3$} \label{sc:Generalized Weierstrass type representation}
In this section we will give a construction of timelike minimal surfaces in ${\rm Nil}_3$
in terms of the para-holomorphic data, the so-called \textit{generalized Weierstrass
type representation.} The heart of the construction is based on two loop group
decompositions, the so-called \textit{Birkhoff} and \textit{Iwasawa} decompositions,
which are reformulations of \cite[Theorem 2.5]{DIT}, see also \cite{PreS:LoopGroup},
in terms of the para-complex structure.
\subsection{From minimal surfaces to normalized potentials: The Birkhoff decomposition}
Let us recall the hyperbola on $\mathbb C^{\prime}$:
\begin{equation}
\mathbb S_1^1=\{ \mu \in \mathbb C^{\prime}\mid \mu \bar \mu =1, \, \operatorname{Re} \mu >0 \}.
\end{equation}
Since an extended frame $F^{\mu}$ is analytic on $\mathbb S_1^1$ (in fact it is
analytic on $\mathbb C^{\prime} \setminus \{x (1\pm i^{\prime})\mid x \in \mathbb R\}$),
it is natural to introduce the following loop groups$:$
\begin{align*}
\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma} &=
\left\{
g : \mathbb S_1^1 \to {\rm SL}_2 \mathbb C^{\prime} \mid
\begin{array}{l}
g =\cdots+ g_{-1} \mu^{-1} + g_0 + g_1 \mu^{1} + \cdots\\
\mbox{and
$g(- \mu) = \sigma_3 g(\mu)\sigma_3$}
\end{array}
\right\}, \\
\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}P &=
\left\{ g \in \Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}
\mid g = g_0 + g_1 \mu^1 + \cdots
\right\}.
\end{align*}
Similarly, we define
\[
\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}N=
\left\{ g \in \Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}
\mid g= g_0 + g_{-1} \mu^{-1} + \cdots
\right\}.
\]
We now use the lower subscript $*$ for normalization
at $\mu =0$ or $\mu = \infty$ by identity, that is
\[
\Lambda^{\prime \pm}_* {\rm SL}_{2} \mathbb C^{\prime}_{\sigma}= \left\{
g \in \Lambda^{\prime \pm} {\rm SL}_{2} \mathbb C^{\prime}_{\sigma} \mid
\mbox{$g(0)=\operatorname{id}$ for
$\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}P$ or $g(\infty)=\operatorname{id}$ for
$\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}N$}
\right\}.
\]
Moreover, we define the loop group of the special
para-unitary group ${\rm SU}_{1, 1}^{\prime}:$
\[
\Lambda^{\prime} {\rm SU}_{1, 1 \sigma}^{\prime} =
\left\{
g \in \Lambda^{\prime} {\rm SL}_{2} \C_{\sigma} \mid
\sigma_3 \left(\overline{g(1/\bar \mu)}^{T}\right)^{-1} \sigma_3 = g(\mu)
\right\}.
\]
Further, let us introduce the following subgroup
\[
{\rm U}_1^{\prime}= \left\{ \operatorname{diag}( e^{i^{\prime} \theta}, e^{- i^{\prime} \theta})
\mid \theta \in \mathbb R \right\}.
\]
The fundamental decompositions for the above loop groups are
Birkhoff and Iwasawa decompositions as follows$:$
\begin{Theorem}[Birkhoff and Iwasawa decompositions]\label{thm:BIdecomposition}
The
loop group $\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}$ can be decomposed as follows$:$
\begin{enumerate}
\item[(1)]{\rm Birkhoff decomposition}$:$
The multiplication maps
\begin{equation}\label{eq:Birkhoff}
\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}NN \times
\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}P \to \Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}
\quad \mbox{and}\quad
\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}PN\times
\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}N\to \Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}
\end{equation}
are diffeomorphisms onto open dense subsets of $\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}$,
which will be called the {\rm big cells} of $\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}$.
\item[(2)]{\rm Iwasawa decomposition}$:$
The multiplication map
\begin{equation}\label{eq:Iwasawa}
\Lambda^{\prime} {\rm SU}_{1, 1 \sigma}^{\prime} \times
\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}P\to \Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}
\end{equation}
is a diffeomorphism onto an open dense subset of $\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}$,
which will be called the {\rm big cell} of $\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}$.
\end{enumerate}
\end{Theorem}
\begin{proof}
We first note that, for a given real Lie algebra $\mathfrak g$, the para-complexification
$\mathfrak g \otimes \mathbb C^{\prime}$ of $\mathfrak g$ is isomorphic to $\mathfrak g \oplus
\mathfrak g$ as a real Lie algebra; the isomorphism is given explicitly
by
\begin{equation}\label{eq:doubleiso}
\mathfrak g \oplus
\mathfrak g \ni (X, Y) \mapsto \frac12 (X+Y) + \frac12(X-Y)i^{\prime} \in
\mathfrak g \otimes \mathbb C^{\prime}.
\end{equation}
Accordingly an isomorphism between $\mathrm{SL}_2 \mathbb R \times \mathrm{SL}_2 \mathbb R$ and ${\rm SL}_2 \mathbb C^{\prime}$ follows.
In particular an isomorphism between $\{\operatorname{diag}(a, a^{-1})\mid a\in \mathbb R^{\times}\}
\times \{\operatorname{diag}(a, a^{-1})\mid a\in \mathbb R^{\times}\}$
and $\{\operatorname{diag}(r e^{i^{\prime} \theta}, r^{-1} e^{- i^{\prime} \theta})\mid r \neq 0, \; \theta \in \mathbb R\}$ follows.
Let us consider two real Lie algebras $\mathfrak{sl}_2 \mathbb R$ and $\mathfrak{su}_{1, 1}^{\prime}$:
\[
\mathfrak{sl}_2 \mathbb R = \left\{\begin{pmatrix}
a & b \\ c & -a
\end{pmatrix}\mid a, b, c \in \mathbb R\right\}, \quad
\mathfrak{su}_{1, 1}^{\prime} = \left\{\begin{pmatrix}
c i^{\prime} & b - a i^{\prime} \\ b + a i^{\prime} & -c i^{\prime}
\end{pmatrix}\mid a, b, c \in \mathbb R\right\}.
\]
Then an explicit map
\begin{equation}\label{eq:isom}
X \mapsto \frac12 (X + X^*) + \frac{1}{2} (X- X^*)i^{\prime}, \quad
X^*= - \sigma_3 \overline{X}^T \sigma_3
\end{equation}
induces an isomorphism between $\mathfrak{sl}_2 \mathbb R$ and $\mathfrak{su}_{1, 1}^{\prime}$.
Note that $X^* =- \sigma_3 X^{T}\sigma_3$ for $X \in \mathfrak{sl}_2 \mathbb R$.
Then accordingly an isomorphism between $\mathrm{SL}_2 \mathbb R$ and ${\rm SU}_{1, 1}^{\prime}$ follows.
Let us now define the loop algebras of $\mathfrak {sl}_2 \mathbb R$
by
\begin{align*}
\Lambda \mathfrak{sl}_2 \mathbb R_{\sigma} & = \left\{\xi : \mathbb R^{+} \to
\mathfrak {sl}_{2} \mathbb R \mid \mbox{$\xi = \cdots + \xi_{-1}\lambda^{-1} +
\xi_0 + \xi_1 \lambda + \cdots $ and $\xi(- \lambda) = \sigma_3 \xi(\lambda) \sigma_3$}
\right\}, \\
\Lambda^{\pm} \mathfrak {sl}_2 \mathbb R_{\sigma} & = \left\{
\xi \in \Lambda \mathfrak{sl}_2 \mathbb R_{\sigma} \mid \xi= \xi_0 + \xi_{\pm 1} \lambda^{\pm 1} + \cdots
\right\}.
\end{align*}
Moreover, the lower subscript $*$ denotes normalization at $\lambda = 0$ and
$\lambda = \infty$, that is, $\xi_0=0$ in $ \Lambda^{\pm} \mathfrak {sl}_2 \mathbb R$.
On the other hand, the loop algebra of $\mathfrak{su}_{1, 1}^{\prime}$
is defined by
\[
\Lambda^{\prime} \mathfrak{su}^{\prime}_{1, 1 \sigma}= \left\{ \tau : \mathbb S^1_1 \to \mathfrak{su}_{1, 1}^{\prime} \mid
\tau(- \mu) = \sigma_3 \tau(\mu) \sigma_3\right\}.
\]
The Lie algebra of
$\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}$ is defined by
\begin{align*}
\Lambda^{\prime} \mathfrak{sl}_{2} \mathbb C^{\prime}_{\sigma} = \left\{ \tau : \mathbb S^1_1 \to \mathfrak{sl}_2 \mathbb C^{\prime} \mid
\mbox{$\tau = \cdots + \tau_{-1} \mu^{-1} + \tau_0 + \tau_1 \mu + \cdots
$ and $\tau(- \mu) = \sigma_3 \tau(\mu) \sigma_3$}
\right\},
\end{align*}
and
it is easy to see that the loop algebra $\Lambda^{\prime} \mathfrak{su}^{\prime}_{1, 1 \sigma}$ can be identified with
the following fixed point set of an anti-linear
involution of $\Lambda^{\prime} \mathfrak{sl}_{2} \mathbb C^{\prime}_{\sigma}$:
\[
\Lambda^{\prime} \mathfrak{su}^{\prime}_{1, 1 \sigma}=\left\{ \tau \in \Lambda^{\prime} \mathfrak{sl}_{2} \mathbb C^{\prime}_{\sigma} \mid
\tau^{*}(1/\bar \mu) = \tau (\mu) \right\}.
\]
We now identify the two loop algebras $\Lambda \mathfrak{sl}_2 \mathbb R_{\sigma}$ and $\Lambda^{\prime} \mathfrak{su}^{\prime}_{1, 1 \sigma}$ as follows:
Let $\xi
= \cdots + \xi_{-1} \lambda^{-1} + \xi_0 + \xi_1 \lambda + \cdots$
with $\xi_{i} \in \mathfrak{sl}_2 \mathbb R$ be an element in $\Lambda \mathfrak{sl}_2 \mathbb R_{\sigma}$ and consider the isomorphism
\eqref{eq:isom}:
\begin{align*}
\xi \mapsto
\tilde \xi&= \xi \ell+\xi^*\bar \ell \\
&= \cdots +(\xi_{-1}\ell +\xi_{-1}^*\bar \ell)\lambda^{-1}
+ \xi_{0}\ell+\xi_{0}^*\bar \ell
+ (\xi_{1}\ell+\xi_{1}^*\bar \ell )\lambda + \cdots,
\end{align*}
where we set
\[
\ell = \frac12(1+i^{\prime}).
\]
Since $\lambda \in \mathbb R^{+}$ corresponds to
$\lambda = \mu \ell + \mu^{-1} \bar \ell$ with $\mu \in \mathbb S^1_1 \;(\bar \mu =\mu^{-1})$,
and since the null basis $\{\ell, \bar \ell\}$ satisfies
$\ell \bar \ell =0$,
$\ell^2 =\ell$ and $\bar \ell\,^2=\bar \ell$,
we have
\[
\tilde \xi = \cdots + (\xi_{-1} \ell + \xi_{1}^* \bar \ell) \mu^{-1}
+ (\xi_{0} \ell + \xi_{0}^* \bar \ell)
+ (\xi_{1} \ell + \xi_{-1}^* \bar \ell) \mu
+ \cdots.
\]
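To make the collection of powers of $\mu$ explicit, note that $\ell \lambda = \mu \ell$ and $\bar \ell \lambda = \mu^{-1} \bar \ell$ (and similarly $\ell \lambda^{-1} = \mu^{-1}\ell$, $\bar\ell\lambda^{-1} = \mu\bar\ell$), so that, for instance,
\[
(\xi_{1}\ell+\xi_{1}^*\bar \ell )\lambda = \xi_{1}\ell\, \mu + \xi_{1}^*\bar \ell\, \mu^{-1},
\qquad
(\xi_{-1}\ell+\xi_{-1}^*\bar \ell )\lambda^{-1} = \xi_{-1}\ell\, \mu^{-1} + \xi_{-1}^*\bar \ell\, \mu;
\]
this is why $\xi_{1}^*$ contributes to the $\mu^{-1}$-coefficient and $\xi_{-1}^*$ to the $\mu$-coefficient.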
Thus the following map is an isomorphism between $\Lambda \mathfrak{sl}_2 \mathbb R_{\sigma}$ and
$\Lambda^{\prime} \mathfrak{su}^{\prime}_{1, 1 \sigma}$
\begin{equation}\label{eq:funnyisom}
\Lambda \mathfrak{sl}_2 \mathbb R_{\sigma} \ni \xi (\lambda) \mapsto
\xi (\mu)\ell + \xi^* (1/\bar \mu)\bar \ell
\in \Lambda^{\prime} \mathfrak{su}^{\prime}_{1, 1 \sigma},
\end{equation}
where $\mu = \lambda \ell + \lambda^{-1} \bar \ell$.
Then combining two isomorphisms \eqref{eq:doubleiso} and \eqref{eq:funnyisom},
we have isomorphisms
\[
\Lambda \mathfrak{sl}_2 \mathbb R_{\sigma} \oplus\Lambda \mathfrak{sl}_2 \mathbb R_{\sigma} \cong \Lambda^{\prime} \mathfrak{su}^{\prime}_{1, 1 \sigma} \oplus\Lambda^{\prime} \mathfrak{su}^{\prime}_{1, 1 \sigma} \cong \Lambda^{\prime} \mathfrak{sl}_{2} \mathbb C^{\prime}_{\sigma},
\]
where the maps are explicitly given by
\begin{equation}\label{eq:isomap1}
(\xi(\lambda), \eta(\lambda)) \mapsto
\left( \xi(\mu)\ell+
\xi^*(1/\bar \mu)\bar \ell, \,
\eta(\mu)\ell+
\eta^*(1/\bar \mu)\bar \ell
\right)
\end{equation}
for $ \Lambda \mathfrak{sl}_2 \mathbb R_{\sigma} \oplus\Lambda \mathfrak{sl}_2 \mathbb R_{\sigma} \cong \Lambda^{\prime} \mathfrak{su}^{\prime}_{1, 1 \sigma} \oplus\Lambda^{\prime} \mathfrak{su}^{\prime}_{1, 1 \sigma}$, and
\begin{equation}\label{eq:isomap}
(\xi(\lambda), \eta(\lambda)) \mapsto \xi(\mu)\ell+\eta^*(1/\bar \mu)\bar \ell
\end{equation}
for $ \Lambda \mathfrak{sl}_2 \mathbb R_{\sigma} \oplus\Lambda \mathfrak{sl}_2 \mathbb R_{\sigma} \cong \Lambda^{\prime} \mathfrak{sl}_{2} \mathbb C^{\prime}_{\sigma}$.
Moreover, by the map \eqref{eq:isomap}, the following isomorphisms follow:
\[
\Lambda^{+} \mathfrak {sl}_2 \mathbb R \oplus
\Lambda^{-} \mathfrak {sl}_2 \mathbb R \cong \Lambda^{\prime} \mathfrak{sl}_{2} \mathbb C^{\prime}_{\sigma}p,
\quad
\Lambda^{-} \mathfrak {sl}_2 \mathbb R \oplus
\Lambda^{+} \mathfrak {sl}_2 \mathbb R \cong \Lambda^{\prime} \mathfrak{sl}_{2} \mathbb C^{\prime}_{\sigma}n.
\]
It is well known \cite[Section 2.1]{DIT} that the loop algebra
$\Lambda \mathfrak{sl}_2 \mathbb R_{\sigma}$
is a Banach Lie algebra and thus $\Lambda^{\prime} \mathfrak{sl}_{2} \mathbb C^{\prime}_{\sigma} (\cong \Lambda \mathfrak{sl}_2 \mathbb R_{\sigma}
\times \Lambda \mathfrak{sl}_2 \mathbb R_{\sigma})$ is
also a Banach Lie algebra, and the corresponding loop groups $\Lambda {\rm SL}_{2} \R_{\sigma}$ and
$\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}(\cong \Lambda {\rm SL}_{2} \R_{\sigma} \times \Lambda {\rm SL}_{2} \R_{\sigma})$
become Banach Lie groups, respectively.
Then the Birkhoff and Iwasawa decompositions
of
$\Lambda {\rm SL}_{2} \R_{\sigma}$ and $\Lambda {\rm SL}_{2} \R_{\sigma} \times \Lambda {\rm SL}_{2} \R_{\sigma}$ were proved in Theorems 2.2 and 2.5 of \cite{DIT}:
The following multiplication maps
\[
\Lambda {\rm SL}_{2} \R_{\sigma}N \times \Lambda {\rm SL}_{2} \R_{\sigma}P \rightarrow \Lambda {\rm SL}_{2} \R_{\sigma} ,\quad
\Lambda {\rm SL}_{2} \R_{\sigma}P \times \Lambda {\rm SL}_{2} \R_{\sigma}N \rightarrow \Lambda {\rm SL}_{2} \R_{\sigma} ,
\]
and
\[
\Delta (\Lambda {\rm SL}_{2} \R_{\sigma} \times \Lambda {\rm SL}_{2} \R_{\sigma})
\times
\Lambda {\rm SL}_{2} \R_{\sigma}P \times \Lambda {\rm SL}_{2} \R_{\sigma}N \rightarrow
\Lambda {\rm SL}_{2} \R_{\sigma} \times \Lambda {\rm SL}_{2} \R_{\sigma}
\]
are diffeomorphisms onto open dense subsets of $\Lambda {\rm SL}_{2} \R_{\sigma}$
and $ \Lambda {\rm SL}_{2} \R_{\sigma} \times \Lambda {\rm SL}_{2} \R_{\sigma}$, respectively.
Then these decomposition
theorems can be translated to the Birkhoff and Iwasawa decompositions for
$\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}$.
This completes the proof.
\end{proof}
\begin{Remark}
In this paper, we consider only the loop group of
a Lie group $G$ which is defined on the hyperbola $\mathbb S^1_1$ and
admits a power series expansion. We have denoted such a loop group
by the symbol $\Lambda G_{\sigma}$.
However, in \cite{DIT} the authors considered the loop group
$\tilde \Lambda G_{\sigma}$, which is a space of continuous maps defined on $\mathbb R^+$
that can be analytically continued to $\mathbb C^{\times}$;
in particular, an element of $\tilde \Lambda G_{\sigma}$ admits a power series expansion.
If an element of $\tilde \Lambda G_{\sigma}$ is restricted to $\mathbb R^{+}$, then
it corresponds to an element of $\Lambda G_{\sigma}$ as discussed above.
\end{Remark}
In the following, we assume that an extended frame $F^{\mu}$ is in the
big cell of $\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}$.
Using the Birkhoff decomposition in Theorem \ref{thm:BIdecomposition},
we have the para-holomorphic data from a timelike minimal surface.
\begin{Theorem}[The normalized potential]\label{thm:normalized}
Let $F^{\mu}$ be an extended frame of a timelike minimal surface $f$ in
${\rm Nil}_3$, and apply the Birkhoff decomposition in Theorem $\ref{thm:BIdecomposition}$ as
$F^{\mu} = F_-^{\mu} F_+^{\mu}$ with $F_-^{\mu} \in
\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}N$ and
$F_+^{\mu} \in \Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}P$.
Then the Maurer-Cartan form of $F_-^{\mu}$, that is, $\xi = (F_-^{\mu})^{-1} d F_-^{\mu}$, is para-holomorphic with respect to $z$. Moreover, $\xi$
has the following explicit form$:$
\begin{equation}\label{eq:normalized}
\xi = \mu^{-1} \begin{pmatrix} 0 & b(z)\\ - \frac{B(z)}{b(z)} & 0
\end{pmatrix} dz,
\end{equation}
where
\[
b (z) = - \frac{i^{\prime}}4 \frac{h^2(z,0)}{h(0,0)}.
\]
The data $\xi$ is called the {\rm normalized potential} of a
timelike minimal surface $f$.
\end{Theorem}
\begin{proof}
Let $F^{\mu}$ be an extended frame of a timelike minimal surface $f$ in ${\rm Nil}_3$.
Applying the Birkhoff decomposition \eqref{eq:Birkhoff}
in Theorem \ref{thm:BIdecomposition}:
\[
F^{\mu} = F^{\mu}_- F^{\mu}_+ \in
\Lambda^-_* {\rm SL}_{2} \mathbb C^{\prime}_{\sigma}\times \Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}P.
\]
Then the Maurer-Cartan form of $ F^{\mu}_{-}$ can be computed as
\begin{align}
\label{eq:compxi} \xi & = (F^{\mu}_{-})^{-1} d F^{\mu}_{-} \\
\nonumber & = F_+^{\mu} (F^{\mu})^{-1} d \left\{F^{\mu} (F_+^{\mu})^{-1} \right\} \\
\nonumber & = F_+^{\mu} \alpha (F_+^{\mu})^{-1} - d F_+^{\mu} (F_+^{\mu})^{-1}.\end{align}
Since $\xi$ takes values in $\Lambda^{\prime} \mathfrak{sl}_{2} \mathbb C^{\prime}_{\sigma}$ and has no
$\mu^0$-term,
we obtain
\[
\xi = \mu^{-1}F_{+0} \begin{pmatrix} 0 &- \frac{i^{\prime}}4 h \\
\frac{4 i^{\prime} B}{h} & 0
\end{pmatrix} F_{+0}^{-1}\Big|_{\bar z=0} dz,
\]
where $F_{+0}$ denotes the constant coefficient in the expansion of $F_{+}^{\mu}$
with respect to $\mu$, that is,
$F_{+}^{\mu} = F_{+0} + F_{+1} \mu + F_{+2} \mu^2+ \cdots$.
Therefore $F_-^{\mu}$ is para-holomorphic with respect to $z$ and
moreover, $\xi$ can be computed as
\[
\xi(z, \mu) = \mu^{-1} \begin{pmatrix}
0 & - \frac{i^{\prime}}{4} h(z, 0) f_{0}^2(z, 0) \\
\frac{4i^{\prime} B(z)}{h(z,0)}f_{0}^{-2}(z, 0) & 0
\end{pmatrix}dz,
\]
where $F_{+0}(z, 0)= \operatorname{diag}(f_0(z, 0), f_0^{-1}(z, 0))$.
We now look at the $\mu^0$-terms of both sides of \eqref{eq:compxi} and obtain
\[
0 = (F_{+0} \alpha_0^{\prime} F_{+0}^{-1}- d F_{+0} F_{+0}^{-1})|_{\bar z=0},
\]
where $\alpha_0^{\prime}$ is
$\alpha_0^{\prime}= \frac12 \left(\log h(z, 0)\right)_{z}\, \sigma_3\, dz$.
This is equivalent to $ d F_{+0} = F_{+0} \alpha_0^{\prime}$, and therefore
\[
f_{0}(z, 0) = h^{1/2}(z, 0) c,
\]
where $c$ is some constant.
Since $F_{+0}(0, 0)= \operatorname{id}$, we obtain $c= h^{-1/2}(0,0)$.
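As a check that this reproduces the statement of the theorem, substituting $f_{0}(z, 0) = h^{1/2}(z, 0)\, h^{-1/2}(0,0)$ into the expression for $\xi$ above and using $(i^{\prime})^2 = 1$ gives
\[
-\frac{i^{\prime}}{4} h(z, 0) f_{0}^2(z, 0) = -\frac{i^{\prime}}4 \frac{h^2(z,0)}{h(0,0)} = b(z),
\qquad
\frac{4i^{\prime} B(z)}{h(z,0)}f_{0}^{-2}(z, 0) = \frac{4 i^{\prime} B(z)\, h(0,0)}{h^2(z,0)} = -\frac{B(z)}{b(z)},
\]
which is exactly \eqref{eq:normalized}.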
This completes the proof.
\end{proof}
\subsection{From para-holomorphic potentials to minimal surfaces:
The Iwasawa decomposition}
Conversely, in the following theorem we show how to
construct timelike minimal surfaces from
normalized potentials as defined in \eqref{eq:normalized};
this is the so-called \textit{generalized Weierstrass type representation}.
\begin{Theorem}[The generalized Weierstrass type representation
]\label{thm:Weierstrass}
Let $\xi$ be a normalized potential defined in \eqref{eq:normalized},
and let $F_{-}$ be the solution of
\[
\partial_z F_{-} = F_{-} \xi, \quad F_{-}(z=0)= \operatorname{id}.
\]
Then applying the Iwasawa decomposition in Theorem $\ref{thm:BIdecomposition}$
to $F_-$, that is $F_- = F^{\mu} V_+$ with
$F^{\mu} \in \Lambda^{\prime} {\rm SU}_{1, 1 \sigma}^{\prime}$
and $V_+ \in \Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}P$, and choosing a proper diagonal ${\rm U}_1^{\prime}$-element $k$,
$F^{\mu}k$ is
an extended frame of the normal Gauss map $f^{-1} N$ of a
timelike minimal surface $f$ in ${\rm Nil}_3$ up to the change of coordinates.
\end{Theorem}
\begin{proof}
It is easy to see that the solution $F_-$ takes values in
$\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}$. Then apply the Iwasawa decomposition to $F_-$ (on the big cell), that is,
\[
F_- = F^{\mu} V_+ \in \Lambda^{\prime} {\rm SU}_{1, 1 \sigma}^{\prime}
\times
\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}P.
\]
We now compute the Maurer-Cartan form of $F^{\mu}$ as $(F^{\mu})^{-1} d F^{\mu}$,
\begin{align}
\label{eq:MCF} \alpha^{\mu}& =(F^{\mu})^{-1} d F^{\mu} \\
\nonumber &= V_+ F_-^{-1} d (F_- V_+^{-1})\\
\nonumber &= V_+ \xi V_+^{-1} - d V_+ V_+^{-1}.
\end{align}
From the right-hand side of the above equation, it is easy to see that
$\alpha^{\mu} = \mu^{-1}\alpha_{-1}+ \alpha_{0} + \mu^{1}\alpha_{1}+ \cdots$.
Since $F^{\mu}$ takes values in $\Lambda^{\prime} {\rm SU}_{1, 1 \sigma}^{\prime}$, we have
\[
\alpha^{\mu} = \mu^{-1}\alpha_{-1}+ \alpha_{0} + \mu^{1}\alpha_{1},
\]
and $\alpha_{j}^* = \alpha_{-j}$ holds. From the form of $\xi$
and the right-hand side of \eqref{eq:MCF}, the Maurer-Cartan form
$\alpha^{\mu}$ almost has the form in \eqref{eq:alphamu}.
Finally, a proper choice of a diagonal ${\rm U}_1^{\prime}$-element $k$ and a change of coordinates
imply that $\alpha^{\mu}$ takes exactly the form in \eqref{eq:alphamu}.
This completes the proof.
\end{proof}
\begin{Remark}
If an extended frame $\tilde{F}^\mu$ is obtained from Theorem \ref{thm:Weierstrass} with a $\Lambda^{\prime} {\rm SU}_{1, 1 \sigma}^{\prime}$-valued initial condition
$F_-(z=0)=A \:\:(A \in \Lambda^{\prime} {\rm SU}_{1, 1 \sigma}^{\prime})$,
then the extended frames $\tilde{F}^\mu$ and $F^\mu$ differ by $A$,
that is,
$\tilde{F}^\mu = A F^\mu$.
In general, timelike minimal surfaces in ${\rm Nil}_3$ corresponding to extended frames with different initial conditions are not isometric.
\end{Remark}
\section{Examples}\label{sc:Example}
In this section we will give some examples of timelike minimal surfaces in ${\rm Nil}_3$
in terms of para-holomorphic potentials and the generalized Weierstrass type
representation as explained in the previous section.
\subsection{Hyperbolic paraboloids corresponding to circular cylinders}
Let $\xi$ be the normalized potential defined as
\[\xi =
\mu^{-1}
\begin{pmatrix}
0& -\frac {i^{\prime}} 4\\
\frac {i^{\prime}} 4 &0
\end{pmatrix}
dz.
\]
The solution of the equation $d F_- = F_- \xi$
with the initial condition $F_- (z=0) = \operatorname{id}$
is given by
\[ F_- =
\begin{pmatrix}
\cos \frac {\mu^{-1} z} 4 & -i^{\prime} \sin \frac {\mu^{-1} z} 4 \\
i^{\prime} \sin \frac {\mu^{-1} z} 4 & \cos \frac {\mu^{-1} z} 4
\end{pmatrix}.
\]
Applying the Iwasawa decomposition to the solution $F_-$:
\[ F_- = F^{\mu} V_+,\]
we obtain an extended frame $F^\mu: \mathbb C^{\prime} \to \Lambda^{\prime} {\rm SU}_{1, 1 \sigma}^{\prime} $:
\[
F^\mu =
\begin{pmatrix}
\cos \frac{ {\mu}^{-1} z + \mu \bar z } 4 & -i^{\prime} \sin \frac{ {\mu}^{-1} z + \mu \bar z } 4\\
i^{\prime} \sin \frac{ {\mu}^{-1} z + \mu \bar z } 4 & \cos \frac{ {\mu}^{-1} z + \mu \bar z } 4
\end{pmatrix}.
\]
Then, by Theorem \ref{thm:Sym2}, we have the map $f_{\mathbb L^3}$ explicitly
\[
f_{\mathbb L^3} = \frac{1}{2}
\begin{pmatrix}
-i^{\prime} \cos \frac{\mu^{-1}z + \mu \bar z}2 & -\sin \frac{\mu^{-1}z + \mu \bar z}2 - \frac{\mu^{-1}z - \mu \bar z}2\\
-\sin \frac{\mu^{-1}z + \mu \bar z}2 + \frac{\mu^{-1}z - \mu \bar z}2& i^{\prime} \cos \frac{\mu^{-1}z + \mu \bar z}2
\end{pmatrix},
\]
and
\[
\hat{f} =\frac12
\begin{pmatrix}
-\frac{-\mu^{-1}z +\mu \bar z}4 \sin \frac{\mu^{-1}z +\mu \bar z}2 &
-\sin \frac{\mu^{-1}z +\mu \bar z}2 -\frac{\mu^{-1}z -\mu \bar z}2\\
-\sin \frac{\mu^{-1}z +\mu \bar z}2 +\frac{\mu^{-1}z -\mu \bar z}2 &
\frac{-\mu^{-1}z +\mu \bar z}4 \sin \frac{\mu^{-1}z +\mu \bar z}2
\end{pmatrix}.
\]
Thus we obtain timelike surfaces $f_{\mathbb L^3}$ with the constant mean curvature $1/2$ in $\mathbb L^3$
and timelike minimal surfaces $f^\mu$ in ${\rm Nil}_3$:
\[
f_{\mathbb L^3} =\left(
\sin \frac{\mu^{-1}z + \mu \bar z}2,
i^{\prime} \frac{\mu^{-1}z - \mu \bar z}2,
-\cos \frac{\mu^{-1}z + \mu \bar z}2
\right)
\]
and
\[
f^\mu=\left(
i^{\prime} \frac{\mu^{-1}z - \mu \bar z}2,
\sin \frac{\mu^{-1}z + \mu \bar z}2,
i^{\prime} \frac{\mu^{-1}z - \mu \bar z}4 \sin \frac{\mu^{-1}z + \mu \bar z}2
\right)
\]
for $\mu = e^{i^{\prime} t}$ with sufficiently small $t$
on some simply connected domain $\mathbb D$.
Each surface $f^\mu$ describes a part of a hyperbolic paraboloid $x_3= x_1 x_2/2$.
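Indeed, writing $f^\mu = (x_1, x_2, x_3)$ for the components above, one checks directly that
\[
\frac{x_1 x_2}{2}
= \frac12 \cdot i^{\prime} \frac{\mu^{-1}z - \mu \bar z}{2} \cdot \sin \frac{\mu^{-1}z + \mu \bar z}{2}
= i^{\prime} \frac{\mu^{-1}z - \mu \bar z}{4} \sin \frac{\mu^{-1}z + \mu \bar z}{2} = x_3.
\]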
Furthermore $f^\mu$ has the function $h= 1$,
the Abresch-Rosenberg differential $B^\mu dz^2 = \mu^{-2}/16 dz^2$ on $\mathbb D$
and
the first fundamental form $I$ of $f^\mu$ is
$I= \cos^2\!\left(\frac{\mu^{-1} z + \mu \bar z}{2}\right) dz\, d\bar z$.
The corresponding timelike CMC $1/2$ surfaces $f_{\mathbb L^3}$
are called circular cylinders.
\begin{figure}
\caption{Hyperbolic paraboloids corresponding to a cylinder (left) and
a hyperbolic cylinder (right).}
\end{figure}
\subsection{Hyperbolic paraboloids corresponding to hyperbolic cylinders}
Define the normalized potential $\xi$ as
\[
\xi = \mu^{-1}
\begin{pmatrix}
0 & - \frac{i^{\prime}}{4} \\
- \frac{i^{\prime}}{4} & 0
\end{pmatrix} dz.
\]
The solution of the equation $d F_- = F_- \xi$
with the initial condition $F_- (z=0) = \operatorname{id}$
is given by
\[F_- =
\begin{pmatrix}
\cosh \frac {\mu^{-1} z} 4 & -i^{\prime} \sinh \frac {\mu^{-1} z} 4 \\
-i^{\prime} \sinh \frac {\mu^{-1} z} 4 & \cosh \frac {\mu^{-1} z} 4
\end{pmatrix}.
\]
Applying the Iwasawa decomposition to the solution $F_-$:
\[ F_- = F^{\mu} V_+,\]
we obtain an extended frame $F^\mu: \mathbb C^{\prime} \to \Lambda^{\prime} {\rm SU}_{1, 1 \sigma}^{\prime} $:
\[
F^\mu =
\begin{pmatrix}
\cosh \frac{ -{\mu}^{-1} z + \mu \bar z } 4 & i^{\prime} \sinh \frac{ -{\mu}^{-1} z + \mu \bar z } 4\\
i^{\prime} \sinh \frac{ -{\mu}^{-1} z + \mu \bar z } 4 & \cosh \frac{ -{\mu}^{-1} z + \mu \bar z } 4
\end{pmatrix}.
\]
Then, by Theorem \ref{thm:Sym2}, we have the map $f_{\mathbb L^3}$ for $F^\mu$ explicitly
\[
f_{\mathbb L^3} = \frac12
\begin{pmatrix}
-i^{\prime} \cosh \frac{-\mu^{-1}z + \mu \bar z}2 &
-\frac{\mu^{-1}z + \mu \bar z}2 + i^{\prime} \sinh i^{\prime} \frac{-\mu^{-1}z + \mu \bar z}2\\
-\frac{\mu^{-1}z + \mu \bar z}2 - i^{\prime} \sinh i^{\prime} \frac{-\mu^{-1}z + \mu \bar z}2&
i^{\prime} \cosh \frac{-\mu^{-1}z + \mu \bar z}2
\end{pmatrix},
\]
and thus we obtain timelike surfaces $f_{\mathbb L^3}$ with the constant mean curvature $1/2$ in $\mathbb L^3$
and timelike minimal surfaces $f^\mu$ in ${\rm Nil}_3$:
\[
f_{\mathbb L^3} =\left(
\frac{ \mu^{-1} z + \mu \bar z}2,
- \sinh i^{\prime} \frac{ -\mu^{-1} z + \mu \bar z}2,
-\cosh \frac{ -\mu^{-1}z + \mu \bar z}2
\right)
\]
and
\[
f^\mu=\left(
-\sinh i^{\prime} \frac{-\mu^{-1} z + \mu \bar z}2,
\frac{\mu^{-1} z + \mu \bar z}2,
\frac{\mu^{-1}z + \mu \bar z}4 \sinh i^{\prime} \frac{ -\mu^{-1}z + \mu \bar z}2
\right)
\]
for any $\mu$ on $\mathbb C^{\prime}$.
Each timelike minimal surface $f^\mu$ describes the hyperbolic paraboloid $x_3=-x_1 x_2/2$
and has the function $h=1$,
the Abresch-Rosenberg differential $B^\mu dz^2 = -\mu^{-2}/16 dz^2$
and the first fundamental form $I(\mu) = \left\{\cosh \tfrac{i^{\prime}}2 ( -\mu^{-1} z + \mu \bar z)\right\}^2 dz d \bar z$.
The corresponding timelike CMC $1/2$ surfaces $f_{\mathbb L^3}$
are called hyperbolic cylinders.
\subsection{Horizontal plane}
Let $\xi$ be the normalized potential defined by
\[
\xi = \mu^{-1}
\begin{pmatrix}
0 & -i^{\prime}\\
0 & 0
\end{pmatrix}dz.
\]
The solution of the equation $d F_- = F_- \xi$ under the initial condition $F_-(z=0) = \operatorname{id}$ is given by
\[
F_- =
\begin{pmatrix}
1 & -i^{\prime} \mu^{-1} z\\
0 & 1
\end{pmatrix}.
\]
Then by the Iwasawa decomposition of the solution $F_- = F^\mu V_+$,
we have an extended frame $F^\mu: \tilde{\mathbb D} \to \Lambda^{\prime} {\rm SU}_{1, 1 \sigma}^{\prime}$:
\begin{equation}\label{eq:exthori}
F^\mu =
\frac{1} {( 1+ z \bar z)^{1/2}}
\begin{pmatrix}
1 & -i^{\prime} \mu^{-1} z\\
i^{\prime} \mu \bar z & 1
\end{pmatrix},
\end{equation}
where $\tilde{\mathbb D}$ is a simply connected domain defined as $\tilde{\mathbb D} = \left\{ z \in \mathbb C^{\prime} \mid z \bar z>-1 \right\}$.
Then $f_{\mathbb L^3}$ is given by
\[
f_{\mathbb L^3} =\frac1 {1 + z \bar z}
\begin{pmatrix}
\frac {i^{\prime}(3z \bar z -1)} 2 & -2 \mu^{-1} z\\
-2\mu \bar z & -\frac {i^{\prime} (3z \bar z -1)} 2
\end{pmatrix}.
\]
Hence the timelike surfaces $f_{\mathbb L^3}$ with the constant mean curvature $1/2$ in $\mathbb L^3$
and the timelike minimal surfaces $f^\mu$ in ${\rm Nil}_3$ are computed as
\[
f_{\mathbb L^3} =\left(
\frac{2 ( \mu^{-1} z + \mu \bar z)}{1 + z \bar z},
\frac{2 i^{\prime} ( \mu^{-1} z - \mu \bar z)}{1 + z \bar z},
\frac{3 z \bar z - 1}{1 + z \bar z}
\right)
\]
and
\[
f^\mu = \left(
\frac{2 i^{\prime} (\mu^{-1} z -\mu \bar z)}{1 + z \bar z}, \:
\frac{2(\mu^{-1} z +\mu \bar z)}{1 + z \bar z}, \:
0
\right).
\]
The surfaces $f^\mu$ are defined on $D= \left\{ z \in \mathbb C^{\prime} \mid -1<z \bar z <1 \right\}$.
In fact the first fundamental form $I$ of $f^\mu$ is computed as
\[
I= 16 \frac {( 1-z \bar z )^2 } {(1 + z \bar z)^4} dz d \bar z.
\]
Moreover the Abresch-Rosenberg differential $B^\mu dz^2$ vanishes on $D$.
In general the graph of the function $F(x_1,x_2) = ax_1 + bx_2 +c$ for $a,b,c \in \mathbb R$
describes a timelike minimal surface
on $\mathbb D = \left\{(x_1,x_2) \mid -(a + x_2/2)^2 + (b - x_1/2)^2 +1 >0 \right\}$.
This plane has positive Gaussian curvature $K$:
\[
K= \frac {2 ( - ( a+\tfrac{1}{2}x_2 )^2 + ( b - \tfrac{1}{2}x_1 )^2 +1) +1}
{4 ( -( a+\tfrac{1}{2}x_2 )^2 + ( b - \tfrac{1}{2}x_1)^2 +1)^2},
\]
and such a surface will be called a \textit{horizontal umbrella}. Horizontal
umbrellas are obtained by different choices of initial conditions for the extended
frame $F^{\mu}$ in \eqref{eq:exthori}. For example, the extended frame
$F_0 F^{\mu}$ with
\[
F_0= \begin{pmatrix} \cosh a & \mu^{-3}\sinh a\\
\mu^3 \sinh a & \cosh a \end{pmatrix} \in \Lambda^{\prime} {\rm SU}_{1, 1 \sigma}^{\prime},
\]
where $a \in \mathbb R$, gives a horizontal umbrella which is
not a horizontal plane.
\subsection{B-scroll type minimal surfaces}\label{sbsc:Bscroll}
Let $\xi$ be a normalized potential defined as
\[
\xi = \mu^{-1} \begin{pmatrix} 0 & - \frac{i^{\prime}}4 \\ -S(z)\bar \ell &0
\end{pmatrix} dz,
\]
where $\bar \ell = \frac12 (1 -
i^{\prime})$ and $S(z)$ is a para-holomorphic function.
The solution $\Phi$ of $d \Phi = \Phi \xi$ with $\Phi(z=0) = \operatorname{id}$ cannot be
computed explicitly, but it can be partially computed as follows:
It is known that a para-holomorphic function $S(z)$ can be expanded
as
\[
S(z) = Q(s) \ell + R(t) \bar \ell
\]
with $ s= x + y$ and $t = x- y$ for para-complex coordinates $z = x + i^{\prime} y$,
$Q = \operatorname{Re} S + \operatorname{Im} S$ and $R = \operatorname{Re} S - \operatorname{Im} S$.
Note that $\ell^2 = \ell$, $\bar \ell^{\,2} = \bar \ell$ and $\ell \bar \ell =0$.
Moreover, from the definition of $s$ and $t$,
$d z = \ell d s + \bar \ell dt$ follows.
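Indeed, since $ds = dx + dy$ and $dt = dx - dy$,
\[
\ell\, ds + \bar \ell\, dt = \tfrac12(1+i^{\prime})(dx+dy) + \tfrac12(1-i^{\prime})(dx-dy) = dx + i^{\prime}\, dy = dz.
\]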
Then the para-holomorphic potential $\xi$ can be decomposed by
\[
\xi = \xi^s \ell + {\xi^t}^* \bar \ell,
\]
with ${\xi^t}^* = - \sigma_3 \left(\overline {\xi^t(1/\bar \mu)}\right)^{T} \sigma_3$
and
\begin{equation}\label{eq:xist}
\xi^s = \lambda^{-1}\begin{pmatrix}
0 & - \frac14 \\ 0 & 0
\end{pmatrix} ds,
\quad
\xi^t = \lambda \begin{pmatrix}
0 & - R(t) \\ \frac14 & 0
\end{pmatrix} dt.
\end{equation}
Then, by the isomorphism in \eqref{eq:funnyisom},
the pair $(\xi^s(\lambda), \xi^t(\lambda))$ is the
normalized potential in \cite[Section 6.2]{DIT} for
a timelike CMC surface in $\mathbb L^3$, the \textit{B-scroll}.
Then the solution of $d \Phi = \Phi \xi$ can be
computed by
\[
d \Phi^s = \Phi^s \xi^s, \quad d \Phi^t = \Phi^t \xi^t\quad
\mbox{with}\quad \Phi^s (0)= \Phi^t (0)= \operatorname{id}
\]
and $\Phi$ is given by
$\Phi = \Phi^s \ell + {\Phi^t}^* \bar \ell$, where
$\Phi^s = \Phi^s (\mu)$ and
${\Phi^t}^* = \sigma_3\overline{\Phi^t (1 / \bar \mu)}^{T-1}\sigma_3$
for $\Phi^s, \Phi^t \in \Lambda {\rm SL}_{2} \R_{\sigma}$.
Then $\Phi^s$ can be explicitly integrated as
\[
\Phi^s = \begin{pmatrix}1 & - \frac14 \lambda^{-1}s\\ 0 & 1
\end{pmatrix},
\]
while $\Phi^t$ cannot be explicitly integrated. Set
\begin{equation}\label{eq:Phit}
\Phi^t= \operatorname{id} + \sum_{k\geq 1} \lambda^{k}
\begin{pmatrix} a_k & b_k\\ c_k & d_k \end{pmatrix},
\end{equation}
where $a_{2k+1} = d_{2k+1} = b_{2k} = c_{2k}=0$ for all $k \geq 1$.
Then applying the Iwasawa decomposition in Theorem \ref{thm:BIdecomposition}
to $\Phi$, that is, $\Phi = F^{\mu} V_+$, one can
compute
\[
\Phi = \Phi^s \ell + {\Phi^t}^* \bar \ell
= (\hat F \ell + \hat F^* \bar \ell )(\hat V_+ \ell + \hat V_-^* \bar \ell),
\]
where $F^{\mu}= \hat F \ell + \hat F^* \bar \ell$ and
$V_+ = \hat V_+\ell + \hat V_-^* \bar \ell$ and $\hat F \in \Lambda {\rm SL}_{2} \R_{\sigma}$,
$\hat V_{+} \in \Lambda {\rm SL}_{2} \R_{\sigma}P$ and $\hat V_{-} \in \Lambda {\rm SL}_{2} \R_{\sigma}N$. Note that it is
equivalent to the Iwasawa decomposition of $\Lambda {\rm SL}_{2} \R_{\sigma} \times \Lambda {\rm SL}_{2} \R_{\sigma}$, that is,
\begin{equation}\label{eq:Iwasawaphist}
(\Phi^s, \Phi^t) = (\hat F, \hat F) (\hat V_+, \hat V_-).
\end{equation}
\begin{Proposition}
The map $\hat F$ can be computed as follows:
\begin{equation}\label{eq:hatF}
\hat F = \Phi^t \Phi_{-}, \quad \mbox{with}
\quad \Phi_{-} = \begin{pmatrix} \left(1+ \frac14 sc_1\right)^{-1} & -\frac14 \lambda^{-1}s \\
0 & 1+ \frac14 sc_1
\end{pmatrix},
\end{equation}
where $c_1= c_1(s, t)$ is the function defined in \eqref{eq:Phit}.
\end{Proposition}
\begin{proof}
From \eqref{eq:Iwasawaphist}, the map $\hat F$ is obtained from the Birkhoff decomposition of ${\Phi^s}^{-1} \Phi^t$,
\[
{\Phi^s}^{-1} \Phi^t = \hat V_{+}^{-1} \hat V_-,
\]
by setting
$\hat F = \Phi^t \hat V_-^{-1} = \Phi^s \hat V_+^{-1}$.
We then multiply $ {\Phi^s}^{-1} \Phi^t $ by $\Phi_-$
from the right, and a straightforward computation shows that
\begin{align*}
{\Phi^s}^{-1} \Phi^t \Phi_-&= \begin{pmatrix}1 & \frac14 \lambda^{-1}s\\ 0 & 1
\end{pmatrix}\left(\operatorname{id} + \sum_{k \geq 1} \lambda^k
\begin{pmatrix} a_k & b_k \\ c_k & d_k \end{pmatrix}\right)
\Phi_- \\
& = \left\{\begin{pmatrix} 1 + \frac14 sc_1 & \frac14 \lambda^{-1} s \\
0 & 1
\end{pmatrix}
+ O(\lambda)\right\}
\begin{pmatrix}
\frac1{1+ \frac14 s c_1} &- \frac14\lambda^{-1} s \\ 0& 1 + \frac14 s c_1
\end{pmatrix} \\
& = \operatorname{id} + O(\lambda)
\end{align*}
holds.
Therefore ${\Phi^s}^{-1} \Phi^t \Phi_- \in \Lambda {\rm SL}_{2} \R_{\sigma}P$ with identity at $\lambda =0$, and
${\Phi^s}^{-1} \Phi^t = \hat V_+^{-1} \Phi_-^{-1}$ is the Birkhoff decomposition.
This completes the proof.
\end{proof}
Plugging the $F^{\mu}$ into $f_{\mathbb L^3}$ in \eqref{eq:SymMin}, we obtain
\[
f_{\mathbb L^3} = \left\{\gamma(t)+ q(s, t) B(t)\right\} \ell
+ \left\{\gamma(t)+ q(s, t) B(t)\right\}^* \bar \ell,
\]
where $A^*= - \sigma_3 \overline{A(1/ \bar \mu)}^T\sigma_3$
for $A \in \Lambda \mathfrak{sl}_2 \mathbb R_{\sigma}$ and
\begin{align*}
\gamma(t)&= -i^{\prime} \mu (\partial_\mu \Phi^{t})(\Phi^{t})^{-1}
- \frac{i^{\prime}}{2} \operatorname{Ad} (\Phi^t) \sigma_3 ,\\
B(t)&= -i^{\prime} \mu \operatorname{Ad} \Phi^t \begin{pmatrix}0 & 1 \\ 0 & 0 \end{pmatrix},
\\
q(s, t) &= \frac{s}{2 (1+ \frac1{16} s t)}.
\end{align*}
Under the new coordinates $(q, t)$, $f_{\mathbb L^3}$ is a so-called \textit{B-scroll},
that is, $\gamma$ is a null curve in $\mathbb L^3$ and $B$ is the bi-normal null
vector of $\gamma$; see \cite[Section 6.2]{DIT} for details.
Further plugging the $F^{\mu}$ into $\hat f$ in \eqref{eq:symNil},
we obtain
\[
\hat f = \left\{\hat \gamma(t)+ q(s, t) \hat B(t)\right\} \ell
+ \left\{\hat \gamma(t)+ q(s, t) \hat B(t)\right\}^* \bar \ell,
\]
where
\[
\hat \gamma(t) = \gamma(t)^{o} - \frac{i^{\prime}}{2} \mu \partial_{\mu} \gamma(t)^d,
\quad \mbox{and}\quad
\hat B(t) = B(t)^{o} - \frac{i^{\prime}}{2} \mu \partial_{\mu} B(t)^d.
\]
A straightforward computation shows that $\exp( \hat \gamma(t) \ell + \hat \gamma(t)^* \bar \ell )$ is
a null curve in ${\rm Nil}_3$ and $\hat B(t) \ell + \hat B(t) ^* \bar \ell $ is a bi-normal vector of $\exp( \hat \gamma(t) \ell + \hat \gamma(t)^* \bar \ell )$,
analogous to the Minkowski case. Therefore we call $f^{\mu}$
the \textit{B-scroll type} minimal surface in ${\rm Nil}_3$.
We will investigate properties of B-scroll type minimal surfaces
in a separate publication.
\appendix
\section{Timelike constant mean curvature surfaces in $\mathbb E^3_1$}\label{timelikemin}
We recall the geometry of timelike surfaces in Minkowski 3-space.
Let $\mathbb L^3$ be the Minkowski 3-space with the Lorentzian metric
\[
\langle \cdot,\cdot \rangle
=
dx_1^2 - dx_2^2 +dx_3^2,
\]
where $(x_1, x_2, x_3)$ is the canonical coordinate of $\mathbb R^3$.
We consider a conformal immersion $\varphi : M \to \mathbb L^3$
of a Lorentz surface $M$ into $\mathbb L^3$.
Take a para-complex coordinate $z = x + i^{\prime} y$
and represent the induced metric by $e^u dz d\bar z$.
Let $N$ be the unit normal vector field of $\varphi$.
The second fundamental form $I\hspace{-1pt}I$ of $\varphi$ derived from $N$ is defined by
\[
I\hspace{-1pt}I = -\langle d\varphi, dN \rangle.
\]
The mean curvature $H$ of $\varphi$ is defined by
\[
H = \frac12 {\rm tr}(I\hspace{-1pt}I \cdot I ^{-1}).
\]
For a conformal immersion $\varphi : \mathbb D \to \mathbb L^3$,
define para-complex valued functions $\phi_1, \phi_2, \phi_3$ by
\[
\varphi_z = (\phi_2, \phi_1, \phi_3).
\]
An argument analogous to the discussion in Section \ref{sec:deracequation}
shows that there exists
$\epsilon \in \{\pm i^{\prime} \}$
and a pair of para-complex functions $(\psi_1, \psi_2)$ such that
\[
\phi_1 = \epsilon \lambdaeft((\overline{\psi_2})^2 +(\psi_1)^2\right), \quad
\phi_2 = \epsilon i^{\prime} \lambdaeft((\overline{\psi_2})^2 -(\psi_1)^2\right), \quad
\phi_3 = 2 i^{\prime} \psi_1 \overline{\psi_2}.
\]
Then the conformal factor $e^u$ and the unit normal vector field $N$ of $\varphi$ can be represented as
\[
e^u = 4(\psi_2 \overline{\psi_2} - \psi_1 \overline{\psi_1})^2,
\]
\begin{equation}\label{normal}
N = \frac1{\psi_2 \overline{\psi_2} - \psi_1 \overline{\psi_1}}
\left(
-\epsilon (\psi_1\psi_2 - \overline{\psi_1}\overline{\psi_2}),\,
\epsilon i^{\prime} (\psi_1\psi_2 + \overline{\psi_1}\overline{\psi_2}),\,
\psi_2 \overline{\psi_2} + \psi_1 \overline{\psi_1}
\right).
\end{equation}
As in the case of timelike surfaces in ${\rm Nil}_3$, we can show that
$(\psi_1, \psi_2)$ is a solution of the nonlinear Dirac equation for a timelike surface in $\mathbb L^3$:
\begin{equation}\label{diraceq}
\left(
\begin{array}{ll}
(\psi_2)_z + \mathcal{U} \psi_1\\
-(\psi_1)_{\bar z} + \mathcal{V} \psi_2
\end{array}
\right)
=
\left(
\begin{array}{ll}
0\\0
\end{array}
\right)
\end{equation}
where $\mathcal{U} =\mathcal{V}= i^{\prime} H \hat \epsilon e^{u/2} /2$
and $\hat \epsilon$ is the sign of $\psi_2 \overline{\psi_2} - \psi_1 \overline{\psi_1}$.
Conversely, if a pair of para-complex functions $(\psi_1, \psi_2)$
satisfying the nonlinear Dirac equation \eqref{diraceq} and
$\psi_2 \overline{\psi_2} - \psi_1 \overline{\psi_1} \neq0$ is given,
there exists a conformal timelike surface in $\mathbb L^3$
with the conformal factor
$e^u = 4(\psi_2 \overline{\psi_2} - \psi_1 \overline{\psi_1})^2$.
\begin{Theorem}\label{minws}
Let $\mathbb D$ be a simply connected domain in $\mathbb C^{\prime}$,
$\mathcal U$ a purely imaginary valued function
and the vector $(\psi_1, \psi_2)$ a solution of the nonlinear Dirac equation \eqref{diraceq}
satisfying $\psi_2 \overline{\psi_2} - \psi_1 \overline{\psi_1} \neq0$.
Take points $z_0 \in \mathbb D$ and $f(z_0) \in \mathbb L^3$,
set $\epsilon$ as either $i^{\prime}$ or $-i^{\prime}$
and define a map $\Phi$ by
\[
\Phi = \left( \epsilon i^{\prime} \left( (\overline{\psi_2})^2 - (\psi_1)^2\right),\,
\epsilon \left( (\overline{\psi_2})^2 + (\psi_1)^2\right), \,
2 i^{\prime} \psi_1 \overline{\psi_2} \right).
\]
Then the map $f : \mathbb D \to \mathbb L^3$ defined by
\begin{equation}\label{eq:minws}
f(z) := f(z_0) +
\operatorname{Re} \left(
\int_{z_0}^z \Phi dz
\right)
\end{equation}
describes a timelike surface in $\mathbb L^3$.
\end{Theorem}
\begin{proof}
A straightforward computation shows that the $1$-form $\Phi dz + \overline{\Phi} d\bar{z}$ is a closed form.
Then Green's theorem implies that $f(z)$ is well-defined.
Thus we have $f_z = \Phi$.
By setting $\phi_k \:(k=1,2,3)$ as $\Phi = (\phi_2, \phi_1, \phi_3)$,
we derive $\phi_2^2 -\phi_1^2 +\phi_3^2 =0$
and $\phi_2 \overline{\phi_2} -\phi_1 \overline{\phi_1} + \phi_3 \overline{\phi_3}
= 2(\psi_2 \overline{\psi_2} -\psi_1 \overline{\psi_1})^2$.
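For instance, the first identity follows directly from $\epsilon^2 = (i^{\prime})^2 = 1$:
\[
\phi_2^2 - \phi_1^2 = \left((\overline{\psi_2})^2 - (\psi_1)^2\right)^2 - \left((\overline{\psi_2})^2 + (\psi_1)^2\right)^2
= -4 (\psi_1)^2 (\overline{\psi_2})^2 = -\phi_3^2.
\]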
This means that $f$ is conformal, and hence timelike.
\end{proof}
\begin{Remark}
The timelike surface defined in Theorem \ref{minws} is conformal with respect to the coordinate $z$.
Denoting the mean curvature by $H$
and the conformal factor by $e^u$,
we have $\mathcal{U} =i^{\prime} H \hat \epsilon e^{u/2} /2$,
where $\hat \epsilon $ is the sign of $\psi_2 \overline{\psi_2} - \psi_1 \overline{\psi_1}$.
\end{Remark}
Obviously, the Dirac equation for timelike minimal surfaces in $({\rm Nil}_3, ds_-^2)$
coincides with the one for timelike constant mean curvature $H=1/2$ surfaces in $\mathbb L^3$.
Combining the identification of $\mathfrak{su}_{1, 1}^{\prime}$ with $\mathbb L^3$ and \eqref{eq:f_z},
we can show that
the corresponding timelike constant mean curvature $1/2$ surfaces
for timelike minimal surfaces $f^{\mu}$ in $({\rm Nil}_3, ds_-^2)$ are given by $f_{\mathbb L^3}$ up to translations
and represented in the form of
\eqref{eq:minws}.
It is easy to see that the unit normal vector field \eqref{normal}
of the timelike surface $f_{\mathbb L^3}$ can be written as $N_{\mathbb L^3}$ by the identification of $\mathfrak{su}_{1, 1}^{\prime}$ and $\mathbb L^3$.
\section{Without para-complex coordinates}\label{sc:nopara}
As we have explained in Example \ref{sbsc:Bscroll}, the normalized
potential $\xi$ which is a $1$-form taking values in $\Lambda^{\prime} \mathfrak{sl}_{2} \mathbb C^{\prime}_{\sigma}$
can be translated to the pair of two real potentials which is
a pair of $1$-forms taking values in $\Lambda \mathfrak{sl}_2 \mathbb R_{\sigma} \times \Lambda \mathfrak{sl}_2 \mathbb R_{\sigma}$.
This can be generalized: any normalized potential $\xi$ can be translated into a pair of two real potentials
$(\xi^s, \xi^t)$ as follows:
For a normalized potential
\[
\xi = \mu^{-1} \begin{pmatrix}
0 & -\frac{i^{\prime}}4 b(z) \\ 4 i^{\prime} \frac{B(z)}{b(z)} & 0
\end{pmatrix} dz,
\]
where $b(z)= h^2(z, 0)h^{-1}(0, 0)$, one can define a pair of $1$-forms
by $\xi = \xi^s \ell + {\xi^t}^* \bar \ell$ such that
\[
\xi^s = \lambda^{-1} \begin{pmatrix} 0 & - \frac14 f(s) \\ Q (s)/f(s) &0 \end{pmatrix}
ds,
\quad
\xi^t = \lambda \begin{pmatrix} 0 &- R (t)/g(t) \\ \frac14 g(t) & 0 \end{pmatrix}
dt,
\]
where para-complex coordinates $z = x + i^{\prime} y$ define null
coordinates $(s, t)$ by $x = s+ t$ and $y = s- t$, and
the functions $f(s)$ and $g(t)$ are given by
\[
f(s) = \operatorname{Re} b(s) + \operatorname{Im} b(s), \quad
g(t) = \operatorname{Re} b(t) - \operatorname{Im} b(t),
\]
and the functions $Q(s)$ and $R(t)$ are given by
\begin{equation}\label{eq:QR}
Q(s) = 4(\operatorname{Re} B(s) + \operatorname{Im}B(s) ), \quad
R(t) = 4( \operatorname{Re} B(t) - \operatorname{Im}B(t) ).
\end{equation}
Note that we use relations
$b(z) = f(s) \ell + g(t) \bar \ell$, and
$4 B(z) = Q(s) \ell + R(t) \bar \ell$
with $\ell =\frac{1+i^{\prime}}2$
and $1/(f(s) \ell + g(t)\bar \ell) = \ell /f(s) + \bar \ell/ g(t)$.
Again, the para-holomorphic solution $\Phi$ of $d \Phi = \Phi \xi$ with $\Phi(0) =\operatorname{id}$,
taking values in $\Lambda^{\prime} {\rm SL}_{2} \C_{\sigma}$, can
be identified with the pair $(\Phi^s, \Phi^t)$ by
\[
\Phi= \Phi^s\ell+ {\Phi^t}^* \bar \ell,
\]
where $\Phi^s = \Phi^s(\mu)$ and
${\Phi^t}^* = \sigma_3 \overline{\Phi^t(1/ \bar\mu)}^{T -1}\sigma_3$.
Thus, using the partial derivatives with respect to $s$ and $t$ given by
\[
\partial_s = \ell \partial_z + \bar \ell \partial_{\bar z}
\quad {\rm and} \quad
\partial_t = \bar \ell \partial_z + \ell \partial_{\bar z},
\]
we need to consider the pair of ODEs
\[
\partial_s \Phi^s = \Phi^s \xi^s, \quad \partial_t \Phi^t = \Phi^t \xi^t,
\]
with the initial condition $(\Phi^s(0), \Phi^t(0)) = (\operatorname{id}, \operatorname{id})$.
The Iwasawa decomposition of $\Phi$, that is $\Phi = F^{\mu} V_+$, can
be again translated to
\[
(\Phi^s, \Phi^t) = (\hat F, \hat F)(\hat V_+, \hat V_-).
\]
Again note that $F^{\mu} = \hat F \ell+ {\hat F}^* \bar \ell$
and accordingly the Maurer-Cartan form $\alpha^{\mu}$ of $F^{\mu}$
taking values in $\Lambda^{\prime} \mathfrak{su}^{\prime}_{1, 1 \sigma}$
can be translated to $\alpha^{\mu} = \hat \alpha \ell + \hat \alpha^{*} \bar \ell$,
where
\begin{equation}\label{eq:alphalambda2}
\hat \alpha = \hat U ds + \hat V dt\quad \mbox{with}\quad
\partial_s \hat F = \hat F \hat U,
\quad
\partial_t \hat F = \hat F \hat V.
\end{equation}
Note that $ \hat \alpha = \hat \alpha(\mu )$
and $\hat \alpha^{*} = - \sigma_3 \overline{\hat \alpha(1 /\bar \mu)}^T \sigma_3$.
Then a straightforward computation shows that
\begin{align}\label{eq:hatUV}
\hat U =
\begin{pmatrix}
\frac12 (\log \hat h)_{s} & - \frac14\lambda^{-1} \hat h \\
\lambda^{-1} Q (s)\hat h^{-1}& - \frac12 (\log \hat h)_{s}
\end{pmatrix},\quad
\hat V =
\begin{pmatrix}
-\frac12(\log \hat h)_{t} & -\lambda R(t)\hat h^{-1}\\
\frac14\lambda \hat h & \frac12(\log \hat h)_t
\end{pmatrix}
\end{align}
hold, where, for an angle function $h= h(z, \bar z)$, $\hat h$ is defined by
$\hat h(s, t) = \operatorname{Re} h(s, t) + \operatorname{Im} h(s, t)$, and $F^{\mu}$ has the Maurer-Cartan form
in \eqref{eq:UVmu}.
Note that when $\alpha$ takes values in $\mathfrak{su}_{1, 1}^{\prime}$, the spectral parameter takes the values
\[
\mu = e^{i^{\prime} \theta} = \cosh(\theta) + i^{\prime} \sinh(\theta) \in
\mathbb S_1^1\quad
( \theta \in \mathbb R).
\]
Then the corresponding spectral parameter $\lambda$ is given by
\[
\lambda = e^{\theta}= \cosh(\theta) + \sinh(\theta) \in \mathbb R^+.
\]
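These two spectral parameters are related through the null basis: a direct computation gives
\[
\mu \ell + \mu^{-1} \bar \ell
= (\cosh(\theta) + i^{\prime}\sinh(\theta))\,\tfrac{1+i^{\prime}}{2} + (\cosh(\theta) - i^{\prime}\sinh(\theta))\,\tfrac{1-i^{\prime}}{2}
= e^{\theta}(\ell + \bar \ell) = \lambda,
\]
consistent with the correspondence $\lambda = \mu \ell + \mu^{-1} \bar \ell$ used earlier.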
We would like to note that in \cite{DIT} null coordinates are used.
Moreover, the spectral parameter $\lambda$ is replaced by $\lambda^{-1}$,
so that $\hat U$ (resp. $\hat V$) in this paper
plays the role of
$U(\lambda^{-1})$ (resp. $V(\lambda^{-1})$) in \cite[Section 5]{DIT}.
\textbf{Acknowledgement:} We would like to thank the anonymous referee for
carefully reading the manuscript, pointing out a number of typographical errors,
and giving helpful suggestions.
\end{document}
\begin{document}
\title{Dynamical decoupling leads to improved scaling in noisy quantum metrology}
\author{P.~Sekatski, M.~Skotiniotis, and W.~D\"ur}
\affiliation{Institut f\"ur Theoretische Physik, Universit\"at Innsbruck, Technikerstr. 21a, A-6020 Innsbruck, Austria}
\date{\today}
\begin{abstract}
We consider the usage of dynamical decoupling in quantum metrology, where the joint evolution of system plus environment is
described by a Hamiltonian. We demonstrate that by ultra-fast unitary control operations acting locally only on system qubits,
essentially all kinds of noise can be eliminated. This is done in such a way that the desired evolution is reduced by at most a
constant factor, leading to Heisenberg scaling. The only exception is noise that is generated by the Hamiltonian to be estimated
itself. However, even for such parallel noise, one can achieve an improved scaling as compared to the standard quantum limit
for any local noise by means of symmetrization.
\end{abstract}
\pacs{03.67.-a, 03.65.Yz, 03.65.Ta}
\maketitle
\paragraph{Introduction.---}
\label{sec:intro}
Quantum mechanics offers the promise to significantly enhance the precision of estimating unknown
parameters as compared to
any classical approach~\cite{GLM04,*GLM06,*Giovanetti:11}. Such high-precision measurements are of central importance in
physics and other areas of science, and include possible applications in frequency
standards~\cite{Wineland:92, Bollinger:96, Chwalla:07}, atomic clocks~\cite{Roos:06, Valencia:04, deburgh:05}, or gravitational
wave detectors~\cite{McKenzie:02, Ligo:11}. However, this quantum advantage seems to be rather fragile and can in general
not be maintained in the presence of incoherent noise
processes~\cite{Huelga:97,Escher:11,*Escher:12, Kn11,Kolodynski:12,*Kolodynski:13, Al14, Knysh:14}.
Identifying schemes that realize this advantage in practice is of high theoretical and practical
relevance. The usage of quantum error correction was suggested in this context, which is however restricted to certain specific
noise processes~\cite{Dur:14,*Kessler:14,*Ozeri:13, Sekatski:15}. Limited control over the environment also allows for an improved scaling in
certain situations~\cite{Gefen:15, Plenio:15}, but may be difficult to realize.
Here we provide a practical scheme to maintain quantum advantage based on the usage of dynamical decoupling
techniques~\cite{Viola:98,*Viola:99, Lidar:14}, which have been shown to be applicable for storage and for the realization of quantum gates~\cite{ Viola:09,*Viola:10}. These techniques are nowadays widely used in various experimental settings~\cite{Biercuk:09,Biercuk:09a, Sagi:10, Szwer:11, du:09, Barthel:10, bluhm:11,de:10, Ryan:10}. With the help of ultra-fast control operations that act locally on the system qubits, we show
that one can effectively decouple the system from its environment and fully protect it against decoherence effects, while at the
same time maintaining its sensing capabilities.
System and environment are described by a (pure) state, and interact via a coherent evolution governed by some Hamiltonian
$H_{SE}$. In addition, the sensing system is effected by a Hamiltonian $H_S$ that includes an unknown parameter that should
be estimated. We assume that a coherent description is appropriate at all times, and that the evolution can be interrupted by
(ultra)-fast control operations. This is similar to what is done in dynamical
decoupling~\cite{Viola:98,*Viola:99,Viola:09,*Viola:10, West:10}, or digital quantum simulation~\cite{Lloyd:96,Jane:03}.
In practice, these intermediate pulses will not be infinitely fast, and only noise up to a certain frequency can be treated and
eliminated this way. However, in what follows we will assume
that the accessible control operations are much faster than any other timescale in the problem.
We show that for {\em any} local system-environment interaction of rank one or two one can completely eliminate the
noise process at the cost of reducing the sensing capabilities of the system by a constant factor, leading to Heisenberg scaling
in precision. Only interactions that are of rank three---and hence necessarily contain a component that is
generated by the system Hamiltonian that should be estimated---cannot be fully corrected. Still one can achieve that
only this parallel noise part remains, and, even in this case one can still maintain a quantum advantage. In particular, for any
{\em local} noise process, we show that one can achieve a super-standard quantum scaling in precision by preparing $N$ qubit
probes in the Greenberger-Horne-Zeilinger (GHZ) state and randomizing the qubits via fast,
intermediate swap gates. For correlated noise processes we provide a general lower bound on the achievable precision,
applicable even if one assumes perfect quantum control and auxiliary resources, demonstrating that one is in general restricted
to the standard quantum limit. Even in this case, one may still improve the precision by a constant factor; however,
the exact effect depends on the details of the system-environment interactions and the type of fluctuations.
\paragraph{Background.---}
Quantum metrology is the science of optimally measuring and estimating an unknown parameter such as the frequency in
an atomic clock or the strength of a magnetic field. In a typical metrology scenario, a system of $N$ probes is subjected to
an evolution for a certain time $t$ with respect to a Hamiltonian $H_S= \omega \sum \sigma_z^{(k)}$, where $\omega$ is the parameter to be
estimated, and is subsequently measured. This process is repeated $\nu$ times and $\omega$ is estimated from the
measurement statistics. The Cram\'er-Rao bound then provides a limitation on the estimation precision $\delta \omega \geq \frac{1}{\sqrt{\nu \mathcal{F}_{(t,N)}}}$, which is attainable for large enough number of repetitions $\nu$. Here ${\cal F}$ is the quantum Fisher information (QFI) ~\cite{BC94}.
For classical strategies (separable probe states) the ultimate precision is given by the standard
quantum limit (SQL), $\mathcal{F}\propto N$. Using entangled states, one can achieve the so-called
Heisenberg limit, $\mathcal{F}\propto N^2$, i.e., a quadratic improvement as compared to any classical
strategy~\cite{GLM04,*GLM06,*Giovanetti:11}.
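For illustration (using the standard pure-state expression $\mathcal{F}=4t^2\,\mathrm{Var}(G)$ for a unitary family $e^{-i\omega t G}$, here with $G=\sum_k\sigma_z^{(k)}$): a product probe state such as $\ket{+}^{\otimes N}$, with $\ket{+}=(\ket{0}+\ket{1})/\sqrt{2}$, has $\mathrm{Var}(G)=N$ and hence $\mathcal{F}=4Nt^2\propto N$, whereas the GHZ state $(\ket{0}^{\otimes N}+\ket{1}^{\otimes N})/\sqrt{2}$ has $\mathrm{Var}(G)=N^2$ and hence $\mathcal{F}=4N^2t^2\propto N^2$.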
However, in practice the system is not isolated but also interacts with its environment. In general, this leads to a noisy
evolution, where it was shown that in case of incoherent noise described by a master equation, the quantum advantage is limited
to a constant factor rather than a different
scaling~\cite{Huelga:97,Escher:11,*Escher:12, Kn11,Kolodynski:12,*Kolodynski:13, Al14, Knysh:14}. The way to describe noise
processes crucially depends on the timescales of the problem. Here we assume that we have ultra-fast access to the system,
and can interject the evolution by fast control operations, much faster than the relaxation time of the environment. In this case it is appropriate to
describe the dynamics of system plus environment in a coherent way, i.e., by means of an overall Hamiltonian that governs
the unitary evolution of system plus environment. As we will show shortly, the fact that system and environment
evolve coherently grants us with additional freedom that allows to maintain Heisenberg scaling even in the presence
of uncontrolled interaction with some environment.
\paragraph*{General setting.---}We now specify the exact form of the overall Hamiltonian describing the coherent evolution of
both the system and environment. We begin by first considering the case of a single qubit. The most general evolution
of a single qubit plus environment is described by the Hamiltonian
\begin{equation}
H=H_S+H_{SE}\equiv\omega\sigma_3\otimes\mathbbm{1} + \sum_{j=0}^{3} c_j \sigma_j \otimes A_j,
\label{singlequbitH}
\end{equation}
where $H_S=\omega\sigma_3\otimes\mathbbm{1}$ describes the evolution of the system, $H_{SE}$ describes the system-environment
interaction with $A_j$ arbitrary environment operators and $c_j$ the
coupling strengths. Here and throughout the remainder of this work $\sigma_j, \, j\in\{1,2,3\}$ denote the Pauli matrices,
${\bm\sigma}$ denotes the vector of Pauli matrices and $\sigma_{\bf n}= {\bf n}\cdot {\bm \sigma}$.
In the case where we have $N$ probe systems, the exact form of the Hamiltonian governing their coherent evolution with the
environment depends on whether the $N$ probes couple to individual environments, or to a common environment. In the former
case the Hamiltonian is given by
\begin{align}
H&=\sum_{a=1}^N \Big(\underbrace{\omega \sigma_3^{(a)}\otimes\mathbbm{1}+\sum_{j=0}^{3}c_j^{(a)}\sigma_j^{(a)}\otimes A_j^{(a)}}_{H_S^{(a)}+H_{SE}^{(a)}}\Big),
\label{independentH}
\end{align}
where $A_j^{(a)}$ act on different Hilbert spaces. If the $N$ probes couple to a common environment then the
operators $A_j^{(a)}$ are entirely unspecified allowing
for both temporal and spatial correlations.
Before proceeding to the results let us outline the decoupling procedure.
\paragraph*{Decoupling strategy.---}
The most general dynamical decoupling strategy consists of applying an arbitrarily
fast time sequence of unitary gates, i.e., intersecting the system evolution with fast pulses. Without loss of generality any strategy
corresponds to a time ordered sequence of gates $\{ u_i \}_0^n$ applied at times $\{0,t_1,\ldots,t_n\}$. If this is done fast enough, one can use a first order Trotter expansion to describe the effective evolution as being
generated by the Hamiltonian
\begin{equation}
H_\text{eff}= \mathcal{E}(H)=\sum_{i=1}^n p_i U_i H U_i^\dag
\label{effectiveH}
\end{equation}
modulo an irrelevant final unitary, where we redefine the gates as $U_1 =u_0$, $U_{i+1} = U_i u_i$ and the probabilities are obtained from the time sequence $p_i = \frac{t_i-t_{i-1}}{t_n}$.
This is similar to what is done in optimal control theory or bang-bang control~\cite{Viola:98, Viola:99}, with the requirement that
the desired evolution $H_S$ is not completely eliminated.
\paragraph{Single qubit.---}
Notice that in the single qubit case the parametrization of completely positive trace preserving (CPTP) maps in Eq.~\eqref{effectiveH} spans exactly all the unital maps, i.e. such that
$\mathcal{E}(\mathbbm{1})=\mathbbm{1}$~\cite{Oi:01}, and an arbitrary unital map can always be constructed from the set of generators
$\{\sigma_i\}_{i=0}^3$. As all single qubit CPTP maps corresponds to affine transformations on the Bloch
sphere, a unital CPTP map is uniquely specified by a real matrix $A=\bar{R} D R$, where $R,\,\bar{R} \in SO(3)$,
$D=\text{diag}(\eta_1,\eta_2,\eta_3)$~\footnote{The values $\eta_i$ have to satisfy several constraints which are, however, not
important in our context}, and the action of such map on any Pauli matrix is given by
$\mathcal{E}(\sigma_{\bm n})=A{\bm n} \cdot {\bm \sigma}$. The second rotation $\bar R$ corresponds to an inconsequential change of basis, so we assume $\bar R=R^T$.
Note that a noise term can be eliminated if and only if it belongs to the kernel of $D R$; on the other hand, $D\neq 0$ since some part of the system evolution has to survive. Therefore, in order to identify all the noise terms that can be removed, it is sufficient to consider rank-one projectors $A=\Uppi_{\bf r}$ in a general direction ${\bf r}=(r_1,r_2,r_3)^T$. It follows that the action of the corresponding map $\mathcal{E}=\uppi_{\bf r}$ on
Pauli matrices is given by $\uppi_{\bf r}(\sigma_{\bm n})=({\bm r} \cdot {\bm n})\, \sigma_{\bf r}$.
Notice that this remains true even if one allows for auxiliary systems and intermediate unitary operations in Eq.~
\eqref{effectiveH} to act on the enlarged system. As the argument is purely geometrical, embedding everything in a larger
dimensional space does not change the conclusions. Finally note that such a unital map $\uppi_{\bf r}$ can be easily
implemented by applying $U=\sigma_{\bm r}$ at regular intervals, so that the effective Hamiltonian is
$H_{\rm eff}=\uppi_{\bf r}(H)=(H + \sigma_{\bm r} H \sigma_{\bm r})/2$.
Now, for the single qubit, we identify all the noise terms that can be removed by dynamical decoupling. As the noise term $\mathbbm{1} \otimes A_0$ in Eq.~\eqref{singlequbitH} only acts on the environment and does not affect
the system evolution, it cannot be altered by any control operations performed on the system alone. In what follows we will
ignore this term, but remark that it generally plays a role for the overall evolution unless all noise
components can be canceled. The action of the decoupling strategy $\uppi_{\bf r}$ on the rest of the Hamiltonian in Eq.~\eqref{singlequbitH} leads to
\begin{equation}\label{rankthreeeffective}
H_\text{eff} =\omega \,{r}_3\, \sigma_{\bf r} \otimes \mathbbm{1} + \sigma_{\bf r}\otimes \sum_{j=1}^3 { r}_j c_j A_j.
\end{equation} Consequently the noise can be effectively decoupled if and only if
$
\exists\, \alpha_1,\alpha_2 \in \mathbb{R}$ such that $ c_3 A_3 = \sum_{j=1}^2 \alpha_j c_j A_j,
$
in which case the effective system evolution is slowed down by the factor $r_3 =(1+\alpha_1^2+\alpha_2^2)^{-1/2}$.
In the case of bounded operators $A_j$ the Hamiltonian Eq.~\eqref{singlequbitH} can be put in the \emph{standard form}
\begin{equation}
H =\omega\, \sigma_3 \otimes \mathbbm{1} +\sum_{j=1}^3 b_j \sigma_{{\bm n}_j} \otimes B_j,
\end{equation}
where $\text{tr}B_j B_k =\delta_{j k}$, $\{{\bf n_1},{\bf n_2} ,{\bf n_3} \}$ is an orthonormal frame and $b_1 \geq b_2 \geq b_3$ are the ordered Schmidt coefficients, see supplementary material. This allows a more intuitive geometrical picture of dynamical decoupling. For any rank one or two noise, i.e. $b_3=0$, we can choose
${\bm r}={\bm n}_1 \times {\bm n}_2$, orthogonal to the plane where noise acts. The fact that the
desired evolution $\sigma_3 \otimes \mathbbm{1}$ is not completely canceled requires that ${\bm r}\cdot {\bm z} \not=0$,
which is equivalent to ${\bm n}_1,{\bm n}_2 \not= {\bm z}$. In this case, the noise is completely eliminated by
dynamical decoupling and, as above, the system evolution
$H_S = r_3 \omega\, \sigma_{\bf r} \otimes \mathbbm{1}$ is slowed down by a factor $r_3 = {\bf z} \cdot ( {\bm n}_1 \times {\bm n}_2)$.
The reduction of the coupling strength leads to a constant reduction of the
achievable accuracy by $(r_3)^2$ but, as all noise is completely eliminated, we still obtain Heisenberg scaling precision.
Notice that noise that is perpendicular to the system Hamiltonian, i.e., any combination of $\sigma_{{\bm n}_x}$ and
$\sigma_{{\bm n}_y}$ noise, can be eliminated without altering the evolution, i.e. $r_3=1$. This is done by using fast
intermediate $\sigma_{{\bm n}_z}$ pulses.
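Explicitly, choosing ${\bm r}={\bf z}$, i.e.\ interleaving the free evolution with fast $\sigma_3$ pulses, and using $\sigma_3\sigma_{1}\sigma_3=-\sigma_{1}$ and $\sigma_3\sigma_{2}\sigma_3=-\sigma_{2}$, the Hamiltonian of Eq.~\eqref{singlequbitH} is mapped to
\[
\uppi_{\bf z}(H)=\tfrac12\left[H+(\sigma_3\otimes\mathbbm{1})\,H\,(\sigma_3\otimes\mathbbm{1})\right]
=\omega\,\sigma_3\otimes\mathbbm{1}+c_0\,\mathbbm{1}\otimes A_0+c_3\,\sigma_3\otimes A_3,
\]
so the transverse noise terms $j=1,2$ are removed while the desired evolution is untouched ($r_3=1$).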
\paragraph*{Single qubit and full rank noise.---} From the above geometrical argument, it follows that one cannot eliminate rank three noise as
such noise spans the whole three-dimensional space, and we can only eliminate a two-dimensional plane. To see this note that
the effect of the decoupling procedure on any rank-three noise model is to project both the system
Hamiltonian and noise onto direction ${\bm r}$, so one obtains an effective Hamiltonian \eqref{rankthreeeffective} for system plus environment
which is unitarily equivalent to a $\sigma_{{\bm n}_z}$ evolution and parallel noise, with $\sum_j r_j c_j A_j= \sum_j ({\bf n}_j \cdot {\bf r})\,b_j B_j$ for the standard form. As noise generated by the same
operator as the system Hamiltonian cannot be eliminated without eliminating the system evolution as well, the best one can
hope for in this case is to reduce any rank-three noise to noise parallel to the system evolution.
The choice of direction ${\bm r}$ in Eq.~\eqref{rankthreeeffective} determines both
the effective coupling strength of the system Hamiltonian $r_3$ and the strength of
the noise. One can then optimize ${\bm r}$ to optimize the ratio
between the modified coupling strength $r_3$ and the variance of noise fluctuations after projection \footnote{Notice,
however, that while this shows that in general we can reduce the problem to parallel noise it is not clear if this reduction to
parallel noise does in fact correspond to the optimal strategy.}. Later on we will show what this optimal ratio is for the case of local Gaussian noise.
\paragraph{$N$ qubits.---}
We now turn to the case of $N$ two level systems.
Consider first the case where each qubit encounters an independent environment which corresponds to a local noise
process. The total Hamiltonian describing the evolution of all $N$ qubits plus environment is given by
Eq.~\eqref{independentH}, with the system evolution $H_S=\omega \sum_a \sigma_3^{(a)} \equiv \omega S_3$, and the system-environment interaction $H_{SE} = \sum_{a=1}^N \sum_{j=1}^3 c_j^{(a)} \sigma_j^{(a)} \otimes A_j^{(a)}$ . One can use the above dynamical decoupling
strategy independently on each of the systems so that the results of the previous section directly apply. For each qubit, noise of
rank one or two can be eliminated, while full rank noise can be reduced to parallel noise $H_{SE} = \sum_{a} \tilde c^{(a)} \sigma_{\bf r}^{(a)} \otimes \tilde A^{(a)}$ with $\tilde c^{(a)} \tilde A^{(a)}= \sum_{j=1}^3 c_j^{(a)} r_j^{(a)} A_j^{(a)}$ (and $H_S= \omega \sum_a r_3^{(a)} \sigma_{\bf r}^{(a)}$).
In addition, one can randomize the system particles by means of fast intermediate permutations, where each permutation can
be efficiently realized by $\mathcal{O}(N)$ two-qubit swap gates. Random permutations leave $H_S$ unchanged, but project out all
asymmetric noise terms onto their symmetric contribution~\footnote{This follows from the fact that any asymmetric operator will pick up a negative sign when
permuted by the right element of the symmetric group of $N$ objects. As we are uniformly averaging over all elements of this
group, the only operators that will survive are the symmetric ones.}. Hence, the only remaining noise term is given by
\begin{equation}
H_{SE}=\bar{c_3} S_3 \otimes \bar{A},\quad
\bar{A}=\frac{1}{ \bar{c_3}} \sum_{a=1}^N \tilde c^{(a)} \tilde A^{(a)}, \quad
\bar{c_3}=\frac{1}{N} \sum_{a=1}^N \tilde c^{(a)}.
\label{permnoise}
\end{equation}
Notice that in general $\bar{A}$ depends on the individual coupling strengths $\tilde c^{(a)}$ unless all $\tilde A^{(a)}$ are
identical. As we show later symmetrization of all system qubits can, in the presence of
independent couplings or fluctuating coupling strengths, help boost precision to super-classical scaling.
We remark that if the noise has no symmetric contributions then $\bar c_3=0$, and even
locally full rank noise can be eliminated by symmetrization.
We now consider the case where the $N$ qubits couple to a common environment, which may possess both temporal and
spatial correlations. In this case the environment operators $A^{(a)}$ in Eq.~\eqref{independentH} are unspecified. Let us first
suppose that the system-environment interactions are such that each system qubit interacts individually with the
environment. In principle, a similar strategy as illustrated in the single-qubit case can be applied, where one
eliminates all noise except the one generated by the (symmetrized) system Hamiltonian itself by appropriate fast control
operations. By way of example consider the following \emph{local} decoupling strategy where one applies fast
local $\sigma_z^{(a)}$ on each of the qubits. This allows to eliminate all noise terms including $\sigma_x^{(a)},\sigma_y^{(a)}$
without altering $H_S$, and together with fast random permutations reduces all noise to one generated by the system
Hamiltonian itself, see Eq. (\ref{permnoise}). The only difference as compared to the case of independent environments treated
above is now that the operators $\tilde A^{(a)}$ may act on the same environment.
In general a more involved decoupling strategy requiring non-local operations may be needed in order to partially or fully remove
the noise. However, it is not clear if all noise operators except those parallel to $H_S$ can be completely removed in this
case as not all unital maps can be expressed as convex combinations of unitary operations~\cite{Landau:93}. Moreover,
whatever the dynamical decoupling procedure, the condition that $H_S$ has a non-zero overlap with the kernel of the unital map
must hold in order to be able to estimate $\omega$.
One may also consider noise where several systems are affected simultaneously. From a physical standpoint
such many-body noise processes are less important as they usually correspond to higher order processes. Nevertheless, these
correlated noise processes can be eliminated by means of dynamical decoupling, and for any quasi-local noise process
one still recovers Heisenberg scaling in the absence of noise generated by $S_3$, see supplementary material.
\paragraph{Parallel noise.---}
Hitherto, we have seen how to eliminate all kinds of noise, except noise generated by $S_3$. The latter is indistinguishable from
the desired evolution and cannot be eliminated. However, we will now show that even such parallel noise does
not automatically imply the SQL. In fact, the scaling of the QFI depends on the particular situation considered. For
instance, if the noise is due to uncorrelated fluctuations of single-qubit noise terms, then a super-SQL scaling $O(N^{3/2})$ of the QFI can
be achieved.
Consider the effect of the system plus environment evolution described by Eq.~\eqref{permnoise} on the
system alone. Tracing out the environment in the eigenbasis $\{\ket{\ell}\}$ of $\bar A_3$ one can always represent
the noise by the CPTP map
\begin{equation}
\mathcal{E}(\rho)=\int p(\bar c_3) f(\ell) e^{- i t \bar c_3 \ell S_3} \rho e^{i t \bar c_3 \ell S_3} d\ell \,d \bar c_3,
\label{cptpnoise}
\end{equation}
where $p(\bar c_3)$ corresponds to fluctuations of the interaction strength between experimental runs, and
$f(\ell) =\bra{\ell} \rho_E \ket{\ell}$ depends on the initial state of the environment~\footnote{If $\bar A_3$ has a discrete
spectrum then the integral over $\ell$ is replaced by a sum.}.
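As an illustration (a limiting case of Eq.~\eqref{cptpnoise}, stated here under the additional assumption $H_S=\omega S_3$): if the coupling strength is fixed and known, $p(\bar c_3)=\delta(\bar c_3 - c)$, and the environment is prepared in a single eigenstate of $\bar A_3$, $f(\ell)=\delta(\ell-\ell_0)$, the map reduces to the known unitary
\begin{equation}
\mathcal{E}(\rho)= e^{- i t c \ell_0 S_3}\, \rho\, e^{i t c \ell_0 S_3},
\end{equation}
i.e., a deterministic, known shift of the estimated frequency that can simply be subtracted; only the uncertainty in $\bar c_3$ or in $\ell$ degrades the estimation.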
The effect of the system-environment coupling, when the environment is not in an eigenstate of $\bar A_3$, is similar to a
fluctuating interaction strength. In both cases, one has to average over evolutions governed by the same Hamiltonian as $H_S$
with a fluctuating parameter, where the latter is described by a suitable probability distribution. These fluctuations are what
ultimately limit the achievable accuracy in parameter estimation, as they directly correspond to fluctuations of the
parameter $\omega$ to be estimated. However, the resulting scaling strongly depends on the details of the situation, such as the
spectrum of the environment and whether these fluctuations are correlated or uncorrelated. We now consider some of these
different cases.
The worst case is when the interaction strength, $\bar c_3$, is fixed but unknown (within a certain range). This type of noise
leads to a systematic error on the estimated value of $\omega$ and there is no way to
decrease the error below a certain value set by the initial knowledge of the interaction strength and the state of the
environment (except the trivial case where the environment is in the zero eigenstate of $\bar A_3$).
We now turn to the case where the mean interaction strength $\bar c_3$ is known but fluctuates around the mean value
between experimental runs following a smooth distribution $p(\bar c_3)$, and $\bar A_3=\mathbbm{1}$. This is equivalent to the case
of a fixed $\bar c_3$ but a continuous spectrum of $\bar A_3$ with smooth $f(\ell)$.
We show in the supplementary material that for any $p(\bar c_3)$ the optimal QFI per unit time is upper bounded by
\begin{equation}\label{parallel bound}
\frac{\mathcal{F} }{t}\leq N \sqrt{\mathcal{F}_{cl}(p(\bar c_3))},
\end{equation}
where $\mathcal{F}_{cl}(p(\bar c_3))=\int \frac{\big(p'(\bar c_3)\big)^2}{p(\bar c_3)} d\bar c_3$
remains finite for every smooth noise distribution $p(\bar c_3)$, enforcing the SQL in this case.
If $p(\bar c_3)$ is a normal distribution of width $\sigma$, the bound takes the
simple form $\mathcal{F}/t \leq N/\sigma$, whereas a strategy utilizing an $N$ qubit GHZ state
$\frac{1}{\sqrt{2}}(\ket{0}^{\otimes N}+\ket{1}^{\otimes N})$ gives a maximal QFI per unit time
$\mathcal{F}/t \approx 0.43 \, N/\sigma$ for the optimal choice of $t$.
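To see where this number comes from, here is a minimal sketch, assuming $H_S=\omega S_3$ with $S_3=\frac{1}{2}\sum_a\sigma_3^{(a)}$: the two branches of the GHZ state acquire a relative phase $N(\omega+\bar c_3)t$, so averaging over the Gaussian fluctuations of $\bar c_3$ (its known mean can be absorbed into $\omega$) damps the GHZ coherence by $e^{-N^2\sigma^2 t^2/2}$ while leaving the populations untouched. Restricted to the two-dimensional GHZ subspace, the state behaves like a qubit with Bloch vector of length $e^{-N^2\sigma^2 t^2/2}$ precessing at rate $N\omega$, so that
\begin{equation}
\frac{\mathcal{F}}{t}=N^2\, t\, e^{-N^2\sigma^2 t^2},
\end{equation}
which is maximized at $t=\frac{1}{\sqrt{2}N\sigma}$ and there takes the value $\frac{N}{\sqrt{2e}\,\sigma}\approx 0.43\, N/\sigma$.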
Next consider local parallel noise, where each $c_3^{(a)}$ in Eq.~\eqref{permnoise} is an independent and normally distributed
random variable with width $\sigma$. After randomly permuting the probes
one finds that $\bar{c}_3$ is also a normally distributed random variable whose width $\bar \sigma$ is reduced by a
factor $\sqrt{N}$, $\bar{\sigma}=\frac{\sigma}{\sqrt{N}}$. Consequently, preparing the probes in the GHZ state yields a
super-SQL precision in estimating $\omega$
\begin{equation}
\frac{\mathcal{F}_{\mathrm{GHZ}}}{t_\text{opt}} = \frac{N^{3/2}}{\sqrt{2e}\sigma},
\end{equation}
where $t_\text{opt}=1/\sqrt{N \sigma^2}$. Consequently, the Cram\'er-Rao bound $\delta \omega \geq (\nu \mathcal{F})^{-1/2}= (T \mathcal{F}/t_\text{opt})^{-1/2}$ is attainable for large total running time $T=\nu t_\text{opt}$.
This demonstrates that the use of symmetrization of the noise operators allows one to significantly reduce the overall effects
of noise (a fact that was also noted in~\cite{Macchiavello:02, Zanardi:99}), and restore super-SQL scaling.
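The $\sqrt{N}$ reduction of the width is simply the averaging of independent fluctuations: since $\bar c_3$ is the arithmetic mean of the $N$ independent local couplings, cf. Eq.~\eqref{permnoise},
\begin{equation}
\mathrm{Var}(\bar c_3)=\frac{1}{N^2}\sum_{a=1}^N \mathrm{Var}\big(c_3^{(a)}\big)=\frac{\sigma^2}{N},
\qquad \mbox{i.e.} \qquad \bar\sigma=\frac{\sigma}{\sqrt{N}},
\end{equation}
and inserting $\bar\sigma$ into the single-coupling GHZ result $\mathcal{F}/t\approx 0.43\, N/\bar\sigma = N/(\sqrt{2e}\,\bar\sigma)$ quoted above reproduces the $N^{3/2}$ scaling.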
Finally, in the case where $\bar c_3$ is fixed and $\bar A_3$ has a discrete spectrum, the effective noise distribution $f(\ell)$ is
discrete and $\mathcal{F}_{cl}(p(\bar c_3))$ is unbounded. Consequently, the bound of Eq.~\eqref{parallel bound} is trivial and
no general statements can be made regarding the optimal QFI per unit time. For example, if $\bar A_3$ has an equally
spaced spectrum with gap $\Delta$, then at time $t=\frac{2\pi}{\Delta \bar c_3}$ the noise cancels completely (see the short calculation at the end of this paragraph). This final
example, though artificial, demonstrates that one cannot provide general statements on achievable scaling without
specifying further details of the type of fluctuations, interaction, spectrum, and initial state of the environment.
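To make the cancellation in the last example explicit, here is a short calculation, assuming that the eigenvalues of $S_3$ differ by integers (as they do, e.g., for $S_3=\frac{1}{2}\sum_a\sigma_3^{(a)}$) and that the spectrum of $\bar A_3$ is $\ell_n=\ell_0+n\Delta$ with integer $n$: at $t=\frac{2\pi}{\Delta \bar c_3}$ one has
\begin{equation}
e^{-i t \bar c_3 \ell_n S_3}\,\rho\, e^{i t \bar c_3 \ell_n S_3}
= e^{-i t \bar c_3 \ell_0 S_3}\,\rho\, e^{i t \bar c_3 \ell_0 S_3}
\end{equation}
for every $n$, since $e^{-2\pi i\, n S_3}$ is then proportional to the identity. The average over $f(\ell)$ in Eq.~\eqref{cptpnoise} therefore collapses to a single known rotation, and no information about $\omega$ is lost at this particular interrogation time.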
\paragraph{Summary.---}
We have shown that when an overall Hamiltonian description of the system plus environment is appropriate, ultrafast control allows one to eliminate a large class of noise processes and to recover Heisenberg scaling. We remark that the dynamical procedure outlined here can also be experimentally realized with finite-duration control pulses, as was shown in~\cite{Viola:03}. Ultimately, the only noise
processes that forbid Heisenberg scaling precision are those generated by the system Hamiltonian to be estimated itself. The
effect of such parallel noise strongly depends on the details of interactions, the spectrum of the environment and the type of
fluctuation of the coupling parameter.
Our results are in stark contrast to situations where a master equation description of the system environment interaction is
required. There, it has been shown that with the help of auxiliary systems and fast error correction only rank one Pauli noise
processes can be eliminated, while even full quantum control including ultrafast pulses and quantum error correction do not
allow one to go beyond SQL scaling~\cite{Sekatski:15}. Hence, our results hold great promise for practical applications of
quantum metrology in various contexts, opening the way towards ultra-sensitive devices with widespread potential applications in
many branches of science.
\paragraph{Acknowledgments} We thank J. Ko\l odynski for useful discussions.
This work was supported by the Austrian Science Fund (FWF): P24273-N16, P28000-N27
and the Swiss National Science Foundation grant P2GEP2\_151964.
\appendix
\section{Appendix}
\subsection{Standard form of the Hamiltonian}
Consider a Hamiltonian of the form
\begin{align}
H=\sum_{j=1}^{3} c_j \tilde \sigma_j \otimes A_j.
\label{independent}
\end{align}
For bounded operators $\tilde C_j = c_j A_j$ we define the overlap matrix
\begin{equation}
\tilde O_{i k} = \mathrm{tr} \tilde C_i \tilde C_k,
\end{equation}
which is real and symmetric $\tilde O=\tilde O^T$ (as imposed by the hermiticity of the Hamiltonian $\tilde C_j=\tilde C_j^\dag$).
Expressing the Pauli operators in a rotated frame $\tilde {\bm \sigma}= R\, {\bm \sigma}$ allows one to rewrite the Hamiltonian as
\begin{equation}
H=\sum_{k=1}^{3} \sigma_k \otimes \sum_j R_{j k}\tilde C_j = \sum_{k=1}^{3} \sigma_k \otimes C_k,
\end{equation}
with $C_k = \sum_{j=1}^3 R_{jk}\tilde C_{j}$. Accordingly, the overlap matrix for the operators $O_{i k} = \mathrm{tr}\, C_i C_k$ is given by
\begin{equation}
O = R^T \tilde O R.
\end{equation}
Choosing the rotation that diagonalizes the symmetric matrix $\tilde O= R \,\text{diag}(\lambda_1,\lambda_2,\lambda_3) \, R^T$ leads to
\begin{equation}
\mathrm{tr} C_j C_k = \delta_{jk} \lambda_j.
\end{equation}
This also shows that $\lambda_j \geq 0$, each being the trace of the square of a Hermitian operator. Finally, denoting
$B_j = \frac{1}{\sqrt{\lambda_j}} C_j$ and $b_j= \sqrt{\lambda_j}$ allows one to rewrite the Hamiltonian as
\begin{equation}
H= \sum_{j=1}^3 b_j \,\sigma_j \otimes B_j,
\end{equation}
with $\mathrm{tr} B_j B_k =\delta_{jk}$.
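As a simple illustration of this standard form, suppose two of the original couplings coincide and the third vanishes, $\tilde C_1=\tilde C_2=C$ and $\tilde C_3=0$, and set $\beta=\mathrm{tr}\, C^2$. Then
\begin{equation}
\tilde O=\beta \left( \begin{array}{ccc} 1 & 1 & 0\\ 1 & 1 & 0\\ 0 & 0 & 0 \end{array} \right)
\end{equation}
has eigenvalues $(2\beta,0,0)$, so that after the rotation the Hamiltonian reduces to the single term $H=\sqrt{2\beta}\,\sigma_1\otimes B_1$ with $\sigma_1=(\tilde\sigma_1+\tilde\sigma_2)/\sqrt{2}$ and $B_1=C/\sqrt{\beta}$: although two Pauli directions couple to the environment, the noise is locally of rank one.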
\subsection{Correlated noise}
\label{correlated}
We now consider correlated noise processes where several systems are affected jointly. In general, the
system-environment Hamiltonian of an $N$-qubit system is given by
$H_{SE} = \sum_{{\bm j}} c_{\bm j}^{({\bm a})} T_{\bm j}^{({\bm a})} \otimes A_{\bm j}^{({\bm a})}$ where
$T_{\bm j}^{({\bm a})} =\sigma_{j_1}^{(a_1)}\otimes \sigma_{j_2}^{(a_2)} \ldots \sigma_{j_N}^{(a_N)}$ denotes a tensor product
of Pauli operators. Using fast intermediate $\sigma_3$ operations on all qubits allows one to eliminate all terms containing
$\sigma_1$ or $\sigma_2$ on at least one qubit. We are then left with a Hamiltonian where $j_k \in \{0,3\}$, and the noise is purely diagonal.
In case of localized noise, i.e., where there is a certain spatial structure and only qubits that are spatially close are jointly
affected by noise, one can use fast intermediate $\sigma_1$ operations acting sparsely to eliminate noise terms of range $k$.
For instance, performing such an action on every second qubit eliminates all nearest neighbor two-qubit noise terms in a
$1$-D setting. However, this also eliminates the desired evolution for half of the particles, and these particles no longer
contribute to the sensing process. As long as the number of systems to be decoupled is given by $\alpha N$ with $\alpha$
being some constant---which is the case for any finite range $k$ noise operators---we still obtain Heisenberg scaling
${\cal O}(\alpha^2 N^2)$.
\subsection{{Parallel noise upper-bound QFI}}
\label{Parallel}
In this section we derive a limitation on the maximally achievable QFI in presence of the parallel noise, i.e., noise that is described by the same generator as the Hamiltonian $H$ that governs the evolution. Such parallel noise results in the
channel
\begin{equation}
\label{parallel noise}
\mathcal{E}(\rho) = \int p(\lambda) e^{-i t \lambda H } \rho \,e^{i t \lambda H } d\lambda,
\end{equation}
where $\lambda$ is a random variable and $p(\lambda)$ is a probability distribution with standard deviation $\sigma$
characterizing the strength of the noise. As already mentioned such type of noise cannot be ameliorated using error
correction as the operator generating it is identical to the Hamiltonian generating the desired evolution. The noise process
in Eq.~\eqref{parallel noise} can be viewed as describing classical noise applied directly on the estimated parameter $\omega$, i.e.,
in every run of the experiment the observed parameter fluctuates by an amount $\lambda$, with $\lambda$ being a random
variable with corresponding probability distribution $p(\lambda)$.
Recall that the QFI of a state $\rho$ is given by~\cite{BC94}
\begin{equation}
\mathcal{F}\Big(\rho(\theta)\Big) = 8 \frac{1-F\Big(\rho(\theta),\rho(\theta+d\theta)\Big)}{d\theta^2},
\label{A2}
\end{equation}
where $\rho(\theta) = e^{-i \theta H } \rho \,e^{i \theta H }$ and $F(\rho,\tau) = \mathrm{tr} \sqrt{\tau^{1/2} \rho\, \tau^{1/2}}$ is the Uhlmann fidelity. So one can access the QFI in the presence of parallel noise (Eq.~\eqref{parallel noise}) through the Uhlmann fidelity
\begin{align}
F\Big(\mathcal{E}(\rho(t \omega)),\mathcal{E}(\rho(t \omega+ t d\omega))\Big)= \nonumber\\ F\Big( \int p(\lambda) \rho(t \omega +t \lambda) d\lambda,
\int p(\lambda-d\omega) \rho(t \omega+ t \lambda) d\lambda\Big).
\label{A3}
\end{align}
As the Uhlmann fidelity is strongly concave it follows that Eq.~\eqref{A3} is lower bounded by the fidelity of the probability
distributions $p(\lambda)$ and $p(\lambda +d\omega)$. Consequently the QFI in the presence of parallel noise is bounded by
\begin{equation}
\mathcal{F}\Big(\mathcal{E}(\rho)\Big)\leq \int \frac{(p'(\lambda))^2}{p(\lambda)}d\lambda =\mathcal{F}_{cl}\Big(p(\lambda)\Big).
\end{equation}
On the other hand, we know that the QFI in the noisy case is lower than the noiseless QFI (attained by the GHZ state), therefore $\mathcal{F}\Big(\mathcal{E}(\ket{\Xi})\Big)\leq t^2 N^2$. Combining the two bounds, one gets for the QFI per unit time
\begin{equation}
\frac{\mathcal{F}\Big(\mathcal{E}(\ket{\Xi})\Big)}{t} \leq \min\left(t N^2, \frac{ \mathcal{F}_{cl}\Big(p(\lambda)\Big)}{t}\right).
\end{equation}
It remains to find the time $t$ that maximizes the r.h.s. Trivially the maximum is attained when $t N^2 = \frac{ \mathcal{F}_{cl}(p(\lambda))}{t}$, which yields
\begin{equation}
\frac{\mathcal{F}\Big(\mathcal{E}(\ket{\Xi})\Big)}{t} \leq N \sqrt{ \mathcal{F}_{cl}\Big(p(\lambda)\Big)}.
\end{equation}
For any smooth distribution $p(\lambda)$ the classical Fisher information $ \mathcal{F}_{cl}(p(\lambda))$ is finite, and therefore SQL scaling for the QFI per unit time is enforced. In particular for a Gaussian noise with $p(\lambda)= \frac{1}{\sqrt{2\pi\sigma^2}} e^{- \lambda^2/2\sigma^2}$ this bound implies
\begin{equation}
\frac{\mathcal{F}\Big(\mathcal{E}(\ket{\Xi})\Big)}{t} \leq \frac{N}{\sigma}.
\end{equation}
For a simple strategy with GHZ states and the optimal choice of the interrogation time, a straightforward calculation gives $\mathcal{F}/t \approx 0.429\, N/\sigma$, which is roughly half of the bound above.
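For completeness, the Gaussian value used above follows from a one-line computation: with $p(\lambda)=\frac{1}{\sqrt{2\pi\sigma^2}} e^{-\lambda^2/2\sigma^2}$ one has $p'(\lambda)=-\frac{\lambda}{\sigma^2}\,p(\lambda)$, hence
\begin{equation}
\mathcal{F}_{cl}\Big(p(\lambda)\Big)=\int \frac{(p'(\lambda))^2}{p(\lambda)}\,d\lambda
=\frac{1}{\sigma^4}\int \lambda^2\, p(\lambda)\, d\lambda=\frac{1}{\sigma^2},
\end{equation}
so that $N\sqrt{\mathcal{F}_{cl}(p(\lambda))}=N/\sigma$, as used in the bound above.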
\end{document}
\begin{document}
\title[almost complete higher cluster tilting objects]{almost complete
cluster tilting objects \\ in generalized higher cluster categories}
\author{Lingyan GUO}
\address{Universit\'e Paris Diderot - Paris~7, UFR de Math\'ematiques,
Institut de Math\'ematiques de Jussieu, UMR 7586 du CNRS, Case 7012,
B\^atiment Chevaleret, 75205 Paris Cedex 13, France}
\email{[email protected]}
\date{\today}
\begin{abstract}
We study higher cluster tilting objects in generalized higher
cluster categories arising from dg algebras of higher Calabi-Yau
dimension. Taking advantage of silting mutations of Aihara-Iyama, we
obtain a class of $m$-cluster tilting objects in generalized
$m$-cluster categories. For generalized $m$-cluster categories
arising from strongly ($m+2$)-Calabi-Yau dg algebras, by using
truncations of minimal cofibrant resolutions of simple modules, we
prove that each almost complete $m$-cluster tilting $P$-object has
exactly $m+1$ complements with periodicity property. This leads us
to the conjecture that each liftable almost complete $m$-cluster
tilting object has exactly $m+1$ complements in generalized
$m$-cluster categories arising from $m$-rigid good completed
deformed preprojective dg algebras.
\end{abstract}
\maketitle
\section{Introduction}
Cluster categories associated to acyclic quivers were introduced in
\cite{BMRRT06}, where the authors gave an additive categorification
of the finite type cluster algebras introduced by Fomin and
Zelevinsky \cite{FZ1} \cite{FZ2}. The cluster category of an acyclic
quiver $Q$ is defined as the orbit category of the derived category
of finite dimensional representations of $Q$ under the action of
${\tau}^{-1}\Sigma$, where $\tau$ is the AR-translation and $\Sigma$
the suspension functor. If we replace the autoequivalence
${\tau}^{-1}\Sigma$ with ${\tau}^{-1}{\Sigma}^m$ for some integer $m
\geq 2$, we obtain the $m$-cluster category, which was first
mentioned and proved to be triangulated in \cite{Ke05}, cf. also
\cite{Th07}. In the cluster category, the exchange relations of the
corresponding cluster algebra are modeled by exchange triangles. It
was shown in \cite{IY08} that every almost complete cluster tilting
object admits exactly two complements. In the higher cluster
category, exchange triangles are replaced by AR-angles, whose
existence (in the more general set up of Krull-Schmidt {\rm
Hom}-finite triangulated categories with Serre functors) was shown
in \cite{IY08}. Both \cite{W} and \cite{ZZ} proved that each almost
complete $m$-cluster tilting object has exactly $m+1$ complements in
an $m$-cluster category. In this paper, we study the analogous
statements for almost complete $m$-cluster tilting objects in
certain $(m+1)$-Calabi-Yau triangulated categories.
Amiot \cite{Am08} constructed generalized cluster categories using
$3$-Calabi-Yau dg algebras which satisfy some suitable assumptions.
A special class is formed by the generalized cluster categories
associated to Ginzburg algebras \cite{Gi06} coming from suitable
quivers with potentials. If the quiver is acyclic, the generalized
cluster category is triangle equivalent to the classical cluster
category. Amiot's results were extended by the author to generalized
$m$-cluster categories in \cite{GUO} by changing the Calabi-Yau
dimension from $3$ to $m+2$ for an arbitrary positive integer $m$.
As one of the applications, she particularly considered generalized
higher cluster categories associated to Ginzburg dg categories
\cite{Ke09} coming from suitable graded quivers with
superpotentials.
In the representation theory of algebras, mutation plays an
important role. Here we recall several kinds of mutation. Cluster
algebras associated to finite quivers without loops or $2$-cycles
are defined using mutation of quivers. As an extension of quiver
mutation, the mutation of quivers with potentials was introduced in
\cite{DWZ}. Moreover, the mutation of decorated representations of
quivers with potentials, which can be viewed as a generalization of
the BGP construction, was also studied in \cite{DWZ}. Tilting
modules over finite dimensional algebras are very nice objects,
although some of them can not be mutated. In the cluster category
associated to an acyclic quiver, mutation of cluster tilting objects
is always possible \cite{BMRRT06}. It is determined by exchange
triangles and corresponds to mutation of clusters in the
corresponding cluster algebra via a certain cluster character
\cite{CK06}. In the derived categories of finite dimensional
hereditary algebras, a mutation operation was given in \cite{BRT} on
silting objects, which were first studied in \cite{KV}. Silting
mutation of silting objects in triangulated categories, which is
always possible, was investigated recently by Aihara and Iyama in
\cite{AI}.
The aim of this paper is to study higher cluster tilting objects in
generalized higher cluster categories arising from dg algebras of
higher Calabi-Yau dimension. Under certain assumptions on the dg
algebras (Assumptions \ref{23}), tilting objects do not exist in the
derived categories (Remark \ref{22}). Thus, we consider silting
objects, e.g., the dg algebras themselves. The author was motivated
by the construction of tilting complexes in Section 4 of
\cite{IR06}.
This article is organized as follows: In Section 2, we list our
assumptions on dg algebras and
use the standard $t$-structure to situate the silting objects which
are iteratively obtained from $P$-indecomposables with respect to
the fundamental domain. In Section 3, using silting objects we
construct higher cluster tilting objects in generalized higher
cluster categories. We show that in such a category each liftable
almost complete $m$-cluster tilting object has at least $m+1$
complements. In Section 4, we specialize to strongly higher
Calabi-Yau dg algebras. By studying minimal cofibrant resolutions of
simple modules of good completed deformed preprojective dg algebras,
we obtain isomorphisms in generalized higher cluster categories
between images of some left mutations and images of some right
mutations of the same $P$-indecomposable. Using this, we derive the
periodicity property of the images of iterated silting mutations of
$P$-indecomposables in Section 5, where we also construct
($m+1$)-Calabi-Yau triangulated categories containing infinitely
many indecomposable $m$-cluster tilting objects. We obtain an
explicit description of the terms of Iyama-Yoshino's AR angles in
this situation, and we deduce that each almost complete $m$-cluster
tilting $P$-object in the generalized $m$-cluster category
associated to a suitable completed deformed preprojective dg algebra
has exactly $m+1$ complements in Section 6. We show that the
truncated dg subalgebra at degree zero of the dg endomorphism
algebra of a silting object in the derived category of a good
completed deformed preprojective dg algebra is also strongly
Calabi-Yau in Section 7. Then we conjecture a class (namely
$m$-rigid) of good completed deformed preprojective dg algebras such
that each liftable almost complete $m$-cluster tilting object should
have exactly $m+1$ complements in the associated generalized
$m$-cluster category.
In Section 8, we give a long exact sequence to show the relations
between extension spaces in generalized higher cluster categories
and extension spaces in derived categories. This sequence
generalizes the short exact sequence obtained by Amiot \cite{Am08}
in the $2$-Calabi-Yau case. At the end, we show that any almost
complete $m$-cluster tilting object in ${\mathcal {C}}_{\Pi}$ is
liftable if $\Pi$ is the completed deformed preprojective dg algebra
arising from an acyclic quiver.
\subsection*{Notation} For a collection
$\mathcal {X}$ of objects in an additive category $\mathcal {T}$, we
denote by add${\mathcal {X}}$ the smallest full subcategory of
$\mathcal {T}$ which contains $\mathcal {X}$ and is closed under
finite direct sums, summands and isomorphisms. Let $k$ be an
algebraically closed field of characteristic zero.
\subsection*{Acknowledgments} The author is supported by the China Scholarship Council (CSC).
This is part of her Ph.~D.~thesis under the supervision of Professor
Bernhard Keller. She is grateful to him for his guidance, patience
and kindness. She also sincerely thanks Pierre-Guy Plamondon, Fan
Qin and Dong Yang for helpful discussions and Zhonghua Zhao for
constant encouragement.
\section{Silting objects in derived categories}
Let $A$ be a differential graded (for simplicity, write `dg')
$k$-algebra. We write per$A$ for the {\em perfect derived category}
of $A$, i.e.~the smallest triangulated subcategory of the derived
category ${\mathcal {D}}(A)$ containing $A$ and stable under passage
to direct summands. We denote by ${\mathcal {D}}_{fd} (A)$ the {\em
finite dimensional derived category} of $A$ whose objects are those
of ${\mathcal {D}}(A)$ with finite dimensional total homology.
A dg $k$-algebra $A$ is {\em pseudo-compact} if it is endowed with a
complete separated topology which is generated by two-sided dg
ideals of finite codimension. A (pseudo-compact) dg algebra $A$ is
{\em (topologically) homologically smooth} if $A$ lies in per$A^e$,
where $A^e$ is the (completed) tensor product of $A^{op}$ and $A$
over $k$. For example, suppose that $A$ is of the form
$(\widehat{kQ}, d)$, where $\widehat{kQ}$ is the completed path
algebra of a finite graded quiver $Q$ with respect to the two-sided
ideal $\mathfrak{m}$ of $\widehat{kQ}$ generated by the arrows of
$Q$, and the differential $d$ takes each arrow of $Q$ to an element
of $\mathfrak{m}$; it was stated in \cite{KY09} that $A$ is
pseudo-compact and topologically homologically smooth.
\begin{assum} \label{23}
Let $m$ be a positive integer. Suppose that $A$ is a
(pseudo-compact) dg $k$-algebra and has the following four
additional properties:
\begin{itemize}
\item[a)] $A$ is (topologically) homologically smooth;
\item[b)] the $p$th homology $H^p A$ vanishes for each positive
integer $p$;
\item[c)] the zeroth homology $H^0 A$ is finite dimensional;
\item[d)] $A$ is $(m+2)$-Calabi-Yau as a bimodule, {\it i.e.},~there is an isomorphism in ${\mathcal {D}}(A^e)$
$${\rm RHom}_{A^e} (A, A^e) \simeq {\Sigma}^{-m-2}A.$$
\end{itemize}
\end{assum}
\begin{thm}[\cite{Ke09}] \label{17}
(Completed) Ginzburg dg categories ${\Gamma}_{m+2}(Q,W)$ associated
to graded quivers with superpotentials $(Q,W)$ are (topologically)
homologically smooth and $(m+2)$-Calabi-Yau.
\end{thm}
\begin{lem} [\cite{Ke08}] \label{24}
Suppose that $A$ is (topologically) homologically smooth. Then the
category ${\mathcal {D}}_{fd}(A)$ is contained in {\rm per}$A$. If
moreover $A$ is ($m+2$)-Calabi-Yau for some positive integer $m$,
then for all objects $L$ of $\mathcal{D}$$(A)$ and $M$ of
${\mathcal{D}}_{fd} (A)$, we have a canonical isomorphism
\begin{center}
$D {\rm Hom}_{{\mathcal{D}}(A)} (M, L) \simeq {\rm
Hom}_{{\mathcal{D}}(A)} (L, {\Sigma}^{m+2}M).$
\end{center}
\end{lem}
Throughout this paper, we always consider the dg algebras satisfying
Assumptions \ref{23}.
\begin{prop}[\cite{GUO}]\label{4}
Under Assumptions \ref{23}, the triangulated category {\rm per}$A$
is {\rm Hom}-finite.
\end{prop}
Let $({\mathcal {D}}A)^c$ denote the full subcategory of ${\mathcal
{D}}(A)$ consisting of compact objects. Since each idempotent in
${\mathcal {D}}(A)$ is split and $({\mathcal {D}}A)^c$ is closed
under direct summands, each idempotent in $({\mathcal {D}}A)^c$ is
also split. Therefore, the category per$A$ which is equal to
$({\mathcal {D}}A)^c$ by \cite{Ke06} is a $k$-linear Hom-finite
category with split idempotents. It follows that per$A$ is a
Krull-Schmidt triangulated category.
\begin{defns} Let $A$ be a dg algebra satisfying Assumptions \ref{23}.
\begin{itemize}
\item[a)] An object $X \in {\rm per}A$ is {\em silting} (resp. {\em tilting})
if per$A=$ thick$X$ the smallest thick subcategory of per$A$
containing $X$, and the spaces ${\rm Hom}_{{\mathcal {D}}(A)} (X,
{\Sigma}^i X)$ are zero for all integers $i > 0$ (resp. $i \neq 0$).
\item[b)] An object $Y \in {\rm per}A$ is {\em almost complete silting} if there
is some indecomposable object $Y'$ in (per$A$)$\setminus$(add$Y$)
such that $Y \oplus Y'$ is a silting object. Here $Y'$ is called a
{\em complement} of $Y$.
\end{itemize}
\end{defns}
Clearly the dg algebra $A$ itself is a silting object since the
space ${\rm Hom}_{{\mathcal {D}}(A)} (A, {\Sigma}^i A)$ is
isomorphic to $H^i A$ which is zero for each positive integer.
\begin{rem} \label{22}
Under Assumptions \ref{23}, tilting objects do not exist in per$A$.
Otherwise, let $T$ be a tilting object in per$A$. By definition, the
object $T$ generates per$A$. Then for any object $M$ in
${\mathcal{D}}(A)$, it belongs to the subcategory ${\mathcal
{D}}_{fd}(A)$ if and only if ${\sum}_{p \in {\mathbb{Z}}} {\rm dim}
{\rm Hom}_{{\mathcal {D}}(A)} (T, {\Sigma}^p M)$ is finite. Since
the space ${\rm Hom}_{{\mathcal {D}}(A)}(T,T)$ is finite dimensional
by Proposition \ref{4} and the space ${\rm Hom}_{{\mathcal
{D}}(A)}(T,{\Sigma}^pT)$ vanishes for any nonzero integer $p$, the
object $T$ belongs to ${\mathcal {D}}_{fd}(A)$. Note that ${\mathcal
{D}}_{fd}(A)$ is $(m+2)$-Calabi-Yau as a triangulated category by
Lemma \ref{24}. Thus, we have the following isomorphism
$$(0=) {\rm Hom}_{{\mathcal {D}} (A)} (T, {\Sigma}^{m+2} T)
\simeq D {\rm Hom}_{{\mathcal {D}} (A)} (T,T) ( \neq 0).$$ Here we
obtain a contradiction. Therefore, tilting objects do not exist.
\end{rem}
Assume that $H^0A$ is a basic algebra. Let $e$ be a primitive
idempotent element of $H^0 A$. We denote by $P$ the indecomposable
direct summand $e A$ (in the derived category ${\mathcal {D}}(A)$)
of $A$ and call it a {\em $P$-indecomposable}. We denote by $M$ the
dg module $(1-e) A$. It follows from Proposition \ref{4} that the
subcategory add$M$ is functorially finite \cite{AS} in add$A$. Let
us write $RA_0$ for $P$ (later we will also write $LA_0$ for $P$).
By induction on $t \geq 1$, we define $RA_t$ as follows: take a
minimal right (add$M$)-approximation $f^{(t)}: A^{(t)} \rightarrow RA_{t-1}$
of $RA_{t-1}$ in ${\mathcal {D}}(A)$ and form the triangle in
${\mathcal {D}}(A)$
\[
\xymatrix { RA_t \ar[r]^{\alpha^{(t)}} & A^{(t)} \ar[r]^-{f^{(t)}} &
RA_{t-1} \ar[r] & {\Sigma} RA_t.}
\]
Dually, for each positive integer $t$, we take a minimal left
(add$M$)-approximation $g^{(t)}: LA_{t-1} \rightarrow B^{(t)}$ of $LA_{t-1}$
in ${\mathcal {D}}(A)$, and form the triangle in ${\mathcal {D}}(A)$
\[
\xymatrix { LA_{t-1} \ar[r]^{g^{(t)}} & B^{(t)}
\ar[r]^-{\beta^{(t)}} & LA_{t} \ar[r] & {\Sigma} LA_{t-1}.}
\]
The object $RA_t$ is called the {\em right mutation} of $RA_{t-1}$
(with respect to $M$), and $LA_t$ is called the {\em left mutation}
of $LA_{t-1}$ (with respect to $M$).
\begin{thm}[\cite{AI}]\label{5}
For each nonnegative integer $t$, the objects $M \oplus RA_t$ and $M
\oplus LA_t$ are silting objects in {\rm per}$A$. Moreover, any
basic silting object containing $M$ as a direct summand is either of
the form $M \oplus RA_t$ or of the form $M \oplus LA_t$.
\end{thm}
From the construction and the above theorem, we know that the
morphisms $\alpha^{(t)}$ (resp. $\beta^{(t)}$) are minimal left
(resp. minimal right) ${\rm (add}M \rm{)}$-approximations in
${\mathcal {D}}(A)$ and that the objects $RA_t$ and $LA_{t}$ are
indecomposable objects in ${\mathcal {D}}(A)$ which do not belong to
${\rm add}M$.
We simply denote ${\mathcal{D}} (A)$ by $\mathcal{D}$. Let
${\mathcal{D}}^{{\leq} {0}}$ (resp. ${\mathcal{D}}^{{\geq} {1}}$) be
the full subcategory of $\mathcal{D}$ whose objects are the dg
modules $X$ such that $H^p X$ vanishes for each positive (resp.
nonpositive) integer $p$. For a complex $X$ of $k$-modules, we
denote by ${\tau}_{{\leq} 0} X$ the subcomplex with $({\tau}_{{\leq}
0} X)^0 = {ker} d^0$, and $({\tau}_{{\leq} 0} X)^i = X^i$ for
negative integers $i$, otherwise zero. Set ${\tau}_{{\geq} 1} X =
{X/{{\tau}_{{\leq} 0} X}}.$
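Note that, by construction,
$$H^p({\tau}_{{\leq} 0} X) \simeq H^p X \;\, (p \leq 0), \qquad H^p({\tau}_{{\leq} 0} X) = 0 \;\, (p > 0),$$
and similarly $H^p({\tau}_{{\geq} 1} X)$ vanishes for $p \leq 0$ and is isomorphic to $H^p X$ for $p > 0$.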
\begin{prop}\label{15}
For each integer $t \geq 0$, the object $RA_t$ belongs to the
subcategory ${\mathcal{D}}^{{\leq} {t}} \cap \,
^{\perp}{\mathcal{D}}^{{\leq} {-1}} \cap \, {\rm per}A $, and the
object $LA_t$ belongs to the subcategory ${\mathcal{D}}^{{\leq} {0}}
\cap \, ^{\perp}{\mathcal{D}}^{{\leq} {-t-1}} \cap \, {\rm per}A $.
\end{prop}
\begin{proof}
We consider the triangles appearing in the constructions of $RA_t$,
and similarly for $LA_t$.
The object $RA_0 (= P)$ belongs to ${\mathcal{D}}^{{\leq} {0}} \cap
\, ^{\perp}{\mathcal{D}}^{{\leq} {-1}} \cap \, {\rm per}A$ since the
dg algebra $A$ has its homology concentrated in nonpositive degrees.
The object $RA_t$ is an extension of $A^{(t)}$ by ${\Sigma}^{-1}
RA_{t-1}$, which both belong to the subcategory
${\mathcal{D}}^{{\leq} {t}} \cap \, {\rm per}A$. Thus, the object
$RA_t$ belongs to ${\mathcal{D}}^{{\leq} {t}} \cap \, {\rm per}A$.
We do induction on $t$ to show that $RA_t$ belongs to
$^{\perp}{\mathcal{D}}^{{\leq} {-1}}$. Let $Y$ be an object in
${\mathcal {D}}^{\leq {-1}}$. By applying the functor ${\rm
Hom}_{\mathcal {D}} (-,Y)$ to the triangle \[ \xymatrix { RA_t
\ar[r] & A^{(t)} \ar[r]^-{f^{(t)}} & RA_{t-1} \ar[r] & {\Sigma}
RA_t,}
\] we obtain the long exact sequence $$\ldots \rightarrow
{\rm Hom}_{\mathcal {D}} (A^{(t)},Y) \rightarrow {\rm Hom}_{\mathcal {D}} (RA_t,Y)
\rightarrow {\rm Hom}_{\mathcal {D}} ({\Sigma}^{-1} RA_{t-1},Y) \rightarrow \ldots
.$$ Since ${\Sigma} Y$ belongs to ${\mathcal{D}}^{{\leq} {-2}}$, by
hypothesis, the space ${\rm Hom}_{\mathcal {D}} ({\Sigma}^{-1}
RA_{t-1},Y)$ is zero. Thus, the object $RA_t$ belongs to
$^{\perp}{\mathcal{D}}^{{\leq} {-1}}$.
\end{proof}
Assume that $\{e_1,\cdots,e_n\}$ is a collection of primitive
idempotent elements of $H^0A$. We denote by $S_i$ the simple module
corresponding to $e_iA$. For any object $X$ in per$A$, we define the
support of $X$ as follows:
\begin{defn} The {\em support} of $X$ is defined as the set
$$supp\,(X) := \{j \in {\mathbb{Z}} | {\rm Hom}_{{\mathcal {D}}}(X,{\Sigma}^j S_i)
\neq 0 \, \mbox{for some simple module}\, S_i \}.$$
\end{defn}
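For instance, for a $P$-indecomposable $e_jA$ one has ${\rm Hom}_{{\mathcal {D}}}(e_jA,{\Sigma}^r S_i) \simeq H^r(S_i e_j)$, which is nonzero only for $r=0$ and $i=j$, so that
$$supp\,(e_jA) = \{0\},$$
consistent with the case $t=0$ of the proposition below.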
\begin{prop} \label{27}
For any nonnegative integer $t$, we have the following inclusions:
\begin{itemize}
\item[1)] $\{-t\} \subseteq supp\,(RA_t) \subseteq [-t,0],$
\item[2)] $\{t\} \subseteq supp\,(LA_t) \subseteq [0,t]$.
\end{itemize}
\end{prop}
\begin{proof} We only show the first statement, since the second one can be deduced in a similar way.
By Proposition \ref{15}, the object $RA_t$ belongs to
${\mathcal{D}}^{{\leq} {t}} \cap \, ^{\perp}{\mathcal{D}}^{{\leq}
{-1}} \cap \, {\rm per}A $. Therefore, the space ${\rm
Hom}_{\mathcal {D}}(RA_t, {\Sigma}^r S_i)$ vanishes for each integer
$r \geq 1$ since ${\Sigma}^r S_i$ lies in ${\mathcal {D}}^{\leq -1}$
and the space ${\rm Hom}_{\mathcal {D}}(RA_t, {\Sigma}^{r'} S_i)$
vanishes for each integer $r' \leq -t-1$ since ${\Sigma}^{r'} S_i$
lies in ${\mathcal {D}}^{\geq t+1}$. Thus, we have the inclusion
$supp\,(RA_t) \subseteq [-t,0]$.
Let $S_P$ be the simple module corresponding to the
$P$-indecomposable $P$ from which $RA_t$ and $LA_t$ are obtained by
mutation. We will show that ${\rm Hom}_{\mathcal {D}}(RA_t,
{\Sigma}^{-t} S_P)$ is nonzero. Clearly, the space ${\rm
Hom}_{\mathcal {D}}(P, S_P)$ is nonzero. We do induction on the
integer $t$. Assume that the space ${\rm Hom}_{\mathcal
{D}}(RA_{t-1}, {\Sigma}^{1-t} S_P)$ is nonzero. Applying the functor
${\rm Hom}_{\mathcal {D}}(-,{\Sigma}^{1-t}S_P)$ to the triangle
$$RA_t \rightarrow A^{(t)} \rightarrow RA_{t-1} \rightarrow {\Sigma}RA_t,$$where $A^{(t)}$ belongs to (add$A$)$\setminus$(add$P$), we get the long
exact sequence
$$\cdots \rightarrow ({\Sigma}A^{(t)},{\Sigma}^{1-t}S_P) \rightarrow
({\Sigma}RA_t,{\Sigma}^{1-t}S_P) \rightarrow (RA_{t-1},{\Sigma}^{1-t}S_P)
\rightarrow (A^{(t)}, {\Sigma}^{1-t}S_P)
\rightarrow \cdots ,$$ where both the
leftmost term and the rightmost term are zero. Therefore, ${\rm
Hom}_{\mathcal {D}}(RA_t,{\Sigma}^{-t}S_P)$ is nonzero. This
completes the proof.
\end{proof}
Now we deduce the following corollary, which can also be deduced
from Theorem 2.43 in \cite{AI}.
\begin{cor}\label{19}
\begin{itemize}
\item[1)] For any two nonnegative integers $r \neq t$, the object
$RA_r$ is not isomorphic to $RA_t$, and the object $LA_r$ is not
isomorphic to $LA_t$.
\item[2)] For any two positive integers $r$ and $t$, the objects
$RA_r$ and $LA_t$ are not isomorphic.
\end{itemize}
\end{cor}
\begin{proof}
Assume that $r > t \geq 0$. Following Proposition \ref{27}, we have
that $${\rm Hom}_{\mathcal {D}}(RA_r, {\Sigma}^{-r}S_P) \neq 0,
\quad \mbox{while} \quad {\rm Hom}_{\mathcal {D}}(RA_t,
{\Sigma}^{-r}S_P) = 0.$$Thus, the objects $RA_r$ and $RA_t$ are not
isomorphic. Similarly for $LA_r$ and $LA_t$. Also in a similar way,
we can obtain the second statement.
\end{proof}
Combining Theorem \ref{5} with Proposition \ref{27}, we can deduce
the following corollary, which is analogous to Corollary 4.2 of
\cite{IR06}:
\begin{cor}\label{9} For any positive integer $l$, up to isomorphism, the object $M$
admits exactly $2l-1$ complements whose supports are contained in
$[1-l,l-1]$. These give rise to basic silting objects. They are the
indecomposable objects $RA_t$ and $LA_t$ for $0 \leq t < l$.
\end{cor}
\section{From silting objects to $m$-cluster tilting objects}
Let $\mathcal{F}$ be the full subcategory ${\mathcal{D}}^{{\leq}
{0}} \cap \, ^{\perp}{\mathcal{D}}^{{\leq} {-m-1}} \cap \, {\rm
per}A $ of $\mathcal {D}$. It is called the fundamental domain in
\cite{GUO}. Following Lemma \ref{24}, the category ${\mathcal
{D}}_{fd}(A)$ is a triangulated thick subcategory of per$A$. The
triangulated quotient category ${\mathcal{C}}_A =\, $
{per$A$}/{${\mathcal{D}}_{fd} (A)$} is called the generalized
$m$-cluster category \cite{GUO}. We denote by $\pi$ the canonical
projection functor from per$A$ to ${\mathcal{C}}_A$.
\begin{prop}[\cite{GUO}]\label{3}
Under Assumptions \ref{23}, the projection functor $\pi: {\rm per}A
\longrightarrow {\mathcal {C}}_A$ induces a $k$-linear equivalence
between $\mathcal {F}$ and ${\mathcal {C}}_A$.
\end{prop}
\begin{thm}[\cite{GUO} Theorem 2.2, \cite{KY09} Theorem 7.21]\label{2}
If $A$ satisfies Assumptions \ref{23}, then
\begin{itemize}
\item[1)] the generalized $m$-cluster category ${\mathcal{C}}_A$ is {\rm Hom}-finite and
$(m+1)$-Calabi-Yau;
\item[2)] the object $T = {\pi} (A)$ is an {\em $m$-cluster
tilting object} in ${\mathcal {C}}_A$, i.e., $${\rm add}T = \{ L \in
{\mathcal {C}}_A | \, {\rm Hom}_{{\mathcal{C}}_A} (T, {\Sigma}^r L)
= 0, \, r = 1,\ldots,m \}.$$
\end{itemize}
\end{thm}
\begin{thm} \label{25}
The image of any silting object under the projection functor $\pi:
{\rm per}A \rightarrow {\mathcal {C}}_A$ is an $m$-cluster tilting object in
${\mathcal {C}}_A$.
\end{thm}
\begin{proof}
Assume that $Z$ is an arbitrary silting object in per$A$. Without
loss of generality, we can assume that $Z$ is a cofibrant dg
$A$-module \cite{Ke94}. We denote by $\Gamma$ the dg endomorphism
algebra ${\rm Hom}_A^{\bullet} (Z,Z)$. Since the spaces ${\rm
Hom}_{\mathcal {D}} (Z, {\Sigma}^i Z)$ are zero for all positive
integers $i$, the dg algebra ${\Gamma}$ has its homology
concentrated in nonpositive degrees. The zeroth homology of
${\Gamma}$ is isomorphic to the space ${\rm Hom}_{\mathcal {D}}
(Z,Z)$ which is finite dimensional by Proposition \ref{4}.
Since $Z$ is a compact generator of ${\mathcal {D}}$, the left
derived functor $F = - \overset{L}{\otimes}_{{\Gamma}}Z
$ is a Morita equivalence \cite{Ke94} from ${\mathcal {D}}(\Gamma)$
to $\mathcal {D}$ which sends ${\Gamma}$ to $Z$. Therefore, the dg
algebra ${\Gamma}$ is also (topologically) homologically smooth and
$(m+2)$-Calabi-Yau. Thus, the generalized $m$-cluster category
${\mathcal {C}}_{\Gamma}$ is well defined. The equivalence $F$ also
induces a triangle equivalence from the generalized $m$-cluster
category ${\mathcal {C}}_{{\Gamma}}$ to ${\mathcal {C}}_A$ which
sends $\pi({\Gamma})$ to $\pi(Z)$. By Theorem \ref{2}, the image
$\pi({\Gamma})$ is an $m$-cluster tilting object in ${\mathcal
{C}}_{{\Gamma}}$. Hence, the image of $Z$ is an $m$-cluster tilting
object in ${\mathcal {C}}_A$.
\end{proof}
In particular, for each nonnegative integer $t$, the images of
$LA_t \oplus M$ and $RA_t \oplus M$ in the generalized $m$-cluster
category ${\mathcal {C}}_A$ are $m$-cluster tilting objects.
\begin{defns}Let $A$ be a dg algebra satisfying Assumptions \ref{23} and ${\mathcal
{C}}_A$ its generalized $m$-cluster category.
\begin{itemize}
\item[a)] An object $X$ in ${\mathcal {C}}_A$ is called an {\em almost complete $m$-cluster tilting
object} if there exists some indecomposable object $X'$ in
${\mathcal {C}}_A \setminus$ ({\rm add}X) such that $X \oplus X'$ is
an $m$-cluster tilting object. Here $X'$ is called a {\em
complement} of $X$. In particular, we call $\pi(M)$ an {\em almost
complete $m$-cluster tilting $P$-object}.
\item[b)] An almost complete $m$-cluster tilting object $Y$ is said to
be {\em liftable} if there exists a basic silting object $Z$ in
per$A$ such that $\pi(Z/Z')$ is isomorphic to $Y$ for some
indecomposable direct summand $Z'$ of $Z$.
\end{itemize}
\end{defns}
\begin{prop}\label{39}
Let $A$ be a $3$-Calabi-Yau dg algebra satisfying Assumptions 2.1.
Then any $(1-)$cluster tilting object in ${\mathcal {C}}_A$ is
induced by a silting object in $\mathcal {F}$ under the canonical
projection $\pi$.
\end{prop}
\begin{proof}
Let $T$ be a cluster tilting object in ${\mathcal {C}}_A$. By
Proposition \ref{3}, we know that there exists an object $Z$ in the
fundamental domain $\mathcal {F}$ such that $\pi(Z) = T$.
First we will claim that $Z$ is a partial silting object, that is,
the spaces ${\rm Hom}_{\mathcal {D}}(Z,{\Sigma}^iZ)$ are zero for
all positive integers $i$. Since $Z$ belongs to $\mathcal {F}$,
clearly these spaces vanish for all integers $i \geq 2$. Consider
the case $i = 1$. The following short exact sequence $$0 \rightarrow {\rm
Ext}^1_{\mathcal {D}}(X,Y) \rightarrow {\rm Ext}^1_{{\mathcal {C}}_A}(X,Y)
\rightarrow D{\rm Ext}^1_{\mathcal {D}}(Y,X) \rightarrow 0$$ was shown to exist in
\cite{Am08} for any objects $X,\,Y$ in $\mathcal {F}$. We specialize
both $X$ and $Y$ to the object $Z$. The middle term in the short
exact sequence is zero since $T$ is a cluster tilting object. Thus,
the object $Z$ is partial silting.
Second we will show that $Z$ generates per$A$. Consider the
following triangle $$A \stackrel{f}\rightarrow Z_0 \rightarrow Y
\rightarrow {\Sigma}A$$ in $\mathcal {D}$, where $f$ is a minimal
left (add$Z$)-approximation in $\mathcal {D}$. It is easy to see
that $Y$ also belongs to $\mathcal {F}$. Therefore, the above
triangle can be viewed as a triangle in ${\mathcal {C}}_A$ with $f$
a minimal left (add$Z$)-approximation in ${\mathcal {C}}_A$.
Applying the functor ${\rm Hom}_{{\mathcal {C}}_A}(-,Z)$ to the
triangle, we get the exact sequence
$${\rm Hom}_{{\mathcal {C}}_A}(Z_0,Z) \rightarrow {\rm
Hom}_{{\mathcal {C}}_A}(A,Z) \rightarrow {\rm Hom}_{{\mathcal
{C}}_A}({\Sigma}^{-1}Y,Z) \rightarrow {\rm Hom}_{{\mathcal
{C}}_A}({\Sigma}^{-1}Z_0,Z).$$ Since $f$ is a left (add$Z$)-approximation, the map ${\rm Hom}_{{\mathcal {C}}_A}(Z_0,Z) \rightarrow {\rm Hom}_{{\mathcal {C}}_A}(A,Z)$ is surjective, and the space ${\rm Hom}_{{\mathcal {C}}_A}({\Sigma}^{-1}Z_0,Z) \simeq {\rm Hom}_{{\mathcal {C}}_A}(Z_0,{\Sigma}Z)$ vanishes because $T = \pi(Z)$ is cluster tilting. Therefore the space ${\rm
Hom}_{{\mathcal {C}}_A}(Y,{\Sigma}Z) \simeq {\rm Hom}_{{\mathcal {C}}_A}({\Sigma}^{-1}Y,Z)$ is zero. As a
consequence, $Y$ belongs to add$Z$ in ${\mathcal {C}}_A$. Since both
$Y$ and $Z$ are in $\mathcal {F}$, the object $Y$ also belongs to
add$Z$ in $\mathcal {D}$. Therefore, the dg algebra $A$ belongs to
the subcategory thick$Z$ of per$A$. It follows that $Z$ generates
per$A$.
\end{proof}
\begin{thm}\label{20}
The almost complete $m$-cluster tilting $P$-object $\pi(M)$ has at
least $m+1$ complements in ${\mathcal {C}}_A$.
\end{thm}
\begin{proof}
Following Proposition \ref{15} and Corollary \ref{19}, the pairwise
non isomorphic indecomposable objects $LA_t \, (0 \leq t \leq m)$
belong to the fundamental domain $\mathcal {F}$. Therefore, by
Proposition \ref{3}, the $m+1$ objects $\pi(P),\,\pi(LA_1),\dots ,
\pi(LA_m)$ are indecomposable and pairwise non isomorphic in
${\mathcal {C}}_A$. It follows that $\pi(M)$ has at least $m+1$
complements in ${\mathcal {C}}_A$.
\end{proof}
Let us generalize the above theorem:
\begin{thm} \label{26}
Each liftable almost complete $m$-cluster tilting object has at
least $m+1$ complements in ${\mathcal {C}}_A$.
\end{thm}
\begin{proof}
Let $Y$ be a liftable almost complete $m$-cluster tilting object. By
definition there exists a basic silting object $Z$ (assume that $Z$
is cofibrant) in per$A$ such that $\pi(Z/Z')$ is isomorphic to $Y$
for some indecomposable direct summand $Z'$ of $Z$. Let $\Gamma$ be
the dg endomorphism algebra ${\rm Hom}_A^{\bullet} (Z,Z)$. Then
$H^0{\Gamma}$ is a basic algebra.
Similarly as in the proof of Theorem \ref{25}, the dg algebra
$\Gamma$ satisfies Assumptions \ref{23}, and the left derived
functor $F := - \overset{L}{\otimes}_{{\Gamma}}{Z}$ induces a
triangle equivalence from ${\mathcal {C}}_{{\Gamma}}$ to ${\mathcal
{C}}_A$ which sends $\pi({\Gamma})$ to $\pi(Z)$. Let $\Gamma'$ be
the object ${\rm Hom}_A^{\bullet} (Z,Z/Z')$ in per$\Gamma$. Then
$\pi(\Gamma')$ is the almost complete $m$-cluster tilting $P$-object
in ${\mathcal {C}}_{\Gamma}$ which corresponds to $Y$ under the
functor $F$. It follows from Theorem \ref{20} that $\pi(\Gamma')$
has at least $m+1$ complements in ${\mathcal {C}}_{\Gamma}$. So does
the liftable almost complete $m$-cluster tilting object $Y$ in
${\mathcal {C}}_A$.
\end{proof}
\begin{rem}\label{28}
Let $\mathcal {T}$ be a Krull-Schmidt {\rm Hom}-finite triangulated
category with a Serre functor. In fact, following \cite{IY08}, one
can get that any almost complete $m$-cluster tilting object $Y$ in
$\mathcal {T}$ has at least $m+1$ complements. Note that the
notation in \cite{GUO} and \cite{IY08} has some differences with
each other, for example, $m$-cluster tilting objects in \cite{GUO}
correspond to $(m+1)$-cluster tilting subcategories (or objects) in
\cite{IY08}. Here we use the same notation as \cite{GUO}. Set
${\mathcal {Y}} = {\rm add}Y$, ${\mathcal {Z}} =
{\cap}^m_{i=1}\,^{\perp}({\Sigma}^i{\mathcal {Y}})$ and ${\mathcal
{U}} = {\mathcal {Z}}/{\mathcal {Y}}$. Let $X$ be an $m$-cluster
tilting object in $\mathcal {T}$ which contains $Y$ as a direct
summand. Set ${\mathcal {X}} = {\rm add}X$. Then by Theorem 4.9 in
\cite{IY08}, the subcategory ${\mathcal {L}} := {\mathcal
{X}}/{\mathcal {Y}}$ is $m$-cluster tilting in the triangulated
category $\mathcal {U}$. The subcategories ${\mathcal
{L}},\,{\mathcal {L}}\langle 1 \rangle, \, \ldots, \, {\mathcal
{L}}\langle m \rangle$ are distinct $m$-cluster tilting
subcategories of $\mathcal {U}$, where $\langle 1 \rangle$ is the
shift functor in the triangulated category $\mathcal {U}$. Also by
the same theorem, the one-one correspondence implies that the number
of $m$-cluster tilting objects of $\mathcal {T}$ containing $Y$ as a
direct summand is at least $m+1$.
\end{rem}
\section{Minimal cofibrant resolutions of simple modules\\ for strongly $(m+2)$-Calabi-Yau case}
The well-known Connes long exact sequence (SBI-sequence) for cyclic
homology \cite{Lod} associated to a dg algebra $A$ is as follows
$$\ldots \rightarrow HH_{m+3}(A) \stackrel{I} \rightarrow HC_{m+3}(A) \stackrel{S} \rightarrow HC_{m+1}(A)
\stackrel{B} \rightarrow HH_{m+2}(A) \stackrel{I} \rightarrow \ldots,$$
where $HH_{\ast}(A)$ denotes the Hochschild homology of $A$ and
$HC_{\ast}(A)$ denotes the cyclic homology.
Let $M$ and $N$ be two dg $A$-modules with $M$ in per$A^e$. Then in
${\mathcal {D}}(k)$ we have the isomorphism
$${\rm RHom}_{A^e} ({\rm
RHom}_{A^e} (M, A^e), N) \simeq M \overset{L} \otimes_{A^e} N .$$ An
element $\xi = \sum^s_{i=1} \xi_{1i} \otimes \xi_{2i} \in H^r (M
\overset{L} \otimes_{A^e} N)$ is {\em non-degenerate} if the
corresponding map
$$\xi^{+}: {\rm RHom}_{A^e} (M, A^e)
\rightarrow {\Sigma}^{r}N$$ given by $\xi^{+}(\phi) = \sum^s_{i=1}
(-1)^{|\phi||\xi|} \phi(\xi_{1i})_2 \xi_{2i} \phi(\xi_{1i})_1$ is an
isomorphism. Throughout this article, we write $|\cdot|$ to denote
the degrees.
Let $l$ be a finite dimensional separable $k$-algebra. We fix a
trace $Tr: l \rightarrow k$ and let $\sigma' \otimes \sigma''$ be the
corresponding Casimir element (i.e., $\sigma' \otimes \sigma'' =
\sum \sigma'_i \otimes \sigma''_i$ and $Tr(\sigma'_i \sigma''_j) =
\delta_{ij}$). An {\em augmented dg $l$-algebra} is a dg algebra $A$
equipped with dg $k$-algebra homomorphisms $l
\stackrel{\varsigma}\rightarrow A \stackrel{\epsilon}\rightarrow l $ such that
$\epsilon \varsigma$ is the identity. Following \cite{VDB} we write
$PCAlgc(l)$ for the category of pseudo-compact augmented dg
$l$-algebras satisfying $ker(\epsilon) = coker(\varsigma) = {\rm
rad}A$. When forgetting the grading, rad$A$ is just the Jacobson
radical of the underlying ungraded algebra $A^u := \prod_r A^r$ of
the dg algebra $A=(A^r)_r$.
The SBI-sequence can be extended to the case that $A \in PCAlgc(l)$,
where $HH_{\ast}(A) (= H_{\ast}(A \overset{L}\otimes_{A^e}A))$ is
computed by the pseudo-compact Hochschild complex. For more details,
see section 8 and Appendix B in \cite{VDB}.
\begin{defn} [\cite{VDB}]
An algebra $A \in PCAlgc(l)$ is {\em strongly ($m+2$)-Calabi-Yau}
if $A$ is topologically homologically smooth and $HC_{m+1}(A)$
contains an element $\eta$ such that $B\eta$ is non-degenerate in
$HH_{m+2}(A)$.
\end{defn}
\begin{thm}[\cite{VDB}] \label{29}
Let $A \in PCAlgc(l)$. Assume that $A = (A^r)_{r \leq 0}$ is
concentrated in nonpositive degrees. Then $A$ is strongly
($m+2$)-Calabi-Yau if and only if there is a quasi-isomorphism
$(\widehat{T_l V}, d) \rightarrow A$ as augmented dg $l$-algebras with $V$
having the following properties
\begin{itemize}
\item[a)] $d(V) \cap V = 0$;
\item[b)] $V = V_c \oplus lz$ with $z$ an $l$-central element of
degree $-m-1$, $V_c$ finite dimensional and concentrated in degrees
$[-m,0]$;
\item[c)] $dz = \sigma' \eta \sigma''$ with $\eta \in V_c
{\otimes}_{l^e} V_c$ non-degenerate and antisymmetric under the flip
$F: v_1 \otimes v_2 \rightarrow (-1)^{|v_1||v_2|}v_2 \otimes v_1$ for any
$v_1$, $v_2$ in $V_c$.
\end{itemize}
\end{thm}
We would like to present the explicit construction of Ginzburg dg
categories in the following straightforward proposition.
\begin{prop} \label{31}
The completed Ginzburg dg category $\widehat{\Gamma}_{m+2} (Q,W)$
associated to a finite graded quiver $Q$ concentrated in degrees
$[-m,0]$ and a reduced superpotential $W$ being a linear combination
of paths of $Q$ of degree $1-m$ and of length at least $3$, is
strongly ($m+2$)-Calabi-Yau.
\end{prop}
\begin{proof}
We only need to check, directly from its definition, that
$\widehat{\Gamma}_{m+2} (Q,W)$ satisfies the assumptions and the
conditions listed in Theorem \ref{29}.
Let $l$ be the separable $k$-algebra ${\prod}_{i \in Q_0} ke_i$. Let
${\overline{Q}}^G$ be the double quiver obtained from $Q$ by
adjoining opposite arrows $a^{\ast}$ of degree $-m-|a|$ for arrows
$a \in Q_1$. Let ${\widetilde{Q}}^G$ be obtained from
${\overline{Q}}^G$ by adjoining a loop $t_i$ of degree $-m-1$ for
each vertex $i$. Then the completed Ginzburg dg category
$\widehat{\Gamma}_{m+2} (Q,W)$ is the completed path category
$\widehat{T_l({\widetilde{Q}}^G)}$ with the following differential
\begin{center}
$d(a) = 0$, \quad $a \in Q_1$; \\
$d(t_i) = e_i ({\sum}_{a \in Q_1} [a,a^{\ast}]) e_i, \quad i \in
Q_0$; \\
$d(a^{\ast}) = (-1)^{|a|} \frac{\partial W}{\partial a} = (-1)^{|a|}
{\sum}_{p=uav} (-1)^{(|a|+|v|)|u|}v u, \quad a \in Q_1$;
\end{center}
where the sum in the third formula runs over all homogeneous
summands $p=uav$ of $W$.
Thus, the components of $\widehat{{\Gamma}}_{m+2} (Q,W)$ are
concentrated in nonpositive degrees and ${\widehat{\Gamma}}_{m+2}
(Q,W)$ $( = l \oplus {\prod}_{s \geq 1}
({\widetilde{Q}}^G)^{{\otimes}_l s})$ lies in $PCAlgc(l)$.
The differential above which is induced by the reduced
superpotential $W$ satisfies that $d({\widetilde{Q}}^G) \cap
{\widetilde{Q}}^G = 0$. Set $z = {\sum}_{i \in Q_0} t_i$. Then $z$
is an $l$-central element of degree $-m-1$. Clearly,
${\widetilde{Q}}^G = {\overline{Q}}^G \oplus lz$, the double quiver
${\overline{Q}}^G$ is finite and concentrated in degrees $[-m,0]$,
and the element $d(z) = {\sum}_{a \in Q_1} (a a^{\ast} -
(-1)^{|a||a^{\ast}|} a^{\ast} a)$ is antisymmetric under the flip
$F$.
The last step is to show that $\eta := {\sum}_{a \in Q_1}
[a,a^{\ast}]$ is non-degenerate, that is, the corresponding map
$$\eta^+: \, {\rm Hom}_{l^e} ({\overline{Q}}^G,l^e) \longrightarrow {\overline{Q}}^G, \quad \phi \rightarrow (-1)^{|\phi||\eta|} \phi (\eta_1)_2 \eta_2 \phi
(\eta_1)_1$$is an isomorphism. Define morphisms $\phi_{\gamma}
(\gamma \in {\overline{Q}}^G) : {\overline{Q}}^G \rightarrow l^e$ as follows
$$\phi_{\gamma} (\alpha) = \delta_{\alpha \gamma} e_{t(\alpha)} \otimes
e_{s(\alpha)}.$$ Then $\{\phi_{\gamma} | \gamma \in {\overline{Q}}^G
\}$ is a basis of the space ${\rm Hom}_{l^e}
({\overline{Q}}^G,l^e)$. Applying the map $\eta^+$, we obtain the
images $\eta^+ (\phi_a) = (-1)^{m|a|} a^{\ast}$ and $\eta^+
(\phi_{a^{\ast}}) = (-1)^{1+|a^{\ast}|^2} a$ for arrows $a \in Q_1$.
Thus, $\{\eta^+ (\phi_{\gamma}) | \gamma \in {\overline{Q}}^G \}$ is
a basis of ${\overline{Q}}^G$. Therefore, the element $\eta$ is
non-degenerate.
\end{proof}
Now we write down the explicit construction of deformed
preprojective dg algebras as described in \cite{VDB}. Let $Q$ be a
finite graded quiver and $L$ the subset of $Q_1$ consisting of all
loops $a$ of odd degree such that $|a| = -m/2$. Let
${\overline{Q}}^V$ be the double quiver obtained from $Q$ by
adjoining opposite arrows $a^{\ast}$ of degree $-m-|a|$ for $a \in
Q_1\setminus L$ and putting $a^{\ast} = a$ without adjoining an
extra arrow for $a \in L$. Let $N$ be the Lie algebra $k
{\overline{Q}}^V / {[k {\overline{Q}}^V, k {\overline{Q}}^V]}$
endowed with the necklace bracket $\{-,- \}$ (cf. \cite{BL},
\cite{Gi01}). Let $W$ be a superpotential which is a linear
combination of homogeneous elements of degree $1-m$ in $N$ and
satisfies $\{W,W\}=0$ (in order to make the differential
well-defined). Let ${\widetilde{Q}}^V$ be obtained from
${\overline{Q}}^V$ by adjoining a loop $t_i$ of degree $-m-1$ for
each vertex $i$. Then the {\em deformed preprojective dg algebra}
$\Pi(Q,m+2,W)$ is the dg algebra $(k{\widetilde{Q}}^V,d)$ with the
differential
\begin{center}
$da= \{W,a\}=(-1)^{(|a|+1)|a^{\ast}|}\frac{\partial W}{\partial
a^{\ast}}= (-1)^{(|a|+1)|a^{\ast}|}\sum_{p=ua^{\ast}v} (-1)^{(|a^{\ast}|+|v|)|u|}v u$; \\
$da^{\ast} = \{W,a^{\ast}\} = (-1)^{|a|+1} \frac{\partial
W}{\partial a} = (-1)^{|a|+1}
{\sum}_{p=uav} (-1)^{(|a|+|v|)|u|}v u$; \\
$dt_i = e_i ({\sum}_{a \in Q_1} [a,a^{\ast}]) e_i$;
\end{center}
where $a \in Q_1$ and $i \in Q_0$. Later we will denote the
homogeneous elements $rvu \, (r \in k)$ appearing in $d \alpha \,
(\alpha \in {\overline{Q}}^V)$ by $y (\alpha, v, u)$.
\begin{rem} \label{32}
As in Proposition \ref{31}, we see that the completed deformed
preprojective dg algebra $\widehat{\Pi}(Q,m+2,W)$ associated to a
finite graded quiver $Q$ concentrated in degrees $[-m,0]$ and a
reduced superpotential $W$ being a linear combination of paths of
${\overline{Q}}^V$ of length at least $3$, is also strongly
($m+2$)-Calabi-Yau.
\end{rem}
Suppose that $-1$ is a square in the field $k$ and denote by
$\sqrt{-1}$ a chosen square root. Then the class of deformed
preprojective dg algebras is strictly greater than the class of
Ginzburg dg categories. Suppose that $Q$ does not contain {\em
special loops} (i.e., loops of odd degree which is equal to $-m/2$).
Then we can easily see that $\Gamma_{m+2}(Q,W) = \Pi (Q,m+2,-W)$.
Otherwise, let $Q^0$ be the subquiver of $Q$ obtained by removing
the special loops. For each special loop $a$ in $Q_1$, we draw a
pair of loops $a'$ and $a''$ which are also special at the same
vertex of $Q^0$. Denote the new quiver by $Q'$. Let $W'$ be the
superpotential obtained from $W$ by replacing each special loop by
the corresponding element $a' + a'' {\sqrt{-1}}$. Now we define a
map $\iota : \Gamma_{m+2}(Q,W) \rightarrow \Pi (Q',m+2,-W')$, which sends
each special loop $a$ of $Q_1$ to the element $a' + a'' {\sqrt{-1}}$
and its dual $a^{\ast}$ to the element $a' - a'' {\sqrt{-1}}$ in
$\Pi (Q',m+2,-W')$, and is the identity on the other arrows of
${\widetilde{Q}}^G$. Then it is not hard to check that $\iota$ is a
dg algebra isomorphism. It follows that Ginzburg dg categories are
deformed preprojective dg algebras. For the strictness, see the
following example.
\begin{example}\label{36}
Suppose that $m$ is $2$. Let $Q$ be the quiver consisting of only
one vertex `$\bullet$' and one loop $a$ of degree $-1$. Then the
Ginzburg dg category $ {\Gamma}_4(Q,0)$ and the deformed
preprojective dg algebra ${\Pi}(Q,4,0)$ respectively have the
following underlying graded quivers
\[
{\widetilde{Q}}^G: \quad \xymatrix { \bullet \ar@(ur,lu)[]_{a}
\ar@(ur,rd)[]^{{a}^{\ast}} \ar@(ul,ld)[]_{t} }, \quad \quad \quad
{\widetilde{Q}}^V: \quad \xymatrix { \bullet \ar@(ur,lu)[]_{a =
{a}^{\ast}} \ar@(ul,ld)[]_{t} }
\]
where $|a| = |a^{\ast}| = -1$ and $|t| = -3$. The differential takes
the following values $$d(a) = 0 = d(a^{\ast}), \quad \,
d_{\Gamma_{4}(Q,0)}(t) = a a^{\ast}+a^{\ast}a, \quad
d_{\Pi(Q,4,0)}(t) = 2a^2.$$ Then ${\rm dim} H^{-1} (\Gamma_4(Q,0)) =
2$ while ${\rm dim} H^{-1} (\Pi(Q,4,0)) = 1$. Hence, these two dg
algebras are not quasi-isomorphic. Moreover, it is obvious that the
dg algebra $\Pi(Q,4,0)$ can not be realized as a Ginzburg dg
category.
\end{example}
\begin{lem}
Let $\Pi = \widehat{\Pi}(Q,m+2,W)$ be a completed deformed
preprojective dg algebra. Let $x$ (resp. $y$) denote the minimal
(resp. maximal) degree of the arrows of ${\overline{Q}}^V$. Then
there exists a canonical completed deformed preprojective dg algebra
$\Pi' = \widehat{\Pi}(Q',m+2,W')$ isomorphic to $\Pi$ as a dg
algebra, where the quiver $Q'$ is concentrated in degrees
$[-m/2,y]$.
\end{lem}
\begin{proof}
We can construct directly a quiver $Q'$ and a superpotential $W'$.
We claim first that $x+y=-m$. Let $x_1$ (resp. $y_1$) denote the
minimal (resp. maximal) degree of the arrows of $Q$. Then
${\overline{Q}}^V \setminus Q$ is concentrated in degrees
$[-m-y_1,-m-x_1]$. If $x_1 \leq -m-y_1$, then $x= x_1$ and $y_1 \leq
-m-x_1$. Hence, $x+y = x_1 + (-m-x_1) = -m$. Similarly for the case
`$-m-y_1 \leq x_1$'.
Let $Q^0$ be the subquiver of $Q$ which has the same vertices as $Q$
and whose arrows are those of $Q$ with degree belonging to
$[-m/2,y]\, ( = [(x+y)/2,y])$. In this case $|a^{\ast}| = -m - |a|
\in [-m-y,-m/2] = [x,-m/2]$. For each arrow $b$ of $Q$ whose dual
$b^{\ast}$ has degree in $(-m/2,y]$, we add a corresponding arrow
$b'$ to $Q^0$ with the same degree as $b^{\ast}$. Denote the new
quiver by $Q'$. Therefore, the quiver $Q'$ has arrow set
$$\{ a \in Q_1 | \, |a| \in [-m/2,y] \} \cup \{ b'\, | \,
|b'| = |b^{\ast}|, b \in Q_1 \, \mbox{and} \, |b^{\ast}| \in (-m/2,y] \}.$$
We define a map $\iota: {\widetilde{Q}}^V \rightarrow {\widetilde{Q'}}^V$ by
setting $$\iota(a) = a, \, \iota(a^{\ast}) = a^{\ast}; \quad
\iota(t_i) = t_i; \quad \iota(b) = (-1)^{|b||{b^{\ast}}|+1}
{b'}^{\ast}, \, \iota(b^{\ast}) = b'.$$ Let $W'$ be the
superpotential obtained from $W$ by replacing each arrow $\alpha$ in
$W$ by $\iota(\alpha)$. Then it is not hard to check that the map
$\iota$ can be extended to a dg algebra isomorphism from $\Pi$ to
$\Pi'$.
\end{proof}
In particular, if $Q$ is concentrated in degrees $[-m,0]$, then by
the above lemma, the new quiver $Q'$ is concentrated in degrees
$[-m/2,0]$. If the following two conditions
\begin{itemize}
\item[V1)] $Q$ is a finite graded quiver concentrated in degrees
$[-m/2,0]$,
\item[V2)] $W$ is a reduced superpotential which is a linear combination of paths of
${\overline{Q}}^V$ of degree $1-m$ and of length $\geq 3$,
\end{itemize}
hold, then we will say that the completed deformed preprojective dg
algebra $\widehat{\Pi}(Q,m+2,W)$ is good.
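For example, if the graded quiver $Q$ is concentrated in degree $0$ (an
ordinary quiver) and $W = 0$, then V1) and V2) hold trivially, so
$\widehat{\Pi}(Q,m+2,0)$ is good; this is the situation of Proposition
\ref{42} below. The quiver of Example \ref{46}, whose two loops have
degree $-1$, also yields a good algebra for $m = 2$.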
\begin{thm}[\cite{VDB}] \label{30}
Let $A$ be a strongly ($m+2$)-Calabi-Yau dg algebra with components
concentrated in degrees $\leq 0$. Suppose that $A$ lies in
$PCAlgc(l)$ for some finite dimensional separable commutative
$k$-algebra $l$. Then $A$ is quasi-isomorphic to some good completed
deformed preprojective dg algebra.
\end{thm}
In this section we consider the strongly ($m+2$)-Calabi-Yau case. By
Theorem \ref{30}, it suffices to consider good completed deformed
preprojective dg algebras $\Pi = \widehat{\Pi}(Q,m+2,W)$. The simple
$\Pi$-module $S_i$ (attached to a vertex $i$ of $Q$) belongs to the
finite dimensional derived category ${\mathcal {D}}_{fd} (\Pi)$,
hence it also belongs to per$\Pi$. We will give a precise
description of the objects $RA_t$ and $LA_t$ obtained from iterated
mutations of a $P$-indecomposable $e_i \Pi$, where $e_i$ is the
primitive idempotent element associated to a vertex $i$ of $Q$.
\begin{defn}[\cite{Pla}]\label{7}
Let $A = (\widehat{kQ},d)$ be a dg algebra, where $Q$ is a finite
graded quiver and $d$ is a differential sending each arrow to a
(possibly infinite) linear combination of paths of length $\geq 1$.
A dg $A$-module $M$ is {\em minimal perfect} if
\begin{itemize}
\item[a)] its underlying graded module is of the form ${\oplus}^N_{j=1}
R_j$, where $R_j$ is a finite direct sum of shifted copies of direct
summands of $A$, and
\item[b)] its differential is of the form
$d_{int}+{\delta}$, where $d_{int}$ is the direct sum of the
differentials of these $R_j \,\, (1 \leq j \leq N)$, and $\delta$,
as a degree $1$ map from ${\oplus}^N_{j=1} R_j$ to itself, is a
strictly upper triangular matrix whose entries are in the ideal
$\mathfrak{m}$ of $A$ generated by the arrows of $Q$.
\end{itemize}
\end{defn}
\begin{lem}[\cite{Pla}]\label{34}
Let $M$ be a dg $A (= (\widehat{kQ},d))$-module such that $M$ lies
in {\rm per}$A$. Then $M$ is quasi-isomorphic to a minimal perfect
dg $A$-module.
\end{lem}
In the second part of this section, we illustrate how to obtain
minimal perfect dg modules which are quasi-isomorphic to simple
$\Pi$-modules from cofibrant resolutions \cite{KY09}. If a cofibrant
resolution ${\mathbf{p}}X$ of a dg module $X$ is minimal perfect,
then we call ${\mathbf{p}}X$ a {\em minimal cofibrant resolution} of
$X$.
Let $i$ be a vertex of $Q$ and $P_i = e_i {\Pi}$. Consider the short
exact sequence in the category ${\mathcal {C}} (\Pi)$ of dg modules
$$0 \rightarrow ker(p) \stackrel{\iota} \longrightarrow P_{i} \stackrel{p}
\longrightarrow S_{i} \rightarrow 0 ,$$ where in the category Grmod($\Pi$) of graded
modules $ker(p)$ is the direct sum of $\rho P_{s(\rho)}$ over all
arrows $\rho \in {{\widetilde{Q}}^V}_1$ with $t(\rho) = i$. The
simple module $S_{i}$ is quasi-isomorphic to $cone(ker(p)
\stackrel{\iota} \rightarrow P_{i})$, {\em i.e.}, the dg module
\begin{displaymath}
{X = ( \underline{X} = P_{i} \oplus {\Sigma}X'_0 \oplus \ldots
\oplus {\Sigma} X'_{m+1}, d_X = \left( \begin{array}{cc} d_{P_{i}} &
\iota \\ 0 & -d_{ker(p)}
\end{array} \right)),}
\end{displaymath}
where for each integer $0 \leq j \leq m+1$, the object $X'_j$ is the
direct sum of $\rho P_{s(\rho)}$ ranging over all arrows $\rho \in
{\widetilde{Q}}^V_1$ with $t(\rho) = i$ and $|\rho| = -j$. By
Section 2.14 in \cite{KY09}, the dg module $X$ is a cofibrant
resolution of the simple module $S_{i}$.
Now let $P'_j \, (0 \leq j \leq m+1)$ be the direct sum of
$P_{s(\rho)}$ where $\rho$ ranges over all arrows in
${\widetilde{Q}}^V_1$ satisfying $t(\rho) = i$ and $|\rho| = -j$.
Clearly, $P'_{m+1} = P_{i}$. We require that the ordering of direct
summands $P_{s(\rho)}$ in $P'_j$ is the same as the ordering of
direct summands $\rho P_{s(\rho)}$ in $X'_j$ for each integer $0
\leq j \leq m+1$. Let $Y$ be an object whose underlying graded
module is $\underline{Y} = P_{i} \oplus {\Sigma}P'_0 \oplus
{\Sigma}^2 P'_1 \oplus \ldots \oplus {\Sigma}^{m+2} P'_{m+1}.$ We
endow $\underline{Y}$ with the degree $1$ graded endomorphism
$d_{int} + {\delta}_Y$, where $d_{int}$ is the same notation as in
Definition \ref{7}. The columns of ${\delta}_Y$ have the following
two types: $(\alpha, 0, \ldots, -y_{red}(\alpha,v,u), \ldots, 0)^t$,
and $(t_i, \ldots, -a^{\ast}, \ldots, (-1)^{|b| |b^{\ast}|} b,
\ldots, 0)^t$ for the last column.
Here $\alpha$ is an arrow in ${\overline{Q}}^V$, while $a$ is an
arrow in $Q$ and $b$ is an arrow in ${\overline{Q}}^V \setminus Q$.
Here $y_{red}(\alpha,v,u)$ is obtained from the path $y(\alpha,v,u)
= \beta_s \ldots \beta_1$ (this notation is defined just before
Remark \ref{32}) by removing the factor $\beta_s$. The ordering of
the elements in each column is determined by the ordering of $Y$.
Let $f: Y \rightarrow X$ be a map constructed as the diagonal matrix whose
elements are all arrows in ${\widetilde{Q}}^V_1$ with target at $i$,
together with $e_{i}$ as the first element. Moreover, we require
that the ordering of these arrows is determined by $Y$ (hence also
by $X$), that is, the components of $f$ are of the form
\begin{center} $f_{\rho} : {\Sigma}^{|\rho| + 1} P_{s(\rho)} \longrightarrow {\Sigma}
{\rho} P_{s(\rho)}, \quad \quad u \mapsto \rho u .$
\end{center}
It is not hard to check the identity $f (d_{int} + {\delta}_Y) = d_X
f.$ Hence, the morphism $f$ is an isomorphism in ${{\mathcal
{C}}(\Pi)}$, and the map $d_{int} + {\delta}_Y$ makes the object $Y$
into a dg module which is minimal perfect. Therefore, the dg module
$Y$ is a minimal cofibrant resolution of the simple module $S_i$.
In the third part of this section, we show that when there are no
loops of $Q$ at vertex $i$, the truncations of the minimal cofibrant
resolution $Y$ of the simple module $S_i$ produce $RA_t$ and $LA_t
\, (0 \leq t \leq m+1)$ obtained from the $P$-indecomposable $P_i$
by iterated mutations. If we write $M$ for the dg module
${\Pi}/{P_i}$, then the dg modules $P'_j \, (0 \leq j \leq m)$
appearing in $Y$ lie in add$M$. Let ${\varepsilon}_{\leq t}Y$ be the
submodule of $Y$ with the inherited differential whose underlying
graded module is the direct sum of those summands of $Y$ with copies
of shift $\leq t$. Let ${\varepsilon}_{\geq t+1}Y$ be the quotient
module $Y/({\varepsilon}_{\leq t}Y)$. Notice that
${\varepsilon}_{\leq t}Y$ is a truncation of $Y$ for the canonical
weight structure on per$\Pi$, cf. Bondarko, Keller-Nicolas.
\begin{prop} \label{11}
Let $\Pi$ be a good completed deformed preprojective dg algebra
$\widehat{\Pi}(Q,m+2,W)$ and $i$ a vertex of $Q$. Assume that there
are no loops of $Q$ at vertex $i$. Then the following two
isomorphisms
$${\Sigma}^{-t}{\varepsilon}_{\leq t}Y \simeq RA_t \quad
\mbox{and}\quad {\Sigma}^{-t-1}{\varepsilon}_{\geq t+1}Y \simeq
LA_{m+1-t}$$ hold in the derived category $\mathcal {D} := {\mathcal
{D}}{(\Pi)}$ for each integer $0 \leq t \leq m+1$.
\end{prop}
\begin{proof}
We only consider the first isomorphism. Then the second one can be
obtained dually. For arrows of ${\overline{Q}}^V$ of degree $-j$
ending at vertex $i$, we write ${\alpha}_j$; for the symbols
$-y_{red}(\alpha,v,u)$ of degree $-j$, we simply write $-y^j_{red}$,
and for morphisms $f$ of degree $-j$, we write $f_j$, where $0 \leq
j \leq m$. Moreover, we use the notation $[x]$ to denote a matrix
whose entries $x$ have the same `type' (in some obvious sense).
Clearly, when $t=0$, we have that ${\varepsilon}_{\leq 0}Y = P_i =
RA_0$.
When $t=1$, we have the following isomorphisms
\begin{displaymath}
{{\Sigma}^{-1}{\varepsilon}_{\leq 1}Y \simeq ({\Sigma}^{-1}P_i
\oplus P'_0, \left(\begin{array}{cc} d_{{\Sigma}^{-1}P_i} &
-[{\alpha}_0]
\\ 0 & d_{P'_0}
\end{array} \right) ) \simeq {\Sigma}^{-1}cone(P'_0 \overset{h^{(1)}} \longrightarrow P_i), }
\end{displaymath}
where each component of $h^{(1)} (= [{\alpha}_0] )$ is the left
multiplication by some ${\alpha}_0$. Since $W$ is reduced, the left
multiplication by ${\alpha}_0$ is nonzero in the space ${\rm
Hom}_{\mathcal {D}}(P'_0,P_i)$. Moreover, only the trivial paths
$e_i$ have zero degree, and there are no loops of ${\overline{Q}}^V$
of degree zero at vertex $i$. It follows that $h^{(1)}$ is a minimal
right (add$M$)-approximation of $P_i$. Then
${\Sigma}^{-1}{\varepsilon}_{\leq 1}Y$ and $RA_1$ are isomorphic in
$\mathcal {D}$.
In general, assume that ${\Sigma}^{-t}{\varepsilon}_{\leq t}Y \simeq
RA_t \, (1 \leq t \leq m)$. We will show that
${\Sigma}^{-t-1}{\varepsilon}_{\leq t+1}Y \simeq RA_{t+1}$. First we
have the following isomorphism
$${\Sigma}^{-t-1}{\varepsilon}_{\leq t+1}Y \simeq ({\Sigma}^{-t-1}P_i
\oplus {\Sigma}^{-t}P'_0 \oplus \ldots \oplus P'_t,$$
\begin{displaymath}
\left(\begin{array}{ccccc} d_{{\Sigma}^{-t-1}P_i} &
(-1)^{t+1}[{\alpha}_0] & \ldots & (-1)^{t+1}[{\alpha}_{t-1}]& (-1)^{t+1}[{\alpha}_t] \\
0 & d_{{\Sigma}^{-t}P'_0} & \ldots & (-1)^t [y^{t-2}_{red}] &(-1)^t
[y^{t-1}_{red}] \\ & \ldots & & \ldots &
\\ 0& 0& \ldots & d_{{\Sigma}^{-1}P'_{t-1}} & (-1)^t [y^0_{red}] \\
0 & 0 & \ldots & 0 & d_{P'_t}
\end{array} \right) )
\end{displaymath}
$$ \simeq {\Sigma}^{-1}cone(P'_t \overset{h^{(t+1)}} \longrightarrow RA_t),
\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad$$
where $h^{(t+1)} = ((-1)^t[{\alpha}_t], (-1)^{t-1} [y^{t-1}_{red}],
\ldots, (-1)^{t-1} [y^0_{red}])^t$.
Each component of $h^{(t+1)}$ is a nonzero morphism in ${\rm
Hom}_{\mathcal {D}}(P'_t,RA_t)$, since the superpotential $W$ is
reduced. Otherwise, the arrow ${\alpha}_t$ would be a linear
combination of paths of length $\geq 2$. It follows that $h^{(t+1)}$
is right minimal. Let $L$ be an arbitrary indecomposable object in
add$A$ and $f = (f_t, [f_{t-1}], \ldots, [f_1], [f_0])^t$ an
arbitrary morphism in ${\rm Hom}_{\mathcal {D}}(L,RA_t)$. Then the
vanishing of $d(f)$
implies that $d(f_t) = -[{\alpha}_0][f_{t-1}] - \ldots -
[{\alpha}_{t-2}][f_1] - [{\alpha}_{t-1}][f_0]$. Since there are no
loops of ${\overline{Q}}^V$ of degree $-t$ at vertex $i$, the map
$f_t$ which is homogeneous of degree $-t$ is a linear combination of
the following forms:
$(i)\,f_t= {\alpha}_t g_0$, where $|g_0| = 0$. In this case, the
differential
$$d(f_t) = d(\alpha_tg_0) = d(\alpha_t)g_0 =
[\alpha_0][y^{t-1}_{red}]g_0+\ldots+[\alpha_{t-1}][y^0_{red}]g_0,$$
which implies that $[f_r]$ is equal to $-[y^r_{red}]g_0 \,(0 \leq r
\leq t-1)$. Then the equalities
\begin{displaymath} f =
\left(\begin{array}{c} f_t \\ {[f_{t-1}]} \\ \ldots \\ {[f_1]} \\
{[f_0]}
\end{array}\right) = \left(\begin{array}{c} \alpha_tg_0 \\
-[y^{t-1}_{red}]g_0 \\ \ldots \\ -[y^1_{red}]g_0 \\
-[y^0_{red}]g_0
\end{array}\right) = \left(\begin{array}{c} (-1)^t \alpha_t \\
(-1)^{t-1}[y^{t-1}_{red}] \\ \ldots \\ (-1)^{t-1}[y^1_{red}] \\
(-1)^{t-1}[y^0_{red}]
\end{array}\right)(-1)^tg_0
\end{displaymath}
hold. Thus, the morphism $f$ factors through $h^{(t+1)}$.
$(ii) \, f_t = \alpha_r g_{t-r}$, where $|g_{t-r}| = r-t \, (0 \leq
r \leq t-1)$. In these cases, the differentials
\begin{center}
$d(f_t) = d(\alpha_r)g_{t-r} + (-1)^r\alpha_rd(g_{t-r}) =
[\alpha_0][y^{r-1}_{red}]g_{t-r} + \ldots
+[\alpha_{r-1}][y^0_{red}]g_{t-r}+(-1)^r\alpha_rd(g_{t-r}),$
\end{center} which implies that $[f_{t-1}]=-[y^{r-1}_{red}]g_{t-r},
\ldots, [f_{t-r}]=-[y^0_{red}]g_{t-r}$ and
$[f_{t-r-1}]=(-1)^{r+1}d(g_{t-r}).$ Then the equalities
\begin{displaymath}
\left(\begin{array}{c} f_t \\
{[f_{t-1}]} \\ \ldots \\ {[f_1]} \\ {[f_0]}
\end{array}\right) = \left(\begin{array}{c} \alpha_rg_{t-r} \\ -[y^{r-1}_{red}]g_{t-r} \\
\ldots \\ -[y^{0}_{red}]g_{t-r} \\ (-1)^{r+1}d(g_{t-r})\\
0 \\ \ldots \\0
\end{array}\right) = d_{RA_t}\left(\begin{array}{c} 0 \\ 0 \\ \ldots \\ 0 \\ (-1)^tg_{t-r}\\0 \\ \ldots \\ 0
\end{array}\right)
+ \left(\begin{array}{c} 0 \\ 0 \\ \ldots \\ 0 \\ (-1)^tg_{t-r}\\0 \\ \ldots \\ 0
\end{array}\right) d_L
\end{displaymath}
hold, so the morphism $f$ is null-homotopic and hence zero in ${\rm Hom}_{\mathcal {D}} (L, RA_t)$.
Therefore, the morphism $h^{(t+1)}$ is a minimal right
(add$A$)-approximation of $RA_t (1 \leq t \leq m)$. Hence, the
isomorphism ${\Sigma}^{-t-1}\varepsilon_{\leq t+1}Y \simeq RA_{t+1}$
holds.
\end{proof}
We further assume that the zeroth homology $H^0 \Pi$ is finite
dimensional. Then the dg algebra $\Pi$ satisfies Assumptions
\ref{23} and moreover it is strongly ($m+2$)-Calabi-Yau.
Since the simple module $S_i$ is zero in the generalized $m$-cluster
category ${\mathcal {C}}_{\Pi} = {\rm per}\Pi/{{\mathcal
{D}}_{fd}(\Pi)}$, the corresponding minimal cofibrant resolution $Y$
also becomes zero in ${\mathcal {C}}_{\Pi}$. Taking truncations of
$Y$, we obtain $m+2$ triangles in ${\mathcal {C}}_{\Pi}$
$$\pi(\varepsilon_{\leq t}Y) \longrightarrow 0 \longrightarrow \pi(\varepsilon_{\geq t+1}Y) \longrightarrow \Sigma
\pi(\varepsilon_{\leq t}Y), \quad \quad 0 \leq t \leq m+1,$$where
$\pi: {\rm per}{\Pi} \rightarrow {\mathcal {C}}_{\Pi}$ is the canonical
projection functor. Therefore, the following theorem holds:
\begin{thm} \label{13}
Under the assumptions in Proposition \ref{11} and the assumption
that $H^0 \Pi$ is finite dimensional, the image of $RA_t$ is
isomorphic to the image of $LA_{m+1-t}$ in the generalized
$m$-cluster category ${\mathcal {C}}_{\Pi}$ for each integer $0 \leq
t \leq m+1$.
\end{thm}
\begin{proof} The following isomorphisms
$$\pi(RA_t) \simeq \pi({\Sigma}^{-t}\varepsilon_{\leq t}Y) \simeq
\pi({\Sigma}^{-t-1}\varepsilon_{\geq t+1}Y) \simeq \pi(LA_{m+1-t})$$
are true in ${\mathcal {C}}_{\Pi}$ for all integers $0 \leq t \leq
m+1$.
\end{proof}
In the presence of loops, the objects $RA_t$ and $LA_r$ do not
always satisfy the relations in Theorem \ref{13}. See the following
example.
\begin{example}\label{46}
Suppose that $m$ is $2$. Let $Q$ be the quiver whose vertex set
$Q_0$ has only one vertex `$\bullet$' and whose arrow set $Q_1$ has
two loops $\alpha$ and $\beta$ of degree $-1$. Then the completed
deformed preprojective dg algebra ${\Pi} = \widehat{\Pi}(Q,4,0)$ has
the underlying graded quiver as follows
\[
{\widetilde{Q}}^V: \quad\quad \xymatrix { \bullet
\ar@(ur,lu)[]_{\alpha} \ar@(ur,rd)[]^{\beta} \ar@(ul,ld)[]_{t} }
\]
with $|\alpha| = |\beta| = -1$ and $|t| = -3$. The differential
takes the following values $$d(\alpha) = 0 = d(\beta), \quad \, d(t)
= 2\alpha^2+2\beta^2.$$
The algebra ${\Pi}$ is an indecomposable object in the derived
category ${\mathcal {D}}(\Pi)$. Let $P = {\Pi}$. Then we have the
equality ${\Pi} = P \oplus M$, where $M=0$. Then $LA_r$ is
isomorphic to ${\Sigma}^rP$ and $RA_r$ is isomorphic to
${\Sigma}^{-r}P$ for all $r \geq 0$.
The zeroth homology $H^0{\Pi}$ is one-dimensional and generated by
the trivial path $e_{\bullet}$. Let ${\mathcal {C}}_{{\Pi}}$ be the
generalized $2$-cluster category. We claim that the image of $RA_1$
in ${\mathcal {C}}_{{\Pi}}$ is not isomorphic to the image of
$LA_2$. Otherwise, assume that $\pi(RA_1)$ is isomorphic to
$\pi(LA_2)$. Then the following isomorphisms hold $${\rm
Hom}_{{\mathcal {C}}_{{\Pi}}}(\pi(LA_2),{\Sigma}\pi(LA_2)) \simeq
{\rm Hom}_{{\mathcal {C}}_{{\Pi}}}(\pi(LA_2),{\Sigma}\pi(RA_1))
\simeq {\rm Hom}_{{\mathcal
{C}}_{{\Pi}}}({\Sigma}^2P,{\Sigma}\pi({\Sigma}^{-1}P)) $$ $$ \simeq
{\rm Hom}_{{\mathcal {C}}_{{\Pi}}}({\Sigma}^2P,P) \simeq {\rm
Hom}_{{\mathcal {D}}(\Pi)}({\Sigma}^2P,P) \simeq H^{-2}{\Pi}.$$ The
left end term of these isomorphisms vanishes since $\pi(LA_2)$ is a
$2$-cluster tilting object, while the right end term is a
$3$-dimensional space whose basis is
$\{\alpha^2,\,\alpha\beta,\,\beta\alpha\}$. Therefore, we obtain a
contradiction.
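The dimension of the right end term is a direct computation (assuming that
the characteristic of $k$ is not $2$): the degree $-2$ component of $\Pi$
is spanned by $\alpha^2, \alpha\beta, \beta\alpha, \beta^2$, all of which
are cycles, while the boundaries in degree $-2$ are spanned by
$d(t) = 2{\alpha}^2+2{\beta}^2$. Hence
$$H^{-2}{\Pi} \simeq \langle \alpha^2, \alpha\beta, \beta\alpha, \beta^2
\rangle / \langle {\alpha}^2+{\beta}^2 \rangle$$
is $3$-dimensional, with the basis given above.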
\end{example}
\section{Periodicity property}
\begin{lem}\label{18}
Let $A$ be a dg algebra satisfying Assumptions \ref{23}. Let $x$ and
$y$ be two integers satisfying $x \leq y+m+1$. Suppose that the
object $X$ lies in ${\mathcal {D}}^{\leq x} \cap {\rm per}A$ and the
object $Y$ lies in $^{\mbox{\rm per}p}{\mathcal {D}}^{\leq y} \cap {\rm
per}A$. Then the quotient functor $\pi: {\rm per}A \rightarrow {\mathcal
{C}}_A$ induces an isomorphism
$${\rm Hom}_{\mathcal {D}} (X, Y) \simeq {\rm Hom}_{{\mathcal {C}}_A}
(\pi (X),\pi (Y)).$$
\end{lem}
\begin{proof}
This proof is quite similar to the proof of Lemma 2.9 given in
\cite{Pla}.
First, we show the injectivity.
Assume that $f: X \rightarrow Y$ is a morphism in $\mathcal {D}$ whose image
in ${\mathcal {C}}_A$ is zero. It follows that $f$ factors through
some $N$ in ${\mathcal {D}}_{fd} (A)$. Let $f = hg$. Consider the
following diagram
\[
\xymatrix{ & X \ar@{.>}[dl] \ar[dr]^g \ar[rr]^f & & Y & & & \\
{\tau}_{\leq x}N \ar[rr] & & N \ar[rr] \ar[ur]^h & & {\tau}_{\geq
x+1}N \ar[r] & {\Sigma} ({\tau}_{\leq x}N) .}
\] We have that $g$ factors through ${\tau}_{\leq x}N$ because $X \in {\mathcal {D}}^{\leq
x}$ and ${\rm Hom}_{\mathcal {D}}({\mathcal {D}}^{\leq x},
{\tau}_{\geq x+1}N)$ vanishes.
Now since ${\tau}_{\leq x}N$ is still in ${\mathcal {D}}_{fd} (A)$,
by the Calabi-Yau property, the following isomorphism $$D {\rm
Hom}_{\mathcal {D}} ({\tau}_{\leq x}N,Y) \simeq {\rm Hom}_{\mathcal
{D}} (Y, {\Sigma}^{m+2} ({\tau}_{\leq x}N))$$ holds. Since
${\Sigma}^{m+2}({\tau}_{\leq x}N)$ belongs to ${\mathcal {D}}^{\leq
x-m-2} (\subseteq {\mathcal {D}}^{\leq y-1})$, the right hand side
of the above isomorphism is zero. Therefore, the morphism $f$ is
zero in the derived category $\mathcal {D}$.
Second, we show the surjectivity.
Consider an arbitrary fraction $s^{-1}f$ in ${\mathcal {C}}_A$
\[
\xymatrix{X \ar[dr]^f & & Y \ar[dl]_s \\ & U \ar[dl]_r & \\ N & &}
\]
where the cone $N$ of $s$ is in ${\mathcal {D}}_{fd} (A)$. Now look
at the following diagram
\[
\xymatrix{ X \ar[dr]_f \ar@{.>}[r]^{w} & Y \ar[d]^s \ar[dr]^v & & \\
& U \ar@{.>}[r]_g \ar[d]^r & Z
\ar[d] & \\ {\tau}_{\leq x}N \ar[r] & N \ar[r]^{{\pi}_{x+1}\quad}
\ar[d]^u & {\tau}_{\geq x+1}N \ar[r] \ar@{.>}[dl]^h &
{\Sigma}({\tau}_{\leq x}N)
\\ & {\Sigma}Y . & & }
\]
By the Calabi-Yau property, the space ${\rm Hom}_{\mathcal {D}}
({\tau}_{\leq x}N, {\Sigma}Y)$ is isomorphic to $D {\rm
Hom}_{\mathcal {D}} (Y, {\Sigma}^{m+1}({\tau}_{\leq x}N))$, which is
zero since $x-m-1 \leq y$. Thus, there exists a morphism $h$ such
that $u=h \circ {\pi}_{x+1}$. Now we embed $h$ into a triangle in
$\mathcal {D}$ as follows
$$Y \stackrel{v} \longrightarrow Z \longrightarrow {\tau}_{\geq x+1}N \stackrel{h} \longrightarrow
{\Sigma}Y.$$ It follows that the morphism $v$ factors through $s$ by
some morphism $g$. Then we can get a new fraction
\[
\xymatrix{ X \ar[dr]_{g \circ f} & & Y \ar[dl]_{v} \\ & Z & }
\] where the cone of $v$ is ${\tau}_{\geq x+1}N (\in {\mathcal
{D}}_{fd} (A))$. This fraction is equal to the one we started with
because
$${v^{-1}} (g \circ f) = (g \circ s)^{-1} (g \circ f) \sim s^{-1} f.$$
Moreover, since the space ${\rm Hom}_{\mathcal {D}} (X, {\tau}_{\geq
x+1}N)$ vanishes, there exists a morphism $w: X \rightarrow Y$ such that $g
\circ f = v \circ w$. Therefore, the fraction above is exactly the
image of $w$ in ${\rm Hom}_{\mathcal {D}}(X,Y)$ under the quotient
functor $\pi$.
\end{proof}
Note that in the assumptions of the above lemma, we do not
necessarily suppose that the objects $X$ and $Y$ lie in some shifts
of the fundamental domain.
A special case of Lemma \ref{18} is that, if $X$ lies in ${\mathcal
{D}}^{\leq m} \cap {\rm per}A$, then the quotient functor $\pi: {\rm
per}A \rightarrow {\mathcal {C}}_A$ induces an isomorphism
$${\rm Hom}_{\mathcal {D}} (X, RA_t) \simeq {\rm Hom}_{{\mathcal {C}}_A}
(\pi (X),\pi (RA_t))$$ for any nonnegative integer $t$, where $RA_t$
belongs to $^{\mbox{\rm per}p} {\mathcal {D}}^{\leq -1}$ (that is, the lemma applies with $x = m$ and $y = -1$).
\begin{thm} \label{6} Under the assumptions of Theorem \ref{13},
for each positive integer $t$,
{\rm 1)} the image of $RA_t$
is isomorphic to the image of $RA_{t ({\rm mod} \, m+1)}$ in
${\mathcal {C}}_{\Pi}$,
{\rm 2)} the image of $LA_t$
is isomorphic to the image of $LA_{t ({\rm mod} \, m+1)}$ in
${\mathcal {C}}_{\Pi}$.
\end{thm}
\begin{proof}
We only show the first statement. Then the second one can be
obtained similarly.
Following Theorem \ref{13}, the image of $RA_{m+1}$ in ${\mathcal
{C}}_{\Pi}$ is isomorphic to $P$, which is $RA_0$ by definition. Let
us denote `$t \,({\rm mod} \, m+1)$' by $\overline{t}$. We prove the
statement by induction.
Assume that the image of $RA_t$ is isomorphic to the image of
$RA_{\overline{t}}$ in ${\mathcal {C}}_{\Pi}$. Consider the
following two triangles in ${\mathcal {D}}(\Pi)$
$$RA_{t+1} \longrightarrow A^{(t+1)} \stackrel{f^{(t+1)}} \longrightarrow RA_t \longrightarrow
{\Sigma}RA_{t+1},$$ $$RA_{\overline{t+1}} \longrightarrow A^{(\overline{t+1})}
\stackrel{f^{(\overline{t+1})}} \longrightarrow RA_{\overline{t}} \longrightarrow
{\Sigma}RA_{\overline{t+1}},$$and also consider their images in
${\mathcal {C}}_{\Pi}$. By Lemma \ref{18}, the isomorphism
$${\rm Hom}_{{\mathcal {D}}(\Pi)}(L,RA_t) \simeq {\rm Hom}_{{\mathcal
{C}}_{\Pi}}(L,\pi(RA_t))$$ holds for any object $L \in {\rm add}M$
and any nonnegative integer $t$. Hence, the images $\pi(f^{(t+1)})$
and $\pi(f^{(\overline{t+1})})$ are minimal right
(add$M$)-approximations of $\pi(RA_{t})$ and
$\pi(RA_{\overline{t}})$ in ${\mathcal {C}}_{\Pi}$, respectively. By
hypothesis, $\pi(RA_t)$ is isomorphic to $\pi(RA_{\overline{t}})$.
Therefore, the objects $A^{(t+1)}$ and $A^{(\overline{t+1})}$ are
isomorphic, and $\pi(RA_{t+1})$ is isomorphic to
$\pi(RA_{\overline{t+1}})$ in ${\mathcal {C}}_{\Pi}$. This completes the
induction and hence the proof.
\end{proof}
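For instance, for $m = 2$ the theorem says that in ${\mathcal {C}}_{\Pi}$
the images of $RA_0, RA_3, RA_6, \ldots$ all coincide (being isomorphic to
$\pi(P)$), as do the images of $RA_1, RA_4, RA_7, \ldots$ and those of
$RA_2, RA_5, RA_8, \ldots$; in other words, the sequence
$(\pi(RA_t))_{t \geq 0}$ is periodic with period dividing $m+1$.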
\begin{rem}
Section 10 in \cite{IY08} gives a class of ($2n+1$)-Calabi-Yau
triangulated categories, arising from certain Cohen-Macaulay rings, which
contain infinitely many indecomposable $2n$-cluster tilting objects; note
that this covers only even integers $2n$, not all integers $m \geq 2$.
\end{rem}
In the following, for every integer $m \geq 2$, we construct an
($m+1$)-Calabi-Yau triangulated category which contains infinitely
many indecomposable $m$-cluster tilting objects.
When $m = 2$, we use the same quiver $Q$ as in Example \ref{46}.
When $m > 2$, let $Q$ be the quiver consisting of one vertex
$\bullet$ and one loop $\alpha$ of degree $-1$.
Let $\Pi = {\widehat{\Pi}}(Q,m+2,0)$ be the associated completed
deformed preprojective dg algebra. Clearly, $\Pi$ is an
indecomposable object in the derived category ${\mathcal {D}}(\Pi)$,
the zeroth homology $H^0 \Pi$ is one-dimensional and the path
${\alpha}^s$ is a nonzero element in the homology $H^{-s} \Pi$ ($s
\in {\mathbb{N}}^{\ast}$). Let ${\mathcal {C}}_{\Pi}$ be the
generalized $m$-cluster category and $\pi: \mbox{per} \Pi \rightarrow
{\mathcal {C}}_{\Pi}$ the canonical projection functor. Set $P =
\Pi$. Then $\Pi = P \oplus 0$. For each integer $t \geq 0$, the
object $LA_t$ is isomorphic to ${\Sigma}^tP$ and the object $RA_t$
is isomorphic to ${\Sigma}^{-t}P$. Now we claim that
\begin{itemize}
\item[1)] For any two integers $r > t \geq 0$, the object
$\pi(RA_r)$ is not isomorphic to $\pi(RA_t)$ in ${\mathcal
{C}}_{\Pi}$, and the object $\pi(LA_r)$ is not isomorphic to
$\pi(LA_t)$ in ${\mathcal {C}}_{\Pi}$.
\item[2)] For any two integers $r_1, r_2 \geq 0$, the objects
$\pi(RA_{r_1})$ and $\pi(LA_{r_2})$ are not isomorphic in ${\mathcal
{C}}_{\Pi}$.
\end{itemize}
Otherwise, arguing as in Example \ref{46}, the following contradictions
appear
$$(0 = )\, {\rm Hom}_{{\mathcal {C}}_{\Pi}}(\pi(RA_t), {\Sigma}\pi(RA_t)) = {\rm Hom}_{{\mathcal
{C}}_{{\Pi}}}(\pi(RA_t),{\Sigma}\pi(RA_r)) \simeq {\rm
Hom}_{{\mathcal {C}}_{{\Pi}}}({\Sigma}^{-t}P,{\Sigma}^{1-r}P) $$ $$
\simeq {\rm Hom}_{{\mathcal {C}}_{{\Pi}}}(P,{\Sigma}^{t-r+1}P)
\simeq {\rm Hom}_{{\mathcal {D}}(\Pi)}(P,{\Sigma}^{t-r+1}P) \simeq
H^{t-r+1}{\Pi}\, ( \neq 0);$$ $$(0 = ) \, {\rm Hom}_{{\mathcal
{C}}_{{\Pi}}}(\pi(LA_r),{\Sigma}\pi(LA_r)) ={\rm Hom}_{{\mathcal
{C}}_{{\Pi}}}(\pi(LA_r),{\Sigma}\pi(LA_t)) \simeq {\rm
Hom}_{{\mathcal {C}}_{{\Pi}}}({\Sigma}^rP,{\Sigma}^{t+1}P) $$ $$
\simeq {\rm Hom}_{{\mathcal {C}}_{{\Pi}}}(P,{\Sigma}^{t-r+1}P)
\simeq {\rm Hom}_{{\mathcal {D}}(\Pi)}(P,{\Sigma}^{t-r+1}P) \simeq
H^{t-r+1}{\Pi} \,( \neq 0);$$ $$(0 = ) \, {\rm Hom}_{{\mathcal
{C}}_{{\Pi}}}(\pi(LA_{r_2}),{\Sigma}\pi(LA_{r_2})) = {\rm
Hom}_{{\mathcal {C}}_{{\Pi}}}(\pi(LA_{r_2}),{\Sigma}\pi(RA_{r_1}))
\simeq {\rm Hom}_{{\mathcal
{C}}_{{\Pi}}}({\Sigma}^{r_2}P,{\Sigma}^{1-{r_1}}P) $$ $$ \simeq {\rm
Hom}_{{\mathcal {C}}_{{\Pi}}}(P,{\Sigma}^{1-{r_1}-{r_2}}P) \simeq
{\rm Hom}_{{\mathcal {D}}(\Pi)}(P,{\Sigma}^{1-{r_1}-{r_2}}P) \simeq
H^{1-{r_1}-{r_2}}{\Pi} \,( \neq 0);$$ where the left end terms
become zero, the right end terms are nonzero since $t-r+1 \leq 0$
and $1 - {r_1} - {r_2} < 0$, and the isomorphism $${\rm
Hom}_{{\mathcal {C}}_{{\Pi}}}(P,{\Sigma}^{-s}P) \simeq {\rm
Hom}_{{\mathcal {D}}(\Pi)}(P,{\Sigma}^{-s}P)$$ holds for any $s \in
{\mathbb{N}}$ following Lemma \ref{18}.
Therefore, the ($m+1$)-Calabi-Yau triangulated category ${\mathcal
{C}}_{\Pi}$ contains infinitely many $m$-cluster tilting objects,
and the objects $\pi(RA_t)$ and $\pi(LA_r)$ do not satisfy the
relations in Theorem \ref{13} and Theorem \ref{6} in the presence of
loops.
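For $m > 2$, the nonvanishing of the homologies $H^{-s}\Pi$ ($s \geq 1$)
used above can be seen directly; we sketch the (routine) argument for this
one-loop case. Here ${\widetilde{Q}}^V$ has the arrows $\alpha$ of degree
$-1$, ${\alpha}^{\ast}$ of degree $-m+1$ and $t$ of degree $-m-1$, and
since $W = 0$ we have $d(\alpha) = 0 = d({\alpha}^{\ast})$, while every
monomial of $d(t)$ contains ${\alpha}^{\ast}$ (up to signs, $d(t)$ is the
graded commutator of $\alpha$ and ${\alpha}^{\ast}$, cf. Example
\ref{36}). Hence every monomial appearing in the image of $d$ contains
${\alpha}^{\ast}$, whereas the cycle ${\alpha}^s$ does not, so
$[{\alpha}^s] \neq 0$ in $H^{-s}\Pi$ for all $s \geq 1$.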
\section{AR ($m+3$)-angles related to $P$-indecomposables}
Let $\mathcal {T}$ be an additive Krull-Schmidt category. We denote
by $J_{\mathcal {T}}$ the {\em Jacobson radical} \cite{ARS} of
$\mathcal {T}$.
Let $f \in {\mathcal {T}} (X,Y)$ be a morphism. Then $f$ is called
(in \cite{IY08}) a {\em sink map} of $Y \in {\mathcal {T}}$ if $f$
is right minimal, $f \in J_{\mathcal {T}}$, and $${\mathcal {T}}
(-,X) \stackrel{f \cdot} \longrightarrow J_{\mathcal {T}} (-,Y) \longrightarrow 0$$ is
exact as functors on $\mathcal {T}$. The definition of {\em source
maps} is given dually.
Let $n$ be a positive integer. Given $n$ triangles in a triangulated
category,
$$X_{i+1} \stackrel{b_{i+1}} \rightarrow B_i \stackrel{a_i}\rightarrow X_i \rightarrow
{\Sigma} X_{i+1}, \quad 0 \leq i < n,$$ the complex
$$X_n \stackrel{b_{n}} \rightarrow
B_{n-1} \stackrel{b_{n-1} a_{n-1}} \longrightarrow B_{n-2} \rightarrow \ldots \rightarrow B_1
\stackrel{b_{1} a_1} \longrightarrow B_0 \stackrel{a_0} \rightarrow X_0 $$ is called
(in \cite{IY08}) an {\em $(n+2)$-angle}.
\begin{defn}[\cite{IY08}] \label{10}
Let $H$ be an $m$-cluster tilting object in a Krull-Schmidt
triangulated category. We call an $(m+3)$-angle with $X_0, X_{m+1}$
and all $B_i (0 \leq i \leq m)$ in add$H$ an {\em AR $(m+3)$-angle}
if the following conditions are satisfied
\begin{itemize}
\item[a)] $a_0$ is a sink map of $X_0$ in add$H$ and $b_{m+1}$
is a source map of $X_{m+1}$ in add$H$, and
\item[b)] $a_i$ (resp. $b_i$) is a minimal right (resp. left) (add$H$)-approximation of $X_i$ for each integer $1 \leq i \leq m$.
\end{itemize}
\end{defn}
\begin{rem}
An AR $(m+3)$-angle with right term $X_0$ (resp. left term
$X_{m+1}$) depends only on $X_0$ (resp. $X_{m+1}$) and is unique up
to isomorphism as a complex.
\end{rem}
We will use the AR angle theory to show the following theorem, which
gives a more explicit criterion than Theorem 5.8 in \cite{IY08} in our
setting.
\begin{thm} \label{16}
Let $\Pi$ be a good completed deformed preprojective dg algebra
$\widehat{\Pi}(Q,m+2,W)$ and $i$ a vertex of $Q$. Assume that the
zeroth homology $H^0 \Pi$ is finite dimensional and there are no
loops of $Q$ at vertex $i$. Then the almost complete $m$-cluster
tilting $P$-object $\Pi/{e_i\Pi}$ has exactly $m+1$ complements in
the generalized $m$-cluster category ${\mathcal {C}}_{\Pi}$.
\end{thm}
\begin{proof}
Set $RA_0 = P_i = e_i\Pi$ and $M = \Pi/{e_i\Pi}$. Section 4 gives us
a construction of iterated mutations $RA_t$ of $P_i$ in the derived
category ${\mathcal {D}}(\Pi)$, that is, the morphism $h^{(1)}: P'_0
\rightarrow P_i$ is a minimal right (add$M$)-approximation of $P_i$, and
morphisms $h^{(t+1)}: P'_t \rightarrow RA_t \, (1 \leq t \leq m)$ are
minimal right (add$A$)-approximations of $RA_{t}$ with $P'_t$ in
add$M$. Let $\mathcal {A}$ (resp. $\mathcal {M}$) denote the
subcategory add$\pi(\Pi)$ (resp. add$\pi(M)$) in the generalized
$m$-cluster category ${\mathcal {C}}_{\Pi}$.
Step 1. Since $P'_0,\,P_i$ and $M$ are in the fundamental domain,
the morphism $h^{(1)}$ can be viewed as a minimal right $\mathcal
{M}$-approximation in ${\mathcal {C}}_{\Pi}$, that is, the sequence
$${\mathcal {A}}(-,P'_0)|_{\mathcal {M}} \stackrel{h^{(1)} \cdot}\longrightarrow
{\mathcal {A}}(-,P_i)|_{\mathcal {M}} = J_{\mathcal
{A}}(-,P_i)|_{\mathcal {M}} \rightarrow 0,$$ is exact as functors on
$\mathcal {M}$. Since there are no loops of ${\overline{Q}}^V$ of
degree zero at vertex $i$, the Jacobson radical of ${\rm
End}_{\mathcal {A}}(P_i) \, (\simeq {\rm End}_{{\mathcal
{D}}(\Pi)}(P_i))$ consists of combinations of cyclic paths $p = a_1
\ldots a_r \,(r \geq 2)$ of ${\overline{Q}}^V$ of degree zero. The
path $p$ factors through $e_{s(a_1)}\Pi$ and factors through
$h^{(1)}$. Therefore, we have an exact sequence
$${\mathcal {A}}(P_i, P'_0) \stackrel{h^{(1)} \cdot}\longrightarrow {\rm rad} \, {\rm End}_{\mathcal {A}}(P_i) \longrightarrow
0.$$ Thus, the morphism $h^{(1)}$ is a sink map in the subcategory
$\mathcal {A}$.
Step 2. The morphisms $h^{(t+1)} \, (1 \leq t \leq m)$ are minimal
right (add$A$)-approximations of $RA_{t}$ with $P'_t$ in add$M$.
Since the objects $RA_t (1 \leq t \leq m)$ and $P'_t$ lie in the
shift ${\Sigma}^{-m}{\mathcal {F}}$ of the fundamental domain by
Proposition \ref{15}, the images of $h^{(t+1)}$ are minimal right
$\mathcal {A}$-approximations in ${\mathcal {C}}_{\Pi}$.
Step 3. Consider the morphisms $\alpha^{(t)}$ in the triangles of
constructing $RA_t$ in ${\mathcal {D}}(\Pi)$
$${\Sigma}^{-1}RA_{t-1} \longrightarrow RA_t \stackrel{\alpha^{(t)}}\longrightarrow P'_{t-1}
\stackrel{h^{(t)}} \longrightarrow RA_{t-1}, \quad 1 \leq t \leq m.$$
We already know that the maps $\alpha^{(t)}$ are minimal left
(add$M$)-approximations in ${\mathcal {D}}(\Pi)$. Now applying the
functor ${\rm Hom}_{{\mathcal {D}}(\Pi)}(-,P_i)$ to the above
triangles, we obtain long exact sequences $$\ldots \rightarrow {\rm
Hom}_{{\mathcal {D}}(\Pi)}(P'_{t-1},P_i) \longrightarrow {\rm Hom}_{{\mathcal
{D}}(\Pi)}(RA_t,P_i) \longrightarrow {\rm Hom}_{{\mathcal
{D}}(\Pi)}({\Sigma}^{-1}RA_{t-1},P_i) \rightarrow \ldots.$$The terms ${\rm
Hom}_{{\mathcal {D}}(\Pi)}({\Sigma}^{-1}RA_{t-1},P_i)$ are zero
since all $RA_{t-1}$ lie in $^{\mbox{\rm per}p}{{\mathcal {D}}(\Pi)}^{\leq
-1}$. Hence, the morphisms $\alpha^{(t)}$ are minimal left
(add$A$)-approximations in ${\mathcal {D}}(\Pi)$. Since the objects
$RA_t (1 \leq t \leq m)$ and $P'_t$ lie in the shift
${\Sigma}^{-m}{\mathcal {F}}$, the images of $\alpha^{(t)}$ are
minimal left $\mathcal {A}$-approximations in ${\mathcal
{C}}_{\Pi}$.
Step 4. Consider the following two triangles in ${\mathcal
{D}}(\Pi)$
$$RA_{m+1} \stackrel{\alpha^{(m+1)}}\longrightarrow P'_m \stackrel{h^{(m+1)}} \longrightarrow RA_m \longrightarrow
{\Sigma}RA_{m+1},$$ $$P_i \stackrel{g^{(1)}} \longrightarrow P'_m
\stackrel{\beta^{(1)}}\longrightarrow LA_1 \longrightarrow {\Sigma}P_i.$$ Since the
objects $P_i,\,P'_m$ and $LA_{1}$ are in the fundamental domain
$\mathcal {F}$, the second triangle can also be viewed as a triangle
in ${\mathcal {C}}_{\Pi}$ and the morphism $\beta^{(1)}$ is a
minimal right $\mathcal {M}$-approximation of $LA_1$. Note that the
objects $RA_m$ and $P'_m$ belong to ${\Sigma}^{-m}{\mathcal {F}}$.
Hence, the image of the first triangle
$$\pi(RA_{m+1}) \stackrel{\pi({\alpha}^{(m+1)})} \longrightarrow P'_m
\stackrel{\pi(h^{(m+1)})} \longrightarrow \pi(RA_m) \longrightarrow
{\Sigma}\pi(RA_{m+1})$$ is a triangle in ${\mathcal {C}}_{\Pi}$ with
$\pi(h^{(m+1)})$ a minimal right ${\mathcal {M}}$-approximation of
$\pi(RA_m)$. By Theorem \ref{13}, the image of $RA_m$ is isomorphic
to the image of $LA_1$ in ${\mathcal {C}}_{\Pi}$. Thus, the images
of these two triangles in ${\mathcal {C}}_{\Pi}$ are isomorphic. We
can also check that $g^{(1)}$ is a source map in $\mathcal {A}$, as in
Step 1. Therefore, the image $\pi(\alpha^{(m+1)})$ is also a source
map in $\mathcal {A}$ with $\pi(RA_{m+1})$ isomorphic to $P_i$ in
${\mathcal {C}}_{\Pi}$.
Step 5. Now we form the following $(m+3)$-angle in ${\mathcal
{C}}_{\Pi}$
\[
\xymatrix{P_i = \pi(RA_{m+1}) \ar[r]^{\quad \quad \quad
\varphi_{m+1}} & P'_m \ar[r]^{\varphi_m} & P'_{m-1} \ar[r] & \ldots
\ar[r] & P'_1 \ar[r]^{\varphi_1} & P'_0 \ar[r]^{\varphi_0} & P_i,
\,}
\]
where $\varphi_0$ is equal to $\pi(h^{(1)})$, the morphism
$\varphi_t \, (1 \leq t \leq m)$ is the composition
$\pi(\alpha^{(t)})\pi(h^{(t+1)})$, and $\varphi_{m+1}$ is equal to
$\pi(\alpha^{(m+1)})$. From the above four steps, we know that
$\varphi_0$ is a sink map in $\mathcal {A}$, and $\varphi_{m+1}$ is
a source map in $\mathcal {A}$. Furthermore, this $(m+3)$-angle is
the AR $(m+3)$-angle determined by $P_i$. Since the indecomposable
object $P_i$ does not belong to add$({\oplus}^m_{t=0}P'_t)$,
following Theorem 5.8 in \cite{IY08}, the almost complete
$m$-cluster tilting $P$-object $\Pi/{e_i\Pi}$ has exactly $m+1$
complements $e_i \Pi,\,\pi(RA_1), \ldots, \pi(RA_m)$ in ${\mathcal
{C}}_{\Pi}$. The proof is completed.
\end{proof}
\section{Liftable almost complete $m$-cluster tilting objects \\ for strongly ($m+2$)-Calabi-Yau case}
Keep the assumptions as in Theorem \ref{16}. Let $\Pi =
{\widehat{\Pi}(Q,m+2,W)}$. Let $Y$ be a liftable almost complete
$m$-cluster tilting object in the generalized $m$-cluster category
${\mathcal {C}}_{\Pi}$. Assume that $Z$ is a basic cofibrant silting
object in per$\Pi$ such that $\pi (Z/{Z'})$ is isomorphic to $Y$,
where $\pi: {\rm per}\Pi \rightarrow {\mathcal {C}}_{\Pi}$ is the canonical
projection and $Z'$ is an indecomposable direct summand of $Z$. Let
$A$ be the dg endomorphism algebra ${\rm Hom}^{\bullet}_{\Pi}(Z,Z)$
and $F$ the left derived functor $- \overset{L}{\otimes}_A Z$. From
the proof of Theorem \ref{25}, we know that $F$ is a Morita
equivalence from ${\mathcal {D}}(A)$ to ${\mathcal {D}}(\Pi)$ and
$A$ satisfies Assumptions \ref{23}. We denote the truncated dg
subalgebra ${\tau}_{\leq 0}A$ by $E$. Since $A$ has its homology
concentrated in nonpositive degrees, the canonical inclusion $E
\hookrightarrow A$ is a quasi-isomorphism. Then the left derived
functor $- \overset{L}{\otimes}_{E}A$ is a Morita equivalence from
${\mathcal {D}}(E)$ to ${\mathcal {D}}(A)$.
\begin{thm}[\cite{Ke96}] \label{33}
Let $l$ be a commutative ring. Let $B$ and $B'$ be two dg
$l$-algebras and $X$ a dg $B$-$B'$-bimodule which is cofibrant over
$B$. Assume that $B$ and $B'$ are flat as dg $l$-modules and $$-
\overset{L}{\otimes}_{B'} X: {\mathcal {D}}(B') \rightarrow {\mathcal
{D}}(B)$$ is an equivalence. Then the dg algebras $B$ and $B'$ have
isomorphic cyclic homology and isomorphic Hochschild homology.
\end{thm}
A corollary of Theorem \ref{33} is that $B'$ is strongly
($m+2$)-Calabi-Yau if and only if so is $B$.
The object $Z$ is canonically a $k$-module, so the dg algebras $A$
and $E$ are $k$-algebras. Thus, the derived equivalent dg algebras
$\Pi, A$ and $E$ are flat as dg $k$-modules. Following Remark
\ref{32} and Theorem \ref{33}, the dg algebras $A$ and $E$ are also
strongly ($m+2$)-Calabi-Yau.
We will show that the dg algebra $E$ satisfies the assumption in
Theorem \ref{30}, that is $E$ lies in $PCAlgc(l')$ for some finite
dimensional separable commutative $k$-algebra $l'$. In fact, $l' =
{\prod}_{|Z|} k$, where $|Z|$ is the number of indecomposable direct
summands of $Z$ in per$\Pi$. Furthermore, from the following lemma,
we can deduce that $l' = l$.
\begin{lem} \label{38}
Suppose that $B$ is a dg algebra whose homology vanishes in positive
degrees. Then all basic cofibrant silting objects have the same number
of indecomposable direct summands in ${\rm per}B$.
\end{lem}
\begin{proof}
The triangulated category per$B$ contains an additive subcategory
${\mathcal {B}} :=$ add$B$. Since the dg algebra $B$ has its
homology concentrated in nonpositive degrees, it follows that $${\rm
Hom}_{{\rm per}B} ({\mathcal {B}},{\Sigma}^p{\mathcal {B}}) = 0,
\quad \, p > 0.$$ Since the category per$B$, which consists of the
compact objects in ${\mathcal {D}}(B)$, and the category add$B$ are
both idempotent split, by Proposition 5.3.3 of \cite{Bon}, the
isomorphism
$$K_0 ({\rm per}B) \simeq K_0 ({\rm add}B)$$ holds, where $K_0 (-)$
denotes the Grothendieck group.
Let $Z$ be any basic cofibrant silting object in per$B$ and $B'$ its
dg endomorphism algebra ${\rm Hom}^{\bullet}_B(Z,Z)$. Then $B'$ has
its homology concentrated in nonpositive degrees and per$B'$ is
triangle equivalent to per$B$. Therefore, we have $K_0 ({\rm per}B')
\simeq K_0 ({\rm add}B')$ and $K_0 ({\rm per}B') \simeq K_0 ({\rm
per}B)$. As a consequence, the following isomorphisms hold
$$K_0 ({\rm add}B) \simeq K_0 ({\rm add}B') \simeq K_0 ({\rm
add}Z).$$ Thus, any basic cofibrant silting object in ${\rm per}B$
has the same number of indecomposable direct summands as that of the
dg algebra $B$ itself.
\end{proof}
Forgetting the grading, the dg algebra $E$ becomes $E^u
:= Z^0 A \oplus ({\prod}_{r < 0} A^r)$, where $Z^0 A ( = {\rm
Hom}_{{\mathcal {C}}(\Pi)}(Z,Z))$ consists of the zeroth cycles of
$A$. For any $x \in {\prod}_{r < 0}A^r$, the element $1+x$ clearly
has an inverse element. It follows that ${\prod}_{r<0}A^r$ is
contained in rad($E^u$). We have the following canonical short exact
sequence
$$0 \rightarrow B^0A \rightarrow Z^0A \stackrel{p}\rightarrow H^0A \rightarrow 0,$$where $B^0A$ is a
two-sided ideal of the algebra $Z^0A$ consisting of the zeroth
boundaries of $A$.
By Lemma \ref{34}, we may assume without loss of generality that the
basic silting object $Z$ is a minimal perfect dg $\Pi$-module.
\begin{lem}
Keep the above notation and suppose that $Z$ is a minimal perfect dg
$\Pi$-module. Then $B^0A$ lies in the radical of $Z^0A$.
\end{lem}
\begin{proof}
Let $f$ be an element in $B^0A$. Then $f$ is of the form $d_Z h + h
d_Z$ for some degree $-1$ morphism $h: Z \rightarrow Z$. Since $Z$ is
minimal perfect, the entries of $f$ lie in the ideal $\mathfrak{m}$
generated by the arrows of $\widetilde{Q}^V$. Then for any morphism
$g: Z \rightarrow Z$, the morphism $1_Z - gf$ admits the inverse $1_Z + gf +
(gf)^2 + \ldots$, which converges because the entries of $gf$ lie in
$\mathfrak{m}$ and $\Pi$ is complete. Similarly for the morphism
$1_Z - fg$. It follows
that $f$ lies in the radical of the algebra $Z^0A$. This completes
the proof.
\end{proof}
The epimorphism $p$ in the above short exact sequence induces an
epimorphism $$\overline{p}: Z^0A/{{\rm rad}(Z^0A)} \rightarrow H^0A/{{\rm
rad}(H^0A)}.$$ Since $B^0A$ lies in the radical of $Z^0A$, the
epimorphism $\overline{p}$ is an isomorphism. Therefore, the
following isomorphisms $$E^u/{{\rm rad}(E^u)} \simeq Z^0A/{{\rm
rad}(Z^0A)} \simeq H^0A/{{\rm rad}(H^0A)}$$ are true. Note that
per$\Pi$ is Krull-Schmidt and Hom-finite. Since, for each indecomposable
direct summand $Z_i$ of $Z$, the algebra $E_i :=
{\rm End}_{{\rm per}\Pi}(Z_i)$ is local and $k$ is algebraically
closed, the quotient $E_i/{{\rm rad}(E_i)}$ is isomorphic to $k$.
Then we have that
$$H^0A/{{\rm rad}(H^0A)} \simeq {\rm End}_{\mbox{per}\Pi}(Z)/{{\rm rad}({\rm End}_{\mbox{per}\Pi}(Z))}
\simeq {\prod}_{|Z|} E_i/{{\rm rad}(E_i)} \simeq {\prod}_{|Z|}k \, ( = l).$$
Hence, the dg algebra $E$ lies in $PCAlgc(l)$. Therefore, $E$ is
quasi-isomorphic to some good completed deformed preprojective dg
algebra $\widehat{\Pi}(Q',m+2,W')$ (denoted by $\Pi'$). Moreover,
$H^0 \Pi'$ is equal to $H^0A$ which is finite dimensional.
The following diagram
\[
\xymatrix{ {\rm per}\Pi' \ar[r]^{- \overset{L}\otimes_{\Pi'}E}
\ar[d] & {\rm per}E \ar[r]^{- \overset{L}\otimes_{E}A} \ar[d] & {\rm
per}A \ar[r]^{- \overset{L}\otimes_{A}Z} \ar[d] & {\rm per}\Pi
\ar[d] \\
{\mathcal {C}}_{\Pi'} \ar[r] & {\mathcal {C}}_E \ar[r]& {\mathcal
{C}}_A \ar[r] & {\mathcal {C}}_{\Pi}}
\]
is commutative, where each functor in the rows is an equivalence and
each functor in a column is the canonical projection. The preimage
of $Z$ in ${\rm per}\Pi'$, under the equivalence $F$ given by the
composition of the functors in the top row, is $\Pi'$. Let $\Pi'_0 =
e_j \Pi'$ be the $P$-indecomposable dg $\Pi'$-module such that
$F(\Pi'_0) = Z'$ in per$\Pi$, where $j$ is a vertex of $Q'$. Assume
that there are no loops of $Q'$ at vertex $j$. It follows from
Theorem \ref{16} that the almost complete $m$-cluster tilting
$P$-object $\Pi'/{\Pi'_0}$ has exactly $m+1$ complements in
${\mathcal{C}}_{\Pi'}$. Note that the image of $\Pi'/{\Pi'_0}$ in
${\mathcal {C}}_{\Pi}$, under the equivalence given by the
composition of the functors in the bottom row, is $Y$. Therefore,
the liftable almost complete $m$-cluster tilting object $Y$ has
exactly $m+1$ complements in ${\mathcal{C}}_{\Pi}$.
We summarize the preceding discussion in the following theorem.
\begin{thm} \label{35}
Let $\Pi$ be a good completed deformed preprojective dg algebra
$\widehat{\Pi}(Q,m+2,W)$ whose zeroth homology $H^0 \Pi$ is finite
dimensional. Let $Z$ be a basic silting object in per$\Pi$ which is
minimal perfect and cofibrant. Denote by $E$ the dg algebra
$\tau_{\leq 0}({\rm Hom}^{\bullet}_{\Pi}(Z,Z))$. Then
\begin{itemize}
\item[1)] $E$ is quasi-isomorphic to some good completed deformed preprojective
dg algebra $\Pi' = {\widehat{\Pi}}(Q',m+2,W')$, where the quiver
$Q'$ has the same number of vertices as $Q$ and $H^0 \Pi'$ is finite
dimensional;
\item[2)] let $Y$ be a liftable almost complete $m$-cluster
tilting object of the form $\pi(Z/Z')$ in ${\mathcal{C}}_{\Pi}$ for
some indecomposable direct summand $Z'$ of $Z$. If we further assume
that there are no loops at the vertex $j$ of $Q'$, where $e_j \Pi'
\overset{L} \otimes_{\Pi'} Z = Z'$, then $Y$ has exactly $m+1$
complements in ${\mathcal{C}}_{\Pi}$.
\end{itemize}
\end{thm}
Here we would like to point out a special case of the above theorem,
namely $m=1$ and $Z = LA_1^{(k)}$ with respect to some vertex $k$ of
$Q$. Let $(Q^{\star}, W^{\star})$ denote the (reduced) mutation
${\mu_k(Q,W)}$ defined in \cite{DWZ} of the quiver with potential
$(Q,W)$ at vertex $k$. Let $A$ be the dg endomorphism algebra ${\rm
Hom}_{\Pi}^{\bullet}(Z,Z)$ and $\Pi^{\star}$ the good completed
deformed preprojective dg algebra
${\widehat{\Pi}}(Q^{\star},m+2,W^{\star})$. By \cite{KY09}, there is
a canonical morphism from $\Pi^{\star}$ to $A$. Define three functors
as follows:
$$F = - \overset{L} \otimes_{\Pi^{\star}} Z, \quad \, F_1 = - \overset{L} \otimes_{\Pi^{\star}} A, \quad \, F_2 = - \overset{L} \otimes_{A} Z.$$
Clearly, we have that $F = F_2 F_1$ and $F_2$ is an equivalence. It was shown in \cite{KY09} that $F$ is an equivalence, hence so is $F_1$. The
following isomorphisms
$$H^{n}(\Pi^{\star}) \simeq {\rm Hom}_{\Pi^{\star}} (\Pi^{\star}, \Sigma^n \Pi^{\star}) \simeq {\rm Hom}_A(A, \Sigma^n A) \simeq H^{n}A$$
become true, which implies that $\Pi^{\star}$ and $A$ are quasi-isomorphic. Therefore, the quiver with potential $(Q',W')$ appearing in Theorem \ref{35} 1)
for this special case can be chosen as $\mu_k (Q,W)$.
To close this section, we state a `reasonable' conjecture about the
no-loop assumption in the above theorem for completed deformed
preprojective dg algebras.
\begin{defn}
Let $r$ be a positive integer. An algebra $A \in PCAlgc(l)$ is said
to be {\em $r$-rigid} if
$$HH_0 (A) \simeq l, \quad {\rm {and}} \quad HH_p (A)= 0 \,\, (1
\leq p \leq r-1),$$where $HH_{\ast}(A)$ is the pseudo-compact
version of the Hochschild homology of the dg algebra $A$.
\end{defn}
\begin{rem} For completed Ginzburg algebras associated to quivers with potentials, our definition of $1$-rigidity coincides
with the definition of rigidity in \cite{DWZ}. Proposition 8.1 in
\cite{DWZ} states that any rigid reduced quiver with potential is
$2$-acyclic. Then no loops will be produced following their
mutation rule. Although
we do not know whether the quiver $Q'$ related to such a silting
object as in Theorem \ref{35} can be obtained from mutation of
quivers with potentials, we can still show that, under the $1$-rigidity
condition, the quiver $Q'$ never contains loops (see Corollary \ref{41}).
\end{rem}
\begin{prop}\label{42}
The completed deformed preprojective dg algebras $\Pi =
{\widehat{\Pi}(Q,m+2,0)}$ associated to acyclic quivers $Q$ are
$m$-rigid.
\end{prop}
\begin{proof}
It is clear that the zeroth component ${\Pi}^0$ of $\Pi$ is just the
finite dimensional path algebra $kQ$ (denoted by $B$) and the
$(-p)$th component of $\Pi$ is zero for $1 \leq p \leq m-1$. Thus,
the Hochschild homology of $\Pi$ is given by
\begin{center}
$HH_0(\Pi) = B/{[B, B]} = {\prod}_{|Q_0|}k,$ \\
$HH_p(\Pi) = HH_p(B) = ker({\partial}^0_p)/
\mbox{Im}({\partial}^0_{p+1}) \quad (1 \leq p \leq m-1),$
\end{center}
where ${\partial}^0_p: B^{{\otimes}_k (p+1)} \rightarrow B^{{\otimes}_k p}$
is the $p$th row differential of the uppermost row in the Hochschild
complex $X := \Pi \overset{L}{\otimes}_{{\Pi}^e} \Pi$.
Since the path algebra $kQ$ is of finite dimension and of finite
global dimension and $k$ is algebraically closed, we have $HH_p(B) =
0$ for all integers $p > 0$, cf. Proposition 2.5 of \cite{Ke96}. It
follows that the dg algebra ${\widehat{\Pi}(Q,m+2,0)}$ is $m$-rigid.
\end{proof}
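As a small illustration of the first formula in the proof, consider the
toy quiver $Q$ with two vertices and one arrow $\gamma$ from the first
vertex to the second. The path algebra $B = kQ$ has basis
$\{e_1, e_2, \gamma\}$, and the commutator of $e_2$ and $\gamma$ equals
$\pm\gamma$ (depending on the composition convention), so
$[B,B] = \langle \gamma \rangle$ and
$$HH_0(B) = B/{[B, B]} = \langle \overline{e_1}, \overline{e_2} \rangle
\simeq k \times k = {\prod}_{|Q_0|}k.$$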
\begin{prop}\label{40}
Let $\Pi = {\widehat{\Pi}(Q,m+2,W)}$ be a good completed deformed
preprojective dg algebra and $p$ a fixed integer in the segment $[0,
m]$. Suppose the $p$-th Hochschild homology of $\Pi$ satisfies the
isomorphism
\begin{displaymath}
HH_p (\Pi) \simeq \left\{ \begin{array}{ll}
{\prod}_{|Q_0|} k & \textrm{if}\,\, p=0,\\
0 & \textrm{if} \,\, p \neq 0.
\end{array} \right.
\end{displaymath}
Then ${\overline{Q}}^V$ does not contain loops with zero
differential and of degree $-p$.
\end{prop}
\begin{proof}
Let $a$ be a loop of ${\overline{Q}}^V$ at some vertex $i$ with zero
differential and of degree $-p$. The element $a$ lies in the
rightmost column of the Hochschild complex $X$ of $\Pi$. By
assumption the differential $d(a)$ is zero, so $a$ is an element in
$HH_p(\Pi)$. Now we claim that $a$ is a nonzero element in
$HH_p(\Pi)$.
First, the superpotential $W$ is a linear combination of paths of
length at least 3, so $d(\widetilde{Q}_1^V) \subseteq
{\mathfrak{m}}^2$, where $\mathfrak{m}$ is the two-sided ideal of
$\Pi$ generated by the arrows of $\widetilde{Q}^V$. Second, it is
obvious that the relation $\mbox{Im} {\partial}_1 \cap \{
\mbox{loops\, of}\,\,{\widetilde{Q}}^V \} = \emptyset$ holds.
Therefore, the loop $a$ cannot be written in the form $\sum
d(\gamma) + \sum
\partial_1 (u \otimes v)$ for paths $\gamma \in e_i {\mathfrak{m}}
e_i$ and $u, v$ paths of ${\widetilde{Q}}^V$, which means that $a$
is a nonzero element in $HH_p(\Pi)$.
Note that the trivial paths associated to the vertices of $Q$ are
nonzero elements in $HH_0 (\Pi)$. Hence, we get a contradiction to
the isomorphism in the assumption. As a result, the quiver
${\overline{Q}}^V$ does not contain loops with zero differential and
of degree $-p$.
\end{proof}
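Read contrapositively, the proposition can be illustrated on the completed
algebra ${\widehat{\Pi}}(Q,4,0)$ built from the quiver of Example
\ref{36} (so $m = 2$): its quiver carries the loop $a$ of degree $-1$
with $d(a) = 0$, so the first Hochschild homology of this algebra cannot
satisfy the vanishing in the assumption; indeed, the argument above
exhibits $a$ as a nonzero class in $HH_1$.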
\begin{cor}\label{41}
Keep the notation as in Theorem \ref{35} and let $m = 1$. Suppose
that $\Pi$ is $1$-rigid. Then the new quiver $Q'$ does not contain
loops.
\end{cor}
\begin{proof}
It follows from statement 1) in Theorem \ref{35} that $E$ is
quasi-isomorphic to some good completed deformed preprojective dg
algebra $\Pi' = {\widehat{\Pi}}(Q',3,W')$. Then following Theorem
\ref{33} and the analysis before Theorem \ref{35}, we can obtain
that the dg algebras $\Pi$ and $\Pi'$ have isomorphic Hochschild
homology. Therefore, the new dg algebra $\Pi'$ is also $1$-rigid.
Note that every arrow of $Q'$ has zero degree and thus has zero
differential. Hence, by Proposition \ref{40} the quiver $Q'$ does
not contain loops.
\end{proof}
\begin{conj}\label{43}
Let $\Pi = {\widehat{\Pi}(Q,m+2,W)}$ be an $m$-rigid good completed
deformed preprojective dg algebra whose zeroth homology $H^0 \Pi$ is
finite dimensional. Then any liftable almost complete $m$-cluster
tilting object has exactly $m+1$ complements in ${\mathcal
{C}}_{\Pi}$.
\end{conj}
Following the same procedure as in the proof of Corollary \ref{41},
we know that the good completed deformed preprojective dg algebra
$\Pi' = {\widehat{\Pi}(Q',m+2,W')}$ in Theorem \ref{35} is also
$m$-rigid, and the new quiver $Q'$ does not contain loops of degree
zero. It would be desirable to obtain a stronger result than
Proposition \ref{40}, namely that $m$-rigidity implies that
${\overline{Q'}}^V$ does not contain loops (not only loops with zero
differential). If this is true, then it follows from statement 2) in
Theorem \ref{35} that any liftable almost complete $m$-cluster
tilting object has exactly $m+1$ complements in ${\mathcal
{C}}_{\Pi}$.
If Conjecture \ref{43} holds, then the $m$-rigidity property shown
in Proposition \ref{42} of the dg algebra $\Pi =
{\widehat{\Pi}}(Q,m+2,0)$ with $Q$ an acyclic quiver implies that
any liftable almost complete $m$-cluster tilting object in
${\mathcal {C}}_{\Pi}$ has exactly $m+1$ complements. Later
Proposition \ref{44} shows that any almost complete $m$-cluster
tilting object in ${\mathcal {C}}_{\Pi}$ is liftable in the `acyclic
quiver' case. Thus, on one hand, if Conjecture \ref{43} holds, we
can deduce a common result both in \cite{W} and \cite{ZZ}, namely,
any almost complete $m$-cluster tilting object in the classical
$m$-cluster category ${\mathcal {C}}_Q^{(m)}$ has exactly $m+1$
complements. On the other hand, it follows from this common result
for the classical $m$-cluster category ${\mathcal {C}}_Q^{(m)}$,
which is triangle equivalent to the corresponding generalized
$m$-cluster category ${\mathcal {C}}_{\Pi}$, that any almost
complete $m$-cluster tilting object in ${\mathcal {C}}_{\Pi}$ should
have exactly $m+1$ complements.
\section{A long exact sequence and the acyclic case}
Let $A$ be a dg algebra satisfying Assumptions \ref{23}. In the
first part of this section, we give a long exact sequence to see the
relations between extension spaces in generalized $m$-cluster
categories ${\mathcal {C}}_A$ and extension spaces in derived
categories ${\mathcal {D}} (= {\mathcal {D}}(A))$. If the extension
spaces between two objects of ${\mathcal {C}}_A$ are zero, in some
cases, we can deduce that the extension spaces between these two
objects are also zero in the derived category $\mathcal {D}$.
\begin{prop} \label{14}
Suppose that $X$ and $Y$ are two objects in the fundamental domain
$\mathcal {F}$. Then there is a long exact sequence $$0 \rightarrow {\rm
Ext}^1_{\mathcal {D}}(X,Y) \rightarrow {\rm Ext}^1_{{\mathcal {C}}_A}(X,Y)
\rightarrow D{\rm Ext}^m_{\mathcal {D}}(Y,X)$$ $$ \rightarrow {\rm Ext}^2_{\mathcal
{D}}(X,Y) \rightarrow {\rm Ext}^2_{{\mathcal {C}}_A}(X,Y) \rightarrow D{\rm
Ext}^{m-1}_{\mathcal {D}}(Y,X)
$$ $$\rightarrow \quad \cdots \quad\quad \cdots \quad \rightarrow$$ $${\rm
Ext}^m_{\mathcal {D}}(X,Y) \rightarrow {\rm Ext}^m_{{\mathcal {C}}_A}(X,Y)
\rightarrow D{\rm Ext}^1_{\mathcal {D}}(Y,X) \rightarrow 0 .$$
\end{prop}
\begin{proof}
We have the canonical triangle
$${\tau}_{\leq -m}X \rightarrow X \rightarrow {\tau}_{\geq 1-m}X \rightarrow \Sigma ({\tau}_{\leq
-m}X) ,$$ which yields the long exact sequence $$\cdots \rightarrow {\rm
Hom}_{\mathcal {D}}({\Sigma}^{-t} ({\tau}_{\geq 1-m}X), Y) \rightarrow {\rm
Hom}_{\mathcal {D}}({\Sigma}^{-t} X, Y) \rightarrow {\rm Hom}_{\mathcal
{D}}({\Sigma}^{-t} ({\tau}_{\leq -m}X), Y) \rightarrow \cdots , \quad t \in
\mathbb{Z}.$$
Step 1. {\it The isomorphism $${\rm Hom}_{\mathcal
{D}}({\Sigma}^{-t}({\tau}_{\geq 1-m}X),Y) \simeq D{\rm
Hom}_{\mathcal {D}}(Y,{\Sigma}^{m+2-t}X)$$ holds when $t \leq m+1$.}
By the Calabi-Yau property, there holds the isomorphism
$${\rm Hom}_{\mathcal {D}}({\Sigma}^{-t}({\tau}_{\geq 1-m}X),Y) \simeq
D{\rm Hom}_{\mathcal {D}}(Y,{\Sigma}^{m+2-t}({\tau}_{\geq 1-m}X)),
\quad t \in \mathbb{Z}. \quad \quad {\rm (8.1)}$$ Applying the
functor ${\rm Hom}_{\mathcal {D}}(Y,-)$ to the triangle which we
start with, we obtain the exact sequence
$$(Y,{\Sigma}^{m+2-t}({\tau}_{\leq -m}X)) \rightarrow
(Y,{\Sigma}^{m+2-t}X) \rightarrow (Y,{\Sigma}^{m+2-t}({\tau}_{\geq 1-m}X))
\rightarrow (Y,{\Sigma}^{m+3-t}({\tau}_{\leq -m}X)),$$where $(-,-)$ denotes
${\rm Hom}_{\mathcal {D}}(-,-)$. When $t \leq m+1$, we have that
$(-m)-(m+2-t) \leq -m-1$. Then the objects
${\Sigma}^{m+2-t}({\tau}_{\leq -m}X)$ and
${\Sigma}^{m+3-t}({\tau}_{\leq -m}X)$ belong to ${\mathcal
{D}}^{\leq -m-1}$. Note that $Y$ is in
$^{\mbox{\rm per}p}{\mathcal{D}}^{{\leq} {-m-1}}$. Therefore, the following
isomorphism holds
$$\quad \quad \quad \quad {\rm Hom}_{\mathcal {D}}(Y,{\Sigma}^{m+2-t}({\tau}_{\geq
1-m}X)) \simeq {\rm Hom}_{\mathcal {D}}(Y,{\Sigma}^{m+2-t}X). \quad
\quad \quad \quad \quad {\rm (8.2).}$$ As a consequence, when $t
\leq m+1$, together by (8.1) and (8.2), we have the isomorphism
$${\rm Hom}_{\mathcal {D}}({\Sigma}^{-t}({\tau}_{\geq 1-m}X),Y)
\simeq D{\rm Hom}_{\mathcal {D}}(Y,{\Sigma}^{m+2-t}X).$$ Moreover,
if $t \leq 1$, the object ${\Sigma}^{m+2-t}X$ belongs to ${\mathcal
{D}}^{\leq -m-1}$, so the space ${\rm Hom}_{\mathcal
{D}}(Y,{\Sigma}^{m+2-t}X)$ vanishes, and so does the space ${\rm
Hom}_{\mathcal {D}}({\Sigma}^{-t}({\tau}_{\geq 1-m}X),Y)$.
Step 2. {\it When $t \leq m$, we have the following isomorphism
$${\rm Hom}_{\mathcal {D}}({\Sigma}^{-t}({\tau}_{\leq -m}X),Y)
\simeq {\rm Hom}_{{\mathcal {C}}_A}(\pi X,{\Sigma}^t(\pi Y)).$$}
Consider the triangles $${\tau}_{\leq s-1}X \rightarrow {\tau}_{\leq s}X \rightarrow
{\Sigma}^{-s}(H^sX) \rightarrow {\Sigma}({\tau}_{\leq s-1}X) , \quad \quad s
\in \mathbb{Z}.$$ Applying the functor ${\rm Hom}_{\mathcal
{D}}(-,Y)$ to these triangles, we can obtain the following long
exact sequences $$\cdots \rightarrow ({\Sigma}^{-s-t}(H^sX),Y) \rightarrow
({\Sigma}^{-t}({\tau}_{\leq s}X),Y) \rightarrow ({\Sigma}^{-t}({\tau}_{\leq
s-1}X),Y) \rightarrow ({\Sigma}^{-s-t-1}(H^sX),Y) \rightarrow \cdots,$$where $(-,-)$
denotes ${\rm Hom}_{\mathcal {D}}(-,-)$. Using the Calabi-Yau
property, we have that $${\rm Hom}_{\mathcal
{D}}({\Sigma}^{-s-t}(H^sX),Y) \simeq D{\rm Hom}_{\mathcal
{D}}(Y,{\Sigma}^{m+2-s-t}(H^sX)) , \quad \quad t \in \mathbb{Z}.$$
When $t \leq -s$, the inequality $m+2-s-t-1 \geq m+1$ holds. So the
two objects ${\Sigma}^{m+2-s-t}(H^sX)$ and
${\Sigma}^{m+2-s-t-1}(H^sX)$ belong to ${\mathcal {D}}^{\leq -m-1}$.
Therefore, ${\rm Hom}_{\mathcal {D}}({\Sigma}^{-s-t}(H^sX),Y)$ and
the space ${\rm Hom}_{\mathcal {D}}({\Sigma}^{-s-t-1}(H^sX),Y)$ are
zero, and the following isomorphism
$${\rm Hom}_{\mathcal {D}}({\Sigma}^{-t}({\tau}_{\leq s}X),Y)
\simeq {\rm Hom}_{\mathcal {D}}({\Sigma}^{-t}({\tau}_{\leq
s-1}X),Y)$$ holds. As a consequence, we can get the following
isomorphisms $${\rm Hom}_{\mathcal {D}}({\Sigma}^{-t}({\tau}_{\leq
-t}X),Y) \simeq {\rm Hom}_{\mathcal {D}}({\Sigma}^{-t}({\tau}_{\leq
-t-1}X),Y) \simeq \cdots \simeq {\rm Hom}_{\mathcal
{D}}({\Sigma}^{-t}({\tau}_{\leq -m}X),Y),\, t \leq m. \, {\rm
(8.3)}$$ Since the functor $\pi : {\rm per}A \rightarrow {\mathcal {C}}_A$
induces an equivalence from ${\Sigma}^{t}{\mathcal {F}}$ to
${\mathcal {C}}_A$ (Proposition \ref{3} applied to the shifted
$t$-structure), the following bijections hold
$${\rm Hom}_{{\mathcal {C}}_A}(\pi X,{\Sigma}^t(\pi Y)) \simeq {\rm Hom}_{{\mathcal
{C}}_A}(\pi ({\tau}_{\leq -t}X),\pi ({\Sigma}^{t}Y)) \simeq {\rm
Hom}_{\mathcal {D}}({\tau}_{\leq -t}X, {\Sigma}^t Y).\, \quad \quad
\quad {\rm (8.4)}$$ Hence, when $t \leq m$, combining (8.3) and
(8.4), we have the isomorphism
$${\rm Hom}_{\mathcal {D}}({\Sigma}^{-t}({\tau}_{\leq -m}X),Y)
\simeq {\rm Hom}_{{\mathcal {C}}_A}(\pi X,{\Sigma}^t(\pi Y)).$$
Therefore, the long exact sequence at the beginning becomes $$0 =
{\rm Hom}_{\mathcal {D}}({\Sigma}^{-1}({\tau}_{\geq 1-m}X),Y) \rightarrow
{\rm Ext}^1_{\mathcal {D}}(X,Y) \rightarrow {\rm Ext}^1_{{\mathcal
{C}}_A}(X,Y) \rightarrow D{\rm Ext}^m_{\mathcal {D}}(Y,X)$$ $$ \rightarrow {\rm
Ext}^2_{\mathcal {D}}(X,Y) \rightarrow {\rm Ext}^2_{{\mathcal {C}}_A}(X,Y)
\rightarrow D{\rm Ext}^{m-1}_{\mathcal {D}}(Y,X)
$$ $$\rightarrow \quad \cdots \quad\quad \cdots \quad \rightarrow$$ $${\rm
Ext}^m_{\mathcal {D}}(X,Y) \rightarrow {\rm Ext}^m_{{\mathcal {C}}_A}(X,Y)
\rightarrow D{\rm Ext}^1_{\mathcal {D}}(Y,X) \rightarrow {\rm Hom}_{\mathcal
{D}}({\Sigma}^{-m-1}X,Y) = 0 .$$ This concludes the proof.
\end{proof}
\begin{rems}
1) When $m =1$, the long exact sequence in Proposition \ref{14}
becomes the following short exact sequence (already appearing in the
proof of Proposition \ref{39})
$$\quad \quad 0 \rightarrow {\rm Ext}^1_{\mathcal {D}}(X,Y) \rightarrow {\rm
Ext}^1_{{\mathcal {C}}_A}(X,Y) \rightarrow D{\rm Ext}^1_{\mathcal {D}}(Y,X)
\rightarrow 0 \quad \quad {\rm (8.5)},$$ which was presented in \cite{Am08}
for the Hom-finite $2$-Calabi-Yau case, and also was presented in
\cite{Pla} for the Jacobi-infinite $2$-Calabi-Yau case.
2) If $T$ is an object in the fundamental domain $\mathcal {F}$
satisfying $${\rm Ext}^{i}_{\mathcal {D}}(T,T) = 0, \quad i=1,
\ldots, m,$$ then the long exact sequence in Proposition \ref{14}
implies that the spaces ${\rm Ext}^i_{{\mathcal {C}}_A}(T,T)$ also
vanish for integers $1 \leq i \leq m$.
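Indeed, for each such $i$ the long exact sequence in Proposition \ref{14} contains the segment
$$0 = {\rm Ext}^i_{\mathcal {D}}(T,T) \rightarrow {\rm Ext}^i_{{\mathcal
{C}}_A}(T,T) \rightarrow D{\rm Ext}^{m+1-i}_{\mathcal {D}}(T,T) = 0,$$
so the middle term vanishes.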
\end{rems}
Suppose that $X$ and $Y$ are two objects in the fundamental domain.
It is clear that ${\rm Ext}^i_{\mathcal {D}}(X,Y)$ vanishes when $i
> m$, since $X$ belongs to $\mathcal {F}$ and ${\Sigma}^i Y$ lies in ${\mathcal {D}}^{\leq
-m-1}$. Now we assume that the spaces ${\rm Ext}^i_{{\mathcal
{C}}_A}(X,Y)$ are zero for integers $1 \leq i \leq m$. What about
the extension spaces ${\rm Ext}^i_{\mathcal {D}}(X,Y)$ in the
derived category? Do they always vanish?
When $m=1$, the short exact sequence (8.5) implies that the space
${\rm Ext}^1_{\mathcal {D}}(X,Y)$ vanishes.
When $m > 1$, we will give the answer for completed Ginzburg dg
categories (the same as completed deformed preprojective dg algebras
in this case) arising from acyclic quivers.
\begin{prop}\label{21}
Let $Q$ be an acyclic quiver. Let $\Gamma$ be the completed Ginzburg
dg category ${\widehat{\Gamma}_{m+2}}(Q,0)$ and ${\mathcal
{C}}_{\Gamma}$ the generalized $m$-cluster category. Suppose that
$X$ and $Y$ are two objects in the fundamental domain $\mathcal {F}$
which satisfy
$${\rm Ext}^i_{{\mathcal {C}}_{\Gamma}}(X,Y) = 0, \quad i = 1, \ldots, m.$$
Then the extension spaces ${\rm Ext}^i_{{\mathcal
{D}}(\Gamma)}(X,Y)$ vanish for all positive integers $i$.
\end{prop}
\begin{proof}
Let $B$ be the path algebra $kQ$ and $\Omega$ the inverse dualizing
complex $R {\rm Hom}_{B^e} (B, B^e)$. Set $\Theta =
{\Sigma}^{m+1}\Omega$. Then the $(m+2)$-Calabi-Yau completion
\cite{Ke09} of $B$ is the tensor dg category $${\Pi}_{m+2}(B) =
T_B(\Theta) = B \oplus \Theta \oplus (\Theta \otimes_B \Theta)
\oplus \ldots .$$ Theorem 6.3 in \cite{Ke09} shows that
${\Pi}_{m+2}(B)$ is quasi-isomorphic to the completed Ginzburg dg
category $\Gamma$. Thus, we can write $\Gamma$ as
$$\Gamma = B \oplus \Theta \oplus (\Theta \overset{L}\otimes_B
\Theta) \oplus \ldots = {\oplus}_{p \geq
0}{\Theta}^{\overset{L}\otimes_B p}.$$ Let $X', \, Y'$ be two
objects in ${\mathcal {D}}_{fd}(B)$. The following isomorphisms hold
$${\rm Hom}_{\mathcal {D}(\Gamma)}(X'\overset{L}\otimes_B\Gamma,
Y'\overset{L}\otimes_B\Gamma) \simeq {\rm Hom}_{{\mathcal
{D}}(B)}(X', Y'\overset{L}\otimes_B\Gamma|_B) \simeq {\rm
Hom}_{{\mathcal {D}}(B)}(X', Y' \overset{L}\otimes_B({\oplus}_{p
\geq 0}{\Theta}^{\overset{L}\otimes_B p}))$$ $$ \simeq {\rm
Hom}_{{\mathcal {D}}(B)}(X', \oplus_{p \geq 0}
(Y'\overset{L}\otimes_B{\Theta}^{\overset{L}\otimes_B p})) \simeq
\oplus_{p \geq 0}{\rm Hom}_{{\mathcal {D}}(B)}(X',
Y'\overset{L}\otimes_B{\Theta}^{\overset{L}\otimes_B p}).$$ By Lemma
\ref{24}, the category ${\mathcal {D}}_{fd}(B)$ admits a Serre
functor $S$ whose inverse is $- \overset{L}\otimes_B \Omega.$
Therefore, the functor $- \overset{L}\otimes_B \Theta$ is equal to
the functor $S^{-1}{\Sigma}^{m+1} (\simeq {\tau}^{-1}{\Sigma}^m)$,
where $\tau$ is the Auslander-Reiten translation. As a consequence,
we have that
$${\rm Hom}_{\mathcal {D}(\Gamma)}(X'\overset{L}\otimes_B\Gamma,
Y'\overset{L}\otimes_B\Gamma) \simeq \oplus_{p \geq 0}{\rm
Hom}_{{\mathcal {D}}_{fd}(B)}(X', (\tau^{-1}{\Sigma}^m)^pY').$$
Let ${\mathcal {C}}_{Q}^{(m)}$ be the $m$-cluster category
${\mathcal {D}}_{fd}(B)/({\tau^{-1}\Sigma^m})^{\mathbb{Z}}$.
Consider the following commutative diagram
\[
\xymatrix@C=2.5CM{{\mathcal {D}}_{fd}(B) \ar[d]_{\pi_B} \ar[r]^{-
\overset{L}\otimes_B \Gamma} &
{\rm per}\Gamma \ar[d]^{\pi_\Gamma} \\
{\mathcal {C}}_{Q}^{(m)} \ar@<1ex>[r]_{\simeq}^{ -
\overset{L}\otimes_B \Gamma } & {\mathcal {C}}_{\Gamma} . }
\]
Under the equivalence, let $X= X' \overset{L}\otimes_B \Gamma$ and
$Y= Y' \overset{L}\otimes_B \Gamma$, so the vanishing of spaces
${\rm Ext}^i_{{\mathcal {C}}_{\Gamma}}(X,Y)$ implies that ${\rm
Ext}^i_{{\mathcal {C}}_Q^{(m)}}(X',Y')$ also vanish for integers $1
\leq i \leq m$. Note that $${\rm Ext}^i_{{\mathcal
{C}}_Q^{(m)}}(X',Y') \simeq \oplus_{p \in \mathbb{Z}} {\rm
Ext}^i_{{\mathcal {D}}_{fd}(B)}(X',(\tau^{-1}\Sigma^m)^pY').$$
Hence, we obtain that $${\rm Ext}^i_{{\mathcal {D}}(\Gamma)}(X,Y)
\simeq \oplus_{p \geq 0} {\rm Ext}^i_{{\mathcal
{D}}_{fd}(B)}(X',(\tau^{-1}\Sigma^m)^pY') = 0, \quad \quad 1 \leq i
\leq m.$$
For $i > m$, the spaces ${\rm Ext}^i_{{\mathcal {D}}(\Gamma)}(X,Y)$ vanish automatically, since $X$ lies in $\mathcal {F}$ and ${\Sigma}^i Y$ lies in ${\mathcal {D}}(\Gamma)^{\leq -m-1}$.
\end{proof}
Let $Q$ be an ordinary acyclic quiver and $B$ the path algebra $kQ$.
Let $\Gamma$ be its completed Ginzburg dg category
$\widehat{\Gamma}_{m+2}(Q,0)$. Let $T$ be an $m$-cluster tilting
object in ${\mathcal {C}}_Q^{(m)}$. Then $T$ is induced from an
object $T'$ (that is, $T = \pi(T')$) in the fundamental domain
$${\mathcal {S}}_m := {\mathcal {S}}^0_m \vee {\Sigma}^m B, \quad {\rm where}
\,\, {\mathcal {S}}^0_m := {\rm mod}B \vee {\Sigma}({\rm mod}B)
\ldots \vee {\Sigma}^{m-1}({\rm mod}B).$$
\begin{lem}[\cite{BRT}] \label{37}
The object $T'$ is a partial silting object, that is,
$${\rm Hom}_{{\mathcal {D}}_{fd}(B)}(T', {\Sigma}^i T') = 0, \quad \quad i > 0;$$
and $T'$ is maximal with this property.
\end{lem}
An object in ${\mathcal {D}}_{fd}(B)$ which satisfies the `maximal
partial silting' property as in Lemma \ref{37} is called a `silting'
object in \cite{BRT}. Next we will show that our definition of a
silting object in ${\rm per}B$ coincides with theirs.
\begin{lem}\label{45}
Let $U$ be a basic partial silting object in ${\mathcal
{D}}_{fd}(B)$. Then $U$ is maximal partial silting if and only if
$U$ generates ${\rm per}B$.
\end{lem}
\begin{proof}
On the one hand, assume that $U$ is a basic partial silting object and
generates per$B$. By Lemma \ref{38}, the object $U$ has the same
number of indecomposable direct summands as that of the dg algebra
$B$ itself. That is, $U$ is a basic partial silting object with
$|Q_0|$ indecomposable direct summands. It follows from Lemma 2.2 in
\cite{BRT} that $U$ is a maximal partial silting object.
On the other hand, assume that $U$ is a maximal partial silting
object in ${\mathcal {D}}_{fd}(B)$. We decompose $U$ into a direct
sum ${\Sigma}^{k_1}U_1 \oplus \ldots \oplus {\Sigma}^{k_r}U_r$ such
that each $U_i$ lies in ${\rm mod}B$ and $k_1 < \ldots < k_r$. Set
$U' = {\oplus}^{r}_{i=1} U_i$. It follows from Lemma 2.2 in
\cite{BRT} that the object $U'$ can be ordered to a complete
exceptional sequence. Let $C(U')$ be the smallest full subcategory
of ${\rm mod}B$ which contains $U'$ and is closed under extensions,
kernels of epimorphisms, and cokernels of monomorphisms. By Lemma 3
in \cite{CB}, the subcategory $C(U')$ is equal to ${\rm mod}B$. As a
consequence, the object $U$ generates ${\mathcal {D}}_{fd}(B)$ which
is equal to per$B$.
\end{proof}
Since $B$ is finite dimensional and hereditary, the subcategory
${\mathcal {S}}^0_m$ is contained in ${}^{\perp}{\mathcal
{D}}(B)^{\leq -m-1}$. The isomorphism $${\rm Hom}_{{\mathcal
{D}}(B)} ({\Sigma}^{m} B, M) \simeq H^{-m}M \quad (M \in {\mathcal
{D}}(B))$$ implies that ${\Sigma}^m B$ is in ${}^{\perp}{\mathcal
{D}}(B)^{\leq -m-1}$. So ${\mathcal {S}}_m$ is contained in
${\mathcal {D}}(B)^{\leq 0} \cap {}^{\perp}{\mathcal {D}}(B)^{\leq
-m-1} \cap {\mathcal {D}}_{fd}(B)$.
Set $Z = T' \overset{L}\otimes_{B}\Gamma$. For any object $N$ in
${\mathcal {D}}(\Gamma)$, we have the following canonical
isomorphism $${\rm Hom}_{{\mathcal {D}}(\Gamma)}(T' \overset{L}
\otimes_B \Gamma, N) \simeq {\rm Hom}_{{\mathcal {D}}(B)}(T', \,
{\rm RHom}_{\Gamma}(\Gamma,N)).$$ When $N$ lies in ${\mathcal
{D}}(\Gamma)^{\leq -m-1}$, the right hand side of the above
isomorphism becomes zero, since ${\rm RHom}_{\Gamma}(\Gamma,N)$, which is
isomorphic to the restriction $N|_B$, lies in ${\mathcal {D}}(B)^{\leq -m-1}$,
while $T'$ lies in ${\mathcal {S}}_m \subseteq {}^{\perp}{\mathcal {D}}(B)^{\leq -m-1}$.
Thus, the object $Z$ is in the fundamental
domain of ${\mathcal {D}}(\Gamma)$. Since the spaces ${\rm
Ext}^i_{{\mathcal {C}}_Q^{(m)}}(T,T)$ vanish for integers $1 \leq i
\leq m$, it follows, as in the proof of Proposition \ref{21}, that the space
${\rm Hom}_{{\mathcal {D}}(\Gamma)}(Z,\Sigma^iZ)$ is zero for each
positive integer $i$. In addition, Lemma \ref{37} and Lemma \ref{45}
together imply that $T'$ generates ${\mathcal {D}}_{fd}(B)$. Hence,
the object $Z$ generates per$\Gamma$. So $Z$ is a basic silting
object whose image in ${\mathcal {C}}_{\Gamma}$ is $T \overset{L}
\otimes_B \Gamma$.
We now summarize the above analysis in the following proposition.
\begin{prop}\label{44}
Let $Q$ be an acyclic quiver and $B$ its path algebra. Let $\Gamma$
be the completed Ginzburg dg category
${\widehat{\Gamma}_{m+2}}(Q,0)$ and ${\mathcal {C}}_{\Gamma}$ the
generalized $m$-cluster category. Then any $m$-cluster tilting
object in ${\mathcal {C}}_{\Gamma}$ is induced by a silting object
in $\mathcal {F}$ under the canonical projection $\pi: {\rm
per}\Gamma \rightarrow {\mathcal {C}}_{\Gamma}$.
\end{prop}
\begin{proof}
Let $\overline{T}$ be an $m$-cluster tilting object in ${\mathcal
{C}}_{\Gamma}$. Then $\overline{T}$ can be written as $T \overset{L}
\otimes_B \Gamma$ for some $m$-cluster tilting object $T$ in
${\mathcal {C}}_Q^{(m)}$, where $T$ is induced by some silting
object $T'$ in ${\mathcal {D}}_{fd}(B)$. The object $T'
\overset{L}\otimes_{B}\Gamma$ (denoted by $Z$) is a silting object
in the fundamental domain ${\mathcal {F}} (\subseteq \mbox{per}
{\Gamma})$ whose image under the canonical projection $\pi: {\rm
per}\Gamma \rightarrow {\mathcal {C}}_{\Gamma}$ is equal to
$\overline{T}$. This completes the proof.
\end{proof}
\end{document}
\begin{document}
\title{Polynomial orbits in totally minimal systems}
\author[J.~Qiu]{Jiahao Qiu}
\address[J.~Qiu]{School of Mathematical Sciences, Peking University, Beijing, 100871, China}
\email{[email protected]}
\date{\today}
\begin{abstract}
Inspired by the recent work of Glasner, Huang, Shao, Weiss and Ye \cite{GHSWY20},
we prove that the maximal $\infty$-step pro-nilfactor $X_\infty$ of a minimal system $(X,T)$
is the topological characteristic factor along polynomials in a certain sense.
Namely, we show that by an almost one to one modification of $\pi:X\to X_\infty$,
the induced open extension $\pi^*:X^*\to X_\infty^*$ has the following property:
for any $d\in \mathbb{N}$, any open subsets $V_0,V_1,\ldots,V_d$ of $X^*$ with
$\bigcap_{i=0}^d \pi^*(V_i)\neq \emptyset$ and any
distinct non-constant integer polynomials $p_i$ with $p_i(0)=0$ for $i=1,\ldots,d$,
there exists some $n\in \mathbb{Z}$ such that $V_0\cap
T^{-p_1(n)}V_1\cap \ldots \cap T^{-p_d(n)}V_d \neq \emptyset$.
As an application, the following result is obtained:
for a totally minimal system $(X,T)$ and integer polynomials $p_1,\ldots,p_d$,
if every non-trivial integer combination of $p_1,\ldots,p_d$ is not constant,
then there is a dense $G_\delta$ subset $\Omega$ of $ X$ such that
the set
\[
\{(T^{p_1(n)}x,\ldots, T^{p_d(n)}x):n\in \mathbb{Z}\}
\]
is dense in $X^d$
for every $x\in \Omega$.
\end{abstract}
\blfootnote{ \ \ \
This research is supported by NNSF of China (11431012,11971455)
and China Post-doctoral Grant (BX2021014).
}
\keywords{Totally minimal systems, polynomial orbits}
\subjclass[2020]{Primary: 37B05, Secondary: 37B99}
\maketitle
\section{Introduction}
In this section, we will provide the background of the research, state the main results of the paper
and give an outline of the ideas for the proofs.
\subsection{Density problems}\
For a totally ergodic system $(X,\mathcal{X} ,\mu,T)$
(this means $T^k$ is ergodic for any positive integer $k$),
Furstenberg \cite{FH81} showed that for any non-constant integer polynomial
$p$ and $f \in L^2(\mu)$,
\begin{equation}\label{poly-total-ergodic}
\lim_{N\to \infty} \frac{1}{N} \sum_{n=0}^{N-1} f(T^{p(n)}x)=\int f\; \mathrm{d}\mu
\end{equation}
in $L^2$ norm,
where an integer polynomial is a
polynomial with rational coefficients taking integer values on the integers.
Bourgain \cite{JB89} showed that (\ref{poly-total-ergodic}) holds pointwise for any
$f \in L^r(\mu)$ with $r> 1$.
For topological dynamics,
the following question is natural.
\begin{ques}\label{Q1}
Let $(X,T)$ be a totally minimal system
(this means $T^k$ is minimal for any positive integer $k$)
and let $p$ be a non-constant integer polynomial.
Is there a point $x\in X$
such that the set $\{T^{p(n)}x : n\in \mathbb{Z}\}$ is dense in $X$?
\end{ques}
Note that for Question \ref{Q1} we cannot directly use the results of Furstenberg and Bourgain on polynomial convergence for totally ergodic systems,
as a totally minimal system need not admit a totally ergodic invariant measure.
In addition,
the total minimality assumption is necessary,
as can be seen by considering a periodic orbit of period 3 together with the integer polynomial $p(n)=n^2$: since $n^2$ is congruent to $0$ or $1$ modulo $3$, the orbit $\{T^{p(n)}x:n\in \mathbb{Z}\}$ never contains $T^2x$.
In \cite{GHSWY20}, it was shown that the answer to Question \ref{Q1} is positive for any integer polynomial of degree 2.
In order to precisely state the equidistribution results for totally ergodic nilsystems
obtained by Frantzikinakis and Kra \cite{FK05},
we start with the following definition.
A family of integer polynomials $\{p_1,\ldots,p_d\}$ is said to be
{\it independent} if for all integers $m_1,\ldots,m_d$, not all zero,
the polynomial $\sum_{j=1}^{d}m_jp_j$ is not constant.
In \cite{FK05},
it was shown that for a totally ergodic nilsystem
(for nilsystems, total ergodicity is equivalent to total minimality; see for example \cite{LA,Le05A,PW}),
there exists some point whose orbit along an independent family of integer polynomials is
uniformly distributed and thus dense.
They also pointed out that the assumption that the polynomial family is independent is necessary,
as can be seen by considering an irrational rotation on the circle.
In this paper, we give an affirmative answer to Question \ref{Q1}.
We prove:
\begin{Maintheorem}\label{poly-orbit}
Let $(X,T)$ be a totally minimal system,
and assume that $\{p_1,\ldots,p_d\}$ is an independent family of integer polynomials.
Then there is a dense $G_\delta$ subset $\Omega$ of $ X$ such that
the set
\begin{equation}\label{product-space}
\{(T^{p_1(n)}x,\ldots, T^{p_d(n)}x):n\in \mathbb{Z}\}
\end{equation}
is dense in $X^d$ for every $x\in \Omega$.
\end{Maintheorem}
If one assumes that $(X,T)$ is minimal and weakly mixing,
Huang, Shao and Ye \cite{HSY19} showed that
for any family of distinct integer polynomials,
the set (\ref{product-space}) is dense.
\subsection{Topological characteristic factors}\
For a measure preserving system $(X,\mathcal{X} ,\mu,T)$ and $f_1,\ldots,f_d\in L^\infty(\mu)$,
the study of convergence of the {\it multiple ergodic averages}
\begin{equation}\label{MEA}
\frac{1}{N} \sum_{n=0}^{N-1}f_1(T^n x)\cdots f_d(T^{dn}x)
\end{equation}
started from Furstenberg's elegant proof of Szemer\'{e}di's Theorem \cite{Sz75}
via an ergodic theoretic analysis \cite{FH}.
After nearly 30 years' efforts of many researchers, this
problem (for $L^2$ convergence) was finally solved in \cite{HK05,TZ07}.
The basic approach is to find an appropriate factor, called a characteristic factor,
that controls the limit behavior in $L^2(\mu)$ of the averages (\ref{MEA}).
For the origin of these ideas and this terminology, see \cite{FH}.
To be more precise, let $(X,\mathcal{X} ,\mu,T)$ be a measure preserving system
and let $(Y,\mathcal{Y},\nu,T)$ be a factor of $X$.
We say that $Y$ is a \emph{characteristic factor} of $X$ if for all
$f_1,\ldots,f_d\in L^\infty(\mu)$,
\[
\lim_{N\to \infty}\frac{1}{N} \sum_{n=0}^{N-1}f_1(T^n x)\cdots f_d(T^{dn}x)
-\frac{1}{N} \sum_{n=0}^{N-1}\mathbb{E}(f_1| \mathcal{Y})(T^nx)\cdots
\mathbb{E}(f_d| \mathcal{Y})(T^{dn}x)=0
\]
in $L^2$ norm.
The next step is to obtain a concrete description for some well chosen characteristic factor
in order to prove convergence.
The result in \cite{HK05,TZ07} shows that
such a characteristic factor can be described as an inverse limit of nilsystems,
which is also called a pro-nilfactor.
The counterpart of characteristic factors for topological dynamics was first studied by
Glasner in \cite{GE94}. To state the result we need the notion of a saturated subset.
Given a map $\pi:X\to Y$ of sets $X$ and $Y$, a subset $L$ of $X$ is called $\pi$-\emph{saturated} if
\[
\{ x\in L:\pi^{-1}(\{\pi(x)\})\subset L \}=L
\]
i.e., $L=\pi^{-1}(\pi(L))$.
Here is the definition of topological characteristic factors:
\begin{definition}\cite{GE94}\label{def-saturated}
Let $\pi:(X,T)\to (Y,T)$ be a factor map of topological dynamical systems and $d\in \mathbb{N}$.
$(Y,T)$ is said to be a \emph{$d$-step topological characteristic factor}
if there exists a
dense $G_\delta$ subset $\Omega$ of $X$ such that for each $x\in \Omega$ the orbit closure
\[
L_x^d(X):=\overline{\mathcal{O}}\big((\underbrace{x,\ldots,x}_{\text{$d$ times}}),T\times \ldots \times T^d\big)
\]
is $\underbrace{\pi\times \ldots\times \pi}_{\text{$d$ times}}$-saturated.
That is, $(x_1,\ldots,x_d)\in L_x^d(X)$ if and only if
$(x_1',\ldots,x_d')\in L_x^d(X)$ whenever $\pi(x_i)=\pi(x_i')$ for every $i=1,\ldots, d$.
\end{definition}
In \cite{GE94}, it was shown that for minimal systems, up to a canonically defined proximal
extension, a characteristic family for $T\times \ldots \times T^d$ is the family of canonical PI flows of class $(d-1)$.
In particular, if $(X,T)$ is distal, then its largest class $(d-1)$ distal factor
is its topological characteristic factor along $T\times \ldots \times T^d$. Moreover,
if $(X,T)$ is weakly mixing, then the trivial system is its topological characteristic factor.
For more related results we refer the reader to \cite{GE94}.
On the other hand,
to get the corresponding pro-nilfactors for topological dynamics,
in a pioneering work, Host, Kra and Maass \cite{HKM10} introduced the notion of
{\it regionally proximal relation of order $d$}
for a topological dynamical system $(X,T)$, denoted by $\mathbf{RP}^{[d]}(X)$.
For $d\in\mathbb{N}$, we say that a minimal system $(X,T)$ is a \emph{d-step pro-nilsystem}
if $\mathbf{RP}^{[d]}(X)=\Delta$; this is equivalent to $(X,T)$ being
an inverse limit of minimal $d$-step nilsystems.
For a minimal distal system $(X,T)$, it was proved that
$\mathbf{RP}^{[d]}(X)$ is an equivalence relation and $X/\mathbf{RP}^{[d]}(X)$
is the maximal $d$-step pro-nilfactor \cite{HKM10}.
Later, Shao and Ye \cite{SY12} showed that in fact for
any minimal system, the regionally proximal relation of order $d$ is an equivalence
relation and it also has the so-called lifting property.
Very recently,
the result in \cite{GHSWY20} significantly improves Glasner's result, showing that (up to almost one to one modifications) the maximal $\infty$-step pro-nilfactor is a topological characteristic factor.
That is, they proved:
\begin{theorem}[Glasner-Huang-Shao-Weiss-Ye]\label{key-thm0}
Let $(X,T)$ be a minimal system, and let $\pi:X\rightarrow X/\mathbf{RP}^{[\infty]}(X)= X_\infty$ be the factor map.
Then there exist minimal systems $X^*$ and $X_\infty^*$ which are almost one to one
extensions of $X$ and $X_\infty$ respectively, and a commuting diagram below such that $X_\infty^*$ is a
$d$-step topological characteristic factor of $X^*$ for all $d\ge 2$.
\begin{equation}\label{commuting-diagram}
\xymatrix{
X \ar[d]_{\pi} & X^* \ar[d]^{\pi^*} \ar[l]_{\sigma^*} \\
X_\infty & X_\infty^* \ar[l]_{\tau^*}
}
\end{equation}
\end{theorem}
In the theorem above, one can see that
for any open subsets $V_0,V_1,\ldots,V_d$ of $X^*$ with
$\bigcap_{i=0}^d \pi^*(V_i)\neq \emptyset$,
there is some $n\in \mathbb{Z}$
such that
\[
V_0\cap T^{-n}V_1\cap \ldots\cap T^{-dn} V_d\neq \emptyset.
\]
Based on this result,
in this paper we use PET-induction which was introduced by Bergelson in \cite{BV87},
to give a polynomial version of their work:
\begin{Maintheorem}\label{polynomial-TCF}
Let $(X,T)$ be a minimal system,
and let $\pi:X\to X/\mathbf{RP}^{[\infty]}(X)= X_{\infty}$ be the factor map.
Then there exist minimal systems $X^*$ and $X_\infty^*$
which are almost one to one extensions of $X$ and $X_\infty$ respectively,
and a commuting diagram as in (\ref{commuting-diagram}) such that
for any open subsets $V_0,V_1,\ldots,V_d$ of $X^*$ with
$\bigcap_{i=0}^d \pi^*(V_i)\neq \emptyset$ and any distinct non-constant integer polynomials $p_i$
with $p_i(0)=0$ for $i=1,\ldots,d$,
there exists some $n\in \mathbb{Z}$
such that
\[
V_0\cap T^{-p_1(n)}V_1\cap \ldots\cap T^{-p_d(n)} V_d\neq \emptyset.
\]
\end{Maintheorem}
\subsection{Strategy of the proofs}\
To prove Theorem \ref{poly-orbit},
by Theorem \ref{key-thm0}
it suffices to verify the statement for the system $(X^*,T)$,
which is also totally minimal.
It is equivalent to prove that
for any given non-empty open subsets $V_0,V_1,\ldots,V_d$ of $X^*$,
there exists some $n\in \mathbb{Z}$
such that
\[
V_0\cap T^{-p_1(n)}V_1\cap \ldots\cap T^{-p_d(n)} V_d\neq \emptyset.
\]
Since $X_\infty^*$ is an
almost one to one extension of a totally minimal $\infty$-step pro-nilsystem,
which can be approximated arbitrarily
well by nilsystems (see \cite[Theorem 3.6]{DDMSY13}),
we get that $( X_{\infty}^*,T)$ satisfies Theorem \ref{poly-orbit} (Lemma \ref{equi-condition}),
which implies that there is some $m\in \mathbb{Z}$ such that
\[
\pi^*(V_0)\cap T^{-p_1(m)}\pi^*(V_1)\cap\ldots \cap T^{-p_d(m)}\pi^*(V_d)\neq \emptyset.
\]
Using Theorem \ref{polynomial-TCF}
for open sets $V_0,T^{-p_1(m)}V_1,\ldots,T^{-p_d(m)}V_d$
and integer polynomials $p_1(\cdot+m)-p_1(m),\ldots,p_d(\cdot+m)-p_d(m)$,
there is some $k\in \mathbb{Z}$ such that
\[
V_0\cap T^{-p_1(k+m)}V_1\cap \ldots\cap T^{-p_d(k+m)} V_d\neq \emptyset,
\]
as was to be shown.
To prove Theorem \ref{polynomial-TCF},
we use PET-induction, which was introduced by Bergelson in \cite{BV87},
where PET stands for {\it polynomial ergodic theorem} or
{\it polynomial exhaustion technique} (see \cite{BV87,BM00}).
See also \cite{BL96,BL99} for more on PET-induction.
Basically, we associate to any finite collection of integer polynomials a `complexity',
and reduce the complexity step by step to the trivial one.
Note that at some steps,
the cardinality of the collection may increase while the complexity decreases.
We also note that when carrying out the induction procedure,
we find that the known results are not strong enough to
make it work,
and we need a stronger result (Theorem \ref{polynomial-case}).
After we introduce PET-induction in Subsection \ref{PET-induction-def}, we will explain the main ideas for the
proof via proving some simple examples.
\subsection{The organization of the paper}\
The paper is organized as follows.
In Section \ref{pre}, the basic notions used in the paper are introduced.
In Section \ref{pf-thm-A}, we first give a proof of Theorem \ref{poly-orbit} assuming Theorem \ref{polynomial-TCF}
whose proof is very complicated.
In Section \ref{pf-thm-B}, we prove Theorem \ref{polynomial-TCF}.
\noindent {\bf Acknowledgments.}
The author would like to thank Professors Wen Huang, Song Shao and Xiangdong Ye for helpful discussions.
\section{Preliminaries}\label{pre}
In this section we gather definitions and preliminary results that
will be necessary later on.
Let $\mathbb{N}$ and $\mathbb{Z}$ be the sets of all positive integers
and integers respectively.
\subsection{Topological dynamical systems}\
A \emph{topological dynamical system}
(or \emph{system}) is a pair $(X,T)$,
where $X$ is a compact metric space with a metric $\rho$, and $T : X \to X$
is a homeomorphism.
For $x\in X$, the \emph{orbit} of $x$ is given by $\mathcal{O}(x,T)=\{T^nx: n\in \mathbb{Z}\}$.
For convenience, we denote the orbit closure of $x\in X$
under $T$ by $\overline{\mathcal{O}}(x,T)$,
instead of $\overline{\mathcal{O}(x,T)}$.
A system $(X,T)$ is said to be \emph{minimal} if
every point has a dense orbit,
and \emph{totally minimal} if $(X,T^k)$ is minimal for
any positive integer $k$.
A \emph{homomorphism} between systems $(X,T)$ and $(Y,T)$ is a continuous onto map
$\pi:X\to Y$ which intertwines the actions; one says that $(Y,T)$ is a \emph{factor} of $(X,T)$
and that $(X,T)$ is an \emph{extension} of $(Y,T)$. One also refers to $\pi$ as a \emph{factor map} or
an \emph{extension} and one uses the notation $\pi : (X,T) \to (Y,T)$.
An extension $\pi$ is determined
by the corresponding closed invariant equivalence relation $R_\pi=\{(x,x')\in X\times X\colon \pi(x)=\pi(x')\}$.
\subsection{Regional proximality of higher order}\
For $\vec{n} = (n_1,\ldots, n_d)\in \mathbb{Z}^d$ and $\epsilon=(\epsilon_1,\ldots,\epsilon_d)\in \{0,1\}^d$, we
define
$\displaystyle\vec{n}\cdot \epsilon = \sum_{i=1}^d n_i\epsilon_i $.
\begin{definition}\cite{HKM10}\label{definition of pronilsystem and pronilfactor}
Let $(X,T)$ be a system and $d\in \mathbb{N}$.
The \emph{regionally proximal relation of order $d$} is the relation $\mathbf{RP}^{[d]}(X)$
defined by: $(x,y)\in\textbf{RP}^{[d]}(X)$ if
and only if for any $\delta>0$, there
exist $x',y'\in X$ and
$\vec{n}\in \mathbb{N}^d$ such that:
$\rho(x,x')<\delta,\rho(y,y')<\delta$ and
\[
\rho( T^{\vec{n}\cdot\epsilon} x', T^{\vec{n}\cdot\epsilon} y')<\delta,
\quad \forall\;\epsilon\in \{0,1\}^d\backslash\{ \vec{0}\}.
\]
A system is called a \emph{$d$-step pro-nilsystem}
if its regionally proximal relation of order $d$ is trivial,
i.e., coincides with the diagonal.
\end{definition}
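For instance, when $d=2$ the condition asks that for any $\delta>0$ there exist $x',y'\in X$ with $\rho(x,x')<\delta$, $\rho(y,y')<\delta$ and a pair $\vec{n}=(n_1,n_2)\in \mathbb{N}^2$ such that
\[
\rho(T^{n_1}x',T^{n_1}y')<\delta,\quad \rho(T^{n_2}x',T^{n_2}y')<\delta,\quad \rho(T^{n_1+n_2}x',T^{n_1+n_2}y')<\delta.
\]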
It is clear that for any system $(X,T)$,
\[
\ldots\subset \mathbf{RP}^{[d+1]}(X)\subset \mathbf{RP}^{[d]}(X)\subset
\ldots \subset\mathbf{RP}^{[2]}(X)\subset \mathbf{RP}^{[1]}(X).
\]
\begin{theorem}\cite[Theorem 3.3]{SY12}\label{cube-minimal}
For any minimal system and $d\in \mathbb{N}$,
the regionally proximal relation of order $d$
is a closed invariant equivalence relation.
\end{theorem}
It follows that for any minimal system $(X,T)$,
\[
\mathbf{RP}^{[\infty]}(X)=\bigcap_{d\geq1}\mathbf{RP}^{[d]}(X)
\]
is also a closed invariant equivalence relation.
Now we formulate the definition of $\infty$-step pro-nilsystems.
\begin{definition}
A minimal system $(X,T)$ is an \emph{$\infty$-step pro-nilsystem},
if the equivalence relation $\mathbf{RP}^{[\infty]}(X)$ is trivial,
i.e., coincides with the diagonal.
\end{definition}
The regionally proximal relation of order $d$ allows us to construct the \emph{maximal $d$-step pro-nilfactor}
of a minimal system. That is, any factor of the system which is a
$d$-step pro-nilsystem factorizes through this maximal pro-nilfactor.
\begin{theorem}\label{lift-property}\cite[Theorem 3.8]{SY12}
Let $\pi :(X,T)\to (Y,T)$ be a factor map of minimal systems and $d\in \mathbb{N}\cup\{\infty\}$. Then,
\begin{enumerate}
\item $(\pi \times \pi) \mathbf{RP}^{[d]}(X)=\mathbf{RP}^{[d]}(Y)$.
\item $(Y,T)$ is a $d$-step pro-nilsystem if and only if $\mathbf{RP}^{[d]}(X)\subset R_\pi$.
\end{enumerate}
In particular, the quotient of $(X,T)$ under $\mathbf{RP}^{[d]}(X)$
is the maximal $d$-step pro-nilfactor of $X$.
\end{theorem}
\subsection{Nilpotent groups, nilmanifolds and nilsystems}\
Let $L$ be a group.
For $g,h\in L$, we write $[g,h]=ghg^{-1}h^{-1}$ for the commutator of $g$ and $h$,
and for subsets $A,B$ of $L$ we write $[A,B]$ for the subgroup generated by $\{[a,b]:a\in A,b\in B\}$.
The commutator subgroups $L_j,j\geq1$, are defined inductively by setting $L_1=L$
and $L_{j+1}=[L_j,L],j\geq1$.
Let $k\geq 1$ be an integer.
We say that $L$ is \emph{k-step nilpotent} if $L_{k+1}$ is the trivial subgroup.
Let $L$ be a $k$-step nilpotent Lie group and $\Gamma$ be a discrete cocompact subgroup of $L$.
The compact manifold $X=L/\Gamma$ is called a \emph{k-step nilmanifold.}
The group $L$ acts on $X$ by left translation and we write this action as $(g,x)\mapsto gx$.
Let $\tau\in L$ and $T$ be the transformation $x\mapsto \tau x$ of $X$.
Then $(X,T)$ is called a \emph{k-step nilsystem}.
We also make use of inverse limits of nilsystems and so we recall the definition of an inverse limit
of systems (restricting ourselves to the case of sequential inverse limits).
If $\{(X_i,T_i)\}_{i\in \mathbb{N}}$ are systems with $\text{diam}(X_i)\leq 1$ and $\phi_i:X_{i+1}\to X_i$
are factor maps, the \emph{inverse limit} of the systems is defined to be the compact subset of
$\prod_{i\in \mathbb{N}}X_i$
given by $\{(x_i)_{i\in\mathbb{N}}:\phi_i(x_{i+1})=x_i,i\in \mathbb{N}\}$,
which is denoted by $\lim\limits_{\longleftarrow}\{ X_i\}_{i\in \mathbb{N}}$.
It is a compact metric space endowed with the
distance $\rho(x,y)=\sum_{i\in \mathbb{N}}1/ 2^i \rho_i(x_i,y_i)$.
We note that the maps $\{T_i\}_{i\in \mathbb{N}}$ induce a transformation $T$ on the inverse limit.
The following structure theorems characterize inverse limits of nilsystems.
\begin{theorem}[Host-Kra-Maass]\cite[Theorem 1.2]{HKM10}\label{description}
Let $d\geq2$ be an integer.
A minimal system is a $d$-step pro-nilsystem
if and only if
it is an inverse limit of minimal $d$-step nilsystems.
\end{theorem}
\begin{theorem}\cite[Theorem 3.6]{DDMSY13}\label{system-of-order-infi}
A minimal system is an $\infty$-step pro-nilsystem
if and only if it is an inverse limit of minimal nilsystems.
\end{theorem}
\subsection{Some facts about hyperspaces and fundamental extensions}\
Let $X$ be a compact metric space with a metric $\rho$.
Let $2^X$ be the collection of non-empty closed subsets of $X$.
We may define a metric on $2^X$ as follows:
\begin{align*}
\rho_H(A,C) &= \inf\{ \epsilon>0:A\subset B(C,\epsilon),C\subset B(A,\epsilon) \} \\
& =\max\{ \max_{a\in A} \rho(a,C),\max_{c\in C} \rho(c,A)\},
\end{align*}
where $\rho(x,A)=\inf_{y\in A}\rho(x,y)$ and $B(A,\epsilon)=\{x\in X:\rho(x,A)<\epsilon\}$.
The metric $\rho_H$ is called the \emph{Hausdorff metric} on $2^X$.
Let $\pi:(X,T)\to (Y,T)$ be a factor map of systems.
We say that:
\begin{enumerate}
\item $\pi$ is an \emph{open} extension if it is open as a map;
\item $\pi$ is an \emph{almost one to one} extension if there is a dense $G_\delta$ subset
$\Omega$ of $X$ such that $\pi^{-1}(\{\pi (x)\})=\{x\}$ for every $x\in \Omega$.
\end{enumerate}
The following is a well known fact about open mappings (see \cite[Appendix A.8]{JDV} for example).
\begin{theorem}\label{open-map}
Let $\pi:(X,T)\to (Y,T)$ be a factor map of systems.
Then the map $\pi^{-1}:Y\to 2^X,y \mapsto \pi^{-1}(y)$ is continuous
if and only if $\pi$ is open.
\end{theorem}
\subsection{Polynomial orbits in minimal systems}\
We have the following characterization of polynomial orbits in minimal systems.
\begin{lemma}\label{equi-condition}
Let $(X,T)$ be a minimal system and let $p_1,\ldots,p_d$ be non-constant integer polynomials.
Then the following statements are equivalent:
\begin{enumerate}
\item There exists a dense $G_\delta$ subset $\Omega$
of $X$ such that
the set
\[
\{(T^{p_1(n)}x,\ldots, T^{p_d(n)}x):n\in \mathbb{Z}\}
\]
is dense in $X^d$ for every $x\in \Omega$.
\item There exists some $x\in X$ such that the set
\[
\{(T^{p_1(n)}x,\ldots, T^{p_d(n)}x):n\in \mathbb{Z}\}
\]
is dense in $X^d$.
\item For any non-empty open subsets $U,V_1,\ldots,V_d$ of $X$,
there is some $n\in \mathbb{Z}$ such that
\[
U\cap T^{-p_1(n)}V_1\cap\ldots \cap T^{-p_d(n)}V_d\neq \emptyset.
\]
\end{enumerate}
\end{lemma}
\begin{proof}
(1) $\Rightarrow$ (2) is obvious.
(2) $\Rightarrow$ (3):
Assume there is some $x\in X$ such that
the set
\[
\{(T^{p_1(n)}x,\ldots, T^{p_d(n)}x):n\in \mathbb{Z}\}
\]
is dense in $X^d$.
It is clear that for any $m\in \mathbb{Z}$, the set
\[
X(x,m):= \{\big(T^{p_1(n)}(T^mx),\ldots, T^{p_d(n)}(T^mx)\big):n\in \mathbb{Z}\}
\]
is dense in $X^d$.
Fix non-empty open subsets $U,V_1,\ldots,V_d$ of $X$.
As $(X,T)$ is minimal, there is some $m\in \mathbb{N}$ with $T^mx\in U$.
Since $ X(x,m)$ is dense in $X^d$,
we can choose some $n\in \mathbb{Z}$ such that
$T^{p_i(n)}(T^mx)\in V_i$ for $i=1,\ldots,d$, which implies
\[
T^mx\in U\cap T^{-p_1(n)}V_1\cap\ldots \cap T^{-p_d(n)}V_d.
\]
(3) $\Rightarrow$ (1):
Assume that for any given non-empty open subsets $U,V_1,\ldots,V_d$ of $X$,
there is some $n\in \mathbb{Z}$ such that
$U\cap T^{-p_1(n)}V_1\cap\ldots \cap T^{-p_d(n)}V_d\neq \emptyset.$
Let $\mathcal{F}$ be a countable base of $X$, and let
\[
\Omega:=\bigcap_{V_1,\ldots,V_d\in \mathcal{F}} \bigcup_{n\in \mathbb{Z}}
T^{-p_1(n)}V_1\cap\ldots \cap T^{-p_d(n)}V_d.
\]
For fixed $V_1,\ldots,V_d\in \mathcal{F}$, the union above is open, and by (3) it meets every non-empty open subset of $X$, hence is dense. Therefore $\Omega$ is a dense $G_\delta$ subset of $X$, and by construction the set $\{(T^{p_1(n)}x,\ldots, T^{p_d(n)}x):n\in \mathbb{Z}\}$ is dense in $X^d$ for every $x\in \Omega$.
\end{proof}
The following result can be derived from
\cite[Theorem 1.2]{FK05}.
\begin{cor}\label{uniform-dis}
Let $(X=L/\Gamma,T)$ be a totally minimal nilsystem,
and assume that $\{p_1,\ldots,p_d\}$ is an independent family of integer polynomials.
Then there is some $x\in X$ such that the set
\[
\{ (T^{p_1(n)}x,\ldots,T^{p_d(n)}x):n\in \mathbb{Z}\}
\]
is dense in $X^d$.
\end{cor}
By Theorem \ref{system-of-order-infi} and Corollary \ref{uniform-dis} we have:\
\begin{cor}\label{uni-distributed-AA}
Let $(X,T)$ be a totally minimal system
and assume that $\{p_1,\ldots,p_d\}$ is an independent family of integer polynomials.
If $(X,T)$ is an almost one to one extension of an $\infty$-step pro-nilsystem,
then there is some $x\in X$ such that the set
\[
\{(T^{p_1(n)}x,\ldots, T^{p_d(n)}x):n\in \mathbb{Z}\}
\]
is dense in $X^d$.
\end{cor}
\subsection{Polynomial recurrence}\
Recall that a collection $\mathcal{F}$ of subsets of $\mathbb{Z}$ is a \emph{family}
if it is hereditary upward, i.e.,
$F_1 \subset F_2$ and $F_1 \in \mathcal{F}$ imply $F_2 \in \mathcal{F}$.
A family $\mathcal{F}$ is called \emph{proper} if it is neither empty nor the entire power set of $\mathbb{Z}$,
or, equivalently, if $\mathbb{Z}\in \mathcal{F}$ and $\emptyset\notin \mathcal{F}$.
For a family $\mathcal{F}$ its \emph{dual} is the family
$\mathcal{F}^*:=\{ F\subset \mathbb{Z}: F\cap F' \neq \emptyset \;\mathrm{for} \; \mathrm{all}\; F'\in \mathcal{F} \} $.
It is not hard to see that
$\mathcal{F}^*=\{F\subset \mathbb{Z}:\mathbb{Z}\backslash F\notin \mathcal{F}\}$, from which we have that if $\mathcal{F}$ is a family then $(\mathcal{F}^*)^*=\mathcal{F}$.
If a family $\mathcal{F}$ is closed under finite intersections and is proper, then it is called a \emph{filter}.
A family $\mathcal{F}$ has the {\it Ramsey property} if $A = A_1\cup A_2 \in \mathcal{F}$ implies that $A_1 \in \mathcal{F}$
or $A_2 \in \mathcal{F}$. It is well known that a proper family has the Ramsey property if and
only if its dual $\mathcal{F}^*$ is a filter \cite{FH}.
For $j\in \mathbb{N}$ and a finite subset $\{p_1, \ldots, p_j\}$ of $\mathbb{Z}$, the
\emph{finite IP-set of length $j$} generated by $\{p_{1}, \ldots, p_j\}$
is the set
\[
\big\{p_{1}\epsilon_{1}+ \ldots+ p_j\epsilon_j: \epsilon_1,\ldots,\epsilon_j\in \{0,1\}\big\} \backslash \{0\}.
\]
The collection of all sets containing finite IP-sets with arbitrarily long lengths is denoted by $\mathcal{F}_{fip}$.
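For example, the finite IP-set of length $3$ generated by $\{2,5,11\}$ is
\[
\{2,\;5,\;11,\;2+5,\;2+11,\;5+11,\;2+5+11\}=\{2,5,7,11,13,16,18\}.
\]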
\begin{lemma}\cite[Lemma 8.1.6]{HSY16}
$\mathcal{F}_{fip}$ has the Ramsey property.
\end{lemma}
Then we have:
\begin{cor}\label{filter}
$\mathcal{F}_{fip}^*$ is a filter.
\end{cor}
For a system $(X,T)$, $x\in X$, a non-constant integer polynomial $p$ and a non-empty open subset $V$ of $X$, set
\[
N(x,V)=\{ n\in \mathbb{Z}:T^nx\in V\}\quad
\mathrm{and}\quad
N_p(x,V)=\{n\in \mathbb{Z}:T^{p(n)}x\in V\}.
\]
\begin{prop}\cite[Proposition 8.1.5]{HSY16}\label{RP-infi}
Let $(X,T)$ be a minimal system and $(x,y)\in X\times X\backslash \Delta$.
Then $(x,y)\in \mathbf{RP}^{[\infty]}(X)$ if and only if $N(x,V)\in \mathcal{F}_{fip}$
for every open neighborhood $V$ of $y$.
\end{prop}
The following proposition follows from the argument in the proof of \cite[Theorem 8.1.7]{HSY16},
which also can be derived from \cite[Theorem 0.2]{BL18}.
\begin{prop}\label{fip-family}
Let $(X,T)$ be a minimal $\infty$-step pro-nilsystem.
Then for any $x\in X$, $N(x,V)\in \mathcal{F}_{fip}^*$
for every open neighbourhood $V$ of $x$.
\end{prop}
The following proposition is from \cite[Section 2.11]{Le05B}.
\begin{prop}\label{poly-in-nilsystem}
Let $(X,T)$ be a nilsystem, let $x\in X$ and let $U$ be an open neighborhood of $x$.
For any non-constant integer polynomial $p$ with $p(0)=0$,
we can find another `larger' nilsystem $(Y, S)$ with $y\in Y$ and
an open neighborhood $V$ of $y$ such that
\[
\{n\in \mathbb{Z}:T^{p(n)}x\in U\}\supset \{n\in \mathbb{Z}:S^ny\in V\}.
\]
\end{prop}
It follows from Theorem \ref{system-of-order-infi} that
a minimal $\infty$-step pro-nilsystem is an inverse limit of minimal nilsystems.
By Propositions \ref{fip-family} and \ref{poly-in-nilsystem}
we can get:
\begin{prop}\label{infi-poly-rec}
Let $(X,T)$ be a minimal $\infty$-step pro-nilsystem
and let $p$ be a non-constant integer polynomial with $p(0)=0$.
Then for any $x\in X$,
$N_{p}(x,V)\in \mathcal{F}_{fip}^*$
for every open neighbourhood $V$ of $x$.
\end{prop}
Notice that $\mathcal{F}_{fip}^*$ is a filter,
thus by Proposition \ref{infi-poly-rec} we have:
\begin{cor}\label{return-time-AA}
Let $(X,T)$ be an almost one to one extension of a minimal $\infty$-step pro-nilsystem
and let $p_1,\ldots,p_d$ be non-constant integer polynomials with $p_i(0)=0$ for $i=1,\ldots,d$.
Then there exists a dense $G_\delta$ subset $\Omega$ of $X$
such that for any $x\in \Omega$,
$\bigcap_{i=1}^dN_{p_i}(x,V)\in \mathcal{F}_{fip}^*$
for every open neighbourhood $V$ of $x$.
\end{cor}
\subsection{A useful lemma}\
To end this section we give a useful lemma which can be derived from
the proof of \cite[Theorem 5.6]{GHSWY20}.
For completeness, we include the proof here.
To do this, we need the following topological characteristic factor theorem.
\begin{theorem}\cite[Theorem 4.2]{GHSWY20}\label{key-thm}
Let $\pi:(X,T)\to (Y,T)$ be a factor map of minimal systems.
If $\pi$ is open and $X/ \mathbf{RP}^{[\infty]}(X)$ is a factor of $Y$,
then $Y$ is a $d$-step topological characteristic factor of $X$ for all $d\in \mathbb{N}$.
\end{theorem}
With the help of the above powerful theorem we are able to show:
\begin{lemma}\label{fip-infi-fiber}
Let $\pi:(X,T)\to (Y,T)$ be a factor map of minimal systems.
If $\pi$ is open and $X/ \mathbf{RP}^{[\infty]}(X)$ is a factor of $Y$,
then for any distinct positive integers $a_1,\ldots,a_s$,
there is a dense $G_\delta$ subset $\Omega$ of $X$ such that for
any open subsets $V_0,V_1,\ldots,V_s$ of $X$ with
$\bigcap_{i=0}^s \pi(V_i)\neq \emptyset$ and any $z\in V_0 \cap \Omega$ with $\pi(z)\in \bigcap_{i=0}^s \pi(V_i)$,
there exists some $A\in \mathcal{F}_{fip}$
such that $T^{a_in}z\in V_i$ for every $i=1,\ldots,s$ and $n\in A$.
\end{lemma}
\begin{proof}
By Theorem \ref{key-thm},
for every $d\in \mathbb{N}$
there is a dense $G_\delta$ subset $\Omega_d$ of $X$ such that for each $x\in \Omega_d$,
$L_x^d(X)=\overline{\mathcal{O}}((x,\ldots,x),T\times \ldots \times T^d)$
is $\pi\times\ldots\times \pi$-saturated.\footnote{See Definition \ref{def-saturated}.}
Set $\Omega=\bigcap_{d\in \mathbb{N}}\Omega_d$,
then $\Omega$ is a dense $G_\delta$ subset of $X$.
We next show that the set $\Omega$ meets our requirement.
Now fix distinct positive integers $a_1,\ldots,a_s$.
Let $V_0,V_1,\ldots,V_s$ be open subsets of $X$ with
$W:=\bigcap_{i=0}^s \pi(V_i)\neq \emptyset$,
then $\pi^{-1}(W)\cap V_i\neq \emptyset$ for every $i=0,1,\ldots,s$.
Let $z\in \Omega \cap V_0 \cap \pi^{-1}(W)$.
For $i=1,\ldots,s$, let $z_i\in \pi^{-1}(\{ \pi(z) \} )\cap V_i$ and choose $\delta>0$ with $B(z_i,\delta)\subset V_i$.
Set $N=\max\{a_1,\ldots,a_s\}$.
Let $\{b_j\}_{j\in \mathbb{N}}$ be a sequence of positive integers such that $b_{j+1}\geq N(b_1+\ldots+b_j)+1$,
and let $I_j$ be the finite IP-set generated by $\{b_1,\ldots,b_j\}$.
\noindent {\bf Claim:}
For $i,i'\in\{1,\ldots,N\}$ and $m,m'\in I_j$,
$im=i'm'$ if and only if $i=i'$ and $m=m'$.
\begin{proof}[Proof of Claim]
Suppose for a contradiction that there exist
$i,i'\in\{1,\ldots,N\}$ with $i<i'$ and $m,m'\in I_j$ such that $im=i'm'$.
Let $\epsilon_1,\ldots,\epsilon_j,\epsilon_1',\ldots,\epsilon_j'\in \{0,1\}$
such that $m=b_1\epsilon_1+\ldots+b_j\epsilon_j$ and $m'=b_1\epsilon_1'+\ldots+b_j\epsilon_j'$.
Let
\[
j_0=\max\{ 1\leq n \leq j: \epsilon_n+\epsilon_n'>0\}.
\]
If $j_0=1$, then $m=m'=b_1$,
and thus $im< i'm'$.
If $j_0\geq 2$, then we have
\begin{align*}
b_{j_0}\leq |i\epsilon_{j_0}-i'\epsilon_{j_0}'|b_{j_0} &=\big|i\sum_{n=1}^{j_0-1}b_n\epsilon_n-
i'\sum_{n=1}^{j_0-1}b_n\epsilon_n'\big| \\
& \leq \sum_{n=1}^{j_0-1}b_n|i\epsilon_n-i'\epsilon'_n|\\
& \leq N(b_1+\ldots+b_{j_0-1}),
\end{align*}
which is a contradiction
by the choice of $b_{j_0}$.
This shows the claim.
\end{proof}
For $k\in \mathbb{N}$, let $B_k=b_1+\ldots+b_k$ and
let $\vec{z}_k=(z_1^{k},\ldots,z_{B_kN}^k)\in X^{B_k N} $
such that
\[
z_{j}^k=
\begin{cases}
z_i, & j=a_im,\;\mathrm{where}\; i\in \{1,\ldots,s\},\; m\in I_k;\\
z, & \hbox{otherwise.}
\end{cases}
\]
By the claim above, every $\vec{z}_k$ is well defined.
Recall that for any $d\in \mathbb{N}$,
$L_z^d(X)$
is $\pi\times \ldots\times \pi$-saturated,
then $\vec{y}\in L_z^d(X)$ for any $\vec{y}=(y_1,\ldots,y_d)\in X^d$ with
$\pi(z)=\pi(y_i)$ for $i=1,\ldots,d$.
In particular, $\vec{z}_k\in L_z^{B_kN}(X)$
which implies that there is some $n_k\in \mathbb{N}$
such that
$\rho(T^{jn_k}z,z_j^k)<\delta$ for $j=1,\ldots,B_kN$.
Let $A_k=\{m n_k :m\in I_k\}$ and $A=\bigcup_{k\in \mathbb{N}}A_k$.
Then $A\in \mathcal{F}_{fip}$ and
$T^{a_in}z\in B(z_i,\delta)\subset V_i$ for $i=1,\ldots,s$ and $n\in A$.
This completes the proof.
\end{proof}
\section{Proof of Theorem \ref{poly-orbit} assuming Theorem \ref{polynomial-TCF}}\label{pf-thm-A}
In this section, assuming Theorem \ref{polynomial-TCF} we give a proof of Theorem \ref{poly-orbit}.
We start with the following simple observation.
\begin{lemma}\label{total-minimality-proximal}
Let $\pi:(X,T)\to (Y,T)$ be an almost one to one extension of minimal systems.
Then $(X,T)$ is totally minimal if and only if $(Y,T)$ is totally minimal.
\end{lemma}
Now we are in position to show Theorem \ref{poly-orbit} assuming Theorem \ref{polynomial-TCF}.
\begin{proof}[Proof of Theorem \ref{poly-orbit} assuming Theorem \ref{polynomial-TCF}]
Let $(X,T)$ be a totally minimal system
and let $X_\infty=X/\mathbf{RP}^{[\infty]}(X)$.
It follows from Theorem \ref{polynomial-TCF} that there exist minimal systems $X^*$ and $X_\infty^*$
which are almost one to one extensions of $X$ and $X_\infty$ respectively,
and a commuting diagram below:
\[
\xymatrix{
X \ar[d]_{\pi} & X^* \ar[d]^{\pi^*} \ar[l]_{\sigma^*} \\
X_\infty & X_\infty^* \ar[l]_{\tau^*}
}
\]
By Lemma \ref{total-minimality-proximal}, $(X^*,T)$ and $(X_\infty^*,T)$ are both totally minimal.
It suffices to verify Theorem \ref{poly-orbit} for system $(X^*,T)$.
Let $\{p_1,\ldots,p_d\}$ be an independent family of integer polynomials,
and let $V_0,V_1,\ldots,V_d$ be non-empty open subsets of $X^*$.
As $\pi^*$ is open, $\pi^*(V_0),\pi^*(V_1),\ldots,\pi^*(V_d)$ are non-empty open subsets of $X^*_\infty$.
Notice that $X^*_\infty$ is an almost one to one extension of $X_\infty$
which is a minimal $\infty$-step pro-nilsystem,
thus by Lemma \ref{equi-condition} and Corollary \ref{uni-distributed-AA},
there is some $m\in \mathbb{N}$ such that
\begin{align*}
& \pi^*(V_0)\cap\pi^*( T^{-p_1(m)}V_1)\cap\ldots \cap \pi^*( T^{-p_d(m)}V_d) \\
= & \pi^*(V_0)\cap T^{-p_1(m)}\pi^*(V_1)\cap\ldots \cap T^{-p_d(m)}\pi^*(V_d)\neq \emptyset
\end{align*}
For $i=1,\ldots,d$, let $p_i'(n)=p_i(n+m)-p_i(m)$.
Then every $p_i'$ is an integer polynomial with $p_i'(0)=0$.
Now using Theorem \ref{polynomial-TCF} for integer polynomials $p_1',\ldots,p_d'$
and open sets $V_0, T^{-p_1(m)}V_1,\ldots,T^{-p_d(m)}V_d$,
there exists some $k\in \mathbb{N}$ such that
\begin{align*}
& V_0\cap T^{-p_1(k+m)}V_1\cap\ldots \cap T^{-p_d(k+m)}V_d \\
= & V_0\cap T^{-p_1'(k)}(T^{-p_1(m)}V_1)\cap\ldots \cap T^{-p_d'(k)}( T^{-p_d(m)}V_d)\neq \emptyset,
\end{align*}
which implies Theorem \ref{poly-orbit} for system $(X^*,T)$ by Lemma \ref{equi-condition}.
This completes the proof.
\end{proof}
\section{Proof of Theorem \ref{polynomial-TCF}}\label{pf-thm-B}
In this section, we will prove Theorem \ref{polynomial-TCF}.
Let $\mathcal{P}^*$ be the set of all non-constant integer polynomials
{\bf taking zero value at zero}.
A \emph{system} $\mathcal{A}$ is a finite subset of $\mathcal{P}^*$.
\subsection{The PET-induction}\label{PET-induction-def}\
Two integer polynomials $p,q$ will be called \emph{equivalent}
if they have the same degree and the leading coefficients
of the polynomials $p,q$ coincide as well.
If $C$ is a set of equivalent integer polynomials,
its \emph{degree} $w(C)$ is the degree of any of its members.
For every system $\mathcal{A}$, we define its \emph{weight vector} $\phi(\mathcal{A})$
as follows.
Let $w_1<\ldots<w_k$ be the
distinct degrees of the equivalence classes appearing in $\mathcal{A}$.
For $1\leq i\leq k$, let $\phi(w_i)$ be the number of the equivalence classes of elements of $\mathcal{A}$
with degree $w_i$. Let the weight vector $\phi(\mathcal{A})$ be
\[
\phi(\mathcal{A})=\big((\phi(w_1),w_1),\ldots,(\phi(w_k),w_k)\big).
\]
For example, the weight vector of $\{c_1n,\ldots,c_sn\}$ is $(s,1)$ if $c_1,\ldots,c_s$
are distinct non-zero integers;
the weight vector of $\{an^2+b_1n,\ldots,an^2+b_tn\}$ ($a$ is a non-zero integer) is $(1,2)$;
and
the weight vector of $\{an^2+b_1n,\ldots,an^2+b_tn,\;c_1n,\ldots,c_sn\}$
($a$ is a non-zero integer and $c_1,\ldots,c_s$
are distinct non-zero integers) is $\big((s,1),(1,2)\big)$;
and a general system of polynomials of degree at most 2
has weight vector $\big((s,1),(k,2)\big)$ for suitable $s$ and $k$.
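To illustrate the definitions, consider the system $\mathcal{A}=\{n^2+n,\;n^2+2n,\;2n^2,\;3n\}$:
its equivalence classes are $\{n^2+n,\,n^2+2n\}$, $\{2n^2\}$ and $\{3n\}$,
the occurring degrees are $w_1=1<w_2=2$ with $\phi(1)=1$ and $\phi(2)=2$,
so $\phi(\mathcal{A})=\big((1,1),(2,2)\big)$.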
Let $\mathcal{A},\mathcal{A}'$ be two systems.
We say that $\mathcal{A}'$ \emph{precedes} $\mathcal{A}$
if there exists a degree $w$ such that $\phi(\mathcal{A}')(w)<\phi(\mathcal{A})(w)$ and
$\phi(\mathcal{A})(u)=\phi(\mathcal{A}')(u)$ for any degree $u>w$.
We denote it by $\phi(\mathcal{A}')\prec \phi(\mathcal{A})$.
Under the order of weight vectors, we have
\begin{align*}
&(1,1)\prec (2,1)\prec\ldots\prec (m,1)\prec\ldots \prec(1,2)\prec\big((1,1),(1,2)\big)\prec\ldots \prec\\
& \big((m,1),(1,2)\big)\prec\ldots\prec(2,2)\prec\big((1,1),(2,2)\big)\prec\ldots \prec \big((m,1),(2,2)\big)\prec\ldots \prec\\
&\big((m,1),(k,2)\big)\prec\ldots \prec(1,3)\prec\big((1,1),(1,3)\big)\prec\ldots
\big((m,1),(k,2),(1,3)\big)\prec\ldots \prec\\
& (2,3)\prec\ldots \prec
\big((a_1,1),(a_2,2)\ldots,(a_k,k)\big)\prec\ldots.
\end{align*}
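For instance, $\big((m,1),(1,2)\big)\prec (2,2)$: at degree $2$ one has $1<2$, and no degree larger than $2$ occurs in either weight vector.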
For $p\in \mathcal{P}^*$ and $m\in \mathbb{Z}$,
define $(\partial_m p)(n):=p(n+m)-p(m)$.
It is clear that $\partial_m p \in \mathcal{P}^*$ for any $p\in \mathcal{P}^*$ and $m\in \mathbb{Z}$.
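For instance, if $p(n)=an^2+bn$ with $a\neq 0$, then
\[
(\partial_m p)(n)=a(n+m)^2+b(n+m)-(am^2+bm)=an^2+(2am+b)n .
\]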
The following lemma can be found in \cite{BL96,Le94}.
\begin{lemma}\label{PET-induction}
Let $\mathcal{A}$ be a system and let $m_1,\ldots,m_d $ be distinct non-zero integers.
Let $p\in \mathcal{A}$ be an element of the minimal degree in $\mathcal{A}$ and let
\[
\mathcal{A}'=\{q-p,\;\partial_{m_j}q-p:\; q\in \mathcal{A},\; 1\leq j\leq d\},
\]
then $\phi(\mathcal{A}')\prec \phi(\mathcal{A})$.
\end{lemma}
\subsection{A stronger result}\
Throughout this section,
let $(X,T)$ and $(Y,T)$ be minimal systems, and
let
\[
X\stackrel{\pi}{\longrightarrow} Y\stackrel{\phi}{\longrightarrow} X/\mathbf{RP}^{[\infty]}(X)=:X_\infty
\]
be factor maps such that
$\pi$ is \textbf{open} and $\phi$ is \textbf{almost one to one}.
For systems $\mathcal{A}=\{p_1,\ldots,p_s\}$ and $\mathcal{C}$,
we say for convenience that $\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$
if for any open subsets $V_0,V_1,\ldots,V_s$ of $X$ with
$\bigcap_{i=0}^s\pi(V_i)\neq \emptyset$,
there exist $z\in V_0$ and $n\in \mathbb{N}$
such that
\begin{enumerate}[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item $T^{p_i(n)}z\in V_i$ for $1\leq i\leq s$;
\item\label{AAAA1111} $T^{q(n)}\pi(z)\in \bigcap_{i=0}^s \pi(V_i)$ for $q\in \mathcal{C}$.
\end{enumerate}
It follows from Theorem \ref{key-thm0}
that to show Theorem \ref{polynomial-TCF},
it suffices to show the following stronger result:
\begin{theorem}\label{polynomial-case}
For any systems $\mathcal{A}$ and $\mathcal{C}$,
$\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$.
\end{theorem}
\subsubsection{Ideas for the proof of Theorem \ref{polynomial-case}}\
To prove Theorem \ref{polynomial-case}, we will use induction on the weight vector of $\mathcal{A}$.
The first step is to show that
$\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$
if the weight vector of $\mathcal{A}$ is $(s,1)$, i.e., $\mathcal{A}=\{c_1n,\ldots,c_sn\}$,
where $c_1,\ldots,c_s$ are distinct non-zero integers.
In the second
step we assume that $\pi$ has the property $\Lambda(\mathcal{A}',\mathcal{C}')$
for any system $\mathcal{C}'$ and any system $\mathcal{A}'$ whose weight vector is
$\prec \big( (\phi(w_1),w_1),\ldots,(\phi(w_k),w_k)\big)$.
Then we show that
$\pi$ also has the property $\Lambda(\mathcal{A},\mathcal{C})$
for any system $\mathcal{C}$ and
the system $\mathcal{A}$
with weight vector $\big( (\phi(w_1),w_1),\ldots,(\phi(w_k),w_k)\big)$,
and hence the proof is completed.
Before giving the proof of the second step,
we show that Theorem \ref{polynomial-case} holds for systems $\mathcal{A}$ with weight vectors $(1,2)$ and $\big((s,1),(1,2)\big)$
as examples,
to illustrate our basic ideas.
\subsubsection{The concrete construction}\
We use a simple example to describe how we prove Theorem \ref{polynomial-case}.
Let $(X,T)$ be a minimal system, and assume $\pi:X\to X/\mathbf{RP}^{[\infty]}(X)= X_{\infty}$
is open. For open subsets $U,V$ of $X$
with $\pi(U)\cap \pi(V)\neq\emptyset$, we aim to choose some $n\in \mathbb{Z}$ with
\begin{equation}\label{simple-example}
U\cap T^{-n^2}V\neq \emptyset.
\end{equation}
\noindent {\bf Construction.}
The classical idea (under some assumptions) for showing (\ref{simple-example}) is the following:\
\noindent {(i):} Cover $X$ by the orbits of $U,V$. That is, choose $d\in \mathbb{N}$
with $\bigcup_{i=1}^d T^iU=X=\bigcup_{i=1}^d T^iV$;\
\noindent {(ii):} Construct $x_1,\ldots,x_d\in X$ and $n_1,\ldots,n_d\in \mathbb{N}$
such that
\[
T^{(n_i+\ldots+n_j)^2}x_j\in T^iV,\quad \forall\; 1\leq i\leq j\leq d.
\]
Once we have achieved this, then $x_d\in T^lU$ for some $ l\in\{1,\ldots, d\}$
which implies
\[
U\cap T^{-(n_l+\ldots+n_d)^2}V\neq \emptyset.
\]
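Indeed, if $x_d\in T^lU$, then $T^{-l}x_d\in U$, while the construction gives $T^{(n_l+\ldots+n_d)^2}x_d\in T^lV$, that is, $T^{-l}x_d\in T^{-(n_l+\ldots+n_d)^2}V$; hence $T^{-l}x_d$ witnesses that the above intersection is non-empty.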
For general minimal systems,
this construction needs some modifications.
Now consider the return time set
\begin{equation}\label{return time set-a}
N_k(W_1,W_2):=\{n\in \mathbb{Z}:W_1\cap T^{-kn}W_2\neq \emptyset\}.
\end{equation}
It follows from Theorem \ref{key-thm} that $N_k(W_1,W_2)$ is non-empty for any
non-zero integer $k$ and any open subsets $W_1,W_2$ of $X$ with $\pi(W_1)\cap \pi(W_2)\neq \emptyset$.
Thus to ensure the existence of the construction (i),
the first change we make is the following:
\noindent {(i)*:} Cover some fixed fiber instead of the whole space.
That is,
choose some $x\in X$ with $\pi(x)\in \pi(U)\cap \pi(V)$ and choose $a_1,\ldots,a_d\in \mathbb{N}$ such that
\[
\pi^{-1}( \{\pi(x)\})\subset \big(\bigcup_{j=1}^dT^{a_j}U\big)
\cap
\big(\bigcup_{j=1}^dT^{a_j}V\big).
\]
Let us go into the details of the first two steps in construction (ii).
Choose $x_1\in X$ and $n_1\in \mathbb{N}$ with $ T^{n_1^2}x_1 \in T^{a_1}V$.
Note that by the Bergelson-Leibman Theorem \cite{BL96} such a choice exists.
What additional conditions the point $x_1$ needs to satisfy remains to be determined.
For the choice of $x_2$,
a feasible method is to track
$x_1$ along $\partial_{n_1}n^2=n^2+2n_1n$.
To be more precise, it suffices to choose $x_2\in X$ and $n_2\in \mathbb{N}$
such that $T^{n_2^2}x_2\in T^{a_2}V$ and $T^{n_2^2+2n_1n_2}x_2\in V_{x_1}$,
where $V_{x_1}$ is an open neighbourhood of $x_1$ with $T^{n_1^2}V_{x_1}\subset T^{a_1}V$.
This implies that the return time set $\{n\in \mathbb{Z}:T^{a_2}V\cap T^{-2n_1n}V_{x_1}\neq \emptyset\}$
should be non-empty.
By the argument above about the set (\ref{return time set-a}),
a suitable condition is $\pi(T^{a_2}V)\cap \pi(V_{x_1})\neq \emptyset$.
Thus to guarantee the induction procedure,
the points we choose in construction (ii) need to be very close to the fixed fiber,
i.e., $T^{n_1^2}\pi(x_1),T^{n_2^2}\pi(x_2)\in \pi(U)\cap \pi(V)$:
\[
\begin{tikzpicture}
\draw [dashed,-] (-3,4)--(-3,-4);
\draw [dashed,-] (2.6,4)--(2.6,-4);
\draw [dashed,-] (3,4)--(3,-4);
\draw [dashed,-] (-6,-2)--(6,-2);
\node [right] at (-3,-3) {$\pi(x)$};
\node [left] at (-4.5,3) {$T^{a_1}V$};
\node [left] at (-4.5,2.2) {$T^{a_2}V$};
\node [left] at (-4.7,-1.5) {$T^{a_d}V$};
\node [left] at (1.5,3) {$T^{a_1}V$};
\node [left] at (1.5,2.2) {$T^{a_2}V$};
\node [left] at (1.3,-1.5) {$T^{a_d}V$};
\node [right] at (3,-3) {$\pi(x)$};
\node [below] at (-3.4,-0.5){$x_1$};
\node [below] at (2.6,-0.5){$x_1$};
\node [below] at (2,0){$x_2$};
\node [left] at (3,3) {$x$};
\node [left] at (-3,3) {$x$};
\fill
(-3,-3)circle (2pt)
(3,-3)circle (2pt)
(3,3)circle (2pt)
(-3.4,-0.5)circle (2pt)
(2.6,-0.5)circle (2pt)
(2.6,2)circle (2pt)
(-3,3)circle (2pt)
(2,0)circle (2pt);
\draw[thick,red,->] (-3.4,-0.5) to[out=10, in=360] node[right, midway] {$n_1^2$} (-2.6,3);
\draw[thick,red,->] (2.6,-0.5) to[out=10, in=360] node[right, midway] {$n_1^2$} (3.4,3);
\draw[thick,blue,->] (2.6,2) to[out=-20, in=40] node[left, midway] {$2n_1n_2$} (2.6,-0.37);
\draw[thick,red,->] (2,0) to[out=170, in=170] node[left, midway] {$n_2^2$} (2.6,2);
\draw[ thick] (-3, -3) ellipse (1.5 and 0.7);
\draw[thick] (3, -3) ellipse (1.5 and 0.7);
\draw[ thick] (3, 3) ellipse (1.5 and 0.5);
\draw[ thick] (3, 2.2) ellipse (1.6 and 0.4);
\draw[thick] (3, -1.5) ellipse (1.7 and 0.2);
\draw[ thick] (-3, 3) ellipse (1.5 and 0.5);
\draw[ thick] (-3, 2.2) ellipse (1.6 and 0.4);
\draw[ thick] (-3, -1.5) ellipse (1.7 and 0.2);
\node [right,font=\Huge] at (5.5,-2.8) {$X_\infty$};
\node [right,font=\Huge] at (5.5,2.5) {$X$};
\end{tikzpicture}
\]
Furthermore,
since we will reduce a system of any given complexity to one of lower complexity,
the points chosen in construction (ii) also need to stay close to the fixed fiber
along polynomials of higher degrees;
this is why we additionally need property \ref{AAAA1111} in Theorem \ref{polynomial-case}.
We summarize this as follows:
\noindent {(ii)*:}
For any system $\mathcal{C}$, construct $x_1,\ldots,x_d\in X$ and $n_1,\ldots,n_d\in \mathbb{N}$
such that
\begin{enumerate}
[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item $T^{(n_i+\ldots+n_j)^2}x_j\in T^iV$ for $1\leq i\leq j\leq d$;
\item$T^{q(n)}\pi(x_j)\in \pi(U)\cap \pi(V)$ for $1\leq j\leq d$ and $q\in \mathcal{C}$.
\end{enumerate}
In practice, we will use constructions (i)* and (ii)* in the general case to prove Theorem \ref{polynomial-case}.
When doing this, we find that if the collection of polynomials contains both linear
and non-linear elements, the argument becomes quite involved.
We will explain in Subsection \ref{exampless} how to overcome this difficulty via proving Case \ref{case2}.
\subsection{The first step: $\mathcal{A}=\{c_1n,\ldots,c_sn\}$}
\begin{lemma}\label{linear-case-with-constraint}
If there are distinct non-zero integers $c_1,\ldots,c_s$ such that $\mathcal{A}=\{c_1n,\ldots,c_sn\}$,
then $\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$ for
any system $\mathcal{C}$.
\end{lemma}
\begin{proof}
Fix distinct non-zero integers $c_1,\ldots,c_s$.
Let $V_0,V_1,\ldots,V_s$ be open subsets of $X$ with
$W:=\bigcap_{i=0}^s\pi(V_i)\neq \emptyset$.
Recall that the map $\pi$ is open, thus $W$ is an open subset of $Y$
and $\pi^{-1}( W)\cap V_i\neq \emptyset$
for every $0\leq i\leq s$.
\noindent {\bf Case 1:} $\min\{ c_1,\ldots,c_s\}>0$.
Fix a system $\mathcal{C}$.
By Corollary \ref{return-time-AA},
there is a dense $G_\delta$ subset $\Omega_Y$ of $Y$
such that for any $y\in \Omega_Y$,
$\bigcap_{q\in \mathcal{C}}N_q(y,V)\in \mathcal{F}_{fip}^*$
for every open neighbourhood $V$ of $y$.
By Lemma \ref{fip-infi-fiber}, there exists
a dense $G_\delta$ subset $\Omega_X$ of $X$ such that for any $x \in\Omega_X \cap V_0 \cap \pi^{-1}(W)$,
there exists $A\in \mathcal{F}_{fip}$
such that $T^{c_in}x\in V_i$ for $1\leq i\leq s$ and $n\in A$.
Let $z\in \pi^{-1}(\Omega_Y)\cap \Omega_X \cap V_0 \cap \pi^{-1}(W)$.
Then we have
\[
\{n\in \mathbb{Z}: \pi(z)\in \bigcap_{q\in \mathcal{C}}T^{-q(n)}W\}=
\bigcap_{q\in \mathcal{C}}N_q(\pi(z),W)\in \mathcal{F}_{fip}^*,
\]
and we can choose some $n\in \mathbb{Z}$ such that
\begin{enumerate}[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item $T^{c_in}z\in V_i$ for $1\leq i\leq s$;
\item $T^{q(n)}\pi(z)\in W=\bigcap_{i=0}^s \pi(V_i)$ for $q\in \mathcal{C}$.
\end{enumerate}
Thus $\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$.
\noindent {\bf Case 2:} $\min\{ c_1,\ldots,c_s\}<0$.
Fix a system $\mathcal{C}$.
Assume $c_m=\min\{ c_1,\ldots,c_s\}<0$ for some $m\in \{1,\ldots,s\}$.
Let
\begin{align*}
\mathcal{A}' &=\{-c_mn,\; (c_1-c_m)n,\ldots,(c_{m-1}-c_m)n,\; (c_{m+1}-c_m)n,\ldots,(c_s-c_m)n\}, \\
\mathcal{C}'& =\{q(n)-c_mn:q\in \mathcal{C}\}.
\end{align*}
By Case 1, $\pi $ has the property $\Lambda(\mathcal{A}',\mathcal{C}')$.
Then for open sets $V_m,V_0,\ldots,V_{m-1},V_{m+1},\ldots,V_s$,
there exist $w\in V_m$ and $n\in \mathbb{Z}$ such that
$T^{-c_mn}w\in V_0,T^{(c_i-c_m)n}w\in V_i$ for $i\in \{1,\ldots,s\}\backslash\{m\}$
and $T^{q(n)-c_mn}\pi(w)\in \bigcap_{i=0}^s\pi(V_i)$ for $q\in \mathcal{C}$.
By letting $z=T^{-c_mn}w$, we deduce that $\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$.
This completes the proof.
\end{proof}
\subsection{\bf Examples}\label{exampless}\
In this subsection,
we show Theorem \ref{polynomial-case} holds for system $\mathcal{A}$ with weight vectors $(1,2)$
and $\big((s,1),(1,2)\big)$.
\begin{case}\label{case1}
$\phi(\mathcal{A})=(1,2)$.
\end{case}
\begin{proof}
Let $\mathcal{A}=\{an^2+b_1n,\ldots,an^2+b_tn \}$, where $a$ is a non-zero integer
and $b_1,\ldots,b_t$ are distinct integers.
Let $V_0,V_1,\ldots,V_t$ be open subsets of $X$ with
$W:= \bigcap_{m=0}^t\pi(V_m)\neq \emptyset.$
Recall that the map $\pi$ is open; thus $W$ is an open subset of $Y$.
For $0\leq m\leq t$,
replacing $V_m$ by $V_m\cap \pi^{-1}(W)$ if necessary,
we may assume without loss of generality that $\pi(V_m)=W$.
As $(X,T)$ is minimal, there is some $N\in \mathbb{N}$ such that
$\bigcup_{j=1}^NT^j V_m=X$ for every $0\leq m\leq t$.
Let $x\in X$ with $\pi(x)\in W$ and let
\[
\{1\leq j\leq N:\pi(x)\in T^jW\}
=\{a_1,\ldots,a_d\}.
\]
Then we have
\[
\pi^{-1}( \{\pi(x)\})\subset \bigcap_{m=0}^t\big(\bigcup_{j=1}^dT^{a_j}V_m\big) .
\]
Indeed, if $x'\in X$ satisfies $\pi(x')=\pi(x)$ and $x'\in T^jV_m$ for some $1\leq j\leq N$, then $\pi(x)\in T^j\pi(V_m)=T^jW$, and hence $j\in\{a_1,\ldots,a_d\}$.
As the map $\pi$ is open,
by Theorem \ref{open-map} we can choose $\delta>0$ such that
\begin{equation}\label{relation-1}
\pi^{-1}\big(B(\pi(x),\delta)\big)\subset \bigcap_{m=0}^t \big(\bigcup_{j=1}^dT^{a_j}V_m\big),
\end{equation}
and
\begin{equation}\label{relation-2}
B(\pi(x),\delta)\subset W \cap \big(\bigcap_{j=1}^d T^{a_j}W \big).
\end{equation}
Fix a system $\mathcal{C}$.
Write $p_m(n)=an^2+b_mn$ for $1\leq m\leq t$ and let $\eta=\delta/d$.
Inductively we will construct $x_1,\ldots,x_d\in X$
and $n_1,\ldots,n_d\in \mathbb{N}$
such that for $1\leq j\leq l \leq d$,
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $\pi(x_l)\in B(\pi(x),l\eta)$;
\item $T^{p_m(n_j+\ldots+n_l)}x_l\in T^{a_j}V_m$ for $1\leq m\leq t$;
\item $T^{q(n_j+\ldots+n_{l})}\pi(x_l)\in B(\pi(x),l\eta)$ for $q\in \mathcal{C}$.
\end{itemize}
Assume this has been achieved. Then for $1\leq j \leq d$,
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $\pi(x_d)\in B(\pi(x),d\eta)=B(\pi(x),\delta)$;
\item$T^{p_m(n_j+\ldots+n_d)}x_d\in T^{a_j}V_m$ for $1\leq m\leq t$;
\item $T^{q(n_j+\ldots+n_d)}\pi(x_d)\in B(\pi(x),d\eta)=
B(\pi(x),\delta)\subset \bigcap_{j=1}^d T^{a_j}W $ for $q\in \mathcal{C}$ by (\ref{relation-2}).
\end{itemize}
As $\pi(x_d)\in B(\pi(x),\delta)$,
it follows from (\ref{relation-1}) that $x_d\in T^{a_j}V_0$ for some $ j\in\{1,\ldots, d\}$.
Put $z=T^{-a_j}x_d$ and $n=n_j+\ldots+n_d$; then we have
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $z=T^{-a_j}x_d\in V_0$;
\item $T^{p_m(n)}z=T^{-a_j}(T^{p_m(n)}x_d)\in V_m$ for $1\leq m\leq t$;
\item $T^{q(n)}\pi(z)=T^{-a_j}(T^{q(n)}\pi(x_d))\in W = \bigcap_{m=0}^t\pi(V_m)$ for $q\in \mathcal{C}$.
\end{itemize}
This shows that $\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$.
{\bf We now return to the inductive construction of $x_1,\ldots,x_d$ and $n_1,\ldots,n_d$.}
\noindent {\bf Step 1:}
Let $I_1=\pi^{-1}\big(B(\pi(x),\eta)\big)$. Then $I_1$ is an open subset of $X$ and
\[
\pi(x)\in \underbrace{\bigcap_{m=1}^t\pi( I_1\cap T^{a_1} V_m) }_{=:S_1}\subset B(\pi(x),\eta).
\]
\noindent {(i)} When $t=1$, i.e., $\mathcal{A}=\{p_1\}$.
By the Bergelson-Leibman Theorem \cite{BL96}, there exist $x_1\in I_1\cap T^{a_1}V_1$
and $n_1\in \mathbb{N}$ such that for $q\in \mathcal{C}$,
\[
T^{p_1(n_1)}x_1,\; T^{q(n_1)}x_1\in I_1\cap T^{a_1}V_1.
\]
Then for $q\in \mathcal{C}$ we have
\[
\pi(x_{1}),\; T^{q(n_{1})}\pi(x_{1})\in B(\pi(x),\eta).
\]
\noindent {(ii)} When $t\geq 2$.
Let
\begin{align*}
\mathcal{A}_1 &=\{ p_m-p_1:2\leq m\leq t\}=\{ (b_m-b_1) n:2\leq m \leq t\}, \\
\mathcal{C}_1 & =\{-p_1,\; q-p_1:q\in \mathcal{C}\}.
\end{align*}
By Lemma \ref{linear-case-with-constraint}, $\pi$ has the property $\Lambda(\mathcal{A}_1,\mathcal{C}_1)$.
Then for open sets $I_1\cap T^{a_1}V_1$ and $I_1\cap T^{a_1}V_2,\ldots,I_1\cap T^{a_1}V_t$,
there exist $y_1\in I_1\cap T^{a_1}V_1$ and $n_1\in \mathbb{N}$
such that
\begin{enumerate}[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item[$(1a)$] $T^{p_m(n_1)-p_1(n_1)}y_1\in I_1\cap T^{a_1} V_m$ for $2\leq m\leq t$;
\item[$(1c)$] $T^{-p_1( n_1)}\pi(y_1),\;T^{q(n_1)-p_1( n_1)}\pi(y_1)\in S_1$ for $q\in \mathcal{C}$.
\end{enumerate}
Set $x_1=T^{-p_1( n_1)}y_1$.
By $(1a)$, together with $T^{p_1(n_1)}x_1=y_1\in T^{a_1}V_1$, for $1\leq m\leq t$ we have
\[
T^{p_m(n_1)}x_1\in T^{a_1}V_m.
\]
By $(1c)$, for $ q\in \mathcal{C}$ we have
\[
\pi(x_{1}),\; T^{q(n_{1})}\pi(x_{1})\in S_1\subset B(\pi(x),\eta).
\]
Thus by (i) and (ii) we can choose $x_1\in X$ with $\pi(x_1)\in B(\pi(x),\eta)$ and $n_1\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $T^{p_m(n_{1})}x_{1}\in T^{a_1}V_m$ for $1\leq m\leq t$ ;
\item $T^{q(n_{1})}\pi(x_{1})\in B(\pi(x),\eta)$ for $q\in \mathcal{C}$.
\end{itemize}
\noindent {\bf Step $l$:}
Let $l\geq 2$ be an integer and assume that we have already chosen
$x_1,\ldots,x_{l-1}\in X$ and $n_1,\ldots,n_{l-1}\in \mathbb{N}$ such that for $1\leq j \leq l-1$,
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $ \pi(x_{l-1})\in B(\pi(x),(l-1)\eta)\subset B(\pi(x),\delta)$;
\item $T^{p_m(n_j+\ldots+n_{l-1})}x_{l-1}\in T^{a_j}V_m$ for $1\leq m \leq t$;
\item $ T^{q(n_j+\ldots+n_{l-1})}\pi(x_{l-1})\in B(\pi(x),(l-1)\eta)$ for $q\in \mathcal{C}$.
\end{itemize}
Choose $\eta_l>0$ with $\eta_l<\eta$ such that for $1\leq j \leq l-1$,
\begin{align}
\label{newl1} T^{p_m(n_j+\ldots+n_{l-1})}B(x_{l-1},\eta_l)&\subset T^{a_j}V_m,
&\forall\; 1\leq m \leq t, \\
\label{newl2} T^{q(n_j+\ldots+n_{l-1})}B\big(\pi(x_{l-1}),\eta_l\big)&
\subset B(\pi(x),(l-1)\eta),&\forall \; q\in \mathcal{C}.
\end{align}
Let $I_l=\pi^{-1}\big(B(\pi(x_{l-1}),\eta_l)\big)$.
By (\ref{relation-2}),
we have $ \pi(x_{l-1})\in B(\pi(x),\delta)\subset \bigcap_{j=1}^dT^{a_j}W$
and
\[
\pi(x_{l-1})\in \underbrace{\bigcap_{m=1}^t\pi( I_l\cap T^{a_l} V_m)\cap
\pi\big(B(x_{l-1},\eta_l)\big) }_{=:S_l}\subset \pi(I_l)=
B(\pi(x_{l-1}),\eta_l)\subset B(\pi(x_{l-1}),\eta).
\]
Let
\begin{align*}
\mathcal{A}_l= &\{ p_m-p_1:2\leq m\leq t\}\cup\{\partial_{n_j+\ldots+n_{l-1}}p_m-p_1:1\leq m\leq t,\,1\leq j\leq l-1\} \\
= & \{ (b_m-b_1) n:2\leq m\leq t\}\cup\{ \big(b_m-b_1+2a(n_j+\ldots+n_{l-1})\big) n:1\leq m\leq t,\,1\leq j\leq l-1\},\\
\mathcal{C}_l=&\{-p_1,\; q-p_1,\;\partial_{n_j+\ldots+n_{l-1}}q-p_1:q\in \mathcal{C},1\leq j\leq l-1\}.
\end{align*}
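Here, and repeatedly below, we use the elementary identity: for $p_m(n)=an^2+b_mn$ and $k=n_j+\ldots+n_{l-1}$,
\[
\partial_{k}p_m(n)=p_m(n+k)-p_m(k)=an^2+(b_m+2ak)n,
\qquad\text{so that}\qquad
p_m(k+n_l)=p_m(k)+\partial_{k}p_m(n_l);
\]
this explains the second description of $\mathcal{A}_l$ above.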
By Lemma \ref{linear-case-with-constraint}, $\pi$ has the property $\Lambda(\mathcal{A}_l,\mathcal{C}_l)$.
Then for open sets
$I_l\cap T^{a_l}V_1,I_l\cap T^{a_l}V_2, \ldots,I_l\cap T^{a_l}V_t ,
\underbrace{B(x_{l-1},\eta_l),\ldots,B(x_{l-1},\eta_l)}_{t(l-1)\; \mathrm{times}}$,
there exist $y_l\in I_l\cap T^{a_l}V_1$ and $n_l\in \mathbb{N}$
such that for $1\leq j \leq l-1$,
\begin{enumerate}[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item[$(la)_1$]$ T^{p_m(n_l)-p_1(n_l)}y_l\in I_l\cap T^{a_l}V_m$ for $2\leq m\leq t$;
\item[$(la)_2$] $ T^{\partial_{n_j+\ldots+n_{l-1}}p_m(n_l)-p_1(n_l)}y_l\in B(x_{l-1},\eta_l)$
for $1\leq m\leq t$;
\item[$(lc)_1$] $ T^{-p_1(n_l)}\pi(y_l),\;T^{q(n_l)-p_1(n_l)}\pi(y_l)\in S_l$ for $q\in \mathcal{C}$;
\item[$(lc)_2$] $ T^{\partial_{n_j+\ldots+n_{l-1}}q(n_l)-p_1(n_l)}\pi(y_l)\in S_l
\subset B(\pi(x_{l-1}),\eta_l)$ for $q\in \mathcal{C}$.
\end{enumerate}
Set $x_l=T^{-p_1( n_l)}y_l$.
By $(la)_1$, together with $T^{p_1(n_l)}x_l=y_l\in T^{a_l}V_1$, for $1\leq m\leq t$ we have
\[
T^{p_m(n_l)}x_l\in T^{a_l}V_m.
\]
By $(la)_2$ and (\ref{newl1}), for $1\leq m\leq t,1\leq j\leq l-1$ we have
\begin{align*}
T^{p_m(n_j+\ldots+n_{l-1}+n_l)}x_l=& T^{p_m(n_j+\ldots+n_{l-1})}(T^{\partial_{n_j+\ldots+n_{l-1}}p_m(n_l)}x_l)\ \\
& \in T^{p_m(n_j+\ldots+n_{l-1})} B(x_{l-1},\eta_l)\subset T^{a_j}V_m.
\end{align*}
By $(lc)_1$, for $q\in \mathcal{C}$ we have
\[
\pi(x_l), \;T^{q(n_l)}\pi(x_l)\in S_l\subset B(\pi(x_{l-1}),\eta)
\subset B(\pi(x),l\eta).
\]
By $(lc)_2$ and (\ref{newl2}), for $q\in \mathcal{C},1\leq j\leq l-1$ we have
\begin{align*}
T^{q(n_j+\ldots+n_{l-1}+n_l)}\pi(x_l)=&T^{q(n_j+\ldots+n_{l-1})}(T^{\partial_{n_j+\ldots+n_{l-1}}q(n_l)}\pi(x_l))\ \\
& \in T^{q(n_j+\ldots+n_{l-1})} B(\pi(x_{l-1}),\eta_l)\subset B(\pi(x),l\eta).
\end{align*}
This completes the inductive construction, and hence the proof.
\end{proof}
\begin{case}\label{case2}
$\phi(\mathcal{A})=\big((s,1),(1,2)\big)$.
\end{case}
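For instance, $\mathcal{A}=\{2n,\,5n,\,n^2,\,n^2+n\}$ is a system of this type (with $s=2$ and $t=2$): the two linear polynomials have distinct leading coefficients and form two equivalence classes of degree $1$, while the two quadratic polynomials form a single class of degree $2$, so that $\phi(\mathcal{A})=\big((2,1),(1,2)\big)$.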
From $(la)_1$ and $(la)_2$ in the proof of Case \ref{case1}, we can see that the points
$T^{p_m(n_l)}x_l$ and $T^{\partial_{n_j+\ldots+n_{l-1}}p_m(n_l)}x_l$
should lie in different open sets.
However, this may fail for a family of polynomials containing linear ones.
The reason is clear:
for any linear polynomial $q\in \mathcal{P}^*$,
one has $\partial_mq=q$ for all $m\in \mathbb{Z}$,
so we cannot use $q$ and $\partial_mq$ to track different open sets.
To overcome this difficulty, we divide the proof of Case \ref{case2} into
the following two claims, whose proofs will be given after the proof of Case \ref{case2} since they are rather long.
For the general case, the idea is similar.
\begin{claim}\label{ex2}
Let $\mathcal{A}=\{an^2+b_1n,\ldots,an^2+b_tn \}$, where $a$ is a non-zero integer
and $b_1,\ldots,b_t$ are distinct integers,
and let $c_1,\ldots,c_s$ be distinct non-zero integers.
Then for any system $\mathcal{C}$ and open subsets $V_0,V_1,\ldots,V_t$ of $X$
with $\bigcap_{m=0}^t\pi(V_m)\neq \emptyset$,
there exist $z\in V_0$ and $n\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item$ T^{c_i n}z\in V_0$ for $1\leq i\leq s$;
\item $ T^{an^2+b_mn}z\in V_m$ for $1\leq m\leq t$;
\item $T^{q(n)}\pi(z)\in \bigcap_{m=0}^t\pi(V_m)$ for $q\in \mathcal{C}$.
\end{itemize}
\end{claim}
\begin{claim}\label{ex-claim2}
Let $\mathcal{A}=\{an^2+b_1n,\ldots,an^2+b_tn \}$, where $a$ is a non-zero integer
and $b_1,\ldots,b_t$ are distinct integers,
and let $c_1,\ldots,c_s$ be distinct non-zero integers.
Then for any system $\mathcal{C}$ and open subsets $W_0,W_1,\ldots,W_s$ of $X$
with $\bigcap_{i=0}^s\pi(W_i)\neq \emptyset$,
there exist $z\in W_0$ and $n\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $ T^{an^2+b_mn}z\in W_0$ for $1\leq m\leq t$;
\item$ T^{c_in}z\in W_i$ for $1\leq i \leq s$;
\item $T^{q(n)}\pi(z)\in \bigcap_{i=0}^s\pi(W_i)$ for $q\in \mathcal{C}$.
\end{itemize}
\end{claim}
Using Claims \ref{ex2} and \ref{ex-claim2}, we are able to give a proof of Case \ref{case2}.
\begin{proof}[Proof of Case \ref{case2} assuming Claims \ref{ex2} and \ref{ex-claim2}]
Fix a system $\mathcal{C}$.
Let
\[
\mathcal{A}=\{an^2+b_1n,\ldots,an^2+b_tn,\; c_1n,\ldots,c_s n \}
\]
where $a$ is a non-zero integer,
$b_1,\ldots,b_t$ are distinct integers, and
$c_1,\ldots,c_s$ are distinct non-zero integers.
Let $V_0,V_1,\ldots,V_s,U_1,\ldots,U_t$ be open subsets of $X$ with
\[
W:=\bigcap_{i=0}^s\pi(V_i)\cap\bigcap_{m=1}^t\pi(U_m)\neq \emptyset.
\]
Let $\mathcal{A}_1=\{an^2+b_1n,\ldots,an^2+b_tn\}$.
Applying Claim \ref{ex2} to the system $\mathcal{A}_1$ and the integers $c_1,\ldots,c_s$,
with the system $\mathcal{C}$ and the open sets $V_0\cap \pi^{-1}(W),U_1,\ldots,U_t$,
we obtain $w\in V_0\cap \pi^{-1}(W)$ and $k\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $ T^{c_ik}w\in V_0\cap \pi^{-1}(W)$ for $1\leq i \leq s$;
\item $ T^{ak^2+b_mk}w\in U_m$ for $1\leq m\leq t$;
\item$T^{q(k)}\pi(w)\in \pi\big(V_0\cap \pi^{-1}(W)\big)\cap \bigcap_{m=1}^t\pi(U_m)\subset W$ for $q\in \mathcal{C}$.
\end{itemize}
For every $1\leq i \leq s$,
as $T^{c_ik}w\in V_0\cap \pi^{-1}(W)$,
there is some $w_i\in V_i$ with $\pi(T^{c_ik}w)=\pi(w_i)$ which implies
\begin{equation}\label{equal1}
\pi(w)=\pi(T^{-c_ik}w_i).
\end{equation}
Choose $\gamma>0$ such that
\begin{align}
\label{conA1} T^{c_ik}B(T^{-c_ik}w_i,\gamma)&\subset V_i, \quad\;\;\; \forall\; 1\leq i\leq s, \\
\label{conA2} T^{ak^2+b_mk}B(w,\gamma)&\subset U_m, \quad\; \forall\; 1\leq m\leq t,\\
\label{conA3} T^{q(k)}B\big(\pi(w),\gamma\big)&\subset W,\quad\;\; \;\forall\; q\in \mathcal{C}.
\end{align}
Let $W_0=B(w,\gamma)\cap \pi^{-1}\big( B(\pi(w),\gamma)\big)\cap V_0$ and
let $W_i=B(T^{-c_ik}w_i,\gamma)$ for $1\leq i\leq s$.
Then $W_0$ is a non-empty open set as $w\in W_0$,
and $\pi(w)\in \pi(W_i)$ by (\ref{equal1}).
Let
\begin{align*}
\mathcal{A}_2 &=\{\partial_k p:p\in \mathcal{A}_1\}=\{an^2+(b_1+2ak)n,\ldots,an^2+(b_t+2ak)n\}, \\
\mathcal{C}_1 & =\{\partial_k q:q\in \mathcal{C}\}.
\end{align*}
Now applying Claim \ref{ex-claim2} to the system $\mathcal{A}_2$ and the integers $c_1,\ldots,c_s$,
with the system $\mathcal{C}_1$ and the open sets $W_0,W_1,\ldots,W_s$,
we obtain $z\in W_0$ and $l\in \mathbb{N}$ such that
\begin{align}
\label{qqq1} T^{al^2+(b_m+2ak)l}z&\in W_0\subset B(w,\gamma), \quad\quad\quad\quad\quad\quad\quad\quad\;\;\;\; \forall\; 1\leq m\leq t, \\
\label{qqq2} T^{c_il}z&\in W_i=B(T^{-c_ik}w_i,\gamma), \quad\quad\quad\quad\quad\quad\;\; \;\forall\; 1\leq i\leq s,\\
\label{qqq3}T^{\partial_kq(l)}\pi(z)&\in \bigcap_{i=0}^s\pi(W_i)\subset \pi(W_0)\subset B(\pi(w),\gamma),\quad\;\; \forall\; q\in \mathcal{C}.
\end{align}
By (\ref{conA1}) and (\ref{qqq2}), for $1\leq i \leq s$ we have
\[
T^{c_i(l+k)}z\in T^{c_ik}B(T^{-c_ik}w_i,\gamma)\subset V_i.
\]
By (\ref{conA2}) and (\ref{qqq1}), for $1\leq m\leq t$ we have
\[
T^{a(l+k)^2+b_m(l+k)}z=T^{ak^2+b_mk}(T^{al^2+(b_m+2ak)l}z)\in T^{ak^2+b_mk}B(w,\gamma)\subset U_m.
\]
By (\ref{conA3}) and (\ref{qqq3}), for $q\in \mathcal{C}$ we have
\[
T^{q(l+k)}\pi(z)=T^{q(k)}(T^{\partial_k q(l)}\pi(z))\in T^{q(k)}B(\pi(w),\gamma)\subset W.
\]
Put $n=l+k$; then we have
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $z\in V_0$;
\item$ T^{c_in}z\in V_i$ for $1\leq i \leq s$;
\item $ T^{an^2+b_mn}z\in U_m$ for $1\leq m\leq t$;
\item $T^{q(n)}\pi(z)\in W=\bigcap_{i=0}^s\pi(V_i)\cap\bigcap_{m=1}^t\pi(U_m)$ for $q\in \mathcal{C}$.
\end{itemize}
This completes the proof of Case \ref{case2}.
\end{proof}
We now proceed to the proofs of Claims \ref{ex2} and \ref{ex-claim2}.
\begin{proof}[Proof of Claim \ref{ex2}]
We prove this claim by induction on $s$.
When $s=0$, it follows from Case \ref{case1}.
Let $s\geq 1$ be an integer and
suppose the statement of the claim is true for $s-1$.
Let $c_1,\ldots,c_s$ be distinct non-zero integers,
and let $V_0,V_1,\ldots,V_t$ be open subsets of $X$ with $W:=\bigcap_{m=0}^t\pi(V_m)\neq \emptyset$.
By an argument similar to that in the proof of Case \ref{case1},
we may assume without loss of generality that $\pi(V_m)=W$ for $0\leq m\leq t$,
and there exist $x\in X$ with $\pi(x)\in W$, $a_1,\ldots,a_d\in \mathbb{N}$ and $\delta>0$ such that
\begin{equation}\label{case2-relation1}
\pi^{-1}\big(B(\pi(x),\delta)\big)\subset \bigcap_{m=0}^t\bigcup_{j=1}^dT^{a_j}V_m,
\end{equation}
and
\begin{equation}\label{case2-relation2}
B(\pi(x),\delta)\subset W \cap \big(\bigcap_{j=1}^d T^{a_j}W \big).
\end{equation}
Fix a system $\mathcal{C}$ and let $\eta=\delta/d$.
Write $p_m(n)=an^2+b_mn$ for $1\leq m\leq t$.
Inductively we will construct $x_1,\ldots,x_d\in X,k_1,\ldots,k_{d+1}\in \{1,\ldots,d\}$ with $k_1=1$
and $n_1,\ldots,n_d\in \mathbb{N}$
such that for every $1\leq j\leq l \leq d$,
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $x_l\in T^{a_{k_{l+1}}}V_0$;
\item $\pi(x_l)\in B(\pi(x),l\eta)$;
\item $T^{c_i(n_j+\ldots+n_l)}x_l\in T^{a_{k_j}}V_0$ for $1\leq i\leq s$;
\item $T^{p_m(n_j+\ldots+n_l)}x_l\in T^{a_{k_j}}V_m$ for $1\leq m\leq t$;
\item $T^{q(n_j+\ldots+n_l)}\pi(x_l)\in B(\pi(x),l\eta)$ for $q\in \mathcal{C}$.
\end{itemize}
Assume this has been achieved. Since $k_1,\ldots,k_{d+1}\in\{1,\ldots,d\}$, by the pigeonhole principle there exist $1\leq j\leq l\leq d$ with $k_j=k_{l+1}$ such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $x_l\in T^{a_{k_{l+1}}}V_0=T^{a_{k_j}}V_0$;
\item $T^{c_i(n_j+\ldots+n_l)}x_l\in T^{a_{k_j}}V_0$ for $1\leq i\leq s$;
\item $T^{p_m(n_j+\ldots+n_l)}x_l\in T^{a_{k_j}}V_m$ for $1\leq m\leq t$;
\item $T^{q(n_j+\ldots+n_l)}\pi(x_l)\in B(\pi(x),l\eta)\subset B(\pi(x),\delta)\subset \bigcap_{j=1}^d T^{a_j}W $ for $q\in \mathcal{C}$
\ \ \ by (\ref{case2-relation2}).
\end{itemize}
Put $n=n_j+\ldots+n_l$ and $z=T^{-a_{k_{j}}}x_{l}$; then the claim follows.
{\bf
We now return to the inductive construction of $x_1,\ldots,x_d,k_1,\ldots,k_{d+1}$ and $n_1,\ldots,n_d$.
}
\noindent {\bf Step 1:}
Let $I_1=\pi^{-1}\big(B(\pi(x),\eta)\big)$.
Then $I_1$ is an open subset of $X$ and
\[
\pi(x)\in \underbrace{\bigcap_{m=0}^t\pi( I_1\cap T^{a_1} V_m) }_{=:S_1}\subset \pi(I_1)=
B(\pi(x),\eta).
\]
Let
\[
\mathcal{A}_1 =\{p_m(n)-c_1n:1\leq m\leq t\} \quad\mathrm{and}\quad
\mathcal{C}_1 =\{-c_1n,\; q(n)-c_1n:q\in \mathcal{C}\}.
\]
By our inductive hypothesis,
the conclusion of Claim \ref{ex2} holds for system $\mathcal{A}_1$ and integers $c_2-c_1,\ldots,c_s-c_1$.
Then for system $\mathcal{C}_1$ and open sets $I_1\cap T^{a_1} V_0,I_1\cap T^{a_1} V_1, \ldots,I_1\cap T^{a_1} V_t$,
there exist $y_1\in I_1\cap T^{a_1}V_0$ and $n_1\in \mathbb{N}$
such that
\begin{enumerate}[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item[$(1a)_1$] $T^{(c_i-c_1) n_1}y_1\in I_1\cap T^{a_1} V_0$ for $2\leq i\leq s$;
\item[$(1a)_2$] $T^{p_m(n_1)-c_1n_1}y_1\in I_1\cap T^{a_1} V_m$ for $1\leq m\leq t$;
\item[$(1c)_{\;\;}$] $T^{-c_1 n_1}\pi(y_1),\; T^{q(n_1)-c_1 n_1}\pi(y_1)\in S_1$ for $q\in \mathcal{C}$.
\end{enumerate}
Set $x_1=T^{-c_1 n_1}y_1$.
By $(1a)_1$, for $1\leq i\leq s$ we have
\[
T^{c_i n_1}x_1\in T^{a_1}V_0.
\]
By $(1a)_2$, for $1\leq m\leq t$ we have
\[
T^{p_m(n_1)}x_{1}\in T^{a_1}V_m.
\]
By $(1c)$, for $q\in \mathcal{C}$ we have
\begin{equation}\label{case2-001}
\pi(x_{1}),\;T^{q(n_{1})}\pi(x_{1})\in S_1\subset B(\pi(x),\eta)\subset B(\pi(x),\delta).
\end{equation}
There is some $k_2\in \{1,\ldots,d\}$ with $x_1\in T^{a_{k_2}}V_0$ by (\ref{case2-relation1}) and (\ref{case2-001}).
\noindent {\bf Step $l$:}
Let $l\geq2$ be an integer and assume that we have already chosen
$x_1,\ldots,x_{l-1}\in X$, $k_1,\ldots,k_{l-1}\in \{1,\ldots,d\}$ and $n_1,\ldots,n_{l-1}\in \mathbb{N}$ such that
for $1\leq j\leq l-1$,
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $\pi(x_{l-1})\in B(\pi(x),(l-1)\eta)$;
\item $T^{c_i(n_j+\ldots+n_{l-1})}x_{l-1}\in T^{a_{k_{j}}}V_0$ for $1\leq i\leq s$;
\item $T^{p_m(n_j+\ldots+n_{l-1})}x_{l-1}\in T^{a_{k_{j}}}V_m$ for $1\leq m\leq t$;
\item $T^{q(n_j+\ldots+n_{l-1})}\pi(x_{l-1})\in B(\pi(x),(l-1)\eta)$ for $q\in \mathcal{C}$.
\end{itemize}
As $\pi(x_{l-1})\in B(\pi(x),(l-1)\eta)\subset B(\pi(x),d\eta)=B(\pi(x),\delta)$,
by (\ref{case2-relation1})
there is some $k_l\in \{1,\ldots,d\}$
such that $x_{l-1}\in T^{a_{k_l}}V_0$.
Choose $\eta_l>0$ with $\eta_l<\eta$ such that for $1\leq j\leq l-1$,
\begin{align}
\label{hhh1} B( x_{l-1},\eta_l)&\subset T^{a_{k_l}}V_0,& \\
\label{hhh2} T^{c_i(n_j+\ldots+n_{l-1})} B(x_{l-1},\eta_l)&\subset T^{a_{k_{j}}}V_0,
&\forall \;1\leq i\leq s, \\
\label{hhh3} T^{p_m(n_j+\ldots+n_{l-1})}B(x_{l-1},\eta_l)&
\subset T^{a_{k_{j}}}V_m ,& \forall \;1\leq m\leq t,\\
\label{hhh4} T^{q(n_j+\ldots+n_{l-1})}B(\pi(x_{l-1}),\eta_l)&
\subset B(\pi(x),(l-1)\eta) ,& \forall \;q\in \mathcal{C}.
\end{align}
Let $I_l=\pi^{-1}\big(B(\pi(x_{l-1}),\eta_l)\big)$.
By (\ref{case2-relation2}),
we have $ \pi(x_{l-1})\in B(\pi(x),\delta)\subset \bigcap_{j=1}^dT^{a_j}W$
and
\[
\pi(x_{l-1})\in \underbrace{\bigcap_{m=1}^t\pi( I_l\cap T^{a_{k_l}}V_m)\cap
\pi\big(B(x_{l-1},\eta_l)\big) }_{=:S_l} \subset \pi(I_l)=
B(\pi(x_{l-1}),\eta_l)\subset B(\pi(x_{l-1}),\eta).
\]
Let
\begin{align*}
\mathcal{A}_l& =\{p_m(n)-c_1n,\;\partial_{n_j+\ldots+n_{l-1}}p_m(n)-c_1n:1\leq m\leq t,1\leq j\leq l-1\} ,\\
\mathcal{C}_l& =\{-c_1n,\;q(n)-c_1n,\;\partial_{n_j+\ldots+n_{l-1}}q(n)-c_1n:q\in \mathcal{C},1\leq j\leq l-1\}.
\end{align*}
By our inductive hypothesis,
the conclusion of Claim \ref{ex2} holds for system $\mathcal{A}_l$ and integers $c_2-c_1,\ldots,c_s-c_1$.
Then for system $\mathcal{C}_l$ and open sets $B(x_{l-1},\eta_l),I_l\cap T^{a_{k_l}}V_1,\ldots,I_l\cap T^{a_{k_l}}V_t,\underbrace{B(x_{l-1},\eta_l),\ldots,B(x_{l-1},\eta_l)}_{t(l-1)\;\mathrm{times}}$,
there exist $y_l\in B(x_{l-1},\eta_l)$ and $n_l\in \mathbb{N}$
such that for $1\leq j\leq l-1$,
\begin{enumerate}[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item[$(la)_1$] $ T^{(c_i-c_1)n_l}y_l\in B(x_{l-1},\eta_l)$ for $2\leq i \leq s$;
\item[$(la)_2$] $ T^{p_m(n_l)-c_1n_l}y_l\in I_l\cap T^{a_{k_l}}V_m$ for $1\leq m \leq t$;
\item[$(la)_3$] $ T^{\partial_{n_j+\ldots+n_{l-1}}p_m(n_l)-c_1
n_l}y_l\in B(x_{l-1},\eta_l)$ for $1\leq m\leq t$;
\item[$(lc)_1$] $ T^{-c_1n_l}\pi(y_l),\;T^{q(n_l)-c_1 n_l}\pi(y_l)\in S_l$ for $q\in \mathcal{C}$;
\item[$(lc)_2$] $ T^{\partial_{n_j+\ldots+n_{l-1}}q(n_l)-c_1n_l}\pi(y_l)\in S_l\subset B(\pi(x_{l-1}),\eta_l)$
for $q\in \mathcal{C}$.
\end{enumerate}
Set $x_l=T^{-c_1 n_l}y_l$.
By $(la)_1$ and (\ref{hhh1}), for $1\leq i\leq s$ we have
\begin{equation}\label{AQ}
T^{c_i n_l}x_l\in B(x_{l-1},\eta_l)\subset T^{a_{k_l}}V_0.
\end{equation}
By (\ref{hhh2}) and (\ref{AQ}) , for $1\leq i\leq s,1\leq j\leq l-1$ we have
\[
T^{c_i(n_j+\ldots+n_{l-1}+n_l)}x_l\in T^{c_i(n_j+\ldots+n_{l-1})}B(x_{l-1},\eta_l)
\subset
T^{a_{k_j}}V_0 .
\]
By $(la)_2$, for $1\leq m\leq t$ we have
\[
T^{p_m(n_l)}x_l\in T^{a_{k_l}}V_m.
\]
By $(la)_3$ and (\ref{hhh3}), for $1\leq m\leq t$ and $1\leq j\leq l-1$ we have
\begin{align*}
T^{p_m(n_j+\ldots+n_{l-1}+n_l)}x_l=& T^{p_m(n_j+\ldots+n_{l-1})}(T^{\partial_{n_j+\ldots+n_{l-1}}p_m(n_l)}x_l)\ \\
& \in T^{p_m(n_j+\ldots+n_{l-1})} B(x_{l-1},\eta_l)\subset T^{a_{k_j}}V_m.
\end{align*}
By $(lc)_1$, for $q\in \mathcal{C}$ we have
\begin{equation}\label{1212}
\pi(x_l), \;T^{q(n_l)}\pi(x_l)\in S_l\subset B(\pi(x_{l-1}),\eta)
\subset B(\pi(x),l\eta).
\end{equation}
By $(lc)_2$ and (\ref{hhh4}), for $q\in \mathcal{C}$ and $1\leq j\leq l-1$ we have
\begin{align*}
T^{q(n_j+\ldots+n_{l-1}+n_l)}\pi(x_l)=& T^{q(n_j+\ldots+n_{l-1})}(T^{\partial_{n_j+\ldots+n_{l-1}}q(n_l)}\pi(x_l))\ \\
& \in T^{q(n_j+\ldots+n_{l-1})} B(\pi(x_{l-1}),\eta_l)\subset B(\pi(x),l\eta).
\end{align*}
There is some $k_{l+1}\in \{1,\ldots,d\}$ with $x_l\in T^{a_{k_{l+1}}}V_0$ by (\ref{case2-relation1})
and (\ref{1212}).
This completes the inductive construction, and hence the proof.
\end{proof}
Using Claim \ref{ex2}, we are able to give a proof of Claim \ref{ex-claim2}.
\begin{proof}[Proof of Claim \ref{ex-claim2}]
Let
\begin{align*}
\mathcal{A}_1= & \{-an^2-b_1n,\;-an^2+(c_1-b_1)n,\ldots,\;-an^2+(c_s-b_1)n\}, \\
\mathcal{C}_1= & \{q(n)-an^2-b_1n:q\in \mathcal{C}\}.
\end{align*}
Applying Claim \ref{ex2} to the system $\mathcal{A}_1$ and the integers $b_2-b_1,\ldots,b_t-b_1$,
with the system $\mathcal{C}_1$ and the open sets $W_0,W_0,W_1,\ldots,W_s$,
we obtain $w\in W_0$ and $n\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item$ T^{(b_m-b_1)n}w\in W_0$ for $2\leq m\leq t$;
\item $ T^{-an^2-b_1n}w\in W_0$;
\item $ T^{-an^2+(c_i-b_1)n}w\in W_i$ for $1\leq i\leq s$;
\item $T^{q(n)-an^2-b_1n}\pi(w)\in \bigcap_{i=0}^s\pi(W_i)$ for $q\in \mathcal{C}$.
\end{itemize}
Put $z=T^{-an^2-b_1n}w$; then the claim follows.
\end{proof}
\subsection{Proofs of Theorems \ref{polynomial-case} and \ref{polynomial-TCF}}\
Now we are able to give proofs of the main results of this section.
To prove them, we need two intermediate claims,
whose proofs are postponed until after the proofs of the theorems since they are rather long.
For a weight vector $\vec{w}=\big((\phi(w_1),w_1),\ldots,(\phi(w_k),w_k)\big)$,
define $\min (\vec{w})=w_1$ and $\max (\vec{w})=w_k$. That is,
for any system $\mathcal{A}$ with weight vector $\vec{w}$,
there are $a,b\in \mathcal{A}$
such that for all $p\in \mathcal{A}$,
\[
\min (\vec{w})=w_1=\mathrm{deg}(a)\leq \mathrm{deg}(p) \leq \mathrm{deg}(b)= w_k=\max (\vec{w}).
\]
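For instance, for $\mathcal{A}=\{5n,\,n^2,\,n^3+n,\,2n^3\}$ we have $\vec{w}=\phi(\mathcal{A})=\big((1,1),(1,2),(2,3)\big)$, so that $\min(\vec{w})=1$ and $\max(\vec{w})=3$.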
\begin{claim}\label{gene-claim1}
Assume that $\pi$ has the property
$\Lambda(\mathcal{A}',\mathcal{C}')$
for any systems $\mathcal{A}'$ and $\mathcal{C}'$
such that the weight vector of $\mathcal{A}'$ is $\vec{w}$.
Let $\mathcal{A}=\{p_1,\ldots,p_t\}$ be a system with weight vector $\vec{w}$,
and let $\mathcal{B}$ be a system such that $\mathrm{deg}(b)<\min(\vec{w})$ for
all $b\in \mathcal{B}$.
Then for any system $\mathcal{C}$ and open subsets $V_0,V_1,\ldots,V_t$ of $X$
with $\bigcap_{m=0}^t\pi(V_m)\neq \emptyset$,
there exist $z\in V_0$ and $n\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $ T^{b(n)}z\in V_0$ for $b\in \mathcal{B}$;
\item $ T^{p_m(n)}z\in V_m$ for $1\leq m\leq t$;
\item $T^{q(n)}\pi(z)\in \bigcap_{m=0}^t\pi(V_m)$ for $q\in \mathcal{C}$.
\end{itemize}
\end{claim}
\begin{claim}\label{gene-claim2}
Assume that $\pi$ has the property
$\Lambda(\mathcal{A}',\mathcal{C}')$
for any systems $\mathcal{A}'$ and $\mathcal{C}'$
such that the weight vector of $\mathcal{A}'$ is $\vec{w}$.
Let $\mathcal{A}$ be a system with weight vector $\vec{w}$,
and let $c_1,\ldots,c_s\in \mathcal{P}^*$ be distinct linear polynomials such that $c_i\notin \mathcal{A}$ for $1\leq i \leq s$.
Then for any system $\mathcal{C}$ and open subsets $V_0,V_1,\ldots,V_s$ of $X$
with $\bigcap_{i=0}^s\pi(V_i)\neq \emptyset$,
there exist $z\in V_0$ and $n\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item$ T^{a(n)}z\in V_0$ for $a\in \mathcal{A}$;
\item $ T^{c_i(n)}z\in V_i$ for $1\leq i\leq s$;
\item $T^{q(n)}\pi(z)\in \bigcap_{i=0}^s\pi(V_i)$ for $q\in \mathcal{C}$.
\end{itemize}
\end{claim}
With the help of Claims \ref{gene-claim1} and \ref{gene-claim2},
we are able to show Theorem \ref{polynomial-case}.
\begin{proof}[Proof of Theorem \ref{polynomial-case} assuming Claims \ref{gene-claim1} and \ref{gene-claim2}]
For a system $\mathcal{A}$,
if $\phi(\mathcal{A})=(s,1)$ for some $s\in \mathbb{N}$,
then $\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$ for any system $\mathcal{C}$
by Lemma \ref{linear-case-with-constraint}.
Now fix a system $\mathcal{A}$ with $(s,1)\prec \phi(\mathcal{A})$ for all $s\in \mathbb{N}$.
That is, there is some $p\in \mathcal{A}$ with $\mathrm{deg}(p)\geq 2$.
Assume that $\pi$ has the property
$\Lambda(\mathcal{A}',\mathcal{C}')$
for any systems $\mathcal{A}'$ and $\mathcal{C}'$
with $\phi(\mathcal{A}')\prec\phi(\mathcal{A})$.
Fix a system $\mathcal{C}$.
We next show that $\pi$ also has the property
$\Lambda(\mathcal{A},\mathcal{C})$.
\noindent {\bf Case 1:} $\mathrm{deg}(p)\geq2 $ for every $p\in \mathcal{A}$.
Notice that for any non-zero integer $a$,
$\partial_{a}p\neq p$ for every $p\in \mathcal{A}$.
By a construction similar to that in the proof of Case \ref{case1},
one can deduce that $\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$.
\noindent {\bf Case 2:}
There is some $p\in \mathcal{A}$ with $\mathrm{deg}(p)=1 $.
Let $\mathcal{A}=\{c_1,\ldots,c_s,\; p_1,\ldots,p_t\}$ such that $\mathrm{deg}(c_i)=1$ for $1\leq i\leq s$
and $\mathrm{deg}(p_m)\geq 2$ for $1\leq m\leq t$.
Let $V_0,V_1,\ldots,V_s,U_1,\ldots,U_t$ be open subsets of $X$ with
\[
W:= \bigcap_{i=0}^s\pi(V_i)\cap \bigcap_{m=1}^t\pi(U_m)\neq \emptyset.
\]
Let $\mathcal{A}'=\{p_1,\ldots,p_t\}$ and $\mathcal{B}=\{c_1,\ldots,c_s\}$. Then $\phi(\mathcal{A}')\prec \phi(\mathcal{A})$.
By our PET-induction hypothesis, $\pi$ has the property $\Lambda(\mathcal{A}',\mathcal{C})$.
Applying Claim \ref{gene-claim1} to the systems
$\mathcal{A}'$ and $\mathcal{B}$,
with the system $\mathcal{C}$ and the open sets $V_0\cap \pi^{-1}(W),U_1,\ldots,U_t$,
we obtain $w\in V_0\cap \pi^{-1}(W)$ and $k\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item$ T^{c_i(k)}w\in V_0\cap \pi^{-1}(W)$ for $1\leq i \leq s$;
\item $ T^{p_m(k)}w\in U_m$ for $1\leq m\leq t$;
\item $T^{q(k)}\pi(w)\in \pi\big(V_0\cap \pi^{-1}(W)\big)\cap \bigcap_{m=1}^t\pi(U_m)\subset W$ for $q\in \mathcal{C}$.
\end{itemize}
For every $1\leq i \leq s$,
as $T^{c_i(k)}w\in V_0\cap \pi^{-1}(W)$,
there is some $w_i\in V_i$ with $\pi(T^{c_i(k)}w)=\pi(w_i)$ which implies
\begin{equation}\label{general-equal1}
\pi(w)=\pi(T^{-c_i(k)}w_i).
\end{equation}
Choose $\gamma>0$ such that
\begin{align}
\label{gene-conA1}T^{c_i(k)} B(T^{-c_i(k)}w_i,\gamma)&\subset V_i, \;\quad\;\forall\; 1\leq i\leq s, \\
\label{gene-conA2} T^{p_m(k)}B(w,\gamma)&\subset U_m,\quad \forall \;1\leq m\leq t,\\
\label{gene-conA3} T^{q(k)}B(\pi(w),\gamma)&\subset W,\;\quad\;\forall\; q\in \mathcal{C}.
\end{align}
Let $W_0=B(w,\gamma)\cap \pi^{-1}\big( B(\pi(w),\gamma)\big)\cap V_0$
and let $W_i=B(T^{-c_i(k)}w_i,\gamma)$ for $1\leq i\leq s$.
Then $W_0$ is a non-empty open set as $w\in W_0$ and $\pi(w)\in \pi(W_i)$ by (\ref{general-equal1}).
Let
\[
\mathcal{A}''=\{\partial_kp_m:1\leq m\leq t\}\quad\text{ and} \quad
\mathcal{C}'=\{\partial_kq:q\in \mathcal{C}\}.
\]
Then $\pi$ has the property $\Lambda(\mathcal{A}'',\mathcal{C}')$
as $\phi(\mathcal{A}'')=\phi(\mathcal{A}')\prec \phi(\mathcal{A})$.
Applying Claim \ref{gene-claim2} to the systems
$\mathcal{A}''$ and $\mathcal{B}$, with the system
$\mathcal{C}'$ and the open sets $W_0,W_1,\ldots,W_s$,
there exist $z\in W_0\subset V_0$ and $l\in \mathbb{N}$ such that
\begin{align}
\label{gene-conC2} T^{\partial_kp_m(l)}z&\in W_0\subset B(w,\gamma) , &\forall\;1\leq m\leq t, \\
\label{gene-conC1}T^{c_i(l)}z&\in W_i =B(T^{-c_i(k)}w_i,\gamma),&\forall\;1\leq i\leq s, \\
\label{gene-conC3} T^{\partial_kq(l)}\pi(z)&\in \bigcap_{i=0}^s \pi(W_i)\subset \pi(W_0)\subset B(\pi(w),\gamma) ,&\forall\;q\in \mathcal{C}.
\end{align}
Recall that $\mathrm{deg}(c_i)=1$, so that $c_i(l+k)=c_i(k)+c_i(l)$.
By (\ref{gene-conA1}) and (\ref{gene-conC1}), for $1\leq i\leq s$ we have
\[
T^{c_i(l+k)}z=T^{c_i(k)}(T^{c_i(l)}z)\in T^{c_i(k)}B(T^{-c_i(k)}w_i,\gamma)\subset V_i.
\]
By (\ref{gene-conA2}) and (\ref{gene-conC2}), for $1\leq m \leq t$ we have
\[
T^{p_m(l+k)}z= T^{p_m(k)}(T^{\partial_kp_m(l)}z)\in T^{p_m(k)}B(w,\gamma)\subset U_m.
\]
By (\ref{gene-conA3}) and (\ref{gene-conC3}), for $q\in \mathcal{C}$ we have
\[
T^{q(l+k)}\pi(z)
= T^{q(k)}(T^{\partial_kq(l)}\pi(z))\in T^{q(k)}B(\pi(w),\gamma)\subset W.
\]
Now set $n=l+k$; we get that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $z\in V_0$;
\item$ T^{c_i(n)}z\in V_i$ for $1\leq i \leq s$;
\item $ T^{p_m(n)}z\in U_m$ for $1\leq m\leq t$;
\item $T^{q(n)}\pi(z)\in W=\bigcap_{i=0}^s\pi(V_i)\cap\bigcap_{m=1}^t\pi(U_m)$ for $q\in \mathcal{C}$.
\end{itemize}
This completes the proof.
\end{proof}
We are ready to show Theorem \ref{polynomial-TCF}.
\begin{proof}[Proof of Theorem \ref{polynomial-TCF}]
It follows from Theorems \ref{key-thm0} and \ref{polynomial-case}.
\end{proof}
We now proceed to the proofs of Claims \ref{gene-claim1} and \ref{gene-claim2}.
\begin{proof}[Proof of Claim \ref{gene-claim1}]
Fix a weight vector $\vec{w}$ with $\min(\vec{w})\geq 2$;
otherwise the system $\mathcal{B}$ would be empty and there would be nothing to prove.
We prove this claim by PET-induction on the weight vector of the system $\mathcal{B}$.
Fix a non-empty system $\mathcal{B}$ such that $\mathrm{deg}(b)<\min(\vec{w})$ for all $b\in \mathcal{B}$,
and
suppose the statement of the claim is true for any systems
$\mathcal{B}',\mathcal{A}',\mathcal{C}$ with $\phi(\mathcal{B}')\prec \phi(\mathcal{B})$ and $\phi(\mathcal{A}')=\vec{w}$.
Let $\mathcal{A}=\{p_1,\ldots,p_t\}$ be a system with weight vector $\vec{w}$,
and let $V_0,V_1,\ldots,V_t$ be open subsets of $X$
with $W:=\bigcap_{m=0}^t\pi(V_m)\neq \emptyset$.
By an argument similar to that in the proof of Case \ref{case1},
we may assume without loss of generality that $\pi(V_m)=W$ for $0\leq m\leq t$,
and there exist $x\in X$ with $\pi(x)\in W$, $a_1,\ldots,a_d\in \mathbb{N}$ and $\delta>0$ such that
\begin{equation}\label{general-relation1}
\pi^{-1}\big(B(\pi(x),\delta)\big)\subset \bigcap_{m=0}^t\bigcup_{j=1}^dT^{a_j}V_m,
\end{equation}
and
\begin{equation}\label{general-relation2}
B(\pi(x),\delta)\subset W \cap \big(\bigcap_{j=1}^d T^{a_j}W \big).
\end{equation}
Fix a system $\mathcal{C}$ and
let $\eta=\delta/d$.
Inductively we will construct $x_1,\ldots,x_d\in X,k_1,\ldots,k_{d+1}\in \{1,\ldots,d\}$ with $k_1=1$
and $n_1,\ldots,n_d\in \mathbb{N}$
such that for every $1\leq j\leq l \leq d$,
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $x_l\in T^{a_{k_{l+1}}}V_0$;
\item $\pi(x_l)\in B(\pi(x),l\eta)$;
\item $T^{b(n_j+\ldots+n_l)}x_l\in T^{a_{k_j}}V_0$ for $b\in \mathcal{B}$;
\item $T^{p_m(n_j+\ldots+n_l)}x_l\in T^{a_{k_j}}V_m$ for $1\leq m\leq t$;
\item $T^{q(n_j+\ldots+n_l)}\pi(x_l)\in B(\pi(x),l\eta)$ for $q\in \mathcal{C}$.
\end{itemize}
Assume this has been achieved. Since $k_1,\ldots,k_{d+1}\in\{1,\ldots,d\}$, by the pigeonhole principle we can choose $1\leq j\leq l\leq d$ with $k_j=k_{l+1}$ such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $x_l\in T^{a_{k_{l+1}}}V_0=T^{a_{k_j}}V_0$;
\item $T^{b(n_j+\ldots+n_l)}x_l\in T^{a_{k_j}}V_0$ for $b\in \mathcal{B}$;
\item $T^{p_m(n_j+\ldots+n_l)}x_l\in T^{a_{k_j}}V_m$ for $1\leq m\leq t$;
\item $T^{q(n_j+\ldots+n_l)}\pi(x_l)\in B(\pi(x),l\eta)
\subset B(\pi(x),\delta)\subset \bigcap_{j=1}^d T^{a_j}W $ for $q\in \mathcal{C}$ \ \ \ by (\ref{general-relation2}).
\end{itemize}
Put $n=n_j+\ldots+n_l$ and $z=T^{-a_{k_{j}}}x_{l}$; then the claim follows.
{\bf
We now return to the inductive construction of $x_1,\ldots,x_d,k_1,\ldots,k_{d+1}$ and $n_1,\ldots,n_d$.
}
Let $b_1\in \mathcal{B}$ be an element of minimal weight in $\mathcal{B}$.
\noindent {\bf Step 1:}
Let $I_1=\pi^{-1}\big(B(\pi(x),\eta)\big)$.
Then $I_1$ is an open subset of $X$ and
\[
\pi(x)\in \underbrace{\bigcap_{m=0}^t\pi( I_1\cap T^{a_1} V_m) }_{=:S_1}\subset \pi(I_1)=
B(\pi(x),\eta).
\]
Let
\begin{align*}
\mathcal{B}_1 &=\{b-b_1:b\in \mathcal{B}\}, \\
\mathcal{A}_1 & =\{p_m-b_1:1\leq m\leq t\}, \\
\mathcal{C}_1 &=\{-b_1,q-b_1:q\in \mathcal{C}\}.
\end{align*}
Then $\phi(\mathcal{B}_1)\prec \phi(\mathcal{B})$ by Lemma \ref{PET-induction},
and $\phi(\mathcal{A}_1)=\phi(\mathcal{A})$ as $\mathrm{deg}(b_1)<\mathrm{deg}(p_m)$ for every $1\leq m\leq t$.
By our PET-induction hypothesis,
the conclusion of Claim \ref{gene-claim1} holds for systems $\mathcal{A}_1$ and $\mathcal{B}_1$.
Then for system $\mathcal{C}_1$ and open sets $I_1\cap T^{a_1} V_0,I_1\cap T^{a_1} V_1,\ldots,I_1\cap T^{a_1} V_t$,
there exist $y_1\in I_1\cap T^{a_1}V_0$ and $n_1\in \mathbb{N}$
such that
\begin{enumerate}[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item[$(1b)$] $T^{b(n_1)-b_1 (n_1)}y_1\in I_1\cap T^{a_1} V_0$ for $b\in \mathcal{B}$;
\item[$(1a)$] $T^{p_m(n_1)-b_1(n_1)}y_1\in I_1\cap T^{a_1} V_m$ for $1\leq m\leq t$;
\item[$(1c)$] $T^{-b_1 (n_1)}\pi(y_1),\; T^{q(n_1)-b_1 (n_1)}\pi(y_1)\in S_1$ for $q\in \mathcal{C}$.
\end{enumerate}
Set $x_1=T^{-b_1 (n_1)}y_1$.
By $(1b)$, for $b\in \mathcal{B}$ we have
\[
T^{b(n_1)}x_{1}\in T^{a_1}V_0.
\]
By $(1a)$, for $1\leq m\leq t$ we have
\[
T^{p_m(n_1)}x_{1}\in T^{a_1}V_m.
\]
By $(1c)$, for $q\in \mathcal{C}$ we have
\begin{equation}\label{general0000}
\pi(x_{1}),\; T^{q(n_{1})}\pi(x_{1})\in S_1\subset B(\pi(x),\eta) \subset B(\pi(x),\delta).
\end{equation}
There is some $k_2\in \{1,\ldots,d\}$ with $x_1\in T^{a_{k_2}}V_0$ by (\ref{general-relation1}) and (\ref{general0000}).
\noindent {\bf Step $l$:}
Let $l\geq2$ be an integer and assume that we have already chosen
$x_1,\ldots,x_{l-1}\in X$, $k_1,\ldots,k_{l-1}\in \{1,\ldots,d\}$ and $n_1,\ldots,n_{l-1}\in \mathbb{N}$ such that for $1\leq j\leq l-1$,
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $\pi(x_{l-1})\in B(\pi(x),(l-1)\eta)$;
\item $T^{b(n_j+\ldots+n_{l-1})}x_{l-1}\in T^{a_{k_{j}}}V_0$ for $b\in \mathcal{B}$;
\item $T^{p_m(n_j+\ldots+n_{l-1})}x_{l-1}\in T^{a_{k_{j}}}V_m$ for $1\leq m\leq t$;
\item $T^{q(n_j+\ldots+n_{l-1})}\pi(x_{l-1})\in B(\pi(x),(l-1)\eta)$ for $q\in \mathcal{C}$.
\end{itemize}
As $\pi(x_{l-1})\in B(\pi(x),(l-1)\eta)\subset B(\pi(x),d\eta)=B(\pi(x),\delta)$,
by (\ref{general-relation1})
there is some $k_l\in \{1,\ldots,d\}$ such that $x_{l-1}\in T^{a_{k_l}}V_0$.
Choose $\eta_l>0$ with $\eta_l<\eta$ such that for $1\leq j\leq l-1$,
\begin{align}
\label{generalhhh1} B( x_{l-1},\eta_l)&\subset T^{a_{k_l}}V_0,& \\
\label{generalhhh2} T^{b(n_j+\ldots+n_{l-1})}B(x_{l-1},\eta_l)&\subset
T^{a_{k_{j}}}V_0,&\forall\; b\in \mathcal{B}, \\
\label{generalhhh3} T^{p_m(n_j+\ldots+n_{l-1})}B(x_{l-1},\eta_l)&\subset
T^{a_{k_{j}}}V_m, &\forall\; 1\leq m\leq t,\\
\label{generalhhh4} T^{q(n_j+\ldots+n_{l-1})}B(\pi(x_{l-1}),\eta_l)&\subset
B(\pi(x),(l-1)\eta), &\forall\;q\in \mathcal{C}.
\end{align}
Let $I_l=\pi^{-1}\big(B(\pi(x_{l-1}),\eta_l)\big)$.
By (\ref{general-relation2}),
we have $ \pi(x_{l-1})\in B(\pi(x),\delta)\subset \bigcap_{j=1}^dT^{a_j}W$
and
\[
\pi(x_{l-1})\in \underbrace{\bigcap_{m=1}^t\pi( I_l\cap T^{a_{k_l}}V_m)\cap
\pi\big(B(x_{l-1},\eta_l)\big) }_{=:S_l} \subset \pi(I_l)=
B(\pi(x_{l-1}),\eta_l)\subset B(\pi(x_{l-1}),\eta).
\]
Let
\begin{align*}
\mathcal{B}_l &=\{b-b_1,\;\partial_{n_j+\ldots+n_{l-1}}b-b_1:b\in \mathcal{B},1\leq j\leq l-1\}, \\
\mathcal{A}_l & = \{p_m-b_1,\;\partial_{n_j+\ldots+n_{l-1}}p_m-b_1:1\leq m\leq t,1\leq j\leq l-1\} ,\\
\mathcal{C}_l &=\{-b_1,\;q-b_1,\;\partial_{n_j+\ldots+n_{l-1}}q-b_1:q\in \mathcal{C},1\leq j\leq l-1\}.
\end{align*}
Then $\phi(\mathcal{B}_l)\prec \phi(\mathcal{B})$ by Lemma \ref{PET-induction},
and $\phi(\mathcal{A}_l)=\phi(\mathcal{A}_1)=\phi(\mathcal{A})$.
By our PET-induction hypothesis,
the conclusion of Claim \ref{gene-claim1} holds for systems $\mathcal{A}_l,\mathcal{B}_l$.
Notice that for any non-zero integer $a$,
$\partial_{a}p_m\neq p_m$ for $1\leq m\leq t$.
Hence for system $\mathcal{C}_l$ and open sets $B(x_{l-1},\eta_l),I_l\cap T^{a_{k_l}}V_1,\ldots,I_l\cap T^{a_{k_l}}V_t,\underbrace{B(x_{l-1},\eta_l),\ldots,B(x_{l-1},\eta_l)}_{t(l-1) \; \mathrm{times}}$,
there exist $y_l\in B(x_{l-1},\eta_l)$ and $n_l\in \mathbb{N}$
such that for $1\leq j\leq l-1$,
\begin{enumerate}[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item[$(lb)_{\; \;}$]$ T^{b(n_l)-b_1(n_l)}y_l,\;
T^{\partial_{n_j+\ldots+n_{l-1}}b(n_l)-b_1(n_l)}y_l\in B(x_{l-1},\eta_l)$ for $b\in \mathcal{B}$;
\item[$(la)_1$] $ T^{p_m(n_l)-b_1(n_l)}y_l\in I_l\cap T^{a_{k_l}}V_m$ for $1\leq m \leq t$;
\item[$(la)_2$] $ T^{\partial_{n_j+\ldots+n_{l-1}}p_m(n_l)-b_1(n_l)}y_l\in B(x_{l-1},\eta_l)$
for $1\leq m\leq t$;
\item[$(lc)_1$] $ T^{-b_1(n_l)}\pi(y_l),\; T^{q(n_l)-b_1( n_l)}\pi(y_l)\in S_l$ for $q\in \mathcal{C}$;
\item[$(lc)_2$] $ T^{\partial_{n_j+\ldots+n_{l-1}}q(n_l)-b_1(n_l)}\pi(y_l)\in S_l\subset B(\pi(x_{l-1}),\eta_l)$
for $q\in \mathcal{C}$.
\end{enumerate}
Set $x_l=T^{-b_1 (n_l)}y_l$.
By $(lb)$ and (\ref{generalhhh1}), for $b\in \mathcal{B}$ we have
\[
T^{b (n_l)}x_l\in B(x_{l-1},\eta_l)\subset T^{a_{k_l}}V_0.
\]
By $(lb)$ and (\ref{generalhhh2}), for $b\in \mathcal{B},1\leq j\leq l-1$ we have
\begin{align*}
T^{b(n_j+\ldots+n_{l-1}+n_l)}x_l=& T^{b(n_j+\ldots+n_{l-1})}(T^{\partial_{n_j+\ldots+n_{l-1}}b(n_l)}x_l)\ \\
& \in T^{b(n_j+\ldots+n_{l-1})} B(x_{l-1},\eta_l)\subset T^{a_{k_{j}}}V_0.
\end{align*}
By $(la)_1$, for $1\leq m\leq t$ we have
\[
T^{p_m(n_l)}x_l\in T^{a_{k_l}}V_m.
\]
By $(la)_2$ and (\ref{generalhhh3}), for $1\leq m\leq t,1\leq j\leq l-1$ we have
\begin{align*}
T^{p_m(n_j+\ldots+n_{l-1}+n_l)}x_l=& T^{p_m(n_j+\ldots+n_{l-1})}(T^{\partial_{n_j+\ldots+n_{l-1}}p_m(n_l)}x_l)\ \\
& \in T^{p_m(n_j+\ldots+n_{l-1})} B(x_{l-1},\eta_l)\subset T^{a_{k_{j}}}V_m.
\end{align*}
By $(lc)_1$, for $q\in \mathcal{C}$ we have
\begin{equation}\label{general1212}
\pi(x_l), \;T^{q(n_l)}\pi(x_l)\in S_l\subset B(\pi(x_{l-1}),\eta)
\subset B(\pi(x),l\eta).
\end{equation}
By $(lc)_2$ and (\ref{generalhhh4}), for $q\in \mathcal{C},1\leq j\leq l-1$ we have
\begin{align*}
T^{q(n_j+\ldots+n_{l-1}+n_l)}\pi(x_l)=&T^{q(n_j+\ldots+n_{l-1})}(T^{\partial_{n_j+\ldots+n_{l-1}}q(n_l)}\pi(x_l))\ \\
& \in T^{q(n_j+\ldots+n_{l-1})} B(\pi(x_{l-1}),\eta_l)\subset B(\pi(x),l\eta).
\end{align*}
There is some $k_{l+1}\in \{1,\ldots,d\}$ with $x_l\in T^{a_{k_{l+1}}}V_0$ by (\ref{general-relation1})
and (\ref{general1212}).
This completes the inductive construction, and hence the proof.
\end{proof}
Using Claim \ref{gene-claim1}, we are able to give a proof of Claim \ref{gene-claim2}.
\begin{proof}[Proof of Claim \ref{gene-claim2}]
Fix a weight vector
$\vec{w}=\big((\phi(w_1),w_1),\ldots,(\phi(w_k),w_k)\big)$.
Let $\mathcal{A}$ be a system with weight vector $\vec{w}$,
and let $c_1,\ldots,c_s\in \mathcal{P}^*$ be distinct linear polynomials such that $c_i\notin \mathcal{A}$ for $1\leq i \leq s$.
Assume that $\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$ for any system $\mathcal{C}$.
When $w_k=1$, that is, when every polynomial in $\mathcal{A}$
is linear, the claim follows from Lemma \ref{linear-case-with-constraint}.
Now assume $w_k\geq 2$.
Let $p\in \mathcal{A}$ with $\mathrm{deg}(p)=w_k$ and let
\[
\mathcal{A}_p=\{a\in \mathcal{A}:a,p\; \text{are equivalent}\} \quad
\mathrm{and}\quad
\mathcal{A}_r =\mathcal{A}-\mathcal{A}_p.
\]
Fix a system $\mathcal{C}$ and let
\begin{align*}
\mathcal{ B }\;& =\{a-p:a\in \mathcal{A}_{p}\}, \\
\mathcal{A}' &=\{c_i-p,\;-p,\;a-p:1\leq i\leq s,\;a\in \mathcal{A}_r\}, \\
\mathcal{C}' &= \{q-p:q\in \mathcal{C}\}.
\end{align*}
It is easy to see $\phi(\mathcal{A}')=(\phi(w_k),w_k)\prec \phi(\mathcal{A})$,
thus $\pi$ has the property $\Lambda(\mathcal{A}',\mathcal{C}')$.
For any $b\in \mathcal{B}$, there is some $a\in \mathcal{A}_p$ such that
$\mathrm{deg}(b)=\mathrm{deg}(a-p)<\mathrm{deg}(p)=w_k=\min( \phi(\mathcal{A}'))$.
Applying Claim \ref{gene-claim1} to the systems
$\mathcal{A}'$ and $\mathcal{B}$,
with the system $\mathcal{C}'$
and the open sets
$V_0,V_1,\ldots,V_s,\underbrace{V_0,\ldots,V_0}_{(|\mathcal{A}'|-s )\;\mathrm{times}}$,
we obtain $w\in V_0$ and $n\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $ T^{a(n)-p(n)}w\in V_0$ for $a\in \mathcal{A}_{p}$;
\item $ T^{c_i(n)-p(n)}w\in V_i$ for $1\leq i\leq s$;
\item $ T^{-p(n)}w,\; T^{a(n)-p(n)}w\in V_0$ for $a\in \mathcal{A}_r$;
\item $T^{q(n)-p(n)}\pi(w)\in \bigcap_{i=0}^s\pi(V_i)$ for $q\in \mathcal{C}$.
\end{itemize}
Now put $z=T^{-p(n)}w\in V_0$; then we have
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $ T^{a(n)}z\in V_0$ for $a\in \mathcal{A}_p\cup \mathcal{A}_{r}= \mathcal{A}$;
\item $ T^{c_i(n)}z\in V_i$ for $1\leq i\leq s$;
\item $T^{q(n)}\pi(z)\in \bigcap_{i=0}^s\pi(V_i)$ for $q\in \mathcal{C}$.
\end{itemize}
This completes the proof.
\end{proof}
\end{document}
|
\begin{document}
\title{Embedding Right-Angled Artin Groups into Brin-Thompson Groups}
\author{James M.~Belk}
\author{Collin Bleak}
\author{Francesco Matucci}
\begin{abstract}
We prove that every finitely-generated right-angled Artin group can be embedded into some Brin-Thompson group~$nV$. It follows that many other groups can be embedded into some~$nV$ (e.g., any finite extension of any of Haglund and Wise's special groups), and that various decision problems involving subgroups of $nV$ are unsolvable.
\end{abstract}
\maketitle
\section{Introduction}
The \newword{Brin-Thompson groups} $nV$ are an infinite family of groups that act by homeomorphisms on Cantor spaces. They were first defined by Matt Brin in~\cite{Brin}, and can be viewed as higher-dimensional generalizations of the group~$V$ ($=1V$) defined by Richard J.~Thompson (see \cite{CFP} for an introduction to Thompson's groups). The groups $nV$ are finitely presented~\cite{HeMa} and simple~\cite{Brin3}, and for $1\leq j<k$ it is known that $jV$ embeds into $kV$ \cite{Brin}, but also that $jV$ and $kV$ are not isomorphic~\cite{BlLa}.
Recall that a \newword{simple graph} is a finite graph with no loops (a loop is an edge from a vertex to itself) and no multiple edges (any two distinct vertices admit at most one edge between them). Given a simple graph $\Gamma$, the associated \newword{right-angled Artin group} $A_\Gamma$ has one generator $g_v$ for each vertex $v$ of~$\Gamma$, with one relation of the form $g_vg_w=g_wg_v$ for each pair of vertices $v,w$ that are connected by an edge. This class of groups has been studied extensively (see~\cite{Char} for an introduction).
Our main result is the following theorem:
\begin{theorem}\label{thm:MainTheorem}For any simple graph\/~$\Gamma$, there exists an $n\geq 1$ so that the right-angled Artin group~$A_\Gamma$ embeds isomorphically into~$nV$.
\end{theorem}
Specifically, we prove that $A_\Gamma$ embeds into $nV$ for $n = |V| + |E^c|$, where $V$ is the set of vertices of~$\Gamma$ and $E^c$ is the set of complementary edges, i.e.~the set of all pairs of vertices that are \textit{not} connected by an edge.
Recall the definition of Bleak and Salazar-D\'iaz in~\cite{BlSa}: a group $G$ is \newword{demonstrable for a group $K$} of homeomorphisms of a space $X$ if there is an embedding $\hat{G}$ of $G$ into $K$ and an open set $U$ in $X$ such that $U\cdot \hat{g}\cap U=\emptyset$ for all $\hat{g}\in \hat{G}$ with $\hat{g}\neq 1_K$ (we call the image group $\hat{G}$ a demonstrative subgroup of $K$ with demonstration set $U$). Bleak and Salazar-D\'iaz also say that such a group $K$ of homeomorphisms of a space $X$ \newword{acts with local realisation} if, given any open set $U\subset X$, there is a subgroup $K_U$ of $K$ supported only on $U$ so that $K_U\cong K$. We now state a theorem of \cite{BlSa}.
{\flushleft {\bf Theorem:} [Bleak--Salazar-D\'iaz, Proposition 3.6]\begin{itshape} Let $K$ be a group of homeomorphisms of a space $X$ so that $K$ acts with local realisation, and let $G$ be demonstrable for $K$. Then $K\wr G$ embeds in $K$.
\end{itshape}}
The embedding $A_\Gamma \to nV$ that we construct is demonstrative for $nV$. Moreover, $nV$~acts on the Cantor cube with local realization, so we obtain the following.
\begin{theorem}For any simple graph\/~$\Gamma$, there exists an $n\geq 1$ so that the standard restricted wreath product $nV\wr A_\Gamma \cong \bigl(\bigoplus_{A_\Gamma} \!nV\bigr)\rtimes A_\Gamma$ embeds isomorphically into~$nV$.
\end{theorem}
The Krasner-Kaloujnine theorem~\cite{KrKa} states that any extension of a group $G$ by a group $H$ is contained in the (unrestricted) wreath product~$H\wr G$. Since, for all $n\geq 1$, all finite groups embed in $nV$ in a demonstrative way, Theorem \ref{thm:MainTheorem} and the Krasner-Kaloujnine theorem show the following.
\begin{corollary} For any finite graph\/~$\Gamma$ there is a natural number $n$ so that every finite extension of $A_\Gamma$ embeds into~$nV$.
\end{corollary}
Recall that we say a group $G$ \newword{virtually embeds} in a group $H$ if $G$ admits a finite index subgroup $F$ so that $F$ embeds in $H$. Note also that any group that virtually embeds into~$nV$ embeds isomorphically into~$nV$. This follows from the fact that finite groups embed demonstratively into~$V$ (and hence into $nV$ \cite[Lemma~3.3]{BlSa} by taking products across the Cantor spaces in the extraneous dimensions), combined with the argument above. In particular, any group that virtually embeds into a right-angled Artin group embeds isomorphically into some~$nV$.
\begin{corollary}If $G$ is a surface group or a graph braid group, then there exists an $n\geq 1$ so that $G$ embeds into~$nV$.
\end{corollary}
\begin{proof}Droms, Servatius, and Servatius prove that the fundamental group of a surface of genus five embeds into~$A_\Gamma$ for~$\Gamma$ a 5-cycle~\cite{SDS}, and it follows that all hyperbolic surface groups embed into~$10V$. Neunh\"offer and the second and third authors observe that all non-hyperbolic surface groups embed into~$V$~\cite{BlMaNe}.
As for graph braid groups, Crisp and Wiest prove that all graph braid groups embed in some right-angled Artin group~\cite{CrWi}.
\end{proof}
Note that the braid groups $B_n$ do not belong to the family of graph braid groups. We do not know whether braid groups can be embedded into~$nV$.
Haglund and Wise have shown that the fundamental group of any ``special'' cube complex embeds in a right-angled Artin group~\cite{HaWi1}. These are known as \newword{special groups}, and any group that has a special subgroup of finite index is \newword{virtually special}.
\begin{corollary}For any virtually special group~$G$, there exists an $n\geq 1$ so that $G$ embeds isomorphically into~$nV$. This includes:
\begin{enumerate}
\item All finitely generated Coxeter groups~\cite{HaWi2}.
\item Many word hyperbolic groups, including all one-relator groups with torsion~\cite{Wise1}.
\item All limit groups~\cite{Wise1}.
\item Many\/~$3$-manifold groups, including the fundamental groups of all compact\/ \mbox{$3$-manifolds} that admit a Riemannian metric of nonpositive curvature~\cite{PyWi}, as well as all finite-volume hyperbolic $3$-manifolds~\cite{Agol}.
\end{enumerate}
\end{corollary}
Bridson has recently proven several undecidability results for right-angled Artin groups~\cite{Bri}. These have the following consequences for the Brin-Thompson groups.
\begin{corollary}There exists an $n\geq 1$ with the following properties. First, the isomorphism problem for finitely presented subgroups of~$nV$ is unsolvable. Second, there exists a subgroup $H\leq nV$ that has unsolvable subgroup membership problem and unsolvable conjugacy problem.
\end{corollary}
As far as we know, none of the decision problems mentioned in this corollary have been settled for Thompson's group~$V$. Thus, it is conceivable that the statement of this corollary holds for~$n=1$.
In general, the bound $n = |V| + |E^c|$ for $A_\Gamma$ embedding into $nV$ is far from sharp. For example, our method proves that a free group of rank~$k$ embeds into $nV$ for $n=k(k+1)/2$, but in fact all such groups embed into~$V$. However, Bleak and Salazar-D\'{\i}az show that $\mathbb{Z}^2*\mathbb{Z}$ does not embed into~$V$, and hence the only right-angled Artin groups that embed into $V$ are direct products of free groups~\cite{BlSa}. This leads to the following conjecture.
\begin{conjecture}A right-angled Artin group $A_\Gamma$ embeds into $nV$ if and only if $A_\Gamma$ does not contain $\mathbb{Z}^{n+1}*\mathbb{Z}$.
\end{conjecture}
\begin{remark}
We have several remarks.
\begin{enumerate}
\item Firstly, we note that Corwin and Haymaker in \cite{CoHa} use the main result of Bleak--Salazar-D\'iaz to show that the only obstruction to a right-angled Artin group embedding into $V$ is the existence of a subgroup isomorphic to $\mathbb{Z}^2*\mathbb{Z}$, thus verifying the $n=1$ case of the conjecture above.
\item If the conjecture is true, this would imply that the right-angled Artin group for a $5$-cycle embeds into~$2V$, and hence all surface groups would embed into~$2V$ as well.
\item Hsu and Wise proved that right-angled Artin groups can be embedded into $\mathrm{SL}_n(\mathbb{Z})$ (see~\cite{HsWi}) and
Grigorchuk, Sushanski and Romankov showed that $\mathrm{SL}_n(\mathbb{Z})$ can be realized using synchronous automata. We observe that one can use Theorem~\ref{thm:MainTheorem}
coupled with the embedding Theorem~5.2 in \cite{BeBl} to recover a weaker version of this result, showing that right-angled Artin groups can be realized using asynchronous automata.
\end{enumerate}
\end{remark}
\section{Right-Angled Artin Groups}
Given a finite graph $\Gamma$ with vertex set $V_\Gamma = \{v_1,\ldots,v_n\}$ and edge set~$E$, the corresponding \newword{right-angled Artin group} $A_\Gamma$ is defined by the presentation
\[
A_\Gamma \,=\, \bigl\langle g_1,\ldots,g_n \;\bigl|\; g_ig_j=g_jg_i\text{ for all }\{v_i,v_j\}\in E\bigr\rangle.
\]
For example, if $\Gamma$ has no edges, then $A_\Gamma$ is a free group on $n$ generators. Similarly, if $\Gamma$ is a complete graph, then $A_\Gamma$ is a free abelian group of rank~$n$. See \cite{Char} for a general introduction to these groups.
We need a version of the ping-pong lemma for actions of right-angled Artin groups. The following is a slightly modified version of the ping-pong lemma for right-angled Artin groups stated in~\cite{CrFa} (also
see \cite{Kob}).
\begin{theorem}[Ping-Pong Lemma for Right-Angled Artin Groups]
\label{thm:ping-pong}
Let $A_\Gamma$ be a right-angled Artin group with generators $g_1,\ldots,g_n$ acting on a set~$X$. Suppose that there exist subsets $\{S_i^+\}_{i=1}^n$ and $\{S_i^-\}_{i=1}^n$ of~$X$, with $S_i = S_i^+\cup S_i^-$, satisfying the following conditions:
\begin{enumerate}
\item $g_i(S_i^+) \subseteq S_i^+$ and $g_i^{-1}(S_i^-) \subseteq S_i^-$ for all~$i$.
\item If $g_i$ and $g_j$ commute (with $i\ne j$), then $g_i(S_j) = S_j$.
\item If $g_i$ and $g_j$ do not commute, then $g_i(S_j) \subseteq S_i^+$ and $g_i^{-1}(S_j) \subseteq S_i^-$.
\item There exists a point $x \in X - \bigcup_{i=1}^n S_i$ such that $g_i(x) \in S_i^+$ and $g_i^{-1}(x) \in S_i^-$ for all~$i$.
\end{enumerate}
Then the action of $A_\Gamma$ on $X$ is faithful.
\qedsymbol
\end{theorem}
Indeed, if $U$ is any subset of $X - \bigcup_{i=1}^n S_i$ such that $g_i(U) \subseteq S_i^+$ and $g_i^{-1}(U) \subseteq S_i^-$ for all~$i$, then all of the sets $\{g(U) \mid g\in A_\Gamma\}$ are disjoint. In the case where $X$ is a topological space and $U$ is an open set, this means that the action of $A_\Gamma$ on $X$ is demonstrative in the sense of~\cite{BlSa}.
\section{The Groups $nV$ and $XV$}
Given a finite alphabet $\Sigma$, let $\Sigma^\omega$ denote the space of all strings of symbols from $\Sigma$ under the product topology, and let $\Sigma^*$ denote the set of all finite strings of symbols from $\Sigma$, including the empty string.
A \newword{Cantor cube} is any finite product $X =\Sigma_1^\omega\times \cdots \times \Sigma_n^\omega$, where $\Sigma_1,\ldots,\Sigma_n$ are finite alphabets with at least two symbols each. Given any tuple $(\alpha_1,\ldots,\alpha_n) \in \Sigma_1^*\times\cdots\times \Sigma_n^*$, the corresponding \newword{subcube} $X(\alpha_1,\ldots,\alpha_n)$ of~$X$ is the set of all points $(x_1,\ldots,x_n)\in X$ such that $x_i$ begins with~$\alpha_i$ for each~$i$. Note that $X$ is homeomorphic to $X(\alpha_1,\ldots,\alpha_n)$ via the map
\[
(x_1,\ldots,x_n) \;\mapsto\; (\alpha_1\cdot x_1,\ldots,\alpha_n\cdot x_n)
\]
where $\cdot$ denotes concatenation. More generally, any two subcubes $X(\alpha_1,\ldots,\alpha_n)$ and $X(\beta_1,\ldots,\beta_n)$ of~$X$ have a \newword{canonical homeomorphism} between them given by prefix replacement, i.e.
\[
(\alpha_1\cdot x_1,\ldots,\alpha_n\cdot x_n) \;\mapsto\; (\beta_1\cdot x_1,\ldots,\beta_n\cdot x_n).
\]
A \newword{rearrangement} of a Cantor cube $X$ is any homeomorphism of~$X$ obtained through the following procedure:
\begin{enumerate}
\item Choose a partition $D_1,\ldots,D_k$ of the domain $X$ into finitely many subcubes.
\item Choose another partition $R_1,\ldots,R_k$ of the range $X$ into the same number of subcubes.
\item Define a homeomorphism $h\colon X\to X$ piecewise by mapping each $D_i$ to $R_i$ via a canonical homeomorphism.
\end{enumerate}
The rearrangements of~$X$ form a group under composition, which we refer to as~$XV$. In the case where $X = (\{0,1\}^\omega)^n$, the group $XV$ is known as the \newword{Brin-Thompson group $\boldsymbol{nV}$}.
\begin{proposition}If $X = \Sigma_1^\omega\times \cdots \times \Sigma_n^\omega$ is any Cantor cube, then the rearrangement group~$XV$ of~$X$ embeds into the Brin-Thompson group~$nV$.
\end{proposition}
\begin{proof}For each $i$, let $\lambda_{i,1},\lambda_{i,2},\ldots,\lambda_{i,m_i}$ denote the symbols of~$\Sigma_i$. For each $i$ we choose a complete binary prefix code $\alpha_{i,1},\alpha_{i,2},\ldots,\alpha_{i,m_i}$ with $m_i$ different codewords. That is, we choose a finite rooted binary tree with~$m_i$ leaves, and we let $\alpha_{i,1},\alpha_{i,2},\ldots,\alpha_{i,m_i}$ be the binary addresses of these leaves. Then the map
\[
\lambda_{i,j_1} \cdot \lambda_{i,j_2} \cdot \lambda_{i,j_3} \cdots \;\mapsto\; \alpha_{i,j_1}\cdot \alpha_{i,j_2} \cdot \alpha_{i,j_3} \cdots
\]
gives a homeomorphism from $\Sigma_i^\omega$ to $\{0,1\}^\omega$. Taking the Cartesian product gives a homeomorphism $X\to (\{0,1\}^\omega)^n$ which maps each subcube of~$X$ to a subcube of $(\{0,1\}^\omega)^n$, and it is easy to check that conjugating $XV$ by this homeomorphism yields a subgroup of~$nV$.
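For instance, if $m_i=3$ one may take the complete binary prefix code $\alpha_{i,1}=0$, $\alpha_{i,2}=10$, $\alpha_{i,3}=11$ (the addresses of the leaves of a rooted binary tree with three leaves), so that the $i$th coordinate map sends $\lambda_{i,1}\lambda_{i,3}\lambda_{i,2}\lambda_{i,1}\cdots$ to $0\,11\,10\,0\cdots$.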
\end{proof}
\section{Embedding Right-Angled Artin Groups}
The goal of this section is to prove Theorem~\ref{thm:MainTheorem}. That is, we wish to embed any right-angled Artin group into some~$nV$.
Let $A_\Gamma$ be a right-angled Artin group with generators $g_1,\ldots,g_n$. For convenience, we assume that none of the generators $g_i$ lies in the center of~$A_\Gamma$. For if some generator lies in the center, then $A_\Gamma \cong A'_{\Gamma'}\times \mathbb{Z}$ for some right-angled Artin group $A'_{\Gamma'}$ with fewer generators, and since $kV\times \mathbb{Z}$ embeds in~$kV$, any embedding $A'_{\Gamma'} \to kV$ yields an embedding $A_\Gamma\to kV$.
Let $P$ be the set of all pairs $\{i,j\}$ for which $g_ig_j \ne g_jg_i$, and note that each $i\in\{1,\ldots,n\}$ lies in at least one element of~$P$. Let $X$ be the following Cantor cube:
\[
X \;=\; \prod_{i=1}^n \{0,1\}^\omega \;\times \prod_{\{i,j\}\in P} \{i,j,\emptyset\}^\omega.
\]
Our goal is to prove the following theorem.
\begin{theorem}The group $A_\Gamma$ embeds into $XV$, and hence embeds into $kV$ for $k=n+|P|$.
\end{theorem}
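For example (purely as an illustration of the bound), if $\Gamma$ consists of two vertices and no edge, so that $A_\Gamma\cong F_2$, then $P=\bigl\{\{1,2\}\bigr\}$ and $X=\{0,1\}^\omega\times\{0,1\}^\omega\times\{1,2,\emptyset\}^\omega$, and the construction below embeds $F_2$ into~$3V$ (although, as noted in the introduction, free groups already embed into~$V$).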
We begin by establishing some notation:
\begin{enumerate}
\item For each point $x\in X$, we will denote its components by $\{x_i\}_{i\in\{1,\ldots,n\}}$ and $\{x_{ij}\}_{\{i,j\}\in P}$.
\item Given any $i\in\{1,\ldots,n\}$ and $\alpha\in \{0,1\}^*$, let $C_i(\alpha)$ be the subcube consisting of all $x\in X$ for which $x_i$ begins with~$\alpha$. Let $L_{i,\alpha}\colon X\to C_i(\alpha)$ be the canonical homeomorphism, i.e.~the map that prepends $\alpha$ to~$x_i$.
\item For each $i\in\{1,\ldots,n\}$, let $P_i$ be the set of all $j$ for which $\{i,j\}\in P$, and let $S_i$ be the subcube consisting of all $x\in X$ such that $x_{ij}$ begins with $i$ for all $j\in P_i$. Let $F_i\colon X\to S_i$ be the canonical homeomorphism, i.e.~the map that prepends $i$ to~$x_{ij}$ for each $j\in P_i$.
\item Let $S_{ii} = F_i(S_i)=F_i^2(X)$, i.e.~the subcube consisting of all $x\in X$ such that $x_{ij}$ begins with $ii$ for each $j\in P_i$.
\end{enumerate}
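Continuing the two-generator example above: $P_1=\{2\}$, the subcube $S_1$ consists of those $x\in X$ whose coordinate $x_{12}$ begins with the symbol $1$, and $F_1$ is the map that prepends $1$ to~$x_{12}$.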
Now, for each $i\in\{1,\ldots,n\}$, define a homeomorphism $h_i\colon X\to X$ as follows:
\begin{enumerate}
\item $h_i$ maps $X - S_i$ to $(S_i-S_{ii})\cap C_i(10)$ via $L_{i,10}\circ F_i$.
\item $h_i$ is the identity on $S_{ii}$.
\item $h_i$ maps $(S_i-S_{ii})\cap C_i(1)$ to $(S_i-S_{ii})\cap C_i(11)$ via $L_{i,1}$.
\item $h_i$ maps $(S_i-S_{ii})\cap C_i(01)$ to $X-S_i$ via $F_i^{-1}\circ L_{i,01}^{-1}$.
\item $h_i$ maps $(S_i-S_{ii})\cap C_i(00)$ to $(S_i-S_{ii})\cap C_i(0)$ via $L_{i,0}^{-1}$.
\end{enumerate}
Note that the five domain pieces form a partition of~$X$, and each is the union of finitely many subcubes. Similarly, the five range pieces form a partition of~$X$, and each is the union of finitely many subcubes. Since each of the maps is a restriction of a canonical homeomorphism, it follows that $h_i$ is an element of~$XV$.
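For instance, if $x\in X - S_i$, then $h_i(x)$ is obtained from $x$ by prepending $i$ to $x_{ij}$ for each $j\in P_i$ and then prepending $10$ to $x_i$; since $h_i(x)$ lies in $(S_i-S_{ii})\cap C_i(1)$, every further application of $h_i$ simply prepends $1$ to the coordinate $x_i$, so that $h_i^k(x)\in C_i(1^{k-1}10)$ for all $k\geq 1$.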
\begin{proposition}For each $i,j\in\{1,\ldots,n\}$, if $g_i$ and $g_j$ commute, then so do $h_i$ and $h_j$.
\end{proposition}
\begin{proof}Observe that $h_i(x)$ is completely determined by $x_i$ and $\{x_{ij}\}_{j\in P_i}$, and only changes these coordinates of~$x$. If $g_i$ and $g_j$ commute, then the relevant sets of coordinates for $h_i$ and $h_j$ do not overlap, and hence $h_i$ and $h_j$ commute.
\end{proof}
Thus we can define a homomorphism $\Phi\colon A_\Gamma \to XV$ by $\Phi(g_i) = h_i$ for each~$i$.
\begin{proposition}The homomorphism $\Phi$ is injective.
\end{proposition}
\begin{proof}For each $i$, let $S_i^+ = S_i \cap C_i(1)$, and let $S_i^{-} = S_i\cap C_i(0)$. These two sets form a partition of~$S_i$, with
\[
h_i(S_i^+) = S_i\cap C_i(11) \subseteq S_i^+
\qquad\text{and}\qquad
h_i^{-1}(S_i^{-}) = S_i\cap C_i(00) \subseteq S_i^-.
\]
Now suppose we are given two generators $g_i$ and $g_j$. If $g_i$ and $g_j$ commute, then clearly $h_i(S_j) = S_j$. If $g_i$ and $g_j$ do not commute, then $S_j \subseteq X-S_i$, and therefore $h_i(S_j)\subseteq S_i^+$ and $h_i^{-1}(S_j)\subseteq S_i^-$.
Finally, let $x$ be a point in $X$ such that $x_{ij}$ starts with $\emptyset$ for all $\{i,j\}\in P$. Then $x\in X-S_i$ for all~$i$, so $h_i(x) \in S_i^+$ and $h_i^{-1}(x) \in S_i^-$. The homomorphism $\Phi$
is thus injective by Theorem \ref{thm:ping-pong}.
\end{proof}
This proves our main theorem. Note further that, if $U$ is the open subset of~$X$ consisting of all points $x\in X$ such that $x_{ij}$ starts with $\emptyset$ for all $\{i,j\}\in P$, then $h_i(U) \subseteq S_i^+$ and $h_i^{-1}(U) \subseteq S_i^-$, and therefore all of the sets $\{g(U) \mid g\in A_\Gamma\}$ are disjoint. Thus the action
of $A_\Gamma$ on $X$ is demonstrative in the sense of~\cite{BlSa}, and it follows that the conjugate action of $A_\Gamma$ on $(\{0,1\}^\omega)^k$ as a subgroup of~$kV$ is demonstrative as well.
\end{document}
\begin{document}
\title[Birkhoff-James orthogonality in complex Banach spaces]{Birkhoff-James orthogonality in complex Banach spaces and Bhatia-\v{S}emrl Theorem revisited}
\author[Roy Bagchi Sain]{Saikat Roy, Satya Bagchi, Debmalya Sain}
\address[Roy]{Department of Mathematics\\ National Institute of Technology Durgapur\\ Durgapur 713209\\ West Bengal\\ INDIA\\}
\email{[email protected]}
\address[Bagchi]{Department of Mathematics\\ National Institute of Technology Durgapur\\ Durgapur 713209\\ West Bengal\\ INDIA\\}
\email{[email protected]}
\address[Sain]{Department of Mathematics\\ Indian Institute of Science\\ Bengaluru 560012\\ Karnataka \\INDIA\\ }
\email{[email protected]}
\newcommand{\newline\indent}{\newline\indent}
\subjclass[2020]{Primary 47A30, Secondary 47A12, 46B20}
\keywords{Birkhoff-James orthogonality; smoothness; Toeplitz-Hausdorff Theorem; Bhatia-\v{S}emrl Theorem; complex Banach spaces.}
\maketitle
\begin{abstract}
We explore Birkhoff-James orthogonality of two elements in a complex Banach space by using the directional approach. Our investigation illustrates the geometric distinctions between a smooth point and a non-smooth point in a complex Banach space. As a concrete outcome of our study, we obtain a new proof of the Bhatia-\v{S}emrl Theorem on orthogonality of linear operators.
\end{abstract}
\section{Introduction}
The importance of Birkhoff-James orthogonality in the study of the geometry of Banach spaces is undeniable. Several mathematicians applied Birkhoff-James orthogonality techniques to investigate the geometry of Banach spaces from time to time; see \cite{BS,BP,K,PSMM,S,SP,SRBB}. In the present work, we study a comparatively weaker version of Birkhoff-James orthogonality in complex Banach spaces, namely, the directional orthogonality. The objective of the current article is twofold: we characterize smoothness of an element in complex Banach spaces and engage the directional orthogonality techniques to obtain a different proof of the classical Bhatia-\v{S}emrl Theorem.
\subsection{Notations and terminologies} Letters $ \mathbb{X},~ \mathbb{Y} $ denote Banach spaces and the symbol $ \mathbb{H} $ is reserved for a Hilbert space. Let $ S_{\mathbb{X}} $ denote the unit sphere of the Banach space $ \mathbb{X}, $ i.e., $ S_{\mathbb{X}} := \{ x \in \mathbb{X} :~ \|x\|=1 \}. $ The unit circle in $ \mathbb{C} $ is denoted by $ S^{1}, $ i.e., $ S^{1} := \{ \gamma\in \mathbb{C} :~ |\gamma|=1 \}.$ Let $\mu \in \mathbb{C}$ be non-zero. Then $\mu := |\mu|e^{i\theta}$, where $\theta$ denotes the argument of $\mu.$ Throughout the article we consider the interval $[0,2\pi)$ to be the domain of argument. Let $ \mathsf{Re}~\mu $, $ \mathsf{Im}~\mu $, and $ \mathsf{arg}~\mu $ denote as usual the real part of $ \mu $, the imaginary part of $ \mu $, and the argument of $\mu,$ respectively. Given any $z:= a+ib \in \mathbb{C},$ we denote the conjugate of $z$ by $\overline z:=a-ib.$\\
Unless otherwise stated, we consider Banach spaces, in particular Hilbert spaces, (mostly) over the field $\mathbb{C}$ of complex numbers.\\
Let $ \mathbb{L}(\mathbb{X},\mathbb{Y}) $ denote the Banach space of all bounded linear operators from $ \mathbb{X} $ to $ \mathbb{Y} $ endowed with the usual operator norm. We write $ \mathbb{L}(\mathbb{X},\mathbb{Y}) = \mathbb{L}(\mathbb{X}), $ if $ \mathbb{X}=\mathbb{Y}. $ For any $ T \in \mathbb{L}(\mathbb{X},\mathbb{Y}), $ we denote the norm attainment set of $ T $ by $ M_T $, i.e., $$ M_T:= \{ x \in S_{\mathbb{X}} :~ \|Tx\| = \| T \| \}.$$ Given any two elements $ x,y \in \mathbb{X}, $ we say that $ x $ is Birkhoff-James orthogonal \cite{B,J,Ja} to $ y, $ written as $ x \perp_B y, $ if $$ \|x+\lambda y\| \geq \|x\|,~\mathrm{~for~ all~ }~ \lambda \in \mathbb{C}. $$ Let $x\in \mathbb{X}$ be non-zero and let $\mathbb{J}(x)$ be a subset of $S_{\mathbb{X}^*},$ defined by
\begin{align*}
\mathbb{J}(x):=\{x^*\in S_{\mathbb{X}^*}:~ x^*(x)=\|x\|\}.
\end{align*}
It is well-known \cite[Page 268]{Ja} that in the real case, $x\perp_B y$ (if and) only if there exists $u^*\in \mathbb{J}(x)$ such that $u^*(y)=0.$ It comes easily from convex analysis that the same characterization holds true in the complex case as well. We say that $x$ is smooth if the collection $\mathbb{J}(x)$ is singleton. The Banach space $\mathbb{X}$ is called smooth, if each of its non-zero elements is smooth.\\
We would like to remark here that another study on complex Birkhoff-James orthogonality was conducted in \cite{PSMM}, although most of our results in the present article differ in spirit from \cite{PSMM}. However, we do use some of the notations from \cite{PSMM}, which we mention below.
\begin{definition}
Let $ \mathbb{X} $ be a complex Banach space and let $ \gamma\in S^1 $. For any two elements $ x, y \in \mathbb{X} $, $ x $ is said to be \emph{orthogonal} to $ y $ \emph{in the direction of $ \gamma $}, written as $ x\perp_\gamma y $, if $ \| x + t \gamma y \| \geq \| x \|, $ for all $ t\in \mathbb{R} $.
\end{definition}
\begin{definition}
Let $ \mathbb{X} $ be a complex Banach space and let $ \gamma \in S^1 $. For any two elements $ x,y\in \mathbb{X} $, we say that $ y $ lies in the \emph{positive part} of $ x $ in the direction of $ \gamma $, written as $ y \in (x)_{\gamma}^{+} $, if $ \| x + t \gamma y \| \geq \| x \|, $ for all $ t\geq 0 $. Similarly, we say that $ y $ lies in the \emph{negative part} of $ x $ in the direction of $ \gamma $, written as $ y \in (x)_{\gamma}^{-} $, if $ \| x + t \gamma y \| \geq \| x \|, $ for all $ t\leq 0 $.
\end{definition}
The Bhatia-\v{S}emrl Theorem \cite{BS} provides a nice characterization of Birkhoff-James orthogonality of linear operators on a finite-dimensional Hilbert space in terms of orthogonality of certain special vectors in the ground space.
\begin{theorem}(Bhatia-\v{S}emrl Theorem)\label{Bhatia-Semrl}
Let $ \mathbb{H} $ be a finite-dimensional Hilbert space. Let $ T, A \in \mathbb{L}(\mathbb{H}) $. Then $ T \perp_B A $ if and only if there exists $ x\in M_T $ such that $ \langle Tx, Ax \rangle = 0. $
\end{theorem}
In view of this seminal result, the study of the Bhatia-\v{S}emrl type theorems in the setting of real Banach spaces has been conducted in \cite{S,SP}. Indeed, using Theorem 2.1 and Theorem 2.2 of \cite{SP}, an elementary proof of the Bhatia-\v{S}emrl Theorem can be obtained in the real setting. In our present work, we study Birkhoff-James orthogonality in complex Banach spaces from a geometric point of view. As an outcome of our exploration, we furnish an elementary proof of the Bhatia-\v{S}emrl Theorem in the complex case. We note that the Toeplitz-Hausdorff Theorem was substantially used in the proof of the Bhatia-\v{S}emrl Theorem in each of \cite{BS,BP,TA}. We also apply the Toeplitz-Hausdorff Theorem in our proof. However, the motivation behind our approach is different compared to the approaches taken in \cite{BS,BP,K,TA} to prove the Bhatia-\v{S}emrl Theorem.\\
The present work is organized in the following manner. In the second section, we investigate the directional orthogonality in a complex Banach space. In the third section, we describe the directional orthogonality in terms of linear functionals and characterize smoothness of an element in the underlying space. In the final section, we study Birkhoff-James orthogonality of linear operators between finite-dimensional complex Banach spaces. We apply the ideas developed in this article, along with the Toeplitz-Hausdorff Theorem, to prove the Bhatia-\v{S}emrl Theorem.
\section{Directional orthogonality in complex Banach spaces}
Let us begin with an easy proposition. The results of this proposition will be used extensively throughout this article. We omit the proof of this proposition as it is trivial. We would like to mention that some of the statements of the following proposition are already mentioned in Proposition 2.1 of \cite{PSMM}.
\begin{prop}\label{basic preliminaries}
Let $ \mathbb{X} $ be a complex Banach space and let $ \gamma\in S^1 $. Then for any $ x, y \in \mathbb{X} $, the following hold true:\\
(i) Either $ y\in (x)_\gamma^{+} $, or $ y\in (x)_\gamma^{-} $.\\
(ii) $(x)_\gamma^{+}=(x)_{-\gamma}^{-}. $ \\
(iii) $ x\perp_\gamma y $ if and only if $ x\perp_{-\gamma} y $ if and only if $ y\in (x)_\gamma^{+} \cap (x)_\gamma^{-} $ .
\end{prop}
Our next observation is also somewhat expected. For a given pair of non-zero elements $x$ and $y$ in a complex Banach space $\mathbb{X}$, it may happen that $ x\perp_\gamma y, $ for some $ \gamma\in S^1 $ but $ x\not\perp_B y $. We furnish the following easy example in support of our statement.
\begin{example}
Let $ \mathbb{X} $ be the two-dimensional complex inner product space with the usual inner product and let $ x = ( 1,0 ) $ and $ y = ( 1, i) $. Then for any $ t\in \mathbb{R} $, $$ \| x + t i y \| = \| ( 1 + t i , - t ) \| = ( 1 + 2 t^2 )^{\frac{1}{2}} \geq \| x \|. $$ In other words, $ x\perp_i y $. However, $ x\not\perp_B y $ as $ \langle x, y \rangle = 1$.
\end{example}
We present the first non-trivial result of the article, which is also geometrically illustrating.
\begin{theorem}\label{existence of direction}
Let $ \mathbb{X} $ be a complex Banach space and let $ x , y \in \mathbb{X} $ be non-zero. Then there exists $ \beta\in S^1 $, such that $ x\perp_{\beta} y $. Moreover, given any $\gamma\in S^1$, if $ y = \lambda x ,$ for some $ \lambda \in \mathbb{C}\setminus \{ 0 \} $, then $ x\perp_\gamma y $ if and only if $ \lambda\gamma $ is purely imaginary.
\end{theorem}
\begin{proof}
If $x\perp_B y$, then $\beta_0:=1$ works. If possible, suppose that $ x\not \perp_\beta y $ for every $ \beta \in S^1 $. We now consider two subsets $W_1$ and $W_2$ of $S^1$, defined as follows:
\begin{align}\label{separation}
& W_1 := \{ \beta \in S^1 :~ y \in (x)_\beta ^-\setminus (x)_\beta ^+ \},
& W_2 := \{ \beta \in S^1 :~ y \in (x)_\beta ^+\setminus (x)_\beta ^- \}.
\end{align}
\noindent It follows from Proposition \ref{basic preliminaries} (i) that $ W_1 \cup W_2 = S^1 $. Also, it is easy to see that $ W_1 $ and $ W_2 $ are non-empty. Next, we show that $W_1$ and $W_2$ are closed subsets of $S^1.$ Assume that $(\beta_n) \subseteq W_1$ with $\beta_n \to \gamma$. Then
\begin{align*}
\|x+t\beta_n y\| \geq \|x\|,~~\mathrm{for~all}~~ t\leq 0~\mathrm{and}~n\in \mathbb{N}.
\end{align*}
Hence $\|x+t\gamma y\|=\underset{n}{\lim}\|x+t\beta_n y\|\geq \|x\|$ for every $t\leq 0.$
As a result, $y \in (x)_\gamma^-$.
Since $x\not\perp_\gamma y$, we have $y \notin (x)_\gamma^+$. Hence $\gamma \in W_1$. Thus, $W_1$ is closed. Similarly, we can show that $W_2$ is closed. Also, it follows from the definition of $W_1$ and $W_2$ that $W_1 \cap W_2 = \emptyset.$ Thus, so far we know that the sets $W_1$, $W_2$ are closed, disjoint, and $W_1\cup W_2=S^1.$ Hence both $W_1$ and $W_2$ are also open, so they would form a separation of the connected space $S^1$ into two non-empty disjoint open sets, a contradiction. Therefore, there exists $\beta \in S^1$ such that $x\perp_{\beta}y.$\\
For the next part of the theorem, we assume that $ x\perp_\gamma \lambda x, $ for some $\gamma \in S^1$. Let $\gamma\lambda=a+ib$, for some $a,b\in \mathbb{R}$. Note that
$$\| x + t(a+ib) x \| = \| x \| |1+ta+tib | = \|x\| \left( 1 +2ta + t^2(a^2+b^2) \right)^{\frac{1}{2}},$$
\noindent for all $t\in \mathbb{R}.$ If $a \neq 0$, then choosing $t = -\dfrac{a}{a^2+b^2} $, we have $$ 1 +2t a + t^2(a^2+b^2) = 1 - \dfrac{a^2}{a^2+b^2} < 1. $$ Consequently,
$$\|x+t\gamma y\|= \left\| x - \dfrac{a}{a^2+b^2}(a+ib) x \right\| < \| x \|,$$
a contradiction with $x\perp_\gamma y$. Therefore, we must have $a=0.$ As a result, $ \lambda\gamma $ is purely imaginary.
Conversely, suppose that $ \lambda\gamma $ is purely imaginary. Then for every $t\in \mathbb{R}$ we have
$$\| x + t\lambda \gamma x \| = \left( 1 + (t|\lambda \gamma |)^2\right)^{\frac{1}{2}} \| x \| \geq \| x \|.$$
Therefore, $ x\perp_\gamma \lambda x $.
\end{proof}
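For instance, taking $\lambda = 1$ the theorem says that $x\perp_\gamma x$ precisely when $\gamma = \pm i$, whereas taking $\lambda = i$ it says that $x\perp_\gamma ix$ precisely when $\gamma = \pm 1$.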
A natural question in view of the above result would be regarding the structure of all possible directions of orthogonality for a given pair of vectors. Our next theorem addresses this query. We need the following definition:
\begin{definition}
Let $ \beta \in S^1 $. The \emph{half-circle determined} by $ \beta $ is a subset of $ S^1 $, defined by
\[ U_\beta := \{ \gamma\in S^1 :~ \mathsf{arg}~\gamma \in [ \mathsf{arg}~\beta ,\pi + \mathsf{arg}~\beta ] \} .\]
\end{definition}
Note that $-U_\beta= U_{-\beta}$, for any $\beta\in S^1.$ We now establish a lemma regarding the direction of orthogonality of two vectors.
\begin{lemma}\label{y belongs to negative of all direction}
Let $ \mathbb{X} $ be a complex Banach space and let $ x , y \in \mathbb{X} $. Let $ x\perp_{\beta_0} y, $ for some $ \beta_0\in S^1 $. Let $ \gamma_0\in U_{\beta_0} $ be such that $ y\in (x)_{\gamma_0}^-\setminus (x)_{\gamma_0}^+ \left( y\in (x)_{\gamma_0}^+\setminus (x)_{\gamma_0}^- \right)$. Then $ y\in (x)_{\gamma}^- \left( y\in (x)_{\gamma}^+ \right),$ for all $ \gamma\in U_{\beta_0} $.
\end{lemma}
\begin{proof}
Since $ y\notin (x)_{\gamma_0}^+ $, there is $ t_0>0 $, which by the convexity of the norm may be chosen in $(0,1)$, such that $$ \| x + t_0 \gamma_0 y \| < \| x \| .$$ Consider any $t_1\in (0,t_0)$. Put $\lambda_0:=\frac{t_1}{t_0}$; thus $\lambda_0\in(0,1)$. Observe that
$$ x + t_1 \gamma_0 y =(1-\lambda_0)x+\lambda_0( x + t_0 \gamma_0 y).$$
Taking norms on both sides and applying the triangle inequality, we obtain
\begin{align}\label{convexity of norm}
\| x + t_1 \gamma_0 y \| < \| x \|.
\end{align}
\begin{figure}
\caption{Positive and negative part along a half-circle.}
\label{Positive and negative part along a half circle}
\end{figure}
Let $ \kappa \in U_{-\beta_0}\setminus \{\pm \beta_0\} $. Let $ \varepsilon\in (0,1) $ be arbitrary. We now consider two subsets $\mathsf{A}$ and $\mathsf{B}$ of $\mathbb{C}$, defined as:
$$ \mathsf{A}:=\{ \mu (\varepsilon \kappa) + (1-\mu) (t_1 \gamma_0) :~ \mu \in [0,1] \},\qquad \mathsf{B}:= \{ r \beta_0 :~r\in [-1,1] \}.$$
It follows from the definition (Figure \ref{Positive and negative part along a half circle}) of $\mathsf{A}$ and $\mathsf{B}$ that there exist some $\mu_0\in (0,1]$ and $r_0\in [-1,1]$ such that $$\mathsf{A}\cap \mathsf{B}= \{\mu_0 (\varepsilon \kappa) + (1-\mu_0) (t_1 \gamma_0)\} = \{r_0 \beta_0\}.$$ Now,
\begin{align*}
\| x + r_0 \beta_0 y \| & = \| x + \left(\mu_0 (\varepsilon \kappa) + (1-\mu_0) (t_1 \gamma_0)\right)y \|\\
& = \| \mu_0 x + (1-\mu_0) x + \left(\mu_0 (\varepsilon \kappa) + (1-\mu_0) (t_1 \gamma_0)\right)y \|\\
& \leq \| \mu_0 x + \mu_0 (\varepsilon \kappa) y \| + \| (1-\mu_0) x + (1-\mu_0) (t_1 \gamma_0)y \|\\
& = \mu_0 \| x + (\varepsilon \kappa) y \| + (1- \mu_0) \| x + t_1 \gamma_0 y \|\\
& < \mu_0 \| x + (\varepsilon \kappa) y \| + (1-\mu_0)\| x \|;~~\mathrm{(using~\ref{convexity of norm})}.
\end{align*}
Therefore,
\begin{align}\label{Obtaining inequality}
\| x + r_0 \beta_0 y \|-(1-\mu_0)\| x \| < \mu_0 \| x + (\varepsilon \kappa) y \|.
\end{align}
\noindent By our assumption $x\perp_{\beta_0} y $. Therefore, (\ref{Obtaining inequality}) produces
$$ \| x \| < \| x + \varepsilon \kappa y \|.$$
This means that $ \| x + t \kappa y \| \geq \| x \| ,$ for all $ t \geq 0 $. Therefore, by $(ii)$ of Proposition \ref{basic preliminaries}, $ y\in (x)_\kappa^+=(x)_{-\kappa}^- $ for all $ \kappa \in -U_{\beta_0}.$ Thus, $y \in (x)_{\gamma}^-$ for all $ \gamma\in U_{\beta_0} $.
\end{proof}
Now, we prove the promised theorem.
\begin{theorem}\label{Directions of orthogonality}
Let $ \mathbb{X} $ be a complex Banach space and let $ x , y \in \mathbb{X} $ be non-zero with $x\not\perp_B y.$ Then $ \{ \gamma \in S^1 :~ x\perp_\gamma y \} $ is the union of two diametrically opposite closed arcs of the unit circle $ S^1 $.
\end{theorem}
\begin{proof}
Let $\mathsf{S}$ be a subset of $S^1$, defined by $$\mathsf{S}:=\{ \gamma \in S^1 :~ x\perp_\gamma y \} .$$
It is sufficient to show that $\mathsf{S} = \mathsf{E}\cup (-\mathsf{E}),$ where $ \mathsf{E} $ is a closed and connected subset of $ S^1 $. It follows from Theorem \ref{existence of direction} that $ x\perp_{\beta_0} y $ (hence $x\perp_{-\beta_0}y$), for some $ \beta_0 \in S^1 $. If $\mathsf{S} =\{\beta_0,-\beta_0\}$, then we are done. So we assume that $\{\beta_0,-\beta_0\}\subsetneq \mathsf{S} .$ Without loss of generality, let $ \beta_0 := 1 $. We now complete the proof in the following three steps:\\
\noindent Step I: Since $ x \not\perp_B y $, there exists $ \gamma_0 \in U_{\beta_0} $ such that $ x \not\perp_{\gamma_0} y $. Without loss of generality, let $ y\in (x)_{\gamma_0}^-\setminus (x)_{\gamma_0}^+ $. Next, we consider two subsets $\mathsf{C}$ and $\mathsf{D}$ of $S^1$, defined as:
\begin{align*}
&\mathsf{C} := \{ \beta\in S^1 :~ x\perp_\beta y,~ \mathsf{arg}~\beta\in [0,~ \mathsf{arg}~\gamma_0]\},\\
&\mathsf{D} := \{ \beta\in S^1 :~ x\perp_\beta y,~ \mathsf{arg}~\beta\in [ \mathsf{arg}~\gamma_0, \pi]\}.
\end{align*}
We show below that $\mathsf{C}$ and $\mathsf{D}$ are non-empty and closed (hence compact) subsets of $S^1.$ It is trivial to see that $\mathsf{C}$ and $\mathsf{D}$ are non-empty, as $\beta_0\in \mathsf{C}$ and $-\beta_0\in \mathsf{D}$. Let $(\beta_n)$ be a sequence in $\mathsf{C}$ with $\beta_n\to \alpha.$ Then
\begin{align*}
\|x+t\alpha y\|=\underset{n}{\lim}\|x+t\beta_n y\| \geq \|x\|,~~\mathrm{for~all}~~ t\in \mathbb{R}.
\end{align*}
\begin{figure}
\caption{Direction of orthogonality.}
\label{Direction of orthogonality}
\end{figure}
\noindent Therefore, $x\perp_\alpha y$. Observe that $\mathsf{arg}~{\gamma_0}\in (0, \pi).$ For every $\theta\in \mathbb{R}$, define $e^{i\theta}:=(\cos\theta, \sin\theta)\in \mathbb{R}^2.$ Since the argument function $\mu\mapsto \mathsf{arg}~\mu$ is continuous on $\{e^{i\theta}: 0\leq \theta \leq \mathsf{arg}~\gamma_0\}$, we have $\mathsf{arg}~\beta_n \to \mathsf{arg}~{\alpha}$. As a result, $0 \leq \mathsf{arg}~\alpha \leq \mathsf{arg}~\gamma_0$. Thus, $\alpha\in \mathsf{C}$ and $ \mathsf{C} $ is closed, as expected. A similar argument shows that $\mathsf{D}$ is also closed.\\
\noindent Step II: We show here that $\mathsf{C}$ and $\mathsf{D}$ are connected subsets of $S^1.$ If $\mathsf{C}:=\{\beta_0\}$, then we have nothing more to show. Assume that there are $\beta_1, \beta_2\in \mathsf{C}$ with $\mathsf{arg}~\beta_2 > \mathsf{arg}~\beta_1.$ Let $t_0\in (0,1)$ be arbitrary and let $$\beta_3 := \dfrac{t_0\beta_2+(1-t_0)\beta_1}{|t_0\beta_2+(1-t_0)\beta_1|}.$$
Observe that $\gamma_0, \beta_3 \in U_{\beta_1}$ (Figure \ref{Direction of orthogonality}) and $ y\in (x)_{\gamma_0}^-\setminus (x)_{\gamma_0}^+ $. Thus, it follows from Lemma \ref{y belongs to negative of all direction} that $y\in (x)_{\beta_3}^-.$ Also, $-\gamma_0, \beta_3 \in U_{-\beta_2}$ (Figure \ref{Direction of orthogonality}) and $ y\in (x)_{-\gamma_0}^+\setminus (x)_{-\gamma_0}^- $. Therefore, it follows from Lemma \ref{y belongs to negative of all direction} that $y\in (x)_{\beta_3}^+.$ Now, applying Proposition \ref{basic preliminaries} (iii), we have that $x\perp_{\beta_3} y.$ Since $\mathsf{arg}~{\beta_3} < \mathsf{arg}~{\beta_2} < \mathsf{arg}~{\gamma_0}$, we have $\beta_3\in \mathsf{C}$. Therefore, $\mathsf{C}$ is connected. A similar argument shows that $\mathsf{D}$ is connected.
In other words, we can find $\alpha_0,\kappa_0\in S^1$ such that $\mathsf{C}$ is the closed arc between $\beta_0$ and $\alpha_0$ and $\mathsf{D}$ is the closed arc between $\kappa_0$ and $-\beta_0$ (Figure \ref{Direction of orthogonality}).\\
\noindent Step III: Finally, observe that
$$\mathsf{S} = \mathsf{C}\cup (-\mathsf{C})\cup \mathsf{D} \cup (-\mathsf{D}).$$
Clearly, $\mathsf{C}\cap (-\mathsf{D})=\{\beta_0\}$. As a result, $\mathsf{C}\cup (-\mathsf{D})$ is a closed and connected subset of $S^1$. Now, put $\mathsf{E}:=\mathsf{C}\cup (-\mathsf{D}).$ Therefore, $\mathsf{S}= \mathsf{E}\cup (-\mathsf{E})$, where $\mathsf{E}$ is a closed and connected subset of $S^1$.
\end{proof}
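The following simple example illustrates Theorem \ref{Directions of orthogonality} in a case where both arcs have non-empty interior.
\begin{example}
Let $\mathbb{X}$ be $\mathbb{C}^2$ equipped with the supremum norm and let $x=(1,1)$, $y=(1,i)$. For $\gamma = e^{i\theta}\in S^1$ and $s\in \mathbb{R}$, we have $|1+s\gamma|<1$ if and only if $s$ lies strictly between $0$ and $-2\cos\theta$, and $|1+s\gamma i|<1$ if and only if $s$ lies strictly between $0$ and $2\sin\theta$. Therefore, $\|x+t\gamma y\|=\max\{|1+t\gamma|,\, |1+t\gamma i|\}<\|x\|$ for some $t\in\mathbb{R}$ precisely when $\cos\theta<0<\sin\theta$ or $\sin\theta<0<\cos\theta$. Consequently, $x\not\perp_B y$ and
$$\{\gamma\in S^1 :~ x\perp_\gamma y\}=\{e^{i\theta} :~ \theta\in[0,\tfrac{\pi}{2}]\}\cup\{e^{i\theta} :~ \theta\in[\pi,\tfrac{3\pi}{2}]\},$$
which is the union of two diametrically opposite closed arcs of $S^1$.
\end{example}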
\section{Directional orthogonality and smoothness}
This section plans to characterize the smoothness of a non-zero element $x$ in a complex Banach space $\mathbb{X},$ using the directional orthogonality. To fulfill the desired goal, we need to characterize the directional orthogonality in terms of linear functionals on $\mathbb{X}.$
For $x\in \mathbb{X}$, put
\begin{align*}
\mathsf{J}(x) := \{x^*\in S_{\mathbb{X}^*} :~ |x^*(x)| = \|x\| \}.
\end{align*}
Note that $\mathsf{J}(x)$ is different from $\mathbb{J}(x)$; indeed, $\mathbb{J}(x)\subseteq \mathsf{J}(x)$ and $\mathsf{J}(x)=\{\gamma u^* :~ \gamma\in S^1,~ u^*\in \mathbb{J}(x)\}.$
For $\mu\in S^1$ define
\begin{align}\label{norming functionals for a direction}
\mathsf{J}_\mu(x) := \{x^*\in S_{\mathbb{X}^*} :~ x^*(x) = \mu\|x\| \}.
\end{align}
It is well-known from the Banach-Alaoglu Theorem that the closed unit ball $B_{\mathbb{X}^*}$ of $\mathbb{X}^*$ is weak$^*$ compact. Also, it is easy to see that $\mathsf{J}_\mu(x)$ is non-empty, convex and weak$^*$ compact.
The following theorem provides a necessary and sufficient condition for orthogonality of two vectors $x,y$ in a particular direction.
\begin{theorem}\label{functional Characterization}
Let $\mathbb{X}$ be a complex Banach space. Let $x,y\in \mathbb{X}$ be non-zero vectors and let $\mu\in S^1.$ Then $x\perp_\mu y$ if and only if there exists $u^*\in \mathsf{J}_\mu(x)$ such that $\mathsf{Re}~u^*(y)=0$.
\end{theorem}
\begin{proof}
Note that the directional orthogonality is real homogeneous. Therefore, without loss of generality, we may and do assume that $\|x\|=\|y\| = 1$. We first prove the necessity part. Clearly, $x\neq r\mu y$, for any $r\in \mathbb{R}$. If possible, suppose $\mathsf{Re}~x^*(y) \neq 0$, for every $x^*\in \mathsf{J}_\mu(x)$. We will proceed in three steps:\\
Step I: Define the set
\begin{align*}
W := \{ \mathsf{Re}~p^*(y) :~ p^*\in \mathsf{J}_\mu(x)\}\subseteq \mathbb{R}.
\end{align*}
Since $\mathsf{J}_\mu(x)$ is convex and weak$^*$ compact, and the map $p^*\mapsto \mathsf{Re}~p^*(y)$ is real-linear and weak$^*$ continuous, $W$ must be a compact interval in $\mathbb{R}$. Clearly, $0 \notin W.$ Therefore, every member of $W$ is of the same sign. Without loss of generality, we assume that every member of $W$ is positive. Due to the compactness of $W$, we can find some $\lambda \in (0,1)$ such that $w>\lambda$, for all $w\in W$. In other words,
\begin{align}\label{condition on Jmu(x)}
\mathsf{Re}~p^*(y) > \lambda,~\mathrm{for~all~} p^*\in \mathsf{J}_\mu(x).
\end{align}
Step II: Define
\begin{align*}
G := \left\{x^*\in S_{\mathbb{X}^*} :~ \mathsf{Re}~\mu\overline{x^*(x)}x^*(y) \leq \dfrac{\lambda}{2}\right\}.
\end{align*}
Clearly, $|x^*(x)| \leq 1$ for every $x^*\in G$. We will show that
$$\sup\{|x^*(x)| :~ x^*\in G\} < 1-2\varepsilon,$$
for some $\varepsilon \in (0,\frac{1}{2})$. If possible, suppose $\sup\{|x^*(x)| :~ x^*\in G\}=1.$ We consider two cases.\\
\noindent Case I: Suppose there exists $x_0^*\in G$ such that $|x_0^*(x)|=1$. Then $x_0^*\in \mathsf{J}(x)$. Observe that $\mu\overline{x_0^*(x)}x_0^*\in \mathsf{J}_\mu(x)$. Therefore, $\mathsf{Re}~\mu\overline{x_0^*(x)}x_0^*(y)\in W$. In particular, it follows from (\ref{condition on Jmu(x)}) that $\mathsf{Re}~\mu\overline{x_0^*(x)}x_0^*(y) > \lambda $, which is a contradiction.\\
\noindent Case II: Suppose there exists a sequence $(x_n^*)$ in $G$ such that $|x_n^*(x)|\to 1$. By the Banach-Alaoglu Theorem, find a subnet $(x_{n_\tau}^*)_{\tau\in \mathscr{T}}$ of the sequence $(x^*_n)$ which is weak$^*$ convergent to
some $x^*_0\in B_{\mathbb{X}^*}$. Then $|x_0^*(x)|=1$ and therefore, $x_0^*\in S_{\mathbb{X}^*}$. Consequently, $x_0^*\in \mathsf{J}(x)$ and $\mu\overline{x_0^*(x)}x_0^*\in \mathsf{J}_\mu(x)$. Again, it follows from (\ref{condition on Jmu(x)}) that $\mathsf{Re}~\mu\overline{x_0^*(x)}x_0^*(y) > \lambda $. However, passing to the limit along the subnet in the inequality $\mathsf{Re}~\mu\overline{x_n^*(x)}x_n^*(y) \leq \dfrac{\lambda}{2}$, which holds for every $n\in \mathbb{N}$, yields $\mathsf{Re}~\mu\overline{x_0^*(x)}x_0^*(y) \leq \dfrac{\lambda}{2}$, a contradiction. Therefore, $\sup\{|x^*(x)| :~ x^*\in G\} < 1-2\varepsilon$, for some $\varepsilon \in (0,\frac{1}{2})$, as desired.\\
Step III: In this step, we find a real number $\varepsilon_0>0$ such that $\|x-\varepsilon_0\mu y\| < 1,$ which will contradict $x\perp_\mu y$. Choose $0< \varepsilon_0 < \min\{\frac{\lambda}{2}, \varepsilon\}$. Now, for any $x^*\in G$,
\begin{align*}
|x^*(x-\varepsilon_0\mu y)|\leq |x^*(x)|+\varepsilon_0|\mu||x^*(y)| < 1-2\varepsilon + \varepsilon_0 < 1-\varepsilon.
\end{align*}
If $x^*\in S_{\mathbb{X}^*}\setminus G$, then $\mathsf{Re}~\mu\overline{x^*(x)}x^*(y) > \dfrac{\lambda}{2}$, and so
\begin{align*}
|x^*(x-\varepsilon_0\mu y)|^2 & = x^*(x-\varepsilon_0\mu y)\overline{x^*(x-\varepsilon_0\mu y)}\\
& = |x^*(x)|^2+\varepsilon_0^2|x^*(y)|^2 - 2\varepsilon_0\mathsf{Re}~\mu\overline{x^*(x)}x^*(y)\\
& < 1 + \varepsilon_0^2 - \lambda\varepsilon_0.
\end{align*}
As $\varepsilon_0 < \lambda $, we may choose $0<\delta_0 < \varepsilon$ such that $(1-\delta_0)^2> 1 + \varepsilon_0^2 - \lambda\varepsilon_0$. As a consequence,
\begin{align*}
|x^*(x-\varepsilon_0\mu y)| < 1-\delta_0, ~~\mathrm{for~all}~x^*\in S_{\mathbb{X}^*}.
\end{align*}
Thus,
$$\|x-\varepsilon_0\mu y\|=\sup\{|x^*(x-\varepsilon_0\mu y)|:~ x^*\in S_{\mathbb{X}^*}\}\leq 1-\delta_0 < 1=\|x\|.$$ Therefore, $x\not\perp_\mu y$, a contradiction.
To prove the sufficiency part, we proceed as follows:
\begin{align*}
\|x+t\mu y\|^2 & \geq |u^*(x+t\mu y)|^2\\
& = |\mu +t \mu u^*(y)|^2\\
& = |1 +t u^*(y)|^2\\
& = (1 +t u^*(y))(1+t\overline{u^*(y)})\\
& = 1+2t\mathsf{Re}~u^*(y)+t^2|u^*(y)|^2\\
& \geq 1,
\end{align*}
for all $t\in \mathbb{R}$.
\end{proof}
After formulating the directional orthogonality of two vectors in terms of linear functionals, we now turn our attention towards characterizing the smoothness of a non-zero element $x$ in a complex Banach space $\mathbb{X}.$ We introduce the following definition to serve our purpose.
\begin{definition}(\emph{Orthogonality pair})\label{Orthogonality pair}
Let $\mathbb{X}$ be a complex Banach space and let $(x,y)\in \mathbb{X}\times \mathbb{X}$. An ordered pair $(\mu,x^*)\in S^1\times S_{\mathbb{X}^*}$ is said to be an \emph{orthogonality pair} for $(x,y)$ if
\begin{align*}
x^*(x) = \mu\|x\|~~\mathrm{and}~~\mathsf{Re}~x^*(y)=0.
\end{align*}
We denote the collection of all orthogonality pairs for $(x,y)$ by $\mathcal{O}(x,y)$.
\end{definition}
\begin{remark}
Let $(x,y) \in \mathbb{X}\times\mathbb{X}.$ Then Theorem \ref{existence of direction} ensures the existence of a unimodular constant $\mu$ such that $x\perp_\mu y.$ Now, applying Theorem \ref{functional Characterization} we can find a unit vector $x^*\in S_{\mathbb{X}^*}$ such that $$x^*(x)=\mu\|x\|~\mathrm{and}~\mathsf{Re}~x^*(y)=0.$$ This shows that the ordered pair $(\mu,x^*) \in S^1\times S_{\mathbb{X}^*}$ is a member of $\mathcal{O}(x,y).$ It follows that $\mathcal{O}(x,y)$ is non-empty, for any $(x,y) \in \mathbb{X}\times\mathbb{X}.$ Note that the cardinality of $\mathcal{O}(x,y)$ is always greater than or equal to $2,$ as $(\mu,x^*) \in \mathcal{O}(x,y)$ implies that $(-\mu,-x^*) \in \mathcal{O}(x,y).$
\end{remark}
We now present the promised characterization:
\begin{theorem}\label{smoothness characterization}
Let $\mathbb{X}$ be a complex Banach space and let $x\in \mathbb{X}$ be non-zero. Then $x$ is smooth if and only if for any $y\in \mathbb{X}$ with $x\not\perp_B y$, $\mathcal{O}(x,y)=\{ (\mu, u^*),(-\mu, -u^*)\}$, for some $(\mu,u^*)\in S^1\times S_{\mathbb{X}^*}$.
\end{theorem}
\begin{proof}
Without loss of generality, let $\|x\|=1$. We first prove the necessity part. Consider any $y\in \mathbb{X}$ with $x\not\perp_B y$. If possible, suppose $(\alpha, x^*),(\beta, y^*)\in \mathcal{O}(x,y)$ with $(\beta, y^*) \neq (\alpha, x^*), (-\alpha, -x^*)$, for some $(\alpha,x^*),( \beta,y^*)\in S^1\times S_{\mathbb{X}^*} $. Let $\mathbb{J}(x):=\{v^*\},$ for some $v^*\in S_{\mathbb{X}^*}.$ It is easy to see that $$x^*=\alpha v^*~\mathrm{and}~y^*=\beta v^*.$$ Indeed, if $x^*\neq \alpha v^*$ $(y^*\neq \beta v^*)$, then $\overline{\alpha}x^*$ $(\overline{\beta}y^*)$ is a member of $\mathbb{J}(x)$ with $\overline{\alpha}x^*\neq v^*$ $(\overline{\beta}y^*\neq v^*)$. This contradicts the fact that $x$ is smooth. It follows from our hypothesis that $$\mathsf{Re}~\alpha v^*(y)=\mathsf{Re}~\beta v^*(y)=0.$$ Since $S^1$ is a group under multiplication, there exists $\sigma\in S^1$ such that $\sigma \alpha = \beta.$ We claim that $\sigma \neq \pm 1$. If $\sigma = \pm 1$, then $\beta = \pm \alpha$. If $\beta = \alpha$, then
\begin{align*}
(\beta, y^*) = (\alpha, y^*) = (\alpha, \alpha v^*) = (\alpha, x^*).
\end{align*}
Also, if $\beta = -\alpha$, then
\begin{align*}
(\beta, y^*) = (-\alpha, y^*) = (-\alpha, -\alpha v^*) = (-\alpha, -x^*).
\end{align*}
This is a contradiction to the fact that $(\beta, y^*) \neq (\alpha, x^*), (-\alpha, -x^*)$. Thus, $\sigma \neq \pm 1$, as expected. Now, it follows from our assumption that
\begin{align*}
\mathsf{Re}~x^*(y)=0~~\mathrm{and}~~\mathsf{Re}~\sigma x^*(y)= \mathsf{Re}~\sigma\alpha v^*(y) = \mathsf{Re}~\beta v^*(y) = \mathsf{Re}~y^*(y)=0.
\end{align*}
Let $\sigma =a+ib$, for some real numbers $a,b$. Observe that $b\neq 0$, as $\sigma \neq \pm 1.$ Therefore,
\begin{align*}
\mathsf{Im}~x^*(y)=\mathsf{Re}~\dfrac{1}{b}(a-\sigma)x^*(y)= \dfrac{a}{b}\mathsf{Re}~ x^*(y)-\dfrac{1}{b} \mathsf{Re}~\sigma x^*(y) = 0.
\end{align*}
As a result, $\alpha v^*(y)=0$. Consequently, $x\perp_By$, which is a contradiction.\\
We now prove the sufficiency part. If possible, suppose $x^*,y^*$ are two distinct members of $\mathbb{J}(x)$. Then we can find $\alpha, \beta\in S^1$ such that $\mathsf{Re}~\alpha x^*(y)= \mathsf{Re}~\beta y^*(y)=0.$ Consequently, $(\alpha, \alpha x^*)$ and $(\beta, \beta y^*)$ are members of $\mathcal{O}(x,y).$ It follows from the hypothesis of the theorem that either $(\beta, \beta y^*) = (\alpha, \alpha x^*)$, or, $(\beta, \beta y^*)=(-\alpha, -\alpha x^*)$. However, in either case, we have $x^*=y^*.$ Therefore, the assumption that $x^*,y^*$ are distinct members of $\mathbb{J}(x)$ is not tenable. Consequently, $\mathbb{J}(x)$ is singleton and $x$ is smooth.
\end{proof}
As an obvious application of Theorem \ref{smoothness characterization}, we now characterize smooth Banach spaces among all complex Banach spaces.
\begin{cor}\label{smooth space}
Let $\mathbb{X}$ be a complex Banach space. Then $\mathbb{X}$ is smooth if and only if for any ordered pair $(x,y)\in \mathbb{X} \times \mathbb{X},$ with $x\not\perp_B y$, $\mathcal{O}(x,y)=\{ (\mu, u^*),(-\mu, -u^*)\}$, for some $(\mu, u^*)\in S^1\times S_{\mathbb{X}^*}$.
\end{cor}
Let $\mathbb{X}$ be a complex Banach space. Let $x,y \in \mathbb{X}$ be non-zero with $x$ being smooth. Our next corollary is about the directions along which $x$ is orthogonal to $y.$ We omit the proof, as it follows directly from the necessary part of Theorem \ref{smoothness characterization}.
\begin{cor}\label{direction and smoothness}
Let $\mathbb{X}$ be a complex Banach space. Let $x,y \in \mathbb{X}$ be non-zero with $x$ being smooth. Let $\mathsf{S}:=\{\gamma \in S^1 :~ x\perp_\gamma y \}.$ Then either the cardinality of $\mathsf{S}$ is $2,$ or, $\mathsf{S}=S^1.$
\end{cor}
Since every Hilbert space is smooth, we have the following:
\begin{cor}\label{direction and Hilbert space}
Let $\mathbb{H}$ be a complex Hilbert space and let $x,y \in \mathbb{H}$ be non-zero. Let $\mathsf{S}:=\{\gamma \in S^1 :~ x\perp_\gamma y \}.$ Then either the cardinality of $\mathsf{S}$ is $2,$ or, $\mathsf{S}=S^1.$
\end{cor}
The above two corollaries highlight the distinction between a smooth point and a non-smooth point in a Banach space. As a whole, the above results reflect the structural differences between complex Hilbert spaces and complex Banach spaces. It is needless to mention that the concept of the directional orthogonality facilitates capturing the aforesaid geometric dissimilarities.
\section{Directional orthogonality and Hilbert spaces}
In a complex Banach space, in general it is difficult to explicitly identify the directions along which two given vectors are orthogonal. However, in case of a complex Hilbert space, we have a straightforward description of the latter, which we present in the following theorem.\\
\begin{theorem}\label{Characterization of orthogonality in Hilbert space in a particular direction}
Let $ \mathbb{H} $ be a complex Hilbert space and let $ x, y $ be two non-zero elements in $ \mathbb{H} $. Then for some $ \gamma \in S^1 $, $ x\perp_\gamma y $ if and only if $ \mathsf{Re}~{\gamma} \langle y, x \rangle = 0 $.
\end{theorem}
\begin{proof}
Without loss of generality assume that $\|x\|=1$. The verification of the sufficiency is straightforward. Let us prove the necessity. It follows from Theorem \ref{functional Characterization} that there exists $u^*\in S_{\mathbb{H}^*}$, satisfying
$$u^*(x)=\gamma ~\mathrm{and}~\mathsf{Re}~u^*(y)=0.$$
Thus, $\overline{\gamma}u^*\in \mathbb{J}(x).$ Since the underlying space is a Hilbert space, the Riesz representation theorem forces
$\overline{\gamma}u^*(z)=\langle z,x \rangle$, for all $z \in \mathbb{H}$. Hence $u^*(y) = \gamma \langle y,x\rangle $, and so $\mathsf{Re}~\gamma \langle y,x\rangle = \mathsf{Re}~u^*(y)= 0.$
\end{proof}
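For instance, revisiting the example of Section 2, where $\mathbb{H}$ is the two-dimensional complex inner product space, $x=(1,0)$ and $y=(1,i)$, we have $\langle y,x\rangle = 1$, and so Theorem \ref{Characterization of orthogonality in Hilbert space in a particular direction} shows that $x\perp_\gamma y$ exactly when $\gamma = \pm i$, in accordance with Corollary \ref{direction and Hilbert space}.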
\begin{cor}\label{relation of orthogonality between x and y in a Hilbert space in a particular direction}
Let $ \mathbb{H} $ be a complex Hilbert space and let $ x, y $ be two non-zero elements in $ \mathbb{H} $. Let $ \gamma \in S^1 $. Then $ x\perp_\gamma y $ if and only if $ y \perp_{\overline{\gamma}} x $.
\end{cor}
\subsection{Directional orthogonality and Bhatia-\v{S}emrl Theorem}
In Theorem $ 2.6 $ of \cite{PSMM}, a Bhatia-\v{S}emrl type result has been given in the context of complex Banach spaces. In particular, Proposition $ 2.1 $ of \cite{TA} follows as a simple corollary to this result. In this article, we illustrate that Theorem $ 2.6 $ of \cite{PSMM} should be viewed as a generalization of the Bhatia-\v{S}emrl Theorem to the setting of complex Banach spaces. However, this requires some effort. For the sake of completeness, let us state the concerned theorem from \cite{PSMM}.
\begin{theorem}\label{Relation between direction and M_T}
Let $ \mathbb{X},~\mathbb{Y} $ be finite-dimensional complex Banach spaces and let $ T,A \in \mathbb{L}(\mathbb{X},\mathbb{Y}) $ be linear operators such that $ M_T $ is connected. Then $ T \perp_B A $ if and only if for each $ \gamma\in S^1 $, there exists $ x\in M_T $ such that $ Tx \perp_\gamma Ax $.
\end{theorem}
As mentioned earlier, we make note of the following corollary which follows from the above theorem. This gives an alternative proof of Proposition $ 2.1 $ of \cite{TA}.
\begin{cor}\label{Generalization of Turnsek's result}
Let $ \mathbb{H} $ be a finite-dimensional complex Hilbert space and let $ T,A \in \mathbb{L}(\mathbb{H}) $. Then $ T\perp_B A $ if and only if for each direction $ \gamma \in S^1 $, there exists $ x\in M_T $ such that $ \mathsf{Re}~{\gamma}\langle Ax, Tx \rangle = 0 $.
\end{cor}
\begin{proof}
From Theorem 2.2 of \cite{SP}, it follows that $ M_T $ is connected. Therefore, the proof of the corollary follows from Theorem \ref{Characterization of orthogonality in Hilbert space in a particular direction} and Theorem \ref{Relation between direction and M_T}.
\end{proof}
As the most important application of our study, we would like to obtain a completely new proof of the Bhatia-\v{S}emrl Theorem, which is strikingly simple and geometrically motivated. Moreover, to our surprise, we observe that it is possible to generalize the celebrated Toeplitz-Hausdorff Theorem and to apply it to serve our purpose. The proof of the following theorem, which generalizes the Toeplitz-Hausdorff Theorem, is strongly inspired by \cite{GK}.
Let $\mathbb{X}$ be a Banach space. Recall that a linear operator $\mathcal{I}\in \mathbb{L}(\mathbb{X})$ is called an isometry if $\|\mathcal{I}(x)\|=\|x\|$, for all $x\in \mathbb{X}.$
\begin{theorem}\label{Generalization of Toeplitz-Hausdorff Theorem}
Let $ \mathbb{H} $ be a complex Hilbert space and let $ T,A\in \mathbb{L}(\mathbb{H}) $. Let $\mathbb{H}_0$ be a subspace of $\mathbb{H}$ such that $T$ is a scalar multiple of an isometry on $\mathbb{H}_0.$ Then
$$W_A(T) := \left\{ \langle Ax,Tx \rangle :~ x\in S_{\mathbb{H}_0} \right\}$$
is a convex subset of $\mathbb{C}.$
\end{theorem}
\begin{proof}
Let $c\geq 0$ be such that $\|Tx\| = c\|x\|$ for all $x\in \mathbb{H}_0$. If $c=0,$ then $ W_A(T) = \{ 0 \} $, and we have nothing to prove. Let $\mu_1, \mu_2$ be two distinct members of $W_A(T).$ Find $x_1,x_2\in S_{\mathbb{H}_0}$ such that $ \langle Ax_1, Tx_1 \rangle = \mu_1 ~\mathrm{and}~ \langle Ax_2, Tx_2 \rangle = \mu_2.$ We have to show that the linear segment $[\mu_1, \mu_2]$ lies in $W_A(T).$ Observe that for all $y\in S_{\mathbb{H}_0}$,
\begin{align*}
\left \langle \left (\gamma A+\dfrac{\sigma}{c^2}T \right )y, Ty\right \rangle = \gamma \langle Ay, Ty \rangle +\sigma,~ \mathrm{for~scalars}~ \gamma , \sigma.
\end{align*}
Let $ \gamma_0 := \dfrac{1}{\mu_1-\mu_2} $, $ \sigma_0 := \dfrac{-\mu_2}{\mu_1-\mu_2} $ and let $ P := \gamma_0 A + \dfrac{\sigma_0}{c^2}T $. Then we have that
\begin{align}\label{translation}
W_P(T) = \gamma_0 W_A(T) + \sigma_0~;\quad \langle Px_1, Tx_1 \rangle = 1,~\langle Px_2, Tx_2 \rangle = 0.
\end{align}
It follows from (\ref{translation}) that
$$ [\mu_1, \mu_2] \subseteq W_A(T)~\mathrm{if~and~only~if}~ [0,1] \subseteq W_P(T).$$
Consider any $\lambda \in [0,1].$ We claim that there exists $x_0:=bx_1+ax_2$ with suitable real $a,b$ such that $\|x_0\|=1~\mathrm{and}~\langle Px_0,Tx_0\rangle= \lambda;$ thus $\lambda\in W_P(T). $ It is enough to show that the following system of equations
\begin{align}\label{ellipse}
a^2 + b^2 + 2ab ~\mathsf{Re}~ \langle x_1,x_2 \rangle & = 1,
\end{align}
\begin{align}\label{hyperbola}
b^2 + ab \left\{ \langle Px_1, Tx_2 \rangle + \langle Px_2,Tx_1\rangle \right\} & = \lambda,
\end{align}
\begin{figure}
\caption{Intersection of ellipse and hyperbola.}
\label{Intersection of ellipse and hyperbola}
\end{figure}
\noindent with unknown $a,b \in \mathbb{R},$ possesses a real solution. We observe that $ | \mathsf{Re}~\langle x_1,x_2 \rangle | < 1 $. Indeed, $ | \mathsf{Re}~\langle x_1, x_2 \rangle | = 1 $ together with the Cauchy-Schwarz inequality produces $ \langle x_1, x_2 \rangle = \pm 1 $. If $ \langle x_1, x_2 \rangle = 1 $, then we obtain $ \langle x_1-x_2, x_1-x_2 \rangle = 0 $. As a result, $\|x_1-x_2\|=0$ and $x_1=x_2.$ However, this is a contradiction, since $ \langle Px_1, Tx_1\rangle \neq \langle Px_2, Tx_2 \rangle .$ Similarly, considering $ \langle x_1, x_2 \rangle = -1 $, we arrive at the same contradiction. Therefore, $ | \mathsf{Re}~\langle x_1,x_2 \rangle | < 1 $, as expected.\\ Let $$ N_{x_1}:= \langle Px_1, Tx_2 \rangle + \langle Px_2,Tx_1\rangle.$$ If $ N_{x_1} $ is real, then equations (\ref{ellipse}) and (\ref{hyperbola}) represent an ellipse (with intercepts $ \pm 1 $ on both the $a$-axis and the $b$-axis) and a hyperbola (with intercepts $ \pm \sqrt{\lambda} $ on the $b$-axis), respectively (Figure \ref{Intersection of ellipse and hyperbola}).
Observe that $ N_{x_1} $ can always be made real by replacing $x_1$ with an appropriate unimodular scalar multiple. Indeed, let $ x_1' = \kappa x_1 $, where $ \kappa = m+in $ with $ m^2+n^2 = 1 $. Note that
\begin{align*}
\mathsf{Im}~N_{x_1'} & = \mathsf{Im}~\left\{(m+in) \langle Px_1, Tx_2 \rangle + (m-in)\langle Px_2,Tx_1\rangle \right\}\\
&= m~\mathsf{Im}~\langle Px_1,Tx_2\rangle +n~\mathsf{Re}~\langle Px_1,Tx_2\rangle+m~\mathsf{Im}~\langle Px_2,Tx_1\rangle -n~\mathsf{Re}~\langle Px_2,Tx_1\rangle\\
&= m~\mathsf{Im}~N_{x_1} + n~\mathsf{Re}~\{\langle Px_1,Tx_2\rangle - \langle Px_2,Tx_1\rangle \}.
\end{align*}
It is not difficult to see that the system of equations
\begin{align}\label{circle}
& m^2 + n^2 = 1,\\
&\mathsf{Im}~N_{x_1'} = m~\mathsf{Im}~N_{x_1} + n~\mathsf{Re}~\{\langle Px_1,Tx_2\rangle - \langle Px_2,Tx_1\rangle \} = 0,
\end{align}
possesses two solutions. As a result, the system of equations (\ref{ellipse}) and (\ref{hyperbola}) possesses (four) solutions for $a$ and $b$. Thus, $ [0,1] \subseteq W_P(T) $, as expected.
\end{proof}
Let $\mathbb{H}$ be a complex Hilbert space and let $T\in \mathbb{L}(\mathbb{H}).$ It follows from Theorem 2.2 of \cite{SP} that if $M_T\neq\emptyset$, then $M_T=S_{\mathbb{H}_0}$, for some subspace $\mathbb{H}_0$ of $\mathbb{H}.$ Therefore, $T$ is a scalar multiple of an isometry on $\mathbb{H}_0$, and we have the following:
\begin{cor}\label{Generalization of Toeplitz-Hausdorff Theorem M}
Let $ \mathbb{H} $ be a complex Hilbert space and let $ T,A\in \mathbb{L}(\mathbb{H}) $. Then
either $M_T=\emptyset$ or $\left\{ \langle Ax,Tx \rangle :~ x\in M_T \right\}$
is a convex subset of $\mathbb{C}.$
\end{cor}
The well-known Toeplitz-Hausdorff Theorem is an immediate consequence of Theorem \ref{Generalization of Toeplitz-Hausdorff Theorem}.
\begin{cor}(Toeplitz-Hausdorff Theorem)\label{Toeplitz Hausdorff Theorem}
Let $ \mathbb{H} $ be a complex Hilbert space and let $ A\in \mathbb{L}(\mathbb{H}) $. Then the numerical range of $ A $, defined by
\[ W(A) := \{ \langle Ax, x \rangle :~ x\in S_{\mathbb{H}} \} \]
is a convex subset of $\mathbb{C}$.
\end{cor}
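For instance, for the nilpotent operator $A = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix}$ acting on $\mathbb{C}^2$, a direct computation gives $W(A)=\{x_2\overline{x_1} :~ |x_1|^2+|x_2|^2=1\}$, which is the closed disc of radius $\frac{1}{2}$ centered at the origin and is, in particular, convex.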
We finish the article with an elementary proof of the Bhatia-\v{S}emrl Theorem that follows as an application of Theorem \ref{Relation between direction and M_T} and Corollary \ref{Generalization of Toeplitz-Hausdorff Theorem M}.\\
\noindent {\emph{(Bhatia-\v{S}emrl Theorem)
Let $ \mathbb{H} $ be a finite-dimensional Hilbert space. Let $ T, A \in \mathbb{L}(\mathbb{H}) $. Then $ T \perp_B A $ if and only if there exists $ x\in M_T $ such that $ \langle Tx, Ax \rangle = 0.$}}\\
\begin{proof}
The verification of sufficiency is straightforward; hence we omit it. Now, assume that $ \langle Tx, Ax \rangle \neq 0$ for all $x\in M_T.$ In other words, $0\notin W_A(T):=\left\{ \langle Ax,Tx \rangle :~ x\in M_T \right\}.$ Since $\mathbb{H}$ is finite-dimensional, $M_T$ is non-empty and $W_A(T)$ is a compact convex subset of $\mathbb{C}$ (Corollary \ref{Generalization of Toeplitz-Hausdorff Theorem M}). Therefore, $W_A(T)$ possesses a unique point closest to $ 0 $, say $ \mu. $ Let $\kappa = \dfrac{\mu}{|\mu|}.$ Therefore, $ \mathsf{Re}~\overline{\kappa} \langle Ax,Tx \rangle \geq |\mu|, $ for all $ x\in M_T $. Thus, by Theorem \ref{Characterization of orthogonality in Hilbert space in a particular direction}, $ Tx \not\perp_{\overline{\kappa}} Ax $ for all $ x\in M_T $. Note that the set $M_T$ is
connected, because by \cite{SP}, it is equal to the (connected) unit sphere $S_{\mathbb{H}_0}$ for some subspace $\mathbb{H}_0$ of $\mathbb{H}.$ Hence by Theorem \ref{Relation between direction and M_T}, $T\not\perp_B A$.
\end{proof}
\end{document}
\begin{document}
\title{Functoriality for Supercuspidal $L$-packets}
\author{Ad\`ele Bourgeois and Paul Mezo}
\date{}
\maketitle
\begin{abstract}
Kaletha constructs $L$-packets for supercuspidal $L$-parameters of tame $p$-adic groups. These $L$-packets consist entirely of supercuspidal representations, which are explicitly described. Using the explicit descriptions, we show that Kaletha's $L$-packets satisfy a fundamental functoriality property desired for the Local Langlands Correspondence.
\end{abstract}
\gdef\@thefnmark{}\@footnotetext{This research was supported by the Fields Institute for Research in Mathematical Sciences and the NSERC Discovery Grant RGPIN-2017-06361.
\noindent\textit{MSC2020:} Primary 22E50, 11S37, 11F70
\noindent\textit{Keywords:} Langlands correspondence, functoriality, supercuspidal representation, $p$-adic group}
\tableofcontents
\section{Introduction}
Let $G$ be a connected reductive algebraic group defined over a
nonarchimedean local field $F$. The Local Langlands Correspondence
(LLC) is a conjectural map
$$\varphi \mapsto \Pi_{\varphi}$$
from $L$\emph{-parameters} to $L$\emph{-packets} \cite[Chapter
III]{Borel:1979}. The latter are finite
sets of (equivalence classes of) irreducible representations of the
group of $F$-points, $G(F)$.
The LLC is
expected to satisfy numerous additional properties which give it
content. We focus on only two properties. The first property concerns
\emph{supercuspidal} representations, for which packets were developed by Kaletha
\cite{Kaletha:Regular,
Kaletha:SCPackets}. The second concerns a basic form of
\emph{functoriality} as listed by Borel in \cite[Desideratum 10.3(5)]{Borel:1979}.
To describe the expected property for supercuspidal representations,
we recall that an $L$-parameter is an $L$-homomorphism
$$\varphi: W_{F} \times \mathrm{SL}_{2} \rightarrow {^L}G$$
from the Weil-Deligne group into the $L$-group of $G$
\cite[Section 8.2]{Borel:1979}. Following \cite[Section
4.1]{Kaletha:SCPackets}, the $L$-parameter $\varphi$ is defined to
be \emph{supercuspidal} if it is trivial on $\mathrm{SL}_{2}$,
\emph{i.e.}
$$\varphi: W_{F} \rightarrow {^L}G,$$
and its image is not contained in a proper parabolic subgroup of ${^L}G$
\cite[Section 3.3]{Borel:1979}. As observed in \cite[Section
4.1]{Kaletha:SCPackets}, $L$-packets consisting entirely of
supercuspidal representations are conjectured to correspond precisely
to supercuspidal $L$-parameters (\cite[Section 3.5]{DR}, \cite{AMS}).
In his recent works \cite{Kaletha:Regular, Kaletha:SCPackets}, Kaletha
provides an explicit construction for these conjectured
$L$-packets, under the additional assumptions that $G$ splits over a
tamely ramified extension, and that the residual characteristic $p$ of
$F$ does not divide the order of the Weyl group of $G$. He further
proves that the $L$-packets satisfy some important properties
(\emph{e.g.} stability).
The main goal of this paper is to show that these packets satisfy
the desired functorial property \cite[Desideratum 10.3(5)]{Borel:1979}. For this reason, and from now on, we work under the additional
assumptions on $G$ and the residual characteristic of $F$. For simplicity alone, we also assume that $G$ is quasisplit over $F$.
Let $\Phi_{\mathrm{sc}}(G)$ denote the set (of conjugacy classes) of
supercuspidal $L$-parameters of $G$. Given $\varphi\in\Phi_{\mathrm{sc}}(G)$, we
let $\Pi_\varphi$ denote the associated supercuspidal
$L$-packet obtained via Kaletha's construction. This paper is dedicated to
proving the following theorem.
\begin{theorem}\label{th:desideratum}
Let $\eta: G \rightarrow \tilde{G}$ be an $F$-morphism of connected
reductive $F$-groups such that
\begin{itemize}
\item[i)] the kernel of $d\eta : \mathrm{Lie}(G) \rightarrow \mathrm{Lie}(\tilde{G})$ is central,
\item[ii)] the cokernel of $\eta$ is an abelian $F$-group.
\end{itemize}
Let $\varphiT\in \Phi_{\mathrm{sc}}(\tilde{G})$ and set $\varphi =
{^L\eta}\circ\varphiT$. Then for all $\tilde{\pi}\in \Pi_{\varphiT}$,
$\tilde{\pi}\circ\eta$ is the direct sum of finitely many irreducible
supercuspidal representations belonging to $\Pi_{\varphi}$.
\end{theorem}
We recall that the map $^L\eta$ is given as follows. We let $\widehat{\eta}: \widehat{\tilde{G}} \rightarrow \widehat{G}$ be the induced map on the Langlands dual groups \cite[Sections 1 and 2]{Springer:1979}, and define $^L\eta : {^L\tilde{G}}\rightarrow {^LG}$ by $^L\eta(g,w) = (\widehat{\eta}(g),w)$ for all $g\in\widehat{\tilde{G}}, w\in W_F$.
Note that the above theorem is a modified version of \cite[Desideratum 10.3(5)]{Borel:1979}, in which $\eta$ is required to have abelian kernel and cokernel. The hypothesis on $\eta$ is precisely \cite[Condition 1]{Solleveld:2020}, and is stronger \cite[Lemma 5.1]{Solleveld:2020} than that of \cite[Desideratum 10.3(5)]{Borel:1979}. We require this stronger hypothesis to rule out purely inseparable homomorphisms, such as \cite[V.16.1]{Borel:LAG}, that arise in positive characteristic and to ensure that the root systems of $G$ and $\tilde{G}$ are identified through $\eta$. We also note that Solleveld proves a more precise version of Theorem \ref{th:desideratum} for different families of representations with \cite[Theorem 3]{Solleveld:2020}.
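For instance, the inclusion $\mathrm{SL}_n\hookrightarrow \mathrm{GL}_n$ (trivial kernel and abelian cokernel $\mathrm{GL}_1$) and the quotient map $\mathrm{GL}_n\rightarrow \mathrm{PGL}_n$ (central kernel $\mathrm{GL}_1$ and trivial cokernel) both satisfy conditions i) and ii).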
The supercuspidal representations that make up
the $L$-packets of Theorem~\ref{th:desideratum} are constructed from
\emph{tame $F$-non-singular elliptic pairs}, which consist of a
particular kind of torus and a character thereof \cite[Definition
3.4.1]{Kaletha:SCPackets}. Given such a pair
$(\tilde{S},\tilde{\theta})$ of $\tilde{G}$, we let $\pi_{(\tilde{S},\tilde{\theta})}$ denote the
attached supercuspidal representation of $\tilde{G}(F)$, which is obtained from the \emph{J.-K.~Yu construction} \cite{Yu:2001} after unfolding $(\widetilde{S},\widetilde{\theta})$ into an appropriate \emph{$\tilde{G}$-datum}. The representation
$\pi_{(\tilde{S},\tilde{\theta})}$ may be reducible, and its
irreducible components form part of an $L$-packet. The
first big result of this paper is writing $\pi_{(\tilde{S},\tilde{\theta})}\circ
\eta$ as a direct sum of conjugates of a certain
component of $\pi_{(S,\theta)}$, where $(S,\theta)$ is a tame
$F$-non-singular elliptic pair of $G$ that satisfies $\eta(S)\subset
\tilde{S}$ and $\theta = \tilde{\theta}\circ\eta$ (Theorem~\ref{th:fullTheorem}).
The composition $\pi_{(\tilde{S},\tilde{\theta})}\circ\eta$ can be
viewed as the restriction of $\pi_{(\tilde{S},\tilde{\theta})}$ to $\eta(G(F))$.
Having abelian cokernel implies that $\eta(G)$ is a subgroup of
$\tilde{G}$ which contains the derived subgroup $[\tilde{G},\tilde{G}]$. The kernel of $\eta$ is also a central subgroup by \cite[Lemma 5.1]{Solleveld:2020}, so we may view $\eta(G)$ as
a quotient $G_Z = G/Z$,
where $Z = \ker\eta \subset Z(G)$.
We compute the restriction of $\pi_{(S,\theta)}$ to $\eta(G(F)) = G(F)/Z(F)$ in
two steps. First, by restricting to $\eta(G)(F) =
G_Z(F)$, and second, by further restricting to $G(F)/Z(F)$. The
difference between the groups $G(F)/Z(F)$ and $G_Z(F)$
is parameterized by a subgroup of $H^1(F,Z)$ \cite[Proposition
12.3.4]{Springer:LAG}, which may be nontrivial. The restriction of
supercuspidal representations to subgroups that contain the derived
subgroup was extensively studied in \cite{arXivPaper}. As such, we can
apply the results therein to obtain a description for
$\pi_{(\tilde{S},\tilde{\theta})}|_{G_Z(F)}$ (Theorem~\ref{th:HG}). The second restriction can be computed via Mackey theory, as the quotient $G_Z(F)/(G(F)/Z(F))$ is compact and abelian \cite{Silberger:1979}. The decomposition of $\pi_{(\tilde{S},\tilde{\theta})}\circ\eta$ is given
in Theorem \ref{th:mainTheorem}.
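For instance, when $G = \mathrm{SL}_2$ and $Z = \mu_2$, so that $G_Z = \mathrm{PGL}_2$, the quotient $G_Z(F)/(G(F)/Z(F))$ is identified with $F^\times/(F^\times)^2$ via the map induced by the determinant, a nontrivial finite group.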
In order to describe the supercuspidal representations in the
$L$-packets $\Pi_\varphi$ and
$\Pi_{\varphiT}$, one must know which tame $F$-non-singular elliptic
pairs to use. These pairs are provided by
\emph{supercuspidal $L$-packet data} \cite[Definition
4.1.4]{Kaletha:SCPackets}. The supercuspidal $L$-packet
data for $\varphi$ and $\varphiT$ consist of tuples
$(S,\widehat{j},\chi_0,\theta)$ and $(\tilde{S},\widehat{\tilde{j}},\tilde{\chi}_0,\tilde{\theta})$,
respectively, where $(S,\theta)$ and $(\tilde{S},\tilde{\theta})$ are tame
$F$-non-singular elliptic pairs. The elements $\widehat{j}$ and $\widehat{\tilde{j}}$
specify families of \emph{admissible embeddings} $S(F)\rightarrow
G(F)$ and $\tilde{S}(F)\rightarrow \tilde{G}(F)$, denoted $J(F)$ and $\tilde{J}(F)$,
respectively. Each embedding $j\in J(F)$ $(\tilde{j}\in\tilde{J}(F))$ is
used to generate a tame $F$-non-singular elliptic pair $(jS,j\theta)$
($(\tilde{j}\tilde{S},\tilde{j}\tilde{\theta})$), for which we let the components
of $\pi_{(jS,j\theta)}$ ($\pi_{(\tilde{j}\tilde{S},\tilde{j}\tilde{\theta})}$) be
elements of $\Pi_\varphi$ ($\Pi_{\varphiT}$).
In order to apply our decomposition formula for
$\pi_{(\tilde{j}\tilde{S},\tilde{j}\tilde{\theta})}\circ\eta$
and relate it to representations in $\Pi_\varphi$, we must first
establish an appropriate relationship between the supercuspidal $L$-packet data and
the admissible embeddings. This is given to us by
Theorem~\ref{th:data}, another key result of this paper, in which we
show that for all $\tilde{j}\in\tilde{J}(F)$, there exists $j\in J(F)$ such
that $\eta(jS)\mathfrak{s}ubset \tilde{j}\tilde{S}$ and $j\theta =
\tilde{j}\tilde{\theta}\circ\eta$. As such, we obtain a decomposition formula
for $\pi_{(\tilde{j}\tilde{S},\tilde{j}\tilde{\theta})}\circ\eta$ in terms of a
certain component of $\pi_{(jS,j\theta)}$. This completes the
proof of Theorem~\ref{th:desideratum}.
Let us briefly indicate what is required to extend
Theorem \ref{th:desideratum} to
non-quasisplit groups. Every connected reductive
algebraic $F$-group $G'$ is an inner form of a quasisplit form $G$.
When $\mathrm{char} F = 0$, the group $G'$ may be assigned to a class
of \emph{rigid inner twists} for $G$
\cite[Corollary 3.8 and Section 5.1]{Kaletha:2016}. This class
is an element in a set of the form $H^{1}(u \rightarrow W, Z'
\rightarrow G)$ which we shall not describe. There is a natural map
\begin{equation}
\label{SinG}
H^{1}(u \rightarrow W, Z' \rightarrow S) \ \longrightarrow \ H^{1}(u \rightarrow W, Z'
\rightarrow G)
\end{equation}
from the classes of rigid inner twists for the maximal torus $S
\subset G$.
The elements in a supercuspidal $L$-packet of $G'$ are indexed by
the fiber in $H^{1}(u \rightarrow W, Z' \rightarrow S)$ over the class
in $H^{1}(u \rightarrow W, Z' \rightarrow G)$ corresponding to $G'$
\cite[Section 5.3]{Kaletha:Regular}.
In the case of Theorem \ref{th:desideratum}, \emph{i.e.} $G' = G$, the
classes of rigid inner twists may be chosen to equal the usual Galois
cohomology sets, and a supercuspidal packet is indexed by the fiber of
$$H^{1}(F,S) \rightarrow H^{1}(F, G)$$
over the trivial class. This fiber is in bijection with the set of
admissible embeddings $J(F)$ above. In general, the fiber of
(\ref{SinG}) corresponding to $G'$ is in bijection with admissible
embeddings into (the rigid inner twist for) $G'(F)$. The constructions
and results of this paper apply to these latter admissible embeddings
in exactly the same manner as they do to $J(F)$.
When $\mathrm{char}F \neq 0$, a parallel picture is given in
\cite{Dillery}.
The paper is organized as follows. In
Section~\ref{sec:supercuspidalRelations}, we start by giving a summary
of the construction of supercuspidal representations from tame
$F$-non-singular elliptic pairs (Section~\ref{sec:summaryKaletha}),
after which we move on to proving Theorem~\ref{th:fullTheorem}, which
describes the decomposition of $\pi_{(\tilde{S},\tilde{\theta})}\circ\eta$ given
the tame $F$-non-singular elliptic pair $(\tilde{S},\tilde{\theta})$
(Section~\ref{sec:MainTheorem}). In Section~\ref{sec:packetRelations}, we provide a summary
of the construction of the supercuspidal $L$-packets
(Section~\ref{sec:constructPackets}). We then establish the
relationship between the supercuspidal $L$-packet data associated to $\varphi$ and
$\varphiT$ (Section~\ref{sec:packetData}) and end with the
proof of Theorem~\ref{th:desideratum}
(Section~\ref{sec:proofDesideratum}).
We close this introduction with a list of notation. Given the nonarchimedean local field $F$, we denote by $\mathcal{O}_F$ its ring of integers, $\mathfrak{p}_F$
the unique maximal ideal of $\mathcal{O}_F$ and $\res$ its residue
field of prime characteristic $p$. Let $F^\mathrm{un}$ be a maximal unramified
extension of $F$. The residue field of $F^\mathrm{un}$ is
an algebraic closure of $\res$, so we denote it by $\resun$. Let
$\Gamma$ denote the absolute Galois group of $F$,
$\mathrm{Gal}(\overline{F}/F)$. We use the notation $I_F$ and $P_F$ for the inertia subgroup and wild inertia
subgroup of the Weil group $W_F$, respectively. We also let $E$ denote the tamely ramified extension of $F$ over which $G$ splits.
Given a maximal torus $T$ of $G$, we let $R(G,T)$ denote the root
system of $G$ with respect to $T$. Given $\alpha\in R(G,T)$, we denote
the associated root subgroup by $U_\alpha$. Letting $\tilde{T}$ denote
the maximal torus of $\tilde{G}$ such that $\eta(T) = \tilde{T}\cap\eta(G)$
(given by \cite[Theorem 2.2]{arXivPaper}), the root systems $R(G,T)$
and $R(\tilde{G},\tilde{T})$ are canonically identified, and the Weyl groups
of $G$ and $\tilde{G}$ coincide. We use $\mathfrak{g}$ and $\tilde{\mathfrak{g}}$ for the Lie
algebras of $G$ and $\tilde{G}$, respectively.
We would like to thank Tasho Kaletha for taking the time to answer our
questions regarding the results of his papers, as well as providing
guidance in establishing the content of this manuscript. We would
like to thank Maarten Solleveld for pointing out the missing
hypotheses for non-zero characteristic in an earlier draft. We would also
like to thank Monica Nevins for her insights on the J.-K.~Yu
construction, which allowed us to finalize the proofs that lead to our
decomposition formula.
\section{The Decomposition of
$\pi_{(\tilde{S},\tilde{\theta})}\circ\eta$}\label{sec:supercuspidalRelations}
In \cite{Kaletha:Regular, Kaletha:SCPackets}, Kaletha describes a way
to construct a supercuspidal representation $\pi_{(\tilde{S},\tilde{\theta})}$ of
$\tilde{G}$ from a tame $F$-non-singular elliptic pair $(\tilde{S},\tilde{\theta})$
\cite[Definition 3.4.1]{Kaletha:SCPackets}. Here $\tilde{S}$ is a
maximally unramified elliptic maximal torus and $\tilde{\theta}$ is a character of $\tilde{S}(F)$ satisfying a certain \emph{non-singularity}
condition. The irreducible components of these representations are
what make up the supercuspidal $L$-packets. As such, finding a decomposition formula for $\pi_{(\tilde{S},\tilde{\theta})}\circ\eta$ is crucial in proving Theorem~\ref{th:desideratum}.
\subsection{The Construction of Supercuspidal Representations from Tame $F$-non-singular Elliptic Pairs}\label{sec:summaryKaletha}
Let us recall the construction of supercuspidal representations from tame $F$-non-singular elliptic pairs as per \cite{Kaletha:Regular,Kaletha:SCPackets}. For simplicity of notation, we will describe the construction over $G$, though it is also applied to $G_Z = G/Z$ and $\tilde{G}$.
The reader is assumed to be familiar with the structure theory of $p$-adic groups, for which we start by recalling some
notation. In the case where $T$ is an $F$-split maximal
torus of $G$, we denote the corresponding affine apartment and reduced
affine apartment by $\mathcal{A}(G,T,F)$ and
$\mathcal{A}^{\mathrm{red}}(G,T,F)$, respectively. The Bruhat-Tits
building and reduced building of $G$ over $F$ are denoted by
$\mathcal{B}(G,F)$ and $\mathcal{B}^{\mathrm{red}}(G,F)$,
respectively. For each $x\in\mathcal{B}(G,F)$, we set $[x]$ to be the
projection of $x$ in $\mathcal{B}^{\mathrm{red}}(G,F)$. Furthermore,
for $r > 0$, $G(F)_{x,r}$ denotes the Moy-Prasad filtration subgroup
of the parahoric subgroup $G(F)_{x,0}$. We also set $G(F)_{x,r^+} =
\underset{t >r}{\bigcup}G(F)_{x,t}$. We use colons to abbreviate
quotients, that is $G(F)_{x,r:t} = \quo{G(F)_{x,r}}{G(F)_{x,t}}$ for
$t > r$. We also let $G(F)_{[x]}$ denote the set of elements of $G(F)$
that fix $[x]$. We have analogous filtrations of
$\mathcal{O}_F$-submodules at the level of the Lie algebra.
For all $r>0$, the quotient $G(F)_{x,r:r^+}$ is an abelian group and
is isomorphic to its Lie algebra analog $\mathfrak{g}(F)_{x,r:r^+}$ via Adler's
mock exponential map \cite{Adler:1998}. The quotient $G(F)_{x,0:0^+}$
is also very important, as it results in the $\res$-points of a
reductive group. We shall denote this group by $\mathcal{G}_x$ and
refer to it as the \emph{reductive quotient of $G$ at $x$}. In other words,
$\mathcal{G}_x$ is the reductive group such that $\mathcal{G}_x(\res)
= G(F)_{x,0:0^+}$. Similar notations are adopted for $G_Z$, $\tilde{G}$, and
any subgroups of $G$, $G_Z$ and $\tilde{G}$ that we consider hereafter.
The construction of the supercuspidal representation
$\pi_{(S,\theta)}$ of $G$ starts from a tame $F$-non-singular elliptic pair
$(S,\theta)$ in the sense of \cite[Definition
3.4.1]{Kaletha:SCPackets}. The representation $\pi_{(S,\theta)}$ is
obtained in two steps. One starts by unfolding the pair $(S,\theta)$
into a $G$-datum $\Psi_{(S,\theta)} =
(\vec{G},y,\vec{r},\rho,\vec{\phi})$ in the sense of \cite[Section
3]{Yu:2001}. We will refer to $\Psi_{(S,\theta)}$ as the \emph{corresponding $G$-datum} of $(S,\theta)$. The properties of $S$ and $\theta$ provided by \cite[Definition
3.4.1]{Kaletha:SCPackets} allow us to go to the reductive quotient and use the theory of Deligne-Lusztig cuspidal representations in order to construct $\rho$, the so-called \emph{depth-zero} part of the datum $\Psi_{(S,\theta)}$. The second step consists of applying the J.-K.~Yu construction
\cite{Yu:2001} on the obtained $G$-datum. The unfolding of
the tame $F$-non-singular elliptic pair into a $G$-datum is given as follows.
\paragraph{The twisted Levi sequence $\vec{G}$ and the sequence $\vec{r}$:} We recall how to construct a Levi sequence from $S$ as per \cite[Section 3.6]{Kaletha:Regular}. For all positive $r$, let ${E_r^\times = 1+ \mathfrak{p}_E^{\lceil er \rceil}}$, where $e$ denotes the ramification degree of $E/F$. We consider the set ${R_r \defeq \{\alpha\in R(G,S): \theta(N_{E/F}(\check{\alpha}(E^\times_r)))=1\}}$, where $\check{\alpha}$ is the coroot associated to $\alpha$ and $N_{E/F}$ is the norm of $E/F$, and set $R_{r^+} = \underset{s>r}{\cap}R_s$. There will be breaks in this filtration, $r_{d-1}>r_{d-2}>\cdots > r_0 > 0$, and we set $r_{-1} = 0$ and $r_d = \mathrm{depth}(\theta)$. Then for all $0\leq i\leq d$, $G^i\defeq \langle S,U_\alpha:\alpha\in R_{r_{i-1}^+} \rangle$ is a tamely ramified twisted Levi subgroup of $G$ \cite[Lemma 3.6.1]{Kaletha:Regular}. These twisted Levi subgroups are what we use to form the twisted Levi sequence $\vec{G} = (G^0,\dots,G^d)$. We also set $G^{-1} = S$ and $\vec{r} = (r_0,\dots,r_d)$.
\paragraph{The character sequence $\vec{\phi}$:} By \cite[Proposition 3.6.7]{Kaletha:Regular}, given the character $\theta$ of $S(F)$, there exists a Howe factorization, that is, a sequence of characters $\rep{i}: G^i(F) \rightarrow \mathbb{C}^\times$ for $i=-1,\dots,d$ such that
\begin{enumerate}
\item[1)] $\theta = \prod\limits_{i=-1}^d\rep{i}|_{S(F)}$;
\item[2)] for all $0\leq i\leq d$, $\rep{i}$ is trivial on $G^i_{\mathrm{sc}}(F)$;
\item[3)] for all $0\leq i < d$, $\rep{i}$ is a $G^{i+1}(F)$-generic character of depth $r_i$ in the sense of \cite[Definition 3.9]{HM:2008}. For $i=d$, $\rep{d}$ is trivial if $r_d = r_{d-1}$ and has depth $r_d$ otherwise. For $i=-1$, $\rep{-1}$ is trivial if $G^0=S$ and otherwise satisfies $\rep{-1}|_{S(F)_{0^+}}=1$.
\end{enumerate}
Given such a factorization, we set $\vec{\phi} = (\rep{0},\dots,\rep{d})$.
\paragraph{The point $y$:} Since $(S,\theta)$ is a tame $F$-non-singular elliptic pair, we have that $S$ is a maximally unramified elliptic maximal $F$-torus in the sense of \cite[Definition 3.4.2]{Kaletha:Regular}. As such, we can associate a vertex $[y]$ of $\mathcal{B}^\mathrm{red}(G,F)$ \cite[Lemma 3.4.3]{Kaletha:Regular}, which is the unique $\mathrm{Gal}(F^\mathrm{un}/F)$-fixed point of $\mathcal{A}^{\mathrm{red}}(G,S,F^\mathrm{un})$.
\paragraph{The representation $\rho$:} Let $\mathcal{G}^0_y$ denote the reductive quotient of $G^0$ at $y$, that is, the connected reductive $\res$-group such that $\mathcal{G}^0_y(\res) = G^0(F)_{y,0:0^+}$. Then, by \cite[Lemma 3.4.4]{Kaletha:Regular}, there exists an elliptic maximal $\res$-torus $\mathcal{S}$ of $\mathcal{G}^0_y$ such that for every unramified extension $F'$ of $F$, the image of $S(F')_0$ in $G(F')_{y,0:0^+}$ is equal to $\mathcal{S}(\res')$. By \cite[Lemma 3.4.14]{Kaletha:Regular}, the character $\rep{-1}|_{S(F)_0}$ factors through to a character $\overline{\rep{-1}}$ of $\mathcal{S}(\res)$ which is non-singular \cite[Definition 5.15]{DL:1976}, meaning that the virtual character $\pm R_{\mathcal{S},\overline{\rep{-1}}}$ is a (possibly reducible) Deligne-Lusztig cuspidal representation of $\mathcal{G}^0_y(\res)$ \cite[Proposition 7.4, Theorem 8.3]{DL:1976}. Note that the sign $\pm$ refers to $(-1)^{r_{\res}(\mathcal{G}^0_y)-r_{\res}(\mathcal{S})}$, where $r_{\res}(\mathcal{G}^0_y)$ and $r_{\res}(\mathcal{S})$ denote the $\res$-split ranks of $\mathcal{G}^0_y$ and $\mathcal{S}$, respectively.
The pullback of $\pm R_{\mathcal{S},\overline{\rep{-1}}}$ to $G^0(F)_{y,0}$ then gets extended uniquely to a representation $\upkappa_{(S,\rep{-1})}$ of $S(F)G^0(F)_{y,0}$ and $\rho\defeq \Ind_{S(F)G^0(F)_{y,0}}^{G^0(F)_{[y]}}\upkappa_{(S,\rep{-1})}$. This construction is summarized in Figure~\ref{fig:summaryRho}. Note that we are following the notation from \cite{Kaletha:Regular} in the paragraph above. What we have denoted by $\rho$ is denoted by $\kappa_{(S,\rep{-1})}$ in \cite[Section 3.3]{Kaletha:SCPackets}.
\begin{figure}
\caption{Summary of the construction of $\rho$.}
\label{fig:summaryRho}
\end{figure}
Once we have the $G$-datum $\Psi_{(S,\theta)} = (\vec{G},y,\vec{r},\rho,\vec{\phi})$, we apply the J.-K.~Yu construction to obtain the supercuspidal representation $\pi_{(S,\theta)}$. We do not recall all the details of this construction, but provide a summary in the form of a diagram (Figure~\ref{fig:summaryJKYu}). We invite the reader to consult \cite[Section 3]{arXivPaper} for a brief description of the steps involved. We point out that it is sometimes convenient to write $\kappa_G(\Psi_{(S,\theta)})$ for $\kappa_{S,\theta}$ and $\pi_G(\Psi_{(S,\theta)})$ for $\pi_{(S,\theta)}$ to indicate that we are applying the J.-K.~Yu construction on the $G$-datum $\Psi_{(S,\theta)}$.
\begin{figure}
\caption{Summary of the J.-K.~Yu construction for $\pi_{(S,\theta)}$.}
\label{fig:summaryJKYu}
\end{figure}
The representation $\rho$ above may be reducible, and its irreducible components are given by \cite[Theorem 2.7.7]{Kaletha:SCPackets}. While the definition of a $G$-datum in \cite{Yu:2001} requires $\rho$ to be irreducible, we may still apply the steps of the J.-K.~Yu construction on $(\vec{G},y,\vec{r},\rho,\vec{\phi})$ to obtain $\pi_{(S,\theta)}$, which is a completely reducible supercuspidal representation independent of the chosen Howe factorization \cite[Corollary 3.4.7]{Kaletha:SCPackets}. We use the notation $[\pi_{(S,\theta)}]$ for the set of irreducible components of $\pi_{(S,\theta)}$.
From the pair $(S,\theta)$, one may also perform what we call a \emph{twisted} J.-K.~Yu construction. Indeed, following \cite[Section 4.1]{FKS}, let $\epsilon = \prod\limits_{i=1}^d\epsilon^{G^i/G^{i-1}}$, where $\epsilon^{G^i/G^{i-1}}$ is the quadratic character of $K^d$ that is trivial on $G^1(F)_{y,r_0/2}\cdots G^d(F)_{y,r_d/2}$ and whose restriction to $K^0$ is given by $\epsilon^{G^i/G^{i-1}}_y$ defined in \cite[Definition 4.1.10]{FKS}. The so-called twisted representation then refers to $\Ind_{K^d}^{G}(\kappa_{(S,\theta)}\cdot \epsilon)$, which is equivalent to constructing $\pi_{(S,\theta\cdot \epsilon)}$ via the above steps.
\subsection{Computing the Decomposition}\label{sec:MainTheorem}
In this section, we prove the following decomposition formula.
\begin{theorem}\label{th:fullTheorem}
Let $(\tilde{S},\tilde{\theta})$ and $(S,\theta)$ be tame $F$-non-singular elliptic pairs for $\tilde{G}$ and $G$, respectively. Assume that $\eta(S) \subset \tilde{S}$ and $\theta = \tilde{\theta}\circ\eta$. Then
$$\pi_{(\tilde{S},\tilde{\theta})}\circ\eta = m \underset{t\in T}{\oplus}\underset{\ell\in L}{\oplus}\underset{c\in C}{\oplus} {^{t \ell c}\varrho_{(S,\theta)}},$$ where $T$, $L$ and $C$ are sets of coset representatives of $G_Z(F)\setminus \tilde{G}(F) / \tilde{K}^d$, $G_Z^0(F)_{[\tilde{y}]}\setminus \tilde{G}^0(F)_{[\tilde{y}]} /\tilde{S}(F)\tilde{G}^0(F)_{\tilde{y},0}$ and $G_Z(F)/\{g\in G_Z(F): {^g\varrho_{(S,\theta)}}\simeq \varrho_{(S,\theta)}\}$, respectively, $\varrho_{(S,\theta)} \in [\pi_{(S,\theta)}]$ and $m$ is the multiplicity of $\varrho_{(S,\theta)}$ in $\pi_{(\eta(S),\tilde{\theta}|_{\eta(S)})}\circ\eta$.
\end{theorem}
\begin{remark}\label{rem:definedOverF}
The notation $^{t\ell c}\varrho_{(S,\theta)}$ is interpreted as follows. We have that $t,\ell \in \tilde{G}(F)$. Since $\tilde{G} = Z(\tilde{G})^\circ[\tilde{G},\tilde{G}] = Z(\tilde{G})^\circ G_Z$, we have that $t = z_tg_t, \ell = z_\ell g_{\ell}$ for some ${z_t,z_{\ell}\in Z(\tilde{G})^\circ(\overline{F})}$, ${g_t,g_{\ell}\in G_Z(\overline{F})}$. Conjugation by $t\ell c$ is then equivalent to conjugation by $g_tg_{\ell} c\in G_Z(\overline{F})$. Then, we write $^{t\ell c}\varrho_{(S,\theta)}$ to actually mean $^{\overline{g} \overline{c}}\varrho_{(S,\theta)}$, where $\overline{g},\overline{c}\in G(\overline{F})$ are such that $g_tg_{\ell}=\overline{g}Z$ and $c = \overline{c}Z$. For $^{\overline{g}\overline{c}}\varrho_{(S,\theta)}$ to be an actual representation of $G(F)$, we claim that $G(F)$ is stable under conjugation by $\overline{g}\overline{c}$, or equivalently that the map $\Ad(\overline{g}\overline{c})$ is defined over $F$. Indeed, given that $\Ad(t\ell) = \Ad(g_tg_\ell)$ is defined over $F$, one can show that $\Ad(\overline{g})$ is also defined over $F$.
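One way to see the claim about $\Ad(\overline{g})$ is the following sketch, which uses the standard fact that $Z(G_Z) = Z(G)/Z$ for the central subgroup $Z\subset Z(G)$, so that $\eta^{-1}(Z(G_Z)) = Z(G)$. For $\sigma\in\Gamma$, the fact that $\Ad(g_tg_\ell)$ is defined over $F$ means that $\sigma(g_tg_\ell)(g_tg_\ell)^{-1}\in Z(G_Z)(\overline{F})$. Since $\eta$ is defined over $F$ and $\eta(\overline{g}) = g_tg_\ell$, we obtain $\eta\big(\sigma(\overline{g})\,\overline{g}^{-1}\big) = \sigma(g_tg_\ell)(g_tg_\ell)^{-1}\in Z(G_Z)(\overline{F})$, hence $\sigma(\overline{g})\,\overline{g}^{-1}\in Z(G)(\overline{F})$ and therefore $\Ad(\sigma(\overline{g})) = \Ad(\overline{g})$, that is, $\Ad(\overline{g})$ is defined over $F$.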
Furthermore, by definition of $G_Z(F) = (G/Z)(F)$, $\overline{c}$ is such that $\sigma(\overline{c})Z = \overline{c}Z$ for all $\sigma\in \Gamma$. It follows that $\Ad(\overline{c})$, and therefore $\Ad(\overline{g}\overline{c})$, is defined over $F$.
\end{remark}
For all $g\in G(F)$, we have
$$\pi_{(\tilde{S},\tilde{\theta})}\circ\eta (g) = \Res^{\tilde{G}(F)}_{G_Z(F)}\pi_{(\tilde{S},\tilde{\theta})}(\eta(g)).$$ The results from \cite{arXivPaper} allow us to write a direct sum decomposition for the representation $\Res^{\tilde{G}(F)}_{G_Z(F)}\pi_{(\tilde{S},\tilde{\theta})}$, as $G_Z$ is a normal subgroup of $\tilde{G}$ that contains $[\tilde{G},\tilde{G}]$.
\begin{theorem}\label{th:HG}
Let $(S,\theta)$ be a tame $F$-non-singular elliptic pair of $G$ and let $[y]$ be the vertex of $\mathcal{B}^{\mathrm{red}}(G,F)$ associated to $S$. Let $H$ be a closed connected $F$-subgroup of $G$ that contains $[G,G]$. Set $S_{H} = S\cap H$ and $\theta_H = \theta|_{S_H}$. Then $(S_H,\theta_H)$ is a tame $F$-non-singular elliptic pair of $H$ and
$$\pi_{(S,\theta)}|_{H(F)} = \underset{t\in T}{\oplus}\underset{\ell\in L}{\oplus} {^{t\ell}\pi_{(S_H,\theta_H)}},$$ where $T$ and $L$ are sets of coset representatives of $H(F)\setminus G(F) / K^d$ and $H^0(F)_{[y]}\setminus G^0(F)_{[y]}/ S(F)G^0(F)_{y,0}$, respectively.
\end{theorem}
\begin{proof}
Let $\Psi_{(S,\theta)} = (\vec{G},y,\vec{r},\rho,\vec{\phi})$ be the $G$-datum obtained from the pair $(S,\theta)$ as in Section~\ref{sec:summaryKaletha}. Recall that we may write $\pi_{G}(\Psi_{(S,\theta)})$ for $\pi_{(S,\theta)}$ and $\kappa_{G}(\Psi_{(S,\theta)})$ for $\kappa_{(S,\theta)}$ to indicate that we are applying the J.-K.~Yu construction to $\Psi_{(S,\theta)}$. Set $\Psi^{H}_{(S,\theta)} = (\vec{H},y,\vec{\tilde{r}}, \rho|_{H^0(F)_{[y]}},\vec{\phi}_{H})$, where $\vec{H}, \vec{\tilde{r}}$ and $\vec{\phi}_H$ are as per \cite[Theorem 4.1]{arXivPaper}. Then, it follows from \cite[Theorems 5.7 and 5.8]{arXivPaper} that
$$\pi_{(S,\theta)}|_{H(F)} = \pi_G(\Psi_{(S,\theta)})|_{H(F)} = \underset{t\in T}{\oplus} {^t\pi_H(\Psi^H_{(S,\theta)})},$$ where $T$ is a set of coset representatives of $H(F)\setminus G(F) / K^d$. It remains to compare $\Psi^H_{(S,\theta)}$ and $\Psi_{(S_H,\theta_H)}$. We have that $\vec{H}$ is the twisted Levi sequence associated to $S_H$ by \cite[Theorem 2.3]{arXivPaper} and the discussion preceding it, and the point $[y]$ is the vertex of $\mathcal{B}^{\mathrm{red}}(H,F)$ associated to $S_H$ by \cite[Lemma 7.1]{arXivPaper}. The character sequence $\vec{\phi}_H$ clearly satisfies the first two axioms to be a Howe factorization of $\theta_H$, and genericity is given by \cite[Proposition 4.7]{arXivPaper}. Therefore, we have $\Psi_{(S_H,\theta_H)} = (\vec{H},y,\vec{\tilde{r}},\Ind_{S_H(F)H^0(F)_{y,0}}^{H^0(F)_{[y]}}\upkappa_{(S_H,\theta_H)},\vec{\phi}_H)$. Following the steps in the proof of \cite[Proposition 7.5]{arXivPaper}, we have
$$\rho|_{H^0(F)_{[y]}} = \underset{\ell\in L}{\oplus} {^\ell \Ind_{S_H(F)H^0(F)_{y,0}}^{H^0(F)_{[y]}}\upkappa_{(S_H,\theta_H)}},$$ where $L$ is a set of coset representatives of $H^0(F)_{[y]}\setminus G^0(F)_{[y]} / S(F)G^0(F)_{y,0}$. Therefore, we conclude that $$\pi_H(\Psi^H_{(S,\theta)}) = \underset{\ell\in L}{\oplus} {^\ell \pi_H(\Psi_{(S_H,\theta_H)})} = \underset{\ell\in L}{\oplus}{^\ell\pi_{(S_H,\theta_H)}}$$ and so the result follows.
\end{proof}
As a consequence of the previous theorem, we have that
$$\pi_{(\tilde{S},\tilde{\theta})}\circ \eta(g) = \left(\underset{t\in T}{\oplus}\underset{\ell\in L}{\oplus} {^{t\ell}\pi_{(\eta(S),\tilde{\theta}|_{\eta(S)})}} \right) (\eta(g)),$$ where $T$ and $L$ are as per the statement of Theorem~\ref{th:fullTheorem}. Thus, we seek to describe the decomposition of $\pi_{(\eta(S),\tilde{\theta}|_{\eta(S)})}\circ\eta$ into irreducible representations of $G(F)$. This decomposition is given by the following theorem, which completes the proof of Theorem~\ref{th:fullTheorem}.
\begin{theorem}\label{th:mainTheorem}
Let $(S,\theta)$ and $(S_Z,\theta_Z)$ be tame $F$-non-singular elliptic pairs of $G$ and $G_Z$, respectively. Assume that $\eta(S) = S_Z$ and $\theta = \theta_Z\circ\eta$. Then $\pi_{(S,\theta)} \subset \pi_{(S_Z,\theta_Z)}\circ\eta$. Furthermore, given $\varrho_{(S,\theta)} \in [\pi_{(S,\theta)}]$, we have
$$\pi_{(S_Z,\theta_Z)}\circ\eta = m\underset{c\in C}{\oplus} {^c\varrho_{(S,\theta)}},$$ where $C$ is a set of coset representatives of $G_Z(F)/\{g\in G_Z(F): {^g\varrho_{(S,\theta)}\simeq \varrho_{(S,\theta)}}\}$ and $m$ is the multiplicity of $\varrho_{(S,\theta)}$ in $\pi_{(S_Z,\theta_Z)}\circ\eta$.
\end{theorem}
Proving Theorem~\ref{th:mainTheorem} is just a matter of following the strategy from \cite{arXivPaper} and going through the constructions of $\pi_{(S,\theta)}$ and $\pi_{(S_Z,\theta_Z)}$ step-by-step to make comparisons along the way. It is, however, a lengthy process, so we dedicate Section~\ref{sec:proofMainTheorem} to proving this statement.
\subsubsection{The Proof of Theorem~\ref{th:mainTheorem}}\label{sec:proofMainTheorem}
We start by looking at the relationship between the J.-K.~Yu data that we obtain from $(S,\theta)$ and $(S_Z,\theta_Z)$. Before doing so, let us first state a proposition that will be of use to us in some of our later proofs.
\begin{proposition}\label{prop:induction}
Let $\mu: H_2' \rightarrow H_2$ be a morphism of groups, $H_1\subset H_2$ and $H_1'\subset H_2'$ subgroups such that $\mu(H_2')\cap H_1 = \mu(H_1')$ and $\ker\mu \subset H_1'$. Let $\pi$ be a representation of $H_1$. Then, $\Ind_{H_1'}^{H_2'}(\pi\circ\mu)$ is a subrepresentation of $\left(\Ind_{H_1}^{H_2}\pi\right)\circ\mu$.
\end{proposition}
\begin{proof}
For all $h\in H_2'$, we have
$$\left(\Ind_{H_1}^{H_2}\pi\right)\circ\mu(h) = \left(\Res^{H_2}_{\mu(H_2')}\Ind_{H_1}^{H_2}\pi\right)(\mu(h)).$$ We apply the Mackey decomposition formula on the right-hand side and obtain
$$\left(\Ind_{H_1}^{H_2}\pi\right)\circ\mu(h) = \left(\underset{c\in C}{\oplus}\Ind_{\mu(H_2')\cap {^cH_1}}^{\mu(H_2')}\Res^{^cH_1}_{\mu(H_2')\cap{^cH_1}}{^c\pi}\right)(\mu(h)),$$ where $C$ is a set of representatives of $\mu(H_2')\setminus H_2/H_1$. Since $\mu(H_2')\cap H_1 = \mu(H_1')$, the term corresponding to $c=1$ in the sum is given by $\left( \Ind_{\mu(H_1')}^{\mu(H_2')}\Res^{H_1}_{\mu(H_1')}\pi\right)(\mu(h))$, which implies that $\left(\Ind_{\mu(H_1')}^{\mu(H_2')}\Res^{H_1}_{\mu(H_1')}\pi\right)\circ \mu$ is a subrepresentation of $\left(\Ind_{H_1}^{H_2}\pi\right)\circ\mu$. We claim that $\left(\Ind_{\mu(H_1')}^{\mu(H_2')}\Res^{H_1}_{\mu(H_1')}\pi\right)\circ \mu \simeq \Ind_{H_1'}^{H_2'}(\pi\circ\mu)$. Indeed, let $V$ denote the vector space on which the representations $\pi$ and $\pi\circ\mu$ act. Then, the representations $\left(\Ind_{\mu(H_1')}^{\mu(H_2')}\Res^{H_1}_{\mu(H_1')}\pi\right)\circ \mu$ and $\Ind_{H_1'}^{H_2'}(\pi\circ\mu)$ act on the vector spaces
$$W_\mu = \{f_\mu: \mu(H_2') \rightarrow V \text{ locally constant | } f_\mu(gh) = \pi(h)^{-1}f_\mu(g) \text{ for all } h\in \mu(H_1')\}$$ and $$W = \{f: H_2' \rightarrow V \text{ locally constant | } f(gh) = \pi\circ\mu(h)^{-1}f(g) \text{ for all } h\in H_1'\},$$ respectively. Define a linear map $T$ as follows:
\begin{align*}
T: W_\mu &\rightarrow W\\
f_\mu &\mapsto f_\mu\circ \mu.
\end{align*}
One sees that the map $T$ is injective, as $\mu : H_2' \rightarrow \mu(H_2')$ is surjective. Since $\ker\mu \subset H_1'$, elements of $W$ are constant on the cosets of $H_2'/\ker\mu$, allowing us to define an inverse map and conclude that $T$ is bijective. Finally, one verifies that $T$ intertwines the representations $\left(\Ind_{\mu(H_1')}^{\mu(H_2')}\Res^{H_1}_{\mu(H_1')}\pi \right)\circ \mu$ and $\Ind_{H_1'}^{H_2'}(\pi\circ\mu)$, which concludes the proof.
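Explicitly, with the usual convention that the induced representations above act by left translation on these spaces of functions, the intertwining relation amounts to the computation
$$T\Big(\big(\Ind_{\mu(H_1')}^{\mu(H_2')}\Res^{H_1}_{\mu(H_1')}\pi\big)(\mu(x))\,f_\mu\Big)(g) = f_\mu\big(\mu(x)^{-1}\mu(g)\big) = f_\mu\circ\mu(x^{-1}g) = \Big(\Ind_{H_1'}^{H_2'}(\pi\circ\mu)(x)\,T(f_\mu)\Big)(g)$$
for all $x,g\in H_2'$ and $f_\mu\in W_\mu$.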
\end{proof}
\subsubsubsection{Matching the J.-K.~Yu Data}\label{sec:matchingInducedData}
In this section, $G, G_Z = G/Z$ and $\eta$ are as in the introduction. We let $(S,\theta)$ and $(S_Z,\theta_Z)$ be tame $F$-non-singular elliptic pairs of $G$ and $G_Z$, respectively, which satisfy $\eta(S) = S_Z$ and $\theta = \theta_Z\circ\eta$. The goal of this section is to show that the corresponding J.-K.~Yu data are also related through composition with $\eta$. More precisely, we have the following theorem, which is summarized in Figure~\ref{fig:summaryJKData}.
\begin{theorem}\label{th:matchingData}
Let $(S,\theta)$ and $(S_Z,\theta_Z)$ be tame $F$-non-singular elliptic pairs of $G$ and $G_Z$, respectively, such that $\eta(S) = S_Z$ and $\theta = \theta_Z\circ\eta$. Let $(\vec{G},y,\vec{r},\rho,\vec{\phi})$ and $(\vec{G_Z},y_Z,\vec{r_Z},\rho_Z,\vec{\phi_Z})$ be the corresponding J.-K.~Yu data as described in Section~\ref{sec:summaryKaletha}. Then, $\vec{r} = \vec{r_Z}$, $\eta(\vec{G}) = \vec{G_Z}$, $y_Z = \eta\circ y$, $\vec{\phi} = \vec{\phi_Z}\circ\eta$ and $\rho \subset \rho_Z\circ\eta$.
\end{theorem}
\begin{figure}
\caption{Relationship between the corresponding J.-K.~Yu data given the relationship between the tame $F$-non-singular elliptic pairs.}
\label{fig:summaryJKData}
\end{figure}
The proof of this theorem will be divided into four parts. Lemma~\ref{lem:LeviSeq} shows that ${\eta(\vec{G}) = \vec{G_Z}}$ and $\vec{r} = \vec{r_Z}$, Lemma~\ref{lem:yyZ} gives us $y_Z = \eta\circ y$, Proposition~\ref{prop:HoweFact} allows us to set $\vec{\phi} = \vec{\phi_Z}\circ\eta$, and finally, we obtain the inclusion $\rho \subset \rho_Z\circ\eta$ from Proposition~\ref{prop:rhoeta}.
\begin{lemma}\label{lem:LeviSeq}
Let $\vec{G} = (G^0,\dots, G^d)$ and $\vec{G_Z} = (G_Z^0,\dots,G_Z^{d_Z})$ be the twisted Levi sequences obtained from $S$ and $S_Z$, respectively, as per Section~\ref{sec:summaryKaletha}. Then $d = d_Z$ and $\eta(G^i) = G_Z^i$ for all $0\leq i\leq d$.
\end{lemma}
The previous lemma easily follows from the fact that the root systems $R(G,S)$ and $R(G_Z,S_Z)$ are canonically identified. Furthermore, the induced sequence of numbers $\vec{r}$ is the same for both $S$ and $S_Z$.
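Concretely, one can argue as follows (a sketch, using that under the canonical identification the coroot of $G_Z$ attached to $\alpha$ is $\eta\circ\check{\alpha}$, and that $\eta$ commutes with the norm $N_{E/F}$ because it is defined over $F$): since $\theta = \theta_Z\circ\eta$, for every $r>0$ and every $\alpha\in R(G,S)$ we have
$$\theta\big(N_{E/F}(\check{\alpha}(E^\times_r))\big) = \theta_Z\Big(\eta\big(N_{E/F}(\check{\alpha}(E^\times_r))\big)\Big) = \theta_Z\big(N_{E/F}\big((\eta\circ\check{\alpha})(E^\times_r)\big)\big),$$
so the filtrations $R_r$ computed from $(S,\theta)$ and from $(S_Z,\theta_Z)$ coincide. In particular, the breaks agree, giving $d = d_Z$ and $\vec{r} = \vec{r_Z}$, and $\eta$ maps each twisted Levi subgroup $G^i = \langle S, U_\alpha : \alpha\in R_{r_{i-1}^+}\rangle$ onto the corresponding subgroup $G_Z^i$.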
\begin{lemma}\label{lem:yyZ}
Let $[y]$ be the vertex of $\mathcal{B}^{\mathrm{red}}(G,F)$ associated to $S$ as per \cite[Lemma 3.4.3]{Kaletha:Regular}. Set $y_Z = \eta\circ y$. Then $[y_Z]$ is the vertex of $\mathcal{B}^{\mathrm{red}}(G_Z,F)$ associated to $S_Z$.
\end{lemma}
\begin{proof}
By definition, the vertex $[y]$ belongs to $\mathcal{A}^{\mathrm{red}}(G,S,F^{\mathrm{un}})$ and is its unique $\mathrm{Gal}(F^{\mathrm{un}}/F)$-fixed point. Recall that $\mathcal{A}(G,S,F)=X_*(S,F)\otimes_{\mathbb{Z}}\mathbb{R}$, where $X_*(S,F)$ denotes the cocharacters of $S$ which are defined over $F$. It follows that $y_Z = \eta\circ y$ belongs to $\mathcal{A}(G_Z,S_Z,F)=X_*(S_Z,F)\otimes_{\mathbb{Z}}\mathbb{R}$. Furthermore, $[y_Z]$ is an element of $\mathcal{A}^{\mathrm{red}}(G_Z,S_Z,F^{\mathrm{un}})$ which is fixed by $\mathrm{Gal}(F^{\mathrm{un}}/F)$, meaning that $[y_Z]$ is the vertex associated to $S_Z$.
\end{proof}
Before moving on to the other parts of the data, it is also important to establish a relationship between the Moy-Prasad filtration subgroups of $G^i(F)$ and those of $G_Z^i(F)$, as they are crucial in the steps of the J.-K.~Yu construction. In particular, we have the following lemma.
\begin{lemma}\label{lem:filtrations}
Let $Z$ be a central subgroup of $G$ and set $G_Z = G/Z$. Let $\eta$ be the quotient map from $G$ onto $G_Z$. Let $y\in \mathcal{B}(G,F)$ and set $y_Z = \eta\circ y$. Then $\eta(G(F)_{y,r}) = G_Z(F)_{y_Z,r}$ for all $r>0$.
\end{lemma}
\begin{proof}
Let $r>0$. Following the proof of \cite[Lemma 3.3.2]{Kaletha:Regular}, we use \cite[Lemma 6.4.48]{BT:1972} to write $G(F)_{y,r}$ as the direct product (of topological spaces) of $T(F)_r$ and the appropriate affine root subgroups, where $T$ is a maximally unramified maximally split maximal torus, whose existence is guaranteed by \cite[Corollary 5.1.12]{BT:1984}. Since $\eta$ induces an isomorphism on the affine root subgroups, it suffices to show that $\eta(T(F)_r) = T_Z(F)_r$, where $T_Z = \eta(T) = T/Z$. To do so, let $Z^\circ$ denote the connected component of $Z$. The map $\eta$ factors as follows:
\begin{center}
\begin{tikzcd}
{T}\arrow{rr}{\eta}\arrow{rd}{\eta^\circ} &{} &{T_Z\simeq (T/Z^\circ)/(Z/Z^\circ)}\\
{} &{T/Z^\circ}\arrow{ru}{\overline{\eta}} &{}\\
\end{tikzcd}
\end{center}
We have that $Z^\circ$ is a torus by \cite[Theorem 16.2]{Humphreys:LAG} as it is a closed and connected subgroup of the torus $Z(G)^\circ$. By \cite[Lemma 3.1.3]{Kaletha:Regular}, we have an exact sequence
$$1\rightarrow Z^\circ(F)_r \rightarrow T(F)_r \rightarrow (T/Z^\circ)(F)_r \rightarrow 1,$$ implying that $(T/Z^\circ)(F)_r \simeq T(F)_r/Z^\circ(F)_r\simeq \eta^\circ(T(F)_r)$. Furthermore, since $\overline{\eta}: T/Z^\circ \rightarrow T_Z$ is an isogeny, \cite[Lemma 3.1.3]{Kaletha:Regular} tells us that $\overline{\eta}((T/Z^\circ)(F)_r) = T_Z(F)_r$. Combining these two equalities gives $\eta(T(F)_r) = T_Z(F)_r$, which completes the proof.
\end{proof}
\begin{remark}\label{rem:filtrations}
Given that $Z$ is a central subgroup of $G^i, 0\leq i\leq d$, we apply Lemma~\ref{lem:filtrations} and obtain $\eta(G^i(F)_{y,r_i}) = G_Z^i(F)_{y_Z,r_i}$. It follows that $\eta$ induces isomorphisms ${G^i(F)_{y,r_i:r_i^+}\simeq G_Z^i(F)_{y_Z,r_i:r_i^+}}$, $0\leq i\leq d$. Using a similar argument, we also have ${\eta(J^{i+1}) = \JZ{i+1}}$, $\eta(J^{i+1}_+) = \JZP{i+1}$ and $J^{i+1}/J^{i+1}_+\simeq \JZ{i+1}/\JZP{i+1}$ for all $0\leq i\leq d-1$, where $J^{i+1} = (G^i,G^{i+1})(F)_{y,(r_i,r_i/2)}$ and $J^{i+1}_+ =(G^i,G^{i+1})(F)_{y,(r_i,{r_i/2}^+)}$ as per \cite[Section 1]{Yu:2001}, and $\JZ{i+1}$ and $\JZP{i+1}$ are defined analogously.
At the depth-zero level, we can only guarantee an inclusion, that is, ${\eta(G^0(F)_{y,0}) \subset G_Z^0(F)_{y_Z,0}}$. This induces a homomorphism $G^0(F)_{y,0:0^+}\rightarrow G_Z^0(F)_{y_Z,0:0^+}$, and implies that $\eta(K^i) \subset \KZ{i}$ for all $0\leq i\leq d$, where $\KZ{0} = G_Z^0(F)_{[y_Z]}$ and $\KZ{i+1} = \KZ{0}G_Z^1(F)_{y_Z,r_0/2}\cdots G_Z^{i+1}(F)_{y_Z,r_i/2}$, $0\leq i\leq d-1$.
\end{remark}
\begin{proposition}\label{prop:HoweFact}
Let $(\repZ{-1},\repZ{0},\dots,\repZ{d})$ be a Howe factorization for $\theta_Z$. For each ${-1\leq i\leq d}$, set $\rep{i} = \repZ{i}\circ \eta$. Then $(\rep{-1},\rep{0},\dots,\rep{d})$ is a Howe factorization for $\theta$.
\end{proposition}
\begin{proof}
One sees that $(\rep{-1},\rep{0},\dots,\rep{d})$ satisfies the first two axioms to be a Howe factorization of $\theta$, so it remains to verify the third axiom.
Let $0\leq i < d$. Verifying the genericity condition requires some additional notation. Following \cite[Section 3.1]{HM:2008}, we let $\mathfrak{z}^i$ denote the centre of $\mathfrak{g}^i = \mathrm{Lie}(G^i)$ and $\mathfrak{z}^{i,*}$ its dual. We also set $\mathfrak{s} = \mathrm{Lie}(S), \mathfrak{z}^i_{r_i} = \mathfrak{z}^i(F)\cap \mathfrak{s}(F)_{r_i}$ and define $$\mathfrak{z}^{i,*}_{-r_i} = \{X^*\in \mathfrak{z}^{i,*}(F) : X^*(Y) \in \mathfrak{p}_F \text{ for all }Y\in \mathfrak{z}^i_{r_i^+}\}.$$ We establish similar notation for $G_Z^i$ by adding subscript $Z$.
Fix a character $\psi$ of $F$ which is nontrivial on $\mathcal{O}_F$ and trivial on $\mathfrak{p}_F$. By definition of genericity \cite[Definition 3.9]{HM:2008}, $\repZ{i}$ is a character of depth $r_i$, and its restriction to $G_Z^i(F)_{y_Z,r_i}$ is realized by a $G_Z^{i+1}(F)$-generic element of $\mathfrak{z}^{i,*}_{Z,-r_i}$ of depth $-r_i$. That is, there exists a $G_Z^{i+1}(F)$-generic element $X_Z^*\in \mathfrak{z}^{i,*}_{Z,-r_i}$ of depth $-r_i$ in the sense of \cite[Definition 3.7]{HM:2008} such that $\repZ{i}(e_Z(Y+\mathfrak{g}^i_Z(F)_{y_Z,r_i^+})) = \psi(X_Z^*(Y))$ for all $Y\in \mathfrak{g}^i_Z(F)_{y_Z,r_i}$, where $$e_Z: \mathfrak{g}^i_Z(F)_{y_Z,r_i:r_i^+}\rightarrow G_Z^i(F)_{y_Z,r_i:r_i^+}$$ is the isomorphism from \cite{Adler:1998} referred to as the mock exponential map. Note that given our underlying hypothesis on $p$, one may simplify \cite[Definition 3.9]{HM:2008} to \cite[Definition 3.2]{arXivPaper} and omit so-called condition (GE2) from \cite[Definition 3.7]{HM:2008}.
To show that $\rep{i}$ is ${G^{i+1}}(F)$-generic of depth $r_i$, we must show that $\rep{i}$ is trivial on ${G^i}(F)_{y,r_i^+}$ and that its restriction to ${G^i}(F)_{y,r_i}$ is realized by a ${G^{i+1}}(F)$-generic element of $\mathfrak{z}^{i,*}_{-r_i}$ of depth $-r_i$. The quotient map $\eta: G^i \rightarrow G_Z^i$ induces a map on the Lie algebras $d\eta: \mathfrak{g}^i \rightarrow \mathfrak{g}^i_Z$. Similarly to Remark~\ref{rem:filtrations}, we have that $d\eta(\mathfrak{g}^i(F)_{y,r_i}) = \mathfrak{g}^i_Z(F)_{y_Z,r_i}$, which induces an isomorphism $\mathfrak{g}^i(F)_{y,r_i:r_i^+}\simeq\mathfrak{g}^i_Z(F)_{y_Z,r_i:r_i^+}$ for all $0\leq i\leq d$. We also have $d\eta(\mathfrak{z}^i) \subset \mathfrak{z}^i_Z$ and $d\eta(\mathfrak{z}^i_{r_i}) \subset (\mathfrak{z}^i_Z)_{r_i}$. Therefore $d\eta$ induces a dual map
\begin{align*}
d\eta^*: \mathfrak{z}^{i,*}_Z &\rightarrow \mathfrak{z}^{i,*}\\
X^*_Z &\mapsto X^*_Z\circ d\eta,
\end{align*}
which satisfies $d\eta^*(\mathfrak{z}^{i,*}_{Z,-r_i}) \subset \mathfrak{z}^{i,*}_{-r_i}$.
Setting $X^* = d\eta^*(X^*_Z)$, we then see from \cite[Definition 3.7]{HM:2008} that $X^*$ is a ${G^{i+1}}(F)$-generic element of $\mathfrak{z}^{i,*}_{-r_i}$ of depth $-r_i$, as $R(G^i,S)$ canonically identifies with $R(G_Z^i, S_Z)$. We show that $X^*$ realizes $\rep{i}|_{{G^i}(F)_{y,r_i}}$. Let $e$ denote the mock exponential map from $\mathfrak{g}^i(F)_{y,r_i:r_i^+}$ to ${G^i}(F)_{y,r_i:r_i^+}$. One can verify that the following diagram commutes.
\begin{center}
\begin{tikzcd}
{\mathfrak{g}^i(F)_{y,r_i:r_i^+}}\arrow{d}{d\eta}\arrow{r}{e} &{{G^i}(F)_{y,r_i:r_i^+}}\arrow{d}{\eta}\\
{\mathfrak{g}^i_Z(F)_{y_Z,r_i:r_i^+}}\arrow{r}{e_Z} &{{G_Z^i}(F)_{y_Z,r_i:r_i^+}}\\
\end{tikzcd}
\end{center}
From this relationship between the mock exponential maps, it follows that for all $Y\in \mathfrak{g}^i(F)_{y,r_i}$,
\begin{align*}
\rep{i}(e(Y+\mathfrak{g}^i(F)_{y,r_i^+})) &= \repZ{i}\circ\eta(e(Y+\mathfrak{g}^i(F)_{y,r_i^+})) \\
&=\repZ{i}(e_Z(d\eta(Y)+\mathfrak{g}^i_Z(F)_{y_Z,r_i^+})) \\
&= \psi(X^*_Z(d\eta(Y)))\\
&= \psi(X^*(Y)).
\end{align*}
Thus, we conclude that $\rep{i}$ is ${G^{i+1}}(F)$-generic of depth $r_i$.
For $i=d$, we see that $\rep{d}$ is trivial whenever $\repZ{d}$ is. When $\repZ{d} \neq 1$, $\rep{d}$ must be of the same depth, as ${G}(F)_{y,r_d:r_d^+}\simeq G_Z(F)_{y_Z,r_d:r_d^+}$.
Finally, for $i=-1$, it is clear that $\rep{-1}$ is trivial in the case where $\repZ{-1}$ is trivial, and that $\rep{-1}|_{S(F)_{0^+}} = 1$ whenever $\repZ{-1}|_{S_Z(F)_{0^+}} = 1$, since $\eta(S(F)_{0^+})\subset S_Z(F)_{0^+}$.
\end{proof}
\begin{proposition}\label{prop:rhoeta}
Let $\rho$ and $\rho_Z$ be the representations of $G^0(F)_{[y]}$ and $G_Z^0(F)_{[y_Z]}$ constructed from $(S,\theta)$ and $(S_Z,\theta_Z)$, respectively, as per Section~\ref{sec:summaryKaletha}. Then $\rho \subset \rho_Z\circ\eta$.
\end{proposition}
In order to prove this proposition, we will require the following lemma.
\begin{lemma}\label{lem:redQuotients}
Let $y\in\mathcal{B}(G,F)$, $y_Z = \eta\circ y$, $\mathcal{G}_y$ and $\mathcal{G}_{y_Z}$ be the reductive quotients of $G$ and $G_Z$ at $y$ and $y_Z$, respectively. Then $\mathcal{G}_{y}(\res)$ can be identified with the $\res$-points of a reductive group $\mathcal{H}_{y_Z} \subset \mathcal{G}_{y_Z}$ that contains $[\mathcal{G}_{y_Z},\mathcal{G}_{y_Z}]$.
\end{lemma}
\begin{proof}
We identify the reductive groups with their $\resun$-points. From Remark~\ref{rem:filtrations}, the map $\eta$ induces an embedding
$$\eta: G(F^{\mathrm{un}})_{y,0:0^+} \rightarrow G_Z(F^{\mathrm{un}})_{y_Z,0:0^+},$$ whose image, $\eta(G(F^{\mathrm{un}})_{y,0})G_Z(F^{\mathrm{un}})_{y_Z,0^+}/G_Z(F^{\mathrm{un}})_{y_Z,0^+}$, we denote by $\mathcal{H}_{y_Z}(\resun)$. It follows that $\mathcal{G}_y(\resun)\simeq \mathcal{H}_{y_Z}(\resun)\subset \mathcal{G}_{y_Z}(\resun)$. Furthermore, since root subgroups are normalized by toral elements, it follows that $[G_Z(F^{\mathrm{un}})_{y_Z,0},G_Z(F^{\mathrm{un}})_{y_Z,0}]$ consists only of products of root subgroup elements, and is therefore a subgroup of $\eta(G(F^{\mathrm{un}})_{y,0})$. Hence $[\mathcal{G}_{y_Z}(\resun),\mathcal{G}_{y_Z}(\resun)]\subset \mathcal{H}_{y_Z}(\resun)$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:rhoeta}]
Recall from Figure~\ref{fig:summaryRho} that $\rho = \Ind_{S(F)G^0(F)_{y,0}}^{G^0(F)_{[y]}}\upkappa_{(S,\rep{-1})}$, where $\upkappa_{(S,\rep{-1})}$ is constructed from the Deligne-Lusztig cuspidal representation $\pm R_{\mathcal{S},\overline{\rep{-1}}}$ of $\mathcal{G}^0_y(\res)$ with $\mathcal{S}$ a maximal torus of $\mathcal{G}^0_y$ which satisfies $\mathcal{S}(\res) = S(F)_{0:0^+}$. Adopting similar notation for $G_Z^0$, we have that $\rho_Z = \Ind_{S_Z(F){G_Z^0}(F)_{y_Z,0}}^{{G_Z^0}(F)_{[y_Z]}}\upkappa_{(S_Z,{\repZ{-1}})}$, where $\upkappa_{(S_Z,\repZ{-1})}$ is constructed from the Deligne-Lusztig cuspidal representation $\pm R_{\mathcal{S}_Z,\overline{\repZ{-1}}}$ of $\mathcal{G}^0_{y_Z}(\res)$, the reductive quotient of ${G_Z^0}$ at $y_Z$, with $\mathcal{S}_Z$ a maximal torus of $\mathcal{G}^{0}_{y_Z}$ which satisfies $\mathcal{S}_Z(\res) = S_Z(F)_{0:0^+}$. We start by showing that $\upkappa_{(S,{\rep{-1}})} = \upkappa_{(S_Z,\repZ{-1})}\circ \eta$.
By Lemma~\ref{lem:redQuotients}, we have that $\mathcal{G}^0_y(\res)$ can be identified with the $\res$-points of a reductive group $\mathcal{H}_{y_Z}\subset \mathcal{G}^0_{y_Z}$ that contains $[\mathcal{G}^0_{y_Z},\mathcal{G}^0_{y_Z}]$. Let $\tilde{\mathcal{S}}$ be the maximal torus of $\mathcal{H}_{y_Z}$ which is isomorphic to $\mathcal{S}$ under $\eta$. We have that $\tilde{\mathcal{S}}\subset\mathcal{S}_Z$.
Since virtual characters remain constant under isomorphism, we have
$$R_{\mathcal{S},\overline{\rep{-1}}}(g) = R_{\tilde{\mathcal{S}},\overline{\rep{-1}}\circ \eta^{-1}}(\eta(g)) \text{ for all } g\in \mathcal{G}^{0}_{y}(\res).$$ On the other hand, since $\mathcal{H}_{y_Z}$ contains $[\mathcal{G}^0_{y_Z},\mathcal{G}^0_{y_Z}]$, \cite[Theorem A.2]{arXivPaper}
gives us $R_{\mathcal{S}_Z,\overline{\repZ{-1}}}|_{\mathcal{H}_{y_Z}(\res)} = R_{\tilde{\mathcal{S}},\overline{\repZ{-1}}|_{\tilde{\mathcal{S}}(\res)}}$. We have that $\overline{\rep{-1}}\circ \eta^{-1} = \overline{\repZ{-1}}|_{\tilde{\mathcal{S}}(\res)}$. Indeed, an element of $\tilde{\mathcal{S}}(\res)$ can be written in the form $\eta(s)G_Z^0(F)_{y_Z,0^+}$ for some $s\in S(F)_0$. Using the definitions of the characters and the fact that $\rep{-1} = \repZ{-1}\circ\eta$, we obtain the following chain of equalities.
$$\overline{\rep{-1}}\circ\eta^{-1}(\eta(s)G_Z^0(F)_{y_Z,0^+}) = \overline{\rep{-1}}(s{G^0}(F)_{y,0^+}) = \rep{-1}(s) = \repZ{-1}\circ\eta(s) = \overline{\repZ{-1}}|_{\tilde{\mathcal{S}}(\res)}(\eta(s)G_Z^0(F)_{y_Z,0^+}).$$
Therefore, $R_{\mathcal{S},\overline{\rep{-1}}} = R_{\mathcal{S}_Z,\overline{\repZ{-1}}}|_{\mathcal{H}_{y_Z}(\res)}\circ\eta$. Furthermore, one can show using \cite[Proposition 4.27]{BorelTits:1965} that $r_{\res}(\mathcal{G}_y)-r_{\res}(\mathcal{S}) = r_{\res}(\mathcal{G}_{y_Z})-r_{\res}(\mathcal{S}_Z)$ so that $\pm R_{\mathcal{S},\overline{\rep{-1}}} = \pm R_{\mathcal{S}_Z,\overline{\repZ{-1}}}|_{\mathcal{H}_{y_Z}(\res)}\circ\eta$. Since the pullbacks and extensions are unique, it follows that $\upkappa_{(S,\rep{-1})} = \upkappa_{(S_Z,{\repZ{-1}})}\circ\eta$.
Finally, using Proposition~\ref{prop:induction}, we obtain
$$\rho =\Ind_{S(F)G^0(F)_{y,0}}^{G^0(F)_{[y]}}(\upkappa_{(S_Z,{\repZ{-1}})}\circ\eta) \subset \left(\Ind_{S_Z(F)G_Z^0(F)_{y_Z,0}}^{G_Z^0(F)_{[y_Z]}}\upkappa_{(S_Z,\repZ{-1})}\right)\circ\eta = \rho_Z\circ\eta.$$
\end{proof}
This last proposition completes the proof of Theorem~\ref{th:matchingData}.
\subsubsubsection{Going Through the Steps of the J.-K.~Yu Construction}\label{sec:stepsCommute}
Let $(S,\theta)$ and $(S_Z,\theta_Z)$ be tame $F$-non-singular elliptic pairs of $G$ and $G_Z$, respectively, such that $\eta(S) = S_Z$ and $\theta=\theta_Z\circ\eta$. In the previous section, we established the relationship between the corresponding J.-K.~Yu data, $(\vec{G},y,\vec{r},\rho,\vec{\phi})$ and $(\vec{G_Z},y_Z,\vec{r},\rho_Z,\vec{\phi_Z})$, respectively. It is from these data that we construct the representations $\pi_{(S,\theta)}$ and $\pi_{(S_Z,\theta_Z)}$ following the steps of the J.-K.~Yu construction as outlined in Figure~\ref{fig:summaryJKYu}. To be consistent with notation, we will keep using the subscript $Z$ to differentiate the construction over $G_Z$ from that over $G$. Since we have that $\rep{i} = \repZ{i}\circ\eta$ for all $0\leq i\leq d$ and $\rho\subset \rho_Z\circ\eta$, it is natural to expect that we also have $\prim{i} = \primZ{i}\circ\eta$, $\kap{-1} \subset \kapZ{-1}\circ\eta$ and $\kap{i} = \kapZ{i}\circ\eta$ for all $0\leq i\leq d$, as illustrated in Figure~\ref{fig:etaCommute}, so that the J.-K.~Yu construction commutes with the map $\eta$. Indeed, we prove these equalities and this inclusion in Propositions~\ref{prop:phieta} and \ref{prop:kappaeta}, which will allow us to complete the proof of Theorem~\ref{th:mainTheorem} at the end of this section.
\begin{figure}
\caption{Commutativity of the composition with $\eta$ with extension and inflation.}
\label{fig:etaCommute}
\end{figure}
\begin{proposition}\label{prop:phieta}
For all $0\leq i\leq d$ we have $\prim{i} = \primZ{i}\circ\eta$.
\end{proposition}
In order to define the extension $\prim{i}$ of $\rep{i}$, we require the groups $J^{i+1}$ and $J^{i+1}_+$, which were previously mentioned in Remark~\ref{rem:filtrations}. The extension process is divided into two steps: the first step consists of extending $\rep{i}$ to a character $\phihat{i}$ of $K^iG^{i+1}(F)_{y,s_i^+}$, where $s_i = r_i/2$. The character $\phihat{i}$ is the unique character of $K^iG^{i+1}(F)_{y,s_i^+}$ that agrees with $\rep{i}$ on $K^i$ and is trivial on $(G^i,G^{i+1})(F)_{y,(r_i^+,s_i^+)}$ \cite[Section 3.1]{HM:2008}. When $J^{i+1}\neq J^{i+1}_+$, a second step is required to extend the character $\phihat{i}$ a little further to a representation of $K^{i+1}$ by means of a Heisenberg-Weil lift. We adopt analogous notations to describe the extension $\primZ{i}$ of $\repZ{i}$. We note that $J^{i+1} = J^{i+1}_+$ if and only if $\JZ{i+1} = \JZP{i+1}$ (as a consequence of Remark~\ref{rem:filtrations}), which ensures that the construction of $\prim{i}$ requires a Heisenberg-Weil lift if and only if that of $\primZ{i}$ does.
\begin{proof}[Proof of Proposition~\ref{prop:phieta}]
We have that $\phihat{i} = \phihatZ{i}\circ\eta$. Indeed, given that $\eta(K^i)\subset \KZ{i}$ and $\eta((G^i,G^{i+1})(F)_{y,(r_i^+,s_i^+)}) = (G_Z^i,G_Z^{i+1})(F)_{y_Z,(r_i^+,s_i^+)}$ (Remark~\ref{rem:filtrations}), one sees that $\phihatZ{i}\circ\eta$ agrees with $\rep{i}$ on ${K^i}$ and that it is trivial on $(G^i,G^{i+1})(F)_{y,(r_i^+,s_i^+)}$.
In the case where $J^{i+1} = J^{i+1}_+$, we have $\prim{i} = \phihat{i}$ and $\primZ{i} = \phihatZ{i}$ and we are done. In the case where $J^{i+1}\neq J^{i+1}_+$, we have that $\prim{i}$ is constructed using a Heisenberg-Weil lift $\omega^{i}$, which is a representation of $K^i\ltimes \mathcal{H}^{i}$, where $\mathcal{H}^{i} = J^{i+1}/\ker(\xi^{i})$ and $\xi^{i} = \phihat{i}|_{J^{i+1}_+}$. We then have $\prim{i}(kj) = \phihat{i}(k)\omega^{i}(k,j\ker(\xi^{i}))$ for all $k\in K^i, j\in J^{i+1}$. Since $J^{i+1} \neq J^{i+1}_+$ if and only if $\JZ{i+1} \neq \JZP{i+1}$ (Remark~\ref{rem:filtrations}), we also require a Heisenberg-Weil lift $\omega^{i}_Z$, which is a representation of $\KZ{i}\ltimes\mathcal{H}^{i}_Z$, where $\mathcal{H}^{i}_Z = \JZ{i+1}/\ker(\xi^{i}_Z)$ and $\xi^{i}_Z = \phihatZ{i}|_{\JZP{i+1}}$, and have that $\primZ{i}(k_Zj_Z) = \phihatZ{i}(k_Z)\omega^{i}_Z(k_Z,j_Z\ker(\xi^{i}_Z))$ for all $k_Z\in\KZ{i}, j_Z\in\JZ{i+1}$.
Since we already know that $\phihat{i} = \phihatZ{i}\circ\eta$, it then suffices to show that $\omega^{i} = \omega^{i}_Z\circ\eta$. Note that the map $\eta$ induces isomorphisms $\mathcal{H}^{i}\simeq \mathcal{H}^{i}_Z$ and $W\simeq W^{i}_Z$ by Remark~\ref{rem:filtrations}, where $W = J^{i+1}/J^{i+1}_+$ and $W^{i}_Z = \JZ{i+1}/\JZP{i+1}$. We then obtain that $\omega^{i} = \omega^{i}_Z\circ\eta$ as an application of \cite[Proposition 3.2]{Nevins:2015}, in which we set $H_1 = \mathcal{H}^{i}$, $H_2= \mathcal{H}^{i}_Z$, $W_1 = W$, $W_2 = W^{i}_Z$, $T_1 = K^i$, $T_2 = \KZ{i}$, $\alpha = \delta = \eta$, $\nu_1$ and $\nu_2$ the corresponding special isomorphisms from \cite[Lemma 2.35]{HM:2008}, and $f_1$ and $f_2$ the homomorphisms coming from the actions by conjugation of $K^i$ and $\KZ{i}$ on $J^{i+1}$ and $\JZ{i+1}$, respectively.
\end{proof}
\begin{proposition}\label{prop:kappaeta}
For all $0\leq i\leq d$ we have $\kap{i} = \kapZ{i}\circ\eta$. Furthermore, $\kap{-1} \subset \kapZ{-1}\circ\eta$.
\end{proposition}
\begin{proof}
Let $0\leq i\leq d-1$. Let us briefly recall the process of inflation. We have that $K^d = K^{i+1}J$, where $J = J^{i+2}\cdots J^d$. Then, for all $k\in K^{i+1}, j\in J$, $\kap{i}(kj) = \prim{i}(k)$. Similarly, we have $\KZ{d} = \KZ{i+1}J_Z$, where $J_Z = \JZ{i+2}\cdots \JZ{d}$, and $\kapZ{i}(k_Zj_Z) = \primZ{i}(k_Z)$ for all ${k_Z\in \KZ{i+1}}$, $j_Z\in J_Z$.
Using these definitions, for all $k\in {K^{i+1}}, j\in J$, we have ${\kap{i}(kj) = \prim{i}(k) = \primZ{i}(\eta(k))}$. By Remark~\ref{rem:filtrations}, we have that $\eta(k)\in \KZ{i+1}$ and $\eta(j)\in J_Z$. Therefore, ${\primZ{i}(\eta(k)) = \kapZ{i}(\eta(k)\eta(j)) = \kapZ{i}\circ\eta(kj)}$. Thus, we conclude that $\kap{i} = \kapZ{i}\circ\eta$.
By a similar argument, we have that $\kap{-1} \subset \kapZ{-1}\circ\eta $ as a consequence of having $\rho \subset \rho_Z\circ\eta$ (Proposition~\ref{prop:rhoeta}).
\end{proof}
We are now in a position to complete the proof of our main theorem.
\begin{proof}[Proof of Theorem~\ref{th:mainTheorem}]
The previous proposition gives us that $\kappa_{(S,\theta)}\subset \kappa_{(S_Z,\theta_Z)}\circ\eta$, which implies that $\pi_{(S,\theta)} \subset \Ind_{K^d}^{G(F)}\left( \kappa_{(S_Z,\theta_Z)}\circ\eta\right)$. By Proposition~\ref{prop:induction}, we have that $\Ind_{K^d}^{G(F)}\left( \kappa_{(S_Z,\theta_Z)}\circ\eta\right) \subset \left(\Ind_{\KZ{d}}^{G_Z(F)}\kappa_{(S_Z,\theta_Z)}\right) \circ \eta = \pi_{(S_Z,\theta_Z)}\circ\eta$.
Now, let $\varrho_{(S,\theta)}\in [\pi_{(S,\theta)}]$. In particular, we have that $\varrho_{(S,\theta)}\subset \pi_{(S_Z,\theta_Z)}\circ\eta$, or equivalently $\varrho_{(S,\theta)}$ is a subrepresentation of $\Res^{G_Z(F)}_{G(F)/Z(F)}\pi_{(S_Z,\theta_Z)}$ when viewing it as a representation of the quotient $G(F)/Z(F)$. By \cite[Theorem]{Silberger:1979}, we know that $\pi_{(S_Z,\theta_Z)}\circ\eta$ (and therefore $\Res^{G_Z(F)}_{G(F)/Z(F)}\pi_{(S_Z,\theta_Z)}$) decomposes as a finite direct sum of irreducible representations. Set $m$ to be the multiplicity of $\varrho_{(S,\theta)}$ in $\Res^{G_Z(F)}_{G(F)/Z(F)}\pi_{(S_Z,\theta_Z)}$. Because $G(F)/Z(F)$ is a normal subgroup of $G_Z(F)$, one can use Mackey theory, as it applies to Clifford theory, to compute the restriction of $\pi_{(S_Z,\theta_Z)}$. Indeed, following the steps in \cite[Section 5.2]{thesis}, we write $\Res_{G(F)/Z(F)}^{G_Z(F)}\pi_{(S_Z,\theta_Z)} = m\underset{c\in C}{\oplus}{^c\varrho_{(S,\theta)}}$, where $C$ is a set of coset representatives of $G_Z(F)/\{g\in G_Z(F): {^g\varrho_{(S,\theta)}\simeq \varrho_{(S,\theta)}}\}$. This concludes the proof.
\end{proof}
\section{Functoriality for Supercuspidal $L$-packets}\label{sec:packetRelations}
Recall that $\Phi_{\mathrm{sc}}(G)$ denotes the set (of conjugacy classes) of supercuspidal $L$-parameters of $G$. Given our hypothesis on $p$, every $\varphi\in\Phi_{\mathrm{sc}}(G)$ has the property that $\varphi(P_F)$ is contained in a maximal torus of $\widehat{G}$ \cite[Lemma 4.1.3]{Kaletha:SCPackets}. Such parameters are called \emph{torally wild} in \cite{Kaletha:SCPackets}. Since all supercuspidal parameters we consider in this paper are torally wild, we will omit these adjectives.
Given $\varphi\in\Phi_{\mathrm{sc}}(G)$, we let $\Pi_\varphi$ denote the associated $L$-packet of \cite{Kaletha:SCPackets}. Kaletha provides an explicit parameterization for $\Pi_\varphi$, and elements therein consist entirely of supercuspidal representations obtained from the construction outlined in Section~\ref{sec:summaryKaletha}. Thus, when ${\varphi\in\Phi_{\mathrm{sc}}(G)}$, we shall refer to $\Pi_\varphi$ as a \emph{supercuspidal} $L$-packet.
\subsection{The Construction of Supercuspidal $L$-packets}\label{sec:constructPackets}
In order to describe Kaletha's construction of supercuspidal $L$-packets, we must first familiarize ourselves with his notion of a supercuspidal $L$-packet datum. We start this section by recalling the definition below.
\begin{definition}[{\cite[Definition 4.1.4]{Kaletha:SCPackets}}]
A supercuspidal $L$-packet datum of $G$ is a tuple $(S,\hat{j},\chi_0,\theta)$, where
\begin{enumerate}
\item[1)] $S$ is a torus of dimension equal to the absolute rank of $G$, defined over $F$ and split over a tame extension of $F$;
\item[2)] $\hat{j}: \widehat{S} \rightarrow \widehat{G}$ is an embedding of complex reductive groups whose $\widehat{G}$-conjugacy class is $\Gamma$-stable;
\item[3)] $\chi_0 = (\chi_{\alpha_0})_{\alpha_0}$ is tamely ramified $\chi$-data for $R(G,S^0)$, where $S^0$ is a particular subtorus of $S$ defined from $R_{0+}$ as explained in \cite[p.41]{Kaletha:SCPackets};
\item[4)] and $\theta :S(F)\rightarrow \mathbb{C}^\times$ is a character;
\end{enumerate}
subject to the condition that $(S,\theta)$ is a tame $F$-non-singular elliptic pair in the sense of \cite[Definition 3.4.1]{Kaletha:SCPackets}.
\end{definition}
The notion of $\chi$-data was introduced in \cite{LS:1987} and is recalled in \cite[Section 4.6]{Kaletha:Regular}. It is not necessary for the reader to be familiar with $\chi$-data in what follows. For our purposes, one can think of $\chi$-data for $R(G,S^0)$ simply as a set of characters of subfields of $F$ which are indexed by the root system.
By \cite[Proposition 4.1.8]{Kaletha:SCPackets}, there is a one-to-one correspondence between the $\widehat{G}$-conjugacy classes of supercuspidal $L$-parameters for $G$ and isomorphism classes of supercuspidal $L$-packet data. Following the proof of \cite[Proposition 4.1.8]{Kaletha:SCPackets}, given $\varphi\in\Phi_{\mathrm{sc}}(G)$, one constructs a representative $(S,\hat{j},\chi_0,\theta)$ of the corresponding isomorphism class of supercuspidal $L$-packet data as follows:
\begin{itemize}
\item $S$: Let $\widehat{M} = \mathrm{Cent}(\varphi(P_F),\widehat{G})^\circ$, $\widehat{C} = \mathrm{Cent}(\varphi(I_F),\widehat{G})^\circ$ and $\widehat{S} = \mathrm{Cent}(\widehat{C},\widehat{M})$. By \cite[Lemma 5.2.2]{Kaletha:Regular} and \cite[Lemma 4.1.3]{Kaletha:SCPackets}, $\widehat{M}$ is a Levi subgroup of $\widehat{G}$, $\widehat{C}$ is a torus of $\widehat{G}$ and $\widehat{S}$ is a maximal torus of $\widehat{G}$. The action of $W_F$ (which extends to $\Gamma$) on $\widehat{S}$ is defined as $\Ad(\varphi(-))$. The torus $S$ is then the torus dual to $\widehat{S}$.
\item $\widehat{j}$: one simply takes $\widehat{j}$ to be the set inclusion $\widehat{S} \hookrightarrow \widehat{G}$.
\item $\chi_0$: one chooses a tame $\chi$-data $\chi_0$ for $S^0$, which extends to a $\chi$-data for $S$ by \cite[Remark 4.1.5]{Kaletha:SCPackets}.
\item $\theta$: Following \cite[Section 2.6]{LS:1987}, the $\chi$-data allows one to extend $\widehat{j}$ to an embedding $^Lj: {^LS} \rightarrow {^LG}$. The image of $^Lj$ contains the image of $\varphi$ so that we may write $\varphi = {^Lj}\circ\varphi_S$ for some $L$-parameter $\varphi_S$ of $S$. We let $\theta$ be the corresponding character of $S(F)$ via the LLC for tori.
\end{itemize}
We will say that $(S,\hat{j},\chi_0,\theta)$ is the \emph{supercuspidal $L$-packet datum associated to} $\varphi\in\Phi_{\mathrm{sc}}(G)$. The embedding $\hat{j}$ belongs to a $\Gamma$-stable $\widehat{G}$-conjugacy class $\widehat{J}$ of embeddings $\widehat{S} \rightarrow \widehat{G}$, and from $\widehat{J}$ we obtain a $\Gamma$-stable $G(\overline{F})$-conjugacy class $J$ of embeddings $S\rightarrow G$ (called \emph{admissible embeddings}) as per \cite[Section 5.1]{Kaletha:Regular}. We denote by $J(F)$ the set of $G(F)$-conjugacy classes of elements of $J$ which are defined over $F$. For each $j\in J(F)$, we consider the torus $jS = j(S)$ and let $j\theta = \theta\circ j^{-1}\cdot\epsilon_j$, where $\epsilon_j$ is the specific character from \cite[Section 4.1]{FKS}, described at the end of Section~\ref{sec:summaryKaletha}. Each pair $(jS,j\theta)$ is a tame $F$-non-singular elliptic pair from which we can construct a supercuspidal representation $\pi_{(jS,j\theta)}$ as described in Section \ref{sec:summaryKaletha}. The supercuspidal $L$-packet $\Pi_\varphi$ is then defined as
$$\Pi_\varphi \defeq \{[\pi_{(jS,j\theta)}]:j\in J(F)\},$$
where $j$ is identified with its $G(F)$-conjugacy class and $\pi_{(jS,j\theta)}$ is identified with its equivalence class.
Similarly, given $\varphiT\in \Phi_{\mathrm{sc}}(\tilde{G})$, we denote the associated supercuspidal $L$-packet datum by $(\tilde{S},\widehat{\tilde{j}},\tilde{\chi}_0,\tilde{\theta})$, and let $\tilde{J}(F)$ be the set of $\tilde{G}(F)$-conjugacy classes of admissible embeddings obtained from the $\Gamma$-stable $\widehat{\tilde{G}}$-conjugacy class of $\widehat{\tilde{j}}$ which are defined over $F$, so that
$$\Pi_{\varphiT} = \{[\pi_{(\tilde{j}\tilde{S},\tilde{j}\tilde{\theta})}]:\tilde{j}\in \tilde{J}(F)\}.$$
What we have denoted by $\Pi_\varphi$ is what Kaletha denotes as $\Pi_\varphi(G)$ in \cite{Kaletha:SCPackets}. Kaletha assigns the notation $\Pi_\varphi$ to his ``compound packet'' which encompasses rigid inner forms of $G$.
\subsection{Matching the Supercuspidal $L$-packet Data}\label{sec:packetData}
The first step in establishing a relationship between $\Pi_{\varphiT}$ and $\Pi_\varphi$, where $\varphiT\in\Phi_{\mathrm{sc}}(\tilde{G})$ and $\varphi={^L\eta}\circ\varphiT$, is to relate their corresponding parameterizing data. Recall that $\widehat{\eta}: \widehat{\tilde{G}} \rightarrow \widehat{G}$ is the induced map on the Langlands dual groups \cite[Sections 1 and 2]{Springer:1979}, and that $^L\eta : {^L\tilde{G}}\rightarrow {^LG}$ is defined by $^L\eta(g,w) = (\widehat{\eta}(g),w)$ for all $g\in\widehat{\tilde{G}}, w\in W_F$. The map $^L\eta$ also has abelian kernel and cokernel. The goal of this section is to prove the following theorem, which is summarized in Figure~\ref{fig:summaryData}.
\begin{theorem}\label{th:data}
Let $\varphiT\in \Phi_{\mathrm{sc}}(\tilde{G})$, $\varphi = {^L\eta}\circ\varphiT$ and $(\tilde{S},\widehat{\tilde{j}},\tilde{\chi}_0,\tilde{\theta})$ and $(S,\widehat{j},\chi_0,\theta)$ be the associated supercuspidal $L$-packet data. Let $J(F)$ and $\tilde{J}(F)$ be the sets of embeddings which parameterize $\Pi_\varphi$ and $\Pi_{\varphiT}$, respectively. Then $\widehat{\eta}(\widehat{\tilde{S}}) \subset \widehat{S}$, $\tilde{\chi}_0 = \chi_0$, $\widehat{\eta}\circ\widehat{\tilde{j}} = \widehat{j}\circ\widehat{\eta}$ and $\theta=\tilde{\theta}\circ\eta'$, where $\eta'$ is the dual map of $\widehat{\eta}|_{\widehat{\tilde{S}}}: \widehat{\tilde{S}}\rightarrow \widehat{S}$. Furthermore, for all $\tilde{j}\in \tilde{J}(F)$, there exists $j\in J(F)$ such that $\eta(jS) \subset \tilde{j}\tilde{S}$ and $j\theta = \tilde{j}\tilde{\theta}\circ\eta$.
\end{theorem}
\begin{figure}
\caption{Summary of the relationship between the supercuspidal $L$-packet data associated to $\varphiT$ and $\varphi$.}
\label{fig:summaryData}
\end{figure}
The proof of this theorem will be divided into three propositions. Proposition~\ref{prop:SSprime} will give the relationship between the tori, and consequently the embeddings and $\chi$-data. Proposition~\ref{prop:characters} will give the relationship between the characters and Proposition \ref{prop:embeddings} will provide the statement regarding the sets $J(F)$ and $\tilde{J}(F)$.
\begin{proposition}\label{prop:SSprime}
Let $\tilde{S}$ and $S$ be as in Theorem~\ref{th:data}. Then $\widehat{\eta}(\widehat{\tilde{S}}) \subset \widehat{S}$.
\end{proposition}
In order to prove this proposition, we will make use of the following result, which can be shown using \cite[Theorem 2.2]{Humphreys:Conjugacy} and the presentation of reductive groups in terms of generators from \cite[Theorem 26.3]{Humphreys:LAG}.
\begin{proposition}\label{lem:centralizers}
Let $T$ be a maximal torus of a connected reductive group $G'$, and assume $H$ is a subtorus of $T$. Then $T = \mathrm{Cent}(H,G')$ if and only if for every $\alpha\in R(G',T)$ there exists $h_\alpha\in H$ such that $\alpha(h_\alpha)\neq 1$.
\end{proposition}
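We do not reproduce the proof here, but the idea can be sketched as follows (our paraphrase of the cited results): since $H$ is a torus, its centralizer in $G'$ is connected, and by \cite[Theorem 2.2]{Humphreys:Conjugacy} the centralizer $\mathrm{Cent}(H,G') = \bigcap_{h\in H}\mathrm{Cent}(h,G')$ is generated by $T$ together with the root subgroups $U_\alpha$ for those $\alpha\in R(G',T)$ with $\alpha|_H = 1$. By the presentation of \cite[Theorem 26.3]{Humphreys:LAG}, this subgroup reduces to $T$ exactly when no such root subgroup occurs, that is, when every $\alpha\in R(G',T)$ admits some $h_\alpha\in H$ with $\alpha(h_\alpha)\neq 1$.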
\begin{proof}[Proof of Proposition~\ref{prop:SSprime}] Recall that $\widehat{S} = \mathrm{Cent}(\widehat{C},\widehat{M})$, where $\widehat{M} = \mathrm{Cent}(\varphi(P_F),\widehat{G})^\circ$ and $\widehat{C} = \mathrm{Cent}(\varphi(I_F),\widehat{G})^\circ$. Similarly, we have $\widehat{\tilde{S}} = \mathrm{Cent}(\widehat{\tilde{C}},\widehat{\tilde{M}})$, where $\widehat{\tilde{M}} = \mathrm{Cent}(\varphiT(P_F),\widehat{\tilde{G}})^\circ$ and $\widehat{\tilde{C}} = \mathrm{Cent}(\varphiT(I_F),\widehat{\tilde{G}})^\circ$.
We start by showing that $\widehat{\eta}(\widehat{\tilde{M}}) = \widehat{M}\cap \widehat{\eta}(\widehat{\tilde{G}})$. We have that $\varphiT(P_F)$ is contained in some maximal torus $\widehat{\tilde{\mathcal{T}}}$ of $\widehat{\tilde{G}}$ \cite[Lemma 4.1.3]{Kaletha:SCPackets}, and therefore $\varphi(P_F)$ is contained in the maximal torus $\widehat{\eta}(\widehat{\tilde{\mathcal{T}}})$ of $\widehat{\eta}(\widehat{\tilde{G}})$. Since $\widehat{\eta}(\widehat{\tilde{G}})$ contains $[\widehat{G},\widehat{G}]$, we have that $\widehat{\eta}(\widehat{\tilde{\mathcal{T}}}) = \widehat{\mathcal{T}} \cap \widehat{\eta}(\widehat{\tilde{G}})$ for some maximal torus $\widehat{\mathcal{T}}$ of $\widehat{G}$ \cite[Theorem 2.2]{arXivPaper}. By definition, we have
$$\widehat{\tilde{M}} = \mathrm{Cent}(\varphiT(P_F),\widehat{\tilde{G}})^\circ = \left(\underset{s\in\varphiT(P_F)}{\cap}\mathrm{Cent}(s,\widehat{\tilde{G}})\right)^\circ.$$ Using the description from \cite[Theorem 2.2]{Humphreys:Conjugacy} for each set $\mathrm{Cent}(s,\widehat{\tilde{G}}), s\in\varphiT(P_F)$, it follows that
$$\widehat{\tilde{M}} = \langle \widehat{\tilde{\mathcal{T}}}, U_\beta: \beta(s) = 1 \text{ for all } s\in\varphiT(P_F)\rangle,$$ where $U_\alpha$ denotes the root subgroup of $\widehat{\tilde{G}}$ associated to the root $\alpha\in R(\widehat{\tilde{G}},\widehat{\tilde{\mathcal{T}}})$.
Using a similar argument, and given that the root systems of $\widehat{\tilde{G}}$ and $\widehat{G}$ are canonically identified, we have that $$\widehat{M} = \langle \widehat{\mathcal{T}}, \widehat{\eta}(U_\beta) : \beta(s)=1 \text{ for all }s\in\varphi(P_F) \rangle.$$ We then deduce from \cite[Section 2B]{arXivPaper} that $\widehat{\eta}(\widehat{\tilde{M}}) = \widehat{M} \cap \widehat{\eta}(\widehat{\tilde{G}})$. Analogously, one has $\widehat{\eta}(\widehat{\tilde{C}}) = \widehat{C}\cap \widehat{\eta}(\widehat{\tilde{G}})$.
Since $\widehat{\tilde{S}} = \mathrm{Cent}(\widehat{\tilde{C}},\widehat{\tilde{M}})$, it follows from Proposition~\ref{lem:centralizers} that for every $\alpha\in R(\widehat{\tilde{M}},\widehat{\tilde{S}})$ there exists $c_\alpha\in\widehat{\tilde{C}}$ such that $\alpha(c_\alpha)\neq 1$. Applying $\widehat{\eta}$, we have that for all $\alpha\in R(\widehat{\eta}(\widehat{\tilde{M}}),\widehat{\eta}(\widehat{\tilde{S}}))$, there exists $\widehat{\eta}(c_\alpha) \in \widehat{\eta}(\widehat{\tilde{C}})$ such that $\alpha({\widehat{\eta}(c_\alpha)})\neq 1$. Reapplying Proposition~\ref{lem:centralizers}, we obtain $\widehat{\eta}(\widehat{\tilde{S}}) = \mathrm{Cent}(\widehat{\eta}(\widehat{\tilde{C}}), \widehat{\eta}(\widehat{\tilde{M}}))$. It follows that
$$\widehat{S} \cap \widehat{\eta}(\widehat{\tilde{G}}) = \mathrm{Cent}(\widehat{C},\widehat{M})\cap \widehat{\eta}(\widehat{\tilde{G}}) = \mathrm{Cent}(\widehat{C},\widehat{\eta}(\widehat{\tilde{M}})) \subset \mathrm{Cent}(\widehat{\eta}(\widehat{\tilde{C}}),\widehat{\eta}(\widehat{\tilde{M}})) = \widehat{\eta}(\widehat{\tilde{S}}).$$ Since both $\widehat{S}\cap \widehat{\eta}(\widehat{\tilde{G}})$ and $\widehat{\eta}(\widehat{\tilde{S}})$ are maximal tori of $\widehat{\eta}(\widehat{\tilde{G}})$, we conclude that they are equal, and therefore $\widehat{\eta}(\widehat{\tilde{S}})\subset \widehat{S}$.
\end{proof}
Having $\widehat{\eta}(\widehat{\tilde{S}}) \subset \widehat{S}$ implies that the root systems $R(\widehat{\tilde{G}},\widehat{\tilde{S}})$ and $R(\widehat{G},\widehat{S})$, together with their $\Gamma$-actions, are canonically identified, which allows us to choose $\chi_0 = \tilde{\chi}_0$, as the $\chi$-data are parameterized by roots. Furthermore, $\widehat{\tilde{j}}: \widehat{\tilde{S}} \rightarrow \widehat{\tilde{G}}$ and $\widehat{j}: \widehat{S} \rightarrow \widehat{G}$ are simply inclusions. This means we have the following commutative diagram:
\begin{center}
\begin{tikzcd}
\widehat{\tilde{S}} \arrow[rightarrow]{r}{\widehat{\tilde{j}}} \arrow[rightarrow]{d}{\widehat{\eta}}
& \widehat{\tilde{G}}\arrow[rightarrow]{d}{\widehat{\eta}} \\
\widehat{S} \arrow[rightarrow]{r}{\widehat{j}}
& \widehat{G}
\end{tikzcd}
\end{center}
\begin{proposition}\label{prop:characters}
Let $\theta, \tilde{\theta}$ and $\eta'$ be as in Theorem~\ref{th:data}. Then, $\theta = \tilde{\theta}\circ\eta'$.
\end{proposition}
\begin{proof}
Using the $\chi$-data as in \cite[Section 2.6]{LS:1987}, the above diagram extends into another commutative diagram:
\begin{center}
\begin{tikzcd}
\widehat{\tilde{S}}\rtimes W_F \arrow[rightarrow]{r}{^L\tilde{j}} \arrow[rightarrow]{d}{^L\eta}
& \widehat{\tilde{G}}\rtimes W_F\arrow[rightarrow]{d}{^L\eta} \\
\widehat{S}\rtimes W_F \arrow[rightarrow]{r}{^Lj}
& \widehat{G}\rtimes W_F
\end{tikzcd},
\end{center}
where $^L\eta(g,w) = (\widehat{\eta}(g),w)$ for all $g\in\widehat{\tilde{G}}, w\in W_F$.
Following \cite[Proposition 4.1.8]{Kaletha:SCPackets}, $\mathrm{Im}\,\varphiT\subset \mathrm{Im}\,{^L\tilde{j}}$ and $\mathrm{Im}\,\varphi\subset\mathrm{Im}\,{^Lj}$, meaning that $\varphiT = {^L\tilde{j}\circ\varphi_{\tilde{S}}}$ and $\varphi = {^Lj\circ\varphi_{S}}$ for some $L$-parameters $\varphi_{\tilde{S}}$ and $\varphi_S$ of $\tilde{S}$ and $S$, respectively. We claim that $\varphi_{S} = {^L\eta}\circ\varphi_{\tilde{S}}$. Indeed, by definition, $\varphi = {^L\eta}\circ\varphiT$, which implies $^Lj\circ\varphi_{S} = {^L\eta}\circ {^L\tilde{j}}\circ \varphi_{\tilde{S}}$. Using the commutative diagram above, it follows that $^Lj\circ\varphi_{S} = {^Lj}\circ {^L\eta}\circ\varphi_{\tilde{S}}$. Since $^Lj$ is an embedding, it is injective, and therefore $\varphi_{S} = {^L\eta}\circ\varphi_{\tilde{S}}$ as claimed.
By definition, $\tilde{\theta}$ and $\theta$ are the characters which correspond to $\varphi_{\tilde{S}}$ and $\varphi_{S}$, respectively, under the LLC for tori. Since $L$-packets of tori consist of singletons, we apply the functoriality property for the LLC of tori to conclude that $\theta = \tilde{\theta}\circ\eta'$.
\end{proof}
We now arrive at the final statement, which completes the proof of Theorem~\ref{th:data}.
\begin{proposition}\label{prop:embeddings}
For all $\tilde{j}\in \tilde{J}(F)$, there exists $j\in J(F)$ such that $\eta(jS) \subset \tilde{j}\tilde{S}$ and ${j\theta = \tilde{j}\tilde{\theta}\circ\eta}$.
\end{proposition}
\begin{proof}
Fix $\Gamma$-invariant pinnings $(T,B,\{X_\alpha\})$ of $G$ and $(\widehat{T},\widehat{B},\{Y_{\widehat{\alpha}}\})$ of $\widehat{G}$. Using \cite[Section 2B]{arXivPaper}, let $\tilde{T}$ be the maximal torus of $\tilde{G}$ which satisfies $\eta(T) = \tilde{T}\cap\eta(G)$, and $\tilde{B}$ be the Borel subgroup of $\tilde{G}$ which satisfies $\eta(B) = \tilde{B}\cap \eta(G)$. Then $(\tilde{T},\tilde{B},\{X_\alpha\})$ and $(\widehat{\tilde{T}},\widehat{\tilde{B}}, \{Y_{\widehat{\alpha}}\})$ are $\Gamma$-invariant pinnings of $\tilde{G}$ and $\widehat{\tilde{G}}$, respectively, and $\widehat{T}\cap\widehat{\eta}(\widehat{\tilde{G}}) = \widehat{\eta}(\widehat{\tilde{T}})$. Following \cite[Section 5.1]{Kaletha:Regular}, we may describe $\tilde{J}$ and $J$ as follows. Choose $\widehat{\tilde{i}}$ in $\widehat{\tilde{J}}$ such that $\widehat{\tilde{i}}(\widehat{\tilde{S}}) = \widehat{\tilde{T}}$ and define $\tilde{i}$ to be the inverse of the isomorphism $\tilde{T}\rightarrow \tilde{S}$ induced by $\widehat{\tilde{i}}$. We have that $\widehat{\tilde{i}} = \mathrm{Ad}(\widehat{g})\circ\widehat{\tilde{j}}$ for some $\widehat{g} \in \widehat{\tilde{G}}$. Let $\widehat{i}\in\widehat{J}$ be defined by $\widehat{i} = \mathrm{Ad}(\widehat{\eta}(\widehat{g}))\circ \widehat{j}$. Since $\widehat{\eta}\circ\widehat{\tilde{j}} = \widehat{j}\circ\widehat{\eta}$, we have the following commutative diagram.
\begin{equation}\label{diagram}
\begin{tikzcd}
\widehat{\tilde{S}} \arrow[rightarrow]{r}{\widehat{\tilde{i}}} \arrow[rightarrow]{d}{\widehat{\eta}}
& \widehat{\tilde{T}}\arrow[rightarrow]{d}{\widehat{\eta}} \\
\widehat{S} \arrow[rightarrow]{r}{\widehat{i}}
& \widehat{T}
\end{tikzcd}
\end{equation}
It follows that
$$\widehat{T}\cap \widehat{\eta}(\widehat{\tilde{G}}) = \widehat{\eta}(\widehat{\tilde{T}}) = \widehat{\eta}(\widehat{\tilde{i}}(\widehat{\tilde{S}})) = \widehat{i}(\widehat{\eta}(\widehat{\tilde{S}}))\subset \widehat{i}(\widehat{S}).$$
Since we know $\widehat{i}(\widehat{S})$ has to be a maximal torus of $\widehat{G}$, we conclude from \cite[Theorem 2.2]{arXivPaper} that $\widehat{i}(\widehat{S}) = \widehat{T}$. Therefore, $\tilde{J}$ corresponds to the $\tilde{G}(\overline{F})$-conjugacy class of $\tilde{i}$ whereas $J$ corresponds to the $G(\overline{F})$-conjugacy class of $i$.
Now, given $\tilde{j}\in \tilde{J}(F)$, we have that $\tilde{j} = \mathrm{Ad}(\tilde{g})\circ \tilde{i}$ for some $\tilde{g}\in \tilde{G}(\overline{F})$. Using the fact that $\tilde{G} = Z(\tilde{G})^\circ G_Z$, we may assume without loss of generality that $\tilde{g}\in G_Z(\overline{F})$. Let $g$ be any preimage of $\tilde{g}$ in $G(\overline{F})$ by $\eta$ and set $j = \mathrm{Ad}(g)\circ i$. By taking the dual of diagram~(\ref{diagram}), we have
\begin{center}
\begin{tikzcd}
{\tilde{S}} \arrow[rightarrow]{r}{{\tilde{i}}}
& {\tilde{T}} \\
{S} \arrow[rightarrow]{r}{{i}} \arrow[rightarrow]{u}{\eta'}
& {T}\arrow[rightarrow]{u}{\eta}
\end{tikzcd}
\end{center}
Here, $\eta'$ is the dual map of $\widehat{\eta}|_{\widehat{\tilde{S}}}: \widehat{\tilde{S}} \rightarrow \widehat{S}$. It follows that
$$\eta(jS) = \eta(g\cdot iS\cdot g^{-1}) = \eta(g)\cdot \eta(iS)\cdot \eta(g)^{-1} = \tilde{g}\cdot \tilde{i}(\eta'(S))\cdot \tilde{g}^{-1} = \tilde{j}(\eta'(S)) \subset \tilde{j}\tilde{S}.$$ Since $\tilde{j}$ and $\eta$ are defined over $F$, we have that $j\in J(F)$. Indeed, by \cite[Lemma 7.6]{Dillery}, which generalizes \cite[Corollary 2.2]{Kottwitz:1982} to arbitrary local fields, there exists $h\in G(\overline{F})$ such that $\Ad(h)\circ j$ is defined over $F$. Then $\Ad(\eta(h))\circ\tilde{j}$ is also defined over $F$, implying that $\sigma(\Ad(\eta(h))\circ\tilde{j})\sigma^{-1}=\Ad(\eta(h))\circ\tilde{j}$ for all $\sigma\in\Gamma$.
Equivalently, $\eta(h)^{-1}\sigma(\eta(h)) = \eta(h^{-1}\sigma(h))\in \tilde{j}\tilde{S}$ for all $\sigma\in\Gamma$. This implies $h^{-1}\sigma(h)\in jS$, and therefore $\Ad(h)\circ j = \Ad(\sigma(h)) \circ j$ for all $\sigma\in\Gamma$. Using the fact that $\Ad(h)\circ j$ is defined over $F$, we rewrite this last equality as $\sigma(\Ad(h)\circ j)\sigma^{-1} = \Ad(\sigma(h))\circ j$ for all $\sigma\in\Gamma$. It follows that $\Ad(\sigma(h))\circ \sigma j\sigma^{-1} = \Ad(\sigma(h))\circ j$, and therefore $\sigma j \sigma^{-1}=j$ for all $\sigma\in\Gamma$.
We now show that $j\theta = \tilde{j}\tilde{\theta}\circ \eta$. We have that $j\theta = \theta\circ j^{-1}\cdot \epsilon_j$ and $\tilde{j}\tilde{\theta} = \tilde{\theta}\circ \tilde{j}^{-1}\cdot \epsilon_{\tilde{j}}$, where $\epsilon_j$ and $\epsilon_{\tilde{j}}$ are the characters from \cite[Section 4.1]{FKS} which we briefly recalled at the end of Section~\ref{sec:summaryKaletha}. By what precedes, we have that $\eta(jS) \subset \tilde{j}\tilde{S}$ and $\tilde{\theta}\circ \tilde{j}^{-1}\circ\eta = \tilde{\theta}\circ\eta'\circ j^{-1} = \theta \circ j^{-1}$. We claim that $\epsilon_j = \epsilon_{\tilde{j}}\circ\eta$. Indeed, let $(G^0_j,\dots,G^d_j)$ and $(\tilde{G}^0_{\tilde{j}},\dots, \tilde{G}^d_{\tilde{j}})$ be the twisted Levi sequences obtained from $jS$ and $\tilde{j}\tilde{S}$, respectively. Recall that $\epsilon_j = \prod\limits_{i=1}^d\epsilon^{G^i_j/G^{i-1}_j}$, where $\epsilon^{G^i_j/G^{i-1}_j}$ is the quadratic character of $K^d_j$ that is trivial on $G^1_j(F)_{y_j,r_0/2}\cdots G^d_j(F)_{y_j,r_d/2}$ and whose restriction to $K^0_j$ is given by $\epsilon^{G^i_j/G^{i-1}_j}_{y_j}$ defined in \cite[Definition 4.1.10]{FKS}. The character $\epsilon^{G^i_j/G^{i-1}_j}_{y_j}$ is essentially just a composition of a sign character constructed from the adjoint groups of $G^i_j$ and $G^{i-1}_j$, and the adjoint map of $G^i_j$. The character $\epsilon_{\tilde{j}}$ is defined similarly. Given that $\eta(G^i_j) = \tilde{G}^i_{\tilde{j}}\cap\eta(G)$ (Lemma~\ref{lem:LeviSeq} and Theorem~\ref{th:HG}), it follows that $G^i_j$ and $\tilde{G}^i_{\tilde{j}}$ have the same adjoint group and that the adjoint map of $G^i_j$ is the composition of the adjoint map of $\tilde{G}^i_{\tilde{j}}$ with $\eta$. It follows that $\epsilon^{G^i_j/G^{i-1}_j}_{y_j} = \epsilon^{\tilde{G}^i_{\tilde{j}}/\tilde{G}^{i-1}_{\tilde{j}}}_{\tilde{y}_{\tilde{j}}} \circ \eta$ for all $1\leq i\leq d$ and therefore $\epsilon_j = \epsilon_{\tilde{j}}\circ\eta$. This completes the proof.
\end{proof}
\subsection{The Proof of Theorem~\ref{th:desideratum}}\label{sec:proofDesideratum}
\begin{proof}[Proof of Theorem~\ref{th:desideratum}]
Let $(\tilde{S},\widehat{\tilde{j}},\chi_0,\tilde{\theta})$ and $(S,\widehat{j},\chi_0,\theta)$ be the supercuspidal $L$-packet data associated to $\varphiT$ and $\varphi$, respectively. By construction of $\Pi_{\varphiT}$, we have that $\tilde{\pi} \subset \pi_{(\tilde{j}\tilde{S},\tilde{j}\tilde{\theta})}$ for some $\tilde{j}\in \tilde{J}(F)$. By Theorem~\ref{th:data}, there exists $j\in J(F)$ such that $\eta(jS) \subset \tilde{j}\tilde{S}$ and $j\theta = \tilde{j}\tilde{\theta}\circ\eta$. Let $(\tilde{G}^0_{\tilde{j}},\dots,\tilde{G}^d_{\tilde{j}})$ and $(G_{Z,j}^0,\dots,G_{Z,j}^d)$ be the twisted Levi sequences obtained from $\tilde{j}\tilde{S}$ and $\eta(jS)$, respectively, and let $[\tilde{y}_{\tilde{j}}]$ denote the vertex associated to $\tilde{j}\tilde{S}$. By Theorem~\ref{th:fullTheorem}, we have
$$\tilde{\pi}\circ\eta \subset \pi_{(\tilde{j}\tilde{S},\tilde{j}\tilde{\theta})}\circ\eta = m \underset{t\in T}{\oplus}\underset{\ell\in L}{\oplus}\underset{c\in C}{\oplus}{^{t\ell c}\varrho_{(jS,j\theta)}},$$
where $T$, $L$ and $C$ are sets of coset representatives of $G_Z(F)\setminus \tilde{G}(F)/\tilde{K}^d_{\tilde{j}}$, $G_{Z,j}^0(F)_{[\tilde{y}_{\tilde{j}}]}\setminus \tilde{G}^0_{\tilde{j}}(F)_{[\tilde{y}_{\tilde{j}}]}/\tilde{j}\tilde{S}(F)\tilde{G}^0_{\tilde{j}}(F)_{\tilde{y}_{\tilde{j}},0}$, respectively, $\varrho_{(jS,j\theta)}\in [\pi_{(jS,j\theta)}]$ and $m$ is the multiplicity of $\varrho_{(jS,j\theta)}$ in $\pi_{(\eta(jS),\tilde{j}\tilde{\theta}|_{\eta(jS)})}\circ \eta$. For all $t\in T, \ell\in L, c\in C$ we have that $^{t\ell c}\varrho_{(jS,j\theta)}$ is an irreducible constituent of $^{t\ell c}\pi_{(jS,j\theta)}$. Recall that from Remark~\ref{rem:definedOverF}, the representation $^{t\ell c}\pi_{(jS,j\theta)}$ actually refers to $^{\overline{h}}\pi_{(jS,j\theta)}$ for some $\overline{h} \in G(\overline{F})$ such that $\Ad(\overline{h})$ is defined over $F$. Following the steps of the construction from Section~\ref{sec:summaryKaletha}, one sees that $^{\overline{h}}\pi_{(jS,j\theta)} \simeq \pi_{({^{\overline{h}}jS}, {^{\overline{h}}j\theta})}$ as a consequence of \cite[Section 5.1.1]{HM:2008}. We also have that $^{\overline{h}}jS = (\mathrm{Ad}(\overline{h})\circ j)S$ and $^{\overline{h}}j\theta = (\mathrm{Ad}(\overline{h})\circ j)\theta$, where $\mathrm{Ad}(\overline{h})\circ j \in J(F)$, and therefore $^{\overline{h}}\varrho_{(jS,j\theta)}\in [^{\overline{h}}\pi_{(jS,j\theta)}]$ belongs to $\Pi_{\varphi}$ by definition. In particular, all irreducible components of $\tilde{\pi}\circ\eta$ belong to $\Pi_\varphi$.
\end{proof}
\textsc{School of Mathematics and Statistics, Carleton University,
Ottawa, ON, Canada K1S 5B6}
\textit{E-mail addresses:} \href{mailto:[email protected]}{[email protected]}, \href{mailto:[email protected]}{[email protected]}
\end{document}
\begin{document}
\title{Lectures on generalized geometry}
\author{Nigel Hitchin\\[5pt]}
\maketitle
\centerline{{\it Subject classification}: {Primary 53D18}}
\subsection*{Preface}
These notes are based on six lectures given in March 2010 at the Institute of Mathematical Sciences in the Chinese University of Hong Kong as part of the JCAS Lecture Series. They were mainly targeted at graduate students. They are not intended to be a comprehensive treatment of the subject of generalized geometry, but instead I have attempted to present the general features and to focus on a few topics which I have found particularly interesting and which I hope the reader will too. The relatively new material consists of an account of Goto's existence theorem for generalized K\"ahler structures, examples of generalized holomorphic bundles and the B-field action on their moduli spaces.
Since the publication of the first paper on the subject \cite{NJH1}, there have been many articles written within both the mathematical and theoretical physics communities, and the reader should be warned that different authors have different conventions (or occasionally this author too!). For other accounts of generalized geometry, I should direct the reader to the papers and surveys (e.g. \cite{Cav}, \cite{MG1}) of my former students Marco Gualtieri and Gil Cavalcanti who have developed many aspects of the theory.
I would like to thank the IMS for its hospitality and for its invitation to give these lectures, and Marco Gualtieri for useful conversations during the preparation of these notes.
\tableofcontents
\section{The Courant bracket, B-fields and metrics}
\subsection{Linear algebra preliminaries}
Generalized geometry is based on two premises -- the first is to replace the tangent bundle $T$ of a manifold $M$ by $T\oplus T^*$, and the second to replace the Lie bracket on sections of $T$ by the Courant bracket. The idea then is to use one's experience of differential geometry and by analogy to
define and develop the generalized version. Depending on the object, this may or may not be a fertile process, but the intriguing fact is that, by drawing on the intuition of a mathematician, one may often obtain this way a topic which is also of interest to the theoretical physicist.
We begin with the natural linear algebra structure of the generalized tangent bundle $T\oplus T^*$.
If $X$ denotes a tangent vector and $\xi$ a cotangent vector then we write $X+\xi$ as a typical element of a fibre $(T\oplus T^*)_x$. There is a natural indefinite inner product defined by
$$(X+\xi,X+\xi)=i_X\xi\quad (= \langle \xi,X\rangle =\xi(X))$$
using the interior product $i_X$, or equivalently the natural pairing $\langle \xi,X\rangle$ or the evaluation $\xi(X)$ of $\xi\in T^*_x$ on $X$. This is to be thought of as replacing the notion of a Riemannian metric, even though on an $n$-manifold it has signature $(n,n)$.
In block-diagonal form, a skew-adjoint transformation of $T\oplus T^*$ at a point can be written as
$$\pmatrix {A & \beta\cr
B & -A^t}.$$
Here $A$ is just an endomorphism of $T$ and $B:T\rightarrow T^*$ to be skew-adjoint must satisfy
$$(B(X_1+\xi_1),X_2+\xi_2)= (B(X_1),X_2)=-(B(X_2),X_1)$$
so that $B$ is a skew-symmetric form, or equivalently $B\in \Lambda^2 T^*$, and its action is
$X+\xi\mapsto i_XB.$
Since
$$\pmatrix {0 & 0\cr
B & 0}^2= 0$$
exponentiating gives
\begin{equation}
X+\xi\mapsto X+\xi+i_XB
\label{B}
\end{equation}
This {\it B-field action} will be fundamental, yielding extra transformations in generalized geometries. It represents a breaking of symmetry in some sense since the bivector $\beta\in \Lambda^2T$ plays a lesser role.
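For example, on $\mathbf{R}^2$ with $B=dx\wedge dy$, the transformation (\ref{B}) sends $\partial/\partial x$ to $\partial/\partial x+dy$ and $\partial/\partial y$ to $\partial/\partial y-dx$, while fixing every cotangent vector. Note also that (\ref{B}) preserves the inner product, since $(X+\xi+i_XB,X+\xi+i_XB)=i_X\xi+i_Xi_XB=i_X\xi$.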
\subsection{The Courant bracket}
We described above the pointwise structure of the generalized tangent bundle. Now we consider the substitute for the Lie bracket $[X,Y]$ of two vector fields. This is the {\it Courant bracket} which appears in the literature in two different formats -- here we adopt the original skew-symmetric one:
\begin{definition} \label{cou} The Courant bracket of two sections $X+\xi,Y+\eta$ of $T\oplus T^*$ is defined by
$$[X+\xi,Y+\eta]=[X,Y]+{\mathcal L}_X\eta-{\mathcal L}_Y\xi-\frac{1}{2}d(i_X\eta-i_Y\xi).$$
\end{definition}
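When $\xi=\eta=0$ the Courant bracket is just the Lie bracket of vector fields. As a simple illustration of the extra terms, take $M=\mathbf{R}^2$ with coordinates $x,y$ and let $u=\partial/\partial x$, $v=\partial/\partial y+x\,dy$: then $[\partial/\partial x,\partial/\partial y]=0$, ${\mathcal L}_{\partial/\partial x}(x\,dy)=dy$ and both interior products in the last term vanish, so $[u,v]=dy$.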
The Courant bracket has the important property that it commutes with the B-field action of a {\it closed} 2-form $B$:
\begin{prp} \label{Baction}
Let $B$ be a closed 2-form, then
$$[X+\xi+i_XB,Y+\eta+i_YB]=[X+\xi,Y+\eta]+i_{[X,Y]}B$$
\end{prp}
\begin{prf} We shall make use of the Cartan formula for the Lie derivative of a differential form $\alpha$: ${\mathcal L}_X\alpha=d(i_X\alpha)+i_Xd\alpha$. First expand
$$[X+\xi+i_XB,Y+\eta+i_YB]=[X+\xi,Y+\eta]+{\mathcal L}_Xi_YB-{\mathcal L}_Yi_XB-\frac{1}{2}d(i_Xi_YB-i_Yi_XB).$$
The last two terms give
$d(i_Yi_XB)={\mathcal L}_Yi_XB-i_Yd(i_XB)$ by the Cartan formula, and so yield
\begin{eqnarray*}
[X+\xi+i_XB,Y+\eta+i_YB]&=&[X+\xi,Y+\eta]+{\mathcal L}_Xi_YB-i_Yd(i_XB)\\
&=& [X+\xi,Y+\eta]+i_{[X,Y]}B+i_Y{\mathcal L}_XB-i_Yd(i_XB)\\
&=& [X+\xi,Y+\eta]+i_{[X,Y]}B+i_Yi_XdB
\end{eqnarray*}
by the Cartan formula again. So if $dB=0$ the bracket is preserved.
\end{prf}
The inner product and Courant bracket naturally defined above are clearly invariant under the induced action of a diffeomorphism of the manifold $M$. However, we now see that a global closed differential 2-form $B$ will also act, preserving both the inner product and bracket. This means an overall action of the semi-direct product of closed 2-forms with diffeomorphisms
$$\Omega^2(M)_{cl}\rtimes \mathop{\rm Diff}\nolimits(M).$$
This is a key feature of generalized geometry -- we have to consider B-field transformations as well as diffeomorphisms.
The Lie algebra of the group $\Omega^2(M)_{cl}\rtimes \mathop{\rm Diff}\nolimits(M)$ consists of sections $X+B$ of $T\oplus \Lambda^2T^*$ where $B$ is closed. If we take $B=-d\xi$, then the Lie algebra action on $Y+\eta$ is
$$(X-d\xi)(Y+\eta)={\mathcal L}_X(Y+\eta)-i_Yd\xi=[X,Y]+{\mathcal L}_X\eta-{\mathcal L}_Y\xi+d(i_Y\xi).$$
It is then easy to see that we can reinterpret the Courant bracket as the skew-symmetrization of this:
$$[X+\xi,Y+\eta]=\frac{1}{2}((X-d\xi)(Y+\eta)-(Y-d\eta)(X+\xi)).$$
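(To check this, note that by the Cartan formula $\frac{1}{2}({\mathcal L}_X\eta+i_Xd\eta)={\mathcal L}_X\eta-\frac{1}{2}d(i_X\eta)$, and similarly for the terms involving $Y$ and $\xi$.)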
However, although the Courant bracket is derived this way from a Lie algebra action, it is not itself a bracket of any Lie algebra -- the Jacobi identity fails. More precisely we have (writing $u=X+\xi, v=Y+\eta, w=Z+\zeta$)
\begin{prp}\label{jac}
$$[[u,v],w]+[[v,w],u]+[[w,u],v]=\frac{1}{3}d(([u,v],w)+([v,w],u)+([w,u],v))$$
\end{prp}
\begin{prf} If $u=X+\xi$, let $\tilde u=X-d\xi$ be the corresponding element in the Lie algebra of $\Omega^2(M)_{cl}\rtimes \mathop{\rm Diff}\nolimits(M).$ We shall temporarily write $uv$ for the action of $\tilde u$ on $v$ (this is also called the Dorfman ``bracket" of $u$ and $v$) so that the Courant bracket is $(uv-vu)/2$. We first show that
\begin{equation}
u(vw)=(uv)w+v(uw).
\label{deriv}
\end{equation}
To see this note that $u(vw)-v(uw)=\tilde u\tilde v(w)-\tilde v\tilde u(w)=[\tilde u,\tilde v](w)$ since $\tilde u,\tilde v$ are Lie algebra actions, and the bracket here is just the commutator. But $(uv)w$ is the Lie algebra action of
$uv=\tilde u v= [X,Y]+{\mathcal L}_X\eta-i_Yd\xi$
which acts as $[X,Y]-d({\mathcal L}_X\eta-i_Yd\xi)=[X,Y]-{\mathcal L}_Xd\eta+{\mathcal L}_Yd\xi$ using the Cartan formula and $d^2=0$. This however is just the bracket $[\tilde u,\tilde v]$ in the Lie algebra of
$\Omega^2(M)_{cl}\rtimes \mathop{\rm Diff}\nolimits(M).$
To prove the Proposition we note now that the {\it symmetrization} $(uv+vu)/2$ is
$$\frac{1}{2}({\mathcal L}_X\eta-i_Yd\xi+{\mathcal L}_Y\xi-i_Xd\eta)=\frac{1}{2}d(i_X\eta+i_Y\xi)=d(u,v)$$
while we have already seen that the skew-symmetrization $(uv-vu)/2$ is equal to $[u,v]$. So we rewrite the left hand side of the expression in the Proposition as one quarter of
\begin{eqnarray*}
(uv-vu)w - w(uv-vu)\\
+(vw-wv)u- u(vw-wv)\\
+(wu-uw)v-v(wu-uw)
\end{eqnarray*}
Using (\ref{deriv}) we sum these to get $(-1)$ times the sum $r$ of the right-hand column. If $\ell$ is the sum of the left-hand column then this means $\ell+r=-r$. But then $\ell-r=3(\ell+r)$ is the sum of terms like $(uv-vu)w+w(uv-vu)=4d([u,v],w)$. The formula follows directly.
\end{prf}
There are two more characteristic properties of the Courant bracket which are easily verified:
\begin{equation}
[u,fv]=f[u,v]+(Xf)v-(u,v)df
\label{cour1}
\end{equation}
where $f$ is a smooth function, and as usual $u=X+\xi$, and
\begin{equation}
X(v,w)=([u,v]+d(u,v),w)+(v,[u,w]+d(u,w)).
\label{cour2}
\end{equation}
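It is worth seeing where the last term in (\ref{cour1}) comes from; here $(u,v)=\frac{1}{2}(i_X\eta+i_Y\xi)$ denotes the bilinear form obtained by polarizing the inner product. Expanding Definition \ref{cou},
\begin{eqnarray*}
[X+\xi,f(Y+\eta)]&=&f[X,Y]+(Xf)Y+f{\mathcal L}_X\eta+(Xf)\eta-f{\mathcal L}_Y\xi-(i_Y\xi)df\\
&&-\,\frac{f}{2}\,d(i_X\eta-i_Y\xi)-\frac{1}{2}(i_X\eta-i_Y\xi)df\\
&=&f[u,v]+(Xf)v-(u,v)df.
\end{eqnarray*}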
\subsection{Riemannian geometry}
The fact that we introduced the inner product on $T\oplus T^*$ as the analogue of the Riemannian metric does not mean that Riemannian geometry is excluded from this area -- we just have to treat it in a different way. We describe a metric $g$ as a map $g:T\rightarrow T^*$ and consider its graph $V\subset T\oplus T^*$. This is the set of pairs $(X,gX)$ or in local coordinates (and the summation convention, which we shall use throughout) the span of
$$\frac{\partial}{\partial x_i}+g_{ij}dx_j.$$
The subbundle $V$ has an orthogonal complement $V^{\perp}$ consisting of elements of the form $X-gX$. The inner product on $T\oplus T^*$ restricted to $X+gX\in V$ is $i_XgX=g(X,X)$ which is positive definite and restricted to $V^{\perp}$ we get the negative definite $-g(X,X)$. So $T\oplus T^*$ with its signature $(n,n)$ inner product can also be written as the orthogonal sum $V\oplus V^{\perp}$. Equivalently we have reduced the structure group of $T\oplus T^*$ from $SO(n,n)$ to $S(O(n)\times O(n))$.
The nondegeneracy of $g$ means that $g:T\rightarrow T^*$ is an isomorphism so that the projection from $V\subset T\oplus T^*$ to either factor is an isomorphism. This means we can lift vector fields or 1-forms to sections of $V$. Let us call $X^+$ the lift of a vector field $X$ to $V$ and $X^-$ its lift to $V^{\perp}$, i.e. $X^{\pm}=X\pm gX$. We also have the orthogonal projection $\pi_V:T\oplus T^*\rightarrow V$ and then
$$\pi_V(X)=\pi_V\frac{1}{2}(X+gX + X-gX)=\frac{1}{2}X^+.$$
We can use these lifts and projections together with the Courant bracket to give a convenient way of working out the Levi-Civita connection of $g$.
First we show:
\begin{prp} \label{Vconnect} Let $v$ be a section of $V$ and $X$ a vector field, then
$$\nabla_Xv=\pi_V[X^-,v]$$
defines a connection on $V$ which preserves the inner product induced from $T\oplus T^*$.
\end{prp}
\begin{prf} Write $v=Y+\eta$, then observe that
$$\nabla_{fX}v=\pi_V[fX^-,v]=\pi_V(f[X^-,v]-(Yf)X^- +(v,X^-)df)$$
using Property (\ref{cour1}) of the Courant bracket. But $V$ and $V^{\perp}$ are orthogonal so $\pi_VX^-=0=(v,X^-)$ and hence $\nabla_{fX}v=f\nabla_Xv$.
Now using the same property we have
$$\nabla_{X}fv=\pi_V(f[X^-,v]+(Xf)v -(v,X^-)df)=f\nabla_{X}v+(Xf)v$$
since $(v,X^-)=0$ and $\pi_Vv=v$. These two properties define a connection.
To show compatibility with the inner product take $v,w$ sections of $V$, then
$$(\nabla_Xv,w)+(v,\nabla_Xw)=(\pi_V[X^-,v],w)+(v,\pi_V[X^-,w])=([X^-,v],w)+(v,[X^-,w])$$
since $\pi_V$ is the orthogonal projection onto $V$ and $v,w$ are sections of $V$. Now use Property (\ref{cour2}) of the Courant bracket to see that
$$X(v,w)=([X^-,v]+d(X^-,v),w)+(v,[X^-,w]+d(X^-,w)).$$
But $(X^-,v)=0=(X^-,w)$ and we get $X(v,w)=(\nabla_Xv,w)+(v,\nabla_Xw)$ as required.
\end{prf}
Using the isomorphism of $V$ with $T$ (or $T^*$) we can use this directly to find a connection on the tangent bundle. Concretely, we take coordinates $x_i$ and then from the definition of the connection, the covariant derivative of ${\partial}/{\partial x_j}^+$ in the direction ${\partial}/{\partial x_i}$ is
$$\pi_V\left[\frac{\partial}{\partial x_i}-g_{ik}dx_k, \frac{\partial}{\partial x_j}+g_{j\ell}dx_{\ell}\right].$$
Expanding the Courant bracket gives
$$\frac{\partial g_{j\ell}}{\partial x_i}dx_{\ell}-\frac{\partial (-g_{ik})}{\partial x_j}dx_k-\frac{1}{2}d(g_{ji}-(-g_{ij}))=\frac{\partial g_{j\ell}}{\partial x_i}dx_{\ell}+\frac{\partial g_{ik}}{\partial x_j}dx_k-\frac{\partial g_{ij}}{\partial x_k}dx_{k}.$$
Projecting on $V$ we get
$$\frac{1}{2}(dx_k+g^{k\ell}\frac{\partial}{\partial x_{\ell}})\left(\frac{\partial g_{jk}}{\partial x_i}+\frac{\partial g_{ik}}{\partial x_j}-\frac{\partial g_{ij}}{\partial x_k}\right)=\frac{1}{2}g^{k\ell}\left(\frac{\partial g_{jk}}{\partial x_i}+\frac{\partial g_{ik}}{\partial x_j}-\frac{\partial g_{ij}}{\partial x_k}\right){\frac{\partial}{\partial x_{\ell}}}^{\!\!+}$$
which is the usual formula for the Christoffel symbols of the Levi-Civita connection.
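For a metric with constant coefficients $g_{ij}$, for instance, the Courant bracket above reduces to $-\frac{1}{2}d(g_{ji}+g_{ij})=0$, recovering the fact that the Christoffel symbols vanish in flat coordinates.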
\begin{ex}
Here is another computation -- the so-called Bianchi IX type metrics (using the terminology for example in \cite{GP}). These are four-dimensional metrics with an $SU(2)$ action with generic orbit three-dimensional and in the diagonal form
$$g=(abc)^2dt^2+a^2\sigma_1^2+b^2\sigma_2^2+c^2\sigma_3^2$$
where $a,b,c$ are functions of $t$ and $\sigma_i$ are basic left-invariant forms on the group, where $d\sigma_1=-\sigma_2\wedge \sigma_3$ etc. If $X_i$ are the dual vector fields then $[X_1,X_2]=X_3$
and ${\mathcal L}_{X_1}\sigma_2=\sigma_3$ etc.
Because of the even-handed treatment of forms and vector fields in generalized geometry, it is as easy to work out covariant derivatives of 1-forms as vector fields. Here we shall find the connection matrix for the orthonormal basis of 1-forms $e_0=abc\, dt, e_1=a\sigma_1,e_2= b\sigma_2, e_3=c\sigma_3$. By symmetry it is enough to work out derivatives with respect to $X_1$ and $\partial/\partial t$. First we take $X_1$, so that $X_1^-=X_1-a^2\sigma_1$.
For the covariant derivative of $e_0$ consider the Courant bracket
$$[X_1-a^2\sigma_1,\frac{\partial}{\partial t} + (abc)^2dt]=2aa'\sigma_1.$$
But $e^+_0=(abc)^{-1}(\partial/\partial t+(abc)^2dt)$ and using Property (\ref{cour1}) of the bracket and the orthogonality of $X_1^-,e_0^+$ we have
$$[X_1^-,e_0^+]=\frac{2a'}{bc}\sigma_1=\frac{2a'}{abc}e_1$$
Projecting on $V$ and using $\pi_V e_1=e^+_1/2$, we have
\begin{equation}
\nabla_{X_1}e_0=\frac{a'}{abc}e_1.
\label{10}
\end{equation}
For the 1-form $e_1$ note that, since ${\mathcal L}_{X_1}\sigma_1=0$
$$[X_1^-,X_1^+]=[X_1-a^2\sigma_1, X_1+a^2\sigma_1]=-\frac{1}{2}d(a^2+a^2)=-2aa'dt.$$
But $e_1=a^{-1}X_1^+$, and again using Property (\ref{cour1}) and the orthogonality of $X_1^-,X_1^+$ we have
$[X_1^-,e_1^+]=-2a'dt$. Projecting onto $V$ gives $\pi_V(-2a' dt)= -(a'dt+a'(abc)^{-2}\partial/\partial t).$ So
\begin{equation}
\nabla_{X_1}e_1=-\frac{a'}{abc}e_0.
\label{11}
\end{equation}
(Note that with (\ref{10}) this checks with the fact that the connection preserves the metric.)
\vskip .25cm
For $e_2^+=b^{-1}X_2^+$ we have
\begin{eqnarray*}
[X_1^-,X_2^+]&=&[X_1-a^2\sigma_1,X_2+b^2\sigma_2]=[X_1,X_2]+{\mathcal L}_{X_1}b^2\sigma_2+{\mathcal L}_{X_2}a^2\sigma_1-0\\
&=&X_3+(b^2-a^2)\sigma_3
\end{eqnarray*}
and so
$[X_1^-,e_2^+]=b^{-1}(X_3+(b^2-a^2)\sigma_3)$.
Projecting onto $V$,
$$\pi_V[X_1^-,e_2^+]=\frac{1}{2}b^{-1}(X_3+c^2\sigma_3)+\frac{1}{2}b^{-1}(b^2-a^2)(\sigma_3+c^{-2}X_3)$$
so that
\begin{equation}
\nabla_{X_1}e_2=\frac{1}{2bc}(c^2+b^2-a^2)e_3.
\label{12}
\end{equation}
Now we covariantly differentiate with respect to $t$.
\begin{eqnarray*}
\left[\frac{\partial}{\partial t}-(abc)^2dt, e_0^+\right]&=&\left[\frac{\partial}{\partial t}-(abc)^2dt, (abc)^{-1}\frac{\partial}{\partial t}+(abc) dt\right]\\
&=&-\frac{(abc)'}{(abc)^2}\frac{\partial}{\partial t}+(abc)'dt +(abc)'dt-\frac{1}{2}d(2(abc))\\
&=&\frac{(abc)'}{(abc)^2}\left(-\frac{\partial}{\partial t}+(abc)^2dt\right)
\end{eqnarray*}
so projecting onto $V$ gives
\begin{equation}
\nabla_{\!\frac{\partial}{\partial t}}e_0=0.
\label{00}
\end{equation}
and finally (similar to the first case above)
$$\left[\frac{\partial}{\partial t}-(abc)^2dt, X_1+a^2\sigma_1\right]=2aa'\sigma_1-0-0$$
so that
$$\left[\frac{\partial}{\partial t}-(abc)^2dt, e_1^+\right]=\left[\frac{\partial}{\partial t}-(abc)^2dt, a^{-1}(X_1+a^2\sigma_1)\right]=2a'\sigma_1-\frac{a'}{a^2}(X_1+a^2\sigma_1)$$
and projecting onto $V$ the two terms cancel: the second term already lies in $V$ and equals $-\frac{a'}{a}e_1^+$, while $\pi_V(2a'\sigma_1)=\frac{a'}{a}e_1^+$. Hence
\begin{equation}
\nabla_{\!\frac{\partial}{\partial t}}e_1=0.
\label{01}
\end{equation}
(The $e_1$-component had to vanish anyway, since the connection preserves the inner product and $(e_1^+,e_1^+)=1$.)
\end{ex}
The point to make here is that the somewhat mysterious Courant bracket can be used as a tool for automatically computing covariant derivatives in ordinary Riemannian geometry.
\section{Spinors, twists and skew torsion}
\subsection{Spinors}\label{spin}
In generalized geometry, the role of differential forms is changed. They become a {\it Clifford module} for the Clifford algebra generated by $T\oplus T^*$ with its indefinite inner product. Recall that, given a vector space $W$ with an inner product $(\,\,,\,)$ the Clifford algebra $\mathrm{Cliff}(W)$ is generated by $1$ and $W$ with the relations $x^2=(x,x)1$ (in positive definite signature the usual sign is $-1$ but this is the most convenient for our case).
Consider an exterior differential form $\varphi\in \Lambda^*T^*$ and define the action of $X+\xi\in T\oplus T^*$ on $\varphi$ by
$$(X+\xi)\cdot \varphi=i_X\varphi+\xi\wedge \varphi$$
then
$$(X+\xi)^2\cdot \varphi=i_X(\xi\wedge\varphi)+\xi\wedge i_X\varphi=i_X\xi\varphi=(X+\xi,X+\xi)\varphi$$
and so $\Lambda^*T^*$ is a module for the Clifford algebra.
We have already remarked that we can regard $T\oplus T^*$ as having structure group $SO(n,n)$ and if the manifold is oriented this lifts to $Spin(n,n)$. The exterior algebra is almost the basic spin representation of $Spin(n,n)$, but not quite. The Clifford algebra has an anti-involution -- any element is a sum of products $x_1x_2\dots x_k$ of generators $x_i\in W$ and
$$x_1x_2\dots x_k\mapsto x_kx_{k-1}\dots x_1$$
defines the anti-involution. It represents a ``transpose" map $a\mapsto a^t$ arising from an invariant bilinear form on the basic spin module. In our case the spin representation is strictly speaking
$$S=\Lambda^*T^*\otimes (\Lambda^nT^*)^{-1/2}.$$
Another way of saying this is that there is an invariant bilinear form on $\Lambda^*T^*$ with values in the line bundle $\Lambda^nT^*$. Because of its appearance in another context it is known as the Mukai pairing. Concretely, given $\varphi_1,\varphi_2\in \Lambda^*T^*$, the pairing is
$$\langle \varphi_1,\varphi_2\rangle = \sum_j (-1)^j(\varphi_1^{2j}\wedge \varphi_2^{n-2j}+\varphi_1^{2j+1}\wedge \varphi_2^{n-2j-1})$$
where the superscript $p$ denotes the $p$-form component of the form.
The Lie algebra of the spin group (which is the Lie algebra of $SO(n,n)$) sits inside the Clifford algebra as the subspace $\{a\in \mathrm{Cliff}(W): [a,W]\subseteq W\, {\mathrm {and}} \,\, a=-a^t\}$ where the commutator is taken in the Clifford algebra. Consider a 2-form $B\in \Lambda^2T^*$. The Clifford action of a 1-form $\xi$ is exterior multiplication, so $B\cdot \varphi=\sum b_{ij}\xi_i\cdot\xi_j\cdot\varphi=\sum b_{ij}\xi_i\wedge\xi_j\wedge\varphi$ defines an action on spinors. Moreover, being skew-symmetric in $\xi_i$ it satisfies $B^t=-B$. Now take $X+\xi\in W=T\oplus T^*$ and the commutator $[B,X+\xi]$ in the Clifford algebra:
$$B\wedge(i_X+\xi\wedge)\varphi-(i_X+\xi\wedge)(B\wedge\varphi)=B\wedge i_X\varphi-i_X(B\wedge\varphi)=-i_XB\wedge \varphi.$$
So this action preserves $T\oplus T^*$ and so defines an element in the Lie algebra of $SO(n,n)$. But
the Lie algebra action of $B\in \Lambda^2T^*$ on $T\oplus T^*$ was $X+\xi\mapsto i_XB$, so we see from the above formula that the action of a B-field on spinors is given by the exponentiation of $-B$ in the exterior algebra :
$$\varphi\mapsto e^{-B\wedge}\varphi.$$
One may easily check, for example, that the Mukai pairing is invariant under the action: $\langle e^{-B}\varphi_1,e^{-B}\varphi_2\rangle=\langle\varphi_1,\varphi_2\rangle.$
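In dimension $2$, for example, the pairing is $\langle \varphi_1,\varphi_2\rangle=\varphi_1^0\varphi_2^2+\varphi_1^1\wedge\varphi_2^1-\varphi_1^2\varphi_2^0$, and replacing each $\varphi_i$ by $e^{-B}\varphi_i=\varphi_i^0+\varphi_i^1+(\varphi_i^2-\varphi_i^0B)$ changes it by $-\varphi_1^0\varphi_2^0B+\varphi_1^0\varphi_2^0B=0$.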
This action, together with the natural diffeomorphism action on forms, gives a combined action of the group $\Omega^2(M)_{cl}\rtimes \mathop{\rm Diff}\nolimits(M)$ and a corresponding action of its Lie algebra. We earlier considered the map $u\mapsto \tilde u$ given by $X+\xi\mapsto X-d\xi$ and on any bundle associated to $T\oplus T^*$ by a representation of $SO(n,n)$ we have an action of $\tilde u$. We regard this now as a ``Lie derivative" ${\mathbf L}_u$ in the direction of a section $u$ of $T\oplus T^*$. In the spin representation there is a ``Cartan formula" for this:
\begin{prp} The Lie derivative of a form $\varphi$ by a section $u$ of $T\oplus T^*$ is given by $${\mathbf L}_u\varphi=d(u\cdot \varphi)+u\cdot d\varphi.$$
\end{prp}
\begin{prf} $$d(X+\xi)\cdot \varphi+(X+\xi)\cdot d\varphi=di_X\varphi+d(\xi\wedge \varphi)+i_Xd\varphi+\xi\wedge d\varphi={\mathcal L}_X\varphi+d\xi\wedge\varphi$$
using the usual Cartan formula and the fact that $B=-d\xi$ acts as $-B=d\xi$.
\end{prf}
In fact replacing the exterior product by the Clifford product is a common feature of generalized geometry whenever we deal with forms.
The Lie derivative acting on sections of $T\oplus T^*$ is the Lie algebra action we observed in the first lecture so
\begin{equation}
{\mathbf L}_uv-{\mathbf L}_vu=2[u,v]
\label{Lie}
\end{equation}
where $[u,v]$ is the Courant bracket.
\subsection{Twisted structures}\label{twist}
We now want to consider a twisted version of $T\oplus T^*$. Suppose we have a nice covering of the manifold $M$ by open sets $U_{\alpha}$ and we give ourselves a closed 2-form $B_{\alpha\beta}=-B_{\beta\alpha}$ on each two-fold intersection $U_{\alpha}\cap U_{\beta}$. We can use the action of $B_{\alpha\beta}$ to identify $T\oplus T^*$ on $U_{\alpha}$ with $T\oplus T^*$ on $U_{\beta}$ over the intersection. This will be compatible over threefold intersections if
\begin{equation}
B_{\alpha\beta}+B_{\beta\gamma}+B_{\gamma\alpha}=0
\label{cocycle}
\end{equation}
on $U_{\alpha}\cap U_{\beta}\cap U_{\gamma}$.
We have seen in the first lecture that the action of a closed 2-form on $T\oplus T^*$ preserves both the inner product and the Courant bracket, so by the above identifications this way we construct a rank $2n$ vector bundle $E$ over $M$ with an inner product and a bracket operation on sections. And since the B-field action is trivial on $T^*\subset T\oplus T^*$, the vector bundle is an extension:
$$0\rightarrow T^*\rightarrow E\stackrel{\pi}\rightarrow T\rightarrow 0.$$
Such an object is called an {\it exact Courant algebroid}. It can be abstractly characterized by the Properties (\ref{cour1}) and (\ref{cour2}) of the Courant bracket, where the vector field $X$ is $\pi u$, together with the Jacobi-type formula in Proposition \ref{jac}.
The relation (\ref{cocycle}) says that we have a 1-cocycle for the sheaf $\underline\Omega^2_{cl}$ of closed 2-forms on $M$. There is an exact sequence of sheaves
$$0\rightarrow \underline\Omega^2_{cl}\rightarrow \underline\Omega^2\stackrel{d}\rightarrow \underline\Omega^3_{cl}\rightarrow 0$$
and since $\underline\Omega^2$ is a fine sheaf, we have
$$H^1(M,\underline\Omega^2_{cl})\cong H^0(M,{\underline\Omega^3_{cl}})/dH^0(M,{\underline\Omega^2})=\Omega_{cl}^3/d\Omega^2= H^3(M,\mathbf{R})$$
so that such a structure has a characteristic degree $3$ cohomology class.
\begin{ex} The theory of gerbes fits into the twisted picture quite readily. Very briefly, a $U(1)$ gerbe can be defined by a 2-cocycle with values in the sheaf of $C^{\infty}$ circle-valued functions -- so it is given by functions $g_{\alpha\beta\gamma}$ on threefold intersections satisfying a cocycle condition. In the exact sequence of sheaves of $C^{\infty}$ functions
$$0\rightarrow \mathbf{Z}\rightarrow \underline\mathbf{R}\stackrel{\exp \,2\pi i}\rightarrow \underline U(1)\rightarrow 1$$
the 2-cocycle defines a class in $H^3(M,\mathbf{Z})$.
If we think of the analogue for line bundles, we have the transition functions $g_{\alpha\beta}$ and then a connection on the line bundle is given by 1-forms $A_{\alpha}$ on open sets such that
$$A_{\beta}-A_{\alpha}=(g^{-1}dg)_{\alpha\beta}$$
on twofold intersections (where we identify the Lie algebra of the circle with $\mathbf{R}$).
A {\it connective structure} on a gerbe is similarly a collection of 1-forms $A_{\alpha\beta}$ such that
$$A_{\alpha\beta}+A_{\beta\gamma}+A_{\gamma\alpha}=(g^{-1}dg)_{\alpha\beta\gamma}$$
on threefold intersections. Clearly $B_{\alpha\beta}=dA_{\alpha\beta}$ defines a Courant algebroid, and its characteristic class is the image of the integral cohomology class in $H^3(M,\mathbf{R})$.
An example of this is a hermitian structure on a holomorphic gerbe on a complex manifold (defined by a cocycle of holomorphic functions $h_{\alpha\beta\gamma}$ with values in $\mathbf{C}^*$). A hermitian structure on this is a choice of a cochain $k_{\alpha\beta}$ of positive functions with $\vert h_{\alpha\beta\gamma}\vert =k_{\alpha\beta}k_{\beta\gamma}k_{\gamma\alpha}$. Then $A_{\alpha\beta}=(h^{-1}d^ch)_{\alpha\beta}$ defines a connective structure. Here $d^c=I^{-1}dI=-i(\partial-\bar\partial)$.
\end{ex}
\vskip .25cm
The bundle $E$ has an orthogonal structure and so an associated spinor bundle $S$. By the definition of $E$, $S$ is obtained by identifying $\Lambda^*T^*$ over $U_{\alpha}$ with $\Lambda^*T^*$ over $U_{\beta}$ by
$$\varphi\mapsto e^{-B_{\alpha\beta}}\varphi.$$
A global section of $S$ is then given by local forms $\varphi_{\alpha},\varphi_{\beta}$ such that $\varphi_{\alpha}= e^{-B_{\alpha\beta}}\varphi_{\beta}$ on $U_{\alpha}\cap U_{\beta}$. Since $B_{\alpha\beta}$ is closed and even,
$$d \varphi_{\alpha}= d(e^{-B_{\alpha\beta}}\varphi_{\beta})=-dB_{\alpha\beta}\wedge e^{-B_{\alpha\beta}}\varphi_{\beta}+e^{-B_{\alpha\beta}}d\varphi_{\beta}=e^{-B_{\alpha\beta}}d\varphi_{\beta}$$
and so we have a well-defined operator
$$d:C^{\infty}(S^{ev})\rightarrow C^{\infty}(S^{od}).$$
The $\mathbf{Z}_2$-graded cohomology of this is the {\it twisted cohomology}. There is a more familiar way of writing this if we consider the inclusion of sheaves $\underline\Omega^2_{cl}\subset \underline\Omega^2$. Since $\underline\Omega^2$ is a fine sheaf, the cohomology class of $B_{\alpha\beta}$ is trivial here and we can find 2-forms $F_{\alpha}$ such that on $U_{\alpha}\cap U_{\beta}$
$$F_{\beta}-F_{\alpha}=B_{\alpha\beta}.$$
Since $B_{\alpha\beta}$ is closed $dF_{\beta}=dF_{\alpha}$ is the restriction of a global closed 3-form $H$ which represents the characteristic class in $H^3(M,\mathbf{R})$.
But then
$$e^{-F_{\alpha}}\varphi_{\alpha}=e^{-F_{\beta}}e^{B_{\alpha\beta}}\varphi_{\alpha}=e^{-F_{\beta}}\varphi_{\beta}$$
defines a global exterior form $\psi$. Furthermore
$$d\psi=d(e^{-F_{\alpha}}\varphi_{\alpha})=-H\wedge \psi+e^{-F_{\alpha}}d\varphi_{\alpha}.$$
Thus the operator $d$ above defined on $S$ is equivalent to the operator
$$d+H:\Omega^{ev}\rightarrow \Omega^{od}$$
on exterior forms.
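As a consistency check, $(d+H\wedge)^2\psi=dH\wedge\psi-H\wedge d\psi+H\wedge d\psi+H\wedge H\wedge\psi=0$, since $dH=0$ and $H$ has odd degree, so this operator does indeed define a $\mathbf{Z}_2$-graded cohomology.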
\begin{ex} For gerbes the full analogue of a connection is a connective structure together with a {\it curving}, which is precisely a choice of 2-form $F_{\alpha}$ such that $F_{\beta}-F_{\alpha}=dA_{\alpha\beta}$. In this case the 3-form $H$ such that $H/2\pi$ has integral periods is the curvature.
\end{ex}
\begin{rmk} Instead of thinking in cohomological terms about writing a cocycle of closed 2-forms $B_{\alpha\beta}$ as a coboundary $F_{\beta}-F_{\alpha}$ in the sheaf of all 2-forms, there is a more geometric interpretation of this choice which can be quite convenient. The B-field action of $F_{\alpha}$ gives an isomorphism of $T\oplus T^*$ with itself over $U_{\alpha}$ and the relation $F_{\beta}-F_{\alpha}=B_{\alpha\beta}$ says that this extends to an isomorphism
$$E\cong T\oplus T^*.$$
More concretely, $X$ over $U_{\alpha}$ is mapped to $X+i_XF_{\alpha}\in E$ and defines a splitting (in fact an {\it isotropic} splitting) of the extension $0\rightarrow T^*\rightarrow E\rightarrow T\rightarrow 0$.
With the same coboundary data, we identified the spinor bundle $S$ with the exterior algebra bundle and now we note that
$$(X+i_XF_{\alpha})\cdot e^{-F_{\alpha}}\varphi_{\alpha}=e^{-F_{\alpha}}i_X\varphi_{\alpha}$$
so the two are compatible. We have a choice -- either consider $E,S$ with their standard local models of $T\oplus T^*$ and $\Lambda^*T^*$, or make the splitting and give a global isomorphism. The cost is that we replace the ordinary exterior derivative by $d+H$ and, as can be seen from the proof of Proposition \ref{Baction}, replace the standard Courant bracket by the twisted version
\begin{equation}
[X+\xi,Y+\eta]+i_Yi_XH.
\label{Couranttwist}
\end{equation}
\end{rmk}
\subsection{Skew torsion}\label{skewsection}
If we replace $T\oplus T^*$ by its twisted version $E$ we may ask how to incorporate a Riemannian metric as we did in the first lecture. Here is the definition:
\begin{definition} \label{geng} A generalized metric is a subbundle $V\subset E$ of rank $n$ on which the induced inner product is positive definite.
\end{definition}
Since the inner product on $T^*\subset E$ is zero and is positive definite on $V$, $V\cap T^*=0$ and so in a local isomorphism $E\cong T\oplus T^*$, $V$ is the graph of a map $h_{\alpha}:T\rightarrow T^*$. So, splitting into symmetric and skew symmetric parts
$$h_{\alpha}=g_{\alpha}+F_{\alpha}.$$
On the twofold intersection
$$h_{\alpha}(X)=h_{\beta}(X)+i_XB_{\alpha\beta}.$$
Thus $h_{\alpha}(X)(X)=h_{\beta}(X)(X)=g(X,X)$ for a well-defined Riemannian metric $g$, but
$F_{\alpha}=F_{\beta}+B_{\alpha\beta}$. Associated with a generalized metric we thus obtain a natural splitting.
Now the definition of a connection in Proposition \ref{Vconnect} makes perfectly good sense in the twisted case. To see what we get, let us redo the calculation using local coordinates. In this case $V$ is defined locally by $X+gX+i_XF_{\alpha}$ and $V^{\perp}$ by $X-gX+i_XF_{\alpha}$ (we are just transforming the $V$ and $V^{\perp}$ of the metric $g$ by the orthogonal transformation of the local B-field $F_{\alpha}$.) So the appropriate Courant bracket is
$$\left[\frac{\partial}{\partial x_i}-g_{ik}dx_k+F_{ik}dx_k, \frac{\partial}{\partial x_j}+g_{j\ell}dx_{\ell}+F_{j\ell}dx_{\ell}\right]$$
and this gives the terms for the Levi-Civita connection plus a term
\footnote[1]{In a parallel discussion in \cite{NJH2} the third term in this expansion was unfortunately omitted.}
$$\frac{\partial F_{j\ell}}{\partial x_i}dx_{\ell}-\frac{\partial F_{ik}}{\partial x_j}dx_k-\frac{1}{2}d(F_{ji}-F_{ij})=\left(\frac{\partial F_{j\ell}}{\partial x_i}-\frac{\partial F_{i\ell}}{\partial x_j}+\frac{\partial F_{ij}}{\partial x_{\ell}}\right)dx_{\ell}.$$
(We could also have used the twisted bracket as in (\ref{Couranttwist}).)
Writing the skew bilinear form $F_{\alpha}$ as $\sum_{i<j}F_{ij}dx_i\wedge dx_j$, and $dF_{\alpha}=\sum_{i<j<k}H_{ijk}dx_i\wedge dx_j\wedge dx_k$, this last term is $H_{ji\ell}dx_{\ell}$ and represents a connection with {\it skew torsion}: recall that the torsion of a connection on the tangent bundle is
$$T(X,Y)=\nabla_XY-\nabla_YX-[X,Y]$$
and is said be skew if $g(T(X,Y),Z)$ is skew-symmetric.
In our case, with $X=\partial/\partial x_i,Y=\partial/\partial x_j$, we use as before the projection onto $V$ to get
$$T\left(\frac{\partial}{\partial x_i},\frac{\partial}{\partial x_j}\right)=H_{ji\ell}g^{\ell k}\frac{\partial }{\partial x_k}.$$
\begin{rmk} We could have interchanged the roles of $V$ and $V^{\perp}$ in the above argument -- this would give a connection with opposite torsion $-H$.
\end{rmk}
\begin{exs}
\noindent 1. The standard example of a connection with skew torsion is the flat connection given by trivializing the tangent bundle on a compact Lie group by left translation, i.e. for all $X\in {\lie{g}}$, $\nabla X=0$. Then
$T(X,Y)=[X,Y]$.
\noindent 2. The second class of examples is given by the Bismut connection on the tangent bundle of a Hermitian manifold. In \cite{B} it is shown that any Hermitian manifold has a unique connection with skew torsion which preserves the complex structure and the Hermitian metric. In the K\"ahler case it is the Levi-Civita connection, but in general the skew torsion is defined by the 3-form $d^c\omega$ where $\omega$ is the Hermitian form.
\end{exs}
\section{Generalized complex manifolds}
\subsection{Generalized complex structures}\label{gcs}
There are many ways of defining a complex manifold other than by the existence of holomorphic coordinates. One is the following: an endomorphism $J$ of $T$ such that $J^2=-1$ and such that the $+i$ eigenspace of $J$ on the complexification $T\otimes \mathbf{C}$ is Frobenius-integrable. Of course it needs the Newlander-Nirenberg theorem to find the holomorphic coordinates from this definition, but it is one that we can adapt straightforwardly to the generalized case. The only extra condition is compatibility with the inner product. So we have:
\begin{definition} A generalized complex structure on a manifold is an endomorphism $J$ of $T\oplus T^*$ such that
\begin{itemize}
\item
$J^2=-1$
\item
$(Ju,v)=-(u,Jv)$
\item
Sections of the subbundle $E^{1,0}\subset (T\oplus T^*)\otimes \mathbf{C}$ defined by the $+i$ eigenspaces of $J$ are closed under the Courant bracket.
\end{itemize}
\end{definition}
\begin{rmks}
\noindent 1. It is not immediately obvious that the obstruction to integrability is tensorial, i.e. if $[u,v]$ is a section of $E^{1,0}$ then so is $[u,fv]$, but this is indeed so. It depends on the fact that $E^{1,0}$ is isotropic with respect to the inner product. In fact if $Ju=iu$ then
$$i(u,u)=(Ju,u)=-(u,Ju)=-i(u,u).$$
Then recall Property (\ref{cour1}) of the Courant bracket: $[u,fv]=f[u,v]+(Xf)v-(u,v)df$. If $u,v$ are sections of $E^{1,0}$ then $(u,v)=0$ so
$$[u,fv]=f[u,v]+(Xf)v$$
and if $u,v$ and $[u,v]$ are sections, so is $[u,fv]$.
\noindent 2. The definition obviously extends to the twisted case, replacing $T\oplus T^*$ by $E$.
\noindent 3. The data of $J$ is equivalent to giving an isotropic subbundle $E^{1,0}\subset (T\oplus T^*)\otimes \mathbf{C}$ of rank $n$ such that $E^{1,0}\cap \bar E^{1,0}=0$. This is the way we shall describe examples below, and we shall write $E^{0,1}$ for $\bar E^{1,0}$.
\noindent 4. The endomorphism $J$ reduces the structure group of $T\oplus T^*$ from $SO(2m,2m)$ to the indefinite unitary group $U(m,m)$.
\end{rmks}
\begin{exs}
\noindent 1. An ordinary complex manifold is an example. We take $E^{1,0}$ to be spanned by $(0,1)$ tangent vectors and $(1,0)$ forms:
$$\frac{\partial}{\partial \bar z_1}, \frac{\partial}{\partial \bar z_2},\dots , dz_1, dz_2,\dots$$
The integrability condition is obvious here.
\noindent 2. A symplectic form $\omega$ defines a generalized complex structure. Here $E^{1,0}$ is spanned by sections of $(T\oplus T^*)\otimes \mathbf{C}$ of the form
$$\frac{\partial}{\partial x_j}-i\omega_{jk}dx_k.$$
This is best seen as the transform of $T\subset T\oplus T^*$ by the complex B-field $-i\omega$. Since $T$ is isotropic and its sections are obviously closed under the Courant bracket (which is just the Lie bracket on vector fields) the same is true of its transform by a closed 2-form. A check in the simplest two-dimensional case is given after these examples.
\noindent 3. A holomorphic Poisson manifold is an example. Recall that a Poisson structure on a manifold is a section $\sigma$ of $\Lambda^2T$, which therefore defines a homomorphism $\sigma:T^*\rightarrow T$. For a function $f$, $\sigma(df)=X$ is called a Hamiltonian vector field and $X(g)=\sigma(df,dg)$ is defined to be the Poisson bracket $\{f,g\}$ of the two functions. The integrability condition for a Poisson structure is
$$\sigma(d\{f,g\})=[X,Y]$$
where $Y$ is the Hamiltonian vector field of $g$.
If $M$ is a complex manifold and $\sigma\in \Lambda^2T^{1,0}$ a holomorphic Poisson structure then $E^{1,0}$ is spanned by
$$\frac{\partial}{\partial \bar z_1}, \frac{\partial}{\partial \bar z_2},\dots, dz_1-\sigma(dz_1), dz_2-\sigma(dz_2),\dots$$
Because $\sigma$ is holomorphic the only potentially non-trivial Courant brackets are of the form
$$[dz_i-\sigma(dz_i),dz_j-\sigma(dz_j)]=[\sigma(dz_i),\sigma(dz_j)]-d\{z_i,z_j\}+d\{z_j,z_i\}+\frac{1}{2}d(\{z_i,z_j\}-\{z_j,z_i\}).$$
But the Courant bracket on vector fields is the Lie bracket so by integrability of the Poisson structure
$[\sigma(dz_i),\sigma(dz_j)]=\sigma(d\{z_i,z_j\})$ and hence the bracket above is
$$\sigma(d\{z_i,z_j\})-d\{z_i,z_j\}$$
which again lies in $E^{1,0}$.
\end{exs}
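Returning to the symplectic case (Example 2), here is the simplest instance: on $M=\mathbf{R}^2$ with $\omega=dx\wedge dy$, the bundle $E^{1,0}$ is spanned by $u_1=\partial/\partial x-i\,dy$ and $u_2=\partial/\partial y+i\,dx$. One checks directly that $(u_1,u_1)=(u_2,u_2)=(u_1,u_2)=0$, so $E^{1,0}$ is isotropic, and that $[u_1,u_2]=0$ (the only potentially nonzero contribution is $-\frac{1}{2}d(i_{\partial/\partial x}(i\,dx)-i_{\partial/\partial y}(-i\,dy))=-\frac{1}{2}d(2i)=0$), so its sections are closed under the Courant bracket.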
There is another way to describe the integrability which can be very useful. The subbundle $E^{1,0}$ has rank $n$ and is isotropic in a $2n$-dimensional space. For a non-degenerate inner product this is the maximal dimension. Given any spinor $\psi$, the space of $x\in W$ such that $x\cdot\psi=0$ is isotropic because $0=x\cdot x\cdot\psi=(x,x)\psi$. A maximal isotropic subspace is determined by a special type of spinor called a pure spinor.
So to any maximal isotropic subspace we can associate a one-dimensional space of pure spinors it annihilates. Hence a generalized complex manifold has a complex line subbundle of $\Lambda^*T^*\otimes \mathbf{C}$ (called the {\it canonical bundle}) consisting of multiples of a pure spinor defining $J$. The condition $ E^{1,0}\cap \bar E^{1,0}=0$ is equivalent to the Mukai pairing $\langle \psi,\bar\psi\rangle$ for the spinor and its conjugate being non-zero.
\begin{exs}
\noindent 1. For an ordinary complex manifold the subspace $$\frac{\partial}{\partial \bar z_1}, \frac{\partial}{\partial \bar z_2},\dots , dz_1, dz_2,\dots$$
annihilates $dz_1\wedge dz_2\wedge\dots\wedge dz_m$. This generates the usual canonical bundle of a complex manifold.
\noindent 2. The tangent space $T$ annihilates $1$ by Clifford multiplication (interior product). Hence for a symplectic manifold the transform of $T$ by $-i\omega$ annihilates the form $e^{i\omega}$. Here the canonical bundle is trivialized by this form.
\end{exs}
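For the symplectic spinor $\psi=e^{i\omega}$ the condition $\langle\psi,\bar\psi\rangle\neq 0$ is exactly the nondegeneracy of $\omega$: in dimension $2$, for instance, $\langle e^{i\omega},e^{-i\omega}\rangle=1\cdot(-i\omega)-(i\omega)\cdot 1=-2i\omega$, and in general $\langle e^{i\omega},e^{-i\omega}\rangle$ is a nonzero multiple of $\omega^m$.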
Here is integrability in this context:
\begin{prp} \label{int} Let $\psi$ be a form which is a pure spinor with $\langle \psi,\bar\psi\rangle\ne 0$. Then it defines a generalized complex structure if and only if $d\psi=w\cdot\psi$ for some local section $w$ of $(T\oplus T^*)\otimes \mathbf{C}$.
\end{prp}
\begin{prf} First assume $d\psi=w\cdot\psi$. Suppose $u\cdot \psi=0=v\cdot \psi$. Then, since the Lie derivative ${\mathbf L}_v$ acts via the Lie algebra action and so preserves the Clifford product, we have
$0={\mathbf L}_v(u\cdot\psi)={\mathbf L}_vu\cdot\psi+u\cdot {\mathbf L}_v\psi.$
Using the Cartan formula ${\mathbf L}_v\psi=d(v\cdot\psi)+v\cdot d\psi=v\cdot d\psi=v\cdot w\cdot \psi$
and so
$${\mathbf L}_vu\cdot\psi+u\cdot v\cdot w\cdot\psi=0$$
Now use the Clifford relations,
$$u\cdot v\cdot w\cdot\psi=u\cdot(2(v,w)-w\cdot v)\cdot\psi=0$$
since $u\cdot \psi=0=v\cdot \psi$. We deduce that ${\mathbf L}_vu\cdot\psi=0$.
Hence, interchanging the roles of $u$ and $v$ and subtracting,
$$0={\mathbf L}_vu\cdot\psi-{\mathbf L}_uv\cdot\psi=2[v,u]\cdot \psi$$
from (\ref{Lie}).
The Courant bracket therefore preserves the annihilator of $\psi$ and we have the integrability condition for a generalized complex structure.
\vskip .25cm
Conversely, assume the structure is integrable. If $u\cdot \psi=0=v\cdot \psi$ then from the definition of integrability $[u,v]\cdot\psi=0$ and so
${\mathbf L}_vu\cdot\psi-{\mathbf L}_uv\cdot\psi=0$. But then from the above algebra we have
$(u\cdot v-v\cdot u)\cdot d\psi=0$ or, since $u\cdot v=- v\cdot u+2(u,v)1=- v\cdot u$,
$$u\cdot v\cdot d\psi=0.$$
As far as the linear algebra is concerned, any two endomorphisms $J$ satisfying the conditions of a generalized complex structure are equivalent under the action of $SO(2m,2m)$ -- they form the orbit $SO(2m,2m)/U(m,m)$. Hence to proceed, we can use the linear algebra of the standard complex structure where $\psi=dz_1\wedge\dots\wedge dz_m$ to determine those $\varphi$ which satisfy $u\cdot v\cdot \varphi=0.$
Taking $u=dz_i$ and $v=\partial/\partial \bar z_j$, the condition that $u\cdot v\cdot \varphi=0$ for all $i$ and $j$ means that $\varphi$ is a sum of forms of type $(m,q)$ or $(p,0)$. Taking $u=dz_i,v=dz_j$ we must have $p=m$ or $m-1$. Taking $u=\partial/\partial \bar z_i, v=\partial/\partial \bar z_j$ we need $q=0$ or $1$. Thus $\varphi$ is a sum of $(m,0),(m,1)$ and $(m-1,0)$ terms. But $d\psi$ has opposite parity to $\psi$ so it must be $(m,1)$ and $(m-1,0)$ terms. However, these are generated by $d\bar z_i\wedge dz_1\wedge\dots\wedge dz_m$ and $i_{\partial/\partial z_j}dz_1\wedge\dots\wedge dz_m$, that is $w\cdot \psi$, as required.
\end{prf}
\begin{ex} The simplest use of this integrability is when there is a global closed form which is a pure spinor. Such manifolds are called {\it generalized Calabi-Yau manifolds} and include ordinary Calabi-Yau manifolds where the holomorphic $m$-form is $\psi$, or symplectic manifolds where $\psi=e^{i\omega}$.
\end{ex}
\subsection{Symmetries and twisting}
At first sight, there seems little common ground when we think of the symmetries of symplectic manifolds or complex manifolds. In the first case, any smooth function defines a Hamiltonian vector field, in the second the Lie algebra of holomorphic vector fields is at most finite-dimensional and often zero. In generalized geometry, however, we use the extended group $\Omega^2(M)_{cl}\rtimes \mathop{\rm Diff}\nolimits(M)$ and this restores the balance between the two.
\begin{prp} Let $J$ be a generalized complex structure and $f$ a smooth function. Then if $X+\xi=J(df)$, $X-d\xi$ in the Lie algebra of $\Omega^2(M)_{cl}\rtimes \mathop{\rm Diff}\nolimits(M)$ preserves $J$.
\end{prp}
\begin{prf}
If $u=Jdf$, then decompose $u=u^{1,0}+u^{0,1}$ into its $\pm i$ eigenspace components of $J$, so that $-df=Ju= iu^{1,0}-iu^{0,1}$. Let $\psi$ be a local section of the canonical bundle, then $u^{1,0}\cdot\psi=0$ and hence
$$u\cdot\psi=u^{0,1}\cdot\psi=-idf\cdot\psi=-idf\wedge\psi.$$
Thus, using Proposition \ref{int} and the Cartan formula
$${\mathbf L}_u\psi=d(u\cdot \psi)+u\cdot d\psi=d(-idf\wedge\psi)+u\cdot d\psi=(idf+u)\cdot w\cdot \psi.$$
But $idf+u=u^{1,0}-u^{0,1}+u^{1,0}+u^{0,1}=2u^{1,0}$ and so
$${\mathbf L}_u\psi=2u^{1,0}\cdot w\cdot\psi=4(u^{1,0},w)\psi$$
using the Clifford identity $u^{1,0}\cdot w+w\cdot u^{1,0}=2(u^{1,0},w)1$ and $u^{1,0}\cdot\psi=0$.
It follows that the Lie derivative of $\psi$ is a multiple of $\psi$ and so preserves the generalized complex structure.
\end{prf}
From this proposition we can see how a complex manifold acquires symmetries from smooth functions, for in this case $Jdf=X+\xi=-d^cf$ and exponentiating we have the B-field action of the closed 2-form $dd^cf$.
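For comparison, in the symplectic case $J$ interchanges $T$ and $T^*$ using $\omega$, so that, up to an overall sign which depends on the convention chosen for $J$,
$$J(df)=\pm X_f,\qquad \xi=0,\qquad i_{X_f}\omega=df,$$
and the symmetry produced by the proposition is just the Hamiltonian vector field of $f$, with no B-field: we recover the fact that Hamiltonian flows preserve the symplectic structure.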
\vskip .25cm
Slightly more generally, any real closed $(1,1)$-form is a symmetry of an ordinary complex structure thought of as a generalized complex structure, since the interior product with a $(0,1)$-vector $\partial/\partial \bar z_i$ is a $(1,0)$ form -- a linear combination of $dz_j$s. It follows that, if we take a 1-cocycle of such forms and construct an extension $E$ as in Section \ref{twist}, we obtain a twisted generalized complex structure on $E$.
In particular, given a closed 3-form $H$ of type $(1,2)$ we can write this on a small enough open set $U_{\alpha}$ as $\partial\bar\partial A_{\alpha}$ for a $(0,1)$-form $A_{\alpha}$. Then on a twofold intersection
$\partial\bar\partial A_{\alpha}-\partial\bar\partial A_{\beta}=0$ and hence $d(\bar\partial A_{\alpha}-\bar\partial A_{\beta})=0$. Defining $B_{\alpha\beta}$ to be the real part of $\bar\partial (A_{\beta}- A_{\alpha})$ gives such a cocycle.
\subsection{The $\bar\partial$-operator}\label{dbarsec}
On a manifold with generalized complex structure $J$ we have the eigenspace decomposition $(T\oplus T^*)\otimes \mathbf{C}= E^{1,0}\oplus E^{0,1}$, and given a function $f$ we define $\bar\partial_Jf$ to be the $(0,1)$-component. Note that in the twisted case we have $T^*\subset E$ and so we can do the same.
\begin{exs}
\noindent 1. For an ordinary complex manifold
$$\bar\partial_Jf=\bar\partial f.$$
\noindent 2. For a symplectic structure $\omega$
$$\bar\partial_Jf=\frac{1}{2}(iX+df)$$
where $X$ is the Hamiltonian vector field of $f$ i.e. $i_X\omega=df$. Note that in this case $\bar\partial_Jf=0$ implies that $f$ is constant, so we can't approach generalized complex geometry purely in terms of sheaves of local holomorphic functions.
\noindent 3. For a holomorphic Poisson structure $\sigma$ (a derivation is sketched just after these examples)
$$\bar\partial_Jf=\bar\partial f+\sigma(\partial f)-\bar\sigma(\bar\partial f).$$
\end{exs}
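The third formula can be derived by straightforward bookkeeping; here is a sketch, with $f_i=\partial f/\partial z_i$ and $f_{\bar i}=\partial f/\partial \bar z_i$. The eigenspace $E^{1,0}$ is spanned by the $\partial/\partial\bar z_i$ and the $dz_i-\sigma(dz_i)$, and $E^{0,1}$ by the $\partial/\partial z_i$ and the $d\bar z_i-\bar\sigma(d\bar z_i)$ (these spanning sets reappear in later sections). Writing
$$dz_i=(dz_i-\sigma(dz_i))+\sigma(dz_i),\qquad d\bar z_i=(d\bar z_i-\bar\sigma(d\bar z_i))+\bar\sigma(d\bar z_i),$$
the $E^{0,1}$-component of $df$ is
$$\bar\partial_Jf=\sum_i f_i\,\sigma(dz_i)+\sum_i f_{\bar i}\,(d\bar z_i-\bar\sigma(d\bar z_i))=\bar\partial f+\sigma(\partial f)-\bar\sigma(\bar\partial f),$$
as claimed.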
The $\bar\partial$-operator maps functions to sections of $E^{0,1}$, and we want to extend it to a complex just like the Dolbeault complex of a complex manifold. For this we need to extend to an operator
$$\bar\partial_J:C^{\infty}(E^{0,1})\rightarrow C^{\infty}(\Lambda^2 E^{0,1}).$$
There is an obvious formula, analogous to the usual definition of the exterior derivative, but using the Courant bracket instead of the Lie bracket. Note first that because $E^{1,0}$ is maximal isotropic, the inner product identifies $E^{0,1}$ with the dual of $E^{1,0}$, so $\Lambda^p E^{0,1}$ is the space of alternating multilinear $p$-forms on $E^{1,0}$.
The extended operator is then given, for sections $u=X+\xi,v=Y+\eta$ of $E^{1,0}$ by the usual formula, but with the Courant bracket $[u,v]$:
\begin{equation}
2\bar\partial_J \alpha(u,v)=X(\alpha(v))-Y(\alpha(u))+\alpha([u,v]).
\label{da}
\end{equation}
Given this, there is an obvious extension to an operator
$$\bar\partial_J:C^{\infty}(\Lambda^pE^{0,1})\rightarrow C^{\infty}(\Lambda^{p+1} E^{0,1})$$
which satisfies the property
$$\bar\partial_J(f\alpha)=\bar\partial_Jf\wedge \alpha+f\bar\partial_J\alpha.$$
The only point to make here is that the relation $\bar\partial_J^2=0$ requires the Jacobi identity for the Courant bracket which we know doesn't hold in general. However, its failure as in Proposition \ref{jac} is due to the term
$$d(([u,v],w)+([v,w],u)+([w,u],v)).$$
But by the definition of a generalized complex structure the Courant bracket $[u,v]$ of two sections of $E^{1,0}$ is also a section, and as we have seen, $E^{1,0}$ is isotropic. It follows that this expression is zero for such sections and the Jacobi identity holds.
\begin{exs}
\noindent 1. In the case of an ordinary complex structure $E^{0,1}=\bar T^*\oplus T$ where $T$ here denotes the holomorphic tangent bundle. Hence
$$\Lambda^m E^{0,1}=\bigoplus_{p+q=m}\Lambda^pT\otimes \Lambda^q \bar T^*.$$
The $\bar\partial_J$ complex is then the direct sum over $p$ of the Dolbeault complexes for polyvector fields:
$$\rightarrow\Omega^{0,q}(\Lambda^pT)\stackrel{\bar\partial}\rightarrow \Omega^{0,q+1}(\Lambda^pT).$$
\noindent 2. Now consider a twisted version of this example. The $\bar\partial_J$ operator is now a twisted version of the Dolbeault complex, similar to the twisted exterior derivative in Section \ref{twist}. It is easiest to describe if we choose an isotropic splitting. Then we have to add onto the Courant bracket in equation (\ref{da}) the $H$-term, evaluated on $(0,1)$-vectors. It gives us
$$\bar\partial_J=\bar\partial-H^{1,2}$$
where $\bar\partial$ is the usual Dolbeault operator
$$\bar\partial: \Omega^{0,q}(\Lambda^pT)\rightarrow \Omega^{0,q+1}(\Lambda^pT)$$
and the 3-form $H^{1,2}$ acts by contraction in the $(1,0)$ factor and exterior product on the $(0,2)$ part:
$$H^{1,2}:\Omega^{0,q}(\Lambda^pT)\rightarrow \Omega^{0,q+2}(\Lambda^{p-1}T).$$
Note that the total degree is unchanged $(q+1)+p=(q+2)+(p-1)$.
\end{exs}
\section{Generalized K\"ahler manifolds}
\subsection{Bihermitian metrics}
Given that a complex structure $I$ and a symplectic structure $\omega$ are both types of generalized complex structure, it makes sense to consider the analogue of a K\"ahler structure. The compatibility of $I$ and $\omega$ is that $\omega$ should be of type $(1,1)$ with respect to $I$ (an algebraic condition) and also that the resulting Hermitian inner product should be positive definite. The algebraic condition when translated into a condition on two generalized complex structures $J_1,J_2$ is that they should commute. We make this then a definition, and see what more we can find beyond ordinary K\"ahler metrics:
\begin{definition} A generalized K\"ahler structure on a manifold consists of a pair of commuting generalized complex structures $J_1,J_2$ such that the inner product $(J_1J_2u,v)$ is positive definite.
\end{definition}
Note that $(J_1J_2u,v)=-(J_2u,J_1v)=(u, J_2J_1v)=(u, J_1J_2v)$ so that $(J_1J_2u,v)$ does define a symmetric bilinear form.
This is a natural definition in generalized geometry, but it gives in general the notion of a {\it bihermitian metric}. This is the theorem of Gualtieri:
\begin{thm} \label{Gthm} A generalized K\"ahler structure on a manifold gives rise to the following:
\begin{itemize}
\item
a Riemannian metric $g$
\item
two integrable complex structures $I_+,I_-$, Hermitian with respect to $g$
\item
affine connections $\nabla_{\pm}$ with skew torsion $\pm H$ which preserve the metric and the complex structure $I_{\pm}$.
\end{itemize}
Conversely, given this data, we can define a generalized K\"ahler structure, unique up to the action of a B-field.
\end{thm}
The proof below is based on that of \cite{MG2}. The surprising thing about this theorem is that it reveals a geometry considered long ago by the physicists Gates, Hull and Ro\v cek \cite{GHR}.
\begin{prf}
\noindent 1. Since the two generalized complex structures are orthogonal transformations and commute, $(J_1J_2)^2=(-1)(-1)=1$ and we have $\pm 1$ orthogonal eigenspaces $V$ and $V^{\perp}$. Splitting $u\in T\oplus T^*$ into components $u^+ + u^-$, the positive definiteness of $(J_1J_2u,v)$ means that $(u^+,u^+)-(u^-,u^-)$ is positive definite. So $V$ is positive definite and $V^{\perp}$ is negative definite. But the signature of the inner product is $(n,n)$ so $\dim V=\dim V^{\perp}=n$. This is a generalized metric as defined in Section {\ref{skewsection}} and so already gives us metric connections with skew torsion.
\noindent 2. Next we find the complex structures. Since $J_1$ commutes with $J_1J_2$, it preserves the eigenspaces and defines an almost complex structure $I_+$ on $M$ by $J_1X^+=(I_+X)^+$ (on $V$, $J_2=-J_1$ and so $J_2$ defines the opposite structure). On $V^{\perp}$ $J_1=J_2$ defines similarly a complex structure $I_-$. Complexifying $T\oplus T^*$ we have a decomposition into four equidimensional subbundles corresponding to the pairs of eigenvalues of $\pm i$ of $J_1$ and $J_2$: $V^{++},V^{+-},V^{-+},V^{--}$.
So $V$, where $J_1J_2=1$, is equal to $V^{+-}\oplus V^{-+}$, and $V^{+-}$ projects to the $+i$ eigenspace of $I_+$ on $T\otimes \mathbf{C}$. These subbundles are intersections of eigenspaces of $J_1,J_2$ and hence their sections are closed under the Courant bracket. But the vector field part of the Courant bracket is just the Lie bracket, hence the
$+i$ eigenspace of $I_+$ is closed under Lie bracket and hence $I_+$ is integrable.
\noindent 3. We need to show that the connection $\nabla$ on $V$ preserves the complex structure $I_+$, or in other words preserves the subbundle $V^{+-}$. Since this is maximal isotropic in $V\otimes \mathbf{C}$, we need to show that for any sections $u$ and $v$ of $V^{+-}$ the inner product $(\nabla_Xu,v)=0$. By the Courant bracket definition of the connection we need $([X^-,u],v)=0$.
Decompose $X^-=x_1+x_2$ in $V^{\perp}=V^{++}\oplus V^{--}$. Now $x_1,u$ are both in the $+i$ eigenspace of $J_1$, hence so is the Courant bracket $[x_1,u]$. Since $v$ is also in this eigenspace, which is isotropic, we have $([x_1,u],v)=0$. Similarly $x_2$ is in the $-i$ eigenspace of $J_2$, as are $u$ and $v$, so $([x_2,u],v)=0$. It follows that $([X^-,u],v)=([x_1,u],v)+([x_2,u],v)=0$.
We now have a connection $\nabla^+$ with skew torsion which preserves the metric and the complex structure, and so is the Bismut connection.
\vskip .25cm
\noindent 4. For the converse, assume we have the bihermitian data. Then the graph of the metric defines $V\subset T\oplus T^*$ and we use the $H$-twisted Courant bracket which defines the connection. A closed B-field will transform this to another $V$, but defining the same connections on $T$, so there is an ambiguity at this stage.
The complex structure $I_+$ splits the complexification of $V$ into $(1,0)$ and $(0,1)$ parts $V^{+-}\oplus V^{-+}$
and similarly $I_-$ splits $V^{\perp}$ complexified into $V^{++}\oplus V^{--}$. We shall prove firstly that sections of these subbundles $V^{+-}$ etc. are closed under the Courant bracket, and then that sections of $V^{++}\oplus V^{+-}$ are also closed. Defining $J_1$ as having $+i$ eigenspace $V^{++}\oplus V^{+-}$, the closure condition will then make $J_1$ into a generalized complex structure.
So consider first $V^{+-}$. Choose local holomorphic coordinates $z_1,\dots,z_m$ with respect to $I_+$. Then elements of $V^{+-}$ can be written as
$$\frac{\partial}{\partial z_i}+g_{i\bar j}d\bar z_j=\frac{\partial}{\partial z_i}+i\omega_{i\bar j}d\bar z_j$$
where $\omega$ is the Hermitian form. As in the calculation in Section \ref{skewsection}, the Courant bracket
$$\left[\frac{\partial}{\partial z_i}+i\omega_{i\bar k}d\bar z_k, \frac{\partial}{\partial z_j}+i\omega_{j\bar \ell}d\bar z_{\ell}\right]=
{i(\partial \omega)}_{ji\bar\ell}d\bar z_{\ell}-H_{ji\bar\ell}d\bar z_{\ell}-H_{ji \ell}dz_{\ell}.$$
But the connection is compatible with $g$ and $I_+$ and is thus the Bismut connection, for which $H=d^c\omega$. This is a form of type $(2,1)+(1,2)$ and so has no $(3,0)$ component so $H_{ji\ell}=0$; the $(2,1)$ component is $\partial\omega$ and so the Courant bracket vanishes.
The other three cases are similar.
\noindent 5. Now consider $V^{++}\oplus V^{+-}$. The Courant bracket preserves sections of each component so we only have to check that $[u,v]\in V^{++}\oplus V^{+-}$ for $u$ a section of $V^{++}$ and $v$ of $V^{+-}$. Because this is maximal isotropic in $(T\oplus T^*)\otimes \mathbf{C}$, we need, as above, the vanishing of $([u,v],w)$ for $w=w^++w^-$ a section of $V^{++}\oplus V^{+-}$.
We just showed that the Courant bracket preserves $V^{++}$ so $[u,w^+]$ is a section of $V^{++}$ and hence $(v,[u,w^+])=0$ by isotropy. Property (\ref{cour2}) of the Courant bracket is, for any $u,v,w$,
$$X(v,w)=([u,v]+d(u,v),w)+(v,[u,w]+d(u,w)).$$
Since $V^{++}\oplus V^{+-}$ is isotropic this means that for our $u,v,w$
$$([u,v],w)+(v,[u,w])=0.$$
In particular, since $(v,[u,w^+])=0$, $([u,v],w^+)=0$.
Similarly $w^-$ and $v$ are sections of $V^{+-}$, hence $[v,w^-]$ is a section of $V^{+-}$, so that $(u,[v,w^-])=0$ and then $([u,v],w^-)=0$.
Hence $([u,v],w)= ([u,v],w^+)+([u,v],w^-)=0$ and $J_1$ satisfies the integrability condition; the same argument works for $J_2$.
\end{prf}
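Before turning to examples, it may help to record how an ordinary K\"ahler manifold fits this dictionary; this is a routine check with the usual sign conventions for the two structures. Taking $J_1$ from the complex structure $I$ and $J_2$ from the K\"ahler form $\omega$, one finds
$$V=\{X+g(X,\cdot):X\in T\},\qquad V^{\perp}=\{X-g(X,\cdot):X\in T\},\qquad I_+=I_-=I,$$
and since $H=d^c\omega=0$ both connections $\nabla_{\pm}$ reduce to the Levi-Civita connection of $g$. Conversely, if $I_+=I_-$ and $H=0$ then the metric is K\"ahler.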
Giving examples of generalized K\"ahler manifolds is not as straightforward as it was for the structures considered earlier. There are some ad hoc explicit constructions, but the existence theorem of R. Goto \cite{Go} is the most powerful method to date, and we describe this next.
\subsection{Goto's deformation theorem}
Recall that a holomorphic Poisson structure $\sigma$ gives a generalized complex structure with $E^{1,0}$ spanned by
$$\frac{\partial}{\partial \bar z_1}, \frac{\partial}{\partial \bar z_2},\dots, dz_1-\sigma(dz_1), dz_2-\sigma(dz_2),\dots$$
But for $t\in \mathbf{C}$, $t\sigma$ is still a Poisson structure and as $t\rightarrow 0$ we get a smooth family $J_1(t)$ of generalized complex structures where $J_1(0)$ is just the standard complex structure with $E^{1,0}$
defined by
$$\frac{\partial}{\partial \bar z_1}, \frac{\partial}{\partial \bar z_2},\dots, dz_1, dz_2,\dots.$$
Goto's idea is to start with a holomorphic Poisson manifold together with a K\"ahler structure. The complex structure gives the generalized complex structure $J_1=J_1(0)$ and the symplectic structure of the K\"ahler form gives another, $J_2$. One then attempts to find, for small enough $t$, a family of generalized complex structures $J_2(t)$ which commute with $J_1(t)$ and for which $J_2(0)=J_2$.
Of course to use this to construct generalized K\"ahler manifolds one needs to start with a compact K\"ahler holomorphic Poisson manifold, but there are a number of examples:
\begin{exs}
\noindent 1. Any algebraic surface with a holomorphic section of $\Lambda^2T$ -- this is the anticanonical line bundle. In dimension two, the integrability of $\sigma$ is automatic. Examples are $\mathbf{C}P^2$ blown up at $k$ points on a cubic curve.
\noindent 2. The Hilbert scheme of $n$ points on a Poisson surface.
\noindent 3. Any threefold with a square root $K^{1/2}$ of the canonical bundle such that $K^{-1/2}$ has at least two sections: for example a Fano threefold of index 2 or 4. In this case take two sections $s_1,s_2$ of $K^{-1/2}$, then the Wronskian $s_1ds_2-s_2ds_1$ is a section of $T^*(K^{-1})\cong \Lambda^2T$ which satisfies the integrability condition.
\end{exs}
\begin{rmk} In fact, any bihermitian manifold defines a holomorphic Poisson structure -- the tensor $g([I_+,I_-]X,Y)$ is the real part of a holomorphic Poisson structure with respect to either $I_+$ or $I_-$ \cite{NJH3}. It means that Poisson geometry is a central theme in this area but it would be confusing at this point to discuss the relationship between all three of these holomorphic Poisson structures.
\end{rmk}
Here then is Goto's theorem:
\begin{thm} Let $M$ be a compact K\"ahler manifold with holomorphic Poisson structure $\sigma$. Let $J_1(t)$ be the generalized complex structure defined by $t\sigma$, then for sufficiently small $t$ there exists an analytic family of generalized K\"ahler structures $(J_1(t),J_2(t))$.
\end{thm}
\begin{prf}
The proof uses the fact that, from a linear algebra point of view, $J_1,J_2$ reduce the structure group of $T\oplus T^*$ to $U(m)\times U(m)$ -- the unitary structures on $V$ and $V^{\perp}$. In particular any two such structures are pointwise equivalent under the action of $SO(2m,2m)$. The idea is then to seek a formal power series $z(t)=tz_1+t^2z_2+\dots $ of skew-adjoint endomorphisms of $T\oplus T^*$ such that
\begin{itemize}
\item
$\exp z(t)$ transforms $J_1(0)$ to $J_1(t)$ and
\item
$d(\exp z(t)e^{i\omega})=0$.
\end{itemize}
and then to
\begin{itemize}
\item
solve the equations term-by-term and then
\item
prove convergence by using Green's functions and harmonic theory.
\end{itemize}
Recall from Section \ref{gcs} that $e^{i\omega}$ is the pure spinor which defines the symplectic generalized complex structure. Then $\psi=\exp z(t)e^{i\omega}$ is again pure and satisfies $\langle\psi,\bar\psi\rangle=\langle e^{i\omega},e^{-i\omega}\rangle$ and therefore defines a $J_2(t)$. And if $d\psi=0$ it follows from Proposition \ref{int} that $J_2(t)$ satisfies the integrability condition. The last part of the process, proving convergence, is quite standard in this type of deformation theory, so we shall focus on the first part, which uses a number of features of generalized geometry.
\vskip .25cm
\noindent 1. Let $\sigma$ be the Poisson tensor and put $a=\sigma+\bar\sigma$. This is a section of the real exterior power $\Lambda^2T$, which is part of the bundle of skew-adjoint endomorphisms of $T\oplus T^*$, i.e. $\Lambda^2T \oplus \mathop{\rm End}\nolimits T \oplus \Lambda^2T^*$.
Let $b$ be a skew-adjoint endomorphism which preserves the generalized complex structure $J_1(0)$, which is the ordinary complex structure. Then $b$ is a real section of
$\Lambda^{1,1}T^{1,0} \oplus \mathop{\rm End}\nolimits_{\mathbf{C}} T\oplus \Lambda^{1,1}(T^{1,0})^*.$
For a power series $b(t)$, the composition $\exp at \circ \exp b(t)$ can (for small enough $t$) be written as $\exp z$ for a power series $z=z(t)$ and by construction its action transforms $J_1(0)$ to $J_1(t)$.
\noindent 2. The Clifford algebra of a vector space $W$ has a filtration according to the product of generators in $W$: $\mathbf{C}liff^0\subset \mathbf{C}liff^2\subset \mathbf{C}liff^4\subset...$ and $\mathbf{C}liff^1\subset \mathbf{C}liff^3\subset \mathbf{C}liff^5\subset...$ (the parity is preserved because $x\cdot y+y\cdot x=2(x,y)1$ is of degree zero). We saw in Section \ref{spin} that $\{a\in \mathbf{C}liff(W): [a,W]\subseteq W\, {\mathrm {and}} \,\, a=-a^t\}$ is isomorphic under the spin representation to the skew-adjoint endomorphisms of $W$, so we consider $z$ as lying in $\mathbf{C}liff^2$ and exponentiation in the group is exponentiation in the Clifford algebra. It follows that in $\mathbf{C}liff(T\oplus T^*)$ we have
$$e^{-z}\mathbf{C}liff^1e^z \subseteq \mathbf{C}liff^1.$$
\noindent 3. We need to solve $d(e^ z \cdot \psi)=0$ where $\psi=e^{i\omega}$ so we consider the operator $e^{-z} d\, e^z$ for $z$ a section of $\mathbf{C}liff^2$ (note that if $z=B$ in $\Lambda^2T^*\subset \Lambda^2T \oplus \mathop{\rm End}\nolimits T \oplus \Lambda^2T^*$, then $e^{-z} d\, e^z$ is the twisted differential $d+H$ where $H=dB$ which we met in Section \ref{twist}, but here we need the general case).
We use the formula
$$d\varphi=\sum_idx_i\wedge{\mathcal L}_{\partial/\partial x_i}\varphi.$$
(This is trivially true on functions and the right hand side has the obvious property $d(\alpha\wedge\beta)=d\alpha\wedge \beta+(-1)^p\alpha\wedge d\beta$.)
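As a minimal check of this formula, on a 1-form $\varphi=f\,dx_j$ we have ${\mathcal L}_{\partial/\partial x_i}(f\,dx_j)=\frac{\partial f}{\partial x_i}\,dx_j$, so
$$\sum_idx_i\wedge{\mathcal L}_{\partial/\partial x_i}(f\,dx_j)=df\wedge dx_j=d(f\,dx_j),$$
and the general case follows from the two properties just mentioned.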
Then
\begin{eqnarray*}
e^{-z} d\, e^z \cdot \varphi &=&e^{-z} \sum_idx_i\wedge{\mathcal L}_{\partial/\partial x_i}( e^z \cdot \varphi)\\
&=&\sum_i(e^{-z}\cdot dx_i\cdot e^z)\cdot e^{-z}{\mathcal L}_{\partial/\partial x_i} (e^z \cdot \varphi)
\end{eqnarray*}
where we have rewritten the exterior product by a 1-form as a Clifford product. This expands to
$$\sum_i(e^{-z}\cdot dx_i\cdot e^z)((e^{-z}{\mathcal L}_{\partial/\partial x_i}e^z) \varphi+{\mathcal L}_{\partial/\partial x_i} \varphi).$$
Now $u_i=e^{-z}\cdot dx_i\cdot e^z$ is a local section of $\mathbf{C}liff^1$ and $a_i=(e^{-z}{\mathcal L}_{\partial/\partial x_i}e^z)$ is a section of the Lie algebra bundle in $\mathbf{C}liff^2$, so we have
\begin{equation}
e^{-z} d \,e^z \varphi =\sum_i u_i\cdot a_i\cdot\varphi+u_i\cdot {\mathcal L}_{\partial/\partial x_i} \varphi
\label{dtwist}
\end{equation}
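As a consistency check of (\ref{dtwist}), take $z=B$ a 2-form, as in the parenthetical remark above. Exterior multiplication by a 2-form commutes with exterior multiplication by a 1-form, so $u_i=e^{-B}\cdot dx_i\cdot e^{B}=dx_i$, while $a_i=e^{-B}{\mathcal L}_{\partial/\partial x_i}e^{B}$ is exterior multiplication by ${\mathcal L}_{\partial/\partial x_i}B$. Then (\ref{dtwist}) reads
$$e^{-B} d \,e^{B} \varphi=\sum_i dx_i\wedge({\mathcal L}_{\partial/\partial x_i}B)\wedge\varphi+d\varphi=dB\wedge\varphi+d\varphi,$$
recovering the twisted differential $d+H$ with $H=dB$.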
\noindent 4. We need to consider the action of $\exp z$ on the pair $J_1(0),J_2(0)$ and it is convenient to see this via the pair of pure spinors $(\psi,\varphi)=(e^{i\omega}, dz_1\wedge\dots\wedge dz_m)$ -- the first is global, the second only local. The Lie derivative action on $T\oplus T^*$ preserves the inner product and the pair $(\psi,\varphi)$ at each point lie in an $SO(2m,2m)$ orbit, so
$${\mathcal L}_{\partial/\partial x_i} (\psi,\varphi)=(c_i\cdot\psi,c_i\cdot \varphi)$$
for some local section $c_i$ of $\mathbf{C}liff^2$. From equation (\ref{dtwist}) we then have
$$e^{-z} d \,e^z (\psi,\varphi)=(h\cdot\psi,h\cdot\varphi)$$
for some $h=\sum_i u_i\cdot(a_i+c_i)$ a section of $\mathbf{C}liff^3$.
But by our choice of $z$, $e^z \varphi$ defines a generalized complex structure so by the integrability criterion in Proposition \ref{int} we have $d(e^z \varphi)=w\cdot e^ z \cdot \varphi$ for some $w$ a local section of $\mathbf{C}liff^1=T\oplus T^*$. Putting $v=e^{-z}\cdot w \cdot e^z$ this gives us two equations: an algebraic condition on $h$, a section of $\mathbf{C}liff^3$, that $v\cdot \varphi=h\cdot \varphi$ for some $v$ a section of $\mathbf{C}liff^1$, and the differential equation
\begin{equation}
e^{-z} d( e^z \psi)=h\cdot\psi.
\label{deq}
\end{equation}
The important thing to note is that there is an $h$ satisfying these conditions for any $b(t)=tb_1+t^2b_2+\dots$.
\noindent 5. We now need to identify the objects on the right hand side of this equation, which is the second item in the following lemma:
\begin{lemma} \label{hlemma} Let $b\in\mathbf{C}liff^2$ and $h\in \mathbf{C}liff^3$ be real elements, then
\noindent (i) if $b$ preserves $dz_1\wedge\dots\wedge dz_m$ up to a scalar multiple, then $b\cdot e^{i\omega}\in (\Lambda^0+\Lambda^{1,1})\wedge e^{i\omega}$.
\noindent (ii) If $h$ satisfies $h\cdot dz_1\wedge\dots\wedge dz_m=v\cdot dz_1\wedge\dots\wedge dz_m$ for some $v\in \mathbf{C}liff^1$ then $h\cdot e^{i\omega}\in (\Lambda^1+\Lambda^{2,1}+\Lambda^{1,2})\wedge e^{i\omega}$.
\end{lemma}
\begin{lemprf}
\noindent (i) If $X$ is any tangent vector, $i_Xe^{i\omega}=i(i_X\omega)\wedge e^{i\omega}$ so the action of $b$ can always be realized as the exterior product by some 0-form plus 2-form. The $(1,1)$ forms annihilate $dz_1\wedge\dots\wedge dz_m$ and so $\Lambda^{1,1}\wedge e^{i\omega}$ is in the image. Terms in $(\Lambda^{2,0}+\Lambda^{0,2})\wedge e^{i\omega}$ can arise from real linear combinations of the real and imaginary parts of
$$\frac{\partial}{\partial z_i}\cdot \frac{\partial}{\partial z_j}\qquad\frac{\partial}{\partial z_i}\cdot d\bar z_j \qquad d\bar z_i\cdot
d\bar z_j$$
But applied to $dz_1\wedge\dots\wedge dz_m$ these give non-zero terms with the respective $(p,q)$ types $(m-2,0), (m-1,1), (m,2)$ and so do not preserve the pure spinor.
\noindent (ii) If $v\in \mathbf{C}liff^1$ then $v\cdot dz_1\wedge\dots\wedge dz_m$ is of type $(m-1,0)+(m,1)$. As in (i), the action of $h$ on $e^{i\omega}$ is the exterior product of some 1-form plus 3-form. Forms of type $(2,1)+(1,2)$ annihilate $dz_1\wedge\dots\wedge dz_m$ by exterior multiplication so $(\Lambda^{2,1}+\Lambda^{1,2})\wedge e^{i\omega}$ is in the image. This time terms in $(\Lambda^{3,0}+\Lambda^{0,3})\wedge e^{i\omega}$ can arise from combinations of
$$\frac{\partial}{\partial z_i}\cdot \frac{\partial}{\partial z_j}\cdot \frac{\partial}{\partial z_k}\qquad\frac{\partial}{\partial z_i}\cdot \frac{\partial}{\partial z_j}\cdot d\bar z_j \qquad \frac{\partial}{\partial z_i}\cdot d\bar z_j \cdot d\bar z_k\qquad d\bar z_i\cdot
d\bar z_j\cdot d\bar z_k$$
but these applied to $dz_1\wedge\dots\wedge dz_m$ give non-zero forms with the $(p,q)$ types $(m-3,0)$, $(m-2,1),(m-1,2),(m,3)$ and not the required types $(m-1,0),(m,1)$.
It is easy to see that when $b$ or $h$ are real, we can realize the action by a real form.
\end{lemprf}
\noindent 6. We now begin the term-by-term solution to the equation $d( e^z \psi)=0$. Recall that
$$e^{z(t)}=e^{ta}e^{b(t)}=1+t(a+b_1)+\dots$$ and so
$$e^{-z} d( e^z \psi)=td((a+b_1)\psi)+O(t^2)$$
so the first task is to solve $d((a+b_1)\psi)=0$ for $b_1$. If we put all the $b_i=0$ in Equation (\ref{deq}), and use item (ii) in Lemma \ref{hlemma} then we see that
$$d(a\psi)\in (\Omega^1+\Omega^{2,1}+\Omega^{1,2})\wedge \psi.$$
But exterior product with $\psi=e^{i\omega}$ is invertible and commutes with $d$, so we have an exact form in $\Omega^1+\Omega^{2,1}+\Omega^{1,2}$ and, to find $b_1$, from item (i) in the lemma we want this to lie in $d(\Omega^0+\Omega^{1,1})$.
The 1-form part is obvious. Write the component in $\Omega^{2,1}+\Omega^{1,2}$ as the exact form $d(\alpha^{2,0}+\alpha^{1,1}+\alpha^{0,2})$; since there is no $(0,3)$ component, $\bar\partial \alpha^{0,2}=0$, so $\partial \alpha^{0,2}$ is $\bar\partial$-closed and $\partial$-exact. So by the $\partial\bar\partial$-lemma (valid for K\"ahler manifolds) we can write
$$\partial \alpha^{0,2}=\partial \bar\partial\theta^{0,1}$$
and so the $(1,2)$ component of the exact form can be written as
$$\bar\partial \alpha^{1,1}+\partial \alpha^{0,2}=\bar\partial(\alpha^{1,1}- \partial\theta^{0,1})$$
and then $\alpha^{1,1}- \partial\theta^{0,1}-\overline{\partial\theta^{0,1}}$ is the required $(1,1)$ form.
\noindent 7. In general suppose we have inductively found $b_1,\dots,b_{k-1}$ so that $d(e^{z(t)}\psi)_i=0$ for $i<k$, then
$$(e^{-z(t)} d( e^{z(t)} \psi))_k=\sum_{i+j=k}(e^{-z(t)})_jd( e^{z(t)} \psi)_i=(d(e^{ z(t)} \psi))_k.$$
Then using (\ref{deq}) and the $\partial\bar\partial$-lemma again we can solve for $b_k$.
\end{prf}
\subsection{Deformation of the bihermitian structure}\label{deform}
According to Gualtieri's theorem, Goto's result gives a pair of complex structures $I_+,I_-$ starting from a K\"ahlerian holomorphic Poisson manifold. One may ask where these are coming from, and we can answer this at the infinitesimal level.
Recall that given a smooth family $I(t)$ of complex structures the derivative $I'$ satisfies $ I' I+I I'=0$ and defines a $\bar\partial$-closed section of $T^{1,0}\otimes (T^*)^{0,1}$ which gives the Kodaira-Spencer class in the Dolbeault cohomology group $[I']\in H^1(M,T)$. More concretely, if we write
$$\left(\frac{\partial}{\partial \bar z_j}\right)'= \alpha_{\bar j k}\frac{\partial}{\partial z_k}+\beta_{\bar j \bar k}\frac{\partial}{\partial \bar z_k}$$
then the Kodaira-Spencer class is represented by
$$ \alpha_{\bar j k}\frac{\partial}{\partial z_k}\,d\bar z_j$$
We defined the complex structure $I_+$ in Theorem \ref{Gthm} in terms of the intersection of the $+i$ eigenspace of $J_1$ and the $-i$ eigenspace of $J_2$: in the K\"ahler case this gives sections of $(T\oplus T^*)\otimes \mathbf{C}$ spanned by terms of the form
$$u_j=\frac{\partial}{\partial \bar z_j}+i\omega_{\bar j k}dz_k.$$
Deform this and we are looking for sections $u_j(t)$ such that
$u_j\cdot \psi=0$ and $u_j\cdot \varphi=0$. So differentiating with respect to $t$ at $t=0$,
$u'_j\cdot \psi+u_j\cdot \psi'=0$ and $ u_j' \cdot \varphi+u_j\cdot \varphi'=0$.
Now $\varphi(t)=e^{at}dz_1\wedge\dots\wedge dz_m$, so
$$\varphi'=a\cdot dz_1\wedge\dots\wedge dz_m= \sigma^{pq}i_{\partial/\partial z_p}i_{\partial/\partial z_q}dz_1\wedge\dots\wedge dz_m$$
and hence, considering the $(m-1,0)$ component of the equation $ u_j' \cdot \varphi+u_j\cdot \varphi'=0$ we get
$$\alpha_{\bar j k}i_{\partial/\partial z_k}dz_1\wedge\dots\wedge dz_m+i\omega_{\bar j k}dz_k\wedge \sigma^{p q}i_{\partial/\partial z_{p}}i_{\partial/\partial z_q}dz_1\wedge\dots\wedge dz_m=0.$$
But
$$i_{\partial/\partial z_{p}}(d z_k\wedge i_{\partial/\partial z_{q}}dz_1\wedge\dots\wedge dz_m)=\delta_{pk} i_{\partial/\partial z_{q}}dz_1\wedge\dots\wedge dz_m-dz_k\wedge i_{\partial/\partial z_{p}}i_{\partial/\partial z_q}dz_1\wedge\dots\wedge dz_m$$
and
$$d z_k\wedge i_{\partial/\partial z_{q}}dz_1\wedge\dots\wedge dz_m=\delta_{qk}dz_1\wedge\dots\wedge dz_m.$$
It follows that
$$\alpha_{\bar j k}=-2i\omega_{\bar j \ell}\sigma^{\ell k}.$$
More invariantly, the K\"ahler form $\omega$ defines a Dolbeault class in $H^1(M,T^*)=H^{1,1}$ and the Poisson tensor $\sigma$ lies in $H^0(M,\Lambda^2T)$; then the Kodaira-Spencer class in $H^1(M,T)$ for the deformation $I_+(t)$ is the cup product combined with contraction ${\sigma}\omega$. If we do the same for $I_-(t)$ we get the negative of this class.
\begin{exs}
\noindent 1. For $\mathbf{C}P^2$, the bundle $\Lambda^2 T$ is isomorphic to ${\mathcal O}(3)$, and a holomorphic Poisson structure is defined by a section of this, which vanishes on a cubic curve. As mentioned earlier, blowing up $r$ points on this curve, the proper transform is again an anticanonical divisor and so defines a Poisson structure. If $E_1,\dots, E_{r}$ are the cohomology classes of the exceptional curves on the blow-up and $H$ the pull-back of a generator of $H^2(\mathbf{C}P^2,\mathbf{Z})$ then a K\"ahler form $\omega$ has a cohomology class of the form
$$[\omega]=aH+\sum_1^rc_iE_i$$
where $a>0$ and $c_i<0$. In this case the Kodaira-Spencer class for $I_+$ above consists of moving the points along the cubic curve with velocities (relative to a trivialization of the tangent bundle of the elliptic curve) proportional to $c_i$. For $I_-$, since $c_i$ have the same sign, the points all move in the opposite direction.
\noindent 2. If the cohomology class of the K\"ahler form is a multiple of $c_1(T)$ then, as we shall see in the next lecture, the Kodaira-Spencer class is zero. This is consistent with some concrete constructions of one-parameter families of bihermitian metrics on Fano manifolds in \cite{NJH4} and \cite{MG3}, where $I_+(t)$ and $I_-(t)$ are all equivalent under a diffeomorphism.
\end{exs}
One corollary of Goto's theorem is that a Kodaira-Spencer class of the form just described is {\it unobstructed} -- there exists a one parameter family integrating it. This is reminiscent of the Tian-Todorov result on Calabi-Yau manifolds. (In fact the analogy is close since Goto's theorem is a deformation theorem of generalized Calabi-Yau manifolds: the generalized complex structure $J_2(t)$ is defined by a closed pure spinor.) One can see this, however, directly without the language of generalized geometry:
\begin{prp} Let $M$ be a compact complex manifold with K\"ahler form $\omega$ and holomorphic Poisson tensor $\sigma$. Then the Kodaira-Spencer class $\sigma\omega\in H^1(M,T)$ is unobstructed.
\end{prp}
\begin{prf} For the deformation of a complex structure, one looks for a global $\phi\in \Omega^{0,1}(T)$ which satisfies the equation
$$\bar\partial\phi =[\phi,\phi]$$
where the bracket is the Lie bracket on vector fields together with exterior product on $(0,1)$-forms, and, as above, one solves this (if possible) term-by-term for a series $\phi(t)=t\phi_1+t^2\phi_2+\dots$ where $\phi_1$ represents the initial Kodaira-Spencer class.
The coefficient of $t^2$ requires a solution for $\phi_2$ of
\begin{equation}
\bar\partial \phi_2=[\phi_1,\phi_1].
\label{ob}
\end{equation}
Now locally we have the K\"ahler potential
$$\omega_{\bar j \ell}=\frac{\partial^2 f}{\partial z_{\ell} \partial \bar z_j}$$
and then writing $f_{\bar j}=\partial f/\partial \bar z_j$,
$$\phi_1=\omega_{\bar j \ell}\sigma^{\ell k}\frac{\partial}{\partial z_k}\,d\bar z_j=\sigma^{\ell k}\frac{\partial^2 f}{\partial z_{\ell} \partial \bar z_j}\frac{\partial}{\partial z_k}\,d\bar z_j=\sigma(\partial f_{\bar k})d\bar z_k.$$
Now $\sigma$ is holomorphic and so is insensitive to $\bar\partial$ operators, therefore
$$[\phi_1,\phi_1]=\sigma(\partial\{f_{\bar k},f_{\bar \ell}\})d\bar z_k\wedge d\bar z_{\ell}$$
using the integrability of $\sigma$.
The Poisson bracket expression $\{f_{\bar k},f_{\bar \ell}\}d\bar z_k\wedge d\bar z_{\ell}$ looks as if it depends on the choice of local potential $f$, but in fact
$$\{f_{\bar k},f_{\bar \ell}\}=\sigma^{ij}f_{i\bar k}f_{j \bar \ell}=\sigma^{ij}\omega_{i\bar k}\omega_{j\bar \ell},$$
or, more invariantly, the expression is $i_{\sigma}(\omega\wedge\omega)$, which is globally defined. Thus $\partial i_{\sigma}(\omega\wedge\omega)$, which gives $[\phi_1,\phi_1]$ on applying the holomorphic tensor $\sigma$, is $\bar\partial$-closed and $\partial$-exact and so, by the $\partial\bar\partial$-lemma
$$\partial i_{\sigma}(\omega\wedge\omega)=\bar\partial\partial\alpha$$
for some $(0,1)$-form $\alpha$. It follows that
$$[\phi_1,\phi_1]=\sigma(\partial i_{\sigma}(\omega\wedge\omega))=\sigma(\bar\partial\partial\alpha)=\bar \partial( \sigma(\partial\alpha))$$
and we take $\phi_2=\sigma(\partial\alpha)$ to solve Equation (\ref{ob}).
\vskip .25cm
The coefficient of $t^3$ typifies the inductive process: we need
$$\bar\partial \phi_3=[\phi_1,\phi_2]+[\phi_2,\phi_1].$$
Write $\alpha=g_{\bar \ell}d\bar z_{\ell}$. Then $[\phi_1,\phi_2]=\sigma(\partial \{f_{\bar k},g_{\bar \ell}\})d\bar z_k\wedge d\bar z_{\ell}$ and
$$\{f_{\bar k},g_{\bar \ell}\}d\bar z_k\wedge d\bar z_{\ell}=i_{\sigma}(\omega\wedge \partial\alpha)$$
and we proceed as above.
\end{prf}
\section{Generalized holomorphic bundles}
\subsection{Basic features}
The analytic viewpoint of a holomorphic vector bundle $V$ on a complex manifold was established by Malgrange (see \cite{DK} Section 2.2.2 for a simple proof). The existence of sufficiently many local holomorphic sections is equivalent to the existence of a differential operator $\bar\partial_A:\Omega^0(V)\rightarrow \Omega^{0,1}(V)$ such that $\bar\partial_A(fs)=\bar\partial f s+f\bar\partial_A s$ and such that the standard extension to forms $\bar\partial_A^2:\Omega^0(V)\rightarrow \Omega^{0,2}(V)$ vanishes. Gualtieri introduced the analogous concept in generalized complex geometry:
\begin{definition} A generalized holomorphic bundle on a generalized complex manifold $(M,J)$ is a vector bundle $V$ with a differential operator $\bar D:C^{\infty}(V)\rightarrow C^{\infty}(V\otimes E^{0,1})$ such that for a smooth function $f$ and section $s$
\begin{itemize}
\item
$\bar D(fs)=\bar\partial_J \!f s+f\bar Ds$
\item
$\bar D^2: C^{\infty}(V)\rightarrow C^{\infty}(V\otimes \Lambda^2E^{0,1})$ vanishes.
\end{itemize}
\end{definition}
Given a local trivialization $s_1,\dots,s_k$ of $V$ we obtain a ``connection matrix" $A_{ij}$ with values in $E^{0,1}$ defined by
$$\bar D s_i=A_{ji}s_j$$
and then the condition $\bar D^2=0$ is $\bar\partial_J A+A\cdot A=0$.
\begin{rmk} For an ordinary holomorphic bundle we have a Dolbeault complex
$$\rightarrow \Omega^{0,p}(V)\stackrel{\bar\partial}\rightarrow\Omega^{0,p+1}(V)\rightarrow $$
and by the same token there is a generalized version
$$\rightarrow C^{\infty}(V\otimes \Lambda^p E^{0,1})\stackrel{\bar D}\rightarrow C^{\infty}(V\otimes \Lambda^{p+1} E^{0,1})\rightarrow .$$
\end{rmk}
A universal example of a generalized holomorphic bundle is the canonical bundle of the generalized complex structure -- the
subbundle $K\subset \Lambda^*T^*\otimes\mathbf{C}$ of multiples of pure spinors whose annihilator is $E^{1,0}$. To see this recall Proposition \ref{int}, where we saw that $J$ was integrable if and only if for any local non-vanishing section $\psi$ of $K$, we have $d\psi=w\cdot\psi$ for some local section $w$ of $(T\oplus T^*)\otimes \mathbf{C}$. The $(1,0)$ component of $w$ annihilates $\psi$ and the $(0,1)$ component is unique because $E^{1,0}\cap E^{0,1}=0$, so we may as well assume $w$ lies in $E^{0,1}$.
A global section $s$ of $K$ can be written locally as $f\psi$ and we define
$$\bar D s=(\bar\partial_Jf) \psi+f w\cdot\psi.$$
If $\psi_1=g\psi$ is another local section, then $w_1=g^{-1}dg+w$, $f=f_1g$ and one can easily check that $\bar D s$ is well-defined.
We need also the condition $\bar D^2=0$ which means we need to prove that $\bar\partial_J w=0$. Let $u=X+\xi, v=Y+\eta$ be sections of $E^{1,0}$, then $u\cdot\psi=0=v\cdot\psi$ and
$${\bf L}_u\psi=d(u\cdot\psi)+u\cdot d \psi=u\cdot w\cdot\psi=2(u,w)\psi$$
and
$${\bf L}_v{\bf L}_u\psi=(2Y(u,w)+4(u,w)(v,w))\psi.$$
Hence
$$({\bf L}_u{\bf L}_v-{\bf L}_v{\bf L}_u)\psi=2(X(v,w)-Y(u,w))\psi.$$
But $({\bf L}_u{\bf L}_v-{\bf L}_v{\bf L}_u)\psi={\bf L}_{[u,v]}\psi$ (see the proof of Proposition \ref{jac}) and since, by integrability of $J$, $[u,v]\cdot \psi=0$ we also have
${\bf L}_{[u,v]}\psi=2([u,v],w)\psi$.
Hence $([u,v],w)=X(v,w)-Y(u,w)$ which from the definition (\ref{da}) is $\bar\partial_J w=0$.
From this we see also that a generalized Calabi-Yau manifold, which we have defined as having a global closed $\psi$, can also be thought of as being defined by a global non-vanishing generalized holomorphic section of the canonical bundle.
For the specific examples of generalized complex structures -- symplectic, complex, holomorphic Poisson -- we can determine what a generalized holomorphic bundle means:
\begin{exs}
\noindent 1. On a symplectic manifold $E^{0,1}$ is spanned by terms
$$\frac{\partial}{\partial x_j}+i\omega_{jk}dx_k$$
or equivalently, inverting $\omega_{ij}$, by
$$dx_i-i\omega_{ij}\frac{\partial}{\partial x_j}.$$
Thus the 1-form part of the connection matrix for a generalized holomorphic bundle
$$A_i(dx_i-i\omega_{ij}\frac{\partial}{\partial x_j})$$
defines an ordinary connection and $\bar D^2=0$ implies it is a flat connection.
\noindent 2. Now consider a complex manifold regarded as a generalized complex manifold. One might think that generalized holomorphic bundles are just ordinary holomorphic bundles, but this is not quite the full picture, although they do provide examples.
Since $E^{0,1}=\bar T^*\oplus T$ (where now $T$ is the holomorphic tangent bundle)
$$\bar Ds=\bar\partial_A s+\phi s=\left(\frac{\partial s}{\partial \bar z_{j}}+A_{\bar j}s\right) d\bar z_{j}+\phi^{k} s \frac{\partial }{\partial z_{k}}$$
and $\bar D^2=0$ implies
$$\bar\partial_A^2=0 \in \mathop{\rm End}\nolimits V\otimes \Lambda^2\bar T^*,\quad \bar\partial_A \phi=0 \in \mathop{\rm End}\nolimits V\otimes \bar T^*\otimes T,\quad \phi^2=0\in \mathop{\rm End}\nolimits V\otimes \Lambda^2T.$$
The first condition gives $V$ the structure of a holomorphic vector bundle, the second says that $\phi$ is a holomorphic section of $\mathop{\rm End}\nolimits V\otimes T$, and the third is an algebraic condition on $\phi$. Since
$$\phi^2=\frac{1}{2}[\phi^{j},\phi^{k}]\frac{\partial }{\partial z_{j}}\wedge \frac{\partial }{\partial z_{k}}$$
this ``integrability" condition is $[\phi^{j},\phi^{k}]=0$.
We call these {\it co-Higgs bundles}. A Higgs bundle in the sense of C.Simpson \cite{S} is the same definition with $T$ replaced by $T^*$.
\noindent 3. The generalized complex structure determined by a holomorphic Poisson tensor has
$E^{0,1}$ spanned by
$$\frac{\partial}{\partial z_1}, \frac{\partial}{\partial z_2},\dots, d\bar z_1-\bar\sigma(d\bar z_1), d\bar z_2-\bar\sigma(d\bar z_2),\dots$$
and the $\bar\partial_J$ operator is
$$\bar\partial_Jf=\bar\partial f+\sigma(\partial f)-\bar\sigma(\bar\partial f).$$
We then write $\bar D$ as
$$\bar Ds=\left(\frac{\partial s}{\partial \bar z_{j}}+A_{\bar j}s\right) (d\bar z_{j}-\bar\sigma(d\bar z_j))+\frac{\partial s}{\partial z_{j}}\sigma(dz_j)+\phi^{k} s \frac{\partial }{\partial z_{k}}.$$
Again $A_{\bar j}$ defines a holomorphic structure on $V$ and then, in a local holomorphic basis, the operator is
$$\bar Ds=\frac{\partial s}{\partial z_{j}}\sigma(dz_j)+\phi^{k} s \frac{\partial }{\partial z_{k}}.$$
This is a first order holomorphic differential operator from $V$ to $V\otimes T$ whose symbol is $1\otimes \sigma:V\otimes T^*\rightarrow V\otimes T$. We can define the action of a local holomorphic function $f$ on a section $s$ by
$$f\cdot s=\langle \bar D s,df\rangle$$
and then the condition $\bar D^2=0$ says that
$$g\cdot f\cdot s-f\cdot g\cdot s=\{g,f\}\cdot s.$$
Such a holomorphic bundle is called a {\it Poisson module}.
\begin{rmk} Note that, given a co-Higgs bundle $(V,\phi)$ we can define an action of $f$ on a local section by
$$f\cdot s=\phi(df)s$$
and then the $\phi^2=0$ condition says that $g\cdot f\cdot s-f\cdot g\cdot s=0$. We can thus interpret a co-Higgs bundle as a Poisson module for the zero Poisson structure.
\end{rmk}
\end{exs}
In the next two sections we shall examine the last two examples in more detail.
\subsection{Co-Higgs bundles}\label{coH}
A co-Higgs bundle is, as we have seen, defined by a pair consisting of a holomorphic vector bundle $V$ and an endomorphism $\phi$, twisted by the tangent bundle. When studying such pairs one usually imposes a stability condition in order to construct a Hausdorff moduli space. On a K\"ahler manifold one can define the degree of a line bundle and the slope $\mathop{\rm deg}\nolimits \Lambda^kV/k$ of a vector bundle $V$ of rank $k$. The stability condition for Higgs bundles in \cite{S} is that the slope of any $\phi$-invariant torsion-free subsheaf should be less than the slope of $V$. This makes perfectly good sense whether one takes the tangent bundle or the cotangent bundle but the manifolds which support such stable objects are quite different. In one dimension, for example, the main interest in the case of Higgs bundles lies with genus $g>1$, for in that case there is a link with representations of the fundamental group. In the co-Higgs case there are no stable objects with $\phi\ne 0$ in higher genus. The point is that given a section $s$ of the $g$-dimensional space $H^0(M,K)$ of differentials, if $\phi\in H^0(M,\mathop{\rm End}\nolimits V\otimes K^*)$ then $\phi s$ is an endomorphism which commutes with $\phi$ and stable objects do not have any of these other than the scalars.
\begin{exs}
\noindent 1. In rank one, a co-Higgs bundle is just a line bundle $V=L$ together with a vector field $\phi$.
\noindent 2. If $V={\mathcal O}\oplus T$ then there is a canonical co-Higgs structure where $\phi(\lambda,X)=(X,0)$. Since the trivial bundle is invariant, we require $\mathop{\rm deg}\nolimits T>0$ and $T$ itself to be stable for stability of the co-Higgs bundle.
\end{exs}
We refer the reader to the forthcoming Oxford DPhil thesis of Steven Rayan for more results about co-Higgs bundles, but here we shall give some examples on projective spaces slightly more interesting than those above.
\begin{ex} Consider $M=\mathbf{C}P^m={\mathbf {\rm P}}(W)$. The tangent bundle fits into the Euler sequence of holomorphic bundles
$$0\rightarrow {\mathcal O}\rightarrow W(1)\rightarrow T\rightarrow 0$$
from which we obtain $W\cong H^0(\mathbf{C}P^m,T(-1))$. We also see that $\Lambda^m T\cong {\mathcal O}(m+1)$ and hence
$$T^*\cong \Lambda^{m-1}T\otimes \Lambda^mT^*\cong \Lambda^{m-1}T(-(m+1)).$$
This means that
$$T\otimes T^*(1)\cong T\otimes \Lambda^{m-1}T(-m)=T(-1)\otimes \Lambda^{m-1}(T(-1))$$
and from the $(m+1)$-dimensional space of sections of $T(-1)$ we can construct by tensor and exterior product many sections, not just scalars, of $T\otimes T^*(1)$. Take one, $\psi$, and a section $w$ of $T(-1)$. Then set
$$\phi= \psi\otimes w\in H^0(\mathbf{C}P^m,\mathop{\rm End}\nolimits T\otimes T).$$
By construction, $\phi^2= [\psi,\psi]\otimes w\wedge w=0$, and the tangent bundle itself is stable so this gives plenty of examples of co-Higgs bundles on projective space.
\end{ex}
\vskip .25cm
The simplest concrete example, where we can write down the moduli space, is the case of the bundle $V={\mathcal O}\oplus {\mathcal O}(-1)$ on $\mathbf{C}P^1$. Since $\Lambda^2T=0$, there is no integrability condition on $\phi$ in one dimension.
Here the tangent bundle is ${\mathcal O}(2)$ and so we must have
$$\phi=\pmatrix{a & b\cr
c & -a}$$
where $a,b,c$ are sections of ${\mathcal O}(2), {\mathcal O}(3), {\mathcal O}(1)$ respectively. Since the degree and rank of $V$ are coprime, there are no strictly semi-stable objects, which means, in this one-dimensional case, that the moduli space is smooth. We first define a canonical six-dimensional complex manifold. We denote by $p:T\mathbf{C}P^1\rightarrow \mathbf{C}P^1$ the projection and $\eta \in H^0(T\mathbf{C}P^1,p^*T)$ the tautological section. This is a section of $p^*{\mathcal O}(2)$. Now define
$${\mathcal M}=\{(x, s)\in T\mathbf{C}P^1\times H^0(\mathbf{C}P^1,{\mathcal O}(4)): \eta^2(x)=s(p(x))\}.$$
\begin{prp} ${\mathcal M}$ is naturally isomorphic to the moduli space of stable rank 2 trace zero co-Higgs bundles of degree $-1$ on $\mathbf{C}P^1$.
\end{prp}
\begin{prf}
\noindent 1. First note that any vector bundle on $\mathbf{C}P^1$ is a sum of line bundles by the Birkhoff-Grothendieck theorem. If the decomposition is ${\mathcal O}(m)\oplus {\mathcal O}(-1-m)$, then
$a,b,c$ are sections of ${\mathcal O}(2), {\mathcal O}(2m+3), {\mathcal O}(1-2m)$.
If $c$ is zero, then ${\mathcal O}(m)$ is $\phi$-invariant, so by stability $m< -1/2$. If $m=-1$, then by changing the order of the subbundles we are in the same situation. If $m\le -2$ then $b$ is a section of a line bundle of negative degree and so vanishes -- then the invariant subbundle ${\mathcal O}(-1-m)$ contradicts stability. Thus the vector bundle in this moduli space is always $V={\mathcal O}\oplus {\mathcal O}(-1)$.
\noindent 2. Since $c$ is a non-zero section of ${\mathcal O}(1)$ it vanishes at a distinguished point $z=z_0$. Then $a(z_0)$ is a point $x$ in the total space of ${\mathcal O}(2)=T\mathbf{C}P^1$. It is well-defined because an automorphism of ${\mathcal O}\oplus {\mathcal O}(-1)$ is defined by
$$\pmatrix{A & B\cr
0 & C}$$
where $A,B,C$ are sections of ${\mathcal O}, {\mathcal O}(1), {\mathcal O}$ and where $c=0$ the action on $a$ is trivial.
\noindent 3. The determinant $\det \phi$ is a section of ${\mathcal O}(4)$, and at $z=z_0$, where $c$ vanishes, we have $\det\phi(z_0)=-a(z_0)^2$, so set $s=-\det\phi$. Then $\eta^2(x)=a(z_0)^2=s(p(x))$, so $(x,s)\in {\mathcal M}$, and this defines a map from the moduli space to ${\mathcal M}$.
\noindent 4. In the reverse direction, choose an affine parameter $z$ on $\mathbf{C}P^1$ such that the point $x$ maps to $z=0$ and write $s(z)=a_0^2+zb(z)$ where $b(z)$ is a cubic polynomial. Then $\eta^2(0)=a_0^2$ so $\eta(0)=\pm{a_0}$ and
$$\pmatrix{\eta(0) & b(z)\cr
z & -\eta(0)}$$
is a representative co-Higgs field.
\vskip .25cm
Note that ${\mathcal M}$ is a fibration of elliptic curves $y^2=c_0+c_1z+\dots +c_4z^4$ over the five-dimensional vector space of coefficients $c_0,\dots,c_4$. We shall see this again when we consider the B-field action in the next lecture.
\end{prf}
\begin{rmk} We saw in Section \ref{dbarsec} that the $\bar\partial_J$ complex for an ordinary complex structure was defined by
$$\Omega^{0,q}(\Lambda^pT)\stackrel{\bar\partial}\rightarrow \Omega^{0,q+1}(\Lambda^pT).$$
For a co-Higgs bundle $(V,\phi)$ the $\bar D$ complex is defined by $\bar\partial+\phi$ where
$$\bar\partial: \Omega^{0,q}(V\otimes \Lambda^pT)\rightarrow \Omega^{0,q+1}(V\otimes \Lambda^pT)$$
and
$$\phi: \Omega^{0,q}(V\otimes \Lambda^pT)\rightarrow \Omega^{0,q}(V\otimes \Lambda^{p+1}T).$$
\end{rmk}
Note that the total degree $p+(q+1)=(p+1)+q$ is preserved. It is easy to see that
the cohomology of the $\bar D$ complex is the hypercohomology of the complex of sheaves
$$\dots \rightarrow {\mathcal O}(V\otimes \Lambda^pT)\stackrel{\phi}\rightarrow {\mathcal O}(V\otimes \Lambda^{p+1}T)\rightarrow\dots$$
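For computations it is sometimes useful to note (standard homological algebra, recorded here only for orientation) that this hypercohomology comes with a spectral sequence
$$E_1^{p,q}=H^q(M,{\mathcal O}(V\otimes \Lambda^pT))\ \Rightarrow\ \mathbf{H}^{p+q},$$
with $d_1$ induced by $\phi$.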
\subsection{Holomorphic Poisson modules}
We observed that a holomorphic Poisson module is a holomorphic vector bundle $V$ with a
first order holomorphic linear differential operator
$$\bar D:{\mathcal O}(V)\rightarrow {\mathcal O}(V\otimes T)$$
whose symbol is $1\otimes\sigma: V\otimes T^*\rightarrow V\otimes T$.
Relative to a local holomorphic basis $s_i$ of $V$, $\bar D$ is defined by a ``connection matrix" $A$ of vector fields:
$$\bar Ds_i= s_j\otimes A_{ji}.$$
When $\sigma$ is non-degenerate it identifies $T$ with $T^*$ and then $\bar D$ is a flat holomorphic connection.
\begin{ex} If $X=\sigma(df)$ is the Hamiltonian vector field of $f$ then the Lie derivative ${\mathcal L}_X$ acts on tensors but the action in general involves the second derivative of $f$. However for the canonical line bundle $K=\Lambda^nT^*$ we have
$${\mathcal L}_X(dz_1\wedge\dots\wedge dz_n)=\frac{\partial X_i}{\partial z_i} (dz_1\wedge\dots\wedge dz_n)$$
and, since $\sigma^{ij}$ is skew-symmetric,
$$\frac{\partial X_i}{\partial z_i}=\frac{\partial}{\partial z_i}\left(\sigma^{ij}\frac{\partial f}{\partial z_j}\right)=\frac{\partial \sigma^{ij}}{\partial z_i}\frac{\partial f}{\partial z_j}$$
which involves only the first derivative of $f$. Thus
$$\{f,s\}={\mathcal L}_Xs=\langle \bar Ds, df\rangle$$
defines a first order operator. The second condition for a Poisson module follows from the integrability of the Poisson structure: since $\sigma(df)=X, \sigma(dg)=Y$ implies $\sigma(d\{f,g\})=[X,Y]$, it follows that
$$\{\{f,g\},s\}={\mathcal L}_{[X,Y]}s=[{\mathcal L}_X,{\mathcal L}_Y]s=\{f,\{g,s\}\}-\{g,\{f,s\}\}.$$
This clearly holds for any power $K^m$.
\end{ex}
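Concretely, in the local basis $s=dz_1\wedge\dots\wedge dz_n$ the ``connection matrix" of this Poisson module is, by a small computation from the formulas above, the single vector field
$$A=\frac{\partial \sigma^{ij}}{\partial z_i}\frac{\partial}{\partial z_j},\qquad \bar Ds=s\otimes A,$$
the divergence of $\sigma$ in these coordinates.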
\begin{rmk} A holomorphic first-order operator $\bar D:{\mathcal O}(V)\rightarrow {\mathcal O}(V\otimes T)$ is globally defined as a vector bundle homomorphism $\alpha:J^1(V)\rightarrow V\otimes T$ where $J^1(V)$ is the bundle of holomorphic 1-jets of sections of $V$. It is an extension
$$0\rightarrow V\otimes T^*\rightarrow J^1(V)\rightarrow V\rightarrow 0$$
and its extension class in $H^1(M,\mathop{\rm End}\nolimits V\otimes T^*)$, the Atiyah class, is the obstruction to splitting the sequence holomorphically. When $V$ is a line bundle, and $M$ is K\"ahler, this is the first Chern class in $H^{1,1}$.
The symbol $\sigma$ of $\bar D$ is the homomorphism $\alpha$ restricted to $V\otimes T^*\subset J^1(V)$, so
the existence of $\bar D$ means that $\sigma\in H^0(M,\mathop{\rm Hom}\nolimits(V\otimes T^*,V\otimes T))$ lifts to a class $\alpha\in H^0(M,\mathop{\rm Hom}\nolimits(J^1(V),V\otimes T))$. In the long exact cohomology sequence of the extension, this means that the map
$$\sigma: H^1(M,\mathop{\rm End}\nolimits V\otimes T^*)\rightarrow H^1(M,\mathop{\rm End}\nolimits V\otimes T)$$
applied to the Atiyah class is zero.
In the case of a line bundle this is the cup product we encountered in Section \ref{deform} applied to the first Chern class, so in particular we see that the existence of a Poisson module structure on the canonical bundle means that the image of $c_1(T)$ in $H^1(M,T)$ is zero. So, as in Section \ref{deform}, if $c_1(T)$ is represented by a K\"ahler form, Goto's theorem, to first order, keeps the complex structures $I_+,I_-$ in the same diffeomorphism class.
\end{rmk}
\vskip .25cm
The fact that the Lie derivative along Hamiltonian vector fields does not make the tangent bundle a Poisson module does not mean that it cannot be one. If we take two vector fields $X_1,X_2$ on $\mathbf{C}P^2$ then $\sigma=X_1\wedge X_2$ defines a Poisson structure. It is holomorphic symplectic where $\sigma$ is non-zero, which is away from a cubic curve $C$ -- the curve where $X_1\wedge X_2=0$ i.e. where $X_1$ and $X_2$ become linearly dependent.
Here we can step back and view $\mathbf{C}P^2$ as a generalized complex manifold: away from $C$, $\sigma^{-1}$ defines a holomorphic section of the canonical bundle $K$ which we can write as a closed complex 2-form $B+i\omega$. The generalized complex structure here is a symplectic structure $\omega$ transformed by the B-field $B$. But on such a structure, a generalized holomorphic bundle is a flat vector bundle. Now $X_1$ and $X_2$ are linearly independent away from $C$ so we can try and define $\bar D$ on $T$ by making them covariant constant i.e. $\bar D X_1=\bar D X_2=0$. Then we need to show that this extends as a holomorphic differential operator -- the $\bar D^2=0$ condition is already satisfied on an open set and so holds everywhere.
Take a local holomorphic basis $\partial/\partial z_1,\partial/\partial z_2$ for $T$ in a neighbourhood of a point of $C$, and then
$$X_i=P_{ji}\frac{\partial}{\partial z_j}$$
so
\begin{equation}
\sigma=X_1\wedge X_2=\det P\frac{\partial}{\partial z_1}\wedge \frac{\partial}{\partial z_2}.
\label{x1x2}
\end{equation}
A``connection matrix" for $\bar D$ relative to this basis is given by a matrix $A$ of vector fields such that
$$0=\bar DX_i=\bar D(P_{ji}\frac{\partial}{\partial z_j})=\sigma(dP_{ji})\frac{\partial}{\partial z_j}+P_{ji}A_{kj}\frac{\partial}{\partial z_k}$$
which has solution
$$A=-\sigma(dP)P^{-1}=-\sigma(dP)\frac{\mathop{\rm adj}\nolimits P}{\det P}.$$
From (\ref{x1x2}) this is
$${\mathop{\rm adj}\nolimits P}\left( \frac{\partial P}{\partial z_2}\frac{\partial}{\partial z_1}-\frac{\partial P}{\partial z_1}\frac{\partial}{\partial z_2}\right)$$
which is holomorphic and so $\bar D$ is well-defined.
\vskip .25cm
We can extend this argument to other rank 2 vector bundles $V$ with $\Lambda^2V\cong K^*$, so long as they have two sections $s_1,s_2$ to replace the vector fields $X_1,X_2$. A generic vector field on $\mathbf{C}P^2$ has three zeros: suppose $V$ has a section $s$ with $k$ simple zeros $x_1,\dots,x_k$ (and then the second Chern class $c_2(V)=k$). Then $s$ defines an exact sequence of sheaves
$$0\rightarrow {\mathcal O}\stackrel{s}\rightarrow {\mathcal O}(V)\rightarrow K^*\otimes {\mathcal I}
\rightarrow 0$$
where ${\mathcal I}$ is the ideal sheaf of the $k$ points. If $H^0(\mathbf{C}P^2,K^*\otimes {\mathcal I})\ne0$, in other words if there is a cubic curve $C$ passing through the $k$ points, then from the exact cohomology sequence (and using $H^1(\mathbf{C}P^2,{\mathcal O})=0$) we can find a second section $s_2$ of $V$, and if $s_1=s$, $s_1\wedge s_2$ vanishes on the curve $C$, which defines a holomorphic Poisson structure.
The {\it Serre construction} provides a means of constructing such bundles (see for example \cite{DK} Section 10.2.2). Away from the $k$ points we have an extension of line bundles
$$0\rightarrow {\mathcal O}\stackrel{s}\rightarrow {\mathcal O}(V)\rightarrow K^*
\rightarrow 0$$
which is described by a Dolbeault representative $\alpha\in \Omega^{0,1}(K)=\Omega^{2,1}$.
It extends to an extension as above if it has a singularity at each of the points of the form
$$\frac{1}{4r^4}dz_1\wedge dz_2\wedge (\bar z_2d\bar z_1-\bar z_1d\bar z_2).$$
In distributional terms $\bar\partial\alpha=\sum_i \lambda_i\delta_{x_i}=\beta$, a linear combination of delta functions at the points.
Such a sum defines a class in $H^2(M,K)$. Since $H^2(M,K)$ is dual to $H^0(M,{\mathcal O})\cong \mathbf{C}$, this class is determined by evaluating it on the function $1$. But
$$\langle \beta, 1\rangle =\sum_i\lambda_i$$
so if the $\lambda_i$ sum to zero the class is zero and one can solve $\bar\partial \alpha=\beta$ for $\alpha$.
Thus, given a cubic curve and a collection of $k$ points with non-zero scalars $\lambda_i$ whose sum is zero, we obtain a rank $2$ Poisson module with $c_2(V)=k$.
\begin{rmk} The Serre construction can also be used to generate co-Higgs bundles on $\mathbf{C}P^2$ with nilpotent Higgs field $\phi$. This time we require $\Lambda^2V\cong {\mathcal O}(1)$ and we want to solve $\bar\partial\alpha=\beta$ for a distribution defining a class in $H^2(\mathbf{C}P^2,{\mathcal O}(-1))$ which is dual to $H^0(\mathbf{C}P^2,{\mathcal O}(-2))=0$. Hence there is no constraint on the $\lambda_i$s. We obtain an extension
$$0\rightarrow {\mathcal O}\stackrel{s}\rightarrow {\mathcal O}(V)\stackrel{\pi}\rightarrow {\mathcal O}(1)\otimes {\mathcal I}\rightarrow 0.$$
Choosing a section $w$ of $T(-1)$, $v\mapsto s(w\pi(v))$ defines $\phi \in H^0(\mathbf{C}P^2, \mathop{\rm End}\nolimits V\otimes T)$ whose kernel and image lie in the trivial rank one subsheaf.
\end{rmk}
\section{Holomorphic bundles and the B-field action}
\subsection{The B-field action}
On a complex manifold we can pull back a holomorphic vector bundle by a holomorphic diffeomorphism to get a new one, but in generalized geometry we have learned that the group $\Omega^2(M)_{cl}\rtimes \mathop{\rm Diff}\nolimits(M)$ replaces the group of diffeomorphisms and in particular that a closed real $(1,1)$-form $B$ preserves the generalized complex structure determined by an ordinary complex structure. We shall study next the effect of this action on generalized holomorphic bundles.
Recall that in this case a generalized holomorphic bundle is defined by an operator
$$\bar Ds=\left(\frac{\partial s}{\partial \bar z_{i}}+A_{\bar i}s\right) d\bar z_{i}+\phi^{j} s \frac{\partial }{\partial z_{j}}$$
where $(V,\phi)$ is a co-Higgs bundle.
The transform of this by $B$ is then the operator
$$\bar Ds=\left(\frac{\partial s}{\partial \bar z_{i}}+A_{\bar i}s+\phi^{j}B_{j\bar i}s\right) d\bar z_{i}+\phi^{j} s \frac{\partial }{\partial z_{j}}.$$
More invariantly we write $i_{\phi}B\in \Omega^{0,1}(\mathop{\rm End}\nolimits V)$ for the contraction of the Higgs field $\phi\in H^0(M,\mathop{\rm End}\nolimits V\otimes T)$ with $B\in \Omega^{0,1}(T^*)$ and then we have a new holomorphic structure
$$\bar\partial_B=\bar\partial+i_{\phi}B$$
on the $C^{\infty}$ bundle $V$. But the condition $\phi^2=0$ means that $i_{\phi}B$ commutes with $\phi$ and so $\phi$ is still holomorphic with respect to this new structure: $\bar\partial_B\phi=0.$
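It is worth checking that $\bar\partial_B$ is indeed integrable; this is a short computation with the same ingredients. The curvature is
$$\bar\partial_B^2=\bar\partial(i_{\phi}B)+(i_{\phi}B)\wedge(i_{\phi}B),$$
and $\bar\partial(i_{\phi}B)=i_{\phi}(\bar\partial B)=0$ because $\phi$ is holomorphic and the closed $(1,1)$-form $B$ is $\bar\partial$-closed, while the second term has components $\frac{1}{2}[\phi^{j},\phi^{k}]B_{j\bar i}B_{k\bar \ell}\,d\bar z_i\wedge d\bar z_{\ell}$, which vanish since $\phi^2=0$. So $\bar\partial_B^2=0$.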
\begin{rmk} Note that if $U\subset V$ is a holomorphic subbundle with respect to $\bar\partial$ then if it is also $\phi$-invariant, it is holomorphic with respect to $\bar\partial_B$. So stability is preserved by the B-field action.
\end{rmk}
Now suppose $B=\bar\partial \theta$ for $\theta \in \Omega^{1,0}$ and define $\psi=i_{\phi}\theta$, a section of $\mathop{\rm End}\nolimits V$. In coordinates $\psi=\phi^{i}\theta_{i}$ which implies
$(\bar\partial\psi)_{\bar i}=\phi^{j}B_{j\bar i}$. Then
$$[\psi,\bar\partial\psi]_{\bar i}=[\phi^{k}\theta_{k},\phi^{j}B_{j\bar i}]=[\phi^{k},\phi^{j}]\theta_{k}B_{j\bar i}=0$$
since $\phi^j$ and $\phi^k$ commute. This means (unusually for a non-abelian gauge theory) that $\bar\partial \exp \psi=(\bar\partial\psi)\exp\psi$, so if we exponentiate $\psi$ to an automorphism of the bundle $V$ we have
$$\exp (-\psi)\,\bar\partial\,\exp \psi=\bar\partial+\bar\partial\psi =\bar\partial+i_{\phi}B=\bar\partial_B.$$
Thus if $B_1$ and $B_2$ represent the same Dolbeault cohomology class in $H^1(M,T^*)$, the two actions are related by an automorphism. Hence $H^1(M,T^*)$ acts on the moduli space of stable co-Higgs bundles.
\begin{ex} If we take the canonical Higgs bundle $V={\mathcal O}\oplus T$ and $\phi(\lambda,X)=(X,0)$ as in Section \ref{coH}, then for $[B]\in H^1(M,T^*)$ the structure $\bar\partial_B$ defines a non-trivial extension
$$0\rightarrow {\mathcal O}\rightarrow V\rightarrow T\rightarrow 0$$
which still has a canonical Higgs field.
\end{ex}
We shall investigate this action in more detail next.
\subsection{Spectral covers}
In the case of Higgs bundles, Simpson reinterpreted in \cite{S1} a {\it Higgs sheaf} on $M$ in terms of a sheaf on ${\mathbf {\rm P}}({\mathcal O}\oplus T^*)$ whose support is disjoint from the divisor at infinity. This can be adapted immediately by replacing $T^*$ by $T$.
In standard local coordinates $y_i,z_j$ on $TM$ given by the vector field $y^i\partial/\partial z_i$, we pull back the rank $k$ bundle $V$ under the projection $p:TM\rightarrow M$ and define an action of $y^i$ by $\phi^i$. Since $[\phi^i,\phi^j]=0$ this defines a module structure over the commutative ring of functions polynomial in the fibre directions.
More concretely, suppose in a neighbourhood of a point some linear combination of the $\phi^i$, say $\phi^1$, has distinct eigenvalues. Then since by the Cayley-Hamilton theorem $\phi^1$ satisfies its characteristic equation, on the support of the sheaf $y^1$ is an eigenvalue of $\phi^1$, and the kernel of $\phi^1-y^1$ defines a line bundle $U\subset p^*V$. Since all the $\phi^i$ commute with $\phi^1$, $U$ is an eigenspace for each $\phi^i$ with eigenvalue $y^i$. If the $m$ characteristic equations of the $\phi^i$ are generic, they define an $m$-dimensional submanifold $S$ of $TM$ which is a $k$-fold covering of $M$ under $p$. There will be points at which $\phi^1$ has coincident eigenvalues, but under suitable genericity conditions $S$ will still be smooth, carrying the eigenspace line bundle $U$. The action of $\phi$ on $U$ is
$$\phi\vert_U=\phi^i\vert_U\frac{\partial}{\partial z_i}=y^i\frac{\partial}{\partial z_i}$$
which is the tautological section of $p^*T$ on the total space of $T$.
The one-dimensional case of this, where $M=\mathbf{C}P^1$, was much studied from the point of view of integrable systems before its important application to the moduli space of Higgs bundles, and the co-Higgs situation on $\mathbf{C}P^1$ is a particular case described, for example, in \cite{NJH5}. In this case, where $T={\mathcal O}(2)$, $\phi$ is a holomorphic section of $\mathop{\rm End}\nolimits V(2)$ and its characteristic equation is
$$\det(\eta-\phi)=\eta^k+a_1\eta^{k-1}+\dots +a_k=0$$
where $a_i$ is a section of ${\mathcal O}(2i)$ on $\mathbf{C}P^1$. Interpreting $\eta=yd/dz$ as the tautological section of
$p^*{\mathcal O}(2)$, this is the vanishing of a section of $p^*{\mathcal O}(2k)$ on the algebraic surface $T\mathbf{C}P^1$ and it defines a spectral curve $S$ which, by the adjunction formula, has genus $g=(k-1)^2$ and is a branched covering of $\mathbf{C}P^1$ of degree $k$.
We reconstruct a co-Higgs bundle by taking the direct image $p_*L$ of a line bundle $L$ on $S$. For any open set $U\subseteq \mathbf{C}P^1$, by definition
$$H^0(U,p_*L)=H^0(p^{-1}(U),L).$$
The sheaf $p_*L$ defines a rank $k$ vector bundle and the direct image of multiplication by the tautological section
$$\eta: H^0(p^{-1}(U),L)\rightarrow H^0(p^{-1}(U),L(2))$$
defines a Higgs field $\phi$.
(Note that the line bundle $L$ is not quite the same as the eigenspace bundle $U$. The direct image gives a canonical evaluation map $p^*V\rightarrow L$, so that $L^*\subset p^*V^*$ is the eigenspace bundle of the dual endomorphism $\phi^t$.)
The degree of the line bundle $L$ and that of the vector bundle $V$ are easily related -- the direct image definition implies that $H^0(S,L\otimes p^*{\mathcal O}(n))\cong H^0(\mathbf{C}P^1, p_*L(n))$ and taking $n$ large, these are given by the Riemann-Roch formula. The result is
$$\mathop{\rm deg}\nolimits V=\mathop{\rm deg}\nolimits L+k-k^2.$$
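(One way to arrive at this relation is to compare Euler characteristics: since $p$ is a finite morphism, $\chi(S,L)=\chi(\mathbf{C}P^1,p_*L)$, that is $\mathop{\rm deg}\nolimits L+1-g=\mathop{\rm deg}\nolimits V+k$, and substituting $g=(k-1)^2$ gives the formula above.)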
\begin{exs}
\noindent 1. Consider the example of $V={\mathcal O}\oplus {\mathcal O}(-1)$ in the previous lecture. Here $k=2$ and $\mathop{\rm deg}\nolimits V=-1$, so $\mathop{\rm deg}\nolimits L=1$. The curve $S$ has genus $(k-1)^2=1$ and is an elliptic curve ($y^2=c_0+c_1z+\dots +c_4z^4$). The line bundle $L$ has degree one and hence has a unique section which vanishes at a single point, which is $\eta(0)$ in our description of the moduli space.
\noindent 2. If $\mathop{\rm deg}\nolimits L= g-1= k^2-2k$ then $\mathop{\rm deg}\nolimits V=-k$, so $V(1)$ has degree zero. Now a vector bundle $E$ on $\mathbf{C}P^1$ is trivial if it has degree zero and $H^0(\mathbf{C}P^1,E(-1))=0$, so $V(1)$ is trivial if $0=H^0(\mathbf{C}P^1,V)=H^0(S,L)$, which is the case if the divisor class of $L$ does not lie on the theta divisor of $S$. In this case a co-Higgs bundle consists of a $k\times k$ matrix whose entries are sections of ${\mathcal O}(2)$.
\end{exs}
\vskip .25cm
Now consider the B-field action from the point of view of the spectral cover. Since $\phi$ is only changed by conjugation, the spectral cover is unchanged -- it is only the holomorphic structure on the line bundle which can change. But the change in the holomorphic structure on $V$ was
$$\bar\partial \mapsto \bar\partial +i_\phi B$$
and on the eigenspace bundle $U\subset p^*V$ the Higgs field $\phi$ acts via the tautological section $\eta$ of $p^*TM$, so we are changing the holomorphic structure of $U$ by
$$\bar\partial \mapsto \bar\partial +i_\eta B.$$
In other words, we have $[B]\in H^1(M,T^*)$ which we pull back to $p^*[B]\in H^1(TM,p^*T)$ then contract with $\eta\in H^0(TM,p^*T)$ to get the class
$$\eta p^*[B]\in H^1(TM,{\mathcal O}).$$
Exponentiating to $H^1(TM,{\mathcal O}^*)$ defines a line bundle $L_B$. Restricting to $S$ the B-field action is $U\mapsto U\otimes L_B$.
Let us look at this action in the two examples above. Since $H^{1,1}(\mathbf{C}P^1)$ is one-dimensional, a real closed $(1,1)$-form is cohomologous to a multiple of
$$\frac{idz\wedge d\bar z}{(1+z \bar z)^2}$$
This form integrates to $2\pi$ over $\mathbf{C}P^1$.
Pulling back to $T\mathbf{C}P^1$ and contracting with $yd/dz$ we obtain the class in $H^1(S,{\mathcal O})$ represented by
$$\frac{iy d\bar z}{(1+z \bar z)^2}.$$
\noindent 1. In the first example, $S$ is an elliptic curve and a point of the moduli space is defined by a point $x$ on this curve, so tensoring with a line bundle $L_B$ is a translation. The non-vanishing 1-form $dz/y$ is equal to $du$ in the uniformization and then two points $x,x'$ are related by a translation $u\mapsto u+a$ if
$$\int_x^{x'}\frac{dz}{y}=a$$
modulo periods.
On the other hand our class in $H^1(S,{\mathcal O})$ pairs with $dz/y\in H^0(S, K)$ by integration:
$$\int_S \frac{idz\wedge d\bar z}{(1+z \bar z)^2}=4\pi$$
so this determines the translation.
\noindent 2. In the second example, since the bundle is trivial we may write the Higgs field in $\mathop{\rm End}\nolimits \mathbf{C}^k\otimes H^0(\mathbf{C}P^1,{\mathcal O}(2))$ as a matrix with entries quadratic polynomials in $z$. Write it thus:
$$\phi=(T_1+iT_2)+2iT_3 z+(T_1-iT_2)z^2.$$
Then, as derived in \cite{NJH5}, tensoring by $L_B$ amounts to integrating, up to time $t=1$, the system of nonlinear differential equations known as {\it Nahm's equations}:
$$\frac{dT_1}{dt}=[T_2,T_3],\quad \frac{dT_2}{dt}=[T_3,T_1],\quad \frac{dT_3}{dt}=[T_1,T_2].$$
These equations arise in the study of non-abelian monopoles and are dimensional reductions of the self-dual Yang-Mills equations.
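For readers who wish to experiment with this flow, the following minimal numerical sketch (not part of the lectures) integrates Nahm's equations up to time $t=1$ with a standard Runge-Kutta step; the initial matrices are arbitrary illustrative choices and are not tied to any particular spectral curve or B-field class.
\begin{verbatim}
import numpy as np

def comm(a, b):
    # matrix commutator [a, b] = ab - ba
    return a @ b - b @ a

def nahm_rhs(T1, T2, T3):
    # dT1/dt = [T2,T3], dT2/dt = [T3,T1], dT3/dt = [T1,T2]
    return comm(T2, T3), comm(T3, T1), comm(T1, T2)

def nahm_flow(T1, T2, T3, t=1.0, steps=1000):
    # classical RK4 integration of Nahm's equations from time 0 to t
    h = t / steps
    for _ in range(steps):
        k1 = nahm_rhs(T1, T2, T3)
        k2 = nahm_rhs(*(X + 0.5 * h * K for X, K in zip((T1, T2, T3), k1)))
        k3 = nahm_rhs(*(X + 0.5 * h * K for X, K in zip((T1, T2, T3), k2)))
        k4 = nahm_rhs(*(X + h * K for X, K in zip((T1, T2, T3), k3)))
        T1, T2, T3 = (X + (h / 6) * (a + 2 * b + 2 * c + d)
                      for X, a, b, c, d in zip((T1, T2, T3), k1, k2, k3, k4))
    return T1, T2, T3

rng = np.random.default_rng(0)
k = 3
T = [rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k)) for _ in range(3)]
T1, T2, T3 = nahm_flow(*T, t=1.0)
\end{verbatim}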
From these examples it is clear that the B-field action can be highly non-trivial. What it also shows is that the action on the moduli space can be quite badly behaved, for the Nahm flow could be an irrational flow on the Jacobian of the spectral curve.
\subsection{Twisted bundles and gerbes}
Now suppose we replace the generalized complex structure on $T\oplus T^*$ by a twisted version on the bundle $E$ defined by a 1-cocycle $B_{\alpha\beta}$ of closed real $(1,1)$-forms. What is a generalized holomorphic bundle now? The general definition is the same -- a vector bundle $V$ with a differential operator $\bar D$ but we want to understand it in more concrete terms.
If we think of $E$ as obtained by patching together copies of $T\oplus T^*$ then over each open set $U_{\alpha}$, $V$ has the structure of a co-Higgs bundle -- a holomorphic structure $A_{\alpha}$ and a Higgs field $\phi_\alpha$. On the intersection $U_{\alpha}\cap U_{\beta}$ these are related by the B-field action of $B_{\alpha\beta}$:
\begin{equation}
(A_{\beta})_{\bar i}=(A_{\alpha})_{\bar i}+\phi^{j}(B_{\alpha\beta})_{j\bar i}, \qquad (\phi_{\beta})^{j} = (\phi_{\alpha})^{j}.
\label{af}
\end{equation}
Consider first the case of $V=L$ a line bundle. Then, because $\mathop{\rm End}\nolimits V$ is holomorphically trivial for all of the local holomorphic structures, $\phi$ is a global holomorphic vector field $X$. So consider the $(0,1)$ form
$$A_{\alpha\beta}=i_XB_{\alpha\beta}.$$
The $(1,1)$ form $B_{\alpha\beta}$ is closed so $\bar\partial B_{\alpha\beta}=0$ and $X$ is holomorphic so that $\bar\partial A_{\alpha\beta}=0$. Locally write $A_{\alpha\beta}=\bar\partial f_{\alpha\beta}$, then, since $B_{\alpha\beta}$ is a cocycle, on threefold intersections $f_{\alpha\beta}+f_{\beta\gamma}+f_{\gamma\alpha}$ is holomorphic. Write
$$g_{\alpha\beta\gamma}=\exp 2\pi i (f_{\alpha\beta}+f_{\beta\gamma}+f_{\gamma\alpha})$$
then this defines a holomorphic gerbe.
But the local holomorphic structure on $L$ is defined by a $\bar\partial$-closed form $A_{\alpha}$, so writing $A_{\alpha}=\bar\partial h_{\alpha}$ we have from (\ref{af}) that $k_{\alpha\beta}=f_{\alpha\beta}+h_{\alpha}-h_{\beta}$ is holomorphic and moreover
$$g_{\alpha\beta\gamma}=\exp 2\pi i (k_{\alpha\beta}+k_{\beta\gamma}+k_{\gamma\alpha}).$$
This is a {\it holomorphic trivialization} of the gerbe, or as is sometimes said, a line bundle over the gerbe. The ratio of any two trivializations (i.e. writing $g_{\alpha\beta\gamma}$ as a coboundary) is a cocycle which defines the transition functions for a holomorphic line bundle. In the untwisted case a generalized holomorphic bundle was just a line bundle and a vector field; here any two {\it differ} by such an object.
In more invariant terms we have taken the class in $H^2(M,T^*)$ defined by the $(1,2)$ component of the 3-form $H$, and contracted with the vector field $X\in H^0(M,T)$ to get a class in $H^2(M,{\mathcal O})$. Exponentiating gives us an element in $H^2(M,{\mathcal O}^*)$ which is the equivalence class of the holomorphic gerbe defined by $g_{\alpha\beta\gamma}$. The existence of a trivialization of the gerbe is the statement that this class is zero.
\vskip .25cm
Now consider the general case: over each $U_{\alpha}$ we can consider the spectral cover in $TM$. This is defined by characteristic polynomials of components of $\phi$. The $C^{\infty}$ transition functions for the vector bundle $V$ conjugate $\phi$ and so leave these polynomials invariant. It follows that the local spectral covers fit together into a global spectral cover $S\subset TM$. The eigenspace bundle $U$ however, only has local holomorphic structures. But $\phi$ acts on $U$ via the tautological section $\eta$ of $p^*TM$, and so we are in a parallel situation to the one we just considered: a gerbe on $TM$ defined by the cocycle
$$A_{\alpha\beta}=i_\eta p^*B_{\alpha\beta}.$$
In the untwisted case, a co-Higgs bundle was determined by a line bundle on the spectral cover, in this case it is a trivialization of the gerbe.
\vskip .25cm
The language of gerbes is convenient to describe things on the spectral cover, but a $C^{\infty}$ bundle $V$ with local holomorphic structures is not readily adaptable to conventional algebraic geometric language on $M$ itself. As far as generalized geometry is concerned we have $\bar D$, but it is still useful to rephrase the structure in more conventional language. For that purpose, we can split the extension $E$ and work with $T\oplus T^*$ and the Courant bracket twisted with a 3-form $H$.
The generalized Dolbeault complex is now $\bar D=\bar\partial_A-H^{1,2}+\phi$ where
$$\bar\partial_A: \Omega^{0,q}(V\otimes \Lambda^pT)\rightarrow \Omega^{0,q+1}(V\otimes \Lambda^pT),$$
the $(1,2)$ component $H^{1,2}$ of the 3-form acts by contraction in the $(1,0)$ entry,
$$H^{1,2}:\Omega^{0,q}(V\otimes \Lambda^pT)\rightarrow \Omega^{0,q+2}(V\otimes \Lambda^{p-1}T)$$
and the Higgs field acts as
$$\phi: \Omega^{0,q}(V\otimes \Lambda^pT)\rightarrow \Omega^{0,q}(V\otimes \Lambda^{p+1}T).$$
The condition $\bar D^2=0$ now becomes
$$\bar\partial_A^2=i_{\phi}H,\quad \bar\partial_A\phi=0,\quad \phi^2=0.$$
This shape of structure has appeared in the literature. For example, replacing $V$ by $\mathop{\rm End}\nolimits V$ (and thereby getting a complex which governs the deformation theory of a generalized holomorphic bundle), we obtain a {\it curved differential graded algebra} -- an algebra with derivation where $d^2a=[c,a]$ and $dc=0$. This is an identifiable concept, but nevertheless, packaged in the language of generalized geometry it becomes quite natural.
\vskip 1cm
Mathematical Institute, 24-29 St Giles, Oxford OX1 3LB, UK
[email protected]
\end{document}
\begin{document}
\begin{frontmatter}
\title{Optimization Problems Involving Matrix Multiplication \\ with Applications in Material Science and Biology
}
\author{Burak Kocuk}
\address{Industrial Engineering Program, Sabanc{\i} University, Istanbul, Turkey 34956\\[email protected]}
\begin{abstract}
We consider optimization problems involving the multiplication of variable matrices to be selected from a given family, which might be a discrete set, a continuous set or a combination of both. Such nonlinear, and possibly discrete, optimization problems arise in applications from biology and material science among others, and are known to be NP-Hard for a special case of interest. We analyze the underlying structure of such optimization problems for two particular applications and, depending on the matrix family, obtain compact-size mixed-integer linear or quadratically constrained quadratic programming reformulations that can be solved via commercial solvers. Finally, we present the results of our computational experiments, which demonstrate the success of our approach compared to heuristic and enumeration methods predominant in the literature.
\end{abstract}
\begin{keyword}
mixed-integer linear programming \sep mixed-integer quadratically constrained quadratic programming \sep global optimization \sep applications in biology \sep applications in material science
\end{keyword}
\end{frontmatter}
\section{Introduction}
\label{sec:intro}
Consider an optimization problem of the following form:
\begin{subequations} \label{eq:generic}
\begin{align}
\max_{T, w} \ & f( w ) \\
\text{s.t.} \ & p T_1 \cdots T_N = w \label{eq:generic cons} \\
\ & T_1,\dots,T_N \in \mathcal{T}. \label{eq:generic domain T}
\end{align}
\end{subequations}
Here, $p \in\mathbb{C}^{r \times d}$ is a given matrix, $f: \mathbb{C}^{r \times d} \to \mathbb{R}$ is a function and $\mathcal{T} \subseteq \mathbb{C}^{d \times d}$ is a family of matrices, which may be a discrete set, a continuous set or a combination of both. Observe that~\eqref{eq:generic} is a nonlinear optimization problem since constraint~\eqref{eq:generic cons} contains the multiplication of $N$ variable matrices, resulting in degree-$N$ polynomials. Depending on the structure of the set~$\mathcal{T}$, this optimization problem may also involve discrete decisions.
Such optimization problems naturally arise in material science and biology, and a special case in which $f$ is a linear function and $\mathcal{T}$ is a finite set is known to be NP-Hard \citep{tran2017antibiotics}.
The above abstract problem setting can be interpreted as follows: Suppose that there is a ``system'' initialized with the given matrix $p$. Then, the decision maker chooses a matrix $T_1$ and the system ``evolves'' to another state $p T_1$. This process continues for $N$ transitions. Finally, the ``performance'' of the decisions $T_1,\dots,T_N$ is computed via the function $f$, whose argument is the final system state $w$.
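To make the ``system evolution'' reading concrete, the following small Python sketch (illustrative only; the function and variable names are ours) solves the special case of a finite family $\mathcal{T}=\{\hat T_1,\dots,\hat T_K\}$ by brute-force enumeration of all $K^N$ sequences, which quickly becomes impractical as $N$ grows and motivates the reformulations developed in this paper.
\begin{verbatim}
import itertools
import numpy as np

def enumerate_best(p, family, N, f):
    # p      : (r, d) array, the initial state
    # family : list of (d, d) arrays, the finite matrix family
    # f      : callable mapping an (r, d) array to a real score
    best_val, best_seq = -np.inf, None
    for seq in itertools.product(range(len(family)), repeat=N):
        w = p.copy()
        for k in seq:          # evolve the state through N transitions
            w = w @ family[k]
        val = f(w)
        if val > best_val:
            best_val, best_seq = val, seq
    return best_val, best_seq
\end{verbatim}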
We will now give concrete examples from material science and biology, and motivate the significance of analyzing optimization problems involving matrix multiplication.
The first example is from material science and is called \textit{the multi-layer thin films problem}. Reflectance is an important electromagnetic property of materials and, in many optics applications, materials with high reflectance are desired. When reflectance of a metallic substrate is not satisfactory, dielectric coating materials can be used for enhancement. For instance, reflectance of Tungsten at 450 nanometers (nm) wavelength is approximately 47\% but it can be increased to 87\% if one layer each of Titanium Dioxide and Magnesium Fluoride thin films with a quarter wavelength optical thicknesses are coated on top. Given a material library and a budget on the number of layers, the multi-layer thin films problem seeks to find the optimal configuration of dielectric coating materials and their thicknesses to be coated in each layer so that the reflectance is maximized. This classical problem in optics is typically solved via heuristic and metaheuristic methods \citep{macleod2010thin, pedrotti2017introduction, tikhonravov1996application, hobson2004markov, rabady2014global, shi2017optimization, keccebacs2018enhancing}, and the rigorous treatment of the underlying optimization problem is lacking in the literature.
The second example arises from biology and is called \textit{the antibiotics time machine problem}. Antibiotic drug resistance is a serious concern in modern medical practices since the successive application of antibiotics may cause mutations, which might lead to ineffective or even harmful treatment plans. Even further complicating the problem is the inherent randomness associated with administering a certain drug.
Given a list of drugs and a predetermined length of the treatment plan, the antibiotics time machine problem seeks to find the optimal drug sequence to be applied so that the probability of reversing the mutations altogether at the end of the treatment is maximized.
Although there is significant interest in the biology community in understanding the quantitative aspects of antibiotics resistance \citep{bergstrom2004ecological, kim2014alternating, nichol2015steering, mira2015rational, mira2017statistical, yoshida2017time}, the only method used to attack the antibiotics time machine problem appears to be complete enumeration.
These two seemingly unrelated optimization problems can, in fact, be formulated as in \eqref{eq:generic}. In the case of the multi-layer thin films problem, the matrix collection $\mathcal{T}$ is a mixed-integer nonlinear set and the objective function $f$ is the ratio of two convex quadratic functions whereas, in the antibiotics time machine problem, $\mathcal{T}$ is a finite set and $f$ is a linear function. One of our main contributions in this paper is that the generic nonlinear discrete optimization problem \eqref{eq:generic} can be reformulated as a mixed-integer quadratically constrained quadratic program (MIQCQP) for the former problem, and a mixed-integer linear program (MILP) for the latter problem. These reformulations allow us to solve the practically relevant instances of both problems to global optimality using commercial optimization packages.
As mentioned above, the literature mostly focuses on heuristic methods and complete enumeration to solve optimization problems involving matrix multiplication, and the rigorous analysis of these problems from an applied operations research perspective is insufficient. One exception in this direction is \citet{wu2018optimal}, in which the collection $\mathcal{T}$ is assumed to be a finite set. The authors provide sufficient conditions for the polynomial-time solvable cases of problem \eqref{eq:generic}, which are quite restrictive from an application point of view. In contrast, our approach in this paper is application-driven and computational, and focuses on developing methods to solve practical instances of problem \eqref{eq:generic} arising from real-life applications.
Moreover, it is also possible to utilize our approach to attack other applications with similar structure as reported in \citet{wu2018optimal},
including the matrix mortality problem \citep{blondel1997pair, bournez2002mortality} and the joint spectral radius computation \citep{rota1960note,jungers2009joint}.
The rest of the paper is organized as follows: {In Section~\ref{sec:main}, we provide reformulations of the feasible region of problem~\eqref{eq:generic}
depending on the structure of the set $\mathcal{T}$.}
Then, we specialize these general results to two applications, multi-layer thin films from material science in Section~\ref{sec:thinFilms} and antibiotics time machine from biology in Section~\ref{sec:antibiotic}, and present the results of our extensive computational experiments.
Finally, we conclude our paper in
Section~\ref{sec:conc} with final remarks and future research directions.
\section{General Results}
\label{sec:main}
In this section, we analyze problem~\eqref{eq:generic} and propose its reformulations based on the structure of the set~$\mathcal{T}$.
In particular, we first provide a straightforward bilinear reformulation of the polynomial constraint~\eqref{eq:generic cons} in
Section~\ref{sec:mainBilinear}. Under the assumption that $\mathcal{T}$ is a finite set, we further reformulate the feasible region of problem~\eqref{eq:generic} as a mixed-integer linear representable set in Section~\ref{sec:mainLinear}.
\subsection{Bilinearization}
\label{sec:mainBilinear}
Let us define a set of matrix variables $u_n \in \mathbb{C}^{r \times d}$, $n=0,\dots,N$ that satisfy the recursion $u_n = u_{n-1} T_n$ for $n=1,\dots,N$ with $u_0:=p$. Then, problem \eqref{eq:generic} can be reformulated as follows:
\begin{subequations} \label{eq:genericB}
\begin{align}
\max_{T, w, u} \ & f ( w ) \\
\textup{s.t.} \ \ & u_{n-1} T_n = u_{n} \qquad n = 1, \dots, N \label{eq:genericB bilinear} \\
\ & u_0 = p , u_N = w \label{eq:genericB boundary} \\
\ & \eqref{eq:generic domain T} \notag.
\end{align}
\end{subequations}
We remark that, provided that the matrix family $\mathcal{T}$ is a bounded set, each $u_n$ matrix is guaranteed to come from another bounded set $\mathcal{U}_n\subseteq \mathbb{C}^{ r\times d}$ defined as
\begin{equation}\label{eq:define Un}
\mathcal{U}_n := \{ u_{n-1} T : u_{n-1}\in \mathcal{U}_{n-1},\ T\in\mathcal{T} \},
\end{equation}
for $n=1,\dots,N$, with $\mathcal{U}_0:=\{p\}$.
This observation is quite important from the following aspect: The boundedness of the set $\mathcal{U}_n$ can be utilized to construct relaxations for the bilinear constraint~\eqref{eq:genericB bilinear} in a straightforward manner (e.g. one can obtain a McCormick-based relaxation \citep{mccormick1976computability} once variable bounds for each entry of the unknown matrices are available). Moreover, if the matrix family $\mathcal{T}$ is a finite set and a polyhedral outer-approximation of $\mathcal{U}_n$ is utilized, then an \textit{equivalent} mixed-integer linear representation of the feasible region of problem~\eqref{eq:genericB} can be obtained, as discussed in the next section.
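For completeness, the McCormick envelopes referred to above take the following standard form: each scalar bilinear term $\zeta=\xi\eta$ appearing entrywise in constraint~\eqref{eq:genericB bilinear}, with bounds $\xi\in[\xi^L,\xi^U]$ and $\eta\in[\eta^L,\eta^U]$, can be relaxed by the four linear inequalities
\begin{align*}
\zeta &\ge \xi^L \eta + \eta^L \xi - \xi^L \eta^L, & \zeta &\ge \xi^U \eta + \eta^U \xi - \xi^U \eta^U, \\
\zeta &\le \xi^U \eta + \eta^L \xi - \xi^U \eta^L, & \zeta &\le \xi^L \eta + \eta^U \xi - \xi^L \eta^U.
\end{align*}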
\subsection{Linearization}
\label{sec:mainLinear}
In this part, we will assume that the set $\mathcal{T}$ is a finite set given as $\mathcal{T}:=\{\hat T_k : k=1,\dots,K\}$. Under this assumption, the resulting bilinear discrete optimization problem~\eqref{eq:genericB} obtained at the end of the bilinearization step can be further reformulated such that its feasible region is mixed-integer linear representable. We now introduce Proposition \ref{prop:linearization}, which will be crucial in the sequel.
\begin{proposition}\label{prop:linearization}
Given a finite collection of matrices $\mathcal{A} =\{\hat A_k : k=1,\dots, K\} \subseteq \mathbb{C}^{d \times d}$ and a polytope $\mathcal{Y} \subseteq \mathbb{C}^{r \times d}$, consider the set
\[
\mathcal{S} := \left\{(y, A, z)\in \mathcal{Y} \times \mathcal{A} \times \mathbb{C}^{r \times d} : y A = z \right\}.
\]
Then, the following statements hold:
\begin{enumerate}
\item
The system \eqref{eq:ext} in variables $(y, A, z, v_k, \mu_k)$ is an extended formulation for $\mathcal{S}$:
\begin{subequations}\label{eq:ext}
\begin{align}
\sum_{k=1}^K \mu_k \hat A_k &= A \label{eq:ext1} \\
\sum_{k=1}^K v_{k} &= y \label{eq:ext2} \\
\sum_{k=1}^K v_{k} \hat A_k &=z \label{eq:ext3} \\
\sum_{k = 1}^K \mu_{k} &= 1 \label{eq:ext4} \\
v_{k} \in \mathcal{Y} \mu_k, \ \mu_k &\in\{0,1\}, \ k=1,\dots,K.
\end{align}
\end{subequations}
\item We have
\[
\conv(\mathcal{S}) = \left\{ (y, A, z)\in \mathcal{Y} \times \mathcal{A} \times \mathbb{C}^{r \times d} : \ \exists ( v_{k},\mu_k) \in \mathcal{Y} \mu_k \times \mathbb{R}_+ : \eqref{eq:ext1}-\eqref{eq:ext4} \right\}.
\]
\end{enumerate}
\end{proposition}
\begin{proof}
Let us define the sets
\[
\mathcal{S}_k := \left\{(y, A, z)\in \mathcal{Y} \times \mathcal{A} \times \mathbb{C}^{r \times d} : y \hat A_k = z, A=\hat A_k \right\},
\]
for each $k=1,\dots,K$. Clearly, we have that $\mathcal{S} = \cup_{k=1}^K \mathcal{S}_k$. The statements of the proposition follow by constructing a $K$-way disjunction of $\mathcal{S}$ due to \citet{balas1979disjunctive}.
\end{proof}
We now apply Proposition \ref{prop:linearization} to problem \eqref{eq:genericB} by setting $\mathcal{A}=\mathcal{T}$, $\hat A_k=\hat T_k$, $\mathcal{Y}=\bar \mathcal{U}_{n-1}$, $y=u_{n-1}$, $A=T_n$ and $z=u_n$ for $n=1,\dots,N$.
After defining the copy variables $v_{n,k} \in \mathbb{C}^{r\times d}$ and binary variables $x_{n-1,k}$ which take value one if $T_n =\hat T_k$ and zero otherwise, we obtain the following problem with a mixed-integer linear representable feasible region:
\begin{subequations}\label{eq:MILP}
\begin{align}
\max_{u,v,x} \ & f(w) \label{eq:objMILP}\\
\textup{s.t.} \ & \sum_{k=1}^K v_{n-1,k} = u_{n-1} &n&=1,\dots,N \label{eq:MILP1} \\
\ & \sum_{k=1}^K v_{n-1,k} \hat T_k =u_n &n&=1,\dots,N \label{eq:MILP2} \\
\ & \sum_{k = 1}^K x_{n-1,k} = 1 &n&=1,\dots,N \label{eq:MILP3} \\
\ & \ v_{n-1,k} \in \bar \mathcal{U}_{n-1} x_{n-1,k}, \ x_{n-1,k} \in\{0,1\} &n&=1,\dots,N, k=1,\dots,K \\
\ & \eqref{eq:genericB boundary}. \notag
\end{align}
\end{subequations}
In this formulation, the relation $v_{n-1,k} \in \bar \mathcal{U}_{n-1} x_{n-1,k}$ serves as a ``big-$M$ constraint''. Here, any polyhedral set $\bar \mathcal{U}_n$ that outer-approximates the set $\mathcal{U}_n$ can be used without changing the feasible region of problem \eqref{eq:MILP}. {When applicable, we provide such reasonable sets in the formulations of the specific applications considered in the remainder of this paper.}
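To illustrate how formulation \eqref{eq:MILP} can be passed to an off-the-shelf solver, we include a minimal sketch in Gurobi's Python interface. It assumes a linear objective $f(w)=\sum_{i,j}c_{ij}w_{ij}$ and real matrices for which all intermediate products remain entrywise in $[0,1]$ (e.g. stochastic matrices, as in the antibiotics application), so that $\bar{\mathcal{U}}_n=[0,1]^{r\times d}$ can serve as the big-$M$ set; the sketch is for illustration only and is not the implementation used in our experiments.
\begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

def build_milp(p, family, N, c):
    # p      : (r, d) array with entries in [0, 1], initial matrix
    # family : list of K (d, d) arrays, the finite matrix family
    # c      : (r, d) array of objective weights, f(w) = sum_ij c_ij w_ij
    r, d = p.shape
    K = len(family)
    m = gp.Model("matrix-product-milp")
    u = m.addVars(N + 1, r, d, lb=0.0, ub=1.0, name="u")   # states u_0,...,u_N
    v = m.addVars(N, K, r, d, lb=0.0, ub=1.0, name="v")    # copy variables
    x = m.addVars(N, K, vtype=GRB.BINARY, name="x")        # matrix selection

    m.addConstrs((u[0, i, j] == float(p[i, j])
                  for i in range(r) for j in range(d)))
    for n in range(1, N + 1):
        m.addConstr(gp.quicksum(x[n - 1, k] for k in range(K)) == 1)
        for i in range(r):
            for j in range(d):
                # sum_k v_{n-1,k} = u_{n-1}
                m.addConstr(gp.quicksum(v[n - 1, k, i, j]
                                        for k in range(K)) == u[n - 1, i, j])
                # sum_k v_{n-1,k} * T_k = u_n
                m.addConstr(gp.quicksum(float(family[k][l, j]) * v[n - 1, k, i, l]
                                        for k in range(K) for l in range(d))
                            == u[n, i, j])
                # big-M constraints with M = 1
                for k in range(K):
                    m.addConstr(v[n - 1, k, i, j] <= x[n - 1, k])
    m.setObjective(gp.quicksum(float(c[i, j]) * u[N, i, j]
                               for i in range(r) for j in range(d)), GRB.MAXIMIZE)
    return m
\end{verbatim}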
\section{An Application from Material Science: Multi-Layer Thin Films}
\label{sec:thinFilms}
In this section, we study the multi-layer thin films problem from material science. We first introduce the basic notions in optics and provide a formal problem definition in Section~\ref{sec:thinFilmsDef}. Then, we propose an MIQCQP formulation in Section~\ref{sec:thinFilmsForm}
and its enhancements in Section~\ref{sec:thinFilmsFormEnhance}.
We overview a commonly used heuristic from literature and discuss its convergence behavior in Section~\ref{sec:thinFilmsHeur}.
Finally, we present the results of our computational experiments in Section \ref{sec:thinFilmsComp}, which include a discussion on the effect of formulation enhancements and a comparison of the optimal solutions with the heuristic ones.
\subsection{Problem Definition}
\label{sec:thinFilmsDef}
Suppose that we have a metallic substrate and our aim is to increase its reflectance by coating a set of dielectric materials on top.
Following the classical textbooks on optics~\citep{macleod2010thin, pedrotti2017introduction}, we will first introduce the basic concepts and notations in multi-layer thin films, and then present how we can attack this problem using optimization techniques.
Let us denote the refractive index of a metallic substrate (e.g. Tungsten, Tantalum, Molybdenum, Niobium) at wavelength $\lambda$ as $\hat a_s^\lambda \in \mathbb{C}$, where the imaginary part is a measure of reflection losses.
Let $\mathcal{M}$ be the set of dielectric coating materials, such as Silicon Dioxide (\ch{SiO2}), Titanium Dioxide (\ch{TiO2}), Magnesium Fluoride (\ch{MgF2}), Aluminum Oxide (\ch{Al2O3}). For a given wavelength~$\lambda$, we will denote the set of refractive indices\footnote{In reality, the refractive index of a dielectric coating material is also a complex number. However, since the reflection loss of a dielectric material is negligibly small, we will ignore the imaginary part of this complex number.} by
\[
\mathcal{A}^\lambda := \{\hat a_m^\lambda : m \in \mathcal{M}\} \subseteq \mathbb{R}_+.
\]
We will now introduce an important concept called the \textit{transfer matrix}, which is used to quantify the reflectance through a material.
Under the assumption that the light is at normal incidence, the transfer matrix of material $m$ of thickness $t$ at wavelength $\lambda$ is given as
\begin{equation}\label{eq:transferM}
T_{m,t}^\lambda =
\begin{bmatrix}
\cos \sigma_{m,t}^\lambda & \mathrm{i} \frac{\sin \sigma_{m,t}^\lambda}{ \hat a_m^\lambda} \\
\mathrm{i} {\hat a_m^\lambda}{\sin \sigma_{m,t}^\lambda} & \cos \sigma_{m,t}^\lambda
\end{bmatrix}, \quad \text{ where } \sigma_{m,t}^\lambda = \frac{2\pi \hat a_m^\lambda t}{\lambda} .
\end{equation}
Here, $\mathrm{i}=\sqrt{-1}$. An important fact related to transfer matrices is their ``multiplicative'' property, that is, the cumulative effect of a coating material $m_1$ of thickness $t_1$ on top of a coating material $m_2$ of thickness $t_2$ is simply obtained by the product of their transfer matrices
$T_{m_1,t_1}^\lambda T_{m_2,t_2}^\lambda $ (see Figure \ref{fig:thinFilms} for an illustration).
\begin{figure}
\caption{Illustration of a multi-layer thin film with $N=2$ layers.}
\label{fig:thinFilms}
\end{figure}
We will denote the set of all transfer matrices obtainable from coating materials in $\mathcal{M}$ at wavelength $\lambda$ as
\[
\mathcal{T}^\lambda_+ :=
\left\{ \begin{bmatrix}
\cos \sigma & \mathrm{i} \frac{\sin \sigma}{a} \\
\mathrm{i} {a}{\sin \sigma} & \cos \sigma
\end{bmatrix}: \sigma = \frac{2\pi a t}{\lambda} ,
a \in \mathcal{A}^\lambda, t \ge 0
\right\} .
\]
We note that the set $\mathcal{T}^\lambda_+$ has both a discrete nature (selection of materials from a finite set) and a continuous nature (the physical thickness $t$). Observe that the elements of $\mathcal{T}^\lambda_+$ have the property that their diagonal entries have zero imaginary part and their off-diagonal entries have zero real part, a property that is preserved when two elements of this set are multiplied.
\begin{notation}
Let $M \in \mathbb{C}^{2 \times 2}$ be a matrix with the property that $\Im(M_{11})=\Im(M_{22})=0$ and $\Re(M_{12})=\Re(M_{21})=0$. Then, we will denote a matrix $\tilde M \in \mathbb{R}^{2 \times 2}$ by
\begin{equation*}\label{eq:tilde w def}
\tilde M_{i,j} = \begin{cases}
\Re(M_{ij}) & i=j \\
\Im(M_{ij}) & i\neq j
\end{cases},
\text{ for $i,j\in\{1,2\}$. }
\end{equation*}
\end{notation}
We will call a product of transfer matrices a ``cumulative transfer matrix''.
For a multi-layer thin film with the cumulative transfer matrix $w\in\mathbb{C}^{2\times2}$ coated on a certain substrate, one can compute the reflectance at wavelength $\lambda$ as
\begin{equation}\label{eq:reflectance}
R_s^\lambda (\tilde w) :=
\frac{ (\tilde w_{11}-\Im(\hat a_s^\lambda) \tilde w_{12}-\Re(\hat a_s^\lambda)\tilde w_{22} )^2 + (\tilde w_{21}+\Im(\hat a_s^\lambda)\tilde w_{22}-\Re(\hat a_s^\lambda) \tilde w_{12})^2 }
{ (\tilde w_{11}-\Im(\hat a_s^\lambda)\tilde w_{12}+\Re(\hat a_s^\lambda)\tilde w_{22} )^2 + (\tilde w_{21}+\Im(\hat a_s^\lambda)\tilde w_{22}+\Re(\hat a_s^\lambda) \tilde w_{12})^2 }.
\end{equation}
Note that $ R_s^\lambda (\tilde w)$ is the ratio of two convex quadratic functions in $\tilde w$.
{
Finally, we are ready to formally describe the multi-layer thin films problem: Given a metallic substrate, a set of coating materials $\mathcal{M}$ and wavelength $\lambda$, decide coating materials and their thicknesses to be used in each layer of an $N$-layer thin film such that the reflectance is maximized.
}
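Before turning to the optimization formulation, the following small numerical sketch (illustrative only; the refractive-index values are rough placeholders rather than the data used in our experiments, and the helper names are ours) shows how the reflectance of a stack is evaluated from the cumulative transfer matrix. It computes $R=|(B-C)/(B+C)|^2$ with $B=w_{11}+w_{12}\hat a_s^\lambda$ and $C=w_{21}+w_{22}\hat a_s^\lambda$, which coincides with $R_s^\lambda(\tilde w)$ above for cumulative transfer matrices with the stated real/imaginary structure.
\begin{verbatim}
import numpy as np

def transfer_matrix(n_layer, t, lam):
    # transfer matrix of a single dielectric layer at normal incidence
    sigma = 2 * np.pi * n_layer * t / lam
    return np.array([[np.cos(sigma), 1j * np.sin(sigma) / n_layer],
                     [1j * n_layer * np.sin(sigma), np.cos(sigma)]])

def reflectance(W, n_sub):
    # reflectance of a stack with cumulative transfer matrix W on a substrate
    # with complex refractive index n_sub (incident medium index 1)
    B = W[0, 0] + W[0, 1] * n_sub
    C = W[1, 0] + W[1, 1] * n_sub
    return abs((B - C) / (B + C)) ** 2

# rough illustrative values for a high/low index pair on a lossy substrate
lam, n_sub = 450.0, 3.4 + 2.7j
n_hi, n_lo = 2.75, 1.38
bare = reflectance(np.eye(2), n_sub)
W = (transfer_matrix(n_hi, lam / (4 * n_hi), lam)
     @ transfer_matrix(n_lo, lam / (4 * n_lo), lam))
coated = reflectance(W, n_sub)   # two quarter-wave layers
\end{verbatim}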
\subsection{Problem Formulation}
\label{sec:thinFilmsForm}
Using the notation introduced in the previous section, we now formulate the multi-layer thin films problem as an instance of the generic model~\eqref{eq:genericB}. In particular, we will set $p$ as the identity matrix, the objective function $f(w)$ as $R_s^\lambda (\tilde w)$ and the set $\mathcal{T}$ as $\mathcal{T}^\lambda_+ $. In the sequel, we will reformulate the problem as an MIQCQP in Section~\ref{sec:thinFilmsFormFinal}, using the structural properties derived in
Section~\ref{sec:thinFilmsFormPrelim}.
\subsubsection{Some Structural Properties}
\label{sec:thinFilmsFormPrelim}
We will now present some important properties of transfer matrices and the reflectance function.
\begin{proposition}\label{prop:somePropertiesTransfer}
Consider the transfer matrices as defined in~\eqref{eq:transferM}. Then,
\begin{enumerate}[(i)]
\item
$\det (T_{m,t}^\lambda ) = 1$.
\item
$T_{m,t_1}^\lambda T_{m,t_2}^\lambda = T_{m,t_1+ t_2}^\lambda$ for $t_1,t_2\ge0$.
\end{enumerate}
\end{proposition}
\begin{proof}
Statement (i) is clear. Statement (ii) can be checked via straightforward calculation and using trigonometric addition formulas.
\end{proof}
\begin{proposition}\label{prop:reflactanceSimplification}
Consider the reflectance function as defined in~\eqref{eq:reflectance} and let $w$ be a cumulative transfer matrix. Then,
\begin{enumerate}[(i)]
\item
$R_s^\lambda(-\tilde w) = R_s^\lambda(\tilde w)$.
\item
$
R_s^\lambda (\tilde w) = 1-\frac{ 4 \Re({\hat a_s^\lambda}) } { D_s^\lambda (\tilde w) }
,
$
where
\begin{equation}\label{eq:defDen}
D_s^\lambda (\tilde w) := (\tilde w_{11}-\Im(\hat a_s^\lambda)\tilde w_{12})^2 + (\Re(\hat a_s^\lambda) \tilde w_{12})^2 +
(\tilde w_{21}+\Im(\hat a_s^\lambda)\tilde w_{22})^2 + (\Re(\hat a_s^\lambda)\tilde w_{22})^2 + 2\Re(\hat a_s^\lambda).
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
Statement (i) is clear.
Statement (ii) can be proven via straightforward algebra, noting that $\det(w) = \tilde w_{11}\tilde w_{22}+\tilde w_{12}\tilde w_{21}=1$.
\end{proof}
Let us now discuss the consequences of the above properties. In particular, we claim that instead of $\mathcal{T}^\lambda_+$, we can use the family of matrices defined as
\[
\mathcal{T}^\lambda :=
\left\{ \begin{bmatrix}
C & \mathrm{i} \frac{S}{a} \\
\mathrm{i} {a}{S} & C
\end{bmatrix}:
a \in \mathcal{A}^\lambda, C^2+S^2 = 1, (C,S)\in\mathbb{R}\times\mathbb{R}_+
\right\}.
\]
This is due to the fact that $\mathcal{T}^\lambda_+ = -\mathcal{T}^\lambda \cup \mathcal{T}^\lambda$ (Proposition \ref{prop:somePropertiesTransfer}(ii)) and, from an optimization point of view, $T_n \in \mathcal{T}^\lambda$ and $T_n \in \mathcal{T}_+^\lambda$ are equivalent (Proposition \ref{prop:reflactanceSimplification}(i)). We note that one can recover the physical thickness of a layer associated with a given transfer matrix from $\mathcal{T}^\lambda$ as
\[
t = \frac{\lambda \arccos C}{2\pi a} ,
\]
which is well-defined.
Finally, we remark that the set $\mathcal{T}^\lambda$ is bounded, which is a property that will be exploited in the reformulation of the problem provided in the next section.
\subsubsection{Reformulation}
\label{sec:thinFilmsFormFinal}
By utilizing the structural properties derived in the previous section, we can formulate the multi-layer thin film problem as a nonlinear, discrete optimization problem as follows:
\begin{subequations} \label{eq:singleWave}
\begin{align}
\max_{T, w, u, C, S, a} \ & D_s^\lambda (\tilde w) \\
\textup{s.t.}
\ & \eqref{eq:genericB bilinear}, \eqref{eq:genericB boundary} \notag \\
\ & C_n^2 + S_n^2 = 1, S_n \ge 0 \label{eq:cos sin rel} \\
\ & (\tilde T_n)_{11} =(\tilde T_n)_{22} = C_n \label{eq:cos def} \\
\ & a_n (\tilde T_n)_{12} = (\tilde T_n)_{21} / a_n = S_n \label{eq:sin def} \\
\ & a_n \in \mathcal{A}^\lambda. \label{eq:material pick}
\end{align}
\end{subequations}
Here, constraints \eqref{eq:cos sin rel}--\eqref{eq:material pick} guarantee that $T_n \in \mathcal{T}^\lambda$.
As a final step in the reformulation, we will give a mixed-integer linear representation of constraints \eqref{eq:sin def}--\eqref{eq:material pick}. To this end, let us define binary variables $x_{n,m}$, which take value one if material~$m$ is used in layer $n$ and zero otherwise. Moreover, let $v_{n,m}$ be the auxiliary variables {needed in the disjunction arguments}, representing the quantity $S_n x_{n,m}$. Then, we obtain the following optimization problem with a convex quadratic maximization objective and a mixed-integer bilinear representable feasible region:
\begin{subequations} \label{eq:singleWaveF}
\begin{align}
\max_{T, w, u, C, S, v, x} \ & D_s^\lambda (\tilde w) \\
\textup{s.t.}
\ & \eqref{eq:genericB bilinear}, \eqref{eq:genericB boundary} , \eqref{eq:cos sin rel}, \eqref{eq:cos def} \notag \\
\ & \sum_{m \in \mathcal{M}} v_{n,m} = S_n & n&=1,\dots, N \\
\ & \sum_{m \in \mathcal{M}} v_{n,m} / \hat a_m^\lambda = (\tilde T_n)_{12} & n &=1,\dots, N \\
\ & \sum_{m \in \mathcal{M}} v_{n,m} \hat a_m^\lambda = (\tilde T_n)_{21} & n&=1,\dots, N \\
\ & \sum_{m \in \mathcal{M}} x_{n,m} = 1 &n&=1,\dots, N \label{eq:material pick binary} \\
\ & 0 \le v_{n,m} \le x_{n,m}, \ x_{n,m} \in\{0,1\} & n & =1,\dots, N,\ m\in\mathcal{M} .
\end{align}
\end{subequations}
We note that the nonconvex MIQCQP~\eqref{eq:singleWaveF} can be solved via Gurobi (version 9) or global solvers such as BARON.
\subsection{Formulation Enhancements}
\label{sec:thinFilmsFormEnhance}
\subsubsection{Bound Tightening}
\label{sec:thinFilmsFormBound}
Since the success of the solution methods of global optimization problems depends on the availability of variable bounds, we now discuss how to obtain tight variable bounds for problem~\eqref{eq:singleWaveF}. To start with, the following bounds are readily available:
\[
-1 \le C_n, (\tilde T_n)_{11}, (\tilde T_n)_{22} \le 1,
\ 0 \le S_n \le 1, 0 \le (\tilde T_n)_{12} \le 1/ \hat a_L^\lambda, 0 \le (\tilde T_n)_{21} \le \hat a_H^\lambda
\quad n=1,\dots,N,
\]
where
\begin{equation}\label{eq:highLowMaterials}
\hat a_L^\lambda := \min \{ \hat a_m^\lambda: m\in\mathcal{M} \} \text{ and }
\hat a_H^\lambda := \max \{ \hat a_m^\lambda: m\in\mathcal{M} \}.
\end{equation}
To obtain the variable bounds for the entries of the cumulative transfer matrices $u_n$, we will use the following proposition.
\begin{proposition}\label{prop:uBounds}
Let $\underline \alpha \le \overline \alpha$, $\underline \beta \le \overline \beta$, $ \Gamma := \{\gamma_h\}_{h=1}^H \subseteq \mathbb{R}_+$, and define
\[
\Phi := \{ (\alpha,\beta,\gamma,C,S) \in \mathbb{R}^5 : \alpha \in [\underline \alpha, \overline \alpha] , \beta \in [\underline \beta, \overline \beta], \gamma \in \Gamma, C^2 + S^2 = 1, S \ge 0\}.
\]
Then,
\begin{enumerate}[(i)]
\item
$\max\{ \alpha C + \gamma \beta S : (\alpha,\beta,\gamma,C,S) \in \Phi \}
= \sqrt{ \max\{\underline \alpha^2, \overline \alpha^2\} + \max\{\gamma^2: \gamma\in\Gamma\} \max\{0, \overline \beta\}^2 }$.
\item
$\min\{ \alpha C + \gamma \beta S : (\alpha,\beta,\gamma,C,S) \in \Phi \}
= -\sqrt{ \max\{\underline \alpha^2, \overline \alpha^2\} + \max\{\gamma^2: \gamma\in\Gamma\} \max\{0, -\underline \beta\}^2 }$.
\end{enumerate}
\end{proposition}
\begin{proof}
We will prove Statement (i) in three steps. Firstly,
consider the optimization problem $z^*(\alpha,\beta,\gamma) = \max\{ \alpha C + \gamma \beta S : C^2 + S^2 = 1, S \ge 0 \} $ for some given $(\alpha,\beta,\gamma)\in\mathbb{R}^2\times\mathbb{R}_+$. Then, $z^*(\alpha,\beta,\gamma) $ is equal to $\sqrt{\alpha^2 + (\gamma\beta)^2}$ if $\beta \ge 0$ and $|\alpha|$ otherwise.
Secondly, consider $z^*(\gamma) := \max\{ \sqrt{\alpha^2 + \max\{0, \gamma\beta\}^2 } : \alpha \in [\underline \alpha, \overline \alpha] , \beta \in [\underline \beta, \overline \beta] \}$ for some $\gamma\in\mathbb{R}_+$, where the objective function comes from the first step. Then,
$z^*(\gamma) = \sqrt{ \max\{\underline \alpha^2, \overline \alpha^2\} + \max\{0, \gamma\overline\beta\}^2 }$. Finally, we observe that
$\max\{ \alpha C + \gamma \beta S : (\alpha,\beta,\gamma,C,S) \in \Phi \} = \max\{ z^*(\gamma) : \gamma \in \Gamma \} $, from which the result follows.
Statement (ii) follows by noting that $\min\{ \alpha C + \gamma \beta S : (\alpha,\beta,\gamma,C,S) \in \Phi \}
= - \max\{ \alpha' C + \gamma \beta' S : (\alpha',\beta',\gamma,C,S) \in \Phi' \}$ with
$
\Phi' := \{ (\alpha',\beta',\gamma,C,S) \in \mathbb{R}^5 : \alpha' \in [-\overline \alpha, -\underline \alpha] , \beta' \in [-\overline \beta, -\underline \beta], \gamma \in \Gamma, C^2 + S^2 = 1, S \ge 0\}$, and then applying the previous result.
\end{proof}
Let us now demonstrate how Proposition \ref{prop:uBounds} can be used to derive bounds for the entries of the matrix $u_n$ with an example. Consider one of the constraints of equation \eqref{eq:genericB bilinear}
\[
(\tilde u_n)_{21} = (\tilde u_{n-1})_{21} C_n + a_n (\tilde u_{n-1})_{22} S_n ,
\]
for some $n=1,\dots,N$. Since we will proceed recursively and $\tilde u_0 $ is the identity matrix, let us assume that the variable bounds of $(\tilde u_{n-1})_{21}$ and $(\tilde u_{n-1})_{22}$ are available, and denoted as $ [\underline \alpha, \overline \alpha]$ and $[\underline \beta, \overline \beta]$, respectively. Also, let $\Gamma = \{\hat a_m^\lambda: m \in \mathcal{M}\}$. Then, Proposition \ref{prop:uBounds} gives upper and lower bounds for variable $(\tilde u_n)_{21}$. Similar arguments can be used to derive variable bounds for $(\tilde u_n)_{11}$, $(\tilde u_n)_{12}$ and $(\tilde u_n)_{22}$ as well.
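A small computational sketch of this recursive bound propagation, shown only for the entry discussed above (the other entries are handled analogously) and using placeholder numbers, could look as follows.
\begin{verbatim}
import numpy as np

def bounds_alpha_C_plus_gamma_beta_S(alpha_lb, alpha_ub, beta_lb, beta_ub, gammas):
    # bounds of alpha*C + gamma*beta*S over C^2 + S^2 = 1, S >= 0,
    # alpha in [alpha_lb, alpha_ub], beta in [beta_lb, beta_ub], gamma in gammas,
    # evaluated via the closed-form expressions of the proposition above
    a2 = max(alpha_lb ** 2, alpha_ub ** 2)
    g2 = max(g ** 2 for g in gammas)
    ub = np.sqrt(a2 + g2 * max(0.0, beta_ub) ** 2)
    lb = -np.sqrt(a2 + g2 * max(0.0, -beta_lb) ** 2)
    return lb, ub

gammas = [1.38, 1.46, 2.40, 2.75]   # illustrative refractive indices
# bounds of (u_{n-1})_{21} and (u_{n-1})_{22} from the previous recursion step
lb21, ub21 = 0.0, 0.0               # exact for n = 1, since u_0 is the identity
lb22, ub22 = 1.0, 1.0
print(bounds_alpha_C_plus_gamma_beta_S(lb21, ub21, lb22, ub22, gammas))
\end{verbatim}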
\subsubsection{Valid Bilinear Equalities}
\label{sec:thinFilmsFormQuadEq}
Proposition \ref{prop:somePropertiesTransfer}(i) states that the determinant of the transfer matrices is 1, a property preserved by multiplication. Therefore, all the cumulative transfer matrices have determinant 1 as well. In particular, we can add the following bilinear equality to our formulation:
\begin{equation}\label{eq:validQuad}
\det( w) = \tilde w_{1,1} \tilde w_{2,2} + \tilde w_{1,2} \tilde w_{2,1} = 1.
\end{equation}
Note that, in principle, similar bilinear constraints corresponding to $\det(u_n)=1$ for each $n=1,\dots,N-1$ can be included as well. However, our preliminary experiments have shown that including many such bilinear constraints slows down the solvers.
\subsubsection{Symmetry Breaking Constraints}
\label{sec:thinFilmsFormSymmBreak}
Proposition \ref{prop:somePropertiesTransfer}(ii) implies that coating two consecutive layers of the same material with thicknesses $t_1$ and $t_2$ is equivalent to a single layer of the same material with thickness $t_1+t_2$. Therefore, including the following inequality, which forbids feasible solutions in which two consecutive layers of the same material are used, does not change the optimal value of problem \eqref{eq:singleWaveF}:
\begin{equation}\label{eq:symmBreak}
x_{n,m} + x_{n+1,m} \le 1 \quad n=1,\dots,N-1, \ m\in\mathcal{M}.
\end{equation}
However, the above inequality breaks the symmetry in the formulation and hence is useful in the solution procedure.
\subsection{A Heuristic Approach}
\label{sec:thinFilmsHeur}
A common heuristic approach in the thin films literature to solve problem~\eqref{eq:singleWave} is to use alternating layers of high and low index materials with an optical thickness of a quarter wavelength (see e.g. \citet{macleod2010thin, pedrotti2017introduction}, among others). More precisely, for a given wavelength~$\lambda$, let $\hat a_H^\lambda $ and $\hat a_L^\lambda $ denote the highest and lowest refractive indices among the materials in the set $\mathcal{M}$, as computed in \eqref{eq:highLowMaterials},
and choose the physical thicknesses as
\[
t_H^\lambda := \frac{\lambda}{4 \hat a_H^\lambda} \text{ and }
t_L^\lambda := \frac{\lambda}{4 \hat a_L^\lambda} .
\]
Consider a feasible solution to problem \eqref{eq:singleWave} constructed as follows: For each odd (resp. even) index~$n$, we use high (resp. low) index material $H$ (resp. $L$) with thickness $t_H^\lambda$ (resp. $t_L^\lambda)$, that is,
the transfer matrix of each layer is chosen as
\begin{equation}\label{eq:thinFilmHeur}
T_n = T_{H, t_H^\lambda}^\lambda =
\begin{bmatrix}
0 & \frac\mathrm{i}{\hat a_H^\lambda} \\
\mathrm{i} {\hat a_H^\lambda} & 0
\end{bmatrix} \text{ if $n$ is odd and }
T_n = T_{L, t_L^\lambda}^\lambda =
\begin{bmatrix}
0 & \frac\mathrm{i}{\hat a_L^\lambda} \\
\mathrm{i} {\hat a_L^\lambda} & 0
\end{bmatrix} \text{ if $n$ is even}.
\end{equation}
We will now prove that the reflectance of the multi-layer thin films obtained as above converges to~1 as $N\to\infty$.
\begin{proposition}\label{prop:thinFilmHeur}
Let $\lambda$ be given and consider a feasible solution to problem~\eqref{eq:singleWave} constructed in~\eqref{eq:thinFilmHeur}. Then,
\[
\lim_{N \to \infty} R_s^\lambda (\tilde u_N) = 1,
\]
where $u_N$ is the corresponding cumulative transfer matrix of an $N$-layer thin film.
\end{proposition}
\begin{proof}
First of all, writing $\rho_N := {(\hat a_H^\lambda)}^{ \lceil N/2 \rceil } / {(\hat a_L^\lambda)}^{ \lfloor N/2 \rfloor }$ and noting that the product of a high-index and a low-index quarter-wave transfer matrix is
\[
T_{H, t_H^\lambda}^\lambda T_{L, t_L^\lambda}^\lambda =
\begin{bmatrix}
- \hat a_L^\lambda / \hat a_H^\lambda & 0 \\
0 & - \hat a_H^\lambda / \hat a_L^\lambda
\end{bmatrix},
\]
the cumulative transfer matrix of an $N$-layer thin film is obtained as
\[
u_N = \begin{cases}
\begin{bmatrix}
0 & {\mathrm{i}^N}/{\rho_N} \\
\mathrm{i}^N \rho_N & 0
\end{bmatrix} & \text{if $N$ is odd} \\
\\
\begin{bmatrix}
{\mathrm{i}^N}/{\rho_N} & 0 \\
0 &\mathrm{i}^N \rho_N
\end{bmatrix} & \text{if $N$ is even}
\end{cases}.
\]
Then, we have
\[
D_s^\lambda(\tilde u_N) = \begin{cases}
\frac{|\hat a_s^\lambda|^2}{ \rho_N^2 } + \rho_N^2 + 2\Re({\hat a_s^\lambda}) & \text{if $N$ is odd} \\
\frac{1}{ \rho_N^2 } + |\hat a_s^\lambda|^2 \rho_N^2 + 2\Re({\hat a_s^\lambda}) & \text{if $N$ is even}
\end{cases},
\]
where $D_s^\lambda(\cdot) $ is defined as in \eqref{eq:defDen}. Note that since $\hat a_H^\lambda > \hat a_L^\lambda$, we have $\rho_N \to \infty$ and therefore $ \lim_{ N \to \infty} D_s^\lambda(\tilde u_N) = \infty$. Hence, we conclude that $\lim_{N \to \infty} R_s^\lambda (\tilde u_N) = 1$ due to Proposition \ref{prop:reflactanceSimplification}(ii).
\end{proof}
Proposition \ref{prop:thinFilmHeur} justifies the use of the heuristic approach introduced above, especially when a large number of layers are allowed to be used. However, in practice, thin films with a small number of layers can be preferred due to cost considerations and manufacturing challenges.
In such cases, the heuristic solutions obtained may not be optimal, as demonstrated by our computational experiments presented in the next section, and our optimization-based approach might prove very useful.
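Using the illustrative \texttt{transfer\_matrix} and \texttt{reflectance} helpers sketched at the end of Section~\ref{sec:thinFilmsDef} (they must be in scope), the convergence stated in Proposition~\ref{prop:thinFilmHeur}, as well as the parity effect discussed in the computational experiments below, can be observed qualitatively:
\begin{verbatim}
# reflectance of the alternating quarter-wave stack for growing N,
# with the same rough illustrative index values as before
import numpy as np

lam, n_sub = 450.0, 3.4 + 2.7j
n_hi, n_lo = 2.75, 1.38
W = np.eye(2)
for n in range(1, 11):
    n_layer = n_hi if n % 2 == 1 else n_lo
    W = W @ transfer_matrix(n_layer, lam / (4 * n_layer), lam)
    print(n, reflectance(W, n_sub))
\end{verbatim}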
\subsection{Computations}
\label{sec:thinFilmsComp}
In this section, we present our extensive computations for the multi-layer thin films problem. In this analysis, we use the coating materials \ch{SiO2}, \ch{TiO2}, \ch{MgF2}, \ch{Al2O3}, and metallic substrates Tungsten, Tantalum, Molybdenum, Niobium.
The necessary refractive index data is obtained from \citet{keccebacs2018enhancing}, which is gathered from multiple sources \citep{malitson1965interspecimen,palik1998handbook,dodge1984refractive,dodge2refractive,golovashkin1969optical}. We utilize BARON 19.12.7 and Gurobi 9 to solve the nonconvex MIQCQP~\eqref{eq:singleWaveF} on a 64-bit personal computer with Intel Core i7
CPU 2.60GHz processor (16 GB RAM). The relative optimality gap is set to 0.001 for all experiments.
\subsubsection{Computational Efficiency}
\label{sec:thinFilmsCompEff}
We first carry out some preliminary experiments to decide between two competing solvers BARON and Gurobi to solve
problem~\eqref{eq:singleWaveF}, and to demonstrate the effect of valid bilinear equalities~\eqref{eq:validQuad} and symmetry breaking constraints~\eqref{eq:symmBreak}. Our results presented in Table~\ref{tab:thinFilmsCompEff} clearly show that Gurobi is the faster solver for this problem by at least one order-of-magnitude. We also observe that the addition of constraints~\eqref{eq:validQuad} and~\eqref{eq:symmBreak} help improve the computational performance of both solvers significantly (except $N=2$ for BARON).
\begin{table}[H]\small
\caption{Computational times (in seconds) of different methods to solve problem~\eqref{eq:singleWaveF} for different wavelengths $\lambda$ (in nanometers) and number of layers $N$ on a Tungsten substrate. ``Enh.'' stands for ``Enhanced'' and refers to the addition of
equations~\eqref{eq:validQuad} and~\eqref{eq:symmBreak}.}\label{tab:thinFilmsCompEff}
\centering
\begin{tabular}{c|rr|rr|rrr|rrr}
& \multicolumn{ 2}{c|}{BARON} & \multicolumn{ 2}{c|}{BARON Enh.} & \multicolumn{ 3}{c|}{Gurobi} & \multicolumn{ 3}{c}{Gurobi Enh.} \\
$\lambda$ (nm) & $ N=2$ & $N=3 $& $ N=2$ & $ N=3$ & $ N=2$ & $N=3$ & $ N=4$ & $ N=2$ & $ N=3$ & $ N=4$ \\
\hline
450 & 0.92 & 84.26 & 9.86 & 82.03 & 0.39 & 5.36 & 33.05 & 0.85 & 3.92 & 14.41 \\
600 & 1.03 & 152.28 & 1.24 & 154.89 & 0.45 & 7.96 & 58.79 & 0.45 & 4.73 & 24.16 \\
750 & 1.10 & 225.27 & 5.59 & 187.77 & 0.43 & 11.05 & 68.53 & 0.60 & 5.50 & 26.56 \\
900 & 1.03 & 221.17 & 3.92 & 181.93 & 0.54 & 9.47 & 81.15 & 0.55 & 5.03 & 29.75 \\
1200 & 0.98 & 245.74 & 2.75 & 133.05 & 0.44 & 9.30 & 73.91 & 0.58 & 4.82 & 31.41 \\
1500 & 0.94 & 217.62 & 4.49 & 182.65 & 0.42 & 9.17 & 86.32 & 0.47 & 5.55 & 29.62 \\
1800 & 0.98 & 283.27 & 7.56 & 225.14 & 0.48 & 8.81 & 78.24 & 0.44 & 5.80 & 33.36 \\
2100 & 0.74 & 314.47 & 4.10 & 144.45 & 0.40 & 10.61 & 88.68 & 0.59 & 6.53 & 32.16 \\
2400 & 0.76 & 449.25 & 5.40 & 180.88 & 0.42 & 13.08 & 87.57 & 0.59 & 6.68 & 36.68 \\
\hline
Avg. & 0.94 & 243.70 & 4.99 & 163.64 & 0.44 & 9.42 & 72.92 & 0.57 & 5.40 & 28.68 \\
\end{tabular}
\end{table}
As a result of these preliminary experiments, we have decided to use Gurobi with enhancements in the detailed experiments presented below.
\subsubsection{Comparison with the Heuristic Approach}
\label{sec:thinFilmsCompHeur}
We now compare the solutions obtained from the heuristic approach in Section \ref{sec:thinFilmsHeur} and solving the optimization problem \eqref{eq:singleWaveF}. In Tables \ref{tab:compareTungsten}--\ref{tab:compareNiobium}, we report the reflectance values obtained for different wavelengths $\lambda$ and number of layers $N$ on four metallic substrates.
In these computational experiments, we pre-terminate Gurobi once a feasible solution with a reflectance of at least 0.995 is obtained (this is enforced via the use of the parameter \texttt{BestObjStop}).
\begin{table}[H]\scriptsize
\caption{Comparison of the reflectance values obtained by problem \eqref{eq:singleWaveF} and the heuristic approach for different wavelengths $\lambda$ and number of layers $N$ on a Tungsten substrate (``AT (s)'' stands for ``average time in seconds'').}\label{tab:compareTungsten}
\centering
\begin{tabular}{cr|rrrrrr|rrrrr}
& & \multicolumn{ 6}{c|}{Heuristic} & \multicolumn{ 5}{c}{Optimal} \\
$\lambda$ (nm) & $N=0$ & $N=1$ & $N=2$ & $N=3$ & $N=4$ & $N=5$ & $N=6$ & $N=1$ & $N=2$ & $N=3$ & $N=4$ & $N=5$ \\
\hline
450 & 0.470 & 0.279 & 0.865 & 0.778 & 0.973 & 0.953 & 0.995 & 0.553 & 0.870 & 0.894 & 0.974 & 0.979 \\
600 & 0.508 & 0.209 & 0.857 & 0.683 & 0.966 & 0.917 & 0.992 & 0.563 & 0.862 & 0.879 & 0.967 & 0.971 \\
750 & 0.500 & 0.169 & 0.846 & 0.633 & 0.961 & 0.896 & 0.990 & 0.545 & 0.851 & 0.866 & 0.962 & 0.966 \\
900 & 0.521 & 0.223 & 0.850 & 0.661 & 0.961 & 0.903 & 0.990 & 0.579 & 0.856 & 0.875 & 0.962 & 0.968 \\
1200 & 0.642 & 0.283 & 0.892 & 0.660 & 0.972 & 0.899 & 0.993 & 0.683 & 0.897 & 0.909 & 0.973 & 0.976 \\
1500 & 0.698 & 0.384 & 0.910 & 0.718 & 0.976 & 0.917 & 0.994 & 0.740 & 0.914 & 0.926 & 0.977 & 0.980 \\
1800 & 0.866 & 0.616 & 0.962 & 0.805 & 0.990 & 0.942 & 0.997 & 0.881 & 0.964 & 0.967 & 0.990 & 0.991 \\
2100 & 0.933 & 0.751 & 0.981 & 0.844 & 0.995 & 0.951 & 0.999 & 0.938 & 0.982 & 0.983 & 0.995 & 0.995 \\
2400 & 0.951 & 0.787 & 0.986 & 0.831 & 0.996 & 0.942 & 0.999 & 0.953 & 0.986 & 0.987 & 0.996 & 0.996 \\
\hline
AT (s) & & & & & & & & 0.28 & 0.57 & 5.40 & 23.85 & 5517.26 \\
\end{tabular}
\end{table}
\begin{table}[H]\scriptsize
\caption{Comparison of the reflectance values obtained by problem \eqref{eq:singleWaveF} and the heuristic approach for different wavelengths $\lambda$ and number of layers $N$ on a Tantalum substrate.}\label{tab:compareTantalum}
\centering
\begin{tabular}{cr|rrrrrr|rrrrr}
& & \multicolumn{ 6}{c|}{Heuristic} & \multicolumn{ 5}{c}{Optimal} \\
$\lambda$ (nm) & $N=0$ & $N=1$ & $N=2$ & $N=3$ & $N=4$ & $N=5$ & $N=6$ & $N=1$ & $N=2$ & $N=3$ & $N=4$ & $N=5$ \\
\hline
450 & 0.409 & 0.329 & 0.842 & 0.805 & 0.968 & 0.960 & 0.994 & 0.530 & 0.850 & 0.887 & 0.969 & 0.977 \\
600 & 0.361 & 0.397 & 0.787 & 0.807 & 0.947 & 0.953 & 0.988 & 0.548 & 0.809 & 0.874 & 0.953 & 0.970 \\
750 & 0.672 & 0.592 & 0.903 & 0.866 & 0.976 & 0.966 & 0.994 & 0.772 & 0.915 & 0.940 & 0.979 & 0.985 \\
900 & 0.814 & 0.663 & 0.948 & 0.878 & 0.987 & 0.968 & 0.997 & 0.856 & 0.953 & 0.962 & 0.988 & 0.991 \\
1200 & 0.914 & 0.751 & 0.977 & 0.889 & 0.994 & 0.970 & 0.999 & 0.925 & 0.978 & 0.981 & 0.994 & 0.995 \\
1500 & 0.951 & 0.813 & 0.987 & 0.895 & 0.997 & 0.970 & 0.999 & 0.955 & 0.987 & 0.988 & 0.996 & 0.995 \\
1800 & 0.963 & 0.835 & 0.990 & 0.882 & 0.997 & 0.963 & 0.999 & 0.965 & 0.990 & 0.991 & 0.997 & 0.996 \\
2100 & 0.970 & 0.851 & 0.992 & 0.866 & 0.998 & 0.955 & 0.999 & 0.971 & 0.992 & 0.992 & 0.998 & 0.995 \\
2400 & 0.973 & 0.860 & 0.992 & 0.848 & 0.998 & 0.943 & 0.999 & 0.974 & 0.992 & 0.993 & 0.997 & 0.996 \\
\hline
AT (s) & & & & & & & & 0.28 & 0.51 & 5.19 & 16.74 & 1478.59 \\
\end{tabular}
\end{table}
\begin{table}[H]\scriptsize
\caption{Comparison of the reflectance values obtained by problem \eqref{eq:singleWaveF} and the heuristic approach for different wavelengths $\lambda$ and number of layers $N$ on a Molybdenum substrate.}\label{tab:compareMolybdenum}
\centering
\begin{tabular}{cr|rrrrrr|rrrrr}
& & \multicolumn{ 6}{c|}{Heuristic} & \multicolumn{ 5}{c}{Optimal} \\
$\lambda$ (nm) & $N=0$ & $N=1$ & $N=2$ & $N=3$ & $N=4$ & $N=5$ & $N=6$ & $N=1$ & $N=2$ & $N=3$ & $N=4$ & $N=5$ \\
\hline
450 & 0.569 & 0.325 & 0.896 & 0.791 & 0.979 & 0.956 & 0.996 & 0.643 & 0.901 & 0.920 & 0.980 & 0.984 \\
600 & 0.567 & 0.218 & 0.878 & 0.676 & 0.971 & 0.915 & 0.993 & 0.613 & 0.882 & 0.896 & 0.972 & 0.976 \\
750 & 0.566 & 0.191 & 0.872 & 0.631 & 0.968 & 0.895 & 0.992 & 0.607 & 0.875 & 0.888 & 0.969 & 0.972 \\
900 & 0.570 & 0.261 & 0.868 & 0.677 & 0.966 & 0.908 & 0.991 & 0.626 & 0.874 & 0.892 & 0.967 & 0.972 \\
1200 & 0.786 & 0.492 & 0.939 & 0.768 & 0.984 & 0.934 & 0.996 & 0.814 & 0.942 & 0.949 & 0.985 & 0.987 \\
1500 & 0.890 & 0.638 & 0.970 & 0.806 & 0.992 & 0.943 & 0.998 & 0.900 & 0.971 & 0.973 & 0.993 & 0.993 \\
1800 & 0.935 & 0.735 & 0.982 & 0.824 & 0.995 & 0.945 & 0.999 & 0.939 & 0.983 & 0.984 & 0.995 & 0.995 \\
2100 & 0.958 & 0.804 & 0.988 & 0.837 & 0.997 & 0.946 & 0.999 & 0.960 & 0.988 & 0.989 & 0.995 & 0.995 \\
2400 & 0.969 & 0.844 & 0.991 & 0.840 & 0.998 & 0.941 & 0.999 & 0.970 & 0.991 & 0.992 & 0.997 & 0.995 \\
\hline
AT (s) & & & & & & & & 0.25 & 0.44 & 5.51 & 18.64 & 5717.69 \\
\end{tabular}
\end{table}
\begin{table}[H]\scriptsize
\caption{Comparison of the reflectance values obtained by problem \eqref{eq:singleWaveF} and the heuristic approach for different wavelengths $\lambda$ and number of layers $N$ on a Niobium substrate.}\label{tab:compareNiobium}
\centering
\begin{tabular}{cr|rrrrrr|rrrrr}
& & \multicolumn{ 6}{c|}{Heuristic} & \multicolumn{ 5}{c}{Optimal} \\
$\lambda$ (nm) & $N=0$ & $N=1$ & $N=2$ & $N=3$ & $N=4$ & $N=5$ & $N=6$ & $N=1$ & $N=2$ & $N=3$ & $N=4$ & $N=5$ \\
\hline
450 & 0.558 & 0.486 & 0.890 & 0.862 & 0.978 & 0.972 & 0.996 & 0.688 & 0.900 & 0.931 & 0.980 & 0.987 \\
600 & 0.573 & 0.387 & 0.878 & 0.785 & 0.971 & 0.947 & 0.993 & 0.663 & 0.887 & 0.912 & 0.973 & 0.979 \\
750 & 0.620 & 0.407 & 0.888 & 0.775 & 0.972 & 0.940 & 0.993 & 0.696 & 0.896 & 0.917 & 0.974 & 0.979 \\
900 & 0.726 & 0.485 & 0.922 & 0.794 & 0.980 & 0.944 & 0.995 & 0.775 & 0.927 & 0.939 & 0.981 & 0.985 \\
1200 & 0.875 & 0.641 & 0.966 & 0.831 & 0.991 & 0.952 & 0.998 & 0.890 & 0.967 & 0.971 & 0.992 & 0.993 \\
1500 & 0.924 & 0.717 & 0.980 & 0.836 & 0.995 & 0.952 & 0.999 & 0.930 & 0.980 & 0.982 & 0.995 & 0.995 \\
1800 & 0.941 & 0.739 & 0.984 & 0.805 & 0.996 & 0.938 & 0.999 & 0.944 & 0.984 & 0.985 & 0.995 & 0.995 \\
2100 & 0.953 & 0.775 & 0.987 & 0.792 & 0.997 & 0.927 & 0.999 & 0.955 & 0.987 & 0.988 & 0.996 & 0.996 \\
2400 & 0.952 & 0.760 & 0.986 & 0.733 & 0.996 & 0.894 & 0.999 & 0.953 & 0.986 & 0.987 & 0.996 & 0.996 \\
\hline
AT (s) & & & & & & & & 0.25 & 0.49 & 5.18 & 18.75 & 2297.35 \\
\end{tabular}
\end{table}
We have similar observations for the different metallic substrates. Firstly, we report that the success of the heuristic method heavily depends on the parity of the number of layers $N$. If $N$ is even, then the solution of the heuristic method performs quite well compared to the optimal solution. However, if $N$ is odd, then the performance of the heuristic solution can be significantly worse than that of the optimal solution. Interestingly, when $N$ is odd, the heuristic solution obtained with $N-1$ layers often attains an even higher reflectance than the one with $N$ layers. Also, the reflectance difference between the heuristic and optimal solutions shrinks as $N$ increases, as expected from Proposition~\ref{prop:thinFilmHeur}. Therefore, our proposed optimization approach is especially useful when $N$ is odd or small.
Since thin films that achieve high reflectance with a small number of layers are desirable in practice, our approach can be particularly beneficial in such cases.
We remark that we have not solved the optimization model for $N \ge 6$ since, already at $N=6$, the heuristic method provides very high reflectance values that are sufficient from an application point of view.
We note that optimal solutions have similar characteristics to the solutions obtained by the heuristic method. In particular, the ordering of materials again alternates between the highest and lowest indexed coating materials (in this case, \ch{TiO2} and \ch{MgF2}, respectively). However, the optimal optical thicknesses of the layers are not necessarily equal to a quarter wavelength. We leave the formal analysis and exploration of these observations as future work.
Finally, we notice that Gurobi is quite successful in finding high-quality solutions early, and spends most of its effort on improving the dual bound to certify that the feasible solution obtained is, in fact, optimal.
\section{An Application from Biology: Antibiotics Time Machine}
\label{sec:antibiotic}{
In this section, we study the antibiotics time machine problem from biology. After providing a formal problem definition in Section \ref{sec:antibioticDef}, we present an MILP formulation in Section \ref{sec:antibioticForm} by utilizing the linearization approach derived in Section \ref{sec:mainLinear}. Finally, we present the results of our computational experiments in Section \ref{sec:antibioticComp} using real and synthetic datasets.
}
\subsection{Problem Definition}
\label{sec:antibioticDef}
Let us first fix the notation used in this section.
Consider a string $s$ of size $g$ with $s_i\in\{0,1\}$, $i=1,\dots,g$. Here, each string $s \in \{0,1\}^g$ represents a bacterial genotype with $g$ alleles, and each character $s_i$ indicates whether there is a mutation in the corresponding allele ($s_i=1$) or not ($s_i=0$). The genotype $s=0$ is special and it is called the \textit{wild type} since it has no mutations.
Let us denote the total number of states as $d:=2^g$.
Suppose that we have $K$ antibiotics and the transition between genotypes (or states) under the administration of antibiotic $k$ is governed by a probability matrix $\hat T_k \in \mathbb{R}^{d \times d}$, $k=1,\dots,K$. In other words, there is an associated Markov chain for each drug. Under the common assumption of Strong Selection Weak Mutation (SSWM) developed in \citet{gillespie1983simple, gillespie1984molecular}, only the transition probability between two genotypes $s$ and $s'$ that differ in exactly one character (i.e., $\|s-s'\|_1 = 1$) can be positive. Let us denote the set of such pairs by $\mathcal{N}$.
The antibiotics time machine problem is formally described as follows: Given $K$ drugs and the initial genotype, find a treatment plan of length $N$ such that the probability of reaching the wild type is maximized.
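As a small illustration of this setting (our own sketch; the variable names below are not part of the formal development), the following Python fragment enumerates the genotypes for $g=3$ alleles and builds the set $\mathcal{N}$ of ordered pairs that may carry a positive transition probability under SSWM.
\begin{verbatim}
from itertools import product

g = 3                                    # number of alleles
genotypes = [''.join(bits) for bits in product('01', repeat=g)]

# neighbor_pairs contains the ordered pairs of genotypes differing in exactly
# one allele, i.e. the only transitions that can be positive under SSWM.
neighbor_pairs = [(s, t) for s in genotypes for t in genotypes
                  if sum(a != b for a, b in zip(s, t)) == 1]

print(len(genotypes), "genotypes,", len(neighbor_pairs), "admissible transitions")
# prints: 8 genotypes, 24 admissible transitions
\end{verbatim}
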
Let us use Figure \ref{fig:antibiotic} as an example to explain the notation and clarify the problem setting. In this illustration, we have $g=3$ alleles and $K=2$ drugs, Blue (solid arcs) and Red (dotted arcs). Suppose that we are seeking a treatment plan of length $N=3$ given the initial genotype $111$.
Notice that there is no path with positive probability in the Markov chain corresponding to a single drug going from the initial genotype to the wild type in three steps. However, administering the drug sequence Blue, Blue, Red or the sequence
Red, Blue, Red yields a positive probability. Then, our aim is to decide which of these treatment plans has the highest probability (observe that these are the only treatment plans with positive probabilities for this instance), which involves a series of matrix multiplications.
We would like to note that the antibiotics time machine problem can be seen as solving a ``static'' Markov decision process in which all the decisions are made before any realizations become available.
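Evaluating a fixed treatment plan therefore amounts to a chain of vector--matrix multiplications, and the best plan can in principle be found by enumerating all $K^N$ sequences. The sketch below is our own illustration of this baseline; the transition matrices are random row-stochastic placeholders rather than the probabilities underlying Figure~\ref{fig:antibiotic}.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, K, N = 8, 2, 3                        # states, drugs, treatment length

# Placeholder row-stochastic transition matrices, one Markov chain per drug.
T = [rng.random((d, d)) for _ in range(K)]
T = [M / M.sum(axis=1, keepdims=True) for M in T]

p = np.zeros(d); p[d - 1] = 1.0          # initial genotype (111 in our ordering)
q = np.zeros(d); q[0] = 1.0              # target state (wild type 000)

def plan_probability(plan):
    """Probability of reaching the target after administering `plan`."""
    u = p.copy()
    for k in plan:
        u = u @ T[k]                     # one step of the selected Markov chain
    return float(u @ q)

# Complete enumeration over all K**N plans (feasible only for small N).
best = max(itertools.product(range(K), repeat=N), key=plan_probability)
print(best, plan_probability(best))
\end{verbatim}
Since this enumeration grows as $K^N$, it quickly becomes impractical, which is what motivates the exact formulation of Section~\ref{sec:antibioticForm}.
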
\begin{figure}
\caption{Illustration of an antibiotics time machine problem instance with $g=3$ alleles and $K=2$ drugs. Probabilistic state transitions under each drug are represented by the arcs.}
\label{fig:antibiotic}
\end{figure}
\subsection{Problem Formulation}
\label{sec:antibioticForm}
We will now formulate the antibiotics time machine problem as an instance of the generic model~\eqref{eq:MILP} by exploiting the fact that $\mathcal{T}$ is a finite set.
Recall that we denote the probability transition matrix of drug $k$ as $\hat T_k \in \mathbb{R}^{d \times d}$. Then, we have
$
\mathcal{T} = \{\hat T_k: k =1,\dots,K \}
$.
Let $p$ and $q$ be the unit row vectors corresponding to the initial and final states, respectively. Then, the objective function $f(w):=w q^T$ gives the probability of reaching the final state in exactly $N$ steps with the decisions $T_1, \dots, T_N$. Since the objective function is linear, the antibiotics time machine problem can be solved as an MILP via~\eqref{eq:MILP}. In this formulation, we select the outer-approximating polytopes $\bar{\mathcal{U}}_n$ as the standard simplex of order $d$, that is,
\[
\bar{\mathcal{U}}_n = \Delta_d := \bigg \{ u \in \mathbb{R}_+^{d} : \ \sum_{j=1}^d u_{j} = 1 \bigg\},
\]
for $n=1,\dots,N$, as each variable row vector $u_n$ corresponds to the probability distribution over the states after administering the first~$n$ selected drugs.
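The choice $\bar{\mathcal{U}}_n = \Delta_d$ is natural because multiplying a probability distribution by row-stochastic matrices keeps it inside the standard simplex. The short numerical check below (our own illustration with a random placeholder matrix) makes this invariance explicit.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, N = 4, 5
T = rng.random((d, d))
T /= T.sum(axis=1, keepdims=True)        # a row-stochastic matrix

u = np.zeros(d); u[0] = 1.0              # initial distribution (a unit row vector)
for n in range(1, N + 1):
    u = u @ T
    # u stays in the standard simplex: nonnegative entries summing to one.
    assert np.all(u >= -1e-12) and abs(u.sum() - 1.0) < 1e-9
print("all intermediate distributions lie in the simplex")
\end{verbatim}
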
\subsection{Computations}
\label{sec:antibioticComp}
In this section, we present the computational results obtained by solving the antibiotics time machine problem using a real dataset from \citet{mira2015rational} in Section~\ref{sec:antibioticCompReal} and a synthetic dataset in Section~\ref{sec:antibioticCompSynthetic}. We compare the computational effort of complete enumeration and the MILP~\eqref{eq:MILP}, and discuss their scalability issues.
\subsubsection{A Real Dataset}
\label{sec:antibioticCompReal}
We use the experimental growth data and probability calculations from \citet{mira2015rational} to obtain the transition probability matrices.
In particular, let $\omega_{k,j}$ be the growth rate of genotype $j$ under antibiotic $k$. The main principle behind the probability calculation is that if $\omega_{k,j'} > \omega_{k,j}$, then $\hat T_{k,(j,j')} > 0 $ for $(j,j') \in \mathcal{N}$.
Two different probability models are used in \citet{mira2015rational}.
\begin{itemize}
\item
Correlated Probability Model (CPM): In this model, the probabilities are computed as
\begin{equation*}\label{eq:probCPM}
\hat T^{\text{\tiny CPM}}_{k,(j,j')} = \frac{ \max\{0, \omega_{k,j'} - \omega_{k,j} \} }{ \sum_{ j'':(j'',j) \in \mathcal{N}} \max\{0, \omega_{k,j''} - \omega_{k,j} \} }, \quad (j,j') \in\mathcal{N}.
\end{equation*}
Here, we use the convention $\frac{0}{0}=0$.
\item
Equal Probability Model (EPM): In this model, the probabilities are computed as
\begin{equation*}\label{eq:probEPM}
\hat T^{\text{\tiny EPM}}_{k,(j,j')} = \frac{ \mathbbm{1} ( \omega_{k,j'} > \omega_{k,j} ) }{ \sum_{ j'':(j'',j) \in \mathcal{N}} \mathbbm{1} ( \omega_{k,j''} > \omega_{k,j}) }, \quad (j,j') \in\mathcal{N}.
\end{equation*}
Here, $\mathbbm{1}(\cdot)$ denotes the indicator function.
\end{itemize}
In both models, we set $\hat T_{k,(j,j)} = 1$ if genotype $j$ is an absorbing state under drug $k$, that is, $ \omega_{k,j} > \omega_{k,j'}$ for each $j'$ such that $(j,j') \in\mathcal{N}$.
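For concreteness, the following Python sketch (ours; the growth rates are random placeholders and the helper names are not taken from \citet{mira2015rational}) implements both probability models exactly as displayed above.
\begin{verbatim}
import numpy as np
from itertools import product

g = 4
genotypes = [''.join(b) for b in product('01', repeat=g)]
d = len(genotypes)
neighbors = {j: [i for i, t in enumerate(genotypes)
                 if sum(a != b for a, b in zip(genotypes[j], t)) == 1]
             for j in range(d)}

def transition_matrix(omega, model='CPM'):
    """Transition matrix of one drug from its growth-rate vector omega."""
    T = np.zeros((d, d))
    for j in range(d):
        if model == 'CPM':
            w = [max(0.0, omega[i] - omega[j]) for i in neighbors[j]]
        else:                             # EPM
            w = [1.0 if omega[i] > omega[j] else 0.0 for i in neighbors[j]]
        s = sum(w)
        if s == 0.0:                      # no fitter neighbour: absorbing state
            T[j, j] = 1.0
        else:
            for i, wi in zip(neighbors[j], w):
                T[j, i] = wi / s
    return T

omega = np.random.default_rng(0).random(d)           # placeholder growth rates
print(transition_matrix(omega, 'EPM').sum(axis=1))   # every row sums to one
\end{verbatim}
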
In this real dataset, the measurements of $K=15$ drugs are reported for genotypes with $g=4$ alleles, that is, for $d=16$ states.
We now compare the computational performance of solving the antibiotics time machine problem with complete enumeration (which is the method used in \citet{mira2015rational} for up to $N=6$) versus MILP~\eqref{eq:MILP} in
Tables~\ref{tab:MiraCPM} and \ref{tab:MiraEPM}.
In these tables, we report i) the maximum probability of going from each initial genotype to the wild type (0000), ii) the average computation time and the number of branch-and-bound nodes (BBNode) under the absolute optimality gap of 0.001, and iii) the average computation time of complete enumeration (Enum) up to $N=6$ and its estimates for larger values.
We use
Gurobi 9 as the MILP solver on a 64-bit personal computer with Intel Core i7
CPU 2.60GHz processor (16 GB RAM).
\begin{landscape}
\begin{table}[H]\scriptsize
\caption{Maximum probabilities, average number of branch-and-bound nodes and average run times (in seconds) using the data from \citet{mira2015rational} under CPM.}\label{tab:MiraCPM}
\centering
\begin{tabular}{c|rrrrrr|rrrrrrrrr}
initial & $N=1$ & $N=2$ & $N=3$ & $N=4$ & $N=5$ & $N=6$ & $N=7$ & $N=8$ & $N=9$ & $N=10$ & $N=11$ & $N=12$ & $N=13$ & $N=14$ & $N=15$ \\
\hline
1000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\
0100 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 \\
0010 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 \\
0001 & 0.287 & 0.287 & 0.592 & 0.592 & 0.726 & 0.726 & 0.729 & 0.729 & 0.729 & 0.729 & 0.731 & 0.731 & 0.732 & 0.732 & 0.733 \\
1100 & 0.000 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 \\
1010 & 0.000 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 & 0.715 \\
1001 & 0.000 & 0.559 & 0.559 & 0.726 & 0.726 & 0.729 & 0.729 & 0.729 & 0.729 & 0.731 & 0.731 & 0.732 & 0.732 & 0.733 & 0.733 \\
0110 & 0.000 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 \\
0101 & 0.000 & 0.592 & 0.592 & 0.612 & 0.612 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 \\
0011 & 0.000 & 0.361 & 0.361 & 0.586 & 0.600 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 \\
1110 & 0.000 & 0.000 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 \\
1101 & 0.000 & 0.000 & 0.592 & 0.592 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 \\
1011 & 0.000 & 0.000 & 0.532 & 0.532 & 0.684 & 0.690 & 0.691 & 0.693 & 0.694 & 0.694 & 0.694 & 0.695 & 0.696 & 0.697 & 0.697 \\
0111 & 0.000 & 0.000 & 0.586 & 0.600 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 \\
1111 & 0.000 & 0.000 & 0.000 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 & 0.617 \\
\hline
MILP (s) & 0.05 & 0.07 & 0.10 & 0.12 & 0.19 & 0.41 & 0.63 & 1.01 & 1.42 & 2.01 & 4.30 & 10.21 & 23.02 & 48.67 & 147.29 \\
BBNode & 0.00 & 0.27 & 0.87 & 1.73 & 8.53 & 26.53 & 84.33 & 252.53 & 602.47 & 1385.53 & 3628.40 & 8093.60 & 21219.20 & 42344.73 & 97275.80 \\
Enum (s) & 0.00 & 0.01 & 0.06 & 0.94 & 16.46 & 272.12 & $4.1 \cdot 10^3$ & $6.1\cdot10^4$ &
$ 9.2 \cdot10^5$ & $1.4\cdot10^7$ & $ 2.1\cdot10^8$ & $3.1\cdot10^9$ & $4.7\cdot10^{10}$ & $ 7.0\cdot10^{11}$ & $1.1\cdot10^{13}$ \\
\end{tabular}
\end{table}
\begin{table}[H]\scriptsize
\caption{Maximum probabilities, average number of branch-and-bound nodes and average run times (in seconds) using the data from \citet{mira2015rational} under EPM.}\label{tab:MiraEPM}
\centering
\begin{tabular}{c|rrrrrr|rrrrrrrrr}
initial & $N=1$ & $N=2$ & $N=3$ & $N=4$ & $N=5$ & $N=6$ & $N=7$ & $N=8$ & $N=9$ & $N=10$ & $N=11$ & $N=12$ & $N=13$ & $N=14$ & $N=15$ \\
\hline
1000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\
0100 & 0.333 & 0.333 & 0.333 & 0.375 & 0.458 & 0.458 & 0.463 & 0.463 & 0.471 & 0.479 & 0.479 & 0.515 & 0.515 & 0.520 & 0.520 \\
0010 & 0.500 & 0.500 & 0.500 & 0.500 & 0.500 & 0.500 & 0.512 & 0.512 & 0.515 & 0.516 & 0.520 & 0.520 & 0.526 & 0.526 & 0.532 \\
0001 & 0.500 & 0.500 & 0.667 & 0.667 & 0.667 & 0.667 & 0.690 & 0.690 & 0.693 & 0.693 & 0.696 & 0.696 & 0.700 & 0.700 & 0.704 \\
1100 & 0.000 & 0.333 & 0.333 & 0.389 & 0.389 & 0.458 & 0.458 & 0.463 & 0.463 & 0.471 & 0.479 & 0.479 & 0.515 & 0.515 & 0.520 \\
1010 & 0.000 & 0.500 & 0.500 & 0.583 & 0.583 & 0.587 & 0.587 & 0.591 & 0.591 & 0.596 & 0.596 & 0.601 & 0.601 & 0.606 & 0.606 \\
1001 & 0.000 & 0.667 & 0.667 & 0.667 & 0.667 & 0.690 & 0.690 & 0.693 & 0.693 & 0.696 & 0.696 & 0.700 & 0.700 & 0.704 & 0.704 \\
0110 & 0.000 & 0.333 & 0.333 & 0.333 & 0.375 & 0.458 & 0.458 & 0.463 & 0.463 & 0.471 & 0.479 & 0.479 & 0.515 & 0.515 & 0.520 \\
0101 & 0.000 & 0.292 & 0.375 & 0.458 & 0.458 & 0.463 & 0.463 & 0.471 & 0.479 & 0.479 & 0.515 & 0.515 & 0.520 & 0.520 & 0.526 \\
0011 & 0.000 & 0.250 & 0.250 & 0.500 & 0.500 & 0.500 & 0.502 & 0.531 & 0.539 & 0.553 & 0.553 & 0.557 & 0.557 & 0.562 & 0.562 \\
1110 & 0.000 & 0.000 & 0.333 & 0.333 & 0.333 & 0.375 & 0.458 & 0.458 & 0.463 & 0.463 & 0.471 & 0.479 & 0.479 & 0.515 & 0.515 \\
1101 & 0.000 & 0.000 & 0.292 & 0.375 & 0.458 & 0.458 & 0.463 & 0.463 & 0.471 & 0.479 & 0.479 & 0.515 & 0.515 & 0.520 & 0.520 \\
1011 & 0.000 & 0.000 & 0.333 & 0.333 & 0.389 & 0.417 & 0.458 & 0.458 & 0.475 & 0.475 & 0.481 & 0.481 & 0.487 & 0.515 & 0.515 \\
0111 & 0.000 & 0.000 & 0.148 & 0.198 & 0.333 & 0.375 & 0.458 & 0.458 & 0.463 & 0.463 & 0.471 & 0.479 & 0.479 & 0.515 & 0.515 \\
1111 & 0.000 & 0.000 & 0.000 & 0.333 & 0.375 & 0.458 & 0.458 & 0.463 & 0.463 & 0.471 & 0.479 & 0.479 & 0.515 & 0.515 & 0.520 \\
\hline
MILP (s) & 0.04 & 0.08 & 0.09 & 0.14 & 0.25 & 0.42 & 0.60 & 0.96 & 1.96 & 3.83 & 8.01 & 14.96 & 35.72 & 77.47 & 165.86 \\
BBNode & 0.00 & 0.33 & 0.87 & 4.40 & 43.20 & 168.47 & 448.73 & 1002.93 & 2380.13 & 4682.67 & 9510.47 & 22676.53 & 48543.07 & 76276.20 & 125473.00 \\
Enum (s) & 0.00 & 0.00 & 0.06 & 0.94 & 16.46 & 273.02 & $4.1 \cdot 10^3$ & $6.1\cdot10^4$ &
$ 9.2 \cdot10^5$ & $1.4\cdot10^7$ & $ 2.1\cdot10^8$ & $3.1\cdot10^9$ & $4.7\cdot10^{10}$ & $ 7.0\cdot10^{11}$ & $1.1\cdot10^{13}$ \\
\end{tabular}
\end{table}
\end{landscape}
As expected, we clearly observe that the MILP approach is significantly faster than complete enumeration, especially for larger values of the treatment length~$N$. From an application point of view, this is quite important since complete enumeration only allows for smaller values of~$N$ such as 6 in practice (see e.g. \citet{mira2015rational}) whereas the maximum probabilities might be obtained for longer treatment plans. This is more evident under EPM in which 14 of 15 initial states have higher probability of returning to the wild type in 10 steps compared to 6 steps (the only exception is genotype 1000, which already has a deterministic path of going to the wild type). We note that even a small increase in maximum probabilities is crucial due to the critical nature of the application.
\subsubsection{A Synthetic Dataset}
\label{sec:antibioticCompSynthetic}
In this section, we randomly generate growth data to construct the transition probability matrices in order to test the scalability of the proposed approach.
Based on our observation from the real dataset \citep{mira2015rational}, we have come up with a simple growth data generation procedure as follows:
\begin{equation*}\label{eq:randomGrowthGen}
\omega_{k,j} = \begin{cases}
0 & \text{w.p. } 1/3 \\
1 & \text{w.p. } 1/6 \\
2 & \text{w.p. } 1/2
\end{cases}.
\end{equation*}
The intuition behind the parameters of this trinomial distribution is that most antibiotics are effective in preventing the growth of only a limited number of genotypes (one-third), whereas the growth of the majority of the genotypes is either unaffected (one-half) or only slightly affected (one-sixth).
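A minimal sketch of this generation procedure (our own illustration) reads as follows.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
g, K = 5, 30
d = 2 ** g                                # number of genotypes (states)

# Growth rates drawn from the trinomial distribution above: 0 w.p. 1/3
# (growth prevented), 1 w.p. 1/6 (slightly affected), 2 w.p. 1/2 (unaffected).
omega = rng.choice([0, 1, 2], size=(K, d), p=[1/3, 1/6, 1/2])
print(omega.shape)                        # (30, 32)
\end{verbatim}
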
Once we have the growth data, we construct the probability transition matrices under EPM as described in
Section~\ref{sec:antibioticCompReal} and solve the MILP~\eqref{eq:MILP} for each initial state. The average computational times are reported in Table~\ref{tab:RandomEPM}. Considering the size of the largest instance with $d=32$ states and $K=30$ drugs, and the fact that we only use a personal computer, an average computational time of about 7 minutes seems quite promising to demonstrate the scalability of the approach.
\begin{table}[H]\small
\caption{Average run times in seconds for the randomly generated instances under EPM.}\label{tab:RandomEPM}
\centering
\begin{tabular}{ccc|rrrrrr}
$g$ & $ d$ & $ K$ & $ N=5$ & $ N=6 $& $ N=7$ & $ N=8$ & $ N=9$ & $ N=10$ \\
\hline
4 & 16 & 15 & 0.23 & 0.45 & 0.64 & 1.09 & 2.17 & 5.11 \\
4 & 16 & 20 & 0.46 & 0.67 & 0.96 & 1.85 & 3.92 & 8.67 \\
4& 16 & 25 & 0.55 & 0.82 & 1.06 & 2.11 & 4.58 & 7.96 \\
4 & 16 & 30 & 0.64 & 0.95 & 1.33 & 3.04 & 7.53 & 19.45 \\
\hline
5 & 32 & 15 & 0.48 & 0.80 & 1.54 & 3.29 & 9.78 & 30.72 \\
5 & 32 & 20 & 0.64 & 1.18 & 2.65 & 7.45 & 22.71 & 94.34 \\
5 & 32 & 25 & 0.79 & 1.60 & 3.74 & 11.20 & 38.34 & 206.92 \\
5 & 32 & 30 & 1.05 & 2.20 & 5.31 & 17.15 & 74.55 & 412.64 \\
\end{tabular}
\end{table}
\section{Conclusions}
\label{sec:conc}
In this paper, we consider a class of optimization problems involving the multiplication of variable matrices selected from a family, and analyze such optimization problems depending on the structure of the matrix family. We focus on two interesting real-life applications: the multi-layer thin films problem from material science and the antibiotics time machine problem from biology. We obtain compact-size mixed-integer quadratically constrained quadratic programming and mixed-integer linear programming formulations for these two problems, respectively. Finally, we carry out an extensive computational study comparing the accuracy and efficiency of our proposed approach against heuristics and exhaustive search, which are quite common in the literature.
We have future research directions in both material science and biology applications. In this paper, we only focused on optimizing the reflectance of multi-layer thin films at a given wavelength. However, in many practical applications, a design which works well for a \textit{spectrum} of wavelengths is desired. Such an optimization problem can be modeled with an objective function involving an integral and infinitely many constraints. Although a finite-size approximate reformulation of this optimization model can be obtained, the resulting model seems to be quite challenging to solve and it likely requires a specialized solution algorithm.
For the biology application, a promising research direction is to incorporate the uncertainty in the growth rate measurements. In a recent study \citep{mira2017statistical}, the growth rates of different genotypes are measured 12 times under 23 different antibiotics and dosages. Although these measurements lead to relatively small confidence intervals for the growth rates, the optimal treatment sequences obtained from the different measurements differ considerably from each other. This observation motivates us to use robust optimization or (risk-averse) stochastic programming techniques to address this uncertainty in future work.
\section*{Acknowledgments}
The author wishes to thank Ali Rana At{\i}lgan for introducing him to the problems studied in this paper and for fruitful discussions. The author also acknowledges Muhammed Ali Ke\c{c}eba\c{s} and K{\"u}r\c{s}at \c{S}endur's help in providing the input data of the multi-layer thin films problem.
\end{document}
\begin{document}
\title{Principles and demonstrations of quantum
information processing by NMR spectroscopy\thanks{
Portions of this survey were presented at the AeroSense
Workshop on Photonic Quantum Computing II, held in Orlando,
Florida on April 16, 1998, at the Dagstuhl Seminar
on Quantum Algorithms, held in Schloss Dagstuhl,
Germany on May 10 -- 15, 1998, and at the Workshop
on Quantum Information, Decoherence and Chaos, held
on Heron Island, Australia September 21 -- 25, 1998;
this paper is an updated and extended version of one
published in the proceedings of the AeroSense meeting,
available as vol.\ 3385 from the International Society for
Optical Engineering, 1000 20th St., Bellingham, WA 98225, USA.}}
\author{T.~F.~Havel\inst{1}, S.~S.~Somaroo\inst{1},
C.-H.~Tseng\inst{2} and D.~G.~Cory\inst{3}}
\institute{BCMP, Harvard Medical School, Boston, MA 02115
\and Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138
\and Nuclear Engineering, MIT, Cambridge, MA 02139}
\maketitle
\begin{abstract}
This paper surveys our recent research on quantum information
processing by nuclear magnetic resonance (NMR) spectroscopy.
We begin with a geometric introduction to the
NMR of an ensemble of indistinguishable spins,
and then show how this geometric interpretation is
contained within an algebra of multispin product operators.
This algebra is used throughout the rest of the paper
to demonstrate that it provides a facile framework within
which to study quantum information processing more generally.
The implementation of quantum algorithms by NMR depends
upon the availability of special kinds of mixed states,
called pseudo-pure states, and we consider a
number of different methods for preparing these states,
along with analyses of how they scale with the number of spins.
The quantum-mechanical nature of processes involving such
macroscopic pseudo-pure states also is a matter of debate,
and in order to discuss this issue in concrete terms
we present the results of NMR experiments which
constitute a macroscopic analogue of Hardy's paradox.
Finally, a detailed product operator description is
given of recent NMR experiments which demonstrate a
three-bit quantum error correcting code, using field
gradients to implement a precisely-known decoherence model.
\end{abstract}
\section{Introduction}
It has recently proven possible to perform simple
quantum computations by liquid-state NMR spectroscopy
\cite{ChGeKuLe:98,ChVaZhLeLl:98,CorFahHav:97,CMPKLZHS:98,
CorPriHav:98,GershChuan:97,JonMosHan:98,NieKniLaf:98}.
This unprecedented level of coherent control promises
to be quite useful not only in demonstrating the validity
of many of the basic ideas behind quantum information processing,
but more importantly, in providing researchers in the field
with new physical insights and concrete problems to study.
This is particularly true since the ensemble nature of the
systems used for NMR computing differs substantially from the
systems previously considered as candidate quantum computers.
The use of ensembles provides tremendous redundancy, which
makes computation with them relatively resistant to errors.
It also has the potential to provide access to a limited
form of massive classical parallelism \cite{CorFahHav:97},
which could for example be used to speed up searches
with Grover's algorithm by a constant but very large
factor \cite{BoBrHoTa:98,Grover:97a,LinBarFre:98}.
The barriers that have been encountered in extending NMR
computing to nontrivial problems further raise interesting
questions regarding the relations between microscopic
and macroscopic order, and between the quantum and
classical worlds \cite{GiuliniEtAl:96,Peres:95}.
NMR computing is also contributing to quantum
information processing through the assimilation
of theoretical and experimental NMR techniques.
These techniques have been developed
over half a century of intensive research,
and grown so advanced that a recent book on the subject
is entitled ``Spin Choreography'' \cite{Freeman:98}.
It is noteworthy that, due to the scope of its applications,
NMR is now more often studied in chemistry and
even biochemistry than it is in physics and
engineering, where it was initially developed.
This has had the effect that a large portion of these
techniques have been discovered empirically and put
into the form of intuitive graphical or algebraic rules,
rather than developed mathematically from well-defined principles.
Thus the interest which NMR computing is attracting from
the quantum information processing side likewise has
the potential to benefit the field of NMR spectroscopy,
particularly through the application of algorithmic,
information theoretic and algebraic techniques.
Finally, NMR has the potential to contribute in
significant ways to the development of its own mathematics,
in the same ways that computers have contributed to the
development of recursive function theory, number theory,
combinatorics and many other areas of mathematics.
By performing experiments which can be interpreted as
computations in homomorphic images of the algebras
that are naturally associated with NMR spectroscopy,
it may be possible to obtain insights into,
or even ``proofs'' of, algebraic properties
that would otherwise be inaccessible.
To give some idea of its potential computational power,
we point out that the spin dynamics of a crystal
of calcium fluoride one millimeter on a side,
which can be highly polarized, superbly
controlled and measured in microscopic detail by
NMR techniques \cite{TangWaugh:92,ZhangCory:98},
is described by an exponential map in an algebra on
about $4^{10^{11}}$ physically distinct dimensions.
This paper will survey our recent research on quantum
information processing by liquid-state NMR spectroscopy,
including some new experiments which serve
to clarify the underlying principles.
We begin with a geometric interpretation of the quantum
mechanical states and operators of an ensemble of
identical spin $1/2$ particles, both pure and mixed,
which provides considerable insight into NMR.
The corresponding geometric algebra is then
extended to the {\em product operator formalism\/},
which is widely used in analyzing NMR experiments,
and which constitutes a facile framework within which to study
quantum information processing more generally \cite{SomCorHav:98}.
We proceed to use this formalism to give an
overview of the basic ideas behind ensemble quantum
computing by liquid-state NMR spectroscopy,
with emphasis on ``pseudo-pure'' state preparation and scaling.
Next, we consider one way in which quantum correlations can
appear to be present even in weakly polarized spin ensembles,
and illustrate this with the results of NMR
experiments which constitute a macroscopic
analogue of Hardy's paradox \cite{Hardy:93}.
Finally, the utility of NMR and its associated
product operator formalism as a means of studying
decoherence will be demonstrated by an analysis
of our recent experiments with a three-bit quantum
error correcting code \cite{CMPKLZHS:98}.\footnote{
The reader is assumed throughout to be familiar
with the basic notions of quantum information
processing, as presented in e.g.\ Refs.~\cite{
Peres:95,Preskill:98,Steane:98,WilliClear:98}.
Excellent detailed expositions of NMR spectroscopy are also available,
see e.g.\ Refs.~\cite{ErnBodWok:87,Freeman:98,Munowitz:88,Slichter:90}.
A more introductory account of our work on ensemble quantum
computing by NMR spectroscopy, directed primarily towards
physicists, may be found in Ref.\ \cite{CorPriHav:98}. }
\vspace*{0.20in}
\section{The geometry of spin states and operators}
NMR spectroscopy is based on the fact that
the nuclei in many kinds of atoms are endowed
with an intrinsic angular momentum, the properties
of which are determined by an integer or half-integer
quantum number $S \ge0$, called the nuclear {\em spin\/}.
For the purposes of quantum information processing by NMR,
it will suffice to restrict ourselves to spin $S = 1/2$.
In this case, measurement of the component
of the angular momentum along a given axis in
space always yields one of two possible values:
$\pm\hbar/2$ (where $\hbar$ is Planck's constant $h$ over $2\pi$).
According to the principles of quantum mechanics,
the quantum state of the ``spin'' (nucleus) after
such a measurement may be completely characterized
by one of two orthonormal vectors in a two-dimensional
Hilbert (complex vector) space $\cal H$, with Hermitian
(sesquilinear) inner product $\AVG{\cdot|\cdot}$.
A rotation of this axis in physical space
induces a transformation in $\cal H$ by an
element of the special unitary group $\LAB{SU}(2)$,
which is the two-fold universal covering group of the
three-dimensional Euclidean rotation group $\LAB{SO}(3)$,
and the elements of $\cal H$ are called
{\em spinors\/} to emphasize this fact.
The Lie algebra basis $(\VEC{I}_\LAB{x},\VEC{I}_\LAB{y},\VEC{I}_\LAB{z})$ of $\LAB{SU}(2)$
(or $\LAB{SO}(3)$) corresponding to infinitesimal rotations about
three orthogonal axes in space satisfies the commutation relations
\begin{equation} \label{eq:com_rel}
\left[\VEC{I}_\LAB{x},\VEC{I}_\LAB{y}\right] ~=~ \imath \VEC{I}_\LAB{z} ~,\quad
\left[\VEC{I}_\LAB{z},\VEC{I}_\LAB{x}\right] ~=~ \imath \VEC{I}_\LAB{y} ~,\quad
\left[\VEC{I}_\LAB{y},\VEC{I}_\LAB{z}\right] ~=~ \imath \VEC{I}_\LAB{x} ~,
\end{equation}
and the eigenvalues $\pm1/2$ of these three Hermitian
(self-adjoint) operators correspond to the possible
outcomes of measurements of the angular momentum
(in units of $\hbar$) along the three axes.\footnote{
Detailed explanations of these basic features of the
quantum mechanics of spin may be found in modern textbooks.
We would particularly recommend Sakurai \cite{Sakurai:94},
for an introduction to the underlying physics,
or the monograph by Biedenharn and Louck \cite{BiedeLouck:81},
for a complete mathematical development. }
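In the familiar $2\times2$ matrix representation $\VEC I_w = \sigma_w/2$
in terms of the Pauli matrices, these relations are easily checked
numerically; the following Python fragment (our own illustration) does so.
\begin{verbatim}
import numpy as np

# Spin-1/2 angular momentum operators I_w = sigma_w / 2 (units of hbar).
Ix = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Iy = np.array([[0, -1j], [1j, 0]]) / 2
Iz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def comm(A, B):
    return A @ B - B @ A

# [Ix, Iy] = i Iz and cyclic permutations; eigenvalues are +/- 1/2.
assert np.allclose(comm(Ix, Iy), 1j * Iz)
assert np.allclose(comm(Iz, Ix), 1j * Iy)
assert np.allclose(comm(Iy, Iz), 1j * Ix)
print(np.linalg.eigvalsh(Iz))             # [-0.5  0.5]
\end{verbatim}
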
The Hilbert space representation of the
kinematics of an isolated spin, however,
is not sufficient to describe the joint state
of the macroscopic collections of spins
which are the subject of NMR spectroscopy.
A {\em mixed state\/} (as opposed to the
{\em pure state\/} of an isolated spin) is a random
{\em ensemble\/} of spins not all in the same pure state.
This ``ensemble'' could be a thought-construction
which describes our state-of-knowledge of a single spin
(as used in J.~W.~Gibbs' formulation of statistical thermodynamics),
or it could be a very large physical collection of spins,
as in an NMR sample tube.
In either case, a probability is assigned to every possible spinor,
which can be interpreted as its frequency of occurrence in the ensemble
(but see Ref.\ \cite{Jaynes:57} for a Bayesian point of view).
The Heisenberg uncertainty principle limits what can
be known about the ensemble to the {\em ensemble-average
expectation values\/} of the quantum mechanical observables.
This information, in turn, can be encoded into a single
operator on $\cal H$, called the {\em density operator\/}.
To define this operator mathematically,
we first recall the canonical algebra isomorphism
between the endomorphisms $\FUN{End}({\cal H})$
and the tensor product ${\cal H} \otimes {\cal H}^*$
of $\cal H$ with its dual space ${\cal H}^*$.
Denoting the dual of a vector $\KET{\psi}$ under
the Hermitian inner product of $\cal H$ by $\BRA{\psi}$,
the composition product in $\FUN{End}({\cal H})$
corresponds to a product on ${\cal H} \otimes {\cal H}^*$
which is given on the factorizable tensors by
\begin{equation}
(\KET{\varphi} \otimes \BRA{\varphi'})
(\KET{\vartheta} \otimes \BRA{\vartheta'})
~=~ \AVG{\varphi'\,|\,\vartheta} \,
(\KET{\varphi} \otimes \BRA{\vartheta'}) ~,
\end{equation}
and extended to all tensors by linearity.
Following common practice, we shall usually
drop the tensor product sign ``$\otimes$''
and write this {\em dyadic product\/}
as $\KET{\varphi}\BRA{\vartheta}$.
The restriction of this product to the diagonal,
$\KET{\psi}\BRA{\psi}$, linearly spans the (real) subspace
of all Hermitian operators in $\FUN{End}({\cal H})$,
and the action of $\LAB{SU}(2)$ on these products
is its usual action on such operators,
\begin{equation}
\KET{\psi}\BRA{\psi} \quad\mapsto\quad
\VEC U\, \KET{\psi}\BRA{\psi} \,\tilde{\VEC U} ~,
\end{equation}
where $\tilde{\VEC U} \equiv \VEC U^\sim$ denotes the
Hermitian conjugate (adjoint) of $\VEC U \in \LAB{SU}(2)$.
Restricting ourselves to an ensemble
involving a finite set of states
$\{\KET{\psi_k}\}$ for ease of presentation,
the density operator may now be defined as \cite{Blum:96}
\begin{equation}
{\VEC{\rho}} ~\equiv~ \overline{\KET{\psi}\BRA{\psi}}
~\equiv~ {\sum}_k\, p_k\, \KET{\psi_k}\BRA{\psi_k} ~,
\end{equation}
where the $p_k \ge0$ are the probabilities of the
various states in the ensemble ($\sum_k p_k = 1$).
Because $\BRA{\varphi}\,{\VEC{\rho}}\,\KET{\varphi} = \sum_k p_k\,
|\AVG{\varphi\,|\,\psi_k}|^2 \ge0$ for any spinor $\KET{\varphi}$,
the density operator is necessarily positive semi-definite.
Letting ``${\rm tr}$'' be the contraction operation on
${\cal H} \otimes {\cal H}^*$ (or trace on $\FUN{End}({\cal H})$),
letting $\VEC A \in \FUN{End}({\cal H})$ be any
Hermitian operator, and using the invariance of
the trace under cyclic permutations, we find that
\begin{equation}
{\rm tr}\left( \VEC A\,{\VEC{\rho}} \right) ~=~
{\sum}_k\, p_k\, {\rm tr}\left(\,\VEC A\, \KET{\psi_k}\BRA{\psi_k}\,\right)
~=~ {\sum}_k\, p_k\, \BRA{\psi_k} \, \VEC A\, \KET{\psi_k} ~.
\end{equation}
This proves our claim that all ensemble-average
expectation values can be obtained from ${\VEC{\rho}}$.
Note in particular that ${\rm tr}({\VEC{\rho}}) = 1$.
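As a concrete illustration (ours, with arbitrarily chosen probabilities),
the density operator of a small ensemble and an ensemble-average
expectation value can be computed as follows.
\begin{verbatim}
import numpy as np

Iz = np.diag([0.5, -0.5]).astype(complex)

# A two-member ensemble: 70% in |0> and 30% in the superposition (|0>+|1>)/sqrt(2).
kets = [np.array([1, 0], dtype=complex),
        np.array([1, 1], dtype=complex) / np.sqrt(2)]
probs = [0.7, 0.3]

rho = sum(p * np.outer(k, k.conj()) for p, k in zip(probs, kets))

# tr(A rho) reproduces the weighted average of <psi_k| A |psi_k>.
lhs = np.trace(Iz @ rho).real
rhs = sum(p * (k.conj() @ Iz @ k).real for p, k in zip(probs, kets))
print(lhs, rhs, np.trace(rho).real)       # 0.35 0.35 1.0
\end{verbatim}
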
Before showing how this applies to NMR spectroscopy,
we wish to introduce an important geometric
interpretation of the spin $1/2$ density operator,
and indeed of the entire operator algebra.
As operators, the angular momentum components
transform under $\LAB{SU}(2)$ by conjugation,
i.e.~$\VEC I_w \mapsto \VEC{UI}_w\tilde{\VEC U}$
($w \in \{\LAB{x},\LAB{y},\LAB{z}\}$).
Thus $\VEC U$ and $-\VEC U$ induce the same transformation,
so that conjugation constitutes a group action of
$\LAB{SO}(3)$ on the real linear space $\AVG{\VEC{I}_\LAB{x}, \VEC{I}_\LAB{y}, \VEC{I}_\LAB{z}}$.
It follows that this space is naturally regarded
as a three-dimensional Euclidean vector space.
Note further that $\AVG{\VEC{1}, \VEC{I}_\LAB{x}, \VEC{I}_\LAB{y}, \VEC{I}_\LAB{z}}$ equals the
four-dimensional space of Hermitian operators on ${\cal H}$,
where $\VEC{1}$ is the identity on ${\cal H}$ which we
will henceforth identify with the scalar identity $1$.
This shows that any density operator can be uniquely
expanded as the sum of a {\em scalar\/} and a {\em vector\/}:
\begin{equation}
{\VEC{\rho}} ~=~ \HALF\,{\rm tr}({\VEC{\rho}})\, + \,{\rm tr}(\VEC{I}_\LAB{x}\,{\VEC{\rho}})\,2\VEC{I}_\LAB{x}\,
+ \,{\rm tr}(\VEC{I}_\LAB{y}\,{\VEC{\rho}})\,2\VEC{I}_\LAB{y}\, + \,{\rm tr}(\VEC{I}_\LAB{z}\,{\VEC{\rho}})\,2\VEC{I}_\LAB{z}\,
~\equiv~ \HALF( 1 + \VEC P )
\end{equation}
We call $\VEC P$ the {\em polarization vector\/},
since its length $P \equiv \|\VEC P\| \le1$
(the polarization) is a measure of the overall degree
of alignment of the spins in the ensemble along $\VEC P$.
The positive semi-definiteness of ${\VEC{\rho}}$ implies $P \le1$,
and if $P = 1$, the density operator describes a (ensemble of
spins in the same) pure state up to an overall phase factor.
In this latter case the density operator can be written as
${\VEC{\rho}} = \KET{\psi}\BRA{\psi}$ for some spinor $\KET{\psi}$,
and hence is {\em idempotent\/} (equal to its square).
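This decomposition is easily made concrete; the fragment below (our own
illustration) extracts $\VEC P$ from a density operator and confirms that
$P = 1$ precisely in the idempotent (pure) case.
\begin{verbatim}
import numpy as np

Ix = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Iy = np.array([[0, -1j], [1j, 0]]) / 2
Iz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def polarization(rho):
    """Components of P in rho = (1 + P)/2."""
    return np.array([2 * np.trace(I @ rho).real for I in (Ix, Iy, Iz)])

pure = np.array([[1, 0], [0, 0]], dtype=complex)          # |0><0|
mixed = 0.5 * np.eye(2, dtype=complex)                     # unpolarized ensemble

for rho in (pure, mixed):
    P = polarization(rho)
    print(np.linalg.norm(P), np.allclose(rho @ rho, rho))
# prints 1.0 True for the pure state and 0.0 False for the mixed one
\end{verbatim}
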
This vectorial interpretation of two-state quantum
systems became widely known through the work of
Feynman, Vernon and Hellwarth \cite{FeyVerHel:57},
although it is inherent in the phenomenological equations for
NMR first proposed by F.~Bloch \cite{Bloch:46} (see below).
To extend this geometric interpretation to the
entire algebra generated by $\AVG{\VEC{1}, \VEC{I}_\LAB{x}, \VEC{I}_\LAB{y}, \VEC{I}_\LAB{z}}$,
we regard the composition product of angular momentum
operators as an associative bilinear product of vectors.
We shall call this the {\em geometric vector product\/}.
Since the eigenvalues of the $(S = 1/2)$ angular momentum operators
are $\pm1/2$, the eigenvalues of their squares are both $1/4$,
from which it follows that $(2\VEC{I}_\LAB{x})^2 = (2\VEC{I}_\LAB{y})^2 = (2\VEC{I}_\LAB{z})^2 = 1$.
In accord with the isotropy of space, moreover,
\begin{equation}
(\VEC U\VEC I_w\tilde{\VEC U})^2 ~=~
\VEC U (\VEC I_w)^2 \tilde{\VEC U} ~=~ 1/4
\end{equation}
for all $\VEC U \in \LAB{SU}(2)$ and
$w \in \{\LAB{x},\LAB{y},\LAB{z}\}$,
which together with the bilinearity of the
product implies that the square of {\em any\/}
vector is equal (relative to the orthonormal
basis $(2\VEC{I}_\LAB{x},2\VEC{I}_\LAB{y},2\VEC{I}_\LAB{z})$) to its length squared.
Via the law of cosines, we can now show that the
{\em symmetric part\/} of the geometric product of
any two vectors is their usual Euclidean inner product:
\begin{equation} \begin{array}{rl}
\VEC A \cdot \VEC B ~= & \HALF \left( \|\VEC A\|^2
+ \|\VEC B\|^2 - \|\VEC A - \VEC B\|^2 \right) \\
= & \HALF \left( \VEC A^2 + \VEC B^2 - (\VEC A - \VEC B)^2 \right)
~=~ \HALF \left( \VEC{AB} + \VEC{BA} \right)
\end{array} \end{equation}
The commutation relations in Eq.~(\ref{eq:com_rel}), on the
other hand, show that the {\em antisymmetric part\/} is equal
(up to a factor of $-\imath$) to the usual vector cross product:
\begin{equation}
\VEC A \times \VEC B ~=~ -\FRAC{\imath}2 \left[ \VEC A, \VEC B \right]
~=~ -\FRAC{\imath}2(\VEC A \VEC B - \VEC B \VEC A)
~\equiv~ -\imath (\VEC A \wedge \VEC B)
\end{equation}
We call the antisymmetric part
$\VEC A \wedge \VEC B = \imath(\VEC A \times \VEC B)$
the {\em outer product\/} of $\VEC A$ and $\VEC B$,
and note that it is geometrically distinct from vectors
because inversion in the origin does not change it.
Such things have been called ``axial vectors'', although
we prefer the older and more descriptive term {\em bivector\/}.
On writing the geometric product as the sum of its symmetric and
antisymmetric parts, $\VEC{AB} = \VEC A\cdot\VEC B + \VEC A\wedge\VEC B$,
we see that perpendicular pairs of vectors anticommute.
It follows that the three basis bivectors $\imath2\VEC{I}_\LAB{x}$,
$\imath2\VEC{I}_\LAB{y}$ and $\imath2\VEC{I}_\LAB{z}$ also anticommute.
These square to $-1$ rather than $1$, however,
and thus can be identified with the usual
{\em quaternion\/} units \cite{Altmann:86}.
Finally, the {\em unit pseudo-scalar\/}
$8\VEC{I}_\LAB{x}\VEC{I}_\LAB{y}\VEC{I}_\LAB{z}$ likewise squares to $-1$,
which together with the fact that it
commutes with the basis vectors and hence
everything in the algebra enables it to
be identified with the unit imaginary
$\imath$ itself \cite{GulLasDor:93}.
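These identities can be verified directly in the $2\times2$ matrix
representation; the following check (ours) decomposes the geometric
product of two vectors into its symmetric (inner product) and
antisymmetric (outer product) parts.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = np.array([sx, sy, sz])             # 2Ix, 2Iy, 2Iz: an orthonormal frame

def as_matrix(v):
    """Represent the vector v in the basis (2Ix, 2Iy, 2Iz)."""
    return np.einsum('i,ijk->jk', v, basis)

a = np.array([1.0, 2.0, -0.5])
b = np.array([0.3, -1.0, 2.0])
A, B = as_matrix(a), as_matrix(b)

sym = (A @ B + B @ A) / 2                  # scalar part: (a.b) times the identity
anti = (A @ B - B @ A) / 2                 # bivector part: i (a x b)

assert np.allclose(sym, np.dot(a, b) * np.eye(2))
assert np.allclose(anti, 1j * as_matrix(np.cross(a, b)))
print("AB = a.b + i (a x b), as claimed")
\end{verbatim}
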
This algebra is often called the {\em Clifford algebra\/}
of a three-dimensional Euclidean vector space,
although we shall use the term {\em geometric algebra\/}
here (which W.~K.~Clifford himself used).
Such an algebra is canonically associated with any
metric vector space, and provides a natural algebraic
encoding of the geometric properties of that space.
The fact that the three-dimensional Euclidean version
can be defined starting from the well-known properties
of the spin $1/2$ angular momentum operators indicates that
a large part of quantum mechanics is really just an unfamiliar
(but extremely elegant and facile \cite{HavelECC:98,Hestenes:86})
means of doing Euclidean geometry.
Geometric algebra has more recently been extensively advocated
and used to demystify quantum physics by a number of groups
\cite{Baylis:96,DorLasGul:93,Hestenes:66,Lounesto:97}.
Of particular interest are recent proposals to use the
geometric algebra of a direct sum of copies of Minkowski
space-time to obtain a relativistic multiparticle theory,
from which all the nonrelativistic theory used in this paper
falls out naturally as a quotient subalgebra \cite{DoLaGuSoCh:96}.
We are now ready to describe the
simplest possible NMR experiment.
The time-dependent Schr\"odinger equation is
\begin{equation}
\imath\hbar\, \KET{\dot\psi} ~=~ \VEC H\, \KET{\psi} ~,
\end{equation}
where the Hamiltonian $\VEC H$ is the generator of
motion and the ``dot'' denotes the time derivative.
This implies that the density operator evolves according
to the {\em Liouville-von Neumann equation\/}:
\begin{equation} \begin{array}{rl}
\imath\hbar \, \dot{{\VEC{\rho}}}
~= & \imath\hbar \, {\sum}_k\, p_k \left(
\KET{\dot\psi_k} \BRA{\psi_k} +
\KET{\psi_k} \BRA{\dot\psi_k} \right) \\
=~ & {\sum}_k\, p_k \left( \VEC H \KET{\psi_k} \rule[0pt]{0pt}{12pt}
\BRA{\psi_k} - \KET{\psi_k} \BRA{\psi_k} \VEC H \right)
~=~ \left[ \,\VEC H, {\VEC{\rho}}\, \right]
\end{array} \end{equation}
The dominant Hamiltonian in NMR is the Zeeman
interaction of the magnetic dipoles of the spins
(which is parallel to their angular momentum vectors)
with a constant applied magnetic field $\VEC B_0$.
This {\em Zeeman Hamiltonian\/} is given by
$\VEC H_\LAB{Z} = -\HALF\gamma\hbar\VEC B_0$,
where $\gamma$ is a proportionality constant
called the {\em gyromagnetic ratio\/}, which together with
the above gives the {\em Bloch equation\/} \cite{Bloch:46}:
\begin{equation}
\dot{\VEC P} ~=~ \dot{{\VEC{\rho}}}
~=~ -\imath \HALF \gamma \left[\, {\VEC{\rho}}, \VEC B_0\, \right]
~=~ \gamma\, \VEC P \times \VEC B_0
\end{equation}
The solution to this equation is ${\VEC{\rho}}(t) = \VEC U {\VEC{\rho}}(0)\,
\tilde{\VEC U}$ with $\VEC U = \exp(-\imath t \VEC H_\LAB{Z})$,
which is a time-dependent rotation of the polarization
vector about the magnetic field with a constant angular
velocity $\omega_0 \equiv \gamma\hbar\,\|\VEC B_0\|$.
This ``classical'' picture is
an example of Ehrenfest's theorem,
and is analogous to the precession of
a gyroscope in a gravitational field.
Throughout this paper we adopt the universal
convention that the magnetic field is along
the $\LAB{z}$-axis: $\VEC B_0 = B_0 2\VEC{I}_\LAB{z}$.
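Numerically this precession is easy to reproduce; the sketch below (our
own illustration, in units with $\omega_0 = 1$) propagates the density
operator under the Zeeman Hamiltonian and shows that the transverse
components of $\VEC P$ rotate while the $\LAB{z}$-component stays fixed.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

Ix = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Iy = np.array([[0, -1j], [1j, 0]]) / 2
Iz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

omega0 = 1.0
H = -omega0 * Iz                           # Zeeman Hamiltonian

rho = 0.5 * (np.eye(2) + 2 * Ix)           # polarization initially along x

for t in (0.0, np.pi / 2, np.pi):
    U = expm(-1j * t * H)
    r = U @ rho @ U.conj().T
    P = [2 * np.trace(I @ r).real for I in (Ix, Iy, Iz)]
    print(t, np.round(P, 3))
# P rotates in the xy-plane at angular velocity omega0; Pz remains zero.
\end{verbatim}
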
The component of the net precessing magnetic moment of
the spins in the transverse $\LAB{xy}$-plane generates
a complex-valued radio-frequency electrical signal
proportional to ${\rm tr}((\VEC{I}_\LAB{x} + \imath\VEC{I}_\LAB{y}){\VEC{\rho}}(t)) =
2\VEC{I}_\LAB{x} \cdot \VEC P(t) + \imath 2\VEC{I}_\LAB{y} \cdot \VEC P(t)$,
whose Fourier transform is an NMR spectrum
containing a peak at the precession frequency of
each distinct kind of spin present in the sample.
This has the important consequence that in NMR we measure
the {\em expectation values\/} of the observables directly,
which is due in turn to the fact that we are measuring
the sum of the responses of the spins over the ensemble.
These measurements yield negligible information on the
quantum state of the individual spins in the ensemble
and hence are nonperturbing, in that they do not
appreciably change the state of the ensemble as a whole.
Such {\em weak measurements\/} contrast starkly with the strong
measurements usually considered in quantum mechanics,
where determining the component of a spin along
an axis yields one of two possible values and
``collapses'' it into the corresponding basis state,
so that only one classical bit of information
can be obtained \cite{Peres:95,Sakurai:94}.
A discussion of the computational implications of weak
measurements may be found in Ref.\ \cite{CorFahHav:97}.
The natural (minimum energy) orientation of the spins'
dipoles in a magnetic field is parallel to the field,
and thus to obtain a precessing magnetic dipole
it is necessary to rotate the polarization
vector $\VEC P$ away from the field axis $2\VEC{I}_\LAB{z}$.
This is done by applying an additional, rotating
magnetic field $\VEC B_1$ of magnitude $B_1$ in the
$\LAB{xy}$-plane perpendicular to the static field $\VEC B_0$,
which gives the time-dependent Hamiltonian
\begin{equation}
\VEC H ~=~ \VEC H_\LAB{Z} + \VEC H_\LAB{RF}
~=~ -\gamma\hbar \left( B_0 \VEC{I}_\LAB{z} + B_1
(\cos(\omega t)\VEC{I}_\LAB{x} + \sin(\omega t)\VEC{I}_\LAB{y}) \right) ~.
\end{equation}
The effect of such a rotating field is most
easily determined by transforming everything
into a frame which rotates along with it,
in which the Hamiltonian becomes time-independent:
\begin{equation}
{\VEC{\rho}}' ~=~ e^{-\imath\omega t\VEC{I}_\LAB{z}} {\VEC{\rho}}\, e^{\imath\omega t\VEC{I}_\LAB{z}} ~,\quad
\VEC H' ~=~ e^{-\imath\omega t\VEC{I}_\LAB{z}} \VEC H e^{\imath\omega t\VEC{I}_\LAB{z}}
~=~ \VEC H_\LAB{Z} + \gamma\hbar B_1 \VEC{I}_\LAB{x}
\end{equation}
Then the Bloch equation itself is transformed as follows:
\begin{equation} \begin{array}{rl}
{\rule[0pt]{0pt}{7pt}\smash{\dot{\VEC P}}}' ~= & -\imath\omega \VEC{I}_\LAB{z} \VEC P'
+ e^{-\imath\omega t\VEC{I}_\LAB{z}} \dot{\VEC P} \,
e^{\imath\omega t\VEC{I}_\LAB{z}} + \VEC P' \imath\omega \VEC{I}_\LAB{z} \\
= & \VEC P' \times (\VEC H' - \omega \VEC{I}_\LAB{z})
\label{eq:trans_bloch}
\end{array} \end{equation}
Thus if $\omega$ equals the natural precession
frequency of the spins $\omega_0 = \gamma\hbar B_0$,
the Zeeman Hamiltonian $\VEC H_\LAB{Z}' =
\VEC H_\LAB{Z} = \omega_0 \VEC{I}_\LAB{z}$ cancels out.
In this frame, the spins turn about the axis of $\VEC B_1'$ (an axis
which rotates in the laboratory frame) at a rate $\omega_1 = \gamma\hbar B_1$,
so that if the polarization vector starts out along $\LAB{z}$,
it is in the $\LAB{xy}$-plane where it produces
the maximum signal after a time $t=\pi/(2\omega_1)$.
Henceforth, all our coordinate frames will be rotating
at the transmitter frequency unless otherwise mentioned.
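As a numerical check of this statement (our own illustration), applying
the resonant rotating-frame Hamiltonian $\omega_1\VEC{I}_\LAB{x}$ for a
time $t = \pi/(2\omega_1)$ to a spin initially polarized along $\LAB{z}$
indeed leaves the polarization vector in the transverse plane.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

Ix = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Iy = np.array([[0, -1j], [1j, 0]]) / 2
Iz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

omega1 = 1.0
H_rf = omega1 * Ix                         # on-resonance rotating-frame Hamiltonian
t90 = np.pi / (2 * omega1)                 # duration of a "90 degree" pulse

rho0 = 0.5 * (np.eye(2) + 2 * Iz)          # polarization initially along z
U = expm(-1j * t90 * H_rf)
rho1 = U @ rho0 @ U.conj().T

P = [2 * np.trace(I @ rho1).real for I in (Ix, Iy, Iz)]
print(np.round(P, 6))                      # Pz -> 0 and |Py| -> 1: maximal signal
\end{verbatim}
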
\vspace*{0.20in}
\section{The product operator formalism}
Thus far we have restricted our presentation to
ensembles consisting of indistinguishable nuclear spins.
The power of NMR spectroscopy as a means of chemical analysis,
however, depends on the fact that the different nuclei in
a molecule generally have distinct electronic environments,
which affect the applied magnetic field at each nucleus.
As a result, they precess at slightly different frequencies and
give rise to resolvable ``peaks'' in the resulting spectrum.
This is also one of the reasons why NMR provides a
facile approach to quantum information processing,
since it permits each {\em chemical\/} equivalence class
of spins in the ensemble to be treated as a separate ``qubit''.
In this section we will describe an extension
of the density operator to multispin systems,
using a basis which is a direct generalization of the
``scalar + vector'' basis given above for a single spin.
We then illustrate this so-called {\em product operator formalism\/}
\cite{BoulaRance:94a,ErnBodWok:87,SomCorHav:98,SoEiLeBoEr:83,vdVenHilbe:83}
by describing how quantum information processing can be done on an ensemble
of multispin molecules, using the internal Hamiltonian of liquid-state NMR.
For the sake of simplicity we shall assume throughout
that the ensemble is in a pure state, i.e.\ that the
joint state of the spins in every molecule is identical.
The next section is devoted to the complications
involved in extending this approach to the highly
mixed states which are available in practice.
As usual in quantum information
processing \cite{Steane:98,WilliClear:98},
we choose a {\em computational basis\/}
$(\KET{0},\KET{1})$ for the Hilbert space $\cal H$ of each spin
that corresponds to the eigenvectors of its $\VEC{I}_\LAB{z}$ operator,
i.e.\ to alignment of the spin with (up) and against (down)
a magnetic field $\VEC B_0$ along the $\LAB{z}$ axis.
Relative to this basis, a superposition $c_0\KET{0} + c_1\KET{1}$
($c_0, c_1 \ne 0$ complex with $|c_0|^2 + |c_1|^2 = 1$)
is any state with transverse ($\LAB{xy}$) components.
The Hilbert space needed to describe the kinematics
of a system consisting of $N$ distinguishable spins
({\em not\/} an ensemble) is the $N$-fold tensor product
of their constituent Hilbert spaces \cite{Peres:95,Sakurai:94}.
The induced basis in this $(2^N)$-dimensional space is
\begin{equation}
\KET{\kappa^1} \otimes \KET{\kappa^2} \otimes\cdots\otimes \KET{\kappa^N}
~\equiv~ \KET{\kappa^1\kappa^2\ldots\kappa^N} ~\equiv~ \KET{k} ~,
\end{equation}
where $\kappa^n \in \{ 0, 1 \}$ $(n = 1,\ldots,N)$ is the
binary expansion of the integer $k \in \{ 0,\ldots,2^N-1 \}$.
Because of the canonical isomorphism
\begin{equation}
\FUN{End}({\cal H}) \otimes \FUN{End}({\cal H})
~\approx~ \FUN{End}({\cal H} \otimes {\cal H})
\end{equation}
together with our previous isomorphism
$\FUN{End}({\cal H}) \approx {\cal H} \otimes {\cal H}^*$,
this implies that the density operators for an ensemble of $N$-spin
molecules are all contained in the $N$-fold tensor product space
\begin{equation}
({\cal H} \otimes\cdots\otimes {\cal H}) \otimes
({\cal H}^* \otimes\cdots\otimes {\cal H}^*) ~\approx~
({\cal H} \otimes {\cal H}^*) \otimes\cdots\otimes
({\cal H} \otimes {\cal H}^*) ~.
\end{equation}
It follows that a basis for the algebra of $N$-spin operators is
\begin{equation} \label{eq:bad_basis}
\begin{array}{rl}
\KET{k} \BRA{\ell} ~= &
\KET{\kappa^1\,\kappa^2\,\ldots\,\kappa^N}
\BRA{\lambda^1\,\lambda^2\,\ldots\,\lambda^N} \\
= & (\KET{\kappa^1}\BRA{\lambda^1}) \otimes
(\KET{\kappa^2}\BRA{\lambda^2}) \otimes\cdots\otimes
(\KET{\kappa^N}\BRA{\lambda^N}) ~,
\end{array} \end{equation}
where $\kappa^n, \lambda^n \in \{ 0, 1 \}$
($n = 1,\ldots,N$) are binary expansions of
the integers $k, \ell \in \{ 0,\ldots,2^N-1 \}$.
This basis, however, does not consist of Hermitian operators,
and although the dyadic products $\KET{\psi}\BRA{\psi}$
($\KET{\psi} \in {\cal H} \otimes\cdots\otimes {\cal H}$)
do span the real subspace of all Hermitian operators,
the restriction of the basis in Eq.~(\ref{eq:bad_basis})
to the diagonal does not.
An algebra basis which has the advantage of also
being a linear basis for the subspace of Hermitian
operators is known as the {\em product operator basis\/}.
It is induced by the one-spin basis $(\VEC{1},\VEC{I}_\LAB{x},\VEC{I}_\LAB{y},\VEC{I}_\LAB{z})$,
and consists simply of the tensor products of the
angular momentum operators of the individual spins.
In the case of two spins, this basis has sixteen elements:
\renewcommand{\arraystretch}{1.1}
\begin{equation} \begin{array}{cccc}
\VEC{1} \otimes \VEC{1} & \VEC{1} \otimes \VEC{I}_\LAB{x} & \VEC{1} \otimes \VEC{I}_\LAB{y} & \VEC{1} \otimes \VEC{I}_\LAB{z} \\
\VEC{I}_\LAB{x} \otimes \VEC{1} & \VEC{I}_\LAB{x} \otimes \VEC{I}_\LAB{x} & \VEC{I}_\LAB{x} \otimes \VEC{I}_\LAB{y} & \VEC{I}_\LAB{x} \otimes \VEC{I}_\LAB{z} \\
\VEC{I}_\LAB{y} \otimes \VEC{1} & \VEC{I}_\LAB{y} \otimes \VEC{I}_\LAB{x} & \VEC{I}_\LAB{y} \otimes \VEC{I}_\LAB{y} & \VEC{I}_\LAB{y} \otimes \VEC{I}_\LAB{z} \\
\VEC{I}_\LAB{z} \otimes \VEC{1} & \VEC{I}_\LAB{z} \otimes \VEC{I}_\LAB{x} & \VEC{I}_\LAB{z} \otimes \VEC{I}_\LAB{y} & \VEC{I}_\LAB{z} \otimes \VEC{I}_\LAB{z}
\end{array} \end{equation}
\renewcommand{\arraystretch}{1.3}
As before, a notation which eliminates the need
for repetitive ``$\otimes$'' symbols is preferred.
This is obtained by using superscripts
for the spin indices in the operators
\begin{equation}
\VEC I_w^n ~\equiv~ \VEC{1} \otimes\cdots\otimes \VEC{1} \otimes
\VEC I_w \otimes \VEC{1} \otimes\cdots\otimes \VEC{1} \qquad
\end{equation}
($\VEC I_w$ in the $n$-th place, $n = 1,\ldots,N$,
$w \in \{\LAB{x},\LAB{y},\LAB{z}\}$),
and noting that by the {\em mixed product formula\/}
between the operator composition and tensor products:
\begin{equation}
\VEC I_u^m \VEC I_v^n
~=~ \VEC{1} \otimes\cdots\otimes \VEC I_u
\otimes \VEC{1} \otimes\cdots\otimes \VEC{1}
\otimes \VEC I_v \otimes\cdots\otimes \VEC{1}
~=~ \VEC I_v^n \VEC I_u^m
\end{equation}
($\VEC I_u$ in the $m$-th place, $\VEC I_v$ in the $n$-th,
$\,m, n = 1,\ldots,N$ with $m < n$,
and $u,v \in \{\LAB{x},\LAB{y},\LAB{z}\}$).
In the following, we will again identify the identity operator
$\VEC{1} \otimes\cdots\otimes \VEC{1}$ with the scalar identity $1$.
We will also be using the operator norm $\|\VEC I_w^n\|^2
\equiv \AVG{(\VEC I_w^n)^2} = (\VEC I_w^n)^2 = 1/4$ obtained
from the scalar part, rather than the more usual Frobenius norm
$\|\VEC I_w^n\|_\LAB{F}^2 = {\rm tr}((\VEC I_w^n)^2) = 2^{N-2}$
on $\FUN{End}(\cal H)$, because the former is independent of $N$.
The normalization of our basis to $\|\VEC I_w^n\| = 1/2$
rather than $1$ will be seen to have both advantages and
disadvantages, but the convention is well-established in NMR.
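In matrix terms the product operators are simply Kronecker products; the
fragment below (our own illustration) constructs $\VEC I_w^n$ for an
$N$-spin system and checks the two norms just mentioned.
\begin{verbatim}
import numpy as np
from functools import reduce

one = np.eye(2, dtype=complex)
I = {'x': np.array([[0, 1], [1, 0]], dtype=complex) / 2,
     'y': np.array([[0, -1j], [1j, 0]]) / 2,
     'z': np.array([[1, 0], [0, -1]], dtype=complex) / 2}

def spin_op(N, n, w):
    """I_w^n for an N-spin system: identity everywhere except spin n."""
    factors = [I[w] if m == n else one for m in range(1, N + 1)]
    return reduce(np.kron, factors)

N = 3
A = spin_op(N, 2, 'z')                     # I_z of the second of three spins
# Scalar-part norm: (I_w^n)^2 is 1/4 times the identity, independent of N,
# whereas the Frobenius norm tr((I_w^n)^2) = 2^(N-2) grows with N.
assert np.allclose(A @ A, 0.25 * np.eye(2 ** N))
print(np.trace(A @ A).real, 2 ** (N - 2))  # both equal 2.0
\end{verbatim}
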
Just as with an ensemble consisting of a single type
of spin, a pure state may be characterized by the
idempotence of its density operator: ${\VEC{\rho}}^2 = {\VEC{\rho}}$.
The scalar part of the density operator is
$\AVG{{\VEC{\rho}}} = 2^{-N} {\rm tr}({\VEC{\rho}}) = 2^{-N}$,
and if we write an arbitrary density operator
${\VEC{\rho}} \equiv \overline{\KET{\psi}\BRA{\psi}}$ in diagonal form as
\begin{equation}
{\VEC{\rho}} ~=~ \VEC U \left( {\sum}_{k=0}^{2^N-1}\,
p_k\, \KET{k}\BRA{k} \right) \tilde{\VEC U}
\end{equation}
($0 \le p_k \le 1$, $\sum_k p_k = 1$)
for some $\VEC U \in \LAB{SU}(2^N)$,
we see that the idempotence of ${\VEC{\rho}}$ is equivalent
to $\AVG{{\VEC{\rho}}^2} = 2^{-N}$, i.e.\ $p_\ell = 1$
for some $\ell \in \{ 0,\ldots,2^N-1 \}$.
This shows that the density operator of a
pure state is in fact a primitive idempotent.
Without loss of generality we may take $\ell = 0$, so that
${\VEC{\rho}} = \VEC U \KET{00\ldots0}\BRA{00\ldots0} \tilde{\VEC U}$.
If we expand $\KET{0}\BRA{0}$ in the product operator
basis, we obtain $\VEC{E}_{+} \equiv \HALF(1 + 2\VEC{I}_\LAB{z})$,
and similarly for $\KET{1}\BRA{1} = \VEC{E}_{-} \equiv \HALF(1 - 2\VEC{I}_\LAB{z})$.
Thus we can also write the density operator of a pure state as
\begin{equation}
{\VEC{\rho}} ~=~ \VEC U \left( \VEC{E}_{+}^1 \VEC{E}_{+}^2 \cdots \VEC{E}_{+}^N \right) \tilde{\VEC U} ~,
\end{equation}
where the superscript on the idempotent
$\VEC{E}_{+}$ is the spin index as before.
More generally, the set of all density operators
consists of the closed convex cone of positive
semi-definite operators in the Hermitian subspace of
$\FUN{End}( {\cal H} \otimes\cdots\otimes {\cal H} )$,
and the density operators of pure states
are the extreme rays of this cone.
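The following small check (ours, for $N=2$ spins) confirms that the
product $\VEC{E}_{+}^1\VEC{E}_{+}^2$ is the dyad $\KET{00}\BRA{00}$ and
hence a primitive idempotent.
\begin{verbatim}
import numpy as np

Iz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
one = np.eye(2, dtype=complex)

Eplus = 0.5 * (one + 2 * Iz)               # E_+ = |0><0| for a single spin

Ep1 = np.kron(Eplus, one)                  # E_+ acting on the first spin
Ep2 = np.kron(one, Eplus)                  # E_+ acting on the second spin
rho = Ep1 @ Ep2

ket00 = np.zeros(4, dtype=complex); ket00[0] = 1.0
assert np.allclose(rho, np.outer(ket00, ket00.conj()))   # equals |00><00|
assert np.allclose(rho @ rho, rho)                       # idempotent
print(np.trace(rho).real)                                # 1.0 (rank one)
\end{verbatim}
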
We now consider the form of the Hamiltonian
which is operative among the spins of an
ensemble of molecules in the liquid state
(with which we are exclusively concerned in this paper),
again using the product operator formalism.
First, there is the Zeeman Hamiltonian
previously given for a single spin, i.e.
\begin{equation}
\VEC{H}_\LAB{Z} ~\equiv~ -\omega_0^1 \VEC{I}_\LAB{z}^1 -\cdots- \omega_0^N \VEC{I}_\LAB{z}^N
\end{equation}
with $\omega_0^n = \hbar \gamma^n (1 - \sigma^n) B_0$,
where $\gamma^n$ is the gyromagnetic ratio of the $n$-th spin
and $0\le\sigma^n\le1$ is the {\em chemical shift\/}
due to the (usually small) influence of the electronic
environment of the spins on their precession frequencies.
This Hamiltonian is easily seen to be diagonal in the
computational basis $\KET{k}$ ($k = 0,\ldots,2^N-1$),
with eigenvalues $(\pm\omega_0^1 \pm\cdots\pm \omega_0^N)/2$.
Second, there is an exchange interaction
known as the $J$ or {\em scalar coupling\/},
which is proportional to the inner product
of the spins' polarization vectors, namely
\begin{equation}
\VEC H_\LAB{J} ~=~ {\sum}_{m,n}\, 2\pi J^{mn}
\left( \VEC{I}_\LAB{x}^m\VEC{I}_\LAB{x}^n + \VEC{I}_\LAB{y}^m\VEC{I}_\LAB{y}^n + \VEC{I}_\LAB{z}^m\VEC{I}_\LAB{z}^n \right) ~,
\end{equation}
where $J^{mn}$ is the coupling strength in Hertz.
This interaction is mediated by the electrons in
the chemical bonds between atoms, and is usually
negligible for atoms separated by more than three bonds.
Standard perturbation theory \cite{Sakurai:94}
shows that the eigenvalues of the total Hamiltonian
$\VEC H = \VEC H_\LAB{Z} + \VEC H_\LAB{J}$ are
given to first order by the diagonal elements of
\begin{equation}
\VEC H' ~=~ \VEC H_\LAB{Z} + \VEC H_\LAB{J}' ~\equiv~
\VEC H_\LAB{Z} + 2\pi\, {\sum}_{m,n}\, J^{mn} \VEC{I}_\LAB{z}^m\VEC{I}_\LAB{z}^n ~,
\end{equation}
whereas the eigenvectors are given to first order by:
\begin{equation}
\KET{\ell}' ~=~ \KET{\ell} + 2\pi\, {\sum}_{m,n}\, J^{mn}\, {\sum}_{k\ne\ell}\,
\frac{\BRA{k}\, \VEC{I}_\LAB{x}^m\VEC{I}_\LAB{x}^n + \VEC{I}_\LAB{y}^m\VEC{I}_\LAB{y}^n + \VEC{I}_\LAB{z}^m\VEC{I}_\LAB{z}^n \,\KET{\ell}}
{\BRA{\ell}\,\VEC H_\LAB{Z}\,\KET{\ell} - \BRA{k}\,\VEC H_\LAB{Z}\,\KET{k}}
\, \KET{k}
\end{equation}
The numerator of each term in the summations
is nonzero only if $\kappa^p = \lambda^p$ for
$p \ne m, n$, $\kappa^m = 1 - \lambda^m$, $\kappa^n = 1 - \lambda^n$
and $\lambda^m \ne \lambda^n$,
in which case it is $\pi J^{mn}$, while
the denominators of the corresponding
terms are $\omega_0^m - \omega_0^n$.
It follows that the eigenvectors are negligibly
perturbed so long as the frequency {\em differences\/}
are much larger than the scalar couplings,
i.e.\ $|\omega_0^m - \omega_0^n| \gg \pi|J^{mn}|$.
We shall be making this {\em weak coupling\/}
approximation throughout.
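To get a feeling for the quality of this approximation, one may compare
the exact and truncated eigenvalues numerically; in the sketch below
(with placeholder frequencies and coupling constant chosen only for
illustration) the discrepancy is of order
$(\pi J^{12})^2 / |\omega_0^1 - \omega_0^2|$,
which is negligible in the weak coupling regime:
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
Ix = np.array([[0, 0.5], [0.5, 0]])
Iy = np.array([[0, -0.5j], [0.5j, 0]])
Iz = np.diag([0.5, -0.5])
kron = np.kron

w1, w2, J = 2*np.pi*500.0, 2*np.pi*100.0, 10.0   # placeholder values
Hz = -w1*kron(Iz, I2) - w2*kron(I2, Iz)
HJ = 2*np.pi*J*(kron(Ix, Ix) + kron(Iy, Iy) + kron(Iz, Iz))
Hweak = Hz + 2*np.pi*J*kron(Iz, Iz)              # truncated (weak coupling) form

exact = np.sort(np.linalg.eigvalsh(Hz + HJ))
first = np.sort(np.linalg.eigvalsh(Hweak))
print(np.max(np.abs(exact - first)))   # ~ (pi*J)^2/|w1-w2|, << pi*J here
\end{verbatim}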
Another, potentially quite large term in the molecular spin Hamiltonian
is a through-space interaction between the spins' magnetic dipoles.
Because of the rapid motions of the molecules in a liquid,
however, these interactions are averaged to zero much
more quickly than they can have any net effect.
The effective absence of this interaction
nevertheless has the important consequence that the
{\em spins in different molecules do not interact\/},
and hence cannot be correlated with one another.\footnote{
More precisely, the spins do not interact to
an excellent (though only first-order) approximation;
second-order effects do exist and are a source
of spin-spin relaxation (aka decoherence).}
As a result, the density operator of the entire sample
${\VEC{E}_{-}B\varrho}$ (which describes an abstract Gibbs ensemble
obtained by tracing over the spins' environment)
can be factorized into a product of density
operators for the individual molecules, i.e.
\begin{equation}
{\VEC{E}_{-}B\varrho} ~=~ {\VEC{E}_{-}B\varrho}^1 \cdots {\VEC{E}_{-}B\varrho}^M ~=~
{\VEC{E}_{-}B\rho}^1 \otimes\cdots\otimes {\VEC{E}_{-}B\rho}^M ~,
\end{equation}
where ${\VEC{E}_{-}B\varrho}^m \equiv \VEC{1} \otimes \cdots
\otimes {\VEC{E}_{-}B\rho}^m \otimes \cdots \otimes \VEC{1}$.
In a pure liquid (or if we are looking
at just one component of a solution),
all the molecules are equivalent so that
all these density operators are the same.
It follows that we can work with the partial
trace over all but any one of the molecules,
which is called the {\em reduced\/} density
operator ${\VEC{E}_{-}B\rho} ~\equiv~ {\VEC{E}_{-}B\rho}^1 ~=\cdots=~ {\VEC{E}_{-}B\rho}^M$.
Since this operates on a space of dimension $2^N$ where
$N$ is now the number of spins in a single molecule,
rather than $2^{MN}$ where $M \sim 10^{20}$
is the number of molecules in the sample,
this is a very considerable simplification.
It also means that in liquid-state NMR we are
working with a physical ensemble (the sample),
rather than a purely abstract Gibbs ensemble.
Finally, there is the interaction of the spins
with a transverse RF (radio-frequency) field,
which we described in the last section.
Whenever weak coupling is valid,
we can apply this field in a single ``pulse'',
tuned to the precession frequency of the $n$-th spin (say),
which is short enough that we may neglect the evolution
of the spins due to scalar coupling while it lasts.
This effects a unitary transformation of the form
\begin{equation} \begin{array}{rcl}
e^{-\imath\theta\VEC{I}_\LAB{x}^n} &=& 1 -
\imath \left(\FRAC{\theta}{2}\right) 2\VEC{I}_\LAB{x}^n -
\HALF \left(\FRAC{\theta}{2}\right)^2 +
\FRAC{\imath}{6} \left(\FRAC{\theta}{2}\right)^3
\,2 \VEC{I}_\LAB{x}^n + \cdots \\ &=&
\cos\left(\FRAC{\theta}{2}\right) -
\imath \sin\left(\FRAC{\theta}{2}\right) 2\VEC{I}_\LAB{x}^n ~,
\end{array} \end{equation}
which corresponds to a right-hand rotation
of the $n$-th spin by an angle $\theta$ about
the $\LAB{x}$ axis in the rotating frame.
Using a pulse with a broad frequency range,
it is also possible (in fact easier) to apply
such a rotation to all the spins in parallel.
We will now indicate how RF pulses, in combination
with the innate Hamiltonian of the spins, enable us
to implement standard quantum logic gates in a manner
similar to that considered by computer scientists studying
universality in quantum computation \cite{BBCDMSSSW:95}.
The simplest such gate is the NOT operation on
e.g.\ the first spin, which simply rotates it by $\pi$;
combining the above formula with the basic geometric
algebra relations $\VEC{I}_\LAB{x}\VEC{I}_\LAB{z} = -\VEC{I}_\LAB{z}\VEC{I}_\LAB{x}$ and $(2\VEC{I}_\LAB{x})^2 = 1$,
we obtain
\begin{equation} \begin{array}{rcl}
e^{-\imath\pi\VEC{I}_\LAB{x}^1} \VEC{E}_{+}^1 e^{\imath\pi\VEC{I}_\LAB{x}^1}
&=& (-2\imath\VEC{I}_\LAB{x}^1) \VEC{E}_{+}^1 (2\imath\VEC{I}_\LAB{x}^1)
~=~ \HALF (1 + 8 \VEC{I}_\LAB{x}^1 \VEC{I}_\LAB{z}^1 \VEC{I}_\LAB{x}^1) \\
&=& \HALF (1 - 8 \VEC{I}_\LAB{x}^1 \VEC{I}_\LAB{x}^1 \VEC{I}_\LAB{z}^1)
~=~ \HALF (1 - 2\VEC{I}_\LAB{z}^1) ~=~ \VEC{E}_{-}^1
\end{array} \end{equation}
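This little calculation is also easy to confirm with explicit matrices;
a minimal numerical sketch (offered purely as a cross-check) is:
\begin{verbatim}
import numpy as np

Ix = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
Iz = np.diag([0.5, -0.5]).astype(complex)
Ep = 0.5*(np.eye(2) + 2*Iz)     # E_+
Em = 0.5*(np.eye(2) - 2*Iz)     # E_-

def U(theta, G):
    # exp(-i*theta*G) for a Hermitian generator G, via its eigendecomposition
    vals, vecs = np.linalg.eigh(G)
    return vecs @ np.diag(np.exp(-1j*theta*vals)) @ vecs.conj().T

Upi = U(np.pi, Ix)
assert np.allclose(Upi @ Ep @ Upi.conj().T, Em)
print("pi rotation about x maps E+ to E-")
\end{verbatim}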
The c-NOT (controlled-NOT) gate, on the other hand,
is a $\pi$ rotation of e.g.\ the first spin
{\em conditional\/} on the polarization of a second.
Using the relation $\VEC{E}_{\pm}^2 \VEC{E}_{\mp}^2 = 0$, we can easily show that
\begin{equation}
(-2\imath\VEC{I}_\LAB{x}^1 \VEC{E}_{-}^2 + \VEC{E}_{+}^2) (\VEC{E}_\epsilon^1 \VEC{E}_{\pm}^2)
(2\imath\VEC{I}_\LAB{x}^1 \VEC{E}_{-}^2 + \VEC{E}_{+}^2) ~=~ \VEC{E}_{\pm\epsilon}^1 \VEC{E}_{\pm}^2
\end{equation}
($\epsilon \in \{\pm{}\}$).
The phase factor $\imath$ multiplying $\VEC{I}_\LAB{x}^1$
complicates the action of the c-NOT on a superposition,
but can be eliminated by a phase shift conditional on the second spin.
Using $\VEC{E}_{-}^2 + \VEC{E}_{+}^2 = 1$, this phase-corrected
c-NOT gate is given by $\VEC{S}^{1|2} \equiv$
\begin{equation} \begin{array}{rcl}
2\VEC{I}_\LAB{x}^1 \VEC{E}_{-}^2 + \VEC{E}_{+}^2 &=&
(-\imath \VEC{E}_{-}^2 + \VEC{E}_{+}^2) (2\imath\VEC{I}_\LAB{x}^1 \VEC{E}_{-}^2 + \VEC{E}_{+}^2)
\\ &=&
\left( 1 + (e^{-\imath\frac\pi2} - 1) \VEC{E}_{-}^2 \rule{0pt}{10pt} \right)
\left( 1 + (e^{\imath\pi\VEC{I}_\LAB{x}^1} - 1) \VEC{E}_{-}^2 \right)
\\ &=&
e^{-\imath\frac{\pi}{2}\VEC{E}_{-}^2} e^{\imath\pi\VEC{I}_\LAB{x}^1\VEC{E}_{-}^2}
~=~ e^{-\imath\frac{\pi}{2}(1 - 2\VEC{I}_\LAB{x}^1)\VEC{E}_{-}^2} ~,
\end{array} \end{equation}
and hence the idempotents $\VEC{E}_{\pm}^n$ also give
us an algebraic description of the c-NOT gate,
in addition to the density operators of pure states.
It is well-known that single spin rotations,
together with the c-NOT, are sufficient to
implement any quantum logic gate \cite{BBCDMSSSW:95}.
The c-NOT can be implemented in NMR by combining single
spin rotations with the conditional rotations induced
by (weak) scalar coupling $2\pi J^{12}\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2$
\cite{CorPriHav:98,GershChuan:97,JonHanMos:98}.
Recalling that in the discrete $\LAB{SO}(3)$
subgroup of rotations by $\pi/2$,
\begin{equation}
e^{-\imath \frac{\pi}{2} \VEC{I}_\LAB{y}^1}
e^{\imath \frac{\pi}{2} \VEC{I}_\LAB{z}^1} ~=~
e^{\imath \frac{\pi}{2} \VEC{I}_\LAB{z}^1}
e^{\imath \frac{\pi}{2} \VEC{I}_\LAB{x}^1} ~,
\end{equation}
we can expand the above propagator as follows:
\begin{equation} \begin{array}{rcl}
\VEC S^{1|2} &=&
e^{-\imath\frac{\pi}{2}(1 - 2\VEC{I}_\LAB{x}^1)\VEC{E}_{-}^2}
\\ &=&
e^{-\imath\frac{\pi}{2}\VEC{I}_\LAB{y}^1} \, e^{-\imath\pi\VEC{E}_{-}^1\VEC{E}_{-}^2}
\, e^{\imath\frac{\pi}{2}\VEC{I}_\LAB{y}^1}
\\ &=&
\sqrt{-\imath}\, e^{-\imath\frac{\pi}{2}\VEC{I}_\LAB{y}^1}
\, e^{\imath\frac{\pi}{2}(\VEC{I}_\LAB{z}^1+\VEC{I}_\LAB{z}^2)}
\, e^{-\imath\pi\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2}
\, e^{\imath\frac{\pi}{2}\VEC{I}_\LAB{y}^1}
\\ &=&
\sqrt{-\imath}\,
e^{\imath\frac{\pi}{2}(\VEC{I}_\LAB{z}^1+\VEC{I}_\LAB{z}^2)}
\, e^{\imath\frac{\pi}{2}\VEC{I}_\LAB{x}^1}
\, e^{-\imath\pi\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2}
\, e^{\imath\frac{\pi}{2}\VEC{I}_\LAB{y}^1}
\end{array} \end{equation}
Since the overall phase of the transformation
has no effect on the density operator, it follows that
the c-NOT gate $\VEC{S}^{1|2}$ can be implemented by an NMR
{\em pulse sequence\/}, wherein each pulse and delay corresponds
to the indicated ``effective'' Hamiltonian in temporal order:
\begin{equation} \begin{array}{rl}
& [-\FRAC{\pi}{2}\VEC{I}_\LAB{y}^1] \rightarrow [\pi\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2] \rightarrow
[-\FRAC{\pi}{2}\VEC{I}_\LAB{x}^1] \rightarrow [-\FRAC{\pi}{2}(\VEC{I}_\LAB{z}^1+\VEC{I}_\LAB{z}^2)]
\\ \Leftrightarrow
& \exp\left( \imath\FRAC{\pi}{2}(\VEC{I}_\LAB{z}^1+\VEC{I}_\LAB{z}^2) \right)
\exp\left( \imath\FRAC{\pi}{2}\VEC{I}_\LAB{x}^1 \right)
\exp\left( -\imath\pi\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2 \right)
\exp\left( \imath\FRAC{\pi}{2}\VEC{I}_\LAB{y}^1 \right)
\end{array} \end{equation}
In practice, the effective Hamiltonian $[\pi\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2]$
is obtained by applying a $\pi$-pulse to both spins in the
middle and at the end of a $1/(2J^{12})$ evolution period,
to ``refocus'' their Zeeman evolution \cite{CorPriHav:98}.
The $[-\FRAC{\pi}{2}\VEC{I}_\LAB{y}^1]$ and $[-\FRAC{\pi}{2}\VEC{I}_\LAB{x}^1]$
Hamiltonians are implemented by RF pulses as above,
while the $[-\FRAC{\pi}{2}(\VEC{I}_\LAB{z}^1+\VEC{I}_\LAB{z}^2)]$ transformation
is most easily implemented by letting one spin evolve
while applying a $\pi$-pulse to the other, then vice versa,
and finally realigning the transmitter's phase with the spins'.
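As a consistency check on the decomposition above, the following
numerical sketch (a supplementary illustration, not a description of the
actual spectrometer implementation) multiplies out the four propagators
and compares the result, including the $\sqrt{-\imath}$ phase, with
$2\VEC{I}_\LAB{x}^1\VEC{E}_{-}^2 + \VEC{E}_{+}^2$:
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
Ix = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
Iy = np.array([[0, -0.5j], [0.5j, 0]])
Iz = np.diag([0.5, -0.5]).astype(complex)
Ep, Em = 0.5*(I2 + 2*Iz), 0.5*(I2 - 2*Iz)
kron = np.kron

def U(theta, G):
    vals, vecs = np.linalg.eigh(G)
    return vecs @ np.diag(np.exp(-1j*theta*vals)) @ vecs.conj().T

Ix1, Iy1 = kron(Ix, I2), kron(Iy, I2)
Iz1, Iz2 = kron(Iz, I2), kron(I2, Iz)
S = 2*kron(Ix, Em) + kron(I2, Ep)     # phase-corrected c-NOT S^{1|2}

# exp(i pi/2 (Iz1+Iz2)) exp(i pi/2 Ix1) exp(-i pi Iz1 Iz2) exp(i pi/2 Iy1)
seq = (U(-np.pi/2, Iz1 + Iz2) @ U(-np.pi/2, Ix1)
       @ U(np.pi, Iz1 @ Iz2) @ U(-np.pi/2, Iy1))

assert np.allclose(np.exp(-1j*np.pi/4)*seq, S)   # sqrt(-i) = exp(-i pi/4)
print("pulse-sequence product reproduces the c-NOT")
\end{verbatim}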
\pagebreak
The ``readout'' procedure needed to determine the result
of an NMR computation differs somewhat from that usually
considered in quantum computing \cite{Steane:98,WilliClear:98}.
The most important difference is of course the fact that
in conventional NMR one can only make weak (nonperturbing)
measurements of the observables, as previously described.
As likewise described above, these observables are the
$\LAB{x}$ and $\LAB{y}$ components $\VEC{I}_\LAB{x}^n$ and $\VEC{I}_\LAB{y}^n$
of the dipolar magnetization due to each spin in a
rotating frame defined by the transmitter frequency.
The products of the angular momentum components
of different spins (e.g.\ $\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{y}^2$),
however, do not produce a net magnetic
dipole and hence cannot be detected directly.
Thus we are limited to one-spin observables,
as is usually assumed in quantum computation.
The unobservable degrees of freedom may also be
characterized in the basis $\KET{k}\BRA{\ell}$
as having a {\em coherence order\/}
$|\BRA{k}\,\VEC{I}_\LAB{z}\,\KET{k} - \BRA{\ell}\,\VEC{I}_\LAB{z}\,\KET{\ell}| \ne 1$,
where $\VEC{I}_\LAB{z} \equiv \VEC{I}_\LAB{z}^1 +\cdots+ \VEC{I}_\LAB{z}^N$ is
the total angular momentum along $\LAB{z}$
\cite{CorPriHav:98,ErnBodWok:87,Freeman:98,Munowitz:88,Slichter:90}.
According to the usual phase conventions of NMR,
the Fourier transform of the $\LAB{x}$-magnetization
of e.g.\ the first spin, $\VEC{I}_\LAB{x}^1$, yields an {\em absorptive\/}
peak shape, while $\VEC{I}_\LAB{y}^1$ produces a {\em dispersive\/} shape,
both centered on its precession frequency $\omega_0^1$.
If the first spin is coupled to e.g.\ the second,
its signal is modulated by $\cos(\pi J^{12} t)$
yielding a spectrum containing two peaks separated by
the coupling constant $J^{12}$ \cite{CorPriHav:98}.
An effective exception to the unobservability of
the products are those of the form $\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{z}^2$
(or $\VEC{I}_\LAB{y}^1\VEC{I}_\LAB{z}^2$), which (when $J^{12} \ne 0$)
evolve under scalar coupling into one-spin terms.
Using the facts that $4\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2$ and $4\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{z}^2$
anticommute while ${(4\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2)}^2 = 1$,
we can show this as follows:
\begin{equation} \begin{array}{rcl}
e^{-\imath t \pi J^{12} 4\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2}
\, \left( 4\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{z}^2 \right)
&=& e^{-\imath t 2\pi J^{12} \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2}
\, \left( 4\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{z}^2 \right) \,
e^{\imath t 2\pi J^{12} \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2} \\
&=& \left(\cos(\pi J^{12} t) - \imath 4\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2
\sin(\pi J^{12} t)\right) \, \left( 4\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{z}^2 \right) \\
&=& \cos(\pi J^{12} t) \, 4\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{z}^2 + \sin(\pi J^{12} t) \, 2\VEC{I}_\LAB{y}^1
\end{array} \end{equation}
Because the signal is now sinusoidally modulated by the coupling,
for a single pair of coupled spins this results in
a pair of {\em antiphase\/} peaks with opposite signs,
as opposed to the {\em inphase\/} peaks
described for $\VEC{I}_\LAB{x}^1$ and $\VEC{I}_\LAB{y}^1$ above.
These antiphase peaks may likewise be absorptive
($\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{z}^2$) or dispersive ($\VEC{I}_\LAB{y}^1\VEC{I}_\LAB{z}^2$), respectively.
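The evolution worked out above can be reproduced numerically; the
following minimal sketch (with a placeholder coupling constant and delay)
conjugates $4\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{z}^2$ with the weak-coupling
propagator and compares the result with the closed form:
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
Ix = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
Iy = np.array([[0, -0.5j], [0.5j, 0]])
Iz = np.diag([0.5, -0.5]).astype(complex)
kron = np.kron

def U(theta, G):
    vals, vecs = np.linalg.eigh(G)
    return vecs @ np.diag(np.exp(-1j*theta*vals)) @ vecs.conj().T

Ix1Iz2, Iy1, Iz1Iz2 = kron(Ix, Iz), kron(Iy, I2), kron(Iz, Iz)

J, t = 7.0, 0.031                      # placeholder coupling (Hz) and delay (s)
V = U(2*np.pi*J*t, Iz1Iz2)             # exp(-i t 2 pi J Iz1 Iz2)
evolved = V @ (4*Ix1Iz2) @ V.conj().T

expected = np.cos(np.pi*J*t)*4*Ix1Iz2 + np.sin(np.pi*J*t)*2*Iy1
assert np.allclose(evolved, expected)
print("antiphase term evolves into in-phase magnetization")
\end{verbatim}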
Figure \ref{fig:readout} shows examples of all these
possibilities for a pair of coupled spins.
\begin{figure}
\caption{
Plots of NMR spectra for a weakly coupled
two-spin molecule (amplitude versus frequency).}
\label{fig:readout}
\end{figure}
More generally, if the $n$-th spin is coupled to $M$ others,
its signal is split into $2^M$ peaks at frequencies of
$(\omega_0^n/\pi \pm J^{m_1n} \pm\cdots\pm J^{m_Mn})/2$,
one for each combination of ``up'' and ``down''
states for the $M$ spins to which it is coupled.
If the transverse magnetization is due to a $\pi/2$
rotation of a spin polarized along $\LAB{z}$ as before,
then the heights of these peaks are proportional to
the probability differences between pairs of states
$\KET{\kappa^{m_1}\ldots\kappa^n\ldots\kappa^{m_M}} \leftrightarrow
\KET{\kappa^{m_1}\ldots(1-\kappa^n)\ldots\kappa^{m_M}}$
separated by flips of that spin.
To show this, we restrict ourselves
to two spins for ease of presentation,
and consider a general diagonal
density operator of the form
\begin{equation} \begin{array}{rl}
{\VEC{E}_{-}B\rho}_\LAB{zz} ~= & p_0 \KET{00}\BRA{00} + p_1 \KET{01}\BRA{01}
+ p_2 \KET{10}\BRA{10} + p_3 \KET{11}\BRA{11} \\
= & \FRAC14 + \HALF(p_0+p_1-p_2-p_3)\VEC{I}_\LAB{z}^1 + \HALF(p_0-p_1+p_2-p_3)\VEC{I}_\LAB{z}^2 \\
& \qquad +\, (p_0-p_1-p_2+p_3) \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2 ~,
\end{array} \end{equation}
where $p_k$ denotes the probability
that a molecule is in the state $\KET{k}$.
Rotating this to
\begin{equation} \begin{array}{rcl}
{\VEC{E}_{-}B\rho}_\LAB{xz} &\equiv& e^{-\imath\frac{\pi}{2}\VEC{I}_\LAB{y}^1}
{\VEC{E}_{-}B\rho}_\LAB{zz} e^{\imath\frac{\pi}{2}\VEC{I}_\LAB{y}^1} \\
&=& \FRAC14 + \HALF(p_0+p_1-p_2-p_3)\VEC{I}_\LAB{x}^1
+ \HALF(p_0-p_1+p_2-p_3)\VEC{I}_\LAB{z}^2 \\ && +\,
(p_0-p_1-p_2+p_3) \VEC{I}_\LAB{x}^1\VEC{I}_\LAB{z}^2
\end{array} \end{equation}
and computing the signal in the Zeeman frame yields
\begin{equation} \begin{array}{rcl}
&& {\rm tr}\left( e^{-\imath t 2\pi J^{12} \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2} {\VEC{E}_{-}B\rho}_\LAB{xz} \,
e^{\imath t 2\pi J^{12} \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2} (\VEC{I}_\LAB{x}^1 + \imath\VEC{I}_\LAB{y}^1) \right) \\
&=& \HALF \left( (p_0+p_1-p_2-p_3) \cos(\pi J^{12} t) \right. \\
&& +\, \left. (p_0-p_1-p_2+p_3) \, \imath \sin(\pi J^{12} t) \right) \\
&=& \HALF e^{\imath\pi J^{12}t} (p_0 - p_2)
+ \HALF e^{-\imath\pi J^{12}t} (p_1 - p_3) ~,
\end{array} \end{equation}
thus showing that the peaks at $\omega_0^1 \pm \pi J^{12}$ have
amplitudes proportional to the probability differences as claimed.
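The same bookkeeping can be carried out numerically; in the sketch below
(with placeholder populations, coupling constant and acquisition time)
the simulated complex signal agrees with the closed form
$\HALF e^{\imath\pi J^{12}t}(p_0-p_2) + \HALF e^{-\imath\pi J^{12}t}(p_1-p_3)$:
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
Ix = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
Iy = np.array([[0, -0.5j], [0.5j, 0]])
Iz = np.diag([0.5, -0.5]).astype(complex)
kron = np.kron
Ix1, Iy1 = kron(Ix, I2), kron(Iy, I2)
Iz1, Iz2 = kron(Iz, I2), kron(I2, Iz)

def U(theta, G):
    vals, vecs = np.linalg.eigh(G)
    return vecs @ np.diag(np.exp(-1j*theta*vals)) @ vecs.conj().T

p = np.array([0.5, 0.2, 0.2, 0.1])     # placeholder populations of |00>..|11>
rho_zz = np.diag(p).astype(complex)
Ry = U(np.pi/2, Iy1)                   # pi/2 readout rotation of spin 1
rho_xz = Ry @ rho_zz @ Ry.conj().T

J, t = 7.0, 0.013                      # placeholder coupling (Hz) and time (s)
V = U(2*np.pi*J*t, Iz1 @ Iz2)
signal = np.trace(V @ rho_xz @ V.conj().T @ (Ix1 + 1j*Iy1))

expected = 0.5*(np.exp( 1j*np.pi*J*t)*(p[0]-p[2])
              + np.exp(-1j*np.pi*J*t)*(p[1]-p[3]))
assert np.allclose(signal, expected)
print("peak amplitudes reflect the population differences")
\end{verbatim}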
In closing, we mention that although vector
interpretations of single quantum inphase ($\VEC{I}_\LAB{x}^1$)
and antiphase ($\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{z}^2$) states are available
(and widely used in NMR \cite{Freeman:98}),
no satisfactory geometric interpretation
of general product states is known.
The development of an intuitive model
for the geometry determined by the action
of $\LAB{SU}(2^N)$ on the product operators
thus stands as an open problem in the field.
There are two reasons why the problem is nontrivial.
The first is the well-known existence of
{\em correlated\/} states, whose density operators
cannot be factorized; in the case of a pure state,
these states are also called {\em entangled\/} \cite{Peres:95}.
We shall consider such states further in Section 5.
The second, much less widely recognized reason is
that there is but one imaginary unit for all the spins,
so that in the tensor product of their geometric algebras
the unit pseudo-scalars $8\VEC{I}_\LAB{x}^n\VEC{I}_\LAB{y}^n\VEC{I}_\LAB{z}^n$ must be identified
by taking an appropriate quotient \cite{SomCorHav:98}.
This is a form of implicit correlation which is
always present even in otherwise factorizable states.
Further discussion of this issue may be found
in Refs.\ \cite{DorLasGul:93,DoLaGuSoCh:96}.
\vspace*{0.20in}
\section{Pseudo-pure state preparation and scaling}
Liquid state NMR must be done at temperatures
far above the differences between the spin
Hamiltonian's energy levels (eigenvalues).
The ensemble's spin state thus represents a
compromise between the constant force of the applied
magnetic field and the forces of the random fields induced
by the thermal motions of spins in other molecules.
Thus pure states are not available,
so that the underlying ensemble is not
uniquely determined by its density operator.
This would seem to make NMR useless as a means
of performing deterministic computations,
but in fact a class of mixed states has been found
for which a state vector is (up to an overall phase)
canonically associated with the density operator.
This section is devoted to describing the properties
and preparation of such {\em pseudo-pure states\/},
with emphasis on the computationally important issue
of how they scale with the number of spins $N$.
According to the principles of quantum
statistical mechanics \cite{Blum:96},
the density operator ${\VEC{E}_{-}B\rho}_\LAB{eq}$ for an ensemble
of $N$-spin molecules at thermal equilibrium is
given by the Boltzmann operator determined by their
common Hamiltonian, $\exp(-\VEC{H}/k_\LAB{B}T)$,
divided by the corresponding partition function
$Z_\LAB{eq} =$ \linebreak
${\rm tr}(\exp(-\VEC{H}/k_\LAB{B}T))$
(where $k_\LAB{B}$ is Boltzmann's constant).
The Hamiltonian $\VEC{H}$ is well-approximated
by its dominant Zeeman term $\VEC{H}_\LAB{Z}
= -\omega_0^1\VEC{I}_\LAB{z}^1-\cdots-\omega_0^N\VEC{I}_\LAB{z}^N$.
Given the gyromagnetic ratios of nuclear spins
and the strongest available magnetic fields, we have
$\|\VEC H_\LAB{Z}\| / (k_\LAB{B}T) \sim 10^{-5}$
at the temperatures needed for liquid-state NMR,
so that a linear approximation is quite accurate:
\begin{equation}
{\VEC{E}_{-}B\rho}_\LAB{eq} ~\approx~ \frac{1 - \VEC H_\LAB{Z} / k_\LAB{B}T}
{{\rm tr}( 1 - \VEC H_\LAB{Z} / k_\LAB{B}T )} ~=~
\frac{1 - \VEC H_\LAB{Z} / k_\LAB{B}T}{2^N}
\end{equation}
In homonuclear (i.e.\ single spin isotope) systems,
one can assume that $\omega_0^n \equiv \hbar B_0 (1 - \sigma^n)
\gamma^n \approx \hbar B_0 \gamma$ is constant for all $n$.
Since the amplitude of an NMR signal is also
proportional to imprecisely known factors
determined by the spectrometer setup,
$\omega_0^n/(k_\LAB{B}T)$ is usually set to
unity when analyzing a homonuclear experiment
(or to the ratios of each $\gamma^n$
with $\FUN{min}_m(\gamma^m)$ otherwise).
The partition-function factor of $2^{-N}$ is likewise
constant for any given system, but because
of our interest in scaling we shall always
include it explicitly in this section.
It is important to observe that,
because the angular momentum components observed
by NMR have no scalar part (i.e.\ are traceless),
the scalar part (identity component) of the density
operator $2^{-N}$ does not contribute to the signal.
It also does not evolve under unitary transformations,
and hence NMR spectroscopists
usually forget about it altogether ---
even though it comprises the vast majority
of the norm of the density operator.
In these terms, the equilibrium density operator
of a two-spin system, and its matrix representation
in the usual computational basis, is
\renewcommand{\arraystretch}{1.0}
\begin{equation} \begin{array}{rcl}
&& \hat{{\VEC{E}_{-}B\rho}}_\LAB{eq} ~=~ \FRAC{1}{4} (\VEC{I}_\LAB{z}^1 + \VEC{I}_\LAB{z}^2)
~=~ \FRAC{1}{4} (\KET{00}\BRA{00} - \KET{11}\BRA{11})
\vspace*{3pt} \\ &~\leftrightarrow~&
\FRAC{1}{4}\, \MAT{Diag}( 1, 0, 0, -1 )
~\equiv~ {\displaystyle\frac{1}{4}}
\left[ \begin{array}{rrrr}
1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\ 0 & ~~0 & ~~0 & -1
\end{array} \right] ~,
\end{array} \end{equation}
\renewcommand{\arraystretch}{1.3}
where the ``hat'' on ${\VEC{E}_{-}B\rho}_\LAB{eq}$
signifies its traceless part.
In contrast, the density operator of two spins in their
pseudo-pure ground state (assuming $\gamma > 0$ as usual) is
\begin{equation} \begin{array}{rcl}
\hat{{\VEC{E}_{-}B\rho}}_{00} &\equiv& \pm\FRAC{1}{6}
(\VEC{I}_\LAB{z}^1 + \VEC{I}_\LAB{z}^2 + 2 \VEC{I}_\LAB{z}^1 \VEC{I}_\LAB{z}^2) ~=~
\pm\FRAC{1}{3} \left( \VEC{E}_{+}^1 \VEC{E}_{+}^2 - \FRAC{1}{4} \right)
\\ &=&
\pm\FRAC{1}{3} \left( \KET{00}\BRA{00} - \FRAC{1}{4} \right)
~\leftrightarrow~ \pm\FRAC{1}{12}\, \MAT{Diag}( 3, -1, -1, -1 ) ~.
\end{array} \end{equation}
The overall sign depends on whether we have a
population excess or deficit in the ground state;
for consistency, we shall generally assume the former.
Observe that a unitary transformation of the density
operator induces a transformation of the corresponding
state vector just as it does for true pure states, since
\begin{equation} \label{eq:pp_tfm}
\VEC{U} \hat{{\VEC{E}_{-}B\rho}}_{00}\, \tilde\VEC{U} ~=~
\FRAC{1}{3} \left( \left( \VEC{U} \KET{00} \right)
\left( \VEC{U} \KET{00} \right)^{\sim} - \FRAC{1}{4} \right) ~.
\end{equation}
Similarly, because the NMR observables $\VEC A = \VEC{I}_\LAB{x}^n, \VEC{I}_\LAB{y}^n$
are traceless, the ensemble-average expectation value relative
to a pseudo-pure density operator yields the ordinary
expectation value versus the corresponding state vector:
\begin{equation} \label{eq:pp_obs}
{\rm tr}( \VEC A\, \hat{{\VEC{E}_{-}B\rho}}_{00} ) ~=~ \FRAC{1}{3} \left( {\rm tr}
( \VEC{A} \KET{00}\BRA{00} ) - \FRAC{1}{4} {\rm tr}( \VEC{A} ) \right)
~=~ \FRAC{1}{3} \BRA{00} \VEC{A} \KET{00}
\end{equation}
The general form of a pseudo-pure density operator is
\begin{equation} \label{eq:ppform}
\hat{{\VEC{E}_{-}B\rho}}_\LAB{\psi} ~=~ \FRAC{N/2}{\rule{0pt}{6pt}2^N-1}
\left( \KET{\psi}\BRA{\psi} - 2^{-N} \right) ~,
\end{equation}
where $\KET{\psi}$ is a normalized $N$-spin state vector,
and the prefactor has been chosen so as to keep
the maximum eigenvalue $\| \hat{{\VEC{E}_{-}B\rho}}_{\psi} \|_2$ equal
to that of the $N$-spin equilibrium density operator.
Even though we have defined them
to have the same maximum eigenvalue,
for $N > 1$ the remaining eigenvalues of
$\hat{{\VEC{E}_{-}B\rho}}_\LAB{eq}$ and $\hat{{\VEC{E}_{-}B\rho}}_{\psi}$ differ,
and hence there is no unitary
transformation taking one to the other.
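Both statements are easy to confirm numerically for small $N$; a minimal
sketch (using the ground state for $\KET{\psi}$, although any state
vector would do) is:
\begin{verbatim}
import numpy as np

def rho_pseudo_pure(N):
    dim = 2**N
    psi = np.zeros(dim); psi[0] = 1.0
    return (N/2)/(dim - 1)*(np.outer(psi, psi) - np.eye(dim)/dim)

def rho_equilibrium(N):
    Iz, out = np.diag([0.5, -0.5]), np.zeros((2**N, 2**N))
    for n in range(N):
        term = np.array([[1.0]])
        for k in range(N):
            term = np.kron(term, Iz if k == n else np.eye(2))
        out += term
    return out/2**N          # traceless part of the equilibrium state

for N in range(1, 6):
    pp = np.sort(np.linalg.eigvalsh(rho_pseudo_pure(N)))
    eq = np.sort(np.linalg.eigvalsh(rho_equilibrium(N)))
    assert np.isclose(pp[-1], eq[-1]) and np.isclose(pp[-1], N/2**(N+1))
    assert N == 1 or not np.allclose(pp, eq)   # spectra differ for N > 1
print("largest eigenvalues match, full spectra do not")
\end{verbatim}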
There are nevertheless a number
of nonunitary processes by which
one can prepare pseudo-pure states.
The most direct is to generate a spatially
varying distribution of states across the sample,
such that the ensemble average is pseudo-pure.
This can be done by using a {\em field gradient\/}
along the $\LAB{z}$-axis to create a
position-dependent phase shift whose average is zero,
thereby in effect setting the transverse ($\LAB{xy}$)
components of the density operator to zero.\footnote{
In the homonuclear case, the zero-quantum coherences
are not rapidly dephased by a $\LAB{z}$-gradient,
so a slightly more complicated procedure is necessary. }
For example, it is readily shown that the sequence
\begin{equation}
[\FRAC{\pi}{4}(\VEC{I}_\LAB{x}^1+\VEC{I}_\LAB{x}^2)] \rightarrow [\pi\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2]
\rightarrow [-\FRAC{\pi}{6}(\VEC{I}_\LAB{y}^1+\VEC{I}_\LAB{y}^2)]
\end{equation}
applied to the two-spin equilibrium state $\hat{{\VEC{E}_{-}B\rho}}_\LAB{eq}$ yields
\begin{equation}
2^{-\frac52} \left( \sqrt3\, \left( \VEC{E}_{+}^1\VEC{E}_{+}^2 - \FRAC14 -
\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^2 \right) - \VEC{I}_\LAB{x}^1\VEC{E}_{-}^2 - \VEC{E}_{-}^1\VEC{I}_\LAB{x}^2 \right) ~,
\end{equation}
which is reduced by a $\LAB{z}$-gradient
to $(3/32)^{\frac12}(\VEC{E}_{+}^1\VEC{E}_{+}^2 - 1/4)$.
Further RF and gradient pulse sequences
which convert the equilibrium state of
two and three spin systems to pseudo-pure states
may be found in Ref.~\cite{CorPriHav:98}.
An alternative proposed by E.~Knill {\em et al.\/}
\cite{KniChuLaf:98} is to ``time-average''
the results of several separate experiments.
In the simple case of a two spin system,
the average of the three states
\begin{equation} \begin{array}{rcl}
\hat{{\VEC{E}_{-}B\rho}}_{123} ~~\equiv~& \FRAC{1}{4} (\VEC{I}_\LAB{z}^1 + \VEC{I}_\LAB{z}^2)
&\leftrightarrow~~ \FRAC{1}{4}\, \MAT{Diag}( 1, 0, 0, -1 ) \\
\hat{{\VEC{E}_{-}B\rho}}_{231} ~~\equiv~& \FRAC{1}{4} (\VEC{I}_\LAB{z}^1 + 2 \VEC{I}_\LAB{z}^1 \VEC{I}_\LAB{z}^2)
&\leftrightarrow~~ \FRAC{1}{4}\, \MAT{Diag}( 1, 0, -1, 0 ) \\
\hat{{\VEC{E}_{-}B\rho}}_{312} ~~\equiv~& \FRAC{1}{4} (2 \VEC{I}_\LAB{z}^1 \VEC{I}_\LAB{z}^2 + \VEC{I}_\LAB{z}^2)
&\leftrightarrow~~ \FRAC{1}{4}\, \MAT{Diag}( 1, -1, 0, 0 )
\end{array} \end{equation}
is the pseudo-pure state
\begin{equation} \begin{array}{rcl} \label{eq:cyclic_avg}
\FRAC{1}{3} (\hat{{\VEC{E}_{-}B\rho}}_{123} + \hat{{\VEC{E}_{-}B\rho}}_{231} + \hat{{\VEC{E}_{-}B\rho}}_{312})
&=& \FRAC{1}{12} (2 \VEC{I}_\LAB{z}^1 + 2 \VEC{I}_\LAB{z}^2 + 4 \VEC{I}_\LAB{z}^1 \VEC{I}_\LAB{z}^2)
\\ &~\leftrightarrow~&
\FRAC{1}{12}\, \MAT{Diag}( 3, -1, -1, -1 ) ~.
\end{array} \end{equation}
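Checking this average numerically is a one-line exercise; a sketch is:
\begin{verbatim}
import numpy as np

# population (diagonal) parts of the three cyclically permuted states
d123 = np.diag([1, 0, 0, -1])/4.0
d231 = np.diag([1, 0, -1, 0])/4.0
d312 = np.diag([1, -1, 0, 0])/4.0

ket00 = np.zeros(4); ket00[0] = 1.0
pseudo = (np.outer(ket00, ket00) - np.eye(4)/4)/3   # (|00><00| - 1/4)/3

assert np.allclose((d123 + d231 + d312)/3, pseudo)
print("temporal average equals the two-spin pseudo-pure state")
\end{verbatim}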
More generally, one can obtain the same
results that one would get on a pseudo-pure
state by averaging the results of the experiments
over all $2^N-1$ cyclic permutations of the nonground
state populations of the equilibrium density operator.
Although this naive approach is not efficient,
Knill {\em et al.\/} have shown that one can average over
smaller groups in time $O(N^3)$ with much the same effect.
A fundamentally different approach, first proposed
by Stoll, Vega \& Vaughan \cite{StoVegVau:77}
and subsequently adapted to NMR computing by
Gershenfeld \& Chuang \cite{GershChuan:97},
involves working with subpopulations of molecules
distinguished by the states of additional {\em ancilla\/} spins.
Gershenfeld and Chuang \cite{ChGeKuLe:98} have given an
example of a two-spin {\em conditional pseudo-pure state\/}
(as we call it), which is obtained by row/column
permutation of the diagonal equilibrium density matrix
$\MAT{Diag}( 3, 1, 1, -1, 1, -1, -1, -3 ) / 16$
of a three-spin system including one ancilla, namely
\begin{equation} \begin{array}{rcl} \label{eq:condpure3a}
&&\FRAC{1}{16}\, \MAT{Diag}( 3, -1, -1, -1, -3, 1, 1, 1 )
\\ &~\leftrightarrow~&
\FRAC{1}{16} (2\VEC{I}_\LAB{z}^1 (2\VEC{I}_\LAB{z}^2 + 2\VEC{I}_\LAB{z}^3 + 4\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3))
\\ &=&
\FRAC{1}{4} (\VEC{E}_{+}^1 - \VEC{E}_{-}^1) (\VEC{E}_{+}^2\VEC{E}_{+}^3 - \FRAC{1}{4}) ~.
\end{array} \end{equation}
The last form makes it clear that in
the subpopulation with the first spin ``up'',
which is labeled by $\VEC{E}_{+}^1$,
and in the subpopulation with it ``down'',
which is labeled by $\VEC{E}_{-}^1$,
spins $2$ and $3$ are in the
pseudo-pure state $\VEC{E}_{+}^2\VEC{E}_{+}^3 - 1/4$.
Since the spectrum of spins $2$ and $3$ is
antiphase with respect to the ancilla spin $1$,
one can select the subpopulations just by
keeping only either positive or negative peaks.
Although the situation is considerably
more complicated with more spins,
Gershenfeld and Chuang have shown that conditional
pure states can be obtained (with some loss of signal)
using as few as $O(\log(N))$ ancillae.
An alternative to conditional pure states,
which we call {\em relative pseudo-pure states\/},
can be obtained via the partial trace operation (in NMR,
decoupling \cite{ErnBodWok:87,Freeman:98,Munowitz:88,Slichter:90}),
rather than peak selection as above.
For example, a two-spin relative pseudo-pure state is given
by the partial trace over the ancilla spins 1 \& 2 in
\begin{equation} \begin{array}{rcl} \label{eq:relpure4}
&& \FRAC{1}{32}\, \MAT{Diag}( 4, 2, 2, 0, 2, 0,
-2, 0, 0, -2, 0, 2, 0, -2, -2, -4 )
\\ &\leftrightarrow& \FRAC{1}{16}
\left( \VEC{E}_{+}^1\VEC{E}_{+}^2 (\VEC{E}_{+}^3 + \VEC{E}_{+}^4) +
\VEC{E}_{+}^1\VEC{E}_{-}^2 (\VEC{E}_{+}^3\VEC{E}_{+}^4 - \VEC{E}_{-}^3\VEC{E}_{+}^4)
\right. \\ && ~+ \left.
\VEC{E}_{-}^1\VEC{E}_{+}^2 (\VEC{E}_{-}^3\VEC{E}_{-}^4 - \VEC{E}_{+}^3\VEC{E}_{-}^4) -
\VEC{E}_{-}^1\VEC{E}_{-}^2 (\VEC{E}_{-}^3 + \VEC{E}_{-}^4) \right) ~,
\end{array} \end{equation}
which is again a permutation of the
diagonal elements of $\hat{{\VEC{E}_{-}B\rho}}_\LAB{eq}$.
This can be seen by adding up the $4\times 4$ blocks of
the matrix, obtaining $\MAT{Diag}( 6, -2, -2, -2 ) / 32$.
Alternatively, since the partial trace in the product
operator formalism corresponds to simply eliminating
those terms depending on either spins $1$ or $2$ and
multiplying the remaining terms by $4$ \cite{SomCorHav:98},
we need only add up the multipliers of
$\VEC{E}_{+}^1\VEC{E}_{+}^2, \ldots, \VEC{E}_{-}^1\VEC{E}_{-}^2$, which yields
\begin{equation} \begin{array}{rcl}
&& \FRAC{1}{16} ((1 + \VEC{E}_{+}^3 - \VEC{E}_{-}^3) (1 + \VEC{E}_{+}^4 - \VEC{E}_{-}^4) - 1)
\\ &=&
\FRAC{1}{4} \left( \VEC{E}_{+}^3 \VEC{E}_{+}^4 - \FRAC{1}{4} \right)
~\leftrightarrow~
\FRAC{1}{32}\,\MAT{Diag}( 6, -2, -2, -2 ) ~.
\end{array} \end{equation}
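The partial trace can also be verified directly from the diagonal given
in Eq.~(\ref{eq:relpure4}); a minimal numerical sketch is:
\begin{verbatim}
import numpy as np

diag16 = np.array([4, 2, 2, 0, 2, 0, -2, 0,
                   0, -2, 0, 2, 0, -2, -2, -4])/32.0

# trace out the ancilla spins 1 and 2 (the two most significant bits)
pops34 = diag16.reshape(4, 4).sum(axis=0)
assert np.allclose(pops34, np.array([6, -2, -2, -2])/32.0)

ket00 = np.zeros(4); ket00[0] = 1.0
assert np.allclose(np.diag(pops34),
                   0.25*(np.outer(ket00, ket00) - np.eye(4)/4))
print("spins 3 and 4 are left in a pseudo-pure state")
\end{verbatim}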
We now consider briefly how
the SNR (signal-to-noise ratio)
of these methods of creating pseudo-pure
states scales with the number of spins $N$.
It has been argued that since the equilibrium
population of the ground state falls off
exponentially with the number of spins,
and all these methods are aimed in some fashion at
isolating the signal from the ground state population,
the SNR of all these methods must likewise
decline exponentially with $N$ \cite{Warren:97}.
Although this argument carries considerable weight,
we shall see that the number and variety of the available
methods renders the actual situation rather more complex.
The standard to which the signal strength must be compared
is that of a single spin in its equilibrium state, namely
\begin{equation}
\hat{{\VEC{E}_{-}B\rho}}_\LAB{eq} ~=~ \HALF \VEC{I}_\LAB{z}^1 ~=~ \FRAC{1}{4}
(\KET{0}\BRA{0} - \KET{1}\BRA{1}) ~.
\end{equation}
The maximum eigenvalue $\| \hat{{\VEC{E}_{-}B\rho}}_\LAB{eq} \|_2 = 1/4$
is what we will use as the standard signal strength for
spins of like gyromagnetic ratio (as assumed throughout).
We shall therefore calculate the SNR of a pseudo-pure
state by transforming it to the corresponding ground state
$\KET{0\cdots 0}\BRA{0\cdots 0} - 2^{-N}$ (if need be),
taking the partial trace over all but one of the spins,
and multiplying the maximum eigenvalue of the result by $4$.
\begin{figure}
\caption{
Negative base ten logarithm of the polarization
$P$ as a function of the logarithm of the ratio
of the energy level spacing to $k_\LAB{B}T$.}
\label{fig:polar}
\end{figure}
The maximum eigenvalue of the partial trace over all
but one of the spins in a pseudo-pure state obtained
by cyclic averaging, as in Eq.~(\ref{eq:cyclic_avg}),
is easily seen to be $N/(4(2^N-1))$, which decays
almost exponentially with the number of spins $N$.
There is an additional factor of $\sqrt{2^N-1}$
which comes from averaging over $2^N-1$ experiments,
and gives a net SNR of $N/(4\sqrt{2^N-1})$ for the average.
The exponential time requirements of cyclic averaging
will nonetheless force one to average over smaller groups,
with consequently smaller improvements in the SNR.
In any case, the SNR declines superpolynomially with $N$.
Figure \ref{fig:polar} shows how the signal strength changes as
a function of the ratio of the energy level spacing to $k_\LAB{B}T$,
relative to the signal in a perfectly polarized sample,
when the pseudo-pure state is obtained by cyclic averaging,
for varying numbers of spins.
Because of the many possible variations on the
ideas and the difficulty of analyzing all of them,
it is not practical to present simple formulae for the
SNR of the other methods of preparing pseudo-pure states.
Further complexity is added to the situation by
the ability to combine the various methods above.
A number of such combinations are given in Knill
{\em et al.\/} \cite{KniChuLaf:98},
along with bounds on the SNR for each.
In our laboratory we are developing a new
method, again based on field gradients,
which enables the sample to be divided into
discrete volumes and separate unitary
transformations to be applied to each.
In principle, this permits multiple
experiments to be performed, and their
results added, in a single experiment,
thereby performing an average over
multiple experiments in constant time.
This new method could also be used in a
variety of combinations with existing methods.
It is nevertheless encouraging to observe that
the SNR of the two-spin conditional and relative
pseudo-pure states given in Eqs.~(\ref{eq:condpure3a})
and (\ref{eq:relpure4}) is $1/2$ in both cases;
this is exactly the decline in the ground state
population of a two-spin system compared to a one-spin.
In Eq.~(\ref{eq:condpure3a}), we attain this ``theoretical limit''
because the expansion of the density operator consists of
a single term conditioned on the state of a single ancilla;
it is not possible to do as well with more spins.
In Eq.~(\ref{eq:relpure4}), however, the limit is attained
because such permutations are able to
concentrate polarization in a subset of the spins.
We have found this makes it possible to derive
a two-spin pseudo-pure state from a six-spin
equilibrium state with {\em no\/} loss of SNR,
whereas a simplistic ground-state population
argument implies we should lose at least $1/2$.
This may be seen by adding up the rows in
the rearrangement of $\hat{{\VEC{E}_{-}B\rho}}_\LAB{eq}$
shown in Eq.~(\ref{eq:relpure6}) below,
which corresponds to taking the traces of the
four $16\times 16$ blocks along the diagonal,
and yields $\MAT{Diag}( 48, -16, -16, -16 )$.
\renewcommand{\arraystretch}{1.1}
{
\begin{equation} \label{eq:relpure6}
\MAT{Diag} \begin{array}[t]{rrrrrrrrrrrrrrrrr} (
~6, & 4, & 4, & 4, & 4, & 4, & 4, & 2, &
2, & 2, & 2, & 2, & 2, & 2, & 2, & 2, & \\
0, & 0, & 0, & 0, & 0, & 0, & 0, & 0, &
0, & 0, & -2, & -2, & -2, & -2, & -4, & -4, & \\
0, & 0, & 0, & 0, & 0, & 0, & 0, & 0, &
0, & 0, & -2, & -2, & -2, & -2, & -4, & -4, & \\
~2, & ~2, & ~2, & ~2, & ~2, & ~2, & -2, & -2, &
-2, & -2, & -2, & -2, & -2, & -4, & -4, & -6\, &
~) \end{array}
\end{equation}
}
\renewcommand{\arraystretch}{1.3}
The partial trace over one of the two remaining
spins then gives $\MAT{Diag}( 32, -32 )$,
which when divided by $128$ (twice the partition
function) yields $\FRAC{1}{2} \VEC{I}_\LAB{z}$ as claimed.
A general algorithm has recently been given by
Schulman \& Vazirani \cite{SchulVazir:98} whereby one
can ``distill'' an $M$-spin relative pure state from
an ensemble of molecules each containing $N$ spins.
Starting from a uniform polarization of $P$,
this algorithm yields $M$ perfectly
polarized spins provided that $M/N \sim O(P^2)$,
a result anticipated by earlier work in NMR which showed
that the polarization of a single spin can be enhanced
by at most a factor $O(\sqrt{N})$ \cite{Sorensen:89}.
Unfortunately, given that $P \approx 10^{-5}$ for
protons at equilibrium in a standard 500 MHz spectrometer,
a molecule with of order $10^{10}$ spins would be needed
to prepare a perfectly polarized state on a single spin
--- which is in a pseudo-pure state at equilibrium!
The importance of Schulman and Vazirani's algorithm
thus lies in the fact that it shows that there
is sufficient order in a typical NMR sample of
$10^{20}$ spins at room temperatures to make
it at least theoretically possible to perform
quantum computations on of order $10^{10}$ spins.
One might hope that a more tractable algorithm,
in terms of the absolute resources required,
could be found by requiring only that it produce
a {\em pseudo\/}-pure state with bounded SNR
from the high-temperature equilibrium state.
Since in the high-temperature approximation
the largest element of the density matrix decays
exponentially with the number of spins, it is clear that
any such an algorithm must go beyond that approximation.
Even so, given that Schulman and Vazirani's algorithm is
currently far beyond our ability to implement physically,
it seems unlikely that a practical breakthrough
will be obtained by purely algorithmic means.
Fortunately, physical methods of ``refrigerating'' the spins
are available, for example optical pumping \cite{NSRATP:96}.
These are presently confined to very simple systems,
but such a source of polarization could in principle be
used in conjunction with polarization transfer techniques
to produce (pseudo$\mbox{-}$)pure states on large numbers of spins.
Even at the low polarizations we can conveniently access,
however, NMR has proved itself to be a powerful means of
exploring quantum dynamics in Hilbert spaces of substantial size.
To illustrate this, we will now present the results of NMR
experiments which constitute a macroscopic analogue of a
quantum mechanical test for quantum correlations that are
inconsistent with the existence of ``hidden variables''.
\vspace*{0.20in}
\section{Macroscopic analogues of quantum correlations}
Given the success of the purely classical Bloch equations
(and their multispin extensions) in describing liquid-state NMR
phenomena \cite{ErnBodWok:87,Freeman:98,Munowitz:88,Slichter:90},
it is perhaps surprising that experiments can be
performed whose mathematical description, at least,
is formally identical to that of experiments which are
believed to demonstrate uniquely ``quantum'' phenomena.
For example, Seth Lloyd has recently proposed that
the nonclassical correlations in (Mermin's version of)
the GHZ state can be validated using NMR \cite{Lloyd:98}.
His approach involves using a fourth ``observer'' spin
to perform a nondemolition measurement on the three
spins in a GHZ state (or a pseudo-pure analogue thereof).
Here we shall describe experiments which demonstrate
another, rather different way in which we can
``emulate'' quantum phenomena with liquid-state NMR.
In reading this account, it should be kept in mind
that although pseudo-pure states do provide a faithful
representation of the transformations of pure states
within the highly mixed states that are available
in liquid-state NMR, their physical interpretation
differs significantly from that of true pure states.
Hence, as discussed in greater detail at the end of
this section (and also in Ref.\ \cite{BraunsteinEtAl:98}),
our results should {\em not\/} be taken to resolve any
foundational issues in quantum mechanics \cite{Peres:95}.
They demonstrate, nonetheless, a degree of coherent
control sufficient to enable such issues to be addressed,
{\em if\/} these same transformations and measurements
were applied to a true pure state.
The approach taken here was inspired by an educational
paper published a few years ago, in which T.~F.~Jordan
has shown that the contradictions with hidden variables
implied by violations of Bell's inequalities as well as
by the GHZ and Hardy's paradox can be derived entirely by
consideration of the expectation values of product operators,
rather than by observations on single spins \cite{Jordan:94b}.
This shows that, in principle, it is not necessary to use
nondemolition measurements with an observer spin in order to
perform experiments which demonstrate these contradictions by NMR;
it can be done directly from observations on ensembles of the spins
of interest, providing at least they are in a (pseudo-)pure state.
In a companion paper to Jordan's, N.~D.~Mermin points out that in
real-life experiments it is nevertheless not possible to perform
the measurements, either of single spins or (by implication)
of expectation values, with sufficient precision to establish the
``perfect'' (total) correlations on which ``EPR'' arguments against
the existence of hidden variables are based \cite{Mermin:94}.
In that same paper, however, Mermin shows that Hardy's paradox
is a special case of the Clauser-Horne form of Bell's inequality.
This enables Hardy's paradox \cite{Branning:97,Hardy:93} to be
extended to an open set in the Hilbert space of only two spins,
to which sufficiently precise experimental data can confine us.
In the following, we present the results of NMR experiments
which implement the specific example of Hardy's paradox
presented by Mermin in an Appendix to his paper \cite{Mermin:94}.
Let us map the ``red'' and ``green'' eigenstates $\KET{\LAB{1G}}$
and $\KET{\LAB{1R}}$ of Mermin's measurement $1$ to
the spin states $\KET{0}$ and $\KET{1}$, respectively.
It will be clearer here to relabel this measurement as ``$\LAB{A}$'',
and to use $\KET{\alpha_\LAB{G}} \equiv \KET{0}$ and
$\KET{\alpha_\LAB{R}} \equiv \KET{1}$ as synonyms for its eigenbasis.
Correspondingly, we will relabel Mermin's measurement
$2$ as ``$\LAB{B}$'', and denote its eigenbasis by
\begin{equation}
\KET{\beta_\LAB{G}} ~\equiv~ \SQRT{\FRAC{3}{5}} \KET{0}
- \SQRT{\FRAC{2}{5}} \KET{1} \quad\mbox{and}\quad
\KET{\beta_\LAB{R}} ~\equiv~ \SQRT{\FRAC{2}{5}} \KET{0}
+ \SQRT{\FRAC{3}{5}} \KET{1} ~.
\end{equation}
Then the state which Mermin has shown leads to a
near-maximum violation of Bell's inequality while
also providing an example of Hardy's paradox is
\begin{equation} \label{eq:sigh}
\KET{\psi} ~\equiv~ \HALF \KET{00} + \SQRT{\FRAC{3}{8}}
\KET{01} + \SQRT{\FRAC{3}{8}} \KET{10} ~.
\end{equation}
To translate this into the context of NMR,
we first note that the observable whose expectation
value is the probability that measurement $\LAB{A}$
yields the state $\KET{\alpha_\LAB{G}}$ is
given by $\VEC A \equiv \VEC{E}_{+} = \HALF(1+2\VEC{I}_\LAB{z})$
(we drop the usual spin index because the measurements
$\LAB{A}$ \& $\LAB{B}$ are assumed the same for both spins).
Similarly the observable which gives the probability that
measurement $\LAB{B}$ yields $\KET{\beta_\LAB{G}}$ is
\begin{equation} \begin{array}{rcl}
\VEC B ~\equiv~ \KET{\beta_\LAB{G}}\BRA{\beta_\LAB{G}}
&=& \FRAC{3}{5} \KET{0}\BRA{0} + \FRAC{2}{5} \KET{1}\BRA{1} -
\SQRT{\FRAC{6}{25}} (\KET{0}\BRA{1} + \KET{1}\BRA{0}) \\
&=& \HALF + \FRAC{1}{5} \VEC{I}_\LAB{z} - \SQRT{\FRAC{24}{25}} \VEC{I}_\LAB{x} ~.
\end{array} \end{equation}
In addition, the density operator
(including the identity) of Mermin's state is
\begin{equation} \begin{array}{rcl} \label{eq:Sigh}
\VEC{E}_{-}B\Psi ~\equiv~ \KET{\psi}\BRA{\psi}
&=& \FRAC{1}{4} + \FRAC{1}{8} \left(\VEC{I}_\LAB{z}^1 + \VEC{I}_\LAB{z}^2\right)
- \HALF \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2 \\
&& +\, \SQRT{\FRAC{3}{32}} \left(\VEC{I}_\LAB{x}^1(1 + 2\VEC{I}_\LAB{z}^2) +
(1 + 2\VEC{I}_\LAB{z}^1)\VEC{I}_\LAB{x}^2\right) \\
&& +\, \FRAC{3}{4}\left( \VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^2 + \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2 \right) ~.
\end{array} \end{equation}
The state $\KET{01}$ is obviously related
to $\KET{00}$ by a rotation of spin $2$,
while $\KET{00}$ can likewise be rotated to
$\KET{10}$, but without affecting $\KET{01}$,
by a {\em conditional\/} rotation of spin $1$.
We shall denote these by
\begin{equation}
\VEC P(\phi) ~\equiv~ e^{-\imath\phi\VEC{I}_\LAB{y}^2}
\quad\mbox{and}\quad
\VEC Q(\theta) ~\equiv~ e^{-\imath\theta\VEC{I}_\LAB{y}^1\VEC{E}_{+}^2} ~,
\end{equation}
respectively.
They act consecutively on the ground state to yield
\begin{equation}
\BRA{00} \tilde{\VEC P}(\phi) \tilde{\VEC Q}(\theta) ~=~
\left[ \,\cos(\theta/2) \cos(\phi/2),\, \sin(\theta/2)
\cos(\phi/2),\, \sin(\phi/2),\, 0\, \right] ~,
\end{equation}
which is easily verified to equal
$\BRA{\psi} = [ 1/2, \sqrt{3/8}, \sqrt{3/8},\, 0 ]$ when
\begin{equation}
\phi ~=~ 2 \arctan( \SQRT{3/5} )
\quad\mbox{and}\quad
\theta ~=~ 2 \arctan( \SQRT{3/2} ) ~.
\end{equation}
Using the product operator techniques presented in section 3,
these transformations are readily implemented by NMR pulse sequences.
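It is straightforward to confirm that these two rotations do prepare the
desired state; a minimal numerical sketch (independent of any particular
pulse implementation) is:
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
Iy = np.array([[0, -0.5j], [0.5j, 0]])
Iz = np.diag([0.5, -0.5]).astype(complex)
Ep = 0.5*(I2 + 2*Iz)
kron = np.kron

def U(theta, G):
    vals, vecs = np.linalg.eigh(G)
    return vecs @ np.diag(np.exp(-1j*theta*vals)) @ vecs.conj().T

phi   = 2*np.arctan(np.sqrt(3/5))
theta = 2*np.arctan(np.sqrt(3/2))
P = U(phi,   kron(I2, Iy))     # exp(-i phi Iy^2)
Q = U(theta, kron(Iy, Ep))     # exp(-i theta Iy^1 E+^2)

ket00 = np.array([1, 0, 0, 0], dtype=complex)
psi = Q @ (P @ ket00)

target = np.array([0.5, np.sqrt(3/8), np.sqrt(3/8), 0.0])
assert np.allclose(psi, target)
print("Q(theta) P(phi) |00> is Mermin's state")
\end{verbatim}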
The next thing to notice is that if we take expectation values
with the usual idempotents $\VEC{E}_{+}^1\VEC{E}_{+}^2, \ldots, \VEC{E}_{-}^1\VEC{E}_{-}^2$, we get
\begin{equation} \begin{array}{rcl}
\FRAC{1}{4} ~= & 4\left\langle \VEC{E}_{-}B\Psi \VEC{E}_{+}^1\VEC{E}_{+}^2 \right\rangle
& \equiv~ 4\left\langle \VEC{E}_{-}B\Psi \VEC A^1 \VEC A^2 \right\rangle
\\
\FRAC{3}{8} ~= & 4\left\langle \VEC{E}_{-}B\Psi \VEC{E}_{+}^1\VEC{E}_{-}^2 \right\rangle
& \equiv~ 4\left\langle \VEC{E}_{-}B\Psi \VEC A^1 (1 - \VEC A^2) \right\rangle
\\
\FRAC{3}{8} ~= & 4\left\langle \VEC{E}_{-}B\Psi \VEC{E}_{-}^1\VEC{E}_{+}^2 \right\rangle
& \equiv~ 4\left\langle \VEC{E}_{-}B\Psi (1 - \VEC A^1) \VEC A^2 \right\rangle
\\
0 ~= & 4\left\langle \VEC{E}_{-}B\Psi \VEC{E}_{-}^1\VEC{E}_{-}^2 \right\rangle
& \equiv~ 4\left\langle \VEC{E}_{-}B\Psi (1 - \VEC A^1) (1 - \VEC A^2)
\right\rangle ~.
\end{array} \end{equation}
These correspond to the diagonal of the
density matrix in the usual $\VEC{I}_\LAB{z}$ basis,
\begin{equation}
{\bf diag}(\VEC{E}_{-}B\Psi) ~=~
[ \FRAC{1}{4}, \FRAC{3}{8}, \FRAC{3}{8}, 0 ]
\qquad\mbox{($\LAB{A}$ on 1, $\LAB{A}$ on 2)} ~,
\end{equation}
which contains the probabilities of the four possible outcomes
of performing measurement $\LAB{A}$ on both spins (as shown).
The product operator form of $\VEC B$ immediately
makes clear that measurement $\LAB{B}$ is just
a measurement of the magnetization of the spin
along an axis inclined at an angle of $\zeta
\equiv \arctan(\sqrt{24}) = \pi - \theta$ to
the $\LAB z$-axis in the $\LAB{xz}$-plane.
Letting $\VEC R \equiv \exp(-\imath \zeta \VEC{I}_\LAB{y})$,
it follows that the probability that measurement $\LAB{B}$ on
spin $1$ yields ``$\LAB{G}$'' (i.e.~$\KET{\beta_\LAB{G}}$) is
\newcommand{\tildeR}
{{\rule[0pt]{0pt}{7pt}\smash{\tilde\VEC{R}}}}
\begin{equation}
4 \AVG{ \VEC{E}_{-}B\Psi \VEC B^1 } ~=~
4 \AVG{ \VEC{E}_{-}B\Psi \tildeR^1
\VEC A^1 \VEC{R}^1 } ~=~
4 \AVG{ \VEC{R}^1 \VEC{E}_{-}B\Psi \tildeR^1 \VEC A^1 } ~,
\end{equation}
with a similar expression for spin $2$.
More generally, the probabilities of the outcomes
of the other combinations of measurements are given
by the diagonals of the transformed density matrices:
\begin{equation} \begin{array}{rcl}
{\bf diag}\left( \VEC{R}^2 \VEC{E}_{-}B\Psi \tildeR^2 \right)
&=& [ 0, \FRAC{5}{8}, \FRAC{9}{40}, \FRAC{3}{20} ]
\qquad\qquad\mbox{($\LAB{A}$ on 1, $\LAB{B}$ on 2)} \\
{\bf diag}\left( \VEC{R}^1 \VEC{E}_{-}B\Psi \tildeR^1 \right)
&=& [ 0, \FRAC{9}{40}, \FRAC{5}{8}, \FRAC{3}{20} ]
\qquad\qquad\mbox{($\LAB{B}$ on 1, $\LAB{A}$ on 2)} \\
{\bf diag}\left( \VEC{R}^1\VEC{R}^2
\VEC{E}_{-}B\Psi \tildeR^2 \tildeR^1 \right)
&=& [ \FRAC{9}{100}, \FRAC{27}{200}, \FRAC{27}{200}, \FRAC{16}{25} ]
\qquad\mbox{($\LAB{B}$ on 1, $\LAB{B}$ on 2)}
\end{array} \end{equation}
For compactness, let us denote these probabilities by $\Psi_{kl}^{ij}$,
where $i,j \in \{\LAB{A},\LAB{B}\}$ are the measurements and
$k,l \in \{\LAB{G},\LAB{R}\}$ are the corresponding outcomes,
e.g.~$\Psi_\LAB{GR}^\LAB{AB} = 4\AVG{\VEC{E}_{-}B\Psi \VEC A^1(1 - \VEC B^2)}$.
We may translate Mermin's proof \cite{Mermin:94}
that these probabilities are incompatible
with hidden variables associated with the
individual spins into this context as follows:
First, since $\Psi_\LAB{GG}^\LAB{AB} = \Psi_\LAB{GG}^\LAB{BA} = 0$,
in any molecule wherein one of the spins is parallel
to the $z$-axis the other must be antiparallel to
the axis of measurement $\LAB{B}$ and vice versa.
Hence, since $\Psi_\LAB{GG}^\LAB{BB}$ is nonzero,
in some molecules ($9$\%, to be precise) both
spins must be antiparallel to the $z$-axis.
But this contradicts the fact that $\Psi_\LAB{RR}^\LAB{AA} = 0$.
More generally, Mermin has shown that
\begin{equation}
\Psi_\LAB{GG}^\LAB{BB} ~\le~ \Psi_\LAB{GG}^\LAB{AB}
+ \Psi_\LAB{RR}^\LAB{AA} + \Psi_\LAB{GG}^\LAB{BA}
\end{equation}
is an example of the Clauser-Horne form of Bell's inequality \cite{Peres:95}.
Hence to disprove the existence of such one-particle hidden variables it
would be sufficient to determine these probabilities to $\pm2$\% or so.
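All of the probabilities quoted above, and the size of the resulting
violation, follow directly from $\KET{\psi}$ and the rotation $\VEC{R}$;
the following sketch recomputes them (it concerns the ideal
quantum-mechanical predictions only, not the weak-measurement readout
used in the experiments described below):
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
Iy = np.array([[0, -0.5j], [0.5j, 0]])
kron = np.kron

def U(theta, G):
    vals, vecs = np.linalg.eigh(G)
    return vecs @ np.diag(np.exp(-1j*theta*vals)) @ vecs.conj().T

psi = np.array([0.5, np.sqrt(3/8), np.sqrt(3/8), 0.0], dtype=complex)
Psi = np.outer(psi, psi.conj())

zeta = np.arctan(np.sqrt(24.0))
R1, R2 = U(zeta, kron(Iy, I2)), U(zeta, kron(I2, Iy))

def probs(V):                  # diag of V Psi V~ = outcome probabilities
    return np.real(np.diag(V @ Psi @ V.conj().T))

pAA, pAB = probs(np.eye(4)), probs(R2)
pBA, pBB = probs(R1), probs(R2 @ R1)

assert np.allclose(pAA, [1/4, 3/8, 3/8, 0])
assert np.allclose(pAB, [0, 5/8, 9/40, 3/20])
assert np.allclose(pBA, [0, 9/40, 5/8, 3/20])
assert np.allclose(pBB, [9/100, 27/200, 27/200, 16/25])

# Clauser-Horne combination: the left side exceeds the right by 9/100
assert np.isclose(pBB[0] - (pAB[0] + pAA[3] + pBA[0]), 9/100)
print("ideal probabilities violate the inequality by 0.09")
\end{verbatim}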
\begin{figure}
\caption{
The pairs of NMR spectra obtained from steps 1--5 of the
experiment described in the text.}
\label{fig:hardy}
\end{figure}
\begin{table} \begin{center}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{|c|rrrr|c|} \hline
\multicolumn{6}{|c|}{\parbox[t]{3.625in}{\bf
\centerline{
Probabilities of Outcomes {\boldmath$\LAB{G}$}
\& {\boldmath$\LAB{R}$} for the Measurements
}\centerline{
{\boldmath$\LAB{A}$} \& {\boldmath$\LAB{B}$}
(Carbon, Proton) Demonstrating Hardy's Paradox,
}\centerline{
as Derived from the Chloroform NMR Spectra
in Fig.\ \ref{fig:hardy}}
}
} \\ \hline\hline
~Measurements~ & $\quad(\LAB{G},\LAB{G})$ & $\qquad(\LAB{G},\LAB{R})$ &
$\qquad(\LAB{R},\LAB{G})$ & $\qquad(\LAB{R},\LAB{R})\quad$ & ~Residuals~
\\ \hline
$(\LAB{A^C},\LAB{A^H})$ & $0.253$ & $0.380$ & $0.366$ & $0.001\quad$
& $0.008$ \\
$(\LAB{A^C},\LAB{B^H})$ & $0.029$ & $0.609$ & $0.217$ & $0.145\quad$
& $0.018$ \\
$(\LAB{B^C},\LAB{A^H})$ & $-0.002$ & $0.230$ & $0.614$ & $0.159\quad$
& $0.005$ \\
$(\LAB{B^C},\LAB{B^H})$ & $0.097$ & $0.125$ & $0.156$ & $0.622\quad$
& $0.021$ \\ \hline
\end{tabular}
\renewcommand{\arraystretch}{1.3}
\end{center} \end{table}
At this point we encounter a significant complication,
which is that the ``strong'' (von Neumann) measurements
assumed in their analyses by Jordan and
Mermin {\em cannot\/} be implemented by NMR;
we can only perform ``weak'' (nonperturbing) measurements
of the {\em probability differences\/} between states
connected by single spin flips \cite{CorPriHav:98}.
This is done by applying a magnetic field gradient
along the $\LAB{z}$-axis, which (as previously described)
dephases any transverse components in the density operator.
Thereafter, a pair of ``soft'' $\pi/2$ readout pulses,
each tuned to the frequency of just one of the two spins,
produces a pair of spectra each with two peaks whose
heights are proportional to the probability differences
between pairs of states connected by flips of that spin.
The factor relating the peak heights to the
corresponding differences in the probabilities
of the states can be determined from spectra
collected on the pseudo-pure ground state,
after which it is straightforward to convert
the differences into the corresponding
absolute probabilities by linear least squares,
subject to the constraint that their sum is unity.
We shall encounter field gradients again in the next section,
when we show how they can also be used to implement
precisely controlled decoherence models.
Thus the overall experiment consists of
collecting ten spectra, as follows:
\begin{enumerate}
\item Prepare the state $\VEC{E}_{-}B\Psi$, by first
preparing the pseudo-pure ground state $\KET{00}$
using one of the previously described methods,
and then transforming it by $\VEC Q(\theta) \VEC P(\phi)$.
\item Use a selective radio-frequency pulse to
apply the rotation $\VEC R$ to those spins on
which measurement $\LAB{B}$ is to be performed.
\item Use a $\LAB{z}$-gradient to dephase the transverse
components of the resulting density operator.
\item Apply a readout pulse to one of the
spins, and collect the corresponding spectrum;
repeat steps 1 -- 3 and then do the same for the other spin.
\item Repeat steps 1 -- 4 for each of the four combinations of
measurements $\LAB{AA}$, $\LAB{AB}$, $\LAB{BA}$ and $\LAB{BB}$
on the two spins.
\item Collect two additional amplitude calibration spectra by applying
soft readout pulses to each spin in the pseudo-pure ground state.
\end{enumerate}
These experiments were performed on a Bruker
400 MHz spectrometer using the two spin $\HALF$
nuclei in ${}^{13}\LAB{C}$-labeled chloroform.
The spectra obtained from steps 1 -- 5 of this
experiment are shown in Fig.~\ref{fig:hardy}.
The probabilities derived from these peak heights,
and the residual (square-root of the sum of squares of the
deviations of the data points from the corresponding fit)
associated with each, are shown in the table below.
It follows that Bell's inequality is violated by
\begin{equation} \begin{array}{rl}
& \Psi_\LAB{GG}^\LAB{AB} + \Psi_\LAB{RR}^\LAB{AA} +
\Psi_\LAB{GG}^\LAB{BA} - \Psi_\LAB{GG}^\LAB{BB} \\
=~ & 0.029 + 0.001 - 0.002 - 0.097 ~=~ -0.069 ~.
\end{array} \end{equation}
A rigorous error analysis is not possible,
because the dominant errors in NMR spectra
(e.g.~RF field inhomogeneity) are not statistical.
If we nevertheless take the mean RMS residual
(half the mean of the residuals shown in the table)
of $0.0065$ as an estimate of the errors and
assume they are independent between spectra,
the expected error in the sum of these four
numbers is only $0.013$, so that this violation
of Bell's inequality appears significant.
Nonetheless, as stressed in a recent preprint by Braunstein,
Caves, Jozsa, Linden, Popescu and Schack \cite{BraunsteinEtAl:98},
such experiments on weakly polarized pseudo-pure states
cannot actually disprove the existence of ``hidden variables''
associated with the spins of the individual molecules.
This is because the vast majority of a weakly polarized
density operator is contained in its identity component,
and there are many different ensembles of uncorrelated
spin states whose net density operator is the identity.
Hence the noise from the identity component dominates
the statistics of observations on the ensemble,
which are therefore consistent with microscopic
interpretations in which only uncorrelated
states are present with nonzero probability.
Indeed, if one were to pull the molecules
out the pseudo-pure sample used in the above
experiments one at a time, break them apart,
and perform the measurements $\LAB{A}$ and
$\LAB{B}$ with a Stern-Gerlach apparatus,
the frequencies of the four combinations of
outcomes would all be very close to $1/4$,
and would {\em not\/} violate Bell's inequality.
Thus, our apparent violation vanishes when the
whole ensemble is taken into consideration.
To see more precisely why the above experiments
fail to disprove the existence of hidden variables,
we first note that a pure state $\KET{\xi}\BRA{\xi}$ is
canonically associated with any given pseudo-pure density
operator ${\VEC{E}_{-}B\rho} = (1-\delta)/2^N + \delta\,\KET{\xi}\BRA{\xi}$,
which is distinguished mathematically by the fact
that $\KET{\xi}$ is the eigenvector corresponding
to its {\em sole\/} nondegenerate eigenvalue.
We further recall (see Eq.\ (\ref{eq:pp_tfm})) that the
traceless part of the pseudo-pure density operator ${\VEC{E}_{-}B\rho}$
transforms identically to that of the corresponding pure
state $\KET{\xi}\BRA{\xi}$ under unitary operations,
while the identity component transforms trivially,
and also that ${\VEC{E}_{-}B\rho}$ produces exactly the same NMR spectrum
as would $\KET{\xi}\BRA{\xi}$ up to its overall amplitude
(since the identity component of any density operator
does not contribute to the signals observed by NMR).
Thus the unitary dynamics of the observables
in NMR experiments on pseudo-pure states are,
for all practical intents and purposes,
indistinguishable from the same experiments on a (smaller)
ensemble in the corresponding pure state $\KET{\xi}\BRA{\xi}$.
It follows that NMR experiments on pseudo-pure
states are necessarily {\em consistent with\/}
(though not proof of the reality of)
a microscopic interpretation of the ensemble in
which those molecules contributing to the observations
are all in the same pure state $\KET{\xi}\BRA{\xi}$,
while the remaining (and large majority of the)
molecules are in completely random states
with a net density operator of $1/2^N$.
In deriving a violation of Bell's inequality from
our measurements above, we required that the fractions
of molecules in the four diagonal states sum to unity,
so that they could be identified with the
probabilities of those states in an unidentified
\underline{sub\hspace*{0.1pt}}ensemble
in the pure state $\KET{\psi}\BRA{\psi}$.
Implicitly, therefore, this microscopic interpretation
of the ensemble was assumed in deriving the violation.
As explained above, however, the large identity
component in the corresponding pseudo-pure density
operator guarantees that many other ensembles
could be found with the same net density operator,
so that a microscopic interpretation in terms of
a single well-defined subensemble in the pure state
$\KET{\psi}\BRA{\psi}$ is not physically justified.
In fact, the fundamental limits on the amount of information
that can be extracted on an unknown quantum state even by strong
measurements prevents us from ever knowing if any molecules
of our pseudo-pure sample exist in or near the corresponding
pure state $\KET{\psi}\BRA{\psi}$ at all \cite{Peres:95}.
It is for this reason that our apparent violation of Bell's
inequality fails to disprove the existence of hidden variables.
This ambiguity in the microscopic interpretation
of liquid-state NMR experiments notwithstanding,
quantum physics indicates that a pseudo-pure spin state,
subjected to the same electromagnetic fields as a true
pure state, will undergo the same unitary transformation.
In addition, applying a $\LAB{z}$-gradient to an NMR ensemble
renders unobservable the same transverse phase information
that would be destroyed on performing strong measurements
along the $\LAB{z}$-axis on all the spins in the ensemble.
Finally, existing experiments relying upon true pure states
and strong measurements provide direct evidence against hidden
variable theories (see e.g.\ Refs.\ \cite{AspDalRog:82,Branning:97}).
Given this background knowledge of the underlying physics,
our experiments indirectly imply that the pure state
$\KET{\psi}\BRA{\psi}$ would violate Bell's inequality.
More generally, the ambiguity in the microscopic
interpretation of liquid-state NMR experiments in
{\em no way\/} detracts from their utility as a means
of studying the dynamics of information contained in
either pseudo-pure or (by inference) true pure states,
even in significantly more complex spin systems
that would be difficult to study by other means.
To further emphasize this fact, we will now describe
NMR experiments we have performed which demonstrate
quantum error correction using pseudo-pure states.
\vspace*{0.20in}
\section{Quantum error correction by NMR spectroscopy}
The error correcting code we have chosen to illustrate
by NMR is well-known in the field \cite{KnillLafla:97},
and uses two ancilla (labeled $2$ \& $3$) to
encode the state of a data spin (labeled $1$).
Letting $\VEC{S}^{2|1}$ and $\VEC{S}^{3|1}$ be
c-NOT's, and $\VEC{R}_{90}^{123} \equiv
\exp(-\imath\FRAC{\pi}{2}(\VEC{I}_\LAB{y}^1+\VEC{I}_\LAB{y}^2+\VEC{I}_\LAB{y}^3))$,
the encoding operation proceeds as follows:
\begin{equation} \begin{array}{rcl}
&& (\alpha\KET{0} + \beta\KET{1}) \KET{00} / \sqrt2 ~
\stackrel{\VEC{S}^{2|1}}{\longrightarrow}
\stackrel{\VEC{S}^{3|1}}{\longrightarrow}
\stackrel{\VEC{R}_{90}^{123}}{\longrightarrow}
~ \alpha \KET{\mbox{$+$$+$$+$}} + \beta\KET{\mbox{$-$$-$$-$}}
\\ &&
\left( \mbox{where}~ \KET{\mbox{$\pm$$\pm$$\pm$}}
\equiv (\KET{0}\pm\KET{1})(\KET{0}\pm\KET{1})
(\KET{0}\pm\KET{1}) / \sqrt8 \right)
\end{array} \end{equation}
Decoding consists of applying the
inverse operations in the reverse order,
which acts on the states obtained by
single sign-flip errors as follows:
\begin{equation} \begin{array}{rcl}
&&
\alpha \KET{\mbox{$+$$+$$-$}} + \beta\KET{\mbox{$-$$-$$+$}}
\stackrel{\VEC{R}_{-90}^{123}}{\longrightarrow}
\stackrel{\VEC{S}^{3|1}}{\longrightarrow}
\stackrel{\VEC{S}^{2|1}}{\longrightarrow}
(\alpha\KET{0} + \beta\KET{1})\KET{01} / \sqrt2
\\ &&
\alpha \KET{\mbox{$+$$-$$+$}} + \beta\KET{\mbox{$-$$+$$-$}}
\stackrel{\VEC{R}_{-90}^{123}}{\longrightarrow}
\stackrel{\VEC{S}^{3|1}}{\longrightarrow}
\stackrel{\VEC{S}^{2|1}}{\longrightarrow}
(\alpha\KET{0} + \beta\KET{1})\KET{10} / \sqrt2
\\ &&
\alpha \KET{\mbox{$-$$+$$+$}} + \beta\KET{\mbox{$+$$-$$-$}}
\stackrel{\VEC{R}_{-90}^{123}}{\longrightarrow}
\stackrel{\VEC{S}^{3|1}}{\longrightarrow}
\stackrel{\VEC{S}^{2|1}}{\longrightarrow}
(\alpha\KET{1} + \beta\KET{0})\KET{11} / \sqrt2
\end{array} \end{equation}
It follows that a Toffoli gate $\VEC{T}^{1|23}$,
which flips the data spin conditional on
the ancillae being in the state $\KET{11}$,
will correct a sign-flip error in the
data spin and leave it alone otherwise,
even if an error occurs in the ancillae.
In practice, errors in quantum computers
are not expected to be single sign-flips,
but rather small random phase errors
which cumulatively result in decoherence.
Nevertheless, we can show that the ability to
correct sign-flips implies the ability to cancel
the effect of such phase errors to first order.
Random phase errors correspond to the propagator
$\exp(-\imath(\chi^1\VEC{I}_\LAB{z}^1+\chi^2\VEC{I}_\LAB{z}^2+\chi^3\VEC{I}_\LAB{z}^3))$,
where $\chi^1,\chi^2,\chi^3$ are random variables,
which acts to first order on the encoded state as:
\begin{equation} \begin{array}{rcl} &&
\exp(-\imath(\chi^1\VEC{I}_\LAB{z}^1+\chi^2\VEC{I}_\LAB{z}^2+\chi^3\VEC{I}_\LAB{z}^3)) \left(
\alpha \KET{\mbox{$+$$+$$+$}} + \beta\KET{\mbox{$-$$-$$-$}}
\right) \\ &~\approx~& \left(
\alpha \KET{\mbox{$+$$+$$+$}} + \beta\KET{\mbox{$-$$-$$-$}}
\right) - \imath\chi^1 \left(
\alpha \KET{\mbox{$-$$+$$+$}} + \beta\KET{\mbox{$+$$-$$-$}}
\right) \\ && -\, \imath\chi^2 \left(
\alpha \KET{\mbox{$+$$-$$+$}} + \beta\KET{\mbox{$-$$+$$-$}}
\right) - \imath\chi^3 \left(
\alpha \KET{\mbox{$+$$+$$-$}} + \beta\KET{\mbox{$-$$-$$+$}}
\right)
\end{array} \end{equation}
Since decoding and the error-correcting
Toffoli gate are likewise linear,
it follows that the first-order effects
of phase errors are cancelled as claimed.
Note this argument makes no assumptions
concerning the correlations among the errors!
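Both the exact correction of single sign-flip errors and the first-order cancellation of small phase errors can be verified with a short numerical simulation. The sketch below is our own illustration, not the code used for the experiments; the qubit ordering, the sign convention of the $90^\circ$ rotation and the sample amplitudes are assumptions, but the correction property being tested does not depend on them.
\begin{verbatim}
import numpy as np
from functools import reduce
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.diag([1., -1.])

def op(single, k):                  # embed a one-spin operator on spin k of three
    mats = [I2, I2, I2]
    mats[k - 1] = single
    return reduce(np.kron, mats)

def cnot(c, t):                     # c-NOT with control spin c and target spin t
    P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])
    return op(P0, c) + op(P1, c) @ op(X, t)

# Toffoli: flip spin 1 when the ancillae (spins 2 and 3) are both |1>.
P1 = np.diag([0., 1.])
toffoli = np.eye(8) - op(P1, 2) @ op(P1, 3) + op(X, 1) @ op(P1, 2) @ op(P1, 3)

R90 = expm(-1j * np.pi / 2 * (op(Y / 2, 1) + op(Y / 2, 2) + op(Y / 2, 3)))
encode = R90 @ cnot(1, 3) @ cnot(1, 2)
decode = encode.conj().T

alpha, beta = 0.6, 0.8                              # arbitrary data-spin amplitudes
data = np.array([alpha, beta])
psi_in = np.kron(data, [1., 0., 0., 0.])            # (a|0> + b|1>) |00>

def data_fidelity(error):
    psi = toffoli @ decode @ error @ encode @ psi_in
    rho1 = psi.reshape(2, 4) @ psi.reshape(2, 4).conj().T   # trace out the ancillae
    return (data.conj() @ rho1 @ data).real

# A sign-flip error on any single spin is corrected exactly:
for k in (1, 2, 3):
    print('Z on spin', k, '->', round(data_fidelity(op(Z, k)), 12))   # 1.0

# Small random phase errors are cancelled to first order: halving chi reduces
# the protected infidelity by about 16x, but the unprotected one only by 4x.
for chi in (0.1, 0.05):
    err = expm(-1j * chi * (op(Z / 2, 1) + op(Z / 2, 2) + op(Z / 2, 3)))
    psi1 = expm(-1j * chi * Z / 2) @ data           # same error on an unencoded spin
    print(chi, 1 - data_fidelity(err), 1 - abs(data.conj() @ psi1) ** 2)
\end{verbatim}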
Experimental results demonstrating these expectations
have recently been published \cite{CMPKLZHS:98}.
In the following, we shall present a more
detailed explanation of how the error correction
works using the product operator formalism,
along with selected experimental data
illustrating and validating this explanation.
We shall assume that the data spin is in one of the
states $1$ (unpolarized), $\VEC{I}_\LAB{x}^1$, $\VEC{I}_\LAB{y}^1$ or $\VEC{I}_\LAB{z}^1$.
Although these are mixed states,
each consists of an incoherent sum of pure states,
e.g.\ $2\VEC{I}_\LAB{z}^1 = \KET{0}\BRA{0} - \KET{1}\BRA{1}$,
so if error correction works on these pure states,
by linearity it will also work on the mixtures (and vice versa).
In these terms, a complete set of initial states
${{\VEC{E}_{-}B\rho}}_\LAB{A}$ for error correction are:
\begin{equation} \begin{array}{rcl} \label{eq:rhoA}
&& \left. \begin{array}{l}
\VEC{E}_{+}^2\VEC{E}_{+}^3 \\ \VEC{I}_\LAB{x}^1 \VEC{E}_{+}^2\VEC{E}_{+}^3 \\
\VEC{I}_\LAB{y}^1 \VEC{E}_{+}^2\VEC{E}_{+}^3 \\ \VEC{I}_\LAB{z}^1 \VEC{E}_{+}^2\VEC{E}_{+}^3
\end{array} \right\}
~\equiv~ {{\VEC{E}_{-}B\rho}}_\LAB{A}^1 \VEC{E}_{+}^2 \VEC{E}_{+}^3 ~=~ {{\VEC{E}_{-}B\rho}}_\LAB{A}
\end{array} \end{equation}
The corresponding states ${{\VEC{E}_{-}B\rho}}_\LAB{B}$
to which they are mapped by encoding are:
\begin{equation} \begin{array}{rcl} \label{eq:rhoB}
{{\VEC{E}_{-}B\rho}}_\LAB{B} &\equiv& \left\{ \begin{array}{l}
\FRAC14 + \VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^2 + \VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^3 + \VEC{I}_\LAB{x}^2\VEC{I}_\LAB{x}^3 \\
\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3 + \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{y}^3 + \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{z}^3 - \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3 \\
\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{y}^3 + \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{z}^3 + \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3 - \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3 \\
\FRAC{1}{4} (\VEC{I}_\LAB{x}^1 + \VEC{I}_\LAB{x}^2 + \VEC{I}_\LAB{x}^3) + \VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^2\VEC{I}_\LAB{x}^3
\end{array} \right.
\end{array} \end{equation}
We note that the last three states in Eq.~(\ref{eq:rhoA})
can be prepared (with a 50\% loss of polarization)
by averaging twice Eq.~(\ref{eq:condpure3a}) with
\begin{equation} \begin{array}{rcl} \label{eq:condpure3b}
&& \FRAC{1}{16}\, \MAT{Diag}( 3, 1, 1, 1, -3, -1, -1, -1 )
\\ &~\leftrightarrow~&
\FRAC{1}{16} (\VEC{I}_\LAB{z}^1 (3 + 2\VEC{I}_\LAB{z}^2 + 2\VEC{I}_\LAB{z}^3 + 4\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3))
\\ &=&
\FRAC{1}{8} (\VEC{E}_{+}^1 - \VEC{E}_{-}^1) (\VEC{E}_{+}^2\VEC{E}_{+}^3 + \FRAC{1}{2}) ~.
\end{array} \end{equation}
\begin{figure}
\caption{
Experimental NMR data illustrating the decay of each of the product
operators $\VEC{I}_\LAB{z}^1$, $2\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2$, $2\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^3$ and $4\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3$ under the totally correlated decoherence model, plotted against the decoherence time $t$, together with the corresponding fits.
\label{fig:err_cor}}
\end{figure}
In liquid-state NMR, decoherence occurs principally
through the randomly fluctuating external magnetic fields
$B_\LAB{z}^k$ along the $\LAB{z}$-axis at each spin $k$.
The effect of these fields is most simply described
in the {\em spherical\/} product operator basis $\VEC 1$,
$\VEC{I}_\LAB{z}^k$ and $\VEC{I}_{\pm}^k \equiv \VEC{I}_\LAB{x}^k \pm \imath \VEC{I}_\LAB{y}^k$,
as opposed to the {\em Cartesian\/} basis used up to now.
The products of these basis elements can be shown \cite{ErnBodWok:87}
to decay exponentially at rates proportional to the
mean-square field $\overline{(B_\LAB{z}^k)^2}$ for $\VEC{I}_{\pm}^k$
(as well as $\VEC{I}_{\pm}^k \VEC{I}_\LAB{z}^\ell$, $\VEC{I}_{\pm}^k \VEC{I}_\LAB{z}^\ell \VEC{I}_\LAB{z}^m$),
and to
\begin{equation} \label{eq:fields}
\begin{array}[t]{rll}
& \overline{(B_\LAB{z}^k - B_\LAB{z}^\ell)^2} &
\quad\mbox{for $\VEC{I}_{+}^k\VEC{I}_{-}^\ell$ \& $\VEC{I}_{-}^k\VEC{I}_{+}^\ell$,} \\
& \overline{(B_\LAB{z}^k + B_\LAB{z}^\ell)^2} &
\quad\mbox{for $\VEC{I}_{+}^k\VEC{I}_{+}^\ell$ \& $\VEC{I}_{-}^k\VEC{I}_{-}^\ell$,} \\
& \overline{(B_\LAB{z}^k + B_\LAB{z}^\ell - B_\LAB{z}^m)^2} &
\quad\mbox{for $\VEC{I}_{+}^k\VEC{I}_{+}^\ell\VEC{I}_{-}^m$ \& $\VEC{I}_{-}^k\VEC{I}_{-}^\ell\VEC{I}_{+}^m$,~~etc.,} \\
\mbox{and}\quad & \overline{(B_\LAB{z}^k + B_\LAB{z}^\ell + B_\LAB{z}^m)^2} &
\quad\mbox{for $\VEC{I}_{+}^k\VEC{I}_{+}^\ell\VEC{I}_{+}^m$ \& $\VEC{I}_{-}^k\VEC{I}_{-}^\ell\VEC{I}_{-}^m$.}
\end{array}
\end{equation}
These products are referred to as single (SQC1: $\VEC{I}_{\pm}^k$),
zero (ZQC: $\VEC{I}_{\pm}^k\VEC{I}_{\mp}^\ell$), double (DQC: $\VEC{I}_{\pm}^k\VEC{I}_{\pm}^\ell$),
three-spin single (SQC3: $\VEC{I}_{\pm}^k\VEC{I}_{\pm}^\ell\VEC{I}_{\mp}^m$, etc.) and triple
(TQC: $\VEC{I}_{\pm}^k\VEC{I}_{\pm}^\ell\VEC{I}_{\pm}^m$) quantum coherences, respectively.
We shall consider two extreme forms of decoherence.
In the first, the fields at the different spins are
uncorrelated, and hence the random variables $\chi^k$ can
be assumed to be identically distributed and independent.
In the second, they are assumed to be totally correlated.
By Eq.\ (\ref{eq:fields}), the relative rates
of decoherence in these two cases are:
\begin{equation}
\begin{array}{lccccc}
& ~\mbox{\sf ZQC}~ & ~\mbox{\sf SQC1}~
& ~\mbox{\sf SQC3}~ & ~\mbox{\sf DQC}~
& ~\mbox{\sf TQC}~ \\
\mbox{\sf Uncorrelated:}
& 2 & 1 & 3 & 2 & 3 \\
\mbox{\sf Totally Correlated:}
& 0 & 1 & 1 & 4 & 9
\end{array}
\end{equation}
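These relative rates follow directly from the mean-square field combinations in Eq.\ (\ref{eq:fields}), and can be reproduced with a few lines of Monte Carlo. The sketch below is our own illustration (the unit field variance and the sample size are arbitrary): it draws the three fields either independently or as a single common field and evaluates the mean squares governing each coherence order, normalized to the SQC1 value.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 200000

def relative_rates(correlated):
    if correlated:
        B = rng.normal(size=n)
        B1 = B2 = B3 = B                         # totally correlated fields
    else:
        B1, B2, B3 = rng.normal(size=(3, n))     # independent fields
    ref = np.mean(B1 ** 2)                       # SQC1 rate used for normalization
    combos = {'ZQC': B1 - B2, 'SQC1': B1, 'SQC3': B1 + B2 - B3,
              'DQC': B1 + B2, 'TQC': B1 + B2 + B3}
    return {name: round(np.mean(c ** 2) / ref, 2) for name, c in combos.items()}

print('uncorrelated:      ', relative_rates(False))   # about {2, 1, 3, 2, 3}
print('totally correlated:', relative_rates(True))    # exactly {0, 1, 1, 4, 9}
\end{verbatim}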
Decomposing ${{\VEC{E}_{-}B\rho}}_\LAB{B}$ into the spherical basis,
multiplying by decaying exponentials with the above
relative rates (normalized so that the SQC1 terms decay with time constant $\tau$),
and returning to the Cartesian basis gives
\begin{equation}
{{\VEC{E}_{-}B\rho}}_\LAB{C} ~\equiv~ \left\{ \begin{array}{l}
\FRAC14 + (\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^2 + \VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^3 + \VEC{I}_\LAB{x}^2\VEC{I}_\LAB{x}^3) e^{-2t/\tau}
\\
(\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3 + \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{y}^3 + \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{z}^3) e^{-2t/\tau}
- \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3
\\
(\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{y}^3 + \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{z}^3 + \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3) e^{-t/\tau}
- \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3 e^{-3t/\tau}
\\
\FRAC{1}{4} (\VEC{I}_\LAB{x}^1 + \VEC{I}_\LAB{x}^2 + \VEC{I}_\LAB{x}^3) e^{-t/\tau}
+ \VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^2\VEC{I}_\LAB{x}^3 e^{-3t/\tau}
\end{array} \right.
\end{equation}
in the uncorrelated case, and
\begin{equation}
{{\VEC{E}_{-}B\rho}}_\LAB{C} ~\equiv~
\left\{ \begin{array}{l}
\FRAC14 + \HALF (\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^2 + \VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^3 + \VEC{I}_\LAB{x}^2\VEC{I}_\LAB{x}^3 +
\VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2 + \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^3 + \VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3) \\ +\,
\HALF (\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^2 + \VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^3 + \VEC{I}_\LAB{x}^2\VEC{I}_\LAB{x}^3 - \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2
- \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^3 - \VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3) e^{-4t/\tau}
\\
\HALF (\VEC{I}_\LAB{z}^1(\VEC{I}_\LAB{x}^2\VEC{I}_\LAB{x}^3+\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3) +
\VEC{I}_\LAB{z}^2(\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^3+\VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^3) +
\VEC{I}_\LAB{z}^3(\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^2+\VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2)) \\ -\,
\HALF (\VEC{I}_\LAB{z}^1(\VEC{I}_\LAB{x}^2\VEC{I}_\LAB{x}^3-\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3) +
\VEC{I}_\LAB{z}^2(\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^3-\VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^3) +
\VEC{I}_\LAB{z}^3(\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^2-\VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2)) \\ \qquad
e^{-4t/\tau} - \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3
\\
(\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{y}^3 + \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{z}^3 + \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3)
e^{-t/\tau} \\ +\, \FRAC14
(\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^2\VEC{I}_\LAB{y}^3 + \VEC{I}_\LAB{x}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{x}^3 + \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{x}^2\VEC{I}_\LAB{x}^3 + 3\VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3)
e^{-t/\tau} \\ -\, \FRAC14
(\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^2\VEC{I}_\LAB{y}^3 + \VEC{I}_\LAB{x}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{x}^3 + \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{x}^2\VEC{I}_\LAB{x}^3 - \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3)
e^{-9t/\tau}
\\
\FRAC14 (\VEC{I}_\LAB{x}^1 + \VEC{I}_\LAB{x}^2 + \VEC{I}_\LAB{x}^3)
e^{-t/\tau} \\ +\, \FRAC14
(3\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^2\VEC{I}_\LAB{x}^3 + \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{x}^3 + \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{x}^2\VEC{I}_\LAB{y}^3 + \VEC{I}_\LAB{x}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3)
e^{-t/\tau} \\ +\, \FRAC14
(\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{x}^2\VEC{I}_\LAB{x}^3 - \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{x}^3 - \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{x}^2\VEC{I}_\LAB{y}^3 - \VEC{I}_\LAB{x}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3)
e^{-9t/\tau}
\end{array} \right.
\end{equation}
in the totally correlated case.
The decoding operation converts this to
\begin{equation}
{{\VEC{E}_{-}B\rho}}_\LAB{D} ~\equiv~ \left\{ \begin{array}{l}
\FRAC14 + (\VEC{I}_\LAB{z}^2 + \VEC{I}_\LAB{z}^3 + \VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3) e^{-2t/\tau}
\\
(\HALF \VEC{I}_\LAB{x}^1\VEC{I}_\LAB{z}^2 + \HALF \VEC{I}_\LAB{x}^1\VEC{I}_\LAB{z}^3 + \VEC{I}_\LAB{x}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3) e^{-2t/\tau}
+ \FRAC{1}{4} \VEC{I}_\LAB{x}^1
\\
(\FRAC{1}{4} \VEC{I}_\LAB{y}^1 + \HALF \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{z}^2 + \HALF \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{z}^3) e^{-t/\tau}
+ \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3 e^{-3t/\tau}
\\
(\FRAC{1}{4} \VEC{I}_\LAB{z}^1 + \HALF \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2 + \HALF \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^3) e^{-t/\tau}
+ \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3 e^{-3t/\tau}
\end{array} \right.
\end{equation}
in the uncorrelated case, and
\begin{equation}
{{\VEC{E}_{-}B\rho}}_\LAB{D} ~\equiv~
\left\{ \begin{array}{l}
\FRAC14 + \HALF (\HALF\VEC{I}_\LAB{z}^2 + \HALF\VEC{I}_\LAB{z}^3 + \VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3 -
2\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{y}^3 - 2\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{z}^3 + \VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3) \\ +\,
\HALF (\HALF\VEC{I}_\LAB{z}^2 + \HALF\VEC{I}_\LAB{z}^3 + \VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3 +
2\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{y}^3 + 2\VEC{I}_\LAB{x}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{z}^3 - \VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3)
e^{-4t/\tau}
\\
\HALF (\VEC{I}_\LAB{x}^1(\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3+\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3) +
\HALF \VEC{I}_\LAB{z}^3(\VEC{I}_\LAB{x}^1-\VEC{I}_\LAB{x}^2) +
\HALF \VEC{I}_\LAB{z}^2(\VEC{I}_\LAB{x}^1-\VEC{I}_\LAB{x}^3))
+ \FRAC{1}{4} \VEC{I}_\LAB{x}^1 \\ +\,
\HALF (\VEC{I}_\LAB{x}^1(\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3-\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3) +
\HALF \VEC{I}_\LAB{z}^3(\VEC{I}_\LAB{x}^1+\VEC{I}_\LAB{x}^2) +
\HALF \VEC{I}_\LAB{z}^2(\VEC{I}_\LAB{x}^1+\VEC{I}_\LAB{x}^3))
e^{-4t/\tau}
\\
(\HALF\VEC{I}_\LAB{y}^1\VEC{I}_\LAB{z}^3 + \HALF\VEC{I}_\LAB{y}^1\VEC{I}_\LAB{z}^2 + \FRAC{1}{4}\VEC{I}_\LAB{y}^1)
e^{-t/\tau} \\ +\, \FRAC14
(\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{y}^3 + \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{z}^3 - \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3 - 3\VEC{I}_\LAB{y}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3)
e^{-t/\tau} \\ -\, \FRAC14
(\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{y}^3 + \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{z}^3 - \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3 + \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3)
e^{-9t/\tau}
\\
\FRAC14 (\VEC{I}_\LAB{z}^1 + 2 \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2 + 2 \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^3)
e^{-t/\tau} \\ +\, \FRAC14
(3\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3 + \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{z}^3 + \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{y}^3 + \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3)
e^{-t/\tau} \\ +\, \FRAC14
(\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3 - \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{z}^3 - \VEC{I}_\LAB{y}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{y}^3 - \VEC{I}_\LAB{z}^1\VEC{I}_\LAB{y}^2\VEC{I}_\LAB{y}^3)
e^{-9t/\tau}
\end{array} \right.
\end{equation}
in the totally correlated case.
This is clearly getting a little messy,
and it gets much worse after the Toffoli gate!
Therefore, we shall only present the partial trace
over the ancillae after applying the Toffoli, which is
\begin{equation}
{{\VEC{E}_{-}B\rho}}_\LAB{E}^1 ~\equiv~ \left\{ \begin{array}{l}
1 \\
\VEC{I}_\LAB{x}^1 \\
\VEC{I}_\LAB{y}^1 \left( \FRAC{3}{2} e^{-t/\tau} - \HALF e^{-3t/\tau} \right)
\\
\VEC{I}_\LAB{z}^1 \left( \FRAC{3}{2} e^{-t/\tau} - \HALF e^{-3t/\tau} \right)
\end{array} \right.
\end{equation}
in the uncorrelated case, and
\begin{equation}
{\VEC{E}_{-}B\rho}_\LAB{E}^1 ~\equiv~
\left\{ \begin{array}{l}
1 \\
\VEC{I}_\LAB{x}^1 \\
\VEC{I}_\LAB{y}^1 \left( \FRAC{3}{2} e^{-t/\tau} - \FRAC{3}{8} e^{-t/\tau}
- \FRAC{1}{8} e^{-9t/\tau} \right)
\\
\VEC{I}_\LAB{z}^1 \left( \FRAC{3}{2} e^{-t/\tau} - \FRAC{3}{8} e^{-t/\tau}
- \FRAC{1}{8} e^{-9t/\tau} \right)
\end{array} \right.
\end{equation}
in the totally correlated case.
The slope of these curves at $t = 0$ is zero in all cases, as expected.
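These decay laws and the vanishing initial slope are easy to confirm symbolically. A minimal sketch (our own check, with $\tau$ set to $1$ for convenience):
\begin{verbatim}
import sympy as sp

t = sp.symbols('t', nonnegative=True)

# Decay envelopes of the recovered data-spin terms (in units where tau = 1).
uncorrelated = sp.Rational(3, 2) * sp.exp(-t) - sp.Rational(1, 2) * sp.exp(-3 * t)
correlated = (sp.Rational(3, 2) * sp.exp(-t) - sp.Rational(3, 8) * sp.exp(-t)
              - sp.Rational(1, 8) * sp.exp(-9 * t))

for f in (uncorrelated, correlated):
    print(f.subs(t, 0), sp.diff(f, t).subs(t, 0))   # value 1 and slope 0 in both cases
\end{verbatim}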
In order to demonstrate these results by NMR solution-state spectroscopy,
a precise implementation of the above decoherence models is needed.
This was achieved by combining {\em gradient\/}
methods with molecular diffusion.
In these methods, a magnetic field gradient
is created along the $\LAB{z}$-axis;
as previously described, this dephases
the transverse ($\LAB{xy}$) magnetization.
More precisely, a field gradient causes the
transverse magnetization to precess at rates
which depend linearly on its $\LAB{z}$-coordinate,
thereby winding it into a spiral about the $\LAB{z}$-axis
whose average transverse magnetization is essentially zero.
The gradient is turned off for a given time interval $t$, during
which diffusion of the molecules along $\LAB{z}$ blurs the spiral.
The gradient is then reversed, causing the
magnetization to refocus and so create an ``echo''.
Because those molecules which have moved now precess at
a different rate, their magnetization is not refocussed,
so the magnitude of the echo decays exponentially with $t$.
Because all the spins in each molecule are subject
to the same change in field, this constitutes a
true implementation of the totally correlated model.
By using refocusing $\pi$-pulses between gradients,
it is also possible to dephase each spin separately,
thereby implementing the uncorrelated model.
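How such a gradient, diffusion and reversed-gradient sequence produces an exponentially decaying echo can be illustrated with a one-dimensional toy model. The following sketch is our own simulation (the gradient wavenumber, diffusion constant and time values are arbitrary); each molecule acquires a phase proportional to its position, diffuses for a time $t$, and then acquires the opposite phase, so only molecules that have moved retain a net phase:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 100000
z0 = rng.uniform(-1.0, 1.0, n)      # initial z-positions across the sample
k = 30.0                            # phase wound per unit length by the gradient
D = 1.0e-3                          # diffusion constant

for t in (0.0, 0.5, 1.0, 2.0):
    z1 = z0 + rng.normal(scale=np.sqrt(2 * D * t), size=n)   # diffusion during t
    net_phase = k * z0 - k * z1     # wind, diffuse, unwind with the reversed gradient
    echo = abs(np.mean(np.exp(1j * net_phase)))
    print(t, round(echo, 4), round(np.exp(-k ** 2 * D * t), 4))  # echo ~ exp(-k^2 D t)
\end{verbatim}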
At this time, however, we have collected and processed
data only for the ${{\VEC{E}_{-}B\rho}}_\LAB{A}^1 = \VEC{I}_\LAB{z}^1$
state with the totally correlated model.
Although it is possible to prepare the
state $\VEC{I}_\LAB{z}^1\VEC{E}_{+}^2\VEC{E}_{+}^3$ as noted above,
we have chosen to illustrate the above analysis by
preparing the states $\VEC{I}_\LAB{z}^1$, $2\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2$, $2\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^3$
and $4\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3$ in four separate experiments,
each using sixteen different decoherence times $t$.
Because the SQC and TQC contributing to $4\VEC{I}_\LAB{z}^1\VEC{I}_\LAB{z}^2\VEC{I}_\LAB{z}^3$
refocussed at different times, this further
enabled us to follow their evolutions separately.
The results of these experiments are plotted
against the time $t$ in Figure \ref{fig:err_cor},
along with the corresponding logarithmic fits.
It may be seen that the sum of the data
and the sum of the fits thereto (also shown) do
indeed exhibit a near-zero initial slope,
in accord with the above calculations.
Our published report \cite{CMPKLZHS:98}
includes the results of further experiments
(performed by E.~Knill and R.~Laflamme)
with the natural and far more complicated
decoherence processes that occur in solution.
These are more difficult to interpret,
but are nevertheless consistent with the state
preservation expected from error correction.
Additional experiments and more detailed
calculations are in progress.
While a method of inhibiting decoherence ($T_2$ relaxation)
during NMR pulse sequences would be highly desirable,
there are strong reasons to doubt that quantum
error correction will be useful in this regard.
First, the ancillae must be placed in a pseudo-pure state,
which as we have shown above entails a loss
of 50\% of the signal for each ``data'' spin;
this is more than is recovered by error correction.
In addition, the ancillae must be returned to a pseudo-pure
state uncorrelated with the state of the data spin(s), or else
``fresh'' ancillae in such a state must be continuously available,
in order to inhibit decoherence over an appreciable
period of time by the repeated correction of errors.
Nevertheless, we feel that the basic idea underlying
error correction of preparing multiple quantum coherences,
allowing them to decohere, and then mixing them so
as to determine their relative rates of relaxation,
may be of considerable use in NMR studies
of the statistics of molecular motion.
This in turn is one of the most important
applications of NMR spectroscopy.
Conversely, whereas NMR spectroscopists have
previously used their methods solely to unravel
the secrets of naturally occurring systems,
it now appears possible to use these same methods
to engineer artificial systems in which the basic
principles of quantum information processing,
in particular the emergence of the classical
world through decoherence \cite{GiuliniEtAl:96},
can be studied in unprecedented detail.
\end{document}
\begin{document}
\title[4th order NHL]{On a fourth order nonlinear Helmholtz equation}
\author[Bonheure, Casteras and Mandel]{Denis Bonheure \and Jean-Baptiste Casteras \and Rainer Mandel}
\address{Denis Bonheure, Jean-Baptiste Casteras
\newline \indent D\'epartement de Math\'ematiques, Universit\'e Libre de Bruxelles,
\newline \indent CP 214, Boulevard du triomphe, B-1050 Bruxelles, Belgium,
\newline \indent and INRIA- team MEPHYSTO.}
\email{[email protected]}
\email{[email protected]}
\address{Rainer Mandel
\newline \indent Karlsruhe Institute of Technology
\newline \indent Institute for Analysis
\newline \indent Englerstrasse 2, D-76131 Karlsruhe, Germany.}
\email{[email protected] }
\begin{abstract}
In this paper, we study the mixed dispersion fourth order nonlinear Helmholtz equation
$$
\Delta ^2 u -\beta \Delta u + \alpha u= \Gamma|u|^{p-2} u \quad\text{in } \mathbb{R}^N,
$$
for positive, bounded and $\mathbb{Z}^N$-periodic functions $\Gamma$ in the following three cases:
\begin{equation*}
(a)\;\; \alpha<0,\beta \in \mathbb{R} \qquad\text{or}\qquad
(b)\;\; \alpha>0,\beta < -2\sqrt{\alpha} \qquad\text{or}\qquad
(c)\;\; \alpha =0,\beta <0.
\end{equation*}
Using the dual method of Ev\'equoz and Weth, we find solutions to this equation and establish some of their
qualitative properties.
\end{abstract}
\maketitle
\section{Introduction}
In this paper, we study the existence and the qualitative properties of solutions to the following mixed
dispersion fourth order nonlinear Helmholtz type equation
\begin{equation}
\label{4nle}
\tag{4NHE}
\gamma \Delta^2 w - \Delta w +\alpha w = \Gamma |w|^{p-2}w \quad \text{in } \mathbb{R}^N,
\end{equation}
where $\gamma>0$, $\alpha \in \mathbb{R},p>2$ and $\Gamma$ is a positive, bounded and periodic function. When
$\gamma=0$, \eqref{4nle} yields standing wave solutions, i.e. solutions of the form $\psi
(t,x)=e^{i\alpha t}w(x)$, to the well-known Schr\" odinger equation
\begin{equation}
\label{2NLS}
\tag{2NLS}
i\partial_t \psi +\Delta \psi + |\psi|^{p-2}\psi =0,\ \psi (0,x)=\psi_0 (x),\quad (t,x)\in \mathbb{R}\times \mathbb{R}^N.
\end{equation}
It is well-known that when $(p-2)N< 4$, solutions to \eqref{2NLS} exist globally in time
and that they are stable whereas when $(p-2)N\geq 4$, they can become singular in finite time and they are unstable \cite{Caz,Sulem}.
Observe that in the physically relevant case $N=2$ and $p=4$,
we are in the second situation. In order to regularize and stabilize solutions to \eqref{2NLS}, Karpman and
Shagalov \cite{MR1779828} introduced a small fourth-order dispersion, namely they considered
\begin{equation}
\label{4NLS}
\tag{4NLS}
i\partial_t \psi -\gamma \Delta^2 \psi +\Delta \psi +|\psi|^{p-2}\psi =0,\ \psi (0,x)=\psi_0 (x),\quad
(t,x)\in \mathbb{R}\times \mathbb{R}^N.
\end{equation}
Using a combination of stability analysis and numerical simulations, they showed that standing wave solutions
for this equation, i.e. solutions of \eqref{4nle} with $\alpha>0$, are stable when $(p-2)N<8$
and unstable when $(p-2)N\geq 8$. Fibich et al. \cite{MR1898529} also proved, using the Strichartz estimates
of Ben-Artzi et al. \cite{MR1745182}, global existence in time of the solutions to \eqref{4NLS} when $(p-2)N<8$, provided the initial datum is in the energy space.
In particular, thanks to the presence of the biharmonic term, we see that when $N=2$ and $p=4$, solutions to \eqref{4NLS} exist globally in time and standing wave solutions are stable.
The addition of the fourth order term has also been motivated from a phenomenological point of view.
In nonlinear optics, \eqref{2NLS} is usually derived from the nonlinear Helmholtz equation through the so-called paraxial approximation.
The fact that its solutions may blow up in finite time suggests that some small terms neglected in the paraxial approximation play an important role in preventing this phenomenon.
The addition of a small fourth-order dispersion term was proposed in \cite{MR1898529} as a nonparaxial
correction, which eventually gives rise to \eqref{4NLS}. Despite being less studied than the classical \eqref{2NLS}, increasing attention has been given to
\eqref{4NLS}. We refer to the works of Pausader \cite{MR2353631,MR2502523,MR2505703,MR2746203,MR3078112},
Miao et al. \cite{miao}, Ruzhansky et al. \cite{ruz}, Segata \cite{segata1,segata2} concerning global
well-posedness and scattering, to \cite{BCGJ2,BL} for finite-time blow-up and to
\cite{BCDN,BCGJ1,natalipastor} for the stability of standing wave solutions.
We also mention that \eqref{4nle} appears in the theory of water waves \cite{bretherton} and as a model to study travelling waves in suspension bridges \cite{lazer,mckenna} (see also \cite{Buffoni1995109,levandosky}). \\
Next, let us mention existence results for \eqref{4nle}. First, observe that using the scaling $u(x)=w(\gamma^{1/4}x)$, we see that \eqref{4nle} is equivalent to
\begin{equation}
\label{4nls}
\Delta^2 u - \beta\Delta u +\alpha u = |u|^{p-2}u \quad\text{in } \mathbb{R}^N,
\end{equation}
where $\beta =\gamma^{-1/2}$. Bonheure and Nascimento \cite{BN} considered the following minimization problem
\begin{equation}
\label{mini}
m:=\inf_{u\in M} J_{\alpha,\beta}(u),
\end{equation}
where
$$
J_{\alpha,\beta}(u):=\int_{\mathbb{R}^N} (|\Delta u|^2 +\beta |\nabla u|^2 +\alpha u^2) \, dx
$$
and
$$
M:=\{u\in H^2 (\mathbb{R}^N ):\ \int_{\mathbb{R}^N}|u|^{p} \, dx=1 \}.
$$
Notice that if $u\in M$ achieves the infimum $m$, then $u$ is a solution to
$$\Delta^2 u -\beta \Delta u+\alpha u=m|u|^{p-2}u.
$$
Thus, if $m>0$, then $w=m^{1/(p-2)}u$ solves \eqref{4nls}. They showed that the minimization problem
\eqref{mini} admits a solution provided that $\alpha>0$ and
$\beta>-2\sqrt{\alpha}$. The exponent is assumed to satisfy $p>2$ in the case $N=2,3,4$ and $2<p<2N/(N-4)$
if $N>4$. Observe that, thanks to these assumptions on $\alpha$ and $\beta$, the
functional $J_{\alpha,\beta}(u)$ is equivalent to the usual norm in $H^2 (\mathbb{R}^N)$. They also show that their solution has a sign, is radially symmetric if in addition $\beta\geq 2\sqrt{\alpha}$ and, in \cite{BCDN}, that it is exponentially decreasing (if $\beta > -2\sqrt{\alpha}$).
The case
$\alpha=0$ and $\beta>0$ has been considered in \cite{BCGJ1} where the existence of solutions, belonging to
$X:=\{u\in D^{1,2}(\mathbb{R}^N)|\ \|\Delta u\|_{L^2 (\mathbb{R}^N)}<\infty \}$, has been obtained provided that
$2N/(N-2)\leq p$ if $N=3,4$ and $2N/(N-2)\leq p< 2N/(N-4)$ if $N>4$. Moreover, this solution has a sign and belongs
to $L^2 (\mathbb{R}^N)$ if and only if $N\geq 5$ suggesting that it decays only polynomially. In fact, it was proved
in this setting that any radial solution $u$ to \eqref{4nls} satisfies $\lim_{|x|\rightarrow
\infty} u(x)|x|^{N-2}=C$ for some constant $C\in \mathbb{R} \backslash \{0\}$. The non radial case is open.
In this paper, we are interested in other ranges for the parameters $\alpha$ and $\beta$, namely we consider the cases
\begin{equation}
(a)\;\; \alpha<0,\beta \in \mathbb{R} \qquad\text{or}\qquad
(b)\;\; \alpha>0,\beta < -2\sqrt{\alpha} \qquad\text{or}\qquad
(c)\;\; \alpha =0,\beta <0.
\end{equation}
To our knowledge, existence results in these cases have not been previously treated in the literature. The main difficulty for these parameter values comes from the fact that $0$ is contained in the essential spectrum of the differential operator
$L:=\Delta^2-\beta\Delta+\alpha$.
Recently, a series of papers by Ev\'equoz and Weth \cite{Ev,MR3149060,EW,MR3625081} tackles this problem
for \eqref{4nle} in the case $\gamma=0,\beta=1$ and $\alpha <0$. We also refer to the previous work of Guti\' errez
\cite{Gu} and the very recent works \cite{Mandel2,Mandel1} of the third author and his collaborators for several extensions respectively to the case where $\alpha$ is replaced by a $\mathbb{Z}^N$-periodic potential and to the case of a system. In \cite{MMP} a sharp decay result for radial solutions was found for the second order Helmholtz equation.
\smallbreak
We now describe in more details the Ev\'equoz and Weth strategy, see \cite{EW}, that we will adapt. In that paper, the authors
studied the following equation
\begin{equation}
\label{eqEW}
-\Delta u - k^2 u = \Gamma |u|^{p-2}u \quad\text{ in }\mathbb{R}^N,
\end{equation}
where $\Gamma \in L^\infty (\mathbb{R}^N)$, $0\neq \Gamma\geq 0$ and either $\Gamma$ is $\mathbb{Z}^N$-periodic or
$\lim_{|x|\rightarrow \infty}\Gamma (x)=0$. The main difficulty of this problem is the lack of a direct
variational approach. Indeed, one expects that the solutions to \eqref{eqEW} will not decay faster than
$O(|x|^{(1-N)/2})$ as $|x|\rightarrow \infty$. This last claim was indeed proved in~\cite{MMP} for all
nontrivial radial solutions of a class of nonlinear Helmholtz equations of the form \eqref{eqEW}. As a consequence, in this case, the usual energy functional formally associated with \eqref{eqEW} is not even well-defined on
nontrivial solutions.
To overcome this difficulty, Ev\'equoz and Weth proposed a dual variational approach, transforming
\eqref{eqEW} into
\begin{equation} \label{eq:dual_equation_2ndorder}
|v|^{p^\prime-2}v= \Gamma^{1/p} \mathcal R_{k^2}\big(\Gamma^{1/p}v \big),
\end{equation}
where $v=\Gamma^{1/p^\prime}|u|^{p-2} u$ and $\mathcal R_{k^2}=(-\Delta - k^2)^{-1}$ is a resolvent-type
operator constructed via the so-called limiting absorption principle. We refer
to~\eqref{eq:definition_2ndorder_resolvent} below for a precise definition. Here and in the following, $p^\prime=p/(p-1)$.
Thanks to this dual formulation, which is variational in $L^{p^\prime}(\mathbb{R}^N)$, they obtained a ground-state
solution $v\in L^{p^\prime}(\mathbb{R}^N)$ of \eqref{eq:dual_equation_2ndorder} via the Mountain-Pass theorem whenever
the exponent satisfies $\frac{2(N+1)}{N-1}<p<\frac{2N}{N-2}$ and $\Gamma$ is positive, bounded and
$\mathbb{Z}^N$-periodic.
The associated function $u$ was shown to be a strong solution of~\eqref{eqEW} lying in $W^{2,q}(\mathbb{R}^N)\cap
C^{1,\alpha}(\mathbb{R}^N)$ for all $q\in [p,\infty)$ and $\alpha \in (0,1)$.
Notice that existence results have been obtained also in the Sobolev-critical case $p=\frac{2N}{N-2}$ in
\cite{EvYe}. The lower bound for $p$ is related to the mapping properties of the
resolvent type operator $\mathcal R_{k^2}$, which in turn is linked with the Stein-Tomas Theorem (see Theorem
\ref{STT}). We will comment on this in more detail later on.
Let us return to the nonlinear fourth order Helmholtz equation~\eqref{4nle} that we will investigate by
adapting the dual variational method of Ev\'equoz and Weth. The main task is to construct and analyze
a resolvent-type operator $\mathfrak R:= (\Delta^2 -\beta \Delta +\alpha)^{-1}$ with mapping properties
similar to and even better than their second order counterparts. First notice that we can
decompose the operator $L$ into two second order operators by writing
\begin{align} \label{eq:def_a1a2}
\begin{aligned}
L &= \Delta^2 -\beta \Delta +\alpha = (-\Delta -a_1 ) (-\Delta -a_2), \quad\text{where } \\
a_1 &:= \dfrac{-\beta+\sqrt{\beta^2 - 4\alpha}}{2},\qquad a_2:= \dfrac{-\beta-\sqrt{\beta^2 - 4\alpha}}{2}.
\end{aligned}
\end{align}
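Before turning to the sign of $a_1,a_2$ in the individual parameter regimes, we note that this factorization is elementary to verify on the level of Fourier symbols. The following minimal sketch is our own check (the sample values of $\alpha,\beta$ chosen for the three regimes are arbitrary):
\begin{verbatim}
import sympy as sp

t, alpha, beta = sp.symbols('t alpha beta', real=True)   # t stands for |xi|^2
a1 = (-beta + sp.sqrt(beta ** 2 - 4 * alpha)) / 2
a2 = (-beta - sp.sqrt(beta ** 2 - 4 * alpha)) / 2

# The symbol of L = Delta^2 - beta*Delta + alpha is t^2 + beta*t + alpha.
print(sp.simplify((t - a1) * (t - a2) - (t ** 2 + beta * t + alpha)))   # 0

# Sign pattern of (a1, a2) in the cases (a), (b), (c):
for al, be in [(-1, 3), (1, -3), (0, -2)]:
    print((al, be),
          float(a1.subs({alpha: al, beta: be})),
          float(a2.subs({alpha: al, beta: be})))
    # (a): a1 > 0 > a2,   (b): a1 > a2 > 0,   (c): a1 > 0 = a2
\end{verbatim}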
We see that $\alpha<0$ implies $a_1>0>a_2$ and $\alpha=0,\beta<0$ implies $a_1>0=a_2$ so that $L$ becomes a
composition of a Schr\" odinger operator and a Helmholtz operator or the Laplacian, respectively. In the case
$\alpha>0$ and $\beta < -2\sqrt{\alpha}$ we find $a_1>a_2>0$ so that $L$ decomposes into two Helmholtz
operators. This leads us to study the nonlinear problem~\eqref{4nls} under the following assumptions:
\begin{itemize}
\item[(A1)] $\alpha,\beta\in\mathbb{R}$ satisfy $\alpha<0$, $N\geq 2$ or $\alpha>0,\beta< -2\sqrt{\alpha}$, $N\geq
2$ or $\alpha=0$, $\beta<0$, $N\geq 3$;
\item[(A2)] $\Gamma\in L^\infty(\mathbb{R}^N)$ is $\mathbb{Z}^N$-periodic with $\inf_{\mathbb{R}^N} \Gamma>0$ and
$\frac{2(N+1)}{N-1}<p<\frac{2N}{(N-4)_+}$.
\end{itemize}
Here and in the following, the symbol $\frac{2N}{(N-4)_+}$ stands for $\infty$ in the case $N\leq 4$
and for $\frac{2N}{N-4}$ if $N\geq 5$. Besides the mere existence of a nontrivial $L^p(\mathbb{R}^N)$-solution
of~\eqref{4nls}, we will determine further regularity properties as well as a far field pattern for such
solutions. This pattern can be expressed in terms of the function
$$
U_f(x) := \frac{a_1^{\frac{N-3}{4}}}{\sqrt{\beta^2-4\alpha}} \sqrt{\frac{\pi}{2}} \frac{e^{i(\sqrt{a_1}
|x|-\frac{N-3}{4}\pi)}}{|x|^{\frac{N-1}{2}}} \hat{f}\big(\sqrt{a_1} \frac{x}{|x|}\big),
\qquad\text{if }\alpha<0 \text{ or } N>3,\ \alpha=0,\ \beta<0,
$$
$$ U_f(x) := \frac{1}{|\beta|} \sqrt{\frac{\pi}{2}} \frac{e^{i\sqrt{a_1}
|x|}}{|x|} \hat{f}\big(\sqrt{a_1} \frac{x}{|x|}\big)- \dfrac{1}{4\pi |\beta||x|}\int_{\mathbb{R}^N} f(y) \, dy ,
\qquad\text{if } N=3,\ \alpha=0,\ \beta<0,
$$
or
\begin{align*}
U_f(x)
&:= \frac{a_1^{\frac{N-3}{4}}}{\sqrt{\beta^2-4\alpha}} \sqrt{\frac{\pi}{2}} \frac{e^{i(\sqrt{a_1}
|x|-\frac{N-3}{4}\pi)}}{|x|^{\frac{N-1}{2}}} \hat{f}\big(\sqrt{a_1} \frac{x}{|x|}\big) \\
&- \frac{a_2^{\frac{N-3}{4}}}{\sqrt{\beta^2-4\alpha}} \sqrt{\frac{\pi}{2}} \frac{e^{i(\sqrt{a_2}
|x|-\frac{N-3}{4}\pi)}}{|x|^{\frac{N-1}{2}}} \hat{f}\big(\sqrt{a_2} \frac{x}{|x|}\big),
\qquad\text{if } \alpha>0,\ \beta < -2\sqrt{\alpha}.
\end{align*}
Here, $a_1,a_2$ are given as in~\eqref{eq:def_a1a2} and $f$ is chosen such that its Fourier transform on
spheres is well-defined. Our main result is the following.
\begin{thm} \label{mainthm}
Assume (A1),(A2). Then there exists a nontrivial solution $u\in W^{4,q} (\mathbb{R}^N)\cap
C^{3,\alpha}(\mathbb{R}^N)$ for all $q\in [p,\infty)$, $\alpha \in (0,1)$ to
\begin{equation} \label{intro}
\Delta^2 u - \beta \Delta u +\alpha u = \Gamma |u|^{p-2} u\quad\text{ in } \mathbb{R}^N
\end{equation}
satisfying the farfield expansion
\begin{equation} \label{eq:farfield_expansion}
\lim_{R\rightarrow \infty} \frac{1}{R}\int_{B_R} |u(x) - \mathrm{Re}(U_f)(x)|^2 \,dx =0
\end{equation}
for $f:= \Gamma |u|^{p-2}u$.
\end{thm}
As in \cite{EW,MMP} one may put slightly different assumptions on $\Gamma$ that still
ensure the existence of nontrivial solutions. For instance, replacing the periodicity assumption on $\Gamma$
by $\Gamma(x)\to 0$ as $|x|\to\infty$ as in Theorem~1.2 of \cite{EW}, the dual variational approach benefits from
even better compactness properties that allow one to prove the existence of infinitely many solutions via the
Symmetric Mountain Pass Theorem. Similarly, $\Gamma$ may be replaced by $-\Gamma$ as was pointed out in
Section~3 of~\cite{MMP}. In Remark~\ref{rmqradial} we also comment on the radially symmetric case where one
can prove the existence of solutions for a strictly larger range of exponents.
Concerning the qualitative properties of the solution granted by the above theorem, we
can actually say more. Under mild additional assumptions (which are not even needed in the physically most
important case $N=3$) we can show that $u$ satisfies a radiation condition at infinity. Moreover, we will
show that the farfield expansion~\eqref{eq:farfield_expansion} has a simple pointwise counterpart
$u(x)=\mathrm{Re}(U_f)(x)+o(|x|^{\frac{1-N}{2}})$ as $|x|\to\infty$ and it is expected,
as in the second order case, that solutions to~\eqref{intro} should not decay faster than
$O(|x|^{\frac{1-N}{2}})$.
So far, however, it is unclear how to prove such a claim even in the radial setting since
methods from \cite{MMP} do not seem to be easily generalizable to the fourth order case. Let us remark that
numerical considerations indicate that radial solutions in the parameter ranges $\alpha<0$ and
$\alpha>0,\beta<-2\sqrt\alpha$ (for constant $\Gamma$, say) behave rather differently. While the solutions
in the former case seem to remain bounded with oscillatory behaviour for all small initial data, the
solutions in the latter case seem to be unbounded for most initial data so that we expect
$L^p(\mathbb{R}^N)$-solutions as the ones from Theorem~\ref{mainthm} only at exceptional initial values. In particular, there is little hope
to treat this case by the methods from \cite{MMP}.
The plan of this paper is the following: in Section $2$, we introduce some notation and provide a few
preliminary results. In particular, the construction of the resolvent-type operators $\mathcal
R_a$ for the second order case and $\mathfrak R$ for the fourth order operator is explained.
In Section $3$, we prove the equivalent of Guti\' errez' a priori estimates (Theorem~6 in~\cite{Gu}) for our
fourth order operator by proving $(L^p,L^q)$-estimates for $\mathfrak R$. In Section~4 and Section~5 we will prove
the claims from Theorem~\ref{mainthm}. In Section $4$, using the dual variational approach of \cite{EW}, we
show the existence of a solution $u\in L^p(\mathbb{R}^N)$ to \eqref{intro}. In Section $5$, qualitative properties of
this solution such as its regularity and~\eqref{eq:farfield_expansion} will be established.
\section{Preliminaries} \label{sec:preliminaries}
The Fourier transform of a Schwartz function $f\in\mathcal S(\mathbb{R}^N)$ is defined via
$$
\hat f(\xi) := \frac{1}{(2\pi)^{N/2}} \int_{\mathbb{R}^N} f(x)e^{-ix\xi}\,dx.
$$
As an $L^2$-isometry, the Fourier transform may be extended to tempered distributions and in
particular to $f\in L^q(\mathbb{R}^N),q\in [1,\infty]$. As in the second order case, we have to construct a
resolvent-type operator $\mathfrak R$ associated with $L=\Delta^2-\beta\Delta+\alpha$. Since this is based on
the corresponding approach to Helmholtz operators via~\eqref{eq:def_a1a2}, let us describe this situation
first.
The fundamental solution $g_a$ of the Helmholtz operator $-\Delta-a,a>0$ is given by
\begin{equation} \label{greenhelm}
g_a (x)=\frac{i}{4}\Big(\frac{2\pi |x|}{\sqrt{a}}\Big)^{\frac{2-N}{2}} H^{(1)}_{\frac{N-2}{2}}(\sqrt{a}
|x|),
\end{equation}
see (4.21) in~\cite{Leis}, so that $(-\Delta -a) g_a=\delta$ holds in the distributional sense on $\mathbb{R}^N$.
Here, $H^{(1)}_{(N-2)/2}$ denotes the Hankel function of the first kind of order $(N-2)/2$. From the formulas~9.1.12, 9.1.13 and~9.2.1--9.2.3 in \cite{AbrSte_handbook} we get the following asymptotics:
\begin{align} \label{eq:asymptoticsH}
\begin{aligned}
H^{(1)}_{\frac{N-2}{2}}(r)
&\sim \frac{2}{\sqrt{\pi r}} \Big(e^{i(r-\frac{N-3}{4}\pi)} + O\big(\frac{1}{r} \big)\Big)
&&\text{as }r\to \infty, \\
H^{(1)}_{\frac{N-2}{2}}(r)
&\sim \frac{2i}{\pi}\ln \left(\frac{r}{2} \right)
+O(1)
&&\text{as }r\to 0^+,\;N=2, \\
H^{(1)}_{\frac{N-2}{2}}(r)
&\sim -i\sqrt{\frac{2}{\pi r}}
+O(r^{1/2})
&&\text{as }r\to 0^+,\;N=3, \\
H^{(1)}_{\frac{N-2}{2}}(r)
&\sim -\frac{2i}{\pi r}+ \frac{2i}{\pi} r \ln\left(\frac{r}{2}\right)+ O(r)
&&\text{as }r\to 0^+,\;N=4, \\
H^{(1)}_{\frac{N-2}{2}}(r)
&\sim -\frac{\Gamma(\frac{N-2}{2})i}{\pi}
\left(\frac{2}{r}\right)^{\frac{N-2}{2}}+O(r^{\frac{6-N}{2}})\qquad &&\text{as }r\to 0^+,\;N\geq 5.
\end{aligned}
\end{align}
In particular, for all $a>0$ we have the following estimate:
\begin{equation} \label{asympbessel}
|g_a(x)| \leq C(|x|^{2-N}+|\log(|x|)|) \quad (0<|x|\leq 1),\qquad
|g_a(x)| \leq C|x|^{\frac{1-N}{2}} \quad (|x|\geq 1).
\end{equation}
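For instance, in dimension $N=3$ formula \eqref{greenhelm} reduces to the familiar outgoing Green's function $e^{i\sqrt{a}|x|}/(4\pi|x|)$, and the decay stated in \eqref{asympbessel} can be observed numerically. The following minimal sketch is our own check (the value of $a$ and the sample radii are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.special import hankel1

def g(a, r, N):
    # fundamental solution (greenhelm) of -Delta - a at radius r in dimension N
    return 0.25j * (2 * np.pi * r / np.sqrt(a)) ** ((2 - N) / 2) \
        * hankel1((N - 2) / 2, np.sqrt(a) * r)

a = 2.0
r = np.array([0.5, 1.0, 5.0, 20.0])

# N = 3: agreement with exp(i*sqrt(a)*r)/(4*pi*r) to machine precision.
print(np.max(np.abs(g(a, r, 3) - np.exp(1j * np.sqrt(a) * r) / (4 * np.pi * r))))

# N = 5: |g_a(r)| r^((N-1)/2) stays bounded as r grows, as in (asympbessel).
r_large = np.array([10.0, 100.0, 1000.0])
print(np.abs(g(a, r_large, 5)) * r_large ** 2)
\end{verbatim}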
In the case $N\geq 3$ we have $g_0(r)=\frac{1}{(N-2)N\omega_N} r^{2-N}$ where $\omega_N$ denotes the volume of
the unit ball in $\mathbb{R}^N$, see (4.1) in \cite{GiTr}. Hence, \eqref{asympbessel} also holds in the case
$a=0,N\geq 3$, but not for $N=2$ because $g_0(x)=-\frac{1}{2\pi}\log(|x|)$ grows logarithmically at infinity.
This is responsible for the extra assumption $N\geq 3$ in the case $\alpha=0,\beta<0$ from assumption (A1).
For positive $a$ the functions $g_a$ are known to satisfy the Sommerfeld radiation condition at infinity:
\begin{equation}\label{eq:Sommerfeld_ga}
\nabla g_a(x) - i \sqrt{a} g_a(x) \frac{x}{|x|} = O(|x|^{-\frac{N+1}{2}}) \quad\text{as }|x|\to\infty.
\end{equation}
The resolvent-type operator $\mathcal R_a$ associated with $-\Delta-a$ for $a>0$ is then defined via
\begin{equation} \label{eq:definition_2ndorder_resolvent}
\mathcal{R}_a f := \lim_{\varepsilon \rightarrow 0^+} \mathcal{R}_{a+i\varepsilon} f,
\quad\text{where }
(\mathcal{R}_{a+i\varepsilon} f)(x) :=\frac{1}{(2\pi)^{N/2}} \int_{\mathbb{R}^N}e^{i x \xi}
\frac{\hat{f}(\xi)}{|\xi|^2 - (a+i\varepsilon) }\,d\xi.
\end{equation}
Notice that the same formula holds in the case $a\leq 0$. Here, the limit has to be understood in the $L^p(\mathbb{R}^N)$-sense for $\frac{2(N+1)}{N-1}\leq
p\leq \frac{2N}{N-2}$ whenever $f\in L^{p^\prime}(\mathbb{R}^N)$. In fact, Guti\'errez proved in \cite[Theorem 6]{Gu}
the estimate $\|\mathcal{R}_{a+i\varepsilon} f \|_{L^p (\mathbb{R}^N)}\leq C\| f \|_{L^{p^\prime} (\mathbb{R}^N)}$, with $C$ independent of $\varepsilon$, for all
Schwartz functions $f\in \mathcal{S}(\mathbb{R}^N)$, so that the operator $\mathcal R_a$ is a well-defined bounded
linear operator from $L^{p'}(\mathbb{R}^N)$ to $L^p(\mathbb{R}^N)$ by the Uniform Boundedness Principle. Moreover, this operator may be expressed
in terms of the fundamental solution $g_a$ from~\eqref{greenhelm} via
\begin{equation} \label{eq:EWresolvent_vs_ga1}
(\mathcal{R}_a f)(x)
= (g_a \ast f)(x)
= \frac{1}{a} \mathcal{R}_1 \Big( f(\frac{\cdot}{\sqrt a})\Big)(\sqrt a x)
\qquad\text{for } f\in \mathcal{S}(\mathbb{R}^N),a>0.
\end{equation}
In view of~\eqref{eq:def_a1a2} the corresponding quantities for the fourth order operator $L=\Delta^2
-\beta\Delta + \alpha$ may be defined analogously. We put
\begin{align}\label{eq:def_G}
\begin{aligned}
G &:=\frac{1}{\sqrt{\beta^2-4\alpha}} (g_{a_1} - g_{a_2}), \\
\hat G(\xi)
&= \frac{1}{\sqrt{\beta^2-4\alpha}} \Big(
\frac{1}{|\xi|^2-a_1} - \frac{1}{|\xi|^2-a_2}\Big) \quad( |\xi|^2\neq a_1,a_2).
\end{aligned}
\end{align}
Then we find that $\Delta^2 G -\beta \Delta G +\alpha G= \delta$ in the distributional sense on $\mathbb{R}^N$ so that $G$ is a
fundamental solution of $L$.
Notice that formally the same definition has been used in \cite[Proposition 3.13]{BCDN} when $a_1,a_2<0$,
while our focus lies on the cases $a_1>0>a_2$ or $a_1>a_2>0$ or $a_1=0>a_2$.
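On the Fourier side, the fact that $\hat G$ is the reciprocal of the symbol of $L$ is nothing but a partial fraction identity (recall that $\sqrt{\beta^2-4\alpha}=a_1-a_2$); a short symbolic check of our own, with $t$ standing for $|\xi|^2$:
\begin{verbatim}
import sympy as sp

t, a1, a2 = sp.symbols('t a1 a2')
G_hat = (1 / (t - a1) - 1 / (t - a2)) / (a1 - a2)
print(sp.simplify(G_hat - 1 / ((t - a1) * (t - a2))))   # 0
\end{verbatim}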
From \eqref{greenhelm}--\eqref{asympbessel} we deduce (taking into account the cancellations at zero)
\begin{align} \label{asymgreen}
\begin{aligned}
|G(x)| &\leq \begin{cases}
C(|x|^{4-N} +|\log(|x|)|) &, N\geq 4 \\
C &,N \in\{2,3\}
\end{cases}\quad &&(0<|x|\leq 1),\\
|G(x)| &\leq C|x|^{(1-N)/2} && (|x|\geq 1)
\end{aligned}
\end{align}
Moreover, from~\eqref{eq:Sommerfeld_ga} we get that $G$ satisfies a variant of Sommerfeld's outgoing radiation
condition (see \cite{CoDa}) given by
\begin{align} \label{sommerfeld}
\begin{aligned}
|\nabla G (x) - i \sqrt{a_1} G(x) \frac{x}{|x|} |&=o(|x|^{\frac{1-N}{2}})\ \text{as } |x|\rightarrow
\infty ,\text{if }a_1>0>a_2,\\
|\nabla G (x) - \dfrac{i}{a_1-a_2}(\sqrt{a_1}g_{a_1}(x)-\sqrt{a_2} g_{a_2}(x) )\frac{x}{|x|}|
& =o(|x|^{\frac{1-N}{2}})\ \text{as } |x|\rightarrow \infty,\ \text{if}\ a_1> a_2>0.
\end{aligned}
\end{align}
Notice that in the case $a_1>0>a_2$ the functions $g_{a_2},g_{a_2}'$ decrease
exponentially and hence much faster than $g_{a_1},g_{a_1}'$ at infinity so that \eqref{eq:def_G} allows to
deduce the Sommerfeld condition from~\eqref{eq:Sommerfeld_ga}. Here, $g_{a_2}$ denotes the
Green's function of the Schr\"odinger operator $-\Delta-a_2$. As to the case $a_1>a_2>0$ let us remark
$\sqrt{\beta^2-4\alpha}=a_1-a_2$. Motivated by \eqref{eq:definition_2ndorder_resolvent}--\eqref{eq:def_G}
we may now define
\begin{equation}\label{eq:def_resolvent}
\mathfrak {R} f
:= \lim_{\varepsilon \rightarrow 0} \frac{1}{\sqrt{\beta^2-4\alpha}}\big( \mathcal
R_{a_1+i\varepsilon}f - \mathcal R_{a_2+i\varepsilon} f\big).
\end{equation}
Being interested in real-valued solutions of~\eqref{intro} we will need $\bold R:=\mathrm{Re}(\mathfrak R)$.
We will show in Theorem~\ref{thminvopbound} that $\mathfrak{R}$ is a (complex-valued) bounded linear operator between
$L^p(\mathbb{R}^N)$ and $L^q(\mathbb{R}^N)$ such that $\bold{R} f$ defines a (real-valued) distributional solution
of $\Delta^2 u -\beta \Delta u +\alpha u =f$ provided $f\in L^p(\mathbb{R}^N)$. Actually, better
qualitative properties of $u$ will be shown in
Section~\ref{sec:qualitative}. From~\eqref{eq:def_resolvent} and arguing as in Lemma~4.1 in \cite{EW}
we get
\begin{align} \label{eq:resolvent_symmetry}
\int_{\mathbb{R}^N} (\bold{R} f) g\,dx
= \int_{\mathbb{R}^N} f (\bold{R} g)\,dx
\end{align}
for all $f,g \in \mathcal{S}(\mathbb{R}^N)$. In the next section we provide the $(L^p,L^q)$-estimates for $\mathfrak
R,\bold R$. For notational convenience, the symbol $C$ will stand for a positive number that may change from
line to line.
\section{Resolvent estimates}
In this section we investigate the continuity properties of the resolvent $\mathfrak R$ as an operator
between Lebesgue spaces on $\mathbb{R}^N$. As in the paper by Ev\'equoz and Weth \cite{EW} these properties turn out to
be crucial for proving the existence of solutions of~\eqref{4nls} via a dual variational approach, which we
will set up in the next section. As in the proof of Theorem~6 in~\cite{Gu} the continuity properties are
established via interpolation and the Stein-Tomas theorem (see \cite{Tom_A_restriction} and p.386
in~\cite{Ste_harmonic}). Denoting by $S^{N-1}$ the unit sphere in $\mathbb{R}^N$ and by $\sigma$ the canonical surface
measure on $S^{N-1}$, this theorem reads as follows.
\begin{thm}[Stein-Tomas] \label{STT}
Let $1\leq p\leq \frac{2(N+1)}{N+3}$. Then there is a $C>0$ such that for all $g\in \mathcal S(\mathbb{R}^N)$ the
following inequality holds:
\begin{equation*}
\Big(\int_{S^{N-1}} |\hat{g}(r\omega)|^2 \,d\sigma(\omega)\Big)^{1/2}
\leq C r^{-N(1-1/p)}\|g\|_{L^p(\mathbb{R}^N)}.
\end{equation*}
\end{thm}
Notice that the Stein-Tomas inequality for $r\neq 1$ follows from the classical one ($r=1$) by rescaling.
Moreover, we will use the Riesz-Thorin interpolation theorem, see for instance Theorem~1.3.4 in
\cite{Grafakos}.
\begin{thm}[Riesz-Thorin] \label{thm:Riesz_Thorin}
If $T: L^{p_0}(\mathbb{R}^N)+L^{p_1}(\mathbb{R}^N)\to L^{q_0}(\mathbb{R}^N)+L^{q_1}(\mathbb{R}^N)$ is a linear operator such that $\|T\|_{L^{p_0}(\mathbb{R}^N) \to
L^{q_0}(\mathbb{R}^N)}\leq M_0$ and $\|T\|_{L^{p_1}(\mathbb{R}^N)\to L^{q_1}(\mathbb{R}^N)}\leq M_1$, then, we have
$$
\|T\|_{L^p(\mathbb{R}^N) \to L^q(\mathbb{R}^N)}\leq M_0^{1-\theta}M_1^\theta
$$
provided that
$$
\frac{1}{p}= \frac{1-\theta}{p_0} + \frac{\theta}{p_1} \quad \text{ and } \quad
\frac{1}{q}= \frac{1-\theta}{q_0} + \frac{\theta}{q_1}.
$$
\end{thm}
With these preliminary results at hand we are now in the position to prove the resolvent estimates. We
closely follow the proof of Theorem~6 in Guti\'{e}rrez' paper~\cite{Gu} along with its generalizations from
Theorem~2.1 in~\cite{Ev}.
\begin{thm} \label{thminvopbound}
Assume (A1). Then the operator $\mathfrak{R}$ defined by \eqref{eq:def_resolvent} extends to a
bounded linear operator $\mathfrak{R}:L^p(\mathbb{R}^N)\to L^q(\mathbb{R}^N)$, i.e.
\begin{equation} \label{eq:resolvent_estimate}
\|\mathfrak{R} f \|_{L^q(\mathbb{R}^N)}
\leq C \|f\|_{L^p (\mathbb{R}^N)},
\end{equation}
provided that $p,q\in [1,\infty]$ satisfy
\begin{equation}
\label{condgutie}
\frac{2}{N+1}\leq \frac{1}{p}-\frac{1}{q} \begin{cases}
\leq 1 &,\text{if } N\in\{2,3\}\\
< 1 &,\text{if } N=4 \\
\leq \frac{4}{N} &,\text{if } N\geq 5\\
\end{cases}, \quad\;
\frac{1}{p}> \frac{N+1}{2N},\quad\; \frac{1}{q}<\frac{N-1}{2N}.
\end{equation}
In particular, \eqref{eq:resolvent_estimate} holds for $q=p^\prime$ whenever
$\frac{2(N+1)}{N-1}\leq q\leq \infty$ for $N\in\{2,3\}$, $\frac{10}{3}\leq q<\infty$ for $N=4$ or
$\frac{2(N+1)}{N-1}\leq q\leq \frac{2N}{N-4}$ for $N\geq 5$.
\end{thm}
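That the ``in particular'' part follows from \eqref{condgutie} is elementary arithmetic in the exponents; the following minimal sketch (our own check over a few sample dimensions) makes the verification explicit for $q=p'$ at the endpoint $q=\frac{2(N+1)}{N-1}$:
\begin{verbatim}
from fractions import Fraction as F

def admissible(N, inv_p, inv_q):
    # the exponent conditions (condgutie), with 1/p and 1/q given as fractions
    d = inv_p - inv_q
    if N in (2, 3):
        ok_upper = d <= 1
    elif N == 4:
        ok_upper = d < 1
    else:
        ok_upper = d <= F(4, N)
    return (F(2, N + 1) <= d and ok_upper
            and inv_p > F(N + 1, 2 * N) and inv_q < F(N - 1, 2 * N))

for N in (2, 3, 4, 5, 8):
    q = F(2 * (N + 1), N - 1)          # dual pair p = q', q = 2(N+1)/(N-1)
    print(N, admissible(N, 1 - 1 / q, 1 / q))   # True in every case
\end{verbatim}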
\begin{proof}
We only deal with the case $a_1>0\geq a_2$. Recall that $a_1,a_2$ were defined in \eqref{eq:def_a1a2}. We
will comment on the necessary modifications in the case $a_1>a_2>0$ at the end of the proof. We split the operator $f\mapsto \mathfrak{R}f=G\ast f$ into a resonant
and a nonresonant part.
To this end let $\psi \in \mathcal{S}(\mathbb{R}^N)$ be a function such that $\hat{\psi}\in C_c^\infty(\mathbb{R}^N)$ satisfies
$0\leq \hat{\psi}\leq 1$ and
\begin{equation} \label{eq:def_psi}
\hat{\psi} (\xi)=\begin{cases}
1 &,\text{if}\ ||\xi|-\sqrt a_1|\leq \frac{\sqrt a_1}{6},\\
0 &,\text{if}\ ||\xi|-\sqrt a_1|\geq \frac{\sqrt a_1}{4}
\end{cases}.
\end{equation}
Next define $G_1 := \psi \ast G$ and $G_2:=G -
G_1=(1-\psi)\ast G$. First we establish pointwise bounds for $G_1$ and $G_2$.
By \eqref{asymgreen} we know that $|G(x)|\leq C |x|^{\frac{1-N}{2}}$ for $|x|\geq 1$ so that
$G_1=\psi\ast G$ and $\psi\in\mathcal{S}(\mathbb{R}^N)$ imply
\begin{equation} \label{gutiee1}
|G_1 (x)|\leq C (1+|x|)^{\frac{1-N}{2}} \quad\text{for all }x\in\mathbb{R}^N.
\end{equation}
Furthermore, thanks to \eqref{asymgreen} and \eqref{gutiee1}, we deduce
$$
|G_2(x)|\leq \begin{cases}
C |x|^{4-N} &, \text{if}\ N> 4\\
C(1+|\log|x||)&, \text{if}\ N=4 \\
C&, \text{if}\ N\in\{2,3\}, \\
\end{cases}
\qquad\text{for }|x|\leq 1.
$$
Since $\hat{G}_2= (1-\hat \psi ) \hat G$ and $\hat G$ is given by \eqref{eq:def_G}, we find
$\partial^{\gamma} \hat{G}_2 \in L^1 (\mathbb{R}^N)$ for all multi-indices $\gamma \in \mathbb{N}_0^N$ such that
$|\gamma|\geq N-3$ if $a_2>0$ resp. $|\gamma|\in \{N-3, N-2, N-1\}$ if $a_2=0$. Hence, $|G_2(x)|\leq C_s
|x|^{-s}$ for all $s\geq N-3$ in the case $a_2>0$ whereas $|G_2(x)|\leq C_s
|x|^{-s}$ for $N-3\leq s\leq N-1$ in the case $a_2=0$. From this we deduce
\begin{equation}
\label{estG2}
|G_2 (x)|\leq \begin{cases} C \min \{|x|^{4-N}, |x|^{-N}\} &,\text{if}\ N> 4,\\
C \min \{1+ |\log|x||, |x|^{-N}\} &,\text{if}\ N= 4,\\
C \min \{1,|x|^{-N} \}&, \text{if}\ N\in\{2,3\}, \\
\end{cases}
\qquad\text{for all }x\in\mathbb{R}^N,\;a_2>0
\end{equation}
as well as
\begin{equation}
\label{estG2a_2=0}
|G_2 (x)|\leq \begin{cases} C \min \{|x|^{4-N}, |x|^{-N-1}\} &,\text{if}\ N> 4,\\
C \min \{1+ |\log|x||, |x|^{-N-1}\} &,\text{if}\ N= 4,\\
C \min \{1,|x|^{-N-1} \}&, \text{if}\ N=3, \\
\end{cases}
\qquad\text{for all }x\in\mathbb{R}^N,\;a_2=0.
\end{equation}
We now use these pointwise bounds for $G_2$ in order to prove that the nonresonant part satisfies the mapping
properties asserted above. For $p,q$ as in \eqref{condgutie} we define
$r\in (\frac{N+1}{N-1},\frac{N}{(N-4)_+})$ via $1+\frac{1}{q}=\frac{1}{r}+\frac{1}{p}$ so that Young's
convolution inequality and $G_2\in L^r(\mathbb{R}^N)$ gives
\begin{equation}\label{RT0}
\|G_2 \ast f \|_{L^q(\mathbb{R}^N )} \leq \|G_2\|_{L^r (\mathbb{R}^N )} \|f\|_{L^p(\mathbb{R}^N )} \leq C \|f\|_{L^p(\mathbb{R}^N )}.
\end{equation}
In the limit case $\frac{1}{p}-\frac{1}{q}=1$ and $N\in\{2,3\}$ this inequality follows the same way using
$G_2\in L^\infty(\mathbb{R}^N)$, see \eqref{estG2}. In the limit case $\frac{1}{p}-\frac{1}{q}=\frac{4}{N}$ and
$N\geq 5$ it follows from $G_2\in L^{N/(N-4),w}(\mathbb{R}^N)$ and Young's inequality for weak Lebesgue
spaces, see Theorem~1.4.24 in~\cite{Grafakos}.
Next we estimate the resonant term $G_1 \ast f$. To this end let $\eta \in C_{c}^\infty (\mathbb{R}^N) $ be a cut-off
function such that $\eta(x)=1$ for $|x|\leq 1$ and $\eta(x)=0$ if $|x|\geq 2$. For $j\in \mathbb{N}$ we define
$\eta_j(x):=\eta (x/2^j ) - \eta (x /2^{j-1})$ and $\eta_0 :=\eta$. As a
consequence,
\begin{equation} \label{eq:def_G1j}
G_1=\sum_{j=0}^\infty G_1^j\quad \text{with}\; G_1^j:=G_1 \eta_j \text{ so that }
|G_1^j(x)| \leq C 2^{j(1-N)/2}1_{[2^{j-1},2^{j+1}]}(|x|),
\end{equation}
where the latter estimate follows from \eqref{gutiee1}. Next, let $\varphi \in \mathcal{S}(\mathbb{R}^N)$ be chosen
such that
\begin{equation}\label{eq:def_varphi}
\hat \varphi (\xi )=
\begin{cases}1 &,\text{if}\ ||\xi|-\sqrt a_1|\leq \sqrt a_1/2,\\
0 &,\text{if}\ ||\xi |-\sqrt a_1 |\geq 3\sqrt a_1/4.\end{cases}
\end{equation}
Notice that this definition guarantees $\supp(\hat{G_1})\subset \{\xi\in\mathbb{R}^N: \hat\varphi(\xi) = 1\}$ so that
the following identity holds by the definition of $G_1$ and \eqref{eq:def_psi}:
\begin{equation}\label{eq:def_Qj}
G_1\ast f = (G_1\ast \varphi) \ast f= \sum_{j=0}^\infty Q^j\ast f\qquad \text{with}\ Q^j =G_1^j \ast
\varphi.
\end{equation}
Using Plancherel's Theorem and the Stein-Tomas Theorem (see Theorem~\ref{STT}) we get for all
$f\in\mathcal{S}(\mathbb{R}^N)$ and $g:=\varphi \ast f$
\begin{align} \label{RT2}
\begin{aligned}
\|Q^j \ast f\|_{L^2 (\mathbb{R}^N )}^2
&= \|G_1^j\ast g\|_{L^2(\mathbb{R}^N)}^2 \\
&= \int_{||\xi| -\sqrt a_1| \leq 3 \sqrt a_1 /4} |\hat{G}_1^j (\xi ) \hat{g}(\xi )|^2 \,d\xi \\
&\leq C \int_{\sqrt a_1/4}^{7\sqrt a_1 /4}r^{N-1} |\hat{G}_1^j (r )|^2 \int_{S^{N-1}}
|\hat{g}(r \omega )|^2 d\sigma(\omega) \,dr \\
&\leq C\int_{\sqrt a_1/4}^{7\sqrt a_1/4} r^{N-1}|\hat{G}_1^j (r)|^2 \cdot r^{-2N(1-\frac{N+3}{2(N+1)})}\|g\|_{L^{ 2(N+1)/(N+3)}(\mathbb{R}^N )}^2
\,dr \\
&\leq C\|\hat{G}_1^j\|_{L^2(\mathbb{R}^N)}^2 \|g\|_{L^{ 2(N+1)/(N+3)}(\mathbb{R}^N )}^2 \\
&\leq C \|G_1^j\|_{L^2(\mathbb{R}^N)}^2\|\varphi\|_{L^1 (\mathbb{R}^N )}^2\|f\|_{L^{ 2(N+1)/(N+3)}(\mathbb{R}^N )}^2 \\
&\leq C 2^j \|f\|_{L^{ 2(N+1)/(N+3)}(\mathbb{R}^N )}^2.
\end{aligned}
\end{align}
In the last inequality we estimated the $L^2$-norm of $G_1^j$ by exploiting \eqref{eq:def_G1j}. Furthermore, we derive the inequality
\begin{align} \label{RT1}
\begin{aligned}
\|Q^j \ast f\|_{L^{\tilde q}(\mathbb{R}^N)}
&\leq \|Q^j\|_{L^r(\mathbb{R}^N )} \|f\|_{L^{\tilde p}(\mathbb{R}^N)} \\
&\leq \|\varphi \|_{L^1(\mathbb{R}^N )} \|G_1^j \|_{L^r(\mathbb{R}^N )}\|f\|_{L^{\tilde p}(\mathbb{R}^N)} \\
&\leq C 2^{j((1-N)/2 + N/r)}\|f\|_{L^{\tilde p}(\mathbb{R}^N)} \\
&= C 2^{j((1+N)/2 +N/{\tilde q} - N/{\tilde p})}\|f\|_{L^{\tilde p}(\mathbb{R}^N)}
\qquad\text{if }1\leq \tilde p\leq \tilde q\leq \infty,
\end{aligned}
\end{align}
and $r\in [1,\infty]$ is defined according to $1+\frac{1}{\tilde q} = \frac{1}{r}+\frac{1}{\tilde p}$. Notice
that in the third inequality we estimated $\|G_1^j\|_{L^r(\mathbb{R}^N)}$ once again by exploiting \eqref{eq:def_G1j}.
Interpolating the estimates \eqref{RT2} and
\eqref{RT1} yields
$$
\|Q^j \ast f\|_{L^q(\mathbb{R}^N)}
\leq C 2^{j(1/2 +\theta N(1/2+1/{\tilde q} - 1/{\tilde p}))}\|f\|_{L^p(\mathbb{R}^N)}
\qquad (j\in\mathbb{Z})
$$
provided $\frac{1}{p}=\frac{\theta}{\tilde p}+\frac{(1-\theta)(N+3)}{2(N+1)}$ and
$\frac{1}{q}=\frac{\theta}{\tilde q}+\frac{1-\theta}{2}$ with $\theta\in [0,1],\tilde p,\tilde q\in
[1,\infty]$. Substituting $\tilde q$ yields
$$
\|Q^j \ast f\|_{L^q(\mathbb{R}^N)}
\leq C 2^{j( (1-N)/2 + N/q +\theta N(1- 1/{\tilde p}))}\|f\|_{L^p(\mathbb{R}^N)}
\qquad (j\in\mathbb{Z})
$$
whenever $\frac{1}{p}=\frac{\theta}{\tilde p}+\frac{(1-\theta)(N+3)}{2(N+1)}$ for some $\tilde p\in
[1,\infty],\theta\in [1-\frac{2}{q},1],q\geq 2$. Substituting now $\tilde p$ gives
$$
\|Q^j \ast f\|_{L^q(\mathbb{R}^N)}
\leq C 2^{j( (3N+1)/2(N+1) + N/q - N/p +\theta N(N-1)/2(N+1))}\|f\|_{L^p(\mathbb{R}^N)}
\qquad (j\in\mathbb{Z})
$$
for $1\geq \theta\geq \max\{1-\frac{2}{q},\frac{2(N+1)-p(N+3)}{(N-1)p}\}$ for $q\geq 2$ and $1\leq p\leq
2(N+1)/(N+3)$. Summing up these estimates we get
\begin{align} \label{RT3}
\|G_1\backslasht f\|_{L^q(\mathbb{R}^N )}
\leq C \|f\|_{L^p(\mathbb{R}^N)} \quad\text{provided }
1\leq p\leq \frac{2N(N+1)}{N^2+4N-1},\;\frac{2N}{N-1}<q\leq \infty.
\end{align}
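Note that the dyadic series just summed converges exactly when the exponent of $2^j$ in the previous display
is negative; for instance, at the endpoint $p=\frac{2N(N+1)}{N^2+4N-1}$ the choice $\theta=1-\frac{2}{q}$ is
admissible and this exponent equals
$$
\frac{3N+1}{2(N+1)}+\frac{N}{q}-\frac{N}{p}+\Big(1-\frac{2}{q}\Big)\frac{N(N-1)}{2(N+1)}
=\frac{1}{N+1}\Big(1-N+\frac{2N}{q}\Big),
$$
which is negative precisely for $q>\frac{2N}{N-1}$, in accordance with \eqref{RT3}.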
By duality, the $(L^p,L^q)$-estimate provides the corresponding $(L^{q'},L^{p'})$-estimate that reads
\begin{align} \label{RT4}
\|G_1\backslasht f\|_{L^q(\mathbb{R}^N )}
\leq C \|f\|_{L^p(\mathbb{R}^N)} \quad\text{provided }
1\leq p< \frac{2N}{N+1},\;\frac{2N(N+1)}{(N-1)^2}\leq q\leq \infty.
\end{align}
By \eqref{RT3},\eqref{RT4} the estimates for $G_1\backslasht f$ hold for all $p,q$ as in \eqref{condgutie} under
the additional assumption $1\leq p\leq \frac{2N(N+1)}{N^2+4N-1}$ or $\frac{2N(N+1)}{(N-1)^2}\leq q\leq\infty$. For all other exponents
$p,q$ as in \eqref{condgutie} except on the line $\frac{1}{p}-\frac{1}{q}=\frac{2}{N+1}$ we may choose
\begin{equation}\label{RT6}
p_1\in \Big[1,\frac{2N(N+1)}{N^2+4N-1}\Big],\;q_1\in \Big(\frac{2N}{N-1},\infty\Big],\qquad
p_2\in \Big[1,\frac{2N}{N+1}\Big),\;q_2 \in \Big[\frac{2N(N+1)}{(N-1)^2},\infty\Big]
\end{equation}
such that
\begin{equation}\label{RT7}
\frac{1}{p_1}-\frac{1}{q_1} = \frac{1}{p_2}-\frac{1}{q_2} = \frac{1}{p}-\frac{1}{q}\in
\Big(\frac{2}{N+1},1\Big].
\end{equation}
Then, by \eqref{RT3},\eqref{RT4} the convolution operator $f\mapsto G_1\ast f$ is bounded from $L^{p_j}(\mathbb{R}^N)$ to $L^{q_j}(\mathbb{R}^N)$
for $j=1,2$, and we have $p_1<p<p_2$. This follows from \eqref{RT6},\eqref{RT7} since we are in the case where
neither $1\leq p\leq \frac{2N(N+1)}{N^2+4N-1}$ nor $\frac{2N(N+1)}{(N-1)^2}\leq q\leq\infty$ holds.
In particular, by~\eqref{RT7} we can find $\theta\in (0,1)$ such that
$\frac{1}{p}=\frac{\theta}{p_1}+\frac{1-\theta}{p_2}$ and hence
$\frac{1}{q}=\frac{\theta}{q_1}+\frac{1-\theta}{q_2}$. So the Riesz-Thorin Theorem finally yields
\begin{equation} \label{RT5}
\|G_1\backslasht f\|_{L^q(\mathbb{R}^N)}
\leq C \|f\|_{L^p(\mathbb{R}^N)} \quad
\text{if}\quad
\frac{2}{N+1}<\frac{1}{p}-\frac{1}{q}\leq 1,\;
\frac{1}{p}>\frac{N+1}{2N},\;\frac{1}{q}<\frac{N-1}{2N}.
\end{equation}
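Here the identity $\frac1q=\frac{\theta}{q_1}+\frac{1-\theta}{q_2}$ used in the interpolation follows directly from \eqref{RT7}:
$$
\frac1q=\frac1p-\Big(\frac1p-\frac1q\Big)
=\frac{\theta}{p_1}+\frac{1-\theta}{p_2}
-\theta\Big(\frac1{p_1}-\frac1{q_1}\Big)-(1-\theta)\Big(\frac1{p_2}-\frac1{q_2}\Big)
=\frac{\theta}{q_1}+\frac{1-\theta}{q_2}.
$$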
Hence, the assertion of the theorem for $1/p-1/q>2/(N+1)$ follows from \eqref{RT0}
and~\eqref{RT5}.
We finally consider the missing limiting case $1/p-1/q=2/(N+1)$, which is
established using Lorentz space interpolation following the ideas of Guti\'{e}rrez, see p.20 in~\cite{Gu}. As
in section~5.3 of \cite{SteinWeiss} we denote by $\|\cdot\|_{p,s}$ the standard norm of the Lorentz space $L^{p,s}(\mathbb{R}^N)$ so that the problem
reduces to proving
\begin{equation} \label{lorentze1}
\|G_1 \backslasht f\|_{q ,\infty}\leq C\|f\|_{p,1},
\end{equation}
for $(p,q)=(p_0,q_0)$ and for $(p,q)=(q_0^\prime , p_0^\prime )$ where $q_0 = 2N/(N-1)$ and $p_0= 2N(N+1)/(N^2
+4N -1)$.
So let $E\subset\mathbb{R}^N$ be any measurable set of finite measure and for any given $\lambda>0$ we define
$A:=\{x\in \mathbb{R}^N : |(G_1 \backslasht 1_E) (x) |>\lambda \}$. By Theorem~3.13 in section~5 in~\cite{SteinWeiss} we see that
\eqref{lorentze1} is equivalent to
\begin{equation} \label{lorentze2}
\lambda |A|^{1/q}\leq C |E|^{1/p}\quad\text{whenever }\lambda>0.
\end{equation}
In view of the definition of $Q^j$ from~\eqref{eq:def_Qj} and the estimates \eqref{RT2},\eqref{RT1}, we have
for all $M\in\mathbb{Z}$
\begin{align*}
\lambda|A|
&\leq \int_A |(G_1 \backslasht 1_E ) (x)| \,dx \\
&\leq \sum_{j=0}^\infty \int_A |(Q^j \backslasht 1_E )(x)| \,dx \\
&\leq \sum_{j=0}^\infty \big( \|Q^j \backslasht 1_E \|_{L^2 (\mathbb{R}^N)} |A|^{1/2} 1_{j\leq M}
+ \|Q^j \backslasht 1_E\|_{L^\infty (\mathbb{R}^N)} |A| 1_{j\geq M+1} \big) \\
&\leq \sum_{j=0}^\infty \big( 2^{j/2} |E|^{\frac{N+3}{2(N+1)}} |A|^{1/2} 1_{j\leq M}
+ 2^{j(1-N)/2} |E| |A| 1_{j\geq M+1} \big) \\
&\leq |E|^{\frac{N+3}{2(N+1)}} |A|^{1/2}\cdot \sum_{j=-\infty}^M 2^{j/2}
+ |E| |A| \cdot \sum_{j=M+1}^\infty 2^{j(1-N)/2} \\
&\leq C\left( 2^{M/2} |E|^{\frac{N+3}{2(N+1)}} |A|^{1/2}+ 2^{(M+1)(1-N)/2}|E||A|\right).
\end{align*}
Choosing $M\in\mathbb{Z}$ such that
$$
|E|^{\frac{N-1}{N(N+1)}}|A|^{1/N}\leq 2^M < 2 |E|^{\frac{N-1}{N(N+1)}}|A|^{1/N}
$$
we deduce \eqref{lorentze2} for $(p,q)=(p_0,q_0)$.
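Indeed, with this choice of $M$ both terms in the previous estimate are comparable: inserting
$2^M\asymp |E|^{\frac{N-1}{N(N+1)}}|A|^{1/N}$ gives
$$
\lambda|A| \ \leq\ C\, |E|^{\frac{N-1}{2N(N+1)}+\frac{N+3}{2(N+1)}}\,|A|^{\frac{1}{2N}+\frac12}
\ =\ C\, |E|^{\frac{N^2+4N-1}{2N(N+1)}}\,|A|^{\frac{N+1}{2N}},
$$
that is, $\lambda |A|^{\frac{N-1}{2N}}\leq C|E|^{\frac{N^2+4N-1}{2N(N+1)}}$, which is precisely \eqref{lorentze2} for $(p,q)=(p_0,q_0)$.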
Finally, we consider the case $(p,q)=(q_0^\prime , p_0^\prime)$. Using
the dual version of the inequality~\eqref{RT2} we get similarly as above
\begin{align*}
\lambda|A|
&\leq \sum_{j=0}^\infty \|Q^j \backslasht 1_E \|_{L^{\frac{2(N+1)}{N-1}}(\mathbb{R}^N )}
|A|^{\frac{N+3}{2(N+1)}} 1_{j\leq M} +\sum_{j=0}^\infty \|Q^j \backslasht 1_E\|_{L^\infty (\mathbb{R}^N )} |A| 1_{j\geq
M+1}\\
& \leq C \big( 2^{M/2} |E|^{1/2} |A|^{\frac{N+3}{2(N+1)}}+ 2^{-(M+1) (N-1) /2}|E||A| \big) \\
&\leq C |E|^{\frac{N+1}{2N}} |A|^{\frac{N^2+4N-1}{2N(N+1)}}
\end{align*}
where $M\in\mathbb{Z}$ was chosen according to
$$
|E|^{\frac{1}{N}} |A|^{\frac{N-1}{N(N+1)}}\leq 2^M < 2 |E|^{\frac{1}{N}} |A|^{\frac{N-1}{N(N+1)}}.
$$
So we obtain \eqref{lorentze2} for $(p,q)=(q_0',p_0')$, which finishes the proof of
Theorem~\ref{thminvopbound} under assumption $a_1>0\geq a_2$.
It remains to discuss the modifications for the case $a_1>a_2>0$. One defines the functions $G_1,G_2$ as
above, but with a function $\psi\in\mathcal{S}(\mathbb{R}^N)$ satisfying \eqref{eq:def_psi} both for $a_1$ and for
$a_2$, which is possible due to $a_1>a_2>0$. Accordingly, the function $\varphi\in\mathcal S(\mathbb{R}^N)$ has to be
chosen such that \eqref{eq:def_varphi} holds both for $a_1$ and for $a_2$. Arguing as above yields the
resolvent estimates.
\end{proof}
\begin{rmq} \label{rmqradial}
We show how our results may be improved in the radial setting where G. Ev\'equoz \cite{Evrad} recently
announced the following inequality
\begin{equation}\label{eq:restriction_conjecture}
\|\hat f\|_{L^\infty_{rad}(S^{N-1})}\leq C\|f\|_{L^r_{rad}(\mathbb{R}^N)}\quad\text{for } 1\leq r<\frac{2N}{N+1}
\end{equation}
with a constant depending on $r$. Using this estimate in \eqref{RT2} instead of the Stein-Tomas Theorem,
we get $\|Q^j \backslasht f\|_{L^2_{rad} (\mathbb{R}^N )}^2\leq C 2^j \|f\|_{L^r_{rad}(\mathbb{R}^N )}^2$ for such $r$.
Interpolating this estimate with \eqref{RT1} for $\tilde{q}=\infty,\tilde p=1$, we find
for all $\theta\in [0,1],r\in [1,\frac{2N}{N+1})$ the inequality
$$
\|Q_j\backslasht f\|_{L^q_{rad}(\mathbb{R}^N)}
\leq C 2^{j(\theta/2+(1-\theta)(1-N)/2)} \|f\|_{L^p_{rad}(\mathbb{R}^N)}
\quad\text{if }
\frac{1}{q}=\frac{\theta}{2}+\frac{1-\theta}{\infty},\frac{1}{p}=\frac{\theta}{r}+\frac{1-\theta}{1}.
$$
So one finds $\theta=2/q$ and thus
\begin{equation} \label{RT5bis}
\|Q_j\backslasht f\|_{L^q_{rad}(\mathbb{R}^N)}
\leq C 2^{j/2( 1-N+2N/q)} \|f\|_{L^p_{rad}(\mathbb{R}^N)} \quad
\text{if }
\frac{N}{N-1}(1-\frac{1}{p})<\frac{1}{q}\leq \frac{1}{2},\;
\frac{N+1}{2N}<\frac{1}{p}\leq 1.
\end{equation}
Summing over $j\in\mathbb{N}_0$ one obtains
\begin{equation}\label{RT5bisbis}
\|G_1\backslasht f\|_{L^q_{rad}(\mathbb{R}^N)}
\leq C \|f\|_{L^p_{rad}(\mathbb{R}^N)}
\quad\text{if }
\frac{N}{N-1}(1-\frac{1}{p})<\frac{1}{q}<\frac{N-1}{2N},\;
\frac{N+1}{2N}<\frac{1}{p}\leq 1.
\end{equation}
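Here the geometric series $\sum_{j\geq 0} 2^{j(1-N+2N/q)/2}$ converges precisely because $\frac1q<\frac{N-1}{2N}$,
which accounts for the additional upper restriction on $\frac1q$ in \eqref{RT5bisbis} compared with \eqref{RT5bis}.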
Interpolating between this inequality and its dual, we find
$$
\|G_1\backslasht f\|_{L^q_{rad}(\mathbb{R}^N)}
\leq C \|f\|_{L^p_{rad}(\mathbb{R}^N)}
\quad\text{if } 1\geq \frac{1}{p}>\frac{N+1}{2N},\;
\frac{1}{q}< \frac{N-1}{2N},\; \frac{1}{p}-\frac{1}{q}>\frac{3N-1}{2N^2}.
$$
Indeed, for any given such pair $(p,q)$ one may choose $Q\in (2N/(N-1),q)$ such that
$1/p-1/q > 1-(2N-1)/(QN)>(3N-1)/(2N^2)$ and then define $P$ via
$1/p-1/q=:1/P-1/Q$. Then the couple $(P,Q)$ satisfies \eqref{RT5bisbis} and interpolating the $(L^P,L^Q)$
with the weight $\theta:=(\frac{1}{p}+\frac{1}{Q}-1)/(\frac{1}{P}+\frac{1}{Q}-1) \in [0,1]$ and the
dual $(L^{Q'},L^{P'})$-estimate with the weight $1-\theta$ gives the desired estimate. In particular, the
assumption $\frac{1}{p}-\frac{1}{q}>\frac{2}{N+1}$ from~\eqref{condgutie} may be replaced by the weaker
assumption $\frac{1}{p}-\frac{1}{q}>\frac{3N-1}{2N^2}$ and our existence result from Theorem~\ref{mainthm}
extends to all exponents $p\in (\frac{2N}{N-1},\frac{2N}{N-4})$ when $\Gamma$ is a positive constant (so
that the radial setting is meaningful).
\end{rmq}
\section{Existence of solutions via dual variational methods}
In this section we prove the existence of a nontrivial solution to
$$
\Delta^2 u -\beta \Delta u +\alpha u = \Gamma |u|^{p-2}u\quad\text{in }\mathbb{R}^N.
$$
We proceed
along the lines of Theorem~1.1 in \cite{EW} and Theorem~1.3 in \cite{Ev}. Since the proofs are very
similar, we keep the presentation short and refer to the corresponding results in \cite{Ev,EW} when
necessary. Adopting a dual variational approach we look for a function $v:=\Gamma^{1/p^\prime} |u|^{p-2}u\in L^{p^\prime}(\mathbb{R}^N)$
satisfying the following integral equation
\begin{equation} \label{eq:dual_equation}
\Gamma^{1/p} \bold R ( \Gamma^{1/p} v ) = |v|^{p^\prime -2}v\quad\text{in }\mathbb{R}^N
\end{equation}
whenever $\frac{2(N+1)}{N+3}<p<\frac{2N}{(N-4)_+}$. Recall that $\bold R$ is the real part of the
complex resolvent $\mathfrak R$ of the fourth order linear operator appearing in the equation,
see~\eqref{eq:def_resolvent}.
Notice that equation~\eqref{eq:dual_equation} is the Euler-Lagrange equation associated with the functional
$J\in C^1 (L^{p^\prime}(\mathbb{R}^N),\mathbb{R})$ given by
\begin{equation} \label{eq:def_J}
J(v) := \frac{1}{p^\prime} \int_{\mathbb{R}^N} |v|^{p^\prime} \,dx - \frac{1}{2} \int_{\mathbb{R}^N} \Gamma^{1/p} v \bold{R}( \Gamma^{1/p} v)
\,dx.
\end{equation}
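Indeed, once the boundedness and symmetry of $\bold{R}$ from Proposition~\ref{lemcompact} below are available,
one computes for $v,\phi\in L^{p^\prime}(\mathbb{R}^N)$
$$
J^\prime(v)\phi = \int_{\mathbb{R}^N} |v|^{p^\prime-2}v\,\phi\,dx
- \int_{\mathbb{R}^N} \Gamma^{1/p}\phi\,\bold{R}(\Gamma^{1/p}v)\,dx,
$$
so that critical points of $J$ are precisely the solutions of \eqref{eq:dual_equation}.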
This is a consequence of the following result:
\begin{prop} \label{lemcompact}
Assume (A1),(A2). Then $\bold{R}:L^{p^\prime}(\mathbb{R}^N)\rightarrow L^p(\mathbb{R}^N )$ satisfies
$$
\int_{\mathbb{R}^N} w \bold{R} (v)\,dx = \int_{\mathbb{R}^N} v \bold{R}(w)\,dx
\quad\text{for all }v,w\in L^{p^\prime}(\mathbb{R}^N).
$$
Moreover,
for any bounded and measurable set $B\subset\mathbb{R}^N$ the operator $1_B \bold{R}:L^{p^\prime}(\mathbb{R}^N )\rightarrow
L^p(\mathbb{R}^N)$ is compact.
\end{prop}
\begin{proof}
We argue as in the proof of \cite[Lemma~4.1]{EW}. The self-dual mapping property of $\bold{R}$ (from $L^{p^\prime}(\mathbb{R}^N)$ into its dual $L^{p}(\mathbb{R}^N)$) results
from Theorem~\ref{thminvopbound}, and the symmetry follows from~\eqref{eq:resolvent_symmetry}.
In order to prove the compactness property let $(v_n)\subset L^{p^\prime}(\mathbb{R}^N)$ satisfy $v_n
\rightharpoonup 0$ in $L^{p^\prime}(\mathbb{R}^N)$ as $n\rightarrow \infty$. The boundedness and symmetry of
$\bold R$ yield $\bold{R}(v_n)\rightharpoonup 0$ in $L^p(\mathbb{R}^N)$ as $n\rightarrow \infty$.
On the other hand, we will show in Proposition~\ref{propreg} that for all $R>0$ we have
$$
\|\bold{R}(v_n) \|_{W^{4,p^\prime}(B_R)}
\leq C_R \big( \|\bold{R} (v_n) \|_{L^{p^\prime} (\mathbb{R}^N)} +\|v_n \|_{L^{p^\prime} (\mathbb{R}^N)} \big)
\leq C_R.
$$
Using the compactness of the embedding $W^{4,p^\prime}(B_R) \hookrightarrow L^p(B_R)$ we find a subsequence
denoted again by $(v_n)$ such that $\bold{R}(v_n)$ converges in $L^p(B_R)$
towards its weak limit, which is 0 as we proved above. Since $R>0$ was arbitrary, this finishes the
proof.
\end{proof}
The following result shows that $J$ has the mountain pass geometry and that it admits a bounded
Palais-Smale sequence at its mountain pass level which, as usual, is defined as
follows:
\begin{equation}\label{eq:MP_level}
c = \inf_{\gamma \in P} \max_{t\in [0,1]}J(\gamma (t)).
\end{equation}
Here, $P= \{\gamma \in C([0,1], L^{p^\prime}(\mathbb{R}^N)):\ \gamma (0)=0 \text{ and } J(\gamma (1))<0
\}$.
\begin{lem} \label{PSbounded}
Assume (A1),(A2). Then we have:
\begin{itemize}
\itemsep-2pt
\item[(i)] There exist $\delta >0,\rho\in (0,1)$ such that $J(v) \geq \delta >0$ for all $v\in
L^{p^\prime}(\mathbb{R}^N )$ with $\|v\|_{L^{p^\prime}(\mathbb{R}^N)}=\rho$.
\item[(ii)] There exists $v_0 \in L^{p^\prime}(\mathbb{R}^N)$ such that $\|v_0\|_{L^{p^\prime}(\mathbb{R}^N)}>1$ and $J(v_0)
<0$.
\item[(iii)] There exists a bounded Palais-Smale sequence $(u_n) \subset L^{p^\prime}(\mathbb{R}^N)$ for $J$ at the
level $c>0$.
\end{itemize}
\end{lem}
\begin{proof}
The part (i) is proved exactly as in Lemma~4.2 in~\cite{EW}, see also p.9 in~\cite{Ev}. For the part (ii) we
argue as in Lemma~3.1 in~\cite{MMP}. We find a $z\in L^{p^\prime}(\mathbb{R}^N)$ such that
$$
\int_{\mathbb{R}^N} z \bold R z \,dx >0,
$$
so that $v_0=tz$ for $t$ sufficiently large is a valid choice by~\eqref{eq:def_J}. Indeed, using the
characterization~\eqref{eq:def_resolvent}, we choose $z\in\mathcal{S}(\mathbb{R}^N)$ such that
$|\xi|^2>a_1$ for all $\xi\in \supp(\hat z)$ so that $a_1>a_2$ gives
\begin{align*}
\int_{\mathbb{R}^N} z \bold R z \,dx
= \int_{\supp(\hat z)} \frac{|\hat z(\xi)|^2}{(|\xi|^2-a_1)(|\xi|^2-a_2)} \,d\xi
> 0.
\end{align*}
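Indeed, since $p^\prime<2$, the quadratic term in \eqref{eq:def_J} grows faster than the $t^{p^\prime}$-term, so that $J(tz)\to-\infty$ as $t\to\infty$ for such $z$.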
Part~(iii) is based on the deformation lemma and the proof is the same as the one of Lemma~4.2~(iii) and
Lemma~6.1 in~\cite{EW}.
\end{proof}
Next we need a compactness property for Palais-Smale sequences obtained in part (iii) of the previous lemma.
This will be achieved by establishing the ``nonvanishing property'' in the spirit of Theorem~3.1 in \cite{EW} or
Theorem~3.1 in \cite{Ev}.
\begin{lem} \label{nonvanishing}
Assume (A1),(A2) and let $(u_n) \subset L^{p^\prime}(\mathbb{R}^N) $ be
a bounded sequence such that
$$
\limsup_{n\rightarrow \infty} \Big|\int_{\mathbb{R}^N} u_n \bold{R} u_n \,dx\Big|>0.
$$
Then there exist $R,\zeta>0$ and a sequence $(x_n) \subset \mathbb{R}^N$ such that we have
\begin{equation*}
\int_{B_R(x_n)} |u_n|^{p^\prime}\,dx \geq \zeta \quad\text{for infinitely many }n\in\mathbb{N}.
\end{equation*}
\end{lem}
\begin{proof}
We use the same notation as in the proof of Theorem~\ref{thminvopbound}. We recall from
\eqref{gutiee1},\eqref{estG2} and \eqref{estG2a_2=0}, that there exists a $C>0$ such that for all $x\in \mathbb{R}^N$ the fundamental solution
$G=G_1+G_2$ satisfies
\begin{equation}\label{estG2bis1}
|G_1 (x)|\leq C (1+|x|)^{\frac{1-N}{2}},
\end{equation}
\begin{equation}
\label{estG2bis2}
|G_2 (x)|\leq \begin{cases} C \min \{|x|^{4-N}, |x|^{-N}\}&, \text{if }N> 4,\\
C \min \{1+\log|x|, |x|^{-N}\}&,\text{if } N= 4,\\
C \min \{1, |x|^{-N}\}&,\text{if } N= \{2,3\},\\ \end{cases} \qquad\text{when }\;a_2\neq 0,
\end{equation}
and
\begin{equation}
\label{estG2a_2=0bis}
|G_2 (x)|\leq \begin{cases} C \min \{|x|^{4-N}, |x|^{-N-1}\} &,\text{if}\ N> 4,\\
C \min \{1+ |\log|x||, |x|^{-N-1}\} &,\text{if}\ N= 4,\\
C \min \{1,|x|^{-N-1} \}&, \text{if}\ N=3, \\
\end{cases}
\qquad\text{when }\;a_2=0.
\end{equation}
For a sequence $(u_n)$ as required we assume for contradiction that
\begin{equation} \label{nonvae1}
\lim_{n\rightarrow \infty} \left(\sup_{y\in \mathbb{R}^N}\int_{B_\rho (y)} |u_n|^{p^\prime}\,dx \right)=0
\quad \text{for all }\rho>0.
\end{equation}
From this we will deduce
\begin{align} \label{nonvaeclaim}
\int_{\mathbb{R}^N} u_n (G_1 \backslasht u_n )\,dx\to 0 \;\;\text{and}\;\;\int_{\mathbb{R}^N} u_n (G_2 \backslasht u_n )\,dx \rightarrow 0
\quad\text{as } n\rightarrow \infty,
\end{align}
leading to a contradiction to our assumption.
Both claims are proved almost identically as in \cite{Ev,EW} so that we only provide the main steps.
We first prove the second assertion. For $R>1$ define
$D_R:= \mathbb{R}^N \backslash A_R$ for the annulus $A_R:=\{x\in \mathbb{R}^N :\ 1/R \leq |x|\leq R \}$.
Thanks to \eqref{estG2bis2} and \eqref{estG2a_2=0bis} we have $\|G_2 \|_{L^{p/2}(D_R)}\to 0$ as $R\to\infty$ since
$(N-4)p<2N$ and $(N+1)p>2N$. So Young's convolution inequality implies
\begin{equation} \label{nonvae2}
\sup_{n\in \mathbb{N}} \Big| \int_{\mathbb{R}^N}u_n \left[ (1_{D_R} G_2)\backslasht u_n \right] \,dx\Big|\leq
\|G_2\|_{L^{p/2}(D_R)} \sup_{n\in \mathbb{N}}\|u_n\|_{L^{p^\prime}(\mathbb{R}^N)}^2 \rightarrow 0
\text{ as }R\to \infty.
\end{equation}
On the other hand, in the case $N\geq 2,N\neq 4$ we may use the estimates from the bottom of p.706
in~\cite{EW} with $R^{N-2}$ replaced by $R^{N-4}$ and in the case $N= 4$ the estimates from p.11 in~\cite{Ev}
with $R^{4/p}(1+|\log(R)|)$ replaced by $R^{8/p}(1+|\log(R)|)$ to find
\begin{align}\label{nonvae3}
\int_{\mathbb{R}^N} u_n \left[(1_{A_R} G_2)\backslasht u_n \right] \,dx \to 0 \quad\text{as }n\to\infty\quad
\text{for all }R>0.
\end{align}
Combining \eqref{nonvae2} and \eqref{nonvae3} we get
$$
\int_{\mathbb{R}^N} u_n (G_2 \backslasht u_n )\,dx \to 0 \quad\text{as }n\to\infty.
$$
Next, we turn to the first claim. To this end we set $M_R:= \mathbb{R}^N \backslash B_R$. As in
Proposition~3.3 in \cite{EW} or Claim 2 on p.11 in \cite{Ev} one proves the inequality
\begin{equation*}
\|[1_{M_R} G_1]\backslasht f \|_{L^p (\mathbb{R}^N)} \leq C R^{-(N-1)/2 + (N+1)/p }\|f\|_{L^{p^\prime}(\mathbb{R}^N )}
\end{equation*}
whenever $f\in \mathcal{S} (\mathbb{R}^N)$ satisfies $\supp(\hat f) \subset \{\xi\in\mathbb{R}^N : ||\xi|- \sqrt a_1|\leq
\sqrt a_1/2 \}$.
Notice that $G_1$ satisfies, qualitatively, the same bounds as the function $\Phi_1$ in \cite{EW},
see~\eqref{estG2bis1} and the estimates~(26),(7) in~\cite{EW},\cite{Ev}, respectively. The proof of Lemma~3.4
in \cite{EW} and Claim~3 on p.12 in \cite{Ev} transfers literally to our situation proving
$$
\int_{\mathbb{R}^N} u_n (G_1 \backslasht u_n )\,dx \to 0 \quad\text{as }n\to\infty,
$$
which finishes the proof.
\end{proof}
With these preparations we can finally prove the existence of a nontrivial solution for~\eqref{4nls}.
\begin{thm} \label{thmex}
Assume (A1),(A2). Then there exists a nontrivial critical point $v\in L^{p^\prime}(\mathbb{R}^N)$ of $J$ at the mountain pass level $c$ defined in
~\eqref{eq:MP_level}.
\end{thm}
\begin{proof}
Let $(v_n) \subset L^{p^\prime}(\mathbb{R}^N)$ be a bounded Palais-Smale sequence for $J$ given by
Lemma~\ref{PSbounded}~(iii).
Then we have
$$
\lim_{n\rightarrow \infty} \int_{\mathbb{R}^N} \Gamma^{1/p} v_n \bold{R} ( \Gamma^{1/p} v_n) \,dx
= \frac{2 p^\prime}{2-p^\prime} \lim_{n\rightarrow \infty} \left( J(v_n) - \frac{J^\prime (v_n) v_n}{p^\prime}\right)
= \frac{2p^\prime }{2-p^\prime} c>0.
$$
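Here we used that, by \eqref{eq:def_J},
$$
J(v)-\frac{J^\prime(v)v}{p^\prime}
=\Big(\frac{1}{p^\prime}-\frac12\Big)\int_{\mathbb{R}^N}\Gamma^{1/p}v\,\bold{R}(\Gamma^{1/p}v)\,dx
=\frac{2-p^\prime}{2p^\prime}\int_{\mathbb{R}^N}\Gamma^{1/p}v\,\bold{R}(\Gamma^{1/p}v)\,dx,
$$
together with $J(v_n)\to c$ and $J^\prime(v_n)v_n\to 0$, the latter because $(v_n)$ is bounded and $\|J^\prime(v_n)\|\to 0$.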
Hence, Lemma~\ref{nonvanishing} implies that there exist $R,\zeta>0$ and a sequence $(x_n) \subset \mathbb{R}^N$
such that, up to a subsequence and reindexing $v_n$, we have
\begin{equation} \label{mainthme1}
\int_{B_R (x_n)}|v_n|^{p^\prime} \,dx \geq \zeta\quad \text{for all } n\in\mathbb{N}.
\end{equation}
Then the sequence $(w_n)\subset L^{p^\prime}(\mathbb{R}^N)$ given by
$w_n(x):= v_n (x+x_n)$ is bounded with $J(w_n)\to c$ and $\|J^\prime (w_n)\|=\|J^\prime (v_n)\|\to 0$ as
$n\to\infty$. For $\phi\in L^{p^\prime}(\mathbb{R}^N)$ with $\supp(\phi)\subset
\overline{B_R}$ and all $n,m\in\mathbb{N}$ we get
\begin{align*}
&\left| \int_{\mathbb{R}^N} (|w_n|^{p^\prime -2} w_n - |w_m|^{p^\prime -2}w_m ) \phi \,dx \right| \\
&= \left|J^\prime (w_n) \phi - J^\prime (w_m) \phi + \int_{B_R} \bold{R}( \Gamma^{1/p} (w_n - w_m )) \Gamma^{1/p} \phi\,dx \right|\\
&\leq (\|J^\prime (w_n)\| + \|J^\prime (w_m)\|) \|\phi \|_{L^{p^\prime}(\mathbb{R}^N )}
+ \| \Gamma^{1/p} 1_{B_R} \bold{R} ( \Gamma^{1/p} (w_n - w_m))\|_{L^p (\mathbb{R}^N)} \|\phi \|_{L^{p^\prime}(\mathbb{R}^N )}.
\end{align*}
Using $\|J^\prime (w_n)\|=\|J^\prime (v_n)\|\to 0$ as well as the compactness of $1_{B_R}\bold R$ (see
Proposition~\ref{lemcompact}), we obtain that a subsequence of $(|w_n|^{p^\prime -2}w_n)$ is a
Cauchy sequence in $L^p(B_R)$. So there exists $w\in L^{p^\prime}(B_R)$ such that $|w_n|^{p^\prime -2}w_n
\rightarrow |w|^{p^\prime -2}w$ strongly in $L^p(B_R)$. Moreover, we deduce from
$w_n(x)=v_n(x+x_n)$ and~\eqref{mainthme1} that
$$
\int_{B_R} |w|^{p^\prime} \,dx >0,
$$
which implies $w\neq 0$. Finally we observe that $w$ is a critical point of $J$ because we have for
all $\phi \in C_0^\infty (\mathbb{R}^N)$ the identity
\begin{align*}
J^\prime (w) \phi
&= \left(\int_{\mathbb{R}^N} |w|^{p^\prime -2} w \phi\,dx - \int_{\mathbb{R}^N}\bold{R}( \Gamma^{1/p} w) \Gamma^{1/p} \phi \,dx \right) \\
&= \lim_{n\to \infty} \left(\int_{\mathbb{R}^N} |w_n|^{p^\prime -2} w_n \phi\,dx - \int_{\mathbb{R}^N}
\bold{R}( \Gamma^{1/p} w_n) \Gamma^{1/p} \phi\,dx \right) \\
&= \lim_{n\rightarrow \infty} J^\prime (w_n) \phi \\
&=0,
\end{align*}
where we used that $\bold{R}$ is a bounded linear operator and that $|w_n|^{p^\prime -2}w_n \rightarrow
|w|^{p^\prime -2}w$ in $L^p_{loc}(\mathbb{R}^N)$. Therefore, $w\in L^p (\mathbb{R}^N)$ is a nontrivial critical point of
$J$.
\end{proof}
\section{Qualitative properties of solutions}\label{sec:qualitative}
In this section we investigate the regularity and the asymptotic behavior of the critical point that we
obtained in Theorem~\ref{thmex}. First we consider the local and global regularity of critical points of $J$
and thus of the solution obtained above. In particular we will see that, not surprisingly, this critical
point is a strong solution of~\eqref{4nls}. Then we investigate its behaviour at infinity in more detail by
establishing its pointwise decay as well as its asymptotics at infinity, also known as the farfield expansion.
\subsection{Regularity of solutions}
We start by showing a local regularity result for distributional solutions of the linear problem associated
with~\eqref{4nls}. We refer to \cite{Man_reg} and \cite{BaoZh_homogeneous,BaoZh_nonhomogeneous} for other
results in this direction.
\begin{prop} \label{propreg}
Assume (A1). Let $f\in L^q_{loc}(\mathbb{R}^N)$ for $q\in (1,\infty)$ and let
$u\in L^q_{loc}(\mathbb{R}^N)$ be a distributional solution of
$
\Delta^2 u-\beta \Delta u + \alpha u = f \text{ in } \mathbb{R}^N.
$
Then $u\in W^{4,q}_{loc}(\mathbb{R}^N)$ is a strong solution and for all $r>0$ there exists a
constant $C>0$ depending on $r,\ q$ and $N$ such that for all $x_0 \in \mathbb{R}^N$
\begin{equation}\label{eq:CZ_estimates}
\|u \|_{W^{4,q} (B_r(x_0))}\leq C
(\|u\|_{L^{q}(B_{2r}(x_0))}+\|f\|_{L^{q}(B_{2r}(x_0))}).
\end{equation}
\end{prop}
\begin{proof}
The proof follows the lines of the proof of Proposition A.1 in~\cite{EW}. We use a mollifier $\rho\in
C_0^\infty(\mathbb{R}^N)$ and set $u_\varepsilon:=u\backslasht \rho_\varepsilon,f_\varepsilon:=f\backslasht \rho_\varepsilon$ for
$\rho_\varepsilon:=\varepsilon^{-N}\rho(\varepsilon^{-1}\cdot)$. Then the equation
$$
\Delta^2 u_\varepsilon -\beta \Delta u_\varepsilon+\alpha u_\varepsilon
= f_\varepsilon \quad \text{ in } \mathbb{R}^N
$$
holds in the classical sense.
Applying the interior $L^p$-estimates for higher order elliptic problems from Theorem 14.1' in
\cite{ADN_estimates_I} we get for sufficiently small $\varepsilon>0$ and for all $r>0,x_0\in\mathbb{R}^N$
\begin{align*}
\|u_\varepsilon\|_{W^{4,q}(B_r(x_0))}
&\leq C (\|u_\varepsilon\|_{L^q(B_{3r/2}(x_0))}+\|f_\varepsilon\|_{L^q(B_{3r/2}(x_0))}) \\
&\leq C (\|u\|_{L^q(B_{2r}(x_0))}+\|f \|_{L^q(B_{2r}(x_0))}).
\end{align*}
Since $u_\varepsilon-u_\delta$ solves the corresponding homogeneous Dirichlet problem and $u_\varepsilon\to u$
in $L^q_{loc}(\mathbb{R}^N)$ as $\varepsilon\to 0$, we deduce that $(u_\varepsilon)$ is a Cauchy sequence
in $W^{4,q}(B_r(x_0))$ as $\varepsilon\to 0$ for any $r>0,x_0\in\mathbb{R}^N$, hence $u$ lies in $W^{4,q}_{loc}(\mathbb{R}^N)$
and satisfies the estimates~\eqref{eq:CZ_estimates}.
\end{proof}
We go on with proving global regularity results for solutions of the linear problem.
\begin{prop} \label{prop:global_reg_linear}
Assume (A1). Let $f\in L^{p^\prime}(\mathbb{R}^N)\cap L^q(\mathbb{R}^N)$ for $p\geq \frac{2(N+1)}{N-1},q\in
(1,\infty)$ and $u=\bold{R} f \in L^q(\mathbb{R}^N)$. Then $u\in W^{4,q}(\mathbb{R}^N)$ is a strong
solution of $
\Delta^2 u-\beta \Delta u+\alpha u =f \text{ in } \mathbb{R}^N
$
and there is a $C>0$ such that
$$
\|u\|_{W^{4,q}(\mathbb{R}^N)} \leq C(\|u\|_{L^q(\mathbb{R}^N)}+\|f\|_{L^q(\mathbb{R}^N)}).
$$
\end{prop}
\begin{proof}
We first discuss the case $a_1>0>a_2$. With the notation from the previous
proposition we have
\begin{equation} \label{eq:ueps_veps_system}
-\Delta u_\varepsilon - a_1 u_\varepsilon = v_\varepsilon\quad\text{in }\mathbb{R}^N,\qquad
-\Delta v_\varepsilon - a_2 v_\varepsilon = \sqrt{\beta^2-4\alpha} f_\varepsilon\quad\text{in }\mathbb{R}^N
\end{equation}
for some function $v_\varepsilon\in W^{2,q}(\mathbb{R}^N)$. Thanks to $a_2<0$, the second equation and global
$L^p$-estimates (see for instance Theorem C.1.3.(iii) in \cite{LorBer_analytical})
imply that $(v_\varepsilon)$ is a Cauchy sequence in $W^{2,q}(\mathbb{R}^N)$ and satisfies
$$
\|v_\varepsilon\|_{W^{2,q}(\mathbb{R}^N)}\leq C \|f_\varepsilon\|_{L^q(\mathbb{R}^N)} \leq C \|f\|_{L^q(\mathbb{R}^N)}.
$$
From the first equation for $u_\varepsilon$ we deduce
$$
-\Delta (\Delta u_\varepsilon) + \Delta u_\varepsilon = (a_1+1)\Delta u_\varepsilon + \Delta v_\varepsilon,
$$
so that the same estimates as above together with interpolation estimates imply
\begin{align*}
\|u_\varepsilon\|_{W^{4,q}(\mathbb{R}^N)}
&\leq C\|\Delta u_\varepsilon\|_{W^{2,q}(\mathbb{R}^N)} \\
&\leq C(\|(a_1+1)\Delta u_\varepsilon + \Delta v_\varepsilon\|_{L^q(\mathbb{R}^N)}) \\
&\leq C (\|u_\varepsilon\|_{W^{2,q}(\mathbb{R}^N)} + \|v_\varepsilon\|_{W^{2,q}(\mathbb{R}^N)}) \\
&\leq \frac{1}{2} \|u_\varepsilon\|_{W^{4,q}(\mathbb{R}^N)} + C(\|u_\varepsilon\|_{L^q(\mathbb{R}^N)} + \|f_\varepsilon\|_{L^q(\mathbb{R}^N)})
\\
&\leq \frac{1}{2} \|u_\varepsilon\|_{W^{4,q}(\mathbb{R}^N)} + C(\|u\|_{L^q(\mathbb{R}^N)}+\|f\|_{L^q(\mathbb{R}^N)}),
\end{align*}
which proves the boundedness of $(u_\varepsilon)$ in $W^{4,q}(\mathbb{R}^N)$. Performing the corresponding estimates for
$u_\varepsilon-u_\delta$, which solves \eqref{eq:ueps_veps_system} with $f_\varepsilon$ replaced by 0, we find that
$(u_\varepsilon)$ is a Cauchy sequence as $\varepsilon\to 0$ in $W^{4,q}(\mathbb{R}^N)$ converging to
$u\in W^{4,q}(\mathbb{R}^N)$ satisfying also the above estimate.
In the other case $a_1>a_2\geq 0$ we rewrite the equation as
$
\Delta^2 u-\beta \Delta u - u = \tilde f \text{ in } \mathbb{R}^N
$
where $\tilde f:= f-(1+\alpha)u$. Now the symbol of the differential operator on the left hand side has
one positive and one negative zero and $\|\tilde f\|_{L^q(\mathbb{R}^N)} \leq C(\|f\|_{L^q(\mathbb{R}^N)} + \|u\|_{L^q(\mathbb{R}^N)})$,
so that our considerations from above yield the result.
\end{proof}
With these preliminary results we deduce the regularity of the critical point constructed in
Theorem~\ref{mainthm}.
\begin{thm}\label{thm_regularity}
Assume (A1),(A2) and let $u\in L^p(\mathbb{R}^N)$ be a solution to $u=\bold{R}(\Gamma |u|^{p-2}u)$. Then $u\in
W^{4,q}(\mathbb{R}^N)\cap C^{3,\alpha}(\mathbb{R}^N)$ for all $q\in [p,\infty),\alpha\in [0,1)$ and $u$
is a strong solution of
$$
\Delta^2 u-\beta \Delta u+\alpha u = \Gamma|u|^{p-2}u \quad \text{in } \mathbb{R}^N.
$$
\end{thm}
\begin{proof}
It suffices to prove $u\in L^\infty(\mathbb{R}^N)$. Indeed, having shown this, we may apply
Proposition~\ref{prop:global_reg_linear} to $f:=\Gamma |u|^{p-2}u \in L^{p^\prime}(\mathbb{R}^N)\cap
L^q(\mathbb{R}^N)$ for all $q\in [p^\prime,\infty)$. Proposition~\ref{prop:global_reg_linear} yields $u\in W^{4,q}(\mathbb{R}^N)$
for all $q\in [p,\infty)$ and thus, by Morrey's imbedding Theorem, $u\in C^{3,\alpha}(\mathbb{R}^N)$ for all
$\alpha\in [0,1)$. In order to prove the boundedness of $u$ we iterate the local estimates from
Proposition~\ref{propreg}. From Sobolev's imbedding theorem and Proposition~\ref{propreg} we get for all
$q\in [1,\infty]$ and all $s\geq \frac{Nq}{N+4q},s>1$
\begin{align*}
\|u\|_{L^q(B_r(x_0))}
&\leq C \|u\|_{W^{4,s}(B_r(x_0))} \\
&\leq C(\|u\|_{L^s(B_{2r}(x_0))}+\|\Gamma
|u|^{p-2}u\|_{L^s(B_{2r}(x_0))}) \\
&\leq C(1+\|u\|_{L^{s(p-1)}(B_{2r}(x_0))}^{p-1})
\end{align*}
for some positive $C$ depending on $r,N,s,q,\|\Gamma\|_{L^\infty(\mathbb{R}^N)}$ but not on $x_0$. Hence, for
$q_0:=\infty$ and $q_{n+1}:=\max\{\frac{Nq_n}{N+4q_n}(p-1),p\}$ we get
$$
\|u\|_{L^{q_n}(B_{2^nr}(x_0))} \leq C_n(1+\|u\|_{L^{q_{n+1}}(B_{2^{n+1}r}(x_0))})^{p-1} \quad\text{for
}n\in\mathbb{N}_0.
$$
Since $q_n\geq p>\frac{N(p-2)}{4}$, we have $\frac{Nq_n}{N+4q_n}(p-1)<q_n$, and the map $q\mapsto \frac{Nq}{N+4q}(p-1)$
has its only fixed point at $\frac{N(p-2)}{4}<p$; hence the sequence $(q_n)$ strictly decreases until it reaches the
value $p$ after finitely many steps. This implies
\begin{align*}
\|u\|_{L^\infty(B_r(x_0))}
\leq P(\|u\|_{L^p(\mathbb{R}^N)})
\end{align*}
for some polynomial $P$ with positive coefficients and the proof is finished, since $P$ does not depend
on $x_0$.
\end{proof}
\subsection{Decay and farfield expansion}
In this section we establish the pointwise decay and the farfield expansion of the solution obtained in
Theorem~\ref{thmex}. The proof of this result follows again the lines of earlier results due to Ev\'{e}quoz
and Weth.
\begin{thm}\label{thm:decay}
Assume (A1),(A2) and $u=\bold{R}(f)$ where $f:=\Gamma |u|^{p-2}u\in L^{p^\prime}(\mathbb{R}^N)$.
Then
\begin{equation} \label{eqfarfield}
\lim_{R\rightarrow \infty} \frac{1}{R}\int_{B_R} | u(x) -\mathrm{Re}(U_f )(x)|^2 \,dx =0.
\end{equation}
Moreover, if $N\in\{2,3\}$ or $N\geq 4,p>\frac{3N-1}{N-1}$, then we have
\begin{itemize}
\item[(i)] There is a $C>0$ such that $|u(x)|\leq C (1+|x|)^{\frac{1-N}{2}}$ for all $x\in\mathbb{R}^N$.
\item[(ii)] $u(x) =\mathrm{Re}(U_f)(x) + o(|x|^{\frac{1-N}{2}})$ as $|x|\to\infty$.
\end{itemize}
\end{thm}
\begin{proof}
From Theorem~\ref{thminvopbound} we get that $u\in L^r(\mathbb{R}^N)$ implies $f\in L^{r/(p-1)}(\mathbb{R}^N)$ and
thus $u\in L^{\tilde r}(\mathbb{R}^N)$ whenever
$$
\tilde r:= \frac{r(N+1)}{-2r+(p-1)(N+1)}\quad\text{and}\quad
\frac{2N(N+1)}{N^2+4N-1}(p-1)=:r_* <r\leq p.
$$
So we put $r_0:= p$ and define $r_n$ inductively via
$$
r_{n+1} := \frac{1}{2} \Big( r_n + \max\big\{ \frac{r_n(N+1)}{-2r_n+(p-1)(N+1)}, r_* \big\}\Big).
$$
Then $r\leq p<\frac{N+1}{2}(p-2)$ allows to prove inductively $r_*<r_{n+1}<r_n\leq r$ for all $n\in\mathbb{N}_0$
so that the sequence $(r_n)$ strictly decreases
to $r_*$, which gives $u\in L^\infty(\mathbb{R}^N)\cap L^r(\mathbb{R}^N)$ for all $r>\frac{2N}{N-1}$. (The boundedness was proved in
Theorem~\ref{thm_regularity}.) In particular, we have $f=\Gamma |u|^{p-2}u \in L^{r'}(\mathbb{R}^N)$ for
$r=\frac{2(N+1)}{N-1}$ as well as the representation formula
\begin{align} \label{eq:representation_formula_for_u}
u(x)
= (\mathrm{Re}\, G\ast f)(x)
= \frac{1}{\sqrt{\beta^2-4\alpha}} \Big( ( \mathrm{Re}\, g_{a_1}\ast f)(x) - (\mathrm{Re}\, g_{a_2}\ast f)(x)\Big),
\end{align}
see~\eqref{eq:def_G}. Our strategy is to prove first~\eqref{eqfarfield} for $N\geq 4$. Then we show
~(i),(ii) for $N\in\{2,3\}$ or $N\geq 4,p>\frac{3N-1}{N-1}$. Since~(i),(ii)
implies~\eqref{eqfarfield}, this will prove the assertion.
In the case $N\geq 4$ and $a_1>a_2>0$ we may directly deduce \eqref{eqfarfield} from Proposition~2.7
in~\cite{EW}. Indeed, the
formula~\eqref{eq:representation_formula_for_u} and~\eqref{eq:EWresolvent_vs_ga1} yields
$$
u(x)
= \frac{1}{\sqrt{\beta^2-4\alpha}} \mathrm{Re}\Big( \frac{1}{a_1} \mathcal{R}_1\Big(f(\frac{\cdot}{\sqrt
a_1})\Big)(\sqrt a_1 x) - \frac{1}{a_2} \mathcal{R}_1\Big(f(\frac{\cdot}{\sqrt
a_2})\Big)(\sqrt a_2 x)\Big)
$$
where $\mathcal{R}_1$ is the resolvent for the Helmholtz operator $-\Delta -1$ studied in~\cite{Ev,EW}.
Using $f\in L^{r'}(\mathbb{R}^N)$ for $r=\frac{2(N+1)}{N-1}$ (see above) and Proposition~2.7 in~\cite{EW} yields
\eqref{eqfarfield} because of
\begin{align*}
& \frac{1}{\sqrt{\beta^2-4\alpha}} \frac{1}{a} \sqrt{\frac{\pi}{2}} \frac{e^{i(\sqrt{a}
|x|-\frac{N-3}{4}\pi)}}{|\sqrt{a} x|^{\frac{N-1}{2}}} \widehat{f\big(\frac{\cdot}{\sqrt
a}\big)}\Big(\frac{\sqrt a x}{|\sqrt a x|}\Big) \\
&= \frac{1}{\sqrt{\beta^2-4\alpha}} \frac{1}{a}\sqrt{\frac{\pi}{2}} \frac{e^{i(\sqrt{a}
|x|-\frac{N-3}{4}\pi)}}{|\sqrt{a} x|^{\frac{N-1}{2}}} \sqrt{a}^N \hat{f}(\sqrt a \frac{x}{|x|}) \\
&= \frac{a^{\frac{N-3}{4}}}{\sqrt{\beta^2-4\alpha}} \sqrt{\frac{\pi}{2}} \frac{e^{i(\sqrt{a}
|x|-\frac{N-3}{4}\pi)}}{|x|^{\frac{N-1}{2}}} \hat{f}\big(\sqrt{a} \frac{x}{|x|}\big)
\end{align*}
for $a\in\{a_1,a_2\}$, see the definition of $U_f$ in front of Theorem~\ref{mainthm}.
In the case $N\geq 4$ and $a_1>0\geq a_2$ we show that the result of the above-mentioned Proposition remains true
after slight modification. To this end we first consider $\tilde f\in C_0^\infty(\mathbb{R}^N)$. Exactly the same
proof as Proposition~2.6 in~\cite{EW} yields $(\mathfrak R \tilde f)(x) = U_{\tilde f}(x) +
o(|x|^{(1-N)/2})$. Indeed, the proof of this proposition only exploits the formula
$$
(\mathcal{R}_1\tilde f)(x) = \gamma_N \int_{B_R}
\frac{e^{i|x-y|}}{|x-y|^{(N-1)/2}}(1+\delta(|x-y|))\tilde f(y)\,dy
$$
for $|x|\geq 2R$ and $R$ is chosen so large that $\supp(\tilde f)\subset B_R$ holds. Here, the function
$\delta$ satisfies $\sup_{r\geq 1}r|\delta(r)|<\infty$ where $\gamma_N =
\frac{1}{2}(2\pi)^{(1-N)/2}e^{-i(N-3)\pi/4}$, see the bottom of~p.697 in~\cite{EW}. In view of
$$
(\mathfrak{R}\tilde f)(x) = \frac{\gamma_N}{\sqrt{\beta^2-4\alpha}}
\int_{B_R} \frac{e^{i|x-y|}}{|x-y|^{(N-1)/2}}(1+\tilde \delta(|x-y|))\tilde f(y)\,dy
$$
for $|x|\geq 2R$ large enough and $\sup_{r\geq 1}\sqrt{r}|\tilde
\delta(r)|<\infty$ we therefore get $(\mathfrak R \tilde f)(x) = U_{\tilde f}(x) + o(|x|^{(1-N)/2})$
whenever $\tilde f\in C_0^\infty(\mathbb{R}^N)$.
Notice that this property of $\tilde\delta$ follows from \eqref{asymgreen}, i.e., from the fact that
the Green's function $g_{a_2}$ decays exponentially if $a_2<0$ or like $|x|^{2-N}$ if $a_2=0$ and hence
faster than $g_{a_1}$ at infinity. With this result we deduce~\eqref{eqfarfield} by approximation of $f$ in $L^{r^\prime}(\mathbb{R}^N)$ by test
functions exactly as in the proof of Proposition~2.7~\cite{EW}.
Now let us assume $N=3$ or $N\geq 4,p>\frac{3N-1}{N-1}$. Then we have $f=Vu$ and $u= \mathrm{Re}\, G\ast
Vu$ for $V=\Gamma|u|^{p-2}\in L^q(\mathbb{R}^N)\cap L^s(\mathbb{R}^N)$ with $s=\infty>\frac{N}{2}$ and $q<\frac{2N}{N+1}$
because of $\frac{2N}{N+1}>\frac{2N}{(N-1)(p-2)}$. Furthermore, we have $Vu \in L^1(\mathbb{R}^N)\cap L^\infty(\mathbb{R}^N)$ due to
$\frac{2N}{(N-1)(p-1)}<1$. Exploiting the estimate $|G(z)|\leq C\max\{|z|^{2-N},|z|^{\frac{1-N}{2}}\}$
claim (i) follows from Lemma~2.9 in~\cite{EW}. In particular, $p>\frac{3N-1}{N-1}$ implies
$|f(x)|=\Gamma(x)|u(x)|^{p-1}\leq C(1+|x|)^{-N-\delta}$ for some $\delta>0$ and all $x\in\mathbb{R}^N$, so that
\eqref{eq:representation_formula_for_u} together with Proposition~2.8 in~\cite{EW} ($a_2>0$) or
the proof of Claim 2 in Proposition~6.3 in \cite{BCGJ2} ($a_2=0$), respectively, gives
\begin{align*}
u(x)
= ( \mathrm{Re}\, G\ast f)(x)
= \mathrm{Re}(U_f)(x) + o(|x|^{\frac{1-N}{2}}).
\end{align*}
In the case $N=2$ the corresponding result follows from Proposition~2.2 in~\cite{Ev}.
This finishes the proof.
\end{proof}
Finally let us add that the solution $u=\bold{R}(\Gamma|u|^{p-2}u)$ described in Theorem~\ref{thm:decay} is
the real part of a complex-valued solution
$$
\tilde u:=\mathfrak R(\Gamma|u|^{p-2}u) = G\ast (\Gamma|u|^{p-2}u).
$$
As above, one shows that in the case $a_1>0>a_2$ this function satisfies Sommerfeld's outgoing radiation
condition in the following integral sense
\begin{equation}
\label{some2}
\lim_{R\rightarrow \infty}
\frac{1}{R}\int_{B_R} \Big|\nabla \tilde u (x) - i \sqrt{a_1} \tilde{u}(x) \frac{x}{|x|} \Big|^2 \,dx =0
\quad (a_1>0>a_2),
\end{equation}
see equation~(53) in~\cite{EW} for the corresponding result in the Helmholtz case. Notice that
in this way the solution $\tilde u$ inherits the radiation condition from the fundamental solution $G$,
see~\eqref{sommerfeld}. In the case $a_1>a_2>0$,
however, defining $\tilde u_j:= g_{a_j}\backslasht (\Gamma|u|^{p-2}u)$ for $j=1,2$ we get
\begin{equation}
\label{some3}
\tilde u=\frac{\tilde u_1-\tilde u_2}{\sqrt{\beta^2-4\alpha}} \quad\text{with }
\lim_{R\rightarrow \infty}
\frac{1}{R}\int_{B_R} \Big|\nabla \tilde u_j (x) - i \sqrt{a_j} \tilde{u_j}(x) \frac{x}{|x|} \Big|^2 \,dx
=0 \quad (a_1>a_2>0).
\end{equation}
\end{document}
\begin{document}
\title{Schmidt's theorem, Hausdorff measures and Slicing}
\author{Victor Beresnevich\footnote{Research supported by EPSRC Grant R90727/01}
\\ {\small\sc York} \and Sanju Velani\footnote{Royal Society University Research
Fellow} \\ {\small\sc York}}
\maketitle
\date{}
\centerline{{\it For Bridget Bennett on her fortieth birthday }}
\begin{abstract} A Hausdorff measure version of
W.M. Schmidt's inhomogeneous, linear forms theorem in metric
number theory is established. The key ingredient is a `slicing'
technique motivated by a standard result in geometric measure
theory.
In short, `slicing' together with the Mass Transference Principle
\cite{mtp} allows us to transfer Lebesgue measure theoretic
statements for $\limsup$ sets associated with linear forms to
Hausdorff measure theoretic statements. This extends the approach
developed in \cite{mtp} for simultaneous approximation.
Furthermore, we establish a new Mass Transference Principle
which incorporates both forms of approximation. As an application
we obtain a complete metric theory for a `fully' non-linear
Diophantine problem within the linear forms setup.
\end{abstract}
\noindent{\small 2000 {\it Mathematics Subject Classification}\/:
Primary 11J83, 28A78; Secondary 11J13, 11K60}
\noindent{\small{\it Keywords and phrases}\/: Inhomogeneous
Diophantine approximation, linear forms, Hausdorff measure and
dimension. }
\section{Introduction}
Fix a vector $\vv b=(b_1,\ldots,b_m)\in{\Bbb R}^m$ and a non-negative,
real valued function $\Psi:{\Bbb Z}^n\to{\Bbb R}^+ := \{ x \geq 0 : x \in {\Bbb R}
\} $ such that
$$\Psi(\vv a)\to0 \hspace{9mm} {\rm as \ } \hspace{9mm} |\vv a
|:=\max(|a_1|,\ldots,|a_n|) \, \to \, \infty \ . $$ Let
$W_{n,m}^{\vv b}(\Psi)$ be the set of $X=(\vv x_1,\ldots,\vv
x_m)\in {\Bbb I}^{n\times m}:=[0,1)^{{n\times m}}$ where $\vv x_j \in {\Bbb R}^n $ for $1 \leqslantq j
\leqslantq m $, such that the system of inequalities
\begin{equation}\lambdabel{1}
\|\vv a\cdot\vv x_j-b_j\| \ < \ \Psi(\vv a) \hspace{15mm} (1
\leqslantq j \leqslantq m )
\end{equation}
is satisfied for infinitely many $\vv a\in{\Bbb Z}^n $. Here and
throughout $\vv x\cdot\vv y=x_1y_1+\dots+x_ny_n$ is the standard
inner product of two vectors $\vv x$,$\vv y \in {\Bbb R}^n$ and $\|x\|$
is the distance from $x \in {\Bbb R}$ to the nearest integer. The
following is a geometric interpretation of the set $W_{n,m}^{\vv
b}(\Psi)$ and brings to the forefront its $\limsup$ nature. For
vectors $\vv a\in{\Bbb Z}^n $, $\vv b\in{\Bbb R}^m$ and $\vv p \in {\Bbb Z}^m$,
consider the $(n-1)m$--dimensional plane $ R_{{\vv a},{\vv
p}}^{\vv b}$ given by
\begin{equation}
\lambdabel{resset}
R_{{\vv a},{\vv p}}^{\vv b} := \{ X \in {\Bbb R}^{n\times m}: \
\vv a\cdot\vv x_j-b_j = p_j \ \ (1 \leqslantq j \leqslantq m ) \, \} \ \ .
\end{equation}
Thus, $R_{{\vv a},{\vv p}}^{\vv b}$ is the product of the
$n$--dimensional hyperplanes given by $$ R_{{\vv a},{p_j}}^{b_j}
:= \{ \vv x \in {\Bbb R}^n: \ \vv a\cdot\vv x - b_j = p_j \} \qquad 1
\leq j \leq m \ . $$ For $\delta \geq 0 $, let $\Delta( R_{{\vv
a},{\vv p}}^{\vv b}, \delta ) $ denote the $\delta$--neighborhood
of $R_{{\vv a},{\vv p}}^{\vv b}$; i.e.
the product of the
$\delta$-neighborhoods (with respect to the Euclidean norm) of the
hyperplanes $ R_{{\vv a},{p_j}}^{b_j} $. Note that
when $n=1$, the $\delta$--neighborhood $\Delta( R_{{\vv a},{\vv
p}}^{\vv b}, \delta ) $ is simply a ball of radius $\delta$ in the
supremum norm.
It is easily verified that
$$ X \in W_{n,m}^{\vv b}(\Psi) \hspace{10mm} {\rm if \ and \ only \ if } \hspace{10mm}
X \in \Delta\Big( R_{{\vv a},{\vv p}}^{\vv b},
\textstyle{\frac{\Psi(\vv a)}{\sqrt{{\vv a}\cdot{\vv a}}} } \Big)
\cap {\Bbb I}^{n\times m}
$$ for infinitely many vectors $\vv a\in{\Bbb Z}^n$ and $ \vv p \in
{\Bbb Z}^m$.
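For completeness, this equivalence is nothing but the distance formula for affine hyperplanes: for fixed $j$ and $p_j\in{\Bbb Z}$, the Euclidean distance from $\vv x_j$ to $R_{{\vv a},{p_j}}^{b_j}$ equals $|\vv a\cdot\vv x_j-b_j-p_j|/\sqrt{{\vv a}\cdot{\vv a}}$, and $\|\vv a\cdot\vv x_j-b_j\|<\Psi(\vv a)$ holds if and only if $|\vv a\cdot\vv x_j-b_j-p_j|<\Psi(\vv a)$ for some $p_j\in{\Bbb Z}$.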
\subsection{The Lebesgue measure theory: Schmidt's Theorem}
The following key result provides a beautiful and simple
criterion for the `size' of the set $W_{n,m}^{\vv b}(\Psi)$
expressed in terms of ${n\times m}$--dimensional Lebesgue measure $|\ \
|_{{n\times m}}$. The theorem is due to W.M. Schmidt and shows that
$|W_{n,m}^{\vv b}(\Psi)|_{{n\times m}}$ satisfies an elegant `zero--one'
law.
\begin{schmidt}
Let $\Psi$ be as above and $n+m>2$. Then
$$
|W_{n,m}^{\vv b}(\Psi)|_{{n\times m}}=\left\{
\begin{array}{rl}
0 & {\rm if} \;\;\;
\displaystyle \sum_{\vv a\in{\Bbb Z}^n}\Psi(\vv a)^m<\infty\,\\[4ex]
1 & {\rm if} \;\;\; \displaystyle \sum_{\vv a\in{\Bbb Z}^n}\Psi(\vv
a)^m=\infty\,
\end{array}
\right.. $$
\end{schmidt}
In fact, Schmidt considers a more general setup for which he obtains
a quantitative result. The case that $m+n=2$, corresponding to
$n=m=1$, is naturally excluded since the statement is known to be
false -- the Duffin--Schaeffer conjecture \cite{mtp,Spr79} provides
the appropriate `expected' statement.
In order to appreciate the true significance of Schmidt's
Theorem it is well worth mentioning a few special cases which in
their own right represent landmarks within the classical theory
of metric Diophantine approximation.
\begin{description}
\item[~\hspace*{3ex} Khintchine's Theorem \cite{Kh} \ : ] $n=1$ and ${\vv b}= 0 $ with $\Psi$
monotonic. ~\vspace*{-1ex}
~\hspace*{32ex} (simultaneous, homogeneous approximation)
\item[~\hspace*{3ex} Groshev's Theorem \cite{groshev} \ \hspace*{3ex}: ] $n > 1$ and ${\vv b}= 0
$ with $\Psi$ monotonic\footnote{When $n>1$, the function $\Psi$
is in general multi-variable. To say that it is monotonic simply
means that $\Psi(\vv a):= \psi(|\vv a|)$ for some monotonic
$\psi$. }.~\vspace*{-1ex}
~\hspace*{32ex} (linear forms, homogeneous approximation)
\item[~\hspace*{3ex} Gallagher's Theorem \cite{gal} \ \hspace*{1.2ex}: ] $n=1$ and $m \geq
2$. ~\vspace*{-1ex}
~\hspace*{32ex} (simultaneous, inhomogeneous approximation)
\end{description}
\noindent Note that under the condition that $\Psi$ is monotonic,
the results of Khintchine and Groshev already give the homogeneous
version (${\vv b}= 0 $) of Schmidt's Theorem without the condition
that $m+n > 2 $ -- Khintchine's Theorem covers the case $n=m=1$.
Generalizing the Khintchine--Groshev statement to the inhomogeneous
case and entirely removing the monotonicity condition when $m +n
> 2 $ is by no means a trivial feat. As with Schmidt, Gallagher considers
a more general setup for which he obtains a quantitative result.
\subsection{The general metric theory}
Let $f$ be a dimension function and let ${\cal H}^f$ denote the Hausdorff
$f$--measure -- see \S\rhoef{HM}. In short, our aim is to provide a
complete metric theory for the set $W_{n,m}^{\vv b}(\Psi)$. The
following result achieves this goal in that it provides a simple
criterion for the `size' of the set $W_{n,m}^{\vv b}(\Psi)$ expressed
in terms of the general measure ${\cal H}^f$.
\begin{theorem}\lambdabel{t1}
Let $\Psi$ be as above and $n+m>2$. Let $f$ be a dimension
function such that $r^{-nm}f(r)$ is monotonic. Furthermore, assume
that $g : r \to r^{-(n-1)m} f(r)$ is a dimension function. Then
$$
{\cal H}^f(W_{n,m}^{\vv b}(\Psi))=\left\{
\begin{array}{cl}
0& {\rm if} \qquad\displaystyle \sum_{\vv
a\in{\Bbb Z}^n\smallsetminus\{\vv 0\}} \
g\!\left(\dfrac{\Psi(\vv a)}{|\vv a|}\right) \times \ |\vv a|^m \ < \ \infty\, \\[4ex]
{\cal H}^f({\Bbb I}^{n\times m}) & {\rm if} \qquad\displaystyle\sum_{\vv
a\in{\Bbb Z}^n\smallsetminus\{\vv 0\}} \ g\!\left(\dfrac{\Psi(\vv
a)}{|\vv a|}\right) \times \ |\vv a|^m \ = \ \infty\,
\end{array}
\right..
$$
\end{theorem}
Notice that in the case ${\cal H}^f$ is ${n\times m}$--dimensional Lebesgue
measure $|\ \ |_{{n\times m}}$, the theorem reduces to Schmidt's Theorem.
As with the Lebesgue theory, the convergence part of the above
theorem is relatively straightforward if not trivial -- see
\S\rhoef{t1conv}. The main substance is the divergent part. For
this, our particular strategy is straightforward enough. We
establish the following:
\begin{theorem}\lambdabel{t2}
$
~ \ \ {\it Schmidt's \ Theorem \ (divergent \ part) } \ \ \
\Longrightarrow \ \ \ {\it Theorem \ \rhoef{t1} \ (divergent \
part) } \ . $
\end{theorem}
In \cite{mtp}, this strategy has recently been successfully
implemented to establish the simultaneous version of Theorem
\rhoef{t2}; namely
\begin{bv}\lambdabel{bv}
$ {\it Gallagher's \ Theorem \ (divergent \ part) } \
\Longrightarrow \ {\it Theorem \ \rhoef{t1} \ (divergent \
part) }
\\ ~\hspace*{66ex} with \ n=1 \ and \ m \gammaeq 2 \ .
$
\end{bv}
Recall, that Schmidt's Theorem reduces to Gallagher's Theorem in
the case of simultaneous approximation ($n=1$). To be absolutely
precise, in \cite[\S6.2]{mtp} we only consider the homogeneous
case
of Theorem BV. However, given the method of proof adopted in
\cite{mtp} no extra obstacles appear in establishing the
inhomogeneous version above. Indeed, the proof is essentially a
simple application of the Mass Transference Principle (see
\S\rhoef{secmtp}) and for this it is irrelevant whether we start
with a homogeneous or inhomogeneous divergent statement of
Gallagher's Theorem. The only relevant aspect is that when $n=1$,
the set $W_{n,m}^{\vv b}(\Psi)$ is a limsup set naturally defined
in terms of a sequence of balls -- the Mass Transference
Principle then does the rest! This is no longer the case when
$n>1$ and so Theorem \rhoef{t2} is not simply a consequence of the
approach developed in \cite{mtp}; namely the Mass Transference
Principle.
The key aspect of this paper is the introduction of a `slicing'
technique to the theory of metric Diophantine approximation; in
particular to the linear forms aspect of the theory. The
technique is motivated by a relatively standard result in
geometric measure theory -- see \S\rhoef{secslice}. The upshot is
that `slicing' together with the Mass Transference Principle
yields Theorem \rhoef{t2} -- the `hard' part of Theorem \rhoef{t1}.
{\em Remark. \ } In all previous contributions towards the general
metric theory, such as the pioneering work of Jarn\'{\i}k \cite{Ja}
the function $\Psi$ is assumed to be monotonic. For further details
and references the reader is referred to \cite[Sections 1.1 \&
12.1]{BDV03}.
Before moving on, it is useful to say a little
concerning the condition imposed on $g$ in Theorem \rhoef{t1};
namely that since $g$ is assumed to be a dimension function we
have that $g(r) \to 0$ as $r \to 0 $. For the sake of clarity and
ease of discussion, put $f: r \to r^s$ ($s>0$) in Theorem
~\rhoef{t1}. Then, Theorem~\rhoef{t1} reduces to the following
$s$--dimensional Hausdorff measure
statement which in its own right is of significant importance
since it characterizes the Hausdorff dimension of the set
$W_{n,m}^{\vv b}(\Psi)$ as the exponent of convergence of a
certain `$s$--volume' sum.
\begin{corollary}\lambdabel{cor1}
Let $\Psi$ be as above and $n+m>2$. Let $\delta>0$ and
$s:=(n-1)m+\delta$. Then
$$
{\cal H}^s(W_{n,m}^{\vv b}(\Psi))=\leqslantft\{
\begin{array}{cl}
0& {\rhom if} \qquad\displaystyle \sum_{\vv
a\in{\Bbb Z}^n\smallsetminus\{\vv 0\}} \
\Psi(\vv a)^\delta \ |\vv a|^{m-\delta} \ < \ \infty\, \\[4ex]
{\cal H}^s({\Bbb I}^{n\times m}) & {\rhom if} \qquad\displaystyle\sum_{\vv
a\in{\Bbb Z}^n\smallsetminus\{\vv 0\}}\ \Psi(\vv a)^\delta \ |\vv
a|^{m-\delta} \ = \ \infty\,
\end{array}
\rhoight..
$$
\end{corollary}
\noindent It follows from the definition of Hausdorff dimension
(see \S\rhoef{HM}) that if for some $\delta>0$
the sum in the corollary diverges, then
$$
\dim W_{n,m}^{\vv b}(\Psi) \ = \ \inf \leqslantft\{ s:
\textstyle{\sum_{\vv a\in{\Bbb Z}^n\smallsetminus\{\vv 0\}} } \ \Psi(\vv
a)^{s-(n-1)m} \ |\vv a|^{nm- s} \ < \ \infty \rhoight\} \ .
$$
We suspect that the condition on $s$ imposed in Corollary
\rhoef{cor1}, namely that $s$ is strictly greater than $(n-1)m$,
cannot be relaxed. Briefly, if $\delta = 0$ so that $s=(n-1)m$, the
sum in Corollary~\rhoef{cor1} diverges irrespective of $\Psi$. Now
the `approximating'
hyperplanes
as defined by (\rhoef{resset}) are themselves of dimension $s$ and
indeed of positive ${\cal H}^s$ measure. Thus, it is highly likely
that for rapidly decreasing functions $\Psi$ the ${\cal H}^s$--measure
theoretic structure of $W_{n,m}^{\vv b}(\Psi)$ is purely dependent
on the arithmetic properties of the approximating hyperplanes. In
view of this, for appropriate $\Psi$ and $\vv b\in{\Bbb R}^m$ one might
expect that ${\cal H}^s (W_{n,m}^{\vv b}(\Psi)) $ is finite and
possibly even zero rather than ${\cal H}^s({\Bbb I}^{n\times m})$ which is infinite.
\section{Preliminaries}
\subsection{Hausdorff measures \lambdabel{HM}}
In this section we give a brief account of Hausdorff measures. For
further details see \cite{MAT}. A {\em dimension function} $f \, :
\, {\Bbb R}^+ \to {\Bbb R}^+ $ is a continuous, non-decreasing function such
that $f(r)\to 0$ as $r\to 0 \, $.
The Hausdorff
$f$--measure with respect to the dimension function $f$ will be
denoted throughout by ${\cal H}^{f}$ and is defined as follows.
Suppose $F$ is a subset of ${\Bbb R}^k$. For $\rhoho
> 0$, a countable collection $ \leqslantft\{B_{i} \rhoight\} $ of balls in
${\Bbb R}^k$ with radius $r(B_i) \leqslantq \rhoho $ for each $i$ such that $F
\subset \bigcup_{i} B_{i} $ is called a {\em $ \rhoho $-cover for
$F$}.
For a dimension
function $f$ define $$
{\cal H}^{f}_{\rhoho} (F) \, = \, \inf \ \sum_{i} f(r(B_i)),
$$
where the infimum is taken over all $\rhoho$-covers of $F$. The {\it
Hausdorff $f$--measure} $ {\cal H}^{f} (F)$ of $F$ with respect to
the dimension function $f$ is defined by $$ {\cal H}^{f} (F) :=
\lim_{ \rhoho \rhoightarrow 0} {\cal H}^{f}_{\rhoho} (F) \; = \;
\sup_{\rhoho > 0 } {\cal H}^{f}_{\rhoho} (F) \; . $$
A simple consequence of the definition of $ {\cal H}^f $ is the
following useful fact.
\begin{lemma}
If $ \, f$ and $g$ are two dimension functions such that the ratio
$f(r)/g(r) \to 0 $ as $ r \to 0 $, then ${\cal H}^{f} (F) =0 $
whenever ${\cal H}^{g} (F) < \infty $. \lambdabel{dimfunlemma}
\end{lemma}
In the case that $f(r) = r^s$ ($s > 0$), the measure $ {\cal H}^f $ is
the usual $s$--dimensional Hausdorff measure ${\cal H}^s $ and the
Hausdorff dimension $\dim F$ of a set $F$ is defined by $$ \dim \, F
\, := \, \inf \leqslantft\{ s : \beta_\alphathcal{ H}^{s} (F) =0 \rhoight\} = \sup
\leqslantft\{ s : \beta_\alphathcal{ H}^{s} (F) = \infty \rhoight\} . $$ In
particular when $s$ is an integer, ${\cal H}^s$ is
comparable\footnote{The symbols $\ll$ and $\gammag$ will be used to
indicate an inequality with an unspecified positive constant. If $a
\ll b $ and $a \gammag b $ we write $a \asymp b $, and say that the
quantities $a$ and $b$ are comparable. } to the $s$--dimensional
Lebesgue measure. Actually, ${\cal H}^s$ is a constant multiple of the
$s$--dimensional Lebesgue measure.
\subsection{The Mass Transference Principle \lambdabel{secmtp}}
Given a dimension function $f$ and a ball $B=B(x,r)$ in ${\Bbb R}^m$,
we define another ball
\begin{equation}\lambdabel{e:001}
\textstyle B^f:=B(x,f(r)^{1/m}) \ .
\end{equation}
When $f(x)=x^s$ for some $s>0$ we also adopt the notation $B^s$, {\it i.e.}\/
$ B^s:=B^{(x\beta_\alphapsto x^s)}. $ It is readily verified that
\begin{equation}\lambdabel{e:001a}
B^m=B.
\end{equation}
Given a sequence of balls $B_i$, $i=1,2,3,\ldots$, as usual its
limsup set is
$$
\limsup_{i\to\infty}B_i:=\bigcap_{j=1}^\infty\ \bigcup_{i\gammae j}B_i \
.
$$
By definition, $\limsup_{i\to\infty}B_i$ is precisely the set of
points in ${\Bbb R}^m$ which lie in infinitely many balls $B_i$.
\vspace*{1ex}
The following Mass Transference Principle allows us to transfer
Lebesgue measure theoretic statements for $\limsup$ subsets of
${\Bbb R}^m$ to Hausdorff measure theoretic statements.
\begin{lemma}[Mass Transference Principle]\lambdabel{thm3}
Let $\{B_i\}_{i\in{\Bbb N}}$ be a sequence of balls in ${\Bbb R}^m$ with
$r(B_i)\to 0$ as $i\to\infty$. Let $f$ be a dimension function such
that $r^{-m}f(r)$ is monotonic and let $\Omega$ be a ball in ${\Bbb R}^m
$. Suppose for any ball $B$ in $\Omega$
$$
{\cal H}^m\big(B \, \cap \, \limsup_{i\to\infty}B^f_i{}\,\big) \ = \
{\cal H}^m(B) \ .
$$
Then, for any ball $B$ in $\Omega$
$$
{\cal H}^f\big(B \, \cap \, \limsup_{i\to\infty}B^m_i\,\big)\ = \
{\cal H}^f(B) \ .
$$
\end{lemma}
With $\Omega = {\Bbb R}^m$, the lemma is precisely Theorem~2 in
\cite{mtp}. It is easily seen that this implies the above modified
statement which is better suited for the particular applications we
have in mind.
\subsection{The `slicing' lemma \lambdabel{secslice} }
The `slicing' lemma below is the crucial new ingredient and
together with the `slicing' technique (\S\rhoef{secst}) makes it
possible to reduce Theorem~\rhoef{t2} to an $m$-dimensional problem by
slicing the original set $W_{n,m}^{\vv b}(\Psi)$ into a family of
subsets lying on parallel $m$-dimensional planes. The `slicing'
technique is motivated by the `slicing' lemma.
In the following
$V$ will be a linear subspace of ${\Bbb R}^k$ and $V^\perp$ will denote
the linear subspace of ${\Bbb R}^k$ orthogonal to $V$. Also,
$V+a:=\{v+a:v\in V\}$ for $ a \in V^\perp$.
\begin{lemma} \lambdabel{slicing1}
Let $l,k\in{\Bbb N}$ such that $l \leq k $ and $f$ and $g:r\mapsto
r^{-l}f(r)$ be dimension functions. Let $A\subset{\Bbb R}^k$ be a Borel
set with ${\cal H}^f(A)<\infty$. Then for any $(k-l)$-dimensional
linear subspace $V$ of\/ ${\Bbb R}^k$,
$$
{\cal H}^{g}(A\cap(V+a))<\infty \ \ \ \text{for ${\cal H}^{l}$-almost all
$a\in V^\perp $}.
$$
\end{lemma}
When $f: r \to r^s$, the lemma constitutes the first part of
Theorem~10.10 in \cite{MAT}. The proof given there can be easily
modified to yield the more general statement above. Nevertheless,
given the importance of the lemma and for the sake of
completeness we have included the proof of Lemma \rhoef{slicing1} as
an appendix.
Trivially, Lemma~\rhoef{slicing1} implies the following:
\begin{lemma}[Slicing lemma]\lambdabel{slicing2}
Let $\, l,k\in{\Bbb N}$ such that $l \leq k $ and $f$ and $g:r\mapsto
r^{-l} f(r)$ be dimension functions. Let $A\subset{\Bbb R}^k$ be a Borel
set and $V$ be a $(k-l)$-dimensional linear subspace of ${\Bbb R}^k$.
If for a subset $S$ of $V^\perp$ of positive ${\cal H}^{l}$-measure
$$
{\cal H}^{g}(A\cap(V+b))=\infty\text{ \ \ \ \ for all \ \ }b\in S \,
,
$$
then ${\cal H}^f(A)=\infty$.
\end{lemma}
\subsection{Additional assumption in Schmidt's theorem (divergent part) \lambdabel{add}}
Let $n \geq 2 $ as otherwise the substance of this
section becomes trivial. For each $i\in\{1,\dots,n\}$ define the
subset ${\cal Z}_i$ of ${\Bbb Z}^n\smallsetminus\{\vv0\}$ to consist of vectors
$\vv a\in {\Bbb Z}^n$ such that $|\vv a|=|a_i|$. Assume that
$$
\sum_{\vv a\in{\Bbb Z}^n}\Psi(\vv a)^m=\infty.
$$
Now
$$
\infty=\sum_{\vv a\in{\Bbb Z}^n\smallsetminus\{\vv0\}}\Psi(\vv a)^m\leqslant
\sum_{i=1}^n\sum_{\vv a\in{\cal Z}_i}\Psi(\vv a)^m.
$$
Therefore there is an index $i\in\{1,\dots,n\}$ such that
\begin{equation}\lambdabel{v}
\sum_{\vv a\in{\cal Z}_i}\Psi(\vv a)^m=\infty.
\end{equation}
Define
$$
\Psi_i(\vv a)= \leqslantft\{
\begin{array}{cl}
\Psi(\vv a), & \text{if }\vv a\in{\cal Z}_i,\\[2ex]
0 , & \text{if }\vv a\not\in{\cal Z}_i.
\end{array}\rhoight.
$$
Hence, $|W_{n,m}^{\vv b}(\Psi_i)|_{{n\times m}}=1$ by Schmidt's Theorem.
Trivially, this implies that for almost all $X\in{\Bbb I}^{n\times m}$
\begin{equation}\lambdabel{e:004}
\beta_\alphax_{1\leqslant j\leqslant m}\|\vv a\cdot\vv x_j-b_j\|\ < \ \Psi(\vv a)
\end{equation}
for infinitely many $\vv a\in{\cal Z}_i$. There is no loss of
generality in assuming that (\rhoef{v}) is satisfied with $i=1$ as
otherwise we can apply a permutation of variables (columns in $X$)
under which Schmidt's theorem is clearly invariant. Thus, when
considering the divergent part of Schmidt's theorem we can assume
that
\begin{equation*} \lambdabel{e:A083}
\Psi (\vv a ) \, = \, 0 \ \ \ \ \ \forall \ \ \vv a \in {\Bbb Z}^n
\hspace{5ex} {\rhom with } \hspace{5ex} |\vv a | \neq |a_1| \ .
\end{equation*}
\section{Proof of Theorem~\rhoef{t1}}
\subsection{The case of convergence \lambdabel{t1conv}}
We are given that
$$
\sum_{\vv a\in{\Bbb Z}^n\smallsetminus\{\vv 0\}} \ g\!
\leqslantft(\dfrac{\Psi(\vv a)}{|\vv a|}\rhoight) \times |\vv a|^m \ < \
\infty \ .
$$
The convergent part of Theorem \rhoef{t1} follows on using standard
covering arguments. For each $N \in {\Bbb N} $, it is easily verified that
$$
W_{n,m}^{\vv b}(\Psi) \ \subset \ \bigcup_{\substack{ {\vv a} \in
{\Bbb Z}^n: |{\vv a}| \gammaeq N \\ \Psi(\vv a) > 0 }} \ \ \bigcup_{{\vv p}
\in {\Bbb Z}^m } \ \Deltalta\beta_\alphathcal{ B}ig( R_{{\vv a},{\vv p}}^{\vv b},
\textstyle{\frac{\Psi(\vv a)}{|{\vv a }|} } \beta_\alphathcal{ B}ig) \cap {\Bbb I}^{n\times m} \ . $$
Note that there is no loss of generality in assuming that
$\Psi(\vv a)>0$ in the above union, for otherwise (\rhoef{1}) has no solutions
$X$ and the integer vector $\vv a$ makes no contribution to
$W_{n,m}^{\vv b}(\Psi)$.
Next notice that for any fixed ${\vv a} \in
{\Bbb Z}^n\smallsetminus\{\vv0\}$ with $\Psi(\vv a)
> 0 $ and ${\vv p} \in {\Bbb Z}^m $, it is possible to cover
$$\Delta\Big( R_{{\vv a},{\vv p}}^{\vv b}, \textstyle{\frac{\Psi(\vv a)}{|{\vv a
}|} } \Big) \cap {\Bbb I}^{n\times m} $$ by a collection ${\cal C }_{{\vv a},{\vv
p}}^{\vv b} $ of balls of common radius $ \frac{\Psi(\vv a)}{|{\vv
a }|} $ such that $$ \# \, {\cal C }_{{\vv a},{\vv p}}^{\vv b} \ll
\Big(\textstyle{\frac{|{\vv a }|}{\Psi(\vv a)} }\Big)^{(n-1)m} \ \
.
$$ Also, for a fixed ${\vv a} \in {\Bbb Z}^n\smallsetminus\{\vv0\}$
$$ \# \, \left\{ {\vv p} \in {\Bbb Z}^m : \, \Delta\Big( R_{{\vv a},{\vv p}}^{\vv b},
\textstyle{\frac{\Psi(\vv a)}{|{\vv a }|} } \Big) \cap {\Bbb I}^{n\times m} \ \neq
\emptyset \ \right\} \ \ll \ |{\vv a}|^m \ . $$ Finally, note
that since $\Psi({\vv a}) \to 0 $ as $ |{ \vv a } | \to \infty $,
we have that for all $N$ sufficiently large
$$
\frac{\Psi(\vv a)}{|{\vv a }|} \ \leq \frac{1}{N} \ \ . $$
It now follows from the definition of ${\cal H}^f $ that for $N$
sufficiently large,
\begin{eqnarray*}
{\cal H}^f_{\rho := \frac{1}{N} } \Big(W_{n,m}^{\vv b}(\Psi)\Big)
& \ll & \sum_{\substack{ {\vv a} \in {\Bbb Z}^n: |{\vv a}| \geq N }}
f\!\left(\dfrac{\Psi(\vv a)}{|\vv a|}\right) \times
\left(\dfrac{\Psi(\vv a)}{|\vv a|}\right)^{-(n-1)m} \times \ |\vv
a|^m \\ & & \\ & := & \sum_{\substack{ {\vv a} \in {\Bbb Z}^n: |{\vv
a}| \geq N }} g\!\left(\dfrac{\Psi(\vv a)}{|\vv a|}\right) \times
\ |\vv a|^m \ \ \ \to \ \ 0 \hspace{4mm} {\rm as \ }
\hspace{3mm} N \to \infty \ .
\end{eqnarray*}
Thus, ${\cal H}^f (W_{n,m}^{\vv b}(\Psi)) = 0 $ as required.
$\Box$
\subsection{The case of divergence (Theorem \ref{t2}): the `slicing' technique \label{secst} }
\emph{Throughout we assume that $n\geq 2$.
Theorem BV covers the $n=1$ case.} We are given that
\begin{equation} \label{divv}
\sum_{\vv a\in{\Bbb Z}^n\smallsetminus\{\vv 0\}} \ g\!
\left(\dfrac{\Psi(\vv a)}{|\vv a|}\right) \times |\vv a|^m \ = \
\infty \ .
\end{equation}
We start by considering the case that $r^{-mn}f(r) \to L $ as $r
\to 0 $ and $L$ is finite. If $L=0$, then Lemma \ref{dimfunlemma}
implies that ${\cal H}^f({\Bbb I}^{n\times m})=0$ and since $ W_{n,m}^{\vv b}(\Psi)
\subset {\Bbb I}^{n\times m} $ the result follows. If $L \neq 0 $, then ${\cal H}^f$ is
comparable to ${\cal H}^{mn}$ (in fact, ${\cal H}^f = L \, {\cal H}^{mn}$). In
turn, ${\cal H}^{mn}$ is comparable to ${n\times m}$--dimensional Lebesgue
measure and so the required statement follows on showing that
$|W_{n,m}^{\vv b}(\Psi)|_{{n\times m}} = |{\Bbb I}^{n\times m}|_{{n\times m}} $. Well, this is
a simple consequence of Schmidt's theorem since the sum appearing
in (\ref{divv}) is comparable to $\sum_{\vv a\in{\Bbb Z}^n} \Psi(\vv
a)^m $.
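For completeness, we note the identity behind this comparison: since $g(r)=r^{-(n-1)m}f(r)=r^{m}\times r^{-mn}f(r)$ and $r^{-mn}f(r)\to L\in(0,\infty)$ as $r\to0$, we have $g(r)\asymp r^m$ for all sufficiently small $r$, whence
$$
g\!\left(\dfrac{\Psi(\vv a)}{|\vv a|}\right)\times|\vv a|^m \ \asymp \ \left(\dfrac{\Psi(\vv a)}{|\vv a|}\right)^{m}|\vv a|^m \ = \ \Psi(\vv a)^m
$$
whenever $\Psi(\vv a)/|\vv a|$ is small.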
In view of the above discussion, we can assume without loss of
generality that
\begin{equation} \label{maininf}
r^{-mn}f(r) \ \to \ \infty \hspace{6mm} {\rm as } \hspace{6mm}
r\to0 \ \ . \end{equation}
Indeed, it is this situation that
constitutes the main substance of Theorems \ref{t1} and \ref{t2}.
Trivially, (\ref{maininf}) together with Lemma \ref{dimfunlemma}
implies that
$$ {\cal H}^f({\Bbb I}^{n\times m}) \ = \ \infty \ . $$ Similarly, since
(\ref{maininf}) is equivalent to the statement that $r^{-m}g(r)
\to \infty $ as $ r \to 0$, we have that ${\cal H}^g(B)=\infty$ for
any $m$--dimensional ball $B$.
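Indeed, since $g(r)=r^{-m(n-1)}f(r)$, we have $r^{-m}g(r)=r^{-mn}f(r)$, so the two statements coincide.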
To proceed, we set
$$
\widetilde\Psi(\vv a)^m \ := \ g \! \left(\dfrac{\Psi(\vv
a)}{|\vv a|}\right)\times |\vv a|^m \ .
$$
In view of (\ref{divv}), it follows that
$$
\sum_{\vv a\in{\Bbb Z}^n}\widetilde\Psi(\vv a)^m=\infty \ ,
$$
and Schmidt's Theorem (divergent part) implies that
$$
|W_{n,m}^{\vv b}(\widetilde\Psi)|_{{n\times m}}=1 \ .
$$
The goal is to show that this implies that
$${\cal H}^f(W_{n,m}^{\vv b}(\Psi)) \ = \ \infty \ ; $$
i.e. to establish Theorem \ref{t2} under the condition imposed by
(\ref{maininf}). Recall, that the conclusion of Theorem \ref{t2}
is precisely the divergent part of Theorem \ref{t1}.
In view of the discussion in \S\ref{add}, we can assume without
loss of generality that
\begin{equation} \label{e:083}
\widetilde\Psi (\vv a ) \, = \, 0 \ \ \ \ \ \forall \ \ \vv a \in {\Bbb Z}^n
\hspace{5ex} {\rm with } \hspace{5ex} |\vv a | \neq |a_1| \ .
\end{equation}
Now, let
$$
V=\{(\vv x_1,\dots,\vv x_m) \ :\ x_{j,i}=0 \ \ \forall\
j={1,\dots,m}\; ; i={2,\dots,n},\ \},
$$
where $\vv x_j=(x_{j,1},\dots,x_{j,n})$. Thus, $V$ is an
$m$--dimensional subspace of ${\Bbb R}^{n\times m}$. By Fubini's theorem, there is
a subset $S\subset {\Bbb I}^{m(n-1)}\subset V^\perp$ with $|S|_{m(n-1)}
= 1 $ such that for every $X_0\in S$ the set $W_{n,m}^{\vv
b}(\widetilde\Psi)$ has full $m$-dimensional Lebesgue measure in $
(V+{X_0})\cap{\Bbb I}^{n\times m} $; i.e.
\begin{equation}\label{e:080}
|(V+X_0)\cap W_{n,m}^{\vv b }(\widetilde\Psi) |_m \ = \ 1 \ .
\end{equation}
For every $(\vv p,\vv a)=(p_{1},\dots p_{m}; a_1,\dots,a_n)\in
{\Bbb Z}^{m} \times {\Bbb Z}^{n}$ define the set $\sigma(\vv p,\vv a)$ to
consist of $X\in{\Bbb I}^{n\times m}$ such that
\begin{equation*}
\max_{1\leq j\leq m}|\vv a\cdot\vv x_j-b_j+p_{j}|<\widetilde\Psi(\vv
a) \ .
\end{equation*}
In view of condition (\ref{e:083}) imposed on $\widetilde\Psi$,
the set $\sigma(\vv p,\vv a)$ is empty whenever $|\vv a | \neq
|a_1|$. We therefore assume that $|\vv a | = |a_1|$ throughout
the rest of the proof. Then
\begin{equation}\label{e:006}
\sigma(\vv p,\vv a)\cap (V+{X_0})
\end{equation}
is the product of $m$ intervals of length $2\widetilde\Psi(\vv
a)/|a_1|$. Indeed, for all $j=1,\dots,m$ and $i=2,\dots,n$ we have
that $x_{j,i}$ are fixed and defined by $X_0$ for all points in
this set. That is to say that the only coordinates that may vary
are $x_{j,1}$. Therefore, the set (\ref{e:006}) is defined by the
system
\begin{equation*}
\max_{1\leq j\leq
m}|a_1x_{j,1}+(a_2x_{j,2}+\dots+a_nx_{j,n}+p_{j}-b_j)|<\widetilde\Psi(\vv
a)
\end{equation*}
or equivalently
\begin{equation}\label{e:008}
\max_{1\leq j\leq
m}\left|x_{j,1}-\frac{b_j-(a_2x_{j,2}+\dots+a_nx_{j,n}+p_{j})}{a_1}\right|
<\frac{\widetilde\Psi(\vv a)}{|a_1|} \ .
\end{equation}
The pathological situation of $a_1=0$ is excluded by the
conditions $|\vv a|=|a_1|$ and $\vv a\not=\vv0$. On identifying
${\Bbb I}^{n\times m}$ with the ${n\times m}$-dimensional torus it is easily seen that
every inequality in (\ref{e:008}) defines an interval of length
$2\widetilde\Psi(\vv a)/|a_1|$. Thus (\ref{e:006}) defines a ball
of radius $\widetilde\Psi(\vv a)/|\vv a|$. Such balls form a
sequence $(A_i)_{i\in{\Bbb N}}$. Therefore
$$
\limsup_{i\to\infty}A_i=(V+X_0)\cap W_{n,m}^{\vv b
}(\widetilde\Psi) \ .
$$
Hence, in view of (\ref{e:080}) we have that
$|\limsup_{i\to\infty}A_i|_m=1$. This implies that for any ball
$B\subset (V+X_0)\cap {\Bbb I}^{n\times m}$
$$
{\cal H}^m(\limsup_{i\to\infty}A_i\cap B)={\cal H}^m(B)\,.
$$
\noindent For each ball $A_i$ define the ball $B_i$ with the same
centre and radius $\Psi(\vv a)/|\vv a|$. Then, by definition
$B_i^{g}=A_i$ -- see (\ref{e:001}). It follows that
\begin{equation}
\label{e:081} \limsup_{i\to\infty} B_i \ \subset \ (V+X_0)\cap
W_{n,m}^{\vv b }(\Psi) \ .
\end{equation}
Also, $r^{-m} g(r) = r^{-mn} f(r) $ is monotonic by assumption.
Thus, on applying the Mass Transference Principle with $ \Omega =
(V+X_0)\cap {\Bbb I}^{n\times m}$, we obtain that for any ball $B$ in $\Omega$
$$
{\cal H}^{g}(\limsup_{i\to\infty}B_i^m\cap B)={\cal H}^{g}(B)=\infty \ .
$$
Recall, that ${\cal H}^{g}(B)=\infty$ is a consequence of (\ref{maininf})
and Lemma \ref{dimfunlemma}. Hence, in view of (\ref{e:081}) and
the fact that $B_i^m := B_i$ we have that for every $X_0\in S$
$$
{\cal H}^{g}((V+X_0)\cap W_{n,m}^{\vv b}(\Psi))=\infty \ .
$$
Recall that ${\cal H}^{m(n-1)}(S) > 0$ and so by the Slicing lemma,
$${\cal H}^f(W_{n,m}^{\vv b}(\Psi))=\infty \ . $$
This completes the proof of Theorem \ref{t2} and therefore the
divergent part of Theorem \ref{t1}.
$\Box$
\section{A Mass Transference Principle for linear forms}
The Mass Transference Principle
deals with $\limsup$ sets which are defined as a sequence of
balls. However, we have seen that together with the `slicing'
technique introduced in this paper we are able to deal with
$\limsup$ sets defined as a sequence of neighborhoods of
`approximating' planes -- at least within the context of Schmidt's
Theorem. In short, the aim of this section is to develop a single
framework which enables us to combine the Mass Transference
Principle and `slicing' into a single statement. The main result
(Theorem \ref{t3} below) should be viewed as a generalization to
the linear forms setup of the Mass Transference Principle
developed in \cite{mtp} for simultaneous approximation. As
applications, we deduce Theorem \ref{t2} (which constitutes the
main substance of Theorem \ref{t1}) as a simple corollary and
more strikingly, we obtain a complete metric theory for a `fully'
non-linear Diophantine problem -- see Theorem \ref{t4} of
\S\ref{fnl}.
\subsection{A general framework for approximating by planes \label{gf}}
Throughout $k, m \geq 1 $ and $ l\ge0$ are integers such that
$k=m+l$.
Let ${\cal R}=(R_{\alpha} )_{\alpha \in J}$ be a family of planes in ${\Bbb R}^k$ of
common dimension $l$ indexed by an infinite countable set $J$. For
every $\alpha\in J$ and $\delta\geq 0$ define
$$ \Delta(R_\alpha,\delta) := \{\vv x \in {\Bbb R}^k: \operatorname{dist}(\vv
x,R_\alpha) < \delta\} \ . $$ Thus $\Delta(R_\alpha,\delta)$ is
simply the $\delta$--neighborhood of the $l$--dimensional plane
$R_\alpha$. Note that by definition, $ \Delta(R_\alpha,\delta) =
\emptyset $ if $\delta =0$. Next, let
$$\Upsilon : J \to {\Bbb R}^+ : \alpha\mapsto
\Upsilon(\alpha):=\Upsilon_\alpha$$ be a non-negative, real valued
function on $J$. In order to avoid pathological situations within
our framework, we assume that for every $\epsilon
>0$ the set $\{\alpha\in J:\Upsilon_\alpha>\epsilon \}$ is finite.
This condition implies that $\Upsilon_\alpha \to 0 $ as $\alpha$
runs through $J$. We now consider the following `$\limsup$' set,
$$ \Lambda(\Upsilon)=\{\vv x\in{\Bbb R}^k:\vv
x\in\Delta(R_\alpha,\Upsilon_\alpha)\ \mbox{for\ infinitely\ many\
}\alpha\in J\} \ . $$ Note that in view of the conditions imposed
on $k,l$ and $m$ we have that $l < k$. Thus the dimension of the
`approximating' planes $ R_\alpha$ is strictly less than that of
the ambient space ${\Bbb R}^k$. The situation when $l =k $ is of little
interest and has therefore been naturally omitted.
\subsection{The main result}
\begin{theorem}\label{t3}
Let ${\cal R}$ and $\Upsilon$ as above be given. Let $V$ be a linear
subspace of\/ ${\Bbb R}^k$ such that $\dim V=m=\mathrm{codim}\,{\cal R}$ and
~ \qquad \qquad $(i)$\quad \ $V \ \cap \ R_\alpha \ \neq \
\emptyset $ \quad for all $ \ \alpha\in J \ $,
~ \qquad \qquad $(ii)$\quad $\sup_{\alpha\in J}\operatorname{diam}( \,
V\cap\Delta(R_\alpha,1) \, ) \ < \ \infty \ $ .
\noindent Let $f$ and $g : r \to g(r):= r^{-l} \, f(r)$ be
dimension functions such that $r^{-k}f(r)$ is monotonic and let
$\Omega $ be a ball in ${\Bbb R}^k$. Suppose for any ball $B$ in
$\Omega$ $$ {\cal H}^k \big( \, B \cap \Lambda \big(g(\Upsilon)^{\frac1m}
\big) \, \big) \, = \, {\cal H}^k(B) \ .
$$ Then
$$ {\cal H}^f \big( \, B \cap \Lambda(\Upsilon) \, \big) \, = \, {\cal H}^f(B) \ . $$
\end{theorem}
{\it Remark} : Conditions (i) and (ii) are not particularly
restrictive. When $l=0$, so that ${\cal R}$ is a collection of points
in ${\Bbb R}^k $, conditions (i) and (ii) are trivially satisfied and
Theorem \ref{t3} simply reduces to the Mass Transference Principle
of \S\ref{secmtp}. When $l \geq 1 $, so that ${\cal R}$ is a collection
of $l$--dimensional planes in ${\Bbb R}^k $, condition (i) excludes
planes $R_{\alpha}$ parallel to $V$ and condition (ii) simply means that
the angle at which $R_\alpha$ `hits' $V$ is bounded away from zero
by a fixed constant independent of $\alpha \in J$. This in turn
implies that each plane in ${\cal R}$ intersects $V$ at exactly one
point.
\subsection{Theorem \ref{t3} $\Longrightarrow $ Theorem \ref{t2} }
With reference to our general framework, let $k = m
\times n$. Hence, $l= m(n-1)$. Furthermore, let $ J := \{ (\vv a,
\vv p, \vv b) \in {\Bbb Z}^n\setminus\{\vv 0\} \times {\Bbb Z}^m \times
\{\vv b \} : |\vv a | = |a_1| \} $ where $\vv b $ is a fixed
vector in ${\Bbb R}^m$, $\alpha := (\vv a, \vv p, \vv b) \in J $,
$R_\alpha := R_{{\vv a},{\vv p}}^{\vv b}$ where the latter is
given by (\ref{resset}) and $ \Upsilon_\alpha :=
\textstyle{\frac{\Psi(\vv a)}{\sqrt{{\vv a}.{\vv a}}} } $. Then,
$$ W_{n,m}^{\vv b}(\Psi) \ \supset \
\widetilde{W}_{n,m}^{\vv b}(\Psi) \ := \ \Lambda(\Upsilon) \, \cap \, {\Bbb I}^{n\times m} \ \
. $$ In view of \S\ref{add}, it suffices to establish Theorem
\ref{t2} for the set $\widetilde{W}_{n,m}^{\vv b}(\Psi)$. As in
\S\ref{secst}, let
$$
V : =\{(\vv x_1,\dots,\vv x_m) \ :\ x_{j,i}=0 \ \ \forall\
j={1,\dots,m}\; ; i={2,\dots,n},\ \} \ ,
$$
where $\vv x_j=(x_{j,1},\dots,x_{j,n})$. Thus, $V$ is an
$m$--dimensional subspace of ${\Bbb R}^{n\times m}$ and it is easily verified that
conditions (i) and (ii) of Theorem \ref{t3} are satisfied. Theorem
\ref{t2} now follows on applying Theorem \ref{t3} with $\Omega =
{\Bbb I}^{n\times m} $. Note that we can actually deduce the following stronger
`local' statement. For any ball $B$ in $ {\Bbb I}^{n\times m} $,
$$
{\cal H}^f(B \cap W_{n,m}^{\vv b}(\Psi)) \, = \, {\cal H}^f(B) \hspace{6ex}
{\rm if} \hspace{3ex} \qquad\displaystyle\sum_{\vv
a\in{\Bbb Z}^n\smallsetminus\{\vv 0\}} \ g\!\left(\dfrac{\Psi(\vv a)}{|\vv
a|}\right) \times \ |\vv a|^m \ = \ \infty \ \ .
$$
\subsection{Preliminaries}
Before embarking on the proof of Theorem \ref{t3}, we derive some
crucial facts from conditions (i) and (ii) imposed in the
statement of the theorem. We also state a `shrinking' lemma which
will be required in the proof of Theorem \ref{t3}.
\subsubsection{Crucial consequences of conditions (i) and (ii)}
Let $l \geq 1 $ as otherwise the substance of this section is
irrelevant. Thus, ${\cal R}$ is a family of `genuine' planes and not
points. In view of the remark immediately after the statement of
Theorem \ref{t3}, we have that for every $\alpha\in J$ there is
a unique point $c_\alpha$ given by $ V\cap R_\alpha$. Clearly,
the ball $B(c_\alpha,r)$ in ${\Bbb R}^k$ is contained in the
$r$--neighborhood of $R_\alpha$; i.e. $B(c_\alpha,r) \subset
\Delta(R_\alpha,r)$. Hence,
$$B(c_\alpha,r) \, \cap \, V \ \subset \ \Delta(R_\alpha,r) \, \cap \, V \ . $$
It follows that the diameter of $\Delta(R_\alpha,r)\cap V$ is at
least $2r$ -- the diameter of the ball $B(c_\alpha,r)\cap V$.
On the other hand, condition (ii) implies that the diameter of
$\Delta(R_\alpha,r)\cap V$ is bounded above by a constant $C>0$ times
$r$ (uniformly in $\alpha$). Indeed, with $C$ equal to the
supremum in the left hand side of condition (ii) we have that
$$
\Delta(R_\alpha,1) \, \cap \, V \ \subset \ B(c_\alpha,C) \, \cap
\, V \ .
$$
Since $R_\alpha$ and $V$ are planes, the set $\Delta(R_\alpha,r)\cap
V$ is simply the set $\Delta(R_\alpha,1)\cap V$ scaled by the factor
$r$ -- shrunk or expanded depending on whether $r$ is less than or
greater than one. Similarly, $B(c_\alpha,Cr)\cap V$ is
$B(c_\alpha,C)\cap V$ scaled by the factor $r$. The upshot of
this, is that
\begin{equation}\label{vb1}
B(c_\alpha,r)\cap V\ \subset\ \Delta(R_\alpha,r)\cap V\ \subset\
B(c_\alpha,Cr)\cap V\qquad \text{for any } \ r>0\ .
\end{equation}
Finally, we observe that since $R_\alpha$ and $V$ are planes, the
inclusions given by (\ref{vb1}) remain valid if $V$ is replaced
by any of its translates. Formally, for any $r>0$ and any $\vv
x_0\in{\Bbb R}^k$
\begin{equation}\label{vb2}
B(c_{\alpha,\vv x_0},r)\cap (V+\vv x_0)\ \subset\
\Delta(R_\alpha,r)\cap (V+\vv x_0)\ \subset\ B(c_{\alpha,\vv
x_0},Cr)\cap (V+\vv x_0)\ ,
\end{equation}
where $c_{\alpha,\vv x_0}$ is the unique point given by
$R_\alpha\cap (V+\vv x_0)$.
\subsubsection{The shrinking lemma}
Given a ball $B$ and a positive constant $\delta<1$, let $\delta
B$ denote the ball $B$ shrunk by the factor $\delta$. The
following result formally states that the measure of $\limsup$
sets arising from a sequence of balls in ${\Bbb R}^k$ is not affected by
shrinking the balls by a constant factor.
\begin{lemma} \label{shrlem}
Let $B_i$ be a sequence of balls in ${\Bbb R}^k$ such that
$\limsup_{i\to\infty} B_i$ has full measure in an open subset $U$
of ${\Bbb R}^k$. Let $\delta<1$ be a positive constant. Then, the set
$\limsup_{i\to\infty} \delta B_i$ has full measure in $U$.
\end{lemma}
The lemma is a simple consequence of Lemma~6 in \cite{mtp}.
\subsection{Proof of Theorem~\ref{t3}}
\emph{Without loss of generality, we assume that $l \geq 1 $. The
case when $l=0$ corresponds to the Mass Transference Principle of
\S\ref{secmtp}. }
The proof of Theorem \ref{t3}
follows the same basic strategy as the proof of Theorem \ref{t2}
(see \S\ref{secst}); i.e. that of combining the Mass Transference
Principle (Lemma \ref{thm3}) and the Slicing lemma (Lemma
\ref{slicing2}) in an appropriate manner. In view of this we
shall give a sketch proof and leave the details to the reader.
As in the proof of Theorem \ref{t2}, we can assume without loss of
generality that
\begin{equation} \label{maininf1}
r^{-k}f(r) \ \to \ \infty \hspace{6mm} {\rm as } \hspace{6mm}
r\to0 \ \ . \end{equation} Indeed, it is this situation that
constitutes the main substance of Theorem \ref{t3}. Recall, that
(\ref{maininf1}) together with Lemma \ref{dimfunlemma} implies
that ${\cal H}^f(B)=\infty$ for any $k$--dimensional ball $B$ and that
${\cal H}^g(B)=\infty$ for any $m$--dimensional ball $B$. For the sake
of clarity we introduce the following notation. Let $V$ be as in
the statement of the theorem. For a subset $A$ of ${\Bbb R}^k$ and $\vv
x_0$ in $ V^\perp$ let
$$
A'_{\vv x_0} \; := \; A \, \cap \, (V+\vv x_0) \ \ . $$ By
definition,
$$
\Lambda'_{\vv x_0}(\Upsilon)=\{\vv x\in{\Bbb R}^k:\vv x\in\Delta'_{\vv x_0}
(R_\alpha,\Upsilon_\alpha)\ \mbox{for\ infinitely\ many\
}\alpha\in J\} \ .
$$
Fix a ball $D$ in $\Omega$. The aim is to show that
$$ {\cal H}^f(D \cap \Lambda(\Upsilon)) \, = \, \infty \ . $$
We are given that \begin{equation} \label{sv1}
{\cal H}^k\big(D \cap
\Lambda\big(g(\Upsilon)^{\frac1m}\big)\big) \, = \, {\cal H}^k(D) \ .
\end{equation} Now let $ D^* := \{ \vv x_0 \in V^\perp : D'_{\vv x_0} \neq \emptyset
\} $. Then, (\ref{sv1}) together with Fubini's theorem implies
the existence of a set $S\subset D ^* \subset V^\perp$ with
$|S|_{l} = |D^*|_{l} $ such that for every $ {\vv x_0 } \in S$
\begin{equation}\label{vb8}
{\cal H}^m\big(D'_{\vv x_0 } \; \cap \; \Lambda'_{\vv x_0
}\big(g(\Upsilon)^{\frac1m}\big) \big) \ = \ {\cal H}^m(D'_{\vv x_0 })
\ .
\end{equation}
In view of (\ref{vb2}), we have that
\begin{equation}\label{vb4}
\limsup_{\alpha \in J} \ B'_{\vv x_0 } \big(c_{\alpha}^*,
g(\Upsilon_\alpha)^{\frac1m}\big) \ \subset \ \Lambda'_{\vv x_0
}\big(g(\Upsilon)^{\frac1m}\big) \ \subset \ \limsup_{\alpha \in J}
\ B'_{\vv x_0 }\big(c_{\alpha}^*,Cg(\Upsilon_\alpha)^{\frac1m}\big)
\ \ .
\end{equation}
For each $\alpha \in J$, the ball $ B'_{\vv x_0 } (c_{\alpha}^*,r)
$ is by definition a subset of $V +{\vv x_0 }$ with centre $
c_{\alpha}^* := R_\alpha\cap (V+\vv x_0)$. It follows via
(\ref{vb8}) and (\ref{vb4}), that
\begin{equation}\label{vb7}
{\cal H}^m\big(\, D'_{\vv x_0 } \cap \; \textstyle{\limsup_{\alpha \in
J}} \ B'_{\vv x_0
}\big(c_{\alpha}^*,Cg(\Upsilon_\alpha)^{\frac1m}\big) \ \big) \ = \
{\cal H}^m(D'_{\vv x_0 }) \ .
\end{equation}
As a consequence of the shrinking lemma (Lemma \ref{shrlem}), we
can put $C=1$ in (\ref{vb7}); i.e.
\begin{equation}\label{vb6}
{\cal H}^m\big( \, D'_{\vv x_0 } \cap \; \textstyle{\limsup_{\alpha \in
J}} \ B'_{\vv x_0 }\big(c_{\alpha}^*,
g(\Upsilon_\alpha)^{\frac1m}\big) \ \big) \ = \ {\cal H}^m(D'_{\vv x_0 })
\ .
\end{equation}
Now for any ball $B$ in $D'_{\vv x_0 }$, (\ref{vb6}) implies that
$$
{\cal H}^m\big( \, B \cap \; \textstyle{\limsup_{\alpha \in J}} \
B'_{\vv x_0 }\big(c_{\alpha}^*, g(\Upsilon_\alpha)^{\frac1m}\big) \
\big) \ = \ {\cal H}^m(B ) \ .
$$
On applying the Mass Transference Principle with $\Omega =
D'_{\vv x_0 }$, we obtain that
\begin{equation}\label{vb5}
{\cal H}^g\big( \, D'_{\vv x_0 } \, \cap \, \textstyle{\limsup_{\alpha
\in J}} \ B'_{\vv x_0 }(c_{\alpha}^*, \Upsilon_\alpha) \ \big) \; =
\; {\cal H}^g (D'_{\vv x_0 }) \; = \; \infty \ .
\end{equation}
In view of (\ref{vb2}), we have that
\begin{equation*}\label{vb3}
\limsup_{\alpha \in J} \; B'_{\vv x_0 }(c_{\alpha}^*,
\Upsilon_\alpha) \ \subset \ \Lambda'_{\vv x_0 }(\Upsilon) \ \subset \
\limsup_{\alpha \in J} \; B'_{\vv x_0 }(c_{\alpha}^*,
C\Upsilon_\alpha) \ .
\end{equation*}
This together with (\ref{vb5}), implies that for every ${\vv x_0 }
\in S $
$$
{\cal H}^g\big( \, D'_{\vv x_0 } \, \cap \, \Lambda'_{\vv x_0 }(\Upsilon)
\ \big) \; = \; \infty \ .
$$
On applying the Slicing lemma, we obtain that $ {\cal H}^f(D \cap
\Lambda(\Upsilon)) = \infty $ as desired.
$\Box$
\subsection{`Fully' non-linear Diophantine problems \label{fnl}}
Schmidt's theorem underpins the metric theory of non-linear
Diophantine approximation -- the integer points $\vv a$
associated with the definition of $W_{n,m}^{\vv b}(\Psi)$ can be
restricted to lie in a subset ${\cal A}$ of ${\Bbb Z}^n$ which is completely
free of any linear structure. Indeed, one simply sets $\Psi$ to
be zero for points $\vv a$ outside of ${\cal A}$ so that the points
$\vv a$ that make any contribution to $W_{n,m}^{\vv b}(\Psi)$ lie
only in ${\cal A}$. However, Schmidt's theorem is not non-linear in the
full sense, since the integer variable $\vv p$, implicit in the
symbol $\|\cdot\|$ is a linear term. Theorem~\ref{t1} is
therefore of the same nature; i.e. it provides a complete metric
theory of non-linear Diophantine approximation but fails to be
fully non-linear. Sprind\v{z}uk, in his 1979 monograph
\cite{Spr79} writes: `As of now, no metric theory of (fully)
non-linear Diophantine approximation has been constructed. The
working out of such a theory is a very topical problem.' Since
then, substantial progress has been made within the one
dimensional setting -- the numerator and denominator of the
rational approximates $a/p$ are restricted to sets of number
theoretic interest such as primes (see, for example \cite[Chapter
6]{har} for the Lebesgue measure theory and \cite[\S12.5]{BDV03}
for the complete metric theory). There has also been some progress
within the simultaneous setting \cite{jones}. To our knowledge,
there has been no progress whatsoever within the linear forms
setting. We now demonstrate the power of Theorem~\ref{t3} -- it
naturally allows us to consider fully non-linear problems; in
particular within the linear forms setting.
A natural source of fully non-linear Diophantine problems is the
theory of partial differential equations (PDE's).
The following is a concrete example of a fully non-linear problem
arising in such a manner -- it is related to the solubility of the
two-dimensional inhomogeneous wave equation (see \cite{BDKL} for
details). Given a vector $\vv a=(a_1,a_2)\in{\Bbb Z}^2$, let $\vv
a^2:=(a_1^2,a_2^2)$. Let $\psi:{\Bbb R}^{+}\to{\Bbb R}^{+}$ be a non-negative,
real valued function and consider the set
$$
S_2(\psi):=\{\vv x\in{\Bbb I}^2:|\,\vv a^2\cdot\vv x-p^2|<\psi(|\vv a|)
\text{ for infinitely many }(\vv a,p)\in{\Bbb Z}^2\times{\Bbb Z}\ \}\ .
$$
Naturally, the problem is to determine a complete metric theory for
$S_2(\psi)$. Clearly, this is a fully non-linear problem since the
coefficients of the `approximating planes' are restricted to
perfect squares.
In \cite{BDKL}, the following criterion for the `size' of the set
$S_2(\psi)$, expressed in terms of $2$--dimensional Lebesgue
measure $|\ \ |_{2}$, is established.
\noindent{\bf Theorem BDKL } {\it Let $\psi:{\Bbb R}^{+}\to{\Bbb R}^{+}$ be a
monotonic function such that $\lim_{h\to\infty}\psi(h)=0$. Then $$
|S_2(\psi)|_2 =\left\{
\begin{array}{rl}
0& {\rm if} \qquad
\sum_{h=1}^\infty \ \psi(h) \, < \, \infty\,\\[2ex]
1& {\rm if} \qquad
\sum_{h=1}^\infty \ \psi(h) \, = \, \infty\,
\end{array}
\right.. $$ }
In view of our general framework and Theorem~\ref{t3}, we are
able to give a complete measure theoretic description of the set
$S_2(\psi)$.
\begin{theorem} \label{t4} Let $\psi:{\Bbb R}^{+}\to{\Bbb R}^{+}$ be a monotonic function such that
$\lim_{h\to\infty}\psi(h)=0$. Let $f$ be a dimension function such
that $r^{-2}f(r)$ is monotonic. Furthermore, assume that $g : r
\to r^{-1} f(r)$ is a dimension function. Then
$$
{\cal H}^f(S_2(\psi))=\left\{
\begin{array}{cl}
0& {\rm if} \qquad \displaystyle \sum_{h=1}^\infty \
g\!\left(\dfrac{\psi(h)}{h^2}\right) \times \ h^2 \ < \ \infty\, \\[4ex]
{\cal H}^f({\Bbb I}^2) & {\rm if} \qquad \displaystyle \sum_{h=1}^\infty \
g\!\left(\dfrac{\psi(h)}{h^2}\right) \times \ h^2 \ = \ \infty\,
\end{array}
\right..
$$
\end{theorem}
With $f:r \to r^s \; (s > 0) $, the theorem reduces to the
following $s$--dimensional Hausdorff measure statement. Naturally,
it coincides with Theorem BDKL when $s=2$.
\begin{corollary} \label{c4} Let $\psi:{\Bbb R}^{+}\to{\Bbb R}^{+}$ be a monotonic function such that
$\lim_{h\to\infty}\psi(h)=0$. For $ 1 < s \leq 2$, we have that
$$ {\cal H}^s(S_2(\psi))=\left\{
\begin{array}{cl}
0& {\rm if} \qquad
\sum_{h=1}^\infty \ \psi(h)^{s-1}h^{4-2s} \, < \, \infty\,\\[2ex]
{\cal H}^s({\Bbb I}^2) & {\rm if} \qquad
\sum_{h=1}^\infty \ \psi(h)^{s-1}h^{4-2s} \, = \, \infty\,
\end{array}
\right..
$$
\end{corollary}
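To see how the corollary follows from Theorem \ref{t4}, take $f:r \to r^s$, so that $g:r\to r^{s-1}$; then
$$
g\!\left(\dfrac{\psi(h)}{h^2}\right) \times \ h^2 \ = \ \left(\dfrac{\psi(h)}{h^2}\right)^{s-1} h^2 \ = \ \psi(h)^{s-1}\,h^{4-2s} \ ,
$$
and for $s=2$ the right hand side is simply $\psi(h)$, in line with Theorem BDKL.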
Consider the case $\psi : r \to r^{-\tau} $ ($\tau > 0$) and
write $ S_2(\tau)$ for $S_2(\psi) $. For $\tau > 1$, the above
corollary not only implies that
$$
\dim S_2(\tau) \ = \ \textstyle{ \frac{5+ \tau}{2 + \tau } } \ ,
$$
but that ${\cal H}^s(S_2(\tau))$ is infinite at the critical exponent
$s= \dim S_2(\tau)$.
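Both claims follow from a direct calculation with Corollary \ref{c4}: for $\psi : h \to h^{-\tau}$ we have
$$
\sum_{h=1}^\infty \ \psi(h)^{s-1}h^{4-2s} \ = \ \sum_{h=1}^\infty h^{\,4-2s-\tau(s-1)} \ ,
$$
which converges if and only if $4-2s-\tau(s-1)<-1$; i.e. if and only if $s>\frac{5+\tau}{2+\tau}$. At the critical exponent $s=\frac{5+\tau}{2+\tau}$ the exponent of $h$ is exactly $-1$, the sum diverges and the corollary gives ${\cal H}^s(S_2(\tau))={\cal H}^s({\Bbb I}^2)=\infty$, since $s<2$ when $\tau>1$.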
\subsubsection{Proof of Theorem \ref{t4} } We start by rewriting the set $S_2(\psi)$ in terms of
`approximating' planes. For $\vv a \in {\Bbb Z}^2$ and $ p \in {\Bbb Z}$, let
\begin{equation}
\label{resset4}
R_{{\vv a},{p}} := \{ {\vv x} \in {\Bbb R}^2: \
{\vv a}^2\cdot\vv x= p^2 \, \} \ \ .
\end{equation}
It is easily verified, that
$$ \vv x \in S_2(\psi) \hspace{10mm} {\rm if \ and \ only \ if } \hspace{10mm}
\vv x \in \Delta\Big( R_{{\vv a},{ p}} \; ,
\textstyle{\frac{\psi(|\vv a|)}{ {\vv a}.{\vv a}} } \Big) \cap {\Bbb I}^2
$$ for infinitely many vectors $\vv a\in{\Bbb Z}^2$ and $ p \in
{\Bbb Z}$. The proof of Theorem \ref{t4} follows on establishing the
convergent and divergent parts separately. We make use of the fact
that:
$$
\sum_{h=1}^\infty \ g\!\left(\dfrac{\psi(h)}{h^2}\right) \times
\ h^2 \ \ \asymp \ \ \sum_{\vv a\in{\Bbb Z}^2\smallsetminus\{\vv 0\}}
\ g\!\left(\dfrac{\psi(|\vv a|)}{|\vv a|^2}\right) \times \ |\vv
a| \ .
$$
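This comparison simply reflects the fact that the number of $\vv a\in{\Bbb Z}^2$ with $|\vv a|=h$ is comparable to $h$; grouping the vectors $\vv a$ according to $|\vv a|$ gives
$$
\sum_{\vv a\in{\Bbb Z}^2\smallsetminus\{\vv 0\}} \ g\!\left(\dfrac{\psi(|\vv a|)}{|\vv a|^2}\right) \times \ |\vv a| \ \ \asymp \ \ \sum_{h=1}^\infty \ h \times g\!\left(\dfrac{\psi(h)}{h^2}\right) \times \ h \ = \ \sum_{h=1}^\infty \ g\!\left(\dfrac{\psi(h)}{h^2}\right) \times \ h^2 \ .
$$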
\noindent{\em The case of convergence. \ } The assumption that
$\psi$ is monotonic is irrelevant to this case. The proof follows
on modifying the argument of \S\ref{t1conv} in the obvious manner
with $n=2$, $m=1$ and with $\Psi(\vv a)/|\vv a| $ replaced by
$\psi(|\vv a|)/|\vv a|^2$.
\noindent{\em The case of divergence. \ }
With reference to our general framework \S\ref{gf}, let $k = 2$ and $m = 1$.
Hence, $l= 1$. Furthermore, let $ J := \{ (\vv a, p)
\in {\Bbb Z}^2\setminus\{\vv 0\} \times {\Bbb Z} : |\vv a | = |a_1| \} $,
$\alpha := (\vv a, p) \in J $, $R_\alpha := R_{{\vv a},p}$ where
the latter is given by (\ref{resset4}) and $ \Upsilon_\alpha :=
\psi(|\vv a|)/ {\vv a}.{\vv a} $. Then,
$$ S_2(\psi) \ \supset \ \widetilde{S}_2(\psi)
\ := \ \Lambda(\Upsilon) \, \cap \, {\Bbb I}^2 \ \
. $$ It is easily verified that $ |\widetilde{S}_2(\psi)|_2 = 1 $
whenever $ |S_2(\psi)|_2 = 1 $. Thus, it suffices to consider the
set $\widetilde{S}_2(\psi)$. Let $ V : =\{ \vv x = (x_1,x_2) \in
{\Bbb R}^2 \ :\ x_2 = 0 \} $. Trivially, conditions (i) and (ii)
of Theorem \ref{t3} are satisfied and with $\Omega = {\Bbb I}^2 $ the
divergence case now follows.
$\Box$
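For the reader's convenience, we sketch the verification of conditions (i) and (ii) in this setting. Since $|\vv a|=|a_1|$ and $\vv a\neq\vv 0$, we have $a_1\neq0$, so the line $R_{{\vv a},p}$ meets $V$ in the single point $(p^2/a_1^2,0)$; this is condition (i). For (ii), a point $(x_1,0)\in\Delta(R_{{\vv a},p},1)$ satisfies $|a_1^2x_1-p^2|<\sqrt{a_1^4+a_2^4}$, so that $V\cap\Delta(R_{{\vv a},p},1)$ is an interval of length $2\sqrt{a_1^4+a_2^4}/a_1^2\leq 2\sqrt2$, since $|a_2|\leq|a_1|$.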
\subsection{Generalizing Theorem \ref{t3} to fractal subsets $X$ of ${\Bbb R}^k$}
On making use of the general Mass Transference Principle
established in \cite[\S6.1]{mtp} and adapting the Slicing lemma in
an appropriate manner, it is possible to generalize Theorem
\ref{t3} to the following `fractal' setup. With $k,l$ and $m$ as
in \S\ref{gf}, let $K$ be a compact subset of ${\Bbb R}^l$. Suppose
there exists a dimension function $h$ and constants
$0<c_1<1<c_2<\infty$ and $r_0 > 0$ such that
\begin{equation*}\label{g}
c_1\ h(r) \ \leqslant \ {\cal H}^h(B(x,r)) \ \leqslant \ c_2\ h(r) \ ,
\end{equation*}
for any ball $B(x,r)$ with $x\in K$ and $r\leq r_0$. In the case $h
: r \to r^{\delta}$ for some $\delta > 0$, the above measure
condition on balls implies that $ \dim K = \delta $ and moreover
that ${\cal H}^{\delta} (K)$ is strictly positive and finite. The
simplest example of a fractal set $K$ satisfying these measure
theoretic properties is the standard middle third Cantor set --
simple take $h : r \to r^{\delta}$ with $\delta := \log 2 / \log
3$. More sophisticated examples include the attractor $K$
arising from a family of contracting self similarity maps of
${\Bbb R}^l$ satisfying the open set condition \cite{falc,MAT}. Now let
$$ X \ := \ K \times {\Bbb R}^m \ . $$ Thus, $X$ is a subset of ${\Bbb R}^k$
equipped with the product measure $\mu := {\cal H}^h \times | \ \ |_m
$. Note that if $\dim K = \delta $, then $\dim X = \delta + m $
and furthermore if $\delta < l $, then $X$ is a set of
$k$--dimensional Lebesgue measure zero. Finally, let $B$ be an
arbitrary ball in $X$ and consider the set $B \cap
\Lambda(\Upsilon) $. Thus, the points of interest are
restricted to $X$ since $B$ is by definition a subset of $X$.
In short, it is possible to
establish an analogue of Theorem \ref{t3} which enables us to
transfer full measure theoretic statements
with respect to the measure $\mu$ on $X$
to general Hausdorff measure theoretic
statements for $B \cap \Lambda(\Upsilon)$.
The details of this and its many consequences will be the
subject of a forthcoming article.
\section{Appendix: Proof of Lemma \ref{slicing1} }
On taking $\phi: {\Bbb R}^k \to V^\perp $ to be the orthogonal projection
map in the following statement, one easily deduces Lemma
\ref{slicing1}.
\begin{mat} \label{him}
Let $l,k\in{\Bbb N}$ such that $l \leq k $ and $f$ and $g:r\mapsto
r^{-l}f(r)$ be dimension functions. Furthermore, let $A\subset{\Bbb R}^k$
and let $\phi: A \to {\Bbb R}^l $ be a Lipschitz map. Then
$$
\int^{*}_{{\Bbb R}^l} {\cal H}^g(A \cap \phi^{-1}\{y\} ) \; d{\cal L}^ly \
\leq \ \alpha(l) \; 2^l \; {\rm Lip}(\phi)^l \; {\cal H}^f(A) \ \
.
$$
\end{mat}
{\em Remark. \ } This is essentially Lemma 7.7 in \cite{MAT}. For
the sake of comparison, the notation adopted above is as far as
possible the same as in \cite{MAT}. Thus, $\int^{*}$ denotes the
upper integral, ${\cal L}^l$ is the $l$--dimensional Lebesgue
measure on ${\Bbb R}^l$, $\alpha(l) := {\cal L}^l \{x \in {\Bbb R}^l : |x|
\leq 1 \} $ is the volume of the $l$--dimensional unit ball and
${\rm Lip}(\phi)$ is the Lipschitz constant of $\phi$. To avoid
unnecessary confusion when comparing Lemma \ref{slicing1}* with
Lemma 7.7 in \cite{MAT}, it is worth pointing out that our
statement contains an extra factor of $2^l$ since we have defined
Hausdorff measure in terms of radii of balls rather than
diameters. This extra factor has no effect in deducing Lemma
\ref{slicing1} since all that we require is that the right hand
side of the inequality appearing in Lemma \ref{slicing1}* is
finite whenever ${\cal H}^f(A) $ is finite.
\vspace*{2ex}
\subsection{Proof of Lemma \ref{slicing1}*}
The statement of Lemma \ref{slicing1}* follows on making the
obvious modifications to the proof of Lemma 7.7 in \cite{MAT}. It
follows from the definition of Hausdorff $f$--measure that for
each $n \in {\Bbb N}$, there exists a cover of $A$ by closed balls
$B_{n,1}, B_{n,2}, \ldots $ such that $r(B_{n,i}) \leq 1/n $ and
\begin{equation} \label{a1}
\sum_i f(r(B_{n,i})) \ \leq \ {\cal H}^{f}_{1/n} (A) \, + \, 1/n
\ \ .
\end{equation}
Let, $F_{n,i} := \{ y \in {\Bbb R}^l : B_{n,i} \ \cap \ \phi^{-1} \{y\}
\neq \emptyset \} $. By definition, if $y,z \in F_{n,i}$ then
there exist $u,v \in A \cap B_{n,i} $ such that $\phi(u)=y$ and
$\phi(v)=z$. It follows that $|y-z| \leq {\rm Lip}(\phi) \, |u-v|$
and so
\begin{equation} \label{a2}
{\cal L}^l(F_{n,i} ) \ \leq \ \alpha(l) \ ({\rm Lip}(\phi) \, 2 \,
r(B_{n,i}) \, )^l \ .
\end{equation}
For $y \in {\Bbb R}^l$, let
$B_{n,i}^{\phi} (y) $ denote a ball of diameter $d(B_{n,i} \cap
\phi^{-1} \{y\})$ such that $B_{n,i} \cap \phi^{-1} \{y\} \subseteq
B_{n,i}^{\phi} (y) $. On applying Fatou's lemma and using the fact
that $g $ is non-decreasing, we obtain that
\begin{eqnarray*}
\int^{*}_{{\Bbb R}^l} {\cal H}^g(A \cap \phi^{-1}\{y\} ) \; d{\cal L}^ly
& = & \int^{*}_{{\Bbb R}^l} \lim_{n \to \infty} {\cal H}^g_{1/n} (A
\cap \phi^{-1}\{y\} ) \; d{\cal L}^ly ~ \hspace*{18ex} ~ \\ & & \\
& \leq & \int_{{\Bbb R}^l} \liminf_{n \to \infty} \sum_i g(\,
r(B_{n,i}^{\phi} (y)) \, ) \; d{\cal L}^ly \\ & & \\ & \leq &
\liminf_{n \to \infty} \ \sum_i \ \int_{F_{n,i}} \!\!\!\! g\Big(
\, \mbox{{\small $\frac{1}{2}$}} \, d(B_{n,i} \cap \phi^{-1}
\{y\}) \, \Big) \; d{\cal L}^ly \\ & & \\ & \leq & \liminf_{n
\to \infty} \ \sum_i \ g(
r(B_{n,i} ) \, ) \ \ {\cal L}^l( F_{n,i} ) \\
& & \\ & \stackrel{(\ref{a2})}{\ \leq \ } & \alpha(l) \ ( 2 \, {\rm
Lip}(\phi) \, )^l \ \liminf_{n \to \infty} \ \sum_i \ f(
r(B_{n,i} ) \, ) \\
& & \\ & \stackrel{(\ref{a1})}{\ \leq \ } & \alpha(l) \ ( 2 \, {\rm
Lip}(\phi) \, )^l \ \liminf_{n \to \infty} \ \Big( {\cal
H}^{f}_{\mbox{{\tiny $1/n$}}}
(A) + 1/n \Big) \\
& & \\ & \leq & \alpha(l) \ ( 2 \, {\rm Lip}(\phi) \, )^l \ {\cal
H}^{f} (A) \ \ .
\end{eqnarray*}
$\Box$
\noindent{\bf Acknowledgments: \, } SV would like to thank
Ayesha and Iona for keeping him well focused on those important things
in life -- namely good times, mangoes and simplicity. Also,
many thanks to Bridget for sharing nearly half of her years with
me -- poor thing!
{ \small
\noindent Victor V. Beresnevich: Department of Mathematics,
University of York,
\noindent\phantom{Victor V. Beresnevich: }Heslington, York, YO10
5DD, England.
\noindent\phantom{Victor V. Beresnevich: }e-mail: [email protected]
\noindent Sanju L. Velani: Department of Mathematics, University
of York,
~ \hspace{17mm} Heslington, York, YO10 5DD, England.
~ \hspace{17mm} e-mail: [email protected]
}
\end{document}
\begin{document}
\begin{abstract}
The aim of this paper is to present some results about the space $L^\Phi(\nu),$ where $\nu$ is a vector measure on a compact (not necessarily abelian) group and $\Phi$ is a Young function. We show that under certain conditions, the space $L^\Phi(\nu)$ becomes an $L^1(G)$-module with respect to the usual convolution of functions. We also define one more convolution structure on $L^\Phi(\nu).$
\end{abstract}
\keywords{Compact group, Orlicz Space, vector measure, convolution}
\subjclass[2010]{Primary 43A77, 43A15; Secondary 46G10}
\maketitle
\section{Introduction}
It is well known that, if $G$ is a compact abelian group, then for any $1\leq p\leq \infty,$ the space $L^p(G)$ forms an $L^1(G)$-module. See \cite{F}. In 2009, O. Delgado and P. Miana \cite{DM} generalised this result to an $L^p$ space associated with a vector measure. This note has the modest aim of showing how some of these results can be carried over, not always trivially, to spaces associated to a vector measure on a compact group that is not necessarily abelian. More precisely, we show that an Orlicz space associated with a vector measure on a compact group is an $L^1(G)$-module.
In the classical case, as is well-known, the Haar measure plays a major role in proving many facts about the $L^p$ spaces. Therefore, in order to prove that an Orlicz space associated with a vector measure is an $L^1(G)$-module, it becomes necessary to assume certain kind of translation invariance of the vector measure.
In Section 3, we show that an Orlicz space associated with a norm integral translation invariant vector measure is a homogeneous space. In Section 4, our main aim is to show that an Orlicz space associated with a vector measure is an $L^1(G)$-module. We also show that the Haar measure is a Rybakov control measure for an absolutely continuous norm integral translation invariant vector measure with some density. In the abelian case, one of the ingredients for the proof of this result is the classical Markov-Kakutani fixed point theorem. This doesn't work if the group is not abelian. We would like to mention here that the proof given in this paper depends on a fixed point theorem provided in \cite{K}, which works for any compact group.
One of the classical results of abstract harmonic analysis is that an approximate identity in $L^1(G)$ also serves as an approximate identity for the $L^p(G)$ spaces. Theorem 4.5 of this paper generalizes this result to the case of an Orlicz space associated with a vector measure.
Finally, in Section 5 we consider another convolution product. This product was introduced in \cite{CFNP} for the $L^p$ spaces associated with a vector measure. In this section, we extend this convolution product to an Orlicz space associated with a vector measure.
Throughout this paper, $G$ will denote a compact group with a fixed normalized Haar measure $m_G.$
\section{Vector measures and their associated Orlicz spaces}
Let $G$ be a compact group, $\mathcal{B}(G)$ be the Borel $\sigma$-algebra on $G$ and let $m_G$ be the normalized Haar measure on $G.$ For $1\leq p\leq\infty,$ let $L^p(G)$ denote the usual $p^{th}$ Lebesgue space with respect to the measure $m_G.$ Let $\mathcal{S}(G)$ denote the space of all simple functions on $G$ and $\mathcal{C}(G)$ denote the space of all continuous functions on $G.$
Let $X$ be a complex Banach space and let $\nu$ be a $\sigma$-additive $X$-valued vector measure on $G$. Let $X^\prime$ denote the topological dual of $X$ and let $B_{X^\prime}$ denote the closed unit ball in $X^\prime.$ For each $x^\prime\in X^\prime,$ we shall denote by $\langle\nu,x^\prime\rangle,$ the corresponding scalar valued measure for the vector measure $\nu,$ which is defined as $\langle\nu,x^\prime\rangle(A)=\langle\nu(A),x^\prime\rangle, A\in\mathcal{B}(G).$ A set $A\in\mathcal{B}(G)$ is said to be $\nu$-null if $\nu(B)=0$ for every Borel set $B\subset A.$ The vector measure $\nu$ is said to be {\it absolutely continuous} with respect to a non-negative scalar measure $\mu$ if $\underset{\mu(A)\rightarrow 0}{\lim}\nu(A)=0, A\in\mathcal{B}(G).$ We shall denote this as $\nu\ll\mu.$ The semivariation of $\nu$ on a set $A\in\mathcal{B}(G)$ is defined as $\|\nu\|(A)=\underset{x^\prime\in B_{X^\prime}}\sup |\langle\nu,x^\prime\rangle|(A).$ We shall denote by $\|\nu\|$ the quantity $\|\nu\|(G).$ Let $M_{ac}(G,X)$ denote the space of all $X$-valued measures which are absolutely continuous with respect to the Haar measure $m_G$. A finite positive measure $\mu$ is said to be a {\it Rybakov control measure} for a vector measure $\nu,$ if $\nu\ll\mu=|\langle\nu,x^\prime\rangle|$ for some $x^\prime\in X^\prime.$ It follows from \cite[Theorem IX.2.2]{DU} that such an $x^\prime$ always exists. Further, by \cite[Pg. 10, Theorem 1]{DU} and from the inequality $\mu(A)\leq\|x^\prime\|_{X^\prime}\|\nu\|(A),\ A\in\mathcal{B}(G),$ it follows that the measures $\nu$ and $\mu$ have same null-sets. For more details on vector measures see \cite{DU} and \cite{ORP}.
We say that a map $\Phi:[0,\infty)\rightarrow [0,\infty)$ is a Young function if $\Phi$ is a strictly increasing, convex and continuous function such that $\Phi(t)=0$ if and only if $t=0$ and $\underset{t\rightarrow\infty}{\lim}\Phi(t)=\infty.$ The complementary Young function $\Psi$ of the function $\Phi$ is given by $\Psi(t)=\sup\{ts-\Phi(s):s\geq 0\},~t\geq 0.$ For a (finite) positive measure $\mu$ on $G$, let $L^\Phi(\mu)$ denote the Orlicz space, consisting of all complex-valued measurable functions on $G$ with $\|f\|_{L^\Phi(\mu)}<\infty,$ where $\|\cdot\|_{L^\Phi(\mu)}$ is the Luxemburg norm given by $$\|f\|_{L^\Phi(\mu)}=\inf\left\{k>0: \int_G\Phi\left(\frac{|f|}{k}\right)\,d\mu\leq 1\right\}.$$ Note that the Orlicz space is a natural generalisation of the classical $L^p$ spaces. For more on Orlicz spaces see \cite{RR}.
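For instance, when $\Phi(t)=t^p$ with $1\leq p<\infty,$ the Luxemburg norm recovers the usual $L^p$ norm: $$\int_G\Phi\left(\frac{|f|}{k}\right)\,d\mu=\frac{1}{k^p}\int_G|f|^p\,d\mu\leq 1 \iff k\geq\left(\int_G|f|^p\,d\mu\right)^{1/p},$$ so that $\|f\|_{L^\Phi(\mu)}=\|f\|_{L^p(\mu)}.$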
Let $L_w^\Phi(\nu)$ denote the weak Orlicz space with respect to a vector measure $\nu,$ i.e., a space consisting of all complex-valued measurable functions $f$ such that $\|f\|_{\nu,\Phi}<\infty,$ where $\|f\|_{\nu,\Phi}=\underset{x^\prime\in B_{X^\prime}}{\sup}\|f\|_{L^\Phi(|\langle\nu,x^\prime\rangle|)}.$ Let $L^\Phi(\nu)$ denote the closure of simple functions under the norm $\|\cdot\|_{\nu,\Phi}.$ The space $L^\Phi(\nu)$ is known as the Orlicz space with respect to the vector measure $\nu.$ Note that if $\Phi(t)=t,$ then the spaces $L^\Phi(\nu)$ and $L^\Phi_w(\nu)$ coincide with $L^1(\nu)$ and $L^1_w(\nu)$ respectively. We shall denote by $\|\cdot\|_\nu$ the norm on the space $L^1_w(\nu).$ Further, for any Young function $\Phi,$ the spaces $L^\Phi_w(\nu)$ and $L^\Phi(\nu)$ are continuously embedded in $L^1_w(\nu)$ and $L^1(\nu)$ respectively. Moreover, if $f\in L^\Phi_w(\nu),$ then, by \cite[Proposition 4.2]{FF}, $\|f\|_\nu\leq2\|\chi_G\|_{\nu,\Psi}\|f\|_{\nu,\Phi},$ i.e., $L^\Phi_w(\nu)$ is continuously embedded in $L^1(\nu).$ For more details see \cite{D}.
From now onwards, $X$ will denote a Banach space and $\nu$ an $X$-valued measure on $G.$ Further, $\Phi$ will denote a Young function with $\Psi$ as its complementary Young function.
\section{Homogeneity of the space $L^\Phi(\nu)$}
The main aim of this section is to show that the space $L^\Phi(\nu)$ is a homogeneous space. We begin this section with the definition of a homogeneous space.
A Banach function space $Z$ is said to be a {\it homogeneous space} if for each $f\in Z$ and $t,s\in G,$ we have that
\begin{enumerate}[(i)]
\item $\tau_sf\in Z,$ where $\tau_sf(t)=f(s^{-1}t),$
\item $\|\tau_sf\|_Z=\|f\|_Z$ and
\item $s\mapsto\tau_sf$ is continuous from $G$ into $Z.$
\end{enumerate}
Let $h:G\rightarrow G$ be a homeomorphism. For a measurable function $f:G\rightarrow\mathbb{C}$, define a function $f_h:G\rightarrow\mathbb{C}$ by $f_h=f\circ h^{-1}.$ Note that $f_h$ is also a measurable function. Similarly, define a measure $\nu_h$ as $\nu_h(A)=\nu(h(A)),~A\in\mathcal{B}(G).$
We denote by $I_\nu$ the continuous linear operator $I_\nu:L^1(\nu)\rightarrow X$ given by $I_\nu(f)=\int_Gf\,d\nu,\ f\in L^1(\nu)$. See \cite[Pg. 152]{ORP}.
\begin{defn}\mbox{ }
\begin{enumerate}[(i)]
\item A vector measure $\nu$ is said to be a norm integral $h$-invariant vector measure if $\|I_\nu(\phi_h)\|=\|I_\nu(\phi)\|$ $\forall\ \phi\in\mathcal{S}(G).$
\item A Banach function space $Z$ is said to be norm $h$-invariant if for each $f\in Z,$ we have $f_h\in Z$ and $\|f_h\|_Z=\|f\|_Z.$
\end{enumerate}
\end{defn}
Observe that a vector measure $\nu$ is norm integral $h$-invariant if and only if $\nu$ is norm integral $h^{-1}$-invariant.
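A simple example to keep in mind is the Haar measure $m_G$ itself, viewed as a $\mathbb{C}$-valued vector measure: if $h$ is the left translation $t\mapsto st$ for some $s\in G,$ then for any $\phi\in\mathcal{S}(G),$ $$I_{m_G}(\phi_h)=\int_G\phi(s^{-1}t)\,dm_G(t)=\int_G\phi\,dm_G=I_{m_G}(\phi),$$ so that $m_G$ is norm integral translation invariant.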
Now we prove a lemma which will help us to prove the norm $h$-invariance of (weak) Orlicz spaces with respect to a norm integral $h$-invariant vector measure.
\begin{lem}\label{dual}Let $\nu$ be a norm integral $h$-invariant vector measure and $x^\prime\in X^\prime$. Then there exists $x_h^\prime\in X^\prime$ such that $\|x_h^\prime\|\leq\|x^\prime\|.$ Further, $\langle\nu_h,x^\prime\rangle(A)=\langle\nu,x_h^\prime\rangle(A),~A
\in\mathcal{B}(G).$
\end{lem}
\begin{proof}
Define a map $T_h:I_\nu(\mathcal{S}(G))\subset X\rightarrow I_\nu(\mathcal{S}(G))$ by $T_h(I_\nu(\phi))=I_\nu(\phi_h).$ Since $\nu$ is norm integral $h$-invariant, it follows that the map $T_h$ is an isometry. Now, for $x^\prime\in X^\prime,$ let $y_h^\prime=x^\prime\circ T_h.$ Then $y_h^\prime\in (I_\nu(\mathcal{S}(G)))^\prime$ and $\|y_h^\prime\|\leq\|x^\prime\|.$ Further, by Hahn-Banach extension theorem, there exists $x_h^\prime\in X^\prime$ such that $x_h^\prime=y_h^\prime$ on $I_\nu(\mathcal{S}(G))$ and $\|x_h^\prime\|=\|y_h^\prime\|\leq\|x^\prime\|.$
Now, let $A\in\mathcal{B}(G).$ Then, we have,
\begin{align*}
\langle\nu_h,x^\prime\rangle(A)=&\langle\nu_h(A),x^\prime\rangle = \langle I_\nu((\chi_A)_h),x^\prime\rangle\\ =&\langle T_h(I_\nu(\chi_A)),x^\prime\rangle=\langle I_\nu(\chi_A),x^\prime\circ T_h\rangle \\ =&\langle\nu(A),y_h^\prime\rangle=\langle\nu,x_h^\prime
\rangle(A).\qedhere
\end{align*}
\end{proof}
We now prove that the spaces $L^\Phi_w(\nu)$ and $L^\Phi(\nu)$ are norm $h$-invariant for a norm integral $h$-invariant vector measure $\nu$.
\begin{thm}\label{2T1}Let $\nu$ be a norm integral $h$-invariant vector measure. Then the spaces $L^\Phi_w(\nu)$ and $L^\Phi(\nu)$ are norm $h$-invariant.
\end{thm}
\begin{proof}
Since $\nu$ is a norm integral $h$-invariant vector measure, for any $x^\prime\in X^\prime,$ by Lemma \ref{dual}, there exists $x_h^\prime\in X^\prime$ such that $\|x_h^\prime\|\leq\|x^\prime\|$ and $\langle\nu_h,x^\prime\rangle(A)=\langle\nu,x_h^\prime\rangle(A),\forall\ A
\in\mathcal{B}(G).$ We now claim that $\|f_h\|_{L^\Phi(|\langle\nu,x^\prime\rangle|)}=\|f\|_{L^\Phi(|\langle\nu,x_h^\prime\rangle|)},\ \forall\ f\in L^\Phi_w(\nu).$ Let $f=\sum_{i=1}^n\alpha_i\chi_{A_i}\in\mathcal{S}(G)$. Then, for $k>0,$ using Lemma \ref{dual} and the fact that the sets $A_i$'s are disjoint, we have
\begin{align*}
\int_G\Phi\left(\frac{|f_h|}{k}\right)\,d|\langle\nu,x^\prime\rangle|=& \int_G\Phi\left(\sum_{i=1}^n\frac{|\alpha_i|}{k}\chi_{h(A_i)}\right)\,d|\langle\nu,x^\prime\rangle|\\=&\sum_{i=1}^n\Phi\left(\frac{|\alpha_i|}{k}\right)|\langle\nu,x^\prime\rangle|(h(A_i))\\=&\sum_{i=1}^n\Phi\left(\frac{|\alpha_i|}{k}\right)|\langle\nu,x_h^\prime\rangle|(A_i)\\=&\int_G\Phi\left(\frac{|f|}{k}\right)\,d|\langle\nu,x_h^\prime\rangle|.
\end{align*}
Now, the claim follows from the monotone convergence theorem.
Let $f\in L^\Phi_w(\nu).$ Then, using Lemma \ref{dual}, we have,
\begin{align*}
\|f_h\|_{\nu,\Phi} =&\underset{x^\prime\in B_{X^\prime}}{\sup}\|f_h\|_{L^\Phi(|\langle\nu,x^\prime\rangle|)} \\ =&\underset{x^\prime\in B_{X^\prime}}{\sup}\|f\|_{L^\Phi(|\langle\nu,x_h^\prime\rangle|)} \\ \leq &\underset{x_h^\prime\in B_{X^\prime}}{\sup}\|f\|_{L^\Phi(|\langle\nu,x_h^\prime\rangle|)}=\|f\|_{\nu,\Phi}.
\end{align*}
Note that if $\nu$ is a norm integral $h$-invariant measure, then $\nu$ is a norm integral $h^{-1}$-invariant measure and therefore $\|f\|_{\nu,\Phi}=\|(f_h)_{h^{-1}}\|_{\nu,\Phi}\leq\|f_h\|_{\nu,\Phi}.$ Hence the space $L^\Phi_w(\nu)$ is norm $h$-invariant.
Now let $f\in L^\Phi(\nu).$ Then there exists a sequence $(\phi_n)$ of simple functions such that $\phi_n\rightarrow f$ in the $\|\cdot\|_{\nu,\Phi}$ norm. Since $(\phi_n)_h\subset\mathcal{S}(G)$ and $$\|(\phi_n)_h-f_h\|_{\nu,\Phi}=\|(\phi_n-f)_h\|_{\nu,\Phi}=\|\phi_n-f\|_{\nu,\Phi},$$ it follows that $f_h\in L^\Phi(\nu),$ as $L^\Phi(\nu)$ is the closure of the simple functions. Hence $L^\Phi(\nu)$ is norm $h$-invariant.
\end{proof}
Next we show that the space of continuous functions on $G$ is dense in $L^\Phi(\nu).$ This will be used to prove homogeneity of the Orlicz space with respect to a norm integral translation invariant vector measure.
\begin{prop}\label{density}
Let $\nu\in M_{ac}(G,X).$ Then $\mathcal{C}(G)$ is dense in $L^\Phi(\nu).$
\end{prop}
\begin{proof}
Let $\phi\in\mathcal{S}(G)$ and $\epsilon>0.$ For every $\delta>0,$ by Lusin's theorem, there exist $f\in \mathcal{C}(G)$ and a set $A\in\mathcal{B}(G)$ with $m_G(A)<\delta$ such that $\phi=f$ on the complement $A^c$ of $A$ and $\|f\|_\infty\leq\|\phi\|_\infty.$ By \cite[p. 78, Corollary 7]{RR}, we have
\begin{align*}
\|\phi-f\|_{\nu,\Phi}\leq&\|\phi-f\|_\infty\|\chi_A\|_{\nu,\Phi} \\ \leq & 2\|\phi\|_\infty\underset{x^\prime\in B_{X^\prime}}{\sup}\left[\Phi^{-1}\left(\frac{1}{|\langle\nu,x^\prime\rangle|(A)}\right)\right]^{-1}.
\end{align*}
Since $\nu\in M_{ac}(G,X),$ by choosing $\delta$ small enough, we get $\|\phi-f\|_{\nu,\Phi}<\epsilon.$ Now the result follows from the density of $\mathcal{S}(G)$ in $L^\Phi(\nu).$
\end{proof}
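For the reader's convenience, we note that the expression for $\|\chi_A\|_{\nu,\Phi}$ used above is immediate from the definition of the Luxemburg norm: for $x^\prime\in B_{X^\prime}$ with $|\langle\nu,x^\prime\rangle|(A)>0,$ $$\int_G\Phi\left(\frac{\chi_A}{k}\right)\,d|\langle\nu,x^\prime\rangle|=\Phi\left(\frac{1}{k}\right)|\langle\nu,x^\prime\rangle|(A)\leq 1 \iff \frac{1}{k}\leq\Phi^{-1}\left(\frac{1}{|\langle\nu,x^\prime\rangle|(A)}\right),$$ so that $\|\chi_A\|_{L^\Phi(|\langle\nu,x^\prime\rangle|)}=\left[\Phi^{-1}\left(\frac{1}{|\langle\nu,x^\prime\rangle|(A)}\right)\right]^{-1}.$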
\begin{rem}
Since the trigonometric polynomials on $G$ are dense in $\mathcal{C}(G)$ with respect to the uniform norm, it follows from Proposition \ref{density} that the trigonometric polynomials on $G$ are dense in $L^\Phi(\nu)$ for $\nu\in M_{ac}(G,X).$
\end{rem}
Now we conclude the section with the main result that an Orlicz space with respect to a norm integral translation invariant vector measure is homogeneous.
\begin{thm}\label{homogeneity}
Let $\nu\in M_{ac}(G,X)$ be norm integral translation invariant. Then the space $L^\Phi(\nu)$ is homogeneous.
\end{thm}
\begin{proof}
Let $f\in L^\Phi(\nu).$ By Theorem \ref{2T1}, $L^\Phi(\nu)$ is norm translation invariant and it is enough to show that the map $s\rightarrow\tau_sf$ is continuous from $G$ to $L^\Phi(\nu).$ Let $\epsilon>0.$ By Proposition \ref{density}, there exists $g\in \mathcal{C}(G)$ such that $\|f-g\|_{\nu,\Phi}<\epsilon/3.$ Further, for $t,s\in G,$
\begin{align*} \|\tau_tf-\tau_sf\|_{\nu,\Phi}\leq&\|\tau_tf-\tau_tg\|_{\nu,\Phi}+\|\tau_tg-\tau_sg\|_{\nu,\Phi}+\|\tau_sg-\tau_sf\|_{\nu,\Phi}\\\leq& 2\|f-g\|_{\nu,\Phi}+\|\tau_tg-\tau_sg\|_\infty\|\chi_G\|_{\nu,\Phi}\\<&\frac{2\epsilon}{3}+\|\tau_tg-\tau_sg\|_\infty\|\chi_G\|_{\nu,\Phi}.
\end{align*}
As $G$ is compact, $g$ is uniformly continuous and therefore there exists a neighbourhood $N$ of $e$ in $G$ such that $|g(t)-g(s)|<\frac{\epsilon}{3}\|\chi_G\|_{\nu,\Phi}^{-1},$ for every $t,s\in G$ such that $ts^{-1}\in N.$ Hence we have $\|\tau_tf-\tau_sf\|_{\nu,\Phi}<\epsilon$ for every $t,s\in G$ such that $ts^{-1}\in N.$
\end{proof}
\section{Convolution product for $L^\Phi_w(\nu)$ and $L^\Phi(\nu)$}
Throughout this section, $\nu$ will denote an absolutely continuous norm integral translation invariant vector measure. In this section, we first show that the Haar measure $m_G$ is a Rybakov control measure for the vector measure $\nu$ with some density. We will then show that $L^\Phi_w(\nu)$ and $L^\Phi(\nu)$ are Banach algebras under some conditions on $\nu.$
Note that by \cite[Pg. 108, Theorem 3.7]{ORP}, the space $L^1(\nu)$ is an order continuous Banach function space w.r.t. any Rybakov control measure for $\nu$. Hence by \cite[Pg. 35, Proposition 2.16]{ORP} the K\"othe dual $L^1(\nu)^*$ of $L^1(\nu)$ coincides with the topological dual of $L^1(\nu).$ For more details on K\"othe dual see \cite{ORP}.
\begin{thm}\label{2T2}If $\mu$ is a Rybakov control measure for $\nu$, then there exists $0< h\in L^1(\nu)^*$ such that $\|h\|_{L^1(\nu)^*}=\|\nu\|^{-1}$, where $L^1(\nu)^*$ is the K\"othe dual space of $L^1(\nu).$ Further $m_G(A)=\int_Ah\,d\mu,~A\in\mathcal{B}(G).$
\end{thm}
\begin{proof}
Consider the space $L^1(\nu)^\prime$ with the weak* topology. Consider the family $(\tau_t^\ast)_{t\in G}$ of linear operators, where $\tau_t^\ast:L^1(\nu)^\prime\rightarrow L^1(\nu)^\prime$ is given by $\tau_t^*(f^\prime)=f^\prime\circ\tau_t,~f^\prime\in L^1(\nu)^\prime.$ Taking translation as the homeomorphism and $\Phi(t)=t$ in Theorem \ref{2T1}, we have that $\tau_t:L^1(\nu)\rightarrow L^1(\nu)$ is an isometric isomorphism, and it follows that $\tau_t^\ast$ is a weak*-weak* continuous linear operator. Further, note that the map $t\mapsto\tau_t^\ast$ is a continuous map from the compact group $G$ into $\mathcal{B}(L^1(\nu)^\prime)$ and hence the set $\{\tau_t^\ast:t\in G\}$ is compact. In particular, it is a totally bounded group.
Now, consider the set $S=\{f^\prime\in B_{L^1(\nu)^\prime}:f^\prime(\chi_G)=\|\nu\|\}.$ Since $\nu$ is non-zero, it is clear that $\|\chi_G\|_\nu=\|\nu\|\neq 0$ and therefore $0\neq\chi_G\in L^1(\nu).$ Hence, by Hahn-Banach theorem, there exists $f^\prime\in L^1(\nu)^\prime$ such that $\|f^\prime\|_{L^1(\nu)^\prime}=1$ and $f^\prime(\chi_G)=\|\chi_G\|_\nu=\|\nu\|.$ This implies that $S$ is non-empty.
Let $f_1^\prime,f_2^\prime\in S$ and $r\in(0,1).$ Then
\begin{align*}
\|rf_1^\prime+(1-r)f_2^\prime\|_{L^1(\nu)^\prime}=&\underset{f\in B_{L^1(\nu)}}{\sup}|(rf_1^\prime+(1-r)f_2^\prime)(f)|\\\leq&\underset{f\in B_{L^1(\nu)}}{\sup}(r\|f_1^\prime\|_{L^1(\nu)^\prime}+(1-r)\|f_2^\prime\|_{L^1(\nu)^\prime})\|f\|_\nu\\\leq&1
\end{align*}
and $(rf_1^\prime+(1-r)f_2^\prime)(\chi_G)=rf_1^\prime(\chi_G)+(1-r)f_2^\prime(\chi_G)=\|\nu\|.$ Thus, $S$ is convex.
Define $\beta:L^1(\nu)^\prime\rightarrow\mathbb{C}$ by $\beta(f^\prime)=f^\prime(\chi_G).$ It is clear that $\beta$ is continuous. Then the set $S=\beta^{-1}(\{\|\nu\|\})\cap B_{L^1(\nu)^\prime}$ is a closed subset of $B_{L^1(\nu)^\prime}$ and $B_{L^1(\nu)^\prime}$ is compact by the Banach-Alaoglu theorem. Therefore, $S$ is compact.
Let $t\in G$ and $f^\prime\in S.$ Then, $$\|\tau_t^\ast(f^\prime)\|_{L^1(\nu)^\prime}\leq\|\tau_t^\ast\|\|f^\prime\|_{L^1(\nu)^\prime}=\|\tau_t\|\|f^\prime\|_{L^1(\nu)^\prime}\leq 1$$ which implies that $\tau_t^\ast(f^\prime)\in B_{L^1(\nu)^\prime}.$ Further, $\tau_t^\ast(f^\prime)(\chi_G)=f^\prime(\tau_t\chi_G)=f^\prime(\chi_G)=\|\nu\|.$ Thus, $\tau_t^\ast(f^\prime)\in S.$ Hence, $\tau_t^\ast$ maps the non-empty, convex and compact set $S$ univalently onto itself for every $t\in G$. Therefore, by \cite[Corollary of Theorem 2, p. 245]{K}, there exists a common fixed point $f_0^\prime\in S$ for the family $(\tau_t^\ast)_{t\in G}$, that is, $\tau_t^\ast(f_0^\prime)=f_0^\prime~\forall~t\in G.$
Now, by the definition of the set $S$ and from the definition of the norm, it follows that $\|f_0^\prime\|_{L^1(\nu)^\prime}=1.$ Further, by \cite[Pg. 35, Proposition 2.16]{ORP}, there exists $f_0^\ast\in L^1(\nu)^\ast$ such that $\|f_0^\ast\|_{L^1(\nu)^\ast}=\|f_0^\prime\|_{L^1(\nu)^\prime}=1$ and $$f_0^\prime(f)=\int_G ff_0^\ast\ d\mu\ \forall\ f\in L^1(\nu).$$ Since $\chi_G\in L^1(\nu),$ it follows that $f_0^\ast\in L^1(\mu).$
Define a scalar measure $\mu_{f_0^\ast}$ as $$\mu_{f_0^\ast}(A)=\int_A f_0^\ast\ d\mu,\ \ A\in\mathcal{B}(G).$$ As $f_0^\prime$ is a fixed point for the family $\{\tau_t^\ast:t\in G\},$ it follows that the measure $\mu_{f_0^\ast}$ is translation invariant and hence so is the measure $|\mu_{f_0^\ast}|.$ Further, the density of $|\mu_{f_0^\ast}|$ is $|f_0^\ast|$ and hence non-zero. We now claim that the measure $|\mu_{f_0^\ast}|$ is regular. Let $A\in\mathcal{B}(G).$ Then $$|\mu_{f_0^\ast}|(A)=\int_A |f_0^\ast|\ d\mu\leq\|f_0^\ast\|_{L^1(\nu)^\ast}\|\chi_A\|_{L^1(\nu)}=\|\nu\|(A).$$ It now follows, from our assumption that $\nu\ll m_G,$ that $|\mu_{f_0^\ast}|\ll m_G$ and therefore by Radon-Nikodym theorem it follows that $|\mu_{f_0^\ast}|$ is regular. In particular, $|\mu_{f_0^\ast}|$ is a Haar measure and hence there exists a positive constant $c$ such that $|\mu_{f_0^\ast}|=cm_G.$ Note that $c=|\mu_{f_0^\ast}|(G)$ as $m_G$ is normalized. As $|f_0^\ast|$ is the density for the measure $|\mu_{f_0^\ast}|,$ it follows that the density for the Haar measure $m_G$ is $\frac{1}{|\mu_{f_0^\ast}|(G)}|f_0^\ast|.$
Let $h=\frac{1}{|\mu_{f_0^\ast}|(G)}|f_0^\ast|.$ Using the fact that $f_0^*\in L^1(\nu)^\ast,$ it follows that $h\in L^1(\nu)^*.$ Further, $$\|\nu\|=f_0^\prime(\chi_G)=\int_G f_0^\ast\ d\mu=\left|\int_G f_0^\ast\ d\mu\right|\leq \int_G |f_0^\ast|\ d\mu=|\mu_{f_0^\ast}|(G).$$ Thus $\|h\|_{L^1(\nu)^\ast}=\|\nu\|^{-1}.$
\end{proof}
\begin{rem}
Theorem $\ref{2T2}$ implies that the measure $m_G$ is a Rybakov control measure with some density and therefore the $\nu$-null sets coincide with the $m_G$-null sets.
\end{rem}
\begin{cor}\label{2T4}
The embedding of $L^1_w(\nu)$ inside $L^1(G)$ is continuous. Further $\|f\|_1\leq\frac{1}{\|\nu\|}\|f\|_\nu,~f\in L^1_w(\nu).$
\end{cor}
\begin{proof}
Let $\mu$ be a Rybakov control measure for $\nu.$ Since $\nu\ll m_G,$ by Theorem \ref{2T2}, there exists a function $0< h\in L^1(\nu)^\ast$ such that the Haar measure $m_G$ is given by $$m_G(A)=\int_A h\ d\mu,\ A\in\mathcal{B}(G)$$ and $\|h\|_{L^1(\nu)^\ast}=\frac{1}{\|\nu\|}.$ Now, let $f\in L^1_w(\nu).$ By monotone convergence theorem, it suffices to assume that $f$ is a simple function. Then,
\begin{align*}
\|f\|_1=\int_G |f|\ dm_G=&\int_G h|f|\ d\mu\leq \|h\|_{L^1(\nu)^\ast}\|f\|_{\nu}=\frac{1}{\|\nu\|}\|f\|_\nu.\qedhere
\end{align*}
\end{proof}
Since $L^\Phi_w(\nu)\hookrightarrow L^1_w(\nu),$ we have that $L^\Phi_w(\nu)\hookrightarrow L^1(G).$ Therefore, we can consider the classical convolution of any two functions in $L^\Phi_w(\nu).$ In the next theorem, we prove that the (weak) Orlicz space with respect to $\nu$ is an $L^1(G)$-module for the classical convolution.
\begin{thm}\label{2T3}
Let $f\in L^1(G)$ and $g\in L^\Phi_w(\nu).$ Then $f*g\in L^\Phi_w(\nu)$ with $\|f*g\|_{\nu,\Phi}\leq\|f\|_1\|g\|_{\nu,\Phi}$. In particular, if $g\in L^\Phi(\nu)$ then $f*g\in L^\Phi(\nu).$
\end{thm}
\begin{proof}Let $f\in L^1(G)$ and $g\in L^\Phi_w(\nu).$ Using Theorem \ref{2T1}, with translation as a homeomorphism, we have that $\tau_sg\in L^\Phi_w(\nu)$ with $\|\tau_sg\|_{\nu,\Phi}=\|g\|_{\nu,\Phi}.$ Then
\begin{align*}
\|f*g\|_{\nu,\Phi}=&\left\|\int_Gf(s)\tau_sg\,dm_G(s)\right\|_{\nu,\Phi}\\\leq&\int_G|f(s)|\|\tau_sg\|_{\nu,\Phi}\,dm_G(s)
\\=&\int_G|f(s)|\|g\|_{\nu,\Phi}\,dm_G(s)=\|f\|_1\|g\|_{\nu,\Phi}.
\end{align*}
Now, let $g\in L^\Phi(\nu).$ Then there exist sequences $(\phi_n)$ and $(\psi_n)$ of simple functions converging to $f$ in $L^1(G)$ and to $g$ in $L^\Phi(\nu)$, respectively. Since, for each $n\in\mathbb{N},$ $\phi_n\in L^1(G)$ and $\psi_n\in L^\infty(G),$ the function $\phi_n*\psi_n$ is bounded and therefore $\phi_n*\psi_n\in L^\Phi(\nu).$ Further,
\begin{align*}
\|\phi_n*\psi_n-f*g\|_{\nu,\Phi}\leq&\|(\phi_n-f)*\psi_n\|_{\nu,\Phi}+\|f*(\psi_n-g)\|_{\nu,\Phi}\\\leq&\|\phi_n-f\|_1\|\psi_n\|_{\nu,\Phi}+\|f\|_1\|\psi_n-g\|_{\nu,\Phi}.
\end{align*}
Therefore, the sequence $(\phi_n*\psi_n)$ converges to $f*g$ in the $\|\cdot\|_{\nu,\Phi}$ norm. Using the fact that $L^\Phi(\nu)$ is a closed subspace of $L^\Phi_w(\nu),$ we have that $f*g\in L^\Phi(\nu).$
\end{proof}
Our next result is the analogue of \cite[Proposition 2.42]{F}.
\begin{thm}[Existence of left approximate identity]\label{approximateidentity}The space $L^\Phi(\nu)$ has a left approximate identity, that is, there exists a net $(g_\lambda)_{\lambda\in\wedge}$ in $L^\Phi(\nu)$ such that
\begin{enumerate}[(i)]
\item $(g_\lambda)_{\lambda\in\wedge}\subset \mathcal{C}(G),$
\item $g_\lambda\geq 0\ \forall\ \lambda\in\wedge,$
\item $\mathrm{supp}(g_{\lambda'})\subset \mathrm{supp}(g_\lambda)\mbox{ if }\lambda\preccurlyeq\lambda',\ \lambda,\lambda'\in\wedge,$
\item $\mathrm{supp}(g_\lambda)\rightarrow\{e\}$ as $\lambda\rightarrow\infty,$
\item $\int_Gg_\lambda\,dm_G=1\ \forall\ \lambda\in\wedge$ and
\item $\underset{\lambda}{\lim}\|g_\lambda*f-f\|_{\nu,\Phi}=0,~f\in L^\Phi(\nu).$
\end{enumerate}
\end{thm}
\begin{proof}
It is well-known that a net $(g_\lambda)_{\lambda\in\wedge}$ satisfying (i) to (v) exists in $L^1(G).$ This net satisfies our requirements. Indeed, for $\lambda\in\wedge$ and $f\in L^\Phi(\nu),$
\begin{align*}
\|g_\lambda*f-f\|_{\nu,\Phi}=&\left\|\int_Gg_\lambda(s)\tau_sf\,dm_G(s)-f\int_Gg_\lambda(s)\,dm_G(s)\right\|_{\nu,\Phi}\\\leq&\int_Gg_\lambda(s)\|\tau_sf-f\|_{\nu,\Phi}\,dm_G(s).
\end{align*}
Taking $\lambda\rightarrow\infty$ and using Theorem \ref{homogeneity}, the result follows.
\end{proof}
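For instance (a concrete choice, not needed elsewhere in the paper), on the circle group $G=\mathbb{T}$, identified with $[-\frac{1}{2},\frac{1}{2})$ and carrying normalized Haar measure, the tent functions $$g_n(t)=n\,\max\{0,1-n|t|\},\qquad n\geq 2,$$ are continuous, non-negative, supported on the nested intervals $[-\frac{1}{n},\frac{1}{n}]$ shrinking to $\{0\}$, and satisfy $\int_G g_n\,dm_G=1$; they thus form a net (indexed by $\mathbb{N}$) with the properties (i) to (v) above.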
The next theorem is an analogue of \cite[Theorem 2.43]{F}.
\begin{thm}Let $I$ be a closed subspace of $L^\Phi(\nu).$ Then $I$ is a left $L^1(G)$-submodule of $L^\Phi(\nu)$ if and only if $I$ is a translation invariant subspace.
\end{thm}
\begin{proof}
Suppose that $I$ is a left $L^1(G)$-submodule of $L^\Phi(\nu).$ Let $f\in I$ and $t\in G.$ Let $(g_\lambda)_{\lambda\in\wedge}$ be a left approximate identity of $L^\Phi(\nu),$ provided by Theorem \ref{approximateidentity}. Then, using Theorem \ref{2T1}, $$\|(\tau_tg_\lambda)*f-\tau_tf\|_{\nu,\Phi}=\|\tau_t(g_\lambda*f-f)\|_{\nu,\Phi}=\|g_\lambda*f-f\|_{\nu,\Phi}\rightarrow 0.$$
Since $(\tau_tg_\lambda)*f\in I$ and as $I$ is a closed subspace of $L^\Phi(\nu)$, it follows that $\tau_tf\in I.$
Now let $I$ be a translation invariant subspace. Let $f\in L^1(G)$ and $g\in I.$ Since $f*g=\int_Gf(s)\tau_sg\,dm_G(s)$ and $\tau_sg\in I$ for every $s\in G$, the convolution $f*g$ belongs to the closed linear span of $\{\tau_sg:s\in G\}\subset I.$ Since $I$ is a closed subspace of $L^\Phi(\nu)$, it follows that $f*g\in I.$
\end{proof}
Now we discuss the convolution structure in the spaces $L^\Phi_w(\nu)$ and $L^\Phi(\nu)$.
\begin{thm}Let $f,g\in L^\Phi_w(\nu).$ Then $f*g\in L^\Phi_w(\nu)$ with $$\|f*g\|_{\nu,\Phi}\leq\frac{2}{\|\nu\|}\|\chi_G\|_{\nu,\Psi}\|f\|_{\nu,\Phi}\|g\|_{\nu,\Phi}.$$
In particular, if $f,g\in L^\Phi(\nu)$ then $f*g\in L^\Phi(\nu).$
\end{thm}
\begin{proof}
If $f,g\in L^\Phi_w(\nu)$ then using Theorem \ref{2T3} and the fact that $L^\Phi_w(\nu)\hookrightarrow L^1(G),$ we have $f*g\in L^\Phi_w(\nu)$ with $\|f*g\|_{\nu,\Phi}\leq\|f\|_1\|g\|_{\nu,\Phi}.$ Further, using \cite[Proposition 4.2]{FF} and Corollary \ref{2T4}, we have
\begin{align*}
\|f*g\|_{\nu,\Phi}\leq&\|f\|_1\|g\|_{\nu,\Phi}\leq\frac{1}{\|\nu\|}\|f\|_\nu\|g\|_{\nu,\Phi}\\\leq&\frac{2}{\|\nu\|}\|\chi_G\|_{\nu,\Psi}\|f\|_{\nu,\Phi}\|g\|_{\nu,\Phi}.
\end{align*}
The conclusion for $L^\Phi(\nu)$ follows from the density of $\mathcal{S}(G)$ in $L^\Phi(\nu).$
\end{proof}
We finish this section with a remark on the Banach algebra structure of the (weak) Orlicz spaces with respect to $\nu.$
\begin{rem}If the vector measure $\nu$ satisfies $\|\nu\|\geq 2\|\chi_G\|_{\nu,\Psi},$ then the spaces $L^\Phi_w(\nu)$ and $L^\Phi(\nu)$ become Banach algebras with classical convolution as multiplication.
\end{rem}
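Indeed, under the assumption $\|\nu\|\geq 2\|\chi_G\|_{\nu,\Psi}$ the constant in the preceding theorem is at most one, so that for $f,g\in L^\Phi_w(\nu)$, $$\|f*g\|_{\nu,\Phi}\leq\frac{2}{\|\nu\|}\|\chi_G\|_{\nu,\Psi}\|f\|_{\nu,\Phi}\|g\|_{\nu,\Phi}\leq\|f\|_{\nu,\Phi}\|g\|_{\nu,\Phi},$$ which is the submultiplicativity required of a Banach algebra norm.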
\section{Another convolution product for $L^\Phi_w(\nu)$ and $L^\Phi(\nu)$}
In this final section, we define another convolution product for (weak) Orlicz spaces with respect to the vector measure $\nu$. The main result of this section is Theorem \ref{convo2ineq}.
If $\nu\ll m_G,$ then $\langle\nu,x^\prime\rangle\ll m_G,~\forall\ x^\prime\in X^\prime.$ Therefore, by the Radon-Nikodym theorem there exists a function $h_{x^\prime}\in L^1(G)$ such that $d\langle\nu,x^\prime\rangle=h_{x^\prime}\,dm_G.$ Further, for $f\in L^1_w(\nu)$, $$\int_G|f|(t)d|\langle\nu,x^\prime\rangle|(t)=\int_G|fh_{x^\prime}|(t)\,dm_G(t).$$ Hence, $fh_{x^\prime}\in L^1(G)$ for every $f\in L^1_w(\nu).$ With this as motivation, we define another convolution product in the spirit of \cite[Definition 4.1]{CFNP} and study some inequalities.
\begin{defn}\label{conv2}
Let $1\leq p\leq\infty.$ The convolution of functions $f\in L^1_w(\nu)$ and $g\in L^p(G)$ with respect to a vector measure $\nu\in M_{ac}(G,X)$ is defined by $$f\mathbf{*}_{\nu}g(x^\prime)=(fh_{x^\prime})*g,~x^\prime\in X^\prime.$$
\end{defn}
Note that $f \mathbf{*}_{\nu}g(x^\prime)(t)=\int_Gf(s)\tau_sg(t)\,d\langle
\nu,x^\prime\rangle(s),~\forall\ t\in G.$ Since, for a norm integral translation invariant vector measure $\nu,$ the space $L^\Phi_w(\nu)$ is continuously embedded in $L^1(G),$ Definition \ref{conv2} makes sense for functions $f\in L^1_w(\nu)$ and $g\in L^\Phi_w(\nu)$. With this observation in hand, we can state the main result of this section.
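As a sanity check (this special case plays no role in what follows), if $X=\mathbb{C}$ and $\nu=m_G$, then $\langle\nu,x^\prime\rangle=x^\prime\, m_G$ and $h_{x^\prime}\equiv x^\prime$ for every $x^\prime\in X^\prime=\mathbb{C}$, so that $$f\mathbf{*}_{\nu}g(x^\prime)=(x^\prime f)*g=x^\prime\,(f*g),$$ and Definition \ref{conv2} reduces to the classical convolution up to the scalar $x^\prime$.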
\begin{thm}\label{convo2ineq}
Let $\nu\in M_{ac}(G,X)$ be a norm integral translation invariant vector measure. If $f\in L^1_w(\nu)$ and $g\in L^\Phi_w(\nu),$ then $f\mathbf{*}_{\nu}g\in \mathcal{B}(X^\prime, L^\Phi_w(\nu))$ and $$\|f\mathbf{*}_{\nu}g\|_{\mathcal{B}(X^\prime, L^\Phi_w(\nu))}\leq\|f\|_\nu\|g\|_{\nu,\Phi}.$$ In particular, if $f\in L^1(\nu)$ and $g\in L^\Phi(\nu)$ then $f\mathbf{*}_{\nu}g\in \mathcal{B}(X^\prime, L^\Phi(\nu)).$
\end{thm}
\begin{proof}
Let $x^\prime\in B_{X^\prime}.$ Using Theorem \ref{2T1}, we have,
\begin{align*}
\|f\mathbf{*}_{\nu}g(x^\prime)\|_{\nu,\Phi}=&\left\|\int_Gf(s)\tau_sg\,d\langle
\nu,x^\prime\rangle(s)\right\|_{\nu,\Phi}
\\\leq&\int_G|f(s)|\|\tau_sg\|_{\nu,\Phi}\,d|\langle
\nu,x^\prime\rangle|(s)\\\leq&\int_G|f(s)|\|g\|_{\nu,\Phi}\,d|\langle
\nu,x^\prime\rangle|(s)\leq\|f\|_\nu\|g\|_{\nu,\Phi}.
\end{align*}
Thus, it follows that, $f\mathbf{*}_{\nu}g\in \mathcal{B}(X^\prime, L^\Phi_w(\nu))$ with $\|f\mathbf{*}_{\nu}g\|_{\mathcal{B}(X^\prime, L^\Phi_w(\nu))}\leq\|f\|_\nu\|g\|_{\nu,\Phi}.$
The last statement follows again from the density of the space of simple functions.
\end{proof}
Using Theorem \ref{convo2ineq} and \cite[Proposition 4.2]{FF} we have the following immediate consequence.
\begin{cor}
Let $\nu\in M_{ac}(G,X)$ be a norm integral translation invariant vector measure. If $f,g\in L^\Phi_w(\nu)$ then $f\mathbf{*}_{\nu}g\in \mathcal{B}(X^\prime, L^\Phi_w(\nu))$ and $\|f\mathbf{*}_{\nu}g\|_{\mathcal{B}(X^\prime, L^\Phi_w(\nu))}\leq2\|\chi_G\|_{\nu,\Psi}\|f\|_{\nu,\Phi}\|g\|_{\nu,\Phi}.$ In particular, if $f,g\in L^\Phi(\nu)$ then $f\mathbf{*}_{\nu}g\in \mathcal{B}(X^\prime, L^\Phi(\nu)).$
\end{cor}
\section*{Acknowledgment}
The first author would like to thank the University Grants Commission, India for providing the research grant.
\end{document}
\begin{document}
\maketitle
\begin{abstract}We show that an infinite group is definable in any non trivial geometric $C$-minimal structure which is definably maximal and does not have any definable bijection between a bounded interval and an unbounded one in its canonical tree. No kind of linearity is assumed.
\end{abstract}
\section{Introduction}
In the spirit of the construction of a field from its projective plane, Boris Zilber proposed an ambitious program: in model theory, the notion of algebraic closure in suitable first order structures gives rise to combinatorial geometries. These geometries can be very similar to projective spaces. Zilber conjectured that a strongly minimal structure interprets an infinite group, or even an infinite field, as soon as it fulfills some conditions, conditions that are clearly necessary \cite{Z}. This conjecture turned out to be false in general. However, together with Ehud Hrushovski they were able to establish that the conjecture holds for what they called ``Zariski structures'', first order structures with a topology which mimics the Zariski topology \cite{HZ}. Ya'acov Peterzil and Sergei Starchenko proved a variant of the conjecture for the class of o-minimal structures \cite{PS}. O-minimal structures are linearly ordered structures, thus endowed with the topology defined by the ordering, and they present
strong analogies with strongly minimal structures.
\par\ \par\noindent
It is then natural to ask the question for $C$-minimal structures. The $C$-minimality condition is an analogue of strong minimality in the setting of ultrametric structures (or, more generally, $C$-structures), just as o-minimality is an analogue of strong minimality in the setting of ordered structures. However, the Steinitz exchange property, which is a consequence of strong minimality and of o-minimality, does not hold for all $C$-minimal structures.
If we assume it, we are in the setting of \emph{geometric structures} as defined by Ehud Hrushovski and Anand Pillay \cite{HP}, since $C$-minimal structures do eliminate the quantifier $\exists^\infty$, the other required property.
Geometric structures offer a common framework for strongly minimal or o-minimal structures as well as for many classical mathematical structures and provide tuned tools and techniques.
\par\ \par\noindent
The second author constructed an infinite definable group in any non-trivial, locally modular, geometric $C$-minimal structure (\cite{maalouf1} and \cite{maalouf2}).
In this paper, we remove the assumption of local modularity. New arguments have to be brought into the proof, and we follow the spirit of \cite{PS}.
Some extra conditions are assumed, that appear in \cite{annalestoulouse} and in the context of fields in \cite{HK}. They in particular guarantee the existence of limits of unary functions in the neighborhood of a point. This allows us to copy an essential element of \cite{PS}: the notion of ``tangent''. For a definable curve $X$ and a definable family $\cal F$ of curves, where $X$ and all the curves of $\cal F$ pass through some fixed point $P$, the idea is to determine a curve in $\cal F$ which, on a neighborhood of $P$, is closer to $X$ than any other element of $\cal F$.
\par\ \par\noindent
More precisely, we show the following.\par\ \par\noindent
{\bf Theorem:}\emph{
Let $\mathcal{M}$ be a $C$-minimal structure which is definably maximal, geometric and non-trivial. Suppose moreover that in the underlying tree of $\mathcal{M}$ there is no definable bijection between a bounded interval and an unbounded one. Then there is an infinite group definable in $\mathcal{M}$.}\par\ \par\noindent
The paper is organized as follows.
Section 2 is devoted to preliminaries on $C$-minimal structures.
In Section \ref{snicefamilies}, we show that either an infinite group or a family $\mathcal{F}$ of functions with \textit{nice properties} is definable in $\mathcal{M}$. The idea is then to get a group law by composing the elements of this family of functions. As the composition is in general not in $\mathcal{F}$, we have to \emph{approximate} it with a function from $\mathcal{F}$. To this end, we introduce in Sections \ref{stangents} and \ref{sderivatives} the notions of \emph{tangents} and \emph{derivatives}, and study their properties. We use them in Section \ref{sgrouplaw} to construct an infinite group in $\mathcal{M}$.
\section{Preliminaries}
We start with a few definitions and preliminary results. $C$-structures and $C$-minimal structures have been introduced and studied in \cite{macphersonhaskell} and \cite{macphersonsteinhorn}. We recall in what follows their definition and principal properties.\par\ \par\noindent
{\bf\emph{Notations:}} We use $\mathcal{M}, \mathcal{N},\ldots$ to denote structures and $M,N,\ldots$ for their underlying sets.\par\ \par\noindent
In this paper, a $C$-structure is a structure $\mathcal{M}=(M,C,\ldots)$, where $C$ is a ternary predicate satisfying the following axioms:
\begin{itemize}
\item $\forall x,y,z, C(x,y,z)\longrightarrow C(x,z,y)$
\item $\forall x,y,z, C(x,y,z)\longrightarrow \neg C(y,x,z)$
\item $\forall x,y,z,w, C(x,y,z)\longrightarrow [ C(x,w,z)\vee C(w,y,z) ] $
\item $\forall x,y, x\neq y,\, \exists z\neq y, C(x,y,z)$.
\end{itemize}
Note that these $C$-structures are sometimes called ``dense'', see \cite{delonlms}.
Let $\mathcal{M}$ be a $C$-structure. We call \emph{cone}\footnote{Note: $M$ is not a cone.} or \emph{open ball} any subset of $M$ of the form $\{x; \mathcal{M}\models C(a,x,b)\}$, where $a$ and $b$ are two distinct elements of $\mathcal{M}$. We call $0$-\emph{level set} or \emph{closed ball} any subset of $M$ of the form $\{x; \mathcal{M}\models \neg C(x,a,b)\}$, where $a$ and $b$ are two elements of $\mathcal{M}$. A set is said to be a \emph{ball} if it is an open ball or a closed ball. It follows from the first three axioms of $C$-relations that the cones of $M$ form a basis of a completely disconnected topology on $M$. The last axiom guarantees that all cones are infinite.\par\ \par\noindent
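The example to keep in mind (it serves here only as an illustration) is a field $K$ with a non-trivial valuation $v$: setting \[C(a,x,b)\ :\Longleftrightarrow\ v(x-b)>v(a-b),\] one obtains a $C$-structure on $K$ in which the cones are open valuative balls and the $0$-level sets are closed valuative balls.\par\ \par\noindent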
Let $(T,\leq)$ be a partially ordered set. We say that $(T,\leq)$ is a \emph{tree} if the set of elements of $T$ less than any fixed element is totally ordered by $\leq$, and if any two elements of $T$ have a greatest lower bound. A \emph{leaf} is a maximal element of $T$. A \emph{branch} is a maximal totally ordered subset of $T$. It is easy to check that if $a$ and $b$ are two branches of $T$, then $\sup(a\bigcap b)$ exists. On the set of branches of $T$, we define a ternary relation $C$ in the following way: we say that $C(a,b,c)$ is true if and only if $\sup(a\bigcap b)<\sup(b\bigcap c)$. It is easy to check that this relation on the set of branches satisfies the first three axioms of a $C$-relation.\par\ \par\noindent
A theorem from \cite{adeleke} says that $C$-structures can be looked at as sets of branches of a tree, equipped with the $C$-relation as defined above. The construction of the \emph{underlying tree} $T(\mathcal{M})$ of a $C$-structure $\mathcal{M}$ has been slightly modified in \cite{delonlms}. In this new construction, $M$ can be identified with the set of leaves of $T(\mathcal{M})$. The tree $T(\mathcal{M})$ appears as the quotient of $M^2$ by an adequate equivalence relation which is definable in $(M,C)$. To an element $x \in M$, we associate the branch
$br_x := \{ \nu \in T(\mathcal{M}) : \nu \leq x \}$.
To elements $x,y\in M$, we associate the node $t:=\sup(x\cap y)$, where $x$ and $y$ are seen as branches of $T(\mathcal{M})$. This operation is well defined, and we say then that $x$ and $y$ \emph{branch at} $t$. If $a$ and $b$ are two distinct elements of $M$ branching at a node $\nu$, we denote $\Lambda_\nu(b):=\{x\in M; C(a,x,b)\}$, we call it \emph{the cone of $b$ at $\nu$}, and we say that $\nu$ is its \emph{basis}. For a node or a leaf $\nu$ of $T(\mathcal{M})$,
we denote by $\Lambda_\nu$ the
closed ball of $M$ defined by $\nu$. This corresponds to the set of all elements of $M$ which contain $\nu$ when
considered
as branches of $T(\mathcal{M})$. We call $\nu$ \emph{the basis of} $\Lambda_\nu$.\par\ \par\noindent
For $a,b\in T(\mathcal{M}) \cup \{-\infty\}$ with $a<b$, the interval $(a,b)$ is said to be \emph{bounded from above} (respectively, \emph{bounded from below}) if $b$ is not a leaf (respectively, if $a\neq-\infty$).
\begin{definition}Let $\mathcal{M}=(M,C,...)$ be a $C$-structure.
\begin{enumerate}
\item The structure $\mathcal{M}$ is \emph{geometric} if for any structure $\mathcal{N}$ elementarily equivalent to $\mathcal{M}$, the algebraic closure in $\mathcal{N}$ has the exchange property. In this case, the algebraic closure is the closure operator of a pregeometry on $\mathcal{N}$. If this pregeometry is trivial on any $\mathcal{N}$ (i.e. $\mathrm{acl}\: A = \bigcup \{\mathrm{acl}\:\! a ; a \mbox{ a singleton in } A\}$ for any $A \subset N$), then $\mathcal{M}$ is said to be \emph{trivial}. See \cite{HP} and \cite{Marker}.
\item The structure $\mathcal{M}$ is \emph{definably maximal} if any definable family of cones which is linearly ordered by the inclusion has a non-empty intersection.
\item The structure $\mathcal{M}$ is $C$-\emph{minimal} if and only if for any structure $\mathcal{N}=(N,C,\ldots)$ elementarily equivalent to $\mathcal{M}$, any definable subset of $N$ can be defined without quantifiers using only the relations $C$ and $=$.
\end{enumerate}
\end{definition}
Two comments on these definitions.\\
1. Geometric structures are provided with notions of independence, dimension, and generic points.
In other respects, $C$-minimal structures admit a cellular decomposition (see \cite{macphersonhaskell} and its complement \cite{Pablo}), which gives rise to a topological dimension. In geometric $C$-minimal structures, these two dimensions coincide. This means that
a point is generic exactly when any definable set containing it contains a box of the ambient space. \\
2. Let us define a linearly ordered structure to be definably maximal if any definable decreasing family of bounded closed intervals has a non-empty intersection. Then any o-minimal structure is definably maximal. But not any $C$-minimal structure is definably maximal \cite{annalestoulouse}, nor geometric \cite{macphersonhaskell}.
\begin{prop}\label{nobadfunc}
Let $\mathcal{M}$ be a geometric $C$-minimal structure and $T$ its underlying tree. Let $f: M\longrightarrow T\setminus M$ be a definable partial function. Then $\mathrm{dom}(f)$ can be written as a definable union $F\cup K$ such that $F$ is finite, and $f$ is locally constant on $K$.
\end{prop}
\emph{Proof:}
This is a direct consequence of Propositions 3.9 and 6.1 of \cite{macphersonhaskell}.
$\square$\par\ \par\noindent
\begin{lemma}\label{formulevoisinage}
Let $\mathcal{M}$ be a geometric $C$-minimal structure, $\varphi$ a formula in two variables over $\mathcal{M}$ and $b\in M$ generic over the parameters defining $\varphi$. Let $D$ be a cone containing $b$ and suppose that for all $u\in D$ there is a subcone $V_u$ of $D$ containing $u$ such that for all $v\in V_u$, $\mathcal{M}\models \varphi(u,v)$. Then there exists a neighborhood $D'$ of $b$ such that for all $u,v\in D'$, $\mathcal{M}\models \varphi(u,v)$.
\end{lemma}
\emph{Proof:}
For $u\in D$, let $f(u)$ be the node on the branch $br_u$ of $u$ such that
\[f(u)=\min \{ \inf\{\nu\in br_u; \forall v\in\Lambda_{\nu}:\mathcal{M}\models\varphi(u,v)\},\ \mbox{basis of }D\}.\]
The function $f: M \rightarrow T(\cal M)$ is definable and
the hypotheses on $\varphi$ imply $f(D) \subset T({\cal M}) \setminus M$, so by Proposition \ref{nobadfunc}, $\mathrm{dom}(f)$ can be written as a definable union $F\cup K$ such that $F$ is finite, and $f$ is locally constant on $K$. By genericity, $b\notin F$. Thus there is a cone $D'\subset D$ containing $b$ on which $f$ is constant. Therefore, for any $u,v\in D'$, the formula $\varphi(u,v)$ is satisfied in $\mathcal{M}$.
$\square$\par\ \par\noindent
\begin{lemma}\label{fonctionbornee}
Let $\mathcal{M}$ be a $C$-minimal structure, $c\in M$ and $\varphi(i,x)$ a formula in two variables on $\mathcal{M}$ where $i$ ranges in the sort $T(\mathcal{M})$ in an interval $I = ]\rho,c[$ of the branch of $c$, and $x$ ranges in a cone $U\subset M$. We suppose that, for any $x,y\in M$, there is no definable bijection between a bounded interval of the branch of $x$ and an unbounded interval of the branch of $y$. Suppose that
\[\forall x \in U, \exists i \in I, \forall j \in I \ [j\geq i \rightarrow \varphi(j,x)].\] Then
\[\exists i \in I, \forall x \in U, \forall j \in I \ [j\geq i \rightarrow \varphi(j,x)].\]
\end{lemma}
\emph{Proof:}
For $i\in I$, let $U_i$ be the subset of elements $x\in U$ such that $\forall j \in I \ [j \geq i \rightarrow \varphi(j,x)]$. The map $i\mapsto U_i$ is increasing, and the union of all the $U_i$ contains $U$. Then for every ball $\Lambda_{\nu}\subsetneq U$, $\Lambda_{\nu}$ is strictly contained in the union of the $U_i$, which is an increasing and definable family of definable subsets of $M$. By $C$-minimality the $U_i$ are (uniformly definable) Swiss cheeses and $\Lambda_{\nu}$ must be contained in one of them. For every $\nu$ let $i_{\nu}$ be the infimum of the $i \in br_c$ with this property. Fix a node $\nu_0$ such that $\Lambda_{\nu_0}\subset U$ and consider the function $f$ which to a node $\nu$ between the basis of $U$ and $\nu_0$ associates the element $i_{\nu}$.
The branches equipped with the structure induced by $\mathcal{M}$ are o-minimal,
thus there is a finite definable partition of the domain of $f$ such that on each piece $f$ is either constant or a monotone bijection. Consequently,
since the domain of $f$ is bounded, its image must be bounded from above by some $i_0\in br_c$. By the choice of $i_0$ we have $\forall x \in U, \forall j \in I \ [j\geq i_0 \rightarrow \varphi(j,x)]$.
$\square$\par\ \par\noindent
\begin{corollary}\label{fonctionbornee2}
Let $\mathcal{M}$ be a $C$-minimal structure with no definable bijection between a bounded interval and an unbounded interval of $T(\mathcal{M})$.
Let $\{ D(\mu) : \mu \in I \}$ be a family of uniformly definable subsets of $M$, indexed by an interval $I$ of $T(\cal M)$, $I=]\rho,c[$ for some $c \in M, \rho \in br_c$, and such that\\
- $\nu < \mu$ implies $D(\nu) \subset D(\mu)$\\
- $\bigcup_{\nu \in I} D(\nu)$ is a cone $\Gamma$ in $M$. \\
Then $D(\nu_0)=\Gamma$ for some $\nu_0 \in I$.
\end{corollary}
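\emph{Proof (sketch):} Apply Lemma \ref{fonctionbornee} with $U:=\Gamma$ and with $\varphi(j,x)$ the formula ``$x\in D(j)$''. The monotonicity of the family together with $\bigcup_{\nu\in I}D(\nu)=\Gamma$ gives the hypothesis of the lemma, and its conclusion provides a node $\nu_0\in I$ with $\Gamma\subset D(\nu_0)\subset\Gamma$.
$\square$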
\section{Nice families of functions.}\label{snicefamilies}
Let $\mathcal{M}$ be a $C$-minimal structure.
\begin{lemma}\label{CCC}
Let $V$ be a cone in $M$, $e$ an element of $V$, and $f,g,h:V \rightarrow M$ definable functions. Then there is a neighborhood $W$ of $e$ such that either any element $x\in W\setminus\{e\}$ satisfies $C(f(x),g(x),h(x))$, or any element $x \in W\setminus\{e\}$ satisfies $\neg C(f(x),g(x),h(x))$.
\end{lemma}
\emph{Proof:}
Let $D\subset M$ be the set of elements $x$ of $V$ such that $C(f(x),g(x),h(x))$ holds. Either $e$ is an accumulation point of $D$, in which case by $C$-minimality there is a neighborhood $W$ of $e$ such that $W\setminus\{e\}\subset D$, or $e$ is an accumulation point of the complement $\neg D$ of $D$, in which case there is a neighborhood $W$ of $e$ such that $W\setminus\{e\}\subset \neg D$.
$\square$\par\ \par\noindent
{\bf\emph{Notation:}}\begin{enumerate}\item Let $x,y,z\in M$. We denote by $\Delta(x,y,z)$ the property \[\neg C(x,y,z)\wedge \neg C(y,x,z).\]
\item Let $V\subset M$ be a cone, $e$ an element of $V$, and $f, g, h$ functions from $V$ to $M$. We denote by $C_e(f,g,h)$ (or $C(f,g,h)$ if there is no confusion on $e$) the following property: there exists a neighborhood $W$ of $e$ such that \[\forall x\in W\setminus\{e\} : C(f(x),g(x),h(x)).\] We define $(\neg C)_e(f,g,h)$ and $\Delta_e(f,g,h)$ in the same way, and we denote them by $(\neg C)(f,g,h)$ and $\Delta(f,g,h)$ respectively, if there is no confusion on $e$.
\end{enumerate}
\begin{definition} Let $V$ be a cone in $\mathcal{M}$, $e\in V$, and $f:V \rightarrow M$ a definable function.
\begin{enumerate}
\item The function $f$ is said to be \emph{dilating on a neighborhood of $e$}, or just \emph{dilating} if there is no confusion, if it satisfies $ C_e(f\circ f,f,id_V)$.
\item The function $f$ is said to be \emph{non-dilating on a neighborhood of $e$}, or just \emph{non-dilating} if there is no confusion, if it satisfies $( \neg C)_e(f\circ f,f,id_V)$.
\end{enumerate}
\end{definition}
\begin{definition}\label{nice} Let $V \subset M$ be a cone and $\mathcal{F}=\{f_u:u\in U\}$ a definable family of definable functions from $V$ to $M$, indexed by a cone $U\subset M$. The family $\mathcal{F}$ is said to be \emph{a nice family of functions} if there is an element $e\in V$ with the following properties: \begin{enumerate}
\item All the $f_u$ are $C$-automorphisms of the cone $V$.
\item For every $u\in U$, we have $f_u(e)=e$.
\item For any fixed $x\in V\setminus\{e\}$, the application $U\longrightarrow V$ which to $u$ associates $f_u(x)$ is a $C$-isomorphism from $U$ onto some subcone of $V$. $(*)$
\end{enumerate}
A nice family $\mathcal{F}$ of functions is said to have an \emph{identity element} if for some $u_0\in U$, $f_{u_0}=id_V$.
\end{definition}
{\bf\emph{Notation:}} If $\mathcal{F}$ is a nice family of functions, possibly with identity, then $U,V, e$ and $u_0$ will be as in Definition \ref{nice} unless otherwise specified. For a set $W$ containing $e$, the set $W\setminus\{e\}$ will be denoted by $\stern{W}$.\par\ \par\noindent
{\bf\emph{Terminology:}} For a nice family of functions $\mathcal{F}$, the sets $U$ and $V$ are called respectively the \emph{index set} and the \emph{domain}, and $e$ is called the \emph{ absorbing element}.
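A concrete family to keep in mind (it serves only as an illustration and reappears in the Example of Section \ref{sderivatives}) lives in an algebraically closed valued field with maximal ideal $\Upsilon$: take $U:=1+\Upsilon$, $V:=\Upsilon$, $e:=0$, $u_0:=1$ and $f_u(x):=u\cdot x$. Each $f_u$ is a $C$-automorphism of $V$ fixing $0$, and for a fixed $x\in V\setminus\{0\}$ the map $u\mapsto ux$ is a $C$-isomorphism from $U$ onto the subcone $x+x\Upsilon$ of $V$; thus $\{f_u:u\in U\}$ is a nice family of functions with identity element $f_1$.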
\begin{prop}\label{existencefamille}
Let $\mathcal{M}$ be a non trivial geometric $C$-minimal structure. Suppose furthermore that in the underlying tree of $\mathcal{M}$ there is no definable bijection between a bounded interval and an unbounded one.
Then either an infinite group or a nice family of non-dilating functions with identity is definable in $\mathcal{M}$.
\end{prop}
\emph{Proof:}
Without loss of generality, we can suppose that $\mathcal{M}$ is $\aleph_1$-saturated. By Proposition 15 and Lemma 21 of \cite{maalouf1}, we can find cones $U_1,V_1$ in $M$ and a definable family of functions $\mathcal{H}=\{h_u:u\in U_1\}$ of (continuous) $C$-automorphisms of $V_1$ satisfying $(**)$: for every $x\in V_1$ the map $v\mapsto h_v(x)$ is a continuous $C$-isomorphism from $U_1$ onto some cone. Fix a generic triplet $(a,b,e)\in U_1\times U_1\times V_1$, and let $e':=h_a\circ h_b(e)$. By $(**)$, for any $v \in U_1$ there is at most one $z \in U_1$ such that $h_z \circ h_v(e)=e'$. When such a $z$ exists, define $\theta(v) := z$.
By $(**)$ again and the genericity of $b$, the function $\theta$ is well defined on some cone $U_2$ containing $b$.
\par\ \par\noindent
Assume first that for all neighborhoods $U_a,U_b$ of $a,b$ there are $u \in U_a, u \not= a$ and $v \in U_b, v \not= b$ such that $h_u \circ h_v$ and $h_a \circ h_b$ coincide on some neighborhood of $e$ as soon as they agree on $e$. \\[5 mm]
Claim. Under this assumption there are cones $U_a$, $U_b$ and $V$ containing $a,b$ and $e$ respectively, such that for all $(u,v),(u',v') \in U_a \times U_b$,
$h_u\circ h_v$ and $h_{u'} \circ h_{v'}$ coincide on $V$ as soon they agree on $e$. \\[2 mm]
Proof. Take a cone $U_3\subset U_1$ containing $b$ such that all $v \in U_3\setminus \{b\}$ have the same type on $(a,b,e)$. Necessarily $U_3 \subset U_2$. This implies that, for any $v\in U_3$ there is $u \in U_1$ such that
$h_u \circ h_v (e) = e'$, but then $u=\theta(v)$.
So it follows from our assumption that for all $v\in U_3$, $h_{\theta(v)} \circ h_v$ and $h_a \circ h_b$ have the same germ on $e$.
Define $U_b' = U_3$ and $U_a' = \theta(U_b')$. For any $(u,v) \in U_a' \times U_b'$ there is some neighborhood $W$ of $e$ such that $h_u \circ h_v$ and $h_a \circ h_b$ coincide on $W$ as soon as they agree on $e$.
\\
Let $\varphi$ be the formula defined as follows: $\varphi (u,v,\nu) :\longleftrightarrow$ ``$h_u \circ h_v$ and $h_a \circ h_b$ coincide on $\Lambda_\nu(e)$ as soon as they agree on $e$''. We first fix $u\in U'_a$ and apply Lemma \ref{fonctionbornee} to get $\nu_u$ such that for all $v\in U'_b$, $h_u \circ h_v$ and $h_a \circ h_b$ coincide on $\Lambda_{\nu_u}(e)$ as soon as they agree on $e$. We apply Lemma \ref{fonctionbornee} again to get a cone $V$ containing $e$ which satisfies the following: for all $(u,v)\in U_a'\times U_b'$, $h_u \circ h_v$ and $h_a \circ h_b$ coincide on $V$ as soon as they agree on $e$.
We can define $V$ with parameters, say $c$, independent over $(a,b,e)$.
Thus there are cones $U_a$ and $U_b$ containing respectively $a$ and $b$ such that any point in $U_a \times U_b$ has same type as $(a,b)$ over $(c,e)$. Therefore, for all $(u,v),(u',v') \in U_a \times U_b$,
$h_u\circ h_v$ and $h_{u'} \circ h_{v'}$ coincide on $V$ as soon they agree on $e$.
$\dashv$
\par\ \par\noindent
In this first case, we first construct an infinite $C$-group $G$ type-definable in $\mathcal{M}$. To this end, we proceed exactly as in the proof of Theorem 19 of \cite{maalouf1}, applying the above claim instead of Lemma 22 of \cite{maalouf1}. Indeed the local modularity was only used for proving Lemma 22. Then Theorem 1 of \cite{maalouf2} gives us an infinite subgroup of $G$ definable in $\mathcal{M}$.
\par\ \par
Assume now that we are not in the above case. By $C$-minimality, there are cones $U_a,U_b$ containing respectively $a,b$ such that, for all $u \in U_a, u \not= a$ and $v \in U_b, v \not= b$ with $h_u \circ h_v(e)=e'$, the graphs of $h_u \circ h_v$ and $h_a \circ h_b$ do not intersect on $\stern{W}$ for some cone $W \ni e$. We can suppose without loss of generality that $U_b\subset U_2$. For $u\in U_2$, define $g_u:=h_{\theta(u)}\circ h_u$.
\par
The element $b$ has the following property: there is a cone $U \ni b$ (for example $U=U_b$) such that for all $u\in U\setminus\{b\}$, the graphs of $g_u$ and $g_b$ do not intersect on $\stern{W}$ for some cone $W \ni e$. By genericity, this property holds on some cone $U_3\subset U_b$ containing $b$. Hence we have the following: for all $v\in U_3$, there is a cone $U_v \ni v$ such that for all $u\in U_v\setminus\{v\}$, the graphs of $g_u$ and $g_v$ do not intersect on $\stern{W}$ for some cone $W \ni e$. By Lemma \ref{formulevoisinage}, there is some cone $U_4\subset U_3$ containing $b$ such that
for any $u, v\in U_4$, $u\neq v$, there is some $W \ni e$ such that the functions $g_{u}$ and $g_{v}$ agree on exactly one point of $W$, namely $e$.\par
For $\nu \in br_e$ and $v\in U_4$, we define \[D(\nu,v) := \{v\}\cup\{ u \in U_4 : \mathrm{Gr}(g_v) \cap \mathrm{Gr}(g_u) \cap [\Lambda_\nu(e)^*\times M] = \emptyset \},\]
where $\mathrm{Gr}(g_u)$ and $\mathrm{Gr}(g_v)$ are the graphs of $g_u$ and $g_v$ respectively.
By the choice of $U_4$ and Corollary \ref{fonctionbornee2}, for all $v\in U_4$, $U_4=D(\zeta(v),v)$ for some $\zeta(v) \in br_e$. This means that for all $v\in U_4$, any element of the family $\{g_u|\Lambda_{\zeta(v)}; u\in U_4\setminus \{v\}\}$ agrees with $g_v$ only on $e$. It follows by Lemma \ref{fonctionbornee} that there is some $\zeta\in br_e$ such that $\forall u, v\in U_4$, $u\neq v$, the functions $g_u|\Lambda_{\zeta}(e)$ and $g_v|\Lambda_{\zeta}(e)$ agree only on $e$. Replace $V_1$ by $V:=\Lambda_{\zeta}(e)$.\par
For every $x\in V^*$, the (definable) function from $U_4$ to $V$ which to $u$ associates $g_u(x)$ is now injective, thus a $C$-isomorphism on some neighborhood of $b$ (since all points in $V$ are independent of $b$) which we can suppose uniformly definable in $x$:
by $C$-minimality the union of all cones for which this is true, is a cone,
call it $J(x)$.
For $\nu\in br_b$ define \[X(\nu):=\{x\in V: \Lambda_{\nu}\subset J(x)\}.\]
By Corollary \ref{fonctionbornee2} there is a node $\nu_0$ on the branch of $b$ such that $X(\nu_0)=V$.
We replace $U_4$ by the cone of $b$ at $\nu_0$, which we call $U_5$. The family of functions $g_v:V\longrightarrow M$, $v\in U_5$ has the property $(*)$.\par
Fix a generic element $u_0\in U_5$, and let $U_6$ be a subcone of $U_5$ containing $u_0$ such that $\forall u\in U_6$, $g_{u_0}(V)=g_u(V)$. For $u\in U_6$, let $f_u$ be the $C$-automorphism of $V$ defined by $f_u:=g_{u_0}^{-1}\circ g_u$. The family $\mathcal{F}:=\{f_u:u\in U_6\}$ is a nice family of functions.
By $C$-minimality, there is a neighborhood $U$ of $u_0$ such that, either for all $u\in U\setminus\{u_0\}$, the function $f_u$ is non-dilating, or for all $u\in U\setminus\{u_0\}$, the function $f_u$ is dilating.
Suppose the $f_u$ are dilating. This means that
$C_e((g_{u_0}^{-1}\circ g_u)^2,g_{u_0}^{-1}\circ g_u,id_V)$
holds for any $u \in U$, $u \not= u_0$.
Define $D_u := \{ v \in U : C_e((g_{u}^{-1}\circ g_v)^2,g_{u}^{-1}\circ g_v,id_V) \}$ for $u \in U$. By genericity of $u_0$, for any $u$ close enough to $u_0$, $u \not= u_0$, $D_u$ contains $V_u \setminus \{u\}$ for some cone $V_u \ni u$.
Thus the function $U \rightarrow T(M) \setminus M$, $u \mapsto \inf \{ \mbox{basis of }\Gamma ; \Gamma \mbox{ a cone}, \Gamma \setminus \{ u \} \subset D_u, u \in \Gamma \}$, is well defined in the neighborhood of $u_0$. It must be locally constant.
So we can find two elements $u,v$ such that $u\in D_v$ and $v\in D_u$. It follows that \[C_e(g_{u}^{-1}\circ g_v,id_V, (g_{u}^{-1}\circ g_v)^{-1}) ,\] and \[C_e((g_{u}^{-1}\circ g_v)^{-1},id_V, g_{u}^{-1}\circ g_v) .\] Contradiction.
$\square$\par\ \par\noindent
\section{Tangents: existence and uniqueness}\label{stangents}
\begin{definition}Let $\mathcal{F}$ be a nice family of functions and $g:V\longrightarrow V$ a function such that $g(e)=e$. \begin{enumerate} \item Let $u$ be an element of $U$. The function $f_u$ is said to be \emph{tangent to $g$ relatively to the family $\mathcal{F}$} if for any $u'\in U$, we have $(\neg C)(f_u,f_{u'},g)$, in which case we write $f_u\sim_{\mathcal{F}} g$, or just $f_u\sim g$ if there is no confusion on $\mathcal{F}$.
\item The function $g$ is said to be \emph{derivable relatively to the family} $\mathcal{F}$ if there is a unique function $f_u\in \mathcal{F}$ such that $f_u\sim g$. In this case, $f_u$ is called \emph{the tangent to $g$ in $\mathcal{F}$}.
\end{enumerate}
\end{definition}
{\bf\emph{Notation:}} We fix a nice family $\mathcal{F}$ of functions, and a definable function $g:V \rightarrow V$ such that $g(e)=e$. We define \[T_g := \{ u \in U ; f_u\sim g \},\] and for $u \in U$, \[\Gamma_{g,u} := \{ y \in U: C(f_u,f_y,g) \}.\]
\begin{lemma}\label{gammavide}
$\Gamma_{g,u} = \emptyset$ if and only if $u \in T_g$.
\end{lemma}
\emph{Proof:}
Fix elements $u$ and $v$ of $U$. So either $(\neg C)(f_u,f_v,g)$, or $C(f_u,f_v,g)$. By the definition of tangent,
$u \in T_g$ if and only if we are in the first case for every $v \in U$, if and only if $\Gamma_{g,u} = \emptyset$.
$\square$\par\ \par\noindent
\begin{lemma}\label{Tforme}
\begin{enumerate}
\item Let $u$ and $v$ be elements of $T_g$. Then $\Lambda_{u \wedge v} \subset T_g$. Furthermore, there is a cone $W \subset V$ containing $e$ such that
\[\forall z\in\Lambda_{u\wedge v}\ \forall x\in W: \Delta (f_u(x),f_z(x),g(x)).\]
\item Let $u \in T_g$ and $v \in U\setminus T_g$. Then the cone $\Gamma$ of $v$ at $u \wedge v$ is contained in $U\setminus T_g$. Furthermore, there is a cone $W \subset V$ containing $e$ such that \[\forall z\in\Gamma, \forall x\in \stern{W}: C(f_z(x),f_u(x),g(x)).\]
\item If $T_g$ is not empty, then it is a ball.
\end{enumerate}
\end{lemma}
\emph{Proof:}
\emph{1.} Let $u,v$ be two distinct elements of $T_g$, and let $W$ be a neighborhood of $e$ such that, \[\forall x\in W: \Delta(f_u(x), f_v(x), g(x)).\] Let $w\in \Lambda_{u \wedge v}$, so we have $\neg C(w,u,v)$. By $(*)$, we have that \[\forall x\in W:\neg C(f_w(x),f_u(x),f_v(x)).\] So $\neg C(f_w(x), f_u(x), g(x))$ holds for all $x\in W$, and it follows that $w \in T_g$. For the second assertion, let $z$ be an element of $\Lambda_{u \wedge v}\subset T_g$. Let
\[W_{z} := \bigcup \{ W: W \,\mbox{is a cone, }
e \in W\subset V,
\forall x \in W:
\Delta (f_u(x),f_z(x),g(x))
\}.\] $W_z$ is a non empty union of nested cones, so by $C$-minimality it is a cone. Let $\nu_z$ be its basis.
If $z' \in U$ is such that $C(u,z',z)$, then ($z' \in T_g$ and) $\nu_{z} = \nu_{z'}$. Thus the application $z \mapsto \nu_{z}$ induces an application from the set of cones at $u\wedge v$ to the branch of $e$. As this set of cones equipped with the structure induced by $\mathcal{M}$ is strongly minimal and $br_e$ linearly ordered, the image of this application is finite. Let $\nu$ be the maximal element of the image. So we have
\[\forall z\in \Lambda_{u\wedge v}, \forall x \in\Lambda_\nu: \Delta (f_u(x),f_{z}(x),g(x)).\]
\emph{2.} Similar proof.\par\ \par\noindent
\emph{3.} By \emph{1}, either $T_g$ is empty, or it is a union of nested closed balls. It is then a cone or a closed ball by $C$-minimality.
$\square$\par\ \par\noindent
\begin{lemma}\label{chaine}
Let $u,u',v,v'$ be elements of $U$ such that $v\in\Gamma_{g,u}$ and $v'\in\Gamma_{g,u'}$. If $v$ is not an element of $\Gamma_{g,u'}$, then $v'$ is an element of $\Gamma_{g,u}$.
\end{lemma}
\emph{Proof:} Let $u,v,u',v'$ be elements of $U$ such that $v\in\Gamma_{g,u}$, $v'\in\Gamma_{g,u'}$ and $v\notin\Gamma_{g,u'}$. Let $W\subset V$ be a neighborhood of $e$ such that, for every $x\in\stern{W}$, we have $C(f_u(x), f_v(x),g(x))$, $C(f_{u'}(x), f_{v'}(x),g(x))$ and $\neg C(f_{u'}(x), f_{v}(x),g(x))$. By the first and third relations, we have \[\forall x\in\stern{W}: C(f_u(x),f_{u'}(x),g(x)).\] This together with the second relation yields \[\forall x\in\stern{W}: C(f_u(x),f_{v'}(x),g(x)).\]
So $v'$ is an element of $\Gamma_{g,u}$.
$\square$\par\ \par\noindent
\begin{lemma}\label{gammaforme}
If $\Gamma_{g,u}$ is not empty, then it is a cone at a node on the branch of $u$. For a fixed $g$, the non empty $\Gamma_{g,u}$ form a chain of cones.
\end{lemma}
\emph{Proof:} If $x$ and $y$ are elements of $\Gamma_{g,u}$, then $\Lambda_{x \wedge y}\subset \Gamma_{g,u}$.
So $\Gamma_{g,u}$ is a union of nested closed balls and, by $C$-minimality, it is a ball.
Fix an element $v\in \Gamma_{g,u}$. It is easy to see that $\Gamma_{g,u}$ contains the cone of $v$ at $u\wedge v$, so $\Gamma_{g,u}$ is a ball at a node $\nu$ on the branch of $u$. But it is clear that $u\notin\Gamma_{g,u}$. Thus $\nu=u\wedge v$ and $\Gamma_{g,u}$ is the cone of $v$ at the node $u\wedge v$.\par\noindent The claim that the non empty $\Gamma_{g,u}$ form a chain of cones follows directly from Lemma \ref{chaine} and the fact that in $C$-structures, two cones have a nonempty intersection if and only if one of these cones is contained in the other one.
$\square$\par\ \par\noindent
\begin{lemma}
There is an element $u\in U$ such that $f_u\sim g$.
\end{lemma}
\emph{Proof:}
If $T_g$ is empty, then by Lemma \ref{gammavide}, no cone $\Gamma_{g,u}$ is empty. By Lemma \ref{gammaforme}, the $\Gamma_{g,u}$ form a chain of cones of $U$. By definable maximality, their intersection is not empty. But this intersection is contained in $T_g$, which is a contradiction.
$\square$\par\ \par\noindent
\begin{lemma}\label{Wuniforme}
\begin{enumerate}
\item There is a cone $W$ containing $e$ such that $\Delta (f_u(x),f_v(x),g(x))$ holds for all $u,v \in T_g$ and all $x \in W$.
\item Suppose that $T_g$ contains more than one element, and let $u \in T_g$. Then there is a cone $W$ containing $e$ such that $C(f_z(x),f_u(x),g(x))$ holds for all $z \in U \setminus T_g$ and all $x \in \stern{W}$.
\end{enumerate}
\end{lemma}
\emph{Proof:}
\emph{(i).} By $C$-minimality and the definition of $T_g$, for all $u,v\in T_g$ there is some cone $W_{u,v}$ containing $e$ such that $\Delta (f_u(x),f_v(x),g(x))$ holds for all $x \in W_{u,v}$. Fixing $u$ and applying Lemma \ref{fonctionbornee} to the formula
$\varphi (i,v) = ``\Delta (f_u(x),f_v(x),g(x))\mbox{\it{ holds for all }} x \in \Lambda_i(e)$''
gives for all $u$, a cone $W_u$ containing $e$ such that $\Delta (f_u(x),f_v(x),g(x))$ holds for all $v \in T_g$ and all $x \in W_u$. Applying Lemma \ref{fonctionbornee} again gives the wanted result.\par\noindent
\emph{(ii).} Suppose that $T_g$ is a ball containing at least two elements, and let $c_0 := \inf T_g$. Fix an element $a\in T_g$ and a cone $W$ containing $e$ such that $\Delta (f_a(x),f_z(x),g(x))$ holds for all $z \in T_g$ and all $x \in W$. For any node $c\leq c_0$ and any element $z \in U \setminus T_g$ such that $z\wedge a=c$, there is a maximal ball $W_{c,z}\subset W$ such that \[\forall x\in \stern{W}_{c,z}: C(f_z(x),f_a(x),g(x)).\] It is also easy to check that \[\forall x\in \stern{W}_{c,z}: C(f_{z'}(x),f_a(x),g(x))\] for every $z' \in U$ such that $C(a,z,z')$. Since $W_{c,z}\subset W$, we have
\[\forall v\in T_g, \forall x\in \stern{W}_{c,z}: \Delta (f_{v}(x),f_a(x),g(x)) \wedge C(f_{z'}(x),f_a(x),g(x))\]
for every $z'$ such that $C(a,z,z')$. Let $\nu_{c,z}$ be the basis of $W_{c,z}$. So the application $z \mapsto \nu_{c,z}$ induces an application from the set of cones at $c$ not containing $T_g$ to the branch of $e$. By strong minimality of this set of cones, the image of
this application is finite. Let $\nu_c$ be the maximal element of the image. Now the application which to $c$ associates $\nu_c$ is an application from a bounded interval of the tree (namely the interval delimited by the basis of $U$ and that of $T_g$) to the branch of $e$. Its image is then bounded from above by some node $d$. If $W_0\subset W$ is the cone of $e$ at $d$, then we have \[\forall u,v\in T_g,\forall z\in U\setminus T_g, \forall x\in \stern{W}_0: \Delta (f_{v}(x),f_u(x),g(x)) \wedge C(f_z(x),f_u(x),g(x)).\]
$\square$\par\ \par\noindent
\begin{prop}\label{unicite}
The following are equivalent: \begin{enumerate}
\item there are elements $u,v \in U$ such that $C(f_u,f_v,g)$;
\item $T_g \not= U$;
\item $g$ is derivable relatively to the family $\mathcal{F}$.
\end{enumerate}
\end{prop}
\emph{Proof:}
We show that (iii) follows from (i); the rest (namely (iii) $\Rightarrow$ (ii) $\Rightarrow$ (i)) is trivial. Let $u,v$ be as in the statement of (i). We know that $T_g$ is not empty. Suppose that it contains two distinct elements. By Lemma \ref{Tforme}(iii), $T_g$ is a cone or a closed ball. By Lemma \ref{Wuniforme}, there is a cone $W$ containing $e$ such that \[\forall \alpha\in T_g,\forall z\in U, \forall x\in W: \neg C(f_{\alpha}(x),f_z(x),g(x)).\,\,\,\,\,\,\,\,\,\,\,\,(a)\]
Restricting $W$ if necessary, we can suppose that \[\forall x\in\stern{W}: C(f_u(x),f_v(x),g(x)).\]
Fix an element $x_0 \in \stern{W} $. By $(*)$, there is an element $w \in U$ such that $f_w(x_0)=g(x_0)$. Since $T_g$ contains more than one element, choose an element $\alpha\neq w$, $\alpha\in T_g$. So we have $C(f_{\alpha}(x_0),f_w(x_0),g(x_0))$. This contradicts (\emph{a}).
$\square$\par\ \par\noindent
\section{Derivability relative to nice families of functions}\label{sderivatives}
We fix a nice family $\mathcal{F}$ of functions.
\begin{lemma}\label{deriv2} Let $g:V\longrightarrow V$ be a function such that $g(e)=e$. Suppose that $g$ is derivable relatively to the family $\mathcal{F}$, and let $f_{u_1}$ be its tangent. Then for every $u\in U, u\neq u_1$, we have $C(f_u,f_{u_1},g)$.
\end{lemma}
\emph{Proof:} Follows directly from Lemma \ref{CCC} and the definition of derivability.
$\square$\par\ \par\noindent
For $x\in \stern{V}$, the function $\Phi_{\mathcal{F},x}: U\rightarrow V, u\mapsto f_u(x)$, is a $C$-isomorphism from $U$ onto some cone $\Phi_{\mathcal{F},x}(U)$.\par\ \par\noindent
Let $g:V\longrightarrow V$ be a function such that $g(e)=e$. We define the map $\displaystyle\Psi_{\mathcal{F},g}: \stern{V}\rightarrow U, x\mapsto \Phi_{\mathcal{F},x}^{-1}(g(x))$.
So $\Psi_{\mathcal{F},g}(x)$ is the unique element $u$ of $U$ such that $f_u(x)=g(x)$, when such an element exists.
\begin{lemma}\label{psidefined}
If $g$ is derivable relatively to the family $\mathcal{F}$, then $\Psi_{\mathcal{F},g}$ is defined on $\stern{W}$ for some neighborhood $W \subset V$ of $e$.
\end{lemma}
\emph{Proof:}
Let $f_{u_1}$ be the tangent to $g$ in $\mathcal{F}$, and let $u\neq u_1$ be an element of $U$. By Lemma \ref{deriv2}, we have $C(f_u,f_{u_1},g)$. So for some neighborhood $W \subset V$ of $e$, we have \[\forall x\in \stern{W}: C(f_u(x),f_{u_1}(x),g(x)).\] For every $x\in W$, $f_u(x)$ and $f_{u_1}(x)$ are elements of the cone $\Phi_{\mathcal{F},x}(U)$, so the same holds for $g(x)$. Hence $\Psi_{\mathcal{F},g}$ is defined on $\stern{W}$.
$\square$\par\ \par\noindent
\begin{prop}\label{limit} If $g$ is derivable relatively to $\mathcal{F}$, then $f_{u_1}\sim g$ if and only if \[\displaystyle \lim_{x\to e}\Psi_{\mathcal{F},g}(x)=u_1. \]
\end{prop}
\emph{Proof:} By Lemma \ref{psidefined} $\Psi_{\mathcal{F},g}$ is defined on $\stern{W}_0$ for some neighborhood $W_0$ of $e$. For every $x\in W_0$, we have that $f_{\Psi_{\mathcal{F},g}(x)}(x)=g(x)$. Fix an element $u\in U\setminus\{u_1\}$ and a neighborhood $W_1 \subset V$ of $e$ such that \[\forall x\in \stern{W}_1: C(f_u(x), f_{u_1}(x),g(x)).\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,(1)\]
From $(1)$ and the property $(*)$ of $\mathcal{F}$, it follows that \[C(u,u_1,\Psi_{\mathcal{F},g}(x)).\]
This shows that the unique possible limit of $\Psi_{\mathcal{F},g}$ at $e$ is $u_1$. On the other hand, we know by definable maximality and Proposition $4.4$ of \cite{annalestoulouse} that $\Psi_{\mathcal{F},g}$ has a limit at $e$. So we have that \[\displaystyle \lim_{x\to e}\Psi_{\mathcal{F},g}(x)=u_1. \]
$\square$\par\ \par\noindent
\begin{definition}
Let $\mathcal{F}$ and $\mathcal{G}$ be two nice families of functions having same domain and absorbing element.
\begin{enumerate}\item The family $\mathcal{G}$ is said to be \emph{derivable relatively to} $\mathcal{F}$ if every element of $\mathcal{G}$ is derivable relatively to $\mathcal{F}$.
\item The families $\mathcal{F}$ and $\mathcal{G}$ are said to be \emph{comparable} if both are derivable relatively to each other.
\end{enumerate}
\end{definition}
\noindent{\bf\emph{Notation:}} Let $\mathcal{F}=\{f_u:u\in U\}$ and $\mathcal{G}=\{g_u:u\in U'\}$ be two nice families of functions. Suppose that the family $\mathcal{G}$ is derivable relatively to the family $\mathcal{F}$. We define the \emph{derivative}, denoted by $\partial_{\mathcal{F},\mathcal{G}}$, as the function
\begin{center}\begin{tabular}{ccll} $\partial_{\mathcal{F},\mathcal{G}}:$&$U'$ &$\longrightarrow$ &$U$ \\ &$u'$ &$\longmapsto$ &$u\mbox{ such that } f_u\sim g_{u'}$.\end{tabular}
\end{center}
\begin{lemma}\label{deriveeiso1}
Let $\mathcal{F}=\{f_u:u\in U\}$ and $\mathcal{G}=\{g_u:u\in U'\}$ be two nice families of functions. Suppose that the family $\mathcal{G}$ is derivable relatively to the family $\mathcal{F}$. Let $a'_1, a'_2$ and $a'_3$ be elements of $U'$ such that $C(a'_1,a'_2,a'_3)$, and let $a_i:=\partial_{\mathcal{F},\mathcal{G}}(a'_i)$. Then
either $C(a_1,a_2,a_3)$ or $a_1=a_3$.
\end{lemma}
\emph{Proof:}
Suppose for a contradiction that $\neg C(a_1,a_2,a_3)$ and $a_3\neq a_1$ hold. Then $a_3\neq a_2$. Let $W \subset V$ be a neighborhood of $e$ such that, for every $x\in \stern{W}$ we have \[C(f_{a_3}(x), g_{a'_1}(x), f_{a_1}(x))\,\,\wedge\,\, C(f_{a_3}(x), g_{a'_2}(x), f_{a_2}(x))\] and \[ C(f_{a_1}(x), g_{a'_3}(x), f_{a_3}(x))\,\,\wedge\,\, C(f_{a_2}(x), g_{a'_3}(x), f_{a_3}(x)).\] Therefore, for every $x\in W$ holds: \[\neg C (g_{a'_1}(x), g_{a'_2}(x), g_{a'_3}(x)).\] By $(*)$ for $\mathcal{G}$ we have: \[\neg C(a'_1, a'_2, a'_3),\] contradiction.
$\square$\par\ \par\noindent
\begin{corollary}\label{deriveecontinue}
Let $\mathcal{F}=\{f_u:u\in U\}$ and $\mathcal{G}=\{g_u:u\in U'\}$ be two nice families of functions. Suppose that the family $\mathcal{G}$ is derivable relatively to the family $\mathcal{F}$. Then $\partial_{\mathcal{F},\mathcal{G}}$ is continuous on $U'$.
\end{corollary}
\emph{Proof:} By definable maximality, Proposition $4.4$ of \cite{annalestoulouse} and Lemma \ref{deriveeiso1}, the function $\partial_{\mathcal{F},\mathcal{G}}$ admits a limit at each point of $U'$, and the limit is an element of $U$. Let $a'\in U'$ and $a:=\partial_{\mathcal{F},\mathcal{G}}(a')$. If the limit of $\partial_{\mathcal{F},\mathcal{G}}$ at $a'$ is an element $b\neq a$, then we can find elements $b'_1,b'_2\in U'$ such that $C(b'_1,b'_2,a')$, and $C(a,b_i,b)$ for $i=1,2$, where $b_i=\partial_{\mathcal{F},\mathcal{G}}(b'_i)$. Then $C(a,b_1,b_2)$, which contradicts Lemma \ref{deriveeiso1}.
$\square$\par\ \par\noindent
\begin{prop}\label{symetrie1}
Let $\mathcal{F}=\{f_u:u\in U\}$ and $\mathcal{G}=\{g_u:u\in U'\}$ be two nice families of functions. Suppose moreover that $\mathcal{G}$ is derivable relatively to $\mathcal{F}$. Let $a, a'$ be elements of $U$ and $U'$ respectively. Then $f_a\sim g_{a'}$ if and only if for any neighborhood $A\subset U$ of $a$, any neighborhood $A'\subset U'$ of $a'$, and any neighborhood $E\subset V$ of $e$, there are elements $\alpha\in A, \alpha'\in A'$ and $y\in \stern{E}$ such that $f_\alpha(y)=g_{\alpha'}(y)$.
\end{prop}
\emph{Proof:}
For the implication from left to right let $a\in U$, $a'\in U'$ and suppose that $f_a\sim g_{a'}$. By Proposition \ref{limit}, we have that \[\displaystyle \lim_{x\to e}\Psi_{\mathcal{F},g_{a'}}(x)=a.\] Let $A\subset U,\,A'\subset U'$ and $E\subset V$ be neighborhoods of $a, a'$ and $e$ respectively. We can find an element $y\in \stern{E}$ such that $\Psi_{\mathcal{F},g_{a'}}(y)\in A$. Set $\alpha:=\Psi_{\mathcal{F},g_{a'}}(y)$ and $\alpha':=a'$. We have then that $\alpha\in A$, $\alpha'\in A'$, $y\in \stern{E}$, and by the definition of $\Psi$ we have
\[f_\alpha(y)=g_{\alpha'}(y).\]
For the converse, let $b\in U$ be such that $f_b\not\sim g_{a'}$, and let $a\neq b$ be an element of $U$ such that $f_a\sim g_{a'}$. Let $E_0$ be a neighborhood of $e$ such that
\[\forall x\in \stern{E}_0: C(f_b(x), g_{a'}(x), f_a(x)).\,\,\,\,\,\,\,\,\,\,\,\,(1)\]
Let $\Gamma$ be the cone of $a$ at the node $a\wedge b$. The element $\partial_{\mathcal{F},\mathcal{G}}(a')=a$ is an element of $\Gamma$, so by continuity of $\partial_{\mathcal{F},\mathcal{G}}$, there is an element $a'_1\in U'\setminus \{a'\}$ such that $a_1:=\partial_{\mathcal{F},\mathcal{G}}(a'_1)\in \Gamma.$
Let $E_1\subset V$ be a neighborhood of $e$ such that
\[\forall x\in \stern{E}_1: C(f_b(x), g_{a'_1}(x), f_{a_1}(x)).\,\,\,\,\,\,\,\,\,\,\,\,(2)\] Let $A'$ be the cone of $a'$ at the node $a'\wedge a'_1$, $B$ be the cone of $b$ at the node $a\wedge b$, and $E:=E_0\cap E_1$. By the properties of $\mathcal{F}$, we have \[\forall \beta\in B, \forall x\in \stern{E}: C(f_a(x),f_{\beta}(x),f_b(x) ).\,\,\,\,\,\,\,\,\,\,\,\,(3)\] By $(1)$, $(2)$ and the fact that $C(f_b(x),f_{a_1}(x),f_a(x))$ for any $x \not= e$, we have \[\forall x\in \stern{E}: C(f_b(x),g_{a'}(x),f_a(x))\wedge C(f_b(x),g_{a'_1}(x)
,f_a(x) ).\,\,\,\,\,\,\,\,\,\,\,\,(4)\] Furthermore, by the properties of $\mathcal{G}$ we have: \[\forall \alpha\in A', \forall x\in \stern{E}: C(g_{a'_1}(x), g_{\alpha}(x), g_{a'}(x)).\,\,\,\,\,\,\,\,\,\,\,\,(5)\] From $(4)$ and $(5)$ we have \[\forall \alpha\in A', \forall x\in \stern{E}: C(f_b(x),g_{\alpha}(x),f_a(x) ).\,\,\,\,\,\,\,\,\,\,\,\,(6)\]
By $(3)$ and $(6)$, we have \[\forall\beta\in B, \forall\alpha\in A', \forall x\in \stern{E}: f_{\beta}(x)\neq g_{\alpha}(x),\] and $B, A'$ and $E$ are neighborhoods of $b,a'$ and $e$ respectively. This completes the proof.
$\square$\par\ \par\noindent
An immediate consequence of Proposition \ref{symetrie1} is the
\begin{corollary}\label{symetrie2}
Let $\mathcal{F}=\{f_u:u\in U\}$ and $\mathcal{G}=\{g_u:u\in U'\}$ be two comparable families of functions, and let $a, a'$ be elements of $U$ and $U'$ respectively. Then $f_a\sim_{\mathcal{F}} g_{a'}$ if and only if $g_{a'}\sim_{\mathcal{G}} f_a$.
\end{corollary}
\begin{corollary}\label{deriveeiso2} Let $\mathcal{F}=\{f_u:u\in U\}$ and $\mathcal{G}=\{g_u:u\in U'\}$ be two comparable families of functions. Then $\partial_{\mathcal{F},\mathcal{G}}$ and $\partial_{\mathcal{G},\mathcal{F}}$ are inverses of each other, and they define (continuous) $C$-isomorphisms between $U$ and $U'$. \end{corollary}
\emph{Proof:} By Corollary \ref{symetrie2}, the functions $\partial_{\mathcal{F},\mathcal{G}}$ and $\partial_{\mathcal{G},\mathcal{F}}$ are inverses of each other, thus define bijections between $U$ and $U'$. Continuity follows from Corollary \ref{deriveecontinue}, and the rest from Lemma \ref{deriveeiso1}.
$\square$\par\ \par\noindent
\begin{example}
Let $\mathcal{M}$ be an algebraically closed valued field with maximal ideal $\Upsilon$, $U=U':=1+\Upsilon$, $V:=\Upsilon$, $u_0=u'_0=1$, $e=0$, $\mathcal{F}:=\{u\cdot x;u\in U\}$ and $\mathcal{G}:=\{x+(u-1)\cdot x^2;u\in U'\}$. Then $\mathcal{G}$ is derivable relatively to $\mathcal{F}$, and $\partial_{\mathcal{F},\mathcal{G}}$ is constant and sends every $u'$ to $1$. Now $\partial_{\mathcal{G},\mathcal{F}}$ is defined only at the point $1$, and its value at this point is $1$.
\end{example}
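To see why the derivative is constant here, note that for $u\in U$, $u'\in U'$ and $x\in\stern{V}$ one has \[v\bigl(f_u(x)-g_{u'}(x)\bigr)=v(x)+v\bigl(u-1-(u'-1)x\bigr),\] which, for $x$ close enough to $0$ and $u'\neq 1$ (the case $u'=1$, where $g_1=f_1$, is trivial), equals $v(u'-1)+2v(x)$ when $u=1$ and $v(u-1)+v(x)$ when $u\neq 1$. Hence $f_1=id_V$ is eventually strictly closer to $g_{u'}$ than any other $f_u$, so that $C(f_u,f_1,g_{u'})$ holds for every $u\neq 1$ and $f_1$ is the tangent to every $g_{u'}$.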
\begin{prop}\label{transitivite} Let $\mathcal{F}=\{f_u:u\in U\}$ and $\mathcal{G}=\{g_u:u\in U'\}$ be nice comparable families of functions, and $h:V\longrightarrow V$ a function such that $h(e)=e$. Suppose that $h$ is derivable relatively to $\mathcal{G}$. Then $h$ is derivable relatively to $\mathcal{F}$. More precisely, if $f_a\sim_{\mathcal{F}}g_b\sim_{\mathcal{G}}h$, then $f_a\sim_{\mathcal{F}}h$.
\end{prop}
\emph{Proof:}
Let $a,b$ be elements of $U,U'$ respectively such that $f_{a}\sim_{\mathcal{F}} g_{b}$ and $g_{b}\sim_{\mathcal{G}} h$. Suppose first that $\neg C(h,f_a,g_b)$ holds.
For any $a_1\neq a$, we have $C(f_{a_1},f_a, h)$, which implies $f_a\sim_{\mathcal{F}} h$.\par\noindent
Now suppose $C(h,f_a,g_b)$, and $\neg C(f_{a_1},f_a,h)$ for some $a_1\neq a$. For every $y\neq b$, we have $C(g_y,g_b,h)$, thus $C(g_y,f_{a_1},g_b)$, and $g_b\sim_{\mathcal{G}} f_{a_1}$. From Corollary \ref{symetrie2} follows $f_{a_1}\sim_{\mathcal{F}} g_b$, so $a_1=a$. Contradiction.
$\square$\par\ \par\noindent
\section{The group law}\label{sgrouplaw}
Given a nice family of functions $\mathcal{F}=\{f_u:u\in U\}$, and a $C$-automorphism $g$ of $V$ with $g(e)=e$, then $\{g\circ f_u:u\in U\}$ and $\{f_u\circ g:u\in U\}$ are nice families of functions.
\begin{lemma}\label{law1}Let $\mathcal{F}=\{f_u:u\in U\}$ be a nice family of non-dilating functions with identity $f_{u_0}$. Let $g,h$ be two $C$-automorphisms of $V$ with $g(e)=h(e)=e$. Assume that there is an element $c\in U$ such that $C(f_c,g,f_{u_0})$ and $C(f_c,h,f_{u_0})$. Then $C(f_c,g\circ h, f_{u_0})$ holds, and $g\circ h$ is derivable relatively to the family $\mathcal{F}$. If $f_x$ is the derivative of $g\circ h$, then $C(c,x,u_0)$.
\end{lemma}
\emph{Proof:}
We have $C(g\circ f_c,g\circ h,g)$ (a), $C(f_c\circ f_c,g\circ f_c, f_c)$ (b) and $\neg C(f_c\circ f_c,f_c,id_V)$ (c). From (b) and (c) follows $C(id_V,g\circ f_c,f_c)$ (d), which together with $C(f_c,g,id_V)$ yields $C(g\circ f_c,g,id_V)$. By (a) we have $C(g\circ f_c,g\circ h,id_V)$. This relation together with (d) yields $C(f_c, g\circ h,id_V)$. The derivability of $g\circ h$ relatively to the family $\mathcal{F}$ follows from Proposition \ref{unicite}, and it is clear that if $f_x$ is the derivative of $g\circ h$, then $C(c,x,u_0)$.
$\square$\par\ \par\noindent
We fix a nice family of non-dilating functions $\mathcal{F}':=\{f_u:u\in U'\}$ with identity. Let $c\neq u_0$ be an element of $U'$, and $U:=\{x\in U':C(c,x,u_0)\}$. Then the family $\mathcal{F}:=\{f_u:u\in U\}$ is a nice family of non-dilating functions with identity.
\begin{lemma}\label{famoinsun}
Let $a\in U$. Then $C(f_c,f^{-1}_a,id_V)$.
\end{lemma}
\emph{Proof:} The relation $C(f_c,f_a,id_V)$ holds, so by composing with $f_c$ we have $C(f_c\circ f_c, f_a\circ f_c, f_c)$. This together with the fact that $f_c$ is not dilating yields
\[C(id_V, f_a\circ f_c, f_c). \,\,\,\,\,\,\,\,\,\,\, (*)\]
Now we have $C(f_c,f_a,id_V)$ and that $f_a$ is not dilating.
From this it follows that \[C(f_c,f_a\circ f_a,id_V). \,\,\,\,\,\,\,\,\,\,\, (**)\]
The relations ($*$) and ($**$) give $C(f_a\circ f_c,f_a\circ f_a,id_V)$.
Composing with $f^{-1}_a$ and using $C(f_c,f_a,id_V)$ gives the desired result.
$\square$\par\ \par\noindent
\begin{lemma}\label{comparable}
Let $a$ be an element of $U$. Then the nice families $\{f_a\circ f_u:u\in U\}$, $\{f^{-1}_a\circ f_u:u\in U\}$, $\{f_u\circ f_a:u\in U\}$ and $\{f_u\circ f^{-1}_a:u\in U\}$ are comparable with $\mathcal{F}$.
\end{lemma}
\emph{Proof:}
Let $u\in U$. By Lemma \ref{law1}, $f_a\circ f_u$ is derivable relatively to $\mathcal{F}'$, with derivative $f_x$ for some $x\in U'$. By the second part of Lemma \ref{law1}, $x\in U$, thus $f_a\circ f_u$ is derivable relatively to $\mathcal{F}$. This shows that the family $\{f_a\circ f_u:u\in U\}$ is derivable relatively to $\mathcal{F}$.\par
The derivability of $f_a \circ f_u$ relatively to $\cal F$ implies also that $f_u$ is derivable relatively to $\{ f_a^{-1} \circ f_u ; u \in U \}$. Lemma \ref{famoinsun} yields $C(f_c,f_a^{-1},id)$, so the previous argument applies to $f_a^{-1}$ instead of $f_a$. This shows that $\cal F$, $\{ f_a \circ f_u ; u \in U \}$ and $\{ f_a^{-1} \circ f_u ; u \in U \}$ are comparable.
That $\cal F$, $\{ f_u \circ f_a ; u \in U \}$ and $\{ f_u \circ f_a^{-1} ; u \in U \}$
are comparable can be proved in a similar way.
$\square$\par\ \par\noindent
{\bf\emph{Notations:}} If $g$ is derivable relatively to $\mathcal{F}$, we denote by $\widehat{g}$ its derivative.
\begin{definition} Let $\mbox{\^{o}}:\mathcal{F}\times \mathcal{F}\rightarrow \mathcal{F}$ be the operation defined by $f_u\mbox{\^{o}} f_v:=\widehat{f_u\circ f_v}$.
\end{definition}
By Lemma \ref{comparable}, $\mbox{\^{o}}$ is well defined. We will now show that $(\mathcal{F},\mbox{\^{o}},f_{u_0})$ is a group (which is clearly infinite). This group structure can obviously be transferred definably to $U$.
\begin{lemma} The operation \emph{$\mbox{\^{o}}$} is regular.
\end{lemma}
\emph{Proof:}
We show right regularity; left regularity can be proved in a similar way. Suppose that $f_w=f_u\mbox{\^{o}} f_{a}=f_v\mbox{\^{o}} f_{a}$. So $f_w\sim f_u\circ f_{a}$, and $f_w\sim f_v\circ f_{a}$. Let $\mathcal{G}:=\{f_x\circ f^{-1}_a: x\in U\}$. So $f_w\circ f^{-1}_a\sim_{\mathcal{G}} f_u$. By Corollary \ref{symetrie2} we have $f_u\sim f_w\circ f^{-1}_a$. The same argument yields $f_v\sim f_w\circ f^{-1}_a$, and we have $f_u=f_v$.
$\square$\par\ \par\noindent
\begin{lemma} The operation \emph{$\mbox{\^{o}}$} is associative.
\end{lemma}
\emph{Proof:}
If $g$ is such that $f_a\sim g$, then $f_a\circ f_v\sim_{\mathcal{G}} g\circ f_v$, where $\mathcal{G}:=\{f_x\circ f_v: x\in U\}$. Furthermore, it follows from the definition of $\mbox{\^{o}}$ that $f_a\mbox{\^{o}} f_v\sim f_a\circ f_v$. So by Proposition \ref{transitivite}, $f_a\mbox{\^{o}} f_v\sim g\circ f_v$. The same is true when we compose with $g$ from the right: $f_v\mbox{\^{o}} f_a\sim f_v\circ g$. In our notation, this means: \[\widehat{g}\,\,\mbox{\^{o}} f_v=\widehat{g\circ f_v}\] and \[f_v\mbox{\^{o}} \,\,\widehat{g}=\widehat{f_v\circ g}.\]
This implies the following:
\[(f_a\mbox{\^{o}} f_b)\mbox{\^{o}} f_c=\widehat{(f_a\circ f_b)}\mbox{\^{o}} f_c=\reallywidehat{\mbox{$(f_a\circ f_b)\circ f_c$}}=\reallywidehat{\mbox{$f_a\circ f_b\circ f_c$}}\] and
\[f_a\mbox{\^{o}} (f_b\mbox{\^{o}} f_c)=f_a\mbox{\^{o}} (\widehat{f_b\circ f_c})=\reallywidehat{\mbox{$f_a\circ( f_b\circ f_c)$}}=\reallywidehat{\mbox{$f_a\circ f_b\circ f_c$}}.\]
So $\mbox{\^{o}}$ is associative.
$\square$\par\ \par\noindent
\begin{lemma}
Every $f_a\in\mathcal{F}$ admits an inverse.
\end{lemma}
\emph{Proof:}
Let $\mathcal{G}$ be the family $\{f_a\circ f_u: u\in U\}$. The families $\mathcal{F}$ and $\mathcal{G}$ are comparable, so let $b\in U$ be such that $f_a\circ f_b\sim_{\mathcal{G}} f_{u_0}$. By Corollary \ref{symetrie2} we have $f_{u_0}\sim f_a\circ f_b$, so $f_a\mbox{\^{o}} f_b=f_{u_0}$, which is the identity element of $(\mathcal{F},\mbox{\^{o}})$. So $f_a$ has a right inverse, and the same argument shows that $f_a$ has a left inverse.
$\square$\par\ \par\noindent
We have proved the following
\begin{prop}\label{famillegroupe}Let $\mathcal{M}$ be a definably maximal and non-trivial geometric $C$-minimal structure. Suppose moreover that in the underlying tree of $\mathcal{M}$ there is no definable bijection between a bounded interval and an unbounded one, and that a nice family of non-dilating functions with identity is definable in $\mathcal{M}$. Then there is an infinite group definable in $\mathcal{M}$.
\end{prop}
Propositions \ref{existencefamille} and \ref{famillegroupe} yield directly the following Theorem.
\begin{theorem}Let $\mathcal{M}$ be a definably maximal and non-trivial geometric $C$-minimal structure. Suppose moreover that in the underlying tree of $\mathcal{M}$ there is no definable bijection between a bounded interval and an unbounded one. Then there is an infinite group definable in $\mathcal{M}$.
\end{theorem}
{\bf\emph{Acknowledgements:}}
We are indebted to Bernhard Elsner for pointing out an error in the original proof of Proposition \ref{existencefamille}.
\nocite{adeleke}
\nocite{macphersonhaskell}
\nocite{macphersonsteinhorn}
\nocite{maalouf1}
\noindent
Fran\c coise Delon, CNRS, \\
IMJ-PRG, UMR 7586, \\Univ Paris Diderot, F-75013, Paris, \\
UFR de math\'ematiques, case 7012\\
75205 Paris Cedex 13 \\
France \\
email: [email protected]
\par\ \par\noindent
Fares Maalouf\\
Universit\'e St Joseph\\
ESIB\\
B.P. 11-514\\
Riad el Solh, Beirut 11072050\\
Lebanon \\
email:[email protected]
\end{document}
\begin{document}
\renewcommand{\baselinestretch}{1}
\begin{abstract}
Consider the family of CM-fields which are pro-$p$ $p$-adic Lie
extensions of number fields of dimension at least two, which
contain the cyclotomic ${\bf Z}_p$-extension, and which are ramified at
only finitely many primes. We show that the Galois groups of the
maximal unramified abelian pro-$p$ extensions of these fields are
not always pseudo-null as Iwasawa modules for the Iwasawa algebras
of the given $p$-adic Lie groups. The proof uses Kida's formula
for the growth of $\lambda$-invariants in cyclotomic
${\bf Z}_p$-extensions of CM-fields. In fact, we give a new proof of
Kida's formula which includes a slight weakening of the usual $\mu
= 0$ assumption. This proof uses certain exact sequences involving
Iwasawa modules in procyclic extensions. These sequences are
derived in an appendix by the second author.
\end{abstract}
\renewcommand{\baselinestretch}{1.3}
\title[On the failure of pseudo-nullity]{On the failure of pseudo-nullity\\
of Iwasawa modules}
\author{Yoshitaka Hachimori and Romyar T. Sharifi}
\maketitle
\section{Introduction}
Let $p$ be a prime number. Given a Galois extension $L$ of a
number field $F$, one may consider the inverse limit $X_L$ under
norm maps of the $p$-parts of class groups in intermediate number
fields in $L$. By class field theory, this is none other than the
Galois group of the maximal unramified abelian pro-$p$ extension
of $L$. Setting $\mc{G} = \Gal(L/F)$, we let $\Lambda(\mc{G})$
denote the Iwasawa algebra ${\bf Z}_p[[\mc{G}]]$ of $\mc{G}$. By a
module for $\Lambda(\mc{G})$, we will always mean a compact left
$\Lambda(\mc{G})$-module. In particular, $X_L$ may be given the
structure of a $\Lambda(\mc{G})$-module via conjugation, and we
shall be concerned in this article with its resulting structure.
Let $K$ denote the cyclotomic ${\bf Z}_p$-extension of $F$. We say that
a $p$-adic Lie extension $L/F$ is {\em admissible} if $\mc{G} =
\Gal(L/F)$ has dimension at least $2$, $L$ contains $K$, and $L/F$
is unramified outside a finite set of primes of $F$. An
admissible extension will be said to be {\em strongly admissible}
if $\mc{G}$ is pro-$p$ and contains no elements of order $p$. We
remark that any compact $p$-adic Lie group contains a pro-$p$
$p$-adic Lie group with no elements of order $p$ as an open
subgroup (cf.\ \cite[Corollary 8.34]{DdMS}).
R. Greenberg considered the situation in which $L$ is the
compositum of all ${\bf Z}_p$-extensions of $F$. In this case,
$\Lambda(\mc{G})$ is isomorphic to a power series ring over ${\bf Z}_p$
in finitely many commuting variables. Greenberg conjectured that
the annihilator of $X_L$ has height at least $2$ as a
$\Lambda(\mc{G})$-module, which is to say that $X_L$ is
$\Lambda(\mc{G})$-pseudo-null (see
\cite[Conjecture~3.5]{greenberg}).
In \cite{venj-str}, O. Venjakob generalized the notion of a
finitely generated pseudo-null $\Lambda(\mc{G})$-module to pro-$p$
groups $\mc{G}$ with no elements of order $p$. In \cite{css},
this definition was further generalized to finitely generated
modules over arbitrary rings. For $\mc{G}$ a compact $p$-adic Lie
group, a finitely generated $\Lambda(\mc{G})$-module $M$ is
pseudo-null if $\mr{Ext}^i_{\Lambda(\mc{G})}(M,\Lambda(\mc{G})) =
0$ for $i = 0,1$.
The following question became of interest.
\begin{question} \label{false}
Let $L/F$ be an admissible $p$-adic Lie
extension, and set $\mc{G} = \Gal(L/F)$.
Is $X_L$ necessarily pseudo-null as a $\Lambda(\mc{G})$-module?
\end{question}
Set $\Gamma = \Gal(K/F)$. In general, $X_K$ is known to be a
finitely generated torsion $\Lambda = \Lambda(\Gamma)$-module and
was conjectured by K.\ Iwasawa to be finitely generated over ${\bf Z}_p$. In
other words, Iwasawa conjectured that $X_K$ has trivial
$\mu$-invariant $\mu(X_K)$. When $F/{\bf Q}$ is abelian, $\mu(X_K) =
0$ by a result of B.\ Ferrero and L.\ Washington \cite{FeWa}. In
the case that $F = {\bf Q}(\mu_p)$, the $\lambda$-invariant
$\lambda(X_K)$ is positive if and only if $p$ is an irregular
prime. Thus, $X_K$ is often not pseudo-null as a
$\Lambda$-module, since pseudo-null $\Lambda$-modules are finite;
this is the reason for the condition that the dimension be at least $2$.
\begin{remark}
The condition that $L$ contain $K$ is also necessary, as
Greenberg (following Iwasawa) had constructed examples of
extensions $L/F$ for which $X_L$ is not pseudo-null
and $\mc{G} = \Gal(L/F)$ is
free of arbitrarily large finite rank over ${\bf Z}_p$ but $L$ does
not contain $K$ (unpublished). In particular, these $X_L$ have
nonzero $\mu$-invariants as $\Lambda(\mc{G})$-modules
(cf.\ \cite[Section 3]{venj-str} for the general definition).
\end{remark}
In general, if $L/F$ is an admissible $p$-adic Lie extension, then
$X_L$ is a finitely generated torsion module over
$\Lambda(\mc{G})$
(cf.\ \cite[Theorem 7.14]{Ho} and Lemma \ref{torsion}).
If, in addition, it
is finitely generated over $\Lambda(G)$, for $G = \Gal(L/K)$, then
the property of $X_L$ being $\Lambda(\mc{G})$-pseudo-null is
equivalent to its being $\Lambda(G)$-torsion (in an appropriate
sense; cf.\ Lemma \ref{ven-p-null}). We note that if Iwasawa's
$\mu$-invariant conjecture holds, then $X_L$ is indeed finitely
generated over $\Lambda(G)$ (again, see Lemma \ref{torsion}). So,
Question \ref{false} could very well be rephrased to ask if $X_L$
is a finitely generated $\Lambda(G)$-torsion module.
As we shall demonstrate in this paper, the answer to Question
\ref{false} is ``no,'' with counterexamples occurring frequently
for CM-fields $L$. We remark that, until this point, no such
counterexamples had been known (or expected). Note that if $L$ is
a CM-field, then complex conjugation provides a canonical
involution on $X_L$, and if $p$ is odd we obtain a canonical
decomposition $X_L = X_L^+ \oplus X_L^-$ into its plus and minus
one eigenspaces, respectively. In fact, we can compute the {\em
$\Lambda(G)$-rank} (see Section \ref{modules}) of $X_L^-$ for any
strongly admissible $p$-adic Lie extension $L/F$ of CM-fields with
$\mu(X_K^-) = 0$. This rank is zero if and only if $X_L^-$ is
$\Lambda(G)$-torsion, or equivalently,
$\Lambda(\mc{G})$-pseudo-null.
\begin{theorem} \label{main}
Let $p$ be an odd prime and
$L/F$ be a strongly admissible $p$-adic Lie extension of CM-fields,
with Galois group $\mc{G}$.
Assume that $\mu(X_K^-) = 0$.
Let $\pr{L/K}$ be the set of primes in the maximal real subfield
$K^+$ of $K$ that split in $K$, ramify in $L^+$, and do not divide $p$.
Let $\delta$ be $1$ or $0$ depending upon whether $F$ contains the
$p$th roots of unity or not, respectively.
Then we have
$$
\rk_{\Lambda(G)}(X_L^-)= \lambda(X_K^-)-\delta+\numpr{L/K}.
$$
In particular, $X_L^-$ is not pseudo-null over
$\Lambda(\mc{G})$ if
and only if
$\lambda(X_K^-) - \delta + \numpr{L/K} \geq 1$.
\end{theorem}
The proof of Theorem \ref{main} uses a formula of Y.\ Kida's
\cite{Ki} for $\lambda$-invariants in CM-extensions of cyclotomic
${\bf Z}_p$-extensions of number fields. We will also give another
proof of Kida's formula.
Note that Theorem \ref{main} provides counterexamples to a
positive answer to Question \ref{false} even in the case that $F =
{\bf Q}(\mu_p)$ and $L$ is a ${\bf Z}_p$-extension of $K$ with complex
multiplication which is unramified outside $p$ (see Example
\ref{basicex}). For instance, the smallest prime $p$ for which the
${\bf Z}_p$-rank of $X_K^-$ is (at least) 2 is $p = 157$.
By Kummer duality, there are two ${\bf Z}_p$-extensions $L$ of $K$ that
are Galois over ${\bf Q}$ and for which $X_L^-$ is not pseudo-null as a
$\Lambda(\mc{G})$-module. Other examples occur for $p = 353, 379,
467, 491,$ and so on.
On the other hand, some mild evidence for a positive answer to
Question \ref{false} is given in \cite{massey} and \cite{paireis}
in the case that $F = {\bf Q}(\mu_p)$ and $L$ is a ${\bf Z}_p$-extension of
$K$ which is unramified outside $p$ and defined via Kummer theory
by a sequence of cyclotomic $p$-units in ${\bf Q}(\mu_{p^{\infty}})^+$.
In particular, such $L$ are not CM-fields. For instance, it is
shown in \cite{paireis} that, for the ${\bf Z}_p$-extension $L =
K(p^{1/p^{\infty}})$ of $K$ with $F = {\bf Q}(\mu_p)$, the
$\Lambda(\mc{G})$-module $X_L$ is pseudo-null for all $p < 1000$.
So, even with counterexamples to pseudo-nullity, there remains the
question of finding a natural class of extensions $L/F$ over which
$X_L$ is pseudo-null as a $\Lambda(\mc{G})$-module.
We describe a couple of possibilities for such a class, though
this description is tangential to the rest of the paper. First,
consider an algebraic variety $Z$ over $F$, and form, for some $i
\ge 0$ and $r \in {\bf Z}$, the cohomology group
$H^i_{\text{\'et}}(Z,{\bf Q}_p(r))$. In the spirit of Fontaine-Mazur
\cite{fm}, we say that $L/F$ {\em comes from algebraic geometry}
if $L$ lies in the fixed field of the representation of
$\Gal(\bar{{\bf Q}}/F)$ on such a cohomology group. We mention the
following refinement of Question \ref{false}.
\begin{question} \label{possible}
If $L/F$ is an admissible $p$-adic Lie extension which
comes from algebraic geometry, then must $X_L$ be pseudo-null as
a $\Lambda(\mc{G})$-module?
\end{question}
We feel that there is currently insufficient evidence for a
positive answer to this question to conjecture it in general.
However, it seems quite reasonable that it could hold, since we
restrict to a setting in which the size of $X_L$ might be
controlled by $p$-adic $L$-functions. We believe that it is not
known if CM-fields arising as admissible $p$-adic Lie extensions
ever come from algebraic geometry, though it is generally expected
that they do not.
One might wish for a still larger class in Question
\ref{possible}, since even in the case that $F = {\bf Q}(\mu_p)$ and
$L$ is a ${\bf Z}_p$-extension of $K$ defined by a sequence of
cyclotomic $p$-units, the extension $L$ need not come from
algebraic geometry (if the Tate twist of $G$ is non-integral). So,
consider a tower of algebraic varieties $(Z_n)_{n \ge 0}$ defined
over $F$ such that the $Z_n$ are all Galois \'etale covers of $Z =
Z_0$ and the Galois group of the tower is a $p$-adic Lie group. We
then expand our class to contain those admissible $p$-adic Lie
extensions which lie in the fixed field of the Galois action on
$\displaystyle \lim_{\leftarrow}
H^i_{\text{\'et}}((Z_n)_{/\bar{{\bf Q}}},{\bf Q}_p(r))$ for some $i \ge 0$
and $r \in {\bf Z}$, in which the inverse limit is taken with respect
to trace maps (cf.\ \cite{ohta}). Then any $L$ arising from
cyclotomic $p$-units can be recovered, for instance, from the
first cohomology groups with ${\bf Q}_p$-coefficients in the tower of
Fermat curves $x^{p^n} + y^{p^n} = z^{p^n}$ over $F = {\bf Q}(\mu_p)$
(cf.\ \cite[Corollary 1 of Theorem B]{iky}).
The organization and contents of this paper are as follows. In
Section \ref{kida}, we give another proof of Kida's formula
(Theorem \ref{kida2}) which includes a slight weakening of the
usual assumption $\mu(X_K^-) = 0$. We consider, in this formula,
any quotient $X_{L,T}$ of $X_L$ by the decomposition groups at
primes above $T$, for a finite set of primes $T$ of $F$. In
Section \ref{modules}, we discuss Iwasawa modules in $p$-adic Lie
extensions, elaborating on some of the definitions and remarks
given in this introduction as well as providing lemmas for later
use. In Section \ref{general}, we prove the generalization of
Theorem \ref{main} to the case of $X_{L,T}$ (Theorem
\ref{second}). In Section \ref{examples}, we provide, along with
a few remarks, specific examples of cases in which
$\Lambda(\mc{G})$-pseudo-nullity fails. Finally, in Appendix
\ref{special}, the second author derives two exact sequences
involving the $G$-invariants and coinvariants of $X_L$ for quite
general procyclic extensions $L/K$ (Theorem \ref{comparison} and
Corollary \ref{zpext}). These are used in the proof of Theorem
\ref{kida2}.
\begin{acknowledgments}
We wish to express our gratitude to John Coates for arranging for
our collaboration on this work, for his continued support
throughout,
and for his very helpful comments. We also thank Susan Howson,
Manfred Kolster, Masato Kurihara, Kazuo Matsuno, Manabu Ozaki, and
Otmar Venjakob for their valuable advice.
The first author was partially supported by Gakushuin University and
the 21st Century COE program at the Graduate School of Mathematical Sciences of
the University of Tokyo.
The second author was supported by the Max Planck Institute for Mathematics.
\end{acknowledgments}
\section{Kida's formula} \label{kida}
In this section, we will give a proof of a mild generalization of
Kida's formula in which the condition on the $\mu$-invariant is
weakened. For this, we use the exact sequences of Iwasawa modules
in cyclic extensions of Appendix \ref{special} (Theorem
\ref{comparison}). We let $p$ be an odd prime in this section.
Let $F$ denote a number field which is CM, and let $K$ denote its
cyclotomic ${\bf Z}_p$-extension. Consider a CM-field $L$ Galois over
$K$, and set $G = \Gal(L/K)$. Let $T$ be a finite set of primes
of $F^+$
(which we could just as well assume consists solely of primes above $p$
in what follows), and for any algebraic extension $E/F^+$, let $T_E$ be
the set of primes above $T$. Define $P_K$ to be the set of primes
$v$ of $K$ lying above $p$. Fix a set $V_K^-$ consisting of one
prime of $K$ for each prime of $K^+$ that splits in $K$. Let
$T_K^- = T_K \cap V_K^-$ and $P_K^- = P_K \cap V_K^-$.
Furthermore, we let
\begin{equation} \label{primeset}
Q_{L/K}^T =
\{ v \in V_K^- - (T_K^- \cup P_K^-) \colon I_v \neq 1 \}
\cup \{ v \in T_K^- \colon G_v \neq 1 \},
\end{equation}
with $G_v$ and $I_v$ denoting the
decomposition and inertia
groups
in $G$, respectively, at a chosen prime above $v$ in $L$. That
is, $Q_{L/K}^T$ is in one-to-one correspondence with the set of
primes $u$ of $K^+$ such that $u$ splits in $K$
but not completely in $L$ and either $u \in T_{K^+}$ or $u$ does not lie
above $p$.
For $v \in Q_{L/K}^T$, let $g_v(L/K) = [G:G_v]$.
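For example, if $T = \emptyset$, then $Q_{L/K}^{\emptyset}$ consists of the chosen
representatives in $V_K^-$ of those primes of $K^+$ that split in $K$, do not lie
above $p$, and ramify in $L$. Note also that $g_v(L/K)$ is simply the number of
primes of $L$ lying above $v$.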
Denoting the maximal real subfield of any $L$ as above by $L^+$,
we have a direct sum decomposition of any
${\bf Z}_p[\Gal(L/L^+)]$-module $M$ into $(\pm 1)$-eigenspaces $M^{\pm}$
under complex conjugation. We shall be particularly interested in
(the minus parts of) two such Iwasawa modules. That is, we define
$X_{L,T}$ to be the Galois group of the maximal unramified abelian pro-$p$
extension of $L$ in which all primes in $T_L$ split completely, and we let
$\mc{U}_{L,T}$ denote the inverse limit of the $p$-completions of
the $T$-unit groups of the finite subextensions of $F$ in $L$.
We set
$$
\delta = \begin{cases} 1 & \mr{if\ } \mu_p \subset F
\\ 0 & \mr{if\ } \mu_p \not\subset F.
\end{cases}
$$
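Note that $\delta = 1$ exactly when $\mu_p \subset K$: since $K/F$ is pro-$p$
while $[F(\mu_p):F]$ divides $p-1$, the field $K$ contains $\mu_p$ if and only
if $F$ does.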
For a finitely generated $\Lambda$-module $M$ and any choice of
pseudo-isomorphism
$$
M(p) \to \bigoplus_{i=1}^N \Lambda/p^{r_i}\Lambda,
$$
where $M(p)$ denotes its $p$-power torsion subgroup,
set
$$
\theta(M) = \max\,\{ r_i \mid 1 \le i \le N \}.
$$
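Note that $\theta(M) \le 1$ holds if and only if $M(p)$ is pseudo-isomorphic to
$(\Lambda/p\Lambda)^N$ for some $N \ge 0$; in particular, it holds whenever
$\mu(M) = 0$.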
The following remark should be kept in mind in what follows.
\begin{remark}
Let $L/K$ be a finite Galois $p$-extension such that $L/F$ is
Galois. Set $\mc{G} = \Gal(L/F)$.
Any $\Lambda(\mc{G})$-module $M$
becomes a $\Lambda = \Lambda(\Gamma)$-module through a
choice of subgroup of $\mc{G}$ lifting $\Gamma$, and the isomorphism class
of $M$ as a $\Lambda$-module is independent of this choice.
\end{remark}
We have the following generalization of Kida's formula for the
behavior of $\lambda(X_L^-)$ in finite Galois $p$-extensions $L/K$
(such that $L/F$ is Galois) \cite{Ki} (see also the related
results of L.\ Kuz'min \cite[Appendix 2]{kuzmin} and in
\cite[Corollary 11.4.13]{nsw}, though for the latter we remark
that the formulas are not quite correct as written). In the above
results, it is assumed that $\mu(X_{K,T}^-) = 0$, which implies
that $\mu(X_{L,T}^-) = 0$. (In these results, it is also assumed
that either $T$ is empty or consists of the primes above $p$.) We
make the weaker assumption that $\theta(X_{L,T}^-) \le 1$. Of
course, Iwasawa conjectured that $\mu(X_{K,T}) = 0$ always holds.
\begin{theorem} \label{kida2}
Let $p$ be an odd prime and $L$ a finite
$p$-extension of $K$ which is Galois over $F$.
Assume that $\theta(X_{L,T}^-) \le 1$. Then
\begin{equation} \label{kidasformula}
\lambda(X_{L,T}^-)=[L:K](\lambda(X_{K,T}^-)
-\delta+|Q_{L/K}^T|)+\delta
-\sum_{v\in Q_{L/K}^T} g_v(L/K)
\end{equation}
and
$$
\mu(X_{L,T}^-) = [L:K]\mu(X_{K,T}^-).
$$
\end{theorem}
\begin{proof}
For notational convenience, we leave out superscript and
subscript $T$'s in this proof.
We claim that it suffices to
demonstrate
this result in the case that $[L:K] = p$.
To see this, let $E$ be an intermediate field in $L/K$, and, if necessary,
replace $F$ by a finite extension $F'$ of $F$ in $K$ such that
$E/F'$ is Galois.
Since $X_L^- \to X_E^-$ is surjective (as $G = G^+$), the fact that
$\theta(X_L^-) \le 1$ implies that $\theta(X_E^-) \le 1$ as
well. Now we use induction on the degree $[L:K]$ and assume that
$L/E$ has degree $p$. Then \eqref{kidasformula} for $L/K$
amounts to a check of the formula
$$
\sum_{v \in Q_{E/K}} ([L:K]-p g_v(E/K))+ \sum_{w \in
Q_{L/E}} (p-1) =\sum_{v \in Q_{L/K}} ([L:K]-g_v(L/K)),
$$
which we leave to the reader.
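In more detail, one may apply the inductive hypothesis to $E/K$ and the
degree-$p$ case to $L/E$ (the invariant $\delta$ is the same in these
applications, since $\mu_p$ lies in the relevant base field exactly when
$\mu_p \subset F$), noting that $g_w(L/E)=1$ for every $w \in Q_{L/E}$, its
decomposition group being a nontrivial subgroup of the group $\Gal(L/E)$ of
order $p$. Substituting then gives
$$
\lambda(X_L^-)=[L:K]\bigl(\lambda(X_K^-)-\delta\bigr)+\delta
+\sum_{v\in Q_{E/K}}\bigl([L:K]-p\,g_v(E/K)\bigr)+\sum_{w \in Q_{L/E}}(p-1),
$$
whereas the right-hand side of \eqref{kidasformula} for $L/K$ equals
$$
[L:K]\bigl(\lambda(X_K^-)-\delta\bigr)+\delta
+\sum_{v\in Q_{L/K}}\bigl([L:K]-g_v(L/K)\bigr),
$$
so the two agree precisely when the identity above holds.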
Let us use $\mu'$ to denote $\mu$-invariants with respect to
$\Gal(K/F')$. By induction, we have
\begin{align*}
\mu(X_L^-) &= [F':F]^{-1}\mu'(X_L^-) = [F':F]^{-1}p\mu'(X_E^-)\\
&= [F':F]^{-1}[L:K]\mu'(X_K^-) = [L:K]\mu(X_K^-).
\end{align*}
This proves the claim.
So, let $G$
be cyclic of order $p$. We examine the minus parts of
the sequences $\Gamma_{L/K}$ and $\Psi_{L/K}$ of Theorem \ref{comparison}.
Since $\mc{U}_L^- \cong {\bf Z}_p(1)^{\delta}$ (see, for example,
\cite[Theorem 11.3.11(ii)]{nsw}), we have
$\hat{H}^0(\mc{U}_L^-) \cong \mu_p^{\delta}$ and
$\hat{H}^{-1}(\mc{U}_L^-) = 0$. Let $S_{L/K}$
be the set of elements $v$ of $P_K^-$ with $v \notin T_K^-$
and $I_v \neq 1$. Since $G$ is cyclic of order $p$
and $G = G^+$, the sequence $\Psi_{L/K}$ reduces to
\begin{multline*}
\Psi_{L/K}^-\colon\ \ 0 \to (G^{\otimes 2})^{\oplus |S_{L/K}|}
\to \hat{H}^{0}(X_L^-) \otimes G \to \mu_p^{\delta} \to \\
G^{\oplus |Q_{L/K}|+|S_{L/K}|} \to \hat{H}^{-1}(X_L^-) \to 0.
\end{multline*}
This yields that
\begin{equation} \label{herbrand}
h(X_L^-) = p^{\delta-|Q_{L/K}|},
\end{equation}
where we use $h(X_L^-)$ to denote the Herbrand quotient with respect to $G$.
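One way to see this: all terms of $\Psi_{L/K}^-$ are finite, the outer terms
visibly so, and hence also $\hat{H}^0(X_L^-) \otimes G$ and
$\hat{H}^{-1}(X_L^-)$ by exactness. Since $\hat{H}^0(X_L^-)$ is killed by $p$,
we have $\hat{H}^0(X_L^-) \otimes G \cong \hat{H}^0(X_L^-)$, and the
alternating product of orders in the five-term sequence yields
$$
p^{|S_{L/K}|}\cdot p^{\delta}\cdot |\hat{H}^{-1}(X_L^-)|
= |\hat{H}^{0}(X_L^-)|\cdot p^{|Q_{L/K}|+|S_{L/K}|},
$$
which is \eqref{herbrand}.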
As for the minus-part of $\Gamma_{L/K}$, we need from it only
the rather well-known consequence that the restriction map
$(X_L^-)_G \to X_K^-$ is a pseudo-isomorphism. We have an
exact sequence
$$
0 \to X_L^-(p) \to X_L^- \to Z \to 0,
$$
with $Z$ a $\Lambda[G]$-module which is free of finite rank
over ${\bf Z}_p$. Clearly, we have
\begin{eqnarray} \label{lambda}
\lambda(Z) = \lambda(X_L^-) & \mr{and} &
\lambda(Z_G) = \lambda(X_K^-).
\end{eqnarray}
The sequence $\Psi_{L/K}^-$ also implies that
$\mu(\hat{H}^i(X_L^-)) = 0$
for all $i$. As we shall see in
Lemma \ref{mptrivial}(a), this and
the fact that $\theta(X_L^-) \le 1$ imply that $h(X_L^-(p)) =
1$. Therefore, we have
\begin{equation} \label{extra}
h(Z) = h(X_L^-).
\end{equation}
Now, we know by
representation theory (as pointed out in \cite{Iw})
that
$$
Z \cong {\bf Z}_p[G]^a \oplus I_G^b \oplus {\bf Z}_p^c,
$$
where $I_G$ denotes the augmentation ideal of ${\bf Z}_p[G]$.
It follows that $\lambda(Z_G) = a+c$ and $h(Z) = p^{c-b}$.
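Indeed, for $G$ cyclic of order $p$, the three types of summands have Herbrand
quotients
$$
h({\bf Z}_p[G])=1, \qquad h(I_G)=p^{-1}, \qquad h({\bf Z}_p)=p,
$$
while their $G$-coinvariants are ${\bf Z}_p$, the finite group $I_G/I_G^2$, and
${\bf Z}_p$, respectively.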
Applying \eqref{herbrand}, \eqref{lambda}, and \eqref{extra},
we conclude that
$$
\lambda(X_L^-) = p(a+b) + c-b =
p(\lambda(X_K^-)-\delta+|Q_{L/K}|)+\delta-|Q_{L/K}|,
$$
verifying \eqref{kidasformula}.
As for the $\mu$-invariant, the fact
that $(X_L^-)_G$ and $X_K^-$ are pseudo-isomorphic implies
that $\mu(X_K^-) = \mu((X_L^-)_G)$.
The result then follows from Lemma \ref{mptrivial}(b) below.
\end{proof}
To finish the proof of Theorem \ref{kida2}, we need the following
results on Herbrand quotients and $\mu$-invariants. Let $G$ be a
cyclic group of order $p$. We use $\hat{H}^i(M)$ to denote the
$i$th Tate cohomology group of $M$ with respect to $G$.
\begin{lemma} \label{fptrivial}
Let $M$ be an ${\bf F}_p[G]$-module
for which $\hat{H}^i(M)$ is finite for all $i$. Then
the Herbrand quotient $h(M)$ is trivial.
\end{lemma}
\begin{proof}
Note that ${\bf F}_p[G]$ has a filtration
\begin{equation} \label{filtration}
{\bf F}_p[G] = I_G^0 \supset I_G \supset \ldots \supset
I_G^{p-1} = (N_G) \supset I_G^p = 0,
\end{equation}
where $I_G$ denotes the augmentation ideal and $N_G$ denotes
the norm element in ${\bf F}_p[G]$. Let $M[I_G^k]$ denote the
submodule of
elements of $M$ killed by $I_G^k$. We then have exact
sequences
\begin{multline*}
0 \to M[I_G^{k}]/(I_G M \cap M[I_G^{k}])
\to M[I_G^{k+1}]/(I_G M \cap M[I_G^{k+1}])
\xrightarrow{\phi_k} \\
M^G/(I_G^{k+1} M)^G \to M^G/(I_G^k M)^G \to 0
\end{multline*}
for $k \ge 0$, and $\phi_0$ is
simply the identity on $M^G/(I_G M)^G$. Since the kernel and
cokernel of $\phi_{k+1}$ are the domain and range of
$\phi_k$, we conclude that the orders of the domain and
range of $\phi_{p-2}$ are equal, if finite. Since this
finiteness is assumed, we conclude that $h(M) = 1$.
\end{proof}
In the following lemmas, we take $M$ to be a finitely generated
torsion $\Lambda$-module with a commuting $G$-action (i.e., a
finitely generated $\Lambda[G]$-module which is
$\Lambda$-torsion). We use $N_G$ to denote the norm element in
${\bf Z}_p[G]$.
\begin{lemma} \label{basiclemma}
The following conditions on $M$
are equivalent:
\begin{enumerate}
\item[(i)] $\mu(\hat{H}^0(M)) = 0$
\item[(ii)] $\mu(\hat{H}^i(M)) = 0$ for all $i \in {\bf Z}$
\item[(iii)] $\mu(M_G) = \mu(N_G M)$.
\end{enumerate}
\end{lemma}
\begin{proof}
The equivalence of (i) and (ii) follows immediately from
$$
0 \to M^G \to M \xrightarrow{\sigma-1} M \to M_G \to 0,
$$
for some generator $\sigma$ of $G$,
\begin{equation} \label{2ndseq}
0 \to \hat{H}^{-1}(M) \to M_G \xrightarrow{N_G} M^G \to \hat{H}^0(M)
\to 0,
\end{equation}
and additivity of $\mu$-invariants in exact sequences. The
equivalence with (iii) also follows from \eqref{2ndseq}, as it
implies that $\mu(M_G) = \mu(N_G M)$ if and only if
$\mu(\hat{H}^{-1}(M)) = 0$.
\end{proof}
\begin{lemma} \label{mptrivial}
Assume that $\mu(\hat{H}^0(M)) = 0$ and $\theta(M) \le 1$.
Then we have:
\begin{enumerate}
\item[(a)] $h(M(p)) = 1$
\item[(b)] $\mu(M) = p\mu(M_G)$.
\end{enumerate}
\end{lemma}
\begin{proof}
We first claim that it suffices to replace $M$ by
$M(p)/pM(p)$. For this,
note that we have an exact sequence of $\Lambda[G]$-modules
$$
0 \to M(p) \to M \to Z \to 0,
$$
where $Z$ is free of finite rank over ${\bf Z}_p$. Since $\mu(Z) = 0$,
the same holds for its cohomology groups, and hence
$$
\mu(\hat{H}^0(M(p))) = \mu(\hat{H}^0(M)) = 0,
$$
which is to say that $\hat{H}^0(M(p))$ is finite. By Lemma
\ref{basiclemma}, the groups $\hat{H}^i(M(p))$ are all finite.
Since $\theta(M) \le 1$, we have that $M(p) \to M(p)/pM(p)$
is a pseudo-isomorphism, and hence the $\hat{H}^i(M(p)/pM(p))$
are finite as well. As Herbrand quotients of finite modules
are trivial, the claim is proven.
With the claim proven and $M$ now assumed to be $p$-torsion, part (a)
follows from Lemma \ref{fptrivial},
using again the equivalence of (i) and (ii) in Lemma \ref{basiclemma}.
For part (b), let
$I_G$ denote the augmentation ideal in ${\bf Z}_p[G]$.
Then $I_G^k M/I_G^{k+1} M$ is isomorphic to a
quotient of $I_G^{k-1} M/I_G^k M$ for $k \ge 1$. As in \eqref{filtration},
we have
$$
\mu(I_G^{p-1} M/I_G^p M) = \mu(I_G^{p-1} M) = \mu(N_G M).
$$
Since $\mu(M_G) = \mu(N_G M)$
by Lemma \ref{basiclemma}, it follows
that
$\mu(I_G^k M/I_G^{k+1} M) = \mu(M_G)$
for $0 \le k \le p-1$. We conclude that
$$
\mu(M) = \sum_{k=0}^{p-1} \mu(I_G^k M/I_G^{k+1} M) =
p\mu(M_G).
$$
\end{proof}
\begin{remark}
In general, $h(M)$ can be nontrivial when $M$ is a $p$-power torsion
$\Lambda$-module for which $h(M)$ is defined.
For example, the principal $(\Lambda/p^2\Lambda)[G]$-ideal
$M = ((g-1)-p(\gamma-1))$, for generators $g$ of $G$ and
$\gamma$ of $\Gamma$, has $h(M) = p$.
\end{remark}
\section{Iwasawa modules} \label{modules}
In this section, we assemble a few definitions and
easily proven results regarding modules for
the Iwasawa algebras of $p$-adic Lie extensions.
Let $G$ be a compact $p$-adic Lie group. We define a finitely
generated $\Lambda(G)$-module $M$ to be {\em torsion} (resp., {\em
pseudo-null}) if
$$
\mr{Ext}^i_{\Lambda(G)}(M,\Lambda(G)) = 0
$$
for $i = 0$ (resp., for $i = 0,1$). These definitions coincide
with those of Venjakob \cite[Sections 2-3]{venj-str} and J.
Coates, P. Schneider, and R. Sujatha in \cite[Section 1]{css}.
When $\Lambda(G)$ is an integral domain (for instance, if $G$ has
no elements of finite order), this definition of
$\Lambda(G)$-torsion reduces to the usual one.
For any pro-$p$ $p$-adic Lie group $G$ with no $p$-torsion, we let
$\mc{Q}(G)$ denote the skew fraction-field of $\Lambda(G)$. For
any finitely generated $\Lambda(G)$-module $M$, we define the {\em
$\Lambda(G)$-rank} of $M$ as a $\Lambda(G)$-module, denoted
$\rk_{\Lambda(G)} M$, to be the dimension of
$\mc{Q}(G)\otimes_{\Lambda(G)} M$ as a left $\mc{Q}(G)$-vector
space (see, for example, \cite[Section 2]{CH}). A finitely
generated $\Lambda(G)$-module $M$ is $\Lambda(G)$-torsion if and
only if it has trivial $\Lambda(G)$-rank.
We say that a compact $p$-adic Lie group $G$ is {\em uniform} (or,
{\em uniformly powerful}) if its commutator subgroup $[G,G]$ is
contained in the group $G^p$ generated by $p$th powers and the
$p$th power map induces isomorphisms on the successive graded
quotients in its lower central $p$-series (cf.\ \cite[Definition
4.1]{DdMS}). It is known that any compact $p$-adic Lie group
contains an open normal subgroup which is uniform (cf.\
\cite[Corollary 8.34]{DdMS}).
\begin{lemma} \label{ven-p-null}
Let $\mc{G}$ be a compact $p$-adic Lie group, and assume that
$G$ is a closed normal subgroup with $\mc{G}/G \cong {\bf Z}_p$.
Then a $\Lambda(\mc{G})$-module
which is finitely generated over $\Lambda(G)$
is pseudo-null if and only if it is torsion as a $\Lambda(G)$-module.
\end{lemma}
\begin{proof}
This is proven in
\cite[Proposition 5.4]{venjakob} (using \cite[Example 2.3]{venjakob})
if $G$ is uniform and we have
an inclusion of subgroups $[\mc{G},G] \le G^p$.
We claim that $\mc{G}$ has
an open subgroup $\mc{G}_1$ with this property, taking $G_1 =
G \cap \mc{G}_1$.
To see this, first let
$\mc{G}_0$ be any open normal uniform subgroup of $\mc{G}$,
and set $G_0 = G \cap \mc{G}_0$. Fix $\gamma \in \mc{G}_0$
with image generating $\mc{G}_0/G_0$, and let $\Gamma$ be the
closed subgroup of $\mc{G}_0$ that $\gamma$ generates.
We have a canonical isomorphism $\mc{G}_0 \cong G_0 \rtimes
\Gamma$ (for the given action of $\Gamma$ on $G_0$).
Let $G_1$ be an open normal uniform subgroup of $G_0$.
Choose $n \ge 0$ such that $[\Gamma^{p^n},G_1] \le G_1^p$,
and let $W$ be the closed normal subgroup of
$G_1$ generated by $[\Gamma^{p^n},G_1]$.
Let $\mc{G}_1 \cong G_1 \rtimes \Gamma^{p^n}$, an open
subgroup of $\mc{G}$.
Then $\mc{G}_1$ yields the claim, as
$$
[\mc{G}_1,G_1] \le [G_1,G_1] \cdot W \le G_1^p.
$$
The result now follows from the definitions of pseudo-nullity
and torsion modules given above, since
for any $\Lambda(\mc{G})$-module $M$, one has
\cite[Lemma 2.3]{jannsen}
$$
\mr{Ext}^i_{\Lambda(\mc{G})}(M,\Lambda(\mc{G})) \cong
\mr{Ext}^i_{\Lambda(\mc{G}_1)}(M,\Lambda(\mc{G}_1))
$$
for any $i \ge 0$,
along with the corresponding fact replacing $\mc{G}$ by $G$
and $\mc{G}_1$ by $G_1$.
\end{proof}
We now consider the behavior of a certain sort of filtration on a
compact $p$-adic Lie group with respect to taking subgroups.
\begin{lemma} \label{well-known}
Let $G$ be a compact $p$-adic Lie group, let $G_1$ be an open normal
uniform pro-$p$ subgroup of $G$,
and let $H$ be a closed subgroup of $G$. For $n \ge
1$, let $G_{n+1}$ denote the open normal subgroup of $G$
which is the topological closure
of $G_n^p[G_n,G_1]$ in $G$.
Then $H$ is a $p$-adic Lie group of dimension $e \le \dim
G$, and there exists a rational number $C$ such that
$[H:H\cap G_n]=Cp^{ne}$
for all sufficiently large $n$.
\end{lemma}
\begin{proof}
The first statement is well-known, and as for the second
statement,
we begin by noting that
$G_{n}={G_1^{p^{n-1}}}$
since $G_1$ is uniform (cf.\ \cite[Theorem 3.6]{DdMS}).
So, there exists an integer $c$ such that
$H\cap G_{n}$ is
uniform for all $n\geq c$
(cf.\ \cite[\S 4 Exercise 14 (i)]{DdMS}).
Let $H_0:=H\cap G_c$.
Then we can take some $c'$ such that
$H_0 \cap G_n=(H_0\cap G_{c'})^{p^{n-c'}}$ for all $n\geq c'$
(cf.\ \cite[\S 4 Exercise 14 (ii)]{DdMS}).
Setting $H_1=H_0\cap G_{c'}$ and defining
$H_{n+1}$ to be the topological closure of $H_{n}^p[H_n,H_1]$ in $G$,
we have
$H_n= H_1^{p^{n-1}}=H_0\cap G_{n+c'-1}$
since $H_1$ is uniform.
As there exists some constant $C'$ such that
$[H:H_0 \cap G_{n+c'-1}]=C'p^{ne}$ for all sufficiently
large $n$ and $[H \cap G_n : H_n]$ is
eventually constant, we have the result.
\end{proof}
We shall also
require the following asymptotic formula for $\Lambda(G)$-ranks
(cf.\ \cite[Theorem 2.22]{Ho}, or \cite[Theorem 1.10]{Ha} for
``adequate" $G$). We say that a sequence $(a_n)_{n \ge 1}$ of
integers is (or ``equals'') $O(q^n)$ for some nonnegative integer
$q$ if $0 \le a_n \le Cq^n$ for some constant $C$ for all
sufficiently large $n$.
\begin{lemma}[Howson] \label{susan}
Let $G$ be a pro-$p$ $p$-adic Lie group containing no
elements of order $p$, and
let $M$ be a finitely generated $\Lambda(G)$-module.
Choose a sequence $G_n$ as in Lemma \ref{well-known}.
Then $\rk_{\Lambda(G)} M=r$ if and only if
$$
\rk_{{\bf Z}_p} M_{G_n}=r[G:G_n]+O(p^{n(d-1)}),
$$
where $d = \dim G$.
\end{lemma}
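As a simple check, if $M = \Lambda(G)$ itself, then $M_{G_n} \cong {\bf Z}_p[G/G_n]$
has ${\bf Z}_p$-rank $[G:G_n]$, in accordance with $\rk_{\Lambda(G)} \Lambda(G) = 1$.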
Finally, we move away from the purely module-theoretic setting to
prove the following basically well-known consequence of Nakayama's
Lemma that was used in the introduction.
\begin{lemma} \label{torsion}
Let $L/F$ be an admissible $p$-adic Lie extension with Galois
group $\mc{G}$, and set $G = \Gal(L/K)$. Then
$X_{L,T}$ is a finitely generated
$\Lambda(\mc{G})$-module.
Furthermore, if $L/K$ is strongly admissible and $\mu(X_{K,T}) = 0$,
then $X_{L,T}$ is finitely generated as a $\Lambda(G)$-module.
\end{lemma}
\begin{proof}
In the first part,
we may assume that $G$ is pro-$p$
by passing to an open subgroup of $\mc{G}$.
Note that $X_{K,T}$ is a finitely generated $\Lambda$-module.
Furthermore, the kernel of $(X_{L,T})_G \to X_{K,T}$ is a finitely
generated ${\bf Z}_p$-module,
since this kernel
maps to the direct sum of the decomposition groups in $G$ at the ramified primes in $L/K$ and the primes in $T_K$,
with kernel a quotient of $H_2(G,{\bf Z}_p)$.
Therefore, $(X_{L,T})_G$ is a finitely generated $\Lambda$-module,
and it follows from Nakayama's Lemma (as in \cite{bh})
that $X_{L,T}$ is a finitely
generated $\Lambda(\mc{G})$-module. If we know that
$\mu(X_{K,T}) = 0$ as well, then we see that $(X_{L,T})_G$
is finitely generated over ${\bf Z}_p$, and we conclude that
$X_{L,T}$ is finitely generated over $\Lambda(G)$.
\end{proof}
\section{Strongly admissible extensions} \label{general}
In this section, we shall prove our result on the behavior of
inverse limits of minus parts of class groups for
strongly admissible $p$-adic Lie
extensions of CM-fields
using our extension of Kida's formula (Theorem \ref{kida2}). That
is, we will prove the following theorem, which includes Theorem
\ref{main} (noting Lemma \ref{ven-p-null}).
\begin{theorem} \label{second}
Let $L/F$ be a strongly admissible $p$-adic Lie extension of
CM-fields, for an odd prime $p$.
Set $G = \Gal(L/K)$.
Let $T$ be a finite set of primes of $F^+$, and let $Q_{L/K}^T$
be as in \eqref{primeset}.
Assume that $\mu(X_{K,T}^-) = 0$.
Then
$X_{L,T}^-$ is finitely generated over $\Lambda(G)$, and
we have
$$
\rk_{\Lambda(G)} X_{L,T}^- =
\lambda(X_{K,T}^-)-\delta+|Q_{L/K}^T|.
$$
\end{theorem}
\begin{remark}
Let $X_K$ be as in the introduction, and let $Y_K$ be the
maximal quotient of $X_K$ in which all primes above $p$
split completely. Since $\lambda(X_K^-)-\lambda(Y_K^-)$ equals
the number of primes of $K^+$
above $p$ which split in $K$ (cf.\ \cite[Proposition 11.4.6]{nsw}),
Theorem \ref{second} implies that
$\rk_{\Lambda(G)} X_L^- - \rk_{\Lambda(G)} Y_L^-$
is the number of primes of $K^+$ above $p$ that split
completely in $L/K^+$.
In particular, if no prime above $p$
splits completely in $L/K^+$, then the kernel of the natural
surjection from $X_L^-$ to $Y_L^-$ is pseudo-null over $\Lambda(\mc{G})$,
with $\mc{G} = \Gal(L/F)$.
This is compatible with \cite[Theorem 4.9]{Ve3}.
On the other hand, $X_L^-$ is not pseudo-isomorphic to
$Y_L^-$ if there exists a prime above $p$
which splits completely in $L/K^+$.
\end{remark}
Let us work in the setting and with the notation of Theorem
\ref{second}. Let $G = \Gal(L/K)$. We set $d = \dim G$, and for
any prime $v$ of $K$ (or $K^+$), we let $d_v$ denote the dimension
of a decomposition group $G_v$ of $G$ at a prime above $v$. Let
$G_n$ be a sequence of open normal subgroups of $G$ chosen as in
Lemma \ref{well-known}. We then let $g_{n,v}$ denote the index of
the
image of $G_v$ in the group $G/G_n$.
\begin{lemma} \label{small}
Let $v$ be a prime of $K$ which does not split completely
in $L$. Then $g_{n,v}$ is $O(p^{n(d-1)})$.
\end{lemma}
\begin{proof}
Let $L_n$ be the subextension in $L/K$ corresponding
to $G_n$, a finite Galois $p$-extension of $K$.
Let $C$ be the constant in Lemma \ref{well-known}
such that $[G:G_n]=Cp^{nd}$
for all sufficiently large $n$.
Let $G_v$ and $G_{n,v}$ denote the decomposition groups of
$G$ and $G_n$, respectively, at a fixed prime of $L$ above
$v$. Then, by Lemma \ref{well-known},
there also exists a rational number $C_v$ such that
$[G_v:G_{n,v}]=C_v p^{nd_v}$
for all sufficiently large $n$.
Since $d_v \ge 1$ by assumption on $v$, we have
$$
g_{n,v}=[G/G_n:G_v/G_{n,v}]=(C/C_v)p^{n(d-d_v)}
=O(p^{n(d-1)}).
$$
\end{proof}
We are now ready to prove Theorem \ref{second}.
\begin{proof}[Proof of Theorem \ref{second}]
Again, let $L_n$ be the subextension in $L/K$ corresponding
to $G_n$. We remark that $Q_{L_n/K}^T=Q_{L/K}^T$
if $n$ is sufficiently large.
By Lemma \ref{small}, $g_{n,v}$ is $O(p^{n(d-1)})$ for
$v \in Q_{L/K}^T$, so Theorem \ref{kida2} yields
\begin{equation}\label{estimate}
\lambda(X_{L_n,T}^-)=
(\lambda(X_{K,T}^-)-\delta+|Q_{L/K}^T|)[L_n:K]+O(p^{n(d-1)})
\end{equation}
for all sufficiently large $n$.
We let $S_E$ denote the set of primes of an algebraic extension
$E/K$ consisting of all primes above $p$ and
the primes which ramify in $L/K$. We often abbreviate
$S_E$ simply by $S$.
Let $L_{n,w}$ denote the completion of $L_n$ at a given prime
$w$, and let $I_{L_{n,w}}$ denote the inertia group of
the absolute Galois group $G_{L_{n,w}}$.
We set
$$
\mc{H}_{L_n,T} = \bigoplus_{\substack{w \in S_{L_n}\\w \in T_{L_n}}}
H^1(G_{L_{n,w}},{\bf Q}_p/{\bf Z}_p) \oplus
\bigoplus_{\substack{w \in S_{L_n}\\w \notin T_{L_n}}}
H^1(I_{L_{n,w}},{\bf Q}_p/{\bf Z}_p)
$$
and take $\mc{H}_{L,T} = \varinjlim \mc{H}_{L_n,T}$.
Consider the following commutative diagram.
$$ \SelectTips{cm}{}
\xymatrix{
0 \ar[r] & \Hom({X_{L_n,T}^-},{\bf Q}_p/{\bf Z}_p) \ar[r] \ar[d] &
H^1(G_{L_n,S},{\bf Q}_p/{\bf Z}_p)^-
\ar[r] \ar[d]^{\beta_n^-} & \mc{H}_{L_n,T}^- \ar[d]^{\rho_n^-}\\
0 \ar[r] & \Hom({X_{L,T}^-},{\bf Q}_p/{\bf Z}_p)^{G_n} \ar[r] &
(H^1(G_{L,S},{\bf Q}_p/{\bf Z}_p)^-)^{G_n}
\ar[r] & (\mc{H}_{L,T}^-)^{G_n}.}
$$
Applying the Hochschild-Serre spectral sequence, we
see that
$$
\ker(\beta_n^-)\cong H^1(G_n,{\bf Q}_p/{\bf Z}_p)^-
$$
and
$\coker(\beta_n^-)$ injects into $H^2(G_n,{\bf Q}_p/{\bf Z}_p)^-$.
By \cite[Lemma 2.5.1]{Ha1},
the ${\bf Z}_p$-coranks of both of these cohomology groups are bounded by
a constant $C$, which depends only on $\dim G$, as $n$ increases.
Applying the snake lemma (with
$\mc{H}_{L_n,T}^-$ replaced by the image of
$H^1(G_{L_n,S},{\bf Q}_p/{\bf Z}_p)^-$ in it),
we have
\begin{equation}\label{b}
\lambda(X_{L_n,T}^-)-C \leq \rk_{{\bf Z}_p}
(X_{L,T}^-)_{G_n}
\leq \lambda(X_{L_n,T}^-)+\cork_{{\bf Z}_p} (\ker(\rho_n^-))+C,
\end{equation}
for all sufficiently large $n$.
We remark that
\begin{equation*}
\ker(\rho_n^-)\cong \bigl( \bigoplus_{\substack{w \in S_{L_n} \\
w \notin T_{L_n}}}
H^1(I_{n,w},{\bf Q}_p/{\bf Z}_p) \oplus \bigoplus_{\substack{w \in S_{L_n} \\ w \in T_{L_n}}}
H^1(G_{n,w},{\bf Q}_p/{\bf Z}_p) \bigr)^-,
\end{equation*}
where $I_{n,w}$ denotes the inertia group of $G_n$ at a prime
above $w$ (and $G_{n,w}$ is the decomposition group, as
before).
The latter expression breaks up as the direct sum of minus-parts
of cohomology groups over elements of conjugacy classes of
primes in $S_{L_n}$ under complex conjugation. If $w \in S_{L_n}$
is self-conjugate (or if $w$ splits completely in $L/L_n$),
then $H^1(I_{n,w},{\bf Q}_p/{\bf Z}_p)^-$ and $H^1(G_{n,w},{\bf Q}_p/{\bf Z}_p)^-$ are trivial.
If $w$ is complex conjugate to a distinct prime
$\bar{w}$, then
$$
( H^1(I_{n,w},{\bf Q}_p/{\bf Z}_p) \oplus H^1(I_{n,\bar{w}},{\bf Q}_p/{\bf Z}_p) )^-
\cong H^1(I_{n,w},{\bf Q}_p/{\bf Z}_p),
$$
and similarly with $I_{n,w}$ replaced by $G_{n,w}$.
We conclude that
\begin{multline}\label{c}
\cork_{{\bf Z}_p}(\ker(\rho_n^-)) = \sum_{w \in Q_{L/L_n}^S - Q_{L/L_n}^T}
\cork_{{\bf Z}_p}H^1(I_{n,w},{\bf Q}_p/{\bf Z}_p) + \\ \sum_{w \in Q_{L/L_n}^T}
\cork_{{\bf Z}_p}H^1(G_{n,w},{\bf Q}_p/{\bf Z}_p).
\end{multline}
The ${\bf Z}_p$-corank of $H^1(G_{n,w},{\bf Q}_p/{\bf Z}_p)$ is less than or equal to
the
${\bf F}_p$-dimension of $H^1(G_{n,w},{\bf Z}/p{\bf Z})$, which is
eventually constant by Lemma \ref{well-known}, equal to the dimension of
$G_w$.
Similarly, Lemma \ref{well-known} implies that
$\cork_{{\bf Z}_p} H^1(I_{n,w},{\bf Q}_p/{\bf Z}_p)$
is eventually less than or
equal to the dimension of the inertia group in
$G_w$, which is at most that of $G_w$.
Noting that, for any $T$ and any $w \mid v$, we have $v\in Q_{L/K}^T$
if and only if $w\in Q_{L/L_n}^T$
(because $G$ has no elements of finite order),
equation \eqref{c} implies that
\begin{equation}\label{d}
\cork_{{\bf Z}_p}(\ker(\rho_n^-)) \le \sum_{v \in Q_{L/K}^S} d_v g_{n,v}.
\end{equation}
Since Lemma \ref{small} implies that
$g_{n,v}$ is $O(p^{n(d-1)})$ for each
$v \in Q_{L/K}^S$,
equations \eqref{b}
and \eqref{d} yield
that
\begin{equation} \label{final}
\rk_{{\bf Z}_p}((X_{L,T}^-)_{G_n}) = \rk_{{\bf Z}_p} X_{L_n,T}^-
+ O(p^{n(d-1)}) \pm O(1).
\end{equation}
As in Lemma \ref{torsion}, the fact that $\mu(X_{K,T}^-) =
0$ implies that $X_{L,T}^-$ is finitely generated over
$\Lambda(G)$.
The result on ranks now follows from
the fact that $d \ge 2$, equations \eqref{estimate} and
\eqref{final}, and Lemma \ref{susan}.
\end{proof}
\section{Examples and Remarks} \label{examples}
We conclude this article with several remarks and examples of the
application of Theorem \ref{second}. We begin by giving three
examples of the application of Theorem \ref{second} to finding $L$
for which $X_L$ is not pseudo-null.
\begin{example} \label{basicex}
Let $F = {\bf Q}(\mu_p)$ and $K = {\bf Q}(\mu_{p^{\infty}})$ for an odd prime
$p$. Let $\mf{X}_K$ be the Galois group of the maximal abelian pro-$p$
extension of $K$ unramified outside $p$. Kummer
duality induces a pseudo-isomorphism
$$
\mf{X}_K^+ \to \Hom_{{\bf Z}_p}(X_K^-,{\bf Z}_p(1))
$$
of torsion $\Lambda$-modules with no ${\bf Z}_p$-torsion (see
\cite[Corollary 11.4.4]{nsw}).
In particular, we have $\lambda(\mf{X}_K^+) = \lambda(X_K^-)$.
Thus, there exist (at least) $\lambda(X_K^-)$ distinct
${\bf Z}_p$-extensions $L$ of $K$ which are CM, unramified outside $p$, and
Galois over ${\bf Q}$. For each of these extensions, Theorem
\ref{second}
implies that $\rk_{\Lambda(G)} X_L^-
= \lambda(X_K^-)-1$. In particular, all such
$X_L^-$ are not pseudo-null if $\lambda(X_K^-) \ge 2$.
\end{example}
\begin{example}
Let $F = {\bf Q}(\mu_p)$ and $K={\bf Q}(\mu_{p^\infty})$ for an odd prime
$p$. We demonstrate how one may construct a ${\bf Z}_p$-extension $L$
of $K$ with $L/F$ Galois such that $X_L^-$ has any sufficiently
large $\Lambda(G)$-rank, for $G = \Gal(L/K)$.
Let $\Pi$ be the set of all rational prime numbers which are
completely decomposed in $F$ but inert in ${\bf Q}(\mu_{p^2})/F$. By
the \v{C}ebotarev density theorem, $\Pi$ is an infinite set. Let
$T^+$ denote a finite set of primes of $K^+$ lying above primes in
$\Pi$. Let $S$ be the set of all primes of $K$ above $p$ or a
prime in $T^+$. From
\cite[p.\ 276]{Iw}
we have
\begin{equation}\label{exam}
0\rightarrow \varinjlim_n X_{K_n}^- \rightarrow H^1(G_{K,S},
\mu_{p^\infty})^- \rightarrow \bigoplus_{v\in T^+}
{\bf Q}_p/{\bf Z}_p \rightarrow 0.
\end{equation}
Furthermore, the summand in the third term which corresponds to
$v$ is canonically the cohomology group of the inertia subgroup at
$v \in T^+$ in $\mf{X}_{K,S}^+$, where $\mf{X}_{K,S}$ is the
Galois group of the maximal abelian pro-$p$ extension of $K$
unramified outside $S$.
Since $v$ splits in $K/K^+$ and lies over an inert prime of $F^+$,
equation \eqref{exam} therefore implies that $\mf{X}_{K,S}^+$
contains a $\Lambda$-submodule isomorphic to
${\bf Z}_p(1)^{\oplus|T^+|}$, with each ${\bf Z}_p(1)$-summand the inertia
group at some $v \in T^+$. Thus, there exists a quotient of
$\mf{X}_{K,S}^+$ by a $\Lambda(\Gamma)$-submodule which defines a
${\bf Z}_p$-extension $L$ that is Galois over $F$, abelian over $K^+$
(hence $L$ is CM), ramified at all primes in $T^+$, and unramified
outside $S$. For this $L$, Theorem \ref{second} yields that
$$
\rk_{\Lambda(G)}X_L^-=\lambda(X_K^-)+|T^+|-1.
$$
Note that $|T^+|$ can be taken to be arbitrarily large.
\end{example}
\begin{example}
In \cite{Ra}, R.\ Ramakrishna constructs a totally real field $L'$
which is Galois over ${\bf Q}$ with Galois group isomorphic to
$PSL_2({\bf Z}_3)$ and which is ramified only at $3$ and $349$. Then,
letting $L = L'{\bf Q}(\mu_{3^\infty})$, the Galois group of $L/{\bf Q}$ is
a $3$-adic Lie group of dimension four. It is possible to choose
a number field $F$ contained in $L$ such that $L/F$ is a strongly
admissible $3$-adic Lie extension. Let $K$ denote the cyclotomic
${\bf Z}_3$-extension of $F$. Applying Theorem \ref{second} to the
extension $L/F$, we see that if $\mu(X_K^-) = 0$, then $X_L^-$ is
not pseudo-null. However, we do not know that $\mu(X_K^-) = 0$
for this $K$, as $F$ cannot be taken to be a $3$-extension of an
abelian extension of ${\bf Q}$.
\end{example}
\begin{remark}
Removing the assumption that $L$ contains the cyclotomic ${\bf Z}_p$-extension
of $K$, Greenberg has recently constructed
nonabelian $p$-adic Lie extensions $L/F$ for which
$\mc{G}=\Gal(L/F)$ is isomorphic to, for instance,
an open subgroup of $PGL_2({\bf Z}_p)$ and
$X_L$ has nontrivial $\mu$-invariant as a $\Lambda(\mc{G})$-module
(unpublished).
\end{remark}
\begin{remark}
There is an analogous theory for elliptic curves.
Let $E$ be an elliptic curve over a number field $F$, and let $K$ be
the cyclotomic ${\bf Z}_p$-extension of $F$.
In ``classical'' Iwasawa theory for the elliptic curve $E$,
one studies the Pontryagin dual $\mathrm{Sel}_{p^\infty}(E/K)^\vee$
of the Selmer group of $E$ over $K$.
An analogue of Kida's formula for such
Selmer groups is given in \cite{HM} under the assumption that $E$ has good
ordinary reduction at $p$.
One can give a formula for the $\Lambda(G)$-rank of
$\mathrm{Sel}_{p^\infty}(E/L)^\vee$ for a pro-$p$ $p$-adic Lie
extension of $F$ containing $K$ in a similar manner to that of
Theorem \ref{second},
using the same method of proof and a Kida-type formula as above.
In fact, such a formula has already been given in some special
cases: see \cite[Corollary 6.10]{CH} and \cite[Theorem 2.8]{Ho2}
for the case that $L={\bf Q}(E_{p^\infty})$ and \cite[Theorem 3.1]{HV}
for the case in which the dimension of $\Gal(L/F)$ is $2$. In
general, the formula, which is due to the first author, is as
follows.
We remark that the same formula is also obtained in \cite{bha}
independently.
\end{remark}
\begin{theorem}
Let $L/F$ be a strongly admissible $p$-adic Lie extension. Let
$E$ be an elliptic curve defined over $F$ that has good ordinary
reduction at $p$. Let $M_0(L/K)$ be the set of primes $v$ of $K$
not lying above $p$ and ramified in $L/K$, and set
\begin{eqnarray*}
M_1(L/K) &=&\{v\in M_0(L/K): E
\mr{\ has\ split\ multiplicative\ reduction\ at\ } v \} \\
M_2(L/K) &=& \{ v\in M_0(L/K) : E \mr{\ has\ good\ reduction\ at\ } v \mr{\ and\ }
E(K_v)[p]\ne 0\}.
\end{eqnarray*}
Assume that $\mathrm{Sel}_{p^\infty}(E/K)^\vee$ is finitely
generated over ${\bf Z}_p$. Then $\mathrm{Sel}_{p^\infty}(E/L)^\vee$ is
finitely generated over $\Lambda(G)$, and
$$
\rk_{\Lambda(G)} \mathrm{Sel}_{p^\infty}(E/L)^\vee =\rk_{{\bf Z}_p}
\mathrm{Sel}_{p^\infty}(E/K)^\vee+ |M_1(L/K)|+ 2|M_2(L/K)|.
$$
\end{theorem}
\appendix
\section{Iwasawa Modules in Procyclic Extensions\\by Romyar T. Sharifi}
\label{special}
In this appendix, we shall derive two types of exact sequences which describe
the behavior of Iwasawa modules in cyclic $p$-extensions $L/K$
such that $L$ is Galois over a number field $F$. These sequences
and the proof given here are related to a $6$-term exact sequence
of Iwasawa and its method of proof in \cite{iwasawa}, though
derived independently. By focusing on the case of cyclic
extensions of number fields, we are able to obtain a finer result
than the sequence of Iwasawa, which dealt with general Galois
extensions. (We also remark that our sequences bear a relationship with the classical ambiguous class number formula, as in \cite[Lemma 13.4.1]{lang}.)
We take inverse limits to obtain related sequences for Iwasawa modules in the
general (pro)cyclic case.
We must first introduce a considerable amount of notation. For
now, let $K$ be a Galois extension of a number field $F$. In this
section, we allow $p$ to be any prime number. Let $L$ be a cyclic
$p$-extension of $K$ which is Galois over $F$. Set $G =
\Gal(L/K)$, $\mc{G} = \Gal(L/F)$, and $H = \Gal(K/F)$.
Let $T$ be any finite set of primes of $F$ which includes its real
places. For any algebraic extension $E$ of $F$, let $T_E$ denote
the set of primes of $E$ lying above those in $T$. Let
$\mc{U}_{E,T}$ denote the inverse limit of the $p$-completions of
the $T$-unit groups of the finite subextensions of $F$ in $E$. Let
$X_{E,T}$ denote the maximal unramified abelian pro-$p$ extension
of $E$ in which all primes in $T_E$ split completely. Let
$\phi_{L/K}^T \colon X_{K,T} \to X_{L,T}^G$ denote the natural
map. Let $\hat{H}^i(M)$ denote the $i$th Tate cohomology group for
the group $G$ and a ${\bf Z}[G]$-module $M$.
For each prime $v$ of $K$, let $G_v$ denote the decomposition
group at any prime above $v$ in $G$, and let $I_v$ denote the
inertia group. (If $v$ is a real prime, then
$I_v = G_v$
has order $1$ or $2$.) We can consider the map $\Sigma_{L/K}^T$
given by the inverse limit via restriction maps of the sum of
inclusion maps
$$
\Sigma_{L'/K'}^T \colon \bigoplus_{u \in T_{K'}} G'_u \oplus
\bigoplus_{u \notin T_{K'}} I'_u \to G',
$$
over number fields $L'$ containing $F$ inside $L$, with $K' = L'
\cap K$, such that $G'_u$ is the decomposition group of
$G'=\Gal(L'/K')$ at $u$ and $I'_u$ is the inertia group.
Similarly, letting $I_{L',T}$ (resp., $I_{K',T}$) denote the
$T$-ideal class group of $L'$ (resp., $K'$) and letting $P_{K',T}$
denote the group of principal $T$-ideals of $K'$, we set
$$
\mf{I}_{L/K}^T = \lim_{\leftarrow}\,
((I_{L',T}^{G'}/I_{K',T}) \otimes_{{\bf Z}} {\bf Z}_p)
$$
and
$$ \mf{E}_{L/K}^T = \lim_{\leftarrow}\,
((I_{L',T}^{G'}/(P_{K',T} \cdot N_{G'}I_{L',T})) \otimes_{{\bf Z}}
{\bf Z}_p),
$$
where $N_{G'}$ denotes the norm element in ${\bf Z}_p[G']$ and the
inverse limits are taken with respect to norm maps.
Finally, given $r \in {\bf Z}$ and an exact sequence of groups
$$
\Phi \colon\ \ \ldots \to A_i \to A_{i+1} \to \ldots
$$
with a distinguished term $A_0$, let
$$
\Phi[r] \colon\ \ \ldots \to B_i \to B_{i+1} \to \ldots
$$
denote the exact sequence with distinguished term $B_0$ and $B_i =
A_{i-r}$.
We then have the following theorem.
\begin{theorem} \label{comparison}
Let $F$ be a number field, $K/F$ a Galois extension, and
$L/K$ a cyclic $p$-extension with $L/F$ Galois. Set
$G = \Gal(L/K)$ and $H = \Gal(K/F)$. Let $T$ be a finite set
of primes of $F$. Then we have canonical exact sequences of
$\Lambda(H)$-modules:
\begin{multline*}
\Gamma_{L/K}^T:\ \
0 \to (\ker \phi_{L/K}^T) \otimes_{{\bf Z}_p} G \to
\hat{H}^{-1}(\mc{U}_{L,T}) \to
\mf{I}_{L/K}^T \otimes_{{\bf Z}_p} G
\to (\coker \phi_{L/K}^T) \otimes_{{\bf Z}_p} G
\to \\ \hat{H}^0(\mc{U}_{L,T}) \to
\ker \Sigma_{L/K}^T \to (X_{L,T})_G \to
X_{K,T} \to \coker \Sigma_{L/K}^T \to 0
\end{multline*}
and
\begin{multline*}
\Psi_{L/K}^T:\ \
\ldots \to \hat{H}^{-1}(\mc{U}_{L,T}) \to \mf{E}_{L/K}^T \otimes_{{\bf Z}_p} G
\to \hat{H}^0(X_{L,T}) \otimes_{{\bf Z}_p} G \to \\ \hat{H}^0(\mc{U}_{L,T}) \to
\ker \Sigma_{L/K}^T \to \hat{H}^{-1}(X_{L,T}) \to \ldots
\end{multline*}
with $\Psi_{L/K}^T[6] = \Psi_{L/K}^T \otimes_{{\bf Z}_p} G$.
\end{theorem}
\begin{proof}
We will leave out subscript and superscript $T$'s throughout
this proof, for compactness.
For $E$ a finite extension of $F$, we let $\mc{O}_{E}$ denote
the ring of $T$-integers of $E$, let $I_{E}$ denote the group of
fractional ideals of $\mc{O}_{E}$, let $P_{E}$ denote the
subgroup of principal fractional ideals, and let $\mathrm{Cl}_{E}$
denote the class group of $\mc{O}_{E}$.
We begin by proving the theorem in the case that $K$ and $L$ are both
number fields.
Consider the commutative diagram
of exact sequences:
\begin{equation} \label{firstseq} \SelectTips{cm}{}
\xymatrix@R=10pt@C=5pt{
&&&& 0 \ar[rd] &&0 \\
&&&&& P_E \ar[ur] \ar[rd] \\
0 \ar[rr] && \UT{E} \ar[rr] && E^{\times} \ar[rr] \ar[ur] &&
I_E \ar[rr] && \mathrm{Cl}_E \ar[rr] && 0.
}
\end{equation}
We obtain from \eqref{firstseq} in the case $E = L$ two long exact
sequences in Tate cohomology
\begin{equation} \label{twoseqs} \small \SelectTips{cm}{}
\xymatrix@C=6pt{
& 0 \ar[r] & \hat{H}^{2i-1}(P_{L}) \ar[r] \ar@{=}[d] &
\hat{H}^{2i}(\UT{L}) \ar[r] & \hat{H}^{2i}(L^{\times}) \ar[r] &
\hat{H}^{2i}(P_{L}) \ar[r] \ar@{=}[d] & \hat{H}^{2i+1}(\UT{L}) \ar[r]
& 0 \\
\ldots \ar[r] & \hat{H}^{2i-2}(\mathrm{Cl}_L) \ar[r] &
\hat{H}^{2i-1}(P_{L}) \ar[r] &
0 \ar[r] & \hat{H}^{2i-1}(\mathrm{Cl}_L) \ar[r] &
\hat{H}^{2i}(P_{L}) \ar[r] & \hat{H}^{2i}(I_{L}) \ar[r] &
\ldots,}
\end{equation}
where we have used that $\hat{H}^{2i-1}(L^{\times}) =
\hat{H}^{2i-1}(I_L) = 0$.
Chasing the diagram \eqref{twoseqs}, we obtain an exact
sequence:
\begin{multline} \label{longseq}
\ldots \to \hat{H}^{2i-2}(\mathrm{Cl}_{L}) \to
\hat{H}^{2i}(\UT{L}) \to
\ker(\hat{H}^{2i}(L^{\times}) \to \hat{H}^{2i}(I_{L})) \to \\
\hat{H}^{2i-1}(\mathrm{Cl}_{L}) \to \hat{H}^{2i+1}(\UT{L})
\to \coker(\hat{H}^{2i}(L^{\times}) \to \hat{H}^{2i}(I_{L})) \to \ldots.
\end{multline}
Using \eqref{firstseq} again, we have a commutative diagram
\begin{equation*} \label{seconddiag} \SelectTips{cm}{}
\xymatrix@!C=24pt{
0 \ar[r] & \UT{K} \ar[r] \ar@{=}[d] & K^{\times} \ar[r] \ar@{=}[d] &
P_K \ar[r] \ar[d] & 0 \\
0 \ar[r] & \UT{K} \ar[r] & K^{\times} \ar[r] & P_L^G \ar[r] &
\hat{H}^1(\UT{L}) \ar[r] & 0,
}
\end{equation*}
and it provides an isomorphism $\hat{H}^1(\UT{L}) \cong P_L^G/P_K$.
Noting this and applying the snake lemma to the commutative diagram
\begin{equation*} \label{firstdiag} \SelectTips{cm}{}
\xymatrix@!C=24pt{
0 \ar[r] & P_K \ar[r] \ar[d] & I_K \ar[r] \ar[d] &
\mathrm{Cl}_K \ar[r] \ar[d]^{\phi_{L/K}} & 0 \\
0 \ar[r] & P_L^G \ar[r] & I_L^G \ar[r] & \mathrm{Cl}_L^G \ar[r] &
\hat{H}^1(P_L) \ar[r] & 0,
}
\end{equation*}
we obtain an exact sequence
\begin{equation} \label{secondseq}
0 \to \ker(\mathrm{Cl}_K \xrightarrow{\phi_{L/K}} \mathrm{Cl}_L) \to \hat{H}^1(\UT{L}) \to
I_L^G/I_K \to \mathrm{Cl}_L^G/\phi_{L/K}(\mathrm{Cl}_K) \to \hat{H}^1(P_L) \to 0.
\end{equation}
One next checks easily that the map
$\hat{H}^{-1}(\mathrm{Cl}_L) \to \hat{H}^1(\mc{O}_L^\times)$ in \eqref{longseq}
has image contained in $\ker(\phi_{L/K})$ via the map in
\eqref{secondseq} and that the resulting map $\hat{H}^{-1}(\mathrm{Cl}_L) \to \mathrm{Cl}_K$
is induced by the norm map $(\mathrm{Cl}_L)_G \to \mathrm{Cl}_K$. Furthermore, since
the kernel of the norm map is contained in $\hat{H}^{-1}(\mathrm{Cl}_L)$, we have
an exact sequence
\begin{equation} \label{midway}
\ldots \to \hat{H}^{-2}(\mathrm{Cl}_L) \to
\hat{H}^0(\UT{L}) \to \ker(\hat{H}^0(L^{\times}) \to
\hat{H}^0(I_L)) \to (\mathrm{Cl}_L)_G \to \mathrm{Cl}_K.
\end{equation}
Next, we attach \eqref{secondseq} to the left of
\eqref{midway} via the map $\hat{H}^{-1}(P_L) \to
\hat{H}^0(\UT{L})$ in \eqref{twoseqs}, obtaining
\begin{multline} \label{almost} \SelectTips{cm}{}
0 \to \ker \phi_{L/K} \otimes G \to
\hat{H}^{-1}(\UT{L}) \to I_L^G/\phi(I_K) \otimes G \to
\coker \phi_{L/K} \otimes G \to \\
\hat{H}^0(\UT{L}) \to \ker(\hat{H}^0(L^{\times}) \to
\hat{H}^0(I_L)) \to (\mathrm{Cl}_L)_G \to \mathrm{Cl}_K.
\end{multline}
Now, we must study the map
$\hat{H}^0(L^{\times}) \to \hat{H}^0(I_L)$. By class field theory,
we have an exact sequence
$$
0 \to \hat{H}^2(L^{\times}) \to
\hat{H}^2\bigl(\bigoplus_w L_w^{\times}\bigr) \to
\frac{1}{[L:K]}{\bf Z}/{\bf Z},
$$
in which $L_w$ denotes the completion of $L$ at a prime $w$.
This yields a sequence that fits into a commutative diagram
\begin{equation} \label{anotherdiag} \SelectTips{cm}{}
\xymatrix{
0 \ar[r] & \hat{H}^0(L^{\times}) \ar[d] \ar[r] &
\hat{H}^0(\bigoplus_w L_w^{\times})
\ar[r] \ar@{->>}[d] & G \\
& \hat{H}^0(I_L) & \hat{H}^0(\bigoplus_{w \notin T_L} L_w^{\times})
\ar[l] &
\hat{H}^0(\bigoplus_{w \notin T_L} U_w^{\times}) \ar[l] & 0 \ar[l],
}
\end{equation}
where $U_w$ denotes the unit group of $L_w$.
The local reciprocity maps provide canonical isomorphisms
\begin{eqnarray*}
\hat{H}^0\bigl(\bigoplus_{w \mid v} L_w^{\times}\bigr) \cong
G_v & \mr{and} &
\hat{H}^0\bigl(\bigoplus_{w \mid v} U_w^{\times}\bigr) \cong I_v
\end{eqnarray*}
for any prime $v$ of $K$.
We also have a non-canonical isomorphism
\begin{equation} \label{idealcohom}
\hat{H}^0(I_L) \cong \bigoplus_{v \notin T_K} G_v.
\end{equation}
Making the resulting replacements in
\eqref{anotherdiag}, we have a commutative diagram
$$ \SelectTips{cm}{}
\xymatrix{
0 \ar[r] & \hat{H}^0(L^{\times}) \ar[d] \ar[r] &
\bigoplus_v G_v
\ar[r] \ar@{->>}[d] & G \\
& \bigoplus_{v \notin T_K} G_v & \bigoplus_{v \notin T_K} G_v
\ar_{\oplus_v |I_v|}[l] & \bigoplus_{v \notin T_K} I_v \ar[l] & 0, \ar[l]
}
$$
which implies that we have a canonical isomorphism
\begin{equation} \label{kerident}
\ker(\hat{H}^0(L^{\times}) \to \hat{H}^0(I_L)) \cong \ker \Sigma_{L/K}.
\end{equation}
Furthermore, we remark that
\begin{equation} \label{cokerident}
\coker( \hat{H}^0(L^{\times}) \to \hat{H}^0(I_L) ) \cong
I_L^G/(P_K \cdot N_G I_L).
\end{equation}
Plugging \eqref{kerident}
into \eqref{almost} and noting that
$$
\coker(\mathrm{Cl}_L \to \mathrm{Cl}_K) \cong \coker \Sigma_{L/K},
$$
we obtain an exact sequence with the desired $p$-part $\Gamma_{L/K}^T$.
Similarly, plugging
\eqref{kerident} and \eqref{cokerident}
into \eqref{longseq}, we obtain the sequence $\Psi_{L/K}^T$.
Note that there are natural maps of exact sequences, $\Gamma_{L'/K'}^T \to
\Gamma_{L/K}^T$, for finite extensions $L'/L$ and $K'/K$ such that
$L'/K'$ is cyclic,
which are given by norm maps from $L'$ to $L$
on the first, second, third, fourth, and seventh terms,
norm maps from $K'$ to $K$ on the fifth and eighth terms,
and the maps induced by restriction of Galois groups on the
remaining two terms.
The sequence $\Gamma_{L/K}^T$
in the
case in which $L$ is not a number field
now follows by taking the inverse limit of
the sequences $\Gamma_{L'/L' \cap K}^T$, with $L'$ a number field
contained in $L$.
Since $T$ is assumed to be finite,
all terms at the finite level are finite, and therefore, the sequences
remain exact in the inverse limit.
Similarly, we
have maps $\Psi_{L'/K'}^T \to \Psi_{L/K}^T$, and the inverse
limit yields the desired sequence in the general case.
\end{proof}
Taking inverse limits, one easily obtains the following corollary
for ${\bf Z}_p$-extensions $L/K$ with Galois group $G$. Here, we let
$N_{L/K}^T$ denote the obvious map $(\mc{U}_{L,T})_G \to
\mc{U}_{K,T}$ induced by the inverse limit of norm maps.
\begin{corollary} \label{zpext}
Let $F$ be a number field, $K/F$ a Galois extension, and
$L/K$ a ${\bf Z}_p$-extension with $L/F$ Galois. Set
$G = \Gal(L/K)$ and $H = \Gal(K/F)$. Let $T$ be a finite set
of primes of $F$. Then we have canonical exact sequences of
$\Lambda(H)$-modules:
\begin{multline*}
\Gamma_{L/K}^T:\ \
0 \to \ker N_{L/K}^T \to
\mf{I}_{L/K}^T \otimes_{{\bf Z}_p} G
\to X_{L,T}^G \otimes_{{\bf Z}_p} G
\to \coker N_{L/K}^T \to \\
\ker \Sigma_{L/K}^T \to (X_{L,T})_G \to
X_{K,T} \to \coker \Sigma_{L/K}^T \to 0
\end{multline*}
and
\begin{multline*}
\Psi_{L/K}^T:\ \ \ldots
\to \ker N_{L/K}^T \to \mf{E}_{L/K}^T \otimes_{{\bf Z}_p} G
\to X_{L,T}^G \otimes_{{\bf Z}_p} G \to \\
\coker N_{L/K}^T \to \ker \Sigma_{L/K}^T \to (X_{L,T})_G \to
\ldots
\end{multline*}
with $\Psi_{L/K}^T[6] \cong \Psi_{L/K}^T \otimes_{{\bf Z}_p} G$.
\end{corollary}
Corollary \ref{zpext} can be used to give a simple proof of
Theorem \ref{second} for ${\bf Z}_p$-extensions which includes the case
of $p = 2$, though we do not include it here. There are numerous
remarks to be made.
\begin{remarks}
\ \\
\begin{enumerate}
\item[1.] Let $R_{L/K}$ denote the
set of finite primes $v$ of $K$ with $v \notin T_K$ and such that
the completion $L_w$ for $w \mid v$ does not contain the unramified
${\bf Z}_p$-extension of $F_v$.
We have a noncanonical isomorphism of $\Lambda(H)$-modules
$$
\mf{I}_{L/K}^T \cong
\prod_{v \in R_{L/K}} I_v
$$
and a canonical exact sequence
$$
0 \to \coker \Sigma_{L/K}^T \to \mf{E}_{L/K}^T \to
\mf{I}_{L/K}^T \to 0.
$$
Note that, if $G \cong {\bf Z}_p$, then
the set of $v \in R_{L/K}$ with
$I_v \neq 0$ consists only of primes over $p$.
\item[2.] It is not necessary to assume that $G$ is pro-$p$, but merely
procyclic, if we replace Galois groups by their
$p$-completions in the definitions of the terms of $\Gamma_{L/K}^T$,
aside from $G$-invariants and coinvariants.
\item[3.] In addition to the functoriality inducing maps
$\Gamma_{L'/K'}^T \to \Gamma_{L/K}^T$ and $\Psi_{L'/K'}^T \to
\Psi_{L/K}^T$, there are natural maps
$\Gamma_{L/K}^T \to \Gamma_{L/K}^{T'}$ and $\Psi_{L/K}^T \to
\Psi_{L/K}^{T'}$
for $T \subseteq T'$ (induced by the natural quotient maps on
class groups and ideal groups, inclusion maps on unit groups, and
equality on $G$ together with the natural maps between its subgroups).
\item[4.] If we only wish to have an exact sequence of
${\bf Z}_p$-modules, we need not assume that $L$ and $K$ are Galois
over a number field $F$. To do this, choose a set $T_K$ of
primes of $K$ containing the real places,
let $T_E$ be the set of primes lying below
those in $T_K$ for any $E \subset K$, and assume that $T = T_{{\bf Q}}$ is
finite. The sequences $\Gamma_{L/K}^T$ and $\Psi_{L/K}^T$
of ${\bf Z}_p$-modules defined as before are still exact.
\item[5.] It is not necessary to assume that $T$ is finite if $L$ is
a number field, since we do not have to pass to an inverse
limit.
\item[6.] If $T_K$ contains the set of primes which ramify in $L/K$, then
we have $I_v = 0$ for all $v \notin T_K$, so
when $G \cong {\bf Z}_p$
we obtain a $6$-term exact sequence
\begin{multline*}
\qquad \ \ 0 \to
X_{L,T}^G
\otimes_{{\bf Z}_p} G
\to \coker N_{L/K}^T \to \\
\ker \Sigma_{L/K}^T \to (X_{L,T})_G \to
X_{K,T} \to \coker \Sigma_{L/K}^T \to 0
\end{multline*}
from $\Gamma_{L/K}^T$, and $\Psi_{L/K}^T$ becomes
\begin{multline*}
\qquad \ \ \ldots \to \ker N_{L/K}^T \to \coker \Sigma_{L/K}^T
\otimes_{{\bf Z}_p} G
\to X_{L,T}^G \otimes_{{\bf Z}_p} G \to\\ \coker N_{L/K}^T \to
\ker \Sigma_{L/K}^T \to (X_{L,T})_G \to \ldots.
\end{multline*}
\item[7.] If $K$ contains the cyclotomic ${\bf Z}_p$-extension of $F$, then only
(the $p$-parts of) those $G_v/I_v$ with $v$ lying
over $p$ (or real places when $p = 2$) can be nontrivial.
\end{enumerate}
\end{remarks}
We now mention a couple of other approaches to the proof of
Theorem \ref{comparison} for the sequences $\Gamma_{L/K}^T$, as we
believe the methods are quite interesting in their own right and
may apply in other contexts. Perhaps surprisingly, the method we
have given above is not only the most down-to-earth approach but
also seemingly the most easily applied to treat the general case.
In describing the alternate approaches, we focus on the case that
$T$ contains the primes above $p$ and all primes which ramify in
$L/K$, for which Galois cohomology with restricted ramification is
most easily applied.
The first approach involves again working first in the case that
$L$ is a number field and writing out a seven-term exact sequence
using the Hochschild-Serre spectral sequence
$$
H^p(G,H^q(G_{L,T},\mc{O}_{\Omega,T}^{\times})) \Rightarrow
H^{p+q}(G_{K,T},\mc{O}_{\Omega,T}^{\times}),
$$
where $G_{E,T}$ denotes the Galois group of the maximal unramified outside $T$
extension of $E$ (for $E = K,L$) and $\mc{O}_{\Omega,T}$ is the
ring of $T$-integers of $\Omega$, the maximal unramified outside
$T$ extension of $K$. Note that one has nice descriptions of (the
$p$-completions of) the groups
$H^i(G_{K,T},\mc{O}_{\Omega,T}^{\times})$ for $i = 0,1,2$ in terms
of units, class groups, and the Brauer group, respectively. (At
one point, one must describe explicitly
the map $E_2^{1,1} \to E_2^{3,0}$,
and this requires comparing with the map $\hat{H}^{-1}(\mathrm{Cl}_{L,T})
\to \hat{H}^1(\mc{O}_{L,T}^{\times})$ in \eqref{longseq}.) One
then passes to the inverse limit.
Another approach involves using the Poitou-Tate sequences for $E$
equal to $K$ and $L$ of \cite[Theorem 5.4]{jannsen}:
\begin{equation} \label{pt}
0 \to H^2(G_{E,T},{\bf Q}_p/{\bf Z}_p)^{\vee}
\to \mc{U}_{E,T} \to \mc{A}_{E,T}
\to \mf{X}_{E,T} \to X_{E,T} \to 0
\end{equation}
where $\mf{X}_{E,T}$ denotes the Galois group of the maximal
abelian
pro-$p$ unramified outside $T$ extension of $E$ and
$$
\mc{A}_{E,T} =
\lim_{\substack{\leftarrow\\E \subset K}}
\bigoplus_{w \in T_E} H^1(G_{E_w},{\bf Q}_p/{\bf Z}_p)^{\vee},
$$
where $G_{E_w}$ is the absolute Galois group of the completion
$E_w$ of $E$ at $w$ and ${}^{\vee}$ is used to denote the
Pontryagin dual. Let us focus on the case $G \cong {\bf Z}_p$, which one
can treat directly. In this case, one breaks up the $5$-term exact
sequences \eqref{pt} into three pairs of $3$-term exact sequences
and then considers maps on $G$-coinvariants between them (the
general idea here being taken from \cite{cs}). One obtains three
long exact sequences via the snake lemma and derives the desired
$6$-term sequence from these, using repeatedly the fact that $G$
has $p$-cohomological dimension $1$. (In fact, in the case that
$H^2(G_{K,T},{\bf Q}_p/{\bf Z}_p)$ is nontrivial, we only carried it out up to
a certain difficult check of commutativity.) One can also derive
the desired exact sequence from the spectral sequence for a
certain three-by-five complex with exact rows consisting of two
copies of \eqref{pt} for $L$ and one for $K$, which degenerates at
$E_4$, and we thank Marc Nieper-Wisskirchen for suggesting the
idea.
\footnotesize \noindent
Yoshitaka Hachimori\\
CICMA\\
Department of Mathematics and Statistics\\
Concordia University\\
1455 de Maisonneuve Blvd. West\\
Montr\'{e}al, Qu\'{e}bec H3G 1M8, Canada\\
e-mail address: {\tt [email protected]}\\ \\
Romyar Sharifi \\
Department of Mathematics and Statistics\\
McMaster University\\
1280 Main Street West\\
Hamilton, Ontario L8S 4K1, Canada\\
e-mail address: {\tt [email protected]}
\end{document}
|
\begin{document}
\title{Mean-photon-number dependent variational method to the Rabi model}
\author{Maoxin Liu}
\affiliation{Beijing Computational Science Research Center, Beijing 100084, China}
\author{Zu-Jian Ying}
\affiliation{Beijing Computational Science Research Center, Beijing 100084, China}
\author{Jun-Hong An}
\affiliation{Center for Interdisciplinary Studies $\&$ Key Laboratory for Magnetism and Magnetic Materials of the MoE, Lanzhou University, Lanzhou 730000, China}
\author{Hong-Gang Luo}
\affiliation{Center for Interdisciplinary Studies $\&$ Key Laboratory for Magnetism and Magnetic Materials of the MoE, Lanzhou University, Lanzhou 730000, China}
\affiliation{Beijing Computational Science Research Center, Beijing 100084, China}
\begin{abstract}
We present a mean-photon-number dependent variational method, which works well in the whole coupling regime if the photon energy is dominant over the spin-flipping, to evaluate the properties of the Rabi model for both the ground state and the excited states. For the ground state, it is shown that the previous approximate methods, the generalized rotating-wave approximation (only working well in the strong coupling limit) and the generalized variational method (only working well in the weak coupling limit), can be recovered in the corresponding coupling limits. The key point of our method is to tailor the merits of these two existing methods by introducing a mean-photon-number dependent variational parameter. For the excited states, our method yields considerable improvements over the generalized rotating-wave approximation. The variational method proposed could be readily applied to more complex models for which an analytic formula is difficult to formulate.
\end{abstract}
\maketitle
\section{Introduction}\label{intro}
The Rabi model describes a two-level system interacting with a single-mode bosonic field \cite{rabi}. It plays a fundamental role in quantum optics \cite{cavityQED}, quantum information \cite{raimond}, and condensed matter physics \cite{holstein}. Although it has been intensively explored, only in recent years has the integrability of this model been formulated \cite{Braak2011}. However, this analytic achievement is not the end of the study of this model; on the contrary, it has triggered more theoretical and experimental studies \cite{Solano2011,Romero2012,Restrepo2014}. Explicitly, the Rabi model has been experimentally simulated in optical waveguides \cite{Crespi2012}, superconducting circuit systems \cite{circuit1,circuit2,StrongCouplingExps2,ultra}, and solid-state semiconductors \cite{Gunter2004, Deveaud2007, Cristofolini2012, Carusotto2013,wen}, which provides a perfect test bed to explore the physics of light-matter interaction in the deep strong coupling regime. Another significance of the analytic achievement is that it supplies some insight into the physics involved, e.g., the vacuum
induced Berry phase \cite{Larson2012}, and the quantum phase transition in the related multi-model Rabi model, i.e., the spin-boson model \cite{Leggett_RMP,weiss_book}, for which an exact solution is quite difficult to obtain, and a well-established approximate method is desirable.
For decades of study on the Rabi model, besides the numerical treatment \cite{casanova}, there exist many approximate analytic methods \cite{hausinger,firstorder,qhchen}. The most famous approximation is the rotating-wave approximation (RWA) \cite{jc}. Working in the near-resonance and weak-coupling regime, the RWA neglects the counter-rotating terms in the interaction and results in the Jaynes-Cummings (J-C) model \cite{jc}. It has served as a basic starting point in understanding many quantum phenomena involved in light-matter interaction \cite{jc_review}, because most of the practical quantum optical experiments work in the weak coupling regime \cite{raimond,mabuchi}. However, in circuit quantum electrodynamics system, the neglected counter-rotating term becomes important due to the strong \cite{circuit1,circuit2}
or the ultra-strong coupling \cite{ultra} between the bosonic field and the two-level system. To treat the strong coupling, Irish \textit{et al.} \cite{adiabatic} proposed an adiabatic approximation (AA) in the limit that the frequency of the field is much larger than that of the two-level system. Working in the displaced oscillator basis, it treats the frequency of the two-level system as a perturbation and results in a truncated Hamiltonian with the interaction effects collected in a renormalization factor for the frequency of the two-level system. In 2007 \cite{grwa}, the AA was improved by considering the RWA-type interaction in the reformulated Hamiltonian in the
displaced oscillator basis. This scheme was named the generalized RWA (GRWA). Although the GRWA works well in a quite broad parameter regime, especially in the strong coupling regime, it does not work well in the weak coupling regime, especially for the positive detuning case. In addition, the mean photon number predicted by the GRWA is independent of the frequency of the two-level system, which is actually not true. As an improvement, a generalized variational method (GVM) \cite{GVMground,GVM} has been introduced, where the displacement of the displaced oscillator basis is determined by minimizing the ground state energy. Indeed, the GVM evidently improves on the GRWA in the weak coupling regime with positive detuning, and yields a frequency dependent ground state mean photon number. However, for the strong and intermediate coupling regimes, the GVM is no longer applicable. Moreover, the GVM is limited to the ground state.
Obviously, the merit of the GRWA and the AA comes from the introduction of the displaced oscillator basis, which captures the essential physics in the large coupling regime. However, its disadvantage lies in fixing the displacement, which leads to a frequency independent mean photon number of the obtained ground state. On the contrary, the GVM frees the displacement, but it does not introduce the displaced oscillator basis and has been excessively simplified in the analytic treatment. In the present work, we combine the merits of the GRWA and the GVM to obtain a novel analytic method. We start from the GRWA formula but further introduce a mean photon number dependent variational method to determine the displacement. As a result, our approximation method is applicable in both the weak and strong coupling regimes. In the weak coupling regime, it recovers the result of the GVM and in the strong coupling regime it recovers the GRWA. In the intermediate coupling regime, it provides a natural crossover from the GVM to the GRWA. This variational method is valid not only for the ground state but also for the excited states. To show the merit of our method, we focus on the energy spectrum and mean photon number of the Rabi model and compare the results with those obtained by the GVM and the GRWA, taking the exact numerical result as a benchmark.
The paper is organized as follows. In Sec. \ref{section_am} we introduce the Rabi model and give a review to the previous approximate methods for self containing and also for convenience of later discussions. In Sec. \ref{section_ca} we present our method and make some detailed comparisons with the results obtained by the previous methods. Finally, Sec. \ref{section_con} is devoted to conclusions and discussions.
\section{The model and some previous methods}\label{section_am}
The Hamiltonian of the Rabi model reads
\begin{equation}\label{rabi}
H=\omega a^{\dagger}a+\frac{\Omega}{2}\sigma_x+g(\sigma_-+\sigma_+)(a+a^\dagger),
\end{equation}
where $a$ and $a^{\dag }$ are the annihilation and creation operators of the quantized single-mode bosonic field with frequency $\omega $, $\sigma_x$ is the Pauli matrix for the two-level system with level splitting $\Omega$, and $\sigma_\pm=(\sigma_z\mp i\sigma_y)/2$ are the transition operators between the two levels, and $g$ is the coupling strength. Here, for convenience of comparison we follow the notations in Ref.\cite{grwa} to use spin-flipping $\sigma _x$ for the level-splitting term instead of $\sigma _z$ commonly used in quantum optics \cite{scully}. However, these two notations can be transformed into each other by a rotation on the two-level system. According to the tuning relationship between the two-level system and the field, the model takes three cases: resonance ($\omega = \Omega$), positive detuning ($\omega <\Omega$) and negative detuning ($\omega > \Omega$). Throughout the paper we take $\Omega$ as unit of energy.
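The numerically exact reference results (labeled NR in the figures below) are obtained by diagonalizing Eq. (\ref{rabi}) in a truncated Fock basis. A minimal sketch of such an exact-diagonalization benchmark is given below; the parameter values and the cutoff are illustrative, and we use the fact that $\sigma_-+\sigma_+=\sigma_z$ in the present convention.
\begin{verbatim}
import numpy as np

def rabi_hamiltonian(omega, Omega, g, n_max=40):
    """Rabi Hamiltonian above, in a Fock basis truncated at n_max states.
    With sigma_pm = (sigma_z -+ i sigma_y)/2 one has sigma_+ + sigma_- = sigma_z."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), 1)        # annihilation operator
    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])
    return (omega * np.kron(np.eye(2), a.T @ a)
            + 0.5 * Omega * np.kron(sx, np.eye(n_max))
            + g * np.kron(sz, a + a.T))

# illustrative resonance point, Omega = 1 as the unit of energy
E, V = np.linalg.eigh(rabi_hamiltonian(omega=1.0, Omega=1.0, g=0.5))
print("exact ground-state energy:", E[0])
\end{verbatim}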
Essentially, the existing approximate methods can be formulated in two ways: One is to truncate Eq. (\ref{rabi}) into J-C-like exactly solvable form, and the other is to expand Eq. (\ref{rabi}) on a proper basis and then truncate the obtained matrix into the block-diagonal form. In the following, we reformulate these approximations in the two ways in order to compare their performance.
\subsection{Truncated Hamiltonian}
\begin{enumerate}
\item RWA: Neglecting the counter-rotating terms $\sigma_-a+\sigma_+a^\dag$ in Eq. (\ref{rabi}) yields the RWA Hamiltonian
\begin{equation}
H_\text{RWA} = \omega a^{\dagger}a+\frac{\Omega}{2}\sigma_x+g(\sigma_-a^\dagger + \sigma_+a ).
\end{equation}
This is the J-C Hamiltonian \cite{jc}, which is exactly solvable. Its eigen solution reads
\begin{equation}
E_{{\rm RWA}}^{(\pm,N)}=(N-\frac{1}{2})\omega\pm\sqrt{\frac{(\omega-\Omega)^2}{4}+Ng^2},
\end{equation}
with the ground eigen-energy $E_\text{RWA}^{(0)}=-\frac{\Omega }{2}$, which is just the J-C energy ladder \cite{Fink2008}.
\item AA: Performing a unitary transformation $U=e^{\lambda \sigma_z(a-a^{\dag})}$ with $\lambda=-\frac{g}{\omega}$ to Eq. (\ref{rabi}), one obtains $\tilde H = UHU^{\dag}$ with \cite{adiabatic}
\begin{equation}\label{Htilde}
\tilde{H} = \omega a^{\dag}a-{g^2\over \omega}+\frac{\Omega}{2} \sigma_x F(\lambda)+\frac{i\Omega}{2}\sigma_y G(\lambda).
\end{equation}
Here
$F(\lambda)=\sum_{k=0}^{\infty}[a^{\dag2k}f_{2k}(\lambda,a^{\dag}a)+\text{h.c.}]$, $G(\lambda)=\sum_{k=0}^{\infty}[a^{\dag2k+1}f_{2k+1}(\lambda,a^{\dag}a)-\text{h.c.}]$,
and
$f_m(\lambda,x)=\frac{(-2\lambda)^me^{-2\lambda^2}(x+m)!}{x!}L_x^m(4\lambda^2)$ with $L_x^m$ being the associated Laguerre polynomial (see Appendix A). In the small $\Omega$ case[$\Omega \ll (\omega, g)$], keeping only the zero-th order term of $a$ and $a^\dag$ in $F(\lambda)$ is a good approximation, which leads to
\begin{equation}
\tilde{H}_\text{AA}=\omega a^{\dagger}a-{g^2\over\omega}+{\Omega f_0(\lambda,a^{\dagger}a)\over 2} \sigma_x,
\end{equation}
whose eigensolution can be evaluated readily as
\begin{equation}\label{eigaa}
\begin{split}
&E^{\pm,N}_\text{AA} = N\omega -{g^2\over\omega} \pm {\Omega f_0(\lambda,N)\over 2},\\
&|\tilde{\Psi}^{\pm,N}_\text{AA}\rangle=|\pm_x,N\rangle,
\end{split}
\end{equation}
with $|\pm_x\rangle$ being the eigenstates of $\sigma_x$ and $|N\rangle$ being the Fock state. After the inverse transformation, through representing the $|\pm_x\rangle$ by the original $|\pm_z\rangle$ basis, one gets the eigen-state under the AA:
\begin{equation}\label{AA_basis_oldpre}
\begin{split}
|\Psi^{\pm,N}_\text{AA}\rangle& =U^\dag|\pm _x,N\rangle\\
&= \frac{1}{\sqrt{2}}\left[e^{\lambda(a^{\dag}-a)}|+_z,N\rangle \pm e^{-\lambda(a^{\dag}-a)}|-_z,N\rangle\right] .
\end{split}
\end{equation}
\item GRWA: Going beyond the AA, one further considers the zeroth order term in $G(\lambda)$, which involves one-excitation terms. Keeping only the ``energy-conserving'' one-excitation terms, one arrives at the GRWA Hamiltonian \cite{grwa}
\begin{equation}\label{hgrwa}
\tilde{H}_\text{GRWA}= \tilde H_\text{AA} + \frac{\Omega }{2} [\sigma_-a^{\dag}f_{1}(\lambda,a^{\dag}a)+\text{h.c.}].
\end{equation}
On the basis of $|\pm_x,N\rangle$, Eq. \eqref{hgrwa} is block-diagonalized with $2 \times 2$ subblocks
\begin{equation}\label{GRWA_matrix_GRWA}
\tilde{H}_\text{GRWA}^\text{BLOCK}=
\left(
\begin{array}{cc}
E_\text{AA}^{+,N-1} & h'_{N-1_+,N_-} \\
h'_{N_-,N-1_+}& E^{-,N}_\text{AA} \\
\end{array}
\right),
\end{equation}
which gives a pair of eigen-vectors $\{R_{N,\pm}, S_{N,\pm}\}$. The off-diagonal entries are defined by
\begin{equation}
h'_{N-1_+,N_-}=h'_{N_-,N-1_+}=\frac{1}{2}\Omega\sqrt{N}f_1(\lambda,N).
\end{equation}
Thus, the eigenstates of Eq. (\ref{hgrwa}) read as
\begin{eqnarray}
|\tilde{\Psi}_\text{GRWA}^{\pm,N}\rangle&=&R_{N,\pm}
|+_x,N-1\rangle+S_{N,\pm}|-_x,N\rangle.\label{ddsf}
\end{eqnarray}
The states to the original Hamiltonian (\ref{rabi}) are obtained by the inverse transformation:
\begin{equation}
|\Psi_\text{GRWA}^{\pm,N}\rangle=U^\dag|\tilde{\Psi}_\text{GRWA}^{\pm,N}\rangle,\label{tddsf}
\end{equation}
while the ground state $|\tilde{\Psi}_\text{GRWA}^{(0)}\rangle=|-_x,0\rangle$ is the same as that of AA.
\item GVM: Different from the above two methods, the parameter $\lambda$ here is not fixed but is optimized by minimizing the ground-state energy \cite{GVMground}
\begin{equation}
E_{\text{GVM}}^{(0)}=\lambda^2\omega+2g\lambda-{\Omega\over 2} f_0(\lambda,0),
\end{equation}
which results in the equation determining $\lambda$, $g+\omega\lambda+\Omega\lambda e^{-2\lambda^2}=0$. Since it cannot be solved analytically, Zhang \textit{et al.} \cite{GVMground} took the following approximate solution
\begin{equation}\label{gvm_lambda}
\lambda=-\frac{g}{\omega}\frac{1}{1 + \frac{\Omega}{\omega}e^{-2g^2/(\omega+\Omega)^2}}.
\end{equation}
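(A brief numerical illustration of the RWA, AA, and GVM expressions listed above is given immediately after this list.)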
\end{enumerate}
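As a cross-check of the closed-form expressions collected above, the short sketch below (with illustrative parameters; $f_0(\lambda,N)=e^{-2\lambda^2}L_N(4\lambda^2)$ is evaluated with SciPy's Laguerre polynomials) computes the J-C ladder of the RWA, the AA energies of Eq. (\ref{eigaa}), and the GVM displacement, both from the stationarity condition of the ground-state energy and from the approximate formula Eq. (\ref{gvm_lambda}).
\begin{verbatim}
import numpy as np
from scipy.special import eval_laguerre
from scipy.optimize import brentq

omega, Omega, g = 1.0, 1.0, 0.4      # illustrative resonance point

# RWA (J-C) ladder
def E_RWA(sign, N):
    return (N - 0.5) * omega + sign * np.sqrt((omega - Omega)**2 / 4 + N * g**2)

# AA energies with lambda = -g/omega and f_0(lam, N) = exp(-2 lam^2) L_N(4 lam^2)
lam_AA = -g / omega
def f0(lam, N):
    return np.exp(-2 * lam**2) * eval_laguerre(N, 4 * lam**2)
def E_AA(sign, N):
    return N * omega - g**2 / omega + sign * 0.5 * Omega * f0(lam_AA, N)

# GVM displacement: stationarity of the ground-state energy,
# g + omega*lam + Omega*lam*exp(-2 lam^2) = 0, which has a root in (-g/omega, 0)
lam_GVM = brentq(lambda l: g + omega*l + Omega*l*np.exp(-2*l**2), -g/omega, 0.0)
lam_GVM_approx = -(g/omega) / (1 + (Omega/omega)*np.exp(-2*g**2/(omega + Omega)**2))

print(E_RWA(-1, 1), E_AA(-1, 1), lam_GVM, lam_GVM_approx)
\end{verbatim}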
Below we address the conditions under which the above methods work well. The RWA is valid in the very weak coupling regime ($g \ll \Omega,\omega$) and under the near-resonance ($\omega \sim \Omega$) conditions. Beyond the usual strong coupling regime, namely, in the strong coupling limit, the RWA is no longer valid but the AA shows its advantage. For either large $\omega$ or large $g$, the term of displaced oscillator is dominant in \eqref{Htilde} and the $\Omega$ terms can be treated as perturbation. Thus the validity of the AA lies in strong coupling limit ($g\gg\omega$) or negative detuning ($\omega> \Omega$) regime. Because the GRWA further keeps all one-excitation ``energy-conserving" terms unincorporated in the AA, its applicable range for the excited states is extended to the regime that covers those of both RWA and AA, which is nearly the whole parameter regime. The reason can be due to ``the fundamental similarity between the standard RWA and AA model: both involved calculating the energy splitting due to an interaction between two otherwise degenerate basis states", as clearly stated in Ref. [\onlinecite{grwa}].
However, the validity regime of the GRWA could be further broadened if the following aspects can be properly treated.
First, the ground-state energy of the GRWA is the same as that of the AA, so no improvement is obtained there. Second, its energy spectrum
requires a more accurate calculation
for small ratios of $\omega/\Omega$ in the weak coupling regime, especially for the ground state. Third, it predicts an incorrect $\Omega$-independent mean photon number due to the fixed $\lambda$. The GVM improves the accuracy of the ground-state energy and its mean photon number captures the $\Omega$-dependent behavior in the weak coupling regime, especially for the positive detuning case. However, since an oversimplified analytic treatment has been applied, the results of the GVM become even worse than those of the GRWA in the strong-coupling regime.
\subsection{Basis Formulation}
Truncating the Hamiltonian in the AA and the GRWA can be understood in an alternative way as discarding the remote off-diagonal elements of the Hamiltonian matrix in a certain basis \cite{grwa,adiabatic}. Here we reformulate the AA and the GRWA based on this idea.
Choosing the basis $|N_{\pm}\rangle=e^{-\lambda\sigma_z(a-a^{\dag})}|\pm_z,N\rangle$ with $\lambda=-g/\omega$, Eq. (\ref{rabi}) can be rewritten as
\begin{equation}\label{AA_matrix}
H=
\left(
\begin{array}{ccccc}
E_0 & h_{0_-,0_+} & 0 & h_{0_-,1_+} & \cdots\\
h_{0_+,0_-} & E_0 & h_{0_+,1_-} & 0 & \cdots \\
0& h_{1_-,0_+}& E_1 & h_{1_-,1_+} & \cdots \\
h_{1_+,0_-} & 0 & h_{1_+,1_-} & E_1 & \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots\\
\end{array}
\right),
\end{equation}
with $E_N=\omega N$ and $h_{N_\alpha,M_\beta}=\langle N_\alpha|H|M_\beta\rangle$. Discarding the remote off-diagonal elements leads to a $2\times2$ block-diagonal matrix
\begin{equation}\label{AA_matrix_AA}
H_\text{AA}=
\left(
\begin{array}{ccccc}
E_0 & h_{0_-,0_+} & 0 & 0 & \cdots\\
h_{0_+,0_-} & E_0 & 0 & 0 & \cdots \\
0& 0& E_1 & h_{1_-,1_+} & \cdots \\
0 & 0 & h_{1_+,1_-} & E_1 & \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots\\
\end{array}
\right).
\end{equation}
By diagonalizing Eq. (\ref{AA_matrix_AA}), one finds the eigen solution
\begin{eqnarray}
E^{\pm, N}_\text{AA}&=&N\omega\pm\frac{|h_{N_+,N_-}|}{2},\\
|\Psi^{\pm,N}_\text{AA}\rangle&=&\frac{1}{\sqrt{2}}(|N_+\rangle \pm |N_-\rangle), \label{Wave_AA}
\end{eqnarray}
which matches well with Eq. (\ref{AA_basis_oldpre}) obtained under the AA.
Irish \textit{et al.} further used the eigenstates in Eq. (\ref{Wave_AA}) as basis to expand the Hamiltonian (\ref{rabi}), which reads
\begin{equation}\label{GRWA_matrix}
H=\left(
\begin{array}{cccccc}
E_\text{AA}^{-,0} & 0 & 0 & h'_{0_-,1_+} &h'_{0_-,2_-} & \cdots\\
0& E_\text{AA}^{+,0} & h'_{0_+,1_-} & 0 & 0 & \cdots \\
0& h'_{1_-,0_+}& E^{-,1}_\text{AA} & 0 & 0 & \cdots \\
h'_{1_+,0_-}& 0 & 0 & E_\text{AA}^{+,1} & h'_{1_+,2_-} & \cdots\\
h'_{2_-,0_-} & 0 & 0 & h'_{2_-,1_+} & E_\text{AA}^{-,2} & \cdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots\\
\end{array}
\right),
\end{equation}
with $h'_{N_\alpha,M_\beta}=\langle\Psi^{(\alpha,N)}_\text{AA}|H|\Psi^{(\beta,M)}_\text{AA}\rangle$. Then dropping the remote off-diagonal matrix elements gives rise to
\begin{equation}\label{GRWA_matrix_trunc}
H_\text{GRWA}=
\left(
\begin{array}{cccccc}
E_\text{AA}^{-,0} & 0 & 0 & 0 &0 & \cdots\\
0& E_\text{AA}^{+,0} & h'_{0_+,1_-} & 0 & 0 & \cdots \\
0&h'_{1_-,0_+}& E^{-,1}_\text{AA} & 0 & 0 & \cdots \\
0& 0 & 0 & E_\text{AA}^{+,1} & h'_{1_+,2_-} & \cdots\\
0& 0 & 0 & h'_{2_-,1_+} & E_\text{AA}^{-,2} & \cdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots\\
\end{array}
\right).
\end{equation}
Based on this form, the energy spectra and the eigenstates can be readily solved, which are consistent with those obtained under the GRWA, i.e., Eq. (\ref{ddsf}).
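The truncation just described is easy to mimic numerically: one builds $H$ in the displaced-oscillator basis $|N_\pm\rangle$ and keeps only the $2\times2$ blocks of Eq. (\ref{AA_matrix_AA}), which gives the AA; repeating the same truncation in the AA eigenbasis then yields the GRWA. A minimal sketch of the first step, with an illustrative finite Fock cutoff and parameter values, reads:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, block_diag

n_max = 30
omega, Omega, g = 2.0, 1.0, 0.8          # illustrative negative-detuning point
lam = -g / omega

a = np.diag(np.sqrt(np.arange(1, n_max)), 1)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
H = (omega * np.kron(np.eye(2), a.T @ a)
     + 0.5 * Omega * np.kron(sx, np.eye(n_max))
     + g * np.kron(sz, a + a.T))

# displaced-oscillator basis |N_+-> = exp(-lam sigma_z (a - a^dag)) |+-_z, N>
D = lambda t: expm(t * (a.T - a))        # displacement operator exp(t (a^dag - a))
B = block_diag(D(lam), D(-lam))          # columns are the basis vectors |N_+>, |N_->
Hd = B.T @ H @ B                         # Hamiltonian matrix in the new basis

# AA: keep only the 2x2 blocks coupling |N_+> and |N_->
H_AA = np.zeros_like(Hd)
for N in range(n_max):
    idx = [N, n_max + N]
    H_AA[np.ix_(idx, idx)] = Hd[np.ix_(idx, idx)]

print(np.sort(np.linalg.eigvalsh(H_AA))[:4])   # AA spectrum
print(np.sort(np.linalg.eigvalsh(H))[:4])      # exact spectrum for comparison
\end{verbatim}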
\section{Mean photon-number dependent variational method}\label{section_ca}
\subsection{Method description and improvements for the ground state}\label{sectionGS}
From the above analysis of the previous approximations, we can see that truncating the Hamiltonian matrix into block-diagonal form in a complete orthogonal basis is equivalent to truncating the Hamiltonian operator expansions. The better a basis is chosen, the more information an approximation retains. Moreover, a proper basis can even be considered as an approximate state. First, let us focus on the ground state. For the existing approximate methods, the ground state takes the following form
\begin{equation}\label{Ga}
|G_A\rangle=\frac{1}{\sqrt{2}}(|+_z,\lambda\rangle - |-_z,-\lambda\rangle),
\end{equation}
where $|\pm\lambda\rangle=e^{\pm\lambda(a^{\dagger}-a)}|0\rangle$ are coherent states. For the AA/GRWA, $\lambda=-g/\omega$, while for the GVM, $\lambda$ is approximately given by Eq. (\ref{gvm_lambda}). This motivates us to take Eq. (\ref{Ga}) as our trial state but to free the parameter $\lambda$ completely. The reasons for taking the form of Eq. (\ref{Ga}) as our trial state are as follows. On one hand, Eq. (\ref{Ga}) can reproduce the previously known results of different approximations such as the AA/GRWA and the GVM. In particular, Irish's scheme is valid in nearly the whole coupling regime. On the other hand, it is motivated by the competition between the displaced oscillator and the spin-flipping in the model. Note that if the spin-flipping term $\frac{\Omega}{2}\sigma_x$ is neglected, the model reduces to a displaced oscillator Hamiltonian with two degenerate ground states $|+_z,-g/\omega\rangle$ and $|-_z,g/\omega\rangle$. Including an infinitesimal spin-flipping lifts the degeneracy, and their linear combination, namely $\frac{1}{\sqrt{2}}(|+_z,-g/\omega\rangle - |-_z,g/\omega\rangle)$, is just the trial state Eq.~\eqref{Ga} with $\lambda=-g/\omega$. When the spin-flipping increases, the competition between the displaced oscillator and the spin-flipping motivates us to free $\lambda$ as a variational parameter. In addition, $|G_A\rangle$ is also an eigenstate of the parity operator
$\Pi=-\sigma_x(-1)^{a^{\dag}a}$, which commutes with the model Hamiltonian. Thus this approximate ground state has a definite parity. With the assumed ground state Eq. (\ref{Ga}), the energy and the mean photon number of the ground state are easy to obtain:
\begin{eqnarray}
&& E_{0} = \langle G_A|H|G_A\rangle =\lambda^2\omega+2g\lambda-\frac{\Omega }{2}e^{-2\lambda^2},\label{Eg}\\
&& \langle a^\dagger a \rangle_0 = \langle G_A|a^{\dag}a|G_A\rangle = \lambda^2.\label{N}
\end{eqnarray}
Obviously, the parameter $\lambda$ is optimal if the projection $P(\lambda)=\langle G_A(\lambda)|\Psi_0\rangle$ is exactly equal to one, where $|\Psi_0\rangle$ is the exact ground state. Unfortunately, a simple expression of $|\Psi_0\rangle$ is unknown (though a series expression of $|\Psi_0\rangle$ can be given but it is quite useless due to the infinite series form). We here adopt an approximate but accurate enough form for $|\Psi_0\rangle$. Note that a unitary transformation $U=e^{\lambda\sigma_z(a-a^{\dag})}$ can recast $|G_A\rangle$ to $|\tilde{G}_A\rangle=U|G_A\rangle=|-_x,0\rangle$, which can be regarded as the zero-th order approximation for $|\tilde{\Psi}_0\rangle=U|\Psi_0\rangle$. Taking the complete orthogonal basis of $|\tilde{G}_A\rangle$ into consideration, we can construct the perturbative corrections of
$|\tilde{\Psi}_0\rangle$. For the perturbative calculation, we choose the AA basis $|\pm_x,N\rangle$ as the complete orthogonal basis containing $|\tilde{G}_A\rangle$. Note that it should work better to choose a more accurate basis, e.g., the GRWA basis, to expand the perturbation; here, however, the choice of the AA or the GRWA basis makes little difference in the ground state calculation. According to perturbation theory, we can expand $|\tilde{\Psi}_0\rangle$ to first order in the perturbation as
\begin{equation}\label{pertur_0}
|\tilde{\Psi}_0\rangle =(1+K)^{-1/2}( |\tilde{G}_A\rangle+{\sum_{\{\pm,N\}}}^{\prime} c_{\pm,N}|\pm_x, N\rangle ),
\end{equation}
where $c_{\pm,N}=\frac{\langle \pm_x,N|\Delta\tilde{H}|\tilde{G}_A\rangle}{\langle \pm_x,N|\tilde{H}_{0}|\pm_x,N\rangle-E_0}$, $K = {\sum\limits_{\{\pm,N\}}}^{\prime} c_{\pm,N}^2$ and $\tilde{H}_{0}=\tilde{H}_{\rm{AA}}$ with $\Delta \tilde{H}=\tilde{H}-\tilde{H}_{0}$. Note that the primed summation excludes the ground state itself with the label $\{-,0\}$, and $c_{\pm,N}$ will vanish if the state $|\pm_x,N\rangle$ has a different parity from $|\tilde{G}_A\rangle$.
\begin{equation}
P(\lambda)=(1+K)^{-1/2}.\label{fdlt}
\end{equation}
For given $\omega,~\Omega$, and $g$, if $K$ is minimized by choosing the optimal $\lambda$, then the obtained $|G_A\rangle$ is the optimal trial ground state. This process can be done numerically.
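A brute-force illustration of this numerical optimization is sketched below: it determines $\lambda$ by maximizing the overlap $P(\lambda)=\langle G_A(\lambda)|\Psi_0\rangle$ directly against a numerically exact ground state obtained in a truncated Fock basis, and then evaluates Eqs. (\ref{Eg}) and (\ref{N}). This is not the perturbative procedure adopted here (which avoids exact diagonalization altogether); the parameter values and the cutoff are illustrative.
\begin{verbatim}
import numpy as np

n_max = 60
omega, Omega, g = 0.5, 1.0, 0.6        # illustrative positive-detuning point

a = np.diag(np.sqrt(np.arange(1, n_max)), 1)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
H = (omega * np.kron(np.eye(2), a.T @ a)
     + 0.5 * Omega * np.kron(sx, np.eye(n_max))
     + g * np.kron(sz, a + a.T))
E_exact, V = np.linalg.eigh(H)
psi0 = V[:, 0]                          # numerically exact ground state

def coherent(alpha):
    """Fock amplitudes of the coherent state |alpha> (real alpha)."""
    c = np.zeros(n_max)
    c[0] = np.exp(-0.5 * alpha**2)
    for k in range(1, n_max):
        c[k] = c[k - 1] * alpha / np.sqrt(k)
    return c

def G_A(lam):
    """Trial state (|+_z, lam> - |-_z, -lam>)/sqrt(2), spin-up block first."""
    return np.concatenate([coherent(lam), -coherent(-lam)]) / np.sqrt(2)

lams = np.linspace(-2.5 * g / omega, 0.0, 2001)
P = [abs(G_A(l) @ psi0) for l in lams]
lam_opt = lams[int(np.argmax(P))]

E0 = lam_opt**2 * omega + 2 * g * lam_opt - 0.5 * Omega * np.exp(-2 * lam_opt**2)
print(lam_opt, E0, lam_opt**2, E_exact[0])
\end{verbatim}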
Before presenting the numerical results, it is useful to discuss some limiting cases analytically. First, our result recovers that of the AA/GRWA in the strong coupling limit. In this limit, where $g$ is much larger than $\Omega$, the two-level splitting $\frac{\Omega}{2}\sigma_x$ in Eq. \eqref{rabi} can be safely neglected. Then one can verify that Eq. (\ref{fdlt}) takes the form $P_\text{SC}(\lambda)=[1+(\omega\lambda+g)^2/(E_\text{AA}^{+,1}-E_0)^2]^{-{1\over2}}$, which has its optimal value only at $\lambda=-g/\omega$. This corresponds exactly to the result under the AA/GRWA. Second, our result reduces to that of the GVM in the weak coupling limit. In this case,
one can calculate $P_\text{WC}(\lambda)\simeq[1+(\omega\lambda+\Omega\lambda+g)^2/(E_\text{AA}^{+,1}-E_0)^2]^{-{1\over2}}$, which has the optimal value when $\lambda=-{g\over \omega+\Omega}$. This recovers the result under the GVM in Ref. \cite{GVMground}.
\begin{figure}
\caption{(color online) The $K$ and its leading four components $c_{+,1}
\label{lambda_K}
\end{figure}
For other cases, the analytic optimization of $P(\lambda)$ or $K$ is difficult. We then resort to numerical evaluation of the expression for $K$, which requires much less numerical effort than exact diagonalization. Figure \ref{lambda_K} shows $K$ as a function of $\lambda$ for different $\omega$ and $g$. The leading four components in the summation of $K$ are also plotted. We can see the following features: (i) When $\omega/\Omega$ is sufficiently large [see Fig. \ref{lambda_K}(a)], all the components except $c_{+,1}^2$ are negligible. Thus, $c_{+,1}^2$ is a good substitute for $K$ in the minimization. (ii) When $\omega$ is comparable to $\Omega$, $c_{\pm,N}^2$ for $N > 1$ becomes sizable [see Fig. \ref{lambda_K}(b)]. However, $K$ still has only one minimum, which means that $c_{+,1}^2$ can still act as a substitute for $K$ in the minimization. (iii) When $\omega/\Omega$ is small [see Fig. \ref{lambda_K}(c)], $c_{\pm,N}^2$ for $N > 1$ become important and $K$ shows two minima. Therefore, none of the $c_{\pm,N}^2$ can be taken as a substitute for $K$ in the minimization; the two minima of $K$ should be treated on an equal footing. (iv) For $\omega/\Omega$ sufficiently small [see Fig. \ref{lambda_K}(d)], the series of $c_{\pm,N}^2$ loses convergence and a multi-minimum structure of $K$ appears. This complicated structure indicates that the coherent-state form of the trial wavefunction Eq. (\ref{Ga}) cannot capture the physics dominated by the spin-flipping, and our scheme is no longer valid in this regime.
\begin{figure}
\caption{$\lambda_\text{eff}
\label{lambda_P}
\end{figure}
Figure \ref{lambda_P} shows the optimal $\lambda$ and the corresponding $P(\lambda)$ for the negative-detuning ($\omega=2.0$), the resonance ($\omega=1.0$), and the positive-detuning ($\omega=0.5$) cases. For the two-minimum situation in the positive-detuning [see Fig. \ref{lambda_P}(a) and Fig. \ref{lambda_P}(d)] case, the optimal $\lambda$ can be evaluated effectively as
\begin{equation}
\lambda_\text{eff}=\frac{K_B\lambda_A+K_A\lambda_B}{K_A+K_B},
\end{equation}
where $\lambda_A$ and $\lambda_B$ are the two minimum positions of $\lambda$, and $K_A$ and $K_B$ are the corresponding values of $K$. We find that $\lambda_\text{eff}$ depends on the coupling strength, which is quite different from the fixed-$\lambda$ result under the AA/GRWA \cite{grwa}. Furthermore, $\lambda/g$ approaches $-0.67$ in the weak coupling limit, which is consistent with the analytic result $\lambda/g \rightarrow\frac{-1}{\omega+\Omega}$ obtained under the GVM, and $\lambda/g$ approaches $-2.0$ in the strong coupling limit, which is consistent with the analytic result $-\frac{1}{\omega}$ obtained under the AA/GRWA. In the whole parameter range, $P(\lambda)$ shows little deviation from $1$, which indicates that our $|G_A\rangle$ is almost the same as the exact ground state. For the one-minimum situation in the resonance [see Fig. \ref{lambda_P}(b) and (e)] and negative-detuning [see Fig. \ref{lambda_P}(c) and (f)] cases, where the optimal $\lambda$ is determined by minimizing $c_{+,1}^2$, it is interesting to find that the range of variation of $P$ converges closer and closer to one. This means that our scheme performs better and better with increasing $\omega$.
Figure \ref{fig3} shows the ground-state energy $E_0$ and the mean photon number $\langle a^\dag a\rangle_0$ as a function of the coupling strength $g$ for different detuning cases evaluated by different methods. We stress that although $E_0$ obtained by various methods in the negative detuning case is almost the same [see Fig. \ref{fig3}(a)], the mean photon number obtained by different methods behaves quite differently, as shown in Fig. \ref{fig3}(d). The GVM works better than the AA/GRWA in
the weak coupling regime, while it gets worse in the intermediate coupling regime. However, our result obtained by optimizing $c_{+,1}^2$ matches the exact one well over the whole coupling regime. The improvement of our scheme to $E_0$ becomes more obvious with decreasing $\omega$. In the resonance case, we can see from Fig. \ref{fig3}(b) that the result from the AA/GRWA shows a clear deviation from the exact value in the weak coupling regime and that of the GVM shows a dramatic deviation in the strong coupling regime, while our result is consistent with the exact one over almost the whole coupling regime. For the mean photon number in Fig. \ref{fig3}(e), our result is obviously more accurate than those obtained by the other methods. With a further decrease of $\omega$, the AA/GRWA and the GVM become worse and worse, but our results retain their good performance in evaluating $E_0$ and $\langle a^\dag a\rangle_0$, as shown in Figs. \ref{fig3}(c) and (f).
Thus, our result reproduces the result of the GVM in the weak coupling regime and that of the AA/GRWA in the strong-coupling regime, respectively. Our method tailors the advantages of the existing GVM and AA/GRWA methods. Nevertheless, with further decreasing $\omega$, $K(\lambda)$ shows a multiple-minimum structure [see Fig. \ref{lambda_K}(d)], and the performance of our scheme also degrades. This indicates that the coherent-state basis is no longer a good starting point in this case, where the spin-flipping becomes dominant.
\begin{figure}
\caption{(Color online) The ground-state energy $E_0$ (a-c) and the corresponding mean photon number (d-f) as a function of the coupling strength $g$ for different detuning cases obtained by our mean photon number dependent variational method (black solid line), by the AA/GRWA (purple dashed line), by the GVM (blue dashed dotted line), and the NR, which is the numerical result of exact diagonalization (red circle).}
\label{fig3}
\end{figure}
\subsection{Applications to excited states}\label{SectExcitation}
Our variational method can also be applied to the excited states. The GRWA-form excited state is adopted as the trial state, but with the parameter $\lambda$ left free
\begin{equation}\label{excitedbasis}
|\Psi_A^{\pm,N}\rangle=|\Psi_{\rm GRWA}^{\pm,N}(\lambda)\rangle.
\end{equation}
This trial state possesses a definite parity. As in the ground state, the perturbation scheme is again employed to determine the optimal value of $\lambda$. For convenience, we still discuss in the transformed representation. The zero-th order Hamiltonian is $\tilde{H}_{0}=\tilde{H}_{\rm GRWA}$, and the perturbation is $\Delta \tilde{H}= \tilde{H}-\tilde{H}_{\rm GRWA}$. $\lambda$ is determined by maximizing the projection $P(\lambda)=\langle \tilde{\Psi}_A^{\pm,N}(\lambda)|\tilde{\Psi}_{\pm,N} \rangle$, where $|\tilde{\Psi}_{\pm,N} \rangle$ is the exact excited state corresponding to $|\tilde{\Psi}_A^{\pm,N}\rangle$. $|\tilde{\Psi}_{\pm,N} \rangle$ can be evaluated perturbatively as
\begin{equation}
|\tilde{\Psi}_{\pm,N} \rangle = {1\over\sqrt{1+K}}(|\tilde{\Psi}_A^{\pm,N}\rangle+{\sum_{\{\pm,M\}}}' c_{\pm, M} |\tilde{\Psi}_A^{\pm,M}\rangle),
\end{equation}
where $c_{\pm, M}=\frac{\langle \tilde{\Psi}_A^{\pm,M}|
\Delta \tilde{H} | \tilde{\Psi}_A^{\pm,N}\rangle} {E^A_{\pm,M}-E^A_{\pm,N}}$ and $K={\sum\limits_{\{\pm,M\}}}' c_{\pm, M}^2$ with $E^A_{\pm,N}=\langle\tilde{\Psi}_A^{\pm,N}| \tilde H_{0}|\tilde{\Psi}_A^{\pm,N}\rangle$. Here, similarly to \eqref{pertur_0}, the primed summation excludes the trial state itself with the label $\{\pm,N\}$. Then $\lambda$ can be calculated by optimizing $P(\lambda)$. The corresponding energy and mean photon number are
\begin{eqnarray}
E^{\pm,N}_A&=&\langle \tilde{\Psi}_A^{\pm,N}|\tilde{H}|\tilde{\Psi}_A^{\pm,N}\rangle=R_{N,\pm}^2E_{AA}^{+,N-1}+S_{N,\pm}^2E_{AA}^{-,N}\nonumber\\
&&+2R_{N,\pm}S_{N,\pm}\sqrt{N}(\omega\lambda+g+\Omega f_1(\lambda,N)),\\
\langle a^{\dag}a\rangle^{\pm,N}_A&=&\langle \tilde{\Psi}_A^{\pm,N}|\widetilde{a^{\dag}a}|\tilde{\Psi}_A^{\pm,N}\rangle\nonumber\\
&=&R_{N,\pm}^2(N-1)+S_{N,\pm}^2N\nonumber\\
&&+\lambda^2+2R_{N,\pm}S_{N,\pm}\sqrt{N}\lambda.
\end{eqnarray}
In the large $g$ limit, $P(\lambda)$ approaches $[1+F^{\pm,N}(\omega\lambda+g)^2]^{-{1\over2}}$, where
\begin{eqnarray}
&&F^{\pm,N}=R^2_{N,\pm}[(\frac{\sqrt{N-1}S_{N-1,+}}{E^A_{+,N-1}-E^A_{\pm,N}})^2+(\frac{\sqrt{N-1}S_{N-1,-}}{E^A_{-,N-1}-E^A_{\pm,N}})^2]\nonumber\\
&&~+S^2_{N,\pm}[(\frac{\sqrt{N}R_{N+1,+}}{E^A_{+,N+1}-E^A_{\pm,N}})^2+(\frac{\sqrt{N}R_{N+1,-}}{E^A_{-,N+1}-E^A_{\pm,N}})^2].
\end{eqnarray}
Then the optimal $\lambda$ is $-g/\omega$, which recovers the result of the GRWA. In the small $g$ limit, $P(\lambda)$ approaches $[1+F^{\pm,N}(\omega\lambda+\Omega\lambda+g)^2]^{-{1\over2}}$. Then the optimal $\lambda$ reads $\lambda=\frac{-g}{\omega+\Omega}$. In both limits, the chosen $\lambda$ is independent of the excitation label $\{\pm,N\}$, which means that the set of approximate states retains orthogonality. For other coupling cases, where $\lambda$ can be extracted by simple numerics, one can expect that $\lambda$ depends on the excitation label $\{\pm,N\}$. Strictly speaking, differences in $\lambda$ then break the orthogonality; this arises from the simplicity of the trial state \eqref{excitedbasis} we have adopted. Despite this small price, it is worth using such a simple trial state to gain considerable improvements in the physical properties, such as the energy spectrum and the mean photon number.
To compare different methods for the excited states we illustrate with the example of resonance, i.e., $\omega = \Omega$, which is the most typical case. Since level crossings occur in the energy spectrum amongst the excited states with certain parities [see Fig. \ref{fig4}(a) $\&$ (d)], it is inconvenient to order the excited states in terms of energy. We order the excited states according to the label sequence of the GRWA basis. In the following, the first and second excited states are taken as examples. Since the GRWA modifies the AA for the excited states, its improvement over the AA is remarkable and it performs well in a quite broad regime. Thus, when viewed on a large scale of the coupling strength, the energy spectrum calculated by the GRWA nearly recovers the exact results, just as our scheme does [see Fig. \ref{fig4}(a) $\&$ (d)]. However, on finer scales, the outcome of our method is more consistent with the exact one than that of the GRWA, especially in the weak coupling regime [Fig. \ref{fig4}(b) $\&$ (e)]. For the mean photon number, although for the first excited state both the GRWA and our result are fairly accurate and thus show little difference in comparison with the exact one [Fig. \ref{fig4}(c)], for the second excited state the dramatic improvements over the GRWA from our variational method can be seen [Fig. \ref{fig4}(f)]. For the second excited state, the GRWA does not capture the concave feature of the photon number at small $g$, while our result coincides well with the exact one in almost the whole coupling regime, except for a small discrepancy in a narrow window of the intermediate coupling regime.
Besides the resonance case, we also check the validity of our method by considering a set of experiment-related parameters from Ref.\cite{ultra}, which read $\Omega=(4.20\pm 0.02)GHz$, $\omega/2\pi=(8.13\pm 0.01)GHz$ and $g/2\pi=(0.82\pm 0.03)GHz$. This is a large detuning case and the detuning is also much larger than the coupling strength since $\omega=12.16 \Omega$ and $g^{*}=1.227 \Omega$. In this case, we calculate the first and second excited state energies. Referring to the numerically exact results, Fig.~\ref{fig5} presents a comparison between the results of our method and those of the GRWA, in which a significant improvement is seen. It should be mentioned that, since the GRWA already improves upon the AA and the GVM is limited to the ground state, they are not included in the above comparisons.
\begin{figure}
\caption{(Color online) An overall view of the excited-state energies $E_1$ in (a) and $E_2$ in (d) as a function of $g$ in resonance case ($\omega = \Omega$) for our variation method (black solid line), the GRWA (purple dashed line), and the numerically exact result (NR) (red circles). The third excitation (the curve starting from $E=1.5$ at $g=0$) is also plotted to show the level crossing. A zoom-in comparison of $E_1$ in (b) and $E_2$ in (e) in the weak coupling regime. The mean photon number $\langle a^{\dag}
\label{fig4}
\end{figure}
\begin{figure}
\caption{(Color online) The excited-state energy deviations from the exact one for the first excited state $E_1$ in (a) and for the second excited state $E_2$ in (b) as a function of $g$ for the experiment-related parameters, $\Omega = (4.20 \pm 0.02)GHz$, $\omega/2\pi= (8.13\pm 0.01)GHz$, and $g/2\pi=(0.82\pm 0.03)GHz$, for our variational method (black squares) and the GRWA (red dots).}
\label{fig5}
\end{figure}
\section{Conclusions and discussions}\label{section_con}
We have introduced a mean photon number dependent variational method to evaluate the properties of the Rabi model. Our scheme combines the advantages of the existing AA/GRWA and GVM approximations. For the ground state, the trial state is the superposition of two coherent states with opposite displacements, and the key parameter $\lambda$ is determined by maximizing the projection of the assumed state onto the exact one, which has been approximated by perturbation theory. In the weak coupling regime our result is in agreement with that of the GVM, which is accurate in this regime but deviates from the exact one in the strong coupling regime. On the other hand, in the strong coupling regime, our result is consistent with that obtained by the AA/GRWA, which works well in this regime but deviates from the exact one in the weak coupling regime. In the intermediate regime, our method not only provides a natural crossover between the GVM and the AA/GRWA but also yields an obvious improvement over all of them. It is shown that the improvements for the mean photon number are even more substantial than those for the energy. Thus our method is valid in the whole coupling regime provided that the frequency of the bosonic field is not too small. In the limit of small bosonic-field frequency, neither our method nor the existing AA/GRWA and GVM works well, which indicates that the displaced oscillator basis is no longer a good trial state and one should explore a new starting point in this regime.
Although most variational methods are limited to the ground state, our variational scheme can also be applied to the excited states. For the excited states, the deviation of the GRWA in the weak coupling regime is still considerable. In contrast, our scheme remains valid over the whole coupling regime. The quantitative deviation of the GRWA energy in the weak coupling regime and the qualitative failure of the GRWA to capture the concave feature of the mean photon number are both rectified in our scheme.
In short, our variational scheme efficiently improves on several previous widely used approximations such as the AA, the GRWA and the GVM, with better qualitative and quantitative descriptions of the physics of the model. Although the integrability and exact analytical expressions for the energy spectra of the Rabi model have been obtained in Ref.\cite{Braak2011}, the series-expansion form of its wavefunction is still inconvenient for calculating physical quantities of the model. In contrast, our method starts directly from a wavefunction ansatz and emphasizes its physical meaning. For example, the ground state form of our wavefunction is not only directly related to the mean photon number but also useful for discussing the preparation of nonclassical states \cite{nori}. In particular, our method for evaluating properties of the ground state and the low excited states might be applicable to the multi-mode Rabi model, i.e., the so-called spin-boson model, where a novel quantum phase transition characterized by the low-level energies has been intensively studied recently \cite{Vojta2012,Tong2011}.
\appendix
\section{Unitary transformation on the Hamiltonian}
A unitary operator $U=e^{\lambda \sigma_z(a-a^{\dag})}$ transforms the model Hamiltonian
$H=\omega a^{\dag}a+\frac{1}{2}\Omega\sigma_x+g\sigma_z(a^{\dag}+a)$ into $\tilde{H}$ in the new representation. With the formula
\begin{equation}
e^{A}Be^{-A}=\sum_{n=0}^{\infty}\frac{C_n}{n!},
\end{equation}
where $C_n=B$ if $n=0$ and $C_{n+1}=[A, C_n]$ otherwise, one can obtain
\begin{equation}\label{atildeH}
\tilde{H}=UHU^{\dag}=\omega a^{\dag}a+(\omega\lambda+g)\sigma_z(a+a^{\dag})+(\omega \lambda^2+2g\lambda)
+\frac{1}{2}\Omega\{\sigma_x \cosh[2\lambda(a-a^{\dag})]+i\sigma_y \sinh[2\lambda(a-a^{\dag})]\}.
\end{equation}
The terms of $\cosh[2\lambda(a-a^{\dag})]$ and $\sinh[2\lambda(a-a^{\dag})]$ can be expanded in powers of $a$ and $a^{\dag}$ according to formula
\begin{equation}
e^{(A+B)}=e^Ae^Be^{-\frac{1}{2}[A,B]}.
\end{equation}
Below we will use the associated Laguerre function defined by
\begin{equation}
L_n^{\mu}(z)=\frac{(n+\mu)!}{n!\mu !}\sum_{l=0}^{\infty}\frac{(-n)(-n+1)(-n+2)\cdots (-n+l-1)}{l!(\mu+1)(\mu+2)\cdots (\mu+l)}z^l,
\end{equation}
and the Laguerre function
\begin{equation}
L_n(z)=L_n^{0}(z).
\end{equation}
For the factor $(a^{\dag})^m a^n$, one has
\begin{equation}
\left\{
\begin{array}{ll}
(a^{\dag})^m a^n=(a^{\dag})^{m-n} h_n(\hat{N}), ~~~~~~~~~~~~~~~~&m\geq n,\\
(a^{\dag})^m a^n=h_m(\hat{N})a^{n-m}, &m< n,
\end{array}
\right.
\end{equation}
where
\begin{equation}
h_n(\hat{N})=\hat{N}(\hat{N}-1)(\hat{N}-2)\cdots (\hat{N}-n+1).
\end{equation}
Here $\hat{N}=a^{\dag}a$ is the particle number operator.
Set $\nu=-2\lambda$.
\begin{equation}
\begin{aligned}
\cosh {\nu(a^{\dag}-a)}&=\frac{1}{2}[e^{\nu(a^{\dag}-a)}+e^{-\nu(a^{\dag}-a)}]\\
&=\frac{1}{2}e^{-\nu ^2/2}[e^{\nu a^{\dag}}e^{-\nu a}+e^{-\nu a^{\dag}}e^{\nu a}]\\
&=\frac{1}{2}e^{-\nu ^2/2}\sum_{m,n}^{\infty}{\frac{1}{m!n!}[\nu^m(-\nu)^n+(-\nu)^m\nu^n](a^{\dag})^m a^n}.
\end{aligned}
\end{equation}
For $m-n=2k\geq 0$,
\begin{equation}
\begin{aligned}
I_x^{+}&=\frac{1}{2}e^{-\nu ^2/2}\sum_{m,n}^{\infty}\frac{1}{m!n!}[\nu^m(-\nu)^n+(-\nu)^m\nu^n](a^{\dag})^m a^n\\
&=\frac{1}{2}e^{-\nu ^2/2}\sum_{k}^{\infty}\sum_{n}^{\infty}\frac{1}{(n+2k)!n!}[\nu^{n+2k}(-\nu)^n+(-\nu)^{(n+2k)}\nu^n](a^{\dag})^{(n+2k)} a^n\\
&=\frac{1}{2}e^{-\nu ^2/2}\sum_{k}^{\infty}\nu^{2k}(a^{\dag})^{2k}\sum_{n}^{\infty}\frac{(-)^n h_n(\hat{N})}{(n+2k)!n!}(2\nu^{2n})\\
&=e^{-\nu ^2/2}\sum_{k}^{\infty}\nu^{2k}(a^{\dag})^{2k}\frac{(\hat{N}+2k)!}{\hat{N}!}\frac{\hat{N}!}{(\hat{N}+2k)!}\sum_{n}^{\infty}\frac{(-)^n h_n(\hat{N})}{(n+2k)!n!}\nu^{2n}\\
&=e^{-\nu ^2/2}\sum_{k}^{\infty}\nu^{2k}(a^{\dag})^{2k}\frac{(\hat{N}+2k)!}{\hat{N}!}L_{\hat{N}}^{2k}(\nu^2).\\
\end{aligned}
\end{equation}
For $m-n=-2k< 0$,
\begin{equation}
\begin{aligned}
I_x^{-}&=\frac{1}{2}e^{-\nu ^2/2}\sum_{m,n}^{\infty}\frac{1}{m!n!}[\nu^m(-\nu)^n+(-\nu)^m\nu^n](a^{\dag})^m a^n\\
&=e^{-\nu ^2/2}\sum_{k}^{\infty}\nu^{2k}\frac{(\hat{N}+2k)!}{\hat{N}!}L_{\hat{N}}^{2k}(\nu^2)a^{2k}.
\end{aligned}
\end{equation}
Introducing the function
\begin{equation}
f(\nu,\hat{N},m)=e^{-\nu ^2/2}\nu^m\frac{(\hat{N}+m)!}{\hat{N}!}L_{\hat{N}}^m(\nu^2),
\end{equation}
one can combine $I_x^{+}$ and $I_x^{-}$ to obtain
\begin{equation}\label{acosh}
\cosh[\nu(a^{\dag}-a)]=I_x^{+}+I_x^{-}=f(\nu,\hat{N},0)+\sum_{k=1}^{\infty}[(a^{\dag})^{2k}f(\nu,\hat{N},2k)+f(\nu,\hat{N},2k)a^{2k}].
\end{equation}
By the same way, one has
\begin{equation}\label{asinh}
\sinh[\nu(a^{\dag}-a)]=\sum_{k=0}^{\infty}[(a^{\dag})^{2k+1}f(\nu,\hat{N},2k+1)-f(\nu,\hat{N},2k+1)a^{2k+1}].
\end{equation}
Substituting Eq.~\eqref{acosh} and Eq.~\eqref{asinh} into the transformed Hamiltonian Eq.~\eqref{atildeH}, one obtains the expansion given in Eq.~\eqref{Htilde}.
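As a quick numerical consistency check of the $k=0$ term of Eq.~\eqref{acosh} (the factor that enters the AA), one may compare the diagonal matrix elements of $\cosh[\nu(a^{\dag}-a)]$, built from the matrix exponential in a truncated Fock basis, with $e^{-\nu^{2}/2}L_{N}(\nu^{2})$; the cutoff and the value of $\nu$ below are illustrative.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from scipy.special import eval_laguerre

n_max = 80                # Fock cutoff, large enough for the chosen nu
nu = 0.7                  # nu = -2*lambda, illustrative value

a = np.diag(np.sqrt(np.arange(1, n_max)), 1)
X = nu * (a.T - a)                        # nu (a^dag - a)
cosh_op = 0.5 * (expm(X) + expm(-X))      # cosh[nu (a^dag - a)]

for N in (0, 1, 5):
    lhs = cosh_op[N, N]                                   # <N| cosh |N>
    rhs = np.exp(-nu**2 / 2) * eval_laguerre(N, nu**2)    # diagonal closed form
    print(N, lhs, rhs)
\end{verbatim}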
\end{document}
|
\begin{document}
\title[A new Generalized fractional derivative and integral]{A new
Generalized fractional derivative and integral}
\author[A. Akkurt, M.E. YILDIRIM and H. YILDIRIM]{Abdullah Akkurt$^{1,}$$
^{\ast }$, M. Esra YILDIRIM$^{2}$ and H\"{u}seyin YILDIRIM$^{1}$ }
\address{$^{1}$ Department of Mathematics, Faculty of Science and Arts,
University of Kahramanmara\c{s} S\"{u}t\c{c}\"{u} \.{I}mam, 46100,
Kahramanmara\c{s}, Turkey.}
\email{[email protected]; [email protected]}
\address{$^{2}$ Department of Mathematics, Faculty of Science, University of
Cumhuriyet, 58140, Sivas, Turkey.}
\email{[email protected]}
\keywords{\textbf{\thanks{\textbf{2010 Mathematics Subject Classification }
26A33, 26D10, 26D15.} }Fractional Calculus, Fractional derivative,
Conformable Fractional Integral.}
\date{03.03.2017\\
\indent$^{\ast }$ Corresponding author}
\begin{abstract}
In this article, we introduce a new general definition of fractional
derivative and fractional integral, which depends on an unknown kernel. By
using these definitions, we obtain the basic properties of fractional
integral and fractional derivative such as Product Rule, Quotient Rule,
Chain Rule, Rolle's Theorem and Mean Value Theorem. We give some examples.
\end{abstract}
\maketitle
\setcounter{page}{1}
\section{Introduction}
The main aim of this paper is to introduce a limit definition of the
derivative of a function which obeys classical properties including:
linearity, the Product Rule, the Quotient Rule, the Chain Rule, Rolle's Theorem and the Mean
Value Theorem.
Today, there are many fractional integral and fractional derivative
definitions such as Riemann-Liouville, Caputo, Gr\"{u}nwald-Letnikov,
Hadamard, Riesz. For these, please see \cite{Kilbas}, \cite{Katugampola1},
\cite{Samko}. For more information on the Fractional Calculus, please see (
\cite{Akkurt}, \cite{Abdel}, \cite{iyiola}, \cite{Hammad}, \cite{Hammad1}).
However, these fractional derivatives do not satisfy all of the classical properties, such as the
Product Rule, Quotient Rule, Chain Rule, Rolle's Theorem and the Mean Value
Theorem.
To overcome some of these and other difficulties, Khalil et al. \cite{Khalil}
came up with an interesting idea that extends the familiar limit
definition of the derivative of a function, given by the following operator $T_{\alpha
}$:
\begin{equation}
T_{\alpha }\left( f\right) \left( t\right) =\lim_{\varepsilon \rightarrow 0}
\frac{f\left( t+\varepsilon t^{1-\alpha }\right) -f\left( t\right) }{
\varepsilon }. \label{a}
\end{equation}
In \cite{Almeida}, Almeida et al. introduced a limit definition of the
derivative of a function as follows,
\begin{equation}
f^{\left( \alpha \right) }\left( t\right) =\lim_{\varepsilon \rightarrow 0}
\frac{f\left( t+\varepsilon k\left( t\right) ^{1-\alpha }\right) -f\left(
t\right) }{\varepsilon }. \label{b}
\end{equation}
Recently, in \cite{Katugampola} Katugampola introduced the following
fractional derivative:
\begin{equation}
D_{\alpha }\left( f\right) \left( t\right) =\lim_{\varepsilon \rightarrow 0}
\frac{f\left( te^{\varepsilon t^{-\alpha }}\right) -f\left( t\right) }{
\varepsilon }. \label{c}
\end{equation}
\section{Generalized new fractional derivative}
In this paper, we introduce a new fractional derivative which generalizes
the results obtained in \cite{Almeida}, \cite{Katugampola}, \cite{Khalil}.
In this section we present the definition of the Generalized new fractional
derivative and introduce the Generalized new fractional integral. We
provide representations for the Product Rule, Quotient Rule, Chain Rule,
Rolle's Theorem and the Mean Value Theorem. Also, we give some applications.
\begin{definition}
\label{d1} Let $k:[a,b]\rightarrow
\mathbb{R}
$ be a continuous nonnegative map such that $k\left( t\right) ,$ $k^{\prime
}\left( t\right) \neq 0,$ whenever $t>a.$ Given a function $
f:[a,b]\rightarrow
\mathbb{R}
\ $and $\alpha \in \left( 0,1\right) \ $a real, we say that the generalized
fractional derivative\ of $f$ of order $\alpha $ is defined by,
\begin{equation}
D^{\alpha }\left( f\right) \left( t\right) :=\lim_{\varepsilon \rightarrow 0}
\frac{f\left( t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{
\left( k\left( t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }
}\right) -f\left( t\right) }{\varepsilon } \label{1.8}
\end{equation}
whenever the limit exists$.\ $If $f$ is $\alpha -$differentiable in some $\left( 0,a\right) ,\
\alpha >0,\ $and $\lim\limits_{t\rightarrow 0^{+}}f^{\left( \alpha \right) }\left(
t\right) \ $exists, then we define
\begin{equation}
f^{\left( \alpha \right) }\left( 0\right) =\lim\limits_{t\rightarrow
0^{+}}f^{\left( \alpha \right) }\left( t\right) . \label{1.9}
\end{equation}
We can write $f^{\left( \alpha \right) }\left( t\right) $ for $D^{\alpha
}\left( f\right) \left( t\right) $ to denote the generalized fractional
derivatives of $f$ of order $\alpha $.
\end{definition}
\begin{remark}
When $k\left( t\right) =t\ $in (\ref{1.8}), it reduces to the fractional
derivative introduced in \cite{Katugampola}.
\end{remark}
\begin{remark}
When $\alpha \rightarrow 1\ $and $k\left( t\right) =t\ $in (\ref{1.8}), it
reduces to the classical derivative of a function,\ $
f^{\left( \alpha \right) }\left( t\right) =f^{\prime }\left( t\right) .$
\end{remark}
\begin{theorem}
Let $f:[a,b]\rightarrow
\mathbb{R}
\ $be a$\ $differentiable function and $t>a.\ $Then, $f\ $is $\alpha -$
differentiable at $t\ $and
\begin{equation*}
f^{\left( \alpha \right) }\left( t\right) =\frac{\left( k\left( t\right)
\right) ^{1-\alpha }}{k^{\prime }\left( t\right) }\frac{df}{dt}(t).
\end{equation*}
Also, if $f^{\prime }$\ is continuous at $t=a,\ $then
\begin{equation*}
f^{\left( \alpha \right) }\left( a\right) =\frac{\left( k\left( a\right)
\right) ^{1-\alpha }}{k^{\prime }\left( a\right) }\frac{df}{dt}(a).
\end{equation*}
\end{theorem}
\begin{proof}
From definition \ref{d1}, we have
\begin{eqnarray*}
D^{\alpha }\left( f\right) \left( t\right) &=&\lim_{\epsilon \rightarrow 0}
\frac{f\left( t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{
\left( k\left( t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }
}\right) -f\left( t\right) }{\epsilon } \\
&& \\
&=&\lim_{\epsilon \rightarrow 0}\frac{f\left( t-k\left( t\right) +k\left( t\right)
\left[ 1+\varepsilon \frac{\left( k\left( t\right) \right) ^{-\alpha }}{
k^{\prime }\left( t\right) }+\frac{\left( \varepsilon \frac{\left( k\left(
t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }\right) ^{2}}{2!}
+...\right] \right) -f\left( t\right) }{\varepsilon } \\
&& \\
&=&\lim_{\epsilon \rightarrow 0}\frac{f\left( t+\epsilon \frac{\left(
k\left( t\right) \right) ^{1-\alpha }}{k^{\prime }\left( t\right) }\left[
1+O\left( \epsilon \right) \right] \right) -f\left( t\right) }{\epsilon }.
\end{eqnarray*}
Taking
\begin{equation*}
h=\epsilon \frac{\left( k\left( t\right) \right) ^{1-\alpha }}{k^{\prime
}\left( t\right) }\left[ 1+O\left( \epsilon \right) \right]
\end{equation*}
we have,
\begin{eqnarray*}
D^{\alpha }\left( f\right) \left( t\right) &=&\lim_{\epsilon \rightarrow 0}
\frac{f\left( t+h\right) -f\left( t\right) }{\frac{k^{\prime }\left(
t\right) \left( k\left( t\right) \right) ^{\alpha -1}h}{1+O\left( \epsilon
\right) }} \\
&& \\
&=&\frac{\left( k\left( t\right) \right) ^{1-\alpha }}{k^{\prime }\left(
t\right) }\frac{df}{dt}(t).
\end{eqnarray*}
\end{proof}
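The closed form above is easy to check numerically. The following Python sketch uses an illustrative kernel $k(t)=t^{2}$ and $f(t)=\sin t$ (these choices are ours, not from the text) and compares the limit definition, evaluated at a small but finite $\varepsilon$, with the closed-form expression.
\begin{verbatim}
import numpy as np

# Sketch: compare the limit definition (1.8) with the closed form
# D^alpha(f)(t) = k(t)^(1-alpha)/k'(t) * f'(t) for k(t)=t^2, f(t)=sin(t).
alpha = 0.5
k, dk = lambda t: t**2, lambda t: 2 * t
f, df = lambda t: np.sin(t), lambda t: np.cos(t)

def D_alpha_limit(f, t, eps=1e-6):
    shifted = t - k(t) + k(t) * np.exp(eps * k(t)**(-alpha) / dk(t))
    return (f(shifted) - f(t)) / eps

t = 1.3
print(D_alpha_limit(f, t))                 # approximately 0.1337
print(k(t)**(1 - alpha) / dk(t) * df(t))   # approximately 0.1337
\end{verbatim}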
\begin{theorem}
If a function $f:[a,b]\rightarrow
\mathbb{R}
\ $is $\alpha -$differentiable at $a>0,\ \alpha \in \left( 0,1\right] ,\ $
then $f$\ is continuous at $a.$
\end{theorem}
\begin{proof}
Since
\begin{equation*}
f\left( a-k\left( a\right) +k\left( a\right) e^{\varepsilon \frac{\left(
k\left( a\right) \right) ^{-\alpha }}{k^{\prime }\left( a\right) }}\right)
-f\left( a\right) =\tfrac{f\left( a-k\left( a\right) +k\left( a\right)
e^{\varepsilon \frac{\left( k\left( a\right) \right) ^{-\alpha }}{k^{\prime
}\left( a\right) }}\right) -f\left( a\right) }{\epsilon }\epsilon ,
\end{equation*}
we have
\begin{equation*}
\lim_{\epsilon \rightarrow 0}\left[ f\left( a-k\left( a\right) +k\left(
a\right) e^{\varepsilon \frac{\left( k\left( a\right) \right) ^{-\alpha }}{
k^{\prime }\left( a\right) }}\right) -f\left( a\right) \right]
=\lim_{\epsilon \rightarrow 0}\tfrac{\left[ f\left( a-k\left( a\right)
+k\left( a\right) e^{\varepsilon \frac{\left( k\left( a\right) \right)
^{-\alpha }}{k^{\prime }\left( a\right) }}\right) -f\left( a\right) \right]
}{\epsilon }\lim_{\epsilon \rightarrow 0}\epsilon .
\end{equation*}
Let $h=\epsilon \frac{\left( k\left( t\right) \right) ^{1-\alpha }}{
k^{\prime }\left( t\right) }\left[ 1+O\left( \epsilon \right) \right] .\ $
Then,
\begin{equation*}
\lim_{h\rightarrow 0}\left[ f\left( a+h\right) -f\left( a\right) \right]
=D^{\alpha }\left( f\right) \left( a\right) \cdot 0=0
\end{equation*}
and
\begin{equation*}
\lim_{h\rightarrow 0}f\left( a+h\right) =f\left( a\right) .
\end{equation*}
This completes the proof.
\end{proof}
\begin{theorem}
Let $\alpha \in \left( 0,1\right] $ and $f,g$ be $\alpha -$differentiable at
a point $t>0$. Then,
\end{theorem}
$1.\ D^{\alpha }\left( af+bg\right) \left( t\right) =aD^{\alpha }\left(
f\right) \left( t\right) +bD^{\alpha }\left( g\right) \left( t\right) ,\ $
for all $a,b\in
\mathbb{R}
\ $(linearity)$.$
$2.D^{\alpha }\left( t^{n}\right) =\frac{\left( k\left( t\right) \right)
^{1-\alpha }}{k^{\prime }\left( t\right) }nt^{n-1}\ $for all $n\in
\mathbb{R}
.$
$3.\ D^{\alpha }\left( c\right) =0,\ $for all constant functions\ $f\left(
t\right) =c.$
$4.\ D^{\alpha }\left( fg\right) \left( t\right) =f\left( t\right) D^{\alpha
}\left( g\right) \left( t\right) +g\left( t\right) D^{\alpha }\left(
f\right) \left( t\right) \ $(Product Rule)$.$
$5.\ D^{\alpha }\left( \dfrac{f}{g}\right) \left( t\right) =\dfrac{g\left(
t\right) D^{\alpha }\left( f\right) \left( t\right) -f\left( t\right)
D^{\alpha }\left( g\right) \left( t\right) }{\left[ g\left( t\right) \right]
^{2}}\ $(Quotient Rule)$.$
$6.\ D^{\alpha }\left( f\circ g\right) \left( t\right) =\frac{\left( k\left(
t\right) \right) ^{1-\alpha }}{k^{\prime }\left( t\right) }f^{\prime }\left(
g\left( t\right) \right) g^{\prime }\left( t\right) $ (Chain
rule).
\begin{proof}
Part (1) and (3) follow directly from the definition. Let us prove (2), (4),
(5) and (6) respectively. Now, for fixed $\alpha \in \left( 0,1\right] ,\
n\in
\mathbb{R}
\ $and $t>0,\ $we have
\begin{eqnarray*}
D^{\alpha }\left( t^{n}\right) &=&\lim_{\epsilon \rightarrow 0}\frac{\left(
t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{\left( k\left(
t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }}\right) ^{n}-t^{n}
}{\epsilon } \\
&& \\
&=&\lim_{\epsilon \rightarrow 0}\frac{\left( t+\epsilon \frac{\left( k\left(
t\right) \right) ^{1-\alpha }}{k^{\prime }\left( t\right) }\left[ 1+O\left(
\epsilon \right) \right] \right) ^{n}-t^{n}}{\epsilon } \\
&& \\
&=&\frac{\left( k\left( t\right) \right) ^{1-\alpha }}{k^{\prime }\left(
t\right) }nt^{n-1}.
\end{eqnarray*}
This completes the proof of (2). Then, we shall prove (4). To this end, since $
f,g$ are $\alpha -$differentiable at $t>0$, note that,
\begin{eqnarray*}
&&D^{\alpha }\left( fg\right) \left( t\right) \\
&=&\lim_{\epsilon \rightarrow 0}\tfrac{f\left( t-k\left( t\right) +k\left(
t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right) ^{-\alpha }}{
k^{\prime }\left( t\right) }}\right) g\left( t-k\left( t\right) +k\left(
t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right) ^{-\alpha }}{
k^{\prime }\left( t\right) }}\right) -f\left( t\right) g\left( t\right) }{
\epsilon } \\
&& \\
&=&\lim_{\epsilon \rightarrow 0}\left[ \tfrac{f\left( t-k\left( t\right)
+k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right)
^{-\alpha }}{k^{\prime }\left( t\right) }}\right) g\left( t-k\left( t\right)
+k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right)
^{-\alpha }}{k^{\prime }\left( t\right) }}\right) -f\left( t\right) g\left(
t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{\left( k\left(
t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }}\right) }{
\epsilon }\right. \\
&& \\
&&+\left. \tfrac{f\left( t\right) g\left( t-k\left( t\right) +k\left(
t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right) ^{-\alpha }}{
k^{\prime }\left( t\right) }}\right) -f\left( t\right) g\left( t\right) }{
\epsilon }\right] \\
&& \\
&=&\lim_{\epsilon \rightarrow 0}\left[ \tfrac{f\left( t-k\left( t\right)
+k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right)
^{-\alpha }}{k^{\prime }\left( t\right) }}\right) -f\left( t\right) }{
\epsilon }g\left( t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{
\left( k\left( t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }
}\right) \right] \\
&& \\
&&+f\left( t\right) \lim_{\epsilon \rightarrow 0}\tfrac{g\left( t-k\left(
t\right) +k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right)
\right) ^{-\alpha }}{k^{\prime }\left( t\right) }}\right) -g\left( t\right)
}{\epsilon } \\
&& \\
&=&D^{\alpha }\left( f\right) \left( t\right) \lim_{\epsilon \rightarrow 0}
\left[ g\left( t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{
\left( k\left( t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }
}\right) \right] +f\left( t\right) D^{\alpha }\left( g\right) \left( t\right)
\\
&& \\
&=&g\left( t\right) D^{\alpha }\left( f\right) \left( t\right) +f\left(
t\right) D^{\alpha }\left( g\right) \left( t\right) .
\end{eqnarray*}
Since $g$ is continuous at $t$, $\lim_{\epsilon \rightarrow 0}\left[ g\left(
t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right)
^{-\alpha }}{k^{\prime }\left( t\right) }}\right) \right] =g\left( t\right) .$ This completes the proof of
(4). Next, we prove (5). Similarly,
\begin{eqnarray*}
&&D^{\alpha }\left( \frac{f}{g}\right) \left( t\right) \\
&=&\lim_{\epsilon \rightarrow 0}\frac{\frac{f\left( t-k\left( t\right)
+k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right)
^{-\alpha }}{k^{\prime }\left( t\right) }}\right) }{g\left( t-k\left(
t\right) +k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right)
\right) ^{-\alpha }}{k^{\prime }\left( t\right) }}\right) }-\frac{f\left(
t\right) }{g\left( t\right) }}{\epsilon } \\
&& \\
&=&\lim_{\epsilon \rightarrow 0}\tfrac{f\left( t-k\left( t\right) +k\left(
t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right) ^{-\alpha }}{
k^{\prime }\left( t\right) }}\right) g\left( t\right) -f\left( t\right)
g\left( t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{\left(
k\left( t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }}\right) }{
\epsilon g\left( t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{
\left( k\left( t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }
}\right) g\left( t\right) } \\
&& \\
&=&\lim_{\epsilon \rightarrow 0}\tfrac{f\left( t-k\left( t\right) +k\left(
t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right) ^{-\alpha }}{
k^{\prime }\left( t\right) }}\right) g\left( t\right) -f\left( t\right)
g\left( t\right) +f\left( t\right) g\left( t\right) -f\left( t\right)
g\left( t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{\left(
k\left( t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }}\right) }{
\epsilon g\left( t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{
\left( k\left( t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }
}\right) g\left( t\right) } \\
&& \\
&=&\lim_{\epsilon \rightarrow 0}\frac{1}{g\left( t-k\left( t\right) +k\left(
t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right) ^{-\alpha }}{
k^{\prime }\left( t\right) }}\right) g\left( t\right) } \\
&& \\
&&\times \left[ \tfrac{f\left( t-k\left( t\right) +k\left( t\right)
e^{\varepsilon \frac{\left( k\left( t\right) \right) ^{-\alpha }}{k^{\prime
}\left( t\right) }}\right) -f\left( t\right) }{\epsilon }g\left( t\right)
+f\left( t\right) \tfrac{g\left( t\right) -g\left( t-k\left( t\right)
+k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right)
^{-\alpha }}{k^{\prime }\left( t\right) }}\right) }{\epsilon }\right] \\
&& \\
&=&\dfrac{g\left( t\right) D^{\alpha }\left( f\right) \left( t\right)
-f\left( t\right) D^{\alpha }\left( g\right) \left( t\right) }{\left(
g\left( t\right) \right) ^{2}}.
\end{eqnarray*}
We have implicitly assumed here that $f^{\left( \alpha \right) }$ and $
g^{\left( \alpha \right) }$ exist and that $g\left( t\right) \neq 0.\ $
Finally, we prove (6). We have from the definition that
\begin{eqnarray*}
D^{\alpha }\left( f\circ g\right) \left( t\right) &=&\lim_{\epsilon
\rightarrow 0}\frac{\left( f\circ g\right) \left( t-k\left( t\right)
+k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right)
^{-\alpha }}{k^{\prime }\left( t\right) }}\right) -\left( f\circ g\right)
\left( t\right) }{\epsilon } \\
&=&\lim_{\epsilon \rightarrow 0}\frac{\left( f\circ g\right) \left(
t+\epsilon \frac{\left( k\left( t\right) \right) ^{1-\alpha }}{k^{\prime
}\left( t\right) }\left[ 1+O\left( \epsilon \right) \right] \right) -\left(
f\circ g\right) \left( t\right) }{\epsilon }.
\end{eqnarray*}
Let $h=\epsilon \frac{\left( k\left( t\right) \right) ^{1-\alpha }}{
k^{\prime }\left( t\right) }\left[ 1+O\left( \epsilon \right) \right] \ $
such that
\begin{eqnarray*}
D^{\alpha }\left( f\circ g\right) \left( t\right) &=&\lim_{\epsilon
\rightarrow 0}\frac{\left( f\circ g\right) \left( t+\epsilon \frac{\left(
k\left( t\right) \right) ^{1-\alpha }}{k^{\prime }\left( t\right) }\left[
1+O\left( \epsilon \right) \right] \right) -\left( f\circ g\right) \left(
t\right) }{\epsilon } \\
&=&\lim_{h\rightarrow 0}\frac{\left( f\circ g\right) \left( t+h\right)
-\left( f\circ g\right) \left( t\right) }{\frac{k^{\prime }\left( t\right)
\left( k\left( t\right) \right) ^{\alpha -1}h}{1+O\left( \epsilon \right) }}.
\end{eqnarray*}
Therefore, we have
\begin{equation*}
D^{\alpha }\left( f\circ g\right) \left( t\right) =\frac{\left( k\left(
t\right) \right) ^{1-\alpha }}{k^{\prime }\left( t\right) }f^{\prime }\left(
g\left( t\right) \right) g^{\prime }\left( t\right) .
\end{equation*}
This completes the proof of the theorem.
\end{proof}
Now, we will give the derivatives of some special functions.
\begin{theorem}
\label{thm1} Let $a,n\in
\mathbb{R}
\ $and $\alpha \in \left( 0,1\right] .\ $Then we have the following results.
\end{theorem}
$1.\ D^{\alpha }\left( 1\right) =0,$
$2.\ D^{\alpha }\left( e^{ax}\right) =a\frac{\left( k\left( x\right) \right)
^{1-\alpha }}{k^{\prime }\left( x\right) }e^{ax},$
$3.\ D^{\alpha }\left( \sin (ax)\right) =a\frac{\left( k\left( x\right)
\right) ^{1-\alpha }}{k^{\prime }\left( x\right) }\cos (ax),$
$4.\ D^{\alpha }\left( \cos (ax)\right) =-a\frac{\left( k\left( x\right)
\right) ^{1-\alpha }}{k^{\prime }\left( x\right) }\sin (ax),$
$5.\ D^{\alpha }\left( \log _{a}bx\right) =\dfrac{1}{x}\frac{\left( k\left(
x\right) \right) ^{1-\alpha }}{k^{\prime }\left( x\right) }\frac{1}{\ln a},$
$6.\ D^{\alpha }\left( a^{bx}\right) =b\frac{\left( k\left( x\right) \right)
^{1-\alpha }}{k^{\prime }\left( x\right) }a^{bx}\ln a.$
When $\alpha =1\ $and $k\left( t\right) =t\ $in Theorem \ref{thm1}, these
formulas reduce to the classical derivatives.
\begin{theorem}[Rolle's theorem for $\protect\alpha -$generalized Fractional
Differentiable functions]
\label{thm2} Let $a>0\ $and $f:[a,b]\rightarrow
\mathbb{R}
$ be a function with the properties that,
\end{theorem}
1. $f$ is continuous on $[a,b],$
2. $f$ is $\alpha $-differentiable on $\left( a,b\right) \ $for some $
\alpha \in \left( 0,1\right) ,$
3. $f(a)=f(b).$
Then, there exist $c\in \left( a,b\right) ,\ $such that $D^{\alpha }\left(
f\right) \left( c\right) =0.$
\begin{proof}
We prove this by examining the one-sided limits at a local extremum. Since $f$ is continuous
on $[a,b]$ and $f(a)=f(b)$, there is $c\in \left( a,b\right) $ at which the
function has a local extremum. Then,
\begin{equation*}
D^{\alpha }\left( f\right) \left( c\right) =\lim_{\epsilon \rightarrow 0^{-}}
\tfrac{\left[ f\left( c-k\left( c\right) +k\left( c\right) e^{\varepsilon
\frac{\left( k\left( c\right) \right) ^{-\alpha }}{k^{\prime }\left(
c\right) }}\right) -f\left( c\right) \right] }{\epsilon }=\lim_{\epsilon
\rightarrow 0^{+}}\tfrac{\left[ f\left( c-k\left( c\right) +k\left( c\right)
e^{\varepsilon \frac{\left( k\left( c\right) \right) ^{-\alpha }}{k^{\prime
}\left( c\right) }}\right) -f\left( c\right) \right] }{\epsilon }.
\end{equation*}
But the two one-sided limits have opposite signs, and since $f$ is $\alpha $-differentiable at $c$ they must be equal. Hence, $D^{\alpha }\left( f\right)
\left( c\right) =0.$
\end{proof}
When $\alpha =1\ $and $k\left( t\right) =t\ $in Theorem \ref{thm2}, it turns
out to be the classical Rolle's Theorem.
\begin{theorem}[Mean value theorem for Generalized fractional differentiable
functions]
Let $\alpha \in (0,1]$ and $f:[a,b]\rightarrow
\mathbb{R}
$ be a continuous on $[a,b]$ and an $\alpha $-generalized fractional
differentiable mapping on $\left( a,b\right) $ with $0\leq a<b.\ $Let $
k:[a,b]\rightarrow
\mathbb{R}
$ be a continuous nonnegative map such that $k\left( t\right) ,$ $k^{\prime
}\left( t\right) \neq 0.$ Then, there exists $c\in (a,b)$, such that
\begin{equation}
D^{\alpha }\left( f\right) \left( c\right) =\frac{f(b)-f(a)}{\frac{k^{\alpha
}\left( b\right) }{\alpha }-\frac{k^{\alpha }\left( a\right) }{\alpha }}.
\label{m1}
\end{equation}
\end{theorem}
\begin{proof}
Let $h$ be a constant. Consider the function,
\begin{equation}
G\left( x\right) =f\left( x\right) +h\frac{k^{\alpha }\left( x\right) }{
\alpha }. \label{m2}
\end{equation}
$G$ is continuous on $[a,b]\ $and $\alpha $-differentiable on $\left(
a,b\right) $. Here, if we choose $G\left( a\right) =G\left( b\right) ,\ $
then we have
\begin{equation*}
f\left( a\right) +h\frac{k^{\alpha }\left( a\right) }{\alpha }=f\left(
b\right) +h\frac{k^{\alpha }\left( b\right) }{\alpha }.
\end{equation*}
Thus,
\begin{equation}
h=-\frac{f\left( b\right) -f\left( a\right) }{\frac{k^{\alpha }\left(
b\right) }{\alpha }-\frac{k^{\alpha }\left( a\right) }{\alpha }}. \label{m3}
\end{equation}
Using (\ref{m3}) in (\ref{m2}), it follows that
\begin{equation}
G\left( x\right) =f\left( x\right) -\frac{f\left( b\right) -f\left( a\right)
}{\frac{k^{\alpha }\left( b\right) }{\alpha }-\frac{k^{\alpha }\left(
a\right) }{\alpha }}\frac{k^{\alpha }\left( x\right) }{\alpha }. \label{m4}
\end{equation}
\begin{eqnarray*}
D^{\alpha }\left( G\right) \left( x\right) &=&D^{\alpha }\left( f\right)
\left( x\right) -\frac{f\left( b\right) -f\left( a\right) }{\frac{k^{\alpha
}\left( b\right) }{\alpha }-\frac{k^{\alpha }\left( a\right) }{\alpha }}
D^{\alpha }\left( \frac{k^{\alpha }\left( x\right) }{\alpha }\right) \\
&=&D^{\alpha }\left( f\right) \left( x\right) -\frac{f\left( b\right)
-f\left( a\right) }{\frac{k^{\alpha }\left( b\right) }{\alpha }-\frac{
k^{\alpha }\left( a\right) }{\alpha }}\frac{\left( k\left( t\right) \right)
^{1-\alpha }}{k^{\prime }\left( t\right) }\frac{d}{dt}\left( \frac{k^{\alpha
}\left( x\right) }{\alpha }\right) \\
&=&D^{\alpha }\left( f\right) \left( x\right) -\frac{f\left( b\right)
-f\left( a\right) }{\frac{k^{\alpha }\left( b\right) }{\alpha }-\frac{
k^{\alpha }\left( a\right) }{\alpha }}.
\end{eqnarray*}
Then, the function $G$ satisfies the conditions of the generalized fractional
Rolle's theorem.\ Hence, there exist $c\in \left( a,b\right) ,$ such that $
D^{\alpha }\left( G\right) \left( c\right) =0.$\ Using the fact that\ $
D^{\alpha }\left( \frac{k^{\alpha }\left( x\right) }{\alpha }\right) =1$, we
have
\begin{equation*}
f^{\left( \alpha \right) }\left( c\right) =\frac{f\left( b\right) -f\left(
a\right) }{\frac{k^{\alpha }\left( b\right) }{\alpha }-\frac{k^{\alpha
}\left( a\right) }{\alpha }}.
\end{equation*}
Therefore, we get desired result.
\end{proof}
When $\alpha =1\ $and $k\left( t\right) =t$ in the above theorem, it turns
out to be the classical Mean Value Theorem.
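As an illustration of the generalized mean value theorem, the following Python sketch locates the point $c$ for the example choices $k(t)=t^{2}$, $f(t)=t^{3}$ on $[1,2]$ with $\alpha =1/2$ (these choices are ours, not from the text).
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Sketch: find c with D^alpha(f)(c) = (f(b)-f(a)) / ((k(b)^alpha - k(a)^alpha)/alpha)
# for the illustrative choices k(t)=t^2, f(t)=t^3, alpha=1/2 on [1, 2].
alpha, a, b = 0.5, 1.0, 2.0
k, dk = lambda t: t**2, lambda t: 2 * t
f, df = lambda t: t**3, lambda t: 3 * t**2

D_alpha = lambda t: k(t)**(1 - alpha) / dk(t) * df(t)        # closed form of the derivative
rhs = (f(b) - f(a)) / ((k(b)**alpha - k(a)**alpha) / alpha)  # right-hand side of the theorem

c = brentq(lambda t: D_alpha(t) - rhs, a, b)
print(c, D_alpha(c), rhs)   # c ~ 1.5275 and D^alpha(f)(c) = rhs = 3.5
\end{verbatim}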
\section{Generalized new fractional integral}
Now we introduce the generalized fractional integral as follows:
\begin{definition}[Generalized Fractional Integral]
\label{d2} Let $a\geq 0\ $and $t\geq a.\ $Also, let $f$ be a function
defined on $(a,t]$\ and $\alpha \in
\mathbb{R}
.\ $Let $k:[a,b]\rightarrow
\mathbb{R}
$ be a continuous nonnegative map such that $k\left( t\right) ,$ $k^{\prime
}\left( t\right) \neq 0.$\ Then, the $\alpha -$generalized fractional
integral of $f$ is defined by,
\begin{equation*}
I^{\alpha }\left( f\right) \left( t\right) =\int\limits_{a}^{t}\frac{
k^{\prime }\left( x\right) f\left( x\right) }{\left( k\left( x\right)
\right) ^{1-\alpha }}dx
\end{equation*}
if the Riemann improper integral exists.
\end{definition}
\begin{theorem}[Inverse property]
Let $a\geq 0\ $and $\alpha \in (0,1).$ Also, let $f$ be a continuous function
such that $I^{\alpha }f$ exists. Let $k:[a,b]\rightarrow
\mathbb{R}
$ be a continuous nonnegative map such that $k\left( t\right) ,$ $k^{\prime
}\left( t\right) \neq 0.\ $Then, for all $t>a,$ we have
\begin{equation*}
D^{\alpha }\left[ I^{\alpha }f\left( t\right) \right] =f\left( t\right) .
\end{equation*}
\end{theorem}
\begin{proof}
Since $f$ is continuous, $I^{\alpha }f\left( t\right) $ is clearly
differentiable. Hence,
\begin{eqnarray*}
D^{\alpha }\left[ I^{\alpha }\left( f\right) \left( t\right) \right] &=&\frac{\left(
k\left( t\right) \right) ^{1-\alpha }}{k^{\prime }\left( t\right) }\frac{d}{
dt}I^{\alpha }\left( f\right) (t) \\
&& \\
&=&\frac{\left( k\left( t\right) \right) ^{1-\alpha }}{k^{\prime }\left(
t\right) }\frac{d}{dt}\int\limits_{a}^{t}\frac{f\left( x\right) k^{\prime
}\left( x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx \\
&& \\
&=&\frac{\left( k\left( t\right) \right) ^{1-\alpha }}{k^{\prime }\left(
t\right) }\frac{f\left( t\right) k^{\prime }\left( t\right) }{\left( k\left(
t\right) \right) ^{1-\alpha }} \\
&& \\
&=&f\left( t\right) .
\end{eqnarray*}
\end{proof}
\begin{theorem}
\label{T2} Let $f:(a,b)\rightarrow
\mathbb{R}
$ be differentiable and $0<\alpha \leq 1$. Let $k:[a,b]\rightarrow
\mathbb{R}
$ be a continuous nonnegative map such that $k\left( t\right) ,$ $k^{\prime
}\left( t\right) \neq 0.\ $Then, for all $t>a$ we have
\begin{equation}
I^{\alpha }\left[ D^{\alpha }\left( f\right) \left( t\right) \right] =f\left( t\right)
-f\left( a\right) . \label{1.12}
\end{equation}
\end{theorem}
\begin{proof}
\begin{eqnarray*}
I^{\alpha }\left[ D^{\alpha }\left( f\right) \left( t\right) \right] &=&\int
\limits_{a}^{t}\frac{k^{\prime }\left( x\right) }{\left( k\left( x\right)
\right) ^{1-\alpha }}D^{\alpha }\left( f\right) (x)dx \\
&& \\
&=&\int\limits_{a}^{t}\frac{k^{\prime }\left( x\right) }{\left( k\left(
x\right) \right) ^{1-\alpha }}\frac{\left( k\left( x\right) \right)
^{1-\alpha }}{k^{\prime }\left( x\right) }\frac{df}{dx}(x)dx \\
&& \\
&=&\int\limits_{a}^{t}\frac{df}{dx}(x)dx \\
&& \\
&=&f\left( t\right) -f\left( a\right) .
\end{eqnarray*}
\end{proof}
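Both inverse properties can be verified numerically. The Python sketch below uses the illustrative choices $k(x)=x^{2}$, $f(x)=\cos x$ and $[a,b]=[0.5,2]$ (ours, not from the text), computing the generalized fractional integral with standard quadrature.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Sketch: check D^alpha[I^alpha f] = f and I^alpha[D^alpha f] = f(t) - f(a)
# for the illustrative choices k(x)=x^2, f(x)=cos(x), alpha=0.4, a=0.5.
alpha, a = 0.4, 0.5
k, dk = lambda x: x**2, lambda x: 2 * x
f, df = lambda x: np.cos(x), lambda x: -np.sin(x)

def I_alpha(g, t):
    """I^alpha(g)(t) = int_a^t g(x) k'(x) / k(x)^(1-alpha) dx."""
    return quad(lambda x: g(x) * dk(x) / k(x)**(1 - alpha), a, t)[0]

D_alpha = lambda g, dg, x: k(x)**(1 - alpha) / dk(x) * dg(x)  # closed form

t = 1.7
print(I_alpha(lambda x: D_alpha(f, df, x), t), f(t) - f(a))   # both ~ f(t) - f(a)
h = 1e-5                                                      # numerical d/dt of I^alpha f
dI = (I_alpha(f, t + h) - I_alpha(f, t - h)) / (2 * h)
print(k(t)**(1 - alpha) / dk(t) * dI, f(t))                   # both ~ f(t)
\end{verbatim}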
\begin{theorem}
(\textbf{Integration by parts}) Let $f,g:[a,b]\rightarrow
\mathbb{R}
$ be two functions such that $fg$ is differentiable. Then
\begin{equation*}
\int_{a}^{b}f\left( x\right) D^{\alpha }\left( g\right) \left( x\right)
d_{\alpha }x=\left. fg\right\vert _{a}^{b}-\int_{a}^{b}g\left( x\right)
D^{\alpha }\left( f\right) \left( x\right) d_{\alpha }x.
\end{equation*}
\end{theorem}
\begin{proof}
The proof is done in a similar way in \cite{Abdel}.
\end{proof}
\begin{theorem}
\label{T1} Let $f$ and $g$ be functions satisfying the following
\end{theorem}
$\left( a\right) $ $f$ and $g$ are continuous on $[a,b],$
$\left( b\right) $ $f$ and $g$ are bounded and integrable on $[a,b].$
In addition$,\ $let $g(x)$ be nonnegative (or nonpositive) on $[a,b]$. Let $
k:[a,b]\rightarrow
\mathbb{R}
$ be a continuous nonnegative map such that $k\left( t\right) ,$ $k^{\prime
}\left( t\right) \neq 0.$ Let us set $m=\inf \{f(x):x\in \lbrack a,b]\}$ and
$M=\sup \{f(x):x\in \lbrack a,b]\}.\ $Then there exists a number $\xi \in
\left[ m,M\right] $ such that
\begin{equation}
\int\limits_{a}^{b}\frac{f\left( x\right) g\left( x\right) k^{\prime
}\left( x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx=\xi
\int\limits_{a}^{b}\frac{g\left( x\right) k^{\prime }\left( x\right) }{
\left( k\left( x\right) \right) ^{1-\alpha }}dx. \label{t1}
\end{equation}
If $f$ is continuous on $[a,b],$ then there exists $x_{0}\in \left[ a,b\right] $ such that
\begin{equation}
\int\limits_{a}^{b}\frac{f\left( x\right) g\left( x\right) k^{\prime
}\left( x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx=f\left(
x_{0}\right) \int\limits_{a}^{b}\frac{g\left( x\right) k^{\prime }\left(
x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx. \label{t2}
\end{equation}
\begin{proof}
Let $m=\inf f$, $M=\sup f\ $and $g(x)\geq 0\ $in $[a,b].\ $Then, we get
\begin{equation}
mg(x)<f(x)g(x)<Mg(x). \label{t4}
\end{equation}
Multiplying (\ref{t4}) by $\frac{k^{\prime }\left( x\right) }{\left( k\left(
x\right) \right) ^{1-\alpha }}\ $and integrating (\ref{t4}) with respect to $
x$ over $(a,b)$, we obtain:
\begin{equation}
m\int\limits_{a}^{b}\frac{g\left( x\right) k^{\prime }\left( x\right) }{
\left( k\left( x\right) \right) ^{1-\alpha }}dx<\int\limits_{a}^{b}\frac{
f(x)g\left( x\right) k^{\prime }\left( x\right) }{\left( k\left( x\right)
\right) ^{1-\alpha }}dx<M\int\limits_{a}^{b}\frac{g\left( x\right)
k^{\prime }\left( x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx.
\label{t5}
\end{equation}
Then there exists a number $\xi $ in $\left[ m,M\right] $ such that
\begin{equation*}
\int\limits_{a}^{b}\frac{f(x)g\left( x\right) k^{\prime }\left( x\right) }{
\left( k\left( x\right) \right) ^{1-\alpha }}dx=\xi \int\limits_{a}^{b}
\frac{g\left( x\right) k^{\prime }\left( x\right) }{\left( k\left( x\right)
\right) ^{1-\alpha }}dx.
\end{equation*}
When $g(x)<0$, the proof is done in a similar way.
By the intermediate value theorem, $f$ attains every value of the interval $
[m,M]$, so for some $x_{0}\ $in$\ [a,b]$ we have $f\left( x_{0}\right) =\xi .\ $Then
\begin{equation*}
\int\limits_{a}^{b}\frac{f(x)g\left( x\right) k^{\prime }\left( x\right) }{
\left( k\left( x\right) \right) ^{1-\alpha }}dx=f\left( x_{0}\right)
\int\limits_{a}^{b}\frac{g\left( x\right) k^{\prime }\left( x\right) }{
\left( k\left( x\right) \right) ^{1-\alpha }}dx.
\end{equation*}
If\ $g(x)=0,\ $equality (\ref{t1}) becomes obvious; if $g(x)>0,\ $then (\ref
{t5}) implies
\begin{equation*}
m<\dfrac{\int\limits_{a}^{b}\frac{f(x)g\left( x\right) k^{\prime }\left(
x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx}{
\int\limits_{a}^{b}\frac{g\left( x\right) k^{\prime }\left( x\right) }{
\left( k\left( x\right) \right) ^{1-\alpha }}dx}<M
\end{equation*}
there exists a point $x_{0}$ in $(a,b)$ such that
\begin{equation*}
m<f\left( x_{0}\right) <M,
\end{equation*}
which yields the desired result (\ref{t1}). In particular, when $g(x)=1$, we
get from Theorem \ref{T1} the following result
\begin{eqnarray*}
\int\limits_{a}^{b}\frac{f\left( x\right) k^{\prime }\left( x\right) }{
\left( k\left( x\right) \right) ^{1-\alpha }}dx &=&f\left( x_{0}\right)
\int\limits_{a}^{b}\frac{k^{\prime }\left( x\right) }{\left( k\left(
x\right) \right) ^{1-\alpha }}dx \\
&& \\
&=&f\left( x_{0}\right) \left( \frac{k^{\alpha }\left( b\right) }{\alpha }-
\frac{k^{\alpha }\left( a\right) }{\alpha }\right) .
\end{eqnarray*}
Thus, we have
\begin{equation}
f\left( x_{0}\right) =\frac{1}{\frac{k^{\alpha }\left( b\right) }{\alpha }-
\frac{k^{\alpha }\left( a\right) }{\alpha }}\int\limits_{a}^{b}\frac{
f\left( x\right) k^{\prime }\left( x\right) }{\left( k\left( x\right)
\right) ^{1-\alpha }}dx. \label{t10}
\end{equation}
This (\ref{t10}) is called the mean value (average) of the function $f$.
\end{proof}
For $\alpha =1$ and $k\left( t\right) =t$ this reduces to the classical mean
value theorem of integral calculus,
\begin{equation*}
\int\limits_{a}^{b}f\left( x\right) dx=\left( b-a\right) f\left(
x_{0}\right) .
\end{equation*}
\begin{theorem}
Let $a\geq 0\ $and $\alpha \in (0,1].\ $Also, let $f,g:\left[ a,b\right]
\rightarrow
\mathbb{R}
$ be continuous functions. Let $k:[a,b]\rightarrow
\mathbb{R}
$ be a continuous nonnegative map such that $k\left( t\right) ,$ $k^{\prime
}\left( t\right) \neq 0.\ $Then,
\end{theorem}
$i.\ \int\limits_{a}^{b}\left( f\left( x\right) +g\left( x\right) \right)
\frac{k^{\prime }\left( x\right) }{\left( k\left( x\right) \right)
^{1-\alpha }}dx=\int\limits_{a}^{b}\frac{f\left( x\right) k^{\prime }\left(
x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx+\int
\limits_{a}^{b}\frac{g\left( x\right) k^{\prime }\left( x\right) }{\left(
k\left( x\right) \right) ^{1-\alpha }}dx,$
$ii.\ \int\limits_{a}^{b}\lambda \frac{f\left( x\right) k^{\prime }\left(
x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx=\lambda
\int\limits_{a}^{b}\frac{f\left( x\right) k^{\prime }\left( x\right) }{
\left( k\left( x\right) \right) ^{1-\alpha }}dx,\ \lambda \in
\mathbb{R}
,$
$iii.\ \int\limits_{a}^{b}\frac{f\left( x\right) k^{\prime }\left( x\right)
}{\left( k\left( x\right) \right) ^{1-\alpha }}dx=-\int\limits_{b}^{a}\frac{
f\left( x\right) k^{\prime }\left( x\right) }{\left( k\left( x\right)
\right) ^{1-\alpha }}dx,$
$iv.\ \int\limits_{a}^{b}\frac{f\left( x\right) k^{\prime }\left( x\right)
}{\left( k\left( x\right) \right) ^{1-\alpha }}dx=\int\limits_{a}^{c}\frac{
f\left( x\right) k^{\prime }\left( x\right) }{\left( k\left( x\right)
\right) ^{1-\alpha }}dx+\int\limits_{c}^{b}\frac{f\left( x\right) k^{\prime
}\left( x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx,$
$v.\ \int\limits_{a}^{a}\frac{f\left( x\right) k^{\prime }\left( x\right) }{
\left( k\left( x\right) \right) ^{1-\alpha }}dx=0,$
$vi.\ $if $f(x)\geq 0$ for all $x\in \lbrack a,b]$ , then $
\int\limits_{a}^{b}\frac{f\left( x\right) k^{\prime }\left( x\right) }{
\left( k\left( x\right) \right) ^{1-\alpha }}dx\geq 0,$
$vii.\ \left\vert \int\limits_{a}^{b}\frac{f\left( x\right) k^{\prime
}\left( x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}
dx\right\vert \leq \int\limits_{a}^{b}\frac{\left\vert f\left( x\right)
\right\vert k^{\prime }\left( x\right) }{\left( k\left( x\right) \right)
^{1-\alpha }}dx.$
\begin{proof}
The relations follow from Definition \ref{d2} and Theorem \ref{T2},
analogous properties of generalized fractional integral, and the properties
of section 2 for the generalized fractional derivative.
\end{proof}
\end{document}
|
\begin{document}
\title{Filtering variational quantum algorithms for combinatorial optimization}
\author{David Amaro}
\email{[email protected]}
\affiliation{Cambridge Quantum Computing Limited, SW1P 1BX London, United Kingdom}
\author{Carlo Modica}
\affiliation{Cambridge Quantum Computing Limited, SW1P 1BX London, United Kingdom}
\author{Matthias Rosenkranz}
\affiliation{Cambridge Quantum Computing Limited, SW1P 1BX London, United Kingdom}
\author{Mattia Fiorentini}
\affiliation{Cambridge Quantum Computing Limited, SW1P 1BX London, United Kingdom}
\author{Marcello Benedetti}
\affiliation{Cambridge Quantum Computing Limited, SW1P 1BX London, United Kingdom}
\author{Michael Lubasch}
\affiliation{Cambridge Quantum Computing Limited, SW1P 1BX London, United Kingdom}
\date{\today}
\begin{abstract}
Current gate-based quantum computers have the potential to provide a computational advantage if algorithms use quantum hardware efficiently.
To make combinatorial optimization more efficient, we introduce the Filtering Variational Quantum Eigensolver (F-VQE) which utilizes filtering operators to achieve faster and more reliable convergence to the optimal solution.
Additionally we explore the use of causal cones to reduce the number of qubits required on a quantum computer.
Using random weighted MaxCut problems, we numerically analyze our methods and show that they perform better than the original VQE algorithm and the Quantum Approximate Optimization Algorithm (QAOA).
We also demonstrate the experimental feasibility of our algorithms on a Honeywell trapped-ion quantum processor.
\end{abstract}
\maketitle
\section{Introduction}
Combinatorial optimization tackles problems of practical relevance~\cite{KoVy06}.
Applications include finding the shortest route via several locations for a delivery service, making optimal use of available storage space in logistics, and optimizing a manufacturing supply chain to increase the productivity of a factory.
If quantum algorithms can solve such problems even just slightly faster than classical algorithms, this can have a large impact on various sectors in industry and research.
Variational quantum algorithms are a promising tool to get the most out of the current generation of gate-based quantum processors~\cite{Benedetti_2019, CeEtAl20, Prieto2020, BhEtAl21, Lorenzo2021, Saleem2021}.
These algorithms employ parameterized quantum circuits that can be tailored to hardware constraints such as qubit connectivities and gate fidelities.
In this context, a common approach for combinatorial optimization encodes the optimal solution in the ground state of a classical multi-qubit Hamiltonian~\cite{Kochenberger2014, Lu14, Glover2019}.
Popular variational quantum algorithms such as the Variational Quantum Eigensolver (VQE)~\cite{Peruzzo2014} and the Quantum Approximate Optimization Algorithm (QAOA)~\cite{Farhi2014} attempt to prepare this ground state by searching for the circuit parameters that minimize the energy expectation value of the corresponding quantum state.
VQE imposes no restrictions on the ansatz circuit and has become a powerful method for quantum chemistry~\cite{Moll2018}, condensed matter~\cite{Prieto2020VQE}, and combinatorial optimization~\cite{Saez2018}.
For combinatorial optimization problems, however, it tends to produce sub-optimal solutions~\cite{Diez2021}.
QAOA uses a specific ansatz circuit inspired by adiabatic quantum computation~\cite{FaEtAl01} and the Trotterization of the time evolution corresponding to quantum annealing~\cite{TaHi98}.
Despite its promising properties~\cite{FaHa19, Zhou2020, Moussa2020} and considerable progress with regards to its experimental realization~\cite{HaEtAl21}, in general the QAOA ansatz requires circuit depths that are challenging for current quantum hardware.
\begin{figure}
\caption{\label{fig_main}}
\end{figure}
In this article, we introduce the Quantum Variational Filtering (QVF) algorithm that optimizes a parameterized quantum circuit to approximate the action of a filtering operator on the state prepared by this circuit.
We also present Filtering VQE (F-VQE) -- a special case of QVF -- which is particularly efficient and similar to VQE.
The main focus of this article is F-VQE which, due to its low quantum hardware requirements, is particularly relevant for current quantum computers.
We consider filtering operators $F \equiv f(\mathcal{H};\tau)$ defined via real-valued functions $f$ of the problem Hamiltonian $\mathcal{H}$ and a parameter $\tau$ in such a way that $f^2(E;\tau)$ strictly decreases with the energy $E$ for any $\tau > 0$.
The parameter $\tau$ plays a role similar to the time step in imaginary time evolution (ITE) and the ITE operator $\exp(-\tau \mathcal{H})$ is one example of a filtering operator considered in this work.
The repeated action of a filtering operator on a quantum state projects out high-energy eigenstates (corresponding to sub-optimal solutions of the combinatorial optimization problem) and increases the overlap with the ground state.
Importantly, QVF and F-VQE have no restrictions on the ansatz circuit and so they can employ the ansatz most suitable for the quantum hardware at hand.
Furthermore we address the question of which filtering operators benefit from using causal cones in the optimization, as this can drastically reduce the required number of qubits on the quantum hardware.
Among the filtering operators considered in this article we find that the ITE operator is the best performing one that can be combined with causal cones.
We therefore focus on the combination of ITE with causal cones for which the F-VQE method is equivalent to the hardware-efficient ITE procedure in~\cite{BeFiLu20} (HE-ITE).
We investigate the performance of F-VQE for various filtering operators and of HE-ITE using MaxCut problems on random 3-regular weighted graphs of different sizes.
Finding the optimal solution for this class of MaxCut problems is NP-hard~\cite{Hastad2001, Berman2002} and therefore no classical polynomial-time algorithm is expected to exist that achieves this goal.
Given a weighted graph, the MaxCut problem consists in finding the optimal cut: a separation of the vertices into two disjoint subsets so that the cut cost, i.e.\ the sum of the weights of the edges between the two subsets, is maximum.
The approximation ratio of a cut is defined as the cut cost divided by the cost of the optimal cut.
As shown in Fig.~\ref{fig_main}, F-VQE consistently achieves larger approximation ratios after fewer optimization steps than VQE and QAOA.
Moreover, F-VQE readily runs on actual quantum processors.
The HE-ITE algorithm achieves similar results with a reduced number of qubits and gates.
This article is structured as follows. Section~\ref{s_methods} introduces filtering operators, QVF and F-VQE.
Section~\ref{sec_results} presents the numerical and experimental studies.
We conclude this article and discuss potential next steps in Section~\ref{s_conc_out}.
Technical details are provided in Appendices at the end of this article.
In another publication~\cite{AmEtAl21} we analyse the performance of F-VQE in the context of the job shop scheduling problem.
\section{Methods}
\label{s_methods}
In this Section, we first define filtering operators.
Then we explain the QVF and F-VQE algorithms.
For F-VQE we present a procedure that dynamically updates $\tau$ during the optimization.
Furthermore we address the question of which filtering operators can use causal cones in F-VQE.
\subsection{Filtering operators}
\label{s_dh_fo}
Given an $n$-qubit Hamiltonian $\mathcal{H}$, we define a \emph{filtering operator} $F \equiv f(\mathcal{H};\tau)$ via a real-valued function $f(E;\tau)$ of the energy $E$ and a parameter $\tau > 0$.
We require that the function $f^2(E;\tau)$ is strictly decreasing on the interval given by the complete spectrum of the Hamiltonian $E \in [E_{\min}, E_{\max}]$.
Filtering operators are Hermitian and commute with the Hamiltonian by definition.
\begin{figure}
\caption{\label{f_filter_functions}}
\end{figure}
When a quantum state $\ket{\psi}$ is sampled in the eigenbasis $\{\ket{\lambda_x}:\:x=0,\,1,\ldots,\,2^n-1\}$ of a Hamiltonian, then the probability distribution of eigenvectors is given by $P_\psi(\lambda_x) = |\langle\psi|\lambda_x\rangle|^2$.
The application of the filtering operator on the quantum state produces a new quantum state $\ket{F\psi} = F\ket{\psi}/\sqrt{\langle F^2\rangle_\psi}$ which generates a probability distribution that depends on the energy $E_x$ of each eigenstate $\ket{\lambda_x}$:
\begin{equation}\label{eq_distr}
P_{F\psi}(\lambda_x) = \frac{f^2(E_x;\tau)}{\langle F^2 \rangle_\psi}P_{\psi}(\lambda_x) .
\end{equation}
For non-eigenstates $\ket{\psi}$ the action of the filtering operator increases the probability of all overlapping eigenstates $\ket{\lambda_x}$ (where $P_{\psi}(\lambda_x) > 0$) for which $f^2(E_{x};\tau) > \langle F^2\rangle_\psi$ and decreases the probability of all overlapping eigenstates $\ket{\lambda_{y}}$ for which $f^2(E_{y};\tau) < \langle F^2\rangle_\psi$.
Since $f^2(E;\tau)$ is strictly decreasing as a function of $E$, such eigenstates exist for any non-eigenstate $\ket{\psi}$.
Hence, under the application of the filtering operator, the probability of sampling eigenstates with low energy increases and the probability of sampling eigenstates with high energy decreases.
This naturally leads to a reduction of the average energy.
In particular, if the ground state $\ket{\lambda_0}$ has a finite overlap ($P_{\psi}(\lambda_0) > 0$) with the initial state, the probability of sampling it increases with every application of the filtering operator.
After sufficiently many applications the ground state is produced.
Some filtering function definitions are given in Fig.~\ref{f_filter_functions}.
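The reweighting in Eq.~\eqref{eq_distr} is easy to illustrate numerically. The following Python sketch applies the exponential filter to a toy distribution over four eigenstates; the energies and probabilities are illustrative values chosen by us.
\begin{verbatim}
import numpy as np

# Sketch: an exponential filter f(E; tau) = exp(-tau E) reweights the
# sampling distribution towards low-energy eigenstates.
E = np.array([0.1, 0.4, 0.7, 1.0])         # eigenstate energies, rescaled to [0, 1]
P = np.array([0.25, 0.25, 0.25, 0.25])     # P_psi(lambda_x) for some state |psi>
tau = 2.0
f2 = np.exp(-tau * E)**2                   # f^2(E_x; tau)
P_filtered = f2 * P / np.sum(f2 * P)       # filtered distribution P_{F psi}(lambda_x)
print(P_filtered)                          # weight concentrates on E = 0.1
print(P @ E, P_filtered @ E)               # the average energy decreases
\end{verbatim}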
The Chebyshev filtering operator approximates the Dirac delta operator $\delta(\mathcal{H})$ using the following expansion in terms of Chebyshev polynomials up to order $\tau$~\cite{Weiss2006}:
\begin{align}
\delta(\mathcal{H}) \approx & f(\mathcal{H};\tau) = \sum_{r = 0}^{\lfloor \tau / 2 \rfloor} (-1)^{r} \frac{2 - \delta_{r, 0}}{\pi} g_{2 r}^{(\tau)} T_{2 r}(\mathcal{H}) \label{eq_che}\\
g_{s}^{(\tau)} & = \frac{(\tau - s + 1) \cos{\frac{\pi s}{\tau + 1}} + \sin{\frac{\pi s}{\tau + 1}} \cot{\frac{\pi}{\tau + 1}}}{\tau + 1} .
\end{align}
Here $\delta_{r, s}$ is the Kronecker delta and the Chebyshev polynomials are defined via the recursive formula $T_{s+1}(x) = 2 x T_{s}(x) - T_{s-1}(x)$ with $T_{0}(x) = 1,\, T_{1}(x) = x$.
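As a concrete reference, the scalar function $f(E;\tau)$ of Eq.~\eqref{eq_che} can be evaluated directly; the short Python sketch below does so for a few energies (the value of $\tau$ is an illustrative choice of ours).
\begin{verbatim}
import numpy as np

# Sketch: evaluate the scalar Chebyshev filter function f(E; tau) defined above.
def g(s, tau):
    return ((tau - s + 1) * np.cos(np.pi * s / (tau + 1))
            + np.sin(np.pi * s / (tau + 1)) / np.tan(np.pi / (tau + 1))) / (tau + 1)

def chebyshev_filter(E, tau):
    total = 0.0
    for r in range(tau // 2 + 1):
        T2r = np.cos(2 * r * np.arccos(E))        # Chebyshev polynomial T_{2r}(E)
        total += (-1)**r * (2.0 - (r == 0)) / np.pi * g(2 * r, tau) * T2r
    return total

for E in [0.0, 0.1, 0.3, 0.6]:
    print(E, chebyshev_filter(E, tau=20))         # sharply peaked around E = 0
\end{verbatim}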
The parameter $\tau$ in the filtering operator definition is inspired by the time step parameter of imaginary time evolution.
In fact, for the exponential filtering operator in Fig.~\ref{f_filter_functions} $\tau$ is precisely the imaginary time step.
This parameter interpolates the action of the filtering operator between two limits.
For vanishing values of $\tau \to 0$ the filtering operator becomes the identity operator and for sufficiently large values of $\tau \to \infty$ the filtering operator becomes a projector onto the ground state.
For our selection of filters we took inspiration from several works.
The inverse filter is inspired by the inverse iteration procedure which is a common ingredient in numerical routines that calculate eigenvalues and eigenvectors of matrices~\cite{TrBa97, NoLuJe13, NoEtAl17}.
The cosine filter was previously used in non-variational algorithms to achieve faster ground state preparation~\cite{ge2018faster} and to analyze finite energy intervals~\cite{LuBaCi20}.
The Chebyshev filter considered here was also employed in powerful tensor network algorithms for the study of thermalization~\cite{YaEtAl20, BaHuCi20, CaCiBa21}.
\subsection{Quantum Variational Filtering (QVF)} \label{sec:IIB}
The QVF algorithm approximates the repeated action of a filtering operator on some initial quantum state by successively optimizing the variational parameters of a parameterized quantum circuit.
The algorithm starts at optimization step $t = 0$ by preparing an initial state $\ket{\psi_0}$ that has finite overlap with the ground state: $P_{\psi_0}(\lambda_0) > 0$.
Then the algorithm proceeds iteratively and in each optimization step $t \geq 1$ approximates the state $\ket{F_t\psi_{t-1}}$ -- that results from exactly applying the filtering operator $F_t$ to the state $\ket{\psi_{t-1}}$ -- by a state $\ket{\psi_t}$.
The subscript $t$ in $F_t$ indicates that the filtering operator can change at each optimization step.
The algorithm stops after an initially chosen number of optimization steps.
In order to approximate the application of the filtering operator, we prepare a parameterized quantum circuit ansatz $\ket{\psi(\bm{\theta})}$ that depends on a vector of $m$ parameters $\bm{\theta} = (\theta_1,\,\ldots,\,\theta_m)$.
At optimization step $t$ we search for the parameters that minimize the Euclidean distance between the parameterized quantum state and $\ket{F_t\psi_{t-1}}$:
\begin{equation}\label{eq_cost_fun}
\begin{split}
\mathcal{C}_t(\bm{\theta}) & = \frac{1}{2}\Vert \ket{ \psi(\bm{\theta}) } - \ket{F_t \psi_{t-1}} \Vert^2\\
& = 1 - \frac{ \Re \bra{\psi_{t-1}} F_t \ket{\psi(\bm{\theta})} }{ \sqrt{ \expval{ F_t^2 }_{\psi_{t-1}}}} .
\end{split}
\end{equation}
The final vector of parameters obtained at the end of the minimization of Eq.~\eqref{eq_cost_fun} defines the quantum state $\ket{\psi_t} \equiv \ket{\psi(\bm{\theta}_t)}$ at optimization step $t$.
The cost function in Eq.~\eqref{eq_cost_fun} can be minimized with the help of the Hadamard test, which needs one additional ancilla qubit and several additional controlled operations, as we explain in Appendix~\ref{app_hadamard}.
\subsection{Filtering VQE (F-VQE)}
\label{subsec:F-VQE}
To avoid the additional quantum resources required by the Hadamard test in QVF, in the following we develop F-VQE.
The F-VQE algorithm uses a specific gradient-based procedure that requires essentially the same circuits as VQE.
The partial derivative of the cost function in Eq.~\eqref{eq_cost_fun} with respect to one parameter $\theta_j$ is derived in Appendix~\ref{app_an_grad_fvqe}:
\begin{equation}\label{eq_grad}
\dfrac{\partial \mathcal{C}_t(\bm{\theta})}{\partial \theta_j} = - \frac{ \Re \bra{\psi_{t-1}} F_t \ket{\psi(\bm{\theta} + \pi \bm{e}_j)}}{2\sqrt{\langle F_t^2 \rangle_{\psi_{t-1}}}} .
\end{equation}
Here the state $\ket{\psi(\bm{\theta} + \pi \bm{e}_j)}$ is produced by the same ansatz circuit except that the vector of angles is shifted by an amount $\pi$ along the direction $\bm{e}_j$ of parameter $\theta_j$.
If the gradient is evaluated at the current vector of parameters $\bm{\theta}_{t-1}$, then the parameter-shift rule~\cite{Mitarai2018, Schuld2019} yields:
\begin{equation}\label{eq_grad_simp}
\left.\dfrac{\partial \mathcal{C}_t(\bm{\theta})}{\partial \theta_j}\right\vert_{\bm{\theta}_{t-1}} = - \frac{ \langle F_t \rangle_{\psi_{t-1}^{j+}} - \langle F_t \rangle_{\psi_{t-1}^{j-}}}{4 \sqrt{\langle F_t^2 \rangle_{\psi_{t-1}}}} .
\end{equation}
Here the three circuits $\ket{\psi_{t-1}}$ and $\ket{\psi_{t-1}^{j\pm}} \equiv \ket{\psi(\bm{\theta}_{t-1} \pm \tfrac{\pi}{2} \bm{e}_j)}$ are generated by the ansatz with different parameter vectors.
Note that the expectation value in the denominator is the same for all partial derivatives at fixed $t$.
The F-VQE algorithm takes advantage of this favorable case as follows.
At optimization step $t$, F-VQE performs a \emph{single} gradient-descent update:
\begin{equation}
\bm{\theta}_t = \bm{\theta}_{t-1} - \eta \sum_{j = 1}^{m} \left.\dfrac{\partial \mathcal{C}_t(\bm{\theta})}{\partial \theta_j}\right|_{\bm{\theta}_{t-1}} \bm{e}_j ,
\end{equation}
where $\eta > 0$ is the learning rate.
Then F-VQE moves on to the next cost function $\mathcal{C}_{t+1}(\bm{\theta})$ and proceeds identically.
For each optimization step this algorithm requires the evaluation of $2 m + 1$ circuits.
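To make the update rule concrete, the following Python sketch runs F-VQE on a toy two-qubit diagonal Hamiltonian with a product ansatz of $R_y$ rotations, using exact statevector probabilities in place of hardware sampling and a fixed $\tau$; all numerical choices are illustrative and not taken from this article.
\begin{verbatim}
import numpy as np

# Sketch of F-VQE on a toy problem: 2-qubit diagonal Hamiltonian, product R_y ansatz,
# exponential filter with fixed tau, one gradient step per optimization step.
H_diag = np.array([0.9, 0.3, 0.7, 0.0])     # energies of |00>,|01>,|10>,|11> in [0, 1]
tau = 3.0
filt = lambda E: np.exp(-tau * E)           # f(E; tau)

def probs(thetas):
    """Probabilities of |psi(theta)> = R_y(theta_1)|0> (x) R_y(theta_2)|0>."""
    ry = lambda t: np.array([np.cos(t / 2), np.sin(t / 2)])
    amps = np.kron(ry(thetas[0]), ry(thetas[1]))
    return np.abs(amps)**2

expval = lambda diag_op, thetas: float(probs(thetas) @ diag_op)

eta, thetas = 1.0, np.array([0.8, -0.9])
for step in range(200):
    denom = 4.0 * np.sqrt(expval(filt(H_diag)**2, thetas))
    grad = np.zeros_like(thetas)
    for j in range(len(thetas)):
        shift = np.zeros_like(thetas)
        shift[j] = np.pi / 2                # parameter-shift evaluation of the gradient
        grad[j] = -(expval(filt(H_diag), thetas + shift)
                    - expval(filt(H_diag), thetas - shift)) / denom
    thetas = thetas - eta * grad            # single gradient-descent update per step
print(expval(H_diag, thetas))               # approaches the minimum energy 0
\end{verbatim}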
The expectation value $\langle F_t \rangle_\psi$ of filtering operators can be efficiently evaluated by sampling the quantum state in the Hamiltonian eigenbasis.
If each eigenstate $\ket{\lambda_x}$ is sampled $M_x$ times from a total of $M$ samples, the filtering operator expectation value can be approximated by the following Monte Carlo estimator:
\begin{equation}\label{eq_exp_val}
\langle F_t \rangle_\psi \approx \frac{1}{M} \sum_{x} M_x f(E_{x};\tau) .
\end{equation}
In this article, we represent combinatorial optimization problems by diagonal QUBO (Quadratic Unconstrained Binary Optimization) Hamiltonians~\cite{Kochenberger2014, Lu14, Glover2019} for which the eigenbasis is the computational basis and energies can be efficiently computed.
Therefore the expectation value of a filtering operator can be approximated by sampling the quantum state in the computational basis.
At each optimization step $t$ the samples used to compute $\langle F_t^2 \rangle_{\psi_{t-1}}$ in Eq.~\eqref{eq_grad_simp} are also used to compute the average energy $\langle \mathcal{H} \rangle_{\psi_{t-1}}$.
As $t$ increases, the average energy is expected to decrease and the probability of sampling the ground state is expected to increase.
Thus F-VQE provides the average energy and a growing chance of sampling a low energy eigenstate or even the ground state at no extra cost during the optimization.
The gradient in F-VQE is equivalent to the one in VQE under certain assumptions.
We derive the VQE gradient in Appendix~\ref{app_an_grad_vqe}.
If the Hamiltonian in the VQE gradient of Eq.~\eqref{eq_vqe_an_grad} is replaced by $-F_t$, the new VQE gradient evaluated at the point $\ket{\psi(\bm{\theta})} = \ket{\psi_{t-1}}$ coincides with the F-VQE gradient in Eq.~\eqref{eq_grad_simp} up to a positive multiplicative factor.
However, we emphasize that the corresponding VQE cost function $-\bra{\psi(\bm{\theta})} F_t \ket{\psi(\bm{\theta})}$ is different from the F-VQE cost function in Eq.~\eqref{eq_cost_fun} where the dependence on the parameters is of the form $-\text{Re}\bra{\psi_{t-1}} F_t \ket{\psi(\bm{\theta})}$.
We note that the F-VQE gradient in Eq.~\eqref{eq_grad}, in general, coincides only at the point $\ket{\psi(\bm{\theta})} = \ket{\psi_{t-1}}$ with the gradient of the modified VQE cost function.
We can also see in Eqs.~\eqref{eq_second_der} and \eqref{eq_second_der_vqe} that the second derivatives do not coincide, not even at that point $\ket{\psi(\bm{\theta})} = \ket{\psi_{t-1}}$.
Therefore both algorithms explore parameter landscapes with different curvatures.
\subsection{Adapting \texorpdfstring{$\tau$}{tau}}
\label{s_tau_sch}
Both the cost function in Eq.~\eqref{eq_cost_fun} and its gradient in Eq.~\eqref{eq_grad_simp} depend on the parameter $\tau$ via the expectation value of the filtering operator in Eq.~\eqref{eq_exp_val}.
We dynamically adapt $\tau$ to keep the gradient norm as close as possible to some desired large and fixed value at every optimization step.
This can prevent the gradient from vanishing and enable us to determine its value more accurately with a fixed number of measurements.
\begin{figure}
\caption{\label{f_grad_vs_tau}}
\end{figure}
In F-VQE we employ the following heuristic to dynamically adapt $\tau$.
At each optimization step the dependence $g(\tau) \equiv \Vert\bm{\nabla}\left.\mathcal{C}_t(\tau)\right|_{\bm{\theta}_{t-1}}\Vert $ between the gradient norm and $\tau$ is used to select the value $\tau_t$ that returns a gradient norm as close as possible below a certain threshold $g_c > 0$.
For each optimization step such value $\tau_t$ is obtained by solving the implicit equation $g(\tau_t) = g_c$.
Note that $g(0) = 0$ since for $\tau = 0$ the filtering operator becomes the identity operator and the gradient norm vanishes.
For large values of $\tau$ the gradient norm saturates at a finite value that is determined by the overlap of the gradient circuits with the ground state.
Taking this into account, we select $\tau_t$ in the following way.
We evaluate the gradient norm for increasing values of $\tau$ until either (i) an upper bound $\tau_u > \tau_t$ is found such that $g(\tau_u) > g_c$ or (ii) $g(\tau)$ converges to a constant.
In the first case (i) we search for $\tau_t$ in the range $[0, \tau_u]$ up to a certain precision.
In the second case (ii) we select from the tried values the one that provided a gradient norm closest below the threshold.
Note that for the Chebyshev filtering operator only positive integer values of $\tau$ are allowed.
We emphasize that our heuristic is different from a simple re-scaling of the gradient by a constant.
As shown in Fig.~\ref{f_grad_vs_tau} each partial derivative changes non-trivially as a function of $\tau$.
Moreover, in the simulations we observe that the gradient norm has a consistent dependence on $\tau$ across different optimization steps and problems.
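A possible realization of this heuristic is sketched below in Python; the stand-in gradient-norm profile $g(\tau)$, the step size and the tolerances are illustrative choices of ours and simplify the search range described above.
\begin{verbatim}
import numpy as np

# Sketch of the tau-adaptation heuristic: increase tau until g(tau) exceeds the
# threshold g_c, then bisect; if g(tau) saturates below g_c, keep the best value tried.
def adapt_tau(g, g_c, tau_step=0.2, tau_max=20.0, iters=20):
    tau, prev = tau_step, 0.0
    while tau <= tau_max and g(tau) <= g_c:
        if abs(g(tau) - prev) < 1e-4:        # case (ii): norm has saturated below g_c
            return tau
        prev, tau = g(tau), tau + tau_step
    if tau > tau_max:
        return tau_max
    lo, hi = tau - tau_step, tau             # case (i): g(lo) <= g_c < g(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) <= g_c else (lo, mid)
    return lo                                # largest tried tau with g(tau) <= g_c

g_toy = lambda tau: 0.5 * (1.0 - np.exp(-tau))   # stand-in gradient-norm profile
print(adapt_tau(g_toy, g_c=0.3))                 # ~ 0.92, where g_toy reaches 0.3
\end{verbatim}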
\subsection{Causal cones}
\label{subsec:CausalCones}
The F-VQE algorithm is based on expectation value computations of filtering operators and it is natural to ask whether this method can benefit from using causal cones.
The causal cone of an observable is the quantum circuit composed of only those qubits and gates in the ansatz circuit that have an actual effect on the expectation value.
In general, causal cones allow us to simplify the computation of expectation values for local observables.
The simplification follows from the fact that outside the causal cone of a local operator unitary gates cancel with their adjoints.
Causal cones are a crucial ingredient in various tensor network methods, e.g.\ based on the multiscale entanglement renormalization ansatz~\cite{Vi08}.
We have previously used them to make variational quantum algorithms more hardware-efficient for the simulation of the time evolution of quantum many-body systems~\cite{BeFiLu20}.
Figures~\ref{fig_ansatz}(b) and (c) show the causal cones for observables with support on two neighboring qubits and two non-neighboring qubits, respectively.
Only the qubits and gates inside the causal cone need to be prepared experimentally.
Note that for observables with support on distant qubits, the causal cone splits into two separable causal cones that can be independently realized in hardware.
Therefore causal cones can reduce the required number of gates and qubits when the observables have small support.
Inspecting the filtering operators in Fig.~\ref{f_filter_functions}, we see that the exponential, power, and Chebyshev filters can make use of causal cones.
The exponential filter $\exp(-\tau \mathcal{H})$ is equivalent to a product of 2-local terms that can be processed independently if additional approximations are made~\cite{BeFiLu20}.
The power filter $(1-\mathcal{H})^{\tau}$ for integer values of $\tau$ is equivalent to a sum of at most $2\tau$-local terms so that the entire expectation value can be determined from the sum of simpler expectation values.
Similarly the expectation value of the Chebyshev filter can be calculated using the sum of expectation values of at most $2\tau$-local observables, as the Chebyshev filtering operator is a polynomial in $\mathcal{H}$ of degree $\tau$.
\section{Results}
\label{sec_results}
In this Section, we describe the MaxCut Hamiltonians, parameterized quantum circuits, as well as the simulation and experimental settings that are being considered in this article.
The settings are summarized in Tab.~\ref{t_settings}.
Then we analyze the performance of the algorithms.
The circuits for the simulations and for the experiment on the Honeywell H1 trapped-ion quantum processor~\cite{Pino2021} are compiled with TKET~\cite{Sivarajah2020}.
\subsection{Weighted MaxCut Hamiltonians}
\label{subsec:problem_hamiltonian}
We use 25 random MaxCut instances for each problem size of $n \in \{5,\, 7,\, 9,\, 11,\, 13,\, 23\}$ qubits.
These problem sizes $n$ correspond to graphs with $N = n+1$ vertices.
Each instance is defined on a random 3-regular weighted simple undirected and connected graph $\mathcal{G}(\mathcal{V}, \mathcal{E}, \mathcal{W})$ where $\mathcal{V}=\{1,\,2,\,\ldots,\,N\}$ is the set of vertices, $\mathcal{E} \subset \mathcal{V} \times \mathcal{V}$ is the set of edges between different vertices, and $\mathcal{W} = \{w_e \in [0,1]:\: e \in \mathcal{E}\}$ is the set of random weights uniformly distributed in the range $[0,1]$ for all edges.
A cut is represented by variables $z_{v} \in \{+1, -1\}$, with $v \in \mathcal{V}$, that are $+1$ for the vertices in one subset of the cut and $-1$ for the vertices in the other subset of the cut.
This formulation has the obvious symmetry of swapping labels $+1,\, -1$ for the two subsets.
We break this symmetry by assigning $+1$ to the last vertex $v = N$ and thereby reduce the number of variables to $n = N - 1$.
Then the MaxCut problem consists in solving the optimization problem $(z^*_1, z^*_2, \ldots, z^*_{n}) = \operatorname*{argmax}_{(z_1, z_2, \ldots, z_{n})}C(z_1, z_2, \ldots, z_{n})$, with cost function
\begin{equation} \label{eq_maxcut}
\begin{split}
C(z_1, z_2, \ldots, z_{n}) &= \sum_{e = \{u, N\} \in \mathcal{E}} w_e \frac{1 - z_u}{2} \\
&+ \sum_{e = \{u, v \neq N\} \in \mathcal{E}} w_e \frac{1 - z_u z_v}{2} .
\end{split}
\end{equation}
The Hamiltonian formulation of the MaxCut problem is obtained, for any real coefficient $a$ and any real $b > 0$, as
\begin{equation}\label{eq:ham_def}
\mathcal{H} = a \mathds{1} - b C(Z_1, Z_2, \ldots, Z_n) ,
\end{equation}
where each variable $z_u$ in the MaxCut cost function is replaced by the Pauli operator $Z_u$ acting on qubit $u \in \{1, \ldots, n\}$.
A ground state of $\mathcal{H}$ in Eq.~\eqref{eq:ham_def} is the computational state $\ket{\lambda_0} = \bigotimes_{v=1}^n \ket{(1-z^*_v)/2}$.
The \textit{approximation ratio} of an $n$-qubit quantum state $\ket{\psi}$ is defined as
\begin{equation}
\langle \alpha \rangle_\psi = \frac{\langle C(Z_1, Z_2, \ldots, Z_n)\rangle_\psi}{\operatorname*{max}_{(z_1, z_2, \ldots, z_{n})}C(z_1, z_2, \ldots, z_{n})} .
\end{equation}
Before we apply filtering operators to MaxCut Hamiltonians, we re-scale the energy range to $[0, 1]$ using the coefficients $a$ and $b$ in Eq.~\eqref{eq:ham_def}.
To achieve this, we compute lower and upper bounds of the MaxCut cost function in Eq.~\eqref{eq_maxcut}.
We choose the upper bound to be the optimum cost of the semidefinite programming (SDP) relaxation of the MaxCut problem~\cite{Helmberg2000}.
We fix the lower bound to the minimum cost $0$, which corresponds to the trivial cut $z_u = +1$ for all $u \in \{1, \ldots, n\}$.
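As an illustration of this construction, the following sketch generates a random weighted 3-regular instance and the rescaling coefficients; the use of \texttt{networkx}, the vertex relabelling, and the replacement of the SDP upper bound of Ref.~\cite{Helmberg2000} by the trivial weight-sum bound are simplifying assumptions.
\begin{verbatim}
import networkx as nx
import numpy as np

def random_maxcut_instance(N=6, seed=0):
    """Random connected 3-regular graph on vertices 1..N with weights in [0, 1]."""
    rng = np.random.default_rng(seed)
    while True:
        graph = nx.random_regular_graph(3, N, seed=seed)
        if nx.is_connected(graph):
            break
        seed += 1
    graph = nx.relabel_nodes(graph, {v: v + 1 for v in range(N)})
    weights = {e: rng.uniform(0.0, 1.0) for e in graph.edges()}
    return graph, weights

def cut_value(z, weights, N):
    """Cost function of Eq. (eq_maxcut); vertex N is fixed to +1 to break the symmetry."""
    z = dict(z)
    z[N] = +1
    return sum(w * (1 - z[u] * z[v]) / 2 for (u, v), w in weights.items())

def rescaling_coefficients(weights):
    """Coefficients a, b of Eq. (eq:ham_def) mapping the spectrum of H into [0, 1].

    The weight sum is used as a crude upper bound on the maximum cut; the paper
    instead uses the optimum of the SDP relaxation.
    """
    c_upper = sum(weights.values())    # any cut value lies in [0, c_upper]
    return 1.0, 1.0 / c_upper          # H = a*1 - b*C, so H lies in [0, 1]
\end{verbatim}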
\subsection{Setup}
\label{subsec:setup}
\begin{table}
\begin{tabular}{l|cccc}
(a) algorithm & F-VQE & HE-ITE & VQE & QAOA \\
\hline
cost function & Eq.~\eqref{eq_cost_fun} & \cite{BeFiLu20} & $\langle \mathcal{H}\rangle_{\psi(\bm{\theta})}$ & $\langle \mathcal{H} \rangle_{\psi(\bm{\gamma}, \bm{\beta})}$ \\
ansatz circuit & Fig.~\ref{fig_ansatz}(a) & Fig.~\ref{fig_ansatz}(a) & Fig.~\ref{fig_ansatz}(a) & Eq.~(\ref{eq_qaoa_ansatz}) \\
initial param. & $\ket{+}^{\otimes n}$ & $\ket{+}^{\otimes n}$ & $\ket{+}^{\otimes n}$ & random \\
adaptive $\tau$ & yes, $g_c = 0.1$ & no, fix $1.0$ & - & - \\
learning rate $\eta$ & inv.\ Hess.\ d.\ & - & $1.0$ & $1.0$ \\
opt.\ steps & 70 & 70 & 70 & 70\\
\multicolumn{5}{c}{}
\end{tabular}
\begin{tabular}{l|cccccc}
(b) size $n$ (qubits) & 5 & 7 & 9 & 11 & 13 & 23 \\
\hline
algorithms & all & all & all & all & all & HE-ITE \\
layers $p$ in HE-ITE & 1 & 1 & 1 & 1 & 1 & 1 \\
layers $p$ in all except HE-ITE & 2 & 3 & 4 & 5 & 6 & -\\
measurement shots $M$ & 10 & 50 & 100 & 150 & 200 & $2^{n_{\text{cone}} + 2}$\\
\multicolumn{7}{c}{}
\end{tabular}
\begin{tabular}{l|c}
\multicolumn{2}{l}{(c) 9-qubit experiment on Honeywell H1}\\
\hline
layers $p$ & 1 \\
ansatz circuit & Fig.~\ref{fig_exp}(b) \\
adaptive $\tau$ & yes, $g_c = 0.2$ \\
measurement shots $M$ & 500 \\
optimization steps & 9
\end{tabular}
\caption{\label{t_settings}
Simulation and experimental settings.
(a) The cost function for VQE and QAOA is the average energy for their respective ansatz circuits $\ket{\psi(\bm{\theta})}$ and $\ket{\psi(\bm{\gamma}, \bm{\beta})}$.
The learning rate for F-VQE is the inverse of the cost function's Hessian diagonal.
(b) For the 23-qubit problems HE-ITE uses $2^{n_{\text{cone}}+2}$ measurement shots for each causal cone, where $n_{\text{cone}}$ is the number of qubits in the causal cone.
(c) Different settings for the 9-qubit experiment.
In the ansatz circuit all rotation gates except the first and the last ones on each qubit are removed.
Additional details corresponding to the experiment are provided in Appendix~\ref{app_exp_det}.
}
\end{table}
\textit{F-VQE}.
This algorithm uses the parameterized quantum circuit shown in Fig.~\ref{fig_ansatz}(a).
An initial state $\ket{\psi_0} = \ket{+}^{\otimes n}$ is prepared by setting to $\pi/2$ the parameters in the last rotation on each qubit and setting the remaining parameters to $0$.
Parameters are iteratively updated using analytical gradient descent as described in Section~\ref{subsec:F-VQE}.
At each optimization step, the value of the parameter $\tau$ is adapted according to the procedure explained in Section~\ref{s_tau_sch}.
We choose a threshold of $g_c = 0.1$ for the gradient norm and solve the implicit equation with a precision $0 < g_c - g(\tau_t) < 0.01$.
For the learning rate $\eta$ we choose the inverse of the Hessian's diagonal.
As shown in Appendix~\ref{app_an_grad_fvqe}, at each optimization step $t$ all diagonal elements of the Hessian have the same value which can be computed by means of the circuit $\ket{\psi_{t-1}}$.
This is a quasi-Newton method~\cite{Dalgaard2020, Andrea2021} that uses only the diagonal of the Hessian matrix and can be realized without additional cost.
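In code, a single parameter update with this learning-rate choice reduces to the following classical post-processing; the gradient estimate is assumed to come from the parameter-shift circuits of Eq.~\eqref{eq_grad_simp}, and the expectation values of $F_t$ and $F_t^2$ from measurements of $\ket{\psi_{t-1}}$, as in Eq.~\eqref{eq_second_der}.
\begin{verbatim}
import numpy as np

def fvqe_update(theta, grad, f_expval, f2_expval):
    """One F-VQE parameter update with the inverse-Hessian-diagonal learning rate.

    theta     : current parameter vector
    grad      : gradient estimated from the shifted circuits
    f_expval  : <F_t>   measured on the previous state |psi_{t-1}>
    f2_expval : <F_t^2> measured on the previous state |psi_{t-1}>
    """
    hessian_diag = f_expval / (4.0 * np.sqrt(f2_expval))  # same value for every j
    eta = 1.0 / hessian_diag                               # quasi-Newton learning rate
    return np.asarray(theta) - eta * np.asarray(grad)
\end{verbatim}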
\begin{figure}
\caption{\label{fig_ansatz} (a) Parameterized ansatz circuit used for F-VQE, HE-ITE, and VQE. (b) and (c) Causal cones of observables with support on two neighboring and on two non-neighboring qubits, respectively.}
\end{figure}
\textit{HE-ITE}.
In relation to using causal cones with F-VQE, we have chosen to concentrate our analysis on the exponential filter.
In this case the F-VQE method is equivalent to the HE-ITE algorithm called Angle Update in~\cite{BeFiLu20}.
We adapt this algorithm to the general QUBO Hamiltonians with long-range interactions considered here.
We use the ansatz circuit depicted in Fig.~\ref{fig_ansatz}(a) with $p = 1$ layer.
Figures~\ref{fig_ansatz}(b) and (c) show examples of causal cones in HE-ITE.
A total of $70$ time steps are performed with a fixed imaginary time step $\tau = 1.0$.
For the $23$-qubit problems, we choose the number of measurement shots dependent on the number $n_{\text{cone}}$ of qubits in the cone as $2^{n_{\text{cone}} + 2}$.
\begin{figure*}
\caption{\label{fig_app_rat} (a) Final approximation ratio averaged over the 25 MaxCut instances for each problem size. (b) Distribution of the minimum number of optimization steps needed to reach an approximation ratio above $0.75$. (c) Fraction of MaxCut instances with a ground state sampling probability above $0.25$.}
\end{figure*}
\textit{VQE.}
For this algorithm we employ the same ansatz as in F-VQE and HE-ITE, shown in Fig.~\ref{fig_ansatz}(a).
The computation of the analytical gradient for this ansatz requires two quantum circuits per parameter as described in Appendix~\ref{app_an_grad_vqe}.
For the VQE cost function the diagonal of the Hessian can be obtained using the same circuits that are needed for the analytical gradient, similar to F-VQE.
Thus it is possible to apply the same quasi-Newton method to the VQE optimization.
However, we found that this heuristic performs worse than simply fixing a learning rate for all optimization steps and partial derivatives.
More specifically, in the simulations we found that the Hessian diagonal elements frequently vanish so that the parameter update often diverges with this heuristic.
The analysis of this phenomenon goes beyond the scope of this work and therefore we choose to fix a learning rate $\eta$ for all optimization steps and partial derivatives.
Comparing the performance of the values $\eta = 1$, $0.1$ and $0.01$ using our 5-qubit MaxCut instances, we conclude that $\eta = 1$ performs best and hence use this fixed value for $\eta$ in our simulations and experiments.
\textit{QAOA}.
The parameterized quantum circuit for QAOA is:
\begin{align}
\ket{\psi(\bm{\gamma}, \bm{\beta})} &= U(\gamma_p, \beta_p) \cdots U(\gamma_1, \beta_1) \ket{+}^{\otimes n} ,\label{eq_qaoa_ansatz}\\
U(\gamma_j, \beta_j) &= \exp \left( -i \gamma_j \sum_{q = 1}^n X_q \right) \exp \left( -i \beta_j \mathcal{H} \right) .\label{eq_qaoa_unitary}
\end{align}
Here $\ket{+} = (1/\sqrt{2})(\ket{0}+\ket{1})$, $X_{q}$ denotes the Pauli operator $X$ acting on qubit $q$, and the ansatz is defined by the $m = 2 p$ parameters in the vectors $\bm{\gamma} = (\gamma_1,\,\ldots,\,\gamma_p)$ and $\bm{\beta} = (\beta_1,\,\ldots,\,\beta_p)$.
We initialize the parameters randomly in the range $[0, \pi]$.
We optimize the parameters using analytical gradient descent.
As shown in Appendix~\ref{app_an_grad_qaoa}, the computation of the analytical gradient for the QAOA ansatz requires just the ansatz circuit with various parameter sets.
In the QAOA simulations we do not use the quasi-Newton method to determine the learning rate, as this would need additional circuits.
Instead we fix a learning rate $\eta$ for all optimization steps and partial derivatives.
We choose $\eta = 1$ as it is the best performing learning rate for the 5-qubit MaxCut instances where we compared the performance of $\eta = 1$, $0.1$, and $0.01$.
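For reference, a minimal statevector sketch of this ansatz is given below; it assumes the Hamiltonian is supplied through its diagonal in the computational basis and is intended only to make the ordering of the two exponentials in Eq.~\eqref{eq_qaoa_unitary} explicit.
\begin{verbatim}
import numpy as np

def apply_rx_all(state, gamma, n):
    """Apply exp(-i*gamma*X) to every qubit of an n-qubit statevector."""
    c, s = np.cos(gamma), -1j * np.sin(gamma)
    psi = state.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(psi, q, 0)
        psi = np.stack([c * psi[0] + s * psi[1], s * psi[0] + c * psi[1]])
        psi = np.moveaxis(psi, 0, q)
    return psi.reshape(-1)

def qaoa_state(h_diag, gammas, betas):
    """QAOA ansatz of Eq. (eq_qaoa_ansatz) for a diagonal Hamiltonian.

    h_diag : length-2^n array of diagonal entries of H in the computational basis.
    Note the paper's convention: beta multiplies H, gamma multiplies the mixer.
    """
    n = int(np.log2(len(h_diag)))
    psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)    # |+>^n
    for gamma, beta in zip(gammas, betas):
        psi = np.exp(-1j * beta * h_diag) * psi            # exp(-i beta H)
        psi = apply_rx_all(psi, gamma, n)                   # exp(-i gamma sum_q X_q)
    return psi
\end{verbatim}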
\subsection{Performance}
\label{subsec:performance}
In the following, we present and analyze the numerical and experimental results of the F-VQE algorithms and compare them with those of HE-ITE, VQE and QAOA.
For each MaxCut instance we pay special attention to two benchmark quantities: the approximation ratio and the probability of measuring the ground state.
These quantities have some dependence.
A large probability of sampling the ground state $P_\psi(\lambda_0) \approx 1$ implies a large approximation ratio $\langle \alpha \rangle_\psi \approx 1$.
However, a quantum state $\ket{\psi}$ given by a superposition of low-energy excited states can exhibit a large approximation ratio $\langle \alpha \rangle_\psi \approx 1$ but low ground state probability $P_\psi(\lambda_0) \approx 0$.
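Both benchmark quantities can be estimated from the same set of measurement samples, as the following sketch illustrates; the helper names and inputs are assumptions of this illustration.
\begin{verbatim}
from collections import Counter

def benchmarks_from_samples(bitstrings, cut_of, c_max, ground_state):
    """Approximation ratio and ground-state probability from sampled bitstrings.

    bitstrings   : list of measured n-bit strings
    cut_of       : classical function bitstring -> cut value C
    c_max        : optimal cut value of the instance
    ground_state : bitstring encoding the optimal solution
    """
    counts = Counter(bitstrings)
    shots = len(bitstrings)
    approx_ratio = sum(k * cut_of(b) for b, k in counts.items()) / (shots * c_max)
    p_ground = counts[ground_state] / shots
    return approx_ratio, p_ground
\end{verbatim}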
We compare the performance of various filtering operators in F-VQE in Fig.~\ref{fig_app_rat}.
Here filtering operators are sorted from left to right by performance and the inverse filter is the best performing one.
Figure~\ref{fig_app_rat}(a) shows for each algorithm considered in this article the final approximation ratio averaged over all 25 MaxCut instances for each problem size.
We observe that the best performing filters achieve the largest approximation ratios and obtain such approximation ratios more reliably, as can be seen from the small values of the corresponding standard deviations.
Figure~\ref{fig_app_rat}(b) shows the distribution of the minimum number of optimization steps required to achieve an approximation ratio above $0.75$.
Here all filters show a similar performance: they require 5 or fewer optimization steps with little deviation from the median.
Figure~\ref{fig_app_rat}(c) shows the fraction of MaxCut instances where the algorithms obtain a probability of sampling the ground state above $0.25$.
The probability of sampling it at least once with $M$ measurement shots is then at least $1 - (1 - 0.25)^M$.
The best filters achieve this probability for a larger fraction of MaxCut instances.
Let us now compare the best performing filtering operator, the inverse filter, with VQE and QAOA.
F-VQE requires fewer optimization steps than VQE and QAOA to achieve larger and more consistent approximation ratios.
This can be seen in Figs.~\ref{fig_app_rat}(a) and (b) as well as in Fig.~\ref{fig_main}(a).
Additionally, as shown in Fig.~\ref{fig_app_rat}(c), F-VQE converges to the ground state more often for all problem sizes.
\begin{figure}
\caption{\label{f_imaginary_time} Final approximation ratios and fraction of MaxCut instances with a ground state sampling probability above $0.25$ for HE-ITE with three values of the imaginary time step $\tau$.}
\end{figure}
The HE-ITE algorithm also achieves approximation ratios close to the optimum and frequently converges to the ground state.
Moreover, the evolution of the approximation ratio shown in Fig.~\ref{fig_main}(a) almost overlaps with that for F-VQE.
Similarly Fig.~\ref{fig_main}(b) shows that HE-ITE obtains an average approximation ratio close to the optimum for 25 MaxCut instances of 23 qubits.
Importantly, the quantum circuits never use more than six qubits.
The inset in Fig.~\ref{fig_main}(b) shows the average fraction of circuits used for each qubit count.
We observe that the majority of circuits require just four qubits.
The performance of HE-ITE depends on the imaginary time step $\tau$. To analyze this dependence, we have run HE-ITE for a long total imaginary time of $100$. Figure~\ref{f_imaginary_time} compares, for three values of $\tau$, the final approximation ratios and the fraction of MaxCut instances for which a probability of sampling the ground state above $0.25$ is achieved.
We conclude that HE-ITE improves systematically by choosing smaller values of $\tau$.
To demonstrate the experimental feasibility, we run F-VQE with the inverse filter on the Honeywell H1 trapped-ion quantum processor and solve a random 9-qubit MaxCut instance.
Experimental settings (see Table~\ref{t_settings}(c)) are similar to those used for the numerical simulations, with some differences described in detail in Appendix~\ref{app_exp_det}.
The main difference is that we use only $p = 1$ layer and remove all rotation gates except the first and the last ones for each qubit.
Figure~\ref{fig_honey} shows the approximation ratio and the probability of measuring the ground state at each optimization step.
The final approximation ratio is $0.9844 \pm 0.0062$ and the probability of sampling the ground state after the final optimization step is $0.928 \pm 0.024$.
Here the value after $\pm$ indicates a $95\%$ confidence interval.
\begin{figure}
\caption{\label{fig_honey} Approximation ratio and probability of measuring the ground state at each optimization step of the 9-qubit MaxCut experiment on the Honeywell H1 quantum processor.}
\end{figure}
\section{Conclusions and outlook}
\label{s_conc_out}
We have introduced variational quantum algorithms that make use of filtering operators to solve combinatorial optimization problems.
The Quantum Variational Filtering (QVF) algorithm uses a parameterized quantum circuit to approximate the repeated action of a filtering operator.
Filtering VQE (F-VQE) is a particularly efficient version of QVF and similar to VQE.
These algorithms impose no restrictions on the ansatz circuit so that we can choose the one that performs optimally on hardware.
We have also tested a hardware-efficient imaginary time evolution (HE-ITE) algorithm introduced in~\cite{BeFiLu20}.
This algorithm approximates the action of the imaginary time evolution operator.
Using causal cones, HE-ITE can require circuits that are significantly smaller than the problem size.
We have compared F-VQE and HE-ITE with VQE and QAOA using a set of random weighted MaxCut problems of various sizes.
The F-VQE algorithm achieved larger and more consistent approximation ratios, as well as more reliable convergence to the optimal solutions, than VQE and QAOA, while requiring fewer optimization steps.
The HE-ITE algorithm obtained similar approximation ratios and ground state convergence with drastically reduced qubit count.
Moreover, F-VQE successfully solved a 9-qubit MaxCut problem on the Honeywell H1 trapped-ion quantum processor~\cite{Pino2021}.
We conclude that F-VQE and HE-ITE are powerful algorithms to solve combinatorial optimization problems on noisy quantum computers.
Owing to the high flexibility of F-VQE, various promising strategies can be considered to further improve the performance.
F-VQE can readily be combined with the Conditional Value-at-Risk cost function \cite{Barkoutsos2020, Kolotouros2021, Diez2021} to provide new filtering operators with additional capabilities.
Local cost functions and shallow ansatz circuits can be used to avoid barren plateaus~\cite{Cerezo2021}.
F-VQE might also benefit from the original QAOA ansatz~\cite{Farhi2014} or generalizations thereof such as the Hamiltonian variational ansatz~\cite{WeHaTr15, Wiersema2020}, the quantum alternating operator ansatz~\cite{HaEtAl19}, the hardware-efficient mixer-phaser ansatz~\cite{LaRose2021} or the depth optimized QAOA ansatz~\cite{MaEtAl21}.
The ansatz can be specifically selected to minimize experimental noise on quantum hardware~\cite{Du2020}.
To this end, the quantum autoencoder is a powerful concept~\cite{RoOlAs17, ChWa21}.
It is enticing to combine F-VQE with the holographic ansatz~\cite{FoEtAl20} that has led to impressive results on the Honeywell quantum computer~\cite{FoEtAl21, ChEtAl21}.
Any ansatz can be enhanced by extending it with classical neural networks~\cite{ZhEtAl21}.
The classical optimizer for the parameters can simultaneously adjust the ansatz circuit structure to alleviate the effects of experimental noise~\cite{Ostaszewski_Benedetti_2021, Ostaszewski2021}.
It might also be beneficial to grow the ansatz during the optimization as in ADAPT-VQE~\cite{GrEtAl19}, Adapt-QAOA~\cite{ZhEtAl20}, layerwise learning~\cite{SkEtAl21} or Layer VQE~\cite{Liu2021}.
Advanced gradient-descent techniques like stochastic gradient descent may reduce the number of optimization steps and avoid local minima~\cite{Sweke2020}.
Finally, different heuristic adaptations of the parameter $\tau$ and the learning rate can be explored to gain more information from circuit samples.
An interesting future application for QVF and F-VQE is the computation of states different from the ground state.
Our filtering algorithms can target any state of energy $E_{\text{target}}$, e.g.\ a specific excited state, simply by using the slightly modified filtering operator $f(|\mathcal{H}-E_{\text{target}}|; \tau)$.
Another interesting application for QVF and F-VQE is the optimization based on black-box cost functions.
Our filtering algorithms (as well as VQE) can optimize a variational ansatz given any black-box cost function $\mathcal{H}_{\text{black-box}}(\bm{x})$ that takes as input a bitstring of binary variables $\bm{x}$ and returns the associated cost function value.
Therefore it is not necessary to represent the problem in terms of a QUBO Hamiltonian.
A different problem representation might, e.g., reduce the required qubit count.
We note that one of the first successful approaches for the optimization of such black-box cost functions is based on simulated annealing~\cite{KiGeVe83}, which made it possible to tackle real-world problems that were previously out of reach~\cite{Ro85}.
An interesting future project is the comparison of our filtering algorithms with simulated annealing.
It is exciting to think about using filtering operators for the computation of ground states of quantum Hamiltonians.
This can be achieved with some filtering operators.
For example, the imaginary time evolution filter is often used in combination with quantum Hamiltonians, for details see~\cite{BeFiLu20} and references therein.
Using the circuits described in this article, it is also straightforward to realize the power and Chebyshev filters for integer values of $\tau$ in QVF as well as F-VQE.
With respect to the inverse filter -- the best performing filtering operator of this work -- there exists a proposal to realize it with quantum Hamiltonians by means of a Fourier approximation~\cite{Kyriienko2020}.
With the help of the Fourier transform, also other filtering operators can be applied to quantum Hamiltonians~\cite{ZeSuYu21}.
The insights gained here are also useful for the design of new quantum-inspired classical algorithms that can obtain significant speedups compared to traditional classical methods~\cite{AlPe21, Ga21, Patti2021}.
Here one interesting question is whether tensor network methods for ground state minimization can benefit from the use of filtering operators.
Faster and more accurate ground state algorithms are crucial, e.g., for the construction of systematically improved functionals for density functional theory~\cite{LuEtAl16}, or to further improve fast solvers for the boundary value problem of general nonlinear partial differential equations~\cite{LuMoJa18}.
Such algorithms can also give us new answers to traditional tensor network questions regarding quantum phases and phase transitions in strongly correlated quantum systems~\cite{Or14, CiEtAl20}.
\section{Acknowledgments}
We thank Brian Neyenhuis and all the Honeywell Quantum Solutions team for their availability and support with the H1 device.
We acknowledge the cloud computing resources received from the `Microsoft for Startups' program. We also thank Seyon Sivarajah, Alec Edginton, Richie Yeung, Ross Duncan, and all the TKET development team for their technical support.
\appendix
\section{Hadamard test for QVF}
\label{app_hadamard}
In this Appendix we describe how to minimize the QVF cost function in Eq.~\eqref{eq_cost_fun} with the help of a Hadamard test.
\begin{figure*}
\caption{\label{fig_hadamard} Circuit $W(\bm{\theta}, \bm{\phi})$ for the Hadamard test, composed of one ancilla qubit and an $n$-qubit register.}
\end{figure*}
Evaluating the cost function in Eq.~\eqref{eq_cost_fun} requires computing a quantity of the form $\bra{\psi(\bm{\phi})} F_t \ket{\psi(\bm{\theta})}$, where $\ket{\psi(\bm{\theta})}$ and $\ket{\psi(\bm{\phi})}$ are $n$-qubit quantum ansatz circuits shown in Fig.~\ref{fig_ansatz}(a) with parameter vectors $\bm{\theta}$ and $\bm{\phi}$.
The computation of such a quantity is also needed for the gradient calculation in Eq.~\eqref{eq_grad}.
This quantity can be computed with a Hadamard test by evaluating the expectation value of the diagonal and Hermitian observable $Z_\text{anc} \otimes F_t$ with the specific circuit $W(\bm{\theta}, \bm{\phi})$ in Fig.~\ref{fig_hadamard} -- composed of one ancilla qubit labeled by $\text{anc}$ and an $n$-qubit register -- that implements the following operation:
\begin{eqnarray}
W(\bm{\theta}, \bm{\phi}) \ket{0}_\text{anc} \ket{\bm{0}} & = & \frac{1}{2} \ket{0}_\text{anc} ( \ket{\psi(\bm{\theta})} + \ket{\psi(\bm{\phi})} ) +\nonumber\\
& & \frac{1}{2} \ket{1}_\text{anc} ( \ket{\psi(\bm{\theta})} - \ket{\psi(\bm{\phi})} ) .
\end{eqnarray}
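Evaluating the observable $Z_\text{anc} \otimes F_t$ on this state yields the real part of the desired quantity, since
\begin{align*}
\langle Z_\text{anc} \otimes F_t \rangle &= \frac{1}{4} \big( \bra{\psi(\bm{\theta})} + \bra{\psi(\bm{\phi})} \big) F_t \big( \ket{\psi(\bm{\theta})} + \ket{\psi(\bm{\phi})} \big) \\
&- \frac{1}{4} \big( \bra{\psi(\bm{\theta})} - \bra{\psi(\bm{\phi})} \big) F_t \big( \ket{\psi(\bm{\theta})} - \ket{\psi(\bm{\phi})} \big) \\
&= \Re \bra{\psi(\bm{\phi})} F_t \ket{\psi(\bm{\theta})} ,
\end{align*}
where the last equality uses the Hermiticity of $F_t$; this real part is precisely the quantity that enters the gradient in Eq.~\eqref{eq_grad}.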
\section{Analytical derivatives}
\label{app_an_grad}
In this Appendix we derive the analytical gradient in Eq.~\eqref{eq_grad} and the diagonal elements of the Hessian matrix for the cost function in Eq.~\eqref{eq_cost_fun} corresponding to the QVF algorithm.
We use the parameter-shift rule~\cite{Mitarai2018, Schuld2019} to derive the analytical gradient in Eq.~\eqref{eq_grad_simp} for the F-VQE algorithm.
This procedure is also used to derive analytical gradients for VQE and QAOA.
\subsection{QVF and F-VQE}
\label{app_an_grad_fvqe}
In the quantum ansatz circuits $\ket{\psi(\bm{\theta})}$ of Fig.~\ref{fig_ansatz}(a) parameters $\bm{\theta} = (\theta_1,\,\theta_2,\ldots,\theta_m)$ are present only in rotation gates of the form $R_G(\theta_j) = \exp(-i \theta_j G/2)$, where $G$ is a single-qubit Pauli operator or a tensor product of Pauli operators.
As a function of a single parameter, the circuit can be expressed in terms of two fixed unitaries $V_A,\,V_B$ and the rotation gate corresponding to the parameter:
\begin{equation}
\ket{\psi(\theta_j)} = V_A R_G(\theta_j) V_B \ket{\bm{0}} ,
\end{equation}
where $\ket{\bm{0}} = \ket{0}^{\otimes n}$ is the initial register state.
Hence the first and second derivatives with respect to a parameter $\theta_j$ are:
\begin{align}
\dfrac{\partial \ket{\psi(\theta_j)}}{\partial \theta_j} &= \frac{1}{2} V_A R_G(\theta_j)(-i G) V_B \ket{\bm{0}} = \frac{1}{2} \ket{\psi(\theta_j + \pi)}\\
\dfrac{\partial^2 \ket{\psi(\theta_j)}}{\partial \theta_j^2} &= \frac{1}{4} V_A R_G(\theta_j) (-i G)^2 V_B \ket{\bm{0}} = -\frac{1}{4} \ket{\psi(\theta_j)} ,
\end{align}
where we have used that $-i G = R_G(\pi)$.
Then the first and second derivatives of the cost function in Eq.~\eqref{eq_cost_fun} are:
\begin{align}
\dfrac{\partial \mathcal{C}_t(\bm{\theta})}{\partial \theta_j} &= -\frac{\Re \bra{\psi_{t-1}} F_t \ket{\psi(\bm{\theta} + \pi \bm{e}_j)}}{2 \sqrt{\langle F_t^2 \rangle_{\psi_{t-1}}}} ,\\ \label{second_der_fvqe}
\dfrac{\partial^2 \mathcal{C}_t(\bm{\theta})}{\partial \theta_j^2} &= \frac{\Re \bra{\psi_{t-1}} F_t \ket{\psi(\bm{\theta})}}{4 \sqrt{\langle F_t^2 \rangle_{\psi_{t-1}}}} .
\end{align}
The first equation corresponds to Eq.~\eqref{eq_grad}.
Since the filtering operator $F_t$ is Hermitian, when the first equation is evaluated for the vector of parameters $\bm{\theta}_{t-1}$ that produces the state $\ket{\psi_{t-1}} = \ket{\psi(\bm{\theta}_{t-1})}$ we can use the parameter-shift rule to express the numerator as a sum of two circuits, which leads to Eq.~\eqref{eq_grad_simp}.
When the second equation is evaluated for $\bm{\theta}_{t-1}$, it results in:
\begin{align} \label{eq_second_der}
\left.\dfrac{\partial^2 \mathcal{C}_t(\bm{\theta})}{\partial \theta_j^2}\right|_{\bm{\theta}_{t-1}} &= \frac{\langle F_t \rangle_{\psi_{t-1}}}{4 \sqrt{\langle F_t^2 \rangle_{\psi_{t-1}}}} ,
\end{align}
which requires only the quantum circuit for $\ket{\psi_{t-1}}$.
\subsection{VQE}
\label{app_an_grad_vqe}
The cost function is the average energy $\langle \mathcal{H} \rangle_{\psi(\bm{\theta})}$ and the ansatz circuit $\ket{\psi(\bm{\theta})}$ is the same as the one employed by F-VQE shown in Fig.~\ref{fig_ansatz}(a).
Given that each parameter is present in only one rotation gate, the parameter-shift rule can be applied to express the analytical gradient as:
\begin{equation}
\label{eq_vqe_an_grad}
\left.\dfrac{\partial \langle \mathcal{H} \rangle_{\psi(\bm{\theta})}}{\partial \theta_j}\right|_{\bm{\theta}_{t-1}} = \frac{1}{2} \left(\langle \mathcal{H} \rangle_{\psi_{t-1}^{j+}} - \langle \mathcal{H} \rangle_{\psi_{t-1}^{j-}}\right) .
\end{equation}
As with F-VQE, the circuits $\ket{\psi_{t-1}^{j\pm}} \equiv \ket{\psi\left(\bm{\theta}_{t-1} \pm \frac{\pi}{2}\bm{e}_j\right)}$ are implemented by shifting the parameter $\theta_j$ by an amount $\pm \pi/2$.
Using the same methods, the second derivative with respect to each parameter can be evaluated as
\begin{equation}
\label{eq_second_der_vqe}
\left.\dfrac{\partial^2 \langle \mathcal{H} \rangle_{\psi(\bm{\theta})}}{\partial \theta_j^2}\right|_{\bm{\theta}_{t-1}} = \frac{1}{2} \langle \mathcal{H} \rangle_{\psi_{t-1}^{j++}} - \frac{1}{2} \langle \mathcal{H} \rangle_{\psi_{t-1}},
\end{equation}
where $\ket{\psi_{t-1}} = \ket{\psi(\bm{\theta}_{t-1})}$ is the state with no shifts, and $\ket{\psi_{t-1}^{j++}} = \ket{\psi(\bm{\theta}_{t-1} + \pi\bm{e}_j)}$ is the state with a $+\pi$ shift.
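As a quick numerical sanity check of Eqs.~\eqref{eq_vqe_an_grad} and \eqref{eq_second_der_vqe}, the following single-qubit sketch compares the parameter-shift expressions with finite differences; the toy Hamiltonian and the single rotation gate are illustrative assumptions.
\begin{verbatim}
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = 0.7 * Z                                    # toy single-qubit "Hamiltonian"

def state(theta):
    """|psi(theta)> = exp(-i*theta*X/2)|0>, a single rotation of the ansatz form."""
    rx = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X
    return rx @ np.array([1.0, 0.0])

def energy(theta):
    psi = state(theta)
    return np.real(psi.conj() @ H @ psi)

theta = 0.4
# parameter-shift first and second derivatives
grad_ps = 0.5 * (energy(theta + np.pi / 2) - energy(theta - np.pi / 2))
hess_ps = 0.5 * (energy(theta + np.pi) - energy(theta))
# finite-difference references
eps = 1e-4
grad_fd = (energy(theta + eps) - energy(theta - eps)) / (2 * eps)
hess_fd = (energy(theta + eps) - 2 * energy(theta) + energy(theta - eps)) / eps**2
assert abs(grad_ps - grad_fd) < 1e-6 and abs(hess_ps - hess_fd) < 1e-6
\end{verbatim}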
\subsection{QAOA}
\label{app_an_grad_qaoa}
The QAOA ansatz in Eq.~\eqref{eq_qaoa_ansatz} is not of the previous form: here parameters multiply sums of tensor products of Pauli operators so that the partial derivatives become sums of circuit evaluations.
Given a Hamiltonian $\mathcal{H} = \sum_{k = 1}^K h_k Z_{Q_k}$ with real coefficients $h_k$ and $Z_{Q_k} = \bigotimes_{q \in Q_k} Z_q$, the ansatz derivatives are:
\begin{align}
\dfrac{\partial \ket{\psi(\bm{\gamma}, \bm{\beta})}}{\partial \gamma_j} &= \sum_{q = 1}^n \tilde{U}_{(j, p)} (-i X_q) \tilde{U}_{(1, j-1)} \ket{+}^{\otimes n} ,\\
\dfrac{\partial \ket{\psi(\bm{\gamma}, \bm{\beta})}}{\partial \beta_j} &= \sum_{k = 1}^K h_k \tilde{U}_{(j+1, p)} (-i Z_{Q_k}) \tilde{U}_{(1, j)} \ket{+}^{\otimes n} .
\end{align}
Here $\tilde{U}_{(a, b)} = U(\gamma_b, \beta_b) \cdots U(\gamma_{a+1}, \beta_{a+1}) U(\gamma_a, \beta_a)$, where $U(\gamma_j, \beta_j)$ is defined in Eq.~\eqref{eq_qaoa_unitary}, contains all QAOA circuit layers from $a$ to $b$.
Therefore the partial derivatives of the QAOA cost function are:
\begin{align}
\dfrac{\partial \langle \mathcal{H} \rangle_{\psi(\bm{\gamma}, \bm{\beta})}}{\partial \gamma_j} &= 2\Re \bra{\psi(\bm{\gamma}, \bm{\beta})} \mathcal{H} \dfrac{\partial \ket{\psi(\bm{\gamma}, \bm{\beta})}}{\partial \gamma_j} ,\\
\dfrac{\partial \langle \mathcal{H} \rangle_{\psi(\bm{\gamma}, \bm{\beta})}}{\partial \beta_j} &= 2\Re \bra{\psi(\bm{\gamma}, \bm{\beta})} \mathcal{H} \dfrac{\partial \ket{\psi(\bm{\gamma}, \bm{\beta})}}{\partial \beta_j} .
\end{align}
If the parameter-shift rule is applied to every term individually, we obtain the analytical gradient for the QAOA ansatz in terms of ansatz circuits for various parameter sets:
\begin{align}
\dfrac{\partial \langle \mathcal{H} \rangle_{\psi(\bm{\gamma}, \bm{\beta})}}{\partial \gamma_j} &= \sum_{q = 1}^n \left( \langle \mathcal{H} \rangle_{\psi_X^{(j, q)+}} - \langle \mathcal{H} \rangle_{\psi_X^{(j, q)-}} \right) ,\label{eq_qaoa_grad_gamma}\\
\dfrac{\partial \langle \mathcal{H} \rangle_{\psi(\bm{\gamma}, \bm{\beta})}}{\partial \beta_j} &= \sum_{k = 1}^K h_k \left( \langle \mathcal{H} \rangle_{\psi_{\mathcal{H}}^{(j, k)+}} - \langle \mathcal{H} \rangle_{\psi_{\mathcal{H}}^{(j, k)-}} \right) .\label{eq_qaoa_grad_beta}
\end{align}
The evaluation of all gradient components requires the following $2 p (n + K)$ circuits that are defined by inserting a rotation gate in the ansatz circuit:
\begin{align}
\ket{\psi_X^{(j, q)\pm}} &= \tilde{U}_{(j, p)} R_{X_q} \left( \pm \frac{\pi}{2} \right) \tilde{U}_{(1, j-1)} \ket{+}^{\otimes n} ,\\
\ket{\psi_{\mathcal{H}}^{(j, k)\pm}} &= \tilde{U}_{(j+1, p)} R_{Z_{Q_k}} \left( \pm \frac{\pi}{2} \right) \tilde{U}_{(1, j)} \ket{+}^{\otimes n} .
\end{align}
\section{Experimental details}
\label{app_exp_det}
This Appendix provides further details on the 9-qubit experimental results, shown in Fig.~\ref{fig_honey}, obtained with the Honeywell H1 trapped-ion quantum processor solving a MaxCut problem.
The considered MaxCut problem is defined on the 10-node 3-regular weighted graph depicted in Fig.~\ref{fig_exp}(a).
The corresponding MaxCut weights are given in Tab.~\ref{t_weights}.
The solution divides the nodes into the two sets of black and white nodes shown in Fig.~\ref{fig_exp}(a), equivalent to the 10-variable solution vector $(1, -1, -1, 1, 1, -1, 1, -1, -1, -1)$ which corresponds to the 9-qubit solution computational state $\ket{011001011}$ in our experiment, as explained in Section~\ref{subsec:problem_hamiltonian}.
Figure~\ref{fig_exp}(b) shows the parameterized quantum circuit that is used in the experiment.
This circuit is simpler than the ones that are used for the simulations -- cf.~Fig.~\ref{fig_ansatz}(a) -- as it contains just $p = 1$ layer with two single-qubit rotation gates per qubit, one at the beginning and one at the end of the circuit.
Before the circuit of Fig.~\ref{fig_exp}(b) is run on the Honeywell H1 processor, it is compiled into the circuit shown in Fig.~\ref{fig_exp}(c) which is composed of the native gates of the processor.
There are three native gates~\cite{Pino2021}: the standard single-qubit rotation gate around the $Z$ axis, a single-qubit rotation gate around an axis in the $X$-$Y$ plane $PX(\theta, \phi) = R_{z}(\phi) R_{x}(\theta) R_{z}(-\phi)$, and the two-qubit interaction gate $U_{zz} = \exp(-i \pi Z \otimes Z / 4)$.
The compilation of the original two-qubit gates in Fig.~\ref{fig_exp}(b) changes the rotation angles of the single-qubit gates to
\begin{equation}
f^{\pm}_{j} = 2 \text{atan2} \left( -\cos{\frac{\theta_{j}}{2}} \pm \sin{\frac{\theta_{j}}{2}}, \cos{\frac{\theta_{j}}{2}} \pm \sin{\frac{\theta_{j}}{2}} \right) .
\end{equation}
\begin{figure*}
\caption{\label{fig_exp} (a) The 10-node 3-regular weighted graph of the MaxCut problem solved in the experiment and its optimal cut. (b) Parameterized quantum circuit used in the experiment. (c) The circuit of (b) compiled into the native gates of the Honeywell H1 processor.}
\end{figure*}
\begin{table}
\begin{tabular}{c|c}
edge & weight\\
\hline
(1, 2) & 0.0609\\
(1, 8) & 0.1574\\
(1, 9) & 0.1392\\
(2, 5) & 0.6131\\
(2, 10) & 0.2670
\end{tabular}
\hspace{4mm}
\begin{tabular}{c|c}
edge & weight\\
\hline
(3, 5) & 0.4156\\
(3, 7) & 0.2020\\
(3, 8) & 0.0120\\
(4, 8) & 0.6738\\
(4, 9) & 0.5296
\end{tabular}
\hspace{4mm}
\begin{tabular}{c|c}
edge & weight\\
\hline
(4, 10) & 0.9927\\
(5, 6) & 0.2617\\
(6, 7) & 0.7781\\
(6, 10) & 0.0202\\
(7, 9) & 0.3973
\end{tabular}
\caption{\label{t_weights}
MaxCut weights for the MaxCut problem considered in our experiment based on the graph in Fig.~\ref{fig_exp}(a).
}
\end{table}
\end{document}
\begin{document}
\begin{frontmatter}
\title{A multi-region nonlinear age-size structured fish population model}
\author{Blaise Faugeras\corauthref{cor1}}
\ead{[email protected]}
\corauth[cor1]{}
\author{and Olivier Maury}
\address{IRD, CRHMT, Av. Jean Monnet, BP 171, 34200 S\`ete, France}
\begin{abstract}
The goal of this paper is to present a generic multi-region
nonlinear age-size structured fish population model, and to
assess its mathematical well-posedness. An initial-boundary value
problem is formulated. Existence and uniqueness of a positive weak
solution are proved. Finally, a comparison result is derived: the population
in all regions decreases when the mortality rate increases in at least one region.
\end{abstract}
\begin{keyword}
Population dynamics \sep age-size structure \sep system of partial differential equations
\sep initial-boundary value problem \sep variational formulation \sep positivity.
\end{keyword}
\end{frontmatter}
\section{Introduction}
Fish population dynamics models are essential to provide
assessments of fish abundance and fishing pressure. Their use
forms the basis of scientific advice for fisheries management.
Discrete age-structured models are most often used for fisheries
stock assessments \cite{Megrey:1989}. Indeed, ecologists,
mathematicians and population biologists have observed that the
age structure provides more realistic results at reasonable
computational expense for a wide variety of biological populations
(see \cite{Webb:1985}, \cite{DeAngelis:1993}, \cite{Swart:1994}, \cite{Arino:1995}).\\
In this paper we study a model which was first designed to
represent Atlantic bigeye tuna populations \cite{Maury:2004} but
which is also generic enough to be potentially useful for various
fish species. Indeed, most fish populations share specific
characteristics which have to be taken into account in order
to model their dynamics in a realistic manner.\\
A first point concerning tuna fisheries is that they are highly
heterogeneous in space and time. This has an important impact on
their functioning. Important migrations of fish occur at various
scales and fish movements have to be explicitly represented.
Moreover, growth potentially varies with space, that is to say with
the region of the ocean under consideration. Hence, fishes of the
same age can exhibit very different sizes depending on their
individual histories. Consequently, a spatialized approach taking
explicitly into account the potential variability of growth in
space
has to be used.\\
A second point is that, because of non-uniform mortality over
sizes, biases in both growth and mortality estimates may result from
simply adding a Gaussian size distribution to an age-structured
model, as is generally done. It is reasonable to think that the
use of both age and size as structure variables
should make it possible to overcome this difficulty.\\
These are some of the principal problems of current stock
assessment models. That is why it is necessary to carry on the
modelling effort by proposing and testing more complex models.
This paper follows
this direction and its purpose is twofold.\\
First we describe a synthetic and generic model of population
dynamics in which both age and size are taken as structure
variables and in which fish movements among spatial regions are
explicitly represented. The model is a system of coupled partial
differential equations. Nonlocal nonlinearities appear in the
boundary conditions modelling recruitment, that is to say the birth
law or density-dependent fish reproduction. The relative
complexity of the model enables a direct and simultaneous comparison with all
the data available for tuna fisheries such as catches, fishing
efforts, size frequencies, tagging data, and otolith increments.
This paper does not aim at getting into all the details of the
parameterizations used to represent a particular tuna population
and we refer to \cite{Maury:2004} for these points.\\
Our second and most important goal is to assess the mathematical
well-posedness of the model. The paper is organized as follows.
The equations of the model are presented in Section
\ref{sec:present}. Sections \ref{sec:preliminaries},
\ref{section:exist} and \ref{section:positive} deal with the
mathematical analysis of the model. In Section
\ref{sec:preliminaries} we formulate an initial-boundary value
problem, introduce a variational formulation and state our main
mathematical results. Existence of a unique weak solution is shown
in Section \ref{section:exist}. As often with nonlinear problems
the proof uses a fixed point argument. The methodology follows the
one proposed in \cite{Langlais:1985} for a scalar equation. It has
to be adapted in order to be able to deal with our nonlinear
system. We also show positivity of the solution and give a
comparison result in Section \ref{section:positive}. Namely we
prove that if the fish mortality rate increases in at least one
geographic region then the population globally decreases in all
regions.
\section{The model}
\label{sec:present} The dynamics of the population of fish is
described through density functions $p_i(t,a,l)$ where time $t \in
(0,T)$, age $a \in (0,A)$ and length $l \in (0,L)$ are continuous
variables and the subscript $i \in [1:N]$ refers to the geographic
zone or region under consideration. The number of fish of age
between $a_1$ and $a_2$, of length between $l_1$ and $l_2$ at time
$t$ in region $i$ is given by the integral
\begin{equation*}
\displaystyle \int_{a_1}^{a_2} \int_{l_1}^{l_2} p_i(t,a,l)dl da,
\end{equation*}
Let us set ${\mathcal{O}}=(0,T)\times(0,A)$ and
${\mathcal{Q}}={\mathcal{O}} \times (0,L)$. The time evolution of
the population given by Eq. \ref{eqn:e1-1} includes the following
processes.
In region $i$, as time goes on and fishes grow older, their length
increases with a growth rate $\gamma_i$.
In a fish population individuals of the same age can often differ
markedly in size \cite{Pfister:2002}. This variability in growth
can result from many different mechanisms, including genetic or
behavioral traits that confer different performances to
individuals, and factors such as environmental heterogeneity and
variability \cite{Beverton:1996}. In fishery science, this
variability is usually taken into account in age-structured models
using a length-at-age relation perturbed by a Gaussian noise with
a length dependent standard deviation (see for example
\cite{Fournier:1990}). The model discussed here is
length-structured and uses a diffusion term in the length variable
with dispersion rate $d_i$ to account for individuals having the
same age but different lengths. The advection-diffusion term in
length can be seen as the limit of a random walk model in which
each individual grows with an average velocity, but has at each
time step a small binomial probability to grow faster or slower
than this average (see the book by Okubo \cite{Okubo:1980} for
more details).
The model also describes mortality and migration of individuals.
The mortality rate is split into natural mortality $\mu_i$ and
fishing mortality $f_i$. Let also $m_{i\rightarrow j}$ be the
migration rate of individuals going
from region $i$ to region $j$ ($m_{i\rightarrow j}=0$ if regions $i$ and $j$ are not adjacent).\\
The density functions $p_i$ for $i \in [1:N]$ follow the balance
law:
\begin{equation}
\label{eqn:e1-1} \left \lbrace \begin{array}{ll}
\partial_t p_i(t,a,l) + \partial_a p_i(t,a,l)=&
\partial_l(d_i(t,a,l) \partial_l p_i(t,a,l)) - \partial_l(\gamma_i(t,a,l) p_i(t,a,l))\\
&+ \displaystyle \sum_{j \ne i}^N m_{j \rightarrow i}(t,a,l) p_j(t,a,l)\\
&- (\displaystyle \sum_{j \ne i}^N m_{i \rightarrow j}(t,a,l) ) p_i(t,a,l)\\
& -(\mu_i(t,a,l) + f_i(t,a,l))p_i(t,a,l),\quad (t,a,l) \in
\mathcal{Q},
\end{array}
\right.
\end{equation}
These equations have to be completed with initial and boundary conditions.\\
Homogeneous Neumann boundary conditions at $l=0$ and $l=L$ express
the fact that the length of individuals can not reach negative
values or values larger than $L$.
\begin{equation}
\label{eqn:e1-4}
\partial_l p_i(t,a,0)=\partial_l p_i(t,a,L)=0,\quad (t,a)\in \mathcal{O}.
\end{equation}
The initial age and size distribution is prescribed,
\begin{equation}
\label{eqn:e1-2} p_i(0,a,l)=p_i^0(a,l),\quad (a,l) \in
(0,A)\times(0,L).
\end{equation}
We also need a boundary condition for $a=0$ that is to say a
recruitment law. It is written as:
\begin{equation}
\label{eqn:e1-3} p_i(t,0,l)=\beta_i(t,l,P_i(t)),\quad (t,l)\in
(0,T)\times(0,L),
\end{equation}
The length of recruited fish is assumed to lie between $0$ and a
small constant length $L_b$. Moreover we denote by $L_m$ the
minimal length of fishes which have reached maturity. $L_b$ and
$L_m$ satisfy $0<L_b<L_m<L$. The stock spawning biomass is
calculated as
\begin{equation}
\label{eqn:e2} P_i(t)=\displaystyle \int_0^A \int_{L_m}^L
w_i(t,a,l)p_i(t,a,l)dlda,
\end{equation}
where $w_i$ is a weighting function. Finally we use a Beverton and
Holt \cite{Beverton:1996} stock-recruitment relation in each
region and obtain,
\begin{equation}
\label{eqn:e3} \beta_i(t,l,P)=\hbox{\rm l\hskip -6pt 1}_{[0,L_b]}(l) \psi_i(t) \displaystyle
\frac{P}{\theta_i + P},
\end{equation}
where $\hbox{\rm l\hskip -6pt 1}_{[0,L_b]}$ is the usual characteristic function,
$\theta_i > 0$ is a constant parameter and $\psi_i(t)$ is a given
function of time used to parameterize fluctuations of the
recruitment not taken into account in the Beverton and Holt
relation.
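Although the analysis below is purely functional-analytic, the structure of the model can be illustrated by a minimal explicit discretization of a single-region, constant-coefficient version of Eq. (\ref{eqn:e1-1}); the grid sizes, rates, uniform initial distribution and the no-flux treatment of the length boundaries are arbitrary illustrative choices and not part of the model.
\begin{verbatim}
import numpy as np

def simulate_single_region(T=5.0, A=10.0, L=1.0, na=100, nl=40,
                           d=5e-4, gamma=0.08, mu=0.2,
                           L_b=0.1, L_m=0.5, psi=50.0, theta=1.0):
    """Explicit scheme for a one-region, constant-coefficient version of the model.

    Time and age share the same step, so the transport operator d/dt + d/da is
    handled exactly by shifting cohorts; diffusion and growth in length use an
    explicit finite-volume update with no-flux boundaries (a simplification of
    the homogeneous Neumann condition).
    """
    da, dl = A / na, L / nl
    dt = da                                    # step along characteristics t - a = const
    l = (np.arange(nl) + 0.5) * dl             # cell-centred length grid
    p = np.ones((na, nl))                      # arbitrary uniform initial distribution
    recruit_cells = l <= L_b                   # lengths of newly recruited fish
    mature_cells = l >= L_m                    # lengths counted in the spawning biomass

    for _ in range(int(T / dt)):
        # diffusive and advective fluxes at interior length interfaces (upwind in gamma)
        flux = -d * np.diff(p, axis=1) / dl + gamma * p[:, :-1]
        flux = np.hstack([np.zeros((na, 1)), flux, np.zeros((na, 1))])
        p = p + dt * (-(flux[:, 1:] - flux[:, :-1]) / dl - mu * p)
        # spawning biomass with unit weighting function, cf. Eq. (eqn:e2)
        biomass = np.sum(p[:, mature_cells]) * da * dl
        # age every cohort by one class and apply Beverton-Holt recruitment, Eq. (eqn:e3)
        p = np.vstack([np.zeros((1, nl)), p[:-1, :]])
        p[0, recruit_cells] = psi * biomass / (theta + biomass)
    return l, p
\end{verbatim}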
\section{Main assumptions and preliminary results}
\label{sec:preliminaries}
In this section we set the mathematical frame in which the
analysis is conducted. We formulate the main assumptions which are
made on the data of the model, give the definition of a weak
solution to the initial-boundary value problem and state our
results in Theorems \ref{theo:exist-unique} and
\ref{theo:comparaison}.
\subsection{Functional spaces}
Let us introduce the functional spaces which we use in the remainder of this work.\\
The vectorial notation ${\bf p} =(p_1,...,p_N)^T$ is used. The usual
scalar product of two vectors ${\bf p},{\bf q} \in \mathbb{R}^N$ is denoted
by ${\bf p}.{\bf q}$ and the norm of
${\bf p}$ by $|{\bf p}|$.\\
${\bf H}$ and ${\bf H}^1$ are the separable Hilbert spaces defined by ${\bf H}
=(L^2(0,L))^N$ and ${\bf H}^1 = (H^1(0,L))^N$. ${\bf H}$ is equipped with
the scalar product
$$
({\bf p},{\bf q})_{{\bf H}}= \displaystyle \int_{0}^{L} {\bf p}(l).{\bf q}(l)dl.
$$
We denote by $||.||_{{\bf H}}$ the induced norm on ${\bf H}$.\\
${\bf H}^1$ is equipped with the scalar product
$$({\bf p},{\bf q})_{{\bf H}^1}= \displaystyle
\int_{0}^{L} {\bf p}(l).{\bf q}(l)dl+ \displaystyle \int_{0}^{L}
\partial_l {\bf p}(l).\partial_l {\bf q}(l)dl.
$$
We denote by $||.||_{{\bf H}^1}$, the induced norm on ${\bf H}^1$.\\
By $<.,.>$ we denote the duality between ${\bf H}^1$ and its dual $({\bf H}^1)'$.\\
$L^2({\mathcal{O}},{\bf H})$ (resp. $L^2({\mathcal{O}},{\bf H}^1)$) denotes the Hilbert space of
measurable functions of ${\mathcal{O}}$ with values in ${\bf H}$ (resp. ${\bf H}^1$)
such that\\
$||{\bf p}||_{L^2({\mathcal{O}},{\bf H})} = ( \displaystyle \int_{\mathcal{O}} ||{\bf p}(t,a,.)||_{{\bf H}}^2 dt da)^{1/2} < \infty$
(resp. $||{\bf p}||_{L^2({\mathcal{O}},{\bf H}^1)} < \infty$). \\
We also make use of the notation $V=L^2({\mathcal{O}},{\bf H}^1)$ and
the dual space $V'=L^2({\mathcal{O}},({\bf H}^1)')$. By $<<.,.>>$ we denote the duality between $V$ and its dual $V'$.\\
The partial derivatives $\partial_t$ and $\partial_a$ denote
differentiation in ${\mathcal{D}}'({\mathcal{O}},({\bf H}^1)')$ and $D$ stands for
$\partial_t + \partial_a$.\\
We will have to use the following trace result.
\begin{lemma}
\label{lemma:trace}
Let ${\bf p},{\bf q} \in V$ such that $D{\bf p},D{\bf q} \in V'$. It holds that:\\
For all $t_0 \in (0,T)$ and all $a_0 \in (0,A)$, ${\bf p}$ has a trace
at $t=t_0$ belonging to $(L^2((0,A)\times(0,L)))^N$ and at $a=a_0$
belonging to $(L^2((0,T)\times(0,L)))^N$. The trace applications
are continuous in the strong and weak topology. Moreover the
following integration by parts formula holds,
$$
\begin{array}{ll}
\displaystyle \int_{\mathcal{O}} [<D{\bf p},{\bf q}>+<D{\bf q},{\bf p}>] dtda=&\displaystyle \int_0^A \int_0^L [{\bf p}.{\bf q}(T,a,l)-{\bf p}.{\bf q}(0,a,l)]dadl\\[7pt]
&+ \displaystyle \int_0^T \int_0^L [{\bf p}.{\bf q}(t,A,l)-{\bf p}.{\bf q}(t,0,l)]dtdl
\end{array}
$$
\end{lemma}
\begin{proof}
This result is the extension to dimension $N$ of Lemma 0 in
\cite{Garroni:1982}. Also see \cite{Lions:1968}.
\end{proof}
\noindent We will also have to consider the space ${\bf L}^{\infty} = (L^{\infty}({\mathcal{Q}}))^N$. $L^{\infty}({\mathcal{Q}})$ is a Banach
space equipped with the norm $||p_i||_{\infty} = {\mathrm{inf}}
\lbrace M;|p_i(t,a,l)| \le M \ {\mathrm{a.e.}} \ {\mathrm{in}} \
{\mathcal{Q}} \rbrace$. Similarly ${\bf L}^{\infty}$ is a Banach space
equipped with the norm $||{\bf p}||_{\infty} =
\underset{i\in[1:N]}{\max}||p_i||_{\infty}$.
\subsection{Assumptions on the data and preliminary transformation of the system}
The movement rates $m_{i \rightarrow j}$ are assumed to satisfy
\begin{itemize}
\item $m_{i \rightarrow j}(t,a,l) \ge 0$ a.e in ${\mathcal{Q}}$, $m_{i \rightarrow j} \in L^{\infty}({\mathcal{Q}})$.
\end{itemize}
We define the matrix of movements ${\bf M}$ by
$$
M_{ij}= \left \lbrace \begin{array}{l}
m_{j \rightarrow i} \quad {\mathrm{if}}\ i \ne j,\\
- \displaystyle \sum_{k \ne i}^{N} m_{i \rightarrow k} \quad {\mathrm{if}}\ i = j.\\
\end{array}
\right.
$$
Hence the term, $[\displaystyle \sum_{j \ne i}^N m_{j \rightarrow i} p_j
- (\displaystyle \sum_{j \ne i}^N m_{i \rightarrow j} ) p_i]$, in Eq. \ref{eqn:e1-1} can be written in matrix form as $({\bf M}{\bf p})_i$.\\
\noindent Concerning the diffusion coefficients $d_i$, the growth
rates $\gamma_i$ and the natural and fishing mortality rates
$\mu_i$ and $f_i$, we make the following assumptions for all $i
\in [1:N]$:
\begin{itemize}
\item $d_i(t,a,l) \ge d^0 > 0$, a.e in ${\mathcal{Q}}$, $d_i \in L^{\infty}({\mathcal{Q}})$,
\item $\gamma_i(t,a,l)$ is differentiable with respect to $l$, and $\gamma_i, \partial_l \gamma_i \in L^{\infty}({\mathcal{Q}})$,
\item $\mu_i(t,a,l),f_i(t,a,l) \ge 0$, a.e in ${\mathcal{Q}}$, $\mu_i,f_i \in L^{\infty}({\mathcal{Q}})$. We also make use of the
notation $z_i=\mu_i+f_i$.
\end{itemize}
\noindent In the formulation of the recruitment process (Eq.
\ref{eqn:e1-3}) $\psi_i$ and $w_i$ satisfy:
\begin{itemize}
\item $\psi_i(t) \ge 0$ a.e in $(0,T)$ and $\psi_i \in L^{\infty}(0,T)$,
\item $w_i(t,a,l) \ge 0$ a.e in ${\mathcal{Q}}$ and $w_i \in L^{\infty}({\mathcal{Q}})$.
\end{itemize}
\noindent The initial distributions $p_i^0(a,l)$ satisfy for all
$i \in [1:N]$:
\begin{itemize}
\item $p_i^0(a,l) \ge 0$ a.e in ${\mathcal{Q}}$, $p_i^0 \in L^{2}((0,A)\times(0,L))$.
\end{itemize}
\noindent In order to prove our existence result it is convenient
to perform a change of unknown function: ${\bf p}$ satisfies
(\ref{eqn:e1-1})-(\ref{eqn:e1-3}) if and only if
$\hat{{\bf p}}=e^{-\lambda t}{\bf p}$ is a solution to the same system where
$-(\mu_i+f_i)p_i$ is replaced by $-(\mu_i+f_i+\lambda)p_i$ in Eq.
\ref{eqn:e1-1} and $\beta_i$ in the expression of the boundary
condition at $a=0$ (Eq. \ref{eqn:e1-3}) is replaced by
\begin{equation}
\label{eqn:e4} \hat{\beta}_i(t,l,\hat{P}_i(t))= \hbox{\rm l\hskip -6pt 1}_{[0,L_b]}(l)
\psi_i(t) \displaystyle \frac{\hat{P}_i(t)}{\theta_i e^{-\lambda t} +
\hat{P}_i(t)},
\end{equation}
\begin{equation}
\label{eqn:e5} \hat{P}_i(t)=\displaystyle \int_0^A \int_{L_m}^L
w_i(t,a,l)\hat{p}_i(t,a,l)dlda.
\end{equation}
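This can be checked directly for the differential part of the system: since $\hat{{\bf p}}=e^{-\lambda t}{\bf p}$ and the right-hand side of (\ref{eqn:e1-1}) is linear in ${\bf p}$,
\begin{equation*}
(\partial_t + \partial_a)\hat{{\bf p}} = e^{-\lambda t}(\partial_t + \partial_a){\bf p} - \lambda \hat{{\bf p}} ,
\end{equation*}
so each equation for $\hat{p}_i$ carries the additional absorption term $-\lambda \hat{p}_i$, which amounts to replacing $\mu_i+f_i$ by $\mu_i+f_i+\lambda$.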
In the remaining part of this paper this change of unknown is
implicitly done and we omit the $\hat{p}_i$ notation. The constant
$\lambda$ will be fixed to a convenient value below. Moreover, since the
term $\theta_i e^{-\lambda t} + \hat{P}_i(t)$ may vanish, we define
\begin{equation}
\label{eqn:e6} \beta_i(t,l,P_i(t))= \hbox{\rm l\hskip -6pt 1}_{[0,L_b]}(l) \psi_i(t) \displaystyle
\frac{P_i(t)}{\theta_i e^{-\lambda t} + |P_i(t)|}.
\end{equation}
This formulation will be used in the following. We will show that
if the initial distributions $p_i^0$ are nonnegative then $p_i \ge 0$
a.e. in ${\mathcal{Q}}$, so the two formulations are equivalent.
\subsection{Variational formulation and weak solutions}
Formally multiplying Eq. \ref{eqn:e1-1} by a function $q_i$ and
integrating by parts on $(0,L)$ leads to the definition of the
following bilinear forms. For $p_i,q_i \in H^1(0,L)$ let us define,
\begin{equation}
\label{eqn:bi} b_i(p_i,q_i)=\displaystyle \int_0^L d_i \partial_l p_i
\partial_l q_i dl + \displaystyle \int_0^L \gamma_i (\partial_l p_i)q_i dl
+\displaystyle \int_0^L (z_i + \partial_l \gamma_i + \lambda )p_i q_i dl,
\end{equation}
\begin{equation}
\label{eqn:ci} c_i({\bf p},q_i)=-\displaystyle \int_0^L ({\bf M}{\bf p})_iq_i dl,
\end{equation}
\begin{equation}
\label{eqn:ei} e_i({\bf p},q_i)=b_i(p_i,q_i)+c_i({\bf p},q_i)
\end{equation}
Summing over $i$, we define for ${\bf p},{\bf q} \in {\bf H}^1$, the bilinear form
$e({\bf p},{\bf q})$ by,
\begin{equation}
\label{eqn:e} e({\bf p},{\bf q})=\displaystyle \sum_{i=1}^N e_i({\bf p},q_i)
\end{equation}
\begin{lemma}
\label{lemmacoercive} For $\lambda > (\displaystyle \frac{1}{2d^0}
||\gamma||^2_\infty +||\partial_l \gamma||^2_\infty +N
||{\bf M}||_\infty )$, the bilinear form $e(.,.)$ is continuous and
coercive on ${\bf H}^1 \times {\bf H}^1$, i.e there exist constants $C_1>0$
and $C_2>0$ such that
\begin{equation}
\label{continuous} |e({\bf p},{\bf q})| \le C_1 ||{\bf p}||_{{\bf H}^1} ||{\bf q}||_{{\bf H}^1},\
\forall {\bf p},{\bf q} \in {\bf H}^1,
\end{equation}
\begin{equation}
\label{coercive} e({\bf p},{\bf p}) \ge C_2 ||{\bf p}||_{{\bf H}^1}^2,\ \forall {\bf p} \in
{\bf H}^1.
\end{equation}
\end{lemma}
\begin{proof}
Using Cauchy-Schwarz inequality we obtain,
$$
|\displaystyle \sum_i b_i(p_i,q_i)| \le (||{\bf d}||_\infty +||\bgamma||_\infty
+||{\bf z}||_\infty +||\partial_l \bgamma||_\infty + \lambda)
||{\bf p}||_{{\bf H}^1}||{\bf q}||_{{\bf H}^1}
$$
and
$$
|\displaystyle \sum_i c_i({\bf p},q_i)| = | \displaystyle \sum_i \sum_j \int_0^L M_{ij} p_j
q_i dl| \le ||{\bf M}||_\infty N ||{\bf p}||_{{\bf H}^1} ||{\bf q}||_{{\bf H}^1}.
$$
which proves (\ref{continuous}).\\
Again using Cauchy-Schwarz inequality yields
$$
|\displaystyle \int_0^L \gamma_i \partial_l p_i p_i|dl \le ||\gamma||_\infty
||\partial_l p_i||_{L^2(0,L)} ||p_i||_{L^2(0,L)}.
$$
Young's inequality then gives for any $\alpha >0$
$$
|\displaystyle \int_0^L \gamma_i \partial_l p_i p_i dl | \le
\displaystyle \frac{\alpha}{2}||\partial_l p_i||_{L^2(0,L)}^2 +
\displaystyle \frac{1}{2\alpha} ||\gamma||^2_\infty ||p_i||^2_{L^2(0,L)}.
$$
Therefore we have that
$$
\displaystyle \sum_i \int_0^L \gamma_i \partial_l p_i p_i dl \ge
-\displaystyle \frac{\alpha}{2} ||\partial_l {\bf p}||_{{\bf H}}^2 -
\displaystyle \frac{1}{2\alpha} ||\gamma||^2_\infty ||{\bf p}||^2_{{\bf H}}.
$$
Now since $\mu_i$ and $f_i$ are positive and $d_i$ is bounded
below by $d^0$, it follows that
$$
e({\bf p},{\bf p}) \ge (d^0 -\displaystyle \frac{\alpha}{2})||\partial_l {\bf p}||_{{\bf H}}^2
+(\lambda-(\displaystyle \frac{1}{2\alpha} ||\gamma||^2_\infty +||\partial_l
\gamma||^2_\infty +N ||{\bf M}||_\infty ))||{\bf p}||^2_{{\bf H}}.
$$
It is possible to choose $\alpha=d^0$ and $\lambda$ such that $
\lambda^0=(\lambda-(\displaystyle \frac{1}{2d^0} ||\gamma||^2_\infty
+||\partial_l \gamma||^2_\infty +N ||{\bf M}||_\infty )) >0$ and
$C_2=\min(\displaystyle \frac{d^0}{2},\lambda^0)$.
\end{proof}
We can now give the definition of a weak solution to the
initial-boundary value problem (\ref{eqn:e1-1})-(\ref{eqn:e1-3})
and state the results which are shown in Sections
\ref{section:exist} and \ref{section:positive}. \noindent A weak
solution to the initial-boundary value problem
(\ref{eqn:e1-1})-(\ref{eqn:e1-3})
is a vector valued function ${\bf p}$ satisfying the following {\bf problem (P)}:\\
Find
\begin{equation}
\label{P1} {\bf p} \in V,\ {\mathrm{such}}\ {\mathrm{that}}\ D{\bf p} \in
V',
\end{equation}
solution of
\begin{equation}
\label{P2} \displaystyle \int_{\mathcal{O}} < D{\bf p},{\bf q}>dtda+\displaystyle \int_{\mathcal{O}} e({\bf p},{\bf q})dtda=0,\
\forall {\bf q} \in V,
\end{equation}
\begin{equation}
\label{P3} {\bf p}(0,a,l)={\bf p}^0(a,l) \quad {\mathrm{a.e}}\
{\mathrm{in}}\ (0,A)\times(0,L),
\end{equation}
\begin{equation}
\label{P4} {\bf p}(t,0,l)={\beta}(t,l,{\bf P}(t)) \quad {\mathrm{a.e}}\
{\mathrm{in}}\ (0,T)\times(0,L).
\end{equation}
In Section \ref{section:exist} it is proved that:
\begin{theorem}
\label{theo:exist-unique} There exists a unique solution ${\bf p}$ to
{\bf problem (P)}.
\end{theorem}
{\bf Notation}: ${\bf p}(t,a,l)$ and ${\bf q}(t,a,l)$ being vector valued
functions,
${\bf p} \le {\bf q}$ means that $p_i \le q_i$ a.e. in ${\mathcal{Q}}$ for all $i \in [1:N]$.\\
With this notation, it is proved in Section \ref{section:positive}
that:
\begin{theorem}
\label{theo:comparaison}
The solution, ${\bf p}$, to {\bf problem (P)} is nonnegative a.e in ${\mathcal{Q}}$.\\
Moreover, let ${\bf p}^1$ (resp. ${\bf p}^2$) denote the solution to {\bf
problem (P)} associated with the vector of mortality rates ${\bf z}^1$
(resp. ${\bf z}^2$). If ${\bf z}^1 \le {\bf z}^2$ then ${\bf p}^2 \le {\bf p}^1$.
\end{theorem}
\section{Existence and uniqueness}
\label{section:exist} The proof of existence and uniqueness
consists of two main steps. First we show the result in the case
of a constant recruitment (independent of the fish density).
Second, a fixed point argument allows us to cope with the original
nonlinear recruitment.
\begin{lemma}
\label{lemma1} Let ${\bf b}$ be fixed in $L^2((0,T)\times(0,L))^N$.
There exists a unique ${\bf p}$ satisfying (\ref{P1})-(\ref{P3}) of
{\bf problem (P)} in which the boundary condition (\ref{P4}) is
replaced by ${\bf p}(t,0,l)={\bf b}(t,l)\ {\mathrm{a.e}}\ {\mathrm{in}}\
(0,T)\times(0,L)$.
\end{lemma}
\begin{proof}
The proof is an adaptation of the results given for the scalar
case in \cite{Langlais:1985}. We sketch it for the sake of
completeness. It consists of two steps.\\
{\bf Step 1}: We prove that given ${\bf h} \in V'$ there exists a
unique ${\bf p} \in V$, $D{\bf p} \in V'$ such that
\begin{equation}
\displaystyle \int_{\mathcal{O}} <D{\bf p},{\bf q}>dtda+\int_{\mathcal{O}}
e({\bf p},{\bf q})dtda=\int_{\mathcal{O}}<{\bf h},{\bf q}>dtda,\quad \forall {\bf q} \in V
\end{equation}
and ${\bf p}(0,a,l)={\bf p}(t,0,l)=0$.\\
Let $A^0$ be the unbounded linear operator on
$(L^2({\mathcal{Q}}))^N$ with domain $D(A^0)=\lbrace {{\bf b}f p} \in (L^2({\mathcal{Q}}))^N,\
{{\bf b}f p}artial_t {{\bf b}f p} + {{\bf b}f p}artial_a {{\bf b}f p} \in (L^2({\mathcal{Q}}))^N,
{{\bf b}f p}(0,t,l)={{\bf b}f p}(t,0,l)=0 {\bf r}brace$, defined by ${{\bf b}f p} \in D(A^0),\
A^0{{\bf b}f p}={{\bf b}f p}artial_t {{\bf b}f p} + {{\bf b}f p}artial_a {{\bf b}f p}$. Then $-A^0$ is the
infinitesimal generator of a contraction semigroup, $(S(\tau){{\bf b}f p},\
\tau {\bf g}e 0)$, in $(L^2({\mathcal{Q}}))^N$ (see \cite{Bardos:1970}) and
$$
(S(\tau){{\bf b}f p})(t,a,l)= \left \lbrace {\bf b}egin{array}{l}
{{\bf b}f p}(t-\tau,a-\tau,l) {{\bf b}f q}uad {\mathrm{if}}\ (t-\tau,a-\tau,l) \in {\mathcal{Q}},\\
0 {{\bf b}f q}uad {\mathrm{otherwise}}.\\
\end{array}
{\bf r}ight.
$$
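Indeed, for smooth ${\bf p}$ vanishing near the faces $t=0$ and $a=0$, differentiating along the orbit of this translation semigroup recovers the generator:
$$
\frac{d}{d\tau}\Big|_{\tau=0}(S(\tau){\bf p})(t,a,l)=-\big(\partial_t {\bf p} + \partial_a {\bf p}\big)(t,a,l)=-A^0{\bf p}(t,a,l).
$$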
From this one can deduce that the unbounded linear operator $A$
from $V$ to $V'$ with domain $D(A)=\lbrace {\bf p} \in V,\ D{\bf p} \in V',\
{\bf p}(0,a,l)={\bf p}(t,0,l)=0 \rbrace$,
defined by $A{\bf p} = D{\bf p}$, is a maximal monotone operator.\\
With the bilinear form $e(.,.)$ we can define a bounded linear and
coercive operator $E$ from $V$ to $V'$ such that $<<E{\bf p},{\bf q}>>=\displaystyle
\int_{\mathcal{O}} e({\bf p},{\bf q})dtda,\quad \forall {\bf p},{\bf q} \in V$. Since $E$ is
bounded and coercive and $A$ is maximal monotone, we conclude that
for any ${\bf h} \in V'$ there exists a unique ${\bf p} \in D(A)$ solution
to $A{\bf p}+E{\bf p}={\bf h}$, which is an abstract formulation of our problem
because\\
$<<A{\bf p},{\bf q}>>=\displaystyle \int_{\mathcal{O}} <D{\bf p},{\bf q}>dtda,\quad \forall {\bf p} \in D(A),\ \forall {\bf q} \in V$.\\
{\bf Step 2}: Let us now introduce a sequence of functions
$\bphi^n \in (C^\infty(\overline{{\mathcal{Q}}}))^N$ such that
$$
\begin{array}{l}
\bphi^n(0,a,l) \rightarrow {\bf p}^0(a,l)\quad {\mathrm{in}}\ (L^2((0,A)\times(0,L)))^N,\\
\bphi^n(t,0,l) \rightarrow {\bf b}(t,l)\quad {\mathrm{in}}\ (L^2((0,T)\times(0,L)))^N.\\
\end{array}
$$
From {\bf Step 1}, we conclude that there exists a unique ${\bf q}^n$
in $D(A)$ solution to $A{\bf q}^n+E{\bf q}^n=-A\bphi^n-E\bphi^n$. Therefore
${\bf p}^n={\bf q}^n+\bphi^n$ is a solution to (\ref{P2}) satisfying
${\bf p}^n(0,a,l)=\bphi^n(0,a,l)$ and
${\bf p}^n(t,0,l)=\bphi^n(t,0,l)$.\\
Now taking ${\bf p}^n$ as a test function in (\ref{P2}), integrating by
parts using Lemma \ref{lemma:trace} and using the coercivity of
$e(.,.)$, we obtain that
$$
C_2 ||{\bf p}^n||^2_V \le \displaystyle
\frac{1}{2}||\bphi^n(0,a,l)||^2_{(L^2((0,A)\times(0,L)))^N} +
\frac{1}{2}||\bphi^n(t,0,l)||^2_{(L^2((0,T)\times(0,L)))^N}.
$$
By the choice of $\bphi^n$ this implies that ${\bf p}^n$ is a bounded
sequence in $V$. Therefore we can extract a subsequence, still
denoted ${\bf p}^n$, such that ${\bf p}^n \rightarrow {\bf p}$ weakly in $V$ and
$D{\bf p}^n \rightarrow {\bf r}$ weakly in $V'$. Since the operator $D$ is
continuous on ${\mathcal{D}}'({\mathcal{O}},({\bf H}^1)')$, ${\bf r} = D{\bf p}$. Moreover,
since $E$ is continuous, $E{\bf p}^n \rightarrow E{\bf p}$. We conclude that
${\bf p}$ satisfies (\ref{P2}). The continuity of the trace
operators at $t=0$ and $a=0$ implies that ${\bf p}(0,a,l)={\bf p}^0(a,l)$
and ${\bf p}(t,0,l)={\bf b}(t,l)$.
\end{proof}
\begin{lemma}
\label{lemma2} Let
$C_3=(\underset{i\in[1:N]}{\max}[AL^2||\bpsi||_\infty^2||{\bf w}||_\infty^2(\displaystyle
\frac{e^{\lambda T}}{\theta_i})^2])^{1/2}$;
then the map\\
$(p_i(t,a,l)) \mapsto (\beta_i (t,l,P_i(t)))$ (cf. Eqs. \ref{eqn:e5}
and \ref{eqn:e6}) defines a bounded nonlinear operator, Lipschitz
continuous from $L^2({\mathcal{O}},{\bf H})$ to $(L^2((0,T)\times(0,L)))^N$ with
Lipschitz constant $C_3$.
\end{lemma}
\begin{proof}
The map $p_i(t,a,l)\mapsto P_i(t)=\displaystyle \int_0^A
\int_{L_m}^Lw_i(t,a,l)p_i(t,a,l)da dl$ defines a bounded linear
operator from $L^2({\mathcal{Q}})$ to $L^2(0,T)$. This follows from
$$
|\displaystyle \int_0^A \int_{L_m}^Lw_i(t,a,l)p_i(t,a,l)da dl| \le \displaystyle
\int_0^A \int_{0}^L|w_i(t,a,l)p_i(t,a,l)|da dl,
$$
and using Cauchy-Schwarz yields
$$
\displaystyle \int_0^T|P_i(t)|^2dt \le ||{\bf w}||^2_\infty A L
||p_i||^2_{L^2({\mathcal{Q}})}.
$$
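In more detail, applying the Cauchy-Schwarz inequality on the bounded domain $(0,A)\times(0,L)$ gives
$$
|P_i(t)| \le ||{\bf w}||_\infty \int_0^A \int_0^L|p_i(t,a,l)|da dl \le ||{\bf w}||_\infty (AL)^{1/2}\Big(\int_0^A \int_0^L|p_i(t,a,l)|^2da dl\Big)^{1/2},
$$
and squaring then integrating in $t$ yields the bound above.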
The map $p_i(t,a,l) \mapsto \beta_i (t,l,P_i(t))$ defines
a bounded nonlinear operator from $L^2({\mathcal{Q}})$ to
$L^2((0,T)\times(0,L))$. This follows from the fact that the
map $u_i(t,P)=\displaystyle \frac{P}{\theta_i e^{-\lambda t} + |P|}$
from $[0,T]\times\mathbb{R}$ to $\mathbb{R}$ satisfies $|u_i(t,P)|
\le \displaystyle \frac{e^{\lambda T}}{\theta_i}|P|$ and therefore we have
$$
\displaystyle \int_0^T \int_0^L (\beta_i(t,l,P_i(t)))^2dtdl \le
||\bpsi||^2_\infty (\displaystyle \frac{e^{\lambda T}}{\theta_i})^2
||{\bf w}||_\infty^2 A L^2 ||p_i||^2_{L^2({\mathcal{Q}})}.
$$
Lipschitz continuity follows from the fact that $(t,P)\mapsto
u_i(t,P)$ is Lipschitz continuous in $P$ uniformly in $t \in
[0,T]$:
$$
|u_i(t,P^1) - u_i(t,P^2)| \le \displaystyle \frac{e^{\lambda T}}{\theta_i}
|P^1 -P^2|,\quad \forall P^1,P^2 \in \mathbb{R},\ \forall t \in
[0,T].
$$
Hence, if to $p_i^1$ (resp. $p_i^2$) we associate $P_i^1$ (resp.
$P_i^2$), it holds that
$$
\begin{array}{l}
\displaystyle \int_0^T \int_0^L [\beta_i(t,l,P_i^1(t))-\beta_i(t,l,P_i^2(t))]^2dtdl\\[7pt]
=\displaystyle \int_0^T \int_0^L [\hbox{\rm l\hskip -6pt 1}_{[0,L_b]}(l)\psi_i(t)(u_i(t,P_i^1(t))-u_i(t,P_i^2(t)))]^2dtdl,\\[7pt]
\le L||\bpsi||_\infty^2(\displaystyle \frac{e^{\lambda T}}{\theta_i})^2 \displaystyle \int_0^T |P_i^1(t)-P_i^2(t)|^2dt,\\[7pt]
\le AL^2||\bpsi||_\infty^2||{\bf w}||_\infty^2(\displaystyle \frac{e^{\lambda
T}}{\theta_i})^2 ||p_i^1-p_i^2||_{L^2({\mathcal{Q}})}^2.
\end{array}
$$
\end{proof}
\begin{lemma}
\label{lemma3} There exists a unique ${\bf p}$ satisfying {\bf problem
(P)}.
\end{lemma}
\begin{proof}
Let $\hat{{\bf p}}$ be given in $V$. With $\hat{{\bf p}}$ we associate a
vector $(\hat{P}_i(t))$. Let us denote by $\mathcal{F}\hat{{\bf p}}={\bf p}$
the solution to (\ref{P1})-(\ref{P3}) satisfying
$(p_i(t,0,l))=(\beta_i(t,l,\hat{P}_i(t)))$.\\
From Lemma \ref{lemma1} and Lemma \ref{lemma2} we deduce that the
nonlinear operator $\mathcal{F}$ maps $V$ into itself. Moreover, it
follows from Lemma \ref{lemma:trace} that
$$
\displaystyle \int_{\mathcal{O}} <D{\bf p},{\bf p}> dtda \ge - \displaystyle \frac{1}{2} \int_0^A
||{\bf p}^0(a,.)||_{{\bf H}}^2da - \displaystyle \frac{1}{2} \int_0^T
||{\bf p}(t,0,.)||^2_{{\bf H}}dt.
$$
The coercivity of $e(.,.)$ leads to
$$
C_2 \displaystyle \int_{\mathcal{O}} ||{\bf p}(t,a,.)||_{{\bf H}^1}^2 dtda \le \displaystyle \frac{1}{2}
\int_0^A ||{\bf p}^0(a,.)||_{{\bf H}}^2da + \displaystyle \frac{1}{2} \int_0^T
||{\bf p}(t,0,.)||^2_{{\bf H}}dt.
$$
Lemma \ref{lemma2} then gives
$$
C_2 \displaystyle \int_{\mathcal{O}} ||{\bf p}(t,a,.)||_{{\bf H}^1}^2 dtda \le \displaystyle \frac{1}{2}
\int_0^A ||{\bf p}^0(a,.)||_{{\bf H}}^2da + \displaystyle \frac{1}{2} C_3
||\hat{{\bf p}}||^2_{L^2({\mathcal{O}},{\bf H})},
$$
and $\mathcal{F}$ is bounded from $L^2({\mathcal{O}},{\bf H})$ to $V$.\\
The solutions we are looking for are the fixed points of
$\mathcal{F}$.
Let us show that $\mathcal{F}$ is a strict contraction in $L^2({\mathcal{O}},{\bf H})$.\\
Let $\hat{{\bf p}}^1$ and $\hat{{\bf p}}^2$ be given in $L^2({\mathcal{O}},{\bf H})$ and
let ${\bf p}^1=\mathcal{F}\hat{{\bf p}}^1$ and ${\bf p}^2=\mathcal{F}\hat{{\bf p}}^2$
be the associated solutions. The difference
${\bf p}=\mathcal{F}\hat{{\bf p}}^1-\mathcal{F}\hat{{\bf p}}^2$ satisfies
(\ref{P1}),(\ref{P2}), ${\bf p}(0,a,l)=0$ and
$(p_i(t,0,l))=(\beta_i(t,l,\hat{P}_i^1(t))-\beta_i(t,l,\hat{P}_i^2(t)))$.\\
At the end of the proof of Lemma \ref{lemmacoercive}, since
$\lambda$ is arbitrary, one can choose $\lambda=\lambda_1 +
\lambda_2$ with $\lambda_1 >\displaystyle \frac{1}{2d^0} ||\gamma||^2_\infty
+||\partial_l \gamma||^2_\infty +N ||{\bf M}||_\infty$ and $\lambda_2 >
0$ arbitrary. Hence,
$$
e({\bf p},{\bf p})\ge \tilde{C_2}||{\bf p}||^2_{{\bf H}^1} + \lambda_2 ||{\bf p}||^2_{{\bf H}}
\ge \lambda_2 ||{\bf p}||^2_{{\bf H}},\quad \forall {\bf p} \in {\bf H}^1.
$$
Now using Lemma \ref{lemma:trace} once again we obtain
$$
\lambda_2 \displaystyle \int_{\mathcal{O}} ||{\bf p}(t,a,.)||^2_{{\bf H}}dtda \le \displaystyle
\frac{1}{2} \sum_{i=1}^N \displaystyle \int_0^T \int_0^L
[\beta_i(t,l,\hat{P}_i^1(t))-\beta_i(t,l,\hat{P}_i^2(t))]^2dtdl
$$
and Lemma \ref{lemma2} gives
$$
\lambda_2 ||{\bf p}||^2_{L^2({\mathcal{O}},{\bf H})} \le \displaystyle \frac{1}{2} C_3
||\hat{{\bf p}}^1-\hat{{\bf p}}^2||_{L^2({\mathcal{O}},{\bf H})}.
$$
We can choose $\lambda_2 = C_3$ and, since
${\bf p}=\mathcal{F}\hat{{\bf p}}^1-\mathcal{F}\hat{{\bf p}}^2$, this proves that
$\mathcal{F}$ is a strict contraction on $L^2({\mathcal{O}},{\bf H})$. Thus it
follows from the Banach fixed point theorem that $\mathcal{F}$ admits
a unique fixed point ${\bf p}$, which is the desired solution.
\end{proof}
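Let us note that, besides existence and uniqueness, the Banach fixed point theorem also gives the convergence in $L^2({\mathcal{O}},{\bf H})$ of the iterates $\hat{{\bf p}}^{n+1}=\mathcal{F}\hat{{\bf p}}^n$ to the fixed point, for any choice of the starting point $\hat{{\bf p}}^1$; this fact is used in the positivity and comparison arguments of Section \ref{section:positive}.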
\section{Positivity and comparison result}
\label{section:positive} In this section we first show in Lemma
\ref{lemma4} that the fish population density solution to our
model is nonnegative. Then a comparison result is given in Lemma
\ref{lemma5}.
\begin{lemma}
\label{lemma4} The solution ${\bf p}$ to {\bf problem (P)} is
nonnegative a.e. in ${\mathcal{Q}}$.
\end{lemma}
\begin{proof}
As in the proof of Lemma \ref{lemma3}, let $\hat{{\bf p}}$ be given in
$V$ and let $\mathcal{F}\hat{{\bf p}}={\bf p}$ denote the solution to
(\ref{P1})-(\ref{P3}) satisfying
$(p_i(t,0,l))=(\beta_i(t,l,\hat{P}_i(t)))$.
Let us also assume that $\hat{{\bf p}} \ge 0$.\\
The negative parts of $p_i(0,a,l)$ and $p_i(t,0,l)$ satisfy
$(p_i(0,a,l))^{-} = (p_i^0(a,l))^{-}=0$
and $(p_i(t,0,l))^{-} = ( \beta_i (t,l,\hat{P}_i(t)) )^{-} =0$.\\
One can then show using Lemma \ref{lemma:trace} (see
\cite{Langlais:1985}) that
$\displaystyle \int_{\mathcal{O}} <D{\bf p},{\bf p}^->dtda \le 0$.\\
The bilinear form $e$ can be decomposed as
$e({\bf p},{\bf p}^-)=e({\bf p}^+,{\bf p}^-)-e({\bf p}^-,{\bf p}^-)$,
with\\
$e({\bf p}^+,{\bf p}^-)=\displaystyle \sum_{i=1}^N b_i(p_i^+,p_i^-)+c_i({\bf p}^+,p_i^-)$.\\
It holds that $b_i(p_i^+,p_i^-)=0$ since one can check that
$b_i(p_i,p_i^-)=-b_i(p_i^-,p_i^-)$. Moreover,
$c_i({\bf p}^+,p_i^-)=-\displaystyle \int_0^L \displaystyle \sum_{j=1}^N M_{ij} p_j^+ p_i^-
dl \le 0$,
since $M_{ij} p_j^+ p_i^- \ge 0$ for $i\ne j$ and $M_{ii} p_i^+ p_i^-=0$.\\
We conclude that $e({\bf p}^+,{\bf p}^-) \le 0$.\\
Taking ${\bf q} = {\bf p}^-$ in (\ref{P2}) yields
$$
\displaystyle \int_{\mathcal{O}} <D{\bf p},{\bf p}^-> dtda+\int_{\mathcal{O}} e({\bf p}^+,{\bf p}^-)dtda - \int_{\mathcal{O}}
e({\bf p}^-,{\bf p}^-)dtda=0,
$$
so that we obtain $\int_{\mathcal{O}} e({\bf p}^-,{\bf p}^-)dtda \le 0$. The coercivity
of $e$ gives\\
$C_2 ||{\bf p}^-||_V \le 0$, that is to say ${\bf p}$ is nonnegative.\\
If we define a sequence by ${\bf p}^1=\hat{{\bf p}}$ and
${\bf p}^{n+1}=\mathcal{F}{\bf p}^n$, then from the previous lines we deduce
that ${\bf p}^n$ is nonnegative for all $n \ge 1$. By the Banach fixed
point theorem this sequence converges to the solution ${\bf p}$, which
is therefore nonnegative.
\end{proof}
\begin{lemma}
\label{lemma5} Let ${\bf p}^1$ (resp. ${\bf p}^2$) denote the solution to
{\bf problem (P)} associated with the vector of mortality rates
${\bf z}^1$ (resp. ${\bf z}^2$). If ${\bf z}^1 \le {\bf z}^2$ then ${\bf p}^1 \ge {\bf p}^2$.
\end{lemma}
\begin{proof}
{\bf Step 1}: Let $\hat{{\bf p}}^1$ and $\hat{{\bf p}}^2$ be given in $V$
and satisfying $0 \le \hat{{\bf p}}^1 \le \hat{{\bf p}}^2$. Let
${\bf p}^1=\mathcal{F}\hat{{\bf p}}^1$ and ${\bf p}^2=\mathcal{F}\hat{{\bf p}}^2$ be
the associated solutions defined as in the proof
of Lemma \ref{lemma4}. Let us show that ${\bf p}^1 \le {\bf p}^2$.\\
It is clear that $\hat{P}_i^1(t) \le \hat{P}_i^2(t)$ a.e. in
$(0,T)$; then, since $u_i(t,P)$ is an increasing function of $P$, it
holds that $\beta_i(t,l,\hat{P}_i^1(t)) \le
\beta_i(t,l,\hat{P}_i^2(t))$ a.e. in $(0,T)\times(0,L)$. The
difference ${\bf p}={\bf p}^2-{\bf p}^1$ satisfies (\ref{P1}),(\ref{P2}) and
$$
\begin{array}{l}
{\bf p}(0,a,l)=0,\\
{\bf p}(t,0,l)=(\beta_i(t,l,\hat{P}_i^2(t)))-(\beta_i(t,l,\hat{P}_i^1(t)))
\ge 0.
\end{array}
$$
This is the same situation as in the first part of the proof of Lemma
\ref{lemma4} and we conclude that ${\bf p}$ is nonnegative, that is to
say ${\bf p}^1 \le {\bf p}^2$.
{\bf Step 2}: Let $\hat{{\bf p}} \ge 0$ be given in $V$. To the vectors
of mortality rates ${\bf z}^1$ and ${\bf z}^2$ ($0 \le {\bf z}^1 \le {\bf z}^2$) we
associate the bilinear forms $e^1$ and $e^2$ (see
(\ref{eqn:bi})-(\ref{eqn:ei}); note that $c_i^1(.,.)=c_i^2(.,.)$)
as well as the nonlinear operators $\mathcal{F}^1$ and
$\mathcal{F}^2$ defined as in the proof of Lemma \ref{lemma4}.
They define the solutions
${\bf p}^1=\mathcal{F}^1\hat{{\bf p}}$ and ${\bf p}^2=\mathcal{F}^2\hat{{\bf p}}$. Let us show that ${\bf p}^1 \ge {\bf p}^2$.\\
$({\bf p}^2 - {\bf p}^1)$ satisfies
\begin{equation}
\label{eqn:raslebol1} \displaystyle \int_{\mathcal{O}} <D({\bf p}^2-{\bf p}^1),{\bf q}>dtda+\int_{\mathcal{O}}
[e^2({\bf p}^2,{\bf q})-e^1({\bf p}^1,{\bf q})]dtda=0,
\end{equation}
\begin{equation}
\label{eqn:raslebol2} ({\bf p}^2-{\bf p}^1)(0,a,l)=0,
\end{equation}
\begin{equation}
\label{eqn:raslebol3} ({\bf p}^2-{\bf p}^1)(t,0,l)=0.
\end{equation}
Let us choose ${\bf q}=({\bf p}^2 - {\bf p}^1)^+$. From the equality $z_i^2 p_i^2
- z_i^1 p_i^1 = z_i^1(p_i^2 -p_i^1) + p_i^2(z_i^2-z_i^1)$ it follows
that
\begin{equation}
\label{eqn:quitue}
\begin{array}{l}
e_i^2({\bf p}^2,(p_i^2-p_i^1)^+)-e_i^1({\bf p}^1,(p_i^2-p_i^1)^+)\\
=c_i(({\bf p}^2-{\bf p}^1)^+,(p_i^2-p_i^1)^+)-c_i(({\bf p}^2-{\bf p}^1)^-,(p_i^2-p_i^1)^+)\\
+b_i((p_i^2-p_i^1)^+,(p_i^2-p_i^1)^+)-b_i((p_i^2-p_i^1)^-,(p_i^2-p_i^1)^+)\\
+\int_0^L p_i^2(f_i^2 -f_i^1)(p_i^2-p_i^1)^+ dl.
\end{array}
\end{equation}
We have already shown in the proof of Lemma \ref{lemma4} that
$c_i(({\bf p}^2-{\bf p}^1)^-,(p_i^2-p_i^1)^+) \le 0$ and that
$b_i((p_i^2-p_i^1)^-,(p_i^2-p_i^1)^+)=0$. Moreover, since ${\bf p}^2$ is
nonnegative, the last term of equality (\ref{eqn:quitue}) is
nonnegative. Then we obtain that
$$
e^2({\bf p}^2,({\bf p}^2-{\bf p}^1)^+)-e^1({\bf p}^1,({\bf p}^2-{\bf p}^1)^+) \ge
e^1(({\bf p}^2-{\bf p}^1)^+,({\bf p}^2-{\bf p}^1)^+).
$$
Since $({\bf p}^2-{\bf p}^1)$ satisfies (\ref{eqn:raslebol2}) and
(\ref{eqn:raslebol3}), it also holds that
$$
\displaystyle \int_{\mathcal{O}} <D({\bf p}^2-{\bf p}^1),({\bf p}^2-{\bf p}^1)^+>dtda \ge 0,
$$
so that
$$
\displaystyle \int_{\mathcal{O}} e^1(({\bf p}^2-{\bf p}^1)^+,({\bf p}^2-{\bf p}^1)^+) dtda \le 0,
$$
and using the coercivity of $e^1$ we finally obtain ${\bf p}^2 \le
{\bf p}^1$.
{\bf Step 3}: Let $\hat{{\bf p}} \ge 0$ be given in $V$. We define two
sequences $({\bf p}^{1,n})_{n \ge 1}$ and $({\bf p}^{2,n})_{n \ge 1}$ by
(${\bf p}^{1,1}=\hat{{\bf p}}$, ${\bf p}^{1,n+1}=\mathcal{F}^1{\bf p}^{1,n}$) and
(${\bf p}^{2,1}=\hat{{\bf p}}$, ${\bf p}^{2,n+1}=\mathcal{F}^2{\bf p}^{2,n}$).\\
From {\bf Step 2} it follows that ${\bf p}^{1,2} \ge {\bf p}^{2,2}$.\\
In addition to ${\bf p}^{1,3}=\mathcal{F}^1{\bf p}^{1,2}$ and
${\bf p}^{2,3}=\mathcal{F}^2{\bf p}^{2,2}$,
let us define ${\bf q}^{3}=\mathcal{F}^2{\bf p}^{1,2}$.\\
The inequality ${\bf p}^{1,3} \ge {\bf q}^3$ follows from {\bf Step 2},
whereas ${\bf p}^{2,3} \le {\bf q}^3$ follows from {\bf Step 1}. Therefore
${\bf p}^{1,3} \ge {\bf p}^{2,3}$. An induction then shows that ${\bf p}^{1,n}
\ge {\bf p}^{2,n},\ \forall n \ge 1$, and since the sequences converge
to the solutions ${\bf p}^1$ and ${\bf p}^2$ of {\bf problem (P)} associated
with the vectors of mortality rates ${\bf z}^1$ and ${\bf z}^2$ respectively,
the proof is complete.
\end{proof}
\section{Concluding remarks}
In this paper we have investigated a multi-region nonlinear
age-size structured fish population model. The model was
formulated in a generic way so that it can be potentially used for
various fish species. We formulated an initial boundary-value
problem and proved existence and uniqueness of a positive weak
solution. We also proved a comparison result which shows that the
variations
in the mortality rate in each region have consequences on the population of fish in every region.\\
Other important problems need to be addressed now and are
currently in progress. The first one concerns the numerical
implementation of this model. In order to integrate numerically
the system (\ref{eqn:e1-1})-(\ref{eqn:e1-3}) we use the method of
characteristics. Indeed this system can be viewed as a collection of
systems of parabolic equations on the characteristic lines
$$S =\lbrace (t_0+s,a_0+s);\ s \in (0,s_{max}(t_0,a_0)) \rbrace,$$
where $(t_0,a_0) \in \lbrace 0 \rbrace \times (0,A) \cup (0,T) \times
\lbrace 0 \rbrace$. Each of these systems is then integrated in time with an operator
splitting method using the Lie formula (\cite{Strang:1968}, \cite{Marchuk:1990}).\\
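In general terms (the splitting below is only an illustration of the Lie formula, not a statement specific to the present model), if the evolution along a characteristic line is written as $\partial_s {\bf u}=(\mathcal{A}+\mathcal{B}){\bf u}$, where for instance $\mathcal{A}$ could collect the second order terms in $l$ and $\mathcal{B}$ the remaining reaction and coupling terms, then one time step $\Delta s$ of the Lie (sequential) splitting is
$$
{\bf u}(s+\Delta s) \approx e^{\Delta s\,\mathcal{B}}\, e^{\Delta s\,\mathcal{A}}\, {\bf u}(s),
$$
with a local splitting error which is formally of order $(\Delta s)^2$.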
The second problem concerns the estimation of the various poorly
known parameters of the model (growth, mortality and migration
rates) from the data available for fisheries and mentioned in the
Introduction. In order to solve this inverse problem numerically,
the implementation of a variational data assimilation method is
in progress. The objective is to obtain a synthetic
representation of the real system combining theoretical knowledge
(the model) and experimental knowledge (the data).
\bibliographystyle{elsart-num}
\bibliography{biblioFauMau}
\end{document}
\begin{document}
\thanks{Research supported by NSF Grants DMS-1404754 and DMS-1708249}
\date{\today}
\begin{abstract} Andersen, Masbaum and Ueno conjectured that
certain quantum representations of surface mapping class groups should send pseudo-Anosov mapping classes to elements of infinite order (for large enough level $r$).
In this paper, we relate the AMU conjecture to a question about the growth of the Turaev-Viro invariants $TV_r$ of hyperbolic 3-manifolds. We show that if the $r$-growth of $|TV_r(M)|$ for a hyperbolic 3-manifold $M$ that fibers over the circle is exponential, then the monodromy of the fibration of $M$ satisfies the AMU conjecture.
Building on earlier work \cite{DK} we give broad constructions of (oriented) hyperbolic fibered links, of arbitrarily high genus, whose $SO(3)$-Turaev-Viro invariants have exponential $r$-growth.
As a result, for any $g>n\geqslant 2$, we
obtain infinite families of non-conjugate pseudo-Anosov mapping classes, acting on surfaces of genus $g$ and $n$ boundary components, that satisfy the AMU conjecture.
We also discuss integrality properties of the traces of quantum representations and we answer a question of Chen and Yang about Turaev-Viro invariants of torus links.
\end{abstract}
\vskip 0.1in
\title{Quantum representations and monodromies of fibered links}
\section{Introduction}
Given a compact oriented surface $\Sigma,$ possibly with boundary, the mapping class group $\mathrm{Mod}(\Sigma)$ is the group of isotopy classes
of orientation preserving homeomorphisms of $\Sigma$ that fix the boundary.
The Witten-Reshetikhin-Turaev Topological Quantum Field Theories \cite{ReTu, Turaevbook} provide families of finite dimensional projective representations of mapping class groups.
For each semi-simple Lie algebra, there is an associated theory and an infinite family of such representations.
In this article we are concerned with the
$SO(3)$-theory and we will follow the skein-theoretic framework given by Blanchet, Habegger, Masbaum and Vogel \cite{BHMV2}: For each odd integer $r\geqslant 3,$ let $U_r=\lbrace 0,2,4,\ldots, r-3\rbrace$ be the set of even integers smaller than $r-2.$ Given a primitive $2r$-th root of unity $\zeta_{2r},$ a compact oriented surface $\Sigma,$ and a coloring $c$ of the components of $\partial \Sigma$ by elements of $U_r,$ a finite dimensional ${\mathbb{C}}$-vector space $RT_r(\Sigma,c)$ is constructed in \cite{BHMV2}, as well as a projective representation:
$$\rho_{r,c} : \mathrm{Mod}(\Sigma) \rightarrow \mathbb{P}\mathrm{Aut}(RT_r(\Sigma,c)).$$
For
different choices of root of unity, the traces of $\rho_{r,c}$, that are of particular interest to us in this paper,
are related by actions of Galois groups of cyclotomic fields.
Unless otherwise indicated, we will always choose $\zeta_{2r}=e^{\frac{i\pi}{r}},$ which is important for us in order to apply
results from \cite{DK, DKY}.
The representation $\rho_{r,c}$ is called the $SO(3)$-quantum representation of $\mathrm{Mod}(\Sigma)$ at level $r.$
Although the representations are known to be asymptotically faithful \cite{FZW, Andersen}, the question of how well these representations reflect the
geometry of the mapping class groups remains wide open.
By the Nielsen-Thurston classification, mapping classes $f\in \mathrm{Mod}(\Sigma)$ are divided into three types: periodic, reducible and
pseudo-Anosov. Furthermore, the type of $f$ determines the geometric structure, in the sense of Thurston, of the 3-manifold obtained as mapping torus of $f.$
In \cite{AMU} Andersen, Masbaum and Ueno formulated the following conjecture and proved it when $\Sigma$ is the four-holed sphere.
\begin{conjecture}\label{AMU}{\rm{(AMU conjecture \cite{AMU})}}{ Let $\phi \in \mathrm{Mod}(\Sigma)$ be a pseudo-Anosov mapping class. Then for any big enough level $r,$ there is a choice of colors $c$ of the components of $\partial \Sigma,$ such that $\rho_{r,c}(\phi)$ has infinite order.}
\end{conjecture}
Note that it is known that the representations $\rho_{r,c}$ send Dehn twists to elements of finite order and criteria for recognizing reducible mapping classes from their images under $\rho_{r,c}$
are given in
\cite{Andersen2}.
The results of \cite{AMU} were extended by Egsgaard and Jorgensen in \cite{EgsJorgr} and by Santharoubane in \cite{San17}
to prove Conjecture \ref{AMU} for some mapping classes of spheres with $n\geqslant 5$ holes.
In \cite{San12},
Santharoubane proved the conjecture for
the one-holed torus.
However, until recently there were no known
cases of the AMU conjecture for mapping classes of surfaces of genus at least $2.$ In \cite{MarSan}, March\'e and Santharoubane used skein theoretic techniques in
$\Sigma \times S^1$ to obtain such examples of mapping classes in arbitrarily high genus. As explained by Koberda and Santharoubane \cite{KS}, by means of
Birman exact sequences of mapping class groups, one extracts representations of $\pi_1(\Sigma)$ from the representations $\rho_{r,c}.$
Elements in $\pi_1(\Sigma)$ that correspond to pseudo-Anosov mapping classes via Birman exact sequences are characterized by a result of Kra \cite{Kra}.
March\'e and Santharoubane used this approach to obtain their examples of pseudo-Anosov mapping classes satisfying the AMU conjecture by exhibiting elements in $\pi_1(\Sigma)$ satisfying an additional technical condition they called Euler incompressibility. However, they informed us that they suspect their construction yields only finitely many mapping classes in any surface of fixed genus, up to mapping class group action.
The purpose of the present paper is to describe an alternative
method for approaching the AMU conjecture and use it to construct mapping classes
acting on surfaces of any genus, that satisfy the conjecture. In particular, we produce infinitely many non-conjugate mapping classes acting on surfaces of fixed genus that satisfy the conjecture.
Our approach is to relate the conjecture with a question on the growth rate, with respect to $r$, of the $SO(3)$-Turaev-Viro 3-manifold invariants $TV_r$.
For $M$ a compact orientable $3$-manifold, closed or with boundary, the invariants $TV_r(M)$ are real-valued topological invariants of $M,$ that can be computed from state sums over triangulations of $M$ and are closely related to the $SO(3)$-Witten-Reshetikhin-Turaev TQFTs.
For a compact 3-manifold $M$ (closed or with boundary) we define:
$$lTV(M)=\underset{r \rightarrow \infty, \ r \ \textrm{odd}}{\liminf} \frac{2\pi}{r}\log |TV_r(M,q)|,$$
where $q=\zeta_{2r}^2=e^{\frac{2i\pi}{r}}$.
Let $f \in \mathrm{Mod}(\Sigma)$ be a mapping class represented by a pseudo-Anosov homeomorphism of $\Sigma$ and
let $M_f=\Sigma \times [0,1]/_{(x,1)\sim (f(x),0)}$ be the mapping torus of $f$.
\begin{theorem}\label{amu-ltv}Let $f \in \mathrm{Mod}(\Sigma)$ be a pseudo-Anosov mapping class and
let $M_{f}$ be the mapping torus of $f$. If $lTV(M_{f})>0,$ then $f$ satisfies the conclusion of the AMU conjecture.
\end{theorem}
The proof of the theorem relies heavily on the properties of TQFT underlying the Witten-Reshetikhin-Turaev $SO(3)$-theory as developed in \cite{BHMV2}.
As a consequence of Theorem \ref{amu-ltv} whenever we have a hyperbolic 3-manifold $M$ with $lTV(M)>0$ that fibers over the circle, then the monodromy of the fibration represents a mapping class
that satisfies the AMU conjecture.
By a theorem of Thurston, a mapping class $f\in \mathrm{Mod}(\Sigma)$ is represented by a pseudo-Anosov homeomorphism of $\Sigma$
if and only if the mapping torus $M_{f}$ is hyperbolic.
In \cite{Chen-Yang} Chen and Yang conjectured that for any finite volume hyperbolic 3-manifold $M$ we should have $lTV(M)={\rm vol} (M)$. Their conjecture implies, in particular, that
the aforementioned technical condition $lTV(M_f)>0$ is true for all pseudo-Anosov mapping classes $f\in \mathrm{Mod}(\Sigma)$. Hence, the Chen-Yang conjecture implies the AMU conjecture.
Our method can also be used to produce new families of mapping classes acting on punctured spheres that satisfy the AMU conjecture (see Remark \ref{spheres}).
In this paper we will be concerned with surfaces with boundary and mapping classes that appear as monodromies of fibered links in $S^3.$
In \cite{BDKY}, with Belletti and Yang, we construct families of 3-manifolds in which the monodromies of all hyperbolic fibered links satisfy the AMU conjecture.
In this paper we show the following.
\begin{theorem} \label{hyperbgeneral} Let $L\subset S^3$ be a link with $lTV(S^3\setminus L)>0.$
Then
there are fibered hyperbolic links $L',$ with $L\subset L'$ and $lTV(S^3\setminus L')>0,$ and
such that the complement of $L'$ fibers over $S^1$ with fiber a surface of arbitrarily large genus.
In particular, the monodromy of such a fibration
gives a mapping class
in $\mathrm{Mod}(\Sigma)$ that satisfies the AMU conjecture.
\end{theorem}
In \cite{DK} the authors gave criteria for constructing 3-manifolds, and in particular link complements, whose $SO(3)$-Turaev-Viro invariants satisfy the condition $lTV>0$.
Starting from these links, and applying
Theorem \ref{hyperbgeneral}, we obtain fibered links whose monodromies
give examples of mapping classes that satisfy Conjecture \ref{AMU}.
However, the construction yields only finitely many mapping classes in the mapping class groups of fixed surfaces.
This is because the links $L'$ obtained by Theorem \ref{hyperbgeneral}
are represented by closed homogeneous braids and it is known that there are only finitely many links of fixed genus and number of components represented that way.
To obtain infinitely many mapping classes for surfaces of fixed genus and number of boundary components, we need to refine our construction.
We do this
by using Stallings twists and appealing to a result of Long and Morton \cite{LongMorton}
on compositions of pseudo-Anosov maps with powers of a Dehn twist. The general process is given in Theorem \ref{infinitegen}. As an application we have the following.
\begin{theorem} \label{general} Let $\Sigma$ denote an orientable surface of genus $g$ with $n$ boundary components. Suppose that either $n=2$ and $g\geqslant 3$ or $g\geqslant n \geqslant 3.$
Then there are infinitely many non-conjugate pseudo-Anosov mapping classes in
$\mathrm{Mod}(\Sigma)$ that satisfy the AMU conjecture.
\end{theorem}
In the last section of the paper we discuss integrality properties of quantum representations for mapping classes of finite order (i.e. periodic mapping classes) and how they reflect on the Turaev-Viro invariants of the corresponding mapping tori. To state our result, we recall that the traces of the representations $\rho_{r,c}$ are known to be algebraic numbers. For periodic mapping classes we have
the following.
\begin{theorem}\label{thm:integertrace} Let $f\in \mathrm{Mod}(\Sigma)$
be periodic of order $N.$ For any odd integer $r\geqslant 3$, with $\mathrm{gcd}(r, N)=1$, we have
$|\mathrm{Tr} \rho_{r,c}(f)| \in {\mathbb{Z}},$
for any $U_r$-coloring $c$ of $\partial \Sigma,$ and any primitive
$2r$-th root of unity.
\end{theorem}
As a consequence of Theorem \ref{thm:integertrace} we have the following corollary that was conjectured by Chen and Yang \cite[Conjecture 5.1]{Chen-Yang}.
\begin{corollary}\label{cor:toruslinks} For integers $p,q$ let $T_{p,q}$ denote the $(p,q)$-torus link. Then, for any odd $r$ coprime with $p$ and $q$,
we have $TV_r(S^3\setminus T_{p,q})\in {\mathbb{Z}}$.
\end{corollary}
The paper is organized as follows: In Section \ref{sec:TV}, we summarize results from the $SO(3)$-Witten-Reshetikhin-Turaev TQFT and their relation to Turaev-Viro invariants that we need in this paper.
In Section \ref{sec:lTV>0}, we discuss how to construct families of links whose $SO(3)$-Turaev-Viro invariants have exponential growth (i.e. $lTV>0$) and then we prove Theorem \ref{amu-ltv}
that explains how this exponential growth relates to the AMU Conjecture.
In Section \ref{sec:homogenize}, we describe a method to get hyperbolic fibered links with any given sublink and we prove
Theorem \ref{hyperbgeneral}.
In Section \ref{sec:examples}, we explain how to refine the construction of Section \ref{sec:homogenize} to get infinite families of mapping classes on fixed genus surfaces
that satisfy the AMU Conjecture (see Theorem \ref{general}). We also provide an explicit construction that leads to Theorem \ref{general}.
Finally in Section \ref{sec:integertraces}, we discuss periodic mapping classes and we prove Theorem \ref{thm:integertrace} and Corollary \ref{cor:toruslinks}. We also state a non-integrality conjecture about Turaev-Viro invariants of hyperbolic mapping tori.
\subsection{Acknowledgement} We would like to thank Matthew Hedden, Julien March\'e, Gregor Masbaum, Ramanujan Santharoubane and Tian Yang for helpful discussions. In particular, we thank Tian Yang for bringing to our attention the connection between the volume conjecture of \cite{Chen-Yang} and the AMU conjecture. We also thank Jorgen Andersen and Alan Reid for their interest in this work.
\section{TQFT properties and quantum representations}
\label{sec:TV}
In this section, we summarize some properties of the $SO(3)$-Witten-Reshetikhin-Turaev TQFTs, which we introduce in the skein-theoretic framework of \cite{BHMV2}, and briefly discuss their relation
to the $SO(3)$-Turaev-Viro invariants.
\subsection{Witten-Reshetikhin-Turaev $SO(3)$-TQFTs} Given an
odd $r\geqslant 3,$ let $U_r$ denote the set of even integers less than $r-2.$ A banded link in a manifold $M$ is an embedding of a disjoint union of annuli $S^1\times[0,1]$ in $M,$ and a $U_r$-colored banded link $(L,c)$ is a banded link whose components are colored by elements of $U_r.$
For a closed, oriented 3-manifold $M,$ the Reshetikhin-Turaev
invariants $RT_r(M)$ are complex valued topological invariants. They also extend to invariants $RT_r(M,(L,c))$ of manifolds containing colored banded links.
These invariants are part of a compatible set of invariants of compact surfaces and compact 3-manifolds, which is called a TQFT.
Below we summarize the main properties of the theory that will be useful to us in this paper, referring the reader to \cite{BHMV, BHMV2} for the precise definitions and details.
\begin{theorem}\label{thm:TQFTdef}{\rm{ (\cite[Theorem 1.4]{BHMV2})}} For any odd integer $r \geqslant 3$ and any primitive $2r$-th root of unity $\zeta_{2r},$ there is a TQFT functor $RT_r$ with the following properties:
\begin{itemize}
\item[(1)] For $\Sigma$ a compact oriented surface, and if $\partial \Sigma\neq \emptyset$
a coloring $c$ of $\partial \Sigma$ by elements of $U_r$, there is a finite dimensional ${\mathbb{C}}$-vector space $RT_r(\Sigma,c),$ with a Hermitian form $\langle , \rangle.$
Moreover for disjoint unions, we have $$RT_r(\Sigma \coprod \Sigma')=RT_r(\Sigma)\otimes RT_r(\Sigma').$$
\item[(2)]For $M$ a closed compact oriented $3$-manifold, containing a $U_r$-colored banded link $(L,c),$ the value $RT_r(M,(L,c),\zeta_{2r}) \in {\mathbb{Q}}[\zeta_{2r}]\subset {\mathbb{C}}$ is the $SO(3)$-Reshetikhin-Turaev invariant at level $r.$
\vskip 0.06in
\item[(3)] For $M$ a compact oriented $3$-manifold with $\partial M=\Sigma$, and $(L,c)$ a $U_r$-colored banded link in $M,$ the invariant $RT_r(M,(L,c))$ is a vector in $RT_r(\Sigma).$
Moreover, for compact oriented 3-manifolds $M_1$, $M_2$ with $\partial M_1=-\partial M_2=\Sigma,$ we have $$RT_r(M_1\underset{\Sigma}{\cup}M_2)=\langle RT_r(M_1),RT_r(M_2)\rangle.$$
Finally, for disjoint unions $M=M_1\coprod M_2,$ we have
$$RT_r(M)=RT_r(M_1)\otimes RT_r(M_2).$$
\item[(4)] For a cobordism $M$ with $\partial M= -\Sigma_0 \cup \Sigma_1,$ there is a map
$$RT_r(M) \in \mathrm{Hom}(RT_r(\Sigma_0),RT_r(\Sigma_1)).$$
\item[(5)]The composition of cobordisms is sent by $RT_r$ to the composition of linear maps, up to a power of $\zeta_{2r}.$
\end{itemize}
\end{theorem}
In \cite{BHMV2} the authors construct some explicit orthogonal basis $E_r$ for $RT_r(\Sigma,c)$: Let $\Sigma$ be a compact, oriented surface that is not the 2-torus or the 2-sphere with less than four
holes. Let $P$ be a collection of simple closed curves on $\Sigma$ that contains the boundary $\partial \Sigma$
and gives a pants decomposition of $\Sigma.$ The elements of $E_r$ are in one-to-one correspondence with colorings ${\hat{c}}: P \longrightarrow U_r$, such that ${\hat{c}}$ agrees with $c$ on $\partial \Sigma$
and for each pant the colors of the three boundary components satisfy certain admissibility conditions. We will not make use of the general construction. What we need is the following:
\begin{theorem}\label{thm:TQFTbasis} {\rm{(\cite[Theorem 4.11, Corollary 4.10]{BHMV2}) }}
\\
\item[(1)] For $\Sigma$ a compact, oriented surface, with genus $g$ and $n$ boundary components, such that $(g,n)\neq (1,0),(0,0),(0,1),(0,2),(0,3),$ we have
$$\mathrm{dim}(RT_r(\Sigma,c))\leqslant r^{3g-3+n}.$$
\item[(2)] If $\Sigma=T$ is the 2-torus we actually have an orthonormal basis for
$RT_r(T).$ It consists of the elements $e_0,e_2,\ldots,e_{r-3},$ where
$$e_i=RT_r(D^2\times S^1, ([0,\frac{1}{2}]\times S^1, i))$$
is the Reshetikhin-Turaev vector of the solid torus with the core viewed as banded link and colored by $i.$
\end{theorem}
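For instance, for $r=5$ one has $U_5=\lbrace 0,2\rbrace$, so that $RT_5(T)$ is $2$-dimensional, with orthonormal basis $\lbrace e_0,e_2\rbrace$ given by the $RT_5$-vectors of the solid torus whose core is colored by $0$ and by $2$ respectively.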
\subsection{$SO(3)$-quantum representations of the mapping class groups}
For any odd integer $r \geqslant 3,$ any choice of a primitive $2r$-th root of unity $\zeta_{2r}$ and a coloring $c$ of the boundary components of $\Sigma$ by elements of $U_r,$ we have a finite dimensional projective representation,
$$\rho_{r,c} : \mathrm{Mod}(\Sigma) \rightarrow \mathbb{P}\mathrm{Aut}(RT_r(\Sigma,c)).$$
If $\Sigma$ is a closed surface and $f \in \mathrm{Mod}(\Sigma),$ we simply have $\rho_r(f)=RT_r(C_{f}),$ where the cobordism $C_{f}$ is the mapping cylinder of $f:$
$$C_{f}=\Sigma\times [0,1]\underset{(1,x)\sim f(x)}{\coprod} \Sigma.$$
The fact that this gives a projective representation of $\mathrm{Mod}(\Sigma)$ is a consequence of points (4) and (5) of Theorem \ref{thm:TQFTdef}.
For $\Sigma$ with non-empty boundary, giving the precise definition of the quantum representations would require us to discuss the functor $RT_r$ for cobordisms containing colored tangles (see \cite{BHMV2}).
Since in this paper we will only be interested in the traces of the quantum representations, we will not recall the definition of the quantum representations in its full generality. We will use the following theorem:
\begin{theorem}\label{thm:tracequantumrep}
For $r\geqslant 3$ odd, let $\Sigma$ be a compact oriented surface with $c$ a $U_r$-coloring on the components of $\partial \Sigma.$ Let $\tilde{\Sigma}$ be the surface obtained from $\Sigma$ by capping the components of $\partial \Sigma$ with disks. For $f\in \mathrm{Mod}(\Sigma),$ let $\tilde{f}\in \mathrm{Mod}(\tilde{\Sigma})$ denote the mapping class of the extension of $f$ by the identity on the capping disks.
Let $M_{\tilde {f}}=\tilde{\Sigma} \times [0,1]/_{(x,1)\sim (\tilde {f}(x),0)}$
be the mapping torus of $\tilde{f}$ and let $L \subset M_{\tilde {f}}$ denote the link whose components consist of the cores of the solid tori in $M_{\tilde {f}}$ over the capping disks.
Then, we have
$$\mathrm{Tr} (\rho_{r,c}(f))=RT_r(M_{\tilde {f}} ,(L,c)).$$
\end{theorem}
\subsection{$SO(3)$-Turaev-Viro invariants}
In \cite{TuraevViro}, Turaev and Viro introduced invariants of compact oriented $3$-manifolds as state sums on triangulations of 3-manifolds.
The triangulations are colored by representations of a semi-simple quantum Lie algebra. In this paper, we are only concerned with the $SO(3)$-theory: Given a compact 3-manifold $M$, an odd integer $r\geqslant 3,$
and a primitive $2r$-th root of unity,
there is an ${\mathbb{R}}$-valued invariant $TV_r.$ We refer to \cite{DKY} for the precise flavor of Turaev-Viro invariants we are using here, and to \cite{TuraevViro} for the original definitions and proofs of invariance.
We will make use of the following theorem, which relates the Turaev-Viro invariants $TV_r(M)$ of a $3$-manifold $M$ with the Witten-Reshetikhin-Turaev TQFT $RT_r.$ For closed 3-manifolds it was proved by
Roberts \cite{Roberts}, and was extended to manifolds with boundary by Benedetti and Petronio \cite{BePe}. In fact, as Benedetti and Petronio formulated their theorem in the case of $\mathrm{SU}_2$-TQFT, the adaptation of the proof in the setting of $SO(3)$-TQFT we use here can be found in \cite{DKY}.
\begin{theorem}{\rm{ (\cite[Theorem 3.2]{BePe})}}\label{thm:BePe}
For $M$ an oriented compact $3$-manifold with empty or toroidal boundary and $r\geqslant 3$ an odd integer, we have:
$$TV_r(M,q=e^{\frac{2i\pi}{r}})=||RT_r(M,\zeta=e^{\frac{i\pi}{r}})||^2.$$
\end{theorem}
\section{Growth of Turaev-Viro invariants and the AMU conjecture}
In this section, first we explain how the growth of the $SO(3)$-Turaev-Viro invariants is related to the AMU conjecture.
Then we give examples of link complements $M$ for which the $SO(3)$-Turaev-Viro invariants have exponential growth with respect to $r$; that is, we have $lTV(M)>0.$
\subsection{Exponential growth implies the AMU conjecture}
Let $\Sigma$ denote a compact orientable surface with or without boundary and, as before, let $\mathrm{Mod}(\Sigma)$ denote the mapping class group of $\Sigma$ fixing the boundary.
\begin{named}{Theorem \ref{amu-ltv}} Let $f \in \mathrm{Mod}(\Sigma)$ be a pseudo-Anosov mapping class and
let $M_{f}$ be the mapping torus of $f$. If $lTV(M_{f})>0,$ then $f$ satisfies the conclusion of the AMU conjecture.
\end{named}
\label{sec:amu-ltv}
The proof of Theorem \ref{amu-ltv} relies on the following elementary lemma:
\begin{lemma}\label{lem:order} If $A \in \mathrm{GL}_n (\mathbb{C})$ is such that $|\mathrm{Tr}(A)|>n,$ then $A$ has infinite order.
\end{lemma}
\begin{proof}
Up to conjugation we can assume that $A$
is upper triangular. If the sum of the $n$ diagonal entries has modulus bigger than $n,$ one of these entries must have modulus bigger than $1$. Since the eigenvalues of a finite order matrix are roots of unity, and hence of modulus $1$, this implies that
$A$ has infinite order.
\end{proof}
\vskip 0.06in
\begin{proof}[Proof of Theorem \ref{amu-ltv}] Suppose that for the mapping torus $M_{f}$ of some $f \in \mathrm{Mod}(\Sigma)$, we have $lTV(M_{f})>0.$ We will prove Theorem \ref{amu-ltv} by relating $TV_r(M_{f})$ to traces of the quantum representations of $\mathrm{Mod}(\Sigma)$.
By Theorem \ref{thm:BePe}, we have $$TV_r(M_{f})=||RT_r(M_{f})||^2= \langle RT_r(M_{f}), \ RT_r(M_{f})\rangle,$$
where, with the notation of Theorem \ref{thm:TQFTdef}, $ \langle , \rangle$ is the Hermitian form on $RT_r(\Sigma,c).$
Suppose that $\Sigma$ has genus $g$ and $n$ boundary components.
Now $\partial M_f$ is a disjoint union of $n$ tori. Note that by Theorem \ref{thm:TQFTbasis}-(2) and Theorem \ref{thm:TQFTdef}-(1), $RT_r(\partial M_f)$ admits an orthonormal basis given by vectors
$${\bf e}_{c}=e_{c_1}\otimes e_{c_2} \otimes \cdots \otimes e_{c_n},$$
where $c=(c_1,c_2,\ldots c_n)$ runs over all $n$-tuples of colors in $U_r,$ one for each boundary component. By Theorem \ref{thm:TQFTdef}-(3) and Theorem \ref{thm:TQFTbasis}-(2), this vector is also the $RT_r$-vector of the cobordism consisting of $n$ solid tori, with the $i$-th solid torus containing the core colored by $c_i.$
We can write $ RT_r(M_{f})=\underset{c}{\sum} \lambda_c {\bf e}_{c}$ where $\lambda_c=\langle RT_r(M_f), {\bf e}_{c}\rangle$. Thus we have
$$TV_r(M_{f})=\underset{c}{\sum}|\lambda_c|^2=\underset{c}{\sum} |\langle RT_r(M_f), {\bf e}_{c}\rangle|^2,$$
where ${\bf e}_c$ is the above orthonormal basis of $RT_r(\partial M_{f})$ and the sum runs over $n$-tuples of colors in $U_r.$
By Theorem \ref{thm:TQFTdef}-(3), the pairing $\langle RT_r(M_{f}),{\bf e}_{c}\rangle$ is obtained by filling the boundary components of $M_{f}$ by solid tori and adding a link $L$ which is the union of the cores and the core of the $i$-th component is colored by $c_i.$
Thus by Theorem \ref{thm:tracequantumrep}, we have
$$\langle RT_r(M_{f}), {\bf e}_{c} \rangle= RT_r({M_{\tilde{f}}},(L,c))=\mathrm{Tr} (\rho_{r,c}(f)),$$
and thus
$$TV_r(M_{f})=\underset{c}{\sum}|\mathrm{Tr}\rho_{r,c}(f)|^2,$$
where the sum ranges over all colorings of the boundary components of $M_{f}$ by elements of $U_r.$
Now, on the one hand, since $lTV(M_{f})>0,$ the sequence $\{TV_r(M_{f})\}_r$ is bounded below by a sequence that is exponentially growing in $r$ as $r \rightarrow \infty.$
On the other hand, by Theorem \ref{thm:TQFTbasis}-(1), the sequence $\underset{c}{\sum}\mathrm{dim}(RT_r(\Sigma,c))$ only grows polynomially in $r.$
For big enough $r,$ there will be at least one $c$ such that $|\mathrm{Tr}\rho_{r,c}(f)|>\mathrm{dim}(RT_r(\Sigma,c))$. Thus by Lemma \ref{lem:order}, $\rho_{r,c}(f)$ will have infinite order.
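Indeed, if no such $c$ existed, then since there are at most $r^n$ colorings and, by Theorem \ref{thm:TQFTbasis}-(1), $\mathrm{dim}(RT_r(\Sigma,c))\leqslant r^{3g-3+n}$, we would get $TV_r(M_{f})\leqslant r^{n}\cdot r^{2(3g-3+n)}=r^{6g-6+3n}$ for all large odd $r$, which grows only polynomially and contradicts the exponential growth of $TV_r(M_{f})$.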
\end{proof}
By a theorem of Thurston \cite{thurston:mappingtori}, a mapping class $f \in \mathrm{Mod}(\Sigma)$ is represented by a pseudo-Anosov homeomorphism of $\Sigma$
if and only if the mapping torus $M_f$ is hyperbolic.
As a consequence of Theorem \ref{amu-ltv}, whenever a hyperbolic 3-manifold $M$ that fibers over the circle has $lTV(M)>0,$ the monodromy of the fibration represents a mapping class
that satisfies the AMU Conjecture.
In the remainder of this paper we will be concerned with surfaces with boundary and mapping classes that appear as monodromies of fibered links in $S^3.$
\subsection{Link complements with $lTV>0$}
\label{sec:lTV>0}
Links with exponentially growing Turaev-Viro invariants will be the fundamental building block of our construction of examples of pseudo-Anosov mapping classes satisfying the AMU conjecture.
We will need the following result proved by the authors in
\cite{DK}.
\begin{theorem}\label{thm:ltvdehnfilling}{\rm {(\cite[Corollary 5.3]{DK})}} Assume that $M$ and $M'$ are oriented compact 3-manifolds with empty or toroidal boundaries and such that
$M$ is obtained by a Dehn-filling of $M'.$ Then we have:
$$lTV(M)\leqslant lTV(M').$$
\end{theorem}
Note that for a link $L\subset S^3$, and a sublink $K\subset L$, the complement of $K$ is obtained from that of $L$ by Dehn-filling.
Thus Theorem \ref{thm:ltvdehnfilling} implies that if $K$ is a sublink of a link $L\subset S^3$ and $lTV(S^3 \setminus K)>0,$ then we have $lTV(S^3 \setminus L)>0.$
\begin{corollary} \label{positive} Let $K\subset S^3$ be the knot $4_1$ or a link with complement homeomorphic to that of the Borromean rings or the Whitehead link.
If $L$ is any link containing $K$ as a sublink then $lTV(S^3 \setminus L)>0.$
\end{corollary}
\begin{proof}Denote by $B$ the Borromean rings. By \cite{DKY}, $lTV(S^3\setminus 4_1)=2v_3\simeq 2.02988$ and $lTV(S^3\setminus B)=2v_8\simeq 7.32772;$
and hence the conclusion holds for $B$ and $4_1.$
The complement of $K=4_1$ is obtained by Dehn filling along one of the components of the Whitehead link $W.$
Thus, by Theorem \ref{thm:ltvdehnfilling}, $lTV(S^3\setminus W)\geqslant 2 v_3>0.$
For links with homeomorphic complements the conclusion follows since the Turaev-Viro invariants
are homeomorphism invariants of the link complement; that is, they will not distinguish different links with homeomorphic complements.
\end{proof}
\begin{remark}Additional classes of links with $lTV>0$ are given by the authors in \cite{DKY} and \cite{DK}. Some of these examples are non-hyperbolic.
However it is known that any link is a sublink of a hyperbolic link \cite{Baker}. Thus one can start with any link $K$ with $lTV(S^3\setminus K)>0$ and construct hyperbolic links $L$ containing $K$ as a sublink; by Theorem \ref{thm:ltvdehnfilling} these will still have $lTV(S^3 \setminus L)> 0.$
\end{remark}
\section{A hyperbolic version of Stallings's homogenization}
\label{sec:homogenize}
A classical result of Stallings \cite{Stallings} states that every link $L$ is a sublink of fibered links with fibers of arbitrarily large genera. Our purpose in this section is to prove the following hyperbolic version of this result.
\begin{theorem} \label{mainofsection} Given a link $L\subset S^3,$
there are hyperbolic links $L',$ with $L\subset L'$ and
such that the complement of $L'$ fibers over $S^1$ with fiber a surface of arbitrarily large genus.
\end{theorem}
\subsection{Homogeneous braids}
Let $\sigma_1,\ldots, \sigma_{n-1}$ denote the standard braid generators of the $n$-string braid group $B_n$.
We recall that a braid $\sigma \in B_n$ is said to be \emph{homogeneous} if each standard generator $\sigma_i$ appearing in $\sigma$ always appears with exponents of the same sign.
In \cite{Stallings}, Stallings studied relations between closed homogeneous braids and fibered links. We summarize his results as follows:
\begin{enumerate}
\item The closure of any homogeneous braid $\sigma \in B_n$ is a fibered link: The complement fibers over $S^1$ with fiber the surface $F$ obtained by Seifert's algorithm from the homogeneous closed braid
diagram. The Euler characteristic of $F$ is $\chi(F)=n-c(\sigma)$ where $c(\sigma)$ is the number of crossings of $\sigma$ (see the example right after this list).
\item Given a link $L=\hat{\sigma}$ represented as the closure of a braid $\sigma \in B_n,$ one can add additional strands to obtain a homogeneous braid $\sigma' \in B_{n+k}$
so that the closure of $\sigma'$ is a link $L\cup K,$ where $K,$ the closure of the additional $k$ strands, represents the unknot. Furthermore, we can arrange $\sigma'$ so that the linking numbers of $K$ with the components of $\hat{\sigma}$ are any arbitrary numbers. The link $L\cup K,$ as a closed homogeneous braid, is fibered.
\\
Throughout the paper we will refer to the component $K$ of $L\cup K,$ as the Stallings component.
\end{enumerate}
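For example (a standard illustration, not part of Stallings' statements above), the right-handed trefoil is the closure of the homogeneous braid $\sigma_1^3\in B_2$; Seifert's algorithm applied to this closed braid diagram gives a fiber surface with $\chi(F)=2-3=-1$, that is, a once-punctured torus, recovering the well known genus one fibration of the trefoil complement.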
In order to prove Theorem \ref{mainofsection},
given a hyperbolic link $L$, we want to apply Stallings' homogenizing method in a way such that the resulting link is still hyperbolic.
Let $L$ be a hyperbolic link with $n$ components $L_1,\ldots, L_n.$ The complement $M_L:=S^3\setminus L$ is a hyperbolic 3-manifold with $n$
cusps; one for each component. For each cusp, corresponding to some component $L_i$, there is a conjugacy class of a rank two abelian subgroup of $\pi_1(M_L).$ We will refer to this as the {\emph{peripheral group}} of $L_i.$
\begin{definition} \label{defcondition}Let $L$ be a hyperbolic link with $n$ components $L_1,\ldots, L_n.$ We say that an unknotted circle $K$ embedded in $S^3 \setminus L$ satisfies condition $(\clubsuit )$ if
(i) the free homotopy class $[K]$ does not lie in a peripheral group of any component of $L$; and (ii) we have
$$\mathrm{gcd}\left(lk(K,L_1),lk(K,L_2),\ldots , lk(K,L_n)\right)=1.$$
\end{definition}
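For instance, a meridian of a component $L_i$ satisfies (ii), since its linking numbers with $L_1,\ldots,L_n$ are $(0,\ldots,0,1,0,\ldots,0)$, but it fails (i): it is freely homotopic into the corresponding cusp, so its free homotopy class lies in the peripheral group of $L_i$. In particular, (ii) does not imply (i).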
The rest of this subsection is devoted to the proof of the following proposition that is needed for the proof of Theorem \ref{mainofsection}.
\begin{proposition}\label{prop:condition} Given a hyperbolic link $L,$ one can choose the Stallings component $K$ so that (i) $K$ satisfies condition $(\clubsuit )$; and (ii)
the fiber of the complement of $L\cup K$ has arbitrarily high genus.
\end{proposition}
Since $L$ is hyperbolic, we have the
discrete faithful representation
$$\rho: \pi_1(S^3\setminus L)\longrightarrow \mathrm{PSL}_2({\mathbb{C}}).$$
We recall that an element of $A\in \mathrm{PSL}_2({\mathbb{C}})$ is called {\emph{parabolic}} if $\mathrm{Tr} (A)=\pm 2,$
and that $\rho$ takes elements in the peripheral subgroups of $ \pi_1(S^3 \setminus L)$ to parabolic elements in $ \mathrm{PSL}_2({\mathbb{C}}).$
Since matrix trace is invariant under conjugation, in the discussion below we will not make distinction between elements in $\pi_1(M_L)$ and their conjugacy classes.
With this understanding we recall that
if an element $\gamma \in \pi_1(S^3 \setminus L)$ satisfies $\mathrm{Tr} (\rho (\gamma))\neq \pm 2,$ then it does not lie in any peripheral subgroup \cite[Chapter 5]{thurston:notes}.
\begin{lemma}\label{lem:traces} Let $A$ and $B$ be elements in $\mathrm{PSL}_2({\mathbb{C}}).$
\begin{itemize}
\item[(1)]If $A$ and $B$ are non commuting parabolic elements then $|\mathrm{Tr}( A^l B^{-l})| >2$ for some $l.$
\item[(2)]If $|\mathrm{Tr}(A)|>2,$ then $|\mathrm{Tr} (A^k B)|\neq 2$ for all $k$ big enough.
\end{itemize}
\begin{proof}
For (1), note that after conjugation we can take $A=\begin{pmatrix}
1 & 0 \\ 1 & 1
\end{pmatrix}$ and $B=\begin{pmatrix}
1 & x \\ 0 & 1
\end{pmatrix}.$ Then $$|\mathrm{Tr}(A^l B^{-l})|=|2-l^2 x| \underset{l\rightarrow\infty}{\rightarrow} \infty.$$
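Explicitly, $A^l B^{-l}=\begin{pmatrix} 1 & -lx \\ l & 1-l^2x \end{pmatrix}$, and $x\neq 0$ since $A$ and $B$ do not commute, so the trace indeed tends to infinity with $l$.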
For (2), after conjugation take $A=\begin{pmatrix}
\lambda & 0 \\ 0 & \lambda^{-1}
\end{pmatrix}$ where $|\lambda|>1$ and write $B=\begin{pmatrix}
u & v \\ w & x
\end{pmatrix}.$ Then
$$\mathrm{Tr} (A^k B)=\lambda^k u+\lambda^{-k} x,$$
which as $k \rightarrow +\infty$ tends either to infinity if $u\neq 0$ or to $0$ otherwise. In the first case we will have
$|\mathrm{Tr} (A^k B)|> 2$ for $k$ big enough; in the second case we will have
$|\mathrm{Tr} (A^k B)|< 2$. In both cases we have $|\mathrm{Tr} (A^k B)|\neq 2$ as desired.
\end{proof}
\end{lemma}
Next we will consider the Wirtinger presentation of $\pi_1(S^3 \setminus L)$ corresponding to a link diagram
representing a
hyperbolic link $L$ as a closed braid
$\hat{\sigma}.$ The Wirtinger generators are conjugates of meridians of the components of $L$ and are mapped to parabolic elements of $\mathrm{PSL}_2({\mathbb{C}})$ by $\rho$.
A key point in the proof of Proposition \ref{prop:condition} is to choose the Stallings component $K$ so that the word it represents in
$\pi_1(S^3 \setminus L)$ is conjugate to one that begins with a sub-word
$(a^l b^{-l})^k,$ where $a$ and $b$ are Wirtinger generators mapped to non-commuting elements under $\rho$. Then we will use
Lemma \ref{lem:traces} to prove that the free homotopy class $[K]$ is not in a peripheral subgroup of any component of $L$.
We first need the following lemma:
\begin{lemma}\label{lem:meridians} Let $L$ be a hyperbolic link in $S^3,$ with a link diagram of a closed braid
$\hat{\sigma}.$ We can find two strands of $\sigma$ meeting at a crossing so that if $a$ and $b$ are the Wirtinger generators corresponding to an under-strand and the over-strand of the crossing respectively, then
$\rho(a)$ and $\rho(b)$ don't commute.
\end{lemma}
\begin{proof}
Suppose that for any pair of Wirtinger generators $a, b$ corresponding to a crossing as above, $\rho(a)$ and $\rho(b)$ commute.
Since $\rho(a)$ and $\rho(b)$ are commuting parabolic elements of infinite order in $\mathrm{PSL}_2({\mathbb{C}}),$ elementary linear algebra shows that they share
their unique eigenline. Then step by step, we get that the images under $\rho$ of all Wirtinger generators share an eigenline. But this would imply that $\rho\left(\pi_1(S^3\setminus L)\right)$ is abelian
which is a contradiction.
\end{proof}
We can now turn to the proof of Proposition \ref{prop:condition}, which we prove by tweaking Stallings' homogenization procedure.
\begin{proof}[Proof of Proposition \ref{prop:condition}] Let $L$ be a hyperbolic link, with components,
$L_1,\ldots,L_n$, represented as a braid closure ${\hat{ \sigma}}$. Let $a,b$ be Wirtinger generators of $\pi_1(S^3\setminus L)$ chosen as in
Lemma \ref{lem:meridians}.
Starting with the projection of ${\hat{ \sigma}}$, we proceed
in the following way:
We arrange the crossings of ${\hat{ \sigma}}$ to occur at different verticals on the projection plane.
\begin{enumerate}
\item Begin drawing the Stallings component so that, near the strands where the above chosen Wirtinger generators $a,b$ occur,
we
create the pattern shown on the left of Figure \ref{fig:pattern}.
\item We deform the strands of $\sigma$ to create
``zigzags'' as shown in the second drawing of Figure \ref{fig:homogenization}.
\item We fill the empty spaces in verticals with new braid strands
and choose the new crossings so that the resulting braid is homogeneous and so that the new strands meet the strands of $\sigma$ both in positive and negative crossings.
Adding enough ``zigzags'' at the previous step will ensure that there is enough freedom in choosing the crossings to make this second condition possible.
\item At this stage, we have turned the braid $\sigma$ into a homogeneous braid, say $\sigma_h$. The closure ${\hat{ \sigma_h}}$ contains $L$ as a sublink and some number $s\geqslant 1$ of unknotted components.
To reduce the number of components added, we connect the new components with a single crossing between each pair of neighboring new components. Doing so we may have to create new crossings with the components of $L,$ but we can always choose them so as to preserve homogeneity. Thus we have homogenized $L$ by adding a single unknotted component $K$ to it.
\end{enumerate}
The four step process described above is illustrated in Figure \ref{fig:homogenization}.
\begin{figure}
\caption{The four step homogenization process.}
\label{fig:homogenization}
\end{figure}
Now, because we have positive and negative crossings of $K$ with each component of $L,$ we can set the linking numbers as we want just by adding an even number of positive or negative crossings between $K$ and a component of $L$ locally.
If the strands $a$ and $b$ correspond to the same component $L_1,$ we simply ask that $lk(K,L_1)=1.$
If they correspond to two distinct components $L_1$ and $L_2,$ we choose $(lk(K,L_1),lk(K,L_2))=(1,0).$
Recall that we have chosen $a,b$ to be Wirtinger generators of $\pi_1(S^3\setminus L)$, as in
Lemma \ref{lem:meridians}, and that $K$ has been added to $L$
so that the pattern shown on the left hand side of Figure \ref{fig:pattern} occurs near the corresponding crossing.
Assume that $[K]$ is conjugate to a word $w \in \pi_1(S^3\setminus L).$ Now one may modify the diagram of $L\cup K$ locally, as shown in the right hand side of Figure \ref{fig:pattern},
to make $[K]$ conjugate to $(a^{-l} b^l)^k w$ for any non-negative $k$ and $l.$ Notice also that this move leaves $K$ unknotted and that $L \cup K$ is still a closed homogeneous braid.
Also notice that doing so, we left $lk(K,L_1)$ unchanged if $a$ and $b$ were part of the same component $L_1,$ and we turned $(lk(K,L_1),lk(K,L_2))$ into $(1-kl,kl)$ if they correspond to different components $L_1$ and $L_2.$ In both cases, we preserved the fact that
$$\gcd(lk(K,L_1),lk(K,L_2),\ldots ,lk(K,L_n))=1$$
and $K$ satisfies part (ii) of Condition $(\clubsuit).$
\begin{figure}
\caption{Changing $[K]$ from a conjugate of $w \in \pi_1(S^3\setminus L)$ (left) to a conjugate of $w(a^{-l} b^{l})^{k}$ (right).}
\label{fig:pattern}
\end{figure}
To ensure that part (i) of the condition is satisfied, note that since $\rho(a)$ and $\rho(b)$ are non-commuting, Lemma \ref{lem:traces} (1) implies $|\mathrm{Tr} (\rho(a)^{-l}\rho(b)^l)|> 2$ for $l\gg 0.$ Thus by choosing $k\gg 0,$ and using Lemma \ref{lem:traces} (2), we may assume that $|\mathrm{Tr}(A^k B)|\neq 2,$ where $A=\rho(a)^{-l} \rho(b)^{l}$ and $B=\rho(w).$ Then
$\rho([K])$ is conjugate to $A^k B,$ and hence $[K]$ is not in a peripheral subgroup of $\pi_1(S^3\setminus L).$
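For instance, in the normalized model where the two non-commuting parabolics are
$A_0=\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)$ and $B_0=\left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right),$ one computes
$$A_0^{-l}B_0^{l}=\begin{pmatrix}1&-l\\0&1\end{pmatrix}\begin{pmatrix}1&0\\l&1\end{pmatrix}=\begin{pmatrix}1-l^2&-l\\l&1\end{pmatrix},$$
so that $|\mathrm{Tr}(A_0^{-l}B_0^{l})|=l^2-2>2$ for every $l\geqslant 3;$ this illustrates the trace growth of Lemma \ref{lem:traces} (1) used above.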
Now notice that, as the above mentioned positive integers $k,l$ become arbitrarily large, the crossing number of the resulting homogeneous braid projections becomes arbitrarily large while the braid index remains unchanged. Since the fiber of the fibration of a closed homogeneous braid is the Seifert surface obtained from the closed braid projection, it follows that as $k,l\to \infty$ the genus of the fiber becomes arbitrarily large.
\end{proof}
\subsection{Ensuring hyperbolicity}
In this subsection we will finish the proof of Theorem \ref{mainofsection}. For this we need the following:
\begin{proposition}\label{prop:hyperbolic} Suppose that $L$ is a hyperbolic link and let $L\cup K$ be a homogeneous closed braid obtained from $L$ by adding a Stallings
component $K$ that satisfies condition $(\clubsuit )$. Then $L\cup K$ is a hyperbolic link.
\end{proposition}
Before we can proceed with the proof of Proposition \ref{prop:hyperbolic} we need some preparation:
We recall that when an oriented link $L$ is embedded in a solid torus $V$, the total winding number of $L$ is the non-negative integer $n$ such that $L$ represents $n$ times a generator of $H_1(V,{\mathbb{Z}}).$
When convenient we will consider $M_{L\cup K}$ to be the compact 3-manifold obtained from $S^3$ by removing the interiors of tubular neighborhoods of the components of $L\cup K$; the interior of $M_{L\cup K}$
is homeomorphic to $S^3 \setminus (L\cup K)$.
In the course of the proof of the proposition we will
see that condition $(\clubsuit)$ ensures that the complement of $L\cup K$ cannot contain embedded tori that are not boundary parallel or compressible (i.e. $M_{L\cup K}$ is {\emph{atoroidal}}).
We need the following lemma that provides restrictions on winding numbers of satellite fibered links.
\begin{lemma}\label{lem:windingnumber}We have the following:
\begin{itemize}
\item[(1)] Suppose that $L$ is an oriented fibered link in $S^3$ that is embedded in a solid torus $V$ with boundary $T$ incompressible in $S^3\setminus L.$ Then, some component of $L$ must have non-zero
winding number.
\item[(2)] Suppose that $L$ is an oriented fibered link in $S^3$ such that only one component $K$ is embedded inside a solid torus $V.$ If $K$ has winding number $1,$ then $K$ is isotopic to the core of $V.$
\end{itemize}
\end{lemma}
Though this statement is fairly classical in the context of fibered knots \cite{HiraMuraSilver}, we include a proof as we are working with fibered links.
\begin{proof} The complement $M_L=S^3\setminus N(L)$ fibers over $S^1$ with fiber a surface $(F, \partial F)\subset (M_L, \partial M_L)$. Then $S^3\setminus L$ cut along $F=F\times \{0\}=F\times \{1\}$ is homeomorphic to $F \times [0,1].$
It is known that $F$ maximizes the Euler characteristic in its homology class in $H_2(M_L, \partial M_L)$ and thus $F$ is incompressible and $\partial$-incompressible.
\smallskip
(1) Assume that the winding number of every component of $L$ is zero, and consider the intersection of $F$ with $T,$ the boundary of the solid torus containing $L.$ Since $F \times [0,1]$ is irreducible, and $F$ is incompressible in the complement of $L$, up to isotopy, one can assume
that the intersection $T\cap F$ consists of a collection of parallel curves in $T,$ each of which is homotopically essential in $T.$
The hypothesis on the winding number implies that
the intersection $F\cap T$ is null-homologous in $T,$ where each component of $F\cap T$ is given the orientation inherited from the surface $V\cap F.$
Thus the curves in $F\cap T$ can be partitioned into pairs
of parallel curves with opposite orientations in $T \cap F$. Each such pair bounds an annulus in $T$ and
in $F \times (0,1)$ each of these annuli has both ends on $F \times \lbrace 0 \rbrace$ or on $F \times \lbrace 1 \rbrace.$ This implies that we can find $0<t<1$ such that
$F_t=F\times \{t\}$ misses the torus $T$. This in turn implies that $T$ must be an essential torus in the manifold obtained by cutting $S^3\Sigmaetminus L$ along the fiber $F_t$.
But this is impossible since the latter manifold is $F_t\times I$ which is a handlebody and cannot contain essential tori; contradiction.
\smallskip
(2) By an argument similar to that used in case (1) above, we can simplify the intersection of the fiber surface $F$ with $T$ until it consists of one curve only. This curve, say $\gamma$, cuts $T$ into an essential annulus embedded in $F \times (0,1)$ with one boundary component on $F \times \lbrace 0 \rbrace$ and the other on $F \times \lbrace 1 \rbrace.$ As the annulus closes up, the curve $\gamma$ must be fixed by the monodromy of the fibration and one can isotope $T$ to make it compatible with the fibration. Then one has that $K$ is fibered in $V,$ and as the winding number of $K$ is $1,$ by Corollary 1 in \cite{HiraMuraSilver}, $K$ must be isotopic to the core of $V.$
\end{proof}
We are now ready to give the proof of Proposition \ref{prop:hyperbolic}.
\begin{proof}[Proof of Proposition \ref{prop:hyperbolic}] First we remark that $S^3 \setminus (L \cup K)$ is non-split as $S^3 \setminus L$ is and $K$ represents a non-trivial element in $\pi_1(S^3 \setminus L).$
Next we argue that $S^3 \setminus (L \cup K)$ is atoroidal: Assume that we have an essential torus, say $T,$ in $M_{L\cup K}=S^3 \setminus (L \cup K).$ Since $L$ is hyperbolic, in $M_L=S^3 \setminus L$
the torus $T$ becomes either boundary parallel or compressible. Moreover, the torus $T$ bounds a solid torus $V$ in $S^3.$
Suppose that $T$ becomes boundary parallel in the complement of $L$. Then, we may assume that $V$ is a tubular neighborhood of a component $L_i$ of $L.$
Then $K$ must lie inside $V$; for otherwise $T$ would still be boundary parallel in $M_{L\cup K}$.
Then the free homotopy class $[K]$ would represent a conjugacy class in the peripheral subgroup of $\pi_1(M_L)$ corresponding to $L_i.$ However this contradicts condition $(\clubsuit);$ thus this case cannot happen.
Suppose now that $T$ becomes compressible in $M_L$.
In $S^3,$ the torus $T$ bounds a solid torus $V$ that contains a compressing disk of $T$ in $M_L.$ If $V$ contains no component of $L \cup K,$ the torus $T$ is still compressible in $M_{L\cup K}$. Otherwise, there are again two cases:
\smallskip
{\it Case 1:} The solid torus $V$ contains some components of $L.$ We claim that $V$ actually contains all the components of $L.$ Otherwise, after compressing $T$ in $M_L,$ one would get a sphere that separates the components of $L,$ which cannot happen as $L$ is non-split. Moreover, as the compressing disk is inside $V,$ all components of $L$ have winding number zero in $V.$
Since $T$ is incompressible in $M_{L\cup K},$ the component $K$ must also lie inside $V.$
Note that $V$ has to be knotted since otherwise $T$ would compress outside $V$ and thus in $M_{L\cup K}$. But then since $K$ is unknotted, it must have winding number zero in $V.$
Thus we have the fibered link $L\cup K$ lying inside $V$ so that each component has winding number zero. But then $T$ cannot be incompressible in $M_{L\cup K}$ by Lemma \ref{lem:windingnumber}-(1); contradiction. Thus this case cannot happen.
\smallskip
{\it Case 2:} The solid torus $V$ contains only $K.$ Since $T$ is incompressible in $M_{L\cup K},$ $K$ must be geometrically essential in $V;$ that is, it does not lie
in a 3-ball inside $V.$ Since $K$ is unknotted, it follows that $V$ is unknotted.
For each component $L_i$ of $L,$ we have
$$lk(K,L_i)=w\cdot lk(c,L_i),$$
where $c$ is the core of $V,$
and $w$ denotes the winding number of $K$ in $V.$ Since $K$ satisfies condition $(\clubsuit),$ we know that
$$\gcd\left(lk(K,L_1),lk(K,L_2),\ldots ,lk(K,L_n)\right)=1,$$
which implies that we must have $w=1.$
Thus by Lemma \ref{lem:windingnumber}-(2), $K$ is isotopic to the core of $V$ and $T$ is boundary parallel, contradicting the assumption that $T$ is essential in $M_{L\cup K}.$
This finishes the proof that $M_{L\cup K}$ is atoroidal.
Since $M_{L\cup K}$ contains no essential spheres or tori, and has toroidal boundary, it is either a Seifert fibered space or a hyperbolic manifold. But the hyperbolic manifold $M_L$ is obtained from $M_{L\cup K}$ by Dehn filling. Since the Gromov norm $||\cdot ||$ does not increase under Dehn filling \cite{thurston:notes}, we get $||M_{L\cup K}||\geq ||M_L||>0.$ The Gromov norm of Seifert fibered 3-manifolds is zero, thus $L \cup K$ must be hyperbolic.
\end{proof}
\smallskip
We can now finish the proof of Theorem \ref{mainofsection} and the proof of Theorem \ref{hyperbgeneral} stated in the Introduction.
\begin{proof}[Proof of Theorem \ref{mainofsection}] Let $L$ be any link.
If $L$ is not hyperbolic, then we can find a hyperbolic link $L'$ that contains $L$ as a sublink; see for example \cite{Baker}. If $L$ is hyperbolic, then set $L'=L.$
Then apply
Proposition \ref{prop:condition} to $L'$ to get links $L'\cup K$ that are closed homogeneous braids with arbitrarily high crossing numbers and fixed braid index.
By \cite{Stallings}, the links $L'\cup K$ are fibered and the fibers have arbitrarily large genus and by Proposition \ref{prop:hyperbolic}
they are hyperbolic.
\end{proof}
\begin{proof}[Proof of Theorem \ref{hyperbgeneral}] Suppose that $L$ is a link with $lTV(S^3\setminus L)>0$. By Theorem \ref{mainofsection} we have fibered hyperbolic links $L'$ that contain $L$ as a sublink and whose fibers have
arbitrarily large genus. By Theorem \ref{thm:ltvdehnfilling} we have $lTV(S^3\setminus L')>0$. \end{proof}
\section{Stallings twists and the AMU conjecture}
\label{sec:examples}
By our results in the previous sections, starting from a hyperbolic link $L\subset S^3$ with $lTV(S^3\setminus L) >0,$ one can add an unknotted component $K$ to obtain a hyperbolic fibered link $L\cup K,$
with $lTV(S^3\setminus (L\cup K)) >0.$
The monodromy of a fibration of $L\cup K$ provides a pseudo-Anosov mapping class
on the surface $\Sigma=\Sigma_{g,n},$ where $g$ is the genus of the fiber and $n$ is the number of components of $L\cup K.$
One can always increase the number of boundary components $n$ by adding more components to $L$ and appealing to Theorem \ref{thm:ltvdehnfilling}. However since $L\cup K$ is a closed homogeneous braid this construction alone will not provide infinite families of examples for fixed genus and number of boundary components.
In this section we show how to address this problem and prove Theorem \ref{general} stated in the introduction, which, for the convenience of the reader, we restate here.
\begin{named}{Theorem \ref{general}} Let $\Sigma$ denote an orientable surface of genus $g$ with $n$ boundary components. Suppose that either $n=2$ and $g\geqslant 3$ or $g\geqslant n \geqslant 3.$
Then there are infinitely many non-conjugate pseudo-Anosov mapping classes in
$\mathrm{Mod}(\Sigma)$ that satisfy the AMU conjecture.
\end{named}
\subsection{Stallings twists and pseudo-Anosov mappings}
Stallings \cite{Stallings} introduced an operation that transforms a fibered link into a fibered link with a fiber of the same genus:
Let $L$ be a fibered link with
fiber $F$ and let $c$ be a simple closed curve on $F$ that is unknotted in $S^3$ and such that $lk(c,c^+)=0,$ where $c^+$ is the curve $c$ pushed along the normal of $F$ in the positive direction.
The curve $c$ bounds a disk $D\subset S^3$ that is transverse to $F$. Let $L_m$ denote the link obtained from $L$ by a full twist of order $m$ along $D.$ This operation is known as a Stallings twist of order $m.$
Alternatively, one can think of the Stallings twist operation as performing $1/m$ surgery on $c,$ where the framing of $c$ is induced by the normal vector on $F.$
\begin{theorem} {\rm { (\cite[Theorem 4]{Stallings})}}\label{twist} Let $L$ be a link whose complement fibers over $S^1$ with fiber $F$ and monodromy $f$.
Let $L_m$ denote a link obtained by a Stallings twist of order $m$ along a curve $c$ on $F$. Then, the complement of $L_m$ fibers over $S^1$ with fiber $F$
and the monodromy is $f \circ \tau_c^m,$ where $\tau_c$ is the Dehn-twist on $F$ along $c.$
\end{theorem}
Note that when $c$ is parallel to a component of $L,$ then such an operation does not change the homeomorphism class of the link complement; we call these Stallings twists trivial.
To facilitate the identification of non-trivial Stallings twists on link fibers, we recall the notion of {\emph {state graphs}}:
Recall that the fiber for the complement of a homogeneous closed braid ${\hat{\sigma}}$ is obtained as follows: Resolve all the crossings in the projection of ${\hat{\sigma}}$ in a way consistent with the braid orientation.
The result is a collection of nested embedded circles (Seifert circles) each bounding a disk on the projection plane; the disks can be made disjoint by pushing them slightly above the projection plane. Then we construct the fiber $F$ by attaching a half twisted band for each crossing.
The state graph consists of the collection of the Seifert circles together with an edge for each crossing of ${\hat{\sigma}}.$ We will label each edge by $A$ or $B$ according to whether
the resolution of the corresponding crossing during the construction of $F$ is of type $A$ or $B$ shown in Figure \ref{fig:resolutions}, when viewed as an unoriented resolution.
\begin{figure}
\caption{A crossing and its $A$ and $B$ resolutions.}
\label{fig:resolutions}
\end{figure}
\begin{figure}
\caption{Left: Pattern in the state graph exhibiting a non-trivial Stallings twist. Right: The curve $c$ obtained as a connected sum of the curves $c_1, c_2$ with self-linking $+2$ and $-2$. }
\label{fig:stallingstwist}
\end{figure}
\begin{remark}\label{locate} {\rm As the homogeneous braids get more complicated the fiber is more likely to admit a non-trivial Stallings twist. Indeed, if the state graph of $L={\hat{\sigma}}$ exhibits the local pattern
shown in the left hand side of
Figure \ref{fig:stallingstwist}, we can perform a non-trivial Stallings twist along the curve $c$ which corresponds to the connected sum of the two curves $c_1$ and $c_2$ shown in the Figure. We can see that $lk(c_1,c_1^+)=+2$ and $lk(c_2,c_2^+)=-2,$ and the mixed linkings are zero. In the end, $lk(c,c^+)=2-2=0;$ the computation is displayed after this remark.}
\end{remark}
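Explicitly, writing $c$ as the band sum of $c_1$ and $c_2$ on the fiber and using bilinearity of the linking number, we get
$$lk(c,c^+)=lk(c_1,c_1^+)+lk(c_2,c_2^+)+lk(c_1,c_2^+)+lk(c_2,c_1^+)=2-2+0+0=0.$$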
We will need the following theorem, stated and proved by Long and Morton \cite{LongMorton} for closed surfaces. Here we state the bounded version and for completeness we sketch the slight adaptation of their argument in this setting.
\begin{theorem} {\rm {(\cite[Theorem A]{LongMorton})}}\ \label{thm:LongMorton}Let $F$ be a compact oriented surface with $\partial F\neq \emptyset.$ Let $f$ be
a pseudo-Anosov homeomorphism on $F$ and let $c$ be a non-trivial, non-boundary parallel simple closed curve on $F.$
Let $\tau_{c}$ denote the Dehn-twist along $c.$ Then, the family $\{f \circ \tau_{c}^m\}_m$ contains infinitely many non-conjugate pseudo-Anosov homeomorphisms.
\end{theorem}
\begin{proof}
The proof rests on the fact that the mapping torus of $f_m = f \circ \tau_{c}^m$ is obtained from $M_f$ by performing
$1/m$-surgery on the curve $c$ with framing induced by a normal vector in $F.$ Once we prove that $M_f \setminus c$ is hyperbolic,
Thurston's hyperbolic Dehn surgery theorem implies, for $m$ big enough, that
the mapping tori $M_{f_{m}}$ are hyperbolic and all pairwise non-homeomorphic (as their hyperbolic volumes differ). Since conjugate maps have homeomorphic mapping tori, the infiniteness statement follows.
We will consider the curve $c$ as embedded on the fiber $F \times \lbrace 1/2 \rbrace \Sigmaubset M_f.$
Notice that $M_f \setminus c$ is irreducible, as $c$ is non-trivial in $\pi_1(F)$ and thus in $\pi_1(M_f).$
We need to show that $M_f \setminus c$ contains no essential embedded tori: Let $T$ be a torus embedded in $M_f \setminus c.$
If $T$ is boundary parallel in $M_f,$ it will also be boundary parallel in $M_f \setminus c,$ for otherwise one would be able to isotope $c$ onto the boundary of $M_f,$ and as $c$ is actually a curve on $F \times \lbrace 1/2 \rbrace,$ $c$ would be conjugate in
$\pi_1(F)$ to a boundary component. As we chose $c$ non-boundary parallel in $F,$ this does not happen.
Now, assume that $T$ is non-boundary parallel in $M_f.$
Then we can put $T$ in general position and consider the intersection $T\cap F\times \lbrace 1/2 \rbrace.$ If this intersection is empty then $T$ is compressible as $F \times [0, 1]$ does not contain any essential tori. After isotopy we can assume that $T \cap F \times [0, 1]$ is a collection of properly embedded annuli in $ F \times [0, 1],$
each of which either misses a fiber $F$ or is vertical with respect to the $I$-bundle structure.
Now note that if one of these annuli misses a fiber then we can remove it by isotopy in $M_f \setminus c,$ unless it connects two curves parallel to $c$ on opposite sides of $c$ on
$ F\times \lbrace 1/2 \rbrace.$ Also observe that we cannot have annuli that connect a non-boundary parallel curve $c \subset F\times \lbrace 1/2 \rbrace$
to $f(c):$ For, since $f$ is pseudo-Anosov, the curves
$f^k (c)$ and $f^l (c)$ are freely homotopic on the fiber if and only if $k=l;$ and thus the annuli would never close up to give $T.$
In the end, since $M_f$ is hyperbolic,
we are left with two annuli connecting both sides of $c$, and $T$ is boundary parallel in $M_f \setminus c.$
\\ Finally, $M_f\setminus c$ is irreducible and atoroidal and since its Gromov norm satisfies $||M_f \setminus c||\geq ||M_f||>0,$ it is hyperbolic.\end{proof}
\subsection{Infinite families of mapping classes} We are now ready to present our examples of infinite families of non-conjugate pseudo-Anosov mapping classes of fixed surfaces that satisfy the AMU conjecture.
The following theorem gives the general process of the construction.
\begin{theorem} \label{infinitegen} Let $L$ be a hyperbolic fibered link with fiber $\Sigma$ and monodromy $f.$ Suppose that
$L$ contains a sublink $K$ with $lTV(S^3\setminus K)>0.$ Suppose, moreover, that the fiber $\Sigma$ admits a non-trivial Stallings twist along a curve $c\subset \Sigma$ such that the interior of the twisting disc $D$ intersects $K$ at most once geometrically. Let $\tau_c$ denote the Dehn twist of $\Sigma$ along $c$.
Then the family $\{f \circ \tau_{c}^m\}_m$ of homeomorphisms gives infinitely many non-conjugate pseudo-Anosov mappings classes in $\mathrm{Mod}(\Sigma)$ that satisfy the AMU conjecture.
\end{theorem}
\begin{proof} Since $L$ contains $K$ as a sublink we have $lTV(S^3\setminus L)\geqslant lTV(S^3\setminus K)>0.$
Since $D$ intersects $K$ at most once, each of the links $L'$ obtained by Stallings twists along $c$ will also contain a sublink isotopic to $K$ and hence $lTV(S^3\setminus L')\geqslant lTV(S^3\setminus K)>0.$
The conclusion follows by Theorems \ref{twist}, \ref{thm:LongMorton} and \ref{amu-ltv}.
\end{proof}
We finish the section with concrete constructions of infinite families obtained by applying Theorem \ref{infinitegen}. Start with $K_1=4_1$ represented as the closure of the homogeneous braid
$\sigma_2^{-1} \sigma_1 \sigma_2^{-1} \sigma_1.$
We construct a 2-parameter family of links
$L_{n,m}$ where $n\geqslant 2$, $m\geqslant 1,$ defined as follows:
The link $L_{4,m}$
is shown in the left panel of Figure \ref{fig:example2}, where the box shown contains $2m$ crossings. It is obtained from $K_1$ by adding three unknotted components.
The link $L_{3,m}$ is obtained from $L_{4,m}$ by removing the unknotted component corresponding to the outermost string of the braid.
The link $L_{n+1,m}$ for $n\geqslant 4$ is obtained from $L_{n,m}$ by adding one strand in the following way: denote by $K_1,\ldots ,K_n$ the components of $L_{n,m}$ from innermost to outermost, $K_1$ being the $4_1$ component.
To get $L_{n+1,m},$ we add one strand $K_{n+1}$ to $L_{n,m}$, so that traveling along $K_n$ one finds $2$ crossings with $K_{n-1},$ then $2$ crossings with $K_{n+1},$ then $2$ crossings with $K_{n-1},$ then $2$ crossings with $K_{n+1},$ and, moreover, the crossings with $K_{n-1}$ and $K_{n+1}$ have opposite signs. There is only one way to choose this new strand, and doing so we added one unknotted component to $L_{n,m},$ thus $L_{n+1,m}$ has $n+1$ components and $4$ more crossings than $L_{n,m}.$
\\ In the special case $n=2,$ the link $L_{2,m}$ is obtained from the link $L_{3,m}$ by replacing the box with $2m$ crossings with a box with $2m-1$ crossings. The links $L_{2,m}$ are then $2$-component links. We note that all the links $L_{n,m}$ contain the component $K_1$ we started with.
\begin{figure}
\caption{The link $L_{4,m}$ (left) and its state graph (right).}
\label{fig:example2}
\end{figure}
\begin{proposition}\label{prop:example} The link
$L_{n,m}$ is hyperbolic, fibered and satisfies the hypotheses of Theorem \ref{infinitegen}.
The fiber has genus $g=m+2$ if $n=2,$
$g=n+m-1$ otherwise.
\end{proposition}
\begin{proof} For every $n\geqslant 2$ and any $m\geqslant 1,$ the link $L_{n,m}$ contains the knot $K_1=4_1$ as a sublink and as said earlier we have $lTV(S^3\setminus K_1)>0.$
Since $L_{n,m}$ is alternating, hyperbolicity follows from Menasco's criterion \cite{menasco:primediagram}: any prime non-split alternating diagram of a link that is not the standard diagram of the $T(2,q)$ torus link, represents a hyperbolic link.
Since $L_{n,m}$ is represented by an alternating (and thus homogeneous) closed braid, fiberedness follows from Stallings' criterion.
For $n\geqslant 3,$ the resulting closed braid diagram
has braid index $2+n$ and $2+4n+2m$ crossings. Hence the Euler characteristic is $-3n-2m$ and the genus is $m+n-1,$ as the fiber has $n$ boundary components. In the case $n=2,$ the braid index is $5,$ the number of crossings is $9+2m,$ thus the Euler characteristic of the fiber is $-4-2m$ and the genus is $m+2.$
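For instance, in the case $n=2$ the count can be spelled out as follows: the fiber $F$ obtained from Seifert's algorithm satisfies
$$\chi(F)=\#\{\text{Seifert circles}\}-\#\{\text{crossings}\}=5-(9+2m)=-4-2m,$$
and since $\partial F$ consists of the two components of $L_{2,m}$, the relation $\chi(F)=2-2g-2$ gives $g=m+2.$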
Using Remark \ref{locate} and the state graph given in Figure \ref{fig:example2} we can easily locate a simple closed curve $c$ on the fiber with the properties
in the statement of Theorem \ref{infinitegen}.
\end{proof}
Now Proposition \ref{prop:example} and Theorem \ref{infinitegen} immediately give Theorem \ref{general} stated in the beginning of the section.
\begin{remark} Note that if we restrict ourselves to closed homogeneous braids to get explicit examples of links satisfying the hypotheses of Theorem \ref{infinitegen}, it seems necessary that the genus will grow with the number of components. However, one could consider other methods, that can increase the number of components while keeping the fiber genus low, to produce links satisfying the hypotheses of Theorem \ref{infinitegen}. One way would be to take a Murasugi sum of the link $L_{2,m}$ with links with an arbitrarily large number of components whose complements fiber over $S^1$ with fiber of genus $0$.
This should be done so that the Murasugi sum operation leaves the component $4_1\subset L_{2,m}$ unaffected and it produces hyperbolic links. It seems plausible that combining homogeneous braids and Murasugi sums should give explicit examples of infinite families of mapping classes that satisfy the AMU Conjecture, for all surfaces $\Sigma$ with genus $g\geqslant 2$ and $n\geqslant 2$ boundary components.
\end{remark}
\begin{remark}\label {spheres} Our methods also apply to surfaces of genus zero to produce examples of mapping classes that satisfy the conclusion of the AMU Conjecture.
Given that such examples were previously known, we just outline an explicit construction without pursuing the details of determining the Nielsen-Thurston types of the resulting mapping classes or discussing how the construction relates to that of Santharoubane \cite{San17}:
Let $\Sigma_{0, n}$ denote the sphere with $n$ holes. A mapping class in $Mod(\Sigma_{0, n+1})$ can be thought of as an element in the pure braid group on $n$ strings, say $P_n$.
It is known that for $n>3$, $P_n$ is a semi-direct product of a subgroup $W_n$, that is itself a semi-direct product of free subgroups of $P_n$, and of $P_3$. See, for example,
\cite[Theorem 1.8]{birman:book}. As a result, any braid $b\in P_n$ can be uniquely written as a product $\beta\cdot w$ where
$\beta\in P_3$ and $w\in W_n$. Now take any $b=\beta \cdot w$ for which the closure of $\beta\in P_3$ represents the Borromean rings $B$.
The braid $b$ represents an element in $Mod(\Sigma_{0, n+1})$ whose mapping torus is the complement of the link $L_b$ that is the closure of $b$ together with the braid axis.
The link $L_b$ contains $B$ as a sublink and hence $lTV(S^3\setminus L_b)>lTV(S^3\setminus B)>0.$ Thus by Theorem \ref{amu-ltv}, for $r$ big enough,
there is a choice of colors $c$ of the boundary components of $\Sigma_{0, n+1}$ such that $\rho_{r,c}(b)$ has infinite order.
\end{remark}
\begin{remark} \label{knots}Theorem \ref{infinitegen} leads to constructions of mapping classes on surfaces with at least two boundary components that satisfy the AMU conjecture. Furthermore, all these mapping classes are obtained
as monodromies of fibered links in $S^3$. In \cite{BDKY} we prove the Turaev-Viro invariants volume conjecture for an infinite family of cusped hyperbolic 3-manifolds. Considering the doubles of these 3-manifolds we obtain an infinite family of closed 3-manifolds $M$ with $lTV(M)>0$. It is known that every closed 3-manifold contains hyperbolic fibered knots \cite{Soma}.
By Theorem \ref{amu-ltv}, monodromies of such knots provide examples of pseudo-Anosov mappings on surfaces with a single boundary component that satisfy the AMU conjecture.
\end{remark}
\Sigmaection{Integrality properties of periodic mapping classes}
\label{sec:integertraces}
In this section we give the proofs of Theorem \ref{thm:integertrace} and Corollary \ref{cor:toruslinks} stated in the Introduction. We also state a conjecture about traces of quantum representations of pseudo-Anosov
mapping classes and we give some supporting evidence.
\begin{named} {Theorem \ref{thm:integertrace}} Let $f\in \mathrm{Mod}(\Sigma)$
be periodic of order $N.$ For any odd integer $r\geqslant 3$, with $\mathrm{gcd}(r, N)=1$, we have
$|\mathrm{Tr} \rho_{r,c}(f)| \in {\mathbb{Z}},$
for any $U_r$-coloring $c$ of $\partial \Sigma,$ and any primitive
$2r$-th root of unity.
\end{named}
\begin{proof} For any choice of a primitive $2r$-th root of unity $\zeta_{2r}$, the traces $\mathrm{Tr} \rho_{r,c}(f)$ lie in the field $ {\mathbb{Q}}[\zeta_{2r}]$. Since ${\mathbb{Z}}$ is invariant under the action of the Galois group of the field,
the property $|\mathrm{Tr} \rho_{r,c}(f)| \in {\mathbb{Z}}$
does not depend on the choice of root of unity used to define the TQFT.
In the rest of the proof, for any positive integer $n,$ we write $\zeta_n=e^{\frac{2i\pi}{n}}.$
By choosing a lift we can consider $\rho_{r,c}(f)$ as an element of $\mathrm{Aut}(RT_r(\Sigma,c))$ instead of $\mathbb{P}\mathrm{Aut}(RT_r(\Sigma,c)).$
Since $f^N=id$ and $\rho_{r,c}$ is a projective representation with projective ambiguity a $2r$-th root of unity, we have $\rho_{r,c}(f)^N=\zeta_{2r}^k\ \mathrm{Id}_{RT_r(\Sigma,c)}.$ Since $N$ and $r$ are coprime, by changing the lift $\rho_{r,c}(f)$ by a power of $\zeta_{2r}$ we can actually assume that $\rho_{r,c}(f)^N=\pm \mathrm{Id}_{RT_r(\Sigma,c)}.$ Then $\rho_{r,c}(f)$ is diagonalizable, with eigenvalues that are $2N$-th roots of unity. This implies that $|\mathrm{Tr} \rho_{r,c} (f)| \in {\mathbb{Z}}[\zeta_{2N}].$
On the other hand we know that the traces of quantum representations $\rho_{r, c}$ take values in ${\mathbb{Q}} [\zeta_{2r}]$, and the same is true for $RT_r$ invariants of any closed $3$-manifold with a colored link (see Theorem \ref{thm:TQFTdef}-(2)). Thus we have
$$|\mathrm{Tr} \rho_{r,c}(f)| \in {\mathbb{Z}}[\zeta_{2N}]\cap {\mathbb{Q}}[\zeta_{2r}].$$
By elementary number theory, it is known that
$${\mathbb{Q}}[\zeta_m]\cap {\mathbb{Q}}[\zeta_n]= {\mathbb{Q}}[\zeta_{d}],$$
where $d=\mathrm{gcd}(m, n)$. See, for example, \cite[Theorem 3.4]{Konrad} for a proof of this fact.
Hence, since $\mathrm{gcd}(2N, 2r)=2\,\mathrm{gcd}(N,r)=2$, we get ${\mathbb{Q}}[\zeta_{2N}]\cap {\mathbb{Q}}[\zeta_{2r}]={\mathbb{Q}}[\zeta_2]={\mathbb{Q}}[-1]={\mathbb{Q}}$, and since the algebraic integers in ${\mathbb{Q}}$ are the integers,
we have ${\mathbb{Z}}[\zeta_{2N}]\cap {\mathbb{Q}}[\zeta_{2r}]={\mathbb{Z}}.$
Thus we obtain $|\mathrm{Tr} \rho_{r,c}(f)| \in {\mathbb{Z}}.$
\end{proof}
It is known that the mapping torus of a class $f\in \mathrm{Mod}(\Sigma)$
is a Seifert fibered manifold if and only if $f$ is periodic. In particular, the complement $S^3\Sigmaetminus T_{p,q}$ of a $(p, q)$ torus link is fibered with periodic monodromy of order
$pq$ \cite{orlik:seifert-manifolds}. As a corollary of Theorem \ref{thm:integertrace} we have the following result which in particular implies Corollary \ref{cor:toruslinks} that settles a question of \cite{Chen-Yang}.
\begin{corollary}\label{integerTV} Let $M_f$ be the mapping torus of a periodic mapping class $f\in \mathrm{Mod}(\Sigma)$ of order $N$. Then, for any odd integer $r\geqslant 3$, with $\mathrm{gcd}(r, N)=1$, we have
$TV_r(M_f)\in {\mathbb{Z}},$
for any choice of root of unity.
\end{corollary}
\begin{proof}
As in the proof of Theorem \ref{amu-ltv}, we write
$$TV_r(M_f)=\underset{\mathbf{c}}{\sum} |\mathrm{Tr} \rho_{r,\mathbf{c}}(f)|^2$$
where $f$ is the monodromy and the sum is over $U_r$-colorings of the components of $\partial \Sigma$.
But if $r$ is coprime with $N$ this sum is a sum of integers by Theorem \ref{thm:integertrace}.
\end{proof}
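For instance, the complement of the torus knot $T_{2,3}$ fibers with periodic monodromy of order $6$, so Corollary \ref{integerTV} gives $TV_r(S^3\setminus T_{2,3})\in {\mathbb{Z}}$ for every odd $r\geqslant 5$ that is not a multiple of $3$, for any choice of root of unity.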
Corollary \ref{integerTV} implies that for mapping tori of periodic classes the Turaev-Viro invariants take integer values at infinitely many levels and this property is independent of the choice of
the root of unity. In contrast with this we have the following, where $lTV$ is defined in the Introduction.
\begin{proposition}\label{prop:integertrace} Let $f \in \mathrm{Mod}(\Sigma)$ such that $lTV(M_f)>0.$
Then, there can be at most finitely many odd integers $r$ such that $TV_r(M_f)\in {\mathbb{Z}}$.
\end{proposition}
\begin{proof} As in the proofs of Corollary \ref{integerTV} and Theorem \ref{thm:integertrace}, for any odd $r\geq 3$ and any choice of a primitive $2r$-th root of unity $\zeta_{2r}$,
the invariant $TV_r(M_f, e^{\frac{2i\pi}{r}})$ lies in $\mathbb F={\mathbb{Q}} [e^{\frac{i\pi}{r}}].$
Suppose that there are arbitrarily large odd levels $r$ such that $TV_r(M_f, e^{\frac{2i\pi}{r}})\in {\mathbb{Z}}$.
Then since ${\mathbb{Z}}$ is left fixed under the action of the Galois group of $\mathbb F$, we would have
$TV_r(M_f, e^{\frac{i\pi}{r}})=TV_r(M_f, e^{\frac{2i\pi}{r}})$, for all $r$ as above.
But this is a contradiction: Indeed, on the one hand, the assumption $lTV(M_f)>0,$
implies that the invariants $TV_r(M_f, e^{\frac{2i\pi}{r}})$ grow exponentially in $r$; that is $TV_r(M_f, e^{\frac{2i\pi}{r}}) > \exp{Br}$, for some constant $B>0$.
On the other hand, by combining results of \cite{Garoufalidis} and \cite{BePe}, the invariants $TV_r(M_f, e^{\frac{i\pi}{r}})$ grow at most polynomially in $r$; that is $TV_r(M_f, e^{\frac{i\pi}{r}})\leqslant D r^N$, for some constants
$D>0$ and $N$.
\end{proof}
As discussed earlier the Turaev-Viro invariants volume conjecture of \cite{Chen-Yang} implies that for all pseudo-Anosov mapping classes we have $lTV(M_f)>0,$ and the latter hypothesis implies the AMU Conjecture. These implications and Proposition \ref{prop:integertrace} prompt the following conjecture suggesting that the Turaev-Viro invariants of mapping tori distinguish
pseudo-Anosov mapping classes from periodic ones.
\begin{conjecture}\label{conj:integertrace} Suppose that $f\in \mathrm{Mod}(\Sigma)$ is
pseudo-Anosov. Then, there can be at most finitely many odd integers $r$ such that $TV_r(M_f)\in {\mathbb{Z}}.$
\end{conjecture}
\end{document}
\begin{document}
\title[On the depth and reflexivity of tensor products]{On the depth and reflexivity of tensor products}
\author[O. Celikbas]{Olgur Celikbas}
\address{Olgur Celikbas\\
Department of Mathematics \\
West Virginia University\\
Morgantown, WV 26506-6310, U.S.A}
\email{[email protected]}
\author[U. Le]{Uyen Le}
\address{Uyen Le\\
Department of Mathematics \\
West Virginia University\\
Morgantown, WV 26506-6310, U.S.A}
\email{[email protected]}
\author[H. Matsui]{Hiroki Matsui}
\address{Hiroki Matsui\\Graduate School of Mathematical Sciences\\ University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo 153-8914, Japan}
\email{[email protected]}
\subjclass[2010]{Primary 13D07; Secondary 13H10, 13D05, 13C12}
\keywords{Complexes, depth formula, reflexivity of tensor products, Serre's condition, vanishing of Tor}
\thanks{Matsui was partly supported by JSPS Grant-in-Aid for JSPS Fellows 19J00158.}
\maketitle{}
\begin{abstract} In this paper we study the depth of tensor products of homologically finite complexes over commutative Noetherian local rings. As an application of our main result, we determine new conditions under which nonzero tensor products of finitely generated modules over hypersurface rings can be reflexive only if both of their factors are reflexive.
A result of Asgharzadeh shows that nonzero symbolic powers of prime ideals in a local ring cannot have finite projective dimension, unless the ring in question is a domain. We make use of this fact in the appendix and consider the reflexivity of tensor products of prime ideals over hypersurface rings.
\end{abstract}
\section{Introduction}
Throughout, $R$ denotes a commutative Noetherian local ring with unique maximal ideal $\mathfrak{m}$ and residue field $k$, and all $R$-modules are assumed to be finitely generated.
The results in this paper are motivated by the following beautiful result of Huneke and Wiegand; see \cite[1.3 and 4.4]{GO}, \cite[1.1]{Onex}, \cite[2.7]{HW1}, and \cite[1.9]{HW2}.
\begin{thm} \label{srt} (Huneke and Wiegand \cite{HW1, HW2}) Let $R$ be a local hypersurface ring, and let $M$ and $N$ be nonzero finitely generated $R$-modules. Assume $N$ has rank and $M\otimes_RN$ is reflexive. Then $\Tor_i^R(M,N)=0$ for all $i\geq 1$, $M$ is reflexive, $N$ is torsion-free, $\Supp_R(N)=\Spec(R)$, and $\pd_R(M)<\infty$ or $\pd_R(N)<\infty$.
\end{thm}
It has been an open problem for quite some time whether or not the module $N$ in Theorem \ref{srt} must also be reflexive; see \cite{GO, HW1, CHW} for the details. In 2019 Celikbas and Takahashi \cite{celikbas2018second} constructed examples settling this query; one of their examples is the following:
\begin{eg} \label{mainex} (\cite[2.5]{celikbas2018second}) Let $R=\mathbb{C}[\![x,y,z,w]\!]/(xy)$, $M=R/(x)$ and let $N$ be the Auslander transpose of $R/\mathfrak{p}$, where $\mathfrak{p}=(y,z,w) \in \Spec(R)$. Then $R$ is a reduced local hypersurface ring and $\pd_R(N)<\infty$ (so that $N$ has rank). Moreover, $M$ and $M \otimes_R N$ are both reflexive, but $N$ is not reflexive.
\end{eg}
In this paper, motivated by Theorem \ref{srt} and Example \ref{mainex}, we study the depth of tensor products and determine new conditions that force both of the modules considered in Theorem \ref{srt} to be reflexive. To facilitate the discussion, let us note that, in Example \ref{mainex}, the sequence $\{y,z,w\}$ is $M$-regular, but it is not $R$-regular. We prove that, if such sequences do not exist locally in the support of the module $M$ considered in Theorem \ref{srt}, then both of the modules in question must be reflexive. More precisely, we prove:
\begin{thm} \label{mainthmintro} Let $R$ be a local hypersurface ring, and let $M$ and $N$ be nonzero $R$-modules such that $M\otimes_RN$ is reflexive. Assume the following conditions hold:
\begin{enumerate}[\rm(i)]
\item $N$ has rank (e.g., $\pd_R(N)<\infty$).
\item Each $M_{\mathfrak{p}}$-regular sequence is $R_{\mathfrak{p}}$-regular for all $\mathfrak{p} \in \Supp_R(M)$.
\end{enumerate}
Then $M$ and $N$ are both reflexive.
\end{thm}
It is worth pointing out that the condition in part (ii) of Theorem \ref{mainthmintro} holds provided that the module $M$ in question has full support; see Corollary \ref{corpropnew} (note the module $M$ considered in Example \ref{mainex} does not have full support). On the other hand, there are examples of modules -- without full support -- that satisfy the aforementioned condition of Theorem \ref{mainthmintro}; see Examples \ref{Hiroki} and \ref{exson}.
In Section 3 we establish Theorem \ref{mainthmintro} as a consequence of our main result, namely Theorem \ref{mainthm}, which concerns the depth of (derived) tensor products of homologically finite complexes that have finite complete intersection dimension over local rings. The proof of Theorem \ref{mainthmintro} relies upon a relation between the condition in part (ii) of Theorem \ref{mainthmintro} and a certain depth inequality, which does not hold for the module $M$ in Example \ref{mainex}; see Corollary \ref{cor2}, Example \ref{ref}, and Proposition \ref{propnew}.
In Section 4 we compare Theorem \ref{mainthmintro} with the main result of \cite{Onex}, in which the reflexivity of tensor products of modules under the setting of Theorem \ref{srt} is also studied. We give examples and highlight that the condition we consider in part (ii) of Theorem \ref{mainthmintro} is independent of the main tool used in \cite{Onex}; see the examples and the first paragraph in Section 4.
In the appendix we give an application of Theorem \ref{srt}: we prove that, if the tensor product of two prime ideals is reflexive over a hypersurface ring that is not a domain, then both of the primes considered must be minimal. In fact, due to the work of Asgharzadeh \cite{Mohsen}, we are able to state our result in terms of the tensor product of symbolic powers of prime ideals; see Remark \ref{surp} and Corollary \ref{appcor1}.
\section{Preliminaries}
We start by recording some definitions and preliminary results that are needed for our arguments.
\begin{chunk} \label{cx1} \textbf{Complexes (\cite{Larsbook}).} Throughout, by an $R$-complex $X$, we mean a chain complex of $R$-modules which has homological differentials $\partial^X_i\colon X_i\to X_{i-1}$, and which is homologically finite, i.e., $\mathrm{H}_i(X)=0$ for all $|i|\gg 0$ and each $\mathrm{H}_i(X)$ is a finitely generated $R$-module.
If $X$ is a (not necessarily homologically finite) $R$-complex, we set:
$$
\sup X = \sup \{i \in \mathbb{Z} \mid \mathrm{H}_i(X) \neq 0\}
\mbox{ and }
\inf X = \inf \{i \in \mathbb{Z} \mid \mathrm{H}_i(X) \neq 0\}.
$$
Note we have that $\sup X = -\infty$ if and only if $\mathrm{H}(X)=0$ if and only if $\inf X = \infty$.
\end{chunk}
\begin{chunk} \label{cx}
If $X$ and $Y$ are $R$-complexes, then it follows that $\inf(X\otimes^{\bf{L}}_RY)=\inf(X)+\inf(Y)$; see \cite[(A.4.11) and (A.4.16)]{Larsbook}. Here, $X \otimes^{\bf{L}}_R Y$ denotes the derived tensor product of $X$ and $Y$.
\end{chunk}
\begin{chunk} \textbf{Annihilator and Support (\cite[A.8.4]{Larsbook}).} \label{Support}
The \emph{annihilator} and \emph{support} of an $R$-complex $X$ is:
\begin{align*} \notag
\mathbf{A}nn_R(X) =\bigcap_{i\in \mathbb{Z}} \mathbf{A}nn_R(\Ho_{i}(X)) \text{ and }
\Supp_R(X)=\bigcup_{i\in \mathbb{Z}}\Supp_R(\Ho_{i}(X)).
\end{align*}
Note that the equality $\Supp_R(X) = \mathrm{V}(\mathbf{A}nn_R(X))$ holds.
\end{chunk}
\begin{chunk} \label{dcx1} \textbf{Depth of complexes (\cite{Larsbook, I}).}
Let $X$ be an $R$-complex, $I$ an ideal of $R$, and let $\underline{x}=x_1, \ldots, x_n$ be a generating set of $I$.
Then the {\it $I$-depth} of $X$ is defined as:
$$
\depth_R(I, X) = n - \sup(\mathrm{K}(\underline{x}) \otimes^{\bf{L}}_R X).
$$
Here $\mathrm{K}(\underline{x})$ is the Koszul complex on $\underline{x}$; see \cite[\S 2]{I} and \cite[(A.6.1)]{Larsbook}. It is
known that this definition is independent of the choice of generators of $I$ \cite[1.3]{I}. It follows that $-\infty < \depth_R(I, X) \le n-\sup X$.
We set $\depth_R (X)= \depth_R(\mathfrak{m}, X)$. Then, by our convention for complexes, $\depth_R(X)$ is finite provided that $\Ho(X)
\neq 0$; see \cite[Observation on page 549]{I}. Note also that $\depth_R(0)=\infty$.
\end{chunk}
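For illustration (with choices made only for this example): let $R=k[\![u,v]\!]$, let $M=R/(u)$ be viewed as a complex concentrated in degree $0$, and take $I=\mathfrak{m}$ with generating set $\underline{x}=u,v$. A direct computation of Koszul homology gives $\mathrm{H}_2(\mathrm{K}(u,v) \otimes^{\bf{L}}_R M)=0$ and $\mathrm{H}_1(\mathrm{K}(u,v) \otimes^{\bf{L}}_R M)\cong k\cong \mathrm{H}_0(\mathrm{K}(u,v) \otimes^{\bf{L}}_R M)$, so that $\sup(\mathrm{K}(u,v) \otimes^{\bf{L}}_R M)=1$ and hence $\depth_R(M)=2-1=1$, which agrees with the classical depth of the module $R/(u)$.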
The following facts play an important role in the proof of Theorem \ref{mainthm}.
\begin{chunk} \label{dcx} Let $X$ be an $R$-complex and $I$ an ideal of $R$. Then the following hold:
\begin{enumerate}[\rm(i)]
\item $\depth_{R_\mathfrak{p}}(X_\mathfrak{p}) \ge \depth_{R_\mathfrak{q}}(X_\mathfrak{q}) - \dim( R_{\mathfrak{q}} / \mathfrak{p} R_{\mathfrak{q}})$ for each $\mathfrak{p}, \mathfrak{q} \in \Spec(R)$ with $\mathfrak{p} \subseteq \mathfrak{q}$; see \cite[(A.6.2)]{Larsbook}.
\item $\depth_R(I, X) = \inf \{\depth_{R_\mathfrak{p}}(X_\mathfrak{p}) \mid \mathfrak{p} \in \mathrm{V}(I)\}$; see \cite[2.10]{FS}.
\item $\depth_R(I, X) = \depth_R(\sqrt{I}, X)$.
\item $\depth_R(I+\mathbf{A}nn_R(X), X)= \depth_R(I, X)$.
\end{enumerate}
Note that, as $\mathrm{V}(I)=\mathrm{V}(\sqrt{I})$ and $\mathrm{V}\big(I+\mathbf{A}nn_R(X)\big)=\mathrm{V}(I) \cap \Supp_R(X)$, part (ii) yields parts (iii) and (iv).
\end{chunk}
\begin{chunk} \textbf{Serre's condition for complexes (\cite{GO}).} \label{sc} Let $X$ be an $R$-complex and let $n\geq 0$ be an integer. Then $X$ is said to satisfy {\it Serre's condition} $(S_n)$ if the following inequality holds for each prime ideal $\mathfrak{p}$ of $R$:
$$
\depth_{R_\mathfrak{p}}(X_\mathfrak{p}) + \inf (X_\mathfrak{p}) \ge \min\{n, \dim(R_{\mathfrak{p}})\}.
$$
\end{chunk}
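When $X=M$ is a module, regarded as a complex concentrated in degree $0$, we have $\inf(M_{\mathfrak{p}})=0$ for every $\mathfrak{p}\in\Supp_R(M)$, while the displayed inequality holds automatically for $\mathfrak{p}\notin \Supp_R(M)$ because $\depth_{R_\mathfrak{p}}(M_\mathfrak{p})=\infty$ in that case. Hence, for modules, the definition specializes to the requirement that $\depth_{R_{\mathfrak{p}}}(M_{\mathfrak{p}}) \ge \min\{n, \dim(R_{\mathfrak{p}})\}$ for each $\mathfrak{p} \in \Supp_R(M)$.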
\begin{chunk} \textbf{Complete intersection dimension of complexes (\cite{AGP, Sean}).} \label{CI1} Let $X$ be an $R$-complex. A diagram of local ring maps $R \to R' \twoheadleftarrow S$ is called a \emph{quasi-deformation} provided that $R \to R'$ is flat and the kernel of the surjection $R' \twoheadleftarrow S$ is generated by a regular sequence on $S$.
The \emph{complete intersection dimension} of $X$ is defined as:
\[
\CI_R(X)=\inf\{ \pd_S( X\otimes^{\bf{L}}_R R') -\pd_S(R'): R \to R' \twoheadleftarrow S \text{ is a quasi-deformation}\}.
\]
\end{chunk}
Some facts about the complete intersection dimension are recorded next:
\begin{chunk}\label{CI} Let $X$ be an $R$-complex. Then the following hold:
\begin{enumerate}[\rm(i)]
\item $\CI_R(X) \in \{-\infty\} \cup \mathbb{Z} \cup \{\infty\}$; see \cite[3.2.1]{Sean}.
\item $\CI_R(X)=-\infty$ if and only if $\mathrm{H}(X)=0$; see \cite[3.2.2]{Sean}.
\item $\inf(X)\leq \sup(X) \leq \CI_R(X)$; see \cite[3.3]{Sean}.
\item If $\CI_R(X)<\infty$, then $\CI_R(X)=\depth(R)-\depth_R(X)$; see \cite[3.3]{Sean}.
\item If $R$ is a complete intersection, then $\CI_R(X)<\infty$; see \cite[3.5]{Sean}.
\end{enumerate}
\end{chunk}
\begin{chunk} \textbf{Derived Depth Formula.} \label{DF} Let $X$ and $Y$ be $R$-complexes. If $\CI_R(X)<\infty$ and $X \otimes^{\bf{L}}_R Y$ is bounded, i.e., $\Tor_{i}^{R}(X,Y)=0$ for all $i\gg 0$, then the equality $\depth_R(X) + \depth_R(Y) = \depth(R) + \depth_R(X \otimes^{\bf{L}}_R Y)$ holds, i.e., the pair $(X,Y)$ satisfies the \emph{derived depth formula}; see \cite[4.4]{CJ}.
\end{chunk}
We need a few arguments from the proof of \cite[3.1]{GO} to establish our main result. In the following, for the sake of completeness, we include the arguments we need, along with a few additional details that are not explicitly stated in \cite{GO}.
\begin{chunk} (\cite[see the proof of 3.1]{GO}) \label{eski} Let $X$ and $Y$ be $R$-complexes such that $\mathrm{H}(X)\neq 0 \neq \mathrm{H}(Y)$. Assume $\CI_R(X)<\infty$. Assume further $X \otimes^{\bf{L}}_R Y$ is bounded and satisfies $(S_{n})$ for some $n\geq 0$.
Let $\mathfrak{p} \in \Supp_R(Y)$. We proceed and look at $\depth_{R_{\mathfrak{p}}}(Y_{\mathfrak{p}}) +\inf(Y_\mathfrak{p})$. We pick a minimal prime ideal $\mathfrak{q}$ of $\mathfrak{p} + \mathbf{A}nn_R(X)$ and consider the following three cases separately: $\dim(R_\mathfrak{q}) \le n$, $\dim (R_\mathfrak{q}) > n$, and $\mathfrak{p} \in \Supp_R(X)$. Note that, by the choice, we have that $\mathfrak{q} \in \Supp_R(X \otimes^{\bf{L}}_R Y)$.
\emph{Case 1}. Assume $\dim(R_{\mathfrak{q}})\leq n$. Then it follows:
\begin{align} \notag{}
\depth_{R_{\mathfrak{p}}}(Y_{\mathfrak{p}}) +\inf(Y_\mathfrak{p})&=\depth_{(R_{\mathfrak{q}})_{\mathfrak{p} R_{\mathfrak{q}}}}\left((Y_{\mathfrak{q}})_{\mathfrak{p} R_{\mathfrak{q}}}\right) +\inf((Y_\mathfrak{q})_{\mathfrak{p} R_\mathfrak{q}}) &\\ \notag{}
& \geq \big[\depth_{R_{\mathfrak{q}}}(Y_{\mathfrak{q}}) -\dim(R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}})\big]+\inf(Y_\mathfrak{q}) &\\ \notag{}
& = \big[\depth_{R_{\mathfrak{q}}}(X_\mathfrak{q} \otimes^{\bf{L}}_{R_\mathfrak{q}} Y_\mathfrak{q})+\depth(R_{\mathfrak{q}})-\depth_{R_{\mathfrak{q}}}(X_{\mathfrak{q}})\big] -\dim(R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}})+\inf(Y_\mathfrak{q}) &\\ \notag{}
& = \big[\depth_{R_{\mathfrak{q}}}(X_\mathfrak{q} \otimes^{\bf{L}}_{R_\mathfrak{q}} Y_\mathfrak{q})+\CI_{R_{\mathfrak{q}}}(X_{\mathfrak{q}})\big] -\dim(R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}})+\inf(Y_\mathfrak{q}) &\\ \tag{\ref{eski}.1}
& \geq \big[\depth_{R_{\mathfrak{q}}}(X_\mathfrak{q} \otimes^{\bf{L}}_{R_\mathfrak{q}} Y_\mathfrak{q})+ \inf(X_\mathfrak{q}) \big] -\dim(R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}})+\inf(Y_\mathfrak{q}) &\\ \notag{}
& = \depth_{R_{\mathfrak{q}}}(X_\mathfrak{q} \otimes^{\bf{L}}_{R_\mathfrak{q}} Y_\mathfrak{q}) +\inf(X_\mathfrak{q} \otimes^{\bf{L}}_{R_\mathfrak{q}} Y_\mathfrak{q}) -\dim(R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}}) &\\ \notag{} & \geq \min\{n, \dim(R_{\mathfrak{q}})\} - \dim(R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}}) \notag{} &\\ \notag{} &\geq \dim(R_{\mathfrak{q}})- \dim(R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}}) &\\ \notag{} &\geq \dim(R_{\mathfrak{p}}) &\\ \notag{}
& \geq \min\{n, \dim(R_{\mathfrak{p}})\}. \notag{}
\end{align}
Here, in (\ref{eski}.1), the first inequality follows from \ref{dcx}(i) and the definition of inf, the second inequality follows from \ref{CI}(iii), and the third inequality is due to the fact that $X \otimes^{\bf{L}}_R Y$ satisfies $(S_{n})$; note that the other inequalities are standard. For the equalities in (\ref{eski}.1), the first one is due to localization, the second one follows from \ref{DF}, the third one is due to \ref{CI}(iv), and the fourth one can be obtained by \ref{cx}.
\emph{Case 2}. Assume $\dim(R_{\mathfrak{q}})> n$, and set $t=\depth_{R_{\mathfrak{q}}}(X_\mathfrak{q} \otimes^{\bf{L}}_{R_\mathfrak{q}} Y_\mathfrak{q})+\inf(X_\mathfrak{q} \otimes^{\bf{L}}_{R_\mathfrak{q}} Y_\mathfrak{q})$. Then it follows:
\begin{align} \notag{}
\depth_{R_{\mathfrak{p}}}(Y_{\mathfrak{p}}) + \inf(Y_\mathfrak{p})& \geq \depth_{R_{\mathfrak{q}}}\big(Y_{\mathfrak{q}}) + \inf(Y_\mathfrak{q}) -\dim(R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}}\big) &\\ \notag{}
& = \depth(R_{\mathfrak{q}})-\big(\depth_{R_{\mathfrak{q}}}(X_{\mathfrak{q}})+\inf(X_\mathfrak{q})\big) + t-\dim(R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}}) &\\ \tag{\ref{eski}.2}
&\geq \depth(R_{\mathfrak{q}})-\big(\depth_{R_{\mathfrak{q}}}(X_{\mathfrak{q}}) +\inf(X_\mathfrak{q})\big)+ n - \dim(R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}}) &\\\notag{}
&\geq \depth(R_{\mathfrak{q}})-(\depth_{R_{\mathfrak{q}}}(X_{\mathfrak{q}}) +\inf(X_\mathfrak{q}))+ n + \dim(R_{\mathfrak{p}}) -\dim(R_{\mathfrak{q}}).
\end{align}
Here, in (\ref{eski}.2), the first inequality is due to \ref{dcx}(i) and the definition of inf, the second inequality is due to the fact that $X \otimes^{\bf{L}}_R Y$ satisfies $(S_{n})$ and $\dim(R_{\mathfrak{q}})> n$, and the third inequality is standard. Moreover, the equality in (\ref{eski}.2) follows from \ref{cx} and \ref{DF}.
\emph{Case 3}. Assume $\mathfrak{p} \in \Supp_R(X)$. Then one has $\mathfrak{q} = \mathfrak{p}$. Set $\dim(R_{\mathfrak{p}})=l$. If $l\leq n$, then it follows by Case 1 that $\depth_{R_{\mathfrak{p}}}(Y_{\mathfrak{p}}) +\inf(Y_\mathfrak{p})\geq \min\{n, \dim(R_{\mathfrak{p}})\}$. Next assume $l> n$. Then it follows that:
\begin{align} \notag{}
\depth_{R_{\mathfrak{p}}}(Y_{\mathfrak{p}}) + \inf(Y_\mathfrak{p}) &\geq \depth(R_{\mathfrak{p}})-\big(\depth_{R_{\mathfrak{p}}}(X_{\mathfrak{p}}) +\inf(X_\mathfrak{p})\big)+ n + \dim(R_{\mathfrak{p}}) -\dim(R_{\mathfrak{p}}) \notag{}\\
&= \CI_{R_{\mathfrak{p}}}(X_{\mathfrak{p}}) - \inf(X_\mathfrak{p}) + n \tag{\ref{eski}.3} \\
&\geq n \geq \min\{n, \dim(R_\mathfrak{p})\}. \notag{}
\end{align}
Here, in (\ref{eski}.3), the first inequality follows by Case 2, the second inequality is due to \ref{CI}(iii), and the equality is due to \ref{CI}(iv). So, if $\mathfrak{p} \in \Supp_R(X)$, we have that $\depth_{R_\mathfrak{p}}(Y_\mathfrak{p}) + \inf Y_\mathfrak{p} \geq \min \{n, \dim(R_\mathfrak{p})\}$.
\end{chunk}
\section{Proof of the main theorem and corollaries}
We are now ready to prove the main result in this paper, namely Theorem \ref{mainthm}. The proof of the theorem is motivated by the results given in \ref{eski}, but the gist of our argument is different: the finishing touch of the proof of Theorem \ref{mainthm} relies upon an application of the properties stated in \ref{dcx}.
We set, for $\mathfrak{q}\in \Spec(R)$, that $\U(\mathfrak{q})=\{\mathfrak{p} \in \Spec(R): \mathfrak{p} \subseteq \mathfrak{q} \text{ and } \height_R(\mathfrak{p})>0\}$.
\begin{thm} \label{mainthm} Let $R$ be a local ring, and let $X$ and $Y$ be $R$-complexes such that $\mathrm{H}(X)\neq 0 \neq \mathrm{H}(Y)$. Assume $m$ and $n$ are nonnegative integers and the following conditions hold:
\begin{enumerate}[\rm(i)]
\item $\CI_R(X)<\infty$.
\item $X \otimes^{\bf{L}}_R Y$ is bounded, i.e., $\Tor_{i}^{R}(X,Y)=0$ for all $i\gg 0$.
\item $X \otimes^{\bf{L}}_R Y$ satisfies Serre's condition $(S_{n})$.
\item If $\mathfrak{q} \in \Supp_R(X \otimes^{\bf{L}}_R Y)$, then $\depth_{R_{\mathfrak{q}}}(\mathfrak{p} R_\mathfrak{q}, X_\mathfrak{q}) +\inf(X_\mathfrak{q})\le \depth_{R_{\mathfrak{q}}}(\mathfrak{p} R_\mathfrak{q}, R_\mathfrak{q})+m$ for all $\mathfrak{p} \in \U(\mathfrak{q})$.
\end{enumerate}
Then $Y$ satisfies Serre's condition $(S_{n-m})$.
\end{thm}
\begin{proof} Let $\mathfrak{p} \in \Supp_R(Y)$. We want to show that the following inequality holds:
\begin{align} \tag{\ref{mainthm}.1}
\depth_{R_\mathfrak{p}}(Y_\mathfrak{p}) + \inf (Y_\mathfrak{p}) \ge \min \{n -m, \dim(R_{\mathfrak{p}})\}.
\end{align}
If $\dim(R_{\mathfrak{p}})=0$, then (\ref{mainthm}.1) holds trivially. Moreover, if $\mathfrak{p} \in \Supp_R(X)$, then the inequality (\ref{mainthm}.1) holds by Case 3 of \ref{eski}. Hence we assume $\dim(R_{\mathfrak{p}})>0$, $\mathfrak{p} \notin \Supp_R(X)$, and pick a prime ideal $\mathfrak{q}$ of $R$ which is minimal over $\mathfrak{p} +\mathbf{A}nn_R(X)$.
Then it follows that $\mathfrak{q} \in \Supp_R(X \otimes^{\bf{L}}_R Y)$.
If $\dim(R_{\mathfrak{q}})\leq n$, then (\ref{mainthm}.1) holds by Case 1 of \ref{eski}. Hence, we further assume that $\dim(R_{\mathfrak{q}})> n$. Therefore, Case 2 of \ref{eski} yields:
\begin{align} \tag{\ref{mainthm}.2}
\begin{aligned}
\depth_{R_{\mathfrak{p}}}(Y_{\mathfrak{p}}) + \inf(Y_\mathfrak{p}) & \geq \depth(R_{\mathfrak{q}})-(\depth_{R_{\mathfrak{q}}}(X_{\mathfrak{q}}) +\inf(X_\mathfrak{q}))+ n + \dim(R_{\mathfrak{p}}) -\dim(R_{\mathfrak{q}})&\\ \notag{}
&=n + \dim(R_{\mathfrak{p}})-(\depth_{R_{\mathfrak{q}}}(X_{\mathfrak{q}}) +\inf(X_\mathfrak{q})).
\end{aligned}
\end{align}
Now we suppose $\depth_{R_\mathfrak{p}} (Y_\mathfrak{p}) +\inf(Y_\mathfrak{p}) < \min\{n - m, \dim (R_\mathfrak{p})\}$ and look for a contradiction. Note that we have $n-m > \depth_{R_\mathfrak{p}} (Y_\mathfrak{p}) +\inf(Y_\mathfrak{p})$ and so (\ref{mainthm}.2) shows:
\begin{align} \tag{\ref{mainthm}.3}
\dim(R_{\mathfrak{p}})<\depth_{R_{\mathfrak{q}}}(X_{\mathfrak{q}}) +\inf(X_\mathfrak{q})-m.
\end{align}
Note that the following inequalities hold:
\[\begin{array}{rl}\tag{\ref{mainthm}.4}
\depth_{ R_{\mathfrak{q}} } (\mathfrak{p} R_{\mathfrak{q}}, X_{\mathfrak{q}} ) +\inf(X_\mathfrak{q}) -m & \le \depth_{ R_{\mathfrak{q}} } (\mathfrak{p} R_{\mathfrak{q}}, R_{\mathfrak{q}} ) \\ & \le \depth((R_{\mathfrak{q}})_{\mathfrak{p} R_{\mathfrak{q}}}) \\ & = \depth(R_{\mathfrak{p}}) \\ & \le \dim(R_{\mathfrak{p}}) \\ & < \depth_{R_{\mathfrak{q}}}(X_{\mathfrak{q}}) +\inf(X_\mathfrak{q}) -m.
\end{array}\]
In (\ref{mainthm}.4), the first inequality is due to the hypothesis (iv) since $\mathfrak{q} \in \Supp_R(X \otimes^{\bf{L}}_R Y)$ and $\mathfrak{p} \in \U(\mathfrak{q})$. Moreover, the second inequality of (\ref{mainthm}.4) follows from \ref{dcx}(ii), the third one is by \cite[1.2.12]{BH}, and the fourth one is due to (\ref{mainthm}.3). Hence (\ref{mainthm}.4) gives:
\begin{equation}\tag{\ref{mainthm}.5}
\depth_{ R_{\mathfrak{q}} } (\mathfrak{p} R_{\mathfrak{q}}, X_{\mathfrak{q}} )<\depth_{R_{\mathfrak{q}}}(X_{\mathfrak{q}}).
\end{equation}
On the other hand, we have:
\[\begin{array}{rl}\tag{\ref{mainthm}.6}
\depth_{R_{\mathfrak{q}}}(X_{\mathfrak{q}}) & = \depth_{R_{\mathfrak{q}}}(\mathfrak{q} R_{\mathfrak{q}}, X_{\mathfrak{q}})\\ & =
\depth_{R_{\mathfrak{q}}}\Big(\sqrt{\mathfrak{p} R_{\mathfrak{q}}+\mathbf{A}nn_{R_{\mathfrak{q}}}(X_{\mathfrak{q}})}, X_{\mathfrak{q}}\Big)\\
&=\depth_{R_{\mathfrak{q}}}(\mathfrak{p} R_{\mathfrak{q}}+\mathbf{A}nn_{R_{\mathfrak{q}}}(X_{\mathfrak{q}}), X_{\mathfrak{q}}) \\
&=\depth_{R_{\mathfrak{q}}}(\mathfrak{p} R_{\mathfrak{q}}, X_{\mathfrak{q}})\\
\end{array}\]
In (\ref{mainthm}.6), the first equality follows from \ref{dcx1}, the second one is due to the fact that $\mathfrak{p} R_{\mathfrak{q}}+\mathbf{A}nn_{R_{\mathfrak{q}}}(X_{\mathfrak{q}})$ is $\mathfrak{q} R_{\mathfrak{q}}$-primary, and the last two equalities follow from \ref{dcx}(iii) and \ref{dcx}(iv), respectively.
Consequently, in view of (\ref{mainthm}.5) and (\ref{mainthm}.6), we obtain a contradiction. This contradiction implies that the inequality (\ref{mainthm}.1) holds, and hence completes the proof.
\end{proof}
We proceed by recording some consequences of Theorem \ref{mainthm}.
\begin{cor} \label{cor1} Let $R$ be a local ring, $M$ and $N$ be finitely generated $R$-modules, and let $m$ and $n$ be nonnegative integers.
Assume the following hold:
\begin{enumerate}[\rm(i)]
\item $\CI_R(M)<\infty$.
\item $\Tor_i^R(M,N)=0$ for all $i\geq 1$.
\item $M\otimes_RN$ satisfies Serre's condition $(S_{n})$.
\item If $\mathfrak{q} \in \Supp_R(M \otimes_R N)$, then $\depth_{R_{\mathfrak{q}}}(\mathfrak{p} R_\mathfrak{q}, M_\mathfrak{q}) \le \depth_{R_{\mathfrak{q}}}(\mathfrak{p} R_\mathfrak{q}, R_\mathfrak{q})+m$ for all $\mathfrak{p} \in \U(\mathfrak{q})$.
\end{enumerate}
Then $N$ satisfies Serre's condition $(S_{n-m})$.
\end{cor}
\begin{proof} Note that $M \otimes^{\bf{L}}_R N \cong M \otimes_R N$ if and only if $\Tor_i^R(M,N)=0$ for all $i\geq 1$. Therefore, it follows by Theorem \ref{mainthm} that $N$ satisfies Serre's condition $(S_{n-m})$.
\end{proof}
\begin{rmk} In \cite{Omo} one can find further results concerning the depth inequality stated in part (iv) of Corollary \ref{cor1}; see also Proposition \ref{propnew}. In fact, when $m=1$, it is proved in \cite{Omo} that the aforementioned inequality always holds over hypersurface rings. More precisely, if $R$ is a hypersurface ring, $I$ an ideal of $R$, and $M$ is a nonzero torsion-free $R$-module which is generically free, then it follows from a result in \cite{Omo} that $\depth_R(I, M) \le \depth_R(I, R) +1$.\pushQED{\qed}
\qedhere
\popQED
\end{rmk}
Modules over hypersurface rings are reflexive if and only if they satisfy Serre's condition $(S_2)$; see, for example, \cite[2.5]{GO}. This fact is used in the next corollary of Theorem \ref{mainthm}.
\begin{cor} \label{cor2} Let $R$ be a local hypersurface ring, and let $M$ and $N$ be nonzero $R$-modules. Assume the following conditions hold:
\begin{enumerate}[\rm(i)]
\item $N$ has rank.
\item If $\mathfrak{q} \in \Supp_R(M)$, then $\depth_{R_{\mathfrak{q}}}(\mathfrak{p} R_\mathfrak{q}, M_\mathfrak{q}) \le \height_R(\mathfrak{p})$ for all $\mathfrak{p} \in \U(\mathfrak{q})$.
\end{enumerate}
If $M\otimes_RN$ is reflexive, then $M$ and $N$ are both reflexive.
\end{cor}
\begin{proof} Assume $M\otimes_RN$ is reflexive. Then it follows from Theorem \ref{srt} that $\Tor_i^R(M,N)=0$ for all $i\geq 1$ and $M$ is reflexive. Moreover, since $R$ is Cohen-Macaulay, the equality $\depth_{R_{\mathfrak{q}}}(\mathfrak{p} R_\mathfrak{q}, R_\mathfrak{q})=\height_R(\mathfrak{p})$ holds for each $\mathfrak{p}, \mathfrak{q} \in \Spec(R)$ with $\mathfrak{p} \subseteq \mathfrak{q}$. Note also $\CI_R(M)<\infty$ as $R$ is a hypersurface; see \ref{CI}(v). Therefore, setting $m=0$ and $n=2$, we conclude from Corollary \ref{cor1} that $N$ is reflexive.
\end{proof}
Recall that the module $N$ in Example \ref{mainex} is not reflexive. Hence, it is worth pointing out that the depth inequality in part (ii) of Corollary \ref{cor2} does not hold for the module $M$ in the example.
\begin{eg} \label{ref} Let $R$, $M$ and $N$ be as in Example \ref{mainex}, i.e., $R=\mathbb{C}[\![x,y,z,w]\!]/(xy)$, $M=R/(x)$ and let $N$ be the Auslander transpose of $R/\mathfrak{p}$, where $\mathfrak{p}=(y,z,w) \in \Spec(R)$. Let $\mathfrak{q}=\mathfrak{m}$. Then it follows that $\depth_{R_{\mathfrak{q}}}(\mathfrak{p} R_\mathfrak{q}, M_\mathfrak{q})=\depth_R(\mathfrak{p}, M)=3 > \height_R(\mathfrak{p})=2$.\pushQED{\qed}
\qedhere
\popQED
\end{eg}
Now our aim is to establish Theorem \ref{mainthmintro}, advertised in the introduction. First we prove the following general result which seems to be of independent interest.
\begin{prop} \label{propnew} Let $R$ be a local ring and let $M$ be a nonzero $R$-module. Then the following conditions are equivalent:
\begin{enumerate}[\rm(i)]
\item Each $M$-regular sequence is $R$-regular.
\item $\depth_{R}(I, M) \leq \depth_R(I,R)$ for each ideal $I$ of $R$.
\item $\depth_{R}(\mathfrak{p}, M) \leq \depth_R(\mathfrak{p},R)$ for each $\mathfrak{p} \in \Spec(R)$.
\end{enumerate}
\end{prop}
\begin{proof} It follows by definition that $\rm{(i)} \Longrightarrow \rm{(ii)} \Longrightarrow \rm{(iii)}$. So we proceed and show $\rm{(iii)} \Longrightarrow \rm{(ii)} \Longrightarrow \rm{(i)}$.
$\rm{(iii)} \Longrightarrow \rm{(ii)}$: Write $\sqrt{I}=\mathfrak{p}_1 \cap \ldots \cap \mathfrak{p}_n$ for some prime ideals $\mathfrak{p}_i$. Then it follows that
\begin{align}\notag{}
\depth_R(I,M) \notag{} & =\depth_R(\sqrt{I}, M) \\& =\inf\{ \depth_R(\mathfrak{p}_1, M), \ldots, \depth_R(\mathfrak{p}_n, M) \} \notag{} \\ & \leq \inf\{ \depth_R(\mathfrak{p}_1, R), \ldots, \depth_R(\mathfrak{p}_n, R) \} \tag{\ref{propnew}.1} \\& = \depth_R(\sqrt{I}, R) \notag{} \\& = \notag{} \depth_R(I,R).
\end{align}
Here in (\ref{propnew}.1), the first and the fourth equalities are due to \cite[1.2.10(b)]{BH} (see also \ref{dcx}(ii)), the second and third equalities are due to \cite[1.2.10(c)]{BH}, and the inequality follows by assumption.
$\rm{(ii)} \Longrightarrow \rm{(i)}$: Let $\underline{x}=x_1, \ldots, x_n \subseteq \mathfrak{m}$ be an $M$-regular sequence. We will show that this sequence is $R$-regular by induction on $n$.
If $n=1$, then we have $1\leq \depth_R((x_1), M) \leq \depth_R((x_1), R)$, which implies that $x_1$ is a non zero-divisor on $R$. Hence we assume $n\geq 2$. Then, by the induction hypothesis, it follows that $\underline{x}'=x_1, \ldots, x_{n-1}$ is $R$-regular. Thus, we have:
\begin{align}\notag{}
\notag{} \depth_R((x_n), M/ (\underline{x}') M)+(n-1) \notag{} & = \depth_R((\underline{x}),M) \notag{} \\ & \leq \depth_R((\underline{x}),R) \tag{\ref{propnew}.2} \\ \notag{} & = \depth_R((x_n), R/ (\underline{x}'))+(n-1)
\end{align}
Here in (\ref{propnew}.2), the equalities are due to \cite[1.2.10(d)]{BH}, while the inequality follows by assumption. Therefore, we have
\begin{align*}
1\leq \depth_R((x_n), M/ (\underline{x}') M) \leq \depth_R((x_n), R/ (\underline{x}')R),
\end{align*}
which implies that $x_n$ is a non zero-divisor on $R/ (\underline{x}') R$, as required.
\end{proof}
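For a simple illustration of how the equivalent conditions of Proposition \ref{propnew} can fail, let $R=k[\![x,y]\!]/(xy)$ and $M=R/(x)\cong k[\![y]\!]$. Then $y$ is a non zero-divisor on $M$, whereas every element of the ideal $(y)$ annihilates $x$ and hence is a zero-divisor on $R$; consequently $\depth_R((y),M)=1>0=\depth_R((y),R)$, so that none of the conditions (i)--(iii) holds for $M$.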
\begin{rmk} The equivalent conditions in Proposition \ref{propnew} hold if and only if, whenever $\underline{x}=x_1, \ldots, x_n $ is a sequence of elements in $\mathfrak{m}$ with $\Tor_1^R(M, R/\underline{x}R)=0$, it follows that $\Tor_2^R(M, R/\underline{x}R)=0$; see \cite[2.2]{Kamal}.\pushQED{\qed} \qedhere \popQED
\end{rmk}
The following result is an immediate consequence of Proposition \ref{propnew}:
\begin{cor}\label{corpropnew} Let $R$ be a local ring and let $M$ be a nonzero $R$-module such that $\Supp_R(M)=\Spec(R)$. If $\depth_{R_{\mathfrak{p}}}(M_{\mathfrak{p}}) \leq \depth(R_{\mathfrak{p}})$ for all $\mathfrak{p} \in \Spec(R)$ (e.g., $R$ is Cohen-Macaulay, or $\CI_R(M)<\infty$), then each $M_{\mathfrak{p}}$-regular sequence is $R_{\mathfrak{p}}$-regular for all $\mathfrak{p} \in \Spec(R)$.
\end{cor}
\begin{proof} For each ideal $I$ of $R$ we have $\depth_R(I, M) = \inf \{\depth_{R_\mathfrak{p}}(M_\mathfrak{p}) \mid \mathfrak{p} \in \mathrm{V}(I)\} \leq \inf \{\depth(R_\mathfrak{p}) \mid \mathfrak{p} \in \mathrm{V}(I)\} = \depth_R(I, R)$; see \ref{dcx}(ii). Since the hypothesis is inherited by the localization $M_{\mathfrak{p}}$ over $R_{\mathfrak{p}}$ for each $\mathfrak{p} \in \Spec(R)$, the same computation applies over $R_{\mathfrak{p}}$, and hence the result follows from Proposition \ref{propnew}.
\end{proof}
We are now ready to prove Theorem \ref{mainthmintro}. Let us first note a fact proved in \cite[2.6]{HW1}: if the module $M$ in Theorem \ref{mainthmintro} has full support, then $N$ also has full support, so that a quick application of the depth formula shows that both $M$ and $N$ satisfy $(S_2)$, i.e., both $M$ and $N$ are reflexive; see also \ref{DF}, and \cite[1.3]{GO} for the details. Therefore, the gist of Theorem \ref{mainthmintro} is the case where $\Supp_R(M)\neq \Spec(R)$.
\begin{proof}[Proof of Theorem \ref{mainthmintro}] Let $\mathfrak{q} \in \Supp_R(M)$. Then, since each $M_{\mathfrak{q}}$-regular sequence is $R_{\mathfrak{q}}$-regular by assumption, it follows from Proposition \ref{propnew} that $\depth_{R_{\mathfrak{q}}}(\mathfrak{p} R_\mathfrak{q}, M_\mathfrak{q}) \le \height_R(\mathfrak{p})$ for all $\mathfrak{p} \in \Spec(R)$ with $\mathfrak{p} \subseteq \mathfrak{q}$. Therefore, we have that $\depth_{R_{\mathfrak{q}}}(\mathfrak{p} R_\mathfrak{q}, M_\mathfrak{q}) \le \height_R(\mathfrak{p})$ for all $\mathfrak{p} \in \U(\mathfrak{q})$. Consequently, Corollary \ref{cor2} implies that both $M$ and $N$ are reflexive.
\end{proof}
It is interesting to note that Theorem \ref{mainthmintro} (and also Theorem \ref{srt}) can fail over rings that are not hypersurfaces. For example, if $R=k[\![t^3,t^4, t^5]\!]$ and $N=(t^3, t^4)$ is the canonical module of $R$, then Huneke and Wiegand \cite[4.8]{HW1} construct an $R$-module $M$ such that $M\otimes_RN$ is reflexive, but neither $M$ nor $N$ is reflexive; note that the hypotheses in parts (i) and (ii) of Theorem \ref{mainthmintro} hold for these modules $M$ and $N$. In \cite[2.1]{Con} one can find a similar example over a Gorenstein ring that is not a hypersurface.
\section{Further remarks on Theorem \ref{mainthmintro}}
Recall that an $R$-module $M$ is called \emph{Tor-rigid} provided that the following condition holds: for each $R$-module $N$ satisfying $\Tor_1^R(M,N)=0$, one has that $\Tor_2^R(M,N)=0$. Examples of Tor-rigid modules are abundant in the literature. For example, if $R$ is a hypersurface that is a quotient of an unramified regular local ring, and $M$ is an $R$-module such that $\length_R(M)<\infty$ or $\pd_R(M)<\infty$, then $M$ is Tor-rigid; see \cite[2.4]{HW1} and \cite[Theorem 3]{Li}, respectively. The Tor-rigidity condition can impose certain restrictions on the ring in question. For example, Auslander \cite[4.3]{Au} proved that, if $M$ is a nonzero Tor-rigid module over a local ring $R$, then each $M$-regular sequence is an $R$-regular sequence. Note that this fact implies that the depth of a nonzero Tor-rigid module is always bounded by the depth of the ring considered.
In 2019 Celikbas, Matsui and Sadeghi \cite{Onex} examined the conclusion of Theorem \ref{srt} and studied the reflexivity of tensor products of modules over local hypersurface rings in terms of Tor-rigidity. Their main result establishes the same conclusion as that of Theorem \ref{mainthmintro} for Tor-rigid modules. More precisely, the main result of \cite{Onex} shows that, if $M$ and $N$ are nonzero modules over a local hypersurface ring $R$ such that $M\otimes_RN$ is reflexive, $M$ is Tor-rigid, and $N$ has rank, then $M$ and $N$ are both reflexive; see Theorem \ref{srt} and \cite[3.1]{Onex}. Therefore, we next give examples and highlight that the Tor-rigidity condition and the condition we study in this paper, namely the condition in part (ii) of Theorem \ref{mainthmintro}, are independent of each other in general.
\begin{eg} \label{Hiroki} Let $R=k[\![x,y,z]\!]/(xy)$ and let $M=R/(x^2)$. Then it follows that $\Supp_R(M)\neq \Spec(R)$, $M$ is not Tor-rigid, and each $M_{\mathfrak{p}}$-regular sequence is $R_{\mathfrak{p}}$-regular for all $\mathfrak{p}\in \Supp_R(M)$. We justify these properties as follows:
(i) $\Supp_R(M)\neq \Spec(R)$: this is clear since $(y) \notin \Supp_R(M)$. In fact, since $\{(x), (x,y)\}$ is the set of all associated primes of $M$, it follows that $\Supp_R(M)=\mathrm{V}\big((x)\big)\cup \mathrm{V}\big((x,y)\big)$.
(ii) $M$ is not Tor-rigid: setting $N=R/(y)$, one can check that $\Tor_1^R(M,N)= 0\neq \Tor_2^R(M,N)$.
(iii) Each $M_{\mathfrak{p}}$-regular sequence is $R_{\mathfrak{p}}$-regular for all $\mathfrak{p}\in \Supp_R(M)$: note that, in view of Proposition \ref{propnew} and the fact that $R$ is Cohen-Macaulay, it suffices to prove the following claim:
\[ \tag{\ref{Hiroki}.1}
\text{If } I \text{ is an ideal of } R \text{ such that } I \subseteq \mathfrak{p}\in \Supp_R(M), \text{ then } \depth_{R_{\mathfrak{p}}}(IR_{\mathfrak{p}}, M_{\mathfrak{p}}) \leq \height_{R_{\mathfrak{p}}}(IR_{\mathfrak{p}}).
\]
Let $\mathfrak{p} \in \Supp_R(M)$ and let $I$ be an ideal of $R$ such that $I \subseteq \mathfrak{p}$. We look at the height of $\mathfrak{p}$, i.e., $\dim(R_{\mathfrak{p}})$.
\emph{Case 1}: Assume $\height_R(\mathfrak{p})=0$. In this case the claim in (\ref{Hiroki}.1) holds as $\height_{R_{\mathfrak{p}}}(IR_{\mathfrak{p}}) \leq \dim(R_{\mathfrak{p}})$ and $ \depth_{R_{\mathfrak{p}}}(IR_{\mathfrak{p}}, M_{\mathfrak{p}}) \leq \depth_{R_{\mathfrak{p}}}(M_{\mathfrak{p}}) \leq \dim(R_{\mathfrak{p}})$.
\emph{Case 2}: Assume $\height_R(\mathfrak{p})=1$. We first consider the case where $\mathfrak{p}=(x,y)$. As $\mathfrak{p}$ is an associated prime of $M$, it follows that $\depth_{R_{\mathfrak{p}}}(IR_{\mathfrak{p}}, M_{\mathfrak{p}}) \leq \depth_{R_{\mathfrak{p}}}(M_{\mathfrak{p}})=0$, and so the claim in (\ref{Hiroki}.1) holds.
Next, we consider the case where $\mathfrak{p}\neq (x,y)$. Note, as $\mathfrak{p} \in \Supp_R(M)$, we have that $(x) \subseteq \mathfrak{p}$. Moreover, in case $I \subseteq \mathfrak{r} \subsetneqq \mathfrak{p}$ for some $\mathfrak{r} \in \Spec(R)$, one can observe that $I\subseteq (x)$. As $\depth_{R_{(x)}}(M_{(x)})=0$, the aforementioned observation and \ref{dcx}(ii) yield:
\[ \tag{\ref{Hiroki}.2}
\depth_{R_{\mathfrak{p}}}(IR_{\mathfrak{p}}, M_{\mathfrak{p}}) = \inf \{\depth_{R_\mathfrak{q}}(M_\mathfrak{q}) \mid I \subseteq \mathfrak{q} \subseteq \mathfrak{p}\} =
\begin{cases}
\depth_{R_{\mathfrak{p}}}(M_{\mathfrak{p}}) & \text{ if } I \nsubseteq (x),\\
0 & \text{ if } I \subseteq (x).\\
\end{cases}
\]
As the equalities in (\ref{Hiroki}.2) also hold when $M$ is replaced with $R$, the claim in (\ref{Hiroki}.1) holds.
\emph{Case 3}: Assume $\height_R(\mathfrak{p})=2$, i.e., $\mathfrak{p}=\mathfrak{m}$. As $\depth_{R}(I, M) \leq \depth_{R}(M)=1$, to establish the claim in (\ref{Hiroki}.1), it suffices to assume $\height_R(I)=0$ and show that $\depth_{R}(I, M)=0$. We observe, as each element of $I$ is a zero-divisor on $R$, that $I \subseteq (x)$ or $I \subseteq (y)$. Thus, we have $I \subseteq (x,y)$ and hence:
\[
\depth_{R}(I, M) = \inf \{\depth_{R_\mathfrak{r}}(M_\mathfrak{r}) \mid \mathfrak{r} \in \mathrm{V}(I)\} \leq\depth_{R_{(x,y)}}(M_{(x,y)})=0.
\]
This completes the proof of Case 3.\pushQED{\qed}
\qedhere
\popQED
\end{eg}
\begin{eg}\label{exson} Let $R=k[\![x,y,z,u]\!]/(xy)$ and let $M=N \oplus T$, where $N=R/(x)$ and $T$ is an $R$-module such that $\dim_R(T)=0$ and $\pd_R(T)=\infty$ (e.g., $T=k$). Then it follows that $\Supp_R(M)\neq \Spec(R)$, $M$ is Tor-rigid, and there is an $M_{\mathfrak{p}}$-regular sequence which is not $R_{\mathfrak{p}}$-regular for some $\mathfrak{p}\in \Supp_R(M)$. We justify these properties as follows:
(i) $\Supp_R(M)\neq \Spec(R)$: this is clear since $(y) \notin \Supp_R(M)$.
(ii) $M$ is Tor-rigid: to see this assume $\Tor_1^R(M, X)=0$ for some $R$-module $X$. Then $\Tor_1^R(T, X)=0$, and since $T$ is Tor-rigid \cite[2.4]{HW1}, we have that $\Tor_i^R(T, X)=0$ for all $i\geq 1$. This implies $X$ is free and hence $\Tor_i^R(M, X)=0$ for all $i\geq 1$; see \cite[2.5]{HW1} (or see \ref{CI}(v) and \ref{DF}). Therefore, $M$ is Tor-rigid (note also that $N$ is not Tor-rigid since $\Tor_1^{R}(N, R/yR)=0 \neq R/(x,y) \cong \Tor_2^{R}(N, R/yR)$).
(iii) There is an $M_{\mathfrak{p}}$-regular sequence which is not $R_{\mathfrak{p}}$-regular for some $\mathfrak{p}\in \Supp_R(M)$: for this part, let $\mathfrak{p}=(x,y)$. Then it follows that $\mathfrak{p} \in \Supp_R(M)$ and $M_{\mathfrak{p}} \cong N_{\mathfrak{p}}$. Hence, $y$ is a non zero-divisor on $M_{\mathfrak{p}}$. On the other hand, as $x\neq 0$, $y\neq 0$ and $xy=0$ in $R_{\mathfrak{p}}$, we see that $y$ is a zero-divisor on $R_{\mathfrak{p}}$. Thus, $\{y\}$ is an $M_{\mathfrak{p}}$-regular sequence which is not $R_{\mathfrak{p}}$-regular.
\pushQED{\qed}
\qedhere
\popQED
\end{eg}
The modules considered in Examples \ref{Hiroki} and \ref{exson} do not have full support. Next, in Example \ref{exson2}, we look at a module $M$ that has full support and observe that, even for such a module, the Tor-rigidity condition is distinct from the condition stated in part (ii) of Theorem \ref{mainthmintro}.
\begin{eg}\label{exson2} Let $R=k[\![x,y,z,u]\!]/(xu-yz)$ and let $M=(x,y)=\Omega \big(R/(x,y)\big)$. Then each $M_{\mathfrak{p}}$-regular sequence is $R_{\mathfrak{p}}$-regular because $\Supp_R(M)=\Spec(R)$; see Corollary \ref{corpropnew}. Furthermore, it follows that $\Tor^R_1(M,M)=0\neq k=\Tor^R_2(M,M)$, and hence $M$ is not Tor-rigid.\pushQED{\qed}
\qedhere
\popQED
\end{eg}
One can also construct examples similar to Example \ref{exson2} over rings that are not hypersurfaces.
\begin{eg} \label{exson0} Let $R$ be a Cohen-Macaulay local ring with canonical module $\omega$ such that $\omega \ncong R$; for example, $R=k[\![t^3,t^4, t^5]\!]$ and $\omega=(t^3, t^4)$. Then $\omega$ is not Tor-rigid; see, for example, \cite[4.13(i)]{HDT}. On the other hand, each $\omega_{\mathfrak{p}}$-regular sequence is $R_{\mathfrak{p}}$-regular for all $\mathfrak{p} \in \Spec(R)$; see Corollary \ref{corpropnew}.\pushQED{\qed}
\qedhere
\popQED
\end{eg}
It is well-known that the Tor-rigidity property does not localize, in general. For example, if $M$ is the module considered in Example \ref{exson}, then $M$ is Tor-rigid over $R$, but $M_{\mathfrak{p}}$ is not Tor-rigid over $R_{\mathfrak{p}}$. Furthermore, the same example also shows that the condition we consider in part (ii) of Theorem \ref{mainthmintro} does not localize in general either. It seems worth summarizing these observations as a separate remark; see also Proposition \ref{propnew}.
\begin{rmk} \label{sonolsun} Let $R$ be a local ring and let $M$ be a nonzero $R$-module. Consider the following conditions.
\begin{enumerate}[\rm(i)]
\item $M$ is Tor-rigid over $R$.
\item $M_{\mathfrak{p}}$ is Tor-rigid over $R_{\mathfrak{p}}$ for all $\mathfrak{p} \in \Supp_R(M)$.
\item Each $M$-regular sequence is $R$-regular.
\item Each $M_{\mathfrak{p}}$-regular sequence is $R_{\mathfrak{p}}$-regular for all $\mathfrak{p} \in \Supp_R(M)$.
\end{enumerate}
Then we have:
\vspace*{-9ex}
$$
{\small
\xymatrix@C=3em@R=3em{
& \\
\text{(i)} \ar@{=>}[r]|{\object@{}}^-{\;\;\;\; (1)} \ar@{=>}[d] <0.8ex>|{\object@{|}}^-{ \;(3)} & \text{\;\;(iii)} \ar@{=>}[l]<1.5ex>|{\object@{|}}^-{\;\;\;\; (2)} \ar@{=>}[d]<1.5ex>|{\object@{|}}|{}^-{\; (6)} & \\ \text{(ii)} \ar@{=>}[u]<0.8ex>|{\object@{}}^-{\;\;\;\; (4)} \ar@{=>}[r]<0.6ex>|{\object@{}}^-{(7)}
& \text{(iv)} \ar@{=>}[l]<0.7ex>|{\object@{|}}^-{\;\;\;\; (8)} \ar@{=>}[u] |{\object@{}}|{}^-{\;\;\;\; (5)} }} \\
$$
The implications in the above diagram can be justified as follows:
\noindent (1) and (7): see \cite[4.3]{Au}.\\
\noindent (2) and (8): see the module $M$ in Example \ref{exson2}.\\
\noindent (3): see the module $M$ in Example \ref{exson}.\\
\noindent (4) and (5): these follow by definition.\\
\noindent (6): in view of \cite[4.3]{Au}, see the module $M$ in Example \ref{exson}.
\end{rmk}
\appendix
\section{An application of Theorem \ref{srt}}
In this appendix, we give an application of Theorem \ref{srt}, and provide a criterion for tensor products of prime ideals to be reflexive over hypersurface rings. More precisely, we prove in Corollary \ref{appcor1} that, if $R$ is a hypersurface ring that is not a domain and the tensor product of two prime ideals is reflexive, then both of the primes considered must be minimal. Our result, which seems to be new, is based on the following observations of Asgharzadeh \cite[5.1 and 5.5]{Mohsen}; see also \cite[II.3.3]{PS2}.
\begin{rmk} \label{surp} Let $R$ be a commutative Noetherian ring, and let $\mathfrak{p} \in \Spec(R)$. Assume $\mathfrak{p}^{(n)}\neq 0$ for some $n\geq 1$, where $\mathfrak{p}^{(n)}= \mathfrak{p}^n R_{\mathfrak{p}} \cap R$ denotes the $n$th \emph{symbolic power} of $\mathfrak{p}$. Set $M=R/\mathfrak{p}^{(n)}$.
\begin{enumerate}[\rm(i)]
\item Assume $M$ is Tor-rigid over $R$. Then each non zero-divisor on $M$ is a non zero-divisor on $R$ so that the canonical map $R \to R_{\mathfrak{p}}$ is injective; see Remark \ref{sonolsun}. Hence, $R$ is a domain if $R_{\mathfrak{p}}$ is a domain.
\item Assume $\pd_R(M)<\infty$. Then each non zero-divisor on $M$ is a non zero-divisor on $R$ so that the canonical map $R \to R_{\mathfrak{p}}$ is injective; see \cite{R2}, \cite[6.2.3]{Robertsbook}. Also, $R$ is a domain as $R_{\mathfrak{p}}$ is regular.
\item Assume $\id_R(M)<\infty$. Then it follows that $R$ is Gorenstein \cite[II.5.3]{PS2} so that $\pd_R(M)<\infty$. Hence, part (ii) implies that $R$ is a domain.
\end{enumerate}
\end{rmk}
\begin{prop} \label{appp1} Let $R$ be a local hypersurface ring which is not a domain, and let $M$ be a nonzero $R$-module. Let $\mathfrak{p}\in \Spec(R)$ such that $\height_R(\mathfrak{p})\geq 1$ and $\mathfrak{p}^{(n)}\neq 0$ for some $n\geq 1$. Set $N=\Omega^r\big(R/\mathfrak{p}^{(n)}\big)$ for some $r\geq 0$. Assume $M\otimes_RN$ is reflexive. Then it follows that $r\geq 1$, both $M$ and $N$ are reflexive, and $\pd_R(M)<\infty=\pd_R(N)$. Moreover, if $M$ is not free, then $M$ has rank at least two.
\end{prop}
\begin{proof} As $M\otimes_RN$ is a nonzero torsion-free $R$-module, we observe that neither $M$ nor $N$ can be torsion. This implies that $r\geq 1$. Note, since $\mathfrak{p}$ has positive height, it follows that $R/\mathfrak{p}$ is torsion, i.e., $R/\mathfrak{p}$ has rank zero. Thus $N$ has rank, and so the conclusions of Theorem \ref{srt} hold.
We know, by Theorem \ref{srt}, that $\pd_R(M)<\infty$ or $\pd_R(N)<\infty$. However, if $\pd_R(N)<\infty$, then $\pd_R(R/\mathfrak{p}^{(n)})<\infty$ and this forces $R$ to be a domain; see part (ii) of Remark \ref{surp}. Therefore, we have that $\pd_R(M)<\infty=\pd_R(N)$.
Notice, since both $M$ and $N$ have rank, both of these modules have full support. Consequently, Theorem \ref{srt} implies that both $M$ and $N$ are reflexive; see \cite[1.3]{GO}.
Now assume $M$ is not free. Then, since $M$ is reflexive, it follows that $\pd_R(M)\geq 3$. Hence, as $M$ is a second syzygy module, the syzygy theorem of Evans and Griffith \cite[1.1]{SP} (see also \cite{Andre}, \cite[9.5.6]{BH}, and \cite{Ogata}) forces $M$ to have rank at least two.
\end{proof}
The positive height assumption on the prime ideal considered in Proposition \ref{appp1} cannot be removed.
\begin{eg} \label{appeg1} Let $R=k[\![x,y]\!]/(xy)$, $\mathfrak{p}=(x)$, and let $\mathfrak{q}=(y)$. Then $\mathfrak{p}$ and $\mathfrak{q}$ are the minimal prime ideals of $R$. Set $M=R/(x^2)$ and $N=\Omega(R/\mathfrak{q})$. Then $\pd_R(M)=\infty$, but $M\otimes_R N \cong N$ is a reflexive $R$-module.
\end{eg}
The next corollary of Proposition \ref{appp1} yields the criterion we seek concerning the tensor products of prime ideals over local hypersurface rings:
\begin{cor} \label{appcor1} Let $R$ be a local hypersurface ring that is not a domain, and let $\mathfrak{p}, \mathfrak{q} \in \Spec(R)$. Assume $\mathfrak{p}^{(r)}\neq 0$ and $\mathfrak{q}^{(s)}\neq 0$ for some $r\geq 1$ and $s\geq 1$. If $\mathfrak{p}$ or $\mathfrak{q}$ has positive height, then $\mathfrak{p}^{(r)} \otimes_R \mathfrak{q}^{(s)}$ is not a reflexive $R$-module. Therefore, if $\mathfrak{p} \otimes_R \mathfrak{q}$ is reflexive, then both $\mathfrak{p}$ and $\mathfrak{q}$ are minimal primes.
\end{cor}
In view of Corollary \ref{appcor1}, it is worth noting that the tensor product of two minimal prime ideals over a non-domain hypersurface ring may, or may not, be reflexive.
\begin{eg} Let $R$, $\mathfrak{p}$ and $\mathfrak{q}$ be as in Example \ref{appeg1}. Then $\mathfrak{p}$ and $\mathfrak{q}$ are the minimal prime ideals of $R$. It follows that $\mathfrak{p} \cong R/(y)$ and $\mathfrak{p} \otimes_R\mathfrak{p} \cong \mathfrak{p}$ are reflexive $R$-modules. On the other hand, the tensor product $\mathfrak{p} \otimes_R \mathfrak{q} \cong k$ is not reflexive.
\end{eg}
\end{document}
\begin{document}
\title{A simple representation of quantum process tomography}
\author{Giuliano Benenti}
\email{[email protected]}
\affiliation{CNISM, CNR-INFM, and Center for Nonlinear and Complex Systems,
Universit\`a degli Studi dell'Insubria, via Valleggio 11, 22100 Como, Italy}
\affiliation{Istituto Nazionale di Fisica Nucleare, Sezione di Milano,
via Celoria 16, 20133 Milano, Italy}
\author{Giuliano Strini}
\email{[email protected]}
\affiliation{Dipartimento di Fisica, Universit\`a degli Studi di Milano,
via Celoria 16, 20133 Milano, Italy}
\date{\today}
\begin{abstract}
We show that the Fano representation leads to a particularly simple
and appealing form of the quantum process tomography matrix $\chi_{_F}$,
in that the matrix $\chi_{_F}$ is real, the number of matrix
elements is
exactly equal to the number of free parameters required for the complete
characterization of a quantum operation, and these matrix elements
are directly related to the evolution of the expectation values
of the system's polarization measurements.
These facts are illustrated in
the examples of one- and two-qubit quantum noise channels.
\end{abstract}
\pacs{03.65.Wj, 03.65.Yz}
\maketitle
\section{Introduction}
The characterization of physical, generally noisy processes
in open quantum systems is a key issue in quantum information
science~\cite{qcbook,nielsen}. Quantum process tomography (QPT)
provides, in principle, full information on the dynamics of
a quantum system and can be used to improve the design and
control of quantum hardware.
Several QPT methods have been developed,
including the standard QPT~\cite{nielsen,chuangQPT,poyatos,korotkov},
ancilla-assisted QPT~\cite{dariano,leung,altepeter}, and direct
characterization of quantum dynamics~\cite{lidar}.
In recent years QPT has been experimentally demonstrated
with up to three-qubit systems
in a variety of different implementations, including
quantum optics~\cite{altepeter,mitchell,demartini,obrien,nambu,langford,kiesel,wang},
nuclear magnetic resonance quantum processors~\cite{childs,boulant,weinstein},
atoms in optical lattices~\cite{myrskog},
trapped ions~\cite{riebe,monz},
and solid-state qubits~\cite{howard,katz}.
Any quantum state $\rho$ can be expressed in the
Fano form~\cite{fano,eberly,mahler}
(also known as Bloch representation). Since the density operator $\rho$
is Hermitian, the parameters of the expansion over the Fano basis are
real. Furthermore, due to the linearity of quantum mechanics, any
quantum operation $\rho\to \rho'=\mathcal{E}(\rho)$ is represented,
in the Fano basis, by an affine map.
In this paper, we point out that in standard QPT it is convenient to compute
the QPT matrix in the Fano basis. Such a process matrix, $\chi_{_F}$, has the
following advantages: (i) the matrix elements of $\chi_{_F}$ are real, and
(ii) the number of matrix elements in $\chi_{_F}$ is exactly equal to
the number of free parameters needed in order to determine a generic
quantum operation.
Furthermore, the $\chi_{_F}$-matrix elements are directly
related to the modification, induced by the quantum operation
$\mathcal{E}$, of the expectation values of the system's
polarization measurements.
We will illustrate our results in the examples of
one- and two-qubit quantum noise. In particular, we will determine
in the $\chi_{_F}$-matrix the specific patterns of various quantum noise
processes. Finally, we will discuss the number of free parameters
physically relevant in determining a quantum operation for a
two-qubit system exposed to weak local noise.
\section{Fano representation of the standard QPT}
To simplify writing, we discuss the Fano representation
of the standard QPT only for qubits, even though the obtained results
can be readily extended to qudit systems.
Any $n$-qubit state $\rho$ can be written in the
Fano form as follows~\cite{fano,eberly,mahler}:
\begin{equation}
\rho=\frac{1}{N} \sum_{\alpha_1,...,\alpha_n=x,y,z,I}
c_{\alpha_1...\alpha_n}
\sigma_{\alpha_1}\otimes
\cdots \otimes \sigma_{\alpha_n},
\label{eq:fanoform}
\end{equation}
where
$N=2^n$,
$\sigma_x$, $\sigma_y$, and $\sigma_z$ are the Pauli matrices,
$\sigma_I\equiv \openone$, and
\begin{equation}
c_{\alpha_1...\alpha_n}=
{\rm Tr}(\sigma_{\alpha_1}\otimes
\cdots \otimes \sigma_{\alpha_n} \rho).
\end{equation}
Note that the normalization condition ${\rm Tr}(\rho)=1$ implies
$c_{I...I}=1$. Moreover, the generalized
Bloch vector ${\bf b}=\{b_\alpha\}_{\alpha=1,...,N^2-1}$
is real due to the hermiticity of $\rho$.
Here $b_\alpha\equiv c_{\alpha_1...\alpha_n}$,
with $\alpha \equiv 1+\sum_{k=1}^{n} (i_k-1)\, 4^{n-k}$,
where we have defined $i_k=1,2,3,4$ in correspondence
to $\alpha_k=x,y,z,I$.
Note that the qubits are numbered from $1$ to $n$, running from the most
significant to the least significant.
For instance, for two qubits ($n=2$),
the $N^2-1=15$ components of vector ${\bf b}$ are ordered as
follows:
\begin{equation}
\begin{array}{c}
{\bf b}^T=(b_1,b_2,...,b_{15})=
(c_{xx},
c_{xy},
c_{xz},
c_{xI},
c_{yx},
c_{yy},
\\
c_{yz},
c_{yI},
c_{zx},
c_{zy},
c_{zz},
c_{zI},
c_{Ix},
c_{Iy},
c_{Iz}).
\end{array}
\end{equation}
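As a concrete illustration of this bookkeeping, the generalized Bloch vector can be assembled directly from the traces ${\rm Tr}(\sigma_{\alpha_1}\otimes\cdots\otimes\sigma_{\alpha_n}\,\rho)$ defining the coefficients $c_{\alpha_1...\alpha_n}$. The following minimal numerical sketch (Python with NumPy; the function name \texttt{bloch\_vector} and the test state are illustrative choices added here for convenience, not part of the original analysis) uses the same component ordering as the two-qubit list above:
\begin{verbatim}
import numpy as np
from itertools import product

# Pauli matrices and the identity, labelled as in the text.
sigma = {'x': np.array([[0, 1], [1, 0]], dtype=complex),
         'y': np.array([[0, -1j], [1j, 0]], dtype=complex),
         'z': np.array([[1, 0], [0, -1]], dtype=complex),
         'I': np.eye(2, dtype=complex)}

def bloch_vector(rho, n):
    """Components b_alpha = c_{alpha_1...alpha_n}, ordered as in the text;
    the trivial coefficient c_{I...I} = 1 is omitted."""
    labels = [lab for lab in product('xyzI', repeat=n) if lab != ('I',) * n]
    b = []
    for lab in labels:
        op = sigma[lab[0]]
        for s in lab[1:]:
            op = np.kron(op, sigma[s])
        b.append(np.real(np.trace(op @ rho)))
    return np.array(b)

# Example: the Bell state (|00> + |11>)/sqrt(2) gives
# c_xx = 1, c_yy = -1, c_zz = 1, all other components vanishing.
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print(bloch_vector(np.outer(psi, psi.conj()), 2))
\end{verbatim}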
Due to the linearity of quantum mechanics any quantum operation
$\rho\to \rho'=\mathcal{E}(\rho)$
is represented in the Fano basis
$\{\sigma_{\alpha_1}\otimes ...\otimes\sigma_{\alpha_n}\}$
by an affine map:
\begin{equation}
\left[
\begin{array}{c}
{\bf b'}
\\
\hline
1
\end{array}
\right]
=
\mathcal{M}
\left[
\begin{array}{c}
{\bf b}
\\
\hline
1
\end{array}
\right]
=
\left[
\begin{array}{ccc}
{\bf M} & \Big \lvert & {\bf a} \\
\hline
{\bf 0}^T & \Big \lvert & 1
\end{array}
\right]
\left[
\begin{array}{c}
{\bf b}
\\
\hline
1
\end{array}
\right],
\end{equation}
where ${\bf M}$ is a $(N^2-1) \times (N^2-1)$ matrix,
${\bf a}$ a column vector of dimension $N^2-1$ and
${\bf 0}$ the null vector of the same dimension.
All information about the quantum operation
$\mathcal{E}$ is contained in the $N^4-N^2$ free elements
of matrix $\mathcal{M}$, namely in the matrix
\begin{equation}
\chi_{_F}=
\left[
\begin{array}{ccc}
{\bf M} & \Big \lvert & {\bf a}
\end{array}
\right].
\label{eq:processmatrix}
\end{equation}
To obtain the QPT matrix $\chi_{_F}$ from experimental data, one
needs to prepare $N^2$ linearly independent initial states
$\{\rho_i\}$, let
them evolve according to the quantum operation $\mathcal{E}$ and
then measure the resulting states
$\{\rho_i'=\mathcal{E}(\rho_i)\}$.
If we call $\mathcal{R}$ the $N^2\times N^2$
matrix whose columns are given by the
Fano representation of states $\rho_i$
and $\mathcal{R'}$ the corresponding matrix constructed from states
$\rho_i'$, we have
\begin{equation}
\mathcal{R'}=\mathcal{M} \mathcal{R},
\end{equation}
and therefore
\begin{equation}
\mathcal{M}=\mathcal{R'} \mathcal{R}^{-1}.
\end{equation}
As is well known~\cite{nielsen}, the standard QPT can be performed
with initial states being product states and local measurements
of the final states. As initial states $\{\rho_i\}$
we choose the $4^n$ tensor-product states
of the $4$ single-qubit states
\begin{equation}
|0\rangle, \;\;\;|1\rangle, \;\;\;\frac{1}{\sqrt{2}}(|0\rangle +|1\rangle),
\;\;\;\frac{1}{\sqrt{2}}(|0\rangle +i |1\rangle).
\label{sepbasis}
\end{equation}
To estimate $\mathcal{R'}$, one needs to prepare many copies of each initial
state $\rho_i$, let them evolve according to the quantum
operation $\mathcal{E}$ and then measure observables
$\sigma_{\alpha_1}\otimes \cdots \otimes \sigma_{\alpha_n}$.
Of course, such measurements can be performed on the computational
basis $\{|0\rangle,|1\rangle\}^{\otimes n}$, provided
each measurement is preceded by suitable single-qubit rotations.
\section{Single-qubit systems}
The matrix $\mathcal{R}$ corresponding to basis (\ref{sepbasis})
reads
\begin{equation}
\mathcal{R}=\left[
\begin{array}{cccc}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
1 & -1 & 0 & 0 \\
1 & 1 & 1 & 1
\end{array}
\right].
\end{equation}
Therefore,
\begin{equation}
\mathcal{R}^{-1}=\left[
\begin{array}{cccc}
-\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\
-\frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0
\end{array}
\right].
\end{equation}
The coefficients $(c_x,c_y,c_z)$ in the Fano form
(\ref{eq:fanoform}) are the Bloch-vector coordinates
of the density matrix $\rho$ in the Bloch-ball representation of
single-qubit states.
We need $N^4-N^2=12$ parameters to characterize a generic quantum operation
acting on a single qubit. Each parameter describes a particular noise
channel (like bit flip, phase flip, amplitude damping,...) and can be
most conveniently visualized as associated with rotations, deformations
and displacements of the Bloch ball~\cite{qcbook,nielsen,BFS}.
Here we point out that these noise channels
lead to specific patterns in the state process matrix
$\chi_{_F}$.
For instance, for the phase-flip channel,
\begin{equation}
\rho'=\mathcal{E}(\rho)=
p \sigma_z \rho \sigma_z + (1-p) \rho,
\;\;
(0\le p \le \frac{1}{2}),
\label{eq:phaseflip}
\end{equation}
we have
\begin{equation}
\mathcal{R'}=\left[
\begin{array}{cccc}
0 & 0 & 1-2p & 0 \\
0 & 0 & 0 & 1-2p \\
1 & -1 & 0 & 0 \\
1 & 1 & 1 & 1
\end{array}
\right].
\end{equation}
We can then compute $\mathcal{M}=\mathcal{R'}\mathcal{R}^{-1}$, and the first
three rows of $\mathcal{M}$ correspond to the process matrix
\begin{equation}
\chi_{_F}^{(\rm pf)}=
\left[
\begin{array}{cccc}
1-2p & 0 & 0 & 0 \\
0 & 1-2p & 0 & 0 \\
0 & 0 & 1 & 0
\end{array}
\right].
\end{equation}
Therefore, the Bloch ball is mapped into an ellipsoid
with $z$ as symmetry axis:
\begin{equation}
\left\{
\begin{array}{l}
c_x\to c_x'=(1-2p)c_x,\\
c_y\to c_y'=(1-2p)c_y,\\
c_z\to c_z'=c_z.
\end{array}
\right.
\end{equation}
As a further example, we consider the amplitude damping channel:
\begin{equation}
\rho'=\sum_{k=0}^1 E_k \rho E_k^\dagger,
\end{equation}
with the Kraus operators
\begin{equation}
E_0=|0\rangle\langle 0| +\sqrt{1-p} |1\rangle\langle 1|,
\;
E_1=\sqrt{p} |0\rangle\langle 1|,
\;
(0\le p \le 1).
\end{equation}
In this case we obtain
\begin{equation}
\chi_{_F}^{(\rm ad)}=
\left[
\begin{array}{cccc}
\sqrt{1-p} & 0 & 0 & 0 \\
0 & \sqrt{1-p} & 0 & 0 \\
0 & 0 & 1-p & p
\end{array}
\right].
\end{equation}
The Bloch ball is deformed into an ellipsoid, with its center
displaced along the $z$-axis:
\begin{equation}
\left\{
\begin{array}{l}
c_x\to c_x'=\sqrt{1-p}c_x,\\
c_y\to c_y'=\sqrt{1-p}c_y,\\
c_z\to c_z'=(1-p)c_z+p.
\end{array}
\right.
\end{equation}
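The reconstruction described above is easily checked numerically. The following minimal sketch (Python with NumPy, added here only as an illustration; the helper names \texttt{fano} and \texttt{chi\_F} and the noise strength $p=0.2$ are our own choices, not part of the original derivation) prepares the $4^n$ product input states of Eq.~(\ref{sepbasis}), builds $\mathcal{R}$ and $\mathcal{R'}$, and returns $\chi_{_F}$; for the single-qubit phase-flip and amplitude-damping channels it reproduces $\chi_{_F}^{(\rm pf)}$ and $\chi_{_F}^{(\rm ad)}$ given above.
\begin{verbatim}
import numpy as np
from itertools import product

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
si = np.eye(2, dtype=complex)
paulis = {'x': sx, 'y': sy, 'z': sz, 'I': si}

def fano(rho, n):
    # Full Fano vector, with the trivial coefficient c_{I...I} = 1 placed last.
    labels = [l for l in product('xyzI', repeat=n) if l != ('I',)*n] + [('I',)*n]
    vec = []
    for lab in labels:
        op = paulis[lab[0]]
        for s in lab[1:]:
            op = np.kron(op, paulis[s])
        vec.append(np.real(np.trace(op @ rho)))
    return np.array(vec)

def chi_F(channel, n):
    # Standard QPT: chi_F = [M | a] is the affine map with its trivial
    # last row (0,...,0,1) dropped.
    kets = [np.array([1, 0]), np.array([0, 1]),
            np.array([1, 1]) / np.sqrt(2), np.array([1, 1j]) / np.sqrt(2)]
    R, Rp = [], []
    for labs in product(range(4), repeat=n):
        psi = kets[labs[0]]
        for j in labs[1:]:
            psi = np.kron(psi, kets[j])
        rho = np.outer(psi, psi.conj())
        R.append(fano(rho, n))
        Rp.append(fano(channel(rho), n))
    Mcal = np.array(Rp).T @ np.linalg.inv(np.array(R).T)
    return Mcal[:-1, :]

p = 0.2
phase_flip = lambda rho: p * sz @ rho @ sz + (1 - p) * rho
print(np.round(chi_F(phase_flip, 1), 3))   # diag(1-2p, 1-2p, 1), zero last column

E0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)
E1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)
amp_damp = lambda rho: E0 @ rho @ E0.conj().T + E1 @ rho @ E1.conj().T
print(np.round(chi_F(amp_damp, 1), 3))     # reproduces chi_F^(ad) above
\end{verbatim}
The same routine, applied with $n=2$ to a dephasing channel acting independently on each qubit, reproduces the uncorrelated-dephasing matrix (\ref{eq:chiud}) of the next section, together with a vanishing displacement column ${\bf a}$.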
\section{Two-qubit systems}
\begin{widetext}
Matrices $\mathcal{R}$ and $\mathcal{R}^{-1}$
corresponding to the $16$ tensor-product states of
single-qubit states (\ref{sepbasis}) read as follows:
\begin{equation}
\mathcal{R}=
\left [
\begin{array}{cccccccccccccccc}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 1 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
0 & 0 & 1 & 0 & 0 & 0 & -1 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & -1 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & -1 & 0 & 0 & -1 & 1 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 &
0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 &
0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\
1 & -1 & 0 & 0 & 1 & -1 & 0 & 0 &
1 & -1 & 0 & 0 & 1 & -1 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 &
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{array}
\right],
\end{equation}
\begin{equation}
\mathcal{R}^{-1}=\frac{1}{4}
\left [
\begin{array}{cccccccccccccccc}
1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 &
-1 & -1 & 1 & 1 & -1 & -1 & 1 & 1 \\
1 & 1 & 1 & -1 & 1 & 1 & 1 & -1 &
-1 & -1 & -1 & 1 & -1 & -1 & -1 & 1 \\
-2 & 0 & 0 & 0 & -2 & 0 & 0 & 0 &
2 & 0 & 0 & 0 & 2 & 0 & 0 & 0 \\
0 & -2 & 0 & 0 & 0 & -2 & 0 & 0 &
0 & 2 & 0 & 0 & 0 & 2 & 0 & 0 \\
1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 &
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
1 & 1 & 1 & -1 & 1 & 1 & 1 & -1 &
1 & 1 & 1 & -1 & -1 & -1 & -1 & 1 \\
-2 & 0 & 0 & 0 & -2 & 0 & 0 & 0 &
-2 & 0 & 0 & 0 & 2 & 0 & 0 & 0 \\
0 & -2 & 0 & 0 & 0 & -2 & 0 & 0 &
0 & -2 & 0 & 0 & 0 & 2 & 0 & 0 \\
-2 & -2 & 2 & 2 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-2 & -2 & -2 & 2 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 4 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -2 & -2 & 2 & 2 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -2 & -2 & -2 & 2 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 4 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 4 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}
\right].
\end{equation}
\end{widetext}
The coordinates $\{c_{\alpha_1\alpha_2}\}$ in the Fano form (\ref{eq:fanoform})
are the expectation values of the polarization measurements
$\{\sigma_{\alpha_1}\otimes \sigma_{\alpha_2}\}$. The coefficients in the
state matrix $\chi_{_F}$ representing a quantum operation $\mathcal{E}$
can therefore be interpreted in terms of modification of these expectation
values.
For instance, let us assume that the two qubits are
independently exposed to pure dephasing, that is,
to quantum noise described
by the phase-flip channel (\ref{eq:phaseflip}),
with the same noise strength
$p$ for both qubits.
The process matrix for such an uncorrelated dephasing channel is
given by
\begin{equation}
\chi_{_F}^{({\rm ud})}=
\left [
\begin{array}{ccccccccccccccc}
g^2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & g^2 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & g & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & g & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & g^2 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & g^2 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & g & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & g &
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
g & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & g & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & g & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & g & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{array}
\right],
\label{eq:chiud}
\end{equation}
where $g\equiv 1-2p$.
Correspondingly, the mapping for the expectation values of the
polarization measurements reads
\begin{equation}
c_{\alpha_1\alpha_2}'=g^{m_1+m_2} c_{\alpha_1\alpha_2},
\end{equation}
where $m_i=1$ for $\alpha_i=x,y$ and
$m_i=0$ for $\alpha_i=z,I$.
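This multiplicative action on the polarization expectation values can be checked directly. The following self-contained sketch (Python with NumPy, an illustration added for convenience; the noise strength $p=0.2$ and the random test state are our own choices) applies the phase-flip channel independently to both qubits of a random two-qubit state and verifies the relation above for all sixteen components:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
si = np.eye(2, dtype=complex)

p = 0.2
g = 1 - 2 * p

def uncorrelated_dephasing(rho):
    # Phase flip with strength p acting independently on each qubit.
    ops = [np.sqrt(1 - p) * si, np.sqrt(p) * sz]
    out = np.zeros_like(rho)
    for A in ops:
        for B in ops:
            K = np.kron(A, B)
            out = out + K @ rho @ K.conj().T
    return out

# Random two-qubit density matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = X @ X.conj().T
rho = rho / np.trace(rho)
rho_out = uncorrelated_dephasing(rho)

paulis = {'x': sx, 'y': sy, 'z': sz, 'I': si}
m = {'x': 1, 'y': 1, 'z': 0, 'I': 0}
for a1 in 'xyzI':
    for a2 in 'xyzI':
        P = np.kron(paulis[a1], paulis[a2])
        c_in = np.trace(P @ rho)
        c_out = np.trace(P @ rho_out)
        assert abs(c_out - g**(m[a1] + m[a2]) * c_in) < 1e-12
print("c'_{a1 a2} = g^(m1+m2) c_{a1 a2} verified for all components")
\end{verbatim}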
As an example of nonlocal quantum noise, we consider a model
of fully correlated pure dephasing. We model the interaction of the
two qubits with the environment as a phase-kick rotating both
qubits through the same angle $\theta$ about the $z$ axis of
the Bloch ball. This rotation is described in the
$\{|0\rangle,|1\rangle\}$ basis by the unitary matrix
\begin{equation}
R_z(\theta)=
\left[
\begin{array}{cc}
e^{-i\frac{\theta}{2}} & 0 \\
0 & e^{i\frac{\theta}{2}}
\end{array}
\right]
\otimes
\left[
\begin{array}{cc}
e^{-i\frac{\theta}{2}} & 0 \\
0 & e^{i\frac{\theta}{2}}
\end{array}
\right].
\end{equation}
We assume that the rotation angle is drawn from the random distribution
\begin{equation}
p(\theta)=\frac{1}{\sqrt{4\pi\lambda}} e^{-\frac{\theta^2}{4\lambda}}.
\end{equation}
Therefore, the final state $\rho'$, obtained after averaging
over $\theta$, is given by
\begin{equation}
\rho'=\int_{-\infty}^{+\infty}d\theta p(\theta)
R_z(\theta) \rho R_z^\dagger(\theta).
\end{equation}
For this correlated dephasing channel we obtain
the process matrix
\begin{equation}
\chi_{_F}^{({\rm cd})}=
\left [
\begin{array}{ccccccccccccccc}
h & 0 & 0 & 0 & 0 & k & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & h & 0 & 0 & -k & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & g & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & g & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & -k & 0 & 0 & h & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
k & 0 & 0 & 0 & 0 & h & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & g & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & g &
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
g & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & g & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & g & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & g & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{array}
\right],
\label{eq:chicd}
\end{equation}
where $g\equiv e^{-\lambda}$,
$h\equiv \frac{1}{2}(1+g^4)$, $k\equiv\frac{1}{2}(1-g^4)$.
It is clear that the process matrix (\ref{eq:chicd})
for correlated dephasing
has a pattern that allows one to clearly distinguish it from
the process matrix (\ref{eq:chiud})
for uncorrelated dephasing.
It is also obvious that, if partial prior knowledge
of the dominant noise sources is available, it is not necessary to construct
the whole state process matrix $\chi_{_F}$ in order to characterize
the quantum operation. For instance, if we know a priori that
dephasing is the main source of noise and we wish to estimate its
degree of correlation, it is sufficient to prepare the
initial state $\rho=|\psi\rangle\langle\psi|$,
with $|\psi\rangle=\frac{1}{2}(|0\rangle+|1\rangle)^{\otimes 2}$,
and measure the $x$- and $y$-polarizations of both qubits for
the final state $\rho'$. The initial state is fully polarized along $x$,
and therefore
\begin{equation}
\left\{
\begin{array}{l}
c_{xx}=1, \\
c_{yy}=0.
\end{array}
\right.
\end{equation}
For the final state, in the case of fully correlated dephasing
\begin{equation}
\left\{
\begin{array}{l}
(c_{xx}')^{({\rm cd})}=h=\frac{1}{2}[1+g^4],\\
(c_{yy}')^{({\rm cd})}=k=\frac{1}{2}[1-g^4],
\end{array}
\right.
\end{equation}
while the expectation values of the $xx$- and $yy$-polarization
measurements are remarkably different for uncorrelated dephasing:
\begin{equation}
\left\{
\begin{array}{l}
(c_{xx}')^{({\rm ud})}=g^2,\\
(c_{yy}')^{({\rm ud})}=0.
\end{array}
\right.
\end{equation}
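As a sanity check of these expressions, one can average numerically over the phase-kick distribution $p(\theta)$, which is a Gaussian of variance $2\lambda$. The minimal Monte Carlo sketch below (Python with NumPy, added for convenience; the value $\lambda=0.3$ and the sample size are our own illustrative choices) approximately reproduces $(c_{xx}')^{({\rm cd})}=h$ and $(c_{yy}')^{({\rm cd})}=k$ starting from the fully $x$-polarized state:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

lam = 0.3
g = np.exp(-lam)
rng = np.random.default_rng(0)
thetas = rng.normal(0.0, np.sqrt(2 * lam), 50000)   # p(theta): variance 2*lambda

def Rz_both(t):
    # Same phase kick applied to both qubits.
    u = np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
    return np.kron(u, u)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi = np.kron(plus, plus)
rho0 = np.outer(psi, psi.conj())
rho1 = np.mean([Rz_both(t) @ rho0 @ Rz_both(t).conj().T for t in thetas], axis=0)

print(np.real(np.trace(np.kron(sx, sx) @ rho1)), (1 + g**4) / 2)   # ~ h
print(np.real(np.trace(np.kron(sy, sy) @ rho1)), (1 - g**4) / 2)   # ~ k
\end{verbatim}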
While in general two-qubit quantum operations depend on
$N^4-N^2=240$ real parameters, an important question is how
many parameters are physically significant. The answer of
course depends on the specific noise processes. However, a
clear answer can be given assuming that external noise is
weak and local, that is to say, it acts independently on
the two qubits. In this case, local noise is described by $24$
parameters, $12$ for each qubit. Undesired coupling effects
(cross talk) between qubits can be characterized with only
three additional
parameters, $\theta_x$, $\theta_y$, and $\theta_z$.
Indeed, any two-qubit unitary transformation $U$
can be decomposed as~\cite{khaneja,kraus,nielsenPRA}
\begin{equation}
U=(A_1\otimes B_1) e^{i(\theta_x \sigma_x\otimes \sigma_x+
\theta_y \sigma_y\otimes \sigma_y +\theta_z \sigma_z\otimes \sigma_z)}
(A_2\otimes B_2),
\end{equation}
with $A_1$, $A_2$, $B_1$, and $B_2$ appropriate single-qubit
unitaries.
In the limit of weak noise, the process matrix $\chi_{_F}$
is simply given by the sum of the contributions of each noise
channel. Therefore, the local unitaries
$A_1$, $A_2$, $B_1$, and $B_2$ only change the $24$ local noise
parameters and overall we need $24+3=27\ll 240$ parameters
to describe the quantum noise. In the symmetric case in which
the local noise parameters are the same for both qubits the
number of free parameters further reduces to $12+3=15$.
The above argument can be easily extended to many-qubit
systems.
Due to the two-body nature of interactions, we need to
determine ${\mathcal N}=12n+3\frac{n(n-1)}{2}$ parameters
to characterize noise. Note that
${\mathcal N}=O(\log^2 N)\ll N^4-N^2$.
Of course, cases with strong or nonlocal noise
would require a larger number of free parameters.
\section{Conclusions}
We have shown that the Fano representation of the standard QPT
is convenient, since the process matrix $\chi_{_F}$ is real
and the number of matrix elements is exactly equal to the number
of free parameters required for the complete characterization
of a generic quantum operation.
Moreover, the matrix elements of $\chi_{_F}$ are directly related to
the evolution, induced by the quantum operation, of the
system's polarization measurements.
We have also shown that quantum noise channels have specific
patterns in the Fano representation of $\chi_{_F}$.
Finally, we have shown that in the case of weak and local noise,
which is of interest for quantum information processing,
the number of relevant noise parameters is
$\mathcal{N}=O(\log^2 N)\ll N^4-N^2$, that is, much smaller
than the number of parameters needed to determine a
generic quantum operation. In this case,
the $\chi_{_F}$-matrix is very sparse and therefore the
number of polarization measurements needed to reconstruct it
is much smaller than for a generic quantum operation, thus
considerably reducing the QPT complexity.
\end{document}
\begin{document}
\title[Modules of finite flat dimension and the Frobenius endomorphism]{Detecting finite flat dimension of modules via \\ iterates of the Frobenius endomorphism}
\author[D.\,J.\, Dailey]{Douglas J. Dailey}
\address{D.J.D. University of Dallas, Irving, Texas 75062, U.S.A.}
\email{[email protected]}
\urladdr{http://www.udallas.edu}
\author[S.\,B.\ Iyengar]{Srikanth B. Iyengar}
\address{S.B.I. University of Utah, Salt Lake City, UT 84112, U.S.A.}
\email{[email protected]}
\urladdr{http://www.math.utah.edu/~iyengar}
\author[T.\ Marley]{Thomas Marley}
\address{T.M. University of Nebraska-Lincoln, Lincoln, NE 68588, U.S.A.}
\email{[email protected]}
\urladdr{http://www.math.unl.edu/~tmarley}
\thanks{S.B.I.\ was partly supported by NSF grant DMS-1503044.}
\date{\today}
\keywords{Frobenius map, flat dimension, homotopical Loewy length}
\subjclass[2010]{13D05; 13D07, 13A35}
\begin{abstract}
It is proved that a module $M$ over a Noetherian ring $R$ of positive characteristic $p$ has finite flat dimension if there exists an integer $t\geqslant 0$ such that $\operatorname{Tor}_i^R(M, {}^{f^{e}}\!R)=0$ for $t\leqslant i\leqslant t+\operatorname{dim} R$ and infinitely many $e$. This extends results of Herzog, who proved it when $M$ is finitely generated. It is also proved that when $R$ is a Cohen-Macaulay local ring, it suffices that the Tor vanishing holds for one $e\geqslant \log_{p}e(R)$, where $e(R)$ is the multiplicity of $R$.
\end{abstract}
\maketitle
\section{Introduction}
The Frobenius endomorphism $f\colon R\to R$ of a commutative Noetherian local ring $R$ of prime characteristic $p$ is an effective tool for understanding the structure of such rings and the homological properties of finitely generated modules over them. A paradigm of this is a result of Kunz \cite{Ku} that $R$ is regular if and only if $f^e$ is flat for some (equivalently, every) integer $e\geqslant 1$. Our work is motivated by the following module-theoretic version of Kunz's result:
\emph{There exists an integer $c$ such that for any finitely generated $R$-module $M$ the following statements are equivalent:
\begin{enumerate}[\quad\rm(1)]
\item The flat dimension of $M$ is finite.
\item $\operatorname{Tor}_i^R(M, {}^{f^{e}}\!R)=0$ for all positive integers $i$ and $e$.
\item $\operatorname{Tor}_i^R(M, {}^{f^{e}}\!R)=0$ for all $i>0$ and infinitely many $e>0$.
\item $\operatorname{Tor}_i^R(M, {}^{f^{e}}\!R)=0$ for $\operatorname{depth} R+1$ consecutive values of $i>0$ and some $e>c$.
\end{enumerate}
}
Peskine and Szpiro~\cite{PS} proved that (1)$\Rightarrow$(2), Herzog~\cite{He} proved that (3)$\Rightarrow$(1), and Koh and Lee \cite{KL} proved that (4)$\Rightarrow$(1). Recently, the third author and M. Webb \cite[Theorem 4.2]{MW} proved the equivalence of conditions (1), (2), and (3) for all $R$-modules, even infinitely generated ones. In their work, the argument for (3)$\Rightarrow$(1) is quite technical and heavily dependent on results of Enochs and Xu~\cite{EX} concerning flat cotorsion modules and minimal flat resolutions.
In this work we give another proof of \cite[Theorem 4.2]{MW} that circumvents \cite{EX}; more to the point, it yields a stronger result and sheds additional light also on the finitely generated case. See \cite{AF} for the definition of the flat dimension of a complex.
\begin{theorem}
\label{th:main}
Let $R$ be a Noetherian local ring of prime characteristic $p$ and $M$ an $R$-complex with $s:=\sup \operatorname{H}_{*}(M)$ finite. The following conditions are equivalent:
\begin{enumerate}[\quad\rm(1)]
\item The flat dimension of $M$ is finite.
\item $\operatorname{Tor}_i^R(M, {}^{f^{e}}\!R)=0$ for all $i>s$ and $e>0$.
\item There exists an integer $t\geqslant s$ such that $\operatorname{Tor}_i^R(M, {}^{f^{e}}\!R)=0$ for $t\leqslant i\leqslant t+\operatorname{dim} R$ and for infinitely many $e$.
\end{enumerate}
Moreover, when $R$ is Cohen-Macaulay of multiplicity $e(R)$, it suffices that the vanishing in \emph{(3)} holds for one $e\geqslant \log_pe(R)$.
\end{theorem}
This result is proved in Section~\ref{sec:proof}. The key element in our proofs is the use of homotopical Loewy lengths of complexes, in much the same way as in the work of the second author and Avramov and C.~Miller~\cite[Section 4]{AHIY}. Using rather different methods, Avramov and the second author~\cite{AI} have proved that the last part of the theorem above holds without the Cohen-Macaulay hypothesis, but with a different lower bound on $e$.
\section{Homotopical Loewy length}
\label{sec:local}
In this section we collect results on homotopical Loewy length, and their corollaries, needed in our proof of Theorem~\ref{th:main}. Throughout $(R,\mathfrak{m},k)$ will be a local (this includes commutative and Noetherian) ring, with maximal ideal $\mathfrak{m}$ and residue field $k$; there is no restriction on its characteristic. We adopt the terminology and notation of \cite[Section 2]{AHIY} regarding complexes and related constructs. In particular, given $R$-complexes $M$ and $N$, the notation $M \simeq N$ means that $M$ and $N$ are isomorphic in $\mathsf D(R)$, the derived category of $R$-modules.
The \emph{Loewy length} of an $R$-complex $M$ is the number
\[
\ell\ell_R(M):= \inf\{n\in \mathbb N\mid \mathfrak{m}^n M=0\}.
\]
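For instance, $\ell\ell_R(k)=1$ and, more generally, $\ell\ell_R(R/\mathfrak{m}^n)=n$ whenever $\mathfrak{m}^{n-1}\neq 0$.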
Following \cite[6.2]{AIM}, the \emph{homotopical Loewy length} of an $R$-complex $M$ is the number
\[
\ell\ell_{\mathsf D(R)}(M):= \inf \{\ell\ell_R(V) \mid M \simeq V \text{ in } \mathsf D(R)\}.
\]
Given a finite sequence $\boldsymbol{x}\subset R$ and an $R$-complex $M$, we write $\kos{\boldsymbol{x}}M$ for the Koszul complex on $\boldsymbol{x}$, with coefficients in $M$. The result below extends, with an identical proof, \cite[Proposition 4.1]{AHIY} and \cite[Theorem 6.2.2]{AIM} that deal with the case when $\boldsymbol{x}$ generates $\mathfrak{m}$.
\begin{proposition}
\label{pr:loewy}
Let $\boldsymbol{x}$ be a finite sequence in $R$ such that $\ell_R(R/\boldsymbol{x} R)$ is finite. For each $R$-complex $M$ there are inequalities
\[
\ell\ell_{\mathsf D(R)} \kos{\boldsymbol{x}}M \leqslant \ell\ell_{\mathsf D(R)}\kos{\boldsymbol{x}}R <\infty\,.
\]
\end{proposition}
\begin{proof}
Let $I=(\boldsymbol{x})$ and $K=\kos{\boldsymbol{x}}R$. For each $i$, consider the subcomplex $C^i$ of $K$
\[
\cdots \to I^{i-2}K_{2} \to I^{i-1}K_1\to I^iK_0\to 0.
\]
Since $I^{i}$ annihilates $K/C^i$ and $I$ is $\mathfrak{m}$-primary, it follows that $\ell\ell_{R}(K/C^{i})$ is finite for each $i$. There exists an $r$ such that $C^i$ is acyclic for all $i\geqslant r$; cf. \cite[Proposition]{EF}. Thus, for $i\geqslant r$ the natural map $K\to K/C^i$ is a quasi-isomorphism, and hence the homotopical Loewy length of $K$ is finite. The inequality $\ell\ell_{\mathsf D(R)}(K\otimes_{R}M) \leqslant \ell\ell_{\mathsf D(R)}K$ can be verified exactly as in the proof of \cite[Proposition 4.1]{AHIY}.
\end{proof}
The following invariant plays an important role in what follows.
\[
c(R):=\inf\{\ell\ell_{\mathsf D(R)} \kos{\boldsymbol{x}}R \mid\text{$\boldsymbol{x}$ is an s.o.p.\,for $R$}\}.
\]
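For instance, if $R$ is regular and $\boldsymbol{x}$ is a regular system of parameters, then $\kos{\boldsymbol{x}}R\simeq R/(\boldsymbol{x})=k$, and hence $c(R)=1$.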
Proposition~\ref{pr:loewy} yields that $c(R)$ is finite for any $R$. For our purposes, we need a uniform bound on $c(R_{\mathfrak{p}})$, as $\mathfrak{p}$ varies over the prime ideals in $R$. We have been able to establish this only for Cohen-Macaulay rings; this is the content of the next result, where $e(R)$ denotes the multiplicity of $R$.
\begin{lemma}
\label{le:CM}
Let $(R,\mathfrak{m},k)$ be a local ring with $k$ infinite. When $R$ is Cohen-Macaulay, there is an inequality $c(R_{\mathfrak{p}})\leqslant e(R)$ for each $\mathfrak{p}$ in $\operatorname{Spec} R$.
\end{lemma}
\begin{proof}
By a result of Lech \cite{L}, one has $e(R)\geqslant e(R_{\mathfrak{p}})$ for all $\mathfrak{p}\in \operatorname{Spec} R$. Moreover, it is easy to verify that since $k$ is infinite, so is $k(\mathfrak{p})$ for each $\mathfrak{p}$. It thus suffices to verify that $c(R)\leqslant e(R)$.
Let $\boldsymbol{x}$ be a s.o.p. of $R$ that is a minimal reduction of $\mathfrak{m}$; this exists because $k$ is infinite; see \cite[Proposition 8.3.7]{SH}. Then there are inequalities
\[
e(R)=\ell_R(R/(\boldsymbol{x})) \geqslant \ell\ell_R (R/(\boldsymbol{x}))=\ell\ell_{\mathsf D(R)}\kos{\boldsymbol{x}}R \geqslant c(R).
\]
For the first equality see, for example, \cite[Proposition~11.2.2]{SH}, while the second equality holds because $\kos{\boldsymbol{x}}R\simeq R/(\boldsymbol{x})$; both need the hypothesis that $R$ is Cohen-Macaulay.
\end{proof}
The preceding result brings up the following question; its import for the results in this paper will become apparent in the proof of~Theorem \ref{th:main}.
\begin{question}
Is $\sup \{ c(R_{\mathfrak{p}})\mid \mathfrak{p}\in \operatorname{Spec} R\}$ finite for any Noetherian ring $R$?
\end{question}
The proof of Lemma~\ref{le:CM} is not likely to be of help in answering this question.
\begin{remark}
Let $\boldsymbol{x}$ be a finite sequence in a local ring $(R,\mathfrak{m},k)$. Since the ideal $(\boldsymbol{x})$ annihilates $\operatorname{H}_{*}(\kos{\boldsymbol{x}}R)$, it is immediate from definitions that there is an inequality
\[
\ell\ell_R (R/(\boldsymbol{x}))\leqslant \ell\ell_{\mathsf D(R)}\kos{\boldsymbol{x}}R\,.
\]
Equality holds when $\boldsymbol{x}$ is a regular sequence, for then $R/(\boldsymbol{x})\simeq \kos{\boldsymbol{x}}R$; this is the main reason for the Cohen-Macaulay hypothesis in Lemma~\ref{le:CM}. The inequality can be strict in general.
For example, if $(\boldsymbol{x})=\mathfrak{m}$, then $\ell\ell_{R}(R/\mathfrak{m}) = 1$, whilst $\ell\ell_{\mathsf D(R)}\kos{\boldsymbol{x}}R=1$ exactly when $R$ is regular; see \cite[Corollary~6.2.3]{AIM}.
Here is an example where the inequality is strict for $\boldsymbol{x}$ a s.o.p.
Let $R:=k[|x,y|]/(x^{n}y,y^{2})$, where $n\geqslant 1$ is an integer. The residue class of $x$ in $R$ is a s.o.p., and the Loewy length of $R/(x)$ equals $2$. We claim that the homotopical Loewy length of $\kos xR$ is $n+1$.
Indeed, the following subcomplex of $\kos xR$
\[
A:=0\to (x^{n})\to (x^{n+1})\to 0
\]
is acyclic so one has $\kos xR\xrightarrow{ \simeq } \kos xR/A$. Since
\[
\kos xR/A = 0\longrightarrow \frac R{(x^{n})}\longrightarrow \frac R{(x^{n+1})} \longrightarrow 0
\]
and the Loewy length of this complex is $n+1$, it follows that $\ell\ell_{\mathsf D(R)}(\kos xR)\leqslant n+1$. On the other hand
\[
\operatorname{H}_{1}(\kos xR)=(x^{n-1}y)\subset R\,.
\]
Suppose $\kos xR\simeq V$ for some $R$-complex $V$. Since $\kos xR$ is a finite complex of free $R$-modules, there must exist a morphism $f\colon \kos xR\to V$ of $R$-complexes with $\operatorname{H}_{*}(f)$ an isomorphism. The map $f_{1}\colon K_{1}=R\to V_{1}$ satisfies
\[
0\ne f_{1}(x^{n-1}y)=x^{n-1}y\,f_{1}(1)\,.
\]
It follows that $x^{n-1}y\cdot V_{1}\ne 0$, and hence that $\ell\ell_{R}V\geqslant \ell\ell_{R}(V_{1})\geqslant n+1$.
\end{remark}
As in \cite[Proposition 4.3(2)]{AHIY} one can apply Proposition~\ref{pr:loewy} to local homomorphisms to obtain an isomorphism relating Koszul homologies.
\begin{proposition}
\label{pr:tor-iso}
\pushQED{\qed}
Let $(S,\mathfrak{n},l)$ be a local ring, $\boldsymbol{y}$ a finite sequence of elements in $S$ such that the ideal $(\boldsymbol{y})$ is $\mathfrak{n}$-primary, and set $c:=\ell\ell_{\mathsf D(S)}\kos{\boldsymbol{y}}S$.
If $\varphi\colon (R,\mathfrak{m}, k)\to (S,\mathfrak{n},l)$ is a local homomorphism satisfying $\mathfrak{m} S\subseteq \mathfrak{n}^c$, then for each $R$-complex $M$, there exists an isomorphism of graded $k$-vector spaces
\[
\operatorname{Tor}^R_*(M, \kos{\boldsymbol{y}}S) \cong \operatorname{Tor}^R_*(M,k)\otimes_k \operatorname{H}_*(\kos{\boldsymbol{y}}S)\,.\qedhere
\]
\end{proposition}
We also need the following routine computation.
\begin{lemma}
\label{le:lem1}
Let $\varphi\colon R\to S$ be a homomorphism of rings and $\boldsymbol{y}=y_1,\dots, y_d$ a sequence of elements in $S$. Let $M$ be an $R$-complex and $t$ an integer such that $\operatorname{Tor}_{i}^R(M, S)=0$ for $t\leqslant i\leqslant t+d$. Then $\operatorname{Tor}_{t+d}^R(M, \kos{\boldsymbol{y}}S)=0$. \qed
\end{lemma}
Applied to an appropriate composition of the Frobenius endomorphism, the next result yields an analogue of \cite[Proposition 2.6]{KL} for complexes. The number of consecutive vanishing Tor modules required in the case of modules is not optimal ($\operatorname{dim} R+1$ as compared to $\operatorname{depth} R + 1$ in \cite{KL}), but the proof we give applies to complexes whose homology need not be finitely generated.
\begin{lemma}
\label{le:prop1}
Let $\varphi\colon (R,\mathfrak{m},k) \to (S,\mathfrak{n},l)$ be a homomorphism of local rings such that $\varphi(\mathfrak{m}) \subseteq \mathfrak{n}^{c(S)}$.
Let $M$ be an $R$-complex.
If there is an integer $t$ such that $\operatorname{Tor}_i^R(M,S)=0$ for $t\leqslant i\leqslant t+\operatorname{dim} S$, then
\[
\operatorname{Tor}_{t+\operatorname{dim} S}^R(M,k)=0\,.
\]
If moreover the $R$-module $\operatorname{H}(M)$ is finitely generated and $t\geqslant \sup \operatorname{H}(M)-\operatorname{dim} S$, the flat dimension of $M$ is at most $t+\operatorname{dim} S$.
\end{lemma}
\begin{proof}
Set $d:=\operatorname{dim} S$ and let $\boldsymbol{y}$ be an s.o.p of $S$ such that $c(S)=\ell\ell_{\mathsf D(S)}\kos {\boldsymbol{y}}S$. The hypothesis on $\varphi$ and Lemma \ref{le:lem1} yield $\operatorname{Tor}_{t+d}^R(M, \kos {\boldsymbol{y}}S)=0$. It then follows from Proposition \ref{pr:tor-iso} that $\operatorname{Tor}_{t+d}^R(M,k)=0$, since $\operatorname{H}_0(\kos {\boldsymbol{y}}S)\neq 0$.
Given this, and the additional hypotheses on $\operatorname{H}(M)$ and $t$, the desired result follows from the existence of minimal resolutions; see \cite[Proposition~5.5(F)]{AF}.
\end{proof}
\section{Finite flat dimension}
\label{sec:proof}
This section contains a proof of Theorem~\ref{th:main}. In preparation, we recall that an $R$-complex has \emph{finite flat dimension} if it is isomorphic in $\mathsf D(R)$ to a bounded complex of flat $R$-modules. The following result is \cite[Theorem 4.1]{CIM}.
\begin{remark}
\label{re:CIM}
Let $R$ be a Noetherian ring and $M$ an $R$-complex. If there exists an integer $n\geqslant \sup \operatorname{H}_{*}(M)+\operatorname{dim} R$ with $\operatorname{Tor}_n^{R_{\mathfrak{p}}}(M_{\mathfrak{p}},k(\mathfrak{p}))=0$ for all $\mathfrak{p}\in \operatorname{Spec} R$ then the flat dimension of $M$ is finite.
\end{remark}
In what follows, given an endomorphism $\phi\colon R\to R$ and an $R$-complex $M$, we write ${}^{\phi}\!M$ for $M$ viewed as an $R$-complex via $\phi$.
\begin{proof}[Proof of Theorem \ref{th:main}]
Recall that $R$ is a Noetherian ring of prime characteristic $p$ and $f\colon R\to R$ is the Frobenius endomorphism.
(1)$\Rightarrow$(2): Fix an integer $e\geqslant 1$ and set $r:=\sup \operatorname{Tor}_{*}^R(M, {}^{f^{e}}\!R)$. Since $\operatorname{flat\,dim}_R M$ is finite, $r<\infty$ holds. The desired result is that $r\leqslant s$.
Pick a prime ideal $\mathfrak{p}$ associated to $\operatorname{Tor}_{r}^R(M, {}^{f^{e}}\!R)$. Since Frobenius commutes with localization one has
\[
\operatorname{Tor}_{r}^R(M, {}^{f^{e}}\!R)_{\mathfrak{p}} \cong \operatorname{Tor}_{r}^{R_{\mathfrak{p}}}(M_{\mathfrak{p}}, {}^{f^{e}}\!R_{\mathfrak{p}})
\]
as $R_{\mathfrak{p}}$-modules. Moreover $\operatorname{flat\,dim}_{R_{\mathfrak{p}}}M_{\mathfrak{p}}$ is finite. Thus replacing $R$ and $M$ by their localizations at $\mathfrak{p}$ we get that the maximal ideal of $R$ is associated to $\operatorname{Tor}_{r}^R(M, {}^{f^{e}}\!R)$; that is to say, the depth of the latter module is zero.
The next step uses some results concerning depth for complexes; see~\cite{FI}. Given the conclusion of the last paragraph, \cite[2.7]{FI} yields the last equality below.
\begin{align*}
\operatorname{depth} R - \sup\operatorname{Tor}_{*}^{R}(k,M)
&= \operatorname{depth}_{R}({}^{f^{e}}\!R) - \sup\operatorname{Tor}_{*}^{R}(k,M) \\
&= \operatorname{depth}_{R}(M\otimes^{\mathbf L}_{R}{}^{f^{e}}\!R)\\
&=-r
\end{align*}
The second one is by \cite[Theorem 2.4]{FI}. The same results also yield
\[
\operatorname{depth} R - \sup\operatorname{Tor}_{*}^{R}(k,M) = \operatorname{depth}_{R}M \geqslant - \sup\operatorname{H}_{*}(M) =-s
\]
It follows that $-r\geqslant -s$, that is to say, $r\leqslant s$, which is the desired conclusion.
(3)$\Rightarrow$(1): Let $d=\operatorname{dim} R$. By Remark~\ref{re:CIM}, it suffices to verify that
\begin{equation}
\label{eq:proof}
\operatorname{Tor}^{R_{\mathfrak{p}}}_{t+d}(M_{\mathfrak{p}}, k(\mathfrak{p}))=0\quad\text{for all $\mathfrak{p}\in \operatorname{Spec} R$}.
\end{equation}
Fix $\mathfrak{p}\in \operatorname{Spec} R$ and choose $e$ such that $p^e\geqslant c(R_{\mathfrak{p}})$ and $\operatorname{Tor}^{R}_{i}(M,{}^{f^{e}}\!R)=0$ for $t\leqslant i\leqslant t+d$; such an $e$ exists by our hypothesis. As the Frobenius map commutes with localization, one gets
\[
\operatorname{Tor}_i^{R_{\mathfrak{p}}}(M_{\mathfrak{p}}, {}^{f^{e}}\!(R_{\mathfrak{p}}))=0\quad \text{for $t\leqslant i \leqslant t+d$}.
\]
The choice of $e$ ensures that $f^{e}(\mathfrak{p} R_{\mathfrak{p}})\subseteq \mathfrak{p}^{c(R_{\mathfrak{p}})}R_{\mathfrak{p}}$. Thus, Lemma~\ref{le:prop1} applied to the Frobenius endomorphism $R_{\mathfrak{p}}\to R_{\mathfrak{p}}$ yields $\operatorname{Tor}_{t+d}^{R_{\mathfrak{p}}}(M_{\mathfrak{p}}, k(\mathfrak{p}))=0$, as desired.
Assume now that $R$ is Cohen-Macaulay and that the vanishing in (3) holds for some $e\geqslant \log_{p}e(R)$. It is a routine exercise to verify that the hypotheses remain unchanged, and that the desired conclusion can be verified, after passage to faithfully flat extensions. One can thus assume that the residue field $k$ is infinite; see \cite[IX.37]{Bo}. Then, by the choice of $e$ and Lemma~\ref{le:CM}, one gets that $p^{e}\geqslant c(R_{\mathfrak{p}})$ for each $\mathfrak{p}$ in $\operatorname{Spec} R$. Then one can argue as above to deduce that \eqref{eq:proof} holds, and that yields the finiteness of the flat dimension of $M$.
\end{proof}
\end{document}
|
\begin{document}
\title{Fully Computable Error Bounds for Eigenvalue Problem\footnote{The work
of Hehu Xie is supported in part by the National Natural Science Foundation
of China (NSFC 91330202, 11371026, 11001259, 11031006, 2011CB309703) and
the National Center for Mathematics and Interdisciplinary Sciences, CAS.}}
\author{Hehu Xie\footnote{LSEC, ICMSEC, Academy of Mathematics and Systems Science,
Chinese Academy of Sciences, Beijing 100190, P.R. China ([email protected])},\ \
Meiling Yue\footnote{LSEC, ICMSEC, Academy of Mathematics and Systems Science,
Chinese Academy of Sciences, Beijing 100190, P.R. China ([email protected])}\ \ \
and \ Ning Zhang\footnote{LSEC, ICMSEC, Academy of Mathematics and Systems Science,
Chinese Academy of Sciences, Beijing 100190, P.R. China ([email protected])}}
\date{}
\maketitle
\begin{abstract}
This paper is concerned with computable error estimates for eigenvalue
problems solved by general conforming finite element methods on
general meshes. Based on the computable error estimate, we can give
asymptotic lower bounds of general eigenvalues. Furthermore, we also give a guaranteed
upper bound of the error estimate for the first eigenfunction approximation and
a guaranteed lower bound of the first eigenvalue based on the
computable error estimator. Some numerical examples are presented to validate the theoretical results
deduced in this paper.
\vskip0.3cm {\bf Keywords.} Eigenvalue problem, computable error estimate,
guaranteed upper bound, guaranteed lower bound, complementary method.
\vskip0.2cm {\bf AMS subject classifications.} 65N30, 65N25, 65L15, 65B99.
\end{abstract}
\section{Introduction}
This paper is concerned with computable error estimates for the eigenvalue problem solved by the finite element
method. As we know, a priori error estimates can only give the asymptotic convergence order, while a posteriori
error estimates are very important for the mesh adaptation process. For a posteriori error estimates for
partial differential equations solved by the finite element method, please refer to
\cite{AinsworthOden,BabuskaRheinboldt_1,BabuskaRheinboldt_2,BrennerScott,NeittaanmakiRepin,Repin,Verfurth}
and the references cited therein.
It is well known that the numerical approximations produced by conforming finite element methods are upper
bounds of the exact eigenvalues. Recently, how to obtain lower bounds of the desired eigenvalues
has become a hot topic, since it has many applications
in some classical problems \cite{ArmentanoDuran,CarstensenGallistl,CarstensenGedicke,HuHuangLin,
LinLuoXie_lowerbound,LinXie_lowerbound,LinXieLuoLiYang,Liu,LiuOishi,SebestovaVejchodsky,YangZhangLin,ZhangYangChen}.
So far, nonconforming finite element methods, interpolation-constant-based methods and computable
error estimate methods have been developed. The nonconforming finite element methods can only
produce asymptotic lower bounds with the lowest order accuracy.
The interpolation constant method can only achieve efficient lowest order accuracy on quasi-uniform meshes.
The interesting computable error method needs the condition that the numerical approximation is closer to
the first eigenvalue than to the second one. But the paper \cite{SebestovaVejchodsky} gives us a clue.
This paper gives computable error estimates for the eigenpair approximations. We produce a guaranteed
upper bound of the error estimate for the first eigenfunction approximation and then a
guaranteed lower bound of the first eigenvalue. The approach is based on the complementary energy method from
\cite{HaslingerHlavacek,NeittaanmakiRepin,Repin,Vejchodsky_1,Vejchodsky_2} coupled with the upper and lower
bounds of the eigenvalues obtained by the conforming and nonconforming finite element methods.
The first eigenvalue is the key information in many practical applications such as the Friedrichs, Poincar\'{e}, trace and
similar inequalities (cf. \cite{SebestovaVejchodsky}). Thus two-sided bounds of the first eigenvalue of partial
differential operators are very important.
Further, the proposed computable error estimates are asymptotically exact for the general eigenpair approximations
which are obtained by the conforming finite element method.
Based on this property, we can provide asymptotic lower bounds for general eigenvalues computed by the finite element method.
The most important feature and contribution of this paper is that the method provides
reasonable accuracy even on general regular meshes, which is different from the existing methods.
An outline of the paper goes as follows. In Section \ref{Section_FEM}, we introduce the
finite element method for the eigenvalue problem and the corresponding basic
error estimates. The computable error estimates for the eigenfunction approximations
and the corresponding upper-bound properties are given in Section \ref{Section_Upper_Bound}.
In Section \ref{Section_Lower_Bound}, lower bounds of eigenvalues are obtained
based on the results in Section \ref{Section_Upper_Bound}. Some numerical examples are presented
to validate our theoretical analysis in Section \ref{Section_Numerical_Examples}.
Some concluding remarks are given in the last section.
\section{Finite element method for eigenvalue problem}\label{Section_FEM}
This section is devoted to introducing some notation and the finite element
method for eigenvalue problem. In this paper, the standard notation
for Sobolev spaces $H^s(\Omega)$ and $H({\rm div};\Omega)$ and their
associated norms and semi-norms \cite{Adams} will be used. We denote
$H_0^1(\Omega)=\{v\in H^1(\Omega):\ v|_{\partial\Omega}=0\}$,
where $v|_{\partial\Omega}=0$ is in the sense of trace. The letter $C$ (with or without subscripts)
denotes a generic positive constant which may be different at its different occurrences
in the paper.
For simplicity, this paper is concerned with the following model problem:
Find $(\lambda, u)$ such that
\begin{equation}\label{LaplaceEigenProblem}
\left\{
\begin{array}{rcl}
-\Delta u+ u&=&\lambda u, \quad {\rm in} \ \Omega,\\
u&=&0, \ \ \quad {\rm on}\ \partial\Omega,
\end{array}
\right.
\end{equation}
where $\Omega\subset\mathcal{R}^d$ $(d=2,3)$ is a bounded domain with
Lipschitz boundary $\partial\Omega$ and $\Delta$ denotes the Laplacian operator.
We will find that the method in this paper can easily be extended to more general
eigenvalue problems.
In order to use the finite element method to solve
the eigenvalue problem (\ref{LaplaceEigenProblem}), we need to define
the corresponding variational form as follows:
Find $(\lambda, u )\in \mathcal{R}\times V$ such that
\begin{eqnarray}\label{weak_eigenvalue_problem}
a(u,v)&=&\lambda b(u,v),\quad \forall v\in V,
\end{eqnarray}
where $V:=H_0^1(\Omega)$ and
\begin{equation}\label{inner_product_a_b}
a(u,v)=\int_{\Omega}\big(\nabla u\cdot\nabla v + uv\big)d\Omega,
\ \ \ \ \ \ b(u,v) = \int_{\Omega}uv d\Omega.
\end{equation}
The norms $\|\cdot\|_a$ and $\|\cdot\|_b$ are defined by
\begin{eqnarray*}
\|v\|_a=\sqrt{a(v,v)}\ \ \ \ \ {\rm and}\ \ \ \ \ \|v\|_b=\sqrt{b(v,v)}.
\end{eqnarray*}
It is well known that the eigenvalue problem (\ref{weak_eigenvalue_problem})
has an eigenvalue sequence $\{\lambda_j \}$ (cf. \cite{BabuskaOsborn_Book,Chatelin}):
$$0<\lambda_1 < \lambda_2\leq\cdots\leq\lambda_k\leq\cdots,\ \ \
\lim_{k\rightarrow\infty}\lambda_k=\infty,$$ and associated
eigenfunctions
$$u_1, u_2, \cdots, u_k, \cdots,$$
where $b(u_i,u_j)=0$ for $i\neq j$. The first eigenvalue $\lambda_1$ is simple and,
in the sequence $\{\lambda_j\}$, each eigenvalue is repeated according to its
geometric multiplicity.
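For orientation, we note a simple explicit case (not needed for the analysis below): on the unit square
$\Omega=(0,1)\times (0,1)$ the eigenpairs of (\ref{LaplaceEigenProblem}) are
$$\lambda_{m,n}=1+(m^2+n^2)\pi^2,\ \ \ u_{m,n}(x,y)=\sin(m\pi x)\sin(n\pi y),\ \ \ m,n\geq 1,$$
so that, in particular, the first eigenvalue is $\lambda_1=1+2\pi^2$; this value is used in the numerical
tests of Section \ref{Section_Numerical_Examples}.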
Now, we introduce the finite element method for the eigenvalue problem
(\ref{weak_eigenvalue_problem}). First we decompose the computing domain
$\Omega\subset \mathcal{R}^d\ (d=2,3)$
into shape-regular triangles or rectangles for $d=2$ (tetrahedrons or
hexahedrons for $d=3$) to produce the mesh $\mathcal{T}_h$ (cf. \cite{BrennerScott,Ciarlet}).
In this paper, we use $\mathcal{E}_h$ to denote the set of interior faces (edges or sides)
of $\mathcal{T}_h$.
The diameter of a cell $K\in\mathcal{T}_h$ is denoted by $h_K$ and
the mesh size $h$ describes the maximum diameter of all cells
$K\in\mathcal{T}_h$. Based on the mesh $\mathcal{T}_h$, we can
construct a finite element space denoted by $V_h \subset V$.
For simplicity, we only consider the Lagrange type conforming finite element space
which is defined as follows
\begin{equation}\label{linear_fe_space}
V_h = \big\{ v_h \in C(\Omega)\ \big|\ v_h|_{K} \in \mathcal{P}_k,
\ \ \forall K \in \mathcal{T}_h\big\}\cap H_0^1(\Omega),
\end{equation}
where $\mathcal{P}_k$ denotes the space of polynomials of degree at most $k$.
We define the standard finite element scheme for the eigenvalue
problem (\ref{weak_eigenvalue_problem}) as follows:
Find $(\lambda_h, u_h)\in \mathcal{R}\times V_h$
such that $b(u_h,u_h)=1$ and
\begin{eqnarray}\label{Weak_Eigenvalue_Discrete}
a(u_h,v_h)
&=&\lambda_h b(u_h,v_h),\quad\ \ \ \forall v_h\in V_h.
\end{eqnarray}
From \cite{BabuskaOsborn_1989,BabuskaOsborn_Book,Chatelin},
the discrete eigenvalue problem (\ref{Weak_Eigenvalue_Discrete}) has eigenvalues:
$$0<\lambda_{1,h}<\lambda_{2,h}\leq \cdots\leq \lambda_{k,h}
\leq\cdots\leq\lambda_{N_h,h},$$
and corresponding eigenfunctions
$$u_{1,h},\cdots, u_{k,h}, \cdots, u_{N_h,h},$$
where $b(u_{i,h},u_{j,h})=\delta_{ij}$ ($\delta_{ij}$ denotes the Kronecker delta),
when $1\leq i, j\leq N_h$ ($N_h$ is
the dimension of the finite element space $V_h$).
Let $M(\lambda_i)$ denote the eigenspace corresponding to the
eigenvalue $\lambda_i$ which is defined by
\begin{eqnarray*}
M(\lambda_i)&=&\big\{w\in H_0^1(\Omega): w\ {\rm is\ an\ eigenfunction\ of\
(\ref{weak_eigenvalue_problem})}\nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm corresponding\ to}\ \lambda_i \big\},
\end{eqnarray*}
and define
\begin{eqnarray}
\delta_h(\lambda_i)=\sup_{w\in M(\lambda_i), \|w\|_b=1}\inf_{v_h\in
V_h}\|w-v_h\|_{a}.
\end{eqnarray}
We also define the following quantity:
\begin{eqnarray}
\eta_{a}(h)&=&\sup_{f\in L^2(\Omega),\|f\|_b=1}\inf_{v_h\in V_h}\|Tf-v_h\|_{a},\label{eta_a_h_Def}
\end{eqnarray}
where $T:L^2(\Omega)\rightarrow V$ is defined as
\begin{equation}\label{laplace_source_operator}
a(Tf,v) = b(f,v), \ \ \ \ \ \forall f \in L^2(\Omega) \ \ \ {\rm and}\ \ \ \forall v\in V.
\end{equation}
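For example (a standard fact recalled here only for orientation, under the additional assumption that
$\Omega$ is convex): the solution operator $T$ then maps $L^2(\Omega)$ into $H^2(\Omega)\cap H_0^1(\Omega)$
with $\|Tf\|_{2}\leq C\|f\|_b$, and standard interpolation estimates for the space (\ref{linear_fe_space})
give $\eta_a(h)\leq Ch$; in particular $\eta_a(h)\rightarrow 0$ as $h\rightarrow 0$.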
Then the error estimates for the eigenpair approximations by the finite
element method can be described as follows.
\begin{lemma}(\cite[Lemma 3.6, Theorem 4.4]{BabuskaOsborn_1989} and \cite{Chatelin})
\label{Err_Eigen_Global_Lem}
There exists the exact eigenpair $(\lambda_i,u_i)$ of (\ref{weak_eigenvalue_problem}) such that
each eigenpair approximation
$(\lambda_{i,h},u_{i,h})$ $(i = 1, 2, \cdots, N_h)$ of
(\ref{Weak_Eigenvalue_Discrete}) has the following error estimates
\begin{eqnarray}
\|u_i-u_{i,h}\|_{a}
&\leq& \big(1+C_i\eta_a(h)\big)\delta_h(\lambda_i),\label{Err_Eigenfunction_Global_1_Norm} \\
\|u_i-u_{i,h}\|_{b}
&\leq& C_i\eta_{a}(h)\|u_i - u_{i,h}\|_{a},\label{Err_Eigenfunction_Global_0_Norm}\\
|\lambda_i-\lambda_{i,h}|
&\leq&C_i\|u_i - u_{i,h}\|_{a}^2\leq C_i\eta_a(h)\|u_i-u_{i,h}\|_a.\label{Estimate_Eigenvalue}
\end{eqnarray}
Here and hereafter $C_i$ is some constant depending on $i$ but independent of the mesh size $h$.
\end{lemma}
\section{Complementarity based error estimate}\label{Section_Upper_Bound}
In this section, we derive a computable error estimate for the eigenfunction approximations
based on complementarity approach. A guaranteed upper bound of the error estimate for the first
eigenfunction approximation is designed based on the lower bounds of the second eigenvalue.
We also produce an asymptotically upper bound error estimate for the general eigenfunction approximations
which are obtained by solving the discrete eigenvalue problem (\ref{Weak_Eigenvalue_Discrete}).
First, we recall the following divergence theorem
\begin{eqnarray}\label{Divergence_Equality}
\int_{\Omega}v{\rm div}\mathbf zd\Omega+\int_{\Omega}\mathbf z\cdot\nabla vd\Omega
=\int_{\partial\Omega}v\mathbf z\cdot\nu ds,\ \ \
\forall v\in V,\ \forall \mathbf z\in \mathbf W,
\end{eqnarray}
where $\mathbf W:=H({\rm div};\Omega)$ and $\nu$ denotes the unit outward normal to $\partial\Omega$.
We first give a guaranteed upper bound of the error estimate for the first eigenfunction
approximation; the method used here is independent of the way the approximation is obtained.
We only consider the eigenfunction approximation $\widehat u_1\in V$ and estimate the error $e=u_1-\widehat u_1$,
regardless of how $\widehat u_1$ is obtained. In this paper, we let $b(\widehat u_1,\widehat u_1)=1$
and the eigenvalue approximation $\widehat\lambda_1$
is determined as follows
\begin{eqnarray*}
\widehat\lambda_1=\frac{a(\widehat u_1,\widehat u_1)}{b(\widehat u_1,\widehat u_1)}=a(\widehat u_1,\widehat u_1).
\end{eqnarray*}
\begin{theorem}\label{Theorem_Upper_Bound}
Assume we have an eigenpair approximation $(\widehat\lambda_1,\widehat u_1)\in \mathcal{R}\times V$
corresponding to the first eigenvalue $\lambda_1$ and a lower bound eigenvalue approximation $\lambda_{2}^L$ of
the second eigenvalue $\lambda_2$ such that $\lambda_1\leq \widehat\lambda_1 < \lambda_2^L\leq \lambda_2$.
There exists an exact eigenfunction $u_1\in M(\lambda_1)$ such that the error estimate for
the first eigenfunction approximation $\widehat u_1\in V$ with $b(\widehat u_1,\widehat u_1)=1$ has the following guaranteed upper bound
\begin{eqnarray}\label{Upper_Bound}
\|u_1-\widehat u_1\|_a&\leq& \frac{\lambda_{2}^L}{\lambda_{2}^L-\widehat \lambda_1}\eta(\widehat \lambda_1,\widehat u_1,\mathbf y),
\ \ \ \forall \mathbf y\in \mathbf W,
\end{eqnarray}
where $\eta(\widehat\lambda_1,\widehat u_1,\mathbf y)$ is defined as follows
\begin{eqnarray}\label{Definition_Eta}
\eta(\widehat\lambda_1,\widehat u_1,\mathbf y):=\big(\|\widehat\lambda_1\widehat u_1-\widehat u_1+{\rm div}\mathbf y\|_0^2+\|\mathbf y-\nabla \widehat u_1\|_0^2\big)^{1/2}.
\end{eqnarray}
\end{theorem}
\begin{proof}
We can choose $u_1\in M(\lambda_1)$ such that $b(v,u_1-\widehat u_1)=0$ for any $v\in M(\lambda_1)$.
Now we set $w=u_1-\widehat u_1$. Note that $b(u_1,w)=0$ by this choice of $u_1$, and that
$\int_{\Omega}w{\rm div}\mathbf yd\Omega+\int_{\Omega}\mathbf y\cdot\nabla wd\Omega=0$ for any
$\mathbf y\in\mathbf W$ by (\ref{Divergence_Equality}), since $w\in V$. Then the following estimates hold
\begin{eqnarray}\label{Inequality_1}
&&a(u_1-\widehat u_1,w)-\widehat\lambda_1b(u_1-\widehat u_1,w)\nonumber\\
&=& \int_{\Omega}\lambda_1 u_1w d\Omega-\int_{\Omega}\nabla \widehat u_1\cdot\nabla wd\Omega
-\int_{\Omega}\widehat u_1wd\Omega-\widehat\lambda_1\int_{\Omega}u_1wd\Omega\nonumber\\
&&\ \ +\widehat\lambda_1\int_{\Omega}\widehat u_1wd\Omega+\int_{\Omega}w{\rm div}\mathbf yd\Omega
+\int_{\Omega}\mathbf y\cdot\nabla wd\Omega\nonumber\\
&=&\int_{\Omega}\big(\widehat\lambda_1\widehat u_1-\widehat u_1+{\rm div}\mathbf y\big)wd\Omega
+\int_{\Omega}\big(\mathbf y-\nabla \widehat u_1\big)\cdot\nabla wd\Omega\nonumber\\
&\leq& \|\widehat\lambda_1\widehat u_1-\widehat u_1+{\rm div}\mathbf y\|_0\|w\|_0+\|\mathbf y-\nabla \widehat u_1\|_0\|\nabla w\|_0\nonumber\\
&\leq& \big(\|\widehat\lambda_1\widehat u_1-\widehat u_1+{\rm div}\mathbf y\|_0^2+\|\mathbf y-\nabla \widehat u_1\|_0^2\big)^{1/2}\|w\|_a,
\ \ \ \forall\mathbf y\in \mathbf W.
\end{eqnarray}
Since $b(v,u_1-\widehat u_1)=0$ for any $v\in M(\lambda_1)$, the following inequalities hold
\begin{eqnarray}\label{Inequality_2}
\frac{\|w\|_a^2}{\|w\|_b^2}\geq \lambda_2\geq \lambda_{2}^L.
\end{eqnarray}
Combining (\ref{Inequality_1}) and (\ref{Inequality_2}) leads to the following estimate
\begin{eqnarray*}
\Big(1-\frac{\widehat\lambda_1}{\lambda_{2}^L}\Big)\|w\|_a^2 &\leq& \eta(\widehat\lambda_1,\widehat u_1,\mathbf y)\|w\|_a,
\ \ \ \forall\mathbf y\in \mathbf W.
\end{eqnarray*}
It means that we have
\begin{eqnarray*}
\|w\|_a&\leq& \frac{\lambda_{2}^L}{\lambda_{2}^L-\widehat\lambda_1}\eta(\widehat\lambda_1,\widehat u_1,\mathbf y),
\ \ \ \forall\mathbf y\in\mathbf W.
\end{eqnarray*}
This is the desired result (\ref{Upper_Bound}) and the proof is complete.
\end{proof}
A natural problem is to minimize $\eta(\widehat\lambda,\widehat u,\mathbf y)$ over $\mathbf W$ for a fixed
eigenpair approximation $(\widehat\lambda,\widehat u)$. For this aim, we define the minimization problem:
Find $\mathbf y^*\in\mathbf W$ such that
\begin{eqnarray}
\eta(\widehat\lambda, \widehat u,\mathbf y^*) \leq \eta(\widehat \lambda, \widehat u,\mathbf y),
\ \ \ \ \forall \mathbf y\in \mathbf W.
\end{eqnarray}
From \cite{Vejchodsky_1,Vejchodsky_2}, the optimization problem is equivalent to the following partial differential equation:
Find $\mathbf y^*\in \mathbf W$ such that
\begin{eqnarray}\label{Dual_Problem}
a^*(\mathbf y^*,\mathbf z)&=& \mathcal F^*(\widehat\lambda,\widehat u, \mathbf z),
\ \ \ \ \forall \mathbf z\in \mathbf W,
\end{eqnarray}
where
\begin{eqnarray*}
a^*(\mathbf y^*,\mathbf z)=\int_{\Omega}\big({\rm div}\mathbf y^*{\rm div}\mathbf z
+\mathbf y^*\cdot\mathbf z\big)d\Omega, \ \ \
\mathcal F^*(\widehat \lambda,\widehat u, \mathbf z)=-\int_{\Omega}\widehat\lambda\widehat u{\rm div}\mathbf zd\Omega.
\end{eqnarray*}
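For the reader's convenience we sketch why (\ref{Dual_Problem}) is the optimality condition; this
computation is implicit in the references above and is not claimed as new. Setting the G\^{a}teaux derivative
of $\eta^2(\widehat\lambda,\widehat u,\cdot)$ at $\mathbf y^*$ in an arbitrary direction $\mathbf z\in\mathbf W$
to zero gives
\begin{eqnarray*}
\int_{\Omega}\big(\widehat\lambda\widehat u-\widehat u+{\rm div}\mathbf y^*\big){\rm div}\mathbf zd\Omega
+\int_{\Omega}\big(\mathbf y^*-\nabla \widehat u\big)\cdot\mathbf zd\Omega&=&0,\ \ \ \forall \mathbf z\in \mathbf W.
\end{eqnarray*}
Since $\int_{\Omega}\nabla\widehat u\cdot\mathbf zd\Omega=-\int_{\Omega}\widehat u{\rm div}\mathbf zd\Omega$
by (\ref{Divergence_Equality}) (recall $\widehat u\in V$), this identity is exactly
$a^*(\mathbf y^*,\mathbf z)=\mathcal F^*(\widehat\lambda,\widehat u, \mathbf z)$.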
It is obvious that $a^*(\cdot,\cdot)$ is an inner product on the space $\mathbf W$ and the corresponding norm
is $\||\mathbf z\||_{*}=\sqrt{a^*(\mathbf z,\mathbf z)}$. From the Riesz representation theorem,
the dual problem (\ref{Dual_Problem})
has a unique solution.
Now, we state some properties for the estimator $\eta(\widehat \lambda,\widehat u,\mathbf y)$.
\begin{lemma}\label{Optimization_Property_Lemma}
Let $\mathbf y^*$ be the solution of the dual problem (\ref{Dual_Problem}) and
let $\widehat \lambda\in \mathcal{R}$,
$\widehat u\in V$ and $\mathbf y\in\mathbf W$ be arbitrary. Then the following equality holds
\begin{eqnarray}\label{Optimization_Property}
\eta^2(\widehat \lambda,\widehat u,\mathbf y)&=&\eta^2(\widehat \lambda,\widehat u,\mathbf y^*)
+\||\mathbf y^*-\mathbf y\||_*^2.
\end{eqnarray}
\end{lemma}
In order to obtain a computable error estimate, a reasonable choice is an approximate
solution $\mathbf y_h\in \mathbf W$ of the dual problem (\ref{Dual_Problem}). Then we can give a
guaranteed upper bound of the error estimate for the first eigenfunction approximation.
\begin{corollary}
Under the conditions of Theorem \ref{Theorem_Upper_Bound},
there exists an exact eigenfunction $u_1\in M(\lambda_1)$
such that the error estimate for the eigenpair approximation
$(\widehat \lambda_1,\widehat u_1)$ has the following upper bound
\begin{eqnarray}\label{Upper_Bound_Computable}
\|u_1-\widehat u_1\|_a&\leq& \frac{\lambda_{2}^L}{\lambda_{2}^L-\widehat \lambda_1}
\eta(\widehat \lambda_1,\widehat u_1,\mathbf y_h),
\end{eqnarray}
where $\mathbf y_h\in \mathbf W$ is a reasonable approximate solution
of the dual problem (\ref{Dual_Problem})
with $\widehat \lambda=\widehat \lambda_1$ and $\widehat u=\widehat u_1$.
\end{corollary}
We would like to point out that the quantity $\eta(\lambda_{i,h},u_{i,h},\mathbf y^*)$, where $\mathbf y^*\in \mathbf W$
is the solution of (\ref{Dual_Problem}) with $\widehat\lambda =\lambda_{i,h}$ and $\widehat u=u_{i,h}$, is
an asymptotically exact error estimate for the
eigenfunction approximation $u_{i,h}$ when the eigenpair approximation is obtained by solving
the discrete eigenvalue problem (\ref{Weak_Eigenvalue_Discrete}). Now, let us
discuss the efficiency of the a posteriori error estimate $\eta(\lambda_{i,h},u_{i,h},\mathbf y^*)$ and
$\eta(\lambda_{i,h},u_{i,h},\mathbf y_h)$.
\begin{theorem}\label{Efficiency_Theorem}
Let $(\lambda_{i,h},u_{i,h})$ be an eigenpair approximation of the discrete eigenvalue
problem (\ref{Weak_Eigenvalue_Discrete})
corresponding to the eigenvalue $\lambda_i$. Then there exists an exact eigenfunction $u_i\in M(\lambda_i)$
such that $\eta(\lambda_{i,h},u_{i,h},\mathbf y^*)$ has the following inequalities
\begin{eqnarray}\label{Efficiency}
\theta_{1,i}\|u_i-u_{i,h}\|_a \leq
\eta(\lambda_{i,h},u_{i,h},\mathbf y^*)\leq \theta_{2,i}\|u_i-u_{i,h}\|_a,
\end{eqnarray}
where $\mathbf y^*\in \mathbf W$ is the solution of the dual problem (\ref{Dual_Problem})
with $\widehat \lambda= \lambda_{i,h}$ and $\widehat u=u_{i,h}$ and
\begin{eqnarray}\label{Theta_1_and_2}
\theta_{1,i}:=(1-C_i^2\lambda_{i,h}\eta_a^2(h))\ \ \ {\rm and}\ \ \
\theta_{2,i}:=\sqrt{1+\big(2(\lambda_i-1)^2+1\big)C_i^2\eta_a^2(h)}.
\end{eqnarray}
Further, we have the following asymptotic exactness
\begin{eqnarray}\label{Exactness}
\lim_{h\rightarrow 0}\frac{\eta(\lambda_{i,h},u_{i,h},\mathbf y^*)}{\|u_i-u_{i,h}\|_a}=1.
\end{eqnarray}
\end{theorem}
\begin{proof}
Similarly, we can also choose $u_i\in M(\lambda_i)$ such that $b(v,u_i-u_{i,h})=0$ for any $v\in M(\lambda_i)$.
Then from the similar process in (\ref{Inequality_1}), we have
\begin{eqnarray}
&&\|u_i-u_{i,h}\|_a^2\leq \eta(\lambda_{i,h},u_{i,h},\mathbf y)\|u_i-u_{i,h}\|_a
+\lambda_{i,h}\|u_i-u_{i,h}\|_b^2\nonumber\\
&&\leq \eta(\lambda_{i,h},u_{i,h},\mathbf y)\|u_i-u_{i,h}\|_a
+C_i^2\lambda_{i,h}\eta_a^2(h)\|u_i-u_{i,h}\|_a^2,\ \ \ \forall \mathbf y\in\mathbf W.
\end{eqnarray}
It leads to
\begin{eqnarray}\label{Inequality_6}
\|u_i-u_{i,h}\|_a &\leq& \frac{1}{1-C_i^2\lambda_{i,h}\eta_a^2(h)}\eta(\lambda_{i,h},u_{i,h},\mathbf y),
\ \ \ \forall \mathbf y\in\mathbf W.
\end{eqnarray}
From the definition (\ref{Definition_Eta}), the eigenvalue problem (\ref{LaplaceEigenProblem})
and $\nabla u_i\in\mathbf W$, we have
\begin{eqnarray}\label{Inequality_4}
\eta^2(\lambda_{i,h},u_{i,h},\nabla u_i)=\|\nabla u_{i,h}-\nabla u_i\|_b^2
+\|(\lambda_{i,h}-1)u_{i,h}-(\lambda_i-1)u_i\|_b^2.
\end{eqnarray}
Then combining (\ref{Optimization_Property}), (\ref{Inequality_4}) and Lemma \ref{Err_Eigen_Global_Lem},
the following estimates hold
\begin{eqnarray}\label{Inequality_3}
&&\eta^2(\lambda_{i,h},u_{i,h},\mathbf y^*)\leq \eta^2(\lambda_{i,h},u_{i,h},\nabla u_i)\nonumber\\
&=&\|\nabla u_{i,h}-\nabla u_i\|_b^2+\|(\lambda_{i,h}-1)u_{i,h}-(\lambda_i-1)u_i\|_b^2\nonumber\\
&=&\|u_i-u_{i,h}\|_a^2+\|(\lambda_{i,h}-1)u_{i,h}-(\lambda_i-1)u_i\|_b^2-\|u_i-u_{i,h}\|_b^2\nonumber\\
&=& \|u_i-u_{i,h}\|_a^2+\|(\lambda_{i,h}-\lambda_i)u_{i,h}+(\lambda_i-1)(u_{i,h}-u_i)\|_b^2
-\|u_i-u_{i,h}\|_b^2\nonumber\\
&\leq& \|u_i-u_{i,h}\|_a^2 + 2|\lambda_{i,h}-\lambda_i|^2 +\big(2(\lambda_i-1)^2-1\big)\|u_i-u_{i,h}\|_b^2\nonumber\\
&\leq&\big(1+2C_i^2\eta_a^2(h)+\big(2(\lambda_i-1)^2-1\big)C_i^2\eta_a^2(h)\big)\|u_i-u_{i,h}\|_a^2\nonumber\\
&\leq&\big(1+\big(2(\lambda_i-1)^2+1\big)C_i^2\eta_a^2(h)\big)\|u_i-u_{i,h}\|_a^2,
\end{eqnarray}
where we used the estimate
\begin{eqnarray*}
\lambda_{i,h}-\lambda_i &\leq& C_i\eta_a(h)\|u_i-u_{i,h}\|_a.
\end{eqnarray*}
The inequality (\ref{Inequality_3}) leads to the following estimate
\begin{eqnarray}\label{Inequality_5}
\eta(\lambda_{i,h},u_{i,h},\mathbf y^*)&\leq&\sqrt{1+\big(2(\lambda_i-1)^2+1\big)C_i^2\eta_a^2(h)}\|u_i-u_{i,h}\|_a.
\end{eqnarray}
From inequalities (\ref{Optimization_Property}), (\ref{Inequality_6}) and (\ref{Inequality_5}), we
obtain the desired result (\ref{Efficiency}) and (\ref{Exactness}) can be deduced easily from the
fact that $\eta_a(h)\rightarrow 0$ as $h\rightarrow 0$.
\end{proof}
\begin{corollary}\label{Efficiency_h_Eigenfun_Corollary}
Assume the conditions of Theorem \ref{Efficiency_Theorem} hold and there exists a constant $\gamma_i>0$ such that
the approximation $\mathbf y_h$ of $\mathbf y^*$ satisfies
$\||\mathbf y^*-\mathbf y_h\||_*\leq \gamma_i\|u_i-u_{i,h}\|_a$. Then the following efficiency holds
\begin{eqnarray}\label{Efficiency_2}
\eta(\lambda_{i,h},u_{i,h},\mathbf y_h)&\leq&\sqrt{\theta_{2,i}^2+\gamma_i^2}\|u_i-u_{i,h}\|_a.
\end{eqnarray}
Further, the estimator $\eta(\lambda_{i,h},u_{i,h},\mathbf y_h)$ is asymptotically exact if and only if
the following condition holds
\begin{eqnarray}\label{Condition_Exact}
\lim_{h\rightarrow0}\frac{\||\mathbf y^*-\mathbf y_h\||_*}{\|u_i-u_{i,h}\|_a}=0.
\end{eqnarray}
\end{corollary}
\begin{proof}
First from (\ref{Optimization_Property}) and (\ref{Efficiency}), we have
\begin{eqnarray}
\eta^2(\lambda_{i,h},u_{i,h},\mathbf y_h)&=&\eta^2(\lambda_{i,h},u_{i,h},\mathbf y^*)
+\||\mathbf y^*-\mathbf y_h\||_*^2\nonumber\\
&\leq&\theta_{2,i}^2\|u_i-u_{i,h}\|_a^2+\gamma_i^2\|u_i-u_{i,h}\|_a^2\nonumber\\
&\leq&(\theta_{2,i}^2+\gamma_i^2)\|u_i-u_{i,h}\|_a^2.
\end{eqnarray}
Then the desired result (\ref{Efficiency_2}) can be obtained and the asymptotic exactness of the estimator
follows immediately from the condition (\ref{Condition_Exact}).
\end{proof}
\section{Lower bound of the eigenvalue}\label{Section_Lower_Bound}
In this section, based on the guaranteed upper bound for the error estimate of the first eigenfunction approximation,
we give a guaranteed lower bound of the first eigenvalue.
Further, we also give asymptotically lower bounds of the general eigenvalues based on the asymptotically exact
error estimates for the general eigenfunction approximations which are obtained by solving the
discrete finite element eigenvalue problem (\ref{Weak_Eigenvalue_Discrete}).
Actually, the process is very direct since we have
the following Rayleigh quotient expansion which comes from \cite{BabuskaOsborn_1989,BabuskaOsborn_Book}.
\begin{lemma}(\cite{BabuskaOsborn_1989,BabuskaOsborn_Book})\label{Rayleigh_Quotient_error_theorem}
Assume $(\lambda,u)$ is an exact solution of the eigenvalue problem
(\ref{weak_eigenvalue_problem}) and $0\neq \psi\in V$. Let us define
\begin{eqnarray}\label{rayleighw}
\bar{\lambda}=\frac{a(\psi,\psi)}{b(\psi,\psi)}.
\end{eqnarray}
Then we have
\begin{eqnarray}\label{rayexpan}
\bar{\lambda}-\lambda
&=&\frac{a(u-\psi,u-\psi)}{b(\psi,\psi)}-\lambda
\frac{b(u-\psi,u-\psi)}{b(\psi,\psi)}.
\end{eqnarray}
\end{lemma}
\begin{theorem}\label{Guaranteed_Lower_Bound_Theorem}
Assume $\lambda_1$ is the first eigenvalue of the eigenvalue problem (\ref{LaplaceEigenProblem}),
let $(\widehat \lambda_1,\widehat u_1)\in \mathcal{R}\times V$ ($\|\widehat u_1\|_b=1$)
be an approximation of the first eigenpair, and assume the conditions of Theorem \ref{Theorem_Upper_Bound} hold.
Then we have the following estimate
\begin{eqnarray}\label{Upper_Bound_Estimate_Lambda}
\widehat\lambda_1-\lambda_1 &\leq& \left(\frac{\lambda_{2}^L}{\lambda_{2}^L-\widehat\lambda_1}\right)\frac{\lambda_2^L}{\lambda_2^L-\alpha^2\eta^2(\widehat\lambda_1,\widehat u_1,\mathbf y_h)}
\eta^2(\widehat\lambda_1,\widehat u_1,\mathbf y_h),
\end{eqnarray}
where $\alpha=\lambda_{2}^L/(\lambda_{2}^L-\widehat\lambda_1)$ and $\mathbf y_h\in \mathbf W$ is a reasonable approximate solution of the dual problem (\ref{Dual_Problem})
with $\widehat \lambda=\widehat \lambda_1$ and $\widehat u=\widehat u_1$.
Then the following guaranteed lower-bound result holds
\begin{eqnarray}\label{Lower_Bound_Lambda}
\widehat\lambda_1^L:= \widehat\lambda_1-\left(\frac{\lambda_{2}^L}{\lambda_{2}^L-\widehat\lambda_1}\right)\frac{\lambda_2^L}{\lambda_2^L-\alpha^2\eta^2(\widehat\lambda_1,\widehat u_1,\mathbf y_h)}
\eta^2(\widehat\lambda_1,\widehat u_1,\mathbf y_h)
\leq \lambda_1,
\end{eqnarray}
where $\widehat\lambda_1^L$ denotes a lower bound of the first eigenvalue $\lambda_1$.
\end{theorem}
\begin{proof}
Similarly, we can also choose $u_1\in M(\lambda_1)$ such that $b(v,u_1-\widehat u_1)=0$ for any $v\in M(\lambda_1)$.
We also set $w=u_1-\widehat u_1$ and from Lemma \ref{Rayleigh_Quotient_error_theorem}, (\ref{Upper_Bound}), (\ref{Inequality_1})
and $\|\widehat u_1\|_b=1$, we have
\begin{eqnarray}\label{Inequality_9}
\widehat\lambda_1-\lambda_1 -(\widehat\lambda_1-\lambda_1)\|w\|_b^2 &=&
a(u_1-\widehat u_1,u_1-\widehat u_1)-\widehat\lambda_1 b(u_1-\widehat u_1,u_1-\widehat u_1)\nonumber\\
&\leq&\eta(\widehat\lambda_1,\widehat u_1,\mathbf y_h)\|u_1-\widehat u_1\|_a.
\end{eqnarray}
Combining (\ref{Upper_Bound}), (\ref{Inequality_2}) and (\ref{Inequality_9}) leads to the following inequalities
\begin{eqnarray}
\widehat\lambda_1-\lambda_1 &\leq& \frac{\|w\|_a}{1-\|w\|_b^2}\eta(\widehat\lambda_1,\widehat u_1,\mathbf y_h)\nonumber\\
&\leq& \frac{\|w\|_a}{1-\frac{1}{\lambda_2^L}\|w\|_a^2}\eta(\widehat\lambda_1,\widehat u_1,\mathbf y_h)\nonumber\\
&\leq& \alpha\frac{\lambda_2^L}{\lambda_2^L-\alpha^2\eta^2(\widehat\lambda_1,\widehat u_1,\mathbf y_h)}
\eta^2(\widehat\lambda_1,\widehat u_1,\mathbf y_h).
\end{eqnarray}
In the last two steps we used (\ref{Inequality_2}) and the bound
$\|w\|_a\leq\alpha\eta(\widehat\lambda_1,\widehat u_1,\mathbf y_h)$ from (\ref{Upper_Bound}), under the
implicit assumption that $\alpha^2\eta^2(\widehat\lambda_1,\widehat u_1,\mathbf y_h)<\lambda_2^L$, which guarantees
$1-\|w\|_b^2>0$ and that the map $t\mapsto t/(1-t^2/\lambda_2^L)$ is increasing.
This is the desired result (\ref{Upper_Bound_Estimate_Lambda}). The lower bound result (\ref{Lower_Bound_Lambda}) follows directly and
the proof is complete.
\end{proof}
\begin{remark}
From the derivation above (Theorems \ref{Theorem_Upper_Bound} and \ref{Guaranteed_Lower_Bound_Theorem}),
it is easy to see that the present method can also produce guaranteed lower bounds for the first $m$
eigenvalues, provided the separation condition $\lambda_m< \lambda_{m+1}^L\leq \lambda_{m+1}$ holds
and $\lambda_{m+1}^L$ is known.
\end{remark}
\begin{theorem}\label{Efficiency_Eigenvalue_Theorem}
Assume the conditions of Theorem \ref{Efficiency_Theorem} hold.
Then the following inequalities hold
\begin{eqnarray}\label{Efficiency_Eigenvalue}
\frac{1-\lambda_i C_i^2\eta_a^2(h)}{\theta_{2,i}^2}\eta^2(\lambda_{i,h},u_{i,h},\mathbf y^*)
\leq \lambda_{i,h}-\lambda_i\leq \frac{1}{\theta_{1,i}^2}\eta^2(\lambda_{i,h},u_{i,h},\mathbf y^*),
\end{eqnarray}
where $\mathbf y^*\in \mathbf W$ is the solution of the dual problem (\ref{Dual_Problem})
with $\widehat \lambda= \lambda_{i,h}$ and $\widehat u=u_{i,h}$.
Further, we have the following asymptotic exactness
\begin{eqnarray}\label{Exactness_Eigenvalue}
\lim_{h\rightarrow 0}\frac{\lambda_{i,h}-\lambda_i}{\eta^2(\lambda_{i,h},u_{i,h},\mathbf y^*)}=1.
\end{eqnarray}
\end{theorem}
\begin{proof}
From Lemma \ref{Err_Eigen_Global_Lem}, (\ref{Efficiency}) and (\ref{rayexpan}), we have
\begin{eqnarray}\label{Inequality_7}
\lambda_{i,h}-\lambda_i&=&\|u_i-u_{i,h}\|_a^2-\lambda_i\|u_i-u_{i,h}\|_b^2\nonumber\\
&\geq&\|u_i-u_{i,h}\|_a^2-\lambda_i C_i^2\eta_a^2(h)\|u_i-u_{i,h}\|_a^2\nonumber\\
&=&(1-\lambda_i C_i^2\eta_a^2(h))\|u_i-u_{i,h}\|_a^2\nonumber\\
&\geq&\frac{1-\lambda_i C_i^2\eta_a^2(h)}{\theta_{2,i}^2}\eta^2(\lambda_{i,h},u_{i,h},\mathbf y^*).
\end{eqnarray}
From (\ref{Efficiency}) and (\ref{rayexpan}), the following inequalities hold
\begin{eqnarray}\label{Inequality_8}
\lambda_{i,h}-\lambda_i &\leq& \|u_i-u_{i,h}\|_a^2\leq \frac{1}{\theta_{1,i}^2}\eta^2(\lambda_{i,h},u_{i,h},\mathbf y^*).
\end{eqnarray}
The desired result (\ref{Efficiency_Eigenvalue}) can be obtained by
combining (\ref{Inequality_7}) and (\ref{Inequality_8}).
Then we can deduce the asymptotic exactness easily by (\ref{Efficiency_Eigenvalue})
and the property $\eta_a(h)\rightarrow 0$ as $h\rightarrow 0$.
\end{proof}
Based on the result (\ref{Efficiency_Eigenvalue}), we can produce an asymptotically lower bound for the
general eigenvalue $\lambda_i$ by the finite element method.
\begin{corollary}\label{Lower_Bound_Eigen_Corollary}
Under the conditions of Theorem \ref{Efficiency_Eigenvalue_Theorem}, when the mesh size $h$ is small enough,
the following asymptotically lower bound for each eigenvalue $\lambda_i$ holds
\begin{eqnarray}\label{Lower_Bound_Eigen_i}
\lambda_{i,h}^{L}:=\lambda_{i,h}-\kappa \eta^2(\lambda_{i,h},u_{i,h},\mathbf y_h) \leq \lambda_i,
\end{eqnarray}
where $\kappa$ is a number larger than $1$ and
$\mathbf y_h\in \mathbf W$ is a reasonable approximate solution of the dual problem (\ref{Dual_Problem})
with $\widehat \lambda= \lambda_{i,h}$ and $\widehat u=u_{i,h}$.
\end{corollary}
\begin{proof}
From Lemma \ref{Optimization_Property_Lemma} and (\ref{Efficiency_Eigenvalue}), we have the following
inequalities
\begin{eqnarray*}
\lambda_{i,h}-\lambda_i\leq \frac{1}{\theta_{1,i}^2}\eta^2(\lambda_{i,h},u_{i,h},\mathbf y^*) \leq
\frac{1}{\theta_{1,i}^2}\eta^2(\lambda_{i,h},u_{i,h},\mathbf y_h).
\end{eqnarray*}
Combining (\ref{Theta_1_and_2}) and $\eta_a(h)\rightarrow 0$ as $h\rightarrow 0$ leads to
$\theta_{1,i}^2\rightarrow 1$ as $h\rightarrow 0$. Then the lower bound result
(\ref{Lower_Bound_Eigen_i}) holds when the mesh size $h$ is small enough.
\end{proof}
\begin{remark}\label{Lower_Bound_Eigen_Remark}
It is easy to see that the closer $\kappa$ is chosen to $1$, the smaller the mesh size $h$ needs to be.
For example, we can choose $\kappa =2$ and obtain the following eigenvalue approximation
\begin{eqnarray*}
\lambda_{i,h}^{L}:=\lambda_{i,h}-2 \eta^2(\lambda_{i,h},u_{i,h},\mathbf y_h),
\end{eqnarray*}
which is a lower bound of the eigenvalue $\lambda_i$ when $h$ is small enough.
\end{remark}
\begin{corollary}\label{Efficiency_h_Eigenvalue_Corollary}
Assume the conditions of Corollary \ref{Lower_Bound_Eigen_Corollary} hold and there exists a constant $\gamma_i$
such that the approximation $\mathbf y_h$ of $\mathbf y^*$ satisfies
$\||\mathbf y^*-\mathbf y_h\||_*\leq \gamma_i\|u_i-u_{i,h}\|_a$. Then the following efficiency holds
\begin{eqnarray}\label{Efficiency_2_Eigenvalue}
\eta^2(\lambda_{i,h},u_{i,h},\mathbf y_h)&\leq&\Big(1+\frac{\gamma_i^2}{\theta_{1,i}^2}\Big)
\frac{\theta_{2,i}^2}{1-\lambda_i C_i^2\eta_a^2(h)}(\lambda_{i,h}-\lambda_i).
\end{eqnarray}
Further, the estimator $\eta^2(\lambda_{i,h},u_{i,h},\mathbf y_h)$ is asymptotically exact for
the eigenvalue error $\lambda_{i,h}-\lambda_i$ if and only if the condition (\ref{Condition_Exact})
holds.
\end{corollary}
\begin{proof}
First from (\ref{Optimization_Property}), (\ref{Efficiency}) and (\ref{Efficiency_Eigenvalue}),
we have the following estimates
\begin{eqnarray*}
\eta^2(\lambda_{i,h},u_{i,h},\mathbf y_h)&=&\eta^2(\lambda_{i,h},u_{i,h},\mathbf y^*)
+\||\mathbf y^*-\mathbf y_h\||_*^2\nonumber\\
&\leq& \eta^2(\lambda_{i,h},u_{i,h},\mathbf y^*)+\gamma_i^2\|u_i-u_{i,h}\|_a^2\nonumber\\
&\leq& \eta^2(\lambda_{i,h},u_{i,h},\mathbf y^*)+\frac{\gamma_i^2}{\theta_{1,i}^2}
\eta^2(\lambda_{i,h},u_{i,h},\mathbf y^*)\nonumber\\
&\leq&\Big(1+\frac{\gamma_i^2}{\theta_{1,i}^2}\Big)\eta^2(\lambda_{i,h},u_{i,h},\mathbf y^*)\nonumber\\
&\leq&\Big(1+\frac{\gamma_i^2}{\theta_{1,i}^2}\Big)
\frac{\theta_{2,i}^2}{1-\lambda_i C_i^2\eta_a^2(h)}(\lambda_{i,h}-\lambda_i).
\end{eqnarray*}
This is the desired result (\ref{Efficiency_2_Eigenvalue}) and the asymptotic exactness
result follows immediately from the condition (\ref{Condition_Exact}).
\end{proof}
\begin{remark}
From Corollaries \ref{Efficiency_h_Eigenfun_Corollary} and \ref{Efficiency_h_Eigenvalue_Corollary},
the estimators
$\eta(\lambda_{i,h},u_{i,h},\mathbf y_h)$ and $\eta^2(\lambda_{i,h},u_{i,h},\mathbf y_h)$
are asymptotically
exact for $\|u_i-u_{i,h}\|_a$ and $\lambda_{i,h}-\lambda_i$, respectively,
when the condition $\lim\limits_{h\rightarrow 0}\gamma_i=0$ holds.
\end{remark}
\section{Numerical results}\label{Section_Numerical_Examples}
In this section, two numerical examples are presented to validate the efficiency of the
a posteriori error estimate, the guaranteed upper bound of the error estimate and the guaranteed lower bound
of the first eigenvalue proposed in this paper.
In order to give the a posteriori error estimate $\eta(\lambda_{i,h},u_{i,h},\mathbf y_h)$, we need to
solve the dual problem (\ref{Dual_Problem}) to produce the approximation $\mathbf y_h$ of $\mathbf y^*$.
Here, the dual problem (\ref{Dual_Problem}) is solved
using the same mesh $\mathcal{T}_h$.
We solve the dual problem (\ref{Dual_Problem})
to obtain an approximation $\mathbf y_h^*\in \mathbf W_h^p\subset \mathbf W$ with the $H({\rm div};\Omega)$
conforming finite element space $\mathbf W_h^p$ defined as follows \cite{BrezziFortin}
\begin{eqnarray}
\mathbf W_h^p=\big\{\mathbf w\in \mathbf W:\ \mathbf w|_K\in {\rm RT}_p,\ \forall K\in \mathcal{T}_h\big\},
\end{eqnarray}
where ${\rm RT}_p= (\mathcal{P}_p)^d+\mathbf x\mathcal{P}_p$. Then the approximate solution $\mathbf y_h^*\in \mathbf W_h^p$
of the dual problem (\ref{Dual_Problem}) is defined as follows: Find $\mathbf y_h^* \in \mathbf W_h^p$ such that
\begin{eqnarray}\label{Dual_Problem_Discrete}
a^*(\mathbf y_h^*,\mathbf z_h)&=&\mathcal{F}^*(\lambda_{i,h},u_{i,h},\mathbf z_h),\ \ \ \forall \mathbf z_h\in \mathbf W_h^p.
\end{eqnarray}
After obtaining $\mathbf y_h^*$, we can compute the a posteriori error estimate $\eta(\lambda_{i,h},u_{i,h},\mathbf y_h^*)$
as in (\ref{Definition_Eta}).
We can obtain the lower bound $\lambda_{2,h}^L$ of the second eigenvalue $\lambda_2$ by the nonconforming
finite element method from the papers \cite{CarstensenGedicke,Liu,SebestovaVejchodsky}. Based on $\lambda_{2,h}^L$,
we can compute the guaranteed upper bound of the error estimate for the first eigenfunction approximation $u_{1,h}$ as
\begin{eqnarray*}
\eta_h^U(\lambda_{1,h},u_{1,h},\mathbf y_h^*):=
\frac{\lambda_{2,h}^L}{\lambda_{2,h}^L-\lambda_{1,h}}\eta(\lambda_{1,h},u_{1,h},\mathbf y_h^*),
\end{eqnarray*}
and the guaranteed lower bound of the first eigenvalue $\lambda_1$ as follows
\begin{eqnarray*}
\lambda_{1,h}^L:= \lambda_{1,h}-\left(\frac{\lambda_{2,h}^L}{\lambda_{2,h}^L-\lambda_{1,h}}\right)\frac{\lambda_{2,h}^L}{\lambda_{2,h}^L-\alpha^2\eta^2(\lambda_{1,h},u_{1,h},\mathbf y_h^*)}
\eta^2(\lambda_{1,h},u_{1,h},\mathbf y_h^*)
\leq \lambda_1,
\end{eqnarray*}
where $\alpha=\lambda_{2,h}^L/(\lambda_{2,h}^L-\lambda_{1,h})$.
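As an illustration of how these quantities combine, the lower bound $\lambda_{1,h}^L$ is obtained by the
following elementary post-processing (a minimal sketch only; the input values shown are placeholders
for illustration and are not results from our experiments):
\begin{verbatim}
# Minimal sketch of the post-processing step (Python); the inputs are
# placeholders: lam1h = lambda_{1,h}, lam2L = lambda_{2,h}^L,
# eta = eta(lambda_{1,h}, u_{1,h}, y_h^*).
def first_eigenvalue_lower_bound(lam1h, lam2L, eta):
    alpha = lam2L / (lam2L - lam1h)           # amplification factor
    assert alpha**2 * eta**2 < lam2L          # implicit assumption of the bound
    correction = alpha * lam2L / (lam2L - alpha**2 * eta**2) * eta**2
    return lam1h - correction                 # guaranteed lower bound of lambda_1

print(first_eigenvalue_lower_bound(20.75, 48.0, 0.3))  # placeholder numbers
\end{verbatim}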
In this paper, we solve the eigenvalue problem by the multigrid method from the papers \cite{Xie_JCP,Xie_IMA}
which only needs the optimal memory and computational complexity.
\subsection{Eigenvalue problem on unit square}
In the first example, we solve the eigenvalue problem (\ref{weak_eigenvalue_problem})
on the unit square $\Omega=(0,1)\times (0,1)$. In order to investigate the efficiency of the
a posteriori error estimate $\eta(\lambda_{i,h},u_{i,h},\mathbf y_h^*)$, the guaranteed upper bound
$\eta_h^U(\lambda_{1,h},u_{1,h},\mathbf y_h^*)$
of the error estimate $\|u_1-u_{1,h}\|_a$ and the lower bound $\lambda_{1,h}^L$ of the first eigenvalue $\lambda_1$,
we produce the sequence of finite element spaces on the sequence of meshes
which are obtained by the regular refinement (connecting the midpoints of each edge) from an initial
mesh. In this example, the initial mesh, which is generated by the Delaunay method, is shown in
Figure \ref{Exam_1_Initial_Mesh}.
First we solve the eigenvalue
problem (\ref{Weak_Eigenvalue_Discrete})
by the linear conforming finite element method and solve the dual problem (\ref{Dual_Problem_Discrete})
in the finite element space $\mathbf W_h^0$ and $\mathbf W_h^1$, respectively.
The corresponding numerical results are presented in Figure \ref{Exam_1_P_1_RT0_RT1}, which
shows that the a posteriori error estimate $\eta(\lambda_{1,h},u_{1,h},\mathbf y_h^*)$
is efficient when the dual problem is solved in $\mathbf W_h^1$. Figure \ref{Exam_1_P_1_RT0_RT1} also shows
the validity of the guaranteed upper bound $\eta_h^U(\lambda_{1,h},u_{1,h},\mathbf y_h^*)$ for the error $\|u_1-u_{1,h}\|_a$,
and that the eigenvalue approximation $\lambda_{1,h}^L$ is indeed a guaranteed lower bound for the first eigenvalue
$\lambda_1=1+2\pi^2$, regardless of whether the dual problem is solved in $\mathbf W_h^0$ or $\mathbf W_h^1$.
\begin{figure}
\caption{\small\texttt The initial mesh for the unit square}
\label{Exam_1_Initial_Mesh}
\end{figure}
\begin{figure}
\caption{\small\texttt The errors for the unit square domain when the eigenvalue problem is solved
by the linear finite element method, where $\eta(\lambda_h,u_h,\mathbf y_h^0)$ and
$\eta(\lambda_h,u_h,\mathbf y_h^1)$ denote the a posteriori error estimates $\eta(\lambda_{1,h},u_{1,h},\mathbf y_h^*)$
when the dual problem is solved by $\mathbf W_h^0$ and $\mathbf W_h^1$, respectively.}
\label{Exam_1_P_1_RT0_RT1}
\end{figure}
We also solve the eigenvalue problem (\ref{Weak_Eigenvalue_Discrete}) by the quadratic
finite element method and solve the dual problem (\ref{Dual_Problem_Discrete})
with the finite element space $\mathbf W_h^1$ and $\mathbf W_h^2$, respectively.
Figure \ref{Exam_1_P_2_RT1_RT2} shows the corresponding numerical results. From Figure \ref{Exam_1_P_2_RT1_RT2},
we can find that the a posteriori error estimate $\eta(\lambda_{1,h},u_{1,h},\mathbf y_h^*)$ is efficient
when we solve the dual problem by $\mathbf W_h^2$. Figure \ref{Exam_1_P_2_RT1_RT2} also shows
$\eta_h^U(\lambda_{1,h},u_{1,h},\mathbf y_h^*)$ is really the guaranteed upper bound of the error $\|u_1-u_{1,h}\|_a$
and the eigenvalue approximation $\lambda_{1,h}^L$ is also really a guaranteed lower bound of the first eigenvalue
$\lambda_1$.
\begin{figure}
\caption{\small\texttt The errors for the unit square domain when the eigenvalue problem is solved
by the quadratic finite element method, where $\eta(\lambda_h,u_h,\mathbf y_h^1)$ and
$\eta(\lambda_h,u_h,\mathbf y_h^2)$ denote the a posteriori error estimates $\eta(\lambda_{1,h},u_{1,h},\mathbf y_h^*)$
when the dual problem is solved by $\mathbf W_h^1$ and $\mathbf W_h^2$, respectively.}
\label{Exam_1_P_2_RT1_RT2}
\end{figure}
In this section, we also check the efficiency of the error estimates $\eta^2(\lambda_{i,h},u_{i,h},\mathbf y_h^*)$
($i=2,3$) for the second and third eigenvalues. Tables \ref{Example_1_Table_1} and \ref{Example_1_Table_2} show
the corresponding numerical results. In Table \ref{Example_1_Table_1}, we solve the eigenvalue
problem (\ref{Weak_Eigenvalue_Discrete}) by the linear finite element method and the
dual problem (\ref{Dual_Problem_Discrete}) with the finite element space $\mathbf W_h^0$
and $\mathbf W_h^1$, respectively.
In Table \ref{Example_1_Table_2}, the eigenvalue problem (\ref{Weak_Eigenvalue_Discrete}) is solved
by the quadratic finite element method and we solve the dual problem (\ref{Dual_Problem_Discrete})
with the finite element space $\mathbf W_h^1$ and $\mathbf W_h^2$, respectively.
\begin{table}[ht]
\centering
\caption{\footnotesize\texttt The errors for the unit square domain when the eigenvalue problem is solved
by the linear finite element method, where $\eta(\lambda_{i,h},u_{i,h},\mathbf y_h^0)$ ($i=2,3$) and
$\eta(\lambda_{i,h},u_{i,h},\mathbf y_h^1)$ denote the a posteriori error estimates $\eta(\lambda_{i,h},u_{i,h},\mathbf y_h^*)$
when the dual problem is solved by $\mathbf W_h^0$ and $\mathbf W_h^1$, respectively.}\label{Example_1_Table_1}
\begin{tabular}{||c|c|c|c||}
\hline
\footnotesize{Number of elements} & $\lambda_{2,h}-\lambda_2$& $\eta^2(\lambda_{2,h},u_{2,h},\mathbf y_h^0)$ & $\eta^2(\lambda_{2,h},u_{2,h},\mathbf y_h^1)$ \\
\hline
208 & 1.9304e+00 & 7.2113e+01 & 1.9875e+00 \\
\hline
832 & 4.8497e-01 & 1.6651e+01 & 4.8866e-01 \\
\hline
3328 & 1.2164e-01 & 4.0794e+00 & 1.2188e-01 \\
\hline
13312 & 3.0450e-02 & 1.0147e+00 & 3.0469e-02 \\
\hline
53248 & 7.6161e-03 & 2.5337e-01 & 7.6182e-03 \\
\hline
212992 & 1.9043e-03 & 6.3322e-02 & 1.9047e-03 \\
\hline
\footnotesize{Number of elements} & $\lambda_{3,h}-\lambda_3$& $\eta^2(\lambda_{3,h},u_{3,h},\mathbf y_h^0)$ & $\eta^2(\lambda_{3,h},u_{3,h},\mathbf y_h^1)$ \\
\hline
208 & 1.9386e+00 & 7.0685e+01 &1.9968e+00\\
\hline
832 & 4.8728e-01 & 1.6198e+01 &4.9098e-01\\
\hline
3328 & 1.2227e-01 & 3.9655e+00 &1.2252e-01\\
\hline
13312 & 3.0615e-02 & 9.8627e-01 &3.0634e-02\\
\hline
53248 & 7.6578e-03 & 2.4625e-01 &7.6599e-03\\
\hline
212992 & 1.9148e-03 & 6.1543e-02 &1.9151e-03\\
\hline
\end{tabular}
\end{table}
The numerical results in Tables \ref{Example_1_Table_1} and \ref{Example_1_Table_2}
show that $\eta^2(\lambda_{i,h},u_{i,h},\mathbf y_h^*)$ ($i=2,3$) is a very efficient error estimator for the
eigenvalue approximation $\lambda_{i,h}$ when the error of the dual problem is small compared to the
error of the primal problem. This phenomenon is in agreement with Theorem \ref{Efficiency_Eigenvalue_Theorem},
Corollary \ref{Lower_Bound_Eigen_Corollary} and Remark \ref{Lower_Bound_Eigen_Remark}.
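To make the asymptotic exactness (\ref{Exactness_Eigenvalue}) concrete, one can compute the effectivity
indices $\eta^2(\lambda_{2,h},u_{2,h},\mathbf y_h^1)/(\lambda_{2,h}-\lambda_2)$ directly from the second
and fourth columns of Table \ref{Example_1_Table_1} (a trivial post-processing sketch; the numbers below
are simply copied from that table):
\begin{verbatim}
# Effectivity indices eta^2 / (lambda_{2,h} - lambda_2) from Table 1
errs = [1.9304e+00, 4.8497e-01, 1.2164e-01, 3.0450e-02, 7.6161e-03, 1.9043e-03]
etas = [1.9875e+00, 4.8866e-01, 1.2188e-01, 3.0469e-02, 7.6182e-03, 1.9047e-03]
for err, eta2 in zip(errs, etas):
    print(eta2 / err)   # the ratios decrease toward 1 as the mesh is refined
\end{verbatim}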
\begin{table}[ht]
\centering
\caption{\footnotesize\texttt The errors for the unit square domain when the eigenvalue problem is solved
by the quadratic finite element method, where $\eta(\lambda_{i,h},u_{i,h},\mathbf y_h^1)$ ($i=2,3$) and
$\eta(\lambda_{i,h},u_{i,h},\mathbf y_h^2)$ denote the a posteriori error estimates
$\eta(\lambda_{i,h},u_{i,h},\mathbf y_h^*)$ when the dual problem is solved by $\mathbf W_h^1$ and $\mathbf W_h^2$, respectively.}\label{Example_1_Table_2}
\begin{tabular}{||c|c|c|c||}
\hline
\footnotesize{Number of elements} & $\lambda_{2,h}-\lambda_2$& $\eta^2(\lambda_{2,h},u_{2,h},\mathbf y_h^1)$ & $\eta^2(\lambda_{2,h},u_{2,h},\mathbf y_h^2)$ \\
\hline
208 & 1.3955e-02 & 4.6633e-01 & 1.3818e-02 \\
\hline
832 & 9.0239e-04 & 2.9777e-02 & 9.0013e-04 \\
\hline
3328 & 5.7163e-05 & 1.8719e-03 & 5.7128e-05 \\
\hline
13312 & 3.5934e-06 & 1.1718e-04 & 3.5928e-06 \\
\hline
53248 & 2.2519e-07 & 7.3268e-06 & 2.2519e-07 \\
\hline
\footnotesize{Number of elements} & $\lambda_{3,h}-\lambda_3$& $\eta^2(\lambda_{3,h},u_{3,h},\mathbf y_h^1)$ & $\eta^2(\lambda_{3,h},u_{3,h},\mathbf y_h^2)$ \\
\hline
208 & 1.4340e-02 & 4.6548e-01 & 1.4193e-02 \\
\hline
832 & 9.2527e-04 & 2.9950e-02 & 9.2287e-04 \\
\hline
3328 & 5.8616e-05 & 1.8858e-03 & 5.8578e-05 \\
\hline
13312 & 3.6855e-06 & 1.1809e-04 & 3.6849e-06 \\
\hline
53248 & 2.3101e-07 & 7.3849e-06 & 2.3100e-07 \\
\hline
\end{tabular}
\end{table}
\subsection{Eigenvalue problem on L-shape domain}
In the second example, we solve the eigenvalue problem (\ref{weak_eigenvalue_problem})
on the L-shape domain $\Omega=(-1,1)\times (-1,1)\setminus[0,1)\times (-1,0]$.
Since $\Omega$ has a re-entrant corner, the singularity of the first eigenfunction
is expected. The convergence order for the eigenvalue approximation by the linear finite
element method is therefore less than $2$, which is the order predicted by the
theory for regular eigenfunctions.
We investigate the numerical results for the first eigenvalue. Since the exact
eigenvalue is not known, we choose an adequately accurate approximation
$\lambda_1 = 10.6397238440219$ obtained by the extrapolation method \cite{LinLin}
as the exact first eigenvalue for the numerical tests.
In order to treat the singularity of the eigenfunction, we solve the eigenvalue problem
(\ref{weak_eigenvalue_problem}) by the adaptive finite element method (cf. \cite{BrennerScott}).
For simplicity, we set $\lambda:=\lambda_1$, $u:=u_1$, $\lambda_h:=\lambda_{1,h}$ and $u_h:=u_{1,h}$
in this subsection.
We present this example to validate that the results in this paper also hold on adaptive meshes.
In order to use the adaptive finite element method, we define the a posteriori error estimator as follows.
Define the element residual $\mathcal{R}_K(\lambda_h,u_h)$ and the jump residual $\mathcal{J}_E(u_h)$ by
\begin{eqnarray}
\mathcal{R}_K(\lambda_h,u_h)&:=&\lambda_hu_h
+\Delta u_h- u_h \ \ \text{in}\ K\in \mathcal{T}_h,\\
\mathcal{J}_E(u_h)&:=&-\nabla u_h^+ \cdot \nu^+- \nabla u_h^- \cdot \nu^-
:=[[\nabla u_h]]_E\cdot\nu_E\ \ \ \text{on}\ E\in \mathcal{E}_h,
\end{eqnarray}
where $E$ is the common side of elements $K^{+}$ and $K^-$ with outward
normals $\nu^+$ and $\nu^-$, $\nu_E=\nu^-$.
For each element $K\in \mathcal{T}_h$, we define the local error
indicator $\eta_h(\lambda_h,u_h,K)$ by
\begin{eqnarray}\label{eta_definition}
\eta_h^2(\lambda_h,u_h,K):=h_K^2\|\mathcal{R}_K(\lambda_h,u_h)\|_{0,K}^2
+\sum\limits_{E\in \mathcal{E}_h,E\subset
\partial K}h_E\|\mathcal{J}_E(u_h)\|^2_{0,E}.
\end{eqnarray}
Then we define the global a posteriori error estimator
$\eta_{\rm ad}(\lambda_h,u_h)$ by
\begin{eqnarray}
\eta_{\rm ad}(\lambda_h,u_h):=
\left(\sum_{K\in \mathcal{T}_h}\eta_h^2(\lambda_h,u_h,K)\right)^{1/2}.
\end{eqnarray}
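The estimator $\eta_{\rm ad}(\lambda_h,u_h)$ drives a standard solve--estimate--mark--refine loop. The
following sketch records the loop structure we have in mind; all routine names are generic placeholders and
the sketch is not the implementation used for the experiments below:
\begin{verbatim}
# Sketch of the standard adaptive loop (Doerfler-type marking); solve_eigen,
# local_indicators and refine are hypothetical placeholder routines.
def adaptive_eigen_solve(mesh, tol, theta=0.5):
    while True:
        lam_h, u_h = solve_eigen(mesh)             # discrete eigenvalue problem
        ind = local_indicators(lam_h, u_h, mesh)   # {K: eta_h(lam_h, u_h, K)}
        total = sum(v**2 for v in ind.values())**0.5
        if total < tol:
            return lam_h, u_h, mesh
        # mark a minimal set of elements carrying a theta-fraction of the
        # squared estimator, then refine the marked elements
        marked, acc = [], 0.0
        for K, v in sorted(ind.items(), key=lambda kv: -kv[1]):
            marked.append(K); acc += v**2
            if acc >= theta * total**2:
                break
        mesh = refine(mesh, marked)
\end{verbatim}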
We solve the eigenvalue
problem (\ref{Weak_Eigenvalue_Discrete})
by the linear conforming finite element method and solve the dual problem (\ref{Dual_Problem_Discrete})
in the finite element space $\mathbf W_h^0$ and $\mathbf W_h^1$, respectively.
Figure \ref{Mesh_AFEM_Exam_2} (left) shows the corresponding adaptive mesh.
The corresponding numerical results are presented in Figure \ref{Exam_2_P_1_RT0_RT1},
which shows that the a posteriori error estimate $\eta(\lambda_h,u_h,\mathbf y_h^*)$ is also efficient
on adaptive meshes when the dual problem is solved in $\mathbf W_h^1$.
Figure \ref{Exam_2_P_1_RT0_RT1} also shows the validity of the guaranteed upper bound
$\eta_h^U(\lambda_h,u_h,\mathbf y_h^*)$ for the error $\|u-u_h\|_a$, and that the eigenvalue approximation $\lambda_h^L$
is indeed a guaranteed lower bound of the first eigenvalue, regardless of whether
the dual problem is solved in $\mathbf W_h^0$ or $\mathbf W_h^1$.
\begin{figure}
\caption{The triangulations after adaptive iterations for
L-shape domain by the linear element (left) and the quadratic element (right)}
\label{Mesh_AFEM_Exam_2}
\end{figure}
\begin{figure}
\caption{\small\texttt The errors for the L-shape domain when the eigenvalue problem is solved
by the linear finite element method, where $\eta(\lambda_h,u_h,\mathbf y_h^0)$ and
$\eta(\lambda_h,u_h,\mathbf y_h^1)$ denote the a posteriori error estimates $\eta(\lambda_h,u_h,\mathbf y_h^*)$
when the dual problem is solved by $\mathbf W_h^0$ and $\mathbf W_h^1$, respectively, and $\lambda_h^{0,L}$
and $\lambda_h^{1,L}$ denote the corresponding lower bounds $\lambda_h^L$ of the first eigenvalue.}
\label{Exam_2_P_1_RT0_RT1}
\end{figure}
In this example, we also solve the eigenvalue problem (\ref{Weak_Eigenvalue_Discrete}) by the quadratic
finite element method and the dual problem (\ref{Dual_Problem_Discrete})
with the finite element space $\mathbf W_h^1$ and $\mathbf W_h^2$, respectively. The corresponding adaptive mesh
is presented in Figure \ref{Mesh_AFEM_Exam_2} (right).
Figure \ref{Exam_2_P_2_RT1_RT2} shows the corresponding numerical results. From Figure \ref{Exam_2_P_2_RT1_RT2},
we can see that the a posteriori error estimate $\eta(\lambda_h,u_h,\mathbf y_h^*)$ is efficient
when the dual problem is solved in $\mathbf W_h^2$. Figure \ref{Exam_2_P_2_RT1_RT2} also shows that
$\eta_h^U(\lambda_h,u_h,\mathbf y_h^*)$ is indeed a guaranteed upper bound of the error $\|u-u_h\|_a$
and that the eigenvalue approximation $\lambda_h^L$ is indeed a guaranteed lower bound of the first eigenvalue.
\begin{figure}
\caption{\small\texttt The errors for the L-shape domain when the eigenvalue problem is solved
by the quadratic finite element method, where $\eta(\lambda_h,u_h,\mathbf y_h^1)$ and
$\eta(\lambda_h,u_h,\mathbf y_h^2)$ denote the a posteriori error estimates $\eta(\lambda_h,u_h,\mathbf y_h^*)$
when the dual problem is solved by $\mathbf W_h^1$ and $\mathbf W_h^2$, respectively, and $\lambda_h^{1,L}$
and $\lambda_h^{2,L}$ denote the corresponding lower bounds $\lambda_h^L$ of the first eigenvalue.}
\label{Exam_2_P_2_RT1_RT2}
\end{figure}
\section{Concluding remarks}
In this paper, we give a computable error estimate for eigenpair approximations obtained by
general conforming finite element methods on general meshes. Furthermore, a guaranteed
upper bound of the error estimate for the first eigenfunction approximation and a guaranteed lower
bound of the first eigenvalue can be obtained from the computable error estimate and a
lower bound of the second eigenvalue. If the eigenpair approximations are obtained by solving
the discrete eigenvalue problem, the computable error estimates are asymptotically exact and
we can also give asymptotic lower bounds for general eigenvalues. Some numerical examples
are provided to demonstrate the validity of the guaranteed upper and lower bounds for
general conforming finite element methods on general meshes (quasi-uniform and regular types
\cite{BrennerScott,Ciarlet}).
The method here can be extended to other eigenvalue problems such as Steklov, Stokes and
other similar types \cite{LinXie,SebestovaVejchodsky}.
In particular, we would like to point out that the computable error estimate can be extended to
nonlinear eigenvalue problems
which are derived from complicated linear eigenvalue problems. Furthermore, the method in this
paper can be used to check the modeling and discretization errors for the models
(nonlinear eigenvalue problems) in density functional theory that come from the linear Schr\"{o}dinger
equation \cite{KohnSham,Martin}. These will be our future work.
\section*{Acknowledgements}
We would like to thank Tom\'{a}\v{s} Vejchodsk\'{y} for his kind discussions.
\end{document}
|
\begin{document}
\title{Mock characters and the Kronecker symbol}
\begin{abstract}
We introduce and study a family of functions we call the \emph{mock characters}. These functions satisfy a number of interesting properties, and of all completely multiplicative arithmetic functions seem to come as close as possible to being Dirichlet characters. Along the way we prove a few new results concerning the behavior of the Kronecker symbol.
\end{abstract}
One of the most familiar objects in number theory is the Kronecker symbol, denoted $\leg{a}{n}$ or $(a | n)$.
Viewed as a function of $n$, it is well-known that this is a primitive real character when $a$ is a fundamental
discriminant. Less well-known is the behavior when $a$ is \emph{not} a fundamental discriminant; in this case
$\leg{a}{\cdot}$ might be a primitive character (e.g., for $a = 2$), an imprimitive character (e.g., for $a = 4$),
or not a character at all (e.g., for $a = 3$).\footnote{See Section~\ref{Sect:KronSymbol} below, in particular,
Corollary~\ref{KroneckerEqualsDirichlet}.}
Even when $\leg{a}{\cdot}$ is not a character, it strongly mimics the behavior of one, replacing the condition
of periodicity with \emph{automaticity} (a notion we shall discuss below). Inspired by this example, we define
and study a family of character-like functions which we call \emph{mock characters}. Of all completely multiplicative
arithmetic functions, the mock characters are as close as possible to being characters. We will justify this statement
qualitatively, and also formulate a quantitative conjecture using the language of `pretentiousness' introduced by
Granville and Soundararajan in \cite{GranSound}.
The structure of the paper is as follows.
In Section~\ref{glance} we briefly review automatic sequences.
In Section~\ref{sect:MockChar} we introduce mock characters and prove a few basic properties.
In Section~\ref{sect:Kronecker} we explore the relationship between mock characters and the Kronecker symbol, and
use our results to derive some new facts about the Kronecker symbol.
In the final section, we view mock characters through the lens of the `pretentious' approach to number theory.
\section{A quick overview of automatic sequences}\label{glance}
The goal of this section is to recall and motivate the notion of an automatic sequence. We will first give a
computer-science definition, then discuss a mathematical motivation for studying automatic sequences, and finally
give an equivalent definition which is easier to compute with.
Recall that a \emph{Finite State Machine} is a finite collection of states along with transition rules
between states. A positive integer corresponds to a program: its digits (read right to left) dictate the
individual state transitions.
\begin{figure}
\caption{For this FSM, the binary program 1101 yields the output $s_2$}
\label{fig:FSM}
\end{figure}
\begin{definition}\label{defn:CSDefnAuto}
A sequence $(a_n)$ is called \emph{$q$-automatic} if and only if there exists a Finite State Machine
which outputs $a_n$ when given the program $n$ (written base $q$).
\end{definition}
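To make this definition concrete, the following sketch (our own illustration, not the machine of Figure~\ref{fig:FSM}) runs a finite state machine on the base-$q$ digits of $n$, read right to left, and reads the output off the final state. The two-state machine used as an example outputs the parity of the number of $1$'s in the binary expansion of $n$ (the Thue--Morse sequence), a standard instance of a $2$-automatic sequence.
\begin{verbatim}
def fsm_output(n, q, transitions, outputs, start):
    """Run a finite state machine on the base-q digits of n,
    least significant digit first, and return the output
    attached to the final state."""
    state = start
    while n > 0:
        n, digit = divmod(n, q)
        state = transitions[state][digit]
    return outputs[state]

# Illustrative two-state machine: parity of the number of 1's
# in the binary expansion of n (the Thue-Morse sequence).
trans = {'even': {0: 'even', 1: 'odd'},
         'odd':  {0: 'odd',  1: 'even'}}
out = {'even': 0, 'odd': 1}

print([fsm_output(n, 2, trans, out, 'even') for n in range(16)])
# [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
\end{verbatim}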
\begin{example}\label{ex:FirstDefPaper}
A nice example of a 2-automatic sequence is the \emph{regular paperfolding sequence}. To generate this sequence,
start with a large piece of paper, and label the top left corner $L$ and the top right corner $R_0$.
Fold the paper in half so that $R_0$ ends up underneath $L$, and label the new upper right corner by $R_1$
(the paper should be oriented so that $L$ is still the top left corner). Now iterate this process, each time
folding the paper in half so that the top right corner $R_i$ ends up directly underneath $L$, and then labelling
the new top right corner $R_{i+1}$. After some number of iterations, unfold the paper completely so that $L$ is
the top left corner and $R_0$ is once again the top right corner.
The paper has a sequence of creases, some of which are mountains (denoted $\wedge$) and some valleys (denoted $\vee$).
Set $v_n$ to be the $n^\text{th}$ crease from the left, where the paper has been folded enough times for there to be
$n$ creases.
The sequence $(v_n)$ is called the (regular) paperfolding sequence; it begins
\[
\wedge
\wedge
\vee
\wedge
\wedge
\vee
\vee
\wedge
\wedge
\wedge
\vee
\vee
\wedge
\vee
\vee
\cdots
\]
For more information about this sequence see \cite{AS}. (An FSM generating the paperfolding sequence can
be found in \cite[p.\ 312]{AMF}; note that the first term of the sequence is treated as the $0^\text{th}$
term.) We will return to this sequence below.
\end{example}
From the definition it is not obvious how restrictive the condition of automaticity is; it turns out to be
quite strong. To justify this, first recall that a given $\alpha \in \mathbb{R}$, say with decimal expansion
$\displaystyle \alpha = \sum_{n \geq 0} \alpha_n 10^{-n}$, is rational if and only if the sequence of digits
$(\alpha_n)$ is eventually periodic. Similarly, given a finite field $\mathbb{F}$ and a formal power series
$\alpha(X) \in \mathbb{F}[[X]]$, say, $\displaystyle {\alpha(X) = \sum_{n \geq 0} \alpha_n X^n}$,
we have that $\alpha(X) \in \mathbb{F}(X)$ (i.e., $\alpha(X)$ is rational) if and only if the sequence
$(\alpha_n)$ is eventually periodic.
It is an interesting and largely open question to determine whether irrational algebraic numbers have
``random'' decimal expansions or not. In the function field case, when the ground field is finite,
Christol discovered the following result:
\begin{theorem}[\cite{C1, C2}]\label{christol}
Let $\mathbb F_q$ denote the finite field of $q$ elements. Then the formal power series
$\displaystyle \alpha(X) := \sum_{n \geq 0} \alpha_n X^n$ is algebraic over the
field of rational functions ${\mathbb F}_q(X)$ if and only if the sequence $(\alpha_n)$
is $q$-automatic.
\end{theorem}
\noindent
{\samepage Thus we see that
\begin{equation}\label{analogy}
\textit{automatic is to periodic}
\qquad \textit{as} \qquad
\textit{algebraic is to rational.}
\end{equation}
This analogy will play a role in the sequel.}
Definition~\ref{defn:CSDefnAuto} is clean and justifies the term \emph{automatic}. In practice, however,
the following equivalent definition (see, e.g., \cite{AS}) is more useful:
\begin{definition}
Let $q \geq 2$ be an integer. The sequence $\big(\alpha(n)\big)_{n \geq 0}$ is said to be
\emph{$q$-automatic} if its {\em $q$-kernel}, the collection of subsequences
\[
{\mathcal K}_{\alpha} := \Big\{\big(\alpha(q^k m + r)\big)_{m \geq 0} : \ k \geq 0 \text{ and } 0 \leq r < q^k\Big\} ,
\]
is finite. We emphasize that an individual element of ${\mathcal K}_{\alpha}$ is a \emph{sequence}.
\end{definition}
\begin{remark}
It follows from this that any eventually periodic sequence is $q$-automatic for every $q \geq 2$.
\end{remark}
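The kernel definition is easy to experiment with numerically. The following sketch (our own illustration) approximates the $q$-kernel by collecting the subsequences $\big(\alpha(q^k m + r)\big)_{m \geq 0}$ truncated to a fixed length; for the periodic sequence used below the number of distinct truncated subsequences stops growing after a few values of $k$, in line with the remark above, whereas for a non-automatic sequence it would keep growing.
\begin{verbatim}
def truncated_kernel(alpha, q, max_k, length=64):
    """Collect the subsequences (alpha(q^k*m + r))_{m>=0}, truncated to
    `length` terms, for all 0 <= k <= max_k and 0 <= r < q^k.  For a
    q-automatic sequence the number of distinct truncations stays
    bounded as max_k grows (a numerical indication only)."""
    kernel = set()
    for k in range(max_k + 1):
        for r in range(q ** k):
            kernel.add(tuple(alpha(q ** k * m + r) for m in range(length)))
    return kernel

# A 6-periodic sequence is q-automatic for every q >= 2 (see the remark).
periodic = lambda n: [1, 1, 0, -1, -1, 0][n % 6]
print([len(truncated_kernel(periodic, 2, K)) for K in range(1, 6)])
# the count stabilizes after a few steps instead of growing with K
\end{verbatim}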
\section{Mock characters}\label{sect:MockChar}
\noindent
Before defining the notion of mock character, we recall the definition of a Dirichlet character.
\begin{definition}\label{char}
A map $\chi : \mathbb Z \to \mathbb C$ is a \emph{Dirichlet character of modulus $q$}, denoted $\chi\mod{q}$,
if and only if
\begin{enumerate}[label=(\roman*)]
\item $\chi$ is completely multiplicative (i.e., $\chi(mn) = \chi(m)\chi(n)$ for all $m,n \in \mathbb Z$);
\item $\chi$ is $q$-periodic; and
\item $\chi(n) = 0$ if and only if $(n,q) \neq 1$.
\end{enumerate}
\end{definition}
\noindent
In fact, the third condition is merely a notational convenience; any completely multiplicative periodic function is a Dirichlet character. This does not seem to be well-known and the proof is short, so we include it here. (This is a modification of a theorem appearing in the second author's Ph.D. thesis \cite{GoldThesis}.)
\begin{proposition}\label{prop:CharFirstTwoProp}
Suppose $\chi : \mathbb Z \to \mathbb C$ is completely multiplicative and eventually periodic, and that there exists $n > 1$ such that $\chi(n) \neq 0$. Then $\chi$ is a Dirichlet character.
\end{proposition}
\begin{proof}
Observe that $\chi(1) = 1$. If $\chi$ is the trivial character of modulus 1 the proposition is proved, so we will henceforth assume that $\chi\not\equiv 1$; it follows that $\chi(0) = 0$.
We begin by proving a special case of the proposition: that the claim holds for all purely periodic $\chi$. Let $q$ denote the period of $\chi$, and set $d$ to be the largest divisor of $q$ such that $\chi(d) \neq 0$ (such a $d$ exists since $\chi(1) = 1$).
Observe that if $d = 1$ then $\chi$ must be a Dirichlet character \mod{q}. (To see this, note that $\chi(m) = 0$
whenever $(m,q) > 1$; conversely, if $(m,q) = 1$, then some linear combination of $m$ and $q$ yields 1,
whence $\chi(m)$ must be nonzero.) Thus we may assume we have $d > 1$. For any integers $k,r$ we have
\[
\chi(d) \chi\Big(\frac{kq}{d} + r\Big) =
\chi(kq + dr) =
\chi(d)\chi(r) .
\]
Now $\chi(d) \neq 0$ by hypothesis, so $\chi$ is periodic with period $\frac{q}{d}$.
Since $d > 1$, we see that $\frac{q}{d}$ is a proper divisor of $q$. Iterating this procedure yields the claim in the case that $\chi$ is periodic.
We now deduce the claim in full generality. Let $\chi$ be as in the statement of the proposition. A result of Heppner and Maxsein \cite[Satz 2]{HeppnerMaxsein} implies the existence of an integer $\ell \geq 0$ and a purely periodic multiplicative function $\xi$ such that
$
\chi(n) = n^\ell \xi(n) .
$
Since $\chi$ is completely multiplicative by hypothesis, we deduce that $\xi$ must be as well. Thus $\xi$ is periodic and completely multiplicative, so it must be a Dirichlet character.
Finally, observe that $\chi(n)$ takes only finitely many values, so complete multiplicativity implies that all these values must be 0 or roots of unity. Thus (by hypothesis) there exists $n > 1$ such that $|\chi(n)| = 1$, whence $\ell = 0$. It follows that $\chi = \xi$ is a Dirichlet character.
\end{proof}
\begin{remark}
Heppner and Maxsein's proof is an adaptation of a proof of a similar result (for purely periodic functions) due to S\'{a}rk\"{o}zy \cite{Sarkozy}. A much shorter proof of the Heppner-Maxsein theorem was discovered by Methfessel \cite[Theorem 3]{Methfessel}.
Although not required for the sequel, we mention that the S\'{a}rk\"{o}zy-Heppner-Maxsein-Methfessel theorem implies significantly more than we have indicated. For example, without any additional effort one can deduce that a completely multiplicative function $\chi$ which satisfies $\chi(n) \neq 0$ for some $n > 1$ and which eventually satisfies \emph{any} linear recurrence must be a Dirichlet character.
\end{remark}
The simultaneous requirements of complete multiplicativity and (eventual) periodicity are very strong. Slightly weakening the latter condition allows us to enlarge the family of Dirichlet characters.
To distinguish the new members of this family, the definition below excludes all Dirichlet characters.
\begin{definition}\label{mock}
A map $\kappa : \mathbb Z \to \mathbb C$ is a \emph{mock character of mockulus $q$}, denoted $\kappa \mock{q}$,
if and only if
\begin{enumerate}[label=(\roman*)]
\item $\kappa$ is completely multiplicative;
\item the sequence $(\kappa(n))_{n \geq 0}$ is $q$-automatic but not eventually periodic; and
\item there exists an integer $d \geq 1$ such that $\kappa(n) = 0$ precisely when $n=0$ or $(n,d) \neq 1$.
\end{enumerate}
\end{definition}
\begin{example}\label{ex:PaperFold}
We saw the paperfolding sequence $(v_n)_{n \geq 1}$ in Example~\ref{ex:FirstDefPaper}. We now reinterpret and
extend this sequence to all integers. First, replace every occurrence of $\wedge$ by $+1$ and every occurrence of
$\vee$ by $-1$.
It turns out that $(v_n)$ satisfies a nice recursion: $v_{2n} = v_n$, and $v_{2n+1} = (-1)^n$
(see, e.g., \cite[Section~6]{AS}).
Next, extend the sequence to all of $\mathbb Z$ by setting $v_0 :=0$ and $v_{-n} := -v_n$.
One can show that the sequence is completely multiplicative.
It is not hard to deduce that the function $v(n) := v_n$ is a mock character of mockulus $2$.
\end{example}
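A short numerical companion to this example (our own sketch) implements the extension just described via the recursion $v_{2n}=v_n$, $v_{2n+1}=(-1)^n$, together with $v_0=0$ and $v_{-n}=-v_n$, reproduces the crease pattern of Example~\ref{ex:FirstDefPaper}, and spot-checks complete multiplicativity on a small range.
\begin{verbatim}
def v(n):
    """Paperfolding mock character: v(0) = 0, v(-n) = -v(n),
    v(2n) = v(n), v(2n+1) = (-1)^n."""
    if n == 0:
        return 0
    if n < 0:
        return -v(-n)
    if n % 2 == 0:
        return v(n // 2)
    return 1 if ((n - 1) // 2) % 2 == 0 else -1

print([v(n) for n in range(1, 16)])
# [1, 1, -1, 1, 1, -1, -1, 1, 1, 1, -1, -1, 1, -1, -1]
# i.e. the crease pattern above with  ^ -> +1  and  v -> -1.

# Spot-check of complete multiplicativity v(mn) = v(m) v(n):
assert all(v(m * n) == v(m) * v(n)
           for m in range(-50, 51) for n in range(-50, 51))
\end{verbatim}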
\begin{remark}
Our analogy \eqref{analogy} gives a qualitative heuristic that of all completely multiplicative arithmetic functions, the mock characters come as close as possible to being Dirichlet characters. In Section \ref{sect:Kronecker} we will further justify this by example; in Section \ref{Sect:FinalSection} we
propose a conjecture which quantifies this heuristic description.
\end{remark}
\begin{remark}
Other notions of ``character-like'' functions and ``generalized characters'' appear in the literature, which are
quite different from the mock characters defined above; see \cite{Bronstein, Cudakov, CL, CR, Vandehey}.
On the other hand, multiplicative automatic sequences (not exactly mock characters, but close) have been studied in a number of recent papers, e.g.,
\cite{BBC, BCH, BC, BCC, Coons, SP2, Yazdani}.
One notable conjecture, made in \cite{BBC}, is that any function which is simultaneously automatic and multiplicative
must agree at all primes with some eventually periodic function.
\end{remark}
We make a few observations about mock characters.
First, note that condition~(iii) in Definition~\ref{mock} implies that the set of primes $p$ for which $\kappa(p)=0$
is finite. Further, note that $\kappa$ is nonvanishing on $\mathbb Z^+$ (the positive integers) if and only if $d=1$ in
condition~(iii).
Less obvious are the following:
\begin{lemma} \label{remark-mock}
Suppose $\kappa$ is a mock character of mockulus at least $2$.
\begin{enumerate}
\item $\kappa$ has mockulus $q$ if and only if it has mockulus $q^r$ for any positive integer $r$.
\item For all $n \in \mathbb Z$, $\kappa(n)$ is either zero or a root of unity.
\end{enumerate}
\end{lemma}
\begin{proof}
The first assertion is a classical result about automatic sequences; see, e.g., \cite{AS}.
To prove the second assertion, note that $\kappa$ only takes finitely many values (since it is automatic).
By complete multiplicativity, it follows that all these values must be $0$ or roots of unity.
\end{proof}
Definition~\ref{mock} makes it seem that a mock character depends on two different parameters:
the mockulus $q$, and the integer $d$ appearing in condition (iii). We now give an equivalent
definition of mock character which eschews this complication by showing that the only required
parameter is the mockulus. In practice, however, Definition~\ref{mock} is the simpler one to verify.
\begin{proposition}
Let $q$ be an integer $\geq 2$. A map $\kappa : \mathbb Z \to \mathbb C$ is a mock
character \mock{q}
if and only if
\begin{enumerate}[label=(\Roman*)]
\item $\kappa$ is completely multiplicative;
\item the sequence $(\kappa(n))_{n \geq 0}$ is $q$-automatic but not eventually periodic; and
\item the series
$\displaystyle\sum_{\substack{p \ {\rm prime} \\ \kappa(p)=0}} \frac{1}{p}$ converges.
\end{enumerate}
\end{proposition}
\begin{proof}
The fact that ((i), (ii), (iii)) imply ((I), (II), (III)) is trivial. We will prove the
reverse implication by showing that (I), (II), and (III) imply that the set of $p$ for which
$\kappa(p) = 0$ is finite. Taking $d$ to be the product of these primes, we will deduce (iii).
Consider the map $|\kappa|$. This map only takes the values $0$ and $1$ since the nonzero values
of $\kappa$ are roots of unity (see Lemma~\ref{remark-mock} above).
Let $a_n$ denote the $n$th smallest element of the
set $\{a \in \mathbb N : \kappa(a) \neq 0\}$.
Applying a theorem of Delange \cite[Th\'eor\`eme~2]{Delange} with $f = |\kappa|$, we see that (III)
and (I) imply that $|\kappa|$ admits a nonzero mean value, say $M$. This immediately implies that
$a_n \sim n/M$ as $n \to \infty$, whence $a_{n+1}/a_n \to 1$ as $n \to \infty$.
Now, using Corollary~3 of \cite{SP1} and noting that our $|\kappa|$ is completely multiplicative
(so that the condition $q \mid f(p^{h_p})$ in that paper boils down to $\kappa(p) = 0$),
we see that there exist only finitely many primes $p$ for which $\kappa(p) = 0$.
\end{proof}
We conclude this section with two theorems about mock characters, which are inspired by (and improve upon)
some nice results in \cite{BCH,SP2} on nonvanishing completely multiplicative automatic functions.
\begin{theorem}\label{simple}
Let $q \geq 2$ be an integer and $f$ be a mock character \mock{q}.
Suppose that $f(p) \neq 0$ for some prime $p$ dividing $q$.
Then $q = p^m$ for some integer $m \geq 1$, and $f$ is a mock character \mock{p}.
\end{theorem}
\begin{proof}
The proof will follow the proof of \cite[Proposition~3.3]{BCH}, where the unnecessary hypothesis
that $f$ never vanishes is used. We restrict our function to the sequence $\big(f(n)\big)_{n \geq 0}$.
Recall that $p \mid q$, and set $q_1 := q/p$.
Then for any $k \geq 0$ and $r \in [0, q_1^k-1]$ we have
\[
f(p)^k f(q_1^k n + r)
= f(q^k n + p^k r).
\]
Now, since $p^k r$ belongs to $[0, q^k-1]$, the sequence $\big(f(q^k n + p^k r)\big)_{n \geq 0}$
belongs to ${\mathcal K}_f$ (the $q$-kernel of $f$), and hence belongs to a finite set of sequences.
Since $f(p) \neq 0$, $f(p)$ must be a root of unity (see Lemma~\ref{remark-mock});
thus $f(p)^k$ takes only finitely many values that are all nonzero.
Finally the $q_1$-kernel of $f$,
\[
{\mathcal K}_{q_1} =
\big\{\big(f(q_1^k n + r)\big)_{n \geq 0} : \ k \geq 0, \ r \in [0, q_1^k-1]\big\} ,
\]
is finite, which means that the sequence $f$ is $q_1$-automatic. But it is also $q$-automatic!
According to a deep theorem of Cobham \cite{Cobham} this implies, since $f$ is not periodic
(it is a mock character), that $q$ and $q_1$ cannot be multiplicatively independent.
Thus $q$ and $q_1$ are both powers of the same number; we conclude that $q = p^m$ for some
integer $m \geq 1$. By Lemma~\ref{remark-mock}, $f$ is a mock character of mockulus $p$.
\end{proof}
\begin{theorem} Let $q$ be an integer $\geq 2$, and suppose the mock character $f\mock{q}$ does not
vanish on $\mathbb Z^+$ (the positive integers). Then there exists a root of unity $\xi$, a prime $p$, a
positive integer $r$, and a Dirichlet character $\chi \mod{p^r}$ such that for all $n \geq 1$,
\[
f(n) = \xi^{v_p(n)} \chi\bigg(\frac{n}{p^{v_p(n)}}\bigg) .
\]
(Here $v_p(n)$ denotes the largest integer $t$ such that $p^t \mid n$.)
Moreover, $f$ has mockulus $p$.
\end{theorem}
\begin{proof}
By Theorem~\ref{simple}, we may assume that $q=p$ is prime. Using \cite[Proposition~1]{SP2}
we know that there exists an integer $k$,
such that if $n_1, n_2, \ell$ are integers with $(n_1, p^{\ell + 1}) \mid p^{\ell}$ and
$n_1 \equiv n_2 \mod{p^{k+\ell}}$, then $f(n_1) = f(n_2)$. But $p$ is prime, so that the
condition $(n_1, p^{\ell + 1}) \mid p^{\ell}$ boils down to $v_p(n_1) \leq \ell$. Taking
$\ell = 0$, we see (as also noted in \cite{BCH}) that there exists a Dirichlet character
$\chi \mod{p^k}$ such that if $(n,p) = 1$ then $f(n) = \chi(n)$. Now, for any $n \geq 1$,
we have $f(n) = f(p^{v_p(n)})f(n/p^{v_p(n)}) = f(p)^{v_p(n)} \chi(n/p^{v_p(n)})$. Letting
$\xi$ denote the value of $f(p)$, we conclude.
\end{proof}
\section{Kronecker symbols are (mock) characters} \label{sect:Kronecker}
For the remainder of the text, we set
\[
\kappa_a(n) :=
{\leg{a}{n}}
\]
where the right hand side is the Kronecker symbol.
After briefly recalling the definition and basic properties of the Kronecker symbol, we show that $\kappa_a$
is either a Dirichlet or a mock character. We apply the machinery we develop to prove some results about
generating functions of Kronecker symbols.
\subsection{The Kronecker symbol}
We briefly recall the definition of the Kronecker symbol $\leg{a}{n}$.
For convenience, we shall use Conway's convention \cite{Conway} that $-1$ is a prime.
First, set
\[
\phantom{\qquad \text{whenever } an = 0 \text{ or } (|a|,|n|) > 1.}
\leg{a}{n} = 0
\qquad \text{whenever } an = 0 \text{ or } (|a|,|n|) > 1.
\]
It therefore remains to define $\leg{a}{n}$ for nonzero coprime integers $a$ and $n$. Write $n$ in the form
\[
n = \prod_p p^{\nu_p}
\]
where $\nu_p \in \mathbb N$ (recall that $-1$ is considered prime).
We then set
\begin{equation}
\label{eq:DefnKronSymb}
\leg{a}{n} := \prod_p \leg{a}{p}^{\nu_p}
\end{equation}
where the $\leg{a}{p}$'s are defined as follows. (Keep in mind that we are assuming $(|a|,|p|) = 1$.)
For every $p \geq 3$, set
\[
\leg{a}{p} :=
\begin{cases}
1 &\quad \mbox{\rm if $a$ is a square modulo $p$, and} \\
-1 &\quad \mbox{\rm otherwise.}
\end{cases}
\]
For the remaining two primes, set
\[
\leg{a}{2} := \leg{2}{a}
\qquad \text{and} \qquad
\leg{a}{-1} :=
\begin{cases}
1 &\quad \mbox{\rm if $a > 0$} \\
-1 &\quad \mbox{\rm if $a < 0$}
\end{cases}
\]
Note that $\leg{a}{2}$ as defined above must be evaluated recursively using \eqref{eq:DefnKronSymb}.
However, there is also an explicit formula: for any odd $a$,
\[
\leg{a}{2} = (-1)^{\frac{a^2-1}{8}}
\]
In the special case that $n$ is odd, $\leg{a}{n}$ is called the \emph{Jacobi symbol}; when $n$ is a
positive odd prime, $\leg{a}{n}$ is called the \emph{Legendre symbol}.
A fundamental property of the Kronecker symbol (which we shall require in the sequel) is
\emph{quadratic reciprocity}: for any nonzero integers $m$ and $n$,
\begin{equation}\label{eq:QuadRecip}
\leg{m}{n}
= \sigma(m,n) \cdot
(-1)^{\frac{m_1 - 1}{2} \cdot \frac{n_1 - 1}{2}}
\leg{n}{m}
\end{equation}
where
$m_1$ and $n_1$ are the largest odd factors of $m$ and $n$, respectively
(i.e., $m_1 = m/2^{v_2(m)}$, $n_1 = n/2^{v_2(n)}$) and
\[
\sigma(m,n) =
\begin{cases}
-1 & \quad \text{if both } m,n <0 \\
1 & \quad \text{otherwise.}
\end{cases}
\]
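For readers who wish to experiment with the sequences studied below, here is a short computational sketch (our own) of the symbol just defined. It follows the convention above: the symbol vanishes when $an = 0$ or $(|a|,|n|) > 1$, the factor $-1$ of $n$ contributes the sign of $a$, each factor $2$ of $n$ contributes $(-1)^{(a^2-1)/8}$, and the odd positive part of $n$ is handled with the usual Jacobi-symbol recursion based on quadratic reciprocity.
\begin{verbatim}
from math import gcd

def kronecker(a, n):
    """Kronecker symbol (a | n), with (a | n) = 0 whenever a*n == 0
    or gcd(|a|, |n|) > 1, as in the convention above."""
    if a == 0 or n == 0 or gcd(abs(a), abs(n)) > 1:
        return 0
    result = 1
    if n < 0:                      # the "prime" -1: (a | -1) = sign of a
        n = -n
        if a < 0:
            result = -result
    while n % 2 == 0:              # each factor of 2 contributes (a | 2)
        n //= 2
        if a % 8 in (3, 5):        # (a | 2) = -1 iff a = +-3 (mod 8)
            result = -result
    a %= n                         # remaining n is odd and positive:
    while a != 0:                  # Jacobi symbol via quadratic reciprocity
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

print([kronecker(3, n) for n in range(1, 13)])
# [1, -1, 0, 1, -1, 0, -1, -1, 0, 1, 1, 0]
# kronecker(-1, n) reproduces the paperfolding mock character v(n) defined earlier.
\end{verbatim}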
\subsection{(Non)periodicity of the Kronecker symbol} \label{Sect:KronSymbol}
Recall that
$
\kappa_a(n) :=
{\leg{a}{n}} .
$
It is common practice to only define this symbol for $a \equiv 0 \mod 4$ or $1 \mod 4$ and squarefree,
as in \cite[Definition~20, p.\ 70]{Landau}, or even just for fundamental discriminants $a$, as in
\cite[p.\ 296]{MV}. Although the difficulty arising for other $a$ is occasionally hinted at, as in
\cite[Exercise~10, p.\ 36]{Cohn}, it seems to be rarely (if ever) treated carefully. For example,
Cohen's book \cite[Theorem 1.4.9]{Cohen} mistakenly asserts that the function $\leg{a}{\cdot}$ is
periodic for any integer $a$. This is false in general (see below), but is often true:
\begin{theorem}\label{thm:1stHalf}
Fix $a \not\equiv 3 \mod 4$. Then the function $\kappa_a(n)$ is periodic.
\end{theorem}
\begin{proof}
Suppose $a$ is even. Then $\kappa_a(2n) = 0$, while $\kappa_a(2n+1)$ is periodic in $n$ (with period $4|a|$,
see \cite[Theorem~3.3.9~(5), p.~76]{Halterkoch}). This handles the cases $a \equiv 0,2 \mod 4$.
In the remaining case $a \equiv 1 \mod 4$, it is shown in \cite[Theorem~99, p.~72]{Landau} that $\kappa_a(n)$
has period $|a|$. (Note that in Theorem~99, $a$ is supposed to be congruent to $0$ or $1$ \mod{4}, as indicated
in \cite[Definition~20, p.\ 70]{Landau}.)
\end{proof}
\noindent
Next we prove a converse of this.
\begin{theorem}\label{3mod4}
Fix $a \equiv 3 \mod 4$. The function $\kappa_a(n)$ is not (ultimately) periodic.
\end{theorem}
\begin{proof}
Fix $a \equiv 3 \mod 4$. By a well-known fact about the Jacobi symbol \cite[Theorem~3.3.9~(5)]{Halterkoch}, we
have $\kappa_a(n+4|a|) = \kappa_a(n)$ for all odd $n$. It follows that the sequence $\big(\kappa_a(2n+1)\big)_{n \geq 0}$
is periodic, and that $2|a|$ is a period. What can we say about the \emph{least} period? We claim it must be even.
Indeed, quadratic reciprocity \eqref{eq:QuadRecip} implies that
\[
\kappa_a\Big(2(n+|a|) + 1\Big)
= - \kappa_a(2n+1)
\]
for any positive $n$, whence
$|a|$ is not a period. We have therefore shown that $(\kappa_a(2n+1))_{n \geq 0}$ is periodic, and that its least
period must be even.
Now suppose that the function $\kappa_a$ were (eventually) periodic. Note that if one has
${\kappa_a(n+2T) = \kappa_a(n)}$ for all large $n$, then
\[
\kappa_a(2) \kappa_a(n) = \kappa_a(2n) = \kappa_a(2n+2T) = \kappa_a(2) \kappa_a(n+T)
\]
whence $\kappa_a(n) = \kappa_a(n+T)$. This shows that the smallest (eventual) period of $\kappa_a(n)$ would have
to be an odd number. Let $q$ denote this smallest (eventual) period. It follows that
\[
\kappa_a\big(2(n+q)+1\big)
= \kappa_a(2n+1+2q)
= \kappa_a(2n+1) ,
\]
so $q$ is an eventual period of the sequence $\big(\kappa_a(2n+1)\big)_{n\geq 0}$.
But then the smallest period of the sequence $\big(\kappa_a(2n+1)\big)_{n\geq 0}$ must divide $q$, and
in particular must be odd! This contradicts what we proved in the first paragraph, and the claim is proved.
\end{proof}
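The dichotomy established in this proof is easy to observe numerically. The sketch below (our own check; it assumes the \texttt{kronecker} helper from the sketch in the previous subsection is in scope) computes, for $a=3$, the least period of the odd-indexed subsequence on a long initial segment and searches, unsuccessfully, for a short period of the full sequence.
\begin{verbatim}
# Assumes the kronecker(a, n) helper defined in the previous sketch.
def least_period(seq, max_period):
    """Smallest T <= max_period that is a period of the list seq, or None."""
    for T in range(1, max_period + 1):
        if all(seq[i + T] == seq[i] for i in range(len(seq) - T)):
            return T
    return None

a = 3
odd_part = [kronecker(a, 2 * n + 1) for n in range(4000)]
full     = [kronecker(a, n) for n in range(1, 4000)]

print(least_period(odd_part, 100))   # 6 -- even, as required by the proof
print(least_period(full, 100))       # None: no period up to 100 fits kappa_3
\end{verbatim}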
\noindent
Combining Theorems~\ref{thm:1stHalf} and \ref{3mod4} with basic properties of the Kronecker symbol yields
the following result, which is probably known but which we were unable to find in the literature.
\begin{corollary}\label{KroneckerEqualsDirichlet}
The Kronecker symbol $\kappa_a$ is a Dirichlet character if and only if $a \not\equiv 3 \mod 4$.
\end{corollary}
\noindent
In fact, as we shall show below, if $\kappa_a$ is not a Dirichlet character then it must be a mock character.
\begin{remark}
\ { }
\begin{itemize}
\item The special case of Theorem \ref{3mod4} with $a = 3$ was proved by the second author in an unpublished manuscript (see \cite{Goldmakher}).
\item The fact that the smallest period of the sequence $(\kappa_a(2n+1))_n$ is even for ${a \equiv 3 \mod 4}$
was observed earlier, e.g., in the paper \cite{SW}, where the following period patterns of the sequence
$\big(\kappa_a(2n+1)\big)_{n \geq 0}$ are given:
\[
\begin{array}{lll}
&a = -1 \ \ &\mbox{\rm period pattern\ } + \ - \\
&a = -5 \ \ &\mbox{\rm period pattern\ } + \ + \ 0 \ + \ + \ - \ - \ 0 \ - \ - \\
&a = -9 \ \ &\mbox{\rm period pattern\ } + \ 0 \ + \ - \ 0 \ - \\
&a = +3 \ \ &\mbox{\rm period pattern\ } + \ 0 \ - \ - \ 0 \ + \\
&a = +7 \ \ &\mbox{\rm period pattern\ } + \ + \ - \ 0 \ + \ - \ - \ - \ - \ + \ 0 \ - \ + \ +. \\
\end{array}
\]
\item Some of the sequences $(\kappa_a(n))_n$ appear in the Online Encyclopedia of Integer Sequences
\cite{oeis}, e.g., $\kappa_{-1}(n) =$ A034947$(n)$, $\kappa_3(n) =$ A091338$(n)$, $\kappa_7(n) =$ A089509$(n)$,
and $\kappa_{-5}(n) =$ A226162$(n)$. Moreover, the sequences A117888 and A117889 give the minimal periods of
the sequences $\big(\kappa_a(n)\big)_n$ and $\big(\kappa_{-a}(n)\big)_n$ for small values of $a$, writing $0$
if the sequence is not periodic.
\item If $a \equiv 3 \mod 4$, the sequence $\big(\kappa_a(n)\big)_n$ is a Toeplitz sequence, which (roughly speaking)
is a sequence obtained by repeatedly inserting periodic sequences into periodic sequences. For a more precise definition,
see \cite{All-repet, AB} and the references therein.
\item The proof of Theorem~\ref{3mod4} shows that a nontrivial sequence $(u(n))_{n \geq 0}$ cannot simultaneously
be periodic, completely multiplicative, and satisfy $u(2n) = u(n)$ for all $n$. However, it can satisfy any two of
these properties. In particular, it is possible for a periodic sequence $\big(u(n)\big)_{n \geq 0}$ to satisfy
$u(2n) = u(n)$ for all $n$. The number of such sequences on a given alphabet was studied in \cite{All-repet} in the
context of binary sequences with bounded repetitions and in \cite{EKF} in relation with the so-called perfect shuffle.
\item Periodicity and Kronecker symbols have also been studied in the context of periodic continued fractions; see
\cite{G1, G2, G3, G4}.
\end{itemize}
\end{remark}
\subsection{The connection to mock characters}
\label{subsect:ConnectToMock}
The (regular) paperfolding sequence $(v_n)_{n \geq 0}$ was introduced in Example~\ref{ex:FirstDefPaper}
and reinterpreted in Example~\ref{ex:PaperFold}. Jonathan Sondow observed that the Kronecker symbol
$\leg{-1}{n}$ satisfies the same recursion as $v_n$, hence generates the same sequence (see \cite[Section~6]{AS}).
As a consequence, we have
\begin{proposition}\label{pap}
Fix an integer $a$. Then either $\kappa_a(n)$ is a Dirichlet character, or a Dirichlet character
multiplied by the paperfolding sequence.
\end{proposition}
\begin{proof}
If $a \not\equiv 3 \mod 4$, then $\kappa_a$ is a Dirichlet character by Theorem~\ref{thm:1stHalf}.
If ${a \equiv 3 \mod 4}$, then (again by Theorem~\ref{thm:1stHalf}) $\kappa_{-a}$ is a Dirichlet character.
To conclude the proof, note that $\kappa_a(n) = \kappa_{-a}(n) \leg{-1}{n}$.
\end{proof}
\noindent
We now arrive at the promised connection between the Kronecker symbol and mock characters.
\begin{theorem}
If $a\equiv 3 \mod 4$, then $\kappa_a$ is a mock character \mock{2}.
\end{theorem}
\begin{proof}
Recall that any periodic sequence is 2-automatic. It follows that $\kappa_a$ is the product of two
$2$-automatic sequences, hence is 2-automatic. The remaining properties of mock characters are
straightforward to verify.
\end{proof}
\subsection{Generating functions involving Kronecker symbols}
We now give some applications of our results to various generating functions (a power series,
a Dirichlet series, an infinite product) involving the Kronecker symbol $\kappa_a(n) = \leg{a}{n}$.
\begin{proposition}
Fix $a \equiv 3 \mod 4$. Let $f$ be any injective map from $\{0, \pm 1\}$ to $\mathbb F_4$, the field with four elements.
Then the formal power series $\sum_{n \geq 0} f(\leg{a}{n}) X^n$ has degree $2$ or $4$ over $\mathbb F_4(X)$, the field
of rational functions on $\mathbb F_4$.
\end{proposition}
\begin{proof}
Let
\[
G(X) :=
\sum_{n \geq 0} f\big(\kappa_a(n)\big) X^n ,
\]
where $\kappa_a(n) = \leg{a}{n}$ as before.
Christol's theorem (Theorem~\ref{christol}) implies that $G$ must be algebraic over $\mathbb F_4(X)$.
On the other hand, by Theorem~\ref{3mod4} we know that the coefficients of $G$ are not (eventually) periodic,
whence $G$ is not a rational function. Thus, the minimal polynomial of $G$ has degree at least 2. We now show
its degree is at most 4.
Recall that $\alpha^4 = \alpha$ for all $\alpha \in \mathbb F_4$, and that $\mathbb F_4$ has characteristic 2. It follows that
\[
G(X)^4
= \sum_{n \geq 0} f\big(\kappa_a(n)\big)^4 X^{4n}
= \sum_{n \geq 0} f\big(\kappa_a(n)\big) X^{4n}
= \sum_{n \geq 0} f\big(\kappa_a(4n)\big) X^{4n} ,
\]
where the last equality holds because
$\kappa_a(4n) = \kappa_a(2)^2 \kappa_a(n) = \kappa_a(n)$.
We deduce that
\begin{equation}
\label{eq:FnlEqn}
G^4 + G + R = 0 ,
\end{equation}
where
$\displaystyle
R(X) :=
\sum_{\substack{n \geq 0 \\ 4 \dnd n}}
f\big(\kappa_a(n)\big) X^{n} .
$
Relation \eqref{eq:FnlEqn} is almost enough to show that the minimal polynomial of $G$ has degree at most 4;
all that is left to check is that $R(X)$ is a rational function. From the beginning of the proof of
Theorem~\ref{3mod4} we know that the sequence ${\big(\kappa_a(2n+1)\big)_{n \geq 0}}$ is periodic.
Thus the sequence ${\big(\kappa_a(4n+2)\big)_{n \geq 0} = \kappa_a(2) \cdot \big(\kappa_a(2n+1)\big)_{n \geq 0}}$
is also periodic. We conclude that the coefficients of $R(X)$ are periodic, whence $R(X)$ is rational.
We have thus bounded the degree of the minimal polynomial of $G$ between 2 and 4. To conclude the proof, we
show that the degree cannot equal 3. First, recall from~\eqref{eq:FnlEqn} that $Y=G$ is a solution to the equation
\[
Y^4 + Y + R = 0 .
\]
Now observe that no rational function $Y(X) \in \mathbb F_4(X)$ satisfies this equation. Indeed, if $Y$ is a solution,
then so is $Y+\lambda$ for any $\lambda \in \mathbb F_4$; it follows that either all roots of $Y^4 + Y + R$
are rational, or none of them are. Since $G$ is an irrational root, we are in the latter case, so $Y^4 + Y + R$
has no linear factors with rational coefficients. We deduce that it also has no cubic factors with rational
coefficients, and the claim follows.
\end{proof}
\begin{proposition}\label{prop:MockLFn}
The series
$\sum_{n \geq 1} \frac{\leg{a}{n}}{n^s}$
is either a Dirichlet $L$-function or the product of
$\frac{2^s}{2^s - (-1)^{\frac{a^2 - 1}{8}}}$
with a Dirichlet $L$-function.
\end{proposition}
\begin{proof}
As above, set $\kappa_a(n) := \leg{a}{n}$, and define
\[
L_a(s) := \sum_{n \geq 1} \frac{\kappa_a(n)}{n^s}
\]
If $a \not\equiv 3 \mod 4$, Proposition~\ref{pap} implies that $L_a(s)$ is a Dirichlet $L$-function.
Now suppose instead that $a \equiv 3 \mod 4$.
Define a function $\chi : \mathbb Z \to \mathbb C$ by
\[
\chi(n) :=
\begin{cases}
\kappa_a(n) & \quad \text{if $n$ is odd} \\
0 & \quad \text{if $n$ is even.}
\end{cases}
\]
It is easy to verify that $\chi$ is completely multiplicative, and (from the beginning of the proof of
Theorem~\ref{3mod4}) it is also periodic. Proposition~\ref{prop:CharFirstTwoProp} implies that $\chi$
is a Dirichlet character, and we write $L(s,\chi)$ to denote the associated $L$-function.
We have:
\[
\begin{split}
L_a(s)
&=
\sum_{n \geq 1} \frac{\kappa_a(n)}{n^s}
=
\sum_{n \geq 1} \frac{\kappa_a(2n)}{(2n)^s}
+ \sum_{n \geq 0} \frac{\kappa_a(2n+1)}{(2n+1)^s} \\
&=
\frac{\kappa_a(2)}{2^s}
\sum_{n \geq 1} \frac{\kappa_a(n)}{n^s}
+ \sum_{m \geq 1} \frac{\chi(m)}{m^s} \\
&=
\frac{\kappa_a(2)}{2^s}
L_a(s)
+ L(s,\chi) .
\end{split}
\]
Thus
$
L_a(s) =
\left(1 - \frac{\kappa_a(2)}{2^s}\right)^{-1} L(s,\chi) ,
$
and the claim follows.
\end{proof}
Recall from Example~\ref{ex:PaperFold} the paperfolding sequence $(v_n)_{n \geq 0}$, which (as noted at the
start of Section~\ref{subsect:ConnectToMock}) is the same as the sequence $\kappa_{-1}(n)$. It turns out that
many well-known properties of the paperfolding sequence hold more generally for $\big(\kappa_a(n)\big)_{n \geq 0}$
when $a \equiv 3 \mod 4$. For example, the following identity involving the paperfolding sequence was proved
in \cite{prod}:
\begin{equation}\label{PP}
\prod_{n \geq 1} \left(\frac{2n}{2n+1}\right)^{v_{n+1}} =
\frac{\Gamma(1/4)^2}{8\sqrt{2 \pi}}
\end{equation}
(the terms in the product are fractions, not Kronecker symbols).
A similar proof gives:
\begin{proposition}
Given $a \equiv 3 \mod 4$, set $\alpha := (-1)^{\frac{a^2-1}{8}}$. Then
\[
\prod_{n \geq 1}
\left(\left(\frac{n}{n+1}\right)\left(\frac{2n+2}{2n+1}\right)^{\alpha}\right)^{\leg{a}{n+1}}
= \frac{1}{2}\prod_{n \geq 1} \left(\frac{2n}{2n+1}\right)^{\alpha\leg{a}{2n+1}}.
\]
Furthermore the left side is a finite product of terms of the form $\Gamma(x/4a)^{\pm 1}$, where $x\in\mathbb Z$.
\end{proposition}
\begin{proof}
The proof mimics the proof of the result for $a = -1$ in \cite{prod}. It uses the fact (from the proof of
Proposition~\ref{prop:MockLFn}) that
\[
\chi(n) :=
\begin{cases}
\leg{a}{n} & \quad \text{if $n$ is odd} \\
0 & \quad \text{if $n$ is even.}
\end{cases}
\]
is a Dirichlet character, from which it follows that the sum of the values of $\leg{a}{2n+1}$ on a period is zero.
\end{proof}
\section{Quantified Mockery} \label{Sect:FinalSection}
Proposition~\ref{pap} shows that, in a qualitative sense, the mock character $\kappa_a$ is essentially a Dirichlet
character. This statement can be quantified using the language of the theory of pretentiousness. Recall the
pseudometric introduced by Granville and Soundararajan \cite{GranSound}: given any two completely multiplicative
functions $f,g : \mathbb Z \to \mathbb U$ (with $\mathbb U$ denoting the complex unit disc), set
\[
\mathbb D(f,g; y) :=
\left(
\sum_{p \leq y} \frac{1- \mathrm{Re}\, f(p) \overline{g(p)}}{p}
\right)^{1/2} .
\]
This is a useful tool for quantifying how closely one function mimics another. Any time we have
\[
\mathbb D(f,g;y)^2 = o(\log \log y) ,
\]
this means that $f$ and $g$ behave similarly, and the smaller the `distance' between $f$ and $g$, the more similar
their behavior. One key property of this pseudometric is a `triangle inequality' \cite{GranSound}:
for any functions $f_i,g_i : \mathbb Z \to \mathbb U$,
\[
\mathbb D(f_1,f_2;y) + \mathbb D(g_1,g_2;y) \geq \mathbb D(f_1 f_2, g_1 g_2; y) .
\]
We have
\begin{proposition}
Let $\kappa_a(n) = \leg{a}{n}$ as above.
Then for every $a \in \mathbb Z$ there exists a Dirichlet character $\chi$ such that
\[
\mathbb D(\kappa_a, \chi; y) \ll 1 .
\]
\end{proposition}
\begin{proof}
If $a \not\equiv 3 \mod 4$ then Theorem~\ref{thm:1stHalf} implies that $\kappa_a$ is a Dirichlet character, so
we can take $\chi = \kappa_a$ to conclude. Next, observe that $\kappa_{-1}(p) = \chi_{-4}(p)$ for all odd primes,
where $\chi_{-4}$ is the nontrivial character \mod 4. It follows that
\[
\phantom{\qquad
\forall y \geq 2}
\mathbb D(\kappa_{-1} , \chi_{-4} ; y)^2
= \sum_{p \leq y} \frac{1 - \kappa_{-1}(p) \chi_{-4}(p)}{p}
= \frac{1}{2}
\qquad
\forall y \geq 2 ,
\]
so the claim holds for $\kappa_{-1}$. Finally, suppose $a \equiv 3 \mod 4$. Then (again by Theorem~\ref{thm:1stHalf})
$\kappa_{-a}$ is a Dirichlet character, whence $\chi := \chi_{-4} \kappa_{-a}$ is a Dirichlet character. We deduce
\[
\begin{split}
\mathbb D(\kappa_a , \chi ; y)
&= \mathbb D(\kappa_{-1} \kappa_{-a} , \chi_{-4} \kappa_{-a} ; y) \\
&\leq \mathbb D(\kappa_{-1} , \chi_{-4}; y) +
\mathbb D(\kappa_{-a},\kappa_{-a}; y) \\
&= \mathbb D(\kappa_{-1} , \chi_{-4}; y) +
\sum_{\substack{p \leq y \\ \kappa_{-a}(p) = 0}} \frac{1}{p}
\end{split}
\]
The first sum is bounded by our work above; the second is bounded because $\kappa_{-a}(p) = 0$ only for those $p$
dividing the conductor of $\kappa_{-a}$.
\end{proof}
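As a numerical illustration of the first step in this proof (our own sketch), one can evaluate $\mathbb D(\kappa_{-1},\chi_{-4};y)^2$ directly from the primes up to $y$, using $\kappa_{-1}(2)=1$, $\chi_{-4}(2)=0$, and $\kappa_{-1}(p)=\chi_{-4}(p)=(-1)^{(p-1)/2}$ for odd primes $p$; only the prime $2$ contributes, giving the value $1/2$ for every $y \geq 2$.
\begin{verbatim}
def primes_up_to(y):
    """Sieve of Eratosthenes."""
    sieve = [True] * (y + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(y ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [p for p in range(2, y + 1) if sieve[p]]

def kappa_minus1(p):   # (-1 | p) at primes
    return 1 if p == 2 else (1 if p % 4 == 1 else -1)

def chi_minus4(p):     # nontrivial character mod 4 at primes
    return 0 if p == 2 else (1 if p % 4 == 1 else -1)

def dist_sq(f, g, y):
    """Square of the Granville-Soundararajan distance, restricted to primes <= y."""
    return sum((1 - f(p) * g(p)) / p for p in primes_up_to(y))

print(dist_sq(kappa_minus1, chi_minus4, 10 ** 5))   # 0.5
\end{verbatim}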
\noindent
We suspect this is a special case of a more general result:
\begin{conj}
For any mock character $\kappa$, there exists a Dirichlet character $\chi$ such that
$\mathbb D(\kappa,\chi;y)$ is bounded. Conversely, if $\kappa : \mathbb Z \to \mathbb U$ is completely multiplicative
and a bounded distance from some Dirichlet character, then $\kappa$ must be a mock character.
\end{conj}
\noindent
{\bf Acknowledgments} \ The first named author warmly thanks G\'erald Tenenbaum for ``old''
discussions on automatic multiplicative functions and Jonathan Sondow for having noted that $\leg{-1}{n}$
was the regular paperfolding sequence. We heartily thank Henri Cohen, Olivier Ramar\'e, Jeff Shallit, and
Soroosh Yazdani, for discussions or for comments on previous versions of this paper.
\end{document}
\begin{document}
\title{Incentivizing Mobile Edge Caching and Sharing: An Evolutionary Game Approach }
\author{
Mingyu Li, Changkun Jiang, Lin Gao, Tong Wang, and Yufei Jiang
\thanks{M.~Li, T.~Wang, and Y.~Jiang are with the School of Electronics and Information Engineering, Harbin Institute of Technology, Shenzhen, China.
C.~Jiang is with the Department of Computer Engineering, Shenzhen University, Shenzhen, China.
L.~Gao is with
the School of Electronics and Information Engineering, Harbin Institute of
Technology, Shenzhen, China, and the Shenzhen Institute of Artificial Intelligence
and Robotics for Society, Shenzhen, China. Email: [email protected].
(\emph{Corresponding Author: Lin Gao})~~~~
}
\thanks{This work is supported in part by the National Natural Science Foundation
of China (Grant No. 61972113), the Basic Research Project of Shenzhen
Science and Technology Program (Grant No. JCYJ20190806112215116,
JCYJ20180306171800589, and KQTD20190929172545139), and Guangdong
Science and Technology Planning Project under Grant 2018B030322004. This
work is also supported in part by the funding from Shenzhen Institute of
Artificial Intelligence and Robotics for Society.}
}
\maketitle
\addtolength{\abovedisplayskip}{-1mm}
\addtolength{\belowdisplayskip}{-1mm}
\begin{abstract}
\emph{Mobile Edge Caching} is a promising technique to enhance the content delivery quality and reduce the backhaul link congestion, by storing popular contents at the network edge or mobile devices (e.g. base stations and smartphones) that are proximate to content requesters.
In this work, we study a novel mobile edge caching framework, which enables mobile devices to \emph{cache} and \emph{share} popular contents with each other via device-to-device (D2D) links.
We are interested in the following incentive problem of mobile device users: \emph{whether} and \emph{which} users are willing to cache and share \emph{what} contents, taking the user mobility and cost/reward into consideration.
The problem is challenging in a large-scale network with a large number of users.
We introduce the evolutionary game theory, an effective tool for analyzing large-scale dynamic systems, to analyze the mobile users' content caching and sharing strategies.
Specifically, we first derive the users' best caching and sharing strategies, and then analyze how these best strategies change dynamically over time, based on which we further characterize the system equilibrium systematically.
Simulation results show that the proposed caching scheme outperforms the existing schemes in terms of the total transmission cost and the cellular load.
In particular, in our simulation, the total transmission cost can be reduced by 42.5\%$\sim$55.2\% and the cellular load can be reduced by 21.5\%$\sim$56.4\%.
\end{abstract}
\IEEEpeerreviewmaketitle
\section{Introduction}
\subsection{Background and Motivations}
With the rapid development of the mobile Internet and the dramatic increase in mobile terminals, data services have shown explosive growth in recent years. According to Cisco, the average monthly data usage worldwide will reach 2.9 billion TB by 2023, with video accounting for 82\% of all traffic \cite{Cisco-2020}. Moreover, researchers have found that most of the data traffic is generated by requests for a small number of popular contents, and that content requests follow a Zipf distribution \cite{Zipf-1999}. The repeated downloading of the same popular content by different mobile users wastes backhaul resources and leads to severe network congestion, significantly degrading the quality of service.
To solve such problems, researchers have proposed a novel scheme called \emph{edge caching} \cite{Yao-CST-2019}, which brings the function of content caching to the network edge. Edge caching preemptively transfers contents from the remote cloud at the core network to the edge network consisting of cellular base stations (BSs) or mobile user equipments (UEs), so that contents can be retrieved directly from the edge network when a user initiates a request \cite{Yao-CST-2019}. By caching popular contents closer to users, the latency of content retrieval can be reduced. It also avoids duplicate transmissions from cloud servers to end nodes, thus reducing backhaul link congestion.
In general, edge caching can be deployed on two kinds of edge devices: \emph{fixed devices} (e.g., BSs) and \emph{mobile devices} (e.g., UEs). There has been plentiful research on caching on fixed devices at the network edge.
For example, \cite{Li-JSAC-2017} proposed a scheme of cooperative caching in software-defined networks, where each BS can obtain content from other BSs in the same macro cell, thereby reducing the backhaul traffic load. \cite{Krolikowski-INFOCOM-2018} studied the joint optimization of edge nodes' caching placement and user-BS association to maximize the operator's utility. \cite{Shukla-JSAC-2018} optimized the duration of contents' placement based on the collaborative caching at edge BSs to minimize the total cost of caching and downloading. Caching on fixed devices can effectively alleviate the network congestion and reduce the delay of content delivery. However, there are also many limitations, such as the relatively small device number and the limited serving range of each device.
To compensate for the limitations of fixed device caching, \emph{mobile edge caching} was proposed to further reduce the network load, by caching contents on mobile UEs \cite{Li-JSAC-2018}.
{Today's mobile UEs generally have plenty of storage and computing resources to support caching contents locally.
In addition, they can also act as mini caching servers and share the cached contents with each other via device-to-device (D2D) links, using licensed bands (e.g., LTE) or unlicensed bands (e.g., Bluetooth and Wi-Fi).}
Such a content caching and sharing scheme at mobile UEs is called \emph{crowdsourced caching} in \cite{Li-JSAC-2018}.
In \cite{Amer-TWC-2018}, Amer \emph{et al}. studied the optimal caching strategy in such a crowdsourced caching scheme to minimize the total energy consumption, where users' requests were served by the neighbors in the same cluster through single-hop D2D communication or served by any cluster in the same cell through inter-cluster collaboration.
Different from the fixed topology of fixed devices, the mobility of UEs leads to the \emph{non-fixed} topology, which makes the cache placement optimization in UEs very challenging. In particular, \cite{Song-TWC-2019} considered the user's mobility and modeled the arrival and departure of mobile users as a Poisson process. \cite{Quer-TWC-2018} divided each cell into multiple sectors and users moved between cells, where user's mobility was modeled as a discrete Markov model.
Probabilistic caching is also a method suitable for caching in large-scale mobile networks. For example, \cite{Zhang-TVT-2018} placed contents in UEs according to the best cache probability to maximize the average hit rate.
However, these existing works studied pure optimization problems, without considering the user incentive to cache and share contents.
Another challenge of caching contents on mobile UEs is the \emph{user incentive}.
Specifically,
users are often selfish in the real world. That is, each user is more interested in caching his own preferred contents, and hopes that his neighbors cache as many of his favorite contents as possible \cite{Chen-ICC-2016}.
In addition, caching in UEs will occupy the storage space of devices, and the D2D transmission will also consume UEs' energy. Thus, an incentive mechanism is necessary to encourage the cooperative caching and sharing between UEs.
A few works have considered the incentive issue in edge caching by using game theory, a mathematical framework for studying the interplay between incentive structures and competition.
For example, \cite{Sun-TVT-2016,Zheng-TWC-2018,Shen-GLOBECOM-2016} used game theory to analyze devices' caching strategy and equilibrium, but they only studied the game in a small network scenario with a limited number of users.
\cite{Jiang-ICC-2019} studied the game of caching strategy for a single content in a large-scale edge network, but ignored the impact of user mobility and the diversity of contents.
In this work, we will consider the mobile edge caching in a large-scale network with an infinite number of mobile UEs, taking the incentive issue, together with the user mobility and content diversity, into consideration.
\subsection{Solution Approach and Contribution}
\begin{figure}
\caption{Illustration of the Crowdsourced Caching Framework. }
\label{fig:fig-model}
\end{figure}
Specifically, we study a crowdsourced caching framework for mobile edge caching, where contents are cached on mobile UEs. We consider a three-tier architecture, which consists of a content provider (CP), a macro base station (MBS), and a large number of mobile UEs.
UEs can cache contents (or files) in their device storages according to their respective cache strategies, and meanwhile share the cached contents with others via D2D links.
The content retrieval process is as follows.
When a UE initiates a content request, the requested content is first searched in the UE's local cache.
If the content is not found in the local cache, it is then searched in the caches of the requester's neighboring UEs.
The neighbor who caches the content becomes a mini server, and shares the content to the requester via the D2D link.
If the content is not found in all neighbors' cache, the request will be directed to the MBS and retrieved from the CP's remote cloud via the cellular link.
As content caching and sharing will introduce additional cost for UEs, the CP will provide certain monetary reward as incentive for UEs to cache and share contents.
Figure \ref{fig:fig-model} illustrates an example of such a framework, where the squares with different colors represent different content files.
In this example, UE2 caches the blue, red, and yellow contents, and shares the red content with UE3 who does not cache the red content.
UE1 requests the purple content, which is neither in its own cache nor in its neighbors' caches, and thus obtains the content from the CP's server through the MBS.
In such a crowdsourced framework, we are interested in the content caching and sharing problems of UEs, i.e.,
\begin{enumerate}
\item Whether and which UEs are willing to cache and share contents, taking the user's mobility and incentive into consideration?
\item What contents will be cached at which UE, considering the different popularity of different contents?
\end{enumerate}
The above problems are challenging due to the following reasons.
First, the strategies of different UEs are coupled with each other.
Second, the neighbors of each UE may change dynamically due to user's mobility.
Third, a practical network often consists of a large number of UEs.
We will analyze these problems by using the \emph{evolutionary game theory}, an effective tool for analyzing large-scale dynamic systems.
Specifically, we first derive the UEs' best caching and sharing strategies, and analyze their strategy dependence. Then we analyze how these best strategies evolve dynamically over time, based on which
we further characterize the system equilibrium systematically.
Overall, the key contributions of this work are summarized as follows:
\begin{itemize}
\item \emph{Novel Framework:}
We study a novel crowdsourced caching framework, which enables mobile UEs to cache and share popular contents locally, and thus can greatly expand the capability of mobile edge caching.
\item \emph{Novel Solution Technique:}
We introduce a novel solution technique--evolutionary game theory--to analyze the content caching and sharing problems of mobile UEs in a large-scale dynamic network.
By using this technique, we analyze the user strategy dependence and dynamics systematically, and characterize the game equilibrium.
\item \emph{Performance Evaluations and Insights:}
Simulation results show that our proposed framework outperforms the existing schemes in terms of both transmission cost and cellular load. Specifically, in our simulations, our proposed framework can reduce the total transmission cost by 21.5\%$\sim$56.4\% and the cellular load by 24.1\%$\sim$57.8\%.
\end{itemize}
The rest of the paper is organized as follows. In Section \ref{section:model}, we present the system model. In Section \ref{section:game}, we analyze the equilibrium of the user-group strategies based on the evolutionary game. We present the simulation results in Section \ref{section:simulation}, and finally we conclude the paper in Section \ref{section:conclusion}.
\section{System Model}\label{section:model}
\subsection{Network Model}\label{section:model:network}
We consider a network of $U$ mobile UEs, each connecting to the Internet cloud through an MBS (as shown in Figure \ref{fig:fig-model}).
Let $ \mathcal{U} = \{1,\cdots,U\}$ denote the set of all UEs.\footnote{In this work, both ``user'' and ``UE'' refer to the mobile user device.}
UEs can cache contents in their own storage and share with each other, hence form a crowdsourced caching network.
When a UE initiates a content request, it first checks whether the content is available in its local cache; if not, it searches the caches of nearby UEs via D2D links; if the requested content is not found in any neighbor's cache, the UE connects to the cloud through the MBS via the cellular link.
To encourage UEs to act as mini servers and share contents with others,
the CP will provide a certain monetary reward as the incentive.
\subsection{Content Model}\label{section:model:content}
We consider a set of $F$ different content files, denoted by a set ${{\cal F}} = \{1,\cdots,f,\cdots,F\} $, where the size of file \emph{f} is $s_f$.
Content popularity is the probability distribution of content requests over all users, namely the ratio of the number of requests for a given content to the total number of requests. The popularity of content \emph{f} usually follows Zipf's distribution \cite{Li-X-JSAC-2018}, that is, the probability that any user requests file \emph{f} is ${q_f}$:
\begin{equation}
{q_f} = \frac{{{{({R_f} + \varepsilon )}^{ - \beta }}}}{{\sum\limits_{i \in {{\cal F}}} {{{({R_i} + \varepsilon )}^{ - \beta }}} }},\forall f \in{{\cal F}}
\end{equation}where ${R_f}$ is the ranking of file \emph{f} in descending order of popularity in the content library, $\varepsilon \ge 0$ is the stationary factor, and $\beta \ge 0$ is the skew factor. When $\beta = 0$, all files have the same popularity and users' requests are evenly distributed over all contents. When $\beta$ is large, files with lower ranks have higher popularity, so users' requests concentrate on the first few popular files.
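As a quick illustration (our own sketch; the ranking is taken to be $R_f = f$ and all numerical values are arbitrary), the popularity profile $q_f$ above can be computed as follows.
\begin{verbatim}
import numpy as np

def zipf_popularity(F, beta, eps=0.0):
    """Popularity q_f for files f = 1..F, assuming rank R_f = f,
    with q_f proportional to (R_f + eps)^(-beta)."""
    ranks = np.arange(1, F + 1)
    weights = (ranks + eps) ** (-beta)
    return weights / weights.sum()

q = zipf_popularity(F=1000, beta=0.8)
print(q[:5], q.sum())   # mass concentrates on top-ranked files; sums to 1
\end{verbatim}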
\subsection{User's Mobility}\label{section:model:mobility}
For mobile users in the network, D2D sharing is possible only when a pair of users are close enough to each other.
However, the neighbors of each UE may change dynamically due to user's mobility.
Based on the Erd\H{o}s-R\'enyi random graph model, we propose an abstract user mobility model: users move randomly within a certain area, and the probability that each user encounters any other user is $\rho \in [0,1]$.
The probability $\rho$ does not change over time, and the encounter events of each user are independent of each other.
Since we consider a large-scale network scenario, we suppose that the number of mobile users in the network is infinite, namely $U \to \infty $. Therefore, the average number of neighbors that any user can encounter is $\psi = U \cdot \rho$.
\subsection{User Model}\label{section:model:user}
Due to the non-fixed topology of the mobile edge caching network, UEs adopt a probabilistic caching strategy: they first compute the probability of caching each content, and then select the set of contents to be cached based on the above probability distributions.
Denote ${x_{u,f}} \in [0,1]$ as the probability that UE \emph{u} caches file \emph{f}.
Because of the differences in UEs' performance, the cost of caching at different UEs varies. We denote ${\alpha _u}$ as the cost for UE \emph{u} to cache content of unit size and ${c_u}$ as the storage space of UE \emph{u}.
When a UE obtains contents from different sources, the transmission cost differs. Define the unit cost of a UE's request being satisfied locally as ${\omega _L}$, the unit cost of D2D sharing as ${\omega _D}$, and the unit cost of getting files via cellular links as ${\omega _B}$. Note that the three kinds of cost satisfy ${\omega _L}\ll{\omega _D}<{\omega _B}$. In addition, when a UE acts as a server to share contents via D2D links, the reward per unit of transmission given to it by the CP is \emph{r} (here \emph{r} denotes the net reward, i.e., the raw reward minus the D2D output transmission cost). The total transmission cost and reward are directly proportional to the traffic.
The probability that UE \emph{u} requests file \emph{f} and its local cache can satisfy the request is ${x_{u,f}}{q_f}$; hence the average local traffic load of UE \emph{u} is
\begin{equation}
T_u^L = \sum\limits_{f = 1}^F {{x_{u,f}}} {q_f}{s_f},\forall u \in {{\cal U}}
\end{equation}
The probability that UE \emph{u} requests file \emph{f} but its local cache cannot satisfy the request is $(1 - {x_{u,f}}){q_f}$; hence the average traffic load of UE \emph{u} satisfied by D2D sharing is
\begin{equation}
T_u^{in} = \sum\limits_{f = 1}^F {(1 - {x_{u,f}}){P_f}} {q_f}{s_f},\forall u \in{{\cal U}}
\end{equation}where ${P_f}$ is the probability that any UE encounters at least one UE who has cached the file \emph{f} that it requests, and the specific expression will be derived in Section \ref{section:game:derivation}.
The probability that UE \emph{u} cannot find file \emph{f} in any neighbor's cache is $1 - {P_f}$. Therefore, the average traffic load of UE \emph{u} satisfied via the cellular link is
\begin{equation}
T_u^B = \sum\limits_{f = 1}^F {(1 - {x_{u,f}})(1 - {P_f})} {q_f}{s_f},\forall u \in{{\cal U}}
\end{equation}
The occupation of UEs' storage will affect their energy consumption and operating speed, thus bringing some cache cost.
Here we define the relationship between cache cost ${C_u}$ and cache size as a convex function \cite{J-INFOCOM-2018}:
\begin{equation}
{C_u} = \sum\limits_{f = 1}^F {{{({x_{u,f}}{s_f})}^2}{\alpha _u}} ,\forall u \in{{\cal U}}
\end{equation}
When UE \emph{u} acts as a server to satisfy the requests of others, the reward from the CP is proportional to the D2D output traffic:
\begin{equation}
{V_u} = T_u^{out} \cdot r = \left(\sum\limits_{f = 1}^F {{x_{u,f}}{N_f}{s_f}} \right) \cdot r
\end{equation}where ${{N_f}}$ refers to the average number of UEs connected to the UEs who have cached file \emph{f}. The specific expression will be derived in Section \ref{section:game:derivation}.
When UEs perform D2D sharing in the edge network, their privacy cannot be fully guaranteed. Therefore, we introduce ${T_u^{out} \cdot \theta}$ to characterize the impact of users' privacy on the utility. The UE's utility consists of four parts: the reward of D2D sharing, the cache cost, the transmission cost, and the price of privacy, and is given by
\begin{equation}
{U_u} = {V_u} - {C_u} \underbrace{ - T_u^L \cdot {\omega _L} - T_u^{in} \cdot {\omega _D} - T_u^B \cdot {\omega _B}}_{\text{Transmission Cost}} - T_u^{out} \cdot \theta
\end{equation}
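To fix ideas, the following sketch (our own; all numerical values are placeholders) assembles the traffic loads, cache cost, reward and privacy terms above into the utility $U_u$ of a single UE, treating the hit probability $P_f$ and the average service count $N_f$ as given inputs (their expressions are derived in Section~\ref{section:game:derivation}).
\begin{verbatim}
import numpy as np

def utility(x, q, s, P, N, alpha, r, theta, w_L, w_D, w_B):
    """Utility U_u of one UE with caching probabilities x (one per file)."""
    T_L   = np.sum(x * q * s)                  # served from the local cache
    T_in  = np.sum((1 - x) * P * q * s)        # served by a neighbor via D2D
    T_B   = np.sum((1 - x) * (1 - P) * q * s)  # served via the cellular link
    T_out = np.sum(x * N * s)                  # traffic served to other UEs
    C     = alpha * np.sum((x * s) ** 2)       # convex cache cost
    V     = r * T_out                          # reward from the CP
    return V - C - (T_L * w_L + T_in * w_D + T_B * w_B) - theta * T_out

# Placeholder instance with F = 4 files (illustrative values only).
q = np.array([0.4, 0.3, 0.2, 0.1]); s = np.ones(4)
P = np.array([0.6, 0.5, 0.3, 0.1]); N = np.array([0.8, 0.6, 0.3, 0.1])
x = np.array([1.0, 0.5, 0.0, 0.0])
print(utility(x, q, s, P, N, alpha=0.05, r=0.3,
              theta=0.02, w_L=0.01, w_D=0.1, w_B=1.0))
\end{verbatim}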
\subsection{Problem Formulation}\label{section:model:formulation}
In this work, the questions we ultimately want to study are: \emph{whether and which UEs are willing to cache and share contents, taking the user's mobility and incentive into
consideration? What contents will be cached at which UE, considering the different popularity of different contents?}
We will analyze these problems by using the evolutionary game theory. Specifically, we first derive the UEs' best caching and sharing strategies, and analyze their strategy dependence. Then we analyze how these best strategies evolve dynamically over time, based on which we further characterize the system performance systematically.
\section{Game Equilibrium Analysis}\label{section:game}
In this section, we will first formulate the game problem of optimal caching and sharing strategies, and find the condition of equilibrium. Then we will derive some important variables in the model. Finally, we will find UEs' best responses of crowdsourced caching by using evolutionary game theory, and analyze how these best strategies change dynamically.
\subsection{Game Formulation}\label{section:game:formulation}
In the entire network, on the one hand, content caching and sharing will introduce additional cost for UEs. On the other hand, the CP will provide a certain monetary reward as the incentive for UEs to cache and share contents as mini servers.
Based on the crowdsourced caching framework, we consider mobile UEs as players in the game, and take the transmission cost and incentive into account, in order to analyze the optimization of UEs' strategies. For each UE $u$, we establish an optimization problem with the goal of maximizing its individual utility ${U_u}$; its decision variable is the probabilistic caching strategy of UE $u$:
\begin{subequations}
\begin{equation}\begin{aligned}\label{eq:sub1} \mathop {\max }\limits_{{x_u}} ~~&{U_u}({x_u},{x_{ - u}})~~~~~~ \\
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\label{eq:sub2}
s.t.~\sum\limits_{f = 1}^F {{x_{u,f}}{s_f}} \le {c_u} \end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\label{eq:sub3}
e \cdot T_u^{out} \le E
\end{aligned}
\end{equation}
\begin{equation}
~~~~~~~~~{x_{u,f}} \in [0,1],~\forall f \in {{\cal F}}
\end{equation}
\end{subequations}
Here, ${x_u}=\{ {x_{u,1}}, \cdot \cdot \cdot ,{x_{u,f}}, \cdot \cdot \cdot ,{x_{u,F}}\} $. ${x_{ - u}}$ in \eqref{eq:sub1} denotes the cache strategies of all UEs except UE $u$. \eqref{eq:sub2} denotes the constraint of storage capacity. \eqref{eq:sub3} indicates the limit of power consumed by D2D sharing, reflecting the practical fact that the number of UEs served via D2D is limited. We define the energy consumption for transmitting data of unit size via D2D as $e$; then a UE's energy consumption of D2D transmission should be less than the threshold $E$.
It can be seen from \eqref{eq:sub1} that the utility function of UE $u$ is related to the cache strategies of other UEs, so the cache strategies of all UEs are coupled with each other. For the optimal solution $X = {\{ {x_u}\} _{U \times 1}}$, the final \emph{equilibrium} is reached if and only if the following conditions are satisfied:
\begin{equation}\label{eq:eqa9}
{U_u}(x_u^*,x_{ - u}^*) \ge {U_u}(x_u^{},x_{ - u}^*),\forall u \in {{\cal U}}
\end{equation}
The above conditions imply that once an equilibrium is reached, none of the UEs has an incentive to unilaterally change its strategy, and thus the system will stay at the equilibrium.
\subsection{Derivation of Important Variables}\label{section:game:derivation}
Before analyzing the optimization problem and game equilibrium, we first derive the specific expressions of ${P_f}$ and ${N_f}$, which are mentioned in Section \ref{section:model:user}.
\subsubsection{\textbf{Calculation of} ${P_f}$ (Probability of encountering at least one UE caching file $f$)}
Consider any mobile UE \emph{u} in the network, and define the probability that a UE encountered by UE \emph{u} has cached file \emph{f} as ${\eta _f}$; that is, the fraction of UEs caching file \emph{f} is
\begin{equation}
{\eta _f} = \frac{1}{U}\sum\limits_{u = 1}^U {{x_{u,f}}} ,~\forall f \in{{\cal F}}
\end{equation}
Based on the definition in Section \ref{section:model:mobility}, the probability that UE \emph{u} meets any other UE is $\rho $, so the probability that UE \emph{u} encounters a fixed UE who has cached file \emph{f} is ${\eta _f}\rho$. Since there are a total of $U-1$ other UEs in the network, the probability that UE \emph{u} cannot encounter any UE who has cached file \emph{f} is ${(1 - {\eta _f}\rho )^{U - 1}}$. Therefore, the probability that UE \emph{u} meets at least one UE who has cached file \emph{f} is
\begin{equation}
{P_f} = 1 - {(1 - {\eta _f}\rho )^{U - 1}},~\forall f \in{{\cal F}}
\end{equation}
As $U$ grows to infinity, ${P_f}$ can be further derived, based on the principle of infinitesimal equivalence, as
\begin{equation}
\begin{aligned}
\begin{array}{l}
{P_f}=\mathop {\lim }\limits_{U \to \infty } 1 - {(1 - {\eta _f}\rho )^{U - 1}}\\
= \mathop {\lim }\limits_{U \to \infty } 1 - {(1 - {\eta _f}\frac{\psi }{U})^{U - 1}} = 1 - {e^{ - {\eta _f}\psi }}
\end{array}
\end{aligned}
\end{equation}
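As a quick numerical sanity check of this limit (not part of the original derivation), the following short Python snippet compares the exact expression $1 - (1 - {\eta _f}\psi /U)^{U-1}$ with its limit $1 - e^{-{\eta _f}\psi}$; the values of ${\eta _f}$ and $\psi$ are chosen arbitrarily for illustration.
\begin{verbatim}
import numpy as np

eta_f, psi = 0.3, 5.0          # arbitrary illustrative values
for U in (10, 100, 1000, 10000):
    exact = 1 - (1 - eta_f * psi / U) ** (U - 1)   # rho = psi / U
    print(U, exact)
print("limit:", 1 - np.exp(-eta_f * psi))
\end{verbatim}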
\subsubsection{\textbf{Calculation of} ${N_f}$ (Average number of requesters connected to a UE caching file $f$)}
When UE \emph{u} requests file \emph{f} and meets multiple UEs who have cached file \emph{f} within the D2D communication range, it will randomly select one of these UEs for D2D sharing. Therefore, the probability that UE \emph{v}, who has cached file \emph{f}, is selected by requester \emph{u} is
\begin{equation}
P_f^{\mathrm{C}} = \sum\limits_{k = 0}^{U - 2} {\frac{1}{{k + 1}} \cdot P_f^{(k)}} ,\forall f \in{{\cal F}}
\end{equation}where $P_f^{(k)}$ denotes the probability that requester \emph{u} encounters $k$ UEs who also have cached file \emph{f} besides UE \emph{v}.
Except for UEs \emph{u} and \emph{v}, there are $U-2$ other UEs that requester \emph{u} may encounter. Therefore, the probability of requester \emph{u} encountering $k$ potential servers except UE \emph{v} obeys the binomial distribution:
\begin{equation}
P_f^{(k)} = \binom{U - 2}{k} \cdot {({\eta _f}\rho )^k} \cdot {(1 - {\eta _f}\rho )^{U - 2 - k}}
\end{equation}
Since $U$ is large, ${\eta _f}\rho = {\eta _f}\psi /U$ can be regarded as very small, so $P_f^{(k)}$ can be further derived as a Poisson distribution with arrival rate ${\eta _f}\psi $:
\begin{equation}
\begin{aligned}
P_f^{(k)} &= \mathop {\lim }\limits_{U \to \infty } \binom{U - 2}{k} \cdot {({\eta _f}\frac{\psi }{U})^k} \cdot {(1 - {\eta _f}\frac{\psi }{U})^{U - 2 - k}}\\
&= \frac{{{{({\eta _f}\psi )}^k}}}{{k!}} \cdot {e^{ - {\eta _f}\psi }}
\end{aligned}
\end{equation}
Therefore, the probability that UE $v$ who has cached file $f$ is selected by requester $u$ can be written as
\begin{equation}
\begin{array}{l}
P_f^{\mathrm{C}} = \mathop {\lim }\limits_{U \to \infty } \sum\limits_{k = 0}^{U - 2} {\frac{1}{{k + 1}} \cdot P_f^{(k)}} \\
= \mathop {\lim }\limits_{U \to \infty } \sum\limits_{k = 0}^{U - 2} {\frac{1}{{k + 1}} \cdot \frac{{{{({\eta _f}\psi )}^k}}}{{k!}} \cdot {e^{ - {\eta _f}\psi }}} = \frac{1}{{{\eta _f}\psi }}(1 - {e^{ - {\eta _f}\psi }})
\end{array}
\end{equation}
The probability that a UE encountered by UE $v$ has not cached file $f$ is $1 - {\eta _f}$, so the average number of UEs requesting file $f$ that UE $v$ encounters
is $(U - 1) (1 - {\eta _f})\rho{q_f}$. Thus the average number of requesters connected to UEs who have cached $f$ is
\begin{equation}
\begin{aligned}
N_f& = \mathop {\lim }\limits_{U \to \infty } (U - 1)(1 - {\eta _f}){q_f}\rho P_f^{\mathrm{C}}\\
&= \frac{{1 - {\eta _f}}}{{{\eta _f}}} \cdot {q_f}(1 - {e^{ - {\eta _f}\psi }})
\end{aligned}
\end{equation}
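To make the preceding expressions concrete, the following Python sketch assembles ${\eta _f}$, ${P_f}$, ${N_f}$ and the utility ${U_u}$ from a given caching matrix. It is only an illustrative implementation of the formulas above; the function and variable names are ours, and the asymptotic forms of ${P_f}$ and ${N_f}$ are used.
\begin{verbatim}
import numpy as np

def ue_utility(u, X, q, s, psi, w_L, w_D, w_B, r, theta, alpha_u):
    # X: U x F matrix of caching probabilities x_{u,f}
    # q, s: length-F arrays of request probabilities and file sizes
    eta = X.mean(axis=0)                        # eta_f = (1/U) sum_u x_{u,f}
    P = 1.0 - np.exp(-eta * psi)                # P_f (asymptotic form)
    N = np.where(eta > 0,
                 (1 - eta) / np.maximum(eta, 1e-12) * q * P,
                 0.0)                           # N_f (asymptotic form)
    x = X[u]
    T_L = np.sum(x * q * s)                     # local traffic
    T_in = np.sum((1 - x) * P * q * s)          # traffic served by D2D
    T_B = np.sum((1 - x) * (1 - P) * q * s)     # traffic served by cellular
    T_out = np.sum(x * N * s)                   # D2D traffic served to others
    C = alpha_u * np.sum((x * s) ** 2)          # cache cost
    V = T_out * r                               # reward from the CP
    return V - C - (T_L*w_L + T_in*w_D + T_B*w_B) - T_out * theta
\end{verbatim}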
\subsection{Equilibrium Analysis of Evolutionary Game}\label{section:game:equilibrium-analysis}
As we consider a mobile edge network consisting of a large number of UEs, the strategy of a single UE has a negligible impact on the whole network.
Evolutionary game is an effective theoretical tool for analyzing the behavior of a large-scale group, which is often used to predict the group's selection process and finally find a dynamic equilibrium.
Therefore, we model the decision-making process of mobile UEs as an evolutionary game.
In order to achieve the final equilibrium of the evolutionary game, we present an evolutionary-game iterative algorithm, shown in Algorithm 1 below. The algorithm involves many rounds of interaction between UEs, where in each round, UEs update their strategies according to the strategies of other UEs in the previous round.
When UEs' strategies do not change anymore, the algorithm reaches an equilibrium.
Formally, we summarize the key results in the following lemmas.\footnote{Due to space limit, we leave the proof in online technical report \cite{online}.
}
\begin{algorithm}
\caption{Evolutionary-Game Iterative Algorithm}
\LinesNumbered
Initialize the caching strategy ${X^{(0)}} = {\{ x_i^{(0)}\} _{U \times 1}} = 0$\;
Let ${X^\dag } = {\{ x_i^\dag \} _{U \times 1}} = {X^{(0)}}$\;
\While{\emph{(1)}}{
Calculate ${\eta _f}$, ${P_f}$, ${N_f}$\;
\For{ i= \emph{1} \emph{to} U}{
Calculate $x_i^* = \arg \mathop {\max }\limits_{{x_i}} {U_i}(x_i^{},x_{ - i}^\dag)$;
}
${X^*} = {\{ x_i^* \} _{U \times 1}}$\;
${X^*} = \gamma \cdot {X^\dag } + (1 - \gamma ) \cdot {X^*}$\;
\eIf{$\left\| {{X^*} - {X^\dag }} \right\| \le \varepsilon $}{
break\;
}{
${X^\dag } = {X^*}$\;
}
}
\end{algorithm}
\begin{lemma}
There exist proper values of the factor $\gamma$ that guarantee that Algorithm 1 converges.
\end{lemma}
\begin{lemma}
If Algorithm 1 converges, it must converge to an equilibrium point of the game.
\end{lemma}
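For concreteness, the structure of Algorithm 1 can be summarized by the following Python sketch. The inner best response (the per-UE maximization of \eqref{eq:sub1} subject to \eqref{eq:sub2}--\eqref{eq:sub3}) is problem-specific and is left as a user-supplied function here; the names and the damping step $\gamma X^\dag + (1-\gamma)X^*$ follow the algorithm above.
\begin{verbatim}
import numpy as np

def evolve(X0, best_response, gamma=0.98, eps=1e-4, max_iter=1000):
    # X0: initial U x F caching matrix; best_response(u, X) returns x_u^*
    X_dag = X0.copy()
    for _ in range(max_iter):
        X_star = np.array([best_response(u, X_dag)
                           for u in range(X_dag.shape[0])])
        X_star = gamma * X_dag + (1.0 - gamma) * X_star   # damping step
        if np.linalg.norm(X_star - X_dag) <= eps:         # converged
            return X_star
        X_dag = X_star
    return X_dag
\end{verbatim}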
\begin{figure*}
\caption{(a) Total Cost of Transmission; (b) Cellular Traffic Load; (c) Percentage of Users Caching Various Files, vs.\ $\beta$.}
\label{fig:mcs-simu1}
\end{figure*}
\begin{figure*}
\caption{(a) Total Cost of Transmission; (b) Cellular Traffic Load; (c) Percentage of Users Caching Various Files, vs.\ $\rho $.}
\label{fig:mcs-simu2}
\end{figure*}
\section{Simulations}\label{section:simulation}
We perform numerical simulations to illustrate the performance of crowdsourced caching based on the evolutionary game. We use the following simulation parameters:
(i) $U=1000$, $F=10$, ${s_f}=1(\forall f \in {{\cal F}})$, ${c_u}=3(\forall u \in{{\cal U}})$; (ii) the files' popularity decreases with increasing index; (iii) unit costs of transmission ${\omega _L}=0.01$, ${\omega _D}=2$, ${\omega _B}=10$, and reward for unit D2D transfer $r=1.5$; (iv) as the transmission environment is relatively safe, $\theta=0.1$; (v) for the limit on the energy of D2D transmission, $E = 75$ and $e = 0.7$; (vi) the iteration factor $\gamma$ is set to 0.98.
We compare the crowdsourced caching scheme proposed in this work with the following two baseline schemes: (i) \emph{Most Popular Caching (MPC)}, where all UEs cache the most popular files; (ii) \emph{Random Uniform Caching (RUC)}, where UEs cache files randomly to fill up their storage.
To evaluate these schemes, we use the following two performance metrics: (i) \emph{User's average cost of transmission}, denoting the average sum of cost to satisfy UE's request through local, D2D, and cellular transmission; (ii) \emph{Average traffic load of cellular transmission}, denoting the average load of UEs' requests satisfied via cellular links. In addition, the impact of various parameters on the percentage of users caching different files can also reflect the performance of the system.
Subfigure (a) of Fig.~\ref{fig:mcs-simu1} and Fig.~\ref{fig:mcs-simu2} show the total transmission costs under different $\beta$ and $\rho$.
We can see that as $\beta $ and $\rho$ increase, the total cost of both the crowdsourced scheme and the MPC scheme continues to decrease.
Moreover, with the increase of $\beta$, user requests gradually concentrate on a few of the most popular files, and the probability of satisfying requests through D2D sharing increases, so the performance of the MPC scheme gradually approaches that of our scheme.
Compared with the MPC scheme, our scheme can reduce the total cost by 42.5\%, and compared with the RUC scheme, it can reduce the total cost by 55.2\%.
Subfigure (b) of Fig.~\ref{fig:mcs-simu1} and Fig.~\ref{fig:mcs-simu2} show the cellular transmission loads under different $\beta$ and $\rho$. We can see that as $\beta $ and $\rho$ increase, the cellular load of the crowdsourced scheme continues to decrease.
With the increase of $\beta $, the slope of the cellular load with respect to $\rho$ gradually decreases.
Compared with the MPC scheme, our scheme can reduce the cellular load by 21.5\%, and compared with the RUC scheme, it can reduce the cellular load by 56.4\%.
Subfigure (c) of Fig.~\ref{fig:mcs-simu1} and Fig.~\ref{fig:mcs-simu2} show the percentage of UEs caching different files under different $\beta$ and $\rho$.
We can see that when the popularity of files is evenly distributed, the caching probabilities of different files are all the same. As requests become more concentrated, the caching probabilities become increasingly differentiated across files.
When the encounter probability is 0, UEs only cache the most popular files to meet their own needs.
\section{Conclusion}\label{section:conclusion}
In this work, we studied a crowdsourced caching
framework for mobile edge caching, which enables mobile devices
to cache and share popular contents with each other via D2D.
We adopted
the evolutionary game theory to analyze the UEs' content caching
and sharing strategies.
Simulation results show that our proposed crowdsourced
caching scheme outperforms the existing schemes in terms of
the total transmission cost and the cellular load.
\end{document}
|
\begin{document}
\title{Certified Ordered Completion
\thanks{This work is supported by the Austrian Science Fund (FWF): projects T789 and P27502.}}
\begin{abstract}
On the one hand, ordered completion is a fundamental technique in equational
theorem proving that is employed by automated tools. On the other hand, their
complexity makes such tools inherently error prone.
As a remedy to this situation we give an Isabelle/HOL formalization of
ordered rewriting and completion that comes with a formally verified
certifier for ordered completion proofs.
By validating generated proof certificates, our certifier increases the
reliability of ordered completion tools.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
Completion has evolved as a fundamental technique in automated reasoning
since the ground-breaking work by Knuth and Bendix~\cite{KB70}.
Its goal is to transform a given set of equations into a terminating and
confluent term rewrite system that induces the same equational theory and can
thus be used to decide equivalence with respect to the
initial set of
equations. Since the
original procedure can fail if unorientable equations are
encountered, ordered completion was developed to remedy this
shortcoming~\cite{BDP89}. The systems generated by ordered completion
tools are in general only ground
confluent, but this turns out to be sufficient for
practical applications like refutational theorem proving.
Consider for example the following equational system $\mc{E}_0$
which the tool~\textsf{M{\ae}dMax}\xspace~\cite{WM18}
\begin{xalignat*}{3}
\fsdiv{\X}{\Y} &\eq \pair{\fs{0}}{\Y} &
\fsdiv{\X}{\Y} &\eq \pair{\fs{s}(\Q)}{\R} &
\fsminus{\X}{\fs{0}} &\eq \X \\
\fsminus{\fs{0}}{\Y} &\eq \fs{0} &
\fsminus{\fs{s}(\X)}{\fs{s}(\Y)} &\eq \fsminus{\X}{\Y} &
\fsgt{\fs{s}(\X) }{\fs{s}(\Y}) &\eq \fsgt{\X }{\Y} \\
\fsgt{\fs{s}(\X) }{\fs{0}} &\eq \tru &
\fsle{\fs{s}(\X)}{\fs{s}(\Y}) &\eq \fsle{\X}{\Y} &
\fsle{\fs{0}}{\X} &\eq \tru
\end{xalignat*}
transforms by ordered completion into the following rules $\mc{R}$ ($\to$) and
equations $\mc{E}$ ($\eq$):
\begin{xalignat*}{4}
\fsminus{\X}{\fs{0}} &\to \X &
\fsminus{\fs{0}}{\X} &\to \fs{0} &
\fsminus{\fs{s}(\X)}{\fs{s}(\Y)} &\to \fsminus{\X}{\Y} &
\fsdiv{\X}{\Y} &\to \pair{\fs{0}}{\Y}\\
\fsle{\m{\fs{0}}}{\X} &\to \tru &
\fsle{\fs{s}(\X)}{\fs{s}(\Y}) &\to \fsle{\X}{\Y} &
\fsgt{\fs{s}(\X) }{\fs{0}} &\to \tru \\
\fsgt{\fs{s}(\X) }{\fs{s}(\Y}) &\to \fsgt{\X }{\Y} &
\pair{\fs{s}(\X)}{\Y} &\eq \pair{\fs{s}(\Q)}{\R} &
\pair{\fs{s}(\Q)}{\R} &\eq \pair{\fs{0}}{\Y} &
\pair{\fs{0}}{\X} &\eq \pair{\fs{0}}{\Y}
\end{xalignat*}
This system can be used to decide a given ground equation
by checking whether the terms' unique normal
forms (with respect to ordered rewriting) are equal.
Such ground complete systems are useful for other tools, like
\textsf{ConCon}\xspace~\cite{SM14}---a tool for automatically proving confluence of conditional
term rewrite systems---which employs ordered completion for proving
infeasibility of conditional critical pairs.
In fact, $\mc{E}_0$ from our initial example is the equational system that
\textsf{ConCon}\xspace derives from Cops~\cops{361} for that purpose.
The latter models division with remainder,
though the transformation performed by \textsf{ConCon}\xspace creates some equations
which do not fit into this semantics but are required to decide confluence.
However, automated tools like \textsf{ConCon}\xspace and \textsf{M{\ae}dMax}\xspace are complex and
highly optimized.
The produced proofs often comprise hundreds of equations and thousands of steps.
Hence care should be taken to trust the output of such tools.
To
improve this situation we follow a two-staged certification
approach and first
(1) add the relevant concepts and results to a formal library, and then
(2) use code generation to obtain a trusted certifier.
More specifically, our contributions are as follows:
\begin{itemize}
\item
Regarding stage (1), we extended the \textsl{\EMPH{Isa}belle
\EMPH{F}ormalization \EMPH{o}f
\EMPH{R}ewriting}\footnote{\url{http://cl-informatik.uibk.ac.at/isafor}}
(\isafor) by ordered rewriting and a generalization of the ordered completion
calculus \textsf{oKB}\xspace~\cite{BDP89}, and proved the latter correct for finite runs using
ground-total reduction orders (\secref{okb}). Moreover, we established
ground-totality of the lexicographic path order and the Knuth-Bendix order.
\item
With respect to stage (2),
we extended the XML-based \emph{certification problem format} (CPF for short)
\cite{ST14}
by certificates comprising the initial
equations, the resulting system along with a reduction order, and a stepwise
derivation of the latter from the former.
We then formalized check functions that verify that the supplied derivation
corresponds to a valid \textsf{oKB}\xspace run
whose final state matches the resulting
system (\secref{proof checking}).
As a result \ceta (the certifier accompanying \isafor) can now certify ordered
completion proofs produced by the tool \textsf{M{\ae}dMax}\xspace~\cite{WM18}.
\end{itemize}
\section{Preliminaries}
\label{sec:preliminaries}
In the sequel we use standard notation from term rewriting~\cite{BN98}.
We consider the \emph{set of all terms} $\mc{T}(\mc{F},\mc{V})$ over a signature $\mc{F}$
and an infinite set of variables $\mc{V}$, while $\mc{T}(\mc{F})$ denotes the
\emph{set of all ground terms}.
A \emph{substitution} $\sigma$ is a mapping from variables to terms.
As usual, we write $t\sigma$ for the \emph{application} of $\sigma$ to a term $t$.
A \emph{variable permutation} (or \emph{renaming})~$\pi$ is a bijective substitution
such that $\pi(x) \in \mc{V}$ for all $x\in\mc{V}$.
For an equational system (ES) $\mc{E}$ we write $\symcl{\mc{E}}$ to
denote its symmetric closure~$\mc{E} \cup \{t \eq s \mid s\eq t \in \mc{E}\}$.
For a reduction order $>$ and an ES~$\mc{E}$, the term rewrite system (TRS)~$\mc{E}^>$
consists of all rules~$s\sigma \to t\sigma$ such that $s \eq t \in \mc{E}$ and
$s\sigma > t\sigma$.
Given a reduction order $>$, an \emph{extended overlap} is given by two
variable-disjoint variants~\mbox{$\ell_1 \eq r_1$} and~$\ell_2 \eq r_2$ of
equations in $\symcl{\mc{E}}$ such that $p \in \mc{P}\m{os}_{\mc{F}}(\ell_2)$ and $\ell_1$ and
$\ell_2|_p$ are unifiable with most general unifier~$\mu$.
An extended overlap which in addition satisfies $r_1\mu \not > \ell_1\mu$
and $r_2\mu \not > \ell_2\mu$ gives rise to the \emph{extended critical pair}
$\ell_2[r_1]_p\mu \eq r_2\mu$. The set $\ECP[>](\mc{E})$ consists of all extended
critical pairs among equations in
$\mc{E}$.
A TRS~$\mc{R}$ is \emph{(ground) complete} if it is terminating and confluent
(on ground terms).
Finally, we say that a TRS~$\mc{R}$ is a presentation of an ES~$\mc{E}$,
whenever ${\leftrightarrow^*_\mc{E}} = {\leftrightarrow^*_\mc{R}}$.
\section{Formalizing Ordered Completion}
\label{sec:okb}
We consider the following definition of ordered
completion.
\begin{definition}[Ordered Completion]
\label{def:oKB}
\isaforlink{Ordered_Completion}{ind:oKB'}{
The inference system \textsf{oKB} of ordered completion
operates on pairs $(\mc{E},\mc{R})$ of equations~$\mc{E}$ and rules~$\mc{R}$
over a common signature $\mc{F}$. It consists of the
following inference rules, where $\mc{S}$ abbreviates $\REgt$
and $\pi$ is a renaming.
\begin{center}
\begin{tabular}{@{}lc@{~~}l@{\qquad}lc@{~~}l@{}}
\textsf{deduce} &
$\displaystyle \frac
{\mc{E},\mc{R}}
{\mc{E} \cup \{ s\pi \eq t\pi \},\mc{R}}$
& if
$s \xleftarrow[\mc{R} \cup \mc{E}]{} \cdot \xrightarrow[\mc{R} \cup \mc{E}]{} t$
&
\textsf{compose} &
$\displaystyle \frac
{\mc{E},\mc{R} \uplus \{ s \to t \}}
{\mc{E},\mc{R} \cup \{ s\pi \to u\pi \}}$
& if $t \xrightarrow{}_{\mc{S}} u$
\\ & \\
&
$\displaystyle \frac
{\mc{E} \uplus \{ s \eq t \},\mc{R}}
{\mc{E},\mc{R} \cup \{ s\pi \to t\pi \}}$
& if $s > t$
&
&
$\displaystyle \frac
{\mc{E} \uplus \{ s \eq t \},\mc{R}}
{\mc{E} \cup \{ u\pi \eq t\pi \},\mc{R}}$
&
if $s \to_{\mc{S}} u$
\\[-.5ex]
\textsf{orient} & & &
\textsf{simplify}
\\[-.5ex]
&
$\displaystyle \frac
{\mc{E} \uplus \{ s \eq t \},\mc{R}}
{\mc{E},\mc{R} \cup \{ t\pi \to s\pi \}}$
& if $t > s$
&
&
$\displaystyle \frac
{\mc{E} \uplus \{ s \eq t \},\mc{R}}
{\mc{E} \cup \{ s\pi \eq u\pi \},\mc{R}}$
&
if $t \to_{\mc{S}} u$
\\ & \\
\textsf{delete} &
$\displaystyle \frac
{\mc{E} \uplus \{ s \eq s \},\mc{R}}
{\mc{E},\mc{R}}$ &
&
\textsf{collapse} &
$\displaystyle \frac
{\mc{E},\mc{R} \uplus \{ t \to s \}}
{\mc{E} \cup \{ u\pi \eq s\pi \},\mc{R}}$
&
if $t \to_{\mc{S}} u$
\end{tabular}
\end{center}
\mbox{}}
\end{definition}
We write $(\mc{E},\mc{R}) \vdash (\mc{E}',\mc{R}')$ if $(\mc{E}',\mc{R}')$ is obtained
from $(\mc{E},\mc{R})$ by employing one of the above inference rules.
A finite sequence of inferences
$
(\mc{E}_0,\varnothing) \vdash (\mc{E}_1,\mc{R}_1) \vdash \cdots \vdash (\mc{E}_n,\mc{R}_n)
$
is called a \emph{run}. \defref{oKB} differs from the original
formulation of ordered completion~\cite{BDP89} in two ways.
First, \textsf{collapse} and \textsf{simplify} do not require an encompassment
condition. This omission is possible since we only consider
\emph{finite} runs.
Second, we allow variants of rules and equations to be added.
This relaxation tremendously simplifies certificate generation in tools,
where facts are renamed upon generation
to avoid the maintenance and processing of many renamed versions of one
equation.
The following inclusions express
straightforward properties of \textsf{oKB}\xspace.
\begin{lemma}
\label{lem:oKB less}
\isaforlink{Ordered_Completion_Impl}{lem:oKB'_rtrancl_less}{
If $(\mc{E},\mc{R}) \vdash^* (\mc{E}',\mc{R}')$ then $\mc{R} \subseteq {>}$
implies $\mc{R}' \subseteq {>}$.
}
\qed
\end{lemma}
\begin{lemma}
\label{lem:oKB conversion}
\isaforlink{Ordered_Completion_Impl}{lem:oKB_steps_conversion_permuted}{
If $(\mc{E},\mc{R}) \vdash^* (\mc{E}',\mc{R}')$ then the conversion equivalence
${\leftrightarrow^*_{\mc{E}\cup\mc{R}}} = {\leftrightarrow^*_{\mc{E}'\cup\mc{R}'}}$ holds.
}
\qed
\end{lemma}
The following abstract result is the key ingredient to our proof of
ground completeness.
\begin{lemma}
\label{lem:GCR ordstep}
\isaforlink{Ordered_Rewriting}{lem:GCR_ordstep}{
Let $\mc{E}$ be an ES and $>$ a reduction order such that
$s > t$ or $t \eq s \in \mc{E}$ holds for all
$s \eq t \in \mc{E}$.
If for all $s \eq t \in \ECP[>](\mc{E})$ we have $s \downarrow_{\mc{E}^>} t$
or there is some $s' \eq t' \in \symcl{\mc{E}}$ such that
$s \eq t = (s' \eq t')\sigma$ then $\mc{E}^>$ is ground complete.
}
\qed
\end{lemma}
In combination, Lemmas~\ref{lem:oKB less}, \ref{lem:oKB conversion}, and
\ref{lem:GCR ordstep} allow us to obtain our main correctness result:
acceptance of a certificate by our check function
implies
that $\REgt$
is a ground complete presentation of $\mc{E}_0$. For simplicity's sake, we give
only the corresponding high-level result (that is, not mentioning our concrete
implementation):
\begin{theorem}
\label{thm:correctness}
\isaforlink{Check_Completion_Proof}{lem:check_ordered_completion_proof_sound}{
Suppose
$(\mc{E}_0,\varnothing) \vdash^* (\mc{E},\mc{R})$ was obtained
using a ground-total reduction order $>$
with minimal constant $c$ and for all
$s \eq t \in \ECP[>](\symcl{\mc{E}} \cup \mc{R})$ either
$s \downarrow_{\REgt} t$, or
$s \eq t = (s' \eq t')\sigma$ for some $s' \eq t' \in \symcl{\mc{E}}$.
Then ${\leftrightarrow^*_{\mc{E}_0}} = {\leftrightarrow^*_{\mc{R}\cup\mc{E}}}$ and $\REgt$ is ground complete.}\qed
\end{theorem}
This result employs the following sufficient condition for ground
completeness:
all critical pairs are joinable or instances of equations already present.
In fact, this is not a necessary condition.
Martin and Nipkow~\cite{MN90} gave examples of ground confluent systems that
do not satisfy this condition, and presented a stronger criterion.
However, ground confluence is known to be undecidable even for terminating
TRSs~\cite{KNO90}, hence no complete criterion can be implemented.
\paragraph{Ground-total reduction orders.}
Ground confluence crucially relies on ground-total reduction orders. Our \isafor
proofs of the following results follow the standard textbook
approach~\cite{BN98}.
\begin{lemma}
\label{lem:LPO gtotal}
\isaforlink{RPO}{lem:lpo_ground_total}{
If $>$ is a total precedence on $\mc{F}$ then $>_\m{lpo}$ is total on $\mc{T}(\mc{F})$.
}\qed
\end{lemma}
\begin{lemma}
\label{lem:KBO gtotal}
\isaforlink{KBO}{lem:S_ground_total}{
If $>$ is a total precedence on $\mc{F}$ then $>_\m{kbo}$ is total on $\mc{T}(\mc{F})$.
}\qed
\end{lemma}
In addition, we proved that for any given KBO $>_\m{kbo}$ (LPO $>_\m{lpo}$) defined
over a total precedence $>$ there exists a minimal constant $c$ such that
$t \geqslant_\m{kbo} c$ ($t \geqslant_\m{lpo} c$) holds for all $t \in \mc{T}(\mc{F})$.
\section{Checking Ordered Completion Proofs}
\label{sec:proof checking}
While \ceta
has supported
certification of standard completion for quite some
time~\cite{ST13},
certification of ordered completion proofs is considerably more intricate.
For standard completion, the certificate contains the initial set of equations
$\mc{E}_0$, the resulting TRS~$\mc{R}$ together with a termination proof, and stepwise
$\mc{E}_0$-conversions from $\ell$ to $r$ for each rule $\ell \to r \in \mc{R}$. The
certifier first checks the termination proof to guarantee
termination of $\mc{R}$. This allows us to establish confluence of $\mc{R}$ by ensuring
that all critical peaks are joinable. At this point it is easy to verify
${\leftrightarrow^*_{\mc{E}_0}} \subseteq {\leftrightarrow^*_\mc{R}}$: for each equation $s \eq t \in \mc{E}_0$
compute the $\mc{R}$-normal forms of $s$ and $t$ and check for syntactic equality.
The converse inclusion
${\leftrightarrow^*_\mc{R}} \subseteq {\leftrightarrow^*_{\mc{E}_0}}$
is taken care of by the provided $\mc{E}_0$-conversions.
Overall, we obtain that $\mc{R}$ is a complete presentation of $\mc{E}_0$ without
mentioning a specific inference system for completion.
Unfortunately, the same approach does not work for ordered completion:
The inclusion ${\leftrightarrow^*_{\mc{E}_0}} \subseteq {\leftrightarrow^*_{\mc{R} \cup \mc{E}}}$ cannot be
established by rewriting equations in $\mc{E}_0$ to normal form, since they
may contain variables but $\REgt$ is only ground confluent.
Therefore, we instead ask for certificates that contain the input equalities
$\mc{E}_0$, the resulting equations and rules $(\mc{E},\mc{R})$, the reduction
order $>$, and a sequence of inference steps according to \defref{oKB}.
A valid certificate ensures (by \lemref{oKB conversion}) that the
relations $\leftrightarrow^*_{\mc{E}_0}$ and $\leftrightarrow^*_{\mc{R} \cup \mc{E}}$ coincide.
\newcommand{\textsf{orient}$_{\m{lr}}$\xspace}{\textsf{orient}$_{\m{lr}}$\xspace}
\newcommand{\textsf{orient}$_\m{rl}$\xspace}{\textsf{orient}$_\m{rl}$\xspace}
The certificate corresponding to our initial example contains the equations
$\mc{E}_0$, the resulting system $(\mc{E},\mc{R})$, and the reduction order
$>_\m{kbo}$ with precedence $\fsgtname > \fs{s} > \fslename > \tru > \fsminusname > \fsdivname > \fs{p} > \fs{0}$,
$w_0 = 1$, and
$w(\fs{0}) = 2$, $w(\fsdivname) = w(\tru) = w(\fs{s}) = 1$,
and all other symbols having weight 0.
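As an illustration of how this order is applied, consider the rule $\fsminus{\X}{\fs{0}} \to \X$ from the certificate: the weight of the left-hand side is $w(\fsminusname) + w_0 + w(\fs{0}) = 0 + 1 + 2 = 3$, the weight of the right-hand side is $w_0 = 1$, and the variable $\X$ occurs once on both sides, so $\fsminus{\X}{\fs{0}} >_\m{kbo} \X$ and the corresponding orient step is justified.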
In addition, a sequence of inference steps explains how $(\mc{E},\mc{R})$ is obtained from
$\mc{E}_0$:
\noindent
\parbox{2cm}{\textsf{simplify$_{\m{left}}$}}
$\fsdiv{\X}{\Y} \eq \pair{\fs{s}(\Q)}{\R}$ to $\pair{\fs{0}}{\Y} \eq
\pair{\fs{s}(\Q)}{\R}$\\
\parbox{2cm}{\textsf{deduce}}
$\pair{\fs{0}}{\X} \leftarrow \pair{\fs{s}(\U)}{\V} \to \pair{\fs{0}}{\Y}$ \\
\parbox{2cm}{\textsf{deduce}}
$\pair{\fs{s}(\X)}{\Y} \leftarrow \pair{\fs{0}}{\U} \to \pair{\fs{s}(\Q)}{\R}$\\
\parbox{2cm}{\textsf{deduce}}
$\fsgt{\X}{\Y} \leftarrow \fsgt{\fs{s}(\X)}{\fs{s}(\Y}) \to
\fsgt{\fs{s}(\fs{s}(\X))}{\fs{s}(\fs{s}(\Y}))$\\
\parbox{2cm}{\textsf{deduce}}
$\fsgt{\fs{s}(\fs{s}(\X))}{\fs{s}(\fs{0}}) \leftarrow \fsgt{\fs{s}(\X)}{\fs{0}} \to \tru$ \\
\parbox{2cm}{\textsf{orient}$_\m{rl}$\xspace}
$\fsle{\fs{0}}{\X} \to \tru$ \\
\parbox{2cm}{\textsf{orient}$_{\m{lr}}$\xspace}
$\fsgt{\fs{s}(\fs{s}(\X))}{\fs{s}(\fs{0}}) \to \tru$\\
\parbox{2cm}{\textsf{orient}$_\m{rl}$\xspace}
$\fsgt{\fs{s}(\X)}{\fs{s}(\Y}) \to \fsgt{\X}{\Y}$\\
\parbox{2cm}{\textsf{orient}$_{\m{lr}}$\xspace}
$\fsgt{\fs{s}(\X)}{\fs{0}} \to \tru$ \\
\parbox{2cm}{\textsf{orient}$_\m{rl}$\xspace}
$\fsgt{\fs{s}(\fs{s}(\X))}{\fs{s}(\fs{s}(\Y})) \to \fsgt{\X}{\Y}$ \\
\parbox{2cm}{\textsf{orient}$_\m{rl}$\xspace} $\fsminus{\X}{\fs{0}} \to \X$ \\
\parbox{2cm}{\textsf{orient}$_{\m{lr}}$\xspace} $\fsdiv{\X}{\Y} \to \pair{\fs{0}}{\Y}$\\
\parbox{2cm}{\textsf{orient}$_\m{rl}$\xspace} $\fsminus{\fs{s}(\X)}{\fs{s}(\Y)} \to \fsminus{\X}{\Y}$\\
\parbox{2cm}{\textsf{orient}$_\m{rl}$\xspace} $\fsminus{\fs{0}}{\X} \to \fs{0}$ \\
\parbox{2cm}{\textsf{orient}$_\m{rl}$\xspace} $\fsle{\fs{s}(\X)}{\fs{s}(\Y}) \to \fsle{\X}{\Y}$ \\
\parbox{2cm}{\textsf{collapse}}
$\fsgt{\fs{s}(\fs{s}(\X))}{\fs{s}(\fs{s}(\Y})) \to \fsgt{\X}{\Y}$ to
$\fsgt{\fs{s}(\X)}{\fs{s}(\Y}) \eq \fsgt{\X}{\Y}$\\
\parbox{2cm}{\textsf{simplify$_{\m{left}}$}}
$\fsgt{\fs{s}(\X)}{\fs{s}(\Y}) \eq \fsgt{\X}{\Y}$ to $\fsgt{\X}{\Y} \eq
\fsgt{\X}{\Y}$ \\
\parbox{2cm}{\textsf{collapse}}
$\fsgt{\fs{s}(\fs{s}(\X))}{\fs{s}(\fs{0}}) \to \tru$ to $\fsgt{\fs{s}(\X)}{\fs{0}}
\eq \tru$ \\
\parbox{2cm}{\textsf{simplify$_{\m{left}}$}}
$\fsgt{\fs{s}(\X)}{\fs{0}} \eq \tru$ to $\tru \eq \tru$\\
\parbox{2cm}{\textsf{delete}} $\fsgt{\X}{\Y} \eq \fsgt{\X}{\Y}$\\
\parbox{2cm}{\textsf{delete}} $\tru \eq \tru$
Given such a certificate, \ceta checks that the provided sequence of
inferences forms a run $(\mc{E}_0\pi,\varnothing) \vdash^* (\mc{E},\mc{R})$
for some renaming $\pi$.
Verifying the validity of individual inferences involves checking side conditions
such as orientability of a term pair in an orient step with respect to the given
reduction order.
Then it is checked that $\REgt$ is ground confluent according to the
criterion of Theorem~\ref{thm:correctness}.
Finally, it is ensured that the given reduction order $>$ has a total precedence
(and is admissible, in the case of KBO). As usual in \ceta, error messages are printed
if one of these checks fails, pointing out the reason for the proof being
rejected.
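To convey the flavor of this replay check, the following Python sketch validates \textsf{orient} and \textsf{delete} steps against a given reduction order. It is a schematic illustration only and does not reflect \ceta's actual implementation: renaming, the rewriting-based rules (\textsf{compose}, \textsf{simplify}, \textsf{collapse}), and the joinability check for \textsf{deduce} are omitted.
\begin{verbatim}
def replay_run(E0, steps, greater):
    # E0: input equations as (s, t) pairs; greater(s, t) is the reduction order
    E, R = set(E0), set()
    for name, (s, t) in steps:
        if name == "orient_lr":
            assert (s, t) in E or (t, s) in E, "equation not present"
            assert greater(s, t), "not oriented by the order"
            E.discard((s, t)); E.discard((t, s)); R.add((s, t))
        elif name == "orient_rl":
            assert (s, t) in E or (t, s) in E, "equation not present"
            assert greater(t, s), "not oriented by the order"
            E.discard((s, t)); E.discard((t, s)); R.add((t, s))
        elif name == "delete":
            assert s == t and (s, t) in E
            E.discard((s, t))
        else:
            raise NotImplementedError("only orient/delete in this sketch")
    return E, R
\end{verbatim}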
\section{Conclusion}
\label{sec:conclusion}
We presented our formalization of ordered completion in \isafor, which enables
\ceta (starting with version~\isaforversion) to certify ordered completion
proofs. To the best of our knowledge, \ceta thus constitutes the first formally
verified certifier for ordered completion.
Together with Hirokawa and Middeldorp we reported on another Isabelle/HOL
formalization of ordered completion \cite{HMSW17}. The main difference to our
current work is that this other formalization is based on a more restrictive
inference system of ordered completion that also covers infinite runs, while
we restrict to finite runs in the interest of certification.
Indeed every finite run akin to \cite[Definition 18]{HMSW17} is also a run
according to Definition~\ref{def:oKB}, while the inference sequence in our
running example is not possible in the former setting.
As future work, we plan to add more powerful criteria for ground confluence to \isafor,
and support equational disproofs based on ground complete systems in \ceta.
To that end, it would be useful to also support narrowing in \ceta.
Certified equational disproofs could in turn be used to certify confluence
proofs by \textsf{ConCon}\xspace which rely on infeasibility of conditional critical pairs.
\end{document}
|
\begin{document}
\title{The Multivariate Hawkes Process in High~Dimensions: Beyond Mutual Excitation}
\author{Shizhe Chen, Ali Shojaie, Eric Shea-Brown, and Daniela Witten \\
}
\date{June 18th, 2019}
\maketitle
\begin{abstract}
The Hawkes process is a class of point processes whose future depends on their own history.
Previous theoretical work on the Hawkes process is limited to a special case in which a past event can only increase the occurrence of future events, and the link function is linear.
However, in neuronal networks and other real-world applications, inhibitory relationships may be present, and the link function may be non-linear.
In this paper, we develop a new approach for investigating the properties of the Hawkes process without the restriction to mutual excitation or linear link functions.
To this end, we employ a thinning process representation and a coupling construction to bound the dependence coefficient of the Hawkes process.
Using recent developments on weakly dependent sequences, we establish a concentration inequality for second-order statistics of the Hawkes process.
We apply this concentration inequality to cross-covariance analysis in the high-dimensional regime, and we verify the theoretical claims with simulation studies.
\end{abstract}
\textbf{Keywords:} {Hawkes process}; {thinning process}; {weak dependence}.
\section{Introduction}\label{sec::intro}
\cite{hawkes1971} proposed a class of point process models in which a past event can affect the probability of future events.
The Hawkes process and its variants have been widely applied to model recurrent events in many fields, with notable applications to earthquakes \citep{ogata1988}, crimes \citep{mohler2011}, interactions in social networks \citep{simma2012,perry2013}, financial events \citep{chavez2005,bowsher2007,sahalia2015}, and spiking histories of neurons (see e.g., \citealp{brillinger1988, okatan2005, pillow2008}).
There is currently a significant gap between applications and statistical theory for the Hawkes process.
\cite{hawkes1971} considered the \emph{mutually-exciting} Hawkes process, in which an event \emph{excites} the process, i.e. one event may trigger future events.
Later, \cite{hawkes1974} developed a cluster process representation for the mutually-exciting Hawkes process, which is an essential tool for subsequent theoretical developments \citep{reynaud2010, hansen2015,bacry2015}.
The cluster process representation requires two key assumptions: (i) the process is mutually-exciting; and (ii) the link function is linear, implying that the effects of past events on the future firing probabilities are additive.
In many applications, however, one might wish to allow for inhibitory events and non-additive aggregation of effects from past events.
For instance, it is well-known that a spike of one neuron may \emph{inhibit} the activities of other neurons (see e.g., \citealp{purves2001}), meaning that it decreases the probability that other neurons will spike.
Furthermore, non-linear link functions are often used when analyzing spike trains \citep{paninski2007,pillow2008}.
In these cases, many existing theoretical results do not apply, since \citeauthor{hawkes1974}'s cluster process representation is no longer viable.
In this paper, we propose a new analytical tool for the Hawkes process that applies beyond the mutually-exciting and linear setting.
We employ a new representation of the Hawkes process to replace the cluster process representation.
To demonstrate the application of this new analytical tool, we establish a concentration inequality for second-order statistics of the Hawkes process, without restricting the process to be mutually-exciting or to have a linear link function. We apply this tool to study smoothing estimators of cross-covariance functions of the Hawkes process.
While this paper was under revision, it came to our attention that \cite{costa2018} have concurrently studied the Hawkes process with inhibitions in the one-dimensional case. By contrast, our work allows for multiple dimensions and considers the high-dimensional setting. We will provide some additional remarks on their proposal in Section~\ref{sec::highd}.
The paper is organized as follows.
We introduce the Hawkes process and review the existing literature in Section~\ref{sec::model}.
In Section~\ref{sec::conc}, we present the construction of a coupling process, and derive a new concentration inequality for the Hawkes process.
In Section~\ref{sec::cross-covariance}, we study the theoretical properties of smoothing estimators of the cross-covariance functions of the Hawkes process, and corroborate our findings with numerical experiments.
Proofs of the main results are in Section~\ref{sec::proofs_conc}.
We conclude with a discussion in Section~\ref{sec::discussion}.
Technical proofs are provided in the Appendix.
\section{Background on the Hawkes Process}\label{sec::model}
In this section, we provide a very brief review of point processes in general, and the Hawkes process in particular.
We refer interested readers to the monograph by \cite{daley2003} for a comprehensive discussion of point processes.
We use the notation $f\ast g (t) \equiv \int_{-\infty}^{\infty} f(\Delta) g(t-\Delta) \mathrm{d} \Delta $ to denote the convolution of two functions, $f$ and $g$.
We use $\|a\|_2$ to denote the $\ell_2$-norm of a vector $a \in \mathbb{R}^p$.
Furthermore, $\|f \|_{2,[l,u]} \equiv \big\{\int_l^u f^2(t) \mathrm{d} t \big\}^{1/2}$ will denote the $\ell_2$-norm of a function $f$ on the interval $[l,u]$, and $\|f\|_{\infty} \equiv \sup_x |f(x)|$ will denote the maximum of $f$.
We use $\Gamma_{\max}(\bm{A})$ for the maximum eigenvalue of a square matrix $\bm{A}$.
We use $\bm{J}$ to denote a $p$-vector of ones.
The notation $\mathds{1}_{[C]}$ is an indicator variable for the event $C$.
\subsection{A Brief Review of Point Processes}
Let $\mathcal{B}(\mathbb{R})$ denote the Borel $\sigma$-field of the real line,
and let $\{t_{i}\}_{i \in\mathbb{Z}}$ be a sequence of real-valued random variables such that $t_{i+1}> t_{i}$ and $0 \leq t_1$.
Here, time $t=0$ is a reference point in time, e.g., the start of an experiment.
We define a simple point process $N$ on $\mathbb{R}$ as a family $\{N(A) \}_{A \in \mathcal{B}(\mathbb{R})}$ that takes on non-negative integer values such that the sequence $\{t_{i}\}_{i \in\mathbb{Z}}$ consists of event times of the process $N$, i.e., $N(A) = \sum_i \mathds{1}_{[ t_{i} \in A]}$ for $A \in \mathcal{B}(\mathbb{R})$.
We write ${N}\big([t,t+\mathrm{d} t) \big)$ as $\mathrm{d} N(t)$, where $\mathrm{d} t$ denotes an arbitrarily small increment of $t$.
Now suppose that $N$ is a \emph{marked} point process, in the sense that each event time $t_i$ is associated with a mark $m_i \in \{1,\ldots, p\}$ \citep[see e.g., Definition 6.4.I. in][]{daley2003}.
With a slight abuse of notation, we can then view $N$ as a multivariate point process, $\bm{N} \equiv \big( N_j\big)_{j=1,\ldots, p}$,
for which the $j$th component process, $N_j$, satisfies $N_j(A) = \sum_i \mathds{1}_{[ t_{i} \in A, m_i = j]}$ for $A \in \mathcal{B}(\mathbb{R})$.
To simplify the notation, in what follows, we will let $\{t_{j,1},t_{j,2},\ldots\}$ denote the event times of $N_j$.
Let $\mathcal{H}_t$ denote the history of $\bm{N}$ up to time $t$.
The \emph{intensity process} $\bm{\lambda}(t) =\big( \lambda_1(t), \ldots, \lambda_p(t) \big)^{{ \mathrm{\scriptscriptstyle T} }}$ is a $p$-variate $\mathcal{H}_t$-predictable process, defined as
\begin{equation}\label{eqn::intensity_definition}
\lambda_j(t) \mathrm{d} t =\mathbb{P}( \mathrm{d} N_j(t) =1 \mid \mathcal{H}_{t}), \ j=1,\ldots, p.
\end{equation}
\subsection{A Brief Overview of the Hawkes Process}\label{sec::hawkes_rev}
For the Hawkes process \citep{hawkes1971}, the intensity function \eqref{eqn::intensity_definition} takes the form
\begin{equation}\label{eqn::HP_intensity_general}
\lambda_{j}(t)= \phi_j \left\{\mu_{j} + \sum_{k=1}^p \big( {\omega}_{k,j} \ast \mathrm{d} {N}_k \big) (t) \right\}, \;\;\; j=1,\ldots,p,
\end{equation}
where
$$\big( {\omega}_{k,j} \ast \mathrm{d} {N}_k \big) (t) = \int_0^{\infty} \omega_{k,j}(\Delta) \mathrm{d} N_k(t-\Delta) = \sum_{i:t_{k,i} \leq t} \omega_{k,j}(t-t_{k,i}). $$
We refer to $\mu_{j} \in \mathbb{R}$ as the \textit{background intensity}, and $\omega_{k,j}(\cdot): \mathbb{R}^{+} \mapsto \mathbb{R}$ as the \textit{transfer function}.
If the \emph{link function} $\phi_j$ on the right-hand side of \eqref{eqn::HP_intensity_general} is non-linear, then $\lambda_j(t)$ is the intensity of a non-linear Hawkes process \citep{bremaud1996}.
We will refer to the class of Hawkes processes that allows for non-linear link functions and negative transfer functions as the \emph{generalized Hawkes process}.
In this paper, we assume that we observe the event times of a \emph{stationary} process $\mathrm{d}\bm{N}$ on $[0, T]$, whose intensities follow \eqref{eqn::HP_intensity_general}.
The existence of a stationary process is guaranteed by the following assumption.
\begin{assumption}
We assume that $\phi_j(\cdot)$ is $\alpha_j$-Lipschitz for $j=1,\ldots, p$.
Let $\bm{\Omega}$ be a $p \times p$ matrix whose entries are $\Omega_{j,k} =\alpha_j \int_0^{\infty} |\omega_{k,j}(\Delta)| \, \mathrm{d}\Delta$ for $1 \leq j,k \leq p$. We assume that there exists a generic constant $\gamma_{\Omega}$ such that $\Gamma_{\max}(\bm{\Omega})\leq \gamma_{\Omega} < 1$. \label{asmp::spectralradius}
\end{assumption}
Note that in Assumption~\ref{asmp::spectralradius}, the constant $\gamma_{\Omega}$ does not depend on the dimension $p$.
For any fixed $p$, \cite{bremaud1996} establish that the intensity process of the form \eqref{eqn::HP_intensity_general} is stable in distribution, and thus a stationary process $\mathrm{d}\bm{N}$ exists given Assumption~\ref{asmp::spectralradius}.
We refer interested readers to \cite{bremaud1996} for a rigorous discussion of stability for the Hawkes process.
We define the \emph{mean intensity} $\bm{\Lambda}=(\Lambda_1, \ldots, \Lambda_p)^{{ \mathrm{\scriptscriptstyle T} }} \in \mathbb{R}^p$ as
\begin{equation}\label{eqn::Lambda}
{\Lambda}_j \equiv \mathbb{E} [ \mathrm{d} {N}_j(t)] /\mathrm{d} t, \ j=1,\ldots, p.
\end{equation}
Following Equation 5 of \cite{hawkes1971}, we define
the \emph{(infinitesimal) cross-covariance} $\bm{V}(\cdot) =\big( V_{k,j}(\cdot) \big)_{p \times p}: \mathbb{R} \mapsto \mathbb{R}^{p \times p}$ as
\begin{equation}\label{eqn::V}
{V}_{k,j}(\Delta ) \equiv \begin{cases}
\mathbb{E}[\mathrm{d} N_j(t) \mathrm{d} N_{k}(t-\Delta )] /\{\mathrm{d} t \mathrm{d} (t-\Delta)\} - \Lambda_j \Lambda_k & j \neq k\\
\mathbb{E}[\mathrm{d} N_k(t) \mathrm{d} N_{k}(t-\Delta )] /\{\mathrm{d} t \mathrm{d} (t-\Delta)\} - \Lambda_k^2 - \Lambda_k \delta(\Delta) & j = k\\
\end{cases},
\end{equation}
for any $\Delta \in \mathbb{R} $, and $1\leq j,k \leq p$.
Here $\delta(\cdot)$ is the Dirac delta function, which satisfies $\delta(x)=0$ for $x \neq 0$ and $\int_{-\infty}^{\infty}\delta(x) \mathrm{d} x =1$.
\begin{example} (Linear Hawkes processes)
Suppose that $\phi(x)=x$ and $\omega_{k,j}(\Delta) \geq 0$ for all $\Delta \in \mathbb{R}^{+}$ and for all $j,k \in \{1,\ldots, p\}$.
The intensity in \eqref{eqn::HP_intensity_general} takes the form
\begin{equation}\label{eqn::HP_intensity}
\lambda_{j}(t)= \mu_{j} + \sum_{k=1}^p \big( {\omega}_{k,j} \ast \mathrm{d} {N}_k \big) (t), \;\;\; j=1,\ldots,p.
\end{equation}
This is known as the linear Hawkes process \citep{hawkes1971,bremaud1996,hansen2015}. If the Hawkes process defined in \eqref{eqn::HP_intensity} is stationary, then the following relationships hold between the mean intensity $\bm{\Lambda}$ \eqref{eqn::Lambda}, the cross-covariance $\bm{V}$ \eqref{eqn::V}, the background intensity $\bm{\mu}\equiv (\mu_1,\ldots, \mu_p)^{{ \mathrm{\scriptscriptstyle T} }}$ \eqref{eqn::HP_intensity}, and the transfer functions $\bm{\omega} \equiv ( \omega_{k,j} )_{p \times p}$ \eqref{eqn::HP_intensity} (see, e.g., Equations 21 and 22 in \citealp{hawkes1971} or Theorem~1 in \citealp{bacry2014}):
\begin{equation}\label{eqn::wheq_I}
\bm{\Lambda}= \bm{\mu} + \left[ \int_0^{\infty} \bm{\omega} (\Delta) \mathrm{d} \Delta \right] \bm{\Lambda},
\end{equation}
and
\begin{equation}\label{eqn::wheq_II}
\bm{V}(\Delta)= \bm{\omega}(\Delta) {\rm diag}(\bm{\Lambda} ) + (\bm{\omega} * \bm{V})( \Delta),
\end{equation}
where $[\bm{\omega} * \bm{V}]_{k,j} (\Delta) \equiv \sum_{i=1}^p [\omega_{k,i}*V_{i,j}] (\Delta)$.
Equation \eqref{eqn::wheq_II} belongs to a class of integral equations known as the Wiener-Hopf integral equations.
For the linear Hawkes process, these equations are often used to learn the transfer functions $\bm{\omega}$ by plugging in estimators of $\widehat{\bm{\Lambda}}$ and $\widehat{\bm{V}}$ (see, e.g., \citealp{bacry2014}, \citealp{krumin2010}).
These equations are similar to the Yule-Walker equations in the vector-auto regression model \citep{yule1927,walker1931}.
\end{example}
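\noindent As a quick illustration of \eqref{eqn::wheq_I}, in the univariate case ($p=1$) with $\int_0^{\infty} \omega(\Delta)\, \mathrm{d}\Delta < 1$ (Assumption~\ref{asmp::spectralradius} with a linear link), the stationary mean intensity is
\[
\Lambda = \frac{\mu}{1 - \int_0^{\infty} \omega(\Delta) \, \mathrm{d} \Delta}.
\]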
\begin{remark}
An assumption similar to Assumption~\ref{asmp::spectralradius}, known as walk summability, was proposed by \cite{anandkumar2012} in the context of the Gaussian graphical model.
Consider a linear Hawkes process \eqref{eqn::HP_intensity} with $\omega_{k,j}(\Delta) \geq 0$ for all $\Delta$, so that $\bm{\Omega} = \int_0^{\infty} \bm{\omega}(\Delta) \mathrm{d} \Delta$.
Under Assumption~\ref{asmp::spectralradius}, we can rewrite \eqref{eqn::wheq_I} as
\begin{equation}\label{eqn::infinity_Lambda}
\bm{\Lambda} = \sum_{i=0}^{\infty} \bm{\Omega}^i \bm{\mu},
\end{equation}
where $\bm{\Omega}^i$ is the $i$th power of the matrix $\bm{\Omega}$.
In \eqref{eqn::infinity_Lambda}, $\bm{\Omega}^i \bm{\mu}$ can be seen as the intensity induced through paths of length $i$.
Assumption~\ref{asmp::spectralradius} ensures that the induced intensity decreases exponentially fast as the path length $i$ grows, i.e., $\|\bm{\Omega}^i \bm{\mu}\|_2 \leq \gamma_{\Omega}^i \|\bm{\mu}\|_2$.
Viewed another way, the equality in \eqref{eqn::infinity_Lambda} says that the mean intensity is a sum of induced intensities through paths of all possible lengths,
and Assumption~\ref{asmp::spectralradius} prevents this sum from diverging.
\end{remark}
\section{A New Approach for Analyzing the Hawkes Process}\label{sec::conc}
In this section, we present a new approach for analyzing the statistical properties of the Hawkes process, without assuming linearity of the link function $\phi_j$ or nonnegativity of the transfer function $\omega_{k,j}$ in \eqref{eqn::HP_intensity_general}.
We provide an overview of existing theoretical tools and a new approach for analyzing the Hawkes process in Section~\ref{sec::overview_framework}.
In Section~\ref{sec::coupling_framework}, we construct a coupling process using the \emph{thinning process representation}.
In Section~\ref{sec::highd}, we present a bound on the \emph{weak dependence coefficient} for the Hawkes process using the coupling technique \citep{dedecker2004}, and present a new concentration inequality for the Hawkes process in the high-dimensional regime.
\subsection{Overview}\label{sec::overview_framework}
From a theoretical standpoint, the most challenging characteristic of a Hawkes process $\bm{N}$ is the inherent \emph{temporal dependence} in the intensity \eqref{eqn::HP_intensity_general}.
To be specific, the realization of $\bm{N}$ on any given time period $[t_1, t_2)$ depends on the realization of $\bm{N}$ on the previous time period $(-\infty,t_1)$.
As a result, it is challenging to quantify the amount of information available in the observed realization of a Hawkes process on any period $[0,T]$.
Most existing theoretical analyses of the Hawkes process rely on the cluster process representation proposed by \cite{hawkes1974}.
The basic idea behind this representation is simple: for the linear Hawkes process \eqref{eqn::HP_intensity}, when the transfer functions are non-negative, i.e., $\omega_{k,j}(\cdot) \geq 0$ for $1\leq j, k \leq p$, the process $\bm{N}$ can be represented as a sum of independent processes, or \emph{clusters}.
Each cluster has intensity of the form \eqref{eqn::HP_intensity}, but with background intensity set to zero.
Properties of the Hawkes process can then be investigated by studying the properties of independent clusters.
See, among others, \cite{reynaud2007} and \cite{hansen2015} for recent applications of the cluster process representation.
Unfortunately, the cluster process representation is no longer available for the generalized Hawkes process \eqref{eqn::HP_intensity_general} in which $\phi_j(\cdot)$ may be non-linear and $\omega_{k,j}(\cdot)$ may be negative.
To see this, note that a single cluster cannot model inhibition by itself, since its intensity is lower-bounded by zero.
Moreover, independence across clusters implies that events in one cluster cannot affect the behavior of other clusters, which prohibits inhibition across clusters.
Finally, the cluster process representation treats $\bm{N}$ as the \emph{summation} of clusters, which can only model an additive increase in the intensity, i.e., an intensity of the form \eqref{eqn::HP_intensity}.
The lack of available techniques to study the Hawkes process in the absence of linearity or non-negativity assumptions constitutes a significant gap between theory and applications of the Hawkes processes.
As an example, networks of neurons are known to have both excitatory and inhibitory connections \citep{van1996chaos, vogels2011inhibitory}.
Similarly, it is unrealistic for neurons to have unbounded firing rates, which is possible for the linear Hawkes process.
In fact, almost all applications of the Hawkes process to neuronal spike train data use a non-linear link function and avoid constraints on the signs of the transfer functions \citep{pillow2008,quinn2010,mishchenko2011,vidne2012,song2013}.
To bridge the gap between the theory and emerging applications of the Hawkes process, we present a new approach to study theoretical properties of the Hawkes process without assuming that link functions are linear or that transfer functions are non-negative.
The key idea of our new approach is to represent the generalized Hawkes process with intensity \eqref{eqn::HP_intensity_general} using the thinning process representation (see e.g., \citealt{ogata1981,bremaud1996}), which was first introduced by
\cite{ogata1981} in order to simulate data from the Hawkes process.
The thinning process representation has a clear advantage over the cluster process representation in that the former does not require the transfer functions to be non-negative or the link function to be linear.
However, the thinning process representation has not been put into full use for the Hawkes process.
In what follows, we will show that the thinning process representation can be used, in conjunction with a coupling result of \citet{dedecker2004}, to bound the temporal dependence of generalized Hawkes processes, without assuming linearity of ${\phi}_j(\cdot)$ or non-negativity of ${\omega}_{k,j}(\cdot)$.
\subsection{Coupling Process Construction using Thinning}\label{sec::coupling_framework}
To make the discussion more concrete, we consider the task of establishing a concentration inequality for
\begin{equation}\label{eqn::ybar}
\bar{y}_{k,j} \equiv \frac{1}{T} \int_0^T \int_0^T f(t-t') \, \mathrm{d} N_k(t) \, \mathrm{d} N_j(t'), \quad 1\leq j, k\leq p,
\end{equation}
where $f(\cdot) $ is a known function with properties to be specified later.
Quantities of the form~\eqref{eqn::ybar} appear in many areas of statistics, such as regression analysis, cluster analysis, and principal components analysis.
Concentration inequalities for~\eqref{eqn::ybar} thus provide the foundation for the theoretical analysis of these methods.
Let
\begin{equation}\label{eqn::y_kji}
{y}_{k,j,i} \equiv \frac{1}{ 2\epsilon } \int_{2\epsilon (i-1)}^{2\epsilon i} \int_0^T f(t-t') \, \mathrm{d} N_{k}(t) \, \mathrm{d} N_j(t'),
\end{equation}
where $\epsilon$ is some small constant.
For simplicity, assume that $T/(2\epsilon)$ is an integer.
Then, $\bar{y}_{k,j}$ can be intuitively seen as the average of the sequence $\{{y}_{k,j,i} \}_{i=1}^{T/(2\epsilon)}$.
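Indeed, splitting the outer integral in \eqref{eqn::ybar} over the $T/(2\epsilon)$ consecutive blocks of length $2\epsilon$ gives
\[
\bar{y}_{k,j} = \frac{2\epsilon}{T} \sum_{i=1}^{T/(2\epsilon)} {y}_{k,j,i},
\]
so $\bar{y}_{k,j}$ is exactly the empirical mean of the blockwise statistics \eqref{eqn::y_kji}.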
Due to the nature of the Hawkes process, it is clear that the sequence $\{{y}_{k,j,i} \}_{i=1}^{T/(2\epsilon)}$ is inter-dependent, meaning that elements in the sequence depend on each other.
As a result, standard concentration inequalities that require independence do not apply to $\bar{y}_{k,j}$.
Moreover, the Hawkes process is not a Markov process (outside of some special cases, e.g., a linear Hawkes process with exponential transfer functions).
Thus, concentration inequalities for Markov processes do not apply to $\bar{y}_{k,j}$ either.
Existing concentration inequalities for $|\bar{y}_{k,j} - \mathbb{E}\bar{y}_{k,j}|$ (see, e.g., Proposition~5 in \cite{reynaud2010}, Proposition~3 in \cite{hansen2015}) rely heavily on the cluster process representation \citep{hawkes1974,reynaud2007}, which, as discussed earlier, is not applicable due to the non-linearity of $\phi_j$ and the possibility of $\omega_{k,j}$ taking negative values.
To develop a concentration inequality for $\bar{y}_{k,j}$, we bound the temporal dependence of the sequence $\{{y}_{k,j,i} \}_{i=1}^{T/(2\epsilon)}$ using the thinning process representation, combined with the coupling result of \citet{dedecker2004}.
First, we need to choose a measure of dependence for $\{{y}_{k,j,i} \}_{i=1}^{T/(2\epsilon)}$; see, e.g., the comprehensive survey by \cite{bradley2005}.
We consider the $\tau$-dependence coefficient \citep{dedecker2004},
\begin{equation}\label{eqn::tau_definition}
\tau(\mathcal{M},X) \equiv \mathbb{E} \left[ \sup_{h} \left\{ \left| \int h(x) \mathbb{P}_{X\mid \mathcal{M}} (\mathrm{d} x) - \int h(x) \mathbb{P}_{X}( \mathrm{d} x) \right| \right\} \right],
\end{equation}
where the supremum is taken over all $1$-Lipschitz functions $h: \mathbb{R} \mapsto \mathbb{R}$, $X$ is a random variable, and $\mathcal{M}$ is a $\sigma$-field.
Here, $\mathbb{P}_{X\mid \mathcal{M}}$ denotes the probability measure of $X$ conditioned on $\mathcal{M}$.
We now introduce a coupling lemma from \cite{dedecker2004}.
\begin{lemma}\label{lmm::tau_original}(Lemma 3 in \cite{dedecker2004})
Let $X$ be an integrable random variable and $\mathcal{M}$ a $\sigma$-field defined on the same probability space.
If the random variable $Y$ has the same distribution as $X$, and is independent of $\mathcal{M}$, then
\begin{equation}\label{eqn::tau_coupling}
\tau(\mathcal{M},X) \leq \mathbb{E} |X-Y|.
\end{equation}
\end{lemma}
Lemma~\ref{lmm::tau_original} provides a practical approach for bounding the $\tau$-dependence coefficient: one can obtain an upper bound for $\tau(\mathcal{M},X)$ by constructing a coupling random variable $Y$, and evaluating $\mathbb{E} |X-Y|$.
Equation~\ref{eqn::tau_definition} defines a measure of dependence between a random variable and a $\sigma$-field.
For a sequence $\{{y}_{k,j,i} \}_{i=1}^{T/(2\epsilon)}$, the temporal dependence is defined as (see, e.g., \citealp{merlevede2011})
\begin{equation}\label{eqn::tau_sequence}
\tau_{y}(l) \equiv \sup_{u\in\{1,2,\ldots\}} \tau\big( \mathcal{H}_{u}^{y}, y_{k,j,u+l} \big),
\end{equation}
for any positive integer $l$, where $\mathcal{H}_{u}^{y}$ is the $\sigma$-field determined by $\{{y}_{k,j,i} \}_{i=1}^{u}$, and the supremum in \eqref{eqn::tau_sequence} is taken over all positive integers.
In words, for any time gap $l$, the temporal dependence of a sequence is defined as the maximum dependence between any elements in the sequence and the history of the sequence $l$ steps ago.
As a direct result of Lemma~\ref{lmm::tau_original}, we obtain the following coupling result for the temporal dependence \eqref{eqn::tau_sequence}:
\begin{equation}\label{eqn::tau_coupling_sequence}
\tau_{y}(l) \leq \sup_{u} \mathbb{E} \big|\tilde{y}_{k,j,u+l}^u - y_{k,j,u+l}\big|,
\end{equation}
where $\{\tilde{y}_{k,j,i}^u \}_{i=1}^{T/(2\epsilon)}$ is a sequence satisfying the conditions of Lemma~\ref{lmm::tau_original}, i.e., $\{\tilde{y}_{k,j,i}^u \}_{i=1}^{T/(2\epsilon)}$ has the same distribution as $\{{y}_{k,j,i} \}_{i=1}^{T/(2\epsilon)}$, and is independent of $\{{y}_{k,j,i} \}_{i=1}^{u}$.
Given the definition of temporal dependence \eqref{eqn::tau_sequence} and the coupling result \eqref{eqn::tau_coupling_sequence}, it remains to construct a coupling sequence $\{\tilde{y}_{k,j,i}^u \}_{i=1}^{T/(2\epsilon)}$ and to bound the right-hand side of \eqref{eqn::tau_coupling_sequence}.
Recalling that $y_{k,j,i}$ is of the form \eqref{eqn::y_kji}, the problem reduces to finding a coupling process $\widetilde{\bm{N}}^z$ and bounding the first- and second-order deviations between $\mathrm{d} \widetilde{\bm{N}}^z$ and $\mathrm{d} \bm{N}$, where $z \equiv 2\epsilon u$.
In what follows, we will discuss the construction of $\mathrm{d} \widetilde{\bm{N}}^z$ for a fixed $z$, and thus suppress the superscript $z$ for simplicity of notation.
We use the thinning process representation of the Hawkes process to construct the coupling process $\mathrm{d} \widetilde{\bm{N}}$.
To begin, we review the iterative construction strategy proposed by \cite{bremaud1996}.
Let ${N}^{(0)}_j$, for $j=1,\ldots, p$, be a homogeneous Poisson process on $\mathbb{R}^2$ with intensity $1$. For $n=1$, we construct a $p$-variate process $\bm{N}^{(1)}$ as
\begin{equation}\label{eqn::iterative_initial_main}
\mathrm{d} N^{(1)}_j(t) = {N}^{(0)}_j\big( [0, \mu_j] \times \mathrm{d} t \big) \quad j=1,\ldots, p,
\end{equation}
where ${N}^{(0)}_j\big([0, \mu_j] \times \mathrm{d} t \big)$ is the number of points for ${N}^{(0)}_j$ in the area $[0, \mu_j] \times [t, t+\mathrm{d} t)$.
For $n \geq 2$, we construct $\bm{N}^{(n)}$ as
\begin{equation}\label{eqn::iterative_construction_main}
\begin{aligned}
{\lambda}_j^{(n)}(t) & = \phi_j\big\{ {\mu}_j + \big( \bm{\omega}_{\cdot,j} * \mathrm{d} \bm{N}^{(n-1)} \big)(t) \big\} \\
\mathrm{d} N^{(n)}_j(t) & = {N}^{(0)}_j\big( [0, \lambda_j^{(n)}(t)] \times \mathrm{d} t \big), \quad j=1,\ldots, p.
\end{aligned}
\end{equation}
\cite{bremaud1996} show that, under Assumption~\ref{asmp::spectralradius}, the sequence $\{ \bm{N}^{(n)} \}_{n=1}^{\infty}$ converges to the Hawkes process $\bm{N}$ with intensity~\eqref{eqn::HP_intensity_general}.
In a sense, \eqref{eqn::iterative_initial_main} and \eqref{eqn::iterative_construction_main} define the generalized Hawkes process for an arbitrary intensity function \eqref{eqn::HP_intensity_general}.
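To make the iterative construction concrete, the following Python snippet gives a minimal, discretized sketch of it. The model ingredients (a bivariate process, a softplus link, exponentially decaying transfer functions, the bin width, and the bound \texttt{lam\_max} on the intensity) are illustrative assumptions rather than choices made in the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model ingredients (assumptions, not the paper's choices).
p = 2
mu = np.array([0.5, 0.5])

def phi(x):
    # a positive, 1-Lipschitz link function (softplus)
    return np.log1p(np.exp(x))

def omega(k, j, delta):
    # transfer function omega_{k,j}: effect of an event of process k on the
    # intensity of process j; self-inhibition and cross-excitation here
    sign = -1.0 if k == j else 1.0
    return sign * 0.3 * np.exp(-2.0 * np.asarray(delta))

T, dt = 20.0, 0.02
grid = np.arange(0.0, T, dt)
M = grid.size

# Driving randomness: the unit-rate Poisson process N^(0)_j on R^2 is stored,
# per time bin, as Poisson(lam_max*dt) points with uniform heights in
# [0, lam_max]; N^(0)_j([0, lam] x bin) is the number of heights <= lam.
lam_max = 10.0
marks = [[rng.uniform(0.0, lam_max, rng.poisson(lam_max * dt)) for _ in range(M)]
         for _ in range(p)]

def thin(j, lam):
    """Bin counts N^(0)_j([0, lam(t)] x [t, t+dt)) for an intensity path lam."""
    return np.array([np.sum(marks[j][i] <= lam[i]) for i in range(M)], dtype=float)

def intensity(j, counts, i, t):
    """phi_j( mu_j + sum_k int omega_{k,j}(t - s) dN_k(s) ), discretized."""
    past = grid[:i]
    drive = sum(np.dot(omega(k, j, t - past), counts[k, :i]) for k in range(p))
    return min(phi(mu[j] + drive), lam_max)  # lam_max assumed never to bind

# n = 1: thin with the constant spontaneous rates mu_j.
counts = np.array([thin(j, np.full(M, mu[j])) for j in range(p)])
# n >= 2: thin with the intensity computed from the previous iterate.
for n in range(2, 8):
    counts = np.array([thin(j, np.array([intensity(j, counts, i, t)
                                         for i, t in enumerate(grid)]))
                       for j in range(p)])

print("events per component for the iterate N^(7):", counts.sum(axis=1))
\end{verbatim}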
In order to construct a coupling process, we modify the iterative construction strategy, \eqref{eqn::iterative_initial_main} and \eqref{eqn::iterative_construction_main}, as follows.
Let $\widetilde{\bm{N}}^{(0)}$ also be a $p$-variate homogeneous Poisson process with each component process defined on $\mathbb{R}^2$ with intensity $1$, but independent of $\bm{N}^{(0)}$.
For $j=1,\ldots, p,$ define the process $\widetilde{\bm{N}}^{(1)}$ as
\begin{equation}\label{eqn::iterative_initial_tilde_main}
\mathrm{d} \widetilde{N}_j^{(1)}(t) =
\begin{cases}
\widetilde{N}^{(0)}_j\big( [0, \mu_j] \times \mathrm{d} t \big) & t \leq z \\
{N}^{(0)}_j\big( [0, \mu_j] \times \mathrm{d} t \big) & t > z
\end{cases}.
\end{equation}
For $n \geq 2$, we construct $\widetilde{\bm{N}}^{(n)}$ as
\begin{equation}\label{eqn::iterative_construction_tilde_main}
\begin{aligned}
\widetilde{\lambda}^{(n)}_j(t) & = \phi_j\big\{\mu_j + \big( \bm{\omega}_{\cdot,j} * \mathrm{d} \widetilde{\bm{N}}^{(n-1)} \big)(t)\big\} \\
\mathrm{d} \widetilde{N}^{(n)}_j(t) & =
\begin{cases}
\widetilde{N}_j^{(0)}( [0, \widetilde{\lambda}^{(n)}_j(t)] \times \mathrm{d} t) & t \leq z \\
N_j^{(0)}( [0, \widetilde{\lambda}^{(n)}_j(t)] \times \mathrm{d} t) & t > z
\end{cases}
\quad j=1,\ldots, p.
\end{aligned}
\end{equation}
The constructions of $\{\widetilde{\bm{N}}^{(n)}\}_{n=1}^{\infty}$ and $\{{\bm{N}}^{(n)}\}_{n=1}^{\infty}$ are almost identical, with the only difference being the use of $\widetilde{\bm{N}}^{(0)}$ or $\bm{N}^{(0)}$, which are identically distributed homogeneous Poisson processes.
Thus, from \cite{bremaud1996}, we know that $\{\widetilde{\bm{N}}^{(n)}\}_{n=1}^{\infty}$ converges to a process $\widetilde{\bm{N}}$ with intensity~\eqref{eqn::HP_intensity_general}.
As a result, the two processes $\widetilde{\bm{N}}$ and $\bm{N}$ are identically distributed.
Thus, to apply Lemma~\ref{lmm::tau_original}, we only need to verify that $\widetilde{\bm{N}}$ is independent of $\mathcal{H}_z$.
However, this is guaranteed by our construction, because the process ${\bm{N}}$ is determined by the homogeneous Poisson process $\bm{N}^{(0)}$ up to time $z$, which is independent of $\widetilde{\bm{N}}^{(0)}$ and also independent of $\mathrm{d} \bm{N}^{(0)}(t)$ for any $t > z$ due to properties of the homogeneous Poisson process.
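Continuing the sketch above (and reusing \texttt{rng}, \texttt{grid}, \texttt{M}, \texttt{p}, \texttt{dt}, \texttt{lam\_max}, \texttt{marks}, \texttt{thin}, and \texttt{intensity} from it), the coupled baseline randomness can be built by drawing fresh marks for bins with $t \leq z$ and reusing the original marks for $t > z$; again, this is only a hypothetical illustration.
\begin{verbatim}
# Continuing the sketch above: coupled baseline randomness with fresh,
# independent marks for bins with t <= z and the original marks for t > z.
z = 10.0
marks_tilde = [[rng.uniform(0.0, lam_max, rng.poisson(lam_max * dt))
                if grid[i] <= z else marks[j][i]
                for i in range(M)]
               for j in range(p)]

def thin_tilde(j, lam):
    """Bin counts of the coupled baseline on [0, lam(t)] x [t, t+dt)."""
    return np.array([np.sum(marks_tilde[j][i] <= lam[i]) for i in range(M)],
                    dtype=float)

# Re-running the iteration of the previous sketch with thin_tilde in place of
# thin yields a process with the same law as N that is, by construction,
# independent of the history of N up to time z.
\end{verbatim}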
The next theorem shows that the deviation between $\widetilde{\bm{N}}$ and ${\bm{N}}$ can be bounded.
\begin{theorem}\label{thm::coupling}
Let $\bm{N}$ be a Hawkes process with intensity of the form \eqref{eqn::HP_intensity_general} satisfying Assumption~\ref{asmp::spectralradius}.
For any given $z >0$, there exists a point process $\widetilde{\bm{N}}$, called the coupling process of $\bm{N}$, such that
\begin{enumerate}
\item[a)] $\widetilde{\bm{N}}$ has the same distribution as $ {\bm{N}}$;
\item[b)] $\widetilde{\bm{N}}$ is independent of the history of $\bm{N}$ up to time $z$ (i.e., $\mathcal{H}_{z}$).
\end{enumerate}
Moreover, let $b$ be a constant and define a matrix $\bm{\eta}(b)=\big( \eta_{j,k}(b) \big)_{p\times p}$ with $\eta_{j,k}(b) = \big[ \alpha_j \int_b^{\infty} |\omega_{k,j}(\Delta)| \mathrm{d} \Delta \big]$.
Then, for any $u\geq 0$,
\begin{align}
\small
\mathbb{E}\big| \mathrm{d} \widetilde{\bm{N}}(z+u) - \mathrm{d} {\bm{N}}(z+u) \big|/ \mathrm{d} u & \preceq 2 v_1\left\{ \bm{\Omega}^{\lfloor u/b +1 \rfloor} \bm{J} + \sum_{i=1}^{\lfloor u/b +1 \rfloor} \bm{\Omega}^{i-1} \bm{\eta}(b) \bm{J}\right\}, \label{eqn::bound_expectation}
\end{align}
\begin{equation}\label{eqn::bound_crosscovariance}
\begin{aligned}
\small
& {\mathbb{E}\big| \mathrm{d} \widetilde{\bm{N}}(t') \mathrm{d} \widetilde{\bm{N}}(z+u) - \mathrm{d} {\bm{N}}(t') \mathrm{d} {\bm{N}}(z+u) \big|}/\big({\mathrm{d} u \mathrm{d} t'} \big) \\
\preceq & 2 v_2 \left\{ \bm{\Omega}^{\lfloor u/b +2 \rfloor} + \sum_{i=1}^{\lfloor u/b +1 \rfloor} \bm{\Omega}^i \bm{\eta}(b) \right\}\bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} + 2 v_1^2 \left\{ \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} \big(\bm{\Omega}^{\lfloor u/b +1 \rfloor}\big)^{{ \mathrm{\scriptscriptstyle T} }} + \sum_{i=1}^{\lfloor u/b +1 \rfloor} \bm{\eta}(b) \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} \big(\bm{\Omega}^{i}\big)^{{ \mathrm{\scriptscriptstyle T} }}\right\},
\end{aligned}
\end{equation}
where $\preceq$ denotes element-wise inequality, and $v_1$ and $v_2$ are parameters that depend on $\bm{\Lambda}$, $\bm{V}$, and $\{\phi_j(\mu_j): j=1,\ldots,p\}$.
\end{theorem}
The proof of Theorem~\ref{thm::coupling} is given in Section~\ref{sec::coupling}.
In both \eqref{eqn::bound_expectation} and \eqref{eqn::bound_crosscovariance}, the bounds are decomposed into two parts: terms involving $\bm{\eta}(b)$ (e.g., the last term in \eqref{eqn::bound_expectation} and the last two terms in \eqref{eqn::bound_crosscovariance}), and terms that do not involve $\bm{\eta}(b)$.
By definition, $\bm{\eta}(b)$ characterizes the tail mass of $\omega_{k,j}, 1\leq j,k\leq p$, or, in other words, the long-term \emph{direct} effect of an event.
In contrast, $\bm{\Omega}^i$ captures the \emph{indirect} effect from the chain of events induced by an initial event.
Intuitively, this decomposition reveals that the temporal dependence of a Hawkes process is jointly regulated by the long-term direct effects of every event and the indirect ripple effects descended from these events.
\subsection{Main Results} \label{sec::highd}
We now use the coupling process constructed in Theorem~\ref{thm::coupling} to bound the weak dependence coefficient and establish a concentration inequality for $\bar y_{k,j}$.
To this end, we introduce two additional assumptions on the transfer functions $\omega_{k,j}(\cdot), 1\leq j,k \leq p$, which will allow us to obtain element-wise bounds for the terms on the right-hand sides of both \eqref{eqn::bound_expectation} and \eqref{eqn::bound_crosscovariance}.
To facilitate the analysis of the high-dimensional Hawkes process, we focus here on the setting where $p$ can grow.
First of all, the bounds in both \eqref{eqn::bound_expectation} and \eqref{eqn::bound_crosscovariance} involve residual terms related to the tail behaviour of the transfer functions $\omega_{k,j}(\cdot)$. Naturally, the temporal dependence of a Hawkes process depends on the shape of the transfer functions; e.g., a process has stronger temporal dependence if the effects of each event last longer.
Therefore, we need an assumption on the tails of the transfer functions.
\begin{assumption}\label{asmp::tail_highd}
There exists a constant $b_0$ such that, for $b\geq b_0$ and some $r>0$,
$$ \max_{j} \sum_{k=1}^p \int_{b}^{\infty} |\omega_{k,j}(\Delta)|\mathrm{d} \Delta \leq c_{1} \exp\big(-c_{2} b^{r}\big).$$
\end{assumption}
This assumption prevents the long-term memory from dragging on indefinitely as $p$ increases.
For instance, Assumption~\ref{asmp::tail_highd} is violated by a Hawkes process with transfer functions that equal zero except for $\omega_{k,k+1}(\Delta)= 0.5$ for $\Delta \in [k,k+1], k=1,\ldots,p-1$.
In this example, $\mathrm{d} N_{p}$ depends on $\mathrm{d} N_{p-1}$ at lags as large as $p$, so the memory of the process grows with $p$.
The bounds on temporal dependence in \eqref{eqn::bound_expectation} and \eqref{eqn::bound_crosscovariance}, although still valid, are no longer meaningful in this setting when $p$ is allowed to grow.
Secondly, in order to accommodate the high-dimensional regime, we further impose the following condition on the matrix $\bm{\Omega}$ defined in Assumption~\ref{asmp::spectralradius} of Section~\ref{sec::hawkes_rev}.
\begin{assumption}\label{asmp::uniform}
For all $j=1,\ldots,p$, there exists a positive constant $\rho_{\Omega}<1$ such that $\bm{\Omega}$ satisfies $\sum_{k=1}^p {\Omega}_{j,k} \leq \rho_{\Omega}$.
\end{assumption}
Applying Assumption~\ref{asmp::uniform} to \eqref{eqn::infinity_Lambda} for the linear Hawkes process (see Example~1) gives that, for each $j=1,\ldots,p$,
\begin{equation}\label{eqn::infinity_Lambda_j}
{\Lambda}_j = \mu_j + \bm{\Omega}_{j,\cdot} \cdot \left[\sum_{i=1}^{\infty} \bm{\Omega}^{i-1} \bm{\mu}\right] =\mu_j + \bm{\Omega}_{j,\cdot} \cdot\bm{\Lambda} \leq \mu_j + \rho_{\Omega} \max_{k} (\Lambda_k).
\end{equation}
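In particular, taking the maximum over $j$ on both sides of \eqref{eqn::infinity_Lambda_j} gives $\max_{k} \Lambda_k \leq \max_j \mu_j/(1-\rho_{\Omega})$, so the mean intensities remain uniformly bounded as $p$ grows, provided the spontaneous rates are.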
In a sense, Assumption~\ref{asmp::uniform} prevents the intensity from concentrating on a single process.
Assumption~\ref{asmp::uniform} can be replaced by assumptions on the structure of $\bm{\Omega}$ if the magnitude of each entry of $\bm{\Omega}$ is upper bounded; for instance, it can be shown that Assumption~\ref{asmp::uniform} holds with high probability when the support of $\bm{\Omega}$ corresponds to the adjacency matrix of a sparse Erd\H{o}s--R\'{e}nyi graph or a stochastic block model with a suitable bound on $\max_{j,k} \Omega_{j,k}$ \citep{anandkumar2012}.
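As a small illustration (not part of the paper's analysis), the following Python sketch checks the row-sum condition of Assumption~\ref{asmp::uniform} for a randomly generated $\bm{\Omega}$ whose support is a sparse Erd\H{o}s--R\'{e}nyi graph; the dimension, edge probability, and entry magnitudes are arbitrary assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Illustrative check of the row-sum condition for a sparse Erdos-Renyi support;
# p, the edge probability, and the entry bound are arbitrary choices.
p, edge_prob, w_max = 200, 5.0 / 200, 0.08   # about 5 expected edges per row
support = rng.random((p, p)) < edge_prob
Omega = np.where(support, rng.uniform(0.0, w_max, size=(p, p)), 0.0)

row_sums = Omega.sum(axis=1)
print("maximum row sum:", row_sums.max())    # typically well below 1 here
print("row-sum condition holds for this draw:", bool(row_sums.max() < 1.0))
\end{verbatim}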
With these additional assumptions, we arrive at the following bound for the temporal dependence coefficient.
\begin{theorem}\label{thm::dependence}
Suppose that $\bm{N}$ is a Hawkes process with intensity~\eqref{eqn::HP_intensity_general}, which satisfies Assumptions~\ref{asmp::spectralradius}--\ref{asmp::bounded}.
Suppose also that the function $f(\cdot)$ has bounded support and $ \|f\|_{\infty} \equiv \max_x |f(x)| \leq C_f$.
For any positive integer $l$, the $\tau$-dependence coefficient of $\{{y}_{j,k,i}\}_{i}$ introduced in \eqref{eqn::tau_sequence} satisfies
\begin{equation}\label{eqn::dependence_y}
\tau_y(l) \leq a_{5} \exp(-a_{6} l^{r/(r+1)}),
\end{equation}
where $r$ is introduced in Assumption~\ref{asmp::tail_highd}, and $a_{5}$ and $a_{6}$ are parameters that do not depend on $p$ and $T$.
\end{theorem}
In order to bound the deviation of $\bar{y}_{j,k}$ from its mean, we need to introduce an additional assumption.
\begin{assumption}\label{asmp::bounded}
Assume that one of the following two conditions holds.
\begin{itemize}
\item[a)] For all $j=1,\ldots,p$, the link functions $\phi_j(\cdot)$ in \eqref{eqn::HP_intensity_general} are upper-bounded by a positive constant $\phi_{\max}$.
\item[b)] In Assumption~\ref{asmp::tail_highd}, the constant $c_{1} =0$.
\end{itemize}
\end{assumption}
Assumption~\ref{asmp::bounded} guarantees that the event count for the Hawkes process $N_j(A)$ has an exponential tail for any bounded interval $A$, as stated in the following lemma.
\begin{lemma}\label{lmm::exponential_tail}
Suppose $\bm{N}$ is a Hawkes process with intensity~ \eqref{eqn::HP_intensity_general}, which satisfies Assumptions~\ref{asmp::spectralradius}~and~\ref{asmp::bounded}.
For any bounded interval $A$, it holds that, for $j=1,\ldots,p$, $P({{N}_j(A)}> n ) \leq \exp( 1- n/K)$ for some constant $K$.
\end{lemma}
The proof of Lemma~\ref{lmm::exponential_tail} is provided in Appendix~\ref{sec::proof_exp}.
We hypothesize that Assumption~\ref{asmp::bounded} can be further relaxed, but we leave this for future work.
Finally, we can establish the following concentration inequality for $\bar{y}_{j,k}$.
\begin{theorem}\label{thm::concentration_hawkes}
Suppose that $\bm{N}$ is a Hawkes process with intensity~\eqref{eqn::HP_intensity_general}, which satisfies Assumptions~\ref{asmp::spectralradius}--\ref{asmp::bounded}.
Suppose also that the function $f(\cdot)$ has bounded support and $ \|f\|_{\infty} \equiv \max_x |f(x)| \leq C_f$.
Then,
\begin{equation}\label{eqn::y_ci}
\mathbb{P}\left( \bigcup_{1 \leq j \leq k \leq p} \left[\left| \bar{y}_{k,j} - \mathbb{E} \bar{y}_{k,j} \right| \geq c_3 T^{-(2r+1)/(5r+2)} \right]\right) \leq c_4 p^2 T \exp( - c_5 T^{r/(5r+2)}),
\end{equation}
where $r$ is introduced in Assumption~\ref{asmp::tail_highd}, and $ c_3 , c_4 $ and $ c_5 $ are parameters that do not depend on $p$ and $T$.
\end{theorem}
The proof of Theorem~\ref{thm::concentration_hawkes} is given in Section~\ref{sec::proofs_conc}, and is rather straightforward given Theorem~\ref{thm::dependence} and Lemma~\ref{lmm::exponential_tail}.
Briefly, we verify that the conditions in \cite{merlevede2011} hold for the sequence $\{{y}_{k,j,i} \}_{i=1}^{T/(2\epsilon)}$.
We show that $y_{k,j,i}$ has an exponential tail of order $0.5$ using Lemma~\ref{lmm::exponential_tail}, and we show that the temporal dependence of the sequence $\{{y}_{k,j,i} \}_{i=1}^{T/(2\epsilon)}$ decreases exponentially fast as in Theorem~\ref{thm::dependence}.
Then, Equation \eqref{eqn::y_ci} is a direct result of Theorem~1 in \cite{merlevede2011}.
\begin{remark}\label{rmk::costa}
In concurrent work, \cite{costa2018} have proposed an alternative approach for analyzing the one-dimensional Hawkes process with inhibition. They take $\phi(x) \equiv \max(0,x)$ and a transfer function with bounded support and possibly negative values. Similar to our proposal, \cite{costa2018} employ the thinning process representation \citep{bremaud1996} to characterize the Hawkes process. However, the two proposals differ in the use of the thinning process representation. \cite{costa2018} construct a linear Hawkes process $N^{+}$ that dominates the original process $N$, which allows the use of the cluster process representation \citep{hawkes1974} and hence the existing theory on the linear Hawkes process. They further use $N^{+}$ to bound the renewal time of $N$, and establish limiting theorems using renewal techniques. In our proposal, we construct a coupling process $\widetilde{N}$ to bound the $\tau$-dependence coefficient of the generalized Hawkes process. This allows us to apply the theory of weakly dependent sequences (see, among others, Chapter~4 of \citealp{rio2017}) to the Hawkes process, and to obtain concentration inequalities for high-dimensional Hawkes processes with inhibition.
\end{remark}
\section{Application: Cross-Covariance Analysis of the Hawkes Process}\label{sec::cross-covariance}
\subsection{Theoretical Guarantees}\label{sec::cross-covariance_theory}
Cross-covariance analysis is widely used in the analysis of multivariate point process data.
For instance, many authors have proposed and studied estimation procedures for the transfer functions $\bm{\omega}$ based on estimates of the cross-covariance $\bm{V}$ \citep{brillinger1976, krumin2010, bacry2014, etesami2016}.
As another example, in neuroscience applications, it is common to cluster neurons based on the cross-covariances of their spike trains \citep{eldawlatly2009,feldt2010,muldoon2013,okun2015}.
The empirical studies in \cite{okun2015} show that the $j$th neuron can be viewed as a ``soloist'' or a ``chorister'' based on estimates of ${\Lambda}_j^{-1} \sum_{k\neq j} {V}_{k,j}(0)$ for $j=1,\ldots,p$.
However, estimators of the cross-covariance are not well-understood under practical assumptions. Existing theoretical studies often require multiple realizations of the same process (see e.g., \citealp{bacry2014}) and non-negativity of the transfer functions. These assumptions do not always hold for real world point process data, such as financial data or neural spike train data.
In this section, we demonstrate that Theorem~\ref{thm::concentration_hawkes} can be used to fill these gaps by providing a concentration inequality on smoothing estimators of the cross-covariance. As a concrete example, we consider the following smoothing estimator
\begin{equation}\label{eqn::screening}
\small
\widehat{V}_{k,j}(\Delta) = \begin{cases}
(T h)^{-1} \iint_{[0,T]^2} K \left( \frac{ (t' - t)+\Delta}{h} \right)\, \mathrm{d} N_j(t') \mathrm{d} N_{k}(t) - T^{-2} N_j([0,T]) N_{k}([0,T]) & j \neq k\\
(T h)^{-1}\iint_{[0,T]^2 \backslash \{t=t' \}} K \left( \frac{(t' - t)+\Delta}{h} \right)\, \mathrm{d} N_k(t') \mathrm{d} N_{k}(t) - T^{-2} N^2_k([0,T]) & j = k\\
\end{cases},
\end{equation}
where $K(\cdot)$ is a kernel function with bandwidth $h$ and bounded support.
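For concreteness, a minimal Python sketch of the estimator \eqref{eqn::screening} for $j \neq k$ is given below; the kernel, event times, horizon, and bandwidth are illustrative assumptions, not the settings used later in the simulations.
\begin{verbatim}
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: supported on [-1, 1] and integrating to one."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def cross_cov_hat(events_j, events_k, delta, T, h):
    """Smoothing estimator of V_{k,j}(delta) for j != k, following the display
    above; events_j and events_k are arrays of event times in [0, T]."""
    diffs = events_j[:, None] - events_k[None, :]      # all pairs t' - t
    smooth = epanechnikov((diffs + delta) / h).sum() / (T * h)
    return smooth - events_j.size * events_k.size / T ** 2

# toy usage: two independent homogeneous Poisson processes, for which the true
# cross-covariance is zero, so the estimates should be close to zero
rng = np.random.default_rng(2)
T, h = 500.0, 1.0
ev_j = np.sort(rng.uniform(0.0, T, rng.poisson(0.8 * T)))
ev_k = np.sort(rng.uniform(0.0, T, rng.poisson(0.8 * T)))
print([round(cross_cov_hat(ev_j, ev_k, d, T, h), 4) for d in (-2.0, 0.0, 2.0)])
\end{verbatim}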
Using Theorem~\ref{thm::concentration_hawkes}, we obtain the following concentration inequality for the smoothing estimator \eqref{eqn::screening}.
\begin{corollary}
\label{cor::cross-covariance}
Suppose that $\bm{N}$ is a Hawkes process with intensity~\eqref{eqn::HP_intensity_general}, which satisfies Assumptions~\ref{asmp::spectralradius}--\ref{asmp::bounded}.
Further assume that the cross-covariances $\{V_{k,j}, 1\leq j,k \leq p\}$ are $\theta_0$-Lipschitz functions.
Let $h= c_6 T^{-(r+0.5)/(5r+2)}$ in \eqref{eqn::screening} for some constant $ c_6 $.
Then,
$$\mathbb{P}\left( \bigcap_{1 \leq j \leq k \leq p} \left[ \big\|\widehat{V}_{k,j}-V_{k,j} \big\|_{2,[-B,B]} \leq c_6 T^{-\frac{r+0.5}{5r+2}} \right] \right) \geq 1-2 c_4 p^2 T^{\frac{6r+0.5}{5r+2}} \exp\left( - c_5 T^{\frac{r}{5r+2}} \right), $$
where $ \big\|\widehat{V}_{k,j}-V_{k,j} \big\|_{2,[-B,B]}^2\equiv \int_{-B}^{B} \big[\widehat{V}_{k,j}(\Delta)-V_{k,j}(\Delta)\big]^2 \mathrm{d} \Delta$ with $B$ a user-defined constant, $ c_4 $ and $ c_5 $ are constants introduced in Theorem~\ref{thm::concentration_hawkes}, and $ c_6 $ depends on $\theta_0$, $B$, and $ c_3 $ in Theorem~\ref{thm::concentration_hawkes}.
\end{corollary}
Corollary~\ref{cor::cross-covariance} provides a foundation for theoretical analysis of statistical procedures based on cross-covariances of the high-dimensional Hawkes process.
It is a direct result of Theorem~\ref{thm::concentration_hawkes}. Its proof, given in Section~\ref{sec::proof_cross-covariance}, involves careful verification that the smoothing estimator \eqref{eqn::screening} satisfies the conditions in Theorem~\ref{thm::concentration_hawkes}.
\subsection{Simulation Studies}\label{sec::simulation}
In this section, we verify the theoretical result on estimators of the cross-covariance presented in Section~\ref{sec::cross-covariance_theory}.
In all simulations, the intensity of the Hawkes process takes the form \eqref{eqn::HP_intensity_general} with $\phi_j(x)=\exp(x)/[1+\exp(x)]$, $\mu_j=1$, and $\omega_{k,j}(t) = a_{k,j}\gamma^{2} t\exp(-\gamma t)$ for all $1\leq j,k\leq p$. Here, the parameter $\gamma$ controls the tail behavior of the transfer functions, and the parameter $a_{k,j}$ controls the magnitude of each transfer function. In what follows, we will provide details of the simulation setup and verify that the assumptions for Corollary~\ref{cor::cross-covariance} are met.
We generate networks of neurons as connected block-diagonal graphs shown in Figure~\ref{fig::simulation}(a). Each block consists of four nodes, among which Node 1 excites Nodes 2 and 3, and Nodes 2 and 3 are mutually inhibitory. Nodes 2 and 3 excite Node 4. The last node in each block excites the first node in the next block (i.e., $a_{4i+4,4i+5}>0$ for $i=0,1,2,\ldots$). In addition, all nodes are self-inhibitory, i.e., $a_{j,j}<0$ for $j=1,\ldots, p$.
From the choice of $\omega_{k,j}$, we know that, for any pair $(j,k)$, $\Omega_{j,k}=\|\omega_{k,j}\|_1 = |a_{k,j}|$ and $\|\omega_{k,j}\|_{1,[b,\infty)}= |a_{k,j}| (1+\gamma b)\exp(-\gamma b)$. Thus, Assumption~\ref{asmp::tail_highd} is met with $r=1$.
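As a quick numerical sanity check of these norms (with arbitrary illustrative values of $a_{k,j}$ and $\gamma$, not the simulation's parameters):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Numerical check of the transfer-function norms stated above; the values of
# a and gamma are arbitrary illustrations.
a, gamma = -0.3, 2.0
w = lambda t: a * gamma ** 2 * t * np.exp(-gamma * t)

l1_norm, _ = quad(lambda t: abs(w(t)), 0.0, np.inf)
print(np.isclose(l1_norm, abs(a)))                     # ||w||_1 = |a|
for b in (0.5, 1.0, 2.0):
    tail, _ = quad(lambda t: abs(w(t)), b, np.inf)
    print(np.isclose(tail, abs(a) * (1.0 + gamma * b) * np.exp(-gamma * b)))
\end{verbatim}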
We set $a_{4i+2,4i+2}=a_{4i+3,4i+2}=a_{4i+3,4i+3}=a_{4i+2,4i+3}=a_{4i+4,4i+4}=-0.3$, and $a_{4i+2,4i+1}=a_{4i+3,4i+1}=a_{4i+4,4i+2}=a_{4i+4,4i+3}=0.3$ for the $i$th block, for $i=0,\ldots, p/4-1$. We set $a_{4i,4i+1}=0.45$ for $i=1,\ldots, p/4-1$.
We further set $a_{1,1}=-0.9$ and $a_{4i+1,4i+1}=-0.45$ for $i=1,\ldots, p/4-1$. Noting that the link function $\phi_j(\cdot)$ is 1-Lipschitz, we can verify that $\bm{\Omega} $ satisfies Assumptions~\ref{asmp::spectralradius} and \ref{asmp::uniform} with $\gamma_{\Omega}=0.9$ and $\rho_{\Omega}=0.9$. Finally, Assumption~\ref{asmp::bounded}(a) holds since $\phi_j(x) \leq \phi_{\max}=1$ for $j=1,\ldots, p$.
We consider three scenarios with $p\in\{ 20, 40, 80\}$. In each scenario, we simulate a Hawkes process with $T$ ranging from $20$ to $400$, using the thinning process \citep{ogata1988}. For each realization of the Hawkes process, we estimate the cross-covariance for all pairs of $1 \leq j,k \leq p$ using the estimator $\widehat{V}_{k,j}$ defined in \eqref{eqn::screening} with the Epanechnikov kernel and a bandwidth of $T^{-3/14}$ since $r=1$. Since the analytical form of the true cross-covariance $\bm{V}$ \eqref{eqn::V} is unknown, we approximate the true cross-covariance with $\widetilde{\bm{V}}$, which is the average of $500$ smoothing estimators on \emph{independent} realizations of the Hawkes process on $[0,200]$.
By the law of large numbers, the estimator $\widetilde{\bm{V}}$ is very close to the true value given the large number of independent samples, and we thus treat $\widetilde{\bm{V}}$ as the true cross-covariance in this numerical study.
Define the event $A$ as
\begin{equation} \label{eqn::event_A}
A\equiv\Big\{ \bigcap_{1\leq j,k \leq p} \left[ \|\widehat{V}_{k,j}-\tilde{V}_{k,j}\|_{2,[-B,B]} \leq c_6 T^{-3/14} \right]\Big\},
\end{equation}
where $B=10$ and $ c_6 =0.32$.
Corollary~\ref{cor::cross-covariance} states that the event \eqref{eqn::event_A} occurs with probability converging to unity as time $T$ increases.
We will verify this claim by counting the proportion of cases that $A$ holds in $200$ independent simulations.
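Schematically, the empirical probability of $A$ can be computed as follows, assuming the $L_2$ estimation errors for every simulation run and every pair $(k,j)$ have been precomputed and stored in an array; the array and its name are hypothetical.
\begin{verbatim}
import numpy as np

def empirical_prob_A(errors, T, c6=0.32, r=1.0):
    """Proportion of simulation runs in which every pair (k, j) satisfies
    ||Vhat_{k,j} - Vtilde_{k,j}||_{2,[-B,B]} <= c6 * T^{-(r+0.5)/(5r+2)}.
    `errors` has shape (n_sims, p, p) and is assumed to be precomputed."""
    threshold = c6 * T ** (-(r + 0.5) / (5 * r + 2))
    return np.mean(np.all(errors <= threshold, axis=(1, 2)))

# call with synthetic numbers, just to illustrate the shape of the computation
rng = np.random.default_rng(3)
fake_errors = rng.uniform(0.0, 0.4, size=(200, 20, 20))
print(empirical_prob_A(fake_errors, T=400.0))
\end{verbatim}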
Simulation results are shown in Figure~\ref{fig::simulation}(b).
Note that $ c_6 $ does not have an analytic form in Corollary~\ref{cor::cross-covariance}.
We evaluate the empirical probability of $A$ for a range of values of $ c_6 $, and choose $ c_6 =0.32$ to produce the curves in Figure~\ref{fig::simulation}.
Other choices yield similar results.
The $y$-axis displays the empirical probability of the event $A$ in \eqref{eqn::event_A} over $200$ simulations, and the $x$-axis displays the scaled time $T^{1/7}$. Corollary~\ref{cor::cross-covariance} claims that $\mathbb{P}(A)$ converges to unity exponentially fast as a function of $T^{1/7}$, as reflected in this plot. Furthermore, as $p$ increases, convergence slows down only slightly.
\begin{figure}
\caption{Simulation setup and results to verify the concentration inequality in Corollary~\ref{cor::cross-covariance}.}
\label{fig::simulation}
\end{figure}
\section{Proofs of Main Results}\label{sec::proofs_conc}
In this section, we prove the main theoretical results from Sections~\ref{sec::conc} and \ref{sec::cross-covariance}.
This section is organized as follows.
In Section~\ref{sec::lemmas}, we list six technical lemmas, and a theorem from \cite{merlevede2011} that are useful in the proofs.
In Section~\ref{sec::coupling}, we prove Theorem~\ref{thm::coupling}, which guarantees the existence of a coupling process $\widetilde{\bm{N}}$ constructed in Section~\ref{sec::conc}, and bounds on the first- and second-order differences between $\mathrm{d} \widetilde{\bm{N}}$ and $\mathrm{d} \bm{N}$.
In Section~\ref{sec::weakdependence_y}, we prove Theorem~\ref{thm::dependence}, which bounds the $\tau$-dependence coefficient for the sequence $\{y_{k,j,i}\}_{i=1}^{T/(2\epsilon)}$.
In Section~\ref{sec::proofs_conc_main}, we apply the result in \cite{merlevede2011} to prove Theorem~\ref{thm::concentration_hawkes}, which establishes a concentration inequality for $\bar{y}_{j,k}$ in \eqref{eqn::ybar}.
In Section~\ref{sec::proof_cross-covariance}, we prove the concentration inequality of cross-covariance estimators in Corollary~\ref{cor::cross-covariance}.
\subsection{Technical Lemmas}\label{sec::lemmas}
The first two lemmas bound the expected deviation between the limiting processes of the two sequences constructed in Equations~\ref{eqn::iterative_initial_main}~and~\ref{eqn::iterative_construction_main} and Equations~\ref{eqn::iterative_initial_tilde_main}~and~\ref{eqn::iterative_construction_tilde_main}, respectively.
Note that, under Assumption~\ref{asmp::spectralradius}, the existence of the limiting process is shown by \cite{bremaud1996}.
\begin{lemma}\label{lmm::bound_expectation}
Suppose that Assumption~\ref{asmp::spectralradius} holds for the intensity function \eqref{eqn::HP_intensity_general}.
Let $\bm{N}$ and $\widetilde{\bm{N}}$ be the limiting processes of the sequences $\{{\bm{N}}^{(n)}\}_{n=1}^{\infty}$ and $\{\widetilde{\bm{N}}^{(n)}\}_{n=1}^{\infty}$, respectively.
Let $b$ be a constant and define a matrix $\bm{\eta}(b)=\big( \eta_{j,k}(b) \big)_{p\times p}$ with $\eta_{j,k}(b) = \big[ \alpha_j \int_b^{\infty} |\omega_{k,j}(\Delta)| \mathrm{d} \Delta \big]$.
Then, for any $z>0$ and $u\geq 0$,
\begin{align*}
\small
\mathbb{E}\big| \mathrm{d} \widetilde{\bm{N}}(z+u) - \mathrm{d} {\bm{N}}(z+u) \big|/ \mathrm{d} u & \preceq 2 v_1\left\{ \bm{\Omega}^{\lfloor u/b +1 \rfloor} \bm{J} + \sum_{i=1}^{\lfloor u/b +1 \rfloor} \bm{\Omega}^{i-1} \bm{\eta}(b) \bm{J}\right\},
\end{align*}
where $\preceq$ denotes element-wise inequality, and $v_1$ is a parameter that depends on $\bm{\Lambda}$ and $\{\phi_j(\mu_j): j=1,\ldots,p\}$.
\end{lemma}
\begin{lemma}\label{lmm::bound_crosscovariance}
Suppose that Assumption~\ref{asmp::spectralradius} holds for the intensity function \eqref{eqn::HP_intensity_general}.
Let $\bm{N}$ and $\widetilde{\bm{N}}$ be the limiting processes of the sequences $\{{\bm{N}}^{(n)}\}_{n=1}^{\infty}$ and $\{\widetilde{\bm{N}}^{(n)}\}_{n=1}^{\infty}$, respectively.
Let $b$ be a constant and define a matrix $\bm{\eta}(b)=\big( \eta_{j,k}(b) \big)_{p\times p}$ with $\eta_{j,k}(b) = \big[ \alpha_j \int_b^{\infty} |\omega_{k,j}(\Delta)| \mathrm{d} \Delta \big]$.
Then, for any $z>0$ and $u\geq 0$,
\begin{equation*}
\begin{aligned}
\small
& {\mathbb{E}\big| \mathrm{d} \widetilde{\bm{N}}(t') \mathrm{d} \widetilde{\bm{N}}(z+u) - \mathrm{d} {\bm{N}}(t') \mathrm{d} {\bm{N}}(z+u) \big|}/\big({\mathrm{d} u \mathrm{d} t'} \big) \\
\preceq & 2 v_2 \left\{ \bm{\Omega}^{\lfloor u/b +2 \rfloor} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} + \sum_{i=1}^{\lfloor u/b +1 \rfloor} \bm{\Omega}^i \bm{\eta}(b) \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }}\right\} + 2 v_1^2 \left\{ \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} \big(\bm{\Omega}^{\lfloor u/b +1 \rfloor}\big)^{{ \mathrm{\scriptscriptstyle T} }} + \sum_{i=1}^{\lfloor u/b +1 \rfloor} \bm{\eta}(b) \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} \big(\bm{\Omega}^{i}\big)^{{ \mathrm{\scriptscriptstyle T} }}\right\},
\end{aligned}
\end{equation*}
where $\preceq$ denotes element-wise inequality, $v_1$ is introduced in Lemma~\ref{lmm::bound_expectation}, and $v_2$ is a parameter that depends on $\bm{\Lambda}$, $\bm{V}$, and $\{\phi_j(\mu_j): j=1,\ldots,p\}$.
\end{lemma}
Proofs of Lemmas~\ref{lmm::bound_expectation}~and~\ref{lmm::bound_crosscovariance} are provided in Appendices~\ref{sec::proof_expectation}~and~\ref{sec::proof_crosscovariance}, respectively.
The next two lemmas provide bounds on every entry in the left-hand sides of the inequalities in Lemmas~\ref{lmm::bound_expectation}~and~\ref{lmm::bound_crosscovariance}.
\begin{lemma}\label{lmm::elementwise_expectation}
Suppose $\bm{N}$ is a Hawkes process with intensity~\eqref{eqn::HP_intensity_general}, which satisfies Assumptions~\ref{asmp::spectralradius}--\ref{asmp::uniform}.
For a given $z >0$, consider the process $\widetilde{\bm{N}}$ constructed in Theorem~\ref{thm::coupling}. Then, for $j=1,\ldots, p$,
$$\mathbb{E}\big|\mathrm{d} \widetilde{{N}}_j(z+u) - \mathrm{d} {{N}}_j(z+u) \big|/\mathrm{d} u \leq a_{1} v\exp\big(-a_{2} u^{r/(r+1)}\big),$$
where $r$ is defined in Assumption~\ref{asmp::tail_highd}, $v=\max(v_1,v_1^2, v_2)$ in Theorem~\ref{thm::coupling}, and $a_{1}$ and $a_{2}$ are parameters that do not depend on $p$ and $T$.
\end{lemma}
\begin{lemma}\label{lmm::elementwise_crosscovariance}
Suppose $\bm{N}$ is a Hawkes process with intensity~\eqref{eqn::HP_intensity_general}, which satisfies Assumptions~\ref{asmp::spectralradius}--\ref{asmp::uniform}.
For a given $z >0$, consider the process $\widetilde{\bm{N}}$ constructed in Theorem~\ref{thm::coupling}. Then, for $1\leq j,k\leq p$ and $t' > z+u$,
$$ \mathbb{E}\big| \mathrm{d} \widetilde{{N}}_k(t') \mathrm{d} \widetilde{{N}}_j(z+u) - \mathrm{d} {{N}}_k(t') \mathrm{d} {{N}}_j(z+u) \big|/(\mathrm{d} t' \mathrm{d} u) \leq a_{3} v \exp\big(-a_{4} u^{r/(r+1)}\big), $$
where $r$ is defined in Assumption~\ref{asmp::tail_highd}, $v=\max(v_1,v_1^2, v_2)$ in Theorem~\ref{thm::coupling}, and $a_{3}$ and $a_{4}$ are parameters that do not depend on $p$ and $T$.
\end{lemma}
We provide the proofs in Appendix~\ref{sec::proof_elementwise_expectation} and Appendix~\ref{sec::proof_elementwise_crosscovariance}, respectively.
Lemma~\ref{lmm::poissontail} quantifies the tail behaviour of a Poisson random variable.
We state it without proof; it follows from the standard Chernoff-bound argument for the Poisson distribution.
\begin{lemma}\label{lmm::poissontail}
Let $x$ be a Poisson random variable with mean $m$. Then for any $n>0$,
\begin{equation}
\mathbb{P}(x - m \geq n) \leq \exp\big(- m- [\log(n/m)-1] n \big).
\end{equation}
\end{lemma}
The next lemma characterizes the tail of the product of two random variables.
\begin{lemma}\label{lmm::product}
Suppose that $Z_1$ and $Z_2$ satisfy
\begin{equation}
P(|Z_i|> n ) \leq \exp( 1- n/K_i), \quad i=1,2,
\end{equation}
for all $n \geq 0$.
Then, for any $n\geq 0$ and $K^*={K_1 K_2}(\log 2 + 1)$,
\begin{equation}
P(|Z_1 Z_2|> n ) \leq \exp\big( 1- (n/K^*)^{1/2}\big).
\end{equation}
\end{lemma}
The proof of Lemma~\ref{lmm::product} is provided in Appendix~\ref{sec::proof_product}.
The following theorem from \cite{merlevede2011} will be used in the proof of Theorem~\ref{thm::concentration_hawkes}.
\begin{theorem}\label{thm::thm_mpr}(Theorem~1 in \citet{merlevede2011}) Let $\{y_i\}_{i \in \mathbb{Z}}$ be
a sequence of real valued random variables and let $v_y$ be defined as
\begin{equation}\label{eqn::v_y}
v_y = \sup_{i >0} \left\{ \mathbb{E}\big[ (y_{i}-\mathbb{E} y_{i})^2 \big] + 2\sum_{l \geq 1} \mathbb{E}\big[(y_{i}-\mathbb{E}y_{i})(y_{i+l}-\mathbb{E}y_{i+l}) \big] \right\}.
\end{equation}
Assume that
\begin{equation} \label{eqn::tau_thm}
\tau_y(x) \leq a \exp(-c x^{\gamma_1} ) \ {\rm for \ any \ } x \geq 1,
\end{equation}
and
\begin{equation}\label{eqn::tail_thm}
\sup_{k > 0} \mathbb{P}(|y_k| >t) \leq \exp\big(1-(t/b)^{\gamma_2} \big).
\end{equation}
Further assume that $\gamma <1$ where $1/\gamma = 1/\gamma_1 + 1/\gamma_2$.
Then $v_y$ is finite and, for any $n\geq 4$, there exist positive constants $C_1$, $C_2$, $C_3$ and $C_4$ depending only on $a$, $b$, $c$, $\gamma$ and $\gamma_1$ such that, for any positive $x$,
\begin{equation}\label{eqn::concofy_thm}
\begin{aligned}
\mathbb{P}\left(\left| \frac{1}{n}\sum_{i=1}^{n} {y}_{i} - \mathbb{E} {y}_{i} \right| \geq x \right) \leq & n \exp\left( -\frac{ x^{\gamma}}{C_1} \right) +\exp\left(-\frac{x^2}{C_2(1+n v_y)} \right) \\
& +\exp\left[ -\frac{x^2 }{ C_3 n} \exp\left(\frac{x^{\gamma(1-\gamma)} }{C_4 \left[\log (x)\right]^{\gamma} } \right) \right].
\end{aligned}
\end{equation}
\end{theorem}
\subsection{Proof of Theorem~\ref{thm::coupling}}\label{sec::coupling}
Without loss of generality, we assume that the link function $\phi_j(\cdot)$ is $1$-Lipschitz, i.e., $\alpha_j=1$, for all $j$. This can be achieved by setting $\tilde{\phi}_j(x)\equiv \phi_j(x/\alpha_j)$, $\tilde{\mu}_j = \mu_j \alpha_j$, and $\tilde{\omega}_{k,j}(\Delta)=\alpha_j \omega_{k,j}(\Delta)$ for all $j,k$. Under this simplification, Assumption~\ref{asmp::spectralradius} means that ${\Omega}_{j,k} = \int_0^{\infty} |\omega_{k,j}(\Delta) |\mathrm{d} \Delta$ and $\Gamma_{\max}(\bm{\Omega})=\gamma_{\Omega}<1$.
As outlined in Section~\ref{sec::coupling_framework}, the first part of the proof involves constructing two sequences of point processes that converge to $\bm{N}$ and its coupling process $\widetilde{\bm{N}}$, respectively.
We have provided the construction in \eqref{eqn::iterative_construction_main} and \eqref{eqn::iterative_construction_tilde_main}.
Here we verify the two statements in Theorem~\ref{thm::coupling}: (a) $\widetilde{\bm{N}}$ has the same distribution as $ {\bm{N}}$, and (b) $\widetilde{\bm{N}}$ is independent of the history of $\bm{N}$ up to time $z$.
We notice that (a) in Theorem~\ref{thm::coupling} is a direct result of our construction.
\cite{bremaud1996} show that, under Assumption~\ref{asmp::spectralradius}, the sequence $\big\{ \bm{N}^{(n)} \big\}_{n=1}^{\infty}$ converges to a process that has the same distribution as $\bm{N}$, i.e., the limiting process has intensity \eqref{eqn::HP_intensity_general}.
In this proof, we will not distinguish between $\bm{N}$ and the limit of $\big\{ \bm{N}^{(n)} \big\}_{n=1}^{\infty}$.
We claim that the sequence $\big\{ \widetilde{\bm{N}}^{(n)} \big\}_{n=1}^{\infty}$ also converges to a process with intensity \eqref{eqn::HP_intensity_general}.
To see this, define a hybrid process $\widehat{{N}}_j^{(0)}(\mathrm{d} s \times \mathrm{d} t ) \equiv \mathds{1}_{[ t\leq z]} \widetilde{{N}}_j^{(0)} (\mathrm{d} s \times \mathrm{d} t ) + \mathds{1}_{[ t> z]} {{N}}_j^{(0)} (\mathrm{d} s \times \mathrm{d} t) $.
The process $\widehat{{N}}_j^{(0)}$ is a homogeneous Poisson process on $\mathbb{R}^2$ with intensity $1$.
This follows from the fact that, for any Borel set $A \in \mathcal{B}(\mathbb{R}^2)$, $\widehat{{N}}_j^{(0)}(A)$ follows a Poisson distribution with expectation $m(A)$, where $m(A)$ is the Lebesgue measure of $A$, and that the counts over disjoint sets are independent.
We can rewrite the construction of $\big\{ \widetilde{\bm{N}}^{(n)} \big\}_{n=1}^{\infty}$ using $\widehat{\bm{N}}^{(0)}$ as
\begin{equation}\label{eqn::iterative_initial_tilde2}
\mathrm{d} \widetilde{N}_j^{(1)}(t) =
\widehat{N}^{(0)}_j\big( [0, \mu_j] \times \mathrm{d} t \big) \end{equation}
and for $n \geq 2$,
\begin{equation}\label{eqn::construction_tilde2}
\begin{aligned}
\widetilde{\lambda}_j^{(n)}(t) & = \phi_j \left\{ \mu_j + \Big( \bm{\omega}_{\cdot,j} * \mathrm{d} \widetilde{\bm{N}}^{(n-1)} \Big)(t) \right\}\\
\mathrm{d} \widetilde{N}^{(n)}_j(t) & =
\widehat{N}_j^{(0)}\big( [0, \widetilde{\lambda}^{(n)}_j(t)] \times \mathrm{d} t\big).
\end{aligned}
\end{equation}
The argument in \cite{bremaud1996} can thus be applied to Equations~\ref{eqn::iterative_initial_tilde2}~and~\ref{eqn::construction_tilde2}, which means that a limiting process $\widetilde{\bm{N}}$ exists with intensity of the form \eqref{eqn::HP_intensity_general}.
To establish (b) in the statement of Theorem~\ref{thm::coupling}, we show that, for every $n$, $\widetilde{\bm{N}}^{(n)}$ is independent of $\mathcal{H}_z^{(n)}$.
To see this, note that, by construction, $\widetilde{\bm{N}}^{(n-1)}$ is independent of the history of $ {\bm{N}}^{(n-1)}$ before time $z$, denoted as $\mathcal{H}^{(n-1)}_z$.
Since $\widetilde{\bm{\lambda}}^{(n)}$ is a function of $\widetilde{\bm{N}}^{(n-1)}$, we see that $\widetilde{\bm{\lambda}}^{(n)}$ is independent of $\mathcal{H}^{(n-1)}_z$.
Thus, since $\widetilde{\bm{N}}^{(n)}$ is determined by $\widetilde{\bm{\lambda}}^{(n)}$, $\widetilde{\bm{N}}^{(0)}$, and the points of $\bm{N}^{(0)}$ after time $z$, it is also independent of $\mathcal{H}^{(n-1)}_z$.
Hence, given that $\mathcal{H}^{(n)}_z$ is determined only by $\mathcal{H}^{(n-1)}_z$ and the points of $\bm{N}^{(0)}$ up to time $z$, both of which are independent of $\widetilde{\bm{N}}^{(0)}$ and of the points of $\bm{N}^{(0)}$ after time $z$, $\widetilde{\bm{N}}^{(n)}$ is also independent of $\mathcal{H}^{(n)}_z$.
Therefore, the iterative construction preserves the independence between $\widetilde{\bm{N}}^{(n)}$ and $\mathcal{H}^{(n)}_z$.
As a result, $\widetilde{\bm{N}} \equiv \widetilde{\bm{N}}^{(\infty)} $ is independent of $\mathcal{H}^{(\infty)}_{z}\equiv \mathcal{H}_{z}$.
So far, we have verified claims (a) and (b) of Theorem~\ref{thm::coupling}, i.e., that (a) there exist identically distributed ${\bm{N}}$ and $\widetilde{\bm{N}}$, and that (b) $\widetilde{\bm{N}}$ is independent of $\mathcal{H}_z$.
The rest of the proof follows from Lemmas~\ref{lmm::bound_expectation}~and~\ref{lmm::bound_crosscovariance}, which lead to the inequalities in \eqref{eqn::bound_expectation}~and~\eqref{eqn::bound_crosscovariance}, respectively. \QEDB
\subsection{Proof of Theorem~\ref{thm::dependence}}\label{sec::weakdependence_y}
Without loss of generality, in this proof, we assume that $\text{supp}(f) \subset [-b_f, 0]$ for a positive constant $b_f$.
It is straightforward to generalize the results to $f$ with arbitrary bounded support.
To see this, consider three scenarios for $[b_1,b_2] \equiv \text{supp}(f)$:
\begin{enumerate}
\item $b_1<b_2 <0$. It is clear that $\text{supp}(f) \subset [-b_f, 0] $ with $b_f \equiv -b_1$.
\item $0<b_1 <b_2$. We can define $g(x)=f(-x)$, so that $\text{supp}(g) \subset [-b_2, 0]$, and the proof applies to $g$.
\item $b_1<0<b_2$. We can write $f$ as $f=f^{+}+f^{-}$ where $f^{+}(x)=f(x)\mathds{1}_{[x>0]}$ and $f^{-}(x) = f(x) \mathds{1}_{[x\leq 0]}$. Setting $g(x) = f^{+}(-x)$, we can see that the proof applies to $f^{-}$ and $g$, and thus to $f$.
\end{enumerate}
Recall that the series $\{{y}_{k,j,i}\}$ for $i=1,\ldots,T/(2\epsilon) $ is introduced in \eqref{eqn::y_kji} as
\begin{equation*}
{y}_{k,j,i} \equiv \frac{1}{ 2\epsilon } \int_{2\epsilon (i-1)}^{2\epsilon i} \int_0^T f(t-t') \mathrm{d} N_{k}(t) \mathrm{d} N_j(t').
\end{equation*}
Here, $\epsilon$ is the smallest number such that $\epsilon \geq \max\{b_f,b\}$ and $T/(2\epsilon)$ is an integer.
We establish the bound \eqref{eqn::dependence_y} on the $\tau$-dependence coefficient of the sequence $\{y_{k,j,i}\}_{i=1}^{T/(2\epsilon)}$.
For any $z$, we construct a sequence $\{\tilde{y}_{k,j,i}^z \}_{i=1}^{T/(2\epsilon)}$, with the same distribution as $\{y_{k,j,i} \}_{i=1}^{T/(2\epsilon)}$, but for all positive integers $l$, $\tilde{y}_{k,j,z+l}^z$ is independent of $\{y_{k,j,i} \}_{i=1}^z$.
For simplicity, we suppress the superscript $z$ in the remainder of this proof.
To this end, let $\widetilde{\bm{N}}$ be the process in Theorem~\ref{thm::coupling} such that $\widetilde{\bm{N}}$ has the same distribution as $\bm{N}$ and $\widetilde{\bm{N}}$ is independent of $\mathcal{H}_{2\epsilon z}$.
We define $\tilde{y}_{k,j,i}$ based on $\widetilde{\bm{N}}$ as
\begin{equation}
\tilde{y}_{k,j,i} \equiv \frac{1}{2\epsilon} \int_{2\epsilon (i-1)}^{2\epsilon i} \int_0^T f(t-t') \mathrm{d} \tilde{N}_{k}(t) \mathrm{d} \tilde{N}_j(t').
\end{equation}
From Theorem~\ref{thm::coupling}, we know that $\{\tilde{y}_{k,j,i} \}_{i=1}^{T/(2\epsilon)}$ has the same distribution as $\{y_{k,j,i} \}_{i=1}^{T/(2\epsilon)}$, and $\{\tilde{y}_{k,j,i}\}_{i=z+l}^{T/(2\epsilon)}$ is independent of $\{y_{k,j,i} \}_{i=1}^z$. We now bound the quantity $\mathbb{E} \big| \tilde{y}_{j,k,z+l} - {y}_{j,k,z+l}\big|$.
For $l\geq (2\epsilon+b_f)/\epsilon$,
\begin{equation*}
\begin{aligned}
\mathbb{E} \big| \tilde{y}_{j,k,z+l} - {y}_{j,k,z+l}\big| = & \frac{1}{2\epsilon}\mathbb{E} \left|\int_{2\epsilon(z+l-1)}^{2\epsilon(z+l)}\int_{t-b_f}^{t} f(t-t') \big[ \mathrm{d} \tilde{N}_k(t') \mathrm{d} \tilde{N}_j(t) - \mathrm{d} {N}_k(t') \mathrm{d} {N}_j(t) \big]\right| \\
\leq &\frac{1}{2 \epsilon } \int_{2\epsilon(z+l-1)}^{2\epsilon(z+l)}\int_{t-b_f}^{t} |f(t-t')| \mathbb{E} \left| \mathrm{d} \tilde{N}_k(t') \mathrm{d} \tilde{N}_j(t) - \mathrm{d} {N}_k(t') \mathrm{d} {N}_j(t)\right| \\
\leq &\frac{C_f}{2 \epsilon} \int_{2\epsilon(z+l-1)}^{2\epsilon(z+l)}\int_{t-b_f}^{t} \mathbb{E} \left| \mathrm{d} \widetilde{N}_k(t') \mathrm{d} \widetilde{N}_j(t) - \mathrm{d} {N}_k(t') \mathrm{d} {N}_j(t)\right| \\
\leq & \frac{C_f}{2 \epsilon} \int_{2\epsilon(z+l-1)}^{2\epsilon(z+l)}\int_{t-b_f}^{t} a_{3} p^{3/2} v \exp\big(-a_{4} [ \min(t',t)-2\epsilon z]^{r/(r+1)}\big) \mathrm{d} t' \mathrm{d}t,
\end{aligned}
\end{equation*}
where the second inequality follows from $\|f\|_{\infty}\leq C_f<\infty$, and the last inequality follows from Lemma~\ref{lmm::elementwise_crosscovariance}.
Given that $\min(t',t)-2\epsilon z\geq 2 \epsilon(z+l-1) -b_f -2\epsilon z = 2\epsilon (l-1) - b_f$ and $l\geq (2\epsilon+b_f)/\epsilon$, we have
\begin{equation*}
\mathbb{E} \big|\tilde{y}_{j,k,z+l}- {y}_{j,k,z+l}\big| \leq a_{5}' \exp(- a_{6} l^{r/(r+1)}),
\end{equation*}
where $a_{5}' = C_f b_f p^{3/2} v a_3$, with $v$ introduced in Lemma~\ref{lmm::elementwise_crosscovariance}, and $a_{6} = \epsilon^{r/(r+1)} a_4$.
For $l<(2\epsilon+b_f)/\epsilon$, we similarly have
\begin{equation*}
\mathbb{E} \big| \tilde{y}_{j,k,z+l} - {y}_{j,k,z+l}\big| \leq C_f b_f a_{3} p^{3/2} v.
\end{equation*}
Taking $a_{5}= a_{5}' \exp\big(a_{4}[2\epsilon+b_f]^{r/(r+1)} \big) $, for any positive integer $l$, we arrive at
\begin{equation*}
\mathbb{E} \big|\tilde{y}_{j,k,z+l}- {y}_{j,k,z+l}\big| \leq a_{5} \exp(- a_{6} l^{r/(r+1)}).
\end{equation*}
Thus, using \eqref{eqn::tau_sequence} and \eqref{eqn::tau_coupling_sequence},
$$ \tau_y(l) \equiv \sup_{u} \tau\big( \mathcal{H}_{u}^{y}, y_{k,j, u+l} \big) \leq a_{5} \exp(- a_{6} l^{r/(r+1)}), $$
as required. \QEDB
\subsection{Proof of Theorem~\ref{thm::concentration_hawkes}}\label{sec::proofs_conc_main}
In what follows we use $C_1$, $C_2$, $C_3$, and $C_4$ to denote constants whose value might change from line to line.
As in the proof of Theorem~\ref{thm::dependence}, we assume that $\text{supp}(f) \subset [-b_f, 0]$ for a positive constant $b_f$.
We first verify that, for any $i$, ${y}_{k,j,i}$ in \eqref{eqn::y_kji} has an exponential tail of order $1/2$, i.e.,
\begin{equation}\label{eqn::tail_ineq}
\sup_{i >0} \mathbb{P}(|y_{k,j,i}| \geq x ) \leq \exp\left(1- c_{16} {x}^{1/2} \right),
\end{equation}
where $ c_{16} $ is a constant.
Since $\|f\|_{\infty} \leq C_f < \infty$, we know that
\begin{equation}\label{eqn::y_naive_bound}
|{y}_{k,j,i}| \leq \frac{C_f}{2\epsilon} N_j\big([2\epsilon(i-1), 2\epsilon i] \big) N_k\big( [2\epsilon i -2 \epsilon -b_f, 2\epsilon i ) \big).
\end{equation}
Given that both $[2\epsilon(i-1), 2\epsilon i] $ and $ [2\epsilon i -2 \epsilon -b_f, 2\epsilon i )$ are finite intervals, Lemma~\ref{lmm::exponential_tail} shows that $N_j\big([2\epsilon(i-1), 2\epsilon i] \big) $ and $ N_k\big( [2\epsilon i -2 \epsilon -b_f, 2\epsilon i ) \big)$ have exponential tails of order $1$.
From Lemma~\ref{lmm::product}, we know that ${y}_{k,j,i}$ has an exponential tail of order $1/2$.
From~\eqref{eqn::tail_ineq} and the conclusion of Theorem~\ref{thm::dependence}, we know that $\{y_{k,j,i}\}_{i=1}^{ T/(2\epsilon) }$ satisfies \eqref{eqn::tail_thm} and \eqref{eqn::tau_thm} in Theorem~\ref{thm::thm_mpr} with $\gamma_1=r/(r+1)$ and $\gamma_2=1/2$, so that $1/\gamma = 1/\gamma_1 + 1/\gamma_2 = (3r+1)/r$, i.e., $\gamma = r/(3r+1)<1$.
Thus, applying Theorem~1 in \cite{merlevede2011} to $\{ {y}_{k,j,i}- \mathbb{E} y_{k,j,i} \}_{i=1}^{T/(2\epsilon)}$ gives
\begin{equation}\label{eqn::concofy}
\begin{aligned}
\mathbb{P}\left(\left| \sum_{i=1}^{ T/(2\epsilon) } {y}_{k,j,i} - \frac{T}{2\epsilon} \mathbb{E} {y}_{k,j,i} \right| \geq T \epsilon_1 \right) \leq & \frac{T}{2\epsilon} \exp\left( -\frac{ (\epsilon_1 T)^{r/(3r+1)}}{C_1} \right) +\exp\left(-\frac{\epsilon_1^2 T^2}{C_2(1+T v_y/ 2\epsilon)} \right) \\
& +\exp\left[ -\frac{\epsilon_1^2 T^2}{ C_3 T/2\epsilon} \exp\left(\frac{(\epsilon_1 T)^{r(2r+1)/(3r+1)^2} }{C_4 \left[\log (\epsilon_1 T)\right]^{r/(3r+1)} } \right) \right],
\end{aligned}
\end{equation}
where $\epsilon_1$ is to be specified later, and $v_y$ is a measure of the ``variance" of $y_{k,j,i}$, introduced in \eqref{eqn::v_y}.
We can see that
\begin{equation}\label{eqn::v_y_bound}
\small
\begin{aligned}
v_y = & \sup_{i >0} \left\{ \mathbb{E}\big[ (y_{k,j,i}-\mathbb{E} y_{k,j,i})^2 \big] + 2\sum_{l \geq 1} \mathbb{E}\big[ \mathbb{E}(y_{k,j,i+l}-\mathbb{E}y_{k,j,i+l} \mid y_{k,j,i}) (y_{k,j,i}-\mathbb{E}y_{k,j,i})\big] \right\}\\
\leq & \sup_{i >0} \left\{ \mathbb{E}\big[ (y_{k,j,i}-\mathbb{E} y_{k,j,i})^2 \big] + 2\sum_{l \geq 1} \mathbb{E}\big[ \big|\mathbb{E}(y_{k,j,i+l}-\mathbb{E}y_{k,j,i+l} \mid y_{k,j,i})\big| \big|( y_{k,j,i}-\mathbb{E}y_{k,j,i})\big|\big] \right\}\\
\leq & \sup_{i >0} \left\{ \mathbb{E}\big[ (y_{k,j,i}-\mathbb{E} y_{k,j,i})^2 \big] + 2\sum_{l \geq 1} \mathbb{E}\big[ a_{5} \exp\big(-a_{6} l^{r/(r+1)}\big) \big( |y_{k,j,i}-\mathbb{E}y_{k,j,i}|\big)\big] \right\}\\
\leq & \sup_{i >0} \left\{ \mathbb{E}\big[ (y_{k,j,i}-\mathbb{E} y_{k,j,i})^2 \big] + 2\sum_{l \geq 1} a_{5} \exp\big(-a_{6} l^{r/(r+1)}\big) \mathbb{E}\big( |y_{k,j,i}-\mathbb{E}y_{k,j,i}|\big) \right\}\\
= & \sup_{i >0} \left\{ \mathbb{E}\big[ (y_{k,j,i}-\mathbb{E} y_{k,j,i})^2 \big] + 2 \mathbb{E}\big( |y_{k,j,i}-\mathbb{E}y_{k,j,i}|\big)\, a_{5} \sum_{l \geq 1} \exp\big(-a_{6} l^{r/(r+1)}\big) \right\},\\
\end{aligned}
\end{equation}
where the second inequality follows from a property of $\tau$-dependence, which ensures that $\big|\mathbb{E}[y_{k,j,i+l}\mid y_{k,j,i}] - \mathbb{E}y_{k,j,i+l}\big| \leq \tau_{y}(l)$ (see e.g., Equation 2.1 in \cite{merlevede2011}), together with \eqref{eqn::dependence_y}. Furthermore, both $\mathbb{E}\big[ (y_{k,j,i}-\mathbb{E} y_{k,j,i})^2 \big] $ and $\mathbb{E}|y_{k,j,i}-\mathbb{E}y_{k,j,i}|$ are finite since ${y}_{k,j,i}$ has an exponential tail of order $1/2$, and the series $\sum_{l\geq 1}\exp\big(-a_{6} l^{r/(r+1)}\big)$ converges. Therefore, $v_y$ is bounded.
The exponent below is chosen to balance the first two terms on the right-hand side of \eqref{eqn::concofy}: writing $\epsilon_1 \propto T^{-\alpha}$, the exponents $r(1-\alpha)/(3r+1)$ and $1-2\alpha$ coincide when $\alpha = (2r+1)/(5r+2)$, in which case both equal $r/(5r+2)$. Letting $\epsilon_1 = c_3 T^{-(2r+1)/(5r+2)}/2$ and noting that $\epsilon$ is fixed gives
\begin{equation}\label{eqn::neweqinpfthm2}
\mathbb{P}\left(\left| \bar{y}_{j,k} - \mathbb{E} \bar{y}_{j,k} \right| \geq c_3 T^{-(2r+1)/(5r+2)} \right) \leq c_4 T \exp( - c_5 T^{r/(5r+2)}),
\end{equation}
where the right-hand side of \eqref{eqn::neweqinpfthm2} dominates the terms on the right-hand side of \eqref{eqn::concofy}.
Using a Bonferroni bound yields the result \eqref{eqn::y_ci} in Theorem~\ref{thm::concentration_hawkes}.
\QEDB
\subsection{Proof of Corollary~\ref{cor::cross-covariance}}\label{sec::proof_cross-covariance}
In the following, we only discuss the case $j\neq k$ for $\widehat{V}_{k,j}$.
The proof for $j = k$ follows from a similar argument and is omitted.
Recall that the estimator $\widehat{V}_{k,j}$ takes the form (Equation~\ref{eqn::screening})
\begin{equation}
\widehat{V}_{k,j}(\Delta) = \underbrace{\frac{1}{T h} \iint_{[0,T]^2} K \left( \frac{\{t' - t\}+\Delta}{h} \right)\, \mathrm{d} N_j(t) \mathrm{d} N_{k}(t')}_{\text{I}/h} - \underbrace{\frac{1}{{T}} N_j(T) \frac{1}{{T}}N_{k}(T)}_{\text{II}}.
\end{equation}
For $\text{I}$, applying Theorem~\ref{thm::concentration_hawkes} with $f(x) = K \big( (x+\Delta)/h \big)$ gives
\begin{equation}\label{eqn::concofy_II}
\mathbb{P}\left(\left| \text{I} - \mathbb{E} [\text{I} ] \right| \geq c_3 T^{-\frac{2r+1}{5r+2}} \right) \leq c_4 T \exp( - c_5 T^{\frac{r}{5r+2}}),
\end{equation}
where by \eqref{eqn::Lambda} and \eqref{eqn::V}
\begin{equation}
\begin{aligned}
\mathbb{E} [ \text{I} ] = & \mathbb{E} \left[ \frac{1}{T} \iint_{[0,T]^2} K \left( \frac{\{t' - t\}+\Delta}{h} \right)\, \mathrm{d} N_j(t) \mathrm{d} N_{k}(t') \right] \\
= & \frac{1}{T} \iint_{[0,T]^2} K \left( \frac{\{t' - t\}+\Delta}{h} \right) (V_{k,j}(t-t')+ \Lambda_j \Lambda_k) \mathrm{d} t \mathrm{d} t'.
\end{aligned}
\end{equation}
Therefore,
\begin{equation*}\label{eqn::bias_ini}
\small
\begin{aligned}
& |\mathbb{E} [\text{I} ]- h [V_{k,j}( \Delta ) + \Lambda_j \Lambda_k ]| \\
= & \left| \frac{1}{T} \iint_{[0,T]^2} K\left(\frac{t-t'+\Delta}{h} \right) \mathbb{E} [\mathrm{d} N_j(t') \mathrm{d} N_k(t) ] - \frac{1}{T} \iint_{[0,T]^2} K\left(\frac{t-t'+\Delta}{h} \right) [V_{k,j}(\Delta) + \Lambda_j \Lambda_k] \mathrm{d} t \mathrm{d}t' \right| \\
= & \left| \frac{1}{T} \iint_{[0,T]^2} K\left(\frac{t-t'+\Delta}{h} \right) \left\{\mathbb{E} [\mathrm{d} N_j(t') \mathrm{d} N_k(t) ] - \Lambda_j \Lambda_k \mathrm{d} t \mathrm{d} t'\right\} - \right.\\
& \left. \frac{1}{T} \iint_{[0,T]^2} K\left(\frac{t-t'+\Delta}{h} \right) V_{k,j}(\Delta) \mathrm{d}t \mathrm{d}t' \right| \\
= & \left| \frac{1}{T} \iint_{[0,T]^2} K\left(\frac{t-t'+\Delta}{h} \right) V_{k,j}(t'-t) \mathrm{d} t \mathrm{d} t' - \frac{1}{T} \iint_{[0,T]^2} K\left(\frac{t-t'+\Delta}{h} \right) V_{k,j}(\Delta) \mathrm{d} t \mathrm{d} t' \right| \\
= & \left| \frac{1}{T} \iint_{[0,T]^2} K\left(\frac{t-t'+\Delta}{h} \right) \big[V_{k,j}(t'-t) -V_{k,j}(\Delta)\big] \mathrm{d} t \mathrm{d} t' \right|,
\end{aligned}
\end{equation*}
where we use the definition of $\bm{V}$ in the third equality.
Recalling that the kernel function $K(x/h)$ is supported on $[-h,h]$, it follows that
\begin{equation}\label{eqn::bias}
\small
\begin{aligned}
& | \mathbb{E} [\text{I}] - h [V_{k,j}(\Delta) + \Lambda_j \Lambda_k] | \\
= & \left| \frac{1}{T} \int_{0}^T \int_{\max(0, t+\Delta-h)}^{\min(T,t+\Delta+h)} K\left(\frac{t-t'+\Delta}{h} \right) \big[V_{k,j}(t'-t) -V_{k,j}(\Delta)\big] \mathrm{d} t \mathrm{d} t' \right| \\
\leq & \left| \frac{1}{T} \int_{0}^T \int_{\max(0, t+\Delta-h)}^{\min(T,t+\Delta+h)} K\left(\frac{t-t'+\Delta}{h} \right) \theta_0 |t'-t-\Delta| \mathrm{d} t \mathrm{d} t' \right| \\
\leq & \left| \frac{1}{T} \int_{0}^T \int_{\max(0, t+\Delta-h)}^{\min(T,t+\Delta+h)} K\left(\frac{t-t'+\Delta}{h} \right) \theta_0 h \mathrm{d} t \mathrm{d} t' \right| \\
\leq & \left| \frac{1}{T} \int_{0}^T 2\theta_0 h^2 \mathrm{d} t \right| \\
= & 2 \theta_0 h^2,
\end{aligned}
\end{equation}
where the first inequality follows from the fact that $V_{k,j}$ is a $\theta_0$-Lipschitz function, and the last inequality holds since the kernel function $K(t/h)$ integrates to $h$ on $\mathbb{R}$.
Similarly, for each term in $\text{II}=N_j(T)N_k(T)/T^2$, an argument very similar to the proof of Theorem~\ref{thm::concentration_hawkes} gives, for $1 \leq j \leq p$,
\begin{equation}\label{eqn::concofN}
\mathbb{P}\left(\left| N_j(T)/T - \Lambda_j \right| \geq c_3 T^{-\frac{2r+1}{5r+2}} \right) \leq c_4 T \exp( - c_5 T^{\frac{r}{5r+2}}).
\end{equation}
Combining \eqref{eqn::concofy_II}, \eqref{eqn::bias}, and \eqref{eqn::concofN}, we have, with probability at least $1-2 c_4 T \exp( - c_5 T^{\frac{r}{5r+2}})$,
\begin{equation*}
\begin{aligned}
\left| \widehat{V}_{k,j}(\Delta) -{V}_{k,j}(\Delta) \right| \leq & \left| h^{-1}{\rm I} -h^{-1} \mathbb{E}[\text{I}] \right|+ \left| h^{-1} \mathbb{E} [\text{I}] - V_{k,j}(\Delta) - \Lambda_j \Lambda_k \right| + \\
& \left|\frac{1}{T^2}( N_j(T) -T \Lambda_j) N_{k}(T) \right| + \left|\Lambda_j \frac{1}{T} N_{k}(T) - \Lambda_j \Lambda_{k} \right| \\
\leq & c_3 T^{-\frac{2r+1}{5r+2}} h^{-1} + 2\theta_0 h + (\|\bm{\Lambda}\|_{\infty}+ c_3 T^{-\frac{2r+1}{5r+2}}) c_3 T^{-\frac{2r+1}{5r+2}} + \|\bm{\Lambda}\|_{\infty} c_3 T^{-\frac{2r+1}{5r+2}}.
\end{aligned}
\end{equation*}
Thus, letting $h= c_6 T^{-\frac{r+0.5}{5r+2}}$, we can see that, for some constant $ c_3' $,
\begin{equation}
\left| \widehat{V}_{k,j}(\Delta) -{V}_{k,j}(\Delta ) \right| \leq c_3' T^{-\frac{r+0.5}{5r+2}}.
\end{equation}
Lastly, we need a uniform bound on $\widehat{V}_{k,j}-V_{k,j}$ over the interval $[-B,B]$. We first note that the pointwise bound just derived holds, with the stated probability, at each point of a grid of $\lceil T^{\frac{r+0.5}{5r+2}}\rceil$ points on $[-B,B]$, denoted $\{ \Delta_i \}_{i=1}^{\lceil T^{\frac{r+0.5}{5r+2}}\rceil}$.
The gap between adjacent points of this grid is at most $2B T^{-\frac{r+0.5}{5r+2}}$.
Furthermore, for any $\Delta \in [-B,B]$, we can find a point $\Delta_i$ on the grid such that $| \Delta - \Delta_i| \leq 2B/ \lceil T^{\frac{r+0.5}{5r+2}} \rceil \leq 2B T^{-\frac{r+0.5}{5r+2}}$.
From basic algebra, for all $\Delta \in [-B,B]$,
\begin{equation*}
\begin{aligned}
\left| \widehat{V}_{k,j}(\Delta)-V_{k,j}(\Delta) \right| = & \left| \widehat{V}_{k,j}(\Delta)-\widehat{V}_{k,j}(\Delta_i)+\widehat{V}_{k,j}(\Delta_i)-{V}_{k,j}(\Delta_i) +{V}_{k,j}(\Delta_i)- V_{k,j}(\Delta) \right|\\
\leq & 2B T^{-\frac{r+0.5}{5r+2}} + c_3' T^{-\frac{r+0.5}{5r+2}} + 2\theta_0 BT^{-\frac{r+0.5}{5r+2}} \\
\leq & c_3'' T^{-\frac{r+0.5}{5r+2}},
\end{aligned}
\end{equation*}
where $c_3'' = 2B+ c_3' +2\theta_0 B$.
Now, taking a union bound, with probability at least $1-2 c_4 p^2 T^{\frac{6r+0.5}{5r+2}} \exp\left( - c_5 T^{\frac{r}{5r+2}} \right)$, it holds for all $j,k$ that, for some constant $ c_6 $,
$\big\|\widehat{V}_{k,j}-V_{k,j}\big\|_{2,[-B,B]} \leq c_6 T^{-\frac{r+0.5}{5r+2}}.$ \QEDB
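For illustration only, the estimator analysed in this proof can be sketched in a few lines of Python/NumPy. The form $\widehat{V}_{k,j}(\Delta)=h^{-1}\,\text{I}-N_j(T)N_k(T)/T^{2}$ used below is the one implied by the error decomposition above, and the box kernel is merely one admissible choice satisfying the assumed support and normalisation conditions.
\begin{verbatim}
import numpy as np

def cross_covariance_estimate(times_j, times_k, T, Delta, h):
    """Kernel-smoothed estimate of V_{k,j}(Delta), written as
    h^{-1} * I - N_j(T) N_k(T) / T^2, with I the double sum over
    event pairs weighted by K((t - t' + Delta)/h)."""
    def K(x):
        # box kernel: K(x) = 1/2 on [-1, 1], so K(x/h) is supported
        # on [-h, h] and integrates to h, as assumed in the proof
        return 0.5 * (np.abs(x) <= 1.0)
    tj = np.asarray(times_j)[None, :]        # events of process j (t')
    tk = np.asarray(times_k)[:, None]        # events of process k (t)
    I = K((tk - tj + Delta) / h).sum() / T   # (1/T) * sum over event pairs
    II = len(times_j) * len(times_k) / T**2
    return I / h - II
\end{verbatim}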
\section{Discussion}\label{sec::discussion}
The proposed approach in Section~\ref{sec::conc} generalizes existing theoretical tools for the Hawkes process by lifting the strict assumptions that allow only mutually-exciting relationships and linear link functions \citep{hawkes1974}.
However, more challenges remain in the analysis of the Hawkes process with inhibitory relationships.
For instance, the assumption for stability (Assumption~\ref{asmp::spectralradius}) introduced by \cite{bremaud1996} puts a strong restriction on the matrix $\bm{\Omega}$.
Here, each entry of $\bm{\Omega}$ is the $\ell_1$-norm of the corresponding transfer function, which neglects its sign.
This assumption can be too restrictive in the presence of inhibitory relationships.
To see this, consider a bivariate Hawkes process with self-inhibitory transfer functions $\omega_{1,1}(\Delta) = \omega_{2,2}(\Delta) = -2a \bm{1}_{[\Delta < 1]}$, cross-exciting transfer functions $\omega_{1,2}(\Delta)=\omega_{2,1}(\Delta)= a \bm{1}_{[\Delta < 1]}$, any positive $1$-Lipschitz link function, and a non-zero spontaneous rate $\mu_1=\mu_2=a$.
For any $a\geq 1/3$, this specification violates Assumption~\ref{asmp::spectralradius}, since the corresponding matrix $\bm{\Omega}$ has spectral radius $3a\geq 1$, yet it still yields a stable process, intuitively because the self-inhibition outweighs the cross-excitation. It is natural to hypothesize that the requirement for stability depends on the signs of the transfer functions and on the graphical structure. Relaxing this assumption could be a fruitful direction for future research.
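As a purely numerical illustration of this example (it plays no role in the theory), the following sketch simulates the above bivariate process on a crude time grid; the softplus link is one admissible choice of a positive $1$-Lipschitz link function, and the Euler-type binning is an approximation rather than an exact sampler.
\begin{verbatim}
import numpy as np

def softplus(x):
    # a positive, 1-Lipschitz link function (one admissible choice)
    return np.log1p(np.exp(x))

def simulate_rates(a=0.5, T=500.0, dt=0.01, seed=0):
    """Crude time-discretised simulation of the bivariate example:
    mu_1 = mu_2 = a, omega_{1,1} = omega_{2,2} = -2a on [0, 1),
    omega_{1,2} = omega_{2,1} = a on [0, 1).  Returns empirical rates."""
    rng = np.random.default_rng(seed)
    n_bins = int(T / dt)
    lag = int(1.0 / dt)              # support of the transfer functions
    events = np.zeros((2, n_bins))
    for i in range(n_bins):
        recent = events[:, max(0, i - lag):i].sum(axis=1)
        for j in (0, 1):
            drive = a - 2.0 * a * recent[j] + a * recent[1 - j]
            events[j, i] = rng.poisson(softplus(drive) * dt)
    return events.sum(axis=1) / T
\end{verbatim}
Running the sketch for values of $a$ above and below $1/3$ gives one simple way to probe the stability question empirically.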
\appendix
\section{Proofs of Technical Lemmas}\label{sec::proof_lemmas}
\subsection{Proof of Lemma~\ref{lmm::bound_expectation}}\label{sec::proof_expectation}
Recall from the statement of Lemma~\ref{lmm::bound_expectation} that we define $\bm{ \eta}(b)\equiv (\eta_{j,k})_{p \times p}$ where $\eta_{k,j}(b) =\int_{b}^{\infty} |\omega_{k,j}(\Delta)| \mathrm{d} \Delta$, for $j,k \in \{1,\ldots, p\}$.
Since $b$ is a constant in this proof, we will denote $\bm{ \eta}(b)$ as $\bm{ \eta}$ for ease of notation.
We claim that, for $u > (n-1)b$, $n=1,2,\ldots,$
\begin{equation}\label{eqn::tau_N_hypothesis}
\mathbb{E}\big|\mathrm{d} \widetilde{\bm{N}}(z+u) - \mathrm{d} {\bm{N}}(z+u) \big|/\mathrm{d} u \preceq 2 v_1 \left(\bm{\Omega}^n \bm{J} + \sum_{i=1}^n \bm{\Omega}^{i-1} \bm{\eta}\bm{J}\right),
\end{equation}
where $\preceq$ is the element-wise inequality, $\bm{J}$ is a $p$-vector of ones, and $v_1$ is a constant such that $v_1 = \max_j \max( \phi_j(\mu_j) , \Lambda_j)$.
From \cite{bremaud1996}, we know that the limits of $\{{\bm{N}}^{(n)}\}_{n=1}^{\infty}$ and $\{\widetilde{\bm{N}}^{(n)}\}_{n=1}^{\infty}$ exist under Assumption~\ref{asmp::spectralradius}.
Therefore, the limiting process $\bm{N}$ satisfies, for $j=1,\ldots, p$,
\begin{equation} \label{eqn::N_infinity}
\begin{aligned}
{\lambda}_j(t) & = \phi_j \big\{ \mu_j + \big( \bm{\omega}_{\cdot,j} * \mathrm{d} {\bm{N}} \big)(t) \big\} \\
\mathrm{d} N_j(t) & = \mathrm{d} {N}_j^{(0)}\big( [0, {\lambda}_j(t)] \times \mathrm{d} t\big).
\end{aligned}
\end{equation}
and $\widetilde{\bm{N}}$ satisfies
\begin{equation} \label{eqn::Ntilde_infinity}
\begin{aligned}
\widetilde{\lambda}_j (t) & = \phi_j \big\{\mu_j + \big( \bm{\omega}_{\cdot,j} * \mathrm{d} \widetilde{\bm{N}} \big)(t) \big\} \\
\mathrm{d} \widetilde{N}_j(t) & = \mathds{1}_{[ t\leq z]} \widetilde{N}_j^{(0)}\big( [0, \widetilde{\lambda}_j(t)] \times \mathrm{d} t\big) + \mathds{1}_{[ t > z]} N_j^{(0)}\big([0, \widetilde{\lambda}_j(t)] \times \mathrm{d} t\big).
\end{aligned}
\end{equation}
Using \eqref{eqn::N_infinity} and \eqref{eqn::Ntilde_infinity}, we can then prove \eqref{eqn::tau_N_hypothesis} by induction.
First note that, for any $u \in \mathbb{R}$,
\begin{equation}\label{eqn::bound_N_crude}
\small
\mathbb{E}\big|\mathrm{d} N_j(u+z)- \mathrm{d} \widetilde{N}_j(u+z) \big|/\mathrm{d}u \leq \mathbb{E}\big|\mathrm{d} N_j(u+z) \big|/\mathrm{d} u + \mathbb{E}\big| \mathrm{d} \widetilde{N}_j(u+z) \big|/\mathrm{d} u = 2 \Lambda_j \leq 2 v_1,
\end{equation}
where $\Lambda_j$ is the marginal intensity of $\mathrm{d} N_j$ defined in \eqref{eqn::Lambda}.
Hence, jointly for $j=1,\ldots,p$, $\mathbb{E}\big|\mathrm{d} \bm{N}(u+z)- \mathrm{d} \widetilde{\bm{N}}(u+z) \big|/\mathrm{d} u \preceq 2 v_1\bm{J}$ for any $u \in \mathbb{R}$.
We now establish the bound by induction on $m$, i.e., for $u > (m-1)b$ with $m \in \{1,2,\ldots\}$ and $b$ as introduced in Theorem~\ref{thm::coupling}.
For $m=1$, i.e., when $ u > 0$, we have
\begin{equation*}
\begin{aligned}
& \mathbb{E}\big|\mathrm{d} {N}_j(u+z)- \mathrm{d} \widetilde{{N}}_j(u+z) \big|/\mathrm{d} u \\
= & \mathbb{E}\big|N_j^{(0)}([0, \widetilde{\lambda}_j(u+z)] \times \mathrm{d} t) -
N_j^{(0)}([0, {\lambda}_j(u+z)] \times \mathrm{d} t) \big| / \mathrm{d}u \\
= & \mathbb{E}\big| {\lambda}_j(u+z) - \widetilde{\lambda}_j(u+z) \big| \\
= & \mathbb{E}\Big| \phi_j\Big\{\mu_j+ \sum_{l=1}^p \big({\omega}_{l,j} * \mathrm{d} {N}_{l}\big) (u+z) \Big\} - \phi_j\Big\{\mu_j+ \sum_{l=1}^p \big({\omega}_{l,j} * \mathrm{d} \widetilde{N}_{l}\big) (u+z) \Big\} \Big|,
\end{aligned}
\end{equation*}
where the second equality follows because the expected difference between the event counts equals the expected difference between the areas $[0,\lambda_j(u+z)]$ and $[0,\widetilde{\lambda}_j(u+z)]$ of the driving Poisson measure.
Recalling that we assume $\phi_j(\cdot)$ to be $1$-Lipschitz, we have
\begin{equation*}
\begin{aligned}
& \mathbb{E}\Big| \phi_j\Big\{\mu_j+ \sum_{l=1}^p \big({\omega}_{l,j} * \mathrm{d} {N}_{l}\big) (u+z) \Big\} - \phi_j\Big\{ \mu_j+ \sum_{l=1}^p \big({\omega}_{l,j} * \mathrm{d} \widetilde{N}_{l}\big) (u+z) \Big\} \Big| \\
\leq & \mathbb{E}\left| \sum_{l=1}^p \big({\omega}_{l,j} * \mathrm{d} {N}_{l}\big) (u+z)-\sum_{l=1}^p \big({\omega}_{l,j} * \mathrm{d} \widetilde{N}_{l}\big) (u+z) \right|\\
= & \mathbb{E}\left| \sum_{l=1}^p \int_0^{\infty} \omega_{l,j}(\Delta) \big[ \mathrm{d} N_l(u+z-\Delta) - \mathrm{d} \widetilde{N}_l(u+z-\Delta) \big] \right| \\
\leq & \sum_{l=1}^p \int_0^{\infty} |\omega_{l,j}(\Delta)|\, \mathbb{E}\big| \mathrm{d} N_l(u+z-\Delta) - \mathrm{d} \widetilde{N}_l(u+z-\Delta) \big|.
\end{aligned}
\end{equation*}
For each $l$, we note that
\begin{equation*}
\begin{aligned}
& \int_0^{\infty} |\omega_{l,j}(\Delta)| \, \mathbb{E}\big| \mathrm{d} N_l(u+z-\Delta) - \mathrm{d} \widetilde{N}_l(u+z-\Delta) \big| \\
= & \left\{ \int_0^{b} +\int_{b}^{\infty}\right\} |\omega_{l,j}(\Delta)| \, \mathbb{E}\big| \mathrm{d} N_l(u+z-\Delta) - \mathrm{d} \widetilde{N}_l(u+z-\Delta) \big|\\
\leq & \int_0^{b} |\omega_{l,j}(\Delta)| \mathrm{d} \Delta \left\{ \max_{t' \in [u-b,u]}\mathbb{E}\big| \mathrm{d} N_l(t') - \mathrm{d} \widetilde{N}_l(t') \big|/\mathrm{d} t' \right\} +\\
& \int_{b}^{\infty} |\omega_{l,j}(\Delta)| \mathrm{d} \Delta \left\{ \max_{t' \leq u-b}\mathbb{E}\big| \mathrm{d} N_l(t') - \mathrm{d} \widetilde{N}_l(t') \big|/\mathrm{d} t' \right\}\\
\leq & \int_0^{b} |\omega_{l,j}(\Delta)| \mathrm{d} \Delta \left\{ \max_{t' \geq -b} \mathbb{E}\big| \mathrm{d} N_l(t') - \mathrm{d} \widetilde{N}_l(t') \big|/\mathrm{d} t' \right\} +\\
& \int_{b}^{\infty} |\omega_{l,j}(\Delta)| \mathrm{d} \Delta \left\{ \max_{t' \in \mathbb{R} }\mathbb{E}\big| \mathrm{d} N_l(t') - \mathrm{d} \widetilde{N}_l(t') \big| /\mathrm{d} t' \right\}\\
\leq & \int_0^{b} |\omega_{l,j}(\Delta)| \mathrm{d} \Delta \big(\Lambda_l + \Lambda_l\big) + \int_{b}^{\infty} |\omega_{l,j}(\Delta)| \mathrm{d} \Delta \big(\Lambda_l + \Lambda_l\big) \\
\leq & 2\Omega_{j,l} v_1 + 2 \eta_{j,l} v_1,
\end{aligned}
\end{equation*}
where the second-to-last inequality follows from \eqref{eqn::bound_N_crude}, and the last inequality follows from the definition of $v_1$ and the definitions of $\bm{\Omega}$ and $\bm{\eta}$.
Combining the above inequalities, we see that, for $m=1$,
\begin{equation*}
\mathbb{E}\big|\mathrm{d} {N}_j(u+z)- \mathrm{d} \widetilde{{N}}_j(u+z) \big|/\mathrm{d} u \leq 2 v_1 \big( \bm{\Omega}_{j,\cdot}^{{ \mathrm{\scriptscriptstyle T} }} \bm{J}+ \bm{\eta}_{j,\cdot} \bm{J}\big).
\end{equation*}
Thus, jointly for all $j$ and for $ u >0$,
\begin{equation*}
\mathbb{E}\big|\mathrm{d} \bm{N}(u+z)- \mathrm{d} \widetilde{\bm{N}}(u+z) \big|/ \mathrm{d} u \preceq 2v_1 \big( \bm{\Omega} \bm{J} +\bm{\eta}\bm{J}\big),
\end{equation*}
i.e., \eqref{eqn::tau_N_hypothesis} holds for $m=1$.
Now assume that \eqref{eqn::tau_N_hypothesis} holds for $m=n-1$, i.e., when $ u > (n-2)b$,
\begin{equation*}
\mathbb{E}\big|\mathrm{d} \bm{N}(u+z)- \mathrm{d} \widetilde{\bm{N}}(u+z) \big|/ \mathrm{d} u \preceq 2 v_1 \left( \bm{\Omega}^{n-1} \bm{J}+ \sum_{i=1}^{n-1} \bm{\Omega}^{i-1} \bm{\eta}\bm{J}\right).
\end{equation*}
Then, for $u >(n-1)b$, we have
\begin{equation*}
\begin{aligned}
& \mathbb{E}\big|\mathrm{d}{N}_j(u+z)- \mathrm{d} \widetilde{{N}}_j(u+z) \big|/ \mathrm{d} u \\
\leq & \sum_{l=1}^p \int_0^{b} |\omega_{l,j}(\Delta)| \mathrm{d} \Delta \left\{ \max_{t' \geq (n-2)b} \mathbb{E}\big| \mathrm{d} N_l(t') - \mathrm{d} \widetilde{N}_l(t') \big|/\mathrm{d} t' \right\} + \\
& \sum_{l=1}^p \int_{b}^{\infty} |\omega_{l,j}(\Delta)| \mathrm{d} \Delta \left\{ \max_{t' \in \mathbb{R} }\mathbb{E}\big| \mathrm{d} N_l(t') - \mathrm{d} \widetilde{N}_l(t') \big| /\mathrm{d} t' \right\}\\
\leq & \sum_{l=1}^p \int_0^{b} |\omega_{l,j}(\Delta)| \mathrm{d} \Delta \left[ 2 v_1 \big(\bm{\Omega}^{n-1}\big)_{l,\cdot} \bm{J} +2v_1 \sum_{i=1}^{n-1} \big(\bm{\Omega}^{i-1}\big)_{l,\cdot} \bm{\eta} \bm{J} \right] + \sum_{l=1}^p \int_{b}^{\infty} |\omega_{l,j}(\Delta)| \mathrm{d} \Delta 2v_1 \\
\leq & \sum_{l=1}^p \Omega_{j,l} \left[ 2 v_1 \big(\bm{\Omega}^{n-1}\big)_{l,\cdot} \bm{J} +2v_1 \sum_{i=1}^{n-1} \big(\bm{\Omega}^{i-1}\big)_{l,\cdot} \bm{\eta} \bm{J} \right] + 2v_1 \bm{\eta}_{j,\cdot} \bm{J}\\
= & 2v_1 \bm{\Omega}_{j,\cdot} \bm{\Omega}^{n-1} \bm{J} +2v_1 \sum_{i=1}^{n-1} \big(\bm{\Omega}^{i}\big)_{j,\cdot} \bm{\eta} \bm{J} + 2v_1 \bm{\eta}_{j,\cdot} \bm{J},
\end{aligned}
\end{equation*}
where the inequalities follow from a similar argument to the case of $m=1$.
Thus, for $m=n$, i.e., $ u > (n-1)b$,
\begin{equation*}
\mathbb{E}\big|\mathrm{d} \bm{N}(u+z)- \mathrm{d}\widetilde{\bm{N}}(u+z) \big|/ \mathrm{d} u \preceq 2 v_1 \left( \bm{\Omega}^{n} \bm{J}+ \sum_{i=1}^{n} \bm{\Omega}^{i-1} \bm{\eta} \bm{J}\right).
\end{equation*}
We have thus completed the induction for $ \mathbb{E}\big| \mathrm{d} \bm{N}(u+z)- \mathrm{d} \widetilde{\bm{N}}(u+z) \big|/ \mathrm{d} u$.
To summarize, for any $n$ and $u > (n-1)b$, it holds that
\begin{equation*}
\mathbb{E}\big|\mathrm{d} \bm{N}(u+z)- \mathrm{d}\widetilde{\bm{N}}(u+z) \big|/ \mathrm{d} u \preceq 2 v_1 \left( \bm{\Omega}^{n} \bm{J}+ \sum_{i=1}^{n} \bm{\Omega}^{i-1} \bm{\eta} \bm{J}\right).
\end{equation*}
Equivalently, we can say that for any $u \in \mathbb{R}^+$, it holds that
\begin{equation*}
\mathbb{E}\big|\mathrm{d} \bm{N}(u+z)- \mathrm{d}\widetilde{\bm{N}}(u+z) \big|/ \mathrm{d} u \preceq 2 v_1 \left( \bm{\Omega}^{\lfloor u/b +1 \rfloor} \bm{J}+ \sum_{i=1}^{\lfloor u/b +1 \rfloor} \bm{\Omega}^{i-1}\bm{\eta} \bm{J}\right),
\end{equation*}
as required. \QEDB
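For illustration, the bound just obtained can be evaluated numerically. In the following sketch (Python/NumPy; $\bm{\Omega}$, $\bm{\eta}=\bm{\eta}(b)$, $v_1$ and $b$ are as in the statement of the lemma), the returned vector decays in $u$ whenever the spectral radius of $\bm{\Omega}$ is below one and the tail matrix $\bm{\eta}(b)$ is small.
\begin{verbatim}
import numpy as np

def coupling_bound(Omega, eta, v1, u, b):
    """Element-wise bound of the lemma:
    2*v1*(Omega^n J + sum_{i=1}^n Omega^{i-1} eta J), n = floor(u/b) + 1."""
    p = Omega.shape[0]
    J = np.ones(p)
    n = int(np.floor(u / b)) + 1
    bound = np.linalg.matrix_power(Omega, n) @ J
    bound = bound + sum(np.linalg.matrix_power(Omega, i - 1) @ (eta @ J)
                        for i in range(1, n + 1))
    return 2.0 * v1 * bound
\end{verbatim}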
\subsection{Proof of Lemma~\ref{lmm::bound_crosscovariance}}\label{sec::proof_crosscovariance}
As shown in the proof of Lemma~\ref{lmm::bound_expectation}, we know that the limiting processes $\bm{N}$ and $\widetilde{\bm{N}}$ exist and satisfy \eqref{eqn::N_infinity} and \eqref{eqn::Ntilde_infinity}, respectively.
We first bound the cross-term $ \mathbb{E}\big| \mathrm{d} \bm{N}(t') \mathrm{d} \bm{N}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)- \mathrm{d} \widetilde{\bm{N}}(t') \mathrm{d} {\bm{N}}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u)$.
To begin, we show that there exists an upper bound on $ \mathbb{E}\big| \mathrm{d} \bm{N}(t') \mathrm{d} \bm{N}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)- \mathrm{d} \widetilde{\bm{N}}(t') \mathrm{d} {\bm{N}}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u)$ for any $u \in \mathbb{R}$ and $t'\geq u+z$.
The triangle inequality yields that
\begin{equation}\label{eqn::cross_term_zero}
\begin{aligned}
& \mathbb{E}\big| \mathrm{d} \bm{N}(t') \mathrm{d} \bm{N}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)- \mathrm{d} \widetilde{\bm{N}}(t') \mathrm{d} {\bm{N}}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u)\\
\preceq & \mathbb{E}\big| \mathrm{d} \bm{N}(t') \mathrm{d} \bm{N}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u) + \mathbb{E} \big| \mathrm{d} \widetilde{\bm{N}}(t') \mathrm{d} {\bm{N}}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u)\\
= & \bm{V}(t'-u-z) + \bm{\Lambda} \bm{\Lambda}^{{ \mathrm{\scriptscriptstyle T} }} + \mathbb{E} \big| \mathrm{d} \widetilde{\bm{N}}(t') \mathrm{d} {\bm{N}}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u),
\end{aligned}
\end{equation}
where the equality follows from the definition of $\bm{V}$ and $\bm{\Lambda}$.
To find an upper bound for $\mathbb{E} \big| \mathrm{d} \widetilde{\bm{N}}(t') \mathrm{d} {\bm{N}}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u)$, recall the construction of $\{\bm{N}^{(i)}\}_{i=1}^{\infty}$: For $i=1$, we have
\begin{equation}
\frac{\mathbb{E} \big| \mathrm{d} \widetilde{{N}}^{(1)}_j(t') \mathrm{d} {{N}}^{(1)}_k(u+z)\big|}{\mathrm{d} t' \mathrm{d}u} = \frac{\mathbb{E} \big| \widetilde{{N}}^{(0)}_j\big([0,\phi_j(\mu_j)] \times \mathrm{d} t'\big) {{N}}^{(0)}_k \big([0,\phi_k(\mu_k)] \times \mathrm{d} (u+z) \big) \big|}{\mathrm{d} t' \mathrm{d}u}=\phi_j(\mu_j)\phi_k(\mu_k),
\end{equation}
where the first equality follows by definition and the second equality follows from the fact that $\widetilde{{N}}^{(0)}_j$ and ${{N}}^{(0)}_k$ are independent when $t'\geq u + z > z$.
Define $\bm{A}^{(1)}=\big(A^{(1)}_{j,k}\big)$ where $A^{(1)}_{j,k}=\phi_j(\mu_j)\phi_k(\mu_k)$.
Suppose that for $m=i$
\begin{equation}\label{eqn::crossterm}
{\mathbb{E} \big| \mathrm{d} \widetilde{\bm{N}}^{(i)}(t') \mathrm{d} {\bm{N}}^{(i)}(u+z)\big|}/{\mathrm{d} t' \mathrm{d}u} \preceq \bm{A}^{(i)}.
\end{equation}
The computation above shows that \eqref{eqn::crossterm} holds for $i=1$ with this choice of $\bm{A}^{(1)}$.
Then for $m=i+1$, it follows that
\begin{equation*}
\begin{aligned}
& {\mathbb{E} \big| \mathrm{d} \widetilde{{N}}_j^{(i+1)}(t') \mathrm{d} {{N}}_k^{(i+1)}(u+z)\big|}/{\mathrm{d} t' \mathrm{d}u}\\
= & \mathbb{E} \big| \widetilde{{N}}^{(0)}_j\big(\big[0,\lambda_j^{(i+1)}(t') \big] \times \mathrm{d} t'\big) {{N}}^{(0)}_k\big(\big[0,\lambda_{k}^{(i+1)}(u+z)\big] \times \mathrm{d} (u+z)\big)\big|/{\mathrm{d} t' \mathrm{d}u} \\
= & \mathbb{E} \Big| \phi_j\Big\{ \mu_j + \sum_{l=1}^p \int_0^{\infty} \omega_{l,j}(\Delta) \mathrm{d} \widetilde{N}_l^{(i)}(t'- \Delta) \Big\} \phi_k\Big\{ \mu_k + \sum_{l=1}^p \int_0^{\infty} \omega_{l,k}(\Delta) \mathrm{d} {N}_l^{(i)}(u+z- \Delta) \Big\} \Big|,
\end{aligned}
\end{equation*}
where the equalities follow by definition.
Using the Lipschitz condition for the link function yields
\begin{equation}\label{eqn::lip_cond}
\begin{aligned}
\phi_j\left\{ \mu_j + \sum_{l=1}^p \int_0^{\infty} \omega_{l,j}(\Delta) \mathrm{d} \widetilde{N}_l^{(i)}(t'- \Delta) \right\} \leq & \phi_j(\mu_j)+ \left| \sum_{l=1}^p \int_0^{\infty} \omega_{l,j}(\Delta) \mathrm{d} \widetilde{N}_l^{(i)}(t'- \Delta)\right|
\end{aligned}
\end{equation}
With this, we see that
\begin{equation}\label{eqn::prod_links}
\begin{aligned}
& \mathbb{E} \Big| \phi_j\Big\{ \mu_j + \sum_{l=1}^p \int_0^{\infty} \omega_{l,j}(\Delta) \mathrm{d} \widetilde{N}_l^{(i)}(t'- \Delta) \Big\} \phi_k\Big\{ \mu_k + \sum_{l=1}^p \int_0^{\infty} \omega_{l,k}(\Delta) \mathrm{d} {N}_l^{(i)}(u+z- \Delta) \Big\} \Big|\\
\leq & \mathbb{E} \Big|\Big[ \phi_j(\mu_j)+ \sum_{l=1}^p \int_0^{\infty} \big|\omega_{l,j}(\Delta)\big| \mathrm{d} \widetilde{N}_l^{(i)}(t'- \Delta) \Big] \Big[ \phi_k(\mu_k) + \sum_{l=1}^p \int_0^{\infty} \big|\omega_{l,k}(\Delta)\big| \mathrm{d} {N}_l^{(i)}(u+z- \Delta) \Big] \Big|\\
= & \mathbb{E} \Big| \phi_j(\mu_j)\phi_k(\mu_k) + \phi_k(\mu_k) \sum_{l=1}^p \int_0^{\infty} \big|\omega_{l,j}(\Delta)\big| \mathrm{d} \widetilde{N}_l^{(i)}(t'- \Delta) + \\
& \phi_j(\mu_j)\sum_{l=1}^p \int_0^{\infty} \big|\omega_{l,k}(\Delta)\big| \mathrm{d} {N}_l^{(i)}(u+z- \Delta) + \\
& \sum_{l=1}^p \int_0^{\infty} \big|\omega_{l,j}(\Delta)\big| \mathrm{d} \widetilde{N}_l^{(i)}(t'- \Delta) \sum_{l'=1}^p \int_0^{\infty} \big|\omega_{l',k}(\Delta')\big| \mathrm{d} {N}_{l'}^{(i)}(u+z- \Delta')\Big|\\
= & C^{(i)}_{j,k} +\mathbb{E} \Big| \sum_{l=1}^p \int_0^{\infty} \big|\omega_{l,j}(\Delta)\big| \mathrm{d} \widetilde{N}_l^{(i)}(t'- \Delta) \sum_{l'=1}^p \int_0^{\infty} \big|\omega_{l',k}(\Delta')\big| \mathrm{d} {N}_{l'}^{(i)}(u+z- \Delta')\Big|,
\end{aligned}
\end{equation}
where
\begin{equation*}
\begin{aligned}
C^{(i)}_{j,k} \equiv & \mathbb{E} \Big| \phi_j(\mu_j)\phi_k(\mu_k) + \phi_k(\mu_k) \sum_{l=1}^p \int_0^{\infty} \big|\omega_{l,j}(\Delta)\big| \mathrm{d} \widetilde{N}_l^{(i)}(t'- \Delta)\\
& +\phi_j(\mu_j)\sum_{l'=1}^p \int_0^{\infty} \big|\omega_{l',k}(\Delta)\big| \mathrm{d} {N}_{l'}^{(i)}(u+z- \Delta)\Big|\\
\leq & \phi_j(\mu_j)\phi_k(\mu_k)+ \phi_k(\mu_k)\bm{\Omega}_{j,\cdot} \cdot \bm{\Lambda}+\phi_j(\mu_j)\bm{\Omega}_{k,\cdot} \cdot \bm{\Lambda}
\end{aligned}
\end{equation*}
is bounded by construction.
Rewriting the right-hand side of \eqref{eqn::prod_links} yields
\begin{equation*}
\begin{aligned}
& \mathbb{E} \Big| \phi_j\Big\{ \mu_j + \sum_{l=1}^p \int_0^{\infty} \omega_{l,j}(\Delta) \mathrm{d} \widetilde{N}_l^{(i)}(t'- \Delta) \Big\} \phi_k\Big\{ \mu_k + \sum_{l=1}^p \int_0^{\infty} \omega_{l,k}(\Delta) \mathrm{d} {N}_l^{(i)}(u+z- \Delta) \Big\} \Big|\\
\leq & C^{(i)}_{j,k} + \mathbb{E} \Big| \sum_{ 1\leq l,l' \leq p} \int \int \big|\omega_{l,j}(\Delta) \omega_{l',k}(\Delta')\big| \mathrm{d}\widetilde{N}_l^{(i)}(t'-\Delta) \mathrm{d}N_{l'}^{(i)}(u+z-\Delta')\Big| \\
= & C^{(i)}_{j,k} + \sum_{ 1\leq l,l' \leq p} \int \int \big|\omega_{l,j}(\Delta) \omega_{l',k}(\Delta')\big| \mathbb{E} \Big| \mathrm{d}\widetilde{N}_l^{(i)}(t'-\Delta) \mathrm{d}N_{l'}^{(i)}(u+z-\Delta')\Big|\\
\leq & C^{(i)}_{j,k} + \sum_{ 1\leq l,l' \leq p} \int \int \big|\omega_{l,j}(\Delta) \omega_{l',k}(\Delta')\big| A^{(i)}_{l,l'} \mathrm{d} \Delta \mathrm{d} \Delta' \\
\leq & C^{(i)}_{j,k} + \sum_{l,l'} \Omega_{j,l} \Omega_{k,l'} A^{(i)}_{l,l'}\\
= & C^{(i)}_{j,k} + \bm{\Omega}_{j,\cdot} \bm{A}^{(i)} \bm{\Omega}_{k, \cdot}^{{ \mathrm{\scriptscriptstyle T} }},
\end{aligned}
\end{equation*}
where the first equality holds since all terms in the integral are non-negative, and the second-to-last inequality holds due to the induction condition \eqref{eqn::crossterm}.
Therefore, we can see that \eqref{eqn::crossterm} holds for $i+1$ with $\bm{A}^{(i+1)}\equiv \bm{\Omega} \bm{A}^{(i)} \bm{\Omega}^{{ \mathrm{\scriptscriptstyle T} }} + \bm{C}^{(i)}$.
By induction,
$$\bm{A}^{(i+1)} = \bm{\Omega}^{i} \bm{A}^{(1)} \left(\bm{\Omega}^{{ \mathrm{\scriptscriptstyle T} }} \right)^{i} +\sum_{m=1}^i \bm{\Omega}^{i-m} \bm{C}^{(m)} \left(\bm{\Omega}^{{ \mathrm{\scriptscriptstyle T} }} \right)^{i-m}.$$
Given that $\Gamma_{\max}(\bm{\Omega}) <1$ from Assumption~\ref{asmp::spectralradius}, we know that $\bm{A}^{(\infty)}\equiv \lim_{i\rightarrow \infty} A^{(i)}$ exists and thus $\mathbb{E} \big| \mathrm{d} \widetilde{\bm{N}}(t') \mathrm{d} {\bm{N}}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u)$ has a well-defined element-wise bound, which we denote by $\bm{A}$.
Having bounded the last term in \eqref{eqn::cross_term_zero}, we now establish the desired bound by induction. First, for any $u$,
\begin{equation}\label{eqn::cross_term_zero_cmp}
\begin{aligned}
& \mathbb{E}\big| \mathrm{d} \bm{N}(t') \mathrm{d} \bm{N}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)- \mathrm{d} \widetilde{\bm{N}}(t') \mathrm{d} {\bm{N}}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u)\\
\preceq & \bm{V}(t'-u-z) + \bm{\Lambda} \bm{\Lambda}^{{ \mathrm{\scriptscriptstyle T} }} + \bm{A} \preceq v_2 \bm{J} \bm{J}^{{ \mathrm{\scriptscriptstyle T} }},
\end{aligned}
\end{equation}
where $v_2=\max_{j,k,\Delta} \big\{|V_{j,k}(\Delta)| +\Lambda_j \Lambda_k + A_{j,k}\big\}$.
Suppose, for $m=n-1$, that, for $u \geq (n-2)b$,
$$ \mathbb{E}\big| \mathrm{d} \bm{N}(t') \mathrm{d} \bm{N}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)- \mathrm{d} \widetilde{\bm{N}}(t') \mathrm{d} {\bm{N}}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u) \preceq v_2\left( \bm{\Omega}^{n-1} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} + \sum_{i=1}^{n-1} \bm{\Omega}^{i-1} \bm{\eta} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }}\right),$$
where $\bm{\Omega}^{0}$ is the identity matrix.
Then, for $m=n$ (i.e., $u \geq (n-1)b$),
\begin{equation*}
\begin{aligned}
& \mathbb{E}\big| \mathrm{d} {N}_j(t') \mathrm{d}{N}_{k}(u+z)- \mathrm{d} \widetilde{{N}}_j(t') \mathrm{d} {{N}}_{k}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u)\\
= & \mathbb{E}\big|
\big[\tilde{\lambda}_j(t')-\lambda_k(u+z) \big] \mathrm{d} {{N}}_{k}(u+z)\big|/\mathrm{d}u\\
= & \mathbb{E}\left| \left[
\phi_j\Big( \mu_j + \bm{\omega}_{\cdot,j}* \mathrm{d} \widetilde{\bm{N}}(t') \Big) - \phi_j\Big( \mu_j + \bm{\omega}_{\cdot,j} * \mathrm{d} {\bm{N}}(u+z) \Big)
\right] \mathrm{d} {{N}}_{k}(u+z)\right|/\mathrm{d}u\\
\leq & \mathbb{E}\left| \Big|
\phi_j\Big( \mu_j + \bm{\omega}_{\cdot,j}* \mathrm{d} \widetilde{\bm{N}}(t') \Big) - \phi_j\Big( \mu_j + \bm{\omega}_{\cdot,j} * \mathrm{d} {\bm{N}}(u+z) \Big)
\Big| \mathrm{d} {{N}}_{k}(u+z)\right|/\mathrm{d}u\\
\leq & \mathbb{E}\left| \Big|\sum_{l=1}^p \int_0^{\infty} \omega_{l,j}(\Delta) \big[ \mathrm{d}\widetilde{N}_l(t'-\Delta) - \mathrm{d}{N}_l(u+z-\Delta) \big] \Big| \mathrm{d} {{N}}_{k}(u+z)\right|/\mathrm{d}u\\
\leq & \sum_{l=1}^p \int_0^{\infty} \mathbb{E}\big| \omega_{l,j}(\Delta) \big[ \mathrm{d}\widetilde{N}_l(t'-\Delta) - \mathrm{d}{N}_l(u+z-\Delta) \big] \big| \mathrm{d} {{N}}_{k}(u+z)/\mathrm{d}u,
\end{aligned}
\end{equation*}
where the second inequality follows from the Lipschitz condition of $\phi_j$ \eqref{eqn::lip_cond}.
Then, for each $l$, we can see that
\begin{equation*}
\begin{aligned}
& \int_0^{\infty} \mathbb{E}\big| \omega_{l,j}(\Delta) \big[ \mathrm{d}\widetilde{N}_l(t'-\Delta) - \mathrm{d}{N}_l(u+z-\Delta) \big] \big| \mathrm{d} {{N}}_{k}(u+z)/\mathrm{d}u\\
\leq & \int_0^{\infty} \big|\omega_{l,j}(\Delta) \big|\mathbb{E} \big| \mathrm{d}\widetilde{N}_l(t'-\Delta)\mathrm{d} {{N}}_{k}(u+z) - \mathrm{d}{N}_l(u+z-\Delta)\mathrm{d} {{N}}_{k}(u+z) \big|/\mathrm{d}u\\
= & \left\{ \int_0^{b} + \int_{b}^{\infty}\right\}\big|\omega_{l,j}(\Delta) \big| \, \mathbb{E} \big| \mathrm{d}\widetilde{N}_l(t'-\Delta)\mathrm{d} {{N}}_{k}(u+z) - \mathrm{d}{N}_l(u+z-\Delta)\mathrm{d} {{N}}_{k}(u+z) \big|/\mathrm{d}u\\
\leq & v_2 \Omega_{j,l} \Big[ \bm{\Omega}^{n-1} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} + \sum_{i=1}^{n-1} \bm{\Omega}^{i-1} \bm{\eta} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} \Big]_{l,k} +\eta_{j,l} v_2.
\end{aligned}
\end{equation*}
Thus,
\begin{equation*}
\begin{aligned}
& \mathbb{E}\big| \mathrm{d} {N}_j(t') \mathrm{d}{N}_{k}(u+z)- \mathrm{d} \widetilde{{N}}_j(t') \mathrm{d} {{N}}_{k}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u) \\
\leq & v_2\bm{\Omega}_{j,\cdot}^{{ \mathrm{\scriptscriptstyle T} }}\Big[ \bm{\Omega}^{n-1} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} + \sum_{i=1}^{n-1} \bm{\Omega}^{i-1} \bm{\eta} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} \Big]_{\cdot,k}+ v_2 \bm{\eta}_{j,\cdot}\bm{J}.
\end{aligned}
\end{equation*}
Therefore, we have shown that, for $u \geq (n-1)b$
\begin{equation}\label{eqn::cross_term}
\mathbb{E}\big| \mathrm{d} \bm{N}(t') \mathrm{d} \bm{N}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)- \mathrm{d} \widetilde{\bm{N}}(t') \mathrm{d} {\bm{N}}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u) \preceq v_2\left( \bm{\Omega}^{n} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} + \sum_{i=1}^{n} \bm{\Omega}^{i-1} \bm{\eta} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }}\right).
\end{equation}
Finally, we return to bounding $\mathbb{E}\big| \mathrm{d} \bm{N}(t') \mathrm{d} \bm{N}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)- \mathrm{d} \widetilde{\bm{N}}(t') \mathrm{d} \widetilde{\bm{N}}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u)$.
Note that when $t'=u+z$,
$$ \mathbb{E}\big| \mathrm{d} {N}_k(u+z) \mathrm{d} {N}_k(u+z)- \mathrm{d} \widetilde{{N}}_k(u+z) \mathrm{d} \widetilde{{N}}_k(u+z)\big| = \mathbb{E}\big| \mathrm{d} {N}_k(u+z)- \mathrm{d} \widetilde{{N}}_k(u+z)\big|. $$
This quantity was bounded in Lemma~\ref{lmm::bound_expectation}.
Moreover,
\begin{equation*}
\small
\mathbb{E}\big| \mathrm{d} \bm{N}(t') \mathrm{d}\bm{N}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)-d \widetilde{\bm{N}}(t')\mathrm{d} \widetilde{\bm{N}}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big| = \left[ \mathbb{E}\big| \mathrm{d}\bm{N}(u+z) \mathrm{d} \bm{N}^{{ \mathrm{\scriptscriptstyle T} }}(t')-\mathrm{d} \widetilde{\bm{N}}(u+z) \mathrm{d} \widetilde{\bm{N}}^{{ \mathrm{\scriptscriptstyle T} }}(t')\big| \right]^{{ \mathrm{\scriptscriptstyle T} }}.
\end{equation*}
Hence, it suffices to consider the case when $t' > u+z$.
Now, for any $u \geq (n-1)b$ and $t' > u+z$, we have
\begin{equation*}
\begin{aligned}
& \mathbb{E}\big| \mathrm{d} {N}_j(t') \mathrm{d} {N}_k^{{ \mathrm{\scriptscriptstyle T} }}(u+z)- \mathrm{d} \widetilde{{N}}_j(t') \mathrm{d} \widetilde{{N}}_k^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u) \\
= & \mathbb{E}\big| \mathrm{d} {N}_j(t') \mathrm{d} {N}_k^{{ \mathrm{\scriptscriptstyle T} }}(u+z)- \mathrm{d} \widetilde{{N}}_j(t') \mathrm{d} {{N}}_k^{{ \mathrm{\scriptscriptstyle T} }}(u+z)+ \mathrm{d} \widetilde{{N}}_j(t') \mathrm{d} {{N}}_k^{{ \mathrm{\scriptscriptstyle T} }}(u+z)- \mathrm{d} \widetilde{{N}}_j(t') \mathrm{d} \widetilde{{N}}_k^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u)\\
\leq & \mathbb{E}\big| \big[\mathrm{d} {N}_j(t') - \mathrm{d} \widetilde{{N}}_j(t')\big] \mathrm{d} {{N}}_k^{{ \mathrm{\scriptscriptstyle T} }}(u+z) \big| /(\mathrm{d} t' \mathrm{d}u) + \mathbb{E}\big| \mathrm{d} \widetilde{{N}}_j(t') \big[\mathrm{d} {{N}}_k^{{ \mathrm{\scriptscriptstyle T} }}(u+z)- \mathrm{d} \widetilde{{N}}_k^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big]\big|/(\mathrm{d} t' \mathrm{d}u)\\
= & \mathbb{E}\big| \mathbb{E}\big\{ \big[\mathrm{d} {N}_j(t') - \mathrm{d} \widetilde{{N}}_j(t')\big]/ \mathrm{d} t' \mid \mathcal{H}_{t'}, \widetilde{\mathcal{H}}_{t'}\big\}\mathrm{d} {{N}}_k(u+z) \big| /(\mathrm{d}u) + \\
& \mathbb{E}\big| \mathbb{E}\big\{ \big[ \mathrm{d} \widetilde{{N}}_j(t')\big]/ \mathrm{d} t' \mid \widetilde{\mathcal{H}}_{t'}\big\}\big[\mathrm{d} {{N}}_k(u+z)- \mathrm{d} \widetilde{{N}}_k(u+z)\big] \big|/(\mathrm{d}u)\\
= & \mathbb{E}\big|\big[ \lambda_j(t') - \tilde{\lambda}_j(t')\big] \mathrm{d} {{N}}_k(u+z) \big| /(\mathrm{d}u) + \mathbb{E}\big| \tilde{\lambda}_j(t') \big[\mathrm{d} {{N}}_k(u+z)- \mathrm{d} \widetilde{{N}}_k(u+z)\big] \big|/(\mathrm{d}u).
\end{aligned}
\end{equation*}
Next, we use the Lipschitz condition of the link function $\phi_j$.
Recall that
$$ \lambda_j(t) = \phi_j\Big\{ \mu_j + \big(\bm{\omega}_{\cdot,j}* \mathrm{d} {\bm{N}}\big)(t) \Big\} \quad {\rm and} \quad \tilde{\lambda}_j(t) = \phi_j\Big\{ \mu_j + \big(\bm{\omega}_{\cdot,j}* \mathrm{d} \widetilde{\bm{N}}\big)(t) \Big\}.$$
Then,
\begin{equation*}
\begin{aligned}
& \mathbb{E}\big|\big[ \lambda_j(t') - \tilde{\lambda}_j(t')\big] \mathrm{d} {{N}}_k(u+z) \big| /(\mathrm{d}u) \\
\leq & \mathbb{E}\Big|\Big\{\sum_{l=1}^p \int_0^{\infty} \omega_{l,j}(\Delta) \big[\mathrm{d} N_l(t'-\Delta) -\mathrm{d} \widetilde{N}_l(t'-\Delta)\big] \Big\} \mathrm{d} {{N}}_k(u+z) \Big| /(\mathrm{d}u)\\
\leq & \sum_{l=1}^p \int_0^{\infty} \big|\omega_{l,j}(\Delta)\big| \mathrm{d} \Delta \max_{u \geq (n-1)b}\Big\{ \frac{\mathbb{E}\big|\mathrm{d} N_l(t'-\Delta) \mathrm{d} {{N}}_{k}(u+z) -\mathrm{d} \widetilde{N}_l(t'-\Delta) \mathrm{d} {{N}}_{k}(u+z)\big|}{\mathrm{d}\Delta \mathrm{d}u } \Big\}\\
\leq & v_2\sum_{l=1}^p \Omega_{j,l} \Big[ \bm{\Omega}^{n} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} + \sum_{i=1}^{n} \bm{\Omega}^{i-1}\bm{\eta}\bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }}\Big]_{l,k}.
\end{aligned}
\end{equation*}
And, similarly,
\begin{equation*}
\begin{aligned}
&\mathbb{E}\big| \tilde{\lambda}_j(t') \big[\mathrm{d} {{N}}_k(u+z)- \mathrm{d} \widetilde{{N}}_k(u+z)\big] \big|/(\mathrm{d}u) \\
\leq & \mathbb{E}\Big| \Big[\phi_j(\mu_j) + \sum_{l=1}^p \int_0^{\infty} \omega_{l,j}(\Delta)\mathrm{d} \widetilde{N}_l (t'-\Delta) \Big] \big[\mathrm{d} {{N}}_k(u+z)- \mathrm{d} \widetilde{{N}}_k(u+z)\big] \Big|/(\mathrm{d}u)\\
\leq & \phi_j(\mu_j) \max_{u \geq (n-1)b} \mathbb{E}\big|\mathrm{d} {{N}}_k(u+z)- \mathrm{d} \widetilde{{N}}_k(u+z)\big| /(\mathrm{d}u)+\\
& \sum_{l=1}^p \int_0^{\infty} |\omega_{l,j}(\Delta)| \mathrm{d} \Delta \max_{u \geq (n-1)b} \left\{\frac{\mathbb{E}\big|\mathrm{d} \widetilde{N}_l (t'-\Delta)\mathrm{d} {{N}}_k(u+z)- \mathrm{d} \widetilde{N}_l (t'-\Delta) \mathrm{d} \widetilde{{N}}_k(u+z) \big|}{\mathrm{d} \Delta \mathrm{d}u} \right\}\\
\leq & 2v_1^2 \left( \bm{\Omega}^{n} \bm{J}\right)_k + 2v_1^2 \sum_{i=1}^{n} \left(\bm{\Omega}^{i-1} \bm{\eta}\bm{J}\right)_k+\sum_{l=1}^p \Omega_{j,l} v_2 \Big[\bm{\Omega}^{n} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} + \sum_{i=1}^{n} \bm{\Omega}^{i-1} \bm{\eta} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }}\Big]_{l,k},
\end{aligned}
\end{equation*}
where the second inequality follows from the Lipschitz condition of $\phi_j$ \eqref{eqn::lip_cond}.
To summarize, we have shown that
\begin{equation*}
\begin{aligned}
& \mathbb{E}\big| \mathrm{d} {N}_j(t') \mathrm{d} {N}_k^{{ \mathrm{\scriptscriptstyle T} }}(u+z)- \mathrm{d} \widetilde{{N}}_j(t') \mathrm{d} \widetilde{{N}}_k^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u) \\
\leq & 2v_2 \bm{\Omega}_{j,\cdot} \Big[\bm{\Omega}^{n} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} + \sum_{i=1}^{n} \bm{\Omega}^{i-1} \bm{\eta}\bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }}\Big]_{\cdot,k}+2v_1^2 \left\{ \left( \bm{\Omega}^{n} \bm{J}\right)_k +\sum_{i=1}^{n} \left( \bm{\Omega}^{i-1} \bm{\eta} \bm{J}\right)_k\right\}.
\end{aligned}
\end{equation*}
In matrix form, we have, for $u > (n-1)b$,
\begin{equation*}
\begin{aligned}
& \mathbb{E}\big| \mathrm{d} \bm{N}(t') \mathrm{d} \bm{N}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)- \mathrm{d} \widetilde{\bm{N}}(t') \mathrm{d} \widetilde{\bm{N}}^{{ \mathrm{\scriptscriptstyle T} }}(u+z)\big|/(\mathrm{d} t' \mathrm{d}u) \\
\preceq &
2 v_2 \bm{\Omega}^{n+1} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} +2 v_1^2 \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} \big(\bm{\Omega}^{n}\big)^{{ \mathrm{\scriptscriptstyle T} }} +2 v_2 \sum_{i=1}^{n} \bm{\Omega}^i \bm{\eta}\bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} + 2 v_1^2 \sum_{i=1}^{n} \bm{\eta}\bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} \big(\bm{\Omega}^{i}\big)^{{ \mathrm{\scriptscriptstyle T} }},
\end{aligned}
\end{equation*}
or, alternatively,
\begin{equation}
\begin{aligned}
& {\mathbb{E}\big| \mathrm{d} \widetilde{\bm{N}}(t') \mathrm{d} \widetilde{\bm{N}}(z+u) - \mathrm{d} {\bm{N}}(t') \mathrm{d} {\bm{N}}(z+u) \big|}/\big({\mathrm{d} u \mathrm{d} t'} \big) \\
\preceq & 2 v_2 \left\{ \bm{\Omega}^{\lfloor u/b +2 \rfloor} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} + \sum_{i=1}^{\lfloor u/b +1 \rfloor} \bm{\Omega}^i \bm{\eta} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }}\right\} + 2 v_1^2 \left\{ \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} \big(\bm{\Omega}^{\lfloor u/b +1 \rfloor}\big)^{{ \mathrm{\scriptscriptstyle T} }} + \sum_{i=1}^{\lfloor u/b +1 \rfloor} \bm{\eta} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} \big(\bm{\Omega}^{i}\big)^{{ \mathrm{\scriptscriptstyle T} }}\right\},
\end{aligned}
\end{equation}
as required. \QEDB
\subsection{Proof of Lemma~\ref{lmm::product}}\label{sec::proof_product}
Let $M= (nK_2 /K_1)^{1/2} $.
We have that
\begin{align*}
P(|Z_1 Z_2|> n ) & = P( \{ |Z_2| \geq M, |Z_1 Z_2|> n \} \cup \{ |Z_2| < M, |Z_1 Z_2|> n \} ) \\
& = P( \{ |Z_2| \geq M, |Z_1 Z_2|> n \} ) + P( \{ |Z_2| < M, |Z_1 Z_2|> n \} ) \\
& \leq P( |Z_2| \geq M ) + P( \{ |Z_2| < M, M |Z_1|> n \} ) \\
& \leq P( |Z_2| \geq M ) + P(M |Z_1|> n ) \\
& \leq 2\exp\big( 1- \{n/(K_1 K_2)\}^{1/2} \big)\\
& \leq \exp\big( 1- (n/K^*)^{1/2}\big),
\end{align*}
where the last inequality follows from the proof of Lemma A.2 in \cite{fan2014}. \QEDB
\subsection{Proof of Lemma~\ref{lmm::elementwise_expectation}}\label{sec::proof_elementwise_expectation}
From Assumption~\ref{asmp::uniform}, we have that $(\bm{\Omega}\bm{J})_{j} \leq \rho_{\Omega}$, which is equivalent to
$ \bm{\Omega}\bm{J}\preceq \rho_{\Omega}\bm{J}$.
Then, by induction, $\bm{\Omega}^n\bm{J}\preceq \rho_{\Omega}^n\bm{J}$.
Hence, from Theorem~\ref{thm::coupling}, we know that, for each $j$,
\begin{equation*}
\begin{aligned}
\mathbb{E}\big|\mathrm{d} \widetilde{{N}}_j(z+u) - \mathrm{d}{{N}}_j(z+u) \big|/\mathrm{d}u \leq & \big( 2 v \bm{\Omega}^{\lfloor u/b +1 \rfloor} \bm{J}+ 2v \sum_{i=1}^{\lfloor u/b +1 \rfloor} \bm{\Omega}^{i-1} \bm{\eta}\bm{J}\big)_j\\
= & \big( 2 v \bm{\Omega}^{\lfloor u/b +1 \rfloor} \bm{J}\big)_j+ 2v \sum_{i=1}^{\lfloor u/b +1 \rfloor} \big(\bm{\Omega}^{i-1} \bm{\eta}\bm{J}\big)_j\\
\leq & \big( 2 v \bm{\Omega}^{\lfloor u/b +1 \rfloor} \bm{J}\big)_j+ 2v \max_{j} [\bm{\eta}_{j,\cdot} \bm{J}] \sum_{i=1}^{\lfloor u/b +1 \rfloor} \big(\bm{\Omega}^{i-1} \bm{J}\big)_j\\
\leq & 2v \rho_{\Omega}^{\lfloor u/b +1 \rfloor} + 2v \max_{j} [\bm{\eta}_{j,\cdot} \bm{J}] \sum_{i=1}^{\lfloor u/b +1 \rfloor} \rho_{\Omega}^{i-1}\\
\leq & 2v \rho_{\Omega}^{\lfloor u/b +1 \rfloor} + 2v \max_{j} [\bm{\eta}_{j,\cdot} \bm{J}] \frac{\rho_{\Omega}}{1-\rho_{\Omega}},
\end{aligned}
\end{equation*}
where we use that $v=\max(v_1,v_1^2,v_2)$.
Let $b=\max(b_0, \log^{1/(r+1)}(\rho_{\Omega}^{-1})(c_{1})^{-1/(r+1)} u^{1/(r+1)})$. Then by Assumption~\ref{asmp::tail_highd}
$$\max_{j} [\bm{\eta}_{j,\cdot} \bm{J}]\leq c_{2} \exp\{- \log^{r/(r+1)}(\rho_{\Omega}^{-1})(c_{1}) ^{1/(r+1)} u^{r/(r+1)} \}.$$
Therefore,
\begin{equation*}
\begin{aligned}
& \mathbb{E}\big|\mathrm{d} \widetilde{{N}}_j(z+u) - \mathrm{d}{{N}}_j(z+u) \big|/\mathrm{d}u \\
\leq & 2v \rho_{\Omega}^{\lfloor u/b +1 \rfloor} + 2v \max_{j} [\bm{\eta}_{j,\cdot} \bm{J}] \frac{\rho_{\Omega}}{1-\rho_{\Omega}}\\
\leq & [2 v +2v \frac{\rho_{\Omega}}{1-\rho_{\Omega}}c_{2} ] \exp\{- \log^{r/(r+1)}(\rho_{\Omega}^{-1})(c_{1})^{1/(r+1)} u^{r/(r+1)} \}\\
\equiv & a_{1} v \exp\big(-a_{2} u^{r/(r+1)}\big),
\end{aligned}
\end{equation*}
where we use the fact that $\lfloor u/b +1 \rfloor\geq u/b$. \QEDB
\subsection{Proof of Lemma~\ref{lmm::elementwise_crosscovariance}}\label{sec::proof_elementwise_crosscovariance}
Similar to the proof of Lemma~\ref{lmm::elementwise_expectation}, we can rewrite the bound in Theorem~\ref{thm::coupling} as
\begin{equation}
\begin{aligned}
& \mathbb{E}\big| \mathrm{d} \widetilde{{N}}_j(t') \mathrm{d} \widetilde{{N}}_k(z+u) - \mathrm{d} {{N}}_j(t')\mathrm{d} {{N}}_k(z+u) \big|/(\mathrm{d} t' \mathrm{d} u) \\
\leq & \left[ 2 v_2 \bm{\Omega}^{\lfloor u/b +2 \rfloor} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} +2 v_1^2 \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} \big(\bm{\Omega}^{\lfloor u/b +1 \rfloor}\big)^{{ \mathrm{\scriptscriptstyle T} }} +2 v_2\sum_{i=1}^{\lfloor u/b +1 \rfloor} \bm{\Omega}^i \bm{\eta} \bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} + 2 v^2_1\sum_{i=1}^{\lfloor u/b +1 \rfloor} \bm{\eta}\bm{J}\bm{J}^{{ \mathrm{\scriptscriptstyle T} }} \big(\bm{\Omega}^{i}\big)^{{ \mathrm{\scriptscriptstyle T} }}\right]_{j,k}\\
\leq & 2 v \left[ \bm{\Omega}^{\lfloor u/b +2 \rfloor}\bm{J}\right]_{j} +2 v \left[\bm{\Omega}^{\lfloor u/b +1 \rfloor} \bm{J}\right]_{k} +2 v \sum_{i=1}^{\lfloor u/b +1 \rfloor} \left[ \bm{\Omega}^i \bm{\eta}\bm{J}\right]_{j}\\
\leq & 2 v \rho_{\Omega}^{\lfloor u/b +2 \rfloor} +2 v \rho_{\Omega}^{\lfloor u/b +1 \rfloor} +2 v \max_{j} [\bm{\eta}_{j,\cdot} \bm{J}] \sum_{i=1}^{\lfloor u/b +1 \rfloor} \rho_{\Omega}^i\\
\leq & 2 v \rho_{\Omega}^{\lfloor u/b +2 \rfloor} +2 v \rho_{\Omega}^{\lfloor u/b +1 \rfloor} +2 v\max_{j} [\bm{\eta}_{j,\cdot} \bm{J}] \frac{\rho_{\Omega}}{1-\rho_{\Omega}},
\end{aligned}
\end{equation}
where we use $v=\max(v_1,v_1^2,v_2)$ in the second inequality, and Assumption~\ref{asmp::tail_highd} in the second-to-last inequality.
Recall that we set $b=\max(b_0, \log^{1/(r+1)}(\rho_{\Omega}^{-1})(c_{1})^{-1/(r+1)} u^{1/(r+1)})$; Assumption~\ref{asmp::tail_highd} thus yields that
$$\max_{j} [\bm{\eta}_{j,\cdot} \bm{J}] \leq c_{2} \exp\{- \log^{r/(r+1)}(\rho_{\Omega}^{-1})(c_{1})^{1/(r+1)} u^{r/(r+1)} \}.$$
Thus, using the fact that $\rho_{\Omega}<1$,
\begin{equation}
\begin{aligned}
& \mathbb{E}\big| \mathrm{d} \widetilde{{N}}_j(t') \mathrm{d} \widetilde{{N}}_k(z+u) - \mathrm{d} {{N}}_j(t')\mathrm{d} {{N}}_k(z+u) \big|/(\mathrm{d} t' \mathrm{d} u) \\
\leq & \left(2v+2v+ 2v \frac{\rho_{\Omega}}{1-\rho_{\Omega}}+2p^{3/2}v \frac{\rho_{\Omega}}{1-\rho_{\Omega}} \right)\exp\left\{- \log^{r/(r+1)}(\rho_{\Omega}^{-1})(c_{1})^{1/(r+1)} u^{r/(r+1)} \right\}\\
\equiv & a_{3} v\exp\big(-a_{4} u^{r/(r+1)}\big).
\end{aligned}
\end{equation}
\QEDB
\subsection{Proof of Lemma~\ref{lmm::exponential_tail}}\label{sec::proof_exp}
We discuss two scenarios depending on which condition holds in Assumption~\ref{asmp::bounded}.
(i) Suppose that Assumption~\ref{asmp::bounded}a holds, i.e., $\lambda_j(t) \leq \phi_{\max}$. We construct a dominating process $\widehat{\bm{N}}$ as
\begin{equation}\label{eqn::dominating_process}
\mathrm{d} \widehat{N}_j(t) = {N}^{(0)}_j\big( [0, \phi_{\max}] \times \mathrm{d} t \big), \quad j=1,\ldots, p.
\end{equation}
We see that, by construction, $\mathrm{d} \widehat{N}_j(t) \geq \mathrm{d} {N}^{(i)}_j(t)$ for any $i$, and thus $\mathrm{d} \widehat{N}_j(t) \geq \lim_{i\to \infty} \mathrm{d} {N}^{(i)}_j(t)= \mathrm{d} {N}_j(t)$.
It then follows that, for any $n$ and $A \in \mathcal{B}(\mathbb{R})$,
\begin{equation}\label{eqn::thinning_tail}
\mathbb{P} \big(N_j\big(A\big) \geq n \big) \leq \mathbb{P} \big(\widehat{N}_j\big(A \big) \geq n \big).
\end{equation}
Now, by definition $\widehat{N}_j (A )$ is a Poisson random variable.
Then, by Lemma~\ref{lmm::poissontail}, the tail probability in \eqref{eqn::thinning_tail} decreases exponentially fast in $n$.
(ii) Suppose that Assumption~\ref{asmp::bounded}b holds. Recall that we represent the Hawkes process $\bm{N}$ using an iterative construction and a dominating homogeneous Poisson process $\bm{N}^{(0)}$ in Equations~\ref{eqn::iterative_initial_main}--\ref{eqn::iterative_construction_main}. Here we construct another process $\widehat{\bm{N}}$ in a similar manner, but with the intensity of $\widehat{\bm{N}}$ taking the form
\begin{equation}\label{eqn::HP_intensity_dom}
\hat{\lambda}_{j}(t)= \phi_j \big(\mu_{j} \big) + \sum_{k=1}^p \big(\hat{\omega}_{k,j} \ast \mathrm{d} \widehat{N}_k \big) (t), \;\;\; j=1,\ldots,p,
\end{equation}
where $\hat{\omega}_{j,k}(\Delta)\equiv|\omega_{j,k}(\Delta)|$.
From the Lipschitz condition of $\phi_j$, $\hat{\lambda}_{j}(t)$ is always at least as large as $\lambda_j(t)$, given the same history.
From the iterative construction, we can see that the process $\widehat{{N}}^{(i)}_j$ dominates ${N}^{(i)}_j$ for any $i$, which means that the limiting process $\widehat{N}_j$ dominates $N_j$.
As a result, we have ${{N}_j(A)} \leq \widehat{{N}}_j(A)$ for any bounded interval $A$. For ${\widehat{N}_j(A)}$, we know from Proposition~2.1 in \cite{reynaud2007} that $P(\widehat{{N}}_j(A)> n ) \leq \exp( 1- n/K)$, which implies that $P({{N}}_j(A)> n ) \leq \exp( 1- n/K)$. In other words, $N_j(A)$ has an exponential tail of order $1$.
\QEDB
\end{document}
\begin{document}
\title{Multi-partite entanglement speeds up quantum key distribution in networks}
\author{Michael Epping}
\email{[email protected]}
\affiliation{Institute for Quantum Computing, University of Waterloo, 200 University Ave. West, N2L 3G1 Waterloo, Ontario, Canada}
\affiliation{Institut f\"{u}r Theoretische Physik III, Heinrich-Heine-Universit\"{a}t D\"{u}sseldorf, Universit\"{a}tsstr. 1, D-40225
D\"{u}sseldorf, Germany}
\author{Hermann Kampermann}
\affiliation{Institut f\"{u}r Theoretische Physik III, Heinrich-Heine-Universit\"{a}t D\"{u}sseldorf, Universit\"{a}tsstr. 1, D-40225
D\"{u}sseldorf, Germany}
\author{Chiara Macchiavello}
\affiliation{Dipartimento di Fisica, Universit\`a di Pavia, and INFN-Sezione di Pavia, via Bassi 6, 27100 Pavia, Italy}
\author{Dagmar Bru\ss}
\affiliation{Institut f\"{u}r Theoretische Physik III, Heinrich-Heine-Universit\"{a}t D\"{u}sseldorf, Universit\"{a}tsstr. 1, D-40225
D\"{u}sseldorf, Germany}
\pacs{03.67.Dd,03.67.Bg,03.67.Pp}
\begin{abstract}
The laws of quantum mechanics allow for the distribution of
a secret random key between two parties.
Here we analyse the security of a protocol for
establishing a common secret key between N parties (i.e. a conference key),
using resource states with genuine N-partite entanglement.
We compare this protocol to conference key distribution via
bipartite entanglement, regarding the required resources,
achievable secret key rates and threshold qubit error rates. Furthermore we discuss quantum networks with bottlenecks for which our multipartite entanglement-based protocol can benefit from network coding, while the bipartite protocol cannot. It is shown how this advantage leads to a higher secret key rate.
\end{abstract}
\maketitle
\noindent
In the quantum world, randomness and security
are built-in properties~\cite{G+02,DLH06,B+07}: two parties may establish
a random secret key by exploiting
the no-cloning theorem~\cite{WZ82}, as in the BB84 protocol \cite{BB84}, or by
using the monogamy of entanglement~\cite{CKW00}, as in the Ekert protocol~\cite{Ekert91}.
Several variations of these seminal
protocols have been suggested~\cite{Bruss98,GG02,ZSW08,LCQ12,VV14}, and their security has been analysed
in detail~\cite{LC99,SP00,May01,Renner05,Sca+09,TL15,CML16}.\\
With the advent of quantum technologies, much
effort is devoted to building
quantum networks~\cite{ACL07,SECOQC09,LOW10,Sas+11,Met+13,Sat+16} and
creating global quantum states across them~\cite{EKB16,EKB16b}. Thus, the generalization of quantum key distribution (QKD) to multipartite scenarios
is topical. In order to establish a common secret key (the {\em conference key})
for $N$ parties, one
can follow mainly two different paths: building up the multipartite key
from bipartite QKD links (2QKD)~\cite{SS03}, see Fig.~\ref{fig:bipartite},
or exploiting correlations of
genuinely multipartite entangled states (NQKD)~\cite{Cabello00,SG01,CL05,Fu15},
see Fig.~\ref{fig:multipartite}. \\
\begin{figure}
\caption{The setup for $N$-partite conference key distribution. Black disks are qubits and the black lines connecting them indicate entanglement. Here, all quantum states are produced by Alice (A), who sends one subsystem to each of the other parties $B_i$. Both protocols require additional classical communication which is sent via open but authenticated channels. The grey background indicates the network infrastructure, i.e. the channels and nodes.
}
\label{fig:bipartite}
\label{fig:multipartite}
\label{fig:schemes}
\end{figure}
In this article we provide an information theoretic security analysis of NQKD, by
generalising methods developed for 2QKD in~\cite{Renner05,RGK05},
and perform an analytical calculation of secret key rates.
This enables us to quantitatively compare the two approaches;
we find that NQKD may outperform 2QKD, for example in networks with bottlenecks.
The article is structured as follows. In Section~\ref{sec:NQKD} we introduce the NQKD protocol and its prepare-and-measure variant, perform a detailed security analysis, and calculate the secret key rate. In Section~\ref{sec:implementation} we define the 2QKD protocol, summarise the steps of the NQKD protocol in an implementation and calculate the secret key rate for the example of a depolarised state. We explicitly model noise introduced by imperfect gates and channels in order to compare the performance of the two different approaches. Quantum networks are discussed in Section~\ref{sec:networks}. The article concludes with a discussion of the results.
\section{Multipartite QKD: protocol and security analysis}\label{sec:NQKD}
The entanglement-based Ekert protocol~\cite{Ekert91} can be generalised
to $N$ parties as follows, see also \cite{CL05}. The parties
$A$ and $B_1$, $B_2$, ..., $B_{N-1}$ share an $N$-partite entangled state
and perform local projective measurements.
The best performance in the ideal (noiseless) case is ensured
if one requires that the measurement outcomes of all parties are perfectly correlated
for one set of local bases -- which we can choose
without loss of generality to be the
$Z$-bases --
and occur with a uniform distribution.
The only pure $N$-qubit quantum state that
fulfils these requirements is the Greenberger-Horne-Zeilinger (GHZ) state~\cite{GHZ07};
however, for $N\geq3$, the existence of perfect correlations in one set of
bases forbids perfect
correlations (even only pairwise) in any other bases,
see Appendix~\ref{app:GHZ}. We remark that other protocols with less
than perfectly correlated resource states are possible, but will
introduce intrinsic errors~\cite{ZXP15}.\\
\subsection{The protocol for N-party quantum conference key distribution}
The protocol for N-party quantum conference key distribution (NQKD), with $N\geq 2$, consists of the following basic steps:
\begin{itemize}
\item[1)] {\sl State preparation:} The parties $A$ and $B_i$, ${i=1,2,...,N-1}$, share the $N$-qubit GHZ state
\begin{equation}
\ket{GHZ} = \frac{1}{\sqrt{2}}\left(\ket{0}^{\otimes N} + \ket{1}^{\otimes N}\right).\label{eq:GHZ}
\end{equation}
\item[2)] {\sl Measurement:}
There are two types of measurements. First type:
Party $A$ and parties $B_i$ measure their respective qubits in the $Z$-basis. Second type: They measure randomly, with equal probability, in the $X$- or $Y$-basis. As in the standard bipartite QKD protocol~\cite{Lo2005}, the latter type occurs much less frequently. The parties know the type of the measurement from a short pre-shared random key.
\item[3)] {\sl Parameter estimation:}
The parties announce the measurement bases and outcomes for the second type and an equal number of randomly chosen rounds of the first type. The announced data allows the parties to estimate the parameters $Q_X$ and $Q_Z$, which determine the secret key rate, see below.
\item[4)] {\sl Classical post-processing:} As
in the bipartite protocol,
error correction and privacy amplification are performed; for the
details see below.
\end{itemize}
Note that the state preparation in step 1) can be achieved by locally preparing the GHZ-state at Alice's site and sending qubits to the Bobs (see Fig.~\ref{fig:multipartite}), or any suitable sub-protocol that achieves the same task. We analyse the distribution via quantum repeaters~\cite{EKB16} and quantum
network coding~\cite{EKB16b} below.\\
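The correlations exploited in steps 2) and 3) can be checked with a short numerical sketch (Python/NumPy, purely illustrative and not part of the protocol): in the $Z$-bases all outcomes coincide and are uniformly distributed, while in the $X$-bases the joint outcomes always have even parity, i.e. $\langle X^{\otimes N}\rangle=+1$ for the noiseless GHZ state.
\begin{verbatim}
import numpy as np
from itertools import product

def ghz(N):
    """State vector of the N-qubit GHZ state."""
    psi = np.zeros(2**N)
    psi[0] = psi[-1] = 1.0 / np.sqrt(2.0)
    return psi

def outcome_distribution(psi, basis):
    """Joint outcome distribution when every party measures in the
    same Pauli basis ('Z' or 'X')."""
    N = int(np.log2(len(psi)))
    if basis == 'X':
        H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
        U = H
        for _ in range(N - 1):
            U = np.kron(U, H)   # Hadamard on every qubit maps X to Z
        psi = U @ psi
    probs = np.abs(psi)**2
    return {bits: p for bits, p in zip(product((0, 1), repeat=N), probs)
            if p > 1e-12}

# Z outcomes: perfectly correlated and uniform; X outcomes: even parity only.
print(outcome_distribution(ghz(4), 'Z'))
print(outcome_distribution(ghz(4), 'X'))
\end{verbatim}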
In the following section we briefly discuss prepare-and-measure variants of conference key distribution. Because the security proof of the NQKD protocol is done in the entanglement-based picture, this description is not necessary for understanding the rest of the paper.\\
\subsection{N-party prepare-and-measure schemes}
We now sketch two different prepare-and-measure schemes for conference key distribution.\\
{\sl 1) Preparing and measuring single qubits:} Single qubits are experimentally easier to prepare and
to distribute than entangled states. Thus, establishing a
conference key for $N$ parties by using single qubits is
an interesting possibility, which has been studied for the
case $N=3$ in \cite{Ryu07}.\\
The protocol proceeds in complete analogy to the case of
$N=2$, e.g. for BB84~\cite{BB84}: Alice prepares randomly $N-1$ copies of
a state $\ket{\phi_k}$, $k=1,\ldots,4$, taken from the set
$S_{BB84}=\{\ket{0},\ket{1},\ket{+},\ket{-}\}$,
with $\ket{\pm}=1/\sqrt{2}(\ket{0}\pm\ket{1})$. She sends each
party $B_i$, with $i=1,...,N$, one of the copies. Each party
$B_i$ measures in the $Z$- or $X$-basis.
In the sifting step, the $N$ parties keep only those cases
where all parties used the same basis, and thus
establish a joint key.\\
We point out, however, that the secret key rate
in this scenario decreases
with increasing $N$, even for perfect channels and measurements,
and goes to zero for $N\rightarrow\infty$: an eavesdropper
can eavesdrop on all $N-1$ sent states at the same time,
i.e. she has to distinguish the four global states
$ |\phi_k>^{\otimes (N-1)}$, pairs of which have either overlap 0 or
$(1/\sqrt{2})^{N-1}$, i.e. the distinguishability increases with increasing
$N$. In the limit of infinite $N$ the four global states are
orthogonal and therefore perfectly distinguishable.
Thus, this prepare-and-measure-scheme is (for $N\geq 3$) {\em not} equivalent
to entanglement-based NQKD as described in the present article.
{\sl 2) Prepare-and-measure equivalent of NQKD:} The entanglement-based protocol NQKD described above can be
formulated
as a prepare-and-measure protocol, analogous to the six-state protocol~\cite{Bruss98}. Instead of producing the GHZ state of Eq.~(\ref{eq:GHZ}) and measuring her qubit afterwards, Alice can directly produce the ($N-1$)-qubit projection of the GHZ state according to her fictitious, random outcome.
Thus, for the $X$-, $Y$- and $Z$-basis, the six different $(N-1)$-qubit states she distributes among the Bobs, are
\begin{equation}
\begin{aligned}
\ket{\psi_{x,\pm}}=&\frac{1}{\sqrt{2}}(\ket{00...0}\pm \ket{11...1}),\\
\ket{\psi_{y,\pm}}=&\frac{1}{\sqrt{2}}(\ket{00...0}\mp i\ket{11...1}),\\
\ket{\psi_{z,0}}=&\ket{00...0} \text{ and }\ket{\psi_{z,1}}=\ket{11...1}.
\end{aligned}
\end{equation}
This protocol is equivalent to NQKD,
because it reproduces the correlations between $A$, $B_1,\ldots, B_{N-1}$. Note that the six-state protocol is included as the special case $N=2$. The described protocol uses $(N-1)$-partite entanglement for four of the sent states, which are, however, sent much less frequently than the two product states. This fact renders an experimental implementation of our protocol more realistic than the entanglement-based description might suggest.\\
In the remainder of this article we use the equivalent entanglement-based description of the NQKD protocol.\\
\subsection{Security analysis of the N-party quantum key distribution}
The composable security definition of the bipartite
scenario~\cite{Renner05,Sca+09} can be generalised
in an analogous way to the $N$-partite case.
Our security analysis proceeds along lines analogous to the
bipartite case in \cite{RGK05}. See Appendix~\ref{app:security} for explicit details of these generalisations. By employing this security definition and using one-way communication only, we prove secrecy of the key under the most general eavesdropping attack allowed by the laws of quantum mechanics, so-called {\it coherent attacks}~\cite{CIRAC19971, PhysRevA.59.4238}, independent of the context in which the key is used.\\
In the asymptotic limit, i.e. for infinitely
many rounds,
the secret fraction $r_{\infty}$, i.e. the ratio of the number of secret bits to the number of shared states (excluding parameter estimation rounds), is given by
\begin{equation}
\label{eq:secretfraction}
r_{\infty} = \sup_{U \leftarrow K}\inf_{\sigma_{A\{B_i\}} \in \Gamma}
[S(U|E) - \max_{i\in\{1,\ldots,N-1\}} H(U|K_i)],
\end{equation}
where $U \leftarrow K$ denotes a bitwise preprocessing channel on Alice's raw key bit $K$,
$S(U|E)$ is the conditional von-Neumann entropy of the (classical) key variable, given the state of Eve's system $E$,
$H(U|K_i)$ is the conditional Shannon entropy of $U$ given $K_i$, which is $B_i$'s guess of $K$, and $\Gamma$ is the set of all density matrices $\sigma_{A\{B_i\}}$ of Alice and the Bobs which are consistent with the parameter estimation. The secret key rate is
\begin{equation}
R=r_\infty R_{\mathrm{rep}},\label{eq:keyrate}
\end{equation}
where the repetition rate $R_{\mathrm{rep}}=\frac{1}{t_{\mathrm{rep}}}$ is given by the time $t_{\mathrm{rep}}$ that one round (steps 1 and 2) takes. For now we set $t_{\mathrm{rep}}=1$. The secret key rate in Eq.~(\ref{eq:keyrate}) as a figure of merit does not directly account for the amount of needed local randomness, classical communication, qubits and gates. Depending on the context one might want to incorporate one or more of the former quantities into a cost-performance ratio as a figure of merit~\cite{PhysRevLett.112.250501}.\\
Note that we have not assumed any symmetry about the quality of the channels connecting $A$ and $B_i$.
Therefore,
the worst-case
information leakage in the error correction step is determined by the noisiest channel, see the maximisation in the last term
of Eq. (\ref{eq:secretfraction}).
This is the main difference with respect to the bipartite case.\\
\subsection{The secret key rate}
We now derive an analytical formula for the multipartite secret key rate based on a variant of the method of depolarisation~\cite{RGK05}. In practice, the described depolarisation operations will be applied to the classical data only, as described in detail below. Readers who are not interested in the technical details can skip to Eq.~(\ref{eq:rdep}).\\
Let us denote the GHZ basis of $N$ qubits as follows:
\begin{equation}
\label{ghz-basis}
\ket{\psi_j^\pm} = \frac{1}{\sqrt{2}}(\ket{0}\ket{j}\pm \ket{1}\ket{\bar j})\ ,
\end{equation}
where $j$ takes the values $0,...,2^{N-1}-1$ in binary notation,
and $\bar j$ denotes the binary negation of $j$; i.e. for example
if $j = 01101$
then ${\bar j} = 10010$. \\
Remember that any state of $N$ qubits can be depolarised to a state which is diagonal in the GHZ basis by a sequence of local operations~\cite{DCT99,Duer+00}.
In our protocol we introduce the following \textit{extended depolarisation procedure}. The set of depolarisation operators is
\begin{equation}
\mathcal{D}=\{X^{\otimes N}\} \cup \{Z_A Z_{B_j}|1\leq j \leq N-1 \} \cup \{R_k |1\leq k \leq N-1 \}, \label{eq:depolarisationoperators}
\end{equation}
where $X$ and $Z$ are Pauli operators and
\begin{equation}
R_k=\mathrm{diag}(1,i)_A \otimes \mathrm{diag}(1,-i)_{B_k}.
\end{equation}
The parties apply each of these operators with probability $1/2$, and the identity $\mathds{1}$ otherwise.
The operators from the first two sets of
Eq.~(\ref{eq:depolarisationoperators}) make the density matrix GHZ diagonal as in~\cite{DCT99,Duer+00}. We denote the coefficient in front of $\proj{\psi_j^\sigma}$ by $\lambda_j^\sigma$ with $\sigma\in\{+,-\}$ and $j\in\{0,1,...,2^{N-1}-1\}$. The effect of $R_k$ is
\begin{alignat}{2}
R_k \ket{\psi_j^\sigma}=&\left\{ \begin{array}{ll}
\ket{\psi_j^\sigma} & \text{if } j^{(k)} = 0\\
-i \ket{\psi_j^{-\sigma}} & \text{if } j^{(k)} = 1\\
\end{array}\right. ,
\intertext{so applying this operator with probability $\frac{1}{2}$ transforms}
\lambda_j^\sigma \xrightarrow{}&\left\{ \begin{array}{ll}
\lambda_j^\sigma & \text{if } j^{(k)} = 0\\
\frac{1}{2}(\lambda_j^{-\sigma}+\lambda_j^{\sigma}) & \text{if } j^{(k)} = 1\\
\end{array}\right. ,
\end{alignat}
where $j^{(k)}$ denotes the $k$th bit of the string $j$. As this operation is applied for all $k=1,2,...,N-1$, it achieves that
\begin{equation}
\lambda_j^+=\lambda_j^- \text{ for all $j>0$.}\label{eq:equallambdas}
\end{equation}
The resulting depolarised state reads
\begin{equation}
\label{ghz-diagonal}
\rho_\text{dep} = \lambda_0^+ \proj{\psi_0^+} + \lambda_0^- \proj{\psi_0^-}+\sum_{j=1}^{2^{N-1}-1}\lambda_{j}(\proj{\psi_j^+}+\proj{\psi_j^-}).
\end{equation}
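The stated action of $R_k$ on the GHZ basis can be verified directly on small instances. The following sketch (Python/NumPy, illustrative only; the ordering with qubit $A$ first and $B_1$ as the most significant bit of the string $j$ is a convention of the sketch) confirms it numerically.
\begin{verbatim}
import numpy as np

def ghz_basis_state(N, j, sign):
    """|psi_j^{+/-}> = (|0>|j> + sign*|1>|jbar>)/sqrt(2), qubit A first."""
    psi = np.zeros(2**N, dtype=complex)
    jbar = (2**(N - 1) - 1) ^ j                   # binary negation of j
    psi[j] = 1.0 / np.sqrt(2.0)                   # |0>_A |j>
    psi[2**(N - 1) + jbar] = sign / np.sqrt(2.0)  # |1>_A |jbar>
    return psi

def R(N, k):
    """R_k = diag(1, i) on A, diag(1, -i) on B_k, identity elsewhere."""
    factors = [np.diag([1.0, 1.0j])] + [np.eye(2)] * (N - 1)
    factors[k] = np.diag([1.0, -1.0j])
    U = factors[0]
    for f in factors[1:]:
        U = np.kron(U, f)
    return U

# check of the stated action for N = 3, k = 1, j = '10' (so j^(1) = 1):
N, k, j = 3, 1, 0b10
print(np.allclose(R(N, k) @ ghz_basis_state(N, j, +1),
                  -1j * ghz_basis_state(N, j, -1)))   # True
\end{verbatim}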
In our multipartite scenario we define the qubit error rate (QBER) $Q_Z$ to be the probability that at least one Bob obtains a different outcome than Alice in a $Z$-basis measurement. Note that this value is not the same as the bipartite qubit error rate $Q_{AB_i}$, which is the probability that the $Z$-measurement outcome of $B_i$ disagrees with the one of Alice. $Q_Z$ can be read directly from the structure of the depolarised state in Eq.~(\ref{ghz-diagonal}) and is given by
\begin{equation}
Q_Z = 1- \lambda_0^+-\lambda_0^- \ .\label{eq:QBER}
\end{equation}
For simplicity we neglect the possibility of increasing the key rate by adding pre-processing noise, i.e. we set $q=0$ in the notation of \cite{RGK05} such that $\mathbf{U}=\mathbf{K}$.
Because
\begin{equation}
\begin{aligned}
S(K|E)=&S(E|K)-S(E)+H(K)\\
\text{and }H(K|K_i)=&H(K_i|K)-H(K_i)+H(K)
\end{aligned}
\end{equation}
the asymptotic secret fraction is
\begin{equation}
\label{eq:askeyrate}
r_{\infty} = S(E|K) - S(E) - \max_{1\leq i\leq N-1} (H(K_i|K) - H(K_i)).
\end{equation}
Note that we did not need to include the infimum over $\Gamma$, see Eq.~(\ref{eq:secretfraction}), here because, as we will see below, the measurement statistics completely determine all relevant quantities in our protocol.
The entropies involving the classical random variable $K$ are directly obtained from the measurement statistics in the parameter estimation phase. They are given by
\begin{equation}
H(K|K_i)=h(Q_{AB_i}),
\end{equation}
with the binary Shannon entropy
\begin{equation}
h(p)=-p \log_2 p-(1-p) \log_2(1-p)
\end{equation}
and the bipartite error rate $Q_{AB_i}$, given by
\begin{equation}
Q_{AB_i}=\sum_{\substack{j\\ j^{(i)}=1}}\sum_{\sigma=\pm} \lambda_j^\sigma
\overset{Eq.~(\ref{eq:equallambdas})}{=} 2\sum_{\substack{j\\ j^{(i)}=1}}\lambda_j,
\end{equation}
where $j^{(i)}$ denotes the $i$-th bit of $j$ and, because both outcomes are equiprobable,
\begin{equation}
H(K_i)=1.
\end{equation}
Giving Eve the purification of Eq.~(\ref{ghz-diagonal}), the von Neumann entropies involving Eve's system in Eq.~(\ref{eq:askeyrate}) are given by
\begin{align}
S(E|K)\overset{\phantom{Eq.~(\ref{eq:equallambdas})}}{=}&\frac{1}{2} S(E|K=0)+ \frac{1}{2} S(E|K=1)\nonumber \\
\overset{\phantom{Eq.~(\ref{eq:equallambdas})}}{=}&-\sum_{i=0}^{2^{N-1}-1} (\lambda_i^+ + \lambda_i^-)\log_2 (\lambda_i^+ + \lambda_i^-)\nonumber\\
\overset{Eq.~(\ref{eq:equallambdas})}{=}
&-(1-Q_Z)\log_2(1-Q_Z)-2\sum_{i=1}^{2^{N-1}-1} \lambda_i\log_2 (\lambda_i) - Q_Z
\label{eq:SEUnosymmetry}
\end{align}
and
\begin{align}
S(E)\overset{\phantom{Eq.~(\ref{eq:equallambdas})}}{=}&S(\frac{1}{2}(\sigma_E^0 + \sigma_E^1))
=-\sum_{j,\sigma=\pm} \lambda_j^{\sigma} \log_2 \lambda_j^{\sigma}\nonumber\\
\overset{Eq.~(\ref{eq:equallambdas})}{=}&-\lambda_0^+\log_2\lambda_0^+-\lambda_0^-\log_2\lambda_0^- -2 \sum_{j>0} \lambda_j \log_2 \lambda_j
\label{eq:SEnosymmetry},
\end{align}
i.e.
\begin{align}
S(E|K)-S(E)
=&-Q_Z-(1-Q_Z)\log_2(1-Q_Z)+\lambda_0^+\log_2\lambda_0^++(1-Q_Z-\lambda_0^+)\log_2(1-Q_Z-\lambda_0^+). \label{eq:difference}
\end{align}
Now $\lambda_0^+$ and $\lambda_0^-$ can be obtained with the additional $X^{\otimes N}$ measurement in the parameter estimation, because $\lambda_0^++\lambda_0^-=1-Q_Z=\tr \left(\rho_{\mathrm{dep}} (\proj{0}^{\otimes N}+\proj{1}^{\otimes N})\right)$ is known from the QBER and $\tr\left( \rho_{\mathrm{dep}} X^{\otimes N}\right) = \sum_j (\lambda_j^+-\lambda_j^-)=\lambda_0^+-\lambda_0^-$. In analogy to $Q_Z$ we denote the probability that the $X$-measurement gives an unexpected result, i.e. one that is incompatible with the noiseless state, by $Q_X$. Because $\bra{\psi_j^\sigma}X^{\otimes N}\ket{\psi_j^\sigma}=\sigma$ this leads to
\begin{equation}
Q_X=\frac{1-\langle X^{\otimes N}\rangle_{\mathrm{dep}}}{2},
\end{equation}
which can, as we will see in Section~\ref{sec:implementationofprotocol}, be obtained from the measured data in the parameter estimation step. We remark that $Q_X$ is not the probability that at least one Bob gets a
different $X$-measurement outcome than Alice, as the outcomes are
not correlated, see Appendix~\ref{app:GHZ}.\\
Finally, inserting Eq.~(\ref{eq:difference}) into Eq.~(\ref{eq:askeyrate}), and using Eq.~(\ref{eq:keyrate}),
we arrive at the achievable secret key rate,
\begin{equation}
\begin{aligned}
R=&\hphantom{{}+{}}\left(1- \frac{Q_Z}{2} - Q_X\right) \log_2\left(1- \frac{Q_Z}{2} - Q_X\right)
+ \left(Q_X-\frac{Q_Z}{2}\right) \log_2\left(Q_X-\frac{Q_Z}{2}\right) \\& + (1-Q_Z) (1 - \log_2(1-Q_Z))- h(\max_{1\leq i \leq N-1}Q_{AB_i}).
\end{aligned}\label{eq:rdep}
\end{equation}
Note that the parameters in this equation are obtained from the measured data and will depend on the number of parties $N$.\\
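To illustrate how Eq.~(\ref{eq:rdep}) is evaluated from the estimated parameters, the following minimal Python sketch (added here as an example; the numerical inputs are arbitrary illustration values, not measured data) computes $R$ from $Q_Z$, $Q_X$ and $\max_i Q_{AB_i}$.
\begin{verbatim}
import numpy as np

def xlog2x(x):
    # x*log2(x) with the convention 0*log2(0) = 0
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x * np.log2(np.where(x > 0, x, 1.0)), 0.0)

def h(p):
    # binary Shannon entropy
    return -xlog2x(p) - xlog2x(1.0 - p)

def key_rate(QZ, QX, QAB_max):
    # secret key rate of Eq. (eq:rdep) with t_rep = 1; valid for 0 <= QZ < 1
    return (xlog2x(1.0 - QZ / 2.0 - QX) + xlog2x(QX - QZ / 2.0)
            + (1.0 - QZ) * (1.0 - np.log2(1.0 - QZ)) - h(QAB_max))

# arbitrary example values:
print(key_rate(QZ=0.05, QX=0.04, QAB_max=0.03))
\end{verbatim}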
\section{Implementation and noise}\label{sec:implementation}
In this section we compare the multipartite-entanglement-based protocol (NQKD) as introduced above to a protocol based on bipartite entanglement (2QKD), which we define in the following.
\subsection{Conference key distribution with bipartite entangled quantum states (2QKD)}
A suitable protocol
to establish
a secret joint key for $N>2$ parties via bipartite entanglement
proceeds as follows, see Fig.~\ref{fig:bipartite}:
Party $A$ shares a Bell
state with each of the $N-1$ parties $B_i$ and establishes a
(different)
secret bipartite key ${\bf S}_i$ with each party $B_i$. For concreteness, we assume in our comparison that the six-state protocol~\cite{Bruss98} is used.
In general, the
$N-1$ channels may be different and thus have individual
QBERs. Party $A$ then defines a new random key ${\bf k}_c$ to be
the conference key. She sends the encoded conference key
${\bf k}_i = {\bf S}_i \oplus {\bf k}_c$ to party $B_i$, who computes
${\bf k}_i \oplus {\bf S}_i = {\bf k}_c$ and thus recovers
the conference key.\\
A comparison of the performance of the bipartite versus the multipartite
entanglement-based strategy for
$N$ parties is subtle and has to consider various aspects,
as different resources are needed: on one hand only bipartite
entanglement is needed for 2QKD, while multipartite
entanglement is needed for NQKD.
(Note, however, that the number of necessary two-qubit
gates for generation of the entangled states is in both cases $N-1$.)
On the other hand, the number of resource
qubits per round is $2(N-1)$ for 2QKD, while
only $N$ qubits are needed for NQKD.
Finally, the 2QKD protocol requires the transmission of $(N-1)$ additional classical bits (the encoded conference key). Thus, each of the two strategies
has its own advantages. A quantitative comparison regarding imperfections in preparation and transmission is discussed below.\\
\subsection{Implementation of the NQKD protocol} \label{sec:implementationofprotocol}
We now describe how the depolarisation operations used in the security proof can effectively be implemented classically by adjusting the protocol.\\
For key generation and the $Q_Z$ estimation, the parties perform $Z^{\otimes N}$-measurements. These are only affected by the $X^{\otimes N}$ depolarisation operator, which flips the outcomes of all parties. It can therefore be implemented on the classical data. The other depolarisation operators are diagonal in the $Z$-basis and thus do not change the $Z$-measurement outcome.\\
Let us call the parameter estimation rounds, in which the parties measure $X^{\otimes N}$ (after depolarisation), estimation rounds of the second type. How the depolarisation step affects the $X^{\otimes N}$-measurement is not so obvious and is described in the following.\\
Note that the depolarisation operators $X^{\otimes N}$ and $Z_AZ_{B_k}$, $k=1,2,...,N-1$ (see Eq.~(\ref{eq:depolarisationoperators})), commute with the $X^{\otimes N}$-measurement and thus these depolarisation operators do not have an effect in second type rounds. But
\begin{equation}
R_k X_A X_{B_k} R_k^\dagger = (-Y_A) Y_{B_k}
\end{equation}
i.e. applying the depolarisation operator $R_k$ is equivalent to Bob $k$ measuring in the $Y$-basis, together with a corresponding change of Alice's effective measurement. Also note that
\begin{equation}
R_k (-Y_A) X_{B_k} R_k^\dagger = (-X_A) Y_{B_k},
\end{equation}
Hence, if $\kappa_j$ denotes the number of Bobs measuring in the $Y$-basis in the $j$-th round, Alice measures in the basis
\begin{equation}
M_A(\kappa) = \left\{\begin{array}{cl}
X_A & \text{if } \kappa_j \bmod 4 = 0\\
-Y_A & \text{if }\kappa_j \bmod 4 = 1\\
-X_A & \text{if }\kappa_j \bmod 4 = 2\\
Y_A & \text{if }\kappa_j \bmod 4 = 3
\end{array}
\right. ,
\end{equation}
where a minus sign corresponds to a flip of the measurement outcome. Note that this measurement rule for Alice implies that an even number of parties always measures in the $Y$-basis, and that the outcome of the measurement is flipped whenever this number is not a multiple of four.
Each party measures in $X$ or $Y$ basis with probability $1/2$. Note that the rule for $M_A$ described above means that only half of all possible combinations of these measurement bases are actually measured. However, in practice the parties can measure $X$ and $Y$ independently with probability $1/2$ and throw away half of their data (where an odd number of parties has measured in $Y$-basis) and Alice still flips her measurement outcome whenever the number of parties measuring in $Y$-basis was not a multiple of four. This is not a problem, because in the parameter estimation rounds each party announces its measurement setting and outcome. We thus arrive at the implementation described initially.
Let $\tilde{\kappa}_j$ be the number of parties measuring in $Y$-basis in run $j$, i.e.
\begin{equation}
\tilde{\kappa}_j = \left\{\begin{array}{cl}
\kappa_j + 1 & \text{if Alice measured in $Y$-basis}\\
\kappa_j & \text{else}
\end{array} \right.,
\end{equation}
then
\begin{align}
\langle X^{\otimes N} \rangle_{\mathrm{dep}} =&\lim_{\text{\#exp}\rightarrow\infty} \frac{1}{\text{\#exp}} \sum_{j=1}^{\text{\#exp}} f(\tilde{\kappa}_j) \prod_{i=1}^N a_{i,j} \\
=&\lim_{\text{\#exp}\rightarrow\infty} \frac{n_+-n_-}{n_++n_-} , \label{Eq:DetOfXN}
\end{align}
where \#exp is the number of experiments in the second type rounds with even $\tilde{\kappa}_j$, $a_{i,j}$ is the outcome of party $i$ in experiment $j$,
\begin{equation}
f(\tilde{\kappa}_j)=\left\{\begin{array}{cl}
0 & \text{if $\tilde{\kappa}_j$ odd}\\
1 & \text{if }\tilde{\kappa}_j \bmod 4 = 0\\
-1 & \text{else}
\end{array}\right. \label{eq:signofkappa}
\end{equation}
and
\begin{equation}
n_{\pm} = \frac{1}{2} \text{\#exp}\pm \frac{1}{2}\sum_{j=1}^{\text{\#exp}} f(\tilde{\kappa}_j)\prod_{i=1}^N a_{i,j}.
\end{equation}
We remark that, in contrast to full tomography, the number of rounds needed to get sufficient statistics for estimating $\langle X^{\otimes N}\rangle_{\mathrm{dep}}$ does not increase with the number of parties $N$.\\[1ex]
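The classical evaluation of the second type rounds can be summarised in a short routine. The sketch below is an illustration we add (the toy input data are made up and do not represent simulated quantum statistics); it discards rounds with an odd number of $Y$-measurements, applies the sign rule of Eq.~(\ref{eq:signofkappa}), and evaluates the estimator of Eq.~(\ref{Eq:DetOfXN}).
\begin{verbatim}
def estimate_X_expectation(rounds):
    # rounds: list of (bases, outcomes), where bases is a list of 'X'/'Y'
    # characters (Alice first, then the Bobs) and outcomes a list of +1/-1.
    total, count = 0.0, 0
    for bases, outcomes in rounds:
        kappa_tilde = bases.count('Y')   # number of parties measuring Y
        if kappa_tilde % 2 == 1:         # odd rounds are discarded (f = 0)
            continue
        sign = 1 if kappa_tilde % 4 == 0 else -1
        prod = 1
        for a in outcomes:
            prod *= a
        total += sign * prod
        count += 1
    return total / count if count else 0.0

# Toy data for N = 3 (purely illustrative):
example = [(['X', 'X', 'X'], [+1, +1, +1]),
           (['Y', 'Y', 'X'], [+1, -1, +1]),
           (['X', 'Y', 'Y'], [-1, -1, -1])]
print(estimate_X_expectation(example))
\end{verbatim}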
Let us summarise the steps of an implementation of the NQKD protocol:
\begin{enumerate}
\item Distribution of the GHZ state $\ket{\psi_0^+}$.
\item $L\cdot h(p_p)$ bits of pre-shared key are used to mark the second type rounds, where $L$ is the total number of rounds and $p_p$ is the probability for an $X^{\otimes N}$-round. This amount of key suffices, because an $L$-bit binary string with a $1$ for each second type round can asymptotically be compressed to $L\cdot h(p_p)$ bits (see the worked example after this list).
\item In each second type round each party measures randomly in the $X$- or $Y$-basis.
\item In all other rounds all parties measure in the $Z$-basis.
\item Parameter estimation:
\begin{enumerate}
\item Alice announces a randomly chosen small subset of size $L\cdot h(p_p)$ of $Z$-measurement rounds, in which all parties announce their $Z$-measurement results. From this data the QBER $Q_Z$ and the individual QBERs $Q_{AB_i}$ are estimated.
\item The parties announce the measurement results of the second type rounds together with the chosen measurement basis. Alice flips her outcome if the number of parties who measured in $Y$-basis is not a multiple of four (see Eq.~(\ref{eq:signofkappa})). From the data where an even number of parties measured in $Y$-basis (including zero), the parameter $Q_X$ is calculated according to Eq.~(\ref{Eq:DetOfXN}).
\end{enumerate}
\item Alice announces which $Z$-measurement results all parties have to flip (the probability for each bit is $1/2$). This effectively implements the depolarisation with operator $X^{\otimes N}$.
\item Classical post-processing:
\begin{enumerate}
\item Alice sends error correction information (for $\max_i Q_{AB_i}$) to all Bobs, who perform the error correction.
\item In privacy amplification the parties obtain the key by applying a two-universal hash function, which was chosen randomly by Alice, to the error corrected data.
\end{enumerate}
\item The achievable key rate is then given by Eq.~(\ref{eq:rdep}).
\end{enumerate}
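As a worked example of the pre-shared key accounting in step 2 (with purely illustrative numbers): for $L=10^6$ rounds and $p_p=10^{-2}$, the binary entropy is $h(10^{-2})\approx 0.081$, so roughly $8.1\times 10^4$ pre-shared bits suffice to mark the (on average) $10^4$ second type rounds, instead of the $10^6$ bits that a naive one-bit-per-round labelling would require.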
\subsection{Example of depolarising noise}
In this section we assume that $\rho_{AB_1...B_{N-1}}$ is a mixture of the GHZ-state and white noise, i.e. the parties share the state
\begin{equation}
\rho = \lambda_0^+ \proj{\psi_0^+} + \frac{1-\lambda_0^+}{2^N-1} (\mathds{1}-\proj{\psi_0^+}). \label{eq:wernerstate}
\end{equation}
Here all coefficients other than $\lambda_0^+$ are equal, i.e. $\lambda_j^\pm = \lambda_0^- =Q_Z/(2^N-2)$ for $j=1,..., 2^{N-1}-1$ and $\lambda_0^+ = 1-Q_Z\frac{2^N-1}{2^N-2}$. The rate of unexpected results for the $X^{\otimes N}$-measurement is thus given by
\begin{equation}
Q_X=\frac{2^{N-2}}{2^{N-1}-1}Q_Z.
\end{equation}
For the highly symmetric state of Eq.~(\ref{eq:wernerstate}) the key rate is then a function of $Q_Z$ and $N$ only. The terms in Eq.~(\ref{eq:askeyrate}) are
\begin{align}
Q_{AB_i}=& \frac{2^{N-1}}{2^N-2} Q_Z,\\
S(E|K)=& -(1-Q_Z) \log_2(1-Q_Z) - Q_Z \log_2 \frac{2Q_Z}{2^N-2}\\
\text{and }S(E)=&-(1-Q_Z\frac{2^N-1}{2^N-2})\log_2 (1-Q_Z\frac{2^N-1}{2^N-2})\nonumber\\
& - (2^N-1)\frac{Q_Z}{2^N-2} \log_2 (\frac{Q_Z}{2^N-2})
\end{align}
and inserting them into Eq. (\ref{eq:askeyrate}) leads to the asymptotic secret key rate as
function of $Q=Q_Z$ and $N$, namely
\begin{equation}
\label{eq:rate}
R(Q,N)=1 + h(Q)-
h\left(Q \frac{2^N - 1}{2^N - 2}\right) - h\left(Q \frac{2^{N - 1}}{2^N - 2}\right)
+ \left(\log_2(2^{N - 1} - 1) - \frac{2^N - 1}{2^N - 2} \log_2 (2^N - 1)\right) Q.
\end{equation}
This function is shown in Fig.~\ref{fig:keyrates}.
\begin{figure}
\caption{The secret key rate of the NQKD protocol as a function of the QBER (a) and the gate failure probability (b).}
\label{fig:keyrates}
\label{fig:keyratesfG}
\label{fig:keyratePlots}
\end{figure}
For $N=2$ the key rate coincides with the one of the six-state protocol~\cite{Bruss98,RGK05}, namely
\begin{equation}
R(Q,2)= 1-h\left(\frac{3}{2}Q\right)-\frac{3 \log_2 3}{2} Q.
\end{equation}
In the limit of large $N$ the key rate simplifies to
\begin{equation}
R(Q,\infty)=1-h\left(\frac{Q}{2}\right)-Q.
\end{equation}
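The threshold values reported below can be reproduced from Eq.~(\ref{eq:rate}). The following sketch (our own illustration; it uses a simple bisection and assumes a single zero crossing of $R(Q,N)$ in $(0,1/2)$) should reproduce the entries of Table~\ref{tab:threshold}.
\begin{verbatim}
import numpy as np

def h(p):
    p = np.clip(p, 1e-15, 1 - 1e-15)     # binary entropy, 0*log2(0) := 0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def R(Q, N):
    # asymptotic key rate of Eq. (eq:rate) for depolarising noise
    a = (2**N - 1) / (2**N - 2)
    b = 2**(N - 1) / (2**N - 2)
    return (1 + h(Q) - h(a * Q) - h(b * Q)
            + (np.log2(2**(N - 1) - 1) - a * np.log2(2**N - 1)) * Q)

def threshold(N, lo=1e-6, hi=0.5, steps=60):
    # bisection for the QBER at which R(Q, N) crosses zero
    for _ in range(steps):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if R(mid, N) > 0 else (lo, mid)
    return lo

for N in (2, 3, 4, 10):
    print(N, round(threshold(N), 6))
\end{verbatim}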
We also numerically determined the threshold values for the QBER, i.e. the value of $Q$ until which a non-zero secret key rate is achievable, for different numbers of parties $N$, see Table~\ref{tab:threshold}.
\begin{table}
\caption{Threshold values of the multipartite entanglement based protocol (NQKD) without preprocessing noise for different numbers of parties $N$.
The well-known bipartite case, i.e. $N=2$, is also given for comparison.
A non-zero secret key can be distilled if the QBER is below the listed value.}\label{tab:threshold}
\begin{tabular}{cc}
N & Threshold QBER\\%
\hline
2 & 0.126193\\%
3 & 0.209716\\%
4 & 0.263087\\%
5 & 0.295974\\%
6 & 0.315562\\%
7 & 0.326892\\%
8 & 0.333296\\%
9 & 0.336851\\%
10 & 0.338799
\end{tabular}\hspace{1cm}\begin{tabular}{cc}
N & Threshold QBER\\%
\hline
11 & 0.339855\\%
12 & 0.340424\\%
13 & 0.340728\\%
14 & 0.340890\\%
15 & 0.340976\\%
16 & 0.341021\\%
17 & 0.341045\\%
\vdots & \\%
$\infty$ & 0.341071
\end{tabular}
\end{table}
Note that for fixed $Q$ the key rate increases with the number of parties $N$. However, one might expect that in practice the QBER is not constant but increases with increasing number of parties $N$ (because the experimental creation of the $N$-partite GHZ state becomes more demanding). This intuition is discussed quantitatively in the following section.\\
\subsection{Noisy gates and channels}
Let us compare the performance of NQKD and 2QKD when
using imperfect two-qubit gates in the production of
the entangled resource states.
We employ, for both 2QKD and NQKD, the model of depolarising noise, i.e. if a two-qubit gate fails,
which happens with probability $f_G$, then the two processed qubits
are traced out and replaced by the completely mixed state.\\
When the GHZ resource state is produced
in the network of Fig.~\ref{fig:multipartite},
Alice starts with the state $\ket{+}_A\ket{0}^{\otimes N-1}$ and applies a controlled-NOT gate from $A$ to each of the other qubits.
The secret key rate is shown in Fig.~\ref{fig:keyratesfG} as a function of the gate error rate $f_G$. It captures the expectation that the demands on the gates for producing an $N$-party
GHZ state increase with the number of parties $N$.
We mention that the GHZ state could also be produced using a single multi-qubit gate, e.g. $C_{X^{\otimes (N-1)}}=\proj{0}\otimes\mathds{1}+\proj{1}\otimes X^{\otimes (N-1)}$, which is locally equivalent to the controlled-Phase gate, see e.g.~\cite{Liu2016}. The QBER caused by this gate is $Q=\frac{f_G}{2}$. Because the threshold $Q$ increases with $N$ (cf. Fig.~\ref{fig:keyrates}), so does the threshold gate failure probability in this case.
In addition to imperfect gates, noise might be introduced by the transmission channel. Consider, for example, the situation when the qubit of each Bob is individually affected by a depolarising channel. Let the probability of depolarisation be $f_C$, then the QBER is
\begin{equation}
Q(f_C)=\frac{2^N-2 }{2^N}\left(1-(1-f_C )^N\right) \label{eq:fC}
\end{equation}
and the key rate can be calculated according to Eq.~(\ref{eq:rate}).\\
\section{Quantum key distribution in networks}\label{sec:networks}
We will now show that in quantum networks with constrained channel
capacity and with quantum routers, employing
multipartite entanglement leads to
a higher secret key rate than bipartite entanglement, when
the gate quality is higher than a threshold value.\\
Beyond the simple network of Fig.~\ref{fig:schemes}, the GHZ resource state can be distributed in many different networks. Consider a fixed but general network as given via a graph with vertices and directed edges. Let all channels have the same
transmission capacity (also called bandwidth), which is associated with the direction of the corresponding edge. For the sake of a simple presentation, we assume that this transmission capacity is one qubit per second. Thus, the time $t_{\mathrm{rep}}$ consumed in one round (steps 1 and 2 of the protocol) is proportional to the number of network uses in that round.
A generic network has some bottlenecks. In this case the difference between the NQKD and 2QKD protocol becomes evident: Alice may send a single qubit in the NQKD scheme, while she has to transmit $N-1$ qubits in the 2QKD case.\\
As an example consider the quantum network where all parties are connected to a single central router $C$, see Fig.~\ref{fig:router}.
\begin{figure}
\caption{This quantum network with a central router $C$, which is able to
produce and entangle qubits, exemplifies a network with a bottleneck. The GHZ-like resource state of Eq.~(\ref{eq:GHZ}) is used in the multipartite entanglement QKD protocol.}
\label{fig:router}
\end{figure}
Because $C$ is not trusted we assume it to be in the control of Eve. In this network
the channel from $A$ to $C$ constitutes a bottleneck. Note, however, that this network can be much more economical than the one of Fig.~\ref{fig:schemes} if the distance between $A$ and $C$ is large. The 2QKD protocol needs $N-1$ network uses, i.e. $t_{\mathrm{rep}}^{\mathrm{(2QKD)}}=(N-1) \,\mathrm{s}$, to distribute the Bell pairs. In contrast to this the NQKD protocol can employ the quantum network coding~\cite{Ahlswede00,Leung06,Hayashi07,Kobayashi09,Kobayashi10,Beaudrap14,Satoh16} scheme of reference~\cite{EKB16b} to distribute the GHZ state in a single network use, i.e. $t_{\mathrm{rep}}^{\mathrm{(NQKD)}}=1\,\mathrm{s}$. See Appendix~\ref{app:QNC} for the explicit calculation. Thus the key rate of the NQKD protocol is $(N-1)$ times larger than the one of the 2QKD protocol in the ideal case ($r_\infty=1$).\\
When again using noisy two-qubit gates (the QBER calculation is analogous to the case of the network shown in Fig.~\ref{fig:multipartite} discussed above), the QBER for the NQKD protocol increases with $N$. These two effects lead to gate error thresholds below which the NQKD protocol outperforms 2QKD, see Fig.~\ref{fig:maxfGNQKDadvantage}.
\begin{figure}
\caption{For less noise than the shown threshold, i.e. in the blue area, NQKD leads to higher key rates than 2QKD in the network of Fig.~\ref{fig:router}.}
\label{fig:maxfGNQKDadvantage}
\label{fig:maxfCNQKDadvantage}
\label{fig:NQKDadvantage}
\end{figure}
For a fixed number of parties $N$ there is a maximal gate error probability below which the NQKD protocol outperforms the bipartite approach in the quantum network of Fig.~\ref{fig:router}. For $N=3$, gate failure rates below $7.2\,\%$ already suffice for NQKD to outperform 2QKD. More values are listed in Appendix~\ref{app:gates}.\\
The exact same behaviour can be observed when considering noisy channels. In the ideal case NQKD outperforms 2QKD, while NQKD is more prone to channel noise. The resulting threshold noise levels are shown in Fig.~\ref{fig:maxfCNQKDadvantage}.\\
We mention that the famous butterfly network~\cite{Ahlswede00} leads to a similar advantage, see Appendix~\ref{app:butterfly} for details.
\section{Conclusion}
In this paper we analysed
a quantum conference key distribution (QKD) protocol for $N$ parties
which is based on multipartite entangled resource states.
We generalised the information-theoretic security analysis of \cite{Renner05} to this $N$-partite scenario.
Using the depolarisation method we derived an
analytical formula for the secret key rate as a function of the quantum bit error rate (QBER). For a fixed QBER the secret key rate is found to
increase with the number of parties. Accordingly, the threshold QBER until which a non-zero secret key can be obtained increases with the number of parties.
\\
Furthermore, we presented an example where multipartite entanglement-based
QKD outperforms the approach based on bipartite QKD links.
We found this advantage in networks with bottlenecks and showed that it
holds above a certain threshold gate quality which depends on the number
of parties.\\
We expect more interesting insights from analysing further aspects
of the multipartite entanglement-based
QKD protocol. Regarding implementations, a secret key analysis of the protocol for a finite number of rounds will be beneficial. Various examples
of network layouts and the link to network coding schemes deserve more detailed investigation.
\vspace*{3ex}
\noindent {\bf\large Acknowledgments}\\
We acknowledge helpful discussions with Jan B\"orker and Norbert L\"utkenhaus. This work was financially supported by BMBF (network Q.com-Q) and ARL.\\
\appendix
\section{The resource state and its properties}\label{app:GHZ}
In this section we
derive the form of a pure quantum state that fulfils the requirements of
perfect correlations for one set of local measurement bases, with
uniformly distributed random measurement outcomes.
(These local bases are used for the key generation.)
We also prove properties of the resource state
regarding correlations of measurement outcomes in any other set
of local bases.\\
A general normalized $N$-qubit state reads
\begin{equation}
\ket{\phi} = \sum_{i_1,i_2,...i_N=0}^1 a_{i_1,i_2,...i_N}\ket{i_1,i_2,...i_N}\ ,
\end{equation}
with complex coefficients $a_{i_1,i_2,...i_N}$
that satisfy
$\sum_{i_1,i_2,...i_N=0}^1 |a_{i_1,i_2,...i_N}|^2=1$.
To achieve perfect correlations,
we can assume without loss of generality that all parties measure
in the $Z$-basis and get
the same outcome, as the choice of another local basis corresponds
to a local rotation,
and an opposite outcome could
be flipped locally. The requirement of perfect correlations in the $Z$-basis is only fulfilled by a
quantum correlated state of the form
\begin{equation}
\ket{\phi_{corr}} = a_{0,...,0}\ket{0,...,0}+
a_{1,...,1}\ket{1,...,1}\ .
\label{resource}
\end{equation}
It turns out that this requirement of perfect correlations in
one set of local bases forbids perfect correlations,
even only pairwise, in any other local bases,
for all $N\ge 3$.
\begin{theorem}
For $N$ qubits, with
$N\ge 3$, the state $\ket{\phi_{corr}} = a_{0,...,0}\ket{0,...,0}+
a_{1,...,1}\ket{1,...,1}$ leads to perfect classical correlations
between any number of parties, if and only if each of them
measures in the $Z$-basis.
\end{theorem}
\begin{proof}
Measuring in the $Z$-basis, perfect correlations follow
trivially. For the reverse implication,
let us denote the direction of measurement for party $i$ by the vector
$\vec{M_i}$, with components $M_{i}^{x}, M_{i}^{y}$ and $M_{i}^{z}$.
An observable ${\cal M}_{ij}$ of two parties $i$ and $j$ is given by
\begin{equation}
\label{twopartymeas}
{\cal M}_{ij} = (\vec{M_i}\cdot \vec{\sigma}) \otimes (\vec{M_j}\cdot \vec{\sigma})
= \sum_{\alpha,\beta\in \{x,y,z\}}M_{i}^{\alpha}M_{j}^{\beta}\sigma_i^\alpha \otimes
\sigma_j^\beta ,
\end{equation}
where $\vec{\sigma}$ denotes the vector of Pauli matrices and the identity
operators for the parties $\neq i,j$ are omitted.
Observe that
\begin{equation}
\label{expectvalue}
\bra{\phi_{corr}}\sigma_i^\alpha \otimes
\sigma_j^\beta\ket{\phi_{corr}} = 0 \ \ \text{unless} \ \ \alpha=\beta=z ,
\end{equation}
because all other combinations of Pauli operators change $\ket{\phi_{corr}}$ to
an orthogonal state; for example, $\sigma_i^x \otimes \sigma_j^x\ket{0,...,0}$ has exactly the two qubits $i$ and $j$ flipped and is therefore orthogonal to $\ket{\phi_{corr}}$ for $N\ge 3$.
Denoting by $p_{ij}^{\alpha\beta}(s,s')$ the joint probability that parties $i$ and $j$ find the eigenvalues $s$ and $s'$ when measuring $\sigma^\alpha$ and $\sigma^\beta$, respectively,
we also have $\bra{\phi_{corr}}\sigma_i^\alpha \otimes
\sigma_j^\beta\ket{\phi_{corr}}= 2[p_{ij}^{\alpha\beta}(+,+)+
p_{ij}^{\alpha\beta}(-,-)] -1$, and thus
$p_{ij}^{\alpha\beta}(+,+)+
p_{ij}^{\alpha\beta}(-,-) \neq 1$, unless
$\alpha = \beta = z$.
Therefore, perfect correlations between two parties
are not possible in any basis other than the $Z$-basis.
This also excludes perfect correlations
between any larger number of parties.
Note that the above argument, in particular Eq.~(\ref{expectvalue}), does not hold for
$N=2$, which is special.
\end{proof}
Thus, any state of the form (\ref{resource}) contains the resource of perfect
multipartite correlations in the local $Z$-bases.
In order to ensure uniformity
of the outcome, i.e. randomness of the resulting secure bit string,
we choose for the key generation protocol $|a_{0,...,0}| = 1/\sqrt{2} =
|a_{1,...,1}|$, i.e. the unique perfect resource is a GHZ state~\cite{GHZ07}.
\section{Security analysis of the NQKD protocol}\label{app:security}
In this appendix we generalise the composable security definition of the bipartite
scenario~\cite{Renner05,Sca+09} to the $N$-partite case. As mentioned in the main text, the security analysis proceeds along analogous lines as the bipartite case in \cite{RGK05,RGK05PRL}. We assume that the parties $A$ and $B_i$, for $i=1,...,N-1$ share $n$ multipartite
states. The eavesdropper $E$ is supposed to hold a purification of the global state. The total quantum state after $Z$-measurement of $A$ and all $B_i$
is described by the density operator
\begin{equation}
\begin{aligned}
\rho^n_{{\bf K K_1...K_{N-1}}E} =& \sum_{\bf{x,x_1,...,x_{N-1}}}
P_{{\bf K K_1...K_{N-1}}}(\bf{x,x_1,...,x_{N-1}}) \\
&\proj{\bf{x}}\otimes
\bigotimes_{i=1}^{N-1}\proj{\bf{x_i}}\otimes
\rho_E^{\bf x,x_1,...x_{N-1}} \ ,
\end{aligned}
\end{equation}
where ${\bf x}$ and ${\bf x_i}$
describe the classical strings of parties A and $B_i$, respectively.
Note that the classical post-processing is identical to the bipartite
case: In an error correction step the parties transform their only partially correlated
raw data into a fully correlated shorter string. Party A pre-processes her random string ${\bf K}$
according to the channel
${\bf U}\leftarrow {\bf K}$ and sends classical error correction
information ${\bf W}$ to parties
$B_i$, who compute their respective guesses ${\bf U_i}$ for ${\bf U}$
from ${\bf K_i}$ and ${\bf W}$. The error correction information ${\bf W}$ is the same for all Bobs,
thus there is no additional information leakage compared to the bipartite case.
In a second step, the privacy amplification, Party A randomly chooses $f$ from a two-universal family
of hash functions, computes her key ${\bf S_A}= f({\bf U})$
and sends the description of $f$ to all
parties $B_i$ who also perform the privacy amplification to arrive at
their respective keys ${\bf S_{B_i}}=
f({\bf U_i})$. The total quantum
state will then be denoted as $\rho_{{\bf S_A S_{B_1}...S_{B_{N-1}}}E'}$. The key tuple
(${\bf S_A, S_{B_1},..., S_{B_{N-1}}}$) is called
$\epsilon$-secure, if it is $\epsilon$-close to the ideal state, i.e.\
if
\begin{equation}
\delta(\rho_{{\bf S_A S_{B_1}...S_{B_{N-1}}}E'}, \rho_{\bf SS...S}
\otimes \rho_{E'}) \leq \epsilon \ ,
\end{equation}
where $\delta(\rho,\sigma)= \text{tr}|\rho - \sigma|/2$
denotes the trace distance.
Note that we have not assumed any symmetry in the quality
of the channels connecting A and $B_i$. The information leaking
to the eavesdropper in the error correction step is determined by the amount of error correction information which the
Bob with the noisiest channel requires. This is the main difference with respect to the bipartite
case.
Therefore we arrive at the following
key length $\ell^{(n)}$, generated from $n$ multipartite entangled
states, in analogy to \cite{RGK05,RGK05PRL}:
\begin{equation}
\label{keylength}
\ell^{(n)} = \sup_{{\bf U} \leftarrow {\bf K}}[S_2^\epsilon ({\bf U} E)
-S_0^\epsilon(E) - \max_{i\in\{1,...N-1\}} H_0^\epsilon({\bf U}|{\bf K}_i)]\ ,
\end{equation}
where the smooth R\'enyi entropy $S_\alpha^\epsilon$ is defined as
\begin{equation}
S_{\alpha}^{\epsilon}(\rho)=\frac{1}{1-\alpha} \log_2 (\inf_{\sigma\in \mathbf{B}^\epsilon(\rho)}\tr(\sigma^\alpha)),
\end{equation}
which for $\alpha\in\{0,\infty\}$ is to be understood as $S_\alpha^\epsilon(\rho)=\lim_{\beta\rightarrow\alpha} S_\beta^\epsilon(\rho)$. The infimum is to be taken over all states $\sigma$ in a ball with radius $\epsilon$ (w.r.t. the trace distance) around $\rho$, denoted as $\mathbf{B}^\epsilon(\rho)$.
For a (classical) probability distribution $P$ the smooth R\'enyi entropy is
\begin{equation}
H_\alpha^\epsilon(P)=\frac{1}{1-\alpha} \inf_{\substack{Q\\\bar{\delta}(Q,P)\leq \epsilon}} \log_2(\sum_z Q(z)^\alpha),
\end{equation}
where the infimum is taken over all probability distributions $\epsilon$-close to $P$ in the sense of the statistical distance $\bar{\delta}$ (the classical analogue of the trace distance).
The conditional smooth R\'enyi entropy is
\begin{equation}
H_0^\epsilon(P_X|P_Y)=\max_y H_0^\epsilon (P_{X|Y=y}).
\end{equation}
Note that, differently to the bipartite case~\cite{RGK05},
the worst of the $N-1$ channels influences the key length via
the maximal leakage to the eavesdropper in the error correction step, see the last term of Eq.~(\ref{keylength}). In the following the symbols $K$, $K_i$ and $U$ denote the single bit random variables corresponding to the respective bold-face strings.
For the limit $n\rightarrow \infty$ the secret fraction $r$ is given by
\begin{eqnarray}
r_{\infty} &=&\lim_{n \rightarrow \infty}\frac{\ell^{(n)}}{n} \nonumber \\
&=&
\sup_{ U \leftarrow K}\inf_{\sigma_{A\{B_i\}} \in \Gamma}
[S (U|E) - \max_{i\in\{1,...N-1\}} H( U|K_i)]
\ , \nonumber \\
&&
\end{eqnarray}
where $S(U|E)$ is the conditional von Neumann entropy, $H(U|K_i)$ is the conditional Shannon entropy and $\Gamma$ is the set of all density matrices of Alice and the Bobs which are consistent with the parameter estimation.
\section{Details for the network coding example}\label{app:QNC}
Here we explicitly describe the distribution of the GHZ state in the network of Fig.~\ref{fig:router}. This is a special case of the quantum network coding scheme which some of the authors described in \cite{EKB16b}. Let $\ket{+}=\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})$ and $\ket{-}=\frac{1}{\sqrt{2}}(\ket{0}-\ket{1})$.\\
\begin{enumerate}
\item Alice produces two qubits $C$ and $A$, each in the state $\ket{+}$. She then applies a controlled-Phase gate $C_Z=\proj{0}\otimes \mathds{1} + \proj{1}\otimes Z$ to produce the Bell state
\begin{equation}
\ket{\includegraphics[width=6ex]{bellpair}}_{CA}=\frac{1}{\sqrt{2}}(\ket{0+}+\ket{1-}).
\end{equation}
\item Alice sends the qubit $C$ to the router station.
\item The router produces $
(N-1)$ qubits $B_i$, $i=1,2,...,N-1$, in the state $\ket{+}$ and entangles each of them with the qubit $C$ using $(N-1)$ $C_Z$ gates. At this stage the total state is
\begin{align}
\ket{\psi_C}=&\frac{1}{\sqrt{2}}(\ket{0+...+}+\ket{1-...-})\\
=&\frac{1}{\sqrt{2}}(\ket{+}_C\ket{GHZ'}+\ket{-}_CX_{B_1}\ket{GHZ'}),
\end{align}
where
\begin{align}
\ket{GHZ'}=&\frac{1}{\sqrt{2}}(\ket{++...+}+\ket{--...-})
\end{align}
is the GHZ state in the $X$-basis.
\item The router measures $C$ in $X$ basis. If the outcome is $-1$, i.e. $\ket{-}_C$, then it applies $X_{B_1}$. The state is now $\ket{\pm}_C\ket{GHZ'}$.
\item The router now distributes the qubits $B_1$, $B_2$, ..., $B_{N-1}$ to the corresponding parties.
\end{enumerate}
Up to a local basis choice (Hadamard gate), the resource state of the main text has been distributed and the multipartite entanglement based quantum key distribution (NQKD) protocol can be performed.\\
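The five steps above can be verified with a small state-vector computation. The following sketch is an illustration we add (the qubit ordering and the choice of the $-1$ router outcome are conventions of the example); it checks that, after the correction $X_{B_1}$, the remaining $N$ qubits are indeed in the state $\ket{GHZ'}$.
\begin{verbatim}
import numpy as np
from functools import reduce

N = 3                                   # parties A, B_1, ..., B_{N-1}
n = N + 1                               # plus the router qubit C (index 0)

plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron_all(*ops):
    return reduce(np.kron, ops)

def op_on(i, op):
    # embed a single-qubit operator on qubit i (qubit 0 = most significant)
    ops = [I2] * n
    ops[i] = op
    return kron_all(*ops)

def cz(i, j):
    # controlled-Z between qubits i and j, as a diagonal matrix
    d = np.ones(2 ** n)
    for b in range(2 ** n):
        if (b >> (n - 1 - i)) & 1 and (b >> (n - 1 - j)) & 1:
            d[b] = -1.0
    return np.diag(d)

# Steps 1-3: all qubits start in |+>, Alice applies CZ(C, A),
# the router applies CZ(C, B_j) for every Bob.
psi = kron_all(*([plus] * n))
psi = cz(0, 1) @ psi
for j in range(2, n):
    psi = cz(0, j) @ psi

# Step 4: the router measures C in the X basis; take the '-' outcome branch
# and apply the correction X on B_1 (qubit index 2).
psi = op_on(0, np.outer(minus, minus)) @ psi
psi /= np.linalg.norm(psi)
psi = op_on(2, X) @ psi

# Remove the router qubit (now in |->) and compare with |GHZ'>.
rest = minus @ psi.reshape(2, -1)
ghz_prime = (kron_all(*([plus] * N)) + kron_all(*([minus] * N))) / np.sqrt(2)
print(abs(np.vdot(ghz_prime, rest)))    # expected to print 1.0 (up to rounding)
\end{verbatim}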
To see that it is impossible to create $N-1$ Bell pairs by sending a single qubit from Alice to the router, let us group the router and all Bobs into a single party $B$. When Alice sends one qubit across the channel, the entropy of entanglement $E_{A|B}\leq 1$. The $N-1$ Bell pairs, however, have entropy of entanglement $E_{A|B}=N-1$, so they cannot be created from the received state by local operations on $B$. Instead, $N-1$ network uses are necessary and the key rate decreases accordingly.
\FloatBarrier
\section{Gate error rates and the QBER}\label{app:gates}
In this appendix we give details for the key rate calculations regarding the quantum networks of Figs.~\ref{fig:multipartite} and \ref{fig:router} with imperfect gates. We start with the simple network of Fig.~\ref{fig:multipartite}. The GHZ resource state is prepared as follows. Alice starts with the state $\ket{+}_A\ket{0}^{\otimes N-1}$ and applies a controlled-NOT gate from $A$ to each of the other qubits, see Fig.~\ref{fig:circuit}.
\begin{figure}
\caption{The GHZ state that is to be distributed across the network of Fig.~\ref{fig:multipartite}.}
\label{fig:circuit}
\end{figure}
When a controlled-NOT gate acts on qubits $i$ (control) and $j$ (target) we denote it by
\begin{equation}
C_X^{(i,j)}=(\proj{0}_i \otimes \mathds{1}_j+\proj{1}_i \otimes X_j)\otimes \mathds{1}_{\mathrm{rest}},
\end{equation}
where $X=\ket{0}\bra{1}+\ket{1}\bra{0}$ is a Pauli matrix. We use a depolarising noise model for the gate errors. The action of the imperfect gate on the density matrix is
\begin{align}
C_{X,f_G}^{(i,j)}(\rho)=& (1-f_G) C_X^{(i,j)} \rho C_X^{(i,j)} + f_G \tr_{i,j}(\rho) \otimes \frac{\mathds{1}_{ij}}{4}\\
=&(1-f_G) C_X^{(i,j)} \rho C_X^{(i,j)} + \frac{f_G}{16}\sum_{a,b\in\sigma} a_i b_j \rho a_i b_j,
\end{align}
where $\sigma=\{\mathds{1},X,Y,Z\}$ contains Pauli matrices.\\
It will be convenient to extend the notation of the GHZ basis to include the number of parties as a subscript, i.e.
\begin{equation}
\ket{\psi_{j,N}^\pm} = \frac{1}{\sqrt{2}}(\ket{0}_{1}\ket{j}_{2...N}\pm \ket{1}_{1}\ket{\bar j}_{2...N}). \label{eq:ghz-basisext}
\end{equation}
The initial state is
\begin{equation}
\rho_{\mathrm{in}} = \proj{\psi_{0,1}^+} \otimes (\proj{0})^{\otimes(N-1)}.
\end{equation}
The first gate turns it into
\begin{equation}
\rho_{1,\mathrm{out}} = ((1-f_G) \proj{\psi_{0,2}^+}+f_G \frac{\mathds{1}}{4})\otimes (\proj{0})^{\otimes N-2},
\end{equation}
the second into
\begin{align}
\rho_{2,\mathrm{out}} = &((1-f_G)^2 \proj{\psi_{0,3}^+}\nonumber \\
+&(1-f_G) f_G \frac{1}{2} (\proj{0}\otimes\frac{\mathds{1}}{2}\otimes\proj{0}+\proj{1}\otimes\frac{\mathds{1}}{2}\otimes\proj{1})\nonumber\\
+&f_G \frac{\mathds{1}}{8})\otimes (\proj{0})^{\otimes N-3},
\end{align}
and the third into
\begin{align}
\rho_{3,\mathrm{out}}=&((1-f_G)^3\proj{\psi_{0,4}^+}\nonumber\\
&+f_G(1-f_G)^2 \frac{1}{2}(\frac{\mathds{1}}{2}\otimes \proj{00} \otimes \frac{\mathds{1}}{2}+\frac{\mathds{1}}{2}\otimes \proj{11} \otimes \frac{\mathds{1}}{2})\nonumber\\
&+(1-f_G)^2 f_G \frac{1}{2} (\proj{0}\otimes\frac{\mathds{1}}{2}\otimes\proj{00}+\proj{1}\otimes\frac{\mathds{1}}{2}\otimes\proj{11})\nonumber\\
&+(1-f_G) f_G \frac{1}{2}(\proj{0}\otimes \frac{\mathds{1}}{4} \otimes \proj{0}+\proj{1}\otimes \frac{\mathds{1}}{4} \otimes \proj{1})\nonumber\\
&+(2-f_G)f_G^2 \frac{\mathds{1}}{16}) \otimes (\proj{0})^{\otimes N-4}.
\end{align}
One may deduce the following observation. Let us denote the pattern of actual gate successes/failures as the binary representation of an $(N-1)$-bit number $\mathbf{x}$, where a $0$ at position $i$ indicates the failure of gate $i$ and a $1$ means the corresponding gate was successful. The number of connected blocks of ones in the bit string $\mathbf{x}1$ (i.e. $\mathbf{x}$ with an additional $1$ appended) plus the number of zeros, denoted $b(\mathbf{x}1)$, is the number of subsets of parties that are correlated amongst each other. This gives the prefactor
\begin{equation}
c_\mathbf{x}=\left\{\begin{array}{cl}
1 & \text{if } \mathbf{x}=11...1\\
2^{-b(\mathbf{x}1)} &\text{else } \end{array}\right.
\end{equation}
in front of the corresponding term in $\rho$. These prefactors determine the overlap between $\ket{\psi_{0,N}^{\pm}}\bra{\psi_{0,N}^{\pm}}$ and $\rho$, i.e. the coefficients $\lambda_0^\pm(f_G)$ of $\rho$ in the GHZ basis. They read
\begin{equation}
\begin{aligned}
\lambda_0^+(f_G)=&\sum_{\mathbf{x}=0}^{2^{N-1}-1} c_\mathbf{x} f_G^{N-1-|\mathbf{x}|_H} (1-f_G)^{|\mathbf{x}|_H}\\
\text{and\hspace{0.8cm}}\lambda_0^-(f_G)=&\lambda_0^+-(1-f_G)^{N-1},
\end{aligned}\label{eq:applambda0fG}
\end{equation}
where $|\mathbf{x}|_H$ is the Hamming weight of $\mathbf{x}$. After some combinatorics, $\sum_{\mathbf{x}} c_\mathbf{x}$ for a given weight $w=|\mathbf{x}|_H$ (unequal to $N-1$) can be expressed in a more compact form by summing over all possible ``subset counts'' $\beta$ as
\begin{equation}
\sum_{\substack{\mathbf{x}\\|\mathbf{x}|_H=w}} c_\mathbf{x}= \sum_{\beta=N-w}^{N}{w \choose N-\beta} {N-w-1 \choose \beta -N+w} 2^{-\beta},
\end{equation}
which leads to the relevant coefficients in the GHZ basis,
\begin{equation}
\begin{aligned}
\lambda_0^-(f_G)=&\sum_{w=0}^{N-2} c'(w) f_G^{N-1-w} (1-f_G)^{w}\\
\text{and }\lambda_0^+(f_G)=&(1-f_G)^{N-1}+\lambda_0^{-}(f_G)
\end{aligned}\label{eq:lambda0fG}
\end{equation}
with
\begin{equation}
c'(w)=\sum_{\beta=N-w}^{N}{w\choose N - \beta} {N - w - 1\choose \beta - N + w} 2^{-\beta}.
\end{equation}
From Eq.~(\ref{eq:lambda0fG}) one obtains the QBER using Eq.~(\ref{eq:QBER}). We show it in Fig.~\ref{fig:QoffG}.
\begin{figure}
\caption{The QBER as a function of $f_G$ for the circuits described in the text, with $N=2,3,4,...,8$ (bottom to top).}
\label{fig:QoffG}
\end{figure}
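As a cross-check of the combinatorics leading to Eqs.~(\ref{eq:applambda0fG}) and (\ref{eq:lambda0fG}), the following sketch (our own illustration; the convention for appending the extra $1$ to $\mathbf{x}$ follows the description above) enumerates all gate success patterns, compares the weight-resolved sums of $c_\mathbf{x}$ with $c'(w)$, and evaluates the resulting QBER via Eq.~(\ref{eq:QBER}).
\begin{verbatim}
from itertools import product
from math import comb

def b_of(x_bits):
    # connected blocks of ones in x1 plus the number of zeros in x1
    s = list(x_bits) + [1]
    blocks = sum(1 for i, v in enumerate(s)
                 if v == 1 and (i == 0 or s[i - 1] == 0))
    return blocks + s.count(0)

def c_of(x_bits):
    return 1.0 if all(x_bits) else 2.0 ** (-b_of(x_bits))

def c_prime(w, N):
    return sum(comb(w, N - beta) * comb(N - w - 1, beta - N + w) * 2.0 ** (-beta)
               for beta in range(N - w, N + 1))

def lambdas(fG, N):
    lam_plus = sum(c_of(x) * fG ** (N - 1 - sum(x)) * (1 - fG) ** sum(x)
                   for x in product([0, 1], repeat=N - 1))
    return lam_plus, lam_plus - (1 - fG) ** (N - 1)

N, fG = 4, 0.05
for w in range(N - 1):                   # check c'(w) against the enumeration
    direct = sum(c_of(x) for x in product([0, 1], repeat=N - 1) if sum(x) == w)
    assert abs(direct - c_prime(w, N)) < 1e-12
lp, lm = lambdas(fG, N)
print("QBER Q_Z =", 1 - lp - lm)
\end{verbatim}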
The secret key rate is calculated using Eq.~(\ref{eq:rdep}) with
\begin{align}
Q_{AB_i} =& \frac{1}{N-1}\sum_{k=1}^{N-1} \frac{1}{2}(1-(1- f_G)^k)\\
=& \frac{(1 - f_G)^N + f_G N-1}{2 f_G (N-1)},
\end{align}
which is the average $Q_{AB_i}$ over one to $N-1$ gates, because we use a random order of the gates. This effectively mixes all $\lambda_j^\pm$ with $j$ of the same Hamming weight and ensures that all $Q_{AB_i}$ are equal. Compared to a fixed gate order it improves the key rate and removes the maximum in Eq.~(\ref{eq:rdep}).\\
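For completeness, the closed form quoted above follows from the geometric series $\sum_{k=1}^{N-1}(1-f_G)^k=(1-f_G)\left[1-(1-f_G)^{N-1}\right]/f_G$ (an intermediate step we spell out here):
\begin{align*}
\frac{1}{N-1}\sum_{k=1}^{N-1} \frac{1}{2}\left(1-(1- f_G)^k\right)
&=\frac{1}{2}-\frac{(1-f_G)\left[1-(1-f_G)^{N-1}\right]}{2 f_G (N-1)}\\
&=\frac{(1 - f_G)^N + f_G N-1}{2 f_G (N-1)}.
\end{align*}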
In the case of the network shown in Fig.~\ref{fig:router}, $N-1$ gates are performed at $C$ and one additional gate is performed at $A$. The initial state at $C$ depends on whether the gate of $A$ was successful, i.e. it is
\begin{align}
\rho_{\mathrm{in,QNC}}=&(1-f_G) \rho_{\mathrm{in}} + f_G \frac{\mathds{1}}{2}\otimes \proj{0}^{\otimes (N-1)}\\
=& (1-f_G) \rho_{\mathrm{in}} + f_G \frac{1}{2} (\rho_{\mathrm{in}}+Z_1 \rho_{\mathrm{in}} Z_1),
\end{align}
i.e.
\begin{align}
\lambda_{0,QNC}^+(f_G)=&(1-f_G) \lambda_{0}^+(f_G) + \frac{f_G}{2}(\lambda_0^+(f_G)+\lambda_0^-(f_G))\\
\lambda_{0,QNC}^-(f_G)=&(1-f_G) \lambda_{0}^-(f_G) + \frac{f_G}{2}(\lambda_0^+(f_G)+\lambda_0^-(f_G)),
\end{align}
and the previous results can be used to obtain the key rate in this case. Note that while the final density matrix depends on whether a router was used or not, the QBER (and $Q_{AB_i}$) does not, because the additional phase error does not contribute to it.\\
For a fixed number of parties $N$ there is a threshold gate error probability below which the NQKD protocol outperforms the bipartite approach in the quantum network of Fig.~\ref{fig:router}. These values are listed in Table~\ref{tab:minfG}. \begin{table}[tbp]
\caption{The multipartite entanglement based QKD protocol is more prone to gate errors, but requires Alice to send one qubit only. These two competing effects lead to a threshold value of the gate error probability $f_G$ below which it outperforms the bipartite approach.}\label{tab:minfG}
\begin{tabular}{cc}
$N$ & NQKD-threshold for $f_G$\\
\hline
3 & 0.0725754 \\%
4 & 0.0689939 \\%
5 & 0.0618163 \\%
6 & 0.0553032 \\%
7 & 0.0498258 \\%
8 & 0.0452567 \\%
9 & 0.0414201 \\%
10 & 0.0381659 \\%
\end{tabular}\hspace{1cm}
\begin{tabular}{cc}
$N$ & NQKD-threshold for $f_G$\\
\hline
11 & 0.0353766 \\%
12 & 0.0329621 \\%
13 & 0.0308531 \\%
14 & 0.0289959 \\%
15 & 0.0273484 \\%
16 & 0.0258773 \\%
17 & 0.024556 \\%
18 & 0.0233626 \\%
\end{tabular}
\end{table}
The key rate as a function of $N$ is shown for different values of the gate error rate $f_G$ in Fig.~\ref{fig:keyratesQNC}.
\begin{figure}
\caption{The secret key rate for the multipartite entanglement based (NQKD) protocol (solid lines) in the quantum network shown in Fig.~\ref{fig:router}.}
\label{fig:keyratesQNC}
\end{figure}
\FloatBarrier
\section{Key distribution in the butterfly network}\label{app:butterfly}
\begin{figure}\label{fig:butterflya}
\label{fig:butterflyb}
\label{fig:butterfly}
\end{figure}
We sketch how the NQKD protocol can be employed in the butterfly network shown in Fig.~\ref{fig:butterfly}. As usual, the rate constraints on the channels are one, i.e. each channel can send a single qubit per time step.
\begin{enumerate}
\item The quantum network code corresponding to the linear code shown in Fig.~\ref{fig:butterflya} is employed to produce two GHZ states shared by $A$, $B_1$ and $B_2$ (Fig.~\ref{fig:butterflyb}). See Thm.~1 of \cite{EKB16b}.
\item These two GHZ states allow to perform two rounds of the NQKD protocol in a single time step.
\end{enumerate}
In contrast, the bipartite entanglement based (2QKD) protocol (also in its prepare-and-measure formulation) can only do a single round, because only two Bell pairs can be distributed (due to the outgoing capacity at $A$). Thus the key rate of the NQKD protocol is twice as high as in the ``standard approach''.\\
From the construction of this example it is clear how it generalizes: If the network allows A to multicast $n$ bits, then a single use of the corresponding quantum network will produce $n$ GHZ states. Thus the NQKD protocol can be performed $n$ times per time step. However, the 2QKD protocol can only perform $\frac{n}{N-1}$ rounds in the same time.
\end{document}
\begin{document}
\bstctlcite{IEEEexample:BSTcontrol}
\title{A comparison between quantum and classical noise radar sources}
\IEEEoverridecommandlockouts
\author{\IEEEauthorblockN{Robert Jonsson}
\IEEEauthorblockA{$^1$Department of Microtechnology\\ and
Nanoscience\\Chalmers University of Technology\\
$^2$Radar Solutions, Saab AB\\
G\"oteborg, Sweden\\
Email: [email protected]}\\
\IEEEauthorblockN{Anders Str\"om}
\IEEEauthorblockA{Radar Solutions, Saab AB\\
G\"oteborg, Sweden}
\and
\IEEEauthorblockN{Roberto Di Candia}
\IEEEauthorblockA{Department of Communications\\ and Networking\\Aalto
University\\
Helsinki, Finland\\
Email: [email protected]}\\[1em]
\IEEEauthorblockN{G\"oran Johansson}
\IEEEauthorblockA{Department of Microtechnology\\ and Nanoscience\\Chalmers
University of Technology\\
G\"oteborg, Sweden}
\and
\IEEEauthorblockN{Martin Ankel}
\IEEEauthorblockA{$^1$Department of Microtechnology\\ and
Nanoscience\\Chalmers University of Technology\\
$^2$Radar Solutions, Saab AB\\
G\"oteborg, Sweden}
}
\maketitle
\begin{abstract} We compare the performance of a quantum radar based on
two-mode
squeezed
states with a classical radar
system based on correlated thermal noise.
With a constraint of equal number of photons $N_S$ transmitted to
probe the environment, we find that the quantum setup exhibits an advantage
with respect to its classical counterpart of $\sqrt{2}$ in the cross-mode
correlations. Amplification of the signal and the idler is considered at
different stages of the protocol, showing that no quantum advantage is
achievable when a large-enough gain is applied, even when quantum-limited
amplifiers are available. We also characterize the minimal type-II error
probability decay, given a constraint on the type-I error probability, and find
that the optimal decay rate of the type-II error probability in the quantum
setup is $\ln(1+1/N_S)$ larger than the optimal classical setup, in the
$N_S\ll1$ regime. In addition, we consider the Receiver Operating
Characteristic (ROC) curves for the scenario when the idler and the received
signal are measured separately, showing that no quantum advantage is present in
this case. Our work characterizes the trade-off between quantum correlations
and noise in quantum radar systems.
\end{abstract}
\IEEEpeerreviewmaketitle
\section{Introduction}
The quantum Illumination (QI)
protocol~\cite{lloyd2008enhanced,tan2008quantum,Lopaeva2013,Zhang2013,Zhang2015,
Quntao2017, Sanz2017} uses entanglement as a resource to improve the detection
of a low-reflectivity object embedded in a bright environment. The protocol was
first developed for a single photon source~\cite{lloyd2008enhanced}, and it was
then extended to general bosonic quantum states and thermal bosonic
channels~\cite{tan2008quantum}. Here, a $6$~dB advantage in the effective
signal-to-noise ratio (SNR) is achievable when using two-mode squeezed states
instead of coherent states. This gain has been recently shown to be optimal,
and reachable {\it exclusively} in the regime of low transmitting power per
mode~\cite{DiCandia2020,Nair2020}. The QI protocol has possible applications in
the spectrum below the Terahertz band, as here the environmental noise is
naturally bright. In particular, microwave quantum technology has been very
well developed in the last decades~\cite{Nori2017}, paving the way for
implementing these ideas and building a first prototype of a quantum radar.
The possible benefits of a quantum radar system are generally understood to be
situational. In an adversarial scenario, it is beneficial for a radar system
to be able to
operate while minimizing the power output, in order to reduce the probability
for the
transmitted signals
to be detected. This
property is commonly referred to as Low Probability of Intercept (LPI), and it
is a common measure to limit the ability of the enemy to localize and discover
the radar. The low signal levels required for QI are in principle excellent for
acquiring good LPI properties. However, there are several challenges to face in
order to achieve this goal.
A first proposal for implementing a microwave QI protocol was advanced in
Ref.~\cite{barzanjeh2015microwave}. The protocol relies on an efficient
microwave-optical interface for the idler storage and the measurement stage.
This technology is promising for this and other applications, however it is
still in its infancy. Furthermore, the signal generation requires cryogenic
technology, which must be interfaced with a room-temperature environment.
Recently, a number of QI-related experiments have been carried out in the
microwave regime~\cite{chang2019quantum,barzanjeh2019experimental}, showing
that some correlations of an entangled signal-idler system are preserved after
the signal is sent out of the dilution refrigerator. While these results are a
good benchmark for future QI experiments, they strictly rely on the
amplification of the signal and idler. This has been shown to rule out any
quantum advantage with respect to an optimal classical
reference~\cite{shapiro2019quantum}.
In this work, we discuss the role of quantum correlations and amplification in
the QI protocol, providing a comparative analysis of quantum and classical
noise radars in different scenarios. Noise radar is an old concept that
operates by probing the environment with a noisy signal and cross-correlating
the returns with a retained copy of the transmitted
signal~\cite{cooper1967random}.
A \textit{quantum} noise radar operates similarly to
its
conventional counterpart, but differs in the use of a two-mode entangled
state as noise source~\cite{chang2019quantum,barzanjeh2019experimental}. An
advantage of the quantum noise radar over the
classical counterpart can be declared
if stronger correlations can be achieved, when both systems
illuminate the environment with equal power. In the microwave regime, the
two-mode squeezed state used for noise
correlations can be generated with
superconducting circuits with a Josephson Parametric Amplifier (JPA) at
$T\simeq 20$~mK~\cite{Flurin2012, Menzel2012}. On the one hand, using quantum
correlated signals generated by a JPA enhances the signal-to-noise ratio in the
low transmitting power per mode regime. On the other hand, Josephson parametric
circuits are able to generate correlated and entangled signals with large
bandwidth~\cite{Wilson2011,johansson2013nonclassical,schneider2020observation}.
This allows, in principle, a system to operate in the low power-per-mode
regime, where quantum radars show fully their advantage.
Here, we analyze the performance of a JPA-based noise radar in different
scenarios which include different sources of noise. Our analysis shows that any
quantum advantage is destroyed by the unavoidable noise added when amplifying
either the signal or the idler. We also show that, when the idler and signal are
measured separately, the entanglement initially present in the signal-idler
system is not properly exploited, and no quantum advantage can be retained. The
latter happens even without amplifying the signal or the idler. Our work
complements the analysis done in Ref.~\cite{shapiro2019quantum} with the
explicit calculations of the cross-correlation coefficients and the optimal
asymptotic ROC performance in the microwave regime.
\section{Theory}\label{sec:theory}
In this section, we introduce the models for the quantum and classical systems,
within the quantum mechanical description. In doing so, we follow
Refs.~\cite{luong2019receiver,shapiro2019quantum}, where the classical and
quantum noise radar were first studied. In all expressions, we
use natural units ($\hbar =1$, $k_\text{B}=1$).
\subsection{Quantum preliminaries}\label{ssec:quantum}
A single, narrowband mode of the electric field, at microwave frequency $f$,
is defined with an
operator (in suitable units) as
$\hat{E} = \hat{q}\cos{2\pi f t} + \hat{p}\sin{2\pi ft},$
where $\hat{q}$ and $\hat{p}$ are the in-phase and quadrature operators,
respectively. The quadratures are related to the bosonic annihilation
($\hat{a}$) and creation ($\hat{a}^\dagger$)
operators as
$\hat{q}=(\hat{a}^\dagger{+}\hat{a})/\sqrt{2}$ and
$\hat{p}=\mathrm{i}(\hat{a}^\dagger{-}\hat{a})/\sqrt{2}$, where
$[\hat{a},\hat{a}^\dagger]=\hat{\mathbb{I}}$.
The commutation relation
$[\hat{q},\hat{p}]=\mathrm{i}\hat{\mathbb{I}}$ implies that the quadratures cannot be measured simultaneously with arbitrary precision, due to the Heisenberg uncertainty
relation. In the following, we represent the quadratures of the two modes of
the electric field by the
vector $\hat{X}=(\hat{q}_S,\hat{p}_S,\hat{q}_I,\hat{p}_I)^T$,
where the indices $S$
and
$I$ refer
to the signal and idler mode, respectively. These
mode designations are used interchangeably for both the quantum and classical
system.
\subsubsection{Classically-correlated thermal noise}
\begin{figure}
\caption{{\bf Preparation of classically-correlated thermal noise.}}
\label{fig:beamsplitter}
\end{figure}
The classically-correlated noise (CCN) system uses two sources of thermal
noise, $\hat a_0$ and $\hat a_1$, at temperatures $T_0$ and $T_1$,
respectively. In general, the quantum state of a thermal noise mode at
temperature $T$ can be represented by the density operator
\begin{equation}
\mathbf{{\rho}}_{th} =
\sum_{n=0}^{\infty}\frac{N^n}{\left(N+1\right)^{n+1}}\ket{n}\bra{n},
\end{equation}
where the average number\footnote{All variables using the symbol $N$ refer
to mode quanta and should
not be confused with
the noise \textit{figure} of a microwave component, which often shares the
same
symbol.} of photons is defined by
the thermal equilibrium Bose-Einstein statistics at temperature $T$, i.e.,
$N=\left[\exp\left(2\pi
f/T\right)-1\right]^{-1}$. In the following, we will refer as $N_0$ ($N_1$) the
average number of photons for the mode $\hat a_0$ ($\hat a_1$). The thermal
modes $\hat a_0$ and $\hat a_1$ pass
through a
beamsplitter, as shown in Fig.~\ref{fig:beamsplitter}. This generates a signal
mode $\hat{a}_S^{(C)}$ and an idler mode $\hat{a}_I^{(C)}$, related to the
inputs as~\cite{loudon2000quantum}
\begin{equation}
\begin{pmatrix}
\hat{a}_S^{(C)}\\\hat{a}_I^{(C)}
\end{pmatrix} = \begin{pmatrix}
\sqrt{\xi} & \sqrt{1-\xi}\mathrm{e}^{\mathrm{i}\varphi}
\\-\sqrt{1-\xi}\mathrm{e}^{-\mathrm{i}\varphi} & \sqrt{\xi}
\end{pmatrix}\begin{pmatrix}
\hat{a}_0 \\ \hat{a}_1
\end{pmatrix}\label{eq:beamsplitter}.
\end{equation}
Here, $\xi\in(0,1)$ is the reflection coefficient and $\varphi$ is the phase angle of the beamsplitter, which in the following is set to zero.
One can think of this process as a noise signal, generated by a thermal source
at temperature $T_0$, sent as input of a power divider placed in an environment
at temperature $T_1$. The output modes $\hat{a}_S^{(C)}$ and $\hat{a}_I^{(C)}$
are in a thermal state with $\xi N_0+(1-\xi)N_1$ and $\xi N_1+(1-\xi)N_0$
average number of photons, respectively. If $T_1\not=T_0$, or, equivalently,
$N_1\not=N_0$, then the outputs are classically-correlated, regardless of the
value of $\xi$.
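To make this statement quantitative (a step we spell out here for illustration), Eq.~(\ref{eq:beamsplitter}) with $\varphi=0$ gives for the symmetrised cross-covariance of the output quadratures
\begin{equation*}
\frac{1}{2}\braket{\hat{q}_S^{(C)}\hat{q}_I^{(C)}+\hat{q}_I^{(C)}\hat{q}_S^{(C)}}
=\sqrt{\xi(1-\xi)}\,\left(N_1-N_0\right),
\end{equation*}
and the same expression holds for the $p$ quadratures; the cross-correlation therefore vanishes exactly when $N_0=N_1$.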
\subsubsection{Entangled thermal noise}
A Two-Mode Squeezed
Vacuum (TMSV) state $\ket{\psi}_{\rm TMSV}$ is represented in the
Fock basis as
\begin{equation}
\ket{\psi}_{\rm
TMSV}=\sum_{n=0}^{\infty}\sqrt{\frac{N_S^n}{\left(N_S+1\right)^{n+1}}}\ket{n}_S\ket{n}_I,
\end{equation}
where $N_S$ is the average number of photons in both
the
signal and idler mode.
A TMSV state is closely related to classically-correlated thermal noise, as
also here both signal and idler photons are Bose-Einstein distributed. However,
as we will see, the resulting correlations in the low signal-power regime are
{\it stronger} for the TMSV states.
\subsection{Covariance and correlation matrices}
As the states considered here are Gaussian, their statistics are entirely
determined by
the first- and second-order quadrature moments. For zero-mean states, i.e.,
when $\braket{\hat{X}_i}=0$ for all $i$, the states are
characterized entirely by
the covariance matrix $\mathbf{\Sigma}$, with elements
\begin{equation}\label{covariance}
\mathbf{\Sigma}_{i,j} =
\frac{1}{2}\braket{\hat{X}_i\hat{X}_j^\dagger+\hat{X}_j\hat{X}_i^\dagger}-\braket{\hat{X}_i}\braket{\hat{X}_j^\dagger}.
\end{equation}
This is the case for both the classical and the entangled thermal noise states.
Similarly, one can introduce the correlation coefficient matrix $\mathbf{R}$,
whose elements are
\begin{equation}
\mathbf{R}_{i,j} =
\frac{\Sigma_{i,j}}{\sqrt{\Sigma_{i,i}}\sqrt{\Sigma_{j,j}}}\in[-1,1].\label{eq:pearson}
\end{equation}
These coefficients, also referred to as
\textit{Pearson's correlation coefficients}, characterize the linear
dependence between the quadratures $\hat{X}_i$ and $\hat{X}_j$.
\subsection{Quantum relative entropy}\label{relative_entropy}
The quantum relative entropy defines an information measure between two quantum
states. It is defined as
\begin{equation}
D(\rho_1||\rho_0)=\text{Tr}\,\rho_1(\ln \rho_1-\ln\rho_0),
\end{equation}
for two density matrices $\rho_0$ and $\rho_1$. This quantity is related to the
performance in the asymmetric binary hypothesis testing via the quantum Stein's
lemma. The task is to discriminate between $M$ copies of $\rho_0$ and $M$
copies of $\rho_1$, given a bound on the type-I error probability
(\textit{probability of false alarm}, $P_{Fa}$) of $\varepsilon\in(0,1)$. In
this discrimination, the maximum type-II error probability (\textit{probability
of miss}, $P_M$) exponent is
\begin{equation}\label{Steins}\small
-\frac{\ln P_M}{M} = D(\rho_1 || \rho_0)+\sqrt{\frac{V(\rho_1 ||
\rho_0)}{M}}\Phi^{-1}(\varepsilon)+\mathcal{O}\left(\frac{\ln M}{M}\right),
\end{equation}
where
$V(\rho_1||\rho_0)=\text{Tr}\,\rho_1[\ln\rho_1-\ln\rho_0-D(\rho_1||\rho_0)]^2$
is the quantum relative entropy variance and $\Phi^{-1}$ is the inverse
cumulative normal distribution~\cite{Wilde2017}. In this work, we rely on
quantum relative entropy computations and its variance in order to quantify the
performance in the asymptotic setting, i.e., when $M\gg1$. This is in contrast
to the original treatment based on the Chernoff bound~\cite{tan2008quantum},
which provides an estimation of the average error probability when the prior
probabilities of target absence or presence are equal. In a typical radar
scenario, the prior probabilities are not the same, and may be even unknown.
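As a hedged illustration of Eq.~\eqref{Steins} (the numerical values below are placeholders, not results of this work), the leading terms of the miss-probability exponent can be evaluated as follows.
\begin{verbatim}
# Sketch of the second-order expansion in Eq. (Steins): given assumed values
# of D(rho_1||rho_0), its variance V, the number of copies M and the
# false-alarm bound eps, return -ln(P_M)/M up to O(ln M / M) corrections.
import numpy as np
from scipy.stats import norm

def miss_exponent(D, V, M, eps):
    return D + np.sqrt(V / M) * norm.ppf(eps)

print(miss_exponent(D=1e-3, V=2e-3, M=10**6, eps=1e-3))
\end{verbatim}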
\section{Noise radar operation}\label{sec:radar}
In this section, we analyze the performance of the classical and quantum noise
correlated radars, based on the states defined in the previous section.
\subsection{Probing the environment}
The signal mode is
transmitted to probe the environment where an object (\textit{target}) may be
present or absent. This process is modelled as a channel with reflection
coefficient
$\eta$ that is non-zero and small when the target is present ($0<\eta\ll 1$)
and zero when the target is absent ($\eta =0$), see
Fig.~\ref{fig:overview}.
Here, $\eta$ can be interpreted as the ratio between received
power and transmitted power, including the effects of atmospheric attenuation, the
antenna
gain and the target radar cross section. We use a beamsplitter model to take
into account the environmental losses. In other words, the returned mode $\hat
a_R$ is given by
\begin{equation}\label{return}
\hat{a}_R =
\sqrt{\eta}~\hat{a}_S\mathrm{e}^{-\mathrm{i}\theta}+\sqrt{1-\eta}~\hat{a}_B,
\end{equation}
where $\hat{a}_B$ is a bright background noise mode with
$\braket{\hat{a}_B^\dagger\hat{a}_B} = N_B$ average power per mode, and where
$\theta$ is a phase shift relative to the idler. In the $1-10$~GHz regime,
where the technology is advanced enough to apply these ideas in the quantum
regime, we have that $N_B\simeq 10^3$, which is assumed for numerical
computations. For the current calculation,
the reflection coefficient is assumed to be non-fluctuating. We also assume
$\braket{\hat{a}_I\hat{a}_B}=0$, i.e., the returned signal preserves some
correlations with the idler mode only if the object is present. This allows us
to define a correlation detector able to detect the presence or absence of the
object.
\begin{figure}
\caption{{\bf Scheme of the quantum and classical noise radar systems.}}
\label{fig:overview}
\end{figure}
\subsection{Cross-correlation coefficient}
The covariance matrix of the system composed of the received signal and the
idler is easily computable using Eq.~\eqref{covariance} and Eq.~\eqref{return}.
For both the classical and the quantum noise radars considered here, applying
Eq.~\eqref{eq:pearson} gives us the correlation matrix
\begin{equation}
\mathbf{R} = \begin{pmatrix}
\mathbf{I} & \kappa \mathbf{D}(\theta) \\ \kappa \mathbf{D}(\theta)^T &
\mathbf{I}
\end{pmatrix},\label{eq:corrMat}
\end{equation}
where $0\leq\kappa\leq 1$ is the amplitude of the cross-correlation
coefficient,
and
$\mathbf{D}$ is a matrix with determinant $\left|\mathbf{D}\right|=\pm1$.
The cross-correlation coefficient for the entangled TMSV source can be derived
directly from the definition,
\begin{align}
\kappa_\text{TMSV} &= \frac{\sqrt{\eta
N_S(N_S+1)}}{\sqrt{N_R+\frac{1}{2}}\sqrt{N_S+\frac{1}{2}}},\label{eq:kappaQRI}
\end{align}
with $N_R =\eta
N_S+(1-\eta)N_B$.
For a fair comparison between the quantum and classical systems, we introduce a
constraint
on the transmitted power of the signal modes, i.e., we set
\begin{equation}
N_S = \xi N_0+(1-\xi)N_1.\label{eq:constraint0}
\end{equation}
This constraint can be interpreted as giving both systems equal LPI properties.
Eq.~\eqref{eq:constraint0} yields an expression for the classical
cross-correlation amplitude as
\begin{equation}
\kappa_\text{CCN}=\frac{\sqrt{\eta}(N_S-N_1)}{\sqrt{N_R+\frac{1}{2}}\sqrt{N_S-N_1+\frac{\xi}{1-\xi}\left(N_1+\frac{1}{2}\right)}},\label{eq:kappaCres}
\end{equation}
where $N_S>N_1$ follows directly from Eq.~\eqref{eq:constraint0} and the
assumption $N_0>N_1$.
This quantity is maximal in the $N_1\ll1$ regime, where
Eq.~\eqref{eq:constraint0} reduces to $N_S = \xi N_0$. We assume $N_1 \ll1$,
which corresponds to classically-correlated thermal noise generated in a
noise-free environment. At microwave frequencies this is achievable at mK
temperatures. Eq.~\eqref{eq:kappaCres}, for given noise transmitting power,
defines the correlations of a class of classical noise radars, labeled by the
beamsplitter parameter $\xi$.
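For concreteness, the sketch below evaluates Eqs.~\eqref{eq:kappaQRI} and \eqref{eq:kappaCres} for illustrative parameter values (the numbers are assumptions, not measured data), in the $N_1\ll1$ limit discussed above.
\begin{verbatim}
# Cross-correlation amplitudes of the TMSV and classical (CCN) noise radars,
# Eqs. (kappaQRI) and (kappaCres), for assumed parameter values.
import numpy as np

def kappa_tmsv(NS, NB, eta):
    NR = eta * NS + (1 - eta) * NB
    return np.sqrt(eta * NS * (NS + 1)) / np.sqrt((NR + 0.5) * (NS + 0.5))

def kappa_ccn(NS, NB, eta, xi, N1=0.0):    # N1 = 0 models the N_1 << 1 limit
    NR = eta * NS + (1 - eta) * NB
    den = np.sqrt((NR + 0.5) * (NS - N1 + xi / (1 - xi) * (N1 + 0.5)))
    return np.sqrt(eta) * (NS - N1) / den

NS, NB, eta = 0.1, 1e3, 1e-2               # assumed microwave-regime values
print(kappa_tmsv(NS, NB, eta))
print(kappa_ccn(NS, NB, eta, xi=0.5), kappa_ccn(NS, NB, eta, xi=1e-4))
\end{verbatim}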
\subsection{Quantum advantage}
It is easy to see that the quantity $\kappa^2$ is directly proportional to the
\textit{effective} SNR in the likelihood-ratio tests. A larger value of
$\kappa^2$ means stronger discrimination power. Therefore, we define a
figure of merit $Q_A$, quantifying the advantage of the quantum over the
classical noise radar, as
\begin{align}\label{eq:advantage}
Q_A &\equiv \frac{\kappa_\text{TMSV}^2}{\kappa_\text{CCN}^2} =
\frac{N_S+1}{N_S+\frac{1}{2}}\left[1+\frac{\xi}{2N_S(1-\xi)}\right],
\end{align}
which can be evaluated for different values of the free parameter $\xi$.
Restricting the constraint to equal power in both the signal and idler
mode is equivalent to applying a 50-50
beamsplitter to the thermal noise source in Eq.~\eqref{eq:beamsplitter}, or, in
other words, it corresponds to setting $\xi~=~1/2$. This gives
\begin{equation}
Q_{A}(\xi=1/2) = 1+\frac{1}{N_S},
\end{equation}
which is unbounded for $N_S\rightarrow 0$. This setting has been used as a
benchmark in the recent microwave quantum illumination
experiments~\cite{chang2019quantum,barzanjeh2019experimental}. However, this
choice of $\xi$ is not optimal, leading to an overestimation of the quantum
radar advantage\footnote{This criticism was already raised by J. H. Shapiro in
Ref.~\cite{shapiro2019quantum}.}. A strongly
\textit{asymmetric} beamsplitter must be applied to a very bright noise
source ($N_0\gg1$) in order to
maximize the correlations in the classical case, while maintaining the
constraint on transmitted power $N_S=\xi N_0$.
It can be seen from Eq.~\eqref{eq:advantage} that $Q_A$ is minimized in the
$\xi/(1-\xi)\ll N_S$ limit, where the classical correlations are maximized. In this setting, the CCN idler has a much better
SNR than in the symmetric configuration, and we get
\begin{equation}
Q_{A}(\xi/(1-\xi)\ll N_S)\approx
2\left[1-\frac{N_S}{2N_S+1}\right].
\end{equation}
In the low transmitting power limit we have that $\lim\limits_{N_S\rightarrow
0}Q_{A}=2$, i.e., a $\sqrt{2}$ advantage in the correlation coefficient.
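A two-line numerical check of Eq.~\eqref{eq:advantage} (parameter values are illustrative) reproduces both limits quoted above.
\begin{verbatim}
# Advantage merit Q_A of Eq. (advantage) in the symmetric (xi = 1/2) and
# strongly asymmetric (xi/(1-xi) << N_S) settings, for an assumed N_S.
def advantage(NS, xi):
    return (NS + 1) / (NS + 0.5) * (1 + xi / (2 * NS * (1 - xi)))

NS = 0.1
print(advantage(NS, xi=0.5))    # equals 1 + 1/N_S (overestimated advantage)
print(advantage(NS, xi=1e-4))   # close to 2*[1 - N_S/(2*N_S + 1)]
\end{verbatim}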
In Fig.~\ref{fig:cross-corr} we show the functional behaviour of the
correlation coefficients for the entangled and classically-correlated thermal
noise source, depending on the transmitting power $N_S$. The quantum advantage
decays slowly with increasing $N_S$, keeping an advantage also for finite
$N_S$, until virtually disappearing for $N_S\simeq 10$.
Note that in the limit where the advantage is
maximized,
the modes also become uncorrelated ($\kappa\rightarrow0$) in both systems.
A signal-idler system with a large operating bandwidth is needed to compensate
the low amount of correlations per mode in the $N_S\ll1$ limit.
Here, the phase-shift of the signal mode due to the propagating path has been
set to zero. In other words, we work in a rotated frame, where the inter-mode
phase angle $\theta$ is applied only to the idler mode, i.e., $\hat{a}_I
\mapsto
\hat{a}_I\mathrm{e}^{-\mathrm{i}\theta}$.
Note that, in general, the phase
$\theta$ is unknown. The original QI protocol assumes the knowledge of the
inter-mode phase angle $\theta$.
In this case, the complex conjugate receiver defined in Ref.~\cite{Guha2009}
saturates the quantum advantage given in Eq.~\eqref{eq:advantage} with a
likelihood-ratio test~\cite{Zhuang2017neyman}. If $\theta$ is unknown, then one
can define an adaptive strategy where $\mathcal{O}(\sqrt{M})$ copies are used
to get a rough estimate of $\theta$, and then $M-\mathcal{O}(\sqrt{M})$ are
used to perform the discrimination protocol in the optimal reference frame
maximizing the Fisher information~\cite{Sanz2017}. This strategy shows the same
asymptotic performance as in the case of known $\theta$.
\begin{figure}
\caption{{\bf Performance of the quantum and classical noise radars in
terms of the cross-correlation coefficient.}}
\label{fig:cross-corr}
\end{figure}
\subsection{The effect of amplification}
In the following, we consider three Gaussian amplifying schemes. We show how
the quantum advantage defined in terms of the cross-correlation coefficients
rapidly disappears when amplification is involved at any stage of the
protocol.
\subsubsection{Amplification before transmitting the signal
mode}\label{ssec:SAmp}
If we amplify the signal mode before transmission to the
environment, the transmitted mode is
$\hat{a}_S'=\sqrt{G_S}~\hat{a}_S+\sqrt{G_S-1}~\hat{a}_{G_S}^\dagger,$
where $G_S\geq1$ is the gain and where $\braket{\hat{a}_{G_S}^\dagger
\hat{a}_{G_S}}=N_{G_S}$ is the added noise. The classical and quantum
covariance matrices transform identically.
Correspondingly, the
advantage in Eq.~\eqref{eq:advantage} is actually
\textit{independent} of the signal amplification. However, although $G_S$
counteracts the
loss due to $\eta$, the added noise
limits the absolute correlations that can be achieved. In addition, in the weak
signal regime where the quantum advantage is relevant, the effective SNR is low
even before
amplification. Amplifying a signal with low SNR is trivially outperformed by
using
a stronger signal with better SNR to begin with, as is the case of a
classically-correlated thermal noise with $G_SN_S+(G_S-1)(N_{G_S}+1)$ average
signal photons. In addition, any sufficiently strong amplification applied at
this stage would weaken LPI properties, and possibly move the system to a
regime where another protocol outperforms correlation detection.
\subsubsection{Amplification before measurement}
These amplification schemes describe the pre-amplification of the modes in
heterodyne quadrature measurements.
With $G_R\geq1$ amplifying the returned mode, as
$\hat{a}_R'=\sqrt{G_R}~\hat{a}_R+\sqrt{G_R-1}~\hat{a}_{G_R}^\dagger,$
the covariance matrix again transforms identically for the quantum and
classical systems,
and any added noise cancels in the advantage merit. However, as can be
intuitively
understood,
amplification at the receiver end cannot increase correlations with the idler
reference. Indeed, the added amplifier noise
reduces correlations,
but equally in both systems.
Instead, applying the amplification $G_I\geq1$ to the idler, as
$\hat{a}_I'=\sqrt{G_I}~\hat{a}_I+\sqrt{G_I-1}~\hat{a}_{G_I}^\dagger,$
is actually
detrimental to the
advantage of Eq.~\eqref{eq:advantage}.
In the limit of strong amplification ($G_I\gg1$), the advantage merit is
\begin{equation}
Q_A =
\frac{N_S+1}{N_S+N_{G_{I}}+1}\left[1+\frac{\xi(N_{G_I}+1)}{N_S(1-\xi)}\right],
\end{equation}
where $\braket{\hat{a}_{G_I}^\dagger\hat{a}_{G_I}}=N_{G_I}$ is the added noise.
We consider the ideal case of quantum limited amplifier noise
($N_{G_I}\rightarrow0$), which simplifies the
quantum
advantage to
\begin{equation}
Q_A = 1+\frac{\xi}{N_S(1-\xi)}.
\end{equation}
In this form it is clear that in the $\xi/(1-\xi)\ll N_S$ setting, all the
quantum advantage is destroyed. Conversely,
an unlimited quantum advantage can be observed in the $N_S\ll\xi$ regime.
\subsection{Receiver Operating Characteristic performance}
In this section, we analyze the quantum and classical noise radar in the
asymmetric setting, i.e., when the prior probabilities are not equal, from a
different perspective. We derive the ROC curves in the case when the idler and
the signal are separately measured with heterodyne detection.
In addition, we show the asymptotic performances of the optimal quantum
strategies.
\subsubsection{ROC curves with heterodyne detection}
We assume measurement is performed with heterodyne detection of the four
quadratures $\Vec{X}=\braket{\hat{X}}$. Based on the observed data set
$\mathbf{X}=\{\Vec{X}_1,\Vec{X}_2,\ldots,\Vec{X}_M\}$, an optimal threshold
detector is computed by maximizing the log-likelihood ratio test around
$\eta\ll 1$.
By Wilks's theorem, this detector has an asymptotically chi-squared
distribution. The ROC is computed in terms of the
probability of detection ($P_D=1-P_M$)
as
\begin{equation}
P_D = Q_{\chi_1'^2(2M\kappa^2)}\left(Q_{\chi_1^2}^{-1}(P_{Fa})\right),
\end{equation}
where $Q$ is defined as the right-tail probability of the respective
distribution. The performance for this detector is presented with an
example in Fig.~\ref{fig:ROC}, for realistic parameters in the microwave
regime. We see that the quantum source is no better than a classical
source when $\xi/(1-\xi)\ll N_S$.
This is due to the amount of noise added with heterodyne detection.
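A sketch of this ROC expression in terms of standard chi-squared routines (the parameter values are placeholders) reads as follows; the detection probability is the non-central chi-squared survival function evaluated at the central chi-squared threshold.
\begin{verbatim}
# Heterodyne-detection ROC curve: P_D as a function of P_Fa for an assumed
# cross-correlation amplitude kappa and number of modes M.
from scipy.stats import chi2, ncx2

def p_detection(kappa, M, p_fa):
    thresh = chi2.isf(p_fa, df=1)              # inverse right tail of chi^2_1
    return ncx2.sf(thresh, df=1, nc=2 * M * kappa**2)

print(p_detection(kappa=1e-3, M=10**7, p_fa=1e-3))
\end{verbatim}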
\begin{figure}
\caption{{\bf ROC curves with heterodyne detection for different radar
systems.}}
\label{fig:ROC}
\end{figure}
\subsubsection{Quantum Stein's lemma}
We provide the asymptotic performance as given in
Subsection~\ref{relative_entropy}. This analysis has been carried out for a radar
system based on coherent states~\cite{Wilde2017}, but the classically-correlated
noise case was missing. The quantum relative entropy governs the asymptotic
decay of the error probability. The relative entropies are given by
\begin{equation}\small
D_\text{TMSV}\simeq\frac{\eta
N_S(N_S+1)}{N_S+N_B+1}\left[\ln\left(1+\tfrac{1}{N_B}\right)+\ln\left(1+\tfrac{1}{N_S}\right)\right],
\label{DQ}
\end{equation}
\begin{equation}\small
D_\text{CCN}\simeq\frac{\eta N_S^2}{N_S-\tfrac{\xi N_B}{1-\xi}
}\left[\ln\left(1{+}\tfrac{1}{N_B}\right){-}\ln\left(1{+}\tfrac{\xi}{N_S(1-\xi)}\right)\right],\label{DC}
\end{equation}
to first order in $\eta$. One can easily see that the $\xi/(1-\xi)\ll N_S$
setting is optimal for the classical case. In particular, in the $N_B\gg1\gg
N_S\gg\xi$ regime, we have that $D_\text{TMSV}/D_\text{CCN}\sim\ln(1+1/N_S)$. A
substantial advantage can be also found for moderate values of $N_S$ (see
Fig.~\ref{fig:errorexp}). The variance of the quantum relative entropy provides
the convergence rate of the error probability exponent to its asymptotic value,
for the type-I error probability constrained to be lower than $\varepsilon$. We
do not provide the explicit formula here. However, in Fig.~\ref{fig:errorexp}
we depict an example for the values of $\xi$ previously considered, and for
$\varepsilon=10^{-3}$.
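As a hedged numerical illustration of Eqs.~\eqref{DQ} and \eqref{DC} (all parameter values below are assumptions), the ratio $D_\text{TMSV}/D_\text{CCN}$ can be evaluated directly.
\begin{verbatim}
# First-order relative entropies of Eqs. (DQ) and (DC) in the
# N_B >> 1 >> N_S >> xi regime, for assumed parameter values.
import numpy as np

def d_tmsv(NS, NB, eta):
    return (eta * NS * (NS + 1) / (NS + NB + 1)
            * (np.log1p(1 / NB) + np.log1p(1 / NS)))

def d_ccn(NS, NB, eta, xi):
    pref = eta * NS**2 / (NS - xi * NB / (1 - xi))
    return pref * (np.log1p(1 / NB) - np.log1p(xi / (NS * (1 - xi))))

NS, NB, eta, xi = 0.1, 1e3, 1e-2, 1e-6
print(d_tmsv(NS, NB, eta) / d_ccn(NS, NB, eta, xi))  # roughly ln(1 + 1/N_S)
\end{verbatim}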
\begin{figure}
\caption{{\bf Asymptotic error probability exponent in the asymmetric
setting.}}
\label{fig:errorexp}
\end{figure}
\section{Conclusion and Outlook}\label{sec:conclude}
We have compared the performance of a quantum noise radar based on two-mode
squeezed states and a class of noise radars based on thermal states. We have
found that, given a constraint on the transmitting power, a quantum advantage
in the ROC curves and in their asymptotic performance is possible.
If a low-powered signal of a quantum noise radar is amplified, then there is a
classical noise radar which considerably outperforms the quantum radar. In
addition, if enough noise is added at the idler level, such as when it is
amplified or measured with heterodyne detection, then all the quantum advantage is lost.
However, a quantum advantage appears when the idler and signal are allowed to
be measured jointly. Our results show that amplification is not a good strategy
to overcome the technical difficulties that one must face in a practical
quantum radar implementation. This suggests that quantum radars are more
difficult to achieve than recent experiments have
claimed~\cite{barzanjeh2019experimental, chang2019quantum}. Interfacing a
large-bandwidth entangled microwave signal with the environment and developing
low-noise power detectors are crucial for performing a QI experiment with a
quantum advantage.
\section*{Acknowledgment}
The authors would like to thank Per Delsing and Philip Krantz for very useful
discussions, and Jeffrey H. Shapiro for reviewing the manuscript. The authors
acknowledge the Knut and Alice Wallenberg foundation for funding. RDC
acknowledges support from the
Marie Sk{\l}odowska Curie fellowship number 891517 (MSC-IF
Green-MIQUEC).
\end{document}
\begin{document}
\title{
An Active Set Algorithm for Nonlinear Optimization with Polyhedral Constraints
\thanks{
February 28, 2016.
The authors gratefully acknowledge support by the National
Science Foundation under grants 1522629 and 1522654,
by the Office of Naval Research under grants
N00014-11-1-0068 and N00014-15-1-2048, by the Air Force Research
Laboratory under contract FA8651-08-D-0108/0054, and by
the National Science Foundation of China under grant 11571178.
}}
\begin{abstract}
A polyhedral active set algorithm PASA is developed for solving a
nonlinear optimization problem whose feasible set is a polyhedron.
Phase one of the algorithm is the gradient projection method,
while phase two is any algorithm for solving a linearly constrained
optimization problem.
Rules are provided for branching between the two phases.
Global convergence to a stationary point is established,
while asymptotically PASA performs
only phase two when either a nondegeneracy assumption holds,
or the active constraints are linearly independent and a strong
second-order sufficient optimality condition holds.
\end{abstract}
\begin{keywords}
polyhedral constrained optimization,
active set algorithm, PASA, gradient projection algorithm,
local and global convergence
\end{keywords}
\begin{AMS}
90C06, 90C26, 65Y20
\end{AMS}
\pagestyle{myheadings}
\thispagestyle{plain}
\markboth{W. W. HAGER and H. ZHANG}
{POLYHEDRAL CONSTRAINED NONLINEAR OPTIMIZATION}
\section{Introduction}
We develop an active set algorithm for a general
nonlinear polyhedral constrained optimization problem
\begin{equation} \label{P}
\min \; \{ f (\m{x}) : \m{x} \in \Omega \}, \quad
\mbox{where }
\Omega = \{ \m{x} \in \mathbb{R}^n: \m{Ax} \le \m{b} \}.
\end{equation}
Here $f$ is a real-valued, continuously differentiable function,
$\m{A} \in \mathbb{R}^{m \times n}$, $\m{b} \in \mathbb{R}^m$, and
$\Omega$ is assumed to be nonempty.
In an earlier paper \cite{hz05a}, we developed an active set algorithm
for bound constrained optimization.
In this paper, we develop new machinery for handling the more
complex polyhedral constraints of (\ref{P}).
Our polyhedral active set algorithm (PASA) has two phases:
phase one is the gradient projection algorithm,
while phase two is any algorithm for solving a linearly constrained
optimization problem over a face of the polyhedron $\Omega$.
The gradient projection algorithm of phase one
is robust in the sense that it converges
to a stationary point under mild assumptions, but the convergence rate
is often linear at best.
When optimizing over a face of the polyhedron in phase two,
we could accelerate the convergence through the use of a superlinearly
convergent algorithm based on conjugate gradients,
a quasi-Newton update, or a Newton iteration.
In this paper, we give rules for switching between phases
which ensure that asymptotically, only phase two is performed.
Hence, the asymptotic convergence rate of PASA coincides with the
convergence rate of the scheme used to solve the linearly constrained
problem of phase two.
A separate paper will focus on a specific numerical implementation of PASA.
We briefly survey some of the rich history of active set methods.
Some of the initial work focused on
the use of the conjugate gradient method with bound constraints
as in \cite{dt83, do97, do03, dfs03, mt91, po69, yt91}.
Work on gradient projection methods includes
\cite{be76, cm87, G64,lp66, mt72, sp97}.
Convergence is accelerated using Newton and trust region
methods \cite{cgt00}.
Superlinear and quadratic convergence for nondegenerate problems
can be found in \cite{be82,bmt90, cgt88, fjs98}, while
analogous convergence results are given in
\cite{flp02, fms94, le91, lm99}, even for degenerate problems.
The affine scaling interior point approach
\cite{bcl99, cl94, cl96, cl97, dhv98, HagerZhang13a, huu99, kk05, uuh99, zh04}
is related to the trust region algorithm.
Linear, superlinear, and quadratic convergence results have been established.
Recent developments on active set methods for quadratic programming problems
can be found in \cite{ForsgrenGillWong15, GillWong15}.
A treatment of active set methods in a rather general setting is
given in \cite{IzmailovSolodov14}.
We also point out the recent work \cite{GoldbergLeyffer15} on
a very efficient two-phase
active set method for conic-constrained quadratic programming, and
the earlier work \cite{FriedlanderLeyffer08} on a two-phase
active set method for quadratic programming.
As in \cite{hz05a} the first phase in both applications
is the gradient projection method.
The second phase is a Newton method in \cite{GoldbergLeyffer15},
while it is a linear solver in \cite{FriedlanderLeyffer08}.
Note that PASA applies to the general nonlinear objective in (\ref{P}).
Active set strategies were applied to
$\ell_1$ minimization in \cite{wyzg12,wygz10} and in \cite{hz15}
they were applied to the minimization of a nonsmooth dual
problem that arises when projecting a point onto a polyhedron.
In \cite{wyzg12,wygz10}, a nonmonotone line search based on
``Shrinkage" is used to estimate a support at the solution,
while a nonmonotone SpaRSA algorithm \cite{HagerPhanZhang11,SpaRSA} is used
in \cite{hz15} to approximately identify active constraints.
Unlike most active set methods in the literature,
our algorithm is not guaranteed to identify the active constraints in
a finite number of iterations due to the structure of the line
search in the gradient projection phase.
Instead, we show that only the fast phase two algorithm is performed
asymptotically, even when strict complementary slackness is violated.
Moreover, our line search only requires one projection in
each iteration, while algorithms that identify active constraints often
employ a piecewise projection scheme that may require
additional projections when the stepsize increases.
The paper is organized as follows.
Section~\ref{structure} gives a detailed statement of the polyhedral
active set algorithm, while Section~\ref{global} establishes its global
convergence.
Section~\ref{stationarity} gives some properties for the solution
and multipliers associated with a Euclidean projection onto $\Omega$.
Finally, Section~\ref{nondegenerate} shows that asymptotically
PASA performs only phase two when converging to a nondegenerate
stationary point, while Section~\ref{degenerate} establishes the
analogous result for degenerate problems when the active constraint
gradients are linearly independent and a strong second-order sufficient
optimality condition holds.
{\bf Notation.}
Throughout the paper, $c$ denotes a generic nonnegative constant which
has different values in different inequalities.
For any set $\C{S}$, $|\C{S} |$ stands for the number of elements
(cardinality) of $\C{S}$, while $\C{S}^c$ is the complement of $\C{S}$.
The set $\C{S} - \m{x}$ is defined by
\[
\C{S} - \m{x} = \{ \m{y} - \m{x} : \m{y} \in \C{S} \}.
\]
The distance between a set $\C{S} \subset \mathbb{R}^n$ and a point
$\m{x} \in \mathbb{R}^n$ is given by
\[
\mbox{dist }(\m{x}, \C{S}) = \inf \{ \|\m{x} - \m{y}\|: \m{y} \in \C{S} \},
\]
where $\| \cdot \|$ is the Euclidean norm.
The subscript $k$ is often used to denote the iteration number
in an algorithm, while $x_{ki}$ stands for the $i$-th component of
the iterate $\m{x}_k$.
The gradient $\nabla f(\m{x})$ is a row vector while
$\m{g}(\m{x}) = \nabla f(\m{x})^{\sf T}$ is the gradient arranged as a column vector;
here $^{\sf T}$ denotes transpose.
The gradient at the iterate $\m{x}_k$ is $\m{g}_k = \m{g}(\m{x}_k)$.
In several theorems, we assume that $f$ is Lipschitz continuously differentiable
in a neighborhood of a stationary point $\m{x}^*$.
The Lipschitz constant for $\nabla f$ is always denoted $\kappa$.
We let $\nabla^2 f(\m{x})$ denote the Hessian of $f$ at $\m{x}$.
The ball with center $\m{x}$ and radius $r$ is denoted
$\C{B}_{r}(\m{x})$.
For any matrix $\m{M}$, $\C{N}(\m{M})$ is the null space.
If $\C{S}$ is a subset of the row indices of $\m{M}$,
then $\m{M}_{\C{S}}$ denotes the submatrix of $\m{M}$ with row indices $\C{S}$.
For any vector $\m{b}$, $\m{b}_{\C{S}}$
is the subvector of $\m{b}$ with indices $\C{S}$.
$\C{P}_\Omega (\m{x})$ denotes the Euclidean projection of $\m{x}$ onto $\Omega$:
\begin{equation} \label{proj}
\C{P}_{\Omega} (\m{x}) =
\arg \; \min \{ \|\m{x} - \m{y}\| : \m{y} \in \Omega \} .
\end{equation}
For any $\m{x} \in \Omega$, the active and free index sets are
defined by
\[
\C{A}(\m{x}) = \{ i: (\m{Ax} - \m{b})_i = 0\} \quad \mbox{and} \quad
\C{F}(\m{x}) = \{ i: (\m{Ax} - \m{b})_i < 0\},
\]
respectively.
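As a purely illustrative numerical sketch (this is not part of PASA; the random instance and the generic solver are our own choices), the projection in (\ref{proj}) and the index sets $\C{A}(\m{x})$ and $\C{F}(\m{x})$ could be computed as follows.
\begin{verbatim}
# Euclidean projection onto Omega = {y : A y <= b} with a generic solver,
# plus the active and free index sets at a point. Toy instance only.
import numpy as np
from scipy.optimize import minimize

def project(x, A, b):
    cons = {'type': 'ineq', 'fun': lambda y: b - A @ y}   # b - A y >= 0
    res = minimize(lambda y: 0.5 * np.sum((y - x) ** 2), x0=np.zeros_like(x),
                   jac=lambda y: y - x, constraints=cons, method='SLSQP')
    return res.x

def active_free(x, A, b, tol=1e-8):
    r = A @ x - b
    return np.where(np.abs(r) <= tol)[0], np.where(r < -tol)[0]

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 3)); b = np.ones(5); x = 3.0 * rng.normal(size=3)
y = project(x, A, b)
print(active_free(y, A, b))
\end{verbatim}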
\section{Structure of the algorithm}
\label{structure}
As explained in the introduction,
PASA uses the gradient projection algorithm in phase one and
a linearly constrained optimization algorithm in phase two.
Algorithm~\ref{GPA} is the gradient projection algorithm (GPA) used for the
analysis in this paper.
A cartoon of the algorithm appears in Figure~\ref{gpa-fig}.
\renewcommand\figurename{Alg.}
\begin{figure}
\caption{Prototype gradient projection algorithm (GPA).}
\label{GPA}
\end{figure}
\renewcommand\figurename{Fig.}
\begin{figure}
\caption{An iteration of the gradient projection algorithm.}
\label{gpa-fig}
\end{figure}
This is a simple monotone algorithm based on an Armijo line search.
Better numerical performance is achieved with a more general nonmonotone
line search such as that given in \cite{hz05a},
and all the analysis directly extends to this more general framework;
however, to simplify the analysis and discussion in the paper, we utilize
Algorithm~\ref{GPA} for the GPA.
The requirements for the linearly constrained optimizer (LCO)
of phase two, which operates on the {\it faces} of $\Omega$, are now developed.
One of the requirements is that when the active sets repeat in an
infinite sequence of iterations, then the iterates must
approach stationarity.
To formulate this requirement in a precise way,
we define
\begin{equation}\label{dL}
\m{g}^\C{I} (\m{x}) =
\C{P}_{\C{N}(\m{A}_\C{I})}(\m{g}(\m{x})) =
\arg \; \min \{ \|\m{y} - \m{g}(\m{x})\| : \m{y} \in \mathbb{R}^n \mbox{ and }
\m{A}_\C{I} \m{y} = \m{0} \}.
\end{equation}
Thus $\m{g}^\C{I} (\m{x})$ is the projection of the gradient
$\m{g}(\m{x})$ onto the null space $\C{N}(\m{A}_\C{I})$.
We also let $\m{g}^\C{A}(\m{x})$ denote
$\m{g}^\C{I} (\m{x})$ for $\C{I} = \C{A}(\m{x})$.
If $\C{A}(\m{x})$ is empty, then $\m{g}^\C{A} (\m{x}) = \m{g}(\m{x})$,
while if $\m{x}$ is a vertex of $\Omega$, then $\m{g}^\C{A} (\m{x}) = \m{0}$.
This suggests that $e(\m{x}) = \|\m{g}^\C{A} (\m{x})\|$ represents a
local measure of stationarity in the sense
that it vanishes if and only if $\m{x}$ is a stationary point on its
associated face
\[
\{ \m{y} \in \Omega : (\m{Ay} - \m{b})_i = 0
\mbox{ for all } i \in \C{A}(\m{x}) \} .
\]
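For illustration only (the arrays below are placeholders), $\m{g}^\C{I}(\m{x})$ in (\ref{dL}) can be computed with the orthogonal projector onto $\C{N}(\m{A}_\C{I})$ built from a pseudo-inverse.
\begin{verbatim}
# Projected gradient g^I(x): orthogonal projection of g(x) onto the null
# space of A_I, with e(x) = ||g^A(x)|| as the local stationarity measure.
import numpy as np

def projected_gradient(g, A_I):
    if A_I.shape[0] == 0:                       # empty active set: g^A = g
        return g
    P = np.eye(g.size) - A_I.T @ np.linalg.pinv(A_I @ A_I.T) @ A_I
    return P @ g

A_I = np.array([[1.0, 0.0, 0.0]])               # one active constraint row
g = np.array([2.0, -1.0, 0.5])
gA = projected_gradient(g, A_I)
print(gA, np.linalg.norm(gA))                   # g^A(x) and e(x)
\end{verbatim}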
The requirements for the phase two LCO are the following:
\begin{itemize}
\item[F1.]
$\m{x}_{k} \in \Omega$ and $f(\m{x}_{k+1}) \le f(\m{x}_k)$ for each $k$.
\item[F2.]
$\C{A}(\m{x}_{k}) \subset \C{A}(\m{x}_{k+1})$ for each $k$.
\item[F3.]
If $\C{A}(\m{x}_{j+1}) = \C{A}(\m{x}_j)$
for $j \ge k$, then $\liminf\limits_{j\to\infty} e(\m{x}_j) = 0$.
\end{itemize}
Condition F1 requires that the iterates in phase two are monotone,
in contrast to phase one where the iterates could be nonmonotone.
By F2 the active set only grows during phase two, while F3
implies that the local stationarity measure becomes small
when the active set does not change.
Conditions F1, F2, and F3 are easily fulfilled by algorithms based
on gradient or Newton type iterations which employ a monotone line search
and which add constraints to the active set whenever a new constraint
becomes active.
Our decision for switching between phase one (GPA)
and phase two (LCO) is based on a
comparison of two different measures of stationarity.
One measure is the local stationarity measure $e(\cdot)$, introduced already,
which measures stationarity relative to a face of $\Omega$.
The second measure of stationarity is a global metric in the sense that
it vanishes at $\m{x}$ if and only if $\m{x}$ is
a stationary point for the optimization problem (\ref{P}).
For $\alpha \ge 0$, let $\m{y}(\m{x}, \alpha)$
be the point obtained by taking a step from $\m{x}$ along the negative
gradient and projecting onto $\Omega$; that is,
\begin{equation}\label{y-def}
\m{y}(\m{x}, \alpha) = \C{P}_\Omega (\m{x} - \alpha \m{g}(\m{x})) =
\arg \;
\min \left\{ \frac{1}{2} \|\m{x} - \alpha \m{g}(\m{x}) - \m{y}\|^2:
\m{Ay} \le \m{b} \right\}.
\end{equation}
The vector
\begin{equation}\label{dalpha}
\m{d}^\alpha (\m{x}) = \m{y}(\m{x},\alpha) - \m{x}
\end{equation}
points from $\m{x}$ to the projection of
$\m{x} - \alpha \m{g}(\m{x})$ onto $\Omega$.
As seen in \cite[Prop. 2.1]{hz05a}, when $\alpha > 0$,
$\m{d}^\alpha (\m{x}) = \m{0}$
if and only if $\m{x}$ is a stationary point for (\ref{P}).
We monitor convergence to a stationary point using the function $E$
defined by
\[
E(\m{x}) = \|\m{d}^1(\m{x}) \|.
\]
$E(\m{x})$ vanishes if and only if $\m{x}$ is a stationary point of (\ref{P}).
If $\Omega = \mathbb{R}^n$, then $E(\m{x}) = \|\m{g}(\m{x})\|$, the
norm of the gradient, which is the usual way to assess convergence
to a stationary point in unconstrained optimization.
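The following toy sketch (our own instance; a generic solver stands in for the projection) evaluates $\m{d}^1(\m{x})$ and the global measure $E(\m{x})$ for $\min \|\m{x}\|^2$ subject to $x_1 + x_2 \le 1$.
\begin{verbatim}
# Global stationarity measure E(x) = ||d^1(x)|| on a toy instance.
import numpy as np
from scipy.optimize import minimize

def project(z, A, b, x0):
    cons = {'type': 'ineq', 'fun': lambda y: b - A @ y}
    res = minimize(lambda y: 0.5 * np.sum((y - z) ** 2), x0=x0,
                   jac=lambda y: y - z, constraints=cons, method='SLSQP')
    return res.x

def E(x, g, A, b):
    d1 = project(x - g(x), A, b, x0=x) - x      # d^1(x) = y(x,1) - x
    return np.linalg.norm(d1)

A = np.array([[1.0, 1.0]]); b = np.array([1.0])
g = lambda x: 2.0 * x                           # gradient of f(x) = ||x||^2
print(E(np.array([0.5, 0.5]), g, A, b))         # nonzero: not stationary
print(E(np.array([0.0, 0.0]), g, A, b))         # zero at the stationary point
\end{verbatim}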
The rules for switching between phase one and phase two depend on the relative
size of the stationarity measures $E$ and $e$.
We choose a parameter $\theta \in (0, 1)$ and branch from phase one to
phase two when $e(\m{x}_k) \ge \theta E(\m{x}_k)$.
Similarly, we branch from phase two to phase one when
$e(\m{x}_k) < \theta E(\m{x}_k)$.
To ensure that only phase two is executed asymptotically at a degenerate
stationary point, we may need to decrease $\theta$ as the iterates converge.
The decision to decrease $\theta$ is based on what we call the
undecided index set $\C{U}$, which is defined as follows.
Let $\m{x}$ denote the current iterate,
let $E(\m{x})$ be the global measure of stationarity,
and let $\g{\lambda}(\m{x})$ denote any Lagrange multiplier associated with
the polyhedral constraint in (\ref{y-def}) and $\alpha = 1$.
That is, if $\m{y} = \m{y}(\m{x}, 1)$ is the solution of (\ref{y-def})
for $\alpha = 1$, then $\g{\lambda} (\m{x})$ is any vector
that satisfies the conditions
\[
\m{y} - \m{x} + \m{g}(\m{x}) + \m{A}^{\sf T} \g{\lambda} (\m{x}) = \m{0}, \quad
\g{\lambda} (\m{x}) \ge \m{0}, \quad \lambda_i (\m{x}) = 0
\mbox{ if } i \in \C{F}(\m{y}).
\]
Given parameters $\beta \in (1, 2)$ and $\gamma \in (0,1)$,
the undecided index set is defined by
\[
\C{U}(\m{x}) = \{ i : \lambda_i (\m{x}) \ge E(\m{x})^\gamma
\mbox{ and } (\m{b} - \m{Ax})_i \ge E(\m{x})^\beta \} .
\]
If $\m{x}$ is close enough to a stationary point that $E(\m{x})$ is small,
then the indices in $\C{U}(\m{x})$ correspond to those constraints
for which the associated multiplier $\lambda_i (\m{x})$ is relatively
large in the sense that $\lambda_i (\m{x}) \ge E(\m{x})^\gamma$,
and the $i$-th constraint is relatively inactive in the sense
that $(\m{b} - \m{Ax})_i \ge E(\m{x})^\beta$.
By the first-order optimality conditions at a local minimizer,
large multipliers are associated with active constraints.
Hence, when the multiplier is relatively large and the constraint is
relatively inactive, we consider the constraint undecided.
When $\C{U}(\m{x})$ is empty, we feel that the active constraints
are nearly identified, so we decrease $\theta$ in phase one so that
phase two will compute a more accurate local stationary point before
branching back to phase one.
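A minimal sketch of the undecided set (the multipliers, slacks, and parameter values below are placeholders) follows; an empty $\C{U}(\m{x})$ is what triggers the decrease of $\theta$.
\begin{verbatim}
# Undecided index set U(x): multiplier relatively large while the
# constraint is relatively inactive. Inputs are placeholder values for
# lambda(x), b - A x, and E(x); gamma in (0,1), beta in (1,2).
import numpy as np

def undecided(lam, slack, Ex, gamma=0.5, beta=1.5):
    return np.where((lam >= Ex ** gamma) & (slack >= Ex ** beta))[0]

lam = np.array([0.3, 0.0, 0.05])
slack = np.array([0.02, 0.5, 0.0])
print(undecided(lam, slack, Ex=0.01))   # here only constraint 0 is undecided
\end{verbatim}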
Algorithm~\ref{PASA} is the polyhedral active set algorithm (PASA).
The parameter $\epsilon$ is the convergence tolerance,
the parameter $\theta$ controls the branching between phase one and
phase two, while the parameter $\mu$ controls the decay of $\theta$
when the undecided index set is empty.
\renewcommand\figurename{Alg.}
\begin{figure}
\caption{Polyhedral active set algorithm (PASA).}
\label{PASA}
\end{figure}
\renewcommand\figurename{Fig.}
\section{Global convergence}
\label{global}
Since Algorithm~\ref{GPA}, GPA, is a special case of the nonmonotone
gradient projection algorithm studied in \cite{hz05a},
our previously established global convergence result, stated below, holds.
\begin{theorem}\label{GPA-theorem}
Let $\C{L}$ be the level set defined by
\begin{equation}\label{level}
\C{L} = \{ \m{x} \in \Omega : f (\m{x}) \le f(\m{x}_0) \} .
\end{equation}
Assume the following conditions hold:
\begin{itemize}
\item[{\rm A1.}]
$f$ is bounded from below on $\C{L}$ and
$d_{\max} = {\sup}_k \|\m{d}_k\| < \infty$.
\item[{\rm A2.}]
If $\bar{\C{L}}$ is the collection of $\m{x}\in \Omega$ whose
distance to $\C{L}$ is at most $d_{\max}$,
then $\nabla f$ is Lipschitz continuous on $\bar{\C{L}}$.
\end{itemize}
Then GPA with $\epsilon = 0$ either terminates in a finite number of
iterations at a stationary point, or we have
\[
\liminf\limits_{k \to\infty} E(\m{x}_k) =0.
\]
\end{theorem}
The global convergence of PASA essentially follows from the global convergence
of GPA and the requirement F3 for the linearly constrained optimizer.
\begin{theorem}\label{global_theorem}
If the assumptions of Theorem~$\ref{GPA-theorem}$ hold and the linearly
constrained optimizer satisfies {\rm F1--F3}, then
PASA with $\epsilon = 0$ either terminates in a finite number of
iterations at a stationary point, or we have
\begin{equation}\label{lim_to_zero}
\liminf\limits_{k \to\infty} E(\m{x}_k)=0.
\end{equation}
\end{theorem}
\begin{proof}
If only phase one is performed for $k$ sufficiently large, then
(\ref{lim_to_zero}) follows from Theorem~\ref{GPA-theorem}.
If only phase two is performed for $k$ sufficiently large, then
$e(\m{x}_k) \ge \theta E(\m{x}_k)$ for $k$ sufficiently large.
Since $\theta$ is only changed in phase one, we can treat
$\theta$ as a fixed positive scalar for $k$ sufficiently large.
By F2, the active sets approach a fixed limit for $k$ sufficiently large.
By F3 and the inequality $e(\m{x}_k) \ge \theta E(\m{x}_k)$,
(\ref{lim_to_zero}) holds.
Finally, suppose that there are an infinite number of branches from
phase two to phase one.
If (\ref{lim_to_zero}) does not hold, then there exists $\tau > 0$
such that $E(\m{x}_k) = \|\m{d}^1(\m{x}_k)\| \ge \tau$ for all $k$.
By property P6 of \cite{hz05a} and the definition
$\m{d}_k = \m{d}^\alpha(\m{x}_k)$, we have
\begin{equation}\label{g1}
\nabla f (\m{x}_k) \m{d}_k = \m{g}_k^{\sf T} \m{d}^\alpha (\m{x}_k) \le
-\|\m{d}^\alpha(\m{x}_k)\|^2/\alpha = -\| \m{d}_k\|^2/\alpha ,
\end{equation}
which implies that
\begin{equation}\label{g4}
\frac{|\m{g}_k^{\sf T} \m{d}_k|}{\|\m{d}_k\|^2} \ge \frac{1}{\alpha}.
\end{equation}
By P4 and P5 of \cite{hz05a} and the
lower bound $\|\m{d}^1(\m{x}_k)\| \ge \tau$, we have
\begin{equation}\label{g2}
\|\m{d}_k\| =
\|\m{d}^\alpha(\m{x}_k)\| \ge \min \{\alpha , 1\} \|\m{d}^1(\m{x}_k)\| \ge
\min \{\alpha , 1\} \tau.
\end{equation}
For the Armijo line search in GPA, it follows from \cite[Lem.~2.1]{zhh04} that
\[
s_k \ge \min \left\{ 1, \;
\left( \frac{2\eta(1-\delta)}{\kappa} \right)
\frac{|\m{g}_k ^{\sf T} \m{d}_k|}{\|\m{d}_k\|^2}
\right\}
\]
for all iterations in GPA,
where $\kappa$ is the Lipschitz constant for $\nabla f$.
Combine this with (\ref{g4}) to obtain
\begin{equation}\label{lower}
s_k \ge \min \left\{ 1, \;
\left( \frac{2\eta(1-\delta)}{\kappa \alpha} \right) \right\} .
\end{equation}
By the line search condition in step~2 of GPA,
it follows from (\ref{g1}), (\ref{g2}), and (\ref{lower})
that there exists $c > 0$ such that
\begin{equation}\label{g3}
f(\m{x}_{k+1}) \le f(\m{x}_k) - c
\end{equation}
for all iterations in GPA with $k$ sufficiently large.
Since the objective function decreases monotonically in phase two,
and since there are an infinite number of iterations in GPA,
(\ref{g3}) contradicts the assumption A1 that $f$ is bounded from below.
\end{proof}
\section{Properties of projections}
\label{stationarity}
The proof that only phase two of PASA is executed asymptotically relies
on some properties for the solution of the projection problem (\ref{y-def})
that are established in this section.
Since the projection onto a convex set is a nonexpansive operator, we have
\begin{eqnarray}
\|\m{y}(\m{x}_1, \alpha) - \m{y}(\m{x}_2, \alpha)\| &=&
\|\C{P}_\Omega (\m{x}_1 - \alpha \m{g}(\m{x}_1)) -
\C{P}_\Omega (\m{x}_2 - \alpha \m{g}(\m{x}_2)) \| \nonumber \\
&\le& \|(\m{x}_1 - \alpha \m{g}(\m{x}_1)) -
(\m{x}_2 - \alpha \m{g}(\m{x}_2)) \| \nonumber \\
&\le& (1 + \alpha \kappa) \|\m{x}_1 - \m{x}_2\|, \label{lip}
\end{eqnarray}
where $\kappa$ is a Lipschitz constant for $\m{g}$.
Since $\m{d}^\alpha (\m{x}^*) = \m{0}$ for all $\alpha > 0$ when
$\m{x}^*$ is a stationary point, it follows that
$\m{y}(\m{x}^*, \alpha) = \m{x}^*$ for all $\alpha > 0$.
In the special case where $\m{x}_2 = \m{x}^*$, (\ref{lip}) yields
\begin{equation}\label{lip*}
\|\m{y}(\m{x}, \alpha) - \m{x}^*\| =
\|\m{y}(\m{x}, \alpha) - \m{y}(\m{x}^*, \alpha)\| \le
(1 + \alpha \kappa) \|\m{x} - \m{x}^*\| .
\end{equation}
Similar to (\ref{lip}), but with $\m{y}$ replaced by $\m{d}^\alpha$, we have
\begin{equation}\label{lipd}
\|\m{d}^\alpha (\m{x}_1) - \m{d}^\alpha(\m{x}_2)\| \le
(2 + \alpha \kappa) \|\m{x}_1 - \m{x}_2\|.
\end{equation}
Next, let us develop some properties for the
multipliers associated with the constraint in (\ref{y-def}).
The first-order optimality conditions associated with (\ref{y-def}) can be
expressed as follows:
At any solution $\m{y} = \m{y}(\m{x}, \alpha)$ of (\ref{y-def}), there exists
a multiplier $\g{\lambda} \in \mathbb{R}^m$ such that
\begin{equation}\label{first-order}
\m{y} - \m{x} + \alpha \m{g}(\m{x}) + \m{A}^{\sf T}\g{\lambda}
= \m{0}, \quad \g{\lambda} \ge \m{0}, \quad
\lambda_i = 0 \mbox{ if } i \in \C{F}(\m{y}).
\end{equation}
Let $\Lambda (\m{x}, \alpha)$ denote the set of
multipliers $\g{\lambda}$ satisfying (\ref{first-order}) at the solution
$\m{y} = \m{y}(\m{x}, \alpha)$ of (\ref{y-def}).
If $\m{x}^*$ is a stationary point for (\ref{P}) and $\alpha > 0$,
then $\m{y}(\m{x}^*, \alpha) = \m{x}^*$, and the first equation in
(\ref{first-order}) reduces to
\[
\m{g}(\m{x}^*) + \m{A}^{\sf T}(\g{\lambda}/\alpha) = \m{0},
\]
which is the gradient of the Lagrangian for (\ref{P}), but with the
multiplier scaled by $\alpha$.
Since $\C{F}(\m{y}(\m{x}^*,\alpha)) = \C{F}(\m{x}^*)$,
$\g{\lambda}/\alpha$ is a multiplier for the constraint in (\ref{P}).
Thus if $\m{x}^*$ is a stationary point for (\ref{P}) and
$\Lambda(\m{x}^*)$ is the set of Lagrange multipliers associated with
the constraint, we have
\begin{equation}\label{scaling}
\Lambda(\m{x}^*, \alpha) = \alpha \Lambda (\m{x}^*).
\end{equation}
By (\ref{lip*}) $\m{y}(\m{x}, \alpha)$ approaches $\m{x}^*$ as $\m{x}$
approaches $\m{x}^*$.
Consequently, the indices $\C{F}(\m{x}^*)$ free at $\m{x}^*$ are
free at $\m{y}(\m{x}, \alpha)$ when $\m{x}$ is sufficiently close to $\m{x}^*$.
The multipliers associated with (\ref{y-def}) have the following
stability property.
\begin{proposition}\label{SetClose}
Suppose $\m{x}^*$ is a stationary point for $(\ref{P})$ and for
some $r > 0$, $\m{g}$ is Lipschitz continuous in $\C{B}_r (\m{x}^*)$
with Lipschitz constant $\kappa$.
If $\alpha \ge 0$ and
$\m{x} \in \C{B}_r (\m{x}^*)$ is close enough to $\m{x}^*$ that
$\C{F}(\m{x}^*) \subset \C{F}(\m{y}(\m{x}, \alpha))$, then
\[
\mbox{\rm dist}\{ \g{\lambda}, \Lambda (\m{x}^*, \alpha)\} \le
2c (1 + \kappa \alpha) \|\m{x} - \m{x}^*\|
\]
for all $\g{\lambda} \in \Lambda(\m{x}, \alpha)$,
where $c$ is independent of $\m{x}$ and depends only on $\m{A}$.
\end{proposition}
\begin{proof}
This is essentially a consequence of the upper Lipschitzian properties of
polyhedral multifunctions as established in \cite[Prop.~1]{Robinson81} or
\cite[Cor.~4.2]{Robinson82}.
Here is a short proof based on Hoffman's stability result \cite{Hoffman52}
for a perturbed linear system of inequalities.
Since $\C{F}(\m{x}^*) \subset \C{F}(\m{y}(\m{x}, \alpha))$,
it follows that any $\g{\lambda} \in \Lambda (\m{x}, \alpha)$ is feasible
in the system
\[
\m{p} + \m{A}^{\sf T} \g{\lambda} = \m{0}, \quad
\g{\lambda} \ge \m{0}, \quad \lambda_i = 0 \mbox{ if } i \in \C{F}(\m{x}^*),
\]
with $\m{p} = \m{p}_1 := \m{y}(\m{x}, \alpha) - \m{x} + \alpha \m{g}(\m{x})$.
Since $\m{x}^*$ is a stationary point for (\ref{P}),
the elements of $\Lambda (\m{x}^*, \alpha)$ are feasible in the same
system but with
\[
\m{p} = \m{p}_2 := \m{x}^* - \m{x}^* + \alpha \m{g}(\m{x}^*).
\]
Hence, by Hoffman's result \cite{Hoffman52}, there exists a constant $c$,
independent of $\m{p}_1$ and $\m{p}_2$ and depending only on $\m{A}$,
such that
\[
\mbox{\rm dist}\{ \g{\lambda}, \Lambda (\m{x}^*, \alpha)\} \le
c \|\m{p}_1 - \m{p}_2\|.
\]
We use (\ref{lip*}) to obtain
\[
\|\m{p}_1 - \m{p}_2\| =
\|(\m{y}(\m{x}, \alpha) - \m{x}^*) + (\m{x}^* - \m{x}) + \alpha
(\m{g}(\m{x}) - \m{g}(\m{x}^*))\| \le
2 (1 + \alpha \kappa)\|\m{x} - \m{x}^*\|,
\]
which completes the proof.
\end{proof}
In the following proposition, we study the projection of a step
$\m{x}_k - \alpha \m{g}(\m{x}_k)$ onto the subset $\Omega_k$ of $\Omega$
that also satisfies the active constraints at $\m{x}_k$.
We show that the step can be replaced by
$\m{x}_k - \alpha \m{g}^\C{A}(\m{x}_k)$ without affecting the projection.
\begin{proposition}
\label{P=prop}
For all $\alpha \ge 0$, we have
\begin{equation}\label{P=}
\C{P}_{\Omega_k} (\m{x}_k - \alpha \m{g}(\m{x}_k)) =
\C{P}_{\Omega_k} (\m{x}_k - \alpha \m{g}^\C{A}(\m{x}_k)),
\end{equation}
where
\begin{equation}\label{omegak}
\Omega_k = \{ \m{x} \in \Omega : (\m{Ax} - \m{b})_i = 0 \mbox{ for all }
i \in \C{A}(\m{x}_k) \}.
\end{equation}
\end{proposition}
\begin{proof}
Let $\m{p}$ be defined by
\begin{eqnarray}
\m{p} &=& \C{P}_{\Omega_k}
(\m{x}_k - \alpha \m{g}(\m{x}_k)) - \m{x}_k
\label{1}\\
&=& \arg \; \min \{
\| \m{x}_k - \alpha \m{g}(\m{x}_k) - \m{y}\| : \m{y} \in \Omega_k \} - \m{x}_k .
\nonumber
\end{eqnarray}
With the change of variables $\m{z} = \m{y} - \m{x}_k$, we can write
\begin{equation} \label{nd1}
\m{p} = \arg \; \min \{
\| \m{z} + \alpha \m{g}(\m{x}_k)\| : \m{z} \in \Omega_k - \m{x}_k \}.
\end{equation}
Since $ \m{g}^\C{A} (\m{x}_k)$ is the orthogonal projection of
$\m{g}(\m{x}_k)$ onto the null space $\C{N}(\m{A}_{\C{I}})$,
where $\C{I} = \C{A}(\m{x}_k)$,
the difference $\m{g}^\C{A} (\m{x}_k) - \m{g}(\m{x}_k)$ is orthogonal to
$\C{N}(\m{A}_{\C{I}})$.
Since $\Omega_k - \m{x}_k \subset \C{N}(\m{A}_{\C{I}})$, it follows from
the Pythagorean theorem that for any $\m{z} \in \Omega_k - \m{x}_k$, we have
\[
\| \m{z} + \alpha \m{g}(\m{x}_k) \|^2 =
\| \m{z} + \alpha \m{g}^\C{A}(\m{x}_k) \|^2 +
\alpha^2 \|
\m{g}(\m{x}_k) - \m{g}^\C{A}(\m{x}_k) \|^2.
\]
Since $\m{z}$ does not appear in the last term, minimizing
$\| \m{z} + \alpha \m{g}(\m{x}_k) \|^2$ over $\m{z} \in \Omega_k - \m{x}_k$
is equivalent to minimizing $\| \m{z} + \alpha \m{g}^\C{A}(\m{x}_k)\|^2$ over
$\m{z} \in \Omega_k - \m{x}_k$.
By (\ref{nd1}), we obtain
\begin{equation}\label{nd2}
\m{p} = \arg \; \min \{
\| \m{z} + \alpha \m{g}^\C{A}(\m{x}_k)\| : \m{z} \in \Omega_k - \m{x}_k \}.
\end{equation}
Changing variables from $\m{z}$ back to $\m{y}$ gives
\begin{eqnarray*}
\m{p} &=& \arg \; \min \{
\| \m{x}_k - \alpha \m{g}^\C{A}(\m{x}_k) - \m{y}\| :
\m{y} \in \Omega_k \} - \m{x}_k \\
&=& \C{P}_{\Omega_k}(\m{x}_k - \alpha \m{g}^\C{A}(\m{x}_k)) - \m{x}_k .
\end{eqnarray*}
Comparing this to (\ref{1}) gives (\ref{P=}).
\end{proof}
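A small numerical check of Proposition~\ref{P=prop} on a random instance (illustrative only; a generic solver is used for the projection onto $\Omega_k$) confirms that the two projections coincide.
\begin{verbatim}
# Verify numerically that projecting x_k - alpha*g and x_k - alpha*g^A
# onto Omega_k yields the same point, on a random toy instance.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 3)); xk = rng.normal(size=3)
b = A @ xk; b[2:] += 1.0                  # constraints 0 and 1 active at x_k
act = [0, 1]; A_I = A[act]
g = rng.normal(size=3); alpha = 0.7
gA = g - A_I.T @ np.linalg.pinv(A_I @ A_I.T) @ (A_I @ g)   # g^A(x_k)

def project_face(z):                      # projection onto Omega_k
    cons = [{'type': 'ineq', 'fun': lambda y: b - A @ y},
            {'type': 'eq',   'fun': lambda y: A_I @ y - b[act]}]
    res = minimize(lambda y: 0.5 * np.sum((y - z) ** 2), x0=xk,
                   jac=lambda y: y - z, constraints=cons, method='SLSQP')
    return res.x

p1 = project_face(xk - alpha * g)
p2 = project_face(xk - alpha * gA)
print(np.linalg.norm(p1 - p2))            # ~ 0 up to solver tolerance
\end{verbatim}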
\section{Nondegenerate problems}
\label{nondegenerate}
In this section, we focus on the case where the iterates of PASA converge
to a nondegenerate stationary point; that is, a stationary point $\m{x}^*$
for which there exists a scalar $\pi > 0$ such that
$\lambda_i > \pi$ for all $i \in \C{A}(\m{x}^*)$ and
$\g{\lambda} \in \Lambda(\m{x}^*)$.
\begin{theorem}
\label{nondegenerate-theorem}
If PASA with $\epsilon = 0$ generates an infinite sequence of iterates
that converge to a nondegenerate stationary point $\m{x}^*$,
then within a finite number of iterations, only phase two is executed.
\end{theorem}
\begin{proof}
By (\ref{lip*}) and Proposition~\ref{SetClose},
$\m{y}(\m{x}, 1)$ is close to $\m{x}^*$ and $\Lambda(\m{x}, 1)$
is close to $\Lambda(\m{x}^*)$ when $\m{x}$ is close to $\m{x}^*$.
It follows that for $r$ sufficiently small, we have
\begin{equation}\label{lambda>0}
\lambda_i > 0 \mbox{ for all }
i \in \C{A}(\m{x}^*), \; \g{\lambda} \in \Lambda(\m{x}, 1),
\mbox{ and } \m{x} \in \C{B}_r (\m{x}^*).
\end{equation}
Since $(\m{Ax}^*-\m{b})_i < 0$ for all $i \in \C{F}(\m{x}^*)$, it also
follows from (\ref{lip*}) that we can take $r$ smaller, if necessary,
to ensure that for all $i \in \C{F}(\m{x}^*)$ and
$\m{x} \in \C{B}_r (\m{x}^*)$, we have
\begin{equation}\label{Ax<b}
(\m{Ay}(\m{x}, 1) - \m{b})_i < 0 \quad \mbox{and} \quad
(\m{Ax}-\m{b})_i < 0.
\end{equation}
By the last condition in (\ref{Ax<b}), we have
\begin{equation}\label{601}
\C{A}(\m{x}) \subset \C{A}(\m{x}^*) \quad \mbox{for all } \m{x} \in
\C{B}_r(\m{x}^*).
\end{equation}
By (\ref{lambda>0}) and (\ref{Ax<b}),
\begin{equation}\label{600}
\C{A}(\m{y}(\m{x}, 1)) = \C{A}(\m{x}^*) \quad
\mbox{for all } \m{x} \in \C{B}_r(\m{x}^*).
\end{equation}
That is, if $\m{x} \in \C{B}_r(\m{x}^*)$ and
$i \in \C{A}(\m{x}^*)$, then by (\ref{lambda>0}) and complementary slackness,
$i$ lies in $\C{A}(\m{y}(\m{x}, 1))$, which implies that
$\C{A}(\m{x}^*) \subset \C{A}(\m{y}(\m{x},1))$.
Conversely, if $i \in \C{F}(\m{x}^*) = \C{A}(\m{x}^*)^c$, then by (\ref{Ax<b}),
$i$ lies in $\C{F}(\m{y}(\m{x}, 1)) = \C{A}(\m{y}(\m{x},1))^c$.
Hence, (\ref{600}) holds.
Choose $K$ large enough that $\m{x}_k \in \C{B}_r(\m{x}^*)$ for all
$k \ge K$.
Since $\C{A}(\m{x}_k) \subset \C{A}(\m{x}^*) = \C{A}(\m{y}(\m{x}_k, 1))$
for all $k \ge K$ by (\ref{601}) and (\ref{600}), it follows that
\begin{equation}\label{602}
\C{P}_\Omega (\m{x}_k - \m{g}(\m{x}_k)) =
\m{y}(\m{x}_k, 1) \in \Omega_k \subset \Omega,
\end{equation}
where $\Omega_k$ is defined in (\ref{omegak}).
The inclusion (\ref{602}) along with Proposition~\ref{P=prop} yield
\[
\C{P}_\Omega (\m{x}_k - \m{g}(\m{x}_k)) =
\C{P}_{\Omega_k} (\m{x}_k - \m{g}(\m{x}_k)) =
\C{P}_{\Omega_k} (\m{x}_k - \m{g}^\C{A}(\m{x}_k)).
\]
We subtract $\m{x}_k$ from both sides and refer to the
definition (\ref{dalpha}) of $\m{d}^\alpha$ to obtain
\begin{eqnarray}
\m{d}^1(\m{x}_k) &=&
\C{P}_{\Omega_k} (\m{x}_k - \m{g}^\C{A}(\m{x}_k)) - \m{x}_k \nonumber \\
&=& \arg \; \min \left\{
\| \m{x}_k - \m{g}^\C{A}(\m{x}_k) - \m{y}\|^2 :
\m{y} \in \Omega_k \right\} - \m{x}_k \nonumber \\
&=& \arg \; \min \left\{
\| \m{z} + \m{g}^\C{A}(\m{x}_k)\|^2 :
\m{z} \in \Omega_k - \m{x}_k \right\} . \label{10}
\end{eqnarray}
Recall that at a local minimizer $\bar{\m{x}}$ of a smooth function
$F$ over the convex set $\Omega_k$, the variational inequality
$\nabla F(\bar{\m{x}}) (\m{x} - \bar{\m{x}}) \ge 0$ holds for all
$\m{x} \in \Omega_k$.
We identify $F$ with the objective in (\ref{10}),
$\bar{\m{x}}$ with $\m{d}^1 (\m{x}_k)$,
and $\m{x}$ with the point $\m{0} \in \Omega_k - \m{x}_k$
to obtain the inequality
\[
\m{d}^1(\m{x}_k) ^{\sf T} ( \m{g}^\C{A} (\m{x}_k) + \m{d}^1(\m{x}_k) ) \le 0.
\]
Hence,
\begin{eqnarray*}
\|\m{g}^\C{A} (\m{x}_k) \|^2 &=&
\| \m{g}^\C{A} (\m{x}_k) + \m{d}^1(\m{x}_k) \|^2
- 2 \m{d}^1(\m{x}_k) ^{\sf T} ( \m{g}^\C{A} (\m{x}_k) + \m{d}^1(\m{x}_k) )
+ \| \m{d}^1(\m{x}_k) \|^2 \\
&\ge& \|\m{d}^1(\m{x}_k) \|^2.
\end{eqnarray*}
By definition, the left side of this inequality is $e(\m{x}_k)^2$, while
the right side is $E(\m{x}_k)^2$.
Consequently, $E(\m{x}_k) \le e(\m{x}_k)$ when $k \ge K$.
Since $\theta \in (0,1)$, it follows that phase one immediately
branches to phase two, while phase two cannot branch to phase one.
This completes the proof.
\end{proof}
\section{Degenerate problems}
\label{degenerate}
We now focus on a degenerate stationary point $\m{x}^*$ where
there exists $i \in \C{A}(\m{x}^*)$ and $\g{\lambda} \in \Lambda (\m{x}^*)$
such that $\lambda_i = 0$.
We wish to establish a result analogous to Theorem~\ref{nondegenerate-theorem}.
To compensate for the degeneracy, it is assumed that the active constraint
gradients at $\m{x}^*$ are linearly independent;
that is, the rows of $\m{A}$ corresponding to indices $i \in \C{A}(\m{x}^*)$
are linearly independent,
which implies that $\Lambda (\m{x}^*, \alpha)$ is a singleton.
Under this assumption, Proposition~\ref{SetClose} yields the
following Lipschitz property.
\begin{corollary}\label{SetLip}
Suppose $\m{x}^*$ is a stationary point for $(\ref{P})$ and the active
constraint gradients are linearly independent at $\m{x}^*$.
If for some $r > 0$,
$\m{g}$ is Lipschitz continuous in $\C{B}_r (\m{x}^*)$
with Lipschitz constant $\kappa$ and
$\m{x} \in \C{B}_r (\m{x}^*)$ is close enough to $\m{x}^*$ that
$\C{F}(\m{x}^*) \subset \C{F}(\m{y}(\m{x}, \alpha))$ for
some $\alpha \ge 0$, then $\Lambda(\m{x}, \alpha)$ is a singleton and
\[
\| \Lambda(\m{x}, \alpha) - \Lambda (\m{x}^*, \alpha) \|
\le 2c (1 + \kappa \alpha) \|\m{x} - \m{x}^*\|,
\]
where $c$ is independent of $\m{x}$ and depends only on $\m{A}$.
\end{corollary}
\begin{proof}
Since $\C{F}(\m{x}^*) \subset \C{F}(\m{y}(\m{x}, \alpha))$, it follows that
$\C{A}(\m{x}^*) \supset \C{A}(\m{y}(\m{x}, \alpha))$.
Hence, the active constraint gradients are linearly independent at
$\m{x}^*$ and at $\m{y}(\m{x}, \alpha)$.
This implies that both
$\Lambda(\m{x}, \alpha)$ and $\Lambda (\m{x}^*, \alpha)$ are singletons, and
Corollary~\ref{SetLip} follows from Proposition~\ref{SetClose}.
\end{proof}
To treat degenerate problems, the convergence theory involves one more
requirement for the linearly constrained optimizer:
\begin{itemize}
\item[F4.]
When branching from phase one to phase two, the first iteration in phase two
is given by an Armijo line search of the following form: Choose $j \ge 0$
as small as possible such that
\begin{eqnarray}
f (\m{x}_{k+1}) &\le& f(\m{x}_k) +
\delta \nabla f(\m{x}_k) (\m{x}_{k+1} - \m{x}_k)
\mbox{ where} \label{p-armijo} \\
\m{x}_{k+1} &=& \C{P}_{\Omega_k} (\m{x}_k - s_k \m{g}^\C{A}(\m{x}_k)),
\quad s_k = \alpha \eta^j,
\nonumber
\end{eqnarray}
with $\Omega_k$ defined in (\ref{omegak}),
$\delta\in (0, 1)$, $\eta \in (0, 1)$, and $\alpha \in (0, \infty)$
(as in the Armijo line search of GPA).
\end{itemize}
As $j$ increases, $\eta^j$ tends to zero and $\m{x}_{k+1}$ approaches
$\m{x}_k$.
Hence, for $j$ sufficiently large,
$i \in \C{F}(\m{x}_k - \alpha \eta^j \m{g}^\C{A}(\m{x}_k))$ if
$i \in \C{F}(\m{x}_k)$.
Since $(\m{A} \m{g}^\C{A}(\m{x}_k))_i = 0$ if $i \in \C{A}(\m{x}_k)$,
it follows that $\m{x}_k - \alpha \eta^j \m{g}^\C{A}(\m{x}_k) \in \Omega_k$
for $j$ sufficiently large, which implies that
$\m{x}_{k+1} =$ $\m{x}_k - \alpha \eta^j \m{g}^\C{A}(\m{x}_k)$;
consequently, for $j$ sufficiently large,
the Armijo line search inequality (\ref{p-armijo}) reduces to the
ordinary Armijo line search condition
\[
f (\m{x}_{k+1}) \le f(\m{x}_k) -
s_k \delta \nabla f(\m{x}_k) \m{g}^\C{A}(\m{x}_k),
\]
which holds for $s_k$ sufficiently small.
The basic difference between the Armijo line search in F4 and the
Armijo line search in GPA is that in F4, the constraints active at
$\m{x}_k$ remain active at $\m{x}_{k+1}$ and F2 holds.
With the additional startup procedure F4 for LCO,
the global convergence result Theorem~\ref{global_theorem} remains
applicable since conditions F1 and F2 are satisfied by the initial
iteration in phase two.
Let $\m{x}^*$ be a stationary point where the active constraint
gradients are linearly independent.
For any given $\m{x} \in \mathbb{R}^n$, we define
\begin{equation}\label{fstar}
\bar{\m{x}} = \arg \; \min_{\m{y}} \{\|\m{x} - \m{y}\| :
(\m{Ay} - \m{b})_i = 0 \mbox{ for all }
i \in \C{A}_+(\m{x}^*) \cup \C{A}(\m{x}) \} ,
\end{equation}
where $\C{A}_+(\m{x}^*) = \{i \in \C{A}(\m{x}^*): \Lambda_i (\m{x}^*) > 0\}$.
If $\m{x}$ is close enough to $\m{x}^*$ that
$\C{A}(\m{x}) \subset \C{A}(\m{x}^*)$, then the feasible set in (\ref{fstar})
is nonempty since $\m{x}^*$ satisfies the constraints;
hence, the projection in (\ref{fstar}) is nonempty when $\m{x}$ is sufficiently
close to $\m{x}^*$.
\begin{lemma}
\label{ratio}
Suppose $\m{x}^*$ is a stationary point where the active constraint gradients
are linearly independent and $f$ is Lipschitz continuously differentiable
in a neighborhood of $\m{x}^*$.
If PASA with $\epsilon=0$ generates an infinite sequence of iterates $\m{x}_k$
converging to $\m{x}^*$, then there exists $c \in \mathbb{R}$ such that
\begin{equation}\label{Aidentify}
\| \m{x}_k - \bar{\m{x}}_k \| \le c \|\m{x}_k - \m{x}^*\|^2
\end{equation}
for all $k$ sufficiently large.
\end{lemma}
\begin{proof}
Choose $r$ in accordance with Corollary~\ref{SetLip}.
Similar to what is done in the proof of Theorem~\ref{nondegenerate-theorem},
choose $r > 0$ smaller if necessary to ensure that
for all $\m{x} \in \C{B}_r (\m{x}^*)$, we have
\begin{equation}\label{700}
\Lambda_i (\m{x}, \alpha) > 0 \mbox{ for all }
i \in \C{A}_+(\m{x}^*),
\end{equation}
and
\begin{equation}\label{701}
(\m{Ay}(\m{x}, \alpha) - \m{b})_j < 0, \quad (\m{Ax}-\m{b})_j < 0,
\mbox{ for all } j \in \C{F}(\m{x}^*).
\end{equation}
Choose $K$ large enough that $\m{x}_k \in \C{B}_r(\m{x}^*)$ for all $k \ge K$
and suppose that $\m{x}_k$ is any PASA iterate with $k \ge K$.
If $i \in \C{A}_+(\m{x}^*) \cap \C{A}(\m{x}_k)$, then by
(\ref{700}) and complementary slackness, $i \in \C{A}(\m{y}(\m{x}_k, \alpha))$.
Hence, $i \in \C{A}(\m{x})$ for all $\m{x}$ on the line segment
connecting $\m{x}_k$ and $\m{y}(\m{x}_k, \alpha)$.
In particular, $i \in \C{A}(\m{x}_{k+1})$ if $\m{x}_{k+1}$ is generated
by GPA in phase one, while
$i \in \C{A}(\m{x}_{k+1})$ by F2 if $\m{x}_{k+1}$ is generated in phase two.
It follows that if
constraint $i \in \C{A}_+(\m{x}^*)$ becomes active at iterate $\m{x}_k$,
then $i \in \C{A} (\m{x}_l)$ for all $l \ge k$.
Let $\C{I}$ be the limit of $\C{A}_+(\m{x}^*) \cap \C{A}(\m{x}_k)$
as $k$ tends to infinity;
choose $K$ larger if necessary to ensure that
$\C{I} \subset \C{A}(\m{x}_k)$ for all $k \ge K$ and suppose that
$\m{x}_k$ is any iterate of PASA with $k \ge K$.
If $\C{I} = \C{A}_+(\m{x}^*)$, then since $\C{I} \subset \C{A}(\m{x}_k)$,
it follows that
$\C{A}_+(\m{x}^*) \cup \C{A}(\m{x}_k) = \C{A}(\m{x}_k)$,
which implies that $\bar{\m{x}}_k = \m{x}_k$.
Thus (\ref{Aidentify}) holds trivially since the left side vanishes.
Let us focus on the nontrivial case where $\C{I}$ is strictly contained in
$\C{A}_+(\m{x}^*)$.
The analysis is partitioned into three cases.
{\bf Case 1.} {\em For $k$ sufficiently large,
$\m{x}_k$ is generated solely by LCO.}
By F3 it follows that for any $\epsilon > 0$, there exists $k \ge K$ such that
$\|\m{g}^\C{A}(\m{x}_k)\| = e (\m{x}_k) \le \epsilon$.
By the first-order optimality conditions for $\m{g}^\C{A}(\m{x}_k)$,
there exists $\g{\mu}_k \in \mathbb{R}^m$,
with $\mu_{ki} = 0$ for all $i \in \C{F}(\m{x}_k)$,
such that
\begin{equation}\label{702}
\| \m{g} (\m{x}_k) + \m{A}^{\sf T} \g{\mu}_k \| \le \epsilon.
\end{equation}
The multiplier $\g{\mu}_k$ is unique by
the independence of the active constraint gradients and the fact that
$\C{A}(\m{x}_k) \subset \C{A}(\m{x}^*)$ by the last condition in (\ref{701}).
Similarly, at $\m{x}^*$ we have
$\m{g} (\m{x}^*) + \m{A}^{\sf T} \g{\lambda}^* = \m{0}$,
where $\g{\lambda}^* = \Lambda(\m{x}^*)$.
Combine this with (\ref{702}) to obtain
\begin{equation}\label{703}
\|\m{A}^{\sf T} (\g{\mu}_k - \g{\lambda}^*)\| \le \epsilon +
\|\m{g}(\m{x}_k) - \m{g}(\m{x}^*)\| \le \epsilon + \kappa \|\m{x}_k - \m{x}^*\|.
\end{equation}
Since $\C{F}(\m{x}^*) \subset \C{F}(\m{x}_k)$ by
the last condition in (\ref{701}), it follows from complementary slackness
that $\mu_{ki} = \lambda_i^* = 0$ for all $i \in \C{F}(\m{x}^*)$.
Since the columns of $\m{A}^{\sf T}$ corresponding to indices in $\C{A}(\m{x}^*)$
are linearly independent, there exists a constant $c$ such that
\begin{equation}\label{704}
\|\g{\mu}_k - \g{\lambda}^*\| \le c \|\m{A}^{\sf T} (\g{\mu}_k - \g{\lambda}^*)\|.
\end{equation}
Hence, for $\epsilon$ sufficiently small and $k$ sufficiently large,
it follows from (\ref{703}) and (\ref{704}) that
$\mu_{ki} > 0$ for all $i \in \C{A}_+(\m{x}^*)$, which contradicts the
assumption that $\C{I}$ is strictly contained in $\C{A}_+(\m{x}^*)$.
Consequently, case 1 cannot occur.
{\bf Case 2.} {\em PASA makes an infinite number of branches from phase one
to phase two and from phase two to phase one.}
Let us consider the first iteration of phase two.
By Proposition~\ref{P=prop} and the definition of $\m{x}_{k+1}$ in F4,
we have
\[
\m{x}_{k+1} = \C{P}_{\Omega_k} (\m{x}_k - s_k \m{g}(\m{x}_k)).
\]
The first-order optimality condition for $\m{x}_{k+1}$ is that there
exists $\g{\mu}_k \in \mathbb{R}^m$ such that
\[
\m{x}_{k+1} - \m{x}_k + s_k \m{g}(\m{x}_k) + \m{A}^{\sf T} \g{\mu}_k
= \m{0},
\]
where $\mu_{ki} = 0$ for all $i \in \C{F}(\m{x}_{k+1}) \supset \C{F}(\m{x}^*)$.
Subtracting from this the identity
\[
s_k \m{g}(\m{x}^*) + \m{A}^{\sf T} (s_k \g{\lambda}^*) = \m{0}
\]
yields
\begin{equation}\label{800}
\m{A}^{\sf T} (\g{\mu}_k - s_k \g{\lambda}^*) =
\m{x}_k - \m{x}_{k+1} + s_k (\m{g}(\m{x}^*) - \m{g}(\m{x}_k)) .
\end{equation}
By the Lipschitz continuity of $\m{g}$,
the bound $s_k \le \alpha$ in F4,
and the assumption that the $\m{x}_k$
converge to $\m{x}^*$, the right side of (\ref{800})
tends to zero as $k$ tends to infinity.
Exploiting the independence of the active constraint gradients and
the identity $\mu_{ki} = \lambda_i^* = 0$ for all $i \in \C{F}(\m{x}^*)$,
we deduce from (\ref{704}) and (\ref{800}) that
$\|\g{\mu}_k - s_k \g{\lambda}^*\|$ tends to 0 as $k$ tends to
infinity.
It follows that for each $i$, $\mu_{ki} - s_k \lambda_i^*$
tends to zero.
If $s_k$ is uniformly bounded away from 0,
then $\mu_{ki} > 0$ when $i \in \C{A}_+(\m{x}^*)$ and $k$ is sufficiently large.
By complementary slackness, $\C{I} = \C{A}_+(\m{x}^*)$, which
would contradict the assumption that $\C{I}$ is strictly
contained in $\C{A}_+(\m{x}^*)$; hence case 2 cannot occur once
$s_k$ is bounded away from zero.
We now establish such a positive lower bound for $s_k$ in F4 of phase two.
If the Armijo stepsize terminates at $j = 0$, then $s_k = \alpha > 0$,
and we are done.
Next, suppose the stepsize terminates at $j \ge 1$.
Since $j$ is as small as possible, it follows from
Proposition~\ref{P=prop} and F4 that
\begin{equation}\label{900}
f(\m{x}_k + \m{d}_k) - f(\m{x}_k) > \delta \m{g}_k^{\sf T}\m{d}_k,
\end{equation}
where
\[
\m{d}_k =\C{P}_{\Omega_k} (\m{x}_k - \beta \m{g}_k) - \m{x}_k, \quad
\beta := s_k/\eta \le \alpha.
\]
The inequality $\beta \le \alpha$ holds since $j \ge 1$.
Since $\m{x}^* \in \Omega_k \subset \Omega$ by the second
condition in (\ref{701}), we have
\[
\C{P}_{\Omega_k} (\m{x}^*-\beta\m{g}(\m{x}^*)) = \m{x}^*.
\]
Since the projection onto a convex set is a nonexpansive operator,
we obtain
\[
\|(\m{x}_k + \m{d}_k) - \m{x}^*\| =
\|\C{P}_{\Omega_k}(\m{x}_k - \beta \m{g}(\m{x}_k)) -
\C{P}_{\Omega_k}(\m{x}^* - \beta \m{g}(\m{x}^*)) \| \le
(1 + \alpha \kappa) \|\m{x}_k - \m{x}^*\|.
\]
The right side of this inequality tends to zero as $k$ tends to infinity.
Choose $k$ large enough that
$\m{x}_k + \m{d}_k$ is within the ball centered at $\m{x}^*$ where
$f$ is Lipschitz continuously differentiable.
Let us expand $f$ in a Taylor series around $\m{x}_k$ to obtain
\begin{eqnarray}
f(\m{x}_k + \m{d}_k) - f(\m{x}_k) &=& \int_{0}^1
\nabla f(\m{x}_k + t\m{d}_k)^{\sf T} \m{d}_k \, dt \nonumber \\
&=& \m{g}_k^{\sf T}\m{d}_k + \int_{0}^1
(\nabla f(\m{x}_k + t\m{d}_k) - \nabla f (\m{x}_k))^{\sf T}\m{d}_k \, dt \nonumber \\
&\le& \m{g}_k^{\sf T}\m{d}_k +
0.5 \kappa\|\m{d}_k\|^2 . \label{904}
\end{eqnarray}
This inequality combined with (\ref{900}) yields
\begin{equation}\label{901}
(1-\delta)\m{g}_k^{\sf T}\m{d}_k + 0.5 \kappa \|\m{d}_k\|^2 > 0.
\end{equation}
As in (\ref{g1}), but with $\Omega$ replaced by $\Omega_k$, we have
\begin{equation}\label{902}
\m{g}_k^{\sf T}\m{d}_k \le - \|\m{d}_k\|^2/\beta.
\end{equation}
Note that $\m{d}_k \ne \m{0}$ due to (\ref{900}).
Combine (\ref{901}) and (\ref{902}) and replace $\beta$ by $s_k/\eta$
to obtain
\begin{equation}\label{903}
s_k > 2(1-\delta)\eta/\kappa.
\end{equation}
Hence,
if $j \ge 1$ in F4, then $s_k$ has the lower bound given in (\ref{903})
for $k$ sufficiently large, while $s_k = \alpha$ if $j = 0$.
This completes the proof of case 2.
{\bf Case 3.} {\em For $k$ sufficiently large, $\m{x}_k$ is generated
solely by GPA.}
The Taylor expansion (\ref{904}) can be written
\begin{equation}
f(\m{x}_k + \m{d}_k) = f(\m{x}_k) +
\delta\m{g}_k^{\sf T}\m{d}_k +
(1-\delta)\m{g}_k^{\sf T}\m{d}_k +
0.5 \kappa\|\m{d}_k\|^2, \label{100}
\end{equation}
where $\m{d}_k = \C{P}_\Omega (\m{x}_k - \alpha \m{g}(\m{x}_k)) - \m{x}_k$
is as defined in GPA.
If (\ref{Aidentify}) is violated, then for any choice of $c > 0$,
there exists $k\ge K$ such that
\begin{equation}\label{clower}
\| \m{x}_k - \bar{\m{x}}_k \| > c \|\m{x}_k - \m{x}^*\|^2 .
\end{equation}
By taking $c$ sufficiently large, we will show that
\begin{equation}\label{key0}
(1-\delta)\m{g}_k^{\sf T}\m{d}_k + 0.5 \kappa\|\m{d}_k\|^2 \le 0.
\end{equation}
In this case, (\ref{100}) implies that
$s_k = 1$ is accepted in GPA and
\[
\m{x}_{k+1} = \C{P}_\Omega (\m{x}_k - \alpha \m{g}(\m{x}_k)).
\]
By Corollary~\ref{SetLip},
$\|\Lambda(\m{x}_k, \alpha) - \Lambda(\m{x}^*,\alpha)\| =$
$\|\Lambda(\m{x}_k, \alpha) - \alpha \g{\lambda}^*\|$
tends to 0 as $k$ tends to infinity.
This implies that $\Lambda_{i}(\m{x}_k, \alpha) > 0$ when $\lambda_i^* > 0$,
which contradicts the assumption that $\C{I}$ is strictly contained in
$\C{A}_+(\m{x}^*)$.
Hence, (\ref{Aidentify}) cannot be violated.
To establish (\ref{key0}), first observe that
\begin{eqnarray}
\|\m{d}_k\| &=& \|\m{y}(\m{x}_k, \alpha) - \m{x}_k\| \le
\|\m{y}(\m{x}_k, \alpha) - \m{x}^*\| +
\|\m{x}^* - \m{x}_k\| \nonumber \\
&\le & (2+\alpha\kappa)\|\m{x}_k - \m{x}^*\|
\label{key1}
\end{eqnarray}
by (\ref{lip*}).
By the first-order optimality condition (\ref{first-order})
for $\m{y}(\m{x}_k, \alpha)$, it follows that
\[
\m{d}_k = -(\alpha \m{g}(\m{x}_k) + \m{A}^{\sf T} \Lambda (\m{x}_k, \alpha)).
\]
The dot product of this equation with $\m{d}_k$ gives
\begin{equation}\label{key2}
(\alpha \m{g}_k + \m{A}^{\sf T} \Lambda(\m{x}_k, \alpha))^{\sf T} \m{d}_k =
-\|\m{d}_k\|^2 \le 0.
\end{equation}
Again, by the definition of $\m{d}_k$ and by complementary slackness,
we have
\begin{equation}\label{key3}
\Lambda(\m{x}_k, \alpha)^{\sf T}\m{Ad}_k =
\Lambda(\m{x}_k, \alpha)^{\sf T}\m{A}(\m{y}(\m{x}_k,\alpha) - \m{x}_k) =
\Lambda(\m{x}_k, \alpha)^{\sf T}(\m{b} - \m{A}\m{x}_k) .
\end{equation}
By Corollary~\ref{SetLip}, it follows that for $K$ sufficiently large and
for any $k \ge K$,
\[
\Lambda_i(\m{x}_k, \alpha) \ge 0.5 \Lambda_i(\m{x}^*, \alpha)
\mbox{ for all } i \in \C{A}_+(\m{x}^*) .
\]
Hence, for any $i \in \C{A}_+(\m{x}^*)$ and $k \ge K$, (\ref{key3}) gives
\begin{equation}\label{key4}
\Lambda(\m{x}_k, \alpha)^{\sf T}\m{A}\m{d}_k \ge
0.5\Lambda_i(\m{x}^*, \alpha)(\m{b} - \m{A}\m{x}_k)_i
= 0.5 \alpha \Lambda_i (\m{x}^*)(\m{b} - \m{A}\m{x}_k)_i
\end{equation}
since $\Lambda(\m{x}_k, \alpha) \ge \m{0}$, $\m{A}\m{x}_k \le \m{b}$,
and each term in the inner product
$\Lambda(\m{x}_k, \alpha)^{\sf T}(\m{b} - \m{A}\m{x}_k)$ is nonnegative.
Combine (\ref{key2})--(\ref{key4}) to obtain
\begin{equation}\label{key5}
\m{g}_k^{\sf T} \m{d}_k \le
-0.5 \Lambda_i (\m{x}^*)(\m{b} - \m{A}\m{x}_k)_i
\end{equation}
for any $i \in \C{A}_+(\m{x}^*)$ and $k \ge K$.
The distance $\|\m{x}_k - \bar{\m{x}}_k\|$ between
$\m{x}_k$ and its projection $\bar{\m{x}}_k$ in (\ref{fstar})
is bounded by a constant times the
maximum violation of the constraint $(\m{b} - \m{Ax}_k)_i = 0$
for $i \in \C{A}_+(\m{x}^*) \cup \C{A}(\m{x}_k)$; that is,
there exists a constant $\bar{c}$ such that
\begin{equation}\label{101}
\|\m{x}_k - \bar{\m{x}}_k\| \le \bar{c}
\max \{ (\m{b} - \m{Ax}_k)_i : i \in \C{A}_+(\m{x}^*) \cup \C{A}(\m{x}_k) \}.
\end{equation}
Since $(\m{b} - \m{Ax}_k)_i = 0$ for all $i \in \C{A}(\m{x}_k)$,
it follows that the maximum constraint violation in (\ref{101}) is achieved for
some $i \in \C{A}_+(\m{x}^*)$
(otherwise, $\bar{\m{x}}_k = \m{x}_k$ and (\ref{clower}) is violated).
Consequently, if the index $i \in \C{A}_+(\m{x}^*)$ in (\ref{key5})
is chosen to make $(\m{b} - \m{Ax}_k)_i$ as large as possible, then
\[
\m{g}_k^{\sf T} \m{d}_k \le -d\|\m{x}_k - \bar{\m{x}}_k\|, \quad
\mbox{where } d = \frac{0.5}{\bar{c}} \min \{ \Lambda_i (\m{x}^*) :
i \in \C{A}_+(\m{x}^*) \}.
\]
If (\ref{clower}) holds, then by (\ref{key1}), we have
\[
\m{g}_k^{\sf T} \m{d}_k \le -c d \|\m{x}_k - \m{x}^*\|^2 \le
-c d \|\m{d}_k\|^2 / (2 + \alpha \kappa )^2.
\]
Hence, the expression (\ref{key0}) has the upper bound
\[
(1-\delta)\m{g}_k^{\sf T}\m{d}_k + 0.5\kappa\|\m{d}_k\|^2 \le
\left( \frac{cd (\delta - 1)}{(2 + \alpha \kappa)^2} +0.5\kappa \right) \|\m{d}_k\|^2.
\]
Since $\delta < 1$, this is nonpositive when $c$ is sufficiently large.
This completes the proof of (\ref{key0}).
\end{proof}
Note that there is a fundamental difference between the
prototype GPA used in this paper and the versions of the
gradient projection algorithm based on a
piecewise projected gradient such as those in \cite{bm88, bm94, bmt90}.
In GPA there is a single projection followed by
a backtrack towards the starting point.
Consequently, we are unable to show that the active constraints
are identified in a finite number of iterations,
unlike the piecewise projection schemes,
which identify the active constraints in a finite number
of iterations, but at the expense of additional projections when the
stepsize increases.
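For readers who want a concrete picture of this single-projection structure, the following sketch (Python-style pseudocode, not the authors' implementation) illustrates one GPA step under the assumption that a projection oracle \texttt{proj\_Omega} for $\Omega$ is available; the Armijo parameters and the termination of the backtracking loop are placeholders for the corresponding quantities in GPA.
\begin{verbatim}
import numpy as np

def gpa_step(x, f, grad, proj_Omega, alpha=1.0, delta=1e-4,
             eta=0.5, max_backtracks=30):
    """One gradient projection step: a single projection defines the
    search direction, then an Armijo backtrack moves back toward the
    starting point x along that segment (illustrative sketch only)."""
    g = grad(x)
    d = proj_Omega(x - alpha * g) - x   # the only projection in this step
    fx, gTd = f(x), g @ d
    s = 1.0
    for _ in range(max_backtracks):
        if f(x + s * d) <= fx + delta * s * gTd:   # sufficient decrease
            break
        s *= eta                        # backtrack toward x; no new projection
    return x + s * d
\end{verbatim}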
In Lemma \ref{ratio} we show that even though we do not identify the
active constraints, the violation of the constraints
$(\m{Ax} - \m{b})_i = 0$ for $i \in \C{A}_+(\m{x}^*)$ by iterate
$\m{x}_k$ is on the order of the error in $\m{x}_k$ squared.
When $\m{x}^*$ is fully determined by the active constraints for which
strict complementarity holds, convergence is achieved
in a finite number of iterations, as we now show.
\begin{corollary}
Suppose $\m{x}^*$ is a stationary point where the active constraint gradients
are linearly independent and $f$ is Lipschitz continuously differentiable
in a neighborhood of $\m{x}^*$.
If the PASA iterates $\m{x}_k$ converge to $\m{x}^*$ and
$|\C{A}_+(\m{x}^*)| = n$,
then $\m{x}_k = \m{x}^*$ after a finite number of iterations.
\end{corollary}
\begin{proof}
Choose $k$ large enough that $\C{A}(\m{x}_k) \subset \C{A}(\m{x}^*)$
and the projection $\bar{\m{x}}_k$ in (\ref{fstar}) is defined.
Since $|\C{A}_+(\m{x}^*)| = n$ and the active constraint gradients are
linearly independent, we have $\bar{\m{x}}_k = \m{x}^*$.
By Lemma~\ref{ratio}, we must have $\m{x}_k = \m{x}^*$ whenever
$\|\m{x}_k - \m{x}^*\| < 1/c$.
\end{proof}
To complete the analysis of PASA in the degenerate case and show
that PASA ultimately performs only iterations in phase two, we also need
to assume that the strong second-order sufficient optimality condition holds.
Recall that a stationary point $\m{x}^*$ of (\ref{P}) satisfies the
strong second-order sufficient optimality condition if there exists
$\sigma > 0$ such that
\begin{equation} \label{opt2}
\m{d} ^{\sf T} \nabla^2 f(\m{x}^*) \m{d} \ge \sigma \|\m{d}\|^2 \quad
\mbox{whenever} \quad (\m{Ad})_i = 0 \mbox{ for all } i \in \C{A}_+(\m{x}^*).
\end{equation}
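For a concrete check of (\ref{opt2}), one can restrict the Hessian to the null space of the strongly active constraint gradients; the following sketch is illustrative only, where \texttt{A\_plus} denotes the rows of $\m{A}$ indexed by $\C{A}_+(\m{x}^*)$ and \texttt{hess\_xstar} the Hessian $\nabla^2 f(\m{x}^*)$, and the returned value serves as $\sigma$ when it is positive.
\begin{verbatim}
import numpy as np
from scipy.linalg import null_space, eigh

def strong_socs_sigma(hess_xstar, A_plus):
    """Smallest eigenvalue of Z^T H Z, where the columns of Z span
    {d : A_plus d = 0}; condition (opt2) holds iff this value is positive."""
    n = hess_xstar.shape[0]
    Z = null_space(A_plus) if A_plus.size else np.eye(n)
    if Z.shape[1] == 0:        # the subspace is trivial, so (opt2) is vacuous
        return np.inf
    H_red = Z.T @ hess_xstar @ Z
    return eigh(H_red, eigvals_only=True)[0]
\end{verbatim}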
First, we observe that under this assumption, the distance from
$\m{x}_k$ to $\m{x}^*$ is bounded in terms of $E(\m{x}_k)$.
\begin{lemma}\label{stability}
If $f$ is twice continuously differentiable in a neighborhood of
a local minimizer $\m{x}^*$ for $(\ref{P})$ where
the active constraint gradients are linearly independent
and the strong second-order sufficient optimality condition holds,
then for some $\rho > 0$ and for all $\m{x} \in \C{B}_\rho(\m{x}^*)$, we have
\begin{equation}\label{loc-bound}
\|\m{x} - \m{x}^*\| \le
\left[ \sqrt{1 + \left( \frac{2(1 + \kappa)(3+\kappa)}{\sigma} \right)^2 }
\; \right] E(\m{x}) ,
\end{equation}
where $\kappa$ is a Lipschitz constant for $\nabla f$ on
$\C{B}_{\rho}(\m{x}^*)$.
\end{lemma}
\begin{proof}
By the continuity of the second derivative of $f$,
it follows from (\ref{opt2}) that for $\rho> 0$ sufficiently small,
\begin{eqnarray}
\quad \quad \quad (\m{x}-\m{x}^*)^{\sf T} (\m{g} (\m{x})-\m{g} (\m{x}^*)) &=&
(\m{x}-\m{x}^*)^{\sf T} \int_0^1 \nabla^2 f (\m{x}^* + t (\m{x} - \m{x}^*)) dt \;
(\m{x} - \m{x}^*) \nonumber \\
&\ge& 0.5\sigma \|\m{x}-\m{x}^*\|^2
\label{31}
\end{eqnarray}
for all $\m{x} \in \C{B}_{\rho}(\m{x}^*) \cap \C{S}_+$, where
\[
\C{S}_+ = \{ \m{x} \in \mathbb{R}^n :
(\m{Ax}- \m{b})_i = 0 \mbox{ for all } i \in \C{A}_+(\m{x}^*)\} .
\]
Given $\m{x} \in \C{B}_\rho (\m{x}^*)$,
define $\hat{\m{x}} = \C{P}_{\C{S}_+}(\m{x})$.
Since $\C{P}_{\Omega \cap \C{S}_+}(\m{x} - \m{g}(\m{x})) \in \C{S}_+$,
it follows that
\begin{equation}\label{151}
\|\hat{\m{x}} - \m{x}\| = \|\C{P}_{\C{S}_+}(\m{x}) - \m{x}\| \le
\|\C{P}_{\Omega \cap \C{S}_+}(\m{x}- \m{g}(\m{x})) - \m{x}\| .
\end{equation}
Since $\Lambda_i(\m{x}^*) > 0$ for all $i \in \C{A}_+(\m{x}^*)$,
it follows from Corollary~\ref{SetLip} and complementary slackness
that $\rho$ can be chosen smaller if necessary to ensure that
\begin{equation}\label{*}
(\m{Ay}(\m{x}, 1) - \m{b})_i = 0 \mbox{ for all } i \in \C{A}_+(\m{x}^*) ,
\end{equation}
which implies that $\m{y}(\m{x}, 1) \in \C{S}_+$.
Since $\m{y}(\m{x}, 1) = \C{P}_\Omega (\m{x} - \m{g}(\m{x}))$ and
$\m{y}(\m{x},1) \in \C{S}_+$, we also have
$\C{P}_\Omega (\m{x} - \m{g}(\m{x})) =$
$\C{P}_{\Omega\cap\C{S}_+} (\m{x} - \m{g}(\m{x}))$.
With this substitution in (\ref{151}), we obtain
\begin{equation}\label{h53}
\|\hat{\m{x}} - \m{x}\| \le
\| \C{P}_{\Omega} (\m{x} - \m{g}(\m{x})) - \m{x}\| =
\| \m{y}(\m{x}, 1) - \m{x}\| = \| \m{d}^1(\m{x}) \| = E(\m{x}).
\end{equation}
By the Lipschitz continuity of $\m{g}$, (\ref{h53}), and (\ref{lipd}),
it follows that
\begin{eqnarray}
\| \m{d}^1(\hat{\m{x}}) \| & \le&
\| \m{d}^1(\m{x}) \| + \| \m{d}^1(\hat{\m{x}}) - \m{d}^1(\m{x}) \| \nonumber \\
& \le& \| \m{d}^1(\m{x}) \| +
(2+\kappa) \| \m{x} - \hat{\m{x}} \| \nonumber \\
&\le& (3+\kappa) \| \m{d}^1(\m{x}) \| \label{Z30}
\end{eqnarray}
for all $\m{x} \in \C{B}_{\rho}(\m{x}^*)$.
Since $\hat{\m{x}} = \C{P}_{\C{S}_+}(\m{x})$,
the difference $\hat{\m{x}} - \m{x}$ is orthogonal to
$\C{N}(\m{A}_{\C{I}})$ when $\C{I} = \C{A}_+(\m{x}^*)$.
Since $\hat{\m{x}} - \m{x}^* \in \C{N}(\m{A}_{\C{I}})$, it follows from
Pythagoras that
\begin{equation}\label{xbar23}
\|\m{x} - \hat{\m{x}} \|^2 + \|\hat{\m{x}} - \m{x}^*\|^2 =
\|\m{x} - \m{x}^*\|^2.
\end{equation}
Consequently, $\hat{\m{x}} \in \C{B}_{\rho}(\m{x}^*)$
for all $\m{x} \in \C{B}_{\rho}(\m{x}^*)$, and
\begin{equation}\label{x-x*}
\|\m{x} - \m{x}^*\| =
\sqrt {\|\m{x} - \hat{\m{x}} \|^2 + \|\hat{\m{x}} - \m{x}^*\|^2} .
\end{equation}
By P8 in \cite{hz05a}, (\ref{31}), and (\ref{Z30}), we have
\begin{equation}\label{98}
\|\hat{\m{x}} - \m{x}^*\| \le
\left( \frac{1+\kappa}{0.5\sigma} \right) \|\m{d}^1 (\hat{\m{x}})\| \le
\left( \frac{(1+\kappa)(3+\kappa)}{0.5\sigma} \right) \|\m{d}^1 (\m{x})\| .
\end{equation}
Insert
(\ref{h53}) and (\ref{98}) in (\ref{x-x*}) to complete the proof.
\end{proof}
We now examine the asymptotic behavior of the
undecided index set $\C{U}$.
\begin{lemma}
\label{undecided-lemma}
If $f$ is twice continuously differentiable in a neighborhood of
a local minimizer $\m{x}^*$ for $(\ref{P})$ where
the active constraint gradients are linearly independent
and the strong second-order sufficient optimality condition holds,
and if PASA generates an infinite sequence of iterates converging to
$\m{x}^*$, then $\C{U}(\m{x}_k)$ is empty for $k$ sufficiently large.
\end{lemma}
\begin{proof}
If $E(\m{x}_k) = 0$ for some $k$, then PASA terminates and the lemma holds
trivially.
Hence, assume that $E(\m{x}_k) \ne 0$ for all $k$.
To show $\C{U}(\m{x}_k)$ is empty for some $k$, we must show that
either
\[
\mbox{(a) } \g{\lambda}_i(\m{x}_k)/E(\m{x}_k)^{\gamma} < 1 \quad \mbox{or}\quad
\mbox{(b) } (\m{b} - \m{Ax}_k)_i/E(\m{x}_k)^{\beta} < 1
\]
for each $i$.
If $i \in \C{A}_+(\m{x}^*)$, then $[\m{b} - \m{A}\bar{\m{x}}_k]_i = 0$, and
by Lemma~\ref{ratio},
\[
[\m{b} -\m{Ax}_k]_i = [\m{A}(\bar{\m{x}}_k - \m{x}_k)]_i \le
\|\m{A}\| \|\bar{\m{x}}_k - \m{x}_k\| \le
c\|\m{A}\|\|\m{x}_k - \m{x}^*\|^2.
\]
By Lemma \ref{stability}, there exists a constant $d$ such that
$\|\m{x} - \m{x}^*\| \le d E(\m{x})$ for $\m{x}$ near $\m{x}^*$.
Hence, for all $i \in \C{A}_+(\m{x}^*)$ and $k$ sufficiently large, we have
\[
[\m{b} -\m{Ax}_k]_i \le cd^2 \|\m{A}\| E(\m{x}_k)^2 =
cd^2 \|\m{A}\| E(\m{x}_k)^{2-\beta} E(\m{x}_k)^\beta .
\]
Since $\beta \in (1,2)$, $E(\m{x}_k)^{2-\beta}$ tends to zero as $k$
tends to infinity, and (b) holds when $k$ is large enough that
$cd^2 \|\m{A}\| E(\m{x}_k)^{2-\beta} < 1$.
If $i \in \C{A}_+(\m{x}^*)^c$, then $\lambda_i(\m{x}^*) = 0$.
By Corollary~\ref{SetLip}, there exists a constant $c$ such that
\[
{\lambda}_i (\m{x}_k) =
{\lambda}_i (\m{x}_k) - {\lambda}_i (\m{x}^*) \le c\|\m{x}_k - \m{x}^*\|
\le cd E (\m{x}_k) = cd E(\m{x}_k)^{1-\gamma} E(\m{x}_k)^\gamma.
\]
Since $\gamma \in (0,1)$, $E(\m{x}_k)^{1-\gamma}$ tends to zero as
$k$ tends to infinity, and (a) holds when $k$ is large enough that
$cd E(\m{x}_k)^{1-\gamma} < 1$.
In summary, for $k$ sufficiently large, (a) holds when
$i \in \C{A}_+(\m{x}^*)^c$ and (b) holds when $i \in \C{A}_+(\m{x}^*)$.
This implies that $\C{U}(\m{x}_k)$ is empty for $k$ sufficiently large.
\end{proof}
As shown in the proof of Lemma~\ref{undecided-lemma},
(b) holds for $i \in \C{A}_+(\m{x}^*)$ and $k$ sufficiently large.
This implies that the constraint violation
$(\m{b} - \m{Ax}_k)_i$ tends to zero faster than the error $E(\m{x}_k)$.
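To make the tests (a) and (b) concrete, the undecided index set can be evaluated as in the following sketch; here \texttt{lam} and \texttt{E} stand for the multiplier estimate $\lambda(\cdot)$ and the error estimator $E(\cdot)$, and the default exponents merely respect $\gamma\in(0,1)$ and $\beta\in(1,2)$ (an illustration, not the definition used in Algorithm~\ref{PASA}).
\begin{verbatim}
import numpy as np

def undecided_set(x, lam, A, b, E, gamma=0.5, beta=1.5):
    """Indices i for which neither test (a) nor test (b) holds, i.e.,
    lam_i(x) >= E(x)**gamma and (b - A x)_i >= E(x)**beta (sketch)."""
    Ex = E(x)
    lam_x = lam(x)
    slack = b - A @ x
    return [i for i in range(len(b))
            if lam_x[i] >= Ex**gamma and slack[i] >= Ex**beta]
\end{verbatim}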
The following result, along with Lemma~\ref{undecided-lemma},
essentially implies that PASA eventually performs only phase two.
\begin{lemma}
\label{mu-bound}
If $f$ is twice continuously differentiable in a neighborhood of
a local minimizer $\m{x}^*$ for $(\ref{P})$ where
the active constraint gradients are linearly independent
and the strong second-order sufficient optimality condition holds,
and if PASA generates an infinite sequence of iterates converging to
$\m{x}^*$, then there exists $\theta^* > 0$ such that
\begin{equation}\label{facecond}
e(\m{x}_k) \ge \theta^* E(\m{x}_k)
\end{equation}
for $k$ sufficiently large.
\end{lemma}
\begin{proof}
Let $\C{I} := \C{A}_+(\m{x}^*) \cup \C{A}(\m{x}_k)$.
The projection $\bar{\m{x}}_k$ has the property that the difference
$\m{x}_k - \bar{\m{x}}_k$ is orthogonal to $\C{N}(\m{A}_\C{I})$.
Choose $k$ large enough that
$\C{A}(\m{x}_k) \subset \C{A}(\m{x}^*)$.
It follows that $\bar{\m{x}}_k - \m{x}^* \in \C{N}(\m{A}_\C{I})$.
Hence, by Pythagoras, we have
\[
\| \m{x}_k -\bar{\m{x}}_k\|^2 + \| \bar{\m{x}}_k - \m{x}^* \|^2
= \| \m{x}_k -\m{x}^*\|^2.
\]
Consequently,
\begin{equation}\label{201}
\|\bar{\m{x}}_k -\m{x}^*\| \le \| \m{x}_k -\m{x}^*\|,
\end{equation}
and $\bar{\m{x}}_k$ approaches $\m{x}^*$ as $k$ tends to infinity.
Choose $\rho > 0$ small enough that $f$ is twice
continuously differentiable in $\C{B}_\rho(\m{x}^*)$, and let
$\kappa$ be the Lipschitz constant for $\nabla f$ in $\C{B}_\rho(\m{x}^*)$.
Choose $k$ large enough that $\m{x}_k \in \C{B}_\rho(\m{x}^*)$.
By (\ref{201}) $\bar{\m{x}}_k \in \C{B}_\rho(\m{x}^*)$.
Since $\m{d}^1(\m{x}^*)=\m{0}$, it follows from (\ref{lipd}) that
\begin{eqnarray}
\|\m{d}^1(\m{x}_k)\| &\le& \|\m{d}^1(\m{x}_k) - \m{d}^1(\bar{\m{x}}_k)\| +
\|\m{d}^1(\bar{\m{x}}_k) - \m{d}^1(\m{x}^*)\|
\nonumber \\
&\le& (2+\kappa)(\|\m{x}_k - \bar{\m{x}}_k \| + \|\bar{\m{x}}_k - \m{x}^* \|)
\label{Z39}
\end{eqnarray}
Lemma~\ref{ratio} gives
\begin{equation}\label{999}
\|\bar{\m{x}}_k - \m{x}_k\| \le
c \|\m{x}_k - \m{x}^*\|^2
\le c \|\m{x}_k - \m{x}^*\|(\|{\m{x}}_k - \bar{\m{x}}_k\| +
\|\bar{\m{x}}_k - \m{x}^*\|) .
\end{equation}
Since $\m{x}_k$ converges to $\m{x}^*$, it follows from (\ref{999})
that for any $\epsilon > 0$,
\begin{equation}\label{xxzz}
\|\bar{\m{x}}_k - \m{x}_k\| \le \epsilon \|\bar{\m{x}}_k - \m{x}^*\|
\end{equation}
when $k$ is sufficiently large.
Combine (\ref{Z39}) and (\ref{xxzz}) to obtain
\begin{equation}\label{first}
\|\m{d}^1 (\m{x}_k) \| \le c \|\bar{\m{x}}_k - \m{x}^*\|
\end{equation}
for some constant $c$ and any $k$ sufficiently large.
Choose $\rho > 0$ small enough that (\ref{31}) holds for
all $\m{x} \in \C{B}_\rho (\m{x}^*)$, and
choose $k$ large enough that $\bar{\m{x}}_k \in \C{B}_\rho (\m{x}^*)$.
The bound (\ref{31}) yields
\begin{equation} \label{upper}
0.5\sigma\|\bar{\m{x}}_k-\m{x}^*\|^2 \le
(\bar{\m{x}}_k - \m{x}^*)^{\sf T} (\m{g}(\bar{\m{x}}_k) - \m{g}(\m{x}^*)) .
\end{equation}
By the first-order optimality conditions for a local minimizer $\m{x}^*$ of
(\ref{P}), there exists a multiplier $\g{\lambda}^* \in \mathbb{R}^m$ such that
\begin{equation}\label{1st}
\m{g}(\m{x}^*) + \m{A}^{\sf T} \g{\lambda}^* = 0 \quad \mbox{where} \quad
(\m{b} - \m{Ax}^*)^{\sf T} \g{\lambda}^* = 0 \; \mbox{ and } \;
\g{\lambda}^* \ge \m{0}.
\end{equation}
Observe that $\lambda_i^* [\m{A}(\bar{\m{x}}_k - \m{x}^*)]_i = 0$
for each $i$ since
$[\m{A}(\bar{\m{x}}_k - \m{x}^*)]_i = 0$
when $i \in \C{A}_+(\m{x}^*)$, while
$\lambda_i^* = 0$ when $i \in \C{A}_+(\m{x}^*)^c$.
Hence, we have
\[
[\m{A}(\bar{\m{x}}_k - \m{x}^*)]^{\sf T} \g{\lambda}^* = 0.
\]
We utilize this identity to obtain
\begin{equation}\label{x1}
(\bar{\m{x}}_k - \m{x}^*)^{\sf T} \m{g}(\m{x}^*) =
(\bar{\m{x}}_k - \m{x}^*)^{\sf T} (\m{g}(\m{x}^*) + \m{A}^{\sf T} \g{\lambda}^*) = \m{0}
\end{equation}
by the first equality in (\ref{1st}).
The first-order optimality conditions for the minimizer
$\m{g}^\C{I}(\bar{\m{x}}_k)$ in (\ref{dL})
imply the existence of $\g{\lambda}_\C{I}$ such that
\begin{equation}\label{2nd}
\m{g}^\C{I}(\bar{\m{x}}_k) -
\m{g}(\bar{\m{x}}_k) + \m{A}_\C{I}^{\sf T} \g{\lambda}_\C{I} = \m{0}
\quad \mbox{where} \quad \m{A}_\C{I} \bar{\m{x}}_k = \m{b}_\C{I} .
\end{equation}
Since $\C{A}(\m{x}_k) \subset \C{A}(\m{x}^*)$, we have
$\m{A}_\C{I} (\bar{\m{x}}_k - \m{x}^*) = \m{0}$,
$[\m{A}_\C{I} (\bar{\m{x}}_k - \m{x}^*)]^{\sf T}\g{\lambda}_\C{I} = \m{0}$, and
\begin{equation}\label{x2}
(\bar{\m{x}}_k - \m{x}^*)^{\sf T} \m{g}(\bar{\m{x}}_k) =
(\bar{\m{x}}_k - \m{x}^*)^{\sf T} (\m{g}(\bar{\m{x}}_k) -
\m{A}_\C{I}^{\sf T} \g{\lambda}_\C{I}) =
(\bar{\m{x}}_k - \m{x}^*)^{\sf T}
\m{g}^\C{I}(\bar{\m{x}}_k)
\end{equation}
by (\ref{2nd}).
Combine (\ref{upper}), (\ref{x1}), and (\ref{x2}) to obtain
\begin{equation}\label{dbar}
0.5\sigma\|\bar{\m{x}}_k-\m{x}^*\| \le \|\m{g}^\C{I}(\bar{\m{x}}_k)\| .
\end{equation}
If $\C{J}$ denotes $\C{A}(\m{x}_k)$, then
$\C{J} \subset \C{I} = \C{A}(\m{x}_k) \cup \C{A}_+(\m{x}^*)$.
Hence, $\C{N}(\m{A}_\C{I}) \subset \C{N}(\m{A}_\C{J})$.
It follows that
\begin{equation}\label{p1}
\|\m{g}^\C{I} (\m{x}_k)\| \le
\|\m{g}^\C{J} (\m{x}_k)\| = e(\m{x}_k) .
\end{equation}
Since the projection on a convex set is nonexpansive,
\begin{equation}\label{p2}
\|\m{g}^\C{I}(\bar{\m{x}}_k) -
\m{g}^\C{I}({\m{x}}_k) \| \le
\|\m{g}(\bar{\m{x}}_k) - \m{g}({\m{x}}_k) \| \le
\kappa \|\bar{\m{x}}_k - {\m{x}}_k\| .
\end{equation}
Combine (\ref{xxzz}), (\ref{p1}), and (\ref{p2}) to get
\begin{eqnarray*}
\|\m{g}^\C{I}(\bar{\m{x}}_k)\|
&\le&
\|\m{g}^\C{I}(\bar{\m{x}}_k) - \m{g}^\C{I}({\m{x}}_k)\| +
\|\m{g}^\C{I}({\m{x}}_k)\| \\
&\le& e(\m{x}_k) + \kappa \|\bar{\m{x}}_k - {\m{x}}_k\| \le
e(\m{x}_k) + \epsilon \kappa \|\bar{\m{x}}_k - {\m{x}}^*\|.
\end{eqnarray*}
Consequently, by (\ref{dbar}) we have
$0.4\sigma \|\bar{\m{x}}_k - \m{x}^*\| \le e(\m{x}_k)$ for
$\epsilon$ sufficiently small and $k$ sufficiently large.
Finally, (\ref{first}) completes the proof.
\end{proof}
By the analysis of Section~\ref{nondegenerate},
(\ref{facecond}) holds with $\theta^* = 1$ for a nondegenerate problem;
neither the strong second-order sufficient optimality condition nor
independence of the active constraint gradients is needed in this case.
We now show that within a finite number of iterations,
PASA will perform only LCO.
\begin{theorem}
\label{UA-rules}
If PASA with $\epsilon = 0$ generates an infinite sequence of iterates
converging to a local minimizer $\m{x}^*$ of $(\ref{P})$
where the active constraint gradients are linearly independent and the
strong second-order sufficient optimality condition holds,
and if $f$ is twice continuously differentiable near $\m{x}^*$,
then within a finite number of iterations, only phase two is executed.
\end{theorem}
\begin{proof}
By Lemma~\ref{undecided-lemma}, the undecided index set $\C{U}(\m{x}_k)$
is empty for $k$ sufficiently large, and by Lemma~\ref{mu-bound},
there exists $\theta^* > 0$ such that $e(\m{x}_k) \ge \theta^*E(\m{x}_k)$.
If $k$ is large enough that $\C{U}(\m{x}_k)$ is empty, then in phase one,
$\theta$ will be reduced until $\theta \le \theta^*$.
Once this holds, phase one branches to phase two and phase two cannot
branch to phase one.
\end{proof}
Similar to Theorem~4.2 of \cite{hz05a},
if $f$ is a strongly convex quadratic,
LCO is based on a projected conjugate gradient method,
and the active constraint gradients are linearly independent,
then Theorem~\ref{UA-rules} implies that PASA converges to the optimal
solution in a finite number of iterations.
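Before summarizing, the two-phase structure analyzed above can be sketched as follows; this is a schematic only, not Algorithm~\ref{PASA} itself: \texttt{gpa\_step} and \texttt{lco\_step} stand in for phase one and phase two, and the branching tests are supplied by the caller (for instance, tests built from $e(\cdot)$, $E(\cdot)$, $\theta$, and the undecided set).
\begin{verbatim}
def pasa(x0, gpa_step, lco_step, branch_to_lco, branch_to_gpa, E,
         eps=0.0, max_iter=1000):
    """Schematic two-phase loop: alternate between gradient projection
    and LCO according to the supplied branching tests, stopping when
    the error estimator E reaches the tolerance eps (sketch only)."""
    x, phase_one = x0, True
    for _ in range(max_iter):
        if E(x) <= eps:
            break
        if phase_one:
            x = gpa_step(x)
            if branch_to_lco(x):      # e.g., e(x) >= theta * E(x)
                phase_one = False
        else:
            x = lco_step(x)
            if branch_to_gpa(x):      # e.g., e(x) < theta * E(x)
                phase_one = True
    return x
\end{verbatim}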
\section{Conclusions}
\label{conclusions}
A new active set algorithm PASA was developed for solving polyhedral
constrained nonlinear optimization problems.
Phase one of the algorithm is the gradient projection
algorithm, while phase two is any algorithm for linearly
constrained optimization (LCO)
which monotonically improves the value of the objective function,
which never frees an active constraint, and which has the property
that the projected gradients tend to zero,
at least along a subsequence of the iterates.
Simple rules were given in Algorithm~\ref{PASA} for branching between
the two phases.
Global convergence to a stationary point was established, while asymptotically,
within a finite number of iterations, only phase two is performed.
For nondegenerate problems, this result follows almost immediately, while
for degenerate problems, the analysis required linear independence of
the active constraint gradients, the strong second-order sufficient
optimality conditions, and a special startup procedure for LCO.
The numerical implementation and performance of PASA for general
polyhedral constrained problems will be studied in a separate paper.
Numerical performance for bound constrained optimization
problems is studied in \cite{hz05a}.
\end{document}
|
\begin{document}
\title{A remark on stability and the D-topology of mapping spaces}
\author{Alireza Ahmadi}
\address{Department of Mathematical Sciences, Yazd University, Yazd, 89195--741, Iran}
\email{[email protected]; [email protected]}
\subjclass[2020]{Primary 58A05, 58C25; Secondary 57P99}
\keywords{Mapping spaces, D-topology, stability theorem, diffeological \'{e}tale manifolds}
\begin{abstract}
We discuss how stability is related to the D-topology of mapping spaces, equipped with the functional diffeology. Indeed, we show that stable classes of mapping spaces are D-open. After a reformulation of the classical stability theorem of manifolds with respect to the D-topology, we prove a version of the stability theorem in the class of diffeological \'{e}tale manifolds.
\end{abstract}
\maketitle
\section{Introduction}
The stability of mapping spaces of manifolds\footnote{By a manifold we mean a second-countable
Hausdorff finite-dimensional smooth manifold.} is an aspect of the classical differential geometry
which concerns those properties that remain invariant under small deformations or perturbations in a smooth manner (see, e.g., \cite{GG,GP}). Let us first recall the classical definition of a stable property.
\begin{definition}\label{def-0}(\cite{GP})
Let $ M $ and $ N $ be manifolds.
A property $ \mathsf{P} $ of smooth maps is \textbf{stable} on $ \mathrm{C}^{\infty}(M,N) $ under small deformations, whenever for any smooth map $ h:M\times[0,1]\rightarrow N $, where $ [0,1] $ is a manifold with boundary, if $ h_0:M\rightarrow N, x\mapsto h(x,0) $ possesses the property $ \mathsf{P} $, then there exists a number $ \epsilon>0 $ such that the map $ h_t:M\rightarrow N, x\mapsto h(x,t) $ also possesses the property $ \mathsf{P} $, for all $ t\in[0,\epsilon) $.
\end{definition}
This article is aimed to study the stability of mapping spaces in the context of diffeology, which extends ordinary differential geometry by diffeological spaces. The reader can refer to the book \cite{PIZ} for
a comprehensive introduction to diffeology.
One advantage of working in this framework, among others, is that mapping spaces have a natural structure, called the functional diffeology, for which the natural map
\begin{center}
$ \mathrm{C}^{\infty}(X,\mathrm{C}^{\infty}(Y,Z)) \longrightarrow\mathrm{C}^{\infty}(X\times Y,Z) $
\end{center}
taking $ f \mapsto \tilde{f}$ with $\tilde{f}(x,y)=f(x)(y) $, is a diffeomorphism of diffeological spaces
(see \cite[\S 1.60]{PIZ}).
This feature turns the category of diffeological spaces into a Cartesian closed one, which is very useful in smooth homotopy theory. This particularly enables us to define a coherent notion of stability in diffeology.
\begin{definition}\label{def-1}
Let $ \mathsf{P} $ be a property of smooth maps between diffeological spaces $ X $ and $ Y $, i.e., of elements of $ \mathrm{C}^{\infty}(X,Y) $.
We say that the property $ \mathsf{P} $ is \textbf{stable} on $ \mathrm{C}^{\infty}(X,Y) $ under small deformations whenever for any smooth homotopy $ h:\mathbb{R} \rightarrow \mathrm{C}^{\infty}(X,Y) $, if $ h(0) $ possesses the property $ \mathsf{P} $, then there exists a number $ \epsilon>0 $ such that $ h(t) $ also possesses the property $ \mathsf{P} $, for all $ t\in(-\epsilon,\epsilon) $.
\end{definition}
On the other hand, every diffeological space has an intrinsic topology, called the D-topology, in which a subset is D-open\footnote{Throughout this article, the prefix D of a topological property indicates the same property in terms of the D-topology.} if its preimage by any plot is open.
Hence a mapping space obtains a topological structure from its functional diffeology.
\begin{proposition}\label{p-3}
Let $ X $ and $ Y $ be diffeological spaces.
A property $ \mathsf{P} $ is stable on $ \mathrm{C}^{\infty}(X,Y) $ if and only if
\begin{center}
$ \mathsf{class}(\mathsf{P})=\{ f\in\mathrm{C}^{\infty}(X,Y)\mid f $ possesses the property $ \mathsf{P} \} $
\end{center}
is a D-open subset of $ \mathrm{C}^{\infty}(X,Y) $, where $ \mathrm{C}^{\infty}(X,Y) $ is equipped with the functional diffeology.
\end{proposition}
\begin{proof}
As the D-topology of a diffeological space is determined via smooth paths by \cite[Theorem 3.7]{CSW2014}, the proof is straightforward.
\end{proof}
\begin{corollary}
It is immediate that for any stable property $ \mathsf{P} $,
\begin{center}
$ \mathsf{dim}(\mathsf{class}(\mathsf{P}))\leq \mathsf{dim}(\mathrm{C}^{\infty}(X,Y)) $,
\end{center}
where $ \mathsf{dim} $ denotes the diffeological dimension \cite[\S 1.78]{PIZ}.
\end{corollary}
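For instance, on $ \mathrm{C}^{\infty}(\mathbb{R},\mathbb{R}) $ the property $ \mathsf{P} $: ``$ f(0)>0 $'' is stable: the evaluation map $ \mathrm{ev}_0:\mathrm{C}^{\infty}(\mathbb{R},\mathbb{R})\rightarrow\mathbb{R} $, $ f\mapsto f(0) $, is smooth with respect to the functional diffeology and hence D-continuous, so $ \mathsf{class}(\mathsf{P})=\mathrm{ev}_0^{-1}\big((0,\infty)\big) $ is D-open and Proposition \ref{p-3} applies.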
To compare the classical definition of stability with the diffeological one in the class of manifolds, we need the following lemma.
But before that, consider the space $ [0,1] $ equipped with the diffeology of manifolds with boundary (see \cite[\S 4.16]{PIZ} and \cite{GI}).
The D-topology of $ [0,1] $ is the subspace topology inherited from $ \mathbb{R} $.
\begin{lemma}\label{lem-5}
Let $ X $ be a diffeological space.
A subset $ A $ of $ X $ is D-open if and only if for every $ h\in \mathrm{C}^{\infty}([0,1],X) $
with $ h(0)\in A $, there exists a number $ \epsilon>0 $ such that $ h(t)\in A $, for all $ t\in[0,\epsilon) $.
\end{lemma}
\begin{proof}
The proof is inspired by that of \cite[Theorem 3.7]{CSW2014}:
Since smooth maps are D-continuous, if a subset $ A $ of $ X $ is D-open and
$ h\in \mathrm{C}^{\infty}([0,1],X) $ with $ h(0)\in A $, there exists a number $ \epsilon>0 $ such that $ h(t)\in A $, for all $ t\in[0,\epsilon) $.
Suppose that $ A \subseteq X $ and for every $ h\in \mathrm{C}^{\infty}([0,1],X) $
with $ h(0)\in A $, there exists a number $ \epsilon>0 $ such that $ h(t)\in A $, for all $ t\in[0,\epsilon) $. To see that $ A $ is D-open in $ X $, take any plot $ P:U\rightarrow X $. Let $ r\in P^{-1}(A) $ and let $ \{r_n\} $ be any sequence converging to $ r $ in $ U $; after passing to a subsequence, we may assume that it converges fast to $ r $. By the special curve lemma \cite[p. 18]{KM},
there is a smooth map $ C:\mathbb{R}\rightarrow U $ with
$ C(\frac{1}{n})=r_n $ and $ C(0)=r $. Set $ c=C|_{[0,1]} $. Then $ P\circ c:[0,1]\rightarrow X $ is an element of $ \mathrm{C}^{\infty}([0,1],X) $ with $ P\circ c(0)\in A $.
By hypothesis, there exists a number $ \epsilon>0 $ such that $ P\circ c(t)\in A $, for all $ t\in[0,\epsilon) $.
Thus, for sufficiently large $ n $ we get
$ P\circ c(\frac{1}{n})\in A $ and consequently,
$ r_n=c(\frac{1}{n})\in P^{-1}(A) $. Since every sequence converging to a point of $ P^{-1}(A) $ therefore admits a subsequence eventually lying in $ P^{-1}(A) $, the set $ P^{-1}(A) \subseteq U $ is open.
\end{proof}
\begin{corollary}\label{cor-2}
Let $ M $ and $ N $ be manifolds.
A property $ \mathsf{P} $ is stable on $ \mathrm{C}^{\infty}(M,N) $ (according to Definition \ref{def-0}) if and only if
\begin{center}
$ \mathsf{class}(\mathsf{P})=\{ f\in\mathrm{C}^{\infty}(M,N)\mid f $ possesses the property $ \mathsf{P} \} $
\end{center}
is a D-open subset of $ \mathrm{C}^{\infty}(M,N) $, where $ \mathrm{C}^{\infty}(M,N) $ is equipped with the functional diffeology.
\end{corollary}
\begin{proposition}
In the class of manifolds, Definitions \ref{def-0} and \ref{def-1} are equivalent, meaning that a property $ \mathsf{P} $ is stable according to Definition \ref{def-0} if and only if it is stable according to Definition \ref{def-1}.
\end{proposition}
\begin{proof}
The proof is a consequence of Proposition \ref{p-3} and Corollary \ref{cor-2}.
\end{proof}
\subsection{Stability theorem}
By Corollary \ref{cor-2}, the classical stability theorem of manifolds (see, e.g., \cite[p. 35]{GP}) can be rephrased with respect to the D-topology in the following way.
\begin{theorem}\label{the-stb}(Stability theorem of manifolds).
The following classes of smooth maps from a compact manifold $ K $ to a manifold $ M $ are D-open subsets of $ \mathrm{C}^{\infty}(K,M) $, where $ \mathrm{C}^{\infty}(K,M) $ is equipped with the functional diffeology.
\begin{enumerate}
\item[(i)]
diffeomorphisms,
\item[(ii)]
\'{e}tale maps (i.e., local diffeomorphisms),
\item[(iii)]
submersions,
\item[(iv)]
immersions,
\item[(v)]
embeddings,
\item[(vi)]
maps transversal to any specified closed submanifold $ N\subseteq M $.
\end{enumerate}
\end{theorem}
\begin{remark}
Besides the D-topology, there are other well-known topologies on mapping spaces of manifolds such as the compact-open
topology, the weak topology, the strong topology, and the Whitney topology.
In \cite[Theorem 43.1]{KM} or \cite[Section 5]{Mic}, the stability theorem is shown with respect to the Whitney topology.
Furthermore, in \cite[Chapter 2]{Hir}, results similar to the stability theorem are proved with respect to the strong topology.
As mentioned in \cite[Corollary 4.15]{CSW2014}, one can also reach Theorem \ref{the-stb} through the comparisons made in \cite{CSW2014} between the D-topology, the strong topology, and other topologies. Of course, that route would be longer and more complicated than our approach.
\end{remark}
As an application, this restatement of the stability theorem can be helpful in some computations.
\begin{example}
Let $ K $ be a compact manifold.
In light of
\cite[Propositions 3.6]{CW}, also
\cite[p. 72]{Hec} and \cite[Propositions 6.3]{HM-V}, or equivalently \cite[Corollary 4.29]{CW},
one can compute the internal tangent spaces of the following subspaces of $ \mathrm{C}^{\infty}(K,K) $
at $ \mathrm{id}_K $, which
are all isomorphic to the vector space of all smooth vector fields on $ K $:
\begin{enumerate}
\item
diffeomorphisms,
\item
\'{e}tale maps (equivalently, submersions or immersions),
\item
embeddings,
\item
maps transversal to any specified closed submanifold $ N\subseteq K $.
\end{enumerate}
\end{example}
It is natural to suggest such a problem for mapping spaces of diffeological spaces.
\begin{question} (Stability problem).
Let $ X $ and $ Y $ be diffeological spaces. Which of the following classes and under what conditions are D-open in $ \mathrm{C}^{\infty}(X,Y) $?
\begin{enumerate}
\item[$ \bullet $]
diffeomorphisms,
\item[$ \bullet $]
(local) subductions,
\item[$ \bullet $]
(local) inductions,
\item[$ \bullet $]
(diffeological) submersions,
\item[$ \bullet $]
(diffeological) immersions,
\item[$ \bullet $]
(diffeological) \'{e}tale maps,
\item[$ \bullet $]
(diffeological) embeddings,
\item[$ \bullet $]
etc.
\end{enumerate}
(See Section \ref{S2} for terminology).
\end{question}
Although the stability problem remains open in general, thanks to some linear-algebraic tools
and additional topological conditions, we are able to generalize the stability theorem to diffeological \'{e}tale manifolds, which is the main result of this article.
Diffeological \'{e}tale manifolds, introduced by the author in \cite{ARA}, constitute a class of diffeological spaces that includes the usual manifolds and also irrational tori. We
briefly review this kind of diffeological space in Section \ref{S2}.
\begin{theorem}\label{thm-main}
(Stability theorem of diffeological \'{e}tale manifolds).
Suppose that $ \mathcal{K} $ and $ \mathcal{M} $ are diffeological \'{e}tale manifolds such that $ \mathcal{K} $ is D-compact (i.e., compact with respect to the D-topology). Also, $ \mathrm{C}^{\infty}(\mathcal{K},\mathcal{M}) $ is equipped with the functional diffeology.
\begin{enumerate}
\item[$ \textbf{(a)} $]
The following classes of $ \mathrm{C}^{\infty}(\mathcal{K},\mathcal{M}) $ are D-open:
\begin{enumerate}
\item[$ \textbf{(a1)} $]
diffeological submersions,
\item[$ \textbf{(a2)} $]
diffeological immersions,
\item[$ \textbf{(a3)} $]
diffeological \'{e}tale maps,
\item[$ \textbf{(a4)} $]
local subductions (provided that $ \mathcal{M} $ is D-Hausdorff).
\end{enumerate}
\item[$ \textbf{(b)} $]
If $ \mathcal{M} $ is a usual manifold, then
the following classes of $ \mathrm{C}^{\infty}(\mathcal{K},\mathcal{M}) $ are D-open:
\begin{enumerate}
\item[$ \textbf{(b1)} $] injective diffeological immersions,
\item[$ \textbf{(b2)} $] diffeological embeddings,
\item[$ \textbf{(b3)} $] diffeomorphisms.
\end{enumerate}
\item[$ \textbf{(c)} $]
Let $ \mathsf{Inj}^{\infty}(\mathcal{K},\mathcal{M})\subseteq\mathrm{C}^{\infty}(\mathcal{K},\mathcal{M}) $ be the diffeological subspace of injective smooth maps from $ \mathcal{K} $ to $ \mathcal{M} $.
If $ \mathcal{M} $ is D-Hausdorff, then
the following classes of $ \mathsf{Inj}^{\infty}(\mathcal{K},\mathcal{M}) $ are D-open subsets of $ \mathsf{Inj}^{\infty}(\mathcal{K},\mathcal{M}) $.
\begin{enumerate}
\item[$ \textbf{(c1)} $] diffeological embeddings,
\item[$ \textbf{(c2)} $] diffeomorphisms.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{remark}
Notice that
we have to impose requirements guaranteeing that injectivity remains stable in the cases of diffeological embeddings and diffeomorphisms. There are two ways to handle this difficulty: 1) to require, as in item $ \textbf{(b)} $, the restrictive condition that $ \mathcal{M} $ be a usual manifold, so that diffeological immersions are locally injective (see Proposition \ref{p-2}), and follow the classical method of proof; or 2) to impose the injectivity condition directly, as in $ \textbf{(c)} $.
We regard diffeomorphisms simply as a subset in $ \textbf{(b)} $ and $ \textbf{(c)} $, without any refined diffeological structure.
\end{remark}
\section{Diffeological \'{e}tale manifolds}\label{S2}
In this section, we briefly recall the needed definitions and results about diffeological \'{e}tale manifolds (see \cite{ARA} for more details).
\begin{definition}
We call a smooth map $ f:X\rightarrow Y $ between diffeological spaces a \textbf{submersion} if for each $ x_0 $ in $ X $, there exists a smooth local section $ \sigma:O\rightarrow X $ of $ f $
passing through $ x_0 $ defined on a D-open subset $ O\subseteq Y $ such that $ f\circ\sigma(y)=y $ for all $ y\in O $.
A smooth map $ f:X\rightarrow Y $ is said to be a \textbf{diffeological submersion} if
the pullback of $ f $ by every plot in $ Y $ is a submersion.
\end{definition}
\begin{proposition}
If $ f:X\rightarrow M $ is a diffeological submersion into a manifold $ M $, then it is a submersion.
In particular, a map between manifolds is a diffeological submersion if and only if it is a submersion of manifolds.
\end{proposition}
\begin{definition}
A smooth map $ f:X\rightarrow Y $ between diffeological spaces is an \textbf{immersion} if for each $ x_0 $ in $ X $, there exist a D-open neighborhood $ O\subseteq X $ of the point $ x_0 $, a D-open neighborhood $ O'\subseteq Y $ of the set $ f(O) $, and a smooth map $ \varrho:O'\rightarrow X $ such that
$ \varrho\circ f(x)=x $ for all $ x\in O $.
A smooth map $ f:X\rightarrow Y $ is a \textbf{diffeological immersion} if
for any plot $ P:U\rightarrow Y $ in $ Y $,
for each $ (r_0,x_0) $ in $ P^*X $, there exist a D-open neighborhood $ O $ of $ (r_0,x_0) $ in $ P^*X $, an open neighborhood $ V\subseteq U $ of $ P^*f(O) $ and a smooth map $ \varrho:V\rightarrow U\times X $ such that $ \varrho\circ P^*f(r,x)=(r,x) $ for all $ (r,x)\in O $.
\begin{displaymath}
\xymatrix{
U\times X\ar@/^0.4cm/[drr]^{\Pr_2}& & \\
& P^*X \ar[ul]\ar[r]^{P_{\#}}\ar[d]^{P^*f} & X\ar[d]^{f} \\
& V \subseteq U \ar@/^0.6cm/@{-->}[uul]^{\varrho} \ar[r]^{P} & Y }
\end{displaymath}
\end{definition}
Any immersion of diffeological spaces is a local induction.
\begin{proposition}\label{p-2}
If $ f:X\rightarrow M $ is a diffeological immersion into a manifold $ M $, then it is an immersion.
In particular, a map between manifolds is a diffeological immersion if and only if it is an immersion of manifolds.
\end{proposition}
\begin{definition}
A map $ f:X\rightarrow Y $ between diffeological spaces is called a \textbf{diffeological embedding} if it is both a diffeological immersion and a D-embedding (i.e., a topological embedding with respect to the D-topology).
\end{definition}
Any diffeological embedding is an induction.
\begin{definition}
A map $ f:X\rightarrow Y $ between diffeological spaces is \textbf{\'{e}tale} if for every $ x $ in $ X $, there are D-open neighborhoods $ O\subseteq X $ and $ V\subseteq Y $ of $ x $ and $ f(x) $, respectively, such that $ f|_O:O\rightarrow V $ is a diffeomorphism.
A smooth map $ f:X\rightarrow Y $ is a \textbf{diffeological \'{e}tale map} if the pullback $ P^{*}f $ by every plot $ P $ in $ Y $ is \'{e}tale.
\end{definition}
\begin{proposition}
If $ f:X\rightarrow M $ is a diffeological \'{e}tale map into a manifold $ M $, then it is \'{e}tale.
In particular, a map between manifolds is a diffeological \'{e}tale map if and only if it is a local diffeomorphism of manifolds.
\end{proposition}
\begin{remark}
Any diffeological \'{e}tale map and more generally, any diffeological submersion is a D-open map.
\end{remark}
\begin{definition}
A diffeological space $ \mathcal{M} $ is said to be a \textbf{diffeological \'{e}tale $ n $-manifold} if there exists a parametrized cover $ \mathfrak{A} $, called an \textbf{atlas} for $ \mathcal{M} $ consisting of diffeological \'{e}tale maps
from $ n $-domains into $ \mathcal{M} $.
We call the elements of $ \mathfrak{A} $ \textbf{diffeological \'{e}tale charts}.
\end{definition}
In this situation, $ \mathfrak{A} $ is actually a covering generating family for $ \mathcal{M} $ and the diffeological dimension of $ \mathcal{M} $ is equal to $ n $, $ \mathsf{dim}(\mathcal{M})=n $. Moreover, at each point $ x $ of $ \mathcal{M} $, the internal tangent space $ T_{x}\mathcal{M} $ is isomorphic to $ \mathbb{R}^n $.
Therefore, for a diffeological \'{e}tale $ n $-manifold, the diffeological dimension and the dimension of the tangent spaces are the same.
If $ \varphi:U\rightarrow \mathcal{M} $ and $ \psi:V\rightarrow \mathcal{M} $ are two diffeological \'{e}tale charts in $ \mathcal{M} $ with $ \varphi(r)=\psi(s) $, then we can find a unique smooth map $ h:U'\rightarrow V $ defined on an open neighborhood $ U'\subseteq U $ of $ r $ such that $ h(r)=s $ and the following diagram commutes:
\begin{displaymath}
\xymatrix{
& \mathcal{M} & \\
U' \ar[ur]^{\varphi|_{U'}}\ar[rr]_{h} & & V\ar[ul]_{\psi} \\
}
\end{displaymath}
Although the diffeological \'{e}tale charts $ \varphi $ and $ \psi $ may not be locally injective, $ h $ is indeed an \'{e}tale map and is uniquely determined by $ \varphi $ and $ \psi $. Locally, $ h $ plays the role of a transition map.
\begin{example}\label{exa-1}
Diffeological manifolds are diffeological \'{e}tale manifolds but not conversely.
The irrational torus $ \mathbb{T}_{\alpha}=\mathbb{R}/(\mathbb{Z}+\alpha\mathbb{Z}) $, $ \alpha\notin\mathbb{Q} $, is a D-compact diffeological \'{e}tale $ 1 $-manifold which is not a diffeological manifold.
More generally, $ \mathbb{T}^n_{\Gamma}=\mathbb{R}^n/\Gamma $, where $ \Gamma $ is a discrete subgroup of $ \mathbb{R}^n $, is a diffeological \'{e}tale $ n $-manifold.
\end{example}
\begin{example}
Any open subset of a diffeological \'{e}tale $ n $-manifold itself is a diffeological \'{e}tale $ n $-manifold
in a natural way.
\end{example}
\begin{example}(Product \'{e}tale manifolds).
Suppose $ \mathcal{M}_1,\dots,\mathcal{M}_k $ are diffeological \'{e}tale manifolds of dimensions $ n_1,\dots,n_k $, respectively. The product space $ \mathcal{M}_1\times\cdots\times\mathcal{M}_k $
is a diffeological \'{e}tale manifold of dimension $ n_1+\cdots+n_k $ with
diffeological \'{e}tale charts of the form $ \varphi_1\times\cdots\times\varphi_k $, where $ \varphi_i $ is a diffeological \'{e}tale chart of $ \mathcal{M}_i $, $ i=1,\dots,k $.
\end{example}
Recall that the internal tangent map of a smooth map is the linear map between the internal tangent spaces induced by the universal property (see \cite{CW,Hec,HM-V}).
\begin{theorem}\label{the-2}
Let $ \mathcal{M} $ and $ \mathcal{N} $ be diffeological \'{e}tale manifolds and
$ f:\mathcal{M}\rightarrow \mathcal{N} $ be a smooth map.
\begin{enumerate}
\item[(i)]
$ f $ is a diffeological submersion if and only if the internal tangent map $ df_x:T_x\mathcal{M}\rightarrow T_{f(x)}\mathcal{N} $ is an epimorphism at each point $ x\in\mathcal{M} $.
\item[(ii)]
$ f $ is a diffeological immersion
if and only if the internal tangent map $ df_x:T_x\mathcal{M}\rightarrow T_{f(x)}\mathcal{N} $ is a monomorphism at each point $ x\in\mathcal{M} $.
\item[(iii)]
$ f $ is a diffeological \'{e}tale map if and only if the internal tangent map $ df_x:T_x\mathcal{M}\rightarrow T_{f(x)}\mathcal{N} $ is an isomorphism at each point $ x\in\mathcal{M} $.
\end{enumerate}
\end{theorem}
\begin{definition}
A smooth map $ f:\mathcal{M}\rightarrow \mathcal{N} $ between diffeological \'{e}tale manifolds has \textbf{internal
rank} $ k $ at a point $ x\in \mathcal{M} $ if the rank of the internal tangent map $ df_x:T_x\mathcal{M}\rightarrow T_{f(x)}\mathcal{N} $ is equal to $ k $. Moreover,
$ f $ has \textbf{full rank} at $ x $ if its internal
rank is equal to $ \min\{\mathsf{dim}(\mathcal{M}),\mathsf{dim}(\mathcal{N})\} $.
\end{definition}
\begin{proposition}\label{cor-1}
Let $ f:\mathcal{M}\rightarrow \mathcal{N} $ be a smooth map and $ x_0\in \mathcal{M} $.
If $ f $ has full rank $ k $ at $ x_0 $,
then there is a D-open neighborhood $ O\subseteq \mathcal{M} $ of $ x_0 $ such that
$ f $ has full rank $ k $ on $ O $.
\end{proposition}
\begin{proof}
Take any diffeological \'{e}tale chart $ \varphi:U\rightarrow \mathcal{M} $ with $ \varphi(r_0)=x_0 $ for some $ r_0\in U $.
The composition $ f\circ\varphi $ is a plot in $ \mathcal{N} $. So there are
diffeological \'{e}tale chart $ \psi:V\rightarrow \mathcal{N} $ and a smooth map $ F:U'\rightarrow V $ defined on an open neighborhood of $ r_0\in U'\subseteq U $ such that $ f\circ\varphi|_{U'}=\psi\circ F $.
Then
$ df_{\varphi(r_0)}\circ d\varphi_{r_0}=d\psi_{F(r_0)}\circ dF_{r_0} $.
Since $ d\varphi_{r_0} $ and $d\psi_{F(r_0)} $ are isomorphisms, $ F $ has full rank $ k $ at $ r_0 $.
By \cite[Proposition 4.1]{Lee-SM}, $ r_0 $ has a neighborhood $ U''\subseteq U' $ such that $ F $ has full rank $ k $ on $ U'' $.
Therefore, $ f $ has full rank $ k $ on $ O:=\varphi(U'') $, which is a D-open neighborhood of $ x_0 $.
\end{proof}
\begin{lemma}\label{p-1}
Let $ f:\mathcal{L}\times\mathcal{N}\rightarrow \mathcal{M} $ be a smooth map between diffeological \'{e}tale manifolds.
For each $ x\in\mathcal{L} $, define $ f^{x}:\mathcal{N}\rightarrow \mathcal{M} $ by $ f^{x}(y)=f(x,y) $.
Then
\begin{center}
$ \mathsf{rank}~~d(\Pr_1,f)_{(x,y)}=\mathsf{dim}(\mathcal{L})+\mathsf{rank}~~d\big{(}f^{x}\big{)}_{y} $
\end{center}
for every $ (x,y)\in \mathcal{L}\times\mathcal{N} $,
where $ \Pr_1:\mathcal{L}\times\mathcal{N}\rightarrow \mathcal{L} $ is the projection on the first factor.
\end{lemma}
\section{Proof of the stability theorem for diffeological \'{e}tale manifolds}
We first need some preliminary results.
\begin{lemma}\label{lem-2}
Let $ \mathcal{M}$ and $\mathcal{N} $ be diffeological \'{e}tale manifolds, and $\mathcal{L} $ be a usual manifold. Let $ h:\mathcal{L}\rightarrow\mathrm{C}^{\infty}(\mathcal{N},\mathcal{M}) $ be a smooth map. If $ (x_0,y_0)\in\mathcal{L}\times\mathcal{N} $ and the map $ h(x_0):\mathcal{N}\rightarrow \mathcal{M} $ has full rank $ k $ at $ y_0 $, then
there exist a D-open neighborhood $ O_{x_0}\subseteq\mathcal{L} $ of $ x_0 $ and a D-open neighborhood $ U_{y_0}\subseteq\mathcal{N} $ of $ y_0 $ such that for every $ x\in O_{x_0}$,
$ h(x)|_{U_{y_0}}:U_{y_0}\rightarrow \mathcal{M} $ has full rank $ k $ on $ U_{y_0} $.
\end{lemma}
\begin{proof}
Define
$ \hat{h}:\mathcal{L}\times\mathcal{N}\rightarrow \mathcal{M}$ by $ \hat{h}(x,y)= h(x)(y) $.
By Lemma \ref{p-1}, we have
\begin{center}
$ \mathsf{rank}~~d(\mathrm{Pr}_1,\hat{h})_{(x_0,y_0)}=\mathsf{dim}(\mathcal{L})+\mathsf{rank}~~d\big{(}h(x_0)\big{)}_{y_0}=\mathsf{dim}(\mathcal{L})+k, $
\end{center}
where $ \Pr_1:\mathcal{L}\times\mathcal{N}\rightarrow \mathcal{L} $ is the projection on the first factor.
This means that $ (\mathrm{Pr}_1,\hat{h}):\mathcal{L}\times\mathcal{N}\rightarrow \mathcal{L}\times\mathcal{M} $ has full rank $ \mathsf{dim}(\mathcal{L})+k $ at $ (x_0,y_0) $.
By Proposition \ref{cor-1}, there exist a D-open neighborhood $ O_{x_0}\subseteq\mathcal{L} $ of $ x_0 $ and a
D-open neighborhood $ U_{y_0}\subseteq\mathcal{N} $ of $ y_0 $ such that
$ (\mathrm{Pr}_1,\hat{h}) $ has full rank $ \mathsf{dim}(\mathcal{L})+k $ on $ O_{x_0}\times U_{y_0} $.
Again by Lemma \ref{p-1}, we get
\begin{center}
$ \mathsf{dim}(\mathcal{L})+k=\mathsf{rank}~~d(\mathrm{Pr}_1,\hat{h})_{(x,y)}=\mathsf{dim}(\mathcal{L})+\mathsf{rank}~~d\big{(}h(x)\big{)}_{y} $
\end{center}
or
\begin{center}
$ \mathsf{rank}~~d\big{(}h(x)\big{)}_{y}=k, $
\end{center}
for all $ (x,y)\in O_{x_0}\times U_{y_0} $. Hence for every $ x\in O_{x_0}$,
$ h(x)|_{U_{y_0}}:U_{y_0}\rightarrow \mathcal{M} $ has full rank $ k $ on $ U_{y_0} $.
\end{proof}
\begin{proposition}\label{lem-3}
Suppose that $ \mathcal{L} $ is a usual manifold,
and $\mathcal{M}, \mathcal{K} $ are diffeological \'{e}tale manifolds such that $ \mathcal{K} $ is D-compact. Let $ h:\mathcal{L}\rightarrow\mathrm{C}^{\infty}(\mathcal{K},\mathcal{M}) $ be a smooth map. If $ x_0\in\mathcal{L} $ and the map $ h(x_0):\mathcal{K}\rightarrow \mathcal{M} $ has full rank $ k $ on $ \mathcal{K} $, then there exists a D-open neighborhood $ O\subseteq\mathcal{L} $ of $ x_0 $ such that for every $ x\in O $,
$ h(x):\mathcal{K}\rightarrow \mathcal{M} $ has full rank $ k $ on $ \mathcal{K} $.
\end{proposition}
\begin{proof}
By Lemma \ref{lem-2}, for each $ y\in \mathcal{K} $, there exist a D-open neighborhood $ O_{x_0,y}\subseteq\mathcal{L} $ of $ x_0 $ and a D-open neighborhood $ U_{y}\subseteq\mathcal{K} $ of $ y $ such that for every $ x\in O_{x_0,y}$,
$ h(x)|_{U_{y}}:U_{y}\rightarrow \mathcal{M} $ has full rank $ k $ on $ U_{y} $.
Since the collection $ \{U_y\}_{y\in\mathcal{K}} $ covers $ \mathcal{K} $ and $ \mathcal{K} $ is D-compact, we conclude that there are finitely many points $ y_1,\dots,y_n $ in $ \mathcal{K} $ for which $ \bigcup_{i=1}^nU_{y_i}=\mathcal{K} $.
Set $ O=\bigcap_{i=1}^n O_{x_0,y_i}$, which is a D-open neighborhood of $ x_0 $ in $ \mathcal{L} $.
Therefore,
$ h(x):\mathcal{K}\rightarrow \mathcal{M} $ has full rank $ k $ on $ \mathcal{K} $, for every $ x\in O $.
\end{proof}
\begin{myproof}.
\ref{thm-main} \textbf{(a)}.
In light of Theorem \ref{the-2}, parts $ \textbf{(a1)},\textbf{(a2)} $ and $ \textbf{(a3)} $ are obtained as special cases of Proposition \ref{lem-3}, where $ \mathcal{L}=\mathbb{R} $, $ x_0=0 $, and $ k$ is taken equal to $\mathsf{dim}(\mathcal{M}) $, $ \mathsf{dim}(\mathcal{K}) $, and $ \mathsf{dim}(\mathcal{K})=\mathsf{dim}(\mathcal{M}) $, respectively.
$ \textbf{(a4)} $
Let $ h:\mathbb{R}\rightarrow \mathrm{C}^{\infty}(\mathcal{K},\mathcal{M}) $ be a smooth homotopy such that $ h(0) $ is a local subduction. That is, $ h(0) $ is a surjective diffeological submersion.
By part $ \textbf{(a1)} $, one can find a positive number $ \epsilon >0 $ such that
$ h(t):\mathcal{K}\rightarrow\mathcal{M} $ is a diffeological submersion, for all $ t\in (-\epsilon,\epsilon) $.
In particular, each $ h(t) $ is a D-open map.
By \cite[Lemma 4.50(a)]{Lee-TM}, each $ h(t) $ is a D-closed map, too.
Due to the fact that a diffeological space is locally connected by \cite{PIZ0}, its connected components are both D-open and D-closed.
Thus we conclude that, if $ C\subseteq \mathcal{K} $ is a connected component, so is $ h(t)(C)\subseteq \mathcal{M} $, for all $ t\in (-\epsilon,\epsilon) $.
Now fix $ t\in (-\epsilon,\epsilon) $ and take any $ y\in \mathcal{M} $.
By surjectivity of $ h(0) $, one can find a point $ x\in\mathcal{K} $ with $ h(0)(x)=y $.
But by Cartesian closedness, the map $ \gamma_x:\mathbb{R}\rightarrow\mathcal{M}$ defined by $ \gamma_x(s)=h(st)(x) $ is a smooth path connecting $ h(0)(x)=y $ and $ h(t)(x) $.
Thus,
$ h(0)(x)=y $ and $ h(t)(x) $ belong to the same component. Actually, $ h(t) $ maps the component of $ x $ onto the component of $ y $.
Therefore, $ h(t) $ is surjective and so a local subduction,
for all $ t\in (-\epsilon,\epsilon) $.
\end{myproof}
\\
\\
\begin{myproof}.
\ref{thm-main} \textbf{(b)}.
\textbf{(b1)}
Let $ h:\mathbb{R}\rightarrow \mathrm{C}^{\infty}(\mathcal{K},\mathcal{M}) $ be a smooth homotopy such that $ h(0) $ is an injective diffeological immersion.
By part $ \textbf{(a2)} $, we can find a positive number $ \epsilon >0 $ such that
$ h(t):\mathcal{K}\rightarrow \mathcal{M} $ is a diffeological immersion, for all $ t\in (-\epsilon,\epsilon) $.
Thus, we only need to show that there exists a positive number $ 0<\delta<\epsilon $ such that $ h(t) $ is injective, for all $ t\in (-\delta,\delta) $.
Otherwise, for every positive integer $ n $, there are $ t_n\in\mathbb{R} $ and distinct points $ x_n,y_n\in\mathcal{K} $ such that $ |t_n|<\dfrac{1}{n} $ and $ h(t_n)(x_n)=h(t_n)(y_n) $.
Since $ \mathcal{K} $ is D-compact, after passing to suitable subsequences, we get
$ \displaystyle\lim_{n\rightarrow\infty} x_n= x_0 $ and $ \displaystyle\lim_{n\rightarrow\infty} y_n= y_0 $,
for some $ x_0,y_0\in\mathcal{K} $. Then
\begin{center}
$ h(0)(x_0)=\displaystyle\lim_{n\rightarrow\infty} h(t_n)(x_n)=\displaystyle\lim_{n\rightarrow\infty} h(t_n)(y_n)= h(0)(y_0). $
\end{center}
Injectivity of $ h(0) $ implies that $ x_0=y_0 $.
On the other hand, by Lemma \ref{p-1} and Theorem \ref{the-2}, we observe that
\begin{center}
$ (\Pr_1,\hat{h}):(-\epsilon,\epsilon)\times\mathcal{K}\rightarrow (-\epsilon,\epsilon)\times \mathcal{M},\quad (t,x)\mapsto \big{(}t,h(t)(x)\big{)} $
\end{center}
is a diffeological immersion. Because $ \mathcal{M} $ is a usual manifold, by Proposition \ref{p-2}, $ (\Pr_1,\hat{h}) $ is actually an immersion of diffeological spaces and therefore, locally injective.
In particular, there exists a neighborhood $ (-\epsilon',\epsilon')\times O\subseteq (-\epsilon,\epsilon)\times\mathcal{K} $ of $ (0,x_0) $ such that
$ (\Pr_1,\hat{h})|_{(-\epsilon',\epsilon')\times O} $
is injective. Due to convergence, for some sufficiently large $ m $, we have
$ (t_m,x_m),(t_m,y_m)\in (-\epsilon',\epsilon')\times O $.
Now from the injectivity of $ (\Pr_1,\hat{h})|_{(-\epsilon',\epsilon')\times O} $ and the equality
$ \big{(}t_m,h(t_m)(x_m)\big{)}=\big{(}t_m,h(t_m)(y_m)\big{)} $
we get
$ x_m=y_m $, which contradicts our hypothesis that $ x_m$ and $y_m $ are distinct points.
\textbf{(b2)}
Let $ h:\mathbb{R}\rightarrow \mathrm{C}^{\infty}(\mathcal{K},\mathcal{M}) $ be a smooth homotopy such that $ h(0) $ is a diffeological embedding. So $ h(0) $ is an injective diffeological immersion.
By part $ \textbf{(b1)} $, there exists a positive number $ \epsilon >0 $ such that
$ h(t):\mathcal{K}\rightarrow \mathcal{M} $ is an injective diffeological immersion, for all $ t\in (-\epsilon,\epsilon) $.
Now, in view of \cite[Lemma 4.50(c)]{Lee-TM}, the result is obtained.
\textbf{(b3)}
It is sufficient to consider the class of diffeomorphisms from $ \mathcal{K} $ onto $ \mathcal{M} $ as the intersection of
the class of local subductions from $ \mathcal{K} $ onto $ \mathcal{M} $ with
the class of injective diffeological immersions from $ \mathcal{K} $ to $ \mathcal{M} $.
\end{myproof}
\\
\\
\begin{myproof}.
\ref{thm-main} \textbf{(c)}.
$ \textbf{(c1)} $
By \cite[Lemma 4.50(c)]{Lee-TM}, all elements of $ \mathsf{Inj}^{\infty}(\mathcal{K},\mathcal{M}) $ are D-embeddings.
So the class of diffeological embeddings from $ \mathcal{K} $ to $ \mathcal{M} $ is equal to the intersection of $ \mathsf{Inj}^{\infty}(\mathcal{K},\mathcal{M}) $ with the class of diffeological immersions from $ \mathcal{K} $ to $ \mathcal{M} $.
Because the subspace topology is coarser than the D-topology of the subspace diffeology, the result is achieved by $ \textbf{(a2)} $.
$ \textbf{(c2)} $
Likewise, the class of diffeomorphisms from $ \mathcal{K} $ onto $ \mathcal{M} $ is equal to the intersection of $ \mathsf{Inj}^{\infty}(\mathcal{K},\mathcal{M}) $ with
the class of local subductions from $ \mathcal{K} $ onto $ \mathcal{M} $, which is a D-open subset of $ \mathsf{Inj}^{\infty}(\mathcal{K},\mathcal{M}) $ by $ \textbf{(a4)} $.
\end{myproof}
\end{document}
\begin{document}
\maketitle
\begin{abstract}
Motivated by multi-user optimization problems and non-cooperative Nash games in uncertain regimes, we consider stochastic Cartesian variational inequality problems (SCVI) where the set is given as the Cartesian product of a collection of component sets. First, we consider the case where the number of component sets is large. For solving this type of problem, the classical stochastic approximation methods and their prox generalizations are computationally inefficient, as each iteration becomes costly. To address this challenge, we develop a randomized block stochastic mirror-prox (B-SMP) algorithm, where at each iteration only a randomly selected block coordinate of the solution vector is updated through implementing two consecutive projection steps. Under standard assumptions on the problem and settings of the algorithm, we show that when the mapping is strictly pseudo-monotone, the algorithm generates a sequence of iterates that converges to the solution of the problem \fyR{almost surely.} To derive rate statements, we assume that the maps are strongly \fyR{pseudo-monotone} and obtain \fyR{a non-asymptotic rate of $\mathcal{O}\left(\frac{d}{k}\right)$ in mean-squared error}, where $k$ is the iteration number and $d$ is the number of \fyR{component sets}. Second, we consider large-scale stochastic optimization problems with convex objectives. For this class of problems, we develop a new averaging scheme for the B-SMP algorithm. Unlike the classical averaging stochastic mirror-prox (SMP) method, where a decreasing set of weights is used for the averaging sequence, we consider a different set of weights that are characterized in terms of the stepsizes. We show that by using \fyR{such} weights, the objective \fyR{values} of the averaged sequence converge to \fyR{the} optimal value in the mean sense at the rate \fyR{$\mathcal{O}\left(\frac{\sqrt{d}}{\sqrt{k}}\right)$}. Both of the rate results \fyR{appear} to be new in the context of SMP algorithms. Third, we consider \fyR{SCVIs and develop an SMP algorithm} that employs the new weighted averaging scheme. We show that the expected value of a suitably defined gap function converges to zero at the optimal rate \fyR{$\mathcal{O}\left(\frac{1}{\sqrt{k}}\right)$}, extending the previous rate results of the SMP algorithm.
\end{abstract}
\section{Introduction}\label{sec:introduction}
Variational inequality (VI) problems, first introduced in the 1960s, provide a unifying framework for capturing a wide range of applications arising in operations research, finance, and economics (cf. \cite{facchinei02finite, Rockafellar98, LanAccVI, wang2015, IusemIncrementalVI}). Given a set $X \subset \mathbb{R}^n$ and a mapping $F:X \rightarrow \mathbb{R}^n$, a variational inequality problem is denoted by VI$(X,F)$, where the goal is to find a vector $x^* \in X$ such that \begin{align}\label{def:VI}
\langle F(x^*),x-x^*\rangle\geq 0\qquad \hbox{for all }x \in X.\tag{VI}
\end{align}
In the presence of uncertainty associated with the mapping $F$ or the set $X$, stochastic generalizations of variational inequalities have been developed and \fyR{studied} \cite{Wets12SVI, Xu10,Nem11}. Such situations occur \fyR{in two cases: (i) when }the mapping $F$ is characterized via an expected value of a stochastic mapping. \fyR{In this case, } when the underlying probability distribution is unknown \fyR{or} the number of random variables is large, direct application of deterministic methods becomes challenging; \fyR{(ii) The same applies when the set $X$} is characterized by uncertainty. While the former case finds relevance in stochastic optimization and stochastic Nash games, the latter occurs in areas such as traffic equilibrium problems with uncertain link capacities (cf. Sec. 4 in \cite{Wets12SVI}).
Motivated by multi-user stochastic optimization problems and non-cooperative Nash games, in this paper, we are interested in solving stochastic Cartesian variational
inequality problems. Consider (deterministic) sets $X_i \subset \mathbb{R}^{n_i}$ for $i=1,\ldots,d$. Let the random
vector $\xi$ be defined as $\xi:\Omega
\rightarrow \mathbb{R}^m$ and let $(\Omega,\mathcal{F}, \mathbb{P})$ denote the
associated probability space. A stochastic Cartesian variational
inequality (SCVI) is a problem of the form \eqref{def:VI} such that the set $X$ and mapping $F$ are defined as follows:
\begin{align}\label{def:XFofSCVI}
X\triangleq\prod_{i=1}^d X_i, \quad F(x)\triangleq\EXP{F(x,\xi)},
\end{align}
where $F(x,\xi): X\times \mathbb{R}^m \to \mathbb{R}^n$ denotes the random mapping, the mathematical expectation is taken with respect to the random vector $\xi$, and $n\triangleq\sum_{i=1}^d n_i$.
Here the set $X$ is a Cartesian product of the component sets $X_i$.
This problem can be represented as determining a vector $x^*=({x^*}^1; {x^*}^2;\ldots; {x^*}^d)\in X$ such that for all $i=1,\ldots,d$,
\begin{align}\label{eqn:SCVI}
\langle\EXP{F_i(x^*,\xi)},x^i-{x^*}^i\rangle \geq 0 \qquad \hbox{ for all $x^i \in X_i$,} \tag{SCVI}
\end{align}
where $x^i$ and the component mapping $F_i(x,\xi)$ are such that \[x=(x^1;x^2;\ldots;x^d), \quad\hbox{and} \quad F(x,\xi)=(F_1(x,\xi);\ldots;F_d(x,\xi)).\]
Throughout, we assume that
the expected mappings $\EXP{F_i(x,\xi)}:\mathbb{R}^n\to\mathbb{R}^{n_i}$ are well-defined (i.e., the expectations are finite). Our work is motivated by the following two classes of problems that can both be represented \fyR{by} \eqref{eqn:SCVI}:
\noindent \fyR{\textbf{Stochastic non-cooperative Nash games:}}
Consider a classical non-cooperative Nash game among $d$ players (agents). Each player is associated with a strategy set and a cost function. Let $x^i$ denote the strategy (decision variable) of the $i$th player, where $x^i$ belongs to the set of all possible actions of the $i$th player, denoted by $X_i \subset \mathbb{R}^{n_i}$. Let $f_i((x^i;x^{-i}),\xi)$ denote the random cost function of the $i$th player, which depends on the player's own action $x^i$, the actions of the other players denoted by $x^{-i}$, and a random variable $\xi$ representing the state of the game. The goal of each player is to minimize the expected value of the cost function for any arbitrary strategies of the other players, i.e., $x^{-i}$, by solving the following problem:
\fyR{\begin{align*}
\displaystyle \mbox{minimize} & \qquad {\EXP{f_i((x^i;x^{-i}),\xi)}} \\
\mbox{subject to} & \qquad x^i \in X_i.
\end{align*}}
A Nash equilibrium is a tuple of strategies $x^*=({x^*}^1;{x^*}^2;\ldots;{x^*}^d)$ where no player can obtain a lower cost by deviating from his strategy if the strategies of the other players remain unchanged.
Under the validity of the interchange between the expectation and the
derivative operator, the resulting equilibrium conditions of this stochastic Nash
game are compactly captured \fyR{by VI$(X,F)$}
where
$ X \triangleq \prod_{i=1}^d X_i$ and {$F(x) = (F_1(x);\ldots; F_d(x))$}
with $F_i(x)=\EXP{\nabla_{x^i} f_i(x,\xi)}$.
Such problems arise in communication
networks~\cite{alpcan02game,alpcan03distributed,yin09nash2},
competitive interactions in cognitive radio
networks~\cite{aldo1,scutari10monotone,koshal11single2}, and in power markets~\cite{KShKim11,KShKim12,SIGlynn11}.
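As a small illustration (added here for concreteness, and not taken from the cited references), consider $d=2$ players with scalar strategies and random costs $f_1((x^1;x^2),\xi)=\frac{1}{2}(x^1)^2+\xi x^1x^2$ and $f_2((x^1;x^2),\xi)=\frac{1}{2}(x^2)^2+\xi x^1x^2$. Then
\begin{align*}
F_1(x)=\EXP{\nabla_{x^1} f_1(x,\xi)}=x^1+\EXP{\xi}\,x^2, \qquad F_2(x)=\EXP{\nabla_{x^2} f_2(x,\xi)}=x^2+\EXP{\xi}\,x^1,
\end{align*}
and the equilibrium conditions of this game are exactly of the form \eqref{eqn:SCVI} with two blocks.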
\noindent \fyR{\textbf{Block structured stochastic optimization:}} Motivated by multi-agent
decision-making problems such as rate allocation problems in communication
networks~\cite{Kelly98,Srikant04,ShakSrikant07}, we consider the following block structured stochastic optimization problem:
\begin{align}\label{prob:SOP}
\displaystyle \mbox{minimize} & \qquad f(x)\triangleq\EXP{f(x,\xi)}\tag{SCOP}\\
\mbox{subject to} & \qquad x \in X\triangleq\prod_{i=1}^d X_i,\notag
\end{align}
where $\xi \in \mathbb{R}^m$ is a random variable associated with a probability distribution, the function $f(\cdot,\xi):X\to \mathbb{R}$ is continuous for all $\xi$, and the set $X \subset \mathbb{R}^n$ is the Cartesian product of the sets $X_i \subset \mathbb{R}^{n_i}$, \fyR{with} $n\triangleq\sum_{i=1}^d n_i$. \fyR{Note} that the optimality conditions of \eqref{prob:SOP} can be represented as \eqref{eqn:SCVI}, where $F_i(x)=\EXP{\nabla_{x^i} f(x,\xi)}$.
Our primary interest in this paper lies in solving \eqref{eqn:SCVI} when the number of component sets $X_i$, i.e., $d$, is very large. Computing the solution to this class of problems is challenging mainly due to the presence of \fyR{\textit{uncertainty} and \textit{high dimensionality}} of the solution space. In what follows, we review some of the existing methods in addressing these challenges:
\noindent \textit{Addressing uncertainty in optimization and VI regimes:} Contending with uncertainty in solving variational inequalities has been carried out through the application of Monte-Carlo sampling schemes. Of these, the sample average approximation (SAA) scheme proposes a framework in which the expected value of the stochastic mapping is approximated via the average over a large number of samples (cf. \cite{shap03sampling}, Chapter 6). However, it has been discussed that the SAA approach is computationally inefficient when the sample size is large \cite{nemirovski_robust_2009}. A counterpart to SAA schemes is the stochastic approximation (SA) methods and their generalizations, where at each iteration a sample (or a small batch) of the stochastic mapping is used to update the solution iterate. It was first in the 1950s that Robbins and Monro \cite{robbins51sa} developed the SA method to address stochastic root-finding problems. Due to their computational efficiency in addressing problems with a large number of samples and also their adaptation to online settings, SA methods have been very successful in solving optimization and equilibrium problems with uncertainties. \fyR{Jiang and Xu} \cite{xu2008} appear amongst the first who applied SA methods to solve stochastic variational inequalities with smooth and strongly monotone mappings. An extension of that work was studied by Koshal et al. \cite{koshal12regularized}, addressing merely monotone stochastic VIs. More recently, we developed a regularized smoothing SA method to address stochastic VIs with non-Lipschitzian and merely monotone mappings \cite{Farzad3}. In recent years, prox generalizations of SA methods were developed \cite{Nem04,nemirovski_robust_2009,Nem11,Lan-VI-13} for solving smooth and nonsmooth stochastic convex optimization problems and variational inequalities. In \cite{nemirovski_robust_2009}, a stochastic mirror descent (SMD) method is proposed to solve stochastic optimization problems with convex objectives. The SMD method generalizes the SA method in that the Bregman distance function is employed in vector spaces equipped with non-Euclidean norms. The convergence properties and rate analysis of this class of solution methods rely on the monotonicity of the gradient mapping. To address variational inequalities requiring weaker assumptions, i.e., pseudo-monotone mappings, Korpelevich \cite{korp76} developed the extragradient method in the 1970s. Since the extragradient method requires two projections per iteration, its computational complexity is twice that of its classical gradient counterpart. However, it benefits significantly from the extra step by addressing VIs with weaker assumptions, i.e., VIs with pseudo-monotone mappings. Dang et al. \cite{Lan-VI-13} developed non-Euclidean extragradient methods addressing generalized monotone VIs and derived convergence rate statements under smoothness properties of the problem and the distance generating function. In \cite{Nem11}, Juditsky et al. developed a stochastic mirror-prox (SMP) method to solve stochastic VIs with monotone operators. Loosely speaking, the SMP method is the prox generalization of the extragradient scheme to stochastic settings. It is shown that under an averaging scheme, the SMP method generates iterates that converge to a weak solution of the stochastic VI.
Kannan and Shanbhag (see \cite{Aswin17arxiv,Aswin-ACC14}) studied almost sure convergence of extragradient algorithms in solving stochastic VIs with pseudo-monotone mappings and derived optimal rate statements under a strong pseudo-monotone condition. \fyR{ Recently, Iusem et al. \cite{IusemExtra2017} developed an extragradient method with variance reduction for solving stochastic variational inequalities requiring only pseudo-monotonicity.} Motivated by the recent developments in extragradient methods and their generalizations, in this paper, we consider SMP methods.
\noindent \textit{Addressing high dimensionality in optimization and VI regimes:} When the dimensionality of the solution space is huge, e.g., $n=\sum_{i=1}^dn_i$ exceeds $10^{12}$, the direct implementation of the aforementioned solution methods becomes problematic. Specifically, the SMD and SMP algorithms both require performing arithmetic operations of order $n$ per iteration. Moreover, the projection step (i.e., minimizing the prox function) in both methods is another source of inefficiency for huge-size problems. For problems with Cartesian solution spaces, the computational effort for projection can be reduced by decomposing the projection into $d$ projections corresponding to the set components $X_i$ at each iteration \cite{Farzad2}. However, this approach still requires updating and storing $d$ component solution iterates of sizes $n_i$ for all $i=1,\ldots,d$ per iteration. To address this issue and improve the efficiency of the underlying solution method, coordinate descent (CD) and, more generally, block coordinate descent (BCD) methods have been developed and studied in recent decades. Ortega and Rheinboldt \cite{ortega70} studied the concept of such approaches as ``univariate relaxation". The convergence properties of CD methods were studied in the 1980s and 1990s by researchers such as Tseng \cite{TsengCD1,TsengCD2,TsengCD3} and Bertsekas and Tsitsiklis \cite{BertTsitBookPar}, and more recently in \cite{Wotao13,Richtarik14, Richtarik15} (see \cite{wrightCD15} for a \fyR{detailed} review on CD methods). Nesterov \cite{nest10} appears amongst the first \fyR{to provide} a comprehensive iteration complexity analysis for randomized BCD methods for \fyR{solving optimization problems with smooth and convex objective functions.} In a recent work, Dang and Lan \cite{LanSBMD} developed a stochastic block coordinate mirror descent (SBMD) algorithm to address smooth and nonsmooth stochastic optimization problems with Cartesian constraint sets. In each iteration of the SBMD scheme, only a randomly selected block of \fyR{iterates} is updated to improve the iteration complexity substantially.
\textit{Summary and main contributions}: \fyR{We develop two variants of the SMP algorithm.} In the first part of the paper, motivated by recent advancements in BCD methods, we develop a randomized block stochastic mirror-prox (B-SMP) algorithm to address SCVIs when the number of component sets is huge. At each iteration of the B-SMP scheme, first a block is selected randomly. Then, the selected block of the solution iterate is updated through performing two successive projection steps on the corresponding component set. We provide the convergence and rate analysis of the B-SMP algorithm under both non-averaging and averaging schemes, as explained in detail in the following discussion. In the second part of the paper, to address SCVIs with a small or medium number of component sets, we develop an SMP method in which, at each iteration, the solution iterate is updated at all the blocks through two projections on each component set. For this class of algorithms, we employ a new averaging scheme and derive an optimal rate statement. In what follows, we summarize the main contributions of our work:
\noindent \textit{(i) Addressing large-scale VIs}: While both SMD and SMP algorithms have been employed in the literature to address VIs \cite{Nem04, Nem11, Farzad3}, our algorithm appears to be the first that is capable of computing the solution to a large-scale Cartesian variational inequality in both deterministic and stochastic regimes. To this end, in contrast with the SBMD algorithm in \cite{LanSBMD}, we employ a randomized block scheme for the mirror-prox method. It is worth noting that the analysis of the B-SMP method \fyR{cannot be directly extended} mainly because the same randomly selected block variable is used in both projection steps, resulting in a dependency between the uncertainty involved in the two projection steps. \\
\noindent \textit{(ii) Convergence and rate analysis}: In contrast with earlier work on stochastic mirror-prox methods \cite{Nem11}, where the convergence and rate analysis is performed under a monotonicity assumption for stochastic VIs using averaging schemes, we first consider a non-averaging random variant of the mirror-prox method. We study the properties of the iterates generated by the B-SMP algorithm under a pseudo-monotonicity assumption (see Lemma \ref{lemma:pseudo}). Next, under a strict pseudo-monotonicity assumption, we prove convergence of the generated iterates to the solution of \eqref{eqn:SCVI} in an almost sure sense, extending the results of \cite{Aswin-ACC14} to the block coordinate setting. When considering SCVIs with strongly pseudo-monotone mappings, we obtain a bound of the order $\mathcal{O}\left(\frac{d}{k}\right)$ on the mean squared error, where $k$ is the iteration number and $d$ is the number of blocks, extending results in \cite{Aswin-ACC14} to the block coordinate regime. This result differs from the rate analysis of \cite{LanSBMD} for the SBMD algorithm for stochastic optimization problems with strongly convex objectives in three aspects: (a) While the SBMD method addresses the optimization regime, our rate result applies to the broader class of problems, i.e., SCVIs; (b) The assumption of strong pseudo-monotonicity in our work is weaker than the strong monotonicity of the gradient mapping in \cite{LanSBMD}; (c) \fyR{In contrast with the SBMD scheme, where an averaging scheme with a constant stepsize rule is employed for addressing problem (SCOP) {(cf. Corollary 2.2. in \cite{LanSBMD})}, here we use a non-averaging randomized block coordinate scheme for problem (SCVI). Note that averaging schemes with constant steplength necessarily produce sequences whose limit points are approximate solutions, while our scheme produces sequences that are asymptotically a.s. convergent. While in \cite{LanSBMD} the convergence rate is derived both under convexity and strong convexity assumptions of the objective function, here we are able to derive the convergence rate under the weaker assumption of strong pseudo-monotonicity of the mapping (see Proposition \ref{prop:ratestrongpse}).} \\
\noindent \textit{(iii) Developing optimal weighted averaging schemes}: In the second part of our analysis, to derive rate statements for the B-SMP method under a monotonicity assumption, we restrict our attention to the class of stochastic convex optimization problems with a Cartesian product of feasible sets. We develop a new averaging scheme for the B-SMP algorithm, \fyR{which updates a weighted average of the form:}
\begin{align*}
\bar x_k\triangleq\sum_{t=0}^k\alpha_t x_t, \qquad \hbox{for all } k\geq 0,
\end{align*}
where $x_0,\ldots, x_k$ are the generated iterates of the B-SMP method and $\alpha_t$ is the weight assigned to the iterate $x_t$. In the literature, averaging schemes have been employed in addressing stochastic optimization problems and SVIs to derive error bounds. In the 1990s, Polyak and Juditsky \cite{Polyak92} employed averaging in stochastic approximation schemes and studied its convergence properties for a variety of problems. Later, Nemirovski et al. developed averaging schemes for the stochastic mirror descent method \cite{nemirovski_robust_2009} and the stochastic mirror-prox algorithm \cite{Nem11} and derived the convergence \fyR{rate $\mathcal{O}\left(\frac{\ln(k)}{\sqrt{k}}\right)$} when $\alpha_t$ is chosen to be $\frac{\gamma_t}{\sum_{t=0}^k\gamma_t}$ and $\gamma_t$ is the stepsize sequence. They also showed that the optimal convergence \fyR{rate $\mathcal{O}\left(\frac{1}{\sqrt{k}}\right)$} can be achieved if a window-based averaging scheme is employed. More recently, Nedi\'c and Lee \cite{Nedic14} developed a new averaging scheme for the SMD algorithm in which $\alpha_t$ is given as
$\frac{\gamma_t^{-1}}{\sum_{t=0}^k\gamma_t^{-1}}$. They showed that under this different set of weights, the SMD algorithm admits the optimal rate of convergence without requiring a window-based averaging scheme. Motivated by this contribution and our previous work on the SA method \cite{Farzad3} and the stochastic extragradient scheme \cite{Farzad-CDC14}, in this work we extend this result by considering $\alpha_t$ as $\frac{\gamma_t^{r}}{\sum_{t=0}^k\gamma_t^{r}}$, where $r$ is an arbitrary scalar. We show that for any arbitrary fixed $r<1$, the B-SMP algorithm generates iterates such that the objective function value at $\bar x_k$ converges to its optimal value in mean, admitting the convergence \fyR{rate $\mathcal{O}\left(\frac{\sqrt{d}}{\sqrt{k}}\right)$}. We also analyze the constant factor in terms of the problem and algorithm parameters and the parameter $r$. \fyR{We note that complexity analysis for averaging schemes in combination with constant stepsizes has been done in \cite{LanSBMD}. Unlike the work in \cite{LanSBMD}, our averaging schemes are employed with a diminishing stepsize.}
In the last part of the paper, we consider SCVIs \fyR{with monotone mappings.} We employ the new averaging scheme in the classical SMP algorithm and show that the optimal convergence rate of the order $\mathcal{O}\left(\frac{1}{\sqrt{k}}\right)$ can be achieved without requiring window-based averaging, extending the results of \cite{Nem11}.
The rest of the paper is organized as follows. In Section \ref{sec:pre}, we \fyR{discuss} the prox functions and some of their main properties. We also cite a few results that will be used later in the analysis of the paper. Section \ref{sec:alg} contains the outline of the B-SMP algorithm and the main assumptions. In Section \ref{sec:conv}, we present the convergence theory for the developed algorithm in an almost sure sense, and we present the rate analysis of the proposed scheme for SCVIs and SCOPs in Section \ref{sec:rate}. In Section \ref{sec:SMP}, we present the SMP algorithm for SCVIs with optimal averaging schemes. Lastly, we \fyR{provide some} concluding remarks in Section \ref{sec:conRem}.
\fyR{\textbf{Notation:}}\label{sec:not}
For any vector $x \in \mathbb{R}^n$, we let $x^i \in \mathbb{R}^{n_i}$ denote the $i$th block coordinate of $x$, so that $x=(x^1;x^2;\ldots;x^d)$. We use the subscript $i$ to denote the $i$th block of a mapping in $\mathbb{R}^n$, e.g., for $F:\mathbb{R}^n \to \mathbb{R}^n$, we have $F(x)=(F_1(x);F_2(x);\ldots;F_d(x))$. For any $i = 1,\ldots,d$ and $x\in \mathbb{R}^n$, we let $x^{-i}$ denote the collection of blocks $x^j$ for $j \neq i$, such that $x=(x^i;x^{-i})$. For any $i=1,\ldots,d$, $\|\cdot\|_i$ denotes a general norm on $\mathbb{R}^{n_i}$, and its dual norm is defined by $\|x\|_{*i}=\sup_{\|y\|_i\leq 1}\langle y,x\rangle$ for $x,y \in \mathbb{R}^{n_i}$. For any $u,v \in \mathbb{R}^{n_i}$, $\langle u,v\rangle$ denotes the inner product of the vectors $u$ and $v$ in $\mathbb{R}^{n_i}$. When $u,v \in \mathbb{R}^n$, we let $\langle u,v\rangle\triangleq\sum_{i=1}^d \langle u^i,v^i\rangle$. We define the norm $\|\cdot\|$ by $\|x\|^2\triangleq\sum_{i=1}^d\|x^i\|_i^2$ for any $x \in \mathbb{R}^n$, and denote its dual norm by $\|\cdot\|_*$. We write $\textbf{I}_n$ to denote the identity matrix in $\mathbb{R}^{n\times n}$. We use $\EXP{z}$ to denote the expectation of a random variable~$z$ and $Prob(A)$ to denote the probability of an event $A$. We let SOL$(X,F)$ denote the solution set of problem \eqref{eqn:SCVI}.
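For instance (a special case recorded only for orientation), if every block norm $\|\cdot\|_i$ is the Euclidean norm on $\mathbb{R}^{n_i}$, then $\|x\|^2=\sum_{i=1}^d\|x^i\|_2^2$ is the squared Euclidean norm on $\mathbb{R}^n$ and the dual norm $\|\cdot\|_*$ coincides with $\|\cdot\|$.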
\section{Preliminaries}\label{sec:pre}
In this section, we \fyR{provide} the background for the prox mappings and review some of their main properties. More \fyR{details} on prox mappings can be found in \cite{Nemir2000}.
Let $i \in \{1,\ldots,d\}$. A function $\omega_i:X_i \to \mathbb{R}$ is called a distance generating function with modulus $\mu_{\omega_i}>0$ with respect to the norm $\|\cdot\|_i$, if $\omega_i$ is a continuously differentiable and strongly convex function with parameter $\mu_{\omega_i}$ with respect to $\|\cdot\|_i$, i.e.,
\begin{align}
\omega_i(y) \geq \omega_i(x) + \langle \nabla \omega_i(x), y-x\rangle +\frac{\mu_{\omega_i}}{2}\|x-y\|_i^2 \quad \hbox{for all } x,y \in X_i.
\end{align}
Throughout the paper, we assume that the function $\omega_i$ has Lipschitz continuous gradients with parameter $L_{\omega_i}$, i.e.,
\begin{align}
\omega_i(y)\leq \omega_i(x)+\langle \nabla \omega_i(x),y-x \rangle +\frac{L_{\omega_i}}{2}\|x-y\|_i^2 \quad \hbox{for all } x,y \in X_i.
\end{align}
The Bregman distance function (also called the prox function) $D_i: X_i\times X_i \to \mathbb{R}$ associated with $\omega_i$ is defined as follows:
\begin{align}\label{def:Di}
D_i(x,y) =\omega_i(y)-\omega_i(x)-\langle \nabla \omega_i(x),y-x \rangle \quad \hbox{for all } x,y \in X_i.
\end{align}
Next we define the prox mapping $P_i:X_i\times\mathbb{R}^{n_i}\to X_i$ as follows:
\fyR{\begin{align}
P_i(x,y)=\mathop{\rm argmin}_{z\in X_i}\{\langle y, z\rangle+D_i(x,z)\}, \quad \hbox{for all }x \in X_i, \ y \in \mathbb{R}^{n_i}.
\end{align}}
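Two standard instances, included here only as illustrations, may help fix ideas. For the Euclidean choice $\omega_i(x)=\frac{1}{2}\|x\|_2^2$ one has $D_i(x,z)=\frac{1}{2}\|z-x\|_2^2$ and
\begin{align*}
P_i(x,y)=\mathop{\rm argmin}_{z\in X_i}\left\{\langle y, z\rangle+\tfrac{1}{2}\|z-x\|_2^2\right\}=\Pi_{X_i}(x-y),
\end{align*}
the Euclidean projection of $x-y$ onto $X_i$. For the entropy function $\omega_i(x)=\sum_{j}x_j\ln x_j$ on the probability simplex, $D_i$ is the Kullback--Leibler divergence and $P_i(x,y)$ is the multiplicative (exponentiated-gradient) update with $P_i(x,y)_j \propto x_j e^{-y_j}$; in this case the Lipschitz-gradient requirement above holds only on the relative interior of the simplex.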
The next lemma provides some of the main properties of Bregman functions and their associated prox mappings. The proofs of these properties can be found in earlier work by Nemirovski et al., e.g., see Chapter 5 in \cite{Nemir2000}, and also \cite{nemirovski_robust_2009}.
\begin{lemma}[Properties of prox mappings]\label{lemma:proxprop}Let $i \in \{1,\ldots,d\}$ be given. The following hold:
\begin{itemize}
\item [(a)] The Bregman distance function $D_i$ satisfies the following inequalities:
\begin{align}
\frac{\mu_{\omega_i}}{2} \|x-y\|_i^2\leq D_i(x,y) \leq \frac{L_{\omega_i}}{2}\|x-y\|_i^2 \quad \hbox{for all } x,y \in X_i.
\end{align}
\item [(b)] For any $x,z \in X_i$ and $y \in \mathbb{R}^{n_i}$, we have
\[D_i(P_i(x,y),z) \leq D_i(x,z)+\langle y,z-P_i(x,y)\rangle-D_i(x,P_i(x,y)).\]
\item [(c)] For any $x,z \in X_i$ and $y \in \mathbb{R}^{n_i}$, we have
\[D_i(P_i(x,y),z) \leq D_i(x,z)+\langle y,z-x\rangle+\frac{{\|y\|_{*i}}^2}{2\mu_{\omega_i}}.\]
\item [(d)] For any $x \in X_i$, we have $x=P_i(x,0)$.
\item [(e)] The mapping $P_i(x,y)$ is Lipschitz continuous in $y$ with modulus 1, i.e., \[\|P_i(x,y) -P_i(x,z)\|_i \leq \|y-z\|_{*i} \qquad\hbox{for all }x \in X_i, \ y,z \in \mathbb{R}^{n_i}.\]
\item [(f)] For any $x,y,z \in X_i$, we have $D_i(x,z)=D_i(x,y)+D_i(y,z)+\langle \nabla \omega_i(y)-\nabla \omega_i(x),z-y\rangle.$
\item [(g)] For any $x,z \in\mathbb{R}^{n_i}$, we have $\nabla_z D_i(x,z) = \nabla \omega_i(z)-\nabla \omega_i(x)$.
\end{itemize}
\end{lemma}
\fyR{In the analysis of the algorithms, we make use of} the following result, which can be found in~\cite{Polyak87} on page 50.
\begin{lemma}\label{lemma:supermartingale}
Let $v_k,$ $u_k,$ $\alpha_k,$ and $\beta_k$ be
non-negative random variables, and let the
following relations hold almost surely:
\[\EXP{v_{k+1}\mid {\tilde{\mathcal{F}}_k}}
\le (1+\alpha_k)v_k - u_k + \beta_k \quad\hbox{ for all } k,\qquad
\sum_{k=0}^\infty \alpha_k < \infty,\qquad
\sum_{k=0}^\infty \beta_k < \infty,\]
where $\tilde{\mathcal{F}}_k$ denotes the collection $v_0,\ldots,v_k$, $u_0,\ldots,u_k$,
$\alpha_0,\ldots,\alpha_k$, $\beta_0,\ldots,\beta_k$.
Then, almost surely we have
\[\lim_{k\to\infty}v_k = v, \qquad \sum_{k=0}^\infty u_k < \infty,\]
where $v \geq 0$ is some random variable.
\end{lemma}
Next, we recall the following definitions (cf. \cite{facchinei02finite}) that will be referred to in our analysis.
\begin{definition}[Types of monotonicity]\label{def:map} Consider a mapping $F:X\to \mathbb{R}^n$.
\begin{itemize}
\item [(a)] $F$ is called a monotone mapping if for any $x,y \in X$, we have $\langle F(x)-F(y),x-y \rangle \geq 0$.
\item [(b)] $F$ is called a strictly monotone mapping if for any $x,y \in X$ with $x\neq y$, we have $\langle F(x)-F(y),x-y \rangle > 0$.
\item [(c)] $F$ is called a $\mu$-strongly monotone mapping if there is $\mu>0$ such that for any $x,y \in X$, we have \[\langle F(x)-F(y),x-y \rangle \geq \mu\|x-y\|^2.\]
\item [(d)] $F$ is called a pseudo-monotone mapping if for any \fyR{$x,y \in X$}, $\langle F(y),x-y \rangle \geq 0$ implies that \[\langle F(x),x-y \rangle \geq 0.\]
\item [(e)] $F$ is called a strictly pseudo-monotone mapping if for \fyR{any $x,y \in X$ and $x\neq y$}, $\langle F(y),x-y \rangle \geq 0$ implies that \[\langle F(x),x-y \rangle > 0.\]
\item [(f)] $F$ is called a $\mu$-strongly pseudo-monotone mapping if there is $\mu>0$ such that for any $x,y \in X$, $\langle F(y),x-y \rangle \geq 0$ implies that \[\langle F(x),x-y \rangle \geq \mu\|x-y\|^2.\]
\end{itemize}
\end{definition}
It is worth noting that $(a) \Rightarrow (d)$, $(b) \Rightarrow (e)$, and $(c) \Rightarrow (f)$.
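The reverse implications may fail; as a one-dimensional illustration (added here and not part of the cited definitions), the map $F(x)=e^{-x}$ on $X=[0,1]$ is strictly pseudo-monotone, since $F>0$ and $\langle F(y),x-y \rangle \geq 0$ with $x\neq y$ forces $x>y$ and hence $\langle F(x),x-y \rangle>0$, yet $F$ is not monotone because it is strictly decreasing.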
\section{Randomized block stochastic mirror-prox algorithm: assumptions and algorithm outline}\label{sec:alg}
In this section, we outline the randomized block stochastic mirror-prox (B-SMP) algorithm and state the main assumptions. Algorithm \ref{algorithm:IRLSA-impl} shows the steps of the B-SMP method. At iteration $k$, first a \fyR{realization of} the random variable $i_k$ is generated from the probability distribution $P_b$, where $Prob(i_k=i)=p_i$. Throughout the analysis, we assume $p_i>0$ for all $i=1,\ldots,d$ and $\sum_{i=1}^dp_i=1$. Step 4 in Algorithm \ref{algorithm:IRLSA-impl} provides the update rules. First, using a stochastic oracle, a \fyR{realization} of the stochastic mapping $F(\cdot,\xi)$ is generated at $x_k$, denoted by $F(x_k,\tilde \xi_k)$. Next, the $i_k$th block of the vector $y_{k}$ is updated using the projection given by \eqref{BSMPproj1}. Here $\gamma_k>0$ is the stepsize sequence. The stochastic oracle is then called at the resulting vector $y_{k+1}$ and $F(y_{k+1},\xi_k)$ is generated. Using the block $F_{i_k}(y_{k+1},\xi_k)$, the stepsize $\gamma_k$, and the value of $x_k^{i_k}$, the $i_k$th block of $x_k$ is updated using the rule \eqref{BSMPproj2}. We note that the B-SMP method extends the SMP algorithm proposed in \cite{Nem11} to a block coordinate variant. This reduces the computational complexity of each iteration substantially by performing the two projection steps only on a component set, versus two projections on the entire set $X \subset \mathbb{R}^n$. The B-SMP algorithm differs from the SBMD method developed in \cite{LanSBMD} mainly in two aspects: (i) While in step 4 we perform two projections, similar to the extragradient method, in SBMD a single projection is performed; (ii) Unlike the SBMD algorithm, here we do not use averaging. Later, we also study an averaging variant of the B-SMP method for stochastic convex optimization problems (see Proposition \ref{prop:optaveSCO}).
\begin{algorithm}
\caption{Randomized block stochastic mirror-prox (B-SMP) algorithm}
\label{algorithm:IRLSA-impl}
\begin{algorithmic}[1]
\STATE \textbf{initialization:} Set $k=0$, a random initial point $x_0=y_0\in X$, a stepsize $\gamma_0>0$, a discrete probability distribution $P_b$ with probabilities $p_i$ for $i=1,\ldots,d$ and a scalar $K$;
\FOR {$k=0,1,\ldots,K-1$}
\STATE Generate a \fyR{realization of} random variable $i_k$ using the probability distribution $P_b$ such that $Prob(i_k=i)=p_i$; \fyR{Generate $\xi_k$ and $\tilde \xi_k$ as realizations of the random vector $\xi$;}
\STATE Update the blocks $x_k^{i_k}$ and $y_k^{i_k}$:
\begin{align}
y_{k+1}^{i}&:= \left\{\begin{array}{ll}P_{i_k}\left(x_{k}^{i_k},\gamma_k F_{i_k}(x_{k},\tilde \xi_k)\right)
&\hbox{if } i=i_k,\cr \hbox{} &\hbox{}\cr
x_k^{i}& \hbox{if } i\neq i_k,\end{array}\right.\label{BSMPproj1}
\\
x_{k+1}^{i}&:=\left\{\begin{array}{ll}P_{i_k}\left(x_{k}^{i_k},\gamma_k F_{i_k}(y_{k+1},\xi_k)\right)
&\hbox{if } i=i_k,\cr \hbox{} &\hbox{}\cr
x_k^{i}& \hbox{if } i\neq i_k.\end{array}\right.\label{BSMPproj2}
\end{align}
\ENDFOR
\STATE return $x_{K};$
\end{algorithmic}
\end{algorithm}
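To make the update rules \eqref{BSMPproj1}--\eqref{BSMPproj2} concrete, the following Python sketch (not part of the original exposition) implements one B-SMP iteration under the simplifying assumption of Euclidean distance generating functions, so that each prox step reduces to a projection; the per-block projections \texttt{project[i]}, the block index ranges \texttt{blocks}, and the stochastic oracle \texttt{F\_sample} are user-supplied callables and are purely illustrative.
\begin{verbatim}
import numpy as np

def bsmp_step(x, blocks, F_sample, project, gamma, p, rng):
    # One B-SMP iteration with Euclidean prox: P_i(x, g) = Proj_{X_i}(x - g).
    i = rng.choice(len(blocks), p=p)      # sample block i_k from P_b
    lo, hi = blocks[i]                    # index range of block i_k in x
    Fx = F_sample(x)                      # stochastic sample at x_k
    y = x.copy()                          # extrapolation point y_{k+1}
    y[lo:hi] = project[i](x[lo:hi] - gamma * Fx[lo:hi])
    Fy = F_sample(y)                      # stochastic sample at y_{k+1}
    x_new = x.copy()                      # main update x_{k+1}
    x_new[lo:hi] = project[i](x[lo:hi] - gamma * Fy[lo:hi])
    return x_new
\end{verbatim}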
Throughout, we let $\mathcal{F}_{k}$ denote the history of the method up to the $k$th iteration, i.e., \[\mathcal{F}_{k}\triangleq\{i_0,\tilde \xi_0,\xi_0,\ldots,i_{k},\tilde \xi_{k},\xi_{k}\}.\]
Recall $F(x)$ denotes the expected value of the stochastic mapping $F(x,\xi)$ where $\xi \in \mathbb{R}^m$, i.e., $F(x)=\EXP{F(x,\xi)}$. We define the stochastic errors $\tilde w_k$ and $w_k$ as follows:
\begin{align}\label{def:wk}
\tilde w_k\triangleq F(x_k,\tilde \xi_k)-F(x_k), \qquad w_k\triangleq F(y_{k+1},\xi_k)-F(y_{k+1}).
\end{align}
In the following, we state the main assumptions that will be considered in our analysis in the next sections.
\begin{assumption}[Problem]\label{assump:main}
\begin{itemize}
\item [(a)] For any $i \in \{1,\ldots,d\}$, the set $X_i$ is nonempty, closed, convex, and bounded, i.e., there exists $B_i \in \mathbb{R}$ such that $\|x\|_{i} \leq B_i$ for all $x\in X_i$.
\fyR{\item [(b)] For any $i$, the mapping $F_i$ is block-Lipschitz continuous with parameter $L_i$, i.e., there exists $L_i \in \mathbb{R}$ such that
\[\|F_i(x^i;x^{-i})-F_i(y^i;x^{-i})\|_{*i}\leq L_i\|x^i-y^i\|_i, \quad \hbox{for all } x^i, y^i \in X_i \hbox{ and }x^{-i} \in \prod_{j=1,j\neq i}^d X_j.\]}
\end{itemize}
\end{assumption}
\fyR{\begin{remark}\label{rem1}
Note that the block-Lipschitzian property in Assumption \ref{assump:main}(b) implies continuity of the mapping $F$. Compactness and convexity of the set $X$, along with continuity of the mapping $F$, imply that SOL$(X,F)$ is nonempty and compact (cf. Corollary 2.2.5 in~\cite{facchinei02finite}). Also, compactness of the sets $X_i$ in Assumption \ref{assump:main}(a) and continuity of the mapping $F$ in Assumption \ref{assump:main}(b) imply boundedness of the mapping $F$. Therefore, for any $i$, the mapping $F_i$ is bounded on $X$, i.e., there exists $C_i \in \mathbb{R}_+$ such that $\|F_i(x)\|_{*i}\leq C_i \hbox{ for all } x \in X.$ Throughout the paper, $C_i$ denotes the bound on $F_i(x)$ with respect to the norm $\|\cdot\|_{*i}$.
\end{remark}
}
\begin{assumption}{\bf{(Random variables)}}\label{assump:randvar}
\begin{itemize}
\item [(a)] The random variables $\xi_k$, $\tilde \xi_k$, and $i_k$ are all i.i.d. and independent of each other.
\item [(b)] \fyR{For all $k\geq 0$, the stochastic mappings $F(\cdot,\tilde \xi_k)$ and $F(\cdot,\xi_k)$ are unbiased estimators of the mapping $F(\cdot)$, i.e., for all $x \in X$,}
\[\fyR{\EXP{F(x,\tilde \xi_k)\mid x}=\EXP{F(x,\xi_k)\mid x}=F(x).}\]
\item [(c)] \fyR{The stochastic mappings $F(\cdot,\tilde \xi_k)$ and $F(\cdot,\xi_k)$ both have bounded variances. More precisely, for all $i$ and $k\geq 0$, there exist positive scalars $\nu_i$ and $\tilde \nu_i$ such that}
\[\fyR{\EXP{\|F_i(x,\tilde \xi_k)-F_i(x)\|_{*i}^2\mid x}\leq {\tilde \nu_i}^2, \quad \EXP{\|F_i(x,\xi_k)-F_i(x)\|_{*i}^2\mid x}\leq {\nu_i}^2.}\]
\item [(d)] For all $k\geq 0$, the random variable $i_k$ is drawn from a discrete probability distribution $P_b$ \fyR{where $Prob(i_k=i)=p_i$ and $p_i>0$.}
\end{itemize}
\end{assumption}
\fyR{\begin{remark}
Note that $x_k$ is $\mathcal{F}_{k-1}$-measurable, and $y_{k+1}$ is $\mathcal{F}_{k-1}\cup \{\tilde \xi_k, i_k\}$-measurable. Therefore, from the definition of the stochastic errors $\fyR{\tilde w_k}$ and $w_k$ in \eqref{def:wk}, Assumption \ref{assump:randvar}(b) implies that for all $i=1,\ldots,d$ and all $k\geq 0$,
\[\EXP{{\fyR{\tilde w_k}}^i\mid \mathcal{F}_{k-1}}=0, \quad \EXP{{w_k}^i\mid \mathcal{F}_{k-1}\cup \{\tilde \xi_k,i_k\}}=0.\]
Similarly, from Assumption \ref{assump:randvar}(c) we obtain, for all $i=1,\ldots,d$ and all $k \geq 0$,
\[ \EXP{\|{\fyR{\tilde w_k}}^i\|_{*i}^2\mid \mathcal{F}_{k-1}}\leq {\fyR{\tilde \nu}^2_i}, \quad \EXP{\|w_k^i\|_{*i}^2\mid \mathcal{F}_{k-1}\cup \{\tilde \xi_k,i_k\}}\leq \nu_i^2.\]
\end{remark}}
\begin{assumption}{\bf{(Stepsize sequence)}}\label{assump:stepsize} Let the \fyR{positive} stepsize sequence $\{\gamma_k\}$ satisfy the following:
\begin{itemize}
\item [(a)] $\gamma_k$ is square summable, i.e., $\sum_{k=0}^\infty \gamma_k^2 < \infty$.
\item [(b)] $\gamma_k$ is non-summable, i.e., $\sum_{k=0}^\infty \gamma_k = \infty$.
\end{itemize}
\end{assumption}
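For example, the standard choice $\gamma_k=\frac{\gamma_0}{k+1}$ with $\gamma_0>0$ satisfies Assumption \ref{assump:stepsize}, since $\sum_{k=0}^\infty \frac{1}{(k+1)^2}<\infty$ while $\sum_{k=0}^\infty\frac{1}{k+1}=\infty$.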
\section{Convergence analysis of the B-SMP algorithm}\label{sec:conv}
In this section, we establish the convergence of the B-SMP algorithm in an almost sure sense.
Throughout the analysis, we make use of a Lyapunov function $\mathcal{L}:X\times \mathbb{R}^n\to \mathbb{R}$ given by \begin{align}\label{def:L}\mathcal{L}(x,y)\triangleq\sum_{i=1}^dp_i^{-1}D_i(x^i,y^i) \qquad \hbox{for any }x \in X \hbox{ and } y \in \mathbb{R}^n.\end{align}
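For example, with uniform sampling $p_i=\frac{1}{d}$ and Euclidean distance generating functions $\omega_i(x)=\frac{1}{2}\|x\|_2^2$, this reduces to $\mathcal{L}(x,y)=\frac{d}{2}\|x-y\|_2^2$, a scaled squared distance.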
The following result will be used in our analysis.
\begin{lemma}[Properties of $\mathcal{L}$]\label{lemma:lyapProp}
\fyR{Let Assumption \ref{assump:randvar}(d) hold.} The function $\mathcal{L}$ satisfies the following:
\begin{itemize}
\item [(a)] $\mathcal{L}(x,y)\geq 0$ for any $x \in X$ and $y \in \mathbb{R}^n$.
\item [(b)] $\mathcal{L}(x,y)=0$ holds if and only if $x=y$.
\end{itemize}
\end{lemma}
\begin{proof}
\noindent (a) This is an immediate consequence of Lemma \ref{lemma:proxprop}(a).\\
\noindent (b) Note that from the definition of $D_i$, for any $i$, we have $D_i(x^i,x^i)=0$ for any $x^i \in X_i$. This implies that $\mathcal{L}(x,x)=0$. Let us assume $\mathcal{L}(x,y)=0$ for some $x \in X$ and $y \in \mathbb{R}^n$. From non-negativity of $D_i$ and the definition of $\mathcal{L}$, for any $i=1,\ldots, d$ we have $D_i(x^i,y^i)=0$. Invoking Lemma \ref{lemma:proxprop}(a) again, we have $\|x^i-y^i\|_i=0$. Since $\|\cdot\|_i$ is a norm, we have $x^i=y^i$ for all $i=1,\ldots, d$, implying that $x=y$.
\end{proof}
The first result stated below provides a recursive relation in terms of the expected value of the Lyapunov function. This result will be used both in the convergence analysis in this section and in deriving the rate statements in the subsequent section.
\begin{lemma}[A recursive relation for the Lyapunov function]\label{lemma:rec-bound}
Let Assumptions~\ref{assump:main} and~\ref{assump:randvar} hold.
Then, for any $k\geq 0$, for the iterate $x_k$ generated by Algorithm \ref{algorithm:IRLSA-impl},
the following relation holds for all $x \in X$:
\begin{align}\label{ineq:rec-bound}
\EXP{\mathcal{L}(x_{k+1},x)\mid \mathcal{F}_{k-1}}&\leq \mathcal{L}(x_k,x)+\gamma_k\langle F(x_k), x-x_k\rangle+\sum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right)\gamma_k^2.
\end{align}
\end{lemma}
\begin{proof}
Consider the relation $x_{k+1}^{i_k}=P_{i_k}\left(x_{k}^{i_k},\gamma_kF_{i_k}(y_{k+1},\xi_{k})\right)$. From Lemma \ref{lemma:proxprop}(c), for an arbitrary vector $x \in X$, we obtain
\begin{align}\label{ineq:Di}
D_{i_k}(x_{k+1}^{i_k},x^{i_k})& \leq D_{i_k}(x_{k}^{i_k},x^{i_k})+\gamma_k\langle F_{i_k}(y_{k+1},\xi_k), x^{i_k}-x_k^{i_k}\rangle+\gamma_k^2\frac{\|F_{i_k}(y_{k+1},\xi_k)\|_{*i_k}^2}{2\mu_{\omega_{i_k}}}\notag\\
& \leq D_{i_k}(x_{k}^{i_k},x^{i_k})+\underbrace{\gamma_k\langle F_{i_k}(y_{k+1}), x^{i_k}-x_k^{i_k}\rangle}_{\mbox{Term }1}+\gamma_k\langle w_k^{i_k}, x^{i_k}-x_k^{i_k}\rangle\notag\\
&+\gamma_k^2\frac{\|F_{i_k}(y_{k+1})\|_{*i_k}^2}{\mu_{\omega_{i_k}}}+\gamma_k^2\frac{\|w_k^{i_k}\|_{*i_k}^2}{\mu_{\omega_{i_k}}},
\end{align}
where we used the triangle inequality for the dual norm $\|\cdot\|_{*i_k}$ in the preceding inequality. Next, we provide an upper bound for Term $1$. Adding and subtracting $F_{i_k}(x_k)$ and using the Lipschitzian property of the mapping $F_{i_k}$, we obtain
\begin{align}\label{ineq:Term1}
\mbox{Term}\ 1&= \gamma_k\langle F_{i_k}(y_{k+1})-F_{i_k}(x_k), x^{i_k}-x_k^{i_k}\rangle+\gamma_k\langle F_{i_k}(x_k), x^{i_k}-x_k^{i_k}\rangle\notag\\
& \leq \gamma_k\|F_{i_k}(y_{k+1})-F_{i_k}(x_k)\|_{*i_k}\|x^{i_k}-x_k^{i_k}\|_{i_k}+\gamma_k\langle F_{i_k}(x_k), x^{i_k}-x_k^{i_k}\rangle\notag\\
& \leq 2B_{i_k}\gamma_k\underbrace{\|F_{i_k}(y_{k+1})-F_{i_k}(x_k)\|_{*i_k}}_{\mbox{Term}\ 2}+\gamma_k\langle F_{i_k}(x_k), x^{i_k}-x_k^{i_k}\rangle,
\end{align}
where in the first inequality we use the definition of the dual norm $\|\cdot\|_{*i}$, while in the preceding relation we use the triangle inequality for the term $\|x^{i_k}-x_k^{i_k}\|_{i_k}$ and boundedness of the set $X_{i_k}$. Next, we estimate an upper bound on $\mbox{Term}\ 2$. We have
\begin{align*}
\mbox{Term}\ 2&= \|F_{i_k}(y_{k+1})-F_{i_k}(x_k)\|_{*i_k}=\|F_{i_k}(y_{k+1}^{i_k};x_{k}^{-i_k})-F_{i_k}(x_{k}^{i_k};x_{k}^{-i_k})\|_{*i_k}\leq L_{i_k}\|y_{k+1}^{i_k}-x_{k}^{i_k}\|_{i_k},
\end{align*}
where we use the block-Lipschitzian property of the mapping $F$ given by Assumption \ref{assump:main}(b). From Lemma \ref{lemma:proxprop}(d,e) and the update rule for $y_{k+1}$, we obtain
\begin{align*}
\mbox{Term}\ 2& \leq L_{i_k}\|y_{k+1}^{i_k}-x_{k}^{i_k}\|_{i_k}=L_{i_k}\left\|P_{i_k}\left(x_{k}^{i_k},\gamma_k F_{i_k}(x_{k},\tilde \xi_k)\right)-P_{i_k}\left(x_{k}^{i_k},0\right)\right\|_{i_k} \leq L_{i_k}\|\gamma_k F_{i_k}(x_{k},\tilde \xi_k)\|_{*i_k}\\
& =\gamma_kL_{i_k}\| F_{i_k}(x_{k})+\fyR{{\tilde w}^{i_k}_k}\|_{*i_k}\leq \gamma_kL_{i_k}\| F_{i_k}(x_{k})\|_{*i_k}+\gamma_kL_{i_k}\|\fyR{{\tilde w}^{i_k}_k}\|_{*i_k}.
\end{align*}
Therefore, from \eqref{ineq:Term1} and Remark \ref{rem1}, we obtain
\begin{align*}
\mbox{Term}\ 1& \leq 2L_{i_k}B_{i_k}\gamma_k^2\| F_{i_k}(x_{k})\|_{*i_k}+2L_{i_k}B_{i_k}\gamma_k^2\|\fyR{{\tilde w}^{i_k}_k}\|_{*i_k}+\gamma_k\langle F_{i_k}(x_k), x^{i_k}-x_k^{i_k}\rangle \\
& \leq 2L_{i_k}B_{i_k}C_{i_k}\gamma_k^2+2L_{i_k}B_{i_k}\gamma_k^2\|\fyR{{\tilde w}^{i_k}_k}\|_{*i_k}+\gamma_k\langle F_{i_k}(x_k), x^{i_k}-x_k^{i_k}\rangle.
\end{align*}
From the preceding inequality, \eqref{ineq:Di}, and Remark \ref{rem1}, we have
\begin{align*}
D_{i_k}(x_{k+1}^{i_k},x^{i_k})& \leq D_{i_k}(x_{k}^{i_k},x^{i_k})+\gamma_k\langle F_{i_k}(x_k), x^{i_k}-x_k^{i_k}\rangle+\gamma_k\langle w_k^{i_k}, x^{i_k}-x_k^{i_k}\rangle\notag\\
&+\gamma_k^2\left(\frac{C_{i_k}^2}{\mu_{\omega_{i_k}}}+2L_{i_k}B_{i_k}C_{i_k}\right)+\gamma_k^2\frac{\|w_k^{i_k}\|_{*i_k}^2}{\mu_{\omega_{i_k}}}+2L_{i_k}B_{i_k}\gamma_k^2\|\fyR{{\tilde w}^{i_k}_k}\|_{*i_k}.
\end{align*}
Note that by the definition of Algorithm \ref{algorithm:IRLSA-impl}, for any $i\neq i_k$, we have $x_{k+1}^i=x_k^i$. Thus, from the definition of $\mathcal{L}$ given by \eqref{def:L} and the preceding relation, we have
\begin{align*}
\mathcal{L}(x_{k+1},x)&=\sum_{i\neq i_k}p_i^{-1}D_i(x_{k+1}^i,x^i)+p_{i_k}^{-1}D_{i_k}(x_{k+1}^{i_k},x^{i_k})\\
&=\sum_{i\neq i_k}p_i^{-1}D_i(x_{k}^i,{x}^i)+p_{i_k}^{-1}D_{i_k}(x_{k+1}^{i_k},{x}^{i_k})\\
& \leq \mathcal{L}(x_k,x)+p_{i_k}^{-1}\gamma_k\langle F_{i_k}(x_k), x^{i_k}-x_k^{i_k}\rangle+p_{i_k}^{-1}\gamma_k\langle w_k^{i_k}, x^{i_k}-x_k^{i_k}\rangle\\
&+p_{i_k}^{-1}\gamma_k^2\left(\frac{C_{i_k}^2}{\mu_{\omega_{i_k}}}+2L_{i_k}B_{i_k}C_{i_k}\right)+p_{i_k}^{-1}\gamma_k^2\frac{\|w_k^{i_k}\|_{*i_k}^2}{\mu_{\omega_{i_k}}}+2p_{i_k}^{-1}L_{i_k}B_{i_k}\gamma_k^2\|\fyR{{\tilde w}^{i_k}_k}\|_{*i_k}.
\end{align*}
\fyR{Next, we take conditional expectations in the preceding relation with respect to $\mathcal{F}_{k-1}\cup \{\tilde \xi_k, i_k\}$. Note that in Algorithm~\ref{algorithm:IRLSA-impl}, $x_k$ is $\mathcal{F}_{k-1}$-measurable, and $y_{k+1}$ and $\fyR{\tilde w_k}$ are both $\mathcal{F}_{k-1}\cup \{\tilde \xi_k, i_k\}$-measurable. Therefore, we obtain
\begin{align*}
\EXP{\mathcal{L}(x_{k+1},x)\mid \mathcal{F}_{k-1}\cup \{\tilde \xi_k, i_k\}}&\leq \mathcal{L}(x_k,x)+p_{i_k}^{-1}\gamma_k\langle F_{i_k}(x_k), x^{i_k}-x_k^{i_k}\rangle\\
&+\EXP{p_{i_k}^{-1}\gamma_k\langle w_k^{i_k}, x^{i_k}-x_k^{i_k}\rangle\mid \mathcal{F}_{k-1}\cup \{\tilde \xi_k, i_k\}}\\& +p_{i_k}^{-1}\gamma_k^2\left(\frac{C_{i_k}^2}{\mu_{\omega_{i_k}}}+2L_{i_k}B_{i_k}C_{i_k}\right)+\EXP{p_{i_k}^{-1}\gamma_k^2\frac{\|w_k^{i_k}\|_{*i_k}^2}{\mu_{\omega_{i_k}}}\mid \mathcal{F}_{k-1}\cup \{\tilde \xi_k, i_k\}}\\& +2p_{i_k}^{-1}L_{i_k}B_{i_k}\gamma_k^2\|\fyR{{\tilde w}^{i_k}_k}\|_{*i_k}.
\end{align*}
Rearranging the terms, we obtain
\begin{align*}
\EXP{\mathcal{L}(x_{k+1},x)\mid \mathcal{F}_{k-1}\cup \{\tilde \xi_k, i_k\}}&\leq \mathcal{L}(x_k,x)+p_{i_k}^{-1}\gamma_k\langle F_{i_k}(x_k), x^{i_k}-x_k^{i_k}\rangle\\
&+p_{i_k}^{-1}\gamma_k\langle \EXP{w_k^{i_k}\mid \mathcal{F}_{k-1}\cup \{\tilde \xi_k, i_k\}}, x^{i_k}-x_k^{i_k}\rangle\\& +p_{i_k}^{-1}\gamma_k^2\left(\frac{C_{i_k}^2}{\mu_{\omega_{i_k}}}+2L_{i_k}B_{i_k}C_{i_k}\right)+p_{i_k}^{-1}\gamma_k^2\frac{\EXP{\|w_k^{i_k}\|_{*i_k}^2\mid \mathcal{F}_{k-1}\cup \{\tilde \xi_k, i_k\}}}{\mu_{\omega_{i_k}}}\\& +2p_{i_k}^{-1}L_{i_k}B_{i_k}\gamma_k^2\|\fyR{{\tilde w}^{i_k}_k}\|_{*i_k}.
\end{align*}
Invoking Assumption \ref{assump:randvar}, we can write
\begin{align}\label{ineq:rev1ineq1}
\EXP{\mathcal{L}(x_{k+1},x)\mid \mathcal{F}_{k-1}\cup \{\tilde \xi_k, i_k\}}&\leq \mathcal{L}(x_k,x)+p_{i_k}^{-1}\gamma_k\langle F_{i_k}(x_k), x^{i_k}-x_k^{i_k}\rangle\notag\\
& +p_{i_k}^{-1}\gamma_k^2\left(\frac{C_{i_k}^2}{\mu_{\omega_{i_k}}}+2L_{i_k}B_{i_k}C_{i_k}\right)+p_{i_k}^{-1}\gamma_k^2\frac{\nu_{i_k}^2}{\mu_{\omega_{i_k}}}\notag\\& +\underbrace{2p_{i_k}^{-1}L_{i_k}B_{i_k}\gamma_k^2\|\fyR{{\tilde w}^{i_k}_k}\|_{*i_k}}_{\mbox{Term}\ 3}.
\end{align}
In the remainder of the proof, the idea is to first take conditional expectations in the preceding inequality with respect to $\tilde \xi_k$, and then with respect to $i_k$. Note that for Term $3$, we can write
\begin{align*}
\EXP{\mbox{Term}\ 3\mid \mathcal{F}_{k-1}\cup \{i_k\}}&\leq 2\gamma_k^2L_{i_k}B_{i_k}\EXP{\|\fyR{{\tilde w}^{i_k}_k}\|_{*i_k}\mid \mathcal{F}_{k-1}\cup \{i_k\}}
\\ &\leq 2\gamma_k^2L_{i_k}B_{i_k}\sqrt{\EXP{\|{\fyR{\tilde w_k}}^{i_k}\|^2_{*{i_k}}\mid \mathcal{F}_{k-1}\cup \{i_k\}}}\leq2\gamma_k^2L_{i_k}B_{i_k}\fyR{{\tilde\nu}_{i_k}},\end{align*}
where in the second inequality we applied Jensen's inequality, and in the last inequality we used Assumption \ref{assump:randvar}.
Taking expectations in relation \eqref{ineq:rev1ineq1} with respect to $\tilde \xi_k$, and using the preceding estimate, we obtain
\begin{align*}
\EXP{\mathcal{L}(x_{k+1},x)\mid \mathcal{F}_{k-1}\cup \{i_k\}}&\leq \mathcal{L}(x_k,x)+p_{i_k}^{-1}\gamma_k\langle F_{i_k}(x_k), x^{i_k}-x_k^{i_k}\rangle\\
& +p_{i_k}^{-1}\gamma_k^2\left(\frac{C_{i_k}^2}{\mu_{\omega_{i_k}}}+2L_{i_k}B_{i_k}C_{i_k}\right)+p_{i_k}^{-1}\gamma_k^2\frac{\nu_{i_k}^2}{\mu_{\omega_{i_k}}}+2\gamma_k^2L_{i_k}B_{i_k}\fyR{{\tilde\nu}_{i_k}}.
\end{align*}
Taking expectations in the preceding inequality with respect to $i_k$, we have
\begin{align}\label{ineq2:L}
\EXP{\mathcal{L}(x_{k+1},x)\mid \mathcal{F}_{k-1}}&\leq \mathcal{L}(x_k,x)+\sum_{i=1}^dp_{i}p_{i}^{-1}\gamma_k\langle F_{i}(x_k), x^{i}-x_k^{i}\rangle\notag\\
&+\sum_{i=1}^dp_{i}p_{i}^{-1}\gamma_k^2\left(\frac{C_{i}^2}{\mu_{\omega_{i}}}+2L_{i}B_{i}C_{i}\right)+\sum_{i=1}^dp_{i}p_{i}^{-1}\gamma_k^2\frac{\nu_{i}^2}{\mu_{\omega_{i}}}\notag\\ &+2\sum_{i=1}^dp_{i}p_{i}^{-1}L_{i}B_{i}\gamma_k^2\fyR{{\tilde \nu}_{i}}.
\end{align}
From the definition of the inner product, by rearranging the terms, we obtain the desired inequality.}
\end{proof}
Before we proceed to establish the almost sure convergence of Algorithm \ref{algorithm:IRLSA-impl}, in the following result we present the properties of Algorithm \ref{algorithm:IRLSA-impl} when the mapping $F$ is pseudo-monotone.
\begin{lemma}[Properties under pseudo-monotonicity]\label{lemma:pseudo}
Consider problem \eqref{eqn:SCVI} where the mapping $F$ is pseudo-monotone on $X$. Let Assumptions \ref{assump:main}, \ref{assump:randvar}, and \ref{assump:stepsize} hold. Let $\{x_k\}$ be generated by Algorithm \ref{algorithm:IRLSA-impl}. Then, the following results hold almost surely:
\begin{itemize}
\item [(a)] For any $x^* \in \hbox{SOL}(X,F)$, the sequence $\{\mathcal{L}(x_k,x^*)\}$ is convergent.
\item [(b)] The sequence $\{x_k\}$ is bounded.
\item [(c)] For any $x^* \in \hbox{SOL}(X,F)$, $\liminf_{k \to \infty} \langle F(x_k), x^*-x_k\rangle=0$.
\item [(d)] If an accumulation point of $\{x_k\}$ is a solution to problem \eqref{eqn:SCVI}, then the \fyR{entire} sequence $\{x_k\}$ converges to this solution.
\end{itemize}
\end{lemma}
\begin{proof}
Consider relation \eqref{ineq:rec-bound}. Let $x=x^*$ where $x^* \in \hbox{SOL}(X,F)$. We have
\begin{align*}
\EXP{\mathcal{L}(x_{k+1},x^*)\mid \mathcal{F}_{k-1}}&\leq \mathcal{L}(x_k,x^*)-\gamma_k\langle F(x_k), x_k-x^*\rangle+\sum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right)\gamma_k^2.
\end{align*}
Next, we apply Lemma \ref{lemma:supermartingale}. Let us define \[\alpha_k=0, \quad \beta_k=\sum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right)\gamma_k^2, \quad u_k=\gamma_k\langle F(x_k), x_k-x^*\rangle,\quad v_k=\mathcal{L}(x_k,x^*).\] From Assumption \ref{assump:stepsize}(a), we have $\sum_{k=0}^\infty \beta_k <\infty$. Moreover, since $x^*$ is a solution to problem \eqref{eqn:SCVI}, we have $\langle F(x^*), x_k-x^*\rangle \geq 0$. Invoking the pseudo-monotonicity of $F$, we obtain $u_k \geq 0$. Therefore, since all the conditions of Lemma \ref{lemma:supermartingale} are met, we conclude that the sequence $\{\mathcal{L}(x_k,x^*)\}$ is convergent almost surely, implying that part (a) holds. Moreover, $\sum_{k=0}^\infty u_k < \infty$ holds almost surely. Since we assumed $\sum_{k=0}^\infty\gamma_k=\infty$, we conclude that $\liminf_{k \to \infty} \langle F(x_k), x^*-x_k\rangle=0$ almost surely, indicating that part (c) holds. Note that since the set $X$ is compact, the sequence $\{x_k\}$ is bounded, implying that part (b) holds. Next we show part (d). From the hypothesis, let $\{x_{k_j}\}$ denote a subsequence of $\{x_k\}$ with $\lim_{j\to \infty}x_{k_j}=\bar x \in \hbox{SOL}(X,F)$. From part (a), since $\bar x$ is a solution, the sequence $\{\mathcal{L}(x_k,\bar x)\}$ is convergent. Let $\bar{\mathcal{L}}$ denote its limit, i.e., $\lim_{k\to \infty}\mathcal{L}(x_k,\bar x)=\bar{\mathcal{L}}$. Note that since each function $\omega_i$ is convex and therefore continuous, the function $\mathcal{L}$ is also continuous. Taking this into account, we have
\[\lim_{j\to \infty}\mathcal{L}(x_{k_j},\bar x)=\mathcal{L}\left(\lim_{j\to \infty}x_{k_j},\bar x\right)=\mathcal{L}(\bar x,\bar x)=0.\]
Therefore, $\bar{\mathcal{L}}=0$, implying that $\lim_{k\to \infty}\mathcal{L}(x_k,\bar x)=0$. Invoking continuity of $\mathcal{L}$ again and using Lemma \ref{lemma:lyapProp}, we obtain the desired result.
\end{proof}
The following lemma implies uniqueness of the solution of problem \epsilonqref{eqn:SCVI} under strict pseudo-monotonicity of the mapping.
\begin{enumerate}gin{lemma}\label{lem:uniqueness}
Consider problem \epsilonqref{eqn:SCVI}. Let Assumption \mu} \def\n{\nuathbb{R}f{assump:main} hold and mapping $F$ be strictly pseudo-monotone on $X$. Then, \epsilonqref{eqn:SCVI} has a unique solution.
\epsilonnd{lemma}
\begin{enumerate}gin{proof}
See Appendix \mu} \def\n{\nuathbb{R}f{A1}.
\epsilonnd{proof}
\fyR{Note that Lemma \ref{lemma:pseudo}(d) does not by itself guarantee a.s. convergence to SOL$(X,F)$. To conclude this property, a stronger assumption, such as strict pseudo-monotonicity of the mapping, is needed. This is addressed in the following result.}
\begin{enumerate}gin{proposition}[a.s. convergence under strict pseudo-monotonicity for SCVIs]\label{prop:a.s.vi} Consider problem \epsilonqref{eqn:SCVI} and assume that $F$ is strictly pseudo-monotone on $X$. Let Assumptions \mu} \def\n{\nuathbb{R}f{assump:main}, \mu} \def\n{\nuathbb{R}f{assump:randvar}, and \mu} \def\n{\nuathbb{R}f{assump:stepsize} hold. Let $\{x_k\}$ be generated by Algorithm \mu} \def\n{\nuathbb{R}f{algorithm:IRLSA-impl}. Then, \fyR{$x_k$ converges to the unique solution of \epsilonqref{eqn:SCVI}, $x^*$, almost surely.}
\epsilonnd{proposition}
\fyR{\begin{enumerate}gin{proof}
The uniqueness of the solution of \epsilonqref{eqn:SCVI} is implied by Lemma \mu} \def\n{\nuathbb{R}f{lem:uniqueness}. From Lemma \mu} \def\n{\nuathbb{R}f{lemma:pseudo}(c), \[\liminf_{k \to \infty} \langle F(x_k), x^*-x_k\rangle=0.\] Let us define the function $g:X\to \mu} \def\n{\nuathbb{R}$ as $g(x)\triangleq\langle F(x), x^*-x\rangle$. We have $\liminf_{k \to \infty}g(x_k)=0$. This implies that there exists a subsequence $\{x_{k_j}\}$ (not necessarily convergent) such that $\lim_{j \to \infty}g(x_{k_j})=0$. From Lemma \mu} \def\n{\nuathbb{R}f{lemma:pseudo}(b), the sequence $\{x_k\}$ and therefore its subsequence $\{x_{k_j}\}$ are bounded. Thus, there exists a subsequence of the sequence $\{x_{k_j}\}$ that is convergent. Let us denote that subsequence by $\{x_{k_j(t)}\}$ and its accumulation point by $\hat x$. From $\lim_{j \to \infty}g(x_{k_j})=0$ we have $\lim_{t\to \infty}g(x_{k_j(t)})=0$. Using continuity of $g$, we obtain $g(\hat x)=0$ indicating that \begin{enumerate}gin{align}\label{ineq:spse4}\langle F(\hat x), x^*-\hat x\rangle=0.\epsilonnd{align}
Since $x^* \in $ SOL$(X,F)$, we have $\langle F(x^*),\hat x-x^*\rangle \gammae 0$. Using the definition of strict pseudo-monotonicity of $F$ if $\hat x\neq x^*$, we obtain $\langle F(\hat x),\hat x-x^*\rangle > 0.$ This is contradictory to \epsilonqref{ineq:spse4}. Therefore, $\hat x = x^*$. Thus, $x_k$ has an accumulation point $\hat x$ that solves VI$(X,F)$. Using Lemma \mu} \def\n{\nuathbb{R}f{lemma:pseudo}(d), we conclude that $x_k$ converges to $x^*$ almost surely.
\epsilonnd{proof}}
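For illustration only, the following short Python sketch checks numerically that the scalar map $F(x)=x(2+\cos x)$ is strictly pseudo-monotone on an interval but not monotone; the map, the interval, and the grid check are hypothetical choices made for this example and are not part of the algorithmic framework above.
\begin{verbatim}
import numpy as np

# Illustrative scalar map: F(x) = x * (2 + cos x).
# sign(F(x)) = sign(x) and F(0) = 0, which makes F strictly pseudo-monotone,
# yet F is not monotone: (F(x) - F(y))(x - y) < 0 for some pairs.
F = lambda x: x * (2.0 + np.cos(x))

grid = np.linspace(-10.0, 10.0, 401)
mono_violated, pseudo_violated = False, False
for x in grid:
    for y in grid:
        if x == y:
            continue
        if (F(x) - F(y)) * (x - y) < -1e-12:
            mono_violated = True
        # strict pseudo-monotonicity: F(y)(x-y) >= 0  =>  F(x)(x-y) > 0
        if F(y) * (x - y) >= 0.0 and not F(x) * (x - y) > 0.0:
            pseudo_violated = True

print("monotonicity violated:         ", mono_violated)    # expected: True
print("strict pseudo-mono. violated:  ", pseudo_violated)  # expected: False
\end{verbatim}
Maps of this type fall outside the monotone setting but are still covered by Proposition \ref{prop:a.s.vi}.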
\sigma} \def\del{\etaection{Rate of convergence analysis for the B-SMP algorithm}\label{sec:rate}
To present the convergence rate of the B-SMP algorithm, in the first part of this section we derive the rate under the assumption that the mapping $F$ is strongly pseudo-monotone. This result is provided by Proposition \ref{prop:ratestrongpse}. In the second part of this section, we consider a subclass of SCVIs, namely the stochastic convex optimization problems of the form \eqref{prob:SOP}. For this class of problems, we show that under convexity of the objective function, an averaging variant of Algorithm \ref{algorithm:IRLSA-impl} admits the convergence rate given by Proposition \ref{prop:optaveSCO}. Both of these results appear to be new for stochastic mirror-prox algorithms addressing large-scale SCVIs. In the analysis of the first rate statement in this section, we make use of the following result.
\begin{enumerate}gin{lemma}[Convergence rate of a recursive sequence]\label{lemma:rateHarmonic} Let $\{e_k\}$ be a non-negative sequence such that for an arbitrary non-negative sequence $\{\gamma_k\}$, the following relation is satisfied:
\begin{enumerate}gin{align}\label{ekbound0}
e_{k+1}\leq (1-\alphalpha \gamma_k)e_k+\begin{enumerate}ta \gamma_k^2, \quad \hbox{for all } k\gammaeq 0.
\epsilonnd{align}
where $\alphalpha$ and $\begin{enumerate}ta$ are positive scalars. Suppose $\gamma_0=\gamma$, $\gamma_k=\frac{\gamma}{k}$ for any $k\gammaeq 1$, where $\gamma>\frac{1}{\alphalpha}$. Let $K\triangleq\lceil \alphalpha \gamma\rceil$. Then, we have
\begin{enumerate}gin{align}\label{ekbound1}
e_k\leq \frac{\mu} \def\n{\nuax \{\frac{\begin{enumerate}ta\gamma^2}{\alphalpha \gamma-1},Ke_K\}}{k}, \qquad \hbox{for all } k\gammaeq K.
\epsilonnd{align}
Specifically, if we set $\gamma=\frac{2}{\alphalpha}$, then we have
\begin{enumerate}gin{align}\label{ekbound2}
e_k\leq \frac{8\begin{enumerate}ta}{\alphalpha^2k}, \qquad \hbox{for all } k\gammaeq 2.
\epsilonnd{align}
\epsilonnd{lemma}
\begin{enumerate}gin{proof}
See Appendix \mu} \def\n{\nuathbb{R}f{A3}.
\epsilonnd{proof}
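As a sanity check on the bound \eqref{ekbound2}, the following Python sketch runs the recursion \eqref{ekbound0} with equality (a sequence that trivially satisfies the relation) for hypothetical constants $\alpha$, $\beta$, $e_0$ and verifies the claimed $O(1/k)$ envelope; the specific numbers are illustrative only.
\begin{verbatim}
# Numerical check of e_k <= 8*beta/(alpha^2 * k) for k >= 2, using the
# equality version of the recursion (which in particular satisfies (ekbound0)).
alpha, beta = 1.0, 1.0          # hypothetical positive scalars
gamma = 2.0 / alpha             # the choice gamma = 2/alpha from the lemma
e = 1.0                         # e_0, an arbitrary non-negative starting value
errors = [e]
for k in range(0, 10000):
    gamma_k = gamma if k == 0 else gamma / k
    e = (1.0 - alpha * gamma_k) * e + beta * gamma_k ** 2
    errors.append(e)

for k in range(2, len(errors)):
    assert errors[k] <= 8.0 * beta / (alpha ** 2 * k) + 1e-12
print("bound 8*beta/(alpha^2 k) verified for k = 2, ...,", len(errors) - 1)
\end{verbatim}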
\begin{enumerate}gin{comment}\begin{enumerate}gin{remark}Here we want to note that a similar result to Lemma \mu} \def\n{\nuathbb{R}f{lemma:rateHarmonic} has been extensively used in the literature in establishing convergence rate results. However, we believe in several papers, there exists a partial mistake in applying this result that has never been corrected. Here we point out two examples of such an error. For example, see page 1578 in \cite{nemirovski_robust_2009}, or see the proof of Theorem 3.2 in \cite{nocedal15}. In both of these cases, it is claimed that relation \epsilonqref{ekbound1} holds for $K=1$. However, the argument breaks in the induction step. This is because in relation \epsilonqref{ekbound3}, for the second inequality to hold, it is required that the term $1-\frac{\alphalpha \gamma}{k}$ is non-negative. However, for any $k <K$, this term is negative and therefore the induction does not go through.
\epsilonnd{remark}\epsilonnd{comment}
Next, we derive the convergence rate for the B-SMP method when the mapping is strongly pseudo-monotone.
\begin{enumerate}gin{proposition}[Rate statement under strong pseudo-monotonicity for SCVIs]\label{prop:ratestrongpse}
Consider problem \epsilonqref{eqn:SCVI} and assume that the mapping $F$ is $\mu} \def\n{\nuu$-strongly pseudo-monotone on $X$ with respect to the norm $\|\cdot\|$. Let Assumptions \mu} \def\n{\nuathbb{R}f{assump:main} and \mu} \def\n{\nuathbb{R}f{assump:randvar} hold. Let $\{x_k\}$ be generated by Algorithm \mu} \def\n{\nuathbb{R}f{algorithm:IRLSA-impl}. Then for $k\gammaeq 0$, we have:
\begin{enumerate}gin{align}\label{ineq:rec-bound3}
\EXP{\mu} \def\n{\nuathcal{L}(x_{k+1},{x^*})}&\leq \left(1-\gamma_k\frac{2\mu} \def\n{\nuu\mu} \def\n{\nuin\limits_{1\leq i\leq d}p_i}{\mu} \def\n{\nuax\limits_{1\leq i\leq d}L_{\omega_i}}\right)\EXP{\mu} \def\n{\nuathcal{L}(x_k,{x^*})}+\sigma} \def\del{\etaum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu} \def\n{\nuu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right)\gamma_k^2.
\epsilonnd{align}
Let the probability distribution $P_b$ be uniform, i.e., $p_i=\frac{1}{d}$ for all $i \in \{1,\ldots,d\}$. Suppose the stepsize $\gamma_k$ is given by the following rule:
\begin{enumerate}gin{align}
\gamma_0\triangleq\frac{d\mu} \def\n{\nuax\limits_{1\leq i\leq d}L_{\omega_i}}{\mu} \def\n{\nuu}, \qquad \gamma_{k}=\frac{\gamma_0}{k}, \quad \hbox{for all }k \gammaeq 1.\label{stepsize}
\epsilonnd{align}
Then, $x_k$ converges to $x^*$ almost surely. Moreover, we have
\begin{enumerate}gin{align}\label{ineq:rec-bound4}
\EXP{\|x_k-{x^*}\|^2}\leq \dfrac{\mu} \def\n{\nuathcal{A}d}{k}, \quad \hbox{for all } k\gammaeq2,
\epsilonnd{align}
where $\mu} \def\n{\nuathcal{A}\triangleq\frac{4\sigma} \def\del{\etaum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu} \def\n{\nuu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right)\left(\mu} \def\n{\nuax\limits_{1\leq i\leq d}L_{\omega_i}\right)^2 }{\mu} \def\n{\nuu^2\mu} \def\n{\nuin\limits_{1\leq i\leq d}\mu} \def\n{\nuu_{\omega_i}}$.
\epsilonnd{proposition}
\begin{enumerate}gin{proof}
Since $F$ is strongly pseudo-monotone, arguing as in the proof of Lemma \ref{lem:uniqueness}, it can be shown that the solution set is a singleton. Let $x^*$ denote the unique solution to problem \eqref{eqn:SCVI}. Consider relation \eqref{ineq:rec-bound}. \fyR{Taking expectations on both sides, we have}
\begin{enumerate}gin{align}\label{ineq:rec-bound2}
\EXP{\mu} \def\n{\nuathcal{L}(x_{k+1},x^*)}&\leq \EXP{\mu} \def\n{\nuathcal{L}(x_k,x^*)}-\gamma_k\EXP{\langle F(x_k), x_k-x^*\rangle}+\sigma} \def\del{\etaum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu} \def\n{\nuu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right)\gamma_k^2.
\epsilonnd{align}
Since $x^*$ solves problem \epsilonqref{eqn:SCVI}, we have $\langle F(x^*),x_k-x^*\rangle \gammaeq 0$. The definition of strong pseudo-monotonicity of $F$ implies that $\langle F(x_k),x_k-x^*\rangle \gammaeq \mu} \def\n{\nuu\|x_k-x^*\|^2$. From the definition of the norm $\|\cdot\|$, we obtain
\begin{enumerate}gin{align*}
\langle F(x_k),x_k-x^*\rangle &\gammaeq \mu} \def\n{\nuu\sigma} \def\del{\etaum_{i=1}^d\|x_k^i-{x^*}^i\|^2_i \gammaeq 2\mu} \def\n{\nuu\sigma} \def\del{\etaum_{i=1}^d\frac{D_i(x_k^i,{x^*}^i)}{L_{\omega_{i}}}\\& \gammaeq \frac{2\mu} \def\n{\nuu\mu} \def\n{\nuin\limits_{1\leq i\leq d}p_i}{\mu} \def\n{\nuax\limits_{1\leq i\leq d}L_{\omega_i}}\sigma} \def\del{\etaum_{i=1}^d p_i^{-1}D_i(x_k^i,{x^*}^i)=\frac{2\mu} \def\n{\nuu\mu} \def\n{\nuin\limits_{1\leq i\leq d}p_i}{\mu} \def\n{\nuax\limits_{1\leq i\leq d}L_{\omega_i}}\mu} \def\n{\nuathcal{L}(x_k,x^*),
\epsilonnd{align*}
where in the second inequality, we used the Lipschitzian property of the distance generator function $\omega_i$ (cf. Lemma \mu} \def\n{\nuathbb{R}f{lemma:proxprop}(a)). From the preceding relation, the definition of $\mu} \def\n{\nuathcal{L}$, and \epsilonqref{ineq:rec-bound2}, we obtain the desired inequality \epsilonqref{ineq:rec-bound3}. The almost sure convergence of the sequence $\{x_k\}$ to $x^*$ follows directly from Proposition \mu} \def\n{\nuathbb{R}f{prop:a.s.vi}. Next, we establish the rate statement.
We apply the result of Lemma \mu} \def\n{\nuathbb{R}f{lemma:rateHarmonic} to the inequality \epsilonqref{ineq:rec-bound3}. Since $P_b$ has a uniform distribution, we have $p_i=\frac{1}{d}$ for all $i$. Let us define
\[e_k\triangleq\EXP{\mu} \def\n{\nuathcal{L}(x_{k},{x^*})}, \hbox{for all } k\gammaeq 0;\quad \alphalpha\triangleq \frac{2\mu} \def\n{\nuu}{d\mu} \def\n{\nuax\limits_{1\leq i\leq d}L_{\omega_i}}; \quad \begin{enumerate}ta \triangleq\sigma} \def\del{\etaum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu} \def\n{\nuu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right),\]
and $\gamma\triangleq\frac{d\mu} \def\n{\nuax\limits_{1\leq i\leq d}L_{\omega_i}}{\mu} \def\n{\nuu}$. Note that from \epsilonqref{stepsize}, we have $\gamma_0=\gamma=\frac{2}{\alphalpha}$. Therefore, recalling Lemma \mu} \def\n{\nuathbb{R}f{lemma:rateHarmonic}, from \epsilonqref{ekbound2}, we have
\begin{enumerate}gin{align}\label{boundOnLy}
\EXP{\mu} \def\n{\nuathcal{L}(x_{k},{x^*})}\leq \frac{8\begin{enumerate}ta}{\alphalpha^2k}, \qquad \hbox{for all } k\gammaeq 2.
\epsilonnd{align}
From the definition of $\mu} \def\n{\nuathcal{L}$, the strong convexity of $\omega_i$ for all $i$, and the definition of $\|.\|$, we have
\[\EXP{\mu} \def\n{\nuathcal{L}(x_{k},{x^*})}=d\sigma} \def\del{\etaum_{i=1}^d\EXP{D_i(x_k^i,{x^*}^i)}\gammaeq d\frac{\mu} \def\n{\nuin\limits_{1\leq i\leq d}\mu} \def\n{\nuu_{\omega_i}}{2}\sigma} \def\del{\etaum_{i=1}^d\EXP{\|x_k^i-{x^*}^i\|_i^2}=d\frac{\mu} \def\n{\nuin\limits_{1\leq i\leq d}\mu} \def\n{\nuu_{\omega_i}}{2}\EXP{\|x_k-x^*\|^2}.\]
From the preceding inequality, \epsilonqref{boundOnLy}, and the values of $\alphalpha$ and $\begin{enumerate}ta$ defined above, we obtain \epsilonqref{ineq:rec-bound4}.
\epsilonnd{proof}
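Since Algorithm \ref{algorithm:IRLSA-impl} is specified earlier in the paper, we do not restate it here; the following Python sketch is only meant to illustrate the rate statement on a toy problem. It assumes Euclidean distance generating functions (so each prox step reduces to a projection onto $X_i=[-1,1]^{n_i}$, with $L_{\omega_i}=\mu_{\omega_i}=1$), a synthetic strongly monotone affine map, Gaussian oracle noise, and a randomized block mirror-prox update with the stepsize rule \eqref{stepsize}; all of these modelling choices are illustrative assumptions, not the paper's general setting.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, ni = 4, 2                          # hypothetical: 4 blocks of dimension 2
n = d * ni
S = rng.standard_normal((n, n)); S = S - S.T       # skew-symmetric part
mu = 0.5
A = mu * np.eye(n) + S                # <A z, z> = mu ||z||^2: strongly monotone
x_star = np.full(n, 0.3)              # interior of X = [-1,1]^n, so F(x*) = 0
F = lambda x: A @ (x - x_star)        # strongly monotone => strongly pseudo-monotone
sigma = 0.1                           # oracle noise level
proj = lambda z: np.clip(z, -1.0, 1.0)             # Euclidean prox = projection on X_i
gamma0 = d * 1.0 / mu                 # stepsize rule (stepsize) with L_{omega_i} = 1

def bsmp(K):
    x = np.zeros(n)
    for k in range(K):
        g = gamma0 if k == 0 else gamma0 / k
        i = rng.integers(d)                        # uniform block sampling
        blk = slice(i * ni, (i + 1) * ni)
        y = x.copy()
        y[blk] = proj(x[blk] - g * (F(x) + sigma * rng.standard_normal(n))[blk])
        x[blk] = proj(x[blk] - g * (F(y) + sigma * rng.standard_normal(n))[blk])
    return np.sum((x - x_star) ** 2)

for K in (100, 1000, 10000):
    err = np.mean([bsmp(K) for _ in range(10)])
    print(f"K={K:6d}   mean ||x_K - x*||^2 = {err:.5f}")
# For large K the error decays roughly like 1/K, consistent with (ineq:rec-bound4).
\end{verbatim}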
In the next result, we consider the problem \epsilonqref{prob:SOP} where the objective function is assumed to be convex. In this class of problems, we introduce a new averaging variant of the B-SMP algorithm and derive its convergence rate.
\begin{enumerate}gin{proposition}[Rate statement under convexity for SCOPs]\label{prop:optaveSCO} Consider problem \epsilonqref{prob:SOP} and assume that the gradient mapping of $f$, denoted by $F$, is monotone. Let Assumptions \mu} \def\n{\nuathbb{R}f{assump:main} and \mu} \def\n{\nuathbb{R}f{assump:randvar} hold and let $\{x_k\}$ be generated by Algorithm \mu} \def\n{\nuathbb{R}f{algorithm:IRLSA-impl}. Let $r<1$ be an arbitrary scalar, the sequence $\gamma_k$ be non-increasing, and the sequence $\bar x_k$ be given by the following recursive rule for any $k\gammaeq 0$:
\begin{enumerate}gin{align}
&S_{k+1}\triangleq S_k+\gamma_{k+1}^r,\label{def:averagingS}\\
&\bar x_{k+1}\triangleq\frac{S_k \bar x_k +\gamma_{k+1}^r x_{k+1}}{S_{k+1}},\label{def:averaging}
\epsilonnd{align}
where we set $S_0=\gamma_0^r$ and $\bar x_0=x_0$. Then, for any $K\gammaeq 0$, the following result holds:
\begin{enumerate}gin{align}\label{ineq:aveBound}
\EXP{f(\bar x_K)}-f^*\leq\left(\sigma} \def\del{\etaum_{k=0}^{K}\gamma_k^r \right)^{-1}\left(2\gamma_{K}^{r-1}\sigma} \def\del{\etaum_{i=1}^d p_i^{-1}L_{\omega_i}B_i^2+\sigma} \def\del{\etaum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu} \def\n{\nuu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right)\sigma} \def\del{\etaum_{k=0}^{K}\gamma_k^{r+1}\right).
\epsilonnd{align}
Moreover, if $P_b$ is a uniform distribution and $\gamma_k=\frac{\gamma_0}{\sigma} \def\del{\etaqrt{k+1}}$ where $\gamma_0\triangleq\gamma \sigma} \def\del{\etaqrt{d}$ for some $\gamma>0$, then for any $K> \mu} \def\n{\nuax\{\lceil \left(\frac{3-r}{2}\right)^{\frac{2}{1-r}}\rceil,3\}$ we have
\begin{enumerate}gin{align}\label{ineq:aveRate}
\EXP{f(\bar x_K)}-f^*\leq \frac{\mu} \def\n{\nuathcal{B}\sigma} \def\del{\etaqrt{d}}{\sigma} \def\del{\etaqrt{K}},
\epsilonnd{align}
where $\mu} \def\n{\nuathcal{B}\triangleq\left(2-r\right)2^{1-0.5r}\left(\frac{2\sigma} \def\del{\etaum_{i=1}^dL_{\omega_i}B_i^2}{\gamma} +\frac{ \gamma\sigma} \def\del{\etaum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu} \def\n{\nuu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right)}{1-r}\right)$.
\epsilonnd{proposition}
\begin{enumerate}gin{proof}
First, we use induction on $k$ to show \begin{align}\label{equ:aveFormula}\bar x_k=\sum_{t=0}^{k}\left(\frac{\gamma_t^r}{\sum_{j=0}^{k}\gamma_j^r}\right)x_t.\end{align} The relation holds for $k=0$, since $\bar x_0=x_0$. Let us assume \eqref{equ:aveFormula} holds for $k$. Note that from \eqref{def:averagingS}, $S_k=\sum_{j=0}^k \gamma_j^r$. Therefore, the induction hypothesis gives $S_k\bar x_k=\sum_{t=0}^{k}\gamma_t^r x_t$. From \eqref{def:averaging}, we have
\[\bar x_{k+1}=\frac{S_k \bar x_k +\gamma_{k+1}^r x_{k+1}}{S_{k+1}}=\frac{\sigma} \def\del{\etaum_{t=0}^{k}\gamma_t^rx_t +\gamma_{k+1}^r x_{k+1}}{S_{k+1}}=\frac{\sigma} \def\del{\etaum_{t=0}^{k+1}\gamma_t^rx_t}{\sigma} \def\del{\etaum_{j=0}^{k+1}\gamma_j^r}.\]
Thus, \epsilonqref{equ:aveFormula} holds for any $k\gammaeq 0$. Next, we show that \epsilonqref{ineq:aveBound} holds. Consider relation \epsilonqref{ineq:rec-bound} and let $x=x^*$, where $x^*$ is an arbitrary optimal solution of problem \epsilonqref{prob:SOP}. We have
\begin{enumerate}gin{align*}
\EXP{\mu} \def\n{\nuathcal{L}(x_{k+1},x^*)}&\leq \EXP{\mu} \def\n{\nuathcal{L}(x_k,x^*)}-\gamma_k\EXP{\langle F(x_k), x_k-x^*\rangle}+\sigma} \def\del{\etaum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu} \def\n{\nuu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right)\gamma_k^2.
\epsilonnd{align*}
By the convexity of $f$, we have $\langle F(x_k), x_k-x^*\rangle\gammaeq f(x_k)-f^*$. Therefore, we obtain
\begin{enumerate}gin{align*}
\gamma_k\EXP{f(x_k)-f^*}\leq \EXP{\mu} \def\n{\nuathcal{L}(x_k,x^*)}-\EXP{\mu} \def\n{\nuathcal{L}(x_{k+1},x^*)}+\sigma} \def\del{\etaum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu} \def\n{\nuu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right)\gamma_k^2.
\epsilonnd{align*}
Multiplying both sides by $\gamma_k^{r-1}$, we get
\begin{enumerate}gin{align}\label{ineq:rec-boundn}
\gamma_k^r\EXP{f(x_k)-f^*}&\leq \gamma_k^{r-1}\EXP{\mu} \def\n{\nuathcal{L}(x_k,x^*)}-\gamma_k^{r-1}\EXP{\mu} \def\n{\nuathcal{L}(x_{k+1},x^*)}+\sigma} \def\del{\etaum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu} \def\n{\nuu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right)\gamma_k^{r+1}.\epsilonnd{align}
Adding and subtracting the term $\gamma_{k-1}^{r-1}\EXP{\mu} \def\n{\nuathcal{L}(x_{k},x^*)}$, we have
\begin{enumerate}gin{align*}
\gamma_k^r\EXP{f(x_k)-f^*}&\leq \gamma_{k-1}^{r-1}\EXP{\mu} \def\n{\nuathcal{L}(x_k,x^*)}-\gamma_{k}^{r-1}\EXP{\mu} \def\n{\nuathcal{L}(x_{k+1},x^*)}+\left(\gamma_{k}^{r-1}-\gamma_{k-1}^{r-1}\right)\EXP{\mu} \def\n{\nuathcal{L}(x_{k},x^*)}\\ &+\sigma} \def\del{\etaum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu} \def\n{\nuu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right)\gamma_k^{r+1}
\epsilonnd{align*}
Note that using the Lipschitzian property of $\omega_i$, we get
\fyR{\begin{align}\label{boundOnL}\mathcal{L}(x_{k+1},x^*)=\sum_{i=1}^dp_i^{-1}D_i(x_{k+1}^i,{x^*}^i)\leq \sum_{i=1}^d \frac{p_i^{-1}L_{\omega_i}}{2}\left(\|x_{k+1}^i\|_i+\|{x^*}^i\|_i\right)^2\leq 2\sum_{i=1}^d p_i^{-1}L_{\omega_i}B_i^2.
\end{align}}
Also note that since $\gamma_k$ is non-increasing, and $r<1$, we have $\gamma_{k}^{r-1}-\gamma_{k-1}^{r-1}\gammaeq 0$. Therefore, from the two preceding inequalities, we obtain
\begin{enumerate}gin{align*}
\gamma_k^r\EXP{f(x_k)-f^*}&\leq\gamma_{k-1}^{r-1}\EXP{\mu} \def\n{\nuathcal{L}(x_k,x^*)}-\gamma_{k}^{r-1}\EXP{\mu} \def\n{\nuathcal{L}(x_{k+1},x^*)}+\left(\gamma_{k}^{r-1}-\gamma_{k-1}^{r-1}\right)2\sigma} \def\del{\etaum_{i=1}^d p_i^{-1}L_{\omega_i}B_i^2\\ &+\sigma} \def\del{\etaum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu} \def\n{\nuu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right)\gamma_k^{r+1}.
\epsilonnd{align*}
Summing over $k$, from $k=1$ to $K$, we have
\begin{enumerate}gin{align*}
\sigma} \def\del{\etaum_{k=1}^{K}\gamma_k^r\EXP{f(x_k)-f^*}&\leq\gamma_0^{r-1}\EXP{\mu} \def\n{\nuathcal{L}(x_1,x^*)}-\gamma_{K}^{r-1}\EXP{\mu} \def\n{\nuathcal{L}(x_{K+1},x^*)}+\left(\gamma_{K}^{r-1}-\gamma_{0}^{r-1}\right)2\sigma} \def\del{\etaum_{i=1}^d p_i^{-1}L_{\omega_i}B_i^2\\&+\sigma} \def\del{\etaum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu} \def\n{\nuu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right)\sigma} \def\del{\etaum_{k=1}^{K}\gamma_k^{r+1}.
\epsilonnd{align*}
Next, we add the preceding inequality with \epsilonqref{ineq:rec-boundn} for $k=0$:
\begin{enumerate}gin{align*}
\sigma} \def\del{\etaum_{k=0}^{K}\gamma_k^r\EXP{f(x_k)-f^*}&\leq
\left(2\sigma} \def\del{\etaum_{i=1}^d p_i^{-1}L_{\omega_i}B_i^2\right)\gamma_{K}^{r-1}+\sigma} \def\del{\etaum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu} \def\n{\nuu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right)\sigma} \def\del{\etaum_{k=0}^{K}\gamma_k^{r+1},
\epsilonnd{align*}
where in the preceding inequality, \fyR{we employed the bound on $\EXP{\mu} \def\n{\nuathcal{L}(x_0,x^*)}$ given by \epsilonqref{boundOnL}}. Dividing both sides by $\sigma} \def\del{\etaum_{k=0}^{K}\gamma_k^r$, using definition of $\bar x_K$, and taking into account the convexity of $f$, we obtain the inequality \epsilonqref{ineq:aveBound}. In the last part of the proof, we derive the rate statement given by \epsilonqref{ineq:aveRate}. \fyR{We make use of the following inequality holding for $\gamma_k=\frac{\gamma_0}{\sigma} \def\del{\etaqrt{k+1}}$.
\begin{enumerate}gin{align}\label{ineq:boundForSumGammaR}
\sigma} \def\del{\etaum_{k=0}^K\gamma_k^{r+1}\leq \gamma_0^{(r+1)}\left(1+\frac{(K+2)^{0.5(1-r)}-1}{0.5(1-r)}\right), \quad \hbox{for all } r<1.
\epsilonnd{align} The proof for this inequality is provided in Appendix \mu} \def\n{\nuathbb{R}f{A2}.}
Note that since we assumed $K>\lceil \left(\frac{3-r}{2}\right)^{\frac{2}{1-r}}\rceil$, we have
\begin{enumerate}gin{align}\label{ineq:UBforS}
\sigma} \def\del{\etaum_{k=0}^K\gamma_k^{r+1}\leq 2\gamma_0^{(r+1)}\left(\frac{(K+2)^{0.5(1-r)}-1}{0.5(1-r)}\right)\leq \frac{ 4\gamma_0^{(r+1)}(K+2)^{0.5(1-r)}}{1-r}, \quad \hbox{for all } r<1.
\epsilonnd{align}
Let us define $\theta\triangleq\sigma} \def\del{\etaum_{i=1}^d\left(\frac{C_{i}^2+\nu_i^2}{\mu} \def\n{\nuu_{\omega_{i}}}+2L_{i}B_{i}(C_{i}+\tilde \nu_i)\right)$. From \epsilonqref{ineq:UBforS}, \epsilonqref{ineq:aveBound}, and the choice of $P_b$ being a uniform distribution, we obtain
\begin{enumerate}gin{align*}
&\EXP{f(\bar x_K)}-f^*\leq \left(\frac{2(1-0.5r)}{\gamma_0^r(K+1)^{1-0.5r}}\right)\left(2d\left(\sigma} \def\del{\etaum_{i=1}^dL_{\omega_i}B_i^2\right)\gamma_0^{r-1}(K+1)^{0.5(1-r)}+\frac{\theta \gamma_0^{r+1}(K+2)^{0.5(1-r)}}{1-r}\right)\\
&\leq \left(\frac{2-r}{\gamma \sigma} \def\del{\etaqrt{d}(K+2)^{1-0.5r}}\right)\left(\frac{(K+2)^{1-0.5r}}{(K+1)^{1-0.5r}}\right)\left(2d\left(\sigma} \def\del{\etaum_{i=1}^dL_{\omega_i}B_i^2\right) (K+2)^{0.5(1-r)}+\frac{\theta d \gamma^2(K+2)^{0.5(1-r)}}{1-r}\right)\\
& \leq \left(2-r\right)2^{1-0.5r}\left(\frac{2\sigma} \def\del{\etaum_{i=1}^dL_{\omega_i}B_i^2}{\gamma} +\frac{\theta \gamma}{1-r}\right)\frac{\sigma} \def\del{\etaqrt{d}}{\sigma} \def\del{\etaqrt{K+2}}.
\epsilonnd{align*}
Therefore, we conclude the desired rate result.
\epsilonnd{proof}
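The weighted averaging scheme \eqref{def:averagingS}--\eqref{def:averaging} can also be checked numerically. The following sketch, with arbitrary placeholder iterates and a hypothetical stepsize sequence, verifies that the recursion reproduces the closed form \eqref{equ:aveFormula}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
K, r = 200, 0.5
gamma = 1.0 / np.sqrt(np.arange(K + 1) + 1.0)   # a non-increasing stepsize sequence
x = rng.standard_normal((K + 1, 3))             # placeholder iterates x_0, ..., x_K

# recursive averaging, eqs. (def:averagingS)-(def:averaging)
S, xbar = gamma[0] ** r, x[0].copy()
for k in range(K):
    S_new = S + gamma[k + 1] ** r
    xbar = (S * xbar + gamma[k + 1] ** r * x[k + 1]) / S_new
    S = S_new

# closed form (equ:aveFormula)
w = gamma ** r
xbar_direct = (w[:, None] * x).sum(axis=0) / w.sum()
print(np.allclose(xbar, xbar_direct))           # expected: True
\end{verbatim}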
\sigma} \def\del{\etaection{Stochastic mirror-prox algorithm for SCVIs with optimal averaging}\label{sec:SMP}
In this section, our goal is to develop a stochastic mirror-prox algorithm to address SCVIs when the number of component sets is not huge. In contrast with the previous sections, in which we studied the convergence of the B-SMP algorithm, here we employ a stochastic mirror-prox algorithm in which, at each iteration, all the blocks of the solution iterate are updated. Algorithm \ref{algorithm:SMP} presents the steps of the underlying method. Note that here we also employ a weighted averaging scheme similar to that of the previous section. However, the analysis of this section differs from that of Proposition \ref{prop:optaveSCO} for several reasons: (i) in this section, we do not require the Lipschitzian property of the mapping; (ii) here we address SCVIs, while Proposition \ref{prop:optaveSCO} addresses optimization problems; (iii) our scheme here is a full-block scheme, i.e., all the blocks are updated, while the scheme in Proposition \ref{prop:optaveSCO} is a block variant of the SMP method; (iv) lastly, the averaging sequence here is $\bar y_k$, while in Proposition \ref{prop:optaveSCO} we use $\bar x_k$ as the averaging sequence. It is worth noting that, in contrast with our earlier work \cite{Farzad2}, where we employed a distributed stochastic approximation method for solving SCVIs, here we develop a stochastic mirror-prox method that requires two projections for each block in each iteration.
Unlike optimization problems, where the objective function provides a metric
for measuring the performance of the algorithms, there is no immediate analog in
variational inequality problems. Different variants of gap function have been used in the analysis of variational inequalities (cf. Chapter 10 in \cite{facchinei02finite}). To derive a convergence rate, here we use the following gap function that was also employed in \cite{Nem11}.
\begin{enumerate}gin{definition}[Gap function]\label{def:gap1}
Let $X \sigma} \def\del{\etaubset \mu} \def\n{\nuathbb{R}^n$ be a non-empty and closed set.
Suppose that mapping $F: X\rightarrow \mu} \def\n{\nuathbb{R}^n$ is defined on the set $X$. We define the following gap function $\hbox{G}: X \rightarrow \mu} \def\n{\nuathbb{R}^+\cup \{0\}$ to measure the accuracy of a vector $x \in X$:
\begin{enumerate}gin{align}\label{equ:gapf}
G(x)= \sigma} \def\del{\etaup_{y \in X} \langle F(y),x-y\rangle.
\epsilonnd{align}
\epsilonnd{definition}
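As a simple illustration of Definition \ref{def:gap1}, the following sketch evaluates $G$ on a grid for the hypothetical one-dimensional example $X=[0,1]$, $F(y)=y-0.3$ (a monotone map with $\hbox{SOL}(X,F)=\{0.3\}$); the example and the grid approximation are editorial choices, not part of the analysis.
\begin{verbatim}
import numpy as np

# Illustrative 1-D example: X = [0, 1], F(y) = y - 0.3, so SOL(X, F) = {0.3}.
ys = np.linspace(0.0, 1.0, 2001)
F = lambda y: y - 0.3
G = lambda x: np.max(F(ys) * (x - ys))  # grid approximation of sup_{y in X} <F(y), x - y>

for x in np.linspace(0.0, 1.0, 11):
    print(f"x = {x:.1f}   G(x) = {G(x):.4f}")
# G is non-negative on X and (up to grid resolution) vanishes only at x* = 0.3.
\end{verbatim}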
It follows that $G(x)\geq 0$ for any $x \in X$ and, under monotonicity of $F$, $G(x^*)=0$ for any $x^* \in \hbox{SOL}(X,F)$. We note that the function $\hbox{G}$ is indeed also a function of the set $X$ and the map $F$, but we do not \fyR{explicitly display this dependence} and we use $\hbox{G}$ instead of~$\hbox{G}_{X,F}$. The following result provides the optimal convergence rate for the SMP method. \fyR{We show that the expected gap function of the \fyR{averaged} sequence $\bar y_K$ is bounded by a term of the order $\frac{1}{\sqrt{K}}$.}
\begin{proposition}[Rate statement under monotonicity for the SMP algorithm for SCVIs]\label{prop:optaveSCVI} \fyR{Consider problem \eqref{eqn:SCVI}} and assume that the mapping $F$ is monotone on $X$. Let Assumption \ref{assump:main}(a,c,d) hold, and suppose $\xi$ and $\tilde \xi$ are i.i.d. random variables satisfying Assumption \ref{assump:randvar}(b,c). Let $\{x_k\}$ be generated by Algorithm \ref{algorithm:SMP}. Let $r<1$ be an arbitrary scalar, the sequence $\gamma_k$ be non-increasing, and the sequence $\bar y_k$ be given by \eqref{def:averagingS-SCVI}-\eqref{def:averaging-SCVI}. Then, for any $K\geq 1$ the following result holds:
\begin{enumerate}gin{align}\label{ineq:lastineqresult}
\EXP{G(\bar y_{K})}&\leq \left(\sigma} \def\del{\etaum_{k=0}^{K-1}\gamma_k^r\right)^{-1}\left(4\gamma_{K-1}^{r-1}\sigma} \def\del{\etaum_{i=1}^dL_{\omega_i}B_i^2+\sigma} \def\del{\etaum_{k=0}^{K-1}\gamma_k^{r+1}\sigma} \def\del{\etaum_{i=1}^d\left(\frac{2}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\left(2C_{i}^2 +{\tilde \nu_i}^2+1.25\nu_i^2\right)\right).
\epsilonnd{align}
Moreover, if $\gamma_k=\frac{\gamma_0}{\sigma} \def\del{\etaqrt{k+1}}$ for some $\gamma_0>0$, then for any $K> \mu} \def\n{\nuax\{\lceil \left(\frac{3-r}{2}\right)^{\frac{2}{1-r}}\rceil,3\}$, we have
\begin{enumerate}gin{align*}
&\EXP{G(\bar y_{K})}\leq \frac{\mu} \def\n{\nuathcal{M}}{\sigma} \def\del{\etaqrt{K}},
\epsilonnd{align*}
where $\mu} \def\n{\nuathcal{M}\triangleq(2-r)2^{1-0.5r}\left(\frac{4\sigma} \def\del{\etaum_{i=1}^dL_{\omega_i}B_i^2}{\gamma_0}+\frac{\gamma_0\sigma} \def\del{\etaum_{i=1}^d\left(\frac{2}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\left(2C_{i}^2 +{\tilde \nu_i}^2+1.25\nu_i^2\right)}{1-r}\right)$.
\epsilonnd{proposition}
\begin{enumerate}gin{algorithm}
\caption{Stochastic mirror-prox algorithm for SCVIs}
\label{algorithm:SMP}
\begin{enumerate}gin{algorithmic}[1]
\STATE \textbf{initialization:} Set a random initial point $x_0\in X$, a stepsize $\gamma_0>0$, a scalar $r<1$, $y_0=\bar y_0=0 \in \mu} \def\n{\nuathbb{R}^n$, and $\Gamma_0=0$;
\FOR {$k=0,1,\ldots,K-1$}
\FOR {$i=1,\ldots,d$}
\STATE Update the blocks $y_k^i$ and $x_k^i$ using the following relations:
\begin{enumerate}gin{align}
y_{k+1}^{i}&:= P_{i}\left(x_{k}^{i},\gamma_k F_{i}(x_{k},\tilde \xi_k)\right),
\\
x_{k+1}^{i}&:=P_{i}\left(x_{k}^{i},\gamma_k F_{i}(y_{k+1},\xi_k)\right).
\epsilonnd{align}
\ENDFOR
\STATE Update $\Gamma_k$ and $\bar y_{k}$ using the following recursions:
\begin{enumerate}gin{align}
&\Gamma_{k+1}:=\Gamma_k+\gamma_k^r,\label{def:averagingS-SCVI}\\
&\bar y_{k+1}:=\frac{\Gamma_k \bar y_k+\gamma_{k}^r y_{k+1}}{\Gamma_{k+1}}.\label{def:averaging-SCVI}
\epsilonnd{align}
\ENDFOR
\STATE return $\bar y_{K};$
\epsilonnd{algorithmic}
\epsilonnd{algorithm}
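A minimal Python sketch of Algorithm \ref{algorithm:SMP} on a toy monotone (but not strongly monotone) problem is given below. It assumes Euclidean prox mappings (so $P_i(x,g)$ reduces to a projection), a skew-symmetric linear map $F(x)=Sx$ on the box $X=[-1,1]^n$ (for which the gap function admits the closed form $G(x)=\|Sx\|_1$), and Gaussian oracle noise; these modelling choices are illustrative assumptions only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, r, sigma = 4, 0.5, 0.1
S = rng.standard_normal((n, n)); S = S - S.T   # skew-symmetric => F(x) = S x is monotone
F = lambda x: S @ x
proj = lambda z: np.clip(z, -1.0, 1.0)          # Euclidean prox on X = [-1,1]^n
gap = lambda x: np.abs(S @ x).sum()             # closed-form gap for this toy problem

def smp(K, gamma0=1.0):
    x = proj(rng.standard_normal(n))            # random initial point in X
    Gamma, ybar = 0.0, np.zeros(n)
    for k in range(K):
        g = gamma0 / np.sqrt(k + 1)
        y = proj(x - g * (F(x) + sigma * rng.standard_normal(n)))
        x = proj(x - g * (F(y) + sigma * rng.standard_normal(n)))
        Gamma_new = Gamma + g ** r              # eqs. (def:averagingS-SCVI)-(def:averaging-SCVI)
        ybar = (Gamma * ybar + g ** r * y) / Gamma_new
        Gamma = Gamma_new
    return gap(ybar)

for K in (100, 1000, 10000):
    print(f"K={K:6d}   G(ybar_K) = {np.mean([smp(K) for _ in range(10)]):.4f}")
# The averaged gap decays roughly like 1/sqrt(K), in line with the rate statement above.
\end{verbatim}
Note that for this skew map the non-averaged iterates $x_k$ rotate around the solution, which is precisely why the weighted averaging of $y_k$ is used in the rate statement.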
\begin{enumerate}gin{proof}
\fyR{The proof is done in the following three main steps: (Step 1)
In the first step, we derive a recursive bound for a suitably defined error function. Particularly, we consider the function $\bar D:X\times \mu} \def\n{\nuathbb{R}^n \to \mu} \def\n{\nuathbb{R}$ defined as
\[\bar D(x,y)\triangleq\sigma} \def\del{\etaum_{i=1}^dD_i(x^i,y^i) \qquad \hbox{for any }x \in X \hbox{ and } y \in \mu} \def\n{\nuathbb{R}^n.\] This function quantifies the distance of two points characterized by the block Bregman distance functions $D_i$ defined by \epsilonqref{def:Di}; (Step 2) Given the recursive error bound in terms of $\bar D$ in Step 1, we invoke the definition of the gap function \epsilonqref{equ:gapf} and the averaging sequence $\bar y_{k}$ in \epsilonqref{def:averaging-SCVI} to show the inequality \epsilonqref{ineq:lastineqresult}; (Step 3) In the last step, under the assumption that $\gamma_k=\frac{\gamma_0}{\sigma} \def\del{\etaqrt{k+1}}$, we use relation \epsilonqref{ineq:lastineqresult} to derive the rate result. Below, we present the details in each step.
\noindent \textbf{(Step 1)} } For any arbitrary $i$, consider the relation $y_{k+1}^i=P_{i}\left(x_{k}^i,\gamma_kF_{i}(x_{k},\tilde \xi_k)\right)$. Applying Lemma \mu} \def\n{\nuathbb{R}f{lemma:proxprop}(b), we obtain
\begin{enumerate}gin{align}\label{ineq:rel1}
\gamma_k\langle F_{i}(x_k,\tilde \xi_k), y_{k+1}^{i}-x_{k+1}^{i}\rangle +D_i(x_{k}^{i},y_{k+1}^{i}) +D_{i}(y_{k+1}^{i},x_{k+1}^{i})\leq
D_{i}(x_{k}^{i},x_{k+1}^{i}).
\epsilonnd{align}
Let vector $x \in X$ be given. Similarly, from $x_{k+1}^i=P_{i}\left(x_{k}^i,\gamma_kF_{i}(y_{k+1},\xi_{k})\right)$ and Lemma \mu} \def\n{\nuathbb{R}f{lemma:proxprop}(b), we obtain
\begin{enumerate}gin{align*}
\gamma_k\langle \fyR{F_{i}(y_{k+1},\xi_k)}, x_{k+1}^{i}-x^{i}\rangle +D_{i}(x_{k}^{i},x_{k+1}^{i}) +D_{i}(x_{k+1}^{i},x^{i})\leq
D_{i}(x_{k}^{i},x^{i}).
\epsilonnd{align*}
Adding and subtracting $y_{k+1}^{i}$, the preceding relation yields
\begin{enumerate}gin{align}\label{ineq:rel2}
\gamma_k\langle \fyR{F_{i}(y_{k+1},\xi_k)}, x_{k+1}^{i}-y_{k+1}^{i}\rangle +\gamma_k\langle F_{i}(y_{k+1},\xi_k), y_{k+1}^{i}-x^{i}\rangle+D_{i}(x_{k}^{i},x_{k+1}^{i}) +D_{i}(x_{k+1}^{i},x^{i})\leq
D_{i}(x_{k}^{i},x^{i}).
\epsilonnd{align}
Adding \epsilonqref{ineq:rel1} and \epsilonqref{ineq:rel2}, we obtain
\begin{enumerate}gin{align*}
&\gamma_k\langle \fyR{F_{i}(y_{k+1},\xi_k)}-F_{i}(x_k,\tilde \xi_k), x_{k+1}^{i}-y_{k+1}^{i}\rangle +\gamma_k\langle \fyR{F_{i}(y_{k+1},\xi_k)}, y_{k+1}^{i}-x^{i}\rangle \notag\\ &+D_{i}(x_{k}^{i},y_{k+1}^{i}) +D_{i}(y_{k+1}^{i},x_{k+1}^{i})+D_{i}(x_{k+1}^{i},x^{i})\leq
D_{i}(x_{k}^{i},x^{i}) .
\epsilonnd{align*}
Rearranging the terms in the preceding inequality, we obtain
\begin{enumerate}gin{align*}
D_i(x_{k+1}^i,x^i) &\leq D_i(x_{k}^i,{x}^i)+\gamma_k\langle \fyR{F_{i}(x_k,\tilde \xi_k)-F_{i}(y_{k+1},\xi_k)}, x_{k+1}^{i}-y_{k+1}^{i}\rangle\\
& +\gamma_k\langle F_{i}(y_{k+1},\xi_k), {x}^{i}-y_{k+1}^{i}\rangle -D_{i}(x_{k}^{i},y_{k+1}^{i})-D_{i}(y_{k+1}^{i},x_{k+1}^{i}).
\epsilonnd{align*}
Using the definition of stochastic errors $\fyR{\tilde w_k}$ and $w_k$, in the preceding result, we can substitute $F_{i}(x_k,\tilde \xi_k)$ by $F_i(x_k)+{\tilde w_k}^i$, and $F_{i}(y_{k+1},\xi_k)$ by $F_i(y_{k+1})+{w_k}^i$. We then obtain
\begin{enumerate}gin{align*}
D_i(x_{k+1}^i,x^i) &\leq D_i(x_{k}^i,{x}^i)+\gamma_k\underbrace{\langle F_{i}(x_k)-F_{i}(y_{k+1}), x_{k+1}^{i}-y_{k+1}^{i}\rangle}_{\mu} \def\n{\nubox{Term}\ 1}\\
& +\gamma_k\langle F_{i}(y_{k+1}), {x}^{i}-y_{k+1}^{i}\rangle -D_{i}(x_{k}^{i},y_{k+1}^{i})- D_{i}(y_{k+1}^{i},x_{k+1}^{i})\\
& +\gamma_k\underbrace{\langle {\fyR{\tilde w_k}}^{i}-w_k^{i}, x_{k+1}^{i}-y_{k+1}^{i}\rangle}_{\mu} \def\n{\nubox{Term}\ 2}+\gamma_k\langle w_k^{i}, {x}^{i}-y_{k+1}^{i}\rangle.
\epsilonnd{align*}
Applying Fenchel's inequality to Term $1$, we obtain
\begin{enumerate}gin{align}\label{term1}\mu} \def\n{\nubox{Term}\ 1&=\left\langle \sigma} \def\del{\etaqrt{\frac{2}{\mu} \def\n{\nuu_{\omega_{i}}}}\fyR{\left(F_{i}(x_k)-F_{i}(y_{k+1})\right)}, \sigma} \def\del{\etaqrt{\frac{\mu} \def\n{\nuu_{\omega_{i}}}{2}}\left(x_{k+1}^{i}-y_{k+1}^{i}\right)\right\rangle \notag\\
&\leq \frac{1}{2}\left(\frac{2}{\mu} \def\n{\nuu_{\omega_i}}\right)\| F_{i}(x_k)-F_{i}(y_{k+1})\|_{*i}^2+\frac{1}{2}\left(\frac{\mu} \def\n{\nuu_{\omega_i}}{2}\right)\|x_{k+1}^{i}-y_{k+1}^{i}\|_{i}^2.\epsilonnd{align}
Similarly, we may obtain a bound on $\mu} \def\n{\nubox{Term}\ 2$. Using strong convexity of $\omega_{i}$ (cf. Lemma \mu} \def\n{\nuathbb{R}f{lemma:proxprop}(a)), we conclude
\begin{enumerate}gin{align*}
D_i(x_{k+1}^i,x^i)&\leq D_i(x_{k}^i,{x}^i)+\gamma_k^2\left(\frac{1}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\| F_{i}(x_k)-F_{i}(y_{k+1})\|_{*i}^2+ \left(\frac{\mu} \def\n{\nuu_{\omega_i}}{4}\right)\|x_{k+1}^{i}-y_{k+1}^{i}\|_{i}^2\\
& +\gamma_k\langle F_{i}(y_{k+1}), {x}^{i}-y_{k+1}^{i}\rangle -\frac{\mu} \def\n{\nuu_{\omega_i}}{2}\|x_{k}^{i}-y_{k+1}^{i}\|_{i}^2- \frac{\mu} \def\n{\nuu_{\omega_i}}{2}\|y_{k+1}^{i}-x_{k+1}^{i}\|_{i}^2\\
& +\gamma_k^2\left(\frac{1}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\| {\tilde w}_k^{i}-w_k^{i}\|_{*{i}}^2+\left(\frac{\mu} \def\n{\nuu_{\omega_{i}}}{4}\right)\|x_{k+1}^{i}-y_{k+1}^{i}\|_{i}^2+\gamma_k\langle w_k^{i}, {x}^{i}-y_{k+1}^{i}\rangle.
\epsilonnd{align*}
Invoking Assumption \mu} \def\n{\nuathbb{R}f{assump:main}(c) and that for any $a,b \in \mu} \def\n{\nuathbb{R}$, $(a+b)^2\leq 2(a^2+b^2)$, we have
\begin{enumerate}gin{align*}
D_i(x_{k+1}^i,x^i)&\leq D_i(x_{k}^i,{x}^i)+\left(\frac{4}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\gamma_k^2C_i^2+\gamma_k\langle F_{i}(y_{k+1}), {x}^{i}-y_{k+1}^{i}\rangle -\frac{\mu} \def\n{\nuu_{\omega_i}}{2}\|x_{k}^{i}-y_{k+1}^{i}\|_{i}^2\\
& +\gamma_k^2\left(\frac{1}{\mu} \def\n{\nuu_{\omega_i}}\right)\| {\fyR{\tilde w_k}}^{i}-w_k^{i}\|_{*{i}}^2+\gamma_k\langle w_k^{i}, {x}^{i}-y_{k+1}^{i}\rangle \\
& \leq D_i(x_{k}^i,{x}^i)+\left(\frac{4}{\mu} \def\n{\nuu_{\omega_i}}\right)\gamma_k^2C_{i}^2+\gamma_k\langle F_{i}(y_{k+1}), {x}^{i}-y_{k+1}^{i}\rangle \\
&+\gamma_k^2\left(\frac{1}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\| {\fyR{\tilde w_k}}^{i}-w_k^{i}\|_{*{i}}^2+\gamma_k\langle w_k^{i}, {x}^{i}-y_{k+1}^{i}\rangle.
\epsilonnd{align*}
Using the triangle inequality for the dual norm $\|\cdot\|_{*i}$, from the preceding inequality we have
\begin{enumerate}gin{align}\label{ineq:intermediate0}
D_i(x_{k+1}^i,x^i)&\leq D_i(x_{k}^i,{x}^i) +\left(\frac{4}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\gamma_k^2C_{i}^2 +\gamma_k^2\left(\frac{2}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\| {\fyR{\tilde w_k}}^{i}\|_{*{i}}^2+\gamma_k^2\left(\frac{2}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\|w_k^{i}\|_{*{i}}^2\notag\\ &+\gamma_k\langle F_{i}(y_{k+1}), {x}^{i}-y_{k+1}^{i}\rangle+\gamma_k\langle w_k^{i}, {x}^{i}-y_{k+1}^{i}\rangle.
\epsilonnd{align}
We now estimate the term $\gamma_k\langle w_k^{i}, {x}^{i}-y_{k+1}^{i}\rangle$. Let us define a sequence of vectors $u_{k}\triangleq(u_k^1;u_k^2;\ldots;u_k^d)$ as follows:
\begin{enumerate}gin{align}\label{def:ut}
u_{k+1}^{i}=P_{i}(u_k^{i},-\gamma_kw_{k}^{i}), \quad \hbox{for all } k\gammaeq 0,
\epsilonnd{align}
where $u_0=x_0$. We can write
\begin{enumerate}gin{align}\label{boundOnLastTerm}\gamma_k\langle w_k^{i}, {x}^{i}-y_{k+1}^{i}\rangle =\langle \gamma_kw_k^{i}, {x}^{i}-u_{k}^{i}\rangle+\gamma_k\langle w_k^{i}, {u}_k^{i}-y_{k+1}^{i}\rangle. \epsilonnd{align}
Applying Lemma \ref{lemma:proxprop}(c) and taking into account the definition of $u_k$ in \eqref{def:ut}, we have
\begin{enumerate}gin{align*}
D_{i}(u_{k+1}^{i},x^{i})\leq D_{i}(u_k^{i},x^{i})-\langle \gamma_kw_k^{i}, {x}^{i}-u_{k}^{i}\rangle+\frac{\gamma_k^2}{2\mu} \def\n{\nuu_{\omega_{i}}}\|w_k^{i}\|_{*i}^2.
\epsilonnd{align*}
Therefore, from the preceding relation and \epsilonqref{boundOnLastTerm}, we obtain
\begin{enumerate}gin{align*}
\gamma_k\langle w_k^{i}, x^{i}-y_{k+1}^{i}\rangle \leq D_i(u_{k}^i,x^i)-D_i(u_{k+1}^i,x^i)+\frac{\gamma_k^2}{2\mu} \def\n{\nuu_{\omega_{i}}}\|w_k^{i}\|_{*i}^2+\gamma_k\langle w_k^{i}, {u}_k^{i}-y_{k+1}^{i}\rangle.
\epsilonnd{align*}
From \epsilonqref{ineq:intermediate0} and the preceding relation, we obtain
\begin{enumerate}gin{align*}
D_i(x_{k+1}^i,x^i)&\leq D_i(x_{k}^i,{x}^i) +\left(\frac{2\gamma_k^2}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\left(2C_{i}^2 +\| {\fyR{\tilde w_k}}^{i}\|_{*{i}}^2+1.25\|w_k^{i}\|_{*{i}}^2\right)\\
&+D_i(u_{k}^i,x^i)-D_i(u_{k+1}^i,x^i)+\gamma_k\langle F_{i}(y_{k+1}), {x}^{i}-y_{k+1}^{i}\rangle+\gamma_k\langle w_k^{i}, {u}_k^{i}-y_{k+1}^{i}\rangle.
\epsilonnd{align*}
By summing both sides of the preceding relation over $i$, invoking the definition of function $\bar D$, and the aggregated inner product, we obtain
\begin{enumerate}gin{align*}
\bar D(x_{k+1},x)&\leq \bar D(x_{k},{x}) +\sigma} \def\del{\etaum_{i=1}^d\left(\frac{2\gamma_k^2}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\left(2C_{i}^2 +\| {\fyR{\tilde w_k}}^{i}\|_{*{i}}^2+1.25\|w_k^{i}\|_{*{i}}^2\right)\\
&+\bar D(u_{k},x)-\bar D(u_{k+1},x)+\gamma_k\langle F(y_{k+1}), {x}-y_{k+1}\rangle+\gamma_k\langle w_k, {u}_k-y_{k+1}\rangle.
\epsilonnd{align*}
\fyR{\noindent \textbf{(Step 2)}} Next, using monotonicity of $F$ and by rearranging the terms, we further obtain
\begin{enumerate}gin{align}\label{ineq:boundsomething1}
\gamma_k\langle F(x), y_{k+1}-x\rangle&\leq \left(\bar D(x_{k},{x})+\bar D(u_{k},x)\right)-\left(\bar D(x_{k+1},x)+\bar D(u_{k+1},x)\right)\notag\\
&+\gamma_k^2\sigma} \def\del{\etaum_{i=1}^d\left(\frac{2}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\left(2C_{i}^2 +\| {\fyR{\tilde w_k}}^{i}\|_{*{i}}^2+1.25\|w_k^{i}\|_{*{i}}^2\right)+\gamma_k\langle w_k, {u}_k-y_{k+1}\rangle.
\epsilonnd{align}
Next, multiplying both sides by $\gamma_k^{r-1}$, and adding and subtracting $\gamma_{k-1}^{r-1}\left(\bar D(x_{k},{x})+\bar D(u_{k},x)\right)$, we obtain
\begin{enumerate}gin{align*}
\gamma_k^r\langle F(x), y_{k+1}-x\rangle&\leq \gamma_{k-1}^{r-1}\left(\bar D(x_{k},{x})+\bar D(u_{k},x)\right)-\gamma_k^{r-1}\left(\bar D(x_{k+1},x)+\bar D(u_{k+1},x)\right)\\
& +\left(\gamma_{k}^{r-1}-\gamma_{k-1}^{r-1}\right)\left(\bar D(x_{k},{x})+\bar D(u_{k},x)\right)\\
&+\gamma_k^{r+1}\sigma} \def\del{\etaum_{i=1}^d\left(\frac{2}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\left(2C_{i}^2 +\| {\fyR{\tilde w_k}}^{i}\|_{*{i}}^2+1.25\|w_k^{i}\|_{*{i}}^2\right)+\gamma_k^r\langle w_k, {u}_k-y_{k+1}\rangle.
\epsilonnd{align*}
Note that since $\gamma_k$ is non-increasing and $r<1$, we have $\gamma_{k}^{r-1}-\gamma_{k-1}^{r-1}\geq 0$. Note further that using Lemma \ref{lemma:proxprop}(a) and the definition of $\bar D$, we have\begin{align}\label{ieq:boundonDbar}\bar D(x_{k},{x}) \leq \sum_{i=1}^d\frac{L_{\omega_i}}{2}\|x_k^i-x^i\|_i^2\leq \sum_{i=1}^d\frac{L_{\omega_i}}{2}(\|x_k^i\|_i+\|x^i\|_i)^2\leq 2\sum_{i=1}^dL_{\omega_i}B_i^2.\end{align}
Similarly, we get $\bar D(u_{k},{x})\leq 2\sigma} \def\del{\etaum_{i=1}^dL_{\omega_i}B_i^2$.
Using these bounds, summing over $k$ from $1$ to $K-1$, and then removing the negative terms on the right-hand side of the resulting inequality, we obtain
\begin{enumerate}gin{align*}
\sigma} \def\del{\etaum_{k=1}^{K-1}\langle F(x), \gamma_k^ry_{k+1}-x\rangle&\leq \gamma_{0}^{r-1}\left(\bar D(x_{1},{x})+\bar D(u_{1},x)\right)+\left(\gamma_{K-1}^{r-1}-\gamma_{0}^{r-1}\right)4\sigma} \def\del{\etaum_{i=1}^dL_{\omega_i}B_i^2\\
&+\sigma} \def\del{\etaum_{k=1}^{K-1}\gamma_k^{r+1}\sigma} \def\del{\etaum_{i=1}^d\left(\frac{2}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\left(2C_{i}^2 +\| {\fyR{\tilde w_k}}^{i}\|_{*{i}}^2+1.25\|w_k^{i}\|_{*{i}}^2\right)+\sigma} \def\del{\etaum_{k=1}^{K-1}\gamma_k^r\langle w_k, {u}_k-y_{k+1}\rangle.
\epsilonnd{align*}
Consider \epsilonqref{ineq:boundsomething1} for $k=0$. By multiplying both sides of that relation by $\gamma_0^{r-1}$, and then summing with the preceding inequality, we obtain
\begin{enumerate}gin{align*}
\langle F(x), \sigma} \def\del{\etaum_{k=0}^{K-1}\gamma_k^r y_{k+1}-x\rangle&\leq \gamma_{0}^{r-1}\left(\bar D(x_{0},{x})+\bar D(u_{0},x)\right)+\left(\gamma_{K-1}^{r-1}-\gamma_{0}^{r-1}\right)4\sigma} \def\del{\etaum_{i=1}^dL_{\omega_i}B_i^2\\
&+\sigma} \def\del{\etaum_{k=0}^{K-1}\gamma_k^{r+1}\sigma} \def\del{\etaum_{i=1}^d\left(\frac{2}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\left(2C_{i}^2 +\| {\fyR{\tilde w_k}}^{i}\|_{*{i}}^2+1.25\|w_k^{i}\|_{*{i}}^2\right)+\sigma} \def\del{\etaum_{k=0}^{K-1}\gamma_k^r\langle w_k, {u}_k-y_{k+1}\rangle.
\epsilonnd{align*}
Dividing both sides by $\sigma} \def\del{\etaum_{k=0}^{K-1}\gamma_k^r$, invoking the definition of $\bar y_{K}$, and using \epsilonqref{ieq:boundonDbar}, we have
\begin{enumerate}gin{align*}
\langle F(x), \bar y_{K}-x\rangle&\leq \left(\sigma} \def\del{\etaum_{k=0}^{K-1}\gamma_k^r\right)^{-1}\left(4\gamma_{K-1}^{r-1}\sigma} \def\del{\etaum_{i=1}^dL_{\omega_i}B_i^2+\sigma} \def\del{\etaum_{k=0}^{K-1}\gamma_k^{r+1}\sigma} \def\del{\etaum_{i=1}^d\left(\frac{2}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\left(2C_{i}^2 +\| {\fyR{\tilde w_k}}^{i}\|_{*{i}}^2+1.25\|w_k^{i}\|_{*{i}}^2\right)\right.\\
&\left.+\sigma} \def\del{\etaum_{k=0}^{K-1}\gamma_k^r\langle w_k, {u}_k-y_{k+1}\rangle\right).
\epsilonnd{align*}
Note that since the right-hand side of the preceding relation is independent of $x$, taking the supremum over $x\in X$ on the left-hand side and using the definition of the gap function $G$, we obtain \begin{align*}
G(\bar y_{K})&\leq \left(\sigma} \def\del{\etaum_{k=0}^{K-1}\gamma_k^r\right)^{-1}\left(4\gamma_{K-1}^{r-1}\sigma} \def\del{\etaum_{i=1}^dL_{\omega_i}B_i^2+\sigma} \def\del{\etaum_{k=0}^{K-1}\gamma_k^{r+1}\sigma} \def\del{\etaum_{i=1}^d\left(\frac{2}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\left(2C_{i}^2 +\| {\fyR{\tilde w_k}}^{i}\|_{*{i}}^2+1.25\|w_k^{i}\|_{*{i}}^2\right)\right.\\
&\left.+\sigma} \def\del{\etaum_{k=0}^{K-1}\gamma_k^r\langle w_k, {u}_k-y_{k+1}\rangle\right).
\epsilonnd{align*}
Next, taking expectations on both sides and using Assumption \ref{assump:randvar}(b,c), we obtain the inequality \eqref{ineq:lastineqresult}.
\fyR{\noindent \textbf{(Step 3)}} In the remainder of the proof, we derive the rate statement. For $\gamma_k=\frac{\gamma_0}{\sqrt{k+1}}$, in a similar fashion to the proof of \fyR{inequality \eqref{ineq:boundForSumGammaR}}, for $K\geq 4$, we have
\begin{enumerate}gin{align}\label{ineq:LBforSII}
\sigma} \def\del{\etaum_{k=0}^{K-1}\gamma_k^{r}\gammaeq \frac{\gamma_0^{r}(K+1)^{1-0.5r}}{2(1-0.5r)}, \quad \hbox{for all } r<1.
\epsilonnd{align}
Also, since we assumed that $K>\lceil \left(\frac{3-r}{2}\right)^{\frac{2}{1-r}}\rceil$, we see that
\begin{enumerate}gin{align}\label{ineq:UBforSII}
\sigma} \def\del{\etaum_{k=0}^{K-1}\gamma_k^{r+1}\leq \frac{ 4\gamma_0^{(r+1)}(K+1)^{0.5(1-r)}}{1-r}, \quad \hbox{for all } r<1.
\epsilonnd{align}
From \epsilonqref{ineq:LBforSII}, \epsilonqref{ineq:UBforSII}, and \epsilonqref{ineq:lastineqresult}, we obtain
\begin{enumerate}gin{align*}
&\EXP{G(\bar y_{K})}\leq \left(\frac{2(1-0.5r)}{\gamma_0^rK^{1-0.5r}}\right)\left(4\gamma_0^{r-1}K^{0.5(1-r)}\sigma} \def\del{\etaum_{i=1}^dL_{\omega_i}B_i^2+\frac{\mu} \def\n{\nuathcal{C}\gamma_0^{r+1}(K+1)^{0.5(1-r)}}{1-r}\right)\\
&\leq \left(\frac{2-r}{\gamma_0(K+1)^{1-0.5r}}\right)\left(\frac{(K+1)^{1-0.5r}}{K^{1-0.5r}}\right)\left(4\sigma} \def\del{\etaum_{i=1}^dL_{\omega_i}B_i^2(K+1)^{0.5(1-r)}+\frac{\mu} \def\n{\nuathcal{C}\gamma_0^2(K+1)^{0.5(1-r)}}{1-r}\right)\\
& \leq (2-r)2^{1-0.5r}\left(\frac{4\sigma} \def\del{\etaum_{i=1}^dL_{\omega_i}B_i^2}{\gamma_0}+\frac{\mu} \def\n{\nuathcal{C}\gamma_0}{1-r}\right)\frac{1}{\sigma} \def\del{\etaqrt{K+1}}.
\epsilonnd{align*}
where $\mu} \def\n{\nuathcal{C}\triangleq\sigma} \def\del{\etaum_{i=1}^d\left(\frac{2}{\mu} \def\n{\nuu_{\omega_{i}}}\right)\left(2C_{i}^2 +{\tilde \nu_i}^2+1.25\nu_i^2\right)$. Therefore, we obtain the desired rate statement.
\epsilonnd{proof}
\sigma} \def\del{\etaection{Concluding remarks}\label{sec:conRem}
Motivated by the challenges arising from computing \fyR{solutions} to large-scale stochastic Cartesian variational inequality problems, in the first part of the paper we develop a randomized block coordinate stochastic mirror-prox (B-SMP) algorithm. At each iteration, only a randomly selected block coordinate of the solution iterate is updated through two consecutive projection steps. Under the assumption of strict pseudo-monotonicity of the mapping, we first prove almost sure convergence of the generated sequence to the unique solution of the problem. To derive rate statements, we consider SCVI problems with strongly pseudo-monotone mappings and derive the optimal convergence rate in terms of the problem parameters, prox mapping parameters, iteration number, and number of blocks. We then consider stochastic convex optimization problems on sets with block structure. Under a new weighted averaging scheme, we derive the associated convergence rate. In the second part of the paper, we develop a stochastic mirror-prox algorithm for solving SCVIs in which all the blocks are updated at each iteration. We show that, using a different weighted averaging sequence, the optimal convergence rate can be achieved.
\sigma} \def\del{\etaection{Appendix}
\sigma} \def\del{\etaubsection{Proof of Lemma \mu} \def\n{\nuathbb{R}f{lem:uniqueness}}\label{A1}
We show that VI$(X,F)$ has a unique solution. To arrive at a contradiction, let us assume that $x^*, {x^*}' \in $ SOL$(X,F)$ with $x^*\neq {x^*}'$. Since $x^*$ is a solution to VI$(X,F)$ and
${x^*}' \in X$, we have \begin{align}\label{ineq:spse1}\langle F(x^*),{x^*}'-x^*\rangle \ge 0.\end{align} Similarly, since ${x^*}'$ is a solution to VI$(X,F)$ and ${x^*} \in X$, we have
\begin{enumerate}gin{align}\label{ineq:spse2}\langle F({x^*}'),x^*-{x^*}'\rangle \gammae 0.\epsilonnd{align}
From the preceding two inequalities, we obtain
\begin{enumerate}gin{align}\label{ineq:spse3}\langle F(x^*)-F({x^*}'),x^*-{x^*}'\rangle \le 0.\epsilonnd{align}
Next, invoking the definition of strict pseudo-monotonicity of $F$, from \eqref{ineq:spse1}, we obtain $\langle F({x^*}'),{x^*}'-x^*\rangle > 0.$ Similarly, from \eqref{ineq:spse2}, we have $\langle F({x^*}),x^*-{x^*}'\rangle > 0$. Summing the preceding two inequalities, we have \begin{align*}\langle F({x^*})-F({x^*}'),x^*-{x^*}'\rangle > 0,\end{align*}
which contradicts \epsilonqref{ineq:spse3}. \fyR{Therefore, $x^*={x^*}'$ implying that VI$(X,F)$ has at most one solution. From Remark \mu} \def\n{\nuathbb{R}f{rem1}, we conclude that VI$(X,F)$ has a unique solution.}
\subsection{Proof of inequality \eqref{ineq:boundForSumGammaR}}\label{A2}
Since $\gamma_k=\frac{\gamma_0}{\sqrt{k+1}}$, we have $\gamma_0^{-(r+1)}\sum_{k=0}^K\gamma_k^{r+1}=\sum_{k=1}^{K+1}k^{-0.5(r+1)}$. Consider the following two cases:\\
(i) If $-1<r<1$, the function $x^{-0.5(r+1)}$ is non-increasing on $[1,\infty)$. Therefore,
\[\sum_{k=1}^{K+1}k^{-0.5(r+1)}\leq 1+\int_{1}^{K+1}x^{-0.5(r+1)}dx=1+\frac{(K+1)^{0.5(1-r)}-1}{0.5(1-r)}\leq 1+\frac{(K+2)^{0.5(1-r)}-1}{0.5(1-r)}.\]
(ii) If $r\leq -1$, the function $x^{-0.5(r+1)}$ is non-decreasing on $[1,\infty)$, and therefore
\[\sum_{k=1}^{K+1}k^{-0.5(r+1)}\leq \int_{1}^{K+2}x^{-0.5(r+1)}dx=\frac{(K+2)^{0.5(1-r)}-1}{0.5(1-r)}\leq 1+\frac{(K+2)^{0.5(1-r)}-1}{0.5(1-r)}.\]
In both cases, multiplying by $\gamma_0^{r+1}$ yields \eqref{ineq:boundForSumGammaR}. By the analogous integral comparison applied to $\gamma_0^{-r}\sum_{k=0}^K\gamma_k^{r}=\sum_{k=1}^{K+1}k^{-0.5r}$, we also obtain a lower bound: for $r\leq 0$, since $x^{-0.5r}$ is non-decreasing,
\[\sum_{k=1}^{K+1}k^{-0.5r}\geq \int_{0}^{K+1}x^{-0.5r}dx\geq \frac{(K+1)^{1-0.5r}-1}{1-0.5r},\]
while for $0<r<1$, since $x^{-0.5r}$ is non-increasing,
\[\sum_{k=1}^{K+1}k^{-0.5r}\geq \int_{1}^{K+2}x^{-0.5r}dx\geq \frac{(K+1)^{1-0.5r}-1}{1-0.5r}.\]
Therefore,
\begin{align*}
\sum_{k=0}^K\gamma_k^{r}\geq \gamma_0^{r}\left(\frac{(K+1)^{1-0.5r}-1}{1-0.5r}\right), \quad \hbox{for all } r<1,
\end{align*}
which is used in deriving the rate statements in Sections \ref{sec:rate} and \ref{sec:SMP}.
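For completeness, the two integral-comparison bounds above can be verified numerically; the sketch below checks them for a few illustrative values of $r$ and $K$, with $\gamma_0=1$.
\begin{verbatim}
import numpy as np

gamma0 = 1.0
for r in (-2.0, -0.5, 0.0, 0.5, 0.9):
    for K in (10, 100, 1000):
        g = gamma0 / np.sqrt(np.arange(K + 1) + 1.0)
        upper = gamma0 ** (r + 1) * (1 + ((K + 2) ** (0.5 * (1 - r)) - 1) / (0.5 * (1 - r)))
        lower = gamma0 ** r * (((K + 1) ** (1 - 0.5 * r) - 1) / (1 - 0.5 * r))
        assert np.sum(g ** (r + 1)) <= upper + 1e-9   # inequality (ineq:boundForSumGammaR)
        assert np.sum(g ** r) >= lower - 1e-9         # the lower bound derived above
print("both bounds verified")
\end{verbatim}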
\sigma} \def\del{\etaubsection{Proof of Lemma \mu} \def\n{\nuathbb{R}f{lemma:rateHarmonic}}\label{A3}
We use induction to show \epsilonqref{ekbound1}. For $k=K$, we have
\[e_K=\frac{Ke_K}{K}\leq \frac{\mu} \def\n{\nuax \{\frac{\begin{enumerate}ta\gamma^2}{\alphalpha \gamma-1},Ke_K\}}{K},\]
implying that \epsilonqref{ekbound1} holds for $k=K$. Let us assume \epsilonqref{ekbound1} holds for some $k\gammaeq K$. Note that we have
\[k\gammaeq \lceil \alphalpha \gamma\rceil \gammaeq \alphalpha \gamma \Rightarrow 1-\frac{\alphalpha \gamma}{k}\gammaeq0.\]Then, using the preceding inequality, relation \epsilonqref{ekbound0}, $\gamma_k=\frac{\gamma}{k}$, and the induction hypothesis, we can write
\begin{enumerate}gin{align}\label{ekbound3}e_{k+1}\leq \left(1-\frac{\alphalpha\gamma}{k}\right)e_k+\frac{\begin{enumerate}ta \gamma^2}{k^2}\leq \left(1-\frac{\alphalpha\gamma}{k}\right)\frac{\theta}{k}+\frac{\begin{enumerate}ta \gamma^2}{k^2},\epsilonnd{align}
where $\theta\triangleq\mu} \def\n{\nuax \{\frac{\begin{enumerate}ta\gamma^2}{\alphalpha \gamma-1},Ke_K\}$. From the definition of $\theta$, we have
\begin{enumerate}gin{align*}
&\theta\gammaeq \frac{\begin{enumerate}ta\gamma^2}{\alphalpha \gamma-1} \Rightarrow \theta \alphalpha \gamma -\begin{enumerate}ta\gamma^2 \gammaeq \theta \Rightarrow \frac{\alphalpha \gamma \theta -\begin{enumerate}ta \gamma^2}{\theta}\gammaeq 1>\frac{k}{k+1}\Rightarrow \frac{\alphalpha \gamma \theta -\begin{enumerate}ta \gamma^2}{k}\gammaeq \frac{\theta}{k+1}\\
& \Rightarrow \frac{\alphalpha \gamma \theta -\begin{enumerate}ta \gamma^2}{k^2}\gammaeq \frac{\theta}{k(k+1)}\Rightarrow \frac{\alphalpha \gamma \theta}{k^2} -\frac{\begin{enumerate}ta \gamma^2}{k^2}\gammaeq \frac{\theta}{k}-\frac{\theta}{k+1}\Rightarrow \frac{\theta}{k+1}\gammaeq \left(1-\frac{\alphalpha\gamma}{k}\right)\frac{\theta}{k}+\frac{\begin{enumerate}ta \gamma^2}{k^2}.
\epsilonnd{align*}
Invoking \eqref{ekbound3}, we have $e_{k+1}\leq \frac{\theta}{k+1}$. Therefore, \eqref{ekbound1} holds for all $k\geq K$. Next, we show \eqref{ekbound2}. Note that since $1< \alpha \gamma = 2$, we have $K=\lceil \alpha \gamma\rceil=2$. Therefore, from \eqref{ekbound1} and $K=2$, we have
\begin{enumerate}gin{align*}
e_k\leq \frac{\mu} \def\n{\nuax \{\frac{\begin{enumerate}ta\gamma^2}{\alphalpha \gamma-1},2e_2\}}{k}, \qquad \hbox{for all } k\gammaeq 2.
\epsilonnd{align*}
Replacing $\gamma$ by $\frac{2}{\alphalpha}$, we obtain \begin{enumerate}gin{align}\label{ekbound4}
e_k\leq \frac{\mu} \def\n{\nuax \{\frac{4\begin{enumerate}ta}{\alphalpha^2},2e_2\}}{k}, \qquad \hbox{for all } k\gammaeq 2.
\epsilonnd{align}
Next we find a bound on $e_2$. From \epsilonqref{ekbound0}, $\gamma=\frac{2}{\alphalpha}$, and that $\gamma_1=\gamma$, we have \[e_2\leq (1-\alphalpha \gamma)e_1+\begin{enumerate}ta \gamma^2=\frac{4\begin{enumerate}ta}{\alphalpha^2}-e_1 \leq \frac{4\begin{enumerate}ta}{\alphalpha^2},\]
where the last inequality is due to non-negativity of $e_1$. Therefore, using the preceding relation and \epsilonqref{ekbound4}, we obtain the desired result.
\epsilonnd{document}
Using the definition of $D$ again, and from the definition of stochastic errors $\fyR{\tilde w_k}$ and $w_k$, we have
\begin{enumerate}gin{align*}
D(x_{k+1},x) &\leq D(x_{k},{x})+p_{i_k}^{-1}\gamma_k\langle F_{i_k}(x_k)-F_{i_k}(y_{k+1}), x_{k+1}^{i_k}-y_{k+1}^{i_k}\rangle\\
& +p_{i_k}^{-1}\gamma_k\langle F_{i_k}(y_{k+1},\xi_k), {x}^{i_k}-y_{k+1}^{i_k}\rangle -p_{i_k}^{-1}D_{i_k}(x_{k}^{i_k},y_{k+1}^{i_k})- p_{i_k}^{-1}D_{i_k}(y_{k+1}^{i_k},x_{k+1}^{i_k})\\
&\leq D(x_{k},{x})+p_{i_k}^{-1}\gamma_k\underbrace{\langle F_{i_k}(x_k)-F_{i_k}(y_{k+1}), x_{k+1}^{i_k}-y_{k+1}^{i_k}\rangle}_{Term 1}\\
& +p_{i_k}^{-1}\gamma_k\langle F_{i_k}(y_{k+1}), {x}^{i_k}-y_{k+1}^{i_k}\rangle -p_{i_k}^{-1}D_{i_k}(x_{k}^{i_k},y_{k+1}^{i_k})- p_{i_k}^{-1}D_{i_k}(y_{k+1}^{i_k},x_{k+1}^{i_k})\\
& +p_{i_k}^{-1}\gamma_k\underbrace{\langle {\fyR{\tilde w_k}}^{i_k}-w_k^{i_k}, x_{k+1}^{i_k}-y_{k+1}^{i_k}\rangle}_{Term 2}+p_{i_k}^{-1}\gamma_k\langle w_k^{i_k}, {x}^{i_k}-y_{k+1}^{i_k}\rangle.
\epsilonnd{align*}
Applying Fenchel's inequality on Term 1, and we obtain
\begin{enumerate}gin{align}\label{term1}Term\ 1&=\left\langle \sigma} \def\del{\etaqrt{\frac{2}{\mu} \def\n{\nuu_{\omega_{i_k}}}}\left(F_{i_k}(x_k,\tilde \xi_k)-F_{i_k}(y_{k+1},\xi_k)\right), \sigma} \def\del{\etaqrt{\frac{\mu} \def\n{\nuu_{\omega_{i_k}}}{2}}\left(x_{k+1}^{i_k}-y_{k+1}^{i_k}\right)\right\rangle \notag\\
&\leq \frac{1}{2}\left(\frac{2}{\mu} \def\n{\nuu_{\omega_i}}\right)\| F_{i_k}(x_k)-F_{i_k}(y_{k+1})\|_{*i_k}^2+\frac{1}{2}\left(\frac{\mu} \def\n{\nuu_{\omega_i}}{2}\right)\|x_{k+1}^{i_k}-y_{k+1}^{i_k}\|_{i_k}^2\epsilonnd{align}
Simiraly, we can obtain a bound on $Term\ 2$. Using strong convexity of $\omega_{i_k}$ (cf. Lemma \mu} \def\n{\nuathbb{R}f{lemma:proxprop}(a)), we conclude
\begin{enumerate}gin{align*}
D(x_{k+1},x)&\leq D(x_{k},{x})+p_{i_k}^{-1}\gamma_k^2\left(\frac{1}{\mu} \def\n{\nuu_{\omega_{i_k}}}\right)\| F_{i_k}(x_k)-F_{i_k}(y_{k+1})\|_{*i_k}^2+p_{i_k}^{-1} \left(\frac{\mu} \def\n{\nuu_{\omega_i}}{4}\right)\|x_{k+1}^{i_k}-y_{k+1}^{i_k}\|_{i_k}^2\\
& +p_{i_k}^{-1}\gamma_k\langle F_{i_k}(y_{k+1}), {x}^{i_k}-y_{k+1}^{i_k}\rangle -\frac{\mu} \def\n{\nuu_{\omega_i}}{2}p_{i_k}^{-1}\|x_{k}^{i_k}-y_{k+1}^{i_k}\|_{i_k}^2- \frac{\mu} \def\n{\nuu_{\omega_i}}{2}p_{i_k}^{-1}\|y_{k+1}^{i_k}-x_{k+1}^{i_k}\|_{i_k}^2\\
& +p_{i_k}^{-1}\gamma_k^2\left(\frac{1}{\mu} \def\n{\nuu_{\omega_{i_k}}}\right)\| {\tilde w}_k^{i_k}-w_k^{i_k}\|_{*{i_k}}^2+p_{i_k}^{-1}\left(\frac{\mu} \def\n{\nuu_{\omega_{i_k}}}{4}\right)\|x_{k+1}^{i_k}-y_{k+1}^{i_k}\|_{i_k}^2+p_{i_k}^{-1}\gamma_k\langle w_k^{i_k}, {x}^{i_k}-y_{k+1}^{i_k}\rangle.
\epsilonnd{align*}
Invoking Assumption \epsilonqref{} and that for any $a,b \in \mu} \def\n{\nuathbb{R}$, $(a+b)^2\leq 2(a^2+b^2)$, we have
\begin{enumerate}gin{align*}
D(x_{k+1},x)&\leq D(x_{k},{x})+\left(\frac{2}{\mu} \def\n{\nuu_{\omega_{i_k}}}\right)p_{i_k}^{-1}\gamma_k^2\left(L_{i_k}^2\|x_k^{i_k}-y_{k+1}^{i_k}\|_{i_k}^2+B_{i_k}^2\right)\\
& +p_{i_k}^{-1}\gamma_k\langle F_{i_k}(y_{k+1}), {x}^{i_k}-y_{k+1}^{i_k}\rangle -\frac{\mu} \def\n{\nuu_{\omega_i}}{2}p_{i_k}^{-1}\|x_{k}^{i_k}-y_{k+1}^{i_k}\|_{i_k}^2\\
& +p_{i_k}^{-1}\gamma_k^2\left(\frac{1}{\mu} \def\n{\nuu_{\omega_i}}\right)\| {\fyR{\tilde w_k}}^{i_k}-w_k^{i_k}\|_{*{i_k}}^2+p_{i_k}^{-1}\gamma_k\langle w_k^{i_k}, {x}^{i_k}-y_{k+1}^{i_k}\rangle \\
& \leq D(x_{k},{x})-p_{i_k}^{-1}\underbrace{\left(\frac{\mu} \def\n{\nuu_{\omega_{i_k}}}{2}-\frac{2}{\mu} \def\n{\nuu_{\omega_{i_k}}}\gamma_k^2L_{i_k}^2\right)}_{Term 3}\|x_k^{i_k}-y_{k+1}^{i_k}\|_{i_k}^2 +\left(\frac{2}{\mu} \def\n{\nuu_{\omega_i}}\right)p_{i_k}^{-1}\gamma_k^2B_{i_k}^2\\
&+p_{i_k}^{-1}\gamma_k\langle F_{i_k}(y_{k+1}), {x}^{i_k}-y_{k+1}^{i_k}\rangle +p_{i_k}^{-1}\gamma_k^2\left(\frac{1}{\mu} \def\n{\nuu_{\omega_{i_k}}}\right)\| {\fyR{\tilde w_k}}^{i_k}-w_k^{i_k}\|_{*{i_k}}^2+p_{i_k}^{-1}\gamma_k\langle w_k^{i_k}, {x}^{i_k}-y_{k+1}^{i_k}\rangle.
\epsilonnd{align*}
Next, using the assumption $\gamma_k \leq \frac{\mu} \def\n{\nuu_{\omega_i}}{2L_{\omega_i}}$ for all $i$, we can claim $Term\ 3$ is non-negative. Thus, using the triangle inequality for dual norm $\|\cdot\|_{*i_k}$, from the preceding inequality we have
\begin{enumerate}gin{align*}
D(x_{k+1},x)& \leq D(x_{k},{x}) +p_{i_k}^{-1}\gamma_k\langle F_{i_k}(y_{k+1}), {x}^{i_k}-y_{k+1}^{i_k}\rangle +\left(\frac{2}{\mu} \def\n{\nuu_{\omega_{i_k}}}\right)p_{i_k}^{-1}\gamma_k^2B_{i_k}^2\\
& +p_{i_k}^{-1}\gamma_k^2\left(\frac{2}{\mu} \def\n{\nuu_{\omega_{i_k}}}\right)\| {\fyR{\tilde w_k}}^{i_k}\|_{*{i_k}}^2+p_{i_k}^{-1}\gamma_k^2\left(\frac{2}{\mu} \def\n{\nuu_{\omega_{i_k}}}\right)\|w_k^{i_k}\|_{*{i_k}}^2+p_{i_k}^{-1}\gamma_k\langle w_k^{i_k}, {x}^{i_k}-y_{k+1}^{i_k}\rangle.
\epsilonnd{align*}
\epsilonnd{proof}
\us{Unlike optimization problems where the objective function provides a metric
for distinguishing solutions, there is no immediate analog in
variational inequality problems. However, one may prescribe a
residual function associated with a variational inequality
problem.}
\begin{enumerate}gin{definition}[Gap function]\label{def:gap1}
Let $X \sigma} \def\del{\etaubset \mu} \def\n{\nuathbb{R}^n$ be a non-empty and closed set.
Suppose that mapping $F: X\rightarrow \mu} \def\n{\nuathbb{R}^n$ is defined on the set $X$. We define the following gap function, $\hbox{G}: X \rightarrow \mu} \def\n{\nuathbb{R}^+\cup \{0\}$ to measure the accuracy of a vector $x \in X$:
\begin{enumerate}gin{align}\label{equ:gapf}
\hbox{G}(x)= \sigma} \def\del{\etaup_{y \in X} F(y)^T(x-y).
\epsilonnd{align}
\epsilonnd{definition}
\alphan{From this definition, it follows that $G(x)\gammaeq 0$ for any $x \in X$ and $G(x^*)=0$ for any $x^* \in \hbox{SOL}(X,F)$. We note that the gap function $\hbox{G}$ is in fact also a function of the set $X$ and the map $F$, but we do not use this dependency so \fy{we} use $\hbox{G}$ instead of~$\hbox{G}_{X,F}$.}
Next, we provide an upper bound for the gap function $G$.
\begin{proposition}[Error bounds on the expected gap value]\label{prop:errorbound-averaged}
Consider problem~(\ref{def:SVI}), and let Assumptions~\ref{assum:step_error_sub_1}
and~\ref{assump:stochastic-error} hold. Let the weighted average sequence $\{\bar y_{N}(r)\}$ be defined by
$$\bar y_N(r) \triangleq \frac{\sum_{k=1}^{N} \gamma_k^r
y_k}{\sum_{k=1}^{N} \gamma_k^r},\qquad\hbox{for all }N\geq 1,$$
where $r \in \mathbb{R}$, $\{y_{k}\}$ is
generated by Algorithm~\ref{algorithm:IRLSA-impl}, and the stepsize sequence $\{\gamma_k\}$ is non-increasing with
$0<\gamma_k\leq \frac{\mu_{\omega_i}}{4L_{\omega_i}}$ for all $i$ and all
$k\geq 0$. Then, for all $N\geq 1$,
\begin{align}\label{ineq:gap-general-bound1}& \quad \EXP{
\hbox{G}(\bar y_{N}(r))} \leq
\left( 4M^2\left(\gamma_0^{r-1}+\gamma_{N-1}^{r-1}\textbf{I}_r\right) +\sqrt{n}C\sum_{t=0}^{N-1}\gamma_t^r\epsilon_t +(5.5\nu^2+5C^2)\sum_{t=0}^{N-1}\gamma_t^{r+1}\right)\left(\sum_{t=0}^{N-1}\gamma_{t}^r\right)^{-1} ,
\end{align}
where $\textbf{I}_r=0$ for $r\geq 1$ and $\textbf{I}_r=1$ for $r< 1$.
\end{proposition}
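As a brief aside (not part of the proof below), the following sketch shows how the weighted average $\bar y_N(r)$ can be formed from stored iterates; the stepsizes and iterates are placeholders.
\begin{verbatim}
import numpy as np

def weighted_average(y_iterates, gammas, r):
    """Compute bar_y_N(r) = sum_k gamma_k^r * y_k / sum_k gamma_k^r
    for iterates y_1,...,y_N and stepsizes gamma_1,...,gamma_N."""
    weights = np.array([g ** r for g in gammas], dtype=float)
    Y = np.vstack(y_iterates)                      # shape (N, n)
    return (weights[:, None] * Y).sum(axis=0) / weights.sum()

# Illustrative use: non-increasing stepsizes and dummy iterates.
N, n = 10, 3
gammas = [1.0 / np.sqrt(k + 1) for k in range(N)]
y_iterates = [np.random.randn(n) for _ in range(N)]
print(weighted_average(y_iterates, gammas, r=0.5))
# r = 0 gives the plain average; since gamma_k is non-increasing,
# larger r weights early iterates more heavily, smaller r later ones.
\end{verbatim}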
\begin{proof}
Consider the inequality \eqref{ineq:rec-bound}. We first estimate the term $p_{i_k}^{-1}\gamma_k\langle w_k^{i_k}, {x}^{i_k}-y_{k+1}^{i_k}\rangle$.
Let us define a sequence of vectors $u_{k}:=[u_k^1;u_k^2;\ldots;u_k^d]$ as follows:
\begin{align}\label{def:ut}
u_{k+1}^{i_k}=P_{i_k}(u_k^{i_k},-\gamma_kw_{k}^{i_k}), \quad \hbox{for all } k\geq 0,
\end{align}
where $u_0=x_0$. We can write
\begin{align}\label{boundOnLastTerm}\gamma_k\langle w_k^{i_k}, {x}^{i_k}-y_{k+1}^{i_k}\rangle =\langle \gamma_kw_k^{i_k}, {x}^{i_k}-u_{k}^{i_k}\rangle+\gamma_k\langle w_k^{i_k}, {u}_k^{i_k}-y_{k+1}^{i_k}\rangle. \end{align}
Applying Lemma \ref{lemma:proxprop}(c) and taking into account the definition of $u_k$ in \eqref{def:ut}, we have
\begin{align*}
D_{i_k}(u_{k+1}^{i_k},x^{i_k})\leq D_{i_k}(u_k^{i_k},x^{i_k})-\langle \gamma_kw_k^{i_k}, {x}^{i_k}-u_{k}^{i_k}\rangle+\frac{\gamma_k^2}{2\mu_{\omega_{i_k}}}\|w_k^{i_k}\|_{*i_k}^2.
\end{align*}
Next, using the definition of the Lyapunov function $D$ and the preceding relation, we have
\begin{align*}
D(u_{k+1},x)&=\sum_{i\neq i_k}p_i^{-1}D_i(u_{k+1}^i,x^i)+p_{i_k}^{-1}D_{i_k}(u_{k+1}^{i_k},x^{i_k})\\
&=\sum_{i\neq i_k}p_i^{-1}D_i(u_{k}^i,{x}^i)+p_{i_k}^{-1}D_{i_k}(u_{k+1}^{i_k},{x}^{i_k})\\
& \leq \sum_{i\neq i_k}p_i^{-1}D_i(u_{k}^i,{x}^i)+ p_{i_k}^{-1}D_{i_k}(u_k^{i_k},x^{i_k}) -p_{i_k}^{-1}\gamma_k\langle w_k^{i_k}, {x}^{i_k}-u_{k}^{i_k}\rangle+p_{i_k}^{-1}\frac{\gamma_k^2}{2\mu_{\omega_{i_k}}}\|w_k^{i_k}\|_{*i_k}^2\\
&= D(u_{k},x)-p_{i_k}^{-1}\gamma_k\langle w_k^{i_k}, {x}^{i_k}-u_{k}^{i_k}\rangle+p_{i_k}^{-1}\frac{\gamma_k^2}{2\mu_{\omega_{i_k}}}\|w_k^{i_k}\|_{*i_k}^2.
\end{align*}
Therefore, from the preceding relation and \eqref{boundOnLastTerm}, we obtain
\begin{align}\label{boundOnLastTerm2}
p_{i_k}^{-1}\gamma_k\langle w_k^{i_k}, x^{i_k}-y_{k+1}^{i_k}\rangle \leq D(u_{k},x)-D(u_{k+1},x)+p_{i_k}^{-1}\frac{\gamma_k^2}{2\mu_{\omega_{i_k}}}\|w_k^{i_k}\|_{*i_k}^2+\gamma_k\langle w_k^{i_k}, {u}_k^{i_k}-y_{k+1}^{i_k}\rangle.
\end{align}
From the block-monotonicity assumption on the mapping $F$, we have
\[\langle F_{i_k}(y_{k+1}), x^{i_k}-y^{i_k}_{k+1}\rangle \leq \langle F_{i_k}(x), x^{i_k}-y^{i_k}_{k+1}\rangle.\]
From the preceding inequality, relations \eqref{boundOnLastTerm} and \eqref{ineq:rec-bound}, we obtain
\begin{align*}
p_{i_k}^{-1}\gamma_k\langle F_{i_k}(x), y_{k+1}^{i_k}-{x}^{i_k}\rangle & \leq D(x_{k},{x}) -D(x_{k+1},x)+D(u_{k},x)-D(u_{k+1},x)\\
& +\left(\frac{2p_{i_k}^{-1}\gamma_k^2}{\mu_{\omega_{i_k}}}\right)\left(B_{i_k}^2+1.25\| \tilde w_k^{i_k}\|_{*i_k}^2+\|w_k^{i_k}\|_{*i_k}^2\right) +p_{i_k}^{-1}\gamma_k\langle w_k^{i_k}, {u}_k^{i_k}-y_{k+1}^{i_k}\rangle.
\end{align*}
Multiplying both sides of the preceding inequality by $\gamma_k^{r-1}$ and rearranging the terms, we have
\begin{align}\label{ineq:r_multiplied}
\gamma_k^rp_{i_k}^{-1}\langle E_{i_k}F(x), y_{k+1}-x\rangle
& \leq \gamma_k^{r-1}D(x_{k},{x}) -\gamma_k^{r-1}D(x_{k+1},x)+2\gamma_k^{r+1}p_{i_k}^{-1}B_{i_k}^2\nonumber\\
& +4\gamma_k^{r+1}p_{i_k}^{-1}\nu_{i_k}^2+\gamma_k^rp_{i_k}^{-1}\langle E_{i_k} w_k, {x}-y_{k+1}\rangle.
\end{align}
Therefore, combining the preceding relations, we have
\begin{align*}
\gamma_k^rp_{i_k}^{-1}\langle E_{i_k}F(x), y_{k+1}-x\rangle
& \leq \gamma_k^{r-1}D(x_{k},{x}) -\gamma_k^{r-1}D(x_{k+1},x)+2\gamma_k^{r+1}p_{i_k}^{-1}B_{i_k}^2\\
& +4\gamma_k^{r+1}p_{i_k}^{-1}\nu_{i_k}^2+\gamma_k^rp_{i_k}^{-1}\left(D_{i_k}(u_k^{i_k},x^{i_k})-D_{i_k}(u_{k+1}^{i_k},x^{i_k})+\frac{\gamma_k^2}{2\mu_{\omega_{i_k}}}\|w_k^{i_k}\|_{*i_k}^2+\langle E_{i_k} w_k, u_k-y_{k+1}\rangle\right).
\end{align*}
Adding and subtracting the term $\gamma_{k-1}^{r-1}D(x_k,x)$, we obtain
\begin{align*}
\gamma_k^rp_{i_k}^{-1}\langle E_{i_k}F(x), y_{k+1}-x\rangle
& \leq \gamma_{k-1}^{r-1}D(x_{k},{x}) -\gamma_k^{r-1}D(x_{k+1},x) +\left(\gamma_{k}^{r-1}-\gamma_{k-1}^{r-1}\right)D(x_{k},{x})\\&+2\gamma_k^{r+1}p_{i_k}^{-1}B_{i_k}^2
+4\gamma_k^{r+1}p_{i_k}^{-1}\nu_{i_k}^2+\gamma_k^rp_{i_k}^{-1}\langle E_{i_k} w_k, {x}-y_{k+1}\rangle.
\end{align*}
Note that since $r<1$ and $\gamma_k$ is non-increasing, we have $\gamma_{k}^{r-1}-\gamma_{k-1}^{r-1}\geq 0$. Moreover, from the Lipschitzian property of the function $\omega_i$ and the definition of $\mathcal{D}_i$, we have \begin{align}\label{ineq:boundD}D(x_k,x) &=\sum_{i=1}^dp_i^{-1}D_i(x_k^i,{x}^i)\leq \frac{1}{2}\sum_{i=1}^dp_i^{-1}L_{\omega_i}\|x_k^i-{x}^i\|_i^2\leq \frac{1}{2}\sum_{i=1}^dp_i^{-1}L_{\omega_i}\left(\|x_k^i\|_i+\|{x}^i\|_i\right)^2\nonumber\\ & \leq 2\sum_{i=1}^dp_i^{-1}L_{\omega_i}\mathcal{D}_i^2.\end{align}
Using the preceding relation, summing the relation ... over $k=1$ to $N-1$, and dropping the non-positive terms, we obtain
\begin{align*}
\sum_{k=1}^{N-1}\gamma_k^rp_{i_k}^{-1}\langle E_{i_k}F(x), y_{k+1}-x\rangle
& \leq \gamma_{0}^{r-1}D(x_{1},{x}) +2\left(\sum_{i=1}^dp_i^{-1}L_{\omega_i}\mathcal{D}_i^2\right)\gamma_{N-1}^{r-1}\\&+2\sum_{k=1}^{N-1}\gamma_k^{r+1}p_{i_k}^{-1}\left(B_{i_k}^2+2\nu_{i_k}^2\right)+\sum_{k=1}^{N-1}\gamma_k^rp_{i_k}^{-1}\langle E_{i_k} w_k, {x}-y_{k+1}\rangle.
\end{align*}
Adding the resulting inequality to \eqref{ineq:r_multiplied} with $k=0$, and employing \eqref{ineq:boundD}, we have
\begin{align*}
\sum_{k=0}^{N-1}\gamma_k^rp_{i_k}^{-1}\langle E_{i_k}F(x), y_{k+1}-x\rangle
& \leq \gamma_{0}^{r-1}D(x_{0},{x}) +2\left(\sum_{i=1}^dp_i^{-1}L_{\omega_i}\mathcal{D}_i^2\right)\gamma_{N-1}^{r-1}\\&+2\sum_{k=0}^{N-1}\gamma_k^{r+1}p_{i_k}^{-1}\left(B_{i_k}^2+2\nu_{i_k}^2\right)+\sum_{k=0}^{N-1}\gamma_k^rp_{i_k}^{-1}\langle E_{i_k} w_k, {x}-y_{k+1}\rangle \\
& \leq \left(\sum_{i=1}^dp_i^{-1}L_{\omega_i}\mathcal{D}_i^2\right)\left(\gamma_{0}^{r-1} +2\gamma_{N-1}^{r-1}\right)\\&+2\sum_{k=0}^{N-1}\gamma_k^{r+1}p_{i_k}^{-1}\left(B_{i_k}^2+2\nu_{i_k}^2\right)+\sum_{k=0}^{N-1}\gamma_k^rp_{i_k}^{-1}\langle E_{i_k} w_k, {x}-y_{k+1}\rangle.
\end{align*}
Let us denote the history of the method up to the $k$th iteration by $\mathcal{F}_k=\{z_0,\tilde \xi_0,\xi_0,\ldots,z_k,\tilde \xi_k,\xi_k\}$.
Taking conditional expectations on both sides, given $\mathcal{F}_{k-1}\cup\{\tilde \xi_k,\xi_k\}$, we obtain
\begin{align}\label{ineq:expz}
\sum_{k=0}^{N-1}\gamma_k^r\langle \EXP{p_{i_k}^{-1}E_{i_k}}F(y_{k+1}), y_{k+1}-x\rangle
& \leq \left(\sum_{i=1}^dp_i^{-1}L_{\omega_i}\mathcal{D}_i^2\right)\left(\gamma_{0}^{r-1} +2\gamma_{N-1}^{r-1}\right)+2\sum_{k=0}^{N-1}\gamma_k^{r+1} \EXP{p_{i_k}^{-1}\left(B_{i_k}^2+2\nu_{i_k}^2\right)}\nonumber\\&+\sum_{k=0}^{N-1}\gamma_k^r\langle \EXP{p_{i_k}^{-1} E_{i_k}} w_k, {x}-y_{k+1}\rangle.
\end{align}
Next, we calculate the terms $\EXP{p_{i_k}^{-1}E_{i_k}}$ and $\EXP{p_{i_k}^{-1}\left(B_{i_k}^2+2\nu_{i_k}^2\right)}$. We have
\begin{align*}
\EXP{p_{i_k}^{-1}E_{i_k}}= \sum_{i=1}^dp_ip_{i}^{-1}E_{i}=\textbf{I}_n,
\end{align*}
and
\begin{align*}
\EXP{p_{i_k}^{-1}\left(B_{i_k}^2+2\nu_{i_k}^2\right)}=\sum_{i=1}^dp_ip_{i}^{-1}\left(B_{i}^2+2\nu_{i}^2\right)=\sum_{i=1}^d\left(B_{i}^2+2\nu_{i}^2\right).
\end{align*}
Combining the preceding two relations with the inequality \eqref{ineq:expz}, we have
\begin{align}\label{ineq:beforeMon}
\sum_{k=0}^{N-1}\gamma_k^r\langle F(y_{k+1}), y_{k+1}-x\rangle
& \leq \left(\sum_{i=1}^dp_i^{-1}L_{\omega_i}\mathcal{D}_i^2\right)\left(\gamma_{0}^{r-1} +2\gamma_{N-1}^{r-1}\right)+2\sum_{i=1}^d\left(B_{i}^2+2\nu_{i}^2\right)\sum_{k=0}^{N-1}\gamma_k^{r+1}\nonumber\\&+\sum_{k=0}^{N-1}\gamma_k^r\langle w_k, {x}-y_{k+1}\rangle.
\end{align}
From the monotonicity property of the mapping $F$, we have $\langle F(x), y_{k+1}-x\rangle \leq \langle F(y_{k+1}), y_{k+1}-x\rangle$. Let us define
\begin{align}\label{ineq:aveSeq}
\bar y_{N}(r)=\sum_{k=0}^{N-1}\frac{\gamma_k^r}{\sum_{k=0}^{N-1}\gamma_{k}^r}y_{k+1}.
\end{align}
Therefore, multiplying and dividing the left-hand side of relation \eqref{ineq:beforeMon} by $\sum_{k=0}^{N-1}\gamma_k^r$ and using the preceding monotonicity relation, we obtain
\begin{align*}
\sum_{k=0}^{N-1}\gamma_k^r\langle F(x), \bar y_{N}(r)-x\rangle
& \leq \left(\sum_{i=1}^dp_i^{-1}L_{\omega_i}\mathcal{D}_i^2\right)\left(\gamma_{0}^{r-1} +2\gamma_{N-1}^{r-1}\right)+2\sum_{i=1}^d\left(B_{i}^2+2\nu_{i}^2\right)\sum_{k=0}^{N-1}\gamma_k^{r+1}\\&+\sum_{k=0}^{N-1}\gamma_k^r\langle w_k, {x}-y_{k+1}\rangle.
\end{align*}
Taking the supremum with respect to $x \in X$, Definition~\ref{def:gap1} implies that
\begin{align*}
\sum_{k=0}^{N-1}\gamma_k^r G(\bar y_{N}(r))
& \leq \left(\sum_{i=1}^dp_i^{-1}L_{\omega_i}\mathcal{D}_i^2\right)\left(\gamma_{0}^{r-1} +2\gamma_{N-1}^{r-1}\right)+2\sum_{i=1}^d\left(B_{i}^2+2\nu_{i}^2\right)\sum_{k=0}^{N-1}\gamma_k^{r+1}\\&+\sum_{k=0}^{N-1}\gamma_k^r\langle w_k, {x}-y_{k+1}\rangle.
\end{align*}
Now, taking expectations on both sides of the preceding inequality, we obtain
\begin{align*}
\sum_{k=0}^{N-1}\gamma_k^r \EXP{G(\bar y_{N}(r))}
& \leq \left(\sum_{i=1}^dp_i^{-1}L_{\omega_i}\mathcal{D}_i^2\right)\left(\gamma_{0}^{r-1} +2\gamma_{N-1}^{r-1}\right)+2\sum_{i=1}^d\left(B_{i}^2+2\nu_{i}^2\right)\sum_{k=0}^{N-1}\gamma_k^{r+1},
\end{align*}
where we used the relation
\[\EXP{\langle w_k, {x}-y_{k+1}\rangle}=\EXP{\EXP{\langle w_k, {x}-y_{k+1}\rangle\mid \mathcal{F}_{k-1}\cup \{\tilde \xi_k\}}}=\EXP{\langle \EXP{ w_k\mid \mathcal{F}_{k-1}\cup \{\tilde \xi_k\}}, {x}-y_{k+1}\rangle}=0.\]
Dividing both sides by $\sum_{k=0}^{N-1}\gamma_k^r$ yields the desired bound.
\end{proof}
\epsilonnd{document}
|
\begin{document}
\title{Entanglement for a bimodal cavity field interacting with a two-level atom}
\author{Jia Liu, Zi-Yu Chen, Shen-Ping Bu, Guo-Feng Zhang\footnote{Corresponding author. Email:
[email protected]; Phone: 86-10-82338289;
Fax:86-10-82338289}} \affiliation{ Department of Physics, Beijing
University of Aeronautics and Astronautics, Beijing 100083,
China.}
\begin{abstract}
Negativity is adopted to investigate the entanglement in a
system composed of a two-level atom and a two-mode cavity field.
The effects of a Kerr-like medium and of the number of photons inside the
cavity on the entanglement are studied. Our results show that the
atomic initial state must be a superposition state for the two cavity
field modes to become entangled; moreover, the photon numbers in the
two cavity modes should be equal. The interaction between the modes,
namely the Kerr effect, has a significant negative contribution.
Since the atom frequency and the cavity frequency have an
indistinguishable effect, a corresponding approximation is made in
this article. These results may be useful for quantum information in
optical systems.
Keywords: Negativity; Kerr-like medium; Jaynes-Cummings model.
\end{abstract}
\maketitle
Quantum computation, one of the most fascinating applications of
quantum mechanics, has the potential to outperform its classical
counterpart in solving hard problems using much less time. There
has been an ongoing effort to search for physical systems
that may be propitious for implementing quantum computation. Several
prospective approaches to scalable quantum computation have been
identified \cite{1,2,3,4}. Compared with other physical systems,
optical quantum systems can be easily realized in experiments. In quantum
optics, the Jaynes-Cummings (JC) model is one of the exactly
solvable models describing the interaction between a single-mode
radiation field and a two-level atom. It was realized
experimentally in 1987 \cite{5}. There are many ongoing
experimental and theoretical investigations of various
extensions of the JC model, such as a bimodal cavity field
\cite{6,7}, two atoms \cite{8,9}, multilevel atoms \cite{10,11},
and so on. A two-level atom interacting with a two-mode cavity
field is discussed here.
In view of the resource character of entanglement, much
attention has been paid to its quantification, through measures such as the
concurrence, the negativity, and the relative entropy of entanglement.
Entanglement between two qubits in an arbitrary state has been
quantified by the concurrence \cite{12,13,14}. It is generally
considered that the two-atom Wehrl entropy \cite{15} can be used
to quantify the entanglement in the JC model when the modes are
initially prepared in maximally entangled states \cite{16,17}.
Here we use the negativity as the measure and deal with mixed-state
entanglement \cite{18}. Many efforts have been devoted to the
study of the two-mode JC model, but the Kerr effect \cite{19,20,21,22}
has not been considered, and this is the main motivation of the
present paper. Scaled units are used in this work. The
interaction between the field and the atom is considered in an ideal
and closed cavity; namely, the field damping and the radiative
damping \cite{23} are ignored.
The system considered here is an effective two-level atom with
upper and lower states denoted by $|\uparrow\rangle$ and $|\downarrow\rangle$,
respectively. The corresponding frequencies are $\omega_{a}$ and
$\omega_{b}$; moreover, we denote by $\omega_{\alpha}$ the
transition frequency between the states $|\uparrow\rangle$ and
$|\downarrow\rangle$. In the two-photon processes, some intermediate
states $|i\rangle$, $i=c, d,\cdots$, are involved, which are assumed to
be coupled to $|\uparrow\rangle$ and $|\downarrow\rangle$ by dipole-allowed
transitions. Let $\omega_{i}$ denote the corresponding frequency of
the atomic energy level $|i\rangle$. There are two requirements:
firstly, the atom interacts with the two cavity fields with
frequencies $\omega_{1}$ and $\omega_{2}$, where $\omega_{1}+
\omega_{2}\cong \omega_{\alpha}$; secondly,
$\omega_{a}-\omega_{i}$ and $\omega_{b}-\omega_{i}$ are detuned from
$\omega_{1}$ and $\omega_{2}$ by more than the one-photon linewidth.
If both requirements are satisfied, then the intermediate states
can be adiabatically eliminated \cite{24} and the effective
Hamiltonian of the system can be written in the rotating-wave
approximation as \cite{25,26}
\begin{eqnarray}
H=\sum_{j=1}^2\omega_ja_j^\dag a_j+\omega_\alpha
\frac{\sigma_z}{2}+\chi(a_1^{\dag2}a_1^{2}+a_2^{\dag2}a_2^{2})+\lambda(a_1a_2\sigma_++a_1^{\dag}a_2^{\dag}\sigma_-),
\end{eqnarray}
\noindent where $a_j^{\dag}$ ($a_j$) and $\omega_j$ denote the creation
(annihilation) operator and the frequency of the $j$th mode; the natural
unit $\hbar=1$ is used throughout the paper. $\sigma_{\pm}=(\sigma_x
\pm i\sigma_y)/2$ are the raising and lowering operators, where
$\sigma_m$ ($m=x,y,z$) are the usual Pauli spin operators. $\lambda$ is the
coupling constant between the atom and the modes, known as the Rabi
frequency. $\chi$ is the dispersive part of the third-order
nonlinearity of the Kerr-like medium. Throughout the investigation,
we consider $\omega_1+\omega_2=\omega_\alpha$ (i.e., the
resonant case). It is found that the frequencies of the two modes
have the same effect on the negativity, so for simplicity we set
$\omega_{1}=\omega_{2}=\omega$ in the subsequent
calculations.
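As a side illustration (not contained in the original text), the effective Hamiltonian above can be represented numerically in a truncated two-mode Fock basis; the cutoff and the parameter values in the sketch below are our illustrative choices.
\begin{verbatim}
import numpy as np

def destroy(N):
    """Annihilation operator on an N-dimensional truncated Fock space."""
    return np.diag(np.sqrt(np.arange(1, N)), k=1)

N_max = 5                       # Fock-space cutoff per mode (illustrative)
I_f, I_a = np.eye(N_max), np.eye(2)
a1 = np.kron(np.kron(destroy(N_max), I_f), I_a)    # mode 1
a2 = np.kron(np.kron(I_f, destroy(N_max)), I_a)    # mode 2
sz = np.kron(np.kron(I_f, I_f), np.diag([1.0, -1.0]))
sp = np.kron(np.kron(I_f, I_f), np.array([[0, 1], [0, 0]]))  # sigma_+
sm = sp.T                                                      # sigma_-

# Illustrative parameters obeying the resonance condition w1+w2 = w_alpha.
omega, omega_alpha, chi, lam = 1.0, 2.0, 0.1, 0.2

H = (omega * (a1.conj().T @ a1 + a2.conj().T @ a2)
     + 0.5 * omega_alpha * sz
     + chi * (a1.conj().T @ a1.conj().T @ a1 @ a1
              + a2.conj().T @ a2.conj().T @ a2 @ a2)
     + lam * (a1 @ a2 @ sp + a1.conj().T @ a2.conj().T @ sm))

print(H.shape)                       # (2*N_max**2, 2*N_max**2)
print(np.allclose(H, H.conj().T))    # Hermitian, as expected
\end{verbatim}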
We assume that the initial state of the system has the form of
\begin{eqnarray}
|\Psi(0)\rangle=\cos\theta|n_1,n_2,\uparrow\rangle+\sin\theta|n_1,n_2,\downarrow\rangle,
\end{eqnarray}
\noindent where $n_1$ and $n_2$ are the field quantum numbers in the
Fock representation. Here, different values of $\theta$ describe
states with different amplitudes. In view of the initial
condition and the Schr\"{o}dinger equation, the wave function of the
system at time $t$ can be obtained as
\begin{eqnarray}
&&
|\Psi(t)\rangle=a(t)|n_1,n_2,\uparrow\rangle+b(t)|n_1,n_2,\downarrow\rangle\nonumber\\
&&\ \ \ \ \ \ \ \ \
+c(t)|n_1+1,n_2+1,\downarrow\rangle+d(t)|n_1-1,n_2-1,\uparrow\rangle,
\end{eqnarray}
\noindent under the condition
$|a(t)|^2+|b(t)|^2+|c(t)|^2+|d(t)|^2=1$, and
\begin{eqnarray}
&&
a(t)=\frac{1}{2\xi_1}\{e^{-T(i\gamma_1+\xi_1)}\cos\theta [i(e^{2T\xi_1}-1)\zeta(n_1+n_2)+\xi_1(e^{2T\xi_1}+1)]\};\nonumber\\
&&
b(t)=\frac{1}{2\xi_2}\{e^{-T(i\gamma_2+\xi_2)}\sin\theta [-i(e^{2T\xi_2}-1)\zeta(n_1+n_2-2)+\xi_2(e^{2T\xi_2}+1)]\};\nonumber\\
&&
c(t)=-\frac{1}{2\xi_1}[ie^{-T(i\gamma_1+\xi_1)}\eta\cos\theta \sqrt{(1+n_1)(1+n_2)}];\nonumber\\
&&
d(t)=-\frac{1}{2\xi_2}[ie^{-T(i\gamma_2+\xi_2)}\eta\sin\theta (e^{2T\xi_2}-1)\sqrt{n_1n_2}];\nonumber\\
&&
\xi_1=\sqrt{-\eta^2-\zeta^2n_1^2-n_2(\eta^2+\zeta^2n_2^2)-n_1[\eta^2(1+n_2)+2\zeta^2n_2]};\nonumber\\
&&
\xi_2=\sqrt{-\zeta^2n_1^2-\zeta^2(n_2-2)^2+n_1(4\zeta^2-n_2\eta^2-2\zeta^2n_2)};\nonumber\\
&&
\gamma_1=1+n_1+n_2+\zeta(n_1^2+n_2^2);\nonumber\\
&& \gamma_2=-1+2\zeta+(1-2\zeta)n_1+\zeta
n_1^2+n_2+\zeta(n_2-2)n_2,
\end{eqnarray}
\noindent where $\eta=\lambda/\omega$, $\zeta=\chi/\omega$, and
$T=\omega t$. From the above equations, the state density operator
at time $t$, $\rho(t)=|\Psi(t)\rangle\langle\Psi(t)|$, can be
easily derived.
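For readers who wish to reproduce the reduced field state discussed next, the following sketch (not part of the original text) forms $\rho(t)=|\Psi(t)\rangle\langle\Psi(t)|$ and traces out the atom; the amplitudes $a,b,c,d$ below are placeholder numbers satisfying the normalization condition, not the expressions of Eq.~(4).
\begin{verbatim}
import numpy as np

def rho_field(psi, dim_f1, dim_f2):
    """Given a pure state psi on (mode1 x mode2 x atom), return the
    reduced two-mode density operator rho_f = Tr_atom |psi><psi|."""
    psi = psi.reshape(dim_f1 * dim_f2, 2)   # split off the atomic index
    return psi @ psi.conj().T               # sums over the atomic basis

# Hypothetical 3-level truncation per mode (levels n-1, n, n+1) and
# illustrative amplitudes placed on the four basis states of |Psi(t)>.
a, b, c, d = 0.5, 0.5, 0.5, 0.5
dim = 3
psi = np.zeros((dim, dim, 2), dtype=complex)
psi[1, 1, 0] = a        # |n1, n2, up>
psi[1, 1, 1] = b        # |n1, n2, down>
psi[2, 2, 1] = c        # |n1+1, n2+1, down>
psi[0, 0, 0] = d        # |n1-1, n2-1, up>
rho_f = rho_field(psi.reshape(-1), dim, dim)
print(np.trace(rho_f).real)   # 1.0 since |a|^2+|b|^2+|c|^2+|d|^2 = 1
\end{verbatim}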
If we know the density matrix $\rho_{12}$ of a composite system
composed of subsystems $1$ and $2$, the reduced density operator
for subsystem $1$ is $\rho_{1}=Tr_{2}(\rho_{12})$. In our case,
the density operator of the two modes for a given atom state is
$\rho_f(t)=Tr_a\rho(t)$, which can be found in the basis
$\{|i,j\rangle,$ $i=n_{1}-1,$ $n_{1},$ $n_{1}+1$ and $j=n_{2}-1,$
$n_{2},$ $n_{2}+1\}$ as
\begin{eqnarray}
\rho_f(t) = \left(\begin{array}{ccccccccc}
|d|^2 & 0 & 0 & 0 & da^* & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
ad^* & 0 & 0 & 0 & |a|^2+|b|^2 & 0 & 0 & 0 & bc^*\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & cb^* & 0 & 0 & 0 & |c|^2\\
\end{array} \right),
\end{eqnarray}
which, generally speaking, is a mixed state. We therefore use the
negativity, a standard measure of entanglement for mixed states,
defined as
\begin{eqnarray}
N(\rho)\equiv\frac{\parallel \rho^{T_A}\parallel_1-1}{2},
\end{eqnarray}
where $\rho^{T_A}$ denotes the partial transpose of $\rho$ with
respect to part $A$, and the trace norm of $\rho^{T_A}$ is equal
to the sum of the absolute values of the eigenvalues of
$\rho^{T_A}$. $N(\rho)>0$ corresponds to an entangled state and
$N(\rho)=0$ corresponds to a separable one.
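As a numerical illustration of this definition (not part of the original text), the negativity of a bipartite density matrix can be computed from the eigenvalues of its partial transpose; the sample state below is a hypothetical maximally entangled two-qutrit state rather than the $\rho_f(t)$ of Eq.~(5).
\begin{verbatim}
import numpy as np

def negativity(rho, dA, dB):
    """N(rho) = (||rho^{T_A}||_1 - 1)/2 for a state on C^dA (x) C^dB."""
    rho = rho.reshape(dA, dB, dA, dB)
    rho_ta = rho.transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)
    eigvals = np.linalg.eigvalsh(rho_ta)      # rho^{T_A} is Hermitian
    return (np.sum(np.abs(eigvals)) - 1.0) / 2.0

# Hypothetical example: |psi> = (|00> + |11> + |22>)/sqrt(3), N = 1.
d = 3
psi = np.zeros(d * d)
for i in range(d):
    psi[i * d + i] = 1.0 / np.sqrt(d)
rho = np.outer(psi, psi)
print(negativity(rho, d, d))    # approximately 1.0
\end{verbatim}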
We diagonalize the partially transposed density matrix to evaluate the
negativity. From the calculation, one finds that the two cavity mode
frequencies have an indistinguishable effect on the entanglement, so
the frequencies are taken to be equal, as mentioned above. When
the angle $\theta$ is set to zero, a straightforward
calculation shows that the negativity is zero. This
indicates that if the initial state of the two-level atom is not a
superposition state, then the two cavity field modes will not become
entangled during the time evolution. The atomic initial state
therefore has an important influence on the production of
entangled cavity mode states. Numerical results for the entanglement
measure are presented in Figures 1 to 4.
Figure 1 shows the negativity as a function of time $T$. A
coherent superposition state is chosen as the atomic initial state.
Fig. 1(a) shows the case in which the cavity is in a two-mode vacuum
state. The negativity evolves periodically and the maximum value is
0.5, since the cavity system consists of two qutrits. Changing the vacuum
state to a more general state, Fig. 1(b) shows a novel feature.
The negativity changes non-smoothly and the period is obviously
smaller than that in Fig. 1(a). The maximum is 0.4 and does not reach
0.5. This can be understood as a consequence of the noise present in the
system. When one mode is in a vacuum state and the other is in a
non-zero photon state, the period and the amplitude in Fig. 1(c)
decrease sharply compared with those of Figs. 1(a) and 1(b). This
means that we must prepare almost the same photon number in the
two cavity modes in order to get higher entanglement and a longer
entanglement time. Comparing the panels of Fig. 1, the vacuum
state is more useful for the production of entanglement between
the cavity modes.
In Figs. 2(a-c), the negativity is shown as a function of the coupling
constant between the atom and the modes, $\eta$, with the Kerr medium
set to $\zeta=10$. In Fig. 2(a), the negativity oscillates
periodically with a maximum amplitude of 0.5 when the cavities are in
vacuum states. In Fig. 2(b), the negativity first climbs from zero
nearly linearly, and then increases in an oscillatory manner. In this case, the two
modes have the same photon number and are indistinguishable. For
$\eta=0$, the atom does not interact with the cavity modes, and the
negativity is zero. One can conclude that the coupling between the
atom and the modes suppresses the effect of the classical noise,
and the entanglement between the two cavity field modes is
enhanced accordingly. But it is not superior to the case in Fig.
2(a), in which the classical noise is absent. When one cavity mode is in the
vacuum state while the other is in a non-vacuum Fock state, the entanglement
increases slowly with increasing coupling constant, and
finally reaches the maximum value 0.5. These results can be seen
in Fig. 2(c). So, when the two cavity modes are in the same state,
especially the vacuum state, the negativity easily reaches the
ideal value.
The negativity is plotted in Fig. 3 as a function of the Kerr medium
coefficient $\zeta$ and the angle $\theta$. When the Rabi frequency
equals the transition frequency ($\eta=1$), the maximum value cannot
reach 0.5 even though the photon numbers in the two cavity modes are
equal. The negativity fluctuates periodically with $\theta$.
When $\theta=\frac{n\pi}{4}$ ($n$ odd), the negativity attains its maximum
value. The coupling between the two field modes has a negative
contribution to the entanglement. So the Kerr medium effect must be
controlled in order to obtain the anticipated entanglement.
In order to clearly show the effects of the initial state and the
evolution time on the entanglement production, the negativity as a
function of the angle $\theta$ and the time $T$ is plotted in Fig. 4. It
is found that the negativity is periodic in $\theta$ with period $\pi/2$ for
the different cavity field states. In Figs. 4(a) and 4(b), the two
cavity modes are indistinguishable. When the Kerr medium effect
increases, the entanglement between the two cavity modes becomes weaker,
so that the maximum of the negativity cannot reach 0.5 in Fig. 4(a).
Figure 4(b) shows that some small wave peaks emerge between two
maximal peaks when the two cavity modes are both in states with the
same photon number, which can be attributed to the noise since the
modes are not in the vacuum. This figure also supports the conclusion
that the photon numbers of the cavities affect the entanglement between the
two cavity modes. When one mode is changed to the vacuum, the negativity
decreases, as shown in Fig. 4(c). We also note that when the
absolute values of the coefficients of the atomic initial superposition
state are equal, the greatest mode entanglement is obtained.
In conclusion, we have investigated the entanglement between two
cavity modes. It is shown that the entanglement is sensitive to the
atomic initial state. A superposition initial state of the atom is
necessary to obtain an entangled state between the cavity modes.
Also, the number of photons inside the cavity, the Kerr medium, and the
coupling strength between the cavity and the atom can all greatly affect
the entanglement of the cavity modes.
\textbf{Acknowledgements}
This work was supported by the National Natural Science Foundation
of China (Grant Nos. 10604053, 2006CB932603, and 90305026) and the
BeiHang Lantian Project.
\section*{Caption}
Fig.1: Negativity as a function of time $T$, when $\theta=\pi/4$,
$\eta=1$, $\zeta=0$. (a) $n_1=n_2=0$; (b)
$n_1=n_2=100$; (c) $n_1=0$, $n_2=100$.
Fig.2: Negativity as a function of the coupling constant between
the atom and the cavity $\eta$, when $T=1$, $\theta=\pi/4$,
$\zeta=10$. (a) $n_1=n_2=0$; (b) $n_1=n_2=100$; (c) $n_1=0$,
$n_2=100$.
Fig.3: Surface plot of negativity as a function of Kerr medium $\zeta$ and phase angle
$\theta$, when $T=1$, $\eta=1$, $n_1=n_2=100$.
Fig.4: Surface plot of negativity as a function of time $T$ and
phase angle $\theta$, when $\eta=1$, $\zeta=10$. (a) $n_1=n_2=0$;
(b) $n_1=n_2=100$; (c) $n_1=0$, $n_2=100$.
\begin{figure}
\caption{Negativity as a function of time
$T$, when $\theta=\pi/4$, $\eta=1$, $\zeta=0$. (a) $n_1=n_2=0$;
(b)
$n_1=n_2=100$; (c) $n_1=0$, $n_2=100$.}
\end{figure}
\begin{figure}
\caption{Negativity as a function of the
coupling constant between the atom and the cavity $\eta$, when
$T=1$, $\theta=\pi/4$, $\zeta=10$. (a) $n_1=n_2=0$; (b)
$n_1=n_2=100$; (c) $n_1=0$, $n_2=100$. }
\end{figure}
\begin{figure}
\caption{Surface plot of negativity as a
function of Kerr medium $\zeta$ and phase angle $\theta$, when
$T=1$, $\eta=1$, $n_1=n_2=100$.}
\end{figure}
\begin{figure}
\caption{Surface plot of negativity as a
function of time $T$ and phase angle $\theta$, when $\eta=1$,
$\zeta=10$. (a) $n_1=n_2=0$; (b) $n_1=n_2=100$; (c) $n_1=0$,
$n_2=100$.}
\end{figure}
\end{document}
|
\begin{document}
\begin{abstract}
We construct a smooth Artin stack
parameterizing the stable weighted curves of genus one with twisted fields and prove that it is isomorphic to the blowup stack of the moduli of genus one weighted curves studied by Hu and Li.
This leads to a blowup-free construction of Vakil-Zinger's desingularization of the moduli of genus one stable maps to projective spaces.
This construction provides the cornerstone of the theory of stacks with twisted fields,
which is thoroughly studied in~\cite{HN2} and leads to a blowup-free resolution of the stable map moduli of genus two.
\end{abstract}
\maketitle
\section{Introduction}\label{Sec:Intro}
Moduli problems are of central importance in algebraic geometry.
Many moduli spaces possess arbitrary singularities~\cite{V}.
Among them, the moduli $\ov M_g(\mathbb P^n,d)$ of degree $d$ stable maps from genus $g$ nodal curves into projective spaces $\mathbb P^n$
are particularly important.
We aim to resolve the singularities of $\ov M_g(\mathbb P^n,d)$,
that is, to construct a new Deligne-Mumford stack that has smooth irreducible components and normal crossing boundaries and dominates $\ov M_g(\mathbb P^n,d)$ properly and birationally onto the primary component (the component whose general points have smooth domain curves).
The problem of resolution of singularities is arguably among the hardest ones
in algebraic geometry \cite{Hironaka64a, Hironaka64b, deJong96, K07}.
The stable map moduli are smooth if $g\!=\! 0$
and singular if $g\!\ge\!1$ and $n\!\ge\!2$.
For $g\!=\! 1$, a resolution was constructed by Vakil and Zinger~\cite{VZ08},
followed by an algebraic approach of Hu and Li~\cite{HL10}.
The latter is achieved by constructing a canonical smooth blowup
$\ti\mathfrak M^\tn{wt}_1$ of the Artin stack
$\mathfrak M^{\tn{wt}}_1$ of weighted nodal curves of genus one.
The method of \cite{HL10} was further developed in~\cite{HLN} to finally establish
a resolution in the case of $g\!=\! 2$.
The resolution of~\cite{HLN} is achieved by constructing a canonical smooth blowup
$\widetilde{\mathfrak P}_2$ of the relative Picard stack
${\mathfrak P}_2$ of nodal curves of genus two.
In higher genus cases, the construction of a possible resolution of the stable map moduli
may seem formidable. The constructions of the explicit resolutions in~\cite{VZ08,HL10,HLN} rely on certain precise knowledge on the singularities of the moduli.
For arbitrary genus, it calls for a more abstract and geometric approach.
As advocated by the first author,
every singular moduli space should admit a resolution which itself is also a moduli.
Following this principle,
we interpret the blowup stack $\ti\mathfrak M_1^\tn{wt}$ of~\cite{HL10} as a smooth
algebraic stack of stable weighted nodal curves of genus one with twisted fields,
and consequently, the resolution $\ti M_1(\mathbb P^n,d)$ of $\ov M_1(\mathbb P^n,d)$
as a Deligne-Mumford stack of genus one stable maps with twisted fields.
The results in this paper are the first step to tackle the arbitrary genus case.
The main theorem of this paper is the following:
\begin{thm}
\label{Thm:Main} There exists a smooth Artin stack $\mtd{1}$
parameterizing the weighted nodal curves of genus one with twisted fields,
along with a universal family $\mathcal C^\tn{tf}\longrightarrow\mtd 1$
and a proper and birational forgetful morphism $\varpi: \mtd 1\longrightarrow\mathfrak M^{\tn{wt}}_1$.
Moreover, $\mtd{1}/ \mathfrak M^{\tn{wt}}_1$ is isomorphic to
the blowup stack $\ti\mathfrak M_1^\tn{wt}/\mathfrak M^{\tn{wt}}_1$.
\end{thm}
We construct the strata of $\mtd 1$ and the forgetful map $\varpi$ in \S\ref{Sec:Set-theoretic}; see~(\ref{Eqn:MtdStrata}).
We then glue the strata of $\mtd 1$ together using smooth charts in \S\ref{Sec:Moduli} and
conclude that $\mtd 1$ is a smooth Artin stack and is birational to $\mathfrak M^{\tn{wt}}_1$ in Corollary~\ref{Crl:MtdSmooth}.
The universal family $\mathcal C^\tn{tf}\!\longrightarrow\!\mtd 1$ is described in Proposition~\ref{Prp:Stack}.
We finally show that $\mtd 1/\mathfrak M^{\tn{wt}}_1$ is isomorphic to $\ti\mathfrak M_1^\tn{wt}/\mathfrak M^{\tn{wt}}_1$ in Proposition~\ref{Prp:Isomorphism},
which implies the properness of $\varpi$.
These results together establish Theorem~\ref{Thm:Main}.
We remark that a direct approach to the properness of $\varpi$ (i.e.~without the comparison with the blowup $\ti\mathfrak M^\tn{wt}_1/\mathfrak M^{\tn{wt}}_1$) is provided in the proof of~\cite[Theorem~2.19($\mathsf{p_1}$)]{HN2},
in a more general setting.
We also point out that there should exist a groupoid, represented by $\mtd{1}$, that sends any scheme~$S$ to the set of the flat families of stable weighted nodal curves of genus 1 with twisted fields over $S$ as in~(\ref{Eqn:family});
see Remark~\ref{Rmk:moduli} for some details.
According to~\cite{HL10},
the resolution $\ti M_1(\mathbb P^n,d)$ of $\ov M_1(\mathbb P^n,d)$ is given by
$$\ti M_1(\mathbb P^n,d)= \ov M_1(\mathbb P^n,d)\times_{\mathfrak M^{\tn{wt}}_1}\ti\mathfrak M_1^\tn{wt},$$
where
$$\ov M_1(\mathbb P^n,d)\longrightarrow\mathfrak M^{\tn{wt}}_1,\qquad
[C,\mathbf u]\mapsto[C,c_1(\mathbf u^*\mathscr O_{\mathbb P^n}(1))]$$
and $\ti\mathfrak M^\tn{wt}_1\!\longrightarrow\!\mathfrak M^{\tn{wt}}_1$ is the canonical blowup.
Analogously,
we take
$$
\ti M_1^\tn{tf}(\mathbb P^n,d):=
\ov M_1(\mathbb P^n,d)\times_{\mathfrak M^{\tn{wt}}_1}\mtd 1,
$$
where $\mtd 1\!\longrightarrow\!\mathfrak M^{\tn{wt}}_1$ is the aforementioned forgetful morphism.
Theorem~\ref{Thm:Main} then leads to the following conclusion immediately.
\begin{crl}
\label{Crl:Main}
$\ti M_1^\tn{tf}(\mathbb P^n,d)$ is a proper Deligne-Mumford stack and is isomorphic to
$\ti M_1(\mathbb P^n,d)$.
\end{crl}
Via the above isomorphism and applying ~\cite{HL10},
one sees that the stack $\ti M_1^\tn{tf}(\mathbb P^n,d)$ provides a resolution of
$\ov M_1(\mathbb P^n,d)$.
Nonetheless, without relating to $\ti M_1(\mathbb P^n,d)$,
we can directly prove the resolution property of $\ti M_1^\tn{tf}(\mathbb P^n,d)$ by
investigating the local equations of $\ov M_1(\mathbb P^n,d)$ in~\cite{HL10}
and their pullbacks to $\ti M_1^\tn{tf}(\mathbb P^n,d)$;
see Remark~\ref{Rmk:smooth}.
The methods and ideas of this paper are essential to the development in \cite{HN2} and forthcoming works.
Based on the construction of $\mtd 1$,
we introduce the theory of \textsf{stacks with twisted fields} (\textsf{STF}) in~\cite[Theorem~2.19]{HN2}.
To be somewhat more informative,
we work on a smooth stack $\mathfrak M$ that has a stratification indexed by a set $\Gamma$ of graphs similar to~(\ref{Eqn:Mwt_strata}); see~\cite[Definition~2.15]{HN2}.
The graphs in $\Gamma$ need not come from the dual graphs as in~(\ref{Eqn:Mwt_strata}),
but the stratification of $\mathfrak M$ should resemble~(\ref{Eqn:V_(I)}) locally.
Moreover, $\Gamma$ need not consist of trees,
but it should contain the necessary information on the notion of the (weighted) level trees in Definition~\ref{Dfn:level_tree} so that we can add the twisted fields to the strata of $\mathfrak M$ parallel to~(\ref{Eqn:MtdStrat_and_sEcT}) and obtain a new stack $\mtd{}$; see~\cite[Definition~2.17]{HN2}.
Such $\mtd{}$ enjoys desirable properties as in Corollary~\ref{Crl:MtdSmooth} and Remark~\ref{Rmk:smooth}.
As an application of the STF theory,
in~\cite{HN2},
we construct a smooth Artin stack~$\mathfrak P_2^\tn{tf}$ of genus~2 nodal curves with line bundles and twisted fields, along with a proper and birational forgetful morphism $\mathfrak P_2^\tn{tf}\longrightarrow\mathfrak P_2$,
such that
$$
\ti M_2^\tn{tf}(\mathbb P^n,d)=\ov M_2(\mathbb P^n,d)\times_{{\mathfrak P}_2}\mathfrak P_2^\tn{tf}
\longrightarrow \ov M_2(\mathbb P^n,d)
$$
provides a resolution.
Further, we expect that they can be extended to arbitrary genus, as far as the existence of
moduli of nodal curves with twisted fields is concerned.
This is the main motivation of the current article.
In a related work~\cite{RSPW}, D.~Ranganathan, K.~Santos-Parker, and J.~Wise
provide a different modular perspective of $\ti M_1(\mathbb P^n,d)$ using logarithmic geometry.
\textbf{Acknowledgments.} We would like to thank Dawei Chen, Qile Chen, and Jack Hall for the valuable discussions.
\textbf{Convention.}
The subscript ``1'' of the relevant stacks indicating the genus appears only in \S\ref{Sec:Intro} and will be omitted starting from \S\ref{Sec:Set-theoretic},
as we only deal with the genus 1 case in this paper.
In particular,
we will denote by
$$
\mathfrak M^{\tn{wt}}\qquad\textnormal{and}\qquad
\mtd{}
$$
the aforementioned stacks $\mathfrak M^{\tn{wt}}_1$ and $\mtd 1$, respectively.
\section{Set-theoretic descriptions}
\label{Sec:Set-theoretic}
In \S\ref{Subsec:Level Tree},
we discuss the combinatorics of the dual graphs of nodal curves and introduce the notion of the weighted level trees.
They will be used to define $\mtd{}$ set-theoretically in \S\ref{Subsec:twisted_fields}.
\subsection{Weighted level trees}
\label{Subsec:Level Tree}
Let
$\gamma$ be a \textsf{rooted tree},
i.e.~a connected finite graph that contains no cycles, along with a special vertex $o$, called the \textsf{root}.
The sets of the vertices and the edges of $\gamma$ are denoted by
$$
\tn{Ver}(\gamma)\qquad\textnormal{and}\qquad
\tn{Edg}(\gamma),
$$
respectively. The set
$\tn{Ver}(\gamma)$ is endowed with a partial order, called the \textsf{tree order},
so that
$v\!\succ\!v'$ if and only if $v\!\ne\!v'$ and $v$ belongs to a path between $o$ and~$v'$.
The root $o$ is thus the unique maximal element of $\tn{Ver}(\gamma)$ with respect to the tree order.
For each $e\!\in\!\tn{Edg}(\gamma)$,
we denote by $v_e^\pm\!\in\!\tn{Ver}(\gamma)$ the endpoints of $e$ such that
$
v_e^+\!\succ\!v_e^-.
$
Then, every vertex $v\!\in\!\tn{Ver}(\gamma)\backslash\{o\}$ corresponds to a unique
$$
e_v\in\tn{Edg}(\gamma)\qquad
\textnormal{satisfying}\qquad
v_{e_v}^-=v.
$$
The tree order on $\tn{Ver}(\gamma)$ induces a partial order on $\tn{Edg}(\gamma)$, still called the \textsf{tree order},
so that
$$
e\succ e'\quad\Longleftrightarrow\quad
v_e^-\succeq v_{e'}^+\,.
$$
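As an aside (not part of the paper), the tree orders above can be made concrete with a small sketch in which a rooted tree is stored as a parent map; all names below are illustrative.
\begin{verbatim}
# Illustrative sketch: a rooted tree stored via a parent map, with the
# induced tree orders on vertices and edges. Each non-root vertex v
# determines the edge e_v = (parent[v], v).
parent = {"a": "o", "b": "o", "c": "a", "d": "a"}   # root "o"

def ancestors(v):
    """Vertices strictly above v on the path to the root o."""
    out = []
    while v in parent:
        v = parent[v]
        out.append(v)
    return out

def vertex_succ(v, w):
    """Tree order on vertices: v > w iff v != w and v lies on the path o--w."""
    return v != w and v in ancestors(w)

def edge_succ(e, f):
    """Induced order on edges: e > f iff v_e^- >= v_f^+, writing an edge
    as (v^+, v^-) with v^+ the endpoint closer to the root."""
    return e[1] == f[0] or vertex_succ(e[1], f[0])

e_a, e_c = ("o", "a"), ("a", "c")
print(vertex_succ("o", "d"), vertex_succ("c", "d"))   # True, False
print(edge_succ(e_a, e_c))                            # True: e_a > e_c
\end{verbatim}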
We call a pair ${\tau}\!=\!(\gamma,\mathbf w)$ consisting of a rooted tree $\gamma$ and a function $$\mathbf w:\tn{Ver}(\gamma)\longrightarrow\mathbb Z_{\ge 0}$$ a~\textsf{weighted tree}.
For such ${\tau}$,
we write
$\tn{Ver}({\tau})\!=\!\tn{Ver}(\gamma)$ and $\tn{Edg}({\tau})\!=\!\tn{Edg}(\gamma)$.
The set of all the weighted trees is denoted by $\mathscr T_\mathsf R^\tn{wt}$.
We call a map $\ell\!:\tn{Ver}(\gamma)\longrightarrow\mathbb R_{\le 0}$ satisfying
$$
\ell^{-1}(0)\!=\!\{o\}
\qquad\textnormal{and}
\qquad
\ell(v)\!>\!\ell(v')\ \ \textnormal{whenever}\ \ v\!\succ\!v'
$$
a \textsf{level map}.
For each $i\!\in\!\ell(\tn{Ver}(\gamma))\backslash\{0\}$,
let
\begin{equation}\label{Eqn:i^sharp}
i^\sharp=\min\big\{\,
k\!\in\!\ell\big(\tn{Ver}(\gamma)\big):\,
k\!>\!i\,\big\},
\end{equation}
i.e.~the level $i^\sharp$ is right ``above'' the level $i$;
see Figure~\ref{Fig:level_tree}.
We remark that a rooted tree along with a level map is called a {\it level graph} with the root as the unique {\it top level} vertex in~\cite[\S1.5]{BCGGM}.
\betagin{dfn}\lambdangle\!\lambdangleabel{Dfn:level_tree}
We call the tuple
\betagin{equation*}
\lambdangle\!\lambdanglet=\big(\,\gamma,\;
\mathbf w\!:\tn{Ver}(\gamma)\!\lambdangle\!\lambdangleongrightarrow\!\mathbb Z_{\ge 0}\,,\;
\ell\!:\tn{Ver}(\gamma)\!\lambdangle\!\lambdangleongrightarrow\!\mathbb R_{\lambdangle\!\lambdanglee 0}\,\big)
\end{equation*}
a \textsf{weighted level tree} if $(\gamma,\mathbf w)\!\in\!\mathscr T_\mathsf R^\tn{wt}$ and $\ell$ is a level map.
\end{dfn}
For every weighted level tree $\lambdangle\!\lambdanglet$ as above,
we write $\tn{Ver}(\lambdangle\!\lambdanglet)\!=\!\tn{Ver}(\gamma)$ and $\tn{Edg}(\lambdangle\!\lambdanglet)\!=\!\tn{Edg}(\gamma)$.
Set
\betagin{align*}
&{\mathbf{m}}={\mathbf{m}}(\lambdangle\!\lambdanglet)=
\max\big\{\ell(v):
v\!\in\!\tn{Ver}(\lambdangle\!\lambdanglet), \mathbf w(v)\!>\!0\big\}
&&(\,\lambdangle\!\lambdanglee 0\,),\\
&\wh\tn{Edg}(\lambdangle\!\lambdanglet)=
\big\{e\!\in\!\tn{Edg}(\lambdangle\!\lambdanglet):
\ell(v_e^+)\!>\!{\mathbf{m}}\big\}
&& \big(\,\subset\,\tn{Edg}(\lambdangle\!\lambdanglet)\big).
\end{align*}
For any two levels $i,j\!\in\!\mathbb R_{\lambdangle\!\lambdanglee 0}$,
we write
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:lrbr}
\lambdangle\!\lambdangleprp{i,j}_\lambdangle\!\lambdanglet=\ell\big(\tn{Ver}(\lambdangle\!\lambdanglet)\big)\!\cap\!(i,j),\quad
\lambdangle\!\lambdanglebrp{i,j}_\lambdangle\!\lambdanglet=
\ell\big(\tn{Ver}(\lambdangle\!\lambdanglet)\big)\!\cap\![i,j).
\end{equation}
For every $e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)$,
let
$$
\ell(e)=\max\{\ell(v_e^-),{\mathbf{m}}\}
\quad\big(\in\lambdangle\!\lambdanglebrp{{\mathbf{m}},0}_\lambdangle\!\lambdanglet\big).
$$
For each level $i\!\in\!\lambdangle\!\lambdanglebrp{{\mathbf{m}},0}_\lambdangle\!\lambdanglet$,
we set
\betagin{equation}
\lambdangle\!\lambdangleabel{Eqn:fE_i}
\mathfrak E_i=\mathfrak E_i(\lambdangle\!\lambdanglet)=\big\{\,e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet):
\ell(e)\!\lambdangle\!\lambdanglee\!i\!<\!\ell(v_e^+)\,\big\}.
\end{equation}
In other words,
$\mathfrak E_i$ consists of all the edges crossing the gap between the levels~$i$ and $i^\sharp$.
We remark that all the notions in the preceding paragraph depend on the weighted level tree $\lambdangle\!\lambdanglet$,
although
we may hereafter omit $\lambdangle\!\lambdanglet$ in any of such notions when the context is clear.
Every weighted level tree $\lambdangle\!\lambdanglet$ determines a unique index set
\betagin{equation}\betagin{split}\lambdangle\!\lambdangleabel{Eqn:bbI}
&\mathbb I(\lambdangle\!\lambdanglet) =
\mathbb I_+(\lambdangle\!\lambdanglet)\sqcup\mathbb I_{\mathbf{m}}(\lambdangle\!\lambdanglet)\sqcup\mathbb I_-(\lambdangle\!\lambdanglet),
\hspace{.59in}\textnormal{where}\quad
\mathbb I_+(\lambdangle\!\lambdanglet)\!=\!\lambdangle\!\lambdanglebrp{{\mathbf{m}},0}_\lambdangle\!\lambdanglet,\\
&\mathbb I_{\mathbf{m}}(\lambdangle\!\lambdanglet)\!=\! \{e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)\!:\ell(v_e^-)\!<\!{\mathbf{m}}\},\qquad
\mathbb I_-(\lambdangle\!\lambdanglet)\!=\!
\big(\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\wh\tn{Edg}(\lambdangle\!\lambdanglet)\big).
\end{split}\end{equation}
The set $\mathbb I_+(\lambdangle\!\lambdanglet)$ becomes empty if ${\mathbf{m}}\!=\! 0$,
i.e.~the root $o$ is positively weighted.
As mentioned before,
we may simply write
$$
\mathbb I=\mathbb I(\lambdangle\!\lambdanglet),\quad
\mathbb I_\pm=\mathbb I_\pm(\lambdangle\!\lambdanglet),\quad
\mathbb I_{\mathbf{m}}=\mathbb I_{\mathbf{m}}(\lambdangle\!\lambdanglet)
$$
when the context is clear.
For each $I\!\subset\!\mathbb I$ (possibly empty), let
$$
I_{\mathbf{m}}=I\cap\mathbb I_{\mathbf{m}},\qquad
I_\pm=I\cap\mathbb I_\pm.
$$
We construct a weighted level tree
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:t(I)}
\lambdangle\!\lambdanglet_{(I)}=
\big(\tau_{(I)},\ell_{(I)}\big)=
\big(\,\gamma_{(I)},\,\mathbf w_{(I)},\,
\ell_{(I)}\,\big)
\end{equation}
as follows:
\betagin{itemize}
[leftmargin=*]
\item
the rooted tree $\gamma_{(I)}$ is obtained via the edge contraction
$$
\pi_{(I)}: \tn{Ver}(\gamma)\twoheadrightarrow\tn{Ver}\big(\gamma_{(I)}\big)
$$
such that the set of the contracted edges is
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:edges_cntrd}
\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\tn{Edg}(\lambdangle\!\lambdanglet_{(I)})\!=\!
\big\{e\!\in\!\big(\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\mathbb I_{\mathbf{m}}\big)\!\sqcup\! I_{\mathbf{m}}\!:
\lambdangle\!\lambdanglebrp{\ell(e),\ell(v_e^+)}_\lambdangle\!\lambdanglet\!\subset\!I_+\big\}\!\sqcup\!I_-;
\end{equation}
\item
the weight function $\mathbf w_{(I)}$ is given by
$$
\mathbf w_{(I)}\!:\tn{Ver}\big(\gamma_{(I)}\big)\lambdangle\!\lambdangleongrightarrow\mathbb Z_{\ge 0},\qquad
\mathbf w_{(I)}(v)=\!\!\sum_{v'\in\pi_{(I)}^{-1}(v)}
\!\!\!\!\!\mathbf w(v');
$$
\item
the level map $\ell_{(I)}$ is such that for any $e\!\in\!\tn{Edg}\big(\gamma_{(I)}\big)~\big(\subset\!\tn{Edg}(\lambdangle\!\lambdanglet)\big)$,
$$
\ell_{(I)}(v_e^-)=
\betagin{cases}
\min\{i\!\in\!\mathbb I_+\!\backslash I_+\!:\,
i\!\ge\!\ell(v)~\forall\,v\!\in\!\pi^{-1}_{(I)}(v_e^-)\} &
\textnormal{if}~e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)\!\backslash\mathbb I_{\mathbf{m}},
\\
\min(\mathbb I_+\!\backslash I_+) &
\textnormal{if}~e\!\in\! I_{\mathbf{m}},
\\
\max\{\ell(v)\!:\,
v\!\in\!\pi^{-1}_{(I)}(v_e^-)\} &
\textnormal{if}~e\!\in\! (\mathbb I_{\mathbf{m}}\!\backslash I_{\mathbf{m}})\!\sqcup\!\mathbb I_-.
\end{cases}
$$
\end{itemize}
It is a direct check that ${\tau}_{(I)}$ is a weighted tree and $\ell_{(I)}$ satisfies the criteria of a level map,
hence~(\rangle\!\rangleef{Eqn:t(I)}) gives a well defined weighted level tree.
The construction of $\lambdangle\!\lambdanglet_{(I)}$ implies
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:t_(I)_bbI}
\betagin{split}
&
\mathbb I_+\big(\lambdangle\!\lambdanglet_{(I)}\big)=\mathbb I_+\backslash I_+,\qquad
{\mathbf{m}}\big(\lambdangle\!\lambdanglet_{(I)}\big)=\min\big(
(\mathbb I_+\backslash I_+)\!\sqcup\!\{0\}\big),
\\
&\mathbb I_{\mathbf{m}}\big(\lambdangle\!\lambdanglet_{(I)}\big)=
\big\{e\!\in\!\mathbb I_{\mathbf{m}}\backslash I_{\mathbf{m}}\!:
\ell(v_e^+)\!>\!{\mathbf{m}}\big(\lambdangle\!\lambdanglet_{(I)}\big)\big\},\\
&
\mathbb I_-\big(\lambdangle\!\lambdanglet_{(I)}\big)=\mathbb I_-\backslash I_-
\sqcup\big\{e\!\in\!\mathbb I_{\mathbf{m}}\backslash I_{\mathbf{m}}\!:
\ell(v_e^+)\!\lambdangle\!\lambdanglee\!{\mathbf{m}}\big(\lambdangle\!\lambdanglet_{(I)}\big)\big\}
.
\end{split}
\end{equation}
Intuitively,
the weighted level tree $\lambdangle\!\lambdanglet_{(I)}$ is obtained from $\lambdangle\!\lambdanglet$ by contracting all the edges labeled by $I_-$,
then lifting all the vertices $v$ with $e_v\!\in\! I_{\mathbf{m}}$ to the level ${\mathbf{m}}$,
and finally contracting all the levels in $I_+$.
Such $\lambdangle\!\lambdanglet_{(I)}$ will be used to describe the local structure of the stack $\mtd{}$ in \S\rangle\!\rangleef{Sec:Moduli}.
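As an informal illustration (not part of the paper), the first step of this reduction---contracting a prescribed set of edges and aggregating the weights onto the surviving vertices---can be sketched as follows; the data structures and names are ours, and the level-lifting and level-contraction steps are omitted.
\begin{verbatim}
def contract_edges(parent, weight, edges_to_contract):
    """parent: child -> parent map (root absent as a key);
    weight: vertex -> non-negative integer weight;
    edges_to_contract: set of child vertices v whose edge e_v is contracted.
    Returns the parent and weight maps of the contracted tree."""
    def survivor(v):
        # Follow contracted edges upward until a surviving vertex is reached.
        while v in edges_to_contract:
            v = parent[v]
        return v

    new_parent = {v: survivor(p) for v, p in parent.items()
                  if v not in edges_to_contract}
    new_weight = {}
    for v, w in weight.items():
        s = survivor(v)
        new_weight[s] = new_weight.get(s, 0) + w
    return new_parent, new_weight

parent = {"a": "o", "b": "o", "c": "a"}
weight = {"o": 1, "a": 0, "b": 2, "c": 3}
print(contract_edges(parent, weight, {"a"}))
# ({'b': 'o', 'c': 'o'}, {'o': 1, 'b': 2, 'c': 3})
\end{verbatim}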
\betagin{dfn}\lambdangle\!\lambdangleabel{Dfn:WLT_equiv}
Two weighted level trees $\lambdangle\!\lambdanglet\!=\!(\gamma,\mathbf w,\ell)$ and $\lambdangle\!\lambdanglet'\!=\!(\gamma',\mathbf w',\ell')$ are said to be \textsf{equivalent}, written as
$
\lambdangle\!\lambdanglet\sigmam\lambdangle\!\lambdanglet',
$
if
\betagin{enumerate}
[leftmargin=*,label=(E\arabic*)]
\item\lambdangle\!\lambdangleabel{Cond:WLT_equiv_WT} $(\gamma,\mathbf w)\!=\!(\gamma',\mathbf w')$ as weighted trees;
\item for any $v,w\!\in\!\tn{Ver}(\gamma)$ satisfying $\ell(v)\!=\!\ell(w)\!\ge\!{\mathbf{m}}(\lambdangle\!\lambdanglet)$, we have
$$\ell'(v)=\ell'(w);$$
\item for any $v,w\!\in\!\tn{Ver}(\gamma)$ satisfying $\ell(v)\!>\!\ell(w)$ and $\ell(v)\!\ge\!{\mathbf{m}}(\lambdangle\!\lambdanglet)$, we have
$$\ell'(v)>\ell'(w).$$
\end{enumerate}
\end{dfn}
It is a direct check that $\sigmam$ is an equivalence relation on the set of weighted level trees.
Intuitively, this equivalence relation records the relative positions of the vertices {\it above} or {\it in} the level ${\mathbf{m}}(\lambdangle\!\lambdanglet)$; see Figure~\rangle\!\rangleef{Fig:level_tree} for illustration.
\betagin{figure}
\betagin{center}
\betagin{tikzpicture}{htb}
\draw[dotted]
(0.4,0)--(5.6,0)
(0.4,-.8)--(5.6,-.8)
(0.4,-1.6)--(5.6,-1.6)
(0.4,-2.4)--(5.6,-2.4);
\draw
(3,0)--(1.8,-1.6)--(2.4,-2.4)
(1.8,-1.6)--(1.8,-2.8)
(1.8,-1.6)--(0.9,-2.8)
(3,0)--(4,-.8)--(3.2,-2.4)
(3.6,-1.6)--(4,-2.4)
(4,-.8)--(5.2,-3.2)
(5,-2.8)--(4.8,-3.2);
\draw[very thick]
(3,0)--(1.8,-1.6)
(3,0)--(4,-.8)
(3.6,-1.6)--(4,-2.4);
\draw[fill=white]
(3,0) circle (1.5pt)
(4,-.8) circle (1.5pt)
(3.6,-1.6) circle (1.5pt)
(1.8,-1.6) circle (1.5pt)
(5,-2.8) circle (1.5pt)
(8.5,-2.4) circle (1.5pt);
\filldraw
(3.2,-2.4) circle (1.5pt)
(4,-2.4) circle (1.5pt)
(.9,-2.8) circle (1.5pt)
(1.8,-2.4) circle (1.5pt)
(1.8,-2.8) circle (1.5pt)
(2.4,-2.4) circle (1.5pt)
(5.2,-3.2) circle (1.5pt)
(4.8,-3.2) circle (1.5pt)
(8.5,-2.8) circle (1.5pt);
\draw
(5.8,0) node[right] {\scriptsize{$0=-1[1]=-2[1]=-3[2]=-1^\sharp$}}
(5.8,-.8) node[right] {\scriptsize{$-1=-2^\sharp$}}
(5.8,-1.6) node[right] {\scriptsize{$-2=-3[1]=-3^\sharp$}}
(5.8,-2.4) node[right] {\scriptsize{${\mathbf{m}}=-3$}}
(4.1,-2) node {\scriptsize{$\mathsf e_{-3}$}}
(2.2,-0.6) node {\scriptsize{$\mathsf e_{-2}$}}
(3.75,-.3) node {\scriptsize{$\mathsf e_{-1}$}}
(3,0) node[above] {\scriptsize{$o=\mathsf v_{-1}^+=\mathsf v_{-2}^+$}}
(4.2,-0.65) node {\scriptsize{$\mathsf v_{-1}$}}
(1.55,-1.4) node {\scriptsize{$\mathsf v_{-2}$}}
(3.4,-1.35) node {\scriptsize{$\mathsf v_{-3}^+$}}
(4,-2.4) node[below] {\scriptsize{$\mathsf v_{-3}$}}
(8.5,-2.4) node[right] {\scriptsize{$:\textnormal{vertex~of~weight}~0$}}
(8.5,-2.8) node[right] {\scriptsize{$:\textnormal{vertex~of~positive~weight}$}};
\draw[dashed]
(8.2,-2.1) rectangle (12.2,-3.1);
\end{tikzpicture}
\end{center}
\caption{A weighted level tree with chosen $\mathsf v_{-1},\mathsf v_{-2},$ and $\mathsf v_{-3}$}\lambdangle\!\lambdangleabel{Fig:level_tree}
\end{figure}
We denote by
$\mathscr T_\mathsf L^\tn{wt}$ the set of the {\it equivalence classes} of the weighted level trees.
There is a natural forgetful map
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:ff}
\mathfrak f:\mathscr T_\mathsf L^\tn{wt}\lambdangle\!\lambdangleongrightarrow\mathscr T_\mathsf R^\tn{wt},\qquad
[\gamma,\mathbf w,\ell]\mapsto(\gamma,\mathbf w),
\end{equation}
which is well defined by Condition~\rangle\!\rangleef{Cond:WLT_equiv_WT} of Definition~\rangle\!\rangleef{Dfn:WLT_equiv}.
If $\lambdangle\!\lambdanglet\!\sigmam\!\lambdangle\!\lambdanglet'$,
then ${\mathbf{m}}(\lambdangle\!\lambdanglet)\!=\!{\mathbf{m}}(\lambdangle\!\lambdanglet')$, and there exists a bijection
$$
\phi_{\lambdangle\!\lambdanglet',\lambdangle\!\lambdanglet}:\mathcal P\big(\mathbb I(\lambdangle\!\lambdanglet)\big)\lambdangle\!\lambdangleongrightarrow
\mathcal P\big(\mathbb I(\lambdangle\!\lambdanglet')\big),\qquad
\phi_{\lambdangle\!\lambdanglet',\lambdangle\!\lambdanglet}(I)=\ell'\big(\ell^{-1}(I_+)\big)\sqcup I_{{\mathbf{m}}(\lambdangle\!\lambdanglet)}\sqcup I_-\,,
$$
where $\mathcal P(\cdot)$ denotes the power set.
The next lemma follows from direct check.
\betagin{lmm}\lambdangle\!\lambdangleabel{Lm:T^wt_equiv}
If $\lambdangle\!\lambdanglet\!\sigmam\!\lambdangle\!\lambdanglet'$,
then
$\lambdangle\!\lambdanglet_{(I)}\!\sigmam\!\lambdangle\!\lambdanglet'_{(\phi_{\lambdangle\!\lambdanglet',\lambdangle\!\lambdanglet}(I))}$ for any $I\!\subset\!\mathbb I(\lambdangle\!\lambdanglet)$.
\end{lmm}
\subsection{Twisted fields}
\lambdangle\!\lambdangleabel{Subsec:twisted_fields}
For every genus 1 nodal curve $C$,
its dual graph $\gamma_C^\star$ has either a unique vertex $o$ corresponding to the genus 1 irreducible component of $C$ or a unique loop.
In the former case,
$\gamma_C^\star$ can be considered as a rooted tree with the root~$o$;
in the latter case,
we contract the loop to a single vertex $o$ and
obtain a rooted tree with the root $o$.
Such defined rooted tree is denoted by $\gamma_C$ and called the \textsf{reduced dual tree} of $C$ (c.f.~\cite[\S3.4]{HL10}).
We call the minimal connected genus 1 subcurve of $C$ the \textsf{core} and denote it by $C_o$.
Other irreducible components of $C$ are smooth rational curves and denoted by $C_v$, $v\!\in\!\tn{Ver}(\gamma_C)\backslash\{o\}$.
For every incident pair $(v,e)$,
let
\betagin{equation}
\lambdangle\!\lambdangleabel{Eqn:Nodal}
q_{v;e}\in C_v
\end{equation}
be the \textsf{nodal point} corresponding to the edge $e$.
Let
$
\mathfrak M^{\tn{wt}}
$
be
the Artin stack of genus 1 stable weighted curves introduced in~\cite[\S2.1]{HL10}.
Here the subscript ``1'' indicating the genus is omitted as per our convention.
The stack $\mathfrak M^{\tn{wt}}$ consists of the pairs $(C,\mathbf w)$ of genus 1 nodal curves $C$ with non-negative weights $\mathbf w\!\in\! H^2(C,\mathbb Z)$,
meaning that $\mathbf w(\Sigma)\!\ge\!0$ for all irreducible $\Sigma\!\subset\!C$.
Here $(C,\mathbf w)$ is said to be \textsf{stable} if every rational irreducible component of weight $0$ contains at least three nodal points.
The weight of the core $\mathbf w(C_o)$ is defined as the sum of the weights of all irreducible components of the core.
Every $(C,\mathbf w)\!\in\!\mathfrak M^{\tn{wt}}$ uniquely determines a function
$$\mathbf w :\tn{Ver}(\gamma_C)\lambdangle\!\lambdangleongrightarrow\mathbb Z_{\ge 0},\qquad
v\mapsto \mathbf w(C_v),$$
which makes the pair $(\gamma_C,\mathbf w)$ a weighted tree,
called the \textsf{weighted dual tree}.
Thus, the stack $\mathfrak M^{\tn{wt}}$ can be stratified as
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:Mwt_strata}
\mathfrak M^{\tn{wt}}=\bigsqcup_{{\tau}\in\mathscr T_\mathsf R^\tn{wt}}
\!\!\!\mathfrak M^{\tn{wt}}_{{\tau}}~=
\bigsqcup_{{\tau}\in\mathscr T_\mathsf R^\tn{wt}}
\!\!\!\big\{\,
(C,\mathbf w)\!\in\!\mathfrak M^{\tn{wt}}\!:(\gamma_C,\mathbf w)\!=\!{\tau} \,
\big\}.
\end{equation}
If the sum of the weights of all vertices is fixed,
the stability condition of $\mathfrak M^{\tn{wt}}$ then guarantees there are only finitely many ${\tau}\!\in\!\mathscr T_\mathsf R^\tn{wt}$ so that $\mathfrak M^{\tn{wt}}_{\tau}$ is non-empty.
Given ${\tau}\!=\!(\gamma,\mathbf w)\!\in\!\mathscr T_\mathsf R^\tn{wt}$
and $e\!\in\!\tn{Edg}({\tau})$,
let $$L_e^\pm\lambdangle\!\lambdangleongrightarrow\mathfrak M^{\tn{wt}}_{\tau}$$ be the line bundles whose fibers over a weighted curve $(C,\mathbf w)$ are the tangent vectors of the irreducible components $C_{v_e^\pm}$ at the nodal points $q_{v_e^\pm;e}$, respectively.
We take
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:Le}
L_e= L_e^+\otimes L_e^-
\lambdangle\!\lambdangleongrightarrow \mathfrak M^{\tn{wt}}_{\tau},\qquad
L_e^\succeq=
\!\!\!\!\!\!\bigotimes_{
\betagin{subarray}{c}
e'\in\tn{Edg}({\tau}),\,
e'\succeq e
\end{subarray}
}\!\!\!\!\!\!\!\!\!\!
L_{e'}\lambdangle\!\lambdangleongrightarrow\mathfrak M^{\tn{wt}}_{\tau}.
\end{equation}
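For instance, reading off the explicit point $x$ displayed in \S\rangle\!\rangleef{Subsec:Eg} below (with the edge labels of Figure~\rangle\!\rangleef{Fig:Example}, where $a$ and $b$ are adjacent to the root and $c$ lies below $b$), one has
$$
L_a^\succeq=L_a,\qquad
L_b^\succeq=L_b,\qquad
L_c^\succeq=L_b\otimes L_c .
$$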
\iffalse
The following lemma is a restatement of a well-known fact of the moduli of curves in our setup;
see~\cite[Proposition~3.31]{HM}.
\betagin{lmm}\lambdangle\!\lambdangleabel{Lm:Normal_bundle_Mwt}
The normal bundle of the stratum $\mathfrak M^{\tn{wt}}_{{\tau}}$ in $\mathfrak M^{\tn{wt}}$ is given by
\betagin{equation}
\lambdangle\!\lambdangleabel{Eqn:Nga}
\mathscr N_{\tau}=
\Big(\bigoplus_{e\in \tn{Edg}({\tau})}\!\!\!\!L_e\Big)
\lambdangle\!\lambdangleongrightarrow \mathfrak M^{\tn{wt}}_{\tau}.
\end{equation}
\end{lmm}
\fi
For any direct sum of line bundles $V\!=\!\oplus_mL_m'$ (over any base),
we write
$$
\mathring\mathbb P(V):=
\big\{
\big(x,[v_m]\big)\!\in\!\mathbb P(V):
v_m\!\ne\!0~\forall\,m
\big\}.
$$
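For instance, if the base is a single point and $V\!=\!L_1'\!\oplus\!L_2'$, then $\mathbb P(V)\!\cong\!\mathbb P^1$ and
$$
\mathring\mathbb P(V)=
\mathbb P(V)\backslash\big\{[1\!:\!0],[0\!:\!1]\big\}
\cong\mathbb A^* .
$$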
For any morphisms $M_1,\lambdangle\!\lambdangledots, M_k\!\lambdangle\!\lambdangleongrightarrow\! S$,
we write
$$
\prod_{1\lambdangle\!\lambdanglee i\lambdangle\!\lambdanglee k}
\!(M_i/S) \,
:=
M_1\times_S M_2\times_S\cdots
\times_S M_k.
$$
With notation as above,
given ${\tau}\!\in\!\mathscr T_\mathsf R^\tn{wt}$ and $[\lambdangle\!\lambdanglet]\!=\!\big[{\tau},\ell\big]\!\in\!\mathscr T_{\mathsf L}^\tn{wt}$,
let
\betagin{equation}\betagin{split}
\lambdangle\!\lambdangleabel{Eqn:MtdStrat_and_sEcT}
\varpi:\mtd{[\lambdangle\!\lambdanglet]}=
\bigg(
\prod_{i\in\mathbb I_+(\lambdangle\!\lambdanglet)}\!\!\!
\Big\lambdangle\!\lambdanglegroup\!
\Big(\mathring\mathbb P\big(\!
\bigoplus_{\!
\betagin{subarray}{c}
e\in\tn{Edg}(\lambdangle\!\lambdanglet),\,
\ell(v_e^-)=i
\end{subarray}
}\!\!\!\!\!\!\!\!\!\!\!
L_e^\succeq\;
\big)\!\Big)\Big/\mathfrak M^{\tn{wt}}_{\tau}\Big\rangle\!\ranglegroup\!\!
\bigg)
&\lambdangle\!\lambdangleongrightarrow\mathfrak M^{\tn{wt}}_{{\tau}}\,,\\
\mathscr E_{[\lambdangle\!\lambdanglet]}=
\bigg(
\prod_{i\in\mathbb I_+(\lambdangle\!\lambdanglet)}
\!\!\!
\Big\lambdangle\!\lambdanglegroup
\Big(\mathbb P\big(\!
\bigoplus_{e\in\mathfrak E_i}L_e^\succeq
\big)\!\Big)\Big/\mathfrak M^{\tn{wt}}_{\tau}\Big\rangle\!\ranglegroup\!\!
\bigg)
&\lambdangle\!\lambdangleongrightarrow\mathfrak M^{\tn{wt}}_{{\tau}}\,,
\end{split}\end{equation}
where $\mathbb I_+(\lambdangle\!\lambdanglet)$, $L_e^\succeq$, and $\mathfrak E_i$ are as in~(\rangle\!\rangleef{Eqn:bbI}),~(\rangle\!\rangleef{Eqn:Le}), and~(\rangle\!\rangleef{Eqn:fE_i}), respectively.
It is straightforward to check that both bundles in~(\rangle\!\rangleef{Eqn:MtdStrat_and_sEcT}) are independent of the choice of the weighted level tree $\lambdangle\!\lambdanglet$ representing $[\lambdangle\!\lambdanglet]$.
Since
$$
\big\{e\!\in\!\tn{Edg}(\lambdangle\!\lambdanglet)\!:\ell(v_e^-)\!=\! i
\big\}\subset\mathfrak E_i
\qquad\forall~i\!\in\!\mathbb I_+(\lambdangle\!\lambdanglet),
$$
we see that
$\mtd{[\lambdangle\!\lambdanglet]}$ is a subset of $\mathscr E_{[\lambdangle\!\lambdanglet]}$.
In addition,
since each stratum $\mathfrak M^{\tn{wt}}_{\tau}$ is an algebraic stack,
so are $\mtd{[\lambdangle\!\lambdanglet]}$ and $\mathscr E_{[\lambdangle\!\lambdanglet]}$.
Using~(\rangle\!\rangleef{Eqn:MtdStrat_and_sEcT}) and~(\rangle\!\rangleef{Eqn:Mwt_strata}),
we define
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:MtdStrata}
\mtd{}:=\bigsqcup_{[\lambdangle\!\lambdanglet]\in\mathscr T_\mathsf L^\tn{wt}}\!\!
\mtd{[\lambdangle\!\lambdanglet]}~\stackrel{\varpi}{\lambdangle\!\lambdangleongrightarrow}~
\mathfrak M^{\tn{wt}}.
\end{equation}
This is the set-theoretic definition of the proposed stack $\mtd{}$ as well as the forgetful map in Theorem~\rangle\!\rangleef{Thm:Main}.
For any $x\!\in\!\mathfrak M^{\tn{wt}}_{\tau}$,
the points of the fiber $\mtd{[\lambdangle\!\lambdanglet]}\big|_x$ are called the \textsf{twisted fields} over $x$.
\betagin{rmk}
By~(\rangle\!\rangleef{Eqn:MtdStrata}),
$\ti M_1^\tn{tf}(\mathbb P^n,d)$ in Corollary~\rangle\!\rangleef{Crl:Main} consists of the tuples
$$
\big(\,C,\,\mathbf u,\,[\lambdangle\!\lambdanglet],\,\upsilond\etaa\,\big),
$$
where $(C,\mathbf u)$ are stable maps in $\ov M_1(\mathbb P^n,d)$,
$[\lambdangle\!\lambdanglet]$ are the equivalence classes of weighted level trees satisfying $\mathfrak f[\lambdangle\!\lambdanglet]\!=\!\big(\gamma_C,c_1(\mathbf u^*\mathscr O_{\mathbb P^n}(1))\big)$,
and $\upsilond\etaa$ are twisted fields over $\big(C,c_1(\mathbf u^*\mathscr O_{\mathbb P^n}(1))\big)$.
\end{rmk}
\mathsf ection{The stack structure of $\mtd{}$}
\lambdangle\!\lambdangleabel{Sec:Moduli}
In \S\rangle\!\rangleef{Sec:Moduli}, we show $\mtd{}$ is naturally a smooth Artin stack and describe its universal family.
\iffalse
We construct affine smooth charts of~$\mtd{}$ and show they are compatible with each other in \S\rangle\!\rangleef{Subsec:Charts_of_Mtf}.
This implies $\mtd{}$ is a smooth Artin stack.
We then show $\mtd{}$ is a moduli stack in \S\rangle\!\rangleef{Subsec:Universal_family} by constructing its universal family.
\fi
\subsection{Twisted charts}\lambdangle\!\lambdangleabel{Subsec:Charts_of_Mtf}
We first fix
$[\lambdangle\!\lambdanglet]\!=\!\big[\gamma,\mathbf w,\ell\big]\!\in\!\mathscr T_{\mathsf L}^\tn{wt}
$ and $x\!\in\!\mtd{[\lambdangle\!\lambdanglet]}$,
and write
$$
{\tau}=\mathfrak f[\lambdangle\!\lambdanglet]=(\gamma,\mathbf w)\in\mathscr T_\mathsf R^\tn{wt},\qquad
(C,\mathbf w)=\varpi(x)\in\mathfrak M^{\tn{wt}}_{\tau}.
$$
Since $\mathfrak M^{\tn{wt}}$ is smooth,
we take an affine smooth chart
$$
\mathcal V=\mathcal V_{\varpi(x)}\lambdangle\!\lambdangleongrightarrow\mathfrak M^{\tn{wt}}
$$ containing $(C,\mathbf w)$.
\iffalse
Shrinking $\mathcal V$ if necessary,
we assume that
$$
\big(\ov{\mathfrak M^{\tn{wt}}_{{\tau}_{(I)}}}\backslash\mathfrak M^{\tn{wt}}_{{\tau}_{(I)}}\big)\cap\mathcal V=\emptyset\qquad
\forall~I\!\subset\!\mathbb I\!=\!\mathbb I(\lambdangle\!\lambdanglet),
$$
where $\ov{\mathfrak M^{\tn{wt}}_{{\tau}_{(I)}}}$ denotes the closure of $\mathfrak M^{\tn{wt}}_{{\tau}_{(I)}}$ in $\mathfrak M^{\tn{wt}}$;
see~(\rangle\!\rangleef{Eqn:bbI}) and~(\rangle\!\rangleef{Eqn:t(I)}) for notation.
\fi
As in~\cite[\S4.3]{HL10} and~\cite[\S2.5]{HLN},
there exists a set of regular functions
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:modular_parameters}
\zeta_e\in\Gamma(\mathscr O_\mathcal V)\quad\textnormal{with}\quad e\in\tn{Edg}(\gamma),
\end{equation}
called the \textsf{modular parameters},
so that for each $e\!\in\!\tn{Edg}(\gamma)$, the locus
$$\mathcal Z_e=\{\zeta_e\!=\! 0\}\subset\mathcal V$$ is the irreducible smooth Cartier divisor on $\mathcal V$ where the node labeled by $e$ is not smoothed.
For any $I\!\subset\!\mathbb I\!=\!\mathbb I(\lambdangle\!\lambdanglet)$, let
\betagin{align*}
\mathcal V_{(I)}^\circ
&:=
\big\{\,
\zeta_{e'}\!\ne\!0\!: e'\!\in\!\big(\tn{Edg}(\gamma)\backslash\tn{Edg}(\gamma_{(I)})\big)\,\big\}&\subset\mathcal V,&
\\
\mathcal V_{{\tau}_{(I)}}\!
&:=\mathcal V_{(I)}^\circ\cap\big\{\,
\zeta_e\!=\! 0\!: e\!\in\!\tn{Edg}\big(\gamma_{(I)}\big)\,\big\}& \subset\mathcal V_{(I)}^\circ\subset\mathcal V.&
\end{align*}
Then, $\mathcal V_{(I)}^\circ$ is an open subset of $\mathcal V$. Shrinking $\mathcal V$ if necessary,
we see that
\betagin{equation}
\betagin{split}
\lambdangle\!\lambdangleabel{Eqn:V_(I)}
\mathcal V_{{\tau}_{(I)}}\in\pi_0\big(
\mathfrak M^{\tn{wt}}_{{\tau}_{(I)}}\cap\mathcal V
\big),
\end{split}
\end{equation}
where $\pi_0$ denotes the set of the connected components.
In particular, $\mathcal V_{{\tau}_{(I)}}$ can be considered as a smooth chart of the stratum $\mathfrak M^{\tn{wt}}_{{\tau}_{(I)}}$.
Strictly speaking, the sets $\mathbb I$ and $I$ depend on the choice of the weighted level tree $\lambdangle\!\lambdanglet$ representing $[\lambdangle\!\lambdanglet]$;
however, $\mathcal V_{(I)}^\circ$ and $\mathcal V_{{\tau}_{(I)}}$ are independent of such a choice;
see Lemma~\rangle\!\rangleef{Lm:T^wt_equiv}.
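For instance, taking $I\!=\!\emptyset$ (so that $\lambdangle\!\lambdanglet_{(\emptyset)}\!=\!\lambdangle\!\lambdanglet$ and ${\tau}_{(\emptyset)}\!=\!{\tau}$), the definitions above give
$$
\mathcal V_{(\emptyset)}^\circ=\mathcal V
\qquad\textnormal{and}\qquad
\mathcal V_{{\tau}_{(\emptyset)}}=
\mathcal V\cap\big\{\,\zeta_e\!=\! 0\!:e\!\in\!\tn{Edg}(\gamma)\,\big\},
$$
the latter being the connected component of $\mathfrak M^{\tn{wt}}_{{\tau}}\cap\mathcal V$ containing the center $(C,\mathbf w)$.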
Given a set of the modular parameters as in~(\rangle\!\rangleef{Eqn:modular_parameters}),
we may extend it to a set of local parameters on~$\mathcal V$ centered at $(C,\mathbf w)$:
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:local_parameters}
\{\zeta_e\}_{e\in\tn{Edg}(\gamma)}\cup
\{\varsigma_j\}_{j\in J}\qquad
\textnormal{with}\quad
(C,\mathbf w)=\upsilond 0:=(0,\lambdangle\!\lambdangledots,0)\,,
\end{equation}
where $J$ is a finite set.
We do not impose any other conditions on the $\varsigma_j$.
For each $e\!\in\!\tn{Edg}(\gamma)$,
we set
$$
\partial_{\zeta_e}:=
(\textnormal d\zeta_e)^\vee
\in\Gamma(\mathcal V; T\mathfrak M^{\tn{wt}}).
$$
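In other words (with $\vee$ denoting the dual basis with respect to the local parameters~(\rangle\!\rangleef{Eqn:local_parameters})), $\partial_{\zeta_e}$ is the coordinate vector field characterized by
$$
\textnormal d\zeta_{e'}\big(\partial_{\zeta_e}\big)=\delta_{e,e'}\quad
\forall\,e'\!\in\!\tn{Edg}(\gamma),
\qquad
\textnormal d\varsigma_j\big(\partial_{\zeta_e}\big)=0\quad
\forall\,j\!\in\! J .
$$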
\betagin{lmm}
\lambdangle\!\lambdangleabel{Lm:Prt_e}
For every $I\!\subset\!\mathbb I$ and every $e\!\in\!\tn{Edg}(\gamma_{(I)})$,
the restriction of $\partial_{\zeta_e}$ to $\mathcal V_{{\tau}_{(I)}}$ is a nowhere vanishing section of the restriction of the line bundle~$L_e$ in~(\rangle\!\rangleef{Eqn:Le}) to $\mathcal V_{{\tau}_{(I)}}$.
\end{lmm}
\betagin{proof}
Since the restriction of $\textnormal d\zeta_e$ to $\mathcal Z_e\!=\!\{\zeta_e\!=\! 0\}$ ($\supset\!\mathcal V_{{\tau}_{(I)}}$) is identically zero,
we observe that the restriction of $\partial_{\zeta_e}$ to $\mathcal Z_e$ is a nowhere vanishing section of the normal bundle of $\mathcal Z_e$.
It is a well-known fact of the moduli of curves that the normal bundle of $\mathcal Z_e$ is $L_e$;
see~\cite[Proposition~3.31]{HM}.
\end{proof}
For each level $i\!\in\!\mathbb I_+\!=\!\mathbb I_+(\lambdangle\!\lambdanglet)$,
we choose a special vertex
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:sv_i}
\mathsf v_i\!\in\!\tn{Ver}(\gamma)\qquad\textnormal{s.t.}\qquad\ell(\mathsf v_i)\!=\! i.
\end{equation}
We then denote by $\mathsf e_i$, $\mathsf e_i^+$, and $\mathsf v_i^+$ respectively the edges and the vertex satisfying
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:se_i}
\mathsf e_i\!=\! e_{\mathsf v_i},\qquad
\mathsf v_i^+=v_{\mathsf e_i}^+\,,\qquad
\mathsf e_i^+=e_{\mathsf v_i^+}.
\end{equation}
Each $i\!\in\!\mathbb I_+$ determines a strictly increasing sequence
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:i[h]}
i[0]:= i~<~
i[1]:= \ell(\mathsf v_i^+)~<~
i[2]:= \ell(\mathsf v_{i[1]}^+)~<~\cdots\,.
\end{equation}
We remark that $i[1]$ and $i^\sharp$ in~(\rangle\!\rangleef{Eqn:i^sharp}) need not be the same;
see Figure~\rangle\!\rangleef{Fig:level_tree} for illustration.
This sequence is finite,
as there is a unique step~$h$ satisfying $i[h]\!=\! 0$.
By Lemma~\rangle\!\rangleef{Lm:Prt_e},
there exist $\lambdangle\!\lambdanglea_e\!\in\!\mathbb A$ with $e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)$ so that
the fixed $x\!\in\!\mtd{[\lambdangle\!\lambdanglet]}$ over $(C,\mathbf w)$ can uniquely be written as
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:x_local_expression}
\betagin{split}
& x=
\Big(\,
\upsilond 0\;;\;\prod_{i\in\mathbb I_+}\Big[\lambdangle\!\lambdanglea_e\!\cdot\!
\big(\bigotimes_{\mathfrak e\succeq e}\!\partial_{\zeta_{\mathfrak e}}|_{\upsilond 0}\big)
: \ell(e)\!=\! i\Big]\,
\Big),\qquad\textnormal{where}
\\
&
\lambdangle\!\lambdanglea_e\!\ne\!0
\ \ \forall\,e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\mathbb I_{\mathbf{m}},\quad
\lambdangle\!\lambdanglea_{\mathsf e_i}\!=\! 1\ \ \forall\,i\!\in\!\mathbb I_+,
\quad
\lambdangle\!\lambdanglea_e\!=\! 0\ \ \forall\,e\!\in\!\mathbb I_{\mathbf{m}}.
\end{split}
\end{equation}
Let
$$
\mathfrak U_x\subset \mathbb A^{\mathbb I_+}\times \mathbb A^{\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\{\mathsf e_i:{i\in\mathbb I_+}\}}\times
\mathbb A^{\mathbb I_-}\times
\mathbb A^{J}
$$
be an open subset containing the point
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:y_x}
y_x:=
\big(\upsilond 0,\,
(\lambdangle\!\lambdanglea_e)_{e\in\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\{\mathsf e_i:{i\in\mathbb I_+}\}}\,,
\upsilond 0,\,
\upsilond 0\big).
\end{equation}
The coordinates on $\mathfrak U_x$ are denoted by
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:ti_cV}
\big(
(\varepsilon_i)_{i\in\mathbb I_+},
(u_e)_{e\in\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\{\mathsf e_i:{i\in\mathbb I_+}\}},
(z_e)_{e\in\mathbb I_-},
(w_j)_{j\in J}
\big).
\end{equation}
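We record a direct dimension count (using $\mathbb I_-\!=\!\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\wh\tn{Edg}(\lambdangle\!\lambdanglet)$ as in the second line of~(\rangle\!\rangleef{Eqn:theta_x}) below, together with $\mathsf e_i\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)$ for all $i\!\in\!\mathbb I_+$):
$$
\dim\mathfrak U_x=
|\mathbb I_+|+\big|\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\{\mathsf e_i\!:i\!\in\!\mathbb I_+\}\big|+|\mathbb I_-|+|J|
=|\tn{Edg}(\gamma)|+|J|
=\dim\mathcal V,
$$
consistent with the birationality statement of Corollary~\rangle\!\rangleef{Crl:MtdSmooth} below.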
For any $I\!\subset\!\mathbb I$,
we set
\betagin{align*}
\mathfrak U_{x;(I)}^\circ
&\!=\!
\big\{\;
\varepsilon_i\!\ne\!0~\forall\,i\!\in\! I_+\;;~
u_e\!\ne\!0~\forall\,e\!\in\! I_{\mathbf{m}}\;;~
z_e\!\ne\!0~\forall\,e\!\in\! I_-\,
\big\}
\subset\mathfrak U_x,
\\
\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(I)}]}
&\!=\!
\mathfrak U_{x;(I)}^\circ\!\cap\!
\big\{\,
\varepsilon_i\!=\! 0~\forall\,i\!\in\!\mathbb I_+\!\backslash I_+;~
u_{e}\!=\! 0~\forall\,e\!\in\! \mathbb I_{{\mathbf{m}}}\!\backslash I_{\mathbf{m}};~
z_{e}\!=\! 0~\forall\,e\!\in\!\mathbb I_-\!\backslash I_-
\big\}.
\end{align*}
This gives rise to a stratification
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:ti_V_strata}
\mathfrak U_x=\bigsqcup_{I\subset\mathbb I}\,\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(I)}]}.
\end{equation}
We remark that neither $\mathfrak U_x$ nor its stratification~(\rangle\!\rangleef{Eqn:ti_V_strata}) depends on the choice of the weighted level tree $\lambdangle\!\lambdanglet$ representing $[\lambdangle\!\lambdanglet]$,
even though the sets $\mathbb I$ and $I$ depend on such a choice.
We also notice that
$\mathfrak U_{x;(I)}^\circ$ is an open subset of $\mathfrak U_x$,
but the strata $\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(I)}]}$ are not open unless $I\!=\! \mathbb I$.
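For instance, taking $I\!=\!\mathbb I$ (so that $I_+\!=\!\mathbb I_+$, $I_{\mathbf{m}}\!=\!\mathbb I_{\mathbf{m}}$, and $I_-\!=\!\mathbb I_-$) gives the open stratum $\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(\mathbb I)}]}\!=\!\mathfrak U_{x;(\mathbb I)}^\circ$, while by~(\rangle\!\rangleef{Eqn:x_local_expression}) and~(\rangle\!\rangleef{Eqn:y_x}) the point $y_x$ lies in the stratum with $I\!=\!\emptyset$:
$$
y_x\in\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(\emptyset)}]}=
\mathfrak U_x\cap
\big\{\,
\varepsilon_i\!=\! 0~\forall\,i\!\in\!\mathbb I_+;~
u_e\!=\! 0~\forall\,e\!\in\!\mathbb I_{\mathbf{m}};~
z_e\!=\! 0~\forall\,e\!\in\!\mathbb I_-
\,\big\} .
$$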
For each $i\!\in\!\mathbb I_+$, we take
$$
u_{\mathsf e_i}:= 1.
$$
By~(\rangle\!\rangleef{Eqn:x_local_expression}), shrinking $\mathfrak U_x$ if necessary, we have
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:al_e_nonzero}
u_e\!\in\!\Gamma\big(\mathscr O_{\mathfrak U_x}^*\big)\qquad
\forall~e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\mathbb I_{\mathbf{m}}.
\end{equation}
With the local parameters
$\zeta_e$ and~$\varsigma_j$ as in~(\rangle\!\rangleef{Eqn:local_parameters}),
we construct a morphism
$$
\theta_x :\mathfrak U_x\lambdangle\!\lambdangleongrightarrow\mathcal V\quad(\lambdangle\!\lambdangleongrightarrow\mathfrak M^{\tn{wt}})
$$ given by
\betagin{equation}\betagin{split}\lambdangle\!\lambdangleabel{Eqn:theta_x}
&\theta_x^*\zeta_e=
\frac{u_e\cdot u_{\mathsf e_{\ell(e)}^+}\!\!\cdot u_{\mathsf e_{\ell(e)[1]}^+}\cdots}
{ u_{e_{v_e^+}}\!\!\cdot u_{\mathsf e^+_{\ell(v_e^+)}}\!\!\cdot u_{\mathsf e_{\ell(v_e^+)[1]}^+}\cdots}
\cdot \!
\prod_{
\betagin{subarray}{c}
i\in\lambdangle\!\lambdanglebrp{\ell(e),\ell(v_e^+)}
\end{subarray}}
\!\!\!\!\!\!\!\!\varepsilon_i
\qquad\forall~e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet);
\\
&\theta_x^*\zeta_e= z_e\quad\forall~e\!\in\!\mathbb I_-\!=\!\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\wh\tn{Edg}(\lambdangle\!\lambdanglet);
\qquad
\theta_x^*\varsigma_j=w_j\quad\forall~j\!\in\! J.
\end{split}\end{equation}
The numerator and the denominator in the first line of~(\rangle\!\rangleef{Eqn:theta_x}) are both finite products,
because~(\rangle\!\rangleef{Eqn:i[h]}) is always a finite sequence.
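As a sanity check (using only~(\rangle\!\rangleef{Eqn:theta_x}) together with $u_{\mathsf e_i}\!=\! 1$, $e_{v_{\mathsf e_i}^+}\!=\!\mathsf e_i^+$, and $\ell(v_{\mathsf e_i}^+)\!=\! i[1]$ from~(\rangle\!\rangleef{Eqn:se_i}) and~(\rangle\!\rangleef{Eqn:i[h]})): for the special edge $e\!=\!\mathsf e_i$ the two products of $u$'s in the first line of~(\rangle\!\rangleef{Eqn:theta_x}) coincide and cancel, so that
$$
\theta_x^*\zeta_{\mathsf e_i}=
\!\!\prod_{h\in\lambdangle\!\lambdanglebrp{i,\,i[1]}}\!\!\varepsilon_h
\qquad\forall\,i\!\in\!\mathbb I_+ .
$$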
For any $I\!\subset\!\mathbb I$,
it follows from~(\rangle\!\rangleef{Eqn:al_e_nonzero}) and~(\rangle\!\rangleef{Eqn:theta_x}) that
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:th_{x;(I)}}
\theta_x\big(\mathfrak U_{x;{(I)}}^\circ\big)\subset\mathcal V_{{(I)}}^\circ,\qquad
\theta_x\big(\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(I)}]}\big)\subset\mathcal V_{{\tau}_{(I)}},
\end{equation}
where $\mathcal V_{{(I)}}^\circ$ and $\mathcal V_{{\tau}_{(I)}}$ are described before~(\rangle\!\rangleef{Eqn:V_(I)}).
Fix $I\!\subset\!\mathbb I$ ($I$ may be empty).
With
$$
[\lambdangle\!\lambdanglet_{(I)}]=[\tau_{(I)},\ell_{(I)}]\in\mathscr T_\mathsf L^\tn{wt}
$$ as in~(\rangle\!\rangleef{Eqn:t(I)}), $\mtd{[\lambdangle\!\lambdanglet_{(I)}]}$ as in~(\rangle\!\rangleef{Eqn:MtdStrata}),
and the chart $\mathcal V_{{\tau}_{(I)}}\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}_{{\tau}_{(I)}}$ as in~(\rangle\!\rangleef{Eqn:V_(I)}),
let
$$
\mathbb Phi_{x;(I)} :\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(I)}]}\lambdangle\!\lambdangleongrightarrow
\mathcal V_{{\tau}_{(I)}}\!\times_{\mathfrak M^{\tn{wt}}_{{\tau}_{(I)}}}\!
\mtd{[\lambdangle\!\lambdanglet_{(I)}]}\quad
\big(\lambdangle\!\lambdangleongrightarrow
\mtd{[\lambdangle\!\lambdanglet_{(I)}]}\big)
$$
be the morphism so that for any $y\!\in\!\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(I)}]}$,
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:Phi_x_(I)}
\mathbb Phi_{x;(I)}(y)\!=\! \Big(
\theta_x(y);
\prod_{i\in\mathbb I_+\backslash I_+}\!\!\!
\Big[
\mu_{e;i;I}(y)\!\cdot\!
\big(\!
\bigotimes_{\mathfrak e\succeq e,~
\lambdangle\!\lambdanglebrp{\ell(\mathfrak e),\ell(v_{\mathfrak e}^+)}\not\subset I}
\hspace{-.4in}
\partial_{\zeta_{\mathfrak e}}\big|_{\theta_x(y)}
\big):e\!\in\!\mathfrak E_i
\Big]
\Big),
\end{equation}
where
\betagin{align*}
\mu_{e;i;I} &=\!
\bigg(\!u_e\!\cdot\!
\frac{
u_{\mathsf e_{\ell(e)}^+}\!\!\cdot\! u_{\mathsf e_{\ell(e)[1]}^+}\!\cdots
}{
u_{\mathsf e_{i}^+}\!\cdot u_{\mathsf e_{i[1]}^+}\!\cdots
}\bigg)\!
\Bigg(
\frac{
\prod_{
\betagin{subarray}{c}
\mathfrak e\succ \mathsf e_i;\,
\lambdangle\!\lambdanglebrp{\ell(\mathfrak e),\ell(v_{\mathfrak e}^+)}\subset I
\end{subarray}}\,
\theta_x^*\zeta_{\mathfrak e}
}{
\prod_{
\betagin{subarray}{c}
e'\succ e;\,
\lambdangle\!\lambdanglebrp{\ell(e'),\ell(v_{e'}^+)}\subset I
\end{subarray}}\,
\theta_x^*\zeta_{e'}
}\Bigg)\!
\Big(\!\!
\prod_{
\betagin{subarray}{c}
h\in \lambdangle\!\lambdanglebrp{\ell(e),i}
\end{subarray}}
\!\!\!\!\!\varepsilon_h\Big)
\end{align*}
for all $i\!\in\!\mathbb I_+\backslash I_+$ and $e\!\in\!\mathfrak E_i$.
Similar to~(\rangle\!\rangleef{Eqn:theta_x}),
the products in the first pair of parentheses above are both finite products.
By~(\rangle\!\rangleef{Eqn:th_{x;(I)}}), the description of $\mathcal V_{(I)}^\circ$ above~(\rangle\!\rangleef{Eqn:V_(I)}), and~(\rangle\!\rangleef{Eqn:edges_cntrd}),
we see that
$$\mu_{e;i;I}\in\Gamma\big(\mathscr O_{\mathfrak U_{x;(I)}^\circ}\big)
\qquad
\forall\,I\!\subset\!\mathbb I,\,i\!\in\!\mathbb I_+\backslash I_+,~e\!\in\!\mathfrak E_i.
$$
Moreover, by~(\rangle\!\rangleef{Eqn:al_e_nonzero}),
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:mu_vanishing}
\mu_{e;i;I}|_{\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(I)}]}}
\betagin{cases}
=0 & \textnormal{if}\ \ \ell_{(I)}(v_e^-)<i,\\
\in \Gamma\big(\mathscr O^*_{\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(I)}]}}\big)
& \textnormal{if}\ \ \ell_{(I)}(v_e^-)=i.
\end{cases}
\end{equation}
This, along with~(\rangle\!\rangleef{Eqn:th_{x;(I)}}), Lemma~\rangle\!\rangleef{Lm:Prt_e}, and (\rangle\!\rangleef{Eqn:MtdStrat_and_sEcT}),
implies~$\mathbb Phi_{x;(I)}$ is well-defined.
The morphisms $\mathbb Phi_{x;(I)}$, $I\!\subset\!\mathbb I$, together determine
$$\mathbb Phi_x:\mathfrak U_x\lambdangle\!\lambdangleongrightarrow\mtd{},\qquad
\mathbb Phi_x(y)=\mathbb Phi_{x;(I)}(y)\quad\textnormal{if}~y\in\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(I)}]}.$$
We remark that $\mathbb Phi_{x;(I)}$ and $\mathbb Phi_x$ are also independent of the choice of the weighted level tree $\lambdangle\!\lambdanglet$ representing $[\lambdangle\!\lambdanglet]$.
Moreover,
we observe that
\betagin{equation}
\lambdangle\!\lambdangleabel{Eqn:Phi_onto}
\mathbb Phi_x(y_x)=\mathbb Phi_{x;(\emptyset)}(y_x)=x
\qquad
\forall\,x\in\mtd{},
\end{equation}
where $y_x\!\in\!\mathfrak U_{x}$ is given in~(\rangle\!\rangleef{Eqn:y_x}).
A priori, $\mathbb Phi_x$ is just a {\it map}, since the set-theoretic definition~(\rangle\!\rangleef{Eqn:MtdStrata}) of $\mtd{}$ does not describe its stack structure, although each $\mtd{[\lambdangle\!\lambdanglet']}$ is a stack.
In~\S\rangle\!\rangleef{Subsec:transition_maps},
we will show that the maps $\mathbb Phi_x$ patch together to endow $\mtd{}$ with a smooth stack structure.
Each $\mathbb Phi_x$ will hereafter be called a \textsf{twisted chart centered at~$x$ (lying over $\mathcal V\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$)},
although rigorously it becomes a {\it chart} of $\mtd{}$ only after Corollary~\rangle\!\rangleef{Crl:MtdSmooth} is established.
\betagin{lmm}
\lambdangle\!\lambdangleabel{Lm:Phi_x_inj}
For every $I\!\subset\!\mathbb I$,
$
\mathbb Phi_{x;(I)}\!:\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(I)}]}\!\lambdangle\!\lambdangleongrightarrow\mtd{[\lambdangle\!\lambdanglet_{(I)}]}
$
of~(\rangle\!\rangleef{Eqn:Phi_x_(I)})
is an isomorphism to an open subset of $\mtd{[\lambdangle\!\lambdanglet_{(I)}]}$.
\end{lmm}
\betagin{proof}
For any $i\!\in\!\mathbb I_+(\lambdangle\!\lambdanglet_{(I)})\!=\!\mathbb I_+\backslash I_+$,
notice that no edge in $\mathfrak E_i$ of the weighted level tree $\lambdangle\!\lambdanglet$ is contracted in the construction of~$\lambdangle\!\lambdanglet_{(I)}$ (cf.~(\rangle\!\rangleef{Eqn:edges_cntrd})).
Thus,
$$
\mathfrak E_i\subset\tn{Edg}\big(\lambdangle\!\lambdanglet_{(I)}\big)\qquad
\forall~i\!\in\!\mathbb I_+(\lambdangle\!\lambdanglet_{(I)}).
$$
In particular,
the edges $\mathsf e_i$, $i\!\in\!\mathbb I_+(\lambdangle\!\lambdanglet_{(I)})$, can be used as the special edges of~$\lambdangle\!\lambdanglet_{(I)}$.
For conciseness, let
\betagin{equation*}
\betagin{split}
\mathbf E[\lambdangle\!\lambdanglet_{(I)}] :\!\!&=
\wh\tn{Edg}(\lambdangle\!\lambdanglet_{(I)})\big\backslash\!\lambdangle\!\lambdangleeft(\big\{\mathsf e_i\!:i\in\mathbb I_+(\lambdangle\!\lambdanglet_{(I)})\big\}\sqcup\mathbb I_{\mathbf{m}}(\lambdangle\!\lambdanglet_{(I)})\rangle\!\rangleight)\\
&=\!\!\!\!
\bigsqcup_{i\in \mathbb I_+(\lambdangle\!\lambdanglet_{(I)})}
\!\!\!\!\!\!
\big\{e\!\in\!\tn{Edg}(\lambdangle\!\lambdanglet_{(I)})\!:
\ell_{(I)}(v_e^-)\!=\! i,\,e\!\ne\!\mathsf e_i
\big\}
\quad
\big(\subset\!\wh\tn{Edg}(\lambdangle\!\lambdanglet_{(I)})\!\subset\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)\big);
\end{split}
\end{equation*}
see~(\rangle\!\rangleef{Eqn:t_(I)_bbI}) for notation.
Let $\{\zeta_e\}_{e\in\tn{Edg}(\gamma)}\!\cup\!\{\varsigma_j\}_{j\in J}$ be a set of the local parameters on $\mathcal V$ centered at $\varpi(x)$ as in~(\rangle\!\rangleef{Eqn:local_parameters}).
By the definition of $\mathcal V_{{\tau}_{(I)}}$ above~(\rangle\!\rangleef{Eqn:V_(I)}),
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:V_(I)_parameters}
\{\zeta_e\}_{
e\in\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\tn{Edg}(\lambdangle\!\lambdanglet_{(I)})
}
\sqcup
\{\varsigma_j\}_{j\in J}
\end{equation}
is a set of local parameters of $\mathcal V_{{\tau}_{(I)}}$.
Recall that there exist $\lambdangle\!\lambdanglea_e\!\in\!\mathbb A^*$, $e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)$, such that
$$
x=
\Big(\,
\upsilond 0\;;\;\prod_{i\in\mathbb I_+}\Big[\lambdangle\!\lambdanglea_e\!\cdot\!
\big(\bigotimes_{\mathfrak e\succeq e}\partial_{\zeta_{\mathfrak e}}|_{\upsilond 0}\big)
: \ell(v_e^-)\!=\! i\Big]\,
\Big)
$$
as in~(\rangle\!\rangleef{Eqn:x_local_expression}).
Let $U_{x;\mathbf E[\lambdangle\!\lambdanglet_{(I)}]}$ be an open subset of $(\mathbb A^*)^{\mathbf E[\lambdangle\!\lambdanglet_{(I)}]}$ such that
$$
(\lambdangle\!\lambdanglea_e)_{e\in\mathbf E[\lambdangle\!\lambdanglet_{(I)}]}\,\in\,
U_{x;\mathbf E[\lambdangle\!\lambdanglet_{(I)}]}\,\subset\,
(\mathbb A^*)^{\mathbf E[\lambdangle\!\lambdanglet_{(I)}]}.
$$
The coordinates of $U_{x;\mathbf E[\lambdangle\!\lambdanglet_{(I)}]}$ are denoted by
$$
(\,\mu_e\,)_{e\in\mathbf E[\lambdangle\!\lambdanglet_{(I)}]}.
$$
In addition, we set
$$
\mu_{\mathsf e_i}= 1\quad
\forall~i\!\in\!\mathbb I_+(\lambdangle\!\lambdanglet_{(I)}),\qquad
\mu_e=0\quad\forall~e\!\in\!\mathbb I_{\mathbf{m}}(\lambdangle\!\lambdanglet_{(I)}).
$$
Thus, the function $\mu_e$ is defined for all $e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet_{(I)})$, and is nowhere vanishing on~$U_{x;\mathbf E(\lambdangle\!\lambdanglet_{(I)})}$ for all $e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet_{(I)})\backslash\mathbb I_{\mathbf{m}}(\lambdangle\!\lambdanglet_{(I)})$.
The smooth chart $\mathcal V_{{\tau}_{(I)}}\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}_{{\tau}_{(I)}}$ in (\rangle\!\rangleef{Eqn:V_(I)}) induces a smooth chart
$$
\mathcal U'_{x;[\lambdangle\!\lambdanglet_{(I)}]}:=\mathcal V_{{\tau}_{(I)}}\times U_{x;\mathbf E[\lambdangle\!\lambdanglet_{(I)}]}
\lambdangle\!\lambdangleongrightarrow\mtd{[\lambdangle\!\lambdanglet_{(I)}]}
$$
given by
\betagin{equation*}\betagin{split}
&\big(\,
\mathfrak z,\,
(\mu_e)_{e\in\mathbf E(\lambdangle\!\lambdanglet_{(I)})}
\big)
\mapsto
\Big(\mathfrak z\,,
\prod_{i\in\mathbb I_+(\lambdangle\!\lambdanglet_{(I)})}
\!\!\!\!
\big[
\mu_e\big(\!
\bigotimes_{
\mathfrak e\in\tn{Edg}(\lambdangle\!\lambdanglet_{(I)}),\,\mathfrak e\succeq e}
\hspace{-.25in}
\partial_{\zeta_\mathfrak e}|_{\mathfrak z}
\big):
\ell_{(I)}\!(v_e^-)\!=\! i
\big]
\Big).
\end{split}\end{equation*}
We will construct a morphism
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:Psi_{x;(I)}}
\mathbb Psi_{x;(I)}:\mathcal U'_{x;[\lambdangle\!\lambdanglet_{(I)}]}\lambdangle\!\lambdangleongrightarrow\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(I)}]}
\end{equation}
such that $\mathbb Phi_{x;(I)}\!\circ\!\mathbb Psi_{x;(I)}$ and $\mathbb Psi_{x;(I)}\!\circ\!\mathbb Phi_{x;(I)}$ are both the identity morphisms,
which will then establish Lemma~\rangle\!\rangleef{Lm:Phi_x_inj}.
Given $\big(
\mathfrak z,
(\mu_e)_{e\in\mathbf E(\lambdangle\!\lambdanglet_{(I)})}
\big)\!\in\!\mathcal U'_{x;[\lambdangle\!\lambdanglet_{(I)}]},$
we denote its image by
$$
y:=\mathbb Psi_{x;(I)}\big(
\mathfrak z,
(\mu_e)_{e\in\mathbf E(\lambdangle\!\lambdanglet_{(I)})}
\big)\in\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(I)}]},
$$
which is to be constructed.
With the coordinates on $\mathfrak U_x$ as in~(\rangle\!\rangleef{Eqn:ti_cV}), we set
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:y_other'}
z_e(y)\!=\! \zeta_e(\mathfrak z)\ \ \forall\,e\!\in\!\mathbb I_-,\qquad
w_j(y)\!=\! \varsigma_j(\mathfrak z)\ \ \forall\,j\!\in\! J.
\end{equation}
By~(\rangle\!\rangleef{Eqn:V_(I)}),
we see that
\betagin{equation}\betagin{split}\lambdangle\!\lambdangleabel{Eqn:y_other}
z_e(y)\!=\!\zeta_e(\mathfrak z)\!=\! 0
\qquad
&\Longleftrightarrow\qquad
e\in\mathbb I_-\backslash I_-\;\big(\subset\mathbb I_-(\lambdangle\!\lambdanglet_{(I)})\big).
\end{split}\end{equation}
The rest of the coordinates of $y$ are much more complicated;
we describe them by induction over the levels in $\mathbb I_+\!=\!\mathbb I_+(\lambdangle\!\lambdanglet)$.
More precisely,
we will show that
{\it
$\varepsilon_i(y)$ with $i\!\in\!\mathbb I_+$ and $u_e(y)$ with $e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\{\mathsf e_i\!:i\!\in\!\mathbb I_+\}$ are all rational functions in $\zeta_{e'}(\mathfrak z)$ and $\mu_{e''}$, satisfying }
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:ve_u_e_inductive}
\big\lambdangle\!\lambdanglegroup
\varepsilon_i(y)\!=\! 0~\Leftrightarrow~
i\!\in\! \mathbb I_+\backslash I_+
\big\rangle\!\ranglegroup
\quad
\textnormal{and}\quad
\big\lambdangle\!\lambdanglegroup
u_e(y)\!=\! 0~\Leftrightarrow~
e\!\in\!\mathbb I_{\mathbf{m}}\backslash I_{\mathbf{m}}
\big\rangle\!\ranglegroup.
\end{equation}
In particular,
(\rangle\!\rangleef{Eqn:ve_u_e_inductive}) and~(\rangle\!\rangleef{Eqn:y_other}) imply $y\!\in\!\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(I)}]}$,
i.e.~$\mathbb Psi_{x;(I)}$ is well-defined.
The base case of the induction is for the level
$$i_0:=\max \mathbb I_+(\lambdangle\!\lambdanglet).$$
We take
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:v_e_base}
\varepsilon_{i_0}(y)=\zeta_{\mathsf e_{i_0}}\!(\mathfrak z).
\end{equation}
By~(\rangle\!\rangleef{Eqn:V_(I)}), we see that $\varepsilon_{i_0}(y)$ satisfies~(\rangle\!\rangleef{Eqn:ve_u_e_inductive}).
We take
$$
u_{\mathsf e_{i_0}}(y)=1.
$$
For any $e\!\ne\!\mathsf e_{i_0}$ with $\ell(e)\!=\! i_0$, we set
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:u_e_base}
u_e(y)=
\betagin{cases}
\mu_e
\quad
&\textnormal{if}~i_0\!\not\in\! I_+~\big(\,\textnormal{i.e.}~i_0\!\in\!\mathbb I_+(\lambdangle\!\lambdanglet_{(I)}),~e\!\in\!\mathbf E(\lambdangle\!\lambdanglet_{(I)})\,\big)
\\
\frac{\zeta_e(\mathfrak z)}{\zeta_{\mathsf e_{i_0}}\!\!(\mathfrak z)}
&\textnormal{if}~i_0\!\in\! I_+~\big(\,\textnormal{i.e.}~\mathsf e_{i_0}\!\in\!\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\tn{Edg}(\lambdangle\!\lambdanglet_{(I)})\,\big)
\end{cases}.
\end{equation}
If $i_0\!\in\! I_+$,
then by~(\rangle\!\rangleef{Eqn:V_(I)_parameters}) and~(\rangle\!\rangleef{Eqn:V_(I)}),
we have
$\zeta_e(\mathfrak z)\!=\! 0$ if and only if (${\mathbf{m}}\!=\! i_0$ and) $e\!\in\!\mathbb I_{\mathbf{m}}\backslash I_{\mathbf{m}}$.
If $i_0\!\not\in\!I_+$,
then $\mu_e\!=\! 0$ if and only if (${\mathbf{m}}\!=\! i_0$ and) $e\!\in\!\mathbb I_{\mathbf{m}}\backslash I_{\mathbf{m}}$.
We thus conclude that $u_e(y)$ satisfies~(\rangle\!\rangleef{Eqn:ve_u_e_inductive}) for all
$e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)$ with $\ell(e)\!=\! i_0.$
Moreover, such $\varepsilon_{i_0}(y)$ and $u_e(y)$ are obviously rational functions in $\zeta_{e'}(\mathfrak z)$ and $\mu_{e''}$.
Hence, the base case is complete.
Next, for any $i\!\in\!\mathbb I_+$, assume that
all $\varepsilon_k(y)$ with $k\!>\!i$ and all $u_e(y)$ with $e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)$ and $\ell(e)\!>\!i$ have been expressed as rational functions in $\zeta_{e'}(\mathfrak z)$ and $\mu_{e''}$, satisfying~(\rangle\!\rangleef{Eqn:ve_u_e_inductive}).
For the level $i$, we first construct $\varepsilon_i(y)$.
The construction is subdivided into three cases.
\textbf{Case 1}.
If $i\!\not\in\!I_+$,
then set
\betagin{equation*}\lambdangle\!\lambdangleabel{Eqn:ve_i_induc_0}
\varepsilon_{i}(y)=0.
\end{equation*}
Obviously this satisfies~(\rangle\!\rangleef{Eqn:ve_u_e_inductive}).
\textbf{Case 2}.
If $i\!\in\! I_+$ and $\lambdangle\!\lambdanglebrp{i,i[1]}\!\not\subset\!I_+$,
then $\mathsf e_i\!\in\!\mathbf E(\lambdangle\!\lambdanglet_{(I)})$,
hence $\mu_{\mathsf e_i}\!\ne\!0$.
Let
$$
\wh i:=\min\big(\lambdangle\!\lambdanglebrp{i,i[1]}\backslash I_+\big)
~ \in \mathbb I_+(\lambdangle\!\lambdanglet_{(I)}).
$$
Intuitively, $\wh i$ is the level containing the image of $\mathsf v_i$ in $\lambdangle\!\lambdanglet_{(I)}$.
Thus,
$$
\ell_{(I)}(\mathsf e_i)=\wh i.
$$
Let $\varepsilon_i(y)$ be given by
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:ve_i_induc_1}
\betagin{split}
\mu_{\mathsf e_i}\!=\!
\varepsilon_i(y)
\!\cdot\!
\frac{u_{\mathsf e_i^+}\!(y)\!\cdot\! u_{\mathsf e_{i[1]}^+}\!\!(y)\cdots}
{u_{\mathsf e_{\wh i}^+}\!(y)\!\cdot\! u_{\mathsf e_{\wh i[1]}^+}\!\!(y)\cdots}
\!\cdot\!
\frac{\prod_{\mathfrak e\succ \mathsf e_{\wh i},\,\lambdangle\!\lambdanglebrp{\ell(\mathfrak e),\ell(v_\mathfrak e^+)}_\lambdangle\!\lambdanglet\subset I}\zeta_{\mathfrak e}(\mathfrak z)}
{\prod_{e'\succ \mathsf e_i,\,\lambdangle\!\lambdanglebrp{\ell(e'),\ell(v_{e'}^+)}_\lambdangle\!\lambdanglet\subset I}\zeta_{e'}\!(\mathfrak z)}\!\cdot\!\!\!
\prod_{
h\in\lambdangle\!\lambdangleprp{i,\wh i}_\lambdangle\!\lambdanglet}\!\!\!
\varepsilon_h(y).
\end{split}
\end{equation}
The inductive assumption implies that all $\varepsilon_h(y)$ with $h\!\in\!\lambdangle\!\lambdangleprp{i,\wh i}_\lambdangle\!\lambdanglet$, as well as all $u_{\mathsf e_{i[h]}^+}\!\!(y)$ and $u_{\mathsf e_{\wh i[h]}^+}\!\!(y)$ with $h\!\ge\!0$, are non-zero and are rational functions in $\zeta_{e'}(\mathfrak z)$ and $\mu_{e''}$.
By~(\rangle\!\rangleef{Eqn:V_(I)_parameters}) and~(\rangle\!\rangleef{Eqn:edges_cntrd}),
we also see that all $\zeta_\mathfrak e(\mathfrak z)$ and $\zeta_{e'}\!(\mathfrak z)$ in~(\rangle\!\rangleef{Eqn:ve_i_induc_1}) are non-zero.
Therefore, such defined
$\varepsilon_i(y)$ is a rational function in $\zeta_{e'}(\mathfrak z)$ and $\mu_{e''}$, satisfying~(\rangle\!\rangleef{Eqn:ve_u_e_inductive}).
\textbf{Case 3}.
If $\lambdangle\!\lambdanglebrp{i,i[1]}\!\subset\!I_+$ (hence $i\!\in\! I_+$),
then we see $\mathsf e_i\!\in\!\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\tn{Edg}(\lambdangle\!\lambdanglet_{(I)})$;
c.f.~(\rangle\!\rangleef{Eqn:edges_cntrd}).
Intuitively, this means $\mathsf e_i$ is contracted in the construction of $\lambdangle\!\lambdanglet_{(I)}$.
By the description of $\mathcal V_{{\tau}_{(I)}}$ above~(\rangle\!\rangleef{Eqn:V_(I)}),
we see that
$\zeta_{\mathsf e_i}(\mathfrak z)\!\ne\!0$.
Let $\varepsilon_i(y)$ be given by
\betagin{equation}\betagin{split}
\lambdangle\!\lambdangleabel{Eqn:ve_i_induc_2}
\zeta_{\mathsf e_i}(\mathfrak z)&=
\varepsilon_i(y)\cdot
\!\!\!
\prod_{
h\in\lambdangle\!\lambdangleprp{i,\wh i}_\lambdangle\!\lambdanglet}\!\!\!
\varepsilon_h(y).
\end{split}\end{equation}
Mimicking the argument in Case 2,
we conclude that $\varepsilon_i(y)$ is a rational function in $\zeta_{e'}(\mathfrak z)$ and $\mu_{e''}$, satisfying~(\rangle\!\rangleef{Eqn:ve_u_e_inductive}).
Next, we construct $u_e(y)$ for $e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)$ with $\ell(e)\!=\! i$.
Set $$u_{\mathsf e_i}(y)= 1.$$
For $e\!\ne\!\mathsf e_i$, the construction is subdivided into two cases.
\textbf{Case A}.
If $\lambdangle\!\lambdanglebrp{i,\ell(v_e^+)}\!\not\subset\!I_+$,
then
$$
e\in\mathbf E(\lambdangle\!\lambdanglet_{(I)})\!\sqcup\!\mathbb I_{\mathbf{m}}(\lambdangle\!\lambdanglet_{(I)})
~\big(=\!\wh\tn{Edg}(\lambdangle\!\lambdanglet_{(I)})\backslash
\{\mathsf e_i\!:i\!\in\!\mathbb I_+(\lambdangle\!\lambdanglet_{(I)})\}\big),
$$
hence $\mu_e$ exists, and $\mu_e\!=\! 0$ if and only if $e\!\in\!\mathbb I_{\mathbf{m}}(\lambdangle\!\lambdanglet_{(I)})$.
In Case A,
since $\lambdangle\!\lambdanglebrp{i,\ell(v_e^+)}\!\not\subset\!I_+$,
(\rangle\!\rangleef{Eqn:t_(I)_bbI}) further implies
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:u_e_induc_1'}
\mu_e\!=\! 0\qquad
\Longleftrightarrow\qquad
e\!\in\!\mathbb I_{\mathbf{m}}\backslash I_{\mathbf{m}}.
\end{equation}
Let
$$
\kappa=\kappa_e:=\min\big(\lambdangle\!\lambdanglebrp{i,\ell(v_e^+)}\backslash I_+\big)\quad \big(\in \mathbb I_+(\lambdangle\!\lambdanglet_{(I)})\big).
$$
Since $\lambdangle\!\lambdanglebrp{i,\kappa}\!\subset\!I_+$,
we have
$$
\ell_{(I)}(e)=\kappa.
$$
Let $u_e(y)$ be given by
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:u_e_induc_1}
\mu_{e}\!=\!
u_e(y)
\!\cdot\!
\frac{u_{\mathsf e_i^+}\!(y)\!\cdot\! u_{\mathsf e_{i[1]}^+}\!\!(y)\cdots}
{u_{\mathsf e_{\kappa}^+}\!(y)\!\cdot\! u_{\mathsf e_{\kappa[1]}^+}\!\!(y)\cdots}
\!\cdot\!
\frac{\prod_{\mathfrak e\succ \mathsf e_{\kappa},\,\lambdangle\!\lambdanglebrp{\ell(\mathfrak e),\ell(v_\mathfrak e^+)}_\lambdangle\!\lambdanglet\subset I}\zeta_{\mathfrak e}(\mathfrak z)}
{\prod_{e'\succ e,\,\lambdangle\!\lambdanglebrp{\ell(e'),\ell(v_{e'}^+)}_\lambdangle\!\lambdanglet\subset I}\zeta_{e'}\!(\mathfrak z)}\!\cdot\!\!\!
\prod_{
h\in\lambdangle\!\lambdanglebrp{i,\kappa}_\lambdangle\!\lambdanglet}\!\!\!\!
\varepsilon_h(y).
\end{equation}
We observe that if $i\!\not\in\! I_+$,
then $\kappa\!=\! i$ and hence $\prod_{
h\in\lambdangle\!\lambdanglebrp{i,\kappa}_\lambdangle\!\lambdanglet}
\varepsilon_h(y)\!=\! 1$;
if $i\!\in\! I_+$, then the previous construction of $\varepsilon_i(y)$, along with the inductive assumption,
guarantees $\prod_{
h\in\lambdangle\!\lambdanglebrp{i,\kappa}_\lambdangle\!\lambdanglet}
\varepsilon_h(y)\!\ne\!0$.
Mimicking the argument of Case 2 of the construction of $\varepsilon_i(y)$ and taking~(\rangle\!\rangleef{Eqn:u_e_induc_1'}) into account,
we see that $u_e(y)$ determined by~(\rangle\!\rangleef{Eqn:u_e_induc_1}) is a rational function in $\zeta_{e'}(\mathfrak z)$ and $\mu_{e''}$, and it satisfies~(\rangle\!\rangleef{Eqn:ve_u_e_inductive}).
\textbf{Case B}.
If $\lambdangle\!\lambdanglebrp{i,\ell(v_e^+)}\!\subset\!I_+$,
then~(\rangle\!\rangleef{Eqn:edges_cntrd}) gives
\betagin{equation}
\lambdangle\!\lambdangleabel{Eqn:u_e_induc_2'}
e\in\tn{Edg}(\lambdangle\!\lambdanglet_{(I)})\ \ \big(\textnormal{i.e.}~\zeta_e(\mathfrak z)\!=\! 0\big)\quad
\Longleftrightarrow\quad
e\in\mathbb I_{\mathbf{m}}\backslash I_{\mathbf{m}}.
\end{equation}
Let $u_e(y)$ be given by
\betagin{equation}\betagin{split}
\lambdangle\!\lambdangleabel{Eqn:u_e_induc_2}
\zeta_e(\mathfrak z)
=
u_e(y)
\cdot
\frac{u_{\mathsf e_i^+}\!(y)\!\cdot\! u_{\mathsf e_{i[1]}^+}\!\!(y)\cdots}
{u_{e_{v_e^+}}\!\!(y)\!\cdot\!
u_{\mathsf e_{\ell(v_e^+)}^+}\!(y)\!\cdot\!
u_{\mathsf e_{\ell(v_e^+)[1]}^+}\!\!(y)\cdots}
\cdot\!\!
\prod_{
h\in\lambdangle\!\lambdanglebrp{i,\ell(v_e^+)}_\lambdangle\!\lambdanglet}\!\!\!\!
\varepsilon_h(y).
\end{split}\end{equation}
Once again,
mimicking the argument of Case A of the construction of $u_e(y)$, and taking~(\rangle\!\rangleef{Eqn:u_e_induc_2'}) as well as the description of $\mathcal V_{{\tau}_{(I)}}$ right before~(\rangle\!\rangleef{Eqn:V_(I)}) into account,
we see that $u_e(y)$ determined by~(\rangle\!\rangleef{Eqn:u_e_induc_2}) is a rational function in $\zeta_{e'}(\mathfrak z)$ and $\mu_{e''}$, and it satisfies~(\rangle\!\rangleef{Eqn:ve_u_e_inductive}).
Cases 1--3, A, and B together complete the inductive construction of~$\mathbb Psi_{x;(I)}$.
Moreover,
comparing
\betagin{enumerate}
[leftmargin=*,label=$\mathbf ullet$]
\item (\rangle\!\rangleef{Eqn:y_other'}) with the second line of~(\rangle\!\rangleef{Eqn:theta_x}),
\item (\rangle\!\rangleef{Eqn:v_e_base}), the second case of~(\rangle\!\rangleef{Eqn:u_e_base}), (\rangle\!\rangleef{Eqn:ve_i_induc_2}), and~(\rangle\!\rangleef{Eqn:u_e_induc_2}) with the first line of~(\rangle\!\rangleef{Eqn:theta_x}),
\item the first case of~(\rangle\!\rangleef{Eqn:u_e_base}),
(\rangle\!\rangleef{Eqn:ve_i_induc_1}), and~(\rangle\!\rangleef{Eqn:u_e_induc_1}) with the expressions of $\mu_{e;i;I}$ right after~(\rangle\!\rangleef{Eqn:Phi_x_(I)}),
\end{enumerate}
we observe that $\mathbb Psi_{x;(I)}$ is the inverse of $\mathbb Phi_{x;(I)}$.
\end{proof}
\betagin{crl}
$\mathbb Phi_x\!:\mathfrak U_x\!\lambdangle\!\lambdangleongrightarrow\!\mtd{}$ is injective.
\end{crl}
\betagin{proof}
This follows from Lemma~\rangle\!\rangleef{Lm:Phi_x_inj} and the stratification~(\rangle\!\rangleef{Eqn:ti_V_strata}) and~(\rangle\!\rangleef{Eqn:MtdStrata})
directly.
\end{proof}
\subsection{Stack structure}
\lambdangle\!\lambdangleabel{Subsec:transition_maps}
In this subsection,
we will show the twisted charts $\mathbb Phi_x$ patch together to endow $\mtd{}$ with a smooth stack structure;
c.f.~Proposition~\rangle\!\rangleef{Prp:Chart_compatible} and Corollary~\rangle\!\rangleef{Crl:MtdSmooth}.
Note that a priori, $\mathbb Phi_{x}$ depends on the choices of the special vertices $\{\mathsf v_i\}_{i\in\mathbb I_+}$~(\rangle\!\rangleef{Eqn:sv_i}) and of the local parameters~(\rangle\!\rangleef{Eqn:local_parameters}).
Nonetheless,
Lemmas~\rangle\!\rangleef{Lm:Chart_indep_svi} and~\rangle\!\rangleef{Lm:Chart_compatible_ptws} below will guarantee that such choices do not affect the proposed stack structure of $\mtd{}$,
hence will make the proof of Proposition~\rangle\!\rangleef{Prp:Chart_compatible} more concise.
Let $\{\mathsf w_i\!:i\!\in\!\mathbb I_+\}$ be another set of the special vertices satisfying~(\rangle\!\rangleef{Eqn:sv_i}),
and $\mathsf w_i^+$,~$\mathsf d_i$, and $\mathsf d_i^+$ be the analogues of $\mathsf v_i^+$, $\mathsf e_i$, and $\mathsf e_i^+$ in~(\rangle\!\rangleef{Eqn:se_i}), respectively.
As in~(\rangle\!\rangleef{Eqn:i[h]}),
each level $i\!\in\!\mathbb I_+$ similarly determines a {\it finite} sequence
$$
i\lambdangle\!\lambdangler 0=i<
i\lambdangle\!\lambdangler 1=\ell(\mathsf w_i^+)<
i\lambdangle\!\lambdangler 2=\ell(\mathsf w_{i\lambdangle\!\lambdangler 1}^+)<\cdots.
$$
We take an open subset
$$
\mathfrak U_x^\mathsf a\subset\mathbb A^{\mathbb I_+}\times
\mathbb A^{\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\{\mathsf d_i:i\in\mathbb I_+\}}\times
\mathbb A^{\mathbb I_-}\times\mathbb A^J
$$
with the coordinates
$$
\big(
(\delta_i)_{i\in\mathbb I_+},
(u^\mathsf a_e)_{e\in\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\{\mathsf d_i:{i\in\mathbb I_+}\}},
(z^\mathsf a_e)_{e\in\mathbb I_-},
(w^\mathsf a_j)_{j\in J}
\big)
$$
as in~(\rangle\!\rangleef{Eqn:ti_cV}),
and then construct
$$
\theta^\mathsf a_x\!:\mathfrak U_x^\mathsf a\lambdangle\!\lambdangleongrightarrow\mathcal V,\qquad
\mu^\mathsf a_{e;i;I}\!\in\!\Gamma\big(\mathscr O_{\mathfrak U_{x;(I)}^{\mathsf a,\circ}}\big),\qquad
\mathbb Phi^\mathsf a_x\!:\mathfrak U_x^\mathsf a\lambdangle\!\lambdangleongrightarrow\mtd{}
$$
parallel to~(\rangle\!\rangleef{Eqn:theta_x}) and~(\rangle\!\rangleef{Eqn:Phi_x_(I)}).
Let $\mathcal U\!=\!\mathbb Phi_x^\mathsf a(\mathfrak U_x^\mathsf a)\!\cap\!\mathbb Phi_x(\mathfrak U_x)$.
\betagin{lmm}
\lambdangle\!\lambdangleabel{Lm:Chart_indep_svi}
The transition map
$$
(\mathbb Phi_x^\mathsf a)^{-1}\!\circ\!\mathbb Phi_x:\,
\mathbb Phi_x^{-1}(\mathcal U)\lambdangle\!\lambdangleongrightarrow(\mathbb Phi_x^\mathsf a)^{-1}(\mathcal U)
$$ is an isomorphism.
\end{lmm}
\betagin{proof}
Let $g\!:\mathbb Phi_x^{-1}(\mathcal U)\!\lambdangle\!\lambdangleongrightarrow\!(\mathbb Phi_x^\mathsf a)^{-1}(\mathcal U)$ be the isomorphism given by
\betagin{gather*}
g^*\delta_i
\!=\!
\varepsilon_i\!\cdot\!
\frac{u_{\mathsf e_i^+}\!\cdot\! u_{\mathsf e_{i[1]}^+}\!\cdots
}{
u_{\mathsf e_{i^\sharp}^+}\!\cdot\! u_{\mathsf e_{i^\sharp[1]}^+}\!\cdots}
\frac{ u_{\mathsf d_i}\!\cdot\! u_{\mathsf d_{i\lambdangle\!\lambdangler 1}}\!\cdots
}{
u_{\mathsf d_{i^\sharp}}\!\cdot\!u_{\mathsf d_{i^\sharp\lambdangle\!\lambdangler 1}}\!\cdots}
\frac{ u_{\mathsf d_{i^\sharp}^+}\!\cdot\! u_{\mathsf d_{i^\sharp\lambdangle\!\lambdangler{1}}^+}\!\cdots}{ u_{\mathsf d_i^+}\!\cdot\! u_{\mathsf d_{i\lambdangle\!\lambdangler 1}^+}\!\cdots}
\qquad\forall\,i\!\in\!\mathbb I_+,
\\
g^*u^\mathsf a_e\!=\! \frac{u_e}{ u_{\mathsf d_{\ell(e)}}}\ \forall\,e\!\in\!
\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\{\mathsf d_i\}_{i\in\mathbb I_+}, \quad
g^*z^\mathsf a_e\!=\! z_e\ \forall\,e\!\in\!\mathbb I_-,\quad
g^*w_j^\mathsf a\!=\! w_j\ \forall\,j\!\in\! J.
\end{gather*}
The fact that $g$ is an isomorphism
can be shown by
constructing its inverse explicitly,
which is similar to the proof of Lemma~\rangle\!\rangleef{Lm:Phi_x_inj}, but is simpler.
The key point of the construction is that
$$
u_{\mathsf d_i}=\frac{1}{g^*u^\mathsf a_{\mathsf e_i}}\quad
\forall\,i\!\in\!\mathbb I_+,\qquad
u_e=\frac{g^*u^\mathsf a_e}{g^*u^\mathsf a_{\mathsf e_i}}\quad
\forall\,e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)\ \textnormal{with}\ \ell(e)\!=\! i,
$$
and each $\varepsilon_i$ is a product of $g^*\delta_i$ and a rational function of $u_e$ with $e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)$.
It is a direct check that the isomorphism $g$ satisfies
$$
\theta_x^\mathsf a\!\circ\!g\!=\!\theta_x\qquad\textnormal{and}\qquad
g^*\mu_{e;i;I}^\mathsf a\!=\!
\frac{
\mu_{e;i;I}}{
\mu_{\mathsf d_i;i;I}}
\quad
\forall\,I\!\subset\!\mathbb I,~i\!\in\!\mathbb I_+\backslash I_+,~e\!\in\!\mathfrak E_i.
$$
Thus, $(\mathbb Phi_x^\mathsf a)^{-1}\!\circ\!\mathbb Phi_x\!=\! g$ and hence is an isomorphism.
\end{proof}
Let $$\{\wh\zeta_e\!:e\!\in\!\tn{Edg}(\gamma)\}\!\sqcup\!\{\wh\varsigma_j\!:j\!\in\! J\}$$ be another set of extended modular parameters centered at $\varpi(x)$ on the same chart $\mathcal V\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$; see~(\rangle\!\rangleef{Eqn:local_parameters}).
We use this set of local parameters to construct another twisted chart $\wh\mathbb Phi_x\!:\wh\mathfrak U_x\!\lambdangle\!\lambdangleongrightarrow\!\mtd{}$;
in particular,
we have $\wh\theta_x\!:\wh\mathfrak U_x\!\lambdangle\!\lambdangleongrightarrow\!\mathcal V$ and $\wh\mu_{e;i;I}\!\in\!\Gamma\big(\mathscr O_{\wh\mathfrak U_{x;(I)}^\circ}\big)$ as in~(\rangle\!\rangleef{Eqn:theta_x}) and~(\rangle\!\rangleef{Eqn:Phi_x_(I)}), respectively.
Parallel to~(\rangle\!\rangleef{Eqn:ti_cV}), the coordinates on $\wh\mathfrak U_x$ are denoted by
$$
\big(
(\wh\varepsilon_i)_{i\in\mathbb I_+},
(\wh u_e)_{e\in\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\{\mathsf e_i:{i\in\mathbb I_+}\}},
(\wh z_e)_{e\in\mathbb I_-},
(\wh w_j)_{j\in J}
\big).
$$
Let $\mathcal U\!=\!\mathbb Phi_x(\mathfrak U_x)\!\cap\!\wh\mathbb Phi_x(\wh\mathfrak U_x)$.
\betagin{lmm}
\lambdangle\!\lambdangleabel{Lm:Chart_compatible_ptws}
The transition map
$$
(\wh\mathbb Phi_x)^{-1}\!\circ\!\mathbb Phi_x:\,
\mathbb Phi_x^{-1}(\mathcal U)\lambdangle\!\lambdangleongrightarrow\wh\mathbb Phi_x^{-1}(\mathcal U)
$$ is an isomorphism.
\end{lmm}
\betagin{proof}
By Lemma~\rangle\!\rangleef{Lm:Chart_indep_svi},
we may assume that the same set of special vertices $\{\mathsf v_i\}_{i\in\mathbb I_+}$ is used for both $\mathbb Phi_x$ and $\wh\mathbb Phi_x$. For any $e\!\in\!\tn{Edg}(\gamma)$,
the local parameters $\wh\zeta_e$ and $\zeta_e$ define the same locus $\mathcal Z_e\!=\! \{\zeta_e\!=\! 0\}\!=\! \{\wh\zeta_e\!=\! 0\}$,
hence
there exists $f_e\!\in\!\Gamma(\mathscr O_\mathcal V^*)$ such that
$$
\wh\zeta_e=f_e\cdot \zeta_e.
$$
Therefore, we have
\betagin{equation}
\lambdangle\!\lambdangleabel{Eqn:different_modular_parameters}
\partial_{\wh\zeta_e}|_{\mathcal Z_e}=
\big(\tn{tf}rac{1}{f_e}\partial_{\zeta_e}\big)\big|_{\mathcal Z_e}.
\end{equation}
Let $g\!:\mathbb Phi_x^{-1}(\mathcal U)\!\lambdangle\!\lambdangleongrightarrow\!\wh\mathbb Phi_x^{-1}(\mathcal U)$ be the isomorphism given by
\betagin{gather*}
g^*\wh\varepsilon_i\!=\!
\varepsilon_i\!\cdot\!\frac{f_{\mathsf e_i}\!\cdot\!f_{\mathsf e_{i[1]}}\!\cdots
}{
f_{\mathsf e_{i^\sharp}}\!\cdot\!f_{\mathsf e_{i^\sharp[1]}}\!\cdots}~
\forall\,i\!\in\!\mathbb I_+,\qquad
g^*\wh z_e\!=\! z_e~\forall\,e\!\in\!\mathbb I_-,\qquad
g^*\wh w_j\!=\! w_j~\forall\,j\!\in\! J,
\\
g^*\wh u_e\!=\!
u_e\!\cdot\!\frac{\prod_{e'\succeq e}f_{e'}
}{
\prod_{e'\succeq \mathsf e_{\ell(e)}}f_{e'}}
\quad
\forall\,e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\{\mathsf e_i:i\!\in\!\mathbb I_+\}.
\end{gather*}
The explicit expression of $g$ above implies it is invertible;
see the parallel argument in the proof of Lemma~\rangle\!\rangleef{Lm:Chart_indep_svi}.
It is a direct check that
$
\wh\theta_x\!\circ\!g\!=\!\theta_x
$
and
$$
\frac{
g^*\wh\mu_{e;i;I}
}{
\prod_{\mathfrak e\succeq e,\,
\lambdangle\!\lambdanglebrp{\ell(\mathfrak e),\ell(v_\mathfrak e^+)}\not\subset I}
f_{\mathfrak e}}
=
\frac{
\mu_{e;i;I}
}{
\prod_{\mathfrak e\succeq\mathsf e_{i},\,
\lambdangle\!\lambdanglebrp{\ell(\mathfrak e),\ell(v_\mathfrak e^+)}\not\subset I}
f_{\mathfrak e}}\quad
\forall\,I\!\subset\!\mathbb I,~i\!\in\!\mathbb I_+\backslash I_+,~e\!\in\!\mathfrak E_i.$$
Taking~(\rangle\!\rangleef{Eqn:different_modular_parameters}) into account,
we conclude that $(\wh\mathbb Phi_x)^{-1}\!\circ\!\mathbb Phi_x\!=\! g$ and hence is an isomorphism.
\end{proof}
Given $I\!\subset\!\mathbb I$ and
$x'\!\in\!\mathbb Phi_x\big(\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(I)}]}\big)$,
let $\mathcal V_{\varpi(x')}\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$ be
a chart containing $\varpi(x')$ and $\mathbb Phi_{x'}\!:\mathfrak U_{x'}\!\lambdangle\!\lambdangleongrightarrow\!\mtd{}$ be a twisted chart centered at $x'$ over $\mathcal V_{\varpi(x')}\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$.
Let $\mathcal U\!=\!\mathbb Phi_x(\mathfrak U_x)\!\cap\!\mathbb Phi_{x'}(\mathfrak U_{x'}).$
\betagin{prp}
\lambdangle\!\lambdangleabel{Prp:Chart_compatible}
The transition map
$$
\mathbb Phi_{x'}^{-1}\circ\mathbb Phi_x:\,
\mathbb Phi_x^{-1}(\mathcal U)\lambdangle\!\lambdangleongrightarrow\mathbb Phi_{x'}^{-1}(\mathcal U)
$$
is an isomorphism.
\end{prp}
\betagin{proof}
Since $x'\!\in\!\mathbb Phi_x\big(\mathfrak U_{x;[\lambdangle\!\lambdanglet_{(I)}]}\big)\!\subset\!\mathbb Phi_x(\mathfrak U_x)$,
its underlying weighted curve satisfies
$$
\varpi(x')\in\mathcal V_{(I)}^\circ\quad
\Big(=\big\{\zeta_e\!\ne\!0:e\!\in\!\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\tn{Edg}(\lambdangle\!\lambdanglet_{(I)}\big)\big\}
\subset \mathcal V\Big).
$$
Thus, replacing $\mathcal V_{\varpi(x')}$ by $\mathcal V_{\varpi(x')}\!\cap\!\mathcal V_{(I)}^\circ$ if necessary,
we may assume
$$
\varpi(x')\in
\mathcal V_{\varpi(x')}
\subset\mathcal V_{(I)}^\circ.
$$
Moreover,
the following modular parameters on $\mathcal V$:
$$
\zeta_e\,,\quad e\in\tn{Edg}\big(\lambdangle\!\lambdanglet_{(I)}\big)
\subset\tn{Edg}(\lambdangle\!\lambdanglet)
$$
also serve as modular parameters on $\mathcal V_{\varpi(x')}$.
Thus by Lemma~\rangle\!\rangleef{Lm:Chart_compatible_ptws},
we may assume $\mathbb Phi_{x'}$ is constructed using the local parameters on $\mathcal V_{\varpi(x')}$:
$$
\{\zeta_e\}_{e\in\tn{Edg}(\lambdangle\!\lambdanglet_{(I)})}\sqcup
\Big(
\{\varsigma_j\}_{j\in J}
\sqcup
\{\zeta_e\}_{e\in\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\tn{Edg}(\lambdangle\!\lambdanglet_{(I)})}
\Big)
$$
as the analogue of~(\rangle\!\rangleef{Eqn:local_parameters}).
Let the special vertices $\mathsf v_i$ and edges $\mathsf e_i$ of $\lambdangle\!\lambdanglet$ be respectively as in~(\rangle\!\rangleef{Eqn:sv_i}) and~(\rangle\!\rangleef{Eqn:se_i}).
By Lemma~\rangle\!\rangleef{Lm:Chart_indep_svi},
we may further assume that the special vertices and edges of $\lambdangle\!\lambdanglet_{(I)}$ are respectively
$$
\mathsf v_i\quad\textnormal{and}\quad\mathsf e_i,\qquad
i\!\in\!\mathbb I_+\big(\lambdangle\!\lambdanglet_{(I)}\big)\!=\!\mathbb I_+\backslash I_+.
$$
For any $i\!\in\!\mathbb I_+\backslash I_+$ and $h\!\in\!\mathbb Z_{\ge 0}$,
let $i^\upsilonparrow$, $\mathsf e_i^\dag$, and $i(h)$ be the analogues of $i^\sharp$, $\mathsf e_i^+$, and $i[h]$,
respectively, for the weighted level tree $\lambdangle\!\lambdanglet_{(I)}$ instead of $\lambdangle\!\lambdanglet$;
see~(\rangle\!\rangleef{Eqn:i^sharp}), (\rangle\!\rangleef{Eqn:se_i}), and~(\rangle\!\rangleef{Eqn:i[h]}) for notation.
Recall that
$
\mathbb I_-\big(\lambdangle\!\lambdanglet_{(I)}\big)\!=\!
\mathbb I_-\backslash I_-\!\sqcup\!
\big\{e\!\in\!\mathbb I_{\mathbf{m}}\backslash I_{\mathbf{m}}\!:
\ell(v_e^+)\!\lambdangle\!\lambdanglee\!{\mathbf{m}}\big(\lambdangle\!\lambdanglet_{(I)}\big)\big\}.
$
We denote by
$$
\big(
(\varepsilon_i')_{i\in\mathbb I_+\!\backslash I_+},\,
(u_e')_{e\in\wh\tn{Edg}(\lambdangle\!\lambdanglet_{(I)})\backslash\{\mathsf e_i:{i\in\mathbb I_+\!\backslash I_+}\}},\,
(z_e')_{e\in\mathbb I_-(\lambdangle\!\lambdanglet_{(I)})},\,
(w_j')_{j\in J\sqcup(\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\tn{Edg}(\lambdangle\!\lambdanglet_{(I)}))}
\big)
$$
the coordinates on $\mathfrak U_{x'}$
parallel to~(\rangle\!\rangleef{Eqn:ti_cV}),
and construct
$$
\theta_{x'}\!:\mathfrak U_{x'}\!\lambdangle\!\lambdangleongrightarrow\!\mathcal V_{\varpi(x')}
\quad\textnormal{and}\quad
\mu'_{e;i;I'}\!\in\!\Gamma\big(\mathscr O_{\mathfrak U_{x';(I')}^{\circ}}\big),\
I'\!\subset\!\mathbb I\backslash I,\,i\!\in\!\mathbb I_+\!\backslash(I_+\!\sqcup\!I'),\,e\!\in\!\mathfrak E_i
$$
parallel to
$$
\theta_x:\mathfrak U_x\lambdangle\!\lambdangleongrightarrow\mathcal V\qquad
\textnormal{and}\qquad
\mu_{e;i;I}\in\Gamma\big(\mathscr O_{\mathfrak U_{x;(I)}^\circ}\big),
\quad i\!\in\!\mathbb I_+\!\backslash I_+,~e\!\in\!\mathfrak E_i,
$$
of~(\rangle\!\rangleef{Eqn:theta_x}) and~(\rangle\!\rangleef{Eqn:Phi_x_(I)}), respectively.
In this way,
$
\mathbb Phi_{x'}\!:\mathfrak U_{x'}\!\lambdangle\!\lambdangleongrightarrow\!\mtd{}
$
is constructed analogously to $\mathbb Phi_x$.
Let $g\!:\mathbb Phi_x^{-1}(\mathcal U)\!\lambdangle\!\lambdangleongrightarrow\!\mathbb Phi_{x'}^{-1}(\mathcal U)$ be the isomorphism given by
\betagin{equation*}\betagin{split}
g^*\varepsilon_i'
=\varepsilon_i
&\cdot
\Big(\!\!
\prod_{h\in\lambdangle\!\lambdangleprp{i,i^\upsilonparrow}_\lambdangle\!\lambdanglet}\!\!\!\!\!
\varepsilon_h\Big)
\cdot
\frac{
\prod_{\mathfrak e\succ_\lambdangle\!\lambdanglet\mathsf e_{i^\upsilonparrow},\,
\lambdangle\!\lambdanglebrp{\ell(\mathfrak e),\ell(v_\mathfrak e^+)}_\lambdangle\!\lambdanglet\subset I}\theta_x^*\zeta_\mathfrak e
}{
\prod_{\mathfrak e\succ_\lambdangle\!\lambdanglet\mathsf e_i,\,
\lambdangle\!\lambdanglebrp{\ell(\mathfrak e),\ell(v_\mathfrak e^+)}_\lambdangle\!\lambdanglet\subset I}\theta_x^*\zeta_\mathfrak e}
\cdot
\frac{
u_{\mathsf e_i^+}\!\cdot\!u_{\mathsf e_{i[1]}^+}\!\cdots
}{
u_{\mathsf e_{i^\upsilonparrow}^+}\!\cdot\!u_{\mathsf e_{i^\upsilonparrow[1]}^+}\!\cdots
}
\cdot
\\
&\cdot
\frac{
\mu_{\mathsf e_{i^\upsilonparrow}^\dag;\,i^\upsilonparrow(1);\,I}
\cdot
\mu_{\mathsf e_{i^\upsilonparrow\!(1)}^\dag;\,i^\upsilonparrow(2);\,I}
\cdots
}{
\mu_{\mathsf e_{i}^\dag;\,i(1);\,I}
\cdot
\mu_{\mathsf e_{i(1)}^\dag;\,i(2);\,I}
\cdots}
\hspace{1in}
\forall\,i\!\in\!\mathbb I_+\backslash I_+
\end{split}
\end{equation*}
and
\betagin{equation*}\betagin{split}
&g^*u_e'\!=\!\mu_{e;\,\ell_{(I)}\!(e);\,I}
\ \ \forall\,e\!\in\!\wh\tn{Edg}\big(\lambdangle\!\lambdanglet_{(I)}\big)\backslash\{\mathsf e_i\!:i\!\in\!\mathbb I_+
\!\backslash I_+\},
\\
&g^*z_e'\!=\! u_e\ \ \forall\,e\!\in\!\big\{e\!\in\!\mathbb I_{\mathbf{m}}\backslash I_{\mathbf{m}}\!:
\ell(v_e^+)\!\lambdangle\!\lambdanglee\!{\mathbf{m}}\big(\lambdangle\!\lambdanglet_{(I)}\big)\big\},
\qquad
g^*z_e'\!=\! z_e\ \ \forall\,e\!\in\!\mathbb I_-\backslash I_-,
\\
&g^* w_j'\!=\! w_j\ \ \forall\,j\!\in\! J,\qquad
g^* w_e'\!=\! \theta_x^*\zeta_e\ \ \forall\,e\!\in\!
\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\tn{Edg}(\lambdangle\!\lambdanglet_{(I)}).
\end{split}\end{equation*}
To see that $g$ is well defined,
notice that $\mathcal V_{\varpi(x')}\!\subset\!\mathcal V_{(I)}^\circ$ implies that
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:U_pullback}
\mathbb Phi_x^{-1}(\mathcal U)\subset\mathfrak U_{x;(I)}^\circ.
\end{equation}
Thus, every $\mu_{e;i;I}$ above can be considered as a function on $\mathbb Phi_x^{-1}(\mathcal U)$.
By~(\rangle\!\rangleef{Eqn:U_pullback}) and~(\rangle\!\rangleef{Eqn:th_{x;(I)}}),
the function $\theta_x^*\zeta_\mathfrak e$ with $\mathfrak e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\mathbb I_{\mathbf{m}}$ is nowhere vanishing on $\mathbb Phi_x^{-1}(\mathcal U)$ whenever $\lambdangle\!\lambdanglebrp{\ell(\mathfrak e),\ell(v_\mathfrak e^+)}_\lambdangle\!\lambdanglet\!\subset\!I$.
Taking~(\rangle\!\rangleef{Eqn:al_e_nonzero}) and~(\rangle\!\rangleef{Eqn:mu_vanishing}) into account,
we conclude that $g$ is well defined.
Once again, the explicit expression of $g$ implies it is invertible;
see the parallel argument in the proof of Lemma~\rangle\!\rangleef{Lm:Chart_indep_svi}.
Moreover,
it is a direct check that $\theta_{x'}\!\circ\!g\!=\! \theta_x$ and
$$
g^*\mu'_{e;i;I'}=\mu_{e;i;I\sqcup I'}
\in\Gamma\big(
\mathscr O_{\mathfrak U^\circ_{x;(I\sqcup I')}\!\cap\mathbb Phi_x^{-1}(\mathcal U)}
\big)
\quad
\forall\,I'\!\subset\!\mathbb I\backslash I,\,i\!\in\!\mathbb I_+\!\backslash(I_+\!\sqcup\!I'),\,e\!\in\!\mathfrak E_i.
$$
Thus,
$\mathbb Phi_{x'}^{-1}\!\circ\!\mathbb Phi_x\!=\! g$ and hence is an isomorphism.
\end{proof}
\betagin{crl}
\lambdangle\!\lambdangleabel{Crl:MtdSmooth}
$\mtd{}$ is a smooth Artin stack that is birational to $\mathfrak M^{\tn{wt}}$, with $\{\mathbb Phi_x\!:\mathfrak U_x\!\lambdangle\!\lambdangleongrightarrow\!\mtd{}\}_{x\in\mtd{}}$ as smooth charts.
Moreover, the structure of the stratification~(\rangle\!\rangleef{Eqn:MtdStrata}) is locally identical to the one
induced by~(\rangle\!\rangleef{Eqn:ti_V_strata}).
Furthermore, for any $x\!\in\!\mtd{}$, any chart $\mathcal V\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$ containing $\varpi(x)\!\in\!\mathfrak M^{\tn{wt}}$, and any twisted chart $\mathbb Phi_x\!:\mathfrak U_x\!\lambdangle\!\lambdangleongrightarrow\!\mtd{}$ centered at $x$ lying over $\mathcal V\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$,
we have
\betagin{center}
\betagin{tikzpicture}
\draw (0,1.2) node {$\mathfrak U_x$}
(2,1.2) node {$\mtd{}$}
(0,0) node {$\mathcal V$}
(2,0) node {$\mathfrak M^{\tn{wt}}$}
(1,1.2) node[below] {\small{$\mathbb Phi_x$}}
(2.2,.6) node {\small{$\varpi$}}
(-.3,.55) node {\small{$\theta_x$}};
\draw[->,>=stealth'] (.25,1.2)--(1.55,1.2);
\draw[->,>=stealth'] (.25,0)--(1.5,0);
\draw[->,>=stealth'] (0,.9)--(0,.2);
\draw[->,>=stealth'] (1.9,.95)--(1.9,.2);
\end{tikzpicture}
\end{center}
where $\varpi$ is the forgetful morphism as in~(\rangle\!\rangleef{Eqn:MtdStrata}) and $\theta_x$ is as in~(\rangle\!\rangleef{Eqn:theta_x}).
\end{crl}
\betagin{proof}
The first statement follows from Proposition~\rangle\!\rangleef{Prp:Chart_compatible},
(\rangle\!\rangleef{Eqn:Phi_onto}), and the fact that $\varpi$
restricts to the identity map on the preimage of
the open subset
$$
\big\{(C,\mathbf w)\!\in\!\mathfrak M^{\tn{wt}}:
\mathbf w(C_o)\!>\!0\big\}
\subset\mathfrak M^{\tn{wt}}.
$$
Lemma~\rangle\!\rangleef{Lm:Phi_x_inj} then implies for every $[\lambdangle\!\lambdanglet]\!\in\!\mathscr T_\mathsf L^\tn{wt}$,
the stack structure of $\mtd{[\lambdangle\!\lambdanglet]}$ is the same as that induced from the inclusion $\mtd{[\lambdangle\!\lambdanglet]}\!\hookrightarrow\!\mtd{}$;
i.e.~the second statement of Corollary~\rangle\!\rangleef{Crl:MtdSmooth} holds.
The last statement follows from~(\rangle\!\rangleef{Eqn:Phi_x_(I)}).
\end{proof}
\betagin{rmk}\lambdangle\!\lambdangleabel{Rmk:smooth}
By~(\rangle\!\rangleef{Eqn:theta_x}) and Corollary~\rangle\!\rangleef{Crl:MtdSmooth},
one sees that on an arbitrary twisted chart $\mathfrak U_x$ of $\mtd{}$,
\betagin{equation*}\betagin{split}
\varpi^*\big(\!\!\prod_{e'\succeq \mathsf e_{\mathbf{m}}}\!\!\zeta_{e'}\big)
&=(u_{\mathsf e_{\mathbf{m}}^+}u_{\mathsf e_{{\mathbf{m}}[1]}^+}\cdots)
\prod_{i\in\mathbb I_{\mathbf{m}}}\varepsilon_i,\\
\varpi^*\big(\prod_{e'\succeq e}\zeta_{e'}\big)
&=(u_e u_{\mathsf e_{\mathbf{m}}^+}u_{\mathsf e_{{\mathbf{m}}[1]}^+}\cdots)
\prod_{i\in\mathbb I_{\mathbf{m}}}\varepsilon_i=
u_e\cdot \varpi^*\big(\!\!\prod_{e'\succeq \mathsf e_{\mathbf{m}}}\!\!\zeta_{e'}\big)
\quad\forall\,e\!\in\!\mathfrak E_{\mathbf{m}}.
\end{split}\end{equation*}
This, along with the local equations of $\ov M_1(\mathbb P^n,d)$ in~\cite[\S5.2]{HL10},
implies that the primary component of $\ti M^\tn{tf}_1(\mathbb P^n,d)$ is smooth and that $\ti M^\tn{tf}_1(\mathbb P^n,d)$ has at worst normal crossing singularities. This observation should be useful in the
higher-genus cases.
\end{rmk}
\subsection{A simple example}
\lambdangle\!\lambdangleabel{Subsec:Eg}
Let $[\lambdangle\!\lambdanglet]\!=\![\gamma,\mathbf w,\ell]\!\in\!\mathscr T_\mathsf L^\tn{wt}$ be
given by the leftmost diagram in Figure~\rangle\!\rangleef{Fig:Example}.
Then,
$$
\tn{Edg}(\lambdangle\!\lambdanglet)\!=\!\{a,b,c,d\},\qquad
{\mathbf{m}}\!=\! -2,\qquad
\mathbb I\!=\!\mathbb I_+\!=\!\{-1,-2\}.
$$
Each of the four distinct subsets $I$ of $\mathbb I$ determines a weighted level tree $\lambdangle\!\lambdanglet_{(I)}$; see Figure~\rangle\!\rangleef{Fig:Example}.
Let $x\!\in\!\mtd{[\lambdangle\!\lambdanglet]}$ be a weighted curve of genus 1 with twisted fields over $(C,\mathbf w)\!\in\!\mathfrak M^{\tn{wt}}$.
The core and the nodes of $C$ are
labeled by $o$ and by $a,b,c,d$, respectively.
Let $\mathcal V\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$ be an affine smooth chart containing $(C,\mathbf w)$,
with a set of local parameters
$$
\{\zeta_a, \zeta_b, \zeta_c, \zeta_d\}\sqcup
\{\varsigma_j\}_{j\in J}
$$
centered at $(C,\mathbf w)$,
where $\zeta_a,\lambdangle\!\lambdangledots,\zeta_d$ are the modular parameters.
There then exist non-zero $\lambdangle\!\lambdanglea_c$ and $\lambdangle\!\lambdanglea_d$ such that
$$
x=\big(\,\upsilond 0\;;\;
[\,0\,,\partial_{\zeta_b}|_{\upsilond 0}\,]\;,\;
[\,\partial_{\zeta_a}|_{\upsilond 0}\,,\,
\lambdangle\!\lambdanglea_c\!\cdot\!(\partial_{\zeta_b}\!\otimes\!\partial_{\zeta_c})|_{\upsilond 0}\,, \,
\lambdangle\!\lambdanglea_d\!\cdot\!(\partial_{\zeta_b}\!\otimes\!\partial_{\zeta_d})|_{\upsilond 0}\,]\;
\big).
$$
\betagin{figure}
\betagin{center}
\betagin{tikzpicture}{htb}
\draw[dotted]
(1.8,0)--(4.2,0)
(1.8,-.8)--(4.2,-.8)
(1.8,-1.6)--(4.2,-1.6);
\draw
(3,0)--(2,-1.6)
(3,0)--(4,-1.6)
(3.5,-.8)--(3,-1.6);
\draw[very thick]
(3,0)--(2,-1.6)
(3,0)--(3.5,-.8);
\draw[fill=white]
(3,0) circle (1.5pt)
(3.5,-.8) circle (1.5pt);
\filldraw
(2,-1.6) circle (1.5pt)
(4,-1.6) circle (1.5pt)
(3,-1.6) circle (1.5pt);
\draw
(4.2,0) node[right] {\scriptsize{$0$}}
(4.2,-.8) node[right] {\scriptsize{$-1$}}
(4.2,-1.6) node[right] {\scriptsize{$-2$}}
(2.35,-0.6) node {\scriptsize{$a$}}
(3.45,-.3) node {\scriptsize{$b$}}
(3.06,-1.12) node {\scriptsize{$c$}}
(3.95,-1.1) node {\scriptsize{$d$}}
(3,0) node[above] {\scriptsize{$o$}}
(3,-1.8) node[below] {\scriptsize{$\lambdangle\!\lambdanglet$}};
\draw[dotted, xshift=4cm]
(2.3,0)--(3.7,0)
(2.3,-.8)--(3.7,-.8);
\draw[xshift=4cm]
(3,0)--(2.5,-.8)
(3,0)--(3.5,-.8);
\draw[very thick,xshift=4cm]
(3,0)--(3.5,-.8);
\draw[fill=black,xshift=4cm]
(2.5,-.8) circle (1.5pt)
(3.5,-.8) circle (1.5pt);
\draw[xshift=4cm]
(3.7,0) node[right] {\scriptsize{$0$}}
(3.7,-.8) node[right] {\scriptsize{$-1$}}
(2.55,-0.3) node {\scriptsize{$a$}}
(3.45,-.3) node {\scriptsize{$b$}}
(3,0) node[above] {\scriptsize{$o$}}
(3.2,-.9) node[below] {\scriptsize{$\lambdangle\!\lambdanglet_{(\{-2\})}$}};
\draw[dotted, xshift=8cm]
(2,0)--(4,0)
(2,-.8)--(4,-.8);
\draw[xshift=8cm]
(3,0)--(2.2,-.8)
(3,0)--(3,-.8)
(3,0)--(3.8,-.8);
\draw[very thick,xshift=8cm]
(3,0)--(2.2,-.8);
\draw[fill=black,xshift=8cm]
(2.2,-.8) circle (1.5pt)
(3,-.8) circle (1.5pt)
(3.8,-.8) circle (1.5pt);
\draw[xshift=8cm]
(4,0) node[right] {\scriptsize{$0$}}
(4,-.8) node[right] {\scriptsize{$-2$}}
(2.45,-0.3) node {\scriptsize{$a$}}
(3.55,-.3) node {\scriptsize{$b$}}
(3.12,-.45) node {\scriptsize{$c$}}
(3,0) node[above] {\scriptsize{$o$}}
(3.2,-.9) node[below] {\scriptsize{$\lambdangle\!\lambdanglet_{(\{-1\})}$}};
\draw[dotted, xshift=7.4cm, yshift=-2.5cm]
(2.7,0)--(3.3,0);
\draw[fill=black,xshift=7.4cm,yshift=-2.5cm]
(3,0) circle (1.5pt);
\draw[xshift=7.4cm,yshift=-2.5cm]
(3.2,0) node[right] {\scriptsize{$0$}}
(3,0) node[above] {\scriptsize{$o$}}
(4.6,0) node {\scriptsize{$\lambdangle\!\lambdanglet_{(\{-1,-2\})}$}};
\draw[xshift=1.7cm,yshift=.1cm]
(4,-2) circle (1.5pt);
\filldraw[xshift=1.7cm,yshift=.1cm]
(4,-2.4) circle (1.5pt);
\draw[xshift=1.7cm,yshift=.1cm]
(4,-2) node[right] {\scriptsize{$:\textnormal{vertex~of~weight}~0$}}
(4,-2.4) node[right] {\scriptsize{$:\textnormal{vertex~of~positive~weight}$}};
\draw[dashed,xshift=1.7cm,yshift=.1cm]
(3.7,-1.7) rectangle (7.7,-2.7);
\end{tikzpicture}
\end{center}
\caption{Relevant weighted level trees in \S\rangle\!\rangleef{Subsec:Eg}}\lambdangle\!\lambdangleabel{Fig:Example}
\end{figure}
We choose the special edges (\rangle\!\rangleef{Eqn:se_i}) of $\lambdangle\!\lambdanglet$ to be $\mathsf e_1\!=\! b$ and $\mathsf e_2\!=\! a$.
Let
$$
\mathfrak U_x\subset\mathbb A^{\{-1,-2\}}\times\mathbb A^{\{c,d\}}\times\mathbb A^J
$$
be an open subset containing the point
$$
y_x= (0,0,\lambdangle\!\lambdanglea_c,\lambdangle\!\lambdanglea_d,0,\lambdangle\!\lambdangledots,0).
$$
The coordinates of $\mathfrak U_x$ are denoted by
$$
\varepsilon_{-1},\,\varepsilon_{-2},\,
u_c,\, u_d,\quad\textnormal{and}\quad
w_j~\textnormal{with}~j\!\in\! J.
$$
We may take $\mathfrak U_x\!=\!\{u_c\!\ne\!0,u_d\!\ne\!0\}$.
By Corollary~\rangle\!\rangleef{Crl:MtdSmooth} and~(\rangle\!\rangleef{Eqn:theta_x}),
the forgetful morphism $\varpi\!:\mtd{}\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$ can locally be written as $\theta_x\!:\mathfrak U_x\!\lambdangle\!\lambdangleongrightarrow\!\mathcal V$ such that
$$
\theta_x
\big(\,
\varepsilon_{-1},\,\varepsilon_{-2},\,
u_c,\, u_d,\, (w_j)\,
\big)
\!=\!
\big(
\upsilonnderbrace{
\varepsilon_{-1}\!\cdot\!\varepsilon_{-2}}_{\zeta_a},
\upsilonnderbrace{\varepsilon_{-1}}_{\zeta_b},
\upsilonnderbrace{
\varepsilon_{-2}\!\cdot\!u_c}_{\zeta_c},
\upsilonnderbrace{
\varepsilon_{-2}\!\cdot\!u_d}_{\zeta_d},
(w_j)\,
\big).
$$
Considering all possible subsets $I$ of $\mathbb I$ in (\rangle\!\rangleef{Eqn:Phi_x_(I)}),
we obtain a twisted chart $\mathbb Phi_x\!:\mathfrak U_x\!\lambdangle\!\lambdangleongrightarrow\!\mtd{}$ centered at $x$ over $\mathcal V\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$
so that for any
$$
y=\big(\,
\varepsilon_{-1},\,\varepsilon_{-2},\,
u_c,\, u_d,\, (w_j)\,
\big)\in\mathfrak U_x,
$$
\begin{enumerate}
[leftmargin=*,label=$\bullet$]
\item
if $\varepsilon_{-1}\!=\! \varepsilon_{-2}\!=\! 0$,
then
\betagin{equation*}\betagin{split}
\mathbb Phi_x(y)\!=\!
\Big(
\big(&0,0,0,0,(w_j)\big);\\
[&0,\partial_{\zeta_b}\!|_{\theta_x(y)}],\;
[\partial_{\zeta_a}\!|_{\theta_x(y)},\,u_c
(\partial_{\zeta_b}\!\otimes\!\partial_{\zeta_c})\!|_{\theta_x(y)},\,
u_d
(\partial_{\zeta_b}\!\otimes\!\partial_{\zeta_d})\!|_{\theta_x(y)}]
\Big)
\in\mtd{[\lambdangle\!\lambdanglet]};
\end{split}\end{equation*}
\item
if $\varepsilon_{-1}\!\ne\! 0$ and $\varepsilon_{-2}\!=\! 0$,
then
$$
\mathbb Phi_x(y)\!=\!
\Big(\!
\big(0,\varepsilon_{-1},0,0,\,(w_j)\big);\,
[\partial_{\zeta_a}\!|_{\theta_x(y)}\,,\,
\tn{tf}rac{u_c}{\varepsilon_{-1}}\!\cdot\!
\partial_{\zeta_c}\!|_{\theta_x(y)}\,,\,
\tn{tf}rac{u_d}{\varepsilon_{-1}}\!\cdot\!
\partial_{\zeta_d}\!|_{\theta_x(y)}]
\Big)
\in\mtd{[\lambdangle\!\lambdanglet_{(\{-1\})}]};
$$
\item
if $\varepsilon_{-1}\!=\! 0$ and $\varepsilon_{-2}\!\ne\! 0$,
then
$$
\mathbb Phi_x(y)\!=\!
\Big(
\big(0,\,0,\,\varepsilon_{-2}\!\cdot\!u_c,\,\varepsilon_{-2}\!\cdot\!u_d,\,(w_j)\big);\,
[\varepsilon_{-2}\!\cdot\!\partial_{\zeta_a}\!|_{\theta_x(y)}\,,\,
\partial_{\zeta_b}\!|_{\theta_x(y)}]
\Big)
\in\mtd{[\lambdangle\!\lambdanglet_{(\{-2\})}]};
$$
\item
if $\varepsilon_{-1}\!\ne\! 0$ and $\varepsilon_{-2}\!\ne\! 0$,
then
$$
\mathbb Phi_x(y)=\theta_x(y)=
\big(\varepsilon_{-1}\!\cdot\!\varepsilon_{-2},\,
\varepsilon_{-1},\,\varepsilon_{-2}\!\cdot\!u_c,\,\varepsilon_{-2}\!\cdot\!u_d,(w_j)\big)
\in\mtd{[\lambdangle\!\lambdanglet_{(\{-1,-2\})}]}.
$$
\end{enumerate}
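As an informal consistency check (added here only for orientation; it is not used in the sequel), the twisted fields in the two intermediate cases record the limiting ratios of the pulled-back modular parameters as the vanishing parameters tend to $0$: in the second case (letting $\varepsilon_{-2}\!\to\!0$ with $\varepsilon_{-1}\!\ne\!0$ fixed),
$$
[\,\zeta_a:\zeta_c:\zeta_d\,]
=[\,\varepsilon_{-1}\varepsilon_{-2}:\varepsilon_{-2}u_c:\varepsilon_{-2}u_d\,]
=\Big[\,1:\tfrac{u_c}{\varepsilon_{-1}}:\tfrac{u_d}{\varepsilon_{-1}}\,\Big],
$$
while in the third case $[\,\zeta_a:\zeta_b\,]=[\,\varepsilon_{-1}\varepsilon_{-2}:\varepsilon_{-1}\,]=[\,\varepsilon_{-2}:1\,]$, in agreement with the displayed formulas.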
With the expressions of $\mathbb Phi_x$ as above,
it is straightforward to check
$$
\mathbb Phi_x(0,0,\lambdangle\!\lambdanglea_c,\lambdangle\!\lambdanglea_d,0,\lambdangle\!\lambdangledots,0)=x,
$$
as well as to verify the statements of Lemmas~\rangle\!\rangleef{Lm:Phi_x_inj}, \rangle\!\rangleef{Lm:Chart_indep_svi}, \rangle\!\rangleef{Lm:Chart_compatible_ptws},
and Proposition~\rangle\!\rangleef{Prp:Chart_compatible} in this situation.
\subsection{Universal family}\lambdangle\!\lambdangleabel{Subsec:Universal_family}
Let $\pi^\tn{wt}\!:\mathcal C^\tn{wt}\!\longrightarrow\! \mathfrak M^{\tn{wt}}$ be the universal weighted nodal curve of genus 1.
The stratification~(\rangle\!\rangleef{Eqn:Mwt_strata}) gives rise to a stratification
$$
\mathcal C^\tn{wt}=\!\bigsqcup_{{\tau}\in\mathscr T_\mathsf R^\tn{wt}}\!\!\mathcal C^\tn{wt}_{\tau}\qquad
\textnormal{satisfying}\qquad
\pi^\tn{wt}(\mathcal C^\tn{wt}_{\tau})\!=\!\mathfrak M^{\tn{wt}}_{\tau}\quad
\forall\,{\tau}\!\in\!\mathscr T_\mathsf R^\tn{wt}.
$$
Parallel to~(\rangle\!\rangleef{Eqn:MtdStrat_and_sEcT}) and~(\rangle\!\rangleef{Eqn:MtdStrata}),
we set
\betagin{equation*}
\betagin{split}
\mathcal C^\tn{tf}_{[\lambdangle\!\lambdanglet]}&=
\bigg(
\prod_{i\in\mathbb I_+(\lambdangle\!\lambdanglet)}\!\!
\Big\lambdangle\!\lambdanglegroup\!
\Big(\,\mathring\mathbb P\big(\!
\bigoplus_{\!
\betagin{subarray}{c}
e\in\tn{Edg}(\lambdangle\!\lambdanglet),\,
\ell(v_e^-)=i
\end{subarray}
}\!\!\!\!\!\!\!\!\!\!\!\!
(\pi^\tn{wt})^*L_e^\succeq\;
\big)\!\Big)\Big/
\mathcal C^\tn{wt}_{\mathfrak f[\lambdangle\!\lambdanglet]}\Big\rangle\!\ranglegroup\!\!
\bigg)
\stackrel{}{\lambdangle\!\lambdangleongrightarrow}\mathcal C^\tn{wt}_{\mathfrak f[\lambdangle\!\lambdanglet]}\qquad
\forall\,[\lambdangle\!\lambdanglet]\!\in\!\mathscr T_\mathsf L^\tn{wt},
\\
\mathcal C^\tn{tf}&=
\!\bigsqcup_{[\lambdangle\!\lambdanglet]\in\mathscr T_\mathsf L^\tn{wt}}\!\!\!
\mathcal C^\tn{tf}_{[\lambdangle\!\lambdanglet]}
\stackrel{}{\lambdangle\!\lambdangleongrightarrow}\mathcal C^\tn{wt}.
\end{split}
\end{equation*}
Mimicking the construction of the stack structure of $\mtd{}$ in \S\rangle\!\rangleef{Subsec:Charts_of_Mtf} and \S\rangle\!\rangleef{Subsec:transition_maps},
we can endow $\mathcal C^\tn{tf}$ with a stack structure analogously.
Furthermore,
the projection $\pi^\tn{wt}\!:\mathcal C^\tn{wt}\!\lambdangle\!\lambdangleongrightarrow\! \mathfrak M^{\tn{wt}}$ induces a unique projection
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:tf_universal_family}
\pi^\tn{tf}:\mathcal C^\tn{tf}\lambdangle\!\lambdangleongrightarrow\mtd{}.
\end{equation}
It is straightforward to check that
$$
\mathcal C^\tn{tf}\cong\mathcal C^\tn{wt}\!\times_{\mathfrak M^{\tn{wt}}}\!\mtd{}\lambdangle\!\lambdangleongrightarrow\mtd{}.
$$
For any scheme $S$, a flat family $\mathcal Z/S$ of stable weighted nodal curves of genus 1 with twisted fields
corresponds to a morphism
\hbox{$f\!: S\lambdangle\!\lambdangleongrightarrow\mtd{}$} such that $\mathcal Z/S$ is the pullback of~(\rangle\!\rangleef{Eqn:tf_universal_family}):
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:family}
\mathcal Z/S \cong (S \times_{\mtd{}} \mathcal C^\tn{tf})/S\,.
\end{equation}
This leads to the following statement.
\betagin{prp}
\lambdangle\!\lambdangleabel{Prp:Stack}
$\mathcal C^\tn{tf}\!\lambdangle\!\lambdangleongrightarrow\!\mtd{}$ in~(\rangle\!\rangleef{Eqn:tf_universal_family}) gives the universal family of $\mtd{}$.
\end{prp}
\betagin{rmk}\lambdangle\!\lambdangleabel{Rmk:moduli}
One may establish a moduli interpretation of $\mtd{}$:
for any scheme $S$,
every flat family $\mathcal Z/S$ of stable weighted nodal curves of genus 1 with twisted fields can be constructed directly as follows.
A priori,
$\mathcal Z/S$ should be over a flat family $\mathcal C_S/S$ of stable weighted curves,
thus by the universality of the moduli $\mathfrak M^{\tn{wt}}$,
there exists a morphism
$$
\alpha:S\lambdangle\!\lambdangleongrightarrow\mathfrak M^{\tn{wt}}
=\!\bigsqcup_{{\tau}\in\mathscr T_\mathsf R^\tn{wt}}\!\!\mathfrak M^{\tn{wt}}_{\tau}
$$
such that $\mathcal C_S/S$ is the pullback of $\mathcal C^\tn{wt}\!/\,\mathfrak M^{\tn{wt}}$ via $\alpha$.
This induces a stratification of the scheme~$S$:
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:S_stratification}
S=\!\bigsqcup_{{\tau}\in\mathscr T_\mathsf R^\tn{wt}}\!\!S_{\tau}\qquad
\textnormal{satisfying}\qquad
\alpha(S_{\tau})\!\subset\!\mathfrak M^{\tn{wt}}_{\tau}\quad
\forall\,{\tau}\!\in\!\mathscr T_\mathsf R^\tn{wt}.
\end{equation}
We take
\betagin{equation*}
\betagin{split}
S^\tn{tf}_{[\lambdangle\!\lambdanglet]}&=
\bigg(
\prod_{i\in\mathbb I_+(\lambdangle\!\lambdanglet)}\!\!\!
\Big\lambdangle\!\lambdanglegroup\!
\Big(\,\mathring\mathbb P
\big(\!
\bigoplus_{\!
\betagin{subarray}{c}
e\in\tn{Edg}(\lambdangle\!\lambdanglet),\,
\ell(v_e^-)=i
\end{subarray}
}\!\!\!\!\!\!\!\!\!\!\!
\alpha^*L_e^\succeq\;
\big)
\Big)\Big/S_{\mathfrak f[\lambdangle\!\lambdanglet]}
\Big\rangle\!\ranglegroup\!\!
\bigg)
\stackrel{\pi_S}{\lambdangle\!\lambdangleongrightarrow}S_{\mathfrak f[\lambdangle\!\lambdanglet]}\qquad
\forall\,[\lambdangle\!\lambdanglet]\!\in\!\mathscr T_\mathsf L^\tn{wt},\\
S^\tn{tf}&=
\!\bigsqcup_{[\lambdangle\!\lambdanglet]\in\mathscr T_\mathsf L^\tn{wt}}\!\!
S^\tn{tf}_{[\lambdangle\!\lambdanglet]}
\stackrel{\pi_S}{\lambdangle\!\lambdangleongrightarrow}S.
\end{split}
\end{equation*}
For any chart $\mathcal V_S\!\lambdangle\!\lambdangleongrightarrow\!S$,
shrinking it if necessary,
we see there exists a (smooth) chart $\mathcal V\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$ such that
\betagin{center}
\betagin{tikzpicture}{h}
\draw (0,1) node {$\mathcal V_S$}
(1.9,1) node {$\mathcal V$}
(0,0) node {$S$}
(2,0) node {$\mathfrak M^{\tn{wt}}$};
\draw[->,>=stealth'] (.25,1)--(1.65,1);
\draw[->,>=stealth'] (.25,0)--(1.5,0);
\draw[->,>=stealth'] (0,.7)--(0,.2);
\draw[->,>=stealth'] (1.9,.75)--(1.9,.2);
\end{tikzpicture}
\end{center}
commutes.
The modular parameters $\zeta_e$ on $\mathcal V$ pull back to regular functions on $\mathcal V_S$,
which are denoted by $\zeta^S_e$.
By~(\rangle\!\rangleef{Eqn:S_stratification}),
we have
$$
S_{\tau}\cap\mathcal V_S=\{\zeta^S_e\!=\! 0:e\!\in\!\tn{Edg}({\tau})\}\cap\{\zeta^S_{e'}\!\ne\!0:e'\!\not\in\!\tn{Edg}({\tau})\}.
$$
Mimicking the construction in \S\rangle\!\rangleef{Subsec:Charts_of_Mtf} and \S\rangle\!\rangleef{Subsec:transition_maps},
we can thus endow $S^\tn{tf}$ with a scheme structure.
We say $\mathcal Z/S$ is a flat family of stable weighted nodal curves of genus 1 with twisted fields
if and only if there exists a section $\sigma^\tn{tf}$ of $\pi_S\!:S^\tn{tf}\!\lambdangle\!\lambdangleongrightarrow S$ such that
$$
\mathcal Z=\mathcal C_S\times_S\big(\sigma^\tn{tf}(S)\big).
$$
This construction is consistent with~(\rangle\!\rangleef{Eqn:family}).
One can check that the groupoid sending any scheme $S$ to the set of all such defined flat families $\mathcal Z/S$ is represented by $\mtd{}$.
We would like to remark that a more succinct definition of
a flat family of stable weighted nodal curves of genus 1 with twisted fields
would be desirable.
\end{rmk}
\section{Comparison with Hu-Li's blowup stack $\ti\mathfrak M^{\textnormal{wt}}$}
Let $\pi\!:\ti\mathfrak M^{\tn{wt}}\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$ be the sequential blowup constructed in~\cite[\S2.2]{HL10}.
Since $\mathfrak M^{\tn{wt}}$ is a smooth Artin stack and the blowup centers are all smooth,
so is $\ti\mathfrak M^{\tn{wt}}$.
As per the convention of this paper, we omit the subscript indicating the genus.
In Proposition~\rangle\!\rangleef{Prp:Isomorphism},
we show that $\mtd{}$ is isomorphic to $\ti\mathfrak M^{\tn{wt}}$.
Lemma~\rangle\!\rangleef{Lm:Blowup} is rather technical; it is only used in the proof of Proposition~\rangle\!\rangleef{Prp:Isomorphism}.
We briefly recall the notion of the \textsf{locally tree compatible blowups} described in~\cite[\S3]{HLN}.
Let $\mathfrak M$ be a smooth stack, $\gamma$ be a rooted tree, and $\mathcal V$ be an affine smooth chart of $\mathfrak M$.
If there exists a set of local parameters on $\mathcal V$ so that a subset of which can be written as
\betagin{equation*}
\big\{\,
z_e\!\in\!\Gamma\big(\mathscr O_{{\mathcal V}}\big):
e\in\tn{Edg}(\gamma)\,
\big\}\,,
\end{equation*}
then the set is called a \textsf{$\gamma$-labeled subset of local parameters} on $\mathcal V$.
For example,
if $\mathfrak M\!=\!\mathfrak M^{\tn{wt}}$ and $\mathcal V$ is a chart centered at a weighted curve whose reduced dual tree is $\gamma$,
then the set of the modular parameters $\{\zeta_e\}_{e\in\tn{Edg}(\gamma)}$ is a $\gamma$-labeled subset of local parameters.
Let $\tn{Ver}(\gamma)_{\min}$ be the set of the minimal vertices of $\gamma$ with respect to the tree order.
We call a subset $\mathfrak S$ of $\tn{Edg}(\gamma)$ a \textsf{traverse section} if for any $v\!\in\!\tn{Ver}(\gamma)_{\min}$,
the path between $o$ and $v$ contains exactly one element of~$\mathfrak S$.
For example,
the subsets $\mathfrak E_i$ of $\tn{Edg}(\gamma)$ as in~(\rangle\!\rangleef{Eqn:fE_i}) are traverse sections.
Let $\Xi(\gamma)$ be the set of the traverse sections.
The tree order on $\tn{Edg}(\gamma)$ induces a partial order on $\Xi(\gamma)$ such that
$$
\mathfrak S\succ\mathfrak S'\quad
\Longleftrightarrow\quad
\big\lambdangle\!\lambdanglegroup\,
\mathfrak S\!\ne\!\mathfrak S'\,
\big\rangle\!\ranglegroup~\textnormal{and}~
\big\lambdangle\!\lambdanglegroup\,
\forall~e\!\in\!\mathfrak S,~
\exists~e'\!\in\!\mathfrak S'~\textnormal{s.t.}~e\!\succeq\!e'\,
\big\rangle\!\ranglegroup.
$$
We remark that the tree order on $\tn{Edg}(\gamma)$ and the induced order on $\Xi(\gamma)$ in this paper are both {\it opposite} to those in~\cite{HLN},
in order to be consistent with the order of the levels of the weighted level trees.
Let $\ti\mathfrak M\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M$ be the sequential blowup of $\mathfrak M$ successively along the proper transforms of the closed substacks
$
Z_1, Z_2,\lambdangle\!\lambdangledots
$ of $\mathfrak M$.
\betagin{dfn}{\cite[Definitions~3.2.4~\&~3.2.1]{HLN}}\lambdangle\!\lambdangleabel{Dfn:Local_Tree_Comptbl}
The blowup $\ti\mathfrak M\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M$ above is said to be \textsf{locally tree-compatible} if
there exists an \'etale cover $\{\mathcal V\}$ of $\mathfrak M$
such that for each $\mathcal V\!\in\!\{\mathcal V\}$,
there exist a rooted tree $\gamma$, a partition of $\Xi(\gamma)$:
$$
\Xi(\gamma)
=\bigsqcup_{k\ge 1}
\Xi_k(\gamma)
$$
and a $\gamma$-labeled subset of local parameters on $\mathcal V$ such that
\betagin{itemize}[leftmargin=*]
\item
for every $k\ge 1$,
$$
Z_{k}\cap\mathcal V=
\bigcup_{\mathfrak S\in\Xi_k(\gamma)} \!\!\!\!
\big\{\,z_e\!=\! 0:\,
e\!\in\! \mathfrak S\,\big\};
$$
\item
if $\mathfrak S'\!\in\!\Xi_{k'}(\gamma)$, $\mathfrak S''\!\in\!\Xi_{k''}(\gamma)$, and $\mathfrak S'\!\succ\!\mathfrak S''$,
then $k'\!<\!k''$.
\end{itemize}
\end{dfn}
If a sequential blowup $\ti\mathfrak M\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M$ is locally tree-compatible,
then the blowup procedure is finite on each $\mathcal V\!\in\!\{\mathcal V\}$, because the set $\Xi(\gamma)$ is finite.
\betagin{lmm}\lambdangle\!\lambdangleabel{Lm:Blowup}
If the blowup $\ti\mathfrak M\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M$ successively along the proper transforms of the closed substacks $Z_1,Z_2,\lambdangle\!\lambdangledots$ of $\mathfrak M$ is locally tree-compatible,
then the blowup $\ti\mathfrak M'\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M$ successively along the total transforms of
$$
Y_1=Z_1,\ \ Y_2=Z_1\!\cup\!Z_2,\ \ Y_3=Z_1\!\cup\!Z_2\!\cup\!Z_3,\ \ \lambdangle\!\lambdangledots
$$
yields the same space, i.e.~$\ti\mathfrak M'\!=\!\ti\mathfrak M$.
\end{lmm}
\betagin{proof}
We prove the statement by induction.
For each $h\!\ge\!1$,
we will show that after the $h$-th step,
the blowup stack $\ti\mathfrak M_{(h)}'$ of $\mathfrak M$ along the total transforms of $Y_1,\ldots,Y_h$ is the same as the blowup $\ti\mathfrak M_{(h)}$ of $\mathfrak M$ along the proper transforms of $Z_1,\ldots,Z_h$.
The base case of the induction is trivial.
Suppose the blowup $\ti\mathfrak M_{(k)}'\!=\!\ti\mathfrak M_{(k)}$.
We will show that for any $x\!\in\!\mathfrak M$ and any lift $\ti x$ of $x$ after the $k$-th step,
the blowup along the total transform $\ti Y_{k+1}$ of $Y_{k+1}$ has the same effect as that along the proper transform $\wc Z_{k+1}$ of $Z_{k+1}$ near $\ti x$.
Since $x$ and $\ti x$ are arbitrary,
this will establish the $(k\!+\!1)$-th step of the induction.
W.l.o.g.~we may assume $x\!\in\!\bigcap_{i=1}^{k+1}Z_i$
(otherwise we simply omit the loci $Z_i$ not containing $x$ and change the indices of $Z_i$ and $Y_i$ accordingly).
The blowup $\ti\mathfrak M\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M$ is locally tree-compatible,
hence there exist a rooted tree $\gamma$, an affine smooth chart $\mathcal V$ containing $x$, and a $\gamma$-labeled subset of local parameters $z_e$, $e\!\in\!\tn{Edg}(\gamma)$ on $\mathcal V$ such that
$$
x\in\{z_e\!=\! 0 :e\!\in\!\tn{Edg}(\gamma)\}.
$$
As shown in~\cite[Lemma~3.3.2]{HLN},
there exist traverse sections $\mathfrak S_{(k)}\!\in\!\Xi_k(\gamma)$ and $\mathfrak S_{(k+1)}\!\in\!\Xi_{k+1}(\gamma)$ (c.f.~Definition~\rangle\!\rangleef{Dfn:Local_Tree_Comptbl}), an affine smooth chart $\ti\mathcal V_{\ti x}$, and a subset of local parameters
$$
\ti\varepsilon_1,\lambdangle\!\lambdangledots,\ti\varepsilon_k;\qquad
\ti z_e\ \ \textnormal{with}~e\!\in\!\mathfrak S_{(k+1)}\backslash\mathfrak S_{(k)};\qquad
\wc z_e\ \ \textnormal{with}~e\!\in\!\mathfrak S_{(k+1)}\!\cap\!\mathfrak S_{(k)}
$$
on $\ti\mathcal V_{\ti x}$ so that $\wc Z_{k+1}$ is locally given by
\betagin{equation*}\betagin{split}
\wc Z_{k+1}\cap\ti\mathcal V_{\ti x}&= \big\{\ti z_e\!=\! 0:e\!\in\!\mathfrak S_{(k+1)}\backslash\mathfrak S_{(k)}, \; \wc z_e\!=\! 0:e\!\in\!\mathfrak S_{(k+1)}\!\cap\!\mathfrak S_{(k)}\big\}.
\end{split}\end{equation*}
Moreover, by~\cite[(3.13)]{HLN},
the total transform of each $Z_i$ with $1\!\lambdangle\!\lambdanglee\!i\!\lambdangle\!\lambdanglee\!k$ is locally given by $\{\ti\varepsilon_i\!=\! 0\}$.
Thus, $\ti Y_{k+1}$ is locally given by
$$
\ti Y_{k+1}\cap\ti\mathcal V_{\ti x}
=\big(\wc Z_{k+1}\cap\ti\mathcal V_{\ti x}\big)
\cup
\big\{\prod_{1\lambdangle\!\lambdanglee i\lambdangle\!\lambdanglee k}\!\!\ti\varepsilon_i= 0\big\}.
$$
That is, on the chart $\ti\mathcal V_{\ti x}$, $\wc Z_{k+1}$ and $\ti Y_{k+1}$ are defined by the ideals
$$
\mathscr I_{\wc Z_{k+1}}\!\!=\!\lambdangle\!\lambdangleangle \ti z_e:e\!\in\!\mathfrak S_{(k+1)}\backslash\mathfrak S_{(k)}, \; \wc z_e :e\!\in\!\mathfrak S_{(k+1)}\!\cap\!\mathfrak S_{(k)} \rangle\!\rangleangle\quad\textnormal{and}\quad
\mathscr I_{\wc Z_{k+1}}\!\big(\!\prod_{1\lambdangle\!\lambdanglee i\lambdangle\!\lambdanglee k}\!\!\!\ti\varepsilon_i\big),
$$
respectively.
Therefore,
blowing up along $\wc Z_{k+1}$ has the same effect on $\ti\mathcal V_{\ti x}$ as that along $\ti Y_{k+1}$.
\end{proof}
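To illustrate the mechanism in the simplest possible situation (a toy example added here for the reader's convenience; it is not used elsewhere), take $\mathfrak M\!=\!\mathbb A^2$ with coordinates $(z_1,z_2)$, $Z_1\!=\!\{z_1\!=\! z_2\!=\! 0\}$, and $Z_2\!=\!\{z_1\!=\! 0\}$. On the chart of the blowup of $\mathbb A^2$ along $Z_1$ with coordinates $(u,v)$ and blowdown map $(z_1,z_2)\!=\!(uv,v)$, the total transform of $Z_1$ is $\{v\!=\! 0\}$, the proper transform of $Z_2$ is $\{u\!=\! 0\}$, and the total transform of $Y_2\!=\! Z_1\!\cup\!Z_2$ is $\{uv\!=\! 0\}$. The ideals $(u)$ and $(uv)$ differ by the invertible factor $(v)$, so blowing up along either of them produces the same space, exactly as in the proof above.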
\betagin{prp}
\lambdangle\!\lambdangleabel{Prp:Isomorphism}
$\mtd{}/\mathfrak M^{\tn{wt}}$ is isomorphic to $\ti\mathfrak M^{\tn{wt}}/\mathfrak M^{\tn{wt}}$.
In particular, $\varpi\!:\mtd{}\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$ is proper.
\end{prp}
\betagin{proof}
Our goal is to construct two morphisms $\tn{ps}i_1$ and $\tn{ps}i_2$ between $\ti\mathfrak M^{\tn{wt}}$ and $\mtd{}$ so that the following diagram
\betagin{center}
\betagin{tikzpicture}{h}
\draw (0,1) node {$\ti\mathfrak M^{\tn{wt}}$}
(3.2,1) node {$\mtd{}$}
(1.5,0) node {$\mathfrak M^{\tn{wt}}$}
(1.6,1) node[above]{\small{$\tn{ps}i_2$}}
(1.6,1) node[below]{\small{$\tn{ps}i_1$}}
(.55,.35) node[left]{\small{$\pi$}}
(2.45,.35) node[right]{\small{$\varpi$}};
\draw[->,>=stealth'] (.51,1.05)--(2.7,1.05);
\draw[->,>=stealth'] (2.69,.95)--(.5,.95);
\draw[->,>=stealth'] (.1,.7)--(1,.1);
\draw[->,>=stealth'] (2.9,.7)--(2,.1);
\end{tikzpicture}
\end{center}
commutes.
Since $\pi$ and $\varpi$
restrict to the identity map on the preimages of
the open subset
$$
\big\{(C,\mathbf w)\!\in\!\mathfrak M^{\tn{wt}}:
\mathbf w(C_o)\!>\!0\big\}
\subset\mathfrak M^{\tn{wt}},
$$
respectively,
we see that
$\tn{ps}i_2\!\circ\!\tn{ps}i_1$ and $\tn{ps}i_1\!\circ\!\tn{ps}i_2$ are the identity maps.
This then implies the former statement of Proposition~\ref{Prp:Isomorphism}.
The latter statement follows from the former as well as the properness of the blowup $\ti\mathfrak M^{\tn{wt}}\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$.
We first construct $\tn{ps}i_1$.
For each $k\!\in\!\mathbb Z_{>0}$,
let $Z_k\subset\!\mathfrak M^{\tn{wt}}$ be the closed locus whose {\it general} point is obtained by attaching $k$ smooth positively-weighted rational curves to the smooth 0-weighted elliptic core at pairwise distinct points.
By Lemma~\rangle\!\rangleef{Lm:Blowup},
the blowup $\pi\!:\ti\mathfrak M^\tn{wt}\!\longrightarrow\!\mathfrak M^{\tn{wt}}$ successively along the proper transforms of $Z_1, Z_2, \ldots$ can be identified with the blowup of $\mathfrak M^{\tn{wt}}$ successively along the total transforms of
$$
Y_1\!=\! Z_1,\quad Y_2\!=\! Z_1\!\cup\!Z_2,\quad Y_3\!=\! Z_1\!\cup\!Z_2\!\cup\!Z_3,\quad\lambdangle\!\lambdangledots
$$
We observe that for each $k\!\in\!\mathbb Z_{>0}$,
the pullback $\varpi^{-1}(Y_k)$ to $\mtd{}$ is a Cartier divisor.
In fact, for any
$[\lambdangle\!\lambdanglet]\!=\![\gamma,\mathbf w,\ell]\!\in\!\mathscr T_\mathsf L^\tn{wt}$ and $x\!\in\!\mtd{[\lambdangle\!\lambdanglet]}$,
let $\mathfrak U\!\lambdangle\!\lambdangleongrightarrow\!\mtd{}$ be a twisted chart
centered at $x$, lying over a chart $\mathcal V\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$.
In \cite{HL10},
the blowup $\pi$ is proved, locally on $\mathcal V$, to be compatible with the weighted tree $(\ov\gamma,\ov\mathbf w)$ obtained by contracting every edge $e$ of $\gamma$ for which there exists $v\!\succeq\!v_e^+$ satisfying
$\mathbf w(v)\!>\!0$.
Let $\{\zeta_e\!:e\!\in\!\tn{Edg}(\gamma)\}$ be a set of modular parameters on~$\mathcal V$ as in~(\rangle\!\rangleef{Eqn:modular_parameters}) and
$$
\{\varepsilon_i\}_{i\in\mathbb I_+}\cup
\{u_e\}_{e\in\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\{\mathsf e_i:i\in\mathbb I_+\}}\cup
\{z_e\}_{e\in\mathbb I_-}
$$
be the subset of the parameters~(\rangle\!\rangleef{Eqn:ti_cV}) on $\mathfrak U$.
We claim that
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:Yk_Cartier_Div}
\varpi^{-1}(Y_k)\cap
\mathfrak U=
\big\{\prod_{
\betagin{subarray}{c}
i\in\mathbb I_+,\,
|\mathfrak E_i|\lambdangle\!\lambdanglee k
\end{subarray}}
\!\!\!\!\!\!\!
\varepsilon_i~\,\!=\! 0\,
\big\}.
\end{equation}
To show~(\rangle\!\rangleef{Eqn:Yk_Cartier_Div}),
we first notice that $\varpi^{-1}(Y_k)\!\cap\!
\mathfrak U\!=\!
\varpi^{-1}(Y_k\!\cap\!\mathcal V)$ by Corollary~\rangle\!\rangleef{Crl:MtdSmooth}.
Every irreducible component of $Y_k\!\cap\!\mathcal V$ can be written in the form
$$
Y_{k,\mathfrak S}\!:=\!
\{\zeta_e\!=\! 0: e\!\in\!\mathfrak S\}\quad
\textnormal{with}~\mathfrak S\!\in\!\Xi(\ov\gamma),~|\mathfrak S|\!\lambdangle\!\lambdanglee\!k,~
\mathfrak S\!\cap\!\big(\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\mathbb I_{{\mathbf{m}}}\big)\!\ne\!\emptyset.
$$
For each irreducible component $Y_{k,\mathfrak S}$ of $Y_k\!\cap\!\mathcal V$,
the local expression $\theta_x$ of $\varpi$ as in~(\rangle\!\rangleef{Eqn:theta_x}) implies the pullback $\varpi^{-1}(Y_{k;\mathfrak S})$ can be written as
\betagin{equation}\betagin{split}\lambdangle\!\lambdangleabel{Eqn:Yk_pullback}
&\big\{\,
\prod_{h\in\lambdangle\!\lambdanglebrp{\ell(e),\ell(v_e^+)}}\!\!\!\!\!\!\!\!\!
\varepsilon_h\;\!=\! 0:\,e\!\in\!\mathfrak S\!\cap\!(\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\mathbb I_{\mathbf{m}}),\\
&\hspace{.5in}
\big(u_e\cdot\!\!\!\!\!\!\!\!\!\prod_{h\in\lambdangle\!\lambdanglebrp{\ell(e),\ell(v_e^+)}}\!\!\!\!\!\!\!\!\!
\varepsilon_h~\big)\!=\! 0:\,e\!\in\!\mathfrak S\!\cap\!\mathbb I_{\mathbf{m}},
\quad
z_e\!=\! 0:\,
e\!\in\!\mathfrak S\!\cap\!\mathbb I_-\,\big\}.
\end{split}\end{equation}
Since $\mathfrak S\!\cap\!\big(\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\mathbb I_{{\mathbf{m}}}\big)\!\ne\!\emptyset$ and $|\mathfrak S|\!\lambdangle\!\lambdanglee\!k$,
we can always find $e\!\in\!\mathfrak S\!\cap\!\big(\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\mathbb I_{{\mathbf{m}}}\big)$ such that
$|\mathfrak E_{h}|\!\lambdangle\!\lambdanglee\!|\mathfrak S|\!\lambdangle\!\lambdanglee\!k$ for all $h\!\in\!\lambdangle\!\lambdanglebrp{\ell(e),\ell(v_e^+)}$.
By~(\rangle\!\rangleef{Eqn:Yk_Cartier_Div}) and~(\rangle\!\rangleef{Eqn:Yk_pullback}),
the pullback $\varpi^{-1}(Y_{k,\mathfrak S})$ is thus a sub-locus of the right-hand side of~(\rangle\!\rangleef{Eqn:Yk_Cartier_Div}).
Moreover, it is a direct check that
$$
\varpi^{-1}(Y_{k,\mathfrak E_i})\cap\mathfrak U=\{\varepsilon_i\!=\! 0\}\qquad
\forall\,i\!\in\!\mathbb I_+~\textnormal{with}~|\mathfrak E_i|\!\lambdangle\!\lambdanglee\! k.
$$
Therefore, (\rangle\!\rangleef{Eqn:Yk_Cartier_Div}) holds.
Since every $\varpi^{-1}(Y_k)$ is a Cartier divisor of $\mtd{}$,
by the universality of the blowup $\pi\!:\ti\mathfrak M^\tn{wt}\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$,
we obtain a unique morphism
$$
\tn{ps}i_1:\mtd{}\lambdangle\!\lambdangleongrightarrow\ti\mathfrak M^\tn{wt}
$$
that $\varpi\!:\mtd{}\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$ factors through.
We next construct $\tn{ps}i_2$ explicitly.
For any $\ti x\!\in\!\ti\mathfrak M^\tn{wt}$,
let $(C,\mathbf w)$ be its image in $\mathfrak M^{\tn{wt}}$.
As shown in~\cite[\S3.3]{HLN},
there exists a unique maximal sequence of exceptional divisors
$$
\ti E_{i_1},\lambdangle\!\lambdangledots,\ti E_{i_k}\subset\ti\mathfrak M^{\tn{wt}},\qquad
1\lambdangle\!\lambdanglee i_1<\cdots<i_k
$$
containing $\ti x$.
Each $\ti E_{i_j}$ is obtained from blowing up along the proper transform of $Z_{i_j}$.
Note that $k$ is possibly 0, which means $(C,\mathbf w)$ is not in the blowup loci.
The weighted dual tree $\tau\!=\!(\gamma_C,\mathbf w)$, along with the exceptional divisors $\ti E_{i_1},\lambdangle\!\lambdangledots,\ti E_{i_k}$,
uniquely determines a weighted level tree $\lambdangle\!\lambdanglet_{\ti x}$ such that
$$
\mathbb I_+=\mathbb I_+(\lambdangle\!\lambdanglet_{\ti x})=\{-i_k,\lambdangle\!\lambdangledots,-i_1\}.
$$
In particular, ${\mathbf{m}}\!=\!{\mathbf{m}}(\lambdangle\!\lambdanglet_{\ti x})\!=\! -i_k$.
With the line bundles $L_e$, $e\!\in\!\tn{Edg}(\lambdangle\!\lambdanglet_{\ti x})$, as in~(\rangle\!\rangleef{Eqn:Le}),
the notation $\lambdangle\!\lambdangleprp{\cdot,\cdot}_{\lambdangle\!\lambdanglet_{\ti x}}$ and $\lambdangle\!\lambdanglebrp{\cdot,\cdot}_{\lambdangle\!\lambdanglet_{\ti x}}$ as in~(\rangle\!\rangleef{Eqn:lrbr}),
and the notation $i[h]$ as in~(\rangle\!\rangleef{Eqn:i[h]}),
the line bundles
\betagin{align}\lambdangle\!\lambdangleabel{Eqn:fL_i}
\mathfrak L_i = L_{\mathsf e_i}\!\otimes
\!\!\!\bigotimes_{j\in\lambdangle\!\lambdangleprp{\,i,\,i[1]\,}_{\lambdangle\!\lambdanglet_{\ti x}}}\!\!\!\!\mathfrak L_j^\varepsilone
\lambdangle\!\lambdangleongrightarrow\mathfrak M^{\tn{wt}}_{{\tau}},\qquad
i\!\in\!\mathbb I_+,
\end{align}
can be constructed inductively over $\mathbb I_+$.
Then, we take
\betagin{align}\lambdangle\!\lambdangleabel{Eqn:fL_e}
\mathfrak L_e = L_{e}\otimes\!\!\!\!
\bigotimes_{j\in \lambdangle\!\lambdangleprp{\,\ell(e),\,\ell(v_e^+)\,}_{\lambdangle\!\lambdanglet_{\ti x}}}
\!\!\!\!\!\!\!\!\!\mathfrak L_j^\varepsilone\lambdangle\!\lambdangleongrightarrow\mathfrak M^{\tn{wt}}_{\tau},\qquad
e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet_{\ti x}).
\end{align}
In particular, $\mathfrak L_{\mathsf e_i}\!=\!\mathfrak L_i$.
For each $e\!\in\!\tn{Edg}(\lambdangle\!\lambdanglet_{\ti x})$,
(\rangle\!\rangleef{Eqn:fL_i}) and (\rangle\!\rangleef{Eqn:fL_e}) imply
\betagin{equation*}
\mathfrak L_e
\!\otimes\!
\bigotimes_{e'\succ e}\!
(\mathfrak L_{e'}
\!\otimes\!
\mathfrak L_{\ell(e')}^\varepsilone
)
=
L_e^\succeq
\otimes
\big(\bigotimes_{\!\!
\betagin{subarray}{c}
j\in\lambdangle\!\lambdangleprp{\,\ell(e),\,0\,}_{\lambdangle\!\lambdanglet_{\ti x}}
\end{subarray}}\!\!\!\!\!\!\!\mathfrak L_j^\varepsilone\,\big).
\end{equation*}
Hence for each $i\!\in\!\mathbb I_+$,
\betagin{equation}\lambdangle\!\lambdangleabel{Eqn:blowup->tf}
\mathring\mathbb P\Big(\bigoplus_{
\betagin{subarray}{c}
\ell(v_e^-)=i
\end{subarray}
}\!\!\!\!
\big(\mathfrak L_e
\!\otimes\!
\bigotimes_{e'\succ e}\!
(\mathfrak L_{e'}
\!\otimes\!
\mathfrak L_{\ell(e')}^\varepsilone
)
\big)
\Big)
\!=
\mathring\mathbb P\big(\bigoplus_{
\betagin{subarray}{c}
\ell(v_e^-)=i
\end{subarray}
}\!\!\!L_e^\succeq\,\big).
\end{equation}
For $h\!\ge\!1$,
let $\ti x_{(h)}$ be the image of $\ti x$ in the exceptional divisor of the $h$-th step.
Given $i\!\in\!\mathbb I_+$,
the proper transform of $Z_{-i}$ after the first $-i\!-\!1$ steps of the blowup may have several connected components;
see~\cite[Lemma 3.3.2]{HLN}.
The normal bundle of the component containing $\ti x_{(-i-1)}$ is the pullback
$\pi_{(-i-1)}^*\bigoplus_{e\in\mathfrak E_{i}}\!\!\mathfrak L_e$,
where $\pi_{(h)}\!:\ti\mathfrak M^\tn{wt}_{(h)}\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$ is the blowup after the $h$-th step.
Notice that the non-zero entries of $\ti x_{(-i)}$ exactly correspond to the edges $e\!\in\!\mathfrak E_{i}$ satisfying $\ell(v_e^-)\!=\! i$.
Therefore,
$$
\ti x_{(-i)}\,\in\,
\pi^*_{(-i-1)}\mathring\mathbb P\big(\!\bigoplus_{
\betagin{subarray}{c}
\ell(v_e^-)=i
\end{subarray}
}\!\!\!\mathfrak L_e\,\big).
$$
Then, $\ti x_{(-j)}$ with $j\!\in\!\lambdangle\!\lambdanglebrp{i,0}_{\lambdangle\!\lambdanglet_{\ti x}}$ together determine a unique
\betagin{equation*}\lambdangle\!\lambdangleabel{Eqn:blowup_intermediate}
\etaa_i(\ti x)\in
\mathring\mathbb P\Big(\bigoplus_{
\betagin{subarray}{c}
\ell(v_e^-)=i
\end{subarray}
}\!\!\!\!
\big(\mathfrak L_e
\!\otimes\!
\bigotimes_{e'\succ e}\!
(\mathfrak L_{e'}
\!\otimes\!
\mathfrak L_{\ell(e')}^\varepsilone
)
\big)
\Big)
=
\mathring\mathbb P\big(\bigoplus_{
\betagin{subarray}{c}
\ell(v_e^-)=i
\end{subarray}
}\!\!\!L_e^\succeq\,\big).
\end{equation*}
The last equality above follows from~(\rangle\!\rangleef{Eqn:blowup->tf}).
We then set
\betagin{equation*}
\tn{ps}i_2(\ti x)=
\Big((C,\mathbf w),[\lambdangle\!\lambdanglet_{\ti x}],
\big(\etaa_i(\ti x):i\!\in\!\mathbb I_+(\lambdangle\!\lambdanglet_{\ti x})\big)
\Big)\quad
\in\mtd{[\lambdangle\!\lambdanglet_{\ti x}]}.
\end{equation*}
Obviously, this implies $\varpi\!\circ\!\tn{ps}i_2\!=\!\pi$.
It remains to verify that the map $\tn{ps}i_2$ thus defined is indeed a morphism.
Let $\mathcal V\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$ be a smooth chart containing $(C,\mathbf w)$,
and $\{\zeta_e\!:e\!\in\!\tn{Edg}(\lambdangle\!\lambdanglet_{\ti x})\}\!\sqcup\!\{\varsigma_j\!:j\!\in\! J\}$ be a set of local parameters centered at $(C,\mathbf w)$ as in~(\rangle\!\rangleef{Eqn:local_parameters}).
As shown in~\cite[\S3.1\&\S3.3]{HLN},
there exists a chart $\ti\mathcal V_{\ti x}\!\lambdangle\!\lambdangleongrightarrow\!\ti\mathfrak M^\tn{wt}$ containing $\ti x$ with local parameters
\betagin{gather*}
\ti\varepsilon_i,~i\!\in\!\mathbb I_+;\qquad
\rangle\!\rangleho_e,~e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet_{\ti x})\backslash\big(\mathbb I_{\mathbf{m}}\!\sqcup\!\{\mathsf e_i\!:i\!\in\!\mathbb I_+\}\big);\\
\wc z_e,~e\!\in\!\mathbb I_{\mathbf{m}};\qquad
\ti z_e,~e\!\in\!\mathbb I_-;\qquad
s_j,~j\!\in\! J.
\end{gather*}
All $\rangle\!\rangleho_e$ are nowhere vanishing on $\ti\mathcal V_{\ti x}$.
Moreover,
with
$
\pi\!:\ti\mathcal V_{\ti x}\!\lambdangle\!\lambdangleongrightarrow\!\mathcal V
$
denoting the blowup,
we have
\betagin{gather*}
\pi^*\zeta_{\mathsf e_i}\!\!=\!\!\!\!\!\!\prod_{h\in\lambdangle\!\lambdanglebrp{i,i[1]}_{\lambdangle\!\lambdanglet_{\ti x}}}\!\!\!\!\!\!\!
\ti\varepsilon_h\ \ \forall\,i\!\in\!\mathbb I_+;
\quad
\pi^*\zeta_e\!=\!\rangle\!\rangleho_e\!\!\!\!\!\!\!\!\prod_{i\in\lambdangle\!\lambdanglebrp{\ell(e),\ell(v_e^+)}_{\lambdangle\!\lambdanglet_{\ti x}}}\!\!\!\!\!\!\!\!\!\!\!\ti\varepsilon_i\ \ \;\forall\,e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet_{\ti x})\backslash\big(\mathbb I_{\mathbf{m}}\!\sqcup\!\{\mathsf e_i\}_{i\in\mathbb I_+}\big);
\\
\pi^*\zeta_e\!=\!\wc z_e\!\!\!\!\!\!\!\!\prod_{i\in\lambdangle\!\lambdanglebrp{\ell(e),\ell(v_e^+)}_{\lambdangle\!\lambdanglet_{\ti x}}}\!\!\!\!\!\!\!\!\!\!\!\ti\varepsilon_i\ \ \forall\,e\!\in\!\mathbb I_{\mathbf{m}};\qquad
\pi^*\zeta_e\!=\!\ti z_e\ \ \forall\,e\!\in\!\mathbb I_-;\qquad
\pi^*\varsigma_j\!=\! s_j\ \ \forall\,j\!\in\! J.
\end{gather*}
For $e\!\in\!\{\mathsf e_i\}_{i\in\mathbb I_+}$, we set $\rangle\!\rangleho_e\!=\! 1$. Then,
$$
\rangle\!\rangleho_e
\in\Gamma\big(\mathscr O_{\ti\mathcal V_{\ti x}}^*\big)\qquad
\forall\
e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet_{\ti x})\backslash\mathbb I_{\mathbf{m}}.
$$
Let $\mathfrak U_{\tn{ps}i_2(\ti x)}\!\lambdangle\!\lambdangleongrightarrow\!\mtd{}$ be a twisted chart centered at $\tn{ps}i_2(\ti x)$, lying over $\mathcal V\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$.
The parameters on $\mathfrak U_{\tn{ps}i_2(\ti x)}$ are as in~(\rangle\!\rangleef{Eqn:local_parameters}).
It is a direct check that the point-wise defined $\tn{ps}i_2$ can locally be written as
$$
\tn{ps}i_2:\ti\mathcal V_{\ti x}\lambdangle\!\lambdangleongrightarrow \mathfrak U_{\tn{ps}i_2(\ti x)}
$$
such that
\betagin{gather*}
\tn{ps}i_2^*\varepsilon_i\!=\! \ti\varepsilon_i\ \ \forall\,i\!\in\!\mathbb I_+;\qquad
\tn{ps}i_2^*u_e\!=\!
\frac{\prod_{\mathfrak e\succeq e}\rangle\!\rangleho_{\mathfrak e}}
{\prod_{\mathfrak e\succeq \mathsf e_{\ell(e)}}\!\rangle\!\rangleho_{\mathfrak e}}\ \ \forall\,e\!\in\!\wh\tn{Edg}(\lambdangle\!\lambdanglet)\backslash\big(\mathbb I_{\mathbf{m}}\!\sqcup\!\{\mathsf e_i\}_{i\in\mathbb I_+}\!\big);
\\
\tn{ps}i_2^*u_e\!=\!\wc z_e\!\cdot\!\frac{\prod_{\mathfrak e\succ e}\rangle\!\rangleho_{\mathfrak e}}
{\prod_{\mathfrak e\succeq \mathsf e_{\ell(e)}}\!\rangle\!\rangleho_{\mathfrak e}}\ \ \forall\,e\!\in\!\mathbb I_{\mathbf{m}};\quad
\tn{ps}i_2^*z_e\!=\!\ti z_e\ \ \forall\,e\!\in\!\mathbb I_-;\quad
\tn{ps}i_2^*w_j\!=\! s_j\ \ \forall\,j\!\in\! J.
\end{gather*}
This shows $\tn{ps}i_2\!:\ti\mathfrak M^\tn{wt}\!\lambdangle\!\lambdangleongrightarrow\!\mtd{}$ is a morphism.
\end{proof}
\betagin{rmk}
In~\cite{HL11},
another resolution $\ti\mathfrak M^{\textnormal{dr}}\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$,
called the \textsf{derived resolution} of $\mathfrak M^{\tn{wt}}$,
is constructed for the purpose of diagonalizing certain direct image sheaves.
That resolution is ``smaller'' in that the resolution $\ti\mathfrak M^{\tn{wt}}\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$ of~\cite{HL10} factors through $\ti\mathfrak M^{\textnormal{dr}}\!\lambdangle\!\lambdangleongrightarrow\!\mathfrak M^{\tn{wt}}$.
Mimicking the approach of \S3,
we may construct a moduli stack
$$
\mathfrak N=\bigsqcup_{[\lambdangle\!\lambdanglet]\in\mathscr T_\mathsf L^\tn{wt}}\!\!\!\mathfrak N_{[\lambdangle\!\lambdanglet]},\qquad
\mathfrak N_{[\lambdangle\!\lambdanglet]}=
\mathring\mathbb P\Big(\!
\bigoplus_{
\betagin{subarray}{c}
e\in\tn{Edg}(\lambdangle\!\lambdanglet),\,
\ell(v_e^-)={\mathbf{m}}(\lambdangle\!\lambdanglet)
\end{subarray}
}\!\!\!\!\!\!\!\!\!\!\!\!\!\!
L_e^\succeq\ \ \Big)
\lambdangle\!\lambdangleongrightarrow\mathfrak M^{\tn{wt}}_{\mathfrak f[\lambdangle\!\lambdanglet]}.
$$
This moduli stack should be isomorphic to $\ti\mathfrak M^{\textnormal{dr}}$.
\end{rmk}
\betagin{thebibliography}{99}
\bibitem{BCGGM} M.~Bainbridge, D.~Chen, Q.~Gendron, S.~Grushevsky, and M.~M\"oller,
{\it Compactification of strata of Abelian differentials},
Duke Math.~J. 167 (2018), no. 12, 2347--2416.
\bibitem{HM} J.~Harris and I.~Morrison, {\it Moduli of curves}, Graduate Texts in Math. vol. 187, Springer-Verlag, New York, 1998.
\bibitem{Hironaka64a}
H.~Hironaka,
{\it Resolution of singularities of an algebraic variety over a field of characteristic zero. I},
Ann. of Math., 2, 79 (1), (1964) 109--203.
\bibitem{Hironaka64b} H.~Hironaka,
{\it Resolution of singularities of an algebraic variety over a field of characteristic zero. II},
Ann. of Math., 2, 79 (1), (1964) 205--326.
\bibitem{HL10} Y.~Hu and J.~Li, {\it Genus-one stable maps, local
equations and Vakil-Zinger's desingularization}, Math. Ann. 348 (2010), no. 4, 929--963.
\bibitem{HL11} Y.~Hu and J.~Li, {\it Derived resolution property for stacks, Euler classes and applications},
Math. Res. Lett. 18 (2011), no. 04, 677--690.
\bibitem{HLN} Y.~Hu, J.~Li, and J. Niu, {\it
Genus two stable maps, local equations and modular resolutions},
arXiv:1201.2427v3.
\bibitem{HN2} Y.~Hu and J. Niu, {\it A theory of stacks with twisted fields
and resolution of moduli of genus two stable maps},
arXiv:2005.03384.
\bibitem{deJong96} A.~de~Jong,
{\it Smoothness, semi-stability and alterations,}
Publ. Math. IHES 83, (1996) 51-93.
\bibitem{K07} J.~Koll\'ar, {\it Lectures on resolution of singularities,}
Annals of Mathematics Studies 166, (2007).
\bibitem{RSPW} D.~Ranganathan, K.~Santos-Parker, and J.~Wise,
{\it Moduli of stable maps in genus one \& logarithmic geometry I},
arXiv:1708.02359.
\bibitem{V} R.~Vakil, {\it Murphy's law in algebraic geometry: Badly-behaved deformation spaces}, Invent.~Math.~164 (2006), no. 3, 569--590.
\bibitem{VZ08} R.~Vakil and A.~Zinger,
{\it A desingularization of the main component of the moduli space
of genus-one stable maps into $\mathbb P^n$}, Geom. Topol. 12 (2008),
no. 1, 1--95.
\end{thebibliography}
\end{document}
\begin{document}
\begin{frontmatter}
\title{Asymptotics of the maximal radius of an $L^r$-optimal sequence
of quantizers}
\runtitle{Maximal radius of quantizers}
\begin{aug}
\author{\fnms{Gilles} \snm{Pag\`{e}s}\corref{}\thanksref{e1}\ead[label=e1,mark]{[email protected]}} \and
\author{\fnms{Abass} \snm{Sagna}\thanksref{e2}\ead[label=e2,mark]{[email protected]}}
\runauthor{G. Pag\`{e}s and A. Sagna}
\address{Laboratoire de Probabilit\'{e}s et Mod\`{e}les al\'{e}atoires,
UMR~7599,
Universit\'{e} Paris 6, case 188, 4, pl. Jussieu, F-75252 Paris Cedex
5, France.
\printead{e1,e2}}
\end{aug}
\received{\smonth{7} \syear{2010}}
\begin{abstract}
Let $P$ be a probability distribution on $\mathbb{R}^d$ (equipped with
a Euclidean norm
$\vert\cdot\vert$). Let $ r> 0 $ and let $(\alpha_n)_{n \geq1}$ be an
(asymptotically) $L^r(P)$-optimal sequence of $n$-quantizers.
We investigate the asymptotic behavior of the maximal radius sequence
induced by the sequence $(\alpha_n)_{n \geq1}$ defined for every
$n \geq1$ by $\rho(\alpha_n) = \max\{\vert a \vert, a \in\alpha_n \}$.
When $\operatorname{card}(\operatorname{supp}(P))$ is infinite, the maximal radius sequence
goes to
$\sup\{ \vert x \vert, x \in\operatorname{supp}(P) \}$ as $n$ goes to infinity.
We then give the exact rate of convergence for two classes of
distributions with
unbounded support: distributions with hyper-exponential tails and
distributions with polynomial tails.
In the one-dimensional setting, a sharp rate and constant are provided
for distributions with
hyper-exponential tails.
\end{abstract}
\begin{keyword}
\kwd{distribution tail}
\kwd{function with regular variation}
\kwd{maximal radius of a quantizer}
\kwd{optimal quantization}
\kwd{Zador theorem}
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{sec1}
The aim of this paper (which is a part of the second author's Ph.D.
thesis \cite{Sag1}) is to provide some precise upper and lower
bounds for the radius of a sequence of quantizers of an $\mathbb
{R}^d$-valued random vector. Our motivation is that it is a first
attempt toward the elucidation of the geometric structure of an optimal
quantizer in higher dimension.
Quantization has become an important field of information theory since
the early $1940$'s. Nowadays, it plays an important role in digital
signal processing (DSP), the basis of many areas of technology, from
mobile phones to modems and multimedia PCs. In DSP, vector quantization
is the process of approximating a continuous range of values or a very
large set of discrete values by a relatively small set of discrete
values. A common use of quantization is the conversion of a continuous
signal into a digital signal. This is performed in analog-to-digital
converters with a given quantization level.
Recently, optimal vector quantization has become a promising tool in
numerical probability: it is an efficient method to produce grids
optimally fitted to the distribution of a random vector~$X$. This leads
to some cubature formulas that may approximate either expectations (see
\cite{Pag}) or, more significantly, conditional expectations (see
\cite{PagPhaPri1}). This ability to approximate conditional
expectations is the key property called upon in the quantization-based
numerical schemes used to solve some problems arising in finance,
including optimal stopping problems (pricing and hedging American-style
options, see \cite{BalPag,BalPagPri}), the pricing of swing options
(see \cite{BarBouPag,BarBouPag1}), stochastic control problems (see
\cite{CorPhaRun,PagPhaPri}) for portfolio management and nonlinear
filtering (see \cite{PagPha,PhaRunSel}). Other applications, like
some new schemes for the discretization of Zakai and McKean--Vlasov
equations, have also been investigated (see \cite{GobPagPhaPri}).
At this stage, we need to recall some basic facts on optimal
quantization. At this level of generality, we just assume that $\mathbb
{R}^d$ is endowed with a norm $|\cdot |$, possibly not Euclidean.
Let $X \in L^r(\Omega,\mathcal{A},\mathbb{P})$ be an $\mathbb
{R}^d$-valued random vector with distribution $P=\mathbb{P}_X$. The
$L^r(P)$-optimal quantization problem at level $n$ for $X$ consists in
finding the best approximation of $X$ by $q(X)$ for the $L^r(\mathbb
{P})$-norm, where $q$ is a Borel function taking at most $n$ values.
This leads to the following minimization problem:
\[
\inf\{ \Vert X - q(X) \Vert_r, q\dvtx \mathbb{R}^d \stackrel{\mathrm
{Borel}}{\longrightarrow} \mathbb{R}^d, \operatorname{card}(q(\mathbb{R}^d))
\leq n \},
\]
where $\operatorname{card}(\alpha)$ stands for the cardinality of $\alpha$. The
solution, $e_{n,r}(X),$ of the previous problem is called the
$L^{r}$-optimal mean quantization error induced by $X$ (at level $n$).
Note that, in fact, $e_{n,r}(X)$ only depends on the distribution of
$X$ so that we will occasionally use the notation $e_{n,r}(P)$.
However, for every Borel function $q\dvtx \mathbb{R}^d \rightarrow\alpha$,
\mbox{$\alpha\subset\mathbb{R}^d, \operatorname{card} (\alpha) \leq n$}, we have
\[
\vert X -q(X) \vert\geq d(X,\alpha) := \mathop{\min}_{a\in\alpha} \vert X - a
\vert \qquad\mathbb{P} \mbox{-a.s.}
\]
Consider $\alpha\subset\mathbb{R}^d$ with $\operatorname{card} (\alpha) \leq
n$ (called an $n$-quantizer). Let $(C_a(\alpha))_{a\in\alpha}$ be a
Voronoi partition of $\mathbb{R}^d$ (with respect to the norm $\vert
\cdot\vert$), that is, a Borel partition of $\mathbb{R}^d$ satisfying
for every $a \in\alpha$,
\[
C_a(\alpha) \subset\Bigl\{ x \in\mathbb{R}^d\dvt \vert x-a \vert= \min_{b
\in\alpha} \vert x-b \vert\Bigr\}
\]
and let $ \widehat{X}^{\alpha} = \sum_{a\in\alpha} a \mathbf{1}_{\{
X \in C_a(\alpha)\}}$. Then $ \widehat{X}^{\alpha}$ is a projection on
$\alpha$ following the nearest neighbor rule and satisfying $\vert X -
\widehat{X}^{\alpha} \vert= d(X,\alpha)$ so that one also has
\begin{eqnarray} \label{er.quant}
e_{n,r}(X) &=& \mathop{\mathop{\inf}_{{\alpha\subset\mathbb{R}^d }}}_
{\operatorname{card}(\alpha) \leq n} \biggl(\int_{\mathbb{R}^d} d(x,\alpha)^r P(\mathrm{d}x)
\biggr)^{1/r}\nonumber
\\[-8pt]
\\[-8pt]
&=&\inf \{ (\mathbb{E} \vert X -
\widehat{X}^{\alpha} \vert^r )^{1/r}, \alpha\subset\mathbb{R}^d,
\operatorname{card}(\alpha) \leq n \}.
\nonumber
\end{eqnarray}
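For readers who wish to experiment numerically, the following short Python sketch (an illustration added here, not part of the original text; the distribution, the grid and the sample size are arbitrary placeholders) implements the nearest-neighbour rule $\widehat{X}^{\alpha}$, a Monte Carlo estimate of the distortion $\mathbb{E}\vert X-\widehat{X}^{\alpha}\vert^r$ and the maximal radius $\max\{\vert a\vert, a\in\alpha\}$ studied in this paper.
\begin{verbatim}
# Illustration only: nearest-neighbour projection onto a finite grid alpha,
# Monte Carlo estimate of the L^r distortion E|X - X_hat|^r, and the
# maximal radius rho(alpha) = max_{a in alpha} |a|.
# The grid below is an arbitrary placeholder, not an optimal quantizer.
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 2, 2.0, 50
X = rng.standard_normal((20_000, d))        # samples of X ~ N(0, I_d)
alpha = rng.standard_normal((n, d))         # an arbitrary n-quantizer

dist = np.linalg.norm(X[:, None, :] - alpha[None, :, :], axis=2)
X_hat = alpha[dist.argmin(axis=1)]          # nearest-neighbour rule
distortion = np.mean(np.linalg.norm(X - X_hat, axis=1) ** r)
rho = np.max(np.linalg.norm(alpha, axis=1)) # maximal radius rho(alpha)
print(distortion ** (1 / r), rho)
\end{verbatim}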
For every $n \geq1$, the infimum in (\ref{er.quant}) holds as a
(finite) minimum {attained by (at least) one so-called \textit{$L^r(P)$-optimal $n$-quantizer} $\alpha^{\star}$ (see, e.g., \cite
{Pag}, Proposition 11 or \cite{GraLus}, Theorem 4.1), also called,
especially when dealing with numerical applications, \textit{the optimal
$n$-grid}. A sequence of $n$-quantizers $ (\alpha_n)_{n \geq1} $ is
$L^r(P)$-optimal if, for every $n \geq1$, $\alpha_n$ is
$L^r(P)$-optimal. A sequence $ (\alpha_n)_{n \geq1} $ is \textit{asymptotically $L^r(P)$-optimal} if
\[
\int_{\mathbb{R}^d}d(x,\alpha_n)^r P(\mathrm{d}x) = e_{n,r}^r(X) +
\mathrm{o}(e_{n,r}^r(X)) \qquad \mbox{as } n \rightarrow \infty
\]
($f(x) = \mathrm{o}(g(x))$, as $x \rightarrow\infty$, if $f(x)=\epsilon(x)
g(x)$ with $\lim_{x \rightarrow\infty} \epsilon(x) =0$ for two
$\mathbb R$-valued functions $f$ and $g$).} Moreover, the
$L^r(P)$-optimal mean quantization error $e_{n,r}(X)$ decreases to $0$
as $n$ goes to infinity. As soon as $X$ has a finite $r'$-moment for
some $r' >r$, its rate of convergence to $0$ is ruled by the so-called
Zador theorem.
\begin{theo}[(Zador theorem, see \cite{GraLus,BucWis,ZAD})] Let $X \in
L^{r'}(\mathbb{P})$ for an $r'>0$, with distribution $P=f \lambda_d+
P_s$ (where $P_s$ denotes the singular part of $P$ with respect to
$\lambda_d$). Then,
\begin{equation}\label{Zadorrate}
\forall r \in(0,r') \qquad\lim_{n} n^{r/d} (e_{n,r}(P))^r = Q_r(P),
\end{equation}
where
\begin{equation}\label{QrP}
Q_r(P) = J_{r,d} \biggl( \int_{\mathbb
{R}^d} f^{\fracc{d}{d+r}}\,\mathrm{d}\lambda_d \biggr)^{\fracb{d+r}{d}} = J_{r,d}
\Vert f \Vert_{\fracc{d}{d+r}} \in[0,+\infty),
\end{equation}
with
\[
J_{r,d} = \inf_{n \geq1} n^{r/d}
e_{n,r}^r(U([0,1]^d)) \in(0,+\infty)
\]
and $U([0,1]^d)$ stands for the uniform distribution on $[0,1]^d$.
\end{theo}
Note that $\mathbb{E}\vert X \vert^{r'} < + \infty$ implies $\Vert f
\Vert_{\fracc{d}{d+r}} <+\infty$ and that $J_{r,d} $ depends upon the
norm $|\cdot |$ on~$\mathbb{R}^d$.
Let us come back to our topic of interest, that is, the asymptotic
behavior of the radii of a sequence $(\alpha_n)_{n \geq1}$ of
$L^r$-optimal quantizers. The maximal radius (or simply radius) $\rho
(\alpha)$ of a quantizer $\alpha\subset\mathbb{R}^{d}$ is defined by
\[
\rho(\alpha) = \max\{ \vert a \vert, a \in\alpha \}.
\]
In a one-dimensional setting ($d=1$), one can define the \textit{one-sided} (right) radius of $\alpha$ by removing absolute values in
the above definition. The one-sided left radius is defined as the
opposite of the right radius of $-\alpha$ viewed as a quantizer of $-X$.
From now on, $\vert\cdot\vert$ will denote a Euclidean norm on
$\mathbb{R}^d$, except where explicitly stated otherwise. Except in ambiguous
cases, we will write $(\rho_n)_{n \geq1}$ for the sequence $(\rho
(\alpha_n))_{n \geq1}$ of radii of $(\alpha_n)_{n \geq1}$.
We will first show that, if the support of $P$, denoted $\operatorname{supp}(P)$, is unbounded, then
$ \lim_{n \rightarrow+\infty}
\rho_n = +\infty$ (when $d = 1$, the sequence of one-sided right
radii goes to infinity as soon as $\sup\operatorname{supp}(P) = +\infty$).
The key inequalities to get the upper and lower estimates of the
maximal radius sequence are provided in Theorems~\ref
{prop_princip_limsup} and~\ref{prop_gen_liminf}. In these theorems, we
point out the close connection between the asymptotics of $\rho_n$ and
the generalized survival function of $X$ defined on $\mathbb{R}_+ :=
[0,+\infty)$ by $\bar{F}_r(\xi) = \mathbb{E}(\vert X \vert^r
\mathbf{1} _{\{ \vert X \vert>\xi\}})$. The regular variation index will
play an important role, since we derive the asymptotic behavior of
$\rho_n$ (or of $\log\rho_n$) from that of the
function $-\log\bar{F}_r$, viewed as a regularly varying function. We present
below two typical results obtained for important families of
(essentially radial) distributions: a sharp rate for $\log\rho_n$ for
distributions with polynomial tails and an exact rate for $\rho_n$ for
distributions with hyper-exponential tails (also made sharp when $d =
1$ and $r \ge 1$).
\begin{theo}\label{1.2} Let $P= f \lambda_d$.
\begin{longlist}[(b)]
\item[(a)] \textup{Polynomial tail.} If there exist $K>0$, $\beta
\in\mathbb{R}$, $c> r+d$ and a real number $A>0$ such that
\[
\forall x \in\mathbb{R}^d \qquad|x|\ge A \quad \Longrightarrow \quad f(x) =K
\frac{({\log}\vert x \vert)^{\beta}}{\vert x \vert^c},
\]
then
\[
\lim_{n} \frac{\log\rho_n}{\log n} = \frac{1}{c-r-d} \frac{r+d}{d}.
\]
\item[(b)] \textup{Hyper-exponential tail.} If there exist $K>0$,
$\kappa$, $\vartheta>0$, $c \in\mathbb{R}$ and a real number $A>0$
such that
\[
\forall x \in\mathbb{R}^d \qquad |x|\ge A \quad \Longrightarrow \quad f(x) =K
\vert x \vert^{c} \mathrm{e}^{- \vartheta \vert x \vert^{\kappa}},
\]
then
\[
\vartheta^{-1/ \kappa} \biggl (1+\frac{r}{d}
\biggr)^{1/\kappa} \leq \liminf_{n} \frac{\rho_n}{ ( \log n)^{1/\kappa}}
\leq \limsup_{n} \frac{\rho_n}{ (\log n)^{1/\kappa}} \leq2 \vartheta
^{-1/ \kappa} \biggl (1+\frac{r}{d} \biggr)^{1/\kappa}.
\]
Furthermore, if $d=1$ and $r \geq1$,
\[
\lim_{n} \frac{\rho_n}{ (\log n)^{1/\kappa}}= \vartheta^{-1/ \kappa}
( 1+r )^{1/\kappa}.
\]
\item[(c)] If $f$ has a one-sided polynomial or hyper-exponential
tail, say on $\mathbb{R}_+$, then the maximal radius sequence satisfies
the above asymptotic bounds.
\end{longlist}
\end{theo}
\begin{remarks*}
$\bullet$ Note that the Euclidean norm
appearing in the statement of the above theorem needs to be the one
used to define the radius and the distance between the random vector
and the quantizer. If $X$ has a ${\mathcal N}(0,I_d)$ distribution, this
norm is the canonical one. As concerns the ${\mathcal N}(0,\Sigma)$
distribution, the ``reference'' Euclidean norm is $|\cdot |_{\Sigma
^{-1}}$ induced by the inverse $\Sigma^{-1}$ ($\vert x \vert^2_{\Sigma
^{-1}} := x' \Sigma^{-1} x$ for a (column) vector $x \in\mathbb R^{d}$
with $x'$ standing for the transpose of $x$). To derive asymptotic
bounds from such results for the radius \textit{measured in the canonical
Euclidean norm} one needs to use the strong equivalence of the norms,
namely $\frac{1}{\lambda_{\Sigma,\max}}|\cdot |\le|\cdot |_{\Sigma
^{-1}}\le\frac{1}{\lambda_{\Sigma, \min}}|\cdot |$, where $\lambda
_{\Sigma,\max}$ and $\lambda_{\Sigma,\min}$ are the maximum and the
minimum eigenvalues of $\Sigma$, respectively.
$\bullet$ Note that as concerns asymptotic lower estimates,
we propose in Section~\ref{loweriid} an alternative approach based on
random quantization.
\end{remarks*}
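As a purely numerical companion to Theorem~\ref{1.2}(b) (an illustrative sketch added here, not part of the original analysis; it assumes \texttt{numpy} and \texttt{scipy} are available), one can compute the $L^2$-optimal $n$-quantizer of the one-dimensional standard Gaussian by Lloyd's fixed-point iteration, which only uses the closed-form conditional mean of a truncated normal, and monitor the ratio $\rho_n/(\log n)^{1/2}$. Here $\vartheta=1/2$, $\kappa=2$ and $r=2$, so the predicted limit is $\sqrt{6}\approx2.449$; since the Gaussian density is log-concave, the $L^2$-optimal quantizer at each level is unique, but the logarithmic rate means that the ratio remains far from its limit at any computationally accessible $n$.
\begin{verbatim}
# Illustration only: Lloyd's fixed-point iteration for the L^2-optimal
# n-quantizer of N(0,1) (closed-form truncated-normal cell means),
# then the maximal radius rho_n and the ratio rho_n / sqrt(log n).
import numpy as np
from scipy.stats import norm

def lloyd_normal(n, n_iter=3000):
    a = norm.ppf((np.arange(n) + 0.5) / n)     # quantile initialization
    for _ in range(n_iter):
        b = np.concatenate(([-np.inf], 0.5 * (a[:-1] + a[1:]), [np.inf]))
        num = norm.pdf(b[:-1]) - norm.pdf(b[1:])   # E[X 1_{X in cell}]
        den = norm.cdf(b[1:]) - norm.cdf(b[:-1])   # P(X in cell)
        a = num / den                              # Voronoi cell means
    return a

for n in (50, 200, 1000):
    a = lloyd_normal(n)
    rho_n = np.abs(a).max()
    print(n, rho_n, rho_n / np.sqrt(np.log(n)))    # predicted limit: sqrt(6)
\end{verbatim}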
The paper is organized as follows. We first give, as a preliminary
result, the limit of the maximal radius for distributions supported by
a set of infinite cardinality. Section~\ref{sec2} is devoted to the upper
estimate of the maximal radius based on the asymptotic estimates of
survival functions of~$X$. Section~\ref{sec3} is devoted to the lower limit
where our results are obtained by two different methods -- one still
based on survival functions and one based on mean random quantization.
In both cases, we strongly rely on recent results obtained in \cite
{GraLusPag} about the $L^s$-behaviour of $L^r$-optimal quantizers when
$r<s<r+d$.
\begin{NA*} For every $r\ge0$, we
define $ L^{r+}(\mathbb{P}) = {\bigcup}_{\varepsilon>0}L^{r+\varepsilon
}(\mathbb{P})$ and the generalized $r$-survival function $\bar{F}_r(\xi
) = \mathbb{E} (\vert X \vert^r \mathbf{1} _{\{ \vert X \vert>
\xi\}} )$ of a random vector $X \in L^r(\mathbb{P})$. Note that
$\bar{F}_r$ is defined on $\mathbb{R}_{+}$ and takes values in
$[0,\mathbb{E} \vert X \vert^r]$. $\bar{F}_0$ is the regular survival
function, denoted $\bar{F}$.
Let $A\subset\mathbb{R}^d$. $\overline{A}$ will stand for its closure,
$\partial A$ for its boundary, $\operatorname{Conv}(A)$ for its convex hull,
$\accentset{\circ}{A}$ for its interior and $A^{c}$ for its complement.
$[x]$ will denote the integer part of $x \in\mathbb{R}$.
$B(x,r)$, $r>0$, will denote the open ball with center $x \in\mathbb{R}^d$
and radius $r$, and~$d(x,A)$ the distance of $x$ to the set
$A\subset\mathbb{R}^d$. For $x,y \in\mathbb R^{d}$, $(x|y)$ will
denote the inner product of $x$ and $y$ with respect to the specified
Euclidean norm and for two real-valued functions $f$ and $g$, $f(x)
=\mathrm{O}(g(x))$ as $x \to\infty$ if there is a positive real constant $C$
such that $ |f(x)| \leq C |g(x)| $, for all large enough $x$.
\end{NA*}
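For numerical experiments, the generalized survival function can be approximated by a crude Monte Carlo estimator; the following \texttt{Python} sketch (an illustration only, with an arbitrary Gaussian example) is one possible implementation:
\begin{verbatim}
import numpy as np

# Illustration only: Monte Carlo estimate of F_r(xi) = E[|X|^r 1_{|X| > xi}]
# for X ~ N(0, I_d); for xi = 0 it estimates E|X|^r (= d when r = 2).
rng = np.random.default_rng(0)
d, r, n_samples = 2, 2.0, 10**6
norms = np.linalg.norm(rng.standard_normal((n_samples, d)), axis=1)

def F_r_bar(xi):
    return np.mean(norms**r * (norms > xi))

for xi in (0.0, 1.0, 2.0, 3.0):
    print(xi, F_r_bar(xi))
\end{verbatim}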
\section{A first preliminary result}\label{sec2}
As a first necessary step we
elucidate the connections between the asymptotics of the maximal radius
sequence and the ``supremum'' of the support of the distribution $P$.
\begin{prop} \label{proplimitRM}
\textup{(a)} Let $\vert\cdot\vert$ be an arbitrary norm on $\mathbb{R}^d$
and $X \in L^r(\mathbb{P})$. Let $(\alpha_n)_{n\geq1}$ be a sequence
of $n$-quantizers such that
$\int_{\mathbb{R}^d}d(x,\alpha_n)^rP(\mathrm{d}x)\to0$ as $n \rightarrow
+\infty$. Then,
\begin{equation} \label{hyp6.1}
\liminf_{n} \rho_n \geq \sup\{ \vert x \vert, x \in\operatorname{supp}(P) \}.
\end{equation}
\textup{(b)} Suppose that $\vert\cdot\vert$ is a Euclidean norm
on $\mathbb{R}^d$. If $\operatorname{card}(\operatorname{supp}(P)) = +\infty$, then for any
$L^r(P)$-optimal sequence of $n$-quantizers $(\alpha_n)_{n \geq1}$
\begin{equation}
\lim_{n} \rho_n = \sup_{n \geq1} \rho_n = \sup\{ \vert x \vert, x
\in\operatorname{supp}(P) \}.
\end{equation}
\end{prop}
\begin{pf}
(a)
Let $ x \in\operatorname{supp}(P)$ and let $\varepsilon>0$. For every $n
\geq1$,
\begin{eqnarray*}
\Vert d(X,\alpha_{n}) \Vert_r
& \geq& \Vert d(X,B(0,\rho_{n})) \Vert_r \qquad \bigl(\mbox{since } \alpha_{n} \subset B(0, \rho_{n})\bigr) \\
& \geq& \bigl\Vert d(X,B(0,\rho_{n})) \mathbf{1} _{\{X \in
B(x,\varepsilon)\}} \bigr\Vert_r \\
& \geq& d(B(x,\varepsilon),B(0,\rho_{n})) \mathbb{P}\bigl(X \in
B(x,\varepsilon)\bigr)^{1/r}.
\end{eqnarray*}
Since $\mathbb{P}(X \in B(x,\varepsilon))>0$ and $\Vert d(X,\alpha_{n}) \Vert_r
\rightarrow0$, it follows that $d(B(x,\varepsilon),B(0,\rho_{n}))\rightarrow0$,
hence $d(B(x,2\varepsilon),B(0,\rho_{n}))=0$ for large enough $n$, so that
$|x|-2\varepsilon\le\rho_n$. Letting $\varepsilon\rightarrow0$ and taking the
supremum over $x \in\operatorname{supp}(P)$ yields (\ref{hyp6.1}).
(b) We will show first that if $\alpha$ is an $L^r$-optimal quantizer
at level $n$ and if $\operatorname{card}(\operatorname{supp}(P)) \geq n,$ then
\begin{equation} \label{Eqlimitesup}
\alpha\subset\overline{\operatorname{Conv}(\operatorname{supp}(P))} \quad \mbox{and}\quad
\rho_n \leq \sup\{ \vert x \vert, x \in\operatorname{supp}(P) \}.
\end{equation}
Note first that if $\alpha$ is $L^r$-optimal at level $n$, then
$\operatorname
{card}(\alpha) = n$ since $\operatorname{card}(\operatorname{supp}(P)) \geq n$ (see \cite
{Pag}, Proposition 11 or \cite{GraLus}, Theorem 4.1). Now, suppose
that there exists $ a \in\alpha\cap (\overline{\operatorname{Conv}(\operatorname{supp}(P))} )^{c}$ and set
\[
\alpha' = (\alpha\backslash\{ a \}) \cup\{ \Pi(a) \},
\]
where $\Pi$ denotes the projection on the non-empty closed convex set
$\overline{\operatorname{Conv}(\operatorname{supp}(P))}$. The projection is $1$-Lipschitz (see,
e.g., \cite{HirLem}, Chapter III, page~116) and $X$ is $\mathbb{P}\mbox{-a.s. }
\operatorname{supp}(P)$-valued, hence
\begin{equation} \label{EqIneqDistance}
d(X,a) \geq d(\Pi(X),\Pi(a)) \stackrel{\mathbb{P}\mathrm{\mbox{-}a.s.}}{=}
d(X,\Pi(a)).
\end{equation}
It follows that
\[
d(X,\alpha) \geq d(X,\alpha') \qquad\mathbb{P}\mbox{-a.s.}
\]
Since $\alpha$ is $L^r(P)$-optimal at level $n$ and $\operatorname{card}(\alpha
') \leq\operatorname{card}(\alpha)=n$,
\[
\mathbb{E}(d(X,\alpha')^r) = \mathbb{E}(d(X,\alpha)^r)
\]
so that the following two statements hold:
\begin{itemize}
\item $d(X,\alpha') = d(X,\alpha)$ $\mathbb{P}\mbox{-a.s.}$
\item $\Pi(a) \notin\alpha\backslash\{ a \} $ since $\alpha'$ is
$L^r(P)$-optimal (which implies that $\operatorname{card} (\alpha') = n$).
\end{itemize}
On the other hand, since $X$ is $\mathbb{P}$-a.s.\ $\overline{\operatorname{Conv}(\operatorname{supp}(P))}$-valued,
the variational characterization of the projection $\Pi$ onto this closed convex set yields
\[
\bigl(a-\Pi(a) \vert X-\Pi(a) \bigr) \leq0 \qquad\mathbb{P} \mbox{-a.s.}
\]
Consequently
\begin{eqnarray*}
\vert X-a \vert^2 - \vert X - \Pi(a) \vert^2 & = & 2\bigl(\Pi(a)-a \vert
X-\Pi(a)\bigr) + \vert a - \Pi(a) \vert^2 \\
& \geq& \vert a-\Pi(a) \vert^2 >0 \qquad\mathbb{P}\mbox{-a.s.}
\end{eqnarray*}
since $a \notin\overline{\operatorname{Conv}(\operatorname{supp}(P))}$. As a consequence
\[
d(X,\alpha') < d(X,\alpha) \qquad\mathbb{P}\mbox{-a.s. on } \bigl\{ X \in
\accentset{\circ}{C}_{\Pi(a)}(\alpha') \bigr\},
\]
where $ \accentset{\circ}{C}_{\Pi(a)}(\alpha') = \{\xi \in\mathbb{R}^d, d(\xi
,\Pi(a))< d(\xi,\alpha\backslash\{ a \}) \} $ since the norm is Euclidean.
This implies that $\mathbb{P}(X \in\accentset{\circ}{C}_{\Pi(a)}(\alpha')) =
0$; if so, $\alpha'\setminus\{\Pi(a)\}=\alpha\setminus\{a\}$ would
clearly be optimal at level $n$ (since $d(X,\alpha)=d(X,\alpha\setminus
\{a\})$ a.s.) with a cardinality equal to $n-1,$ which is impossible
since $e_{n,r}(X)$ decreases (strictly) to $0$ (see again \cite
{GraLus,Pag}). Hence $ \alpha\subset\overline{\operatorname{Conv}(\operatorname{supp}(P))} $.
Now, let us prove that $ \rho_n \leq \sup\{ \vert x \vert, x \in
\operatorname{supp}(P) \}.$ Note first that this assertion is obvious if $\operatorname{supp}(P)$ is unbounded. Otherwise, if $\operatorname{supp}(P)$ is bounded, then
it is compact and so is $\operatorname{Conv}(\operatorname{supp}(P))$. Let $x_0 \in \operatorname{Conv}(\operatorname{supp}(P))$ be such that $\vert x_0 \vert= \sup\{ \vert x \vert,
x \in\operatorname{Conv}(\operatorname{supp}(P)) \}.$ By Carath\'eodory's theorem,
\[
x_0 = \sum_{i=1}^{d+1}\lambda_i \xi_i, \qquad\xi_i \in
\operatorname{supp}(P), \lambda_i\ge0, \sum_{i=1}^{d+1}\lambda_i =1,
\]
and the convexity of the norm yields $\vert x_0 \vert\le\max_{1\le i\le d+1}\vert\xi_i\vert
\le \sup\{ |x |, x \in\operatorname{supp}(P) \}$. Since $\alpha\subset\overline{\operatorname{Conv}(\operatorname{supp}(P))}
=\operatorname{Conv}(\operatorname{supp}(P))$, it follows that $\rho_n \le\vert x_0\vert\le
\sup\{ |x |, x \in\operatorname{supp}(P) \} $, which, combined with claim (a)
(inequality (\ref{hyp6.1})), yields the conclusion.
\end{pf}
\begin{remark*} Note that (b) follows from the fact that if
$\alpha$ is an $L^r$-optimal quantizer at level $n$, then
\begin{equation} \label{AssertRestNorm}
\alpha\subset\overline{\operatorname{Conv}(\operatorname{supp}(P))}
\end{equation}
as soon as $\operatorname{card}(\operatorname{supp}(P)) \geq n$. But this result holds true
only for Euclidean norms on~$\mathbb{R}^d$. For an arbitrary norm, this
assertion may fail. A counterexample is given with the $l_{\infty
}$-norm in \cite{GraLus}, page~25.
Before dealing with the general case we give two examples of
distributions (exponential and Pareto) for which the sharp convergence
rate of the maximal radius sequence can be easily derived from
semi-closed forms established in \cite{ForPag} for their $L^r$-optimal
quantizers.
\end{remark*}
$\rhd$ \textit{Exponential distribution.} Let $r > 0$ and let $P$ be an
exponential distribution with parameter $\lambda > 0$.
Then
\begin{equation} \label{asymp_exp}
\rho_n = \frac{r+1}{\lambda} \log n+ \frac{C_{r}}{\lambda
} + \mathrm{O}\biggl (\frac{1}{n} \biggr),
\end{equation}
where $C_{r}$ is a real constant depending only on $r$.
$\rhd$ \textit{Pareto distribution.} Let $r>0$ and let $P$ be a Pareto
distribution with index $\gamma>r$. Then,
\begin{equation} \label{asymp_Par}
\rho_n = K_r n^{\fracd{r+1}{\gamma-r}} \biggl(1 + \mathrm{O} \biggl(\frac
{1}{n} \biggr) \biggr),
\end{equation}
where $K_r$ is a positive real constant depending only on $r$.
A short proof of these results is given in the \hyperref[appm]{Appendix}.
These rates will be useful to validate the asymptotic rates obtained by
other approaches.
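As a rough numerical illustration of (\ref{asymp_exp}) (a sketch added for convenience and not used in the sequel; it relies on the classical Lloyd fixed-point procedure for $r=2$, with an arbitrary number of iterations), one may compute quadratic optimal quantizers of the exponential distribution and observe that $\rho_n/\log n$ slowly approaches $(r+1)/\lambda=3/\lambda$:
\begin{verbatim}
import numpy as np

# Illustration only (r = 2): Lloyd's fixed-point iteration for Exp(lam),
# with closed-form cell probabilities and cell means.
lam = 1.0

def lloyd_exponential(n, n_iter=5000):
    # quantile initialization, then repeated centroid (stationarity) steps
    x = -np.log(1.0 - (np.arange(1, n + 1) - 0.5) / n) / lam
    for _ in range(n_iter):
        mid = np.concatenate(([0.0], 0.5 * (x[1:] + x[:-1]), [np.inf]))
        a, b = mid[:-1], mid[1:]
        b_fin = np.where(np.isinf(b), 0.0, b)
        tail_b = np.where(np.isinf(b), 0.0, np.exp(-lam * b_fin))
        p = np.exp(-lam * a) - tail_b                       # P(a < X <= b)
        m = (a + 1/lam)*np.exp(-lam*a) - (b_fin + 1/lam)*tail_b  # E[X 1_{a<X<=b}]
        x = m / p
    return x

for n in (10, 50, 200):
    rho_n = lloyd_exponential(n)[-1]                        # maximal codepoint
    print(n, rho_n, rho_n * lam / np.log(n))                # last column -> 3 (slowly)
\end{verbatim}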
\section{Asymptotic upper bounds for the radius}\label{sec3}
We investigate in this section the upper rate of convergence of $(\rho
_n)$ to infinity. We first give some definitions and hypotheses
that will be useful later on.
Let $(\alpha_n)_{n \geq1}$ be an $L^r(P)$-optimal sequence of
quantizers at level $n$. For every $n \geq1$, we denote by $M(\alpha
_n)$ the set of points in $\alpha_{n}$ for which the maximal norm is
reached, namely,
\[
M(\alpha_n) = \Bigl\{a \in\alpha_n \mbox{ such that } \vert a \vert=
{\max_{b \in\alpha_n}} \vert b \vert \Bigr\}.
\]
We will need the following (light) assumption on the distribution $P$:
\[
\mathbf{(H)} \equiv \exists x_0 \in\mathbb{R}^d,
\exists \varepsilon_0>0, \exists r_0>0 \mbox{ such that }
P(\mathrm{d}x) \geq\varepsilon_0 \mathbf{1} _{B(x_0,r_0)} (x) \lambda_d(\mathrm{d}x),
\]
which means that $P$ is locally lower bounded as a measure by the
Lebesgue measure on a ball. This assumption holds as soon as $P$ has a
density $f$, bounded away from $0$ on a~non-empty open set.
In order to get a sharp estimate for $\rho_n$ for one-dimensional
distributions with hyper-exponential tails, we will need the following
more technical assumption (for $r \in[1,+\infty)$):
$\mathbf{(G_r)} \equiv P=f \cdot\lambda_1,$ where $f>0$ is
non-increasing to $0$ on $[A,+\infty)$, non-decreasing from~$0$ on
$(-\infty,-A]$ for some real constant $A\ge0$ and
\begin{equation} \label{Assump_exact_limit}
\lim_{|y| \rightarrow+\infty} \int_1^{+\infty} (u-1)^{r-1} \frac
{f(uy)}{f(y)}\,\mathrm{d}u =0.
\end{equation}
Such an assumption is clearly satisfied by distributions with
hyper-exponential tails, that is, of the form
$ f(x) = K\vert x \vert^{c} \mathrm{e}^{- \vartheta \vert x \vert^{\kappa}}$, $
|x| >A>0$, $\vartheta, \kappa>0$, $c \in\mathbb{R}$.
Indeed, such a density $f$ is non-increasing outside a compact
interval and we have
\[
\int_1^{+\infty} (u-1)^{r-1} \frac{f(uy)}{f(y)}\,\mathrm{d}u = \int
_1^{+\infty} (u-1)^{r-1}u^c \mathrm{e}^{-\vartheta y^{\kappa}(u^{\kappa}-1)}\,\mathrm{d}u
\stackrel{y \rightarrow+\infty}{\longrightarrow} 0
\]
by the Lebesgue dominated convergence theorem (for $y\ge y_0>A$ the integrand is
dominated by the integrable function $(u-1)^{r-1}u^{c} \mathrm{e}^{-\vartheta y_0^{\kappa}(u^{\kappa}-1)}$).
A \textit{one-sided version} of
condition $\mathbf{(G_r)}$ can be stated by restricting $f$ to
$[A,+\infty)$ or $(-\infty,-A]$ for some $A\ge0$.
\subsection{Main results on asymptotic upper bounds}\label{sec3.1}
The main result of this section, stated below, connects
the asymptotic behaviour of $\rho_n$ with that of the generalized survival
function $\bar{F}_r(\xi) = \mathbb{E}
(\vert X \vert^r \mathbf{1} _{\{ \vert X \vert> \xi\}} )$ of $X$,
through some asymptotic ``semi-inverse'' of $-\log\bar F_r$
or $-\log\bar F_r(\mathrm{e}^{\centerdot})$.
First we need to briefly recall some background on inverse functions and
regular variation.
It is clear that the function $\bar{F}_r$ is non-increasing and goes
to $0$ as $\xi\rightarrow+\infty$ (provided $\mathbb{E} \vert X \vert
^{r}<+\infty$). Consequently, $\xi\mapsto-\log\bar{F}_r(\xi)$ is
monotone non-decreasing and goes to ${+}\infty$ as $\xi$ goes to ${+}\infty$.
It is well known that if a function $f$ defined on $(0,+\infty)$ is
non-decreasing to ${+}\infty$, its generalized inverse function
$f^{\leftarrow}$ defined for every $y>0$ by
\begin{equation}
f^{\leftarrow}(y)=\inf\{\xi>0, f(\xi) \geq y \}
\end{equation}
is non-decreasing to ${+} \infty$. If, furthermore (see \cite
{BinGolTeu}, Theorem 1.5.12.), $f$ is regularly varying (at ${+}\infty$)
with index\vadjust{\goodbreak} $1/ \delta$, $\delta>0$ (i.e., for every $t>0$, $ \frac
{f(t\xi)}{f(\xi)} \rightarrow t^{1/\delta}$ as $\xi\rightarrow+\infty
$),\vspace*{2pt} then there exists a function $\psi$, regularly varying with index
$\delta$ and satisfying
\begin{equation} \label{Asymp_equiv}
\lim_{\xi\to+ \infty} \frac{\psi(f(\xi))}{\xi} = \lim_{y\to+\infty
}\frac{f(\psi(y))}{y} =1 .
\end{equation}
Such a function $\psi$ is called an \textit{asymptotic inverse} of $f$.
It is neither necessarily increasing nor continuous. Moreover, $\psi$
is unique up to asymptotic equivalence at ${+}\infty$ and $f^{\leftarrow
}$ is one version of $\psi$. (By asymptotic equivalence (at ${+}\infty$),
we mean $f\sim g$ if $ \lim_{x \rightarrow+\infty} \frac{f(x)}{g(x)}=1$.)
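As an elementary illustration, if $f(\xi)=\vartheta\xi^{\kappa}+c\log\xi$ with
$\vartheta,\kappa>0$ and $c \in\mathbb{R}$ (a form reminiscent of $-\log\bar F_r$ for the
hyper-exponential tails considered below), then $f$ is increasing to ${+}\infty$ for large
enough $\xi$, regularly varying with index $\kappa$, and $\psi(y)= (y/\vartheta )^{1/\kappa}$
is an asymptotic inverse of $f$ since
\[
f(\psi(y)) = y + \frac{c}{\kappa}\log\frac{y}{\vartheta} = y+\mathrm{o}(y) \qquad\mbox{as } y \to+\infty.
\]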
We show in the theorem below how an asymptotic upper estimate for $\rho_n$
or $\log\rho_n$ can be derived from the regular variation of a non-decreasing
function $\psi_r$ which asymptotically dominates $(-\log\bar
{F}_r)^{\leftarrow}$ or $(-\log\bar{F}_r(\mathrm{e}^{\centerdot}))^{\leftarrow}$.
\begin{theo} \label{thm_princip_limsup}
Let $r>0$ and let $X \in L^{r}(\mathbb{P})$ with distribution $P$
having an unbounded support and satisfying $(\mathbf{H})$. Let $(\alpha
_n)_{n \geq1}$ be an $L^r(P)$-optimal sequence of $n$-quantizers.
\begin{longlist}[(b)]
\item[(a)] If $\psi_r$ is a non-decreasing function, regularly
varying with index $\delta$ and
\begin{equation} \label{hyp_surv2}
\liminf_{\xi\to+ \infty}\frac{\psi_r(-\log\bar{F}_r(\mathrm{e}^\xi))}{\xi} \geq 1,
\end{equation}
then
\begin{equation} \label{result2_thm_result_gen}
\limsup_{n} \frac{ \log\rho_n}{ \psi_r(\log n)} \leq \biggl( 1+\frac
{r}{d} \biggr)^{\delta}.
\end{equation}
If $-\log\bar{F}_r(\mathrm{e}^{\centerdot})$ has regular variation of index $
1/ \delta$ then (\ref{result2_thm_result_gen}) holds with $\psi_r =
(-\log\bar{F}_r(\mathrm{e}^{\centerdot}))^{\leftarrow}$.
\item[(b)] If $\psi_r$ is a non-decreasing function, regularly
varying with index $\delta$ and
\begin{equation} \label{hyp_surv1}
\liminf_{\xi\to+\infty}\frac{\psi_r(-\log\bar{F}_r(\xi))}{\xi} \geq1,
\end{equation}
then
\begin{equation} \label{result1_thm_result_gen}
\limsup_{n } \frac{\rho_n}{ \psi_r(\log n)} \leq c_{r,d} \biggl( 1+\frac
{r}{d} \biggr)^{\delta},
\end{equation}
where $c_{r,d} =1$ if $d=1, r \geq1$ and $\mathbf{(G_r)}$ holds and
$c_{r,d}=2$ otherwise.
In particular, if $-\log\bar{F}_r$ has regular variation with index $
1/ \delta$, then (\ref{result1_thm_result_gen}) holds with $\psi_r =
(-\log\bar{F}_r)^{\leftarrow}$.
\end{longlist}
\end{theo}
\textit{Further comments on the choice of $\psi_{r}$.}
As we will show further on, claim (a) is devoted to distributions
with polynomial tails whereas claim (b) will be applied to
distributions with hyper-exponential tails.
Note that for distributions with exponential tails, the function $\psi
_r$ in (b) can be chosen independently of $r$ (see the proof of
Corollary~\ref{cor_princip_limsup}).
Also note that if $-\log\bar{F}_r$ (resp., $-\log\bar
{F}_r(\mathrm{e}^{\centerdot})$) is measurable, locally bounded and regularly
varying with index $1/ \delta, \delta>0$, then its generalized inverse
function $\phi_r$ (resp., $\Phi_r$) is measurable increasing to
${+}\infty$, regularly varying with index $\delta$ and $\phi_r(-\log\bar
{F}_r(x)) = x + \mathrm{o}(x)$ (resp., $\Phi_r(-\log\bar{F}_r(\mathrm{e}^x)) = x
+ \mathrm{o}(x)$). Consequently, inequality (\ref{result1_thm_result_gen})
(resp., (\ref{result2_thm_result_gen})) holds with $\phi_r$
(resp., $\Phi_r$) in place of $\psi_r$. However, $\phi_r$
(resp., $\Phi_r$) is, in general, not easy to compute and the
examples below show that it is often easier to directly exhibit a
function $\psi_r$ satisfying the announced hypotheses without inducing
any asymptotic loss of accuracy.
The above theorem is a consequence of the following more abstract
result, which connects $\rho_n$ and the generalized functions $\bar F_r$.
\begin{theo} \label{prop_princip_limsup}
Let $r>0$ and let $X \in L^{r}(\mathbb{P})$ with a distribution $P$
having an unbounded support and satisfying $\mathbf{(H)}$. Let $(\alpha
_n)_{n \geq1}$ be an $L^r(P)$-optimal sequence of $n$-quantizers. Then,
\begin{equation} \label{eqthm2.6}
\lim_{\varepsilon\downarrow0} \liminf_{n} \biggl( n^{1+ {r}/{d}}
\bar{F}_r \biggl (\frac{\rho_n}{c_{r,d}+\varepsilon} \biggr) \biggr) \geq
C_{r,d} \in(0, \infty),
\end{equation}
where $c_{r,d}$ is defined in Theorem~\ref{thm_princip_limsup}.
\end{theo}
We will temporarily admit this result to prove Theorem~\ref{thm_princip_limsup}.
\begin{pf*}{Proof of Theorem~\ref{thm_princip_limsup}}
(a) It follows from (\ref{eqthm2.6}) that, for every $\varepsilon
>0$, there is a positive real constant $C_{r,d,\varepsilon}$
such that, for large enough $n$, $ n^{-\fracb{d+r}{d}} C_{r,d,\varepsilon} \leq\bar{F}_r
(\frac{\rho_n}{c_{r,d} + \varepsilon} )$. Therefore, one has
\[
\frac{r+d}{d} \log n-\log(C_{r,d,\varepsilon}) \geq-\log\bar{F}_r
\biggl(\frac{\rho_n}{c_{r,d} + \varepsilon} \biggr).
\]
Combining the fact that $\psi_r$ is non-decreasing with assumption
(\ref{hyp_surv2}) yields
\begin{eqnarray*}
\psi_r \biggl( \frac{r+d}{d} \log n-\log(C_{r,d,\varepsilon}) \biggr) &
\geq& \psi_r \biggl(-\log\bar{F}_r \biggl(\frac{\rho_n}{c_{r,d} +
\varepsilon} \biggr) \biggr) \\
& \geq& \log\rho_n - \log(c_{r,d} + \varepsilon)+ \mathrm{o}(\log\rho_n ).
\end{eqnarray*}
Moreover, dividing by $\psi_r(\log n)$ (which is positive for large
enough $n$) yields
\[
\frac{ \log\rho_n}{\psi_r(\log n)} \leq\biggl ( 1-\frac{\log
(c_{r,d}+\varepsilon)}{ \log\rho_n} + \frac{\mathrm{o}(\log\rho_n)}{\log\rho
_n} \biggr)^{-1} \frac{\psi_r ( (\fracb{r+d}{d}) \log n-\log
(C_{r,d,\varepsilon}) )}{\psi_r(\log n)}.
\]
Owing to the regularly varying hypothesis on $\psi_r$ and the fact that
$\lim_{n} \rho_n = +\infty$ (which follows from Proposition
\ref{proplimitRM}), we have
\[
\limsup_{n} \frac{ \log\rho_n}{ \psi_r(\log n)} \leq \biggl(1+ \frac
{r}{d} \biggr)^{\delta}.
\]
(b) As previously, one derives from (\ref{hyp_surv1}) and from the
non-decreasing hypothesis on~$\psi_r$ that
\begin{eqnarray*}
\psi_r \biggl( \frac{r+d}{d} \log n-\log(C_{r,d,\varepsilon}) \biggr) &
\geq& \psi_r \biggl (-\log\bar{F}_r \biggl (\frac{\rho_n}{c_{r,d} +
\varepsilon} \biggr) \biggr) \\
& \geq& \frac{\rho_n}{c_{r,d} + \varepsilon} + \mathrm{o}(\rho_n ).
\end{eqnarray*}
It follows that
\[
\frac{\rho_n}{\psi_r(\log n)} \leq(c_{r,d}+\varepsilon) \biggl(1+ \frac
{\mathrm{o}(\rho_n)}{\rho_n} \biggr)^{-1} \frac{\psi_r ( (\fracb{r+d}{d}) \log
n-\log(C_{r,d,\varepsilon}) )}{\psi_r(\log n)}.
\]
The regularly varying hypothesis on $\psi_r$ and the fact that
$\lim_{n} \rho_n =+\infty$ yields
\[
\forall\varepsilon>0 \qquad\limsup_{n} \frac{\rho_n}{\psi_r(\log n)}
\leq (c_{r,d}+\varepsilon) \biggl (\frac{r+d}{d} \biggr)^{\delta}.
\]
The result follows by letting $\varepsilon\rightarrow0.$
\end{pf*}
Now we pass to the proof of Theorem~\ref{prop_princip_limsup}, which is
based on the following two lemmas.
\begin{lem} \label{lem1_princip_limsup} Let $r>0$ and let $X \in
L^r(\mathbb{P})$ with a distribution $P$ on $\mathbb{R}^d$ having an
unbounded support. Let $(\alpha_n)_{n \geq1}$ be a sequence of
$n$-quantizers, such that $\mathbb{E} d(X,\alpha_n)^r\to0$. Then,
\begin{equation} \label{equaconjecture}
\forall \varepsilon>0, \exists n_{\varepsilon} \mbox{ such
that } \forall n\geq n_{\varepsilon}, \forall a \in M(\alpha_n),
\forall y \in C_a(\alpha_n) \qquad\vert y \vert\geq\frac{\rho
_n}{c_{r,d}+\varepsilon},
\end{equation}
where $c_{r,d}$ is defined in Theorem~\ref{thm_princip_limsup}.
\end{lem}
\begin{pf}
\textit{Step} 1. Let $r>0$ and let $d \geq
1$. Since $\mathbb{E} d(X,\alpha_n)^r\to0$ as $n \rightarrow+\infty
$, the following asymptotic density property of $(\alpha_n)$ in the
support of $P$ holds:
\begin{equation}
\forall\varepsilon>0, \forall x \in\operatorname{supp}(P), \exists
n_{\varepsilon,x} \in\mathbb{N}, \forall n \geq n_{\varepsilon
,x} \qquad B(x,\varepsilon) \cap\alpha_n \not= \varnothing.
\end{equation}
Otherwise, there exists $x \in\operatorname{supp}(P), \varepsilon>0$ and a
subsequence $(\alpha_{n_k})_{k \geq1}$ so that $ \forall k \geq1$,
$B(x,\varepsilon) \cap\alpha_{n_k} = \varnothing$. Then, for every $k
\geq1$,
\[
\Vert d(X,\alpha_{n_k})\Vert_r \geq\bigl\Vert d(X,\alpha_{n_k}) \mathbf
{1} _{X \in B(x, \varepsilon/2)} \bigr\Vert_r \geq\frac{\varepsilon}{2}
P\bigl(B(x,\varepsilon/2)\bigr)^{1/r} >0,
\]
which contradicts the fact that $\Vert d(X,\alpha_{n})\Vert_r
\rightarrow0$ as $n \rightarrow+\infty$.
Assume first that $0 \in\operatorname{supp}(P)$. Let $ \varepsilon>0$ and $a
\in M(\alpha_n)$. There exists an $N_1 \in\mathbb{N}$ such that
$B(0,\varepsilon) \cap\alpha_n \not= \varnothing$ for every $n\geq N_1$.
Now $\rho_n \rightarrow+\infty$ implies the existence of $N'_1 \in
\mathbb{N}$, $N'_1\ge N_1$ such that $B(0,\varepsilon) \cap(\alpha_n
\backslash M(\alpha_n)) \not= \varnothing$ for $n \geq N'_1$.
Let $n \geq N'_{1}$ and let $b\in B(0,\varepsilon) \cap(\alpha_n
\backslash M(\alpha_n)) $. For every $y \in C_a(\alpha_n)$, we have $
\vert y -b \vert^2 \geq \vert y -a \vert^2$, so that
\[
2(y|a-b) \geq \vert a \vert^2 -\vert b \vert^2 = \rho_n^2 - \vert b
\vert^2\ge0.
\]
Now, since $\vert y \vert\vert a-b \vert\geq(y|a-b)$ by the Cauchy--Schwarz inequality,
\[
\vert y \vert\vert a-b \vert\geq\frac{(\rho_n+\vert b \vert)(\rho
_n-\vert b \vert)}{2}.
\]
Moreover, $0<\vert a-b \vert\leq\vert a \vert+ \vert b \vert = \rho
_n+\vert b \vert$. One finally gets
\[
\vert y \vert\geq\frac{\rho_n-\vert b \vert}{2} \geq\frac{\rho
_n-\varepsilon}{2}.
\]
Since $\rho_n \rightarrow+\infty$, it follows that $\vert y \vert\geq\frac{\rho
_n}{2+\varepsilon}$ as soon as $n \geq\max(N'_{1},N_{2})$, where
$N_{2}$ is such that $\rho_{n} \geq2+\varepsilon$ for every $n\ge N_{2}$.
If $0 \notin\operatorname{supp}(P)$, we show likewise that $\vert y \vert\geq
\frac{\rho_n - \vert x_0 \vert- \varepsilon}{2}$, where $x_0 \in
\operatorname{supp}(P)$ is fixed. This implies the announced result since $\rho
_n \rightarrow+\infty$.
\textit{Step} 2. Suppose that $d=1, r \geq1$ and ${(\mathbf{G_r})}$
holds. First, we use the well-known fact (see, e.g., \cite{GraLus},
Lemma 4.10 or \cite{Pag}, Proposition 9) that the $L^r$-distortion function
\[
\alpha= (\alpha_1,\ldots ,\alpha_n) \longmapsto D_{n,r}^X (\alpha) =
\mathbb{E} \Bigl( \min_{i=1,\ldots ,n} \vert X - \alpha_i \vert^r \Bigr)
\]
is differentiable at any codebook $\alpha \in(\mathbb{R}^d)^n$ having
pairwise distinct components and that
\begin{equation} \label{station.}
\nabla D_{n,r}^X(\alpha) = r \biggl( \int_{C_i(\alpha)} (\alpha_i - u)
\vert u - \alpha_i\vert^{r-2} f(u)\,\mathrm{d}u \biggr)_{1 \leq i \leq n}.
\end{equation}
An optimal $L^r$-quantizer $ \alpha=\{\alpha_1, \ldots ,\alpha_n \}$ at
level $n$ for $P = f \lambda_1$ has full size $n$ so that
\begin{equation} \label{EqStationnar}
\nabla D_{n,r}^X(\alpha) =0.
\end{equation}
Note that for any (ordered) quantizer $\alpha_n = \{x_{1}^{(n)},\ldots
,x_{n}^{(n)}\}$, $x_{1}^{(n)} < \cdots < x_{n}^{(n)}$ at level
$n$, its Voronoi partition is given by
\begin{eqnarray*}
C_{1}(\alpha_n)&=&\bigl(-\infty,x_{ {3}/{2}}^{(n)}\bigr], \qquad C_n(\alpha
_n)=\bigl(x_{n- {1}/{2}}^{(n)},+\infty\bigr), \\
C_i(\alpha_n) &=&\bigl
(x_{i-
{1}/{2}}^{(n)},x_{i+ {1}/{2}}^{(n)}\bigr], \qquad i=2,\ldots ,n-1,
\end{eqnarray*}
with $ x_{i\pm {1}/{2}}^{(n)} = \frac{x_{i}^{(n)}+x_{i\pm
1}^{(n)}}{2} $. We will focus on the one-sided setting by considering
\[
\rho_n = \rho_n^{+}:= \max\{x, x \in\alpha_n\}.
\]
All results on $ \rho_n^{-}:= \max\{-x, x \in\alpha_n\}$ follow by
considering $-X$ instead of $X$. Finally, one will conclude by noting
that the bi-sided radius is given by $\rho_n =\max(\rho^+_n,\rho^-_n)$.
Let $\alpha_n = \{x_1^{(n)},\ldots ,x_n^{(n)} \}$ with $x_1^{(n)}
< \cdots < x_n^{(n)}$ and suppose, looking for a contradiction, that (up to a subsequence) $\frac
{x_{n-1}^{(n)}}{x_{n}^{(n)}} \rightarrow \rho<1$.
Let $\varepsilon>0$ such that $\rho+ \varepsilon<1$. We have for
large enough $n$, $ \frac{x_{n-1}^{(n)}}{x_{n}^{(n)}} < \rho+
\varepsilon<1$
or, equivalently,
\begin{equation} \label{equa_ref1}
\frac{x_{n-1}^{(n)}+x_n^{(n)}}{2} < x_n^{(n)} \frac{1+\rho+\varepsilon}{2}.
\end{equation}
Let $\rho'$ be such that $0<\rho'< \frac{1-(\rho+\varepsilon)}{2}$,
that is, $\frac{1+\rho+\varepsilon}{2}< 1-\rho'<1$. It follows from
(\ref{equa_ref1}) that
\begin{eqnarray} \label{IneqFromStation}
\int_{\fracb{x_{n-1}^{(n)} +x_{n}^{(n)}}{2}}^{x_n^{(n)}} \biggl( 1-\frac
{u}{x_n^{(n)}} \biggr)^{r-1} f(u)\,\mathrm{d}u
& \geq&
\int_{ {x_n^{(n)}(1+\rho+\varepsilon)}/{2}}^{x_n^{(n)}(1-\rho')}
\biggl( 1-\frac{u}{x_n^{(n)}} \biggr)^{r-1} f(u)\,\mathrm{d}u \nonumber\\
& \geq& (\rho')^{r-1} \int_{ {x_n^{(n)}(1+\rho+\varepsilon
)}/{2}}^{x_n^{(n)}(1-\rho')} f(u)\,\mathrm{d}u \\
& \geq& \rho'' x_n^{(n)} f(c_n)\nonumber
\end{eqnarray}
with $\rho'' = (\rho')^{r-1}(\frac{1}{2}-\rho' - \frac{\rho+\varepsilon
}{2}) >0$ and $c_n \in(x_n^{(n)}(1+\rho+\varepsilon
)/2,x_n^{(n)}(1-\rho'))$. On the other hand, since we have
\[
\frac{1}{x_n^{(n)} f(x_n^{(n)})} \int_{x_n^{(n)}}^{+\infty} \biggl( \frac
{u}{x_n^{(n)}} -1 \biggr)^{r-1}f(u)\,\mathrm{d}u = \int_{1}^{+\infty} (u-1)^{r-1}
\frac{f(u x_n^{(n)})}{f(x_n^{(n)})}\,\mathrm{d}u,
\]
it follows from assumption $\mathbf{(G_r)}$ that
\[
\lim_{n} \frac{1}{x_n^{(n)} f(x_n^{(n)})} \int_{x_n^{(n)}}^{+\infty}
\biggl ( \frac{u}{x_n^{(n)}} -1 \biggr)^{r-1}f(u)\,\mathrm{d}u =0.
\]
Consequently, for large enough $n$,
\[
\frac{1}{x_n^{(n)} f(x_n^{(n)})} \int_{x_n^{(n)}}^{+\infty} \biggl( \frac
{u}{x_n^{(n)}} -1 \biggr)^{r-1}f(u)\,\mathrm{d}u < \rho''
\]
so that using (\ref{IneqFromStation}) and the fact that $f$ is
non-increasing in $[A,+\infty)$ and $A<c_n < x_n^{(n)}$ for large
enough $n$, one gets
\begin{eqnarray*}
\int_{x_n^{(n)}}^{+\infty} \biggl( \frac{u}{x_n^{(n)}} -1
\biggr)^{r-1}f(u)\,\mathrm{d}u &<& \rho'' x_n^{(n)} f\bigl(x_n^{(n)}\bigr)\\
&\leq& \rho'' x_n^{(n)}
f(c_n) \leq \int_{\fracb{x_{n-1}^{(n)} +x_{n}^{(n)}}{2}}^{x_n^{(n)}}
\biggl ( 1-\frac{u}{x_n^{(n)}} \biggr)^{r-1} f(u)\,\mathrm{d}u.
\end{eqnarray*}
This leads to a contradiction since the $L^r$-stationary equation (\ref
{EqStationnar}) implies in particular
\[
\int_{\fracb{x_{n-1}^{(n)} +x_{n}^{(n)}}{2}}^{x_n^{(n)}} \biggl( 1-\frac
{u}{x_n^{(n)}} \biggr)^{r-1} f(u)\,\mathrm{d}u = \int_{x_n^{(n)}}^{+\infty} \biggl(
\frac{u}{x_n^{(n)}} -1 \biggr)^{r-1} f(u)\,\mathrm{d}u.
\]
We therefore have shown that $ \lim_{n} \frac
{x_{n}^{(n)}}{x_{n-1}^{(n)}} =1$.
It follows that
\[
\forall\varepsilon>0, \exists n_{\varepsilon} \mbox{ such that }
\forall n \geq n_{\varepsilon} \qquad x_{n}^{(n)} < (1+\varepsilon) x_{n-1}^{(n)}.
\]
Thus, one completes the proof by noting that
\[
\forall y \in C_a(\alpha_n), a \in M(\alpha_n) \qquad \rho_n =
x_{n}^{(n)} < (1+\varepsilon) x_{n-1}^{(n)} < (1+\varepsilon) y.
\]
\upqed
\end{pf}
\begin{lem} \label{lem_minor_diff} Let $r>0$ and let $X \in
L^{r}(\mathbb{P})$ with distribution $P$ satisfying $\mathbf{(H)}$. Let
$(\alpha_n)_{n \geq1}$ be a sequence of $L^r$-optimal $n$-quantizers
of the distribution $P$. Then for large enough $n$,
\begin{equation}
e^r_{n,r}(X) - e^r_{n+1,r}(X) \geq C_{r,d} n^{-\fracb{r+d}{d}},
\end{equation}
{with}
\begin{equation} \label{def_Const}
C_{r,d} = \frac{r }{2^{(r+d)}(d+r)} \biggl( \frac
{d}{d+r} \biggr)^{d/r} \frac{\varepsilon_0}{1+\varepsilon_0} Q_{d+r}\bigl(U\bigl(\bar
{B}(x_0, r_0/2)\bigr)\bigr),
\end{equation}
where $U(\bar{B}(x_0,\frac{r_0}{2}))$ stands for the uniform
distribution on the closed ball $\bar{B}(x_0,\frac{r_0}{2})$, the
constants $\varepsilon_{0}$, $x_{0}$, $r_{0}$ come from assumption
$\mathbf{(H)}$ and $Q_{d+r}$ is defined by (\ref{QrP}) in Zador's theorem.
\end{lem}
\begin{pf}
\textit{Step} 1. Let $y \in\mathbb
{R}^d$. We temporarily set $ \delta_n=d(y,\alpha_n)$ and may assume
$\delta_n>0$. Following the lines of the proof of Theorem 2 in \cite
{GraLusPag}, we have for every $x \in B(y,\delta_n/2)$ and $a \in
\alpha_n$,
\[
\vert x -a \vert\geq\vert y-a \vert-\vert x-y \vert\geq\delta_n/2
\]
and hence\vspace*{-1.5pt}
\[
d(x,\alpha_n) \geq\delta_n/2 \geq\vert x-y \vert, \qquad x \in
B(y,\delta_n/2).
\]
It follows, by setting $\beta_n = \alpha_n \cup\{ y\}$, that
$d(x,\alpha_n)\ge d(x,\beta_n)$ and $ d(x,\beta_n) = \vert x -y \vert,
x \in B(y,\delta_n/2).$ Consequently for every $b \in(0,1/2)$,
\begin{eqnarray*}
e^r_{n,r}(X) -e^r_{n+1,r}(X)
& \geq& \int_{B(y,\delta_n b)}\bigl(d(x,\alpha_n)^r-d(x,\beta_n)^r\bigr) P(\mathrm{d}x)
\\
& = & \int_{B(y,\delta_n b)} \bigl(d(x,\alpha_n)^r- \vert x-y \vert^r\bigr)
P(\mathrm{d}x) \\
& \geq& \int_{B(y,\delta_n b)} \bigl((\delta_n/2)^r -( \delta_n
b)^r\bigr)P(\mathrm{d}x) \\
& = & (2^{-r} -b^r) \delta_n^r P(B(y,\delta_n b)).
\end{eqnarray*}
\textit{Step} 2. This step is the core of our proof. Let $x_0$
and $r_0$ be as in $\mathbf{(H)}$. For every $y \in\bar{B}(x_0,\frac
{r_0}{2})$,
\begin{eqnarray*}
e^r_{n,r}(X) - e^r_{n+1,r}(X) & \geq& (2^{-r} - b^{r}) \delta_n^r
P\biggl(B\biggl(y, \min\biggl(b \delta_n, \frac{r_0}{2} \biggr)\biggr)\biggr) \\
& \geq& (2^{-r} - b^{r}) \delta_n^r \varepsilon_0 \min \biggl((b \delta
_n)^d, \biggl(\frac {r_0 }2\biggr)^d \biggr).
\end{eqnarray*}
We know from \cite{DelGraLusPag} that, as soon as $d(X,\alpha_n)\to0$
in $L^r(\mathbb{P})$ as $n\to\infty$, the convergence of $y\mapsto d(y,\alpha_n)$
to $0$ holds uniformly on compact subsets of $\operatorname{supp}(P)$ as well.
In particular, we have
\[
\sup_{y\in\bar{B}(x_0, {r_0}/{2})} d(y,\alpha_n) \rightarrow0
\]
so that there exists $N(x_0,r_0) \in\mathbb N$ such that for every
$n \ge N(x_0,r_0)$,
\[
\sup_{y\in\bar{B}(x_0, {r_0}/{2})} d(y,\alpha_n) \leq\frac{r_0}{2}.
\]
Consequently\vspace*{-1.5pt}
\[
e^r_{n,r}(X) - e^r_{n+1,r}(X) \geq(2^{-r} - b^{r}) b^d d(y,\alpha
_n)^{d+r} \varepsilon_0 \mathbf{1} _{\{y\in\bar{B}(x_0,
{r_0}/{2})\}}.
\]
It follows that
\begin{eqnarray*}
e^r_{n,r}(X) - e^r_{n+1,r}(X)
& \geq& (2^{-r} - b^{r}) \varepsilon_0 b^d \int_{\bar{B}(x_0,
{r_0}/{2})} d(y,\alpha_n)^{d+r}\frac{\lambda_d(\mathrm{d}y)}{\lambda_d(\bar
{B}(x_0, {r_0}/{2}))} \\
& \geq& (2^{-r} - b^{r})b^d \varepsilon_0 e_{n,r+d}^{r+d}\bigl(U\bigl(\bar{B}(x_0,r_0/2)\bigr)\bigr),
\end{eqnarray*}
where we used in the last inequality the fact that $\alpha_n$ is
suboptimal for the uniform distribution over $\bar{B}(x_0,\frac
{r_0}{2}).$ As a consequence,
\[
e^r_{n,r}(X) - e^r_{n+1,r}(X) \geq (2^{-r} - b^{r}) b^d \varepsilon
_0 e_{n,r+d}^{r+d}\bigl(U\bigl(\bar{B}(x_0,r_0/2)\bigr)\bigr).
\]
Finally, one completes the proof by noting that, for large enough $n
\geq N(x_0,r_0)$,
\[
e^r_{n,r}(X) - e^r_{n+1,r}(X) \geq\sup_{b\in(0,1/2)} \bigl((2^{-r} -
b^{r})b^d\bigr) \frac{\varepsilon_0}{1+\varepsilon_0} Q_{d+r}\bigl(U\bigl(\bar
{B}(x_0,r_0/2)\bigr)\bigr) n^{-\fracb{d+r}{d}}.
\]
\upqed
\end{pf}
Now we are in position to complete the proof of Theorem~\ref
{prop_princip_limsup}.
\begin{pf*}{Proof of Theorem \ref{prop_princip_limsup}}
Let $a \in M(\alpha_n)$ and $\varepsilon>0$. We have,
\[
e^r_{n-1,r}(X) =\mathbb{E} \vert X - \widehat{X}^{\alpha_{n-1 }} \vert
^r \leq\mathbb{E} \bigl\vert X - \widehat{X}^{\alpha_{n} \backslash\{a\} }
\bigr\vert^r
\]
since $\alpha_{n-1}$ is $L^r$-optimal at level $n-1$. Hence
\begin{eqnarray*}
\mathbb{E} \bigl\vert X - \widehat{X}^{\alpha_{n} \backslash\{ a \} } \bigr\vert
^r & = &
\mathbb{E} \bigl( \vert X - \widehat{X}^{\alpha_{n} } \vert^r \mathbf
{1} _{ \{ X \in C_a^{c}(\alpha_{n}) \}} \bigr) +
\mathbb{E} \Bigl( \min_{b \in\alpha_{n} \backslash\{a \}} \vert X - b
\vert^r \mathbf{1} _{ \{ X \in C_a(\alpha_{n}) \}} \Bigr) \\
& \leq & e^r_{n,r}(X) + \mathbb{E} \Bigl( \min_{b \in\alpha_{n}
\backslash\{ a \} } ( \vert X \vert+ \vert b \vert)^r \mathbf
{1}_{ \{ X \in C_a(\alpha_{n}) \}} \Bigr).
\end{eqnarray*}
It follows from Lemma \ref{lem1_princip_limsup} that, for every
$\varepsilon>0$, there exists $n_{\varepsilon} \in\mathbb{N}$ such
that for every $n \geq n_{\varepsilon}$, $\vert X \vert> \frac{\rho
_n}{c_{r,d}+\varepsilon}$, on the event $\{ X \in C_a(\alpha_{n}) \}$.
Consequently, for all $ b \in\alpha_{n} \backslash\{ a \}, \vert b
\vert\leq\vert a \vert= \rho_n < (c_{r,d}+\varepsilon) \vert X \vert
$. Hence,
\[
e^r_{n-1,r}(X) - e^r_{n,r}(X) \leq(c_{r,d}+1+\varepsilon)^r \mathbb
{E} \bigl(\vert X \vert^r \mathbf{1}_{\{ \vert X \vert> {\rho
_n}/({c_{r,d}+\varepsilon}) \}} \bigr).
\]
Lemma \ref{lem_minor_diff} yields for large enough $n$ (since
$(n-1)^{-\fracb{r+d}{d}} \sim n^{-\fracb{r+d}{d}}$ as $n \rightarrow
+\infty$),
\[
(1+\varepsilon)^{-1} C_{r,d} n^{-\fracb{r+d}{d}} \leq
(c_{r,d}+1+\varepsilon)^r \mathbb{E} \bigl(\vert X \vert^r \mathbf{1}_{\{ \vert X \vert> {\rho_n}/({c_{r,d}+\varepsilon}) \}} \bigr)
\]
so that for every $\varepsilon>0$,
\[
\liminf_{n } \biggl( n^{\fracb{r+d}{d}} \bar{F}_r \biggl(\frac{\rho
_n}{c_{r,d}+\varepsilon} \biggr) \biggr) \geq\frac{C_{r,d}}{
(c_{r,d}+1+\varepsilon)^r (1+\varepsilon) }.
\]
Letting $\varepsilon\rightarrow0$ yields the statement (\ref{eqthm2.6}).
\end{pf*}
\subsection{Applications to distributions with polynomial and
hyper-exponential tails} \label{Upperexplicit}
We next give an explicit
asymptotic upper bound for the convergence rate of the maximal radius
sequence by making the function $\psi_r$ explicit. These bounds are
derived in terms of the rate of decay of the generalized survival
function $\bar{F}_r$.
\begin{prop} \label{cor_princip_limsup} Let $r>0$ and let $X \in
L^{r+}(\mathbb{P})$ with distribution $P$ having an unbounded support
and satisfying $(\mathbf{H})$. Suppose that $(\alpha_n)_{n \geq1}$ is
an $L^r$-optimal sequence of $n$-quantizers for~$X$.
\begin{longlist}[(b)]
\item[(a)] \textup{Polynomial tail.} Set
\begin{equation} \label{Assum_Cor_pol_tail1}
\zeta^{\star}=\sup \Bigl\{ \zeta>0, \limsup_{\xi\rightarrow+\infty} \xi
^{\zeta-r} \bar{F}_{r}(\xi) <+\infty \Bigr\} = \sup \{ \zeta>r,
\mathbb{E} \vert X \vert^{\zeta} <+\infty \}.
\end{equation}
Then $\zeta^{\star} \in(r,+\infty]$ and
\begin{equation} \label{Ineq_Cor_pol_tail1}
\limsup_{n} \frac{\log\rho_n}{\log n} \leq\frac{1}{\zeta^{\star}-r}
\frac{r+d}{d}.
\end{equation}
\item[(b)] \textup{Hyper-exponential tail.} Assume there exists $\kappa>0$
such that $\mathrm{e}^{\vert X \vert^{\kappa}} \in L^{0+}(\mathbb{P})$. Set
\begin{equation} \label{Assum_Cor_Exp_tail1}
{\theta}^{\star} = \sup \Bigl\{ \theta>0, \limsup_{\xi\rightarrow
+\infty} \mathrm{e}^{\theta \xi^{\kappa}} \bar{F}_r(\xi) < +\infty \Bigr\} =
\sup \bigl\{ \theta>0, \mathbb{E} \mathrm{e}^{\theta\vert X \vert^{\kappa}} <
+\infty \bigr\}.
\end{equation}
Then $\theta^{\star} \in(0,+\infty]$ and
\begin{equation} \label{Ineq_Cor_Exp_tail1}
\limsup_{n} \frac{\rho_n}{ (\log n )^{1/ \kappa}} \leq c_{r,d}
\biggl ( \frac{r+d}{ d \theta^{\star} } \biggr)^{1/ \kappa}.
\end{equation}
\end{longlist}
\end{prop}
\begin{rem} If $X \in\bigcap_{r>0} L^r(\mathbb{P}),$
then $\zeta^{\star}=+\infty$ and, consequently, $ \lim_{n\rightarrow+\infty} \frac{\log\rho_n}{\log n} =0$. This
confirms that this asymptotics is not the significant one for
distributions with hyper-exponential tails.\vspace*{-3pt}
\end{rem}
\begin{pf*}{Proof of Proposition~\ref{cor_princip_limsup}}
The equalities in (\ref{Assum_Cor_Exp_tail1}) and (\ref
{Assum_Cor_pol_tail1}) are elementary.
\begin{longlist}[(b)]
\item[(a)] Let $\zeta \in(r,\zeta^{\star})$. We have
\begin{eqnarray*}
\mathbb{E} \bigl(\vert X \vert^r \mathbf{1}_{\{ \vert X \vert> \xi\}
} \bigr)
& = & \mathbb{E} \bigl(\vert X \vert^r \mathbf{1}_{\{ 1< \xi^{-\zeta
+r} \vert X \vert^{\zeta-r} \}} \bigr) \\[-2pt]
& \leq& \xi^{-\zeta+r} \mathbb{E} \vert X \vert^{\zeta}.
\end{eqnarray*}
Then $ -\log\bar{F}_r(\xi) \geq(\zeta-r) \log\xi+ C$, $C \in
\mathbb{R}$, so that by setting $\psi_r(\xi) = \frac{\xi}{\zeta- r}$,
it follows from Theorem~\ref{thm_princip_limsup}(a) that
\[
\limsup_{n} \frac{\log\rho_n}{\log n} \leq\frac{1}{\zeta-r} \frac{r+d}{d}.
\]
Letting $\zeta$ go to $\zeta^{\star}$ yields the assertion (\ref
{Ineq_Cor_pol_tail1}).
\item[(b)] Let $\theta \in(0,\theta^{\star})$. We have
\[ \label{Ineq_tail}
\mathbb{E} \bigl(\vert X \vert^r \mathbf{1}_{\{ \vert X \vert> \xi
\}} \bigr) = \mathbb{E} \bigl (\vert X \vert^r \mathbf{1}_{ \{
\mathrm{e}^{\theta\vert X \vert^{\kappa}} > \mathrm{e}^{\theta\xi^{\kappa}} \} }
\bigr) \leq \mathrm{e}^{-\theta\xi^{\kappa}} \mathbb{E} \bigl (\vert X \vert^r
\mathrm{e}^{\theta\vert X \vert^{\kappa}} \bigr).
\]
Now, the right-hand side of this last inequality is finite because if
$\theta' \in(\theta,\theta^{\star})$, there exists a positive
constant $C_{\theta, \theta'} $ such that, for every $\xi \in\mathbb
{R}^d$, $\vert\xi\vert^r \mathrm{e}^{\theta\vert \xi\vert^{\kappa}} \leq1
+ C_{\theta, \theta'} \mathrm{e}^{\theta' \vert \xi\vert^{\kappa}}$. As a~consequence,
\[
-\log\bar{F}_r(\xi) \geq\theta\xi^{\kappa} + C_{\theta,X}, \qquad
C_{\theta,X} \in\mathbb{R}.
\]
Let $\psi_{\theta} (y)= ( \frac{y}{\theta} )^{1/ \kappa}$.
As a function of $y$, $\psi_{\theta}$ is continuous increasing to
${+}\infty$, regularly varying with index $\delta=\frac{1}{\kappa}$ and
we have
\[
\psi_{\theta}(-\log\bar{F}_r(\xi)) \geq \biggl(\xi^{\kappa} + \frac
{C_{\theta,X}}{\theta} \biggr)^{1/ \kappa} = \xi+ \mathrm{o}(\xi)\qquad\mbox{as } \xi
\rightarrow+\infty.
\]
It follows from Theorem~\ref{thm_princip_limsup}(b) that, for every
$\theta \in(0,\theta^{\star})$,
\[
\limsup_{n} \frac{\rho_n}{ (\log n )^{1/ \kappa}} \leq c_{r,d}
\biggl(\frac{d+r}{d \theta} \biggr)^{1/ \kappa}.
\]
Letting $\theta\rightarrow\theta^{\star}$ completes the proof.
\end{longlist}
\upqed\vspace*{-3pt}
\end{pf*}
We now give more explicit results for two wide classes of density
functions in $\mathbb{R}^d$: the distributions with polynomial tails
and hyper-exponential tails which, among others, include the Pareto,
Gaussian, Weibull, gamma and double-sided gamma distributions, respectively.\vspace*{-3pt}
\begin{cor} \label{cor_gen_density}
\textup{(a)} If the density $f$ of $X$ satisfies
\begin{equation} \label{gen_density_pol_limsup}
\limsup_{|x|\to+ \infty} |x|^c f(x)<+\infty\qquad\mbox{for some }
c>r+d,\vadjust{\goodbreak}
\end{equation}
then $X \in L^{r+}(\mathbb{P})$ and
\begin{equation} \label{Ineq_Cor_pol_tail_density2}
\zeta^{\star} \ge c-d\quad\mbox{and}\quad \limsup_{n} \frac
{\log\rho_n}{\log n} \leq\frac{1}{c-d-r} \frac{r+d}{d}.
\end{equation}
\textup{(b)} If the density of $X$ satisfies
\begin{equation} \label{gen_density_exp_limsup}
\limsup_{|x|\to+ \infty} \frac{\log f(x)}{|x|^{\kappa}} =-\vartheta
<0\qquad\mbox{for some } \kappa>0,
\end{equation}
then $X \in L^{r+}(\mathbb{P})$ and
\begin{equation} \label{Ineq_Cor_Exp_tail_density1}
\theta^{\star}\ge \vartheta \quad\mbox{and}\quad \limsup_{n}
\frac{\rho_n}{ (\log n )^{1/ \kappa}} \leq\frac
{c_{r,d}}{\vartheta^{1/ \kappa}} \biggl(1+ \frac{r}{d} \biggr)^{1/ \kappa}.
\end{equation}
\end{cor}
\begin{pf}
(a)
Let $A$, $B>0$ be such that, for every $x$ with $|x|\ge B$, $
f(x)\le\frac{A}{|x|^c}$. Then, as soon as $\xi\ge B$,
\begin{eqnarray*}
\bar F_r(\xi) = \mathbb{E} \bigl( |X|^r\mathbf{1}_{\{|X|\ge\xi\}}
\bigr)\le A \int_{\{|x|\ge\xi\}} |x|^r \frac{\mathrm{d}x}{|x|^c}= A d V_d
(\operatorname{det}(S))^{-1/2} \frac{\xi^{ r+d-c}}{c-r-d}, &&
\end{eqnarray*}
where $V_d$ denotes the hyper-volume of the unit Euclidean ball of
$\mathbb{R}^d$ and $|x|^2= {}^t xS x$. As a consequence, for any $\zeta
<c-d$ and any $\xi\ge B$,
\[
\xi^{\zeta-r}\bar F_r(\xi) \le A d V_d (\operatorname{det}(S))^{-1/2} \frac{\xi
^{ \zeta+d-c}}{c-r-d}
\]
so that $\overline{\lim}_{\xi\to\infty}
\xi^{ \zeta-r} \bar
F_r(\xi) =0, $ that is, $\zeta^{\star}\ge c-d$ by the definition (\ref{Assum_Cor_pol_tail1})
of $\zeta^{\star}$ in Proposition~\ref{cor_princip_limsup}(a).
(b) It follows from the assumption that, for every $\eta\in
(0,\vartheta/3)$, there exists $B>0$ such that, for every $x$ with
$|x|\ge B$, $ f(x)\le \mathrm{e}^{-(\vartheta-\eta)|x|^{\kappa}}$.
Hence, as soon as $\xi\ge B$,
\begin{eqnarray*}
\bar F_r(\xi) &=& \mathbb{E} \bigl( |X|^r\mathbf{1}_{\{|X|\ge\xi\}
} \bigr)\\
&\le& \int_{\{|x|\ge\xi \}} |x|^r \mathrm{e}^{-(\vartheta-\eta)|x|^{\kappa
}}\,\mathrm{d}x\\
&= & d V_d (\operatorname{det}(S))^{-1/2} \int_{\{ u\ge\xi\}}u^{r+d-1}
\mathrm{e}^{-(\vartheta-\eta) u^{\kappa}}\,\mathrm{d}u
\end{eqnarray*}
so that
\[
\mathrm{e}^{(\vartheta-3\eta)\xi^{\kappa}}\bar F_r(\xi
) \le d V_d (\operatorname{det}(S))^{-1/2} \mathrm{e}^{-\eta\xi^{\kappa}}\int_{\{u\ge B \}}
u^{r+d-1}\mathrm{e}^{-\eta u^{\kappa}}\,\mathrm{d}u.
\]
Consequently, $\theta^{\star}\ge\vartheta-3\eta$ and letting $\eta$ go
to $0$ shows that $\theta^{\star}\ge\vartheta$, which completes the proof.
\noqed\mbox{
}\qed
\end{pf}
\section{Lower estimate and asymptotic rates}\label{sec4}
In this section we study the asymptotic lower estimate of the maximal
radius sequence $(\rho_n)_{n \geq1}$ induced by an $L^{r}$-optimal
sequence of $n$-quantizers. First we introduce the family of the
$(r,s)$-distributions, which will play a crucial role to obtain the
sharp lower estimate of the maximal radius sequence.
Let $r>0$ and let $ s>r$. Since the $L^r$-norm is increasing, it is
clear that, for every $s \leq r$, any $L^r$-optimal sequence of
quantizers $(\alpha_n)_{n \geq1}$ is \textit{$L^s$-rate optimal}, that
is, $\limsup_{n} n^{1/d} \Vert X - \widehat{X}^{\alpha_n} \Vert_s <
+\infty.$
But if $s>r$ (and $X \in L^s(\mathbb{P})$), this asymptotic rate
optimality usually fails. This is always the case when $s>r+d$ and $X$
has a probability density $f$ satisfying $\lambda_d(\{f>0\}) = +\infty
$, as pointed out in \cite{GraLusPag}, Corollaries 3 and 4. It is
established in \cite{Sag} that some linear transformation of the
$L^r$-optimal quantizers $(\alpha_n)$ makes it possible to overcome the
critical exponent $r+d$; that is, one can always construct an
$L^s$-rate-optimal sequence of quantizers up to an affine
transformation of the $L^r$-optimal sequence of quantizers $(\alpha_n)$.
However, there are many (usual) distributions for which $L^s$-rate
optimality does hold for every $s \in[ r,r+d)$. This leads to the
following definition.
\begin{defin} \label{def_(r,s)_dist}
Let $r>0$ and $\nu \in(0,d)$. A random vector $X \in L^{r+}(\mathbb
{P})$ has an \textit{$(r,r+\nu)$-distribution} if any $L^r$-optimal
sequence $(\alpha_n)_{n \geq1}$ is $L^{r+\nu}$-rate optimal, that is,
\[
\limsup_{n} n^{1/d} \Vert X - \widehat{X}^{\alpha_n} \Vert_{r+\nu} <
+\infty.
\]
\end{defin}
Note that if $X$ has an $(r,r+\nu)$-distribution, then $X \in L^{r+\nu
}(\mathbb{P})$. A necessary condition for a distribution $P$ with
density $f$ to have an $(r,r+\nu)$-distribution is (see \cite{GraLusPag}):
\begin{equation} \label{necess_rate_opt}
\int_{\mathbb{R}^d} f(x)^{-\fracc{(r+\nu)}{d+r}} P(\mathrm{d}x) < +\infty.
\end{equation}
For $\nu \in(0,d)$, criteria implying that $X$ has an $(r,r+\nu
)$-distribution have been provided in \cite{GraLusPag}. We mention two
of them below.
\begin{prop}[(Radial tail)] \label{crit_radial} Let $r>0$ and let $X
\in L^{r+}(\mathbb{P})$ with distribution $P= f \lambda_d$ having an
unbounded support that is the intersection of finitely many half-spaces.
\begin{longlist}[(b)]
\item[(a)] Suppose $f$ has a \textit{radial tail}, that is, there
exists a norm $N(\cdot)$ on $\mathbb{R}^{d}$ and $R_0 \in\mathbb{R}_{+}$
such that
\begin{equation}\label{Essradial}
\hspace*{-15pt}f = h(N( \cdot)) \mbox{ on } B_{N(\cdot)}(0,R_0)^{c}, \mbox{ where }
h\dvtx
[R_0,+\infty) \rightarrow \mathbb{R}_{+} \mbox{ is a decreasing
function.}
\end{equation}
Let $\nu \in(0,d)$. If
\begin{equation} \label{asscor1}
\int_{\mathbb{R}^d} f(\rho x)^{- ({r+\nu})/({r+d})} P(\mathrm{d}x) < +\infty
\qquad\mbox{for some $\rho>1,$}
\end{equation}
then $X$ has an $(r,r+\nu)$-distribution.
\item[(b)] Assume $d=1$. If $ \operatorname{supp}(P) \subset[A_0, +\infty
)$ for some $A_0 \in\mathbb{R}$, $f_{|(R_0, +\infty)}$ is decreasing
for some $R_0 \geq A_0$ and if, furthermore, assumption
(\ref{asscor1}) holds for some $\rho>1$, then $X$ has an $(r,r+\nu
)$-distribution.
\end{longlist}
\end{prop}
The following proposition works for distributions with non-radial tails.
\begin{prop} \label{Critere2} Let $r>0$ and let $X \in L^{r+}(\mathbb
{P})$ with distribution $P= f \lambda_d$ having a~convex (unbounded)
support. Assume that $f$ satisfies the following \textit{local decay
control} assumption: There exist real numbers $\varepsilon\geq0, \eta
\in(0,1)$, $M$, $ K >0$ such that
\begin{equation}\label{LocalControl}
\forall x,y \in\operatorname{supp}(P), \vert x \vert\geq M, \vert y-x
\vert\leq \eta\vert x \vert \quad \Longrightarrow \quad f(y) \geq K
f(x)^{1+\varepsilon}.
\end{equation}
Let $\nu \in(0,d)$. If
\begin{equation} \label{Hypoth2}
\int_{\mathbb{R}^d} f(x)^{-\fracc{(r+\nu)(1+\varepsilon)}{r+d}} P(\mathrm{d}x) <
+\infty,
\end{equation}
then $X$ has an $(r,r+\nu)$-distribution.
\end{prop}
It follows from Proposition~\ref{crit_radial} that the Gaussian,
Weibull and gamma distributions are all $(r,r+\nu)$-distributions for
every $\nu \in(0,d)$. The Pareto distribution with index $\gamma>r$
has an $(r,r+\nu)$-distribution if and only if $ \nu \in(0,\frac
{\gamma-r}{\gamma+1})$.
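Indeed, taking as an illustration the usual normalization of the Pareto density,
$f(x)=\gamma x^{-(\gamma+1)}$ on $[1,+\infty)$ (so that $d=1$), the necessary condition
(\ref{necess_rate_opt}) reads
\[
\int_{1}^{+\infty} f(x)^{-\fracc{(r+\nu)}{1+r}} P(\mathrm{d}x)
= \gamma^{1-\fracc{(r+\nu)}{1+r}}\int_{1}^{+\infty} x^{(\gamma+1) (\fracc{(r+\nu)}{1+r}-1 )}\,\mathrm{d}x <+\infty,
\]
which holds if and only if $(\gamma+1)\frac{\nu-1}{1+r}<-1$, that is, $\nu<\frac{\gamma-r}{\gamma
+1}$; the sufficiency of this condition follows from Proposition~\ref{crit_radial}\textup{(b)}
since $f(\rho x)$ is proportional to $f(x)$ on $[1,+\infty)$ for every $\rho>1$.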
More generally, if a distribution $P= f \lambda_d$ is supported by a
convex subset $C$ of $\mathbb{R}^d$ such that
\[
f(x)= \mathrm{e}^{-g(x)^{\kappa}}, \qquad g\dvtx C\to\mathbb{R}_+, \mbox{ Lipschitz
continuous, }\kappa>0,
\]
or
\[
f(x) = \frac{1}{g(x)^c}, \qquad g\dvtx C\to \mathbb{R}_+, \mbox{ Lipschitz
continuous, }g\ge\varepsilon_0 \mbox{ on } B(0,M)^{c}, c>d,
\]
then $P$ satisfies the local decay control criterion (\ref
{LocalControl}) of Proposition~\ref{Critere2} for arbitrarily small
positive $\varepsilon$ and $\varepsilon=0,$ respectively.
Now, suppose that $X$ has an $(r,r+\nu)$-distribution for some $\nu
\in(0,d)$ and set
\[
\nu^{\star}_{X}:= \sup\{\nu>0 \mbox{ s.t. $X$ has an $(r,r+\nu
)$-distribution}\} \in[0,d].
\]
Note that $X \in L^{r+\nu}(\mathbb{P})$ for every $\nu \in(0,\nu
^{\star}_{X})$ and that
\[
\{\nu>0 \mbox{ s.t. $X$ has an $(r,r+\nu)$-distribution}\} = (0,\nu
^{\star}_{X}) \mbox{ or } (0,\nu^{\star}_{X}].
\]
When $ \{\nu>0 \mbox{ s.t. $X$ has an $(r,r+\nu)$-distribution}\} =
\varnothing$, we set
\[
\nu^{\star}_{X}=0^+ \qquad \mbox{with the convention } [0,0^+)=
\{0\}.\vspace*{-2pt}
\]
This convention is consistent with the Zador theorem satisfied by $X
\in L^{r+}(\mathbb{P})$. Note that~$\nu^{\star}_{X}$ may be lower than
$d$, as is the case for the Pareto distribution.
We present below two different approaches to derive the asymptotic
lower bound. The first one is based on tail estimates and involves the
generalized survival functions $\bar{F}_r$ like for the upper estimate.
The second one is based on a new connection with mean random quantization.
\subsection{Distribution tail approach}\vspace*{-2pt}\label{sec4.1}
\subsubsection{General results on asymptotic lower bounds}\label{sec4.1.1}
The main result of this section is the theorem below, which connects
the asymptotic lower estimate for $\rho_n$ with the regularly varying
property of ``the'' asymptotic inverse of $-\log\bar{F}_r$ (or one of
its lower bound).
\begin{theo} \label{thm_princip_liminf} Let $r>0$ and let $X \in
L^{r+}(\mathbb{P})$ be an $\mathbb{R}^d$-valued random variable with
distribution $P$ having an unbounded support.
Let $(\alpha_n)_{n \geq1}$ be an $L^r(P)$-optimal sequence of $n$-quantizers.
\begin{longlist}[(b)]
\item[(a)] Let $\nu \in[0,\nu^{\star}_{X})$. If there is a
non-decreasing function $\psi_{r,\nu}$ going to ${+}\infty$ as $x \to
+\infty$, regularly varying with index $\delta$ and satisfying
\begin{equation} \label{Hyp_liminf2}
\limsup_{\xi\to+ \infty}\frac{\psi_{r,\nu}(-\log\bar{F}_{r+\nu}(\mathrm{e}^{\xi
}))}{\xi} \leq 1,
\end{equation}
then
\begin{equation} \label{eq_thm_gen_liminf2}
\liminf_{n} \frac{\log\rho_n}{\psi_{r,\nu}(\log n)} \geq \biggl( \frac
{r + \nu}{d} \biggr)^{\delta}.
\end{equation}
In particular, if $-\log\bar{F}_{r+\nu}(\mathrm{e}^{x})$ has regular variation
with index $1/ \delta$, then (\ref{eq_thm_gen_liminf2}) holds with
$\psi_{r,\nu}(x)= (-\log\bar{F}_{r+\nu}(\mathrm{e}^{x}))^{\leftarrow}$.
\item[(b)] If $\psi$ is a non-decreasing function going to ${+}\infty
$ as $x \rightarrow+\infty$, regularly varying with index $\delta$ and
satisfying
\begin{equation} \label{Hyp_liminf1}
\limsup_{\xi\to+ \infty}\frac{ \psi(-\log\bar{F}(\xi))}{\xi} \leq1,
\end{equation}
then
\begin{equation} \label{eq_thm_gen_liminf1}
\liminf_{n} \frac{\rho_n}{ \psi(\log n) } \geq \biggl( \frac{r +\nu
^{\star}_{X}}{d} \biggr)^{\delta}.
\end{equation}
If $-\log\bar{F}$ has regular variation of index $1/ \delta$, then
(\ref{eq_thm_gen_liminf1}) holds with $\psi= (-\log\bar
{F})^{\leftarrow}$.
\end{longlist}
\end{theo}
Similarly to the upper limit, one may note that for distributions with
exponential tails, the function $\psi$ does not depend on $r$ and $\nu$
even if in assumption (\ref{Hyp_liminf1}) we take the generalized
survival function $\bar{F}_{r+\nu}$ instead of the regular survival
function $\bar{F}$. However, for distributions with polynomial tails
like the Pareto distribution, the function $\psi_{r,\nu}$ in~(\ref
{Hyp_liminf2}) may depend on $r$ and consideration of the standard
survival function $\bar{F}$ in place of $\bar{F}_{r+\nu}$ would lead to
a less accurate lower bound.\vadjust{\goodbreak}
As for the upper limit, this result essentially relies on a more
abstract result that connects $\rho_n$ and the (generalized) survival
functions $\bar F_r$.
\begin{theo} \label{prop_gen_liminf}
Let $r>0$ and let $X \in L^{r+}(\mathbb{P})$ be an $\mathbb
{R}^d$-valued random variable with distribution $P$. Let $(\alpha_n)_{n
\geq1}$ be an $L^r(P)$-optimal sequence of $n$-quantizers. For every
$\nu \in[0,\nu^{\star}_{X})$, the following statements hold:
\begin{eqnarray}\label{eqthm1.6}
&&\hspace*{-112pt}\textup{(a)} \quad \displaystyle \limsup_{n} \sup_{c>0} \bigl( c^{r+\nu} n^{\fracb{r+\nu
}{d}} \bar{F} ( \rho_n + c ) \bigr) < +\infty.\\
\label{eqthm1.6.1}
&&\hspace*{-112.5pt} \textup{(b)} \quad \displaystyle \limsup_{n} \sup_{u >1} \bigl( ( 1- 1/u )^{r+\nu
} n^{\fracb{r+\nu}{d}} \bar{F}_{r+\nu} ( u \rho_n ) \bigr) <
+\infty.
\end{eqnarray}
\end{theo}
We temporarily admit this theorem to prove Theorem~\ref{thm_princip_liminf}.
\begin{pf*}{Proof of Theorem~\ref{thm_princip_liminf}} Let
us focus on (b) (claim (a) is proved in a similar manner by
considering $\bar{F}_{r+\nu}$ instead of $\bar{F}$, for $\nu \in[0,\nu
^{\star}_{X})$). Let
$ \nu \in[0,\nu^{\star}_{X})$. It follows from (\ref{eqthm1.6})
that for large enough $n$,
\[
- \log\bar{F}( \rho_n +c) \geq- \log(C_{\nu,c}) + \frac{r+\nu}{d}
\log n,
\]
where $C_{\nu,c}$ is a positive real constant depending on the indexing
parameters. We derive from the fact that $\psi$ is non-decreasing and
goes to ${+}\infty$ and from assumption (\ref{Hyp_liminf1}) that
\[
\frac{\rho_n}{\psi(\log n)} \geq \biggl( 1+ \frac{c}{\rho_n} + \frac
{\mathrm{o}(\rho_n)}{ \rho_n} \biggr)^{-1} \frac{\psi( \fracb{r+\nu}{d} \log n -
\log(C_{\nu,c})) }{\psi(\log n)}.
\]
Since $\psi$ is regularly varying with index $\delta$ we have
\[
\forall\nu \in[0,\nu^{\star}_{X}) \qquad \liminf_{n} \frac{\rho
_n}{\psi(\log n)} \geq \biggl( \frac{r+\nu}{d} \biggr)^{\delta}.
\]
When $\nu^{\star}_{_X}>0$, letting $\nu\rightarrow\nu^{\star}_{X}$
yields the announced result.
\end{pf*}
\begin{pf*}{Proof of Theorem~\ref{prop_gen_liminf}}
(a)
Let $n\ge1$, let $c>0$ and let $\nu \in[0,\nu^{\star}_{X})$. Then
\[ \label{eq.bad_estim}
\mathbb{E} \vert X - \widehat{X}^{\alpha_n} \vert^{r+\nu} \geq
\mathbb{E} \Bigl( \min_{a \in\alpha_n}\vert X - a \vert^{r+\nu} \mathbf{1} _{ \{ \vert X \vert> \rho_n + c \}} \Bigr).
\]
On the event $\{\vert X \vert> \rho_n + c \}$, we have $\vert X \vert
> \rho_n +c > \rho_n \geq \vert a \vert$ for every $a \in\alpha_n$. Then
\begin{eqnarray}\label
{estim_error}\label{result_inter}
n^{\fracb{r+\nu}{d}}\mathbb{E} \vert X - \widehat{X}^{\alpha_n} \vert
^{r+\nu} & \geq&n^{\fracb{r+\nu}{d}}
\mathbb{E} \Bigl( \min_{a \in\alpha_n}\vert X - a \vert^{r+\nu}
{\mathbf{1}}_{ \{ \vert X \vert> \rho_n + c \}} \Bigr) \nonumber \\
& \geq& n^{\fracb{r+\nu}{d}}\mathbb{E} \Bigl( \min_{a \in\alpha_n}
(\vert X \vert- \vert a \vert )^{r+\nu} \mathbf{1}_{ \{ \vert X
\vert> \rho_n +c \}} \Bigr) \nonumber
\\[-8pt]
\\[-8pt]
& \geq&n^{\fracb{r+\nu}{d}} \mathbb{E} \bigl ( (\vert X \vert- \rho
_n )^{r+\nu} \mathbf{1}_{ \{ \vert X \vert> \rho_n + c \}}
\bigr) \nonumber\\
& \geq& c^{r+\nu} n^{\fracb{r+\nu}{d}}\mathbb{P} ( \vert X \vert>
\rho_n + c ) .
\nonumber
\end{eqnarray}
Taking the supremum over $c > 0$ and using that $X$ has an $(r,r+\nu
)$-distribution, we complete the proof.
(b) is proved like (a). Inequality (\ref{result_inter})
has the following counterpart: For every $u>1$,
\[
\mathbb{E} \vert X - \widehat{X}^{\alpha_n} \vert^{r+\nu} \geq \mathbb
{E} \bigl( (\vert X \vert- \rho_n )^{r+\nu} \mathbf{1}_{ \{
\vert X \vert> u \rho_n \}} \bigr) \geq\mathbb{E} \bigl (\vert X
\vert^{r+\nu} (1- 1/u )^{r+\nu} \mathbf{1}_{ \{ \vert X
\vert> u \rho_n \}} \bigr).
\]
Inequality (\ref{eqthm1.6.1}) follows from
\[
n^{\fracb{r+\nu}{d}} \mathbb{E} \vert X - \widehat{X}^{\alpha_n} \vert
^{r+\nu} \geq \sup_{u>1} \bigl [ (1- 1/u )^{r+\nu} n^{\fracb
{r+\nu}{d}} \mathbb{E} \bigl(\vert X \vert^{r+\nu} \mathbf{1}_{ \{
\vert X \vert> u \rho_n \}} \bigr) \bigr].
\]
\upqed
\end{pf*}
\subsubsection{Application to distributions with polynomial or
hyper-exponential tails}
The next proposition is the counterpart of Proposition \ref
{cor_princip_limsup} devoted to the asymptotic lower bound.
\begin{prop} \label{cor_princip_liminf} Let $r>0$ and let $X \in
L^{r+}(\mathbb{P})$ be an $\mathbb{R}^d$-valued random variable having
an unbounded support.
\begin{longlist}[(b)]
\item[(a)] \textup{Polynomial tail.} Set
\begin{equation}
\zeta_{\star} = \inf \Bigl\{ \zeta>0, \forall\nu \in[0,\nu^{\star
}_{X}), \liminf_{\xi\rightarrow+\infty} \xi^{\zeta-r -\nu} \bar
{F}_{r+\nu}(\xi) >0 \Bigr\} \in[r+\nu^{\star}_{X}, +\infty].
\end{equation}
Then
\begin{equation}\label{3.14}
\liminf_n \frac{\log\rho_n}{\log n} \geq\frac{1}{\zeta_{\star}-r-\nu
^{\star}_{X}} \frac{r+\nu^{\star}_{X}}{d}.
\end{equation}
\item[(b)] \textup{Hyper-exponential tail.} Set
\begin{equation} \label{def_expon_tail}
\theta_{\star} = \inf \Bigl\{\theta>0, \liminf_{\xi\rightarrow
+\infty} \mathrm{e}^{\theta\xi^{\kappa}} \mathbb{P}(\vert X \vert>\xi) >0
\Bigr\} \in[ 0,+\infty].
\end{equation}
Then, $\theta^\star\le\theta_\star$ and
\begin{equation}
\liminf_{n} \frac{\rho_n}{ (\log n )^{1/ \kappa}} \geq
\biggl(\frac{r + \nu^{\star}_{X}}{d \theta_{\star}} \biggr)^{1/ \kappa}.
\end{equation}
\end{longlist}
\end{prop}
\begin{pf}
(a) Let $\zeta \in(\zeta_{\star}, +\infty)$. For every $\nu \in(0,\nu^{\star}_{X})$, there exists a positive
real constant~$C_{\nu}$ such that $ \bar{F}_{r+\nu}(\xi) \geq C_{\nu}
\xi^{-\zeta+r+\nu}$ for large enough $\xi$. Setting $\psi_{r,\nu}(y) =
\frac{y}{\zeta-r-\nu}$ yields $\psi_{r,\nu}(-\log\bar{F}_{r+\nu}(\xi))
\leq\log\xi+\mathrm{o}(\log\xi)$. It follows from Theorem~\ref
{thm_princip_liminf}(a) that
\[
\liminf_{n} \frac{\log\rho_n}{\log n} \geq\frac{1}{\zeta-r-\nu} \frac
{r+\nu}{d}.
\]
Letting $\nu$ and $\zeta$ go to $\nu^{\star}_{X}$ and $\zeta_{\star}$
yields the announced result.
(b) Let $\theta \in (\theta_{\star},+\infty)$. Then, there exists
a positive real constant $C$ such that $ \bar{F}(\xi) \geq C \mathrm{e}^{-\theta
\xi^{\kappa}}\!$ for large enough\vadjust{\goodbreak} $\xi$. Therefore $ -\log\bar{F}(\xi)
\leq\theta\xi^{\kappa} (1 - \xi^{-\kappa} \log(C) ) $ so that, by
setting $\psi_{\theta}(y) = (y/\theta)^{1/ \kappa}$, we have
\[
\psi_{\theta}(-\log\bar{F}(\xi)) \leq\xi+ \mathrm{o}(\xi).
\]
It follows from Theorem~\ref{thm_princip_liminf}(b) that
\[
\liminf_{n} \frac{\rho_n}{ (\log n )^{1/ \kappa}} \geq
\biggl(\frac{r + \nu^{\star}_{X}}{\theta d} \biggr)^{1/ \kappa}.
\]
Letting $\theta$ go to $\theta_{\star}$ completes the proof. Finally,
the inequality between $\theta_\star$ and $\theta^\star$ is an easy
consequence of the fact that $\bar F_r(\xi) \ge\xi^r\bar F(\xi)$.
\end{pf}
Now we give explicit bounds and rates for several families of
distribution tails (which include most usual distributions). To do so,
we combine asymptotic upper bound results from Section~\ref
{Upperexplicit} with asymptotic lower bound results obtained in this
section. The results below are fully explicit in that we make no \textit{a
priori assumptions} on $\nu^{\star}_{_X}$.
\begin{cor} \label{cor_explic_dens_liminf} Let $r>0$ and let $X \in
L^{r+}(\mathbb{P})$ be an $\mathbb{R}^d$-valued random variable, with
probability density $f$, having an unbounded convex support.
\begin{longlist}[(b)]
\item[(a)]\textup{Polynomial tail.} If there exists $c'\ge c>r+d$
such that
\[
0< \liminf_{|x|\to+\infty} |x|^{c'} f(x)\quad\mbox{and}\quad \limsup
_{|x|\to+\infty} |x|^{c} f(x) <+\infty,
\]
then $f$ satisfies (\ref{LocalControl}),
\begin{eqnarray} \label{gen_density_pol_liminf}
&&d \biggl(1-\frac{d+r}{c'} \biggr)-(r+d) \biggl(1-\frac{c}{c'} \biggr)\le\nu
^{\star}_{X} \le d \biggl(1-\frac{d+r}{c'} \biggr),\nonumber
\\[-8pt]
\\[-8pt]
&& \quad c-d\le \zeta
^{\star}, \qquad \zeta_\star\le c'-d
\nonumber
\end{eqnarray}
and
\[
\frac{1}{c'-r-d} \biggl(1+\frac rd \biggr)\le \liminf_n \frac{\log\rho
_n}{\log n} \le \limsup_n \frac{\log\rho_n}{\log n}\le \frac
{1}{c-r-d} \biggl(1+\frac rd \biggr).
\]
Finally, if $c=c'$, then
\begin{equation}\label{eq_liminf_limsup_pol}
\nu^{\star}_{X} = d \biggl(1-\frac{d+r}{c'} \biggr), \qquad \zeta_{\star} =
\zeta^{\star} = c-d\quad\mbox{and} \quad
\lim_{n} \frac{\log\rho_n}{\log n} = \frac{1}{c-r-d} \biggl(1+\frac
rd \biggr).
\end{equation}
\item[(b)] \textup{Hyper-exponential tail.} If there exists $\kappa>0$ such that
\begin{equation} \label{gen_density_exp_liminf}
\lim_{|x|\to+\infty}\frac{\log f(x)}{|x|^{\kappa}} =- \vartheta \in
(-\infty,0),
\end{equation}
then
\[
\nu^{\star}_{X} = d\quad\mbox{and}\quad \theta_{\star} =\theta
^{\star} = \vartheta\vadjust{\goodbreak}
\]
so that
\begin{equation} \label{eq_liminf_limsup_exp}
\frac{1}{ \vartheta^{1/ \kappa}} \biggl(1+\frac{r }{d} \biggr)^{1/ \kappa
} \leq\liminf_{n} \frac{\rho_n}{ (\log n )^{1/ \kappa}} \leq
\limsup_{n} \frac{\rho_n}{ (\log n )^{1/ \kappa}} \leq\frac{2}{
\vartheta^{1/ \kappa}} \biggl(1+\frac{r}{d} \biggr)^{1/ \kappa}.\quad
\end{equation}
When $d=1$ and $ r \geq1$,
the following sharp rate holds
\begin{equation} \label{eq_liminf_limsup_expdim1}
\lim_{n} \frac{\rho_n}{ (\log n )^{1/ \kappa}} = \biggl(\frac
{r+1}{\vartheta} \biggr)^{1/ \kappa}.
\end{equation}
\end{longlist}
\end{cor}
\begin{remark*} When $d=1$, a one-sided result follows by
considering ``$x\to+ \infty$'' instead of ``$|x|\to\infty$''.
\end{remark*}
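For illustration, and before turning to the proof, consider the Gaussian case $X\sim\mathcal{N}(0,\sigma^2 I_d)$ with $\sigma>0$ (this particular instance is not needed in the sequel). Then $\log f(x)=-\frac{|x|^2}{2\sigma^2}+\mathrm{const}$, so that (\ref{gen_density_exp_liminf}) holds with $\kappa=2$ and $\vartheta=\frac{1}{2\sigma^2}$, and (\ref{eq_liminf_limsup_exp}) reads
\[
\sigma\sqrt{2\bigl(1+\tfrac{r}{d}\bigr)}\leq\liminf_{n} \frac{\rho_n}{\sqrt{\log n}}\leq\limsup_{n} \frac{\rho_n}{\sqrt{\log n}}\leq 2\sigma\sqrt{2\bigl(1+\tfrac{r}{d}\bigr)}.
\]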
\begin{pf*}{Proof of Corollary~\ref{cor_explic_dens_liminf}}
(a) First we need to check that $f$ satisfies the control
criterion~(\ref{LocalControl}) from Proposition~\ref{Critere2}: Let
$A$, $A'$ and $B$ be such that $A'|x|^{-c'}\le f(x)\le A |x|^{-c}$ for
every $x \in\mathbb{R}^d$, $|x|\ge B$. Then, if $\eta \in(0,1)$,
one checks that the criterion is satisfied with $M= \frac{B}{1-\eta}$,
$K= \frac{A'}{A^{{c'}/{c}}}(1+\eta)^{-c'}$ and $\varepsilon=\frac
{c'-c}{c}\ge0$.
Using the fact that $A'|x|^{-c'}\le f(x)$ and that $f$ is a probability density
(so that $f^a$ is locally integrable for every $a \in(0,1]$), and checking (\ref{necess_rate_opt}),
yields the upper bound for $\nu^\star_{_X}$.
Checking the integral criterion (\ref{Hypoth2})
then yields the lower bound.
The lower bound for $\zeta^\star$ is established in Corollary~\ref
{cor_gen_density}. The upper bound is obtained by similar computations
that show that, if $\zeta>c'-d$, then for $\xi$ large enough, $\xi
^{\zeta-r}\bar F_r(\xi) \ge Ad V_d\xi^{\zeta-(c'-d)}$ for some real
constant $A>0$. This shows that $\zeta^{\star} \le c'-d$.
The bounds for~$\zeta_\star$ are obtained by similar computations.
As concerns the lower bound for the radius, one concludes by plugging
all these estimates into (\ref{3.14}). Combining this with
Corollary~\ref{cor_gen_density}(a) completes this part of the proof.
(b) First we need to check that $f$ satisfies the control
criterion (\ref{LocalControl}). We know from assumption (\ref
{gen_density_exp_liminf}) that for every $ \bar\eta \in(0,\vartheta
)$, there exists $B_{\bar\eta}>0$ such that $\mathrm{e}^{-(\vartheta+\bar\eta
)|x|^{\kappa}}\le f(x)\le \mathrm{e}^{(-\vartheta+\bar\eta)|x|^{\kappa}}$, as
soon as $|x|\ge B_{\bar\eta}$. Then, one shows that the criterion is
satisfied with $M=\frac{B_{\bar\eta}}{1-\eta}$, $K=1$, $\varepsilon
=\frac{\vartheta+\bar\eta}{\vartheta-\bar\eta}(1+\eta)^{\kappa}-1$.
Then, one checks that $\nu^{\star}_{_X}\ge d-(r+d)\frac{\varepsilon
}{1+\varepsilon}$ since $\int_{\{|x|\ge B \}} \exp{(-\mu|x|^{\kappa
})}\,\mathrm{d}x<+\infty$ for every $B$, $\mu>0$. Letting $\eta$ and $\bar\eta
\to0$ yields $\nu^{\star}_{_X}=d$.
To compute $\theta_{\star}$, one first notes that, as soon as $\xi\ge
B_{\bar\eta}$,
\begin{eqnarray*}
\mathbb{P}(|X|>\xi) &\ge& d V_d \int_{\{u>\xi\}} \mathrm{e}^{-(\vartheta+\bar
\eta)u^{\kappa}}u^{d-1}\,\mathrm{d}u\\
&=& \mathrm{O}\bigl(\mathrm{e}^{-(\vartheta+\bar\eta)\xi^{\kappa}}\xi^{d-\kappa}\bigr),
\end{eqnarray*}
where the equality follows by a standard argument based on an
integration by parts and a comparison theorem for integrals. As a
consequence $\theta_{\star}\le\vartheta+\bar\eta$, which finally
implies $\theta_{\star}\le\vartheta$. Combining this
with Corollary~\ref{cor_gen_density}(b) and Proposition~\ref
{cor_princip_liminf}(b) yields $\theta_{\star}=\theta^{\star
}=\vartheta$.
\end{pf*}
\begin{pf*}{Proof of Theorem~\ref{1.2}} Claim (a) follows
from the former Corollary~\ref{cor_explic_dens_liminf}(a) once it is
noted that for every $\varepsilon \in(0,c)$, $\liminf_{|x|\to\infty
}|x|^{c+\varepsilon}f(x)>0$ and $\limsup_{|x|\to\infty}
|x|^{c-\varepsilon}f(x)<+\infty$. Claim~(b) directly follows from
(b) in the above corollary.
\end{pf*}
\subsection{An alternative approach based on random quantization}\label
{loweriid}
Random quantization is another tool to compute the lower estimate of
the maximal radius sequence of a random vector $X$ with distribution
$P$. It makes a connection between $\rho_n$ and the maximum of an
i.i.d. sequence of random variables with distribution $P$.
\begin{theo} \label{theoreme1} Let $r>0$ and let $ X \in L^{r+}(\mathbb
{P}) $ be a random variable taking values in $\mathbb{R}^d$ with an
absolutely continuous distribution $P$.
Assume $(\alpha_n)_{n \geq1}$ is a sequence of $L^r(P)$-optimal
$n$-quantizers. Let $( X_k)_{k \geq1}$ be an i.i.d. sequence of
$\mathbb{R}^d$-valued copies of $X$. For every $\nu \in[0,\nu^{\star
}_{X})$ such that $r+\nu\ge1$, there exists a real constant $C_{r,\nu}
\in(0,\infty)$ such that\looseness=1
\begin{equation} \label{first_low_estim}
\liminf_{n} \Bigl( \rho_n - \mathbb{E} \Bigl( \max_{k \leq[n^{(r+\nu
)/d}]} \vert X_k \vert \Bigr) \Bigr) \geq - C_{r,\nu}.
\end{equation}\looseness=0
\end{theo}
\begin{pf} Let $\nu \in[0,\nu^{\star}_{X})$ and
set $\widehat{X}_{k}^{\alpha_n} = \sum_{a \in\alpha_n} a \mathbf{1}_{\{ X_k \in C_a (\alpha_n) \}}$. We have, for integer $m \ge1$,
\begin{eqnarray*}
\rho_n & \geq & {\max_{k \leq m}} \vert\widehat{X}_{k}^{\alpha_n}
\vert \\
& \geq& \sum_{k=1}^{m} {\max_{l \leq m}} \vert\widehat{X}_{l}^{\alpha
_n} \vert\mathbf{1}_{\{ \vert X_k \vert> \max\{ \vert X_i \vert,
i \in\{1,\ldots ,m \}, i\neq k \} \}}\\
& \geq& \sum_{k=1}^{m} \vert\widehat{X}_{k}^{\alpha_n} \vert\mathbf{1}_{\{ \vert X_k \vert> \max_{i \not= k} \vert X_i \vert\}} \\
& \geq& \sum_{k=1}^{m} (\vert X_k \vert- \vert X_k - \widehat
{X}_{k}^{\alpha_n} \vert ) \mathbf{1}_{\{ \vert X_k \vert> {\max
_{i \not= k}} \vert X_i \vert\}}.
\end{eqnarray*}
Taking the expectation of both sides of the previous inequality yields
\[
\rho_n \geq \mathbb{E} {\max_{k \leq m}} \vert X_k \vert- \sum
_{k=1}^{m} \mathbb{E} \bigl(\vert X_k - \widehat{X}_{k}^{\alpha_n}
\vert\mathbf{1}_{\{ \vert X_k \vert> {\max_{i \not= k}} \vert X_i
\vert\}} \bigr).
\]
Furthermore, $ \forall k \geq1$, $ \vert X_k - \widehat{X}_k^{\alpha
_n} \vert\mathbf{1}_{\{ \vert X_k \vert> \max_{i \not= k} \vert X_i
\vert\}}$ and $\vert X_1 - \widehat{X}_1^{\alpha_n} \vert\mathbf{1}_{\{ \vert X_1 \vert> \max_{i \not= 1} \vert X_i \vert\}}$ have the
same distribution. Hence,
\begin{eqnarray*}
\rho_n & \geq & \mathbb{E} {\max_{k \leq m}} \vert X_k \vert- m \mathbb
{E} \bigl(\vert X_1 - \widehat{X}_{1}^{\alpha_n} \vert\mathbf{1}_{\{ \vert X_1 \vert> {\max_{i \not= 1}} \vert X_i \vert\}} \bigr) \\
& \geq& \mathbb{E} {\max_{k \leq m}} \vert X_k \vert- m \Vert X_1 -
\widehat{X}^{\alpha_n}_1 \Vert_{r+\nu} \Bigl (\mathbb{P} \Bigl(\vert X_1
\vert> {\max_{i \not= 1}}\vert X_i \vert \Bigr) \Bigr)^{1-1/(r+\nu)}
\end{eqnarray*}
owing to the H\"{o}lder inequality. Since the events $ \{ \vert X_k
\vert> {\max_{i \not= k}} \vert X_i \vert\}, k=1,\ldots ,m $,
are pairwise disjoint with the same probability, we have $\mathbb{P}
(\vert X_1 \vert> {\max_{i \not= 1}}\vert X_i \vert )\leq\frac{1}{m}.$
Finally,
\[
\rho_n \geq \mathbb{E} {\max_{k \leq m}} \vert X_k \vert- m^{\fracc
{1}{r+\nu}} \Vert X - \widehat{X}^{\alpha_n} \Vert_{r+\nu}.
\]
It follows, by setting $ m = [n^{(r+\nu)/d}] $, that
\[
\liminf_{n} \Bigl ( \rho_n - \mathbb{E} \Bigl( {\max_{k \leq[n^{(r+\nu
)/d}]}} \vert X_k \vert \Bigr) \Bigr) \geq- \limsup_{n} n^{
{1}/{d}}\Vert X - \widehat{X}^{\alpha_n} \Vert_{r+\nu}.\vspace*{-2pt}
\]
The upper limit on the right-hand side is finite since $X$ has an
$(r,r+\nu)$-distribution.\vspace*{-2pt}
\end{pf}
\begin{exam} [(Exponential distribution)] Let $r>0$ and let $X$ be
an exponentially distributed random variable with parameter $\lambda
>0$. If $(\alpha_n)_{n \geq1}$ is an $L^r$-optimal sequence of
$n$-quantizers for~$X,$ then Theorem \ref{theoreme1} implies
\begin{equation} \label{eq_exp_comment}
\liminf_{n} \frac{\rho_n}{ \log n} \geq\frac{r+1}{\lambda},\vspace*{-2pt}
\end{equation}
which corresponds to the sharp rates given by (\ref{asymp_exp})
and (\ref{eq_liminf_limsup_pol}).\vspace*{-2pt}
\end{exam}
Indeed, let $\nu \in(0,\nu^{\star}_{X})$ and let $(X_i)_{i=1,\ldots
,[n^{r+\nu}]}$, be an i.i.d. exponentially distributed sequence of
random variables with parameter $\lambda$. We have for every $u \geq0$,
\[
\mathbb{P}\Bigl(\max_{i \leq[n^{r+\nu}]} X_i \geq u\Bigr) = 1 - \mathbb{P}(X
\leq u)^{[n^{r+\nu}]} = 1 - F(u)^{[n^{r+\nu}]},\vspace*{-2pt}
\]
where $F$ is the distribution function of $X$ (we will denote by $f$
its density function).~Then
\begin{eqnarray*}
\mathbb{E} \Bigl ( \max_{i \leq[n^{r+\nu}]} X_i \Bigr)
& = & \int_0^{+ \infty} \mathbb{P}\Bigl(\max_{i \leq[n^{r+\nu}]} X_i \geq
u\Bigr)\,\mathrm{d}u = \int_0^{+\infty} \bigl(1 - (1-\mathrm{e}^{- \lambda u})^{[n^{r+\nu}]}\bigr)\,\mathrm{d}u \\[-2pt]
& = & \int_0^{+\infty} \bigl (1 + F(u) + \cdots+ F(u)^{[n^{r+\nu}]-1}
\bigr) \frac{f(u)}{\lambda}\,\mathrm{d}u \\[-2pt]
& = & \frac{1}{\lambda} \biggl(1 + \frac{1}{2} + \cdots+ \frac{1}{[n^{r+\nu
}]}\biggr) \\[-2pt]
& \geq & \frac{1}{\lambda} \log(1+ [n^{r+\nu}]) \geq \frac{r+\nu
}{\lambda} \log n.\vspace*{-2pt}
\end{eqnarray*}
Consequently, it follows from the super-additivity of the liminf that
for every $\nu \in(0,1)$,
\[
\liminf_{n} \frac{\rho_n}{\log n} \geq \liminf_{n} \frac{ \rho_n -
\mathbb{E} ( \max_{i \leq[n^{r+\nu}] } X_i )}{\log n} + \liminf
_{n} \frac{\mathbb{E} ( \max_{i \leq[n^{r+\nu}]}X_i )}{ \log
n} \geq \frac{r+ \nu}{\lambda}.\vspace*{-2pt}
\]
The result follows by letting $\nu$ go to $\nu^{\star}_{X}=1$.
In fact, one may easily extend this example to a more general
framework, although, overall, the connection made in Theorem~\ref
{theoreme1} seems less straightforward in terms of deriving explicit
asymptotic lower bounds than the former approach based on more
geometric arguments.\vspace*{-2pt}
\begin{exam} [(Radial distribution with exponential tails)] Let $X$
be an $\mathbb{R}^d$-valued random vector with an unbounded support
having an absolutely continuous distribution with a radial probability
density $f(x)=g(|x|_{S})$ with respect to a Euclidean norm $|\cdot|_{S}$, so that $\bar F(\xi)= K_{d,S} \int_{\xi}^{+\infty}
u^{d-1}g(u)\,\mathrm{d}u$, $\xi>0$, with $ K_{d,S}= d V_d
(\operatorname{det}(S))^{-
1/2}>0$.\vadjust{\goodbreak} Assume that $\bar F(\xi)\ge cf(\xi)$ for $\xi\ge A>0$ for some
real constant $c>0$. Then
\[
\liminf_{n} \frac{\rho_n}{\log n} \ge c(r+\nu_{_X}^{\star}).\vspace*{-2pt}
\]
\end{exam}
\begin{exam} [(Pareto distribution)] Let $X$ be a random variable
having a Pareto distribution with index $\gamma>0 $. If $(\alpha_n)_{n
\geq1}$ is an asymptotically $L^r$-optimal sequence of $n$-quantizers
for $X$, $r \in(0, \gamma)$, then Theorem \ref{theoreme1} yields
\[
\liminf_{n} \frac{\log\rho_n}{ \log n} \geq\frac{r+1}{\gamma+1},\vspace*{-2pt}
\]
which is not the sharp rate given by (\ref{asymp_Par}).\vspace*{-2pt}
\end{exam}
Notice that if $\gamma>r$, then $X \in L^{r+\eta}( \mathbb{P})$ for $
\eta \in (0,\gamma- r )$. Now, to prove this result, let $\nu
\in(0, \nu^{\star}_{X})$ and let $(X_i)_{i\ge1}$ be an i.i.d.
sequence of Pareto-distributed random variables (with index $\gamma$).
We have
\[
\forall m \geq1, \forall u \geq1 \qquad \mathbb{P} \Bigl(\max_{i \leq m}
X_i \leq u \Bigr) = (1-u^{- \gamma})^{m}.\vspace*{-2pt}
\]
Then, the density function of $\max_{1 \leq i \leq m} X_i$ is $ m
\gamma u^{-(\gamma+1)}(1 - u^{-\gamma})^{m-1}.$ Hence,
\begin{eqnarray*}
\mathbb{E} \Bigl( \max_{1 \leq i \leq m } X_i \Bigr) & = & m \gamma \int
_1^{+ \infty} x^{- \gamma} (1-x^{-\gamma})^{m-1}\,\mathrm{d}x
=m B\biggl(1- \frac{1}{ \gamma},m\biggr) \\[-2pt]
& = & \frac{\Gamma(1- {1}/{ \gamma})\Gamma(m+1)}{ \Gamma(m+1-
{1}/{\gamma})}
\sim\Gamma\biggl(1- \frac{1}{ \gamma}\biggr) m^{ {1}/{\gamma}} \qquad\mbox{as } m \rightarrow+\infty,\vspace*{-2pt}
\end{eqnarray*}
where we used Stirling's formula for the last statement ($B(\cdot,\cdot)$
denotes the beta function of the first kind). We finally set $m =
[n^{r+\nu}] $ to get
\[
\mathbb{E} \Bigl ( \max_{1 \leq i \leq[n^{r+\nu}] } X_i \Bigr) \sim
\Gamma\biggl(1- \frac{1}{ \gamma}\biggr) n^{\fracb{r+\nu}{\gamma}}.\vspace*{-2pt}
\]
It follows from (\ref{first_low_estim}) that for every $\varepsilon
\in(0,1)$ and for $n$ large enough, $ \rho_n - (1-\varepsilon) \Gamma(1- \frac{1}{ \gamma})
n^{\fracb{r+\nu}{\gamma}} \geq-( \mathrm{C}_{r,\nu}+\varepsilon)$.
Dividing both sides of the inequality by $n^{\fracb{r+\nu}{\gamma}}$ and
taking the logarithm yields
\[
\log\rho_n - \frac{r+\nu}{\gamma} \log n \geq\log \biggl( (1-\varepsilon
) \Gamma\biggl(1- \frac{1}{ \gamma}\biggr) - (\varepsilon+ \mathrm{C}_{r,\nu})
n^{-\fracb{r+\nu}{\gamma}} \biggr).\vspace*{-2pt}
\]
Consequently ${\liminf}_{n \rightarrow+\infty} \frac{\log\rho
_n}{\log n} \geq\frac{r+\nu}{\gamma}$ for every $\nu \in(0,\nu^{\star
}_{X})$. One concludes by letting $\nu$ go to $\nu^{\star}_{X}= \frac
{\gamma-r}{\gamma+1}$.\vspace*{-2pt}
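As a cross-check, note that with the parametrization used above the Pareto density is $f(x)=\gamma x^{-(\gamma+1)}\mathbf{1}_{\{x\geq1\}}$, whose support is $[1,+\infty)$, so the one-sided version of Corollary~\ref{cor_explic_dens_liminf}(a) (see the remark following its statement) applies with $d=1$ and $c=c'=\gamma+1$. Hence (\ref{eq_liminf_limsup_pol}) gives the sharp exponent
\[
\lim_{n} \frac{\log\rho_n}{\log n}=\frac{1}{c-r-d} \biggl(1+\frac{r}{d} \biggr)=\frac{r+1}{\gamma-r},
\]
which is strictly larger than the exponent $\frac{r+\nu^{\star}_{X}}{\gamma}=\frac{r+1}{\gamma+1}$ obtained above by random quantization.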
\begin{comment*} Let $\phi$ be the inverse (if any)
function of $-\log\bar{F}$. Notice that in both examples above we have
\begin{equation} \label{eq_par_comment}
\lim_n \frac{ \mathbb{E} ({\max_{k \leq[n^{r+\nu^{\star}_{X}}]}}
\vert X_k \vert )}{\phi((r+\nu^{\star}_{X}) \log n)}=1,\vspace*{-2pt}\vadjust{\goodbreak}
\end{equation}
which, for distributions with hyper-exponential tails, leads to the
asymptotic lower bound~(\ref{eq_thm_gen_liminf1}) for the sequence
$(\rho_n)_{n\geq1}$. As concerns the Pareto distribution, using the
approximation~(\ref{eq_par_comment}) to compute the asymptotic lower
estimate of the maximal radius sequence induces the loss of the
``$-r$'' term in the exact asymptotics. To recover this remaining term
we simply have to consider the inverse function of $-\log\bar{F}_{r+\nu^{\star}_{X}}$ (as done in the previous section) instead of $-\log\bar{F}$, and the random quantization approach clearly does not allow us to do so.
\end{comment*}
\subsubsection{A conjecture about the sharp rate}\label{sec4.2.1}
The previous results related to distributions with hyper-exponential
tails strongly suggest the following conjecture: Suppose $X$ has a
distribution with a hyper-exponential tail in the sense of (\ref{gen_density_exp_liminf}). Then, for every $r>0$ and $d \geq1$,
\[
\lim_{n} \frac{\rho_n}{ (\log n )^{1/ \kappa}} = \biggl(\frac
{r+d}{d \theta^{\star}} \biggr)^{1/ \kappa}.
\]
This conjecture is proved for $d=1$ and $r\geq1$. For it to hold in
higher dimensions, one would need to prove that the geometric statement (\ref{equaconjecture}) of Lemma \ref{lem1_princip_limsup} holds true with
$c_{r,d}=1$ for every $r>0$, $d \geq1$.
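Note that, for $d=1$ and $r\geq1$, the conjectured rate is consistent with the results above: by Corollary~\ref{cor_explic_dens_liminf}(b) we have $\theta^{\star}=\vartheta$, and (\ref{eq_liminf_limsup_expdim1}) gives
\[
\lim_{n} \frac{\rho_n}{ (\log n )^{1/ \kappa}} = \biggl(\frac{r+1}{\vartheta} \biggr)^{1/ \kappa},
\]
which is precisely $ \bigl(\frac{r+d}{d \theta^{\star}} \bigr)^{1/ \kappa}$ with $d=1$.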
\begin{appendix}\label{appm}
\section*{Appendix}
\mbox{}\vspace*{-10pt}
\setcounter{equation}{0}
$\rhd$ \textit{Exponential distribution.} To establish $\rho_n = \frac
{r+1}{\lambda} \log n+ \frac{C_{r}}{\lambda} + \mathrm{O} (\frac{1}{n}
)$, we use the following result (see \cite{ForPag}): If $X$ is
exponentially distributed with parameter $\lambda>0,$ then, for any $n
\geq1$, the $L^r$-optimal quantizer $\alpha_n = (\alpha_{n,1}, \ldots ,
\alpha_{n,n})$ is unique and given by
\begin{equation}
\alpha_{n,k} = \frac{1}{\lambda}\Biggl (\frac{a_n}{2} + \sum
_{i=n+1-k}^{n-1} a_i \Biggr), \qquad 1 \leq k \leq n,
\end{equation}
where $(a_k)_{k \geq1}$ is an $\mathbb{R}_{+}$-valued sequence
recursively defined by the following implicit equation: $a_0:= +\infty,
\phi_r(-a_{k+1}):= \phi_r(a_k), k \geq0 $, with $\phi_r(x):= {\int
_0^{x/2}} \vert u \vert^{r-1} \operatorname{sign}(u) \mathrm{e}^{-u}\,\mathrm{d}u $ (convention: $
0^0 = 1$). Furthermore, the sequence $(a_k)_{k \geq1}$ decreases to
zero and for every $k \geq1$, $a_k = \frac{r+1}{k} (1+ \frac
{c_r}{k} + \mathrm{O}(\frac{1}{k^2}) ) $ for some positive real
constant $c_r$. Then it follows that $\lambda\rho_n = \frac{a_n}{2} +
\sum_{i=1}^{n-1} a_i $ so that
\[
\lambda\rho_n = \frac{a_n}{2} + (r+1) \sum_{i=1}^{n-1} \frac{1}{i} +
c_r \sum_{i=1}^{n-1} \frac{1}{i^2} + \sum_{i=1}^{n-1} \mathrm{O}(1/i^3) =
(r+1) \log n + C_r + \mathrm{O} \biggl(\frac{1}{n} \biggr).
\]
$\rhd$ \textit{Pareto distribution.} The proof is similar after
noting that $ \rho_n = \frac{1}{1+a_n} \prod_{i=1}^{n-1} (1+a_i),$
where $(a_n)_{n \geq1}$ is an $\mathbb{R}_{+}$-valued sequence,
decreasing to zero and satisfying, for every $n \geq1$, $ a_n = \frac
{r+1}{(\gamma- r)n} ( 1 + c_r /n + \mathrm{O}(1/n^2) )$ for
some\vadjust{\goodbreak}
real constant $ c_r $. Hence, if $i_0:=\max\{i \mid |a_i|\ge1\}$,
\[
\log(\rho_n) = -\log(1+a_n) +C_{i_0}+ \sum_{i=i_0+1}^{n-1} \biggl( a_i -
\frac{a_i^2}{2} + \mathrm{O}(a_i^3) \biggr) = \frac{r+1}{\gamma-r} \log n +
C_r + \mathrm{O} \biggl(\frac{1}{n} \biggr),
\]
where we used that $\sum_{i=1}^{\infty} a_i^2 < \infty$ and $\sum
_{i=1}^{\infty} \mathrm{O}(a_i^3) < \infty$.
\end{appendix}
\printhistory
\end{document}
\begin{document}
\title[Mixed local and nonlocal quasilinear parabolic equations]
{On the regularity theory for mixed local and nonlocal quasilinear parabolic equations}
\author{Prashanta Garain and Juha Kinnunen}
\address[Prashanta Garain ]
{\newline\indent Department of Mathematics
\newline\indent
Ben-Gurion University of the Negev
\newline\indent
P.O.B. 653
\newline\indent
Beer Sheva 8410501, Israel
\newline\indent
Email: {\tt [email protected]} }
\address[Juha Kinnunen ]
{\newline\indent Department of Mathematics
\newline\indent
Aalto University
\newline\indent
P.O. Box 11100, FI-00076, Finland
\newline\indent
Email: {\tt [email protected]} }
\begin{abstract}
We consider mixed local and nonlocal quasilinear parabolic equations of $p$-Laplace type and discuss several regularity properties of weak solutions for such equations. More precisely, we establish local boundedness of weak subsolutions, lower semicontinuity of weak supersolutions as well as upper semicontinuity of weak subsolutions. We also discuss the pointwise behavior of the semicontinuous representatives. Our main results are valid for sign changing solutions. Our approach is purely analytic and is based on energy estimates and the De Giorgi theory.
\end{abstract}
\subjclass[2010]{35B65, 35B45, 35K59, 35K92, 35M10, 35R11.}
\keywords{Mixed local and nonlocal quasilinear parabolic equation, energy estimates, local boundedness, lower and upper semicontinuity, pointwise behavior, De Giorgi theory.}
\maketitle
\section{Introduction}
We discuss regularity properties of weak solutions $u:\R^N\times(0,T)\to\R$ for the mixed local and nonlocal quasilinear parabolic equation
\begin{equation}\label{maineqn}
\partial_t u+\mathcal{L}u(x,t)-\operatorname{div}\mathcal{B}(x,t,u,\nabla u)=g(x,t,u)\text{ in } \Om\times(0,T),
\end{equation}
where $T>0$, $\Om\subset\mathbb{R}^N$, with $N\geq 2$, is a bounded domain (i.e. bounded, open and connected set) and $\mathcal{L}$ is an integro-differential operator of the form
\begin{equation}\label{fLap}
\mathcal{L} u(x,t)=\text{P.V.}\,\int_{\mathbb{R}^N}\mathcal{A}\big(x,y,t,u(x,t),u(y,t)\big)K(x,y,t)\,dy,
\end{equation}
where $\text{P.V.}$ denotes the principal value and $\mathcal{A}:\mathbb{R}^N\times\mathbb{R}^N\times(0,T)\times\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ is measurable with respect to $(x,y,t)$ and continuous with respect to $(u(x,t),u(y,t))$ satisfying the growth condition
\begin{equation}\label{L}
\begin{split}
C_1|u(x,t)-u(y,t)|^{p-2}\big(u(x,t)-u(y,t)\big)&\leq\mathcal{A}\big(x,y,t,u(x,t),u(y,t)\big)
\\
&\leq C_2|u(x,t)-u(y,t)|^{p-2}\big(u(x,t)-u(y,t)\big)
\end{split}
\end{equation}
for some positive constants $C_1$ and $C_2$.
We assume that $1<p<\infty$, unless otherwise mentioned.
The kernel $K$ is symmetric in $x$ and $y$ such that, for some $0<s<1$ and $\Lambda\geq 1$, we have
\begin{equation}\label{kernel}
\frac{\Lambda^{-1}}{|x-y|^{N+ps}}\leq K(x,y,t)\leq\frac{\Lambda}{|x-y|^{N+ps}},
\end{equation}
for every $x,y\in\R^N$ and $t\in(0,T)$. Here $\mathcal{B}(x,t,u,\zeta):\Om\times(0,T)\times\mathbb{R}^{N+1}\to\mathbb{R}^N$ is a measurable function with respect to $(x,t)$ and continuous with respect to $(u,\zeta)$ such that
\begin{equation}\label{ls}
\mathcal{B}(x,t,u,\zeta)\zeta\geq C_3|\zeta|^p
\quad\text{and}\quad
|\mathcal{B}(x,t,u,\zeta)|\leq C_4|\zeta|^{p-1}
\end{equation}
for almost every $(x,t)\in\Om\times(0,T)$ and for every $(u,\zeta)\in\mathbb{R}^{N+1}$. We assume that the source function $g:\Omega\times(0,T)\times\mathbb{R}\to\mathbb{R}$ is measurable with respect to $(x,t)$ and continuous with respect to $u$, and satisfies
\begin{equation}\label{ghypo}
|g(x,t,u)|\leq\alpha|u|^{l-1}+|h(x,t)|,
\end{equation}
for every $(x,t,u)\in\Om\times(0,T)\times\mathbb{R}$, where $1<l\le\max\{2,p(1+\frac{2}{N})\}$, $\alpha\geq 0$ and $h$ is an integrable function to be made precise later.
There are several interesting equations that satisfy the conditions \eqref{L}, \eqref{kernel} and \eqref{ls}.
The leading example of \eqref{maineqn} is the mixed local and nonlocal parabolic $p$-Laplace equation
\begin{equation}\label{prot}
\partial_t u+a(-\Delta_p)^s u-b\Delta_p u=0\text{ in }\Omega\times(0,T),
\end{equation}
with $a>0$ and $b>0$. This equation is obtained by choosing
\begin{equation}\label{a}
\mathcal{A}(x,y,t,u(x,t),u(y,t))=|u(x,t)-u(y,t)|^{p-2}(u(x,t)-u(y,t)),
\end{equation}
\begin{equation}\label{b}
K(x,y,t)=a|x-y|^{-N-ps}\quad\text{and}\quad\mathcal{B}(x,t,u,\nabla u)=b|\nabla u|^{p-2}\nabla u.
\end{equation}
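With these choices one checks directly that the structure conditions are satisfied: \eqref{L} holds with $C_1=C_2=1$, the kernel condition \eqref{kernel} holds with $\Lambda=\max\{a,a^{-1}\}$, and \eqref{ls} holds with $C_3=C_4=b$, since $\mathcal{B}(x,t,u,\zeta)\zeta=b|\zeta|^{p}$ and $|\mathcal{B}(x,t,u,\zeta)|=b|\zeta|^{p-1}$.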
This kind of equation appears in image processing, L\'evy processes, etc.; see Dipierro-Valdinoci \cite{DV21} and the references therein.
For $\mathcal{A}$ as in \eqref{a} and $a=1$ in \eqref{b}, the operator $\mathcal{L}$ defined by \eqref{fLap} becomes the fractional $p$-Laplace operator $(-\Delta_p)^s$.
For $b=1$ in \eqref{b},
$$
\operatorname{div}\big(\mathcal{B}(x,t,u,\nabla u)\big)=\operatorname{div}(|\nabla u|^{p-2}\nabla u)=\Delta_p u
$$
is the $p$-Laplace operator.
We would like to emphasize that \eqref{maineqn} also extends the following mixed Finsler and fractional $p$-Laplace equation
\begin{equation}\label{mff}
\partial_t u+(-\Delta_p)^s u=\Delta_{\mathcal{F},p}\,u\text{ in } \Om\times(0,T),
\end{equation}
where
\begin{equation}\label{fo}
\Delta_{\mathcal{F},p}\,u=\text{div}\big(\mathcal{F}(\nabla u)^{p-1}\nabla_{\eta}\mathcal{F}(\nabla u)\big),
\end{equation}
is the Finsler $p$-Laplace operator, with $\nabla_{\eta}$ denoting the gradient operator with respect to the $\eta$ variable. Here $\mathcal{F}:\R^N\to[0,\infty)$ is the Finsler-Minkowski norm,
that is, $\mathcal{F}$ is a nonnegative convex function in $C^1(\R^N\setminus\{0\})$ such that $\mathcal{F}(\eta)=0$ if and only if $\eta=0$, and $\mathcal{F}$ is even and positively homogeneous of degree $1$, so that
\begin{equation}\label{eh}
\mathcal{F}(t\eta)=|t|\mathcal{F}(\eta)
\quad\text{for every $\eta\in\R^N$ and $t\in\mathbb{R}$}.
\end{equation}
Then, it follows that $\mathcal{B}(x,t,u,\nabla u)=\mathcal{F}(\nabla u)^{p-1}\nabla_{\eta}\mathcal{F}(\nabla u)$ satisfies the hypothesis \eqref{ls}, see Xia \cite[Chapter $1$]{Xiathesis}; a short sketch of this verification is given after \eqref{mpf} below. Various examples of the Finsler-Minkowski norm $\mathcal{F}$ can be found, for example, in Belloni-Ferone-Kawohl \cite{BFKzamp}, Xia \cite[pp. 22--23]{Xiathesis} and the references therein. A typical example of $\mathcal{F}$ is the $l^q$-norm defined by
\begin{equation}\label{ex1}
\mathcal{F}(\eta)=\Big(\sum_{i=1}^{N}|\eta_i|^q\Big)^\frac{1}{q},\quad q>1,
\end{equation}
where $\eta=(\eta_1,\eta_2,\ldots,\eta_N)$. When $\mathcal{F}$ is the $l^q$-norm as in \eqref{ex1}, we have
\begin{equation}\label{exf}
\Delta_{\mathcal{F},p}\,u=\sum_{i=1}^{N}\frac{\partial}{\partial x_i}\bigg(\Big(\sum_{k=1}^{N}\Big|\frac{\partial u}{\partial x_k}\Big|^{q}\Big)^\frac{p-q}{q}\Big|\frac{\partial u}{\partial x_i}\Big|^{q-2}\frac{\partial u}{\partial x_i}\bigg).
\end{equation}
For $q=2$ in \eqref{exf}, $\Delta_{\mathcal{F},p}$ becomes the usual $p$-Laplace operator $\Delta_p$. Moreover, for $q=p$ in \eqref{exf}, the operator $\Delta_{\mathcal{F},p}$ reduces to the pseudo $p$-Laplace operator, see Belloni-Kawohl \cite{BKesaim}. Therefore, equation \eqref{mff} covers a wide range of mixed local and nonlocal problems and, in particular, extends the following mixed pseudo and fractional $p$-Laplace equation
\begin{equation}\label{mpf}
\partial_t u+(-\Delta_p)^s u=\sum_{i=1}^{N}\frac{\partial}{\partial x_i}\Big(\Big|\frac{\partial u}{\partial x_i}\Big|^{p-2}\frac{\partial u}{\partial x_i}\Big)
\text{ in }\Om\times(0,T).
\end{equation}
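For completeness, we sketch why $\mathcal{B}(x,t,u,\zeta)=\mathcal{F}(\zeta)^{p-1}\nabla_{\eta}\mathcal{F}(\zeta)$ satisfies \eqref{ls}; for the details we refer to Xia \cite[Chapter $1$]{Xiathesis}. Since $\mathcal{F}$ is continuous, positively homogeneous of degree $1$ and vanishes only at the origin, there exist constants $0<c_{\mathcal{F}}\leq C_{\mathcal{F}}$ such that $c_{\mathcal{F}}|\zeta|\leq\mathcal{F}(\zeta)\leq C_{\mathcal{F}}|\zeta|$ for every $\zeta\in\mathbb{R}^N$. By Euler's identity for homogeneous functions, $\nabla_{\eta}\mathcal{F}(\zeta)\cdot\zeta=\mathcal{F}(\zeta)$ for $\zeta\neq 0$, so that
$$
\mathcal{B}(x,t,u,\zeta)\zeta=\mathcal{F}(\zeta)^{p}\geq c_{\mathcal{F}}^{p}|\zeta|^p.
$$
Moreover, $\nabla_{\eta}\mathcal{F}$ is homogeneous of degree $0$ and continuous on $\mathbb{R}^N\setminus\{0\}$, hence bounded on the unit sphere, which gives
$$
|\mathcal{B}(x,t,u,\zeta)|\leq\mathcal{F}(\zeta)^{p-1}\sup_{|\omega|=1}|\nabla_{\eta}\mathcal{F}(\omega)|\leq C_4|\zeta|^{p-1},
$$
with $C_4=C_{\mathcal{F}}^{p-1}\sup_{|\omega|=1}|\nabla_{\eta}\mathcal{F}(\omega)|$; the case $\zeta=0$ is trivial. Thus \eqref{ls} holds with $C_3=c_{\mathcal{F}}^{p}$ and this $C_4$.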
Before proceeding further, let us discuss some known results.
In the purely local case $a=0$, the equation \eqref{prot} has been studied thoroughly over the last decades. Local boundedness, Harnack inequalities, H\"older continuity among several other regularity results are discussed in DiBenedetto \cite{Dibe} and DiBenedetto-Gianazza-Vespri \cite{DiBGV}.
For lower semicontinuity and further properties of weak supersolutions, we refer to Kinnunen-Lindqvist \cite{KLin}, Kuusi \cite{Kuusilsc}, Liao \cite{Liao} and the references therein.
In the purely nonlocal case $b=0$,
existence, uniqueness and asymptotic behavior of strong solutions of \eqref{prot} were studied by Maz\'{o}n-Rossi-Toledo \cite{MazonRT} and V\'azquez \cite{Vazquez}.
Str\"omqvist \cite{Mlb} obtained a local boundedness result for weak subsolutions with $p>2$.
Brasco-Lindgren-Str\"omqvist \cite{BLM} obtained local H\"older continuity result based on the method of discrete differentiation. By an alternative approach, Ding-Zhang-Zhou \cite{DZZ} established local H\"older continuity along with local boundedness result. Lower semicontinuity result for doubly nonlinear nonlocal problems can be found in Banerjee-Garain-Kinnunen \cite{BGKlsc}.
In the steady state case, equation \eqref{prot} with $a=b=1$ reduces to
\begin{equation}\label{me}
-\Delta_p u+(-\Delta_p)^s u=0.
\end{equation}
Foondun \cite{Fo} proved Harnack and H\"older continuity results for \eqref{me} with $p=2$. For an alternative approach to obtain Harnack inequality for \eqref{me}, see Chen-Kim-Song-Vondra\v{c}ek \cite{CKSV}. Existence and symmetry results together with various other qualitative properties of solutions of \eqref{me} have recently been studied by Biagi-Dipierro-Valdinoci-Vecchi \cite{BSVV2, BSVV1}, Dipierro-Proietti Lippi-Valdinoci \cite{DPV20, DPV21} and Dipierro-Ros-Oton-Serra-Valdinoci \cite{DRXJV20}.
Much less is known in the nonlinear case $p\neq 2$ of \eqref{me}. For this, we refer to Buccheri-da Silva-Miranda \cite{Silvaarxiv}, da Silva-Salort \cite{Silvas}, Biagi-Mugnai-Vecchi \cite{BMV}, Garain-Kinnunen \cite{GK} and Garain-Ukhlov \cite{GU}.
Concerning mixed parabolic equations, Barlow-Bass-Chen-Kassmann \cite{BBCK} obtained a Harnack inequality for the linear equation
\begin{equation}\label{mp}
\partial_t u+(-\Delta)^s u-\Delta u=0.
\end{equation}
Chen-Kumagai \cite{CK} also proved a Harnack inequality along with a local H\"older continuity result.
Recently, Garain-Kinnunen \cite{GKwh} proved a weak Harnack inequality for \eqref{mp} by analytic means.
The main purpose of this article is to establish local boundedness of weak subsolutions (Theorem \ref{thm1}-Theorem \ref{lbthm2}).
Further, we provide lower and upper semicontinuous representatives of weak supersolutions (Theorem \ref{lscthm1}) and weak subsolutions (Theorem \ref{uscthm1}) of \eqref{maineqn} respectively. We also investigate the pointwise behavior of such representatives (Theorem \ref{lscpt}-\ref{uscpt}).
Moreover, all of our main results are valid for sign changing solutions.
Shang-Fang-Zhang \cite{ShangFZ} recently established the local boundedness result for the equation
\begin{equation}\label{zeqn}
\partial_t u+(-\Delta_p)^s u-\Delta_pu=0\text{ in } \Om\times(0,T),
\end{equation}
but our results cover a wider class of equations.
As far as we are aware, all our main results are new, even for the homogeneous case $g\equiv 0$ in \eqref{maineqn}.
In contrast to the approach from probability and analysis introduced in \cite{BBCK, CK}, we study the problem \eqref{maineqn} with a purely analytic approach. To this end, we adopt the approach from Castro-Kuusi-Palatucci \cite{Kuusilocal}, Ding-Zhang-Zhou \cite{DZZ} to the mixed problem \eqref{maineqn} to obtain the local boundedness result (Theorem \ref{thm1}-Theorem \ref{lbthm2}).
Due to the nonlocality, a tail quantity defined by \eqref{taildef} appears in our local boundedness estimate. This captures both the local and nonlocal behavior of the equation \eqref{maineqn}. We refer to Kassmann \cite{KassmanHarnack}, Castro-Kuusi-Palatucci \cite{Kuusilocal}, Brasco-Lindgren-Schikorra-Str\"omqvist \cite{BrascoLind, BLS,BLM}, Ding-Zhang-Zhou \cite{DZZ} for the purely nonlocal tail and Garain-Kinnunen \cite{GK, GKwh} for the mixed local and nonlocal tail. To provide the semicontinuous representatives (Theorem \ref{lscthm1} and Theorem \ref{uscthm1}) and their pointwise behavior (Theorem \ref{lscpt} and Theorem \ref{uscpt}), we use the theory from Liao \cite{Liao} and adopt the approach from Banerjee-Garain-Kinnunen \cite{BGKlsc,GK}. We obtain energy estimates and De Giorgi type lemmas to prove our main results.
In Section $2$, we discuss the functional setting for the problem \eqref{maineqn} and state some useful results. Section $3$ is devoted to the statement and proof of the local boundedness result. In Section $4$, we state and prove the semicontinuity results and investigate their pointwise behavior.
\section{Functional setting and auxiliary results}
In this section, we discuss the functional setting of weak solutions for the problem \eqref{maineqn} and state some useful results.
\subsection{Notation} The following notation will be used throughout the paper.
We denote the positive and negative parts of $a\in\R$ by $a_+=\max\{a,0\}$ and $a_-=\max\{-a,0\}$ respectively.
The conjugate exponent of $t>1$ is written as $t'=\frac{t}{t-1}$. The Lebesgue measure of a set $S$ is denoted by $|S|$.
The barred integral sign $\fint_{S}f$ denotes the integral average of $f$ over $S$.
We write $C$ to denote a constant which may vary from line to line or even in the same line.
The dependencies of the constant $C$ on the parameters $r_1,r_2,\ldots, r_k$ are indicated as $C=C(r_1,r_2,\ldots,r_k)$.
For $(r,\theta)\in(0,\infty)\times\R$ and $(x_0,t_0)\in\mathbb{R}^{N+1}$, we consider an open ball $B_r(x_0)$ of radius $r$ with centre at $x_0$
and a cylinder $\mathcal{Q}_{r,\theta}(x_0,t_0)=B_{r}(x_0)\times(t_0-\theta r^p,t_0+\theta r^p)$.
If $\theta=1$, we write $\mathcal{Q}_r(x_0,t_0)=\mathcal{Q}_{r,\theta}(x_0,t_0)$.
Moreover, we write $\Om_T=\Om\times(0,T)$ with $T>0$.
\subsection{Sobolev spaces}
Let $1<p<\infty$ and assume that $\Om\subset\mathbb{R}^N$ is an open connected set. Recall that the Lebesgue space $L^p(\Om)$ is the set of measurable functions $u:\Om\to\R$ such that $\|u\|_{L^p(\Om)}<\infty$, where
$$
\|u\|_{L^p(\Om)}=\left(\int_{\Om}|u(x)|^p\,dx\right)^\frac{1}{p}.
$$
If $0<p\leq 1$, we denote by $L^p(\Om)$ the set of measurable functions $u:\Om\to\R$ such that $\int_{\Om}|u(x)|^p\,dx<\infty$. We say that $u\in L^p_{\mathrm{loc}}(\Om)$ if $u\in L^p(\Om')$ for every $\Om'\Subset\Om$.
For $1<p<\infty$, the Sobolev space $W^{1,p}(\Omega)$ is defined by
$$
W^{1,p}(\Om)=\{u\in L^p(\Om):\|u\|_{W^{1,p}(\Om)}<\infty\},
$$
where
$$
\|u\|_{W^{1,p}(\Om)}=\left(\int_{\Om}|u(x)|^p\,dx+\int_{\Om}|\nabla u(x)|^p\,dx\right)^\frac{1}{p}.
$$
The Sobolev space $W_0^{1,p}(\Om)$ with zero boundary value is defined by
$$
W_0^{1,p}(\Om)=\{u\in W^{1,p}(\mathbb{R}^N):u=0\text{ in }\mathbb{R}^N\setminus\Om\}.
$$
We present some known theory of fractional Sobolev spaces, see Di Nezza-Palatucci-Valdinoci \cite{Hitchhiker'sguide} for more details.
\begin{Definition}
Let $1<p<\infty$ and $0<s<1$. Assume that $\Omega$ is an open and connected set in $\mathbb R^N$. The fractional Sobolev space $W^{s,p}(\Omega)$ is defined by
$$
W^{s,p}(\Omega)=\{u\in L^p(\Omega):\|u\|_{W^{s,p}(\Om)}<\infty\},
$$
where
$$
\|u\|_{W^{s,p}(\Omega)}=\left(\int_{\Omega}|u(x)|^p\,dx+\int_{\Omega}\int_{\Omega}\frac{|u(x)-u(y)|^p}{|x-y|^{N+ps}}\,dx\,dy\right)^\frac{1}{p}.
$$
The fractional Sobolev space with zero boundary value is defined by
$$
W_{0}^{s,p}(\Omega)=\{u\in W^{s,p}(\mathbb{R}^N):u=0\text{ in }\mathbb{R}^N\setminus\Omega\}.
$$
\end{Definition}
For $0<s\leq 1$, the space $W^{s,p}_{\mathrm{loc}}(\Omega)$ is defined by requiring that a function belongs to $W^{s,p}(\Omega')$ for every $\Omega'\Subset\Omega$.
Here $\Omega'\Subset\Omega$ denotes that $\overline{\Omega'}$ is a compact subset of $\Omega$.
The Sobolev spaces $W^{s,p}(\Omega)$ and $W_{0}^{s,p}(\Omega)$, with $1<p<\infty$ and $0<s\leq 1$, are reflexive Banach spaces, see \cite{Hitchhiker'sguide, Maly, Evans}.
The next result asserts that the classical Sobolev space is continuously embedded in the fractional Sobolev space, see \cite[Proposition 2.2]{Hitchhiker'sguide}.
\begin{Lemma}\label{locnon}
Let $1<p<\infty$, $0<s<1$ and assume that $\Omega$ is a smooth bounded domain in $\mathbb{R}^N$. There exists a constant $C=C(N,p,s)$ such that
$$
||u||_{W^{s,p}(\Omega)}\leq C||u||_{W^{1,p}(\Omega)}
$$
for every $u\in W^{1,p}(\Omega)$.
\end{Lemma}
The following result for the fractional Sobolev spaces with zero boundary value follows from \cite[Lemma 2.1]{Silvaarxiv}.
\begin{Lemma}\label{locnon1}
Let $1<p<\infty$, $0<s<1$ and assume that $\Omega$ is a bounded domain in $\mathbb{R}^N$. There exists a constant $C=C(N,p,s,\Omega)$ such that
\[
\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}\frac{|u(x)-u(y)|^p}{|x-y|^{N+ps}}\,dx\, dy
\leq C\int_{\Omega}|\nabla u(x)|^p\,dx
\]
for every $u\in W_0^{1,p}(\Omega)$.
Here we consider the zero extension of $u$ to the complement of $\Omega$.
\end{Lemma}
Next we introduce a tail space and a mixed parabolic tail that will be used throughout the paper. We refer to Brasco-Lindgren-Schikorra \cite{BLS} for the tail space.
\begin{Definition}
Let $m_1>0$ and $m_2>0$. We define a tail space $L_{m_1}^{m_2}(\mathbb{R}^N)$ by
\begin{equation}\label{tailspdef}
L_{m_1}^{m_2}(\mathbb{R}^N)=\bigg\{u\in L^{m_2}_{\mathrm{loc}}(\mathbb{R}^N):\int_{\mathbb{R}^N}\frac{|u(y)|^{m_2}}{1+|y|^{N+m_1}}\,dy<\infty\bigg\},
\end{equation}
endowed with the norm
$$
\|u\|_{L_{m_1}^{m_2}(\mathbb{R}^N)}=\left(\int_{\mathbb{R}^N}\frac{|u(y)|^{m_2}}{1+|y|^{N+m_1}}\,dy\right)^\frac{1}{m_2}.
$$
\end{Definition}
\begin{Definition}
Let $(x_0,t_0)\in\mathbb{R}^N\times(0,T)$ and let $I=[t_0-T_1,t_0+T_2]\subset(0,T)$ be an interval with $T_1,T_2>0$.
We define the mixed parabolic tail by
\begin{equation}\label{taildef}
\begin{split}
\mathrm{Tail}_{\infty}(u;x_0,r,I)
&=\bigg(r^p\esssup_{t\in I}\int_{\mathbb{R}^N\setminus B_r(x_0)}\frac{|u(y,t)|^{p-1}}{|y-x_0|^{N+ps}}\,dy\bigg)^\frac{1}{p-1}.
\end{split}
\end{equation}
\end{Definition}
From the definitions above, it is clear that for any $v\in L^\infty(I,L^{p-1}_{ps}(\mathbb{R}^N))$, the parabolic tail $\mathrm{Tail}_{\infty}(v;x_0,r,I)$ is well defined.
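Indeed, for every $y\in\mathbb{R}^N\setminus B_r(x_0)$ we have $1+|y|\leq 1+|x_0|+|y-x_0|\leq\frac{1+|x_0|+r}{r}|y-x_0|$, and therefore
$$
\frac{1}{|y-x_0|^{N+ps}}\leq 2\Big(\frac{1+|x_0|+r}{r}\Big)^{N+ps}\frac{1}{1+|y|^{N+ps}},
$$
so that
$$
\mathrm{Tail}_{\infty}(v;x_0,r,I)^{p-1}\leq C\esssup_{t\in I}\|v(\cdot,t)\|_{L^{p-1}_{ps}(\mathbb{R}^N)}^{p-1}<\infty,
$$
with a constant $C=C(N,p,s,|x_0|,r)$.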
For short, we write
\[
\mathcal{A}\big(x,y,t,u(x,t),u(y,t)\big)=\mathcal{A}(u(x,y,t)).
\]
Now we are ready to define the notion of weak solutions for the problem \eqref{maineqn}.
\begin{Definition}\label{wksoldef}
Let $g$ satisfy the hypothesis \eqref{ghypo} for $1<l\leq\max\{2,p(1+\frac{2}{N})\}$ and $h\in L^{l'}_{\mathrm{loc}}(\Om_T)$. We say that $u\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ is a weak subsolution (or supersolution) of \eqref{maineqn} if for every $\Om'\times(t_1,t_2)\Subset\Om_T$ and for every nonnegative $\phi\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{0}(\Om')\big)\cap W^{1,2}_{\mathrm{loc}}\big(0,T;L^2(\Om')\big)$, we have
\begin{equation}\label{wksol}
\begin{split}
&\int_{\Om'}u(x,t_2)\phi(x,t_2)\,dx-\int_{\Om'}u(x,t_1)\phi(x,t_1)\,dx-\int_{t_1}^{t_2}\int_{\Om'}u(x,t)\partial_t\phi(x,t)\,dx\,dt\\
&+\int_{t_1}^{t_2}\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}\mathcal{A}\big(u(x,y,t)\big)\big(\phi(x,t)-\phi(y,t)\big)K(x,y,t)\,dx\,dy\,dt\\
&+\int_{t_1}^{t_2}\int_{\Om'}\mathcal{B}(x,t,u,\nabla u)\nabla\phi(x,t)\,dx\,dt\leq (\text{ or }\geq) \int_{t_1}^{t_2}\int_{\Om'}g(x,t,u)\phi(x,t)\,dx\,dt.
\end{split}
\end{equation}
Moreover, we say that $u$ is a weak solution of \eqref{maineqn} if the equality holds in \eqref{wksol} for every $\phi\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{0}(\Om')\big)\cap W^{1,2}_{\mathrm{loc}}\big(0,T;L^2(\Om')\big)$ without a sign restriction.
\end{Definition}
\begin{Remark}\label{wksolrmk}
Since $u\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)$, by Lemma \ref{Sobo} (a) below, we have $u\in L^l_{\mathrm{loc}}(0,T;L^l_{\mathrm{loc}}(\Om))$ for $1<l\leq\max\{2,p(1+\frac{2}{N})\}$. Therefore, the integral involving $g$ in the right-hand side of \eqref{wksol} is finite.
This fact, along with Lemma \ref{locnon} and Lemma \ref{locnon1}, implies that Definition \ref{wksoldef} is well stated.
\end{Remark}
\subsection{Auxiliary results}
The following result follows from \cite[Proposition $3.1$ and Proposition $3.2$]{Dibe}.
\begin{Lemma}\label{Sobo}
Let $p,m\in[1,\infty)$ and $q=p(1+\frac{m}{N})$.
Assume that $\Om$ is a bounded smooth domain in $\mathbb{R}^N$.
\begin{enumerate}
\item[(a)] If $u\in L^p\big(0,T;W^{1,p}(\Om)\big)\cap L^\infty\big(0,T;L^m(\Om)\big)$, then $u\in L^q\big(0,T;L^q(\Om)\big)$.
\item[(b)] Moreover, if $u\in L^p\big(0,T;W^{1,p}_{0}(\Om)\big)\cap L^\infty\big(0,T;L^m(\Om)\big)$, then there exists a constant $C=C(p,m,N)>0$ such that
\begin{equation}\label{Soboine}
\int_{0}^{T}\int_{\Om}|u(x,t)|^q\,dx\,dt\leq C\bigg(\int_{0}^{T}\int_{\Om}|\nabla u(x,t)|^p\,dx\,dt\bigg)\bigg(\esssup_{0<t<T}\int_{\Om}|u(x,t)|^m\,dx\bigg)^\frac{p}{N}.
\end{equation}
\end{enumerate}
\end{Lemma}
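In what follows, Lemma \ref{Sobo} is typically applied with $m=2$, in which case $q=p(1+\frac{2}{N})=p\kappa$, where $\kappa=1+\frac{2}{N}$ is the exponent appearing in the local boundedness results of Section $3$.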
For $a,k\in\R$, we define the functions $\zeta_+$ and $\zeta_-$ by
\begin{equation}\label{xi}
\zeta_+(a,k)=\int_{k}^{a}(\eta-k)_{+}\,d\eta
\quad\text{and}\quad
\zeta_-(a,k)=-\int_{k}^{a}(\eta-k)_{-}\,d\eta.
\end{equation}
Note that $\zeta_+\geq 0$, $\zeta_-\geq 0$. The following result follows from \cite[Lemma 2.2]{Verenacontinuity}.
\begin{Lemma}\label{Auxfnlemma}
There exists a constant $\lambda>0$ such that, for every $a,k\in\mathbb{R}$,
$$
\frac{1}{\lambda}(a-k)_{+}^2\leq \zeta_+(a,k)\leq\lambda(a-k)_{+}^2
\quad\text{and}\quad
\frac{1}{\lambda}(a-k)_{-}^2\leq \zeta_-(a,k)\leq\lambda(a-k)_{-}^2.
$$
\end{Lemma}
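In fact, a direct computation from \eqref{xi} shows that
$$
\zeta_+(a,k)=\frac{(a-k)_{+}^2}{2}
\quad\text{and}\quad
\zeta_-(a,k)=\frac{(a-k)_{-}^2}{2}
\quad\text{for every } a,k\in\mathbb{R},
$$
so that Lemma \ref{Auxfnlemma} holds with $\lambda=2$.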
The following iteration lemma is from \cite[Lemma $4.3$]{HS15}.
\begin{Lemma}\label{ite}
Let $\{Y_j\}_{j=0}^{\infty}$ be a sequence of positive real numbers such that
$$
Y_{j+1}\leq K b^j(Y_j^{1+\delta_1}+Y_j^{1+\delta_2}),\quad j\in\mathbb{N}\cup\{0\},
$$
where $K>0$, $b>1$ and $\delta_2\geq\delta_1>0$. If
$$
Y_0\leq\min\Big\{1,(2K)^{-\frac{1}{\delta_1}}b^{-\frac{1}{\delta_1^{2}}}\Big\}
\quad\text{or}\quad
Y_0\leq\min\Big\{(2K)^{-\frac{1}{\delta_1}}b^{-\frac{1}{\delta_1^{2}}},(2K)^{-\frac{1}{\delta_2}}b^{-\frac{1}{\delta_1\delta_2}-\frac{\delta_2-\delta_1}{\delta_2^{2}}}\Big\},
$$
then $Y_j\leq 1$ for some $j\in\mathbb{N}\cup\{0\}$. Moreover,
$$
Y_j\leq \min\Big\{1,(2K)^{-\frac{1}{\delta_1}}b^{-\frac{1}{\delta_1^{2}}}b^{-\frac{j}{\delta_1}}\Big\}\quad\text{for every}\quad j\geq j_0,
$$
where $j_0$ is the smallest $j\in\mathbb{N}\cup\{0\}$ such that $Y_j\leq 1$. In particular, we have $\lim_{j\to\infty}Y_j=0$.
\end{Lemma}
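To indicate how Lemma \ref{ite} will be applied, we sketch the argument in the special case $\delta_1=\delta_2=\delta$, which is the case needed, for instance, in the proof of Theorem \ref{lbthm2} below. If $Y_{j+1}\leq 2Kb^jY_j^{1+\delta}$ and $Y_0\leq(2K)^{-\frac{1}{\delta}}b^{-\frac{1}{\delta^{2}}}$, then the bound $Y_j\leq Y_0b^{-\frac{j}{\delta}}$ follows by induction, since
$$
Y_{j+1}\leq 2Kb^j\big(Y_0b^{-\frac{j}{\delta}}\big)^{1+\delta}=\big(2KY_0^{\delta}b^{\frac{1}{\delta}}\big)Y_0b^{-\frac{j+1}{\delta}}\leq Y_0b^{-\frac{j+1}{\delta}},
$$
and in particular $Y_j\to 0$ as $j\to\infty$.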
\section{Local boundedness result}
Our first main result is the following local boundedness estimate for weak subsolutions. We prove the result by applying Lemma \ref{eng1app2} below.
\begin{Theorem}\label{thm1}(\textbf{Local boundedness})
Let $\frac{2N}{N+2}<p<\infty,\,0<s<1$ and $u\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\\\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ be a weak subsolution of \eqref{maineqn} in $\Om_T$. Suppose $(x_0,t_0)\in\Om_T$ and $r\in(0,1)$ such that $\mathcal{Q}_r(x_0,t_0)=B_r(x_0)\times(t_0-r^p,t_0+r^p)\Subset\Om_T$. Assume that $g$ satisfies \eqref{ghypo} for some $\alpha\geq 0$ and $\max\{p,2\}\leq l<p\kappa$ with $\kappa=1+\frac{2}{N}$ such that
$h\in L^{\gamma l'}_{\mathrm{loc}}(\Om_T)$ for some $\gamma>\frac{N+p}{p}$.
Then there exists a positive constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,h)$ such that
\begin{equation}\label{lbest}
\esssup_{\mathcal{Q}_{\frac{r}{2}}(x_0,t_0)}\,u
\leq C\max\bigg\{\bigg(\fint_{\mathcal{Q}_r(x_0,t_0)}u_{+}^{l}\,dx\, dt\bigg)^{\sigma},1\bigg\}+\mathrm{Tail}_{\infty}(u_+;x_0,\tfrac{r}{2},t_0-r^p,t_0 +r^p),
\end{equation}
where $\mathrm{Tail}_{\infty}$ is defined by \eqref{taildef} and $\sigma=\frac{p}{N(p\kappa-l)}$.
\end{Theorem}
\begin{proof}
For $j\in\mathbb{N}\cup\{0\}$, let $B_j,\hat{B}_j,\Gamma_j,\hat{\Gamma}_j,\hat{k},w_j,\hat{w}_j$ be given by \eqref{bl}-\eqref{tm} and \eqref{k1}-\eqref{ct} and denote
$$
Y_j=\frac{1}{r^p}\fint_{B_j}\int_{\Gamma_j}w_j^{l}\,dx\, dt.
$$
Setting $\theta=\frac{1}{2}$ and $\hat{k}\geq \mathrm{Tail}_{\infty}(u_+;x_0,\tfrac{r}{2},t_0-r^{p},t_0+r^{p})+1$ in Lemma \ref{eng1app2} and using the fact that $r\in(0,1)$, we have
\begin{equation}
\begin{split}
Y_{j+1}&\leq \frac{C2^{aj}}{\hat{k}^{l(1-\frac{l}{p\kappa})}}\big(Y_{j}^{1+\frac{l}{\kappa N}}+Y_j^{1+\frac{l\kappa_0}{\kappa N}}\big),
\end{split}
\end{equation}
where $a=\xi(N+p+l)$ for $\xi=1+\frac{p}{N}$, $\kappa=1+\frac{2}{N}$, $\kappa_0=1-\frac{p+N}{p\gamma}\in(0,1]$, since $\gamma>\frac{N+p}{p}$, and $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,h)>0$. For such a constant $C$, we choose
$$
\hat{k}=C\max\bigg\{\bigg(\fint_{\mathcal{Q}_r(x_0,t_0)}u_{+}^{l}\,dx\, dt\bigg)^{\sigma}, 1\bigg\}+\mathrm{Tail}_{\infty}(u_+;x_0,\tfrac{r}{2},t_0-r^{p},t_0+r^{p}).
$$
Thus setting
$$
K=\frac{C}{\hat{k}^{l(1-\frac{l}{p\kappa})}},\quad b=2^a,\quad \delta_2=\frac{l}{N\kappa},\quad\text{and}\quad \delta_1=\frac{l\kappa_0}{N\kappa}
$$
in Lemma \ref{ite}, we obtain $\lim_{j\to\infty}Y_j=0$. This implies \eqref{lbest} and the result follows.
\end{proof}
When $1<p\leq\frac{2N}{N+2}$, we prove a local boundedness estimate below in Theorem \ref{lbthm2}, where we follow the idea of the proof of \cite[Theorem $2$]{DZZ}. To this end, we assume that $m>\frac{N(2-p)}{p}$ and for a given weak subsolution $u\in L^{m}_{\mathrm{loc}}(\Om_T)\cap L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ of \eqref{maineqn} in $\Om_T$, there exists a sequence of bounded weak subsolutions $\{u_k\}_{k=1}^\infty\subset L^{\infty}_{\mathrm{loc}}(\Om_T)\cap L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ of \eqref{maineqn} in $\Om_T$ such that for some constant $C>0$, independent of $k$, we have
\begin{equation}\label{lbc1}
\|u_k\|_{L^\infty_{\mathrm{loc}}(0,T;L^{p-1}_{ps}(\R^N))}\leq C,
\end{equation}
and
\begin{equation}\label{lbc2}
u_k\to u\text{ in } L^m_{\mathrm{loc}}(\Om_T)\text{ as } k\to\infty.
\end{equation}
By applying Lemma \ref{lbthm2lemma} below, we have our second main result.
\begin{Theorem}\label{lbthm2}(\textbf{Local boundedness})
Let $1<p\leq\frac{2N}{N+2},\,0<s<1,\,m>\frac{N(2-p)}{p}$ and $u\in L^{m}_{\mathrm{loc}}(\Om_T)\cap L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ be a weak subsolution of \eqref{maineqn} in $\Om_T$ for which \eqref{lbc1} and \eqref{lbc2} hold. Suppose $(x_0,t_0)\in\Om_T$ and $R\in(0,1)$ such that $\mathcal{Q}_R(x_0,t_0)=B_R(x_0)\times(t_0-R^p,t_0+R^p)\Subset\Om_T$. Assume that $g$ satisfies the hypothesis \eqref{ghypo} for some $\alpha\geq 0$, $1<l\leq 2$ and $h\in L^\infty_{\mathrm{loc}}(\Om_T)$. Then there exists a positive constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,m,h)$ such that
\begin{equation}\label{lbthm2est}
\begin{split}
\esssup_{\mathcal{Q}_{\frac{R}{2}}(x_0,t_0)}\,u
&\leq C\max\bigg\{\bigg(\fint_{\mathcal{Q}_R(x_0,t_0)}u_{+}^m\,dx\,dt\bigg)^\frac{p}{(p+N)(m-\mu_m)},\bigg(\fint_{\mathcal{Q}_{R}(x_0,t_0)}u_{+}^m\,dx\,dt\bigg)^\frac{p}{(p+N)(m-2-\mu_m)}\bigg\}\\
&\qquad+\mathrm{Tail}_{\infty}(u_+;x_0,\tfrac{R}{2},t_0-R^p,t_0+R^p),
\end{split}
\end{equation}
where $\mu_m=\frac{(m-p\kappa)N}{(p+N)}$ with $\kappa=1+\frac{2}{N}$ and $\mathrm{Tail}_\infty$ is defined by \eqref{taildef}.
\end{Theorem}
\begin{proof}
By \eqref{lbc1} and \eqref{lbc2}, we may assume that $u$ is qualitatively locally bounded and run the arguments below to get the required estimate \eqref{lbthm2est}.
Indeed, since every $u_k$ is a bounded weak subsolution of \eqref{maineqn}, the arguments below hold for $u_k$ in place of $u$. Then the estimate \eqref{lbthm2est} holds for every $u_k$, which gives a $k$-independent bound of $u_k$ using \eqref{lbc1} and \eqref{lbc2}. By the pointwise convergence of $u_k$ to $u$ (up to a subsequence, by \eqref{lbc2}), we conclude that $u$ is locally bounded.
To this end, let
$$
R_0=\frac{R}{2},\quad R_n=\frac{R}{2}+\sum_{i=1}^{n}2^{-i-1}R,
\quad n\in\N.
$$
Moreover, let
$$
\mathcal{U}_n=B_{R_n}(x_0)\times(t_0-R_n^{p},t_0+R_n^{p}),\quad S_n=\esssup_{\mathcal{U}_n}\,u_+,\quad n\in\mathbb{N}\cup\{0\}.
$$
By denoting $r=R_{n+1}$ and $\theta r=R_n$, we have
$$
\theta=\frac{\frac{1}{2}+\sum_{i=1}^{n}2^{-i-1}}{\frac{1}{2}+\sum_{i=1}^{n+1}2^{-i-1}}\in\Big[\frac{1}{2},1\Big).
$$
For $j\in\mathbb{N}\cup\{0\}$, let $B_j,\Gamma_j,k_j,\hat{k}$ be defined as in \eqref{bl}, \eqref{tm}, \eqref{k}, \eqref{k1} and $m>\frac{N(2-p)}{p}$.
By setting
$$
X_j=\fint_{B_j}\int_{\Gamma_j}(u-k_j)_{+}^m\,dx\,dt
$$
in Lemma \ref{lbthm2lemma}, we obtain
\begin{equation}\label{lbthm2lemmap1}
\begin{split}
X_{j+1}&\leq\Bigg[\frac{C}{r^{\frac{p^2}{N}}}\frac{2^{aj}}{(1-\theta)^\frac{(N+p)^2}{N}}\Big(\frac{1}{\hat{k}^{m-2}}+\frac{1}{\hat{k}^{m-p}}\Big)^{1+\frac{p}{N}}+C\frac{2^{aj}}{\hat{k}^{m(1+\frac{p}{N})}}\Bigg]\|u_+\|^{m-p\kappa}_{L^\infty(\mathcal{U}_{n+1})}X_j^{1+\frac{p}{N}},
\end{split}
\end{equation}
for some positive constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,m,h)$, where $a=(N+p+m)(1+\frac{p}{N})$. Now setting $Y_j=\frac{X_j}{R_n^{p}}$ for $j\in\mathbb{N}\cup\{0\}$ from \eqref{lbthm2lemmap1} we obtain
\begin{equation}\label{lbthm2lemmap2}
\begin{split}
Y_{j+1}&\leq C2^{en}\bigg(\frac{1}{\hat{k}^{(m-2)(1+\frac{p}{N})}}+\frac{1}{\hat{k}^{m(1+\frac{p}{N})}}\bigg)S_{n+1}^{m-p\kappa}2^{aj}Y_{j}^{1+\frac{p}{N}},
\end{split}
\end{equation}
for some positive constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,m,h)$, where $e=\frac{(N+p)^2}{N}$. By choosing
\begin{equation}\label{nk}
\begin{split}
\hat{k}&=C2^\frac{enN}{(m-2)(p+N)}\bigg(\fint_{B_{R_{n+1}}(x_0)}\fint_{t_0-R_{n+1}^p}^{t_0+R_{n+1}^p}u_+^{m}\,dx\,dt\bigg)^\frac{p}{(m-2)(p+N)}S_{n+1}^\frac{N(m-p\kappa)}{(m-2)(p+N)}\\
&\qquad+C2^\frac{enN}{m(p+N)}\bigg(\fint_{B_{R_{n+1}}(x_0)}\fint_{t_0-R_{n+1}^p}^{t_0+R_{n+1}^p}u_+^{m}\,dx\,dt\bigg)^\frac{p}{m(p+N)}S_{n+1}^\frac{N(m-p\kappa)}{m(p+N)}\\
&\qquad+\frac{1}{2}\mathrm{Tail}_\infty(u_+;x_0,R_n,t_0-R_{n+1}^p,t_0+R_{n+1}^p),
\end{split}
\end{equation}
we obtain
$
Y_0\leq (2K)^{-\frac{1}{\delta}}b^{-\frac{1}{\delta^2}},
$
where
$$
K=C 2^{en-1}\bigg(\frac{1}{\hat{k}^{(m-2)(1+\frac{p}{N})}}+\frac{1}{\hat{k}^{m(1+\frac{p}{N})}}\bigg)S_{n+1}^{m-p\kappa},\quad b=2^{a}
\quad\text{and}\quad \delta_2=\delta_1=\delta=\frac{p}{N}.
$$
Therefore, by Lemma \ref{ite} we have $\lim_{j\to\infty}Y_j=0$ and we get
\begin{equation}\label{lb2it1}
\begin{split}
\esssup_{\mathcal{U}_n}\,u_+&\leq C2^\frac{enN}{(m-2)(p+N)}\Big(\fint_{B_{R_{n+1}}(x_0)}\fint_{t_0-R_{n+1}^p}^{t_0+R_{n+1}^p}u_+^{m}\,dx\,dt\Big)^\frac{p}{(m-2)(p+N)}S_{n+1}^\frac{N(m-p\kappa)}{(m-2)(p+N)}\\
&\qquad+C2^\frac{enN}{m(p+N)}\bigg(\fint_{B_{R_{n+1}}(x_0)}\fint_{t_0-R_{n+1}^p}^{t_0+R_{n+1}^p}u_+^{m}\,dx\, dt\bigg)^\frac{p}{m(p+N)}S_{n+1}^\frac{N(m-p\kappa)}{m(p+N)}\\
&\qquad+\frac{1}{2}\mathrm{Tail}_\infty(u_+;x_0,R_n,t_0-R_{n+1}^p,t_0+R_{n+1}^p),
\end{split}
\end{equation}
for some positive constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,m,h)$. Since $1<p\leq\frac{2N}{N+2},\,\kappa=1+\frac{2}{N}$ and $m>\frac{N(2-p)}{p}$, we have
$$
0<\frac{N(m-p\kappa)}{(m-2)(p+N)},\frac{N(m-p\kappa)}{m(p+N)}<1.
$$
By Young's inequality with $\epsilon$ (to be chosen below) in \eqref{lb2it1}, we get
\begin{equation}\label{lb2it2}
\begin{split}
S_n=\esssup_{\mathcal{U}_n}\,u_+&\leq\epsilon S_{n+1}+P_0+T_1^{n}P_1+T_2^{n}P_2,\quad n\in\mathbb{N}\cup\{0\},
\end{split}
\end{equation}
where
\begin{align*}
P_0&=\frac{1}{2}\mathrm{Tail}_\infty(u_+;x_0,\frac{R}{2},t_0-R^p,t_0+R^p),\\
P_1&=C\epsilon^{-\frac{\mu_m}{m-2-\mu_m}}\bigg(\fint_{B_R(x_0)}\fint_{t_0-R^p}^{t_0+R^p}u_+^{m}\,dx\, dt\bigg)^\frac{p}{(p+N)(m-2-\mu_m)},\\
P_2&=C\epsilon^{-\frac{\mu_m}{m-\mu_m}}\bigg(\fint_{B_R(x_0)}\fint_{t_0-R^p}^{t_0+R^p}u_+^{m}\,dx\,dt\bigg)^\frac{p}{(p+N)(m-\mu_m)},\\
T_1&=2^\frac{eN}{(p+N)(m-2-\mu_m)}
\quad\text{and}\quad
T_2=2^\frac{eN}{(p+N)(m-\mu_m)},
\end{align*}
for $\mu_m=\frac{(m-p\kappa)N}{(p+N)}$.
We claim that
\begin{equation}\label{lb2it3}
S_0\leq \epsilon^{n+1}S_{n+1}+P_0\sum_{i=0}^{n}\epsilon^i+P_1\sum_{i=0}^{n}(\epsilon T_1)^i+P_2\sum_{i=0}^{n}(\epsilon T_2)^i,\quad n\in\mathbb{N}\cup\{0\}.
\end{equation}
Indeed, by \eqref{lb2it2}, the inequality \eqref{lb2it3} holds for $n=0$. We assume that \eqref{lb2it3} holds for $n=j$ and prove it for $n=j+1$. To this end, assuming \eqref{lb2it3} for $n=j$, we observe that
\begin{equation}\label{lb2it4}
\begin{split}
S_0&\leq \epsilon^{j+1}S_{j+1}+P_0\sum_{i=0}^{j}\epsilon^i+P_1\sum_{i=0}^{j}(\epsilon T_1)^i+P_2\sum_{i=0}^{j}(\epsilon T_2)^i\\
&\leq\epsilon^{j+1}(\epsilon S_{j+2}+P_0+T_1^{j+1}P_1+T_2^{j+1}P_2)+P_0\sum_{i=0}^{j}\epsilon^i+P_1\sum_{i=0}^{j}(\epsilon T_1)^i+P_2\sum_{i=0}^{j}(\epsilon T_2)^i\\
&=\epsilon^{j+2}S_{j+2}+P_0\sum_{i=0}^{j+1}\epsilon^i+P_1\sum_{i=0}^{j+1}(\epsilon T_1)^i+P_2\sum_{i=0}^{j+1}(\epsilon T_2)^i.
\end{split}
\end{equation}
Thus, \eqref{lb2it3} holds for $n=j+1$. By induction \eqref{lb2it3} holds for every $n\in\mathbb{N}\cup\{0\}$.
By inserting $P_0,P_1,P_2,T_1,T_2$ and choosing $\epsilon=2^{-\frac{eN}{(p+N)(m-2-\mu_m)}-1}$ in \eqref{lb2it3}, we have $\epsilon T_1=\frac{1}{2}$ and $\epsilon T_2\leq\frac{1}{2}$, since $m-\mu_m\geq m-2-\mu_m$, so that the geometric series on the right-hand side converge. Moreover, $\epsilon^{n+1}S_{n+1}\to 0$ as $n\to\infty$, since $u$ is qualitatively locally bounded. Letting $n\to\infty$ in the resulting inequality,
we conclude that \eqref{lbthm2est} holds. This completes the proof.
\end{proof}
\subsection{Preliminaries}
For $f\in L^1(\Om_T)$, we define the mollification in time by
\begin{equation}\label{mol}
f_m(x,t)=\frac{1}{m}\int_{0}^{t}e^{\frac{s-t}{m}}f(x,s)\,ds,\quad m>0.
\end{equation}
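We record two standard properties of this mollification: differentiating under the integral sign gives
$$
\partial_t f_m=\frac{f-f_m}{m}
$$
for almost every $(x,t)\in\Om_T$, and $f_m\to f$ in $L^1(\Om_T)$ as $m\to 0$.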
This is useful in our energy estimates below (see for example Lemma \ref{eng1}) to justify the use of test functions depending on the solution itself. For more details on $f_m$, we refer to \cite{KLin}. For short, we denote
$$d\mu=K(x,y,t)\,dx\,dy,$$
where $K(x,y,t)$ is given by \eqref{kernel}.
\begin{Lemma}\label{eng1}{(\textbf{Energy estimate})
Let $1<p<\infty,\,0<s<1$ and $u\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ be a weak subsolution of \eqref{maineqn} in $\Om_T$. Suppose $x_0\in\Om,\,r>0$ such that $B_r=B_r(x_0)\Subset\Om$ and $0<\tau_1<\tau_2$, $\tau>0$ such that $(\tau_1-\tau,\tau_2)\Subset(0,T)$. For $k\in\mathbb{R}$, we denote by $w_+=(u-k)_+$. Assume that $g$ satisfies \eqref{ghypo} for some $\alpha\geq 0$, $1<l\le\max\{2,p(1+\frac{2}{N})\}$ and $h\in L^{l'}_{\mathrm{loc}}(\Om_T)$. Then there exists a positive constant $C=C(p,\Lambda,C_1,C_2,C_3,C_4,l,\alpha)$ such that
\begin{equation}\label{eng1eqn}
\begin{split}
&\esssup_{\tau_1-\tau<t<\tau_2}\int_{B_r}\zeta_+(u,k)\xi^p\,dx+\int_{\tau_1-\tau}^{\tau_2}\int_{B_r}|\nabla w_+|^p\xi^p\,dx\,dt\\
&\leq C\bigg(\int_{\tau_1-\tau}^{\tau_2}\int_{B_r}w_+^p|\nabla\xi|^p\,dx\,dt+\int_{\tau_1-\tau}^{\tau_2}\int_{B_r}\int_{B_r}\max\{w_+(x,t),w_+(y,t)\}^p|\xi(x,t)-\xi(y,t)|^p\,d\mu\,dt\\
&\qquad+\esssup_{(x,t)\in\mathrm{supp}\,\xi,\,\tau_1-\tau<t<\tau_2}\int_{\mathbb{R}^N\setminus B_r}\frac{w_+(y,t)^{p-1}}{|x-y|^{N+ps}}\,dy\int_{\tau_1-\tau}^{\tau_2}\int_{B_r}w_+\xi^p\,dx\,dt\\
&\qquad+\int_{\tau_1-\tau}^{\tau_2}\int_{B_r}\zeta_+(u,k)|\partial_t\xi^p|\,dx\,dt+\int_{B_r}\zeta_+(u(x,\tau_1-\tau),k)\xi(x,\tau_1-\tau)^p\,dx\\
&\qquad+\int_{\tau_1-\tau}^{\tau_2}\int_{B_r}\big(|u|^l+|h|^{l'}+w_+^{l}\big)\chi_{\{u\geq k\}}\xi^p\,dx\,dt\bigg),
\end{split}
\end{equation}
where $\xi(x,t)=\psi(x)\eta(t)$ with nonnegative functions $\psi\in C_c^{\infty}(B_r)$ and $\eta\in C^\infty(\tau_1-\tau,\tau_2)$. Here $\zeta_+$ is given by \eqref{xi}.}
\end{Lemma}
\begin{Remark}\label{eng1rmk}
If $g(x,t,u)=h(x,t)$ in $\Om_T\times\R$, then the sixth integral on the right-hand side of \eqref{eng1eqn} can be replaced by the integral $\int_{\tau_1-\tau}^{\tau_2}\int_{B_r}|h|w_{+}\xi^p\,dx\,dt$, which vanishes if $h\equiv 0$.
\end{Remark}
\begin{Remark}\label{eng1rmk2}
If $g\equiv 0$ in $\Om_T\times\R$ and $u\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;\\L^{p-1}_{ps}(\mathbb{R}^N)\big)$ is a weak supersolution of \eqref{maineqn} in $\Om_T$ such that ${B_r(x_0)}\times(\tau_1-\tau,\tau_2)\Subset\Om_T$, then proceeding with similar arguments as in the proof of Lemma \ref{eng1}, the estimate \eqref{eng1eqn} will hold by replacing $\zeta_+,w_+$ and the sixth integral on the right-hand side of \eqref{eng1eqn} with $\zeta_-,w_-$ and zero, respectively. Here $w_-=(u-k)_-$ for $k\in\R$ and $\zeta_{-}$ is given by \eqref{xi}.
\end{Remark}
\begin{proof}
Let $t_1=\tau_1-\tau$ and $t_2=\tau_2$. For small enough $\epsilon>0$ and for fixed $t_1<\theta_1<\theta_2<t_2$, we define a Lipschitz cutoff function $\zeta_{\epsilon}:[t_1,t_2]\to[0,1]$ by
\begin{equation}\label{Lip1}
\zeta_{\epsilon}(t) =
\begin{cases}
0 & \text{for } t_1\leq t\leq \theta_1-\epsilon,\\
1+\frac{t-\theta_1}{\epsilon} & \text{for }\theta_1-\epsilon<t\leq \theta_1, \\
1, & \text{for } \theta_1<t\leq \theta_2, \\
1-\frac{t-\theta_2}{\epsilon}, & \text{for }\theta_2<t\leq \theta_2+\epsilon, \\
0, & \text{for } \theta_2 +\epsilon<t\leq t_2.
\end{cases}
\end{equation}
Recalling that $\xi(x,t)=\psi(x)\eta(t)$, we choose
\[
\phi(x,t)=w_+(x,t)\xi(x,t)^p\zeta_{\epsilon}(t)
\]
as an admissible test function in \eqref{wksol}. Indeed, for $(\cdot)_m$ as defined in \eqref{mol} and following \cite{Verenacontinuity,KLin}, the weak subsolution $u$ of \eqref{maineqn} satisfies the following mollified inequality
\begin{equation}\label{mollified}
\lim_{\epsilon\to 0}\lim_{m\to 0}(T_{m}^{\epsilon}+L_{m}^{\epsilon}+N_m^{\epsilon}-S_{m}^{\epsilon})\leq 0,
\end{equation}
where
\begin{equation*}
\begin{split}
T_{m}^{\epsilon}&=\int_{t_1}^{t_2}\int_{B_r}\partial_t{u_m}(x,t)\phi(x,t)\,dx\,dt
=\int_{t_1}^{t_2}\int_{B_r}\partial_t{u_m}(x,t)\,w_+(x,t)\xi(x,t)^p\zeta_{\epsilon}(t)\,dx\,dt,\\
L_m^{\epsilon}&=\int_{t_1}^{t_2}\int_{B_r}(\mathcal{B}(x,t,u,\nabla u))_m\nabla\phi(x,t)\,dx\,dt\\
&=\int_{t_1}^{t_2}\int_{B_r}(\mathcal{B}(x,t,u,\nabla u))_m\nabla\big(w_+(x,t)\xi(x,t)^p\zeta_{\epsilon}(t)\big)\,dx\,dt,\\
N_{m}^{\epsilon}&=\int_{t_1}^{t_2}\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}(\mathcal{V}(u(x,y,t)))_{m}(\phi(x,t)-\phi(y,t))\,dx\,dy\,dt\\
&=\int_{t_1}^{t_2}\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}(\mathcal{V}(u(x,y,t)))_{m}\big(w_+(x,t)\xi(x,t)^p-w_+(y,t)\xi(y,t)^p\big)\zeta_{\epsilon}(t)\,dx\,dy\,dt,
\end{split}
\end{equation*}
with $\mathcal{V}(u(x,y,t))=\mathcal{A}(u(x,y,t))K(x,y,t)$, and
\begin{align*}
S_m^{\epsilon}&=\int_{t_1}^{t_2}\int_{\Omega}(g(x,t,u))_m\phi(x,t)\,dx\,dt
=\int_{t_1}^{t_2}\int_{\Omega}(g(x,t,u))_mw_+(x,t)\xi(x,t)^p\zeta_{\epsilon}(t)\,dx\,dt.
\end{align*}
\textbf{Estimate of $T_{m}^{\epsilon}$:} Recalling $\zeta_+(u,k)$ defined in \eqref{xi}, from the proof of \cite[Proposition $3.1$, p. $9$]{Verenacontinuity}, we arrive at
\begin{equation}\label{Ihpre}
\begin{split}
\lim_{\epsilon\to 0}\lim_{m\to 0}T_{m}^{\epsilon}
&\geq \int_{B_r}\zeta_+(u(x,\theta_2),k)\xi(x,\theta_2)^p\,dx-\int_{B_r}\zeta_+(u(x,\theta_1),k)\xi(x,\theta_1)^p\,dx\\
&\qquad-\int_{\theta_1}^{\theta_2}\int_{B_r}\zeta_+(u(x,t),k)\partial_t\xi(x,t)^p\,dx\,dt.
\end{split}
\end{equation}
\textbf{Estimate of $L_{m}^{\epsilon}$:} From the proof of \cite[Proposition $3.1$, p. $10$]{Verenacontinuity}, we obtain
\begin{equation}\label{llimit}
\begin{split}
\lim_{\epsilon\to 0}\lim_{m\to 0}L_{m}^{\epsilon}
&\geq\frac{C_3}{p}\int_{\theta_1}^{\theta_2}\int_{B_r}|\nabla w_+|^p\xi^p\,dx\,dt-C\int_{\theta_1}^{\theta_2}\int_{B_r}w_+^{p}|\nabla\xi|^p\,dx\,dt,
\end{split}
\end{equation}
for some constant $C=C(C_3,C_4,p)>0$, where $C_3,C_4$ are given by \eqref{ls}.\\
\textbf{Estimate of $N_{m}^{\epsilon}$:} From the proof of \cite[Lemma $3.1$, p. $9-12$]{BGK}, we obtain
\begin{equation}\label{jlimit}
\begin{split}
\lim_{\epsilon\to 0}\lim_{m\to 0}N_{m}^{\epsilon}&\geq c\int_{\theta_1}^{\theta_2}\int_{B_r}\int_{B_r}|w_+(x,t)\xi(x,t)-w_+(y,t)\xi(y,t)|^p\,d\mu\, dt\\
&\qquad-C\int_{\theta_1}^{\theta_2}\int_{B_r}\int_{B_r}\max\{w_+(x,t),w_+(y,t)\}^p|\xi(x,t)-\xi(y,t)|^p\,d\mu\,dt\\
&\qquad-C\esssup_{(x,t)\in\mathrm{supp}\,\zeta,\,\theta_1<t<\theta_2}\int_{\mathbb{R}^N\setminus B_r}\frac{w_+(y,t)^{p-1}}{|x-y|^{N+ps}}\,dy
\int_{\theta_1}^{\theta_2}\int_{B_r}w_+(x,t)\xi(x,t)^p\,dx\,dt,
\end{split}
\end{equation}
for some positive constants $c=c(C_1,C_2,\Lambda,p)$ and $C=C(C_1,C_2,\Lambda,p)$, where $C_1,C_2$ are given by \eqref{L}.\\
\textbf{Estimate of $S_{m}^{\epsilon}$:} Since $u\in L^p(t_1,t_2;W^{1,p}(B_r))\cap C(t_1,t_2;L^2(B_r))$, by Lemma \ref{Sobo} (a), we have $u\in L^l(t_1,t_2;L^l(B_r))$. Using this fact along with the given hypothesis \eqref{ghypo} on $g$, we obtain $g\in L^{l'}(t_1,t_2;L^{l'}(B_r))$ and $\phi=w_+(x,t)\xi(x,t)^p\zeta_{\epsilon}(t)\in L^l(t_1,t_2;L^{l}(B_r))$. Setting
$$
S^{\epsilon}=\int_{t_1}^{t_2}\int_{B_r}g(x,t,u)w_+(x,t)\xi(x,t)^p\zeta_{\epsilon}(t)\,dx\,dt,
$$
by H\"older's inequality, we obtain
\begin{equation}\label{sest}
\begin{split}
|S_m^{\epsilon}-S^{\epsilon}|&\leq\left(\int_{t_1}^{t_2}\int_{B_r}|g_m-g|^{l'}\,dx\,dt\right)^\frac{1}{l'} \left(\int_{t_1}^{t_2}\int_{B_r}|\phi|^l\,dx\,dt\right)^\frac{1}{l},
\end{split}
\end{equation}
where $\phi(x,t)=w_+(x,t)\xi(x,t)^p\zeta_{\epsilon}(t)$. Using \cite[Lemma $2.9$]{KLin}, the first integral in the above estimate \eqref{sest} goes to zero as $m\to 0$.
This implies $\lim_{m\to 0}S_m^{\epsilon}=S^{\epsilon}$. Letting $\epsilon\to 0$, by the Lebesgue dominated convergence theorem, we obtain
\begin{equation}\label{sest1}
\lim_{\epsilon\to 0}\lim_{m\to 0}S_m^{\epsilon}=S,
\end{equation}
where
$$
S=\int_{\theta_1}^{\theta_2}\int_{B_r}g(x,t,u)w_+(x,t)\xi(x,t)^p\,dx\,dt.
$$
Using Young's inequality with exponents $l$ and $l'$ in \eqref{ghypo}, for some positive constant $C=C(l,\alpha)$, we obtain
$$
g(x,t,u)w_+(x,t)\leq C\big(w_+(x,t)^l+|u|^l\chi_{\{u\geq k\}}(x,t)+|h(x,t)|^{l^{'}}\chi_{\{u\geq k\}}(x,t)\big).
$$
Therefore, we have
\begin{equation}\label{sest2}
S\leq C\int_{\theta_1}^{\theta_2}\int_{B_r}\big(|u|^l+|h|^{l'}+w_+^{l}\big)\chi_{\{u\geq k\}}\xi^p\,dx\,dt,
\end{equation}
for some positive constant $C=C(l,\alpha)$.
From \eqref{sest1} and \eqref{sest2}, we have
\begin{equation}\label{sest3}
\lim_{\epsilon\to 0}\lim_{m\to 0}S_m^{\epsilon}\leq C\int_{\theta_1}^{\theta_2}\int_{B_r}\big(|u|^l+|h|^{l'}+w_+^{l}\big)\chi_{\{u\geq k\}}\xi^p\,dx\,dt,
\end{equation}
for some positive constant $C=C(l,\alpha)$. Combining the estimates \eqref{Ihpre}, \eqref{llimit}, \eqref{jlimit} and \eqref{sest3} in \eqref{mollified}, we obtain
\begin{equation}\label{eng1eqn1}
\begin{split}
&\int_{B_r}\zeta_+(u(x,\theta_2),k)\xi(x,\theta_2)^p\,dx+\int_{\theta_1}^{\theta_2}\int_{B_r}|\nabla w_+|^p\xi^p\,dx\,dt\\
&\leq C\bigg(\int_{\theta_1}^{\theta_2}\int_{B_r}w_+^p|\nabla\xi|^p\,dx\,dt
+\int_{\theta_1}^{\theta_2}\int_{B_r}\int_{B_r}\max\{w_+(x,t),w_+(y,t)\}^p|\xi(x,t)-\xi(y,t)|^p\,d\mu\,dt\\
&\qquad+\esssup_{(x,t)\in\mathrm{supp}\zeta,\,\theta_1<t<\theta_2}\int_{\mathbb{R}^N\setminus B_r}\frac{w_+(y,t)^{p-1}}{|x-y|^{N+ps}}\,dy\int_{\theta_1}^{\theta_2}\int_{B_r}w_+\xi^p\,dx\,dt+\int_{\theta_1}^{\theta_2}\int_{B_r}\zeta_+(u,k)\partial_t\xi^p\,dx\,dt\\
&\qquad+\int_{B_r}\zeta_+(u(x,\theta_1),k)\xi(x,\theta_1)^p\,dx+\int_{\theta_1}^{\theta_2}\int_{B_r}\big(|u|^l+|h|^{l'}+w_+^{l}\big)\chi_{\{u\geq k\}}\xi^p\,dx\,dt\bigg),
\end{split}
\end{equation}
whenever $t_1<\theta_1<\theta_2<t_2$, for some constant $C=C(p,\Lambda,C_1,C_2,C_3,C_4,l,\alpha)>0$. Letting $\theta_1\to t_1$ in \eqref{eng1eqn1} gives
\begin{equation}\label{eng1eqn12}
\begin{split}
&\int_{B_r}\zeta_+(u(x,\theta_2),k)\xi(x,\theta_2)^p\,dx+\int_{t_1}^{\theta_2}\int_{B_r}|\nabla w_+|^p\xi^p\,dx\,dt\\
&\leq C\bigg(\int_{t_1}^{t_2}\int_{B_r}w_+^p|\nabla\xi|^p\,dx\,dt+\int_{t_1}^{t_2}\int_{B_r}\int_{B_r}\max\{w_+(x,t),w_+(y,t)\}^p|\xi(x,t)-\xi(y,t)|^p\,d\mu\, dt\\
&\qquad+\esssup_{(x,t)\in\mathrm{supp}\zeta,\,t_1<t<t_2}\int_{\mathbb{R}^N\setminus B_r}\frac{w_+(y,t)^{p-1}}{|x-y|^{N+ps}}\,dy\int_{t_1}^{t_2}\int_{B_r}w_+\xi^p\,dx\,dt+\int_{t_1}^{t_2}\int_{B_r}\zeta_+(u,k)|\partial_t\xi^p|\,dx\, dt\\
&\qquad+\int_{B_r}\zeta_+(u(x,t_1),k)\xi(x,t_1)^p\,dx+\int_{t_1}^{t_2}\int_{B_r}\big(|u|^l+|h|^{l'}+w_+^{l}\big)\chi_{\{u\geq k\}}\xi^p\,dx\,dt\bigg),
\end{split}
\end{equation}
for some constant $C=C(p,\Lambda,C_1,C_2,C_3,C_4,l,\alpha)>0$.
Since $\zeta_+$ and $\xi$ are nonnegative, ignoring the second integral on the left hand side of \eqref{eng1eqn12} and then taking essential supremum with respect to $\theta_2\in(t_1,t_2)$, we arrive at
\begin{equation}\label{eng1eqn2}
\begin{split}
&\esssup_{\tau_1-\tau<t<\tau_2}\int_{B_r}\zeta_+(u,k)\xi^p\,dx
\leq C\bigg(\int_{t_1}^{t_2}\int_{B_r}w_+^p|\nabla\xi|^p\,dx\, dt\\
&\qquad+\int_{t_1}^{t_2}\int_{B_r}\int_{B_r}\max\{w_+(x,t),w_+(y,t)\}^p|\xi(x,t)-\xi(y,t)|^p\,d\mu\, dt\\
&\qquad+\esssup_{(x,t)\in\mathrm{supp}\zeta,\,t_1<t<t_2}\int_{\mathbb{R}^N\setminus B_r}\frac{w_+(y,t)^{p-1}}{|x-y|^{N+ps}}\,dy\int_{t_1}^{t_2}\int_{B_r}w_+\xi^p\,dx\,dt+\int_{t_1}^{t_2}\int_{B_r}\zeta_+(u,k)|\partial_t\xi^p|\,dx\, dt\\
&\qquad+\int_{B_r}\zeta_+(u(x,t_1),k)\xi(x,t_1)^p\,dx+\int_{t_1}^{t_2}\int_{B_r}\big(|u|^l+|h|^{l'}+w_+^{l}\big)\chi_{\{u\geq k\}}\xi^p\,dx\, dt\bigg),
\end{split}
\end{equation}
for some constant $C=C(p,\Lambda,C_1,C_2,C_3,C_4,l,\alpha)>0$. Next, discarding the first integral on the left hand side of \eqref{eng1eqn12} and letting $\theta_2\to t_2$ in \eqref{eng1eqn12}, we obtain
\begin{equation}\label{eng1eqn23}
\begin{split}
&\int_{t_1}^{t_2}\int_{B_r}|\nabla w_+|^p\xi^p\,dx\, dt
\leq C\bigg(\int_{t_1}^{t_2}\int_{B_r}w_+^p|\nabla\xi|^p\,dx\,dt\\
&\qquad+\int_{t_1}^{t_2}\int_{B_r}\int_{B_r}\max\{w_+(x,t),w_+(y,t)\}^p|\xi(x,t)-\xi(y,t)|^p\,d\mu\, dt\\
&\qquad+\esssup_{(x,t)\in\mathrm{supp}\zeta,\,t_1<t<t_2}\int_{\mathbb{R}^N\setminus B_r}\frac{w_+(y,t)^{p-1}}{|x-y|^{N+ps}}\,dy\int_{t_1}^{t_2}\int_{B_r}w_+\xi^p\,dx\,dt+\int_{t_1}^{t_2}\int_{B_r}\zeta_+(u,k)|\partial_t\xi^p|\,dx\,dt\\
&\qquad+\int_{B_r}\zeta_+(u(x,t_1),k)\xi(x,t_1)^p\,dx+\int_{t_1}^{t_2}\int_{B_r}\big(|u|^l+|h|^{l'}+w_+^{l}\big)\chi_{\{u\geq k\}}\xi^p\,dx\,dt\bigg),
\end{split}
\end{equation}
for some constant $C=C(p,\Lambda,C_1,C_2,C_3,C_4,l,\alpha)>0$.
From \eqref{eng1eqn2} and \eqref{eng1eqn23} we conclude that the estimate \eqref{eng1eqn} holds. Hence the result follows.
\end{proof}
Next, we obtain some auxiliary results using the energy estimate in Lemma \ref{eng1}. Before stating them, let us define the following parameters.
Let $(x_0,t_0)\in\Om_T$, $r\in(0,1)$ be such that the parabolic cylinder $\mathcal{Q}_r=\mathcal{Q}_r(x_0,t_0)=B_r(x_0)\times(t_0-r^p,t_0+r^p)\Subset\Omega_T$.
For $\frac{1}{2}\leq\theta<1$ we define
\begin{equation}\label{rad}
r_j=\theta r+(1-\theta)2^{-j}r
\quad\text{and}\quad
\hat{r}_j=\frac{r_j+r_{j+1}}{2},\quad j\in\mathbb{N}\cup\{0\}.
\end{equation}
Note that
\begin{equation}\label{rp}
r_{j+1}<\hat{r}_j<r_j,\quad j\in\mathbb{N}\cup\{0\}.
\end{equation}
We denote
\begin{equation}\label{bl}
B_j=B_{r_j}(x_0),\quad\hat{B}_j=B_{\hat{r}_j}(x_0),
\end{equation}
\begin{equation}\label{tm}
\Gamma_j=(t_0-r_j^{p},t_0+r_j^{p}),\quad\hat{\Gamma}_j=(t_0-\hat{r}_j^p,t_0+\hat{r}_j^{p})
\end{equation}
and
\begin{equation}\label{cyll}
\mathcal{Q}_j=B_j\times\Gamma_j,\quad\hat{\mathcal{Q}}_j=\hat{B}_j\times\hat{\Gamma}_j,
\quad j\in\mathbb{N}\cup\{0\}.
\end{equation}
By \eqref{rp} we obtain
\begin{equation}\label{bp}
B_{j+1}\subset \hat{B}_j\subset B_j,\quad \Gamma_{j+1}\subset\hat{\Gamma}_{j}\subset\Gamma_j,
\quad
j\in\mathbb{N}\cup\{0\}
\end{equation}
and therefore, we have
\begin{equation}\label{scyl}
\mathcal{Q}_{j+1}\subset\hat{\mathcal{Q}}_j\subset\mathcal{Q}_j,
\quad j\in\mathbb{N}\cup\{0\}.
\end{equation}
Next, we set
\begin{equation}\label{k}
k_j=(1-2^{-j})\hat{k},\quad\hat{k}_j=\frac{k_j+k_{j+1}}{2},\quad j\in\mathbb{N}\cup\{0\},
\end{equation}
for
\begin{equation}\label{k1}
\hat{k}\geq\frac{1}{2}\mathrm{Tail}_{\infty}(u_+;x_0,\theta r,t_0-r^p,t_0+r^p),
\end{equation}
where $\mathrm{Tail}_\infty$ is defined in \eqref{taildef}. Finally, we define the functions
\begin{equation}\label{ct}
w_j=(u-k_j)_+,\quad\hat{w}_j=(u-\hat{k}_j)_+,\quad j\in\mathbb{N}\cup\{0\}.
\end{equation}
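For later use, let us record the elementary gaps between consecutive radii and levels, which follow directly from \eqref{rad} and \eqref{k}:
\begin{equation*}
r_j-\hat{r}_j=\hat{r}_j-r_{j+1}=\frac{(1-\theta)r}{2^{j+2}}
\quad\text{and}\quad
\hat{k}_j-k_j=k_{j+1}-\hat{k}_j=\frac{\hat{k}}{2^{j+2}},\qquad j\in\mathbb{N}\cup\{0\}.
\end{equation*}
In particular, on the set $\{u>\hat{k}_j\}$ we have $w_j=u-k_j\geq\hat{k}_j-k_j=2^{-(j+2)}\hat{k}$, which is the observation behind the estimate \eqref{wp} below.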
We apply Lemma \ref{eng1} to obtain the following result.
\begin{Lemma}\label{eng1app}
Let $1<p<\infty,\,0<s<1$ and $u\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ be a weak subsolution of \eqref{maineqn} in $\Om_T$. Assume that $(x_0,t_0)\in\Omega_T$ and $r\in(0,1)$ such that $\mathcal{Q}_r=\mathcal{Q}_r(x_0,t_0)=B_r(x_0)\times(t_0-r^p,t_0+r^p)\Subset\Omega_T$. Suppose that $g$ satisfies \eqref{ghypo} for some $\alpha\geq 0$, $1<l\le\max\{2,p(1+\frac{2}{N})\}$ and $h\in L^{\gamma l'}_{\mathrm{loc}}(\Omega_T)$ for some $\gamma>\frac{N+p}{p}$.
For $j\in\mathbb{N}\cup\{0\}$, let $B_j,\hat{B}_j,\Gamma_j,\hat{\Gamma}_j,\mathcal{Q}_j,\hat{\mathcal{Q}}_j$ be given by \eqref{bl}-\eqref{cyll} and $k_j,\hat{k}_j,\hat{k},w_j,\hat{w}_j$ are given by \eqref{k}-\eqref{ct}. Assume that $\frac{1}{2}\leq\theta<1$. Then for any $q\geq\max\{p,2,l\}$ and for any $j\in\mathbb{N}\cup\{0\}$, there exists a positive constant $C=C(N,p,q,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,h)$ such that
\begin{equation}\label{eng1appeqn}
\begin{split}
&\esssup_{t\in\hat{\Gamma}_{j}}\int_{\hat{B}_{j}}\hat{w}_j^{2}\,dx+\int_{\hat{\Gamma}_{j}}\int_{\hat{B}_{j}}|\nabla\hat{w}_j|^p\,dx\,dt\\
&\leq\frac{C}{r^p}\Bigg[\frac{1}{\theta^p(1-\theta)^{N+p}}+\frac{1}{(1-\theta)^p}\Bigg]\Bigg[\frac{2^{(p+q-2)j}}{\hat{k}^{q-2}}+\frac{2^{(N+p+q-1)j}}{\hat{k}^{q-p}}\Bigg]\int_{\Gamma_j}\int_{B_j}w_j^{q}\,dx\,dt\\
&\qquad+C\frac{2^{qj}}{\hat{k}^{q-l}}\int_{\Gamma_j}\int_{B_j}w_j^{q}\,dx\,dt
+C\frac{2^\frac{q(N+p\kappa_0)j}{N+p}}{\hat{k}^\frac{q(N+p\kappa_0)}{N+p}}\bigg(\int_{\Gamma_j}\int_{B_j}w_j^{q}\,dx\,dt\bigg)^\frac{N+p\kappa_0}{N+p},
\end{split}
\end{equation}
where $\kappa_0=1-\frac{N+p}{p\gamma}\in(0,1]$.
\end{Lemma}
\begin{proof}
Noting that $\hat{k}_j>k_j$ and arguing as in \cite[Lemma $4.1$, p. $25$]{DZZ}, for any $0\leq\lambda<q$ with $q\geq\max\{p,2,l\}$, we get
\begin{equation}\label{wp}
\hat{w}_j\leq w_j\quad\text{and}\quad\hat{w}_j^{\lambda}\leq C\hat{k}^{\lambda-q}2^{(q-\lambda)j}w_j^{q}\text{ in }\Omega_T,
\end{equation}
for some constant $C=C(\lambda,q)>0$. Let $\xi_j(x,t)=\psi_j(x)\eta_j(t)$, where $\psi_j\in C_c^{\infty}(B_j)$ and $\eta_j\in C_c^{\infty}(\Gamma_j)$ such that
\begin{equation}\label{ct1}
\begin{split}
&0\leq\psi_j\leq 1\text{ in }B_j,\quad|\nabla\psi_j|\leq C\frac{2^j}{(1-\theta)r}\text{ in }B_j,\\
&\psi_j\equiv 1\text{ in }\hat{B}_{j},\quad\mathrm{dist}(\mathrm{supp}\,\psi_j,\mathbb{R}^N\setminus B_j)\geq C\frac{(1-\theta)r}{2^j}
\end{split}
\end{equation}
and
\begin{equation}\label{ct2}
\begin{split}
&0\leq\eta_j\leq 1\text{ in }\Gamma_j,\quad|\partial_t\eta_j|\leq C\frac{2^{pj}}{(1-\theta)^p r^p}\text{ in }\Gamma_j,\quad\eta_j\equiv 1\text{ in }\hat{\Gamma}_j,
\end{split}
\end{equation}
for some positive constant $C=C(N,p)$. Noting Lemma \ref{Auxfnlemma} and setting $r=r_j,\tau_1=t_0-\hat{r}_{j}^p,\tau_2=t_0+r_{j}^p$, $\tau=r_j^{p}-\hat{r}_{j}^p$ in Lemma \ref{eng1}, we obtain
\begin{equation}\label{eng1equation}
\begin{split}
&\esssup_{t\in\hat{\Gamma}_j}\int_{\hat{B}_j}\hat{w}_j^{2}\,dx
+\int_{\hat{\Gamma}_j}\int_{\hat{B}_j}|\nabla\hat{w}_j|^p\,dx\,dt
\leq I_0+I_1+I_2+I_3+I_4+I_5,
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
I_0&=C\int_{\Gamma_j}\int_{B_j}\hat{w}_j^p|\nabla\psi_j|^p\eta_j^{p}\,dx\,dt,\\
I_1&=C\int_{\Gamma_j}\int_{B_j}\int_{B_j}\max\{\hat{w}_j(x,t),\hat{w}_j(y,t)\}^p|\psi_j(x)-\psi_j(y)|^p\eta_{j}^p\,d\mu\,dt,\\
I_2&=C\esssup_{x\in\mathrm{supp}\psi_j,\,t\in\Gamma_j}\int_{\mathbb{R}^N\setminus B_j}\frac{\hat{w}_j(y,t)^{p-1}}{|x-y|^{N+ps}}\,dy\int_{\Gamma_j}\int_{B_j}\hat{w}_j\psi_j^{p}\eta_j^{p}\,dx\,dt,\\
I_3&=C\int_{\Gamma_j}\int_{B_j}\hat{w}_j^2\psi_j^{p}|\partial_t\eta_j^p|\,dx\,dt,\\
I_4&=C\int_{\Gamma_j}\int_{B_j}\big(|u|^l\chi_{\{u\geq \hat{k}_j\}}+ \hat{w}_j^{l}\big)\psi_j^{p}\eta_j^{p}\,dx\,dt
\quad\text{and}\\
I_5&=C\int_{\Gamma_j}\int_{B_j}|h|^{l'}\chi_{\{u\geq \hat{k}_j\}}\psi_j^{p}\eta_j^{p}\,dx\,dt
\end{split}
\end{equation*}
for some constant $C=C(p,\Lambda,C_1,C_2,C_3,C_4,l,\alpha)>0$.\\
\textbf{Estimate of $I_0$:} Using \eqref{wp} with $\lambda=p$ we obtain $\hat{w}_j^{p}\leq C\hat{k}^{p-q}2^{(q-p)j}w_j^{q}$ in $\mathcal{Q}_j$ for some constant $C=C(p,q)>0$. Using this fact along with the properties of $\psi_j$ and $\eta_j$ from \eqref{ct1} and \eqref{ct2}, for some constant $C=C(N,p,q,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,h)>0$, we have
\begin{equation}\label{I0}
\begin{split}
I_0&=C\int_{\Gamma_j}\int_{B_j}\hat{w}_j^p|\nabla\psi_j|^p\eta_j^{p}\,dx\,dt
\leq C\frac{2^{pj}}{(1-\theta)^p r^p}\int_{\Gamma_j}\int_{B_j}\hat{w}_j^{p}\,dx\,dt\\
&\leq C\frac{2^{qj}}{\hat{k}^{q-p}(1-\theta)^p r^p}\int_{\Gamma_j}\int_{B_j}w_j^{q}\,dx\,dt.
\end{split}
\end{equation}
\textbf{Estimate of $I_1$:} Choosing $\lambda=p$ in \eqref{wp}, we obtain $\hat{w}_j^p\leq C\hat{k}^{p-q}2^{(q-p)j}w_j^{q}$ in $\mathcal{Q}_j$ for some constant $C=C(p,q)>0$. Using this estimate along with the properties of $\psi_j$ and $\eta_j$ from \eqref{ct1} and \eqref{ct2}, for some constant $C=C(N,p,q,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,h)>0$, we have
\begin{equation}\label{I1}
\begin{split}
I_1&=C\int_{\Gamma_j}\int_{B_j}\int_{B_j}\max\{\hat{w}_j(x,t),\hat{w}_j(y,t)\}^p|\psi_j(x)-\psi_j(y)|^p\eta_{j}^p\,d\mu\,dt\\
&\leq C\frac{2^{pj}}{(1-\theta)^p r^p}\esssup_{x\in B_j}\int_{B_j}\frac{dy}{|x-y|^{N+ps-p}}\int_{\Gamma_j}\int_{B_j}\hat{w}_j^p\,dx\,dt\\
&\leq C\frac{2^{qj}}{\hat{k}^{q-p}(1-\theta)^pr^{ps}}\int_{\Gamma_j}\int_{B_j}w_j^q\,dx\,dt\leq C\frac{2^{qj}}{\hat{k}^{q-p}(1-\theta)^p r^{p}}\int_{\Gamma_j}\int_{B_j}w_j^q\,dx\,dt,
\end{split}
\end{equation}
where to deduce the last line above, we have also used the fact that $r\in(0,1)$.\\
\textbf{Estimate of $I_2$:} Without loss of generality, we assume that $x_0=0$. For $x\in\mathrm{supp}\,\psi_j$ and $y\in\mathbb{R}^N\setminus B_j$, we have
$$
\frac{1}{|x-y|}=\frac{1}{|y|}\frac{|y|}{|x-y|}\leq\frac{1}{|y|}\frac{|x|+|x-y|}{|x-y|}\leq \frac{1}{|y|}\frac{2^{j+4}}{(1-\theta)}.
$$
This implies
\begin{equation}\label{teqn}
\begin{split}
\esssup_{x\in\mathrm{supp}\,\psi_j,\,t\in\Gamma_j}\int_{\mathbb{R}^N\setminus B_j}\frac{\hat{w}_j(y,t)^{p-1}}{|x-y|^{N+ps}}\,dy&\leq
C\frac{2^{j(N+ps)}}{(1-\theta)^{N+ps}}\esssup_{t\in\Gamma_j}\int_{\mathbb{R}^N\setminus B_j}\frac{\hat{w}_j(y,t)^{p-1}}{|y|^{N+ps}}\,dy\\
&\leq C\frac{2^{j(N+ps)}}{(1-\theta)^{N+ps}}\esssup_{t\in\Gamma_j}\int_{\mathbb{R}^N\setminus B_{\theta r}}\frac{\hat{w}_j(y,t)^{p-1}}{|y|^{N+ps}}\,dy\\ &\leq C\frac{2^{j(N+p)}}{r^p\theta^p(1-\theta)^{N+p}}\mathrm{Tail}_{\infty}(u_+;x_0,\theta r,t_0-r^{p},t_0+r^{p})^{p-1},
\end{split}
\end{equation}
for some constant $C=C(N,p,s)>0$, where we also used that $\hat{w}_j\leq u_+$ (which holds since $\hat{k}_j\geq 0$). Again using \eqref{wp} with $\lambda=1$, we obtain $\hat{w}_j\leq C\hat{k}^{1-q}2^{(q-1)j}w_j^{q}$ in $\mathcal{Q}_j$ for some constant $C=C(q)>0$. This fact, together with \eqref{teqn} and the choice of $\hat{k}$ in \eqref{k1}, gives us, for some constant $C=C(N,p,q,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,h)>0$,
\begin{equation}\label{I2}
\begin{split}
I_2&=C\esssup_{x\in\mathrm{supp}\,\psi_j,\,t\in\Gamma_j}\int_{\mathbb{R}^N\setminus B_j}\frac{\hat{w}_j(y,t)^{p-1}}{|x-y|^{N+ps}}\,dy\int_{\Gamma_j}\int_{B_j}\hat{w}_j\psi_j^{p}\eta_j^{p}\,dx\, dt\\
&\leq C\frac{2^{j(N+p+q-1)}}{r^p\theta^p(1-\theta)^{N+p}\hat{k}^{q-p}}\int_{\Gamma_j}\int_{B_j}w_j^{q}\,dx\,dt.
\end{split}
\end{equation}
\textbf{Estimate of $I_3$:} Again, using \eqref{wp} for $\lambda=2$, we get $\hat{w}_j^{2}\leq C\hat{k}^{2-q}2^{(q-2)j}w_j^{q}$ in $\mathcal{Q}_j$ for some constant $C=C(q)>0$.
Using the properties of $\psi_j,\eta_j$, for a constant $C=C(N,p,q,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,h)>0$, we have
\begin{equation}\label{I3}
\begin{split}
I_3&=C\int_{\Gamma_j}\int_{B_j}\hat{w}_j^2\psi_j^{p}|\partial_t\eta_j^p|\,dx\,dt
\leq C\frac{2^{pj}}{(1-\theta)^p r^p}\int_{\Gamma_j}\int_{B_j}\hat{w}_j^{2}\,dx\,dt\\
&\leq C\frac{2^{(p+q-2)j}}{(1-\theta)^p r^p \hat{k}^{q-2}}\int_{\Gamma_j}\int_{B_j}w_j^{q}\,dx\,dt.
\end{split}
\end{equation}
\textbf{Estimate of $I_4$:} From the proof of \cite[Lemma $4.1$, p. $27$]{DZZ}, we have
\begin{equation}\label{I4}
\begin{split}
I_4&=C\int_{\Gamma_j}\int_{B_j}\big(|u|^l\chi_{\{u\geq \hat{k}_j\}}+\hat{w}_j^{l}\big)\psi_j^{p}\eta_j^{p}\,dx\,dt\leq C\frac{2^{qj}}{\hat{k}^{q-l}}\int_{\Gamma_j}\int_{B_j}w_j^{q}\,dx\,dt,
\end{split}
\end{equation}
for a positive constant $C=C(N,p,q,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,h)$.\\
\textbf{Estimate of $I_5$:} Using H\"older's inequality with exponents $\gamma$ and $\gamma'$ along with \eqref{wp}, for a constant $C=C(N,p,q,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,h)>0$, we have
\begin{equation}\label{I5}
\begin{split}
I_5&=C\int_{\Gamma_j}\int_{B_j}|h|^{l'}\chi_{\{u\geq \hat{k}_j\}}\psi_j^{p}\eta_j^{p}\,dx\,dt
\leq C\||h|^{l^{'}}\|_{L^\gamma(\mathcal{Q}_0)}\frac{2^\frac{qj}{\gamma'}}{\hat{k}^\frac{q}{\gamma'}}\bigg(\int_{\Gamma_j}\int_{B_j}w_j^{q}\,dx\,dt\bigg)^\frac{1}{\gamma'}\\
&\leq C\frac{2^\frac{q(N+p\kappa_0)j}{N+p}}{\hat{k}^\frac{q(N+p\kappa_0)}{N+p}}\Big(\int_{\Gamma_j}\int_{B_j}w_j^{q}\,dx\,dt\Big)^\frac{N+p\kappa_0}{N+p},
\end{split}
\end{equation}
where $\kappa_0=1-\frac{p+N}{p\gamma}\in(0,1]$, since $\gamma>\frac{N+p}{p}$; here we have also used the identity $\frac{1}{\gamma'}=1-\frac{1}{\gamma}=\frac{N+p\kappa_0}{N+p}$.
The estimate \eqref{eng1appeqn} follows by combining \eqref{I0}, \eqref{I1}, \eqref{I2}, \eqref{I3}, \eqref{I4} and \eqref{I5} in \eqref{eng1equation}.
\end{proof}
\begin{Lemma}\label{eng1app2}
Let $\frac{2N}{N+2}<p<\infty,\,0<s<1$ and $u\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ be a weak subsolution of \eqref{maineqn} in $\Omega_T$. Assume that $(x_0,t_0)\in\Omega_T$ and $r\in(0,1)$ such that $\mathcal{Q}_r=\mathcal{Q}_r(x_0,t_0)=B_r(x_0)\times(t_0-r^p,t_0+r^p)\Subset\Omega_T$. Let $\max\{p,2\}\leq l<p\kappa$, where $\kappa=\frac{N+2}{N}$, and suppose that $g$ satisfies \eqref{ghypo} with $h\in L^{\gamma l'}_{\mathrm{loc}}(\Omega_T)$ for some $\gamma>\frac{N+p}{p}$.
For $j\in\mathbb N\cup\{0\}$, let $B_j,\hat{B}_j,\Gamma_j,\hat{\Gamma}_j,\mathcal{Q}_j,\hat{\mathcal{Q}}_j$ be given by \eqref{bl}-\eqref{cyll} and $k_j,\hat{k}_j,\hat{k},w_j,\hat{w}_j$ are given by \eqref{k}-\eqref{ct}. Assume that $\frac{1}{2}\leq\theta<1$. Then for any $j\in\N\cup\{0\}$, there exists a positive constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,h)$ such that
\begin{equation}\label{eng1app2eqn}
\begin{split}
&\int_{\Gamma_{j+1}}\int_{B_{j+1}}w_{j+1}^l\,dx\,dt\\
&\leq C\frac{2^{aj}}{r^\frac{\xi l}{\kappa}}\Bigg[\frac{1}{\theta^{\frac{\xi l}{\kappa}}(1-\theta)^\frac{(N+p)\xi l}{p\kappa}}+\frac{1}{(1-\theta)^\frac{\xi l}{\kappa}}\Bigg]\frac{1}{\hat{k}^{l(1-\frac{l}{p\kappa})}}\Bigg[\frac{1}{\hat{k}^{(l-2)\xi}}+\frac{1}{\hat{k}^{(l-p)\xi}}\Bigg]^\frac{l}{p\kappa}\bigg(\int_{\Gamma_j}\int_{B_j}w_j^{l}\,dx dt\bigg)^{1+\frac{l}{\kappa N}}\\
&\qquad+C\frac{2^{aj}}{\hat{k}^{l(1-\frac{l}{p\kappa})}}\bigg(\int_{\Gamma_j}\int_{B_j}w_j^{l}\,dx\,dt\bigg)^{1+\frac{l}{\kappa N}}+C\frac{2^{aj}}{\hat{k}^{{l(1-\frac{l}{p\kappa})}+\frac{l^2}{p\kappa}(1+\frac{\kappa_0 p}{N})}}\bigg(\int_{\Gamma_j}\int_{B_j}w_j^{l}\,dx\,dt\bigg)^{1+\frac{l\kappa_0}{\kappa N}},
\end{split}
\end{equation}
where $\xi=1+\frac{p}{N}$, $a=\xi(N+p+l)$ and $\kappa_0=1-\frac{p+N}{p\gamma}\in(0,1]$.
\end{Lemma}
\begin{proof}
Let $\Phi_j\in C_c^{\infty}(\hat{\mathcal{Q}}_j)$ be such that
\begin{equation}\label{clbd}
0\leq\Phi_j\leq 1\text{ in }\hat{\mathcal{Q}}_j,\quad|\nabla\Phi_j|\leq C\frac{2^j}{(1-\theta)r},\quad\Phi_j\equiv 1\text{ in }\mathcal{Q}_{j+1},
\end{equation}
for some constant $C=C(N,p)>0$. Note that $k_{j+1}\geq\hat{k}_j$ and thus $w_{j+1}\leq\hat{w}_j$. Since $l<p\kappa$, where $\kappa=\frac{N+2}{N}$, and $\mathcal{Q}_{j+1}\subset\hat{\mathcal{Q}}_j$, using H\"older's inequality we obtain
\begin{equation}\label{Semb1app2}
\begin{split}
\int_{\Gamma_{j+1}}\int_{B_{j+1}}w_{j+1}^l\,dx\,dt&\leq\int_{\Gamma_{j+1}}\int_{B_{j+1}}\hat{w}_j^{l}\,dx\,dt\\
&\leq\bigg(\int_{\Gamma_{j+1}}\int_{B_{j+1}}\hat{w}_j^{p\kappa}\,dx\,dt\bigg)^\frac{l}{p\kappa}\bigg(\int_{\Gamma_{j+1}}\int_{B_{j+1}}\chi_{\{u\geq\hat{k}_j\}}\,dx\,dt\bigg)^{1-\frac{l}{p\kappa}}\\
&\leq\bigg(\int_{\hat{\Gamma}_j}\int_{\hat{B}_j}\big(\hat{w}_j\Phi_j\big)^{p\kappa}\,dx\,dt\bigg)^\frac{l}{p\kappa}\bigg(C\frac{2^{aj}}{\hat{k}^l}\int_{\Gamma_j}\int_{B_j}w_j^{l}\,dx\,dt\bigg)^{1-\frac{l}{p\kappa}},
\end{split}
\end{equation}
for a constant $C=C(l)>0$, where $a=\xi(N+p+l)$ with $\xi=1+\frac{p}{N}$. To obtain the last line above in \eqref{Semb1app2}, we have also used the estimate
$$
\int_{\Gamma_{j+1}}\int_{B_{j+1}}\chi_{\{u\geq\hat{k}_j\}}\,dx\,dt
\leq C\frac{2^{lj}}{\hat{k}^l}\int_{\Gamma_j}\int_{B_j}w_j^{l}\,dx\,dt,
$$
for some constant $C=C(l)>0$. This follows by choosing $\lambda=0$ and $q=l$ in \eqref{wp}, which is possible, since $l\geq\max\{p,2\}$. Since $\kappa=1+\frac{2}{N}$, by Lemma \ref{Sobo} (b), for some constant $C=C(p,N)>0$, we have
\begin{equation}\label{Semb1app1}
\int_{\hat{\Gamma}_j}\int_{\hat{B}_j}\big(\hat{w}_j\Phi_j\big)^{p\kappa}\,dx\,dt
\leq C\bigg(\int_{\hat{\Gamma}_j}\int_{\hat{B}_j}|\nabla(\hat{w}_j\Phi_j)|^p\,dx\,dt\bigg)\bigg(\esssup_{\hat{\Gamma}_j}\int_{\hat{B}_j}|\hat{w}_j\Phi_j|^2\,dx\bigg)^\frac{p}{N}.
\end{equation}
Using the properties of $\Phi_j$ and combining the estimates \eqref{Semb1app2} and \eqref{Semb1app1}, for some constant $C=C(N,p,l)>0$, we have
\begin{equation}\label{Semb1app3}
\begin{split}
\int_{\Gamma_{j+1}}\int_{B_{j+1}}w_{j+1}^l\,dx\,dt
&\leq C\big((I+\hat{I})J^\frac{p}{N}\big)^\frac{l}{p\kappa}\bigg(\frac{2^{aj}}{\hat{k}^l}\int_{\Gamma_j}\int_{B_j}w_j^{l}\,dx\,dt\bigg)^{1-\frac{l}{p\kappa}},
\end{split}
\end{equation}
where
$$
I=\int_{\hat{\Gamma}_j}\int_{\hat{B}_j}|\nabla\hat{w}_j|^p\,dx\,dt,
\quad\hat{I}=\int_{\hat{\Gamma}_j}\int_{\hat{B}_j}\hat{w}_j^{p}|\nabla\Phi_j|^p\,dx\,dt\quad\text{and}\quad
J=\esssup_{\hat{\Gamma}_j}\int_{\hat{B}_j}|\hat{w}_j|^2\,dx.
$$
\textbf{Estimate of $\hat{I}$:} Using \eqref{wp} with $\lambda=p$ we obtain $\hat{w}_j^{p}\leq C\hat{k}^{p-q}2^{(q-p)j}w_j^{q}$ in $\mathcal{Q}_j$ for some constant $C=C(p,q)>0$. Using this fact along with the property of $\Phi_j$, for some constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,h)>0$, we have
\begin{equation}\label{Ihat}
\begin{split}
\hat{I}&=\int_{\Gamma_j}\int_{B_j}\hat{w}_j^p|\nabla\Phi_j|^p\,dx\,dt
\leq C\frac{2^{pj}}{(1-\theta)^p r^p}\int_{\Gamma_j}\int_{B_j}\hat{w}_j^{p}\,dx\,dt\\
&\leq C\frac{2^{qj}}{\hat{k}^{q-p}(1-\theta)^p r^p}\int_{\Gamma_j}\int_{B_j}w_j^{q}\,dx\,dt.
\end{split}
\end{equation}
Using the above estimate of $\hat{I}$ from \eqref{Ihat} and choosing $q=l$ in Lemma \ref{eng1app}, for some constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,h)>0$, we have
\begin{equation}\label{IJest}
\begin{split}
I+\hat{I},J&\leq\frac{C}{r^p}\Bigg[\frac{1}{\theta^p(1-\theta)^{N+p}}+\frac{1}{(1-\theta)^p}\Bigg]\Bigg[\frac{2^{(p+l-2)j}}{\hat{k}^{l-2}}+\frac{2^{(N+p+l-1)j}}{\hat{k}^{l-p}}\Bigg]\int_{\Gamma_j}\int_{B_j}w_j^{l}\,dx\,dt\\
&\qquad+C{2^{lj}}\int_{\Gamma_j}\int_{B_j}w_j^{l}\,dx\,dt+C\frac{2^\frac{l(N+p\kappa_0)j}{N+p}}{\hat{k}^\frac{l(N+p\kappa_0)}{N+p}}\bigg(\int_{\Gamma_j}\int_{B_j}w_j^{l}\,dx\,dt\bigg)^\frac{N+p\kappa_0}{N+p}.
\end{split}
\end{equation}
By applying \eqref{IJest} in \eqref{Semb1app3}, we obtain \eqref{eng1app2eqn}. Hence the result follows.
\end{proof}
The Lemma below helps us to conclude the local boundedness result in Theorem \ref{lbthm2}.
\begin{Lemma}\label{lbthm2lemma}
Let $1<p\leq\frac{2N}{N+2},\,0<s<1$ and $u\in L^\infty_{\mathrm{loc}}(\Om_T)\cap L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ be a weak subsolution of \eqref{maineqn} in $\Om_T$. Suppose $(x_0,t_0)\in\Om_T$ and $r\in(0,1)$ such that $\mathcal{Q}_r=\mathcal{Q}_r(x_0,t_0)=B_r(x_0)\times(t_0-r^p,t_0+r^p)\Subset\Om_T$. Assume that $m>\frac{N(2-p)}{p}$, $g$ satisfies \eqref{ghypo} for some $\alpha\geq 0$ with $1<l\leq 2$ and $h\in L^\infty_{\mathrm{loc}}(\Om_T)$. For $j\in\mathbb{N}\cup\{0\}$, let $B_j,\hat{B}_j,\Gamma_j,\hat{\Gamma}_j,\mathcal{Q}_j,\mathcal{\hat{Q}}_j$ be given by \eqref{bl}-\eqref{cyll} and $k_j,\hat{k}_j,\hat{k},w_j,\hat{w}_j$ are given by \eqref{k}-\eqref{ct}. Assume that $\frac{1}{2}\leq\theta<1$. Then for any $j\in\mathbb{N}\cup\{0\}$ there exists a positive constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,m,h)$ such that
\begin{equation}\label{lbthm2lemmaest}
\begin{split}
\int_{B_{j+1}}\int_{\Gamma_{j+1}}w_{j+1}^m\,dx\,dt
&\leq\Bigg[\frac{C}{r^{p(1+\frac{p}{N})}}\frac{2^{aj}}{(1-\theta)^\frac{(N+p)^2}{N}}\Big(\frac{1}{\hat{k}^{m-2}}+\frac{1}{\hat{k}^{m-p}}\Big)^{1+\frac{p}{N}}+\frac{C}{r^p}\frac{2^{aj}}{\hat{k}^{m(1+\frac{p}{N})}}\Bigg]\\
&\qquad\cdot\|\hat{w}_j\|^{m-p\kappa}_{L^\infty(\mathcal{Q}_{j+1})}\bigg(\int_{B_j}\int_{\Gamma_j}w_j^{m}\,dx\,dt\bigg)^{1+\frac{p}{N}},
\end{split}
\end{equation}
where $a=(N+p+m)(1+\frac{p}{N})$ and $\kappa=1+\frac{2}{N}$.
\end{Lemma}
\begin{proof}
Recall that $\mathcal{Q}_j=B_j\times\Gamma_j$ and $\hat{\mathcal{Q}}_j=\hat{B}_j\times\hat{\Gamma}_j$ for $j\in\mathbb{N}\cup\{0\}$. Since $1<p\leq\frac{2N}{N+2}$, we have $\frac{N(2-p)}{p}\geq p\kappa$, so the hypothesis $m>\frac{N(2-p)}{p}$ gives $m>p\kappa$ and we observe that
\begin{equation}\label{O1}
\int_{B_{j+1}}\int_{\Gamma_{j+1}}w_{j+1}^m\,dx\,dt
\leq\int_{B_{j+1}}\int_{\Gamma_{j+1}}\hat{w}_j^{m}\,dx\,dt
\leq\|\hat{w}_j\|^{m-p\kappa}_{L^\infty(\mathcal{Q}_{j+1})}\int_{B_{j+1}}\int_{\Gamma_{j+1}}\hat{w}_j^{p\kappa}\,dx\,dt.
\end{equation}
Below, we estimate the integral
$$
I=\int_{B_{j+1}}\int_{\Gamma_{j+1}}\hat{w}_j^{p\kappa}\,dx\,dt.
$$
By Lemma \ref{Sobo} (b), for some constant $C=C(p,N)>0$, we have
\begin{equation}\label{O2}
\int_{\hat{B}_j}\int_{\hat{\Gamma}_j}\big(\hat{w}_j\Phi_j\big)^{p\kappa}\,dx\,dt
\leq C\int_{\hat{B}_j}\int_{\hat{\Gamma}_j}|\nabla(\hat{w}_j\Phi_j)|^p\,dx\,dt\bigg(\esssup_{\hat{\Gamma}_j}\int_{\hat{B}_j}|\hat{w}_j\Phi_j|^2\,dx\bigg)^\frac{p}{N},
\end{equation}
where $\Phi_j\in C_c^{\infty}(\hat{\mathcal{Q}}_j)$ is such that
\begin{equation}\label{O3}
0\leq\Phi_j\leq 1\text{ in }\hat{\mathcal{Q}}_j,\quad|\nabla\Phi_j|\leq C\frac{2^j}{(1-\theta)r},\quad\Phi_j\equiv 1\text{ in }\mathcal{Q}_{j+1},
\end{equation}
for some constant $C=C(N,p)>0$. Since $\mathcal{Q}_{j+1}\subset\hat{\mathcal{Q}}_j$, from \eqref{O2} we deduce that
\begin{equation}\label{O4}
\int_{B_{j+1}}\int_{\Gamma_{j+1}}\hat{w}_j^{p\kappa}\,dx dt\leq C(I_1+I_2)I_3^\frac{p}{N},
\end{equation}
for some constant $C=C(N,p)>0$, where
$$
I_1=\int_{\hat{B}_j}\int_{\hat{\Gamma}_j}|\nabla\hat{w}_j|^p\,dx\,dt,\quad I_2=\int_{\hat{B}_j}\int_{\hat{\Gamma}_j}\hat{w}_j^{p}|\nabla\Phi_j|^p\,dx\,dt
\quad\text{and}\quad
I_3=\esssup_{\hat{\Gamma}_j}\int_{\hat{B}_j}|\hat{w}_j|^2\,dx.
$$
\textbf{Estimate of $I_1$ and $I_3$:} Since $1<l\leq 2$, $m>\frac{N(2-p)}{p}$ and $h\in L^\infty_{\mathrm{loc}}(\Om_T)$, choosing $q=m$ and $\kappa_0=1$ in Lemma \ref{eng1app}, for a positive constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,m,h)$, we have
\begin{equation}\label{OI13}
\begin{split}
I_i&\leq\frac{C}{r^p}\frac{2^{bj}}{(1-\theta)^{N+p}}\bigg(\frac{1}{\hat{k}^{m-2}}+\frac{1}{\hat{k}^{m-p}}\bigg)\int_{B_j}\int_{\Gamma_j}w_j^{m}\,dx dt+C\frac{2^{bj}}{\hat{k}^m}\int_{B_j}\int_{\Gamma_j}w_j^{m}\,dx dt,
\end{split}
\end{equation}
for $i=1,3$, where one may take $b=N+p+m$, and we have also used the fact that $\frac{1}{2}\leq\theta<1$ and $r\in(0,1)$.
\\
\textbf{Estimate of $I_2$:} Using the properties of $\Phi_j$ from \eqref{O3}, we have
\begin{equation}\label{OI2}
I_2\leq C\frac{2^{pj}}{(1-\theta)^p r^p}\int_{B_j}\int_{\Gamma_j}\hat{w}_j^p\,dx\,dt
\leq\frac{C}{r^p}\frac{2^{mj}}{(1-\theta)^{p}}\frac{1}{\hat{k}^{m-p}}\int_{B_j}\int_{\Gamma_j}w_j^{m}\,dx\,dt,
\end{equation}
for some positive constant $C=C(N,p,m)$, where we have also applied
$$
\hat{w}_j^{p}\leq C\frac{2^{(m-p)j}}{\hat{k}^{m-p}}w_j^{m}\text{ in }\mathcal{Q}_j,
$$
for some $C=C(m,p)>0$, which follows by choosing $\lambda=p$ and $q=m$ in \eqref{wp}. Combining \eqref{OI13} and \eqref{OI2} in \eqref{O4}, for some positive constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4,l,\alpha,m,h)$, we obtain
\begin{equation}\label{Ofinal}
\begin{split}
&\int_{B_{j+1}}\int_{\Gamma_{j+1}}\hat{w}_j^{p\kappa}\,dx\,dt\\
&\leq\Bigg[\frac{C}{r^{p(1+\frac{p}{N})}}\frac{2^{aj}}{(1-\theta)^\frac{(N+p)^2}{N}}\Big(\frac{1}{\hat{k}^{m-2}}+\frac{1}{\hat{k}^{m-p}}\Big)^{1+\frac{p}{N}}+\frac{C}{r^p}\frac{2^{aj}}{\hat{k}^{m(1+\frac{p}{N})}}\Bigg]\bigg(\int_{B_j}\int_{\Gamma_j}w_j^{m}\,dx\,dt\bigg)^{1+\frac{p}{N}},
\end{split}
\end{equation}
where $a=(N+p+m)(1+\frac{p}{N})$. Therefore, using \eqref{Ofinal} in \eqref{O1}, the estimate \eqref{lbthm2lemmaest} follows.
\end{proof}
\section{Semicontinuity and pointwise behavior of subsolutions and supersolutions}
In this section, we obtain lower semicontinuity and upper semicontinuity results for weak supersolutions and subsolutions of \eqref{maineqn}, respectively. We also discuss the pointwise behavior of such semicontinuous functions. First, we define the lower and upper semicontinuous representatives of a measurable function.
Let $u$ be a measurable function which is locally essentially bounded below in $\Om_T$. Suppose that $(x,t)\in\Om_T$ and $r\in(0,1)$, $\theta>0$ such that $\mathcal{Q}_{r,\theta}(x,t)=B_r(x)\times(t-\theta r^p,t+\theta r^p)\Subset\Om_T$. The lower semicontinuous regularization $u_*$ of $u$ is defined as
\begin{equation}\label{lscrp}
u_*(x,t)=\essliminf_{(y,\hat{t})\to (x,t)}\,u(y,\hat{t})=\lim_{r\to 0}\essinf_{\mathcal{Q}_{r,\theta}(x,t)}\,u\text{ for } (x,t)\in \Om_T.
\end{equation}
Analogously, for a locally essentially bounded above measurable function $u$ in $\Om_T$, we define an upper semicontinuous regularization $u^*$ of $u$ by
\begin{equation}\label{uscrp}
u^*(x,t)=\esslimsup_{(y,\hat{t})\to (x,t)}\,u(y,\hat{t})=\lim_{r\to 0}\esssup_{\mathcal{Q}_{r,\theta}(x,t)}\,u\text{ for } (x,t)\in \Om_T.
\end{equation}
It is easy to see that $u_*$ is lower semicontinuous and $u^*$ is upper semicontinuous in $\Om_T$.
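Indeed, for the lower semicontinuity of $u_*$ (the argument for $u^*$ is symmetric), note that if $(y,\hat{t})$ is sufficiently close to $(x,t)$, then $\mathcal{Q}_{\frac{r}{2},\theta}(y,\hat{t})\subset\mathcal{Q}_{r,\theta}(x,t)$, and hence
\[
u_*(y,\hat{t})\geq\essinf_{\mathcal{Q}_{\frac{r}{2},\theta}(y,\hat{t})}\,u\geq\essinf_{\mathcal{Q}_{r,\theta}(x,t)}\,u.
\]
Taking first the limit inferior as $(y,\hat{t})\to(x,t)$ and then letting $r\to 0$ gives $\liminf_{(y,\hat{t})\to(x,t)}u_*(y,\hat{t})\geq u_*(x,t)$.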
Let $u\in L^1_{\mathrm{loc}}(\Om_T)$ and define the set of Lebesgue points of $u$ by
$$
\mathcal{F}=\bigg\{(x,t)\in\Om_T:|u(x,t)|<\infty,\,\lim_{r\to 0}\fint_{\mathcal{Q}_{r,\theta}(x,t)}|u(x,t)-u(y,\hat{t})|\,dy\,d\hat{t}=0\bigg\}.
$$
From the Lebesgue differentiation theorem we have $|\mathcal{F}|=|\Om_T|$.
The following lower semicontinuity result for weak supersolutions of \eqref{maineqn} follows by combining Lemma \ref{DGL1} and Theorem \ref{lscthm}.
\begin{Theorem}\label{lscthm1}(\textbf{Lower semicontinuity})
Let $1<p<\infty,\,0<s<1$ and $g\equiv 0$ in $\Om_T\times\R$. Suppose that $u\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ is a weak supersolution of \eqref{maineqn} in $\Om_T$ such that $u$ is essentially bounded below in $\R^N\times(0,T)$. Let $u_*$ be defined by \eqref{lscrp}. Then $u(x,t)=u_*(x,t)$ at every Lebesgue point $(x,t)\in\Om_T$.
In particular, $u_*$ is a lower semicontinuous representative of $u$ in $\Om_T$.
\end{Theorem}
We have the following upper semicontinuity result for weak subsolutions of \eqref{maineqn}, which follows by combining Lemma \ref{DGLi} and Theorem \ref{uscthmi}.
\begin{Theorem}\label{uscthm1}(\textbf{Upper semicontinuity})
Let $1<p<\infty,\,0<s<1$ and $g\equiv 0$ in $\Om_T\times\R$. Suppose that $u\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ is a weak subsolution of \eqref{maineqn} in $\Om_T$ such that $u$ is essentially bounded above in $\R^N\times(0,T)$. Let $u^*$ be defined by \eqref{uscrp}. Then $u(x,t)=u^*(x,t)$ at every Lebesgue point $(x,t)\in\Om_T$. In particular, $u^*$ is an upper semicontinuous representative of $u$ in $\Om_T$.
\end{Theorem}
Our next result asserts that the lower semicontinuous representative $u_*$, given by Theorem \ref{lscthm1}, is determined by the values of $u$ at earlier times. The proof follows by combining Lemma \ref{DGL2} below with the proof of \cite[Theorem 3.1]{Liao}.
\begin{Theorem}\label{lscpt}(\textbf{Pointwise behavior})
Let $1<p<\infty,\,0<s<1$ and $g\equiv 0$ in $\Om_T\times\R$. Suppose that $u\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ is a weak supersolution of \eqref{maineqn} in $\Om_T$ such that $u$ is essentially bounded below in $\R^N\times(0,T)$.
Assume that $u_*$ is the lower semicontinuous representative of $u$ given by Theorem \ref{lscthm1}.
Then for every $(x,t)\in\Om_T$, we have
$$
u_*(x,t)=\inf_{\theta>0}\lim_{r\to 0}\essinf_{\mathcal{Q}'_{r,\theta}(x,t)}\,u,
$$
where $\mathcal{Q}'_{r,\theta}(x,t)=B_r(x)\times(t-2\theta r^{p},t-\theta r^{p}),\,r\in(0,1)$.
In particular, we have
$$
u_*(x,t)={\essliminf_{(y,\hat{t})\to(x,t),\,\hat{t}<t}}\,u(y,\hat{t})
$$
at every point $(x,t)\in\Om_T$.
\end{Theorem}
Our final result concerns the pointwise behavior of the upper semicontinuous representative given by Theorem \ref{uscthm1}. The proof follows by a combination of Lemma \ref{DGLii} below and proceeding similarly as in the proof of \cite[Theorem 3.1]{Liao}.
\begin{Theorem}\label{uscpt}(\textbf{Pointwise behavior})
Let $1<p<\infty,\,0<s<1$ and $g\equiv 0$ in $\Om_T\times\R$. Suppose that $u\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ is a weak subsolution of \eqref{maineqn} in $\Om_T$ such that $u$ is essentially bounded above in $\R^N\times(0,T)$.
Assume that $u^*$ is the upper semicontinuous representative of $u$ given by Theorem \ref{uscthm1}.
Then for every $(x,t)\in\Om_T$, we have
$$
u^*(x,t)=\sup_{\theta>0}\lim_{r\to 0}\esssup_{\mathcal{Q}'_{r,\theta}(x,t)}\,u,
$$
where $\mathcal{Q}'_{r,\theta}(x,t)=B_r(x)\times(t-2\theta r^{p},t-\theta r^{p}),\,r\in(0,1)$.
In particular, we have
$$
u^*(x,t)={\esslimsup_{(y,\hat{t})\to(x,t),\,\hat{t}<t}}\,u(y,\hat{t})
$$
at every point $(x,t)\in\Om_T$.
\end{Theorem}
\subsection{Preliminaries}
The following measure theoretic property from \cite{Liao} will be useful for us.
\begin{Definition}\label{propertyP} Let $u$ be a measurable function which is locally essentially bounded below in $\Om_T$. Assume that $(x_0,t_0)\in\Om_T$ and $r\in(0,1)$, $\theta>0$ such that $\mathcal{Q}_{r,\theta}(x_0,t_0)=B_r(x_0)\times(t_0-\theta r^p,t_0+\theta r^p)\Subset\Om_T$. Suppose
\begin{equation}\label{lmu}
a,c\in(0,1),\quad M>0,\quad \mu^-\leq\essinf_{\mathcal{Q}_{r,\theta}(x_0,t_0)}\,u.
\end{equation}
We say that $u$ satisfies the property $(\mathcal D)$ if there exists a constant $\nu\in(0,1)$,
which depends only on $a,M,\theta,\mu^-$ and other data, but is independent of $r$, such that if
\begin{equation}\label{lgc}
|\{u\leq\mu^-+M\}\cap\mathcal{Q}_{r,\theta}(x_0,t_0)|\leq\nu|\mathcal{Q}_{r,\theta}(x_0,t_0)|,
\end{equation}
then
\begin{equation}\label{lr}
u\geq\mu^-+aM\text{ a.e. in }\mathcal{Q}_{cr,\theta}(x_0,t_0).
\end{equation}
\end{Definition}
Next, we state a result from Liao in \cite[Theorem 2.1]{Liao} that shows that any such function with the property $(\mathcal D)$ has a lower semicontinuous representative.
\begin{Theorem}\label{lscthm}
Let $u$ be a measurable function in $\Om_T$ which is locally integrable and locally essentially bounded below in $\Om_T$. Assume that $u$ satisfies the property $(\mathcal D)$. Let $u_*$ be defined by \eqref{lscrp}. Then $u(x,t)=u_*(x,t)$ for every $(x,t)\in \mathcal{F}$.
In particular, $u_*$ is a lower semicontinuous representative of $u$ in $\Om_T$.
\end{Theorem}
Next we state another useful measure theoretic property, which will help us to study weak subsolutions of \eqref{maineqn}.
\begin{Definition}\label{propertyE} Let $u$ be a measurable function which is locally essentially bounded above in $\Om_T$. Assume that $(x_0,t_0)\in\Om_T$ and $r\in(0,1)$, $\theta>0$ such that $\mathcal{Q}_{r,\theta}(x_0,t_0)=B_r(x_0)\times(t_0-\theta r^p,t_0+\theta r^p)\Subset\Om_T$. Suppose
\begin{equation}\label{umu}
a,c\in(0,1),\quad M>0,\quad \esssup_{\mathcal{Q}_{r,\theta}(x_0,t_0)}\,u\leq\mu^+.
\end{equation}
We say that $u$ satisfies the property $(\mathcal E)$ if there exists a constant $\nu\in(0,1)$,
which depends only on $a,M,\theta,\mu^+$ and other data, but is independent of $r$, such that if
\begin{equation}\label{ugc}
|\{u\geq\mu^+-M\}\cap\mathcal{Q}_{r,\theta}(x_0,t_0)|\leq\nu|\mathcal{Q}_{r,\theta}(x_0,t_0)|,
\end{equation}
then
\begin{equation}\label{ur}
u\leq\mu^+-aM\text{ a.e. in }\mathcal{Q}_{cr,\theta}(x_0,t_0).
\end{equation}
\end{Definition}
Our next result shows that any such function with the property $(\mathcal E)$ has an upper semicontinuous representative. The proof of Theorem \ref{uscthmi} stated below is analogous to the proof of \cite[Theorem 2.1]{Liao}.
\begin{Theorem}\label{uscthmi}
Let $u$ be a measurable function in $\Om_T$ which is locally integrable and locally essentially bounded above in $\Om_T$. Assume that $u$ satisfies the property $(\mathcal E)$. Let $u^*$ be defined by \eqref{uscrp}.
Then $u(x,t)=u^*(x,t)$ for every $(x,t)\in \mathcal{F}$.
In particular, $u^*$ is an upper semicontinuous representative of $u$ in $\Om_T$.
\end{Theorem}
\subsection{De Giorgi Lemmas for weak supersolutions}
Assume that $(x_0,t_0)\in\Om_T$ and $r\in(0,1)$, $\theta>0$ such that $\mathcal{Q}_{r,\theta}(x_0,t_0)=B_r(x_0)\times(t_0-\theta r^p,t_0+\theta r^p)\Subset\Om_T$. Let $a,\mu^-$ and $M$ be defined as in \eqref{lmu}. Then the following De Giorgi type lemma shows that a weak supersolution satisfies the property $(\mathcal D)$ in Definition \ref{propertyP}.
\begin{Lemma}\label{DGL1}
Let $1<p<\infty,\,0<s<1$ and $g\equiv 0$ in $\Om_T\times\R$. Suppose that $u\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ is a weak supersolution of \eqref{maineqn} in $\Om_T$ such that $u$ is essentially bounded below in $\R^N\times(0,T)$ and let
\begin{equation}\label{lmbc}
\lambda^-\leq\essinf_{\mathbb{R}^N\times(0,T)}\,u.
\end{equation}
Then there exists a constant $\nu=\nu(a,M,\mu^-,\lambda^-,\theta,N,p,s,\Lambda,C_1,C_2,C_3,C_4)\in(0,1)$, such that if
$$
|\{u\leq\mu^-+M\}\cap \mathcal{Q}_{r,\theta}(x_0,t_0)|\leq\nu|\mathcal{Q}_{r,\theta}(x_0,t_0)|,
$$
then
$$
u\geq\mu^-+aM\text{ a.e. in } \mathcal{Q}_{\frac{3r}{4},\theta}(x_0,t_0).
$$
\end{Lemma}
\begin{proof}
For $j\in\mathbb N\cup\{0\}$, let
\begin{equation}\label{itelsc}
\begin{split}
&k_j=\mu^-+aM+\frac{(1-a)M}{2^j},\quad\hat{k}_j=\frac{k_j+k_{j+1}}{2},\\
&r_j=\frac{3r}{4}+\frac{r}{2^{j+2}},\quad\hat{r}_j=\frac{r_j+r_{j+1}}{2},\\
&B_j=B_{r_j}(x_0),\quad\hat{B}_{j}=B_{\hat{r}_j}(x_0),\\
&\Gamma_j=(t_0-\theta r_j^{p},t_0+\theta r_j^{p}),\quad\hat{\Gamma}_j=(t_0-\theta\hat{r}_j^p,t_0+\theta\hat{r}_{j}^p)\\
&\mathcal{Q}_j=B_j\times\Gamma_j,\quad\mathcal{\hat{Q}}_j=\hat{B}_j\times\hat{\Gamma}_j,\quad A_j=\mathcal{Q}_j\cap\{u\leq k_j\}.
\end{split}
\end{equation}
Notice that $r_{j+1}<\hat{r}_j<r_j$, $k_{j+1}<\hat{k}_j<k_j$ for all $j\in\mathbb N\cup\{0\}$ and therefore, we have
$B_{j+1}\subset\hat{B}_j\subset B_j$ and $\Gamma_{j+1}\subset\hat{\Gamma}_j\subset\Gamma_j$.
Let $\{\Phi_j\}_{j=0}^{\infty}\subset C_c^{\infty}(\hat{\mathcal{Q}}_j)$ be such that
\begin{equation}\label{Philsc}
0\leq\Phi_j\leq 1,\quad |\nabla\Phi_j|\leq C\frac{2^j}{r}\text{ in }\hat{\mathcal{Q}}_j
\quad\text{and}\quad \Phi_j\equiv 1\text{ in }\mathcal{Q}_{j+1},
\end{equation}
for some constant $C=C(N,p)>0$. Notice that, over the set $A_{j+1}=\mathcal{Q}_{j+1}\cap\{u\leq k_{j+1}\}$, we have $\hat{k}_j-k_{j+1}\leq \hat{k}_j-u$.
By integrating over the set $A_{j+1}$ and using Lemma \ref{Sobo} (b), we obtain
\begin{equation}\label{Sobolsc}
\begin{split}
(1-a)\frac{M}{2^{j+3}}|A_{j+1}|
&\leq\int_{A_{j+1}}(\hat{k}_j-k_{j+1})\,dx\,dt
\leq\int_{\mathcal{Q}_{j+1}}(u-\hat{k}_j)_-\,dx\,dt
\leq\int_{\mathcal{\hat{Q}}_j}(u-\hat{k}_j)_-\Phi_j\,dx\,dt\\
&\leq\bigg(\int_{\hat{\mathcal{Q}}_j}\big((u-\hat{k}_j)_-\Phi_j\big)^{p(1+\frac{2}{N})}\,dx\,dt\bigg)^\frac{N}{p(N+2)}|A_j|^{1-\frac{N}{p(N+2)}}\\
&\leq C(I+J)^\frac{N}{p(N+2)}\hat{K}^\frac{1}{N+2}|A_j|^{1-\frac{N}{p(N+2)}},
\end{split}
\end{equation}
for some constant $C=C(N,p)>0$, where
$$
I=\int_{\hat{\mathcal{Q}}_j}|\nabla (u-\hat{k}_j)_-|^p\,dx\,dt,
\quad
J=\int_{\hat{\mathcal{Q}}_j}(u-\hat{k}_j)_{-}^{p}|\nabla\Phi_j|^p\,dx\,dt
\quad\text{and}\quad
\hat{K}=\esssup_{t\in\hat{\Gamma}_j}\int_{\hat{B}_j}(u-\hat{k}_{j})_-^{2}\,dx.
$$
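Here, the penultimate inequality in \eqref{Sobolsc} is H\"older's inequality with exponents $p\kappa$ and $(p\kappa)'$, where $\kappa=1+\frac{2}{N}$ (so that $\frac{1}{p\kappa}=\frac{N}{p(N+2)}$), together with the observation that $(u-\hat{k}_j)_-\Phi_j$ vanishes outside $A_j$; the last inequality follows from Lemma \ref{Sobo} (b) applied to $(u-\hat{k}_j)_-\Phi_j$, combined with $0\leq\Phi_j\leq1$ and
\[
\big|\nabla\big((u-\hat{k}_j)_-\Phi_j\big)\big|^p\leq 2^{p-1}\Big(|\nabla(u-\hat{k}_j)_-|^p+(u-\hat{k}_j)_-^{p}|\nabla\Phi_j|^p\Big).
\]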
Note that, due to the assumption \eqref{lmbc}, we know $\lambda^-\leq\essinf_{\mathbb{R}^N\times(0,T)}\,u$, which, together with $\hat{k}_j\leq k_0=\mu^-+M$, gives
\begin{equation}\label{eqn}
(u-\hat{k}_j)_-\leq(\mu^-+M-\lambda^-)_+:=L \quad\text{in}\quad\mathbb{R}^N\times(0,T).
\end{equation}
\textbf{Estimate of $J$:} Using the properties of $\Phi_j$ and \eqref{eqn}, we get
\begin{equation}\label{Jlsc1}
\begin{split}
J&=\int_{\hat{\mathcal{Q}}_j}(u-\hat{k}_j)_{-}^{p}|\nabla\Phi_j|^p\,dx\,dt\leq C\frac{2^{jp}}{r^p}L^p|A_j|,
\end{split}
\end{equation}
for some constant $C=C(N,p)>0$.\\
\textbf{Estimate of $I$ and $\hat{K}$:} Let $\xi_j=\psi_j\eta_j$, where $\{\psi_j\}_{j=0}^{\infty}\subset C_{c}^\infty(B_j)$ and $\{\eta_j\}_{j=0}^{\infty}\subset C_{c}^\infty(\Gamma_j)$ be such that
\begin{equation}\label{lscctof}
\begin{split}
&0\leq\psi_j\leq 1,\quad|\nabla\psi_j|\leq C\frac{2^j}{r}\text{ in }B_j,\quad\psi_j\equiv 1\text{ in }\hat{B}_j,\quad\mathrm{dist}(\mathrm{supp}\,\psi_j,\mathbb{R}^N\setminus B_j)\geq 2^{-j-1}r,\\
&0\leq\eta_j\leq 1,\quad|\partial_t\eta_j|\leq C\frac{2^{pj}}{\theta r^p}\text{ in }\Gamma_j,\quad\eta_j\equiv 1\text{ in }\hat{\Gamma}_j,
\end{split}
\end{equation}
for some constant $C=C(N,p)>0$. Note that $g\equiv 0$ in $\Om_T\times\R$ and for $\hat{k}_j<k_j$, we have $(u-k_j)_-\geq (u-\hat{k}_j)_-$. Therefore, noting Lemma \ref{Auxfnlemma} and Remark \ref{eng1rmk2}, we set $r=r_j$, $\tau_1=t_0-\theta \hat{r}_j^{p}$, $\tau_2=t_0+\theta r_j^{p}$, $\tau=\theta r_j^{p}-\theta \hat{r}_j^{p}$ in Lemma \ref{eng1} to obtain
\begin{equation}\label{lscengeqn1}
\begin{split}
&I+\hat{K}=\int_{\hat{\Gamma}_j}\int_{\hat{B}_j}|\nabla (u-\hat{k}_j)_-|^p\,dx dt+\esssup_{\hat{\Gamma}_j}\int_{\hat{B}_j}(u-\hat{k}_j)_-^{2}\,dx\leq I_1+I_2+I_3+I_4,
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
I_1&=C\int_{\Gamma_j}\int_{B_j}\int_{B_j}{\max\{(u-k_j)_{-}(x,t),(u-k_j)_{-}(y,t)\}^p|\xi_j(x,t)-\xi_j(y,t)|^p}\,d\mu\,dt,\\
I_2&=C\int_{\Gamma_j}\int_{B_j}(u-k_j)_-^p|\nabla\xi_j|^p\,dx\,dt,\\
I_3&=C\esssup_{x\in\mathrm{supp}\,\psi_j,\,t\in\Gamma_j}\int_{{\mathbb{R}^N\setminus B_j}}{\frac{(u-k_j)_{-}(y,t)^{p-1}}{|x-y|^{N+ps}}}\,dy
\int_{\Gamma_j}\int_{B_j}(u-k_j)_{-}\xi_{j}^p\,dx\,dt
\quad\text{and}\\
I_4&=C\int_{\Gamma_j}\int_{B_j}(u-k_j)_-^{2}|\partial_t\xi_j^{p}|\,dx\,dt,
\end{split}
\end{equation*}
for some constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)>0$.\\
\textbf{Estimate of $I_1$:} Using \eqref{eqn}, the properties of $\xi_j$ and $r\in(0,1)$, we have
\begin{equation}\label{I1lsc}
\begin{split}
I_1&=C\int_{\Gamma_j}\int_{B_j}\int_{B_j}{\max\{(u-k_j)_{-}(x,t),(u-k_j)_{-}(y,t)\}^p|\xi_j(x,t)-\xi_j(y,t)|^p}\,d\mu\,dt\\
&\leq C\frac{2^{jp}}{r^{ps}}L^p|A_j|\leq C\frac{2^{jp}}{r^{p}}L^p|A_j|,
\end{split}
\end{equation}
for some constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)>0$.\\
\textbf{Estimate of $I_2$:} Again, using \eqref{eqn} and the properties of $\xi_j$, we deduce that
\begin{equation}\label{I2lsc}
\begin{split}
I_2&=C\int_{\Gamma_j}\int_{B_j}(u-k_j)_-^p|\nabla\xi_j|^p\,dx\,dt\leq C\frac{2^{jp}}{r^{p}}L^p|A_j|,
\end{split}
\end{equation}
for some constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)>0$.\\
\textbf{Estimate of $I_3$:} Without loss of generality, we assume that $x_0=0$. Then for every $x\in\mathrm{supp}\,\psi_{j}$ and every $y\in\mathbb{R}^N\setminus B_j$, we observe that
\begin{equation}\label{I3lscnl}
\frac{1}{|x-y|}=\frac{1}{|y|}\frac{|x-(x-y)|}{|x-y|}\leq\frac{1}{|y|}(1+2^{j+3})\leq\frac{2^{j+4}}{|y|}.
\end{equation}
Using \eqref{eqn}, $r\in(0,1)$ and $0\leq\xi_j\leq 1$, we have
\begin{equation}\label{I3lsc}
\begin{split}
I_3&=C\esssup_{x\in\mathrm{supp}\,\psi_j,\,t\in\Gamma_j}\int_{{\mathbb{R}^N\setminus B_j}}{\frac{(u-k_j)_{-}(y,t)^{p-1}}{|x-y|^{N+ps}}}\,dy
\int_{\Gamma_j}\int_{B_j}(u-k_j)_{-}\xi_{j}^p\,dx\,dt\\
&\leq C\frac{2^{j(N+p)}}{r^{p}}L^p|A_j|,
\end{split}
\end{equation}
for some constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)>0$.\\
\textbf{Estimate of $I_4$:} Using \eqref{eqn} along with the properties of $\xi_j$, we have
\begin{equation}\label{I4lsc}
\begin{split}
I_4&=C\int_{\Gamma_j}\int_{B_j}(u-k_j)_{-}^{2}|\partial_t\xi_j^p|\,dx\,dt\\
&\leq C\frac{2^{jp}}{\theta r^{p}}\int_{\Gamma_j}\int_{B_j}(u-k_j)_{-}^2\,dx\, dt\leq C\frac{2^{jp}}{\theta r^{p}} L^{2}|A_j|,
\end{split}
\end{equation}
for some constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)>0$.
Inserting \eqref{I1lsc}, \eqref{I2lsc}, \eqref{I3lsc} and \eqref{I4lsc} in \eqref{lscengeqn1}, we arrive at
\begin{equation}\label{KIlsc1}
\begin{split}
I+\hat{K}\leq C\frac{2^{j(N+p)}}{r^p}L^p\Big(1+\frac{L^{2-p}}{\theta}\Big)|A_j|,
\end{split}
\end{equation}
for some constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)>0$. Using \eqref{Jlsc1} and \eqref{KIlsc1} in \eqref{Sobolsc}, we obtain
\begin{equation}\label{IJKlsc1}
\begin{split}
|A_{j+1}|\leq C\frac{2^{j((N+p)^2+1)}L^\frac{N+p}{N+2}}{(1-a)M} \Big(1+\frac{L^{2-p}}{\theta}\Big)^\frac{N+p}{p(N+2)}\frac{|A_j|^{1+\frac{1}{N+2}}}{r^\frac{N+p}{N+2}},
\end{split}
\end{equation}
for some constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)>0$. Dividing both sides of \eqref{IJKlsc1} by $|\mathcal{Q}_{j+1}|$ and setting $Y_j=\frac{|A_j|}{|\mathcal{Q}_j|}$, we get
$$
Y_{j+1}\leq 2K b^{j}Y_j^{1+\delta},
$$
where
$$
K=\frac{C(\theta L^{N+p})^\frac{1}{N+2}}{2(1-a)M} \Big(1+\frac{L^{2-p}}{\theta}\Big)^\frac{N+p}{p(N+2)},
\quad\delta=\frac{1}{N+2}
\quad\text{and}\quad
b=2^{(N+p)^2+1}>1.
$$
We define $\nu=(2K)^{-\frac{1}{\delta}}b^{-\frac{1}{\delta^2}}$, which depends only on $a,M,\mu^-,\lambda^-,\theta,N,p,s,\Lambda,C_1,C_2,C_3,C_4$. If $Y_0\leq\nu$, then by Lemma \ref{ite} we have $\lim_{j\to\infty}Y_j=0$. Since $k_j\downarrow\mu^-+aM$ and $\mathcal{Q}_j\downarrow\mathcal{Q}_{\frac{3r}{4},\theta}(x_0,t_0)$ as $j\to\infty$, this gives $u\geq\mu^-+aM$ a.e. in $\mathcal{Q}_{\frac{3r}{4},\theta}(x_0,t_0)$, which completes the proof.
\end{proof}
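\begin{Remark}
In the last step, Lemma \ref{ite} was applied to the recursive inequality $Y_{j+1}\leq 2Kb^{j}Y_j^{1+\delta}$ with $b>1$. For the reader's convenience, we note that the conclusion can also be verified directly from this inequality: if $Y_0\leq\nu=(2K)^{-\frac{1}{\delta}}b^{-\frac{1}{\delta^2}}$, then one checks by induction that
\[
Y_j\leq b^{-\frac{j}{\delta}}\,Y_0,\qquad j\in\mathbb{N}\cup\{0\},
\]
since
\[
Y_{j+1}\leq 2Kb^{j}\big(b^{-\frac{j}{\delta}}Y_0\big)^{1+\delta}=2K\,Y_0^{\delta}\,b^{-\frac{j}{\delta}}\,Y_0\leq 2K\,\nu^{\delta}\,b^{-\frac{j}{\delta}}\,Y_0=b^{-\frac{j+1}{\delta}}\,Y_0.
\]
As $b>1$, this gives $\lim_{j\to\infty}Y_j=0$.
\end{Remark}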
Recalling that $a,\mu^-$ and $M$ are defined as in \eqref{lmu}, we prove our second De Giorgi Lemma.
\begin{Lemma}\label{DGL2}
Let $1<p<\infty,\,0<s<1$ and $g\equiv 0$ in $\Om_T\times\R$. Suppose that $u\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ is a weak supersolution of \eqref{maineqn} in $\Om_T$ such that $u$ is essentially bounded below in $\R^N\times(0,T)$ and let
\begin{equation}\label{lmbcc}
\lambda^-\leq\essinf_{\mathbb{R}^N\times(0,T)}\,u.
\end{equation}
Then there exists a constant $\theta=\theta(a,M,\mu^-,\lambda^-,N,p,s,\Lambda,C_1,C_2,C_3,C_4)>0$ such that if $t_0$ is a Lebesgue point of $u$ and
$$
u(\cdot,t_0)\geq \mu^-+M\text{ a.e. in }B_r(x_0),
$$
then
$$
u\geq\mu^-+aM\text{ a.e. in }\mathcal{Q}^{+}_{\frac{3r}{4},\theta}(x_0,t_0)=B_{\frac{3r}{4}}(x_0)\times\big(t_0,t_0+\theta\big(\tfrac{3r}{4}\big)^p\big).
$$
\end{Lemma}
\begin{proof}
For $j\in\N\cup\{0\}$, we define $k_j,\hat{k}_j,r_j,\hat{r}_j,B_j,\hat{B}_j$ as in \eqref{itelsc} and for $\theta>0$, let us set
\begin{equation}\label{itelsc2}
\begin{split}
\Gamma_j=(t_0,t_0+\theta r_j^{p}),\quad\mathcal{Q}_j=B_j\times\Gamma_j,\quad \mathcal{\hat{Q}}_j=\hat{B}_j\times\Gamma_j,\quad A_j=\mathcal{Q}_j\cap\{u\leq k_j\}.
\end{split}
\end{equation}
Therefore, for all $j\in\mathbb N\cup\{0\}$ we have
$
B_{j+1}\subset\hat{B}_j\subset B_j,\,\Gamma_{j+1}\subset\Gamma_j.
$
Let $\{\Phi_j\}_{j=0}^{\infty}\subset C_c^{\infty}(\hat{\mathcal{Q}}_j)$ be as defined in \eqref{Philsc}. Notice that, over the set $A_{j+1}=\mathcal{Q}_{j+1}\cap\{u\leq k_{j+1}\}$, we have $\hat{k}_j-k_{j+1}\leq \hat{k}_j-u$. Hence integrating over the set $A_{j+1}$ as in the proof of \eqref{Sobolsc}, we obtain
\begin{equation}\label{Sobolsc2}
\begin{split}
(1-a)\frac{M}{2^{j+3}}|A_{j+1}|&\leq C(I+J)^\frac{N}{p(N+2)}\hat{K}^\frac{1}{N+2}|A_j|^{1-\frac{N}{p(N+2)}},
\end{split}
\end{equation}
for some constant $C=C(N,p)>0$, where
$$
I=\int_{\hat{\mathcal{Q}}_j}|\nabla (u-\hat{k}_j)_-|^p\,dx\,dt,
\quad
J=\int_{\hat{\mathcal{Q}}_j}(u-\hat{k}_j)_{-}^{p}|\nabla\Phi_j|^p\,dx\,dt
\quad\text{and}\quad
\hat{K}=\esssup_{t\in\Gamma_j}\int_{\hat{B}_j}(u-\hat{k}_{j})_{-}^2\,dx.
$$
Due to \eqref{lmbcc}, as in \eqref{eqn}, we get $(u-\hat{k}_j)_-\leq (\mu^-+M-\lambda^-)_+:=L$ in $\R^N\times(0,T)$.\\
\textbf{Estimate of $J$:} From the proof of \eqref{Jlsc1}, we have
\begin{equation}\label{Jlsc2}
\begin{split}
J&=\int_{\hat{\mathcal{Q}}_j}(u-\hat{k}_j)_{-}^{p}|\nabla\Phi_j|^p\,dx\,dt\leq C\frac{2^{jp}}{r^p}L^p|A_j|,
\end{split}
\end{equation}
for some constant $C=C(N,p)>0$.\\
\textbf{Estimate of $I$ and $\hat{K}$:} Let $\xi_j(x,t)=\xi_j(x)$ be a time independent smooth function with compact support in $B_j$ such that $0\leq\xi_j\leq 1$, $|\nabla\xi_j|\leq C\frac{2^j}{r}$ in $\mathcal{Q}_j$, $\mathrm{dist}(\mathrm{supp}\,\xi_j,\mathbb{R}^N\setminus B_j)\geq 2^{-j-1}r$ and $\xi_j\equiv 1$ in $\hat{B}_j$ for some constant $C=C(N,p)>0$. Therefore, $\partial_t\xi_j=0$. Also, since $\hat{k}_j<\mu^-+M$, due to the hypothesis $u(\cdot,t_0)\geq\mu^-+M$ a.e. in $B_r(x_0)$, we deduce that $(u-\hat{k}_j)_-(\cdot,t_0)=0$ a.e. in $B_r(x_0)$. Noting these facts along with $g\equiv 0$ in $\Om_T\times\R$ and $(u-k_j)_-\geq (u-\hat{k}_j)_-$, by Lemma \ref{Auxfnlemma}, Remark \ref{eng1rmk2} and Lemma \ref{eng1}, we obtain
\begin{equation}\label{KIlsc2}
\begin{split}
\hat{K}+I&=\esssup_{\Gamma_j}\int_{\hat{B}_j}(u-\hat{k}_j)_-^{2}\,dx+\int_{\Gamma_j}\int_{\hat{B}_j}|\nabla (u-\hat{k}_j)_-|^p\,dx\,dt\leq J_1+J_2+J_3,
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
J_1&=C\int_{\Gamma_j}\int_{B_j}\int_{B_j}{\max\{(u-k_j)_{-}(x,t),(u-k_j)_{-}(y,t)\}^p|\xi_j(x,t)-\xi_j(y,t)|^p}\,d\mu\,dt,\\
J_2&=C\int_{\Gamma_j}\int_{B_j}(u-k_j)_-^p|\nabla\xi_j|^p\,dx\,dt
\quad\text{and}\\
J_3&=C\esssup_{x\in\mathrm{supp}\,\xi_j,\,t\in\Gamma_j}\int_{{\mathbb{R}^N\setminus B_j}}{\frac{(u-k_j)_{-}(y,t)^{p-1}}{|x-y|^{N+ps}}}\,dy
\int_{\Gamma_j}\int_{B_j}(u-k_j)_{-}\xi_{j}^p\,dx\,dt,
\end{split}
\end{equation*}
for some positive constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)$. From \eqref{I1lsc}, \eqref{I2lsc} and \eqref{I3lsc}, it follows that
\begin{equation}\label{KIlsc223}
\hat{K}+I\leq J_1+J_2+J_3\leq C\frac{2^{j(N+p)}}{r^{p}}L^p|A_j|,
\end{equation}
for some positive constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)$. Now employing \eqref{Jlsc2} and \eqref{KIlsc223} in \eqref{Sobolsc2}, by setting $Y_j=\frac{|A_j|}{|\mathcal{Q}_j|}$ we obtain
\begin{equation}\label{itelsc2new}
Y_{j+1}\leq C\frac{(\theta L^{N+p})^\frac{1}{N+2} }{(1-a)M} 2^{j((N+p)^2+1)}Y_j^{1+\frac{1}{N+2}},
\end{equation}
for some positive constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)$. Letting
\[
d_0=\frac{CL^\frac{N+p}{N+2}}{(1-a)M},\quad b=2^{(N+p)^2+1},\quad\delta_2=\delta_1=\delta=\frac{1}{N+2}
\quad\text{and}\quad
K=\frac{d_0\,\theta^\delta}{2}
\]
in Lemma \ref{ite}, we obtain $\lim_{j\to\infty}Y_j=0$ provided
$Y_0\leq\nu=(2K)^{-\frac{1}{\delta}}b^{-\frac{1}{\delta^2}}$.
Let $\beta\in(0,1)$. Then, choosing $\theta=\beta\,d_0^{-\frac{1}{\delta}}\,b^{-\frac{1}{\delta^2}}$, which depends on $a,M,\mu^-,\lambda^-,N,p,s,\Lambda,C_1,C_2,C_3,C_4$, we get $\nu=\beta^{-1}>1$.
Hence $Y_0\leq 1<\nu$, and Lemma \ref{ite} implies that
$\lim_{j\to\infty}Y_j=0$.
Therefore, we have
\[
u\geq\mu^-+aM
\text{ a.e. in }
\mathcal{Q}_{\frac{3r}{4},\theta}^{+}(x_0,t_0).
\]
Hence the result follows.
\end{proof}
\subsection{De Giorgi Lemmas for weak subsolutions} In this subsection, we prove De Giorgi lemmas for weak subsolutions of \eqref{maineqn}. To this end, we assume that $(x_0,t_0)\in\Om_T$, $r\in(0,1)$ and $\theta>0$ such that $\mathcal{Q}_{r,\theta}(x_0,t_0)=B_r(x_0)\times(t_0-\theta r^p,t_0+\theta r^p)\Subset\Om_T$. Let $a,\mu^+$ and $M$ be defined as in \eqref{umu}. The following De Giorgi type lemma shows that a weak subsolution satisfies the property $(\mathcal E)$ in Definition \ref{propertyE}.
\begin{Lemma}\label{DGLi}
Let $1<p<\infty,\,0<s<1$ and $g\equiv 0$ in $\Om_T\times\R$. Suppose that $u\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ is a weak subsolution of \eqref{maineqn} in $\Om_T$ such that $u$ is essentially bounded above in $\R^N\times(0,T)$ and let
\begin{equation}\label{umbc}
\esssup_{\mathbb{R}^N\times(0,T)}\,u\leq\lambda^+.
\end{equation}
Then there exists a constant $\nu=\nu(a,M,\mu^+,\lambda^+,\theta,N,p,s,\Lambda,C_1,C_2,C_3,C_4)\in(0,1)$, such that if
$$
|\{u\geq\mu^+-M\}\cap \mathcal{Q}_{r,\theta}(x_0,t_0)|\leq\nu|\mathcal{Q}_{r,\theta}(x_0,t_0)|,
$$
then
$$
u\leq\mu^+-aM\text{ a.e. in } \mathcal{Q}_{\frac{3r}{4},\theta}(x_0,t_0).
$$
\end{Lemma}
\begin{proof}
For $j\in\mathbb N\cup\{0\}$, let
\begin{equation}\label{iteusc}
\begin{split}
&k_j=\mu^+-aM-\frac{(1-a)M}{2^j}
\end{split}
\end{equation}
and $\hat{k}_j,r_j,\hat{r}_j,B_j,\hat{B}_j,\Gamma_j,\hat{\Gamma}_j,\mathcal{Q}_j,\mathcal{\hat{Q}}_j$ be as defined in \eqref{itelsc}. Here we set $A_j=\mathcal{Q}_j\cap\{u\geq k_j\}.$ Notice that $r_{j+1}<\hat{r}_j<r_j$, $k_{j+1}>\hat{k}_j>k_j$ for all $j\in\mathbb N\cup\{0\}$ and therefore, we have
$B_{j+1}\subset\hat{B}_j\subset B_j$ and $\Gamma_{j+1}\subset\hat{\Gamma}_j\subset\Gamma_j$.
Let $\{\Phi_j\}_{j=0}^{\infty}\subset C_c^{\infty}(\hat{\mathcal{Q}}_j)$ be as defined in \eqref{Philsc}.
Notice that, over the set $A_{j+1}=\mathcal{Q}_{j+1}\cap\{u\geq k_{j+1}\}$, we have ${k}_{j+1}-\hat{k}_{j}\leq u-\hat{k}_j$. By integrating over the set $A_{j+1}$ and using Lemma \ref{Sobo} (b), following the proof of the estimate in \eqref{Sobolsc}, we obtain
\begin{equation}\label{Sobousc}
\begin{split}
(1-a)\frac{M}{2^{j+3}}|A_{j+1}|
&\leq C(I+J)^\frac{N}{p(N+2)}\hat{K}^\frac{1}{N+2}|A_j|^{1-\frac{N}{p(N+2)}},
\end{split}
\end{equation}
for some constant $C=C(N,p)>0$, where
$$
I=\int_{\hat{\mathcal{Q}}_j}|\nabla (u-\hat{k}_j)_+|^p\,dx\,dt,
\quad
J=\int_{\hat{\mathcal{Q}}_j}(u-\hat{k}_j)_{+}^{p}|\nabla\Phi_j|^p\,dx\,dt
\quad\text{and}\quad
\hat{K}=\esssup_{t\in\hat{\Gamma}_j}\int_{\hat{B}_j}(u-\hat{k}_{j})_+^{2}\,dx.
$$
Note that, by the assumption \eqref{umbc}, we know $\esssup_{\mathbb{R}^N\times(0,T)}\,u\leq\lambda^+$, which, together with $\hat{k}_j\geq k_0=\mu^+-M$, gives
\begin{equation}\label{ueqn}
(u-\hat{k}_j)_+\leq(\lambda^+-\mu^++M)_+:=L \quad\text{in}\quad\mathbb{R}^N\times(0,T).
\end{equation}
\textbf{Estimate of $J$:} Using the properties of $\Phi_j$ and \eqref{ueqn}, we get
\begin{equation}\label{Jusc1}
\begin{split}
J&=\int_{\hat{\mathcal{Q}}_j}(u-\hat{k}_j)_{+}^{p}|\nabla\Phi_j|^p\,dx\,dt\leq C\frac{2^{jp}}{r^p}L^p|A_j|,
\end{split}
\end{equation}
for some constant $C=C(N,p)>0$.\\
\textbf{Estimate of $I$ and $\hat{K}$:} Let $\xi_j=\psi_j\eta_j$, where $\{\psi_j\}_{j=0}^{\infty}\subset C_{c}^\infty(B_j)$ and $\{\eta_j\}_{j=0}^{\infty}\subset C_{c}^\infty(\Gamma_j)$ are as defined in \eqref{lscctof}
for some constant $C=C(N,p)>0$. Note that $g\equiv 0$ in $\Om_T\times\R$ and for $\hat{k}_j>k_j$, we have $(u-k_j)_+\geq (u-\hat{k}_j)_+$. Therefore, noting Lemma \ref{Auxfnlemma} and Remark \ref{eng1rmk}, we set $r=r_j$, $\tau_1=t_0-\theta \hat{r}_j^{p}$, $\tau_2=t_0+\theta r_j^{p}$, $\tau=\theta r_j^{p}-\theta \hat{r}_j^{p}$ in Lemma \ref{eng1} to obtain
\begin{equation}\label{uscengeqn1}
\begin{split}
&I+\hat{K}=\int_{\hat{\Gamma}_j}\int_{\hat{B}_j}|\nabla (u-\hat{k}_j)_+|^p\,dx\,dt+\esssup_{\hat{\Gamma}_j}\int_{\hat{B}_j}(u-\hat{k}_j)_+^{2}\,dx\leq I_1+I_2+I_3+I_4,
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
I_1&=C\int_{\Gamma_j}\int_{B_j}\int_{B_j}{\max\{(u-k_j)_{+}(x,t),(u-k_j)_{+}(y,t)\}^p|\xi_j(x,t)-\xi_j(y,t)|^p}\,d\mu\,dt,\\
I_2&=C\int_{\Gamma_j}\int_{B_j}(u-k_j)_+^p|\nabla\xi_j|^p\,dx\,dt,\\
I_3&=C\esssup_{x\in\mathrm{supp}\,\psi_j,\,t\in\Gamma_j}\int_{{\mathbb{R}^N\setminus B_j}}{\frac{(u-k_j)_{+}(y,t)^{p-1}}{|x-y|^{N+ps}}}\,dy
\int_{\Gamma_j}\int_{B_j}(u-k_j)_{+}\xi_{j}^p\,dx\,dt
\quad\text{and}\\
I_4&=C\int_{\Gamma_j}\int_{B_j}(u-k_j)_+^{2}|\partial_t\xi_j^{p}|\,dx\,dt,
\end{split}
\end{equation*}
for some constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)>0$.\\
\textbf{Estimate of $I_1$:} Using \eqref{ueqn}, the properties of $\xi_j$ and $r\in(0,1)$, we have
\begin{equation}\label{I1usc}
\begin{split}
I_1&=C\int_{\Gamma_j}\int_{B_j}\int_{B_j}{\max\{(u-k_j)_{+}(x,t),(u-k_j)_{+}(y,t)\}^p|\xi_j(x,t)-\xi_j(y,t)|^p}\,d\mu\,dt\\
&\leq C\frac{2^{jp}}{r^{ps}}L^p|A_j|\leq C\frac{2^{jp}}{r^{p}}L^p|A_j|,
\end{split}
\end{equation}
for some constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)>0$.\\
\textbf{Estimate of $I_2$:} Again, using \eqref{ueqn} and the properties of $\xi_j$, we get
\begin{equation}\label{I2usc}
\begin{split}
I_2&=C\int_{\Gamma_j}\int_{B_j}(u-k_j)_+^p|\nabla\xi_j|^p\,dx\,dt\leq C\frac{2^{jp}}{r^{p}}L^p|A_j|,
\end{split}
\end{equation}
for some constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)>0$.\\
\textbf{Estimate of $I_3$:}
Using \eqref{ueqn} and proceeding similarly as in the proof of the estimate \eqref{I3lsc}, we obtain
\begin{equation}\label{I3usc}
\begin{split}
I_3&=C\esssup_{x\in\mathrm{supp}\,\psi_j,\,t\in\Gamma_j}\int_{{\mathbb{R}^N\setminus B_j}}{\frac{(u-k_j)_{+}(y,t)^{p-1}}{|x-y|^{N+ps}}}\,dy
\int_{\Gamma_j}\int_{B_j}(u-k_j)_{+}\xi_{j}^p\,dx\,dt\\
&\leq C\frac{2^{j(N+p)}}{r^{p}}L^p|A_j|,
\end{split}
\end{equation}
for some constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)>0$.\\
\textbf{Estimate of $I_4$:} Using \eqref{ueqn} along with the properties of $\xi_j$, we have
\begin{equation}\label{I4usc}
\begin{split}
I_4&=C\int_{\Gamma_j}\int_{B_j}(u-k_j)_{+}^{2}|\partial_t\xi_j^p|\,dx\,dt\\
&\leq C\frac{2^{jp}}{\theta r^{p}}\int_{\Gamma_j}\int_{B_j}(u-k_j)_{+}^2\,dx\, dt\leq C\frac{2^{jp}}{\theta r^{p}} L^{2}|A_j|,
\end{split}
\end{equation}
for some constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)>0$.
Inserting \eqref{I1usc}, \eqref{I2usc}, \eqref{I3usc} and \eqref{I4usc} in \eqref{uscengeqn1}, we arrive at
\begin{equation}\label{KIusc1}
\begin{split}
I+\hat{K}\leq C\frac{2^{j(N+p)}}{r^p}L^p\Big(1+\frac{L^{2-p}}{\theta}\Big)|A_j|,
\end{split}
\end{equation}
for some constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)>0$. Using \eqref{Jusc1} and \eqref{KIusc1} in \eqref{Sobousc}, we obtain
\begin{equation}\label{IJKusc1}
\begin{split}
|A_{j+1}|\leq C\frac{2^{j((N+p)^2+1)}L^\frac{N+p}{N+2}}{(1-a)M} \Big(1+\frac{L^{2-p}}{\theta}\Big)^\frac{N+p}{p(N+2)}\frac{|A_j|^{1+\frac{1}{N+2}}}{r^\frac{N+p}{N+2}},
\end{split}
\end{equation}
for some constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)>0$. Dividing both sides of \eqref{IJKusc1} by $|\mathcal{Q}_{j+1}|$ and denoting by $Y_j=\frac{|A_j|}{|\mathcal{Q}_j|}$, we get
$$
Y_{j+1}\leq 2K b^{j}Y_j^{1+\delta},
$$
where
$$
K=\frac{C(\theta L^{N+p})^\frac{1}{N+2}}{2(1-a)M} \Big(1+\frac{L^{2-p}}{\theta}\Big)^\frac{N+p}{p(N+2)},
\quad\delta=\frac{1}{N+2}
\quad\text{and}\quad
b=2^{(N+p)^2+1}>1.
$$
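Indeed, since all the radii $r_j$ are comparable to $r$, the measures $|\mathcal{Q}_j|$ and $|\mathcal{Q}_{j+1}|$ are comparable to $\theta r^{N+p}$ up to constants depending only on $N$ and $p$; hence, dividing \eqref{IJKusc1} by $|\mathcal{Q}_{j+1}|$ and enlarging the constant $C$ if necessary,
$$
Y_{j+1}\leq C\frac{2^{j((N+p)^2+1)}L^\frac{N+p}{N+2}}{(1-a)M}\Big(1+\frac{L^{2-p}}{\theta}\Big)^\frac{N+p}{p(N+2)}\frac{(\theta r^{N+p})^\frac{1}{N+2}}{r^\frac{N+p}{N+2}}\,Y_j^{1+\frac{1}{N+2}}=2Kb^{j}Y_j^{1+\delta}.
$$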
We define $\nu=(2K)^{-\frac{1}{\delta}}b^{-\frac{1}{\delta^2}}$, which depends only on $a,M,\mu^+,\lambda^+,\theta,N,p,s,\Lambda,C_1,C_2,C_3,C_4$. If $Y_0\leq\nu$, then Lemma \ref{ite} gives $\lim_{j\to\infty}Y_j=0$, which completes the proof.
\end{proof}
Recalling that $a,\mu^+$ and $M$ are defined as in \eqref{umu}, we prove our final De Giorgi Lemma.
\begin{Lemma}\label{DGLii}
Let $1<p<\infty,\,0<s<1$ and $g\equiv 0$ in $\Om_T\times\R$. Suppose that $u\in L^p_{\mathrm{loc}}\big(0,T;W^{1,p}_{\mathrm{loc}}(\Om)\big)\cap C_{\mathrm{loc}}\big(0,T;L^2_{\mathrm{loc}}(\Om)\big)\cap L^\infty_{\mathrm{loc}}\big(0,T;L^{p-1}_{ps}(\mathbb{R}^N)\big)$ is a weak subsolution of \eqref{maineqn} in $\Om_T$ such that $u$ is essentially bounded above in $\R^N\times(0,T)$ and let
\begin{equation}\label{umbcc}
\esssup_{\mathbb{R}^N\times(0,T)}\,u\leq\lambda^+.
\end{equation}
Then there exists a constant $\theta=\theta(a,M,\mu^+,\lambda^+,N,p,s,\Lambda,C_1,C_2,C_3,C_4)>0$ such that if $t_0$ is a Lebesgue point of $u$ and
$$
u(\cdot,t_0)\leq \mu^+-M\text{ a.e. in }B_r(x_0),
$$
then
$$
u\leq\mu^+-aM\text{ a.e. in }\mathcal{Q}^{+}_{\frac{3r}{4},\theta}(x_0,t_0)=B_{\frac{3r}{4}}(x_0)\times\big(t_0,t_0+\theta\big(\tfrac{3r}{4}\big)^p\big).
$$
\end{Lemma}
\begin{proof}
For $j\in\N\cup\{0\}$, let $k_j$ be as in \eqref{iteusc} and $\hat{k}_j,r_j,\hat{r}_j,B_j,\hat{B}_j$ as in \eqref{itelsc}. For $\theta>0$, let us set
\begin{equation}\label{iteusc2}
\begin{split}
\Gamma_j=(t_0,t_0+\theta r_j^{p}),\quad\mathcal{Q}_j=B_j\times\Gamma_j,\quad \mathcal{\hat{Q}}_j=\hat{B}_j\times\Gamma_j,\quad A_j=\mathcal{Q}_j\cap\{u\geq k_j\}.
\end{split}
\end{equation}
Therefore, for all $j\in\mathbb N\cup\{0\}$ we have
$
B_{j+1}\subset\hat{B}_j\subset B_j,\,\Gamma_{j+1}\subset\Gamma_j.
$
Let $\{\Phi_j\}_{j=0}^{\infty}\subset C_c^{\infty}(\hat{\mathcal{Q}}_j)$ be as defined in \eqref{Philsc}. Notice that, over the set $A_{j+1}=\mathcal{Q}_{j+1}\cap\{u\geq k_{j+1}\}$, we have $k_{j+1}-\hat{k}_j\leq u-\hat{k}_j$. Hence integrating over the set $A_{j+1}$ as in the proof of \eqref{Sobolsc}, we obtain
\begin{equation}\label{Sobousc2}
\begin{split}
(1-a)\frac{M}{2^{j+3}}|A_{j+1}|&\leq C(I+J)^\frac{N}{p(N+2)}\hat{K}^\frac{1}{N+2}|A_j|^{1-\frac{N}{p(N+2)}},
\end{split}
\end{equation}
for some constant $C=C(N,p)>0$, where
$$
I=\int_{\hat{\mathcal{Q}}_j}|\nabla (u-\hat{k}_j)_+|^p\,dx\,dt,
\quad
J=\int_{\hat{\mathcal{Q}}_j}(u-\hat{k}_j)_{+}^{p}|\nabla\Phi_j|^p\,dx\,dt
\quad\text{and}\quad
\hat{K}=\esssup_{t\in\Gamma_j}\int_{\hat{B}_j}(u-\hat{k}_{j})_{+}^2\,dx.
$$
Using \eqref{umbcc}, we get $(u-\hat{k}_j)_+\leq (\lambda^+-\mu^++M)_+:=L$ in $\R^N\times(0,T)$.\\
\textbf{Estimate of $J$:} From the proof of \eqref{Jlsc1}, we have
\begin{equation}\label{Jusc2}
\begin{split}
J&=\int_{\hat{\mathcal{Q}}_j}(u-\hat{k}_j)_{+}^{p}|\nabla\Phi_j|^p\,dx\,dt\leq C\frac{2^{jp}}{r^p}L^p|A_j|,
\end{split}
\end{equation}
for some constant $C=C(N,p)>0$.\\
\textbf{Estimate of $I$ and $\hat{K}$:} Let $\xi_j(x,t)=\xi_j(x)$ be a time independent smooth function with compact support in $B_j$ such that $0\leq\xi_j\leq 1$, $|\nabla\xi_j|\leq C\frac{2^j}{r}$ in $\mathcal{Q}_j$, $\mathrm{dist}(\mathrm{supp}\,\xi_j,\mathbb{R}^N\setminus B_j)\geq 2^{-j-1}r$ and $\xi_j\equiv 1$ in $\hat{B}_j$ for some constant $C=C(N,p,s)>0$. Therefore, $\partial_t\xi_j=0$. Also, since $\hat{k}_j>\mu^+-M$, due to the hypothesis $u(\cdot,t_0)\leq\mu^+-M$ a.e. in $B_r(x_0)$, we deduce that $(u-\hat{k}_j)_+(\cdot,t_0)=0$ a.e. in $B_r(x_0)$. Noting these facts along with $g\equiv 0$ in $\Om_T\times\R$ and $(u-k_j)_+\geq (u-\hat{k}_j)_+$, by Lemma \ref{Auxfnlemma}, Remark \ref{eng1rmk} and Lemma \ref{eng1}, we obtain
\begin{equation}\label{KIusc2}
\begin{split}
\hat{K}+I&=\esssup_{\Gamma_j}\int_{\hat{B}_j}(u-\hat{k}_j)_+^{2}\,dx+\int_{\Gamma_j}\int_{\hat{B}_j}|\nabla (u-\hat{k}_j)_+|^p\,dx\,dt\leq J_1+J_2+J_3,
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
J_1&=C\int_{\Gamma_j}\int_{B_j}\int_{B_j}{\max\{(u-k_j)_{+}(x,t),(u-k_j)_{+}(y,t)\}^p|\xi_j(x,t)-\xi_j(y,t)|^p}\,d\mu\,dt,\\
J_2&=C\int_{\Gamma_j}\int_{B_j}(u-k_j)_+^p|\nabla\xi_j|^p\,dx\,dt
\quad\text{and}\\
J_3&=\esssup_{x\in\mathrm{supp}\,\xi_j,\,t\in\Gamma_j}\int_{{\mathbb{R}^N\setminus B_j}}{\frac{(u-k_j)_{+}(y,t)^{p-1}}{|x-y|^{N+ps}}}\,dy
\int_{\Gamma_j}\int_{B_j}(u-k_j)_{+}\xi_{j}^p\,dx\,dt,
\end{split}
\end{equation*}
for some positive constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)$. From \eqref{I1usc}, \eqref{I2usc} and \eqref{I3usc}, it follows that
\begin{equation}\label{KIusc223}
\hat{K}+I\leq J_1+J_2+J_3\leq C\frac{2^{j(N+p)}}{r^{p}}L^p|A_j|,
\end{equation}
for some positive constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)$. Now employing \eqref{Jusc2} and \eqref{KIusc223} in \eqref{Sobousc2}, dividing by $|\mathcal{Q}_{j+1}|$ and setting $Y_j=\frac{|A_j|}{|\mathcal{Q}_j|}$, we obtain
\begin{equation}\label{iteusc2new}
Y_{j+1}\leq C\frac{(\theta L^{N+p})^\frac{1}{N+2} }{(1-a)M} 2^{j((N+p)^2+1)}Y_j^{1+\frac{1}{N+2}},
\end{equation}
for some positive constant $C=C(N,p,s,\Lambda,C_1,C_2,C_3,C_4)$. Letting
\[
d_0=\frac{CL^\frac{N+p}{N+2}}{(1-a)M},\quad b=2^{(N+p)^2+1},\quad\delta_2=\delta_1=\delta=\frac{1}{N+2}
\quad\text{and}\quad
K=\frac{d_0\,\theta^\delta}{2}
\]
in Lemma \ref{ite}, we have $\lim_{j\to\infty}Y_j=0$, provided that
\[
Y_0\leq\nu=(2K)^{-\frac{1}{\delta}}b^{-\frac{1}{\delta^2}}.
\]
Let $\beta\in(0,1)$. Then, choosing $\theta=\beta\,d_0^{-\frac{1}{\delta}}\,b^{-\frac{1}{\delta^2}}$, which depends on $a,M,\mu^+,\lambda^+,N,p,s,\Lambda,C_1,C_2,C_3,C_4$, we get $\nu=\beta^{-1}>1$.
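Indeed, since $2K=d_0\,\theta^{\delta}$, with this choice of $\theta$ we have
$$
\nu=(2K)^{-\frac{1}{\delta}}b^{-\frac{1}{\delta^2}}=\theta^{-1}\,d_0^{-\frac{1}{\delta}}\,b^{-\frac{1}{\delta^2}}=\beta^{-1}.
$$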
Since $Y_0\leq 1<\nu$, Lemma \ref{ite} implies that
$\lim_{j\to\infty}Y_j= 0$.
Therefore, we have
\[
u\leq\mu^+-aM
\text{ a.e. in }
\mathcal{Q}_{\frac{3r}{4},\theta}^{+}(x_0,t_0).
\]
Hence the result follows.
\end{proof}
\end{document}
|
\begin{document}
\title{Generators for the representation rings of certain wreath products}
\author{Nate Harman}
\maketitle
\begin{abstract}
Working in the setting of Deligne categories, we generalize a result of Marin that hooks generate the representation ring of symmetric groups to wreath products of symmetric groups with a fixed finite group or Hopf algebra. In particular, when we take the finite group to be cyclic of order 2 we recover a conjecture of Marin about Coxeter groups in type B.
\end{abstract}
\section{Introduction}
In \cite{B}, Deligne defined the categories $\underline{Rep}(S_t)$ for $t$ an arbitrary complex number. In the context of the Church-Farb framework of representation stability \cite{J} we may think of these Deligne categories at generic values of $t$ as models for stable categories of representations of the symmetric group. In particular they satisfy the following ``stable" properties:
\begin{itemize}
\item For generic $t$, $\underline{Rep}(S_t)$ is semisimple with irreducible objects $\tilde{V}(\lambda)$ indexed by partitions. These interpolate the irreducible representations $V(\lambda(n))$ of $S_n$ with $n\gg0$, where we add a sufficiently long first row to $\lambda$ to make it the right size. In particular these representations are known to have polynomial growth of dimension $Dim(V(\lambda(n))) = p_\lambda(n)$, and in the Deligne category we have $Dim(\tilde{V}(\lambda) )= p_\lambda(t)$.
\item If $k \in \mathbb{Z}_+$ we have induction functors $ Rep(S_k) \boxtimes \underline{Rep}(S_t) \rightarrow \underline{Rep}(S_{t+k})$, where $Rep(S_k)$ denotes the usual category of complex representations of $S_k$. The multiplicity $\tilde{c}_{\lambda,\mu}^\nu$ of $\tilde{V}(\nu)$ in ${\rm Ind}(V(\lambda)\boxtimes \tilde{V}(\mu))$ is equal to the stable limit of Littlewood-Richardson coefficients $c_{\lambda,\mu(n)}^{\nu(n+k)}$. Similar statements hold for restriction, with an appropriate version of Frobenius reciprocity.
\item The structure constants for the tensor product are the so called reduced Kronecker coefficients which are the stable limits of the Kronecker coefficients.
\end{itemize}
In \cite{A} Marin proves that hooks, i.e. partitions of the form $(n-k,1^k)$, generate the representation ring of $S_n$. While he doesn't use the language, his proof mostly takes place in the stable setting and his argument shows that hooks freely generate the stable representation ring. This then implies they must generate in the classical setting (although not freely). So we may think of this result as an application of stable representation theory to classical representation theory.
In the Deligne category setting Marin's result appears in Deligne's original paper \cite{B}, saying that the Grothendieck ring of the Deligne category is freely generated by objects corresponding to hooks. The result for the classical case follows by projecting from the Deligne category onto $\text{Rep}(S_n)$, and looking at the induced map of Grothendieck rings.
The proof is done by defining a filtration on the Deligne category such that the associated graded Grothendieck ring is isomorphic in a natural way to the ring $(\bigoplus_n K_0(\text{Rep}(S_n)), \cdot)$ with multiplication coming from inducing representations from $S_n \times S_m$ to $S_{n+m}$. This ring is well known to be isomorphic to the ring of symmetric functions, and the elementary symmetric functions correspond to hooks.
Deligne categories for wreath products with a finite group or Hopf algebra were defined by Knop \cite{E}. In \cite{D} Mori defined wreath product Deligne categories associated to an arbitrary $k$-linear category $\mathcal{C}$, which is a tensor category whenever $\mathcal{C}$ is. As for the symmetric group these may be thought of as stable versions of the more classical wreath product categories in ways analogous to those listed above. See \cite{F} for a more detailed overview of representation theory in complex rank, including discussions of the constructions mentioned here.
Motivated by a conjecture of Marin about generalizing his result to Coxeter groups in type B (which are wreath products), the goal of this paper is to prove similar results about the Grothendieck rings of these Deligne categories. By projecting from the Deligne categories to classical representation categories, we obtain systems of generators for the representation rings of wreath products with finite groups, answering the conjecture of Marin in the case when the finite group is cyclic of order 2.
\section{The categories $\mathcal{W}_n( \mathcal{C})$ and $S_t(\mathcal{C})$}
First we will review Mori's construction of the wreath product Deligne categories $S_t(\mathcal{C})$, and the aspects of the theory useful for our purposes. This section is largely expository, and all the proofs will be left to the references.
Fix $k$ an algebraically closed field of characteristic zero, and let $\mathcal{C}$ be a $k$-linear semisimple\footnote{Since we will be mostly working at the level of the Grothendieck group, the semisimple condition can be relaxed to an artinian condition and the main result will still hold but we will assume it for simplicity.} tensor category with a unit object $\textbf{1}$ satisfying $End_{\mathcal{C}} (\textbf{1}) = k$.
\begin{definition}[\textbf{Mori \cite{D} Definition 3.13}]
Consider the action of the symmetric group $S_n$ on the category $\mathcal{C}^{\boxtimes n}$ by permuting the factors. We denote the equivariantization of this action by $\mathcal{W}_n( \mathcal{C})$ and refer to it as a wreath product category.
\end{definition}
In the case that $\mathcal{C}$ is the category of representations of a finite group $G$, the category $\mathcal{W}_n(\mathcal{C})$ is equivalent to the representation category of the wreath product $S_n(G)$ (also denoted by $G \wr S_n$ or $G^n \rtimes S_n$) of $G$ with a symmetric group. Many of the standard facts about the representation theory of these groups hold in the categorical setting as well; we will highlight some of the properties that are important for our purposes.
\begin{proposition}[\textbf{Mori \cite{D} sections 3.5 and 5.3}]
\
\begin{enumerate}
\item If $I(\mathcal{C})$ is an indexing set for equivalence classes of irreducible objects in $\mathcal{C}$, then the irreducible objects of $\mathcal{W}_n( \mathcal{C})$ are indexed by the set:
$$\mathcal{P}_n^{\mathcal{C}} = \{ \lambda: I(\mathcal{C}) \rightarrow \mathcal{P} \ \text{such that} \ |\lambda| :=\sum_{U\in I(\mathcal{C})}|\lambda(U)| = n \}$$
where $\mathcal{P}$ denotes the set of partitions. We denote the irreducible object corresponding to $\lambda \in \mathcal{P}_n^{\mathcal{C}}$ by $V(\lambda)$.
\item The natural inclusions $S_n \times S_m \hookrightarrow S_{n+m}$ give rise to induction functors $$ {\rm Ind}_{\mathcal{W}_n( \mathcal{C}) \boxtimes \mathcal{W}_m( \mathcal{C})}^{\mathcal{W}_{n+m}(\mathcal{C})} : \mathcal{W}_n( \mathcal{C}) \boxtimes \mathcal{W}_m( \mathcal{C}) \rightarrow \mathcal{W}_{n+m}( \mathcal{C})$$
admitting two sided adjoints ${\rm Res}_{\mathcal{W}_n( \mathcal{C}) \boxtimes \mathcal{W}_m( \mathcal{C})}^{\mathcal{W}_{n+m}(\mathcal{C})}$ referred to as restriction functors.
\item If we take the Grothendieck ring of $\mathcal{W}_*(\mathcal{C}) := \bigoplus_n \mathcal{W}_n( \mathcal{C})$ with a tensor structure corresponding to induction, then the map
$$[V(\lambda)] \rightarrow \bigotimes_{U \in I(\mathcal{C})} s_{\lambda(U)}$$
is a graded isomorphism with the ring $\bigotimes_{U \in I(\mathcal{C})}\Lambda$ where $\Lambda$ is the ring of symmetric functions, and $s_\lambda$ is a Schur function.
\end{enumerate}
\end{proposition}
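For example, if $\mathcal{C}$ is the category of complex representations of the cyclic group of order two, then $I(\mathcal{C})$ has two elements, $\mathcal{P}_n^{\mathcal{C}}$ is the set of bipartitions of $n$, and we recover the classical facts that the irreducible representations of the hyperoctahedral group $S_n(\mathbb{Z}/2\mathbb{Z})$ are indexed by pairs of partitions $(\lambda,\mu)$ with $|\lambda|+|\mu|=n$, and that its induction ring is $\Lambda\otimes\Lambda$.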
We now want to construct categories $S_t(\mathcal{C})$ which interpolate $\mathcal{W}_n( \mathcal{C})$ to non-integer values of $t$ analogously to the construction of the Deligne category $\underline{Rep}(S_t)$ as explained by Comes and Ostrik in \cite{I}. To do this we will define certain well behaved standard objects in $\mathcal{W}_n(\mathcal{C})$ for different values of $n$.
Let $I = \{i_1,i_2,\dots, i_k\}$ be a finite set and $U_I = (U_i)_{i\in I}$ a collection of objects in $\mathcal{C}$. If $n \ge k$ we can consider $U_{i_1} \boxtimes U_{i_2} \boxtimes ...\boxtimes U_{i_k} \boxtimes \textbf{1}_{\mathcal{C}}^{\boxtimes n-k}$ as an object of $\mathcal{C}^{\boxtimes k} \boxtimes \mathcal{W}_{n-k}(\mathcal{C})$. We will let $[U_I]_n \in \mathcal{W}_n( \mathcal{C})$ denote the image of this under the appropriate induction functor; when $n < k$ it will be convenient to define $[U_I]_n$ to be the zero object. We have the following lemma:
\begin{lemma}[\textbf{Mori \cite{D} definition 4.5 and lemma 4.9}]
\
\begin{enumerate}
\item For finite sets $I$ and $J$ and collections of objects $U_I$, $V_J$ as above there exists an explicitly described vector space $H(U_I,V_J)$ such that $Hom_{ \mathcal{W}_n( \mathcal{C})}([U_I]_n,[V_J]_n) \cong H(U_I,V_J)$ for all $n$ sufficiently large.
\item There exists a map $ \Phi: H(V_J,W_K) \otimes H(U_I,V_J) \rightarrow H(U_I,W_K)\otimes k[x]$ such that for $n$ sufficiently large the composition of $\Phi$ with the evaluation at $n$ map $ev_n: H(U_I,W_K)\otimes k[x] \rightarrow H(U_I,W_K)$ corresponds to the composition map $Hom_{ \mathcal{W}_n( \mathcal{C})}([V_J]_n,[W_K]_n) \otimes Hom_{ \mathcal{W}_n( \mathcal{C})}([U_I]_n,[V_J]_n) \rightarrow Hom_{ \mathcal{W}_n( \mathcal{C})}([U_I]_n,[W_K]_n).$
\end{enumerate}
\end{lemma}
Using this lemma we are able to define an auxiliary category $S_t(\mathcal{C})_0$ for an arbitrary number $t \in k$ as follows:
\noindent \textbf{Objects} are symbols $\langle U_I\rangle_t$ where $U_I$ is a finite collection of objects of $\mathcal{C}$.
\noindent \textbf{Morphisms} from $\langle U_I\rangle_t$ to $\langle V_J\rangle_t$ are given by the space $H(U_I,V_J)$ from the first part of the previous lemma.
\noindent \textbf{Composition} is given by $ev_t \circ \Phi$ where $\Phi$ is the map from the second part of the previous lemma, and $ev_t$ is the evaluation at $t$ map.
\begin{definition}[\textbf{Mori \cite{D} Definitions 4.10 and 4.16}]
The wreath product Deligne category $S_t(\mathcal{C})$ is defined as the pseudo-abelian or Karoubian envelope of $S_t(\mathcal{C})_0$; it comes equipped with a natural tensor structure coming from the tensor structure of $\mathcal{C}$.
\end{definition}
Now we want to explicitly describe these Deligne categories $S_t(\mathcal{C})$ and how they interpolate the wreath product categories $\mathcal{W}_n(\mathcal{C})$. First define the set $\mathcal{P}^{\mathcal{C}}$ by:
$$\mathcal{P}^{\mathcal{C}}= \cup_n \mathcal{P}_n^{\mathcal{C}} = \{ \lambda: I(\mathcal{C}) \rightarrow \mathcal{P}| \ \lambda(U) =\emptyset \ \text{for all but finitely many} \ U\}$$
and for $\lambda \in\mathcal{P}^{\mathcal{C}}$ and $n$ sufficiently large define $\lambda_n \in \mathcal{P}_n^{\mathcal{C}}$ by:
\begin{equation}{\label{Z}}
\lambda_n(U) = \left\{
\begin{array}{lr}
\lambda(U) & : U \ne \textbf{1}_\mathcal{C}\\
(n-|\lambda|, \lambda(U)) & : U = \textbf{1}_\mathcal{C}
\end{array}
\right.
\end{equation}
\noindent That is $\lambda_n$ is obtained from $\lambda$ by adding a long first row to the partition corresponding to the unit object of $\mathcal{C}$, and leaving the rest the same. If $n$ is not sufficiently large, i.e. if $(n-|\lambda|, \lambda(\textbf{1}))$ is not a valid partition, then it will be convenient to define $\lambda_n = 0$. We have the following description of $S_t(\mathcal{C})$:
\begin{theorem}[\textbf{Mori \cite{D} Theorems 4.13 and 5.6}]
\
\begin{enumerate}
\item There exists a bijection between $\mathcal{P}^{\mathcal{C}}$ and indecomposable objects of $S_t(\mathcal{C})$. We denote these corresponding indecomposable objects by $\tilde{V}(\lambda)$.
\item When $t \notin \mathbb{Z}_{\ge0}$ the category $S_t(\mathcal{C})$ is semisimple.
\item For $n \in \mathbb{Z}_{\ge0}$ there exists a full, essentially surjective tensor functor $S_n(\mathcal{C}) \rightarrow \mathcal{W}_n(\mathcal{C})$ which we will refer to as a projection functor.
\item For all $\lambda \in \mathcal{P}^{\mathcal{C}}$ there exists $N$ such that the image of $\tilde{V}(\lambda)$ under the projection from $S_n(\mathcal{C})$ to $\mathcal{W}_n(\mathcal{C})$ is isomorphic to $V(\lambda_n)$ for all $n \ge N$.
\end{enumerate}
\end{theorem}
In order to pass results from $S_t(\mathcal{C})$ at generic values of $t$ to the more classical wreath product categories $\mathcal{W}_n(\mathcal{C})$ we will need a more detailed description of what happens at integer values of $t$. For our purposes however it will be convenient to do this after we have defined a filtration in the next section.
\begin{remark}
These sequences of irreducible representations $V(\lambda_n)$ of $S_n(G)$ are exactly those that show up in finitely generated $FI_G$ modules, as defined by Gan and Li in \cite{M} and developed further by Sam and Snowden in \cite{SS}. This is consistent with the philosophy that these Deligne categories can be thought of as models for a stable representation category.
\end{remark}
\subsection{Induction and the $|\lambda|$ filtration}
An immediate consequence of Mori's construction in terms of the objects $[U_I]$ is the existence of induction functors $\mathcal{W}_n( \mathcal{C}) \boxtimes S_t(\mathcal{C}) \rightarrow S_{t+n}(\mathcal{C})$ which interpolate the induction functors $\mathcal{W}_n( \mathcal{C}) \boxtimes \mathcal{W}_m( \mathcal{C})\rightarrow \mathcal{W}_{n+m}( \mathcal{C})$ where we fix $n$ and let $m$ grow to infinity. This defines a categorical action of the tensor category $\mathcal{W}_*( \mathcal{C})$ on $S_*(\mathcal{C}) := \bigoplus_{t\in k} S_t(\mathcal{C})$.
Similarly, we get restriction functors in the other direction, which are two-sided adjoints to the induction functors and interpolate the corresponding restriction functors between the genuine wreath product categories.
Mori's construction ensures that objects of $S_{t}(\mathcal{C})$ occur as summands in objects of the form:
\begin{equation}{\label{A}}
{\rm Ind}_{\mathcal{W}_k( \mathcal{C}) \boxtimes S_{t-k}(\mathcal{C})}^{S_{t}(\mathcal{C})}(W\boxtimes \textbf{1}_{S_{t-k}(\mathcal{C})})
\end{equation}
for some natural number $k$ and $W \in \mathcal{W}_k( \mathcal{C})$. In particular we could describe $\tilde{V}(\lambda)$ as the unique indecomposable summand of: $$M(\lambda) := {\rm Ind}_{\mathcal{W}_n( \mathcal{C}) \boxtimes S_{t-n}(\mathcal{C})}^{S_{t}(\mathcal{C})}(V(\lambda) \boxtimes \textbf{1}_{S_{t-n}(\mathcal{C})})$$ not occurring as a summand in an object of the form (\ref{A}) for any $k <n$. This suggests we consider the following filtrations:
\begin{definition}[The $|\lambda|$-filtration]
Define a filtration on $S_{t}(\mathcal{C})$ by putting an object $M$ in the $k$th level of the filtration if $M$ occurs as a summand in an object of the form (\ref{A}). Analogously define such filtrations on the categories $\mathcal{W}_n(\mathcal{C})$.
\end{definition}
Immediately we see that $\tilde{V}(\lambda)$ is minimally contained in the $|\lambda|$th level of the filtration on $S_{t}(\mathcal{C})$. Similarly $V(\lambda)$ is minimally contained in the $(n - \lambda( \textbf{1})_1)$th level of the filtration on $\mathcal{W}_n(\mathcal{C})$.
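For instance, when $\mathcal{C}=Vect_k$ and $\lambda=(1)$, the object $M(\lambda)$ defined above interpolates the $n$-point permutation representation of $S_n$: its trivial summand lies in level $0$ of the filtration, while the summand $\tilde{V}((1))$, which interpolates the reflection representation, is minimally contained in level $1$.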
By the nature of its definition, this filtration is well behaved with respect to the induction functors. Let $c_{\lambda, \mu}^\gamma$ denote the structure constants for $(\bigoplus_n K_0(\mathcal{W}_n(\mathcal{C})), \cdot)$, which are just products of the usual Littlewood-Richardson coefficients by Proposition 2.2 part 3. Then, by looking at the Littlewood-Richardson rule with one partition having a long first row, we get the following relation:
\begin{equation}{\label{B}}
{\rm Ind}_{\mathcal{W}_n( \mathcal{C}) \boxtimes S_{t-n}(\mathcal{C})}^{S_{t}(\mathcal{C})}(V(\lambda) \boxtimes \tilde{V}(\mu))
= (\bigoplus c_{\lambda, \mu}^\gamma \tilde{V}(\gamma) )\oplus \{ \text{terms lower in the filtration} \}
\end{equation}
Frobenius reciprocity and similar analysis of the Littlewood-Richardson rule gives us the ``lead term" for restrictions of indecomposables:
\begin{equation}{\label{C}}
{\rm Res}_{\mathcal{W}_n( \mathcal{C}) \boxtimes S_{t-n}(\mathcal{C})}^{S_{t}(\mathcal{C})}(\tilde{V}(\mu)) = (\textbf{1} \boxtimes \tilde{V}(\mu)) \oplus \{ \text{terms} \ M \boxtimes \tilde{V}(\nu) \text{ with } |\nu| < |\mu| \}
\end{equation}
In the case of $\mathcal{C} =Vect_k$ this $S_t(\mathcal{C})$ is equivalent to $\underline{Rep}(S_t)$, and our filtration agrees with the filtration defined by Deligne in \cite{B} and by Marin in \cite{A}. In that case the filtration was also well behaved with respect to the internal tensor structure, and we had that the associated graded Grothendieck ring was isomorphic to the ring of symmetric functions. In our setting we have the following generalization:
\begin{lemma}
The filtration defined above gives the Grothendieck ring $K_0(S_t(\mathcal{C}))$ the structure of a filtered ring. Moreover, the associated graded ring is then isomorphic to $(\bigoplus_n K_0(\mathcal{W}_n(\mathcal{C})), \cdot)$, where the isomorphism sends the image of $[\tilde{V}(\lambda)]$ to $[V(\lambda)]$.
\end{lemma}
\begin{proof}
Let $M(\lambda) = {\rm Ind}_{\mathcal{W}_n( \mathcal{C}) \boxtimes S_{t-n}(\mathcal{C})}^{S_{t}(\mathcal{C})}(V(\lambda) \boxtimes \textbf{1}_{S_{t-n}(\mathcal{C})})$. At the level of the Grothendieck ring we have that $[M(\lambda)] = [\tilde{V}(\lambda)] + \{ \text{terms lower in the filtration}\}$. So inductively we can conclude that $[\tilde{V}(\mu) \otimes \tilde{V}(\lambda)]$ and $[\tilde{V}(\mu) \otimes M(\lambda)]$ have the same highest order terms with respect to this filtration. To simplify the notation we will now begin omitting the subscripts and superscripts from the induction and restriction functors; they will all go between $\mathcal{W}_n( \mathcal{C}) \boxtimes S_{t-n}(\mathcal{C})$ and $S_{t}(\mathcal{C})$:
$$\tilde{V}(\mu) \otimes M(\lambda) = \tilde{V}(\mu) \otimes {\rm Ind}(V(\lambda) \boxtimes \textbf{1})$$
$$ = {\rm Ind} ({\rm Res}(\tilde{V}(\mu)) \otimes (V(\lambda) \boxtimes \textbf{1}))$$
By (\ref{C}) this becomes:
$$ \tilde{V}(\mu) \otimes M(\lambda) = {\rm Ind} \big( \big((\textbf{1} \boxtimes \tilde{V}(\mu)) \oplus \{ \text{terms} \ M \boxtimes \tilde{V}(\nu) \text{ with} \ |\nu| < |\mu| \}\big) \otimes (V(\lambda) \boxtimes \textbf{1}) \big)$$
$$ \ \ \ \ \ \ \ = {\rm Ind}( (V(\lambda) \boxtimes \tilde{V}(\mu)) \oplus \{ \text{terms} \ M \boxtimes \tilde{V}(\nu) \text{ with} \ |\nu| < |\mu| \} )$$
By (\ref{B}) this becomes:
$$ \tilde{V}(\mu) \otimes M(\lambda) = \bigoplus c_{\lambda, \mu}^\gamma \tilde{V}(\gamma) \oplus \{ \text{summands lower in the filtration} \}$$
Since the coefficients $c_{\lambda, \mu}^\gamma$ were the structure constants for the induction product, we see that indeed the associated graded Grothendieck ring of $S_{t}(\mathcal{C})$ is isomorphic to the induction ring $(\bigoplus_n K_0(\mathcal{W}_n(\mathcal{C})), \cdot)$.
\end{proof}
This, combined with Proposition 2.2 part 3, immediately gives us the following useful corollary:
\begin{corollary} \label{isom}
The associated graded Grothendieck ring of $S_{t}(\mathcal{C})$ at generic $t$ with respect to the $|\lambda|$-filtration is isomorphic to $\bigotimes_{U \in I(\mathcal{C})}\Lambda$. The map sends irreducible objects to products of Schur functions.
\end{corollary}
\begin{section}{Deligne categories at integer $t$}
Our goal is to obtain a collection of generators for the representation rings of wreath products $S_n(G)$, or more generally the Grothendieck rings of the categorical wreath products $\mathcal{W}_n(\mathcal{C})$. To do this we want to take a nice system of generators for these relatively well behaved Deligne categories and pass them down to these more classical categories.
In general some care needs to be taken in the Deligne category setting when taking $t$ to be a positive integer. What happens in this case is handled in depth in the case of symmetric groups by Comes and Ostrik in \cite{I}, and their results were extended to the wreath product setting by Mori \cite{D}. We will outline the parts of the theory that are relevant for our purposes.
The Deligne category $S_{t=n}(\mathcal{C})$ fails to be semisimple or even abelian, but it has a tensor structure and we can still consider the split Grothendieck ring spanned by indecomposable objects. We want to think of these as being a link between the Grothendieck rings of $\mathcal{W}_n(\mathcal{C})$, and the Grothendieck rings of $S_{t}(\mathcal{C})$ at generic $t$. We can make this precise via projection and lifting.
\begin{theorem}[\textbf{Description of projection, Mori \cite{D} theorem 5.6}]
The projection functor of Theorem 2.5 induces a surjective homomorphism from this split Grothendieck ring to the Grothendieck ring of $\mathcal{W}_n(\mathcal{C})$. Explicitly, it sends $\tilde{V}_{t=n}(\lambda)$ to $V(\lambda_n)$ if $n - |\lambda| \ge \lambda(\textbf{1}_\mathcal{C})_1$, and to the zero object otherwise.
\end{theorem}
In particular we have the following immediate corollary:
\begin{corollary}
A collection of generators of the split Grothendieck ring of $S_{t=n}(\mathcal{C})$ consisting of elements of the form $[\tilde{V}_{t=n}(\lambda)]$ projects to a collection of generators of the Grothendieck ring of $\mathcal{W}_n(\mathcal{C})$ consisting of elements of the form $[V(\lambda_n)]$.
\end{corollary}
Next we want to relate the split Grothendieck rings at positive integer $t$ to the Grothendieck ring at generic $t$. We need to be a bit careful because the indecomposable objects $\tilde{V}_{t=n}(\lambda)$ of $S_{t=n}(\mathcal{C})$ are not always flat deformations of the irreducible objects $\tilde{V}(\lambda)$ in the Deligne categories for generic $t$. In particular we do not expect the map $[\tilde{V}_{t=n}(\lambda)] \mapsto [\tilde{V}(\lambda)]$ to be a homomorphism between these rings. We can relate these by the process of lifting, which has the following (simplified) description:
\begin{theorem}[\textbf{Description of lifting, after \cite{I} lemma 5.20}] \
There exists a map Lift$_t$ from objects of $S_{t=n}(\mathcal{C})$ to objects of $S_T(\mathcal{C})$ where $T$ is a formal variable, satisfying:
\begin{enumerate}
\item Lift$_t$ descends to an isomorphism from the split Grothendieck ring of $S_{t=n}(\mathcal{C})$ to the Grothendieck ring of $S_T(\mathcal{C})$.
\item This isomorphism sends $[\tilde{V}_{t=n}(\lambda)]$ to either $[\tilde{V}(\lambda)]$ or $[\tilde{V}(\lambda)]+[\tilde{V}(\lambda ')]$ for an explicitly described $\lambda'$ depending on the combinatorics of $n$ and $\lambda$ satisfying $|\lambda'| < |\lambda|$.
\end{enumerate}
\end{theorem}
\begin{proof}
The existence of a lift follows from standard facts about lifting of idempotents, and the proof is identical to that for $\underline{Rep}(S_t)$. The combinatorics follows from the case of $\underline{Rep}(S_t)$ via an explicit identification of the blocks of $S_t(\mathcal{C})$ with blocks of $\underline{Rep}(S_{t'})$ given by Mori (\cite{D}, Prop 5.4).
\end{proof}
In particular, the additional ``error" term picked up when lifting is lower in the filtration. So while the map $[\tilde{V}_{t=n}(\lambda)] \mapsto [\tilde{V}(\lambda)]$ is not an isomorphism (or even a homomorphism) between the split Grothendieck ring at integer $t$ and the Grothendieck ring at generic $t$, it is an isomorphism at the level of associated graded rings. We now have the following key lemma:
\begin{lemma} \label{main}
If $\{[\tilde{V}(\lambda)] \ | \ \lambda \in J \}$ is a collection of generators for the associated graded Grothendieck ring of $S_t(\mathcal{C})$ at generic values of $t$ for some indexing set $J \subset \mathcal{P}^{\mathcal{C}}$, then $\{ [V(\lambda_n)] \ | \ \lambda \in J \}$ is a collection of generators for the Grothendieck ring of $\mathcal{W}_n(\mathcal{C})$.
\end{lemma}
\begin{proof}
The previous remark allows us to transfer this to a system of generators for the associated graded Grothendieck ring at integer $t$. A standard upper triangularity argument tells us that any lift of this system of generators will also generate the split Grothendieck ring. Corollary 3.2 allows us to transfer this down to $\mathcal{W}_n(\mathcal{C})$.
\end{proof}
\begin{remark} Understanding these non-semisimple Deligne categories and their split Grothendieck rings in better detail is of independent interest as they seem to lie somewhere between classical representation theory and stable representation theory. Recently Inna Entova-Aizenbud \cite{H} used the non-semisimple Deligne categories for symmetric groups to find new identities involving the reduced and non-reduced Kronecker coefficients.
\end{remark}
\end{section}
\section{Explicit systems of generators}
Our goal is to write down explicit systems of generators for the Grothendieck rings of the wreath product categories $\mathcal{W}_n(\mathcal{C})$ consisting of irreducible objects. By lemma \ref{main} it suffices to find an explicit system of generators for the associated graded Grothendieck ring of $S_t(\mathcal{C})$ at generic $t$. By corollary \ref{isom} this is isomorphic to the tensor product of one copy of the ring $\Lambda$ of symmetric functions for each irreducible object of $\mathcal{C}$.
Once translated into the language of symmetric functions, for each irreducible object of $\mathcal{C}$ we need to choose a collection of Schur functions which generate the ring of symmetric functions. Two well known such collections are the elementary and complete homogeneous symmetric functions, which are Schur functions corresponding to $\lambda = (1^k)$ or $\lambda = (k)$ respectively. All that is left is to describe which objects these correspond to.
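Recall, for instance, that
$$\Lambda=\mathbb{Z}[e_1,e_2,e_3,\dots]=\mathbb{Z}[h_1,h_2,h_3,\dots],\qquad e_k=s_{(1^k)},\quad h_k=s_{(k)},$$
so that choosing either family (independently for each $U\in I(\mathcal{C})$) indeed gives a system of generators of $\bigotimes_{U \in I(\mathcal{C})}\Lambda$ consisting of Schur functions.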
First let's handle the case when the irreducible object of $\mathcal{C}$ we are looking at is the unit object. The elementary and complete homogeneous symmetric functions just correspond to the usual hook and length two partition representations of the natural copy of $\underline{Rep}(S_t)$ inside $S_t(\mathcal{C})$. If $\mathcal{C}$ is the representation category of a finite group $G$ and $t = n$ is a natural number then under the projection to the category of representations of $S_n(G)$ these are just the corresponding representations of $S_n$ with trivial action of $G$.
If $V \in I(\mathcal{C})$ is a nontrivial irreducible then let $V^{k,\varepsilon} := V^{\boxtimes k}\otimes \varepsilon \in \mathcal{W}_k(\mathcal{C})$ where $\varepsilon$ is either the sign or trivial representation for the copy of $Rep(S_k)$ in $\mathcal{W}_k(\mathcal{C})$. Under our correspondence the elementary (resp. complete homogeneous) symmetric functions correspond to the objects ${\rm Ind}_{\mathcal{W}_k(\mathcal{C})\boxtimes S_{t-k}(\mathcal{C})}^{S_{t}(\mathcal{C})}(V^{k,\varepsilon} \boxtimes \textbf{1})$ for $\varepsilon$ the sign representation (resp. trivial).
Putting it all together in the case where $\mathcal{C}$ is the representation category of a finite group $G$ with $m$ equivalence classes of irreducible representations, we get $2^m$ easy to describe systems of generators for the representation ring of the wreath product $S_n(G)$. The previous remarks imply the following result:
\begin{theorem}{\textbf{(Systems of generators for representation rings)}}
If $G$ is a finite group, then the representation ring of the wreath product $S_n(G)$ is generated by either the hook or length two partition representations of $S_n$ along with induced representations ${\rm Ind}_{S_k(G)\times S_{n-k}(G)}^{S_n(G)}( (V^{\otimes k} \otimes \varepsilon_V) \boxtimes \textbf{1}_{S_{n-k}(G)})$ for each irreducible representation $V$ of $G$ and $k\le n$, where $\varepsilon_V$ is a consistent (not depending on $k$) choice of either the trivial or sign character of the symmetric group $S_k$.
\end{theorem}
\begin{subsection}{The abelian group case and Marin's conjecture}
Of particular interest is the case when $G$ is an abelian group; this includes Coxeter groups in type B as well as the complex reflection groups $G(m,1,n)$. Here we can get an alternate description of the objects corresponding to the elementary symmetric functions.
The sign-twisted representations ${\rm Ind}_{S_k(G)\times S_{n-k}(G)}^{S_n(G)}( (V^{\otimes k} \otimes \varepsilon)\boxtimes \textbf{1}_{S_{n-k}(G)})$ described in the previous section appear as summands in exterior powers of the relatively easy to describe representations ${\rm Ind}_{G \times S_{n-1}(G)}^{S_n(G)}( V \boxtimes \textbf{1}_{S_{n-1}(G)})$, along with other more complicated terms coming from the exterior powers of $V$ itself. However if we start with a nontrivial character $\chi$ of $G$ these other terms all vanish and these exterior powers coincide with our sign-twisted representations.
These exterior powers are perhaps closer to a direct generalization of hooks for these wreath products, and can be taken to be in our generating sets by choosing the elementary symmetric functions over the complete homogeneous symmetric functions for each character of $G$. In particular if $G$ is abelian then all of its irreducible representations are characters and Theorem 4.1 becomes:
\begin{theorem}{\textbf{(Hook-like generating sets)}}
If $G$ is a finite abelian group, then the representation ring of the wreath product $S_n(G)$ is generated by the reflection representation of $S_n$, the $n$-dimensional representations ${\rm Ind}_{G \times S_{n-1}(G)}^{S_n(G)}( \chi \boxtimes \textbf{1}_{S_{n-1}(G)})$ for nontrivial characters $\chi$ of $G$, and exterior powers thereof.
\end{theorem}
\begin{remark}
While this paper was being written the author was informed that this result for wreath products with Abelian groups has been recently proven by Schlank and Stapleton using different methods.
\end{remark}
Now let's specialize to the case when $G$ is cyclic of order 2. The wreath products $S_n(G)$ are Coxeter groups $W$ of type $B_n$. If $\chi$ is the nontrivial representation of $G$, then the reflection representation of $W$ is given by $V := {\rm Ind}_{G\times S_{n-1}(G)}^{S_n(G)} (\chi \boxtimes \textbf{1}_{S_{n-1}(G)})$. Next we let $U$ be the reflection representation of $S_n$, upgraded to an $S_n(G)$-representation by letting $G$ act trivially. Translated into this language our theorem becomes:
\begin{theorem}{\textbf{(Marin's conjecture 6.2 for type $B_n$)}}
For $W$ a Coxeter group of type $B_n$ the representation ring $R(W)$ is generated by $\Lambda^k U, \Lambda^k V, k \ge 0$.
\end{theorem}
\end{subsection}
\end{document}
|
\begin{document}
\title[Yoshida lifts and simultaneous non-vanishing]{Yoshida lifts and simultaneous non-vanishing of dihedral twists of modular L-functions}
\author{Abhishek Saha}
\address{Department of Mathematics \\
University of Bristol\\
Bristol BS81SN \\
UK} \email{[email protected]}
\author{Ralf Schmidt}
\address{Department of Mathematics
\\ University of Oklahoma\\ Norman\\
OK 73019, USA}
\email{[email protected]}
\begin{abstract}
Given elliptic modular forms $f$ and $g$ satisfying certain conditions on their weights and levels, we prove (a quantitative version of the statement) that there exist infinitely many imaginary quadratic fields $K$ and characters $\chi$ of the ideal class group ${\rm Cl}_K$ such that $L(\frac12,{\rm BC}_{K}(f)\times\chi)\neq0$ and $L(\frac12,{\rm BC}_{K}(g)\times\chi)\neq0$. The proof is based on a non-vanishing result for Fourier coefficients of Siegel modular forms combined with the theory of Yoshida liftings.
\end{abstract}
\maketitle
\let\thefootnote\relax\footnotetext{2010 MSC: 11F30, 11F70, 11F46, 11F66}
\section{Introduction}
Let $K$ be an imaginary quadratic field of discriminant $-d$. We denote its ideal class group by ${\rm Cl}_K$ and the group of ideal class characters by $\widehat{{\rm Cl}_K}$. For any $\chi$ in $\widehat{{\rm Cl}_K}$ and $f$ a holomorphic newform with trivial nebentypus as in~\cite{at-le}, one can form the $L$-function $L(s, \pi_f \times \theta_\chi)$; this is the Rankin-Selberg convolution of the automorphic representation $\pi_f$ attached to $f$ and the $\theta$-series $$\theta_\chi(z) = \sum_{0 \ne {\mathfrak a} \subset O_K}\chi({\mathfrak a}) e(N({\mathfrak a})z).$$
Here, $\theta_\chi$ is a holomorphic modular form of weight $1$ and nebentypus $\left(\frac{-d}{*} \right)$ on $\Gamma_0(d)$; it is a cusp form if and only if $\chi^2 \ne 1$. We remark here that $L(s, \pi_f \times \theta_\chi) = L(s, {\rm BC}_{K}(\pi_f) \times \chi) $ where ${\rm BC}_{K}$ denotes base-change to $K$.
The problem of studying the non-vanishing of the central values $L(\frac12, \pi_f \times \theta_\chi)$ arises naturally in several contexts, and a considerable amount of work has been done in this direction. We note in particular the paper of Michel and Venkatesh~\cite{micven07} which proves that given a cusp form $f$ (satisfying certain conditions on weight and level) and an imaginary quadratic field $K$ of discriminant $-d$, there exist asymptotically at least $d^{\frac{1}{2700} - \epsilon}$ characters $\chi \in \widehat{{\rm Cl}_K}$ such that $L(\frac12, \pi_f \times \theta_\chi) \ne 0$. The introduction to~\cite{micven07} has a review of several other papers on related questions and a summary of the methods available.
In this paper, we prove a \emph{simultaneous} non-vanishing result for $L(\frac12, \pi_f \times \theta_\chi)$, $L(\frac12, \pi_g \times \theta_\chi) $ for two fixed forms $f$, $g$ (but varying $K$ and $\chi$) under certain hypotheses.
\begin{theorem}\label{th:simulnonvan}Let $k>1$ be an odd integer. Let $N_1$, $N_2$ be two positive, squarefree integers such that $M = \gcd(N_1, N_2)>1$. Let $f$ be a holomorphic newform of weight $2k$ on $\Gamma_0(N_1)$ and $g$ be a holomorphic newform of weight $2$ on $\Gamma_0(N_2)$. Assume that for all primes $p$ dividing $M$ the Atkin-Lehner eigenvalues of $f$ and $g$ coincide. Then there exists an imaginary quadratic field $K$ and a character $\chi \in \widehat{{\rm Cl}_K}$ such that $L(\frac12, \pi_f \times \theta_\chi) \ne 0$ and $L(\frac12, \pi_g \times \theta_\chi) \ne 0$. In fact, if $D(f,g)$ is the set of $d$ satisfying the following conditions:
\begin{enumerate}
\item $d>0$ is an odd, squarefree integer and $-d$ is a fundamental discriminant,
\item \label{part 2} There exists an ideal class group character $\chi$ of $K={\mathbb Q}(\sqrt{-d})$ such that $L(\frac12, \pi_f \times \theta_\chi) \neq 0$ and $L(\frac12, \pi_g \times \theta_\chi) \neq 0$,
\end{enumerate}
then, for any $0<\delta <\frac58$, one has the lower bound \footnote{Recall that $A(X) \gg_{a,b,..} B(X)$ means that there exist constants $C>0$, $D>0$, which depend only on $a,b,..$, such that $A(X) > C |B(X)|$ for all $X>D$.}
\begin{equation} \label{bounddelta}|\{0<d<X, \ d\in D(f,g) \}| \gg_{f,g,\delta} X^\delta.\end{equation}
\end{theorem}
Note that if we could show that for some $K$ the trivial character is a suitable choice for $\chi$, we would solve the long-standing problem of producing a quadratic Dirichlet character such that the twisted central $L$-values of $f$ and $g$ are simultaneously non-zero. However, because of the nature of our method, we cannot get any handle on the $\chi$ that are good for our purposes, nor can we give any quantitative bounds on how many such $\chi$ exist for each $K$.
Our method involves Siegel modular forms, Jacobi forms, and classical holomorphic forms of half-integral weight. First, we lift the pair $(f,g)$ via the theta correspondence to a Siegel cusp form of degree 2 and level $N$. Such lifts are traditionally known as Yoshida lifts (after Hiroyuki Yoshida who first investigated such forms in~\cite{yosh1980}) and have been studied extensively by B{\"o}cherer and Schulze-Pillot~\cite{bocsch, bocsch1991, bocsch1994, bocsch1997}. In fact, the Yoshida lift is a certain case of Langlands functoriality; see Sect.~\ref{yoshidarepsec} for more details.
Via the ``pullback" of Bessel periods from \cite[Theorem 3]{prasadbighash} and the formula of Waldspurger, Theorem~\ref{th:simulnonvan} reduces to showing that the Yoshida lift attached to $(f,g)$ has many non-vanishing Fourier coefficients of fundamental discriminant. This turns out to be a special case of the other main result of this paper, Theorem~\ref{th:nonvanfouriersiegel}, which asserts that any Siegel cusp form of degree 2 and squarefree level that is an eigenfunction of certain Hecke operators has many non-zero fundamental Fourier coefficients. The proof of Theorem~\ref{th:nonvanfouriersiegel} --- which exploits the Fourier-Jacobi expansion of Siegel forms and the relation between Jacobi forms and holomorphic modular forms of half-integral weight --- involves only minor modifications to the proof of the main result of~\cite{sahafund}, where a version of this theorem in the case of full level was proved.
After some basic facts and definitions, Theorem~\ref{th:nonvanfouriersiegel} is stated in Sect.\ \ref{nonvanishingsec} below. We explain how its proof follows from a statement about half-integral weight modular forms, and continue to prove this half-integral weight result in Sect.\ \ref{halfintegralsec}. Then, in Sect.\ \ref{yoshidasec}, we turn to Yoshida liftings, starting with some general facts on the relationship between Siegel modular forms of degree $2$ and automorphic representations of the group ${\rm GSp}_4$. This is followed by a brief survey of the representation-theoretic construction of the Yoshida lifting due to Roberts \cite{rob2001}. Combined with some local results about representations of the non-archimedean ${\rm GSp}_4$, we explain how this leads to an alternative proof of the existence of the classical Yoshida liftings constructed in \cite{bocsch1991, bocsch1997}. The alternative proof comes with a few additional benefits, which will be used in Sect.\ \ref{besselsec}. We start this final section by giving some background on Bessel models and their relationship with Fourier coefficients. Finally, combining Theorem~\ref{th:nonvanfouriersiegel}, Yoshida liftings and \cite[Thm.\ 3]{prasadbighash}, we prove Theorem \ref{th:simulnonvan}. Note that while our method gives a lower bound on the number of non-vanishing twists, it does not give a lower bound on the size of the non-vanishing $L$-value itself.
We say a few words about the restrictions on $f$ and $g$ in Theorem~\ref{th:simulnonvan}. The conditions that $N_1, N_2$ are squarefree and that the Atkin-Lehner eigenvalues of $f$ and $g$ coincide are needed to ensure that there exists a holomorphic Yoshida lift attached to $(f,g)$ with respect to a Siegel-type congruence subgroup $\Gamma_0^{(2)}(N) \subset \mathrm{Sp}_4({\mathbb Z})$ of squarefree level $N$. Indeed, our key result (Theorem~\ref{th:nonvanfouriersiegel}) on non-vanishing fundamental Fourier coefficients is only proved for Siegel cusp forms with respect to such congruence subgroups. However, even if these two conditions are removed, $(f,g)$ will still have a Yoshida lift (possibly with respect to some other congruence subgroup) provided that there is a prime $p$ dividing $\gcd(N_1, N_2)$ such that $\pi_f$, $\pi_g$ are both discrete series at $p$.\footnote{The restriction that there is a prime dividing $\gcd(N_1, N_2)$ where $\pi_f$, $\pi_g$ are both discrete series will probably be very difficult to remove by our method, because without this condition there are no Jacquet-Langlands transfers and hence no (holomorphic) Yoshida lifts. It is conceivable that one could still consider the ``Fourier coefficients" of the non-holomorphic Yoshida lift in this setup and prove a non-vanishing result for those.} So, an analogue of Theorem~\ref{th:nonvanfouriersiegel} for Siegel cusp forms with respect to more general congruence subgroups will allow us to remove some of the restrictions on $f$ and $g$. This is currently work in progress by J. Marzec at the University of Bristol. To remove the restriction on the weight of $g$ would require us to extend Theorem~\ref{th:nonvanfouriersiegel} to vector valued Siegel cusp forms, which seems possible in principle.
A word about the exponent $\frac58$ in Theorem~\ref{th:simulnonvan}. Let $\theta$ be a real number such that given any $\epsilon>0$ and a cusp form $f$ of weight $k + \frac12$ with $k\ge1$ we have $\widetilde{a}(f,n) \ll_{f, \epsilon} n^{\theta + \epsilon}$; here $n$ varies over squarefree integers coprime to the level and $\widetilde{a}(f,n)$ denotes the normalized Fourier coefficient. Then what we really prove is that~\eqref{bounddelta} is valid for any $0 < \delta < 1-2\theta$. The first non-trivial bound for $\theta$ ($= \frac{3}{14}$) was obtained by Iwaniec~\cite{iwanfourier}. In Theorem~\ref{th:simulnonvan} we have used $\theta = \frac{3}{16}$ which is due to Bykovski{\u\i}~\cite{byko}; see also the papers of Blomer--Harcos~\cite{blomer-harcos08} and Conrey--Iwaniec~\cite{conrey-iwaniec}.
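In particular, $1-2\cdot\frac{3}{16}=\frac58$, which accounts for the exponent appearing in \eqref{bounddelta}.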
As for related work, we have already mentioned the paper of Michel and Venkatesh~\cite{micven07}. The more general problem of non-vanishing of twists of automorphic $L$-functions has a long history. The book of Murty--Murty~\cite{murty-murty}, which brings together some of the main techniques and results in the area, is a good reference; see also the introduction to Ono--Skinner~\cite{ono-skinner}. There are only a few simultaneous non-vanishing results available in the literature. An interesting example is the result of Michel and Vanderkam~\cite{micvan} where families of \emph{three} different ${\rm GL}_1 \times {\rm GL}_2$ $L$-functions are considered. Closely related to the present work is a paper of Prasad and Takloo-Bighash on Bessel models where a similar non-vanishing result is proved~\cite[Corollary 13.3]{prasadbighash}; however, in their result, the twisting character $\chi$ can be any Hecke character of $K$ (of possibly high conductor) and it does not seem possible to give an \emph{effective} bound on this conductor in terms of $f$, $g$ by their method. For many arithmetic applications, it is necessary to know the existence of non-vanishing twists by characters whose conductor can be effectively bounded, and the ideal scenario is when an unramified twist exists, as in Theorem~\ref{th:simulnonvan}.
\section{Siegel cusp forms of degree 2}\label{siegelsection}
\subsection{Preliminaries}For any commutative ring $R$ and positive integer $n$, let $M_n(R)$
denote the ring of $n$ by $n$ matrices with entries in $R$ and ${\rm GL}_n(R)$ denote the group of invertible matrices. If
$A\in M_n(R)$, we let $\T{A}$ denote its transpose. We let $M_n^{\rm sym}(R)$ denote the additive group of symmetric matrices in $M_n(R)$. We say that a matrix in $M_n^{\rm sym}({\mathbb Z})$ is semi-integral if
it has integral diagonal entries and half-integral off-diagonal
ones. We let $\Lambda_n \subset M_n^{\rm sym}({\mathbb Z})$ denote the set of symmetric, semi-integral, positive-definite matrices of
size $n$.
Denote by $J$ the $4$ by $4$ matrix given by
$$
J =
\begin{pmatrix}
0 & I_2\\
-I_2 & 0\\
\end{pmatrix},
$$ where $I_2$ is the identity matrix of size 2. Define the algebraic groups
${\rm GSp}_4$ and $\mathrm{Sp}_4$ over ${\mathbb Z}$ by
$${\rm GSp}_4(R) = \{g \in {\rm GL}_4(R) \; | \; \T{g}Jg =
\mu_2(g)J,\:\mu_2(g)\in R^{\times}\},$$
$$
\mathrm{Sp}_4(R) = \{g \in {\rm GSp}_4(R) \; | \; \mu_2(g)=1\},
$$
for any commutative ring $R$. The group ${\rm GSp}_4$ will be denoted by the letter $G$.
The Siegel upper-half space of degree 2 is defined by
$$
\mathbb H_2 = \{ Z \in M_2({\mathbb C})\;|\;Z =\T{Z},\ \mathrm{Im}(Z)
\text{ is positive definite}\}.
$$
We define
$$
g \langle Z\rangle = (AZ+B)(CZ+D)^{-1}\qquad\text{for }
g=\begin{pmatrix} A&B\\ C&D \end{pmatrix} \in \mathrm{Sp}_4({\mathbb R}),\;Z\in \mathbb H_2.
$$
We let $J(g,Z) = CZ + D$ and use $i_2$ to denote the point $\begin{pmatrix}i&\\& i \end{pmatrix} \in \mathbb H_2$.
For any positive integer $N$,
define
\begin{equation}\label{Gamma0defeq}
\Gamma_0^{(2)}(N) := \left\{\begin{pmatrix}A&B\\ C&D \end{pmatrix} \in \mathrm{Sp}_4({\mathbb Z})\;|\;C \equiv 0 \pmod{N}\right\}.
\end{equation}
Let $S_k^{(2)}(N)$ denote the space of holomorphic functions $F$ on
$\mathbb H_2$ which satisfy the relation
\begin{equation}\label{siegeldefiningrel}
F(\gamma \langle Z\rangle) = \det(J(\gamma,Z))^k F(Z)
\end{equation}
for $\gamma \in \Gamma_0^{(2)}(N)$, $Z \in \mathbb H_2$, and vanish at all the
cusps. Elements of $S_k^{(2)}(N)$ are often referred to as Siegel cusp forms of degree (genus) 2, weight $k$ and level $N$.
\subsection{The Fourier and Fourier-Jacobi expansions}
It is well-known that any $F$ in $S_k^{(2)}(N)$ has a Fourier expansion \begin{equation}\label{siegelfourierexpansion}F(Z)
=\sum_{T \in \Lambda_2} a(F, T) e(\mathrm{Tr}(TZ)).
\end{equation}
Applying~\eqref{siegeldefiningrel} for $\gamma = \begin{pmatrix}A&\\&\T{A}^{-1}\end{pmatrix}$, where $A \in {\rm GL}_2({\mathbb Z})$, yields the relation \begin{equation}\label{fourierinvariance}a(F, T) = \det(A)^k\,a(F, \T{A}TA) \end{equation} for $A \in {\rm GL}_2({\mathbb Z})$. In particular, if $k$ is even, then the Fourier coefficient $a(F, T)$ depends only on the ${\rm GL}_2({\mathbb Z})$ equivalence class of $T$.
The Fourier expansion~\eqref{siegelfourierexpansion} also immediately shows that any $F \in S_k^{(2)}(N)$ has a ``Fourier--Jacobi expansion"
\begin{equation}\label{fjexpand}F(Z) = \sum_{m > 0} \phi_m(\tau, z) e(m \tau')\end{equation} where we write $Z= \begin{pmatrix}\tau&z\\z&\tau' \end{pmatrix}$ and for each $m>0$,
\begin{equation}\label{jacobifourier}\phi_m(\tau, z) = \sum_{\substack{n,r \in {\mathbb Z} \\ 4nm> r^2}}a \left(F, \mat{n}{r/2}{r/2}{m}\right) e(n \tau) e(r z) \in J_{k,m}^{\text{cusp}}(N). \end{equation} Here $J_{k,m}^{\text{cusp}}(N)$ denotes the space of Jacobi cusp forms of weight $k$, level $N$ and index $m$; for details see~\cite{manickram}. If we put $c(n,r) = a \left(F, \mat{n}{r/2}{r/2}{m}\right)$, then~\eqref{jacobifourier} becomes
$$\phi_m(\tau, z) = \sum_{\substack{n,r \in {\mathbb Z} \\ 4nm> r^2}} c(n,r) e(n \tau) e(r z),$$ and this is called the Fourier expansion of the Jacobi form $\phi_m$.
\subsection{The $U(p)$ operator} For each prime $p$ dividing $N$, there exists a Hecke operator $U(p)$ acting on the space $S_k^{(2)}(N)$. It can be most simply described by its action on Fourier coefficients,
\begin{equation}\label{upaction}F(Z) = \sum_{T \in \Lambda_2} a(F, T) e(\mathrm{Tr}(TZ)) \mapsto (U(p) F)(Z) =
\sum_{T \in \Lambda_2} a(F, pT) e(\mathrm{Tr}(TZ)).\end{equation}
When $N$ is squarefree, the operator $U(p)$ has been interpreted representation-theoretically in~\cite{sch} (where this operator is called $T_2(p)$). Furthermore, it has been proved by B{\"o}cherer~\cite{boch-up} that $U(p)$ is an invertible operator on the space $S_k^{(2)}(N)$ (we will however not need this fact).
\begin{lemma}\label{nonvanlemma} Let $F \in S_k^{(2)}(N)$ be an eigenfunction for the Hecke operators $U(p)$ for all $p$ dividing $N$. Suppose for some $N_1$ dividing $N$ and some $T \in \Lambda_2$ we have that $a(F, N_1T) \neq 0$. Then $a(F, T) \neq 0$.
\end{lemma}
\begin{proof} For each fixed $T$, we prove the statement by using induction on the number of primes (counted with multiplicity) dividing $N_1$. The statement is trivially true if $N_1 = 1$. Now let $N_1 >1$ and let $a(F, N_1T) \neq 0$. We need to show that $a(F, T) \neq 0$. Let $p$ be a prime dividing $N_1$ and suppose that $U(p) F = \lambda_p F$; such a $\lambda_p$ exists by our assumption on $F$. By~\eqref{upaction}, this means that $a(F, pS) = \lambda_p a(F,S)$ for all $S \in \Lambda_2$. Applying this fact for $S= (N_1/p)T$ and using the assumption $a(F, N_1T) \neq 0$, we deduce that $a(F, (N_1/p)T) \neq 0$. Now the induction hypothesis shows that $a(F, T) \neq 0$.
\end{proof}
\subsection{Non-vanishing of Fourier coefficients}\label{nonvanishingsec}
Recall that elements $S$ of $\Lambda_2$ are matrices of the form
$$
S=\mat{a}{b/2}{b/2}{c},\qquad a,b,c\in{\mathbb Z}, \qquad a>0, \qquad {\rm disc}(S) := b^2 - 4ac < 0.
$$
If $\gcd(a,b,c)=1$, then $S$ is called \emph{primitive}. If ${\rm disc}(S)$ is a fundamental discriminant, then $S$ is called \emph{fundamental}. Observe that if $S$ is fundamental, then it is automatically primitive. Observe also that if ${\rm disc}(S)$ is odd, then $S$ is fundamental if and only if ${\rm disc}(S)$ is squarefree.
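Indeed, if ${\rm disc}(S)=b^2-4ac$ is odd then $b$ is odd, so ${\rm disc}(S)\equiv 1\pmod{4}$; and a negative discriminant that is congruent to $1$ modulo $4$ is fundamental precisely when it is squarefree.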
In an earlier work~\cite{sahafund} of the first author, it was shown that elements of $S_k^{(2)}(1)$ are uniquely determined by almost all of their fundamental Fourier coefficients. We now extend that result to elements of $S_k^{(2)}(N)$ under some assumptions as well as make it quantitative\footnote{Note however, that in the full-level case treated in~\cite{sahafund}, $k$ was allowed to be any integer while here we will restrict to $k$ even.}. In the theorem below, $\mathfrak{S}$ denotes the set of odd squarefree positive integers.
\begin{theorem} \label{th:nonvanfouriersiegel}Let $k>2$ be even and $N$ be a squarefree integer. Let $0 \ne F \in S_k^{(2)}(N)$ be an eigenfunction for the $U(p)$ operator for all primes $p$ dividing $N$. Then, for any $0< \delta < \frac58$, one has the lower bound
$$|\{0 < d < X , \ d \in \mathfrak{S}, \ a(F, S) \ne 0 \text{ for some } S \text{ with } d = -{\rm disc} (S) \}| \gg_{F, \delta} X^\delta. $$
\end{theorem}
\begin{remark} In particular, this implies that for $k, N$ as above, if $F \in S_k^{(2)}(N)$ is non-zero and an eigenfunction for the $U(p)$ operator for all primes $p$ dividing $N$, then there exist infinitely many fundamental $S$ such that $a(F,S) \ne 0$.
\end{remark}
\begin{proof} The proof is very similar to that of Theorem 1 of~\cite{sahafund}. Let $F\in S_k^{(2)}(N)$ be non-zero and an eigenfunction for the $U(p)$ operator for all primes $p$ dividing $N$. A result of Yamana~\cite[Thm. 2]{yamana09} tells us that there exists a \emph{primitive} matrix $T$ and an integer $N_1$ dividing $N$ such that $a(F, N_1T) \neq 0$. Now by Lemma~\ref{nonvanlemma}, it follows that $a(F, T) \neq 0$.
Since $T$ is primitive, we can write $T= \mat{n}{r/2}{r/2}{m}$ with $\gcd(m,r,n) = 1$ and $4nm> r^2$. By the main theorem of~\cite{iwanprime}, there exist infinitely many primes of the form $mx_0^2 + rx_0y_0 + ny_0^2$. We pick a prime $p$ such that $p \nmid N$ and $p = mx_0^2 + rx_0y_0 + ny_0^2$. Since this implies $\gcd(x_0, y_0) = 1$, we can find integers $x_1$, $y_1$ such that $A= \mat{y_1}{y_0}{x_1}{x_0} \in {\rm SL}_2({\mathbb Z}).$ Let $T' =\,\T\!ATA$. Then $a(F,T) = a(F, T')$ and $T'$ is of the form $\mat{n'}{r'/2}{r'/2}{p}$.
This implies that there is a prime $p$ not dividing $N$ such that the Jacobi form $\phi_p$ in the expansion~\eqref{fjexpand} satisfies $\phi_p \neq 0$. Let us denote $$c(n,r) = a\left(F, \mat{n}{r/2}{r/2}{p}\right).$$ Then the Fourier expansion of $\phi_p$ is given by $$\phi_p(\tau, z) = \sum_{\substack{n,r \in {\mathbb Z} \\ 4np> r^2}} c(n,r) e(n \tau) e(r z).$$ By our assumption $c(n', r') \neq 0$, where $$T' = \mat{n'}{r'/2}{r'/2}{p}.$$
Now, let $$h(\tau) = \sum_{m=1}^\infty c(m) e(m \tau),$$
where $$c(m) = \sum_{\substack{0 \le \mu \le 2p-1 \\ \mu^2 \equiv -m \pmod{4p}}} c\left((m+\mu^2)/4p, \mu \right).$$
By Theorem 4.8 of~\cite{manickram}, we know that $h \in S_{k-\frac{1}{2}}(4pN)$. Here $S_{k-\frac{1}{2}}(4pN)$ denotes the space of cusp forms of weight $k - \frac12$ for $\Gamma_0(4pN)$; for the basic definitions and properties of such spaces of half-integral forms, see for instance~\cite[Sect.\ 3.1]{sahafund}.
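For concreteness, here is a minimal sketch (Python; the dictionary \texttt{jacobi\_c} holding the values $c(n,r)$ is a hypothetical stand-in for the actual Fourier coefficients of $\phi_p$) of how a single coefficient $c(m)$ of $h$ is assembled from the coefficients of the Jacobi form:
\begin{verbatim}
def h_coefficient(m, p, jacobi_c):
    """c(m) = sum over 0 <= mu <= 2p-1 with mu^2 = -m (mod 4p)
    of c((m + mu^2)/(4p), mu); jacobi_c maps (n, r) to c(n, r)."""
    total = 0
    for mu in range(2 * p):
        if (mu * mu + m) % (4 * p) == 0:
            n = (m + mu * mu) // (4 * p)
            total += jacobi_c.get((n, mu), 0)
    return total
\end{verbatim}
With this notation, the non-vanishing argument below amounts to exhibiting a single $m=d_0$ for which this sum is visibly non-zero.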
It is easy to see that $h(\tau)$ is not identically equal to 0. Indeed, put $d_0 = 4 n' p - r'^2$. Since $\mu^2 \equiv -d_0 \equiv r'^2 \pmod{4p}$ forces $\mu \equiv \pm r' \pmod{2p}$, the only contributions to $c(d_0)$ come from $\mu \equiv r'$ and $\mu \equiv -r' \pmod{2p}$. Hence $c(d_0)$ equals $a\left(F, \mat{n'}{r'/2}{r'/2}{p}\right) + a\left(F, \mat{n'+p-r'}{p - r'/2}{p - r'/2}{p}\right)$, which is simply $2 a\left(F, \mat{n'}{r'/2}{r'/2}{p}\right)$ by~\eqref{fourierinvariance} and hence non-zero.
Now, by Theorem~\ref{hafintgen} below, it follows that $$|\{0 < d<X, \ d \in \mathfrak{S}, \ c(d) \ne 0\}|\gg_{h,\delta} X^\delta. $$ For any of these $d$, there exists a $\mu$ such that $c\left(\frac{d+\mu^2}{4p},\mu \right) = a \left(F, \mat{\frac{d+\mu^2}{4p}}{\mu/2}{\mu/2}{p}\right)$ is not equal to zero. This completes the proof.
\end{proof}
\subsection{A result on half-integral weight cusp forms}\label{halfintegralsec}
The following theorem, which is a generalization of Theorem 2 of~\cite{sahafund}, was used in the proof above. We refer the reader to~\cite[Sect.\ 3.1]{sahafund} for the notations and definitions related to half-integral weight cusp forms.
\begin{theorem}\label{hafintgen}Let $N$ be a positive integer divisible by $4$ and $\chi : ({\mathbb Z} / N {\mathbb Z})^\times \rightarrow {\mathbb C}^\times$ be a character. Write $\chi = \prod_{p | N} \chi_p$ and assume that the following conditions are satisfied:
\begin{enumerate}
\item $N$ is not divisible by 16, and if $N$ is divisible by $8$, then $\chi_2 =1$,
\item $N$ is not divisible by $p^3$ for any odd prime $p$,
\item If $p$ is an odd prime such that $p^2$ divides $N$, then $\chi_p \ne 1$.
\end{enumerate}
For some $k \ge 2$, let $f \in S_{k+\frac{1}{2}}(N, \chi)$ be non-zero with the Fourier expansion $f(z) = \sum_{n > 0} a(f, n)e(nz).$ Then, for any $0< \delta < \frac58$, one has the lower bound
$$|\{0 < d < X , \ d \in \mathfrak{S}, \ a(f,d) \ne 0 \}| \gg_{f, \delta} X^\delta. $$
\end{theorem}
The rest of this subsection is devoted to the proof of the above theorem. We start with the following key proposition.
\begin{proposition}\label{keyprop}Let $k \ge 2$, let $N$ be a positive integer divisible by $4$, and let $\chi$ be a Dirichlet character $\bmod\ N$. Let $f \in S_{k + \frac12}(N, \chi)$, $f \ne 0$, and suppose that $a(f,n)$ equals 0 whenever $n$ and $N$ have a common prime factor. Then, for any $0< \delta < \frac58$, one has the lower bound
$$|\{0 < d < X , \ d \in \mathfrak{S}, \ a(f,d) \ne 0 \}| \gg_{f, \delta} X^\delta. $$
\end{proposition}
\begin{proof} The qualitative version of this proposition, i.e., the assertion that there are infinitely many $d\in \mathfrak{S}$ such that $a(f,d) \neq 0$, is just Proposition 3.7 of~\cite{sahafund}. The proof of the quantitative version as stated here requires no new ingredients. Indeed, the proof there proceeded by showing that there exists an integer $M$ such that $$S(M,X;f) := \sum_{\substack{d\in \mathfrak{S} \\ (d, M) = 1}} |\tilde{a}(f, d)|^2 e^{-d/X}$$
satisfies $S(M,X;f) \gg_f X$. Here $\tilde{a}(f,n)$ denotes the ``normalized" Fourier coefficients, defined by
$$\tilde{a}(f,n) = a(f,n)n^{ \frac14-\frac k2}.$$ Proposition~\ref{keyprop} now follows immediately from the well-known bound due to Bykovski{\u\i}~\cite{byko} that $$|\tilde{a}(f, m)|^2 \ll_{f,\epsilon} m^{\frac{3}{8} + \epsilon}.$$ Indeed, the terms of $S(M,X;f)$ with $d > X\log X$ contribute $o(X)$, so the bound $S(M,X;f)\gg_f X$ forces at least $\gg_{f,\epsilon} X^{\frac58 - \epsilon}$ non-zero terms with $d \ll X\log X$; since $\delta < \frac58$ is arbitrary, the logarithmic loss is harmless and the stated bound follows.
\end{proof}
We now prove Theorem~\ref{hafintgen}. Let $f \in S_{k+\frac{1}{2}}(N, \chi)$ be non-zero, where $N, \chi$ satisfy the assumptions listed in the statement of the theorem. Let $2=p_1, p_2,\ldots,p_t$ be the distinct primes dividing $N$. For $1 \le i \le t$, let $S_i = \{p_1, \ldots, p_i\}$. We will construct a sequence of forms $g_i$, $0 \le i \le t$, such that \begin{enumerate}
\item $g_0 = f$,
\item $g_i \ne 0$ for any $0 \le i \le t$,
\item For $1 \le i \le t$, $g_i \in S_{k+ \frac12}(N N_i, \chi \chi_i)$ where $N_i$ is not divisible by any prime lying outside $S_i$ and $\chi_i$ is a Dirichlet character whose conductor is not divisible by any prime outside $S_i$,
\item $a(g_i, n) = 0$ whenever $n$ is divisible by a prime in $S_i$,
\item For each $1 \le i \le t$, the following implication holds: if $$|\{0 < d < X , \ d \in \mathfrak{S}, \ a(g_i,d) \ne 0 \}| \gg_{g_i, N, \delta} X^\delta, $$ then it is also true that $$|\{0 < d < X , \ d \in \mathfrak{S}, \ a(g_{i-1},d) \ne 0 \}| \gg_{g_{i-1}, N, \delta} X^\delta. $$
\end{enumerate}
It is clear that the existence of such a sequence of forms, along with Proposition~\ref{keyprop} above, directly implies Theorem~\ref{hafintgen}. The proof of the fact that such forms $g_i$ exist follows exactly the argument in Section 3.5 of~\cite{sahafund}. Indeed, the only difference is that in~\cite{sahafund}, $N/4$ was assumed to be odd, while here we allow $N/4$ to be divisible by 2 (but not by 4) so long as $\chi_2 =1$. A careful look at the proof of Theorem 2 of~\cite{sahafund} shows that the only place where the assumption that $N/4$ is odd was used was in Section 3.4, in order to show that $g_1 \neq 0$. Now, if $g_1 = 0$, then $a(g_0, n) = 0$ unless $n$ is even. By~\cite[Lemma 7]{Serre-Stark}, this implies that the conductor of $\epsilon_2$ divides $N/4$; here $\epsilon_{2}$ is the quadratic character associated to the field ${\mathbb Q}(\sqrt{2}).$ Since this conductor is equal to 8, this means that $N$ is divisible by 32, contradicting assumption (1). This completes the proof of Theorem~\ref{hafintgen}.
\section{Yoshida lifts}\label{yoshidasec}
\subsection{Siegel cusp forms and representations}\label{siegelfromrepsec}
Below we will use a representation theoretic construction of certain elements of $S_k^{(2)}(N)$. In preparation, we will briefly explain the relationship between Siegel modular forms of degree $2$ and automorphic representations of $G={\rm GSp}_4$. For the full modular group this was explained in \cite{asgsch}, and even though we now work with non-trivial levels, we may still refer to that paper for some details. In the presence of level, the precise correspondence between modular forms and representations is complicated, due to a lack of multiplicity one both locally and globally. However, all we will have to do is construct a cusp form from an irreducible, cuspidal, automorphic representation, and this direction of the correspondence is unproblematic.
Throughout let ${\mathbb A}$ be the ring of adeles of ${\mathbb Q}$. Let $\pi=\otimes\pi_v$ (restricted tensor product) be a cuspidal, automorphic representation of the adelized group $G({\mathbb A})$ with trivial central character. The only requirement on $\pi$ necessary for the construction of classical, holomorphic modular forms is that the archimedean component $\pi_\infty$ is a lowest weight representation $\mathcal{E}(l,l')$ with integers $l\geq l'>0$ in the notation of \cite[Sect.\ 2.3]{pitsch2}. If $l'\geq3$, then $\mathcal{E}(l,l')$ is a holomorphic discrete series representation with Harish-Chandra parameter $(l-1,l'-2)$, but $l'=1$ and $l'=2$ are also admissible. Let $K_\infty\cong U(2)$ be the standard maximal compact subgroup of ${\rm Sp}_4({\mathbb R})$, and let $(\tau_{l,l'},W_{l,l'})$ be a model for the minimal $K_\infty$-type of $\mathcal{E}(l,l')$. Then $\dim W_{l,l'}=l-l'+1$. Up to multiples, the representation $\mathcal{E}(l,l')$ contains a unique vector of weight $(l,l')$ (see \cite[Sect.\ 2.2]{pitsch2}); it corresponds to a highest weight vector $w_1$ in $W_{l,l'}$. In the given (but otherwise arbitrary) model of $\pi_\infty$ we fix a non-zero such vector and denote it by $f_\infty$.
As for finite places, if $p$ is a prime such that $\pi_p$ is an unramified representation, let $f_p$ be a non-zero vector in $\pi_p$ such that $f_p$ is invariant under $G({\mathbb Z}_p)$. For other primes, fix any non-zero vector $f_p$ in $\pi_p$, and let $K_p$ be any compact and open subgroup of $G({\mathbb Q}_p)$ such that $f_p$ is invariant under $K_p$; for example, $K_p$ could be a principal congruence subgroup of $G({\mathbb Z}_p)$ of high enough level. Then
$$
\Gamma=G({\mathbb Q})^+\cap\prod_{p<\infty}K_p,
$$
where the superscript ``$+$'' denotes elements with positive multiplier, is a discrete subgroup of ${\rm Sp}_4({\mathbb R})$.
Now let $\Phi$ be the vector in the space of $\pi$ corresponding to the pure tensor $\otimes f_v$ in $\otimes\pi_v$. Then $\Phi$ is a ${\mathbb C}$-valued function on $G({\mathbb A})$ which is left-invariant under $G({\mathbb Q})$ and right-invariant under $\prod_{p<\infty}K_p$. We would like to construct from $\Phi$ a function taking values in the contragredient representation $W_{l,l'}^\vee$. We claim that there exists a unique function $L:\:G({\mathbb A})\rightarrow W_{l,l'}^\vee$ such that
\begin{equation}\label{tildePhidefeq}
\Phi(g)=L(g)(w_1)\qquad\text{for all }g\in G({\mathbb A}).
\end{equation}
Indeed, let $w_1,\ldots,w_n$ be a basis of $W_{l,l'}$ such that $w_2,\ldots,w_n$ have weights different from $w_1$, and let $L_1,\ldots,L_n$ be a basis of $W_{l,l'}^\vee$. Then, by the Peter-Weyl theorem, there exist uniquely determined complex numbers $c_{ij}(g)$ such that
$$
\Phi(gh)=\sum_{i,j=1}^nc_{ij}(g)L_i(\tau_{l,l'}(h)w_j)\qquad\text{for all }h\in K_\infty.
$$
Since $\Phi$ has weight $(l,l')$, it follows that $c_{ij}=0$ for $j\neq1$. Hence (\ref{tildePhidefeq}) holds with $L(g)=\sum_ic_{i1}(g)L_i$. The uniqueness of $L$ follows from the construction.
Observe that, by construction, $\Phi(gh)=(\tau_{l,l'}^\vee(h^{-1})L(g))w_1$ for all $h\in K_\infty$, and $L$ is characterized by this property. This implies that
\begin{equation}\label{tildePhirhoeq}
L(gh)=\tau_{l,l'}^\vee(h^{-1})L(g)\qquad\text{for all }g\in G({\mathbb A})\text{ and }h\in K_\infty.
\end{equation}
Furthermore, $L$ is left-invariant under $G({\mathbb Q})$ and right-invariant under $\prod_{p<\infty}K_p$.
We can now construct a modular form $F$ on the Siegel upper half space $\mathbb{H}_2$ taking values in $W_{l,l'}^\vee$. First we extend $\tau_{l,l'}^\vee$, which is a representation of $U(2)$, to a representation of ${\rm GL}_2({\mathbb C})$; by the unitary trick, this can be done in exactly one way. It is easy to verify that this extension has highest weight $(l,l')$ in the sense of \cite[Appendix to I.6]{Fr1991}. We will write $\rho_{l,l'}$ for this extension. For $Z$ in $\mathbb{H}_2$, let $g$ be an element of ${\rm Sp}_4({\mathbb R})$ such that $Z=g\langle i_2\rangle$, and set
$$
F(Z)=\rho_{l,l'}(Ci_2+D)L(g),\qquad\text{where }g=\mat{A}{B}{C}{D}.
$$
Then $F$ is a well-defined holomorphic function on $\mathbb{H}_2$ with values in the space of $\rho_{l,l'}$. It satisfies
\begin{equation}\label{Frhollpeq}
F(\gamma\langle Z\rangle)=\rho_{l,l'}(CZ+D)F(Z),\qquad\text{for }\gamma=\mat{A}{B}{C}{D}\in\Gamma.
\end{equation}
Hence, $F$ is a vector-valued modular form of type $\rho_{l,l'}$ with respect to $\Gamma$, in the sense of \cite{Fr1991}. It is a cusp form, and it is an eigenform of the local Hecke algebras $\mathcal{H}_p$ at each place $p$ where $\mathfrak pi_p$ is unramified. It is scalar-valued if and only if $l=l'$.
In our application below we will have a situation where each $K_p$ can be chosen to be a Siegel congruence subgroup, i.e., a group of type
\begin{equation}\label{Gamma0localdefeq}
\Gamma_{0,p}(M) := \left\{\begin{pmatrix}A&B\\ C&D \end{pmatrix} \in G({\mathbb Z}_p)\;|\;C \equiv 0 \pmod{M{\mathbb Z}_p}\right\}.
\end{equation}
These are, of course, the local analogues of the groups defined in (\ref{Gamma0defeq}). If $K_p=\Gamma_{0,p}(p^{m_p})$ for all $p$, and if $l=l'=k$, then the resulting $F$ will be an element of the space $S_k^{(2)}(N)$ defined earlier, where $N=\prod_pp^{m_p}$.
Provided that the multiplier maps from the local groups $K_p$ to ${\mathbb Z}_p^\times$ are all surjective, the above procedure can be reversed (see \cite[Sect.\ 4.5]{asgsch}), and one can reconstruct the automorphic representation $\mathfrak pi$ from the modular form $F$. In general, starting from an arbitrary cusp form $F$ which is an eigenform for almost all local Hecke algebras, it is unclear whether $F$ generates an \emph{irreducible} automorphic representation.
\subsection{Representation-theoretic Yoshida liftings}\label{yoshidarepsec}
In the language of automorphic representations, the Yoshida lifting is a certain case of Langlands functoriality. We will first make some comments on dual groups and $L$-packets, and then explain how the Yoshida lifting can be constructed using the theta correspondence. In the next section we will use this group theoretic lifting to construct holomorphic Siegel modular forms.
We fix a totally real number field $F$. The Yoshida lifting comes from the embedding of dual groups
\begin{align}\label{dualgroupmorphismeq}
\{(g_1,g_2)\in{\rm GL}_2({\mathbb C})\times{\rm GL}_2({\mathbb C})\:|\:\det(g_1)=\det(g_2)\}&\longrightarrow{\rm GSp}_4({\mathbb C}),\\
(\mat{a}{b}{c}{d},\mat{a'}{b'}{c'}{d'})&\longmapsto\left(\begin{matrix}a&&b\\&a'&&b'\\c&&d\\&c'&&d'\end{matrix}\right).\nonumber
\end{align}
The principle of functoriality predicts that for a pair $\tau_1,\tau_2$ of automorphic representations of ${\rm GL}_2({\mathbb A})$ with the same central character there exists an $L$-packet $\Pi(\tau_1,\tau_2)$ of automorphic representations of $G({\mathbb A})$ such that
\begin{equation}\label{basicLrelationeq}
L(s,\Pi(\tau_1,\tau_2))=L(s,\tau_1)L(s,\tau_2).
\end{equation}
The trivial central character version of the Yoshida lifting comes from the embedding of dual groups ${\rm SL}_2({\mathbb C})\times{\rm SL}_2({\mathbb C})\rightarrow{\rm Sp}_4({\mathbb C})$ given by the same formula, and predicts that for a pair $\tau_1,\tau_2$ of automorphic representations of ${\rm PGL}_2({\mathbb A})$ there exists an $L$-packet $\Pi(\tau_1,\tau_2)$ of automorphic representations of ${\rm PGSp}_4({\mathbb A})$ such that (\ref{basicLrelationeq}) holds.
Let us assume that $\tau_1=\otimes\tau_{1,v}$ and $\tau_2=\otimes\tau_{2,v}$ with irreducible, admissible, \emph{tempered} representations $\tau_{1,v},\tau_{2,v}$ of ${\rm GL}_2(F_v)$; this is all we need for our application. By the results of \cite{gantakGSp4} or the construction in \cite{rob2001}, the local $L$-packets resulting from the morphism (\ref{dualgroupmorphismeq}) have one or two elements. A packet has two elements precisely if $\tau_{1,v}$ and $\tau_{2,v}$ are both discrete series representations. In this case
$$
\Pi(\tau_{1,v},\tau_{2,v})=\{\Pi_v^{\rm gen},\Pi_v^{\rm ng}\},
$$
where $\Pi_v^{\rm gen}$ is a generic representation and $\Pi_v^{\rm ng}$ is a non-generic representation. By definition, the global $L$-packet $\Pi(\tau_1,\tau_2)$ consists of all representations $\Pi=\otimes\Pi_v$ where $\Pi_v$ lies in the local packet $\Pi(\tau_{1,v},\tau_{2,v})$. Hence, if $T$ denotes the set of places where both $\tau_{1,v}$ and $\tau_{2,v}$ are square-integrable, then the number of irreducible admissible representations of $G({\mathbb A})$ in $\Pi(\tau_1,\tau_2)$ is $2^{\#T}$.
Arthur's multiplicity formula makes a precise prediction on which elements of the global packet occur in the discrete automorphic spectrum. Given $\Pi=\otimes\Pi_v$ in $\Pi(\tau_1,\tau_2)$, let $T^{\rm ng}$ be the set of places where $\Pi_v$ is non-generic (this can only happen at places where the local packet has two elements). Then the prediction is that $\Pi$ occurs in the discrete spectrum if and only if $\#T^{\rm ng}$ is \emph{even}. Hence, the number of discrete elements in the $L$-packet is
$$
\#\Pi(\tau_1,\tau_2)_{\rm disc}=\begin{cases}
1&\text{if }T=\emptyset,\\
2^{\#T-1}&\text{if }T\neq\emptyset.
\end{cases}
$$
Thus, the global packet is \emph{finite} and \emph{unstable}. The prediction of Arthur's multiplicity formula in this situation has been proven in \cite[Thm.\ 8.6 (2)]{rob2001}. It turns out that in fact the discretely occurring $\Pi$ are cuspidal, automorphic representations.
The construction of the local and global packets in \cite{gantakGSp4} and \cite{rob2001} uses the theta correspondence (with similitudes) between $G={\rm GSp}_4$ and various orthogonal groups of four-dimensional quadratic spaces. If $D_v$ is a (possibly split) quaternion algebra over $F_v$, considered as a quadratic space with the reduced norm, then it is well known that there is an exact sequence
\begin{equation}\label{GSOexacteq}
1\longrightarrow F_v^\times\longrightarrow D_v^\times\times D_v^\times\longrightarrow{\rm GSO}(D_v)\longrightarrow1.
\end{equation}
Thus, representations of ${\rm GSO}(D_v)$ can be identified with pairs of representations of $D_v^\times$ with the same central character. Each such pair then gives rise to a representation of $G(F_v)$ via the theta correspondence. More precisely, one first induces from ${\rm GSO}(D_v)$ to ${\rm GO}(D_v)$; if this induction is irreducible, it participates in the theta correspondence with $G(F_v)$, and if it is not irreducible, there is a unique irreducible component that participates in the theta correspondence with $G(F_v)$. See Sect.\ 3 of \cite{gantakGSp4} for more information on the relationship between ${\rm GSO}(D_v)$ and ${\rm GO}(D_v)$.
The construction of the local packets $\Pi(\tau_{1,v},\tau_{2,v})$ above is now as follows. First, let $D_v=M_2(F_v)$ be the split quaternion algebra, so that $D_v^\times={\rm GL}_2(F_v)$. Then, via the theta correspondence, the pair $\tau_{1,v},\tau_{2,v}$ gives rise to an irreducible, admissible representation of $G(F_v)$, which is the \emph{generic} member $\Pi^{\rm gen}_v$ in the local packet. Next let $D_v$ be the unique division quaternion algebra over $F_v$. If $\tau_{1,v}$ and $\tau_{2,v}$ are both square integrable, we transfer these representations to $D_v^\times$ via the Jacquet-Langlands correspondence. Using again the theta correspondence, the pair of representations thus obtained gives rise to another representation of $G(F_v)$, which is the \emph{non-generic} member $\Pi_v^{\rm ng}$ of the local packet.
In the global case, let $T$ be as above and let $T^{\rm ng}$ be a subset of $T$ of even cardinality. Let $D$ be the global quaternion algebra over $F$ which is non-split at exactly the places in $T^{\rm ng}$. We use the Jacquet-Langlands lifting to transfer $\tau_1$ and $\tau_2$ to automorphic representations $\tau_1'$ and $\tau_2'$ of $D_{\mathbb A}^\times$. By the global analogue of the exact sequence (\ref{GSOexacteq}), we obtain an automorphic representation of the group ${\rm GSO}(D_{\mathbb A})$. It was proved in \cite{rob2001} that the global theta lifting of this representation to $G({\mathbb A})$ is non-vanishing (again, one should first transition to the global group ${\rm GO}(D_{\mathbb A})$; see \cite[Sect.\ 7 and proof of Thm.\ 8.5]{rob2001}). It follows from the compatibility of the local and global theta correspondence that this global lifting is the element $\Pi=\otimes\Pi_v$ in the global packet $\Pi(\tau_1,\tau_2)$ for which $\Pi_v$ is non-generic exactly at the places in $T^{\rm ng}$.
We close this section by giving a more explicit description of the non-archime\-dean local packets with two elements. To ease the notation, let us omit the subscript $v$ from the local field $F$. The notation we use for irreducible, admissible representations of $G(F)$ goes back to \cite{sallytadic}. The classification into types I, II, etc.\ is taken from \cite{NF}. The $L$-parameters listed in Table A.7 of \cite{NF} coincide with those defined in \cite{gantakGSp4}. In the following table $\sigma$ stands for an arbitrary character of $F^\times$, and $\xi$ denotes a non-trivial quadratic character. The Steinberg representation of ${\rm GL}_2(F)$ is denoted by ${\rm St}_{{\rm GL}(2)}$. The symbols $\pi$, $\pi_1$ and $\pi_2$ stand for supercuspidal representations of ${\rm GL}_2(F)$, and $\omega_\pi$ denotes the central character of $\pi$. Finally, $\nu$ denotes the normalized absolute value on $F$.
\begin{equation}\label{padicpacketseq}
\begin{array}{cclcc}
\tau_1&\tau_2&\multicolumn{2}{c}{\Pi(\tau_1,\tau_2)}&\text{type}\\
\toprule
\sigma{\rm St}_{{\rm GL}(2)}&\xi\sigma{\rm St}_{{\rm GL}(2)}&\Pi^{\rm gen}&\delta([\xi,\nu\xi],\nu^{-1/2}\sigma)&{\rm Va}\\
&&\Pi^{\rm ng}&\multicolumn{2}{c}{\text{non-generic supercuspidal}}\\
\sigma{\rm St}_{{\rm GL}(2)}&\sigma{\rm St}_{{\rm GL}(2)}&\Pi^{\rm gen}&\tau(S,\nu^{-1/2}\sigma)&{\rm VIa}\\
&&\Pi^{\rm ng}&\tau(T,\nu^{-1/2}\sigma)&{\rm VIb}\\
\pi&\pi&\Pi^{\rm gen}&\tau(S,\pi)&{\rm VIIIa}\\
&&\Pi^{\rm ng}&\tau(T,\pi)&{\rm VIIIb}\\
\sigma{\rm St}_{{\rm GL}(2)}&\sigma\pi\;(\omega_\pi=1)&\Pi^{\rm gen}&\delta(\nu^{1/2}\pi,\nu^{-1/2}\sigma)&{\rm XIa}\\
&&\Pi^{\rm ng}&\multicolumn{2}{c}{\text{non-generic supercuspidal}}\\
\pi_1&\pi_2\;(\not\cong\pi_1)&\Pi^{\rm gen}&\multicolumn{2}{c}{\text{generic supercuspidal}}\\
&&\Pi^{\rm ng}&\multicolumn{2}{c}{\text{non-generic supercuspidal}}\\
\end{array}
\end{equation}
We remark that these packets, and many more, appear also in \cite{robjjl}.
\subsection{Classical Yoshida liftings}
In view of the procedure explained in Sect.\ \ref{siegelfromrepsec}, the representation theoretic construction outlined in the previous section may be used to construct holomorphic Siegel modular forms. We will now work over the number field ${\mathbb Q}$. For simplicity, we will consider the trivial central character version of the Yoshida lifting. For $i=1,2$ let $\tau_i=\otimes\tau_{i,v}$ be a cuspidal, automorphic representation of ${\rm GL}_2({\mathbb A})$ corresponding to a primitive cuspform $f_i$ of (even) weight $k_i$ and level $N_i$. Further, we will make the assumption that $N_1$ and $N_2$ are squarefree, since complete local information is currently only available in this case (however, it is possible to construct holomorphic Yoshida liftings in somewhat greater generality). We will also assume that $k_1 \ge k_2$. Since the temperedness hypothesis is satisfied, we obtain a global $L$-packet $\Pi(\tau_1,\tau_2)$ as in the previous section.
To understand the local packet at the archimedean place, let $W_{\mathbb R}={\mathbb C}^\times\sqcup j{\mathbb C}^\times$ be the real Weil group, as in \cite{knapparch}. For an odd, positive integer $l$ let $\varphi_l$ be the two-dimensional, irreducible representation of $W_{\mathbb R}$ given by
\begin{equation}\label{GL2archWeil4}
{\mathbb C}^\times\ni re^{i\theta} \longmapsto\mat{e^{il\theta}}{}{}{e^{-il\theta}},\quad j\longmapsto\mat{}{-1}{1}{}.
\end{equation}
By the local Langlands correspondence, $\varphi_l$ is the parameter of a discrete series representation of ${\rm PGL}_2({\mathbb R})$ with minimal weight $l+1$. Hence, the archimedean parameter of $\tau_{i,\infty}$ is $\varphi_{k_i-1}$, for $i=1,2$. Composing with the dual group morphism (\ref{dualgroupmorphismeq}), we obtain the parameter $\varphi_{k_1-1}\oplus\varphi_{k_2-1}$ (as a representation of $W_{\mathbb R}$). If $k_1\geq k_2+2$, it corresponds to a two-element packet of discrete series representations of ${\rm PGSp}_4({\mathbb R})$ with Harish-Chandra parameter
\begin{align*}
(\lambda_1,\lambda_2)&=\Big(\frac{(k_1-1)+(k_2-1)}2,\:\frac{(k_1-1)-(k_2-1)}2\Big)\\
&=\Big(\frac{k_1+k_2-2}2,\:\frac{k_1-k_2}2\Big).
\end{align*}
In order to obtain holomorphic modular forms, we need to choose the holomorphic element in the $L$-packet. In the notation of Sect.\ \ref{siegelfromrepsec}, this is the lowest weight representation $\mathcal{E}(l,l')$ with
\begin{equation}\label{minKk1k2eq}
(l,l')=\Big(\frac{k_1+k_2}2,\:\frac{k_1-k_2+4}2\Big)
\end{equation}
(the $(1,2)$-shift between the Harish-Chandra parameter and the minimal $K$-type is half the sum of the positive non-compact roots minus half the sum of the positive compact roots). Hence, the element $\Pi=\otimes\Pi_v$ in the global packet $\Pi(\tau_1,\tau_2)$ we are going to construct will have this lowest weight representation as its archimedean component $\Pi_\infty$. If $k_1=k_2$, these considerations remain true except that $\mathcal{E}(l,l')$ will be a limit of discrete series representation.
Now $\Pi_\infty$ is known to be the \emph{non-generic} member $\Pi^{\rm ng}_\infty$ of the archimedean packet. Therefore, in order to satisfy the parity condition coming from Arthur's multiplicity formula, we require an \emph{odd} number of (finite) primes $p$ such that $\Pi_p$ is non-generic. Under our assumption that $N_i$ is squarefree, the local component $\tau_{i,p}$ is square-integrable if and only if $p|N_i$. Hence, the parity condition can be satisfied if and only if $M:=\gcd(N_1,N_2)>1$. We will make this assumption.
It is well known (and easy to verify) that, for $p|N_i$, the local component $\tau_{i,p}$ is an unramified twist of the Steinberg representation. More precisely, if the Atkin-Lehner eigenvalue of $f_i$ at $p$ is $-1$, then $\tau_{i,p}={\rm St}_{{\rm GL}(2)}$, and otherwise $\tau_{i,p}=\xi{\rm St}_{{\rm GL}(2)}$, where $\xi$ is the non-trivial, quadratic, unramified character of ${\mathbb Q}_p^\times$. The local packets for places $p|M$ can now be read off table (\ref{padicpacketseq}). For places $p\nmid M$ but $p|N$, where $N={\rm lcm}(N_1,N_2)$, the local packets have one element and can be read off \cite[Table A.7]{NF}. The following table summarizes all possibilities of local packets for $p|N$. The character $\sigma$ in the table is quadratic and unramified, but allowed to be trivial. Since we would like to construct modular forms with respect to $\Gamma_{0,p}(N)$, we have indicated in the last column the dimension of fixed vectors under the local Siegel congruence subgroup $\Gamma_{0,p}(p)$ defined in (\ref{Gamma0localdefeq}). This information about fixed vectors is taken from \cite[Table A.15]{NF}; note that Va$^*$, being a supercuspidal representation, has no Iwahori fixed vectors.
\begin{equation}\label{padicpacketssquarefreeeq}
\begin{array}{cclccc}
\tau_1&\tau_2&\multicolumn{2}{c}{\Pi(\tau_1,\tau_2)}&\text{type}&\dim\\
\toprule
\sigma{\rm St}_{{\rm GL}(2)}&\xi\sigma{\rm St}_{{\rm GL}(2)}&\Pi^{\rm gen}&\delta([\xi,\nu\xi],\nu^{-1/2}\sigma)&{\rm Va}&0\\
&&\Pi^{\rm ng}&\delta^*([\xi,\nu\xi],\nu^{-1/2}\sigma)&{\rm Va}^*&0\\
\sigma{\rm St}_{{\rm GL}(2)}&\sigma{\rm St}_{{\rm GL}(2)}&\Pi^{\rm gen}&\tau(S,\nu^{-1/2}\sigma)&{\rm VIa}&1\\
&&\Pi^{\rm ng}&\tau(T,\nu^{-1/2}\sigma)&{\rm VIb}&1\\
\sigma{\rm St}_{{\rm GL}(2)}&\chi\times\chi^{-1}\;\text{(unram.)}&\multicolumn{2}{c}{ \qquad \quad \sigma\chi^{-1}{\rm St}_{{\rm GL}(2)}\rtimes\chi}&{\rm IIa}&1
\end{array}
\end{equation}
We see from the last column that, in order to construct modular forms with respect to $\Gamma_0(N)$, we need to completely avoid the packet $\{{\rm Va},{\rm Va}^*\}$. In other words, the Atkin-Lehner eigenvalues of $f_1$ and $f_2$ need to coincide for all $p|M$, an assumption we will make from now on.
Under this assumption we have either a VIa or VIb type representation at places $p|M$, and since either one of these representations contains a $\Gamma_{0,p}(p)$ fixed vector, we can make an arbitrary choice. As pointed out above, the only constraint is that the non-generic VIb has to appear an odd number of times. By the general procedure outlined in Sect.\ \ref{siegelfromrepsec} of constructing vector-valued modular forms from automorphic representations, we now obtain the following result. Even though it is not necessary for our applications further below, we have included a statement about Atkin-Lehner eigenvalues for completeness; see \cite[Sect.\ 3.2]{sch} for the definition of Atkin-Lehner involutions in the case of Siegel modular forms.
\begin{proposition}\label{classicalyoshidaprop}
Let $k_1$ and $k_2$ be even, positive integers with $k_1\geq k_2$. Let $N_1$, $N_2$ be two positive, squarefree integers such that $M = \gcd(N_1, N_2)>1$. Let $f$ be a classical newform of weight $k_1$ and level $N_1$ and $g$ be a classical newform of weight $k_2$ and level $N_2$, such that $f$ and $g$ are not multiples of each other. Assume that for all primes $p$ dividing $M$ the Atkin-Lehner eigenvalues of $f$ and $g$ coincide. Put $N = \mathrm{lcm}(N_1, N_2)$. Then for any divisor $M_1$ of $M$ with an \emph{odd} number of prime factors, there exists a non-zero holomorphic Siegel cusp form $F_{f,g} = F_{f,g;M_1}$ with the following properties.
\begin{enumerate}
\item $F_{f,g}$ is a modular form with respect to $\Gamma_0(N)$ of type $\rho_{l,l'}$ (see (\ref{Frhollpeq})), where $(l,l')$ is as in (\ref{minKk1k2eq}).
\item $F_{f,g}$ is an eigenfunction of the local Hecke algebra at all places $p\nmid N$, and generates an irreducible cuspidal representation $\Pi_{f,g}$ of ${\rm GSp}_4({\mathbb A})$.
\item $F_{f,g}$ is an eigenfunction of the operator $U(p)$ for all $p|N$.
\item For $p|N$, let $\epsilon_p$ be the Atkin-Lehner eigenvalue of $F_{f,g}$ at $p$, and let $\delta_p$ be the Atkin-Lehner eigenvalue of $f$ (if $p|N_1$) or $g$ (if $p|N_2$). Then, for all $p|N$,
$$
\epsilon_p=\begin{cases}
\delta_p&\text{if }p\nmid M_1,\\
-\delta_p&\text{if }p|M_1.
\end{cases}
$$
\item There is an equality of (complete) Langlands $L$-functions
$$
L(s,\Pi_{f,g}) = L(s,\pi_f)L(s,\pi_g),
$$
where $\pi_f$ and $\pi_g$ are the cuspidal representations of ${\rm GL}_2({\mathbb A})$ attached to $f$ and $g$.
\item Let $D$ be the definite quaternion algebra over ${\mathbb Q}$ ramified exactly at (infinity and) the primes dividing $M_1$. Let $\pi'_f$ (resp.\ $\pi'_g$) be the Jacquet-Langlands transfer of $\pi_f$ (resp.\ $\pi_g$) to $D^\times_{\mathbb A}$. Then $\Pi_{f,g}$ is the global theta lift from $(D^\times_{\mathbb A} \times D^\times_{\mathbb A})/{\mathbb A}^\times\cong{\rm GSO}(D_{\mathbb A}) $ to ${\rm GSp}_4({\mathbb A})$ of the representation $\pi'_f \boxtimes \pi'_g$.
\end{enumerate}
\end{proposition}
\begin{proof}
All statements except iii) and iv) follow from the construction explained in Sect.\ \ref{yoshidarepsec}. To prove iii), note that the operator $U(p)$ defined in (\ref{upaction}) corresponds to a certain element in the local Hecke algebra at $p$ consisting of left and right $\Gamma_{0,p}(p)$ invariant functions; see for example the appendix to \cite{boch-up}. Since, by (\ref{padicpacketssquarefreeeq}), the local space of $\Gamma_{0,p}(p)$ fixed vectors is one-dimensional in each case, this Hecke algebra acts via scalars on the one-dimensional spaces. In particular, $F_{f,g}$ is a $U(p)$ eigenvector. Finally, iv) can be deduced from (\ref{padicpacketssquarefreeeq}) and the Atkin-Lehner eigenvalues given in \cite[Table A.15]{NF}.
\end{proof}
\begin{remark}
In our application below we will set $k_1=2k$ for a positive integer $k$ and $k_2=2$. In this case $F_{f,g}\in S_{k+1}^{(2)}(N)$.
\end{remark}
\begin{remark}
The cusp forms $F_{f,g} = F_{f,g;M_1}$ constructed in the proposition are known as Yoshida lifts. The theory was initiated in \cite{yosh1980} using a ``semi-classical'' language. The non-vanishing problem for Yoshida's construction was solved in \cite{bocsch1991} for the scalar-valued case, and in \cite{bocsch1997} for the vector-valued case. While the Siegel cusp forms constructed in Proposition \ref{classicalyoshidaprop} and in \cite{bocsch1997} are the same, the representation theoretic approach is slightly better suited for our purposes. One reason is that the $F_{f,g}$ from Proposition \ref{classicalyoshidaprop} automatically generate an irreducible, cuspidal representation\footnote{R.\ Schulze-Pillot has pointed out to the authors that it can be shown using results of Moeglin \cite{Moeglin1997} that the Yoshida liftings of \cite{bocsch1997} indeed generate an irreducible cuspidal representation.} of ${\rm GSp}_4({\mathbb A})$.
\end{remark}
\begin{remark}
In \cite{yosh1980} Yoshida also considers a construction of Siegel modular forms from Hilbert modular forms. For a thorough representation-theoretic treatment of this lifting, see \cite{rob2001} and \cite{robjjl}. The local data given in \cite{robjjl} shows that the resulting modular forms cannot be with respect to a congruence subgroup $\Gamma^{(2)}_0(N)$ with square-free $N$. The same is true for the imaginary-quadratic version of this lifting considered in \cite{harsoutay}, since the non-archimedean situation is identical. Since our Theorem \ref{th:nonvanfouriersiegel} is for square-free levels only, it does not apply to these kinds of Yoshida liftings.
\end{remark}
\begin{remark}
For a positive integer $N$ let $\Gamma^{\rm para}(N)$ be the paramodular group of level $N$, as defined in \cite{robjjl}. It is not possible to construct holomorphic Yoshida lifts with respect to $\Gamma^{\rm para}(N)$, for any $N$. The reason is that, as pointed out above, holomorphy forces at least one of the finite components in $\Pi=\otimes\Pi_v$ to be one of the non-generic representations occurring in table (\ref{padicpacketseq}). By Theorem 3.4.3 of \cite{NF}, these non-generic representations have no paramodular vectors.
\end{remark}
\section{Bessel periods and $L$-values}\label{besselsec}
\subsection{Bessel periods}\label{s:besselperiod}\nopagebreak
Let $S = \begin{pmatrix}
a & b/2\\
b/2 & c\\
\end{pmatrix} \in M_2({\mathbb Q})$ be a symmetric matrix. Put $d = 4ac-b^2$ and define the element
$$
\xi = \xi_S = \begin{pmatrix}
b/2 & c\\
-a & -b/2\\
\end{pmatrix}.
$$
Note that $$\xi^2 = \begin{pmatrix} -\frac{d}{4} &\\&-\frac{d}{4} \end{pmatrix}.$$ Let $K$ denote the subfield ${\mathbb Q}(\sqrt{-d})$ of ${\mathbb C}$.
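For concreteness, the identity $\xi^2=-\tfrac{d}{4}\,1_2$ can be verified symbolically for a generic symmetric $S$; the following short sketch (Python with sympy, included purely as an illustration) does exactly that.
\begin{verbatim}
import sympy as sp

a, b, c = sp.symbols('a b c')
d = 4*a*c - b**2                              # d = 4ac - b^2, as above
xi = sp.Matrix([[b/2, c], [-a, -b/2]])        # xi = xi_S

# xi^2 + (d/4)*Identity vanishes identically in a, b, c.
assert sp.simplify(xi*xi + (d/4)*sp.eye(2)) == sp.zeros(2, 2)
\end{verbatim}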
We always identify ${\mathbb Q}(\xi)$ with $K$ via
\begin{equation}\label{e:L}
{\mathbb Q}(\xi)\ni x + y\xi \mapsto x +
y\frac{\sqrt{-d}}{2} \in K, \ x,y\in {\mathbb Q}.
\end{equation}
We define a subgroup $T =T_S$ of ${\rm GL}(2)$ by
\begin{equation}
T({\mathbb Q}) = \{g \in {\rm GL}(2,{\mathbb Q})\;|\; \T{g}Sg =\det(g)S\}.
\end{equation}
It is not hard to verify that $T({\mathbb Q}) = {\mathbb Q}(\xi)^\times$. We identify
$T({\mathbb Q})$ with $K^\times$ via~\eqref{e:L}.
We can consider $T$ as a subgroup of $G$ via
\begin{equation}\label{embedding}
T \ni g \longmapsto
\begin{pmatrix}
g & 0\\
0 & \det(g)\ \T{g^{-1}}
\end{pmatrix} \in G.
\end{equation}
Let us denote by $U$ the subgroup of $G$ defined by
$$
U = \{u(X) =
\begin{pmatrix}
1_2 & X\\
0 & 1_2\\
\end{pmatrix}\;|\;\T{X} = X\}.
$$
Let $R$ be the subgroup of $G$ defined by $R=TU$.
Recall that ${\mathbb A}$ denotes the ring of adeles of ${\mathbb Q}$. Let $\psi = \prod_v\psi_v$ be a character of ${\mathbb A}$ such that
\begin{itemize}
\item The conductor of $\psi_p$ is ${\mathbb Z}_p$ for all (finite) primes $p$,
\item $\psi_\infty(x) = e(x),$ for $x \in {\mathbb R}$,
\item $\psi|_{\mathbb Q} =1.$
\end{itemize} We define the
character $\theta = \theta_S$ on $U({\mathbb A})$ by $\theta(u(X))=
\psi(\mathrm{Tr}(SX))$. Let $\chi$ be a character of $T({\mathbb A}) / T({\mathbb Q})$ such
that $\chi |_{{\mathbb A}^\times}= 1$. Via~\eqref{e:L} we can think of
$\chi$ as a character of $K^\times({\mathbb A})/K^\times$ such
that $\chi |_{{\mathbb A}^\times} = 1$. Denote by $\chi \otimes \theta$ the
character of $R({\mathbb A})$ defined by $(\chi \otimes \theta)(tu) =
\chi(t)\theta(u)$ for $t\in
T({\mathbb A})$ and $u\in U({\mathbb A}).$
Let $\mathcal{A}_0(G({\mathbb Q})\backslash G({\mathbb A}), 1)$ denote the space of cusp forms on $G({\mathbb A})$ with trivial central character; thus any $\Phi \in \mathcal{A}_0(G({\mathbb Q})\backslash G({\mathbb A}), 1)$ can be written as a finite sum of vectors in irreducible cuspidal representations of $G({\mathbb A})$.
For $\Phi \in \mathcal{A}_0(G({\mathbb Q})\backslash G({\mathbb A}), 1)$, we define the Bessel period $B(\Phi) = B_{S, \chi, \psi}(\Phi) $
by
\begin{equation}\label{defbessel}
B(\Phi) =
\int_{{\mathbb A}^\times R({\mathbb Q})\backslash R({\mathbb A})} (\chi \otimes \theta)(r)^{-1}\Phi(r)\,dr.
\end{equation}
\par
\subsection{Relation with Fourier coefficients}
Now, let $d$ be a positive integer such that $-d$ is
the discriminant of the imaginary quadratic field
${\mathbb Q}(\sqrt{-d})$ and define
\begin{equation}\label{defsd}
S = S(-d) = \begin{cases} \begin{pmatrix}
\frac{d}{4} & 0\\
0 & 1\\\end{pmatrix} & \text{ if } d\equiv 0\pmod{4}, \\[4ex]
\begin{pmatrix} \frac{1+d}{4} & \frac12\\\frac12 & 1\\
\end{pmatrix} & \text{ if } d\equiv 3\pmod{4}.\end{cases}
\end{equation}
\par
Let $K = {\mathbb Q}(\sqrt{-d})$ and define the group $R$ as in the previous subsection. Let $N$ be a positive integer and let $F$ be an element of $S_k^{(2)}(N)$ with the Fourier expansion \begin{equation}\label{siegelfourierexpansion2}F(Z)
=\sum_{T \in \Lambda_2} a(F, T) e(\mathrm{Tr}(TZ)).\end{equation}
We define the adelization $\Phi_F$ of $F$ to be the function on $G({\mathbb A})$ given by
\begin{equation}\label{adelization}
\Phi_F(\gamma h_\infty k_0) =
\mu_2(h_\infty)^k \det (J(h_\infty,
i_2))^{-k}F(h_\infty\langle i_2\rangle),
\end{equation}
where $\gamma \in G({\mathbb Q})$, $h_\infty \in G({\mathbb R})^+$ and
$$
k_0 \in \prod_{ p< \infty} \Gamma_{0,p}(N),
$$
where the group
$
\Gamma_{0,p}(N)$ defined in~\eqref{Gamma0localdefeq}
is the local analogue of $\Gamma_0(N)$.
It is not hard to see that $\Phi_F \in \mathcal{A}_0(G({\mathbb Q})\backslash G({\mathbb A}), 1)$. For a symmetric matrix $T\in M_2^{\rm sym}({\mathbb Q})$, define
\begin{equation}\label{fouriercoefficientdefeq}
(\Phi_F)_{T}(g)=\int\limits_{M_2^{\rm sym}({\mathbb Q})\backslash M_2^{\rm sym}({\mathbb A})}\psi^{-1}({\rm tr}(TX))\Phi_F(\mat{1}{X}{}{1}g)\,dX,\qquad g\in G({\mathbb A}).
\end{equation}
\begin{lemma}\label{globalfourierlemma}
Let $F\in S_k^{(2)}(N)$ and $\Phi_F$ as in (\ref{adelization}). Then, for all $g\in G({\mathbb R})$,
\begin{equation}\label{globalfourierlemmaeq1}
(\Phi_F)_T(g)= \begin{cases}\mu_2(g)^k\,\det J(g,i_2)^{-k}a(F,T)e({\rm tr}(TZ)) & \text{ if } g\in G({\mathbb R})^+ , \\ 0 & \text{ if } g\in G({\mathbb R})^-, \end{cases}
\end{equation}
where $Z=g\langle i_2\rangle$.
\end{lemma}
\begin{proof}This is a standard calculation.\end{proof}
\begin{remark}A version of this lemma holds more generally for Siegel modular forms with respect to any congruence subgroup.
\end{remark}
Recall that ${\mathbb C}l_K$ denotes the ideal
class group of $K$. Let $(t_c)$, $c\in {\mathbb C}l_K$, be coset
representatives such that
\begin{equation}\label{e:tcosetca1}
T({\mathbb A}) = \bigsqcup_{c}\,t_cT({\mathbb Q})T({\mathbb R})\prod_{p<\infty}
(T({\mathbb Q}_p) \cap {\rm GL}_2({\mathbb Z}_p)),
\end{equation}
with $t_c \in \prod_{p<\infty} T({\mathbb Q}_p)$. We can write
$$
t_c = \gamma_{c}m_{c}\kappa_{c}
$$
with $\gamma_{c} \in {\rm GL}_2({\mathbb Q})$, $m_{c} \in {\rm GL}^+_2({\mathbb R})$, and
$\kappa_{c}\in \prod_{p<\infty} {\rm GL}_2({\mathbb Z}_p).$ By $(\gamma_{c})_f$ we denote the finite part of $\gamma_{c}$ when considered as an element of ${\rm GL}_2({\mathbb A})$, thus we have the equality $(\gamma_{c})_f=\gamma_{c}m_{c},$ as elements of ${\rm GL}_2({\mathbb A})$.
The matrices
$$
S_{c} = \det(\gamma_{c})^{-1}\ (\T{\gamma_{c}})S\gamma_{c}
$$
have discriminant $-d$ and form a set of representatives of the
${\rm SL}_2({\mathbb Z})$-equivalence classes of primitive semi-integral positive
definite matrices of discriminant $-d$.
Choose $\chi$ to be a character of $T({\mathbb A})/T({\mathbb Q})T({\mathbb R})\prod_{p<\infty}
(T({\mathbb Q}_p) \cap {\rm GL}_2({\mathbb Z}_p))$; we identify $\chi$ with an ideal class group character of
$K$.
\begin{proposition}\label{classicalbesselperiodprop}Let $F \in S_k^{(2)}(N)$ and $S$, $\chi$, $\psi$ be as above. Then the Bessel period $B(\Phi_F)$ defined by~\eqref{defbessel} satisfies
\begin{equation}
B({\Phi_F})= r \cdot e^{-2 \pi {\rm tr}(S)} \sum_{c \in {\mathbb C}l_K} \chi(t_c)^{-1}a(F,S_c)
\end{equation}
where the non-zero constant $r$ depends only on the normalization used for the Haar measure on $ R({\mathbb A})$.
\end{proposition}
\begin{proof}Note that
$$
B({\Phi_F})=\int_{{\mathbb A}^\times T({\mathbb Q}) \backslash T({\mathbb A})}(\Phi_F)_S(t)\chi^{-1}(t)\,dt.
$$
By the coset decomposition~\eqref{e:tcosetca1}, we get (up to a non-zero constant coming from the Haar measure)
$$
B({\Phi_F})= \sum_{c \in {\mathbb C}l_K} \chi(t_c)^{-1}\int_{{\mathbb R}^\times \backslash T({\mathbb R})}(\Phi_F)_S(t_ct_\infty)\,dt_\infty.
$$
Let us compute the inner integral. Note that $T({\mathbb R}) = {\mathbb C}^\times$. For $g \in {\rm GL}_2$, let $\widetilde{g} = \begin{pmatrix}
g & 0\\
0 &\!\det(g)\, \T{g^{-1}}
\end{pmatrix}$. We have
\begin{align*}
\int_{{\mathbb R}^\times \backslash T({\mathbb R})}(\Phi_F)_S(t_ct_\infty)\,dt_\infty
&=\int_{{\mathbb R}^\times \backslash T({\mathbb R})}(\Phi_F)_S(\gamma_cm_ct_\infty)\,dt_\infty\\
&=\int_{{\mathbb R}^\times \backslash T({\mathbb R})}(\Phi_F)_S(t_\infty \widetilde{(\gamma_c)_f})\,dt_\infty.
\end{align*}
Put
$$
G(Z) = F(\gamma_c^{-1} Z\,\T{\gamma_c}^{-1} \det(\gamma_c)) = F(\widetilde{\gamma_c^{-1}}\langle Z \rangle).
$$
It is not hard to check that $G$ is a Siegel cusp form on some congruence subgroup of $\mathrm{Sp}_4({\mathbb Z})$. We claim that $\Phi_F(h \widetilde{(\gamma_c)_f}) = \Phi_G(h)$ for $h \in G({\mathbb A})$. By strong approximation, it suffices to prove this for $h \in G({\mathbb R})^+$. This follows from the following calculation (in the first step we use that $h$ and $\widetilde{(\gamma_c)_f}$ commute, being supported at disjoint sets of places, together with the relation $(\gamma_c)_f=\gamma_cm_c$ and the left $G({\mathbb Q})$-invariance of $\Phi_F$),
\begin{align*}\Phi_F(h \widetilde{(\gamma_c)_f}) &= \Phi_F(\widetilde{m_c}h)\\&= \mu_2(h)^k \det J(h,
i_2)^{-k}F(\widetilde{m_c}h\langle i_2\rangle) \\ &= \mu_2(h)^k\det J(h,
i_2)^{-k}F (\widetilde{\gamma_c^{-1}}h\langle i_2\rangle) \\ &=\mu_2(h)^k \det J(h,
i_2)^{-k}G (h\langle i_2\rangle) \\&= \Phi_G(h).
\end{align*}
Thus we conclude
$$ \int_{{\mathbb R}^\times \backslash T({\mathbb R})}(\Phi_F)_S(t_ct_\infty ) dt_\infty=\int_{{\mathbb R}^\times \backslash T({\mathbb R})}(\Phi_G)_S(t_\infty ) dt_\infty.$$ The desired result now follows from Lemma~\ref{globalfourierlemma} and the simple observation that $a(G, S) = a(F,S_c)$.
\end{proof}
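Proposition~\ref{classicalbesselperiodprop} expresses the Bessel period, up to the non-zero constant $r\,e^{-2\pi\,{\rm tr}(S)}$, as a character-twisted sum over ${\mathbb C}l_K$. The following minimal sketch (Python; the class group is modelled here as ${\mathbb Z}/h{\mathbb Z}$ and the values $a(F,S_c)$ are arbitrary illustrative numbers) records the finite Fourier inversion underlying the argument of the next subsection: if the twisted sum vanishes for every character, then every coefficient $a(F,S_c)$ vanishes.
\begin{verbatim}
import cmath

def chi(t, c, h):
    # Characters of Z/hZ, our illustrative model of Cl_K.
    return cmath.exp(2j * cmath.pi * t * c / h)

def twisted_sum(coeffs, t):
    # sum_c chi_t(c)^{-1} a(F, S_c), mirroring the sum in the proposition.
    h = len(coeffs)
    return sum(chi(t, c, h).conjugate() * coeffs[c] for c in range(h))

coeffs = [0.0, 2.5, -1.0, 0.0]   # hypothetical values of a(F, S_c)
h = len(coeffs)
# Fourier inversion: a(F, S_c) = (1/h) sum_t chi_t(c) * twisted_sum(coeffs, t),
# so the coefficients are recovered from the twisted sums.
recovered = [sum(chi(t, c, h) * twisted_sum(coeffs, t) for t in range(h)) / h
             for c in range(h)]
assert all(abs(recovered[c] - coeffs[c]) < 1e-9 for c in range(h))
\end{verbatim}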
\subsection{Simultaneous non-vanishing of $L$-values}
We will now prove Theorem~\ref{th:simulnonvan}. Let $f$, $g$ be as in the statement of the theorem. In the case that $f$ and $g$ are multiples of each other, the theorem is known; indeed a stronger version easily follows from recent work of Munshi~\cite[Corollary 1]{Munshi}. So we may assume that $f$ and $g$ are not multiples of each other.
Let $M_1$ be any divisor of $M$ with an odd number of prime factors. Let $D$ be the definite quaternion algebra over ${\mathbb Q}$ ramified exactly at (infinity and) the primes dividing $M_1$. Let $\pi'_f$ (resp.\ $\pi'_g$) be the Jacquet-Langlands transfer of $\pi_f$ (resp.\ $\pi_g$) to $D^\times_{\mathbb A}$. Using Proposition~\ref{classicalyoshidaprop}, we construct a non-zero Siegel cusp form $F_{f,g} \in S_{k+1}^{(2)}(N)$ whose adelization generates an irreducible cuspidal representation $\Pi_{f,g}$ of $G({\mathbb A})$ such that $\Pi_{f,g}$ is the global theta lift from $(D^\times_{\mathbb A} \times D^\times_{\mathbb A})/{\mathbb A}^\times\cong{\rm GSO}(D_{\mathbb A}) $ to ${\rm GSp}_4({\mathbb A})$ of the representation $\pi'_f \boxtimes \pi'_g$.
By assertion iii) of Proposition~\ref{classicalyoshidaprop}, $F_{f,g}$ is an eigenfunction of the operator $U(p)$ for all $p|N$. So all the required conditions for Theorem~\ref{th:nonvanfouriersiegel} are satisfied. Let $d$ be an odd squarefree integer such that there exists $T \in \Lambda_2$ with $d=-{\rm disc}(T)$ and $a(F_{f,g},T)\ne 0$. Put $K={\mathbb Q}(\sqrt{-d})$. In light of Theorem~\ref{th:nonvanfouriersiegel}, it is clear that Theorem~\ref{th:simulnonvan} will be proved if we can show that for any such $d$ there exists a character $\chi \in \widehat{{\mathbb C}l_K}$ such that $L(\frac12, \pi_f \times \theta_\chi) \ne 0$ and $L(\frac12, \pi_g \times \theta_\chi) \ne 0$.
The key result which enables us to do this is the following theorem of Prasad and Takloo-Bighash as applied to our setup. We refer the reader to~\cite[Theorem 3]{prasadbighash} for the full statement. Note that by the adjointness property (proved in a much more general setting in Proposition 3.1 of \cite{PR}), for any automorphic representation $\pi$ of $D^\times({\mathbb A})$ we have the equality $$L(\frac12, \pi \times \theta_\chi) = L(\frac12, {\rm BC}_{K}(\pi) \times \chi),$$ where ${\rm BC}$ denotes base-change.
\begin{thm}[Prasad -- Takloo-Bighash]Let $D$ be a quaternion algebra and $\pi_1$, $\pi_2$ be two automorphic representations of $D^\times({\mathbb A})$ with trivial central characters. Consider $\pi_1\boxtimes \pi_2$ as an automorphic representation on the group ${\rm GSO}(D_{\mathbb A}) = (D^\times_{\mathbb A} \times D^\times_{\mathbb A})/{\mathbb A}^\times $ and let $\Pi$ be its theta lift to $G({\mathbb A})$. Let $d$ be an integer such that $-d$ is the discriminant of the imaginary quadratic field $K={\mathbb Q}(\sqrt{-d})$ and define $S$ by~\eqref{defsd}. Let the additive character $\psi$ and the groups $T$, $R$ be defined as in Section~\ref{s:besselperiod} and let $\chi$ be a character on $T({\mathbb A})/T({\mathbb Q})$ such that $\chi |_{{\mathbb A}^\times}= 1$.
If the linear functional on $\Pi$ given by the period integral $$\Phi \mapsto B_{S, \chi, \psi}(\Phi)$$ as defined in~\eqref{defbessel} is not identically zero, then $L(\frac12, \pi_1 \times \theta_{\chi^{-1}}) \ne 0$ and $L(\frac12, \pi_2 \times \theta_{\chi^{-1}}) \ne 0$.
\end{thm}
\begin{remark}The proof of the above theorem involves pulling back the Bessel period via theta-correspondence to ${\rm GSO}(D_{\mathbb A})$. This equals a product of two toric periods on $\pi_1$ and $\pi_2$, which by Waldspurger's formula equals the product of central $L$-values. Takloo-Bighash has communicated to one of the authors that this procedure is originally due to Furusawa. \end{remark}
Now, since there exists $T \in \Lambda_2$ with $d=-{\rm disc}(T)$ and $a(F_{f,g},T)\ne 0$, it follows that we can pick a character $\chi \in \widehat{{\mathbb C}l_K}$ such that $$\sum_{c \in {\mathbb C}l_K} \chi(t_c)^{-1}a(F_{f,g},S_c) \ne 0.$$ (Indeed, if this sum vanished for every $\chi \in \widehat{{\mathbb C}l_K}$, then by orthogonality of characters all the coefficients $a(F_{f,g},S_c)$ would vanish; but $T$ is ${\rm SL}_2({\mathbb Z})$-equivalent to some $S_c$, so that $a(F_{f,g},S_c)=a(F_{f,g},T)\ne 0$ by~\eqref{fourierinvariance}.) For this choice of $\chi$, the Bessel period $B(\Phi_{F_{f,g}})$ is non-zero by Proposition~\ref{classicalbesselperiodprop}. Since $\Phi_{F_{f,g}}$ is a vector in $\Pi_{f,g}$, it follows that the linear functional on $\Pi_{f,g}$ given by the period integral $$\Phi \mapsto B_{S, \chi, \psi}(\Phi)$$ is not identically zero. So, by the theorem of Prasad and Takloo-Bighash stated above, $L(\frac12, \pi_f \times \theta_{\chi^{-1}}) = L(\frac12, \pi_f' \times \theta_{\chi^{-1}}) \ne 0$ and $L(\frac12, \pi_g \times \theta_{\chi^{-1}}) = L(\frac12, \pi_g' \times \theta_{\chi^{-1}}) \ne 0$. The proof of Theorem~\ref{th:simulnonvan} is complete.
\end{document}
\begin{document}
\makefrontmatter
\chapter{Introduction\label{ch:intro}}
This chapter introduces the main topics covered in this thesis, and
the major tools and ideas that link these topics. The discussion is
philosophical and purely motivational, and should not be considered
mathematically rigorous or exhaustive. Chapters \ref{ch:npdr},
\ref{ch:rl}, \ref{ch:ss}, and \ref{ch:sltr} contain their own relevant
introductory discussions, descriptions of prior work, and literature
reviews.
An inverse problem is a general question of the following form:
\begin{center}
\framebox[\linewidth]{\parbox{.92\linewidth}{
You are given observed measurements $Y=F(X)$, where $F : D \to R$
is a mapping from domain $D$ to range $R$. Provide an estimate of
a physical object or underlying set of parameters~$X$.
}}
\end{center}
\;
Inverse problems encompass a variety of types of questions in
many fields, from geophysics and medical imaging to
computer graphics. In a high level sense, all of the following
questions are inverse problems:
\begin{itemize}
\item Given the output of a CT scanner (sinogram), reconstruct the 3D
volumetric image of the patient's chest.
\item Given noisy and limited satellite observations of the magnetic
field over Australia, estimate the magnetic field everywhere on the
continent.
\item Given the lightcurve of a periodic dwarf star, extract the
significant slowly time-varying modes of oscillation. Determine
which modes are associated with the rotation of the star, which are
caused by transiting giant planets, and which are caused by one or
more earth-sized planets.
\item Given a spline curve corrupted by additive Gaussian noise,
identify its order, knot positions, and the parameters at each knot.
\item Given a social network graph with a variety of both known and
unknown parameters at each node, and given a sample of
known zip codes on a subset of the graph,
estimate the zip codes of the rest of the nodes.
\end{itemize}
Of the questions above, only the first two are traditionally
considered to be inverse problems because they have a clear
physical system $F(\cdot)$ mapping inputs to outputs, and the domain
and range are also clear from the physics of the problem.
Nevertheless, all of these problems can
be formulated as general inverse problems: a domain and range are
given, as are observations, and the simplest, most accurate, or
most statistically likely parameters to match the observations are
required.
This basic premise encourages the ``borrowing'' and mixing
of ideas from many fields of mathematics and engineering: graph
theory, statistics, functional analysis, differential equations, and
geometry.
This thesis focuses on finding representations of the data $X$, or
of the underlying transform $F(\cdot)$, that lead to either simpler or
more robust ways to solve particular inverse problems, or that provide
a new perspective for existing solutions.
We especially focus on finding representations within domains $D$
which are important in fields such as medical imaging and geoscience,
or are becoming important in the signal processing community due to
the recent explosion of ``high dimensional'' data sets. These domains are:
\begin{itemize}
\item General compact Riemannian manifolds.
\item Large graphs (especially nearest neighbor graphs from point samples of
Riemannian manifolds embedded in $\bbR^p$).
\item The Sphere $S^2$.
\end{itemize}
A short technical description of Riemannian manifolds, and a set of
references, is given in App.~\ref{app:diffgeom}. A description of
(Nearest Neighbor) graphs can be found in \S\ref{sec:prelim}.
The central tool that brings together all of the problems,
constructions, and insights in our work is the Laplacian.
In a way, the thrust of this thesis is the importance and flexibility
of the Laplacian as a data analysis and problem solving tool. Roughly
speaking, the Laplacian is defined as follows:
\begin{center}
\framebox[\linewidth]{\parbox{.9\linewidth}{
Given a domain $D$, with measure $\mu$ and knowing for any
point $x \in D$ its neighborhood $N(x)$, the Laplacian $L$ is a
linear operator that, for any well defined
function $f(x)$, subtracts some weighted average of $f(\cdot)$ in
$N(x)$ from its value at~$x$:
$$[Lf](x) = f(x) - \int_{x' \in N(x)} w(x,x') f(x') d\mu(x'),$$
where the weights are determined by the intrinsic relationship
(e.g., distance) between $x$ and $x'$ on $D$.
}}
\end{center}
\;
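As a concrete, purely illustrative instance of this definition, the following Python sketch applies a (random-walk normalized) weighted graph Laplacian to a function on the nodes of a small graph; the weight matrix used here is an arbitrary example, not data from later chapters.
\begin{verbatim}
import numpy as np

# Symmetric weight matrix of a small graph (illustrative values only).
W = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])

# Row-normalize so each row of P gives a weighted average over neighbors.
P = np.diag(1.0 / W.sum(axis=1)) @ W

# [Lf](x) = f(x) - sum_{x'} w(x, x') f(x'):
L = np.eye(len(W)) - P

f = np.array([0.0, 1.0, 2.0, 3.0])   # a function on the nodes
print(L @ f)
\end{verbatim}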
The Laplacian is a powerful tool because it synthesizes local
information at each point $x$ in domain $D$ into global information
about the domain; and this in turn can be used to analyze functions
and other operators on this domain. This statement is formalized in
the classical language of Fourier analysis and the recent work of
Mikhail Belkin, Ronald Coifman, Peter Jones, and~Amit~Singer~
\cite{Belkin2008th,Coifman2006,Jones2008,Singer2011}:
the eigenfunctions of the Laplacian are a powerful tool for
both the analysis of points in $D$ and functions defined on $D$.
Throughout this thesis, we use and study a variety of Laplacians, each
for a slightly different purpose:
\begin{itemize}
\item The weighted graph Laplacian: its ``averaging'' component in
Chapter~\ref{ch:npdr} and its regularized inverse in Chapters~
\ref{ch:rl}~and~\ref{ch:rlapp}.
\item The Laplace-Beltrami operator on a compact Riemannian manifold in
Chapters~\ref{ch:rl}~and~\ref{ch:rlapp}.
\item The Fourier transform (projections on the eigenfunctions of the
Laplacian~on~$\bbR$) in Chapters~\ref{ch:ss}~and~
\ref{ch:ssapp}.
\item Spherical harmonics (eigenfunctions of the Laplace-Beltrami
operator~on~$S^2$) and their use in analyzing bandlimited functions
on the sphere in Chapter \ref{ch:sltr}.
\end{itemize}
The definition and properties of the weighted graph Laplacian are
given in \S\ref{sec:ssllimit}. A definition of the Laplacian for a
compact Riemannian manifold is given in App.~\ref{app:diffgeom}, with
specific definitions for the real line and the sphere in
App.~\ref{app:fourier}. This appendix also contains a
comprehensive discussion of Fourier analysis on Riemannian manifolds,
and on the Sphere in particular.
Though the definition of the Laplacian differs depending on the
underlying domain $D$, all Laplacians are intricately linked: the
Laplacian on a domain $D$ converges to that on $D'$ as
the former domain converges to the latter, or when one is a special
case of the other. We study these relationships when they are relevant
(e.g.,~in~\S\ref{sec:ssllimit}).
The term Nonlinear in the title of this thesis refers to the ways in
which our constructions use properties of the Laplacian, or of its
eigenfunctions, to represent~data:
\begin{itemize}
\item In Chapter \ref{ch:npdr}, the regularized inverse of the graph
  Laplacian is used to denoise graphs (this connection is not immediately
  apparent from the context; see \S\ref{sec:geonpdr} for a discussion).
\item In Chapters \ref{ch:rl} and \ref{ch:rlapp}, the regularized
  inverses of both the graph Laplacian and the Laplace-Beltrami
  operator are analyzed; a special ``trick'' of taking a nonlinear
  logarithm is used both to perform regression and to
  estimate geodesic~distances.
\item In Chapters \ref{ch:ss} and \ref{ch:ssapp}, the Fourier
transform of the Wavelet representation of harmonic signals is used
to motivate a time localized estimate of amplitude and frequency.
This is followed up by a nonlinear reassignment
of~the~Wavelet~representation.
\item In Chapter \ref{ch:sltr}, we construct a multi-scale dictionary
composed of bandlimited Slepian functions on the sphere (which are
in turn constructed via Fourier analysis). This dictionary is then
combined with nonlinear ($\ell_1$) methods for signal~estimation.
\end{itemize}
We hope that the underlying theme of this thesis is now clear: as the
movie \emph{Manhattan} is Woody Allen's ode to the city of New
York, the work herein is a testament to the flexibility and power
of the Laplacian.
Without further ado, we now describe in detail the focus of the
individual chapters of this thesis.
\section{Denoising and Inference on Graphs}
Many unsupervised and semi-supervised learning problems contain
relationships between sample points that can be modeled with a
graph. As a result, the weighted adjacency matrix of a graph, and
the associated graph Laplacian, are often used to solve such
problems. More specifically, the spectrum of the graph Laplacian,
and its regularized inverse, can both be used to determine
relationships between observed data points (such as
``neighborliness'' or ``connectedness''), and to perform regression
when a partial labeling of the data points is available. Chapters
\ref{ch:npdr}, \ref{ch:rl}, and \ref{ch:rlapp} study the inverse
regularized graph Laplacian.
Chapter \ref{ch:npdr} focuses on the unsupervised problem of detecting
``bad'' edges, or bridges, in a graph constructed with erroneous
edges. Such graphs arise, e.g., as nearest neighbor graphs from
high dimensional points sampled under noise. A novel bridge detection
rule is constructed, based on Markov random walks with restarts,
that robustly identifies bridges. The detection rule uses the
regularized inverse of the graph's Laplacian matrix, and its
structure can be analyzed from a geometric point of view under certain
assumptions of the underlying sample points. We compare this
detection rule to past work and show its improved performance
as a preprocessing step in the estimation of geodesic distances on the
underlying graph, a global estimation problem. We also show its superior
performance as a preprocessing tool when solving the random projection
computational tomography inverse problem.
Chapter \ref{ch:rl} studies a closely related problem, that of
performing regression on points sampled from a high dimensional space,
only some of which are labeled. We focus on the common case when the
regression is performed via the nearest neighbor graph of the points,
with ridge and Laplacian regularization.
This common solution approach reduces to a
matrix-vector product, where the matrix is the regularized inverse of
the graph Laplacian, and the vector contains the partially known label
information.
In this chapter, we focus on the geometric aspects of
the problem. First, we prove that in the noiseless, low
regularization case, when the points are sampled from a smooth,
compact, Riemannian manifold, the matrix-vector product converges to a
sampling of the solution to an elliptic PDE. We use the theory of
viscosity solutions to show that in the low regularization case, the
solution of this PDE encodes geodesic distances between labeled points
and unlabeled ones. This geometric PDE framework provides key
insights into the original semisupervised regression problem, and into
the regularized inverse of the graph Laplacian matrix in general.
Chapter \ref{ch:rlapp} follows on the theoretical analysis in Chapter
\ref{ch:rl} by displaying a wide variety of applications for the
Regularized Laplacian PDE framework. The contributions of this
chapter include:
\begin{itemize}
\item A new consistent geodesics distance estimator on Riemannian
manifolds, whose complexity depends only on the number of sample
points $n$, rather than the ambient dimension $p$ or the manifold
dimension $d$.
\item A new multi-class classifier, applicable to high-dimensional
semi-supervised learning problems.
\item New explanations for negative results in the machine learning
literature associated with the graph Laplacian.
\item A new dimensionality reduction algorithm called Viscous ISOMAP.
\item A new and satisfying interpretation for the bridge detection
algorithm constructed in Chapter \ref{ch:npdr}.
\end{itemize}
\section{Instantaneous Time-Frequency Analysis}
Time-frequency analysis in the form of the Fourier transform, the short
time Fourier transform (STFT), and their discrete variants (e.g., the power
spectrum) has long been a standard tool in science, engineering, and
mathematical analysis. Recent advances in time-frequency
reassignment, wherein energies in the magnitude plot of the STFT are
shifted in the time-frequency plane, have found application in
``sharpening'' images of, e.g., STFT magnitude plots, and have been
used to perform ridge detection and other types of time- and
frequency-localized feature detection in time series.
Chapters \ref{ch:ss} and \ref{ch:ssapp} focus on a novel
time-frequency reassignment transform, Synchrosqueezing, which can be
constructed to work ``on top of'' many invertible transforms (e.g.,
the Wavelet transform or the STFT). As Synchrosqueezing is itself an
invertible transform, it can be used to filter and denoise
signals. Most importantly, it can be used to extract
individual components from superpositions of ``quasi-harmonic''
functions: functions that take the form $A(t) e^{i \phi(t)}$, where
$A(t)$ and $d\phi(t)/dt$ are slowly varying.
Chapter \ref{ch:ss} focuses on two aspects of Synchrosqueezing.
First, we develop a fast new numerical implementation of Wavelet-based
Synchrosqueezing. This implementation, and other useful utilities,
have been included in the Synchrosqueezing MATLAB toolbox. Second,
we present a stability theorem, showing that Synchrosqueezing
can extract components from superpositions of quasi-harmonic signals
when the original observations have been corrupted by
bounded perturbations (of the form often encountered
during the pre-processing of signals).
Chapter \ref{ch:ssapp} builds upon the work of the previous
chapter to develop novel applications of Synchrosqueezing. We present
a wide variety of problem domains in which the Synchrosqueezing
transform is a powerful tool for denoising, feature extraction, and
more general scientific analysis. We especially focus on estimation
in inverse problems in medical signal processing and the geosciences.
Contributions include:
\begin{itemize}
\item The extraction of respiratory signals from ECG
(Electrocardiogram) signals.
\item Precise new analyses of paleoclimate simulations (solar
insolation models), individual paleoclimate proxies, and proxy
stacks in the last 2.5 Myr. These results are compared to
Wavelet- and STFT-based analyses, which are less precise and harder to
interpret.
\end{itemize}
\section{Multiscale Dictionaries of Slepian Functions on the Sphere}
Just as audio signals and images are constrained by the physical
processes that generate them, and by the sensors that observe them, so
too are many geophysical and cosmological signals, which reside on the
sphere $S^2$. On the real line and in the plane, both physical
constraints and sampling constraints lead to assumptions of a
bandlimit: that a signal contains zero energy outside some supporting
region in the frequency domain. Similarly, bandlimited signals on the
sphere are zero outside the low-frequency spherical harmonic
components.
In Chapter \ref{ch:sltr}, following up on Claude Shannon's initial
investigations into sampling, we describe Slepian, Landau, and
Pollak's spatial concentration problem on subsets of the real line,
and the resulting Slepian functions. The construction of Slepian
functions has led to many important modern algorithms for the
inversion of bandlimited signals from their samples within an
interval, most notably Thomson's multitaper spectral estimator. We
then study Simons and Dahlen's extension of these results to subsets
of the Sphere, where now the definition of frequency and bandlimit has
been appropriately modified.
Building upon these results, we develop an algorithm for the
construction of dictionary elements that are bandlimited, multiscale,
and localized. Our algorithm is based on a subdivision scheme that
constructs a binary tree from subdivisions of the region of interest
(ROI). We show, via numerous examples, that this dictionary has many
nice properties: it closely overlaps with the most concentrated
Slepian functions on the ROI, and most element pairs have low
coherence.
The focus of this construction is to solve ill-posed inverse problems
in geophysics and cosmology. Though the new dictionary is no longer
composed of purely orthogonal elements like the Slepian basis, it can
be combined with modern inversion techniques that promote sparsity in
the solution, to provide significantly lower residual error after
reconstruction (as compared to classically optimal Slepian inversion
techniques).
We provide additional numerical results showing the solution path that
these techniques take when combined with the multiscale dictionaries,
and their efficacy on a standard model of the Earth's magnetic field,
POMME-4. Finally, we show via randomized trials that the combination
of the multiscale construction and $\ell_1$-based estimation provides
significant improvement, over the current state of the art, in the
inversion of bandlimited white and pink random processes within
subsets of the sphere.
\chapter{Graph Bridge Detection via Random Walks
\chattr{This chapter is based on work in collaboration with
Peter~J.~Ramadge, Department of Electrical Engineering,
Princeton~University. A preliminary version appears in
\cite{Brevdo2010}.}}
\label{ch:npdr}
\section{Introduction}
Many new problems in machine learning and signal processing require
the robust estimation of geodesic distances between nodes of a nearest
neighbors (NN) graph. For example, when the nodes represent points
sampled from a manifold, estimating feature space distances between
these points can be an important step in unsupervised
\cite{Tenenbaum2000} and semi-supervised \cite{Belkin2006} learning.
This problem often reduces to that of having accurate estimates of
each point's neighbors, as described below.
In the simplest approach to estimating geodesic distances,
the NN graph's edges are estimated from either the $k$ nearest
neighbors around each point, or from all of the neighbors
within an ambient (Euclidean) $\delta$-ball around each point. Each
graph edge is then assigned a weight: the ambient distance between its
nodes. A graph shortest path (SP) algorithm, e.g. Dijkstra's
\cite[\S24.3]{Cormen2001}, is then used to estimate geodesic distances
between pairs of points.
When the manifold is sampled with noise, or contains outliers,
bridges (short circuits between distant parts of the manifold) can
appear in the NN graph and this has a catastrophic effect on geodesics
estimation \cite{Balasubramanian2002}.
In this chapter, we develop a new approach for calculating point
``neighborliness'' from the NN graph. This approach allows the robust
removal of bridges from NN graphs of manifolds sampled with noise.
This metric, which we call ``neighbor probability,'' is based on a
Markov Random walk with independent restarts. The bridge decision
rule based on this metric is called the neighbor probability decision
rule (NPDR), and reduces to removing edges from the NN graph whose
neighbor probability is below a threshold. We study some of the NPDR's
geometric properties when the number of samples grow large. We also
compare the efficacy of the NPDR to other decision rules and show its
superior performance on removing bridges in simulated data, and in
the novel inverse problem of computational tomography with random
(and unknown) projection angles.
\section{Preliminaries}
\label{sec:prelim}
Let $\cX=\set{x_i}_{i=1}^n$ be nonuniformly sampled points from
manifold $\cM \subset \bbR^r$. We observe
$\cY=\set{y_i = x_i + \nu_i}_{i=1}^n$, where $\nu_i$ is noise.
A nearest neighbor (NN) graph $G = (\cY,\cE,d)$ is constructed
from $k$-NN or $\delta$-ball neighborhoods of $\cY$
with the scale ($k$ or $\delta$) chosen via cross-validation or
prior knowledge. The map $d \colon \cE \to \bbR$ assigns cost $d_e =
\norm{y_k-y_l}_2$ to edge $e=(k,l) \in \cE$. Let $\cD = \set{d_e : e \in
\cE}$. The set $\cE$ gives initial estimates of neighbors
on the manifold. Let $\cF_k$ denote the neighbors of $y_k$~in~$G$.
In \cite{Tenenbaum2000} the geodesic distance between $(i,j) \in \cY^2$
is estimated by $\hat{g}_{ij} = \sum_{e \in \cP_{ij}} d_e$
where $\cP_{ij}$ is a minimum cost path from $i$ to $j$ in $G$
(this can be calculated via Dijkstra's algorithm).
When there is no noise, this estimate converges to the true geodesic
distance on $\cM$ as $n \to \infty$ and neighborhood size $\delta \to 0$.
However, in the presence of noise bridges form in the NN graph
and this results in significant estimation error. Forming
the shortest path in $G$ is too greedy in the presence of~bridges.
If bridges could be detected, their anomalous effect could be
removed without disconnecting the graph by substituting
a surrogate weight:
$\td_e = d_e + M, e \in \cB$ where $\cB$ is the set of detected
bridges and $M=n (\max_{e \in \cE} d_e)$, larger than the diameter of
$G$, is a penalty.
Let $\tG = (\cY,\cE,\td)$ and $\tcP_{ij}$ be
a minimum cost path between $i$ and $j$ in $\tG$. The adjusted estimate
of geodesic distance is $\tg_{ij} = \sum_{e \in \tcP_{ij}} d_e$.
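As a minimal illustration of this adjustment (our sketch, with a dense cost
matrix standing in for $G$ and an already detected bridge set $\cB$), the
following Python code finds minimum-cost paths in the penalized graph $\tG$
and then sums the original costs $d_e$ along those paths:
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def adjusted_geodesics_from(W, bridges, src):
    """Geodesic estimates from node `src` after penalizing detected bridges.
    W: dense (n x n) symmetric matrix of edge costs d_e (0 = no edge);
    bridges: iterable of (i, j) pairs in the detected bridge set B.
    Paths are found in the penalized graph, but the returned estimates sum
    the *original* costs d_e along those paths, as in the text."""
    n = W.shape[0]
    Wt = W.astype(float).copy()
    M = n * W.max()                       # penalty exceeding the diameter of G
    for i, j in bridges:
        Wt[i, j] += M
        Wt[j, i] += M
    _, pred = dijkstra(csr_matrix(Wt), directed=False,
                       indices=src, return_predecessors=True)
    g = np.zeros(n)
    for j in range(n):                    # walk each shortest path back to src
        k = j
        while pred[k] >= 0:
            g[j] += W[pred[k], k]
            k = pred[k]
    return g
\end{verbatim}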
With this in mind, we first review some bridge detection methods, and
discuss recent theoretical work in random walks on graphs.
\section{Prior Work}
The greedy nature of the SP solution encourages the traversal
of bridges, thereby significantly underestimating geodesic distances.
Previous work has considered denoising the nearest neighbors
graph via rejection of edges based on local distance statistics
\cite{Chen2006,Singer2009}, or via local
tangent space estimation \cite{Li2008, CHANG2006}. However, unlike
the method we propose (NPDR), these methods use local rather
than global statistics. We have found that using only local statistics
can be unreliable. For example, with state of the art robust
estimators of the local tangent space (as in \cite{Subbarao2006a}), local
rejection of neighborhood edges is not reliable with moderate noise or
outliers. Furthermore, edge removal (pruning) based on local edge
length statistics rests on questionable assumptions. For example, a
thin chain of outliers can form a bridge without unusually long edge
lengths.
As an example, we first describe the simplest class of bridge decision
rules (DRs): ones that classify bridges by a threshold on edge
length. We call this the length decision rule (LDR); it is similar to the DR
of \cite{Chen2006}. It is calculated with the following steps:
\begin{enumerate}
\item Normalize edge lengths for local sampling density by setting
$$\bd_{kl} = \frac{d_{kl}}{\sqrt{d_{k\cdot}d_{l\cdot}}},$$
where $d_{k\cdot} =\sum_{m \in \cF_k} d_{km}$ sums outgoing edge lengths from
$x_k$.
\item Let $\bcD = \set{\bd_e : e \in \cE}$.
Select a ``good edge percentage'' $0 < q < 1$ (e.g. $99\%$) and
calculate the detected bridge set $\cB$ via:
$$\cB = \set{e \in \cE \colon \bd_e \geq Q(\bcD,q)},$$
where $Q(\cD,q)$ is the $q$-th quantile of the set $\cD$.
\end{enumerate}
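A minimal Python sketch of these two steps (ours; it assumes a dense symmetric
cost matrix with zeros off the edge set) follows:
\begin{verbatim}
import numpy as np

def length_decision_rule(W, q=0.99):
    """LDR sketch: flag edges whose density-normalized length is at or
    above the q-th quantile.  W: dense symmetric cost matrix (0 = no edge)."""
    n = W.shape[0]
    deg = W.sum(axis=1)                   # d_{k.}: sum of outgoing edge lengths
    edges = [(k, l) for k in range(n) for l in range(k + 1, n) if W[k, l] > 0]
    norm_len = {e: W[e] / np.sqrt(deg[e[0]] * deg[e[1]]) for e in edges}
    thresh = np.quantile(list(norm_len.values()), q)
    return [e for e, v in norm_len.items() if v >= thresh]   # bridge set B
\end{verbatim}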
The second decision rule, Jaccard similarity DR (JDR),
classifies bridges as edges between points with dissimilar
neighborhoods \cite{Singer2009}. As opposed to the LDR,
the JDR uses information from immediate neighbors of two points to
detect bridges:
\begin{enumerate}
\item The Jaccard similarity between the neighborhoods of $x_l$ and
$x_m$ is
$$j_{lm} = \frac{\abs{\cF_l \cap \cF_m}}{\abs{\cF_l \cup \cF_m}}.$$
\item Let $\cJ = \set{j_e : e \in \cE}$ be the set of Jaccard
similarities. Select a $q \in (0,1)$; the estimated bridge set is
$$\cB = \set{e \in \cE \colon j_e < Q(\cJ,1-q)}.$$
\end{enumerate}
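The corresponding sketch for the JDR (again ours, assuming precomputed
neighborhood sets $\cF_k$) is:
\begin{verbatim}
import numpy as np

def jaccard_decision_rule(neighbors, edges, q=0.99):
    """JDR sketch: flag edges whose endpoints have dissimilar neighborhoods.
    neighbors: dict mapping node k to the set F_k of its graph neighbors;
    edges: list of (l, m) pairs in E."""
    jac = {(l, m): len(neighbors[l] & neighbors[m]) /
                   len(neighbors[l] | neighbors[m])
           for (l, m) in edges}
    thresh = np.quantile(list(jac.values()), 1 - q)
    return [e for e, v in jac.items() if v < thresh]          # bridge set B
\end{verbatim}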
We now describe a more global neighborliness metric. The main
motivation is that bridges are short cuts for shortest paths. This
suggests detecting bridges by counting the traversals of each edge by
estimated geodesics. This is the concept of edge centrality
(\emph{edge betweenness}) in networks \cite{Newman2004}.
In a network, the centrality of edge $e$ is
$$BE(e) = \sum_{(i,j) \in \cY^2} \bbI(e \in \cP_{ij}).$$
Edge centrality can be calculated in $O(n^2 \log n)$ time and
$O(n + \abs{\cE})$ space using algorithms of Newman or Brandes
\cite{Newman2004,Brandes2001}.
However, caution is required in using edge centrality for our purpose.
Consider a bridge $(a,b) \in \cE$ having high centrality. Suppose
there exists a point $y_c$ with $(a,c), (c,b)\in \cE$ such that
$d_{ab} \leq d_{ac}+d_{cb} < d_{ab} + \delta'$ for some small $\delta'$.
Since the direct edge $(a,b)$ is never more expensive, the edges
$(a,c),(c,b)$ are never preferred over $(a,b)$ in a
geodesic path, and hence have low centrality.
However, once $(a,b)$ is placed into $\cB$ and given increased weight,
the path $(a,c),(c,b)$ reveals itself as a secondary bridge in $\tG$.
So detection by centrality must be done in
rounds, each adding edges to $\cB$. This allows
secondary bridges to be detected in subsequent rounds.
We now describe the Edge Centrality Decision Rule (ECDR):
Select quantile $q \in (0,1)$ and iterate the following steps $K$
times on $\tG$:
\begin{enumerate}
\item Calculate $BE(e)$ for each edge $e$.
\item Place $(1-q) n/K$ of the most central edges into $\cB$ and
update $\tG$.
\end{enumerate}
The result is a bridge set $\cB$ containing approximately $\lfloor
(1-q) n \rfloor$ edges, matching the $q$-th quantile sets of the previous
DRs. The iteration count parameter $K$ trades off between
computational complexity (higher $K$ implies more iterations of edge
centrality estimation) and robustness (a higher $K$ also makes it more
likely to detect bridges). To our knowledge, the use of centrality as a bridge
detector is new. While an improvement over LDR, the deterministic
greedy underpinnings of ECDR are a limitation: it initially fails to
see secondary bridges, and may also misclassify true high centrality
edges as bridges, e.g. the narrow path in the dumbbell manifold
\cite{Coifman2005}.
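A hedged sketch of the ECDR in Python, using the networkx implementation of
edge betweenness and following the per-round count given above (the graph,
weights, and parameters are placeholders, not fixed choices from this
chapter):
\begin{verbatim}
import networkx as nx

def ecdr(G, q=0.92, K=15):
    """ECDR sketch on a weighted networkx graph G (edge attribute 'weight').
    Returns the detected bridge set B after K rounds of pruning."""
    G = G.copy()
    per_round = max(1, int((1 - q) * G.number_of_nodes() / K))
    penalty = G.number_of_nodes() * max(d["weight"]
                                        for _, _, d in G.edges(data=True))
    bridges = []
    for _ in range(K):
        bc = nx.edge_betweenness_centrality(G, weight="weight")
        worst = sorted(bc, key=bc.get, reverse=True)[:per_round]
        for u, v in worst:                # penalize rather than disconnect
            G[u][v]["weight"] += penalty
        bridges.extend(worst)
    return bridges
\end{verbatim}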
The Diffusion Maps approach to estimating feature space distances
\cite{Coifman2005} has experimentally exhibited robustness to noise,
outliers, and finite sampling. Diffusion distances, based on random
walks on the NN graph, are closely related to ``natural'' distances
(\emph{commute times}) on a manifold \cite{Ham2004}. Furthermore,
Diffusion Maps coordinates (based on these distances) converge to
eigenfunctions of the Laplace Beltrami operator on the underlying
manifold.
The neighbor probability metric we construct is a global measure of edge
reliability based on diffusion distances. The NPDR, based on this
metric, is then used to inform geodesic estimates.
\section{Neighbor Probability Decision Rule}
\label{sec:npdr}
We now propose a DR based on a Markov random walk that assigns
a probability that two points are neighbors.
This steps back from immediately looking for a shortest path
and instead lets a random walk ``explore'' the NN graph.
To this end, let $P$ be a row-stochastic matrix
with $P_{ij}=0$ if $i\neq j$ and $(i,j)\notin \cE$. Let $p \in
(0,1)$ be a parameter and $\bp = 1-p$.
Consider a random walk $s(m)$, $m = 0,1,\ldots$, on $G$ starting at $s(0)=i$
and governed by $P$. At each step $m\geq 0$, with probability $p$ we stop the walk
and declare $s(m)$ a neighbor of $i$.
Let $N$ be the matrix whose entry $N_{ij}$ is the probability that the walk
starting at node $i$ stops at node $j$. The $i$-th row of $N$ is the neighbor distribution
of node $i$. This distribution can be calculated:
\begin{align}
\label{eq:pNN1} N_{ij} &= \sum_{m=0}^\infty \Pr(\text{stop at $m$}, y^{(m)}=y_j|y^{(0)}=y_i) \\
\label{eq:pNN2} &= \sum_{m=0}^\infty \Pr(\text{stop at $m$}) \Pr(y^{(m)}=y_j|y^{(0)}=y_i) \\
\label{eq:pNN3} &= \sum_{m=0}^\infty \Pr(\text{stop at $m$}) \left( P^m \right)_{ij} \\
\label{eq:pNN4} &= \sum_{m=0}^\infty p (1-p)^m \left( P^m \right)_{ij}.
\end{align}
In \eqref{eq:pNN1} we decompose the stopping event into the disjoint
events of stopping at time $m$. In \eqref{eq:pNN2} we separate the
independent events of stopping from being at the current
position $y^{(m)}$. In \eqref{eq:pNN3} we use the well known Markov
property $P_{ij}^{(m)} = \left( P^m \right)_{ij}$ (with $P^{(0)} =I$).
Finally, in \eqref{eq:pNN4} we recall that stopping at time $m$
means we choose not to stop, independently, for each $m'=0,\ldots,m-1$,
and finally stop at time $m$, so the probability of this event is
$(1-p)^m p$. Thus, the neighbor probabilities are:
$$
N = \sum_{m\geq 0} p(1-p)^m P^m = p (I - \bp P)^{-1}.
$$
A smaller stopping probability $p$ induces greater weighting of long-time
diffusion effects, which are more dependent on the topology of $\cM$.
$N$ is closely related to
the \emph{normalized commute time} of \cite{Zhou2004}.
Its computation requires $O(n^3)$ time and $O(n^2)$ space ($P$ is sparse but $N$ may
not be).
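The closed form above can be checked against the defining series; in the
following small sketch (ours) a random row-stochastic matrix stands in
for $P$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 0.1
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)             # a row-stochastic walk matrix

N_closed = p * np.linalg.inv(np.eye(n) - (1 - p) * P)
N_series = sum(p * (1 - p) ** m * np.linalg.matrix_power(P, m)
               for m in range(2000))          # truncated geometric series
print(np.allclose(N_closed, N_series))        # True
print(N_closed.sum(axis=1))                   # rows sum to 1 (see the lemma below)
\end{verbatim}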
The neighborhood probability matrix $N$ is heavily dependent on the
choice of underlying random walk matrix $P$. First we relate several
key properties of $N$ to those of $P$, then we introduce a geometrically
intuitive choice of $P$ when the sample points are sampled from a
manifold.
\begin{lem}\label{lem:propN}
$N$ is row-stochastic, shares
the left and right eigenvectors of $P$, and has
spectrum $\sigma(N)=\{p(1-\bp \lambda)^{-1}$, $\lambda\in\sigma(P)\}$.
\end{lem}
\begin{proof}
$N^{-1}=(I-\bp P)/p$ has unit row sums, so $N\mathbf{1}=\mathbf{1}$; since
$N=\sum_{m\geq 0} p\,\bp^m P^m$ has nonnegative entries, $N$ is row-stochastic.
Moreover, $N^{-1}$ shares the left and right eigenvectors of $P$ and has
spectrum $\{(1-\bp\lambda)/p,\ \lambda\in\sigma(P)\}$; the claims for $N$ follow.
\end{proof}
Following \cite{Coifman2005}, we select $P$ to be the popular
Diffusion Maps kernel on $G$:
\begin{enumerate}
\item Choose $\epsilon > 0$ (e.g. $\epsilon = \med_{e\in\cE}d_e/2$)
and let $(\hPe)_{lk} =
\exp \left(-d_{kl}^2 / \epsilon \right)$ if $(l,k) \in \cE$,
$1$ if $l=k$, and $0$ otherwise.
\item Let $\hDe$ be diagonal
with $(\hDe)_{kk} = \sum_l (\hPe)_{kl}$, and normalize for sampling density
by setting $\Ae = \hDe^{-1} \hPe \hDe^{-1}$.
\item Let $\De$ be diagonal with $(\De)_{kk} = \sum_l (\Ae)_{kl}$
and set $\Pe = \De^{-1}\Ae$.
\end{enumerate}
$\Pe$ is row-stochastic and, as is well known, has bounded spectrum:
\begin{lem}\label{lem:eigP}
The spectrum of $\Pe$ is contained in $[0,1]$.
\end{lem}
\begin{proof}
$\hPe$, $\Ae$, and $S=\De^{-1/2}\Ae\De^{-1/2}$ are symmetric PSD.
$\Pe = \De^{-1} \Ae$ is row-stochastic and similar to $S$.
\end{proof}
\noindent We define the Neighbor Probability Decision Rule (NPDR) as follows.
\begin{enumerate}
\item Let $\Ne=p(I-\bp \Pe)^{-1}$ and find the restriction $\cN =
\set{(\Ne)_e : e \in \cE}$ of $\Ne$ to $\cE$.
\item Choose a $q \in (0,1)$ and let $\cB = \set{e \in
\cE : (\Ne)_e < Q(\cN,1-q)}$, i.e., edges connecting nodes with a
low probability of being neighbors are declared bridges.
\end{enumerate}
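The full pipeline, from the Diffusion Maps kernel of steps 1--3 above to the
thresholding of $\Ne$ on the edge set, can be sketched as follows (our
minimal, dense implementation; it computes the inverse explicitly and is
therefore only suitable for small $n$):
\begin{verbatim}
import numpy as np

def npdr(D, E, p=0.01, q=0.92, eps=None):
    """NPDR sketch.  D: dense (n x n) matrix of edge costs d_e (0 off the
    edge set); E: list of edges (k, l); p: stopping probability; q: good
    edge fraction.  Returns the detected bridge set B."""
    n = D.shape[0]
    eps = eps if eps is not None else np.median([D[e] for e in E]) / 2
    Phat = np.where(D > 0, np.exp(-D ** 2 / eps), 0.0) + np.eye(n)
    rs = Phat.sum(axis=1)
    A = Phat / np.outer(rs, rs)                      # hD^{-1} hP hD^{-1}
    P = A / A.sum(axis=1, keepdims=True)             # row-stochastic P_eps
    N = p * np.linalg.inv(np.eye(n) - (1 - p) * P)   # neighbor probabilities
    vals = np.array([N[e] for e in E])
    thresh = np.quantile(vals, 1 - q)
    return [e for e, v in zip(E, vals) if v < thresh]
\end{verbatim}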
Calculating $\Ne$ can be prohibitive for very large datasets.
Fortunately, we can effectively order the elements of $\cN$
using a low rank approximation to $\Ne$.
By Lemmas \ref{lem:propN}, \ref{lem:eigP}, we calculate
$N_{lm} \approx \sum_{j=1}^J
p(1-\bp\lambda_{j})^{-1}(v^R_j)_l(v^L_j)_m$ for $(l,m) \in \cE$,
where $\lambda_j$,$v^R_j$ and $v^L_j$ are the $J$ largest eigenvalues
of $\Pe$ and the associated right and left eigenvectors.
In practice, an effective ordering of $\cN$ is obtained with $J \ll n$
and since $\Pe$ is sparse, its largest eigenvalues and
eigenvectors can be computed efficiently using a standard
iterative algorithm.
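A sketch of this low-rank approach (ours), using the similarity of $\Pe$ to
the symmetric matrix $S=\De^{-1/2}\Ae\De^{-1/2}$ noted in the proof of the
lemma above to obtain biorthogonal left and right eigenvectors from a sparse
symmetric eigensolver:
\begin{verbatim}
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

def npdr_lowrank_score(A, p=0.01, J=30):
    """Rank-J approximation of N_eps = p (I - (1-p) P_eps)^{-1}.
    A: sparse symmetric density-normalized affinity matrix A_eps.
    Returns score(l, m) ~ (N_eps)_{lm}, to be evaluated on the edge set."""
    d = np.asarray(A.sum(axis=1)).ravel()
    dinv_sqrt = 1.0 / np.sqrt(d)
    S = diags(dinv_sqrt) @ A @ diags(dinv_sqrt)   # symmetric, similar to P_eps
    lam, U = eigsh(S, k=J, which="LA")            # J largest eigenvalues
    VR = U * dinv_sqrt[:, None]                   # right eigenvectors of P_eps
    VL = U * np.sqrt(d)[:, None]                  # left eigenvectors (biorthogonal)
    w = p / (1.0 - (1.0 - p) * lam)               # spectrum of N_eps (lemma above)
    def score(l, m):                              # approximate (N_eps)_{lm}
        return float((w * VR[l] * VL[m]).sum())
    return score
\end{verbatim}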
Matrix $\Ne$ has, thanks to our choice of $\Pe$, a more geometric
interpretation which we now discuss.
\subsection{Geometric Interpretation of $N$}
\label{sec:Ngeo}
In this section, we show the close relationship between the matrices
$\Ne$, $\Pe$, and the weighted graph Laplacian matrix (to be defined
soon). One important property of $\Pe$ is
its relationship to the Laplace Beltrami operator on $\cM$. Specifically,
when $n \to\infty$, and as $\epsilon \to 0$ at the appropriate rate
\footnote{We discuss this convergence in greater detail in Chapter~\ref{ch:rl}.},
$(I-\Pe)/\epsilon \to c \Delta_{LB}$ both pointwise and in spectrum
\cite{Coifman2005}. Here $\Le = (I-\Pe)/\epsilon$ is the
weighted graph Laplacian, $\Delta_{LB}$ is the Laplace-Beltrami
operator on $\cM$, and $c$ is a constant. Thus, for large $n$ and
small $\epsilon$, $\Pe$ is a neighborhood averaging operator. This
property also holds for $\Ne$:
\begin{thm} \label{thm:LB}
As $n \to \infty$ and $\epsilon \to 0$, $(I-\Ne)/\epsilon \to c' \Delta_{LB}$.
\end{thm}
\begin{proof}
For $I-S$ invertible, $(I-S)^{-1} = I+S(I-S)^{-1}$. From
Lem. \ref{lem:eigP}, $I-\bp \Pe$ is invertible. Thus
\begin{align}
\label{eq:LBp1}
p(I-\bp\Pe)^{-1} &= \prn{I-\frac{\bp}{p}(\Pe-I)}^{-1} \\
\label{eq:LBp2}
&= I + \frac{\bp}{p}(\Pe-I)\prn{I-\frac{\bp}{p}(\Pe-I)}^{-1}.
\end{align}
Therefore
$$
\frac{I-\Ne}{\epsilon} = \frac{\bp}{p} \frac{I-\Pe}{\epsilon}
\left(I-\frac{\bp}{p}(\Pe-I) \right)^{-1}
= \bp \frac{I-\Pe}{\epsilon} (I - \bp \Pe)^{-1}.
$$
The first factor on the RHS converges to $\bp c\, \Delta_{LB}$ and the second
to $p^{-1}I$ since $\Pe \to I$; hence the claim holds with $c' = \bp c / p$.
\end{proof}
\noindent By Theorem \ref{thm:LB}, as $n \to \infty$ and $\epsilon \to 0$,
$\Ne$ acts like $\Pe$. For finite sample sizes, however, experiments
indicate that $\Ne$ is more informative of neighborhood
relationships. We provide here a simple justification based on the
original random walk construction.
Were we to replace $\Ne$ with $\Pe$ in the implementation of the NPDR,
edge $(i,j)$ would be marked as a bridge essentially according to
its normalized weight (that is, proportional to $P_{ij}$ and
normalized for sampling density). As $P_{ij} =
\exp(-d_{ij}^2/\epsilon)$, this edge would essentially be marked as
a bridge if the pairwise distance between its associated points is
above a threshold. NPDR would in this case yield a performance very
similar to that of LDR. In contrast, NPDR via $\Ne$ uses multi-step
probabilities with an exponential decay weighting to determine whether
an edge is a bridge. Thus, the more ways there are to
get from $y_i$ to $y_j$ over a wide variety of possible step counts,
the less likely that $(i,j)$ is a bridge.
We now provide some synthetic examples comparing NPDR to the other
decision rules for finite sample sizes.
\section{Denoising the Swiss Roll}
\label{sec:experiments}
We first test our method on the synthetic Swiss roll,
parametrized by $(a,b)$ in $U = [\pi,4\pi] \x [0,21]$
via the embedding $(x^1,x^2,x^3) =
f(a,b) = (a \cos a, b, a \sin a)$.
True geodesic distances $\set{g_{1j}}_{j=1}^n$
are computed via:
$$g_{ij} = \int_0^1 \norm{Df(\bv(t))
\frac{\partial{\bv(t)}}{\partial t}}_2 dt$$
where $\bv(t) =(1-t)[a_i\ \ b_i]^T + t [a_j\ \ b_j]^T$, and $Df(\bv)$ is the
differential of $f$ evaluated at $(\bv^1,\bv^2)$.
We sampled $n=500$ points from $\cM$, uniformly with respect to its surface
measure in the ambient space $\bbR^3$, with $x_1$ fixed ($(a_1,b_1) = (\pi,0)$).
For $t=1,\dots,T$ ($T=100$)
we generated $n$ random noise values,
$\set{u_{ti}}_{i=1}^n$, uniformly on $[-1,1]$.
Then $y_{ti} = x_i + \mu u_{ti} \bn_i$, $t=1,\dots,T$, where $\bn_i$ is the normal to
$\cM$ at $x_i$. Each experiment was repeated for $\mu \in \set{0,.05,\cdots,1.95,2}$.
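For concreteness, the following Python sketch (ours) reproduces this setup:
the embedding $f$, its differential $Df$, the reference distance $g_{ij}$ via
numerical integration along the segment $\bv(t)$, and noisy samples displaced
along the surface normal. For simplicity it samples $(a,b)$ uniformly in the
parameter domain $U$ rather than uniformly on the embedded surface.
\begin{verbatim}
import numpy as np

def f(a, b):                               # Swiss roll embedding into R^3
    return np.array([a * np.cos(a), b, a * np.sin(a)])

def Df(a, b):                              # differential of f at (a, b)
    return np.array([[np.cos(a) - a * np.sin(a), 0.0],
                     [0.0,                       1.0],
                     [np.sin(a) + a * np.cos(a), 0.0]])

def g(p_i, p_j, steps=2000):
    """Reference distance g_ij: length of the image under f of the straight
    segment v(t) from p_i to p_j in parameter space."""
    t = np.linspace(0.0, 1.0, steps)
    v = (1 - t)[:, None] * p_i + t[:, None] * p_j
    dv = p_j - p_i                         # dv/dt is constant
    speeds = np.array([np.linalg.norm(Df(*vt) @ dv) for vt in v])
    return float(speeds.mean())            # approximates the integral over [0,1]

def unit_normal(a, b):                     # unit normal to the roll at f(a, b)
    J = Df(a, b)
    nvec = np.cross(J[:, 0], J[:, 1])
    return nvec / np.linalg.norm(nvec)

rng = np.random.default_rng(0)
n, mu = 500, 1.6
ab = np.column_stack([rng.uniform(np.pi, 4 * np.pi, n),
                      rng.uniform(0.0, 21.0, n)])        # (a_i, b_i) in U
ab[0] = (np.pi, 0.0)                                     # x_1 fixed
X = np.array([f(a, b) for a, b in ab])                   # clean samples
u = rng.uniform(-1.0, 1.0, n)
Y = X + mu * u[:, None] * np.array([unit_normal(a, b) for a, b in ab])
\end{verbatim}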
\begin{figure}
\caption{Noisy Swiss Roll. Panels: NN graph of Swiss roll ($\mu=1.6$); median bridge count vs.\ $\mu$.}
\label{sfig:swissroll_edges}
\label{sfig:bridges}
\end{figure}
\begin{figure}
\caption{SP Denoising: Swiss Roll geodesic estimates vs.\ ground truth (from $x_1$). Panels: $\mu=.1$; $\mu=1.54$.}
\label{sfig:simple_noiseless}
\label{sfig:simple_noisy}
\end{figure}
\begin{figure}
\caption{ECDR Denoising: Swiss Roll geodesic estimates vs.\ ground truth (from $x_1$). Panels: $\mu=.1$, $q=.92$; $\mu=1.54$, $q=.92$.}
\label{sfig:ECDR_noiseless}
\label{sfig:ECDR_noisy_q1}
\end{figure}
\begin{figure}
\caption{NPDR Denoising: Swiss Roll geodesic estimates vs.\ ground truth (from $x_1$). Panels: $\mu=.1$, $q=.92$; $\mu=1.54$, $q=.92$.}
\label{sfig:NPDR_noiseless}
\label{sfig:NPDR_noisy_q1}
\end{figure}
The initial NN graph $G$ was constructed using $\delta$-balls
($\delta=4$). The median bridge counts over $T$ realizations are shown in
Fig.~\ref{sfig:bridges}. Bridges first appear at $\mu \approx 1.2$.
Fig.~\ref{sfig:swissroll_edges} shows one realization of $\cY$ and the NN graph $G$ (note bridges).
We compare the simple SP estimator with the LDR-, ECDR- ($K=15$), and NPDR-based ($p=0.01$)
estimators by plotting the estimates of geodesic distance versus ground truth (sorted
by distance from $x_1$). We plot the median estimate, 33\%, and 66\%
quantiles over the $T$ runs, for $\mu=.1$ and $\mu = 1.54$. The LDR
and JDR based estimators' performance is comparable to SP for $q>.9$
(plots not included).
With no noise: SP provides excellent estimates; NPDR estimates are
accurate even after removing $8\%$ of the graph edges
(Fig.~\ref{sfig:NPDR_noiseless}); however, ECDR removes important
edges (Fig.~\ref{sfig:ECDR_noiseless}).
At $\mu = 1.54$ with approximately $25$ bridges: SP has failed (Fig.~\ref{sfig:simple_noisy});
ECDR is removing bridges but also important edges, resulting in an upward
estimation bias (Fig.~\ref{sfig:ECDR_noisy_q1}); in contrast,
NPDR is successfully discounting bridges without
any significant upward bias even at $q=.92$ (Fig.~\ref{sfig:NPDR_noisy_q1}).
This supports our claim that bridges correspond to edges whose endpoints have
low neighbor probability under the NPDR random walk.
Lower values of $q$ remove more edges, including bridges, but removing
non-bridges always increases SP estimates, and can lead to an upward bias.
The choice of $q$ should be based on prior knowledge of the
noise or cross-validation.
\begin{table}[h!]
\caption{Comparison of mean error $E$, varying $\mu$.}
\label{fig:cmp}
\centering
\begin{tabular}{|c||c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{$\mu$} &
SP &
LDR, &
\multicolumn{3}{|c|}{ECDR, q=} &
\multicolumn{3}{|c|}{NPDR, q=} \\
{} & {} & q=.92 & .92 & .95 & .99 & .92 & .95 & .99 \\ \hline
0.10 & \textbf{0.8} & 1.5 & 4.4 & 3.8 & 1.9 & 2.4 & 1.9 & 1.2 \\ \hline
1.44 & 12.0 & 10.9 & 10.7 & 8.1 & 2.1 & 3.6 & 2.5 & \textbf{1.6} \\
1.54 & 13.6 & 12.8 & 11.7 & 7.6 & 2.5 & 3.8 & 2.6 & \textbf{2.3} \\
1.64 & 14.4 & 13.9 & 11.6 & 6.8 & 5.1 & 4.0 & \textbf{3.1} & 6.5 \\
1.74 & 14.9 & 14.4 & 11.4 & 6.5 & 8.9 & \textbf{5.0} & 5.6 & 11.1 \\
1.85 & 15.3 & 14.9 & 12.1 & 8.6 & 12.0 & \textbf{8.1} & 9.4 & 13.4 \\
\hline
\end{tabular}
\end{table}
Table~\ref{fig:cmp} compares the performance of DRs at moderate noise
levels. For this experiment, we chose $5$ points,
$\set{x_r}_{r=1}^5$, well distributed over $\cM$. Over $T=100$
noise realizations, we calculated the mean of the value
$$
E = \frac{1}{5n} \sum_{r=1}^5 \sum_{i=1}^n \abs{g_{ri}-\tg_{ri}},
$$
the average absolute error of the geodesic estimate from all of the
points to these 5. As seen in this table, given an appropriate
choice of $q$, NPDR outperforms the other DRs at moderate noise
levels. As expected, $q$ must decrease with the noise level, since more
bridges are found in the initial graph and more edges must be removed.
\section{Denoising a Random Projection Graph}
We now consider the random projection tomography problem of
\cite{Singer2009}. Random projections of $I$ are taken at angles
$\theta \in [0, 2\pi)$. More specifically, these projections are
$f(\theta) = R_{\theta}(I)$, where $R_\theta$ is the Radon transform
at angle $\theta$.
Following \cite{Singer2009}, we observe $n=1024$ random projections
$$y_i = f(\theta_i) + \nu_i \qquad \text{where} \qquad \nu_i \sim N(0,\sigma^2),$$
for which the ambient dimension of the projection is $r = 512$ and
$\sigma^2 = \sigma^2_f /10^{SNR_{db}/10}$, where the signal power is
$\sigma_f^2 \approx 0.0044$. The image used in all experiments is the
Shepp-Logan phantom (Fig.~\ref{fig:tomo_true}), and the projection
angles are unknown (Fig.~\ref{fig:tomo_proj}). After some initial
preprocessing, a NN graph ($k=50$) is constructed from the noisy
projections, and JDR is used to detect bridges in this graph. After
detected bridges are removed (pruned), nodes with fewer than two
remaining edges (that is, isolated nodes) are removed from the
graph. An eigenvalue problem on the new graph's adjacency matrix is
then solved to find an angular ordering of the remaining projections
(nodes). Finally, $\hI$ is reconstructed via an inverse Radon
transform of these resorted projections.
\begin{figure}
\caption{Panels: Original Phantom, $I$; Projections of $I$.}
\label{fig:tomo_true}
\label{fig:tomo_proj}
\end{figure}
\begin{figure}
\caption{Tomography Reconstructions from Random Projections. Panels: estimated $\theta$ (JDR and NPDR); reconstructions $\hI$ (JDR and NPDR).}
\label{fig:tomo_sorted_jdr}
\label{fig:tomo_sorted_npdr}
\label{fig:tomo_recon_jdr}
\label{fig:tomo_recon_npdr}
\end{figure}
We compared JDR to NPDR pruning at a SNR of $-2\text{db}$.
Exhaustive search finds the optimal $q$ for
JDR at $q=.78$. For NPDR we used $\hat{P}_\infty$ (all edges in $G$
have weight $1$), $p=.01$, and $q=.8$. After pruning, JDR
disconnected 277 nodes compared to 21 for NPDR.
The estimated sorted angles are shown in
Figs. \ref{fig:tomo_sorted_jdr},\ref{fig:tomo_sorted_npdr}, and the
rotated reconstructions in
Figs. \ref{fig:tomo_recon_jdr},\ref{fig:tomo_recon_npdr}.
Under the similarity metric $\rho = \frac{I^T\hI}{\norm{I}\norm{\hI}}$,
with alignment of $\hI$ with $I$, the increase in NPDR similarity (0.15)
over JDR similarity ($0.12$) is 25\%. Note the clearer boundaries
in the NPDR phantom, thanks to 256 additional (unpruned) projections
(best viewed on screen).
At moderate noise levels, NPDR removes fewer NN graph nodes
and yields a more accurate reconstruction.
As more projections are left after pruning, the final accuracy is
higher.
\section{Conclusion and Connections}
We studied the problem of estimating geodesics in a point cloud. A
slight revision of the usual edge-removal step, penalizing detected edges
rather than deleting them, allows us
to avoid disconnecting weakly connected groups. Building on this
framework, we studied several global measures for detecting
topological bridges in the NN graph. In particular, we developed and
analyzed the NPDR bridge detection rule, which is based on a special
type of Markov random walk. Using a special random walk matrix
derived from the geometry of the sample points, we constructed the
NPDR to detect bridges by thresholding entries of the neighborhood
matrix $\Ne$. The entries of column $i$ in this matrix converge to
those of a special averaging operator in the neighborhood of point
$x_i$ in $\cM$: the averaging intrinsically performed by the Laplace
Beltrami operator around $x_i$.
Our experiments indicate that NPDR robustly detects bridges in the NN
graph without misclassifying edges important for geodesic estimation
or tomographic angle estimation. Furthermore, it does so over a wider
noise range than competing methods, e.g. LDR and ECDR. It can be
calculated efficiently via a sparse eigenvalue decomposition.
Preliminary evidence from synthetic experiments indicates that, as
\S\ref{sec:npdr} suggests, for NPDR one should choose $p$ as small as
possible while retaining numerical conditioning of $I-\bp\Pe$.
For very large $n$ and small $\epsilon$, the matrices $\Ne$ and $\Pe$
are equivalent, but in practical cases $\Ne$ yields significantly
better performance. More testing of NPDR on non-synthetic
datasets is needed. Possible applications include determining
bridges in social and webpage (hyperlink) network graphs, and in the
common line graphs estimated in the blind 3D tomography ``Cryo-EM''
problem~\cite{Singer2010}.
Furthermore, the matrix $\Ne$ is closely related to the regularized
inverse of the graph Laplacian. The term $\Pe-I$ in \eqref{eq:LBp1}
is proportional to the weighted graph Laplacian $\Le$, and from this
equation it is clear that $\Ne$ is proportional to the inverse of the
Tikhonov regularized weighted graph Laplacian (with regularization
parameter $p / \bp$). The efficacy of the NPDR, and the initial
theoretical results developed in \S\ref{sec:Ngeo} lead us to study the
regularized inverse of the graph Laplacian in more detail;
this is the focus of Chapter~\ref{ch:rl}.
\chapter{The Inverse Regularized Laplacian: Theory\label{ch:rl}
\chattr{This chapter, and the next, are based on work in collaboration with
Peter~J.~Ramadge, Department of Electrical Engineering, Princeton
University, as submitted in \cite{Brevdo2011a}.}}
\section{Introduction}
Semi-supervised learning (SSL) encompasses a class of machine learning
problems in which both labeled data points and unlabeled data points
are available during the training (fitting) stage~\cite{Chapelle2006}.
In contrast to supervised learning, the goal of SSL algorithms is to
improve future prediction accuracy by including information from the
unlabeled data points during training. SSL extends
a wide class of standard problems, such as classification and
regression.
A number of recent nonlinear SSL algorithms use aggregates of nearest
neighbor (NN) information to improve inference performance. These
aggregates generally take the form of some transform, or
decomposition, of the weighted adjacency, or weight, matrix of the NN
graph. Formal definitions of NN graphs, and associated weight
matrices, are given in~\S\ref{sec:prelim}; we will also review them in
the SSL context in~\S\ref{sec:sslintro}.
Motivated by several of these algorithms, we show a connection between
a certain nonlinear transform of the NN graph weight matrix, the
regularized inverse of the graph Laplacian matrix, and the
solution to the regularized Laplacian partial differential equation
(PDE), when the underlying data points are sampled from a compact
Riemannian manifold.
We then show a connection between this PDE and the Eikonal equation,
which generates geodesics on the manifold.
These connections lead to intuitive geometric interpretations of
learning algorithms whose solutions include a regularized inverse
of the graph Laplacian.
As we will show in chapter~\ref{ch:rlapp}, it
also enables us to build a robust geodesic distance estimator, a
competitive new multiclass classifier, and a regularized version of
ISOMAP.
This chapter is organized as follows:
\S\ref{sec:sslintro}-\S\ref{sec:ssllimitthm} motivate our study by
showing that in a certain limiting case, a standard SSL problem can be
modeled as a regularized Laplacian (RL) PDE problem.
\S\ref{sec:rlpde}-\S\ref{sec:lap} derive the relationship between
the regularized Laplacian and geodesics and discuss convergence
issues as an important regularization term (viscosity) goes to zero.
\section{Prior Work}
The graph Laplacian is an important tool for regularizing
the solutions of unsupervised and semi-supervised learning
problems, such as classification and regression, in high-dimensional
data analysis \cite{Smola2003, Belkin2004, Belkin2006, Zhu2003, Luxburg2007}.
Similarly, estimating geodesic distances from sample points on a
manifold has important applications in manifold learning and SSL
(see, e.g., chapter~\ref{ch:npdr} and \cite{Tenenbaum2000,
Chapelle2005, Bengio2004}). Though heavily used in the learning and
geometry communities, these methods still raise many questions.
For example, with dimension $d>1$, graph Laplacian regularized
SSL does not work as expected in the large sample limit \cite{Nadler2009}.
It is also desirable to have geometric intuition about the behavior
of the solutions of models like those proposed in \cite{Belkin2006, Zhu2003}
in the limit when the data set is large.
To this end, we elucidate a connection between three important
components of analysis of points sampled from manifolds:
\begin{enumerate}
\item The inverse of the regularized weighted graph Laplacian matrix.
\item A special type of elliptic Partial Differential Equation (PDE) on the
manifold.
\item Geodesic distances on the manifold.
\end{enumerate}
This connection provides a novel geometric interpretation for
machine learning algorithms whose solutions require the regularized
inverse of a graph Laplacian matrix.
It also leads to a consistent geodesic distance estimator with two
desired properties: the complexity of the estimator depends only on the
number of sample points and the estimator naturally incorporates a
smoothing penalty.
\section{Manifold Laplacian, Viscosity, and Geodesics}
\label{sec:summary}
We motivate our study by first looking at a standard
semisupervised learning problem (\S\ref{sec:sslintro}).
We show that as the amount of data increases and regularization is
relaxed, this problem reduces to a PDE
(\S\ref{sec:sslassume}-\S\ref{sec:ssllimit}). We then analyze this PDE in
the low regularization setting to uncover new geometric insights into its
solution (\S\ref{sec:rlpde}-\S\ref{sec:transport}). In
\S\ref{sec:sslr}, these insights will allow us to analyze the original
SSL problem from a geometric perspective.
\subsection{A Standard SSL Problem}
\label{sec:sslintro}
We present a classic SSL problem in which points are sampled, some
with labels, and a regularized least squares regression is performed to
estimate the labels on the remaining points. The regression contains
two regularization penalties: a ridge regularization penalty and a graph Laplacian
``smoothing'' penalty.
Data points $\cX = \set{x_i}_{i=1}^n$ are sampled from a space $\cM
\subset \bbR^p$. The first $l$ of these $n$ points have associated
labels: $w_i \in \cL \subset \bbR, i=1,\ldots,l$. For binary
classification, one could take $\cL = \set{\pm 1}$.
The goal is to find a vector $\tf \in \bbR^n$ that approximates the $n$
samples at the points in $\cX$ of an unknown smooth function $f$ on $\cM$ and
minimizes the regression penalty $\sum_{i=1}^l (f(x_i)-w_i)^2$.
The solution is regularized by two penalties: the ridge
penalty $\|\tf\|_2$ and the graph Laplacian penalty
$\wt{J}(\tf) = \tf^T L \tf$; details of the construction
of $L$ are given in~\S\ref{sec:ssllimit}.
The second penalty approximately penalizes the gradient of $f$. It is
a discretization of the functional:
$
{J(f) = \int_{\cM} \norm{\grad f(x)}^2 d\cP(x) = \int_{\cM} f(x)\Delta
f(x) d\cP(x),}
$
where $\cP(x)$ is the sampling density.
To find $\tf$ we solve the convex minimization problem:
\begin{equation}
\label{eq:rlsp}
\min_{\tf \in \bbR^n} \sum_{i=1}^l (\tf_i-w_i)^2
+ \gamma_A \|\tf\|^2_2 + \gamma_I \wt{J}(\tf),
\end{equation}
where the nonnegative regularization parameters $\gamma_A$
and $\gamma_I$ depend on $l$ and $n$.
We can rewrite \eqref{eq:rlsp} in its matrix form. Let
$E_\cA = \diag{1\cdots 1\ \ 0\cdots 0}$ have $l$ ones followed by
$n-l$ zeros on its diagonal, and let $E_\cAp = I -
E_\cA$. Further, let $\wt{w} =
\left[w_1\ \cdots\ w_l\ \ 0\ \cdots\ 0\ \right]^T$ be a vector of the
labels $w_i$ for the first $l$ sample points, and zeros for the
unlabeled points. Then Eq.~\eqref{eq:rlsp} can be written as:
\begin{equation}
\label{eq:rlspdisc}
\min_{\tf \in \bbR^n} \|\wt{w}-E_\cA \tf\|^2_2 + \gamma_A \tf^T \tf +
\gamma_I \tf^T L \tf
\end{equation}
This is a quadratic program for $\tf$. Setting the gradient, with
respect to $\tf$, to zero yields the linear system
\begin{equation}
\label{eq:lrlssysorig}
\left(E_\cA + \gamma_A I + \gamma_I L \right) \tf = \wt{w}.
\end{equation}
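A minimal sparse-matrix sketch of this solve (ours; the construction of the
Laplacian and the choice of regularization parameters are left to the user)
is:
\begin{verbatim}
import numpy as np
from scipy.sparse import identity, diags
from scipy.sparse.linalg import spsolve

def ssl_estimate(L, labels, gamma_A, gamma_I):
    """Solve (E_A + gamma_A I + gamma_I L) f = w_tilde for the SSL estimate.
    L: (n x n) sparse graph Laplacian; labels: dict {i: w_i} for the l
    labeled points; gamma_A, gamma_I: regularization parameters."""
    n = L.shape[0]
    e = np.zeros(n)                        # diagonal of E_A
    w = np.zeros(n)                        # the vector w_tilde
    for i, wi in labels.items():
        e[i], w[i] = 1.0, wi
    M = diags(e) + gamma_A * identity(n) + gamma_I * L
    return spsolve(M.tocsc(), w)
\end{verbatim}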
The optimization problem \eqref{eq:rlsp} and the linear
system \eqref{eq:lrlssysorig} are related to two previously
studied problems. The first is graph regression with Tikhonov
regularization \cite{Belkin2004}. Problem
\eqref{eq:lrlssysorig} is closely related to the one solved in
Algorithm 1 of that paper, where we replace their general penalty
$\gamma S$ term with the more specific form $\gamma_A I + \gamma_I L$.
The ridge penalty $\gamma_A I$ encourages stability in the solution,
replacing their zero-sum constraint on $f$.
The second related problem is Laplacian
Regularized Least Squares (LapRLS) of \cite{Belkin2006}.
Specifically, \eqref{eq:lrlssysorig} is identical to Eq.~35 of
\cite{Belkin2006} with one of the regularization terms removed by
setting the kernel matrix $K$ equal to the identity. In that
framework, the matrix $K$ is defined as $K_{ij} \propto
e^{-\norm{x_i-x_j}^2/(2 \sigma^2)}$, $i,j = 1,\ldots,n$ for some
$\sigma > 0$.
A choice of $\sigma > 0$
regularizes for finite sample size and sampling noise
\cite[Thm.~2, Remark 2]{Belkin2006}. Eq.~\eqref{eq:lrlssysorig} is
thus closely related to the limiting solution to the LapRLS problem
in the noiseless case, where the sampling size $n$ grows and $\sigma$
shrinks quickly with $n$.
We study the following problem: suppose the function $w$
is sampled without noise on specific subsets of
$\cM$. The estimate $\tf$ represents an \emph{extension} of $w$ to
the rest of $\cM$. What does this extension look like, and how
does it depend on the geometry of $\cM$? The first step is to
understand the implications of the noiseless case on
\eqref{eq:lrlssysorig}; we study this next.
\subsection{SSL Problem -- Assumptions}
\label{sec:sslassume}
We now list our main assumptions for the SSL problem in
\eqref{eq:lrlssysorig}:
\begin{enumerate}
\item \label{it:as1} $\cM$ is a compact $d$-dimensional ($d<p$)
Riemannian manifold $(\cM,g)$.
\item \label{it:as2} The labeled points $\set{x_i}_{i=1}^l$ are
sampled from a regular and closed nonempty subset $\cA \subset \cM$.
\item \label{it:as3} The labels $\set{w_i}_{i=1}^l$ are sampled from a
smooth (e.g. $C^2$ or Lipschitz) function $w : \cA \to \cL \subset
\bbR$.
\item \label{it:as4} The density $\cP$ is nonzero everywhere on
$\cM$, including on $\cA$.
\end{enumerate}
These assumptions ensure that the points $\cX$ are sampled without
noise from a bounded and smooth space, and that the labels are sampled,
also without noise, from a bounded
and smooth function. We will use these assumptions to show the
convergence of $\tf$ to a smooth function $f$ on $\cM$.
In assumption \ref{it:as2} we use the term regular in the PDE
sense \cite[Irregular Boundary Point]{Hazewinkel1995}; we discuss
this further in~\S\ref{sec:irr}. We call $\cA$ the
\emph{anchor set} (or anchor); note that it is not necessarily
connected. In addition, let $\cAp = \cM \bs
\cA$ denote the complement set. In assumption \ref{it:as3}, for
$w$ to be smooth it suffices that it is smooth on all connected
components of $\cA$. Thus we can allow $\cL$ to take on discrete
values, as long as the classes they represent are separated from each
other on $\cM$. We call the function $w$ the \emph{anchor condition}
or \emph{anchor function}. Note finally that assumption \ref{it:as4}
implies that the labeled data size $l$ grows with $n$.
As we have assumed that there is no noise
on the labels (assumptions \ref{it:as2} and \ref{it:as3}), we will not
apply a regularization penalty to the labeled data. On the labeled
points, therefore, \eqref{eq:lrlssysorig} reduces to $E_\cA \tf = \wt{w}$.
Hence, the regularized problem becomes an interpolation problem.
The ridge penalty, now restricted to the unlabeled data, changes from
$\tf^T \tf$ to $\tf^T E_\cAp \tf$. The Laplacian penalty function
becomes ${J(f) = \int_{\cAp} f(x) \lap f(x) d\cP(x)}$, and the
discretization of this penalty similarly changes from $\tf^T L \tf$ to
$(E_\cAp \tf)^T E_\cAp L \tf$.
The original linear system \eqref{eq:lrlssysorig} thus becomes
\begin{align}
\label{eq:lrlssys} &M_n \tf = \wt{w}, \\
\nonumber \text{where }
&M_n = E_\cA + \gamma_A(n) E_\cAp + \gamma_I(n) E_\cAp L.
\end{align}
We note that the dimensions of $M_n$, $\tf$, and $\wt{w}$
in \eqref{eq:lrlssys} grow with $n$.
A more detailed explanation of the component terms in the
original problem \eqref{eq:rlspdisc} and the linear systems
\eqref{eq:lrlssysorig} and \eqref{eq:lrlssys} is available in
\S\ref{app:lrldetail}, where we show that the solution
to the simplified problem \eqref{eq:lrlssys} depends only on the ratio
$\gamma_I(n) / \gamma_A(n)$. We will return to this in~\S\ref{sec:ssllimit}.
\subsection{SSL Problem -- the Large Sample Limit -- Preliminary Results}
\label{sec:ssllimit}
We study the linear system \eqref{eq:lrlssys} as $n \to \infty$,
and prove that given an appropriate choice of graph Laplacian $L$ and
growth rate of the regularization parameters $\gamma_A(n)$ and $\gamma_I(n)$,
the solution $\tf$ converges to the solution $f$ of a
particular PDE on $\cM$. The convergence occurs in the $\ell_\infty$
sense with probability $1$.
Our proof of convergence has two parts. First, we must
show that $M_n \tf$ models a forward PDE with increasing accuracy as
$n$ grows large; this is called \emph{consistency}. Second, we must
show that the norm of $M_n^{-1}$ does not grow too quickly as $n$
grows large; this is called \emph{stability}. These two results will
combine to provide the desired proof. As all of our estimates rely on
the choice of graph Laplacian matrix on the sample points $\cX$, we
first detail the specific construction that we use throughout:
\begin{enumerate}
\item Construct a weighted nearest neighbor (NN) graph $G = (\cX,\cE,\td)$
from $k$-NN or $\delta$-ball neighborhoods of $\cX$,
where edge $(l,k) \in \cE$ is assigned the distance
$\td_{lk} = \norm{x_k-x_l}_{\bbR^p}$.
\item Choose $\epsilon > 0$ and let $(\hPe)_{lk} = \exp(-\td_{kl}^2 /
\epsilon)$ if $(l,k) \in \cE$, $1$ if $l=k$, and $0$ otherwise.
\item Let $\hDe$ be diagonal with $(\hDe)_{kk} = \sum_l (\hPe)_{kl}$,
and normalize for sampling density by setting $\Ae = \hDe^{-1} \hPe
\hDe^{-1}$.
\item Let $\De$ be diagonal with $(\De)_{kk} = \sum_l (\Ae)_{kl}$ and define the
row-stochastic matrix $\Pe = \De^{-1}\Ae$.
\item The asymmetric normalized graph Laplacian is $\Le = (I-\Pe)/\epsilon$.
\end{enumerate}
We will use $\Le$ from now on in place of the generic Laplacian matrix
$L$.
Regardless of the sampling density, as $n \to
\infty$ and $\epsilon(n) \to 0$ at the appropriate rate, $\Le$
converges (with probability 1) to the Laplace-Beltrami operator on
$\cM$: ${\Le \to -c \lap}$, for some $c > 0$, uniformly pointwise
\cite{Hein2005,Singer2006,Coifman2006,Ting2010} and in spectrum
\cite{Belkin2008th}. The concept of correcting for sampling density
was first suggested in \cite{Lafon2004}.
This convergence forms the basis of our consistency argument.
We first introduce a result of \cite{Singer2006}, which shows that
with probability 1, as $n \to \infty$, the system \eqref{eq:lrlssys} with
$L=\Le_{(n)}$ consistently models the Laplace-Beltrami operator in the
$\ell_\infty$ sense.
Let $\pi_\cX : L^2(\cM) \to \bbR^n$ map any square integrable function
on the manifold to the vector of its samples on the discrete set $\cX
\subset \cM$.
\begin{thm}[Convergence of $\Le$: \cite{Singer2006}, Eq.~1.7]
\label{thm:convLe}
Suppose we are given a compact Riemannian manifold $\cM$ and smooth
function $f : \cM \to \bbR$. Suppose the points $\cX$ are sampled iid
from everywhere on $\cM$. Then, for $n$ large and $\epsilon$ small,
with probability $1$
$$
(\Le \pi_\cX(f))_i = -c \lap f(x_i)
+ O\left(\frac{1}{n^{1/2} \epsilon^{1/2 + d/4}} + \epsilon \right),
$$
where $\lap$ is the negatively defined Laplace-Beltrami operator on
$\cM$, and $c>0$ is a constant. Choosing $\epsilon = C
n^{-1/(3+d/2)}$ (where $C$ depends on the geometry of $\cM$) leads to
the optimal bound of $O(n^{-1/(3+d/2)})$. Following
\cite{Hein2005}, the convergence is uniform.
\end{thm}
As convergence is uniform in Thm.~\ref{thm:convLe}, we may write the bound
in terms that we will use throughout:
\begin{equation}
\label{eq:bdconvLe}
\norm{\Le \pi_\cX (f) - c' \pi_\cX (\lap f)}_\infty =
O(n^{-1/(3+d/2)}),
\end{equation}
where $c'=-c$ and $\norm{\cdot}_\infty$ is the vector infinity norm.
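As an informal numerical illustration of this convergence (ours, using the
kernel construction above without a neighborhood cutoff), one can sample the
unit circle, on which $f(\theta)=\cos\theta$ is an eigenfunction of the
Laplace-Beltrami operator, and check that $\Le$ annihilates constants and that
$\Le \pi_\cX(f)$ is, up to the unknown constant $c$, proportional to
$\pi_\cX(f)$:
\begin{verbatim}
import numpy as np

n, eps = 1500, 0.05
rng = np.random.default_rng(1)
theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, n))
X = np.column_stack([np.cos(theta), np.sin(theta)])     # samples on S^1 in R^2

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)     # squared ambient dists
Khat = np.exp(-d2 / eps)                                 # kernel (no cutoff)
rs = Khat.sum(axis=1)
A = Khat / np.outer(rs, rs)                              # density normalization
P = A / A.sum(axis=1, keepdims=True)                     # row-stochastic P_eps
L = (np.eye(n) - P) / eps                                # Laplacian L_eps

fvec = np.cos(theta)             # lap f = -f on S^1, so L f ~ c f with c > 0
print(np.allclose(L @ np.ones(n), 0.0))                  # L kills constants
print(np.corrcoef(L @ fvec, fvec)[0, 1])                 # strongly positive;
                                                         # -> 1 in the limit above
\end{verbatim}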
We now show that $M_n$ (as defined in \eqref{eq:lrlssys}) is consistent:
\begin{cor}[Consistency of $M_n$]
\label{cor:consistency}
Assume that $\bdM = \emptyset$ or that $\bdM \subset \cA$. Let $F_n$ be
the following operator for functions $f \in C^2(\cAp) \cap C^0(\cA)$:
\begin{equation}
\label{eq:defFn}
F_n f(x) = \begin{cases}
\gamma_A(n) f(x) - c \gamma_I(n) \lap f(x) & x \in \cAp \\
f(x) & x \in \cA
\end{cases}
\end{equation}
Then, under the same conditions as in Thm.~\ref{thm:convLe}, with probability $1$
for $n$ large and $\epsilon$ chosen as in Thm.~\ref{thm:convLe},
$\norm{M_n \pi_\cX(f) - \pi_\cX(F_n f)}_\infty = O(n^{-1/(3+d/2)})$.
\end{cor}
\begin{proof}
This follows directly from Thm.~\ref{thm:convLe} and the fact that
$(E_\cS f)_i = f(x_i) \delta_{x_i \in \cS}$ for any set
$\cS \subset \cM$.
\end{proof}
Our notion of stability is described in terms of certain limiting
inequalities. We use the notation $a_n <_n b_n$ to mean that
there exists some $n_0$ such that for all $n' > n_0$, $a_{n'} <
b_{n'}$.
\begin{prop}[Stability of $M_n$]
\label{prop:fstable}
Suppose that $\gamma_I(n)/\gamma_A(n) <_n
\epsilon(n)/\kappa$, $\kappa>2$. Then
$\norm{M_n^{-1}}_\infty = O(\gamma_A^{-1}(n))$.
\end{prop}
\noindent The proof of Prop. \ref{prop:fstable} is mainly technical and is given
in~\S\ref{sec:deferproof}.
If we modify $M_n$ as
\begin{equation}
\label{eq:lrlssysp}
M'_n = E_\cA + E_\cAp(I + h^2 \Le),
\end{equation}
we can also state the following corollary, which will be useful in
chapter~\ref{ch:rlapp}.
\begin{cor}[Stability of $M'_n$]
\label{cor:fpstable}
Suppose $h^2(n) <_n \epsilon(n)/\kappa$, $\kappa>2$. Then
$\norm{{M'}_n^{-1}}_\infty = O(1)$.
\end{cor}
\subsection{SSL Problem -- the Large Sample Limit -- Convergence Theorem}
\label{sec:ssllimitthm}
We are now ready to state and prove our main theorem about the convergence of
$\tf$. We assume that $\cM$ has empty boundary, or that $\bdM \subset
\cA$. For the case of nonempty manifold boundary $\bdM \not \subset
\cA$, there are additional constraints at this boundary in the
resulting PDE. We discuss this case in~\S\ref{app:derivbd}.
Note that, by assumption \ref{it:as2} of~\S\ref{sec:sslassume}, the anchor set
$\cA$ and its boundary $\partial \cA$ are not empty.
\begin{thm}[Convergence of $\tf$ under $M_n$]
\label{thm:fconv}
Consider the solution of $M_n \tf_n = \wt{w}$
(Eq.~\eqref{eq:lrlssys} with $L=\Le$), with $\cM$, $\cA$, and $w$ as
described in~\S\ref{sec:sslassume}, and with either $\bdM =
\emptyset$ or $\bdM \subset \cA$.
Further, assume that
\begin{enumerate}
\item \label{it:fc1} $\epsilon(n)$ shrinks as given in
Thm.~\ref{thm:convLe}.
\item \label{it:fc2} $\lim_{n \to \infty} \epsilon(n)/\gamma_A(n) = 0$.
\item \label{it:fc3} As in Prop. \ref{prop:fstable}, $\gamma_I(n) <_n
\gamma_A(n) \epsilon(n) / \kappa$ for some $\kappa > 2$.
\end{enumerate}
\noindent Then for $n$ large, with probability 1,
$$
\norm{\tf_n - \pi_\cX f_n}_\infty <_n \epsilon(n)/\gamma_A(n)
$$
where the function $f_n \in C^2(\cM)$ is the unique, smooth, solution
to the following PDE for the given $n$:
\begin{equation}
\label{eq:lrlssim2}
f_n(x) - c\frac{\gamma_I(n)}{\gamma_A(n)} \lap f_n(x) = 0, \quad x \in \cAp
\qquad \text{and} \qquad
f_n(x) = w(x), \quad x \in \cA.
\end{equation}
\end{thm}
\begin{proof}[Proof of Thm.~\ref{thm:fconv}, Convergence]
Let $F_n$ be given by \eqref{eq:defFn} and
let $f_n$ be the solution to \eqref{eq:lrlssim2}. The existence and
uniqueness of $f_n$, under assumptions \ref{it:as1}-\ref{it:as3} of
\S\ref{sec:sslassume}, is well known (we show it in
\S\ref{sec:rlpde}).
We bound $\norm{\tf_n-\pi_\cX f_n}_\infty$ as follows:
\begin{align*}
\norm{\tf_n - \pi_\cX f_n}_\infty &= \norm{M_n^{-1}\wt{w} - \pi_\cX f_n}_\infty \\
&= \norm{M_n^{-1} (\pi_\cX (F_n f_n) - M_n \pi_\cX f_n)}_\infty \\
&\leq \norm{M_n^{-1}}_\infty \norm{\pi_\cX(F_n f_n) - M_n \pi_\cX f_n}_\infty.
\end{align*}
From stability (Prop. \ref{prop:fstable}), $\norm{M_n^{-1}}_\infty =
O(\gamma_A^{-1}(n))$, and by consistency (Cor. \ref{cor:consistency}),
with probability 1 for large $n$,
$\norm{\pi_\cX(F_n f_n) - M_n \pi_\cX f_n}_\infty = O(\epsilon(n))$.
The theorem follows upon applying assumption \ref{it:fc2}.
\end{proof}
If we modify $M_n$ in Thm.~\ref{thm:fconv} as in \eqref{eq:lrlssysp},
we obtain Cor. \ref{cor:fpconv} below.
\begin{cor}[Convergence of $\tf'$ under $M'_n$]
\label{cor:fpconv}
Consider the solution of $M'_n \tf_n' = \wt{w}$, with $\cM$, $\cA$,
and $w$ as described in~\S\ref{sec:sslassume}, and with $\bdM =
\emptyset$ or $\bdM \subset \cA$.
Further, assume $\epsilon(n)$ shrinks as given in
Thm.~\ref{thm:convLe}, and $h(n)$ as in
Cor. \ref{cor:fpstable}. Then for $n$ large, with probability 1,
$$
\norm{\tf_n' - \pi_\cX f_n}_\infty = O(\epsilon(n))
$$
where $f_n \in C^2(\cM)$ again solves \eqref{eq:lrlssim2}:
$$
f_n(x) - h^2(n) \lap f_n(x) = 0, \quad x \in \cAp
\qquad \text{and} \qquad
f_n(x) = w(x), \quad x \in \cA.
$$
\end{cor}
\begin{proof}[Proof of Cor. \ref{cor:fpconv}]
Similar to that of Thm.~\ref{thm:fconv}, with stability given by
Cor. \ref{cor:fpstable}.
\end{proof}
We will call \eqref{eq:lrlssim2} the Regularized Laplacian PDE (RL
PDE) with the regularization parameter $h(n) = \sqrt{c \gamma_I(n) /
\gamma_A(n)}$. By assumptions \ref{it:fc1} and \ref{it:fc2} of
Thm.~\ref{thm:fconv}, $h(n) \to 0$ as $n \to \infty$. This, and the
analysis in~\S\ref{sec:sslassume}, motivate us to study the RL PDE
when $h$ is small to gain some insight into its solution, and hence
into the behavior of the original SSL problem.
\subsection{The Regularized Laplacian PDE}
\label{sec:rlpde}
We now study the RL PDE in greater detail.
We will assume a basic knowledge of differential geometry on compact
Riemannian manifolds throughout. For basic definitions and notation,
see App.~\ref{app:diffgeom}.
We rewrite the RL PDE \eqref{eq:lrlssim2}, now
denoting the explicit dependence on a parameter $h$, and making the
problem independent of sampling:
\begin{equation}
\label{eq:rl}
- h^2 \lap f_h(x) + f_h(x) = 0 \quad x \in \cAp
\qquad\text{and}\qquad
f_h(x) = w(x) \quad x \in \cA,
\end{equation}
where $\cM$, $\cA$, and $w$ are defined as in~\S\ref{sec:sslassume}.
The idea is that $f_h$ is specified smoothly on $\cA$ and by solving
\eqref{eq:rl} we seek a smooth extension of $f_h$ to all of $\cM$.
The RL PDE, \eqref{eq:rl}, has been well studied
\cite[Thm.~6.22]{Gilbarg1983}, \cite[App.~A]{Jost2001}.
It is uniformly elliptic, and for $h>0$ it admits a unique, bounded
solution ${f_h \in C^2(\cAp) \cap C^0(\cM)}$.
The boundedness of $f_h$ follows from the strong maximum principle
\cite[Thm.~3.5]{Gilbarg1983}. One consequence is that $f_h$
will not extrapolate beyond an interval determined by the anchor
values.
\begin{prop}[\cite{Jost2001},~\S A.2] \label{prop:bound}
The RL PDE \eqref{eq:rl} has a unique, smooth solution that is bounded within the range
$(w_-,w_+)$ for $x \in \cAp$, where
$$
w_- = \inf_{y \in \cA} \min(w(y),0)
\qquad\text{and}\qquad
w_+ = \sup_{y \in \cA} \max(w(y),0).
$$
\end{prop}
Our goal is to understand the solution of \eqref{eq:rl} as the
regularization term vanishes, i.e., $h \downarrow 0$. To do so,
we introduce the Viscous Eikonal Equation.
\subsection{The Viscous Eikonal Equation}
\label{sec:ve}
The RL PDE is closely related to what we will call the Viscous Eikonal
(VE) equation. This is the following ``smoothed'' Hamilton-Jacobi
equation of Eikonal type:
\begin{equation}
\label{eq:ve}
- h \lap S_h(x) + \norm{\grad S_h(x)}^2 = 1 \quad x \in \cAp
\qquad\text{and}\qquad
S_h(x) = u_h(x) \quad x \in \cA.
\end{equation}
The term containing the Laplacian is called the \emph{viscosity term},
and $h$ is called the \emph{viscosity parameter}.
The two PDEs, \eqref{eq:rl} and \eqref{eq:ve}, are connected via
the following proposition:
\begin{prop}
\label{prop:rl2ve}
Consider \eqref{eq:rl} with $h>0$ and $w(x) > 0$, and \eqref{eq:ve} with
$u_h(x) = -h \log w(x)$. When the solution $f_h$ of \eqref{eq:rl}
exists, $S_h = -h \log f_h$ is the unique, smooth,
bounded solution to \eqref{eq:ve}.
\end{prop}
\begin{proof}
Let $f_h$ be the unique solution of \eqref{eq:rl}. From Prop. \ref{prop:bound},
$f_h > 0$ for $x \in \cAp$. Apply the inverse
of the smooth monotonic bijection $\tau_h(t) = e^{-t/h}$, $\tau_h :
[0,\infty) \to (0,1]$ to $f_h$. Let $R_h = -h \log f_h$, hence $f_h =
e^{-R_h/h}$.
We will need the standard product rule for the divergence
``$\grad \cdot$''. When $f$ is a differentiable function and $\bm{f}$
is a differentiable vector field,
\begin{equation}
\label{eq:prodrule}
\grad \cdot (f \bm{f}) = (\grad f) \cdot \bm{f} + (\grad \cdot \bm{f}) f.
\end{equation}
Since $f_h$ satisfies \eqref{eq:rl}, on $\cAp$ we have:
\begin{align}
\label{eq:rl2ve1}
0 &= -h^2 \grad \cdot \grad e^{-R_h/h} + e^{-R_h/h} \\
\label{eq:rl2ve2}
&= h \grad \cdot (e^{-R_h/h} (\grad R_h)) + e^{-R_h/h} \\
\label{eq:rl2ve3}
&= h (\grad e^{-R_h/h} \cdot \grad R_h
+ e^{-R_h/h} \grad \cdot \grad R_h) + e^{-R_h/h} \\
\label{eq:rl2ve4}
&= e^{-R_h/h} (- \norm{\grad R_h}^2 + h \lap R_h + 1).
\end{align}
Here, from \eqref{eq:rl2ve1} to \eqref{eq:rl2ve2} we use the
chain rule, and from \eqref{eq:rl2ve2} to \eqref{eq:rl2ve3} we
use the product rule \eqref{eq:prodrule}.
After dropping the positive multiplier in \eqref{eq:rl2ve4},
we see that $R_h$ satisfies the first part of \eqref{eq:ve}.
Further, $R_h \in C^2(\cAp) \cap C^0(\cM)$ because $\tau_h$ is a
smooth bijection. Similarly, $R_h$ is bounded because $f_h$ is:
${R_h \in [-h \log w_+, -h \log w_-)}$.
Finally, on the boundary $\cA$ we have $e^{-R_h/h} = w$, equivalently $R_h =
-h \log w$. Hence, $R_h$ solves \eqref{eq:ve}.
\end{proof}
To summarize: for $h>0$, \eqref{eq:ve} has a unique solution $S_h \in
C^2(\cAp)$ (Prop. \ref{prop:rl2ve}; see also
\cite{Gilbarg1983,Jost2001}).
We are interested in solutions of the VE equation both for $h=0$ and
for small $h>0$ with more general
$u_h$. When $h=0$ and $u_h=0$, it is well known that on a compact
Riemannian manifold $\cM$, \eqref{eq:ve} models propagation from $\cA$
through $\cAp$ along shortest paths. Results are known
for a number of important cases, and we will discuss them after
describing the following assumption.
\theoremstyle{plain}
\newtheorem{assm}{Assumption}[section]
\begin{assm}
\label{conj:vedist}
For $h=0$ and $u_0$ sufficiently regular, \eqref{eq:ve} has the
unique \emph{viscosity} solution:
\begin{equation}
\label{eq:vedist}
S_0(x) = \inf_{y \in \cA} \left( d(x,y) + u_0(y) \right),
\end{equation}
where $d(x,y)$ is the geodesic distance between $x$ and $y$ through $\cAp$.
Furthermore, as $h \to 0$, $S_h$ converges to $S_0$ in $L^p(\cAp)$, $1
\leq p < \infty$, and in $(L^\infty(\cAp))^*$ (i.e. essentially
pointwise) when $u_h$ converges to $u_0$ in the same sense. The rate
of convergence is
$\norm{S_0-S_h}_\infty^* = O(h)$.
\end{assm}
\noindent From now on we will denote $S_0$ simply by $S$.
\begin{proof}[Discussion of Assum. \ref{conj:vedist} on compact $\cM$]
To our knowledge, a complete proof of \eqref{eq:vedist} for compact
Riemannian $\cM$ is not known; the theory of unique viscosity
solutions (nondifferentiable in some areas) on manifolds is
an open area of research \cite{Crandall1992, Azagra2008}. However,
below we cite known partial results.
Eq.~\eqref{eq:vedist} was shown to hold for $u_0 = 0$ on compact
$\cM$ in \cite[Thm.~3.1]{Mantegazza2002}, and for $u_0$ sufficiently regular on
bounded, smooth, and connected subsets of $\bbR^d$ in \cite[Thms. 2.1, 6.1,
6.2]{Lions1982}, and e.g., when $u_0$ is Lipschitz
\cite[Eq.~4.23]{Kruzkov1975}. Convergence and the convergence rate of
$S_h$ to $S_0$ were also shown on such Euclidean subsets in
\cite[Eq.~69]{Lions1982}. Conditions of convergence to a viscosity
solution are not altered under the exponential map \cite[Cor.
2.3]{Azagra2008}, thus convergence in local coordinates around
$\cA$ (which follows from \cite[Thm.~6.5]{Lions1982} and
Prop. \ref{prop:rl2ve}) implies convergence on open subsets of
$\cAp$. However, global convergence of $S_h$ to $S_0$ on $\cM$ is
still an open problem.
Despite the lack of a formal proof, and in line with the partial
results cited above, our numerical experiments on a variety of
nontrivial compact Riemannian manifolds (e.g. compact subsets of
hyperbolic paraboloids) give additional evidence that this convergence
\emph{is} achieved.
\end{proof}
\subsection{What happens when $h$ converges to $0$: Transport Terms}
\label{sec:transport}
To study the relationship between $S_h$ and
$S$, we look for a higher order expansion of $f_h$ using a tool
called Transport Equations \cite{Schuss2005}.
Assume $f_h$ can be expanded into the following form:
\begin{equation}
\label{eq:trans}
f_h(x) = e^{-R(x)/h} \sum_{k \geq 0} h^{\alpha k} Z_k(x)
\end{equation}
with $\alpha>0$. The terms $Z_k$, $k = 0,1,\ldots$ are called the
transport terms. Substitution of this form into \eqref{eq:rl} will
give us the conditions required on $R$ and $Z_k$.
\begin{thm}
\label{thm:transport}
If \eqref{eq:trans}, (with $\alpha = 1$) is a solution to the RL PDE
\eqref{eq:rl} for all $h>0$, then:
\begin{equation}
\forall x \in \cA \text{ and } \forall k \geq 1 \qquad
R(x) = 0,
\qquad Z_0(x) = w(x),
\qquad Z_k(x) = 0,
\end{equation}
and \eqref{eq:rl} reduces to a series of PDEs:
\begin{align}
0 &= -\norm{\grad R}^2 + 1 \nonumber \\
\label{eq:trl}
0 &= Z_0 \lap R + 2 \grad R \cdot \grad Z_0 \\
0 &= Z_{k} \lap R + 2 \grad R \cdot \grad Z_{k} - \lap Z_{k-1},
\quad k > 0 \nonumber
\end{align}
In particular, letting $d_\cA(x) = \inf_{y \in \cA} d(x,y)$ denote
the shortest geodesic distance from $x$ to $\cA$, we have that
$R(x) = d_\cA(x)$ everywhere.
\end{thm}
\begin{proof}
The anchor conditions follow from the fact that for all $h>0$, $w=Z_0
e^{-R/h}+ h Z_1 e^{-R/h} + \ldots$ (thus forcing $R=0$ and
therefore $Z_0 = w$, and $Z_k = 0, \forall k > 0$).
Plugging \eqref{eq:trans} into \eqref{eq:rl}, and applying the product
and chain rules, we get
$$
0 = \sum_{k \geq 0} e^{-R/h} h^k \left(-h^2 \lap Z_k + 2 h \grad R \cdot \grad Z_k
+ h Z_k \lap R - Z_k \norm{\grad R}^2 + Z_k \right).
$$
Eqs. \eqref{eq:trl} follow after collecting like powers of $h$.
Explicitly, the coefficient of $h^0$ is $Z_0(1 - \norm{\grad R}^2)$,
which forces the Eikonal equation $\norm{\grad R}^2 = 1$ wherever
$Z_0 \neq 0$; using this, the coefficient of $h^m$ for $m \geq 1$
reduces to
$$
Z_{m-1} \lap R + 2 \grad R \cdot \grad Z_{m-1} - \lap Z_{m-2} = 0,
\qquad Z_{-1} \equiv 0,
$$
which is \eqref{eq:trl} with $k = m-1$.
\end{proof}
Thm.~\ref{thm:transport} shows first that $R$ is determined by the
Eikonal equation with zero boundary conditions. Second, it shows that
$Z_0$ is the dominant term affected by the boundary values $w$ as $h
\downto 0$. For $k>0$, the transport terms $Z_k$ are affected by $w$
via $Z_{k-1}$, but these are not the dominant terms for small $h$.
The existence, uniqueness, and smoothness of $Z_0$ on $\cAp$ away
from the cut locus of $\cA$ is proved in~\S\ref{sec:deferproof}
(Thm.~\ref{thm:Zsmooth}).
Note that the choice of $\alpha=1$ is not arbitrary.
For $\alpha<1$ in \eqref{eq:trans}, \eqref{eq:rl} does not admit a
consistent set of solvable transport equations.
For $\alpha = 2$, the resulting transport equations reduce to those of
Eqs. \eqref{eq:trl} (the odd-$k$ terms are forced to zero and
the even-$k$ terms are related to each other via Eqs. \eqref{eq:trl}).
\subsection{Manifold Laplacian and Vanishing Viscosity}
\label{sec:lap}
We now combine Assum. \ref{conj:vedist} and Thm.~\ref{thm:transport} in
a way that summarizes the solution of the RL PDE \eqref{eq:rl} for
small $h$, taking into account possible arbitrary nonnegative boundary
conditions.
\begin{thm}
\label{thm:rl2e}
Let $w(x)\geq 0, \forall x \in \cA$ and let $\hat{\cA} = \supp{w}$.
Further, define
$
f^*_h(x) = w(x') e^{-d_{\cA}(x)/h}
$
where $x' = \arg\inf_{y \in \cA} d(x,y)$.
Then, provided Assum. \ref{conj:vedist} holds, for small $h > 0$ the
solution of \eqref{eq:rl} with sufficiently regular anchor $\cA$
satisfies:
\begin{align}
\abs{f_h(x) - f^*_h(x)} &= O(h e^{-d_\cA(x)/h}); \text{and}
\label{eq:rlappr}\\
\lim_{h \to 0} -h \log f_h(x) &= d_{\hat{\cA}}(x). \label{eq:rllim}
\end{align}
\end{thm}
\begin{proof} First, apply Thm.~\ref{thm:transport} to decompose $f_h$
in terms of $R$ and the transport terms $Z_k$, $k \geq 0$. Next,
by Thm.~3.1 of \cite{Mantegazza2002}, as discussed in
Assum. \ref{conj:vedist}, we obtain $R(x) = d_{\cA}(x)$.
We can therefore write $f_h(x) = Z_0(x) e^{-d_\cA(x)/h} + O(h
e^{-d_\cA(x)/h})$. Further, $Z_0(x)$ is unique and smooth
on $\cAp$ away from the cut locus of $\cA$,
and satisfies the boundary conditions ($Z_0(x) = w(x)$ for $x \in
\cA$). This can be shown using the method of characteristics
(Thm.~\ref{thm:Zsmooth}). This verifies \eqref{eq:rlappr}.
Showing that \eqref{eq:rllim} holds requires more work due to possible
zero boundary conditions on $\cA$. To prove \eqref{eq:rllim}, we
find a sequence of PDEs, parametrized by viscosity $h$ and ``height''
$c>0$; we denote these solutions $\hat{f}_{h,c}(x)$. These solutions
match $f_h$ as $h \to 0$. We then show that for large $c$, they
also match $f_h$ for nonzero $h$.
Let $\hat{\cA}(c,h) = \set{x \in \cA : w(x) > e^{-c/h}}$ and
$\cA_0(c,h) = \set{x \in \cA : w(x) \leq e^{-c/h}}$. We define
$\hat{f}_{h,c}$ as the solution to \eqref{eq:rl} with the modified
boundary conditions
\begin{equation}
w_{h,c}(x) = \begin{cases}
w(x) & x \in \hat{\cA}(c,h) \\
e^{-c/h} & x \in \cA_0(c,h)
\end{cases}. \label{eq:whc}
\end{equation}
This is a modification of the original problem with a lower bound
saturation point of $e^{-c/h}$. Clearly, as $h \to 0$, $w_{h,c}(x)
\to w(x)$ on the boundary.
As in Prop. \ref{prop:rl2ve}, for fixed $c$ we can write $w_{h}(x) =
e^{-u_{h}(x)/h}$ for any $x \in \cA$ and $h>0$. Then
$$
u_{h}(x) = \begin{cases} c & x \in \cA_0(c,h) \\
-h \log w(x) & x \in \hat{\cA}(c,h)
\end{cases}.
$$
Let $\cA_0 = \set{x \in \cA : w(x) = 0}$ and $\hat{\cA} = \set{x
\in \cA : w(x) > 0}$, and define
$$
u_{0}(x) = \begin{cases} c & x \in \cA_0 \\
0 & x \in \hat{\cA}
\end{cases}.
$$
Then
$$
u_h(x)-u_0(x) = \begin{cases}
0 & w(x) = 0 \\
c & 0 < w(x) \leq e^{-c/h} \\
-h \log w(x) & w(x) > e^{-c/h}
\end{cases}.
$$
As $w(x)$ is regular and $\cA$ is compact, $u_h \to u_0$ pointwise on
$\cA$, and in particular almost everywhere on $\cA$. Furthermore,
for any $h>0$, $u_h \leq c$ everywhere on $\cA$.
Therefore $u_h \to u_0$ in $L^p$ for all $p \geq 1$
\cite[Prop. 6.4]{Kubrusly2007}.
The rate of convergence, $O(h)$, is determined
by the set of points $\set{x : w(x) > e^{-c/h}}$.
Thus, by Prop. \ref{prop:rl2ve} and Assum. \ref{conj:vedist},
\begin{equation}
\hat{S}_{0,c}(x) = \lim_{h \to 0} -h \log \hat{f}_{h,c}(x) = \min
\left(d_{\hat{\cA}}(x), c + d_{\cA_0}(x)\right).
\label{eq:Scx}
\end{equation}
To match the boundary conditions of $\hat{f}_{h,c}$ to those of
$f_h$ for a fixed $h>0$, we must choose $c$ large in \eqref{eq:whc}.
Subsequently, when $c$ is large in \eqref{eq:Scx}, e.g., when $c \geq
\text{diam}(\cM)$, we have $\hat{S}_{0,c}(x) = d_{\hat{\cA}}(x)$.
This verifies \eqref{eq:rllim}.
\end{proof}
When $w(x) = 1$ on $\cA$, and for small $h$, the exponent
of $f_h$ directly encodes $d_\cA$. The following simple
example illustrates Thm.~\ref{thm:rl2e}. Additional examples on the
Torus $T=S^1 \x S^1$ and on a complex triangulated mesh are included
in~\S\ref{sec:geoexmp} of chapter~\ref{ch:rlapp}.
\begin{exmp}[The Annulus in $\bbR^2$]
\label{ex:annul}
Let $\cM = \set{r_0 \leq r \leq 1}$, where $r=\norm{x}$ is the
distance to the origin. Let $\cA = \set{r=r_0} \cup \set{r=1}$ be the inner
and outer circles. Letting $w=1$ ($u_h=0$), we get $S(r) = d_{\cA}(r)
= \min(1-r,r-r_0)$.
For symmetry reasons, we can assume a radially symmetric solution to
the RL Eq. For a given dimension $d$, the radial Laplacian is
$\lap f(r) = f''(r) + (d-1)r^{-1} f'(r)$.
So \eqref{eq:rl}
becomes:
$-h^2 \left(f_h''(r) + r^{-1} f_h'(r) \right) + f_h(r) = 0$
for $r \in (r_0,1)$, and $f_h(r_0)=f_h(1)=1$.
The solution, as calculated in Maple \cite{Maple10}, is
$$
f_h(r) = \frac{I_0(r/h)K_0(1/h)-I_0(r/h)K_0(r_0/h) -
K_0(r/h)I_0(1/h)+K_0(r/h)I_0(r_0/h)}{K_0(1/h)I_0(r_0/h) -
K_0(r_0/h)I_0(1/h)},
$$
where $I_j$ and $K_j$ are the $j$'th order modified Bessel functions
of the first kind and second kind, respectively.
A series expansion of $f_h(r)$ around $h=0$ (partially calculated with
Maple) gives
\begin{equation}
\label{eq:fhr}
f_h(r) = \sqrt{r_0/r} e^{-(r-r_0)/h} + \sqrt{1/r} e^{-(1-r)/h} + O(h)
\end{equation}
The limiting behavior of $f_h$ as $h$ grows small is governed by the
exponents of the two terms in \eqref{eq:fhr}: depending on whether $r$
is nearer to $r_0$ or to $1$, one of the two terms drops out in the
limit.
From here, it is easy to check that $\lim_{h \to 0} -h \log
f_h(r) = S(r)$, confirming \eqref{eq:rllim}.
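This can also be checked numerically from the closed-form Bessel
solution above. The following is a minimal Python sketch (using
\texttt{scipy.special}; the moderate values of $h$ are chosen so that
$I_0(1/h)$ stays within double precision, and the exact tolerances are
illustrative):
\begin{verbatim}
import numpy as np
from scipy.special import iv, kv            # modified Bessel functions I_nu, K_nu

r0 = 0.25
def f_h(r, h):
    # closed-form solution of the radial RL ODE on the annulus r0 <= r <= 1
    num = (iv(0, r/h) * kv(0, 1/h) - iv(0, r/h) * kv(0, r0/h)
           - kv(0, r/h) * iv(0, 1/h) + kv(0, r/h) * iv(0, r0/h))
    den = kv(0, 1/h) * iv(0, r0/h) - kv(0, r0/h) * iv(0, 1/h)
    return num / den

r = np.linspace(r0 + 0.01, 0.99, 200)
S = np.minimum(1 - r, r - r0)               # true distance to the anchor circles
for h in [0.1, 0.05, 0.02]:
    S_h = -h * np.log(f_h(r, h))
    print(h, np.max(np.abs(S_h - S)))       # the gap shrinks as h decreases
\end{verbatim}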
We simulated this problem with $r_0=0.25$ by sampling
$n=1500$ points from the ball $B(0,1.25)$, rescaling points having
$r \in [0,0.25)$ to $r=0.25$, and rescaling points having $r \in
(1,1.25]$ to $r=1$. $S_h$ is approximated up to a constant using
the numerical discretization, via \eqref{eq:lrlssysp}, of
\eqref{eq:lrlssim2}. For the graph Laplacian we used a $k=20$
NN graph and $\epsilon=0.001$.
\begin{figure}\label{fig:geoannul}
\end{figure}
Fig.~\ref{fig:geoannul} shows (in the $z$ axis) the estimate $S_h(x)$
as $h$ grows small. The colors of the points reflect the true
distance to $\cA$: $S(r) = S_0(r) = \min(1-r,r-r_0)$. Note the
convergence as $h \downto 0$, and also the clear offset of $S_h$ which
is especially apparent in the right panel at $r=0.25$ and $r=1$.
From the second of Eqs. \eqref{eq:trl} and the fact that $\abs{S'(r)} =
1$ and $\lap S(r) = S'(r)/r$, we have $Z_0/r + 2 Z'_0 = 0$ for $r_0 < r \leq 1$,
and $Z_0 = 1$ for $r \in \set{r_0,1}$. To solve this near $r=r_0$, we use
the boundary condition $Z_0(r_0)=1$ and get $Z_0(r) = \sqrt{r_0/r}$.
Likewise, near $r=1$ we use the boundary condition $Z_0(1)=1$ and get
$Z_0(r) = \sqrt{1/r}$.
Near $r=r_0$, the solution becomes $f_h(r) =
e^{-(r-r_0)/h}(\sqrt{r_0/r} + O(h))$, and near $r=1$, it becomes
$f_h(r) = e^{-(1-r)/h}(\sqrt{1/r} + O(h))$, which match the earlier
series expansion of the full solution.
Furthermore, upon an additional Taylor expansion near $r=r_0$, we have
$S_h(r) = r-r_0 - h \log(r_0/r)/2 + O(h \log h)$.
Note the extra term in the $S_h$ estimate,
which has a large effect when $r-r_0$ is small (as seen
in the right pane of Fig.~\ref{fig:geoannul}). A similar expansion
can be made around the outer circle, at $r=1$.
\end{exmp}
\subsection{The SSL Problem of~\S\ref{sec:sslintro}, Revisited}
\label{sec:sslr}
Armed with our study of the RL PDE, we can now return
to the original SSL problem of~\S\ref{sec:sslintro}.
Suppose the anchor is composed of two simply connected domains $\cA_0$
and $\cA_1$, where $w$ takes on the constant values $c_0$ and $c_1$,
respectively, within each domain.
When $c_1 > c_0 \geq 0$, we can directly apply the result of
Thm.~\ref{thm:rl2e} to \eqref{eq:lrlssim2}. The solution, for
$\gamma_I \ll \gamma_A$, is given by \eqref{eq:rlappr} with
$h=\sqrt{c \gamma_I / \gamma_A}$:
$$
f(x) \approx w(x') \sup_{y \in \cA} {e^{- d(x,y) \sqrt{\gamma_A / c
\gamma_I} }}
\qquad \text{where} \qquad x' = \arg\inf_{y \in \cA} d(x,y)
$$
The solution depends on both the geometry of $\cM$ (via the
geodesic distance to $\cA_0$ or $\cA_1$) and on
the values chosen to represent the class labels. For example,
suppose $\cL = \set{0,1}$. As $n$ grows large and $h$ grows small, we
apply \eqref{eq:rllim} to see that the classifier is biased towards
the class in $\cA_0$:
\begin{equation}
\label{eq:sslasymm}
f(x) \approx \sup_{y \in \cA_1} e^{-d(x,y) \sqrt{\gamma_A / c
\gamma_I}}.
\end{equation}
Choosing the symmetric labels $\cL = \set{c_0,-c_0}$ is more natural.
In this case, we decompose \eqref{eq:rl} into two problems:
\begin{align*}
-h^2 \lap f_{h,0} + f_{h,0} &= 0 \quad x \in \cAp
\qquad \text{and} \qquad f_{h,0} = c_0 \quad x \in \cA_0;
\quad f_{h,0} = 0 \quad x \in \cA_1 \\
-h^2 \lap f_{h,1} + f_{h,1} &= 0 \quad x \in \cAp
\qquad \text{and} \qquad f_{h,1} = 0 \quad x \in \cA_0;
\quad f_{h,1} = -c_0 \quad x \in \cA_1,
\end{align*}
and note that by linearity of the problem and the separation of the
anchor conditions, the solution to \eqref{eq:rl} is
given by $f_h = f_{h,0} + f_{h,1}$.
Therefore, by taking $h=\sqrt{c \gamma_I / \gamma_A}$, we separate
\eqref{eq:lrlssim2} into two problems with
nonnegative anchor conditions (one in $f_{h,0}$ and one in $-f_{h,1}$).
Applying the result of Thm.~\ref{thm:rl2e} to each of these
individually, and combining the solutions, yields
\begin{equation}
\label{eq:sslsymm}
f(x) \approx c_0 \sup_{y \in \cA_0} e^{-d(x,y)/h}
- c_0 \sup_{y \in \cA_1} e^{-d(x,y)/h}
\propto e^{-d_{\cA_0}(x)/h} - e^{-d_{\cA_1}(x)/h}.
\end{equation}
This solution is zero when $d_{\cA_0}(x) = d_{\cA_1}(x)$, positive
when $d_{\cA_0}(x) < d_{\cA_1}(x)$, and negative otherwise. That is,
in the noiseless, low regularization regime with symmetric anchor values,
algorithms like LapRLS classification assign the point $x$ to the
class that is closest in geodesic distance.
We illustrate this with a simple example of classification on the
sphere $S^2$.
\begin{exmp}
\label{ex:sslsphere}
We sample $n=1000$ points from the sphere $S^2$ at random, and define
the two anchors $\cA_0 = B_g((1,0,0),\pi/16)$ and $\cA_1 =
B_g((-1,0,0), \pi/16)$. Here $B_g(x,\theta)$ is a cap of angle
$\theta$ around point $x$. The associated anchor labels are $w_0 =
+1$ and $w_1 = -1$.
We discretize the Laplacian $\Le$ using $k=50$,
and $\epsilon = 0.001$ and solve \eqref{eq:lrlssys}.
Fig.~\ref{fig:sslsphere} compares the numerical
solutions at small $h$ to our estimates from \eqref{eq:sslsymm}.
The two solutions are comparable up to a positive multiplicative
factor (due to the fact that $\Le$ converges to $\lap$ times a
constant).
$\square$
\begin{figure}
\caption{Classification on the sphere $S^2$ (Ex.~\ref{ex:sslsphere}):
$f_h$ on $\cM$ (numerical, log scale); $S_h$ (numerical); $S_h$
(prediction).}
\label{fig:sslsa}
\label{fig:sslsb}
\label{fig:sslsc}
\label{fig:sslsphere}
\end{figure}
\end{exmp}
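For concreteness, a minimal Python sketch in the spirit of
Ex.~\ref{ex:sslsphere} is given below. It uses a dense Gaussian-kernel,
random-walk-normalized Laplacian as a simple stand-in for the $k$-NN
construction of $\Le$, so the parameter values ($\epsilon$, $h$) are
illustrative rather than those used in the experiment:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)          # n samples on the sphere S^2

# two anchor caps of angle pi/16 around (1,0,0) and (-1,0,0), labels +1 / -1
ang0 = np.arccos(np.clip(X @ np.array([1.0, 0.0, 0.0]), -1, 1))
ang1 = np.pi - ang0                                     # angle to (-1,0,0)
w = np.where(ang0 < np.pi/16, 1.0, 0.0) + np.where(ang1 < np.pi/16, -1.0, 0.0)
anchored = w != 0

# dense Gaussian-kernel, random-walk-normalized Laplacian L = (I - D^{-1} K)/eps
eps = 0.01
D2 = np.square(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2))
P = np.exp(-D2 / eps)
P /= P.sum(axis=1, keepdims=True)
L = (np.eye(n) - P) / eps

# solve (E_A + E_A'(I + h^2 L)) f = w  and classify unlabeled points by sign(f)
h = 0.1
E_A = np.diag(anchored.astype(float))
E_Ap = np.eye(n) - E_A
f = np.linalg.solve(E_A + E_Ap @ (np.eye(n) + h**2 * L), w)

pred = np.sign(f[~anchored])
truth = np.where(ang0[~anchored] < ang1[~anchored], 1.0, -1.0)
print("agreement with nearest-cap rule:", np.mean(pred == truth))
\end{verbatim}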
\section{Technical Details}
\subsection{Deferred Proofs}
\label{sec:deferproof}
\subsubsection{Stability of $M_n$}
To prove the stability of $M_n$, we first need to present
some notation. The matrices $\Pe, \Le,$ and $\De$ can be written in
terms of submatrices to simplify the exposition.
Separating these matrices into submatrices associated with the
$l$ labeled points and the $n-l$ unlabeled points, we write:
\begin{align*}
\Pe &= \pmat{P_{ll} & P_{lu} \\ P_{ul} & P_{uu} }, \quad
\Le = \pmat{L_{ll} & L_{lu} \\ L_{ul} & L_{uu}}
= \frac{1}{\epsilon} \pmat{I - P_{ll} & - P_{lu} \\ - P_{ul} & I - P_{uu} }, \quad
\\
\text{and }
\De &= \pmat{D_{ll} & 0 \\ 0 & D_{uu} }.
\end{align*}
Note that the two identity matrices in the definition of $\Le$ are of
sizes $l$ and $n-l$, respectively.
We will also need a lemma bounding the spectrum of the matrix
$I-P_{uu}$.
\begin{lem}
\label{lem:bdspec}
The matrix $I-P_{uu}$ is bounded in spectrum between $0$ and $1$.
\end{lem}
\begin{proof}
The Hermitian matrix $\tilde{P}_\epsilon = \De^{-1/2} \Ae \De^{-1/2}$ has
eigenvalues bounded between $0$ and $1$. The eigenvalues of its lower
right principal submatrix, $\tilde{P}_{uu}$, are therefore also bounded
between $0$ and $1$ \cite[Thm.~4.3.15]{Horn1985}. Finally, $P_{uu}$
is similar to $\tilde{P}_{uu}$ via the transformation $P_{uu} =
D_{uu}^{-1/2} \tilde{P}_{uu} D_{uu}^{1/2}$.
\end{proof}
We are now ready to prove the stability of $M_n$.
\begin{proof}[Proof of Prop. \ref{prop:fstable}, Stability]
We first expand $M_n$ in block matrix form:
$$
M_n = \pmat{I & 0 \\ \gamma_I L_{ul} & G_n}
\quad \text{where} \quad G_n = \gamma_A(n) I + \gamma_I(n) L_{uu}.
$$
From the block matrix inverse formula, the inverse of $M_n$ is:
$$
M_n^{-1} = \pmat{I & 0 \\ -G_n^{-1} \gamma_I L_{ul} & G_n^{-1}},
$$
and the norm may be bounded as:
\begin{equation}
\label{eq:maxFn}
\norm{M_n^{-1}}_\infty \leq \max \left(1, \norm{G_n^{-1}\gamma_I
L_{ul}}_\infty + \norm{G_n^{-1}}_\infty \right)
\end{equation}
where we use the inequalities $\norm{(A^T \ B^T)^T}_\infty =
\max(\norm{A}_\infty,\norm{B}_\infty)$ and $\norm{(C\ \ D)}_\infty \leq
\norm{C}_\infty + \norm{D}_\infty$.
We first expand $G_n^{-1} = (\gamma_A(n) I + \gamma_I(n) L_{uu})^{-1} =
\gamma_A^{-1}(n) \left(I + \frac{\gamma_I(n)}{\gamma_A(n) \epsilon(n) }(I - P_{uu})
\right)^{-1}$.
By Lem. \ref{lem:bdspec}, $I-P_{uu}$ is bounded in spectrum between $0$ and
$1$, and by the first assumption in the proposition, when $\kappa > 2$
we have $\gamma_I <_n \gamma_A \epsilon(n) / \kappa <_n \gamma_A(n) \epsilon(n)$.
Thus there exists some $n_0$ so that for all $n'>n_0$ we can write
$$
G_n^{-1}
= \gamma_A^{-1}(n) \left(I + \frac{\gamma_I(n)}{\gamma_A(n) \epsilon(n) }(I - P_{uu}) \right)^{-1}
= \gamma_A^{-1} \sum_{k = 0}^\infty \left( \frac{\gamma_I(n)}{\gamma_A(n)
\epsilon(n)} \right)^k (P_{uu}-I)^k.
$$
Now we use this expansion to bound $\norm{G_n^{-1}}_\infty$. Let $a_n
= \gamma_I(n) (\gamma_A(n) \epsilon(n))^{-1}$. As the norm is
subadditive,
\begin{equation}
\label{eq:Gnibnd}
\norm{G_n^{-1}}_\infty \leq \gamma_A^{-1}(n) \sum_{k \geq 0} a^k_n \norm{P_{uu}-I}_\infty^k.
\end{equation}
Furthermore, we can bound
$\norm{P_{uu}-I}_\infty^k$ as follows. Since the entries of $P_{uu}$
are nonnegative, $\norm{P_{uu}-I}_\infty \leq
\norm{(P_{uu}+I)1}_\infty$ where $1$ is a vector of all ones. Further,
since $P_{uu}$ is a submatrix of a stochastic matrix,
$\norm{(P_{uu}+I)1}_\infty \leq 2$. Thus
since $a_n = \gamma_I(n) (\gamma_A(n) \epsilon(n))^{-1} <_n \kappa^{-1} < 1/2$,
for $n$ large enough \eqref{eq:Gnibnd} is bounded by the
geometric sum:
$$
\norm{G_n^{-1}}_\infty \leq \gamma_A^{-1}(n) \sum_{k \geq 0} (2a_n)^k = \frac{1}{\gamma_A(n)(1-2 a_n)}
= \frac{\epsilon(n)}{\epsilon(n) \gamma_A(n) - 2 \gamma_I(n)}
$$
and this last term is bounded based on our initial assumption:
$\epsilon(n)(\epsilon(n) \gamma_A(n) - 2 \gamma_I(n))^{-1} <_n \gamma_A^{-1}(n) (1-2 \kappa^{-1})^{-1}$. Thus, for large enough $n$,
$\norm{G_n^{-1}}_\infty \leq \gamma_A^{-1}(n) (1-2 \kappa^{-1})^{-1}$.
Now we bound $\norm{G_n^{-1} \gamma_I L_{ul}}_\infty$. Note
that $\norm{G_n^{-1} \gamma_I(n) L_{ul}}_\infty \leq \gamma_I(n)
\norm{G_n^{-1}}_\infty \norm{L_{ul}}_\infty$. As $L_{ul} = -P_{ul}$ and
$\Pe$ is stochastic, $\norm{L_{ul}}_\infty \leq 1$. Putting together
these two steps, we have $\norm{G_n^{-1} \gamma_I(n) L_{ul}}_\infty \leq
\gamma_I(n) \norm{G_n^{-1}}_\infty$.
Combining these two bounds, \eqref{eq:maxFn} finally becomes
$$
\norm{M_n^{-1}}_\infty \leq \max{\left( 1,
\gamma_A(n)^{-1}(1-2 \kappa^{-1})^{-1} (1 + \gamma_I(n))\right)}.
$$
For small $\gamma_A(n) > 0$, the second term is the maximum and the result
follows.
\end{proof}
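As a numerical sanity check of the geometric-series bound on
$\norm{G_n^{-1}}_\infty$, the following minimal Python sketch uses a
random row-stochastic matrix as a stand-in for $\Pe$ and parameter
values chosen only for illustration; it checks the inequality
$\norm{G_n^{-1}}_\infty \leq \gamma_A^{-1}(n)(1-2\kappa^{-1})^{-1}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, l = 400, 40
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)           # row-stochastic stand-in for P_eps
P_uu = P[l:, l:]                            # lower-right (unlabeled) block

eps, kappa, gamma_A = 0.01, 4.0, 1e-3
gamma_I = 0.5 * gamma_A * eps / kappa       # satisfies gamma_I < gamma_A * eps / kappa
G = gamma_A * np.eye(n - l) + (gamma_I / eps) * (np.eye(n - l) - P_uu)

lhs = np.abs(np.linalg.inv(G)).sum(axis=1).max()   # infinity norm of G^{-1}
rhs = (1.0 / gamma_A) / (1.0 - 2.0 / kappa)
print(lhs <= rhs, lhs, rhs)
\end{verbatim}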
\subsubsection{Characterization of $Z_0$}
We first need some preliminary definitions and results.
We define the \emph{cut locus} of the set $\cA$ as the closure of the set
of points in $\cAp$ where $d_\cA^2(x)$ is not differentiable (i.e.,
where there is more than one minimal geodesic between $x$ and $\cA$):
$$
\Cut(\cA) = \overline{\set{x \in \cAp \ |\
d_\cA^2 \text{ is not differentiable at } x}}.
$$
The cut locus and $d_\cA$ have several important properties, which
we now list:
\begin{enumerate}
\item \label{it:cutloc1} The Hausdorff dimension of $\Cut(\cA)$ is at
most $d-1$ \cite[Cor. 4.12]{Mantegazza2002}.
\item \label{it:cutloc2} $\Cut(\cA) \cup \cA$ is closed in $\cM$.
\item \label{it:cutloc3} The open set $\cAp \bs \Cut(\cA)$ can be
continuously retracted to $\partial \cA$.
\item \label{it:cutloc4} If $\cA \in C^r$ then $d_\cA$ is $C^r$ in
$\cAp \bs \Cut(\cA)$.
\end{enumerate}
\noindent Items \ref{it:cutloc2}-\ref{it:cutloc4} are proved in
\cite[Prop 4.6]{Mantegazza2002}.
Property \ref{it:cutloc1} shows that $d_\cA$ is smooth almost
everywhere on $\cAp$. Properties \ref{it:cutloc2}-\ref{it:cutloc3}
show that $\cAp \bs \Cut(\cA)$ is composed of a finite number of
disjoint connected components, each touching $\cA$. Finally, property
\ref{it:cutloc4} shows that $d_\cA$ is as smooth as the boundary
$\partial \cA$.
Let $\cM$ be a $d$-dimensional Riemannian manifold and let $\cA
\subset \cM$ be a Riemannian submanifold such that $\partial \cA$ is
regular (in the PDE sense). As in Thm.~\ref{thm:transport}, define the
differential equation in $Z_0$ as
\begin{align}
\label{eq:diffZ0}
Z_0(x) \lap d_\cA(x) + 2 \grad d_\cA(x) \cdot \grad Z_0(x) = 0 &\quad x \in \cAp \\
Z_0(x) = w(x) &\quad x \in \cA \nonumber
\end{align}
We first show that $Z_0$ of Thm.~\ref{thm:transport} has a unique,
smooth, local solution in a chart at $\cA$. To do this we will use
the method of characteristics \cite[Chap. 3]{Evans1998}. We will need
an established result for the local solutions of PDEs on open subsets
of $\bbR^d$.
Let $V$ be an open subset in $\bbR^d$ and let $\Gamma \subset \partial
V$. Let $u : V \to \bbR$ and $Du$ be its derivative on $\bbR^d$.
Finally, suppose $x \in V$ and let $w : \Gamma \to \bbR$. We study
the first-order~PDE
\begin{align*}
F(Du, u, x) = 0 &\qquad x \in V, \\
u = w &\qquad x \in \Gamma
\end{align*}
\noindent Note that we can write $F = F(p, z, x) : \bbR^d \x \bbR \x
\overline{V} \to \bbR$. The main test for existence, uniqueness, and
smoothness is the test for noncharacteristic boundary conditions.
\begin{defn}[\cite{Evans1998}, Noncharacteristic boundary condition]
\label{defn:nonchar}
Let $p^0 \in \bbR^d$, $z^0 \in \bbR$, and $x^0 \in \Gamma$. We say
the triple $(p^0,z^0, x^0)$ is noncharacteristic if
$$
D_p F(p^0, z^0, x^0) \cdot \nu(x^0) \neq 0,
$$
where $\nu(x^0)$ is the outward unit normal to $\partial V$ at $x^0$.
We also say that the noncharacteristic boundary condition holds at
$(p^0,z^0, x^0)$.
\end{defn}
\noindent This test is sufficient for local existence:
\begin{prop}[\cite{Evans1998},~\S 3.3, Thm.~2 (Local Existence)]
\label{prop:locexist}
Assume that $F(p,z,x)$ is smooth and that the noncharacteristic
boundary condition holds on $F$ for some triple $(p^0, z^0, x^0)$.
Then there exists a neighborhood $V'$ of $x^0$ in $\bbR^d$ and a
unique, $C^2$ function $u$ that solves the PDE
\begin{align*}
F(Du(x), u(x), x) = 0 &\qquad x \in V', \\
u(x) = w(x) &\qquad x \in \Gamma \cap V'.
\end{align*}
\end{prop}
\noindent We are now ready to prove the existence, uniqueness, and
smoothness of $Z_0$.
\begin{lem}Let $\cA'_0$ be one of the connected components of $\cAp
\bs \Cut(\cA)$. Then on any chart $(U,\phi)$ that satisfies $U
\subset \cA'_0 \cup \cA$ and for which $U \cap \cA$ is sufficiently
regular, the differential equation \eqref{eq:diffZ0} has a unique and
smooth solution.
\label{lem:Zsmoothchart}
\end{lem}
\begin{proof}
Under the diffeomorphism $\phi$, \eqref{eq:diffZ0} is modified.
Choose a point $u_0 \in U \cap \cA$ and apply $\phi$. The boundary
$\cA$ becomes a boundary $\Gamma$ in $\bbR^d$. Let $V$
represent the rest of the mapped space. Eq.~\eqref{eq:diffZ0} then
becomes
\begin{align*}
Z_0(v) l(v) + 2 \sum_{i,j=1}^{d} g^{ij}(v) \partial_i d_{\cA}(v) \partial_j Z_0(v) = 0 &\quad v \in V \\
Z_0(v) = w(v) &\quad v \in \Gamma,
\end{align*}
where we abuse notation by writing $f(v) =
f(\phi^{-1}(v))$ for a function $f : \cM \to \bbR$, and where $l(v) =
\lap d_\cA(v)$ is the (smooth) Laplacian of $d_\cA(x)$ mapped into
local coordinates. Using the notation of Defn. \ref{defn:nonchar}, we
can write the equation above as $F(DZ_0(v), Z_0(v), v) = 0$ where
$$
F(p,z,v) = z l(v) + 2 \sum_{i,j=1}^{d} g^{ij}(v) \partial_i d_{\cA}(v) p_j,
$$
and therefore $D_p F(p,z,v)$ becomes
$$
(D_pF)^k(p,z,v) = 2 \sum_{i=1}^{d} g^{ik}(v) \partial_i d_{\cA}(v).
$$
\noindent for $k=1,\ldots,d$. At the point $\phi(u_0) = v^0 \in \Gamma$, the
outward unit normal is $\nu(v^0) = \grad d_\cA(v^0)$, which in local
coordinates is given by the vector $\nu^j(v^0) = \sum_{k=1}^{d} g^{jk}(v^0)
\partial_k d_{\cA}(v^0)$ for $j=1,\ldots,d$.
The uniqueness, existence, and smoothness of $Z_0$ near $\cA$ in this
chart follows by Prop. \ref{prop:locexist} after checking the
noncharacteristic boundary condition for $F$ at $v^0$:
\begin{align*}
D_p F(p^0, z^0, v^0) \cdot \nu(v^0)
&= 2 \sum_{k,j} g_{kj}(v^0)
\left(\sum_{i=1}^{d} g^{ik}(v^0) \partial_i d_{\cA}(v^0)\right)
\left(\sum_{m=1}^{d} g^{jm}(v^0) \partial_m d_{\cA}(v^0)\right) \\
&= 2 \sum_{k,j} g^{kj}(v^0) \partial_k d_{\cA}(v^0) \partial_j d_{\cA}(v^0) \\
&= 2 \norm{\grad d_{\cA}(v^0)}^2 = 2 \neq 0,
\end{align*}
where the last equality follows by definition of the distance function
in terms of the Eikonal equation.
\end{proof}
\begin{thm}
Let $\cA'_0$ be one of the connected components of $\cAp
\bs \Cut(\cA)$. The differential equation \eqref{eq:diffZ0} has a
unique and smooth solution on $\cA'_0$.
\label{thm:Zsmooth}
\end{thm}
\begin{proof}[Proof of Thm.~\ref{thm:Zsmooth}]
A local solution exists in an open ball around each point $u_0$ in the
region $\cA'_0 \cap \cA$, due to Lem. \ref{lem:Zsmoothchart}. The
size of each ball is bounded from below, so by compactness we can find
a finite number of subsets $U = \cup_i U_{0,i}$ that cover $\cA'_0
\cap \cA$, for which \eqref{eq:diffZ0} has a smooth unique solution,
and which overlap. As the charts overlap and the associated mappings
are diffeomorphic, a consistent, smooth, unique solution therefore
exists near $\cA$.
To extend this solution away from the boundary, we choose a small
distance $d_0$ such that every $x$ with $d_\cA(x) \leq d_0$ lies in
the initially solved region $U$.
This set, which we call $\cD_0$, is a contour of $d_\cA$ within
$\cA'_0$. From the previous argument, $Z_0$ has been solved up to
this contour, and we now look at an updated version of
\eqref{eq:diffZ0} by setting the new Dirichlet anchor conditions at
$\cD_0$ from the solved-for $Z_0$, and setting the interior of the
updated problem domain to the remainder of $\cA'_0$.
Let $U_0 = \set{x \in U : d_\cA(x) \leq d_0}$ and let $\cA'_1 = \cA'_0 \bs
U_0$. The method of characteristics also applies on $\cA_1'$ near
$\cD_0$. We apply Lem. \ref{lem:Zsmoothchart} with the updated
Dirichlet boundary conditions. As $\cD_0$ defines a contour of $d_\cA$, its outward
normal direction is $\grad d_\cA$. Similarly, $D_p F$ (of
Lem. \ref{lem:Zsmoothchart}) has not changed. A solution
therefore exists locally around each point $u_1 \in \cA'_1 \cap
\cD_0$. The process above can be repeated to ``fill in'' the solution
within all of $\cA'_0$.
\end{proof}
\subsection{Details of the Regression Problem of~\S\ref{sec:sslintro}}
\label{app:lrldetail}
In this deferred section, we decompose the problem \eqref{eq:rlspdisc} into two
parts: elements associated with the first $l$ labeled points (these
are given subscript $l$) and elements associated with the remaining
unlabeled points (given subscript $u$). This decomposition provides a
more direct look into how the assumptions in~\S\ref{sec:sslassume}
simplify the original problem, and how the resulting optimization
problem depends only on the ratio of the two parameters $\gamma_I$ and
$\gamma_A$.
We first rewrite \eqref{eq:rlspdisc}, expanding all the parts:
$$
\min_{\tf_l, \tf_u} \left\{
\norm{\pmat{w_l \\ 0} - \pmat{\tf_l \\ 0}}^2_2
+ \gamma_A \pmat{\tf_l \\ \tf_u}^T \pmat{\tf_l \\ \tf_u}
+ \gamma_I \pmat{\tf_l \\ \tf_u}^T \pmat{L_{ll} & L_{lu} \\ L_{ul} & L_{uu}} \pmat{\tf_l \\\tf_u}
\right\}.
$$
In this system, the optimization problems on $\tf_u$ and $\tf_l$ are
coupled by the matrix $L$.
The assumptions in~\S\ref{sec:sslassume} decouple
\eqref{eq:rlspdisc}. This comes from the equality constraint $E_\cA
\tf = \tilde{w}$, equivalently $\tf_l = \tilde{w}_l$. The problem is
further simplified by the restriction of the
integral domain from $\cM$ to $\cAp$ in the modified penalty
$J(f)$. We can write $J(f) = \int_\cAp f(x) \lap f(x) dx =
\int_\cM f(x) 1_{\set{x \in \cAp}} \lap f(x) dx$, and the
discretization of this term is $\wt{J}(\tf) = (E_\cAp \tf)^T L \tf$.
After these reductions, and the reduction of the ridge term to $\tf^T
E_\cAp \tf$, the problem \eqref{eq:rlspdisc} becomes:
\begin{align*}
\min_{\tf_u} \left\{
\gamma_A \tf_u^T \tf_u
+ \gamma_I \pmat{\tilde{w}_l \\ \tf_u}^T \pmat{0 & 0 \\ L_{ul} & L_{uu}}
\pmat{\tilde{w}_l \\ \tf_u}
\right\} \\
= \min_{\tf_u} \left\{
\gamma_A \tf_u^T \tf_u
+ \gamma_I (\tf_u^T L_{ul} \tilde{w}_l + \tf_u^T L_{uu} \tf_u )
\right\}.
\end{align*}
The solution to this problem, combined with the constraint $\tf_l =
\tilde{w}_l$, leads to \eqref{eq:lrlssys}.
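Written blockwise, the system \eqref{eq:lrlssys} can be solved in
closed form for the unlabeled values: $\tf_l = \tilde{w}_l$ and
$\tf_u = -\gamma_I G_n^{-1} L_{ul} \tilde{w}_l$ with $G_n = \gamma_A I
+ \gamma_I L_{uu}$ (cf. the block-inverse formula in
\S\ref{sec:deferproof}). The following minimal Python sketch, with a
random row-stochastic matrix standing in for $\Pe$ and illustrative
parameter values, spells this out:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, l, eps = 200, 20, 0.05
gamma_A, gamma_I = 1e-2, 1e-5

P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)            # stand-in for P_eps
L = (np.eye(n) - P) / eps                    # L_eps
L_ul, L_uu = L[l:, :l], L[l:, l:]
w_l = rng.random(l)                          # anchor values at the l labeled points

# closed form: f_l = w_l,  f_u = -gamma_I * G^{-1} L_ul w_l,  G = gamma_A I + gamma_I L_uu
G = gamma_A * np.eye(n - l) + gamma_I * L_uu
f_u = np.linalg.solve(G, -gamma_I * (L_ul @ w_l))

# agrees with solving the full system M_n f = (w_l, 0)
M = np.zeros((n, n))
M[:l, :l] = np.eye(l)
M[l:, :l] = gamma_I * L_ul
M[l:, l:] = G
f = np.linalg.solve(M, np.concatenate([w_l, np.zeros(n - l)]))
print(np.allclose(f[:l], w_l), np.allclose(f[l:], f_u))
\end{verbatim}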
The $\gamma_A$ term above normalizes the Euclidean norm of $\tf_u$, thus
earning it the mnemonic ``ambient regularizer''. The first $\gamma_I$
term is an inner product between $\tf_u$ and $L_{ul} \tilde{w}_l$. As
$L_{ul}$ is an averaging operator with negative coefficients, the
component $(L_{ul} \tilde{w}_l)_j$ contains the negative average of
the labels for points in $\cA$ near $x_j \in \cAp$. If $x_j$ is far
from $\cA$, this component is near zero. Minimizing $\tf_u^T L_{ul}
\tilde{w}_l$ therefore encourages points near $\cA$ to take on the
labels of their labeled neighbors. For points away from $\cA$ it has
no direct effect. Minimizing the second $\gamma_I$ term encourages a
diffusion of values between points in $\cAp$, thus diffusing these
near-boundary labels to the rest of the space. This process earns the
$\gamma_I$ term the mnemonic ``intrinsic regularizer'', because it
encourages diffusion of the labels across $\cM$.
Dividing the problem by $\gamma_A$, we see that the solution
depends only on the ratio $\gamma_I(n) / \gamma_A(n)$.
When $\gamma_I \gg \gamma_A$ the solution of \eqref{eq:rlsp}
is biased towards a constant \cite{Kim2009} on $\cAp$, equivalent to
solving the Laplace equation $\lap f = 0$ on $\cAp$ with the anchor
conditions $f = w$ on $\cA$. This case of heavy regularization is
useful when $n$ is small, but offers little insight about how the
solution depends on the geometry of $\cM$. We
are interested in the situation of light regularization: $\gamma_I(n)
= o(\gamma_A(n))$. We also independently see this assumption as a
requirement for convergence of $\tf$ in~\S\ref{sec:ssllimit}.
\subsection{The RL PDE with Nonempty Boundary ($\bdM \neq \emptyset$)\label{app:derivbd}}
When the boundary of $\cM$ is not empty,
Thm.~\ref{thm:fconv} and Cor. \ref{cor:fpconv} no longer apply in
their current form. In this section, we provide a road map for how
these results must be modified. We also argue why in the case of
small $h>0$, the limiting results (expressions for $S_h$ and $f^*_h$
in Assum. \ref{conj:vedist}, Thm.~\ref{thm:transport}, and
Thm.~\ref{thm:rl2e}) are not affected by these modifications.
Let $\cM_{\epsilon} = \set{x \in \cM : d_{\bdM}(x) >
\epsilon^{\gamma}}$ where $\gamma \in (0,1/2)$. For points
in the intersection of $\cA$ and $\cM \bs \cM_{\epsilon}$ (for example, when
$\cM \bs \cM_{\epsilon} \subset \cA$, and the anchor ``covers'' the boundary of $\cM$), we
need only consider the standard anchor conditions. For the other cases,
we proceed as follows.
It has been shown \cite[Prop. 11]{Coifman2006} that as $n \to \infty$:
\begin{enumerate}
\item $(\Le \pi_{\cX}(f))_i = -c \lap f(x_i) + O(\epsilon)$ for $x_i \in
  \cM_{\epsilon}$ (this matches Thm.~\ref{thm:convLe}).
\item For $x_i \in \cM \bs \cM_{\epsilon}$, $(\Le \pi_{\cX}(f))_i \approx
\pd{f}{\nu}(x'_i)$, where $x'_i$ is the nearest point in $\bdM$ to
$x_i$ and $\nu$ is the outward normal at $x'_i$. That is, near the
boundary $\Le$ takes the outward normal derivative.
\item This region $\cM \bs \cM_{\epsilon}$ is small, and shrinks with
decreasing $\epsilon$: ${\mu(\cM \bs \cM_{\epsilon}) = O(\epsilon^{1/2})}$.
\end{enumerate}
One therefore expects that Thm.~\ref{thm:fconv} and Cor. \ref{cor:fpconv}
still hold, albeit with the norms restricted to points in
$\cM_{\epsilon}$. More specifically, the set $\cA'$ must necessarily
become $\cAp(n) = \cM_{\epsilon(n)} \bs \cA$.
Furthermore, as $n \to \infty, \epsilon \to 0$, this set grows to
encompass more of $\cAp$.
As a result, the domains of the RL PDE \eqref{eq:rl} change.
It is hard to write down the boundary condition at
$\bdM$, precisely because there is no analytical description for how
$\Le$ acts on functions in $\cM \bs \cM_{\epsilon}$. However, from
item 2 above, we can model it as an unknown Neumann condition.
Fortunately, for vanishing viscosity (small $h$), the effect of this
second boundary condition disappears: the Eikonal equation depends only on
the (Dirichlet) conditions at $\cA$. More specifically, regardless of
other Neumann boundary conditions away from $\cA$,
Assum. \ref{conj:vedist} still holds and,
as a result, so do Thm.~\ref{thm:transport} and Thm.~\ref{thm:rl2e}.
This follows because the Eikonal equation is a first order
differential equation, and so some of the boundary conditions may be
dropped in the small $h$ approximation. A more rigorous discussion
requires a perturbation analysis (see, e.g., \cite{Nayfeh1973}).
We instead provide an example, mimicking Ex. \ref{ex:annul},
except now we let the anchor domain be the inner circle only.
\begin{exmp}[The Annulus in $\bbR^2$ with reduced anchor]
\label{ex:annulred}
Let $\cM = \set{r_0 \leq r \leq 1}$, where $r=\norm{x}$ is the
distance to the origin. Let $\cA = \set{r=r_0}$ be the inner
circle. Letting $w=1$ ($u_h=0$), we get $S(r) = d_{\cA}(r) = r-r_0$.
We again assume a radially symmetric solution to
the RL Eq. and \eqref{eq:rl} becomes:
$-h^2 \left(f_h''(r) + r^{-1} f_h'(r) \right) + f_h(r) = 0$
for $r \in (r_0,1]$, $f_h(r_0)=1$.
Furthermore, since the boundary condition at $r=1$ is unknown, we
set it to be an arbitrary Neumann condition: $f'_h(1) = b$.
The solution is
$$
f_h(r) = \frac{bh[I_0(r/h)K_0(r_0/h)-K_0(r/h)I_0(r_0/h)]
+ I_0(r/h)K_1(1/h) + K_0(r/h) I_1(1/h)}
{K_0(r_0/h)I_1(1/h)+K_1(1/h)I_0(r_0/h)}.
$$
A series expansion of $f_h(r)$ around $h=0$ gives
$f_h(r) = \sqrt{r_0/r} e^{-(r-r_0)/h} + O(h)$,
and therefore $\lim_{h \to 0} -h \log f_h(r) = S(r)$
(again confirming \eqref{eq:rllim}).
We simulated this problem with $r_0=0.25$ by sampling
$n=1000$ points from the ball $B(0,1)$, and rescaling points
with $r \leq r_0$ to $r=r_0$. $S_h$ is approximated up to a constant
using the numerical discretization, via \eqref{eq:lrlssysp}, of
\eqref{eq:lrlssim2}. For the graph Laplacian we used a $k=20$ NN
graph and $\epsilon=0.001$.
\begin{figure}\label{fig:geoannulred}
\end{figure}
Fig.~\ref{fig:geoannulred} shows (in the $z$ axis) the estimate $S_h(x)$
as $h$ grows small. The colors of the points reflect the true
distance to $\cA$: $S(r) = S_0(r) = r-r_0$. Note the convergence as $h
\downto 0$, and also the clear offset of $S_h$ which is especially
apparent in the right pane near $r=0.25$.
From the second of Eqs. \eqref{eq:trl} and the fact that $S'(r) =
1$ and $\lap S(r) = 1/r$, we have $Z_0/r + 2 Z'_0 = 0$ for $r_0 < r \leq 1$
and $Z_0 = 1$ for $r=r_0$. Solving this we get $Z_0(r) = \sqrt{r_0/r}$.
The solution becomes $f_h(r) = e^{-(r-r_0)/h}(\sqrt{r_0/r} +
O(h))$, which matches the earlier series expansion of the full solution.
Furthermore, upon an additional Taylor expansion we have
$S_h(r) = r-r_0 - h \log(r_0/r)/2 + O(h \log h)$.
As before, the extra term in the $S_h$ estimate
has a large effect when $r-r_0$ is small (as seen
in the right pane of Fig.~\ref{fig:geoannulred}).
\end{exmp}
\section{Conclusion and Future Work\label{sec:concl}}
We have proved that the solution to the SSL problem \eqref{eq:lrlssys}
converges to the sampling of a smooth solution of a Regularized
Laplacian PDE, in certain limiting cases. Furthermore, we have
applied the established theory of Viscosity PDE solutions to analyze
this Regularized Laplacian PDE. Our analysis leads to a geometric
framework for understanding the regularized graph Laplacian in the
noiseless, low regularization regime (where $h \to 0$). This framework
provides intuitive explanations for, and validation of, machine
learning algorithms that use the inverse of a regularized Laplacian
matrix.
We have taken the first steps in extending the theoretical analysis in
this chapter to manifolds with boundary (\S\ref{app:derivbd}).
While the results of that section can be confirmed numerically, in
some cases additional work must be done to confirm them in full
generality. Furthermore, Assum. \ref{conj:vedist} awaits confirmation
within the viscosity theory community.
There are a host of applications derived from the work in this
chapter, and we turn our focus to them in chapter~\ref{ch:rlapp}.
\chapter{The Inverse Regularized Laplacian: Applications\label{ch:rlapp}}
\section{Introduction}
Thanks to the theoretical development in chapter \ref{ch:rl},
we now have a framework within which we can construct new tools for
learning (e.g. a regularized geodesic distance estimator and a new
multiclass classifier). These tools can also shed light on other
results in the literature (e.g. a result of \cite{Nadler2009}).
Throughout this chapter we will use the notation developed in chapter
\ref{ch:rl}.
\section{Regularized Nearest Sub-Manifold (NSM) Classifier}
\label{sec:geo}
We now construct a new robust geodesic distance estimator and employ
it for classification. We then demonstrate the classifier's efficacy on
several standard data sets.
To construct the estimator, first choose some anchor
set $\cA \subset \cM$, and suppose the points $\set{x_i}_{i=1}^l$ are
sampled from $\cA$. To calculate the distance $d_\cA(x_i)$ for $i=l+1
\ldots n$, construct the normalized graph Laplacian
$\Le$. Choosing $\wt{h} > 0$ appropriately,
solve the linear system \eqref{eq:lrlssysp}:
\begin{equation}
\label{eq:egeo}
(E_\cA + E_\cAp(I + \wt{h}^2 \Le)) a = \wt{w}
\end{equation}
where $\wt{w} = \left[1\ \cdots\ 1\ \ 0\ \cdots\ 0\ \right]^T$ is a
vector of all zeros for sample points in $\cAp$ and all ones for
sample points in $\cA$. For $n$ large, $\epsilon$ small, and
$\wt{h}$ small, this linear system approximates \eqref{eq:rl}
with $h = \wt{h} \sqrt{c}$. Applying Thm. \ref{thm:rl2e}, we see
that $\wt{S}_i = -\wt{h} \log a_i \approx c^{-1/2} d_\cA(x_i)$.
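To make the construction concrete, the following is a minimal Python
sketch of the estimator; a dense Gaussian-kernel, random-walk-normalized
Laplacian stands in for the $k$-NN construction of $\Le$, and all
names and parameter values are illustrative:
\begin{verbatim}
import numpy as np

def geodesic_estimate(X, anchor_mask, h, eps):
    """-h*log of the solution of the anchored linear system; estimates d_A up to a constant."""
    n = X.shape[0]
    # dense Gaussian-kernel, random-walk-normalized Laplacian (stand-in for L_eps)
    D2 = np.square(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2))
    P = np.exp(-D2 / eps)
    P /= P.sum(axis=1, keepdims=True)
    L = (np.eye(n) - P) / eps
    w = anchor_mask.astype(float)            # 1 on the anchor, 0 on A'
    E_A = np.diag(w)
    E_Ap = np.eye(n) - E_A
    a = np.linalg.solve(E_A + E_Ap @ (np.eye(n) + h**2 * L), w)
    return -h * np.log(np.maximum(a, 1e-300))

# example: distance (up to a constant) to a small arc of the unit circle in R^2
theta = np.random.default_rng(3).uniform(0, 2*np.pi, 500)
X = np.c_[np.cos(theta), np.sin(theta)]
anchor = np.minimum(theta, 2*np.pi - theta) < 0.1
S_tilde = geodesic_estimate(X, anchor, h=0.05, eps=0.01)
\end{verbatim}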
While the estimator $\wt{S}$ is approximate and only valid up to a
constant, it is also simple to implement and consistent (due to
Cor. \ref{cor:fpconv}).
We know of two other consistent geodesics estimators that work on
point samples from $\cM$. One performs fast
marching by constructing complex local upwind schemes
that require the iterative solution of sequences of
high dimensional quadratic systems \cite{Sethian2000}. Another
performs fast marching in $\bbR^p$ on offsets of $\cX$ and is also
approximate \cite{Memoli2005}. The first scheme is complex
to implement; the second is exponential in the ambient dimension $p$.
Our estimator, on the other hand, can be implemented in Matlab in
under 10 lines, given one of many fast approximate NN
estimators. Furthermore, it requires the solution of a linear system
of size essentially $n$, so its complexity depends only on the number
of samples $n$, not on the ambient dimension $p$.
Finally, our scheme allows for a natural regularization by tweaking
the viscosity parameter $\wt{h}$.
\S\ref{sec:geoexmp} contains numerical comparisons between our
estimator and, e.g. Dijkstra's Shortest Path and Sethian's Fast
Marching estimators.
The lack of dynamic range in the estimator $\wt{S}$, following
\eqref{eq:egeo}, leads to important numerical considerations. According to
Thm. \ref{thm:rl2e}, for a given sampling $\cX$ one would choose
$\wt{h} \ll \min_{e \in \cE} \td_e$ to have an accurate estimate of
geodesics for all point samples. In this case, however, many
points far from $\cA$ may have their associated estimate $a_i$ drop
below the machine epsilon. In this case an iterative multiscale
approach will work: estimates are first calculated for points nearest
to $\cA$ for which no estimate yet exists (but $a_i$ is above machine
epsilon), then $\wt{h}$ is multiplied by some factor $\gamma > 1$,
and the process is repeated.
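A minimal Python sketch of this multiscale loop is given below; it
takes a precomputed Laplacian $L$ as input, and the tolerance, growth
factor, and round limit are illustrative choices rather than tuned
values:
\begin{verbatim}
import numpy as np

def multiscale_geodesics(L, anchor_mask, h0, gamma=2.0, tol=1e-12, max_rounds=20):
    """Fill in -h*log(a) estimates in rounds, growing h when a_i underflows."""
    n = L.shape[0]
    w = anchor_mask.astype(float)
    E_A = np.diag(w)
    E_Ap = np.eye(n) - E_A
    S = np.full(n, np.nan)
    S[anchor_mask] = 0.0
    h = h0
    for _ in range(max_rounds):
        a = np.linalg.solve(E_A + E_Ap @ (np.eye(n) + h**2 * L), w)
        new = (a > tol) & np.isnan(S)      # usable estimates not yet filled in
        S[new] = -h * np.log(a[new])       # later rounds carry a larger O(h) offset
        if not np.isnan(S).any():
            break
        h *= gamma                         # increase the viscosity and repeat
    return S
\end{verbatim}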
We now use the above estimator to form the Nearest Sub-Manifold (NSM)
classifier. The classifier is based on two simplifications. First,
for noisy samples, one would want to select $\wt{h}$ based on the
noise level or via cross-validation; it therefore becomes
a regularization term.
Second, as seen in \S\ref{sec:sslr}, for classification the exact estimate of geodesic
distance is less important than relative distances; hence
there is no need to estimate scaling constants.
As before, suppose we are given $n$ samples from a manifold $\cM$. Of
these, each of the first $l$ belong to one of $M$ classes; that
is, $x_i \in \cC_m$, $m \in \set{1,\ldots,M}$. We assume that all
points within class $m$ belong to a smooth closed subset of
$\cM$, which we call anchor $\cA_m$, $m=1,\ldots,M$. For each anchor,
we define the anchor data vector $\wt{w}^m$ via $\wt{w}^m_i =
\delta(x_i \in \cC_m)$, $i=1,\ldots,n$.
To classify, first choose $\wt{h}>0$ and solve
\eqref{eq:egeo} for each of the $M$ different anchor sets $\cA_m$ (and
associated $\wt{w}^m$), to get solutions $\set{a^m}_{m=1}^M$. Then
for each unlabeled point $x_i$, $a^m_i$ encodes its distance to anchor
$\cA_m$. The decision rule is $C(x_i) = \argmax_m a^m_i$.
For $n$ and $l$ large, $\epsilon$, $\wt{h} > 0$ small, and
no noise, $C(x_i)$ will accurately estimate the class
which is closest in geodesic distance to $x_i$. In the noisy, finite
sample case with irregular boundaries, $C$ provides a regularized
estimate of the same.
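A minimal Python sketch of the decision rule is given below; it
assumes a precomputed Laplacian $L$, uses each class's labeled points
as the anchor set for its own solve, and the label encoding is
illustrative:
\begin{verbatim}
import numpy as np

def nsm_classify(L, labels, h):
    """labels[i] in {0,...,M-1} for labeled points, -1 for unlabeled points."""
    n = L.shape[0]
    M = labels.max() + 1
    A = np.empty((n, M))
    for m in range(M):
        w_m = (labels == m).astype(float)        # anchor data vector for class m
        E_A = np.diag(w_m)
        E_Ap = np.eye(n) - E_A
        A[:, m] = np.linalg.solve(E_A + E_Ap @ (np.eye(n) + h**2 * L), w_m)
    return np.argmax(A, axis=1)                  # C(x_i) = argmax_m a_i^m
\end{verbatim}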
\section{NSM Classifier: Performance}
We compare the classification performance of the NSM classifier to
several state-of-the-art classifiers using the test set from
\cite{Chapelle2006} (testing protocol and datasets:
\url{http://www.kyb.tuebingen.mpg.de/ssl-book/}).
For the NSM classifier, we performed a parameter search as described
in \cite[\S 21.2.5]{Chapelle2006}, and additionally cross-validated
over the viscosity parameter $\wt{h} \in \set{0.1, 1, 10}$ scaled by
the median distance between pairs of points in $\cE$.
We compare our results with publicly available implementations of:
\begin{itemize}
\item LapRLS from M. Belkin's website,
with obvious modifications for one-vs-all multiclassification
and with the exception that, as opposed to \cite{Chapelle2006}, we
used $p=1$ in the kernel $\tLe^p$ instead of $p=2$. Here, we also
performed a parameter search as in \cite[\S 21.2.5]{Chapelle2006}.
\item LDS from O. Chapelle's website with parameters optimized as in
\cite[\S 21.2.11]{Chapelle2006}.
\item Kernel TSVM using primal gradient descent (available in the LDS
package) with parameters optimized as in \cite[\S
21.2.1]{Chapelle2006}.
\end{itemize}
For testing, we also included the LIBRAS (LIB) dataset with 12 splits
of $l=30, 100$ labeled points and the ionosphere (Ion) dataset with 12
splits of $l=10, 100$ labeled points.
All datasets have $M=2$ (the task is binary classification) except
COIL, which has $M=6$.
Table \ref{tab:ssl} shows percent classification error vs. percentage
labeled points, over 12 randomized splits of the testing and training
data set. Parameter optimization (cross-validation) was always
performed on the training splits only; classification error is
reported over the testing data.
Note that the NSM classifier is competitive with the
others, especially on those datasets where we expect a manifold
structure (e.g. the image sets USPS and COIL).
\begin{table}[t]
\caption{Percent classification error over 12 splits. Clear~winners~in~bold.}
\footnotesize
\label{tab:ssl}
\centering
\begin{tabular}{|>{\bfseries}c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\ &
\multicolumn{2}{|c|}{{\bf USPS}} &
\multicolumn{2}{|c|}{{\bf BCI}} &
\multicolumn{2}{|c|}{{\bf g241n}} &
\multicolumn{2}{|c|}{{\bf COIL}} &
\multicolumn{2}{|c|}{{\bf LIB}} &
\multicolumn{2}{|c|}{{\bf Ion}} \\
\hline
$100\,l/n$ (\% labeled)
& 0.66 & 6.66
& 2.5 & 25
& 0.66 & 6.66
& 0.66 & 6.66
& 8.3 & 28
& 2.8 & 28 \\
\hline
$l$ & 10 & 100
& 10 & 100
& 10 & 100
& 10 & 100
& 30 & 100
& 10 & 100 \\
\hline
NSM
& {\bf 10.0} & {\bf 5.9}
& 49.0 & 44.8
& 46.0 & 39.0
& 58.8 & {\bf 11.6}
& 51.0 & {\bf 30.3}
& 30.4 & 14.2 \\
LapRLS
& 15.0 & 10.6
& 48.7 & 45.4
& 43.0 & 33.9
& 63.1 & 20.6
& 48.9 & {\bf 30.4}
& 31.9 & 13.0 \\
LDS
& 22.5 & 11.5
& 48.7 & 46.0
& 49.0 & 41.0
& 56.7 & 16.2
& 54.9 & 36.6
& {\bf 17.3} & 13.5 \\
TSVM
& 17.4 & 12.0
& 48.8 & 46.1
& 47.3 & {\bf 23.7}
& 69.5 & 39.9
& 66.0 & 36.6
& 48.1 & 20.7 \\
\hline
\end{tabular}
\end{table}
\section{Irregular Boundaries and the Counterexample of Nadler et al.}
\label{sec:irr}
We relate the Annulus example (Ex. \ref{ex:annul}) to a
negative result of \cite[Thm. 2]{Nadler2009}, which
essentially states that no solution exists for \eqref{eq:rl} for $\cM$
with $d \geq 2$ and the anchor set a countable number of points.
This is a special case of a result known in PDE theory: no
solution exists to \eqref{eq:rl} when $\cA$ is irregular, and isolated
points in subsets of $\bbR^d$, $d \geq 2$, are irregular
\cite[Irregular Boundary Point]{Hazewinkel1995}.
This is very clearly seen in Ex. \ref{ex:annul}, where attempting
to let $r_0 \to 0$ (thus forcing a single point anchor) forces
the first term of the solution $f_h$ in \eqref{eq:fhr} to zero for any
$r>r_0$, regardless of the anchor condition at $r=r_0$ and of $h$.
The major culprit here is the $(d-1)/r$ term that appears in the
radial Laplacian and is unbounded at the origin.
Note, however, that viscosity solutions to
\eqref{eq:ve} do exist even for singular anchors
\cite{Mantegazza2002}.
In many practical cases (e.g., if we had chosen single-point anchors
in \S\ref{sec:sslr}, Ex. \ref{ex:sslsphere}, etc.), the sampling
size is finite and we keep $\epsilon \geq \epsilon_0$ for some
$\epsilon_0>0$. In these cases, the issues raised here do not affect
the numerical analysis because even single points act like
balls of radius $\epsilon$ in $\bbR^p$.
\section{Beyond Classification: Graph Denoising, Manifold Learning \label{sec:beyond}}
The ideas presented in the previous sections can also be applied to
other areas of machine learning. As illustrations, we show that the
graph denoising scheme of \cite{Brevdo2010} is a special case of our
geodesics estimator. Further, we show how to construct a regularized
variant of ISOMAP and provide some numerical examples of geodesics
estimation.
\section{The Graph Denoising Algorithm of Chapter \ref{ch:npdr}}
\label{sec:geonpdr}
In chapter \ref{ch:npdr}, we studied decision rules for denoising
(removing) edges from NN graphs that have been corrupted by sampling
noise. We examine the Neighborhood Probability Decision Rule (NPDR)
of \S\ref{sec:npdr}, and show that in the low noise,
low regularization regime, it removes graph edges between
geodesically distant points.
The NPDR is constructed in three stages: (a) the NN graph $G$ is
constructed from the sample points $\cX$. $\cE$ contains an initial
estimate of neighbors in $G$, but may contain incorrect edges
due to sampling noise; (b) a special Markov random walk
is constructed on $G$, resulting in the transition probability matrix
$\Ne \propto (I-\bp \Pe)^{-1}$ for $\bp \in (0,1)$;
(c) the edges $(l,k) = e \in \cE$ with the smallest associated
entries $(\Ne)_{lk}$ are removed from $G$.
In chapter \ref{ch:npdr}, we provide a probabilistic
interpretation for the coefficients of $\Ne$.
We show that $\Ne$ encodes geodesic distances by
reducing $\Ne^{-1}$ to look like \eqref{eq:rl}:
\begin{align}
I - \bp \Pe &= (1-\bp)I + \bp (I - \Pe)
\propto ({1}/{\epsilon}) I + {\bp}(1-\bp)^{-1} \Le.
\label{eq:redNe}
\end{align}
For $\epsilon$ small,
after applying the RHS of \eqref{eq:redNe}, $\Ne e_i$
approximately solves \eqref{eq:rl} with $\cA = \set{x_i}$,
$w(x_i) = c'$ (where $c'$ is a function of ${\bp,\epsilon}$), and
$h^2 = c \epsilon \bp / (1-\bp)$.
Then by Thm. \ref{thm:rl2e}
$$(\Ne)_{lk} \approx c' e^{-d(x_l,x_k) \sqrt{(1-\bp)/(c \epsilon
\bp)}} \quad \text{for some } c'>0.
$$
Thus in the noiseless case and with $\bp \approx 0$, the NPDR
algorithm will remove edges in the graph between
points that are geodesically far from each other.
As edges of this type are the most detrimental to learning
\cite{Balasubramanian2002}, the NPDR is a powerful denoising rule.
As shown in chapter \ref{ch:npdr},
for noisy samples one would choose $\bp \approx 1$ to regularize for
noisy edges. In this case, one can think
of $\Ne$ as a highly regularized encoding of pairwise geodesic distances.
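The proportionality in \eqref{eq:redNe} is easy to verify numerically.
In the following minimal Python sketch, a small random row-stochastic
matrix stands in for $\Pe$, and the check confirms that
$(I-\bp\Pe)^{-1}$ equals $\left[(1/\epsilon)I +
\bp(1-\bp)^{-1}\Le\right]^{-1}$ up to the scalar factor
$1/(\epsilon(1-\bp))$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n, eps, p = 50, 0.1, 0.7
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)              # stand-in for P_eps
L = (np.eye(n) - P) / eps                      # L_eps

N  = np.linalg.inv(np.eye(n) - p * P)          # N_eps, up to normalization
Nr = np.linalg.inv(np.eye(n) / eps + (p / (1 - p)) * L)
print(np.allclose(N, Nr / (eps * (1 - p))))    # proportional, factor 1/(eps*(1-p))
\end{verbatim}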
\section{Viscous ISOMAP}
\label{sec:isomap}
As a second example of how the ideas from \S\ref{sec:sslr} can be
used, we construct a regularized variant of ISOMAP
\cite{Tenenbaum2000}, which we call Viscous ISOMAP.
ISOMAP is a dimensionality reduction algorithm that constructs an
embedding for $n$ points sampled from a high-dimensional space by
performing Multidimensional Scaling (MDS) on the estimated geodesic
distance matrix of the NN graph of these points.
The first step of ISOMAP is to estimate all pairwise geodesic
distances. Traditionally this is done via Dijkstra's Shortest Path
algorithm. We replace this step with our regularized geodesics
estimator. A direct implementation requires $n$ calculations of
\eqref{eq:egeo}. However, a faster estimator can be constructed,
based on our analysis of the NPDR algorithm in \S\ref{sec:geonpdr}.
Specifically, to calculate pairwise distances, first
calculate $M = (I + \wt{h}^2 \Le)^{-1}$. Then the symmetrized
geodesics estimates are $H = -\wt{h}(\log M + \log M^T)$, where the
logarithm is taken elementwise. Finally, perform MDS on the matrix
$H$ to calculate the ISOMAP embedding.
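The following NumPy sketch assembles these steps, assuming a graph
Laplacian for the NN graph is already available; the classical
(Torgerson) MDS step is written out explicitly, and all names are
illustrative.
\begin{verbatim}
import numpy as np

def viscous_isomap(L, h, ndim=2):
    # L    : (n, n) graph Laplacian of the NN graph
    # h    : viscosity parameter
    # ndim : embedding dimension
    n = L.shape[0]
    M = np.linalg.inv(np.eye(n) + h**2 * L)       # M = (I + h^2 L)^{-1}
    logM = np.log(np.maximum(M, 1e-300))          # elementwise log; guard zeros
    H = -h * (logM + logM.T)                      # symmetrized geodesic estimates

    # Classical MDS on the distance matrix H
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (H**2) @ J                     # double-centered squared distances
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:ndim]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
\end{verbatim}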
For small $\wt{h}$, the Viscous ISOMAP embedding matches that
of standard ISOMAP. For large $\wt{h}$, the additional
regularization can remove the effects of erroneous edges caused by
noise and outliers.
We provide a rather simple numerical example. It confirms
that for small viscosity $\wt{h}$, Viscous ISOMAP embeddings match
standard ISOMAP embeddings, and that for larger viscosities the
embeddings are less sensitive to outliers in the original sampling set
$\cX$ and in $\cE$.
Fig.~\ref{fig:isomap} compares Viscous ISOMAP to regular ISOMAP
on a noisy Swiss Roll with topological shortcuts. We used the same
$n=1000$ samples and $\delta=4$ for NN estimation for both algorithms,
and $\epsilon=1$ for Viscous ISOMAP. Note how for small $\wt{h}$,
the Viscous ISOMAP embedding matches the standard one. Also note how
increasing the viscosity term $\wt{h}$ leads to an accurate
embedding in the principal direction, ``unrolling'' the Swiss Roll.
\begin{figure}\label{fig:isomap}
\end{figure}
\section{Numerical Examples of Geodesics Estimation}
\label{sec:geoexmp}
We provide two examples of Geodesic Estimation: on the Torus, and on a
triangulated mesh. For the Torus, we used the normalized Graph Laplacian
of \S\ref{sec:sslintro} and ground truth geodesic distances were given
by Dijkstra's Shortest Path algorithm. On the mesh, we used the mesh
surface Laplacian of \cite{Belkin2008}, and for ground truth geodesic
distances we used the mesh Fast Marching algorithm of \cite{Kimmel1998} (as
implemented in Toolbox Fast Marching at
\url{http://www.ceremade.dauphine.fr/~peyre/}). In both cases, $S_h$
was calculated via the geodesics estimator of \S\ref{sec:geo}; thus,
as always, $S_h$ estimates geodesic distances up to a constant.
\begin{exmp}[The Torus $T = S^1 \x S^1$ in $\bbR^3$]
\label{ex:torus}
The torus $T$ is defined by the points $(x^1,x^2,x^3) = ((2+\cos
v^1)\cos v^2, (2+\cos v^1)\sin v^2, \sin v^1)$ for $v^1 \in [0,2\pi)$
and $v^2 \in [0,2\pi)$. We used $n=1000$ randomly sampled points,
with $k=100$ neighbors for the initial NN graph, $\epsilon = .01$ and
$\wt{h} = .001$. Setting $\cA = \set{x_i}$, $i=1,3$ where $x_1$
corresponds to $(v^1,v^2) = (0,0)$, and $x_3$ is a randomly chosen point,
we can calculate geodesic distances of all points in $\cX$ to these
anchors. The results are shown in Fig.~\ref{fig:geotorus}.
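For concreteness, here is a rough NumPy sketch of the data generation
together with a fast distance-to-anchor estimate in the spirit of
\S\ref{sec:geonpdr}; the actual experiment uses the estimator of
\S\ref{sec:geo} with the normalized graph Laplacian of
\S\ref{sec:sslintro}, so the unnormalized Laplacian, kernel weights,
and estimator shortcut below are simplifying assumptions.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
n, k, h = 1000, 100, 1e-3

# Sample the torus (R = 2, r = 1)
v1, v2 = rng.uniform(0, 2*np.pi, n), rng.uniform(0, 2*np.pi, n)
X = np.column_stack([(2 + np.cos(v1))*np.cos(v2),
                     (2 + np.cos(v1))*np.sin(v2),
                     np.sin(v1)])

# k-NN graph and an (unnormalized) graph Laplacian as a stand-in
d, idx = cKDTree(X).query(X, k + 1)
W = np.zeros((n, n))
for i in range(n):
    W[i, idx[i, 1:]] = np.exp(-d[i, 1:]**2 / 0.01)   # epsilon = .01
W = np.maximum(W, W.T)
L = np.diag(W.sum(1)) - W

# Distance-to-anchor estimate, up to a constant (anchor = point 0)
M = np.linalg.inv(np.eye(n) + h**2 * L)
S = -h * np.log(np.maximum(M[:, 0], 1e-300))
\end{verbatim}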
\begin{figure}\label{fig:geotorus}
\end{figure}
\end{exmp}
\begin{exmp}[The Dancing Children Mesh]
\label{ex:mesh}
The Dancing Children mesh is a complex (high genus) mesh from the
Aim$@$Shape Repository (\url{http://shapes.aimatshape.net/}).
The mesh $G$ is composed of $n \approx 36000$ vertices, and we used
$\epsilon=.1$ (times the mean edge distance) and $\wt{h}=.01$ for
the estimation procedure.
The anchor point $x_1$ was chosen randomly.
Our results are shown in Fig.~\ref{fig:geomesh}. Note that minor
discrepancies between the Fast Marching estimate $d_{FM}$ and our
estimate $S_h$ occur near areas with complex topology and areas of high
curvature (e.g. near a hole in the mesh).
\begin{figure}\label{fig:geomesh}
\end{figure}
\end{exmp}
\chapter{Synchrosqueezing\label{ch:ss}
\chattr{This chapter is based on work in collaboration with
Hau-Tieng~Wu, Department of Mathematics, and Gaurav~Thakur, Program in
Applied and Computational Mathematics, Princeton University, as
submitted in \cite{Brevdo2011b}.}}
\section{Introduction}
In this chapter, we analyze the Synchrosqueezing transform, a
consistent and invertible time-frequency analysis tool that can
identify and extract oscillating components (of time-varying frequency
and amplitude) from regularly sampled time series. We first describe
a fast algorithm implementing the transform. Second, we show
Synchrosqueezing is robust to bounded perturbations
of the signal. This stability property extends the applicability of
Synchrosqueezing to the analysis of nonuniformly sampled and noisy
time series, which are ubiquitous in engineering and the natural
sciences. Numerical simulations show that Synchrosqueezing provides a
natural way to analyze and filter a variety of signals. In Chapter
\ref{ch:ssapp}, we use Synchrosqueezing to analyze a variety of
data, including ECG signals and climate proxies.
The purpose of this chapter is twofold.
We first describe the Synchrosqueezing transform in detail and
highlight the subtleties of a new fast numerical implementation.
Second, we show both numerically and theoretically that
Synchrosqueezing is stable under bounded signal perturbations. It is
therefore robust to noise and to errors incurred by preprocessing
using approximations, such as interpolation.
The chapter is organized as follows. We first describe
Synchrosqueezing (\S\ref{sec:analysis}), and in \S\ref{sec:ssimp} we provide a fast new
implementation
\footnote{The Synchrosqueezing Toolbox for MATLAB, and the codes
used to generate all of the figures in this chapter, are available
at \url{http://math.princeton.edu/~ebrevdo/synsq/}.}.
In \S\ref{sec:SStheory} we provide theoretical evidence that
Synchrosqueezing analysis and reconstruction are stable to bounded
perturbations. In \S\ref{sec:wmisc}, we numerically compare
Synchrosqueezing to other common transforms, and provide
examples of its stability properties. Conclusions and ideas for
future theoretical work are in \S\ref{sec:ssfuture}.
Comprehensive numerical examples and applications are deferred to
Chapter~\ref{ch:ssapp}.
\section{Prior Work}
Synchrosqueezing is a tool designed to extract and compare
oscillatory components of signals that arise in complex systems. It
provides a powerful method for analyzing signals with time-varying behavior
and can give insight into the structure of their constituent
components. Such signals $f(t)$ have the general form
\begin{equation}\label{eq:sig}
f(t) \,=\, \sum_{k=1}^K f_k(t) + e(t),
\end{equation}
where each component $f_k(t) = A_k(t)\cos(\phi_k(t))$ is an
oscillating function, possibly with smoothly time-varying amplitude and
frequency, and $e(t)$ represents noise or observation
error. The goal is to extract the amplitude factor $A_k(t)$ and the
Instantaneous Frequency (IF) $\phi'_k(t)$ for each $k$.
Signals of the form \eqref{eq:sig} arise naturally in engineering and
scientific applications, where it is often important to understand
their spectral properties. Many time-frequency (TF) transforms exist
for analyzing such signals, such as the Short Time Fourier Transform
(STFT), Wavelet Transform, and Wigner-Ville distribution
\cite{Flandrin1999}, but these methods can fail to capture key
short-range characteristics of the signals. As we will see,
Synchrosqueezing deals well with such complex data.
Synchrosqueezing is a TF transform that is ostensibly
similar to the family of time-frequency reassignment (TFR) algorithms,
methods used in the estimation of IFs in signals of the form given in
\eqref{eq:sig}. TFR analysis originates from a study of the STFT,
which smears the energy of the superimposed IFs around their center
frequencies in the spectrogram. TFR analysis ``reassigns'' these
energies to sharpen the spectrogram \cite{Flandrin2002, Fulop2006}.
However, there are some significant differences between
Synchrosqueezing and most standard TFR techniques.
Synchrosqueezing was originally introduced in the context of audio
signal analysis \cite{Daubechies1996}. In \cite{Daubechies2010}, it
was further analyzed theoretically as an alternative way to understand
the {\em Empirical Mode Decomposition} (EMD) algorithm
\cite{Huang1998}. EMD has proved to be a
useful tool for analyzing and decomposing natural
signals. Like EMD, Synchrosqueezing can extract and
clearly delineate components with time varying spectrum. Furthermore,
like EMD, and unlike most TFR techniques, it allows individual
reconstruction of these components.
\section{\label{sec:analysis}Synchrosqueezing: Analysis}
Synchrosqueezing is performed in three steps. First, the Continuous
Wavelet Transform (CWT) $W_f(a,b)$ of $f(t)$ is calculated
\cite{Daubechies1992}. Second, an initial estimate of the
FM-demodulated frequency, $\omega_f(a,b)$, is calculated on the
support of $W_f$. Finally, this estimate is used to squeeze $W_f$ via
reassignment; we thus get the Synchrosqueezing representation
$T_f(\omega, b)$. Synchrosqueezing is invertible: we can calculate
$f$ from $T_f$. Our ability to extract individual components
stems from filtering $f$ by keeping energies from
specific regions of the support of $T_f$ during reconstruction.
Note that Synchrosqueezing, as originally
proposed \cite{Daubechies1996}, estimates the FM-demodulated frequency
from the wavelet representation $W_f(a,b)$ before performing reassignment.
However, it can be adapted to work ``on top of'' many invertible
transforms (e.g. the STFT \cite{Thakur2010}). We focus on the
original wavelet version as described in \cite{Daubechies2010}.
We now detail each step of Synchrosqueezing, using the harmonic signal
$h(t) = A \cos(\Omega t)$ for motivation. As a visual aid,
Fig.~\ref{fig:simple} shows each step on the signal
$h(t)$ with $A=1$ and $\Omega = 4 \pi$. Note that
Figs.~\ref{fig:simple}(b,d) show that Synchrosqueezing is more
``precise'' than the CWT.
\begin{figure}\label{fig:simple}
\end{figure}
\subsection{CWT of ${f(t)}$}
For a given mother wavelet $\psi$, the CWT of
$f$ is given by $W_f(a,b) = \int_{-\infty}^\infty f(t) a^{-1/2}
\overline{\psi \left(\frac{t-b}{a} \right)} dt,$ where $a$ is the
scale and $b$ is the time offset.
We assume that $\psi$ has fast decay, and that its Fourier
transform $\wh{\psi}(\xi) = (2\pi)^{-1/2} \int_{-\infty}^\infty
\psi(t) e^{-i \xi t} dt$ is approximately zero in the negative frequencies
\footnote{More details about the Fourier transform, and analysis
on intervals, are available in App.~\ref{app:fourier}}:
$\wh{\psi}(\xi) \approx 0$ for $\xi < 0$, and is concentrated around
some positive frequency $\xi = \omega_0$ \cite{Daubechies2010}. Many
wavelets have these properties (several examples are given and
compared in \S\ref{sec:wcmp}). For $h(t)$, the harmonic signal above,
upon applying our assumptions we get $W_{h}(a,b) = \frac{1}{2 \sqrt{2
\pi}} A a^{1/2} \overline{\wh{\psi}(a \Omega)} e^{i b \Omega}$.
\subsection{Calculate the FM-demodulated frequency ${\omega(a,b)}$}
The wavelet representation of the harmonic signal $h(t)$
(with frequency $\Omega$) will have its energy spread out in the time-scale
plane around the line $a = \omega_0/\Omega$, and this frequency will
be encoded in the phase \cite{Daubechies1996, Daubechies2010}.
In those regions where $\abs{W_h} > 0$ we would like to remove the
effect of the Wavelet on this frequency. We perform
a type of FM demodulation by taking derivatives: $ (W_{h}(a,b))^{-1}
\partial_b W_{h}(a,b) = i \Omega $. This simple model
leads to an estimate of the frequency in the time-scale plane:
\begin{equation}
\omega_{f}(a,b)=
\label{eq:omegax0}
\begin{cases} \frac{-i\partial_{b}W_{f}(a,b)}{W_{f}(a,b)} & |W_{f}(a,b)|>0\\
\infty & |W_{f}(a,b)|=0
\end{cases}.
\end{equation}
\subsection{Squeezing in the time-frequency plane: ${T_f(\omega,b)}$}
The final step of Synchrosqueezing is reassigning energy
in the time-scale plane to the TF plane according to the frequency map
$(a,b) \to (\omega(a,b), b)$. Reassignment follows from the inversion
property of the CWT: when $f(t)$ is real,
\begin{equation}
\label{eq:Winv}
f(b) = 2 \cR_\psi^{-1} \Re{\int_0^\infty W_f(a,b) a^{-3/2} da},
\end{equation}
where $\cR_\psi = \sqrt{2 \pi} \int_0^\infty \xi^{-1}
\overline{\wh{\psi}(\xi)} d\xi$ is a normalizing constant.
We first break up the integrand in \eqref{eq:Winv} according to the
FM-demodulated frequency estimate $\omega_f$. Define frequency divisions
$\set{w_l}_{l=0}^\infty$ s.t. $w_0 > 0$ and $w_{l+1} > w_l$ for all
$l$. Further, let the frequency bin $\cW_l$ be the set of
points $w' \in \bbC$ closer to $w_l$ than to any other $w_{l'}$, $l' \neq l$.
We define the Discrete-Frequency Wavelet Synchrosqueezing transform of
$f$
as:
\begin{equation}
\label{eq:Tfdef}
T_f(w_l, b) = \int_{\set{a : \omega_f(a,b) \in \cW_l}} W_f(a,b) a^{-3/2} da.
\end{equation}
In other words, $T_f(w_l,b)$ is the ``volume'' of the
frequency preimage set
$\cW_l^{-1}(b) = \set{a : \omega_f(a,b) \in \cW_l}$ under
the signed measure $\mu_{f,b}(a) = W_f(a,b) a^{-3/2} da$.
This definition has several favorable properties. First, it allows
us to reconstruct $f$ from $T_f$:
\begin{equation}
\label{eq:Tfrecon}
f(b) = 2 \cR_\psi^{-1} \Re{\sum_l T_f(w_l, b)}.
\end{equation}
Second, for the
harmonic signal $h(t)$, with $\omega_{h}(a,b) = \Omega$, there
will be a single $\hat{l}$ such that $w_{\hat{l}}$ is closest to
$\omega_{h}(a,b)$. From \eqref{eq:Winv}, we have $h(b) =
2\Re{\cR_\psi^{-1} T_{h}(w_{\hat{l}},b)}$. Further, the
magnitude of $T_h$ is proportional to that of $h(t)$:
$\abs{T_{h}(w_{\hat{l}},b)} = \frac{\abs{A}}{2 \pi} \abs{\cR_\psi}$.
More generally, for a wide class of signals with slowly varying
$A_k(t)$ and well separated ${\phi_k'}(t)$, given a
sufficiently fine division of the frequency bins $\set{w_l}$, each of
the $K$ components can be well concentrated into its own ``curve'' in the
TF plane (see Thm. \ref{SSThm} below).
This allows us to analyze such signals: by looking at $\abs{T_f(w,b)}$
to identify and extract the curves, and to reconstruct
their associated components.
\section{\label{sec:ssimp}A Fast Implementation}
In practice, we observe the vector $\tf \in \bbR^n$, $n = 2^{L+1}$,
where $L$ is a nonnegative integer. Its elements, $\tf_m,
m=0,\ldots,n-1$, correspond to a uniform discretization of $f(t)$ taken at
the time points $t_m = t_0 + m \Delta t$. To prevent boundary
effects, we pad $\tf$ on both sides (using, e.g., reflecting boundary
conditions).
We now describe a fast numerical implementation of Synchrosqueezing.
The speed of our algorithm lies in two key steps. First, we
calculate the Discrete Wavelet Transform (DWT) of the vector $\tf$ using
the Fast Fourier Transform (FFT). Second, we discretize the squeezing
operator $T$ in a way that lends itself to a fast numerical
implementation.
\subsection{DWT of sampled signal $\tf$}
The DWT samples the CWT $W_f$ at the locations $(a_j,t_m)$, where $a_j
= 2^{j/n_v} \Delta t$, $j=1,\ldots,L n_v$, and $n_v$ is a
user-defined ``voice number'' parameter
\cite{Goupillaud1984} (we have found that $n_v = 32$ works well). The
DWT of $\tf$ can be calculated in $O(n_v n \log_2^2 n)$ operations
using the FFT. We outline the steps below.
First note that $W_f(a,b) = \left[ a^{-1/2} \overline{\psi(-t/a)} *
f(t) \right](b)$, where $*$ denotes convolution over $t$.
In the frequency domain, this relationship becomes:
$\wh{W}_f(a,\xi) = a^{1/2} \wh{f}(\xi) \wh{\psi}(a \xi)$.
We use this to calculate the DWT, $\tW_\tf(a_j,t_m)$. Let $\cF_n$
($\cF_n^{-1}$) be the standard (inverse) circular Discrete Fourier
Transform. Then
\begin{equation}
\label{eq:Wxdisc}
\tW_\tf(a_j, \cdot) = \cF_n^{-1} \left( (\cF_n \tf) \odot \wh{\psi}_j \right).
\end{equation}
Here $\odot$ denotes elementwise
multiplication and $\wh{\psi}_j$ is an $n$-length vector with
$(\wh{\psi}_j)_m = a_j^{1/2} \wh{\psi}(a_j \xi_m)$; $\xi_m$
are samples in the unit frequency interval: $\xi_m = 2 \pi m / n$,
$m=0,\ldots, n-1$.
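A minimal NumPy sketch of this step, under the conventions above
(scales measured in units of $\Delta t$, frequencies $\xi_m$ on the
unit interval), might look as follows; the default $\wh{\psi}$ is an
illustrative stand-in for the wavelets of \S\ref{sec:wcmp}.
\begin{verbatim}
import numpy as np

def dwt_fft(f, n_v=32, psi_hat=None):
    # Sketch of the FFT-based DWT (eq:Wxdisc), with scales a_j measured
    # in units of the sampling interval and xi_m on the unit interval.
    if psi_hat is None:
        # illustrative stand-in: a shifted-Gaussian (Morlet-like) wavelet
        psi_hat = lambda xi: np.exp(-0.5 * (xi - 2*np.pi)**2) * (xi > 0)
    n = len(f)
    L = int(np.log2(n)) - 1                      # n = 2^{L+1}
    scales = 2.0 ** (np.arange(1, L*n_v + 1) / n_v)
    xi = 2 * np.pi * np.arange(n) / n
    fhat = np.fft.fft(f)
    W = np.empty((len(scales), n), dtype=complex)
    for j, a in enumerate(scales):
        W[j] = np.fft.ifft(fhat * np.sqrt(a) * psi_hat(a * xi))
    return scales, W
\end{verbatim}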
\subsection{A Stable Estimate of $\omega_f$: $\wt{\omega}_\tf$}
We first require a slight modification of the FM-demodulated frequency
estimate
\eqref{eq:omegax0},
\begin{equation}
\label{eq:omegax}
\omega_f(a,b) = \Im{(W_f(a,b))^{-1} \partial_b W_f(a,b)}.
\end{equation}
This definition is equivalent to \eqref{eq:omegax0} when
Synchrosqueezing is performed via \eqref{eq:Tfdef}, and simplifies the
algorithm.
In practice, signals have noise and other artifacts due to,
e.g., sampling errors, and the phase of $W_f$ is unstable when
$\abs{W_f} \approx 0$. As such the user should choose some
$\gamma > 0$ (we often use $\gamma \approx 10^{-8}$) as a hard
threshold on $\abs{W_f}$. We define the numerical support of
$\tW_\tf$, on which $\omega_f$ can be estimated:
\begin{center}
$ \wt{\cS}^\gamma_\tf(m) = \set{j : \abs{\tW_\tf(a_j,t_m)} >
\gamma}$,
for $m = 0,\ldots,n-1$.
\end{center}
The estimate of $\omega_f$, $\wt{\omega}_\tf$, can be calculated
by taking differences of $\tW_\tf$ with respect to $m$ before
applying \eqref{eq:omegax}, but we provide a more direct way.
Using the property $\wh{\partial_b W_f}(a,\xi) = i \xi
\wh{W_f}(a,\xi)$, we estimate the FM-demodulated frequency, for
$j \in \wt{\cS}^\gamma_\tf(m)$, as
\begin{center}
$ \wt{\omega}_\tf(a_j,t_m) =
\Im{\left(\tW_\tf(a_j,t_m)\right)^{-1} \partial_b \tW_\tf(a_j,t_m) },
$
\end{center}
with the time derivative of $W_f$ estimated via (e.g., \cite{Tadmor1986}):
\begin{center}
$ \partial_b \tW_\tf(a_j,\cdot) =
\cF^{-1}_n \left( (\cF_n \tf) \odot \wh{\partial \psi}_j \right),
$
\end{center}
where
$(\wh{\partial \psi}_j)_m = a_j^{1/2} i \xi_m \wh{\psi}(a_j \xi_m)/\Delta
t$ for $m=0,\ldots,n-1$.
Finally, we normalize $\wt{\omega}$ by $2 \pi$ so that the dominant
frequency estimate is $\alpha$ when $f(t) = \cos(2 \pi \alpha t)$.
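Assuming $\tW_\tf$ and $\partial_b \tW_\tf$ have been computed as
above (the derivative by replacing $\wh{\psi}_j$ with
$\wh{\partial \psi}_j$, which carries the $1/\Delta t$ factor, in
\eqref{eq:Wxdisc}), the thresholded frequency estimate can be sketched
as:
\begin{verbatim}
import numpy as np

def freq_estimate(W, dW, gamma=1e-8):
    # FM-demodulated frequency estimate (eq:omegax) on the numerical
    # support |W| > gamma; entries off the support are left as NaN.
    omega = np.full(W.shape, np.nan)
    support = np.abs(W) > gamma
    omega[support] = np.imag(dW[support] / W[support])
    return omega / (2 * np.pi)   # so that cos(2*pi*alpha*t) maps to alpha
\end{verbatim}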
\subsection{Fast estimation of ${T_f}$ from ${\tW_\tf}$ and ${\wt{\omega}_\tf}$}
The representation $\tW_\tf$ is given with respect to $n_a = L n_v$
log-scale samples of the scale $a$, and this leads to several
important considerations when estimating $T_f$ via \eqref{eq:Winv}
and \eqref{eq:Tfdef}. First, due to lower resolutions in coarser
scales, we expect to get lower resolutions in the lower frequencies.
We thus divide the frequency domain into $n_a$ components on a
log scale. Second, sums with respect to $a$ on a
log scale, $a(z) = 2^{z/n_v}$ with $da(z) = a \frac{\log
2}{n_v} dz$, lead to the modified integrand
${W_{f}(a, b) a^{-1/2} \frac{\log 2}{n_v} dz}$ in \eqref{eq:Tfdef}.
To choose the frequency divisions, note that the discretization
period $\Delta t$ limits the maximum frequency $\ol{w}$ that can be
estimated. The Nyquist theorem suggests that this frequency is
$\ol{w} = w_{n_a-1} = \frac{1}{2 \Delta t}$. Further, if we assume
periodicity, the maximum period of an input signal is $n \Delta t$;
thus the minimum frequency is $\ul{w} = w_0 = \frac{1}{n \Delta
t}$. Combining these limits with the log scaling of
the $w$'s we get the divisions: $w_l = 2^{l \Delta w} \ul{w}$, $l =
0, \ldots, n_a-1$, where $\Delta w = \frac{1}{n_a-1} \log_2 (n/2)$.
Note that the voice number $n_v$ strongly affects the frequency resolution.
We can now calculate the Synchrosqueezed estimate $\tT_\tf$. Our
fast implementation of \eqref{eq:Tfdef} finds the associated $\cW_l$
for each $(a_j,t_m)$ and adds it to the
correct sum, instead of performing a search over all
scales for each $l$. This is possible because
$\wt{\omega}_\tf(a_j, t_m)$ only ever lands in one frequency bin.
We provide pseudocode for this $O(n_a)$ implementation in
Alg. \ref{alg:Tf}.
\begin{algorithm}[ht]
\caption{Fast calculation of $\tT_\tf$ for fixed $m$}
\label{alg:Tf}
\begin{algorithmic}
\small
\FOR[Initialize $\tT$ for this $m$]{$l = 0$ to $n_a-1$}
\STATE $\tT_{\tf}(w_l, t_m) \leftarrow 0$
\ENDFOR
\FORALL[Calculate \eqref{eq:Tfdef}]{$j \in
\wt{\cS}^\gamma_{\tf}(m)$}
\STATE \COMMENT{Find frequency bin via $w_l = 2^{l \Delta w}
\ul{w}$, and $\wt{\omega}_{\tf} \in \cW_l$}
\STATE $l \leftarrow
\textrm{ROUND} \left[ \frac{1}{\Delta w} \log_2 \left(
\frac{\wt{\omega}_{\tf}(a_j,b_m)}{\ul{w}} \right) \right]$
\IF{$l \in [0, n_a-1]$}
\STATE \COMMENT{Add normalized term to appropriate integral;
$\Delta z = 1$}
\STATE $\tT_{\tf}(w_l,t_m) \leftarrow \tT_{\tf}(w_l,t_m) + \frac{\log 2}{n_v}
\tW_{\tf}(a_j,t_m) a^{-1/2}_j $
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{IF Curve Extraction and Filtered Reconstruction}
A variety of signals, especially sums of quasi-harmonic signals with
well-separated IFs, will have a frequency image
$\abs{T_f(w,b)}$ composed of several curves in the $(w,b)$ plane.
The image of the $k$th curve corresponds to both the IF
${\phi_k'}(b)$, and the entire component $A_{k}(b) \cos(\phi_{k}(b))$.
To extract a discretized curve $c^*$ we maximize a functional of the
energy of the curve that penalizes variation
\footnote{The implementation of this step in the Synchrosqueezing
Toolbox is a heuristic (greedy) approach that maximizes the
objective at each time index, assuming the objective has been
maximized for all previous time indices.}:
\begin{equation}
\label{eq:Cextract}
\max_{c \in \set{w_l}^n} \sum_{m=0}^{n-1}
E_\tf(w_{c_m}, t_m) -
\lambda \sum_{m=1}^{n-1} \Delta w |c_m - c_{m-1}|^2,
\end{equation}
where $E_\tf(w_l, t_m) = \log (|\tT_\tf(w_l,t_m)|^2)$ is the
normalized energy of $\tT$. The user-defined parameter $\lambda > 0$
determines the ``smoothness'' of the resulting curve estimate
(we use $\lambda = 10^5$). Its associated component
$\tf^*$ can be reconstructed via \eqref{eq:Tfrecon}, by restricting the
sum over $l$, at each $t_m$, to the neighborhood $\cN_m =
[{c^*_m-n_w},{c^*_m+n_w}]$ (we use the window size $n_w =
n_v/2$). The next curve is extracted by setting $\tT_\tf(\cN_m,t_m)=0$
for all $m$ and repeating the process above.
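A sketch of the greedy heuristic mentioned in the footnote, which
optimizes \eqref{eq:Cextract} one time index at a time, is given
below; it penalizes differences in bin index rather than in frequency,
a simplifying assumption.
\begin{verbatim}
import numpy as np

def extract_curve_greedy(T, dw, lam=1e5, eps=1e-300):
    # T   : (n_a, n) Synchrosqueezing matrix T_f (complex)
    # dw  : frequency-bin spacing Delta w
    # lam : smoothness penalty lambda
    E = np.log(np.abs(T)**2 + eps)               # normalized energy
    n_a, n = E.shape
    bins = np.arange(n_a)
    c = np.empty(n, dtype=int)
    c[0] = np.argmax(E[:, 0])
    for m in range(1, n):
        # best bin at time m given the previous choice c[m-1]
        c[m] = np.argmax(E[:, m] - lam * dw * (bins - c[m-1])**2)
    return c
\end{verbatim}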
\section{Consistency and Stability of Synchrosqueezing}
\label{sec:SStheory}
We first review the main theorem on wavelet-based
Synchrosqueezing, as developed in \cite{Daubechies2010} (Thm. \ref{SSThm}).
Then we show that the components extracted via Synchrosqueezing
are stable to bounded perturbations such as noise and
discretization error.
We specify a class of functions on which these results hold. In
practice, Synchrosqueezing works on a wider function class.
\begin{defn}[Sums of Intrinsic Mode Type (IMT) Functions]
The space $\mathcal{A}_{\epsilon,d}$ of superpositions
of IMT functions, with smoothness $\epsilon$ and separation $d$,
consists of functions having the form
$f(t)=\sum_{k=1}^{K}f_{k}(t)$ with $f_{k}(t) = A_{k}(t)
e^{i\phi_{k}(t)}$. For $t \in \bbR$ the IF components
$\phi'_{k}$ are ordered and relatively well separated (high
frequency components are spaced further apart than low frequency
ones):
\begin{align*}
\forall t \quad \phi_{k}'(t)&>\phi_{k-1}'(t), \quad \text{and} \\
\inf_{t}\phi'_{k}(t) - \sup_{t}\phi'_{k-1}(t)
&\geq d(\inf_{t}\phi'_{k}(t)+\sup_{t}\phi'_{k-1}(t)).
\end{align*}
Functions in the class $\mathcal{A}_{\epsilon,d}$ are
essentially composed of components with time-varying
amplitudes. Furthermore, the amplitudes vary slowly, and the
individual IFs are sufficiently smooth. For each $k$,
\begin{align*}
& A_k \in L^{\infty}\cap C^{1}, \quad \phi_k \in C^{2},
\quad \phi'_k,\phi''_k \in L^{\infty}, \quad \phi'_k(t)>0, \\
& \| A'_k \| _{L^{\infty}} \leq \epsilon \| \phi'_k \|_{L^{\infty}},
\quad \text{and} \quad \| \phi''_k \|_{L^{\infty}} \leq \epsilon \| \phi'_k \|_{L^{\infty}}.
\end{align*}
\end{defn}
For the theoretical analysis, we also define the Continuous
Wavelet Synchrosqueezing transform, a smooth version of $T_f$.
\begin{defn}[Continuous Wavelet Synchrosqueezing]
Let $h\in C_{0}^{\infty}$ be a smooth function such that
$\norm{h}_{L^1}=1$. The Continuous Wavelet
Synchrosqueezing transform of function $f$, with accuracy $\delta$ and
thresholds $\epsilon$ and $M$, is defined by
\begin{equation}
S_{f,\epsilon}^{\delta,M}(b,\eta) =
\int_{\Gamma_{f,\epsilon}^{M}}
\frac{W_f(a,b) }{a^{3/2}} \delta^{-1} h\left(\frac{\abs{\eta-\omega_{f}(a,b)}}{\delta}\right) da
\label{SS}
\end{equation}
where $\Gamma_{f,\epsilon}^{M} = \set{(a,b) : a \in
[M^{-1},M],|W_{f}(a,b)|>\epsilon}$. We also denote
$S_{f,\epsilon}^{\delta} = S_{f,\epsilon}^{\delta,\infty}$ and
$\Gamma_{f,\epsilon}^{\infty} = \Gamma_{f,\epsilon}$, where the
condition $a \in [M^{-1},M]$ is replaced by $a>0$.
\end{defn}
The continuous ($S_f^{\delta}$) and discrete frequency
($T_f$) Synchrosqueezing transforms are equivalent for small
$\delta$ and large $n_v$, respectively. The frequency term $\eta$ in
\eqref{SS} is equivalent to $w_l$ in \eqref{eq:Tfdef}, and the
integrand term $\frac{1}{\delta}h(\frac{\cdot}{\delta})$ in \eqref{SS}
takes the place of constraining the frequencies to $\cW_l$ in
\eqref{eq:Tfdef}. Signal reconstruction and filtering analogues via
the continuous Synchrosqueezing transform thus reduce to
integrating $S_{f,\epsilon}^\delta$ over $\eta>0$, similar
to summing over $l$ in \eqref{eq:Tfrecon}.
The following consistency theorem was proved in \cite{Daubechies2010}:
\begin{thm}[Synchrosqueezing Consistency]
\label{SSThm}
Suppose $f \in \cA_{\epsilon,d}$. Pick a wavelet $\psi\in C^{1}$
such that its Fourier transform $\wh{\psi}(\xi)$
is supported in $[1-\Delta,1+\Delta]$ for some $\Delta<\frac{d}{1+d}$.
Then for sufficiently small $\epsilon$, Synchrosqueezing can
identify and extract the components $\set{f_k}$ from $f$:
\textbf{1}. The Synchrosqueezing plot $|S^\delta_f|$ is concentrated
around the IF curves $\{\phi_{k}'\}$.
For each $k$, define the ``scale band''
$Z_k = \{(a,b):|a\phi_{k}'(b)-1|<\Delta\}$.
For sufficiently small $\epsilon$, the FM-demodulated frequency
estimate $\omega_f$ is
accurate inside $Z_k$ where $W_f$ is sufficiently
large ($|W_{f}(a,b)|>{\epsilon^{1/3}}$):
\begin{center}
$\abs{\omega_f(a,b) - \phi_k'(b)} \leq {\epsilon^{1/3}}$.
\end{center}
\noindent Outside the scale bands $\set{Z_k}$, $W_f$ is small:
\begin{center}
$|W_{f}(a,b)|\leq{\epsilon^{1/3}}$.
\end{center}
\textbf{2}. Each component $f_k$ may be reconstructed
by integrating $S^\delta_f$ over a neighborhood around $\phi_{k}'$.
Choose the Wavelet threshold $\epsilon^{1/3}$ and
let $N_k(b) = \{\eta : |\eta-\phi'_{k}(b)|\leq{\epsilon^{1/3}}\}$.
For sufficiently small $\epsilon$, there is a constant $C_{1}$
such that for all $b\in\mathbb{R}$,
$$
\left|\lim \limits_{\delta\rightarrow0}\left(\mathcal{R}_{\psi}^{-1}
\int_{N_k(b)}
S_{f,\epsilon^{1/3}}^{\delta}(b,\eta)d\eta\right)-f_k(b)\right|\leq
C_{1} \epsilon^{1/3}.
$$
\end{thm}
Note that, as expected, Thm. \ref{SSThm} implies that components $f_k$
with low amplitude may be difficult to identify and extract (as their
Wavelet magnitudes may fall below $\epsilon^{1/3}$).
Thm. \ref{SSThm} also applies to discrete Synchrosqueezing,
with the following modifications: letting $\delta \to 0$ is equivalent
to letting $n_v \to \infty$. For reconstruction via
\eqref{eq:Tfrecon}, the integral over $\eta$ should be replaced by a sum
over $l$ in the discrete neighborhood $N_k(b) = \set{l : \abs{w_l -
\phi'_k(b)} \leq \epsilon^{1/3}}$. Finally, the threshold
$\epsilon^{1/3}$ in Thm. \ref{SSThm} part 2 can be applied numerically
by letting $\gamma > \epsilon^{1/3}$ when calculating the discrete support
$\cS_{\tf}^\gamma$.
We prove the following theorem in \cite{Brevdo2011b}:
\begin{thm}[Synchrosqueezing stability to small perturbations]
\label{SSStableThm}
The statements in Thm. \ref{SSThm} essentially still hold
if $f$ is corrupted by a small error $e$, especially for
mid-range IFs.
Let $f\in\mathcal{A}_{\epsilon,d}$ and suppose we have a corresponding
$\epsilon$, $h$, $\psi$, $\Delta$, and $Z_k$ as given in Thm. \ref{SSThm}.
Furthermore, assume that $g=f+e$, where $e$ is a bounded perturbation
such that $\norm{e}_{L^\infty} \leq C_\psi \epsilon$, where
$C_\psi^{-1}=\max(\|\psi\|_{L^{1}},\|\psi'\|_{L^{1}})$.
For each $k$ define the ``maximal frequency range'' $M_k \geq 1$ such that
$\phi'_k(t) \in [M_k^{-1},M_k]$ for all $t$. A mid-range IF is
defined as having $M_k$ near $1$.
\textbf{1}. The Synchrosqueezing plot $|S_g^\delta|$ is
concentrated around the IF curves $\set{\phi_k'}$.
For sufficiently small $\epsilon$, the FM-demodulated frequency
estimate $\omega_g$ is accurate inside $Z_k$ where $W_g$ is sufficiently large
($|W_{g}(a,b)|>M_{k}^{1/2}\epsilon+{\epsilon^{1/3}}$):
\begin{center}
$|\omega_{g}(a,b)-\phi_{k}'(b)| \leq C_2 \epsilon^{1/3}$,
\end{center}
where $C_2 = O(M_k)$.
\noindent Outside the scale bands $\set{Z_{k}}$, $W_g$ is small:
\begin{center}
$|W_{g}(a,b)|\leq M_{k}^{1/2}\epsilon+{\epsilon^{1/3}}$.
\end{center}
\textbf{2}. Each component $f_k$ may be reconstructed with
accuracy proportional to the noise magnitude and its maximal frequency
range by integrating $S_g^\delta$ over a neighborhood around $\phi_{k}'$.
Choose the wavelet threshold $M_k^{1/2}{\epsilon^{1/3}} + \epsilon$
and let $N'_k(b) = \set{\eta : |\eta-\phi'_k(b)| \leq C_2
\epsilon^{1/3}}$, where (as before) $C_2 = O(M_k)$.
For sufficiently small $\epsilon$,
$$
\left|\lim_{\delta\rightarrow0}\left(\mathcal{R}_{\psi}^{-1}
\int \limits_{N'_k(b)}
S_{g,M_{k}^{1/2}\epsilon+\epsilon^{1/3}}^{\delta,M_{k}}(b,\eta)d\eta\right)
- f_k(b)\right|\leq C_{3}\epsilon^{1/3},
$$
where $C_3 = O(M_k)$.
\end{thm}
Thm. \ref{SSStableThm} has two important implications. First,
components with mid-range IF tend to have the best estimates and lowest
reconstruction error under bounded noise. Second,
to best identify signal component $f_k$ with IF $\phi'_k \in
[M^{-1},M]$, from a noisy signal, the
threshold $\gamma$ should be chosen proportional to $M^{1/2}
\epsilon$, where $\epsilon$ is an estimate of the noise magnitude.
\subsection{Stability under Spline Interpolation}
In many applications, samples of a signal $f \in \cA$ are
only given at irregular sample points $\set{t'_m}$, and these
are spline interpolated to a function $f_s$.
Thm. \ref{SSStableThm} bounds the error incurred due to this
preprocessing:
\begin{cor}
\label{cor:splinestable}
Let $\displaystyle D = \max_m |t'_{m+1}-t'_m|$ and let
$e = f_s-f$. Then the error in the estimate of the $k$th IF of $T_{f_s}$
is $O(M_k D^{4/3})$, and the error in extracting $f_k$ is
$O(M_k D^{4/3})$.
\end{cor}
\begin{proof}
This follows from Thm. \ref{SSStableThm} and the following standard
estimate on cubic spline approximations \cite[p. 97]{Stewart1998}:
\begin{center}
$
\left\Vert e \right\Vert_{L^{\infty}}
\leq\frac{5}{384} D^4 \| f^{(4)} \|_{L^{\infty}}.
$
\end{center}
\end{proof}
Thus, we can Synchrosqueeze $f_s$ instead of $f$ and, as long as the
minimum sampling rate $D^{-1}$ is high enough, the results will match.
Furthermore, in practice errors are localized in time to areas of low
sampling rate, low component amplitude, and/or high component
frequency (see, e.g., \S\ref{sec:wmisc}).
\section{\label{sec:wmisc}Examples of Synchrosqueezing Properties}
We now provide numerical examples of several important properties of
Synchrosqueezing. First, we compare Synchrosqueezing with two
common analysis transforms.
\subsection{Comparison of Synchrosqueezing to the CWT and STFT}
We compare Synchrosqueezing to the Wavelet
transform and the Short Time Fourier Transform (STFT)
\cite{Oppenheim1999}. We show its superior precision, in both time and
frequency, at identifying components of sums of
quasi-harmonic signals.
\begin{figure}\label{fig:cmpstftwave}
\end{figure}
In Fig.~\ref{fig:cmpstftwave} we focus on a signal $s(t)$ defined on
$t \in [0,10]$, that contains an abrupt transition at $t=5$, and
time-varying AM and FM modulation. It is discretized to $n=1024$
points and is composed of the following components:
\begin{align*}
t < 5 :
s_1(t) &= .5 \cos(2 \pi (3 t)), s_2(t) = .5 \cos(2 \pi (4t)), \\
s_3(t) &= .5 \cos(2 \pi (5t)) \\
t \geq 5 :
s_1(t) &= \cos(2 \pi (.5 t^{1.5})), \\
s_2(t) &= \exp(-t/20) \cos(2 \pi (.75 t^{1.5})), \\
s_3(t) &= \cos(2 \pi t^{1.5}).
\end{align*}
We used the shifted bump wavelet (see \S\ref{sec:wcmp}) and $n_v = 32$
for both the Wavelet and Synchrosqueezing transforms, and a Hamming
window with length 300 and overlap of length 285 for the STFT. These
STFT parameters focused on optimal precision in frequency, but not in
time \cite{Oppenheim1999}.
For $t<5$, the harmonic components of $s(t)$ are
clearly identified in the Synchrosqueezing plot $T_s$
(Fig.~\ref{fig:cmpstftwave}(d)) and the STFT plot
(Fig.~\ref{fig:cmpstftwave}(b)), though the
frequency estimate is more precise in $T_s$. The higher frequency
components are better estimated up to the singularity at $t=5$ in
$T_s$, but in the STFT there is mixing at the singularity. For $t \geq
5$, the frequency components are more clearly visible in $T_s$ due
to the smearing of lower frequencies in the STFT.
The temporal resolution in the STFT is also significantly lower than
for Synchrosqueezing due to the selected parameters. A
shorter window in the STFT will provide higher temporal
resolution, but lower frequency resolution and more smearing between
the three components.
\subsection{\label{sec:ssnonunif}Nonuniform Sampling and Splines}
We now demonstrate how Synchrosqueezing and extraction work for a
more complicated signal that contains multiple time-varying
amplitude and frequency components, and has been irregularly
subsampled. Let
\begin{align}
\label{eq:fsum}
f(t) &= \cos(4 \pi t)\\
&+ (1+0.2\cos(2.5t))\cos(2\pi(5t+2t^{1.2})) \nonumber \\
&+ e^{-0.2t}\cos(2\pi(3t+0.2\cos(t))), \nonumber
\end{align}
and let the sampling times be perturbations of uniformly spaced times
having the form $t'_{m}=\Delta t_1 m + \Delta t_2 u_{m}$, where
$\Delta t_2 < \Delta t_1$ and $\{u_{m}\}$ is sampled from
the uniform distribution on $[0,1]$. Here we fix $\Delta
t_1=11/180$ and $\Delta t_2=11/600$. This leads to $\approx 160$
samples on the interval $t \in [0,10]$. To correct for nonuniform
sampling, we fit a spline through $(t'_m,f(t'_m))$ to get the
function $f_s(t)$ and discretize on the finer
grid $t_m = m \Delta t$, with $\Delta t=10/1024$ and
$m=0,\ldots,1023$. The resulting vector, $\tf_s$,
is a discretization of the original signal plus a spline
error term. Fig.~\ref{fig:nonunif}(a) shows
$\tf_s$ for $t \in [2,8]$.
\begin{figure}\label{fig:nonunif}
\end{figure}
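The construction of $\tf_s$ can be sketched in a few lines of
NumPy/SciPy (the random seed is illustrative):
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)

def f(t):
    return (np.cos(4*np.pi*t)
            + (1 + 0.2*np.cos(2.5*t)) * np.cos(2*np.pi*(5*t + 2*t**1.2))
            + np.exp(-0.2*t) * np.cos(2*np.pi*(3*t + 0.2*np.cos(t))))

# Irregular sampling times t'_m = dt1*m + dt2*u_m on [0, 10]
dt1, dt2 = 11/180, 11/600
m = np.arange(int(10/dt1))
tp = dt1*m + dt2*rng.uniform(size=m.size)

# Spline through the irregular samples, then discretize on the fine grid
spline = CubicSpline(tp, f(tp))
t = np.arange(1024) * (10/1024)
f_s = spline(t)
\end{verbatim}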
Figs.~\ref{fig:nonunif}(b-e) show the results of
Synchrosqueezing and component extraction of $\tf_s$, for $t \in
[2,8]$. All three components are well separated in
the TF domain. The second component is the most difficult to
reconstruct, as it contains the highest frequency information. Due to stability
(Thm. \ref{SSStableThm} and Cor. \ref{cor:splinestable}), extraction
of components with mid-range IFs is more stable to the error $e(t)$.
Fig.~\ref{fig:nonunif} shows that reconstruction errors are
time localized to the locations of errors in $\tf_s$.
\subsection{White Noise and Reconstruction}
We take the signal $f(t)$ of \eqref{eq:fsum}, now regularly sampled on
the fine grid with $\Delta t = 10/1024$ ($n=1024$ samples) as before,
and corrupt it with white Gaussian noise having a standard deviation
of $\sigma_N = 1.33$. This signal, $\tf_N$ (see
Fig.~\ref{fig:noise}(a)) has an SNR of $-1$ dB.
\begin{figure}\label{fig:noise}
\end{figure}
Figs.~\ref{fig:noise}(b-e) show the results of
Synchrosqueezing and component extraction of $\tf_N$, for $t \in
[2,8]$. As seen in
Fig.~\ref{fig:noise}(b), most of the additional energy, caused by
the white noise, appears in the higher frequencies.
Again, all three components are well separated in
the TF domain, though now the third, lower-amplitude, component
experiences a ``split'' at $t \approx 6.5$.
Reconstruction of signal components is less reliable in
locations of high frequencies and low magnitudes (note the axis in
Fig.~\ref{fig:noise}(e) is half that of the others).
This again numerically confirms Thm. \ref{SSStableThm}:
components with mid-range IFs and higher amplitudes are more stable to
the noise.
\section{\label{sec:wcmp}Invariance to the underlying transform}
As mentioned in \S\ref{sec:analysis} and in \cite{Daubechies2010},
Synchrosqueezing is invariant to the underlying choice of
transform. The only differences one sees in practice are due to two factors: the
time compactness of the underlying analysis atom (e.g. mother
wavelet), and the frequency compactness of this atom. That is,
$\abs{\psi(t)}$ should fall off quickly away from zero,
$\wh{\psi}(\xi)$ is ideally zero for $\xi<0$, and $\Delta$ (of
Thm. \ref{SSThm}) is small.
Fig.~\ref{fig:wcmp} shows the effect of Synchrosqueezing the
discretized spline signal $\tf_s$ of the synthetic
nonuniform sampling example in \S\ref{sec:ssnonunif}, using three
different complex CWT mother wavelets. These wavelets are:
\begin{align*}
&\textbf{a. Morlet (shifted Gaussian)} \\
&\qquad \wh{\psi}_a(\xi) \propto \exp(-(\mu-\xi)^2/2),
\quad \xi \in \bbR\\
&\textbf{b. Complex Mexican Hat} \\
&\qquad \wh{\psi}_b(\xi) \propto \xi^2 \exp(-\sigma^2 \xi^2/2),
\quad \xi > 0\\
&\textbf{c. Shifted Bump} \\
&\qquad \wh{\psi}_c(\xi) \propto
\exp\left(- (1- ((\xi-\mu)/\sigma )^2 )^{-1} \right), \\
&\qquad \xi \in [\sigma(\mu-1), \sigma(\mu+1)]
\end{align*}
where for $\psi_a$ we use $\mu=2\pi$, for $\psi_b$ we use $\sigma=1$,
and for $\psi_c$ we use $\mu=5$ and $\sigma=1$.
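For reference, the three mother wavelets can be sketched in the
Fourier domain as follows (all up to normalization; vectorized NumPy
with array-valued $\xi$ assumed):
\begin{verbatim}
import numpy as np

def morlet_hat(xi, mu=2*np.pi):
    # (a) Morlet (shifted Gaussian), up to normalization
    return np.exp(-0.5 * (mu - xi)**2)

def cmhat_hat(xi, sigma=1.0):
    # (b) Complex Mexican hat, up to normalization (zero for xi <= 0)
    xi = np.asarray(xi, dtype=float)
    return np.where(xi > 0, xi**2 * np.exp(-0.5 * sigma**2 * xi**2), 0.0)

def bump_hat(xi, mu=5.0, sigma=1.0):
    # (c) Shifted bump, up to normalization; supported where |(xi-mu)/sigma| < 1
    xi = np.asarray(xi, dtype=float)
    z = (xi - mu) / sigma
    out = np.zeros_like(xi)
    inside = np.abs(z) < 1
    out[inside] = np.exp(-1.0 / (1.0 - z[inside]**2))
    return out
\end{verbatim}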
\begin{figure}\label{fig:wcmp}
\end{figure}
The Wavelet representations of $\tf_s$ differ due to
differing mother wavelets, but the Synchrosqueezing representation is
mostly invariant to these differences. As expected from
Thm. \ref{SSThm}, more accurate representations are given
by wavelets having compact frequency support on $\xi$
away from $0$.
\section{\label{sec:ssfuture}Conclusions and Future Work}
Synchrosqueezing can be used to extract the instantaneous spectra of,
and filter, a wide variety of signals that include complex simulation data
(e.g. dynamical models), and physical signals
(e.g. climate proxies). A careful implementation runs in
$O(n_v n \log^2 n)$ time, and is stable (in theory and in practice)
to errors in these types of signals.
Areas in which Synchrosqueezing has shown itself to be an important
analysis tool include ECG analysis (respiration and T-end detection),
meteorology and oceanography (large-scale teleconnection and
ocean-atmosphere interaction), and climatology. Some of these
examples are described in the next chapter.
Additional future work includes theoretical analysis of the
Synchrosqueezing transform, including the development of
Synchrosqueezing algorithms that directly support nonuniform sampling,
the analysis of Synchrosqueezing when the signal is perturbed by
Gaussian, as opposed to bounded, noise, and extensions to higher
dimensional data.
\chapter{\label{ch:ssapp}Synchrosqueezing: Applications
\chattr{Section \ref{sec:resp} of this chapter is
based on work in collaboration with Hau-Tieng~Wu and Gaurav~Thakur.
Section \ref{sec:SSpaleo} is based on work in collaboration with
Neven~S.~Fu\v{c}kar, International Pacific Research Center,
University of Hawaii, as submitted in \cite{Brevdo2011b}.}}
\section{Introduction}
The theoretical results of Chapter \ref{ch:ss} provide important
guarantees and guidelines for the use of Synchrosqueezing in data
analysis techniques. Here, we focus on two specific applications in
which Synchrosqueezing, in combination with preprocessing methods such
as spline interpolation, provides powerful new analysis tools.
This chapter is broken down into two sections. First, we use
Synchrosqueezing and spline interpolation to estimate patients'
respiration from the R-peaks (beats) in their Electrocardiogram (ECG)
signals. This extends earlier work on the ECG-Derived Respiration problem.
Second, we examine open problems in paleoclimate studies of the last
2.5\,Myr, where Synchrosqueezing provides improved insights.
We compare a calculated solar flux index with a deposited $\delta^{18}
O$ paleoclimate proxy over this period. Synchrosqueezing cleanly
delineates the orbital cycles of the solar radiation, provides an
interpretable representation of the orbital signals of
$\delta^{18}O$, and improves our understanding of the effect that the
solar flux distribution has had on the global climate.
Compared to previous analyses of these data, the Synchrosqueezing
representation provides more robust and precise estimates in the
time-frequency plane.
\section{\label{sec:resp}ECG Analysis: Respiration Estimation}
We first demonstrate how Synchrosqueezing can be
combined with nonuniform subsampling of a single lead
ECG recording to estimate the instantaneous
frequency of, and in some cases extract, a patient's respiration signal.
We verify the accuracy of our estimates by comparing them with the
instantaneous frequency (IF) extracted from a simultaneously recorded
respiration signal.
The respiratory signal is usually recorded mechanically via, e.g.,
spirometry or plethysmography. There are two common disadvantages to
these techniques. First, they require the use of complicated devices
that might interfere with natural breathing. Second,
they are not appropriate in many situations, such as
ambulatory monitoring. However, having the respiratory signal is often
important, e.g. for the diagnosis of obstructive sleep apnea. Thus,
finding a convenient way to directly
record, or indirectly estimate, information about the respiration
signal is important from a clinical perspective.
ECG is a cheap, non-invasive, and ubiquitous technique, in which
voltage differences are passively measured between electrodes (leads)
connected to a patient's body (usually the chest and arms). The
change of the thoracic electrical impedance caused by inhalation and
exhalation, and thus physiological respiratory information, is
reflected in the ECG amplitude. The
respiration-induced distortion of ECG was first studied in
\cite{einthoven} and \cite{flaherty}. A well-known ECG-Derived
Respiration (EDR) technique \cite{MMZM85}
experimentally showed that ``electrical rotation'' during the
respiratory cycle is the main contributor to the distortion of ECG
amplitude, and that the contribution of thoracic impedance variations
is relatively minor. These prior works confirm that analyzing ECG may
enable us to estimate respiration. More details about EDR are available
in \cite{ecgbook}.
Relying on the coupling between physiological respiration and R-peak
amplitudes (the tall spikes in Fig.~\ref{fig:ecgFs}(a)), we
use the R-peaks as a proxy for the respiration signal. More
specifically, we hypothesize that the R peaks, taken as samples of the
envelope of the ECG signal $f_E(t)$, have the same IF profile as the
true respiration signal $f_R(t)$. By sampling $f_E(t)$ at the R
peaks and performing spline interpolation on the resulting samples, we
hope to see a time shifted, amplitude scaled, version of $f_R(t)$
near the respiratory frequency (0.25Hz).
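A minimal SciPy sketch of this preprocessing step is shown below; it
uses a generic peak detector as a stand-in for the
\texttt{ecgpuwave}-based R-peak extraction described in the notes at
the end of this section.
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import find_peaks

def edr_spline(ecg, fs_ecg=400.0, fs_out=50.0):
    # Detect R peaks, spline their amplitudes, and resample on a
    # uniform grid (simple stand-in for the ecgpuwave-based pipeline).
    t = np.arange(len(ecg)) / fs_ecg
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs_ecg))  # >= 0.4 s apart
    spline = CubicSpline(t[peaks], ecg[peaks])
    t_out = np.arange(0.0, t[-1], 1.0 / fs_out)
    return t_out, spline(t_out)
\end{verbatim}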
\begin{figure}\label{fig:ecgFs}
\end{figure}
\begin{figure}\label{fig:ecgTs}
\end{figure}
In Fig.~\ref{fig:ecgFs}, we show the lead II ECG signal and the
true respiration signal (via respiration belt) of a healthy $30$ year
old male, recorded over a $10$ minute interval ($t \in [0,600]$ sec).
The sampling rates of the ECG and respiration signals are respectively
400Hz and 50Hz within this interval. There are $846$ R peaks
appearing at nonuniform times $t'_{m}\in[0,600]$,
$m = 1,\ldots,846$. We run cubic spline interpolation on the R-peaks
$\{(t'_m,f_E(t'_m))\}$ to get $f_{EP}(t)$, which we discretize at 50Hz
(with $n=30000$) to get $\tf_{EP}$. Fig.~\ref{fig:ecgTs} shows the
result of running Synchrosqueezing on $\tf_R$ and $\tf_{EP}$.
The computed IF, $\tf_{EP_1}$, turns out to be a good (shifted and
scaled) approximation to the IF of the true respiration, $\tf_{R_1}$.
It can be seen, from Figs. \ref{fig:ecgFs} and
\ref{fig:ecgTs}, that the spacing of respiration
cycles in $f_R(t)$ is reflected by the main IF of $\tf_{EP}$: closer
spacing corresponds to higher IF values, and wider spacing to
lower values.
These results were confirmed by tests on several subjects.
Thanks to the stability of Synchrosqueezing (Thm. \ref{SSStableThm} and
Cor. \ref{cor:splinestable}), this algorithm has the potential
for broader clinical usage.
\subsection{Notes on Data Collection and Analysis Parameters}
The ECG signal $\tf_E$ was collected at 400Hz via an MSI MyECG E3-80.
The respiration signal $\tf_R$ was collected at 50Hz via a
respiration belt and PASCO SW750. The ECG signal was filtered to
remove the worst nonstationary noise by clipping signal values
below the $0.01\%$ quantile and above the $99.99\%$ quantile to these
quantile values. The ECG R-peaks were then extracted from $\tf_E$
by first running the PhysioNet
\texttt{ecgpuwave}~
\footnote{{ecgpuwave may be found at: \url{http://www.physionet.org/physiotools/ecgpuwave/}}}
program, followed by
a ``maximum'' peak search within a 0.2 sec window of
each of the ecgpuwave-estimated R-peaks.
For Synchrosqueezing, the
parameters $\gamma = 10^{-8}$ and $\lambda = 10^5$ were used for
thresholding $W_f$ and extracting contours from both the R-peak spline
and respiratory signals.
\section{\label{sec:SSpaleo}Paleoclimatology: Aspects of the mid-Pleistocene transition}
Next, we apply Synchrosqueezing to analyze the
characteristics of a calculated index of the incoming solar radiation
(insolation) and of
measurements of repeated transitions between glacial (cold) and
interglacial (warm) climates during the Pleistocene epoch:
$\approx$\,1.8\,Myr to 12\,kyr before the present.
The Earth's climate is a complex, multi-component, nonlinear system
with significant stochastic elements \cite{Pierrehumbert2010}. The key
external forcing field is the insolation at the top of the
atmosphere (TOA). Local insolation has predominantly harmonic
characteristics in time (diurnal cycle, annual cycle and
Milankovi\'{c} orbital cycles). However, the response of the planetary
climate, which varies at all time scales \cite{Huybers2006}, also
depends on random perturbations (e.g., volcanism), solid boundary
conditions (e.g., plate tectonics and global ice distribution),
internal variability and feedbacks (e.g., global carbon
cycle). Various paleoclimate records or proxies provide us with
information about past climates beyond observational records. Proxies
are biogeochemical tracers, i.e., molecular or isotopic properties,
imprinted into various types of deposits (e.g., deep-sea sediment),
and they indirectly represent physical conditions (e.g., temperature)
at the time of deposition. We focus on climate variability during the
last 2.5\,Myr (that also includes the late Pliocene) as recorded by
${\delta}^{18}O$ in foraminiferal shells at the bottom of the ocean
(benthic forams). Benthic $\delta^{18}O$ is the deviation of the ratio
of $^{18}O$ to $^{16}O$ in sea water with respect to the
present-day standard, as imprinted in benthic forams during their
growth. It increases with glaciation during cold climates because $^{16}O$
evaporates more readily and accumulates in ice sheets. Thus, benthic
${\delta}^{18}O$ can be interpreted as a proxy for either
high-latitude temperature or global ice volume.
\begin{figure}\label{fig:paleo_ts}
\end{figure}
We first examine a calculated element of the TOA solar forcing
field. Fig.~\ref{fig:paleo_ts}(a) shows $f_{SF}$, the mid-June
insolation at $65^{\circ}$N at 1\,kyr intervals \cite{Berger1992}. This
TOA forcing index does not encompass the full complexity of solar
radiation structure and variability, but is commonly used to gain
insight into the timing of advances and retreats of ice sheets in the
Northern Hemisphere in this period (e.g., \cite{Hays1976}).
The Wavelet and Synchrosqueezing
decompositions in Fig.~\ref{fig:paleo_wavelet}(a) and
Fig.~\ref{fig:paleo_synsq}(a), respectively, show the key harmonic
components of $f_{SF}$. The application of a shifted bump mother
wavelet (see \S\ref{sec:wcmp}) yields an upward
shift of the spectral features along the scale axis in each of the
representations in Fig.~\ref{fig:paleo_wavelet}. Therefore
the scale $a$ should not be used to directly infer periodicities.
In contrast, the Synchrosqueezing spectra in Fig.~\ref{fig:paleo_synsq}
explicitly present time-frequency (or here specifically
time-periodicity) decompositions with a sharper structure, and
are not affected by the scale shift inherent in the choice of mother
wavelet.
Fig.~\ref{fig:paleo_synsq}(a) clearly shows the presence of
strong precession cycles (at periodicities $\tau$=19\,kyr and
23\,kyr), obliquity cycles (primary at 41\,kyr and secondary at
54\,kyr), and very weak eccentricity cycles (primary periodicities at
95\,kyr and 124\,kyr, and secondary at 400\,kyr). This is in contrast
with Fig.~\ref{fig:paleo_wavelet}(a), which contains blurred and shifted
spectral structures only qualitatively similar to
Fig.~\ref{fig:paleo_synsq}(a).
We next analyze the climate response during the last 2.5\,Myr as deposited
in benthic ${\delta}^{18}O$ in long sediment cores (in which deeper
layers contain forams settled further back in time).
Fig.~\ref{fig:paleo_ts}(b) shows
$f_{CR1}$: benthic ${\delta}^{18}O$, sampled at irregular time intervals
from a single core, DSDP Site 607, in the North Atlantic
\cite{Ruddiman1989}. This signal was spline interpolated to 1\,kyr
intervals prior to the spectral analyses. Fig.~\ref{fig:paleo_ts}(c)
shows $f_{CR2}$: the benthic ${\delta}^{18}O$ stack (H07) calculated
at 1\,kyr intervals from fourteen cores (most of them from the Northern
Hemisphere, including DSDP607) using the extended depth-derived age
model \cite{Huybers2007}.
Prior to combining the cores in the H07 stack, the record mean
between 0.7\,Myr ago and the present was subtracted from each
${\delta}^{18}O$ record; this is the cause of the differing vertical
ranges in Figs. \ref{fig:paleo_ts}(b-c). Noise due
to local climate characteristics and measurement errors of
each core is reduced when we shift the spectral analysis from DSDP607
to the stack, and this is particularly visible in the finer
scales and higher frequencies.
\begin{figure}\label{fig:paleo_wavelet}
\end{figure}
\begin{figure}\label{fig:paleo_synsq}
\end{figure}
The Synchrosqueezing decomposition in
Fig.~\ref{fig:paleo_synsq}(c) is a more precise time-frequency
representation of the stack than a careful STFT analysis
\cite[Fig.~4]{Huybers2007}. In addition, it shows far less
stochasticity above the obliquity band as compared to
Fig.~\ref{fig:paleo_synsq}(b), enabling the 23\,kyr precession cycle
to become mostly coherent over the last 1\,Myr. Thanks to the
stability of Synchrosqueezing, the spectral differences below the
obliquity band are less pronounced between
Fig.~\ref{fig:paleo_synsq}(b) and Fig.~\ref{fig:paleo_synsq}(c).
Overall, the stack reveals sharper
time-periodicity evolution of the climate system than
DSDP607 or any other single core possibly could. The Wavelet
representations in Figs. \ref{fig:paleo_wavelet}(b-c)
also show this suppression of noise in
the stack (in more diffuse and scale shifted patterns).
Figs. \ref{fig:paleo_spectrum}(a) through \ref{fig:paleo_spectrum}(c)
show that the time average of Synchrosqueezing magnitudes (normalized
by $1/R_\psi$) is directly comparable with the Fourier spectrum (not
shown), but delineates the harmonic components much more clearly.
During the last 2.5\,Myr, the Earth experienced a gradual decrease in
global background temperature and $\text{CO}_2$ concentration, and an
increase in mean global ice volume accompanied by
glacial-interglacial oscillations that have intensified towards the present
(this is evident in Fig.~\ref{fig:paleo_ts}(b) and
\ref{fig:paleo_ts}(c)). The mid-Pleistocene transition, occurring
gradually or abruptly sometime between 1.2\,Myr and 0.6\,Myr ago, was
the shift from 41\,kyr-dominated glacial cycles to 100\,kyr-dominated
glacial cycles recorded in deep-sea proxies (e.g., \cite{Ruddiman1986,
Clark2006, Raymo2008}). The origin of this strong 100\,kyr cycle
in the late-Pleistocene climate and the prior incoherency of the precession
band are still unresolved questions. Both types of spectral analyses of
selected ${\delta}^{18}O$ records indicate that the climate system
does not respond linearly to external periodic
forcing.
Synchrosqueezing enables the detailed time-frequency
decomposition of a noisy, nonstationary, climate time series due to
stability (Thm. \ref{SSStableThm}) and more precisely reveals key
modulated signals that rise above the stochastic background. The gain
(the ratio of the climate response amplitude to insolation forcing
amplitude) at a given frequency or period is not constant.
The response to the 41\,kyr obliquity cycle is present almost
throughout the entire Pleistocene in
Fig.~\ref{fig:paleo_synsq}(c). The temporary incoherency of the
41\,kyr component starting about 1.25\,Myr ago
roughly coincides with the initiation of a lower frequency signal
($\approx$\,70\,kyr) that evolves into a strong 100\,kyr component in
the late Pleistocene (about 0.6\,Myr ago).
Inversion (e.g., spectral integration) of the Synchrosqueezing
decomposition of $f_{SF}$ and $f_{CR2}$ across the key
orbital frequency bands in Fig.~\ref{fig:paleo_filtered} again emphasizes
the nonlinear relation between insolation and climate evolution.
Specifically, in Fig.~\ref{fig:paleo_filtered}(a) the amplitude
of the filtered precession signal of $f_{CR2}$ abruptly rises
1\,Myr ago, while in Fig.~\ref{fig:paleo_filtered}(c) the amplitude
of the eccentricity signal shows a gradual increase.
\begin{figure}\label{fig:paleo_spectrum}
\end{figure}
Synchrosqueezing analysis of the solar insolation index and
benthic $\delta^{18}O$ makes a significant contribution in three
important ways. First, it produces spectrally sharp traces
of complex system evolution through the high-dimensional climate state
space (compare with, e.g., \cite[Fig.~2]{Clark2006}). Second, it
delineates the effects of noise on specific
frequency ranges when comparing a single core to the stack.
Low frequency components are mostly robust to noise induced by both
local climate variability and the measurement process. Third,
thanks to its precision, Synchrosqueezing allows the filtered
reconstruction of signal components within frequency bands.
Questions about the key physical processes governing large scale
climate variability over the last 2.5\,Myr can be answered with
sufficient accuracy only by precise
data analysis and the development of a hierarchy of
models at various levels of complexity that reproduce the key aspects
of Pleistocene history. The resulting dynamic stochastic
understanding of past climates may benefit our ability
to predict future climates.
\begin{figure}\label{fig:paleo_filtered}
\end{figure}
\chapter{\label{ch:sltr}Multiscale Dictionaries of Slepian Functions on the Sphere
\chattr{This chapter is based on ongoing work in collaboration with
Frederik~J.~Simons, Department of Geosciences, Princeton University.}}
\section{Introduction}
The estimation and reconstruction of signals from their
samples on a (possibly irregular) grid is an old and important problem
in engineering and the natural sciences. Over the last several
centuries, both approximation and sampling techniques have been
developed to address this problem.
Approximation theorems provide (possibly probabilistic) guarantees
that a function can be approximated to a specified precision with a
bounded number of coefficients in an alternate basis or frame. In
general, such guarantees put constraints on the function (e.g.,
differentiability) and the domain (e.g., smoothness
and compactness). Sampling theorems guarantee that a function can be
reconstructed to a given precision from either point samples or some
other form of sampling technique (e.g., linear combinations of point
observations, as in the case of compressive sensing). Again, sampling
theorems place requirements on both the sampling (e.g., grid
uniformity or a minimum sampling rate), on the original function (e.g.,
a bandlimit), and/or on the domain (e.g., smoothness, compactness).
Approximation and sampling techniques are closely linked due to
their similar goals. For example, a signal can be estimated via
its representation in an alternate basis (e.g., via Riemann
sums that numerically calculate projections of point samples onto the
basis functions). The estimate then follows by expanding the function
in the given basis. Regularization (the approximation) of the
estimate can be performed by excluding the basis elements assumed to
be zero.
\begin{figure}\label{fig:emag2}
\end{figure}
In this chapter, we focus on the approximation and sampling problem
for subsets of the sphere $\cR \subset S^2$. First, we are
interested in the representation of signals that are
bandlimited
\footnote{In the sense that they have compact support
in the spherical harmonic basis (see App.~\ref{app:fourier}).}
but whose higher-frequency content arises from within a certain
region of interest (ROI), denoted $\cR$ (see, e.g.,
Fig.~\ref{fig:emag2}). Second, we are interested in
reconstructing such functions from point samples within the ROI. To
this end, we construct \emph{multiscale} dictionaries of functions that are
bandlimited on the sphere $S^2$ and space-concentrated in contiguous
regions $\cR$.
Our constructions are purely numerical, and are motivated by
subdivision schemes for wavelet constructions on the interval. By
construction, the functions have low coherency (their pairwise inner
products are bounded in absolute value). As a result, thanks to new
methods in sparse approximation (see, e.g.,
\cite{Candes2007,Gurevich2008}), they are good candidates for the
approximation and reconstruction of signals that are locally
bandlimited.
\section{Notation and Prior Work}
\label{sec:sltrprior}
We will focus on the important prior work in signal
representation, working our way up to Slepian functions on the
sphere --- the functions upon which our construction is based.
Before proceeding, we first introduce some notation. For two
sequences (vectors) $\wh{f}, \wh{g} \in \ell^2$ and a subset of the
natural numbers $\Omega$, we define the inner product
${\ip{\wh{f}}{\wh{g}}_\Omega = \sum_{i \in \Omega} \wh{f}_i
\ol{\wh{g}_i}}$. When $\Omega$ is omitted, we assume the sum is over
all indices. The norm of $\wh{f}$ is defined by ${\norm{\wh{f}}_\Omega^2
= \ip{\wh{f}}{\wh{f}}_\Omega}$.
For square-integrable functions $f$ and $h$ on a Riemannian manifold
$(\cM,g)$ (that is, $f,h \in L^2(\cM)$), and $\cR$ some subset of
$\cM$, we denote the inner product ${\ip{f}{h}_\cR = \int_\cR f(x)
\ol{h(x)} d\mu(x)}$, where $d\mu(x)$ is the volume element associated
with the metric $g$ (see Apps. \ref{app:diffgeom}--\ref{app:fourier}).
The norm is again defined as ${\norm{f}_\cR = \sqrt{\ip{f}{f}_\cR}}$. The
type of inner product, manifold, and metric will be clear from the
context. Notation referring specifically to functions on the sphere
$S^2$ and the spherical harmonics may be found in
\S\ref{sec:fouriersphere}.
The prior literature in this area pertains to sampling,
interpolation, and basis functions on the real line, and we focus on
these next.
\subsection{Reconstruction of Bandlimited, Regularly Sampled Signals
on $\bbR$}
A signal $f(t)$ on the real line is defined as bandlimited when it has
no frequencies higher than some bandlimit $W$. That is, $\wh{f}(\omega) = 0$ for
$\abs{\omega} > W$. The simplest version of the sampling theorem, as
given by Shannon \cite[\S II]{Shannon1949}, states that if $f$ is
sampled at regular intervals with spacing at or below the
sampling limit $t_s = 1/(2W)$, it can be exactly reconstructed via
convolution with the sinc low-pass filter:
\begin{equation}
\label{eq:sampthm}
f(t) = \sum_{n = -\infty}^\infty x_n \sinc(2 W t - n),
\end{equation}
where $x_n = f(t_s n)$ are the samples and $\sinc(z) = \sin(\pi z)/(\pi z)$.
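As a concrete illustration, the following minimal sketch (in Python,
assuming only \texttt{numpy}; the cosine test signal, the number of
samples, and the evaluation point are arbitrary choices of ours)
evaluates a truncated version of the sum in \eqref{eq:sampthm}. With
finitely many samples the reconstruction is only approximate, and the
agreement improves as the number of samples grows.
\begin{verbatim}
import numpy as np

def sinc_reconstruct(samples, W, t):
    # Truncated version of (eq:sampthm): sum_n x_n sinc(2 W t - n).
    # numpy's sinc is normalized: np.sinc(z) = sin(pi z) / (pi z).
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc(2.0 * W * t - n))

# Toy example: f(t) = cos(2 pi 0.3 t) has bandlimit 0.3 < W = 0.5,
# so samples at spacing t_s = 1/(2 W) = 1 suffice.
W = 0.5
t_s = 1.0 / (2.0 * W)
n = np.arange(256)
x = np.cos(2 * np.pi * 0.3 * n * t_s)      # x_n = f(t_s n)
t0 = 100.37
print(sinc_reconstruct(x, W, t0), np.cos(2 * np.pi * 0.3 * t0))
\end{verbatim}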
Shannon also \emph{heuristically} describes \cite[\S III]{Shannon1949} that if a
bandlimited function $f$, with bandlimit~$W$, is also timelimited to
an interval $[-T,T]$ (that is, all of its samples, taken at
spacing~$t_s$, are exactly 0 outside of this interval), then it requires
$N = \floor{2 T W}$ samples on this interval to reconstruct. Thus a function
that is both time- and bandlimited as described above can be
described using only $N$ numbers---and the dimension of the space of such
functions is $N$, which is called the Shannon number.
Shannon's heuristic definition was based on the (at that time)
well-known fact that a bandlimited, \emph{substantially} spacelimited
\footnote{From Shannon's paper, we assume ``substantially'' implies
space-concentrated.}
function can be represented well with~$N$ numbers. In fact,
it is impossible to construct exactly space- and frequency- limited functions on
the line; see, e.g., the Paley-Wiener theorem
\footnote{This theorem states that the Fourier transform of a compact
function is entire. The only entire function with an accumulation
point of zeros (e.g., a compactly supported one) is the zero function.}
\cite[Thm.~7.22]{Rudin1973}.
Thus a different technique is needed for the estimation of bandlimited
functions on an interval, from samples only within that
interval. As we will describe next the optimal representation for
this $N$-dimensional space is given by the ordered basis of
Slepian functions. We will now focus on the construction of Slepian
functions, and will provide a more rigorous definition of the Shannon
number for the space of bandlimited, space-concentrated
functions.
\subsection{\label{sec:slepconstr1d}An Optimal Basis for Bandlimited Functions on the Interval}
The estimation of bandlimited signals on an interval requires the
construction of an optimal basis to represent such functions. The
Prolate Spheroidal Wave Functions (PSWF)
\cite{Landau1961,Landau1962,Slepian1961}, also known as Slepian
functions, are one such basis.
The criterion of optimal concentration is with respect to the ratio of
$L^2$ norms. Let $\cR = [-T,T]$ be the interval on the line, and
$\Omega=[-W,W]$ be the bandlimit in frequency. The PSWF are
the orthogonal set of solutions to the variational problem
\begin{align}
\label{eq:sleptime1d}
\maximize_{g \in L^2(\bbR)} \quad &\lambda = \frac{\int_\cR g^2(t)
dt}{\int_\bbR g^2(t) dt}, \\
\text{subject to} \quad & \hat{g}(\omega) = 0, \quad \omega \not\in \Omega,
\nonumber
\end{align}
where for $\alpha=1,2,\ldots$, the value $\lambda_\alpha$ is
achieved by $g_\alpha$, and we impose the orthonormality
constraint $\ip{g_\alpha}{g_{\alpha'}} = \delta_{\alpha \alpha'}$.
Here $\lambda_\alpha$ is the measure of concentration of $g_\alpha$
on~$\cR$; these eigenvalues are bounded: $0 < \lambda_\alpha < 1$.
We use the standard ordering of the Slepian functions, wherein
${\lambda_1 > \lambda_2 > \cdots}$ (the first Slepian function is
the most concentrated within $\cR$, the second is the second most
concentrated, and so on).
The orthonormal solutions to \eqref{eq:sleptime1d} satisfy the
integral eigenvalue problem \cite{Slepian1961,Simons2006b}:
\begin{align}
\label{eq:slep1dtimeint}
\int_\Omega &K_F(\omega,\omega') \wh{g}_\alpha(\omega') d\omega' = \lambda_\alpha
\wh{g}_\alpha(\omega),\ \omega \in \Omega, \quad \text{where}\\
&K_F(\omega,\omega') = \frac{\sin T(\omega-\omega')}{\pi(\omega-\omega')}.
\nonumber
\end{align}
Note that $K_F$ is a smooth, symmetric, positive-definite kernel
with eigenfunctions $\set{\wh{g}_\alpha}_\alpha$
(and associated eigenvalues $\set{\lambda_\alpha}_\alpha$).
Mercer's theorem therefore applies \cite[\S 97,98]{Riesz1956}, and we can write
\begin{equation}
\label{eq:DFmercer}
K_F(\omega,\omega') = \sum_{\alpha \geq 1} \lambda_\alpha \wh{g}_\alpha(\omega) \ol{\wh{g}_\alpha(\omega')}
\qquad \text{and} \qquad
K_F(\omega,\omega) = \sum_{\alpha \geq 1} \lambda_\alpha \abs{\wh{g}_\alpha(\omega)}^2.
\end{equation}
In practice, \eqref{eq:slep1dtimeint} can be solved exactly and
efficiently using a special ``trick'': the integral operator commutes
with a special second-order differential operator, and thus its solution
can be found via the solution of an ODE of Sturm-Liouville type. The
values of the $g_\alpha$'s, or of the $\wh{g}_\alpha$'s, can be evaluated
exactly at any set of points on their domains, via the factorization
of a special tridiagonal matrix~\cite{Simons2010,Slepian1978}. This
construction is beyond the scope of this chapter.
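Although the continuous construction is not treated here, the discrete
analogue of these functions (the discrete prolate spheroidal sequences)
is readily available in standard software, and illustrates the
eigenvalue clustering discussed next. A sketch, assuming
\texttt{scipy} is installed; the sequence length and time-half-bandwidth
product are arbitrary choices of ours.
\begin{verbatim}
import numpy as np
from scipy.signal.windows import dpss

# Discrete analogue of the 1D Slepian construction: M samples and
# time-half-bandwidth product NW (roughly T*W in the notation above).
M, NW = 256, 4
K = int(2 * NW) + 4                     # a few more than the ~Shannon number
tapers, ratios = dpss(M, NW, Kmax=K, return_ratios=True)
# tapers has shape (K, M); ratios are the concentration eigenvalues,
# close to 1 for the first ~2*NW sequences and dropping sharply afterwards.
print(np.round(ratios, 4))
\end{verbatim}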
It has been shown that the Slepian functions are all either very well
concentrated within the interval, or very well concentrated outside
of it. That is, the eigenvalues of the Slepian functions are
all either nearly $1$ or nearly $0$~\cite[Table~1]{Slepian1961}.
Furthermore, the values of any bandlimited signal with concentration
$1-\epsilon$ on the interval can be estimated to within a squared error
(in $L^2$) bounded by $\epsilon$, for arbitrarily small
$\epsilon > 0$, by approximating this signal using a linear
combination of all the Slepian functions whose eigenvalues are near
unity~\cite[Thm.~3]{Landau1962}, and no fewer~\cite[Thm.~5]{Landau1962}.
The number of Slepian functions required to approximate a bandlimited,
space-concentrated function can therefore be calculated by summing
their energies:
\begin{equation}
\label{eq:shanline}
N_{T,W} = \sum_{\alpha \geq 1} \lambda_\alpha
= \sum_{\alpha \geq 1} \lambda_\alpha \int_\Omega \abs{\wh{g}_\alpha(\omega)}^2 d\omega
= \int_{\Omega} K_F(\omega,\omega) d\omega = \frac{2 T W}{\pi},
\end{equation}
where we used \eqref{eq:DFmercer} after swapping the sum and integral.
The difference between this value of $N$ and Shannon's version is due
to changes in normalization of Fourier transforms and the
constant $\pi$ inside the $\sin$ in \eqref{eq:sampthm}.
\subsection{An Optimal Basis for Bandlimited Functions on Subregions
of the Sphere $S^2$}
\label{sec:sltrsphere}
The construction of Slepian functions on the sphere proceeds similarly
to the interval case, with differences due to the compactness of
$S^2$. Let $\cR$ be a closed and connected subset of $S^2$ and let
the frequency bandlimit
be ${\Omega=\set{(l,m) : 0 \leq l \leq L, -l \leq m \leq l}}$.
The set of square-integrable functions on $S^2$ with bandlimit~$\Omega$,
which we will call $L^2_\Omega(S^2)$, is an $(L+1)^2$
dimensional space. This follows because any $f \in L^2_\Omega(S^2)$
can be written as
\footnote{The spherical harmonics $Y_{lm}$ and their properties are
given in App.~\ref{app:fourier}.}
\begin{equation}
\label{eq:plm2xyz}
f(\theta,\phi) = \sum_{l=0}^L \sum_{m=-l}^l \wh{f}_{lm} Y_{lm}(\theta,\phi),
\end{equation}
and therefore
$$
\dim L^2_\Omega(S^2) = \sum_{l=0}^L (2l+1) = (L+1)^2.
$$
For the rest of the chapter, we will refer to the bandlimits ``$\Omega$''
and ``$L$'' interchangeably.
We now proceed as in \cite{Simons2006b}.
Slepian functions concentrated on $\cR$ with bandlimit~$L$ are
the orthogonal set of solutions to the variational problem
\begin{align}
\label{eq:slepsphere}
\maximize_{g \in L^2(S^2)} \quad &\lambda = \frac{\int_\cR g^2(x) d\mu(x)}{\int_{S^2} g^2(x) d\mu(x)}, \\
\text{subject to} \quad & \hat{g}_{lm} = 0, \quad (l,m) \not\in \Omega,
\nonumber
\end{align}
where the value $\lambda_\alpha$ is achieved by $g_\alpha$,
$\alpha = 1,2,\ldots$, and we impose the orthonormality
constraint
\begin{equation}
\label{eq:sleponorm}
\int_{S^2} g_\alpha(x) g_{\alpha'}(x) d\mu(x) = \delta_{\alpha \alpha'}.
\end{equation}
Finally, as before, we use the standard order for the
Slepian functions: ${\lambda_1 \geq \lambda_2 \geq \cdots}$ (i.e., in
decreasing concentration). Note that in contrast to the 1D case, the
concentration inequalities are not strict due to possible geometric
degeneracy. Due to orthonormality, the Slepian functions also fulfill
the orthogonality constraint
\begin{equation}
\label{eq:slepo}
\int_{\cR} g_\alpha(x) g_{\alpha'}(x) d\mu(x) = \lambda_\alpha
\delta_{\alpha \alpha'}.
\end{equation}
The problem \eqref{eq:slepsphere} admits an integral
formulation equivalent to \eqref{eq:slep1dtimeint}. In addition, as the Fourier
basis on $S^2$ is countable, the construction can be reduced to a
matrix eigenvalue problem.
Writing $g$ in \eqref{eq:slepsphere} via its Fourier series, as in
\eqref{eq:plm2xyz}, reduces the problem~to~\cite[Eq.~33]{Simons2010}
\begin{align}
\label{eq:slepspheremat}
\sum_{(l',m') \in \Omega}& K_{lm,l'm'} \wh{g}_{l'm'} = \lambda \wh{g}_{lm},
\quad (l,m) \in \Omega, \\
\text{where } &K_{l'm',lm} = \ip{Y_{lm}}{Y_{l'm'}}_\cR.
\nonumber
\end{align}
From now on, we will denote by $K$ the $(L+1)^2 \x (L+1)^2$
matrix with coefficients $K_{l'm',lm}$. The vector $K \wh{g}$ is thus
an $(L+1)^2$-element vector, indexed by coefficients $(l,m)$, with $(K
\wh{g})_{lm} = \sum_{(l',m') \in \Omega} K_{lm,l'm'} \wh{g}_{l'm'}$
for any $(l,m) \in \Omega$. The matrix $K$ is called the spectral
localization kernel; it is real, symmetric and positive-definite.
We can now rewrite \eqref{eq:slepsphere} as a proper eigenvalue
problem: we solve
\begin{equation}
\label{eq:slepsphereeig}
K \wh{G} = \wh{G} \Lambda
\end{equation}
where $\wh{G} = \left(\wh{g}_1~\cdots~\wh{g}_{(L+1)^2}\right)$ is the
orthonormal matrix of eigenvectors and the
diagonal matrix $\Lambda$ is composed of eigenvalues in decreasing
order: ${\Lambda = \diag{\lambda_1~\lambda_2~\cdots~\lambda_{(L+1)^2}}}$.
As $K$ is positive-definite and symmetric, its eigenvectors,
${\wh{g}_\alpha,~\alpha=1,2,\cdots,(L+1)^2}$, form an orthogonal set
that spans the space $L^2_\Omega(S^2)$. The spatial functions can be
easily calculated via \eqref{eq:plm2xyz} and efficient recursion
formulas for the spherical harmonics.
The Shannon number, $N_{\abs{\cR'},L}$, is again defined as the sum of the
eigenvalues,
\begin{equation}
\label{eq:shansphsum}
N_{\abs{\cR'},L} = \sum_{\alpha = 1}^{(L+1)^2} \lambda_\alpha = \text{Tr}(\Lambda)
= \text{Tr}(K).
\end{equation}
It can also be shown (see,~e.g.,~\cite[\S4.3]{Simons2006b}) that the
Shannon number is given by a formula similar~to~\eqref{eq:shanline}:
\begin{equation}
\label{eq:shansph}
N_{\abs{\cR'},L} = \frac{\abs{\cR'}}{4 \pi} (L+1)^2,
\end{equation}
where $\abs{\cR'} = \int_{x \in \cR'} d\mu(x)$ is the area of $\cR'$.
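Once the localization kernel $K$ is available (its computation is the
subject of the next subsection), the Slepian basis and both versions of
the Shannon number follow from a dense symmetric eigendecomposition. A
minimal sketch in Python, assuming \texttt{numpy} and that $K$ is given
as an array; the helper name and its arguments are ours.
\begin{verbatim}
import numpy as np

def slepian_basis(K, area, L):
    # Solve K Ghat = Ghat Lambda, cf. (eq:slepsphereeig).
    lam, Ghat = np.linalg.eigh(K)        # ascending order for symmetric K
    order = np.argsort(lam)[::-1]        # reorder: lambda_1 >= lambda_2 >= ...
    lam, Ghat = lam[order], Ghat[:, order]
    shannon_trace = lam.sum()                          # = Tr(K), (eq:shansphsum)
    shannon_area = area / (4 * np.pi) * (L + 1) ** 2   # (eq:shansph)
    return Ghat, lam, shannon_trace, shannon_area
\end{verbatim}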
\subsection{\label{sec:calcslepD}Calculation of the Spectral Localization Kernel $K$}
In contrast to the 1D construction of
\S\ref{sec:slepconstr1d}, and with the exception of the cases when the
domain $\cR$ has azimuthal and/or equatorial symmetry (see, e.g.,
\cite{Simons2006b}), there is no known differential operator that
commutes with the spatial integral version of~
\eqref{eq:slepspheremat}. Nevertheless, thanks to the discrete nature
of the problem, we can find tractable solutions using simple
numerical analysis.
We now briefly discuss the calculation of the symmetric
positive-definite matrix $K$ of \eqref{eq:slepspheremat}, basing the
discussion on the work in \cite[\S4.2]{Simons2007}. As the
calculation of the largest eigenvalues and associated eigenvectors
of $K$ can be performed using standard efficient iterative solvers,
the main computational complexity lies in constructing the matrix
itself.
The problem of calculating $K$ reduces to numerically
estimating the constrained spatial inner product between spherical
harmonics,
\begin{equation}
\label{eq:YlmYlpmp}
K_{lm,l'm'} = \int_\cR Y_{lm}(x) Y_{l'm'}(x) d\mu(x),
\end{equation}
when we are given the (splined) boundary $\partial\cR$ (a closed simple
curve in $S^2$). This is performed via a semi-analytic integration
over a grid. We first find the northernmost and southernmost
colatitudes, $\theta_n$ and $\theta_s$, of $\partial \cR$. For a
given colatitude $\theta$, we can find the easternmost and westernmost
points, $\phi_{e}(\theta)$ and $\phi_{w}(\theta)$, of $\partial
\cR$. If $\cR$ is nonconvex, there will be some $I(\theta)$ number of
such points, which we denote $\phi_{e,i}(\theta)$~and~
${\phi_{w,i}(\theta), i=1,\ldots,I(\theta)}$.
The integral \eqref{eq:YlmYlpmp} thus becomes
\begin{align}
\label{eq:Ylmp1}
K_{lm,l'm'} &= \int_{\theta_s}^{\theta_n} X_{lm}(\theta)
X_{l'm'}(\theta) \Phi_{mm'}(\theta) \sin \theta d\theta, \\
\label{eq:Ylmp2}
\text{where} \quad \Phi_{mm'}(\theta) &=
\sum_{i=1}^{I(\theta)} \int_{\phi_{e,i}(\theta)}^{\phi_{w,i}(\theta)}
\sfN_{m} \sfN_{m'} \sfS_{m}(\phi) \sfS_{m'}(\phi) d\phi, \\
\sfN_{m} &= \sqrt{2 - \delta_{0m}},
\nonumber \\
\text{and} \quad \sfS_{m}(\phi) &=
\begin{cases}
\cos m \phi & \text{if } m \leq 0, \\
\sin m \phi & \text{if } m > 0.
\end{cases}
\nonumber
\end{align}
Above, $X_{lm}$ is the colatitudinal portion of $Y_{lm}$;
for more details, see App.~\ref{app:fourier}.
Equation \eqref{eq:Ylmp1} is calculated via Gauss-Legendre
integration using the Nystr\"{o}m method, first by discretizing the
colatitudinal integral into $J$ points $\set{\theta_j}_{j=1}^J$, and
then evaluating the integral \eqref{eq:Ylmp2} analytically
at each point $\theta_j$. The discretization number $J$
for the numerical integration is chosen large enough that the
\emph{spatial-domain} eigenfunctions, as calculated via
the diagonalization of $K$ and application of \eqref{eq:plm2xyz},
satisfy the Slepian orthogonality relations \eqref{eq:sleponorm} and
\eqref{eq:slepo} to within machine precision.
A second way involves the expansion of the product $Y_{lm} Y_{l'm'}$ into
spherical harmonics --- the expansion coefficients are the
quantum-mechanical Wigner~$3j$ functions, which can be calculated
recursively. The remaining integral over a single spherical harmonic
can be performed recursively in the manner of \cite{Paul1978}, which
is exact. See also \cite{Eshagh2009}.
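For completeness, a brute-force alternative is to evaluate
\eqref{eq:YlmYlpmp} by straightforward quadrature over any set of nodes
and weights covering $\cR$. The sketch below, assuming \texttt{scipy},
uses scipy's complex spherical harmonics rather than the real harmonics
of the text; because the two bases are related by a unitary
transformation, the eigenvalues of the resulting kernel are unchanged.
The function name and its arguments are ours.
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

def localization_kernel(L, theta, phi, weights):
    # Quadrature approximation of K_{lm,l'm'} = int_R conj(Y_lm) Y_l'm' dmu.
    # theta, phi : quadrature nodes inside R (colatitude, longitude, radians)
    # weights    : quadrature weights, including the sin(theta) area element
    lm = [(l, m) for l in range(L + 1) for m in range(-l, l + 1)]
    # B[i, j] = Y_{l_j m_j}(x_i); scipy's argument order is (m, l, lon, colat).
    B = np.column_stack([sph_harm(m, l, phi, theta) for (l, m) in lm])
    K = (B.conj().T * weights) @ B       # Hermitian, positive semi-definite
    return K, lm
\end{verbatim}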
\section{\label{sec:sltrtree}Multiscale Trees of Slepian Functions}
We now turn our focus to numerically constructing a dictionary $\cD$ of
functions that can be used to approximate mostly low bandwidth signals
on the sphere. As we will see in the next section, this dictionary
allows for the reconstruction of a variety of signals from their point
samples.
To construct $\cD$, we first need some definitions. Let $\cR \subset
S^2$ be a simply connected subset of the sphere. Let $L$ be the
bandwidth: the dictionary $\cD$ will be composed of functions
bandlimited to harmonic degrees $0 \leq l \leq L$. The construction
is based on a binary tree. Choose a positive integer (the node
capacity) $n_b$; each node of the tree corresponds to the first $n_b$
Slepian functions with bandlimit $L$ and concentrated on a subset
$\cR' \subset \cR$. The top node of the tree corresponds to the entire
region $\cR$, and each node's children correspond to a division of
$\cR'$ into two roughly equally sized subregions (the subdivision scheme
will be described soon). As the child nodes are concentrated in
disjoint subsets of $\cR'$, their corresponding functions, and those of
their descendants, are effectively mutually incoherent.
\begin{figure}\label{fig:sltrdiag}
\end{figure}
We now fix a height $H$ of the tree: the number of times to subdivide
$\cR$. The height is chosen so that the subregions at the deepest level
each support roughly $n_b$ well concentrated functions.
That is, we find the minimum integer $H$ such that
$$
n_b \geq N_{2^{-H} \abs{\cR}, L}
$$
with the solution
$$
H = \ceil{\log_2\left(\frac{\abs{\cR}}{4 \pi} \frac{(L+1)^2}{n_b}\right)}.
$$
A complete binary tree with height $H$ has $2^{H+1}-1$ nodes, so from
now on we will denote the dictionary
$$
\cD_{\cR,L,n_b} =
\set{d^{(1,1)},d^{(1,2)},\cdots,d^{(1,n_b)},\cdots,d^{\left(2^{H+1}-1,1\right)},
\cdots,d^{\left(2^{H+1}-1,n_b\right)}}
$$
as the set of
$\abs{\cD_{\cR,L,n_b}} = n_b \, (2^{H+1}-1)$ functions thus constructed
on region $\cR$ with bandlimit $L$ and node capacity
$n_b$.
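A small helper, written as a sketch in Python (assuming \texttt{numpy};
the function name is ours), makes this bookkeeping explicit. With the
numbers used later for the African continent ($N_{\text{Africa},36}
\approx 79$ and $n_b = 1$) it yields $H = 7$ and $255$ dictionary
elements, consistent with the element indices appearing in
Fig.~\ref{fig:sleptrafrica}.
\begin{verbatim}
import numpy as np

def tree_size(area, L, n_b):
    # Height H and dictionary size for region area |R|, bandlimit L,
    # and node capacity n_b, following the two formulas above.
    H = int(np.ceil(np.log2(area / (4 * np.pi) * (L + 1) ** 2 / n_b)))
    H = max(H, 0)                        # guard against very small regions
    return H, n_b * (2 ** (H + 1) - 1)   # number of elements in the dictionary
\end{verbatim}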
Fig.~\ref{fig:sltrdiag} shows the tree diagram of the
subdivision scheme. We use the standard enumeration of nodes wherein
node $(j,\cdot)$ is subdivided into child nodes $(2j,\cdot)$ and
$(2j+1,\cdot)$, and at a level $0 \leq h \leq H$, the nodes are
indexed by $2^h \leq j \leq 2^{h+1}-1$. More specifically, for
every internal node ${j = 1,2,\ldots,2^{H}-1}$, we have ${\cR^{(j)} = \cR^{(2j)} \cup \cR^{(2j+1)}}$.
Furthermore, letting $g^{\cR'}_\alpha$ be the $\alpha$'th Slepian
function on $\cR'$ (the solution to \eqref{eq:slepsphere} with
concentration region $\cR'$), we have that
$$
d^{(j,\alpha)} = g^{\cR^{(j)}}_\alpha.
$$
Fig.~\ref{fig:sleptrafrica} shows an example of the construction when
$\cR$ is the African continent. Note how, for example, $d^{(4,1)}$
and $d^{(5,1)}$ are the first Slepian functions associated with
the subdivided domains of $\cR^{(2)}$.
\begin{figure}
\caption{Tree construction on the African continent: panels show the
dictionary elements $d^{(1,1)}$, $d^{(2,1)}$, $d^{(3,1)}$, $d^{(4,1)}$,
$d^{(5,1)}$, $d^{(6,1)}$, $d^{(250,1)}$, $d^{(251,1)}$, $d^{(252,1)}$,
$d^{(253,1)}$, $d^{(254,1)}$, and $d^{(255,1)}$.}
\label{fig:sleptrafrica}
\end{figure}
To complete the top-down construction, it remains to decide how to
subdivide a region $\cR'$ into equally sized subregions.
For roughly circular connected domains, the first Slepian function has
no sign changes, and the second Slepian function has a single
zero-level curve that subdivides the region into approximately equal
areas; when $\cR'$ is a spherical cap, the subdivision is
exact~\cite{Simons2006b}. We thus subdivide a region $\cR'$ into the
two nodal domains associated with the second Slepian function on that
domain; see Fig.~\ref{fig:sleptrafricasecond} for a visualization of
the subdivision scheme as applied to the African continent.
\begin{figure}
\caption{Subdivision scheme on the African continent: panels show the
second Slepian functions $d^{(1,2)}$, $d^{(2,2)}$, $d^{(3,2)}$,
$d^{(4,2)}$, $d^{(5,2)}$, and $d^{(6,2)}$, whose zero-level curves
define the subdivisions.}
\label{fig:sleptrafricasecond}
\end{figure}
\section{\label{sec:sltrprop}Concentration, Range, and Incoherence}
The utility of the Tree construction presented above depends on its
ability to represent bandlimited functions in a region $\cR$, and its
efficacy at reconstructing functions from point samples in $\cR$.
These properties, in turn, reduce to questions of concentration,
range, and incoherence:
\begin{itemize}
\item Dictionary $\cD$ is concentrated in $\cR$ if its
functions are concentrated in $\cR$.
\item The range of dictionary $\cD$ is the subspace spanned by its
  elements. Ideally, the span of the first
  $N$ Slepian functions on $\cR$ is a subspace of the range~of~$\cD$.
\item When $\cD$ is incoherent, pairwise inner products of
its elements have low amplitude: pairs of functions are
approximately orthogonal. This, in turn, is a useful property when
using $\cD$ to estimate signals from point samples, as we will see
in the next section.
\end{itemize}
In this section, we provide several techniques for analyzing these
properties for a given dictionary $\cD$, providing numerical
examples as we go along.
Unlike the eigenvalues of the Slepian functions on $\cR$, not all of
the eigenvalues of the elements of $\cD_{\cR}$ reflect their
concentration within this top-level (parent) region. We thus define
the modified concentration value
\begin{equation}
\nu^{(j,\alpha)} = \int_{\cR} \left[d^{(j,\alpha)}(x)\right]^2 d\mu(x).
\end{equation}
Recalling that $\norm{d^{(j,\alpha)}}_2 = 1$, the value
$\nu$ is simply the percentage of energy of the
$(j,\alpha)^{\text{th}}$ element that is concentrated in $\cR$. This value is
always at least as large as the element's eigenvalue,
which measures its fractional energy within the smaller subset $\cR^{(j)}$.
Figs.~\ref{fig:slepafricaeigvals},~\ref{fig:sleptrafricaeigvals},~
and~\ref{fig:sleptrafricaconc} compare the eigenvalues of the Slepian
functions on the African continent with those of the Tree
construction, as well as with numerically calculated
\footnote{Calculations performed using gridded Gauss-Legendre
integration similar to that in \S\ref{sec:calcslepD}.}
values of $\nu$.
\begin{figure}\label{fig:slepafricaeigvals}
\end{figure}
\begin{figure}\label{fig:sleptrafricaeigvals}
\end{figure}
\begin{figure}\label{fig:sleptrafricaconc}
\end{figure}
The size of dictionary $\cD_{\cR,L,n_b}$ is generally larger than
the Shannon number $N_{\abs{\cR},L}$ for any node capacity $n_b$, and
as a result it cannot form a proper basis (it has too many
functions). Ideally, then, we require that the range of
the dictionary contain the span of the first $N_{\abs{\cR},L}$ Slepian
functions. We discuss two visual approaches for determining if this
is the case.
Though the spatial nature of the construction makes it
clear that dictionary elements tend to cover the entire domain $\cR$,
we also investigate the spectral energies of these elements, and
compare them with the energies of the $N_{\abs{\cR},L}$ Slepian
functions on $\cR$, which ``essentially'' form a basis for bandlimited
functions in $\cR$. The spectral energy density of a function $f$ is
given for each degree~${l=0,\ldots,L}$~by~\cite[Eq.~38]{Dahlen2008}:
$$
S^f_l = \frac{1}{2l+1} \sum_{m=-l}^l \abs{\wh{f}_{lm}}^2.
$$
Figs. \ref{fig:slepafricaspec} and \ref{fig:sleptrafricaspec} compare
the power spectra of the Slepian functions on the African continent
with the power spectra of two dictionaries given by the tree
construction. While the Slepian functions are concentrated within
specific ranges of the harmonics, the tree construction leads to
spectra that depend on the scale (tree level) of the element. The dictionary elements with
$\alpha=1$ tend to either contain mainly low-frequency harmonics or,
for elements concentrated on smaller regions, have a more flat
harmonic response within the bandlimit. Dictionary elements
associated with higher-order Slepian functions have a more band-pass
response when concentrated on larger regions and a more flat response
within the higher frequencies of the bandlimit when concentrated on
smaller regions. So while it is clear that the Slepian functions span
the bandlimited frequencies, the spectral amplitude plots do not convey
this as clearly for the tree construction.
\begin{figure}\label{fig:slepafricaspec}
\end{figure}
\begin{figure}\label{fig:sleptrafricaspec}
\end{figure}
A complementary answer to the question of the range of $\cD$ is given by
studying the angle between the subspaces spanned by elements of $\cD$
and the first $\alpha$ functions of the Slepian basis, for
${\alpha=1,2,\ldots}$~\cite{Wedin1983}. The angle between two subspaces
$A$ and $B$ of $\bbC^n$ (having possibly different dimensions) is
given by the formula
\begin{align}
\angle(A,B) &= \min\left(\sup_{x \in A} \angle(x,B), \sup_{y \in B} \angle(y,A)\right), \quad \text{where} \\
\angle(x,B) &= \inf_{y \in B} \angle(x,y) = \cos^{-1} \frac{\norm{P_B x}}{\norm{x}}.
\end{align}
Here, $P_B$ is the orthogonal projection operator onto the space $B$, and
all of the norms are the standard Euclidean norms.
The angle $\angle(A,B)$ is symmetric, nonnegative, and zero iff
$A \subset B$ or $B \subset A$; furthermore it is invariant under
unitary transforms applied to both $A$ and $B$ (such as Fourier
synthesis), and admits a triangle inequality. It is thus a good
indicator of distance between two subspaces; furthermore, it can be
calculated accurately\footnote{See MATLAB function \texttt{subspace}.}
given two matrices whose columns span $A$ and $B$.
We can therefore identify the matrices $A$ and $B$ with the subspaces
spanned by their columns.
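Numerically, the same quantity can be obtained from the principal angles
between the two column spans. A sketch, assuming \texttt{scipy} (whose
\texttt{subspace\_angles} routine returns all principal angles; taking
the maximum recovers the quantity computed by MATLAB's
\texttt{subspace}); the helper name is ours.
\begin{verbatim}
import numpy as np
from scipy.linalg import subspace_angles

def angle_between(A, B):
    # Largest principal angle between the column spans of A and B;
    # zero when one span is contained in the other.
    return np.max(subspace_angles(A, B))

# e.g. [angle_between(Ghat[:, :a], Dhat) for a in range(1, Ghat.shape[1] + 1)]
# traces out a curve like the one in Fig. fig:slepvstrdist.
\end{verbatim}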
Let $\left(\wh{G}_{\cR,L}\right)_{1:\alpha}$ denote the matrix containing the
first $\alpha$ column vectors of $\wh{G}$ from \eqref{eq:slepsphereeig}.
Further, let $\wh{D}$ denote the $(L+1)^2 \x \abs{\cD_{\cR,L,n_b}}$ matrix
containing the spherical harmonic representations of the elements of
$\cD_{\cR,L,n_b}$. Fig.~\ref{fig:slepvstrdist} shows
$\angle(\wh{G}_{1:\alpha}, \wh{D})$ for ${\cR=\text{Africa}}$ with ${L=36}$.
The Shannon number is ${N_{\text{Africa},36} \approx 79}$ (see~
Fig.~\ref{fig:slepafricaeigvals}~and~\eqref{eq:shansphsum}).
From this figure, it is clear that while the dictionaries
$\cD_{\text{Africa},36,1}$ and $\cD_{\text{Africa},36,2}$ do not
strictly span the space of functions bandlimited to $L=36$ and
optimally concentrated in Africa, they are a close approximation: the
column span of $\left(\wh{G}_{\text{Africa},36}\right)_{1:\alpha}$ lies
nearly within the spans of
$\wh{D}_{\text{Africa},36,1}$ and $\wh{D}_{\text{Africa},36,2}$, for
$\alpha$ significantly larger than $N_{\text{Africa},36}$.
\begin{figure}\label{fig:slepvstrdist}
\end{figure}
Thanks to novel approaches to signal approximation, which we will
discuss in the next section, the requirement that the dictionary
elements form an orthogonal basis is less important than the property
of mutual incoherence. Mutual incoherence in a dictionary
means that the magnitude of the inner product (the cosine of the angle, when
elements are of unit norm) between pairs of elements is almost always very low.
Figs. \ref{fig:sleptrip} and \ref{fig:sleptripecdf} numerically show
that the two tree constructions on continental Africa have good
incoherency properties: most dictionary element pairs are nearly
orthogonal.
\begin{figure}
\caption{Pairwise inner products between the elements of the two tree
dictionaries on continental Africa, $\cD_{\text{Africa},36,1}$ (a) and
$\cD_{\text{Africa},36,2}$ (b).}
\label{fig:sleptrip}
\end{figure}
\begin{figure}\label{fig:sleptripecdf}
\end{figure}
In Fig. \ref{fig:sleptrip}(a), most pairwise inner products are
nearly zero, with the exception of pairs of nodes and their ancestors,
whose regions overlap. More specifically, as expected,
dictionary elements ${(j,1)}$, ${(2j,1)}$, ${(2j+1,1)}$, ${(2(2j),1)}$,
${(2(2j)+1,1)}$, ${(2(2j+1), 1)}$, ${(2(2j+1)+1,1), \ldots}$, tend to
have large inner products, while those elements with non-overlapping
regions do not. This exact property is also visible in the two
diagonal submatrices of Fig. \ref{fig:sleptrip}(b). In the
off-diagonals, due to the orthonormality of the construction, elements
of the form $(j,1)$ and $(j,2)$ are orthogonal. In contrast, due to
the nature of the tree subdivision scheme, elements of the
form $(2j,1)$ or $(2j+1,1)$ and $(j,2)$ have a large magnitude inner
product. However, the number of connections between nodes and their
ancestors is $O(n_b\, (2^H H))$, while the total number of pairwise
inner products is $O((n_b 2^{H})^2)$; and for reasonably sized values
of $L$ the ratio of ancestral connections to pairwise inner products
grows small (see Fig. \ref{fig:sleptripecdf}).
\section{\label{sec:sltrlin}Solution Approaches for Linear~Systems}
As we will show in \S\ref{sec:sltrapprox}, the signal approximation
problem on the sphere reduces to a linear problem of the form
\begin{equation}
\label{eq:linrecon}
y = A~x,
\end{equation}
where $y \in \bbR^m$ are samples of a function on a region ${\cR
\subset S^2}$, $x \in \bbR^n$ are the coefficients of the estimate in
the given dictionary, and ${A \in \bbR^{m \x n}}$ represents a spatial
discretization of the dictionary elements.
While the number of samples $m$ is usually larger than the number of
dictionary elements, in practice, due to the nature of sampling and
discretization (e.g.,~\cite[Fig.~1b]{Slobbe2011}), the rank~$r$~of~$A$
is significantly lower than~$m$. There are several examples
in the literature concerning spherical harmonics, for which
\eqref{eq:linrecon} is invertible (that is, with $r = n \approx m$).
For example, it can be shown via a Shannon-type theorem that when $x$
represents the spherical harmonic coefficients of a function with
bandlimit $L$, $y$ represents its samples on a special semi-regular
grid, and $A$ represents the discretization of the harmonics to the
grid, the signal $x$ can be reconstructed
exactly~\cite[Thm.~3]{Driscoll1994}. In this
case, the semi-regular grid consists of $m=4 L^2$ points on the entire
sphere $S^2$, and the number of harmonic coefficients (also the rank
of $A$) is~${r=n=(L+1)^2}$. In the limit of large bandlimit $L$,
the ratio of the samples to unknowns is $4$. As such, the sampling
scheme presented in that paper is in a sense optimal in order of
magnitude. Nevertheless, even there the rank is significantly lower
than the sample number.
Under the assumption that $r < m$, we can write the
compact~Singular~Value~Decomposition~(SVD)~\cite[Chapter~7]{Horn1985}~of~$A$:
\begin{equation}
\label{eq:svdA}
A = U^{}_+ \Sigma^{}_+ V_+^* , \quad U_+^* U^{}_+ = I_{r \x r},
\text{ and } V_+^* V^{}_+ = I_{r \x r}.
\end{equation}
Here $U_+ \in \bbR^{m \x r}$, $\Sigma_+ \in \bbR^{r \x r}$ is a
diagonal matrix of positive singular values, and
$V_+ \in \bbR^{n \x r}$. We can thus rewrite \eqref{eq:linrecon} as
$$
\wh{y} = \wh{A} x, \quad \wh{y} =
U_+^* y, \text{ and } \wh{A} = \Sigma^{}_+ V_+^*,
$$
where $\wh{A} \in \bbR^{r \x n}$ and $r$ may be smaller than $n$.
We must therefore focus on \eqref{eq:linrecon} with all three
possible cases: the overdetermined case $m > n$ (with rank $r=n$), the
case $m = n$ (with rank $r=m=n$), and especially the underdetermined
case $m < n$ (or, more generally, rank $r < n$). We first quickly review the
overdetermined case.
\subsection{\label{sec:sltrover}Overdetermined and Square Cases: $m \geq n$}
When the system \eqref{eq:linrecon} is overdetermined, there may not
be one exact solution $x$. However, in this case it is possible
to minimize the sum of squared errors
\begin{equation}
\label{eq:minlsqr}
x_M = \argmin_{x \in \bbR^n} \norm{y - A x}_2^2.
\end{equation}
Taking gradients with respect to vector $x$ and setting the resulting
system equal to zero, we get the least squares solution
\begin{equation}
\label{eq:linlsqr}
x_M = (A^* A)^{-1} A^* y.
\end{equation}
Note that \eqref{eq:minlsqr} arises from the Maximum Likelihood
formulation of a statistical model in which the samples $y$ are
observed as~\cite[Chapter~3]{Bishop2006}
$$
y = Ax + n,
$$
where $n \sim \cN(0,\sigma^2 I_{m \x m})$ is i.i.d Gaussian noise.
When $m=n=r$, \eqref{eq:linrecon} has exactly one solution. The
matrix $A$ and its conjugate are invertible and,
using the identity $(AB)^{-1} = B^{-1}A^{-1}$,
\eqref{eq:linlsqr} reduces to ${x_M = A^{-1} y}$, as expected.
\subsection{\label{sec:sltrunder}Underdetermined Case: $m < n$}
When the linear system $A$ has more unknowns than equations
(or~rank~$r$), additional modeling or regularization is required. We
discuss two possible statistical models on $x$, their limiting
cases, and the resulting computational considerations.
\subsubsection{Prior: $x$ distributed according to a Gaussian distribution}
One possible way to model the underdetermined case is via a
statistical model in which the coefficients of $x$ are generated
i.i.d. with a zero-mean, fixed variance Gaussian distribution:
\begin{align*}
y|x &= Ax + n, \\
n &\sim \cN(0,\sigma^2 I_{m \x m}), \text{ and} \\
x &\sim \cN(0,\sigma_s^2 I_{n \x n}).
\end{align*}
The maximum a posteriori (MAP) estimate follows from Bayes' rule:
\begin{align*}
\Pr(x|y) &\propto \Pr(y|x)\Pr(x) \\
&= \frac{1}{(2 \pi)^{(m+n)/2} \sigma^{m} \sigma_s^{n}}
\exp\set{-\left(\norm{y-Ax}^2_2/2 \sigma^2 + \norm{x}^2_2/2 \sigma_s^2\right)}.
\end{align*}
When $\sigma$ and $\sigma_s$ are fixed, maximizing $\Pr(x|y)$ is
equivalent to minimizing its negative logarithm. The MAP problem in
this case becomes
\begin{equation}
\label{eq:minlsqrl2}
x_{N} = \argmin_{x \in \bbR^n} \norm{y-Ax}_2^2 + \gamma^2 \norm{x}_2^2,
\end{equation}
where ${\gamma = \sigma / \sigma_s}$.
Simple calculus again provides the solution, also known as the
Tikhonov regularized solution to \eqref{eq:minlsqrl2}:
\begin{equation}
x_{N} = (A^* A + \gamma^2 I)^{-1} A^* y.
\end{equation}
When the model noise power $\sigma$ grows small with respect to the
signal power $\sigma_s$, the regularization term $\gamma$ goes to
zero. In this limiting case, we can write the limiting
solution~as~\cite[pp~421-422]{Horn1985}
\begin{equation}
\label{eq:minlsqrMP}
x_{MP} = \lim_{\gamma^2 \to 0} (A^* A + \gamma^2 I)^{-1} A^* y = A^\dagger y,
\end{equation}
where $A^\dagger$ is the Moore-Penrose generalized inverse of $A$.
For overdetermined and exactly determined systems, $A^*A$ has a well
defined inverse and $A^\dagger$ coincides with the matrix in
\eqref{eq:linlsqr}. For underdetermined systems, the matrix
$A^\dagger$ is still well defined, and is given by
$$
A^\dagger = V^{}_+ \Sigma_+^{-1} U_+^*,
$$
where $U,\Sigma,V$ are given by the SVD as in \eqref{eq:svdA}. Note
that $x_{MP}$ also solves the convex, quadratic optimization
problem
$$
x_{MP} = \argmin_{x \in \bbR^n} \norm{x}_2^2 \text{ subject to }
y = Ax.
$$
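Both regularized solutions are straightforward to compute for dense
matrices of moderate size; a minimal sketch in Python, assuming
\texttt{numpy} (the function names are ours):
\begin{verbatim}
import numpy as np

def tikhonov(A, y, gamma):
    # x_N = (A* A + gamma^2 I)^{-1} A* y, cf. the Tikhonov solution above.
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + gamma**2 * np.eye(n),
                           A.conj().T @ y)

def moore_penrose(A, y, rcond=1e-12):
    # Minimum-norm least-squares solution x_MP = A^dagger y via the SVD.
    return np.linalg.pinv(A, rcond=rcond) @ y
\end{verbatim}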
The Tikhonov and Moore-Penrose solutions \eqref{eq:minlsqrl2} and
\eqref{eq:minlsqrMP} are a common approach to solving underdetermined
inverse problems in the Geosciences literature
(see~e.g.,~\cite{Xu1998}). However, depending on the dictionary used
for the representation of $x$, the Gaussian prior may not be an
ideal one, as it encourages \emph{all} of the coefficients to be
nonzero.
One of the major underlying foundations of this work is the set of
recent results in sparse signal representation, which have shown that
overcomplete (redundant) multiscale frames and dictionaries with
certain incoherency properties can provide stable and noise-robust
estimates to ill-posed inversion problems. The basic requirement in
the estimation stage is that the solution is as ``simple'' as possible:
most of the coefficients of the solution $x$ are zero; $x$ is
\emph{sparse}. We discuss this next.
\subsubsection{Prior: $x$ distributed according to a Laplace distribution}
It is well known in the statistics community (and, most
recently, in the Compressive Sensing literature), that applying a
super Gaussian prior $\Pr(x)$ induces MAP solutions that have many
zero components and a few large magnitude ones. The zero-mean Laplace
distribution is one such particularly convenient distribution. We
model each component of $x$, $x_i$ as i.i.d. Laplace distributed:
$$
\Pr(x) = \prod_{i =1}^n \frac{1}{2b} \exp(-\abs{x_i}/b) =
\frac{1}{(2b)^n} \exp(-\norm{x}_1/b).
$$
In a manner identical to that of the previous section, for fixed noise
power $\sigma^2$ and signal scale $b>0$, the MAP problem can be
reduced to
\begin{equation}
\label{eq:minlsqrl1}
x_{L} = \argmin_{x \in \bbR^n} \norm{y-Ax}_2^2 + \eta \norm{x}_1,
\end{equation}
where ${\eta = 2 \sigma^2 / b}$. For any $\eta>0$, there is a $t(\eta) > 0$
such that the following problem has an identical solution:
\begin{equation}
\label{eq:lasso}
\argmin_{x \in \bbR^n} \norm{y-Ax}_2^2 \text{ subject to } \norm{x}_1
\leq t,
\end{equation}
where $t(\eta)$ decreases monotonically with $\eta$. Conversely, for any
given $t$ in \eqref{eq:lasso}, an $\eta$ can be found for
\eqref{eq:minlsqrl1} that provides the identical
solution~\cite[\S12.4.2]{Mallat2009}; this result essentially follows
from the method of Lagrange multipliers. In other words, the two
convex problems are completely equivalent. They are also equivalent
to the popularly studied Compressive Sensing problem
$$
\argmin_{x \in \bbR^n} \norm{x}_1 \text{ subject to } \norm{y-Ax}_2^2
\leq \epsilon,
$$
where $\epsilon$ grows monotonically in $\eta$ and/or $t^{-1}$.
Fast and robust solvers for the convex quadratic optimization problems
\eqref{eq:minlsqrl1} and \eqref{eq:lasso} have been the subject of
study for many years. For our calculations
we use the LASSO solver\footnote{The LASSO \texttt{glmnet}
package:~\url{http://www-stat.stanford.edu/~tibs/lasso.html}.}
(see,~e.g.,~\cite{Tibshirani1996}).
LASSO is an iterative solution method that provides the full solution
paths to \eqref{eq:lasso}. It is computationally efficient for small-
to medium-scale linear systems. For systems with more than tens
of thousands of unknowns, there are a variety of other techniques for
the solution of \eqref{eq:lasso} that are
much more computationally tractable, though possibly less accurate
(see, e.g.,~\cite{Daubechies2010b}~and~\cite[\S12.4,~\S12.5]{Mallat2009}).
\subsubsection{Debiasing the $\ell_1$ solution}
An alternative approach to finding a sparse solution of
\eqref{eq:linrecon} is to attempt to solve the problem
\begin{equation}
\label{eq:min0norm}
\min_{x \in \bbR^n} \norm{y-Ax} \text{ subject to } \norm{x}_0 \leq \kappa,
\end{equation}
where $\norm{x}_0 = \abs{\supp{x}}$ and
${\supp{x} = \set{i : x_i \neq 0}}$. That is, find the model that best
matches the data, subject to the number of nonzero coefficients of the
model being bounded by $\kappa$.
Unfortunately, this combinatorial problem is nonconvex and therefore
usually intractable. In some cases, it can be shown that the solution
is equivalent to the $\ell_1$ problem \eqref{eq:lasso}, when the
matrix $A$ fulfills one of a number of special properties, e.g., the
Restricted Isometry Property (RIP) or Null Space Property (NSP). This
result is, in fact, a celebrated equivalence result in Compressive
Sensing~\cite{Candes2006,Donoho2006}. Unfortunately, for physical
discretization matrices $A$, the RIP and its equivalents are
difficult to check~\cite{dAspremont2008,Juditsky2008}.
In practice, problem \eqref{eq:min0norm} can be reduced to two parts:
\begin{enumerate}
\item Estimate the support of $x$, $\cS = \supp{x}$, such that $\abs{\cS} < r$.
\item Solve the overdetermined system \eqref{eq:minlsqr} via
\eqref{eq:linlsqr} on the reduced set $\supp{x}$ by
keeping only the columns of $A$ associated with $\supp{x}$,
$A_{\supp{x}}$, when solving the least squares problem.
\end{enumerate}
A tractable solution to the first part is to use the output
of the $\ell_1$~(LASSO) estimator:
$$
\cS = \supp{x} = \supp{x_L}.
$$
The final estimate is the vector $x_D$ where
\begin{align}
\label{eq:minsqrl1deb}
x'_{D} &= \argmin_{x' \in \bbR^{\abs{\cS}}} \norm{y-A_{\cS} x'}_2^2,
\text{ and }
\\
\nonumber x_{D,i} &= x'_{D,i} \delta_{i \in \cS},
\quad i=1,\ldots,n.
\end{align}
This alternative solution is also sometimes called the \emph{debiased}
$\ell_1$ solution, because after the $\ell_1$ minimization step, the
bias of the $\ell_1$ penalty is removed via least squares on the
estimated support.
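A sketch of the two-step procedure in Python, using scikit-learn's
\texttt{Lasso} solver as a stand-in for the \texttt{glmnet} package
mentioned above (its penalty parameter is rescaled by the number of
samples, so it corresponds to $\eta$ only up to a constant); the
function name is ours.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

def l1_debias(A, y, alpha):
    # Step 1: support estimate from an l1 (LASSO) fit.
    fit = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(A, y)
    support = np.flatnonzero(fit.coef_)
    # Step 2: unregularized least squares restricted to that support.
    x = np.zeros(A.shape[1])
    if support.size > 0:
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[support] = sol
    return x, support
\end{verbatim}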
As we will show in the next section, this final combination of
support estimate based on a sparsity-inducing prior, followed by
the solution to an overcomplete least squares problem on this support
set, allows for improvements in signal approximation over currently
standard techniques in geophysics.
\section{\label{sec:sltrapprox}Signal Approximation Models for Subsets of the Sphere~$S^2$}
We now turn to the problem of estimating a signal from noisy and/or
incomplete observations on a subset $\cR$ of the sphere. Following
the notation of~\cite{Simons2010}, suppose we observe data (samples of
some function $f$) on a set of points within the region $\cR$,
consisting of signal $s$ plus noise $n$. We are interested in
estimating the signal within~$\cR$ from these samples.
While in practice, most signals of interest in geophysics are not
bandlimited, this assumption allows us to compute estimates, and can
be thought of as a regularization of the signal, similar in nature to
the assumption of a maximum frequency in audio analysis. Furthermore,
as in 1D signal processing, constraints on physical sampling and high
frequency noise always reduce the maximum determinable frequency.
See, for example, a noise analysis for satellite observations in the GRACE
mission~\cite[Fig.~1]{Wahr1998}, and the effects of noise on power
spectral estimation for the CMB dataset~\cite[Fig.~12]{Dahlen2008}.
Let ${\cX = \set{x_i}_{i=1}^m, x_i \in \cR}$ be a set of points on
which data $f$ is observed, and let the corresponding observations be~
$\set{f_i = f(x_i)}_{i=1}^m$, which we denote with the vector~$\tf$.
Then via the harmonic expansion \eqref{eq:plm2xyz}, we can write
\begin{equation}
\label{eq:fmodelsph}
f_i = \sum_{l=0}^L \sum_{m=-l}^l \wh{s}_{lm} Y_{lm}(\theta_i,\phi_i) + \nu_i,
\end{equation}
where the $\wh{s}_{lm}$ are the harmonic expansion coefficients of the
signal $s$ and $\nu_i = \nu(x_i)$ is a realization of the noise. We will also
denote by $\tnu$ the vector of samples of the noise process. Let
$Y$ be the $\abs{\cX} \x (L+1)^2$ harmonic sensing matrix, with
$$
{Y_{i,lm} = Y_{lm}(\theta_i,\phi_i)} \text{ for } i =
1,\ldots,\abs{\cX}, \text{ and } (l,m) \in \Omega.
$$
Then we can rewrite \eqref{eq:fmodelsph} as
\begin{equation}
\label{eq:fmodelsphmat}
\tf = Y \wh{s} + \tnu.
\end{equation}
By restricting the bandlimit to $L$, we restrict the function $s$ to
lie in $L^2_\Omega(S^2)$. Moreover, we are only interested in
estimating $s$ on $\cR \subset S^2$.
The Slepian functions are another basis for $L^2_\Omega(S^2)$, in which
the functions are ordered in terms of their concentration
on $\cR$. As such, we may rewrite the samples via their Slepian
expansion:
\begin{equation}
\label{eq:fmodelslep}
f_i = \sum_{\alpha=1}^n \wt{s}_\alpha g_\alpha(\theta_i, \phi_i) + \nu_i
\end{equation}
where now the $\wt{s}_\alpha$ are the Slepian expansion coefficients,
and we are free to choose $n$ anywhere from $1$ through $(L+1)^2$. By
setting $n$ to $N_{\abs{\cR},L}$, we concentrate the estimate on
$\cR$, while choosing $n=(L+1)^2$ leads to a representation equivalent
to \eqref{eq:fmodelsph}.
Using the harmonic expansion of the Slepian functions, we rewrite
\eqref{eq:fmodelslep} via the spherical harmonics:
\begin{equation}
\label{eq:fmodelslepsph}
f_i = \sum_{\alpha=1}^n \wt{s}_\alpha \sum_{l=0}^L \sum_{m=-l}^l
\wh{g}_{\alpha,lm} Y_{lm}(\theta_i,\phi_i) + \nu_i,
\end{equation}
and, using the terminology $\wh{G}$ of \S\ref{sec:sltrprop} to
denote the harmonic expansion matrix of the Slepian functions,
and the ``colon'' notation to denote restrictions of matrices and
vectors to specific index subsets, we can
rewrite~\eqref{eq:fmodelslep} in matrix notation:
\begin{equation}
\label{eq:fmodelslepmat}
\tf = Y \wh{G}_{1:n} \wt{s}_{S,{1:n}} + \tnu.
\end{equation}
As just described, by setting $n < (L+1)^2$ this model assumes that
all but the first $n$ of the Slepian coefficients $\wt{s}_S$ are zero.
Finally, using the Tree dictionary construction
of~\S\ref{sec:sltrtree} and the notation of~\S\ref{sec:sltrprop}, for
a given dictionary $\cD_{\cR,L,n_b}$ we can model the signal with the
linear model
\begin{equation}
\label{eq:fmodelsltr}
f_i = \sum_{\alpha=1}^{n_b} \sum_{j=1}^{2^{H+1}-1} \wt{s}_{T,(j,\alpha)}
d^{(j,\alpha)}_{\cR,L,n_b}(\theta_i,\phi_i) + \nu_i.
\end{equation}
Writing the dictionary elements via their harmonic expansions, the
matrix formulation of \eqref{eq:fmodelsltr} becomes
\begin{equation}
\label{eq:fmodelsltrmat}
\tf = Y \wh{D}_{\cR,L,n_b} \wt{s}_T + \tnu.
\end{equation}
As we will see next, this alternative way of describing bandlimited
functions on $\cR$ has a number of advantages.
\subsection{\label{sec:sltrinv}Regularized Inversion and Numerical Experiments}
In practice, the sensing matrix $Y$ is highly rank
deficient: depending on the sensing grid points $\cX$, its rank $r$
tends to be significantly smaller than the maximum possible value $n$,
$\dim L^2_\Omega(S^2) = (L+1)^2$. As such, the estimation of
$s$ via direct inversion of \eqref{eq:fmodelsphmat} is ill
conditioned: it must be regularized.
As discussed in \S\ref{sec:sltrlin}, the most common form of
regularization is via the Moore-Penrose pseudoinverse:
\eqref{eq:linlsqr} for overdetermined systems or \eqref{eq:minlsqrMP}
for underdetermined ones. Following the discussion of
\S\ref{sec:sltrsphere} and the statistical analyses
in~\cite[\S3]{Simons2010} and~\cite[\S7]{Simons2006}, the
``classically'' optimal way to estimate $\wt{s}$ is by restricting the
reconstruction to be concentrated within $\cR$: that is, first by
choosing a small $n$ in \eqref{eq:fmodelslepmat}, such that $n < r$,
and then applying \eqref{eq:linlsqr} to estimate $\wt{s}_S$. In
practice, we can consider $n$ ranging from $1$ to $N_{\abs{\cR},L}$
because the Shannon number is less than rank $r$. We will call this
first estimation method Slepian~Truncated~Least~Squares~(STLS).
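In matrix terms the STLS estimate is simply a least-squares fit of the
first $n$ Slepian coefficients. A sketch in Python, assuming
\texttt{numpy} and that $Y$, $\wh{G}$, and the sample vector (denoted
$\tf$ above) are available as arrays; the function name is ours.
\begin{verbatim}
import numpy as np

def stls(Y, Ghat, f, n):
    # Slepian Truncated Least Squares: fit the first n Slepian coefficients.
    A = Y @ Ghat[:, :n]                 # samples of the first n Slepian functions
    s_slep, *_ = np.linalg.lstsq(A, f, rcond=None)
    return Ghat[:, :n] @ s_slep         # back to spherical-harmonic coefficients
\end{verbatim}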
We now propose, first, a simple alternative approach: assume sparsity of
$\wt{s}$ (i.e., with respect to the Slepian basis). As the Slepian
basis was initially constructed to promote sparsity in the
representation of bandlimited functions concentrated on
$\cR$~\cite[\S3.1.2]{Simons2010}, we expect that this assumption
should lead to estimates that are equivalent to STLS, if not
better. The basic idea is to let $n=(L+1)^2$ in
\eqref{eq:fmodelslepmat}, and use the solution method
\eqref{eq:minsqrl1deb} with sparsity penalties $\eta_1 \geq \eta_2
\geq \cdots$ sufficiently large that only a few nonzero coefficients
are found in the support. We consider a range of values $\eta$ from a
maximum $\ol{\eta}$ that induces only one nonzero coefficient, to a
minimum $\ul{\eta}$ that induces $N_{\abs{\cR},L}$. We call this
estimation method Slepian~$\ell_1$~+~Debias~(SL1D). Though in this
case we do not explicitly require the estimate to be well concentrated
in $\cR$ via choice of basis functions, by minimizing the squared
error at the sample locations $\cX \subset \cR$ we expect that most
of the estimated support will be within the first Slepian functions.
With the Tree construction of \S\ref{sec:sltrtree}, we have
a new dictionary of elements that are at once concentrated in $\cR$,
bandlimited, and multiscale. As such, these dictionaries are
excellent candidates for estimation via the $\ell_1$+~Debias technique
\eqref{eq:minsqrl1deb}. This method is similar to the previous one:
apply \eqref{eq:minsqrl1deb} to the model \eqref{eq:fmodelsltrmat},
choosing a range of $\ell_1$ penalties $\eta$ that lead to between $1$
and $N_{\abs{\cR},L}$ dictionary elements in the support of
$\wt{x}_T$. We call this the Slepian~Tree~$\ell_1$~Debias~(STL1D) method.
The experiments below numerically show that the two new estimation
(inversion) methods SL1D and STL1D provide improved performance over
the classic STLS, in terms of average reconstruction error over the
domain of interest $\cR$ using a small number of coefficients, for
several important types of bandlimited signals. Furthermore, as
expected the multiscale and spatially concentrated dictionary elements
of the Tree construction provide improved estimation performance when
the signal is ``red'', i.e., when it contains more energy in the lower
harmonic components.
\subsubsection{Bandpass Filtered POMME Model}
Fig. \ref{fig:pomme4dlocal} shows a bandpass filtered version of the
radial component of Earth's crustal magnetic field, which we will call
$p(\theta,\phi)$: a preprocessed version of the
output of the POMME model~\cite{Maus2006}. The signal $p$
has been:
\begin{enumerate}
\item Bandpassed between $l_{\text{min}}=9$ and $l_{\text{max}}=36$.
\item Spatially tapered (multiplied) by the
first Slepian function bandlimited to ${L_t=18}$ and concentrated within
Africa.
\item Low-pass filtered to have maximum frequency~$L=36$~via direct
projection onto the first $(L+1)^2$ spherical harmonics using standard
Riemannian sum-integral approximations~\cite[Eq.~80]{Simons2010},
i.e., direct inversion.
\end{enumerate}
It can be shown~\cite[\S2]{Wieczorek2005} that the harmonics
of the tapered signal, at degree $l$, receive contributions from the
original coefficients in the range from $\abs{l-L_t}$ to $l+L_t$. As
a result, only the first $l_{\text{max}}-L_t$ degree coefficients are
reliable estimates of the original signal's harmonics.
Samples of $p$ are given via the forward model \eqref{eq:fmodelsph}
(with $\tnu=0$), from the low-pass filtered ``ground truth'' signal
$\wt{p}$, on the intersection of the African continent $\cR$ with the
grid
$$
\cX^* = \set{(k_\theta \Delta, k_\phi \Delta) : \Delta = 0.25^\circ,\;
k_\theta = 0, \pm 1, \ldots,\; k_\phi = 0, \pm 1, \ldots}.
$$
We denote this reduced set $\cX$; it contains $\abs{\cX} = 40250$
points. For $L=36$, as before, the Shannon number of Africa is
$N_{\text{Africa},36} \approx 79$, the dimension of the bandlimited
space is $\dim L^2_\Omega(S^2) = (L+1)^2 = 1369$, and the rank of
the discretization matrix in \eqref{eq:fmodelsph}, $Y_{1:(L+1)^2}$, is
${r=528} \ll 1369$.
\begin{figure}\label{fig:pomme4dlocal}
\end{figure}
Figs. \ref{fig:pomme4slepr}, \ref{fig:pomme4slepl1r}, and
\ref{fig:pomme4sleptrr} show intermediate results in the estimation of
$p$ via the three methods: STLS, SL1D, and STL1D, respectively.
Specifically, they show the absolute error between the original
sampled signal $\tp$, and the expansion via \eqref{eq:plm2xyz} of the
three estimates. In Fig. \ref{fig:pomme4slepr}, the number of nonzero
coefficients in the estimate is determined by the Slepian truncation
number $n$, while in Figs. \ref{fig:pomme4slepl1r} and
\ref{fig:pomme4sleptrr}, the number of nonzero coefficients is
indirectly determined by the parameter $\eta$ after the support
estimation stage, as per our earlier discussion. As such, the number
of nonzero components does not always match that of
Fig. \ref{fig:pomme4slepr}; instead, the nearest available values are
used.
\begin{figure}
\caption{Absolute estimation error of the STLS method; panels show
$e_{10}$, $e_{20}$, $e_{30}$, $e_{40}$, $e_{50}$, and $e_{60}$, where
the subscript is the number of nonzero coefficients in the estimate.}
\label{fig:pomme4slepr}
\end{figure}
\begin{figure}
\caption{Absolute estimation error of the SL1D method; panels show
$e_{10}$, $e_{20}$, $e_{30}$, $e_{42}$, $e_{50}$, and $e_{61}$, where
the subscript is the number of nonzero coefficients in the estimate.}
\label{fig:pomme4slepl1r}
\end{figure}
\begin{figure}
\caption{Absolute estimation error of the STL1D method; panels show
$e_{9}$, $e_{20}$, $e_{31}$, $e_{40}$, $e_{50}$, and $e_{62}$, where
the subscript is the number of nonzero coefficients in the estimate.}
\label{fig:pomme4sleptrr}
\end{figure}
As the dictionary elements given by the Tree construction are
localized in both scale and location, we can graphically show which
elements are ``turned on'' through the solution path, as more and more
elements in the support are chosen to be nonzero.
Fig.~\ref{fig:pomme4sleptrsuppr} shows the supporting regions
$\set{\cR^{(j,\alpha)}}$ of dictionary $\cD_{\text{Africa},36,1}$
associated with the solutions given in the corresponding panels of
Fig.~\ref{fig:pomme4sleptrr}. Clearly, larger scale dictionary
elements are chosen first; these reduce the residual error the most.
As more and more dictionary elements are added during the $\ell_1$
based inversion process, finer and finer details are included in the
reconstruction.
\begin{figure}
\caption{Supporting regions of the dictionary elements selected by the
STL1D method; panels show $\cS_{9}$, $\cS_{20}$, $\cS_{31}$, $\cS_{40}$,
$\cS_{50}$, and $\cS_{62}$, corresponding to the panels of
Fig.~\ref{fig:pomme4sleptrr}.}
\label{fig:pomme4sleptrsuppr}
\end{figure}
Fig. \ref{fig:pomme4error} compares, on a logarithmic scale, the
spatial residual errors (sum of squared differences) between the
three estimates, as a function of the number of nonzero components
allowed. Clearly, when a small number of nonzero components
is allowed, the sparsity-based estimators outperform the standard
Slepian truncation-based inversion.
\begin{figure}\label{fig:pomme4error}
\end{figure}
\begin{figure}\label{fig:pomme4solpath}
\end{figure}
To study the consistency of the $\ell_1$-based SL1D
and STL1D estimators, and as a measure of how closely the SL1D
method matches the classical truncation strategy, we plot the solution
paths of these two estimators. Fig. \ref{fig:pomme4solpath} shows
that lower-order (better concentrated in $\cR$) Slepian functions were
chosen early on, when $n$ was small. However, as more and more
nonzero indices were allowed, less well concentrated Slepian
functions, with lower magnitudes, were included in the solution,
probably as small ``tweaks'' to the estimate near the edges of the
region. Once a Slepian function was included into the solution, its
magnitude did not change much throughout the solution path (as other
elements were added).
In contrast to the behavior of SL1D, the Slepian Tree
solution chose particular elements localized to the main features of
the signal, not simply elements that are well concentrated in all of
Africa on a large scale. In addition, as the size of the support was
allowed to increase, the magnitudes of some coefficients were
decreased as new elements were added. This supports the general
statement that multiscale dictionaries, when combined with
sparsity-inducing reconstruction techniques, ``fit'' the support to
the nature of the data. Figs. \ref{fig:pomme4error} and
\ref{fig:pomme4solpath} thus help to clarify the behavior of the STL1D
estimator.
\subsubsection{White and Pink Noise}
As a second experiment, we generated multiple observations of either
pink or white noise fields, with a bandlimit of $L=36$, on the sphere
$S^2$, via randomization in the spherical harmonic representation of
$L^2_\Omega(S^2)$. We then sampled these on $\cX$ and attempted to
reconstruct them only in Africa, as in the previous example.
A random field $r$ with spectral slope $\beta$, up to degree $L$, is
defined as having the harmonic coefficients
\begin{align}
\wh{r}_{lm} &= l^{\beta/2} N_l^{-1} n_{lm} \quad (l,m) \in \Omega, \\
\text{where } n_{lm} &\operatorname*{\sim}^{\text{i.i.d.}} \cN(0,1)
\quad (l,m) \in \Omega, \nonumber \\
\text{and } N_l &= (2l+1)^{-1} \textstyle{\sqrt{\sum_{m=-l}^l n_{lm}^2}}. \nonumber
\end{align}
For a white noise process, with equal signal power across its
spectrum, $\beta = 0$. Most spatial processes in geophysics, however,
have some $\beta < 0$; their power drops off with degree $l$. Fields
with $\beta$ near $-2$ are considered to be ``pink'', while fields
with $\beta$ near $-4$ are ``red''. The more red a noise process, the
higher its spatial correlation, the less ``random'' it looks. The
earth's geopotential field, for example, is modeled as having ${\beta =
-4.036}$. In our experiments, we use $\beta=0$ to generate white
noise processes and $\beta=-2$ for pink.
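A sketch of the field generation in Python, assuming \texttt{numpy}; we
set the $l=0$ coefficient to zero to avoid the ill-defined factor
$0^{\beta/2}$ for negative $\beta$ (an implementation choice of ours,
not specified above), and the function name is likewise ours.
\begin{verbatim}
import numpy as np

def random_field_coeffs(L, beta, rng=None):
    # Harmonic coefficients of a random field with spectral slope beta,
    # following the definition above.
    if rng is None:
        rng = np.random.default_rng()
    coeffs = {0: np.zeros(1)}
    for l in range(1, L + 1):
        n_lm = rng.standard_normal(2 * l + 1)            # n_lm ~ N(0, 1)
        N_l = np.sqrt(np.sum(n_lm ** 2)) / (2 * l + 1)   # normalization N_l
        coeffs[l] = l ** (beta / 2) / N_l * n_lm
    return coeffs    # coeffs[l][m + l] corresponds to the (l, m) coefficient
\end{verbatim}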
For each of $T=200$ iterations, we generated both pink and white noise
fields $r$ (bandlimited to $L=36$) and sampled them on $\cX$ (the grid
over the African continent) to get the vectors $\tr$. As before, the
discretization matrix $Y$ is of rank $r=528$ so direct inversion is
impossible. We again performed reconstruction via the three methods
STLS, SL1D, and STL1D. As in Fig. \ref{fig:pomme4error}, for each of
the iterations we calculated the normalized residual: the sum of
squared differences between the estimates, as expanded on $\cX$, and
the original samples, normalized by the sum of squares
$\norm{\tr}_2^2$.
Figs.~\ref{fig:slepwhitenoiseavg} and~\ref{fig:sleprednoiseavg} show
the mean normalized error over the $T$ iterations for white and pink
noise, respectively. For white noise, the ability to use any of the
Slepian functions clearly provides an advantage for the SL1D
algorithm. In contrast, for pink noise the spatial localization of
the dictionary elements and the smaller size of the dictionary give an
edge to the Tree-based estimator. Clearly, however, both estimators
lead to lower normalized observation error on average, as compared
with the classically optimal Slepian truncation method STLS.
\begin{figure}\label{fig:slepwhitenoiseavg}
\end{figure}
\begin{figure}\label{fig:sleprednoiseavg}
\end{figure}
\section{Conclusion and Future Work}
We have motivated and described a construction for
dictionaries of multiscale, bandlimited functions on the sphere. When
paired with the modern inversion techniques of \S\ref{sec:sltrunder},
these dictionaries provide a powerful tool for the approximation
(inversion) of bandlimited signals concentrated on subsets of the
sphere.
The numerical examples in \S\ref{sec:sltrinv} provide
good evidence for the efficacy of the estimators SL1D and STL1D. More
simulations are required to confirm and explore their numerical
accuracy.
In addition, more theoretical analysis of the existing dictionary
constructions (e.g., their concentration properties)
is also required. When working in concert with the
$\ell_1$-based estimators, questions of coherence are especially
important~\cite{Gurevich2008,Candes2006}.
The theoretical underpinnings of the SL1D estimator have not been
studied, to our knowledge. In contrast, the equivalent
question of estimating the support of the Fourier transform of a
signal, given its (possibly nonuniform) samples, is one that has been
studied extensively in the Compressive Sensing
community (starting with, e.g.,~\cite{Candes2006,Candes2006b}).
The top-down subdivision-based scheme described in this chapter is not
the only way to construct multiscale dictionaries. Followup work may
include one or more of the following ideas:
\begin{itemize}
\item Instead of estimating an optimal height $H$ during construction,
simply prune a tree element $d^{(j,\alpha)}$ if its spectral
concentration $\lambda^{(j,\alpha)}$ or concentration in $\cR$,
$\nu^{(j,\alpha)}$, is below a minimum threshold. This allows for
more adaptive and better concentrated dictionary elements near
high-curvature borders.
\item While the dictionaries described here consist of ``summary''
functions (for $\alpha=1$), it is possible to use Gram-Schmidt
orthogonalization to construct an alternate ``difference''
dictionary by orthogonalizing each node with its parent and
sibling. Such dictionaries would be better tuned to find ``edges'',
and would provide sparser representations for mostly smooth data.
In practice, this leads to better performance of $\ell_1$-based
estimators like STL1D.
\item Other subdivision construction schemes should be considered. For
example, when the subregion $\cR$ is highly nonconvex (e.g., when
$\cR$ is the interior of the Earth's oceans), even the second
Slepian function contains more than one mode. In this case, it is
unclear how to subdivide the domain from the top down. Instead, a
bottom-up approach would work, wherein a fine grid is constructed on
the region $\cR$, and grid elements are ``merged'' until their area
is large enough that reasonably well concentrated Slepian functions
with bandwidth $L$ will fit in them.
\end{itemize}
The ultimate goal of the constructions in this chapter is an
overcomplete multiscale \emph{frame} of bandlimited functions that are
well concentrated on $\cR$, can be constructed quickly, and admit fast
forward and inverse transforms. That is, we seek a methodology
similar to the Wavelet transforms but allowing for bandlimits. The work
here should be considered a stepping stone in that direction as it
shares many of the properties of third generation Wavelets treated
elsewhere, especially the one that may be most important: numerical
accuracy in the solution of ill-posed inverse problems.
\input{ch-conclusion/conclusion}
\appendix
\chapter{Differential Geometry: Definitions \label{app:diffgeom}}
The manifold $(\cM,g)$ with metric tensor $g$ admits an atlas of charts
$\set{(\phi_\alpha,U_\alpha)}_\alpha$, where $\phi_\alpha : U_\alpha
\to \bbR^d$ is a diffeomorphism from the open subset $U_\alpha \subset
\cM$ \cite[Chs. 0,4]{doCarmo1992}.
The choice of a standard orthonormal basis in $\bbR^d$ defines a
corresponding basis for the tangent plane to $\cM$ at each $x \in
U_\alpha$, as well as a local coordinate system $\set{v^j}$ near $x$.
The components of the tensor $g$ are defined as
the inner products between the partial derivatives of the inverse
chart $\phi_\alpha^{-1}$ in this coordinate system:
$g_{ij}(x) = \ip{\partial \phi_\alpha^{-1}(v) / \partial v^i}{\partial
  \phi_\alpha^{-1}(v)/ \partial v^j}$
where $v = \phi_\alpha(x)$. The inverse tensor,
denoted by $g^{ik}(x) = (g^{-1}(x))_{ik}$, is smooth, everywhere
symmetric and positive definite because $g$ is. We use the notation
$\abs{g} = \det{(g_{ij})}$. Note that we will often drop the position
term $x$; for example, $g^{-1} = g^{-1}(x)$.
We assign to $\cM$ the standard
gradient $\grad$, inner product $\cdot$, divergence $\grad \cdot$, and
Laplacian $\lap = \grad \cdot \grad$ at a point $x$
\cite[Ch. 1]{Rosenberg1997}.
For $f$ a twice-differentiable function on $\cM$ (i.e. $f \in C^2(\cM)$)
and $\bm{f}$ and $\bm{g}$ differentiable vector fields on $\cM$,
\begin{align*}
\partial_i f(x) = \frac{\partial f(x)}{\partial v^i}
\qquad (\grad f)^j = \sum_i g^{ij}(x) \partial_i f(x)
\qquad \bm{f} \cdot \bm{g} = \sum_{i,j} g_{ij}(x) \bm{f}^i(x) \bm{g}^j(x) \\
\grad \cdot \bm{f} = \sum_i \frac{1}{\sqrt{\abs{g(x)}}} \partial_i
\left(\sqrt{\abs{g(x)}} \bm{f}^i(x) \right)
\qquad \lap f = \sum_{i,j} \frac{1}{\sqrt{\abs{g(x)}}} \partial_i
\left(\sqrt{\abs{g(x)}} g^{ij}(x) \partial_j f(x) \right),
\end{align*}
where again $v = \phi_\alpha(x)$ and $1 \leq i, j, k \leq d$.
We use the definitions above throughout, as well as the norm
definition $\norm{\bm{f}}^2 = \bm{f} \cdot \bm{f}$; thus
$\norm{\grad f}^2 = \sum_{i,j} g^{ij} \partial_i f \partial_j f$.
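As an elementary check of these coordinate formulas, the following SymPy
sketch (ours, purely illustrative) applies them to the unit sphere, whose
metric in $(\theta,\phi)$ coordinates is $\mathrm{diag}(1,\sin^2\theta)$, and
verifies that $\lap \cos\theta = -2\cos\theta$:
\begin{verbatim}
import sympy as sp

theta, phi = sp.symbols('theta phi')
coords = (theta, phi)
g = sp.Matrix([[1, 0], [0, sp.sin(theta) ** 2]])  # metric of the unit sphere
ginv = g.inv()
sqrt_detg = sp.sin(theta)                         # sqrt(det g) for 0 < theta < pi

def laplace_beltrami(f):
    # (1 / sqrt|g|) * sum_{i,j} d_i( sqrt|g| g^{ij} d_j f )
    terms = [sp.diff(sqrt_detg * ginv[i, j] * sp.diff(f, coords[j]), coords[i])
             for i in range(2) for j in range(2)]
    return sp.simplify(sum(terms) / sqrt_detg)

print(laplace_beltrami(sp.cos(theta)))            # -2*cos(theta)
\end{verbatim}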
\chapter{Spectral Theory \label{app:fourier}}
In this appendix we state some basic facts about the existence
of the Fourier transform for functions in $L^2(\bbR^n)$. We also
discuss the existence and properties of Fourier series representations
for functions in $L^2(\cM)$, where $(\cM,g)$ is a
Riemannian manifold with metric $g$ (see App. \ref{app:diffgeom}). A
comprehensive review of Fourier transforms and convolutions on general
(possibly non-compact) spaces is available in \cite[Ch. 7]{Rudin1973},
and on Riemannian Manifolds in particular in
\cite[Ch. 1]{Rosenberg1997}.
\section{Fourier Transform on $\bbR^n$}
The Fourier transform of a function $f \in L^2(\bbR^n)$ is formally defined,
for all $\omega \in \bbR^n$, as
$$
\wh{f}(\omega) = \int_{x \in \bbR^n} f(x) \ol{\psi_\omega(x)} dx,
$$
where $\psi_\omega(x) = e^{i x \cdot \omega}$ solves the eigenvalue problem
$$
\Delta \psi_\omega(x) = -\lambda_\omega^2 \psi_\omega(x), \qquad x \in \bbR^n
$$
with $\Delta f(x) = \sum_{j=1}^n \frac{\partial^2}{\partial x_j^2} f(x)$ and $\lambda_\omega =
\norm{\omega}_2$.
The Fourier inversion theorem and its extensions \cite[Thms. 7.7 and
7.15]{Rudin1973} state that $\wh{f}$ is well defined, and that $f$
can be represented via $\wh{f}$. For all $x \in \bbR^n$,
$$
f(x) = (2 \pi)^{-n} \int_{\omega \in \bbR^n} \wh{f}(\omega) \psi_\omega(x) d\omega.
$$
\section{Fourier Analysis on Riemannian Manifolds}
Suppose $\cH$ is a separable Hilbert space, e.g., $\cH = L^2(M)$ where
$M$ is a compact set. Further, suppose $T : \cH \to \cH$ is a
bounded, compact, self-adjoint operator. Then the Hilbert-Schmidt theorem
\cite[Thm. VI.16]{Reed1980} states that there is a complete
orthonormal basis $\set{\psi_k}$ for $\cH$ such that $T \psi_k(x) =
\lambda_k \psi_k(x)$, $k=0,1,\ldots$, and $\lambda_k \to 0$ as $k \to
\infty$. The Laplacian operator $T = \Delta$, defined on a compact
subset $M \subset \bbR^n$, therefore admits a complete orthonormal
basis of eigenfunctions (as its inverse is compact), and from now on we refer to
$\set{\psi_k}$ as the eigenfunctions of the Laplacian. This set is
called the Fourier basis.
We can therefore write any function $f \in L^2(M)$ as
$$
f(x) = \sum_{k=0}^\infty \wh{f}_k \psi_k(x) \qquad \text{where}
\qquad
\wh{f}_k = \int_M f(x) \ol{\psi_k(x)} dx,
$$
where $\set{\wh{f}_k}$ are called the Fourier coefficients. The
calculation of Fourier coefficients is called analysis, and the
reconstruction of $f$ by expansion in the Fourier basis is called
synthesis.
On a Riemannian manifold $(\cM,g)$ a similar analysis applies
\cite[Thm. 1.29]{Rosenberg1997}:
\begin{prop}[Hodge Theorem for Functions]
\label{prop:hodge}Let $(\cM,g)$ be a compact connected oriented
Riemannian manifold. There exists an orthonormal basis of
$L^2(\cM)$ consisting of eigenfunctions of the Laplacian. All the
eigenvalues are positive, except that zero is an eigenvalue with
multiplicity one. Each eigenvalue has finite multiplicity, and the
eigenvalues accumulate only at infinity.
\end{prop}
The Laplacian given above is the negative of the Laplace-Beltrami
operator defined in \S\ref{app:diffgeom}, and the eigenfunctions have
Neumann boundary conditions.
Fourier analysis and synthesis can be written as
$$
f(x) = \sum_{k=0}^\infty \wh{f}_k \psi_k(x), x \in \cM \qquad
\text{where} \qquad
\wh{f}_k = \int_\cM f(x) \ol{\psi_k(x)} d\mu(x),
$$
where the analysis integral above is with respect to the volume
measure induced by the metric $g$. Fourier analysis is a unitary operation (this is
known as Parseval's theorem). Note that practical definitions of the
forward and inverse Fourier operators often differ by the choice and
placement of normalization constants, in order to simplify notation
(as is the case throughout this thesis).
\subsection{Fourier Analysis on the Circle $S^1$}
On the circle, $S^1$, parametrized by $\set{\theta : \theta \in
[0,2\pi)}$ with $d\mu(\theta) = d\theta$, we have $\Delta e^{i
\theta k} = - k^2 e^{i k \theta}$ for
$k = 0, \pm 1, \pm 2, \ldots$. We can therefore decompose $L^2(S^1)$ via
projections onto $\set{e^{i \theta k}}_k$. That is, we can write
$$
f(\theta) = \sum_{k = -\infty}^\infty \wh{f}_k e^{ i \theta k} \qquad \text{where} \qquad
\wh{f}_k = \frac{1}{2 \pi} \int_{S^1} f(\theta) e^{- i \theta k} d\theta
$$
for any $f \in L^2(S^1)$.
The analysis, as defined, is an isometry up to the constant factor
$2\pi$: by Parseval's theorem, for $f, g \in L^2(S^1)$,
$$
\int_{S^1} f(\theta) \ol{g(\theta)} d\theta = 2 \pi \sum_{k=-\infty}^\infty \wh{f}_k \ol{\wh{g}_k}.
$$
This analysis can also be applied to functions on $L^2[0,2\pi]$ with
periodic boundary conditions by identifying the interval with $S^1$.
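As a concrete numerical check of this convention, consider the following
NumPy sketch (ours; it assumes the $\tfrac{1}{2\pi}$ normalization above and a
bandlimited test function, for which the equispaced quadrature is exact):
\begin{verbatim}
import numpy as np

N = 64
theta = 2 * np.pi * np.arange(N) / N                 # equispaced samples on S^1
f = 1.0 + 2 * np.cos(3 * theta) - np.sin(5 * theta)  # bandlimited test function

# Analysis: f_hat_k = (1/2pi) * integral f(theta) exp(-i k theta) dtheta,
# which for equispaced samples of a bandlimited f equals fft(f)/N.
f_hat = np.fft.fft(f) / N                  # indices k > N/2 stand for k - N

# Synthesis: f(theta_j) = sum_k f_hat_k * exp(i k theta_j)
f_rec = N * np.fft.ifft(f_hat)
print(np.allclose(f_rec.real, f))          # True
\end{verbatim}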
\subsection{Fourier Analysis on the Sphere $S^2$\label{sec:fouriersphere}}
The unit sphere, $S^2$, can be parametrized by $\set{(\theta,\phi) : \theta
\in [0,\pi], \phi \in [0,2\pi)}$ where $\theta$ is the colatitude and
$\phi$ is the longitude. In this case, the Laplacian is
$$
\Delta f(\theta,\phi) = \frac{1}{\sin \theta} \partial_\theta
\left(\sin \theta \partial_\theta f(\theta,\phi) \right)
+ \frac{1}{\sin^2 \theta} \partial_{\phi \phi} f(\theta,\phi)
$$
and the volume element is $d\mu(\theta,\phi) = \sin \theta d\phi d\theta$.
For our purposes, we are interested in real-valued functions on the
sphere. As such, we study the \emph{real-valued} eigenfunctions of
the Laplacian on $S^2$. The real surface spherical harmonics,
$\set{Y_{lm}(\theta,\phi)}$, are parametrized by
the degree $l$ and order $m$, where $l = 0, 1, \ldots$ and $m = -l,
\ldots, l$. These can be given by \cite{Simons2006}:
\begin{align}
Y_{lm}(\theta,\phi) &=
\begin{cases}
\sqrt{2} X_{l\abs{m}}(\theta) \cos(m \phi) & -l \leq m < 0, \\
X_{l0}(\theta) & m = 0,\\
\sqrt{2} X_{lm}(\theta) \sin(m \phi) & 0 < m \leq l,
\end{cases} \qquad \text{where}\\
X_{lm}(\theta) &= (-1)^m \left( \frac{(2l+1)}{4 \pi} \frac{(l-m)!}{(l+m)!} \right)^{1/2}
P_{lm} (\cos \theta),
\end{align}
and $P_{lm}(t)$ are the associated Legendre functions of degree $l$
and order $m$ \cite[\S8.1.1]{Abramowitz1964}. Each spherical
harmonic $Y_{lm}$ fulfills the eigenvalue relationship $\Delta Y_{lm}
= -l(l+1) Y_{lm}$.
We can therefore decompose $L^2(S^2,\bbR)$ via
projections onto $\set{Y_{lm}(\theta,\phi)}_{lm}$, where now
$\ol{Y_{lm}} = Y_{lm}$. That is, we can write
$$
f(\theta,\phi) = \sum_{lm} \wh{f}_{lm} Y_{lm}(\theta,\phi) \qquad \text{where} \qquad
\wh{f}_{lm} = \int_{S^2} f(\theta,\phi) Y_{lm}(\theta,\phi) d\mu(\theta,\phi),
$$
for any $f \in L^2(S^2,\bbR)$.
The analysis, as defined, is an isometry. By
Parseval's theorem, for real-valued functions
$f, g \in L^2(S^2)$,
$$
\int_{S^2} f(\theta,\phi) g(\theta,\phi) d\mu(\theta,\phi)
= \sum_{l \geq 0} \sum_{m = -l}^l \wh{f}_{lm} \wh{g}_{lm}.
$$
\singlespacing
\cleardoublepage
\ifdefined\phantomsection
\phantomsection
\else
\fi
\addcontentsline{toc}{chapter}{Bibliography}
\end{document}
\begin{document}
\title{Coloring Mixed and Directional Interval Graphs\thanks{Work on
this problem was initiated at the HOMONOLO Workshop 2021
in Nov\'a Louka, Czech Republic. G.G.\ is partially
supported by the National Science Center of Poland under grant no.\
2019/35/B/ST6/02472.}}
\begin{abstract}
A \emph{mixed graph} has a set of vertices, a set of undirected
edges, and a set of directed arcs. A \emph{proper coloring} of a
mixed graph~$G$ is a function $c$ that assigns to each vertex in~$G$
a positive integer such that, for each edge \edge{u}{v} in~$G$,
$c(u) \ne c(v)$ and, for each arc \arc{u}{v} in~$G$, $c(u)<c(v)$. For
a mixed graph~$G$, the \emph{chromatic number} $\chi(G)$ is the smallest
number of colors in any proper coloring of~$G$.
A \emph{directional interval graph} is a mixed graph whose vertices
correspond to intervals on the real line. Such a graph has an edge
between every two intervals where one is contained in the other and an
arc between every two overlapping intervals, directed towards the
interval that starts and ends to the right. \par\quad
Coloring such graphs has applications in routing edges in layered
orthogonal graph drawing according to the Sugiyama framework; the colors correspond to the tracks for
routing the edges.
We show how to recognize directional interval graphs, and how to
compute their chromatic number efficiently.
On the other hand, for \emph{mixed interval graphs},
i.e., graphs where two intersecting intervals can be connected by an
edge or by an arc in either direction arbitrarily,
we prove that computing the chromatic number is \ensuremath{\mathsf{NP}}\xspace-hard.
\keywords{Mixed graphs \and mixed interval graphs \and directed interval
graphs \and recognition \and proper coloring}
\end{abstract}
\section{Introduction}
\label{sec:intro}
A \emph{mixed graph} is a graph that contains both undirected edges
and directed arcs. Formally, a mixed graph $G$ is a tuple~$(V,E,A)$
where~$V=V(G)$ is the set of vertices, $E=E(G)$ is the set of edges,
and~$A=A(G)$ is the set of arcs. We require that any two vertices are
connected by at most one edge or arc.
For a mixed graph $G$, let $U(G)=(V(G),E')$ denote the
\emph{underlying undirected graph}, where $E'=E(G) \cup \{ \edge{u}{v}
\colon \arc{u}{v} \in A(G) \text{ or } \arc{v}{u} \in A(G)\}$.
A \emph{proper coloring} of a mixed graph $G$ is a function $c$
that assigns a positive integer to every vertex in $G$, satisfying
$c(u) \neq c(v)$ for every edge \edge{u}{v} in $G$, and $c(u) < c(v)$
for every arc \arc{u}{v} in $G$. It is easy to see that a mixed graph~$G$
admits a proper coloring if and only if the arcs of~$G$ do not induce a
directed circuit. For a mixed graph~$G$ with no directed circuit, we define the chromatic
number $\chi(G)$ as the smallest number of colors in any proper
coloring of~$G$.
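For concreteness, the following Python sketch (ours, purely illustrative;
all names are of our own choosing) checks whether a given assignment of colors
is a proper coloring of a mixed graph presented by its edge and arc lists:
\begin{verbatim}
def is_proper_coloring(colors, edges, arcs):
    """colors: dict vertex -> positive int; edges: iterable of unordered
    pairs (u, v); arcs: iterable of ordered pairs (u, v)."""
    for u, v in edges:              # undirected constraint: different colors
        if colors[u] == colors[v]:
            return False
    for u, v in arcs:               # directed constraint: strictly increasing
        if not colors[u] < colors[v]:
            return False
    return True

# Example: an edge u -- v and an arc v -> w; two colors suffice here.
print(is_proper_coloring({'u': 2, 'v': 1, 'w': 2},
                         edges=[('u', 'v')], arcs=[('v', 'w')]))  # True
\end{verbatim}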
The concept of mixed graphs was introduced by Sotskov and
Tanaev~\cite{SotskovT76} and reintroduced by Hansen, Kuplinsky, and de
Werra~\cite{HansenKW97} in the context of proper colorings of mixed
graphs. Coloring of mixed graphs was used to model problems in
scheduling with precedence constraints~\cite{Sotskov20}.
It is \ensuremath{\mathsf{NP}}\xspace-hard in general, and it
was considered for some restricted graph classes, e.g., when the
underlying graph is a tree, a
series-parallel graph, a graph of bounded
tree-width, or a bipartite graph~\cite{FurmanczykKZ08Trees,FurmanczykKZ08SP,RiesW08}. Mixed
graphs have also been studied in the context of (quasi-) upward planar
drawings~\cite{Binucci2016,Binucci2014,fkptw-upmpg-14}, and extensions of partial
orientations~\cite{BangJensenHZ18,KlavikKORSSV17}.
Let $\mathcal{I}$ be a set of closed non-degenerate intervals on the
real line. The \emph{intersection graph} of $\mathcal{I}$ is the
graph with vertex set $\mathcal{I}$ where two vertices are adjacent if
the corresponding intervals intersect. An \emph{interval graph} is a
graph~$G$ that is isomorphic to the intersection graph of some set
$\mathcal{I}$ of intervals. We call $\mathcal{I}$ an \emph{interval
representation} of $G$, and for a vertex $v$ in $G$, we write
$\mathcal{I}(v)$ to denote the interval that represents $v$. A
\emph{mixed interval graph} is a mixed graph~$G$ whose underlying
graph $U(G)$ is an interval graph.
For a set $\mathcal{I}$ of closed non-degenerate intervals on the real line, the
\emph{directional intersection graph} of $\mathcal{I}$ is a mixed
graph $G$ with vertex set $\mathcal{I}$ where, for every two vertices
$u = \sbrac{l_u,r_u}$, $v = \sbrac{l_v,r_v}$ with $u$ starting to the left of $v$, i.e., $l_u \leq l_v$,
exactly one of the following conditions holds:
\begin{align*}
\text{$u$ and $v$ are disjoint, i.e., } r_u < l_v & \iff
\text{$u$ and $v$ are independent in $G$,} \\
\text{$u$ and $v$ overlap, i.e., } l_u < l_v \leq r_u < r_v & \iff
\text{arc \arc{u}{v} is in $G$,} \\
\text{$u$ contains $v$, i.e., } r_v \leq r_u & \iff
\text{edge \edge{u}{v} is in $G$.}
\end{align*}
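For intuition, this case analysis can be expressed in a few lines of Python
(an illustrative sketch of ours; endpoints are assumed to be distinct):
\begin{verbatim}
def classify(u, v):
    """u, v: intervals (l, r) with distinct endpoints. Returns the relation
    in the directional intersection graph: 'independent', ('arc', u, v)
    directed towards the later-starting interval, or ('edge', u, v)."""
    if u[0] > v[0]:                 # ensure u starts to the left of v
        u, v = v, u
    (l_u, r_u), (l_v, r_v) = u, v
    if r_u < l_v:
        return 'independent'        # disjoint intervals
    if r_u < r_v:
        return ('arc', u, v)        # u overlaps v
    return ('edge', u, v)           # u contains v

print(classify((0, 2), (3, 4)))     # 'independent'
print(classify((0, 3), (1, 4)))     # ('arc', (0, 3), (1, 4))
print(classify((0, 5), (1, 4)))     # ('edge', (0, 5), (1, 4))
\end{verbatim}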
A \emph{directional interval graph} is a mixed graph $G$ that is
isomorphic to the directional intersection graph of some
set~$\mathcal{I}$ of intervals.
We call $\mathcal{I}$ a \emph{directional representation} of $G$.
Similarly to interval graphs, a directional interval graph may have several different directional representations.
As there is no directed circuit in a directional interval graph $G$, $\chi(G)$ is well defined.
Observe that the endpoints in any directional representation can be perturbed so that every endpoint is unique, and the modified intervals represent the same graph.
Further, we generalize directional interval graphs and directional representations
to \emph{bidirectional interval graphs} and \emph{bidirectional representations}.
There, we assume that we have two types of intervals, which we call
\emph{left-going} and \emph{right-going}.
For left-going intervals, the edges and arcs are defined as in directional intersection graphs.
For right-going intervals, the symmetric definition applies, that is,
we have an arc $\arc{u}{v}$ if and only if $l_v < l_u \leq r_v < r_u$.
Moreover, there is an edge for every pair of a left-going and a
right-going interval that intersect.
\begin{figure}
\caption{Separate greedy assignment of left-going and right-going
edges to tracks.}
\label{fig:edge-routing}
\end{figure}
Interval graphs are a classic subject of algorithmic graph theory
whose applications range from scheduling problems to analysis of
genomes~\cite{g-agtpg-80}. Many problems that are \ensuremath{\mathsf{NP}}\xspace-hard for
general graphs can be solved efficiently for interval graphs.
In particular, the chromatic number of (undirected) interval
graphs~\cite{g-agtpg-80} and directed acyclic graphs~\cite{HansenKW97}
can be computed in linear time.
In this paper we combine the research directions of coloring geometric
intersection graphs and of coloring mixed graphs, by studying the
coloring of mixed interval graphs.
Our research is also motivated by the following application.
A subproblem that occurs when drawing layered graphs according to the
Sugiyama framework \cite{stt-mvuhss-TSMC81} is the edge routing step.
This step is applied to every pair of consecutive layers. Zink et
al.~\cite{zwbw-ldugg-CGTA22} formalize this
for orthogonal edges as follows. Given a set
of points on two horizontal lines (corresponding to the vertices on
two consecutive layers) and a perfect matching between the points on
the lower and those on the upper line, connect the matched pairs of
points by x- and y-monotone rectilinear paths. Since we can assume
that no two points have the same x-coordinate, each pair of
points can be connected by a path that consists of three axis-aligned
line segments: a vertical, a horizontal, and another vertical one; see
\cref{fig:edge-routing}. We refer to the interval that corresponds to
the vertical projection of an edge to the x-axis as the \emph{span} of
that edge.
We direct all edges upward. This allows us to classify the edges into
\emph{left-} vs.\ \emph{right-going}.
Now the task is to map the horizontal pieces to horizontal ``tracks''
between the two layers such that no two such pieces overlap and no two
edges cross twice. This implies that any two edges whose spans
intersect must be mapped to different tracks. If there is a
left-going edge~$e$ whose span overlaps that of another left-going
edge~$e'$ that lies further to the left (see
\cref{fig:edge-routing}), then $e$ must be mapped to a
higher track than~$e'$ to avoid crossings. The symmetric statement
holds for pairs of right-going edges. The aim is to minimize the
number of tracks in order to get a compact layered drawing of the
original graph.
This corresponds to minimizing the number of colors in a proper
coloring of a bidirectional interval graph.
Zink et al.\ solve this combinatorial problem heuristically.
They greedily construct two colorings (of left-going edges and of
right-going edges) and combine the colorings by assigning separate
tracks to the two directions; see \cref{fig:edge-routing}.
\paragraph{Our contribution.}
We first show that the above-mentioned greedy algorithm of Zink et
al.~\cite{zwbw-ldugg-CGTA22} colors directional interval graphs with
the minimum number of colors; see \cref{sec:greedy}. This yields a
simple 2-approximation algorithm for the bidirectional case.
Then, we prove that computing the chromatic number of a mixed interval
graph is \ensuremath{\mathsf{NP}}\xspace-hard; see \cref{sec:hardness}. This result extends to
proper interval graphs; see \cref{sec:proper}.
Finally, we present an efficient algorithm that recognizes directional
interval graphs; see \cref{sec:recognition}. Our algorithm is based
on PQ-trees and the recognition of two-dimensional posets. It can
construct a directional interval representation of a yes-instance in
quadratic time.
We postpone the proofs of statements with a (clickable)
``$\star$'' to the appendix.
\section{Coloring Directional Interval Graphs}
\label{sec:greedy}
We prove that the greedy algorithm of Zink et
al.~\cite{zwbw-ldugg-CGTA22} computes an optimal coloring for a given directional interval representation of $G$.
If we are not given a representation (i.e., a set of intervals)
but only the graph, we obtain
a representation in quadratic time by \cref{thm:recognition}.
The greedy algorithm proceeds analogously to the classic greedy coloring algorithm for (undirected) interval graphs.
Also our optimality proof follows, on a high level, the strategy of relating the coloring to a large clique.
In our setting, however, the underlying geometry is more intricate,
which makes the optimality proof as well as a fast implementation more
involved.
The algorithm works as follows; see \cref{fig:edge-routing} (left) for an example.
\noindent
\fbox{
\begin{minipage}{.97\linewidth}
\textsc{Greedy Algorithm.}
Iterate over the given intervals in increasing order of
their left endpoints. For each interval~$v$, assign $v$ the
smallest available color~$c(v)$. A~color~$k$ is \emph{available}
for $v$ if, for any interval~$u$ that has already been colored,
$k \ne c(u)$ if~$u$ contains~$v$ and $k>c(u)$ if~$u$ overlaps~$v$.
\end{minipage}
}
A naive implementation of the greedy algorithm runs in quadratic time. Using augmented binary search trees, we can speed it up to optimal $O(n \log n)$ time.
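For concreteness, the following Python sketch (ours, purely illustrative)
implements the naive quadratic variant directly on a list of intervals with
distinct endpoints:
\begin{verbatim}
def greedy_color(intervals):
    """intervals: list of (l, r) with distinct endpoints.
    Returns a dict mapping each interval to its greedy color."""
    colors = {}
    for v in sorted(intervals, key=lambda iv: iv[0]):   # by left endpoint
        l_v, r_v = v
        floor, taken = 0, set()
        for u, c_u in colors.items():                   # u started earlier
            l_u, r_u = u
            if r_u < l_v:                               # disjoint: no constraint
                continue
            if r_u < r_v:                               # u overlaps v: color > c(u)
                floor = max(floor, c_u)
            else:                                       # u contains v: color != c(u)
                taken.add(c_u)
        k = floor + 1
        while k in taken:                               # smallest available color
            k += 1
        colors[v] = k
    return colors

# Two nested intervals and one interval overlapping both from the right:
print(greedy_color([(0, 10), (1, 9), (8, 12)]))
# {(0, 10): 1, (1, 9): 2, (8, 12): 3}
\end{verbatim}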
\begin{restatable}[{\hyperref[lem:runtimegreedy*]{$\star$}}]{lemma}{RuntimeGreedy}
\label{lem:runtimegreedy}
The greedy algorithm can be implemented to color $n$
intervals in $\Oh{n \log n}$ time, which is
optimal assuming the comparison-based model.
\end{restatable}
Next we show that the greedy algorithm computes an optimal proper
coloring. This also yields a simple 2-approximation for the
bidirectional case.
\begin{theorem}
\label{clm:greedy-is-optimal}
Given a directional representation of a directional interval
graph~$G$, the greedy algorithm computes a proper coloring of~$G$
with~$\chi(G)$ many colors.
\end{theorem}
\begin{proof}
The \emph{transitive closure} $G^+$ of $G$ is the graph that we obtain
by exhaustively adding transitive arcs, i.e., if there are arcs
$\arc{u}{v}$ and $\arc{v}{w}$, we add the arc $\arc{u}{w}$ if
absent. Clearly, no pair of adjacent intervals in
the underlying undirected graph~$U(G^+)$ of $G^+$ can have
the same color in a proper coloring of~$G$. Therefore,
$\omega(U(G^+))\leq \chi(G)$ where $\omega(U(G^+))$ denotes the size
of a largest clique in $U(G^+)$. We show below that the greedy algorithm
computes a coloring with at most $\omega(U(G^+))$ many colors, which
must therefore be optimal. For $v\in V$ let
$\mathcal{I}_\textsf{in}(v)$ be the set of intervals having an arc to~$v$
in $G$.
Let $c$ be the coloring computed by our greedy coloring algorithm.
Since we always pick an available color, $c$ is a proper
coloring. To prove optimality of~$c$, we show the existence of a
clique in $U(G^+)$ of cardinality $c_{\max}=\max_{v\in V}c(v)$.
\begin{figure}
\caption{A staircase and its intermediate intervals, which form a
clique in $U(G^+)$.}
\label{fig:staircase}
\end{figure}
Consider an interval $v_0 = [l_0, r_0]$ of color~$c_{\max}$. Among
$\mathcal{I}_\textsf{in}(v_0)$, let~$v_1$ be the unique interval
with the largest color (all intervals in
$\mathcal{I}_\textsf{in}(v_0)$ have different colors as they
share the point $l_0$). We call $v_1$ the \emph{step below $v_0$}.
We repeat this argument to find the step $v_2$ below $v_1$ and
so on. For some $t \ge 0$, there is a $v_t$ without a step below
it, namely where $\mathcal{I}_\textsf{in}(v_t) = \emptyset$. We
call the sequence $v_0, v_1, \dots, v_t$ a \emph{staircase} and each
of its intervals a \emph{step}; see~\cref{fig:staircase}. Clearly,
$\arc{v_j}{v_i}$ is an arc of~$G^+$ for~$0 \le i < j \le t$. In
particular, the staircase is a clique of size $t+1$ in $U(G^+)$.
Next we argue about the intervals with colors in-between the steps.
For a step~$v_i = [l_i,r_i]$, $i \in \set{0,\dots,t}$, let~$S_i$
denote the set of intervals that contain the point~$l_i$ and have a
color~$x \in \set{c(v_{i+1})+1,c(v_{i+1})+2,\dots,c(v_i)}$, where we
set $c(v_{t+1})=0$; see
\cref{fig:staircase}. Note that~$v_i \in S_i$ and, by the
definition of steps, each interval in~$S_i$ contains~$v_i$.
Observe that $|S_i| = c(v_{i}) - c(v_{i+1})$, as
otherwise the greedy algorithm would have assigned a smaller color to~$v_i$. It
follows that~$c_{\max} = \sum_{i=0}^t |S_i|$.
We claim that~$S = \bigcup_{i=0}^t S_i$ is a clique in~$U(G^+)$.
Let~$u \in S_i$, $v \in S_l$ such that~$u \cap v = \emptyset$
(otherwise they are clearly adjacent in~$U(G^+)$). Assume without
loss of generality that~$i<l$. Let~$j,k$ be the largest and
smallest index so that~$v_j \cap u \ne \emptyset$ and
$v_k \cap v \ne \emptyset$, respectively. Observe
that~$u \cap v = \emptyset$, $u \cap v_{i+1} \ne \emptyset$,
and~$v \cap v_{l-1} \ne \emptyset$ imply~$i < j < l$ and
$i < k < l$. Since~$u$ does not intersect~$v_{j+1}$, it overlaps
with~$v_j$, i.e., $G$ contains the arc~$\arc{v_j}{u}$ and likewise,
since~$v$ does not intersect $v_{k-1}$, it overlaps with~$v_k$,
i.e., $G$ contains the arc~$\arc{v}{v_k}$.
If $j<k$, then $G^+$ contains $\arc{v}{v_k}$ and~$\arc{v_k}{v_j}$,
and therefore $\arc{v}{v_j}$. If $j\geq k$, then~$v_j$ is adjacent
to both~$u$ and~$v$, and since~$u,v$ are disjoint, $v_j$ overlaps with~$u$ and~$v$, i.e., $G$ contains $\arc{v}{v_j}$. In either
case, the presence of \arc{v}{v_j} and \arc{v_j}{u} implies that
$G^+$ contains \arc{v}{u}. It follows that~$S$ forms a clique
in~$U(G^+)$.
\end{proof}
\begin{restatable}[{\hyperref[cor:approx*]{$\star$}}]{corollary}{Approx}
\label{cor:approx}
There is an $\Oh{n \log n}$-time algorithm that, given a bidirectional interval representation, computes a 2-approximation of an optimal proper coloring of the corresponding bidirectional interval graph.
\end{restatable}
\section{Coloring Mixed Interval Graphs}
\label{sec:hardness}
In this section, we show that computing the chromatic number of a mixed interval graph is \ensuremath{\mathsf{NP}}\xspace-hard.
Recall that the chromatic number can be computed efficiently for
interval graphs~\cite{g-agtpg-80}, directed acyclic
graphs~\cite{HansenKW97}, and directional interval graphs
(\cref{clm:greedy-is-optimal}). In other words, coloring interval
graphs becomes \ensuremath{\mathsf{NP}}\xspace-hard only if edges and arcs are combined in a
non-directional way.
\begin{theorem}\label{thm:hardness}
Given a mixed interval graph $G$ and a number $k$,
it is \ensuremath{\mathsf{NP}}\xspace-complete to decide whether $G$ admits a proper coloring
with at most $k$ colors.
\end{theorem}
\begin{proof}
Containment in \ensuremath{\mathsf{NP}}\xspace is clear since a specific coloring with $k$ colors
serves as a certificate of polynomial size. We prove \ensuremath{\mathsf{NP}}\xspace-hardness by
a polynomial-time reduction from \mbox{\textsc{3-SAT}}\xspace. The high-level idea is as
follows. We are given a \mbox{\textsc{3-SAT}}\xspace formula $\Phi$ with variables
$v_1, v_2, \dots, v_n$, and clauses $c_1, c_2, \dots, c_m$, where each
clause contains at most three literals. A literal is a variable or a
negated variable~-- we refer to them as a \emph{positive} or a
\emph{negative} occurrence of that variable. From $\Phi$, we
construct in polynomial time a mixed interval graph~$G_\Phi$ with the
property that $\Phi$ is satisfiable if and only if $G_\Phi$ admits a
proper coloring with $6n$ colors.
To prove that~$G_\Phi$ is a mixed interval graph, we present an interval representation of~$U(G_\Phi)$ and specify which pairs of intersecting intervals are connected by a directed arc, assuming that all other pairs of intersecting intervals are connected by an edge.
The graph $G_\Phi$ has the property that the color of many of the intervals is fixed in every proper coloring with $6n$ colors.
In our figures, the x-dimension corresponds to the real line that contains the interval, whereas we indicate its color
by its position in the~y-dimension~-- thus, we also refer to a color as a \emph{layer}.
In this model, our reduction has the property that~$\Phi$ is satisfiable if and only if the intervals of $G_\Phi$ admit a drawing that fits into $6n$ layers.
Our construction consists of a \emph{frame} and $n$ \emph{variable gadgets} and $m$ \emph{clause gadgets}.
Each variable gadget is contained in a horizontal strip of height~$6$ that spans the whole construction,
and each clause gadget is contained in a vertical strip of width~$4$ and height~$6n$.
The strips of the variable gadgets are pairwise disjoint,
and likewise the strips of the clause gadgets are pairwise disjoint.
\paragraph{Frame.}
See \cref{fig:frame-structure}.
The frame consists of six intervals~$f_i^1, f_i^2,\dots,f_i^6$ for each of the variables~$v_i$, $i=1,\dots, n$.
All of these intervals start at position~$0$ and extend from the left into the construction.
The intervals~$f_i^2,f_i^4,f_i^6$ end at position~$1$.
The intervals~$f_i^1$ and~$f_i^5$ extend to the very right of the construction.
Interval~$f_i^3$ ends at position~$3$.
Further, there are arcs~$\arc{f_i^j}{f_i^{j+1}}$ for~$j=1,\dots,5$ and~$\arc{f_i^6}{f_{i+1}^1}$ for~$i=1,\dots,n-1$.
This structure guarantees that any proper coloring with colors $\set{1,2,\ldots,6n}$ assigns color $6(i-1)+j$ to interval $f_i^j$.
\begin{figure}
\caption{$v_i$ is \textsf{true}.}
\label{fig:variable-gadget-true}
\caption{$v_i$ is \textsf{false}.}
\label{fig:variable-gadget-false}
\caption{Frame.}
\label{fig:frame-structure}
\caption{A variable gadget for a variable $v_i$.}
\label{fig:variable-gadget}
\end{figure}
\paragraph{Variable Gadget.}
See \cref{fig:variable-gadget-true,fig:variable-gadget-false}.
For each variable~$v_i$, $i = 1, \dots, n$, we have two intervals $v_i^\mathsf{false}$ and $v_i^\mathsf{true}$, which start at position~2 and extend to the very right of the construction.
Moreover, they both have an incoming arc from~$f_i^1$ and an outgoing arc to~$f_i^5$.
This guarantees that they are drawn in the layers of~$f_i^2$ and~$f_i^4$; however, their ordering can be chosen freely.
We say that $v_i$ is set to \textsf{true} if $v_i^{\mathsf{true}}$ is below $v_i^{\mathsf{false}}$, and $v_i$ is set to \textsf{false} otherwise.
For each occurrence of $v_i$ in a clause $c_j$, $j = 1, \dots, m$, we create an interval~$o_i^j$ within the clause gadget of~$c_j$.
There is an arc $\arc{v_i^\mathsf{true}}{o_i^j}$ for a positive occurrence and an arc $\arc{v_i^\mathsf{false}}{o_i^j}$ for a negative occurrence as well as an arc~$\arc{o_i^j}{f_{i+1}^1}$ if $i<n$.
This structure guarantees that $o_i^j$ is drawn either in the same layer as~$f_i^3$ or as~$f_i^6$.
However, drawing $o_i^j$ in the layer of~$f_i^3$
(which lies between $v_i^\mathsf{true}$ and~$v_i^\mathsf{false}$) is possible if and only if
the chosen truth assignment of $v_i$ satisfies~$c_j$.
\paragraph{Clause Gadget.}
\begin{figure}
\caption{$c_j$ is satisfied.}
\label{fig:clause-gadget-satisfied}
\caption{$c_j$ is not satisfied.}
\label{fig:clause-gadget-not-satisfied}
\caption{A clause gadget for a clause $c_j = v_i \lor \lnot
v_k \lor v_\ell$, where $z \notin \set{i,k,\ell}$.}
\label{fig:clause-gadget}
\end{figure}
See \cref{fig:clause-gadget}.
Our clause gadget starts at position~$4j$,
relative to which we describe the following positions.
Consider a fixed clause~$c_j$ that contains variables~$v_i,v_k,v_\ell$.
We create an interval~$s_j$ of length $3$ starting at position~$1$.
The key idea is that~$s_j$ can be drawn in the layer of~$f_i^6, f_k^6$ or~$f_\ell^6$, but only if $o_i^j$, $o_k^j$ or~$o_\ell^j$,
each of which has length~$1$ and starts at position~$3$,
is not drawn there.
This is possible if and only if
the corresponding variable satisfies the clause.
To ensure that~$s_j$ does not occupy any other layer, we block all the other layers.
More precisely, for each variable~$v_z$ with $z \notin \set{i,k,\ell}$, we create \emph{dummy intervals}~$d_z^j,e_z^j$ of length~$3$ starting at
position~$1$ that have arcs from~$f_z^1$ and to~$f_{z+1}^1$.
These arcs force
$d_z^j, e_z^j$ to be drawn in the layers of~$f_z^3$ and~$f_z^6$, thereby ensuring that~$s_j$ is not placed in any layer associated with the variable~$v_z$.
Similarly, for each $z \in \set{i,k,\ell}$, we create a blocker~$b_z^j$
of length~$1$ starting at position~$1$
that has arcs from~$f_z^1$ and to~$f_z^5$.
This fixes $b_z^j$ to the layer of~$f_z^3$ (since the layers of~$f_z^2$ and~$f_z^4$
are occupied by~$v_z^\mathsf{true}$ and $v_z^\mathsf{false}$), thereby ensuring that,
among all layers associated with~$v_z$, $s_j$ can only be drawn in the layer of~$f_z^6$.
\paragraph{Correctness.}
Consider for each clause $c_j$ with variables $v_i$, $v_k$, and
$v_\ell$ the corresponding clause gadget. To achieve a total height
of at most $6n$, $s_j$ needs to be drawn in the same layer as some interval
of the frame. Due to the presence of the dummy intervals,
the only available layers are the ones of~$f_z^6$ for~$z \in \{i,k,\ell\}$.
However, the layer of~$f_z^6$ is only free if~$o_z^j$ is not there, which is the
case if and only if~$o_z^j$ is drawn in the layer of~$f_z^3$. By
construction, this is possible if and only if the variable~$v_z$ is in
the state that satisfies clause~$j$.
Otherwise we need an extra $(6n+1)$-th layer.
Both situations are
illustrated in \cref{fig:clause-gadget}. Hence, $6n$ layers are
sufficient if and only if the variable gadgets represent a truth
assignment that satisfies all the clauses of $\Phi$. The mixed interval
graph $G_\Phi$ has polynomial size and can be constructed in
polynomial time.
\end{proof}
A \emph{proper interval graph} is an interval graph that
admits an interval representation of the underlying graph in which
none of the intervals properly contains another interval.
We can slightly adjust the reduction presented in the proof of \cref{thm:hardness} to make $G_\Phi$ a \textit{mixed} proper interval graph.
\begin{restatable}[{\hyperref[clm:hardnesspropper*]{$\star$}}]{corollary}{HardnessPropper}
\label{clm:hardnesspropper}
Given a mixed proper interval graph $G$ and a number $k$,
it is \ensuremath{\mathsf{NP}}\xspace-complete to decide whether $G$ admits a proper coloring
with at most~$k$~colors.
\end{restatable}
\section{Recognizing Directional Interval Graphs}
\label{sec:recognition}
In this section we present a recognition algorithm for directional interval graphs.
Given a mixed graph~$G$, our algorithm decides whether $G$ is a
directional interval graph, and additionally if the answer is yes, it
constructs a set of intervals representing~$G$.
The algorithm works in two phases.
The first phase carefully selects a rotation of the PQ-tree of~$U(G)$.
This fixes the order of maximal cliques in the interval representation of~$U(G)$.
In the second phase, the endpoints of the intervals are perturbed so that the edges and arcs in~$G$ are represented correctly.
This is achieved by checking that an auxiliary poset is two-dimensional.
PQ-trees of interval graphs~\cite{LuekerB79} and realizers
of two-dimensional posets~\cite{McConnellS99} can be constructed in
linear time. Our algorithm runs in quadratic time, but we suspect
that a more involved implementation can achieve linear running time.
For a set of pairwise intersecting intervals on the real line, let the \emph{clique point} be the leftmost point on the real line that lies in all the intervals.
Given an interval representation of an interval graph~$G$, we get a linear order of the maximal cliques of~$G$ by their clique points from left to right.
Booth and Lueker~\cite{LuekerB79} showed that a graph~$G$ is an interval graph if and only if the maximal cliques of~$G$ admit a \emph{consecutive arrangement}, i.e., a linear order such that, for each vertex~$v$, all the maximal cliques containing~$v$ occur consecutively in the order.
They have also introduced a data structure called PQ-tree
that encodes all possible consecutive arrangements of~$G$.
We present our algorithm in terms of modified PQ-trees
(MPQ-trees, for short) as described by Korte and
M{\"o}hring~\cite{KorteM85,KorteM89}.
We briefly describe MPQ-trees in the next few paragraphs;
see~\cite{KorteM89} for a proper introduction.
An \emph{MPQ-tree}~$T$ of an interval graph~$G$ is a rooted, ordered tree with two types of nodes: P-nodes and Q-nodes, joined by links.
Each node can have any number of children, and a set of consecutive links joining a Q-node~$x$ with children is called a \emph{segment} of~$x$.
Further, each vertex $v$ in~$G$ is assigned either to one of the P-nodes, or to a segment of some Q-node.
Based on this assignment, we \emph{store} $v$ in the links of~$T$.
If~$v$ is assigned to a P-node~$x$, we store $v$ in the link just above $x$ in $T$ (adding a dummy link above the root of $T$).
If $v$ is assigned to a segment of a Q-node~$x$, we store $v$ in each link of the segment.
For a link \edge{x}{y}, let $S_{xy}$ denote the set of vertices stored in \edge{x}{y}.
We say that $v$ is \emph{above} (\emph{below}, resp.) a node~$x$ if $v$ is stored in any of the links on the upward path (in any of the links on some downward path, resp.) from $x$ in $T$.
We write $A^T_x$ ($B^T_x$, resp.) for the set of all vertices in~$G$ that are above (below, resp.) node~$x$.
The \emph{frontier} of~$T$ is the sequence of the sets $A^T_x$, where
$x$ goes through all leaves in~$T$ in the order of~$T$.
Given an MPQ-tree~$T$, one can obtain another MPQ-tree, which is called a \emph{rotation} of~$T$,
by arbitrarily permuting the order of the children of P-nodes and
by reversing the orders of the children of some Q-nodes. The defining property
of the MPQ-tree~$T$ of a graph $G$ is that each leaf $x$ of~$T$ corresponds to a maximal clique $A^T_x$ of~$G$ and the frontiers of rotations of~$T$ correspond bijectively to the consecutive arrangements of~$G$.
Observe that any two vertices adjacent in~$G$ are stored in links that are connected by an upward path in~$T$.
We say that~$T$ \emph{agrees} with an interval representation $\mathcal{I}$ of~$G$ if the order of the maximal cliques of~$G$ given by their clique points in $\mathcal{I}$ from left to right is the same as in the frontier of~$T$.
We assume the following properties of the MPQ-tree (see \cite{KorteM89}, Lemma~2.2):
\begin{itemize}
\item For a P-node~$x$ with children $y_1,\ldots,y_k$, for every $i=1,\ldots,k$,
there is at least one vertex stored in link \edge{x}{y_i} or below $y_i$, i.e., $S_{xy_i} \cup B^T_{y_i} \neq \emptyset$.
\item For a Q-node~$x$ with children $y_1,\ldots,y_k$, we have $k \geq 3$.
Further, for $S_i=S_{xy_i}$, we have:
\begin{itemize}
\item $S_1 \cap S_k = \emptyset$,
$B^T_{y_1} \neq \emptyset$, $B^T_{y_k} \neq \emptyset$,
$S_{1} \subsetneq S_{2}$, $S_{k} \subsetneq S_{k-1}$,
\item $(S_i \cap S_{i+1})\setminus S_1 \neq \emptyset$, $(S_{i-1} \cap S_{i})\setminus S_k \neq \emptyset$, for $i=2,\ldots,k-1$.
\end{itemize}
\end{itemize}
A \emph{partially ordered set}, or a \emph{poset} for short, is a transitive directed acyclic graph.
A poset~$P$ is \emph{total} if, for every pair of vertices~$u$
and~$v$, there is either an arc \arc{u}{v} or an arc \arc{v}{u}
in~$P$.
We can conveniently represent a total poset~$P$ by a linear order of its vertices $v_1 < v_2 < \dots < v_n$ meaning that there is an arc \arc{v_i}{v_j} for each $1 \leq i < j \leq n$.
A poset~$P$ is \emph{two-dimensional} if the arc set of~$P$ is the
intersection of the arc sets of two total posets on the same set
of vertices as~$P$.
McConnell and Spinrad~\cite{McConnellS99} gave a linear-time algorithm
that, given a directed graph~$D$ as input, decides whether~$D$ is a
two-dimensional poset. If the answer is yes, the algorithm also
constructs a \emph{realizer}, that is, (in this case) two linear
orders $(R_1,R_2)$ on the vertex set of~$D$ such that
\begin{align*}
\textrm{arc \arc{u}{v} is in~$D$} & \iff
\sbrac{\brac{\textrm{$u < v$ in $R_1$}} \wedge \brac{\textrm{$u < v$ in $R_2$}}} \textrm{.}
\end{align*}
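For illustration, the following Python sketch (ours, not part of the algorithm
of~\cite{McConnellS99}) recovers the arc set of a two-dimensional poset from a
realizer given as two linear orders:
\begin{verbatim}
from itertools import permutations

def arcs_from_realizer(R1, R2):
    """R1, R2: lists giving two linear orders on the same vertex set.
    Returns the arcs (u, v) of the realized poset: u before v in both."""
    pos1 = {v: i for i, v in enumerate(R1)}
    pos2 = {v: i for i, v in enumerate(R2)}
    return {(u, v) for u, v in permutations(R1, 2)
            if pos1[u] < pos1[v] and pos2[u] < pos2[v]}

# a and b are incomparable; all other pairs are comparable:
print(sorted(arcs_from_realizer(['a', 'b', 'c', 'd'],
                                ['b', 'a', 'c', 'd'])))
# [('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
\end{verbatim}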
The main result of this section is the following theorem.
\begin{restatable}[{\hyperref[thm:recognition*]{$\star$}}]{theorem}{ThmRecognition}
\label{thm:recognition}
There is an algorithm that, given a mixed graph $G$, decides
whether $G$ is a directional interval graph. The algorithm runs in
$\Oh{|V(G)|^2}$ time
and produces a directional representation of~$G$ if $G$ admits one.
\end{restatable}
The algorithm runs in two phases that we introduce in separate lemmas.
\begin{lemma}[Rotating PQ-trees]\label{lem:recognition_rotation}
There is an algorithm that, given a directional interval graph~$G$,
constructs an MPQ-tree~$T$ that agrees with some
directional representation of~$G$.
\end{lemma}
\begin{proof}
Given a mixed graph $G$, if~$G$ is a directional interval graph,
then clearly $U(G)$ is an interval graph and we can construct an
MPQ-tree~$T$ of~$U(G)$ in linear time using the algorithm by Korte
and M{\"o}hring~\cite{KorteM89}. We call a rotation of~$T$
\emph{directional} if it agrees with some directional representation
of~$G$. As we assume~$G$ to be a directional interval graph, there
is at least one directional rotation~$\tilde T$ of~$T$, and our goal
is to find some directional rotation of~$T$. Our algorithm decides
the rotation of each node in~$T$ independently.
\paragraph{Rotating Q-nodes.}
Let $y_1,\ldots,y_k$ be the children of a Q-node~$x$ in~$T$.
We are to decide whether to reverse the order of the children of~$x$.
Let $S_i=S_{xy_i}$,
let~$\ell = \max\set{i \colon S_1 \cap S_i \ne \emptyset}$, and let
$u \in S_1 \cap S_\ell$. We have $\ell < k$, and there is some vertex
$v \in (S_\ell \cap S_{\ell+1}) \setminus S_1$. This implies that
$u$ and $v$ are assigned to overlapping segments of~$x$.
Thus, the intervals representing~$u$ and~$v$ overlap in every interval
representation of~$U(G)$.
Hence, $u$ and~$v$ are connected by an arc
in~$G$, and the direction of this arc determines the only possible
rotation of~$x$ in any directional rotation of~$T$, e.g.{},
if $\arc{u}{v}$ is an arc in $G$ and the segment of~$u$ is to the
right of the segment of~$v$, then reverse the order of the children
of~$x$.
\paragraph{Rotating P-nodes.}
Let $y_1,\ldots,y_k$ be the children of a P-node~$x$ in~$T$.
For each $i=1,\ldots,k$, let $B_i=S_{xy_i} \cup B^T_{y_i}$, and let
$B=\bigcup_{i=1}^kB_i$. The properties of the
MPQ-tree give us that (i)~every vertex in~$A^T_x$ is adjacent
in~$U(G)$ to every vertex in~$B$, (ii)~none of
the $B_i$ is empty, and (iii)~for any two vertices $b_i \in B_i$,
$b_j \in B_j$ with $i\neq j$, we have that~$b_i$ and~$b_j$ are
independent in~$G$.
Assume that there is an arc \arc{b_i}{a} directed from some
$b_i \in B_i$ to some $a \in A^T_x$. We claim that any
rotation~$T'$ of~$T$ that does not put~$y_i$ as the first child
of~$x$ is not directional. Assume the contrary. Let~$y_j$,
$j \neq i$ be the first child of~$x$ in~$T'$, let $\mathcal{I}$ be a
directional representation that agrees with~$T'$, and let~$b_j$
be some vertex in~$B_j$.
The left endpoint of $\mathcal{I}(a)$ is to the
right of the left endpoint of $\mathcal{I}(b_i)$ as \arc{b_i}{a} is
an arc. The right endpoint of $\mathcal{I}(b_j)$ is to the left of
the left endpoint of $\mathcal{I}(b_i)$ as~$T'$ puts~$y_j$
before~$y_i$. Thus, $\mathcal{I}(b_j)$ and $\mathcal{I}(a)$ are
disjoint, a~contradiction.
Similarly, there are directed arcs from~$A^T_x$ to at
most one set of type~$B_i$. If there are any, the corresponding
child $y_i$ is in the last
position in every directional rotation of~$T$. Our algorithm
rotates the child~$y_i$ ($y_j$) with an arc from~$B_i$ to~$A^T_x$
(from~$A^T_x$ to~$B_j$) to the first (last) position, should such
children exist, and leaves the other children as they are in~$T$.
It remains to show that the resulting rotation of~$T$ is directional; see~\cref{sec:rotation}.
\end{proof}
\begin{lemma}[Perturbing Endpoints]\label{lem:recognition_dimension}
There is an algorithm that, given an MPQ-tree~$T$ that agrees with
some directional representation of a graph~$G$, constructs a
directional representation $\mathcal{I}$ of~$G$ such that $T$ agrees
with $\mathcal{I}$.
\end{lemma}
\begin{proof}
The frontier of~$T$ yields a fixed order of maximal cliques $C_1,\ldots,C_k$ of~$G$.
Given this order, we construct the following auxiliary poset~$D$.
First, we add two independent chains of length~$k+1$ each: vertices $a_1,\ldots,a_{k+1}$ with arcs \arc{a_i}{a_j} for $1\leq i<j\leq k+1$, and vertices $b_1,\ldots,b_{k+1}$ with arcs \arc{b_i}{b_j} for $1\leq i<j\leq k+1$.
Then, for each vertex~$v$ in~$G$, let $\leftc(v)$ and $\rightc(v)$
denote the indices of the leftmost and of the rightmost clique in
which~$v$ is present, respectively.
Now we add to~$D$ vertex~$v$ plus, for $1\leq i \leq \leftc(v)$, the
arc \arc{a_i}{v} and, for $1\leq i \leq \rightc(v)$, the arc \arc{b_i}{v}.
Further, for each arc \arc{u}{v} in~$G$, we add \arc{u}{v} to~$D$.
Lastly, for any two vertices~$u$ and $v$ that are independent in~$G$
and that fulfill $\rightc(u) < \leftc(v)$, we add an arc \arc{u}{v} to~$D$.
We claim that~$G$ is a directional interval graph if and only if~$D$
is a two-dimensional poset.
First assume that~$G$ is a directional interval graph and fix a directional interval representation of~$G$ whose intervals all have distinct endpoints.
For $i=1,\ldots,k$, let~$L_i$ be the sequence of all the vertices~$v$
in~$G$ for which $\leftc(v)=i$, in the order of their left endpoints.
Similarly, let~$R_i$ be the sequence of all the vertices~$v$ in~$G$
for which $\rightc(v)=i$, in the order of their right endpoints.
The following two linear orders $L$ and $R$ of the vertices of~$D$
yield a realizer of~$D$:
\begin{align*}
L &= b_1 < b_2 < \ldots < b_{k+1} < a_1 < L_1 < a_2 < L_2 < \ldots < a_k < L_k < a_{k+1}\textrm{,} \\
R &= a_1 < a_2 < \ldots < a_{k+1} < b_1 < R_1 < b_2 < R_2 < \ldots < b_k < R_k < b_{k+1}\textrm{.}
\end{align*}
Now, for the other direction, assume that we have a two-dimensional realizer of~$D$.
As $b_{k+1}$ and~$a_1$ are independent in~$D$, we have that $b_{k+1} < a_1$ in exactly one of the orders in the realizer.
We call this order~$L$, and the other one~$R$.
As $a_{k+1}$ and~$b_1$ are independent in~$D$ and $b_1 < b_{k+1} < a_1 < a_{k+1}$ in~$L$, we have that $a_{k+1} < b_1$ in~$R$.
For each $i=1,\ldots,k$, define~$L_i$ as the sequence of vertices in~$G$ appearing between $a_{i}$ and $a_{i+1}$ in the order~$L$.
Similarly, let~$R_i$ be the sequence of vertices in~$G$ appearing between $b_{i}$ and $b_{i+1}$ in the order~$R$.
Observe that, for every vertex~$v$, we have that $a_{\leftc(v)} < v$
in~$D$ and that $a_{\leftc(v)+1}$ and~$v$ are independent in~$D$.
As $a_{\leftc(v)+1} \leq a_{k+1} < b_1 \leq b_{\rightc(v)} < v$ in~$R$, we have $v < a_{\leftc(v)+1}$ in~$L$.
Thus, $v$~is in $L_{\leftc(v)}$ and, by a similar argument, $v$~is in $R_{\rightc(v)}$.
Now we are ready to construct a directional interval representation $\mathcal{I}$ of~$G$.
For each $i=1,\ldots,k$, we select $|L_i|$ different real points
in $(i-\frac{1}{2},i)$ and $|R_i|$ different real points in
$(i,i+\frac{1}{2})$.
For a vertex~$v$ that appears on the~$i$-th position in~$L_{\leftc(v)}$
and on the~$j$-th position in $R_{\rightc(v)}$, we choose the~$i$-th
point in $(\leftc(v)-\frac{1}{2},\leftc(v))$ as the
left endpoint, and the~$j$-th point in
$(\rightc(v),\rightc(v)+\frac{1}{2})$ as the right endpoint.
Such a set of intervals is a directional interval representation of~$G$.
First, observe that any two intervals intersect if and only if they have a common clique.
Next, if there is an arc \arc{u}{v} in~$G$, then the arc $\arc{u}{v}$
is also in~$D$, $u < v$ holds both in~$L$ and in~$R$, the
corresponding intervals overlap, and $\mathcal{I}(u)$ starts and ends
to the left of $\mathcal{I}(v)$.
Last, if there is an edge \edge{u}{v} in~$G$, then $u$ and~$v$ are independent in~$D$, $u < v$ in one of the orders in the realizer, and $v < u$ in the other.
Thus, one of the intervals $\mathcal{I}(u)$ and $\mathcal{I}(v)$
must contain the other.
\end{proof}
\cref{thm:recognition} follows easily from \cref{lem:recognition_rotation,lem:recognition_dimension}.
See~\cref{sec:implementation} for details.
\section{Open Problems}
\label{sec:outro}
Can we recognize directional interval graphs in linear time?
Can we recognize bidirectional interval graphs in polynomial time?
Can we color bidirectional interval graphs optimally, or at least find
$\alpha$-approximate solutions with $\alpha < 2$?
\appendix
\section{Speeding Up the Greedy Coloring Algorithm}
\label{sec:runtimegreedy}
\RuntimeGreedy*
\label{lem:runtimegreedy*}
\begin{proof}
We describe a sweep-line algorithm sweeping from left to right.
In a first step, we show how to achieve a running time of $O(m + n
\log n)$, where $m$ is the number of edges of the directional
interval graph~$G$ induced by the given set~$V$ of $n$ intervals.
Then we use an additional data structure in order to avoid the
$O(m)$ term in the running time. Note that $m$ can be quadratic
in~$n$. For the faster implementation, we do not assume knowledge
of~$G$.
Build a balanced binary search tree~\ensuremath{\mathcal{T}}\xspace to keep track of the
currently available colors. Initially, \ensuremath{\mathcal{T}}\xspace contains the colors~1
to~$n$. Fill a list~\ensuremath{\mathcal{L}}\xspace with the $2n$ endpoints of the intervals
in~$V$ (which we can assume to be pairwise different). Sort~\ensuremath{\mathcal{L}}\xspace.
Then traverse~\ensuremath{\mathcal{L}}\xspace in this order, which corresponds to a left-to-right sweep.
There are two types of events.
\begin{description}
\item[\normalfont\textsc{Left:}] If the current endpoint is the left
endpoint of an interval~$v$, let $x$ be the largest color over all
intervals that have an arc to~$v$, that is,
$x = \max\{c(v) \colon \arc{u}{v} \in A(G)\} \cup \{0\}$. Then
search in \ensuremath{\mathcal{T}}\xspace for the smallest color~$y$ greater than~$x$,
delete~$y$ from~\ensuremath{\mathcal{T}}\xspace, and set $c(v)=y$.
\item[\normalfont\textsc{Right:}] If the current endpoint is the
right endpoint of an interval~$v$, we insert~$c(v)$ into~\ensuremath{\mathcal{T}}\xspace
because $c(v)$ is available again.
\end{description}
Clearly, this implementation runs in $O(m + n \log n)$ time. To
avoid the $O(m)$ term, we use a second binary search tree $\ensuremath{\mathcal{T}}\xspace'$ that
maintains the currently active intervals, sorted according to color.
We augment~$\ensuremath{\mathcal{T}}\xspace'$ by storing, in every node~$\nu$, the leftmost right
endpoint~$r^\nu$ in its subtree. Any interval that contains the
current endpoint in~\ensuremath{\mathcal{L}}\xspace is \emph{active}.
At a \textsc{Left} event, this allows us to determine, in
$O(\log n)$ time, the interval~$u$ with the largest color~$x$ among
all active intervals that overlap the new interval~$v$ (that is,
$r_u < r_v$), as follows. We find~$u$ by descending~$\ensuremath{\mathcal{T}}\xspace'$ from its
root. From the current node, we go to its right child~$\rho$
whenever $r^\rho<r_v$. If such an interval does not exist, we set
$x=0$. Then we continue as above, querying~\ensuremath{\mathcal{T}}\xspace for the smallest
available color~$y>x$. Finally, we set $c(v)=y$ and add~$v$ to~$\ensuremath{\mathcal{T}}\xspace'$.
At a \textsc{Right} event, we update~\ensuremath{\mathcal{T}}\xspace as above. Additionally, we
need to update~$\ensuremath{\mathcal{T}}\xspace'$. We do this by deleting the interval~$v$ that
is about to end.
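For concreteness, the following sketch implements the basic $O(m + n \log n)$
sweep described above (without the augmented-tree refinement). It is written in
Python and is purely illustrative: the input format (a dictionary
\texttt{arcs\_into} listing, for every interval, the intervals with an arc into
it) and the use of a plain sorted list in place of the balanced search
tree~\ensuremath{\mathcal{T}}\xspace are simplifying assumptions of ours.
\begin{verbatim}
import bisect

def greedy_color(intervals, arcs_into):
    # intervals: name -> (left, right); arcs_into[v]: intervals u with an arc u -> v
    events = []
    for v, (l, r) in intervals.items():
        events.append((l, 0, v))   # left endpoint event
        events.append((r, 1, v))   # right endpoint event
    events.sort()                  # left-to-right sweep order

    available = list(range(1, len(intervals) + 1))   # sorted list of free colors
    color = {}
    for _, is_right, v in events:
        if is_right:
            bisect.insort(available, color[v])        # the color of v is free again
        else:
            # largest color among intervals with an arc into v (all already colored)
            x = max((color[u] for u in arcs_into.get(v, [])), default=0)
            i = bisect.bisect_right(available, x)     # smallest free color > x
            color[v] = available.pop(i)
    return color
\end{verbatim}
Replacing the sorted list by a balanced search tree turns every
\texttt{insort}/\texttt{pop} operation into an $O(\log n)$ step, which matches
the running time claimed above.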
We now argue that, for outputting the greedy solution of our
coloring problem, the running time of $\Oh{n \log n}$ is worst-case
optimal assuming the comparison-based model of computation.
Suppose that a coloring algorithm would run in $o(n \log n)$ time.
Then, we could use it to sort any set $\{a_1,\dots,a_n\}$ of $n$
numbers in $o(n \log n)$ time by coloring the set
$\{[a_1-M,a_1+M],\dots,[a_n-M,a_n+M]\}$ of intervals,
where~$M = \max\{a_1,\dots,a_n\} - \min \{a_1,\dots,a_n\}$. Namely,
the corresponding directional interval graph is a tournament graph
and for each $i \in \{1,\dots,n\}$, the color of the interval
$[a_i-M,a_i+M]$ in an optimal coloring corresponds to the rank
of~$a_i$ in a sorted version of $\{a_1,\dots,a_n\}$.
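As a concrete toy instance (of our own choosing), take $\{a_1,a_2,a_3\}=\{3,1,2\}$,
so that $M=2$ and the intervals are $[1,5]$, $[-1,3]$, and $[0,4]$: any two of
them overlap, none contains another, all arcs point from smaller to larger~$a_i$,
and the greedy sweep assigns the colors $3,1,2$, which are exactly the ranks of
$3,1,2$.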
\end{proof}
\Approx*
\label{cor:approx*}
\begin{proof}
Let $\mathcal{I}$ be the set of intervals of~$G$.
We split $\mathcal{I}$ into a set of left-going intervals $\mathcal{I}_1$
and into a set of right-going intervals $\mathcal{I}_2$.
These sets induce the directional graphs $G_1$ and $G_2$, respectively.
Now we color $G_1$ and $G_2$ independently with our greedy coloring algorithm
and we re-combine them by using different sets of colors for $G_1$ and $G_2$.
This is a proper coloring of~$G$ with $\chi = \chi(G_1) + \chi(G_2)$ colors
since between any interval in~$\mathcal{I}_1$ and any interval
in~$\mathcal{I}_2$, there may be an edge but no arc.
Clearly, $\chi \le 2 \max \set{\chi(G_1), \chi(G_2)} \le 2 \chi(G)$.
\end{proof}
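The splitting step can be summarized by the following Python sketch; the
routine \texttt{greedy\_color} (for instance the one sketched in
Appendix~\ref{sec:runtimegreedy}) and the input format are again illustrative
assumptions only.
\begin{verbatim}
def color_bidirectional(intervals, direction, arcs_into, greedy_color):
    # intervals: name -> (left, right); direction[v] is "left" or "right"
    I1 = {v: iv for v, iv in intervals.items() if direction[v] == "left"}
    I2 = {v: iv for v, iv in intervals.items() if direction[v] == "right"}
    restrict = lambda I: {v: [u for u in arcs_into.get(v, []) if u in I] for v in I}
    c1 = greedy_color(I1, restrict(I1))   # color G_1 ...
    c2 = greedy_color(I2, restrict(I2))   # ... and G_2 independently
    shift = max(c1.values(), default=0)   # disjoint palette for G_2
    return {**c1, **{v: c + shift for v, c in c2.items()}}
\end{verbatim}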
\section{Coloring Mixed Proper Interval Graphs}
\label{sec:proper}
\HardnessPropper*
\label{clm:hardnesspropper*}
\begin{proof}
The general idea is as follows.
We start the construction with the same set of intervals as in the proof of \cref{thm:hardness}.
Then, we set $x_\mathsf{left} = 0$, and $x_\mathsf{right}$ to the very right of all intervals, i.e., $x_\mathsf{right} = 4 (m+1)$.
Next, we describe a procedure that extends every interval so that it has the left endpoint in $x_\mathsf{left}$, or has the right endpoint in $x_\mathsf{right}$.
The procedure adds some new intervals and merges some groups of intervals into one interval.
The total height of the interval representation increases to $4n+2nm$ during the procedure.
Finally, we extend every interval at $x_\mathsf{left}$ ($x_\mathsf{right}$) to the left (right) by the reciprocal of its current total length.
This trick guarantees that in the end, no interval contains another interval.
In the remainder of the proof, we describe the procedure of extending, adding and merging intervals.
The intervals of the frame and all $v_i^\mathsf{true}$ and $v_i^\mathsf{false}$
with $i = 1, \dots, n$ already end at $x_\mathsf{left}$ or $x_\mathsf{right}$.
Currently, we have that in any drawing of $G_\Phi$ with $6n$ layers and a fixed $i \in \set{1,\ldots,n}$,
all the intervals $b_i^j$, and $o_i^j$ with $j = 1, \dots, m$ are drawn in the layers of $f_i^3$ and $f_i^6$.
Additionally, each dummy interval and each $s_j$ is drawn in one of these layers.
We divide these layers into $m$ copies each so that each pair of $b_i^j$ and $o_i^j$ has its own two layers.
First we divide each $f_i^3$ and $f_i^6$ into $m$ copies each.
Accordingly, we adjust the height of the drawing to be $4n + 2nm$.
Then, we make $m$ copies of each dummy interval and virtually assign each copy to a distinct layer of the drawing.
For each $b_i^j$ we virtually assign it to the layer of the $j$-th copy of $f_i^3$ and extend it to the left up to $x_\mathsf{left}$.
In this process, we merge $b_i^j$ with every dummy interval on the left and with the $j$-th copy of $f_i^3$ while keeping all involved arcs.
We call the merged interval $f_i^{3,j}$.
If there is no $b_i^j$, we obtain $f_i^{3,j}$ by extending
the $j$-th copy of $f_i^3$ up to $x_\mathsf{right}$ and merging
it with all dummy intervals virtually assigned to its layer.
Symmetric to $b_i^j$, we extend each $o_i^j$ to the right up to $x_\mathsf{right}$ and merge $o_i^j$ with all dummy intervals virtually assigned to the layer of $f_i^{3,j}$, but here we drop the arcs of the dummy intervals.
We call the merged interval~${o'}_i^j$.
Similarly, for every clause $c_j$ with variables $v_i$, $v_k$, $v_\ell$, we merge all dummy intervals virtually assigned to the layer of the $j$-th copy of $f_z^6$, for $z=i,k,\ell$ that are to the right of $s_j$ and drop all the arcs as in the previous case.
We obtain three copies~${d'}_j^1$, ${d'}_j^2$, and ${d'}_j^3$ of the same interval and we merge one of these copies, say ${d'}_j^3$, with $s_j$.
We denote that new interval by ${s'}_j$.
We drop all arcs of ${d'}_j^1$, ${d'}_j^2$, and ${s'}_j$
to preserve the freedom we had for placing $s_j$ in our original construction.
The only unmerged dummy intervals are in the layer of the $j$-th copy of $f_i^6$ to the left of~$s_j$
or in the layer of the $j$-th copy of $f_i^6$ if there is
no occurrence of the variable $v_i$ in the clause $c_j$.
In each of these layers, we merge the dummy intervals together
with the corresponding copy of $f_i^6$ and
obtain intervals ending at $x_\mathsf{left}$.
For $j = 1, \dots, m$, we call the merged interval $f_i^{6,j}$.
For $i = 1, \dots, n$ and $j = 1, \dots, m-1$,
we add the arcs $\arc{f_i^{2}}{f_i^{3,1}}$,
$\arc{f_i^{3,j}}{f_i^{3,j+1}}$, $\arc{f_i^{3,m}}{f_i^{4}}$,
$\arc{f_i^{5}}{f_i^{6,1}}$, $\arc{f_i^{6,j}}{f_i^{6,j+1}}$,
and $\arc{f_i^{6,m}}{f_{i+1}^{1}}$ to have a
frame as in the original hardness construction.
Observe that this new frame now has exactly $4n + 2nm$ intervals with
their left endpoint at~$x_\mathsf{left}$ and, in the whole construction,
there are $2n + 6m$ other intervals with their right endpoint at~$x_\mathsf{right}$, i.e.,
the $2n$ intervals $v_i^\mathsf{true}$ and $v_i^\mathsf{false}$ for $i = 1, \dots, n$
and the $6m$ intervals ${o'}_i^j$, ${d'}_j^1$, ${d'}_j^2$, and ${s'}_j$
for $j = 1, \dots, m$.
Next, we argue that the functionality described
in the proof of \cref{thm:hardness} is retained.
Intervals of the (new) frame either block
a complete layer from $x_\mathsf{left}$ to $x_\mathsf{right}$
or they end at position 1 (each $f_i^2$ and $f_i^4$)
or within the clause gadget of a clause~$c_j$
if the variable $v_i$ occurs in~$c_j$ (each $f_i^{3,j}$ and $f_i^{6,j}$).
Any other interval starting in a clause gadget of a clause~$c_j$
needs to be matched with a frame interval that ends in the clause
gadget of~$c_j$.
Therefore, to have a construction with a total height of at most~$4n + 2nm$,
we need to combine $f_i^{3,j}$ and $f_i^{6,j}$
with ${o'}_i^j$ and some of $\set{{d'}_j^1, {d'}_j^2, {s'}_j}$,
while $f_i^{3,j}$ and ${s'}_j$ are not combinable.
This ensures that the correctness argument from the
proof of \cref{thm:hardness} remains valid.
\end{proof}
\section{Rotation Is Directional}
\label{sec:rotation}
\begin{claim}
The rotation of the MPQ-tree $T$ of a directional interval graph $G$ constructed in the proof of \cref{lem:recognition_rotation} is a directional rotation.
\end{claim}
Let~$T'$ denote the tree~$T$ after applying the rotations described in the proof of \cref{lem:recognition_rotation}.
We claim that~$T'$
is directional. Let~$\tilde T$ be an arbitrary directional rotation
of~$T$. By construction, $T'$ and~$\tilde T$ differ only in the
ordering of children of PQP-nodes~$x$ that do not have arcs from/to
vertices in~$A_x^T$. To prove that~$T'$ is directional, it suffices
to show that the rotation of~$\tilde T$ obtained by swapping two
children of a P-node~$x$ that have no arcs from/to vertices in
$A_x^T$ is directional.
Consider a directional interval representation~$\mathcal I$ whose
clique ordering corresponds to the rotation~$\tilde T$ and
let~$y_k,y_l$ be two children of some PQP-node~$x$ such that
neither~$B_k$ nor~$B_l$ contains a vertex with an arc from/to a
vertex in~$A_x^T$. Let~$I_k$ be the smallest interval that
contains~$\mathcal I(v)$ for all~$v \in B_k$ and let~$I_l$ be
defined analogously for~$B_l$. Note that~$I_k$ and~$I_l$ are
disjoint and that each of them is properly contained
in~$\bigcap_{v \in A_x^T} \mathcal I(v)$ as otherwise it would have
incoming or outgoing arcs. After suitably stretching the real line,
we may assume that~$I_k$ and~$I_l$ have the same length.
Let~$x_k,x_l$ denote the left endpoints of~$I_k$ and~$I_l$,
respectively. We obtain a directional representation whose clique
ordering corresponds to~$T'$ simply by exchanging the positions of
the representations of the subgraphs induced by~$B_l$ and by~$B_k$.
More formally, for each~$v \in B_k$
set~$\mathcal I'(v) = \mathcal I(v) - x_k+x_l$ and for
each~$v \in B_l$ set~$\mathcal I'(v) = \mathcal I(v) - x_l + x_k$.
For all other vertices $v \in V \setminus (B_k \cup B_l)$
set~$\mathcal I'(v) = \mathcal I(v)$. It follows that each of the
subgraphs induced by~$B_k$ and~$B_l$ is still represented correctly.
Moreover, by construction the vertices in~$B_l$ and~$B_k$ still have
edges (and not arcs) to all vertices in~$A_x^T$.
\section{Recognition Algorithm}
\label{sec:implementation}
\ThmRecognition*
\label{thm:recognition*}
\begin{proof}
Our algorithm, given a directional interval graph~$G$, applies the
algorithm from Lemma~\ref{lem:recognition_rotation} to obtain a
directional MPQ-tree of~$G$.
Then, using Lemma~\ref{lem:recognition_dimension}, it constructs a directional representation of~$G$.
If any of the phases fails, then we know that~$G$ is not a directional interval graph, and we can reject the input.
Otherwise, our algorithm accepts the input and produces the directional representation of~$G$.
It is easy to see that both algorithms from
Lemmas~\ref{lem:recognition_rotation}
and~\ref{lem:recognition_dimension} can be implemented to run in
$\Oh{|V(G)|^2}$ time.
For Lemma~\ref{lem:recognition_rotation} it is enough to notice that:
\begin{itemize}
\item the MPQ-tree of an interval graph $U(G)$ is of size $\Oh{|V(G)|}$ and can be constructed in time $\Oh{|V(G)|+|E(G)|+|A(G)|}$~\cite{KorteM89},
\item when deciding the rotation of a PQQ-node $x$, the pair of vertices that decide the rotation of $x$ can be found in $\Oh{|V(G)|}$ time,
\item when deciding the rotation of a PQP-node $x$, the first, and the last child of $x$ can be found in $\Oh{|V(G)|}$ time.
\end{itemize}
For Lemma~\ref{lem:recognition_dimension} it is enough to notice that:
\begin{itemize}
\item there are $\Oh{|V(G)|}$ maximal cliques in an interval graph,
\item there are $\Oh{|V(G)|}$ vertices and $\Oh{|V(G)|^2}$ arcs in the auxiliary poset~$D$,
\item a two-dimensional realizer of the auxiliary poset~$D$ can be constructed in time $\Oh{|V(D)|+|A(D)|}$~\cite{McConnellS99}.
\end{itemize}
\end{proof}
It is also quite easy to speed up the implementation of the rotation
algorithm in Lemma~\ref{lem:recognition_rotation} to linear time.
Sadly, the auxiliary poset~$D$ that we construct in
Lemma~\ref{lem:recognition_dimension} has quadratic size and is thus the
main obstacle for obtaining a linear-time algorithm.
We suspect that an explicit construction of~$D$ can be avoided.
\end{document}
\begin{document}
\input dibe_2004.tex
\title[Boundary regularity for parabolic systems]{
Boundary regularity for parabolic systems\\ in convex domains}
\date{\today}
\begin{abstract}
In a cylindrical space-time domain with a convex, spatial base,
we establish a local Lipschitz estimate for weak solutions to parabolic systems with Uhlenbeck structure up to the lateral boundary,
provided homogeneous Dirichlet data are assumed on that part of the lateral boundary.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
This paper studies boundary regularity of weak solutions $u\colon \Omega_T\to\mathbb{R}^N$, $N\ge 1$, to nonlinear parabolic systems of the type
\begin{equation}\label{system}
\partial_t u^i - \sum_{\alpha=1}^{n}
\big[ \mathbf a\big(|Du|\big) u^i_{x_{\alpha}}\big] _{x_{\alpha}}=b^i \quad
\mbox{for $i=1,\dots, N$,}
\end{equation}
in a space-time cylinder $\Omega_T=\Omega\times (0,T)$, where
$\Omega\subset\mathbb{R}^n$ is a bounded open convex
set, $n\ge 2$ and $T>0$.
We assume that $u$ satisfies a homogeneous Dirichlet boundary condition
on some part of the lateral boundary $(\partial\Omega)_T=\partial\Omega\times (0,T)$.
The nonlinearity
$\mathbf a\colon\mathbb{R}_{\ge 0}\to\mathbb{R}_{\ge 0}$ fulfills a growth condition of the type
$\mathbf a (r)+r\mathbf a'(r)\approx r^{p-2}$ for some growth
exponent $p>\frac{2n}{n+2}$. As such the diffusion part in \eqref{system}
is said to have the Uhlenbeck structure. For the right-hand side we require $b\in L^\sigma (\Omega_T,\mathbb{R}^N)$
for some $\sigma >n+2$.
The primary purpose of this paper is to establish
$$
Du\in L^\infty \big( \Omega_T\cap Q_{\varrho}(z_o),\mathbb{R}^{Nn}\big)
$$
whenever $u$ is a weak solution to the system \mbox{e}qref{system}
satisfying $u\equiv 0$ on the subset $(\partial\Omega)_T\cap Q_{2\varrho}(z_o)$ of
the lateral boundary. Here $Q_\varrho(z_o):=B_{\varrho}(x_o)\times(t_o-\varrho^2,t_o)$
for some $z_o=(x_o,t_o)\in(\pl\Om)_T$. We only require that $\Omega$ is a bounded open convex set.
No further regularity of $\partial\Omega$ is assumed.
The qualitative assertion is confirmed by a quantitative
$L^\infty$-estimate for the spatial gradient $Du$.
Regularity problems for nonlinear equations or systems of the parabolic $p$-Laplacian type and their stationary counterparts
were very difficult to access in the past. The interior $C^{1,\lambda}$-regularity had been a longstanding open problem.
The first major breakthrough was achieved by Uraltseva \cite{Uraltseva} in 1968. She showed that solutions to $p$-Laplacian equations, whose model is given by
\begin{equation}\label{i:p-Laplace}
-\Div\big( |Du|^{p-2}Du\big)=0\qquad\mbox{in $\Omega$,}
\end{equation}
are of class $C^{1,\lambda}$ in the interior of the domain $\Omega\subset\mathbb{R}^n$.
This result was generalized in 1977 by Uhlenbeck in her famous paper \cite{Uhlenbeck} to the $p$-Laplacian type systems
(i.e. elliptic version of \eqref{system})
\begin{equation}\label{i:p-Laplace-type}
-
\sum_{\alpha=1}^{n}
\big[ \mathbf a\big(|Du|\big) u^i_{x_{\alpha}}\big] _{x_{\alpha}}=0
\quad
\mbox{for $i=1,\dots, N$.}
\end{equation}
More general structures, replacing $|Du|^2$ by some quadratic expression
$Q(Du,Du)$ and including a sufficiently regular dependence on $x\in\Omega$, have been considered by Tolksdorf \cite{Tolksdorf:1983}. Roughly speaking, in the weak formulation of \mbox{e}qref{i:p-Laplace-type} the nonlinear term $\mathbf a\big(|Du|\big)Du$ is replaced by $\mathbf a\big(x, Q(Du, Du)^\frac12\big)Q(Du,\,\cdot \,)$.
Similar $C^{1,\lambda}$-regularity results have been shown for
minimizers of integral functionals with $p$-growth. The degenerate
case with growth exponent $p\ge 2$ goes back to Giaquinta \& Modica
\cite{GiaquintaModica:1986-a}, while the singular case $1<p<2$ was
treated by Acerbi \& Fusco \cite{AcerbiFusco}. For systems of
$p$-Laplacian type as in \eqref{i:p-Laplace-type}, sharp pointwise interior gradient bounds in terms of a nonlinear potential of the right-hand side $b$ have been established in \cite{DuzaarMingione:2010}.
Regarding the boundary regularity for $p$-Laplacian type systems the picture is less complete.
Global $C^{1,\lambda}$-regularity is known only for homogeneous Dirichlet and Neumann boundary data; see Hamburger \cite{Hamburger}. For general boundary data, it is still an open problem. However, local $L^\infty$-gradient bounds (Lipschitz estimates at the boundary)
have been established by Foss \cite{Foss} for minimizers to
asymptotically regular integral functionals on domains with $C^{1,\lambda}$-boundary; see also
\cite{Foss-Passarelli-Verde, Lieberman}. Again for homogeneous
Dirichlet or Neumann data,
global Lipschitz estimates (in terms of the right-hand side $b$ of \eqref{i:p-Laplace-type} and under minimal assumptions on the regularity
of $\partial \Omega$ and $b$) have been proved by Cianchi \& Maz'ya in \cite{Cianchi-Mazya-2, Cianchi-Mazya-1}. These results are global in nature and only valid if $u$ or its outer normal derivative $\partial_\nu u$ vanishes on the whole boundary $\partial\Omega$.
It is noteworthy that their results hold for convex domains in particular. In contrast to
these global results, Banerjee \& Lewis \cite{BanerjeeLewis:2014} established
local boundary Lipschitz estimates with homogeneous data
for convex domains. Their result is of local nature as
they only require the homogeneous boundary condition
on a part
of the boundary.
Inspired by the technique introduced in \cite{BanerjeeLewis:2014}, Marcellini,
the first two, and the last author were able to establish the first local boundary Lipschitz estimate for integral functionals with
non-standard $p,q$-growth; see \cite{BDMS}.
The interior $C^{1,\lambda}$ regularity theory for the parabolic $p$-Laplacian type systems \eqref{system} is a fundamental achievement by DiBenedetto \& Friedman; see
\cite{DiBenedetto-Friedman, DiBenedetto-Friedman2,
DiBenedetto-Friedman3} and the monograph \cite[Chapters~VIII, IX, X]{DB};
see also Chen \cite{Chen} and Wiegner \cite{Wiegner}.
For systems without Uhlenbeck structure of the type
\begin{equation}
\partial_t u^i - \sum_{\alpha=1}^{n}
\big[ \mathbf a_\alpha^i(Du) \big] _{x_{\alpha}}=0 \quad
\mbox{for $i=1,\dots, N$,}
\end{equation}
with a nonlinear diffusion $\mathbf a$ that behaves asymptotically like the $p$-Laplacian
at the
origin, i.e.~$s^{1-p}\mathbf a(s\xi)\to |\xi|^{p-2}\xi$ in the limit $s\downarrow 0$ for any $\xi$, partial $C^{1,\lambda}$-regularity has been established by B\"ogelein \& Duzaar \& Mingione \cite{BoeDuMin}.
In contrast to the interior regularity theory, the boundary regularity is largely an open problem.
There were two results achieved by Chen \& DiBenedetto in \cite{Chen-DiBenedetto} for the parabolic systems with the Uhlenbeck structure
in $C^{1,\lambda}$-domains.
The first was about the H\"older continuity of a solution $u$ up to the lateral boundary with any H\"older exponent in $(0,1)$,
given sufficiently regular boundary data; see also \cite[Chapter X, Theorem 1.1]{DB}.
The second dealt with the H\"older continuity of $Du$ up to the lateral boundary, given homogeneous boundary data;
see \cite[Chapter X, Theorem 1.2]{DB}.
The results have been achieved by a boundary flattening procedure. This allows us, after freezing the coefficients, to reduce the problem to the interior case via reflection along the flat boundary. At this stage it is important that the transformed coefficients admit certain quantitative H\"older-regularity. In the course of the proof the authors established gradient sup-estimates
for the model case of $p$-Laplacian systems with homogeneous Dirichlet data when the boundary is flat; see \cite[Propositions 3.1, 3.1']{Chen-DiBenedetto}. These estimates serve as reference inequalities when comparing the solution with the one to the frozen system.
This is why the boundary $\partial\Omega$ and the boundary datum $\mathbf g$ are assumed to be of class $C^{1,\lambda}$. This approach fails in the case of Lipschitz domains.
Boundary regularity for more general parabolic systems has been considered by the first author in \cite{B-boundary}. The main result ensures the boundedness up to the lateral boundary of the spatial derivative of weak solutions to asymptotically regular parabolic systems. Roughly speaking this means that the $C^1$-coefficients of the diffusion part behave like the $p$-Laplacian when $|Du|$ becomes large. The result holds true for inhomogeneous boundary values. As in \cite{Chen-DiBenedetto}, the proof relies on a boundary flattening procedure and comparison arguments. Therefore, $\partial\Omega$ and $\mathbf g$ have to be of class $C^{1,\lambda}$.
\subsection{Statement of the result}
We assume that the nonlinearity $\mathbf a\colon\mathbb{R}_{\ge0}\to\mathbb{R}_{\ge0}$ is
of class $C^1(\mathbb{R}_{>0},\mathbb{R}_{>0})$ and satisfies
\begin{equation}
\label{assumption:a(0)}
\lim_{r\downarrow0}r\mathbf{a}(r)=0.
\end{equation}
Moreover, $\mathbf a$ fulfills a standard monotonicity and $p$-growth condition
\begin{equation}\label{assumption:a'}
m(\mu^2+r^{2}) ^{\frac{p-2}{2}}
\leq
\mathbf{a}(r)+r\mathbf{a}^{\prime }(r)
\leq
M(\mu^2+r^{2}) ^{\frac{p-2}{2}}
\qquad\mbox{for all }r>0,
\end{equation}
with positive constants $0<m\le M$, some parameter $\mu\in[0,1]$, and some growth exponent
$\frac{2n}{n+2}<p<\infty$. Note that in the case $\mu>0$ the parabolic system
\eqref{system} is non-degenerate, while for $\mu=0$ the diffusion part becomes either degenerate or singular at points with
$|Du|=0$. For the inhomogeneity $b=(b^1,\dots,b^N)\colon\Omega_T\to\mathbb{R}^N$ we assume the integrability condition
\begin{equation}
\label{assumption:b}
\mbox{$ b\in L^\sigma(\Omega_T,\mathbb{R}^N)\;$ for some $\sigma>n+2$.}
\end{equation}
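A model nonlinearity satisfying these assumptions is $\mathbf a(r)=(\mu^2+r^{2})^{\frac{p-2}{2}}$:
a direct computation gives
$$
\mathbf a(r)+r\mathbf a'(r)=\big(\mu^2+r^{2}\big)^{\frac{p-4}{2}}\big(\mu^2+(p-1)r^{2}\big),
$$
so that \eqref{assumption:a'} holds with $m=\min\{1,p-1\}$ and $M=\max\{1,p-1\}$,
while $r\mathbf a(r)\to0$ as $r\downarrow0$, since $p>\frac{2n}{n+2}\ge1$; hence
\eqref{assumption:a(0)} is satisfied as well.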
\begin{definition}\label{def:weak-sol}\upshape
Assume that the nonlinearity $\mathbf a$ and the inhomogeneity $b$ satisfy the assumptions \eqref{assumption:a(0)}--\eqref{assumption:b}.
A map $u \colon \Omega_T\to \mathbb{R}^N$ with
$$
u\in C^0\big( [0, T ]; L^2 (\Omega ,\mathbb{R}^N)\big)\cap L^p\big(0,T;W^{1,p}(\Omega ,\mathbb{R}^N)\big)
$$
is called a weak solution to the nonlinear parabolic system \eqref{system}
if and only if
\begin{equation}\label{weak-solution}
\iint_{\Omega_T}\big[ u\cdot\varphi_t -\mathbf a(|Du|)Du\cdot D\varphi\big]\, \mathrm{d}x\mathrm{d}t
=
\iint_{\Omega_T}b\cdot\varphi\, \mathrm{d}x\mathrm{d}t
\end{equation}
for any test function $\varphi\in C^\infty_0(\Omega_T ,\mathbb{R}^N)$.
$\Box$
\end{definition}
Throughout this article $d$ denotes the scaling deficit given by
\begin{align}\label{def:deficit}
d:=\left\{
\begin{array}{cl}
\frac p2,&\mbox{if $p\ge2$,}\\[1.2ex]
\frac{2p}{p(n+2)-2n},& \mbox{if $\frac{2n}{n+2}<p<2$.}
\end{array}
\right.
\end{align}
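Note that the two branches in \eqref{def:deficit} coincide for $p=2$, where $d=1$,
while $d>1$ both in the degenerate range $p>2$ and in the singular range
$\frac{2n}{n+2}<p<2$; the deficit reflects the inhomogeneous scaling behaviour of
the system.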
Now we can state our main result.
\begin{theorem}[$L^\infty$-gradient bound at the boundary]\label{thm:main}
Let $\Omega\subset\mathbb{R}^n$ be an open bounded convex set, and assume that the structural assumptions \eqref{assumption:a(0)}--\eqref{assumption:b} are in force and let $u\in C^0([0,T];L^2(\Omega,\mathbb{R}^N))\cap
L^p(0,T;W^{1,p}(\Omega,\mathbb{R}^N))$ be a weak solution to the parabolic
system \eqref{system} in the sense of Definition~\ref{def:weak-sol}
satisfying the homogeneous Dirichlet boundary condition
\begin{equation*}
\mbox{$u\equiv0\;$
on $(\partial\Omega)_T\cap Q_{2\varrho}(z_o)$ in the sense of traces,}
\end{equation*}
where $z_o=(x_o,t_o)$ is a point with space center
$x_o\in\partial\Omega$ and time $t_o\in(0,T)$, and $\varrho\in (0,\frac12\sqrt{t_o})$.
Then, we have
$$
Du\in L^\infty \big(\Omega_T\cap Q_{\varrho/2}(z_o)\big).
$$
Moreover, the following quantitative $L^\infty$-gradient bound
\begin{align*}
\sup_{\Omega_T\cap Q_{\varrho/2}(z_o)}|Du|
\le
C\bigg[ \Big(1 + \varrho^{n+2}\|b\|_{
L^\sigma(\Omega_T\cap Q_\varrho(z_o))}^{\frac{(n+2)\sigma}{\sigma-n-2}}\Big)
\Xiint{-\!-}_{\Omega_T \cap Q_{\varrho}(z_o)}\big(1+|Du|^p\big)
\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac{d}{p}}
\end{align*}
holds with a constant $C$ depending on $n,N,p,\sigma,m,M$, and
the geometry of the boundary.
\end{theorem}
\begin{remark}
The dependence of the constant $C$ on the geometry of the boundary can be quantified in terms of the expression $\Theta_{\varrho/2}(x_o)$ defined in Section
\ref{sec:convex-domains}.
\end{remark}
We point out that the gradient bound from the preceding theorem is the
exact analogue of the interior gradient bounds in
\cite[Chapter VIII, Thms. 5.1, 5.2']{DB} for the case
$b=0$.
\subsection{Strategy of the proof}
The usual boundary flattening
procedure via a local Lipschitz representation of $\partial\Omega$
leads to a nonlinearity depending on the gradient of the Lipschitz graph. Due to the limited regularity of $\partial \Omega$,
the transformed nonlinearity admits only a measurable dependence on the spatial variables.
This prevents the reduction of the problem by freezing, comparing and reflection arguments
to the interior. Therefore we pursue a different strategy, which is inspired by ideas from Banerjee \& Lewis
\cite{BanerjeeLewis:2014}; see also \cite{BDMS} for the corresponding
boundary estimate for minimizers to integral functionals with non-standard $p,q$ growth.
The present paper represents in some sense the parabolic counterpart of \cite{BanerjeeLewis:2014}.
We establish the
sup-estimate of $Du$ in Theorem~\ref{thm:main}
as the limit of similar estimates for more regular approximating problems.
More precisely, we approximate the convex domain $\Omega$
in Hausdorff distance from outside by smooth convex domains $\Omega_\varepsilon$,
regularize the nonlinearity $\mathbf{a}$ into $\mathbf{a}_\varepsilon$, extend $u$ and $b$ by zero outside of $\Omega_T$,
and mollify them properly into $g_\varepsilon$ and $b_\varepsilon$.
Then we solve in $(\Omega_\varepsilon)_T\cap
Q_\varrho(x_o,t_o)$ the Cauchy-Dirichlet problem associated to $\mathbf{a}_\varepsilon$ and $b_\varepsilon$, with boundary values $g_\varepsilon$
on the parabolic boundary of $(\Omega_\varepsilon)_T\cap Q_\varrho(x_o,t_o)$.
The unique solution $u_\varepsilon$ -- which exists by standard methods --
fulfills the Dirichlet condition $u_\varepsilon=0$ on
$(\partial\Omega_\varepsilon)_T \cap Q_\varrho(x_o,t_o)$ by construction. Since the domain of $u_\varepsilon$ is smooth we may use a reflection argument together with the interior $C^{1,\lambda}$-regularity results and the Schauder estimates
for linear parabolic systems to show that these solutions are smooth up to the boundary; see
Appendix~\ref{app:smooth}.
Next, we prove a quantitative sup-estimate for $Du_{\varepsilon}$, which is uniform
in the parameter $\varepsilon$. Its proof consists of two steps. In the first step
we derive an energy estimate for the second order derivatives; see Proposition \ref{prop:energy}.
The key ingredient is a differential geometric
identity from \cite{Grisvard:1985}; see Lemma~\ref{lem:Grisvard}. This identity allows us to exploit the convexity of $\partial\Omega_\varepsilon$ in the sense that
the boundary integral, which cannot be controlled by integrals over $(\Omega_\varepsilon)_T \cap Q_\varrho(x_o,t_o)$,
admits a sign and can be discarded in the estimate.
Based on the energy estimate, we then perform
a Moser iteration, which leads to the sup-estimate for $Du_{\varepsilon}$ in
Proposition~\ref{prop:apriori}.
Finally we pass to the limit $\varepsilon\mathrm{d}ownarrow0$, which can be achieved by certain compactness
arguments.
Decisive
for this argument are the uniform (in $\varepsilon$) energy estimates for the solutions to the regularized problems.
The main obstruction at this stage is that testing the
original parabolic system by the difference $u-u_\varepsilon$ is not allowed,
since $u_\varepsilon$ does not admit zero boundary values on
$(\partial\Omega)_T\cap Q_\varrho(x_o,t_o)$. Moreover, $u$ is not
sufficiently regular in time, i.e. the
extension of $u$ by zero outside of $\Omega_T$ does not necessarily
admit a distributional time derivative on $(\Omega_\varepsilon)_T\cap
Q_\varrho(x_o,t_o)$. This is why we will not choose the zero
extension of $u$
as boundary values for $u_\varepsilon$, but the modified version
$g_\varepsilon := \eta_\varepsilon (x)u$. The cut-off function
$\eta_\varepsilon$ is chosen to vanish on the set
$\{ x\in\Omega: {\rm dist}(x,\partial\Omega)<\varepsilon\}$. With the aid of
Hardy's inequality, one checks that $g_\varepsilon$ admits a time
derivative in the dual space on the domain $(\Omega_\varepsilon)_T\cap
Q_\varrho(x_o,t_o)$.
Note that this choice of the boundary values does not affect the sup-estimate for $Du_{\varepsilon}$. On the other hand the choice allows us to derive an appropriate uniform energy estimate for $u_\varepsilon$. Thus, we can pass to a weakly convergent subsequence with the weak limit $\tilde u$.
Since the sup-estimate for $Du_{\varepsilon}$ is uniform in $\varepsilon$, it can be transferred to
$D\tilde u$.
To conclude, it is left to show $\tilde u=u$. This however follows from the uniqueness.
\noindent
{\it Acknowledgments.} V.~B\"ogelein and N.~Liao have been supported by the FWF-Project P31956-N32
``Doubly nonlinear evolution equations".
\section{Preliminaries}\label{Sec:Preliminaries}
\subsection{A remark on convex domains}\label{sec:convex-domains}
The dependence of the constant from
Theorem~\ref{thm:main} on the domain is given by the
quantity
\begin{equation}\label{def-Theta}
\Theta_{\varrho}(x_o):=\frac{\varrho^n}{|\Omega\cap B_\varrho(x_o)|},
\qquad\mbox{for $x_o\in\partial\Omega$ and $\varrho>0$.}
\end{equation}
Since every bounded convex
set satisfies a uniform cone condition, $\Theta_{\varrho}(x_o)$ can be bounded
independently of $x_o\in\partial\Omega$ and $\varrho>0$
by a constant only depending on the
domain $\Omega$. For a more detailed discussion, we
refer to \cite[Section 2.1]{BDMS}.
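For example, if $\Omega$ is a half-space and $x_o\in\partial\Omega$, then
$|\Omega\cap B_\varrho(x_o)|=\tfrac12|B_\varrho(x_o)|$, so that
$\Theta_{\varrho}(x_o)=\frac{2}{|B_1|}$ for every $\varrho>0$.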
\subsection{A differential geometric identity}
For a $C^2$-domain $\Omega\subset\mathbb{R}^n$, the second fundamental
form of $\partial\Omega$ is defined by
\begin{equation*}
\boldsymbol{B}_x(\xi,\eta):=-\partial_\xi\nu(x)\cdot \eta
\end{equation*}
for any $x\in\partial\Omega$ and all tangential vectors $\xi,\eta\in
\mathrm{T}_x(\partial\Omega)$,
where $\nu\in C^1(\partial\Omega,\mathbb{R}^n)$ denotes the outer
unit normal vector field on $\partial\Omega$.
We will use the following differential geometric identity due to Grisvard
\cite[Eqn.~3.1.1.8]{Grisvard:1985}.
\begin{lemma}\label{lem:Grisvard}
Let $\Omega\subset\mathbb{R}^n$ be a bounded $C^2$-domain and $w\in
C^1(\overline\Omega,\mathbb{R}^n)$ a vector field. Then we have the
following identity on $\partial\Omega$:
\begin{align*}
&(w\cdot\nu)\Div w
-
\partial_w w\cdot \nu\\
&\qquad=
\Div_{\mathrm{T}}((w\cdot\nu)w_{\mathrm{T}})
-
(\trace\,\boldsymbol B)(w\cdot\nu)^2
-
\boldsymbol B(w_{\mathrm{T}},w_{\mathrm{T}})
-
2w_{\mathrm{T}}\cdot\nabla_{\mathrm{T}}(w\cdot\nu),
\end{align*}
where $w_{\mathrm{T}}:=w-(w\cdot\nu)\nu$ denotes the tangential component
of $w$
and $\nabla_{\mathrm{T}}$, $\Div_{\mathrm{T}}$ the gradient and the
divergence, respectively, with regard to the
tangential directions.
\end{lemma}
Note that for a convex domain $\Omega$, our sign convention for the second
fundamental form implies
\begin{equation*}
\boldsymbol{B}_x(\eta,\eta)\le0
\mbox{\qquad for any }\eta\in\mathrm{T}_x(\partial\Omega).
\end{equation*}
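For instance, for the unit ball $\Omega=B_1(0)$ one has $\nu(x)=x$ on
$\partial\Omega$, hence $\boldsymbol{B}_x(\xi,\eta)=-\xi\cdot\eta$ and in
particular $\boldsymbol{B}_x(\eta,\eta)=-|\eta|^2\le0$ for every tangential
direction $\eta$.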
\subsection{Properties of the coefficients $\mathbf{a}(r)$}
Keeping in mind assumption~\eqref{assumption:a(0)}, we observe
\begin{equation*}
\mathbf{a}(r)
=
\frac1r\int_0^r\frac{\mathrm{d}}{\mathrm{d}s}[s\mathbf{a}(s)]\mathrm{d}s
=
\int_0^1\big[\mathbf{a}(r\sigma)+r\sigma\mathbf{a}'(r\sigma)\big]\mathrm{d}\sigma
\quad\mbox{for any $r>0$.}
\end{equation*}
Therefore, assumption \eqref{assumption:a'} and
standard estimates, cf. \cite[Lemma~2.1]{GiaquintaModica:1986}, \cite[Lemma~2.1]{AcerbiFusco}, imply
\begin{equation}\label{bounds-a}
c^{-1}m\big( \mu^2+r^{2}\big)^{\frac{p-2}{2}}
\leq \mathbf{a}(r)
\leq
cM\big( \mu^2+r^{2}\big)^{\frac{p-2}{2}}
\end{equation}
for all $r>0$ and a constant $c=c(p)$. For the derivatives of the coefficients
$\mathbf{a}(|\xi|)\xi_\alpha^i$ we compute
\begin{equation*}
\partial_{\xi_\beta^j}\big[\mathbf{a}(|\xi|)\xi_\alpha^i\big]
=
\mathbf{a}(|\xi|)\delta_{\alpha\beta}\delta^{ij}
+\frac{\mathbf{a}^{\prime }( |\xi|) }{|\xi|}\xi_\alpha^i\xi_\beta^j
\end{equation*}
for any $\xi\in\mathbb{R}^{Nn}$ with $\xi\neq0$. This implies the monotonicity and growth property
\begin{equation}\label{monoton-coefficients}
\boldsymbol h( |\xi| ) |\lambda | ^{2}
\leq
\sum_{i,j=1}^{N}\sum_{\alpha,\beta=1}^n \partial_{\xi_\beta^j}\big[\mathbf{a}(|\xi|)\xi_\alpha^i\big]
\lambda_{\alpha}^i\lambda _{\beta}^j
\leq
\boldsymbol H( |\xi| ) |\lambda | ^{2},
\end{equation}
for any $\xi,\lambda\in\mathbb{R}^{Nn}$ with $\xi\neq0$, where we abbreviated
\begin{equation}\label{bounds-h-H}
\left\{
\begin{aligned}
\boldsymbol{h}(r)
&:=\min\{\mathbf{a}(r), \mathbf{a}(r)+r\mathbf{a}'(r)\}\ge c^{-1}m(\mu^2+r^2)^{\frac{p-2}2},\\
\boldsymbol H(r)&:=\max\{\mathbf{a}(r), \mathbf{a}(r)+r\mathbf{a}'(r)\}
\le
cM(\mu^2+r^2)^{\frac{p-2}2}.
\end{aligned}\right.
\end{equation}
The estimates follow from~\eqref{bounds-a}
and~\eqref{assumption:a'} by distinguishing the cases $\mathbf{a}'(r)\ge 0$ and $\mathbf{a}'(r)<0$: if $\mathbf{a}'(r)\ge0$, then $\boldsymbol h(r)=\mathbf{a}(r)$ and $\boldsymbol H(r)=\mathbf{a}(r)+r\mathbf{a}'(r)$, so the lower bound follows from~\eqref{bounds-a} and the upper bound from~\eqref{assumption:a'}; if $\mathbf{a}'(r)<0$, the roles of the two quantities are interchanged and the bounds follow in the same way.
\subsection{Sobolev's constant on convex domains}
In order to determine the dependencies of the constants in the
Moser iteration, we rely on the following version of
Sobolev's embedding valid for convex domains, cf.~\cite[Chapter~10, Thm.~8.1]{DiBenedetto:RealAnalysis} and \cite[Lemma~2.3]{BDMS}.
\begin{lemma}\label{lem:sobolev}
Let $K\subset\mathbb{R}^n$ be a bounded open convex set and $1\le p<n$. Then, for any
$w\in W^{1,p}(K)$ we have
\begin{equation*}
\bigg[- \mskip-19.5mu \int_K| w|^{p^\ast}\,\mathrm{d}x\bigg]^\frac{1}{p^\ast}
\le
c(n,p)\frac{(\operatorname{diam} K)^n}{|K|}|K|^{\frac1n}\bigg[- \mskip-19.5mu \int_K|Dw|^{p}\,\mathrm{d}x\bigg]^\frac{1}{p}
+
\bigg[- \mskip-19.5mu \int_K| w|^{p}\,\mathrm{d}x\bigg]^\frac{1}{p},
\end{equation*}
with the Sobolev exponent $p^\ast=\frac{np}{n-p}$.
\end{lemma}
\subsection{Auxiliary lemmata} The following elementary assertions will be used in
the Moser iteration.
\begin{lemma}\label{lem:A}
Let $A>1$, $\theta>1$, $\gamma>0$ and $k\in\mathbb{N}$. Then, we have
\begin{align}\label{A1}
\prod_{j=1}^k
A^{\frac{\theta^{k-j+1}}{\gamma(\theta^k-1)}}
=
A^\frac{\theta}{\gamma(\theta-1)}
\qquad\mbox{for $A,\theta>0$.}
\end{align}
and
\begin{align}\label{A2}
\prod_{j=1}^k
A^{\frac{j\theta^{k-j+1}}{\gamma(\theta^k-1)}}
\le
A^{\frac{\theta^2}{\gamma(\theta-1)^2}}
\qquad\mbox{for $A,\theta>1$.}
\end{align}
\end{lemma}
\begin{proof}
For the first product we compute
\begin{align*}
\prod_{j=1}^k
A^{\frac{\theta^{k-j+1}}{\gamma(\theta^k-1)}}
&=
\exp\bigg[
\log A\sum_{j=1}^k\frac{\theta^{k-j+1}}{\gamma(\theta^k-1)}\bigg]\\\nonumber
&=
\exp\bigg[
\frac{\log A}{\gamma(1-\theta^{-k})}
\sum_{j=1}^k\theta^{-j+1} \bigg]\\\nonumber
&=
\exp\bigg[
\frac{\log A}{\gamma (1-\theta^{-1})}
\bigg]
=
A^\frac{\theta}{\gamma(\theta-1)}.
\end{align*}
Similarly, we re-write the second product in the form
\begin{align*}
\prod_{j=1}^k
A^{\frac{j\theta^{k-j+1}}{\gamma(\theta^k-1)}}
=
\exp\bigg[\log A
\sum_{j=1}^k \frac{j\theta^{k-j+1}}{\gamma(\theta^k-1)}\bigg]
=
\exp\bigg[\frac{\log A }{\gamma(1-\theta^{-k})}
\sum_{j=1}^k j\theta^{-j+1}\bigg].
\end{align*}
To estimate the right-hand side further, we observe that for any $t\in(0,1)$ we have
\begin{align*}
\frac{1}{1-t^k}\sum_{j=1}^kj t^{j-1}
=
\frac{1}{1-t^k}\frac{\mathrm{d}}{\mathrm{d} t}\sum_{j=0}^k t^{j}
=\frac{1}{1-t^k}
\frac{\mathrm{d}}{\mathrm{d} t}\frac{1-t^{k+1}}{1-t}
\le
\frac{1}{(1-t)^2}.
\end{align*}
We use this estimate with the choice $t=\theta^{-1}\in(0,1)$ and obtain
\begin{align*}
\prod_{j=1}^k
A^{\frac{j\theta^{k-j+1}}{\gamma(\theta^k-1)}}
&\le
\exp\bigg[\frac{\log A}{\gamma(1-\theta^{-1})^2}\bigg]
=
A^{\frac{\theta^2}{\gamma(\theta-1)^2}} .
\end{align*}
This proves the claim.
\end{proof}
\section{A priori estimates for smooth solutions}\label{sec:apriori}
We begin by proving the desired gradient bound in the case of
regular data. More precisely, we additionally assume that the boundary
$\partial\Omega$ is of class $C^2$ and that the solution is of class
$C^3$. Moreover, we consider parabolic systems that are
non-degenerate, i.e. $\mu>0$, and inhomogeneities with $\spt b\Subset\Omega\times\mathbb{R}$.
The precise statement reads as follows.
\begin{proposition}\label{prop:apriori}
Let $\Omega\subset\mathbb{R}^n$ be a bounded convex domain with
$C^2$-boundary, $B_\varrho(x_o)$ a ball with
$\frac{\varrho^n}{2^n|\Omega\cap B_{\varrho/2}(x_o)|}\le \Theta$
for some constant $\Theta>0$, and $(t_o-\varrho^2,t_o)\subset(0,T)$. Moreover, we assume that
$u\in C^3(\overline\Omega_T\cap
Q_\varrho(z_o),\mathbb{R}^N)$ is a solution to
the parabolic system
\begin{equation}\label{reg: parabolic system}
\partial_tu^i
-
\sum_{\alpha=1}^{n}
\big[\mathbf{a}(|Du|) u^i_{x_\alpha}\big]_{x_\alpha}
=
b^i
\qquad\mbox{in $\Omega_T\cap Q_\varrho(z_o)$, }
\end{equation}
for $i=1,\ldots,N$, where $\mathbf a$ and $b$ satisfy assumptions \eqref{assumption:a(0)}--\eqref{assumption:b} with $\mu\in(0,1]$ and
\begin{equation*}
\spt b\Subset\Omega\times\mathbb{R}.
\end{equation*}
Moreover, $u\equiv0$ on $(\pl\Om)_T\cap Q_{2\varrho}(z_o)$.
Then we have the gradient sup-estimate
\begin{align}\label{Lipschitz-estimate}\nonumber
&\sup_{\Omega_T\cap Q_{\varrho/2}(z_o)}|Du|\\
&\qquad\le
C\bigg[\Big(1+ \varrho^{n+2}\| b\|_{
L^\sigma(\Omega_T\cap Q_\varrho(z_o))}^{\frac{(n+2)\sigma}{\sigma-n-2}}\Big) \Xiint{-\!-}_{\Omega_T \cap Q_{3\varrho/4}(z_o)}
\big( 1+|Du|^p\big)
\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac{d}p},
\end{align}
for some constant $C$ that depends at most on $n,N,p,\sigma,m,M$,
and $\Theta$ and where $d$ denotes the scaling deficit defined in \mbox{e}qref{def:deficit}.
\end{proposition}
The proof is given in the following subsections.
\subsection{Energy estimates for second order derivatives}\label{sec:energy-est}
The first step in the proof of Proposition~\ref{prop:apriori} is the
derivation of an energy estimate
for smooth solutions to the parabolic system
\eqref{reg: parabolic system}.
\begin{proposition}[Energy estimate for second derivatives] \label{prop:energy}
Suppose the hypotheses in Proposition~\ref{prop:apriori} hold.
Then, for any non-negative increasing
$C^1$-function $\boldsymbol{\Phi} \colon \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0} $,
any cut-off function
$\phi\in C_0^\infty (B_\varrho(x_o),\mathbb{R}_{\ge0})$ and any non-negative Lipschitz continuous function $\chi\colon [t_o-\varrho^2,t_o]\to
\mathbb{R}_{\ge 0}$ we have the estimate
\begin{align}\label{eq:energy-est-sys1}\nonumber
\iint_{\Omega_T\cap
Q_\varrho}&
\chi\phi^2\bigg[\partial_t\big[\boldsymbol{\Psi}(|Du|)\big]
+
\tfrac12
\boldsymbol{\Phi}(|Du|)
\sum_{\alpha,\beta,\gamma=1}^n \sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
u^i_{x_\alpha x_\gamma }u^j_{x_\beta x_\gamma}
\bigg]\mathrm{d}x\mathrm{d}t\\\nonumber
&\le
2\iint_{\Omega_T\cap Q_\varrho} \chi \boldsymbol{\Phi}(|Du|)
\sum_{\alpha,\beta,\gamma=1}^n \sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
\phi_{x_\alpha}u_{x_\gamma}^i \phi_{x_\beta}
u_{x_\gamma}^j\mathrm{d}x\mathrm{d}t\\[4pt]
&\phantom{\le\,}
\,+\iint_{\Omega_T\cap Q_\varrho}
\chi\phi^2 \boldsymbol{\Phi}(|Du|)Du\cdot Db\,\mathrm{d}x\mathrm{d}t,
\end{align}
where we abbreviated
\begin{equation}\label{def-chi}
\boldsymbol{\Psi}(s):=\int_0^{s}\boldsymbol{\Phi}(\tau)\tau\,\mathrm{d}\tau
\end{equation}
and
\begin{equation}\label{def:b}
\mathbf{b}_{\alpha\beta}^{ij}
:=
\mathbf{a}(|Du|)\delta_{\alpha\beta}\delta^{ij}
+
\frac{\mathbf{a}'(|Du|)}{|Du|}
u_{x_\alpha}^i u_{x_{\beta}}^j
\end{equation}
for $\alpha,\beta=1,\ldots, n$ and $i,j=1,\ldots, N$.
\end{proposition}
\begin{remark}
\upshape
We note that the monotonicity conditions~\eqref{assumption:a'}
and~\eqref{bounds-h-H} imply the ellipticity and growth estimates
\begin{equation}\label{b-elliptic}
c^{-1}m\big(\mu^2+|Du|^2\big)^{\frac{p-2}2}|\lambda|^2
\le
\sum_{\alpha,\beta=1}^n\sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}\lambda_\alpha^i\lambda_\beta^j
\le
cM\big(\mu^2+|Du|^2\big)^{\frac{p-2}2}|\lambda|^2
\end{equation}
for any $\lambda\in\mathbb{R}^{Nn}$, cf.~\eqref{monoton-coefficients}. Therefore, the preceding proposition
yields an energy estimate of the form
\begin{align}\label{eq:energy-est-sys2}
\nonumber
\iint_{\Omega_T\cap
Q_\varrho}
\chi\phi^2\bigg[\partial_t\big[\boldsymbol{\Psi}(|Du|)\big]
&+
\tfrac{m}{C}\boldsymbol{\Phi}(|Du|)\big(\mu^2+|Du|^2\big)^{\frac{p-2}2}|D^2u|^2
\bigg]\mathrm{d}x\mathrm{d}t\\\nonumber
&\le
CM\iint_{\Omega_T\cap Q_\varrho}\chi \boldsymbol{\Phi}(|Du|)
\big(\mu^2+|Du|^2\big)^{\frac p2}|D\phi|^2\mathrm{d}x\mathrm{d}t\\[4pt]
&\phantom{\le\,}
+\iint_{\Omega_T\cap Q_\varrho}\chi\phi^2\boldsymbol{\Phi}(|Du|)Du\cdot Db\,\mathrm{d}x\mathrm{d}t,
\end{align}
with a constant $C=C(p)\ge 1$.
This will be the starting point for the Moser iteration.
$\Box$
\end{remark}
\begin{remark}\upshape
It is crucial that Proposition~\ref{prop:energy} holds for cylinders
with arbitrary centers $z_o$, not only for points in the lateral boundary. This allows us to apply it on regularized
domains $\Omega_\varepsilon\supset\Omega$, with the choice $z_o\in\partial \Omega\times (0,T)$
independently of $\varepsilon>0$.
$\Box$
\end{remark}
\begin{proof}[{\rm\bfseries Proof of Proposition~\ref{prop:energy}.}]
For the sake of convenience,
we omit the reference to the center $z_o$ in our notation.
We write $v_e$ for the directional derivative of a function $v$ in
the direction $e\in\mathbb{R}^n$.
We start by differentiating \eqref{reg: parabolic system}
in the direction $e$. In view of the identities
\begin{equation}\label{eq:lambda-derivative}
(|Du|)_e
=
\sum_{j=1}^N\sum_{\beta =1}^n\frac{u_{x_\beta}^j u_{x_\beta e}^j}{|Du|}
\quad
\mbox{and}
\quad
\big[ \mathbf{a}(|Du|)\big]_{e}
=
\frac{\mathbf{a}'(|Du|)}{|Du|}
\sum_{j=1}^N\sum_{\beta =1}^n u_{x_\beta}^ju_{x_\beta e}^j
\end{equation}
we obtain for $i=1,\ldots,N$ that
\begin{align}\label{eq:diff-system}
(b^i)_e&
=
\partial_t u_e^i -
\sum_{\alpha=1}^n \Big[\mathbf{a}(|Du|) u_{x_{\alpha}}^i\Big] _{x_{\alpha}e}
=
\partial_t u_e^i
-
\sum_{\alpha,\beta=1}^n\sum_{j=1}^N
\big[\mathbf{b}_{\alpha\beta}^{ij}(x,t)u_{x_\beta e}^j\big]_{x_\alpha}.
\end{align}
In the last line, we used the abbreviation introduced in \eqref{def:b}.
Next, we compute
\begin{align*}
\mathbf I
&:=
\sum_{\alpha,\beta,\gamma=1}^n\, \sum_{i,j=1}^N
\bigg[
\frac{\mathbf{a}'(|Du|)}{|Du|}
u_{x_\alpha}^iu_{x_\gamma}^iu_{x_\beta}^ju_{x_\beta x_\gamma}^j
\bigg]_{x_\alpha} \\
&\,=
\sum_{\alpha,\beta,\gamma=1}^n\, \sum_{i,j=1}^N
\frac{\mathbf{a}'(|Du|)}{|Du|}
u_{x_\alpha}^iu_{x_{\alpha}x_\gamma}^iu_{x_\beta}^ju_{x_\beta x_\gamma}^j\\
&\,\phantom{=\,}
+
\sum_{\alpha,\beta,\gamma=1}^n\, \sum_{i,j=1}^N
u_{x_\gamma}^i
\bigg[
\frac{\mathbf{a}'(|Du|)}{|Du|}
u_{x_\alpha}^iu_{x_\beta}^ju_{x_\beta x_\gamma}^j
\bigg]_{x_\alpha} \\
&\,=
\sum_{\alpha,\beta,\gamma=1}^n\, \sum_{i,j=1}^N
\Big[\mathbf{b}_{\alpha\beta}^{ij}
u_{x_{\alpha}x_\gamma}^iu_{x_\beta x_\gamma}^j +
u_{x_\gamma}^i \big[\mathbf{b}_{\alpha\beta}^{ij}
u_{x_\beta x_\gamma}^j\big]_{x_\alpha}\Big] \\
&\,\phantom{=\,} -
\sum_{\alpha,\gamma=1}^n\, \sum_{i=1}^N
\Big[
\mathbf{a}(|Du|) u_{x_{\alpha}x_\gamma}^iu_{x_\alpha x_\gamma}^i +
u_{x_\gamma}^i
\big[
\mathbf{a}(|Du|) u_{x_\alpha x_\gamma}^i\big]_{x_\alpha} \Big].
\end{align*}
We note that the last term on the right-hand side of the preceding identity is equal to $\Delta [\mathbf g(|Du|)]$, where $\Delta$ stands for the Laplacian, and $\mathbf{g}$ is defined by
$$
\mathbf g(s):=\int_0^sr\mathbf a(r)\mathrm{d}r\quad \mbox{for any $s>0$.}
$$
Using the differentiated system \eqref{eq:diff-system}
with $e =e_\gamma$ we thus obtain
\begin{align}\label{bfI}
\mathbf I
=
\sum_{\alpha,\beta,\gamma=1}^n\, \sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
u^i_{x_\alpha x_\gamma }u^j_{x_\beta x_\gamma} -
\Delta \big[\mathbf g(|Du|)\big] +
\tfrac12\partial_t|Du|^2 -
Du\cdot Db.
\end{align}
On the other hand, a direct calculation gives
\begin{align}\label{LG}
\mathbf I + \Delta\big[\mathbf g(|Du|)\big]
=
\mathrm L\big[ \mathbf g(|Du|)\big] ,
\end{align}
where $\mathrm L$ denotes the second order elliptic differential operator defined by
\begin{equation}\label{def:op-L-system}
\mathrm L[v]
:=
\sum_{\alpha,\gamma =1}^n
\big[ \mathbf c_{\alpha\gamma}v_{x_\gamma}\big]_{x_\alpha},
\end{equation}
with coefficients
\begin{align}\label{def:coeff-c-system}
\mathbf c_{\alpha\gamma}(x)
&:=
\frac{1}{\mathbf{a}(|Du|)}
\bigg[
\mathbf{a}(|Du|) \delta_{\alpha\gamma}
+
\frac{\mathbf{a}'(|Du|)}{|Du|}
\sum_{i=1}^N
u_{x_\alpha}^i u_{x_{\gamma}}^i\bigg]\\
&\,=
\delta_{\alpha\gamma}
+
\frac{\mathbf{a}'(|Du|)}{|Du|\,\mathbf{a}(|Du|)}
\sum_{i=1}^Nu_{x_\alpha}^i u_{x_{\gamma}}^i\nonumber .
\end{align}
Joining \eqref{bfI} and \eqref{LG}, we find
\begin{align}\label{eq:sub-1-system}
\tfrac12\partial_t|Du|^2 +
\sum_{\alpha,\beta,\gamma=1}^n\,\sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
u^i_{x_\alpha x_\gamma }u^j_{x_\beta x_\gamma}
=
\mathrm L\big[ \mathbf g(|Du|)\big] +
Du\cdot Db.
\end{align}
We now multiply this identity by $\phi^2(x)$, where $\phi\in C^\infty_0(B_\varrho,\mathbb{R}_{\ge 0})$ is a smooth cut-off function. In the resulting equation we examine the diffusion term on the right-hand side.
We start by noting that
\begin{align}\label{replace-c-by-D^2f}
\sum_{\gamma=1}^n\mathbf c_{\alpha\gamma} \mathbf g(|Du|)_{x_\gamma}
&=
\sum_{\beta,\gamma=1}^n\sum_{j=1}^N
\mathbf{a}(|Du|)
\mathbf c_{\alpha\gamma} u_{x_\beta}^j u_{x_\beta
x_\gamma}^j\\\nonumber
&=
\sum_{\beta,\gamma=1}^n\sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
u_{x_\gamma}^i u_{x_\beta x_\gamma}^j
\end{align}
for $\alpha=1,\ldots,n$, where we used first
\eqref{eq:lambda-derivative}$_1$ with $e=e_\beta$ and then
the definition \eqref{def:coeff-c-system}
for the coefficients $\mathbf c_{\alpha\beta}$ and~\eqref{def:b}. This allows us to compute
\begin{align*}
\phi^2 \mathrm L\big[ \mathbf g(|Du|)\big]
&=
\sum_{\alpha,\gamma =1}^n\Big[ \phi^2
\mathbf c_{\alpha\gamma}\mathbf g(|Du|)_{x_\gamma}\Big]_{x_\alpha} -
2\phi
\sum_{\alpha,\gamma=1}^n \phi_{x_\alpha}\mathbf c_{\alpha\gamma}
\mathbf g(|Du|)_{x_\gamma} \\
&=
\mathbf{II} -
2\phi
\sum_{\alpha,\beta,\gamma=1}^n \sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
\phi_{x_\alpha}u_{x_\gamma}^iu_{x_\beta x_\gamma}^j ,
\end{align*}
with the obvious meaning of $\mathbf{II}$.
Inserting this identity into \eqref{eq:sub-1-system} multiplied by
$\phi^2$ as described above, we deduce
\begin{align*}
\phi^2 &
\bigg[ \tfrac12 \partial_t|Du|^2 +
\sum_{\alpha,\beta,\gamma=1}^n \sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
u^i_{x_\alpha x_\gamma }u^j_{x_\beta x_\gamma}\bigg]\\
&=
\mathbf{II} -
2\phi \sum_{\alpha,\beta,\gamma=1}^n \sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
\phi_{x_\alpha}u_{x_\gamma}^iu_{x_\beta x_\gamma}^j +
\phi^2Du\cdot Db .
\end{align*}
Next, we
note that due to~\eqref{b-elliptic}, the matrix
$(\mathbf{b}_{\alpha\beta}^{ij})$ defines a
positive definite bilinear form on $\mathbb{R}^{Nn}$, which grants a Young type inequality for quadratic forms. That is,
\begin{align*}
2\phi & \bigg|\sum_{\alpha,\beta=1}^n\,\sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
\phi_{x_\alpha}u_{x_\gamma}^iu_{x_\beta x_\gamma}^j\bigg|\\
&\le
\tfrac12\phi^2\sum_{\alpha,\beta=1}^n\,\sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}u_{x_\alpha
x_\gamma}^iu_{x_\beta x_\gamma}^j
+2\sum_{\alpha,\beta=1}^n\,\sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}\phi_{x_\alpha}u_{x_\gamma}^i\phi_{x_\beta}u_{x_\gamma}^j,
\end{align*}
for any $\gamma=1,\ldots,n$.
Using this estimate in the identity above and re-absorbing the term containing the second
derivatives of $u$ into the left-hand side yields
\begin{align*}
\tfrac12\phi^2 &
\bigg[
\partial_t|Du|^2 +
\sum_{\alpha,\beta,\gamma=1}^n \sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
u^i_{x_\alpha x_\gamma }u^j_{x_\beta x_\gamma}\bigg]\\
&\le
\mathbf{II} +
2\sum_{\alpha,\beta,\gamma=1}^n \sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
\phi_{x_\alpha}u_{x_\gamma}^i \phi_{x_\beta} u_{x_\gamma}^j +
\phi^2Du\cdot Db.
\end{align*}
Next, we multiply this identity by $\chi(t)\boldsymbol{\Phi}(|Du|)$, where $\chi\colon [t_o-\varrho^2,t_o]\to\mathbb{R}_{\ge 0}$ is a non-negative Lipschitz continuous function and $\boldsymbol{\Phi}\in C^1(\mathbb{R}_{\ge 0},\mathbb{R}_{\ge 0})$ is
increasing.
For the term involving the time derivative we compute
\begin{align*}
\tfrac12\boldsymbol{\Phi} (|Du|)\partial_t|Du|^2
=
\boldsymbol{\Phi} (|Du|)|Du|\partial_t|Du|
=
\partial_t\bigg[\int_0^{|Du|}\boldsymbol{\Phi}(\tau)\tau\,\mathrm{d}\tau\bigg]
=
\partial_t\big[\boldsymbol{\Psi}(|Du|)\big],
\end{align*}
with the function $\boldsymbol{\Psi}$ defined in~\eqref{def-chi} and obtain
\begin{align}\label{eq:sub-2-system}\nonumber
\chi\phi^2 &
\bigg[
\partial_t\big[\boldsymbol{\Psi}(|Du|)\big] +
\tfrac12\boldsymbol{\Phi} (|Du|)\sum_{\alpha,\beta,\gamma=1}^n \sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
u^i_{x_\alpha x_\gamma }u^j_{x_\beta x_\gamma}\bigg]\\
&\le
\chi \boldsymbol{\Phi}(|Du|) \bigg[
\mathbf{II} +
2\sum_{\alpha,\beta,\gamma=1}^n \sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
\phi_{x_\alpha}u_{x_\gamma}^i \phi_{x_\beta} u_{x_\gamma}^j +
\phi^2Du\cdot Db \bigg].
\end{align}
Next, we analyze the term containing $\mathbf{II}$, which will result in a boundary term. Indeed,
\begin{align*}
\boldsymbol{\Phi} (|Du|)\mathbf{II}
&=
\boldsymbol{\Phi} (|Du|)\sum_{\alpha,\gamma=1}^n\Big[ \phi^2
\mathbf c_{\alpha\gamma} \mathbf g(|Du|)_{x_\gamma}\Big]_{x_\alpha}\\
&=
\sum_{\alpha,\gamma=1}^n\Big[ \phi^2 \boldsymbol{\Phi} (|Du|)
\mathbf c_{\alpha\gamma}\mathbf g(|Du|)_{x_\gamma}\Big]_{x_\alpha} -
\phi^2 \sum_{\alpha,\gamma=1}^n \mathbf c_{\alpha\gamma}\boldsymbol{\Phi} (|Du|)_{x_\alpha}
\mathbf g(|Du|)_{x_\gamma}\\
&\le
\sum_{\alpha,\gamma=1}^n\Big[ \phi^2 \boldsymbol{\Phi} (|Du|)
\mathbf c_{\alpha\gamma}\mathbf g(|Du|)_{x_\gamma}\Big]_{x_\alpha}.
\end{align*}
In the last line we used
\begin{align*}
\sum_{\alpha,\gamma=1}^n \mathbf c_{\alpha\gamma}
\boldsymbol{\Phi} (|Du|)_{x_\alpha}
\mathbf g(|Du|)_{x_\gamma}
=
\boldsymbol{\Phi}^\prime (|Du|) \mathbf g^\prime(|Du|)
\sum_{\alpha,\gamma=1}^n
\mathbf c_{\alpha\gamma}|Du|_{x_\alpha}|Du|_{x_\gamma}
\ge
0,
\end{align*}
since $\boldsymbol{\Phi}$ and $\mathbf g$ are both increasing and the coefficients
$\mathbf c_{\alpha\gamma}$ are positive definite.
We now integrate $\chi \boldsymbol{\Phi} (|Du|)\mathbf{II}$ over $\Omega_T\cap Q_\varrho$ and perform an integration
by parts. This leads to a boundary integral. More
precisely, denoting by $\nu$ the outward unit normal vector on $\partial\Omega$, we have
\begin{align}\label{integral-of-II}
&\iint_{\Omega_T\cap Q_\varrho}\chi\boldsymbol{\Phi} (|Du|)\mathbf{II}\,\mathrm{d}x\mathrm{d}t\\
\nonumber
&\qquad\le
\iint_{(\partial\Omega)_T\cap Q_\varrho}\chi\phi^2\boldsymbol{\Phi} (|Du|)
\sum_{\alpha,\gamma=1}^n \mathbf c_{\alpha\gamma}\mathbf g(|Du|)_{x_\gamma}\nu_\alpha\,\mathrm{d}\mathcal{H}^{n-1}\mathrm{d}t\\\nonumber
&\qquad=
\iint_{(\partial\Omega)_T\cap Q_\varrho}
\chi\phi^2\boldsymbol{\Phi} (|Du|)
\sum_{\alpha,\beta,\gamma=1}^n \sum_{i,j=1}^N\mathbf{b}_{\alpha\beta}^{ij}
u_{x_\gamma}^iu_{x_\beta x_\gamma}^j\nu_\alpha\,\mathrm{d}\mathcal{H}^{n-1}\mathrm{d}t.
\end{align}
The last step follows from~\eqref{replace-c-by-D^2f}.
Now, we analyze the integrand on the right-hand side by recalling the explicit form
of the coefficients. In view of~\eqref{def:b}, we obtain
\begin{align*}
\sum_{\alpha,\beta,\gamma=1}^n & \sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
u_{x_\gamma}^iu_{x_\beta x_\gamma}^j\nu_\alpha\\
&=
\sum_{\alpha,\beta,\gamma=1}^n \sum_{i,j=1}^N
\bigg[
\mathbf{a}(|Du|)\delta_{\alpha\beta}\delta^{ij}
+\frac{\mathbf{a}'(|Du|)}{|Du|}
u^i_{x_\alpha}u^j_{x_\beta}\bigg]u_{x_\gamma}^iu_{x_\beta x_\gamma}^j\nu_\alpha\\
&=
\sum_{\alpha,\gamma=1}^n\sum_{i=1}^N
\mathbf{a}(|Du|)u_{x_\gamma}^iu_{x_\alpha x_\gamma}^i\nu_\alpha
+
\sum_{i=1}^N
u_\nu^i
\sum_{\beta,\gamma=1}^n\sum_{j=1}^N
\frac{\mathbf{a}'(|Du|)}{|Du|}
u^i_{x_\gamma}u^j_{x_\beta}u^j_{x_\beta x_\gamma}.
\end{align*}
At this point,
we apply the differential geometric identity in
Lemma~\ref{lem:Grisvard} to the vector fields $w:=\nabla u^i$, $i=1,\ldots,N$.
In our case, the tangential components
of $w$
vanish since $u\equiv0$ on $(\partial\Omega)_T\cap Q_\varrho$.
Hence, Lemma~\ref{lem:Grisvard} yields the identity
\begin{equation*}
\Delta u^i u_\nu^i-
\sum_{\alpha,\gamma=1}^n
u_{x_\gamma}^iu_{x_\alpha x_\gamma}^i\nu_\alpha
=-(\trace\,\boldsymbol B) \big(u_\nu^i\big)^2\quad \mbox{on $(\partial\Omega)_T\cap Q_\varrho $}
\end{equation*}
for $i=1,\dots,N$.
Here, $\boldsymbol B$ denotes the second fundamental form of $\partial\Omega$. Note that
$\mathrm{trace}\,\boldsymbol B\le 0$, since $\Omega$ is convex. Therefore, in the above identity
the right-hand side is non-negative. This allows us to continue in estimating the boundary
integral above.
More precisely, on $(\partial\Omega)_T\cap Q_\varrho $ we obtain
\begin{align*}
&\sum_{\alpha,\beta,\gamma=1}^n \sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
u_{x_\gamma}^iu_{x_\beta x_\gamma}^j\nu_\alpha\\
&\quad\quad\le
\sum_{i=1}^N
u_\nu^i
\sum_{\beta,\gamma=1}^n\sum_{j=1}^N\bigg[
\mathbf{a}(|Du|)\delta_{\gamma\beta}\delta^{ij}+
\frac{\mathbf{a}'(|Du|)}{|Du|}
u^i_{x_\gamma}u^j_{x_\beta}\bigg]u^j_{x_\beta x_\gamma}\\
&\quad\quad=
\sum_{i=1}^N
u_\nu^i
\sum_{\beta,\gamma=1}^n\sum_{j=1}^N
\mathbf{b}_{\gamma\beta}^{ij}u^j_{x_\beta x_\gamma}
=
\sum_{i=1}^N
u_\nu^i
\sum_{\gamma=1}^n
\big[\mathbf{a}(|Du|) u_{x_\gamma}^i\big]_{x_\gamma} \\
&\quad\quad=
u_\nu\cdot\partial_tu-u_\nu\cdot b=0.
\end{align*}
In the last line, we used the parabolic system
\eqref{reg: parabolic system},
the fact $\partial_tu\equiv0$ on
$(\partial\Omega)_T\cap Q_\varrho$ and $\spt b\Subset\Omega_T$.
Recalling~\mbox{e}qref{integral-of-II}, we deduce that
\begin{align*}
\iint_{\Omega_T\cap Q_\varrho}
\chi\boldsymbol{\Phi} (|Du|)\mathbf{II}\,\mathrm{d}x\mathrm{d}t
&\le 0.
\end{align*}
Therefore, integrating \eqref{eq:sub-2-system} over $\Omega_T\cap Q_\varrho$ we obtain
\begin{align*}
& \iint_{\Omega_T\cap Q_\varrho} \chi\phi^2
\bigg[
\partial_t\big[\boldsymbol{\Psi}(|Du|)\big] +
\tfrac12\boldsymbol{\Phi} (|Du|)\sum_{\alpha,\beta,\gamma=1}^n \sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
u^i_{x_\alpha x_\gamma }u^j_{x_\beta x_\gamma}\bigg] \mathrm{d}x\mathrm{d}t\\
&\qquad\le
\iint_{\Omega_T\cap Q_\varrho}
\chi \boldsymbol{\Phi}(|Du|) \bigg[
2\sum_{\alpha,\beta,\gamma=1}^n \sum_{i,j=1}^N
\mathbf{b}_{\alpha\beta}^{ij}
\phi_{x_\alpha}u_{x_\gamma}^i \phi_{x_\beta} u_{x_\gamma}^j +
\phi^2Du\cdot Db \bigg] \mathrm{d}x\mathrm{d}t.
\end{align*}
This finishes the proof of the proposition.
\end{proof}
\subsection{A reverse H\"older type inequality}\label{sec:iteration}
Here, we work in the setting of Proposition~\ref{prop:apriori}. Again we omit the reference to the center $z_o$
in our notation. By $\zeta\in
C^1(\mathbb{R}_{\ge0},[0,1])$ we denote a cut-off function with respect to the
time variable that satisfies $\zeta\mbox{e}quiv0$ on $[0,\frac12]$, $\zeta\mbox{e}quiv 1$
on $[1,\infty)$ and $0\le\zeta'\le 3$ on $[\frac12,1]$.
Moreover, we consider a cut-off function $\phi\in
C^\infty_0(B_\varrho,[0,1])$ with respect to the spatial variables.
In the energy estimate \mbox{e}qref{eq:energy-est-sys2} we choose
the non-negative increasing function $\boldsymbol{\mathcal P}hi\colon\mathbb{R}_{\ge0}\to\mathbb{R}_{\ge0}$ in the form
\begin{equation*}
\boldsymbol{\mathcal P}hi(s):= \hat uidetilde{\boldsymbol{\mathcal P}hi} \big(\sqrt{\mu^2+s^2}\big)
\qquad\mbox{with}\qquad
\hat uidetilde{\boldsymbol{\mathcal P}hi}(\tau):=\zeta^2(\tau)\tau^{2\alpha}\quad\text{ for some }\alpha\ge0.
\mbox{e}nd{equation*}
We could omit the cut-off function $\zeta$ in the case $\mu=1$. For the sake of a unified approach we proceed using $\zeta$ in any case.
In the sequel we use the abbreviation
$$
\boldsymbol{\mathcal H}(\xi):= \sqrt{\mu^2+|\xi|^2}
\qquad\mbox{for $\xi\in\mathbb{R}^{Nn}$,}
$$
so that $\boldsymbol{\mathcal P}hi(|\xi|)=\hat uidetilde{\boldsymbol{\mathcal P}hi}(\boldsymbol{\mathcal H}(\xi))$.
With this notation, we have
\begin{align}\label{Psi-upper-bound}
\boldsymbol {\mathcal P}si (|Du|)
&:=
\int_0^{|Du|} \boldsymbol{\mathcal P}hi(s)s \,\mathrm{d} s
=
\int_{\mu}^{\boldsymbol{\mathcal H}(Du)}\,\zeta^2(\tau)\tau^{2\alpha+1}\mathrm{d}\tau
\le
\tfrac{1}{2+2\alpha}\,\boldsymbol{\mathcal H}(Du)^{2+2\alpha}.
\mbox{e}nd{align}
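The middle identity in \eqref{Psi-upper-bound} is simply the substitution $\tau=\sqrt{\mu^2+s^2}$, for which $\tau\,\mathrm{d}\tau=s\,\mathrm{d}s$, $\tau=\mu$ at $s=0$ and $\tau=\boldsymbol{\mathcal H}(Du)$ at $s=|Du|$; the final inequality then follows by using $\zeta\le1$ and discarding the lower limit of integration.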
Since $\boldsymbol{\mathcal H}(0)\mbox{e}quiv \mu\le 1$ we deduce the lower bound
\begin{equation}
\label{Psi-lower-bound}
\boldsymbol {\mathcal P}si (|Du|)
\ge
\tfrac{1}{2+2\alpha}\,\boldsymbol{\mathcal H}(Du)^{2+2\alpha} -
\tfrac{1}{2+2\alpha}.
\mbox{e}nd{equation}
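In fact, since $\mu\le1$ and $\zeta\equiv1$ on $[1,\infty)$, for $\boldsymbol{\mathcal H}(Du)\ge1$ we have
\begin{equation*}
\int_{\mu}^{\boldsymbol{\mathcal H}(Du)}\zeta^2(\tau)\tau^{2\alpha+1}\,\mathrm{d}\tau
\ge
\int_{1}^{\boldsymbol{\mathcal H}(Du)}\tau^{2\alpha+1}\,\mathrm{d}\tau
=
\tfrac{1}{2+2\alpha}\big[\boldsymbol{\mathcal H}(Du)^{2+2\alpha}-1\big],
\end{equation*}
while for $\boldsymbol{\mathcal H}(Du)<1$ the right-hand side of \eqref{Psi-lower-bound} is non-positive and the estimate is trivial.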
Now, we start with estimating the second integral
on the right-hand side of \mbox{e}qref{eq:energy-est-sys2}. Since $\spt b (\cdot,t)\Subset\Omega$ for any $t\in [0,T]$ and
since $\phi\in C^\infty_0 (B_\varrho)$, we are allowed to integrate by parts (with respect to $x$) in the integral containing $b$ and obtain
\begin{align*}
&\sum_{i=1}^N\sum_{\alpha=1}^n\iint_{\Omega_T \cap Q_{\varrho } }
\chi\phi^2 \boldsymbol{\mathcal P}hi (|Du|)u^i_{x_\alpha} b^i_{x_\alpha}\,\mathrm{d}x\mathrm{d}t\\
&=
-\sum_{i=1}^N \sum_{\alpha=1}^n\iint_{\Omega_T \cap Q_{\varrho } }\chi b^i
\Big[\phi^2 \boldsymbol{\mathcal P}hi (|Du|) u^i_{x_\alpha}\Big]_{x_\alpha}\,\mathrm{d}x\mathrm{d}t\\
&=
-\sum_{i,j=1}^N\sum_{\alpha,\beta=1}^n\iint_{\Omega_T \cap Q_{\varrho } }
\chi\phi^2 b^i
\bigg[\boldsymbol{\mathcal P}hi (|Du|) \mathrm{d}elta_{\alpha\beta}\mathrm{d}elta^{ij} +\boldsymbol{\mathcal P}hi' (|Du|)
\frac{u^i_{x_\alpha}u^j_{x_\beta}}{|Du|}
\bigg]u^j_{x_\alpha x_\beta}\,\mathrm{d}x\mathrm{d}t\\
&\quad
-2\sum_{i=1}^N\sum_{\alpha=1}^n\iint_{\Omega_T \cap Q_{\varrho }}
\chi\phi\, b^i
\boldsymbol{\mathcal P}hi (|Du|)\phi_{x_\alpha} u^i_{x_\alpha}\,\mathrm{d}x\mathrm{d}t\\
&=:\,\mathbf I +\,\mathbf{II},
\mbox{e}nd{align*}
with the obvious meaning of $\mathbf I$ and $\mathbf{II}$. The first integral can be estimated as
\begin{align*}
\mathbf I
&\le
c(n,N)\iint_{\Omega_T\cap Q_\varrho}\chi\phi^2 |b|
\big[\boldsymbol{\mathcal P}hi(|Du|)+|Du|\boldsymbol{\mathcal P}hi'(|Du|)\big]|D^2u|\,
\mathrm{d}x\mathrm{d}t.
\mbox{e}nd{align*}
To estimate the term in brackets we note that $|Du|\le\boldsymbol{\mathcal H}(Du)$ and $\zeta\le 1$, that $|Du|\le\boldsymbol{\mathcal H}(Du)\le1$ whenever $\zeta'(\boldsymbol{\mathcal H}(Du))\neq0$, and finally that $\boldsymbol{\mathcal H}(Du)\ge\tfrac12$ whenever $\zeta(\boldsymbol{\mathcal H}(Du))\neq0$. Therefore, we obtain
\begin{align*}
&\boldsymbol{\mathcal P}hi(|Du|)+|Du|\boldsymbol{\mathcal P}hi'(|Du|) \\
&\qquad\le
\zeta(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H}(Du)^{2\alpha}
\big[(1+2\alpha)\zeta(\boldsymbol{\mathcal H}(Du)) +
2 \zeta'(\boldsymbol{\mathcal H}(Du))|Du|\big] \\
&\qquad\le
c\,(1+2\alpha)\zeta(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H}(Du)^{2\alpha} \\
&\qquad\le
c\,(1+2\alpha)\zeta(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H}(Du)^{2\alpha+\frac{p-2+(2-p)_+}2}.
\mbox{e}nd{align*}
Inserting this above yields
\begin{align*}
\mathbf{I}
&\le
c\,(1+2\alpha) \iint_{\Omega_T \cap Q_{\varrho }}
\chi\phi^2
|b|\zeta(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H}(Du)^{2\alpha+\frac{p-2+(2-p)_+}2}
|D^2u|\,\mathrm{d}x\mathrm{d}t\\\hat uidehat nonumber
&\le
\frac{m}{2C} \iint_{\Omega_T \cap Q_{\varrho } } \chi\phi^2
\zeta^2(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H}(Du)^{p+2\alpha -2}|D^2u|^2\mathrm{d}x\mathrm{d}t\\\hat uidehat nonumber
&\phantom{\le\,}
+\frac{c\,(1+2\alpha)^2}{m}\iint_{\Omega_T \cap Q_{\varrho } }
\chi\phi^2
\boldsymbol{\mathcal H}(Du)^{2\alpha+(2-p)_+}|b|^2\mathrm{d}x\mathrm{d}t\\
&=:
\frac{m}{2C}\mathbf{I}_1+\frac{c\,(1+2\alpha)^2}{m}\mathbf{I}_2,
\mbox{e}nd{align*}
for a constant $c=c(n,N,p)$.
In the last estimate we used Young's inequality.
The constant $C$ is from the energy estimate \mbox{e}qref{eq:energy-est-sys2} and depends only on $p$.
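For the reader's convenience, the exponent bookkeeping behind this application of Young's inequality is the splitting
\begin{equation*}
2\alpha+\tfrac{p-2+(2-p)_+}{2}
=
\tfrac{p+2\alpha-2}{2}+\tfrac{2\alpha+(2-p)_+}{2},
\end{equation*}
so that $XY\le\epsilon X^2+\frac{1}{4\epsilon}Y^2$, applied with $X=\zeta(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H}(Du)^{\frac{p+2\alpha-2}{2}}|D^2u|$, $Y=\boldsymbol{\mathcal H}(Du)^{\frac{2\alpha+(2-p)_+}{2}}|b|$, and $\epsilon$ proportional to $\frac{m}{C(1+2\alpha)}$, produces exactly the integrands appearing in $\mathbf{I}_1$ and $\mathbf{I}_2$.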
The second term is bounded by
\begin{align*}
\mathbf{II}
&\le
2\iint_{\Omega_T \cap Q_{\varrho } }
\chi\phi\, |b|
\boldsymbol{\mathcal P}hi (|Du|)|D\phi| |Du|\,\mathrm{d}x\mathrm{d}t\\\hat uidehat nonumber
&\le
2\iint_{\Omega_T \cap Q_{\varrho } }
\chi\phi\, |b|\zeta^2(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H} (Du)^{2\alpha+1}
|D\phi| \,\mathrm{d}x\mathrm{d}t\\\hat uidehat nonumber
&\le
c\iint_{\Omega_T \cap Q_{\varrho } }\chi\phi\, |b|\boldsymbol{\mathcal H}(Du)^{2\alpha+\frac{p+(2-p)_+}2}
|D\phi| \,\mathrm{d}x\mathrm{d}t\\
&\le
c\,M\iint_{\Omega_T \cap Q_{\varrho } }\chi \boldsymbol{\mathcal H} (Du)^{p+2\alpha}|D\phi|^2\,\mathrm{d}x\mathrm{d}t
+\tfrac{1}{M}\mathbf{I}_2,
\mbox{e}nd{align*}
where, in the second-to-last step, we again used the fact that $\boldsymbol{\mathcal H}(Du)\ge\frac12$ on the support of $\zeta(\boldsymbol{\mathcal H}(Du))$, and in the last step
we applied Young's inequality.
Using the above estimates for $\mathbf{I}$ and $\mathbf{II}$ in~\mbox{e}qref{eq:energy-est-sys2} and re-absorbing the term $\frac{m}{2C}\mathbf{I}_1$
into the left-hand side, we arrive at
\begin{align}\label{pre-W22-est}
\hat uidehat nonumber
\iint_{\Omega_T\cap Q_\varrho} &
\chi\phi^2
\Big[\partial_t\big[\boldsymbol{\mathcal P}si(|Du|)\big] +
\tfrac{m}{2C}
\zeta^2(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H} (Du)^{p+2\alpha-2} |D^2u|^2\Big]
\,\mathrm{d}x\mathrm{d}t\\
&\qquad\qquad\le
c\iint_{\Omega_T\cap Q_\varrho}\chi \boldsymbol{\mathcal H} (Du)^{p+2\alpha} |D\phi|^2\,\mathrm{d}x\mathrm{d}t +
c(1+2\alpha)^2\mathbf{I}_2,
\mbox{e}nd{align}
where $c=c(m,M,n,N,p)$.
In \mbox{e}qref{pre-W22-est} we now take the cut-off in time in the form of a product of two functions $\chi$ and $\hat uidetilde \chi$. We choose
the first function $\chi\in W^{1,\infty}\big( [t_o-\varrho^2, t_o]\big) $ to satisfy $0\le\chi\le1$, $\chi(t_o-\varrho^2)=0$, and
$\partial_t\chi\ge 0$, while the second one is defined by
\begin{equation*}
\hat uidetilde\chi (t)
:=
\left\{
\begin{array}{cl}
1,& t\in [t_o-\varrho^2, \tau],\\[4pt]
1-\frac{t-\tau}{\mathrm{d}elta},& t\in (\tau,\tau+\mathrm{d}elta),\\[4pt]
0,& t\in [\tau+\mathrm{d}elta ,t_o],
\mbox{e}nd{array}
\right.
\mbox{e}nd{equation*}
where $\mathrm{d}elta>0$ and $t_o-\varrho^2<\tau<\tau +\mathrm{d}elta<t_o$.
With this specification of $\chi$
we consider the first integral on the left-hand side. We perform an integration by parts with respect to time and obtain (observe that no boundary terms occur due to the choice of $\chi$ and $\hat uidetilde\chi$)
\begin{align*}
&\iint_{\Omega_T\cap Q_\varrho}
\phi^2\chi\hat uidetilde\chi
\partial_t\big[\boldsymbol{\mathcal P}si(|Du|)\big]\,\mathrm{d}x\mathrm{d}t\\
&\qquad=
-
\iint_{\Omega_T\cap Q_\varrho}
\phi^2\partial_t\big[\chi(t)\hat uidetilde \chi (t)\big]
\boldsymbol{\mathcal P}si(|Du|) \,\mathrm{d}x\mathrm{d}t\\
&\qquad=
-
\iint_{\Omega_T\cap Q_\varrho}
\phi^2 \big[ \hat uidetilde \chi\partial_t\chi+
\chi\partial_t\hat uidetilde \chi\big]
\boldsymbol{\mathcal P}si(|Du|)\,\mathrm{d}x\mathrm{d}t\\
&\qquad=
-\iint_{\Omega_T\cap Q_\varrho}
\hat uidetilde\chi\phi^2\partial_t\chi
\boldsymbol{\mathcal P}si(|Du|) \,\mathrm{d}x\mathrm{d}t +
\frac1{\mathrm{d}elta}
\iint_{\Omega_T\cap B_\varrho\times (\tau,\tau+\mathrm{d}elta)}
\chi\phi^2 \boldsymbol{\mathcal P}si(|Du|)\,\mathrm{d}x\mathrm{d}t.
\mbox{e}nd{align*}
We insert this into \mbox{e}qref{pre-W22-est} and pass to the limit
$\mathrm{d}elta\mathrm{d}ownarrow 0$. For $\tau\in (t_o-\varrho^2,t_o)$ we obtain
\begin{align*}
\hat uidehat nonumber
\int_{\Omega\cap B_\varrho\times\{ \tau\}}&
\chi\phi^2\boldsymbol{\mathcal P}si(|Du|)\,\mathrm{d}x\\
&\phantom{\le\,}+
\iint_{\Omega_T\cap B_\varrho\times(t_o-\varrho^2,\tau)} \chi\phi^2
\zeta^2(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H} (Du)^{p+2\alpha-2}|D^2u|^2
\,\mathrm{d}x\mathrm{d}t\hat uidehat nonumber\\
&\le
c\bigg[ \iint_{\Omega_T\cap Q_\varrho}
\chi\boldsymbol{\mathcal H} (Du)^{p+2\alpha}|D\phi|^2\,\mathrm{d}x\mathrm{d}t+
(1+2\alpha)^2\mathbf{I}_2+\mathbf I_3\bigg],
\mbox{e}nd{align*}
where
$\mathbf I_3$ is defined by
\begin{equation*}
\mathbf I_3
:=
\frac{1}{2(1+\alpha)}
\iint_{\Omega_T\cap Q_\varrho}\phi^2
\partial_t\chi
\boldsymbol{\mathcal H} (Du)^{2+2\alpha}\,\mathrm{d}x\mathrm{d}t.
\mbox{e}nd{equation*}
In the estimate leading to $\mathbf I_3$ we used \mbox{e}qref{Psi-upper-bound} and the fact that
$\partial_t\chi\ge 0$. Note that $\mathbf I_3$ is non-negative. Observe also that the right-hand side of the preceding inequality
is independent of $\tau$.
Therefore we can pass to the limit $\tau\uparrow t_o$ in the second integral on the left-hand side, while in the first integral we can take the supremum over $\tau\in (t_o-\varrho^2,t_o)$. This implies
\begin{align*}
\sup_{t_o-\varrho^2<\tau<t_o}&
\int_{\Omega\cap B_\varrho\times\{ \tau\}}
\chi\phi^2\boldsymbol{\mathcal P}si(|Du|)\,\mathrm{d}x\hat uidehat nonumber\\
&\phantom{\le\,} +
\iint_{\Omega_T\cap Q_\varrho} \chi\phi^2
\zeta^2(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H} (Du)^{p+2\alpha-2} |D^2u|^2
\,\mathrm{d}x\mathrm{d}t\hat uidehat nonumber\\
&\le
c\bigg[ \iint_{\Omega_T\cap Q_\varrho}\chi
\boldsymbol{\mathcal H} (Du)^{p+2\alpha}|D\phi|^2\,\mathrm{d}x\mathrm{d}t+
(1+2\alpha)^2\mathbf{I}_2+\mathbf I_3\bigg],
\mbox{e}nd{align*}
with a constant $c=c(n,N,m,M,p)$.
In order to bound the sup-term from below, we
use \mbox{e}qref{Psi-lower-bound} and multiply the resulting inequality by $(p+2\alpha)$, from which we deduce
\begin{align*}
\sup_{t_o-\varrho^2<\tau<t_o}&
\int_{\Omega\cap B_\varrho\times\{ \tau\}}
\chi\phi^2 \boldsymbol{\mathcal H} (Du)^{2+2\alpha} \,\mathrm{d}x\\
&\phantom{\le\,}+
(p+2\alpha)\iint_{\Omega_T\cap Q_\varrho} \chi\phi^2
\zeta^2(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H} (Du)^{p+2\alpha-2} |D^2u|^2
\,\mathrm{d}x\mathrm{d}t\\
&\le
c\,(p{+}2\alpha)
\iint_{\Omega_T\cap Q_\varrho}\!\! \Big[ \|D\phi\|_{L^\infty}^2\boldsymbol{\mathcal H} (Du)^{p+2\alpha}
+\|\partial_t\chi\|_{L^\infty}\boldsymbol{\mathcal H} (Du)^{2+2\alpha}\Big]\mathrm{d}x\mathrm{d}t\\
&\phantom{\le\,} +
c\,(p+2\alpha)^3\iint_{\Omega_T \cap Q_{\varrho } }\chi\phi^2 \boldsymbol{\mathcal H}
(Du)^{2\alpha+(2-p)_+}|b|^2\,\mathrm{d}x\mathrm{d}t +
|\Omega\cap B_\varrho|,
\mbox{e}nd{align*}
for a constant $c$ depending on $n,N,m,M,$ and $p$.
After taking means, the preceding estimate takes
the form
\begin{align}\label{pre-W22-est-2}\hat uidehat nonumber
\sup_{t_o-\varrho^2<\tau<t_o}&
- \mskip-19,5mu \int_{\Omega\cap B_\varrho\times\{\tau\}}
\chi\phi^2 \boldsymbol{\mathcal H} (Du)^{2+2\alpha} \,\mathrm{d}x\\\hat uidehat nonumber
&\phantom{\le\,}+
(p+2\alpha)\varrho^2\Xiint{-\!-}_{\Omega_T\cap Q_\varrho} \chi\phi^2
\zeta^2(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H} (Du)^{p+2\alpha-2} |D^2u|^2
\,\mathrm{d}x\mathrm{d}t\\
&\le
c\,(p+2\alpha)\mathbf{R}_1
+
c\,(p+2\alpha)^3 \mathbf{R}_2 + 1
=:
\mathbf{R},
\mbox{e}nd{align}
for a constant $c=c(n,N,m,M,p)\ge 1$ and with the abbreviations
\begin{align*}
\mathbf{R}_1
&:=
\varrho^2\Xiint{-\!-}_{\Omega_T\cap Q_\varrho}\Big[
\|D\phi\|_{L^\infty}^2\boldsymbol{\mathcal H}
(Du)^{p+2\alpha}+\|\partial_t\chi\|_{L^\infty} \boldsymbol{\mathcal H}
(Du)^{2+2\alpha}\Big]\mathrm{d}x\mathrm{d}t
\mbox{e}nd{align*}
and
\begin{align*}
\mathbf{R}_2
&:=
\varrho^2\Xiint{-\!-}_{\Omega_T \cap Q_{\varrho }}
\chi\phi^2\boldsymbol{\mathcal H} (Du)^{2\alpha+(2-p)_+}|b|^2 \,\mathrm{d}x\mathrm{d}t.
\mbox{e}nd{align*}
Observe that we kept the cut-off function $\chi\phi^2$ in the integrand
of the last integral. The reason for that will
become clear later.
The next step is to perform an interpolation argument of
Gagliardo-Nirenberg type.
For the parameter $\mathrm{d}elta:=\frac{2(1+\alpha)}{n}>0$, we compute
\begin{align}\label{est:D^2u}\hat uidehat nonumber
\Big|D\Big[\phi^{1+\frac2n}\, &\zeta^2(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H}(Du)^{\frac{p+ 2\alpha+2\mathrm{d}elta }{2}}\Big]\Big|\\\hat uidehat nonumber
&\le
\tfrac{p+2\alpha+2\mathrm{d}elta}{2}
\phi^{1+\frac2n}\zeta^2(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H} (Du)^\frac{p+ 2\alpha-2+2\mathrm{d}elta}{2}
| D^2u |\\\hat uidehat nonumber
&\phantom{\le\,}
+2\phi^{1+\frac2n}\zeta(\boldsymbol{\mathcal H}(Du))\zeta'(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H}(Du)^{\frac{p+ 2\alpha+2\mathrm{d}elta }{2}}|D^2u|
\\\hat uidehat nonumber
&\phantom{\le\,}
+\tfrac{n+2}{n}\phi^{\frac2n}|D\phi|\zeta^2(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H}(Du)^{\frac{p+ 2\alpha+2\mathrm{d}elta }{2}}\\\hat uidehat nonumber
&\le
c\,(p+2\alpha)\phi^{1+\frac2n}
\zeta(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H} (Du)^\frac{p+ 2\alpha-2+2\mathrm{d}elta}{2}
| D^2u |\\
&\phantom{\le\,}
+c\,\|D\phi\|_{L^\infty}\phi^{\frac2n}\boldsymbol{\mathcal H}(Du)^{\frac{p+ 2\alpha+2\mathrm{d}elta }{2}},
\mbox{e}nd{align}
for a constant $c=c(n)$.
In the last step, we used the fact that $\boldsymbol{\mathcal H}(Du)\le 1$
on the support of $\zeta'(\boldsymbol{\mathcal H}(Du))$, as well as the bounds $\zeta\le1$
and $\zeta'\le3$.
On a fixed time slice we apply Sobolev's embedding
from Lemma~\ref{lem:sobolev}, with the integrability exponent $\frac{2n}{n+2}$ (in place of the exponent $p$ appearing there),
and then inequality~\mbox{e}qref{est:D^2u}, with the result
\begin{align*}
&- \mskip-19,5mu \int_{\Omega\cap B_\varrho}\phi^{2+\frac{4}{n}}\zeta^4(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H}(Du)^{p+2\alpha+2\mathrm{d}elta}\,\mathrm{d}x\\
&\quad
\le
2C_{\mathrm{Sob}}^2\varrho^2
\bigg[
- \mskip-19,5mu \int_{\Omega\cap B_\varrho}\big|D\big[\phi^{1+\frac{2}{n}}\zeta^2(\boldsymbol{\mathcal H}(Du))
\boldsymbol{\mathcal H}(Du)^{\frac{p+2\alpha+2\mathrm{d}elta}{2}}\big]\big|^{\frac{2n}{n+2}}\,\mathrm{d}x
\bigg]^\frac{n+2}{n}\\
&\quad\phantom{\le\,}
+2
\bigg[
- \mskip-19,5mu \int_{\Omega\cap B_\varrho}\big|\phi^{1+\frac{2}{n}}\zeta^2(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H}(Du)^{\frac{p+2\alpha+2\mathrm{d}elta}{2}}\big|^{\frac{2n}{n+2}}\,\mathrm{d}x
\bigg]^\frac{n+2}{n}
\\
&\quad\le
c\,C_{\mathrm{Sob}}^2\varrho^2(p+2\alpha)^2
\bigg[
- \mskip-19,5mu \int_{\Omega\cap B_\varrho}\Big[\phi^{1+\frac2n}\zeta(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H} (Du)^{\frac{p+ 2\alpha-2}{2}+\mathrm{d}elta}
| D^2u |\Big]^{\frac{2n}{n+2}}\,\mathrm{d}x
\bigg]^\frac{n+2}{n}\\
&\quad
\phantom{\le\,}
+c\,C_{\mathrm{Sob}}^2 \varrho^2\|D\phi\|_{L^\infty}^2
\bigg[
- \mskip-19,5mu \int_{\Omega\cap B_\varrho}\big|\phi^{\frac2n}\boldsymbol{\mathcal H}(Du)^{\frac{p+2\alpha}{2}+\mathrm{d}elta}\big|^{\frac{2n}{n+2}}\,\mathrm{d}x
\bigg]^\frac{n+2}{n}.
\mbox{e}nd{align*}
In the last line, we used the estimate
$\|\phi\|_{L^\infty}\le \varrho\|D\phi\|_{L^\infty}$, which holds true
since $\phi$ has compact support in $B_\varrho$, and we assumed that
$C_{\mathrm{Sob}}\ge 1$.
According to Lemma~\ref{lem:sobolev}, we can choose the Sobolev constant
$C_{\mathrm{Sob}}$ only depending on $n$ and $\Theta$.
Next, we estimate both integrals on the right-hand side by
means of H\"older's inequality with exponents $\frac{n+2}{n}$ and
$\frac{n+2}2$, which leads to
\begin{align*}
- \mskip-19,5mu \int_{\Omega\cap
B_\varrho}&\phi^{2+\frac{4}{n}}\zeta^4(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H}(Du)^{p+2\alpha+2\mathrm{d}elta}\,\mathrm{d}x
\le
c\,C_{\rm Sob}^2\varrho^2\,
\mathbf{II}_1\cdot\mathbf{II}_2^\frac{2}{n},
\mbox{e}nd{align*}
where
\begin{align*}
\mathbf{II}_1
&:=
(p+2\alpha)^2
- \mskip-19,5mu \int_{\Omega\cap B_\varrho}\phi^2\zeta^2(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H} (Du)^{p+2\alpha-2}|D^2u|^2\,\mathrm{d}x\\
&\phantom{:=\,}
+
\|D\phi\|_{L^\infty}^2
- \mskip-19,5mu \int_{\Omega\cap B_\varrho}\boldsymbol{\mathcal H} (Du)^{p+2\alpha}\,\mathrm{d}x
\mbox{e}nd{align*}
and
\begin{equation*}
\mathbf{II}_2:=- \mskip-19,5mu \int_{\Omega\cap B_\varrho}\phi^2\boldsymbol{\mathcal H} (Du)^{n\mathrm{d}elta}\mathrm{d}x.
\mbox{e}nd{equation*}
At this point, the reason for our choice of $\mathrm{d}elta$ becomes clear.
In fact, we have chosen $\mathrm{d}elta$ in such a way that
$n\mathrm{d}elta=2+2\alpha$ coincides with the integrability exponent in the sup-term of the energy inequality \mbox{e}qref{pre-W22-est-2}. We multiply the preceding inequality by
$\chi (t)^{1+\frac{2}{n}}$ and take the mean with respect to $t$ over the interval $(t_o-\varrho^2,t_o)$. In this way we obtain
\begin{align}\label{hi-int-1}
\Xiint{-\!-}_{\Omega_T\cap Q_\varrho}&
[\chi\phi^2]^{1+\frac{2}{n}}
\zeta^4(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H}(Du)^{p+2\alpha+2\mathrm{d}elta} \,\mathrm{d}x\mathrm{d}t
\hat uidehat nonumber\\
&\le
c\,C_{\rm Sob}^2\varrho^2\,
- \mskip-19,5mu \int_{(t_o-\varrho^2,t_o)} \chi(t)\mathbf{II}_1\cdot \bigg[- \mskip-19,5mu \int_{\Omega\cap B_\varrho}\chi\phi^2
\boldsymbol{\mathcal H} (Du)^{2+2\alpha}\mathrm{d}x\bigg]^\frac{2}{n}\mathrm{d}t
\hat uidehat nonumber\\
&\le
c\,C_{\rm Sob}^2\varrho^2\,
\mathbf R^\frac{2}{n} - \mskip-19,5mu \int_{(t_o-\varrho^2,t_o)} \chi(t)\mathbf{II}_1\mathrm{d}t.
\mbox{e}nd{align}
In the last line we used the energy inequality
\mbox{e}qref{pre-W22-est-2}. Recall that $\mathbf R$ denotes the right-hand side
of \mbox{e}qref{pre-W22-est-2}. For the estimate of the last integral, we
again apply \mbox{e}qref{pre-W22-est-2} and use the definition of $\mathbf R$, with the result
\begin{align*}
- \mskip-19,5mu \int_{(t_o-\varrho^2,t_o)} \chi(t)\mathbf{II}_1\mathrm{d}t
&\le
(p+2\alpha)^2
\Xiint{-\!-}_{\Omega_T\cap Q_\varrho}\chi\phi^2\zeta^2(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H} (Du)^{p+2\alpha-2}|D^2u|^2\,\mathrm{d}x\mathrm{d}t\\
&\quad +
\|D\phi\|_{L^\infty}^2\Xiint{-\!-}_{\Omega_T\cap Q_\varrho}\boldsymbol{\mathcal H} (Du)^{p+2\alpha}
\,\mathrm{d}x\mathrm{d}t\\
&\le
\frac{c\,(p+2\alpha)}{\varrho^2}\,\mathbf R,
\mbox{e}nd{align*}
where the constant $c$ depends on $n,N,m,M,p$, and $\Theta$.
Joining this with \mbox{e}qref{hi-int-1} yields
\begin{align}\label{hi-int-2}\hat uidehat nonumber
\bigg(\Xiint{-\!-}_{\Omega_T\cap Q_\varrho}&
[\chi\phi^2]^{1+\frac{2}{n}}\boldsymbol{\mathcal H}(Du)^{p+2\alpha+2\mathrm{d}elta}\mathrm{d}x\mathrm{d}t\bigg)^{\frac
n{n+2}}\\\hat uidehat nonumber
&\le
1+\bigg(\Xiint{-\!-}_{\Omega_T\cap Q_\varrho}
[\chi\phi^2]^{1+\frac{2}{n}}
\zeta^4(\boldsymbol{\mathcal H}(Du))\boldsymbol{\mathcal H}(Du)^{p+2\alpha+2\mathrm{d}elta}\mathrm{d}x\mathrm{d}t\bigg)^{\frac
n{n+2}}\\
&\le c\,(p+2\alpha)\mathbf R.
\mbox{e}nd{align}
Our next goal is to estimate $\mathbf{R}_2$. In
view of the integrability assumption \mbox{e}qref{assumption:b}, i.e.~$b\in L^\sigma(\Omega_T,\mathbb{R}^N)$ with
$\sigma>n+2$, we can use H\"older's inequality to estimate
\begin{align}\label{b-term}
\mathbf{R}_2
&\le
\Theta^\frac{2}{\sigma}[b]_{\sigma,\Omega_T\cap Q_\varrho}^2
\bigg(\Xiint{-\!-}_{\Omega_T \cap Q_\varrho}\big[\chi\phi^2\boldsymbol{\mathcal H}
(Du)^{2\alpha+(2-p)_+}\big]^{\frac{\sigma}{\sigma-2}}
\,\mathrm{d}x\mathrm{d}t\bigg)^{\frac{\sigma-2}\sigma}
\mbox{e}nd{align}
where we defined
\begin{equation*}
[b]_{\sigma,\Omega_T\cap Q_\varrho}:=
\bigg[ \varrho^{\sigma-n-2}\iint_{\Omega_T \cap Q_{\varrho } }|b|^\sigma\mathrm{d}x\mathrm{d}t\bigg]^\frac{1}{\sigma}.
\mbox{e}nd{equation*}
Here we used the fact that $\varrho^{n+2}/|\Omega_T\cap Q_\varrho|$ is bounded by
$\Theta$.
In order to estimate the integral on the right-hand side of \mbox{e}qref{b-term} further, we
interpolate the $L^{\frac\sigma{\sigma-2}}$-norm between the
$L^1$-norm and the $L^{\frac{n+2}{n}}$-norm, which is possible since
$\sigma>n+2$.
For every $\kappa>0$, this yields the bound
\begin{align}\label{interpolation}\hat uidehat nonumber
\mathbf{R}_2
&\le
\Theta^\frac{2}{\sigma}[b]_{\sigma,\Omega_T\cap Q_\varrho}^2
\bigg[
\kappa\bigg(\Xiint{-\!-}_{\Omega_T \cap Q_\varrho}
\big[\chi\phi^2\boldsymbol{\mathcal H}(Du)^{2\alpha+(2-p)_+}\big]^{\frac{n+2}{n}}
\,\mathrm{d}x\mathrm{d}t\bigg)^{\frac{n}{n+2}}\\\hat uidehat nonumber
&\qquad\qquad\qquad\qquad +
\kappa^{-\frac{n+2}{\sigma-n-2}}
\Xiint{-\!-}_{\Omega_T \cap Q_\varrho}
\chi\phi^2\boldsymbol{\mathcal H}(Du)^{2\alpha+(2-p)_+}\,\mathrm{d}x\mathrm{d}t
\bigg]\\\hat uidehat nonumber
&\le
\Theta^\frac{2}{\sigma}[b]_{\sigma,\Omega_T\cap Q_\varrho}^2
\bigg[
\kappa\bigg(\Xiint{-\!-}_{\Omega_T \cap Q_\varrho}
[\chi\phi^2]^{\frac{n+2}{n}}
\big[\boldsymbol{\mathcal H}(Du)^{p+2\alpha+2\mathrm{d}elta}+1\big]\,\mathrm{d}x\mathrm{d}t\bigg)^{\frac{n}{n+2}}\\
&\qquad\qquad\qquad\qquad +
\kappa^{-\frac{n+2}{\sigma-n-2}}
\Xiint{-\!-}_{\Omega_T \cap Q_\varrho}\big[\boldsymbol{\mathcal H}(Du)^{p+2\alpha}+1\big]\,\mathrm{d}x\mathrm{d}t
\bigg].
\mbox{e}nd{align}
In the last line, we used the fact that
\begin{align*}
\big[2\alpha+(2-p)_+\big]\tfrac{n+2}n
\le
p+2\alpha+2\mathrm{d}elta
\qquad\mbox{and}\qquad
2\alpha+(2-p)_+
\le p+2\alpha.
\mbox{e}nd{align*}
Both inequalities hold true for any $\alpha\ge0$ and $p\ge1$.
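For completeness, we sketch the elementary verification. Recalling $\delta=\frac{2(1+\alpha)}{n}$, the terms involving $\alpha$ cancel and the first inequality reduces to
\begin{equation*}
(n+2)(2-p)_+\le np+4,
\end{equation*}
which is trivial for $p\ge2$ and, for $p<2$, is equivalent to $p\ge\frac{n}{n+1}$ and hence holds for $p\ge1$. The second inequality reduces to $(2-p)_+\le p$, which again holds for $p\ge1$.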
Joining estimates~\mbox{e}qref{hi-int-2}
and~\mbox{e}qref{interpolation}, we arrive at
\begin{align*}
\Big[1 -
c\,(p&+2\alpha)^3\Theta^\frac{2}{\sigma}
[b]_{\sigma,\Omega_T\cap Q_\varrho}^2\kappa\Big]
\bigg[\Xiint{-\!-}_{\Omega_T\cap Q_\varrho}
[\chi\phi^2]^{1+\frac{2}{n}}
\boldsymbol{\mathcal H}(Du)^{p+2\alpha+2\mathrm{d}elta}\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac{n}{n+2}}\\\hat uidehat nonumber
&\le
c\,(p+2\alpha) \mathbf{R}_1 +
c\,\Theta^\frac{2}{\sigma}(p+2\alpha)^3
[b]_{\sigma,\Omega_T\cap Q_\varrho}^{2}\kappa + 1\\
&\phantom{\le\,} +
c\,\Theta^\frac{2}{\sigma}(p+2\alpha)^{3}
[b]_{\sigma,\Omega_T\cap Q_\varrho}^{2}\kappa^{-\frac{n+2}{\sigma-n-2}}
\Xiint{-\!-}_{\Omega_T \cap Q_\varrho}\big[\boldsymbol{\mathcal H}(Du)^{p+2\alpha}+1\big]\,\mathrm{d}x\mathrm{d}t.
\mbox{e}nd{align*}
At this stage, we choose the parameter $\kappa>0$ so small that
\begin{equation*}
c\,(p+2\alpha)^3 \Theta^\frac{2}{\sigma}[b]_{\sigma,\Omega_T\cap
Q_\varrho}^{2}
\kappa =\tfrac12.
\mbox{e}nd{equation*}
This implies in particular that the second term on the right-hand side equals $\frac12 $. On the other hand the coefficient in front of the last term on the right equals
\begin{align*}
\tfrac12\kappa^{-1-\frac{n+2}{\sigma-n-2}}
&=
\tfrac12 \kappa^{-\frac{\sigma}{\sigma-n-2}}
=
\Big[c\,(p+2\alpha)^3 \Theta^\frac{2}{\sigma}[b]_{\sigma,\Omega_T\cap Q_\varrho}^{2}\Big]^\frac{\sigma}{\sigma-n-2}.
\mbox{e}nd{align*}
This turns the preceding estimate into
\begin{align*}
&\bigg[\Xiint{-\!-}_{\Omega_T\cap Q_\varrho}
[\chi\phi^2]^{1+\frac{2}{n}}\boldsymbol{\mathcal H}(Du)^{p+2\alpha+2\mathrm{d}elta}\mathrm{d}x\mathrm{d}t\bigg]^{\frac{n}{n+2}}\\\hat uidehat nonumber
&\qquad\le
c\,(p+2\alpha)^{\frac{3\sigma}{\sigma-n-2}}
\bigg[1+\mathbf{R}_1+ [b]_{\sigma,\Omega_T\cap Q_\varrho}^{\frac{2\sigma}{\sigma-n-2}}
\Xiint{-\!-}_{\Omega_T \cap Q_\varrho}
\big[\boldsymbol{\mathcal H}(Du)^{p+2\alpha}+1\big]\mathrm{d}x\mathrm{d}t\bigg]
\mbox{e}nd{align*}
for a constant $c$ depending only on $n,N,m,M,p,\sigma$, and $\Theta$.
By an application of Young's inequality, we can rewrite the above inequality and obtain the {\it reverse
H\"older type estimate}
\begin{align}\label{reverse:start}\hat uidehat nonumber
\bigg[\Xiint{-\!-}_{\Omega_T\cap Q_\varrho}
&
[\chi\phi^2]^{1+\frac{2}{n}}\boldsymbol{\mathcal H}(Du)^{p+2\alpha+2\mathrm{d}elta}\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac{n}{n+2}}\\
&\le
c\,(p+2\alpha)^{\frac{3\sigma}{\sigma-n-2}}\boldsymbol\gamma_o\,
\Xiint{-\!-}_{\Omega_T\cap Q_\varrho}\big[\boldsymbol{\mathcal H}(Du)^{q+2\alpha}
+1\big]\,\mathrm{d}x\mathrm{d}t,
\mbox{e}nd{align}
where we defined $q:=\max\{p,2\}$ and moreover abbreviated
\begin{equation*}
\boldsymbol\gamma_o
:=
1+\varrho^2\|D\phi\|_{L^\infty}^2
+\varrho^2\|\partial_t\chi\|_{L^\infty}+[b]_{\sigma,\Omega_T\cap
Q_\varrho}^{\frac{2\sigma}{\sigma-n-2}}.
\mbox{e}nd{equation*}
For the constant $c$ above we have the dependencies $c(n,N,m,M,p,\sigma,\Theta)$.
For the Moser iteration scheme we need to compare the exponents on both sides of \mbox{e}qref{reverse:start}.
We have
\begin{equation*}
p+2\alpha+2\mathrm{d}elta
=
q+2\alpha+\tfrac4n(1+\alpha)-(q-p)
>
q+2\alpha,
\mbox{e}nd{equation*}
since $p>\frac{2n}{n+2}>2-\frac4n$.
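Indeed, the last inequality is immediate from
\begin{equation*}
\frac{2n}{n+2}-\Big(2-\frac4n\Big)
=
\frac{2n^2-(2n-4)(n+2)}{n(n+2)}
=
\frac{8}{n(n+2)}
>0.
\end{equation*}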
\subsection{The iteration scheme}
We fix radii $r,s$ with $\frac\varrho2\le r<s\le \varrho$ and define
\begin{equation*}
\varrho_k:=r+\tfrac1{2^k}(s-r)\quad\mbox{and}\quad Q_k:= Q_{\varrho_k}(x_o,t_o)
\mbox{e}nd{equation*}
for $k\in\mathbb{N}_0$.
We choose cut-off functions $\phi_k\in C^\infty_0(B_{\varrho_k}(x_o), [0,1])$ such that $\phi_k\mbox{e}quiv 1$ on
$B_{\varrho_{k+1}}(x_o)$ and $|D\phi_k|\le\frac{2^{k+2}}{s-r}$ and
$\chi_k\in W^{1,\infty}((t_o-\varrho_k^2,t_o),[0,1])$ such that $\chi_k(t_o-\varrho_k^2)=0$,
$\chi_k\mbox{e}quiv 1$ on $(t_o-\varrho_{k+1}^2, t_o)$, and $0\le\partial_t\chi_k\le \frac{2^{2(k+2)}}{(s-r)^2}$.
With these specifications inequality \mbox{e}qref{reverse:start} yields
\begin{align}\label{reverse:start-1}\hat uidehat nonumber
\bigg[\frac{1}{|\Omega_T\cap Q_{k-1}|}&
\iint_{\Omega_T\cap Q_{k}}
\boldsymbol{\mathcal H}(Du)^{q+2\alpha(1+\frac{2}{n}) -(q-p)+\frac{4}{n}}\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac{n}{n+2}}\\
&\le
\frac{C_o4^k\varrho^2}{(s-r)^2}\,(p+2\alpha)^{\frac{3\sigma}{\sigma-n-2}}
\Xiint{-\!-}_{\Omega_T\cap Q_{k-1}}\big[\boldsymbol{\mathcal H}(Du)^{q+2\alpha}
+1\big]\,\mathrm{d}x\mathrm{d}t,
\mbox{e}nd{align}
for a constant $C_o$ of the type
\begin{equation}\label{def:C_o}
C_o:=C\Big( 1+[b]_{\sigma,\Omega_T\cap
Q_\varrho}^{\frac{2\sigma}{\sigma-n-2}}\Big).
\mbox{e}nd{equation}
Here $C$ denotes a universal constant depending on $n,N,m,M,p,\sigma$, and $\Theta$.
To bound the left-hand side of \mbox{e}qref{reverse:start-1} we use the fact
\begin{equation*}
\frac{|\Omega_T \cap Q_{k}|}{|\Omega_T \cap Q_{k-1}|}
\ge
\frac{|\Omega_T\cap Q_{\varrho/2}|}{|Q_\varrho|}
\ge
\frac{|\Omega\cap B_{\varrho/2}|}{4|B_1|\varrho^n}
\ge \frac{1}{c(n)\Theta}.
\mbox{e}nd{equation*}
We use this in \mbox{e}qref{reverse:start-1} and obtain
\begin{align}\label{reverse:start-2}\hat uidehat nonumber
\bigg[
\Xiint{-\!-}_{\Omega_T\cap Q_{k}}&
\boldsymbol{\mathcal H}(Du)^{q+2\alpha(1+\frac{2}{n}) -(q-p)+\frac{4}{n}}\mathrm{d}x\mathrm{d}t\bigg]^{\frac{n}{n+2}}\\
&\le
\frac{C_o4^k\varrho^2}{(s-r)^2}(p+2\alpha)^{\frac{3\sigma}{\sigma-n-2}}
\Xiint{-\!-}_{\Omega_T\cap Q_{k-1}}\big[\boldsymbol{\mathcal H}(Du)^{q+2\alpha}
+1\big]\mathrm{d}x\mathrm{d}t,
\mbox{e}nd{align}
for some constant $C_o$ with the same structure as the one in~\mbox{e}qref{def:C_o}. We now define
recursively a sequence $(\beta_k)_{k\in\mathbb{N}_0}$ by $\beta_0:=0$ and
\begin{equation*}
2\beta_k:=2\beta_{k-1}\big(1+\tfrac2n\big)-(q-p)+\tfrac4n.
\mbox{e}nd{equation*}
Induction leads to
\begin{equation}\label{formula-beta-k}
\beta_k= \frac{4-n(q-p)}{4}
\Big[\Big( 1+\frac2n\Big)^k-1\Big].
\mbox{e}nd{equation}
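To verify \eqref{formula-beta-k}, write $\theta:=1+\frac2n$, so that the claimed formula reads $\beta_k=\frac{4-n(q-p)}{4}(\theta^k-1)$; clearly $\beta_0=0$, and the recursion is satisfied since
\begin{align*}
2\beta_{k-1}\theta-(q-p)+\tfrac4n
&=
\tfrac{4-n(q-p)}{2}\big(\theta^{k}-1\big)
-\tfrac{4-n(q-p)}{2}\big(\theta-1\big)-(q-p)+\tfrac4n\\
&=
\tfrac{4-n(q-p)}{2}\big(\theta^{k}-1\big)
=2\beta_k,
\end{align*}
where we used $\theta-1=\frac2n$ in the last step.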
The choice $\alpha=\beta_{k-1}$ turns \mbox{e}qref{reverse:start-2} into
\begin{align}\label{reverse:start-3}\hat uidehat nonumber
\bigg[
\Xiint{-\!-}_{\Omega_T\cap Q_{k}}&
\boldsymbol{\mathcal H}(Du)^{q+2\beta_k}\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac{n}{n+2}}\\
&\le
\frac{C_o4^k\varrho^2}{(s-r)^2}(1+\beta_k)^{\frac{3\sigma}{\sigma-n-2}}\bigg[
\Xiint{-\!-}_{\Omega_T\cap Q_{k-1}}\boldsymbol{\mathcal H}(Du)^{q+2\beta_{k-1}}
\,\mathrm{d}x\mathrm{d}t+1\bigg].
\mbox{e}nd{align}
In the last line we used $\beta_{k-1}<\beta_k$ to replace
$p+2\beta_{k-1}$ by $2p(1+\beta_{k})$.
The constant $C_o$ in the above estimate is up to a multiplicative factor
the same as the one from~\mbox{e}qref{def:C_o}. This, however, does not change the dependencies
in $C_o$.
To proceed further let
\begin{equation*}
\boldsymbol A_k:=\Xiint{-\!-}_{\Omega_T\cap Q_{k}}\boldsymbol{\mathcal H}(Du)^{q+2\beta_k}\,\mathrm{d}x\mathrm{d}t.
\mbox{e}nd{equation*}
In terms of $\boldsymbol A_k$ the reverse H\"older inequality \mbox{e}qref{reverse:start-3}
leads to a recursion formula
\begin{equation*}
\boldsymbol A_k
\le
\bigg[\frac{C_o4^k\varrho^2}{(s-r)^2}(1+\beta_k)^{\frac{3\sigma}{\sigma-n-2}}\bigg]^{1+\frac2n}
\big(\boldsymbol A_{k-1}+1\big)^{1+\frac2n}\qquad
\forall\,k\in\mathbb{N}.
\mbox{e}nd{equation*}
Iteration of this inequality gives
\begin{equation*}
\boldsymbol A_k
\le
\prod_{j=1}^k
\bigg[ \frac{C_o4^j\varrho^2}{(s-r)^2}(1+\beta_j)^{\frac{3\sigma}{\sigma-n-2}}\bigg]^{(1+\frac2n)^{k-j+1}}
\big(\boldsymbol A_{0}+1\big)^{(1+\frac2n)^k}
\mbox{e}nd{equation*}
for any $k\in\mathbb{N}$. Here we enlarged $C_o$ by a factor 2.
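The iteration is a direct induction: since $s-r\le\frac\varrho2$ and, as we may assume, $C_o\ge1$, every factor in the above product is at least $1$, so that in the induction step $\boldsymbol A_{k-1}+1$ can be bounded by twice the inductive bound for $\boldsymbol A_{k-1}$; the resulting extra factor $2^{1+\frac2n}$ is absorbed by the indicated enlargement of $C_o$.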
We take this inequality to the power $\frac{1}{q+2\beta_k}$ and obtain
\begin{equation}\label{iteration:1}
\boldsymbol A_k^\frac{1}{q+2\beta_k}
\le
\prod_{j=1}^k
\bigg[ \frac{C_o4^j\varrho^2}{(s-r)^2}(1+\beta_j)^{\frac{3\sigma}{\sigma-n-2}}
\bigg]^\frac{(1+\frac2n)^{k-j+1}}{q+2\beta_k}
\big(\boldsymbol A_{0}+1\big)^\frac{(1+\frac2n)^k}{q+2\beta_k}.
\mbox{e}nd{equation}
Note that
\begin{equation*}
\lim_{k\to\infty} \frac{(1+\frac2n)^k}{q+2\beta_k}= \frac{2}{4-n(q-p)}.
\mbox{e}nd{equation*}
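This follows from the closed form \eqref{formula-beta-k}: dividing by $(1+\frac2n)^k$ we find
\begin{equation*}
\frac{q+2\beta_k}{(1+\frac2n)^k}
=
\frac{4-n(q-p)}{2}
+\frac{q-\frac{4-n(q-p)}{2}}{(1+\frac2n)^k}
\;\longrightarrow\;
\frac{4-n(q-p)}{2}
\qquad\mbox{as $k\to\infty$,}
\end{equation*}
and the asserted limit is the reciprocal of this value.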
Therefore, we have
\begin{equation}\label{limit-rhs}
\lim_{k\to\infty}
\big(\boldsymbol A_{0}+1\big)^\frac{(1+\frac2n)^k}{q+2\beta_k}
=
\bigg[
\Xiint{-\!-}_{\Omega_T\cap Q_{s}}\boldsymbol{\mathcal H}(Du)^{q}\,\mathrm{d}x\mathrm{d}t+1
\bigg]^\frac{2}{4-n(q-p)}.
\mbox{e}nd{equation}
With the abbreviation
\begin{equation*}
\boldsymbol\gamma :=\frac{4-n(q-p)}{2}\in(0,2],
\mbox{e}nd{equation*}
formula~\mbox{e}qref{formula-beta-k} takes the form
\begin{equation*}
\beta_k=\frac{\boldsymbol\gamma}2\Big[\Big(1+\frac2n\Big)^k-1\Big]
\le \Big(1+\frac2n\Big)^k-1.
\mbox{e}nd{equation*}
Therefore, for any $j\in\mathbb{N}$ we have the estimate
\begin{equation*}
\frac{C_o4^j\varrho^2}{(s-r)^2}(1+\beta_j)^{\frac{3\sigma}{\sigma-n-2}}
\le
\hat uidetilde C_oK^j,
\mbox{e}nd{equation*}
with the abbreviations
\begin{align*}
\hat uidetilde C_o&:=\frac{C_o\varrho^2}{(s-r)^2}
=\frac{C\varrho^2}{(s-r)^2}\Big( 1+[b]_{\sigma,\Omega_T\cap
Q_\varrho}^{\frac{2\sigma}{\sigma-n-2}}\Big)
\mbox{e}nd{align*}
and
\begin{align*}
K&:=4\big(1+\tfrac2n\big)^{\frac{3\sigma}{\sigma-n-2}}\ge1.
\mbox{e}nd{align*}
We use this to bound the product appearing in \mbox{e}qref{iteration:1},
with the result
\begin{align*}
\prod_{j=1}^k
\bigg[
\frac{C_o4^{j}\varrho^2}{(s-r)^2}(1+\beta_j)^{\frac{3\sigma}{\sigma-n-2}}
\bigg]^{\frac{(1+\frac2n)^{k-j+1}}{q+2\beta_k}}
& \le
\prod_{j=1}^k \hat uidetilde C_o^{\frac{(1+\frac2n)^{k-j+1}}{q+2\beta_k}}
\prod_{j=1}^k K^{\frac{j(1+\frac2n)^{k-j+1}}{q+2\beta_k}} \\
& \le
\prod_{j=1}^k
\hat uidetilde C_o^{\frac{(1+\frac2n)^{k-j+1}}{\boldsymbol\gamma[(1+\frac2n)^k-1]}}
\prod_{j=1}^k
K^{\frac{j(1+\frac2n)^{k-j+1}}{\boldsymbol\gamma[(1+\frac2n)^k-1]}}.
\mbox{e}nd{align*}
The first product on the right-hand side can be computed with the help of \mbox{e}qref{A1} from Lemma~\ref{lem:A} applied with $A=\hat uidetilde C_o$ and $\theta=1+\frac2n$. We obtain
\begin{align*}
\prod_{j=1}^k
\hat uidetilde C_o^{\frac{(1+\frac2n)^{k-j+1}}{\boldsymbol\gamma[(1+\frac2n)^k-1]}}
=
\hat uidetilde C_o^\frac{n+2}{2\boldsymbol\gamma}
=
\hat uidetilde C_o^{\frac{n+2}{4-n(q-p)} }.
\mbox{e}nd{align*}
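The displayed identity can also be checked directly: with $\theta=1+\frac2n$ we have $\frac{\theta}{\theta-1}=\frac{n+2}{2}$, so that
\begin{equation*}
\sum_{j=1}^k\theta^{k-j+1}
=
\theta\,\frac{\theta^k-1}{\theta-1}
=
\frac{n+2}{2}\big(\theta^k-1\big),
\end{equation*}
and hence the total exponent of $\widetilde C_o$ in the product equals $\frac{(n+2)(\theta^k-1)}{2\boldsymbol\gamma(\theta^k-1)}=\frac{n+2}{2\boldsymbol\gamma}$.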
Similarly, the second product can be bounded with the help of \mbox{e}qref{A2} from Lemma~\ref{lem:A} applied with $A=K$ and $\theta=1+\frac2n$. This yields
\begin{align*}
\prod_{j=1}^k
K^{\frac{j(1+\frac2n)^{k-j+1}}{\boldsymbol\gamma[(1+\frac2n)^k-1]}}
&\le
K^{\frac{(n+2)^2}{4\boldsymbol\gamma}}
=
K^{\frac{(n+2)^2}{2(4-n(q-p))}}.
\mbox{e}nd{align*}
Inserting this above, we obtain
\begin{align*}
\prod_{j=1}^k\bigg[
\frac{C_o4^{j}\varrho^2}{(s-r)^2}
(1+\beta_j)^{\frac{3\sigma}{\sigma-n-2}}
\bigg]^{\frac{(1+\frac2n)^{k-j+1}}{q+2\beta_k}}
\le
\big(K^{\frac{n+2}2}\hat uidetilde C_o\big)^{\frac{n+2}{4-n(q-p)} },
\mbox{e}nd{align*}
where $K$ depends only on $n$ and $\sigma$. In particular, the
right-hand side is independent of $k\in\mathbb{N}$.
This allows us to pass to the limit $k\uparrow\infty$ in
\mbox{e}qref{iteration:1}. In view of~\mbox{e}qref{limit-rhs}, this yields
\begin{align*}
\limsup_{k\to\infty}\mathbf A_k^{\frac{1}{ q+2\beta_k}}
&\le
\big(K^{\frac{n+2}2}\hat uidetilde C_o\big)^{\frac{n+2}{4-n(q-p)} }\bigg[
\Xiint{-\!-}_{\Omega_T\cap Q_{s}}\boldsymbol{\mathcal H}(Du)^{q}\,\mathrm{d}x\mathrm{d}t+1
\bigg]^{\frac{2}{4-n(q-p)}}\\
&\le
C\bigg[\frac{\varrho^{n+2}\big(1+ [ b]_{\sigma,\Omega_T\cap
Q_{\varrho}}^{\frac{(n+2)\sigma}{\sigma-n-2}}\big)}{(s-r)^{n+2}}
\Xiint{-\!-}_{\Omega_T \cap Q_{s}}
\big( 1+|Du|^2\big)^{\frac{ q }{2}}
\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac{2}{4-n(q-p)}}.
\mbox{e}nd{align*}
In the last line we used \mbox{e}qref{def:C_o}, i.e.~the special form of $C_o$, and the fact that
$\mu\le1$.
Since $\varrho_k\mathrm{d}ownarrow r$ and $\beta_k\uparrow\infty$, the
last estimate implies the following {\it sup-estimate for the gradient}
\begin{align}\label{sup-Q-r-s}
\sup_{\Omega_T\cap Q_{r}}|Du|
&=
\lim_{k\to\infty}
\bigg[\Xiint{-\!-}_{\Omega_T \cap Q_{r}}
|Du|^{q+2\beta_k}
\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac{1}{ q+2\beta_k}}\hat uidehat nonumber\\
&\le
\limsup_{k\to\infty}\mathbf A_k^\frac{1}{ q+2\beta_k}\hat uidehat nonumber\\
&\le
C\bigg[\frac{\varrho^{n+2}\big(1+ [ b]_{\sigma,\Omega_T\cap
Q_{\varrho}}^{\frac{(n+2)\sigma}{\sigma-n-2}}\big)}{(s-r)^{n+2}}
\Xiint{-\!-}_{\Omega_T \cap Q_{s}}
\big( 1+|Du|^2\big)^{\frac{ q }{2}}
\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac1q\frac{2q}{4-n(q-p)}},
\mbox{e}nd{align}
for a constant $C$ that depends on $n,m,M,p,\sigma$, and $\Theta$.
In the case $p\ge 2$ we have $q=p$, and therefore \mbox{e}qref{sup-Q-r-s} simplifies to
\begin{align*}
\sup_{\Omega_T\cap Q_{r}}|Du|
&\le
C\bigg[\frac{\varrho^{n+2}\Big(1+ [b]_{\sigma,\Omega_T\cap
Q_{\varrho}}^{\frac{(n+2)\sigma}{\sigma-n-2}}\Big)}{(s-r)^{n+2}} \Xiint{-\!-}_{\Omega_T \cap Q_{s}}
\big( 1+|Du|^2\big)^{\frac{ p}{2}}
\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac{d}p},
\mbox{e}nd{align*}
where $d=\tfrac12 p$ is the scaling deficit from \mbox{e}qref{def:deficit}.
With the choice $r=\frac\varrho2$ and $s=\frac{3\varrho}4$, this yields the asserted
sup-estimate for the gradient~\mbox{e}qref{Lipschitz-estimate} in the case $p\ge2$.
Note that this is in perfect accordance with the interior estimate
\cite[Chapter VIII, Theorem 5.1]{DB}.
\subsection{Interpolation in the case $\frac{2n}{n+2}<p<2$}
To reduce the integrability exponent
in the sup-estimate from $q=2$ to $p$ in the singular case we need an additional interpolation
argument. To this end, we
apply \mbox{e}qref{sup-Q-r-s} with arbitrary radii $r,s$ satisfying
$\frac\varrho2\le r<s\le \frac{3\varrho}4$.
On the right-hand side of
the estimate, we bound a part of the integrand by its supremum and
then apply Young's inequality with exponents $\frac{4-n(2-p)}{2(2-p)}$ and
$\frac{4-n(2-p)}{p(n+2)-2n}$.
Note that this is possible if and only if $\frac{2n}{n+2}<p<2$.
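For completeness, we check that the two exponents are conjugate and admissible:
\begin{equation*}
\frac{2(2-p)}{4-n(2-p)}+\frac{p(n+2)-2n}{4-n(2-p)}
=
\frac{4-2p+pn+2p-2n}{4-n(2-p)}
=1,
\end{equation*}
and both exponents are larger than $1$ exactly when $(n+2)(2-p)<4$ and $2p<4$, that is, when $\frac{2n}{n+2}<p<2$; moreover, $4-n(2-p)>0$ in this range.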
This procedure leads us to
\begin{align*}
\sup_{\Omega_T\cap Q_{r}}& \big(1+|Du|^2\big)^{\frac12}\\
&\le
C\bigg[\frac{\varrho^{n+2}\big(1+ [ b]_{\sigma,\Omega_T\cap
Q_{\varrho}}^{\frac{(n+2)\sigma}{\sigma-n-2}}\big)}{(s-r)^{n+2}}
\Xiint{-\!-}_{\Omega_T \cap Q_{s}}
\big( 1+|Du|^2\big)
\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac{2}{4-n(2-p)}}\\[1.2ex]
&\le
C\sup_{\Omega_T\cap Q_{s}}\big(1+|Du|^2\big)^{\frac{2-p}{4-n(2-p)}}\\
&\quad
\cdot
\bigg[\frac{\varrho^{n+2}\big(1+[b]_{\sigma,\Omega_T\cap
Q_{\varrho}}^{\frac{(n+2)\sigma}{\sigma-n-2}}\big)}{(s-r)^{n+2}}
\Xiint{-\!-}_{\Omega_T \cap Q_{s}}
\big( 1+|Du|^2\big)^{\frac p2}
\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac{2}{4-n(2-p)}}\\[1.2ex]
&\le
\tfrac12\sup_{\Omega_T\cap Q_{s}}\big(1+|Du|^2\big)^{\frac12}\\
&\quad+
C\bigg[\frac{\varrho^{n+2}\big(1+ [ b]_{\sigma,\Omega_T\cap
Q_{\varrho}}^{\frac{(n+2)\sigma}{\sigma-n-2}}\big)}{(s-r)^{n+2}}
\Xiint{-\!-}_{\Omega_T\cap Q_{s}}
\big( 1+|Du|^2\big)^{\frac p2}
\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac{2}{p(n+2)-2n}}.
\mbox{e}nd{align*}
By a standard iteration argument (cf. \cite[Chapter~V, Lemma~3.1]{Giaquinta-book}), this implies
\begin{align}\label{sup-est-rho}\hat uidehat nonumber
\sup_{\Omega_T\cap Q_{\varrho/2}}&(1+|Du|^2)^{\frac12}\\
&\le
C\bigg[\Big(1+ [ b]_{\sigma,\Omega_T\cap
Q_{\varrho}}^{\frac{(n+2)\sigma}{\sigma-n-2}}\Big)
\Xiint{-\!-}_{\Omega_T\cap Q_{3\varrho/4}}
\big( 1+|Du|^2\big)^{\frac p2}
\,\mathrm{d}x\mathrm{d}t\bigg]^\frac{d}{p},
\mbox{e}nd{align}
where $d=\frac{2p}{p(n+2)-2n}$ is the scaling deficit, cf.~\mbox{e}qref{def:deficit}.
This is exactly the claimed bound \mbox{e}qref{Lipschitz-estimate} in the
singular range $\frac{2n}{n+2}<p<2$, and completes the proof of the sup-estimate from
Proposition~\ref{prop:apriori}. Note also that the sup-gradient estimate
\mbox{e}qref{sup-est-rho} is again in perfect accordance with the corresponding interior estimate
\cite[Chapter VIII, Theorem 5.2']{DB}.
\section{Regularization}\label{sec:regularization}
In this section we describe the regularization procedure that will
allow us to extend the a priori estimate to the general
case. We consider the situation stated
in
Theorem~\ref{thm:main}, i.e.~we let $\Omega\subset\mathbb{R}^n$ be
a bounded convex domain, and suppose that $u\in
L^p(0,T;W^{1,p}(\Omega,\mathbb{R}^N))$ is a solution to \mbox{e}qref{system}, where
\mbox{e}qref{assumption:a(0)} -- \mbox{e}qref{assumption:b}
are in force. Moreover, we assume that for some $z_o=(x_o,t_o)$ with $x_o\in\partial\Omega$
and some $\varrho>0$ we have $u\mbox{e}quiv0$ on $(\partial\Omega)_T\cap
Q_{2\varrho}(z_o)$ in the sense of traces.
\subsection{Approximation of the domain}
For any $\varepsilon\in(0,1]$ we consider the parallel set
$\hat uidetilde\Omega_\varepsilon:=\{x\in\mathbb{R}^n\colon \mathrm{d}ist(x,\Omega)<\frac{3}2\varepsilon\}$. Note that $\hat uidetilde\Omega_\varepsilon$ is convex as $\Omega$ is convex. By a well-known result from convex analysis (see e.g. \cite[\S XIII.2, Satz 2]{Marti:book}), the
domains $\hat uidetilde\Omega_\varepsilon$ can be approximated in Hausdorff distance by
smooth convex sets $\Omega_\varepsilon$ with
\begin{equation*}
\mathrm{d}ist_{\mathcal{H}}\big(\Omega_\varepsilon,\hat uidetilde\Omega_\varepsilon\big)
<
\tfrac12\varepsilon.
\mbox{e}nd{equation*}
In particular, the regularized sets $\Omega_\varepsilon$ satisfy
\begin{equation}\label{choice-Omega-delta}
\big\{x\in\mathbb{R}^n\colon \mathrm{d}ist (x,\Omega)<\varepsilon\big\}
\subset\Omega_\varepsilon\subset
\big\{x\in\mathbb{R}^n\colon \mathrm{d}ist
(x,\Omega)<2\varepsilon\big\}.
\mbox{e}nd{equation}
Since the domains $\Omega_\varepsilon$ approximate $\Omega$ from the outside, we obtain
\begin{equation}\label{measure-density-Omega-delta}
\sup_{\varepsilon\in(0,1]}\frac{\varrho^n}{|\Omega_\varepsilon\cap B_{\varrho/2}(x_o)|}
\le
\frac{\varrho^n}{|\Omega\cap B_{\varrho/2}(x_o)|}
=
2^n\Theta_{\varrho/2}(x_o)
\mbox{e}nd{equation}
for every $x_o\in\partial\Omega$ and $\varrho>0$,
with the constant $\Theta_{\varrho/2}(x_o)$ introduced
in~\mbox{e}qref{def-Theta}.
As a result, the constants in the a priori estimate will be
independent of $\varepsilon\in(0,1]$.
\subsection{Regularization of the coefficients}\label{section:reg-coeff}
We regularize the coefficients by means of a
mollifier $\phi\in C^\infty_0(\mathbb{R},[0,\infty))$
with $\spt\phi\subset (-1,1)$ and $\int_\mathbb{R}\phi\,\mathrm{d} x=1$. For $\varepsilon\in(0,1]$ we let
$\phi_\varepsilon(x):=\varepsilon^{-1}\phi(\frac x\varepsilon)$ and
$$
\mathbf c_\varepsilon(s):=\big(\phi_\varepsilon\ast \mathbf c\big)(s),
\quad\mbox{where}\ \
\mathbf c(s):=
\left\{
\begin{array}{cl}
\mathbf{a}(\mathrm{e}^{s}),
&\mbox{if $\mu>0$,} \\[5pt]
\mathbf{a}\big(\sqrt{\varepsilon^2 +\mathrm{e}^{2s}}\,\big)
, &\mbox{if $\mu=0$,}
\mbox{e}nd{array}
\right.
$$
for any $s\in\mathbb{R}$.
The regularized coefficients $\mathbf a _{\varepsilon}$ are defined by
$$
\mathbf a_{\varepsilon} (r):= \mathbf c_\varepsilon(\log r),
\quad\mbox{for }r>0.
$$
Similarly as in \cite[Section~4.2]{BDMS} we obtain the following ellipticity
and growth conditions for $\mathbf a_{\varepsilon}$; see also Appendix~\ref{app:reg} for the proof. For any $r>0$ we have
\begin{equation}\label{growth-a-delta}
\left\{
\begin{array}{c}
\tfrac{m}{c}(\lambda^2+r^2)^{\frac{p-2}2}
\le
\mathbf a_{\varepsilon}(r)
\le
cM(\lambda^2+r^2)^{\frac{p-2}2},\\[5pt]
\tfrac{m}{c}(\lambda^2+r^2)^{\frac{p-2}2}
\le
\mathbf a_{\varepsilon}'(r)r+\mathbf a_{\varepsilon}(r)
\le
cM(\lambda^2+r^2)^{\frac{p-2}2},\\[5pt]
|\mathbf a_{\varepsilon}''(r)r^2|
\le
\frac{cM}\varepsilon(\lambda^2+r^2)^{\frac{p-2}2},
\mbox{e}nd{array}
\right.
\mbox{e}nd{equation}
with a constant $c=c(n,p)$ and
\begin{equation}
\lambda
:=
\left\{
\begin{array}{cl}
\mu ,&\mbox{if $\mu>0$,} \\[2pt]
\varepsilonilon, &\mbox{if $\mu=0$.}
\mbox{e}nd{array}
\right.\label{lambda-def}
\mbox{e}nd{equation}
Moreover, we have
\begin{align}\label{a-delta-convergence}
| \mathbf a_{\varepsilon} (r)-\mathbf{a}(r)|
\le
2c(p)M \varepsilon\max\big\{ 1,\mathrm{e}^{p-2}\big\}
\big(\lambda^2 + r^2\big)^{\frac{p-2}{2}}
\mbox{e}nd{align}
for any $r>0$; see also Appendix~\ref{app:reg} for the proof.
\subsection{Weak solutions to the regularized problems}
Here we assume that
\begin{equation*}
\mbox{$u\mbox{e}quiv0\;$ on $(\partial\Omega)_T\cap Q_{2\varrho}(z_o)$, }
\mbox{e}nd{equation*}
where $Q_{2\varrho}(z_o)$ is a parabolic cylinder with
$x_o\in \partial\Omega$ and $(t_o-4\varrho^2,t_o)\subset(0,T)$.
For a cut-off function
$\hat uidetilde\mbox{e}ta\in C^\infty(0,\infty;[0,1])$ with $\hat uidetilde\mbox{e}ta\mbox{e}quiv 1$ on
$[2,\infty)$ and $\hat uidetilde\mbox{e}ta\mbox{e}quiv0$ on $(0,1)$ we
consider the boundary values
\begin{equation}\label{def:g_eps}
g_\varepsilon(x,t):=\mbox{e}ta_\varepsilon(x) u(x,t)
\quad\mbox{with}\quad
\mbox{e}ta_\varepsilon(x):=\hat uidetilde\mbox{e}ta\Big(\frac{\mathrm{d}ist(x,\partial\Omega)}\varepsilon\Big)
\mbox{ for $x\in\Omega$}.
\mbox{e}nd{equation}
We extend this function to $\mathbb{R}^n\times(0,T)$ by letting
$g_\varepsilon\mbox{e}quiv 0$ on $(\mathbb{R}^n\setminus\Omega)_T$.
Note that the extension satisfies $g_\varepsilon\in
L^p(0,T;W^{1,p}_0(\Omega_\varepsilonilon,\mathbb{R}^N))$.
For the inhomogeneity we consider the regularization
$b_\varepsilon:=\phi_{\varepsilonilon/2}\ast b$, for a standard mollifier in space-time $\phi\in
C^\infty_0(Q_1,\mathbb{R})$.
Due to the construction of $\Omega_\varepsilon$ (see \mbox{e}qref{choice-Omega-delta}) we have
\begin{equation}\label{b-compact-support}
\spt b_\varepsilon \Subset \Omega_\varepsilon\times\mathbb{R}.
\mbox{e}nd{equation}
We let $\mathbf{a}_\varepsilon$ be the coefficients constructed in
Section~\ref{section:reg-coeff}.
By
$$
u_\varepsilonilon
\in
g_\varepsilon+ L^p\big(t_o-\varrho^2,t_o;W^{1,p}_0(\Omega_\varepsilonilon\cap B_\varrho(x_o),\mathbb{R}^N)\big)
$$
we denote the weak solution to the Cauchy-Dirichlet problem
\begin{equation}\label{eq-ueps}
\left\{
\begin{array}{cl}
\partial_t u_\varepsilonilon -
\Div \big(\mathbf{a}_\varepsilon(|Du_\varepsilonilon|)Du_\varepsilonilon\big)
=
b_\varepsilon
& \quad\mbox{in $(\Omega_\varepsilonilon)_T\cap Q_\varrho(z_o)$} \\[5pt]
u_\varepsilonilon =
g_\varepsilon
& \quad\mbox{on $\partial_p((\Omega_\varepsilonilon)_T\cap Q_\varrho(z_o))$.}
\mbox{e}nd{array}
\right.
\mbox{e}nd{equation}
Note that $u_\varepsilon=0$ on $(\partial\Omega_\varepsilon)_T\cap Q_\varrho(z_o)$ and
$u_\varepsilon = \mbox{e}ta_\varepsilon u$ on $(\Omega_\varepsilon)_T\cap \partial_p Q_\varrho (z_o)$.
Using a reflection argument, interior regularity theory and
up-to-the-boundary Schauder estimates we can show that $u_\varepsilon$ is
smooth up to the boundary component
$(\partial\Omega_\varepsilon)_T\cap Q_\varrho(z_o)$; see Appendix~\ref{app:smooth}.
\section{Proof of Theorem \ref{thm:main}}
The proof of the gradient estimate will be achieved in Section~\ref{sec:proof}. Prior to that, we shall prove an energy estimate for $u_\varepsilonilon$.
\subsection{An energy estimate for the approximating solutions}
Throughout this section we omit the reference to the center $z_o$ in our notation.
From \cite[Corollary~3.11]{Kinnunen-Martio} we recall the following result;
note that the constant $\gamma$ in \cite[Corollary~3.11]{Kinnunen-Martio} can be chosen as $\gamma=\frac12$ due to the convexity of $\Omega$; see \cite[Example~3.6\,(4)]{Kinnunen-Martio}.
\begin{lemma}[Hardy's inequality]\label{lem:Hardy}
Let $1<p<\infty$ and suppose that $\Omega\subset\mathbb{R}^n$ is a bounded open convex set. Then there is a constant $c$ depending on $n$ and $p$ such that whenever $u\in W^{1,p}_0(\Omega)$ there holds
\begin{equation*}
\int_\Omega \bigg(\frac{|u(x)|}{\mathrm{d}ist(x,\partial\Omega)}\bigg)^p\,\mathrm{d}x
\le
c \int_\Omega|Du(x)|^p\,\mathrm{d}x.
\mbox{e}nd{equation*}
\mbox{e}nd{lemma}
In the following we let
\begin{equation}\label{def:V}
\boldsymbol V_\varepsilonilon
:=
L^\infty\big(t_o-\varrho^2,t_o;L^2(\Omega_\varepsilon\cap B_\varrho,\mathbb{R}^N)\big)\cap
L^p\big(t_o-\varrho^2,t_o;W_0^{1,p}(\Omega_\varepsilon\cap B_\varrho,\mathbb{R}^N)\big)
\mbox{e}nd{equation}
with norm
$$
\|\varphi\|_{\boldsymbol V_\varepsilon}
:=
\|\varphi\|_{L^\infty-L^2} +
\|\varphi\|_{L^p-W^{1,p}}.
$$
We start with an estimate for the spatial gradient of the boundary values.
\begin{lemma}\label{lem:Dge}
Let $u$ be a weak solution to \mbox{e}qref{system} in
$\Omega_T$ with $u\mbox{e}quiv0$ on $(\partial\Omega)_T\cap Q_{2\varrho}(z_o)$
in the sense of traces, and $g_\varepsilon=\mbox{e}ta_\varepsilon u$ be constructed as
in~\mbox{e}qref{def:g_eps}.
Then we have
\begin{align*}
\iint_{\Omega_T\cap Q_\varrho} |Dg_\varepsilon|^p\mathrm{d}x\mathrm{d}t
\le
c \iint_{\Omega_T\cap Q_{2\varrho}} \big[ |Du|^p+\varrho^{-p}|u|^p\big]\mathrm{d}x\mathrm{d}t,
\mbox{e}nd{align*}
with
a constant $c=c(n,p)$.
\mbox{e}nd{lemma}
\begin{proof}
We choose a
standard cut-off function $\zeta\in C^\infty_0(B_{2\varrho},[0,1])$ with
$\zeta\mbox{e}quiv1$ on $B_\varrho$ and $|D\zeta|\le\frac2\varrho$ on
$B_{2\varrho}$. Then we apply
Hardy's inequality from Lemma~\ref{lem:Hardy} to the
function $\zeta u$ on the time-slices $\Omega\times\{t\}$
for a.e.~$t\in(t_o-\varrho^2,t_o)$, with the result
\begin{align*}
\iint_{\Omega_T\cap Q_\varrho} |Dg_\varepsilon|^p\mathrm{d}x\mathrm{d}t
&\le
c \iint_{\Omega_T\cap Q_\varrho} |Du|^p\mathrm{d}x\mathrm{d}t
+
\frac{c}{\varepsilon^p}\iint_{\Omega_T\cap Q_\varrho\cap\spt(D\mbox{e}ta_\varepsilon)}
|\zeta u|^p\mathrm{d}x\mathrm{d}t\\
&\le
c \iint_{\Omega_T\cap Q_\varrho} |Du|^p\mathrm{d}x\mathrm{d}t
+
c \iint_{\Omega\times(t_o-\varrho^2,t_o)}
\bigg(\frac{|\zeta u|}{\mathrm{d}ist(x,\partial\Omega)}\bigg)^p\mathrm{d}x\mathrm{d}t\\
&\le
c \iint_{\Omega_T\cap Q_\varrho} |Du|^p\mathrm{d}x\mathrm{d}t
+
c \iint_{\Omega\times(t_o-\varrho^2,t_o)} |D(\zeta u)|^p\mathrm{d}x\mathrm{d}t\\
&\le
c \iint_{\Omega_T\cap Q_{2\varrho}} \big[ |Du|^p+\varrho^{-p}|u|^p\big]\mathrm{d}x\mathrm{d}t.
\mbox{e}nd{align*}
This proves the claimed estimate.
\mbox{e}nd{proof}
In the next lemma we provide an estimate for the distributional time derivative of the boundary values.
\begin{lemma}\label{lem:time-u}
Let $u$ be a weak solution to \mbox{e}qref{system} in
$\Omega_T$, and $g_\varepsilon=\mbox{e}ta_\varepsilon u$ be constructed as in~\mbox{e}qref{def:g_eps}.
Then, for any $\varphi\in C_0^\infty((\Omega_\varepsilon)_T\cap Q_\varrho,\mathbb{R}^N)$ we have
\begin{align}\label{time-deriv-g}
\bigg|\iint_{\Omega_T} g_\varepsilon\cdot\partial_t\varphi\,\mathrm{d}x\mathrm{d}t\bigg|
&\le
c \bigg[\iint_{\Omega_T\cap\spt\varphi}\big(\lambda^p+|Du|^p\big)\mathrm{d}x\mathrm{d}t\bigg]^{\frac{p-1}p}
\|D\varphi\|_{L^p((\Omega_\varepsilon)_T\cap Q_\varrho)}\\[1ex]\hat uidehat nonumber
&\phantom{\le\,}+
\|b\|_{L^{\frac{p(n+2)}{p(n+2)-n}}(\Omega_T\cap\spt\varphi)}
\|\varphi\|_{L^{\frac{p(n+2)}{n}}((\Omega_\varepsilon)_T\cap Q_\varrho)},
\mbox{e}nd{align}
with
a constant $c=c(n,p,M)$ and the parameter $\lambda$ from \mbox{e}qref{lambda-def}.
In particular, $\partial_t g_\varepsilon\in \boldsymbol V_\varepsilonilon'$.
\mbox{e}nd{lemma}
Note that
$b\in L^{\frac{p(n+2)}{p(n+2)-n}}(\Omega_T)$, since
$\sigma>n+2>\frac{p(n+2)}{p(n+2)-n}$. Therefore, the right-hand
side of \mbox{e}qref{time-deriv-g} is finite.
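Indeed, since $p(n+2)>2n>n$, the second inequality in this chain can be checked directly:
\begin{equation*}
n+2>\frac{p(n+2)}{p(n+2)-n}
\iff
p(n+2)(n+1)>n(n+2)
\iff
p>\frac{n}{n+1},
\end{equation*}
which holds because $p>\frac{2n}{n+2}\ge\frac{n}{n+1}$.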
\begin{proof}
Let
$\varphi\in C_0^\infty((\Omega_\varepsilon)_T\cap Q_\varrho,\mathbb{R}^N)$, and consider
the cut-off function $\mbox{e}ta_\varepsilon(x)$ from~\mbox{e}qref{def:g_eps}.
Testing the weak form of \mbox{e}qref{system} with $\mbox{e}ta_\varepsilon\varphi$ and recalling~\mbox{e}qref{bounds-a}, we estimate
\begin{align}\label{time-deriv-g-1}
\bigg|\iint_{\Omega_T}& g_\varepsilon\cdot \partial_t\varphi\,\mathrm{d}x\mathrm{d}t\bigg|\\\hat uidehat nonumber
&=
\bigg|\iint_{\Omega_T} u\cdot \partial_t(\mbox{e}ta_\varepsilon\varphi)\,\mathrm{d}x\mathrm{d}t\bigg|\\\hat uidehat nonumber
&=
\bigg|\iint_{\Omega_T}
\Big[ \mathbf{a}(|Du|)Du\cdot D(\mbox{e}ta_\varepsilon\varphi) -
\mbox{e}ta_\varepsilon b\cdot\varphi\Big]\,\mathrm{d}x\mathrm{d}t\bigg| \\\hat uidehat nonumber
&\le
cM \iint_{\Omega_T}
\big(\mu^2+|Du|^2\big)^{\frac{p-2}{2}}|Du|\, |D(\mbox{e}ta_\varepsilon\varphi)|
\,\mathrm{d}x\mathrm{d}t
+
\iint_{\Omega_T} |b| |\varphi| \,\mathrm{d}x\mathrm{d}t \\\hat uidehat nonumber
&\le
cM
\bigg[\iint_{\Omega_T\cap \spt\varphi}\big(\lambda^p+|Du|^p\big)\mathrm{d}x\mathrm{d}t\bigg]^{\frac{p-1}p}\|D(\mbox{e}ta_\varepsilon\varphi)\|_{L^p(\Omega_T)}\\\hat uidehat nonumber
&\phantom{\le\,}+
\|b\|_{L^{\frac{p(n+2)}{p(n+2)-n}}(\Omega_T\cap\spt\varphi)} \|\varphi\|_{L^{\frac{p(n+2)}{n}}((\Omega_\varepsilon)_T\cap Q_\varrho)}.
\mbox{e}nd{align}
For the norm in the second-to-last
term, we have
\begin{equation}\label{bound-cut-off}
\|D(\mbox{e}ta_\varepsilon\varphi)\|_{L^p(\Omega_T)}^p
\le
c\iint_{\Omega_T\cap Q_\varrho}|D\varphi|^p\mathrm{d}x\mathrm{d}t
+
\frac{c}{\varepsilon^p}\iint_{\Omega_T\cap
Q_\varrho\cap\spt(D\mbox{e}ta_\varepsilon)}|\varphi|^p\mathrm{d}x\mathrm{d}t.
\mbox{e}nd{equation}
In order to bound the last integral, we observe that for points $x$ in the domain of
integration, we have
\begin{equation*}
\mathrm{d}ist(x,\partial\Omega_\varepsilon)
\le
2\varepsilon+\mathrm{d}ist(x,\partial\Omega)
\le
4\varepsilon,
\mbox{e}nd{equation*}
by the construction of $\Omega_\varepsilon$ and since
$\spt(D\mbox{e}ta_\varepsilon)$ is contained in the $2\varepsilon$-neighborhood of
$\partial\Omega$.
Therefore, we can apply Hardy's inequality from
Lemma~\ref{lem:Hardy} on the time slices $\Omega_\varepsilon\times\{t\}$ for
a.e. $t\in (t_o-\varrho^2,t_o)$, after extending $\varphi$ by zero on
$(\Omega_\varepsilon\times\{t\})\setminus B_\varrho$. Note that the constant in
Hardy's inequality depends only on $n$ and $p$; in particular, it is independent of
$\varepsilon$. As a result, we obtain
\begin{align*}
\frac{1}{\varepsilon^p}\iint_{\Omega_T\cap
Q_\varrho\cap\spt(D\mbox{e}ta_\varepsilon)}|\varphi|^p\mathrm{d}x\mathrm{d}t
&\le
c\iint_{\Omega_\varepsilon\times(t_o-\varrho^2,t_o)}
\bigg(\frac{|\varphi|}{\mathrm{d}ist(x,\partial\Omega_\varepsilonilon)}\bigg)^p\mathrm{d}x\mathrm{d}t\\
&\le
c\iint_{\Omega_\varepsilon\times(t_o-\varrho^2,t_o)}|D\varphi|^p\mathrm{d}x\mathrm{d}t.
\mbox{e}nd{align*}
Joining this bound with~\mbox{e}qref{bound-cut-off}, we arrive at
$
\|D(\mbox{e}ta_\varepsilon\varphi)\|_{L^p}
\le
c\|D\varphi\|_{L^p},
$
for a constant $c=c(n,p)$.
Using this in \mbox{e}qref{time-deriv-g-1}, we deduce the asserted
estimate~\mbox{e}qref{time-deriv-g}. Finally, we note that
Gagliardo-Nirenberg's inequality implies
\begin{align*}
&\|\varphi\|_{L^{\frac{p(n+2)}{n}}((\Omega_\varepsilon)_T\cap Q_\varrho)}\\
&\qquad\le
\bigg[\iint_{(\Omega_\varepsilon)_T\cap Q_\varrho}|D\varphi|^p \,\mathrm{d}x\mathrm{d}t
\bigg(\sup_{t\in(t_o-\varrho^2,t_o)}\int_{\Omega_\varepsilon\cap B_\varrho} |\varphi(\cdot,t)|^2 \,\mathrm{d}x
\bigg)^{\frac{p}{n}}
\bigg]^{\frac{n}{p(n+2)}} \\[1ex]
&\qquad\le
c \|\varphi\|_{\boldsymbol V_\varepsilonilon} .
\mbox{e}nd{align*}
Therefore, the estimate~\mbox{e}qref{time-deriv-g} can be rewritten in the form
\begin{align*}
\bigg|\iint_{\Omega_T}& g_\varepsilon\cdot\partial_t\varphi\,\mathrm{d}x\mathrm{d}t\bigg|\\\hat uidehat nonumber
&\le
c \Bigg[\bigg(\iint_{\Omega_T\cap\spt\varphi}\big(\lambda^p+|Du|^p\big)\mathrm{d}x\mathrm{d}t\bigg)^{\frac{p-1}p}+
\|b\|_{L^{\frac{p(n+2)}{p(n+2)-n}}(\Omega_T\cap\spt\varphi)}\Bigg]
\|\varphi\|_{\boldsymbol V_\varepsilonilon}
\mbox{e}nd{align*}
for any $\varphi\in C_0^\infty\big((\Omega_\varepsilon)_T\cap Q_\varrho,\mathbb{R}^N\big)$.
This proves the assertion $\partial_tg_\varepsilon\in\boldsymbol V_\varepsilonilon'$.
\mbox{e}nd{proof}
We use the preceding estimate of the distributional time derivative of
$g_\varepsilon$ for the proof of the desired energy estimate. The difficulty comes from the fact that $u$ and $u_\varepsilonilon$ are solutions on different domains $\Omega_T$ and $(\Omega_\varepsilonilon)_T$. For ease of notation, we define
\begin{equation*}
V_\lambda(A):=\big(\lambda^2+|A|^2\big)^{\frac{p-2}{4}}A,
\qquad\mbox{for $A\in \mathbb{R}^{Nn}$}.
\mbox{e}nd{equation*}
\begin{lemma}[Energy estimate]\label{lem:energy}
For any
$\varepsilon>0$ and any weak solution $u_\varepsilon$ to the Cauchy-Dirichlet
problem~\mbox{e}qref{eq-ueps} we have
\begin{align*}
& \sup_{\tau\in(t_o-\varrho^2,t_o)}\int_{(\Omega_\varepsilonilon\cap B_\varrho)\times\{\tau\}}
|u_\varepsilonilon-g_\varepsilon|^2 \,\mathrm{d}x +
\iint_{(\Omega_\varepsilonilon)_T\cap Q_\varrho}
\big|V_\lambda(Du_\varepsilonilon)\big|^2\,\mathrm{d}x\mathrm{d}t \\\hat uidehat nonumber
&\qquad \le c
\iint_{\Omega_T\cap Q_{2\varrho}}\big[\lambda^p+|Du|^p+\varrho^{-p}|u|^p\big]\mathrm{d}x\mathrm{d}t
+
\big\||b|+|b_\varepsilon|\big\|_{L^{\frac{p(n+2)}{p(n+2)-n}}((\Omega_\varepsilon)_T\cap
Q_\varrho)}^{\frac{p(n+2)}{p(n+2)-n-p}}
\mbox{e}nd{align*}
with a constant $c=c(n,p,m,M)$.
\mbox{e}nd{lemma}
\begin{proof}
For fixed $\tau\in (t_o-\varrho^2,t_o)$ and $\mathrm{d}elta\in(0,t_o-\tau)$ we let
\begin{equation*}
\zeta_\mathrm{d}elta(t)
:=
\left\{\begin{array}{cl}
1, & \mbox{for $t\in [t_o-\varrho^2,\tau]$}, \\[3pt]
\frac{\tau+\mathrm{d}elta-t}{\mathrm{d}elta}, & \mbox{for $t\in (\tau,\tau+\mathrm{d}elta)$}, \\[3pt]
0, & \mbox{for $t\in [\tau+\mathrm{d}elta,t_o]$.}
\mbox{e}nd{array}\right.
\mbox{e}nd{equation*}
As in Lemma~\ref{lem:time-u} one easily checks that solutions $u_\varepsilon$ to the parabolic system
\mbox{e}qref{eq-ueps} possess a distributional time derivative $\partial_t
u_\varepsilonilon\in \boldsymbol V_\varepsilonilon'$. Therefore, the
testing function $\zeta^2_\mathrm{d}elta(u_\varepsilonilon-g_\varepsilon)\in \boldsymbol V_\varepsilonilon$ is admissible
in the weak form of \mbox{e}qref{eq-ueps}, which implies
\begin{align}\label{weak-1}
\big\langle \partial_tu_\varepsilon, \zeta_\mathrm{d}elta^2(u_\varepsilonilon-g_\varepsilon)\big\rangle
&+
\iint_{(\Omega_\varepsilonilon)_T\cap Q_\varrho} \zeta_\mathrm{d}elta^2\mathbf{a}_\varepsilon(Du_\varepsilonilon)
Du_\varepsilonilon\cdot (Du_\varepsilonilon-Dg_\varepsilon)
\,\mathrm{d}x\mathrm{d}t \hat uidehat nonumber\\
&=
\iint_{(\Omega_\varepsilonilon)_T\cap Q_\varrho}
\zeta_\mathrm{d}elta^2 b_\varepsilonilon\cdot(u_\varepsilonilon-g_\varepsilon)\,\mathrm{d}x\mathrm{d}t.
\mbox{e}nd{align}
Here, $\langle\cdot,\cdot\rangle$ denotes the duality pairing on
$\boldsymbol V_\varepsilonilon'\times \boldsymbol V_\varepsilonilon$.
We rewrite the first term on the left-hand side in the form
\begin{align*}
\big\langle \partial_tu_\varepsilon& ,
\zeta_\mathrm{d}elta^2(u_\varepsilonilon-g_\varepsilon)\big\rangle\\
&=
\big\langle \partial_t(u_\varepsilon-g_\varepsilon),
\zeta_\mathrm{d}elta^2(u_\varepsilonilon-g_\varepsilon)\big\rangle
+
\big\langle \partial_tg_\varepsilon,
\zeta_\mathrm{d}elta^2(u_\varepsilonilon-g_\varepsilon)\big\rangle\\
&=
\big\langle \partial_t(\zeta_\mathrm{d}elta (u_\varepsilon-g_\varepsilon)),
\zeta_\mathrm{d}elta(u_\varepsilonilon-g_\varepsilon)\big\rangle
-
\iint_{(\Omega_\varepsilonilon)_T\cap Q_\varrho}
\zeta_\mathrm{d}elta'\zeta_\mathrm{d}elta |u_\varepsilonilon-g_\varepsilon|^2
\,\mathrm{d}x\mathrm{d}t\\
&\phantom{=\,}+
\big\langle \partial_tg_\varepsilon,
\zeta_\mathrm{d}elta^2(u_\varepsilonilon-g_\varepsilon)\big\rangle\\
&=:
\mathbf{I}(\mathrm{d}elta) + \mathbf{II}(\mathrm{d}elta) +\mathbf{III}(\mathrm{d}elta),
\mbox{e}nd{align*}
with the obvious meaning of $\mathbf{I}(\mathrm{d}elta)$ to $\mathbf{III}(\mathrm{d}elta)$.
For the first term we find
\begin{align*}
\mathbf{I}(\mathrm{d}elta)
=
\tfrac12 \iint_{(\Omega_\varepsilon)_T\cap Q_\varrho}
\partial_t|\zeta_\mathrm{d}elta (u_\varepsilonilon-g_\varepsilon)|^2 \,\mathrm{d}x\mathrm{d}t
=
0,
\mbox{e}nd{align*}
since $\zeta_\mathrm{d}elta(t_o)=0$ and $u_\varepsilon=g_\varepsilon$ on the initial time
slice $(\Omega_\varepsilon\cap B_\varrho)\times\{t_o-\varrho^2\}$.
By the mean value theorem we obtain
\begin{align*}
\lim_{\delta\downarrow 0} \mathbf{II}(\delta)
&=
\tfrac12 \int_{(\Omega_\varepsilon\cap B_\varrho)\times\{\tau\}}
|u_\varepsilon-g_\varepsilon|^2 \,\mathrm{d}x .
\end{align*}
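For the reader's convenience we indicate the computation behind this limit. Since $\zeta_\delta'=-\tfrac1\delta$ on $(\tau,\tau+\delta)$ and $\zeta_\delta'=0$ elsewhere, we have
\begin{align*}
\mathbf{II}(\delta)
=
-\iint_{(\Omega_\varepsilon)_T\cap Q_\varrho}\zeta_\delta'\zeta_\delta\,|u_\varepsilon-g_\varepsilon|^2\,\mathrm{d}x\mathrm{d}t
=
\frac1\delta\int_\tau^{\tau+\delta}\zeta_\delta(t)\int_{\Omega_\varepsilon\cap B_\varrho}|u_\varepsilon-g_\varepsilon|^2\,\mathrm{d}x\,\mathrm{d}t ,
\end{align*}
and the nonnegative kernel $-\zeta_\delta'\zeta_\delta$ has total mass $\tfrac12\big[\zeta_\delta(\tau)^2-\zeta_\delta(\tau+\delta)^2\big]=\tfrac12$ and concentrates at $t=\tau$ as $\delta\downarrow0$. Together with the continuity of $t\mapsto\int_{\Omega_\varepsilon\cap B_\varrho}|u_\varepsilon-g_\varepsilon|^2\,\mathrm{d}x$ (a standard consequence of $\partial_tu_\varepsilon\in\boldsymbol V_\varepsilon'$, which we take for granted here), this yields the stated limit.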
Finally, we estimate the third term by means of
Lemma~\ref{lem:time-u}, with the result
\begin{align*}
\big|\mathbf{III}(\mathrm{d}elta)\big|
&\le
c \bigg[\iint_{\Omega_T\cap Q_\varrho}\big(\lambda^p+|Du|^p\big)\mathrm{d}x\mathrm{d}t\bigg]^{\frac{p-1}p}
\|Du_\varepsilon-Dg_\varepsilon\|_{L^p((\Omega_\varepsilon)_T\cap Q_\varrho)}\\[1ex]
&\quad+
\|b\|_{L^{\frac{p(n+2)}{p(n+2)-n}}((\Omega_\varepsilon)_T\cap Q_\varrho)}
\|u_\varepsilon-g_\varepsilon\|_{L^{\frac{p(n+2)}{n}}((\Omega_\varepsilon)_T\cap Q_\varrho)}.
\mbox{e}nd{align*}
For the last term in~\mbox{e}qref{weak-1}, a straightforward application of
H\"older's inequality yields
\begin{align*}
\iint_{(\Omega_\varepsilonilon)_T\cap Q_\varrho} &
\zeta_\mathrm{d}elta^2 b_\varepsilonilon\cdot(u_\varepsilonilon-g_\varepsilon)\,\mathrm{d}x\mathrm{d}t\\
&\le
\|b_\varepsilon\|_{L^{\frac{p(n+2)}{p(n+2)-n}}((\Omega_\varepsilon)_T\cap Q_\varrho)}
\|u_\varepsilon-g_\varepsilon\|_{L^{\frac{p(n+2)}{n}}((\Omega_\varepsilon)_T\cap Q_\varrho)}.
\mbox{e}nd{align*}
The preceding considerations allow us to pass to the limit
$\delta\downarrow0$ in \eqref{weak-1}.
In the term not yet considered, i.e.~the one containing the
coefficients $\mathbf a_\varepsilon$, the passage to the limit under the
integral can be justified by dominated convergence. Overall we get
\begin{align}\label{weak-2}\hat uidehat nonumber
\tfrac12 \int_{(\Omega_\varepsilonilon\cap B_\varrho)\times\{\tau\}}&
|u_\varepsilonilon-g_\varepsilon|^2 \,\mathrm{d}x +
\iint_{(\Omega_\varepsilonilon)_\tau\cap Q_\varrho}
\mathbf{a}_\varepsilon(Du_\varepsilonilon)
Du_\varepsilonilon\cdot (Du_\varepsilonilon-Dg_\varepsilon)
\,\mathrm{d}x\mathrm{d}t \\\hat uidehat nonumber
&\le
c \bigg[\iint_{\Omega_T\cap Q_\varrho}\big(\lambda^p+|Du|^p\big)\mathrm{d}x\mathrm{d}t\bigg]^{\frac{p-1}p}
\|Du_\varepsilon-Dg_\varepsilon\|_{L^p((\Omega_\varepsilon)_T\cap Q_\varrho)}\\[1ex]
&\phantom{\le\,}+
2\big\||b|+|b_\varepsilon|\big\|_{L^{\frac{p(n+2)}{p(n+2)-n}}((\Omega_\varepsilon)_T\cap Q_\varrho)}
\|u_\varepsilon-g_\varepsilon\|_{L^{\frac{p(n+2)}{n}}((\Omega_\varepsilon)_T\cap Q_\varrho)}
\mbox{e}nd{align}
for any $\tau\in(t_o-\varrho^2,t_o)$.
By the growth properties~\mbox{e}qref{growth-a-delta} of $\mathbf{a}_\varepsilon$ and
Young's inequality for the $V_\lambda$-function \cite[Lemma
2.3]{Acerbi-Mingione-electro} we obtain for the diffusion term
\begin{align*}
&\iint_{(\Omega_\varepsilonilon)_\tau\cap Q_\varrho}
\mathbf{a}_\varepsilon(Du_\varepsilonilon) Du_\varepsilonilon\cdot (Du_\varepsilonilon-Dg_\varepsilon)
\,\mathrm{d}x\mathrm{d}t \\
&\qquad\ge
\tfrac{m}{c} \iint_{(\Omega_\varepsilonilon)_\tau\cap Q_\varrho}
|V_\lambda(Du_\varepsilonilon)|^2
\,\mathrm{d}x\mathrm{d}t\\
&\qquad\qquad-
cM \iint_{(\Omega_\varepsilonilon)_\tau\cap Q_\varrho}
\big(\lambda^2 + |Du_\varepsilonilon|^2\big)^{\frac{p-2}{2}}
|Du_\varepsilonilon||Dg_\varepsilon| \,\mathrm{d}x\mathrm{d}t \\
&\qquad\ge
\tfrac{m}{2c} \iint_{(\Omega_\varepsilonilon)_\tau\cap Q_\varrho}
|V_\lambda(Du_\varepsilonilon)|^2
\,\mathrm{d}x\mathrm{d}t -
c \iint_{\Omega_\tau\cap Q_\varrho}
|V_\lambda(Dg_\varepsilon)|^2 \,\mathrm{d}x\mathrm{d}t.
\mbox{e}nd{align*}
We join the two preceding estimates, take the supremum over
$\tau\in(t_o-\varrho^2,t_o)$ in the first term on the left-hand side and
let $\tau\uparrow t_o$ in the second one. This gives
\begin{align}\label{weak-3}
\mathbf S&+
\iint_{(\Omega_\varepsilonilon)_T\cap Q_\varrho}
|V_\lambda(Du_\varepsilonilon)|^2\,\mathrm{d}x\mathrm{d}t \\\hat uidehat nonumber
&\le
c \bigg[\iint_{\Omega_T\cap Q_\varrho}\big(\lambda^p+|Du|^p\big)\mathrm{d}x\mathrm{d}t\bigg]^{\frac{p-1}p}
\|Du_\varepsilon-Dg_\varepsilon\|_{L^p((\Omega_\varepsilon)_T\cap Q_\varrho)}\\[1ex]\hat uidehat nonumber
&\phantom{\le\,}
+
c\big\||b|+|b_\varepsilon|\big\|_{L^{\frac{p(n+2)}{p(n+2)-n}}((\Omega_\varepsilon)_T\cap Q_\varrho)}
\|u_\varepsilon-g_\varepsilon\|_{L^{\frac{p(n+2)}{n}}((\Omega_\varepsilon)_T\cap Q_\varrho)}\\
&\phantom{\le\,}+
c \iint_{\Omega_T\cap Q_\varrho}
|V_\lambda(Dg_\varepsilon)|^2 \,\mathrm{d}x\mathrm{d}t \hat uidehat nonumber
\mbox{e}nd{align}
with the abbreviation
\begin{equation*}
\mathbf S:=\sup_{\tau\in(t_o-\varrho^2,t_o)}\int_{(\Omega_\varepsilonilon\cap B_\varrho)\times\{\tau\}}
|u_\varepsilonilon-g_\varepsilon|^2 \,\mathrm{d}x.
\mbox{e}nd{equation*}
The inequalities of Gagliardo-Nirenberg and Young provide us with the estimate
\begin{align*}
\|u_\varepsilon-g_\varepsilon\|_{L^{\frac{p(n+2)}{n}}((\Omega_\varepsilon)_T\cap Q_\varrho)}
&\le
c\bigg[\mathbf S^{\frac{p}{n}}
\iint_{(\Omega_\varepsilon)_T\cap Q_\varrho}|Du_\varepsilon-Dg_\varepsilon|^p \,\mathrm{d}x\mathrm{d}t
\bigg]^{\frac{n}{p(n+2)}} \\
&\le
c\bigg[
\mathbf S
+
\iint_{(\Omega_\varepsilon)_T\cap Q_\varrho}|Du_\varepsilon-Dg_\varepsilon|^p
\,\mathrm{d}x\mathrm{d}t
\bigg]^{\frac{n+p}{p(n+2)}}.
\mbox{e}nd{align*}
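For completeness, the last inequality in the preceding display can be justified as follows: writing $X:=\iint_{(\Omega_\varepsilon)_T\cap Q_\varrho}|Du_\varepsilon-Dg_\varepsilon|^p \,\mathrm{d}x\mathrm{d}t$ and using $\mathbf S\le\mathbf S+X$ and $X\le\mathbf S+X$, we have
\begin{equation*}
\mathbf S^{\frac pn}\,X
\le
(\mathbf S+X)^{\frac pn}(\mathbf S+X)
=
(\mathbf S+X)^{\frac{n+p}{n}} ,
\end{equation*}
and raising this to the power $\frac{n}{p(n+2)}$ gives the claimed bound.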
We use this bound in~\mbox{e}qref{weak-3} and apply Young's inequality,
with the result
\begin{align*}
\mathbf S &+
\iint_{(\Omega_\varepsilonilon)_T\cap Q_\varrho}
|V_\lambda(Du_\varepsilonilon)|^2\,\mathrm{d}x\mathrm{d}t \\\hat uidehat nonumber
&\le
\tfrac12\mathbf S
+
\tfrac12\iint_{(\Omega_\varepsilonilon)_T\cap Q_\varrho}
|V_\lambda(Du_\varepsilonilon)|^2\,\mathrm{d}x\mathrm{d}t
+c
\iint_{\Omega_T\cap Q_\varrho}\big(\lambda^p+|Du|^p\big)\mathrm{d}x\mathrm{d}t\\
& \phantom{\le\,}\hat uidehat nonumber+
c \iint_{\Omega_T\cap Q_\varrho}
|V_\lambda(Dg_\varepsilon)|^2 \,\mathrm{d}x\mathrm{d}t
+
c
\big\||b|+|b_\varepsilon|\big\|_{L^{\frac{p(n+2)}{p(n+2)-n}}((\Omega_\varepsilon)_T\cap Q_\varrho)}^{\frac{p(n+2)}{p(n+2)-n-p}}.
\mbox{e}nd{align*}
Here we have also used the elementary estimate
$
|A|^p \le |V_\lambda (A)|^2 +\lambda^p
$.
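For completeness, we recall how this estimate is verified: since $V_\lambda(A)=(\lambda^2+|A|^2)^{\frac{p-2}{4}}A$, we have $|V_\lambda(A)|^2=(\lambda^2+|A|^2)^{\frac{p-2}{2}}|A|^2$. If $p\ge2$, then $(\lambda^2+|A|^2)^{\frac{p-2}{2}}\ge|A|^{p-2}$ and hence $|V_\lambda(A)|^2\ge|A|^p$. If $p<2$, then $\lambda^p\ge(\lambda^2+|A|^2)^{\frac{p-2}{2}}\lambda^2$, so that
\begin{equation*}
|V_\lambda(A)|^2+\lambda^p
\ge
(\lambda^2+|A|^2)^{\frac{p-2}{2}}\big(|A|^2+\lambda^2\big)
=
(\lambda^2+|A|^2)^{\frac p2}
\ge
|A|^p .
\end{equation*}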
The first two terms on the right-hand side can be absorbed into the
left. Finally, we use Lemma~\ref{lem:Dge} to estimate the term involving $Dg_\varepsilon$ by
\begin{align*}
\iint_{\Omega_T\cap Q_\varrho} |V_\lambda(Dg_\varepsilon)|^2 \,\mathrm{d}x\mathrm{d}t
\le
c \iint_{\Omega_T\cap Q_{2\varrho}} \big[ \lambda^p+|Du|^p+\varrho^{-p}|u|^p\big]\mathrm{d}x\mathrm{d}t.
\mbox{e}nd{align*}
Inserting this above, we
arrive at the desired estimate.
\mbox{e}nd{proof}
\begin{remark}\label{local-energy} \upshape
The same arguments yield the following local (in time) energy
estimate
\begin{align*}
& \sup_{\tau\in(t_o-\varrho^2,s)}\int_{(\Omega_\varepsilon\cap B_\varrho)\times\{\tau\}}
|u_\varepsilon-g_\varepsilon|^2 \,\mathrm{d}x +
\iint_{(\Omega_\varepsilon)_s\cap Q_\varrho}
\big|V_\lambda(Du_\varepsilon)\big|^2\,\mathrm{d}x\mathrm{d}t \\
&\qquad \le c
\iint_{\Omega_s\cap Q_{2\varrho}}\big[\lambda^p+|Du|^p+\varrho^{-p}|u|^p\big]\mathrm{d}x\mathrm{d}t
+
c
\big\||b|+|b_\varepsilon|\big\|_{L^{\frac{p(n+2)}{p(n+2)-n}}((\Omega_\varepsilon)_s\cap
Q_\varrho)}^{\frac{p(n+2)}{p(n+2)-n-p}}
\end{align*}
for any $s\in (t_o-\varrho^2, t_o]$.
$\Box$
\mbox{e}nd{remark}
\subsection{Proof of the gradient estimate}\label{sec:proof}
We recall \eqref{measure-density-Omega-delta}, \eqref{growth-a-delta}
and \eqref{b-compact-support} and the fact that $u_\varepsilon\in
C^3\big((\overline\Omega_\varepsilon)_T\cap Q_\varrho(z_o)\big)$
(see Appendix~\ref{app:smooth}). Therefore,
Proposition~\ref{prop:apriori} is applicable with $u,\mathbf{a},b,\Omega,\Theta$ replaced
by $u_\varepsilon,\mathbf{a}_{\varepsilon},b_\varepsilon,\Omega_\varepsilon,\Theta_{\varrho/2}(x_o)$. We thus
obtain the gradient estimate
\begin{align}\label{sup-est-w-delta}
&\sup_{(\Omega_\varepsilon)_T\cap Q_{\varrho/2}}
\big( 1+|Du_\varepsilon|^2\big)^{\frac{1}{2}}\nonumber\\
&\qquad \le
C\bigg[\Big(1+ \varrho^{n+2}
\| b_\varepsilon\|_{L^\sigma((\Omega_\varepsilon)_T\cap Q_\varrho)}^{\frac{(n+2)\sigma}{\sigma-n-2}}\Big)
\Xiint{-\!-}_{(\Omega_\varepsilon)_T \cap Q_{3\varrho/4}}
\big( 1+|Du_\varepsilon|^p\big)
\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac{d}p}.
\end{align}
In view of~\eqref{measure-density-Omega-delta} and
\eqref{growth-a-delta}, the constant $C$ in the preceding inequality
depends only on $n,N,p,m,M$, and $\Theta_{\varrho/2}(x_o)$, but is independent of $\varepsilon$.
The energy estimate from Lemma~\ref{lem:energy} implies
\begin{align}\label{energy-bounded}
\sup_{\tau\in(t_o-\varrho^2,t_o)}\int_{(\Omega_\varepsilon\cap B_\varrho)\times\{\tau\}}
|u_\varepsilon-g_\varepsilon|^2 \,\mathrm{d}x
+
\iint_{(\Omega_\varepsilon)_T\cap Q_\varrho}|Du_\varepsilon|^p\mathrm{d}x\mathrm{d}t
\le
C,
\end{align}
with a constant $C$ independent of $\varepsilon\in(0,1)$; note in particular that
$\|b_\varepsilon\|_{L^\sigma}$ is bounded independently of
$\varepsilon$. We combine this with the gradient sup-estimate from
\mbox{e}qref{sup-est-w-delta} replacing $(\frac\varrho2,\frac{3\varrho}4)$ by
$(\frac{3\varrho}4,\varrho)$. This yields the uniform bound
\begin{align}\label{sup-unif}
\sup_{(\Omega_\varepsilon)_T\cap Q_{3\varrho/4}} |Du_\varepsilon|\le C,
\mbox{e}nd{align}
with a constant $C$ independent of $\varepsilon$.
From the construction of the boundary values $g_\varepsilon$ it is clear that $g_\varepsilon\to u$ in
$L^p(\Omega_T\cap Q_\varrho)$ as $\varepsilon\mathrm{d}ownarrow0$.
Moreover, Lemma~\ref{lem:Dge} ensures that
\begin{align}\label{boundary-energy-est}
\iint_{\Omega_T\cap Q_\varrho}&\big[|Dg_\varepsilon|^p+|g_\varepsilon|^p\big]\mathrm{d}x\mathrm{d}t
\le
c\iint_{\Omega_T\cap Q_{2\varrho}}\big[|Du|^p+\varrho^{-p}|u|^p\big]\,\mathrm{d}x\mathrm{d}t,
\mbox{e}nd{align}
for every $\varepsilon\in(0,1)$.
We therefore deduce
\begin{equation}\label{geps-weak-conv}
\mbox{$g_\varepsilon\rightharpoondown u$ weakly in $L^p\big(t_o-\varrho^2,t_o;W^{1,p}(\Omega\cap B_\varrho,\mathbb{R}^N)\big)$ as $\varepsilon\mathrm{d}ownarrow0$.}
\mbox{e}nd{equation}
Moreover, Poincar\'e's inequality and \mbox{e}qref{boundary-energy-est} yield the bound
\begin{align}\label{ueps-Lp-bound}
&\iint_{(\Omega_\varepsilon)_T\cap Q_\varrho} |u_\varepsilon|^p\,\mathrm{d}x\mathrm{d}t\\\hat uidehat nonumber
&\qquad\le
c \iint_{(\Omega_\varepsilon)_T\cap Q_\varrho}|u_\varepsilon-g_\varepsilon|^p\,\mathrm{d}x\mathrm{d}t
+c\iint_{(\Omega_\varepsilon)_T\cap Q_\varrho}|g_\varepsilon|^p\,\mathrm{d}x\mathrm{d}t\\\hat uidehat nonumber
&\qquad\le
c\varrho^p\iint_{(\Omega_\varepsilon)_T\cap Q_\varrho}|Du_\varepsilon|^p\,\mathrm{d}x\mathrm{d}t
+c\iint_{\Omega_T\cap Q_{2\varrho}}\big[\varrho^p|Du|^p+|u|^p\big]\,\mathrm{d}x\mathrm{d}t.
\mbox{e}nd{align}
We extend $u_\varepsilon$ by zero on $Q_\varrho\setminus(\Omega_\varepsilon)_T$. Since
$u_\varepsilon=0$ on $(\partial\Omega_\varepsilon)_T\cap Q_\varrho$ in the sense of traces,
the extended maps satisfy $u_\varepsilon\in
L^p\big(t_o-\varrho^2,t_o;W^{1,p}(B_\varrho,\mathbb{R}^N)\big)$ for every
$\varepsilon\in(0,1)$. Moreover, estimates~\mbox{e}qref{energy-bounded}
and~\mbox{e}qref{ueps-Lp-bound} imply that the family
$(u_\varepsilon)_{\varepsilon\in(0,1)}$ is bounded in the latter space.
Therefore, we find $\varepsilon_i\mathrm{d}ownarrow0$ and a limit map $\tilde u\in
L^p(t_o-\varrho^2,t_o;W^{1,p}(B_\varrho,\mathbb{R}^N))$ such that
\begin{equation}\label{ueps-weak-conv}
\mbox{$u_{\varepsilon_i}\rightharpoondown \tilde u$ weakly in $L^p\big(t_o-\varrho^2,t_o;W^{1,p}(B_\varrho,\mathbb{R}^N)\big)$
as
$i\to\infty$.}
\mbox{e}nd{equation}
In view of the uniform $L^\infty{-}L^2$ bound~\mbox{e}qref{energy-bounded}
we can pass to a non-relabelled subsequence to deduce that
$\tilde u\in L^\infty(t_o-\varrho^2,t_o;L^2(B_\varrho,\mathbb{R}^N))$ and
\begin{equation*}
\mbox{$u_{\varepsilon_i}-g_{\varepsilon_i}\rightharpoondown \tilde u-u$
weakly$^\ast$ in $L^\infty\big(t_o-\varrho^2,t_o;L^2(B_\varrho,\mathbb{R}^N)\big)$ as
$i\to\infty$.}
\end{equation*}
By construction, the maps $u_\varepsilon$ agree with $g_\varepsilon$ on the lateral
boundary in the sense that
\begin{equation*}
u_\varepsilon-g_\varepsilon\in L^p\big(t_o-\varrho^2,t_o;W^{1,p}_0(B_\varrho,\mathbb{R}^N)\big).
\mbox{e}nd{equation*}
Because of the weak convergences~\mbox{e}qref{geps-weak-conv}
and~\mbox{e}qref{ueps-weak-conv}, this boundary condition is preserved in
the limit $\varepsilon_i\mathrm{d}ownarrow0$, from which we deduce
\begin{equation}\label{boundary-condition-1}
\tilde u-u\in L^p\big(t_o-\varrho^2,t_o;W^{1,p}_0(B_\varrho,\mathbb{R}^N)\big).
\mbox{e}nd{equation}
Now, let $\varepsilon_o>0$ and consider the outer parallel set
$O_{2\varepsilon_o}:=\big\{x\in B_\varrho\colon \mathrm{dist}(x,\Omega)<2\varepsilon_o\big\}$.
Since $\Omega_\varepsilon\subset O_{2\varepsilon_o}$ for every $\varepsilon\in(0,\varepsilon_o)$,
we have
\begin{equation*}
\mbox{$u_\varepsilon-g_\varepsilon=0\;$ a.e.~on $Q_\varrho\setminus (O_{2\varepsilon_o})_T$, for every $\varepsilon\in(0,\varepsilon_o)$.}
\mbox{e}nd{equation*}
Also this property is preserved in the limit $\varepsilon_i\mathrm{d}ownarrow0$,
which implies that $\tilde u=u$ a.e.~on $Q_\varrho\setminus
(O_{2\varepsilon_o})_T$ for every $\varepsilon_o>0$. In turn, we conclude
\begin{equation}\label{boundary-condition-2}
\mbox{$\tilde u=u\;$ a.e.~on $Q_\varrho\setminus\Omega_T$. }
\mbox{e}nd{equation}
Combining the properties~\mbox{e}qref{boundary-condition-1}
and~\mbox{e}qref{boundary-condition-2}, we infer the desired boundary condition
\begin{equation*}
\tilde u\in u + L^p\big(t_o-\varrho^2,t_o;W^{1,p}_0(\Omega\cap B_\varrho,\mathbb{R}^N)\big)
\mbox{e}nd{equation*}
for the limit map $\tilde u$.
Our next goal is to show that the limit map $\tilde u$ attains the expected initial
values at the initial time $t_o-\varrho^2$. To this end, we exploit the
lower semicontinuity of the $L^2$-norm with respect to weak
convergence and the local (in time) energy estimate from Remark \ref{local-energy} to estimate
\begin{align*}
\tfrac1h\int_{t_o-\varrho^2}^{t_o-\varrho^2+h} &
\|\tilde u(t)-u(t)\|^2_{L^2(\Omega\cap B_\varrho)}\mathrm{d}t\\
&\le
\liminf_{\varepsilon\mathrm{d}ownarrow0}
\tfrac1h\int_{t_o-\varrho^2}^{t_o-\varrho^2+h}
\|u_\varepsilon(t)-g_\varepsilon(t)\|^2_{L^2(\Omega\cap B_\varrho)} \mathrm{d}t\\
&\le
c\iint_{(\Omega\cap
B_{2\varrho})\times (t_o-\varrho^2 , t_o-\varrho^2+h)}\big[\lambda^p+|Du|^p+\varrho^{-p}|u|^p\big]\mathrm{d}x\mathrm{d}t\\
&\phantom{\le\,} +
c\big\|b\big\|_{L^{\frac{p(n+2)}{p(n+2)-n}}((\Omega\cap B_\varrho)\times(t_o-\varrho^2,t_o-\varrho^2+h))}^{\frac{p(n+2)}{p(n+2)-n-p}},
\mbox{e}nd{align*}
for every $h\in(0,\varrho^2)$. Since the right-hand side of the last inequality converges to 0 as $h\mathrm{d}ownarrow 0$ we infer
$$
\lim_{h\mathrm{d}ownarrow 0} \tfrac1h
\int_{t_o-\varrho^2}^{t_o-\varrho^2+h}
\|\tilde u(t)-u(t)\|^2_{L^2(\Omega\cap B_\varrho)}\mathrm{d}t =0.
$$
Since $u\in C^0([0,T];L^2(\Omega,\mathbb{R}^N))$
by assumption, this implies that $\tilde u=u$ on
$(\Omega\cap B_\varrho)\times \{t_o-\varrho^2\}$ in the usual $L^2$-sense.
At this stage it remains to verify the differential equation for the limit map
$\tilde u$.
For a fixed compact set $K\Subset \Omega_T\cap Q_{\varrho}$, the interior
$C^{1,\alpha}$-estimates from \cite[Chapter IX, Theorem~1.1, Chapter~VIII, Theorems~5.1 and~5.2']{DB} and the uniform energy bound~\mbox{e}qref{energy-bounded} imply
\begin{equation}\label{C1alpha-bound}
\|Du_\varepsilon\|_{C^{0,\alpha}(K)}\le C
\end{equation}
for every $\varepsilon\in(0,1)$, for some H\"older exponent $\alpha\in(0,1)$ and some constant $C>0$,
both independent of $\varepsilon$. This allows us to apply the Arzel\`a-Ascoli theorem
to conclude that, after passing to a further (non-relabelled) subsequence, $Du_{\varepsilon_i}$ converges uniformly on
compact subsets of $\Omega_T\cap Q_{\varrho}$; by~\eqref{ueps-weak-conv} the limit is $D\tilde u$. In particular, we have
$Du_{\varepsilon_i}\to D\tilde u$ pointwise in $\Omega_T\cap Q_{\varrho}$.
In view of the uniform gradient bound on compact subsets contained in
\mbox{e}qref{C1alpha-bound}
and the property~\mbox{e}qref{a-delta-convergence} of the regularized coefficients,
we can use dominated convergence to pass to the limit in the weak
formulation of the system~\mbox{e}qref{eq-ueps}.
We conclude
that the limit map $\tilde u$ is a weak solution to the system
\mbox{e}qref{system} on $\Omega_T\cap Q_{\varrho}$. Moreover, we know that $\tilde u=u$ on $\partial_{p}(\Omega_T\cap Q_{\varrho})$. By uniqueness of solutions this shows that $\tilde u\mbox{e}quiv u$ in $\Omega_T\cap Q_{\varrho}$.
Moreover, due
to the sup-bound for the spatial gradient \mbox{e}qref{sup-unif} we may
apply the dominated convergence theorem to get
\begin{equation}
\label{ueps-strong-conv}
\mbox{$Du_{\varepsilon_i}\to D\tilde u=Du\;$ strongly in $L^p(Q_{3\varrho/4},\mathbb{R}^{Nn})$ in the limit
$\varepsilon_i\downarrow0$,}
\end{equation}
where we extended $u_{\varepsilon_i}$ by zero on
$Q_{3\varrho/4}\setminus(\Omega_{\varepsilon_i})_T$.
This strong convergence enables us to
pass to the limit $\varepsilon_i\mathrm{d}ownarrow 0$ on the right-hand side of
\eqref{sup-est-w-delta}. Note that the construction of $b_\varepsilon$ ensures the convergence $\|
b_\varepsilon\|_{L^\sigma((\Omega_\varepsilon)_T\cap Q_\varrho)}\to \|
b\|_{L^\sigma(\Omega_T\cap Q_\varrho)}$. On
the left-hand side of \mbox{e}qref{sup-est-w-delta} we may pass to the limit due to the pointwise convergence. In this way, we obtain
\begin{align}\label{sup-est-tilde-u}
&\sup_{\Omega_T\cap Q_{\varrho/2}}
\big( 1+|D u|^2\big)^{\frac{1}{2}}\nonumber\\
&\qquad \le
C\bigg[\Big(1+ \varrho^{n+2}
\| b\|_{L^\sigma(\Omega_T\cap Q_\varrho)}^{\frac{(n+2)\sigma}{\sigma-n-2}}\Big)
\Xiint{-\!-}_{\Omega_T \cap Q_{3\varrho/4}}
\big( 1+|D u|^p\big)
\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac{d}p}.
\end{align}
This yields the asserted
sup-estimate for the gradient of $u$, and
completes the proof of Theorem~\ref{thm:main}.
\qed
\appendix
\section{Properties of the regularized coefficients}\label{app:reg}
Here, we provide proofs for the properties of the regularized coefficients $\mathbf a_{\varepsilon}$ stated in Subsection~\ref{section:reg-coeff}.
The first line in \mbox{e}qref{growth-a-delta} follows directly from the
definition of~$\mathbf c_\varepsilon$ and the growth
condition~\mbox{e}qref{bounds-a} for $\mathbf{a}$. The constant $c$ can be
chosen in the form $c(p)\max\{ 1,\mathrm{e}^{p-2}\}$ with the constant $c(p)$ from
\mbox{e}qref{bounds-a}. Concerning the ellipticity condition, we observe that
\begin{equation}\label{adelta}
\mathbf a_{\varepsilon}'(r)r+\mathbf a_{\varepsilon}(r)
=
\mathbf c_\varepsilon'(\log r)+\mathbf c_\varepsilon(\log r)
=
\big(\phi_\varepsilon\ast(\mathbf c'+\mathbf c)\big)(\log r)
\quad\mbox{for any $r>0$. }
\mbox{e}nd{equation}
For the function $\mathbf c'+\mathbf c$ appearing on
the right-hand side, we have in the case $\mu=0$ that
\begin{align*}
&\mathbf c'(s)+\mathbf c(s)\\
&\quad=
\frac{1}{\varepsilon^2+\mathrm{e}^{2s}}
\bigg[\mathrm{e}^{2s}
\Big[
\sqrt{\varepsilon^2+\mathrm{e}^{2s}}\,\mathbf{a}'\big(\sqrt{\varepsilon^2+\mathrm{e}^{2s}}\big)
+
\mathbf{a}\big(\sqrt{\varepsilon^2 +\mathrm{e}^{2s}}\big)
\Big]
+
\varepsilon^2\mathbf{a}\big(\sqrt{\varepsilon^2+\mathrm{e}^{2s}}\big)\bigg]
\mbox{e}nd{align*}
for any $s\in\mathbb{R}$.
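For the reader's convenience we indicate the computation: recalling that in the case $\mu=0$ we have $\mathbf c(s)=\mathbf{a}\big(\sqrt{\varepsilon^2+\mathrm{e}^{2s}}\big)$ (cf.~Subsection~\ref{section:reg-coeff}), and abbreviating $\omega:=\sqrt{\varepsilon^2+\mathrm{e}^{2s}}$, so that $\omega'(s)=\mathrm{e}^{2s}/\omega$, the chain rule gives
\begin{equation*}
\mathbf c'(s)+\mathbf c(s)
=
\mathbf{a}'(\omega)\,\frac{\mathrm{e}^{2s}}{\omega}+\mathbf{a}(\omega)
=
\frac{1}{\omega^2}\Big[\mathrm{e}^{2s}\big(\omega\,\mathbf{a}'(\omega)+\mathbf{a}(\omega)\big)+\varepsilon^2\mathbf{a}(\omega)\Big],
\end{equation*}
which is exactly the expression displayed above.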
Using the lower bounds from~\mbox{e}qref{assumption:a'} and
\mbox{e}qref{bounds-a}, we deduce
\begin{align}\label{cdelta}
\mathbf c'(s)+\mathbf c(s)
&\ge
c(p)^{-1}m(\varepsilon^2 +\mathrm{e}^{2s})^{\frac{p-2}2}.
\mbox{e}nd{align}
On the other hand, in the case $\mu>0$ we have
\begin{align}\label{cdelta_}
\mathbf c'(s)+\mathbf c(s)
&=
\mathbf{a}'(\mathrm{e}^{s})\mathrm{e}^{s}
+\mathbf{a}(\mathrm{e}^{s})
\ge
m(\mu^2 +\mathrm{e}^{2s})^{\frac{p-2}2}
\mbox{e}nd{align}
for any $s\in\mathbb{R}$.
Similarly as above, we infer from
\mbox{e}qref{adelta}, \mbox{e}qref{cdelta}, \mbox{e}qref{cdelta_} and the definition of $\lambda$ that
\begin{equation*}
\mathbf a_{\varepsilon}'(r)r+\mathbf a_{\varepsilon}(r)
\ge
c(p)^{-1}m\min \big\{ 1,\mathrm{e}^{-(p-2)}\big\}\big(\lambda^2
+r^2\big)^\frac{p-2}{2}
\quad\mbox{for any }r>0.
\mbox{e}nd{equation*}
This yields the lower bound in~\mbox{e}qref{growth-a-delta}$_2$.
Similarly, by applying the upper bound from \mbox{e}qref{assumption:a'} (taking also into account the fact
that $\mathbf a$ is non-negative, cf.~\mbox{e}qref{bounds-a}), we obtain
\begin{align*}
(\phi_\varepsilon\ast\mathbf{c}')(s)
\le
c(p) M\max\{1,\mathrm{e}^{p-2}\} \big(\lambda^2+\mathrm
e^{2s}\big)^{\frac{p-2}{2}}
\quad\mbox{for any }s\in\mathbb{R}.
\mbox{e}nd{align*}
From this we deduce
\begin{align*}
\mathbf{a}_{\varepsilon}'(r)r
&=
(\phi_\varepsilon\ast\mathbf{c}')(\log r)
\le
c(p)M\max\{1,\mathrm{e}^{p-2}\} \big(\lambda^2+r^2\big)^{\frac{p-2}{2}}
\quad\mbox{for any }r>0,
\mbox{e}nd{align*}
which implies the asserted upper bound in~\mbox{e}qref{growth-a-delta}$_2$.
At this stage it remains to derive the estimate for the second derivative $\mathbf a_\varepsilon''$. To this
end, we compute
\begin{equation*}
\mathbf a_{\varepsilon}''(r)r^2
=
\mathbf{c}_\varepsilon''(\log r)-\mathbf{c}_\varepsilon'(\log r)
=
\big((\phi_\varepsilon'-\phi_\varepsilon)\ast \mathbf c'\big)(\log r).
\mbox{e}nd{equation*}
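This identity follows from the chain rule and elementary properties of mollification: recalling from Subsection~\ref{section:reg-coeff} that $\mathbf a_{\varepsilon}(r)=\mathbf c_\varepsilon(\log r)$ with $\mathbf c_\varepsilon=\phi_\varepsilon\ast\mathbf c$, we get $\mathbf a_{\varepsilon}'(r)=r^{-1}\mathbf c_\varepsilon'(\log r)$ and hence $\mathbf a_{\varepsilon}''(r)r^2=\mathbf c_\varepsilon''(\log r)-\mathbf c_\varepsilon'(\log r)$, while
\begin{equation*}
\mathbf c_\varepsilon''-\mathbf c_\varepsilon'
=
(\phi_\varepsilon\ast\mathbf c)''-(\phi_\varepsilon\ast\mathbf c)'
=
\phi_\varepsilon'\ast\mathbf c'-\phi_\varepsilon\ast\mathbf c'
=
(\phi_\varepsilon'-\phi_\varepsilon)\ast\mathbf c' .
\end{equation*}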
Then we use~\mbox{e}qref{assumption:a'} and \mbox{e}qref{bounds-a} to derive in the case $\mu=0$ the bound
\begin{align*}
|\mathbf c'(s)|
&=
\bigg|\mathbf{a}'\big(\sqrt{\varepsilon^2 +\mathrm{e}^{2s}}\big)\frac{\mathrm{e}^{2s}}{\sqrt{\varepsilon^2 +\mathrm{e}^{2s}}}\bigg|\\\hat uidehat nonumber
&\le
\Big|\sqrt{\varepsilon^2+\mathrm{e}^{2s}}\,\mathbf{a}'\big(\sqrt{\varepsilon^2
+\mathrm{e}^{2s}}\big)+\mathbf{a}\big(\sqrt{\varepsilon^2+\mathrm{e}^{2s}}\big)\Big|
+\Big|\mathbf{a}\big(\sqrt{\varepsilon^2
+\mathrm{e}^{2s}}\big)\Big|\\
&\le
\big( 1+c(p)\big)M \big(\varepsilon^2 +\mathrm e^{2s}\big)^\frac{p-2}{2}
\le
2c(p)M \big(\varepsilon^2 +\mathrm e^{2s}\big)^\frac{p-2}{2},
\mbox{e}nd{align*}
while in the case $\mu>0$ we obtain
\begin{align*}
|\mathbf c'(s)|
&=
\big|\mathbf{a}'(\mathrm{e}^{s})\mathrm{e}^{s}\big|
\le
\big|\mathrm{e}^{s}\,\mathbf{a}'(\mathrm{e}^{s}) +
\mathbf{a}(\mathrm{e}^{s})\big| +
\big|\mathbf{a}(\mathrm{e}^{s})\big|\\
&\le
\big( 1+c(p)\big)M \big(\mu^2 +\mathrm e^{2s}\big)^\frac{p-2}{2}
\le
2c(p)M \big(\mu^2 +\mathrm e^{2s}\big)^\frac{p-2}{2}.
\mbox{e}nd{align*}
Hence, in both cases we have
\begin{align}\label{cprime-bound}
|\mathbf c'(s)|
\le
2c(p)M \big(\lambda^2 +\mathrm e^{2s}\big)^\frac{p-2}{2}.
\mbox{e}nd{align}
From this we deduce, similarly as above, that
\begin{align*}
|\mathbf a_{\varepsilon}''(r)|r^2
&=
|((\phi_\varepsilon'-\phi_\varepsilon)\ast\mathbf c')(s)|\\
&\le
2c(p)M\max\{1,\mathrm{e}^{p-2}\} \big(\lambda^2+\mathrm
e^{2s}\big)^{\frac{p-2}{2}}\int_\mathbb{R}|\phi_\varepsilon'-\phi_\varepsilon|\,\mathrm{d} s\\
&\le
2c(p)M
\Big( \tfrac{2}{\varepsilon}\|\phi'\|_{L^\infty}+1\Big)
\max\big\{1,\mathrm{e}^{p-2}\big\} \big(\lambda^2+\mathrm
e^{2s}\big)^{\frac{p-2}{2}}.
\mbox{e}nd{align*}
The proof of the claim~\mbox{e}qref{growth-a-delta} is thus complete.
Finally, we analyze the convergence of $\mathbf a_{\varepsilon} (r)$ in the
limit $\varepsilon\mathrm{d}ownarrow 0$ and thereby prove
\mbox{e}qref{a-delta-convergence}.
For any $s\in\mathbb{R}$ we estimate
\begin{align*}
\big| \mathbf c_\varepsilon (s)- \mathbf c (s)\big|
&\le
\sup_{s-\varepsilon<r<s+\varepsilon}
\,\big|\mathbf{c}(r)-\mathbf{c}(s)\big|
=\sup_{s-\varepsilon<r<s+\varepsilon}
\bigg| \int_s^r \mathbf{c}'(\tau)\,\mathrm{d}\tau\bigg|
\\[1ex]
&\le
2c(p)M
\sup_{s-\varepsilon<r<s+\varepsilon}\bigg| \int_s^r \big(\varepsilon^2 +\mathrm e^{2\tau}\big)^\frac{p-2}{2}\mathrm{d}\tau\bigg|
\\
&\le
2c(p)M \varepsilon\max\big\{ 1,\mathrm{e}^{p-2}\big\}
\big(\varepsilon^2 + \mathrm e^{2s}\big)^{\frac{p-2}{2}},
\mbox{e}nd{align*}
where the second-to-last step follows from~\mbox{e}qref{cprime-bound}.
This gives the desired estimate
\mbox{e}qref{a-delta-convergence}.
\section{Regularity up to the boundary}\label{app:smooth}
Here, we show that solutions to the regularized
problem \mbox{e}qref{eq-ueps} are smooth up to the boundary as claimed at the end of Section~\ref{sec:regularization}.
To this end, we follow the strategy
of Banerjee \& Lewis \cite[Appendix.~Proof of
(2.7)]{BanerjeeLewis:2014} to flatten the boundary and then to reduce
the problem of boundary regularity to the interior case by a
reflection argument.
\subsection{Schauder estimates for linear parabolic systems}
In this section, we explain Schauder estimates for linear parabolic systems of the type
\begin{equation}\label{A:linear-system}
\partial_t u^i -\sum_{\al,\be=1}^n\sum_{j=1}^N\big[A^{ij}_{\alpha\beta}u^j_{x_\beta}\big]_{x_\alpha}
=
\sum_{\al=1}^n\sum_{j=1}^Nb^{ij}_{\alpha}u^j_{x_\alpha}+\sum_{\al=1}^n(f_\alpha^i)_{x_\alpha}+c^i
\qquad\mbox{in $\Om_T$,}
\mbox{e}nd{equation}
for $i=1,2,\mathrm{d}ots, N$,
where the coefficients $A^{ij}_{\alpha\beta}\colon\Om_T\to\mathbb{R}$ satisfy for some $0<\nu\le L$ the ellipticity and boundedness condition
\begin{equation}\label{parabolicity}
\nu |\xi|^2\le \sum_{\al,\be=1}^n\sum_{i,j=1}^N A^{ij}_{\alpha\be}\xi^i_\alpha\xi^j_\be\le L|\xi|^2\quad\mbox{for all $\xi\in\rr^{Nn}$.}
\end{equation}
We will assume that the functions $c^i\colon\Omega_T\to\mathbb{R}$ belong to a parabolic Campanato-Morrey space, which is defined as follows.
\begin{definition}\label{def-Morrey}
With $q \geq 1$, $\theta \ge 0$, a measurable map $w\colon \Omega_T\to \mathbb{R}^k$, $k\ge 1$, belongs
to the (parabolic) Morrey space $L^{q,\theta}(\Omega_T,\mathbb{R}^k)$ if and only if
\begin{equation*}
\|w\|_{L^{q,\theta}(\Omega_T,\mathbb{R}^k)}^q
:=
\sup_{z_o\in\Omega_T,\, 0<\varrho<\mathrm{diam}(\Omega_T)}
\varrho^{-\theta} \iint_{\Omega_T\cap Q_\varrho(z_o)} |w|^q \,\mathrm{d}x\mathrm{d}t
<
\infty.
\end{equation*}
\end{definition}
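As a guideline for the reader, we note a standard sufficient condition for membership in these spaces; the following computation is an elementary consequence of H\"older's inequality and is only included for orientation. If $w\in L^q(\Omega_T,\mathbb{R}^k)$ with $q\ge2$, then for every $z_o$ and $\varrho$ as above
\begin{equation*}
\iint_{\Omega_T\cap Q_\varrho(z_o)}|w|^2\,\mathrm{d}x\mathrm{d}t
\le
\bigg[\iint_{\Omega_T}|w|^q\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac2q}|Q_\varrho|^{1-\frac2q}
\le
c(n)\,\|w\|_{L^q(\Omega_T)}^2\,\varrho^{(n+2)(1-\frac2q)} ,
\end{equation*}
so that $L^q(\Omega_T,\mathbb{R}^k)\subset L^{2,\theta}(\Omega_T,\mathbb{R}^k)$ whenever $\theta\le(n+2)\big(1-\frac2q\big)$. In particular, the assumption $c^i\in L^{2,n+2\mu}(\Om_T)$ in Theorem~\ref{Thm:Schauder} below is satisfied whenever $c^i\in L^q(\Om_T)$ for some $q\ge\frac{n+2}{1-\mu}$.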
By $C^{0,\mu}$ we mean H\"older continuity with respect to the parabolic metric, i.e.~with H\"older exponent $\mu$ in space and $\frac{\mu}{2}$ in time. With these notions at hand we state the following parabolic Schauder estimates, which can be proved by standard comparison and freezing techniques, cf. \cite{Campanato, Misawa, Schlag}.
\begin{theorem}\label{Thm:Schauder}
Suppose $A^{ij}_{\alpha\be}$ and $f_\alpha^i$ are in $C^{0,\mu}(\Om_T)$ for some $\mu\in(0,1)$,
whereas $b^{ij}_{\alpha}\in L^{\infty}(\Om_T)$ and $c^i\in L^{2,\theta}(\Om_T)$ for $\theta:=n+2\mu$.
Let $u$ be a weak solution to \eqref{A:linear-system} under the assumption \eqref{parabolicity}.
Then $Du\in C^{0,\mu}_{\loc}(\Om_T)$ and moreover for any compact set $K\Subset\Om_T$ we have
\begin{equation*}
[Du]_{\mu,K}\le C\big[\|Du\|_{L^2(\Om_T)}+M\big],
\end{equation*}
where $C$ depends on $n$, $\nu$, $L$ and $\mathrm{dist}(K,\partial_{p}\Om_T)$,
and $M$ depends on the norms of $A^{ij}_{\alpha\be}$, $b^{ij}_{\alpha}$, $f_\alpha^i$, and $c^i$
in their corresponding spaces.
\subsection{Flattening of the boundary}
Before we start with the actual construction of local boundary coordinates, we introduce a few abbreviations.
By $B_\delta^{(n-1)}\equiv B_\delta^{(n-1)}(0)$ we denote the ball of radius $\delta>0$ centered at the origin in $\mathbb{R}^{n-1}$.
Then, for $\eta>0$ we define ${\mathcal C}_{\delta,\eta}:=B_\delta^{(n-1)}\times (-\eta,\eta)$, and similarly ${\mathcal C}_{\delta,\eta}^{+}:= B_\delta^{(n-1)}\times (0,\eta)$
and ${\mathcal C}_{\delta,\eta}^{-}:= B_\delta^{(n-1)}\times (-\eta,0)$. Cylinders in $\mathbb{R}^{n+1}$ of height $T>0$
with base ${\mathcal C}_{\delta,\eta},\,{\mathcal C}_{\delta,\eta}^\pm $ are denoted by ${\mathcal Q}_{\delta,\eta}$, ${\mathcal Q}_{\delta,\eta}^\pm$, so that
$ {\mathcal Q}_{\delta,\eta}:= {\mathcal C}_{\delta,\eta}\times (0,T)$.
Since $\partial\Omega_\varepsilon$ is a smooth closed $(n-1)$-dimensional
submanifold of $\mathbb{R}^n$, it can locally be written as the graph of a smooth function $\phi\in C^\infty(B_\delta^{(n-1)})$ after a suitable rigid motion.
More precisely, for any point
$y_o\in \partial \Omega_\varepsilon\cap B_\varrho(x_o)$,
there is a neighbourhood $N_o$ of $y_o$ so that
$\Omega_\varepsilon\cap N_o={\mathcal P}hi({\mathcal C}_{\delta,\eta}^-)$ with the
parametrization
${\mathcal P}hi\colon {\mathcal C}_{\delta,\eta}\to N_o\subset\mathbb{R}^n$
defined by
\begin{equation}\label{def:Phi}
{\mathcal P}hi(y',y_n):= \big(y',\phi(y')\big)+ \nu\big( y',\phi(y')\big) y_n,\quad \mbox{for $y'\in B_\delta^{(n-1)}$ and $y_n\in(-\eta,\eta)$,}
\end{equation}
where $\nu\colon \partial\Omega_\varepsilon\to\mathbb{R}^n$ denotes the outward unit normal on
$\partial\Omega_\varepsilon$. By another rigid motion we can achieve that $y_o=0$ and
$\nu(0)=e_n$.
The inverse mapping
$
{\mathcal P}si:={\mathcal P}hi^{-1}\colon N_o
\to {\mathcal C}_{\mathrm{d}elta,\mbox{e}ta}
$
is given by
\begin{equation}\label{eq:Psi}
{\mathcal P}si (x)= \Big(x_1-\boldsymbol d_{x_1}(x)\boldsymbol d(x),\mathrm{d}ots,
x_{n-1}-\boldsymbol d_{x_{n-1}}(x)\boldsymbol d(x),
\boldsymbol d(x)\Big)
\qquad\mbox{for }x\in N_o,
\mbox{e}nd{equation}
where $\boldsymbol d$ denotes the signed distance to
$\partial\Omega_\varepsilon$.
A straightforward computation yields
\begin{equation}\label{DPsi(0)}
D{\mathcal P}si(0)=\mathrm{id}_{\mathbb{R}^{n}},
\mbox{e}nd{equation}
and
\begin{equation}\label{def:Q}
\mathbf Q(x)
:=
D{\mathcal P}si(x)^t\cdot D{\mathcal P}si(x)
=
\left[
\begin{matrix}
\big( D{\mathcal P}si_\alpha(x)\cdot D{\mathcal P}si_\beta(x)\big)_{1\le \alpha,\beta\le n-1}& 0 \\[5pt]
0 & 1 \\
\mbox{e}nd{matrix}
\right].
\mbox{e}nd{equation}
For a more detailed derivation of these properties, we refer to
\cite[Section~5.1]{BDMS}.
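For orientation, we sketch the computation behind \eqref{DPsi(0)}; it only uses the normalization $y_o=0$, $\nu(0)=e_n$ and the standard properties of the signed distance (with the sign convention for which the above map inverts the parametrization~\eqref{def:Phi}), namely $\boldsymbol d(0)=0$ and $D\boldsymbol d(0)=\nu(0)=e_n$. From \eqref{eq:Psi} we obtain, for $1\le i\le n-1$ and $1\le j\le n$,
\begin{equation*}
\partial_{x_j}\Psi_i(0)
=
\delta_{ij}-\boldsymbol d_{x_ix_j}(0)\,\boldsymbol d(0)-\boldsymbol d_{x_i}(0)\,\boldsymbol d_{x_j}(0)
=
\delta_{ij},
\qquad
\partial_{x_j}\Psi_n(0)=\boldsymbol d_{x_j}(0)=\delta_{nj},
\end{equation*}
which is precisely the identity $D\Psi(0)=\mathrm{id}_{\mathbb{R}^{n}}$.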
In what follows, we use the short-hand notations
\begin{equation}\label{def:Q2}
\mathbf{Q}_x(\xi,\zeta):=\sum_{i=1}^N\sum_{\alpha,\beta=1}^n
\mathbf{Q}_{\alpha\beta}(x)\xi_\alpha^i\zeta_\beta^i
\qquad\mbox{and}\qquad
|\xi|_{\mathbf{Q}_x}
:=\sqrt{ \mathbf Q(x)(\xi,\xi)},
\mbox{e}nd{equation}
for matrices $\xi,\zeta\in\mathbb{R}^{Nn}$.
Now we define
\begin{equation}
\hat u(y,t):=u_\varepsilon \big({\mathcal P}hi (y),t\big)
\quad\iff\quad
u_\varepsilon(x,t)=\hat u\big({\mathcal P}si (x),t\big),
\mbox{e}nd{equation}
for $y\in {\mathcal C}_{\mathrm{d}elta,\mbox{e}ta}^-$ and $t\in[0,T]$, and analogously
\begin{equation*}
\hat uidehat\varphi(y,t):=\varphi\big({\mathcal P}hi (y),t\big)
\quad\iff\quad \varphi(x,t)=\hat uidehat\varphi\big({\mathcal P}si (x),t\big)
\mbox{e}nd{equation*}
for any $\varphi\in L^p(0,T;W^{1,p}_0(\Omega_\varepsilon,\mathbb{R}^N))$.
Then,
$\hat u\in L^p(0,T;W^{1,p}( {\mathcal C}_{\mathrm{d}elta,\mbox{e}ta}^-))$ and
$\hat u=0$ in the sense of traces on $B_\mathrm{d}elta^{(n-1)}\times \{0\}\times(0,T)$.
For the
derivatives in spatial directions, we have
\begin{align*}
Du_\varepsilon(x,t)\cdot D\varphi(x,t)
&=
\mathbf Q_x\big( D\hat u \big({\mathcal P}si (x),t\big),D\hat uidehat \varphi \big({\mathcal P}si (x),t\big)\big).
\mbox{e}nd{align*}
Moreover, for a.e.~$x\in N_o$ and $t\in(0,T)$ we have
\begin{align*}
u_\varepsilon(x,t)\cdot \partial_t\varphi(x,t)
=
\hat u({\mathcal P}si(x),t)\cdot \partial_t\hat uidehat\varphi({\mathcal P}si(x),t).
\mbox{e}nd{align*}
Using the two preceding formulae and applying the transformation
$x={\mathcal P}hi(y)$ on a fixed time slice, we infer
\begin{align*}\hat uidehat nonumber
&\int_{{\mathcal P}hi ({\mathcal C}_{\mathrm{d}elta,\mbox{e}ta}^- )\times\{t\}}\big[u_\varepsilon\cdot\partial_t\varphi
-
\mathbf a_\varepsilon\big( |Du_\varepsilon|\big) Du_\varepsilon\cdot D\varphi\big] \,\mathrm{d}x\\\hat uidehat nonumber
&\qquad=
\int_{{\mathcal C}_{\mathrm{d}elta,\mbox{e}ta}^-\times\{t\}}
\big[\hat u\cdot\partial_t\hat uidehat\varphi
-
\mathbf a_\varepsilon\big( |D\hat u|_{\mathbf{Q}_{\mathcal P}hi}\big)
\mathbf Q_{\mathcal P}hi\big( D\hat u ,D\hat uidehat \varphi \big)\big] J_n{\mathcal P}hi\, \mathrm{d}y,
\mbox{e}nd{align*}
for a.e.~$t\in(0,T)$, where $J_n{\mathcal P}hi:=|\mathrm{d}et D{\mathcal P}hi|$ denotes the Jacobian of ${\mathcal P}hi$. Integrating this identity with respect to $t\in(0,T)$,
we obtain the left-hand side of~\mbox{e}qref{eq-ueps}.
Diminishing $\mbox{e}ta>0$ if necessary, we can achieve that the right
hand side $b_\varepsilon$ in \mbox{e}qref{eq-ueps} vanishes in a tubular neighborhood of $\partial\Omega_\varepsilon\times (0,T)$ by construction, cf.~\mbox{e}qref{b-compact-support}.
Consequently,~\mbox{e}qref{eq-ueps} turns into
\begin{equation}\label{equation:u-hat}
\iint_{{\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}^-}\Big[\hat u\cdot\partial_t\hat uidehat\varphi
-
\mathbf a_\varepsilon\big( |D\hat u|_{\mathbf{Q}_{\mathcal P}hi}\big)
\mathbf Q_{\mathcal P}hi\big( D\hat u ,D\hat uidehat \varphi \big)\Big] J_n{\mathcal P}hi\, \mathrm{d}y\mathrm{d}t
=
0.
\mbox{e}nd{equation}
In this equation, the testing function $\hat uidehat\varphi$ can be chosen
as an arbitrary smooth function with compact support in
${\mathcal Q}_{\delta,\eta}^-$. By an approximation argument, we
can also verify it for every $\widehat\varphi\in L^p\big(0,T;W^{1,p}_0(
{\mathcal C}_{\delta,\eta}^-)\big)$ with $\widehat\varphi_t\in
L^2({\mathcal Q}_{\delta,\eta}^-)$ and $\widehat\varphi(0)=0=\widehat\varphi(T)$.
Next, for an arbitrary testing function $\psi\in L^p\big(0,T;W^{1,p}_0(
{\mathcal C}_{\mathrm{d}elta,\mbox{e}ta}^-)\big)$ with $\psi_t\in L^2({\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}^-)$ and $\psi(0)=0=\psi(T)$, we test \mbox{e}qref{equation:u-hat}
with
$\hat uidehat\varphi := (J_n{\mathcal P}hi)^{-1}\psi$, which is admissible
since $J_n{\mathcal P}hi$ is a positive Lipschitz function. This leads to
\begin{align}\label{transformed-Euler}\hat uidehat nonumber
&\iint_{{\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}^-}\Big[\hat u\cdot\partial_t\psi
-
\mathbf a_\varepsilon\big( |D\hat u|_{\mathbf{Q}_{\mathcal P}hi}\big)
\mathbf Q_{\mathcal P}hi\big( D\hat u ,D\psi \big) \Big]\, \mathrm{d}y\mathrm{d}t\\
&\qquad=
-\iint_{{\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}^-}\mathbf a_\varepsilon\big( |D\hat u|_{\mathbf{Q}_{\mathcal P}hi}\big)
\mathbf Q_{\mathcal P}hi\big(D\hat u ,\psi\otimes D[J_n{\mathcal P}hi]\big)(J_n{\mathcal P}hi)^{-1}\mathrm{d}y\mathrm{d}t,
\mbox{e}nd{align}
for every
$\psi\in L^p\big(0,T;W^{1,p}_0({\mathcal C}_{\mathrm{d}elta,\mbox{e}ta}^-)\big)$ with $\psi_t\in
L^2({\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}^-)$ and $\psi(0)=0=\psi(T)$.
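For the reader's convenience, we record the product-rule computation behind \eqref{transformed-Euler}. Writing $J:=J_n\Phi$ (which does not depend on $t$), the choice $\widehat\varphi=J^{-1}\psi$ gives $\partial_t\widehat\varphi=J^{-1}\partial_t\psi$ and
\begin{equation*}
D\widehat\varphi
=
J^{-1}D\psi-J^{-2}\,\psi\otimes DJ .
\end{equation*}
Inserting this into \eqref{equation:u-hat}, the Jacobian factor $J$ from the area element cancels against $J^{-1}$ in the first two terms, and the remaining contribution is exactly the right-hand side of \eqref{transformed-Euler}.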
\subsection{Reflection and reduction to the interior}
Next, we extend $\mathbf Q_{\mathcal P}hi$ and $J_n{\mathcal P}hi$ to
${\mathcal C}_{\mathrm{d}elta,\mbox{e}ta}^+$ by an even reflection across
$\Gamma_\mathrm{d}elta:=B_\mathrm{d}elta^{(n-1)}\times\{0\}$. To this aim we define
$$
\mbox{$\mathbf Q_{{\mathcal P}hi (y',y_n)}:= \mathbf Q_{{\mathcal P}hi (y',-y_n)}\,$
and
$J_n{\mathcal P}hi (y',y_n):=J_n{\mathcal P}hi (y',-y_n)\,$
for any $(y',y_n)\in {\mathcal C}_{\mathrm{d}elta,\mbox{e}ta}^+$.}
$$
Note that the functions $\mathbf{Q}_{\mathcal P}hi$ and $J_n{\mathcal P}hi$
are smooth on $B_\mathrm{d}elta^{(n-1)}\times (-\mbox{e}ta,0]$, and therefore their
extensions are also smooth on $\Gamma_\mathrm{d}elta$.
However, the extensions are in general only Lipschitz continuous on
${\mathcal C}_{\mathrm{d}elta,\mbox{e}ta}$.
Only the horizontal derivatives
of the extended Jacobian are continuous across $\Gamma_\delta$,
since they are even functions, as is the Jacobian itself.
Next, we extend the solution $\hat u$ by an odd reflection
across the boundary $\Gamma_\mathrm{d}elta$ on each time-slice. More
precisely, we let
$$
\mbox{$\hat u (y',y_n,t):=-\hat u (y',-y_n,t)\, $ for
$(y',y_n)\in {\mathcal C}_{\mathrm{d}elta,\mbox{e}ta}^+$.}
$$
Now we consider testing functions $\psi\in
L^p\big(0,T;W^{1,p}_0({\mathcal C}_{\mathrm{d}elta,\mbox{e}ta})\big)$ with $\partial_t\psi\in
L^2({\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta})$ and $\psi(0)=0=\psi(T)$. We decompose $\psi=\psi_{\rm e}+\psi_{\rm o}$
into its even part $\psi_{\rm e}$ and odd part $\psi_{\rm o}$
with respect to reflection across
$\Gamma_\mathrm{d}elta$.
According to ${\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}= {\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}^+\cup {\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}^-$ we write
\begin{align*}
\mathbf I:=
\iint_{{\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}}
\Big[\hat u\cdot\partial_t\psi -
\mathbf a_\varepsilon\big( |D\hat u|_{\mathbf Q_{\mathcal P}hi}\big)
\mathbf Q_{\mathcal P}hi\big( D\hat u ,D\psi \big) \Big]\, \mathrm{d}y\mathrm{d}t
=
\mathbf I_{\rm e}^+ + \mathbf I_{\rm e}^- +
\mathbf I_{\rm o}^+ + \mathbf I_{\rm o}^-.
\mbox{e}nd{align*}
The right-hand side integrals are defined as follows: For any sign $\{ +,-\}$ and any symmetry type $\{{\rm e},{\rm o}\}$
one has to replace ${\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}, \psi$ in $\mathbf I$ by the corresponding half cylinder $\{ {\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}^+, {\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}^-\}$
and the corresponding even or odd part $\{\psi_{\rm e},\psi_{\rm o}\}$ of $\psi$.
In the last two terms, we observe that
$\mathbf Q_{\mathcal P}hi( D\hat u ,D\psi_{\rm o})$ is an even function with respect to $y_n$
because the derivatives of $\hat u$ and $\psi_{\rm o}$ in direction of $y_i$ with $i\in\{1,\mathrm{d}ots ,n-1\}$ are
odd and the derivatives in the direction
of $y_n$ are even. Furthermore, the structure of $\mathbf Q$ from \mbox{e}qref{def:Q}
does not lead to mixed terms with both types of derivatives.
For the same reason, $|D\hat u|_{\mathbf Q_{\mathcal P}hi}$ is an even function, and
by definition we have that
$\hat u\cdot\partial_t\psi_{\mathrm{o}}$ is even as well. Consequently,
the integrands of the last two integrals are even, which implies
$\mathbf I_{\rm o}^-=\mathbf I_{\rm o}^+$.
Similarly, using the facts that $\hat u$ is odd and $\psi_{\rm e}$ is
even, we deduce that $\mathbf I_{\rm e}^+=-\mathbf
I_{\rm e}^-$.
Therefore, we obtain
\begin{align}\label{Euler-LHS}
\iint_{{\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}}&\Big[\hat u\cdot\partial_t\psi-
\mathbf a_\varepsilon\big(|D\hat u|_{ \mathbf Q_{\mathcal P}hi}\big)
\mathbf Q_{\mathcal P}hi\big( D\hat u ,D\psi \big) \Big] \mathrm{d}y\mathrm{d}t
\\
\hat uidehat nonumber
&
=2\iint_{{\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}^-}\Big(\hat u\cdot\partial_t\psi_o-
\mathbf a_\varepsilon\big(|D\hat u|_{ \mathbf Q_{\mathcal P}hi}\big)
\mathbf Q_{\mathcal P}hi\big( D\hat u ,D\psi_{\rm o} \big)
\Big) \mathrm{d}y\mathrm{d}t.
\mbox{e}nd{align}
Note that the right-hand side coincides with the left-hand side
of~\mbox{e}qref{transformed-Euler} with $\psi_{\rm o}$ in place of $\psi$.
Analogously to the decomposition of $\mathbf I$, we write
\begin{align*}
\iint_{{\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}}&
\mathbf a_\varepsilon\big( |D\hat u|_{\mathbf Q_{\mathcal P}hi}\big)
\mathbf Q_{\mathcal P}hi\big( D\hat u ,\psi\otimes
D[J_n{\mathcal P}hi]\big)(J_n{\mathcal P}hi)^{-1}\mathrm{d}y\mathrm{d}t
=\mathbf{II}_{\rm e}^+ + \mathbf{II}_{\rm e}^- +
\mathbf{II}_{\rm o}^+ + \mathbf{II}_{\rm o}^-.
\mbox{e}nd{align*}
For these integrals, we can use similar symmetry considerations as
above. Since $\psi_{\mathrm{o}}\otimes D[J_n{\mathcal P}hi]$ enjoys the same
symmetry properties as $D\psi_{\mathrm{o}}$, we infer
$\mathbf{II}_{\mathrm{o}}^+=\mathbf{II}_{\mathrm{o}}^-$. Similarly, we
deduce $\mathbf{II}_{\mathrm{e}}^+=-\mathbf{II}_{\mathrm{e}}^-$. This
implies
\begin{align}\label{Euler-RHS}\hat uidehat nonumber
\iint_{{\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}}&
\mathbf a_\varepsilon\big( |D\hat u|_{\mathbf Q_{\mathcal P}hi}\big)
\mathbf Q_{\mathcal P}hi\big( D\hat u ,\psi\otimes
D[J_n{\mathcal P}hi]\big)(J_n{\mathcal P}hi)^{-1}\mathrm{d}y\mathrm{d}t \\
&=
2\iint_{{\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}^-}
\mathbf a_\varepsilon\big( |D\hat u|_{\mathbf Q_{\mathcal P}hi}\big)
\mathbf Q_{\mathcal P}hi\big( D\hat u ,\psi_{\mathrm{o}}\otimes
D[J_n{\mathcal P}hi]\big)(J_n{\mathcal P}hi)^{-1}\mathrm{d}y\mathrm{d}t.
\mbox{e}nd{align}
Note that $\psi_{\rm o}=0$ on
$\Gamma_\mathrm{d}elta$, which makes $\psi_{\rm o}$
admissible in the transformed parabolic system
\mbox{e}qref{transformed-Euler}. This means that the right-hand sides
of~\mbox{e}qref{Euler-LHS} and \mbox{e}qref{Euler-RHS} coincide. Thus, we conclude
that the extended map $\hat u$ satisfies
\begin{align}\label{Euler-extension}\hat uidehat nonumber
\iint_{{\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}}&
\Big[\hat u\cdot\partial_t\psi-\mathbf a_\varepsilon\big(|D\hat u|_{ \mathbf Q_{\mathcal P}hi}\big)
\mathbf Q_{\mathcal P}hi\big( D\hat u ,D\psi \big) \Big]\mathrm{d}y\mathrm{d}t\\
&=
\iint_{{\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}}\mathbf a_\varepsilon\big( |D\hat u|_{\mathbf Q_{\mathcal P}hi}\big)
\mathbf Q_{\mathcal P}hi\big( D\hat u ,\psi\otimes D[J_n{\mathcal P}hi]\big)(J_n{\mathcal P}hi)^{-1}\mathrm{d}y\mathrm{d}t,
\mbox{e}nd{align}
for every $\psi\in L^p\big(0,T;W^{1,p}_0({\mathcal C}_{\mathrm{d}elta,\mbox{e}ta})\big)$
with $\partial_t\psi\in L^2({\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta})$ and $\psi(0)=0=\psi(T)$.
Dropping the ${\mathcal P}hi$ on $\mathbf{Q}$ for ease of notation,
\mbox{e}qref{Euler-extension} is the weak form of the parabolic system
\begin{equation}\label{Euler-pointwise}
\begin{aligned}
\pl_t \hat u^i - &\sum_{\al,\be=1}^n\big[\mathbf a_\varepsilon(|D\hat u|_{{\bf Q}}){\bf Q}_{\al\be}\hat u^i_{y_\al}\big]_{y_\be}\\
&=\sum_{\al,\be=1}^n\mathbf a_\varepsilon(|D\hat u|_{\bf Q}){\bf Q}_{\al\be}\hat u^i_{y_\al} \frac{[J_n{\mathcal P}hi]_{y_\be}}{(J_n{\mathcal P}hi)}
\quad\mbox{ in $\mathcal{Q}_{\mathrm{d}l,\mbox{e}ta}$.}
\mbox{e}nd{aligned}
\mbox{e}nd{equation}
\subsection{Smoothness of $u_\varepsilon$ up to the lateral boundary}
We first observe that $\mathbf Q_{{\mathcal P}hi(0)}(\xi,\xi)=|\xi|^2$, since $D{\mathcal P}si (0)={\rm id}_{\rr^n}$ by~\mbox{e}qref{DPsi(0)}. By shrinking $\mathrm{d}elta$ and $\mbox{e}ta$ if necessary, we can achieve
\begin{equation}\label{bounds-norm}
\mbox{$\tfrac12|\xi|\le |\xi|_{\mathbf{Q}_{{\mathcal P}hi(y)}}\le 2|\xi|$ for any $\xi\in\rr^{Nn}$
and $y\in {\mathcal C}_{\mathrm{d}elta,\mbox{e}ta}$, and
$\mathrm{Lip}\Big(\mathbf{Q}_{\mathcal P}hi\big|_{{\mathcal C}_{\mathrm{d}elta,\mbox{e}ta}}\Big)\le\mathcal{L}ambda$}
\mbox{e}nd{equation}
for some universal constant $\mathcal{L}ambda<\infty$.
This implies that
assumptions (1.7) -- (1.9) from \cite{Tolksdorf:1983} are
fulfilled if we replace the functions $b$, $q(\xi)$ used by Tolksdorf
by the functions $\mathbf{Q}$, $|\xi|_{\mathbf{Q}_x}^2$ defined in
\mbox{e}qref{def:Q2}. Similarly, we have
\begin{equation}\label{bounds-Jacobian}
\tfrac12\le J_n{\mathcal P}hi(y)\le 2
\mbox{ for any $y\in {\mathcal C}_{\mathrm{d}elta,\mbox{e}ta}$,}
\quad \mbox{and}\quad
\mathrm{Lip}\Big( J_n{\mathcal P}hi \big|_{{\mathcal C}_{\mathrm{d}elta,\mbox{e}ta}}\Big)\le\mathcal{L}ambda
\mbox{e}nd{equation}
for some universal constant $\mathcal{L}ambda<\infty$.
Furthermore, the estimates \eqref{growth-a-delta} for the coefficients
$\mathbf a_\varepsilon$ imply that assumptions (1.10)--(1.12) from
Tolksdorf \cite{Tolksdorf:1983} hold true. For the inhomogeneous term, we observe that
\begin{equation*}
a^i(x,\xi)=\sum_{\al,\be=1}^n\mathbf{a}_\varepsilon(|\xi|_{\mathbf{Q}})\mathbf{Q}_{\al\be}\xi^i_{\al}\frac{[J_n{\mathcal P}hi]_{y_\be}}{J_n{\mathcal P}hi}.
\mbox{e}nd{equation*}
Again, by \mbox{e}qref{bounds-norm} and \mbox{e}qref{bounds-Jacobian}, we will find the desired positive constant
in order to verify (1.13) from \cite{Tolksdorf:1983}. Having arrived at this stage we can apply the $C^{1,\alpha}$-regularity results from
\cite{DB}. Indeed, as pointed out by DiBenedetto in the monograph \cite[Chapter~VIII.7]{DB}, the statement of \cite[Chapter~IX, Theorem~1.1]{DB} continues to hold under these assumptions. The application of the theorem yields $D_y\hat u\in C^{0,\al}_{\loc}(\mathcal{Q}_{\mathrm{d}l,\mbox{e}ta})$.
Hence $u_\varep$ enjoys the same degree of regularity in the vicinity of $(\pl\Om_\varep\cap B_\varrho(x_o))\times(0,T)$.
A further application of the interior regularity from
\cite[Chapter~IX]{DB} directly to $u_\varep$ gives
$Du_\varep\in C^{0,\al}(\overline{\Om_\varep \cap B_\varrho(x_o)}\times [\tau,T])$ for some $\tau>0$.
Up to now, all the above regularity results also hold for the
degenerate or singular case,
and solutions cannot be expected to be more regular in this case.
However, since the regularized problem is non-degenerate, we can show
higher regularity of solutions. We begin by noting that a standard
application of the difference quotient technique yields the weak
differentiability of $V_\lambda(D\hat u)=(\lambda^2+|D\hat u|^2)^{\frac{p-2}{4}}D\hat u$
with $D[V_\lambda(D\hat u)]\in L^2_{\loc}(\mathcal{Q}_{\mathrm{d}l,\mbox{e}ta},\mathbb{R}^{Nn})$;
see for instance \cite[Lemma 5.1]{DMS} and
\cite[Thm. 1.1]{Scheven:2010} for the cases $p\ge2$ and
$\frac{2n}{n+2}<p<2$, respectively.
By using the fact that $\lambda>0$ in the case $p>2$ and the local
boundedness of $|D\hat u|$ if $p<2$, we deduce that the second spatial
derivatives of the solution satisfy
$D^2_y\hat u\in L^2_{\loc}(\mathcal{Q}_{\delta,\eta},\mathbb{R}^{Nn})$.
Having second spatial derivatives in $L^2_{\loc}$ and first spatial derivatives locally bounded, we are allowed to perform an integration by parts in \eqref{Euler-extension} in the diffusion term. After that we shift all terms except the one containing the time derivative to the right-hand side. In this way we obtain an estimate of the form
\begin{equation}\label{time-deriv}
\bigg|\iint_{{\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}} \hat u\cdot\partial_t\psi \,\mathrm{d}y\mathrm{d}t\bigg|
\le
C \|\psi\|_{L^2({\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta})}
\mbox{e}nd{equation}
for any $\psi\in C_0^\infty ({\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta},\mathbb{R}^N)$. This implies that $\partial_t\hat u \in L^2_{\loc}(\mathcal{Q}_{\mathrm{d}l,\mbox{e}ta})$.
The main ingredient for the higher regularity are the Schauder
estimates for
linear parabolic systems stated in Theorem~\ref{Thm:Schauder}.
We begin by differentiating \mbox{e}qref{Euler-extension} in tangential
directions, i.e.~with respect to $y_\mbox{e}ll$ for $\mbox{e}ll=1,2,\mathrm{d}ots, n-1$.
As before we omit the ${\mathcal P}hi$ on $\mathbf{Q}$.
Since $D^2_y \hat u\in L^2_{\loc}(\mathcal{Q}_{\mathrm{d}l,\mbox{e}ta})$, we infer that
$v:=\hat u_{y_\mbox{e}ll}$ is a weak solution to the following parabolic system:
\begin{equation}\label{Schauder-system}
\partial_t v^i -
\sum_{\al,\be=1}^n\sum_{j=1}^N\big[A^{ij}_{\alpha\be}v^j_{y_\alpha}\big]_{y_\be}
=
\sum_{\al=1}^n\sum_{j=1}^Nb^{ij}_{\alpha}v^j_{y_\alpha} +
\sum_{\al=1}^n(f^i_\alpha)_{y_\alpha} + c^i
\mbox{e}nd{equation}
in $\mathcal{Q}_{\mathrm{d}l,\mbox{e}ta}$ and for $i=1,\mathrm{d}ots, N$, where the coefficients are given by
\begin{align}\label{def_A}
A^{ij}_{\alpha\be}
&:=
\mathbf{a}_\varepsilon(|D\hat u|_{\mathbf{Q}})\mathbf{Q}_{\alpha\be}\mathrm{d}elta^{ij} +
\frac{\mathbf{a}_\varepsilon^\prime(|D\hat u|_{\mathbf{Q}})}{|D\hat u|_{\mathbf{Q}}}
\sum_{\gamma,\mathrm{d}elta=1}^n
\mathbf{Q}_{\alpha\gm} \mathbf{Q}_{\be\mathrm{d}elta} \hat u^i_{y_\gm}\hat u^j_{y_\mathrm{d}elta} ,
\mbox{e}nd{align}
and
\begin{align}\label{def_b}
b^{ij}_{\alpha}
&:=
\sum_{\beta=1}^n
\frac{[J_n{\mathcal P}hi]_{y_\be}}{J_n{\mathcal P}hi}
\bigg[\mathbf{a}_\varepsilon(|D\hat u|_{\mathbf{Q}})\mathbf{Q}_{\alpha\be}\mathrm{d}l^{ij} +
\sum_{\gamma, \mathrm{d}elta=1}^n
\frac{\mathbf{a}_\varepsilon^\prime(|D\hat u|_{\mathbf{Q}})}{|D\hat u|_{\mathbf{Q}}}
\mathbf{Q}_{\alpha\mathrm{d}elta} \mathbf{Q}_{\be\gamma}
\hat u^i_{y_\gamma}\hat u^j_{y_\mathrm{d}elta}\bigg].
\mbox{e}nd{align}
The inhomogeneities are defined by
\begin{align*}
f^i_\alpha
&:=
\sum_{\beta,\gamma,\mathrm{d}elta=1}^n \sum_{k=1}^N
\frac{\mathbf{a}_\varepsilon^{\prime}(|D\hat u|_{\mathbf{Q}})}{2|D\hat u|_{\mathbf{Q}}}
[\mathbf{Q}_{\gm\mathrm{d}l}]_{y_\mbox{e}ll}\mathbf{Q}_{\al\be}
\hat u^i_{y_\beta} \hat u^k_{y_\gm}\hat u^k_{y_\mathrm{d}l} +
\sum_{\beta=1}^n
\mathbf{a}_\varepsilon(|D\hat u|_{\mathbf{Q}})
[\mathbf{Q}_{\al\be}]_{y_\mbox{e}ll}\hat u^i_{y_\beta}
\mbox{e}nd{align*}
and
\begin{align*}
c^i
&:=
\sum_{\alpha, \beta=1}^n\mathbf{a}_\varepsilon(|D\hat u|_{\mathbf{Q}})
\bigg[
\frac{\mathbf{Q}_{\al\be}[J_n{\mathcal P}hi]_{y_\be}}{J_n{\mathcal P}hi}\bigg]_{y_\ell}
\hat u^i_{y_\al} \\
&\quad\ \, +
\frac{\mathbf{a}_\varepsilon^\prime(|D\hat u|_{\mathbf{Q}})}{2|D\hat u|_{\mathbf{Q}}}
\sum_{\alpha, \beta, \gamma, \delta=1}^n\sum_{k=1}^N
\frac{[J_n{\mathcal P}hi]_{y_\be}}{J_n{\mathcal P}hi}\,
[\mathbf{Q}_{\gm\delta}]_{y_\ell} \mathbf{Q}_{\al\be}
\hat u^i_{y_\al}\hat u^k_{y_\gm}\hat u^k_{y_\delta} .
\end{align*}
Note that the derivatives $(J_n{\mathcal P}hi)_{y_\mbox{e}ll}$ and $\mathbf{Q}_{y_\mbox{e}ll}$
are Lipschitz continuous on the whole domain $B_{\mathrm{d}l}\times(-\mbox{e}ta,\mbox{e}ta)$ for any $\mbox{e}ll=1,2,\mathrm{d}ots,n-1$.
According to the $C^{1,\al}$-regularity of $\hat u$, the coefficients
$A^{ij}_{\alpha\be}$
and the terms $f^i_\alpha$ appearing in \eqref{Schauder-system}
are H\"older continuous, while the coefficients $b^{ij}_{\alpha}$
and the inhomogeneities $c^i$ are bounded.
Moreover, for any $\xi\in\rr^{Nn}$, by \eqref{growth-a-delta}$_{1,2}$ we have that
\[
\sum_{\al,\be=1}^n\sum_{i,j=1}^N A^{ij}_{\alpha\be}\xi^i_\alpha\xi^j_\be\ge\tfrac{m}{c}\big(\varep^2+|D\hat u|^2_{\mathbf{Q}}\big)^{\frac{p-2}2}|\xi|^2.
\]
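To see this, note that by \eqref{def_A} and the definition \eqref{def:Q2} of the bilinear form $\mathbf Q$,
\begin{equation*}
\sum_{\al,\be=1}^n\sum_{i,j=1}^N A^{ij}_{\alpha\be}\xi^i_\alpha\xi^j_\be
=
\mathbf{a}_\varepsilon(|D\hat u|_{\mathbf{Q}})\,|\xi|_{\mathbf{Q}}^2
+
\frac{\mathbf{a}_\varepsilon^\prime(|D\hat u|_{\mathbf{Q}})}{|D\hat u|_{\mathbf{Q}}}
\big[\mathbf{Q}(D\hat u,\xi)\big]^2 .
\end{equation*}
If $\mathbf{a}_\varepsilon^\prime(|D\hat u|_{\mathbf{Q}})\ge0$ the second term may simply be discarded, while for $\mathbf{a}_\varepsilon^\prime(|D\hat u|_{\mathbf{Q}})<0$ the Cauchy-Schwarz inequality $|\mathbf{Q}(D\hat u,\xi)|\le|D\hat u|_{\mathbf{Q}}|\xi|_{\mathbf{Q}}$ shows that the whole expression is bounded from below by $\big[\mathbf{a}_\varepsilon(r)+\mathbf{a}_\varepsilon^\prime(r)r\big]|\xi|_{\mathbf{Q}}^2$ with $r=|D\hat u|_{\mathbf{Q}}$. In either case, the lower bounds from \eqref{growth-a-delta} together with \eqref{bounds-norm} yield the displayed ellipticity, possibly after adjusting the constant.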
Consequently the interior Schauder estimates from Theorem~\ref{Thm:Schauder} yield the H\"older continuity of the spatial gradient
$Dv$ for some proper H\"older exponent. In particular, $\hat u_{y_\alpha y_\beta}$
is locally H\"older continuous on $\mathcal{Q}_{\mathrm{d}l,\mbox{e}ta}$, provided $\alpha+\beta<2n$.
Likewise, we may differentiate \eqref{Euler-pointwise} with respect to $t$.
This procedure becomes legitimate if we can show $D_{y}\partial_t \hat u\in L^2_{\loc}(\mathcal{Q}_{\delta,\eta})$.
Thanks to \eqref{time-deriv}, this can be done by working with the difference quotient of $\hat u$ in the time variable.
Indeed, let $h>0$ and define the finite difference in time by
\[
\tau_h \hat u(t):=\hat u(t+h)-\hat u(t).
\]
Here and in the sequel we suppress the dependence of $\hat u$ on the spatial variable $y$.
Taking finite differences in the time variable of \mbox{e}qref{Euler-pointwise} we obtain
that the parabolic system
\begin{equation}\label{diff-quo}
\begin{aligned}
\tau_h\pl_t \hat u^i -
&
\sum_{\al,\be=1}^n \Big[\tau_h \big[{\bf a}_\varepsilon(|D\hat u|_{{\bf Q}}){\bf Q}_{\al\be}\hat u^i_{y_\al}\big]\Big]_{y_\be}\\
&=
\sum_{\al,\be=1}^n \tau_h\big[{\bf a}_\varepsilon(|D\hat u|_{\bf Q}){\bf Q}_{\al\be}\hat u^i_{y_\al}\big]
\frac{[J_n{\mathcal P}hi]_{y_\be}}{J_n{\mathcal P}hi},
\mbox{e}nd{aligned}
\mbox{e}nd{equation}
is satisfied weakly in $\mathcal{Q}_{\mathrm{d}l,\mbox{e}ta}$.
Next, for fixed $t$ and $h$, we introduce the quantity
\[
\Dl(s):=s D\hat u(t+h)+(1-s)D\hat u(t)\in\mathbb{R}^{Nn}
\]
whose entries are $
\Dl_\al^i(s):=s\hat u_{y_\al}^i(t+h)+(1-s)\hat u_{y_\al}^i(t),
$
and calculate
\begin{align*}
\tau_h\big[ {\bf a}_\varepsilon&(|D\hat u|_{{\bf Q}}){\bf Q}_{\al\be}\hat u^i_{y_\al}\big]\\
&=
\tau_h \hat u^{j}_{y_\al}
\int_0^1\left[{\bf a}_\varepsilon\big(|\Dl(s)|_{\bf Q}\big)
{\bf Q}_{\al\be}\mathrm{d}l^{ij} +
\frac{{\bf a}_\varepsilon^{\prime}(|\Dl(s)|_{\bf Q})}{|\Dl(s)|_{\bf Q}}
{\bf Q}_{\al\gm}\Dl^j_\gm(s){\bf Q}_{\be\mathrm{d}l}\Dl^i_\mathrm{d}l(s)\right]\mathrm{d} s\\
&=:
\tau_h \hat u^{j}_{y_\al} \mathcal{A}_{\al\be}^{ij},
\mbox{e}nd{align*}
for $\be=1,\ldots,n$ and $i=1,\ldots,N$, with summation over the repeated indices $j$ and $\al$ understood.
It is not hard to verify that the matrix $\mathcal{A}_{\al\be}^{ij}$
satisfies
\begin{align*}
&\sum_{\al,\be=1}^n\sum_{i,j=1}^N\mathcal{A}^{ij}_{\alpha\be}\xi^j_\alpha\xi^i_\be\ge\tfrac{m}{c}|\xi|^2\int_0^1\big(\varep^2+|\Dl(s)|^2_{\bf Q}\big)^{\frac{p-2}2}\,\mathrm{d} s
\ge C_o|\xi|^2
\mbox{e}nd{align*}
and
\begin{align*}
&|\mathcal{A}^{ij}_{\alpha\be}|\le cM \int_0^1\big(\varep^2+|\Dl(s)|^2_{\bf Q}\big)^{\frac{p-2}2}\,\mathrm{d} s\le C_1
\mbox{e}nd{align*}
for some positive constants $C_o$ and $C_1$ depending on $p$, $m$, $M$, $c$, $\varep$, and $\|D\hat u\|_{L^\infty}$.
We may test \mbox{e}qref{diff-quo} by $\tau_h \hat u^i \z^2$ with $\z\in C^1_0(\mathcal{Q}_{\mathrm{d}l,\mbox{e}ta})$.
Employing the above growth conditions on $\mathcal{A}^{ij}_{\alpha\be}$ and the fact that $\partial_t \hat u \in L^2_{\loc}(\mathcal{Q}_{\mathrm{d}l,\mbox{e}ta})$, a standard calculation gives
\begin{align*}
\iint_{\mathcal{Q}_{\mathrm{d}l,\mbox{e}ta}} \z^2|\tau_h D\hat u|^2\,\mathrm{d} y\mathrm{d} t
\le
C h^2
\mbox{e}nd{align*}
for some constant $C$ with dependence only on $C_o$, $C_1$, $\mathcal{L}m$, $\|\z\|_{L^\infty}$, $\|D\z\|_{L^\infty}$, $\|\partial_t\z\|_{L^\infty}$ and $\|\partial_t \hat u\|_{L^2(\spt\zeta)}$
but independent of $h$.
Passing to the limit in the above estimate as $h\mathrm{d}ownarrow0$,
we conclude that $D\partial_t \hat u\in L^2_{\loc}(\mathcal{Q}_{\mathrm{d}l,\mbox{e}ta})$ as promised. Therefore, we may differentiate \mbox{e}qref{Euler-extension} with respect to $t$ and obtain, denoting $\tilde v:=\partial_t \hat u$, that
\begin{equation*}
\partial_t \tilde v^i -
\sum_{\al,\be=1}^n\sum_{j=1}^N\big[A^{ij}_{\al\be}\tilde v^j_{y_\al}\big]_{y_\be}
=
\sum_{\al=1}^n\sum_{j=1}^N b^{ij}_\alpha \tilde v^j_{y_\alpha}
\qquad\text{for $i=1,2,\mathrm{d}ots, N$}
\mbox{e}nd{equation*}
in $\mathcal{Q}_{\mathrm{d}l,\mu}$, where $A^{ij}_{\alpha\be}$ and
$b^{ij}_\alpha$ are defined in \mbox{e}qref{def_A} and \mbox{e}qref{def_b}, respectively.
Then the interior Schauder estimates from Theorem~\ref{Thm:Schauder} yield the local H\"older
continuity of
$\partial_t D\hat u$ on
$\mathcal{Q}_{\mathrm{d}l,\mu}$.
To obtain H\"older regularity for $\hat u_{y_n y_n}$, we turn back to
\mbox{e}qref{transformed-Euler} in $\mathcal{Q}_{\mathrm{d}l,\mu}^-$.
Let us write it in non-divergence form and keep the terms with $\hat u^i_{y_ny_n}$
on the left-hand side, while we put all other terms on the right-hand side.
As usual, we will omit ${\mathcal P}hi$ on $\mathbf{Q}$.
In this way, we may obtain an algebraic, linear system
\begin{equation}\label{linear-system}
\sum_{j=1}^N B^{ij}\hat u^j_{y_ny_n}=g^i\quad\text{ in }\mathcal{Q}_{\mathrm{d}l,\mu}^-
\mbox{ for $i=1,2,\mathrm{d}ots, N$,}
\mbox{e}nd{equation}
where
\[
B^{ij}
=
\mathbf{a}_\varepsilon(|D\hat u|_{\mathbf{Q}})\mathbf{Q}_{nn}\mathrm{d}l^{ij} +
\frac{\mathbf{a}_\varepsilon^\prime(|D\hat u|_{\mathbf{Q}})}{|D\hat u|_{\mathbf{Q}}}
\mathbf{Q}_{n\gm} \hat u^j_{y_\gm}\mathbf{Q}_{\al n}\hat u^i_{y_\al}
\]
and the right-hand side $g^i$ is a combination of first derivatives, second derivatives of $\hat u$
excluding $\hat u_{y_ny_n}$, together with $\mathbf{Q}$, $J_n{\mathcal P}hi$ and their first derivatives.
As a result, $g^i$ is H\"older continuous for all $i=1,2,\mathrm{d}ots, N$.
On the other hand, we observe that the matrix $(B^{ij})$ is positive definite and H\"older continuous in
the closure of $\mathcal{Q}_{\mathrm{d}l,\mu}^-$, provided we choose $\mathrm{d}l$ and $\mu$
sufficiently small. As a result, $\hat u^i_{y_ny_n}$ can be solved from the algebraic, linear system \mbox{e}qref{linear-system},
and is also H\"older continuous in the closure of $\mathcal{Q}_{\mathrm{d}l,\mu}^-$.
Hence we have shown that $\hat u_{y_iy_j}$ is H\"older continuous in the closure of $\mathcal{Q}_{\mathrm{d}l,\mu}^-$
for all $i,j=1,2,\mathrm{d}ots,n$. Consequently, the same fact holds for $\partial_t \hat u$ due to the system \mbox{e}qref{transformed-Euler}.
Transforming back to $u_\varep$ we obtain that $\pl_t u_\varep$ and $D^2 u_\varep$
are H\"older continuous up to the lateral boundary $\{\pl\Om_\varep\cap N_o\}\times[\tau,T]$
for some $\tau>0$.
The sketched procedure can be iterated to give even higher regularity. To this
end, we successively differentiate the linear
system~\mbox{e}qref{Schauder-system} in tangential directions and with
respect to time and apply the Schauder estimate from
Theorem~\ref{Thm:Schauder}. This yields the H\"older continuity for all
derivatives except for the ones in normal directions. The H\"older
regularity of the remaining derivatives can then be deduced from the
system~\mbox{e}qref{transformed-Euler} on ${\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta}^-$. In this
way, we inductively deduce $\hat u\in C^{k,\alpha}_{\loc}({\mathcal Q}_{\mathrm{d}elta,\mbox{e}ta})$ for any $k\in\mathbb{N}$, which yields
the desired smoothness of the approximating solutions $u_\varepsilon$.
\end{document}
\begin{document}
\title{Tilings of amenable groups
}
\author{Tomasz Downarowicz \and
Dawid Huczek \and
Guohua Zhang
}
\institute{Tomasz Downarowicz \at
Institute of Mathematics of the Polish Academy of Science\\ \'Sniadeckich 8, 00-956 Warszawa, Poland \\ and \\Institute of Mathematics and Computer Science \\ Wroclaw University of Technology \\ Wybrze\.ze Wyspia\'nskiego 27, 50-370 Wroc\l aw, Poland
\email{[email protected]}
\and
Dawid Huczek \at
Institute of Mathematics and Computer Science \\ Wroclaw University of Technology \\ Wybrze\.ze Wyspia\'nskiego 27, 50-370 Wroc\l aw, Poland
\email{[email protected]}
\and
Guohua Zhang \at
School of Mathematical Sciences and LMNS \\ Fudan University and Shanghai Center for Mathematical Sciences \\ Shanghai 200433, China
\email{[email protected]}
}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract}
We prove that for any infinite countable amenable group $G$, any $\varepsilon>0$ and any finite subset $K\subset G$, there exists a tiling (a partition of $G$ into finite ``tiles'' using only finitely many ``shapes''), in which all the tiles are $(K,\varepsilon)$-invariant. Moreover, our tiling has topological entropy zero (i.e., subexponential complexity of patterns). As an application, we construct a free action of $G$ on a zero-dimensional space (free in the sense that the mappings associated to the elements of $G$ other than the unity have no fixed points) which has topological entropy zero.
\keywords{countable amenable group, (dynamical) tiling, free action, topological entropy}
\subclass{37B10 \and 37B40}
\end{abstract}
\section{Introduction}
In this paper we solve a question about ``tileability'' of countable amenable groups using
finitely many tiles with good invariance properties. The problem was open for a long time, but it
was overshadowed by another, more difficult, problem about ``tileability'' using only one tile.
In 1980, D. Ornstein and B. Weiss \cite{OW1} announced that every such group can be \emph{$\varepsilon$-tiled} using finitely many ``F\o lner tiles'' (sets belonging to the F\o lner sequence), i.e., covered up to $\varepsilon$ (in terms of the invariant mean) by $\varepsilon$-disjoint shifted copies of these finitely many sets. The authors admit that removing the inaccuracy in covering the group remains a problem.
Nonetheless, their construction (which appears in \cite{OW2}) became fundamental in the analysis of dynamical systems (both topological and measure-theoretic) with the action of countable amenable groups. It serves as a substitute for the Rohlin lemma, allowing one to generalize to this setting the vast majority of entropic and ergodic theorems known for actions of $\mathbb Z$ (see e.g. \cite{OW2}, \cite{WZ}, \cite{RW}, \cite{L}, \cite{Da} and the references therein). Later, in \cite{We}, B. Weiss showed that countable amenable groups from a large class admit a precise tiling by only one shape (monotile) belonging to a selected F\o lner sequence. This class includes all countable amenable linear groups and all countable residually finite amenable groups. But, to the best of our knowledge, the $\varepsilon$-quasitilings have never been improved in full generality to precise tilings.
In an attempt to carry over the \emph{theory of symbolic extensions} (see \cite{BD}) to actions of amenable groups, we have encountered serious difficulties in applying quasitilings. The imprecision in how they cover the group, although inessential in most other cases, destroys the construction of symbolic extensions. It turns out that not only do we need the tilings to be precise, but it is also desirable that they have topological entropy zero (not just arbitrarily small). This has inspired us to abandon, at least for some time, symbolic extensions and focus on improving the Ornstein--Weiss machinery in the first place.
Below the reader will find a complete and self-contained construction of a precise tiling (i.e., partition) of any infinite countable amenable group by shifted copies of finitely many finite sets (which we call ``shapes'', as opposed to the ``tiles'', which are the elements of the partition), where the shapes (and hence the tiles) have arbitrarily good invariance properties under small shifts. In fact, we produce not one but countably many such partitions, which are \emph{congruent} (each is inscribed in the following one), each is determined by the following one, and all of them have topological entropy zero (i.e., have subexponential complexity of patterns in large regions). Such a sequence of tilings can be thought of as an analog of an odometer used frequently for $\mathbb Z$-actions (more generally, for actions of residually finite groups) to introduce a system of \emph{parsings}, i.e., partitions of orbits into finite \emph{blocks}, on which one can later perform various combinatorial manipulations.
Our construction starts with building an $\varepsilon$-quasitiling almost identical to that of Ornstein and Weiss, except that our meaning of $\varepsilon$-covering is in terms of lower Banach density. In spite of the similarity, we provide all details, mainly because of the somewhat complicated lower Banach density estimates. This quasitiling is then modified four times: first it is made disjoint, next precisely covering (this step is the most novel); then, given a sequence of such tilings, they are made congruent, and finally they are brought to a form where each is a factor of the following one. In each step we control their small topological entropy, so that after the last modification the entropy is brought down to zero.
As an application we produce a free zero-entropy action of the group on a zero-dimensional space.
We are convinced that exact tilings might simplify many proofs in the area of ergodic theory and topological dynamics for amenable group actions and perhaps allow for new developments. For instance, in addition to symbolic extensions, the creation of Bratteli diagrams might become possible for such actions, leading to a better understanding of K-theory and orbit equivalence.
\section{Preliminaries}
\subsection{Basic notions}
Throughout this paper $G$ denotes an infinite countable \emph{amenable group}, i.e., a group in which there exists a sequence of finite sets $F_n\subset G$ (called a \emph{F\o lner sequence}, or the \emph{sequence of F\o lner sets}), such that for any $g\in G$ we have
\[
\lim_{n\to\infty}\frac{\abs{gF_n\bigtriangleup F_n}}{\abs{F_n}}= 0,
\]
where $gF=\set{gf:f\in F}$, $\abs{\cdot}$ denotes the cardinality of a set, and $\bigtriangleup$ is the symmetric difference. Since $G$ is infinite, the sequence $|F_n|$ tends to infinity. Without loss of generality (see \cite[Corollary 5.3]{N}) we can assume that the sets in the F\o lner sequence are symmetric and contain the unity.
\begin{defn}
If $T$ and $K$ are finite subsets of $G$ and $\varepsilon<1$, we say that $T$ is \emph{$(K,\varepsilon)$-invariant} if
\[\frac{\abs{KT\bigtriangleup T}}{\abs{T}}< \varepsilon,\]
where $KT=\set{gh : g \in K,h\in T}$.
\end{defn}
Observe that if $K$ contains the unity of $G$, then $(K,\varepsilon)$-invariance is equivalent to the simpler condition
\[\abs{KT}< (1+\varepsilon)\abs{T}.\]
The following fact is very easy to see, so we skip the proof.
\begin{lem}\label{folner}
A sequence of finite sets $(F_n)$ is a F\o lner sequence if and only if for every finite set $K$ and every $\varepsilon>0$ the sets $F_n$ are eventually $(K,\varepsilon)$-invariant.
\end{lem}
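To make the definition concrete, here is a minimal numerical sketch (our own illustration, not part of the argument) for the simplest case $G=\mathbb Z$ with the F\o lner sets $F_n=\{-n,\dots,n\}$; the particular set $K$ is chosen arbitrarily. It computes the defect $|KT\bigtriangleup T|/|T|$ directly and confirms the statement of Lemma~\ref{folner} for intervals.
\begin{verbatim}
# Illustration for G = Z (additive notation): check the (K, eps)-invariance
# defect of the Folner sets F_n = {-n, ..., n} against a fixed finite K.
def invariance_defect(K, T):
    """Return |KT symmetric difference T| / |T| for finite subsets of Z."""
    T = set(T)
    KT = {k + t for k in K for t in T}
    return len(KT ^ T) / len(T)

K = {-2, -1, 0, 1, 2}
for n in (5, 50, 500):
    print(n, invariance_defect(K, range(-n, n + 1)))
# The defect equals 4/(2n+1) and tends to 0, so the sets F_n are eventually
# (K, eps)-invariant for every eps > 0, in accordance with Lemma "folner".
\end{verbatim}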
\begin{lem}\label{ben}
Let $K\subset G$ be a finite set and fix some $\varepsilon>0$. There exists $\delta>0$ such that if
$T\subset G$ is $(K,\delta)$-invariant and $T'$ satisfies $\frac{|T'\triangle T|}{|T|}\le \delta$ then
$T'$ is $(K,\varepsilon)$-invariant.
\end{lem}
\begin{proof}
We have $KT'\setminus KT\subset K(T'\setminus T)$ (and similarly for $T$ and $T'$ exchanged), so
$KT'\triangle KT= (KT'\setminus KT)\cup(KT\setminus KT')\subset K(T'\setminus T)\cup K(T\setminus T')=
K(T'\triangle T)$. Thus, by the triangle inequality for $|\cdot\triangle\cdot|$, we obtain
$$
\frac{|KT'\triangle T'|}{|T'|}\le
\frac{|KT'\triangle KT|+ |KT\triangle T|+|T'\triangle T|}{(1-\delta)|T|}\le \frac{|K|\delta+\delta+\delta}{1-\delta}<\varepsilon,
$$
if $\delta$ is sufficiently small.
\end{proof}
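An inspection of the above estimate shows that, for instance, $\delta=\frac{\varepsilon}{2(|K|+2)}$ is a valid choice.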
\begin{defn}
We say that $T'\subset T$ ($T$ finite) is a $(1-\varepsilon)$-subset of $T$, if $\abs{T'}\geqslant(1-\varepsilon)\abs{T}$.
\end{defn}
\begin{defn}
Let $K$ be a finite subset of $G$ and let $T\subset G$ be arbitrary. The \emph{$K$-core} of $T$, denoted as $T_K$, is the set $\set{g\in T: Kg\subset T}$ (this is the largest subset $T'\subset T$ satisfying $KT'\subset T$).
\end{defn}
The following fact is a fairly standard and easy exercise; nonetheless, we give its proof for completeness.
\begin{lem}\label{estim}
For any $\varepsilon>0$ and any finite $K\subset G$ there exists a $\delta$ (in fact $\delta = \frac\varepsilon{|K|}$), such that if $T\subset G$ is finite and $(K,\delta)$-invariant then the $K$-core $T_K$ is a $(1-\varepsilon)$-subset of $T$.
\end{lem}
\begin{proof} Note that $(K,\delta)$-invariance of $T$ implies that
$$(\forall g\in K)\ \ |gT\setminus T|<\delta|T|,$$
i.e.,
$$(\forall g\in K)\ \ |T\cap g^{-1}T| = |gT\cap T| >(1-\delta)|T|.
$$
This yields
$$|T_K|=\left|\bigcap_{g\in K}(T\cap g^{-1}T)\right|>(1-|K|\delta)|T|=(1-\varepsilon)|T|.$$
\end{proof}
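For instance, for $G=\mathbb Z$, $K=\{0,1,\dots,m\}$ and the interval $T=\{0,1,\dots,n-1\}$ with $n>m$, the $K$-core is $T_K=\{0,1,\dots,n-1-m\}$, which is a $(1-\frac mn)$-subset of $T$; this illustrates the quantitative dependence of $\delta$ on $\varepsilon$ and $|K|$ in the lemma.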
We end this subsection by recalling a standard combinatorial fact (see e.g. \cite[Lemma A.3.5]{Do}):
\begin{thm}[Marriage theorem (variant)]\label{marriage}
Let $Q$ be a countable set, let $A$ and $B$ be countable subsets of $Q$ and let $R$ be a relation between $B$ and $A$ such that for some positive integer $N$ the following two conditions hold:
\begin{itemize}
\item For any $b\in B$ the number of $a\in A$ such that $bRa$ is at least~$N$.
\item For any $a\in A$ the number of $b\in B$ such that $bRa$ is at most~$N$.
\end{itemize}
Then there exists an injective mapping $\phi$ from $B$ into $A$ such that $bR\phi(b)$, for all $b\in B$.
\end{thm}
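For the reader who prefers to see the matching produced explicitly, the following small sketch (ours, only an illustration) constructs, for finite $B$ and $A$, an injective $\phi$ with $bR\phi(b)$ by the standard augmenting-path procedure; the sets, the relation and the value $N=2$ in the toy example are of course hypothetical.
\begin{verbatim}
# Toy illustration of the marriage theorem for finite B and A (here N = 2):
# Kuhn's augmenting-path algorithm produces an injective phi with b R phi(b).
def injective_selection(B, R):
    """R maps each b in B to the list of a's related to b."""
    match = {}                       # partial matching: a -> b

    def try_assign(b, visited):
        for a in R[b]:
            if a not in visited:
                visited.add(a)
                if a not in match or try_assign(match[a], visited):
                    match[a] = b
                    return True
        return False

    for b in B:
        if not try_assign(b, set()):
            return None              # impossible under the theorem's assumptions
    return {b: a for a, b in match.items()}

B = ["b1", "b2", "b3"]
R = {"b1": ["a1", "a2"], "b2": ["a2", "a3"], "b3": ["a3", "a1"]}
print(injective_selection(B, R))     # e.g. {'b1': 'a1', 'b2': 'a2', 'b3': 'a3'}
\end{verbatim}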
\subsection{Lower and upper Banach density}
Below we present the notions of lower and upper Banach densities. In \cite{BBF} the reader will find a different, yet equivalent definition (the equivalence follows from Lemma \ref{bd} below and the fact that if $(F_n)$ is a F\o lner sequence, so is $(F_ng_n)$ for any choice of the $g_n$'s, see also \cite[formula (5)]{BBF}).
\begin{defn}
For $S\subset G$ and a finite $F\subset G$ denote
\[
\underline D_F(S)=\inf_{g\in G} \frac{|S\cap Fg|}{|F|}, \ \ \ \overline D_F(S)=\sup_{g\in G} \frac{|S\cap Fg|}{|F|}.
\]
If $(F_n)$ is a F\o lner sequence then we define two values
\[
\underline D(S)=\limsup_{n\to\infty} \underline D_{F_n}(S) \ \ \ \text{ and } \ \ \ \overline D(S)=\liminf_{n\to\infty} \overline D_{F_n}(S),
\]
which we call the \emph{lower} and \emph{upper} \emph{Banach densities} of $S$, respectively.
\end{defn}
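For instance, for $G=\mathbb Z$ with $F_n=\{1,2,\dots,n\}$ and $S=2\mathbb Z$, every translate $F_ng$ contains between $\lfloor n/2\rfloor$ and $\lceil n/2\rceil$ even numbers, hence $\underline D(S)=\overline D(S)=\frac12$. On the other hand, the set $S=\bigcup_{k\ge0}\{4^k,4^k+1,\dots,2\cdot4^k\}$, whose blocks and gaps have unbounded lengths, satisfies $\underline D(S)=0$ and $\overline D(S)=1$.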
Note that $\overline D(S) = 1- \underline D(G\setminus S)$. The following fact is fairly standard; we include its proof for completeness.
\begin{lem}\label{bd}
Regardless of the set $S$, the values of $\underline D(S)$ and $\overline D(S)$ do not depend on the
F\o lner sequence, the limits superior and inferior in the definition are in fact limits, moreover,
\begin{align*}
\underline D(S) &= \sup\{\underline D_F(S): F\subset G, F \text{ is finite}\}\ \ \ \text{ and } \\
\overline D(S) &= \,\inf\,\{\overline D_F(S): F\subset G, F \text{ is finite}\}\ge \underline D(S).
\end{align*}
\end{lem}
\begin{proof}
We will only show that
$$
\liminf_n\underline D_{F_n}(S)\ge\sup\{\underline D_F(S): F\subset G, F \text{ is finite}\}.
$$
This, and an analogous symmetric statement, clearly imply the assertion.
Fix some $\varepsilon>0$ and let $F$ be a finite set such that
$$
\underline D_F(S)\ge \sup\{\underline D_{F'}(S): F'\subset G, F' \text{ is finite}\}-\varepsilon.
$$
Let $n$ be so large that $F_n$ is $(F,\varepsilon)$-invariant. Given $g\in G$, we have
$$
|S\cap Ffg|\ge \underline D_F(S)|F|,
$$
for every $f\in F_n$. This implies that there are at least $\underline D_F(S)|F||F_n|$ pairs $(f',f)$ with $f'\in F, f\in F_n$ such that
$f'fg\in S$. This in turn implies that there exists at least one $f'\in F$ for which
$$
|S\cap f'F_ng|\ge \underline D_F(S)|F_n|.
$$
Since $f'\in F$ and $F_n$ is $(F,\varepsilon)$-invariant (and hence so is $F_ng$), we have
$$
|S\cap f'F_ng|\le|S\cap FF_ng|\le |S\cap F_ng|+\varepsilon|F_n|,
$$
which yields
$$
|S\cap F_ng|\ge (\underline D_F(S)-\varepsilon)|F_n|.
$$
We have proved that $\underline D_{F_n}(S)\ge \underline D_F(S)-\varepsilon$, which ends the proof.
\end{proof}
\subsection{Some facts from symbolic dynamics}
Let $\Lambda$ be a finite set with discrete topology. There exists a standard action of $G$ on $\Lambda^G$ (called the \emph{shift} action), defined as follows: $(gx)(h)=x(hg)$. $\Lambda^G$ with the product topology and the shift action of $G$ becomes a zero-dimensional dynamical system, called the \emph{full shift over $\Lambda$}. A \emph{symbolic dynamical system over $\Lambda$} is any closed, $G$-invariant subset $X$ of the full shift. In the following paragraph we will need the notions of a
block and an associated cylinder set. Let $F\subset G$ be a finite set. By a \emph{block with domain $F$} we will understand any element $B\in \Lambda^F$. The \emph{cylinder corresponding to $B$} is the set
$$
[B] = \{x\in\Lambda^G:x|_F = B\}.
$$
If $X$ is a symbolic system, then the set $X_F = \{B\in\Lambda^F: X\cap [B]\neq\varnothing\}$ is interpreted as the family of blocks with domain $F$ which \emph{occur} in $X$.
Given a symbolic system $X$, we can construct its \emph{topological factor} in
form of a symbolic system $Y$ over another finite alphabet, say $\Delta$, applying so-called \emph{sliding block code with finite horizon}. Such a factor is determined by a mapping $\Pi:X_F\to\Delta$ called \emph{the code}, where the set $F$ is finite and called \emph{the horizon} (of the code).
The code $\Pi$ extends to a mapping $\pi:X\to \Delta^G$ by the formula $x\mapsto y$, where $y$ is given by
$$
y(g) = \Pi(gx|_F).
$$
Then the set $Y=\pi(X)$ is a closed, shift-invariant subset of $\Delta^G$. In practice, the fact that $Y$ is a topological factor of $X$ is verified by checking whether each element $y$ of $Y$ is \emph{determined} term by term by an $x\in X$ by means of some algorithm (procedure, reasoning) \emph{with finite horizon}, i.e., whether we can decide about the symbol $y(g)$ using only the information from
\begin{enumerate}
\item $gx|_F$ (i.e., the symbolic contents of $x$ within the copy of $F$ shifted by $g$), and
\item some \emph{finite bank of information} (i.e., the mapping $\Pi$, which in practice is the set of instructions how to deduce $\Pi(B)$ given $B\in\Lambda^F$).
\end{enumerate}
We will refer to this intuitive understanding of factor maps between symbolic systems several times.
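As a toy illustration (ours, under obvious simplifying assumptions) of the formula $y(g)=\Pi(gx|_F)$ in the simplest case $G=\mathbb Z$, the following sketch applies a code with horizon $F=\{0,1\}$ to a finite portion of a $0$--$1$ sequence; the particular code $\Pi$ (exclusive or of the two entries) is chosen arbitrarily.
\begin{verbatim}
# Toy illustration of a sliding block code for G = Z with horizon F = {0, 1}:
# the (arbitrarily chosen) code Pi sends a block (a, b) to a XOR b, and the
# factor map is obtained by applying Pi at every position, y(g) = Pi(g x|_F).
def sliding_block_code(x, Pi, horizon):
    """Apply the code Pi along a finite portion of the sequence x."""
    n = len(x) - max(horizon)
    return [Pi(tuple(x[g + f] for f in horizon)) for g in range(n)]

x = [0, 1, 1, 0, 1, 0, 0, 1]
y = sliding_block_code(x, lambda block: block[0] ^ block[1], horizon=(0, 1))
print(y)    # [1, 0, 1, 1, 1, 0, 1]
\end{verbatim}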
In case $X$ is transitive with a transitive point $x$ (i.e., $X$ equals the orbit-closure of $x$), $Y$ is a topological factor of $X$ via a mapping $\pi$ and $y=\pi(x)$ (in which case $y$ is a transitive point for $Y$), then, abusing slightly the terminology, we will say that \emph{$y$ is a topological factor of $x$} (because in such case the ``finite bank of information'' about the factor map from $X$ to $Y$ can be reconstructed from the combinatorial relation between $x$ and $y$).
It is now not hard to see that if $(x_n, y_n)$ is a sequence of pairs in the Cartesian square of $\Lambda^G$ converging to some $(x,y)$, and $y_n$ is a topological factor of $x_n$ for every $n$, with a common (independent of $n$) coding horizon, then $y$ is a topological factor of $x$ (with the same horizon).
We should mention here another ingredient needed later in this work, connecting upper Banach density with
invariant measures. We will use it only for symbolic systems, although its generality is much wider. We
skip the proof, which is an immediate consequence of the Ergodic Theorem for amenable groups (\cite[Theorem 1.2]{L}).
\begin{lem}\label{measure}
Let $x\in\Lambda^G$ be a symbolic element, let $F\subset G$ be a finite set, and let $B\in\Lambda^F$ be a block. Then
$$
\mu([B])\le \overline D(\{g: gx|_F = B\}),
$$
for any invariant measure $\mu$ supported by the orbit closure of $x$.
\end{lem}
We will be using the notion of topological entropy for actions of amenable groups, but mainly in the case of symbolic systems. In this case it is completely analogous to that for symbolic systems with the action of $\mathbb{Z}$. In this note, by $\log$ we will mean $\log_2$.
\begin{defn}
The \emph{topological entropy} of a symbolic dynamical system $X$ is the limit
\[\mathbf{h}(X)=\lim_{n\to\infty}\frac{1}{\abs{F_n}}\log N(F_n),\]
where $N(F_n)$ is the \emph{$F_n$-complexity of $X$}, i.e., the cardinality $|X_{F_n}|$ of different blocks with domain $F_n$ occurring in $X$.
\end{defn}
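For example, for the full shift $X=\Lambda^G$ one has $N(F_n)=|\Lambda|^{|F_n|}$, hence $\mathbf{h}(X)=\log|\Lambda|$, while a subshift has entropy zero precisely when its complexity grows subexponentially along the F\o lner sequence.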
The limit is known to exist and, by the Ornstein--Weiss Lemma, does not depend on the choice of the F\o lner sequence, see e.g. \cite[Theorem 6.1]{LW}. We will also need the following recent result (see \cite{DF}):
\begin{thm}\label{infimum}
The topological entropy of a symbolic dynamical system $X$ equals
\[\inf_F\frac{1}{\abs{F}}\log N(F),\]
where $F$ ranges over all finite subsets of $G$ and $N(F)$ is the $F$-complexity of $X$.
\end{thm}
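The following sketch (again only our illustration, for $G=\mathbb Z$ and $F=\{0,\dots,n-1\}$) estimates the normalized log-complexity $\frac1{|F|}\log N(F)$ from a long finite portion of a single symbolic element; the two sample sequences are hypothetical.
\begin{verbatim}
# Illustration for G = Z, F = {0, ..., n-1}: estimate the normalized
# log-complexity (1/|F|) log2 N(F) from a long finite portion of x.
import math, random

def window_complexity(x, n):
    """Number of distinct length-n blocks occurring in the finite word x."""
    return len({tuple(x[i:i + n]) for i in range(len(x) - n + 1)})

periodic = [0, 1, 1] * 4000                              # entropy zero
noisy = [random.randint(0, 1) for _ in range(12000)]     # "generic" point
for n in (4, 8, 12):
    for name, x in (("periodic", periodic), ("noisy", noisy)):
        c = window_complexity(x, n)
        print(name, n, c, round(math.log2(c) / n, 3))
# For the periodic point the normalized complexity tends to 0; for the noisy
# one it stays close to 1 = log2|Lambda| on windows of this length.
\end{verbatim}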
It is well known, for actions of amenable groups, that if $Y$ is a topological factor of $X$ then
$\mathbf{h}(Y)\le\mathbf{h}(X)$.
If $X$ is transitive with a transitive point $x$ then $X_{F_n}$ coincides with the family of blocks $\{gx|_{F_n}:g\in G\}$, so that $\mathbf{h}(X)$ can be evaluated by examining $x$ only. In such case we will alternatively denote $\mathbf{h}(X)$ by $\mathbf{h}(x)$ and call it \emph{the entropy of $x$}. This convention will be used later to define \emph{entropy of a quasitiling}.
At some point we will also refer to measure-theoretic entropy and the variational principle. We choose to skip discussing these ingredients here; the necessary information is provided near where they are applied.
We need to say a few words about topological joinings of symbolic systems. We will restrict ourselves to transitive systems. Let $x$ and $y$ be two symbolic elements with possibly different alphabets, say $\Lambda$ and $\Delta$. The pair $(x,y)$ can be viewed as a symbolic element with the alphabet $\Lambda\times\Delta$. The shift-orbit closure of $(x,y)$, understood in this way, is an example of a \emph{topological joining} $X\vee Y$ between the systems $X$ and $Y$ generated (as shift-orbit closures) by $x$ and $y$, respectively. There
are many other joinings of $X$ and $Y$ (for instance, the direct product), but we will mostly use joinings of the above form. Both $X$ and $Y$ are topological factors of $X\vee Y$ (by projections), and we have the standard inequality
$$
\mathbf{h}(X\vee Y)\le \mathbf{h}(X)+\mathbf{h}(Y).
$$
At one occasion we will also use a countable product of subshifts. What we need to know about such products is that they are zero-dimensional and that the topological entropy equals the sum of the series of entropies of the component subshifts. These are standard facts and we skip their proofs.
\section{Quasitilings and tilings}
Our definitions given below are slightly different from the original ones in \cite{OW2}; however, most of the
differences are inessential. Sometimes we will refer to quasitilings and tilings defined below as \emph{static} as opposed to \emph{dynamical}, which will be introduced later.
\begin{defn} A \emph{quasitiling} is determined by two objects:
\begin{enumerate}
\item a finite collection $\mathcal CS(\mathcal CT)$ of finite subsets of $G$ containing the unity $e$,
called \emph{the shapes}.
\item a finite collection $\mathcal CC(\mathcal CT) = \{C(S):S\in\mathcal CS(\mathcal CT)\}$ of disjoint subsets of $G$, called \emph{center sets} (for the shapes).
\end{enumerate}
The quasitiling is then the family $\mathcal CT=\{(S,c):S\in\mathcal CS(\mathcal CT),c\in C(S)\}$. We require that the map $(S,c)\mapsto Sc$ is injective.\footnote{This requirement is stronger than asking that different tiles have different centers. Two tiles $Sc$ and $S'c'$ may be equal even though $c\neq c'$
(this is even possible when $S=S'$). However, when the tiles are disjoint, then the (stronger) requirement
follows automatically from the fact that the centers belong to the tiles.} Hence, by the \emph{tiles} of $\mathcal CT$ (denoted by the letter $T$) we will mean either the sets $Sc$ or the pairs $(S,c)$ (i.e., the tiles with defined centers), depending on the context.
\end{defn}
\begin{defn}\label{qt} Let $\varepsilon\in[0,1)$ and $\alpha\in(0,1]$. A quasitiling $\mathcal CT$ is called
\begin{enumerate}
\item \emph{$\varepsilon$-disjoint} if there exists a mapping $T\mapsto T^\circ$ ($T\in\mathcal CT$) such that
\begin{itemize}
\item $T^\circ$ is a $(1-\varepsilon)$-subset of $T$, and
\item $T\neq T'\implies T^\circ\cap {T'}^\circ=\varnothing$;
\end{itemize}
\item \emph{$\alpha$-covering} if $\underline D(\bigcup\mathcal CT)\ge\alpha$;
\item an (exact) \emph{tiling} if it is a partition of $G$.
\end{enumerate}
\end{defn}
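For instance, for $G=\mathbb Z$ one may take the shapes $S_1=\{0,1\}$ and $S_2=\{0,1,2\}$ with the center sets $C(S_1)=5\mathbb Z$ and $C(S_2)=5\mathbb Z+2$; the resulting quasitiling is disjoint, $1$-covering and, in fact, an exact tiling of $\mathbb Z$.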
\begin{rem}\label{centers}
Suppose $\mathcal CT$ is a quasitiling and $T\mapsto T^\heartsuit$ associates to the tiles of $\mathcal CT$ their
``modifications'', disjoint for different tiles. The ``modifications'' are assumed bounded in the
following sense: if $T=Sc$ is a tile (with shape $S$ and center $c$) then $T^\heartsuit c^{-1}$ is contained in some a priori given finite set $F$. Then the collection $\mathcal CT^\heartsuit = \{T^\heartsuit:T\in\mathcal CT\}$ gives rise to a new, disjoint quasitiling. We need, however, to redefine the centers so that they fall inside the new tiles (there are a priori no other restrictions). Once this is done, the collection of the new shapes is determined (and it is finite) and so are the corresponding center sets for each shape. By disjointness of the new tiles, all other requirements in the definition of quasitilings are fulfilled. However, a careless assignment of the centers may drastically enlarge the collection of shapes, and thus excessively increase the entropy of the quasitiling (see Definition \ref{dynam}). To avoid that, on every such occasion we will, by default, apply one deterministic procedure, as follows. Let us enumerate the set $F=\{g_1,g_2,\dots,g_l\}$. Now, if $T^\heartsuit$ is a tile obtained from $T=Sc$ then we set its center at the point $g\in T^\heartsuit$ such that $gc^{-1}$ has the smallest index in this enumeration among the elements of $T^\heartsuit c^{-1}$. This arrangement uses only the information from the quasitiling $\mathcal CT$, the mappings $T\mapsto T^\heartsuit$, and the ``finite bank'' of information about the ordering of the finite set $F$.
\end{rem}
\begin{lem}\label{cov1} Let $\mathcal CT$ be a $(1-\varepsilon)$-covering quasitiling. Suppose $T\mapsto T^\heartsuit$
associates to the tiles of $\mathcal CT$ their $\alpha$-subsets, disjoint for different tiles.
Then the quasitiling $\mathcal CT^\heartsuit=\{T^\heartsuit:T\in\mathcal CT\}$ is $\alpha(1-\varepsilon)$-covering.
In particular, for an $\varepsilon$-disjoint, $(1-\varepsilon)$-covering quasitiling $\mathcal CT$, the disjoint quasitiling $\mathcal CT^\circ$ (as in Definition \ref{qt}) is $(1-\varepsilon)^2$-covering.
\end{lem}
\begin{proof} Denote $E=\bigcup\mathcal CS(\mathcal CT)$.
Let $\mathcal CT_F$ be the collection of tiles $T\in\mathcal CT$ entirely contained in some finite set $F$.
As easily verified, all other tiles $T\in\mathcal CT$ are disjoint from the $EE^{-1}$-core of $F$. Denote also $\mathcal CT_F^\heartsuit = \{T^\heartsuit:T\in\mathcal CT_F\}$ and
$$
\theta(F) = \left|F\cap\bigcup\mathcal CT_F^\heartsuit\right|.
$$
Clearly, $|F\cap\bigcup\mathcal CT_F|\le \theta(F)\frac1\alpha$. Thus and by the $(1-\varepsilon)$-covering assumption, for any $\xi>0$ the following holds
$$
(1-\varepsilon-\xi)|F|\le |F\cap\bigcup\mathcal CT|\le \theta(F)\tfrac1{\alpha}+|F\setminus F_{EE^{-1}}|\le \theta(F)\tfrac1\alpha+\xi|F|,
$$
if $F$ is ``sufficiently invariant'' (see Lemma \ref{estim}), and then the same holds for any shifted set $Fg$. Thus
$$
\theta(Fg)\ge \alpha(1-\varepsilon-2\xi)|F|.
$$
Since, by Lemma \ref{bd}, $\underline D(\bigcup\mathcal CT^\heartsuit)\ge \inf_g\frac{\theta(Fg)}{|F|}$, and $\xi$ is arbitrary, we obtain the assertion.
\end{proof}
\begin{defn}\label{dynam}
Every quasitiling $\mathcal CT$ can be represented in a symbolic form, as a point $x_{\mathcal CT}\in\Lambda^G$, with the alphabet $\Lambda = \mathcal CS(\mathcal CT)\cup\{0\}$, as follows: $x_{\mathcal CT}(g)=S$ if $g\in C(S)$, and $x_{\mathcal CT}(g)=0$ otherwise. Let
$X_{\mathcal CT}$ be the orbit closure of $x_{\mathcal CT}$ under the shift action. This system is called the \emph{dynamical quasitiling} (generated by $\mathcal CT$). If $\mathcal CT$ is a tiling, we obtain a \emph{dynamical tiling}.
According to our earlier convention, by the \emph{entropy} of $\mathcal CT$ (denoted as $\mathbf{h}(\mathcal CT)$) we will understand the topological entropy of $X_{\mathcal CT}$.
\end{defn}
\begin{rem}\label{dyntil}
Note that every element of $X_{\mathcal CT}$ represents a quasitiling with the same set of shapes as $\mathcal CT$.
Moreover, if $\mathcal CT$ has any of the following properties: $\varepsilon$-disjointness, disjointness, $(1-\varepsilon)$-covering, being a tiling, then every quasitiling in $X_{\mathcal CT}$ has the same property.
\end{rem}
The following lemmas will be used in the entropy estimates of some quasitilings and tilings.
\begin{lem}\label{combinat}
There exists a function $\Theta:(0,1)\to(0,1)$, with $\lim_{\varepsilon\to 0}\Theta(\varepsilon) = 0$,
such that for any finite set $F\subset G$ and $\varepsilon\in(0,1)$ the number of all subsets of $F$ with cardinality not
exceeding $\varepsilon|F|$ is smaller than $2^{|F|\Theta(\varepsilon)}$.
\end{lem}
The elementary proof is based on Stirling's formula.
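For instance, for $\varepsilon\le\frac12$ one may essentially take $\Theta(\varepsilon)$ to be the binary entropy function $H(\varepsilon)=-\varepsilon\log\varepsilon-(1-\varepsilon)\log(1-\varepsilon)$, since the number of such subsets is at most $\sum_{k\le\varepsilon|F|}\binom{|F|}{k}\le 2^{H(\varepsilon)|F|}$ and $H(\varepsilon)\to0$ as $\varepsilon\to0$.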
\begin{lem}\label{entest}
For each positive integer $r$ and $\varepsilon>0$ there exists a $\delta=\delta(r,\varepsilon)>0$ for which the following statement holds:
Let $\mathcal CT$ be a $\frac14$-disjoint quasitiling of $G$. Suppose $\mathcal CS(\mathcal CT)$ can be divided into $r$ disjoint \emph{classes} $\mathcal CS_1,\mathcal CS_2,\dots,\mathcal CS_r$ such that, for each $i=1,2,\dots,r$, we have
\begin{align}
&s_i=\min\{|S|:S\in\mathcal CS_i\}\ge\frac1\delta, \\
&|\mathcal CS_i|\le 2^{\varepsilon s_i}.
\end{align}
Then $\mathbf{h}(\mathcal CT)<3\varepsilon$.
\end{lem}
\begin{proof}
Let $E=\bigcup\mathcal CS(\mathcal CT)$. Let $F=F_n$ (the element of the F\o lner sequence) for some large index $n$ (recall that then $|F|$ is also large).
We can assume that $F$ is $(E,\frac12)$-invariant. We will count the family $\{gx|_F:g\in G\}$, where $x$ is the symbolic representation of $\mathcal CT$.
Let $\mathcal CT_{Fg}$ denote the collection of tiles with centers in $Fg$. Notice that these tiles are contained in $EFg$. Further, let $\mathcal CT^\circ_{Fg}=\{T^\circ:T\in\mathcal CT_{Fg}\}$ (a disjoint selection of $\frac34$-subsets from each member of $\mathcal CT_{Fg}$). Denote by $\mathcal CT_{Fg,i}$ the subfamily of $\mathcal CT_{Fg}$ consisting of the tiles with shapes in $\mathcal CS_i$, and finally denote $\mathcal CT^\circ_{Fg,i}=\{T^\circ:T\in\mathcal CT_{Fg,i}\}$. The cardinalities of $\mathcal CT^\circ_{Fg,i}$ and $\mathcal CT_{Fg,i}$ are obviously the same, and the size of each $T^\circ\in\mathcal CT_{Fg,i}$
is at least $\frac34|T|$, hence at least $\frac34s_i$. By disjointness of the tiles $T^\circ$,
we have
$$
\sum_{i=1}^r s_i|\mathcal CT_{Fg,i}|\le \frac{4|EF|}{3}\le 2|F|.
$$
In particular, since each $s_i$ is not less than $\frac1\delta$, we get
$|\mathcal CT_{Fg}|\le 2\delta|F|$. Now replace each symbol $S$ in the symbolic representation of $x$ by the symbol $i$ such that $S\in\mathcal CS_i$. The above procedure replaces $x$ by a new symbolic element $\hat x$, over the alphabet $\{0,1,2,\dots,r\}$. Since there are at most $2\delta|F|$ nonempty terms in the symbolic representation of $gx|_F$ (hence in $g\hat x|_F$), the number of possible blocks $g\hat x|_F$ does not exceed $2^{\Theta(2\delta)|F|}\cdot(r+1)^{2\delta|F|}$.
Next, we will bound the number of different blocks of the form $gx|_F$ which produce the same block of the form $g\hat x|_F$. So, fix some $g\in G$ and observe the block $g\hat x|_F$. Each symbol $i$ appears in $g\hat x|_F$ exactly $|\mathcal CT_{Fg,i}|$ times. This creates at most $|\mathcal CS_i|^{|\mathcal CT_{Fg,i}|}\le 2^{\varepsilon s_i|\mathcal CT_{Fg,i}|}$ possibilities for the configurations of the symbols of $x$ replacing the symbols $i$. Jointly, there are at most
$$
\prod_{i=1}^r 2^{\varepsilon s_i|\mathcal CT_{Fg,i}|} = 2^{\varepsilon\sum_{i=1}^rs_i|\mathcal CT_{Fg,i}|}\le 2^{2\varepsilon|F|}
$$
blocks of the form $gx|_F$ for each block of the form $g\hat x|_F$. Altogether, there are at most
$2^{\Theta(2\delta)|F|}\cdot(r+1)^{2\delta|F|}\cdot 2^{2\varepsilon|F|}$ blocks of the form $gx|_F$, which, after taking logarithm, dividing by $|F|$ and letting $|F|\to\infty$, yields the entropy estimate
$$
\mathbf{h}(\mathcal CT)\le \Theta(2\delta)+ 2\delta\log(r+1)+2\varepsilon.
$$
Since, by Lemma \ref{combinat}, $\Theta(2\delta)\to 0$ as $\delta\to 0$,
the assertion follows.
\end{proof}
We will also be using the following lemma. The easy proof resembles part of the proof of the preceding lemma (the argument showing small entropy of $\hat x$).
\begin{lem}\label{symbolic_small_entropy}
For any finite set $\Lambda$ in which we select one element (and call it ``zero''), and any $\varepsilon>0$, there
exists a $\delta>0$ such that every symbolic element $x$ with the alphabet $\Lambda$ in which the upper Banach density of non-zero symbols is smaller than $\delta$ has entropy less than $\varepsilon$.
\end{lem}
\section{The construction of an exact tiling}
In this section we construct an exact tiling of $G$ having ``well-invariant'' shapes and small entropy.
This is done in three steps: First we construct a quasitiling $\mathcal CT$ which is only $\varepsilon$-disjoint and $(1-\varepsilon)$-covering, then we modify it to a disjoint quasitiling $\mathcal CT^\circ$, which then is transformed
into an exact tiling $\mathcal CT^*$.
The following lemma is almost the same as \cite[I.\S 2. Theorem 6]{OW2}, differing from it only in small details; in particular, our $(1-\varepsilon)$-covering is defined in terms of lower Banach density.
Many ideas in our proof given below are the same as in \cite{OW2}, however we needed to add somewhat lengthy lower Banach density estimates.
\begin{lem}\label{quasitilings} Let $G$ be a countable amenable group with a F\o lner sequence $(F_n)$ of symmetric sets containing the unity. Given $\varepsilon>0$, there exists a positive integer $r=r(\varepsilon)$ such that for each positive integer $n_0$, there exists an $\varepsilon$-disjoint, $(1-\varepsilon)$-covering quasitiling $\mathcal CT$ of $G$ with $r$ shapes $\{F_{n_1},\dots,F_{n_r}\}$, where $n_0<n_1<\cdots<n_r$.
\end{lem}
\begin{proof}
Find $r$ such that $(1-\frac\varepsilon2)^r<\varepsilon$. This is going to be the cardinality of the family of shapes.
Choose integers $n_1=n_0+1, n_2,\dots, n_r$ so that they increase and for each pair of indices $j<i$, $j,i\in\{1,2,\dots, r\}$ the set $F_{n_i}$ is $(F_{n_j},\delta_j)$-invariant, where $\delta_j$ will be specified later. We let $\mathcal CS(\mathcal CT) = \{F_{n_j}: j=1,\dots,r\}$ be our family of shapes. With this choice, the assertions about the shapes and their number are fulfilled. It remains to construct the corresponding center sets $C (F_{n_j})$ so as to satisfy $\varepsilon$-disjointness and $(1-\varepsilon)$-covering of $\mathcal CT$.
We proceed by induction over $j$ decreasing from $r$ to 1.
Consider the collection of all such subsets $C\subset G$ that $\{F_{n_r}c:c\in C\}$ is an $\varepsilon$-disjoint quasitiling (with one shape). As easily verified, when ordered by inclusion this collection satisfies the hypothesis of Zorn's Lemma (the disjoint family for the union of a chain can be found in a limit procedure). Thus, there exists a maximal element in our collection (note that it is nonempty). We now pick one such maximal element and denote it $C_r$. At this point we define $C (F_{n_r})$ (the center set for the shape $F_{n_r}$) as $C_r$. We let $H_r = F_{n_r}C_r$ denote the part of the group covered by the so far constructed quasitiling. In order to estimate $\underline D(H_r)$ from below we will estimate $\underline D_{F_{n_r}}(H_r)$. If $g\in C_r$ then $F_{n_r}g$ is contained in $H_r$,
hence $\frac{|H_r\cap F_{n_r}g|}{|F_{n_r}|}=1$. For $g\notin C_r$, suppose that this ratio is strictly smaller than $\varepsilon$. This implies that $F_{n_r}g$ can be added to the $\varepsilon$-disjoint family $\{F_{n_r}c: c\in C_r\}$, contradicting the maximality of $C_r$.
That is, we have proved that for any $g\in G$, $\frac{|H_r\cap F_{n_r}g|}{|F_{n_r}|}\ge\varepsilon$, i.e.,
that $\underline D_{F_{n_r}}(H_r)\ge\varepsilon$.
Thus $\underline D(H_r)\ge\varepsilon$ (which is strictly larger than $\frac\varepsilon2=1-(1-\frac\varepsilon2)^1$).
Fix some $j\in\{1,2,\dots,r-1\}$ and suppose we have constructed an $\varepsilon$-disjoint quasitiling $\{F_{n_i}c:j+1\le i\le r,\ c\in C_i\}$ (with $C_i$ abbreviating $C(F_{n_i})$, the center set for the shape $F_{n_i}$), whose union
$$
H_{j+1} = \bigcup_{i=j+1}^r\ F_{n_i}C_i
$$
has lower Banach density strictly larger than $1-(1-\frac\varepsilon2)^{r-j}$ (this is our inductive hypothesis on $H_{j+1}$ and it is fulfilled for $H_r$). We need to go one step further in our ``decreasing induction'', i.e., add a center set $C_j$ for the shape $F_{n_j}$. Consider the collection of all subsets $C\subset G$ such that the family $\{F_{n_i}c:j+1\le i\le r,\ c\in C_i\}\cup\{F_{n_j}c:c\in C\}$ is an $\varepsilon$-disjoint quasitiling (this includes that $C$ is disjoint from $\bigcup_{i=j+1}^rC_i$). As before, when ordered by inclusion, this collection satisfies the hypothesis of Zorn's Lemma. Thus, there exists a maximal element $C_j$ (this time possibly empty). We set $C (F_{n_j})= C_j$ and denote
$$
H_j = \bigcup_{i=j}^r\ F_{n_i}C_i.
$$
Our goal is to estimate from below the lower Banach density of $H_j$. By Lemma \ref{bd}, it suffices to estimate $\underline D_F(H_j)$ for just one finite set $F$ which we will define in a moment. Define $B=\Bigl(\bigcup_{i=j+1}^rF_{n_i}F^{-1}_{n_i}\Bigr)F_{n_j}$. Clearly, $B$ contains $F_{n_j}$
(hence the unity), and, as easily verified, it has the following property:
\begin{itemize}
\item
whenever $F^{-1}_{n_j}F_{n_i}c\cap A\neq\varnothing$, for some
$i\in\{j+1,\dots,r\}$, $c\in G$ and $A\subset G$, then $F_{n_i}c\subset BA$.
\end{itemize}
Let $n$ be so large that $F_n$ is $(B,\delta_j)$-invariant and that $\underline D_{F_n}(H_{j+1})>1-(1-\frac\varepsilon2)^{r-j}$ (the latter is possible due to the assumption on $\underline D(H_{j+1})$). Now we define the aforementioned set $F$ as $F=F_{n_j}F_n$.
Fix some $g\in G$ and define
$$
\alpha_g = \frac{|H_{j+1}\cap F_ng|}{|F_n|} \text{ \ \ and \ \ } \beta_g = \frac{|H_{j+1}\cap BF_ng|}{|BF_n|}.
$$
Notice that
\begin{equation}\label{minus}
\alpha_g \ge \underline D_{F_n}(H_{j+1}) > 1-(1-\tfrac\varepsilon2)^{r-j}.
\end{equation}
Also, we have
\begin{align}
\beta_g &\ge \frac{|H_{j+1}\cap F_ng|}{(1+\delta_j)|F_n|}=\frac{\alpha_g}{1+\delta_j}\label{zero} \ , \text{ \ \ and }\\
\beta_g &\le \frac{|H_{j+1}\cap F_ng|+|BF_ng\setminus F_ng|}{|F_n|}\le \alpha_g + \delta_j.
\end{align}
Note that since $F_{n_j}\subset B$ and $F_n$ is $(B,\delta_j)$-invariant, $F_n$ is automatically
$(F_{n_j},\delta_j)$-invariant. Thus
\begin{equation}\label{jeden}
\frac{|H_{j+1}\cap Fg|}{|F|}\ge \frac{|H_{j+1}\cap F_ng|}{(1+\delta_j)|F_n|}= \frac{\alpha_g}{1+\delta_j}\ge\frac{\beta_g-\delta_j}{1+\delta_j}.
\end{equation}
Consider only these finitely many component sets $F_{n_i}c$ of $H_{j+1}$ (i.e., with $i\in\{j+1,\dots,r\},\ c\in C_i$) for which $F^{-1}_{n_j}F_{n_i}c$ has a nonempty intersection with $F_ng$, and denote by $E_g$ the union of so selected components $F_{n_i}c$. By the property of $B$ (with $A=F_ng$), $E_g$ is a subset of $BF_ng$ (and also of $H_{j+1}$), so
\begin{equation}\label{niewiem}
|E_g|\le |H_{j+1}\cap BF_ng|= \beta_g|BF_n|\le\beta_g(1+\delta_j)|F_n|.
\end{equation}
Each of the selected components $F_{n_i}c\subset E_g$ is $(F^{-1}_{n_j},\delta_j)$-invariant (each $F_{n_j}$ is symmetric), hence, when multiplied on the left by $F^{-1}_{n_j}$ it can gain at most $\delta_j|F_{n_i}c|$ new elements. Thus the set $E_g$, when multiplied on the left by $F^{-1}_{n_j}$, can gain at most $\delta_j\sum_{F_{n_i}\!c\subset E_g}|F_{n_i}c|$ new elements. On the other hand, denoting by $(F_{n_i}c)^\circ$ the pairwise disjoint sets (contained in respective sets $F_{n_i}c$) as in the definition of $\varepsilon$-disjointness, we also have
$$
\sum_{F_{n_i}\!c\subset E_g}|F_{n_i}c|\le \frac1{1-\varepsilon}\sum_{F_{n_i}\!c\subset E_g}|(F_{n_i}c)^\circ| = \frac1{1-\varepsilon}\Bigl|\bigcup_{F_{n_i}\!c\subset E_g}(F_{n_i}c)^\circ\Bigr|\le \frac1{1-\varepsilon}|E_g|.
$$
Combining this with the preceding statement, we obtain that the set $E_g$, when multiplied on the left by $F^{-1}_{n_j}$, can gain at most $\frac{\delta_j}{1-\varepsilon}|E_g|$ new elements, which is less than $2\delta_j|E_g|$ (we can assume that $\varepsilon<\frac12$).
Denote $H'_{j+1} = F_{n_j}^{-1}H_{j+1}$. By the choice of the components included in $E_g$, the set $F^{-1}_{n_j}E_g$ contains all of $H'_{j+1}\cap F_ng$. Thus, using $(1+2\delta_j)\le (1+\delta_j)^2$ and \eqref{niewiem}, we obtain that
\begin{equation*}
|H'_{j+1}\cap F_ng|\le |F^{-1}_{n_j}E_g| \le (1+2\delta_j)|E_g| \le (1+\delta_j)^3\beta_g|F_n|.
\end{equation*}
Let $N_g = F_ng\setminus H'_{j+1}$. By the above inequality, we know that
\begin{equation}\label{dwa}
|N_g|\ge \bigl(1-(1+\delta_j)^3\beta_g\bigr)|F_n|\ge \bigl(1-(1+\delta_j)^3\beta_g\bigr)\tfrac{|F|}{1+\delta_j},
\end{equation}
where the last inequality follows from $(F_{n_j},\delta_j)$-invariance of $F_n$.
For each $c\in N_g$ we have either $c\in C_j$ and then $\frac{|H_j\cap F_{n_j}c|}{|F_{n_j}|}=1$, or
$\frac{|H_j\cap F_{n_j}c|}{|F_{n_j}|}\ge\varepsilon$ (otherwise $c$ could be added to $C_j$ contradicting its maximality; note that $N_g$ is disjoint from $\bigcup_{i=j+1}^rC_i$). In either case
$|H_j\cap F_{n_j}c|\ge\varepsilon|F_{n_j}|$. This implies that there are at least $\varepsilon|N_g||F_{n_j}|$ pairs $(f,c)$ with $f\in F_{n_j}, c\in N_g$ such that $fc\in H_j$. This in turn implies that there exists at least one $f\in F_{n_j}$ for which
\begin{equation}\label{trzy}
|H_j\cap fN_g|\ge \varepsilon|N_g|.
\end{equation}
Notice that $fN_g$ is contained in $Fg$ (because $N_g\subset F_ng$ and $f\in F_{n_j}$) and disjoint from $H_{j+1}$ ($N_g$ is disjoint from $H'_{j+1}$ which contains $f^{-1}H_{j+1}$). Thus we can estimate, using \eqref{jeden}, \eqref{dwa} and \eqref{trzy}:
\begin{eqnarray*}
\frac{|H_j\cap Fg|}{|F|}&\ge & \frac{|H_{j+1}\cap Fg| + |H_j\cap fN_g|}{|F|}\\
&= & \frac{|H_{j+1}\cap Fg|}{|F|} + \frac{|H_j\cap fN_g|}{|N_g|}\frac{|N_g|}{|F|}\\
&\ge & \frac{\beta_g-\delta_j}{1+\delta_j} + \varepsilon\frac{1-(1+\delta_j)^3\beta_g}{1+\delta_j}.
\end{eqnarray*}
The rest of the proof is elementary calculus. Both terms in the last expression are linear functions of $\beta_g$, the first one
with positive and large slope $\frac1{1+\delta_j}$, the other with negative but small slope $-\varepsilon(1+\delta_j)^2$. Jointly,
the function increases with $\beta_g$. So, we can replace $\beta_g$ by any smaller value, for instance, by $\frac{1-(1-\frac\varepsilon2)^{r-j}}{1+\delta_j}$ (see \eqref{minus} and \eqref{zero}), to obtain
$$
\frac{|H_j\cap Fg|}{|F|}> \frac{1-(1-\tfrac\varepsilon2)^{r-j}}{(1+\delta_j)^2}-\frac{\delta_j}{1+\delta_j} + \varepsilon\bigl(\tfrac1{1+\delta_j}-(1+\delta_j)(1-(1-\tfrac\varepsilon2)^{r-j})\bigr).
$$
Now notice that if we replace the occurrence of $\varepsilon$ in front of the last bracket by $\frac{3\varepsilon}4$, we make the entire expression smaller by
some positive value (independent of $g$). On the other hand, if $\delta_j$ is very small and we remove it completely from the
expression, we will perhaps enlarge it, but only very little. We now specify $\delta_j$ to be so small that if we replace $\varepsilon$ by $\frac{3\varepsilon}4$ and remove $\delta_j$ completely, then the expression will become smaller.
$$
\frac{|H_j\cap Fg|}{|F|}> 1-(1-\tfrac\varepsilon2)^{r-j} +\frac{3\varepsilon}4(1-\tfrac\varepsilon2)^{r-j} = 1-(1-\tfrac\varepsilon2)^{r-j+1} + \xi,
$$
where $\xi>0$ does not depend on $g$. Taking infimum over all $g\in G$ we get, by Lemma \ref{bd},
$$
\underline D(H_j)\ge \underline D_F(H_j) > 1-(1-\tfrac\varepsilon2)^{r-j+1},
$$
and the inductive hypothesis has been derived for $j$.
Once the induction reaches $j=1$ we get that the lower Banach density of $H=H_1$ is larger than $1-(1-\tfrac\varepsilon2)^r$ which,
by the choice of $r$, is larger than $1-\varepsilon$. This concludes the proof of the lemma.
\end{proof}
The next lemma and the following theorem contain our key passage from quasitilings to (exact) tilings.
\begin{lem}\label{disjoint_tilings}
Fix $\varepsilon>0$ and a finite set $K\subset G$. There exists a disjoint $(1-\varepsilon)$-covering quasitiling $\mathcal CT^\circ$ of $G$, such that every shape of $\mathcal CT^\circ$ is $(K,\varepsilon)$-invariant and
$\mathbf{h}(\mathcal CT^\circ)<\varepsilon$.
\end{lem}
\begin{proof}
Let $\xi$ be such that $(1-\xi)^2>1-\varepsilon$ and $\frac{\Theta(\xi)}{1-\xi}\le\frac\varepsilon3$,
where $\Theta(\cdot)$ is defined in Lemma \ref{combinat}. Let $r=r(\xi)$ (as defined in Lemma \ref{quasitilings}) and $\delta=\delta(r,\frac\varepsilon3)$ (as defined in Lemma \ref{entest}).
Let $\mathcal CT$ be the quasitiling delivered by Lemma \ref{quasitilings} for $\xi$ (in the role of $\varepsilon$)
and $n_0$ so large that $F_{n}$ is $(K,\xi)$-invariant and $|F_n|>\frac1{\delta(1-\xi)}$ for each $n\ge n_0$.
We will show that the disjoint quasitiling $\mathcal CT^\circ$ (as in the definition of $\xi$-disjointness) is good. First of all, by Lemma \ref{cov1}, $\mathcal CT^\circ$ is $(1-\xi)^2$-covering (hence also $(1-\varepsilon)$-covering), and by Lemma \ref{ben}, if $\xi$ is small enough, the shapes of $\mathcal CT^\circ$ are $(K,\varepsilon)$-invariant. Next, we will verify that $\mathcal CT^\circ$ satisfies the assumptions of Lemma \ref{entest} (with $\frac\varepsilon3$ in place of $\varepsilon$).
For $i=1,2,\dots, r$, let $\mathcal CS_i$ be the family of shapes of the tiles $T^\circ$ such that $T$ has the shape
$F_{n_i}$. By choosing a subsequence if necessary, we can assume that the sizes of the F\o lner sets satisfy $|F_{n+1}|> 2|F_n|$, which (together with $\xi<\frac12$) ensures that the above families $\mathcal CS_i$ are disjoint. The minimal size $s_i$ of a shape in $\mathcal CS_i$ is at least $|F_{n_i}|(1-\xi)$ which is larger than $\frac1\delta$, as required. The cardinality of $\mathcal CS_i$ is estimated by the number of all $(1-\xi)$-subsets of $F_{n_i}$ (the new center for every such subset is determined by Remark \ref{centers}), that is, by $2^{\Theta(\xi)|F_{n_i}|}\le 2^{\frac{\Theta(\xi)}{1-\xi}s_i}\le 2^{\frac\varepsilon3 s_i}$. Now the application of Lemma \ref{entest} ends the proof.
\end{proof}
\begin{thm}\label{exact_tilings}
Fix $\varepsilon>0$ and a finite set $K\subset G$. There exists an (exact) tiling $\mathcal CT^*$ of $G$, such that every shape of $\mathcal CT^*$ is $(K,\varepsilon)$-invariant, and $\mathbf{h}(\mathcal CT^*)<\varepsilon$.
\end{thm}
\begin{proof} Let $\mathcal CT^\circ$ be the disjoint quasitiling delivered by the preceding lemma for the parameters $\gamma$ and $K$, where $\gamma<\min\{\frac12,\frac\varepsilon2,\frac\delta6\}$ with $\delta$ specified
in Lemma \ref{ben} for $\varepsilon$ and $K$. In the following steps of the construction we will modify this quasitiling so it becomes a tiling (i.e., it will cover all of~$G$).
In every shape $S$ of $\mathcal CT^\circ$ we choose two disjoint subsets, $A(S)$ and $A'(S)$, each of cardinality $\lceil2\gamma\abs{S}\rceil$ (which, we can assume, is smaller than $3\gamma|S|$). Next, if $T^\circ=Sc$ is a tile of $\mathcal CT^\circ$, we let $A(T^\circ)=A(S)c$ (and analogously for $A'(T^\circ)$). The unions of these latter sets over the entire tiling yield two disjoint sets $A$ and $A'$, each having lower Banach density at least $2\gamma(1-\gamma)>\gamma$ (see Lemma \ref{cov1}). Let $B$ denote the set of elements of $G$ that are not covered by the tiles of $\mathcal CT^\circ$. Directly, since $\mathcal CT^\circ$ is $(1-\gamma)$-covering, the upper Banach density of $B$ is less than $\gamma$.
Let $F$ be a finite, symmetric subset of $G$ such that the proportion of elements of $A$ in any translate $Fg$ is at least $\gamma$, the same holds for $A'$, and the proportion of elements of $B$ in $Fg$ is less than $\gamma$. Let $\xi<\frac\varepsilon4$ be so small that any symbolic dynamical system with the alphabet $F\cup\{0\}$ and with the upper Banach density of non-zero symbols smaller than $\xi$ has entropy less than $\frac\varepsilon4$ (see Lemma \ref{symbolic_small_entropy}).
Using Lemma \ref{disjoint_tilings} again (with different parameters), we obtain a disjoint, $(1-\xi)$-covering quasitiling $\mathcal CT'$ with entropy less than $\xi$ (hence less than $\frac\varepsilon4$). Moreover, we can assume an ``improved disjointness'': if $T'_1$ and $T'_2$ are different tiles of $\mathcal CT'$ then $FT'_1\cap FT'_2=\varnothing$; this can be achieved (using Lemmas \ref{estim} and \ref{cov1}) by requesting the disjoint quasitiling to be $(1-\xi')$-covering, with $(F,\xi')$-invariant tiles and entropy less than $\xi'$ (for a small $\xi'$), and then replacing the tiles by their $F$-cores with the centers determined by Remark \ref{centers} (such modification does not increase the entropy because it produces a topological factor by a finite horizon algorithm).
Let $T'$ be a tile of $\mathcal CT'$. We define a relation $R$ between $B\cap T'$ and $A\cap FT'$: $bRa$ if and only if $a\in Fb$. By the definition of $F$, for every $b$ there are at least $\gamma\abs{F}$ elements $a$ such that $bRa$, and for every $a$ there are at most $\gamma\abs{F}$ elements $b$ such that $bRa$. By the marriage theorem (Theorem \ref{marriage}), there exists an injective mapping $\phi_{T'}$ from $B\cap T'$ into $A\cap FT'$ such that $\phi_{T'}(b)\in Fb$ for every $b$ in the domain. The ``improved disjointness'' implies that not only domains, but also images, of the maps $\phi_{T'}$ are disjoint, so that uniting the graphs of $\phi_{T'}$ we obtain an injective map $\phi:B\cap\bigcup\mathcal CT'\to A$. Moreover, we can arrange that whenever $T'=Sc$ and $T''=Sc'$ (i.e., two tiles of $\mathcal CT'$ have the same shape $S$), $B\cap T'' = (B\cap T')c^{-1}c'$ and $A\cap FT'' = (A\cap FT')c^{-1}c'$, then $\phi_{T''}(b) = \phi_{T'}(b{c'}^{-1}c)c^{-1}c'$ (for every $b\in B\cap T''$), i.e., that $\phi_{T'}$ depends only on how $T'$ and $FT'$ contain and intersect the tiles of $\mathcal CT^\circ$. In this manner, the map $\phi$ is determined by $\mathcal CT'$ and $\mathcal CT^\circ$ via a finite horizon algorithm.
Further, let $B'$ be the remaining part of $B$ (not covered by the tiles of $\mathcal CT'$). Again, we define a relation (which we will again denote by $R$) between $B'$ and $A'$, by the same formula: $bRa$ if and only if $a\in Fb$. As before, by the definition of $F$, the assumptions of the marriage theorem are fulfilled, yielding another injective mapping $\phi'$ from $B'$ into $A'$ with $\phi'(b\,')\subset Fb\,'$ (this map, however, is not necessarily determined by a finite horizon algorithm). Uniting the maps $\phi$ and $\phi'$ (in terms of uniting their graphs) we obtain an injective mapping $\Phi$ from $B$ into $A\cup A'$, with the property that for every $b$, $\Phi(b)\in Fb$.
We can now define the desired tiling $\mathcal CT^*$: every tile of $\mathcal CT^*$ will have the form $T^* = T^\circ\cup \set{b\in B: \Phi(b)\in T^\circ}$ for some $T^\circ\in \mathcal CT^\circ$. We define the center of this new tile to be the same as the center for $T^\circ$. Each shape of $\mathcal CT^\circ$ may produce many new shapes of $\mathcal CT^*$, however, since $\set{b\in B: \Phi(b)\in T^\circ}\subset FT^\circ$, the variety of new shapes remains finite.
The center sets for each new shape are then determined automatically. By the construction of $\Phi$, the tiles of $\mathcal CT^*$ are disjoint and cover all of $G$. The added set $\set{b\in B: \Phi(b)\in T^\circ}$ has cardinality at most that of $A(T^\circ)\cup A'(T^\circ)$, hence $|T^*|\le |T^\circ|(1+6\gamma)$. Therefore, by Lemma~\ref{ben} (and the selection of $\gamma$), $T^*$ is $(K,\varepsilon)$-invariant.
It remains to show that $\mathcal CT^*$ has entropy strictly less than $\varepsilon$. Consider the symbolic element $y\in (F\cup\{0\})^G$ defined as follows: $y(b)=g\in F$ if $b\in B'$ and $\phi'(b)=gb$, and $y(b)=0$ for $b\notin B'$. Since $B'$ has upper Banach density less than $\xi$, the upper Banach density of non-zero symbols in $y$ is also less than $\xi$. Thus the topological entropy of $y$ is less than $\frac\varepsilon4$. Now observe that the tiling $\mathcal CT^*$ is determined by the quasitilings $\mathcal CT^\circ$, $\mathcal CT'$ and the contents of $y$, via a finite horizon algorithm ($y$ is not determined by $\mathcal CT^\circ$, $\mathcal CT'$ via a finite horizon algorithm, but once we acquire the information coming from $y$, the finite horizon statement holds). Thus $\mathcal CT^*$ is a topological factor of a topological joining between $\mathcal CT^\circ$ and $\mathcal CT'$, and $y$. Therefore the entropy of $\mathcal CT^*$ is indeed less than $\gamma+\frac\varepsilon4+\frac\varepsilon4<\varepsilon$.
\end{proof}
\section{A congruent sequence of tilings with entropy zero}
In this section we strengthen the preceding result by obtaining exact tilings (with arbitrarily ``well-invariant'' shapes) which have topological entropy zero. This is done in two steps: first we construct a sequence of exact tilings $(\widetilde{\mathcal T}_k)_{k\ge1}$ with entropies tending to zero which is \emph{congruent}, i.e., such that, for each $k\ge 1$, every tile of $\widetilde{\mathcal T}_{k+1}$ equals a union of tiles of $\widetilde{\mathcal T}_k$. Next, we transform this sequence into $(\overline{\mathcal T}_k)_{k\ge1}$, in which every tiling is a topological factor of its successor, and hence all of them have entropy zero.
\begin{lem}\label{sqt}
Fix a sequence $\varepsilon_k> 0$ converging to zero and a sequence $K_k$ of finite subsets of $G$.
There exists a congruent sequence of tilings $\widetilde{\mathcal T}_k$ of $G$ such that the shapes of
$\widetilde{\mathcal T}_k$ are $(K_k,\varepsilon_k)$-invariant and $\mathbf{h}(\widetilde{\mathcal T}_k)<\varepsilon_k$.
\end{lem}
\begin{proof}
Use Theorem~\ref{exact_tilings} to obtain a tiling $\mathcal T^*_1$ whose shapes are $(K_1,\varepsilon_1)$-invariant and whose topological entropy is strictly less than $\varepsilon_1$. We set $\widetilde{\mathcal T}_1=\mathcal T_1^*$. Suppose a tiling $\widetilde{\mathcal T}_k$
(as in the formulation of the lemma) has been constructed. We will construct $\widetilde{\mathcal T}_{k+1}$.
Let us denote by $D_k$ the set $\bigcup \mathcal CS(\widetilde\mathcal CT_k)$. By an application of Lemmas \ref{ben} and \ref{symbolic_small_entropy}, there exists $\delta>0$ such that whenever $\frac{|T'\triangle T|}{|T|}<\delta$ and $T$ is $(K_{k+1},\delta)$-invariant then $T'$ is $(K_{k+1},\varepsilon_{k+1})$-invariant, and any symbolic dynamical system with the alphabet $\mathcal CS(\widetilde\mathcal CT_k)\cup\{0\}$ and upper Banach density of non-zero symbols smaller than $\delta$ has topological entropy less than $\frac{\varepsilon_{k+1}}{2}$. Choose $\delta_k<\min\{\frac{\varepsilon_{k+1}}2,\frac\delta{|D_k|+1}\}$. We can now use Theorem~\ref{exact_tilings}
again, with the parameter $\delta_k$, to obtain a tiling $\mathcal CT^*_{k+1}$ with entropy less than $\delta_k$ (hence strictly less than $\frac{\varepsilon_{k+1}}{2}$), and the shapes of which are $(K'_{k+1},\delta_k)$-invariant, where $K'_{k+1} = K_{k+1}\cup D_k\cup D_k^{-1}$.\footnote{Such shapes are also $(D_k,\delta_k)$-invariant, $(D^{-1}_k,\delta_k)$-invariant, but only $(K_{k+1},2\delta_k)$-invariant; it is so since $D_k$ contains the unity, which we do not assume about $K_{k+1}$.} We need to modify the tiling $\mathcal CT^*_{k+1}$ to make it congruent with $\widetilde\mathcal CT_k$, i.e., ensure that its tiles are unions of the tiles of $\widetilde\mathcal CT_k$. Define a ``modification map'' $T^*\mapsto \widetilde T$ (where $T^*\in\mathcal CT^*_{k+1}$) by $\widetilde T=\bigcup\set{Sc\in \widetilde\mathcal CT_k:c\in T^*}$. The center of $\widetilde T$ is determined according to Remark \ref{centers}. That way we create a modified tiling, denoted $\widetilde\mathcal CT_{k+1}$, congruent with $\widetilde\mathcal CT_k$. It is easily verified that each tile $\widetilde T$ of $\widetilde\mathcal CT_{k+1}$ satisfies
$$
T^*_{D_k^{-1}}\subset \widetilde T\subset D_kT^*
$$
and, clearly, $T^*$ is located between the same two extreme sets, hence
$$
\widetilde T\triangle T^*\subset D_kT^*\setminus T^*_{D_k^{-1}}.
$$
Since $T^*$ is $(D_k,\delta_k)$-invariant (and $D_k$ contains the unity), we have $|D_kT^*|\le (1+\delta_k)|T^*|$. Also, since $T^*$ is
$(D^{-1}_k,\delta_k)$-invariant, $T^*_{D_k^{-1}}$ is a $(1-|D_k|\delta_k)$-subset of $T^*$ (see Lemma \ref{estim}). This yields
$$
\frac{|\widetilde T\triangle T^*|}{|T^*|}\le (|D_k|+1)\delta_k< \delta.
$$
Since $T^*$ is $(K_{k+1},2\delta_k)$-invariant and $2\delta_k\le \delta$, the selection of $\delta$ implies $(K_{k+1},\varepsilon_{k+1})$-invariance of $\widetilde T$.
We will now argue that the modification does not increase the entropy too much. In the argument below we will refer to the tiles of $\widetilde{\mathcal T}_k$ as ``small'', and the tiles of $\mathcal T^*_{k+1}$ as ``large''.
We claim, that in order to determine the modified large tiles, \emph{in addition} to knowing $\mathcal CT^*_{k+1}$, we only need to examine the centers of the small tiles lying \emph{outside} the union of the $D_k$-cores of the large tiles. Indeed, after all such centers have been examined and their corresponding small tiles have been allocated among the large tiles, the remaining part of each large tile $T^*$ (not covered by the above small tiles) can be ``blindly'' included to $\widetilde T$; we do not need to check where exactly the remaining centers of small tiles are. It is so because a point of $T^*$ does not belong to $\widetilde T$ only if it belongs to a small tile with center in a different large tile, say ${T'}^*$. In such case, however, this center does not belong to the $D_k$-core of ${T'}^*$, hence it lies outside the union of all such cores, and such centers have been already examined. So, the necessary information (additional to knowing $\mathcal CT^*_{k+1}$) can be encoded in a symbolic element obtained from the symbolic representation of $\widetilde\mathcal CT_k$ (with non-zero symbols at the centers of the tiles) in which all symbols \emph{inside} the above mentioned $D_k$-cores are ignored, i.e., replaced by zeros. Since the union of these cores has lower Banach density at least $1-\delta$ (because $T_{D_k}^*$ is a $(1- |D_k| \delta_k)$-subset of $T^*$, Lemma \ref{cov1} applies), the upper Banach density of non-zero symbols in the discussed symbolic element is at most $\delta$. Its alphabet is $\mathcal CS(\widetilde\mathcal CT_k)\cup\{0\}$, hence, by the choice of $\delta$, the entropy of such a symbolic element is less than $\frac{\varepsilon_{k+1}}2$. Adding the entropy of $\mathcal CT^*_{k+1}$ we get that the entropy of $\widetilde\mathcal CT_{k+1}$ is strictly less than $\varepsilon_{k+1}$.
\end{proof}
The next statement is perhaps the most important in this paper.
\begin{thm}\label{congruent}
Let $G$ be an infinite countable amenable group.
Fix a converging to zero sequence $\varepsilon_k> 0$ and a sequence $K_k$ of finite subsets of $G$.
There exists a congruent sequence of (exact) tilings $\overline\mathcal CT_k$ of $G$ such that the shapes of
$\overline\mathcal CT_k$ are $(K_k,\varepsilon_k)$-invariant and $\mathbf{h}(\overline\mathcal CT_k)=0$ for each $k$.
\end{thm}
\begin{proof}
We have constructed a congruent sequence of tilings $\widetilde{\mathcal T}_k$, each of entropy strictly less than $\varepsilon_k$. We need to modify them one more time, to kill their entropy completely. To this end we will need another inductive procedure concluded by a limit passage.
First of all, in the construction of the sequence $\widetilde\mathcal CT_k$ we add one more inductive (easily fulfilled) requirement: The tiling $\widetilde\mathcal CT_k$ has entropy strictly less than $\varepsilon_k$, hence there exists a finite set $E_k$ (for example a far enough member of the F\o lner sequence) such that the $E_k$-complexity of $\widetilde\mathcal CT_k$ (i.e., the number of all blocks of the form $g\widetilde x|_{E_k}$, where $\widetilde x$ is the symbolic representation of $\widetilde\mathcal CT_k$) does not exceed $2^{\varepsilon_k|E_k|}$. For later purposes, we assume also that $|E_k|>\frac1{\varepsilon_k}$.
We require that all shapes of $\widetilde{\mathcal T}_{k+1}$, in addition to being $(K_{k+1},\varepsilon_{k+1})$-invariant, are also
$(E_k,\delta_k)$-invariant, for $\delta_k=\frac{\varepsilon_k}{|E_k|\log{(|\Lambda_k|)}}$, where $\Lambda_k=\mathcal S(\widetilde{\mathcal T}_k)\cup\{0\}$ is the alphabet used by the symbolic representation of $\widetilde{\mathcal T}_k$. We can now start the induction.
Every tile of $\widetilde\mathcal CT_2$ is a union of the tiles of $\widetilde\mathcal CT_1$, thus every shape of $\widetilde\mathcal CT_2$ is partitioned by (shifted) shapes of $\widetilde\mathcal CT_1$. However, for each shape of $\widetilde\mathcal CT_2$ there possibly occur more than one different ways it is partitioned. Now, for each shape $S$ of $\widetilde\mathcal CT_2$ we select one such way and call it ``the master partition'' of $S$. We create a new tiling, inscribed in $\widetilde\mathcal CT_2$, using the same family of shapes as $\widetilde\mathcal CT_1$ (perhaps not all shapes will be used, but that does not bother us), as follows: in each tile of $\widetilde\mathcal CT_2$ we apply the (appropriately shifted) master partition of its shape. We denote this new tiling by $\mathcal CT_1^{(2)}$. Notice that this tiling is completely determined by $\widetilde\mathcal CT_2$ and the ``finite bank of information'' containing the master partitions of all shapes of $\widetilde\mathcal CT_2$. And clearly, the coding from $\widetilde\mathcal CT_2$ to $\mathcal CT_1^{(2)}$ has a finite horizon. Thus, $\mathcal CT_1^{(2)}$ is a topological factor of $\widetilde\mathcal CT_2$ (and congruent with it).
Analogously, from $\widetilde\mathcal CT_2$ and $\widetilde\mathcal CT_3$ we create a tiling $\mathcal CT_2^{(3)}$ which
uses the same shapes as $\widetilde\mathcal CT_2$ and is congruent with and a topological factor of $\widetilde\mathcal CT_3$.
Now, applying to the tiles of $\mathcal CT_2^{(3)}$ the master partitions from the preceding step (by the shifted shapes of $\widetilde\mathcal CT_1$) we also create a new tiling $\mathcal CT_1^{(3)}$ using the same shapes as $\widetilde\mathcal CT_1$, congruent with and being a topological factor of $\mathcal CT_2^{(3)}$ (and $\widetilde\mathcal CT_3$).
Continuing in an obvious way we create a triangular array of tilings $\mathcal T_k^{(j)}$
($k\le j$; we also place $\widetilde{\mathcal T}_k$ as $\mathcal T_k^{(k)}$ along the diagonal of that array),
where $k$ is the row number, $j$ is the column number, the rows are finite and the columns are infinite,
with the following properties:
\begin{enumerate}
\item $\mathcal T_k^{(j)}$ uses the same shapes as $\widetilde{\mathcal T}_k$, for every $k\le j$;
\item $\mathcal T_k^{(j)}$ is congruent with and a topological factor of $\mathcal T_l^{(j)}$, whenever $k\le l\le j$;
\item each tile of $\mathcal T_{k+1}^{(j)}$ is partitioned by the tiles of $\mathcal T_k^{(j)}$ according to the master partition of its shape $S$ (here $k<j$).
\end{enumerate}
We recall that the above master partition is defined using the ``original'' tilings $\widetilde\mathcal CT_k$ and $\widetilde\mathcal CT_{k+1}$ (as a selected one of many ways of partitioning the shape $S$) and then it does not change in the following steps of the construction of the array of tilings.
By compactness of the symbolic spaces $\Lambda_k^G$ (where $\Lambda_k$ is the alphabet used in all tilings in the $k$th column), there exists a subsequence $j_i$ such that $\mathcal CT_k^{(j_i)}$ converges, for every $k$, to some symbolic element, say $\overline\mathcal CT_k\in\Lambda_k^G$. Now, all combinatorial properties satisfied by the elements $\mathcal CT_k^{(j_i)}$ (and pairs $\mathcal CT_k^{(j_i)}$, $\mathcal CT_l^{(j_i)}$) verifiable by finite horizon testing (where the horizon does not depend on $j_i$) pass on to the limit element (because such properties hold on closed sets). In particular:
\begin{enumerate}
\item $\overline{\mathcal T}_k$ represents an exact tiling with shapes $\mathcal S(\widetilde{\mathcal T}_k)$;
\item $\overline{\mathcal T}_k$ is congruent with and a topological factor of $\overline{\mathcal T}_l$, whenever $k\le l$;
\item each tile of $\overline{\mathcal T}_{k+1}$ is partitioned by the tiles of $\overline{\mathcal T}_k$ according to the master partition of its shape $S$ (for any $k\ge 1$).
\end{enumerate}
Property (1) implies that the shapes of $\overline{\mathcal T}_k$ are $(K_k,\varepsilon_k)$-invariant, as needed.
Because the estimation of the topological entropy of $\overline{\mathcal T}_k$ involves measure-theoretic entropy (and the variational principle), we isolate it as a separate lemma. The main proof will be resumed afterwards.
\begin{lem}For each $k$ and every invariant measure $\mu$ supported by the dynamical tiling $\overline X$ generated by $\overline{\mathcal T}_k$ we have $h_\mu(G) <4 \varepsilon_k$.
\end{lem}
\begin{proof}
Recall that the measure-theoretic entropy is computed as
$$
h_\mu(G) = \lim_{n\to\infty} \frac1{|F_n|}H_\mu(\overline X_{F_n}),
$$
where
$$
H_\mu(\overline X_{F_n}) = -\sum_{B\in \overline X_{F_n}}\mu([B])\log(\mu([B])).
$$
Moreover, the above limit is the same as the infimum (by the strong sub-additivity property of entropy function, see \cite[Proposition 3.1.9]{MO}). Hence, in order to estimate $h_\mu(G)$ from above it suffices to estimate $\frac1{|F_n|}H_\mu(\overline X_{F_n})$ for just one (arbitrary) F\o lner set. We will use the particular set $E_k$ selected at the beginning of the last inductive construction, such that the $E_k$-complexity of $\widetilde\mathcal CT_k$ does not exceed $2^{\varepsilon_k|E_k|}$. We have
$$
H_\mu(\overline X_{E_k}) = -\!\!\!\!\!\!\sum_{B\in \overline X_{E_k}\cap\widetilde X_{E_k}}\!\!\!\!\!\!\mu([B])\log(\mu([B]))
-\!\!\!\!\!\!\sum_{B\in \overline X_{E_k}\setminus\widetilde X_{E_k}}\!\!\!\!\!\!\mu([B])\log(\mu([B])),
$$
where $\widetilde X$ is the dynamical tiling generated by $\widetilde{\mathcal T}_k$.
The entropy of a finite-dimensional sub-probabilistic vector is estimated from above by the mass of the vector times the logarithm of its dimension, plus 1. Thus, the first sum does not exceed 1 times the logarithm of the $E_k$-complexity of $\widetilde\mathcal CT_k$ (i.e., $\varepsilon_k|E_k|$), plus 1. The second sum does not exceed the measure of the union of all cylinders corresponding to blocks $B$ with domain $E_k$ occurring in $\overline X$ but not in $\widetilde X$, times the logarithm of the number of all possible blocks with domain $E_k$ (i.e., times $\log(|\Lambda_k|^{|E_k|})$), plus 1.
Observe that if $g$ is such that $E_kg$ is contained in a tile of $\overline\mathcal CT_{k+1}$ then the associated block $B$ (formally equal to $g\overline\mathcal CT_k|_{E_k}$) arises from the master partition of this tile's shape and thus the same block $B$ occurs in $\widetilde\mathcal CT_k$ (at some position $g'$), hence in $\widetilde X$. So, a block $B$ occurs in $\overline X$ but not in $\widetilde X$ only if it occurs in $\overline\mathcal CT_k$ exclusively at such positions $g$ that $E_kg$ is not contained in one tile of $\overline \mathcal CT_{k+1}$. This happens only when $g$ does not fall in the $E_k$-core of any tile of $\overline \mathcal CT_{k+1}$. Recall that each tile of $\overline \mathcal CT_{k+1}$ is $(E_k,\delta_k)$-invariant, so, by Lemma \ref{estim}, its $E_k$-core is its $(1-\xi)$-subset, where $\xi = \delta_k|E_k|= \frac{\varepsilon_k}{\log(|\Lambda_k|)}$. Now, by Lemma \ref{cov1} (for a $1$-covering tiling) we get that the upper Banach density of the set not covered by the discussed $E_k$-cores is at most $\xi$. By Lemma~\ref{measure}, the set of all points in the dynamical tiling generated by $\overline \mathcal CT_{k+1}$ (each such point represents a tiling), such that $e$ does not belong to the union of the $E_k$-cores of all tiles, has measure at most $\xi$, for every invariant measure supported by this dynamical tiling. It follows from our earlier discussion, that the above set contains the preimage (via the factor map from $\overline\mathcal CT_{k+1}$ to $\overline\mathcal CT_k$), of the union of the cylinders $B$ indexing the second large sum above. Thus any invariant measure supported by $\overline\mathcal CT_k$ (in particular $\mu$) gives this union a value at most $\xi$.
Eventually, we get the estimate
$$
H_\mu(\overline X_{E_k})\le \varepsilon_k|E_k| + \xi|E_k|\log(|\Lambda_k|) + 2 = 2\varepsilon_k|E_k| + 2<4\varepsilon_k|E_k|.
$$
Here the middle equality uses $\xi\log(|\Lambda_k|)=\varepsilon_k$, and the last inequality uses $\varepsilon_k|E_k|>1$, which follows from the requirement $|E_k|>\frac1{\varepsilon_k}$. Dividing by $|E_k|$ yields $h_\mu(G)\le\frac1{|E_k|}H_\mu(\overline X_{E_k})<4\varepsilon_k$.
\end{proof}
We return to the main proof. The above lemma, together with the variational principle for amenable group actions (see \cite[Variational Principle 5.2.7]{MO}), implies that $\mathbf{h}(\overline{\mathcal T}_k)\le 4\varepsilon_k$. On the other hand, since $\overline{\mathcal T}_k$ is a factor of any $\overline{\mathcal T}_j$ with $j\ge k$, we in fact obtain $\mathbf{h}(\overline{\mathcal T}_k)\le 4\varepsilon_j$ for every $j\ge k$, which implies that $\mathbf{h}(\overline{\mathcal T}_k)=0$.
\end{proof}
\section{Free action with entropy zero}
We are in a position to construct a free, zero entropy action of $G$ on a zero-dimensional space.
\begin{thm} \label{free action}
Let $G$ be an infinite countable amenable group. There exists a zero-dimensional space $\mathfrak X$
and a free action of $G$ on $\mathfrak X$ which has topological entropy zero.
\end{thm}
\begin{proof}
It suffices to show that for every $g\in G$, $g\neq e$, there exists a symbolic system $\mathfrak{X}_g\subset \{0,1\}^G$ with topological entropy zero and such that no points of $\mathfrak{X}_g$ are fixed by the shift by $g$. Once this is done, we can define $\mathfrak{X}=\prod_{g\neq e}\mathfrak{X}_g$ (with the product action). This system obviously has no points fixed by any $g\neq e$ (i.e., this is a free action), and as a countable product of zero-entropy subshifts it is zero-dimensional and has topological entropy zero.
We will use two different techniques, depending on whether $g$ has a finite order or not. The finite order case strongly relies upon our exact tilings with entropy zero constructed in the preceding sections. In each case the alphabet of $\mathfrak X_g$ will consist of two symbols (although not necessarily denoted as 0 and 1).
Fix $g\in G$ and assume the order of $g$ to be infinite. The following is an equivalence relation on $G$: $f \sim h \iff f=g^ph$ for some $p\in \mathbb{Z}$. Let $B$ be a set containing exactly one element from each equivalence class. Now every element $h\in G$ has a unique representation $h=g^pb$, where $p\in \mathbb{Z}$ and $b\in B$. Denote this exponent $p$ by $p(h)$. We define a symbolic element $x \in \set{-1,1}^G$ by $x(h)=(-1)^{p(h)}$, and we let $\mathfrak{X}_g$ be the shift orbit closure of $x$.
Suppose that for some $y\in \mathfrak{X}_g$ we have $gy=y$; in particular $y(g)=y(e)$. Let $h_n$ be a sequence of elements of $G$ such that $y=\lim_{n\to\infty}h_nx$. For large enough $n$ we have $y(e)=x(h_n)$ and $y(g)=h_nx(g)=x(gh_n)$. Therefore, on the one hand $x(gh_n)=x(h_n)$, while on the other hand, by the definition of $x$, we have $x(gh_n)= -x(h_n)$, a contradiction. We have shown that $g$ fixes no points of $\mathfrak X_g$.
To show that $\mathbf{h}(\mathfrak{X}_g)=0$ let $F=\{g,g^2,\dots,g^n\}$ for some $n\in\mathbb N$. It is easy to see that the $F$-complexity equals 2 (there are only two blocks with domain $F$: $[-1,1,\dots,(-1)^n]$ and $[1,-1,\dots,(-1)^{n+1}]$). Thus $\frac1{|F|}\log N(F) = \frac{\log 2}{n}$, which is arbitrarily small, implying, via Theorem~\ref{infimum}, that the entropy of the subshift is indeed zero.
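As a concrete illustration (not needed for the argument), take $G=\mathbb Z$ and $g=1$. Then one may choose $B=\{0\}$, so that $p(h)=h$ and $x(h)=(-1)^h$. The orbit closure $\mathfrak X_g$ consists of just the two alternating sequences $x$ and $-x$, the shift by $g=1$ interchanges them (so it has no fixed points), and a two-point system obviously has topological entropy zero.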
Now assume the order of $g$ is finite and equals $q$. We can still define the relation $\sim$ as before, the only difference being that the equivalence classes are now finite. Therefore they form a tiling, say $\mathcal T_0$, of $G$ (this tiling has one shape $S=\{e,g,\dots,g^{q-1}\}$, and the centers are assigned arbitrarily within the tiles\footnote{Note that any point of a tile can serve as its center, without the need to shift the shape.}). Setting $K_1=\{e\}$ and $\varepsilon_1=1$ we see that $\mathcal T_0$ can be used (in place of $\mathcal T_1^*$) as the first tiling $\widetilde{\mathcal T}_1$ in Lemma~\ref{sqt}. Now Theorem~\ref{congruent} produces a new sequence of tilings $(\overline{\mathcal T}_k)_{k\geqslant 1}$ such that $\mathbf{h}(\overline{\mathcal T}_k)=0$ for each $k\ge 1$ and $\overline{\mathcal T}_k$ uses the same shapes as $\widetilde{\mathcal T}_k$. In particular, $\overline{\mathcal T}_1$ has the same one shape $S$, i.e., it is the partition into equivalence classes ($\overline{\mathcal T}_1$ differs from $\widetilde{\mathcal T}_1=\mathcal T_0$ in having the centers positioned ``more intelligently'' within the tiles). Let $\mathfrak{X}_g$ be the dynamical tiling generated by $\overline{\mathcal T}_1$.
We already know that this symbolic system has entropy zero. Recall that every symbolic element $y\in\mathfrak X_g$ represents a tiling (using the same one shape $S$), in particular every tile has only one center, i.e., within every tile there is only one nonzero symbol $S$ (we agreed
to use shape labels as symbols placed at the tile centers, and zeros everywhere else).
Suppose that for some $y\in \mathfrak{X}_g$ we have $gy=y$. Let $c$ be the center of the tile containing $e$. Then $c=g^p$ for some $p\in\{0,1,\dots,q-1\}$ and $y(c)=S$. Since $gy=y$ we also have $S=gy(c)=y(cg) = y(g^{p+1})$. Clearly $g^{p+1}$ belongs to the same class (i.e., the same tile) as $c$, and since $g\neq e$,
$g^{p+1}\neq c$. We have found two nonzero symbols in one tile, a contradiction. This concludes the proof.
\end{proof}
\end{document}
\begin{document}
\title{Inference in High-dimensional Multivariate Response Regression with Hidden Variables}
\begin{abstract}
This paper studies the inference of the regression coefficient matrix under multivariate response linear regressions in the presence of hidden variables.
A novel procedure for constructing confidence intervals of entries of the coefficient matrix is proposed. Our method first utilizes the multivariate nature of the responses by estimating and adjusting the hidden effect to construct an initial estimator of the coefficient matrix. By further deploying a low-dimensional projection procedure to reduce the bias introduced by the regularization in the previous step, a refined estimator is proposed and shown to be asymptotically normal. The asymptotic variance of the resulting estimator is derived with closed-form expression and can be consistently estimated. In addition, we propose a testing procedure for the existence of hidden effects and provide its theoretical justification. Both our procedures and their analyses are valid even when the feature dimension and the number of responses exceed the sample size. Our results are further backed up via extensive simulations and a real data analysis.
\end{abstract}
{\em Keywords:} High-dimensional regression, multivariate response regression, hidden variables, confounding, confidence intervals, hypothesis testing, surrogate variable analysis.
\section{Introduction}\label{sec_intro}
Multivariate response linear regression is a widely used approach of discovering the association between a response vector $Y$ and a feature vector $X$ in a variety of applications \citep{anderson_book}. Oftentimes, there may exist some unobservable, hidden, variables $Z$ that correlate with both the response $Y$ and the feature $X$. For example, in genomics studies, $Y$ typically represents different gene expressions, $X$ contains a set of exposures (e.g. levels of treatment), and $Z$ corresponds to the unobserved batch effect \citep{Leek2008,Gagnon2012}. In causal inference, one can interpret $X$ as the multiple causes of $Y$ and treat $Z$ as confounders, which are unobserved due to cost constraint or ethical issue \citep{silva2006learning,janzing2018detecting,wang2019blessings}. Since $X$ and $Z$ are often correlated, ignoring the hidden variables $Z$ in the regression model may lead to spurious association between $X$ and $Y$. Therefore, accounting for the existence of such hidden variables is critical to draw valid scientific conclusions.
This paper studies the following multivariate response linear regression with hidden variables,
\begin{equation}\label{model}
Y = \bm{\Theta}^T X + \bm{B}^T Z + E,
\end{equation}
where $Y\in \mathbb{R}^m$ is the multivariate response, $X\in \mathbb{R}^p$ is the random vector of $p$ observable features, while $Z\in \mathbb{R}^K$ is the random vector of $K$ unobservable (hidden) variables that are possibly correlated with $X$. The number of hidden variables $K$ is unknown and is assumed to be no greater than the number of responses $m$. The random vector $E\in \mathbb{R}^m$ is the additive noise, independent of $X$ and $Z$. Assume the observed data $(\bm{Y}, \bm{X}) \in (\mathbb{R}^{n\times m}, \mathbb{R}^{n\times p})$ consist of $n$ i.i.d. samples $(\bm{Y}_i, \bm{X}_i)$, for $i\in [n] := \{1,\ldots,n\}$, from model (\ref{model}).
Throughout the paper, we focus on the high-dimensional setting, that is, both $m$ and $p$ can grow with the sample size $n$. Without loss of generality, we assume $\mathbb{E}(X)=\b0$ and $\mathbb{E}(Z)=\b0$ as we can always center the data $\bm{Y}$ and $\bm{X}$.
In model (\ref{model}), the coefficient matrix $\bm{\Theta} \in \mathbb{R}^{p\times m}$ encodes the association between $X$ and $Y$ after adjusting for the hidden variables $Z$, and is of our primary interest. More precisely, for any given $i\in[p]$ and $j\in[m]$, we are interested in constructing confidence intervals for $\Theta_{ij}$, or equivalently, testing the following hypothesis:
\begin{equation}\label{def_target}
H_{0,\Theta_{ij}}: ~ \Theta_{ij} = 0, \qquad \textrm{versus} \qquad H_{1,\Theta_{ij}}:~ \Theta_{ij}\ne 0.
\end{equation}
Our secondary interest is to answer the question of whether the $j$th response $Y_j$ is affected by any of the hidden variables. Since each column $\bm{B}_j\in \mathbb{R}^K$ of the matrix $\bm{B} = (\bm{B}_1, \ldots, \bm{B}_m)$ corresponds to the coefficient of the hidden effects of $Z$ on $Y_j$, we can answer the above question by testing the hypothesis:
\begin{equation}\begin{aligned}\label{def_target_B}
H_{0, B_j}: \bm{B}_{j} = \b0,\qquad \textrm{versus} \qquad H_{1,B_j}: \bm{B}_j \ne \b0.
\end{aligned}\end{equation}
In particular, if the null hypothesis $H_{0, B_j}$ is rejected, then the effect of the hidden variables $Z$ on $Y_j$ is significant, suggesting the necessity of adjusting the hidden effects for modelling $Y_j$.
Since we allow $X$ and $Z$ to be correlated in (\ref{model}), we can decouple their dependence via the $L_2$ projection of $Z$ onto the linear space of $X$:
\begin{equation}\begin{aligned}\label{model_ZX}
Z = \bm{A}^T X + (Z - \bm{A}^T X) := \bm{A}^T X + W,
\end{aligned}\end{equation}
where $\bm{A} = (\mathbb{E}[XX^T])^{-1}\mathbb{E}[XZ^T]\in \mathbb{R}^{p\times K}$ and $W=Z - \bm{A}^T X$ satisfies $\mathbb{E}[WX^T]=\b0$. While $W$ and $X$ are uncorrelated, we do not require them to be independent. In other words, (\ref{model_ZX}) does not imply that $X$ and $Z$ follow a linear regression model. Indeed, our framework allows any nonlinear dependence structure between $X$ and $Z$ and is therefore model free for the joint distribution of $(X,Z)$. Under such decomposition,
the original model (\ref{model}) can be rewritten as
\begin{equation}\begin{aligned}\label{model_linear}
Y = (\bm{\Theta} + \bm{A}\bm{B})^T X + \varepsilon
\end{aligned}\end{equation}
where the new error term $\varepsilon \coloneqq \bm{B}^T W + E$ has zero mean and is uncorrelated with $X$; indeed, $\mathbb{E}[X\varepsilon^T] = \mathbb{E}[XW^T]\bm{B} + \mathbb{E}[X]\,\mathbb{E}[E^T] = \b0$, using $\mathbb{E}[WX^T]=\b0$, the independence of $E$ and $X$, and $\mathbb{E}(X)=\b0$. Before we elaborate on how we make inference on $\Theta_{ij}$ and $\bm{B}_j$, we start with a brief review of the related literature.
\subsection{Related literature}
Surrogate variable analysis (SVA) has been widely used to estimate and make inference on $\bm{\Theta}$ under model (\ref{model}) for genomics data \citep{Leek2008,Gagnon2012}. Recent progress has been made in \cite{Lee2017,wang2017,McKennan19} towards both developing new methodologies and understanding the theoretical properties of the existing approaches. However, all existing SVA-related approaches rely on the ordinary least squares (OLS) between $\bm{Y}$ and $\bm{X}$ to estimate $\bm{\Theta} + \bm{A}\bm{B}$ in (\ref{model_linear}), hence are only feasible when the feature dimension $p$ is small compared to the sample size $n$. As researchers tend to collect far more features than before due to advances of modern technology, there is a need for new methods which allow the feature dimension $p$ to grow with, or even exceed, the sample size $n$.
More recently, \cite{bing2020adaptive} studied the estimation of $\bm{\Theta}$ under model (\ref{model}). Their proposed procedure assumes a row-wise sparsity structure on $\bm{\Theta}$ and is suitable for $p$ that is potentially greater than $n$. Despite this advance on the estimation side, conducting inference on $\bm{\Theta}$ remains an open problem when $p$ is larger than $n$. The extra difficulty of making inference compared to estimation in the high-dimensional regime is already visible in the ideal scenario of sparse linear regression models, without any hidden variable; see \cite{zhangzhang2014,vandegeer2014,belloni2015uniform,Javanmard2018,ning2017general}, among many others.
Inference of the linear coefficient in the presence of hidden variables, to the best of our knowledge, is only studied in \cite{guo2020doubly} for the univariate case $\bm{y} = \bm{X}\bm{\theta} + \bm{Z}\bm{\beta} +\bm{\epsilon}$ where $\bm{y}\in \mathbb{R}^n$ is the univariate response, $\bm{X}\in \mathbb{R}^{n\times p}$ consists of the high-dimensional features and $\bm{Z}\in \mathbb{R}^{n\times K}$ represents the hidden confounders. By further assuming $\bm{X} = \bm{Z}\bm{\Gamma}^T + \bm{W}'$ for some loading matrix $\bm{\Gamma}$ and additive error $\bm{W}'$
independent of $\bm{Z}$, \cite{guo2020doubly} proposed a doubly debiased lasso procedure for making inference on entries of $\bm{\theta}$. Our situation differs from theirs in that we have multiple responses. By borrowing strength across multivariate responses, we are able to remove the hidden effects without assuming any model between $\bm{X}$ and $\bm{Z}$. Moreover, combining multiple responses provides additional information on the coefficient matrix, $\bm{B}$, of the hidden variables, which not only helps to remove the hidden effects in our estimation procedure for $\bm{\Theta}$, but also enables us to test and quantify the hidden effects for each response.
In model (\ref{model_linear}), when $\bm{\Theta}$ is sparse and the matrix $\bm{L} \coloneqq \bm{A}\bm{B}$ has a small rank $K$, our problem is related to the recovery of an additive decomposition of a sparse matrix and a low-rank matrix, as studied by \cite{chandrasekaran2012latent,Candes,Hsu2011}, just to name a few. In order to identify and estimate $\bm{\Theta}$, \cite{chandrasekaran2012latent} proposed a penalized $M$-estimator under certain incoherence conditions between $\bm{\Theta}$
and $\bm{L}$. By contrast, our identifiability conditions (see Section \ref{sec_id}) differ significantly from theirs, hence leading to a completely different procedure for estimation. Furthermore, this strand of work focuses only on estimation, while our interest in this paper is in inference.
\subsection{Main contributions}
Our first contribution is in establishing an identifiability result of $\bm{\Theta}$ in Theorem \ref{thm_ident} of Section \ref{sec_id} under model (\ref{model}) when the entries of $E$ in (\ref{model}) are allowed to be correlated, that is, $\Sigma_E:=\text{Cov}(E)$ is non-diagonal. To the best of our knowledge, the existing literature only studies the identifiability of $\bm{\Theta}$ when $\Sigma_E$ is diagonal, see, for instance, \cite{Lee2017,wang2017,McKennan19,bing2020adaptive}. In Section \ref{sec_id} we also discuss different sets of conditions under which $\bm{\Theta}$ can be identified asymptotically as $m\to \infty$ when $\Sigma_E$ is non-diagonal.
Our second contribution is to propose a new procedure in Section \ref{sec_method} for constructing confidence intervals of $\Theta_{ij}$ that is suitable even when $p$ is larger than $n$. Our procedure consists of four steps: the first step in Section \ref{sec_pred} estimates the coefficient matrix $(\bm{T}heta + \bm{A}\bm{B})$ in (\ref{model_linear}); the second step in Section \ref{sec_est_B} estimates $\bm{B}$, the coefficient matrix of the hidden variables, using the residual matrix from the first step; the third step uses the estimate of $\bm{B}$ to remove the hidden effect and construct an initial estimator $\widehat\Theta_{ij}$ of $\Theta_{ij}$, while our final step constructs the refined estimator $\widetilde\Theta_{ij}$ of $\Theta_{ij}$ by removing the bias of $\widehat\Theta_{ij}$ due to the high-dimensional regularization (see, Section \ref{sec_est_Theta}). The resulting estimate $\widetilde\Theta_{ij}$ is further used to construct confidence intervals of $\Theta_{ij}$ and to test the hypothesis (\ref{def_target}) in Section \ref{sec_est_Theta}. Finally, in Section \ref{sec_method_infer_B}, we further propose a $\chi^2$-based statistic for testing the null hypothesis $\bm{B}_j = \b0$ for any given $j$.
Our third contribution is to provide statistical guarantees for the aforementioned procedure. Our main result, stated in Theorem \ref{thm_asymp_normal} of Section \ref{sec_theory_ASN}, shows that our estimator $\widetilde\Theta_{ij}$ of $\Theta_{ij}$ satisfies $\sqrt{n}(\widetilde \Theta_{ij}-\Theta_{ij}) = \xi + \Delta$ where $\xi$ is normally distributed, conditioning on the design matrix, and $\Delta$ is asymptotically negligible as $n\mathsf{T}o \infty$. In Section \ref{sec_effciency}, we further show that $\widetilde\Theta_{ij}$ is asymptotically efficient in the Gauss-Markov sense, and its asymptotic variance can be consistently estimated. Combining these results justifies the usage of our proposed procedure in Section \ref{sec_est_Theta} for making inference on $\Theta_{ij}$. In the proof of Theorem \ref{thm_asymp_normal}, an important intermediate result we derived is the (column-wise) uniform $\ell_2$ convergence rate of our estimator $\widehat \bm{B}$, which is stated in Theorem \ref{thm_rates_B}. On top of this result, we further establish the asymptotic normality of $\widehat\bm{B}_j$ for any $j\inftyn[m]$ with explicit expression of the asymptotic variance in Theorem \ref{thm_B_asn}. The result provides theoretical guarantees for the $\chi^2$-based statistic in Section \ref{sec_method_infer_B} for testing $\bm{B}_j = \b0$.
The remainder of this paper is organized as follows. In Section \ref{sec_id} we establish the identifiability result of $\bm{\Theta}$. Section \ref{sec_method} contains the methodology of making inference on $\Theta_{ij}$ and $\bm{B}_j$. Statistical guarantees are provided in Section \ref{sec_theory}.
Simulation studies are presented in Section \ref{sec_sim} while the real data analysis is shown in Section \ref{sec_real_data}.
\paragraph{Notation.}
For any set $S$, we write $|S|$ for its cardinality. For any positive integer $d$, we write $[d] = \{1,2,\ldots,d\}$.
For any vector $v\in \mathbb{R}^d$ and some real number $q\ge 0$, we define its $\ell_q$ norm as $\|v\|_q = (\sum_{j=1}^d |v_j|^q)^{1/q}$. For any matrix $M \in \mathbb{R}^{d_1 \times d_2}$, $I \subseteq [d_1]$ and $J\subseteq [d_2]$, we write $M_{IJ}$ as the $|I| \times |J|$ submatrix of $M$ with row and column indices corresponding to $I$ and $J$, respectively. In particular, $M_{I\cdot}$ denotes the $|I|\times d_2$ submatrix and $M_J$ denotes the $d_1\times |J|$ submatrix. Further write $\|M\|_{p,q} = (\sum_{j=1}^{d_1} \|M_{j\cdot}\|_q^p)^{1/p}$ and denote by $\|M\|_{{\rm op}}$, $\|M\|_F$ and $\|M\|_\infty$, respectively, the operator norm, the Frobenius norm and the element-wise sup-norm of $M$. For any matrix $M$, we write $\lambda_{k}(M)$ for its $k$th largest singular value. We use ${\bm I}_d$ to denote the $d\times d$ identity matrix and $\b0$ to denote the vectors with entries all equal to zero. We use $\bm{e}_1,\ldots,\bm{e}_d$ to denote the canonical basis in $\mathbb{R}^d$. For any two sequences $a_n$ and $b_n$, we write $a_n \lesssim b_n$ if there exists some positive constant $C$ such that $a_n \le Cb_n$ for any $n$. We let $a_n \asymp b_n$ stand for $a_n\lesssim b_n$ and $b_n \lesssim a_n$. Denote $a\vee b=\max (a,b)$ and $a\wedge b=\min(a,b)$.
\section{Identifiability of $\Theta$}\label{sec_id}
In this section, we establish conditions under which $\bm{\Theta}$ in model (\ref{model}) is identifiable when $Z$ is correlated with $X$ and the entries of $E$ are possibly correlated.
Recall that model (\ref{model}) can be rewritten as (\ref{model_linear}). By regressing $Y$ onto $X$, one can identify
\begin{equation}\begin{aligned}\label{def_F}
\bm{F} = \bm{\Theta} + \bm{A}\bm{B}.
\end{aligned}\end{equation}
The main challenge in identifying $\bm{\Theta}$ is that we need to further separate $\bm{\Theta}$ and $\bm{A}\bm{B}$ in the matrix $\bm{F}$.
The existing literature \citep{wang2017,Lee2017,McKennan19,bing2020adaptive} leverages the following decomposition of the residual covariance matrix of $\varepsilon = \bm{B}^TW + E$ from (\ref{model_linear})
\begin{equation}\begin{aligned}\label{def_Sigma_eps}
\Sigma_\varepsilon = \bm{B}^T \Sigma_W \bm{B} + \Sigma_E,
\end{aligned}\end{equation}
to recover the row space of $\bm{B}\in \mathbb{R}^{K\times m}$. Here we write $\Sigma_W=\text{Cov}(W)$ and $\Sigma_E=\text{Cov}(E)$. The decomposition (\ref{def_Sigma_eps}) is ensured by the independence assumption between $E$ and $W$.
When $\Sigma_E$ is diagonal and under suitable conditions on $\bm{B}$ and $\Sigma_W$, the row space of $\bm{B}$ can be identified from (\ref{def_Sigma_eps}) either via PCA or the heteroscedastic PCA \citep{bing2020adaptive}, or via maximizing the quasi-likelihood under a factor model \citep{wang2017}. The recovered row space of $\bm{B}$ is further used towards identifying $\bm{\Theta}$.
Our model differs from the existing literature in that we allow $\Sigma_E$ to be non-diagonal, in which case the identifiability conditions in \cite{wang2017} and \cite{bing2020adaptive} are no longer applicable. For non-diagonal $\Sigma_E$, we adopt the following conditions,
\begin{equation}\label{ident_conds}
\lambda_K\left({1\over m}\bm{B}^T \Sigma_W \bm{B}\right) \ge c,\qquad \|\Sigma_E\|_{{\rm op}} = o(m), \qquad \textrm{as }m\to \infty,
\end{equation}
where $c$ is a positive constant and $\lambda_K(M)$ denotes the $K$th largest eigenvalue of a symmetric matrix $M$. Under (\ref{ident_conds}), the space spanned by the first $K$ eigenvectors of $\Sigma_\varepsilon$ recovers the row space of $\bm{B}$ asymptotically as $m\to \infty$. This is an immediate result of the Davis-Kahan Theorem \citep{DavisKahan}, and has been widely used in the literature of factor models, see, for instance, \cite{fan2013large}.
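As a rough quantitative illustration of this step (a heuristic sketch under (\ref{ident_conds}), not a claim used in the sequel): let $P_B$ denote the projection onto the row space of $\bm{B}$ and $\widehat P$ the projection onto the span of the first $K$ eigenvectors of $\Sigma_\varepsilon$. Viewing $\Sigma_\varepsilon=\bm{B}^T\Sigma_W\bm{B}+\Sigma_E$ as a perturbation of the rank-$K$ matrix $\bm{B}^T\Sigma_W\bm{B}$, the $\sin\theta$ form of the Davis-Kahan theorem gives, up to an absolute constant,
$$
\|\widehat P - P_B\|_{{\rm op}} \ \lesssim\ \frac{\|\Sigma_E\|_{{\rm op}}}{\lambda_K(\bm{B}^T\Sigma_W\bm{B})} \ \le\ \frac{\|\Sigma_E\|_{{\rm op}}}{cm} \ =\ o(1),
$$
which is the sense in which the top-$K$ eigenspace of $\Sigma_\varepsilon$ recovers the row space of $\bm{B}$ as $m\to\infty$.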
Given the row space of $\bm{B}$, we can identify the projection matrices $P_B = \bm{B}^T(\bm{B}\bm{B}^T)^{-1}\bm{B}$ and $P_{B}^{\perp} = {\bm I}_m - P_B$. Multiplying $P_{B}^{\perp}$ on both sides of equation (\ref{model}), we have
\begin{equation}\label{eq_pby}
P_{B}^{\perp} Y = (\bm{\Theta} P_{B}^{\perp})^T X + P_{B}^{\perp}E,
\end{equation}
from which we recover $\bm{\Theta} P_{B}^{\perp}$ by
\begin{equation}\label{def_Theta_PB_comp}
\bm{\Theta} P_{B}^{\perp} = [\text{Cov}(X)]^{-1}\text{Cov}(X, P_{B}^{\perp} Y).
\end{equation}
From $\bm{\Theta} P_{B}^{\perp} = \bm{\Theta} - \bm{\Theta} P_{B}$, we have that $\bm{\Theta}$ can be recovered if $\bm{\Theta} P_B$ becomes negligible as $m\to \infty$. Requiring $\bm{\Theta} P_B$ to be small is common in the existing literature \citep{Lee2017,wang2017,bing2020adaptive}. We adopt the condition that $\bm{\Theta} P_B$ is small in terms of the row-wise $\ell_1$ norm. The following theorem formally establishes the identifiability of $\bm{\Theta}$. As revealed in the proof of Theorem \ref{thm_ident}, $\|\bm{\Theta}_{i\cdot}\|_1=o(m)$ together with the other conditions therein ensures $(\bm{\Theta} P_B)_{ij} = o(1)$.
\begin{theorem}\label{thm_ident}
Under model (\ref{model}), assume (\ref{ident_conds}) and
\begin{equation}\begin{aligned}\label{cond_ident_Theta}
\max_{1\le j\le m}\bm{B}_j^T\Sigma_W\bm{B}_j = O(1),\qquad \max_{1\le i\le p}\|\bm{\Theta}_{i\cdot}\|_1 = o(m),\qquad \textrm{as }m\to \infty.
\end{aligned}\end{equation}
Then $\bm{\Theta}$ can be recovered from the first two moments of $(X, Y)$ asymptotically as $m\to \infty$.
\end{theorem}
The first requirement of (\ref{cond_ident_Theta}) is a regularity condition which holds, for instance, if $\Sigma_W\in\mathbb{R}^{K\times K}$ has bounded eigenvalues and each column $\bm{B}_j\in \mathbb{R}^K$ of $\bm{B}$ is bounded in $\ell_2$-norm. The second condition in (\ref{cond_ident_Theta}) requires that the $\ell_1$-norm of each row of $\bm{\Theta} \in \mathbb{R}^{p\times m}$ is of smaller order than $m$. This is the case if $\bm{\Theta}$ has bounded entries and each row of $\bm{\Theta}$ is sufficiently sparse. Such a sparsity assumption is reasonable in many applications, for instance, in genomics \citep{wang2017,McKennan19}.
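To see informally why these conditions force $\bm{\Theta}P_B$ to vanish (a heuristic sketch, not the actual proof of Theorem \ref{thm_ident}): if, in addition, $\Sigma_W$ has eigenvalues bounded away from zero and $\lambda_K(\bm{B}\bm{B}^T)\gtrsim m$, then each entry of $P_B=\bm{B}^T(\bm{B}\bm{B}^T)^{-1}\bm{B}$ satisfies $|(P_B)_{kj}|\le \|\bm{B}_k\|_2\|\bm{B}_j\|_2\,\|(\bm{B}\bm{B}^T)^{-1}\|_{{\rm op}} = O(1/m)$, whence
$$
|(\bm{\Theta}P_B)_{ij}| \ \le\ \|\bm{\Theta}_{i\cdot}\|_1 \max_{1\le k\le m}|(P_B)_{kj}| \ =\ o(m)\cdot O(1/m) \ =\ o(1).
$$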
\begin{remark}[Alternative identifiability conditions of $P_B$]\label{rem_ident_B}
Condition (\ref{ident_conds}) assumes the spiked eigenvalue structure of $\Sigma_\varepsilon$ in (\ref{def_Sigma_eps}) and is a common identifiability condition in the factor model when $m$ is large (see \cite{fan2013large,Bai-factor-model-03}). We refer to Remark \ref{rem_cond_B} for more discussions on (\ref{ident_conds}).
Alternatively, another line of work studies the unique decomposition into a low-rank part and a sparse part under the so-called rank-sparsity incoherence conditions, \cite{Candes,Chandrasekaran,Hsu2011}, just to name a few. For instance, \citet[Theorem 1]{Hsu2011} showed that $\bm{B}^T \Sigma_W \bm{B}$ and $\Sigma_E$ are identifiable from $\Sigma_\varepsilon$ if
\begin{equation}\begin{aligned}\label{ident_cond_sparse_rank}
\|\Sigma_E\|_{\infty,0} \|\bm{U}_B\|_{\infty,2}^2 \le c
\end{aligned}\end{equation}
for some small constant $0< c<1$. Here $\bm{U}_B$ contains the right $K$ singular vectors of $\bm{B} \in \mathbb{R}^{K\times m}$. Once $\bm{B}^T \Sigma_W \bm{B}$ is identified, we can recover $P_B$ via PCA. Our identifiability results in Theorem \ref{thm_ident} still hold if (\ref{ident_conds}) is replaced by (\ref{ident_cond_sparse_rank}).
\end{remark}
\begin{remark}[Other identifiability conditions of $\bm{\Theta}$]\label{rem_ident_cond_Theta}
In the SVA literature, provided that $P_B$ is known, there are other sufficient conditions under which $\bm{\Theta}$ is identifiable. One type of such condition is called {\em negative controls}, which assumes that, for a known set $S\subseteq[m]$ with $|S| \ge K$,
$$
\bm{\Theta}_S = \b0\quad \text{and} \quad \text{rank}(\bm{B}_{S}) =K.
$$
In words, there is a known set of responses that are not associated with any of the features in the multivariate response model (\ref{model}). Another condition considered in \cite{wang2017} requires the sparsity of $\bm{\Theta}$ in a similar spirit to (\ref{cond_ident_Theta}). It is assumed that, for some integer $K\le r\le m$,
\[
\max_{j\in[p]}\left\|\bm{\Theta}_{j\cdot}\right\|_0 \le \floor{(m-r)/2},\qquad \textrm{rank}(\bm{B}_S) = K, \quad \forall~ S\subseteq [m] \textrm{ with }|S| = r.
\]
Intuitively, the above condition also puts restrictions on the sparsity of $\bm{B}$, as the submatrix of $\bm{B}$ may have rank smaller than $K$ if $\bm{B}$ is too sparse.
Our identifiability results in Theorem \ref{thm_ident} still hold if condition (\ref{cond_ident_Theta}) is replaced by any of these conditions.
\end{remark}
\section{Methodology}\label{sec_method}
In this section we describe our procedure for making inference on $\Theta_{ij}$ and $\bm{B}_j$ for a given $i\in[p]$ and $j\in[m]$. Recall that $(\bm{Y}_{i\cdot}, \bm{X}_{i\cdot})$, for $1\le i\le n$, are i.i.d. copies of $(Y,X)$ from model (\ref{model}). Let $(\bm{Y}, \bm{X})$ denote the data matrix.
For constructing confidence intervals of $\Theta_{ij}$ and testing the hypothesis (\ref{def_target}), our procedure consists of three main steps: (1) estimate the best linear predictor $\bm{X}\bm{F}$ in Section \ref{sec_pred} with $\bm{F}$ defined in (\ref{def_F}), (2) estimate the residual $\bm{\epsilon} = \bm{Y} -\bm{X}\bm{F}$ and the matrix $\bm{B}$ in Section \ref{sec_est_B}, (3) estimate $\bm{\Theta}_j$ and construct the final estimator of $\Theta_{ij}$ in Section \ref{sec_est_Theta}. Finally, we discuss how to make inference on $\bm{B}_j$ in Section \ref{sec_method_infer_B}.
\subsection{Estimation of $XF$}\label{sec_pred}
Recall from (\ref{def_F}) that $\bm{F}$ has the additive decomposition of $\bm{\Theta}$ and $\bm{A}\bm{B}$. Estimating $\bm{F}$ is challenging when the number of features $p$ exceeds the sample size $n$ without additional structure on $\bm{\Theta}$. We thus consider the following parameter space of $\bm{\Theta}$
\begin{equation}\label{def_space_Theta}
\mathcal{M}(s_n, M_n) :=
\left\{
\bm{M}\in\mathbb{R}^{p\times m}: \sum_{j=1}^p 1\{\|\bm{M}_{j\cdot}\|_2\ne 0\} \le s_n,
\max_{1\le j\le p} \norm{\bm{M}_{j\cdot}}_1 \leq M_n
\right\}
\end{equation}
for some integer $1\le s_n \le p$ and some sequence $M_n > 0$ that both possibly grow with $n$. As a result, any $\bm{\Theta} \in \mathcal{M}(s_n, M_n)$ has at most $s_n$ non-zero rows and, for each of these non-zero rows, its $\ell_1$-norm is controlled by the sequence $M_n$. Existence of zero rows
is a popular sparsity structure in multivariate response regression \citep{yuanlin} and is also appealing for feature selection, while the structure of row-wise $\ell_1$ norm is needed in view of the identifiability condition (\ref{cond_ident_Theta}).
Since the submatrix of $\bm{\Theta} \in\mathcal{M}(s_n, M_n)$ corresponding to the non-zero rows may further have different sparsity patterns, we propose to estimate each column of $\bm{F}$ separately. Specifically, we estimate $\bm{F}$ by $\widehat{\bm{F}} = (\widehat \bm{F}_1,\dotso,\widehat \bm{F}_m) \in \mathbb{R}^{p\times m}$ where, for each $j \in [m]$, $\widehat\bm{F}_j = \widehat{\bm{\theta}}^{(j)} + \widehat{\bm{\delta}}^{(j)}$ is obtained by solving
\begin{equation}\begin{aligned}\label{def_est_F_j}
\widehat{\bm{\theta}}^{(j)},~\widehat{\bm{\delta}}^{(j)} = \argmin_{\bm{\theta},\bm{\delta} \in \mathbb{R}^p}\frac{1}{n}\norm{\bm{Y}_{j} - \bm{X}(\bm{\theta} + \bm{\delta})}_2^2 + \lambda_1^{(j)}\norm{\bm{\theta}}_1 + \lambda_2^{(j)}\norm{\bm{\delta}}_2^2
\end{aligned}\end{equation}
for some tuning parameters $\lambda_1^{(j)}, \lambda_2^{(j)} \ge 0$. Computationally, for any given $\lambda_1^{(j)}$ and $\lambda_2^{(j)}$, solving (\ref{def_est_F_j}) is as efficient as solving a lasso problem (see \cite{chernozhukov2017} or Lemma \ref{lem_solution} in Appendix \ref{app_theory_fit}). We discuss in detail practical ways of selecting $\lambda_1^{(j)}$ and $\lambda_2^{(j)}$ in Section \ref{sec_cv}.
Procedure (\ref{def_est_F_j}) is known as lava \citep{chernozhukov2017} and is designed to capture both the sparse signal $\bm{\Theta}_j$ and the dense signal $\bm{A}\bm{B}_j$ via, respectively, the lasso penalty and the ridge penalty. When columns of $\bm{\Theta}$ share the same sparsity pattern, \cite{bing2020adaptive} proposed a variant of (\ref{def_est_F_j}) to estimate $\bm{F}$ jointly via the group lasso penalty together with the multivariate ridge penalty. To allow different sparsity patterns in columns of $\bm{\Theta}$ and, more importantly, to provide a sharp column-wise control of $\bm{X}\widehat \bm{F}_j - \bm{X}\bm{F}_j$ for our subsequent inference on $\Theta_{ij}$, we opt for estimating $\bm{F}$ column-by-column.
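To give some intuition for the computational remark above (a short sketch based on standard ridge algebra in our notation, not a statement taken from the references): for fixed $\bm{\theta}$ and $\lambda_2^{(j)}>0$, minimizing (\ref{def_est_F_j}) over $\bm{\delta}$ is a ridge regression with response $\bm{Y}_j-\bm{X}\bm{\theta}$, whose solution is
$$
\widehat{\bm{\delta}}(\bm{\theta}) \ =\ \bigl(\bm{X}^T\bm{X}+n\lambda_2^{(j)}{\bm I}_p\bigr)^{-1}\bm{X}^T(\bm{Y}_j-\bm{X}\bm{\theta}).
$$
Substituting $\widehat{\bm{\delta}}(\bm{\theta})$ back into (\ref{def_est_F_j}) leaves an $\ell_1$-penalized problem in $\bm{\theta}$ alone, which is why the lava estimator can be computed at essentially the cost of a single lasso fit.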
\subsection{Estimation of $B$}\label{sec_est_B}
In this section, we discuss the estimation of $\bm{B}$.
Our procedure first estimates the residual matrix $\bm{\epsilon} \coloneqq \bm{Y} - \bm{X}\bm{F}\in \mathbb{R}^{n\times m}$ by
\begin{equation}\label{def_est_epsilon}
\widehat{\bm{\epsilon}} = \bm{Y} - \bm{X} \widehat \bm{F}
\end{equation}
with $\widehat\bm{F}$ obtained from (\ref{def_est_F_j}). To estimate $\bm{B}$, notice that $\bm{\epsilon} = \bm{W} \bm{B} + \bm{E}$ follows a factor model with $\bm{B}$ being the loading matrix and $\bm{W}$ being the latent factor matrix, should we observe $\bm{\epsilon}$. We therefore propose to estimate $\bm{B}$ by the following approach commonly used in factor analysis \citep{SW2002,Bai-factor-model-03,fan2013large} via the plug-in estimate $\widehat{\bm{\epsilon}}$. Specifically, write the SVD of the normalized $\widehat{\bm{\epsilon}}$ as
\begin{equation}\label{def_svd_epsilon}
{1 \over \sqrt{nm}}\widehat{\bm{\epsilon}} ~ = ~\sum_{k=1}^{m} d_k \bm{u}_k \bm{v}_k^T,
\end{equation}
where $\bm{U}_K = (\bm{u}_1,\ldots, \bm{u}_K)\in \mathbb{R}^{n\times K}$ and $\bm{V}_K = (\bm{v}_1,\ldots,\bm{v}_K)\in \mathbb{R}^{m\times K}$ denote, respectively, the left and right singular vectors corresponding to $d_1 \ge d_2 \ge \cdots \ge d_K$. Further write $\bm{D}_K = \textrm{diag}(d_1,\ldots, d_K)$. We propose to estimate $\bm{B}$ and $\bm{W}$ by
\begin{align*}
&(\widehat\bm{B}, \widehat\bm{W}) = \argmin_{\bm{B},\bm{W}} {1\over nm}\left\|
\widehat{\bm{\epsilon}} - \bm{W}\bm{B}
\right\|_F^2,\\\nonumber
&\textrm{subject to}\quad {1\over n}\bm{W}^T\bm{W} = {\bm I}_K,\quad {1\over m}\bm{B}\bm{B}^T\textrm{ is diagonal}.
\end{align*}
It is well known (see, for instance, \cite{Bai-factor-model-03}) that the above problem leads to the following solution
\begin{equation}\label{def_est_BW}
\widehat\bm{B}^T = \sqrt{m}~ \bm{V}_K\bm{D}_K,\qquad \widehat\bm{W} = \sqrt{n}~ \bm{U}_K.
\end{equation}
We assume $K$ is known for now and defer its selection to Section \ref{sec_K}.
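For concreteness, here is a minimal numerical sketch of the estimator (\ref{def_est_BW}) (our own illustration; the function and variable names are hypothetical, and the residual matrix $\widehat{\bm{\epsilon}}$ and the number of factors $K$ are assumed to be given):
\begin{verbatim}
import numpy as np

def estimate_B_W(eps_hat, K):
    # SVD-based factor estimation: returns B_hat (K x m) and W_hat (n x K)
    n, m = eps_hat.shape
    U, d, Vt = np.linalg.svd(eps_hat / np.sqrt(n * m), full_matrices=False)
    B_hat = np.sqrt(m) * d[:K, None] * Vt[:K, :]   # rows are sqrt(m) * d_k * v_k^T
    W_hat = np.sqrt(n) * U[:, :K]
    return B_hat, W_hat
\end{verbatim}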
\subsection{Estimation and inference of $\Theta$}\label{sec_est_Theta}
Without loss of generality, we let $\Theta_{11}$ be the parameter of our interest.
To make inference on $\Theta_{11}$, we first construct an initial estimator of $\bm{\Theta}_1\in\mathbb{R}^p$ via $\ell_1$ regularization after removing the hidden effects, and then obtain our final estimator of $\Theta_{11}$ by removing the bias due to the $\ell_1$-regularization in the first step. For this reason, our final estimator of $\Theta_{11}$ is doubly debiased.
Write $\widetilde \bm{y} = \bm{Y}\widehat{P}_B^\perp \bm{e}_1$ with $\widehat P_B^{\perp} := {\bm I}_m - \widehat\bm{B}^T(\widehat\bm{B}\widehat\bm{B}^T)^{-1}\widehat\bm{B} = {\bm I}_m - \bm{V}_K\bm{V}_K^T$ from (\ref{def_est_BW}).
In view of (\ref{eq_pby}), we propose to use the solution of the following lasso problem as the initial estimator of $\bm{\Theta}_1$,
\begin{equation}\begin{aligned}\label{def_Thetaj_hat}
\widehat{\bm{\Theta}}_1 = \argmin_{\bm{\theta}\in\mathbb{R}^p}\frac{1}{n}\norm{\widetilde\bm{y} - \bm{X}\bm{\theta}}_2^2 + \lambda_3\norm{\bm{\theta}}_1.
\end{aligned}\end{equation}
Here $\lambda_3\ge 0$ is some tuning parameter. As seen in (\ref{eq_pby}), using the projected response $\widetilde \bm{y} = \bm{Y}\widehat{P}_B^\perp \bm{e}_1$ in the above lasso problem removes the bias due to the hidden variables.
While the $\ell_1$-regularization reduces the variance of the resulting estimator, it introduces extra bias that needs to be adjusted in order to further make inference of $\Theta_{11}$. To reduce this bias due to the $\ell_1$ regularization, our final estimator of $\Theta_{11}$ is proposed as follows,
\begin{equation}\begin{aligned}\label{def_Theta_11}
\widetilde{\Theta}_{11} = \widehat{\Theta}_{11} + \widehat{\bm{\omega}}_1^T\frac{1}{n}\bm{X}^T(\widetilde\bm{y} - \bm{X}\widehat{\bm{\Theta}}_1)
\end{aligned}\end{equation}
where $\widehat{\bm{\omega}}_1 \in \mathbb{R}^p$ is the estimate of the first column $\bm{\Omega}_1$ of $\bm{\Omega} \coloneqq \Sigma^{-1}$ with $\Sigma=\text{Cov}(X)$. There are several ways of estimating $\bm{\Omega}_1$, for instance, \cite{zhangzhang2014,javanmard14,vandegeer2014}. In this paper, we follow the node-wise lasso procedure in \cite{zhangzhang2014} and \cite{vandegeer2014} to obtain $\widehat{\bm{\omega}}_1$. Specifically, let
\begin{equation}\begin{aligned}\label{formula_nodewise}
\widehat{\bm{\gamma}}_1 = \argmin_{\bm{\gamma} \in \mathbb{R}^{p-1}} {1\over n}\left\|
\bm{X}_1 - \bm{X}_{-1}\bm{\gamma}
\right\|_2^2 + \widetilde \lambda\|\bm{\gamma}\|_1
\end{aligned}\end{equation}
for some tuning parameter $\widetilde \lambda\ge 0$, where $\bm{X}_{-1}\in\mathbb{R}^{n\times(p-1)}$ is the submatrix of $\bm{X}$ with the first column removed. We write
\begin{equation}\begin{aligned}\label{def_tau_1}
\widehat{\tau}_1^2 = {1\over n}\bm{X}_1^T(\bm{X}_1 - \bm{X}_{-1}\widehat{\bm{\gamma}}_1)
\end{aligned}\end{equation}
and define
\begin{equation}\begin{aligned}\label{def_est_omega}
\widehat{\bm{\omega}}_1^T = {1\over \widehat{\tau}_1^2}\begin{bmatrix}
1 & -\widehat{\bm{\gamma}}_1^T
\end{bmatrix},
\end{aligned}\end{equation}
as the estimator of $\bm{\Omega}_1$. In Theorem \ref{thm_asymp_normal} of Section \ref{sec_theory_ASN}, we show that, conditioning on the design matrix, $\sqrt{n}(\widetilde{\Theta}_{11} - \Theta_{11})$ is asymptotically normal with mean zero and variance $\sigma_{E_1}^2 \widehat{\bm{\omega}}_1^T\widehat\Sigma\widehat{\bm{\omega}}_1$, where $\sigma_{E_1}^2:=[\Sigma_E]_{11}$ and $\widehat\Sigma = n^{-1}\bm{X}^T\bm{X}$.
In light of this result, we can test the hypothesis $H_{0,\Theta_{11}}: \Theta_{11} = 0$ versus $H_{1,\Theta_{11}}: \Theta_{11} \neq 0$, via the following test statistic
\begin{equation}\begin{aligned}\label{def_U_hat}
\widehat U_n^{(11)} = \sqrt{n}~\widetilde\Theta_{11}/\sqrt{\widehat\sigma_{E_1}^2 \widehat{\bm{\omega}}_1^T\widehat\Sigma\widehat{\bm{\omega}}_1},
\end{aligned}\end{equation}
with $\widehat\sigma_{E_1}^2$ being an estimator of $\sigma_{E_1}^2$, defined as
\begin{equation}\label{def_est_variance}
\widehat \sigma_{E_1}^2 = {1\over n}(\widehat{\bm{\epsilon}}_1-\widehat\bm{W}\widehat\bm{B}_1)^T(\widehat{\bm{\epsilon}}_1-\widehat\bm{W}\widehat\bm{B}_1)
\end{equation}
with $\widehat{\bm{\epsilon}}$, $\widehat\bm{B}$ and $\widehat\bm{W}$ obtained from (\ref{def_est_epsilon}) and (\ref{def_est_BW}).
For any given significance level $\alpha\in(0,1)$, we reject the null hypothesis if $|\widehat U_n^{(11)}| > k_{\alpha/2}$, where $k_{\alpha/2}$ is the $(1-\alpha/2)$ quantile of $N(0,1)$. Equivalently, we can also construct a $(1 - \alpha)\times 100\%$ confidence interval for $\Theta_{11}$ as
\begin{equation}\begin{aligned}\label{CI_def}
\left(\widetilde{\Theta}_{11} - k_{\alpha/2}\sqrt{\widehat\sigma_{E_1}^2\widehat{\bm{\omega}}_1^T\widehat\Sigma\widehat{\bm{\omega}}_1 / n},~~
\widetilde{\Theta}_{11} + k_{\alpha/2}\sqrt{\widehat\sigma_{E_1}^2\widehat{\bm{\omega}}_1^T\widehat\Sigma\widehat{\bm{\omega}}_1 / n}
\right).
\end{aligned}\end{equation}
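For concreteness, a minimal numerical sketch of (\ref{def_Theta_11}) and (\ref{CI_def}) (again our own illustration with hypothetical names; the projected response $\widetilde\bm{y}$, the lasso estimate $\widehat{\bm{\Theta}}_1$, the node-wise lasso estimate $\widehat{\bm{\omega}}_1$ and the variance estimate $\widehat\sigma_{E_1}^2$ are assumed to have been computed already):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def debiased_estimate_and_CI(X, y_tilde, Theta1_hat, omega1_hat,
                             sigma2_E1_hat, alpha=0.05):
    # doubly debiased estimate of Theta_{11} and its (1 - alpha) confidence interval
    n = X.shape[0]
    resid = y_tilde - X @ Theta1_hat
    Theta11_tilde = Theta1_hat[0] + omega1_hat @ (X.T @ resid) / n
    Sigma_hat = X.T @ X / n
    se = np.sqrt(sigma2_E1_hat * omega1_hat @ Sigma_hat @ omega1_hat / n)
    k = norm.ppf(1 - alpha / 2)
    return Theta11_tilde, (Theta11_tilde - k * se, Theta11_tilde + k * se)
\end{verbatim}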
\subsection{Hypothesis testing of the hidden effect}\label{sec_method_infer_B}
In practice, it is also of interest to test whether or not some response $Y_j$, for $1\le j\le m$, is affected by any of the hidden variables $Z$. If the effect of the hidden variables $Z$ is indeed significant, ignoring the hidden variables in the regression analysis may yield biased estimators and incorrect conclusion. In this case, the use of our hidden variable model (\ref{model}) is strongly preferred, as adjusting the hidden effects for modelling $Y_j$ is critical.
Without loss of generality, we take $j = 1$. The hypothesis testing problem (\ref{def_target_B}) becomes $H_{0,B_1}: \bm{B}_{1} = \b0$ versus $H_{1,B_1}: \bm{B}_1 \ne \b0$. We propose to use the following test statistic
\begin{equation}\label{def_R_hat}
\widehat R_{n}^{(1)} = n \widehat \bm{B}_1^T \widehat \bm{B}_1 / \widehat\sigma_{E_1}^2
\end{equation}
with $\widehat\bm{B}$ and $\widehat\sigma_{E_1}^2$ obtained from (\ref{def_est_BW}) and (\ref{def_est_variance}), respectively. While $\widehat \bm{B}$ depends on the regularized estimator lava in (\ref{def_est_F_j}) via the estimated residuals, an interesting phenomenon is that there is no need to further debias the estimator $\widehat \bm{B}$ for inference. In Theorem \ref{thm_B_asn}, we show that the estimator $\widehat \bm{B}_j$ is asymptotically normal and the test statistic $\widehat R_{n}^{(1)}$ converges in distribution to the $\chi^2$ distribution with degrees of freedom equal to $K$ under the null. Thus, given any significance level $\alpha\in(0,1)$, we reject the null hypothesis if $\widehat R_{n}^{(1)} > c_{\alpha}$, where $c_{\alpha}$ is the $(1-\alpha)$ quantile of the $\chi^2$ distribution with degrees of freedom equal to $K$.
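A corresponding short sketch of the test (our own illustration, reusing the hypothetical quantities \texttt{B\_hat}, \texttt{sigma2\_E1\_hat}, \texttt{n}, \texttt{K} and \texttt{alpha} from the sketches above):
\begin{verbatim}
from scipy.stats import chi2

R1 = n * B_hat[:, 0] @ B_hat[:, 0] / sigma2_E1_hat   # statistic (def_R_hat) with j = 1
reject_H0_B1 = R1 > chi2.ppf(1 - alpha, df=K)        # reject H_{0,B_1} at level alpha
\end{verbatim}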
\section{Theoretical analysis}\label{sec_theory}
In this section, we provide theoretical guarantees for our procedure in Section \ref{sec_method}. Section \ref{sec_ass} contains our main assumptions. The asymptotic normality of $\widetilde\Theta_{11}$ is established in Section \ref{sec_theory_ASN} while its efficiency and the consistent estimation of its asymptotic variance are discussed in Section \ref{sec_effciency}. The statistical guarantees for $\widehat\bm{B}$ are shown in Section \ref{sec_theory_B}.
\subsection{Assumptions}\label{sec_ass}
Throughout our analysis, we assume that $m$ and $p$ both grow with $n$ and the number of hidden variables, $K$, is fixed. Our analysis can be extended to the case where $K$ grows with $n$, at the expense of more involved conditions. We start with the following blanket distributional assumptions on $W$ and $E$.
\begin{assumption}\label{ass_error}
Let $\gamma_w$ and $\gamma_e$ denote some finite positive constants.
Assume $\Sigma_W^{-1/2}W$ is a $\gamma_w$ sub-Gaussian random vector \footnote{A centered random vector $X\in \mathbb{R}^d$ is $\gamma$ sub-Gaussian if $
\mathbb{E}[\exp(\langle u, X\rangle)] \le \exp(\|u\|_2^2\gamma^2/2)
$ for any $u\in\mathbb{R}^d$.} with $\Sigma_W={\rm Cov}(W)$.
Assume $\Sigma_E^{-1/2}E$ is a $\gamma_e$ sub-Gaussian random vector with $\Sigma_E = {\rm Cov}(E)$.
\end{assumption}
Our analysis requires the following regularity conditions on $\bm{B}$, $\Sigma_W$ and $\Sigma_E$.
\begin{assumption}\label{ass_B_Sigma}
Assume there exist some positive finite constants $c_W\le C_W$, $c_B\le C_B$, $C_E$ and $c_\varepsilon$ such that
\begin{enumerate}
\item[(a)] $c_W\le \lambda_K(\Sigma_W) \le \lambda_1(\Sigma_W) \le C_W$;
\item[(b)] $\max_{1\le j\le m}\|\bm{B}_j\|_2^2\le C_B$, $\lambda_K(\bm{B}\bm{B}^T) \ge c_B m$;
\item[(c)] $\lambda_1(\Sigma_E) \le C_E$;
\item[(d)] $\min_{1\le j\le m} \left(\bm{B}_j^T \Sigma_W \bm{B}_j + [\Sigma_E]_{jj}\right) \ge c_\varepsilon$.
\end{enumerate}
\end{assumption}
\begin{remark}\label{rem_cond_B}
Assumption \ref{ass_B_Sigma} is slightly stronger than the identifiability condition (\ref{ident_conds}) and the first condition in (\ref{cond_ident_Theta}). They are all commonly used regularity conditions in the literature of factor analysis \citep{Bai-Ng-K,Bai-factor-model-03,SW2002,Bai-Ng-forecast,fan2013large,Ahn-2013,fan2017} as well as in the related SVA literature \citep{Lee2017,wang2017}. In particular, the condition $\lambda_K(\bm{B}\bm{B}^T) \ge c_B m$ is known as the pervasive assumption, which holds, for instance, if a (small) proportion of columns of $\bm{B}$ are i.i.d. realizations of a $K$-dimensional sub-Gaussian random vector whose covariance matrix has bounded eigenvalues \citep{guo2020doubly}.
\end{remark}
We also need conditions on the design matrix $\bm{X}$. Recall that $s_n$ is defined in (\ref{def_space_Theta}).
\begin{assumption}\label{ass_X}
Assume the rows of $\bm{X}$ are i.i.d. realizations of the random vector $X\in \mathbb{R}^p$ with $\Sigma:={\rm Cov}(X)$ satisfying
$$
\max_{1\le j\le p}\Sigma_{jj} \le C,\qquad c \le \lambda_{\min}(\Sigma) \le \sup_{S\subseteq[p]: |S|\le s_n}\lambda_{\max}(\Sigma_{SS})\le C
$$ for some absolute constants $0<c<C<\infty$. Further assume $X\sim N_p(0, \Sigma)$.
\end{assumption}
Assumption \ref{ass_X} is borrowed from \cite{vandegeer2014} to analyze the theoretical properties of $\widehat{\bm{\omega}}_1$ via the node-wise lasso approach in (\ref{def_est_omega}). As commented there, the Gaussianity in Assumption \ref{ass_X} is not essential and can be relaxed to requiring that $X$ be a sub-Gaussian or bounded random vector.
Since our whole inference procedure for $\Theta_{11}$ starts with the estimation of $\bm{X}\bm{F}$ from (\ref{def_est_F_j}), the estimation error of $\bm{X}\widehat\bm{F}$ plays a critical role throughout our analysis. While upper bounds of the rate of convergence of $\|\bm{X}\widehat\bm{F}_j - \bm{X}\bm{F}_j\|_2$ have been established in \cite{chernozhukov2017}, we provide a uniform bound in Appendix \ref{app_theory_fit} by showing that, with probability tending to one, the following holds uniformly over $j\in[m]$,
\begin{equation}\label{def_Rem_j}
{1\over n}\|\bm{X}\widehat\bm{F}_j - \bm{X}\bm{F}_j\|_2^2 ~ \lesssim ~ Rem_{1,j} + Rem_{2,j}(\bm{\delta}_j) + Rem_{3,j}(\bm{\theta}_j).
\end{equation}
Here we write $\bm{F}_j = \bm{\theta}_j + \bm{\delta}_j$ with $\bm{\theta}_j \coloneqq \bm{\Theta}_j$ and $\bm{\delta}_j \coloneqq \bm{A}\bm{B}_j$. The terms $Rem_{1,j}$, $Rem_{2,j}(\bm{\delta}_j)$ and $Rem_{3,j}(\bm{\theta}_j)$ all depend on the design matrix $\bm{X}$ and their exact expressions are stated in Appendix \ref{app_theory_fit}.
For ease of presentation, we resort to a deterministic upper bound of the right hand side of (\ref{def_Rem_j}).
\begin{assumption}\label{ass_initial}
There exists a positive (deterministic) sequence $r_{n} = o(1)$ such that with probability tending to one as $n\to\infty$,
\[
\max_{1\le j\le m}\Bigl[Rem_{1,j} + Rem_{2,j}(\bm{\delta}_j) + Rem_{3,j}(\bm{\theta}_j)\Bigr]\le r_{n}.
\]
\end{assumption}
Our subsequent theoretical results naturally depend on $r_{n}$, for which we provide the explicit rate later in Corollary \ref{cor_ASN} of Section \ref{sec_theory_ASN}. Notice that Assumption \ref{ass_initial} together with (\ref{def_Rem_j}) readily implies
\[
\lim_{n\to \infty}\mathbb{P}\left\{
\max_{1\le j\le m}{1\over n}\|\bm{X}\widehat \bm{F}_j - \bm{X}\bm{F}_j\|_2^2 \lesssim r_n
\right\} = 1.
\]
\subsection{Asymptotic normality of $\widetilde\Theta_{11}$}\label{sec_theory_ASN}
In this section, we establish our main result: the asymptotic normality of our estimator $\widetilde \Theta_{11}$ from (\ref{def_Theta_11}). To this end, we first study the convergence rate of the initial estimator $\widehat{\bm{\Theta}}_1$ defined in (\ref{def_Thetaj_hat}). Recall from (\ref{def_Theta_PB_comp}) that the estimand of $\widehat{\bm{\Theta}}_1$ is $\bar{\bm{\Theta}}_1 := \bm{\Theta} P_{B}^{\perp} \bm{e}_1$ which satisfies
\[
\|\bar{\bm{\Theta}}_1\|_0 = \|\bm{\Theta} P_{B}^{\perp} \bm{e}_1\|_0 \le s_n,
\]
implied by (\ref{def_space_Theta}).
The following lemma states the $\ell_1$ convergence rate of $\widehat{\bm{\Theta}}_1 - \bar{\bm{\Theta}}_1$, whose proof can be found in Appendix \ref{app_proof_thm_Theta}. Recall that $M_n$ is defined in (\ref{def_space_Theta}) and $r_{n}$ is defined in Assumption \ref{ass_initial}.
\begin{lemma}\label{thm_Theta_simple_rates}
Under Assumptions \ref{ass_error} -- \ref{ass_initial}, assume $M_n = o(m)$, $\|{\rm Cov}(Z)\|_{\rm op} = \mathcal{O}(1)$, $\log m = o(n)$ and $s_n\log p = o(n)$. By choosing
$$
\lambda_3 \gtrsim \sqrt{\max_{1\le j\le p}\widehat\Sigma_{jj}}\sqrt{\log p\over n}
$$ in (\ref{def_Thetaj_hat}), with probability tending to one as $n\to \infty$,
\begin{equation}\begin{aligned}\label{rate_Theta_td_simp}
\|\widehat{\bm{\Theta}}_1 - \bar{\bm{\Theta}}_1\|_{1} & ~ \lesssim ~ s_n\sqrt{\log p\over n} + \left({s_nM_n\over m} + \sqrt{s_n}\right)\left(\sqrt{\log m \over n\wedge m} + r_n\right).
\end{aligned}\end{equation}
\end{lemma}
Condition $M_n = o(m)$ is needed here to ensure that $\bm{\Theta}$ is identifiable (see Section \ref{sec_id}). It can be replaced by any other identifiability condition in Remark \ref{rem_ident_cond_Theta}. Since $Z\in \mathbb{R}^K$ and $K$ is fixed, $\|{\rm Cov}(Z)\|_{\rm op} = \mathcal{O}(1)$ is a mild regularity condition. The requirement $s_n\log p = o(n)$ is also mild, as we explain below.
The first term on the right hand side of (\ref{rate_Theta_td_simp}) is the optimal rate of estimating an $s_n$-sparse coefficient vector in standard linear regression. Therefore, $s_n\sqrt{\log p} = o(\sqrt n)$ is the minimal requirement for consistently estimating $\bar{\bm{\Theta}}_1$ in the $\ell_1$-norm.
The second term stems from the error of estimating $P_{B}$, or in fact, of estimating $\bm{B}$ (see Theorem \ref{thm_rates_B} in Section \ref{sec_theory_B}). For instance, when $\bm{X}\bm{F}$ can be estimated at a fast rate, that is, when $r_n$ is sufficiently small, then (\ref{rate_Theta_td_simp}) simplifies to
\[
\|\widehat{\bm{\Theta}}_1 - \bar{\bm{\Theta}}_1\|_{1} \lesssim s_n\sqrt{\log p\over n} + {s_nM_n\over m}\sqrt{\log m \over n\wedge m} + \sqrt{s_n\log m \over n\wedge m}.
\]
The above rate becomes faster as $m$ increases. In particular, when $n = \mathcal{O}(m)$, we recover the optimal rate (up to a multiplicative logarithmic factor)
\[
\|\widehat{\bm{\Theta}}_1 - \bar{\bm{\Theta}}_1\|_{1} = \mathcal{O}_\mathbb{P}\left(s_n\sqrt{\log (p\vee m)\over n} \right).
\]
Armed with the guarantees of the initial estimator $\widehat{\bm{\Theta}}_1$, our following main result shows that $\sqrt{n}(\widetilde \Theta_{11} - \Theta_{11})$ is asymptotically normal with
a closed-form expression of the asymptotic variance. Its proof can be found in Appendix \ref{app_thm_asymp_normal}.
Recall that $\bm{\Omega} = \Sigma^{-1}$ is the precision matrix of $X$. Since $\widetilde\Theta_{11}$ depends on the estimate of $\bm{\Omega}_1\in\mathbb{R}^p$, our analysis requires $\bm{\Omega}_1$ to be sparse. Let $s_\Omega = \|\bm{\Omega}_1\|_0$ denote the sparsity of $\bm{\Omega}_1$.
\begin{theorem}\label{thm_asymp_normal}
Under Assumptions \ref{ass_error} -- \ref{ass_initial}, assume $E_1\sim N(0,\sigma_{E_1}^2)$, $\|{\rm Cov}(Z)\|_{\rm op} = \mathcal{O}(1)$, $ (s_n\vee s_\Omega)\log(p)\log(m) = o(n)$ and $s_n \log p = o(\sqrt n)$.
Further assume
\begin{align}\label{cond_Mn}
& M_n \sqrt{n} = o(m),\\\label{cond_rn}
&\norm{\bm{A}_{1\cdot}}_2 \sqrt{\log m}+\left(\|\bm{A}_{1\cdot}\|_2 \sqrt n + \sqrt{(s_n\vee s_\Omega)\log p}\right)r_n = o(1).
\end{align}
By choosing $\widetilde \lambda \asymp \sqrt{\log p/n}$ in (\ref{def_est_omega}), one has
\[\sqrt{n}(\widetilde{\Theta}_{11} - \Theta_{11}) = \zeta + \Delta,\]
where \[
\zeta \mid \bm{X} \sim N(0, \sigma_{E_1}^2 \widehat{\bm{\omega}}_1^T\widehat\Sigma\widehat{\bm{\omega}}_1),\qquad |\widehat{\bm{\omega}}_1^T\widehat\Sigma\widehat{\bm{\omega}}_1 - \Omega_{11}| = o_{\mathbb{P}}(1),\qquad \Delta = o_{\mathbb{P}}(1).\]
\end{theorem}
Theorem \ref{thm_asymp_normal} shows that the difference between $\widetilde\Theta_{11}$ and $\Theta_{11}$ scaled by $\sqrt{n}$ is decomposed into two terms, $\zeta$ and $\Delta$, where, conditioning on $\bm{X}$, $\zeta$ follows a Gaussian distribution with zero mean and variance $\sigma_{E_1}^2 \widehat{\bm{\omega}}_1^T\widehat\Sigma\widehat{\bm{\omega}}_1$, and $\Delta$ is asymptotically negligible. Indeed, $\Delta = o_{\mathbb{P}}(1)$ holds uniformly over $\bm{\Theta} \in \mathcal{M}(s_n,M_n)$ in (\ref{def_space_Theta}), so that we can use Theorem \ref{thm_asymp_normal} to construct honest confidence intervals for $\Theta_{11}$, as long as $\sigma_{E_1}^2$ can be consistently estimated.
\begin{remark}[Discussions of conditions in Theorem \ref{thm_asymp_normal}]
The Gaussianity assumption on $E_1$ is not essential. In fact, our proof shows that
$\zeta = \widehat{\bm{\omega}}_1^T\bm{X}^T\bm{E}_1 / \sqrt n$. Therefore, when $E_1$ is not Gaussian, one can still obtain $\sqrt{n}(\widetilde{\Theta}_{11} - \Theta_{11}) \mid \bm{X}\to_d N(0,\sigma_{E_1}^2 \widehat{\bm{\omega}}_1^T\widehat\Sigma\widehat{\bm{\omega}}_1)$ provided that Lindeberg's condition for the central limit theorem holds.
The condition $s_{\Omega}\log p = o(n)$ ensures the consistency of the node-wise lasso estimator $\widehat{\bm{\omega}}_1$; see \cite{vandegeer2014}. We require an extra logarithmic factor of $m$ here due to the union bounds over $j\in[m]$ for estimating $\bm{X}\bm{F}_j$.
Condition $s_n \log p = o(\sqrt n)$ restricts the number of non-zero rows in $\bm{\Theta}$. It is a rather standard condition for making inference on coefficients in high-dimensional regression \citep{javanmard14,vandegeer2014,zhangzhang2014}. As discussed after Lemma \ref{thm_Theta_simple_rates}, it is also the minimal requirement for consistently estimating $\bar{\bm{\Theta}}_1$ in the $\ell_1$-norm.
Condition (\ref{cond_Mn}) concerns the magnitude of each row of $\bm{\Theta}$ in $\ell_1$ norm and is a strengthened version of the identifiability condition (\ref{cond_ident_Theta}). Recall that the estimand of the initial estimator $\widehat{\bm{\Theta}}_1$ is $\bar{\bm{\Theta}}_1 := \bm{\Theta} P_{B}^{\perp} \bm{e}_1$ rather than $\bm{\Theta}_1$. The condition is used to ensure that the bias term for estimating $\Theta_{11}$, defined as $\Theta_{11}-\bar\Theta_{11}=\bm{\Theta}_{1\cdot}^T P_B\bm{e}_1$, is asymptotically negligible. Condition (\ref{cond_Mn}) holds, for instance, when the rows of $\bm{\Theta}$ are sufficiently sparse and the order of $m$ is comparable to or larger than $n$; see \cite{McKennan19,wang2017}.
Finally, condition (\ref{cond_rn}) restricts the $\ell_2$ norm of $\bm{A}_{1\cdot}$ as well as the order of $r_n$. To aid intuition for this condition, we provide explicit rates of $r_n$ under two common scenarios in the high-dimensional setting. As seen in Corollary \ref{cor_ASN} below, the requirement on $r_n$ again hinges on the magnitude of $\bm{A}$, which quantifies the correlation between the observable features $X$ and the hidden variables $Z$.
We refer to Remark \ref{rem_A} for detailed discussions of conditions on $\bm{A}$.
\end{remark}
The following corollary provides explicit rates of $r_n$ under two common scenarios in the high-dimensional setting, depending on the magnitude of $\|\Sigma\|_{{\rm op}}$.
\begin{corollary}\label{cor_ASN}
Assume that Assumptions \ref{ass_error} -- \ref{ass_X} hold.
\begin{enumerate}
\item[(1)] Suppose $p>n$ and $\|\Sigma\|_{\rm op} = \mathcal{O}(1)$. Assume $(s_n\vee s_\Omega)\log^2(p\vee m) = o(n)$,
\begin{equation}\begin{aligned}\label{cond_A_op}
\|\bm{A}\|_{\rm op}^2 = o\left({1\over \sqrt{(s_n\vee s_\Omega)\log p}}\right)
\end{aligned}\end{equation}
and $\|\bm{A}_{1\cdot}\|_2 = o(\sqrt{(s_n\vee s_\Omega)\log p/ n})$.
Then Assumption \ref{ass_initial} holds with
\begin{equation}\begin{aligned}\label{rate_rnj_case1}
r_{n} = \mathcal{O}\left(\|\bm{A}\|_{{\rm op}}^2 + {s_n \log (p\vee m)\over n}\right),\quad \forall\ 1\le j\le m
\end{aligned}\end{equation}
and condition (\ref{cond_rn}) holds.
\item[(2)] Suppose $p>n$, $\|\Sigma\|_{{\rm op}}\asymp p$ and ${\rm tr}(\Sigma)=\mathcal{O}(p)$. Assume $s_n(s_n\vee s_\Omega) \log^2(p\vee m) = o(n)$ and
$
\|\bm{A}\|_{{\rm op}}^2 = \mathcal{O}(1/p).
$
Then Assumption \ref{ass_initial} holds with
\[
r_{n} = \mathcal{O}\left(\sqrt{s_n\log (p\vee m) \over n}\right).
\]
Furthermore, condition (\ref{cond_rn}) holds as well.
\end{enumerate}
\end{corollary}
\begin{remark}[Discussions of conditions on $\bm{A}$]\label{rem_A}
We first explain why a restriction on the magnitude of $\bm{A}$ is necessary in the high-dimensional regime ($p > n$).
For any $j\in [m]$, recall that $\|\bm{A}\bm{B}_j\|_2^2 = \|\bm{\delta}_j\|_2^2$ and consider the regression $\bm{Y}_j = \bm{X}\bm{\delta}_j + \bm{\epsilon}_j$ with $\bm{\theta}_j = \b0$. Even in this simplified scenario, since $\bm{\delta}_j$ is a dense $p$-dimensional vector, its consistent estimation
requires $\|\bm{\delta}_j\|_2=o(1)$ when $p$ is larger than $n$ \citep{Hsu2014,chernozhukov2017,cevid2018spectral}. Therefore, one would expect that
$\|\bm{\delta}_j\|_2^2 = o(1)$ is necessary for consistent estimation of $\bm{X}\bm{F}_j$ for each $1\le j\le m$. The uniform bound over $1\le j\le m$, together with $\lambda_K(\bm{B})\gtrsim \sqrt{m}$, in turn implies
\begin{equation}\begin{aligned}\label{bd_A_op_minimax}
\|\bm{A}\|_{{\rm op}}^2 = o(1).
\end{aligned}\end{equation}
Therefore, consistent estimation of $\bm{X}\bm{F}$ in the high-dimensional scenario necessarily requires a small $\|\bm{A}\|_{{\rm op}}^2$. Recall that $\bm{A} = \Sigma^{-1}\text{Cov}(X,Z)$ with $\Sigma = \text{Cov}(X)$. A small $\|\bm{A}\|_{{\rm op}}^2$ means either (a) the observable features $X$ and the hidden variables $Z$ are weakly correlated, or (b) $\Sigma$ has spiked eigenvalues. We comment on these two cases separately below.
Scenario (1) of Corollary \ref{cor_ASN} corresponds to (a).
When only a finite number of the observable features $X$ are correlated with the hidden variables $Z$, we have
$\|\bm{A}\|_{{\rm op}}^2 = \mathcal{O}(\rho)$ where $\rho=\max_{1\le j\le p,1\le k\le K}\textrm{Corr}(X_j, Z_k)$. Condition (\ref{cond_A_op}) holds if $\rho = o(1/\sqrt{(s_n\vee s_\Omega)\log p})$.
In addition, $\|\bm{A}_{1\cdot}\|_2 = o(\sqrt{(s_n\vee s_\Omega)\log p/ n})$ holds, for instance, when either the rows of $\bm{A}$ are balanced in the sense that $\|\bm{A}_{1\cdot}\|_2 = \mathcal{O}(\|\bm{A}\|_{{\rm op}}/\sqrt{p})$ or $\max_{1\le k\le K}\textrm{Corr}(X_1, Z_k) = o(\sqrt{(s_n\vee s_\Omega)\log p/n})$.
Scenario (2) of Corollary \ref{cor_ASN} corresponds to (b) where $\Sigma$ has a fixed number of spiked eigenvalues. One instance is when $X$ follows a factor model $X = \bm{\Gamma} F + W'$ where $F\in\mathbb{R}^r$ is the factor and the loading matrix $\bm{\Gamma}\in\mathbb{R}^{p\times r}$ satisfies $\lambda_r(\bm{\Gamma}) \gtrsim \sqrt p$ with $r < p$. \citet[Section 3.4]{bing2020adaptive} provides examples of this model under which $\|\Sigma\|_{{\rm op}} = \mathcal{O}(p)$, ${\rm tr}(\Sigma)= \mathcal{O}(p)$ and $\|\bm{A}\|_{{\rm op}}^2 = \mathcal{O}(1/p).$
\end{remark}
\subsection{Efficiency and consistent estimation of the asymptotic variance}\label{sec_effciency}
From Theorem \ref{thm_asymp_normal}, our estimator $\widetilde\Theta_{11}$ has the asymptotic variance $\sigma_{E_1}^2\Omega_{11}/n$, which, according to the Gauss-Markov theorem, equals the asymptotic variance of the best linear unbiased estimator (BLUE) of $\Theta_{11}$ in the classical low-dimensional setting without any hidden variables. Therefore, our estimator $\widetilde \Theta_{11}$ is efficient in this Gauss-Markov sense. In fact, even when there exist hidden variables $Z$, $\sigma_{E_1}^2\Omega_{11}/n$ is also the minimal variance among linear unbiased estimators in the low-dimensional setting. Indeed, when $Z$ is observable, the Gauss-Markov theorem states that the oracle BLUE of $\Theta_{11}$ has the asymptotic variance
\[
{\sigma_{E_1}^2\over n}
~ \bm{e}_1^T \begin{bmatrix}
\Sigma & \text{Cov}(X,Z) \\
\text{Cov}(Z,X) & \text{Cov}(Z)
\end{bmatrix}^{-1} \bm{e}_1 = {\sigma_{E_1}^2\over n}\left(
\Omega_{11} + \bm{A}_{1\cdot}^T \Sigma_W^{-1}\bm{A}_{1\cdot}
\right).
\]
Here the equality uses the block matrix inversion formula, the definition $\bm{A} = \Sigma^{-1}\text{Cov}(X,Z)$ and $\Sigma_W = \text{Cov}(Z) - \text{Cov}(Z,X)\Sigma^{-1} \text{Cov}(X,Z)$. Compared with $\sigma_{E_1}^2\Omega_{11} / n$, the term $\bm{A}_{1\cdot}^T \Sigma_W^{-1} \bm{A}_{1\cdot}$ represents the efficiency loss due to the hidden variables. However, in the high-dimensional setting with $\|\bm{A}_{1\cdot}\|_2 = o(1)$ (together with $\Omega_{11} \ge c$ and $\lambda_K(\Sigma_W) \ge c_W$), this efficiency loss becomes negligible and the asymptotic variance in the above display reduces to $\sigma_{E_1}^2\Omega_{11} / n$.
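For completeness, the block inversion step can be spelled out as follows. With $C := \text{Cov}(X,Z)$ and $\Sigma_W = \text{Cov}(Z) - C^T\Sigma^{-1}C$ the Schur complement of $\Sigma$, the upper-left $p\times p$ block of the inverse in the display above equals
\[
\Sigma^{-1} + \Sigma^{-1}C\,\Sigma_W^{-1}C^T\Sigma^{-1} = \bm{\Omega} + \bm{A}\Sigma_W^{-1}\bm{A}^T,
\]
whose $(1,1)$ entry is exactly $\Omega_{11} + \bm{A}_{1\cdot}^T\Sigma_W^{-1}\bm{A}_{1\cdot}$.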
In the high-dimensional regime, if one treats model (\ref{model}) as a semi-parametric model $Y_1 = \Theta_{11}X_1 + G(X_{-1}, Z) + E_1$ for some unknown function $G:\mathbb{R}^{p-1}\times \mathbb{R}^K \to \mathbb{R}$ with $Z$ being observable, our estimator $\widetilde\Theta_{11}$ of $\Theta_{11}$ is semi-parametric efficient according to Theorem 2.3 and Lemma 2.1 in \cite{vandegeer2014}.\\
Our proposed test statistic in (\ref{def_U_hat}) and confidence intervals in (\ref{CI_def}) require an estimate of $\sigma_{E_1}^2$.
The following proposition ensures that the proposed estimator $\widehat\sigma_{E_1}^2$ in (\ref{def_est_variance}) is consistent. Consequently, an application of Slutsky's theorem coupled with Theorem \ref{thm_asymp_normal} justifies the validity of our test statistic and confidence intervals in Section \ref{sec_est_Theta}.
\begin{prop}\label{prop_sigma_E}
Under conditions of Theorem \ref{thm_asymp_normal}, $\widehat \sigma_{E_1}^2$ defined in (\ref{def_est_variance}) satisfies
\[
|\widehat \sigma_{E_1}^2 - \sigma_{E_1}^2| = o_\mathbb{P}(1).
\]
\end{prop}
\subsection{Rate of convergence and asymptotic normality of $\widehat B$}\label{sec_theory_B}
Towards establishing the theoretical guarantees of $\widetilde\Theta_{11}$ in the previous section, one intermediate, but important, step is to sharply characterize the error of estimating $P_B$, or equivalently, $\bm{B}$. In this section, we first present the convergence rate of our estimator $\widehat\bm{B}$ in (\ref{def_est_BW}). Then, we establish the asymptotic normality of $\widehat\bm{B}$ to test the hypothesis (\ref{def_target_B}).
First notice that, without further restrictions, $\bm{W}$ and $\bm{B}$ are not identifiable even if one has direct access to $\bm{\epsilon} = \bm{W}\bm{B}+\bm{E}$. This can be seen by constructing $\bm{W}' = \bm{W} Q$ and $\bm{B}' = Q^{-1}\bm{B}$ for any invertible matrix $Q\in \mathbb{R}^{K\times K}$ such that $\bm{W}\bm{B} = \bm{W}'\bm{B}'$. To quantify the estimation error of $\widehat\bm{B}$, we introduce the following rotation matrix \citep{bai2020simpler},
\begin{equation}\label{def_H0}
\bm{H}_0^T = {1\over nm}\bm{W}^T\bm{W} \bm{B} \widehat\bm{B}^T \bm{D}_K^{-2} \in \mathbb{R}^{K\times K}
\end{equation}
with $\bm{D}_K$ defined in (\ref{def_svd_epsilon})\footnote{If $\bm{D}_K$ is not invertible, we use its Moore-Penrose inverse instead.}.
Further define
\begin{equation}\label{def_B_tilde}
\widetilde \bm{B} = \bm{H}_0 \bm{B} \in \mathbb{R}^{K\times m}.
\end{equation}
Since $\widetilde \bm{B} = (nm)^{-1}\bm{D}_K^{-2}\widehat\bm{B} (\bm{B}^T\bm{W}^T\bm{W}\bm{B})$ only depends on the data and the identifiable quantity $\bm{B}^T\bm{W}^T\bm{W}\bm{B}$, $\widetilde \bm{B}$ is well-defined.
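To see this representation, transpose (\ref{def_H0}) and multiply by $\bm{B}$ on the right, using the symmetry of $\bm{W}^T\bm{W}$ and $\bm{D}_K^{-2}$:
\[
\widetilde \bm{B} = \bm{H}_0\bm{B} = \Bigl({1\over nm}\bm{D}_K^{-2}\widehat\bm{B}\,\bm{B}^T\bm{W}^T\bm{W}\Bigr)\bm{B} = {1\over nm}\bm{D}_K^{-2}\widehat\bm{B}\bigl(\bm{B}^T\bm{W}^T\bm{W}\bm{B}\bigr).
\]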
The following theorem provides the uniform $\ell_2$ convergence rate of $\widehat\bm{B}_j - \widetilde\bm{B}_j$ over $1\le j\le m$. Recall that $M_n$ is defined in (\ref{def_space_Theta}) and $r_{n}$ is defined in Assumption \ref{ass_initial}.
\begin{theorem}\label{thm_rates_B}
Under Assumptions \ref{ass_error}, \ref{ass_B_Sigma}, \ref{ass_initial} and $M_n = o(m)$, with probability tending to one as $n\to \infty$,
one has
\begin{equation}\label{eq_thm_rates_B}
\max_{1\le j\le m}\|\widehat\bm{B}_j - \widetilde\bm{B}_j\|_2 \lesssim \sqrt{\log m\over n\wedge m} + r_{n}.
\end{equation}
\end{theorem}
The first term on the right hand side of (\ref{eq_thm_rates_B}) is the error rate of estimating $\bm{B}$ when $\bm{\epsilon}=\bm{Y} - \bm{X}\bm{F}$ is known, while the second term corresponds to the error of estimating $\bm{\epsilon}$ by $\widehat{\bm{\epsilon}} = \bm{Y}-\bm{X}\widehat\bm{F}$.
If $\bm{\epsilon} = \bm{W}\bm{B} + \bm{E} \in \mathbb{R}^{n\times m}$ were observed, theoretical guarantees of $\widehat\bm{B}$ and $\widehat\bm{W}$ from (\ref{def_est_BW}) for diverging $n$ and $m$ have been thoroughly studied in the literature of factor models \citep{Bai-factor-model-03,Bai-Ng-forecast,fan2013large}. Our results reduce to the existing ones in this case with $r_n = 0$. The logarithmic factor of $m$ comes from establishing the union bound over $j\in[m]$. The appearance of $m$ in the denominator of the bound (\ref{eq_thm_rates_B}) also reflects the benefit of having a large $m$, the so-called blessing of dimensionality \citep{Bai-factor-model-03,fan2013large}. When one only has access to $\widehat{\bm{\epsilon}}$ instead of $\bm{\epsilon}$, the analysis becomes more challenging. Specifically, since $\widehat{\bm{\epsilon}}= \bm{W}\bm{B} + \widetilde\bm{E}$ with $\widetilde\bm{E} := \bm{E}+\widehat{\bm{\epsilon}} - \bm{\epsilon}$, one can view $\widehat{\bm{\epsilon}}$ as a factor model with the factor component $\bm{W}\bm{B}$ and the error $\widetilde\bm{E}$. The difficulty of establishing Theorem \ref{thm_rates_B} lies in characterizing the dependence between $\widetilde\bm{E}$ and $\bm{W}\bm{B}$, as $\widehat{\bm{\epsilon}}$ depends on the data and hence also on $\bm{W}$ in a complicated way.\\
In addition to the rates of convergence, the following theorem provides the asymptotic normality of $\widehat \bm{B}_j$ for any $1\le j\le m$.
\begin{theorem}\label{thm_B_asn}
Under the same conditions of Theorem \ref{thm_rates_B}, assume $s_n\log(p\vee m) = o(\sqrt{n})$, $\|\Sigma_E\|_{\infty,1}=\mathcal{O}(1)$, $ \sqrt{n} = o(m/\log (m))$ and
\begin{equation}\label{cond_r_asn}
\|\bm{A}\|_{{\rm op}}^2\max\left\{ n \|\bm{A}\bm{B}_j\|_2^2,~ s_n\log (p\vee m), ~ \sqrt{n\log m\over m}\right\}= o(1).
\end{equation}
Then for any $1\le j\le m$, one has
\[
\sqrt{n}(\widehat\bm{B}_j - \widetilde\bm{B}_j) \overset{d}{\longrightarrow} N_K(\b0, \sigma_{E_1}^2 {\bm I}_K),\qquad \textrm{as }n\to\infty.
\]
\end{theorem}
As we do not impose any identifiability conditions on $\bm{B}$, our estimator $\widehat \bm{B}_j$ is not centered around $\bm{B}_j$ but rather around its rotated version $\widetilde\bm{B}_j = \bm{H}_0\bm{B}_j$ \citep{Bai-factor-model-03,bai2020simpler}. We emphasize that this rotation does not impede us from testing $\bm{B}_j = \b0$.
Specifically, under the null hypothesis $\bm{B}_j = \b0$ we have $\widetilde\bm{B}_j = \bm{H}_0\bm{B}_j = \b0$, so Theorem \ref{thm_B_asn} together with the continuous mapping theorem implies that for any $1\le j\le m$,
\[
n \widehat \bm{B}_j^T \widehat \bm{B}_j / \sigma_{E_j}^2 \overset{d}{\longrightarrow} \chi^2_K,\qquad \textrm{as }n\to\infty,
\]
provided that
\begin{equation}\label{cond_A_infer_B}
\|\bm{A}\|_{{\rm op}}^2 \max\left\{ s_n\log (p\vee m), ~ \sqrt{n\log(m)/m}\right\}= o(1).
\end{equation}
Since $\sigma_{E_1}^2$ can be consistently estimated as shown in Proposition \ref{prop_sigma_E} of Section \ref{sec_effciency}, this justifies the validity of our testing statistic $\widehat R_n^{(1)}$ in (\ref{def_R_hat}) of Section \ref{sec_method_infer_B}.
In case one is willing to assume additional identifiability conditions on $\bm{B}$, such as those in \cite{Bai-Ng-forecast}, the rotation matrix $\bm{H}_0$ becomes the identity matrix asymptotically \citep{bai2020simpler}.
In the following, we comment on the conditions in Theorem \ref{thm_B_asn}.
To allow a non-diagonal $\Sigma_E$, the inferential result on $\bm{B}$ requires $\|\Sigma_E\|_{\infty,1}=\mathcal{O}(1)$, a stronger condition than Assumption \ref{ass_B_Sigma} (c), as well as $\log (m) \sqrt{n} = o(m)$. These conditions are commonly assumed in the analysis of factor models \citep{Bai-factor-model-03,Bai-Ng-forecast,bai2020simpler}, and can be dropped if $\Sigma_E$ is proportional to the identity matrix, as remarked in \citet[Theorem 6]{Bai-factor-model-03}.
Condition (\ref{cond_r_asn}) is needed to ensure that the error
of estimating $\bm{\epsilon}$ by $\widehat{\bm{\epsilon}}$ is negligible. For a similar reason, if $\Sigma_E$ is proportional to the identity matrix, the requirement $\|\bm{A}\|_{{\rm op}}^2\sqrt{n\log(m)/m}=o(1)$ can be removed. In general, condition (\ref{cond_r_asn}) holds, for instance, if $\sqrt{n/m} = \mathcal{O}(s_n\log(p\vee m))$,
\begin{equation}\label{eq_condition_B}
\|\bm{A}\|_{{\rm op}}^2 = o\left({1\over s_n\log (p\vee m)}\right),\qquad \|\bm{A}\|_{{\rm op}}^2\|\bm{A}\bm{B}_j\|_2^2 = o\left(
{1\over n}
\right).
\end{equation}
We reiterate that for testing the hypothesis $\bm{B}_j = \b0$, the condition $\|\bm{A}\|_{{\rm op}}^2\|\bm{A}\bm{B}_j\|_2^2 = o(1/n)$ holds automatically. We refer to Corollary \ref{cor_ASN} for the discussion on the first condition in (\ref{eq_condition_B}).\\
\begin{remark}[Comparison with \cite{guo2020doubly}]
As briefly mentioned in the Introduction, \cite{guo2020doubly} consider the univariate model $y = X^T \bm{\theta} + Z^T\bm{\beta} +\varepsilon$ and propose a doubly debiased lasso procedure for making inference on entries of $\bm{\theta}$, say $\theta_1$, in the presence of hidden confounders $Z\in\mathbb{R}^{K}$. Although both their estimator of $\theta_1$ and our estimator of $\Theta_{11}$ are shown to be efficient in the Gauss-Markov sense (i.e. the same asymptotic variance), the analyses are carried out under different modelling assumptions. For instance, different from our model, \cite{guo2020doubly} additionally assume $X = \bm{\Gamma} Z + W'$ with some additive error $W'$ that is independent of $Z$. They also assume all $K$ singular values of the loading matrix $\bm{\Gamma}$ to be of order $\sqrt{p}$. Consequently, the $L_2$-projection matrix $\bm{A} = (\mathbb{E}[XX^T])^{-1}\mathbb{E}[XZ^T]$ satisfies $\|\bm{A}\|_{{\rm op}}^2=\mathcal{O}(1/p)$ and the residual vector $W=Z - \bm{A}^T X$ satisfies $\|\Sigma_W\|_{{\rm op}} = \mathcal{O}(1/p)$. By contrast, from Corollary \ref{cor_ASN} and its subsequent remark, our analysis does not necessarily require $\|\bm{A}\|_{{\rm op}}^2=\mathcal{O}(1/p)$. This can be understood as a benefit of having multivariate responses. On the other hand, we require parts (a) and (b) in Assumption \ref{ass_B_Sigma}, and the latter does not hold under the conditions on $X$ and $\bm{\Gamma}$ in \cite{guo2020doubly}. Finally, due to the multivariate nature of the responses, we are able to conduct inference on $\bm{B}$ to test the existence of hidden confounders, whereas, in the univariate case, \cite{guo2020doubly} do not study such inference problems on $\bm{\beta}$.
\end{remark}
\section{Practical considerations and simulation study}\label{sec_prac_and_sim}
In this section we first discuss two practical considerations of our procedure: selection of the number of hidden variables $K$ in Section \ref{sec_K} and selection of tuning parameters in Section \ref{sec_cv}. We then evaluate the finite sample performance of the proposed inferential method via synthetic datasets in Section \ref{sec_sim}.
\subsection{Selection of the number of hidden variables}\label{sec_K}
Recall that $\bm{\epsilon} = \bm{W}\bm{B} + \bm{E}$ follows a factor model with $K$ latent factors (corresponding to $\bm{W}$) if $\bm{\epsilon}$ were observed. We propose to select $K$ based on the estimate $\widehat{\bm{\epsilon}}$ in (\ref{def_est_epsilon}) of $\bm{\epsilon}$. Specifically, we adopt the criterion in \cite{bing2020adaptive} that selects $K$ by
\begin{equation}\label{select_K}
\widehat K = \argmax_{j\in \{1,2,\ldots, \bar K\}} d_j / d_{j+1},
\end{equation}
where $d_1 \ge d_2 \ge \cdots$ are the singular values of $\widehat{\bm{\epsilon}}/\sqrt{nm}$ in (\ref{def_svd_epsilon}) and $\bar K$ is a pre-specified number, for example, $\bar K =\floor{(n\wedge m)/2}$ \citep{lam2012} with $\floor{x}$ standing for the largest integer that is no greater than $x$. Criterion (\ref{select_K}) was first proposed by \cite{lam2012} for selecting the number of latent factors in factor models. It is related to the ``elbow'' approach of selecting the number of components in PCA. In our current context, both theoretical and empirical justifications of the criterion (\ref{select_K}) have been provided in \cite{bing2020adaptive}. On the other hand, there exist other methods of selecting $K$, for which we refer to \cite{Lee2017,wang2017,bing2020adaptive}.
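The criterion (\ref{select_K}) is straightforward to implement; the following is a minimal sketch in Python (the function name is illustrative and not part of any released software).
\begin{verbatim}
import numpy as np

def select_K(eps_hat, K_bar=None):
    """Eigenvalue-ratio selection of the number of hidden variables.

    eps_hat : (n, m) residual matrix Y - X F_hat
    K_bar   : upper bound; defaults to floor(min(n, m) / 2)
    """
    n, m = eps_hat.shape
    if K_bar is None:
        K_bar = min(n, m) // 2
    # singular values d_1 >= d_2 >= ... of eps_hat / sqrt(n m)
    d = np.linalg.svd(eps_hat / np.sqrt(n * m), compute_uv=False)
    ratios = d[:K_bar] / d[1:K_bar + 1]   # d_j / d_{j+1}, j = 1, ..., K_bar
    return int(np.argmax(ratios)) + 1     # argmax is 0-based
\end{verbatim}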
\subsection{Selection of tuning parameters}\label{sec_cv}
We describe how to select the tuning parameters in our procedure for making inference on $\Theta_{11}$.
The estimation of $\bm{X}\bm{F}$ in (\ref{def_est_F_j}) requires the selection of $\lambda_1^{(j)}$ and $\lambda_2^{(j)}$ for $j\in [m]$. Their theoretical orders are stated in Theorem \ref{thm_pred} of Appendix \ref{app_theory_fit}. In practice, one could choose them over a two-way grid of $\lambda_1^{(j)}$ and $\lambda_2^{(j)}$ via cross-validation (CV) by minimizing the mean squared prediction error on a validation set (for instance, by using $k$-fold CV). When the dimensions $p$ and $m$ are large, such a two-way grid search might be computationally burdensome. \citet[Appendix E.3]{bing2020adaptive} proposed a faster way of selecting $\lambda_1^{(j)}$ and $\lambda_2^{(j)}$. For the reader's convenience, we restate it here. Pick any $j\in[m]$. We start with a grid $\mathcal{G}$ of $\lambda_2^{(j)}$ and for each $\lambda_2^{(j)} \in \mathcal{G}$, we set
\[
\lambda_1^{(j)}(\lambda_2^{(j)}) = c_0 \sqrt{\max_{1\le j\le p} M_{jj}(\lambda_2^{(j)})}\left(\sqrt{m \over n} + \sqrt{2\log p \over n}\right)
\]
where
$\bm{M}(\lambda_2^{(j)}) = n^{-1} \bm{X}^T Q^2_{\lambda_2^{(j)}} \bm{X}$ with
$Q_{\lambda_2^{(j)}} = {\bm I}_n - \bm{X}(\bm{X}^T\bm{X} + n\lambda_2^{(j)}{\bm I}_p)^{-1}\bm{X}^T$
and
$c_0>0$ is some universal constant (our simulations reveal good performance for $c_0 = 1$). This choice of $\lambda_1^{(j)}(\lambda_2^{(j)})$ is based on its theoretical order in Theorem \ref{thm_pred} of Appendix \ref{app_theory_fit}. We then use $5$-fold cross-validation to select $\lambda_2^{(j)*}$ which gives the smallest mean squared error of the predicted values. Fixing $\lambda_2^{(j)*}$, the optimization problem in (\ref{crit_Theta}) becomes a group-lasso problem and we propose to select $\lambda_1^{(j)}$ via $5$-fold cross-validation (for instance, via the \textsf{cv.glmnet} function in the \textsf{glmnet} package in \textsc{R}).
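For reference, the mapping $\lambda_2^{(j)}\mapsto\lambda_1^{(j)}(\lambda_2^{(j)})$ can be computed as in the following sketch (Python; illustrative names, not part of any released software).
\begin{verbatim}
import numpy as np

def lambda1_given_lambda2(X, m, lambda2, c0=1.0):
    """Compute lambda_1^{(j)}(lambda_2^{(j)}) from the rule stated above.

    X       : (n, p) design matrix
    m       : number of responses
    lambda2 : candidate value of lambda_2^{(j)}
    c0      : universal constant (c0 = 1 in our simulations)
    """
    n, p = X.shape
    # Q = I_n - X (X^T X + n * lambda2 * I_p)^{-1} X^T  (symmetric)
    Q = np.eye(n) - X @ np.linalg.solve(X.T @ X + n * lambda2 * np.eye(p), X.T)
    QX = Q @ X
    M_diag = np.sum(QX ** 2, axis=0) / n   # diagonal of M(lambda_2) = X^T Q^2 X / n
    return c0 * np.sqrt(M_diag.max()) * (np.sqrt(m / n) + np.sqrt(2 * np.log(p) / n))
\end{verbatim}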
The initial estimator $\widehat{\bm{\Theta}}_1$ of $\bm{\Theta}_1$ in (\ref{def_Thetaj_hat}) requires another tuning parameter, $\lambda_3$. As (\ref{def_Thetaj_hat}) is a standard lasso problem, we propose to select $\lambda_3$ via $5$-fold cross-validation implemented in the \textsf{cv.glmnet} function of the \textsf{glmnet} package in \textsc{R}.
Finally, recall that we use the node-wise lasso procedure in (\ref{def_est_omega}) for estimating the first column of the precision matrix $\bm{O}mega$. We propose to select $\widetilde \lambda$ in (\ref{def_est_omega}) by 5-fold CV as well.
\subsection{Simulations}\label{sec_sim}
In this section we conduct extensive simulations to verify the performance of our developed inferential tools for testing $\Theta_{ij} = 0$ and $\bm{B}_j = \b0$.
\paragraph{Data generating mechanism:}
The data generating process is as follows.
For generating the design matrix, we simulate $\bm{X}_i$ i.i.d. from $N_p(\b0,\Sigma)$ where $\Sigma_{jk} = (-1)^{j+k}\cdot (0.5)^{|j-k|}$ for all $j,k\in [p]$. We simulate $A_{jk}\sim \eta\cdot N(0.5,0.1)$ and $B_{kl}\sim N(0.1,1)$ for $j \in [p]$, $k\in[K]$, $l\in [m]$, where the parameter $\eta$ controls the magnitude of the entries of $\bm{A}$.
To generate $\bm{\Theta}$, for given integers $s$ and $s_m$,
we sample the entries of the top left $s\times s_m$ submatrix of $\bm{\Theta}$ i.i.d. from $N(2,0.1)$ and set all other entries of $\bm{\Theta}$ to zero. The number of non-zero rows of $\bm{\Theta}$ is set to $s = 3$ while the sparsity of each non-zero row is fixed at $s_m = 10$. Next, we generate i.i.d. $\bm{Z}_i = \bm{A}^T\bm{X}_i + \bm{W}_i$ with $\bm{W}_i \sim N_K(\b0,3^2{\bm I}_K)$. Finally, we generate i.i.d. $\bm{Y}_i = \bm{\Theta}^T\bm{X}_i + \bm{B}^T\bm{Z}_i + \bm{E}_i$ with $\bm{E}_i\sim N_m(\b0,{\bm I}_m)$.
Throughout the simulation, we fix $n = 200$, $K=3$ and consider $p \in \{50, 250\}$, $m\in \{20,50,100\}$ and $\eta \in \{0.2,1\}$. Each setting is repeated 25 times unless otherwise specified.
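The data generating process above can be reproduced as in the following sketch (Python; we read $N(a,b)$ as mean $a$ and variance $b$, which is an assumption on the notation, and the function name is illustrative).
\begin{verbatim}
import numpy as np

def generate_data(n=200, p=50, m=20, K=3, eta=0.2, s=3, s_m=10, seed=0):
    """Sketch of the simulation design described above."""
    rng = np.random.default_rng(seed)
    # Sigma_jk = (-1)^{j+k} * 0.5^{|j-k|}
    idx = np.arange(p)
    Sigma = ((-1.0) ** (idx[:, None] + idx[None, :])
             * 0.5 ** np.abs(idx[:, None] - idx[None, :]))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    A = eta * rng.normal(0.5, np.sqrt(0.1), size=(p, K))   # A_jk ~ eta * N(0.5, 0.1)
    B = rng.normal(0.1, 1.0, size=(K, m))                  # B_kl ~ N(0.1, 1)
    Theta = np.zeros((p, m))
    Theta[:s, :s_m] = rng.normal(2.0, np.sqrt(0.1), size=(s, s_m))
    Z = X @ A + rng.normal(0.0, 3.0, size=(n, K))          # W_i ~ N_K(0, 3^2 I_K)
    E = rng.normal(0.0, 1.0, size=(n, m))                  # E_i ~ N_m(0, I_m)
    Y = X @ Theta + Z @ B + E
    return X, Y, Theta, A, B
\end{verbatim}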
\paragraph{Procedures under comparison:} For our proposed procedure, we select tuning parameters in the way we described in Section \ref{sec_cv}. To concentrate on the comparison of inference, we use the true $K$ as input (our simulation reveals that $K$ can be consistently estimated by (\ref{select_K}) in almost all settings). For comparison, we also consider the following approaches.
\begin{itemize}
\item Desparsified method (DSpar) implemented in the ``hdi'' package in \textsc{R},
\item Decorrelated Score (DScore) test implemented in the ``ScoreTest'' package\footnote{\url{https://github.com/huijiefeng/ScoreTest}} in \textsc{R},
\item Doubly Debiased Lasso (DDL) method proposed by \citet{guo2020doubly}\footnote{\url{https://github.com/zijguo/Doubly-Debiased-Lasso}}.
\end{itemize}
\paragraph{Testing on $\bm{T}heta$:}
We evaluate the performance of conducting hypothesis testing on $\bm{\Theta}$ by using all four methods in each combination of $p \in \{50, 250\}$, $m\in \{20,50,100\}$ and $\eta \in \{0.2,1\}$.
To introduce the metrics we use, for each generated $\bm{\Theta}$, we let ${\mathcal{S}} = \{(i,j): \Theta_{ij}\neq 0\}$ denote the support of $\bm{\Theta}$ and ${\mathcal{S}}^c$ denote its complement. By fixing the significance level at $\alpha = 0.05$, we compute the empirical Type I error and the empirical Power for each method, defined as
\begin{equation}\begin{aligned}\nonumber
&\text{Type I error} = \frac{1}{|{\mathcal{S}}^c|} \sum_{(i,j)\in {\mathcal{S}}^c} 1\left\{
\textrm{Reject the null $H_{0,\Theta_{ij}}$}
\right\}\\\nonumber
&\text{Power} = \frac{1}{|{\mathcal{S}}|} \sum_{(i,j)\in {\mathcal{S}}}
1\left\{
\textrm{Reject the null $H_{0,\Theta_{ij}}$}
\right\}.
\end{aligned}\end{equation}
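These two metrics can be computed directly from the matrix of rejection indicators, as in the following sketch (Python, illustrative names).
\begin{verbatim}
import numpy as np

def type1_and_power(reject, support):
    """Empirical Type I error and Power from rejection indicators.

    reject  : boolean (p, m) array, True if H_{0,Theta_ij} is rejected
    support : boolean (p, m) array, True if Theta_ij != 0
    """
    type1 = reject[~support].mean()   # false rejections over S^c
    power = reject[support].mean()    # correct rejections over S
    return type1, power
\end{verbatim}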
Table \ref{table_error} reports the averaged Type I errors and Powers for all four methods in each setting\footnote{Since \cite{guo2020doubly} only provides guarantees of DDL for large $p$, we only compare with DDL in the high-dimensional scenarios. Due to the long running time of DDL, we only report its performance for $m = 20$ and $p = 250$.}. As we can see, when $\eta = 0.2$ so that the magnitude of hidden effects is relatively small, in both low ($p=50$) and high ($p=250$) dimensional settings, the averaged Type I errors of all methods are generally close to the nominal level 0.05, while the proposed method achieves higher Powers. When $\eta = 1.0$ so that the magnitude of hidden effects is relatively large, in the low dimensional setting $p = 50$, the averaged Type I errors of the proposed approach are much lower and closer to the nominal level than all other methods.
On the other hand, in the high dimensional setting $p = 250$, although all methods have similar Type I errors, our proposed approach yields much higher Powers.
\begin{table}[t]
\caption{The averaged Type I errors and Powers at significance level 0.05 for the proposed method, DSpar, DScore and DDL}\label{table_error}
\begin{center}
\begin{tabular}{ c c c c c c c c c}
\hline
$p$ &Metric&Method&\multicolumn{3}{c}{$\eta = 0.2$}&\multicolumn{3}{c}{$\eta = 1.0$} \\
&&& $m = 20$ & $m = 50$ & $m = 100$ & $m = 20$ & $m = 50$ & $m = 100$\\
\hline
50&Type I error&Proposed&0.057&0.072&0.085&0.117&0.102&0.104\\
&&DSpar&0.060&0.059&0.064&0.338&0.313&0.282\\
&&DScore&0.054&0.060&0.051&0.367&0.361&0.348\\
&&DDL&
- & - & - & - & - & - \\
\hline &Power&Proposed&1.000&1.000&1.000&0.929&1.000&1.000\\
&&DSpar&0.970&0.866&0.941&0.924&0.957&0.757\\
&&DScore&0.982&0.916&0.934&0.908&0.857&0.942\\
&&DDL &
- & - & - & - & - & - \\
\hline
250&Type I error&Proposed&0.051&0.076&0.063&0.089& 0.097 &0.116\\
&&DSpar&0.058&0.059&0.054&0.110&0.114&0.111\\
&&DScore&0.045&0.046&0.052&0.105&0.104&0.109\\
&&DDL&0.098&-&-&0.114&-&-\\
\hline &Power&Proposed&1.000&1.000&1.000&0.998&1.000&0.998\\
&&DSpar&0.934&0.88&0.954&0.580&0.602&0.729\\
&&DScore&0.913&0.856&0.883&0.663&0.683&0.702\\
&&DDL&0.893&-&-&0.691&-&-\\
\hline
\end{tabular}
\end{center}
\end{table}
We further demonstrate how the empirical Type I error and Power of different methods change as the signal strength varies. To this end, we generate $\bm{\Theta}$ by setting its non-zero entries to $r$ with $r$ varying within $\{0.05,0.07,0.1,0.2,0.3,0.5,1,1.5,2.0\}$. We consider $p = 50$, $m = 20$ and $\eta\in\{0.2,1\}$. For each choice of $r$ and $\eta$, we repeat generating the data and computing Type I errors and Powers 25 times. Figure \ref{fig:power} depicts how the averaged Type I errors and Powers change as $r$ increases for different methods.
When $\eta = 0.2$, the averaged Type I errors of all methods are similar and close to 0.05 but our proposed approach has much higher Powers than the other two methods over the whole range of the signal strength. When $\eta = 1.0$, it is clear that both DSpar and DScore fail to control the Type I errors whereas our proposed method not only controls the Type I error but also has much higher Powers as the signal strength increases. Figure \ref{fig:power} together with the results from Table \ref{table_error} suggests the superiority of our proposed approach over the compared methods.
\begin{figure}[H]
\centering
\includegraphics[width = 16cm, height = 10cm]{plots/rho_05_iso.pdf}
\caption{The average Type I errors and Powers with varying magnitude of the nonzero coefficients of $\bm{T}heta$. The black, red and green lines represent the proposed approach, DSpar and DScore, respectively. The solid lines depict the averaged Powers while the dashed lines represent the averaged Type I errors.}
\label{fig:power}
\end{figure}
\paragraph{Testing on $\bm{B}$:}
We proceed to evaluate the empirical performance of our proposed method for testing the hypothesis $H_{0,B_j}: \bm{B}_j = \b0$ versus $H_{1,B_j}: \bm{B}_j \ne \b0$. We adopt the same data generating process as described at the beginning of this section, except that we set $\bm{B}_j = \b0$ for each $j \in \{1,\dotso,b_m\}$. Here $b_m$ controls the number of zero columns of $\bm{B}$ and is chosen from $\{5, 10\}$. We also consider $p = 50$, $\eta = 0.1$ and vary $m$ within $\{20, 50, 100\}$.
Similarly, we calculate the empirical Type I error and the empirical Power as
\begin{equation}\begin{aligned}
&\text{Type I error} = \frac{1}{b_m} \sum_{j=1}^{b_m}1\left\{\textrm{Reject the null $H_{0,B_{j}}$}\right\},\\
&\text{Power} = \frac{1}{(m - b_m)} \sum_{j= b_m+1}^m 1\left\{\textrm{Reject the null $H_{0,B_{j}}$}\right\}.
\end{aligned}\end{equation}
We repeat each scenario 100 times. Table \ref{table_error_B} contains the averaged Type I errors and Powers of our procedure in all settings. The Type I errors are not far from the nominal level 0.05 and get closer to it as $m$ increases, while the Powers are close to one in all settings. These findings are in line with our Theorem \ref{thm_B_asn}.
\begin{table}[t]
\caption{The averaged Type I errors and Powers at significance level 0.05 for the proposed method of testing $H_{0,B_j}: \bm{B}_j = \b0$ versus $H_{1,B_j}: \bm{B}_j\ne \b0$.}\label{table_error_B}
\begin{center}
\begin{tabular}{ c c c c c c c}
\hline
Metric &\multicolumn{3}{c}{$b_m = 5$}&\multicolumn{3}{c}{$b_m = 10$}\\
& $m = 20$ & $m = 50$& $m = 100$& $m = 20$ & $m = 50$& $m = 100$\\
\hline
Type I error&0.072&0.064&0.062&0.063&0.041&0.058\\
\hline
Power&0.989&1.000&0.998&1.000&0.988&0.999\\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Analysis of the stock mouse dataset}\label{sec_real_data}
In this section, we validate our method on the heterogeneous stock mouse dataset \citep{valdar2006genome} from the Wellcome Trust Centre for Human Genetics.
This dataset contains $129$ continuous phenotypes that can be categorized into six categories: Behavior, Diabetes, Asthma, Immunology, Haematology and Biochemistry. The dataset also contains around $10,000$
Single Nucleotide Polymorphisms (SNPs) for each mouse. One primary interest is to discover significant associations between the SNPs and the phenotypes. Since both phenotypes and genotypes are measured by different experimenters at different time points and the mice come from different generations and families \citep{valdar2006genome}, we expect the existence of unknown hidden effects, such as batch effects. We thus deploy our proposed method for finding significant entries of $\bm{\Theta}$ while adjusting for the potential hidden effects.
To preprocess the data, since the measured phenotypes and SNPs vary across different groups of mice,
we only consider the mice that are supposed to have all phenotypes measured.
Meanwhile, we only keep the SNPs that have been measured for these retained mice. Finally, since there exist different levels of missingness among the phenotypes, we remove those phenotypes whose percentage of missing values is greater than $5\%$ and impute the missing values of the remaining phenotypes by the average of their $20$ nearest neighbors. After the data preprocessing, we obtain a dataset with $n = 810$ mice, $p = 10,346$ measured SNPs and $m = 104$ recorded phenotypes.
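The phenotype filtering and imputation step can be implemented, for instance, as in the following sketch (Python; \textsf{KNNImputer} with Euclidean distances is one possible realization of the $20$-nearest-neighbour averaging and is not necessarily the exact tool used for our analysis).
\begin{verbatim}
import numpy as np
from sklearn.impute import KNNImputer

def preprocess_phenotypes(pheno, max_missing=0.05, n_neighbors=20):
    """Drop phenotypes with more than 5% missing values and impute the rest
    by averaging their 20 nearest neighbours (one possible implementation).

    pheno : (n_mice, n_phenotypes) array with np.nan marking missing entries
    """
    missing_rate = np.isnan(pheno).mean(axis=0)
    kept = missing_rate <= max_missing
    imputed = KNNImputer(n_neighbors=n_neighbors).fit_transform(pheno[:, kept])
    return imputed, kept
\end{verbatim}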
To deploy our method, we first use the procedure in Section \ref{sec_K} to find $\widehat K=28$ for this dataset and then apply our procedure in Section \ref{sec_est_Theta} to test the significance of each entry of $\bm{\Theta}$. The tuning parameters are chosen as described in Section \ref{sec_cv}. To account for the multiple testing problem,
we apply the Bonferroni correction at the 0.05 significance level.
For comparison, we also run both DSpar and DScore (see, Section \ref{sec_sim}) with the same correction.
To interpret and validate the discovered significant associations, we map the SNPs to either annotated genes or intergenic regions.
On the one hand, our approach and the other two methods detect some common meaningful signals. For example, for Diabetes related phenotypes, such as Insulin, both our method and DSpar find the SNP \textit{rs4213255} to be significant.
This SNP is mapped to the gene \textit{repro33} which has been shown to be associated with endocrine and exocrine glands \citep{goldfine1997endocrine} that directly mediate the insulin level. Another SNP that is found by both our method
and DSpar
to be significant for an immunology phenotype
is \textit{rs13476136}, whose corresponding gene \textit{Tli1} (T lymphoma induced 1) has been demonstrated to directly affect immunology \citep{wielowieyski1999tli1, blake2003mgd, smith2019mouse, krupke2017mouse}.
Furthermore, significance of the SNP \textit{rs3713052} is discovered for a Haematology related phenotype (Haem.LICabs) by all three methods, and this SNP is mapped to the intergenic region between the gene \textit{Gm39049} and the gene \textit{Tenm4}.
Although the function of this intergenic region is unclear to us, the \textit{Tenm4} gene has been found to be associated with the hematopoietic system \citep{blake2003mgd, smith2019mouse, krupke2017mouse}.
On the other hand, there exist many meaningful associations that are only identified as significant by our method. For instance, the SNP \textit{rs6290322}
is only found to be significant by our method for a Diabetes related phenotype (Glucose). It has been shown that the mapped gene \textit{gro57} of this SNP is associated with several diabetic phenotypes
\citep{blake2003mgd, smith2019mouse, krupke2017mouse}. Our method also finds the
SNP \textit{rs3141314} to be significant
for a Haematology phenotype (Haem.PLT, platelet count). This SNP is mapped to the gene \textit{hlb258} which is known to be functionally related to blood phenotypes \citep{blake2003mgd, smith2019mouse, krupke2017mouse}. In addition, several SNPs such as \textit{rs3711203} and \textit{rs3725230} are only found by our method to be significant
for multiple immunological phenotypes. These SNPs are all mapped to the gene \textit{slck} (slick hair gene) which directly affects the integumentary system \citep{blake2003mgd, smith2019mouse, krupke2017mouse}. The integumentary system, including the skin and its appendages, acts as a physical barrier between the outside environment and the internal environment and hence plays an important role in the immune system.
Overall, our method finds more meaningful and significant SNPs than the other two methods. Specifically, for each method, we record the number of significant SNPs for each phenotype and report the summary statistics of these numbers in Table \ref{real data}. We also run our testing procedure in Section \ref{sec_method_infer_B} for $\bm{B}$ and all the test statistics are very large ($>427$ for all phenotypes), suggesting the existence of strong hidden effects. Although DSpar and DScore are able to detect a few signals that are sufficiently strong without adjusting for the hidden effects, our proposed approach appears to be more effective at finding weak or moderate yet meaningful signals.
\begin{table}[ht]
\caption{Summary statistics of the numbers of significant SNPs over all phenotypes by using different methods.}\label{real data}
\begin{center}
\begin{tabular}{c c c c c}
\hline
Method& Min & Mean & Median & Max\\
\hline
Ours & 7 & 21.77 & 21 & 43\\
\hline
DSpar & 0 & 1.77 & 0 & 39\\
\hline
DScore & 0 & 0.09& 0 & 5\\
\hline
\end{tabular}
\end{center}
\end{table}
{\small
\setlength{\bibsep}{0.85pt}{
\bibliographystyle{plainnat}
\bibliography{ref}
}
}
\appendix
\section{Column-wise $\ell_2$ convergence rates of $\bm{X}\widehat\bm{F}-\bm{X}\bm{F}$}\label{app_theory_fit}
We first provide theoretical guarantees for $\bm{X}\widehat\bm{F}-\bm{X}\bm{F}$ under a fixed design matrix $\bm{X}$, as the analysis remains valid for random designs by first conditioning on $\bm{X}$.
Recall from model (\ref{model}) that $W$ is uncorrelated with $X$. To simplify the analysis under the fixed design scenario, we assume the independence between $X$ and $W$ in order to derive the deviation bounds of their cross product. We expect that the same theoretical guarantees hold under $\text{Cov}(X,W)=0$ by using more complicated arguments.
Recall that $\widehat\bm{F} = (\widehat\bm{F}_1, \ldots, \widehat\bm{F}_m)$ with $\widehat\bm{F}_j$ obtained from solving (\ref{def_est_F_j}) for $1\le j\le m$. The following lemma characterizes the solution $\widehat\bm{F}_j = \widehat{\bm{\theta}}^{(j)} + \widehat{\bm{\delta}}^{(j)}$. It is proved in \cite{chernozhukov2017}.
\begin{lemma}\label{lem_solution}
For any $1\le j\le m$, let $(\widehat{\bm{\theta}}^{(j)}, \widehat{\bm{\delta}}^{(j)})$ be any solution of (\ref{def_est_F_j}), and denote
\begin{equation}\label{def_P_Q_lbd2}
P_{\lambda_2^{(j)}} = \bm{X}\left(\bm{X}^T\bm{X} + n \lambda_2^{(j)} {\bm I}_p\right)^{-1}\bm{X}^T,\qquad Q_{\lambda_2^{(j)}} = {\bm I}_n - P_{\lambda_2^{(j)}}.
\end{equation}
for any $\lambda_2^{(j)}\ge 0$ such that $P_{\lambda_2^{(j)}}$ exists.
Then $\widehat{\bm{\theta}}^{(j)}$ is the solution of the following problem
\begin{equation}\label{crit_Theta}
\widehat{\bm{\theta}}^{(j)} = \arg\min_{\bm{\theta}\in \mathbb{R}^p} {1\over n}\left\|Q_{\lambda_2^{(j)}}^{1/2}(\bm{Y}_j - \bm{X}\bm{\theta})
\right\|_2^2 + \lambda_1^{(j)} \|\bm{\theta}\|_{1},
\end{equation}
and $\widehat{\bm{\delta}}^{(j)}=(\bm{X}^T\bm{X} + n \lambda_2^{(j)} {\bm I}_p)^{-1}\bm{X}^T(\bm{Y}_j - \bm{X}\widehat{\bm{\theta}}^{(j)})$, where $Q_{\lambda_2^{(j)}}^{1/2}$ is the principal matrix square root of $Q_{\lambda_2^{(j)}}$. Moreover, we have
\begin{equation}\label{fit}
\bm{X}\widehat \bm{F}_j = \bm{X}\left(\widehat{\bm{\theta}}^{(j)} + \widehat{\bm{\delta}}^{(j)}\right) = P_{\lambda_2^{(j)}}\bm{Y}_j + Q_{\lambda_2^{(j)}}\bm{X}\widehat{\bm{\theta}}^{(j)}.
\end{equation}
\end{lemma}
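The characterization above translates directly into a two-step computation; a minimal sketch (Python, using a generic lasso solver; illustrative names) is given below.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

def fit_XF_j(X, y_j, lambda1, lambda2):
    """Sketch of the solution characterized in the lemma above (single response).

    Returns theta_hat, delta_hat and the fitted values X F_hat_j.
    """
    n, p = X.shape
    P = X @ np.linalg.solve(X.T @ X + n * lambda2 * np.eye(p), X.T)   # P_{lambda_2}
    Q = np.eye(n) - P                                                 # Q_{lambda_2}
    # principal square root of the symmetric PSD matrix Q
    w, V = np.linalg.eigh(Q)
    Q_half = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
    # lasso on transformed data; sklearn minimizes (1/2n)||.||_2^2 + alpha ||.||_1,
    # so alpha = lambda1 / 2 matches (1/n)||.||_2^2 + lambda1 ||.||_1
    lasso = Lasso(alpha=lambda1 / 2, fit_intercept=False, max_iter=10000)
    theta_hat = lasso.fit(Q_half @ X, Q_half @ y_j).coef_
    delta_hat = np.linalg.solve(X.T @ X + n * lambda2 * np.eye(p),
                                X.T @ (y_j - X @ theta_hat))
    XF_hat = P @ y_j + Q @ (X @ theta_hat)   # fitted values as in the last display
    return theta_hat, delta_hat, XF_hat
\end{verbatim}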
To analyze $\widehat\bm{F}_j$, we first introduce the Restricted Eigenvalue (RE) constant \citep{bickel2009}. For some given constant $\alpha \ge 1$ and integer $1\le s\le p$, define
\begin{equation}\label{RE_X}
\kappa(s, \alpha) = \min_{S\subseteq [p],|S|\le s}~\min_{\Delta \in \mathcal{C}(S, \alpha)}{\|\bm{X} \Delta\|_2\over \sqrt{n}\|\Delta_{S}\|_2},
\end{equation}
where $\mathcal{C}(S, \alpha) := \{\Delta\in \mathbb{R}^{p} \setminus \{\b0\}: \alpha\|\Delta_{S}\|_{1}\ge \|\Delta_{S^c}\|_{1}\}$. For $1\le j\le m$ and the $j$th response regression, define
\begin{equation}\label{def_V_eps}
\sigma_j^2 = \gamma_w^2 \bm{B}_j^T\Sigma_W \bm{B}_j + \gamma_e^2 \sigma_{E_j}^2
\end{equation}
where $\gamma_w$ and $\gamma_e$ are the sub-Gaussian constants defined in Assumption \ref{ass_error} and $\sigma_{E_j}^2 = [\Sigma_E]_{jj}$. Write $M^{(j)} = n^{-1}\bm{X}^T Q_{\lambda_2^{(j)}}^2\bm{X}$ with $Q_{\lambda_2^{(j)}}$ defined in (\ref{def_P_Q_lbd2}). Recall that $\widehat\Sigma = n^{-1}\bm{X}^T\bm{X}$ and its eigenvalues are $\Lambda_1\ge \Lambda_2 \ge \cdots \ge \Lambda_q>0$ with $q = \textrm{rank}(\bm{X})$. Further recall that $s_n$ is defined in (\ref{def_space_Theta}).
The following theorem provides the $\ell_2$ convergence rate of $\bm{X}\widehat\bm{F}_j - \bm{X}\bm{F}_j$ uniformly over $1\le j\le m$.
\begin{theorem}\label{thm_pred}
Under Assumption \ref{ass_error}, assume $\kappa(s_n,4)>0$ and choose
\begin{equation}\label{rate_lbd1}
\lambda_{1}^{(j)} = 4\sigma_j\sqrt{6\max_{1\le i\le p}M_{ii}^{(j)}}\sqrt{\log (p\vee m) \over n}
\end{equation}
and any $\lambda_2^{(j)} \ge 0$
in (\ref{def_est_F_j}) such that $P_{\lambda_2^{(j)}}$ exists.
With probability $1-2(p\vee m)^{-1} - m^{-1}$,
\begin{align*}
{1\over n}\left\|\bm{X} \widehat \bm{F}_j - \bm{X} \bm{F}_j\right\|_2^2
&\lesssim \inf_{\substack{(\bm{\theta}_0, \bm{\delta}_0):\\
\bm{\theta}_0 + \bm{\delta}_0 = \bm{F}_j}} \Bigl[Rem_{1,j} + Rem_{2,j}(\bm{\delta}_0) + Rem_{3,j}(\bm{\theta}_0)\Bigr]
\end{align*}
holds uniformly over $1\le j\le m$,
where
\begin{align*}
&Rem_{1,j} = \left({\rm tr}\left(P_{\lambda_2^{(j)}}^2\right) + \left\|P_{\lambda_2^{(j)}}^2\right\|_{{\rm op}}\log m \right){\sigma_j^2 \over n}\\
&Rem_{2,j}(\bm{\delta}_0) = \lambda_2^{(j)}~ \bm{\delta}_0^T \widehat \Sigma(\widehat \Sigma + \lambda_2^{(j)} {\bm I}_p)^{-1}\bm{\delta}_0\\
&Rem_{3,j}(\bm{\theta}_0) =
{\lambda_2^{(j)}(\Lambda_1 + \lambda_2^{(j)}) \over (\Lambda_q+\lambda_2^{(j)})^2}\left(\max_{1\le i\le p}\widehat\Sigma_{ii}\right) {s_0\log (p\vee m) \over \kappa^2(s_n,4)}{\sigma_j^2\over n}.
\end{align*}
\end{theorem}
\begin{proof}
Theorem \ref{thm_pred} can be proved by using the line of arguments in the proof of Theorem 4 in \cite{bing2020adaptive} except for working on the following event
\begin{equation}\begin{aligned}\label{def_event_lbd}
\mathcal{E} := \bigcap_{i=1}^p\bigcap_{j=1}^m\left\{
\left|
\bm{X}_i^T Q_{\lambda_2^{(j)}} \bm{\epsilon}_j
\right| \le {n\over 4}\lambda_1^{(j)}
\right\}
\end{aligned}\end{equation}
with $\lambda_1^{(j)}$ defined in (\ref{rate_lbd1}). To establish $\mathbb{P}(\mathcal{E})$, pick any $1\le i\le p$ and $1\le j\le m$.
We first note that, by the independence of $\bm{\epsilon}_{tj}$ for $1\le t\le n$,
$\bm{\epsilon}_j^TQ_{\lambda_2^{(j)}}\bm{X}_i$ is sub-Gaussian with sub-Gaussian parameter
\[
\sigma_j\sqrt{\bm{X}_i^TQ_{\lambda_2^{(j)}}^2 \bm{X}_i} = \sigma_j\sqrt{nM_{ii}^{(j)}}.
\]
Thus, the basic tail inequality for sub-Gaussian random variables yields
\[
\mathbb{P}\left\{
\left|\bm{X}_i^T Q_{\lambda_2^{(j)}} \bm{\epsilon}_j\right| > t\sigma_j\sqrt{nM_{ii}^{(j)}}
\right\}\le 2e^{-t^2/2},\quad \text{for all }t\ge0.
\]
Choose $t = \sqrt{6\log(p\vee m)}$ and take the union bound over $1\le i\le p$ and $1\le j\le m$ to obtain
$\mathbb{P}(\mathcal{E}) \ge 1 - 2(p\vee m)^{-1}.$
\end{proof}
We remark that Theorem \ref{thm_pred} in particular holds for the true $\bm{\theta}_j = \bm{\Theta}_j$ and $\bm{\delta}_j = \bm{A}\bm{B}_j$, for $1\le j\le m$, whenever they are identifiable.
\section{Main proofs}
\subsection{Proof of Theorem \ref{thm_ident}: identifiability}
From model (\ref{model_linear}) and noting that $\text{Cov}(X, \varepsilon) = \b0$, $\bm{\Theta} + \bm{A}\bm{B}$ can be identified from $[\text{Cov}(X)]^{-1}\text{Cov}(X,Y)$, and so is $\Sigma_{\epsilon}$. Let $\bm{U}_K\in\mathbb{R}^{m\times K}$ denote the first $K$ eigenvectors of $\Sigma_{\epsilon}$. An application of the Davis-Kahan theorem yields
\[
\|\bm{U}_K\bm{U}_K^T - P_B\|_{{\rm op}} \le {\sqrt{2}\|\Sigma_E\|_{{\rm op}} \over \lambda_K(\bm{B}^T\Sigma_W\bm{B})} = o(1)
\]
under condition (\ref{ident_conds}). Thus, $P_B^{\perp}$ is recovered asymptotically and so is $\bm{\Theta} P_B^{\perp} = (\bm{\Theta} + \bm{A}\bm{B})P_B^{\perp}$. Finally, for each $1\le i\le p$ and $1\le j\le m$, since under condition (\ref{cond_ident_Theta}),
\begin{equation}\begin{aligned}\label{bd_bias}
|\bm{\Theta}_{i\cdot}^TP_B\bm{e}_j| =~ & \left|\bm{\Theta}_{i\cdot}^T\bm{B}^T\Sigma_W^{1/2}\left(\Sigma_W^{1/2}\bm{B}\bm{B}^T\Sigma_W^{1/2}\right)^{-1}\Sigma_W^{1/2}\bm{B}\bm{e}_j\right|\\
\leq ~ &\norm{\bm{\Theta}_{i\cdot}}_1\left\|\bm{B}^T\Sigma_W^{1/2}\left(\Sigma_W^{1/2}\bm{B}\bm{B}^T\Sigma_W^{1/2}\right)^{-1}\right\|_{\infty,2}\left\|\Sigma_W^{1/2}\bm{B}\bm{e}_j\right\|_2\\
\leq ~ &\norm{\bm{\Theta}_{i\cdot}}_1\max_{1\leq \ell \leq m}\norm{\Sigma_W^{1/2}\bm{B}_\ell}_2[\lambda_{K}(\bm{B}^T \Sigma_W \bm{B})]^{-1}\norm{\Sigma_W^{1/2}\bm{B}_j}_2\\
=~&\mathcal{O}\left({\norm{\bm{\Theta}_{i\cdot}}_1\over m}\right),
\end{aligned}\end{equation}
we conclude that
\[
\bm{\Theta}_{ij} = [\bm{\Theta} P_B^{\perp}]_{ij} + [\bm{\Theta} P_B]_{ij} = [\bm{\Theta} P_B^{\perp}]_{ij} + o(1).
\]
This completes the proof. \qed
\subsection{Proof of Theorem \ref{thm_rates_B}: The uniform convergence rate of $\widehat B_j$}\label{app_proof_thm_B}
Recall from (\ref{def_svd_epsilon}) that
\[
{1\over nm}\widehat\bm{e}psilon^T\widehat\bm{e}psilon = \bm{V}\bm{D}^2\bm{V}^T.
\]
We work on the intersection of the events
\bm{e}gin{align}\label{def_event_F}
\mathcal{E}_F &:= \left\{
\max_{1\le j\le m}{1\over n}\|\bm{X}\widehat\bm{F}_j- \bm{X}\bm{F}_j\|_2^2 \lesssim r_n \right\},\\\label{def_event_D}
\mathcal{E}_D &:= \left\{
\sqrt{c_Wc_B} \lesssim \lambda_K(\bm{D}_K)\le \lambda_1(\bm{D}_K) \lesssim
\sqrt{C_WC_B}
\right\},
\end{align}
with $r_n$ defined in Assumption \ref{ass_initial} and $c_B, C_B, c_W, C_W$ defined in Assumption \ref{ass_B_Sigma}.
Lemma \ref{lem_D_K} and Assumption \ref{ass_initial} guarantee that $\lim_{n\to\infty}\mathbb{P}(\mathcal{E}_F\cap \mathcal{E}_D) = 1$.
By (\ref{def_est_BW}), observe that
\bm{e}gin{align*}
{1\over nm}\widehat\bm{e}psilon^T\widehat\bm{e}psilon \widehat\bm{B}^T = \bm{V}\bm{D}^2\bm{V}^T\sqrt{m}\bm{V}_K\bm{D}_K = \widehat\bm{B}^T\bm{D}_K^2.
\end{align*}
Plugging
\bm{e}gin{equation}\label{def_bDelta}
\widehat\bm{e}psilon = \bm{Y} - \bm{X}\widehat\bm{F} = \bm{e}psilon + \underbrace{\bm{X}\bm{F} - \bm{X}\widehat\bm{F}}_{\bm{D}elta}
\end{equation}
into the above display yields
\bm{e}gin{align*}
{1\over nm}\left(
\bm{e}psilon^T\bm{e}psilon + \bm{e}psilon^T\bm{D}elta + \bm{D}elta^T\bm{e}psilon + \bm{D}elta^T\bm{D}elta
\right) \widehat \bm{B}^T \bm{D}_K^{-2} = \widehat\bm{B}^T.
\end{align*}
Since
\[
{1\over nm}
\bm{e}psilon^T\bm{e}psilon = {1\over nm}\left(
\bm{B}^T\bm{W}^T\bm{W}\bm{B} + \bm{B}^T\bm{W}^T\bm{E} + \bm{E}^T\bm{W}\bm{B} + \bm{E}^T\bm{E}
\right),
\]
using the definition in (\ref{def_H0}) gives
\bm{e}gin{align}\label{display_B_hat_BH}\nonumber
&\widehat \bm{B}^T - \bm{B}^T\bm{H}_0^T\\
&= {1\over nm}\left(
\bm{B}^T\bm{W}^T\bm{E} + \bm{E}^T\bm{W}\bm{B} + \bm{E}^T\bm{E} + \bm{e}psilon^T\bm{D}elta + \bm{D}elta^T\bm{e}psilon + \bm{D}elta^T\bm{D}elta
\right) \widehat \bm{B}^T \bm{D}_K^{-2}\\\nonumber
&= {1\over n\sqrt m}\left(
\bm{B}^T\bm{W}^T\bm{E} + \bm{E}^T\bm{W}\bm{B} + \bm{E}^T\bm{E} + \bm{e}psilon^T\bm{D}elta + \bm{D}elta^T\bm{e}psilon + \bm{D}elta^T\bm{D}elta
\right)\bm{V}_K\bm{D}_K^{-1},
\end{align}
where we used (\ref{def_est_BW}) in the last step. Pick any $1\le j\le m$ and multiply both sides of the above display by $\bm{e}_j$. We proceed to bound each of the corresponding terms on the right-hand side.
First, invoking Lemma \ref{lem_quad_terms} and $\mathcal{E}_D$ gives
\bm{e}gin{align*}
\left\|\bm{e}_j^T\bm{B}^T\bm{W}^T\bm{E}\bm{V}_K\bm{D}_K^{-1}\right\|_2 \lesssim \|\bm{B}_j^T\bm{W}^T\bm{E}\|_2 \lesssim \sqrt{nm\log m}
\end{align*}
with probability at least $1-8m^{-1}$. Similarly, we obtain
\bm{e}gin{align*}
{1\over n\sqrt m}\left\|\bm{e}_j^T\left(
\bm{B}^T\bm{W}^T\bm{E} + \bm{E}^T\bm{W}\bm{B} + \bm{E}^T\bm{E}
\right)\bm{V}_K\bm{D}_K^{-1}\right\|_2 \lesssim \sqrt{\log m \over n\wedge m}.
\end{align*}
On the other hand, Lemma \ref{lem_quad_terms_Delta} together with Assumption \ref{ass_initial} ensures that, with probability $1- 8m^{-1}$,
\bm{e}gin{align}\label{bd_quad_Deltas}
&{1\over n\sqrt m}\left\|\bm{e}_j^T\left(
\bm{e}psilon^T\bm{D}elta + \bm{D}elta^T\bm{e}psilon + \bm{D}elta^T\bm{D}elta
\right)\bm{V}_K\bm{D}_K^{-1}\right\|_2\\\nonumber
& \lesssim ~ \sqrt{r_n}\sqrt{Rem_{1,j} + Rem_{2,j}(\bm{d}elta_j) + Rem_{3,j}(\bm{t}heta_j)} + r_{n,1} + \sqrt{r_{n,2}\log (m) \over n}+ r_{n,3}\sqrt{1\over n}\\\nonumber
& \lesssim ~ r_n
\end{align}
uniformly over $1\le j\le m$. Here, for convenience, we write
\bm{e}gin{align}\label{def_r_n_k}
&r_{n,1} = \max_{1\le j\le m}Rem_{1,j},\quad r_{n,2} = \max_{1\le j\le m}Rem_{2,j}(\bm{d}elta_j),\quad r_{n,3} = \max_{1\le j\le m}Rem_{3,j}(\bm{t}heta_j).
\end{align}
Collecting the previous three displays yields the desired rate. The proof is completed by noting that $m = m(n) \to \infty$, whence the probabilities above tend to one as $n\to \infty$.
\qed
\subsection{Proof of Lemma \ref{thm_Theta_simple_rates}: $\ell_1$ convergence rate of the initial estimator $\widehat\Theta_1$}\label{app_proof_thm_Theta}
Recall $\widehat \Sigma = n^{-1}\bm{X}^T \bm{X}$ and $\kappa(s_n,4)$ is defined in (\ref{RE_X}). Define the following event
\begin{align}\label{def_event_X}
\mathcal{E}_{\bm{X}} := \left\{\kappa(s_n, 4) \ge c,~ \max_{1\le j\le p}\widehat\Sigma_{jj}\le C, ~ {1\over \sqrt n}\|\bm{X}\bm{\Theta}\|_{2,1} \le C' M_n\sqrt{s_n}, ~ {1\over \sqrt n}\|\bm{X}\bm{A}\|_{{\rm op}} \le C' \right\}
\end{align}
for some finite constants $C\ge c>0$ and $C'>0$. Lemma \ref{lem_X} in Appendix \ref{app_lemmas_Theta} proves that $\lim_{n\to \infty}\mathbb{P}(\mathcal{E}_{\bm{X}}) = 1$ under the conditions of Theorem \ref{thm_Theta_simple_rates}.
Recall $r_n$ from Assumption \ref{ass_initial}. Define
\bm{e}gin{equation}\label{def_eta_bar}
\eta_{n} = \sqrt{\log m \over n\wedge m} + r_n.
\end{equation}
Further recall $\widetilde\bm{B}$ and $\bm{H}_0$ are defined in (\ref{def_B_tilde}) and (\ref{def_H0}). We work on the event
\begin{equation}\label{def_event_misc}
\mathcal{E}_{\bm{X}} \cap \left\{
\|(\widetilde \bm{B} - \widehat\bm{B})\widehat P_B^{\perp}\bm{e}_1\|_2 \lesssim \eta_n
\right\} \cap \left\{
\|(\widehat P_B-P_B)\bm{e}_1\|_\infty \lesssim {\eta_n \over m}
\right\}\cap \left\{
\lambda_K(\bm{H}_0) \gtrsim c_H
\right\}
\end{equation}
which, according to Lemmas \ref{lem_X}, \ref{lemma_technical} and \ref{lemma_PB_error}, holds with probability tending to one.
Recall that $\bar \bm{\Theta}_1 = \bm{\Theta} P_{B}^{\perp} \bm{e}_1$.
Starting from the basic inequality
\[
\frac{1}{n}\norm{\widetilde\bm{y} - \bm{X}\widehat \bm{\Theta}_1}_2^2 + \lambda_3\norm{\widehat \bm{\Theta}_1}_1 \le \frac{1}{n}\norm{\widetilde\bm{y} - \bm{X}\bar \bm{\Theta}_1}_2^2 + \lambda_3\norm{\bar \bm{\Theta}_1}_1,
\]
and working out the squares, we obtain
\begin{align*}
{1\over n}\left\|
\bm{X} (\widehat\bm{\Theta}_1 - \bar \bm{\Theta}_1)
\right\|_2^2 \le {2\over n}\left|
\left\langle \bm{X} (\widehat\bm{\Theta}_1 - \bar \bm{\Theta}_1), \widetilde \bm{y} - \bm{X} \bar \bm{\Theta}_1
\right\rangle\right|
+\lambda_3\norm{\bar \bm{\Theta}_1}_1 - \lambda_3\norm{\widehat \bm{\Theta}_1}_1.
\end{align*}
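For the reader's convenience, the expansion behind this step is elementary (a routine calculation; no new ingredients are used):
\begin{align*}
{1\over n}\|\widetilde\bm{y} - \bm{X}\widehat\bm{\Theta}_1\|_2^2
&= {1\over n}\left\|\widetilde\bm{y} - \bm{X}\bar\bm{\Theta}_1 - \bm{X}(\widehat\bm{\Theta}_1-\bar\bm{\Theta}_1)\right\|_2^2\\
&= {1\over n}\|\widetilde\bm{y} - \bm{X}\bar\bm{\Theta}_1\|_2^2
- {2\over n}\left\langle \bm{X}(\widehat\bm{\Theta}_1-\bar\bm{\Theta}_1),\, \widetilde\bm{y} - \bm{X}\bar\bm{\Theta}_1\right\rangle
+ {1\over n}\left\|\bm{X}(\widehat\bm{\Theta}_1-\bar\bm{\Theta}_1)\right\|_2^2;
\end{align*}
substituting this into the basic inequality, cancelling the common term ${1\over n}\|\widetilde\bm{y} - \bm{X}\bar\bm{\Theta}_1\|_2^2$, and bounding the inner product by its absolute value gives the display above.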
By noting that
\begin{align*}
\widetilde\bm{y} - \bm{X} \bar \bm{\Theta}_1 & = \left[\bm{X}(\bm{\Theta} + \bm{A}\bm{B}) + \bm{W}\bm{B} + \bm{E}\right] \widehat P_B^{\perp}\bm{e}_1 - \bm{X} \bm{\Theta} P_B^{\perp}\bm{e}_1\\
&= \bm{X}\bm{A}\bm{B}\widehat P_B^{\perp}\bm{e}_1+ \bm{W}\bm{B}\widehat P_B^{\perp}\bm{e}_1 + \bm{E} \widehat P_B^{\perp}\bm{e}_1 + \bm{X} \bm{\Theta}(\widehat P_B^{\perp}-P_{B}^{\perp}) \bm{e}_1
\end{align*}
and by writing $\bm{\Delta} = \widehat\bm{\Theta}_1 - \bar\bm{\Theta}_1$,
we have
\begin{align*}
{2\over n}\left|
\left\langle \bm{X} \bm{\Delta}, \widetilde \bm{y} - \bm{X} \bar \bm{\Theta}_1
\right\rangle\right| &\le {2\over n}\left| \bm{e}_1^T \widehat P_B^{\perp}\bm{E}^T \bm{X} \bm{\Delta}\right|+ {2\over n}\left\|\bm{X}\bm{\Delta}\right\|_2 Rem\\
&\le {2\over n}\left\| \bm{e}_1^T \widehat P_B^{\perp}\bm{E}^T \bm{X}\right\|_\infty \|\bm{\Delta}\|_1 + {2\over n}\left\|\bm{X}\bm{\Delta}\right\|_2 Rem,
\end{align*}
where
\[
Rem = {1\over \sqrt n}\left\|
\bm{X}\bm{A}\bm{B}\widehat P_B^{\perp}\bm{e}_1+ \bm{W}\bm{B}\widehat P_B^{\perp}\bm{e}_1 +\bm{X} \bm{T}heta (P_B - \widehat P_B) \bm{e}_1\right\|_2.
\]
Provided that
\begin{equation}\label{bd_lbd3}
\left\| \bm{e}_1^T \widehat P_B^{\perp}\bm{E}^T \bm{X}\right\|_\infty \le {n\over 4}\lambda_3,
\end{equation}
and since $\|\bar\bm{\Theta}_1\|_0 \le s_n$,
using $\norm{\bar\bm{\Theta}_1}_1 - \norm{\widehat \bm{\Theta}_1}_1 \le \|\bm{\Delta}_S\|_1 - \|\bm{\Delta}_{S^c}\|_1$ with $S := \supp(\bar\bm{\Theta}_1)$ and $|S| \le s_n$ gives
\begin{align*}
{1\over n}\left\|
\bm{X} \bm{\Delta}
\right\|_2^2 \le {2\over n}\left\|\bm{X}\bm{\Delta}\right\|_2 Rem + {3 \over 2}\lambda_3\|\bm{\Delta}_S\|_1 - {1 \over 2}\lambda_3\|\bm{\Delta}_{S^c}\|_1.
\end{align*}
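Spelled out, the bookkeeping of the constants is as follows (a routine lasso-type argument; it only uses (\ref{bd_lbd3}) and the sparsity of $\bar\bm{\Theta}_1$):
\begin{align*}
{2\over n}\left\| \bm{e}_1^T \widehat P_B^{\perp}\bm{E}^T \bm{X}\right\|_\infty \|\bm{\Delta}\|_1
&\le {\lambda_3\over 2}\left(\|\bm{\Delta}_S\|_1 + \|\bm{\Delta}_{S^c}\|_1\right),\\
\lambda_3\left(\norm{\bar\bm{\Theta}_1}_1 - \norm{\widehat \bm{\Theta}_1}_1\right)
&\le \lambda_3\left(\|\bm{\Delta}_S\|_1 - \|\bm{\Delta}_{S^c}\|_1\right);
\end{align*}
adding the two lines produces the coefficients ${3\over 2}\lambda_3$ and $-{1\over 2}\lambda_3$ in the display above.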
We now bound $Rem$ from above. Recalling that $\widetilde\bm{B} = \bm{H}_0 \bm{B}$,
\bm{e}gin{align*}
{1\over \sqrt n}\left\|
\bm{X}\bm{A}\bm{B}\widehat P_B^{\perp}\bm{e}_1\right\|_2 &= {1\over \sqrt n}\left\|
\bm{X}\bm{A}\bm{H}_0^{-1}(\widetilde\bm{B} - \widehat\bm{B})\widehat P_B^{\perp}\bm{e}_1\right\|_2\\
&\le {1\over \sqrt n}\left\|
\bm{X}\bm{A}\bm{H}_0^{-1}\right\|_{{\rm op}}\left\|(\widetilde\bm{B}-\widehat\bm{B})\widehat P_B^{\perp}\bm{e}_1\right\|_2\\
&\lesssim {1\over \sqrt n}\left\|
\bm{X}\bm{A}\right\|_{{\rm op}}\eta_n & \textrm{by }(\ref{def_event_misc})\\
&\lesssim \eta_n & \textrm{by }(\ref{def_event_X}).
\end{align*}
By (\ref{def_event_misc}), we also have
\bm{e}gin{align*}
{1\over \sqrt n}\left\|\bm{X} \bm{T}heta (\widehat P_B -P_B)\bm{e}_1\right\|_2 &\le
{1\over \sqrt n}\left\|\bm{X}\bm{T}heta\right\|_{2,1} \left\|(\widehat P_B -P_B)\bm{e}_1\right\|_\infty \lesssim {M_n\sqrt{s_n} \over m}\eta_n.
\end{align*}
Together with Lemma \ref{lem_W_eigens}, we also have
\bm{e}gin{align*}
{1\over \sqrt n}\left\|
\bm{W}\bm{B}\widehat P_B^{\perp}\bm{e}_1\right\|_2 & \lesssim
{1\over \sqrt n}\left\|
\bm{W}\right\|_{{\rm op}}\left\|(\widetilde \bm{B} - \widehat\bm{B})\widehat P_B^{\perp}\bm{e}_1\right\|_2 \lesssim \eta_n
\end{align*}
with probability $1 - 2e^{-n}$.
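The first inequality in the previous display hides one small step that may be worth recording (it only uses $\widetilde\bm{B} = \bm{H}_0\bm{B}$, the identity $\widehat\bm{B}\widehat P_B^{\perp} = \bm{0}$, and the lower bound on $\lambda_K(\bm{H}_0)$ from (\ref{def_event_misc})):
\[
\bm{W}\bm{B}\widehat P_B^{\perp}\bm{e}_1
= \bm{W}\bm{H}_0^{-1}\widetilde\bm{B}\,\widehat P_B^{\perp}\bm{e}_1
= \bm{W}\bm{H}_0^{-1}(\widetilde\bm{B} - \widehat\bm{B})\widehat P_B^{\perp}\bm{e}_1,
\qquad
\|\bm{H}_0^{-1}\|_{{\rm op}} \lesssim [\lambda_K(\bm{H}_0)]^{-1} \lesssim c_H^{-1},
\]
so the factor $\|\bm{H}_0^{-1}\|_{{\rm op}}$ is absorbed into the constant hidden in $\lesssim$.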
We thus conclude that with the same probability, on the event (\ref{def_event_misc}),
\[
Rem \lesssim \eta_n \left(1 + {M_n\sqrt{s_n}\over m}\right).
\]
Following the same line of arguments as in the proof of Theorem 6 in \cite{bing2020adaptive}, it is straightforward to show that, on the event (\ref{def_event_misc}) and for any $\lambda_3$ such that (\ref{bd_lbd3}) holds,
\begin{equation}\label{rate_Theta_td}
\|\widehat \bm{\Theta}_1 - \bar\bm{\Theta}_1\|_{1}
\lesssim \max\left\{{\lambda_3}, ~ {(\widetilde \lambda_3)^2 \over \lambda_3 }\right\}{s_n\over \kappa^2(s_n,4)}
\end{equation}
holds with probability $1-2e^{-n}$, where
\begin{equation}\label{def_lbd3_td}
\widetilde\lambda_3 = \eta_n \left(1 + {M_n\sqrt{s_n}\over m}\right) {\kappa(s_n,4) \over \sqrt{s_n}}.
\end{equation}
It remains to show that (\ref{bd_lbd3}) holds with probability tending to one for any
\begin{align}\label{def_event_lbd3}
\lambda_3 \ge \bar\lambda_3 \asymp
\sigma_{E_1}\sqrt{\max_{1\le j\le p} \widehat\Sigma_{jj}}\sqrt{\log p\over n}.
\end{align}
If this holds, then observe that (\ref{def_event_lbd3}), (\ref{rate_Theta_td}) and (\ref{def_lbd3_td}) readily imply
\begin{equation}\begin{aligned}\label{rate_Theta_td_prime}
\|\widehat \bm{\Theta}_1 - \bar\bm{\Theta}_1\|_{1}
&\lesssim (\bar \lambda_3 \vee \widetilde\lambda_3) {s_n \over \kappa^2(s_n,4)}\\
&\lesssim s_n\sqrt{\log p\over n} + \left(\sqrt{s_n} + {M_n s_n\over m}\right)\eta_n
\end{aligned}\end{equation}
by choosing $\lambda_3$ appropriately. The result immediately follows from (\ref{def_eta_bar}).
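For concreteness, one admissible choice of $\lambda_3$ (a routine verification; the specific constant is immaterial) is
\[
\lambda_3 \asymp \bar\lambda_3 \vee \widetilde\lambda_3
\quad\Longrightarrow\quad
\max\left\{\lambda_3,~ {(\widetilde\lambda_3)^2\over \lambda_3}\right\}
\lesssim \bar\lambda_3 \vee \widetilde\lambda_3,
\]
since $(\widetilde\lambda_3)^2/\lambda_3 \le \widetilde\lambda_3 \le \lambda_3$ for this choice, while $\lambda_3 \ge \bar\lambda_3$ so that (\ref{def_event_lbd3}) remains satisfied. Plugging $\bar\lambda_3$ and $\widetilde\lambda_3$ into (\ref{rate_Theta_td}), and using $\kappa(s_n,4)\ge c$ and $\max_{1\le j\le p}\widehat\Sigma_{jj}\le C$ on $\mathcal{E}_{\bm{X}}$ together with $\sigma_{E_1} = \mathcal{O}(1)$, then gives the two terms displayed in (\ref{rate_Theta_td_prime}).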
To prove (\ref{bd_lbd3}) holds for any $\lambda_3\gammae \bm{a}r\lambda_3$, note that
\bm{e}gin{align*}
\left\|\bm{e}_1^T \widehat P_B^{\perp}\bm{E}^T \bm{X}\right\|_\infty &\le \left\| \bm{e}_1^T \bm{E}^T \bm{X}\right\|_\infty + \left\| \bm{e}_1^T \widehat P_B\bm{E}^T \bm{X}\right\|_\infty\\
&\le \left\| \bm{e}_1^T \bm{E}^T \bm{X}\right\|_\infty + \left\| \bm{e}_1^T\widehat P_{B}\right\|_2\left\|\bm{E}^T \bm{X}\right\|_{2,\infty}.
\end{align*}
Since $\bm{E}_1^T\bm{X}_j$ is $\gamma_e\sqrt{n\widehat\Sigma_{jj}[\Sigma_E]_{11}}$ sub-Gaussian, the sub-Gaussian tail bound together with a union bound over $1\le j\le p$ yields
\[
\mathbb{P}\left\{
\left\| \bm{e}_1^T \bm{E}^T \bm{X}\right\|_\infty \le 2\gamma_e\sqrt{n\log p}\sqrt{[\Sigma_E]_{11}\max_{1\le j\le p}\widehat\Sigma_{jj}}
\right\} \ge 1-2p^{-1}.
\]
\]
Furthermore, noting that
\[
\left\|\bm{E}^T \bm{X}\right\|_{2,\infty}^2 = \max_{1\le j\le p}\bm{X}_j^T\bm{E} \Sigma_E^{-1/2}\Sigma_E \Sigma_E^{-1/2}\bm{E}^T\bm{X}_j
\]
and $\bm{X}_j^T\bm{E}\Sigma_E^{-1/2}$ is $\gamma_e\sqrt{n\widehat\Sigma_{jj}}$ sub-Gaussian,
an application of Lemma \ref{lem_quad} with a union bound over $1\le j\le p$ gives
\[
\mathbb{P}\left\{
\left\|\bm{E}^T \bm{X}\right\|_{2,\infty}^2 \le \gamma_e^2 n \max_{1\le j\le p}\widehat\Sigma_{jj}\left(
\sqrt{\tr(\Sigma_E)} + \sqrt{4\|\Sigma_E\|_{{\rm op}} \log p}
\right)^2
\right\} \ge 1-p^{-1}.
\]
\]
By part (E) of Lemma \ref{lemma_technical}, we conclude that
\[
\mathbb{P}\left\{
{1\over n}\left\|\bm{e}_1^T \widehat P_B^{\perp}\bm{E}^T \bm{X}\right\|_\infty
\lesssim \gamma_eC_E\sqrt{\max_{1\le j\le p}\widehat\Sigma_{jj}}\sqrt{\log p\over n}
\right\} \ge 1- 3p^{-1}
\]
where
\[
C_E = \sqrt{[\Sigma_E]_{11}} + \sqrt{\tr(\Sigma_E)\over m\log p} + \sqrt{\|\Sigma_E\|_{{\rm op}}\over m} \lesssim 1.
\]
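The final bound $C_E\lesssim 1$ is immediate; as a quick check (using only that $\|\Sigma_E\|_{{\rm op}}$ is bounded by a constant, which is how we read the assumption on $\Sigma_E$):
\[
[\Sigma_E]_{11} \le \|\Sigma_E\|_{{\rm op}} \lesssim 1,
\qquad
{\tr(\Sigma_E)\over m\log p} \le {m\|\Sigma_E\|_{{\rm op}}\over m\log p} \lesssim {1\over \log p},
\qquad
{\|\Sigma_E\|_{{\rm op}}\over m} \lesssim {1\over m}.
\]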
This completes the proof. \qed
\subsection{Proof of Theorem \ref{thm_asymp_normal}: asymptotic normality of $\widetilde \Theta_{11}$}\label{app_thm_asymp_normal}
Recall that $\bar\bm{\Theta}_1 = \bm{\Theta} P_B^{\perp}\bm{e}_1$ so that $\bar\Theta_{11} = \bm{e}_1^T \bm{\Theta} P_B^{\perp}\bm{e}_1$.
By the definition of $\widetilde{\Theta}_{11}$ and $\bar{\Theta}_{11}$, we have
\begin{equation}\begin{aligned}
\widetilde{\Theta}_{11} - \bar\Theta_{11} & = \widehat{\Theta}_{11} - \bar\Theta_{11}+ \widehat{\bm{\omega}}_1^T\frac{1}{n}\bm{X}^T(\widetilde\bm{y} - \bm{X}\widehat{\bm{\Theta}}_1)\\
& = \underbrace{(\bm{e}_1 - \frac{1}{n}\bm{X}^T\bm{X}\widehat{\bm{\omega}}_1)^T(\widehat{\bm{\Theta}}_1 - \bar \bm{\Theta}_1)}_{I_1} + \widehat{\bm{\omega}}_1^T\frac{1}{n}\bm{X}^T(\widetilde\bm{y} - \bm{X}\bar\bm{\Theta}_1)\\
& = I_1 + \widehat{\bm{\omega}}_1^T\frac{1}{n}\bm{X}^T\left[(\bm{X}(\bm{\Theta} + \bm{A}\bm{B}) + \bm{W} \bm{B} + \bm{E})\widehat{P}_{B}^\perp \bm{e}_1 - \bm{X}\bm{\Theta} P_B^\perp \bm{e}_1\right] \\
& = I_1 + \underbrace{\widehat{\bm{\omega}}_1^T\frac{1}{n}\bm{X}^T\bm{X}\bm{\Theta}(\widehat{P}_B^\perp - P_B^\perp)\bm{e}_1}_{I_2}
+ \underbrace{\widehat{\bm{\omega}}_1^T\frac{1}{n}\bm{X}^T\bm{X} \bm{A}\bm{B} \widehat{P}_B^\perp \bm{e}_1}_{I_3}\\
&~~~~~~~~~~~~~~~~~~+ \underbrace{\widehat{\bm{\omega}}_1^T\frac{1}{n}\bm{X}^T\bm{W} \bm{B} \widehat{P}_B^\perp \bm{e}_1}_{I_4}
+\underbrace{ \widehat{\bm{\omega}}_1^T\frac{1}{n}\bm{X}^T\bm{E} \widehat{P}_B^\perp \bm{e}_1}_{I_5}\\
&= I_1 + I_2 + I_3 + I_4 + I_5.
\end{aligned}\end{equation}
In what follows, we bound $I_1$ through $I_5$ in turn. For simplicity, define
\bm{e}gin{equation}\label{def_xi}
\xi_n = s_n\sqrt{\log p\over n} + \left({s_nM_n\over m} + \sqrt{s_n}\right)\left(\sqrt{\log m \over n} + r_n\right)
\end{equation}
such that $\|\widehat\bm{T}heta_1 - \bm{a}r\bm{T}heta_1\|_1 = \mathcal{O}_{\mathbb{P}}(\xi_n)$ from Theorem \ref{thm_Theta_simple_rates}.
\begin{itemize}
\item For $I_1$, the KKT condition of (\ref{formula_nodewise}) implies that \citep{vandegeer2014}
\[
\left\|\frac{1}{n}\bm{X}^T\bm{X} \widehat{\bm{\omega}}_1 - \bm{e}_1\right\|_\infty\leq \frac{\widetilde{\lambda}}{2\widehat\tau_1^2},
\]
which, together with Lemma \ref{lemma_nodewise} and Theorem \ref{thm_Theta_simple_rates}, yields
\begin{equation}\begin{aligned}\label{bd_I1}
|I_1| \leq \norm{\widehat{\bm{\Theta}}_1 - \bar\bm{\Theta}_1}_1\norm{\bm{e}_1 - \frac{1}{n}\bm{X}^T\bm{X}\widehat{\bm{\omega}}_1}_{\infty} = \mathcal{O}_{\mathbb{P}}\left(\xi_n \sqrt{\log p\over n}\right).
\end{aligned}\end{equation}
\item For $I_2$, direct calculation gives us
\bm{e}q\nonumber
I_2 &= (\bm{e}_1 - \frac{1}{n}\bm{X}^T\bm{X} \widehat{\bm{o}mega}_1)^T \bm{T}heta( P_B - \widehat{P}_B)\bm{e}_1 + \bm{T}heta_{1\cdot}^T(P_B - \widehat{P}_B)\bm{e}_1\\
&= I_{21} + I_{22}.
\end{aligned}\end{equation}
Recall that $\eta_n$ is defined in (\ref{def_eta_bar}). We have
\bm{e}q \nonumber
I_{21}\leq \norm{\bm{e}_1 - \frac{1}{n}\bm{X}^T\bm{X}\widehat{\bm{o}mega}_1}_\inftynfty
\norm{\bm{T}heta}_{1,1}\norm{(P_B - \widehat{P}_B)\bm{e}_1}_\inftynfty = \mathcal{O}_{\mathbb{P}}\left( \frac{s_n M_n\eta_n}{m}\sqrt{\log p\over n}\right),
\end{aligned}\end{equation}
where the last step follows from Lemma \ref{lemma_PB_error}, Lemma \ref{lemma_nodewise} and $\|\bm{T}heta\|_{1,1} \le s_n\|\bm{T}heta\|_{\infty,1} \le s_nM_n$ from (\ref{def_space_Theta}).
Similarly, we can show that
\bm{e}q \nonumber
|I_{22}| \leq \norm{\bm{T}heta_{1\cdot}}_1 \norm{(P_B - \widehat{P}_B)\bm{e}_1}_\inftynfty =
\mathcal{O}_{\mathbb{P}}\left(\frac{M_n\eta_n}{m}\right),
\end{aligned}\end{equation}
and therefore
\bm{e}q\label{bd_I2}
|I_2| = \mathcal{O}_{\mathbb{P}}\left(\left(1 + s_n\sqrt{\log p\over n}\right)\frac{M_n\eta_n}{m}\right) = \mathcal{O}_{\mathbb{P}}\left(\frac{M_n\eta_n}{m}\right).
\end{aligned}\end{equation}
\item For $I_3$, recall from (\ref{def_H0}) and (\ref{def_B_tilde}) that
$\bm{A}\bm{B} = \widetilde\bm{A}\widetilde\bm{B}:=(\bm{A}\bm{H}_0^{-1})(\bm{H}_0\bm{B})$ on the event
\[
\mathcal{E}_{H} = \left\{
c_H\lesssim \lambda_K(\bm{H}_0) \le \lambda_1(\bm{H}_0) \lesssim C_H
\right\}
\]
with $c_H$ and $C_H$ defined in Lemma \ref{lemma_technical}. On the event $\mathcal{E}_H$, we obtain
\bm{e}q\nonumber
|I_3| &= |\widehat{\bm{o}mega}_1^T\frac{1}{n}\bm{X}^T\bm{X} \widetilde\bm{A} \widetilde \bm{B} \widehat{P}_B^\perp \bm{e}_1|\\
&\leq \norm{\widehat{\bm{\omega}}_1^T\frac{1}{n}\bm{X}^T\bm{X} \widetilde\bm{A}}_2\norm{(\widetilde \bm{B} - \widehat\bm{B})\widehat{P}_B^\perp \bm{e}_1}_2 & \textrm{by }\widehat{\bm{B}}\widehat{P}_B^\perp = \bm{0}\\
&\lesssim c_H^{-1}\norm{\widehat{\bm{o}mega}_1^T\frac{1}{n}\bm{X}^T\bm{X} \bm{A}}_2\norm{(\widetilde \bm{B} - \widehat\bm{B})\widehat{P}_B^\perp \bm{e}_1}_2.
\end{aligned}\end{equation}
Notice that $\lim_{n\to\infty}\mathbb{P}(\mathcal{E}_H) = 1$ and $\norm{(\widetilde \bm{B} - \widehat\bm{B})\widehat{P}_B^\perp \bm{e}_1}_2 = \mathcal{O}_{\mathbb{P}}(\eta_n)$ from parts (A) and (D) of Lemma \ref{lemma_technical}, respectively. We bound $\norm{\widehat{\bm{\omega}}_1^T\frac{1}{n}\bm{X}^T\bm{X} \bm{A}}_2$ from above as
\bm{e}q\nonumber
\norm{\widehat{\bm{o}mega}_1^T\frac{1}{n}\bm{X}^T\bm{X} \bm{A}}_2
&\leq\norm{(\bm{e}_1 - \frac{1}{n}\bm{X}^T\bm{X}\widehat{\bm{o}mega}_1)^T\bm{A}}_2 + \norm{\bm{A}_{1\cdot}}_2\\
& = \mathcal{O}_{\mathbb{P}}\left(\sqrt{s_\Omega\log p \over n}\right) + \norm{\bm{A}_{1\cdot}}_2
\end{aligned}\end{equation}
where the last step uses Lemma \ref{lemma_nodewise_A}. We thus conclude
\bm{e}q\label{bd_I3}
|I_3| = \mathcal{O}_{\mathbb{P}}\left(\eta_n\sqrt{s_\Omega \log p\over n} + \eta_n\norm{\bm{A}_{1\cdot}}_2\right).
\end{aligned}\end{equation}
\item For $I_4$, on the event $\mathcal{E}_H$ and by writing $\widetilde\bm{W} = \bm{W}\bm{H}_0^{-1}$,
\bm{e}q\nonumber
|I_4| \leq \norm{\widehat{\bm{o}mega}_1^T\frac{1}{n}\bm{X}^T\widetilde\bm{W}}_2\norm{(\widetilde \bm{B} - \widehat\bm{B})\widehat{P}_B^\perp \bm{e}_1}_2 \lesssim c_H^{-1}\norm{\widehat{\bm{o}mega}_1^T\frac{1}{n}\bm{X}^T\bm{W}}_2\mathcal{O}_{\mathbb{P}}(\eta_n).
\end{aligned}\end{equation}
Note that, conditioning on $\bm{X}$, $\widehat{\bm{\omega}}_1^T\bm{X}^T\bm{W}\Sigma_W^{-1/2}\in\mathbb{R}^K$ is a $\gamma_w\sqrt{\widehat\bm{\omega}_1^T\bm{X}^T\bm{X}\widehat\bm{\omega}_1}$ sub-Gaussian random vector. An application of Lemma \ref{lem_quad} yields, for all $t>0$,
\bm{e}gin{align*}
\mathbb{P}\left\{
\norm{\widehat{\bm{o}mega}_1^T\bm{X}^T\bm{W}}_2^2 > \gammaamma_w^2(\widehat\bm{o}mega_1^T\bm{X}^T\bm{X}\widehat\bm{o}mega_1)\left(
\sqrt{\mathsf{T}r(\Sigma_W)} + \sqrt{2\|\Sigma_W\|_{{\rm op}}t}
\right)^2
\right\} \le e^{-t}.
\end{align*}
Note that
\bm{e}q\label{bd_omegaXXomega}
{1\over n}\widehat\bm{o}mega_1^T\bm{X}^T\bm{X}\widehat\bm{o}mega_1 &\le \Omega_{11} + \left|\widehat\bm{o}mega_1^T {1\over n}\bm{X}^T\bm{X} \widehat\bm{o}mega_1 - \Omega_{11}\right|\\
&=\mathcal{O}_{\mathbb{P}}\left(
\Omega_{11} + \sqrt{s_\Omega \log p\over n}
\right) & \mathsf{T}extrm{ by Lemma \ref{lemma_nodewise}}\\
& = \mathcal{O}_{\mathbb{P}}(\Omega_{11})
\end{aligned}\end{equation}
by using $s_\Omega \log p = o(n)$ and $\Omega_{11} \gammae \Sigma_{11}^{-1} \gammae C^{-1}$ from Assumption \ref{ass_X}. By also noting that
\bm{e}q\label{bd_Omega_11}
\Omega_{11} \le {1\over \lambda_{\min}(\Sigma)} = \mathcal{O}(1)
\end{aligned}\end{equation}
from Assumption \ref{ass_X}, from $\mathsf{T}r(\Sigma_W) \le K\|\Sigma_W\|_{{\rm op}} = \mathcal{O}(1)$ and (\ref{bd_omegaXXomega}), we conclude
\bm{e}q\nonumber
\left\|\widehat{\bm{o}mega}_1^T\frac{1}{n}\bm{X}^T\bm{W}\right\|_2 = \mathcal{O}_{\mathbb{P}}\left(
1/\sqrt{n}
\right).
\end{aligned}\end{equation}
Hence
\bm{e}q\label{bd_I4}
I_4 =\mathcal{O}_{\mathbb{P}}\left(
\eta_n \over \sqrt{n}
\right).
\end{aligned}\end{equation}
\item For $I_5$, by definition
\bm{e}q\nonumber
\widehat{\bm{o}mega}_1^T\frac{1}{n}\bm{X}^T\bm{E} \widehat{P}_B^\perp \bm{e}_1 =&
\widehat{\bm{o}mega}_1^T\frac{1}{n}\bm{X}^T\bm{E} P_B^\perp \bm{e}_1 + \widehat{\bm{o}mega}_1^T\frac{1}{n}\bm{X}^T\bm{E}(P_B - \widehat{P}_B)\bm{e}_1\\
:=& I_{51} + I_{52}.
\end{aligned}\end{equation}
It is easy to see that $\bm{E} P_B^\perp \bm{e}_1 \in \mathbb{R}^n$ is a Gaussian vector with i.i.d.\ entries, with covariance matrix $V_{11}{\bm I}_{n}$, and is independent of $\bm{X}$, where
\[
V_{11} := \bm{e}_1^T P_B^\perp\Sigma_{E}P_B^\perp\bm{e}_1.
\]
This implies that
\bm{e}q\nonumber
\sqrt{n}I_{51} ~ \bmf{i}g | ~ \bm{X} \sim N\left(0, \widehat{\bm{o}mega}_1^T\frac{1}{n}\bm{X}^T\bm{X}\widehat{\bm{o}mega}_1~ V_{11} \right).
\end{aligned}\end{equation}
We further note that
\bm{e}q\label{bd_V11}
V_{11} = [\Sigma_E]_{11} - \bm{e}_1^T P_B\Sigma_E \bm{e}_1-\bm{e}_1^T P_B\Sigma_E P_B^{\perp} \bm{e}_1 = [\Sigma_E]_{11} + \mathcal{O}(1/\sqrt{m})
\end{aligned}\end{equation}
by using $\|P_B\bm{e}_1\|_2 = \mathcal{O}(1/\sqrt m)$ deduced from (\ref{bd_bias}). Hence, also by (\ref{bd_omegaXXomega}) and (\ref{bd_Omega_11}),
\bm{e}q\label{bd_I51}
\sqrt{n}I_{51} = \zeta + o_\mathbb{P}(1)
\end{aligned}\end{equation}
where
\bm{e}q\label{def_zeta}
\zeta | \bm{X} \sim N\left(0, \widehat{\bm{o}mega}_1^T\frac{1}{n}\bm{X}^T\bm{X}\widehat{\bm{o}mega}_1~ [\Sigma_E]_{11} \right).
\end{aligned}\end{equation}
For the second term, we know
\bm{e}q\nonumber
|I_{52}|\leq |\widehat{\bm{o}mega}_1^T\frac{1}{n}\bm{X}^T\bm{E}(P_B - \widehat{P}_B)\bm{e}_1|
&\leq \frac{1}{n}\norm{\bm{E}^T\bm{X}\widehat{\bm{o}mega}_1}_2\norm{(\widehat{P}_B - P_B)\bm{e}_1}_2.
\end{aligned}\end{equation}
Using the same arguments of bounding $\norm{\widehat{\bm{o}mega}_1^T\bm{X}^T\bm{W}}_2$ as above, one can establish that
\bm{e}gin{align*}
\mathbb{P}\left\{
\norm{\widehat{\bm{o}mega}_1^T\bm{X}^T\bm{E}}_2^2 > \gammaamma_e^2(\widehat\bm{o}mega_1^T\bm{X}^T\bm{X}\widehat\bm{o}mega_1)\left(
\sqrt{\mathsf{T}r(\Sigma_E)} + \sqrt{2\|\Sigma_E\|_{{\rm op}}t}
\right)^2
\right\} \le e^{-t},\quad \forall t>0.
\end{align*}
Hence, by $\|\Sigma_E\|_{{\rm op}}=\mathcal{O}(1)$, (\ref{bd_omegaXXomega}) and (\ref{bd_Omega_11}),
\[
\norm{\widehat{\bm{o}mega}_1^T\frac{1}{n}\bm{X}^T\bm{E}}_2 = \mathcal{O}_{\mathbb{P}}\left(
\sqrt{m\over n}
\right).
\]
Finally, invoke Lemma \ref{lemma_PB_error} to obtain
\bm{e}q\label{bd_I52}
|I_{52}| =\mathcal{O}_{\mathbb{P}}\left(\eta_n\over \sqrt n\right).
\end{aligned}\end{equation}
\end{itemize}
Collecting (\ref{bd_I1}), (\ref{bd_I2}), (\ref{bd_I3}), (\ref{bd_I4}), (\ref{bd_I51}) and (\ref{bd_I52}), and using
$$
\bar \Theta_{11} = \Theta_{11} - \bm{\Theta}_{1\cdot}^TP_B\bm{e}_1 \overset{(\ref{bd_bias})}{=} \Theta_{11} + \mathcal{O}(M_n/m),
$$
we conclude
\bm{e}gin{align*}
\sqrt{n}\left(\widetilde\Theta_{11} - \Theta_{11}\right) &= \zeta + \Delta
\end{align*}
where $\zeta$ satisfies (\ref{def_zeta}) and
\bm{e}q\nonumber
\Delta &= \mathcal{O}_{\mathbb{P}}\left(
\xi_n\sqrt{\log p} + \left({M_n\sqrt{n} \over m} + \sqrt{s_\Omega \log p} + \sqrt{n} \norm{\bm{A}_{1\cdot}}_2+1\right)\eta_n
\right) + \mathcal{O}\left({M_n\sqrt{n}\over m}\right) + o_{\mathbb{P}}(1).
\end{aligned}\end{equation}
By $M_n\sqrt{n} = o(m)$, (\ref{def_xi}) and (\ref{def_eta_bar}), after a bit of algebra, we conclude
\bm{e}gin{align*}
\Delta
&= \mathcal{O}_{\mathbb{P}}\left(
{s_n\log p \over \sqrt n} + \left({s_nM_n\sqrt{\log p} \over m} + \sqrt{(s_n\vee s_\Omega)\log p}+ \sqrt{n} \norm{\bm{A}_{1\cdot}}_2+ 1\right) \eta_n
\right)+ o_\mathbb{P}(1)\\
&= \mathcal{O}_{\mathbb{P}}\left(\left( \sqrt{(s_n\vee s_\Omega)\log p}+ \sqrt{n} \norm{\bm{A}_{1\cdot}}_2+1\right) \left(
\sqrt{\log m\over n} + r_n
\right)
\right) + o_\mathbb{P}(1)\\
&= \mathcal{O}_{\mathbb{P}}\left(\sqrt{(s_n\vee s_\Omega)\log(p)\log(m)\over n}\right)\\
&\quad + \mathcal{O}_{\mathbb{P}}\left(\norm{\bm{A}_{1\cdot}}_2\sqrt{\log m}+\left( \sqrt{(s_n\vee s_\Omega)\log p}+ \sqrt{n} \norm{\bm{A}_{1\cdot}}_2\right) r_n
\right) + o_\mathbb{P}(1)\\
&= o_\mathbb{P}(1)
\end{align*}
where we use $s_n\log p = o(\sqrt n)$ in the second line, $\log m =o(n)$ and $r_n = o(1)$ in the third equality, and $(s_n\vee s_\Omega)\log(p)\log(m) = o(n)$ together with (\ref{cond_rn}) in the last step.
Finally, $|\widehat\bm{\omega}_1^T \widehat\Sigma \widehat\bm{\omega}_1 - \Omega_{11}| = o_\mathbb{P}(1)$ is proved in Lemma \ref{lemma_nodewise}. The proof is complete.\qed
\subsection{Proof of Corollary \ref{cor_ASN}}\label{app_proof_cor_ASN}
We first prove case (1). From Theorem \ref{thm_pred}, we start by simplifying the expressions of $Rem_{1,j}$, $Rem_{2,j}(\bm{\delta}_j)$ and $Rem_{3,j}(\bm{\theta}_j)$. Recall the SVD $\widehat\Sigma = \sum_{k=1}^q\Lambda_k \bm{u}_k\bm{u}_k^T$ with $q = \textrm{rank}(\bm{X})$. Pick any $1\le j\le m$ and note that $\|\bm{\theta}_j\|_0 \le s_n$. We have
\bm{e}gin{align*}
&Rem_{1,j} = {\sigma_j^2\over n}\left(
\sum_{k=1}^q \left(
\Lambda_k \over \Lambda_k + \lambda_2^{(j)}
\right)^2 + \left(\Lambda_1 \over \Lambda_1 + \lambda_2^{(j)}\right)^2\log m
\right),\\
&Rem_{2,j}(\bm{\delta}_j) = \sum_{k=1}^q{\lambda_2^{(j)} \Lambda_k \over \Lambda_k + \lambda_2^{(j)}} \left(\bm{u}_k^T\bm{\delta}_j\right)^2,\\
& Rem_{3,j}(\bm{\theta}_j) =
{\lambda_2^{(j)}(\Lambda_1 + \lambda_2^{(j)}) \over (\Lambda_q+\lambda_2^{(j)})^2}\left(\max_{1\le i\le p}\widehat\Sigma_{ii}\right) {s_n\log (p\vee m) \over \kappa^2(s_n,4)}{\sigma_j^2\over n}.
\end{align*}
Taking $\lambda_2^{(j)} \to \infty$ yields
\bm{e}gin{align*}
&Rem_{1,j} = 0,\\
&Rem_{2,j}(\bm{\delta}_j) = \sum_{k=1}^q\Lambda_k \left(\bm{u}_k^T\bm{\delta}_j\right)^2 = \bm{\delta}_j^T \widehat\Sigma \bm{\delta}_j,\\
& Rem_{3,j}(\bm{\theta}_j) = \left(\max_{1\le i\le p}\widehat\Sigma_{ii}\right) {s_n\log (p\vee m) \over \kappa^2(s_n,4)}{\sigma_j^2\over n}.
\end{align*}
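The limits are elementary; for completeness, each ratio is monotone in $\lambda_2^{(j)}$ and
\[
{\Lambda_k \over \Lambda_k + \lambda_2^{(j)}} \to 0,
\qquad
{\lambda_2^{(j)} \Lambda_k \over \Lambda_k + \lambda_2^{(j)}} \to \Lambda_k,
\qquad
{\lambda_2^{(j)}(\Lambda_1 + \lambda_2^{(j)}) \over (\Lambda_q+\lambda_2^{(j)})^2} \to 1,
\qquad \text{as } \lambda_2^{(j)}\to\infty,
\]
which gives the three displayed expressions.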
An application of Lemma \ref{lem_bernstein} together with
$$
\bm{d}elta_j^T \Sigma \bm{d}elta_j \le \|\bm{d}elta_j\|_2^2 \|\Sigma\|_{{\rm op}}\le \|\bm{A}\|_{{\rm op}}^2\|\bm{B}_j\|_2^2\|\Sigma\|_{{\rm op}} \lesssim \|\bm{A}\|_{{\rm op}}^2\|\Sigma\|_{{\rm op}}
$$ yields
\bm{e}gin{align*}
\mathbb{P}\left\{
\bm{d}elta_j^T \widehat\Sigma \bm{d}elta_j \le \|\bm{A}\|_{{\rm op}}^2\|\Sigma\|_{{\rm op}}\left(1 + \sqrt{\log m \over n}\right)
\right\}\gammae 1-2p^{-2}.
\end{align*}
Taking a union bound over $1\le j\le m$ and invoking Assumption \ref{ass_B_Sigma} together with the event $\mathcal{E}_{\bm{X}}$ in (\ref{def_event_X}), we conclude
\[
r_n = \mathcal{O}\left(\|\bm{A}\|_{{\rm op}}^2 + {s_n\log(p\vee m)\over n}\right)
\]
with probability tending to one. This proves the rate in (\ref{rate_rnj_case1}). In this case, condition (\ref{cond_rn}) reduces to
\[
\norm{\bm{A}_{1\cdot}}_2 \sqrt{\log m}+\left(\|\bm{A}_{1\cdot}\|_2 \sqrt n + \sqrt{(s_n\vee s_\Omega)\log p}\right)\left(
\|\bm{A}\|_{{\rm op}}^2 + {s_n\log(p\vee m)\over n}
\right) = o(1).
\]
Provided that $\|\bm{A}_{1\cdot}\|_2 = o(\sqrt{(s_n\vee s_\Omega)\log p/n})$,
\[
\norm{\bm{A}_{1\cdot}}_2 \sqrt{\log m} = o\left(
\sqrt{(s_n\vee s_\Omega)\log p\log m\over n}
\right) = o(1),
\]
and
\[
\sqrt{(s_n\vee s_\Omega)\log p}\left(
\|\bm{A}\|_{{\rm op}}^2 + {s_n\log(p\vee m)\over n}
\right) = o(1)
\]
is ensured by (\ref{cond_A_op}) and $(s_n\vee s_\Omega)\log^2(p\vee m) = o(n)$.
To prove case (2), by repeating the proof of Corollary 8 in \cite{bing2020adaptive}, one can deduce that
\[
Rem_{1,j} + Rem_{2,j}(\bm{\delta}_j) + Rem_{3,j}(\bm{\theta}_j) \lesssim \sqrt{(\tr(\widehat\Sigma) + \Lambda_1 s_n) \|\bm{\delta}_j\|_2^2\log (p\vee m)\over n}+{s_n \over n}.
\]
Since $\tr(\widehat\Sigma)=\mathcal{O}_\mathbb{P}(p)$, $\|\bm{\delta}_j\|_2^2 \lesssim \|\bm{A}\|_{{\rm op}}^2 = \mathcal{O}(1/p)$ and $\Lambda_1 = \mathcal{O}_\mathbb{P}(p)$ by using Lemma \ref{lem_op_norm}, $\max_{1\le j\le p}\Sigma_{jj} = \mathcal{O}(1)$ and $\|\Sigma\|_{{\rm op}} =\mathcal{O}(p)$, we conclude
\[
r_n = \mathcal{O}\left(
\sqrt{s_n \log (p\vee m)\over n}+{s_n\log (p\vee m) \over n}
\right).
\]
Immediately, $\|\bm{A}_{1\cdot}\|_2 \le \|\bm{A}\|_{{\rm op}}$ and condition (\ref{cond_rn}) holds under $\|\bm{A}\|_{{\rm op}}^2 = \mathcal{O}(1/p)$ and $s_n(s_n\vee s_\Omega)\log^2(p\vee m) = o(n)$.
\qed
\subsection{Proof of Proposition \ref{prop_sigma_E}: consistency of the estimation of $\sigma_{E_1}^2$}\label{app_proof_prop_sigma_E}
We work on the event that
\[
\left\{\lambda_K(\bm{H}_0) \gammatrsim c_H\right\} \bmf{i}gcap \left\{{1\over n}\|\bm{X}\widehat\bm{F}_1 - \bm{X}\bm{F}_1\|_2^2 \lesssim r_{n,1}\right\}
\]
which, according to Lemma \ref{lemma_technical} and Theorem \ref{thm_pred}, holds with probability tending to one.
Recall from (\ref{def_est_epsilon}) that
\[
\widehat\bm{e}psilon_1 = \bm{e}psilon_1 + \bm{D}elta_1 = \bm{W}\bm{B}_1 + \bm{E}_1 + \bm{D}elta_1 = \widetilde\bm{W}\widetilde\bm{B}_1 + \bm{E}_1 + \bm{D}elta_1
\]
with $\bm{D}elta_1 = \bm{X}\widehat\bm{F}_1 - \bm{X}\bm{F}_1$, $\widetilde\bm{W} = \bm{W}\bm{H}_0^{-1}$ and $\widetilde\bm{B} = \bm{H}_0\bm{B}$ defined in (\ref{def_B_tilde}).
By definition (\ref{def_est_variance}), after a bit of algebra,
\bm{e}gin{align*}
\widehat\sigma_{E_1}^2 - \sigma_{E_1}^2 &= {1\over n}\bm{E}_1^T\bm{E}_1 - \sigma_{E_1}^2+ {1\over n}\bm{D}elta_1^T\bm{D}elta_1 + {2\over n}\bm{D}elta_1^T(\widetilde\bm{W}\widetilde\bm{B}_1 - \widehat\bm{W}\widehat\bm{B}_1) + {2\over n}\bm{D}elta_1^T\bm{E}_1\\
&\quad + {1\over n}(\widetilde\bm{W}\widetilde\bm{B}_1 - \widehat\bm{W}\widehat\bm{B}_1)^T(\widetilde\bm{W}\widetilde\bm{B}_1 - \widehat\bm{W}\widehat\bm{B}_1) + {2\over n}(\widetilde\bm{W}\widetilde\bm{B}_1 - \widehat\bm{W}\widehat\bm{B}_1)^T\bm{E}_1.
\end{align*}
We study each term on the right-hand side separately. First, an application of Lemma \ref{lem_bernstein} together with $\sigma_{E_1}^2\le C_E$ gives
\[
\left| {1\over n}\bm{E}_1^T\bm{E}_1 - \sigma_{E_1}^2\right| = \mathcal{O}_\mathbb{P}\left(\sqrt{1 /n}
\right),
\]
which further implies
\[
{1\over \sqrt n}\|\bm{E}_1\|_2 = \mathcal{O}_\mathbb{P}(1).
\]
We thus have
\bm{e}q\label{bd_term_1}
&\left|
{1\over n}\bm{E}_1^T\bm{E}_1 - \sigma_{E_1}^2+ {1\over n}\bm{D}elta_1^T\bm{D}elta_1 + {2\over n}\bm{D}elta_1^T\bm{E}_1
\right|\\
&\le \left|
{1\over n}\bm{E}_1^T\bm{E}_1 - \sigma_{E_1}^2\right|+ {1\over n}\|\bm{D}elta_1\|_2^2 + {2\over n}\|\bm{D}elta_1\|_2\|\bm{E}_1\|_2
= \mathcal{O}_{\mathbb{P}}(n^{-1/2} + r_n).
\end{aligned}\end{equation}
To bound the other terms, notice that
\[
\|\widetilde\bm{W}\widetilde\bm{B}_1 - \widehat\bm{W}\widehat\bm{B}_1\|_2 \le \|\widetilde\bm{W}-\widehat\bm{W}\|_{\rm op} \|\widehat\bm{B}_1\|_2 + \|\widetilde\bm{W}\|_{\rm op}\|\widehat\bm{B}_1 - \widetilde\bm{B}_1\|_2.
\]
By Lemma \ref{lem_W_eigens}, part (B) of Lemma \ref{lemma_technical}, Theorem \ref{thm_rates_B} and Lemma \ref{lem_W_frob}, we have
\bm{e}q\nonumber
{1\over n}\|\widetilde\bm{W}\widetilde\bm{B}_1 - \widehat\bm{W}\widehat\bm{B}_1\|_2 = \mathcal{O}_\mathbb{P}\left(\sqrt{\log m\over n} + r_n\right).
\end{aligned}\end{equation}
This leads to
\bm{e}q \label{bd_term_2}
&\left|
{2\over n}\bm{D}elta_1^T(\widetilde\bm{W}\widetilde\bm{B}_1 - \widehat\bm{W}\widehat\bm{B}_1)+{1\over n}(\widetilde\bm{W}\widetilde\bm{B}_1 - \widehat\bm{W}\widehat\bm{B}_1)^T(\widetilde\bm{W}\widetilde\bm{B}_1 - \widehat\bm{W}\widehat\bm{B}_1)\right.\\
&\quad \left.+ {2\over n}(\widetilde\bm{W}\widetilde\bm{B}_1 - \widehat\bm{W}\widehat\bm{B}_1)^T\bm{E}_1
\right| = \mathcal{O}_\mathbb{P}\left(
\sqrt{\log m\over n} + r_n
\right).
\end{aligned}\end{equation}
Collecting (\ref{bd_term_1}) and (\ref{bd_term_2}) completes the proof. \qed
\bmf{i}gskip
The following lemma provides overall control of $\widehat\bm{W} - \widetilde\bm{W}$ in the operator norm.
\begin{lemma}\label{lem_W_frob}
Under conditions of Theorem \ref{thm_rates_B}, with probability tending to one,
\[
{1\over \sqrt n}\|\widetilde\bm{W}-\widehat\bm{W}\|_{\rm op} \lesssim \sqrt{r_n} + \sqrt{\log m \over n \wedge m}.
\]
\end{lemma}
\begin{proof}
We work on the event that parts (A) -- (C) of Lemma \ref{lemma_technical} hold, intersected with $\mathcal{E}_B$ in (\ref{def_event_B}) and $\mathcal{E}_F$ in (\ref{def_event_F}). Recall that $\widetilde\bm{B}$ is defined in (\ref{def_B_tilde}) and $\widetilde\bm{W} = \bm{W}\bm{H}_0^{-1}$.
Observe that
$$
\widehat\bm{W} = \widehat\bm{e}psilon\widehat\bm{B}^T(\widehat\bm{B}\widehat\bm{B}^T)^{-1} = \widetilde\bm{W}\widetilde\bm{B}\widehat\bm{B}^T(\widehat\bm{B}\widehat\bm{B}^T)^{-1}+ (\widehat\bm{e}psilon-\bm{e}psilon)\widehat\bm{B}^T(\widehat\bm{B}\widehat\bm{B}^T)^{-1}
$$
with $\bm{e}psilon = \bm{W}\bm{B} = \widetilde\bm{W}\widetilde\bm{B}$. This gives
\bm{e}q \nonumber
\widehat\bm{W} - \widetilde\bm{W} &= \widetilde\bm{W}(\widetilde\bm{B}-\widehat\bm{B})\widehat\bm{B}^T(\widehat\bm{B}\widehat\bm{B}^T)^{-1}+ (\widehat\bm{e}psilon-\bm{e}psilon)\widehat\bm{B}^T(\widehat\bm{B}\widehat\bm{B}^T)^{-1}.
\end{aligned}\end{equation}
For the first term,
\[
{1\over \sqrt n}\|\widetilde\bm{W}(\widetilde\bm{B}-\widehat\bm{B})\widehat\bm{B}^T(\widehat\bm{B}\widehat\bm{B}^T)^{-1}\|_{\rm op} \le c_H^{-1}{1\over \sqrt{n}}\|\bm{W}\|_{\rm op}{\|\widetilde\bm{B}-\widehat\bm{B}\|_{{\rm op}}\over \lambda_K(\widehat\bm{B})}.
\]
Invoking Lemma \ref{lem_W_eigens} and (\ref{bd_B_diff_op}) yields
\[
{1\over \sqrt n}\|\widetilde\bm{W}(\widetilde\bm{B}-\widehat\bm{B})\widehat\bm{B}^T(\widehat\bm{B}\widehat\bm{B}^T)^{-1}\|_{\rm op} = \mathcal{O}_{\mathbb{P}}\left( \eta_n \right)
\]
with $\eta_n$ defined in (\ref{def_eta_bar}). Similarly, the second term can be bounded by
\[
{1\over \sqrt n}\|(\widehat\bm{e}psilon-\bm{e}psilon)\widehat\bm{B}^T(\widehat\bm{B}\widehat\bm{B}^T)^{-1}\|_{\rm op} \lesssim {1\over \sqrt{n}}\|\bm{X}\widehat\bm{F}-\bm{X}\bm{F}\|_{F} {1\over \lambda_K(\widehat\bm{B})} =\mathcal{O}_{\mathbb{P}}(\sqrt{r_n}).
\]
Combining these two bounds completes the proof.
\end{proof}
\subsection{Proof of Theorem \ref{thm_B_asn}: The asymptotic normality of $\widehat B_j$}\label{app_proof_thm_B_asn}
We work on the event $\mathcal{E}_F\cap \mathcal{E}_D$ in (\ref{def_event_F}) -- (\ref{def_event_D}), intersected with $\{\lambda_K(\bm{H}_0) \gtrsim 1\}$, which holds with probability tending to one. From (\ref{display_B_hat_BH}), for any $j\in [m]$, one has
\bm{e}gin{align}\label{decomp_Bhat_j_B_j}\nonumber
\sqrt{n}\left(\widehat \bm{B}_j - \bm{H}_0\bm{B}_j\right) & = {1\over m\sqrt n}\bm{D}_K^{-2}\widehat \bm{B} \bm{B}^T\bm{W}^T\bm{E}_j\\
&\quad + \underbrace{{1\over m\sqrt n}\bm{D}_K^{-2}\widehat \bm{B}\left(
\bm{E}^T\bm{W}\bm{B}_j +\bm{E}^T\bm{E}_j + \bm{e}psilon^T\bm{D}elta_j + \bm{D}elta^T\bm{e}psilon_j + \bm{D}elta^T\bm{D}elta_j
\right)}_{R}.
\end{align}
Let
\bm{e}q\label{def_H2}
\bm{H}_2 = \bm{B}\widehat\bm{B}^T (\widehat\bm{B}\widehat\bm{B}^T)^{-1} = {1\over m}\bm{B}\widehat\bm{B}^T \bm{D}_K^{-2},
\end{aligned}\end{equation}
such that
\[
{1\over m\sqrt n}\bm{D}_K^{-2}\widehat \bm{B} \bm{B}^T\bm{W}^T\bm{E}_j = {1\over \sqrt n}\bm{H}_2^T \bm{W}^T\bm{E}_j.
\]
First notice that, since $\bm{W}$ and $\bm{E}$ are independent, the classical central limit theorem yields
\[
{1\over \sqrt n}\bm{W}^T\bm{E}_j \overset{d}{\longrightarrow} N_K\left(\bm{0}, \sigma_{E_j}^2\Sigma_W\right),\qquad \textrm{as }n\to \infty.
\]
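The form of the limiting covariance can be read off directly; as a short check (using only the independence of $\bm{W}$ and $\bm{E}$ and that both are centered, which is how we read Assumption \ref{ass_error}):
\[
\mathrm{Cov}\left(\bm{W}_{i\cdot}E_{ij}\right)
= \mathbb{E}\left[E_{ij}^2\right]\,\mathbb{E}\left[\bm{W}_{i\cdot}\bm{W}_{i\cdot}^T\right]
= \sigma_{E_j}^2\,\Sigma_W,
\qquad 1\le i\le n,
\]
and the summands $\bm{W}_{i\cdot}E_{ij}$ are i.i.d.\ with mean zero, so the multivariate CLT applies to $n^{-1/2}\bm{W}^T\bm{E}_j = n^{-1/2}\sum_{i=1}^n \bm{W}_{i\cdot}E_{ij}$.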
Following \cite{bai2020simpler}, define
\begin{equation}\begin{aligned}\label{def_Q}
\bm{Q} = \Lambda_0^{1/2}R_0^T\Sigma_B^{-1/2}
\end{aligned}\end{equation}
where $\Sigma_B = m^{-1}\bm{B}\bm{B}^T$ and $\Sigma_B^{1/2}\Sigma_W\Sigma_B^{1/2}$ has the eigen-decomposition $R_0\Lambda_0 R_0^T$.
Since Lemma \ref{lem_H2} proves $\bm{H}_2 \mathsf{T}o \bm{Q}^{-1}$ in probability, together with the fact $(\bm{Q}^T)^{-1}\Sigma_W\bm{Q}^{-1}= {\bm I}_K$, Slutsky's theorem ensures
\[
{1\over \sqrt n}\bm{H}_2^T\bm{W}^T\bm{E}_j \overset{d}{\longrightarrow} N_K\left(\b0, \sigma_{E_j}^2{\bm I}_K\right),\qquad \mathsf{T}extrm{as }n\mathsf{T}o \inftynfty.
\]
It remains to show $R$ in (\ref{decomp_Bhat_j_B_j}) is of order $o_\mathbb{P}(1)$.
By (\ref{bd_quad_Deltas}), one has
\bm{e}gin{align}\label{bd_R1}\nonumber
&{1\over m\sqrt n}\|\bm{D}_K^{-2}\widehat \bm{B}\left(
\bm{e}psilon^T\bm{D}elta_j + \bm{D}elta^T\bm{e}psilon_j + \bm{D}elta^T\bm{D}elta_j
\right)\|_2\\\nonumber
&= {1\over \sqrt{nm}}\|\bm{D}_K^{-1}\bm{V}_K^T\left(
\bm{e}psilon^T\bm{D}elta_j + \bm{D}elta^T\bm{e}psilon_j + \bm{D}elta^T\bm{D}elta_j
\right)\|_2\\\nonumber
& \lesssim \sqrt{nr_n}\sqrt{Rem_{1,j} + Rem_{2,j}(\bm{d}elta_j) + Rem_{3,j}(\bm{t}heta_j)} + r_{n,1}\sqrt{n} + \sqrt{r_{n,2}\log (m)}+ r_{n,3}\\
& = \sqrt{nr_n}\sqrt{Rem_{1,j} + Rem_{2,j}(\bm{d}elta_j) + Rem_{3,j}(\bm{t}heta_j)} + r_{n,1}\sqrt{n} + o(1)
\end{align}
with probability $1-8m^{-1}$, provided that $r_n\sqrt{\log m} = o(1)$. In addition, recalling that $\widetilde\bm{B} = \bm{H}_0\bm{B}$ and $\mathcal{E}_D$, one has
\bm{e}gin{align*}
{1\over m\sqrt n}\|\bm{D}_K^{-2}\widehat \bm{B}
\bm{E}^T\bm{W}\bm{B}_j\|_2 &\lesssim {1\over m\sqrt n}\left(\|\widetilde\bm{B} \bm{E}^T\bm{W}\bm{B}_j\|_2 + \|\widehat\bm{B}-\widetilde\bm{B}\|_{{\rm op}}\|\bm{E}^T\bm{W}\bm{B}_j\|_2\right)\\
&\lesssim {1\over m\sqrt n}\left(\|\bm{B} \bm{E}^T\bm{W}\bm{B}_j\|_2 + \|\widehat\bm{B}-\widetilde\bm{B}\|_{{\rm op}}\|\bm{E}^T\bm{W}\bm{B}_j\|_2\right).
\end{align*}
Since an application of Lemma \ref{lem_bernstein} with an union bound over $1\le k\le K$ yields
\[
{1\over m\sqrt n}\|\bm{B} \bm{E}^T\bm{W}\bm{B}_j\|_2 \le {1\over m\sqrt n}\left(
n\log (m)\bm{B}_j^T\Sigma_W \bm{B}_j \sum_{k=1}^K\bm{B}_{k\cdot}^T \Sigma_E \bm{B}_{k\cdot}
\right)^{1/2} \lesssim \sqrt{\log m \over m}
\]
with probability $1-2m^{-1}$, and similar arguments yield
\[
{1\over \sqrt{nm}}\|\bm{E}^T\bm{W}\bm{B}_j\|_2 \lesssim \max_{\ell\inftyn[m]}{1\over \sqrt n}|\bm{E}_{\ell}^T\bm{W} \bm{B}_j|\lesssim \sqrt{\log m}
\]
with probability $1-2m^{-1}$, invoke (\ref{bd_B_diff_op}) to conclude
\bm{e}q\label{bd_R2}
{1\over m\sqrt n}\|\bm{D}_K^{-2}\widehat \bm{B}
\bm{E}^T\bm{W}\bm{B}_j\|_2 = o_\mathbb{P}(1)
\end{aligned}\end{equation}
provided that $r_n \sqrt{\log m} = o(1)$, $\log m = o(\sqrt m)$ and $\log^2(m) = o(\sqrt n)$.
Finally, by Lemma \ref{lem_quad_terms}, we have
\bm{e}q\label{bd_R3}
{1\over m\sqrt n}\|\bm{D}_K^{-2}\widehat \bm{B}\bm{E}^T\bm{E}_j\|_2 &\lesssim {1\over m\sqrt n}\left(
\|\bm{B}\bm{E}^T\bm{E}_j\|_2 + \|\widehat\bm{B}-\widetilde\bm{B}\|_{{\rm op}}\|\bm{E}^T\bm{E}_j\|_2
\right)\\
&\lesssim \sqrt{(n+m)\log m\over m^2} + \left(\sqrt{\log m\over n\wedge m} + r_n\right)\sqrt{(n+m)\log m\over m}\\
& = o(1) + r_n \sqrt{n\log m\over m}
\end{aligned}\end{equation}
with probability tending to one. The last step uses $\sqrt{n\log m} = o(m)$ and $r_n\sqrt{\log m} = o(1)$. To combine the bounds, taking $\lambda_2^{(j)} \to \infty$ for all $1\le j\le m$ and invoking $\mathcal{E}_{\bm{X}}$ in (\ref{def_event_X}), one has
\bm{e}gin{align*}
n Rem_{1,j} \le n r_{n,1} = o_\mathbb{P}(1),\quad Rem_{2,j}(\bm{\delta}_j) =\mathcal{O}_\mathbb{P}\left(
\|\bm{\delta}_j\|_2^2
\right),\qquad r_{n,2} = \mathcal{O}_\mathbb{P}(\|\bm{A}\|_{{\rm op}}^2)
\end{align*}
and
\[
Rem_{3,j}(\bm{t}heta_j) \le r_{n,3} = \mathcal{O}_{\mathbb{P}}\left(
{s_n\log(p\vee m) \over n}
\right),
\]
such that
\[
r_n = \mathcal{O}_\mathbb{P}\left(
\|\bm{A}\|_{{\rm op}}^2 + {s_n\log(p\vee m) \over n}
\right) + o_\mathbb{P}(n^{-1}).
\]
Therefore, $r_n\sqrt{\log m} = o(1)$. Also by $s_n\log(p\vee m) = o(\sqrt n)$, collecting (\ref{bd_R1}), (\ref{bd_R2}) and (\ref{bd_R3}) yields
\bm{e}gin{align*}
\left\|R
\right\|_2 &= \mathcal{O}_\mathbb{P}\left(
\|\bm{d}elta_j\|_2\sqrt{n r_n} +\sqrt{r_ns_n\log(p\vee m)} + r_n\sqrt{n\log m\over m}
\right) + o_\mathbb{P}(1)\\
&= \mathcal{O}_\mathbb{P}\left(
\|\bm{d}elta_j\|_2\sqrt{n\|\bm{A}\|_{{\rm op}}^2 + s_n\log(p\vee m)} +\|\bm{A}\|_{{\rm op}} \sqrt{s_n\log(p\vee m)}\right.\\
&\quad\qquad \left.+ \|\bm{A}\|_{{\rm op}}^2\sqrt{n\log m\over m}
\right) + o_\mathbb{P}(1)\\
&=\mathcal{O}_\mathbb{P}\left(
\|\bm{A}\|_{{\rm op}}\left[
\|\bm{d}elta_j\|_2\sqrt{n} + \sqrt{s_n\log(p\vee m)}\right]+ \|\bm{A}\|_{{\rm op}}^2\sqrt{n\log m\over m}
\right) + o_\mathbb{P}(1)
\end{align*}
Invoke condition (\ref{cond_r_asn}) to complete the proof.
\section{Technical lemmas}
\subsection{Lemmas used in the proof of Theorem \ref{thm_rates_B}}
The following lemma provides upper and lower bounds of the eigenvalues of $n^{-1}\bm{W}^T\bm{W}$.
\begin{lemma}\label{lem_W_eigens}
Under Assumptions \ref{ass_error} and \ref{ass_B_Sigma}, assume $K\log n\le Cn$ for some large constant $C>0$. Then
\[
\mathbb{P}\left\{
c_W \lesssim \lambda_K\left({1\over n}\bm{W}^T\bm{W}\right) \le \lambda_1\left({1\over n}\bm{W}^T\bm{W}\right) \lesssim C_W
\right\} \gammae 1-2e^{-n}.
\]
\end{lemma}
\begin{proof}
First, an application of Lemma \ref{lem_op_norm_diff} yields
\[
\mathbb{P}\left\{
\left\|{1\over n}\bm{W}^T\bm{W} - \Sigma_W\right\|_{{\rm op}} \lesssim \|\Sigma_W\|_{{\rm op}} \left(\sqrt{K\log n\over n} + {K\log n\over n}\right)
\right\} \gammae 1-2e^{-n}.
\]
As Weyl's inequality leads to
\[
\left|\lambda_k\left({1\over n}\bm{W}^T\bm{W}\right) - \lambda_k(\Sigma_W)\right|
\le \left\|{1\over n}\bm{W}^T\bm{W} - \Sigma_W\right\|_{{\rm op}},\quad \forall 1\le k\le K,
\]
use $c_W\le \lambda_K(\Sigma_W) \le \lambda_1(\Sigma_W) \le C_W$ and $K\log n \le Cn$ to complete the proof.
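Spelling this out (with the reading that the constants hidden in $K\log n\le Cn$ keep the deviation term below a small multiple of $c_W$):
\[
\lambda_K\left({1\over n}\bm{W}^T\bm{W}\right)
\ge \lambda_K(\Sigma_W) - \left\|{1\over n}\bm{W}^T\bm{W} - \Sigma_W\right\|_{{\rm op}}
\gtrsim c_W,
\qquad
\lambda_1\left({1\over n}\bm{W}^T\bm{W}\right)
\le \lambda_1(\Sigma_W) + \left\|{1\over n}\bm{W}^T\bm{W} - \Sigma_W\right\|_{{\rm op}}
\lesssim C_W.
\]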
\end{proof}
The following lemma shows that the event $\mathcal{E}_D$ in (\ref{def_event_D}) holds with probability tending to one, thereby providing upper and lower bounds for the singular values of $\widehat \bm{e}psilon/\sqrt{nm}$.
\begin{lemma}\label{lem_D_K}
Under conditions of Theorem \ref{thm_rates_B}, one has
$$
\lim_{n\to\infty}\mathbb{P}(\mathcal{E}_D) = 1.
$$
\end{lemma}
\begin{proof}
Recall that $\bm{D}_K$ contains the $K$ largest singular values of $\widehat\bm{\epsilon} / \sqrt{nm}$. From
\[
\widehat\bm{e}psilon = \bm{W}\bm{B} + \bm{E} + \bm{D}elta
\]
with $\bm{D}elta = \bm{X}\bm{F}- \bm{X}\widehat\bm{F}$, using Weyl's inequality gives
\bm{e}gin{align*}
\left|\lambda_k(\bm{D}_K) - {1\over \sqrt{nm}}\lambda_k(\bm{W}\bm{B})\right| & ~ =~
\left|{1\over \sqrt{nm}}\lambda_k(\widehat\bm{e}psilon) - {1\over \sqrt{nm}}\lambda_k(\bm{W}\bm{B})\right|\\
&~ \le~ {1\over \sqrt{nm}}\|\bm{E}\|_{{\rm op}} + {1\over \sqrt{nm}}\|\bm{X}\widehat\bm{F} - \bm{X}\bm{F}\|_{{\rm op}},
\end{align*}
for all $1\le k\le K$.
On the one hand, by Assumption \ref{ass_B_Sigma} and Lemma \ref{lem_W_eigens},
\[
\sqrt{c_Wc_B} \lesssim {1\over \sqrt{nm}}\lambda_K(\bm{W}\bm{B})\le {1\over \sqrt{nm}}\lambda_1(\bm{W}\bm{B}) \lesssim \sqrt{C_WC_B}
\]
with probability at least $1-2n^{-c'n}$.
On the other hand,
invoke Lemma \ref{lem_op_norm} to obtain
\[
\mathbb{P}\left\{{1\over nm}\|\bm{E}^T\bm{E}\|_{{\rm op}} \le {\gamma_e^2\over m}\left(
\sqrt{\mathsf{T}r(\Sigma_E) \over n} + \sqrt{6\|\Sigma_E\|_{{\rm op}}}
\right)^2\right\} \gammae 1-e^{-n}.
\]
Using $\mathsf{T}r(\Sigma_E) \le m\|\Sigma_E\|_{{\rm op}} \le C_E m$ and $\|\Sigma_E\|_{{\rm op}}\le C_E$ implies
\[
{1\over nm}\|\bm{E}^T\bm{E}\|_{{\rm op}} = o_{\mathbb{P}}(1).
\]
Since Assumption \ref{ass_initial} ensures
\[
{1\over nm}\|\bm{X}\widehat\bm{F} - \bm{X}\bm{F}\|_{{\rm op}}^2= \mathcal{O}_{\mathbb{P}}\left(r_n\right) = o_{\mathbb{P}}(1),
\]
we conclude that, with probability tending to one,
\[
\sqrt{c_Wc_B} \lesssim \lambda_k(\bm{D}_K) \lesssim \sqrt{C_WC_B},\quad \forall 1\le k\le K.
\]
The proof is complete.
\end{proof}
\begin{lemma}\label{lem_quad_terms}
Under Assumptions \ref{ass_error} and \ref{ass_B_Sigma}, with probability greater than $1-8m^{-1}$, the following holds, uniformly over $1\le j\le m$,
\bm{e}gin{align*}
&\|\bm{E}^T\bm{W} \bm{B}_j\|_2 \lesssim \sqrt{nm\log m}, \\
&\|\bm{E}_j^T\bm{W} \bm{B}\|_2 \lesssim \sqrt{nm\log m}, \\
&\|\bm{E}_j^T\bm{E}\|_2 \lesssim \sqrt{n(n+m)\log m}.
\end{align*}
Furthermore, if $\|\Sigma_E\|_{\infty,1} \le C$ for some constant $C>0$, then with probability $1-2m^{-1}$, uniformly over $1\le j\le m$,
\[
\|\bm{B}\bm{E}^T\bm{E}_j\|_2 \lesssim \sqrt{n(n+m)\log m}.
\]
\end{lemma}
\begin{proof}
Write $\bm{a}r \bm{E} = \bm{E} \Sigma_E^{-1/2}$ and $\bm{a}r\bm{W} = \bm{W}\Sigma_W^{-1/2}$. We have
\[
\|\bm{E}^T\bm{W} \bm{B}_j\|_2^2 \le \|\Sigma_E\|_{{\rm op}} \sum_{\ell=1}^m\left(
\bm{a}r\bm{E}_\ell^T \bm{W} \bm{B}_j
\right)^2.
\]
Notice that $\bm{a}r E_{i\ell}$ is $\gamma_e$ sub-Gaussian and $\bm{W}_{i\cdot}^T\bm{B}_j$ is $\gamma_w\sqrt{\bm{B}_j^T\Sigma_W\bm{B}_j}$ sub-Gaussian,
for all $1\le i\le n$.
An application of Lemma \ref{lem_bernstein} together with union bounds over $1\le \ell \le m$ gives
\[
\mathbb{P}\left\{
\|\bm{E}^T\bm{W} \bm{B}_j\|_2 \lesssim \sqrt{\|\Sigma_E\|_{{\rm op}}\bm{B}_j^T\Sigma_W\bm{B}_j}\sqrt{nm\log m}
\right\} \gammae 1-2m^{-1}.
\]
By similar arguments,
$$
\|\bm{E}_j^T\bm{W} \bm{B}\|_2^2 \le \|\bm{E}_j^T\bm{a}r\bm{W}\|_2^2\|\bm{B}^T\Sigma_W\bm{B}\|_{{\rm op}} \le K\|\bm{E}_j^T\bm{a}r\bm{W}\|_\infty^2\|\bm{B}^T\Sigma_W\bm{B}\|_{{\rm op}}.
$$
Since $E_{ij}$ is $\gamma_e\sqrt{[\Sigma_E]_{jj}}$ sub-Gaussian for $1\le i\le n$, apply Lemma \ref{lem_bernstein} to bound $|\bm{E}_j^T\bm{a}r\bm{W}_k|$ and take union bounds over $1\le k\le K$ to obtain
\[
\mathbb{P}\left\{
\|\bm{E}_j^T\bm{W} \bm{B}\|_2 \lesssim \sqrt{\|\bm{B}^T\Sigma_W\bm{B}\|_{{\rm op}}[\Sigma_E]_{jj}}\sqrt{nK\log m}
\right\} \gammae 1-2m^{-1}.
\]
The result follows by $\|\bm{B}^T\Sigma_W\bm{B}\|_{{\rm op}} \lesssim m$ from Assumption \ref{ass_B_Sigma}.
Finally,
\bm{e}q\label{bd_EjE}
\|\bm{E}_j^T\bm{E}\|_2^2 \le \|\Sigma_E\|_{{\rm op}} \left(
(\bm{E}_j^T \bm{a}r\bm{E}_j)^2 + \sum_{\ell \ne j}(\bm{E}_j^T\bm{a}r\bm{E}_\ell)^2
\right).
\end{aligned}\end{equation}
For the first term, for any $1\le i\le n$, notice that
\[
\mathbb{E}\left[
\bm{E}_{ij}\bm{a}r\bm{E}_{ij}
\right] = \mathbb{E}\left[
\bm{E}_{ij}\bm{E}_{i\cdot}^T
\right]\Sigma_E^{-1/2}\bm{e}_j = \bm{e}_j^T\Sigma_E^{1/2}\bm{e}_j.
\]
An application of Lemma \ref{lem_bernstein} gives
\[
\mathbb{P}\left\{
|\bm{E}_j^T \bm{a}r\bm{E}_j - n\bm{e}_j^T\Sigma_E^{1/2}\bm{e}_j| \lesssim \sqrt{[\Sigma_E]_{jj}}\sqrt{n\log m}
\right\} \gammae 1-2m^{-1},
\]
which implies
\bm{e}q\label{bd_EjEj}
|\bm{E}_j^T \bm{a}r\bm{E}_j| \lesssim n\bm{e}_j^T\Sigma_E^{1/2}\bm{e}_j + \sqrt{[\Sigma_E]_{jj}}\sqrt{n\log m}\lesssim n\sqrt{\log m}
\end{aligned}\end{equation}
with the same probability. Similarly, applying Lemma \ref{lem_bernstein} again to $\bm{E}_j^T\bm{a}r\bm{E}_\ell$ with union bounds over $j\ne \ell \inftyn [m]$ yields
\[
\mathbb{P}\left\{
|\bm{E}_j^T \bm{a}r\bm{E}_\ell| \lesssim \sqrt{[\Sigma_E]_{jj}}\sqrt{n\log m}
\right\} \gammae 1-2m^{-1}.
\]
Combining this with (\ref{bd_EjE}) and (\ref{bd_EjEj}) concludes
\[
\|\bm{E}_j^T\bm{E}\|_2^2 \lesssim n^2\log m + nm\log m
\]
with probability at least $1-4m^{-1}$.
Finally, by similar arguments, one can show that, with probability $1-2m^{-1}$
\[
|\bm{B}_{k\cdot}^T \bm{E}^T\bm{E}_j| \lesssim n\bm{B}_{k\cdot}^T \Sigma_E \bm{e}_j + \sqrt{n\log(m)[\Sigma_E]_{jj} \bm{B}_{k\cdot}^T \Sigma_E \bm{B}_{k\cdot}}
\]
uniformly over $1\le k\le K$ and $1\le j\le m$, and therefore, with the same probability,
\bm{e}gin{align*}
\|\bm{B} \bm{E}^T\bm{E}_j\|_2^2 &\lesssim \sum_{k=1}^K
\left[
n^2(\bm{B}_{k\cdot}^T \Sigma_E \bm{e}_j)^2 + n\log(m)[\Sigma_E]_{jj} \bm{B}_{k\cdot}^T \Sigma_E \bm{B}_{k\cdot}
\right]\\
&= n^2 \bm{e}_j^T \Sigma_E \bm{B}^T\bm{B}\Sigma_E \bm{e}_j + n\log(m) [\Sigma_E]_{jj}\mathsf{T}r(\bm{B}\Sigma_E\bm{B})\\
&\le n^2\|\Sigma_E\|_{\infty,1}^2\|\bm{B}\|_{2,\infty}^2 + n\log(m)[\Sigma_E]_{jj} \|\bm{B}\|_F^2 \|\Sigma_E\|_{{\rm op}}\\
&\lesssim n^2 + nm\log(m)
\end{align*}
by invoking Assumption \ref{ass_B_Sigma} and using $\|\Sigma_E\|_{\infty,1}\le C$ in the last step.
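For the reader's convenience, the centering term $n\bm{B}_{k\cdot}^T\Sigma_E\bm{e}_j$ appearing in the penultimate display comes from the following elementary identity (only the independence across $1\le i\le n$ and the definition of $\Sigma_E$ are used):
\[
\mathbb{E}\left[\bm{B}_{k\cdot}^T\bm{E}_{i\cdot}E_{ij}\right]
= \bm{B}_{k\cdot}^T\,\mathbb{E}\left[\bm{E}_{i\cdot}\bm{E}_{i\cdot}^T\right]\bm{e}_j
= \bm{B}_{k\cdot}^T\Sigma_E\bm{e}_j,
\qquad 1\le i\le n,
\]
so that $\bm{B}_{k\cdot}^T\bm{E}^T\bm{E}_j = \sum_{i=1}^n \bm{B}_{k\cdot}^T\bm{E}_{i\cdot}E_{ij}$ has mean $n\bm{B}_{k\cdot}^T\Sigma_E\bm{e}_j$, and Lemma \ref{lem_bernstein} controls the fluctuation around this mean.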
This completes the proof.
\end{proof}
\bmf{i}gskip
Recalling from (\ref{def_r_n_k}),
Assumption \ref{ass_initial} implies $r_{n,k} \le r_n = o_{\mathbb{P}}(1)$, for $k\inftyn \{1,2,3\}$.
\begin{lemma}\label{lem_quad_terms_Delta}
Under conditions of Theorem \ref{thm_rates_B}, on the event $\mathcal{E}_F$ defined in (\ref{def_event_F}), the following holds with probability greater than $1-8m^{-1}$, uniformly over $1\le j\le m$.
\bm{e}gin{align*}
{1\over n\sqrt m}\|\bm{e}psilon_j^T\bm{D}elta\|_2 ~~&\lesssim ~ r_{n,1} + \sqrt{r_{n,2}\log(m) \over n} + r_{n,3}\sqrt{1\over n},\\
{1\over n\sqrt{m}}\|\bm{D}elta_j^T\bm{e}psilon\|_2 ~~&\lesssim Rem_{1,j} + \sqrt{\log (m) Rem_{2,j}(\bm{d}elta_j)\over n}+ {Rem_{3,j}(\bm{t}heta_j)\over \sqrt n}\\
& \lesssim ~ r_{n,1} + \sqrt{r_{n,2}\log (m) \over n}+ r_{n,3}\sqrt{1\over n},\\
{1\over n\sqrt m}\|\bm{D}elta_j^T \bm{D}elta\|_2 ~&\lesssim \sqrt{r_n}\sqrt{Rem_{1,j} + Rem_{2,j}(\bm{d}elta_j) + Rem_{3,j}(\bm{t}heta_j)},
\end{align*}
with $r_n$ defined in Assumption \ref{ass_initial}.
\end{lemma}
\begin{proof}
Since $\bm{D}elta = \widehat\bm{e}psilon - \bm{e}psilon = \bm{X}\bm{F} - \bm{X}\widehat\bm{F}$, on the event $\mathcal{E}_F$, we immediately have
\bm{e}gin{equation}\label{bd_Delta_Delta}
\|\bm{D}elta_j^T \bm{D}elta\|_2^2 \le \sum_{\ell=1}^m \|\bm{D}elta_j\|_2^2 \|\bm{D}elta_\ell\|_2^2 \lesssim n m ~ r_n \|\bm{D}elta_j\|_2^2.
\end{equation}
To study the other two terms, first note that $\bm{t}heta_j$ and $\bm{d}elta_j$ are identifiable under conditions of Theorem \ref{thm_rates_B}. From Lemma \ref{lem_solution} and $\bm{t}heta_j + \bm{d}elta_j = \bm{F}_j$, for any $1\le j\le m$, we have
\bm{e}gin{align*}
\bm{D}elta_j = \bm{X}\widehat\bm{F}_j - \bm{X}\bm{F}_j = P_{\lambda_2^{(j)}}\bm{e}psilon_j - Q_{\lambda_2^{(j)}}\bm{X}\bm{d}elta_j + Q_{\lambda_2^{(j)}}\bm{X}(\widehat\bm{t}heta^{(j)} - \bm{t}heta_j).
\end{align*}
Then
\bm{e}gin{align*}
\|\bm{e}psilon^T\bm{D}elta_j\|_2 \le \left\|
\bm{e}psilon^T P_{\lambda_2^{(j)}}\bm{e}psilon_j
\right\|_2 + \left\|
\bm{e}psilon^T Q_{\lambda_2^{(j)}}\bm{X}\bm{d}elta_j
\right\|_2 + \left\|
\bm{e}psilon^T Q_{\lambda_2^{(j)}}\bm{X}(\widehat\bm{t}heta^{(j)} - \bm{t}heta_j)
\right\|_2.
\end{align*}
By Cauchy-Schwarz inequality, we have
\bm{e}gin{align*}
\left\|
\bm{e}psilon^T P_{\lambda_2^{(j)}}\bm{e}psilon_j
\right\|_2^2 \le \sum_{\ell=1}^m \left(\bm{e}psilon_\ell P_{\lambda_2^{(j)}}\bm{e}psilon_\ell\right) \left(\bm{e}psilon_j P_{\lambda_2^{(j)}}\bm{e}psilon_j\right).
\end{align*}
Invoking Lemma \ref{lem_quad} gives, with probability at least $1-m^{-1}$,
\bm{e}gin{align*}
\bm{e}psilon_\ell P_{\lambda_2^{(j)}}\bm{e}psilon_\ell &\lesssim \sigma_\ell^2\left(
\sqrt{\mathsf{T}r\left(P_{\lambda_2^{(j)}}\right)} + \sqrt{\left\|P_{\lambda_2^{(j)}}\right\|_{{\rm op}}\log m}\right)^2\\
&\le 2\sigma_{\ell}^2 \left(
\mathsf{T}r\left(P_{\lambda_2^{(j)}}\right) + \left\|P_{\lambda_2^{(j)}}\right\|_{{\rm op}}\log m\right)\\
& \asymp n Rem_{1,j},
\end{align*}
uniformly over $1\le \ell \le m$ and $1\le j\le m$. Here $\sigma_j^2$ is defined in (\ref{def_V_eps}) and in the last step we used
\bm{e}gin{equation}\label{eq_sigmas}
\sigma_j^2 \asymp 1,\qquad \forall 1\le j\le m
\end{equation}
under Assumption \ref{ass_B_Sigma}. The above display implies, with the same probability,
\bm{e}q\label{rate_eps_P_eps_j}
\left\|
\bm{e}psilon^T P_{\lambda_2^{(j)}}\bm{e}psilon_j
\right\|_2^2 \lesssim n^2 m [Rem_{1,j}]^2.
\end{aligned}\end{equation}
By similar lines of arguments in the proof of Lemma 14 in \cite{bing2020adaptive}, one can show that, with probability $1-m^{-1}$,
\bm{e}q\label{rate_eps_QX_delta}
\left\|
\bm{e}psilon^T Q_{\lambda_2^{(j)}}\bm{X}\bm{d}elta_j
\right\|_2^2 \lesssim n Rem_{2,j}(\bm{d}elta_j) \log(m) \sum_{\ell=1}^m \sigma_\ell^2
\end{aligned}\end{equation}
holds uniformly over $1\le j\le m$. Finally,
\[
\left\|
\bm{e}psilon^T Q_{\lambda_2^{(j)}}\bm{X}(\widehat\bm{t}heta^{(j)} - \bm{t}heta_j)
\right\|_2 \le \max_{1\le i\le p}\left\|
\bm{e}psilon^T Q_{\lambda_2^{(j)}}\bm{X}_i\right\|_2 \left\|\widehat\bm{t}heta^{(j)} - \bm{t}heta_j\right\|_1.
\]
By arguments of Lemma 15 in \cite{bing2020adaptive}, with probability at least $1-(pm)^{-1}$
\[
\max_{1\le i\le p}\left\|
\bm{e}psilon^T Q_{\lambda_2^{(j)}}\bm{X}_i\right\|_2^2 \lesssim \left(
\sqrt{\mathsf{T}r(\Gamma)} + \sqrt{4\log(pm) \|\Gamma\|_{{\rm op}}}
\right)^2 n \max_{1\le i\le p} M^{(j)}_{ii}
\]
uniformly over $1\le j\le m$, with $\Gamma := \gamma_w^2\bm{B}^T\Sigma_W\bm{B} + \gamma_e^2\Sigma_E$ and $M^{(j)} = n^{-1}\bm{X}^TQ_{\lambda_2^{(j)}}^2\bm{X}$. Furthermore, the proof of Lemma 9 in \cite{bing2020adaptive} ensures that, with probability $1- (p\wedge m)^{-1}$,
\[
\|\widehat\bm{t}heta^{(j)} - \bm{t}heta_j\|_1 \lesssim {Rem_{3,j}(\bm{t}heta_j) + Rem_{2,j}(\bm{d}elta_j) \over \lambda_{1,j}}
\]
uniformly over $1\le j\le m$. By (\ref{rate_lbd1}), we conclude
\bm{e}gin{align}\label{rate_eps_QX_beta_diff}\nonumber
\left\|
\bm{e}psilon^T Q_{\lambda_2^{(j)}}\bm{X}(\widehat\bm{t}heta^{(j)} - \bm{t}heta_j)
\right\|_2 &\lesssim \sqrt{n}\left[Rem_{3,j}(\bm{t}heta_j) + Rem_{2,j}(\bm{d}elta_j)\right]{\sqrt{\mathsf{T}r(\Gamma)} + \sqrt{\|\Gamma\|_{{\rm op}}\log(pm)} \over \sigma_j\sqrt{\log (p\vee m)}}\\
&\lesssim \sqrt{nm}\left[Rem_{3,j}(\bm{t}heta_j) + Rem_{2,j}(\bm{d}elta_j)\right]
\end{align}
where the last line follows from $\mathsf{T}r(\Gamma) \lesssim m$ and $\sigma_j^2 \asymp 1$ under Assumption \ref{ass_B_Sigma}. Collecting (\ref{rate_eps_P_eps_j}), (\ref{rate_eps_QX_delta}) and (\ref{rate_eps_QX_beta_diff}) concludes
\bm{e}gin{align}\label{bd_Delta_eps}
{1\over \sqrt{nm}} \|\bm{D}elta_j^T\bm{e}psilon\|_2
&\lesssim
\sqrt n Rem_{1,j} + \sqrt{\log (m) Rem_{2,j}(\bm{d}elta_j)}+ Rem_{3,j}(\bm{t}heta_j) + Rem_{2,j}(\bm{d}elta_j).
\end{align}
We proceed to use the same arguments to bound from above
\[
\|\bm{D}elta^T\bm{e}psilon_j\|_2^2 = \sum_{\ell=1}^m |\bm{D}elta_\ell^T\bm{e}psilon_j|^2.
\]
Since
$$
|\bm{D}elta_\ell^T\bm{e}psilon_j| \le \left|
\bm{e}psilon_\ell^T P_{\lambda_2^{(\ell)}}\bm{e}psilon_j
\right| + \left|\bm{d}elta_j^T\bm{X}^T Q_{\lambda_2^{(\ell)}}\bm{e}psilon_j
\right| + \left|
\bm{e}psilon_j^T Q_{\lambda_2^{(\ell)}}\bm{X}(\widehat\bm{t}heta^{(\ell)} - \bm{t}heta_j)
\right|,
$$
it is straightforward to establish that
\bm{e}gin{align}\label{bd_eps_Delta}\nonumber
{1\over nm}\|\bm{D}elta^T\bm{e}psilon_j\|_2^2
&\lesssim {1\over m}\sum_{\ell=1}^m \left\{n[Rem_{1,\ell}]^2 + Rem_{2,\ell}(\bm{d}elta_j)\log(m) + \left[Rem_{3,\ell}(\bm{t}heta_j) + Rem_{2,\ell}(\bm{d}elta_j)\right]^2
\right\}\\
&\lesssim n r_{n,1}^2 + r_{n,2} \log m + (r_{n,2} + r_{n,3})^2
\end{align}
with probability at least $1-m^{-1}$. By collecting (\ref{bd_Delta_Delta}), (\ref{bd_Delta_eps}), (\ref{bd_eps_Delta}) and using $r_{n,2} \le r_n = o_{\mathbb{P}}(1)$ under Assumption \ref{ass_initial} to simplify the results, the proof is complete.
\end{proof}
\subsection{Lemmas used in the proof of Lemma \ref{thm_Theta_simple_rates} and Theorem \ref{thm_asymp_normal}}\label{app_lemmas_Theta}
The following two lemmas establish useful bounds on quantities related with $\bm{H}_0$ and $\widehat\bm{B}$ that are used frequently in our proof. Recall that $r_{n}$ is defined in Assumption \ref{ass_initial} and $\eta_n$ is defined in (\ref{def_eta_bar}).
\begin{lemma}\label{lemma_technical}
Under Assumptions \ref{ass_error}, \ref{ass_B_Sigma} and \ref{ass_initial}, assume $M_n = o(m)$ and $\log m = o(n)$. The following holds with probability tending to one.
\begin{enumerate}[label=(\Alph*)]
\item $c_H \lesssim \lambda_K(\bm{H}_0) \le \lambda_1(\bm{H}_0) \lesssim C_H$;
\item $\max_{1\le j\le m}\|\widehat\bm{B}_j\|_2 \lesssim C_H\sqrt{C_B}$;
\item $\lambda_K(\widehat\bm{B}) \gtrsim c_H\sqrt{c_B} \sqrt{m}$;
\item $\max_{1\le j\le m} \norm{(\widetilde\bm{B}-\widehat\bm{B})\widehat P_B^{\perp}\bm{e}_j}_2 \lesssim \eta_n (C_H/c_H)\sqrt{C_B/c_B}$;
\item $\max_{1\le j\le m}\|\widehat P_B \bm{e}_j\|_2\lesssim m^{-1/2}(C_H/c_H)\sqrt{C_B/c_B}$;
\item $\|\bm{\Theta} \widehat P_B \bm{e}_j\|_1 \lesssim m^{-1}\|\bm{\Theta}\|_{1,1} (C_H^2C_B)/(c_Hc_B)$.
\end{enumerate}
Here $c_H = c_W\sqrt{c_B/(C_W C_B)}$ and $C_H = C_W\sqrt{C_B/(c_Wc_B)}$ with $c_B, C_B, c_W, C_W$ defined in Assumption \ref{ass_B_Sigma}.
\end{lemma}
\begin{proof}
Notice that $\eta_{n} = o(1)$ is implied by $r_{n}=o(1)$ and $\log m = o(n)$.
We work on the event
\bm{e}q\label{def_event_B}
\mathcal{E}_B := \left\{
\max_{1\le j\le m}\|\widehat\bm{B}_j - \widetilde\bm{B}_j\|_2 \lesssim \eta_{n}
\right\}
\end{aligned}\end{equation}
intersecting with $\mathcal{E}_D$ defined in (\ref{def_event_D}) and
\[
\mathcal{E}_W:= \left\{
c_W \lesssim \lambda_K\left({1\over n}\bm{W}^T\bm{W}\right) \le \lambda_1\left({1\over n}\bm{W}^T\bm{W}\right) \lesssim C_W
\right\}.
\]
From Theorem \ref{thm_rates_B}, Lemma \ref{lem_D_K} and Lemma \ref{lem_W_eigens}, $\lim_{n\mathsf{T}o\inftynfty}\mathbb{P}(\mathcal{E}_B\cap \mathcal{E}_D\cap \mathcal{E}_W)= 1$.
To show (A), recall from (\ref{def_est_BW}) and (\ref{def_H0}) that
\[
\bm{H}_0^T = {1\over nm}\bm{W}^T\bm{W} \bm{B} \widehat\bm{B}^T \bm{D}_K^{-2} = {1\over n\sqrt{m}}\bm{W}^T\bm{W} \bm{B}\bm{V}_K \bm{D}_K^{-1}.
\]
It implies
\[
\bm{H}_0^T\bm{H}_0 = {1\over n}\bm{W}^T\bm{W} \left({1\over m}\bm{B}\bm{V}_K \bm{D}_K^{-2}\bm{V}_K^{T}\bm{B}^T\right) {1\over n}\bm{W}^T\bm{W}.
\]
By invoking $\mathcal{E}_W$, $\mathcal{E}_D$ and Assumption \ref{ass_B_Sigma}, we then have
\[
\lambda_K(\bm{H}_0^T\bm{H}_0) \gtrsim c_W^2 c_B / (C_WC_B).
\]
Similarly,
\[
\lambda_1(\bm{H}_0^T\bm{H}_0) \lesssim C_W^2 C_B / (c_Wc_B).
\]
This proves (A).
Part (B) then follows immediately by
\begin{align*}
\|\widehat\bm{B}_j\|_2 &\le \|\widetilde \bm{B}_j\|_2 + \|\widehat\bm{B}_j-\widetilde\bm{B}_j\|_2\\
&\le \lambda_1(\bm{H}_0) \|\bm{B}_j\|_2 + \eta_n\\
&\lesssim C_H\sqrt{C_B}
\end{align*}
where we used Assumption \ref{ass_B_Sigma} in the penultimate step and $\eta_n = o(1)$ in the last step. Similarly, using Weyl's inequality again yields
\[
\lambda_K(\widehat \bm{B}) \ge \lambda_K(\widetilde\bm{B}) - \|\widehat\bm{B}-\widetilde\bm{B}\|_{{\rm op}} \gtrsim {c_W\sqrt{c_B}\over \sqrt{C_WC_B}}\lambda_K(\bm{B}) -\eta_n\sqrt{m}\gtrsim \sqrt{m}
\]
where the second inequality uses $\widetilde\bm{B}^T = \bm{H}_0 \bm{B}^T$, part (A) and
\begin{equation}\label{bd_B_diff_op}
\|\widehat\bm{B}- \widetilde\bm{B}\|_{{\rm op}}^2 \le \|\widehat\bm{B}- \widetilde\bm{B}\|_{F}^2 \le m\eta_n^2
\end{equation}
on the event $\mathcal{E}_B$.
This proves part (C). Part (D) is proved by observing that
\begin{align*}
\left\|(\widetilde\bm{B}-\widehat\bm{B})\widehat P_B^{\perp}\bm{e}_j\right\|_2 &\le \left\|\widetilde\bm{B}_j-\widehat\bm{B}_j\right\|_2 + \left\|(\widetilde\bm{B}-\widehat\bm{B})\widehat P_B\bm{e}_j\right\|_2
\end{align*}
and
\begin{equation}\begin{aligned}\nonumber
\left\|(\widetilde\bm{B}-\widehat\bm{B})\widehat P_B\bm{e}_j\right\|_2 = & ~ \norm{(\widetilde{\bm{B}} - \widehat{\bm{B}})\widehat{\bm{B}}^T(\widehat{\bm{B}}\widehat{\bm{B}}^T)^{-1}\widehat{\bm{B}}\bm{e}_j}_2\\
\leq& ~ \norm{\widetilde{\bm{B}} - \widehat{\bm{B}}}_{{\rm op}}\norm{\widehat{\bm{B}}^T(\widehat{\bm{B}}\widehat{\bm{B}}^T)^{-1}\widehat{\bm{B}}\bm{e}_j}_2\\
\leq& ~\eta_n\sqrt{m} ~ [\lambda_K(\widehat\bm{B})]^{-1}\|\widehat\bm{B}_j\|_2
\end{aligned}\end{equation}
together with results in (B) and (C). Similarly,
\[
\|\widehat P_{B}\bm{e}_j\|_2 = \norm{\widehat{\bm{B}}^T(\widehat{\bm{B}}\widehat{\bm{B}}^T)^{-1}\widehat{\bm{B}}\bm{e}_j}_2 \le [\lambda_K(\widehat\bm{B})]^{-1}\|\widehat\bm{B}_j\|_2 \lesssim m^{-1/2}(C_H/c_H)\sqrt{C_B/c_B}.
\]
Finally,
\[
\|\bm{\Theta} \widehat P_B \bm{e}_j\|_1 \le \|\bm{\Theta}\|_{1,1}\max_{1\le \ell\le m}\left|
\bm{e}_\ell^T \widehat\bm{B}^T (\widehat\bm{B}\widehat\bm{B}^T)^{-1}\widehat\bm{B}\bm{e}_j\right| \le \|\bm{\Theta}\|_{1,1}{\|\widehat\bm{B}\|_{\infty,2}^2 \over \lambda_K(\widehat\bm{B}\widehat\bm{B}^T)}.
\]
Invoke (B) and (C) to complete the proof.
\end{proof}
\begin{lemma}\label{lemma_PB_error}
Under conditions of Lemma \ref{lemma_technical}, one has
\begin{equation}\begin{aligned}\nonumber
\max_{1\le j\le m}\norm{(P_B - \widehat P_B)\bm{e}_j}_2 = \mathcal{O}_{\mathbb{P}}\left(\eta_n\over \sqrt{m}\right),\quad \max_{1\le j\le m}\norm{(P_B - \widehat P_B)\bm{e}_j}_\infty = \mathcal{O}_{\mathbb{P}}\left(\eta_n\over m\right).
\end{aligned}\end{equation}
\end{lemma}
\begin{proof}
We prove the results by using Lemma \ref{lemma_technical}.
We first bound the $\ell_2$ norm of $(P_B - \widehat P_B)\bm{e}_j$ and only sketch the bound in $\ell_\infty$ norm, as the proof is very similar. Recall that $\widetilde \bm{B} = \bm{H}_0\bm{B}$.
By the triangle inequality,
\begin{equation}\begin{aligned}
\norm{(P_B - \widehat P_B)\bm{e}_j}_2
=&\norm{(\widetilde\bm{B}^T(\widetilde\bm{B}\widetilde\bm{B}^T)^{-1}\widetilde\bm{B} - \widehat\bm{B}^T(\widehat\bm{B}\widehat\bm{B}^T)^{-1}\widehat\bm{B})\bm{e}_j}_2\\
\leq&
\norm{(\widetilde\bm{B} - \widehat \bm{B})^T(\widetilde\bm{B}\widetilde\bm{B}^T)^{-1}\widetilde\bm{B}\bm{e}_j}_2 + \norm{\widehat\bm{B}^T[(\widetilde\bm{B}\widetilde\bm{B}^T)^{-1} - (\widehat\bm{B}\widehat\bm{B}^T)^{-1}]\widetilde\bm{B}\bm{e}_j}_2\\
&\quad +
\norm{\widehat\bm{B}^T(\widehat\bm{B}\widehat\bm{B}^T)^{-1}(\widetilde\bm{B} - \widehat \bm{B})\bm{e}_j}_2\\
:=&I_1 + I_2 + I_3.
\end{aligned}\end{equation}
Now we bound each term. For $I_1$
\begin{equation}\begin{aligned}
\norm{(\widetilde\bm{B} - \widehat \bm{B})^T(\widetilde\bm{B}\widetilde\bm{B}^T)^{-1}\widetilde\bm{B}\bm{e}_j}_2 \leq&\norm{\widetilde\bm{B} - \widehat \bm{B}}_{{\rm op}}\norm{(\widetilde\bm{B}\widetilde\bm{B}^T)^{-1}}_{{\rm op}}\norm{\widetilde\bm{B}\bm{e}_j}_2\\
\lesssim &\norm{\widetilde\bm{B} - \widehat \bm{B}}_{{\rm op}}\norm{(\bm{B}\bm{B}^T)^{-1}}_{{\rm op}}\norm{\bm{B}_j}_2\\
=& \mathcal{O}_{\mathbb{P}}\left(\frac{\eta_n}{\sqrt{m}}\right),
\end{aligned}\end{equation}
where the last two steps follow from Lemma \ref{lemma_technical}. Similarly we can show that $I_3 = \mathcal{O}_{\mathbb{P}}(\eta_n/\sqrt{m})$. It remains to bound $I_2$. Direct calculation gives
\begin{equation}\begin{aligned}
&\norm{\widehat\bm{B}^T[(\widetilde\bm{B}\widetilde\bm{B}^T)^{-1} - (\widehat\bm{B}\widehat\bm{B}^T)^{-1}]\widetilde\bm{B}\bm{e}_j}_2\\
&= ~\norm{\widehat\bm{B}^T(\widehat\bm{B}\widehat\bm{B}^T)^{-1}[\widetilde\bm{B}\widetilde\bm{B}^T - \widehat\bm{B}\widehat\bm{B}^T](\widetilde\bm{B}\widetilde\bm{B}^T)^{-1}\widetilde\bm{B}\bm{e}_j}_2\\
&\leq ~ [\lambda_K(\widehat\bm{B})]^{-1}\left[
\norm{(\widetilde\bm{B} -\widehat\bm{B})^T P_B\bm{e}_j}_2 + \norm{\widehat\bm{B}(\widetilde\bm{B} -\widehat\bm{B})^T(\widetilde\bm{B}\widetilde\bm{B}^T)^{-1}\widetilde\bm{B}\bm{e}_j}_2
\right]\\
&\leq ~ [\lambda_K(\widehat\bm{B})]^{-1}\left[\norm{\widetilde\bm{B} -\widehat\bm{B}}_{{\rm op}}\norm{P_B\bm{e}_j}_2
+ \norm{\widehat \bm{B}}_{{\rm op}} I_1
\right]\\
&= ~ \mathcal{O}_{\mathbb{P}}\left(\frac{\eta_n}{\sqrt{m}}\right),
\end{aligned}\end{equation}
where the last step follows from Lemma \ref{lemma_technical} together with the bound for $I_1$. The proof for the $\ell_2$ bound is completed by combining the above results.
To show the result in $\ell_\infty$ norm, notice that we can similarly upper bound it by three terms $I_1'$ -- $I_3'$ in $\ell_\infty$ norm instead of $\ell_2$ norm by substituting $\max_{j}\norm{\widetilde\bm{B}_j - \widehat \bm{B}_j}_{2}$ for $\norm{\widetilde \bm{B} - \widehat \bm{B}}_{{\rm op}}$. For instance, $I_1' \leq \max_{j}\norm{\widetilde\bm{B}_j - \widehat \bm{B}_j}_{2}\norm{(\widetilde\bm{B}\widetilde\bm{B}^T)^{-1}}_{{\rm op}}\norm{\widetilde\bm{B}\bm{e}_j}_2
= \mathcal{O}_{\mathbb{P}}(\eta_n/m)$. The other two terms follow similarly. This completes the proof.
\end{proof}
The following lemma proves that $\mathcal{E}_{\bm{X}}$ defined in (\ref{def_event_X}) holds with probability tending to one under conditions of Theorem \ref{thm_Theta_simple_rates}.
\begin{lemma}\label{lem_X}
Under Assumption \ref{ass_X}, assume $s_n \le C n/\log p$ for some large constant $C>0$ and $\|\text{Cov}(Z)\|_{{\rm op}} = \mathcal{O}(1)$. Then
\[
\lim_{n\to\infty} \mathbb{P}(\mathcal{E}_{\bm{X}}) = 1.
\]
\end{lemma}
\begin{proof}
When the rows of $\bm{X}\Sigma^{-1/2}$ are i.i.d. sub-Gaussian random vectors with bounded sub-Gaussian constant, provided that $\lambda_{\min}(\Sigma) \ge c_0$ for some constant $c_0>0$ and $s_n\log p \le Cn$ for some large constant $C>0$, \cite{rz13} shows that $\kappa(s_n,4) \ge c$ holds with probability $1-2n^{-c'n}$.
\cite{rz13} also shows that
\begin{equation}\label{bd_op_XsXs}
\sup_{S\subseteq[p]: |S|\le s_n}{1\over n}\lambda_1(\bm{X}_S^T\bm{X}_S) = \mathcal{O}_{\mathbb{P}}(1)
\end{equation}
provided that $\sup_{S\subseteq[p]: |S|\le s_n}\Sigma_{SS} =\mathcal{O}(1)$.
By applying Lemma \ref{lem_bernstein} with a union bound over $1\le j\le m$ and invoking $\max_{1\le j\le m}\Sigma_{jj} \le C$ from Assumption \ref{ass_X}, we have
$$
\max_{1\le j\le m}\widehat\Sigma_{jj} \le \max_{1\le j\le m}\left(\Sigma_{jj} + |\widehat\Sigma_{jj}-\Sigma_{jj}|\right) \le C'
$$
with probability $1 - 2(p\vee n)^{-1}$. For $\|\bm{X}\bm{\Theta}\|_{2,1}$, since $\bm{\Theta}_{S^c\cdot} = \bm{0}$, we have
\begin{align*}
{1\over \sqrt n}\|\bm{X}\bm{\Theta}\|_{2,1} = {1\over \sqrt n}\|\bm{X}_S\bm{\Theta}_{S\cdot}\|_{2,1}& \le {1\over \sqrt n}\|\bm{X}_S\|_{{\rm op}} \|\bm{\Theta}_{S\cdot}\|_{2,1}\\ &\overset{(\ref{bd_op_XsXs})}{=} \mathcal{O}_{\mathbb{P}}\left(\sqrt{s_n}\|\bm{\Theta}\|_{\infty,1}\right)\overset{(\ref{def_space_Theta})}{=}
\mathcal{O}_{\mathbb{P}}\left(M_n\sqrt{s_n}\right).
\end{align*}
Finally,
\begin{equation}\label{bd_XA_op}
{1\over \sqrt n}\|\bm{X}\bm{A}\|_{{\rm op}} = \mathcal{O}_{\mathbb{P}}(1)
\end{equation}
has been proved in \citet[Lemma 12]{bing2020adaptive}.
\end{proof}
Under Assumption \ref{ass_X}, the following lemma characterizes the estimation error of $\widehat\bm{\omega}_1$ defined in (\ref{def_est_omega}) using (\ref{formula_nodewise}), as well as the order of $\widehat\tau_1^2$ in (\ref{def_tau_1}). It is proved in \cite{vandegeer2014}. Recall that $s_\Omega = \norm{\bm{\Omega}_1}_0$.
\begin{lemma}\label{lemma_nodewise}
Under Assumption \ref{ass_X}, assume $s_\Omega \log p = o(n)$. By choosing $\widetilde \lambda \asymp \sqrt{\log p/n}$ in (\ref{formula_nodewise}), we have $1/\widehat\tau_1^2 = \mathcal{O}_{\mathbb{P}}(1)$,
\[
|\widehat\bm{\omega}_1^T\widehat\Sigma\widehat\bm{\omega}_1 - \Omega_{11}| = \mathcal{O}_{\mathbb{P}}\left(\sqrt{\frac{s_{\Omega}\log p}{n}}\right),\qquad
\norm{\bm{e}_1 - \widehat\Sigma \widehat\bm{\omega}_1}_{\infty} = \mathcal{O}_{\mathbb{P}}\left(\sqrt{\frac{\log p}{n}}\right).
\]
\end{lemma}
The following lemma provides an upper bound for $\|(\bm{e}_1-\widehat\Sigma\widehat\bm{\omega}_1)^T\bm{A}\|_2$.
\begin{lemma}\label{lemma_nodewise_A}
Under conditions of Lemma \ref{lemma_nodewise} and $\|\text{Cov}(Z)\|_{{\rm op}}=\mathcal{O}(1)$, one has
\[
\|(\bm{e}_1 - \widehat\Sigma\widehat\bm{\omega}_1)^T\bm{A}\|_2 = \mathcal{O}_{\mathbb{P}}\left(
\sqrt{s_\Omega\log p\over n}
\right)
\]
\end{lemma}
\begin{proof}
Use $\bm{e}_1 = \Sigma\bm{\omega}_1$ to obtain
\begin{equation}\label{eqn_decomp}
(\bm{e}_1 - \widehat\Sigma\widehat\bm{\omega}_1)^T\bm{A} = \bm{\omega}_1^T(\Sigma - \widehat\Sigma)\bm{A} + (\bm{\omega}_1-\widehat\bm{\omega}_1)^T\widehat\Sigma\bm{A}.
\end{equation}
For the first term, plugging $\bm{A} = \Sigma^{-1}\text{Cov}(X,Z)$ into the expression yields
\begin{align*}
\|\bm{\omega}_1^T(\Sigma - \widehat\Sigma)\bm{A}\|_2^2 = \sum_{k=1}^K\left(
\bm{\omega}_1^T\Sigma^{1/2}\left({\bm I}_p - {1\over n} \bar{\bm{X}}^T\bar{\bm{X}}\right)
\Sigma^{-1/2}\text{Cov}(X,Z)\bm{e}_k\right)^2
\end{align*}
where $\bar{\bm{X}} = \bm{X}\Sigma^{-1/2}$. Notice that
\[
\bm{\omega}_1^T\Sigma^{1/2}\left({\bm I}_p - {1\over n} \bar{\bm{X}}^T\bar{\bm{X}}\right)
\Sigma^{-1/2}\text{Cov}(X,Z)\bm{e}_k = {1\over n}\sum_{i=1}^n\left(\mathbb{E}[U_i^TV_i] - U_iV_i
\right)
\]
where $U_i = \bar{\bm{X}}_{i\cdot}^T\Sigma^{1/2}\bm{\omega}_1$ is $\sqrt{\Omega_{11}}$ sub-Gaussian and $V_i = \bar{\bm{X}}_{i\cdot}^T\Sigma^{-1/2}\text{Cov}(X,Z)\bm{e}_k$ is $$
\sqrt{\bm{e}_k^T \text{Cov}(Z,X)\Sigma^{-1}\text{Cov}(X,Z)\bm{e}_k} \le \sqrt{\text{Cov}(Z_k)}
$$ sub-Gaussian.
An application of Lemma \ref{lem_bernstein} with a union bound over $1\le k\le K$ gives
\[
\left|\bm{\omega}_1^T\Sigma^{1/2}\left({\bm I}_p - {1\over n} \bar{\bm{X}}^T\bar{\bm{X}}\right)
\Sigma^{-1/2}\text{Cov}(X,Z)\bm{e}_k\right| = \mathcal{O}\left(
\sqrt{\Omega_{11}\text{Cov}(Z_k) \over n}
\right)
\]
uniformly over $1\le k\le K$,
with probability $1-O(n^{-1})$. Using (\ref{bd_Omega_11}) and $\|\text{Cov}(Z)\|_{{\rm op}}=\mathcal{O}(1)$ further yields
\begin{equation}\begin{aligned}\label{bd_Sigma_diff}
\|\bm{\omega}_1^T(\Sigma - \widehat\Sigma)\bm{A}\|_2 = \mathcal{O}_{\mathbb{P}}\left(
1/\sqrt{n}
\right).
\end{aligned}\end{equation}
Regarding the second term in (\ref{eqn_decomp}), one has
$$
\|(\bm{\omega}_1-\widehat\bm{\omega}_1)^T\widehat\Sigma\bm{A}\|_2 \le {1\over \sqrt{n}}\|\bm{X}\bm{A}\|_{{\rm op}}{1\over \sqrt n}\|\bm{X}(\widehat\bm{\omega}_1 - \bm{\omega}_1)\|_2 \overset{(\ref{bd_XA_op})}{=} \mathcal{O}_{\mathbb{P}}(1) \cdot {1\over \sqrt n}\|\bm{X}(\widehat\bm{\omega}_1 - \bm{\omega}_1)\|_2.
$$
Recall from (\ref{def_est_omega}) that
\begin{equation}\begin{aligned}\label{bd_omega_diff}
\widehat\bm{\omega}_1^T = \widehat\tau_1^{-2} \begin{bmatrix}
1 & -\widehat\bm{\gamma}_1^T
\end{bmatrix}.
\end{aligned}\end{equation}
Following \cite{vandegeer2014}, we define $\bm{\gamma}_1 = \argmin_{\bm{\gamma}\in\mathbb{R}^{p-1}}\mathbb{E}[\|\bm{X}_1-\bm{X}_{-1}\bm{\gamma}\|_2^2]$ and $\tau_1^2 = \mathbb{E}[\|\bm{X}_1-\bm{X}_{-1}\bm{\gamma}_1\|_2^2]/n = \Omega_{11}^{-1}$ such that
$$
\bm{\omega}_1^T = \tau_1^{-2} \begin{bmatrix}
1 & -\bm{\gamma}_1^T
\end{bmatrix}.
$$
The triangle inequality yields
\begin{align*}
{1\over \sqrt n}\|\bm{X}(\widehat\bm{\omega}_1 - \bm{\omega}_1)\|_2 &\le {1\over \sqrt n}{\|\bm{X}_{-1}(\widehat\bm{\gamma}_1- \bm{\gamma}_1)\|_2 \over \widehat\tau_1^2} + {1\over \sqrt n}\|\bm{X}_1 - \bm{X}_{-1}\bm{\gamma}_1\|_2\left|{1\over \widehat\tau_1^2}- {1\over \tau_1^2}\right|.
\end{align*}
Using the results in \cite{vandegeer2014} yields
\[
{1\over \sqrt n}\|\bm{X}_{-1}(\widehat\bm{\gamma}_1- \bm{\gamma}_1)\|_2 = \mathcal{O}_{\mathbb{P}}\left(\sqrt{s_\Omega\log p\over n}\right),\quad \left|{1\over \widehat\tau_1^2}- {1\over \tau_1^2}\right| = \mathcal{O}_{\mathbb{P}}\left(\sqrt{s_\Omega\log p\over n}\right).
\]
Together with
\[
{1\over \sqrt n}\|\bm{X}_1 - \bm{X}_{-1}\bm{\gamma}_1\|_2 = \tau_1^2 {1\over \sqrt n}\|\bm{X}\bm{\omega}_1\|_2 = \mathcal{O}_{\mathbb{P}}\left(\tau_1^2\sqrt{\bm{\omega}_1^T \Sigma \bm{\omega}_1}\right) = \mathcal{O}_{\mathbb{P}}(1)
\]
from (\ref{bd_Omega_11}), we conclude
\[
{1\over \sqrt n}\|\bm{X}(\widehat\bm{\omega}_1 - \bm{\omega}_1)\|_2 = \mathcal{O}_{\mathbb{P}}\left(\sqrt{s_\Omega\log p\over n}\right).
\]
The proof is completed by combining the above display with (\ref{bd_Sigma_diff}) and (\ref{bd_omega_diff}).
\end{proof}
\subsection{Lemmas used in the proof of Theorem \ref{thm_B_asn}}
Recall that $\bm{H}_2 = \bm{B}\widehat\bm{B}^T(\widehat\bm{B}\widehat\bm{B}^T)^{-1}$ and $\bm{Q}$ is defined in (\ref{def_Q}).
The following lemma shows that $\bm{H}_2$ converges to $\bm{Q}^{-1}$ in probability.
\begin{lemma}\label{lem_H2}
Under conditions of Theorem \ref{thm_B_asn}, $\bm{H}_2$ converges to $\bm{Q}^{-1}$ in probability.
\end{lemma}
\begin{proof}
We prove the result by the same reasoning as \citet[Lemmas 1 \& 3]{bai2020simpler}. We first prove
\begin{equation}\begin{aligned}\label{WhatWconverge}
\bm{H}_1 = {1\over n}\widehat\bm{W}^T\bm{W} \to \bm{Q},\quad \textrm{in probability,}
\end{aligned}\end{equation}
and then show $\bm{H}_2 = \bm{H}_1^{-1} + o_\mathbb{P}(1)$.
Following the argument in \citet[Lemma 1]{bai2020simpler} and by expanding $\widehat\bm{\epsilon} = \bm{W}\bm{B}+\bm{E}+\bm{\Delta}$ with $\bm{\Delta} = \widehat\bm{\epsilon} - \bm{\epsilon}$, we arrive at
\begin{align*}
&{1\over n} \bm{W}^T\widehat\bm{W} \bm{D}_K^2\\
& = {\bm{W}^T\bm{W} \over n} {\bm{B}\bm{B}^T \over m}{\bm{W}^T\widehat \bm{W} \over n} + {\bm{W}^T\bm{E}\bm{E}^T\widehat\bm{W} \over n^2m} + {\bm{W}^T\bm{E}\bm{B}^T\over nm}{\bm{W}^T\widehat\bm{W}\over n} + {\bm{W}^T\bm{W}\over n}{\bm{B}\bm{E}^T\widehat\bm{W}\over nm}\\
&\quad +{1\over n^2m}\left(
\bm{W}^T\bm{\Delta} \bm{\epsilon}^T \widehat\bm{W} + \bm{W}^T\bm{\Delta} \bm{\Delta}^T \widehat\bm{W} + \bm{W}^T\bm{\epsilon} \bm{\Delta}^T \widehat\bm{W}
\right).
\end{align*}
With $\widetilde\bm{W} = \bm{W}\bm{H}_0^{-1}$, notice
\[
{\bm{B}\bm{E}^T\widehat\bm{W}\over nm} = {\bm{B}\bm{E}^T\widetilde \bm{W}\over nm} + {\bm{B}\bm{E}^T(\widehat\bm{W}-\widetilde\bm{W})\over nm}
\]
and
\[
{\bm{W}^T\bm{E}\bm{E}^T\widehat\bm{W} \over n^2m} = {\bm{W}^T\bm{E}\bm{E}^T\widetilde \bm{W} \over n^2m} + {\bm{W}^T\bm{E}\bm{E}^T(\widehat\bm{W}-\widetilde\bm{W}) \over n^2m}.
\]
By arguments in \citet[Lemma 1]{bai2020simpler} and Lemma \ref{lem_W_frob}, one has
\[
{\bm{W}^T\bm{E}\bm{E}^T\widehat\bm{W} \over n^2m} + {\bm{W}^T\bm{E}\bm{B}^T\over nm}{\bm{W}^T\widehat\bm{W}\over n} + {\bm{W}^T\bm{W}\over n}{\bm{B}\bm{E}^T\widehat\bm{W}\over nm} = o_\mathbb{P}(1).
\]
Furthermore, by Lemma \ref{lem_quad_terms_Delta} and Lemma \ref{lem_W_eigens},
\[
{1\over n^2m}
\|\bm{W}^T\bm{\Delta} \bm{\epsilon}^T \widehat\bm{W}\|_F \le {1\over \sqrt n}\|\bm{W}\|_{{\rm op}} {1\over nm}\|\bm{\Delta}\bm{\epsilon}^T\|_F =\mathcal{O}_\mathbb{P}\left({1\over n\sqrt m}\max_{j\in[m]}\|\bm{\Delta}^T\bm{\epsilon}_j\|_2\right) = o_\mathbb{P}(1).
\]
Using similar arguments yields
\[
{1\over n^2m}\left(
\bm{W}^T\bm{\Delta} \bm{\epsilon}^T \widehat\bm{W} + \bm{W}^T\bm{\Delta} \bm{\Delta}^T \widehat\bm{W} + \bm{W}^T\bm{\epsilon} \bm{\Delta}^T \widehat\bm{W}
\right) = o_\mathbb{P}(1),
\]
and, therefore,
\[
{\bm{W}^T\widehat\bm{W} \over n} \bm{D}_K^2 = {\bm{W}^T\bm{W} \over n} {\bm{B}\bm{B}^T \over m}{\bm{W}^T\widehat \bm{W} \over n} + o_\mathbb{P}(1).
\]
Finally, recalling $\Lambda_0$ from (\ref{def_Q}), note that $\bm{D}_K^2\to \Lambda_0$ in probability. To see this, since \[
\lambda_j(\Lambda_0) = {1\over m}\lambda_j(\bm{B}\Sigma_E \bm{B}^T),
\]
for any $1\le j\le K$,
Weyl's inequality yields
\[
\left|\lambda_j(\bm{D}_K^2) - \lambda_j(\Lambda_0)\right| \le {1\over m}\left\|
{1\over n}\widehat\bm{\epsilon}^T \widehat \bm{\epsilon} - \bm{B} \Sigma_W \bm{B}^T
\right\|_{{\rm op}}.
\]
By the proof of Theorem \ref{thm_rates_B} together with Lemma \ref{lem_W_eigens}, it is easy to derive
\[
\left|\lambda_j(\bm{D}_K^2) - \lambda_j(\Lambda_0)\right| = o_\mathbb{P}(1),\qquad \forall j\in [K],
\]
such that $\bm{D}_K^2 \to \Lambda_0$ in probability. Then
the arguments in \citet[Lemma 1]{bai2020simpler} yield (\ref{WhatWconverge}).
It remains to prove
\[
\bm{H}_2^{-1} = \bm{H}_1 + o_\mathbb{P}(1).
\]
We prove this by using the same arguments as in \citet[Lemma 3]{bai2020simpler}, namely showing that
$\bm{H}_0 = \bm{H}_1 + o_\mathbb{P}(1)$ and $\bm{H}_0 = \bm{H}_2^{-1}+o_\mathbb{P}(1)$, where we recall that
\[
\bm{H}_0^T = {1\over n}\bm{W}^T\bm{W} {1\over m}\bm{B}\widehat\bm{B}^T\bm{D}_K^{-2}.
\]
To prove $\bm{H}_0= \bm{H}_2^{-1}+o_\mathbb{P}(1)$, notice that
\[
\bm{D}_K^{-1}\widehat\bm{B}\left({1\over nm}\widehat\bm{\epsilon}^T \widehat\bm{\epsilon}\right) \widehat\bm{B}^T \bm{D}_K^{-1} = m \bm{D}_K^{2}.
\]
Further expanding the left hand side by $\widehat\bm{\epsilon} = \bm{W}\bm{B}+\bm{E}+\bm{\Delta}$ with $\bm{\Delta} = \widehat\bm{\epsilon} -\bm{\epsilon}$ yields
\begin{align*}
m\bm{D}_K^2 &= \bm{D}_K^{-1}{1\over m}\widehat\bm{B}\bm{B}^T{1\over n}\bm{W}^T\bm{W} \bm{B}\widehat\bm{B}^T \bm{D}_K^{-1} +
2\bm{D}_K^{-1}\left({1\over m}\widehat\bm{B}\bm{B}^T\right)\left({1\over n}\bm{W}^T\bm{E}\widehat\bm{B}^T\right)\bm{D}_K^{-1}\\
&\quad + \bm{D}_K^{-1}\left({1\over nm}\widehat\bm{B}\bm{E}^T\bm{E}\widehat\bm{B}^T\right)\bm{D}_K^{-1} + 2\bm{D}_K^{-1}\left({1\over nm}\widehat\bm{B}\bm{\Delta}^T \bm{\epsilon}\widehat\bm{B}^T\right)\bm{D}_K^{-1}\\
&\quad + \bm{D}_K^{-1}\left({1\over nm}\widehat\bm{B}\bm{\Delta}^T \bm{\Delta}\widehat\bm{B}^T\right)\bm{D}_K^{-1}.
\end{align*}
Since $\bm{H}_2 = \bm{B}\widehat\bm{B}^T(\widehat\bm{B}\widehat\bm{B}^T)^{-1} = \bm{B}\widehat\bm{B}^T / m$, we conclude
\begin{align*}
\bm{H}_0^{-1} &= \bm{H}_2 + 2\bm{H}_0^{-1}\bm{D}_K^{-1}\left({1\over m}\widehat\bm{B}\bm{B}^T\right)\left({1\over nm}\bm{W}^T\bm{E}\widehat\bm{B}^T\right)\bm{D}_K^{-1}\\
&\quad + \bm{H}_0^{-1}\bm{D}_K^{-1}\left({1\over nm^2}\widehat\bm{B}\bm{E}^T\bm{E}\widehat\bm{B}^T\right)\bm{D}_K^{-1} + 2\bm{H}_0^{-1}\bm{D}_K^{-1}\left({1\over nm^2}\widehat\bm{B}\bm{\Delta}^T \bm{\epsilon}\bm{B}^T\right)\bm{D}_K^{-1}\\
&\quad + \bm{H}_0^{-1}\bm{D}_K^{-1}\left({1\over nm^2}\widehat\bm{B}\bm{\Delta}^T \bm{\Delta}\bm{B}^T\right)\bm{D}_K^{-1}.
\end{align*}
To show the last four terms on the right hand side are negligible, by Lemma \ref{lem_D_K} and \ref{lemma_technical}, one has
\begin{align*}
&\left\|\bm{H}_0^{-1}\bm{D}_K^{-1}\left({1\over m}\widehat\bm{B}\bm{B}^T\right)\left({1\over nm}\bm{W}^T\bm{E}\widehat\bm{B}^T\right)\bm{D}_K^{-1}\right\|_F\\
&\lesssim \left\|
{1\over m}\widehat\bm{B}\bm{B}^T
\right\|_{{\rm op}} \left\|{1\over nm}\bm{W}^T\bm{E}\right\|_F \|\widehat\bm{B}\|_{{\rm op}}^2\\
& \lesssim {1\over n\sqrt{m}}\left\|\bm{W}^T\bm{E}\right\|_F
\end{align*}
with probability tending to one. Since
\[
{1\over n\sqrt{m}}\left\|\bm{W}^T\bm{E}\right\|_F \le {\sqrt{K}\over n}\max_{k\in[K],j\in [m]}\|\bm{W}_k^T\bm{E}_j\|_2 = \mathcal{O}_\mathbb{P}\left(\sqrt{\log m \over n}\right) = o_\mathbb{P}(1)
\]
from Lemma \ref{lem_bernstein} with a union bound over $k\in[K]$ and $j\in [m]$ and $\log m = o(n)$, we have
\begin{equation}\begin{aligned}\label{bd_WE_F}
{1\over n\sqrt{m}}\left\|\bm{W}^T\bm{E}\right\|_F = o_\mathbb{P}(1).
\end{aligned}\end{equation}
By similar arguments, we have
\[
\left\|
\bm{H}_0^{-1}\bm{D}_K^{-1}\left({1\over nm^2}\widehat\bm{B}\bm{E}^T\bm{E}\widehat\bm{B}^T\right)\bm{D}_K^{-1}\right\|_{F} = \mathcal{O}_\mathbb{P}\left( {1\over nm}\|\bm{E}\|_{{\rm op}}^2 \right) = \mathcal{O}_\mathbb{P}\left({n + m \over nm}\right) = o_\mathbb{P}(1)
\]
by also using Lemma \ref{lem_op_norm} and ${\rm tr}(\Sigma_E) = \mathcal{O}(m)$. Furthermore, invoke Lemma \ref{lem_quad_terms_Delta} to obtain
\begin{align*}
\|\bm{H}_0^{-1}\bm{D}_K^{-1}\left({1\over nm^2}\widehat\bm{B}\bm{\Delta}^T \bm{\epsilon}\widehat\bm{B}^T\right)\bm{D}_K^{-1}\|_F &\lesssim {1\over nm}\|\bm{\Delta}^T \bm{\epsilon}\|_F \le {1\over n\sqrt{m}}\max_{j\in [m]}\|\bm{\Delta}^T \bm{\epsilon}_j\|_2 = o(1)
\end{align*}
and
\[
\|\bm{H}_0^{-1}\bm{D}_K^{-1}\left({1\over nm^2}\widehat\bm{B}\bm{\Delta}^T \bm{\Delta}\widehat\bm{B}^T\right)\bm{D}_K^{-1}\|_F \lesssim {1\over n\sqrt{m}}\max_{j\in [m]}\|\bm{\Delta}^T \bm{\Delta}_j\|_2 = o(1)
\]
with probability tending to one. Collecting terms concludes
$
\bm{H}_0^{-1} = \bm{H}_2 + o_\mathbb{P}(1),
$
or equivalently,
$\bm{H}_0 = \bm{H}_2^{-1} + o_\mathbb{P}(1).$
We proceed to show $\bm{H}_0 = \bm{H}_1 + o_\mathbb{P}(1)$. From the basic equality
$\widehat\bm{\epsilon} = \bm{W}\bm{B}+\bm{E}+\bm{\Delta}$ and $\widehat\bm{\epsilon} \widehat\bm{B}^T\bm{D}_K^{-2} = m\sqrt{n}\bm{U}_K = m\widehat \bm{W}$, we have
\begin{align*}
{1\over n} \bm{W}^T\widehat\bm{W} & = {1\over nm}\bm{W}^T \widehat\bm{\epsilon}\widehat\bm{B}^T\bm{D}_K^{-2}\\
&= {1\over n}\bm{W}^T\bm{W} {1\over m}\bm{B}\widehat\bm{B}^T\bm{D}_K^{-2} + {1\over nm}\bm{W}^T\bm{E}\widehat\bm{B}^T\bm{D}_K^{-2} + {1\over nm}\bm{W}^T\bm{\Delta} \widehat\bm{B}^T\bm{D}_K^{-2},
\end{align*}
which leads to
\begin{align*}
\bm{H}_1 & = \bm{H}_0 + {1\over nm}\bm{D}_K^{-2}\widehat\bm{B}\bm{E}^T\bm{W} + {1\over nm}\bm{D}_K^{-2}\widehat\bm{B}\bm{\Delta}^T\bm{W}.
\end{align*}
Previous arguments and (\ref{bd_WE_F}) give
\[
{1\over nm}\|\bm{D}_K^{-2}\widehat\bm{B}\bm{E}^T\bm{W}\|_F = \mathcal{O}_\mathbb{P}\left({1\over n\sqrt m}\|\bm{E}^T\bm{W}\|_F\right) = o_\mathbb{P}(1)
\]
and
\[
{1\over nm}\|\bm{D}_K^{-2}\widehat\bm{B}\bm{\Delta}^T\bm{W}\|_F = \mathcal{O}_\mathbb{P}\left(
{1\over n\sqrt m}\|\bm{\Delta}^T\bm{W}\|_F
\right).
\]
Invoke Lemma \ref{lem_W_eigens} and Assumption \ref{ass_initial} to conclude
\[
{1\over n\sqrt m}\|\bm{\Delta}^T\bm{W}\|_F \le {1\over \sqrt n}\|\bm{W}\|_{{\rm op}} {1\over \sqrt{nm}}\|\bm{\Delta}\|_F = o_\mathbb{P}(1).
\]
We have finished the proof of $\bm{H}_1 = \bm{H}_0 + o_\mathbb{P}(1) = \bm{H}_2^{-1} + o_\mathbb{P}(1)$, completing the proof.
\end{proof}
\section{Auxiliary lemmas}\label{sec_proof_aux}
The following lemma is used in our analysis. The tail inequality is for a quadratic form of sub-Gaussian random vectors. It is a slightly simplified version of Lemma 30 in \cite{Hsu2014} and is proved in \cite{bing2020adaptive}.
\begin{lemma}\label{lem_quad}
Let $\xi\in \mathbb{R}^d$ be a $\gamma_\xi$ sub-Gaussian random vector. For all symmetric positive semi-definite matrices $H$, and all $t\ge 0$,
\[
\mathbb{P}\left\{
\xi^T H \xi > \gamma_\xi^2\left(
\sqrt{{\rm tr}(H)}+ \sqrt{2\|H\|_{{\rm op}}t}
\right)^2
\right\} \le e^{-t}.
\]
\end{lemma}
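As a quick numerical sanity check of Lemma \ref{lem_quad} (our own illustrative sketch, not used in any proof), one can draw standard Gaussian vectors, which are sub-Gaussian with $\gamma_\xi = 1$, and compare the empirical tail probability with $e^{-t}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n_sim, t = 20, 200_000, 2.0

M = rng.standard_normal((d, d))
H = M @ M.T                                   # a fixed symmetric PSD matrix
tr_H, op_H = np.trace(H), np.linalg.eigvalsh(H).max()

xi = rng.standard_normal((n_sim, d))          # N(0, I_d) is 1-sub-Gaussian
quad = np.einsum('ij,jk,ik->i', xi, H, xi)    # xi^T H xi for every draw
threshold = (np.sqrt(tr_H) + np.sqrt(2.0 * op_H * t)) ** 2

print(np.mean(quad > threshold), "<=", np.exp(-t))   # empirical tail vs e^{-t}
\end{verbatim}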
The following lemma provides an upper bound on the operator norm of $\bm{G} H \bm{G}^T$ where $\bm{G}\in \mathbb{R}^{n\times d}$ is a random matrix whose rows are independent sub-Gaussian random vectors. It is proved in \cite{bing2020prediction}.
\begin{lemma}\label{lem_op_norm}
Let $\bm{G}$ be an $n$ by $d$ matrix whose rows are independent $\gamma$ sub-Gaussian random vectors with identity covariance matrix. Then for all symmetric positive semi-definite matrices $H$,
\[
\mathbb{P}\left\{{1\over n}\| \bm{G} H \bm{G}^T \|_{{\rm op}} \le \gamma^2\left( \sqrt{{\rm tr}(H) \over n} + \sqrt{6\|H\|_{{\rm op}}}
\right)^2\right\} \ge 1 - e^{-n}
\]
\end{lemma}
Another useful concentration inequality of the operator norm of the random matrices with i.i.d. sub-Gaussian rows is stated in the following lemma. This is an immediate result of \citet[Remark 5.40]{vershynin_2012}.
\begin{lemma}\label{lem_op_norm_diff}
Let $\bm{G}$ be an $n$ by $d$ matrix whose rows are i.i.d. $\gamma$ sub-Gaussian random vectors with covariance matrix $\Sigma_Y$. Then for every $t\ge 0$, with probability at least $1-2e^{-ct^2}$,
\[
\left\| {1\over n}\bm{G}^T \bm{G} - \Sigma_Y\right\|_{{\rm op}}\le \max\left\{\delta, \delta^2\right\} \left\|\Sigma_Y\right\|_{{\rm op}},
\]
with $\delta = C\sqrt{d/n}+ t/\sqrt n$ where $c = c(\gamma)$ and $C=C(\gamma)$ are positive constants depending on $\gamma$.
\end{lemma}
The deviation inequalities of the inner product of two random vectors with independent sub-Gaussian elements are well-known; we state the one in \cite{bing2020inference} for completeness.
\begin{lemma}\cite[Lemma 10]{bing2020inference}\label{lem_bernstein}
Let $\{X_t\}_{t=1}^n$ and $\{Y_t\}_{t=1}^n$ be any two sequences, each with zero mean independent $\gamma_x$ sub-Gaussian and $\gamma_y$ sub-Gaussian elements. Then, for some absolute constant $c>0$, we have
\[
\mathbb{P}\left\{{1\over n}\left|\sum_{t=1}^n\left(X_t Y_t - \mathbb{E}[X_t Y_t]\right)\right| \le \gamma_x \gamma_y t \right\}\ge 1-2\exp\left\{-c\min\left( t^2,t \right)n\right\}.
\]
In particular, when $\log N\le n$, one has
\[
\mathbb{P}\left\{{1\over n}\left|\sum_{t=1}^n\left(X_t Y_t - \mathbb{E}[X_t Y_t]\right)\right| \le C\sqrt{\log N \over n} \right\}\ge 1-2N^{-c}
\]
where $c \ge 2$ and $C = C(\gamma_x,\gamma_y,c)$ are some positive constants.
\end{lemma}
\end{document}
\begin{document}
\title{Full linear multistep methods as root-finders}
\author[tue]{Bart S. van Lith\fnref{fn1}}
\ead{[email protected]}
\author[tue]{Jan H.M. ten Thije Boonkkamp}
\author[tue,phl]{Wilbert L. IJzerman}
\fntext[fn1]{Corresponding author}
\address[tue]{Department of Mathematics and Computer Science, Eindhoven University of Technology - P. O. Box 513, NL-5600 MB Eindhoven, The Netherlands.}
\address[phl]{Philips Lighting - High Tech Campus 44, 5656 AE, Eindhoven, The Netherlands.}
\begin{abstract}
Root-finders based on full linear multistep methods (LMMs) use previous function values, derivatives and root estimates to iteratively find a root of a nonlinear function. As ODE solvers, full LMMs are typically not zero-stable. However, used as root-finders, the interpolation points are convergent so that such stability issues are circumvented. A general analysis is provided based on inverse polynomial interpolation, which is used to prove a fundamental barrier on the convergence rate of any LMM-based method. We show, using numerical examples, that full LMM-based methods perform excellently. Finally, we also provide a robust implementation based on Brent's method that is guaranteed to converge.
\end{abstract}
\begin{keyword}Root-finder \sep nonlinear equation \sep linear multistep methods \sep iterative methods \sep convergence rate.
\end{keyword}
\maketitle
\section{Introduction}
Suppose we are given a sufficiently smooth nonlinear function $f: \mathbb{R} \to \mathbb{R}$ and we are asked to solve the equation
\begin{equation}\label{eq:root_finding_problem}
f(x) = 0.
\end{equation}
This archetypical problem is ubiquitous in all fields of mathematics, science and engineering. For example, ray tracing techniques in optics and computer graphics need to accurately calculate intersection points between straight lines, rays, and objects of varying shapes and sizes \cite{glassner,chaves}. Implicit ODE solvers are often formulated like \eqref{eq:root_finding_problem}, after which a root-finder of some kind is applied \cite{butcher}.\par
Depending on the properties of the function $f$, there are several methods that present themselves. Sometimes the derivative is not available for various reasons, in which case the secant method will prove useful. If higher-order convergence is desired, inverse quadratic interpolation may be used \cite{gautschi}. If the derivative of $f$ exists and is available, Newton's method is a solid choice, especially if $f$ is also convex. \par
Recently, a new interpretation of root-finding methods in terms of ODEs has been introduced by Grau-S\'anchez et al. \cite{grau_adams,grau_RK,grau_obreshkov}. Their idea is to consider the inverse function derivative rule as an ODE, so that any explicit ODE solver may be converted to a root-finding method. Indeed, Grau-S\'anchez et al. have successfully introduced root-finders based on Adams-type multistep and Runge-Kutta integrators. It goes without saying that only explicit ODE solvers can be usefully converted to root-finding methods. However, predictor-corrector pairs are possible, as those methods are indeed explicit.\par
We argue that the ODE approach can be interpreted as inverse interpolation with (higher) derivatives. Indeed, any linear integration method is based on polynomial interpolation. Thus, the ODE approach can be seen as a generalisation of inverse interpolation methods such as the secant method or inverse quadratic interpolation. The analysis can thus be combined into a single approach based on inverse polynomial interpolation.\par
Our main theoretical result is a theorem on the convergence rate of root-finders based on explicit linear multistep methods. We furthermore prove a barrier on the convergence rate of LMM-based root-finders. It turns out that adding a few history points quickly boosts the convergence rate close to the theoretical bound. However, adding many history points ultimately proves an exercise in futility due to diminishing returns in the convergence rate. Two LMM-based methods are constructed explicitly, one using two history points with a convergence rate of $1+\sqrt{3} \approx 2.73$ and another with three history points that converges with rate $2.91$.\par
In terms of the efficiency measure, defined as $p^\frac{1}{w}$ with $p$ the order and $w$ the number of evaluations \cite{gautschi}, our method holds up even compared to several optimal memoryless root-finders. The Kung-Traub conjecture famously states the optimal order of memoryless root-finders using $w$ evaluations is $2^{w-1}$, so that the efficiency is $2^\frac{w-1}{w}$. Our method requires two evaluations per iteration, provided function and derivative values are stored, leading to an efficiency of $1.71$ for the three-point method. As a consequence, anything up to an eighth-order method has a lower efficiency measure. For instance, optimal fourth-order methods with $\sqrt[3]{4} \approx 1.59$ \cite{Shengguo20091288,SOLEYMANI2012847,BEHL201589}, or eighth-order methods with $\sqrt[4]{8} \approx 1.68$ \cite{kim,CHUN201486,Lotfi2015}. Only 16th-order methods, e.g. Geum and Kim's \cite{GEUM20113278}, with $\sqrt[5]{16} \approx 1.74$ would in theory be more efficient.\par
Using several numerical examples, we show that the LMM-based methods indeed achieve this higher convergence rate. Furthermore, pathological examples where Newton's method fails to converge are used to show increased stability. We also construct a robust LMM-based method combined with bisection to produce a method that can be seen as an extension of Brent's \cite{brent}. Similar to Brent's method, whenever an enclosing starting bracket is provided, an interval $[a,b]$ with $f(a) f(b) <0$, our method is guaranteed to converge.\par
This article is organised in the following way. First, we find the convergence rate of a wide class of root-finders in Section~\ref{sec:barriers} and prove a barrier on the convergence rates. Next, in Section~\ref{sec:new_finders} we derive new root-finders based on full linear multistep methods and show that such methods are stable when the initial guess is sufficiently close to the root. After this, some results are presented in Section~\ref{sec:results} that verify our earlier theoretical treatment. Finally, we present our robust implementation in Section~\ref{sec:robust}, after which we give our conclusions in Section~\ref{sec:conclusions}.
\section{Barriers on LMM root-finders}\label{sec:barriers}
Root-finding methods based on the ODE approach of Grau-S\'anchez et al. can be derived by assuming that the function $f$ is sufficiently smooth and invertible in the vicinity of the root. Under these assumptions, the chain rule gives
\begin{equation}\label{eq:root_ODE}
\frac{\diff x}{\diff y} = [f^{-1}]^\prime (y) = \frac{1}{f^\prime\big(x\big)} = F(x),
\end{equation}
which we may interpret as an autonomous ODE for the inverse. Integrating \eqref{eq:root_ODE} from an initial guess $y_0 = f(x_0)$ to $y=0$ yields
\begin{equation}\label{eq:root_integral}
x(0) = x_0 + \int_{y_0}^{0} F\big( x(y) \big) \diff y,
\end{equation}
where $x(0)$ is the location of the root. Immediately, we see that applying the forward Euler method to \eqref{eq:root_ODE} gives Newton's method. From \eqref{eq:root_integral}, we see that the step size of the integrator should be taken as $0-y_0 = -f(x_0)$. However, Newton's method may also be interpreted as an inverse linear Taylor method, i.e., a method where the inverse function is approximated by a first-order Taylor polynomial. Indeed, any linear numerical integration method applied to \eqref{eq:root_ODE} can be interpreted as an inverse polynomial interpolation.\par
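As a minimal illustration of this correspondence (our own sketch, using an arbitrary test function), one forward Euler step of \eqref{eq:root_ODE} with step size $-f(x_0)$ is exactly Newton's update:
\begin{verbatim}
import numpy as np

f  = lambda x: x**3 - x - 1        # arbitrary test function
df = lambda x: 3.0 * x**2 - 1.0

def newton_step(x):
    return x - f(x) / df(x)

def euler_step_inverse_ode(x):
    # One forward Euler step of dx/dy = 1/f'(x) with step h = 0 - f(x).
    return x + (-f(x)) / df(x)

x0 = 1.5
assert np.isclose(newton_step(x0), euler_step_inverse_ode(x0))
\end{verbatim}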
As such, explicit linear multistep methods applied to \eqref{eq:root_ODE} will also produce a polynomial approximation to the inverse function. Such a method has the form
\begin{equation}\label{eq:LMM_root-finder}
x_{n+s} + \sum_{k=0}^{s-1} a_k^{(n)} x_{n+k} = h_{n+s} \sum_{k=0}^{s-1} b_k^{(n)} F(x_{n+k}),
\end{equation}
where indeed $b_s^{(n)} = 0$, otherwise we end up with an implicit root-finder, which would not be very useful. The coefficients of the method, $\{a_k^{(n)}\}_{k=0}^{s-1}$ and $\{b_k^{(n)}\}_{k=0}^{s-1}$, will depend on the previous step sizes and will therefore be different each step. The step sizes are given by $h_{n+k} = y_{n+k} - y_{n+k-1}$, the differences in $y$-values. Since we wish to find the root, we set $y_{n+s} = 0$, leading to $h_{n+s} = y_{n+s} - y_{n+s-1} = - y_{n+s-1} $. Furthermore, the $y$-values are of course given by the function values of the root estimates, i.e.,
\begin{equation}
h_{n+k} = f(x_{n+k}) -f(x_{n+k-1}) \quad \text{for }k=1,\ldots,s-1.
\end{equation}
Like an ODE solver, we may use an implicit LMM in tandem with an explicit LMM to form a predictor-corrector pair, the whole forming an explicit method. Unlike an ODE solver, we may construct derivative-free root-finders based on the LMM approach by setting all $b_k^{(n)}=0$ for $k=0,\ldots,s-1$ and for all $n>0$, e.g., the secant method. For an ODE solver this would obviously not make sense. Similar to ODE solvers, we may introduce higher derivatives by using
\begin{equation}\label{eq:general_LMM_root-finder}
x_{n+s} + \sum_{k=0}^{s-1} a_k^{(n)} x_{n+k} = h_{n+s} \sum_{k=0}^{s-1} b_k^{(n)} F(x_{n+k}) + h_{n+s}^2 \sum_{k=0}^{s-1} c_k^{(n)} F^\prime(x_{n+k}) + \ldots
\end{equation}
The following theorem provides the maximal convergence rate for any method of the form \eqref{eq:general_LMM_root-finder}. Furthermore, it provides a fundamental barrier on the convergence rates of LMM-based root-finders. Under certain conditions, it also rewards our intuition in the sense that methods using more information converge faster to the root.\par
Let us introduce some notation first. We denote $d$ the number of derivatives used in the method \eqref{eq:general_LMM_root-finder}. Higher derivatives of the inverse are found by iteratively applying the inverse function derivative rule. Methods defined by \eqref{eq:LMM_root-finder} are the special case of \eqref{eq:general_LMM_root-finder} with $d=1$. We also introduce coefficients $\sigma_k$ that indicate whether the coefficients $a_k^{(n)}$ are arbitrarily fixed from the outset or left free to maximise the order of convergence, i.e., $\sigma_k = 1$ if $a_k^{(n)} $ is free and $\sigma_k = 0$ otherwise.
\begin{theorem}\label{thm:convergence_bound}
For simple roots, the convergence rate $p$ for any method of the form \eqref{eq:general_LMM_root-finder}, where the coefficients are chosen so as to give the highest order of convergence, is given by the largest real root of
\begin{equation}\label{eq:convergence_root}
p^s = \sum_{k=0}^{s-1} p^k(d + \sigma_k),
\end{equation}
for all $s \geq 1$ and $d \geq 1$, or $s \geq 2$ and $d=0$. The convergence rate is bounded by $p < d+2$.\\
Additionally, if $\sigma_k = 1$ for all $k = 0,\ldots,s-1$, the convergence rates form a monotonically increasing sequence in $s$ and $p \to d+2$ as $s \to \infty$.
\end{theorem}
\begin{proof}
1. Any method of the form \eqref{eq:general_LMM_root-finder} implicitly uses inverse polynomial (Hermite) interpolation applied to the inverse function $f^{-1}$, let us call the resulting interpolation $H$. Let $y_{n+k}$, $k = 0,\ldots,s-1$ be the interpolation points. At each point $y_{n+k}$, $d+\sigma_k$ values are interpolated: the inverse function value $x_{n+k}$ if $\sigma_k = 1$ and $d$ derivative values. Thus, the polynomial interpolation error formula gives
\begin{equation*}
f^{-1}(y) - H(y) = \frac{[f^{-1}]^{(N+1)}(\upsilon)}{(N+1)!} \prod_{k=0}^{s-1} (y - y_{n+k})^{d+\sigma_k},
\end{equation*}
where $\upsilon$ is in the interval spanned by the interpolation points and $N = sd + \sum_{k=0}^{s-1}\sigma_k$. The approximation to the root is then computed as $x_{n+s} = H(0)$. Let us denote the exact value of the root as $\alpha$, then
\begin{equation*}
|x_{n+s} - \alpha | = \frac{|[f^{-1}]^{(N+1)}(\upsilon)|}{(N+1)!} \prod_{k=0}^{s-1} | y_{n+k}|^{d+\sigma_k}.
\end{equation*}
Define $\varepsilon_{n+k} = x_{n+k} - \alpha $ and recognise that $f(x_{n+k}) = f(\alpha + \varepsilon_{n+k}) = f^\prime(\alpha) \varepsilon_{n+k} + \mathcal{O}(\varepsilon_{n+k}^2)$, where $f^\prime(\alpha) \neq 0$. Thus, we find
\begin{equation*}
| \varepsilon_{n+s}| \approx A_0 |\varepsilon_{n+s-1}|^{d+\sigma_{s-1}} \cdots | \varepsilon_{n}|^{d+\sigma_{0}},
\end{equation*}
where $A_0>0$ is a constant depending on $[f^{-1}]^{(N+1)}(\upsilon) $, $s$ and $f^\prime(\alpha)$. The error behaviour is of the form
\begin{equation*}
|\varepsilon_{l+1}| = C |\varepsilon_l|^p, \tag{$\ast$}
\end{equation*}
asymptotically as $l \to \infty$. Here, $C>0$ is a constant. Applying $(\ast)$ $s$ times on the left and $s-1$ times on the right-hand side leads to
\begin{equation*}
| \varepsilon_{n}|^{p^s} \approx A_1 | \varepsilon_{n}|^{\sum_{k=0}^{s-1} p^k(d+\sigma_k) },
\end{equation*}
where all the constants have been absorbed into $A_1$. Thus, \eqref{eq:convergence_root} is established.\par
2. For methods that only use a single point, we have $s=1$ and $\sigma_0 = 1$ so that \eqref{eq:convergence_root} simplifies to $p = d + 1$. Hence, also in the case $s=1$, we have $p < d+2$.\par
3. Finally, by its definition we can bound $\sigma_k \leq 1$, so that we obtain
\begin{equation*}
p^s \leq (d+1) \sum_{k=0}^{s-1} p^k = (d+1) \frac{p^s-1}{p-1}.
\end{equation*}
Simplifying, we obtain
\begin{equation*}
p^{s+1}-(d+2)p^s + d+1 \leq 0. \tag{$\star$}
\end{equation*}
Note that $ p=1 $ is always a solution if we impose equality. However, the maximal convergence rate is given by the largest real root, so that we look for solutions $p>1$. Dividing by $p^s$ yields,
\begin{equation*}
p -( d+2) \leq - \frac{d+1}{p^s} <0,
\end{equation*}
which holds for all $s \geq 1$. Hence, we obtain $ p < d+2$.\par
4. Suppose now that $\sigma_k = 1$ for all $k = 0,\ldots,s-1$, so that the convergence rate satisfies $(\star)$ with equality. Rewriting, we obtain
\begin{equation*}
d+2 - p = \frac{d+1}{p^s}.
\end{equation*}
Thus, the convergence rate is given by the intersection point of a straight line and an inverse power law, both as a function of $p$. First, we observe that there is always an intersection at $p=1$. Furthermore, the slope of the straight line is $-1$, while the slope of the inverse power law at $p=1$ is given by $-s(d+1)$. Therefore, the slope of the right-hand side is smaller than the slope of the left-hand side for all $s\geq 1$ and $d\geq 1$, or $d=0$ and $s\geq2$. Thus, to the right of $p=1$, the inverse power law is below the straight line. Combined with the fact that $ \frac{d+1}{p^s}$ is a convex function for $p>0$ and it goes asymptotically to zero, we see that the second intersection point exists and is unique. Fix $s$ and call the second intersection point, i.e., the convergence rate, $p_s$. Finally, we note that for $p>1$, we have
\begin{equation*}
\frac{d+1}{p^{s+1}} < \frac{d+1}{p^s},
\end{equation*}
so that $p_{s+1}$ will be moved towards the right compared to $p_s$. Thus, we find
\begin{equation*}
p_{s+1} > p_s
\end{equation*}
for all $d \geq 1$ and $s \geq 1$, or $d=0$ and $s \geq 2$. The result now follows from the Monotone Convergence Theorem \cite{adams}.
\end{proof}
From Theorem~\ref{thm:convergence_bound}, we find several special cases, such as the derivative-free interpolation root-finders, i.e., $d=0$. Note that derivative-free root-finders with $s=1$ simply do not exist, as then the inverse is approximated with a constant function.
\begin{corollary}
Inverse polynomial interpolation root-finders, i.e., $d=0$ resulting in all $b_k^{(n)}=0$ in \eqref{eq:LMM_root-finder}, can attain at most a convergence rate that is quadratic. Their convergence rates are given by the largest real root of
\begin{equation}\label{eq:inverse_polynomial_root}
p^{s+1}-2p^s+1 =0,
\end{equation}
for all $s\geq 2$. The convergence rates are bounded by $p < 2$ and form a monotonically increasing sequence in $s$, with $p \to 2$ as $s \to \infty$.
\end{corollary}
\begin{proof}
The coefficients $\{a_k^{(n)}\}_{k=0}^{s-1}$ are chosen to maximise the order of convergence, so that $\sigma_k = 1$ for all $k=0,\ldots,s-1$, while $d=0$, leading to
\begin{equation*}
p^s = \sum_{k=0}^{s-1} p^k = \frac{p^s-1}{p-1}.
\end{equation*}
Simplifying yields \eqref{eq:inverse_polynomial_root}. Furthermore, the condition that $\sigma_k = 1$ for all $k=0,\ldots,s-1$ is satisfied so that the convergence rates form a monotonically increasing sequence that converges to $d+2 = 2$.
\end{proof}
Inverse polynomial root-finders such as the secant method ($s=2$) or inverse quadratic interpolation ($s=3$) are derivative-free, so that their highest convergence rate is $2$ according to Theorem \ref{thm:convergence_bound}. The first few convergence rates for derivative-free inverse polynomial interpolation methods are presented in Table~\ref{tab:convergence_rates_interpolation}. The well known convergence rates for the secant method and the inverse quadratic interpolation method are indeed reproduced. As becomes clear from the table, the rates quickly approach $2$ but never quite get there. The increase in convergence rate becomes smaller and smaller as we increase the number of interpolation points.
\begin{table}[h]
\centering
\caption{The first few convergence rates for $s$ points using only function values.}
\label{tab:convergence_rates_interpolation}
\begin{tabular}{l|l}
$s$ & $p$ \\ \hline
$2$ & $1.62$ \\
$3$ & $1.84$ \\
$4$ & $1.92$ \\
$5$ & $1.97$
\end{tabular}
\end{table}
Next, we cover the Adams-Bashforth methods also discussed in \cite{grau_adams}. As ODE solvers, Adams-Bashforth methods are explicit integration methods that have order of accuracy $s$ \cite{hairer}. However, as Theorem~\ref{thm:convergence_bound} suggests, as root-finders they will have a convergence rate that is smaller than cubic, since $d=1$. In fact, the convergence rate of Adams-Bashforth root-finders is bounded by $\frac{3+\sqrt{5}}{2} = 2.62$ as was proven by Grau-S\'anchez et al. \cite{grau_adams}. The following corollary is a generalisation of their result.
\begin{corollary}
The Adams-Bashforth root-finder methods with $s\geq 2$ exhibit convergence rates given by the largest real root of
\begin{equation}\label{eq:AB_root}
p^{s+1} - 3 p^s + p^{s-1} + 1 =0,
\end{equation}
for all $s \geq 1$. The convergence rates are bounded by $p < \frac{3 + \sqrt{5}}{2}$ and form a monotonically increasing sequence in $s$, with $p \to \frac{3 + \sqrt{5}}{2}$ as $s \to \infty$.
\end{corollary}
\begin{proof}
1. Adams-Bashforth methods have $a_k^{(n)}=0$ for $k=0,\ldots,s-2$, resulting in $\sigma_k = 0$ for $k = 0,\ldots,s-2$ and $\sigma_{s-1} = 1$. We may write $\sigma_k$ for simplicity as a Kronecker delta, i.e., $\sigma_k = \delta_{k,s-1}$. Furthermore, the methods use a single derivative of $f^{-1}$ so that $d=1$. The $s=1$ method is equal to Newton's method, which has a quadratic convergence rate. For $s \geq 2$, we find from Theorem~\ref{thm:convergence_bound} that
\begin{equation*}
p^s =p^{s-1} + \sum_{k=0}^{s-1} p^k = p^{s-1} + \frac{p^{s}-1}{p-1} .
\end{equation*}
Simplifying yields \eqref{eq:AB_root}. Again, we assume that $p>1$ and we divide by $p^{s-1}$, so that
\begin{equation*}
p^2 - 3p + 1 =- \frac{1}{p^{s-1}} < 0,
\end{equation*}
which holds for all $s \geq 2$. This implies that $p < \frac{3 + \sqrt{5}}{2}$ for all $s \geq 2$.\par
2. The proof that the convergence rates make up a monotonically increasing sequence is similar to the one given for Theorem~\ref{thm:convergence_bound}. First, we note that \eqref{eq:AB_root} always has a root at $p=1$. Next, we rewrite it to read
\begin{equation*}
3-p = \frac{1}{p} + \frac{1}{p^s}.
\end{equation*}
The left-hand side is a straight line with slope $-1$ while the right-hand side is a convex function that has slope $-3$ at $p=1$. Thus, to the right of $p=1$, the convex function is below the straight line. Furthermore, the right-hand side goes asymptotically to zero for large $p$. This implies that there is a unique intersection point with $p>1$, which is the convergence rate. For fixed $s$, call the second intersection point $p_s$. Finally, we note that
\begin{equation}
\frac{1}{p} + \frac{1}{p^{s+1}} < \frac{1}{p} + \frac{1}{p^s},
\end{equation}
for $p>1$, from which we see that the intersection point is moved to the right for larger $s$, i.e., $p_{s+1}>p_s$. The result again follows from the Monotone Convergence Theorem.
\end{proof}
The first few convergence rates for the Adams-Bashforth root-finder methods are given in Table~\ref{tab:convergence_rates_AB} and agree with the rates found by Grau-S\'anchez et al. As becomes clear from the table, the convergence rates quickly draw near the bound of $2.62$. Yet again we are met with steeply diminishing returns as we increase the number of history points $s$.
\begin{table}[h]
\centering
\caption{The first few convergence rates for Adams-Bashforth root-finder method using $s$ points.}
\label{tab:convergence_rates_AB}
\begin{tabular}{l|l}
$s$ & $p$ \\ \hline
$1$ & $2$ \\
$2$ & $2.41$ \\
$3$ & $2.55$ \\
$4$ & $2.59$ \\
$5$ & $2.61$
\end{tabular}
\end{table}
The Adams-Bashforth root-finder methods cannot attain a convergence rate higher than $2.62$, which is still some way off the cubic bound given by Theorem~\ref{thm:convergence_bound}. However, the theorem also tells us that using full linear multistep methods will result in convergence rates closer to cubic, at least in the limit of large $s$. For ODE solvers, trying to obtain a higher convergence rate by increasing the number of points often leads to instabilities. Indeed, the stability regions of many LMMs tend to shrink as the order is increased \cite{butcher}. In general, polynomial interpolation on equispaced points is unstable, e.g., Runge's phenomenon \cite{quarteroni}.\par
We must note that the stability issues in ODE solvers arise from the fact that polynomial interpolation is applied on an equispaced grid. Root-finders are designed to home in on a root, and when convergent, the step sizes will decrease rapidly. Runge's phenomenon can be countered by placing more nodes closer to the boundary, for example Gau{\ss} or Lebesgue nodes. As we will see, inverse polynomial interpolation is stable on the set of points generated by the root-finder itself, provided the starting guess is sufficiently close.\par
Let us inspect the convergence rates of different LMM-based root-finders using Theorem~\ref{thm:convergence_bound}, see Table~\ref{tab:convergence_rates_derivatives}. These convergence rates are computed under the assumption that all derivatives and point values are used, i.e., $\sigma_k = 1$ for $k=0,\ldots,s-1$ in Theorem~\ref{thm:convergence_bound}. The convergence rate of a $d$-derivative method can be boosted by at most $1$, and the table shows that this mark is attained very quickly indeed. Adding a few history points raises the convergence rate significantly, but finding schemes with $s>3$ is likely to be a waste of time.
\begin{table}[h]
\centering
\caption{The first few convergence rates for $s$ points (vertical) using function values and the first $d$ derivatives (horizontal).}
\label{tab:convergence_rates_derivatives}
\begin{tabular}{l|llll}
$s\backslash d$ & $1$ & $2$ & $3$ & $4$ \\ \hline
$1$ & $2$ & $3$ & $4$ & $5$ \\
$2$ & $2.73$ & $3.79$ & $4.82$ & $5.85$ \\
$3$ & $2.91$ & $3.95$ & $4.97$ & $5.98$ \\
$4$ & $2.97$ & $3.99$ & $4.99$ & $5.996$
\end{tabular}
\end{table}
Thus, provided that the root-finders are stable, a higher convergence rate can be achieved by adding history points, as well as derivative information.
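The entries of Tables~\ref{tab:convergence_rates_interpolation}--\ref{tab:convergence_rates_derivatives} can be reproduced numerically as the largest real root of \eqref{eq:convergence_root}. The sketch below (illustrative only) does so for the full-LMM case $\sigma_k = 1$ for all $k$:
\begin{verbatim}
import numpy as np

def lmm_rate(s, d):
    # Largest real root of p^s = (d+1) * (p^{s-1} + ... + p + 1),
    # i.e. eq. (convergence_root) with sigma_k = 1 for all k.
    coeffs = [1.0] + [-(d + 1.0)] * s        # descending powers of p
    roots = np.roots(coeffs)
    return roots[np.abs(roots.imag) < 1e-9].real.max()

for d in range(0, 5):
    print(d, [round(lmm_rate(s, d), 2) for s in range(2, 6)])
# Each row increases monotonically toward the barrier d + 2.
\end{verbatim}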
\section{Full LMM-based root-finders}\label{sec:new_finders}
Let us investigate full LMM-based root-finders that use a single derivative, thus methods of the form \eqref{eq:LMM_root-finder}. The current step size is then given by $h_{n+s} = -f(x_{n+s-1})$. Let us define $q_k^{(n)}$ as
\begin{equation}
q_k^{(n)} = \frac{f(x_{n+k})}{f(x_{n+s-1})}, \quad k = 0,\ldots ,s-2,
\end{equation}
so that $h_{n+s} q_k^{(n)} = -f(x_{n+k})$ is the total step between $y_{n+k}$ and $y_{n+s}=0$. The Taylor expansions of $x(y_{n+k})$ and $x^\prime(y_{n+k})$ about $y_{n+s}$ are then given by
\begin{subequations}
\begin{align}
x(y_{n+k}) = x(y_{n+s}) +\sum_{m=1}^\infty \frac{1}{m!} (-h_{n+s} q_k)^m x^{(m)} (y_{n+s}),\\
x^\prime(y_{n+k}) = x^\prime(y_{n+s}) + \sum_{m=1}^\infty \frac{1}{m!} (-h_{n+s} q_k)^m x^{(m+1)} (y_{n+s}),
\end{align}
\end{subequations}
where we have dropped the superscript $(n)$ for brevity. Substituting these into \eqref{eq:LMM_root-finder}, we obtain
\begin{equation}
\begin{aligned}
&x(y_{n+s}) \left[1 + \sum_{k=0}^{s-1} a_k \right]- h_{n+s} x^\prime(y_{n+s}) \left[ \sum_{k=0}^{s-1} a_k q_k + b_k \right]\\
&+ \sum_{m=2}^{\infty} \frac{1}{(m-1)!} (-h_{n+s})^m x^{(m)}(y_{n+s}) \sum_{k=0}^{s-1} \left[ \tfrac{1}{m} q_k^m a_k + q_k^{m-1} b_k \right]=0.
\end{aligned}
\end{equation}
The consistency conditions then are given by
\begin{subequations}\label{eq:LMM_consistency}
\begin{align}
&\sum_{k=0}^{s-1} a_k= -1,\\
&\sum_{k=0}^{s-1} a_k q_k + b_k = 0.
\end{align}
\end{subequations}
This gives us two equations for $2s$ coefficients, so that we can eliminate another $2s-2$ leading order terms, resulting in the conditions
\begin{equation}\label{eq:LMM_order_equations}
\sum_{k=0}^{s-1} \frac{q^m_k}{m} a_k + q_k^{m-1} b_k = 0,
\end{equation}
where $m = 2,\ldots,2s-1$.
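For given ratios $q_k$ (with $q_{s-1}=1$), conditions \eqref{eq:LMM_consistency} and \eqref{eq:LMM_order_equations} form a $2s\times 2s$ linear system for the coefficients. A possible numerical realisation of this step is sketched below (our own illustration; the closed-form coefficients derived in the following subsections are obtained by solving the same system symbolically):
\begin{verbatim}
import numpy as np

def lmm_coefficients(q):
    # q: ratios q_k = f(x_{n+k}) / f(x_{n+s-1}), k = 0..s-1, so q[-1] = 1.
    # Requires all q_k distinct, otherwise the system is singular.
    q = np.asarray(q, dtype=float)
    s = q.size
    A = np.zeros((2 * s, 2 * s))
    rhs = np.zeros(2 * s)
    A[0, :s] = 1.0                      # sum_k a_k = -1
    rhs[0] = -1.0
    A[1, :s], A[1, s:] = q, 1.0         # sum_k (a_k q_k + b_k) = 0
    for row, m in enumerate(range(2, 2 * s), start=2):
        A[row, :s] = q**m / m           # order conditions, m = 2,...,2s-1
        A[row, s:] = q**(m - 1)
    coef = np.linalg.solve(A, rhs)
    return coef[:s], coef[s:]           # (a_0..a_{s-1}), (b_0..b_{s-1})

a, b = lmm_coefficients([2.0, 1.0])     # s = 2 with q_0 = 2
\end{verbatim}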
\subsection{The $s=2$ method}
\noindent The $s=2$ LMM-based method is given by
\begin{equation}\label{eq:inverse_cubic_LMM}
x_{n+2} + a_1 x_{n+1} + a_0 x_n = h_{n+2} \Big( b_1 F(x_{n+1}) + b_0 F(x_{n}) \Big),
\end{equation}
where we have again suppressed the superscript $(n)$ on the coefficients. Here, $h_{n+2} = - f(x_{n+1})$ so that we may write $q = q_0$, i.e.
\begin{equation}
q = \frac{f(x_n)}{f(x_{n+1})}.
\end{equation}
Applying \eqref{eq:LMM_consistency} and \eqref{eq:LMM_order_equations}, we find a set of linear equations, i.e.,
\begin{subequations}
\begin{align}
a_1 + a_0 = -1,\\
a_1 + q a_0 +b_1 + b_0 = 0,\\
\tfrac{1}{2}a_1 + \tfrac{1}{2} q^2 a_0 +b_1 + q b_0 = 0,\\
\tfrac{1}{3} a_1 + \tfrac{1}{3} q^3 a_0 + b_1 + q^2 b_0 = 0.
\end{align}
\end{subequations}
These equations may be solved, provided $q \neq 1$, to yield
\begin{subequations}\label{eq:LMM_coefficients}
\begin{align}
a_0 &= \frac{1-3q}{(q-1)^3} & a_1 &= -1-a_0, \\
b_0 &= \frac{q}{(q-1)^2} & b_1 &= q b_0 .
\end{align}
\end{subequations}
The condition $q\neq 1$ is equivalent to $f(x_{n+1}) \neq f(x_n)$. This condition is not very restrictive, as stronger conditions are needed to ensure convergence.\par
The above method may also be derived from the inverse polynomial interpolation perspective, using the ansatz
\begin{equation}\label{eq:inverse_cubic_interpolation}
H(y) = h_3 \big(y-f(x_{n+1}) \big)^3 + h_2 \big(y-f(x_{n+1}) \big)^2 + h_1 \big(y-f(x_{n+1}) \big) + h_0,
\end{equation}
where $h_i$, $i=0,1,2,3$ are undetermined coefficients. The coefficients are fixed by demanding that $H$ interpolates $f^{-1}$ and its derivative at $y=f(x_{n+1})$ and $y=f(x_n)$, i.e.,
\begin{subequations}\label{eq:inverse_cubic_conditions}
\begin{align}
H\big(f(x_n)\big) &= x_n, \\
H\big(f(x_{n+1})\big) &= x_{n+1},\\
H^\prime \big(f(x_n)\big) &= \frac{1}{f^\prime(x_n)},\\
H^\prime \big(f(x_{n+1})\big) &= \frac{1}{f^\prime(x_{n+1})}.
\end{align}
\end{subequations}
Solving for $h_i$, $i=0,1,2,3$ and setting $y=0$, we find the same update $x_{n+2}$ as \eqref{eq:inverse_cubic_LMM}.\par
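A compact sketch of the resulting $s=2$ iteration is given below (our own illustrative implementation; the Newton start-up step and the stopping test are pragmatic choices, not prescribed by the method):
\begin{verbatim}
import numpy as np

def lmm2_root(f, df, x0, tol=1e-14, max_iter=50):
    # s = 2 full LMM root-finder: the update (eq:inverse_cubic_LMM)
    # with the closed-form coefficients (eq:LMM_coefficients).
    x_prev, x = x0, x0 - f(x0) / df(x0)          # Newton start-up step
    for _ in range(max_iter):
        fp, fc = f(x_prev), f(x)
        if fc == 0.0 or abs(x - x_prev) < tol:
            return x
        if fp == fc:                             # q = 1 excluded; fall back to Newton
            x_new = x - fc / df(x)
        else:
            q = fp / fc                          # q = f(x_n) / f(x_{n+1})
            a0 = (1.0 - 3.0 * q) / (q - 1.0) ** 3
            a1 = -1.0 - a0
            b0 = q / (q - 1.0) ** 2
            b1 = q * b0
            h = -fc                              # h_{n+2} = -f(x_{n+1})
            x_new = -a1 * x - a0 * x_prev + h * (b1 / df(x) + b0 / df(x_prev))
        x_prev, x = x, x_new
    return x

# Example: f(x) = x^3 - x - 1 with x0 = 1, root near 1.3247 (cf. the Results section).
root = lmm2_root(lambda x: x**3 - x - 1, lambda x: 3 * x**2 - 1, 1.0)
\end{verbatim}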
The stability of the $s=2$ LMM method depends on the coefficients of the LMM in much the same way as for an ODE solver. Indeed, we can write a perturbed sequence as $\tilde{x}_n = x_n + z_n$, where $x_n$ is the sequence generated in exact arithmetic that we wish to find and $z_n$ is a parasitic mode. It can be shown that the parasitic mode satisfies
\begin{equation}
z_{n+2} + a_1 z_{n+1} + a_0 z_n = 0,
\end{equation}
so that it may grow unbounded if the roots are greater than $1$ in modulus. Using the ansatz $z_n = B \lambda^n$, we find the characteristic polynomial of the $s=2$ method, i.e.,
\begin{equation}
\rho(\lambda) = \lambda^2 - \lambda \left( 1+a_0 \right) + a_0 = (\lambda -1) \left(\lambda - a_0\right),
\end{equation}
where the roots can simply be read off. Stability of the root-finder is ensured if the stability polynomial of the method has a single root with $\lambda = 1$, while the other roots satisfy $|\lambda| <1$. This property is called zero-stability for linear multistep ODE solvers. Thus, to suppress parasitic modes we need
\begin{equation}
|a_0| = \left| \frac{1-3q}{(q-1)^3} \right| <1.
\end{equation}
This reduces to either $q<0$ or $q>3$, so that $|q|>3$ is a sufficient condition. Thus, if the sequence $\{ |f(x_n)| \}_{n=1}^\infty$ is decreasing fast enough, any parasitic mode is suppressed. We may estimate $q$ as a ratio of errors, since $f(x_n) = f^\prime( \alpha) \varepsilon_n + \mathcal{O}(\varepsilon_n^2)$, so that
\begin{equation}
q \approx \frac{\varepsilon_n}{\varepsilon_{n+1}}.
\end{equation}
Using $\varepsilon_{n+1} = C \varepsilon_n^p$ with $p=1+\sqrt{3}$, we find that
\begin{equation}
\varepsilon_n < C_1,
\end{equation}
with $C_1 = \left( \frac{1}{3 C} \right)^\frac{1}{\sqrt{3}} $. We conclude that the method will be stable if the initial errors are smaller than the constant $C_1$, which depends on the details of the function $f$ in the vicinity of the root. This condition translates to having the starting values sufficiently close to the root. This is a rather typical stability condition for root-finders.
\subsection{$s=3$ method}
We may again apply \eqref{eq:LMM_consistency} - \eqref{eq:LMM_order_equations} to find a method with $s=3$; this time we have six coefficients, given by
\begin{subequations}
\begin{align}
a_0 &= \frac{q_1^2( q_0 (3 + 3q_1 - 5 q_0) -q_1 )}{(q_0-1)^3(q_0-q_1)^3}, & b_0 &= \frac{q_0 q_1^2}{(q_0-1)^2(q_0-q_1)^2}, \\
a_1 &= \frac{q_0^2( q_1 (5 q_1- 3q_0 - 3 ) + q_0 )}{(q_1-1)^3(q_0-q_1)^3}, & b_1 &= \frac{q_0^2 q_1}{(q_0-q_1)^2 (q_1 - 1)^2}, \\
a_2 &= \frac{q_0^2 q_1^2( 3q_1 - q_0(q_1-3)-5 )}{(q_0-1)^3 (q_1-1)^3 } , & b_2 &= \frac{q_0^2 q_1^2}{(q_0-1)^2(q_1-1)^2},
\end{align}
\end{subequations}
where $q_0 = \frac{f(x_n)}{f (x_{n+2}) }$ and $q_1 = \frac{f(x_{n+1})}{f (x_{n+2}) }$. Here, we have the conditions $q_0 \neq 1$ and $q_1 \neq 1$, reducing to the condition that all $y$-values must be unique. Again, this condition is not very restrictive for reasons detailed above.\par
Methods with a greater number of history points are possible, however, the gain in convergence rate from $s=3$ to $s=4$ is rather slim, as indicated by Table~\ref{tab:convergence_rates_derivatives}. If such methods are desirable, they can be derived by selecting coefficients that satisfy \eqref{eq:LMM_consistency} - \eqref{eq:LMM_order_equations}.
\section{Results}\label{sec:results}
Like the secant method, the $s=2$ full LMM root-finding method needs two starting points for the iteration. However, as the analytical derivative is available, we choose to simply start with one point, say $x_0$, and generate $x_1$ by a Newton step. The $s=3$ method needs three starting values; the next value $x_2$ is therefore obtained from the $s=2$ LMM method. The LMM-based methods can be implemented efficiently by storing the function value and derivative value of the previous step, so that only one function and one derivative evaluation are needed per iteration.\par
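A sketch of such an implementation in Python is given below; the naming is ours, and the Hermite-basis evaluation of the inverse interpolant from the earlier sketch is repeated inline so that the listing is self-contained. Each pass of the loop evaluates $f$ and $f^\prime$ exactly once, the values at the previous iterate being carried over.
\begin{verbatim}
def lmm2_solve(f, df, x0, tol=1e-14, maxit=100):
    # s=2 LMM root-finder, started with one Newton step from x0;
    # one f and one f' evaluation per iteration.
    xp, yp, dp = x0, f(x0), df(x0)      # previous iterate x_n
    xc = xp - yp / dp                   # current iterate x_{n+1}
    for _ in range(maxit):
        yc, dc = f(xc), df(xc)          # the only new evaluations
        h = yc - yp
        if h == 0.0:                    # y-values coincide: accept xc
            return xc
        t = -yp / h
        h00 = 2*t**3 - 3*t**2 + 1
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        xn = h00*xp + h10*h/dp + h01*xc + h11*h/dc   # x_{n+2}
        if abs(xn - xc) <= tol:
            return xn
        xp, yp, dp = xc, yc, dc
        xc = xn
    return xc
\end{verbatim}
For instance, \verb|lmm2_solve(lambda x: x**3 - x - 1, lambda x: 3*x**2 - 1, 1.0)| converges to the root $1.32\ldots$ of Table~\ref{tab:numerical_examples_newton_hybrid}.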
The efficiency measure is defined as $p^{\frac{1}{w}}$ with $p$ the order of convergence and $w$ the number of evaluations per iteration \cite{gautschi}. Assuming the function itself and the derivative cost the same to evaluate, the $s=3$ LMM-based method has an efficiency measure of $\sqrt{2.91} \approx 1.71$. Compared to Newton's method, with an efficiency measure of $\sqrt{2} \approx 1.41$, this is certainly an improvement.
\subsection{Numerical examples}
Here, we provide a number of test cases and show how many iterations LMM-based root-finders take versus the number needed by Newton's method, see Table~\ref{tab:numerical_examples_newton_hybrid}. We have used a selection of different test cases with polynomials, exponentials, trigonometric functions, square roots and combinations thereof. For each of the test problems shown, the methods converged within a few iterations. Some problems were deliberately started near a maximum or minimum to see the behaviour when the derivatives are small.\par
The test computations were performed using the variable-precision arithmetic of MATLAB's Symbolic Math Toolbox. The number of digits was set to 300 while the convergence criterion used was
\begin{equation}
|x_{l+1} - x_l| \leq 10^{-\eta},
\end{equation}
with $\eta = 250$. The numerical convergence rates were computed assuming the error behaviour
\begin{equation}\label{eq:error_behavior}
|\varepsilon_{l+1}| = C | \varepsilon_{l}|^p,
\end{equation}
asymptotically as $l \to \infty$. The limiting value of the estimates for $p$ is displayed in Table~\ref{tab:numerical_examples_newton_hybrid}.
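Concretely, taking logarithms of \eqref{eq:error_behavior} at two successive steps and eliminating $C$ gives the running estimate
\begin{equation*}
p \approx \frac{\log\left(|\varepsilon_{l+1}|/|\varepsilon_{l}|\right)}{\log\left(|\varepsilon_{l}|/|\varepsilon_{l-1}|\right)},
\end{equation*}
which is one standard way of extracting $p$ from three successive errors.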
\begin{table}[h]
\centering
\caption{Test cases with iterations taken for Newton's method (subscript $N$) and the LMM-based method with $s=2$ (subscript $2$) and $s=3$ method (subscript $3$). }
\label{tab:numerical_examples_newton_hybrid}
\begin{tabular}{lrl|lllll}
function & root & $x_0$ & $\#$its$_N$ & $\#$its$_{2}$ & $\#$its$_{3}$ & $p_{2}$ & $p_{3}$\\ \hline
$x + e^x$ & $-0.57$ & $1.50$ & 11 & 8 & 8 & $2.73$ & $2.93$ \\
$\sqrt{x} -\cos(x)$ & $0.64$ & $0.50$ & 9 & 7 & 8 & $2.74$ & $2.91$ \\
$e^x -x^2+3x-2$ & $0.26$ & $0.00$ & 10 & 8 & 7 & $2.72$ & $2.94$ \\
$x^4-3x^2-3$ & $1.95$ & $1.30$ & 17 & 14 & 14 & $2.73$ & $2.92$ \\
$x^3-x-1$ & $1.32$ & $1.00$ & 12 & 9 & 9 & $2.73$ & $2.64$ \\
$e^{-x}-x^3$ & $0.77$ & $2.00$ & 13 & 10 & 10 & $2.73$ & $2.92$ \\
$5\big(\sin(x)+\cos(x)\big)-x$ & $2.06$ & $1.50$ & 11 & 9 & 9 & $2.73$ & $2.92$ \\
$x-\cos(x)$ & $0.74$ & $1.00$ & 9 & 7 & 7 & $2.72$ & $2.93$ \\
$\log(x-1)+\cos(x-1)$ & $1.40$ & $1.60$ & 12 & 9 & 9 & $2.73$ & $2.92$ \\
$\sqrt{1+x} -x$ & $1.62$ & $1.00$ & 9 & 7 & 7 & $2.73$ & $2.92$ \\
$\sqrt{e^x-x} - 2x$ & $0.54$ & $1.00$ & 11 & 8 & 7 & $2.73$ & $2.92$ \\ \hline \hline
\multicolumn{2}{l}{Total number of iterations} & & $124$ & $96$ & $95$ & & \\
\end{tabular}
\end{table}
Overall, Newton's method consistently displayed a quadratic convergence rate, which is why it is not shown in the table. The LMM-based methods, on the other hand, generally have a higher convergence rate that may vary somewhat from problem to problem. This is because the step sizes may vary slightly, while the convergence rates of the LMM-based methods hold only asymptotically, even with this many digits.
\subsection{Pathological functions}
A classical example of a pathological function for Newton's method is the hyperbolic tangent $\tanh(x)$. Students often believe that Newton's method converges for any monotone function until they are asked to find the root of $\tanh(x)$. Hence, we have used this function as a test case using standard double precision floating point arithmetic and a convergence criterion reading
\begin{equation}\label{eq:double_precision_criterion}
|x_{l+1} - x_l| \leq 2 \epsilon,
\end{equation}
with $\epsilon$ the machine precision. Newton's method fails to converge for starting values with approximately $|x_0|\geq1.089$, see Table~\ref{tab:tanh_test}. Our $s=2$ LMM-based method extends this range somewhat, converging for any starting value with roughly $|x_0| \leq 1.239$. The behaviour of the $s=3$ LMM-based method is similar, though it takes two more iterations to converge.
\begin{table}[]
\centering
\caption{Convergence history for $\tanh(x)$ of the three methods: Newton (subscript $N$) and the LMM-based methods with $s=2$ and $s=3$. Note that the root of $\tanh(x)$ is at $x=0$.}
\label{tab:tanh_test}
\begin{tabular}{r|r|r}
$x_N$ & $x_{(s=2)}$ & $x_{(s=3)}$ \\ \hline
$1.239$ & $1.239$ & $1.239$ \\
$-1.719$ & $-1.719$ & $-1.719$ \\
$6.059$ & $0.8045$ & $0.8045$ \\
$-4.583 \cdot 10^{4}$ & $0.7925$ & $-0.6806$ \\
$\mathrm{Inf}$ & $-0.7386$ & $1.377$ \\
& -$6.783 \cdot 10^{-3}$ & $-0.7730$ \\
& $9.323 \cdot 10^{-6}$ & $3.466 \cdot 10^{-2}$ \\
& $|x_{(s=2)}|< \epsilon$ & $-3.032 \cdot 10^{-4}$ \\
& & $1.831 \cdot 10^{-11}$ \\
& & $|x_{(s=3)}|<\epsilon$ \\
\end{tabular}
\end{table}
Starting at $x_0 = 1.239$, Newton's method diverges quickly, reaching $-\mathrm{Inf}$ after only 4 iterations. The LMM-based method, on the other hand, bounces between positive and negative values for about 5 iterations until it is close enough to the root. After that, the asymptotic convergence rate sets in and the root is quickly found, to within machine precision after 7 iterations.\par
Donovan et al.\cite{donovan_miller_moreland} developed another test function for which Newton's method fails; to be precise, it gives a false convergence result. The test function is given by
\begin{equation}\label{eq:DMM_test_function}
h(x) = \sqrt[3]{x}e^{-x^2},
\end{equation}
which is, in fact, infinitely steep near the root $x=0$, yet smooth, see Figure~\ref{fig:DMM_test}. Again, we used double precision arithmetic and \eqref{eq:double_precision_criterion} as a stopping criterion. The iterates of Newton's method diverge for any starting value except the exact root itself; nevertheless, Newton's method eventually reports a false convergence result once the increment $|x_{l+1}-x_l|$ falls below the tolerance. The $s=2$ and $s=3$ LMM-based methods converge when starting with $|x_0| \leq 0.1147$ for this problem, see Table~\ref{tab:DMM_test}.
\begin{figure}
\caption{Pathological test function $h(x)$ from \eqref{eq:DMM_test_function}.}
\label{fig:DMM_test}
\end{figure}
\begin{table}
\centering
\caption{Convergence history for $h(x)$ from \eqref{eq:DMM_test_function} of the three methods: Newton (subscript $N$) and the LMM-based methods with $s=2$ and $s=3$. Note that the root is at $x=0$.}
\label{tab:DMM_test}
\begin{tabular}{r|r|r}
$x_N$ & $x_{(s=2)}$ & $x_{(s=3)}$ \\ \hline
$0.1147$ & $0.1147$ & $0.1147$ \\
$0.2589$ & $-0.2589$ & $-0.2589$ \\
$1.0402$ & $0.1016$ & $0.1016$ \\
$1.6084$ & $9.993 \cdot 10^{-2}$ & $-5.648 \cdot 10^{-2}$ \\
$1.9407$ & $-0.2581$ & $0.1959$ \\
$2.2102$ & $9.840 \cdot 10^{-2}$ & $-0.1611$ \\
$2.4445$ & $9.810 \cdot 10^{-2}$ & $5.021 \cdot 10^{-2}$ \\
$2.6549$ & $-0.2344$ & $-7.190 \cdot 10^{-2}$ \\
$2.8478$ & $6.602 \cdot 10^{-2}$ & $4.947 \cdot 10^{-2}$ \\
$3.0270$ & $6.021 \cdot 10^{-2}$ & $-3.777 \cdot 10^{-3}$ \\
$3.1953$ & $-4.939 \cdot 10^{-2}$ & $3.027 \cdot 10^{-4}$ \\
$3.3543$ & $-4.019 \cdot 10^{-4}$ & $-6.875 \cdot 10^{-6}$ \\
$3.5056$ & $1.288 \cdot 10^{-4}$ & $1.216 \cdot 10^{-9}$ \\
$3.6502$ & $2.028 \cdot 10^{-10}$ & $-4.652 \cdot 10^{-15}$ \\
$3.7889$ & $-5.308 \cdot 10^{-15}$ & $|x_{(s=2)}|< \epsilon$ \\
$3.9225$ & $|x_{(s=2)}|< \epsilon$ & \\
\end{tabular}
\end{table}
Starting at the maximal $x_0=0.1147$ for instance, the LMM-based methods bounce several times between positive and negative $x$-values without making much headway. After that, the root is close enough and the asymptotic convergence rate sets in, reaching the root to within machine precision in a few steps.\par
We believe the reason that the LMM-based method has increased stability is due to the fact that it uses two points to evaluate the function and its derivative. In both cases, the iterations jump between positive and negative values, enclosing the root. In this fashion, the LMM-based method acts much like the \textit{regula falsi} method. Once the iterates are close enough to the root, the asymptotic convergence rate sets in and the iterates converge in but a few steps.
\section{A robust implementation}\label{sec:robust}
As with most open root-finding algorithms, the conditions under which the method is guaranteed to converge are rather restrictive. Therefore, we have designed a bracketing version of the LMM-based method that is guaranteed to converge. The algorithm is based on Brent's method, using similar conditions to catch either slow convergence or runaway divergence. This version of the LMM-based method does, however, require an enclosing bracket $[a,b]$ on which the function changes sign, i.e., $f(a) f(b)<0$. Alternatively, such a method can start out as an open method, switching to the bracketing method when a sign change is detected.\par
The algorithm consists of a cascade of methods increasing in accuracy but decreasing in robustness, similar to Brent's method. At the lowest level stands the most robust method, bisection, guarding against steps outside the current search bracket. Bisection is guaranteed to converge, but does so rather slowly. On the highest level we use the full $s=3$ LMM-based method discussed in the previous section. Thus, in the best possible case, the method will converge with a rate of $2.91$. The method is, by virtue of the bisection method, guaranteed to converge to a root.\par
Like Brent's method and Dekker's method, the LMM-based method keeps track of three points $a$, $b$ and $c$. Here, $b$ is the best estimate of the root so far, $c$ is the previous value for $b$ while $a$ is the contrapoint so that $\mathrm{int}[a,b]$ encloses the root. Ideally, all three values are used to compute the next value for $b$. However, extra conditions are added to ensure the inverse actually makes sense on the interval $\mathrm{int}[a,c]$.\par
Consider the case where the sign of $f^\prime(c)$ is not equal to the sign of $\frac{f(b)-f(a)}{b-a}$, but the sign of $f^\prime(b)$ is. It follows that there is an extremum between $b$ and $c$, and the inverse function does not exist in the interval $\mathrm{int}[a,c]$, leading to an error if we were to compute the inverse interpolation.\par
Thus, the following condition should be applied to each derivative value: the sign of the derivative needs to be the same as the sign of the secant slope on $\mathrm{int}[a,b]$, i.e.,
\begin{equation}\label{eq:derivative_condition}
\mathrm{sgn }\left( f^\prime(z) \right) = \mathrm{sgn } \left( \frac{f(b)-f(a)}{b-a} \right),
\end{equation}
where $z=a,b,c$. Only when the derivative at a point satisfies \eqref{eq:derivative_condition} can the derivative sensibly contribute to the inverse. Otherwise the derivative information should be discarded, leading to lower-order interpolation. If all derivatives are discarded, the resulting interpolation is inverse quadratic or the secant method.\par
Ultimately, the method provides an interval on which the function $f$ changes sign with a relative size of some given tolerance $\delta$, i.e.,
\begin{equation}\label{eq:brent_criterion}
|a - b| \leq \delta |b|.
\end{equation}
We shall use $\delta = 2 \epsilon$ in all our examples, with $\epsilon$ the machine precision. As an input, the algorithm has $f$, $f^\prime$, $a$ and $b$ such that $f(a)f(b)<0$. The algorithm can be described in the following way:
\begin{enumerate}
\item If all three function values are different, use $s=3$, otherwise use $s=2$.
\item Check the sign of the derivatives at points $a$, $b$ and $c$, include the derivatives that have the proper sign.
\item If the interpolation step is worse than a bisection step, or outside the interval $[a,b]$, use bisection.
\item If the step is smaller than the tolerance, use the tolerance as step size.
\item If the convergence criterion is met, exit, otherwise go to 1.
\end{enumerate}
The first step determines the number of history points that can be used. The second step determines which derivative values should be taken into account; a sketch of this screening step is given below. In effect, only the second step is essentially different from Brent's method, with all the following steps exactly the same \cite{brent}. The extra conditions on the derivatives give rise to a selection of 12 possible root-finders, including inverse quadratic interpolation and the secant method. Our method can therefore be seen as building another 10 options on top of Brent's method. Naturally, sufficiently close to the root, the derivative conditions will be satisfied at all three points and the method will use the full LMM method with $s=3$.
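As an illustration, a minimal sketch of the screening performed in step 2, i.e. the sign condition of Eq.\eqref{eq:derivative_condition}, is given below; the function names are ours, and in an actual implementation the function and derivative values would be cached from earlier iterations rather than re-evaluated.
\begin{verbatim}
import math

def admissible_derivatives(f, df, a, b, c):
    # Keep the derivative at z in {a, b, c} only if sgn(f'(z)) matches
    # the sign of the secant slope on [a, b]; discarded derivatives
    # lead to lower-order inverse interpolation.
    secant_sign = math.copysign(1.0, (f(b) - f(a)) / (b - a))
    return {z: df(z) for z in (a, b, c)
            if math.copysign(1.0, df(z)) == secant_sign}
\end{verbatim}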
\subsection{Comparison with Brent's method}
Here we give a few examples of the robust LMM-based root-finding algorithm discussed above compared to Brent's method. As a performance measure, we use the number of iterations. Standard double precision arithmetic is employed, as that provides sufficient material for comparison. For both methods, the stopping criterion is given by \eqref{eq:brent_criterion}: the relative size of the interval must be sufficiently small.
\begin{table}[h]
\centering
\caption{Test cases with iterations taken for Brent's method and the LMM-based method. Subscript $B$ represents Brent while subscript $LMM$ represents the LMM-based method.}
\label{tab:examples_brent_hybrid}
\begin{tabular}{lrl|ll}
function & root & $[a,b]$ & $\#$its$_B$ & $\#$its$_{LMM}$ \\ \hline
$x + e^x$ &$-0.57$& $[-1,1]$ & $6$ & $4$ \\
$\sqrt{x} -\cos(x)$&$0.64$& $[0,2]$ & $8$ & $4$ \\
$e^x -x^2+3x-2$ &$0.26$& $[-1,1]$ & $5$ & $3$ \\
$x^4-3x^2-3$ & $1.95$ & $[1,3]$ & $10$ & $8$ \\
$x^3-x-1$ & $1.32$ & $[0,2]$ & $29$ & $6$ \\
$e^{-x}-x^3$ & $0.77$ & $[0,2]$ & $9$ & $4$ \\
$5\big(\sin(x)+\cos(x)\big)-x$&$2.06$ &$[0,4]$&$45$ & $6$ \\
$x-\cos(x)$ & $0.74$ & $[0,1]$ & $7$ & $3$ \\
$\log(x-1)+\cos(x-1)$&$1.39$&$[1.2,1.6]$& $31$ & $4$ \\
$\sqrt{1+x} -x$ & $1.62$ & $[0,2]$ & $5$ & $3$ \\
$\sqrt{e^x-x} - 2x$&$0.54$& $[-1,2]$ & $9$ & $4$ \\ \hline \hline
Total number of iterations & & & $164$ & $49$ \\
Total number of function evaluations & && $164$ & $98$
\end{tabular}
\end{table}
Table~\ref{tab:examples_brent_hybrid} shows that for most functions, both Brent's method and the LMM-based method take a comparable number of iterations. However, in some cases, the difference is considerable. In the worst case considered here, Brent's method takes $7.5$ times as many iterations to converge. In terms of efficiency index, Brent's method should be superior with an efficiency index of $1.84$ against $1.71$ for the LMM-based method. Taken over the whole set of test functions, however, Brent's method takes more than three times as many iterations in total, leading to a significant increase in function evaluations. We therefore conclude that, in practice, the LMM-based root-finder is the better choice.
\section{Conclusions}\label{sec:conclusions}
We have discussed root-finders based on full linear multistep methods. Such LMM-based methods may be interpreted as inverse polynomial (Hermite) interpolation methods, resulting in a simple and general convergence analysis. Furthermore, we have proven a fundamental barrier for LMM-based root-finders: their convergence rate cannot exceed $d+2$, where $d$ is the number of derivatives used in the method.\par
The results indicate that, compared to the Adams-Bashforth root-finder methods of Grau-S\'anchez et al. \cite{grau_adams}, any full LMM-based method with $s \geq 2$ has a higher convergence rate. As ODE solvers, full LMMs are typically not zero-stable and special choices of the coefficients have to be made. Employed as root-finders, on the other hand, LMMs turn out to be stable, owing to the rapid decrease of the step size. This allows the use of full LMMs that are otherwise not zero-stable.\par
Contrary to the Adams-type methods, the full LMM-based root-finders can achieve the convergence rate of $3$ in the limit that all history points are used. The $s=2$ and $s=3$ methods, $s$ being the number of history points, were explicitly constructed and provide a convergence rate of $2.73$ and $2.91$, respectively. Numerical experiments confirm these predicted convergence rates. Furthermore, application to pathological functions where Newton's method diverges show that the LMM-based methods also have enhanced stability properties.\par
Finally, we have implemented a robust LMM-based method that is guaranteed to converge when provided with an enclosing bracket of the root. The resulting robust LMM root-finder algorithm is a cascade of twelve root-finders increasing in convergence rate but decreasing in reliability. At the base sits bisection, so that the method is indeed guaranteed to converge to the root. At the top resides the $s=3$ LMM-based root-finder, providing a maximal convergence rate of $2.91$.\par
In terms of efficiency index, Brent's method is theoretically the preferred choice with $1.84$ compared to $1.71$ for the LMM-based method. However, numerical examples show that the increased convergence rate leads to a significant decrease in the total number of function evaluations over a range of test functions. Therefore, in practical situations, provided the derivative is available, the LMM-based method performs better.
\end{document}
\begin{document}
\title{Controlling Arbitrary Observables in Correlated Many-body Systems}
\date{\today}
\author{Gerard McCaul}
\email{[email protected]}
\affiliation{Tulane University, New Orleans, LA 70118, USA}
\author{Christopher Orthodoxou}
\email{[email protected]}
\affiliation{Department of Physics, King's College London, Strand, London, WC2R 2LS, U.K.}
\author{Kurt Jacobs}
\affiliation{U.S. Army Research Laboratory, Computational and Information Sciences Directorate, Adelphi, Maryland 20783, USA}
\affiliation{Department of Physics, University of Massachusetts at Boston, Boston, MA 02125, USA}
\affiliation{Hearne Institute for Theoretical Physics, Louisiana State University, Baton Rouge, LA 70803, USA}
\author{George H. Booth}
\email{[email protected]}
\affiliation{Department of Physics, King's College London, Strand, London, WC2R 2LS, U.K.}
\author{Denys I. Bondar}
\email{[email protected]}
\affiliation{Tulane University, New Orleans, LA 70118, USA}
\begin{abstract}
Here we present an expanded analysis of a model for the manipulation and control of observables in a strongly correlated, many-body system, which was first presented in [McCaul \textit{et al.}, eprint: arXiv:1911.05006]. A field-free, non-linear equation of motion for controlling the expectation value of an essentially arbitrary observable is derived, together with rigorous constraints that determine the limits of controllability. We show that these constraints arise from the physically reasonable assumptions that the system will undergo unitary time evolution, and has enough degrees of freedom for the electrons to be mobile. Furthermore, we give examples of multiple solutions to generating target observable trajectories when the constraints are violated. Ehrenfest theorems are used to further refine the model, and provide a check on the validity of numerical simulations. Finally, the experimental feasibility of implementing the control fields generated by this model is discussed.
\end{abstract}
\maketitle
\section{Introduction}
The study of the control of quantum systems has a rich history \citep{Werschnik2007}, encompassing a diverse array of strategies. This includes both local control \citep{Kosloff1992,Koslofflocalcontrol} and optimal control~\citep{Serban2005,PhysRevLett.106.190501} that steers a system to a final target state using iterative optimisation~\citep{PhysRevA.37.4950, RevModPhys.80.117, Glaser2015}, possibly under additional constraints~\citep{SpectralConstraints, Kosloffoptimalconstraints}. Separate from this is \emph{tracking control} \citep{PhysRevA.72.023416,PhysRevA.98.043429,PhysRevA.84.022326,Campos2017, doi:10.1063/1.1582847,doi:10.1063/1.477857}, where a physical system is evolved in such a way that a chosen observable conforms to (or ``tracks'') a pre-selected trajectory.
Examples of tracking control abound, with applications as diverse as singularity-free tracking of molecular rotors \citep{PhysRevA.98.043429}, optimising dynamics within the density matrix renormalisation group \citep{PhysRevLett.106.190501, PhysRevA.84.022326}, and spectral dynamical mimicry, where a shaped pulse is used to induce an arbitrary desired spectrum in an atomic system \citep{Campos2017}. In a recent paper \citep{companionletter} a model for the tracking control of a many-electron system was presented without derivation. Here, we expand greatly upon that work, in three principal directions.
First, the tracking model used in Ref.\citep{companionletter} is motivated in Sec.\ref{sec:Tracking}. Starting from general considerations of an $N$-electron Hamiltonian, a comprehensive derivation of the tracking equation is presented. Additionally, in Sec.\ref{sec:TrackingConstraints} we derive the precise constraints on tracking necessary both to avoid singularities and guarantee a unique evolution for the system. A simple example where these constraints are not obeyed and multiple solutions for the tracking field are possible is also provided.
Given that in tracking control, one recovers the expected observable trajectory by design, a method of verifying that numerical calculations are physically valid is vital. To this end, we detail in Sec.\ref{sec:Ehrenfest} the application of an Ehrenfest theorem to the model as a way both to verify simulations, and remove nonphysical discontinuities from control fields.
Finally, with the purpose of exploring further the experimental requirements of the control protocol, we examine in Sec.\ref{sec:Experimental} the effect of introducing a frequency cut-off for the control field used to create the `driven imposters' detailed in Ref.\citep{companionletter}. We close in Sec.\ref{sec:Discussion} with a discussion of the results and questions for future work.
\section{Tracking Model \label{sec:Tracking}}
\subsection{Background}
Our goal is to implement a tracking control \citep{doi:10.1063/1.1582847} model for a general $N$-electron system subjected to a laser pulse described by the Hamiltonian (using atomic units)\citep{hagenkleinert2016}
\begin{align}
\hat{H} &= \sum_\sigma\int \frac{{\rm d}x}{2}\hat{\psi}^{\dagger}(x) \left[ i \partial_x -A(t) \right]^2 \hat{\psi}(x) \nonumber \\
&+ \sum_{\sigma \sigma^\prime} \int \frac{{\rm d}x {\rm d}x'}{2} \hat{\psi}_{\sigma^\prime}^{\dagger}(x')\hat{\psi}_{\sigma}^{\dagger}(x) U(x-x') \hat{\psi}_{\sigma}(x) \hat{\psi}_{\sigma^\prime}(x')
\label{eq:GeneralHamiltonian}
\end{align}
where $A(t)$ is the field vector potential and the $\hat{\psi}_{\sigma}(x)$ are standard fermionic field operators satisfying $\left\{\hat{\psi}_{\sigma^\prime}^{\dagger}(x'),\hat{\psi}_{\sigma}(x)\right\}=\delta_{\sigma \sigma^\prime} \delta(x-x^\prime)$ . Ultimately, we wish to calculate the control field $A_T(t)$, such that the trajectory of an expectation $\langle\hat{O}(t)\rangle$ follows some desired function $O_T(t)$ \citep{PhysRevA.72.023416,PhysRevA.98.043429,PhysRevA.84.022326,Campos2017}. For the sake of specificity, here we derive the control field $A_T(t)$ necessary to control the current expectation, but emphasise that an expression can be derived for an arbitrary expectation using the technique described in Sec.\ref{sec:arbobservable}. We first re-express the model in an explicitly self-adjoint form using
\begin{align}
\hat{\psi}^{\dagger}(x) \left[ i \partial_x -A(t) \right]^2 &\hat{\psi}(x) \notag =\\& \partial_x\left[{\rm e}^{iA(t)} \hat{\psi}_{\sigma}(x)\right]^{\dagger} \partial_x\left[{\rm e}^{iA(t)} \hat{\psi}_{\sigma}(x)\right].
\end{align}
In this form, one may straightforwardly construct a continuity equation for the density operator $\hat{\rho}(x) = \hat{\psi}^{\dagger}(x) \hat{\psi}(x)$:
\begin{align}\label{Eq:Continuity}
\frac{{\rm d}}{{\rm d}t} \hat{\rho}(x) = i \left[\hat{H}, \hat{\rho}(x)\right]
= -\partial_x \hat{J}(x),
\end{align}
which defines the current operator $\hat{J}(x)$,
\begin{align}
\hat{J}(x) =& \frac{1}{2i}\left[ \hat{\psi}^{\dagger}(x)\partial_x \hat{\psi}(x) - \partial_x \hat{\psi}^{\dagger}(x) \hat{\psi}(x) \right] \notag \\ &+ A(t) \hat{\psi}^{\dagger}(x) \hat{\psi}(x) . \label{eq:currentcontinuum}
\end{align}
The current expectation is obtained from this expression by taking expectations and integrating over space, i.e. $\int {\rm d}x\left< \hat{J}(x)\right>=J(t)$. Noting that $N = \left\langle \int \hat{\rho}(x) dx \right\rangle$ is a conserved quantity, one may straightforwardly invert Eq.\eqref{eq:currentcontinuum} to obtain the $A_T(t)$ that corresponds to $\int {\rm d}x\left< \hat{J}(x)\right>=J_T(t)$:
\begin{align}\label{Eq:EhrenfestCurrent}
A_T(t) =& \frac{i}{2N} \int dx \left\langle \hat{\psi}^{\dagger}(x)\partial_x \hat{\psi}(x) - \partial_x \hat{\psi}^{\dagger}(x) \hat{\psi}(x) \right\rangle(t) \notag \\ &+ \frac{J_T(t)}{N}.
\end{align}
For systems with Bosonic statistics, it is easy to show that the control field equation is almost identical, but the definition of the current operator picks up a negative sign $\hat{J}(x)\to -\hat{J}(x)$.
\subsection{Tracking Control in A Discrete Model}
While the equation for the tracking control field will in principle describe tracking for an $N$-electron system, in this paper we will provide a concrete illustration of its use with a lattice model. To do so, we first discretise the model Hamiltonian, using $a$ as the lattice constant such that $x=ja$ and $x^\prime =ka$:
\begin{align}
\int dx &\to \sum_{j} a \Longrightarrow
\delta(x-x') \to \frac{\delta_{jk}}{a}, \\
\hat{\psi}_{\sigma}(x) &\to \frac{\hat{c}_{j\sigma}}{\sqrt{a}} \Longrightarrow
\left\{ \hat{c}_{j\sigma}^{\dagger}, \hat{c}_{k\sigma'} \right\} = \delta_{jk} \delta_{\sigma \sigma'}, \\
\partial_x g(x) &\to [g_{j+1} - g_j]/a.
\end{align}
After discretisation and assuming periodic boundary conditions, the Hamiltonian takes the form:
\begin{align}
\hat{H} =& -\sum_{j,\sigma} \frac{1}{2a} \left( e^{-i\Phi(t)} \hat{c}_{j+1 \sigma}^{\dagger} \hat{c}_{j\sigma} + e^{i\Phi(t)} \hat{c}_{j \sigma}^{\dagger} \hat{c}_{j+1, \sigma} \right) \notag\\
& +\sum_{j,\sigma} \frac{1}{a}
\hat{c}_{j\sigma}^{\dagger} \hat{c}_{j \sigma}+ \sum_{j,k,\sigma,\sigma'} \frac{1}{2a^2} U_{j-k} \hat{c}_{k\sigma'}^{\dagger} \hat{c}_{k \sigma}^{\dagger} \hat{c}_{j\sigma} \hat{c}_{j\sigma'}
\end{align}
where we have set $\Phi(t)=aA(t)$. From this discretised Hamiltonian, one is able to derive a continuity equation for $\hat{\rho_j}=\sum_{ \sigma}\hat{c}_{j\sigma}^{\dagger} \hat{c}_{j \sigma}$:
\begin{align}
\frac{{\rm d}\hat{\rho_j}}{{\rm d}t}&=\frac{1}{a}(\hat{J}_j-\hat{J}_{j-1}), \\
\hat{J}_j&=-i\sum_{\sigma}\left({\rm e}^{-i\Phi\left(t\right)}\hat{c}_{j\sigma}^{\dagger}\hat{c}_{j+1\sigma}-{\rm h.c.}\right).
\end{align}
This continuity equation defines the current operator $\hat{J}=\sum_j \hat{J}_j$, and has the important property of being composed only from the kinetic part of the Hamiltonian. This means the current operator is not explicitly dependent on the form of the interaction $U_{j-k}$. As a result of this property, the construction of a method to track the expectation of the current operator does not depend on the specific form of the Hamiltonian's interparticle interactions. For this reason, we will restrict our derivation to a specific Hamiltonian, but emphasise that the results may be applied to any model with the form of Eq.(\ref{eq:GeneralHamiltonian}).
\begin{figure}
\caption{Schematic representation of the Fermi-Hubbard model. Electrons hop between sites with an on-site repulsion of $U$, and a hermitian hopping amplitude scaled by the applied field $\Phi(t)$.}
\label{fig:Hamiltonian}
\end{figure}
From this point forward we will use the 1D Fermi-Hubbard model \citep{Tasaki1998} (see Fig.~\ref{fig:Hamiltonian} for a schematic representation) as a concrete example of the tracking strategy. This model has the Hamiltonian
\begin{align}
\hat{H}\left(t\right)= & -t_{0}\sum_{j,\sigma}\left({\rm e}^{-i\Phi\left(t\right)}\hat{c}_{j\sigma}^{\dagger}\hat{c}_{j+1\sigma}+{\rm e}^{i\Phi\left(t\right)}\hat{c}_{j+1\sigma}^{\dagger}\hat{c}_{j\sigma}\right)\nonumber \\
& +U\sum_{j}\hat{c}_{j\uparrow}^{\dagger}\hat{c}_{j\uparrow}\hat{c}_{j\downarrow}^{\dagger}\hat{c}_{j\downarrow}.
\label{eq:Hamiltonian}\end{align}
As in the continuum case, we wish to find the vector potential that will produce a specified current $J_T\left(t\right)=\left\langle \hat{J}\right\rangle $. To do so, we take the current expectation
\begin{equation}
\hat{J}=-iat_{0}\sum_{j,\sigma}\left({\rm e}^{-i\Phi\left(t\right)}\hat{c}_{j\sigma}^{\dagger}\hat{c}_{j+1\sigma}-{\rm h.c.}\right),\label{eq:currentoperator}
\end{equation}
and rearrange for $\Phi$, expressing
the nearest neighbour expectation in a polar form:
\begin{equation}
\left\langle \psi (t) \left|\sum_{j,\sigma} \hat{c}_{j\sigma}^{\dagger}\hat{c}_{j+1\sigma}\right| \psi (t) \right\rangle =R\left(\psi\right){\rm e}^{i\theta\left(\psi\right)}. \label{neighbourexpectation}
\end{equation}
In both Eq.\eqref{neighbourexpectation} and later expressions, the argument $\psi$ indicates that the expression is dependent on a functional of $\ket{\psi}\equiv \ket{\psi(t)}$. Eq.\eqref{neighbourexpectation} can be used in conjunction with Eq.\eqref{eq:currentoperator} to yield
\begin{align}
J\left(t\right)= & -i a t_{0} R\left(\psi\right)\left({\rm e}^{-i\left[\Phi\left(t\right)-\theta\left(\psi\right)\right]}-{\rm e}^{i\left[\Phi\left(t\right)-\theta\left(\psi\right)\right]}\right)\nonumber \\
= & -2 a t_{0} R \left(\psi\right)\sin(\Phi\left(t\right)-\theta\left(\psi\right)).\label{eq:currentexpectation}
\end{align}
An important caveat that should be noted here is that if one were to apply a time dependent rotation to the system, the current expectation would no longer depend explicitly on $\Phi(t)$ \citep{PhysRevA.95.023601}, but instead there would remain an implicit dependence through the state of the system $\ket{\psi}$. This is important, as in order to define a control field which reproduces a tracking current $J_T(t)$, we invert Eq.(\ref{eq:currentexpectation}). From this inversion we obtain the tracking control field $\Phi_T(t, \psi)$, which takes the desired current expectation as a parameter,
\begin{equation}
\Phi_T\left(t, \psi \right)=\arcsin\left[-X(t,\psi)\right]+\theta\left(\psi\right),
\label{eq:phi_track}
\end{equation}
in which we have defined
\begin{align}
X(t,\psi) & = \frac{J_T\left(t\right)}{2at_{0}R\left(\psi\right)} .
\end{align}
From Eq.(\ref{eq:phi_track}) it is possible to eliminate the control field entirely from the model Hamiltonian using the equality
\begin{align}
{\rm e}^{\pm i\Phi_{T}\left(t,\psi\right)}={\rm e}^{\pm i\theta\left(\psi\right)}\left[\sqrt{1- X^2(t,\psi)}\mp iX(t,\psi)\right],
\end{align}
where the above equality is obtained via Euler's formula and $\cos\left(\arcsin\left(x\right)\right)=\sqrt{1-x^{2}}$. From this, we are able to define the ``tracking Hamiltonian'' $\hat{H}_T (J_T(t),\psi)$, which takes the target current $J_T(t)$ as a parameter:
\begin{align}
\hat{H}_{T}\left(J_T(t), \psi\right) & = \sum_{\sigma,j} \left[ P_+{\rm e}^{-i\theta\left(\psi \right)}\hat{c}_{j\sigma}^{\dagger}\hat{c}_{j+1\sigma} + \mbox{H.c.} \right] \nonumber \\
& \;\;\;\; + U\sum_{j}\hat{c}_{j\uparrow}^{\dagger}\hat{c}_{j\uparrow}\hat{c}_{j\downarrow}^{\dagger}\hat{c}_{j\downarrow}, \label{eq:trackingHamiltonian} \\
P_{\pm} & = -t_{0}\left(\sqrt{1 - X^2(t,\psi)}\pm i X(t,\psi)\right).
\end{align}
This leads to a field-free, non-linear evolution for the wavefunction given by
\begin{align}
i\frac{{\rm d}\left|\psi\right>}{{\rm d}t}=\hat{H}_T\left(J_T(t), \psi\right)\left|\psi\right>, \label{eq:eqnofmotion}
\end{align}
which is equivalent to evolving the system with the original Hamiltonian given in Eq.(\ref{eq:Hamiltonian}) and the usual Schr{\"o}dinger equation $i\frac{{\rm d}\left|\psi\right>}{{\rm d}t}=\hat{H}\left(t\right)\left|\psi\right>$, under the additional constraint that $\Phi(t)$ is chosen such that $\langle \hat{J}(t)\rangle =J_T (t)$. After solving Eq.\eqref{eq:eqnofmotion}, it is also possible to recover the tracking field $\Phi_T(t)$ via Eq.\eqref{eq:phi_track}.
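As a concrete illustration of Eq.\eqref{eq:phi_track}, once the nearest neighbour expectation of Eq.\eqref{neighbourexpectation} has been evaluated for the current state, the tracking field at a given instant reduces to a few lines of code. The sketch below is our own, with the expectation supplied as a complex number.
\begin{verbatim}
import numpy as np

def tracking_field(J_target, K, a, t0):
    # Eq. (phi_track): Phi_T = arcsin(-X) + theta, where
    # K = <sum_{j,sigma} c^dag_{j,sigma} c_{j+1,sigma}> = R exp(i theta)
    # and X = J_target / (2 a t0 R); requires |X| < 1 and R > 0.
    R, theta = np.abs(K), np.angle(K)
    X = J_target / (2.0 * a * t0 * R)
    return np.arcsin(-X) + theta
\end{verbatim}
In a full simulation this field (or, equivalently, the field-free Hamiltonian of Eq.(\ref{eq:trackingHamiltonian})) is then used to propagate $\ket{\psi}$ over the next time step.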
\subsection{Tracking Arbitrary Observables \label{sec:arbobservable}}
Finally, we extend the derivation for tracking current to an arbitrary observable $\hat{O}=\hat{O}^\dagger$ whose expectation $O(t)=\left\langle \hat{O}\right\rangle$ is not a function of $\Phi$. In this case the time derivative is
\begin{align}
\frac{{\rm d}O(t)}{{\rm d} t}=&it_{0}\sum_{j,\sigma}\text{\ensuremath{\left({\rm e}^{-i\Phi\left(t\right)}\left\langle \left[\hat{c}_{j\sigma}^{\dagger}\hat{c}_{j+1,\sigma},\hat{O}\right]\right\rangle +{\rm h.c.} \right)}}
\nonumber \\ &-iU\sum_{j}\left\langle \left[\hat{c}_{j\uparrow}^{\dagger}\hat{c}_{j\uparrow}\hat{c}_{j\downarrow}^{\dagger}\hat{c}_{j\downarrow},\hat{O}\right]\right\rangle .
\end{align}
From this evolution, we assign
\begin{align}
\sum_{j,\sigma}\left\langle \left[\hat{c}_{j\sigma}^{\dagger}\hat{c}_{j+1,\sigma},\hat{O}\right]\right\rangle =R_{O}{\rm e}^{i\theta_{O}}, \\
B=-iU\sum_{j}\left\langle \left[\hat{c}_{j\uparrow}^{\dagger}\hat{c}_{j\uparrow}\hat{c}_{j\downarrow}^{\dagger}\hat{c}_{j\downarrow},\hat{O}\right]\right\rangle.
\end{align}
With this substitution we obtain an expression for the derivative of the observable in terms of the control field:
\begin{align}
\frac{{\rm d}O(t)}{{\rm d} t}=-2t_{0}R_{O}\sin\left(\Phi-\theta_{O}\right)+B.
\end{align}
This can be inverted to obtain the tracking control field for an arbitrary observable
\begin{align}
\Phi_{O}=\arcsin\left(\frac{B-\frac{{\rm d}O}{{\rm d} t}}{2t_{0}R_{O}}\right)+\theta_{O}.
\end{align}
From this a tracking Hamiltonian and constraint can be derived using the methods presented previously. The theoretical considerations in the rest of this paper may be applied to tracking an arbitrary variable, but in the interests of clarity we shall restrict our attention to tracking of the current expectation using Eq.(\ref{eq:trackingHamiltonian}).
\section{Tracking Constraints \label{sec:TrackingConstraints}}
In this section we prove the statement:
\emph{For a finite system, if the wavefunction $\ket{\psi}\equiv\ket{\psi(t)}$ solves Eq.\eqref{eq:eqnofmotion}, and satisfies the constraints}
\begin{align}
\left|X(t, \psi)\right|&<1-\epsilon_1 \label{eq:constraint2}, \\
R(\psi)&>\epsilon_2, \label{eq:constraint1}
\end{align}
\emph{where $\epsilon_1, \epsilon_2$ are any positive constants, then $\ket{\psi}$ is a unique solution of Eq.\eqref{eq:eqnofmotion} and therefore by Eq.\eqref{eq:phi_track}, $\Phi_T(t)$ is a unique field which solves the current tracking problem.}
Both the constraints given by Eqs.(\ref{eq:constraint2},\ref{eq:constraint1}) are necessary conditions for $\hat{H}_T\left(J_T(t), \psi\right)$ to be \emph{Lipschitz continuous} (LC) over $\ket{\psi}$ \citep{geraldfolland2007}. In this case, the Picard-Lindel{\"o}f theorem guarantees $\ket{\psi}$ has a unique solution depending on its initial value when being evolved by the tracking Hamiltonian \citep{kentnagle2011}.
In Sec.\ref{sec:formalproof}, we show formally that under the constraints given by Eqs.(\ref{eq:constraint2},\ref{eq:constraint1}), the tracking Hamiltonian is LC, while in Sec.\ref{sec:motivation} we provide a physical motivation for these constraints. Finally in Sec.\ref{sec:multiple} we provide a simple example where the derived constraints do not hold, and multiple solutions for the tracking field are possible.
\subsection{Proving Lipschitz continuity \label{sec:formalproof}}
We define the $L_2$ norm $\Lnorm{\ket{\psi}}=\sqrt{\braket{\psi}{\psi}}$ and spectral norm \citep{horn1985matrix}:
\begin{equation}
\specnorm{\hat{A}}=\sup_{\braket{\psi}{\psi}=1}\Lnorm{\hat{A}\ket{\psi}}.
\end{equation}
These norms obey a submultiplicative property \citep{borzi},
\begin{equation}
\Lnorm{\hat{A}\ket{\psi}}\leq\specnorm{\hat{A}}\Lnorm{\ket{\psi}}
\end{equation}
which when combined with the Cauchy-Schwarz inequality yields:
\begin{equation}
\left|\left\langle \phi\left|\hat{A}\right|\psi \right\rangle\right|\leq \Lnorm{\ket{\phi}}\Lnorm{\hat{A}\ket{\psi}}\leq\Lnorm{\ket{\phi}}\specnorm{\hat{A}}\Lnorm{\ket{\psi}}. \label{eq:Cauchy}
\end{equation}
We now proceed to proving that for the set of wavefunctions which obey Eqs.(\ref{eq:constraint2}, \ref{eq:constraint1}), the following inequality holds:
\begin{equation}
\Lnorm{\hat{H}_T\left(J_T(t), \psi\right)\ket{\psi}- \hat{H}_T\left(J_T(t), \phi\right)\ket{\phi}} \leq L_H \Lnorm{\ket{\psi}-\ket{\phi}} \label{eq:TrackingLipschitz}
\end{equation}
where $L_H$ is some finite constant, and is the definition of LC for the function $\hat{H}_T\left(J_T(t), \psi\right)\ket{\psi}$. In order to prove this, it is convenient to establish some properties both for operators and functionals of $\ket{\psi}$.
First, in finite dimensions all linear operators are bounded, which implies they are also LC over the whole Hilbert space:
\begin{equation}
\Lnorm{\hat{A}\left(\ket{\psi}-\ket{\phi}\right)}\leq \specnorm{\hat{A}}\Lnorm{\ket{\psi}-\ket{\phi}}.
\end{equation}
Additionally, the expectation of linear operators $\left\langle \psi \left|\hat{A} \right| \psi \right\rangle =A(\psi)$ is also LC on the space of wavefunctions ($\Lnorm{\psi}=1$). This is demonstrated by taking the identity
\begin{align}
A(\psi)-&A(\phi)=\left\langle\psi\left|\hat{A}\right|\psi\right\rangle- \left\langle\phi\left|\hat{A}\right|\phi\right\rangle \notag\\ &=\left\langle\psi\right|\hat{A}\left( \left|\psi\right\rangle-\left|\phi \right\rangle\right)- \left(\left\langle\phi \right|-\left\langle\psi\right| \right)\hat{A}\left|\phi\right\rangle,
\end{align}
and applying the triangle inequality $\abs{x+y}\leq\abs{x}+\abs{y}$ to its norm:
\begin{align}
\abs{A(\psi)-A(\phi)} \leq2\specnorm{\hat{A}} \Lnorm{\left|\psi \right\rangle-\left|\phi \right\rangle}. \label{eq:expectationinequality}
\end{align}
More generally, an arbitrary functional of $\ket{\psi}$, $f:\ket{\psi}\to\mathbb{C}$ is LC over $\ket{\psi}$ if for all $\ket{\psi},\ket{\phi}$ in its domain, it satisfies the inequality
\begin{equation}
\ensuremath{\abs{f\left(\psi\right)-f\left(\phi\right)}\leq}L_f\Lnorm{\ket{\psi}-\ket{\phi}}
\end{equation}
where $L_f$ is some finite constant. Taking two functionals $f\left(\psi\right)$, $g\left(\psi\right)$, which are LC over $\ket{\psi}$ with Lipschitz constants $L_f$ and $L_g$, then the norm of their product $h\left(\psi\right)=f\left(\psi\right)g\left(\psi\right)$
is:
\begin{align}
\abs{h(\psi)-h(\phi)} &=\abs{\left(f(\psi)-f(\phi)\right)g(\psi)+f(\phi)\left(g(\psi)-g(\phi)\right)}\nonumber \\
&\leq\abs{f(\psi)-f(\phi)}\abs{g(\psi)}+\abs{f(\phi)}\abs{g(\psi)-g(\phi)} \nonumber \\
&\leq \left( L_f\abs{g(\psi)}+L_g\abs{f(\phi)} \right)\ensuremath{\Lnorm{\ket{\psi}-\ket{\phi}}} \label{eq:Lipschitzproductscalar}.
\end{align}
This means that if the functionals $f(\psi)$, $g(\psi)$ are LC \emph{and} bounded over the domain of $\psi$ then their product is also LC. In the case of a product between an operator and an LC functional, $f(\psi)\hat{A}$, a similar result to Eq.\eqref{eq:Lipschitzproductscalar} is obtained:
\begin{align}
\Lnorm{f(\psi)\hat{A}\ket{\psi}-f(\phi)\hat{A}\ket{\phi}} &\notag \\ \leq \specnorm{\hat{A}}&\left( L_f+\abs{f(\psi)} \right)\ensuremath{\Lnorm{\ket{\psi}-\ket{\phi}}} \label{eq:Lipschitzproductmix},
\end{align}
i.e. if $f(\psi)$ is bounded and LC, $f(\psi)\hat{A}$ is also LC. Lastly, sums of any LC operators or functionals will themselves be LC by the triangle inequality.
Equipped with these properties, the most direct route to proving Eq.\eqref{eq:TrackingLipschitz} is to prove each of the constituent components of Eq.(\ref{eq:trackingHamiltonian}) are both LC and bounded, which by Eqs.(\ref{eq:Lipschitzproductscalar}, \ref{eq:Lipschitzproductmix}) and the triangle inequality is sufficient to prove that the tracking Hamiltonian is itself LC in $\psi$.
The relevant parts of the Hamiltonian for which Lipschitz continuity over $\ket{\psi}$ and boundedness must be demonstrated are $\rm{e}^{ i\theta(\psi)}$ and $P_\pm$. To prove the former is LC, we first consider the nearest neighbour expectation, using $\sum_{j,\sigma}\hat{c}_{j\sigma}^{\dagger}\hat{c}_{j+1\sigma}=\hat{K}$:
\begin{equation}
\left\langle \psi \left|\hat{K} \right| \psi \right\rangle=K(\psi)=R(\psi){\rm e}^{i\theta(\psi)}. \label{eq:Kneighbour}
\end{equation}
This expectation is LC by Eq.\eqref{eq:expectationinequality}, and bounded due to Eq.\eqref{eq:Cauchy} and the normalisation of wavefunctions.
Combining this result with the reverse triangle inequality $\big| \left| x\right| - \left|y \right| \big| \leq \left| x-y \right|$ further demonstrates that $R(\psi)=\left|K(\psi)\right|$ is also LC and bounded. The final step in order to show ${\rm e}^{i\theta(\psi)}$ is itself LC is to establish $R^{-1}(\psi)$ is LC under Eqs.(\ref{eq:constraint2}, \ref{eq:constraint1}). This is easily established by
\begin{align}
\abs{R^{-1}(\psi)-R^{-1}(\phi)}=& \frac{1}{R(\psi)R(\phi)}\abs{R(\psi)-R(\phi)} \notag \\ \leq& \frac{1}{\epsilon^2_2}\specnorm{\hat{K}}\Lnorm{\left|\psi \right\rangle-\left|\phi \right\rangle}
\end{align}
where in the second inequality we have utilised Eq.\eqref{eq:constraint1}. By Eq.(\ref{eq:Lipschitzproductscalar}) we therefore establish
${\rm e}^{i\theta(\psi)}=\frac{K(\psi)}{R(\psi)}$ is LC, and is bounded by definition.
The final term to tackle is $P_\pm$. Since this is the only term that involves our target $J_T(t)$, we work directly in the variable $x=X(t,\psi)$. The function $f(x)=x$ is itself trivially LC and bounded over the domain where Eq.\eqref{eq:constraint2} is satisfied. It therefore only remains to check the Lipschitz continuity of $f(x)=\sqrt{1-x^2}$. Since this function is differentiable on the interval $I=[-(1-\epsilon_1),1-\epsilon_1]$ which satisfies Eq.\eqref{eq:constraint2}, by the mean value theorem \citep{doi:10.1112/plms/s2-7.1.14} the function is LC if $\left|f^\prime(x)\right|\leq M$ for all $x \in I$ and $M$ is finite. Since $f^\prime(x)=-x/\sqrt{1-x^2}$, it is easy to show that
\begin{equation}
M=\max_{x\in I}\left|f^\prime(x)\right|=\frac{1-\epsilon_1}{\sqrt{2\epsilon_1-\epsilon_1^2}}
\end{equation}
and therefore $P_\pm$ is LC and bounded provided $0<\epsilon_1<1$. As a result, we establish that under the conditions of Eqs.(\ref{eq:constraint2}, \ref{eq:constraint1}), each of the components of the tracking Hamiltonian is LC and bounded, meaning that the Hamiltonian is itself LC. From this continuity it follows that the Picard-Lindel{\"o}f theorem is obeyed and $\ket{\psi}$ has a unique solution depending on its initial value. It is interesting to note that this result, derived from the analysis of the continuous formulation \eqref{eq:trackingHamiltonian}, stands in sharp contrast to some discretized approaches to tracking problems, in which multiple solutions are possible \citep{Rabitzmultiple}.
\subsection{Physical Motivation \label{sec:motivation}}
It is reasonable to ask whether the constraints imposed upon $\psi$ are well justified, and here we provide physical motivation for them. First, the condition $\left|X(t, \psi)\right|<1-\epsilon_1$ is easily justified by noting that if this is violated, $P_+^\dagger \neq P_-$ and the tracking Hamiltonian in Eq.\eqref{eq:trackingHamiltonian} is no longer Hermitian. This constraint therefore corresponds to a restriction on the currents that can be produced in a physical system to ensure that the state undergoes appropriate unitary time evolution.
The restriction imposed by Eq.\eqref{eq:constraint1} is somewhat more general as it does not make reference to the current being tracked. Nevertheless, we shall demonstrate here that it is reasonable to expect this property in physical systems. We first consider $\hat{K}$ in a diagonal basis, using the transformation $\hat{c}_{j\sigma}=\sum_{k}{\rm e}^{i\omega_{k}j}\tilde{c}_{k\sigma}$
where $\omega_{k}=\frac{2\pi k}{L}$, and $L$ is the number of sites. The nearest neighbour expectation then assumes the
form
\begin{equation}
K(\psi) =\sum_{k,\sigma}\left(\cos\left(\omega_{k}\right)+i\sin\left(\omega_{k}\right)\right)\left\langle \psi\left|\tilde{c}_{k\sigma}^{\dagger}\tilde{c}_{k\sigma}\right|\psi\right\rangle.
\end{equation}
In the diagonal space, we immediately see that every occupied state in momentum space contributes components with equal magnitude but which differ by a phase. For an even number of particles (as is always the case at half filling), it is mathematically very easy to construct an arbitrary wavefunction such that each occupied state's contribution is in antiphase with another, making $K(\psi)=0$ and violating the tracking constraint. A simple example of this is shown in Fig.\ref{fig:components}.
\begin{figure}
\caption{An example of the contributions to $K(\psi)$ for $L=10$ sites at half filling. In this example the occupied states for one spin species (dashed blue) have been chosen so that they are in anti-phase with the other species (red), and therefore $K(\psi)=0$.}
\label{fig:components}
\end{figure}
While it is possible to construct a wavefunction which violates Eq.\eqref{eq:constraint1}, the question is whether such a wavefunction is truly physical. To answer this, we consider Eq.(\ref{eq:Hamiltonian}) in the tight binding limit ($\frac{U}{t_0}=0$). In the diagonalised basis, this Hamiltonian is \citep{floriangebhard2010,fabianessler2005}
\begin{equation}
\hat{H}\left(t\right)=-2t_{0}\sum_{k,\sigma}\cos(\omega_k-\Phi(t))\tilde{c}_{k\sigma}^{\dagger}\tilde{c}_{k\sigma}.
\end{equation}
Notice that this shares a common eigenbasis with the nearest neighbour
expectation. Before any driving occurs, the system is in the ground state $\left|\psi_{g}\right>$,
which minimises the system energy. Since the Hamiltonian is diagonal in the occupation number basis, the ground state will be a pure state \footnote{In fact, the ground state is potentially highly degenerate depending on the number of sites and filling fraction, but since we are only arguing that the ground state energy is non-zero, it is sufficient to treat the ground state as a pure state in the occupation number representation.} in this representation, and has energy:
\begin{equation}
\left\langle \psi_{g}\left|\hat{H}\left(0\right)\right|\psi_{g}\right\rangle =-2t_{0}\sum_{k,\sigma}\cos\left(\omega_{k}\right)\delta(k,\sigma)=E_{g} \label{eq:groundstate}
\end{equation}
where $\delta(k,\sigma)=\left\langle {\psi}_{g}\left|\tilde{c}_{k\sigma}^{\dagger}\tilde{c}_{k\sigma}\right|{\psi}_{g}\right\rangle$ is $1$ if the relevant mode is occupied in the ground state, and zero otherwise.
Clearly, the occupation numbers of the ground state will be such that Eq.(\ref{eq:groundstate}) is \emph{minimised}. If one has $N=\sum_\sigma N_\sigma$ particles on an $L$ site lattice, each spin species' contribution to the ground state energy will consist of the $N_\sigma$ momentum modes closest to $\omega_k$=0. From this counting argument, it is possible to give an analytic expression for $E_g=\sum_\sigma E_\sigma$:
\begin{equation}
-\frac{E_\sigma}{2t_0} = \left\{
\begin{array}{ll}
1+2\sum\limits^{\frac{N_\sigma}{2}-1}_{k=1}\cos\omega_{k} + \cos\frac{\pi N_\sigma}{L} & \textrm{if $N_\sigma >0$ is even}, \\
1+2\sum\limits^{\frac{N_\sigma-1}{2}}_{k=1}\cos\omega_{k} & \textrm{if $N_\sigma$ is odd}.
\end{array}
\right.
\end{equation}
It is easy to see from this analytic expression that the only cases for which $E_g$ is zero are the vacuum and the completely filled lattice, in which every mode of both spin species is occupied and the system dynamics are completely frozen.
Having established $E_g$ is non-zero in all but the most trivial of circumstances, we now substitute it into the nearest neighbour expectation to obtain $K(\psi_g)$:
\begin{align}
K(\psi_g)&=\left\langle \psi_{g}\left|\sum_{j,\sigma}\hat{c}_{j\sigma}^{\dagger}\hat{c}_{j+1\sigma}\right|\psi_{g}\right\rangle =-\frac{E_{g}}{2t_{0}}+\lambda, \\
\lambda&=i\sum_{k,\sigma}\delta(k,\sigma)\sin\left(\omega_{k}\right),
\end{align}
which means that for the ground state, $K(\psi_g)$ has
a non-zero real part and $R\left(\psi_g\right)$ \emph{must }be non-zero. Furthermore, since the Hamiltonian and
nearest neighbour operators commute at all times in the diagonal basis, the value of $R\left(\psi\right)$ is time-independent and therefore non-zero for all $\psi$ that can be evolved from the ground state.
In a system with non-zero $U$, we can consider only the kinetic term, which has the form
\begin{equation}
\hat{H}_{K}=-t_{0}\sum_{j,\sigma}\left(\hat{c}_{j\sigma}^{\dagger}\hat{c}_{j+1\sigma}+\hat{c}_{j+1\sigma}^{\dagger}\hat{c}_{j\sigma}\right).
\end{equation}
In this case, provided $\left\langle \psi\left|\hat{H}_K\right|\psi\right\rangle\neq0$, an analogous argument can be made to justify Eq.\eqref{eq:constraint1}. For this reason, we
can consider that this constraint corresponds to the condition
that there is some kinetic energy in the system, and the electrons have not been completely frozen (a natural precondition for observing \emph{any} current).
We conclude this section with the observation that while in principle the derived constraints are highly non-linear inequalities in $\ket{\psi}$, in practice simulations confirm the expectation that even at high $\frac{U}{t_0}$, Eq.\eqref{eq:constraint1} is obeyed (see e.g. Fig.\ref{fig:Ehrenfesttracking}). Furthermore, it is relatively easy to satisfy Eq.\eqref{eq:constraint2} via a heuristic scaling of the target to be tracked, as these constraints limit only the peak amplitude of the current in the evolution, and otherwise allow for any function to be tracked when appropriately scaled. If one is concerned only with reproducing the shape of the target current, then using a scaled target $J_s(t)=kJ_T(t)$ such that $\left|J_s\left(t\right)\right|<2at_{0}R\left(\psi\right)$ will allow tracking unproblematically. Alternatively, if one treats the lattice constant $a$ as a tunable parameter, this can always be set for the tracking system so as to satisfy $\left|X(t, \psi)\right|<1-\epsilon_1$.
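A minimal sketch of this rescaling heuristic is given below; it is our own, and assumes an estimate \verb|R_min| of the smallest value of $R(\psi)$ encountered along the trajectory (taken, for instance, from the initial state or from a trial run).
\begin{verbatim}
import numpy as np

def scaled_target(J_target, R_min, a, t0, margin=0.9):
    # Scale J_T(t) -> k J_T(t) so that |X(t,psi)| stays below `margin`,
    # using the estimate R_min for the minimum of R(psi); the target is
    # never scaled up.
    J_target = np.asarray(J_target, dtype=float)
    k = margin * 2.0 * a * t0 * R_min / np.max(np.abs(J_target))
    return min(k, 1.0) * J_target
\end{verbatim}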
Singularities in the control field are a common occurrence in tracking control, which often make a specified trajectory impossible to reproduce \citep{doi:10.1063/1.1582847, doi:10.1137/0325030, PhysRevA.98.043429}. While singularities are present in the unconstrained model presented here, they are easily identified and avoided using the constraints derived above.
\subsection{Multiple Solutions \label{sec:multiple}}
We conclude this section with a demonstration that when the derived constraints of Eqs.(\ref{eq:constraint2},\ref{eq:constraint1}) are not both satisfied, multiple solutions for $\ket{\psi}$ and hence $\Phi_T(t)$ are possible. To simplify algebra, we consider a $U=0$ system, where $\theta(\psi)=0$ regardless of the field applied when evolving from the ground state.
Now consider a situation where one uses tracking simply to reproduce the current produced by some field $\Phi(t)$, i.e. $J_T(t)=J(t)$, so that if the solution is unique, $\Phi_T(t)=\Phi(t)$. Applying tracking to this situation, if
\begin{equation}
\label{eq:multiiplecondition}
\Phi(0)=0, \qquad
\abs{\Phi(t)} < \frac{\pi}{2},
\end{equation}
then the solution is unique and $\Phi_T(t)=\Phi(t)$.
If, however, there is a point where $\abs{\Phi(t)} = \frac{\pi}{2}$, then $\left|X(t,\psi)\right|= 1$ and Eq.\eqref{eq:constraint2} is violated. If the control field is continuous, then any $\Phi(t)$ which does not obey Eq.\eqref{eq:multiiplecondition} also violates Eq.\eqref{eq:constraint2}. Fig.\ref{fig:multiplesimulation} confirms this violation, where both control fields generate the same current (shown in Fig.\ref{fig:refEhrenfest}\textbf{(a)}), but have different functional forms.
\begin{figure}
\caption{Control fields driving a $U=0$ system, each of which generates the same current. Here there are multiple solutions, $\Phi(t)\neq\Phi_T(t)$, due to the violation of Eq.\eqref{eq:constraint2}.}
\label{fig:multiplesimulation}
\end{figure}
The multiplicity of solutions shown can be understood physically in a simple manner. Reproducing the target current only requires that $\sin(\Phi(t))=\sin(\Phi_T(t))$, but identical dynamics requires ${\rm e}^{\pm i\Phi(t)}={\rm e}^{\pm i\Phi_T(t)}$. The latter condition is much stricter, and only coincides with the tracking requirements when Eq.\eqref{eq:multiiplecondition} is also obeyed.
\begin{figure}
\caption{When reproducing a current, while $\abs{\Phi(t)}<\frac{\pi}{2}$ the tracking field is unique; once this threshold is crossed, two distinct values of $\Phi_T(t)$ reproduce the same $\sin\left(\Phi(t)\right)$ and hence the same current.}
\label{fig:phiillustration}
\end{figure}
This phenomenon is illustrated in Fig.\ref{fig:phiillustration}, where crossing the threshold produces two solutions that will track the target observable. It is therefore possible to generate tracking control fields which reproduce the target, but have quite different dynamics, and hence, multiple solutions for $\ket{\psi}$ and $\Phi_T(t)$.
We conclude this section with the observation that even in the case that $\ket{\psi}$ is unique, the tracking field $\Phi_T(t)$ defined in Eq.\eqref{eq:phi_track} will only be unique modulo $2\pi$. This constitutes a non-uniqueness in the tracking field \emph{at each timestep}. Fortunately, one is able to appeal to another physical principle to eliminate this non-uniqueness, namely that the system obey an \emph{Ehrenfest theorem} for current.
\section{Ehrenfest Theorems \label{sec:Ehrenfest}}
We now turn our attention to the question of verification of numerical simulations. Given the tracking strategy will by definition reproduce the trajectory one desires, it is important to have an independent check that tracking has been achieved via a physical evolution rather than numerical aberrations. A particularly sensitive test of the physicality of a numerical simulation is checking that expectations obey the relevant Ehrenfest theorems (see e.g. Ref.~\citep{PhysRevA.84.022326}). These relate derivatives of a given expectation to other expectations. In the Hubbard model, there is an Ehrenfest theorem for $J(t)$, namely
\begin{align}
\frac{{\rm d} J\left(t\right)}{{\rm d}t}=&eat_{0}{\rm e}^{-i\Phi\left(t\right)} \sum_{j,\sigma}\left(\left<\left[\hat{H}\left(t\right),c_{j\sigma}^{\dagger}c_{j+1\sigma}\right]\right>\right. \nonumber \\ &\left.-\frac{{\rm d}\Phi(t)}{{\rm d}t}\left<c_{j,\sigma}^{\dagger}c_{j+1,\sigma}\right>\right) +\rm{h.c.} \label{eq:Ehrenfest theory}
\end{align}
which must be respected if the evolution is physical.
An important feature of the tracking Hamiltonian is that although the tracked variable will be reproduced by construction, there is no guarantee that any other observables will be tracked. This means that we only know \emph{a priori} the left hand side of Eq.\eqref{eq:Ehrenfest theory}, which will correspond by construction to $\frac{{\rm d}J_T(t)}{{\rm d}t}$, and we can therefore verify that a simulation respects physical principles by checking that the independent expectations on the right hand side of Eq.\eqref{eq:Ehrenfest theory} are correct. To do so, we assign the commutator in the first term of \eqref{eq:Ehrenfest theory} the following shorthand
\begin{align}
\frac{1}{U}\sum_{j,\sigma}\left<\left[\hat{H}\left(t\right),c_{j\sigma}^{\dagger}c_{j+1\sigma}\right]\right>=C(\psi)\rm{e}^{i\kappa(\psi)}, \label{eq:twobodyexpectation}
\end{align}
from which we obtain an analytic expression for the current derivative in terms of the independent expectations defined by Eqs.(\ref{neighbourexpectation}) and (\ref{eq:twobodyexpectation}):
\begin{align}
\frac{{\rm d}J(t)}{{\rm d}t}=&-2eat_{0}\frac{{\rm d}\Phi(t)}{{\rm d}t}R\left(\psi\right)\cos\left(\Phi\left(t\right)-\theta\left(\psi\right)\right)\nonumber \\&-2eat_{0}U C(\psi)\cos\left(\Phi\left(t\right)-\kappa\left(\psi\right)\right),
\label{eq:Ehrenfest}
\end{align}
which provides a valuable consistency check for numerical simulations.
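In a numerical simulation, this check amounts to comparing a numerical time derivative of the tracked current with the right hand side of Eq.(\ref{eq:Ehrenfest}) assembled from the independently computed expectations. A minimal sketch (with our own naming, all quantities stored as arrays over the time grid) is:
\begin{verbatim}
import numpy as np

def ehrenfest_residual(t, J, Phi, R, theta, C, kappa, U, a, t0, e=1.0):
    # Residual between the numerical gradient of J(t) and the analytic
    # expression of Eq. (Ehrenfest); it should vanish to within the
    # discretisation error of the time grid if the evolution is physical.
    dJ_num = np.gradient(J, t)
    dPhi = np.gradient(Phi, t)
    dJ_ana = -2.0 * e * a * t0 * (dPhi * R * np.cos(Phi - theta)
                                  + U * C * np.cos(Phi - kappa))
    return dJ_num - dJ_ana
\end{verbatim}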
The Ehrenfest theorem also resolves the problem of $\Phi_T(t)$ being unique only modulo $2\pi$ when $\ket{\psi}$ is unique. If at time $t$ the field $\Phi_T(t)$ correctly reproduces $J_T(t)$, then $\Phi_T(t)\to\Phi_T(t)+2n\pi$, $n\in\mathbb{Z}$, will generate the same current. This means that at each time one in fact has an infinite number of choices for $\Phi_T(t)$. This non-uniqueness can render $\Phi_T(t)$ non-differentiable. To see this, consider
\begin{align}
\frac{{\rm d}\Phi(t)}{{\rm d}t}=\lim_{\Delta t\to0}\frac{\Phi_T(t+\Delta t)-\Phi_T(t)}{\Delta t}. \label{eq:philimit}
\end{align}
If the derivative exists for this solution, then switching solutions to (for instance) $\Phi_T(t+\Delta t)\to \Phi_T(t+\Delta t)+2n\pi$ would render $\Phi_T$ non-differentiable, as the limit on the right hand side of Eq.\eqref{eq:philimit} would not exist. For the Ehrenfest theorem to be meaningful, however, $\frac{{\rm d}\Phi(t)}{{\rm d}t}$ must exist. For this reason, the additional solutions resulting from adding integer multiples of $2\pi$ at any time cannot be admitted as physical. Eq.(\ref{eq:Ehrenfest}) uniquely specifies $\frac{{\rm d}\Phi(t)}{{\rm d}t}$, and stipulating that the evolution must obey this means that, for a given initial condition, $\Phi_T(t)$ has a unique solution.
To test the Ehrenfest theorem, we take two systems at $U=0$ and $U=7t_0$, and drive them with the $\Phi(t)$ shown in Fig.\ref{fig:multiplesimulation}. All results are obtained with a numerically exact time propagation of the correlated state. More details for these reference systems can be found in Ref.\citep{companionletter}.
Fig.\ref{fig:refEhrenfest} compares the dipole acceleration $\frac{{\rm d} J(t)}{{\rm d}t}$ calculated using Eq.(\ref{eq:Ehrenfest}) to the numerical gradient. It can be seen that the two calculations align perfectly, as they must for the system evolution to be considered physical. Extending this to tracking control, Fig.\ref{fig:Ehrenfesttracking} provides an example demonstrating that the Ehrenfest theorem is obeyed when tracking the current of a different system. This highlights the fact that the theorem is obeyed in two systems with the same current gradient, despite the fact that the non-tracked expectations do not match between simulations.
\begin{figure}
\caption{Comparison between the numerical current gradient and the analytic prediction calculated via Eq.(\ref{eq:Ehrenfest}).}
\label{fig:refEhrenfest}
\end{figure}
\begin{figure}
\caption{When tracking the original $J(t)$ from the $U=0$ system in the $U=7t_0$ system, the current gradient $\frac{{\rm d}J(t)}{{\rm d}t}$ obtained from Eq.(\ref{eq:Ehrenfest}) agrees with the numerical gradient, i.e.\ the Ehrenfest theorem is obeyed.}
\label{fig:Ehrenfesttracking}
\end{figure}
The verification provided by Ehrenfest theorems is particularly useful for tracking in high $\frac{U}{t_0}$ simulations, when $\theta(\psi)$ exhibits large oscillations. When this angle is calculated numerically, it is assigned a value in $\left[-\pi,\pi \right]$. If this threshold is crossed on a timestep update, a numerical discontinuity is introduced by the assignment $\theta(\psi)=\pm \pi \pm \delta \to \mp \pi \pm \delta$. The Ehrenfest theorem is sensitive to this artificial discontinuity, and can therefore be used to correct it in both $\theta(\psi)$ and $\Phi_T(t)$. An example of a control field where this correction is necessary is shown in Fig.\ref{fig:Ehrenfestunwravel}.
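A minimal sketch of this correction (assuming $\theta(\psi)$ has been stored on a uniform time grid; the function below is illustrative only) is to unwrap the $2\pi$ jumps before the angle enters $\Phi_T(t)$:
\begin{verbatim}
import numpy as np

def remove_branch_jumps(theta):
    """Remove artificial +/- 2*pi jumps that appear when theta(psi)
    leaves the principal branch [-pi, pi] between timesteps."""
    return np.unwrap(theta)
\end{verbatim}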
\begin{figure}
\caption{While $\Phi(t)-\theta(\psi)$ is always constrained to lie in a bounded interval, the numerically computed $\theta(\psi)$ is restricted to the principal branch $\left[-\pi,\pi\right]$, and crossing this boundary introduces artificial discontinuities that must be corrected as described in the text.}
\label{fig:Ehrenfestunwravel}
\end{figure}
\section{Experimental Feasibility \label{sec:Experimental}}
Although the previous section, and the material mimicry done in Ref.\citep{companionletter} demonstrates that the tracking strategy is successful \emph{in silico}, there remains a question of the experimental feasibility of generating the laser pulses prescribed by the tracking strategy. Although it is possible to implement a control scheme which reflects experimental constraints \citep{Kosloffoptimalconstraints, SpectralConstraints}, this in general does not guarantee an exact match with the target. In order to guarantee exact tracking in Ref. \citep{companionletter}, neither the intensity nor bandwidth of the driving field was constrained.
As a first test of the experimental feasibility of our method, we examine the effect of introducing a cut-off frequency $\omega_c$ to the control field obtained from the material mimicry in Ref.\citep{companionletter}. Taking $\Phi_T(t)$ from Eq.\eqref{eq:phi_track}, we make a cut-off in frequency space such that ${\widetilde{\Phi}_T(\omega>\omega_c)=0}$. This post-processed control field is then used to solve the Schr{\"o}dinger equation for the same system $\Phi_T(t)$ was originally applied to.
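As a sketch of this post-processing step (Python, assuming a uniform time grid; variable names are illustrative only), the cut-off can be implemented with a discrete Fourier transform:
\begin{verbatim}
import numpy as np

def lowpass_control_field(phi, dt, omega_c):
    """Return the control field with all Fourier components above the
    cut-off frequency omega_c set to zero."""
    phi_hat = np.fft.rfft(phi)
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(phi), d=dt)
    phi_hat[omega > omega_c] = 0.0
    return np.fft.irfft(phi_hat, n=len(phi))
\end{verbatim}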
\begin{figure}
\caption{Reference harmonic spectra for both the conducting limit $\frac{U}{t_0}=0$ and the strongly interacting system $\frac{U}{t_0}=7$, obtained as the Fourier transform of the dipole accelerations shown in Fig.\ref{fig:refEhrenfest}.}
\label{fig:refspectra}
\end{figure}
As targets, we use the reference spectra (the Fourier transform of the dipole accelerations presented in Fig.\ref{fig:refEhrenfest}) shown in Fig.\ref{fig:refspectra}; the results can be seen in Fig.\ref{fig:J7cut}. Two conclusions can be drawn from these results. First, when tracking the insulating system's spectrum in the conducting limit, as shown in Fig.\ref{fig:J7cut}\textbf{(b)}, the spectrum matches its target well while $\omega<\omega_c$, after which it is strongly suppressed. Conversely, in the case where a system with very strong onsite repulsion tracks the tight-binding spectrum (as in Fig.\ref{fig:J7cut}\textbf{(a)}), the response to the cut-off appears highly non-linear, and a very broadband pulse with a cutoff of $\omega_c \approx 50\omega_0$ is needed to reproduce the four most prominent harmonics associated with $J^{(0)}(t)$. This suggests that when two materials are farther apart in the phase diagram, greater bandwidth in the control field is required for tracking.
\begin{figure}
\caption{Introducing a low-pass filter on $\Phi_T(t)$ before using it to evolve the system: \textbf{(a)} the $\frac{U}{t_0}=7$ system tracking the $\frac{U}{t_0}=0$ spectrum and \textbf{(b)} the $\frac{U}{t_0}=0$ system tracking the $\frac{U}{t_0}=7$ spectrum, for several cut-off frequencies $\omega_c$.}
\label{fig:J7cut}
\end{figure}
When tracking both reference spectra in an intermediate material $\frac{U}{t_0}=1$, we find more promising results, as shown in Fig.\ref{fig:J1cut}. In this case, a linear dependence on $\omega_c$ is observed in both tracked spectra, and one is able to recover the most prominent harmonics of the $\frac{U}{t_0}=0$ reference system at a potentially realisable $\omega_c=10\omega_0$.
\begin{figure}
\caption{Tracking both reference spectra in the intermediate system $\frac{U}{t_0}=1$ for a range of cut-off frequencies $\omega_c$.}
\label{fig:J1cut}
\end{figure}
\section{Discussion \label{sec:Discussion}}
In this paper we have expanded on the work presented in Ref.\citep{companionletter}. In addition to providing a more complete derivation for the tracking model's equation of motion, constraints guaranteeing Hermiticity and a unique evolution were rigorously derived. Although these constraints restrict the size of imitable currents in tracking, this can be circumvented either by scaling the current one wishes to track, or modifying system parameters such that the constraints are obeyed. The ability to transparently identify and remove singularities via scaling represents a tangible advantage over more generic tracking strategies \citep{doi:10.1063/1.1582847, doi:10.1137/0325030, PhysRevA.98.043429}.
The derived constraints of Eqs.(\ref{eq:constraint2},\ref{eq:constraint1}) also highlight an interesting ambiguity in the tracking model, namely that in some circumstances multiple control fields will track the same target expectation. This raises a question for future investigations about the enumeration of these solutions, and how their dynamics differ.
An Ehrenfest theorem for the tracked expectation was also introduced for the purpose of verifying that the numerics are consistent with physical principles. Insisting that this Ehrenfest theorem be obeyed removes unphysical discontinuities that can arise from the periodic dependence of the dynamics on $\Phi(t)$.
In investigating the potential to realize this tracking experimentally with finite-bandwidth applied fields, we employed a low-pass filter on the tracking control field $\Phi_T(t)$. This produced an interesting asymmetry in the tracking response to the cut-off frequency $\omega_c$. While systems tracking currents generated by materials with a \emph{higher} $\frac{U}{t_0}$ always displayed a linear dependence, this linearity was not always observed for the converse case of tracking \emph{lower} $\frac{U}{t_0}$. While there appears to be a regime of linear dependence when the gap between the original and tracked system parameters is sufficiently small (see Fig.\ref{fig:J1cut}), Fig.\ref{fig:J7cut} shows that when the two systems are separated by greater distances on the phase diagram, the current response to a cut-off in $\Phi_T(t)$ is highly non-linear.
To achieve the fine control over expectations shown both in this paper and Ref.\citep{companionletter}, it will be necessary to adapt the tracking strategy to reflect experimental constraints. A potential future avenue is to optimise tracking results while only utilising a small number of discrete, experimentally feasible frequencies, rather than the unrestricted broadband pulses used in the simulations presented here.
Finally, the same concepts used to derive the model presented here could potentially be applied to optimal dynamic discrimination (ODD). This problem is essentially the converse to that of tracking control, in which one distinguishes very similar quantum systems using the dynamics induced by properly shaped laser pulses \citep{oddrabitz,Goun_Bondar_Er_Quine_Rabitz_2016}. Given that the requirements for discrimination are similar to those for tracking control, the former may benefit from the techniques presented here.
\end{document}
\begin{document}
\begin{center}
{\Large\bf Optimal time decay estimates for large solutions of the 3D compressible MHD equations}
\footnote{$^{\dag}$
Corresponding Author: [email protected] (Fei Chen).}
\footnote{$^{**}$
Email Address: [email protected] (Fei Chen); [email protected] (Shuai Wang); [email protected] (Chuanbao Wang).}
\footnote{$^{***}$
This work was supported by the National Natural Science Foundation of China [grant number 12101345] and Natural Science Foundation of Shandong Province of China [grant number ZR2021QA017].}
Shuai Wang$^{*}$, \ Fei Chen$^{*\dag }$, \ Chuanbao Wang$^{*}$
\\[1ex]
$^* $ School of Mathematics and Statistics, Qingdao University, \\[0.5ex]
Qingdao, Shandong 266071, China
\end{center}
\begin{center}
\begin{minipage}{15cm}
\par
\small {\bf Abstract:} This paper is mainly concerned with optimal time decay estimates for large solutions of the compressible magnetohydrodynamic equations in the three-dimensional whole space, under the assumption $(\sigma_{0}-1,u_{0},M_{0})\in L^1\cap H^2$. In \cite{Chenyuhui} (Chen et al., 2019), it was proved that $\|(\sigma-1,u,M)\|_{H^1}$ decays like $(1+t)^{-\frac{3}{4}}$. Building on this, we obtained the decay rate $(1+t)^{-\frac{5}{4}}$ for $\|\nabla(\sigma-1,u,M)\|_{H^1}$ in \cite{Womensa}. In this paper we improve the decay rate of $\|\nabla^2 (\sigma-1,u,M)\|_{L^2}$. Using the method of \cite{Wangwenjun} (Wang and Wen, 2021), we obtain the optimal time decay estimate for the highest-order spatial derivative of the solution, namely that $\|\nabla^2 (\sigma-1,u,M)\|_{L^2}$ decays like $(1+t)^{-\frac{7}{4}}$.
\par
{\bf Keywords:} Decay estimation; Large-solution; Compressible magnetohydrodynamic equations\\
{\bf Mathematics Subject Classification (2010):}~~~35B45; 35Q35; 35B40
\end{minipage}
\end{center}
\allowdisplaybreaks
\vskip2mm
{\section{Introduction }}
\par
We introduce the following compressible magnetohydrodynamic (CMHD) equations for $(x,t)\in R^{3}\times R^{+}$
\begin{eqnarray}\label{1.1}
\begin{cases}
\partial_{t}\sigma+\diverg(\sigma u)=0,\\
\partial_{t}(\sigma u)+\diverg(\sigma u\otimes u)-\mu\Delta u-(\mu+\lambda)\nabla\diverg u+\nabla P-(\curl M)\times M=0,\\
\partial_{t}M-\nu\Delta M-\curl(u\times M)=0,\quad \diverg M=0,\\
\end{cases}
\end{eqnarray}
where $\sigma\in R^{+}$ denotes the density, $u\in R^{3}$ the velocity field, $M\in R^{3}$ the magnetic field, and $P=P(\sigma)=\sigma^\gamma$ the pressure (the adiabatic exponent satisfies $\gamma \geq 1$). The viscosity coefficients $\mu$ and $\lambda$ satisfy $\mu >0$ and $2\mu+3\lambda>0$, and $\nu>0$ denotes the magnetic diffusion coefficient. We supplement (\ref{1.1}) with the initial data
\begin{eqnarray}\label{1.2}
(\sigma,u,M)(x,0)=(\sigma_{0},u_{0},M_{0})(x)\rightarrow(1,0,0),\quad as~~|x|\rightarrow +\infty.
\end{eqnarray}
\par
For the three-dimensional CMHD equations, we first recall some well-posedness results. For classical solutions: global existence allowing vacuum under a small-energy condition \cite{Lihailiang}; for large solutions: local existence \cite{Fanjishan3,Vol'pert}, well-posedness of the initial-boundary value problem in $H^1$ \cite{Wangdehua}, global existence when $\nu$ is large enough and $\gamma$ is close to $1$ \cite{Hongguangyi}, as well as global existence and time decay estimates for initial data in $L^l(1\leq l<\frac{6}{5})\cap H^3$ \cite{Chenqing} or $L^1\cap H^2$ \cite{Chenyuhui}. For weak solutions, well-posedness and time decay estimates can be found in \cite{Ducomet,Huxianpeng1,Huxianpeng2,SLiu,Suen1,Suen2,Wu}.
\par
Next, we highlight some related work on time decay estimates for (\ref{1.1}). For small solutions, a time decay estimate on $\mathbb{T}^3$ can be found in Zhu and Zi \cite{Zhulimei}. In the case that the low-frequency part of the initial data lies in some $\dot{B}_{2,\infty}^{-b}$ $(b_{1}\geq 2, b_{2}=\frac{2b_{1}}{q}-\frac{b_{1}}{2}, 1-\frac{b_{1}}{2}<b\leq b_{2})$, the time decay of the $L^\mathfrak{L}$-solution $\big(\mathfrak{L}\in[2,\min(\frac{2b_{1}}{b_{1}-2},4)]\big)$ was studied by Shi and Zhang \cite{Shi}. If the initial data is in $L^l(1\leq l<\frac{6}{5})\cap H^3$, Chen and Tan \cite{Chenqing} proved global existence and showed (\ref{1}) with $f=0,1$ and $\alpha=\frac{3}{2}(\frac{1}{l}-\frac{1}{2})+\frac{f}{2}$:
\begin{eqnarray}\label{1}
\|\nabla^{f}(\sigma-1,u,M)\|_{H^{3-f}}\leq C(1+t)^{-\alpha},
\end{eqnarray}
In addition, Li and Yu \cite{Lifucai} also obtained (\ref{1}) in the case $l=1$, and Zhang and Zhao \cite{Zhangjianwen} additionally proved
\begin{eqnarray}\label{111}
\|\partial_{t}(\sigma-1,u,M)\|_{H^1}\leq C(1+t)^{-\frac{3}{2}(\frac{1}{l}-\frac{1}{2})-\frac{1}{2}},
\end{eqnarray}
Similarly, Pu and Guo \cite{Puxueke} obtained (\ref{1}) for the full CMHD equations. On the basis of (\ref{1}), Gao, Chen and Yao \cite{Gao1} showed (\ref{2}) with $f=2,3$, $\beta_{1}=\frac{3}{2}(\frac{1}{l}-\frac{1}{2})+1$ and $\beta_{2}=\frac{3}{2}(\frac{1}{l}-\frac{1}{2})+\frac{f}{2}$:
\begin{eqnarray}\label{2}
\begin{aligned}
&\|\nabla^2(\sigma-1,u)\|_{H^1}\leq C(1+t)^{-\beta_{1}},\\
&\|\nabla^{f} M\|_{H^{3-f}}\leq C(1+t)^{-\beta_{2}},
\end{aligned}
\end{eqnarray}
Similarly, Gao, Tao and Yao \cite{Gao2} obtained the corresponding result for the full CMHD equations. It is apparent that the decay rates of the higher-order spatial derivatives in (\ref{2}) are faster than those in (\ref{1}). Based on Guo and Wang \cite{Guoyan}, Tan and Wang \cite{Tanzhong} established (\ref{3}) for the higher-order spatial derivatives when the initial data lies in $H^L (L\geq 3)\cap \dot{H}^n(0\leq n<\frac{3}{2})$:
\begin{eqnarray}\label{3}
\|\nabla^{f}(\sigma-1,u,M)\|_{H^{L-f}}\leq C(1+t)^{-\frac{f+n}{2}},\quad f=0,1,...,L-1.
\end{eqnarray}
In addition, for initial data in $H^S(S\geq 3)\cap \dot{B}^{-s}_{2,\infty}(0\leq s\leq\frac{5}{2})$, Huang, Lin and Wang \cite{Huangwenting} proved the time decay estimate for the highest-order spatial derivatives:
\begin{eqnarray}\label{31}
\|\nabla^{f}(\sigma-1,u,M)\|_{L^2}\leq C(1+t)^{-\frac{f+s}{2}},\quad 0\leq f\leq S.
\end{eqnarray}
\par
As for large solutions, Chen, Huang and Xu \cite{Chenyuhui} recently studied global stability and time decay for large solutions in the case $(\sigma_{0}-1,u_{0},M_{0})\in L^1\cap H^2$:
\begin{eqnarray}\label{4}
\|(\sigma-1,u,M)\|_{H^1}
\leq C(1+t)^{-\frac{3}{4}},
\end{eqnarray}
Based on this, Gao, Wei and Yao \cite{Gao3} obtained the corresponding rate for the higher-order spatial derivatives of $M$:
\begin{eqnarray}\label{5}
\|\nabla M\|_{H^1}+\| M_t\|_{L^2}\leq C(1+t)^{-\frac{5}{4}},
\end{eqnarray}
and we \cite{Womensa} proved the corresponding rate for the higher-order spatial derivatives of $(\sigma,u,M)$:
\begin{eqnarray}\label{we}
\|\partial_{t}(\sigma-1,u,M)\|_{L^2}+\|\nabla(\sigma-1,u,M)\|_{H^1}\leq C(1+t)^{-\frac{5}{4}}.
\end{eqnarray}
Hence, in view of (\ref{we}), a natural question arises:
can the time decay rate of $\|\nabla^2(\sigma-1,u,M)\|_{L^2}$ be improved beyond $(1+t)^{-\frac{5}{4}}$?\\
{\bf Notation:} Throughout the paper, $C>0$ denotes a generic constant independent of time, which may change from line to line. Moreover, to simplify the notation, we write $\|\cdot\|_{L^2}=\|\cdot\|$, $\|\cdot\|_{H^p}=\|\cdot\|_{p}$, $\|w_{1}\|_{W}+\|w_{2}\|_{W}=\|(w_{1},w_{2})\|_{W}$ and $\|w_{1}\|_{W}^v+\|w_{2}\|_{W}^v=\|(w_{1},w_{2})\|_{W}^v$.
\par
We now record Theorem \ref{Theorem1.1}, proved in \cite{Chenyuhui}, since it will be used repeatedly in this paper.
\begin{Theorem}\label{Theorem1.1}
Assume that $\mu>\frac{1}{2}\lambda$, that $(\sigma,u,M)$ is a smooth global solution of (\ref{1.1}) with $\sigma\in[0,N_{1}]$, that $(\sigma_{0},u_{0},M_{0})$ with $0 < a\leq \sigma_{0}$ satisfies the admissible conditions, that $(\sigma_{0}-1,u_{0},M_{0})\in L^1\cap H^2$, and that $\sup_{t\geq 0}\|\sigma(\cdot,t)\|_{C^{\beta}}+\sup_{t\geq 0}\|M(\cdot,t)\|_{L^\infty}\leq N_{2}$ for some $ 0<\beta<1 $. Then there exists $\underline{\sigma}=\underline{\sigma}(a,N_{2})>0$ such that for all $t>0$,
\begin{eqnarray}\label{Th1}
\sigma(x,t)\geq\underline{\sigma},
\end{eqnarray}
\begin{eqnarray}\label{Th2}
\begin{aligned}
\|(\sigma-1,u,M)\|_{2}^2
+\int_{0}^\infty\Big(\|\nabla(\sigma-1)(\tau)\|_{1}^2+\|\nabla (u,M)(\tau)\|_{2}^2 \Big)d\tau \leq C_{0},
\end{aligned}
\end{eqnarray}
\begin{eqnarray}\label{Th3}
\begin{aligned}
\|(\sigma-1,u,M)\|_{1}
\leq C_{0}(1+t)^{-\frac{3}{4}}.
\end{aligned}
\end{eqnarray}
\end{Theorem}
\par
Now, let us give the statement of our conclusion:
\begin{Theorem}\label{Theorem1.2} Under the conditions of Theorem \ref{Theorem1.1}, for all $t\geq \widetilde{T}$ the solution $(\sigma,u,M)$ of (\ref{1.1}) satisfies the time decay estimate
\begin{eqnarray}\label{Th1.2}
\|\nabla^k(\sigma-1,u,M)\|\leq C(1+t)^{-\frac{3}{4}-\frac{k}{2}},~~k=0,1,2.
\end{eqnarray}
Here, the large time $\widetilde{T}$ will be specified in Lemma \ref{Lemma2.7}.
\end{Theorem}
We now outline the main method and the structure of the proof. In the first step, based on the estimates for the second-order spatial derivatives of $(q,u,M)$ and for $\nabla ^2 q$ obtained in \cite{Womensa}, we establish the following energy estimate by the classical energy method:
\begin{eqnarray}\nonumber
\begin{aligned}
&\frac{d}{dt}E(t)+\frac{1}{2}\int_{R^3}\Big(\delta P'(1)|\nabla^2 q|^2+\eta|\nabla^2 \diverg u|^2+\mu|\nabla^3 u|^2+\nu|\nabla^3 M|^2 \Big)dx\\
&\leq C_{1}\delta \Big(\|\nabla ^2 u\|^2+\|\nabla^2 M\|^2\Big),
\end{aligned}
\end{eqnarray}
for a small constant $\delta>0$, a sufficiently large time $T_{1}>0$ and a constant $C_{1}>0$ independent of time, where
\begin{eqnarray}\nonumber
E(t):=\delta\int_{R^3}\nabla u\cdot\nabla^2 q dx+\frac{p'(1)}{2}\|\nabla^2 q\|^2 +\frac{1}{2}\|\nabla^2 u\|^2+\frac{1}{2}\|\nabla^2 M\|^2.
\end{eqnarray}
The term $ \int_{R^3}\nabla u\cdot\nabla^2 q\, dx$ is the obstacle to obtaining a better time decay estimate. Since $\delta$ is small, $E(t)\sim\|\nabla (q,u,M)\|_{1}^2$, so in \cite{Womensa} we could only obtain the same decay rate for $\|\nabla(q,u,M)\|$ and $\|\nabla^2 (q,u,M)\|$. To overcome this difficulty, we adopt a method inspired by Wang and Wen \cite{Wangwenjun}, who proved an optimal time decay estimate, with no decay loss at the highest-order spatial derivative, for small solutions of the compressible Navier-Stokes equations with reaction diffusion. Thus, in the second step (carried out in Lemma \ref{Lemma2.4}), we estimate $\int_{R^3}\nabla u\cdot\nabla^2 q^L dx$ (where $q^L$ denotes the low-medium frequency part of $q$; see Appendix A for details) and subtract it from $E(t)$, which gives
\begin{eqnarray}\nonumber
E(t)-\delta\int_{R^3}\nabla u \cdot \nabla^2 q^L dx\sim\|\nabla^2(q,u,M)\|^2,
\end{eqnarray}
further, for a sufficiently large time $T_{*}$ and a constant $C_{*}>0$ independent of time, one has
\begin{eqnarray}\nonumber
\|\nabla^2 (q,u,M)(t)\|^2\leq Ce^{-C_{*} t}\|\nabla^2(q,u,M)(T_{*})\|^2+C\int_{T_{*}}^t e^{-C_{*}(t-\tau)}\|\nabla^2(q^L,u^L,M^L)(\tau)\|^2 d\tau.
\end{eqnarray}
Finally, using the time decay estimates for the solution of the linearized equations, which follow from the analysis of the low-medium frequency part (see Appendix B), we derive the time decay of $\|\nabla^2(q^L,u^L,M^L)(\tau)\|^2$ and, in turn, that of $\|\nabla^2(q,u,M)\|^2$.
{\section{Process of proof}}
\par
The proof of Theorem \ref{Theorem1.2} consists of four parts: energy estimates, elimination of $\int_{R^3}\nabla u\cdot\nabla^2 q^L dx$, the $L_{x}^2 L_{t}^\infty$-norm estimate for the low-middle-frequency portion, and the time decay estimate for the nonlinear equations.
Firstly, setting $q :=\sigma-1$ and $\eta:=\mu+\lambda$, we rewrite (\ref{1.1}) and (\ref{1.2}) as
\begin{eqnarray}\label{2.1}
\begin{cases}
\partial_{t}q+\diverg u=\mathfrak{a},\\
\partial_{t}u+P'(1)\nabla q-\eta\nabla \diverg u-\mu\Delta u=\mathfrak{b},\\
\partial_{t}M-\nu\Delta M=\mathfrak{c},
\end{cases}
\end{eqnarray}
\begin{eqnarray}\label{2.2}
(q,u,M)(x,0)=(q_{0},u_{0},M_{0})(x)\rightarrow (0,0,0),\quad as~|x|\rightarrow +\infty,
\end{eqnarray}
where $\mathfrak{a}$, $\mathfrak{b}$, $\mathfrak{c}$ are defined as
\begin{eqnarray}\label{2.3}
\begin{cases}
\mathfrak{a}:=- u\cdot\nabla q -q \diverg u,\\
\mathfrak{b}:=\frac{1}{q+1}(M\cdot\nabla M+M\cdot \nabla^{t}M)- \frac{q}{q+1}(\mu\Delta u+\eta\nabla \diverg u)-\Big(\frac{P'(1+q)}{1+q}-P'(1)\Big)\nabla q-u\cdot \nabla u,\\
\mathfrak{c}:=-u\cdot \nabla M-(\diverg u)M+M\cdot \nabla u.\\
\end{cases}
\end{eqnarray}
{\subsection {Energy estimations}}
\par
We recall the following Lemma \ref{Lemma2.1} and Lemma \ref{Lemma2.2}, obtained in \cite{Womensa}, which are estimates for the second-order spatial derivatives of $(q,u,M)$ and for $\nabla ^2 q$ respectively; based on these two lemmas, Lemma \ref{Lemma2.3} can be obtained.
\begin{Lemma}\label{Lemma2.1} Under the conditions of Theorem \ref{Theorem1.1}, the second-order spatial derivatives of the solution satisfy
\begin{eqnarray}\label{L2.2}
\begin{aligned}
& \frac{d}{dt}\int_{R^3}\Big(\frac{p'(1)}{2}|\nabla^2 q|^2 +\frac{1}{2}|\nabla^2 u|^2+\frac{1}{2}|\nabla^2 M|^2 \Big)dx+\int_{R^3}\Big(\eta|\nabla^2\diverg u|^2 +\mu|\nabla^3 u|^2+\nu|\nabla^3 M|^2 \Big)dx\\
&\leq C\Big(\|(u,M)\|_{1}+\|(q,u,M,\nabla u)\|^\frac{1}{4}\Big)\Big(\|\nabla^2 q\|^2+\|\nabla^2 (u,M)\|_{1}^2\Big).
\end{aligned}
\end{eqnarray}
\end{Lemma}
\begin{Lemma}\label{Lemma2.2} Under the conditions of Theorem \ref{Theorem1.1}, $\nabla ^2 q$ satisfies
\begin{eqnarray}\label{L2.3}
\begin{aligned}
&\frac{d}{dt}\int_{R^3}\nabla u\cdot\nabla^2 q dx+\int_{R^3}\frac{7p'(1)}{8}|\nabla^2 q|^2 dx\\
&\leq C\|\nabla^2 u\|_{1}^2 + C\Big(\|(q,u,M)\|_{1}+\|(q,u,M)\|^\frac{1}{4}\Big)\Big(\|\nabla^2q\|^2+\|\nabla^2 M\|_{1}^2\Big).
\end{aligned}
\end{eqnarray}
\end{Lemma}
\par
Next, Lemma \ref{Lemma2.1} and Lemma \ref{Lemma2.2} yield Lemma \ref{Lemma2.3}.
\begin{Lemma}\label{Lemma2.3} Under the conditions of Theorem \ref{Theorem1.1}, define $E(t)$ by
\begin{eqnarray}\label{L2.4.1}
E(t):=\delta\int_{R^3}\nabla u\cdot\nabla^2 q dx+\frac{p'(1)}{2}\|\nabla^2 q\|^2 +\frac{1}{2}\|\nabla^2 u\|^2+\frac{1}{2}\|\nabla^2 M\|^2,
\end{eqnarray}
Then, for a small constant $\delta>0$, a sufficiently large time $T_{1}>0$ and a constant $C_{1}>0$ independent of time,
\begin{eqnarray}\label{L2.4.2}
\begin{aligned}
&\frac{d}{dt}E(t)+\frac{1}{2}\int_{R^3}\Big(\delta P'(1)|\nabla^2 q|^2+\eta|\nabla^2 \diverg u|^2+\mu|\nabla^3 u|^2+\nu|\nabla^3 M|^2 \Big)dx\\
&\leq C_{1}\delta \Big(\|\nabla ^2 u\|^2+\|\nabla^2 M\|^2\Big).
\end{aligned}
\end{eqnarray}
\end{Lemma}
\begin{proof} Choosing a small constant $\delta>0$, adding (\ref{L2.2}) to $\delta\times(\ref{L2.3})$ and using (\ref{Th2}), we obtain
\begin{eqnarray}\label{L2.4.3}
\begin{aligned}
&\frac{d}{dt}\int_{R^3}\Big(\delta\nabla u\cdot\nabla^2 q +\frac{p'(1)}{2}|\nabla^2 q|^2 +\frac{1}{2}|\nabla^2 u|^2+\frac{1}{2}|\nabla^2 M|^2\Big)dx\\
&+\frac{3}{4}\int_{R^3}\Big(\delta P'(1)|\nabla^2 q|^2+\eta|\nabla^2 \diverg u|^2+\mu|\nabla^3 u|^2+\nu|\nabla^3 M|^2 \Big)dx\\
&\leq C \Big(\|(q,u,M)\|_{1}+\|(q,u,M,\nabla u)\|^\frac{1}{4}\Big)\Big(\|\nabla^2 q\|^2+\|\nabla^3 u\|^2+\|\nabla^3 M\|^2 \Big)\\
&+C \Big(\|(q,u,M)\|_{1}+\|(q,u,M,\nabla u)\|^\frac{1}{4}\Big)\Big(\|\nabla^2 u\|^2+\|\nabla^2 M\|^2 \Big)\\
&+\widetilde{C}\delta\Big(\|\nabla^2 u\|^2+\|\nabla^2 M\|^2\Big).
\end{aligned}
\end{eqnarray}
Using (\ref{Th3}), one easily checks that
\begin{eqnarray}\nonumber
\begin{aligned}
\|(q,u,M)\|_{1}+\|(q,u,M,\nabla u)\|^\frac{1}{4}\leq C(1+t)^{-\frac{3}{16}},
\end{aligned}
\end{eqnarray}
so that, for a sufficiently large time $T_{1}>0$,
\begin{eqnarray}\label{L2.4.4}
\begin{aligned}
C\Big(\|(q,u,M)\|_{1}+\|(q,u,M,\nabla u)\|^\frac{1}{4}\Big)\leq C(1+t)^{-\frac{3}{16}}\leq \frac{1}{4}\min\{\delta P'(1), \mu, \nu,4\widetilde{C}\delta\}.
\end{aligned}
\end{eqnarray}
Thus, (\ref{L2.4.2}) follows by combining (\ref{L2.4.3}) and (\ref{L2.4.4}):
\begin{eqnarray}\nonumber
\begin{aligned}
&\frac{d}{dt}E(t)+\frac{1}{2}\int_{R^3}\Big(\delta P'(1)|\nabla^2 q|^2+\eta|\nabla^2 \diverg u|^2+\mu|\nabla^3 u|^2+\nu|\nabla^3 M|^2 \Big)dx\\
&\leq C_{1}\delta \Big(\|\nabla ^2 u\|^2+\|\nabla^2 M\|^2 \Big).
\end{aligned}
\end{eqnarray}
\end{proof}
{\subsection {Elimination of $\int_{R^3}\nabla u\cdot\nabla^2 q^L dx$}}
\par
We aim to demonstrate $L_{x}^2 L_{t}^\infty$-norm-estimation of $\nabla^2(q,u,M)$ by the elimination of $\int_{R^3}\nabla u\cdot\nabla^2 q^L dx$ on $E(t)$ (\ref{L2.4.1}).
\begin{Lemma}\label{Lemma2.4} For a sufficiently large time $T_{*}$ and a constant $C_{*}>0$ independent of time, it holds that
\begin{eqnarray}\nonumber
\|\nabla^2 (q,u,M)(t)\|^2\leq Ce^{-C_{*} t}\|\nabla^2(q,u,M)(T_{*})\|^2+C\int_{T_{*}}^t e^{-C_{*}(t-\tau)}\|\nabla^2(q^L,u^L,M^L)(\tau)\|^2 d\tau.
\end{eqnarray}
\end{Lemma}
\begin{proof} Applying $\nabla$ to $(\ref{2.1})_{2}$, multiplying the result by $\nabla^2 q^L$, integrating over $R^3$ and using $(\ref{2.1})_{1}$, one has
\begin{eqnarray}\label{L2.5.1}
\begin{aligned}
\frac{d}{dt}\int_{R^3}\nabla u \cdot \nabla^2 q^L dx&=-P'(1)\int_{R^3}\nabla^2 q\cdot\nabla^2 q^Ldx+\eta\int_{R^3}\nabla^2 \diverg u\cdot\nabla^2 q^L dx+\mu\int_{R^3}\nabla\Delta u\cdot\nabla^2 q^L dx\\&+\int_{R^3}\nabla \mathfrak{b}\cdot\nabla^2 q^Ldx+\int_{R^3}\nabla \diverg u^L\cdot\nabla \diverg u dx-\int_{R^3}\nabla \mathfrak{a}^L \cdot\nabla \diverg u dx.
\end{aligned}
\end{eqnarray}
By the H\"{o}lder and Young inequalities one obtains $(\ref{L2.5.2})$:
\begin{eqnarray}\label{L2.5.2}
\begin{aligned}
-\frac{d}{dt}\int_{R^3}\nabla u \cdot \nabla^2 q^L dx&\leq
\frac{P'(1)}{4}\|\nabla^2 q\|^2+\frac{\eta}{2}\|\nabla^2 \diverg u\|^2+\frac{\mu}{2}\|\nabla^3 u\|^2+\frac{1}{2}\|\nabla \mathfrak{b}\|^2\\&+\Big(P'(1)+\frac{\mu+\eta+1}{2}\Big)\|\nabla^2 q^L\|^2+\frac{1}{2}\|\nabla \diverg u^L\|^2+\|\nabla \diverg u\|^2+\frac{1}{2}\|\nabla \mathfrak{a}^L\|^2.
\end{aligned}
\end{eqnarray}
According to $(\ref{2.3})_{1}$, by the H\"{o}lder and Gagliardo-Nirenberg (G-N) inequalities together with (\ref{Th2}), we can check that
\begin{eqnarray}\label{L2.5.3}
\begin{aligned}
\|\nabla \mathfrak{a}\|&\leq C \|\nabla^2 q\|\|u\|_{L^\infty}+\|\nabla q\|_{L^3}\|\nabla u\|_{L^6}+ \|\nabla q\|_{L^3}\|\diverg u\|_{L^6}+ \| q\|_{L^\infty}\|\nabla\diverg u\| \\
&\leq C(\|u\|_{L^\infty}+\|\nabla q\|_{L^3}+ \| q\|_{L^\infty})(\|\nabla^2 q\|+\|\nabla^2 u\|)\\
&\leq C\Big(\|u\|^{\frac{1}{4}}\|\nabla^2 u\|^{\frac{3}{4}}+\|q\|^{\frac{1}{4}}\|\nabla^2 q\|^{\frac{3}{4}}\Big)(\|\nabla^2 q\|+\|\nabla^2 u\|)\\
&\leq C\|(q,u)\|^{\frac{1}{4}}\|\nabla^2(q,u)\|.
\end{aligned}
\end{eqnarray}
Thanks to $(2.14)$ and $(2.15)$ in \cite{Womensa}, it follows
\begin{eqnarray}\label{L2.5.4}
\begin{aligned}
\|\nabla \mathfrak{b}\|\leq C\Big(\|(u,M)\|_{1}+\|(q,u,M)\|^\frac{1}{4}\Big)(\|\nabla^2 q\|+\|\nabla^2 (u,M)\|_{1}).
\end{aligned}
\end{eqnarray}
By (\ref{A.5}) and Lemma \ref{LA.1}, it gets
\begin{eqnarray}\label{L2.5.5}
\begin{aligned}
\|\nabla \mathfrak{a}^L\|\leq \|\nabla \mathfrak{a}\|+\|\nabla \mathfrak{a}^h\|\leq C \|\nabla \mathfrak{a}\|.
\end{aligned}
\end{eqnarray}
Combining (\ref{L2.5.2})-(\ref{L2.5.5}), we obtain
\begin{eqnarray}\label{L2.5.6}
\begin{aligned}
-\frac{d}{dt}\int_{R^3}\nabla u \cdot \nabla^2 q^L dx&\leq
\frac{P'(1)}{4}\|\nabla^2 q\|^2+\frac{\eta}{2}\|\nabla^2 \diverg u\|^2+\frac{\mu}{2}\|\nabla^3 u\|^2+C\|\nabla^2 q^L\|^2\\&+\frac{1}{2}\|\nabla \diverg u^L\|^2+\|\nabla \diverg u\|^2\\&+C\Big(\|(u,M)\|_{1}^2+\|(q,u,M)\|^\frac{1}{2}\Big)\Big(\|\nabla^2 q\|^2+\|\nabla^2 (u,M)\|_{1}^2 \Big).
\end{aligned}
\end{eqnarray}
Similarly to (\ref{L2.4.4}), using (\ref{Th3}), for a sufficiently large time $T_{2}>0$ we have
\begin{eqnarray}\label{L2.5.66}
\begin{aligned}
C\Big(\|(u,M)\|_{1}^2+\|(q,u,M)\|^\frac{1}{2}\Big)\leq \frac{P'(1)}{8}.
\end{aligned}
\end{eqnarray}
Adding $\delta\times(\ref{L2.5.6})$ to (\ref{L2.4.2}) and using (\ref{L2.5.66}), we find that for $T_{*}=\max\{ T_{1},T_{2}\}$
\begin{eqnarray}\label{L2.5.7}
\begin{aligned}
&\frac{d}{dt}\Big(E(t)-\delta\int_{R^3}\nabla u \cdot \nabla^2 q^L dx\Big)+\frac{1}{2}\int_{R^3}\Big(\delta P'(1)|\nabla^2 q|^2+\eta|\nabla^2 \diverg u|^2+\mu|\nabla^3 u|^2+\nu|\nabla^3 M|^2 \Big)dx\\
&\leq\frac{P'(1)}{4}\delta\|\nabla^2 q\|^2+\frac{\eta}{2}\delta\|\nabla^2 \diverg u\|^2+\frac{\mu}{2}\delta\|\nabla^3 u\|^2+C\delta\|\nabla^2 q^L\|^2+\frac{1}{2}\delta\|\nabla \diverg u^L\|^2\\&+\delta\|\nabla \diverg u\|^2+ C_{1}\delta \Big(\|\nabla ^2 u\|^2+\|\nabla^2 M\|^2 \Big)+\frac{P'(1)}{8}\delta\Big(\|\nabla^2 q\|^2+\|\nabla^2 u\|_{1}^2+\|\nabla^2 M\|_{1}^2 \Big).
\end{aligned}
\end{eqnarray}
By Lemma \ref{LA.1}, one checks that
\begin{eqnarray}\label{L2.5.8}
\begin{aligned}
\frac{\mu}{2}\|\nabla^3 u\|^2+\frac{\nu}{2}\|\nabla^3 M\|^2\geq \frac{\mu}{4}\|\nabla^3 u\|^2+\frac{\mu}{4}B_{0}^2\|\nabla^2 u^h\|^2+\frac{\nu}{4}\|\nabla^3 M\|^2+\frac{\nu}{4}B_{0}^2\|\nabla^2 M^h\|^2.
\end{aligned}
\end{eqnarray}
Substituting (\ref{L2.5.8}) into (\ref{L2.5.7}) and then adding $\frac{\mu}{4}B_{0}^2\|\nabla^2 u^L\|^2+\frac{\nu}{4}B_{0}^2\|\nabla^2 M^L\|^2$ to both sides of the result, it follows that
\begin{eqnarray}\label{L2.5.9}
\begin{aligned}
&\frac{d}{dt}\Big(E(t)-\delta\int_{R^3}\nabla u \cdot \nabla^2 q^L dx\Big)+\frac{P'(1)}{8}\delta\|\nabla^2 q\|^2+\frac{\eta}{2}\|\nabla^2 \diverg u\|^2\\
&+\frac{\mu}{4}\|\nabla^3 u\|^2+\frac{\nu}{4}\|\nabla^3 M\|^2+\frac{\mu}{8}B_{0}^2\|\nabla^2 u\|^2+\frac{\nu}{8}B_{0}^2\|\nabla^2 M\|^2\\
&\leq
\frac{\eta}{2}\delta\|\nabla^2 \diverg u\|^2+\frac{\mu}{2}\delta\|\nabla^3 u\|^2+C\delta\|\nabla^2 q^L\|^2+\frac{1}{2}\delta\|\nabla \diverg u^L\|^2+\delta\|\nabla \diverg u\|^2\\&+ C_{1}\delta \Big(\|\nabla ^2 u\|^2+\|\nabla^2 M\|^2 \Big)+\frac{P'(1)}{8}\delta \Big(\|\nabla^2 u\|_{1}^2+\|\nabla^2 M\|_{1}^2 \Big)\\
&+\frac{\mu}{4}B_{0}^2\|\nabla^2 u^L\|^2+\frac{\nu}{4}B_{0}^2\|\nabla^2 M^L\|^2,
\end{aligned}
\end{eqnarray}
then, taking $\delta\leq\min\Big\{\frac{1}{8},\frac{\mu}{2P'(1)},\frac{\nu}{P'(1)}\Big\}:=\delta_{0}$ and
$B_{0}^2 \geq \max\Big\{\frac{48\delta_{0}}{\mu},\frac{48C_{1}\delta_{0}}{\mu},\frac{6P'(1)\delta_{0}}{\mu},\frac{32C_{1}\delta_{0}}{\nu},\frac{4P'(1)\delta_{0}}{\nu}\Big\}$,
we obtain
\begin{eqnarray}\label{L2.5.10}
\begin{aligned}
&\frac{d}{dt}\Big(E(t)-\delta\int_{R^3}\nabla u \cdot \nabla^2 q^L dx\Big)+\frac{P'(1)}{8}\delta\|\nabla^2 q\|^2+\frac{\eta}{4}\|\nabla^2 \diverg u\|^2\\
&+\frac{\mu}{8}\|\nabla^3 u\|^2+\frac{\nu}{8}\|\nabla^3 M\|^2+\frac{\mu}{16}B_{0}^2\|\nabla^2 u\|^2+\frac{\nu}{16}B_{0}^2\|\nabla^2 M\|^2\\
&\leq C\|\nabla^2(q^L,u^L,M^L)\|^2.
\end{aligned}
\end{eqnarray}
By (\ref{A.5}) and Lemma \ref{LA.1} respectively, one has
\begin{eqnarray}\label{L2.5.11}
\begin{aligned}
&E(t)-\delta\int_{R^3}\nabla u \cdot \nabla^2 q^L dx=\delta\int_{R^3}\nabla u \cdot \nabla^2 q^h dx+\frac{P'(1)}{2}\|\nabla^2 q\|^2+\frac{1}{2}\|\nabla^2 u\|^2+\frac{1}{2}\|\nabla^2 M\|^2,\\
&\delta\int_{R^3}\nabla u \cdot \nabla^2 q^h dx
=-\delta\int_{R^3} \nabla \diverg u \cdot\nabla q^h dx
\leq\frac{\delta}{2}\|\nabla q^h\|^2+\frac{\delta}{2}\|\nabla \diverg u\|^2
\leq\frac{\delta}{2}\|\nabla^2 q\|^2+\frac{\delta}{2}\|\nabla^2 u\|^2,
\end{aligned}
\end{eqnarray}
and, by the smallness of $\delta$, this implies
\begin{eqnarray}\label{L2.5.12}
E(t)-\delta\int_{R^3}\nabla u \cdot \nabla^2 q^L dx\sim\|\nabla^2(q,u,M)\|^2.
\end{eqnarray}
Combining $(\ref{L2.5.10})$ and $(\ref{L2.5.12})$, there exists a constant $C_{*}>0$ such that
\begin{eqnarray}\label{L2.5.13}
\begin{aligned}
&\frac{d}{dt}\Big(E(t)-\delta\int_{R^3}\nabla u \cdot \nabla^2 q^L dx\Big)+C_{*}\Big(E(t)-\delta\int_{R^3}\nabla u \cdot \nabla^2 q^L dx \Big)\\
&\leq C\|\nabla^2(q^L,u^L,M^L)\|^2.
\end{aligned}
\end{eqnarray}
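For clarity, writing $G(t):=E(t)-\delta\int_{R^3}\nabla u \cdot \nabla^2 q^L dx$, inequality (\ref{L2.5.13}) is equivalent to the standard Gr\"{o}nwall-type estimate
\begin{eqnarray}\nonumber
\frac{d}{dt}\Big(e^{C_{*}t}G(t)\Big)=e^{C_{*}t}\Big(\frac{d}{dt}G(t)+C_{*}G(t)\Big)\leq Ce^{C_{*}t}\|\nabla^2(q^L,u^L,M^L)\|^2.
\end{eqnarray}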
Integrating this over $[T_{*},t]$ and multiplying the result by $e^{-C_{*}t}$, we obtain
\begin{eqnarray}\label{L2.5.14}
\begin{aligned}
E(t)-\delta\int_{R^3}\nabla u \cdot \nabla^2 q^L dx
&\leq C e^{-C_{*} t}\Big(E(T_{*})-\delta\int_{R^3}\nabla u(T_{*})\cdot \nabla^2 q^L(T_{*}) dx\Big)\\
&+C\int_{T_{*}}^t e^{-C_{*}(t-\tau)}\|\nabla^2(q^L,u^L,M^L)(\tau)\|^2 d\tau.
\end{aligned}
\end{eqnarray}
The combination of $(\ref{L2.5.12})$ and $(\ref{L2.5.14})$ leads to
\begin{eqnarray}\nonumber
\|\nabla^2 (q,u,M)(t)\|^2\leq Ce^{-C_{*} t}\|\nabla^2(q,u,M)(T_{*})\|^2+C\int_{T_{*}}^t e^{-C_{*}(t-\tau)}\|\nabla^2(q^L,u^L,M^L)(\tau)\|^2 d\tau.
\end{eqnarray}
\end{proof}
{\subsection {$L_{x}^2 L_{t}^\infty$-norm-estimation for low-middle-frequency portion}}
\par
This part is $L_{x}^2 L_{t}^\infty$-norm-estimation for low-middle-frequency portion of $(q,u,M)$ about nonlinear equations $(\ref{2.1})$ and $(\ref{2.2})$, which is based on the analysis of linearized equations in Appendix B.
\par
Define differential operator $\mathbf{Q}$
\begin{eqnarray}\label{Q}
\mathbf{Q}=
\begin{pmatrix}
0& \diverg &0\\
P'(1)\nabla &-\eta\nabla \diverg-\mu\Delta &0\\
0&0&-\nu\Delta
\end{pmatrix}
\end{eqnarray}
so, $(\ref{2.1})$ and $(\ref{2.2})$ can be rewritten as
\begin{eqnarray}\label{G1}
\begin{cases}
\partial_{t}X+\mathbf{Q}X=F(X),\\
X|_{t=0}=X(0),
\end{cases}
\end{eqnarray}
where $X(t):=(q(t),u(t),M(t))^{T}$, $F(X):=(\mathfrak{a},\mathfrak{b},\mathfrak{c})^{T}$, and $X(0):=(q_{0},u_{0},M_{0})^{T}$. Further, denote
$\widetilde{X}(t):=(\widetilde{q}(t),\widetilde{u}(t),\widetilde{M}(t))^{T}$, the linearized equations are as below
\begin{eqnarray}\label{G2}
\begin{cases}
\partial_{t}\widetilde{X}+\mathbf{Q}\widetilde{X}=0,\\
\widetilde{X}|_{t=0}=X(0),
\end{cases}
\end{eqnarray}
we can calculate its solution: $\widetilde{X}=\mathbf{q}(t)X(0),~~\mathbf{q}(t)=e^{-t\mathbf{Q}}$.
\par
In Appendix B, we analyze the estimations for low-middle-frequency portion of solution about equations $(\ref{G2})$, which leads to Lemma \ref{Lemma2.5}.
\begin{Lemma}\label{Lemma2.5} Suppose $1\leq j\leq 2$. Then the following estimate holds for every integer $k\geq0$:
\begin{eqnarray}\label{L2.6}
\|\nabla^k \big(\mathbf{q}(t)X^L(0)\big)\|\leq C\|X(0)\|_{L^j}(1+t)^{-[\frac{3}{2}(\frac{1}{j}-\frac{1}{2})+\frac{k}{2}]}.
\end{eqnarray}
\end{Lemma}
\begin{proof}
By Plancherel theorem, (\ref{B.12}) and (\ref{B.14}) with $\mathfrak{b}=b_{0}$, $\mathfrak{B}=B_{0}$, one has
\begin{eqnarray}\label{L}
\begin{aligned}
\|\partial_{x}^k(\widetilde{q}^L,\widetilde{\Upsilon}^L,\widetilde{M}^L)(t)\|&=\|(i\xi)^k(\widehat{\widetilde{q}^L},\widehat{\widetilde{\Upsilon}^L},\widehat{\widetilde{M}^L})(t)\|_{L_{\xi}^2}\\
&=\bigg(\int_{R^3}|(i\xi)^k(\widehat{\widetilde{q}^L},\widehat{\widetilde{\Upsilon}^L},\widehat{\widetilde{M}^L})(\xi,t)|^2 d\xi\bigg)^{\frac{1}{2}}\\
&\leq C \bigg(\int_{|\xi|\leq B_{0}}|\xi|^{2|k|}|(\widehat{\widetilde{q}},\widehat{\widetilde{\Upsilon}},\widehat{\widetilde{M}})(\xi,t)|^2 d\xi\bigg)^{\frac{1}{2}}\\
&\leq C \bigg(\int_{|\xi|\leq b_{0}}|\xi|^{2|k|}e^{-C_{l}|\xi|^2 t}|(\widehat{q},\widehat{\Upsilon},\widehat{M})(\xi,0)|^2 d\xi\bigg)^{\frac{1}{2}}\\
&+C \bigg(\int_{b_{0}\leq|\xi|\leq B_{0}}|\xi|^{2|k|}e^{-\varsigma t}|(\widehat{q},\widehat{\Upsilon},\widehat{M})(\xi,0)|^2 d\xi\bigg)^{\frac{1}{2}},
\end{aligned}
\end{eqnarray}
then, taking H\"{o}lder as well as Hausdorff-Young inequalities on (\ref{L}), for $\frac{1}{j}+\frac{1}{j'}=1$, $1\leq j\leq 2\leq j'\leq\infty$, it can check
\begin{eqnarray}\label{LL}
\begin{aligned}
\|\partial_{x}^k(\widetilde{q}^L,\widetilde{\Upsilon}^L,\widetilde{M}^L)(t)\|&\leq C \|(\widehat{q},\widehat{\Upsilon},\widehat{M})(0)\|_{L_{\xi}^{j'}}(1+t)^{-[\frac{3}{2}(\frac{1}{2}-\frac{1}{j'})+\frac{|k|}{2}]}\\
&\leq C \|(q,\Upsilon,M)(0)\|_{L^{j}}(1+t)^{-[\frac{3}{2}(\frac{1}{j}-\frac{1}{2})+\frac{|k|}{2}]}.
\end{aligned}
\end{eqnarray}
In the same way, from (\ref{B.16}), we obtain
\begin{eqnarray}\label{LLL}
\begin{aligned}
\|\partial_{x}^k(\widetilde{\Gamma u})^L(t)\|&\leq C \bigg(\int_{|\xi|\leq B_{0}}|\xi|^{2|k|}|\widehat{\widetilde{\Gamma u}}(\xi,t)|^2 d\xi\bigg)^{\frac{1}{2}}\\
&\leq C \bigg(\int_{|\xi|\leq B_{0}}|\xi|^{2|k|}e^{-\mu|\xi|^2 t}|\widehat{\Gamma u}(\xi,0)|^2 d\xi\bigg)^{\frac{1}{2}}\\
&\leq C \| u_{0}\|_{L^{j}}(1+t)^{-[\frac{3}{2}(\frac{1}{j}-\frac{1}{2})+\frac{|k|}{2}]}.
\end{aligned}
\end{eqnarray}
Combining (\ref{LL}) with (\ref{LLL}), we have verified (\ref{L2.6}).
\end{proof}
The solution about equations $(\ref{G1})$ can be calculated by the semigroup method and Duhamel principle,
\begin{eqnarray}\label{G3}
X(t)=\mathbf{q}(t)X(0)+\int_{0}^t \mathbf{q}(t-\tau)F(X)(\tau)d\tau.
\end{eqnarray}
\par
Combining Lemma \ref{Lemma2.5} and $(\ref{G3})$, we obtain time decay estimation for low-middle-frequency portion of $(q,u,M)$ about nonlinear equations $(\ref{2.1})$ and $(\ref{2.2})$.
\begin{Lemma}\label{Lemma2.6} The following estimate holds for every integer $k\geq0$:
\begin{eqnarray}\label{L2.7}
\begin{aligned}
\|\nabla^k X^L(t)\|&\leq C\|X(0)\|_{L^1}(1+t)^{-\frac{3}{4}-\frac{k}{2}}+C\int_{0}^\frac{t}{2}\|F(X)(\tau)\|_{L^1}(1+t-\tau)^{-\frac{3}{4}-\frac{k}{2}}d\tau\\
&+C\int_{\frac{t}{2}}^t\|F(X)(\tau)\|(1+t-\tau)^{-\frac{k}{2}}d\tau.
\end{aligned}
\end{eqnarray}
\end{Lemma}
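A brief sketch of the proof: since the frequency projection commutes with the semigroup $\mathbf{q}(t)$, applying it to the Duhamel formula (\ref{G3}) gives
\begin{eqnarray}\nonumber
\|\nabla^k X^L(t)\|\leq \|\nabla^k \big(\mathbf{q}(t)X^L(0)\big)\|+\int_{0}^t\|\nabla^k \big(\mathbf{q}(t-\tau)F^L(X)(\tau)\big)\|d\tau,
\end{eqnarray}
and (\ref{L2.7}) follows by applying Lemma \ref{Lemma2.5} with $j=1$ to the first term and to the portion $\tau\in[0,\frac{t}{2}]$ of the integral, and with $j=2$ to the portion $\tau\in[\frac{t}{2},t]$.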
{\subsection{Time decay estimation about nonlinear equations}}
\par
We will use Lemma \ref{Lemma2.4} and \ref{Lemma2.6} to arrive at time decay estimation for $(\sigma,u,M)$ about nonlinear equations (\ref{2.1}) and (\ref{2.2}).
\begin{Lemma}\label{Lemma2.7} Under the conditions of Theorem \ref{Theorem1.1}, there exists a large constant $\widetilde{T}>0$ such that for all $t\geq \widetilde{T}$, the solution $(\sigma,u,M)$ of (\ref{1.1}) satisfies the time decay estimate
\begin{eqnarray}\label{L2.8}
\|\nabla^k(\sigma-1,u,M)\|\leq C(1+t)^{-\frac{3}{4}-\frac{k}{2}},k=0,1,2.
\end{eqnarray}
\end{Lemma}
\begin{proof} Denote
\begin{eqnarray}\label{L2.8.1}
N(t):=\sup\limits_{0\leq\tau\leq t}\sum_{n=0}^2(1+\tau)^{\frac{3}{4}+\frac{n}{2}}\|\nabla^n(q,u,M)(\tau)\|
\end{eqnarray}
so, it has
\begin{eqnarray}\label{L2.8.2}
\|\nabla^n(q,u,M)(\tau)\|\leq CN(t)(1+\tau)^{-\frac{3}{4}-\frac{n}{2}},\quad 0\leq\tau\leq t,\quad 0\leq n\leq 2.
\end{eqnarray}
By (\ref{Th1}) and the H\"{o}lder and G-N inequalities, it is easy to check the following inequalities:
\begin{eqnarray}\label{L2.8.3}
\|\mathfrak{a}(\tau)\|_{L^1}\leq C\|(q,u)\|\|\nabla(q,u)\|,
\end{eqnarray}
\begin{eqnarray}\label{L2.8.4}
\begin{aligned}
\|\mathfrak{b}(\tau)\|_{L^1}&\leq C(\|M\|\|\nabla M\|+\|q\|\|\nabla^2 u\|+\|q\|\|\nabla q\|+\|u\|\|\nabla u\|)\\
&\leq C \Big(\|(q,u,M)\|\|\nabla(q,u,M)\|+\|q\|\|\nabla u\|^{\frac{1}{2}}\|\nabla^3 u\|^{\frac{1}{2}}\Big),
\end{aligned}
\end{eqnarray}
and
\begin{eqnarray}\label{L2.8.5}
\|\mathfrak{c}(\tau)\|_{L^1}\leq C\|(u,M)\|\|\nabla(u,M)\|.
\end{eqnarray}
By the combination of (\ref{L2.8.3})-(\ref{L2.8.5}) and the usage of (\ref{Th3}), one has
\begin{eqnarray}\label{L2.8.6}
\begin{aligned}
\|F(X)(\tau)\|_{L^1}&\leq \|\mathfrak{a}(\tau)\|_{L^1}+\|\mathfrak{b}(\tau)\|_{L^1}+\|\mathfrak{c}(\tau)\|_{L^1}\\
&\leq C(1+\tau)^{-\frac{3}{2}}+C \|q\|\|\nabla u\|^{\frac{1}{2}}\|\nabla^3 u\|^{\frac{1}{2}}.
\end{aligned}
\end{eqnarray}
Similarly, using (\ref{L2.8.2}), (\ref{Th2}) and (\ref{Th3}), one obtains
\begin{eqnarray}\label{L2.8.7}
\|\mathfrak{a}(\tau)\|\leq C\|\nabla^2(q,u)\|\|(q,u)\|_{1}\leq C N(t)(1+\tau)^{-\frac{7}{4}}(1+\tau)^{-\frac{3}{4}}\leq C N(t)(1+\tau)^{-\frac{10}{4}},
\end{eqnarray}
\begin{eqnarray}\label{L2.8.8}
\begin{aligned}
\|\mathfrak{b}(\tau)\|&\leq C(\|M\|_{L^3}\|\nabla M\|_{L^6}+\|q\|_{L^\infty}\|\nabla^2 u\|+\|q\|_{L^3}\|\nabla q\|_{L^6}+\|u\|_{L^3}\|\nabla u\|_{L^6})\\
&\leq C \Big(\|\nabla^2(q,u,M)\|\|(q,u,M)\|_{1}+\|\nabla^2 u\|\|\nabla q\|^{\frac{1}{2}}\|\nabla^2 q\|^{\frac{1}{2}}\Big)\\
&\leq C \Big(N(t)(1+\tau)^{-\frac{7}{4}}(1+\tau)^{-\frac{3}{4}}+N(t)(1+\tau)^{-\frac{7}{4}}(1+\tau)^{-\frac{3}{8}}\Big)\\
&\leq C \Big(N(t)(1+\tau)^{-\frac{10}{4}}+N(t)(1+\tau)^{-\frac{17}{8}}\Big)\\
&\leq CN(t)(1+\tau)^{-\frac{17}{8}},
\end{aligned}
\end{eqnarray}
and
\begin{eqnarray}\label{L2.8.9}
\|\mathfrak{c}(\tau)\|\leq C\|\nabla^2(u,M)\|\|(u,M)\|_{1}\leq C N(t)(1+\tau)^{-\frac{10}{4}}.
\end{eqnarray}
By the combination of (\ref{L2.8.7})-(\ref{L2.8.9}), one has
\begin{eqnarray}\label{L2.8.10}
\begin{aligned}
\|F(X)(\tau)\|&\leq \|\mathfrak{a}(\tau)\|+\|\mathfrak{b}(\tau)\|+\|\mathfrak{c}(\tau)\|\\
&\leq CN(t)(1+\tau)^{-\frac{17}{8}}.
\end{aligned}
\end{eqnarray}
According to (\ref{L2.8.6}), (\ref{L2.8.10}) and Lemma \ref{Lemma2.6}, for $k\in[0,2]$, one may check
\begin{eqnarray}\label{L2.8.11}
\begin{aligned}
\|\nabla^k X^L(t)\|&\leq C\|X(0)\|_{L^1}(1+t)^{-\frac{3}{4}-\frac{k}{2}}\\&+C \int_{0}^\frac{t}{2}\Big((1+\tau)^{-\frac{3}{2}}+ \|q\|\|\nabla u\|^{\frac{1}{2}}\|\nabla^3 u\|^{\frac{1}{2}}\Big)(1+t-\tau)^{-\frac{3}{4}-\frac{k}{2}}d\tau\\
&+CN(t)\int_{\frac{t}{2}}^t(1+\tau)^{-\frac{17}{8}}(1+t-\tau)^{-\frac{k}{2}}d\tau\\
&\leq C\|X(0)\|_{L^1}(1+t)^{-\frac{3}{4}-\frac{k}{2}}+C(1+t)^{-\frac{3}{4}-\frac{k}{2}}+CN(t)(1+t)^{-\frac{9}{8}-\frac{k}{2}}\\
&\leq C (1+t)^{-\frac{3}{4}-\frac{k}{2}}\Big(\|X(0)\|_{L^1}+1+N(t)(1+t)^{-\frac{3}{8}}\Big),
\end{aligned}
\end{eqnarray}
where we have adopted Young inequality, (\ref{Th3}) and (\ref{Th2}) to calculate
\begin{eqnarray}\nonumber
\begin{aligned}
&\int_{0}^\frac{t}{2}\|q\|\|\nabla u\|^{\frac{1}{2}}\|\nabla^3 u\|^{\frac{1}{2}}(1+t-\tau)^{-\frac{3}{4}-\frac{k}{2}} d\tau\\
&\leq C \int_{0}^\frac{t}{2}\Big(\|(q,\nabla u)(\tau)\|^2+\|\nabla^3 u(\tau)\|^2 \Big)(1+t-\tau)^{-\frac{3}{4}-\frac{k}{2}} d\tau\\
&\leq C \int_{0}^\frac{t}{2}\Big((1+\tau)^{-\frac{3}{2}}+\|\nabla^3 u(\tau)\|^2 \Big)(1+t-\tau)^{-\frac{3}{4}-\frac{k}{2}} d\tau\\
&\leq C (1+t)^{-\frac{3}{4}-\frac{k}{2}}.
\end{aligned}
\end{eqnarray}
Lemma \ref{Lemma2.4} together with (\ref{L2.8.11}) directly yields, for $t\geq T_{*}$,
\begin{eqnarray}\label{L2.8.12}
\begin{aligned}
\|\nabla^2 X(t)\|^2&\leq C e^{-C_{*} t}\|\nabla^2 X(T_{*})\|^2+C\int_{T_{*}}^t e^{-C_{*}(t-\tau)}(1+\tau)^{-\frac{7}{2}}\Big(\|X(0)\|_{L^1}^2+1+N^2(\tau)(1+\tau)^{-\frac{3}{4}}\Big) d\tau\\
&\leq C e^{-C_{*} t}\|\nabla^2 X(T_{*})\|^2+C(1+t)^{-\frac{7}{2}}\Big(\|X(0)\|_{L^1}^2+1\Big)+CN^2(t)(1+t)^{-\frac{17}{4}}\\
&\leq C e^{-C_{*} t}\|\nabla^2 X(T_{*})\|^2+C(1+t)^{-\frac{7}{2}}\Big(\|X(0)\|_{L^1}^2+1+N^2(t)(1+t)^{-\frac{3}{4}}\Big).
\end{aligned}
\end{eqnarray}
In addition, according to (\ref{A.5}) and Lemma \ref{LA.1}, one gets for $k\in[0,2]$
\begin{eqnarray}\label{L2.8.13}
\|\nabla^k X(t)\|^2\leq C\Big(\|\nabla^k X^L (t)\|^2+\|\nabla^k X^h (t)\|^2 \Big)\leq C\Big(\|\nabla^k X^L (t)\|^2+\|\nabla^2 X(t)\|^2 \Big).
\end{eqnarray}
Substituting (\ref{L2.8.11}) and (\ref{L2.8.12}) into (\ref{L2.8.13}), for $k\in[0,2]$, $t\geq T_{*}$, we have
\begin{eqnarray}\label{L2.8.14}
\begin{aligned}
\|\nabla^k X(t)\|^2 &\leq C (1+t)^{-\frac{3}{2}-k} \Big(\|X(0)\|_{L^1}^2+1+N^2(t)(1+t)^{-\frac{3}{4}}\Big)\\
&+ C e^{-C_{*} t}\|\nabla^2 X(T_{*})\|^2+C(1+t)^{-\frac{7}{2}}\Big(\|X(0)\|_{L^1}^2+1+N^2(t)(1+t)^{-\frac{3}{4}}\Big)\\
&\leq C (1+t)^{-\frac{3}{2}-k}\Big(\|X(0)\|_{L^1}^2+1+N^2(t)(1+t)^{-\frac{3}{4}}\Big)+ C e^{-C_{*} t}\|\nabla^2 X(T_{*})\|^2.
\end{aligned}
\end{eqnarray}
Thus, from (\ref{L2.8.1}) and (\ref{L2.8.14}), $\exists~C_{**}>0$ such that
\begin{eqnarray}\nonumber
N^2(t)\leq C_{**}\Big(\|X(0)\|_{L^1}^2+1+N^2(t)(1+t)^{-\frac{3}{4}}+\|\nabla^2 X(T_{*})\|^2\Big).
\end{eqnarray}
Further, there exists $\widetilde{T}>0$ such that for $t\geq \widetilde{T}$,
\begin{eqnarray}\nonumber
C_{**}(1+t)^{-\frac{3}{4}}\leq\frac{1}{2},
\end{eqnarray}
and hence
\begin{eqnarray}\nonumber
N^2(t)\leq 2C_{**}\Big(\|X(0)\|_{L^1}^2+1+\|\nabla^2 X(T_{*})\|^2\Big),
\end{eqnarray}
combining it with (\ref{Th2}), we can obtain for $\forall t\geq \widetilde{T}$
\begin{eqnarray}\nonumber
N(t)\leq C.
\end{eqnarray}
According to (\ref{L2.8.2}), we have proved (\ref{L2.8}); furthermore, this confirms Theorem \ref{Theorem1.2}.
\end{proof}
\section*{Appendix}
\appendix
\section{Frequency decomposition}
For $\chi_{x}=\frac{1}{i}\nabla=\frac{1}{i}(\partial_{x_{1}},\partial_{x_{2}},\partial_{x_{3}})$, $\phi_{0}(\chi_{x})$ and $\phi_{1}(\chi_{x})$ are pseudo-differential operators, where $\phi_{0}(\xi)$ and $\phi_{1}(\xi)$, with $0\leq \phi_{0}(\xi),\phi_{1}(\xi)\leq 1$ $(\xi\in R^3)$, are smooth cut-off functions satisfying
\begin{eqnarray}\nonumber
\phi_{0}(\xi)=
\begin{cases}
0,~~|\xi|>b_{0},\\
1,~~|\xi|<\frac{b_{0}}{2},
\end{cases}~~
\phi_{1}(\xi)=
\begin{cases}
1,~~|\xi|>B_{0}+1,\\
0,~~|\xi|<B_{0},
\end{cases}
\end{eqnarray}
Here, $b_{0}$ and $B_{0}$ (fixed constants) satisfy
\begin{eqnarray}\label{A.2}
0<b_{0}\leq \sqrt{\frac{P'(1)}{\eta+\mu}},
\end{eqnarray}
\begin{eqnarray}\label{A.3}
B_{0} \geq \max\Big\{\sqrt{\frac{48\delta_{0}}{\mu}},\sqrt{\frac{48C_{1}\delta_{0}}{\mu}},\sqrt{\frac{6P'(1)\delta_{0}}{\mu}},\sqrt{\frac{32C_{1}\delta_{0}}{\nu}},\sqrt{\frac{4P'(1)\delta_{0}}{\nu}}\Big\},
\end{eqnarray}
Thus, by Fourier transform, for a function $g(x)\in L^2(R^3)$, let's define a frequency decomposition $(g^l(x),g^m(x),g^h(x))$
\begin{eqnarray}\label{A.1}
g^l(x)=\phi_{0}(\chi_{x})g(x),~~
g^m(x)=(I-\phi_{0}(\chi_{x})-\phi_{1}(\chi_{x}))g(x),~~
g^h(x)=\phi_{1}(\chi_{x})g(x).
\end{eqnarray}
Further, denote
\begin{eqnarray}\label{A.4}
g^{L}(x):=g^l(x)+g^m(x),~~g^{H}(x):=g^m(x)+g^h(x),
\end{eqnarray}
one has
\begin{eqnarray}\label{A.5}
g(x)=g^l(x)+g^m(x)+g^h(x):=g^l(x)+g^{H}(x):=g^{L}(x)+g^h(x).
\end{eqnarray}
Using Plancherel theorem and (\ref{A.1}), the following inequalities can be gained.
\begin{Lemma}\label{LA.1}
For all integers $p_{0}, p, p_{1}, s$ with $p_{0}\leq p\leq p_{1} \leq s$ and $g(x)\in H^s(R^3)$, we have
\begin{eqnarray}\label{A.6}
\|\nabla^p g^l\|\leq b_{0}^{p-p_{0}}\|\nabla^{p_{0}} g^l\|,~~~\|\nabla^p g^l\|\leq \|\nabla^{p_{1}} g\|,
\end{eqnarray}
\begin{eqnarray}\label{A.7}
\|\nabla^p g^h\|\leq \frac{1}{B_{0}^{p_{1}-p}}\|\nabla^{p_{1}} g^h\|,~~~\|\nabla^p g^h\|\leq \|\nabla^{p_{1}} g\|,
\end{eqnarray}
and
\begin{eqnarray}\label{A.8}
b_{0}^p\|g^m\|\leq \|\nabla^p g^m\|\leq B_{0}^p\|g^m\|.
\end{eqnarray}
\end{Lemma}
\section{The analysis of linearized equations}
Denote $\Omega :=(-\Delta)^{\frac{1}{2}}$, and $\Upsilon:=\Omega^{-1} \diverg u$, then, $u=-\Omega^{-1} \nabla \Upsilon-\Omega^{-1} \diverg (\Omega^{-1} \curl u)$.\\
From (\ref{2.1}), it can get
\begin{eqnarray}\label{B.1}
\begin{cases}
\partial_{t}q+\Omega \Upsilon=\mathfrak{a},\\
\partial_{t}\Upsilon -P'(1)\Omega q-(\eta+\mu)\Delta \Upsilon = \Omega^{-1} \diverg \mathfrak{b},\\
\partial_{t}M-\nu\Delta M=\mathfrak{c},
\end{cases}
\end{eqnarray}
while $\Gamma u=\Omega^{-1} \curl u$ satisfies
\begin{eqnarray}\label{B.2}
\begin{cases}
\partial_{t}\Gamma u -\mu \Delta \Gamma u = \Gamma \mathfrak{b},\\
\Gamma u(x,0)=\Gamma u_{0}(x).
\end{cases}
\end{eqnarray}
Thus, based on the above analysis, the analysis of $u$ only relies on the analysis of $\Upsilon$ and $\Gamma u$.\\
By the application of Fourier transform on (\ref{B.1}), we can check
\begin{eqnarray}\label{B.3}
\begin{cases}
\partial_{t}\widehat{q}+|\xi| \widehat{\Upsilon}=\widehat{\mathfrak{a}},\\
\partial_{t}\widehat{\Upsilon} -P'(1)|\xi| \widehat{q}+(\eta+\mu) |\xi|^2 \widehat{ \Upsilon} = \widehat{\Omega^{-1}\diverg \mathfrak{b}},\\
\partial_{t}\widehat{M}+\nu|\xi|^2 \widehat{M}=\widehat{\mathfrak{c}},
\end{cases}
\end{eqnarray}
further, the linearized equations are as below
\begin{eqnarray}\label{B.4}
\begin{cases}
\partial_{t}\widehat{q}+|\xi| \widehat{\Upsilon}=0,\\
\partial_{t}\widehat{\Upsilon} -P'(1)|\xi| \widehat{q}+(\eta+\mu) |\xi|^2 \widehat{\Upsilon} = 0,\\
\partial_{t}\widehat{M}+\nu|\xi|^2 \widehat{M}=0.
\end{cases}
\end{eqnarray}
In fact, (\ref{B.4}) is equivalent to
\begin{eqnarray}\label{B.5}
\frac{d}{dt}\begin{pmatrix}
\widehat{q}\\
\widehat{\Upsilon}\\
\widehat{M}
\end{pmatrix}
+\mathbb{Q}\begin{pmatrix}
\widehat{q}\\
\widehat{\Upsilon}\\
\widehat{M}
\end{pmatrix}=0,
\end{eqnarray}
where
\begin{eqnarray}\nonumber
\mathbb{Q}=
\begin{pmatrix}
0& |\xi| &0\\
-P'(1)|\xi| &(\eta+\mu) |\xi|^2 &0\\
0&0&\nu|\xi|^2
\end{pmatrix}.
\end{eqnarray}
\subsection{The study of low-frequency portion}
By (\ref{B.4}), one may calculate
\begin{eqnarray}\label{B.6}
\frac{d}{dt}\Big(\frac{P'(1)|\widehat{q}|^2+|\widehat{\Upsilon}|^2+|\widehat{M}|^2}{2}\Big)+(\eta+\mu) |\xi|^2 |\widehat{\Upsilon}|^2+\nu|\xi|^2 |\widehat{M}|^2=0.
\end{eqnarray}
Adding $(\ref{B.4})_{1} \times\bar{\widehat{\Upsilon}}$ to $\overline{(\ref{B.4})_{2}}\times\widehat{q}$, one obtains
\begin{eqnarray}\label{B.7}
\frac{d}{dt}Re(\widehat{q}\bar{\widehat{\Upsilon}})+|\xi||\widehat{\Upsilon}|^2-P'(1)|\xi||\widehat{q}|^2 =-(\eta+\mu)|\xi|^2 Re(\widehat{q}\bar{\widehat{\Upsilon}}).
\end{eqnarray}
Choosing a fixed small constant $\delta_{*}>0$ and adding $-\delta_{*}|\xi|\times(\ref{B.7})$ to (\ref{B.6}) yields
\begin{eqnarray}\label{B.8}
\begin{aligned}
&\frac{d}{dt}\Big(\frac{P'(1)|\widehat{q}|^2+|\widehat{\Upsilon}|^2+|\widehat{M}|^2-2\delta_{*}|\xi|Re(\widehat{q}\bar{\widehat{\Upsilon}})}{2}\Big)\\
&+(\eta+\mu) |\xi|^2 |\widehat{\Upsilon}|^2+\nu|\xi|^2 |\widehat{M}|^2-\delta_{*}|\xi|^2|\widehat{\Upsilon}|^2+P'(1)\delta_{*}|\xi|^2|\widehat{q}|^2 \\ &=\delta_{*}(\eta+\mu)|\xi|^3 Re(\widehat{q}\bar{\widehat{\Upsilon}})\\
&\leq \frac{\delta_{*}(\eta+\mu)^2}{2P'(1)}|\xi|^4 |\widehat{\Upsilon}|^2+\frac{P'(1)\delta_{*}}{2}|\xi|^2|\widehat{q}|^2.
\end{aligned}
\end{eqnarray}
By noticing
\begin{eqnarray}\nonumber
0<\delta_{*}\leq \min\Big\{\frac{1}{2},\frac{\eta+\mu}{2}\Big\},
\end{eqnarray}
one deduces from (\ref{B.8}) that
\begin{eqnarray}\label{B.9}
\begin{aligned}
&\frac{d}{dt}\Big(\frac{P'(1)|\widehat{q}|^2+|\widehat{\Upsilon}|^2+|\widehat{M}|^2-2\delta_{*}|\xi|Re(\widehat{q}\bar{\widehat{\Upsilon}})}{2}\Big)+\frac{\eta+\mu}{2} |\xi|^2 |\widehat{\Upsilon}|^2+\nu|\xi|^2 |\widehat{M}|^2+\frac{\delta_{*}P'(1)}{2}|\xi|^2|\widehat{q}|^2 \\
&\leq \frac{(\eta+\mu)^2}{4P'(1)}|\xi|^4 |\widehat{\Upsilon}|^2,
\end{aligned}
\end{eqnarray}
further, noticing a small constant $b_{0}$, for
\begin{eqnarray}\nonumber
|\xi|\leq b_{0}\leq \sqrt{\frac{P'(1)}{\eta+\mu}},
\end{eqnarray}
one obtains from (\ref{B.9})
\begin{eqnarray}\label{B.10}
\begin{aligned}
\frac{d}{dt}\ell(\xi,t)+\frac{\eta+\mu}{4} |\xi|^2 |\widehat{\Upsilon}|^2+\nu|\xi|^2 |\widehat{M}|^2+\frac{\delta_{*}P'(1)}{2}|\xi|^2|\widehat{q}|^2 \leq 0,
\end{aligned}
\end{eqnarray}
where
\begin{eqnarray}\nonumber
\ell(\xi,t):=\frac{P'(1)|\widehat{q}|^2+|\widehat{\Upsilon}|^2+|\widehat{M}|^2-2\delta_{*}|\xi|Re(\widehat{q}\bar{\widehat{\Upsilon}})}{2}.
\end{eqnarray}
Because of $\delta_{*}b_{0}\leq\min\{\frac{P'(1)}{2},\frac{1}{2}\}$, one has
\begin{eqnarray}\nonumber
\ell(\xi,t)\sim|\widehat{q}|^2+|\widehat{\Upsilon}|^2+|\widehat{M}|^2,
\end{eqnarray}
so, for $|\xi|\leq b_{0}$, $\exists~\text{constant}~C_{l}>0$ such that
\begin{eqnarray}\label{B.11}
C_{l}|\xi|^2 \ell(\xi,t)\leq \frac{\eta+\mu}{4} |\xi|^2 |\widehat{\Upsilon}|^2+\nu|\xi|^2 |\widehat{M}|^2+\frac{\delta_{*}P'(1)}{2}|\xi|^2|\widehat{q}|^2.
\end{eqnarray}
Thus, from (\ref{B.10}) and (\ref{B.11}), one obtains
\begin{eqnarray}\label{B.12}
\ell(\xi,t)\leq e^{-C_{l}|\xi|^2 t}\ell(\xi,0),~|\xi|\leq b_{0}.
\end{eqnarray}
\subsection{The study of middle-frequency portion}
Let's calculate the characteristic polynomial of $\mathbb{Q}$
\begin{eqnarray}\nonumber
\begin{aligned}
\mathbb{Q}_{\lambda_{0}}=|\mathbb{Q}-\lambda_{0}I|
=\Big(\lambda_{0}^2-(\eta+\mu)|\xi|^2 \lambda_{0} +P'(1)|\xi|^2 \Big)\Big(\nu|\xi|^2-\lambda_{0}\Big)=-\alpha_{0}\lambda_{0}^3+\alpha_{1}\lambda_{0}^2-\alpha_{2}\lambda_{0}+\alpha_{3},
\end{aligned}
\end{eqnarray}
here
\begin{eqnarray}\nonumber
\begin{aligned}
\alpha_{0}=1,~~ \alpha_{1}=(\eta+\mu)|\xi|^2+\nu|\xi|^2,~~ \alpha_{2}=\nu(\eta+\mu)|\xi|^4+P'(1)|\xi|^2,~~ \alpha_{3}=\nu P'(1)|\xi|^4.
\end{aligned}
\end{eqnarray}
Since $\alpha_{0}$-$\alpha_{3}$ are all positive, by the Routh-Hurwitz criterion all roots of $\mathbb{Q}_{\lambda_{0}}$ have positive real part if and only if $\mathbb{Q}_{1}>0$ and $\mathbb{Q}_{2}>0$:
\begin{eqnarray}\nonumber
\begin{aligned}
&\mathbb{Q}_{1}=\alpha_{1}=(\eta+\mu)|\xi|^2+\nu|\xi|^2>0,\\
&\mathbb{Q}_{2}=
\begin{vmatrix}
\alpha_{1}&\alpha_{0}\\
\alpha_{3}&\alpha_{2}
\end{vmatrix}
=\nu(\eta+\mu)^2|\xi|^6+(\eta+\mu)P'(1)|\xi|^4+\nu^2(\eta+\mu)|\xi|^6>0.
\end{aligned}
\end{eqnarray}
According to the analysis in Subsection 3.3 of \cite{Danchin}, Lemma \ref{LB1} can be obtained.
\begin{Lemma}\label{LB1}
For any constants $\mathfrak{b}$ and $\mathfrak{B}$ with $0<\mathfrak{b}<\mathfrak{B}$, there exists a constant $\varsigma>0$, depending only on $\eta$, $\nu$, $\mathfrak{b}$ and $\mathfrak{B}$, such that for all $\mathfrak{b}\leq |\xi|\leq \mathfrak{B}$ and $t\in R^{+}$
\begin{eqnarray}\label{B.13}
\begin{aligned}
|e^{-t\mathbb{Q}}|\leq Ce^{-\varsigma t}.
\end{aligned}
\end{eqnarray}
\end{Lemma}
From (\ref{B.5}) and (\ref{B.13}), for $\forall~\mathfrak{b}\leq |\xi|\leq \mathfrak{B}$, one has
\begin{eqnarray}\label{B.14}
\begin{aligned}
|(\widehat{q},\widehat{\Upsilon},\widehat{M})(\xi,t)|=|e^{-t\mathbb{Q}}(\widehat{q},\widehat{\Upsilon},\widehat{M})(\xi,0)|\leq Ce^{-\varsigma t}|(\widehat{q},\widehat{\Upsilon},\widehat{M})(\xi,0)|.
\end{aligned}
\end{eqnarray}
\subsection{Estimation of $\widehat{\Gamma u}(\xi,t)$}
Taking the Fourier transform of (\ref{B.2}), its linearized part reads
\begin{eqnarray}\label{B.15}
\partial_{t}\widehat{\Gamma u} +\mu |\xi|^2\widehat{\Gamma u} = 0,
\end{eqnarray}
for $\forall~|\xi|\geq 0$, one has
\begin{eqnarray}\label{B.16}
|\widehat{\Gamma u}(\xi,t)|^2\leq C e^{-\mu |\xi|^2 t}|\widehat{\Gamma u}(\xi,0)|^2.
\end{eqnarray}
\vskip2mm
\renewcommand\refname{References}
\end{document}
\begin{document}
\title{Thermodynamic bound on quantum state discrimination}
\author{Jos\'{e} Polo-G\'{o}mez}
\email{[email protected]}
\affiliation{Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada}
\affiliation{Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada}
\affiliation{Perimeter Institute for Theoretical Physics, Waterloo, Ontario, N2L 2Y5, Canada}
\begin{abstract}
We show that the second law of thermodynamics poses a restriction on how well we can discriminate between quantum states. By examining an ideal gas with a quantum internal degree of freedom undergoing a cycle based on a proposal by Asher Peres, we establish a non-trivial upper bound on the attainable accuracy of quantum state discrimination. This thermodynamic bound, which relies solely on the linearity of quantum mechanics and the constraint of no work extraction, matches Holevo's bound on accessible information, but is looser than the Holevo-Helstrom bound. The result provides further evidence of the disagreement between thermodynamic entropy and von Neumann entropy, and places potential limitations on proposals beyond quantum mechanics.
\end{abstract}
\maketitle
\section{Introduction}\label{Section: introduction}
The major role that information plays in thermodynamics was first realized by Szilard~\cite{Szilard}, and later underpinned by Landauer's erasure principle~\cite{Landauer} and Bennett's exorcization of Maxwell's demon~\cite{Bennett}. The interplay between the second law of thermodynamics and information processing has ever since been the subject of extensive research (see~\cite{Maruyama2009} for a review). In the last decades, this relationship has been particularly fruitful for the study of small far-from-equilibrium systems~\cite{Sagawa2012smallsystems}, both in the realm of stochastic thermodynamics (see, e.g.,~\cite{Piechocinska2000,Touchette2000,Cao2004,Kawai2007,Cao2009,Abreu2011,Abreu2012,Strasberg2013,Mandal2013,Barato2014,Horowitz2014}), and quantum mechanics (see~\cite{Alicki2004,Sagawa2009,Jacobs2009,Sagawa2010,Hilt2011,Esposito2011,Sagawa2012fluctuation,Reeb2014,Alhambra2016,Naghiloo2018,Ptaszynsky2019}, among many others).
The potential applications of quantum thermodynamics~\cite{Scovil1959,Sekimoto2000,Sato2002,Scully2002,Quan2007,Dillenscheneider2009,Linden2010,Zhang2014,Rossnagel2014,Huang2014,Abah2014,Quan2014,Gardas2015,Rossnagel2016,Klaers2017,Niedenzu2018,PozasKerstjens2018,Micadei2019,YungerHalpern2022} have been one of the main driving forces for its development in the last decades~\cite{Vinjanampathy2016,Binder2018,Deffner2019}, and it is precisely in this context that it is important to understand the constraints that thermodynamics imposes on quantum information processing.
One of the most elementary problems in the theory of quantum information is quantum state discrimination, i.e., the problem of determining how well we can possibly distinguish a set of quantum states from one another.
Quantum state discrimination is essential to retrieving classical information from a quantum system, making it a crucial component in most quantum information processes~\cite{Bae2015}. It has also been shown to be relevant for quantum foundations~\cite{Gisin1998,Simon2001,Gallego2010,Pusey2012} and quantum communication~\cite{Bruss1998,Keyl1999,Bruss2000,Bae2006,Chiribella2006}.
That the second law of thermodynamics imposes a restriction on our ability to distinguish quantum states was first realized by Asher Peres~\cite{Peres1993}. In his work, Peres considered an ideal gas of particles with spin, and imagined the existence of two ``magical membranes'', \textit{perfectly} transparent or opaque to two non-orthogonal states. It was shown that these fictitious artifacts would allow the performance of a cycle in which heat was extracted from a heat bath and completely converted into work, in contradiction with the Kelvin-Planck statement of the second law. This result was not a challenge to the second law, of course, since the membranes imagined by Peres, to the best of our knowledge, do not exist. However, it shows that the existence of these membranes is \textit{not only} forbidden by quantum mechanics, but by thermodynamics \textit{as well}, thus establishing yet another link between the already very intertwined areas of thermodynamics and information. Moreover, a cycle akin to that of Peres was also considered by K. Maruyama, \v{C}. Brukner and V. Vedral~\cite{Maruyama2005} to obtain a bound for accessible information from thermodynamic arguments. This bound was similar to, but weaker than, the one obtained by Holevo~\cite{Holevo1973} within quantum information theory.
In the spirit of these previous works, here we show that the second law of thermodynamics imposes a bound on how well we can distinguish two pure quantum states. In Sec.~\ref{Section: Peres' demon}, we reformulate Peres' idea introducing a ``demon'' that carries out the (in principle \textit{forbidden}) operations, since this artifact highlights the informational aspects that will turn out to be relevant afterwards. In Sec.~\ref{Section: Modified cycle and bound}, we will obtain the ``thermodynamic bound'' on quantum state discrimination using an adequate modification of the previous setup. In Sec.~\ref{Section: Discussion} we compare this bound to the other bounds from quantum information theory and analyze the consequences of the results, concluding in Sec.~\ref{Section: Conclusion}.
\section{Peres' demon}\label{Section: Peres' demon}
In this Section we will reformulate the cycle devised by Peres~\cite{Peres1993,Maruyama2009} in a way that will allow us to monitor both the flow of information and the role of measurements in the process. The key elements of the reformulation are what we will call \textit{Peres' demons}. We will assume that these entities have somehow the ability to perfectly distinguish two specific non-orthogonal quantum states. Notice that, in principle, these demons are not Maxwell's demons, in the sense that the \textit{forbidden} operation that we will assume they have the ability to perform does not explicitly violate a thermodynamic law, but rather a quantum mechanical one. We will see that, nevertheless, this ability leads to the possibility of violating the second law of thermodynamics. Specifically, with the sole premise of the linearity of quantum mechanics, the second law of thermodynamics enforces that non-orthogonal states cannot be perfectly distinguished.
We consider an ideal gas contained in a cylinder of fixed total volume $V$. The gas has an internal quantum degree of freedom that is decoupled from the rest. The cycle, depicted in Fig.~\ref{fig: Peres cycle}, starts off with a gas divided in two portions of $N/2$ particles whose internal degrees of freedom are in two \textit{non-orthogonal} states $\ket{\psi_1}$ (left, in red) and $\ket{\psi_2}$ (right, in blue). Each gas occupies a quarter of the total volume available, and is separated by a fixed opaque wall, as represented in the top left picture of Fig.~\ref{fig: Peres cycle}.
\begin{figure*}
\caption{Reformulation of the cycle proposed by Asher Peres~\cite{Peres1993}.}
\label{fig: Peres cycle}
\end{figure*}
In the first step, the two gases expand isothermally, allowing the extraction of work. Following the convention by which work is positive when it is given to the system, and negative when extracted from it, we have that the work involved in this first step of the process is
\begin{equation}\label{eq: W1}
W_1= - N k_{\textsc{b}} T \int_{V/4}^{V/2} \frac{\textrm{d}v}{v}=- N k_{\textsc{b}}T \ln2.
\end{equation}
In the second step, the opaque wall separating the two gases is exchanged by two other walls, initially at the centre of the cylinder, and one next to the other. These walls are each operated by one Peres' demon, an entity capable of perfectly distinguishing $\ket{\psi_1}$ from $\ket{\psi_2}$. We will assume that the demons share a common memory in which they store the information about what is the internal quantum state of each particle in the gas. The job of these demons is to use their ability to distinguish $\ket{\psi_1}$ and $\ket{\psi_2}$ to decide whether a particle arriving to the wall goes through it or not, depending on what the quantum state of the particle is. The demon operating the wall to the left (red wall, in Fig.~\ref{fig: Peres cycle}) is instructed to let the particles in state $\ket{\psi_1}$ pass, rebounding off the ones in state $\ket{\psi_2}$. That way, this wall is \textit{transparent} to $\ket{\psi_1}$, while \textit{opaque} to $\ket{\psi_2}$. The wall to the right (blue wall, in Fig.~\ref{fig: Peres cycle}) operates in reverse: it is transparent to $\ket{\psi_2}$, and opaque to $\ket{\psi_1}$. Thus, the gas in the left half of the cylinder (in state $\ket{\psi_1}$) exerts pressure on the blue wall, but not the red one. Conversely, the gas in the right half (in state $\ket{\psi_2}$) exerts pressure on the red wall, but not the blue one. Thus, once the walls are allowed to move, the red and the blue walls move to their left and right, respectively, and they do so without opposition until they reach the ends of the cylinder. Each gas is therefore freely expanding to the whole volume, and the work involved in this process is
\begin{equation}\label{eq: W2}
W_2=-N k_\textsc{b} T \int_{V/2}^{V} \frac{\textrm{d}v}{v}=- N k_{\textsc{b}}T \ln2.
\end{equation}
At the end of this step, the demons have measured the state of all the particles, and this information is stored in their shared memory. The quantum state of the particles might have been modified by the measurement process as well, and the measurement itself had an associated work cost that we denote as $W_{\text{meas}}$. Now, notice that the information stored in the memory was not depleted in the performance of the second step. Thus, the demons can be instructed to perform a joint \textit{resetting} operation on the memory and the gas to reverse the measurement process, taking both systems back to their initial states, with an associated work cost $W_{\text{reset}}$. After this step, the information in the memory is erased, and the quantum state of a random particle in the gas (i.e., for the ensemble) can be described by the mixed state
\begin{equation}\label{eq: rho}
\hat\rho=\frac{1}{2}\ket{\psi_1}\!\!\bra{\psi_1} + \frac{1}{2} \ket{\psi_2}\!\!\bra{\psi_2}.
\end{equation}
Here, we will work in the limit in which the measurement-resetting process is reversible, i.e., when
\begin{equation}
W_{\text{meas}}+W_{\text{reset}}=0.
\end{equation}
The assumption that the measurement and resetting processes can be performed reversibly might seem counterintuitive since 1) especially in quantum mechanics, measurements are regarded as highly irreversible~\cite{vonNeumannFoundations,Peres1980}, and 2) in this protocol the demons use the outcomes of the measurements to apply a feedback to the gas (see, e.g.,~\cite{Jacobs2009,Sagawa2010,Funo2013}). However, in this setup the feedback is applied to the kinematic degrees of freedom, which are dynamically decoupled from the quantum degree of freedom that was measured. This is the reason why the information in the memory is not exhausted by the feedback in the first place. As a consequence, we can think of the measurement and resetting processes as being reversible overall~\cite{Peres1974}, as long as they are only concerned with the memory and the internal quantum state. In this sense, notice that the quantum state of the gas after the resetting is the mixture given in Eq.~\eqref{eq: rho} that also corresponds to the state of the gas before step 2 \textit{if we trace out} the kinematic degrees of freedom, i.e., if we do not take into account whether the particles of the gas are to the left or to the right of the central wall (cf. Fig.~\ref{fig: Peres cycle}). It is not the state of the gas as a whole that is reversed during the resetting operation, but rather just the state of its quantum degree of freedom (jointly with the memory). The point can still be made that, e.g., projective measurements in quantum mechanics, are irreversible. However, by virtue of Naimark's dilation theorem~\cite{Watrous2018}, this irreversibility can be understood as emerging from tracing out external degrees of freedom, specifically those of the measurement apparatus~\cite{Peres1974}. Even without this consideration at hand, it has been shown that sharp measurements can be approximated arbitrarily well with logically reversible measurements~\cite{Ueda1996}. It is worth remarking that these considerations were not made in Peres' work~\cite{Peres1993}, since in his version of the cycle the role of demons is played by membranes (see Sec.~\ref{Section: introduction}), and these are assumed to perfectly discriminate between $\ket{\psi_1}$ and $\ket{\psi_2}$ without altering the state of the particles, so that it is not necessary to consider an explicit measurement process---nor a memory to store its outcomes.
Now, in order to proceed with the rest of the cycle, notice that in the quantum state given in Eq.~\eqref{eq: rho}, $\ket{\psi_1}$ and $\ket{\psi_2}$ are non-orthogonal, and therefore $\hat\rho$ can be diagonalized to
\begin{equation}\label{eq: rho diagonalization}
\hat\rho=c \ket{\phi_1}\!\!\bra{\phi_1} + (1-c) \ket{\phi_2}\!\!\bra{\phi_2},
\end{equation}
for some orthogonal states $\ket{\phi_1}$ and $\ket{\phi_2}$, and some real constant \mbox{$c \in [0,1]$}. We can now introduce two membranes such that one is transparent to $\ket{\phi_1}$ and opaque to $\ket{\phi_2}$ (the green one, in Fig.~\ref{fig: Peres cycle}), and the other one is opaque to $\ket{\phi_1}$ and transparent to $\ket{\phi_2}$ (the yellow one, in Fig.~\ref{fig: Peres cycle}). Since $\ket{\phi_1}$ and $\ket{\phi_2}$ are orthogonal, these membranes are in principle physically realizable (even though they are highly ideal~\cite{Peres1993}).
In the third step, each membrane is introduced at a different end of the cylinder to isothermally compress the portion of the gas it is opaque to, until they meet at the centre of the cylinder. The work involved in this process is
\begin{equation}\label{eq: W3}
W_3=-[cN +(1-c)N]k_{\textsc{b}} T \int_{V}^{V/2} \frac{\mathrm{d} v}{v}=N k_{\textsc{b}} T \ln 2,
\end{equation}
and as a result the gas is separated in two portions, one in state $\ket{\phi_1}$ (light green in Fig.~\ref{fig: Peres cycle}), and the other one in state $\ket{\phi_2}$ (light yellow in Fig.~\ref{fig: Peres cycle}).
In the fourth step of the process, each portion of the gas is isothermally compressed to the initial pressure, namely
\begin{equation}
P_0= \frac{\frac{N}{2} k_{\textsc{b}}T}{\frac{V}{4}}=\frac{2Nk_{\textsc{b}}T}{V}.
\end{equation}
This compression is carried out using regular opaque walls, depicted in black in Fig.~\ref{fig: Peres cycle}. Since the portion of gas in state $\ket{\phi_1}$ contains $cN$ particles, it has to be compressed to a volume
\begin{equation}
V_1=\frac{cN k_{\textsc{b}}T}{P_0}=\frac{cV}{2}.
\end{equation}
Analogously, the portion of gas in state $\ket{\phi_2}$ has to be compressed to a volume
\begin{equation}
V_2=\frac{(1-c)V}{2}.
\end{equation}
The work involved in this process is
\begin{align}\label{eq: W4}
W_4&=-cN k_{\textsc{b}} T \int_{V/2}^{cV/2} \frac{\mathrm{d} v}{v} - (1-c)N k_{\textsc{b}} T \int_{V/2}^{(1-c)V/2} \frac{\mathrm{d} v}{v} \nonumber \\
&= N k_{\textsc{b}} T [-c \ln c -(1-c)\ln (1-c)] \nonumber \\
&=N k_{\textsc{b}} T S(\hat\rho) \ln 2,
\end{align}
where we have recognized that
\begin{equation}
-c \log c -(1-c)\log (1-c)=H(c)=S(\hat\rho)
\end{equation}
is the Shannon entropy of the binary distribution with probability $c$, $H(c)$, and that this is precisely the von Neumann entropy of the state $\hat\rho$, since $c$ and $1-c$ are its eigenvalues. Note that as long as $\ket{\psi_1}$ and $\ket{\psi_2}$ are non-orthogonal, we have that $c \neq 1/2$, and therefore \mbox{$H(c) < H(1/2)=1$} (see Appendix~\ref{Appendix: Even mixture of non-orthogonal states}).
Finally, in the fifth and last step of the cycle, an additional wall is introduced in the cylinder. This leaves the gas divided in three portions. To explain this last process we will refer to the bottom left picture of Fig.~\ref{fig: Peres cycle}, where we have assumed, without loss of generality, that $c>1/2$, and hence there are more particles of the gas in state $\ket{\phi_1}$. In that case, two portions of gas, occupying volumes $V/4$ and $(2c-1)V/4$, are in state $\ket{\phi_1}$, while the remaining portion, occupying a volume $(1-c)V/2$, is in state $\ket{\phi_2}$. We can then perform separate unitary transformations in each portion. On the leftmost one (occupying a volume $V/4$), we perform a unitary taking $\ket{\phi_1}$ to $\ket{\psi_1}$. Meanwhile, on the central and rightmost portions (whose volumes add up to $V/4$), we perform two different unitaries that transform $\ket{\phi_1}$ and $\ket{\phi_2}$ into $\ket{\psi_2}$, respectively. These unitaries---which, for instance, in the case in which the internal quantum degree of freedom is spin, could correspond to the application of magnetic fields with a certain direction, intensity and duration---can be performed with an arbitrarily small work cost, which becomes zero in the quasistatic limit, when we allow the process to take place during an infinitely long period of time. Thus, we can assume
\begin{equation}\label{eq: W5}
W_5=0.
\end{equation}
Once the cycle is completed, we can compute the total work involved:
\begin{align}
W_t&=\sum_{j=1}^{5} W_j + W_{\text{meas}} + W_{\textrm{reset}} \\
&=-N k_{\textsc{b}} T [1-S(\hat\rho)] \ln 2. \nonumber
\end{align}
Since, as remarked before, $S(\hat\rho)<1$ as long as $\ket{\psi_1}$ and $\ket{\psi_2}$ are not orthogonal, we conclude that
\begin{equation}
W_t < 0,
\end{equation}
which means that the cycle's net effect is the extraction of work, in violation of the second law. This, as anticipated, is a consequence of having assumed the possibility of distinguishing two non-orthogonal states (while still keeping the well-known ability to discriminate orthogonal states~\cite{vonNeumann}).
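As an illustration only, the following Python sketch tallies the work contributions of Eqs.~\eqref{eq: W1}--\eqref{eq: W5} for a sample overlap, in units where $N k_{\textsc{b}} T = 1$; the eigenvalue $c=(1+\cos\theta)/2$ used below anticipates the diagonalization worked out in Appendix~\ref{Appendix: results in terms of the angle}, and the chosen angle is arbitrary.
\begin{verbatim}
# Tally W1..W5 of the Peres cycle for an illustrative overlap (N*k_B*T = 1).
import numpy as np

def binary_entropy(p):
    # Shannon entropy (in bits) of a binary distribution with probability p
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

cos_theta = np.cos(np.pi / 4)        # illustrative overlap |<psi1|psi2>|
c = (1 + cos_theta) / 2              # eigenvalue of rho (see the Appendix on theta)
S_rho = binary_entropy(c)            # von Neumann entropy of the even mixture

W1 = -np.log(2)                      # isothermal expansion V/4 -> V/2 (both gases)
W2 = -np.log(2)                      # expansion driven by the demon-operated walls
W3 = +np.log(2)                      # compression by the two membranes
W4 = S_rho * np.log(2)               # compression back to the initial pressure
W5 = 0.0                             # quasistatic unitaries on the internal state

W_total = W1 + W2 + W3 + W4 + W5     # W_meas + W_reset = 0 in the reversible limit
assert np.isclose(W_total, -(1 - S_rho) * np.log(2))
print(W_total)                       # negative: net work is extracted over the cycle
\end{verbatim}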
\section{Thermodynamic bound}\label{Section: Modified cycle and bound}
In this Section we modify the setup presented in Sec.~\ref{Section: Peres' demon} to obtain a bound for how well we can discriminate two quantum states solely based on thermodynamic constraints. To do so, instead of Peres' demons, we consider demons that can only distinguish states $\ket{\psi_1}$ and $\ket{\psi_2}$ with a certain probability of success. We then calculate the total work involved in the cycle with these conditions. Imposing that the second law is satisfied we obtain an upper bound on the accuracy with which a pair of second-law-abiding demons should be able to discriminate $\ket{\psi_1}$ from $\ket{\psi_2}$. Notice that, unlike before, these entities are given the ability to distinguish quantum states \textit{only within the limits of thermodynamics}. In Sec.~\ref{Section: Peres' demon}, Peres' demons were considered to have the ability to perfectly distinguish a pair of non-orthogonal quantum states, which is in principle an operation forbidden by quantum mechanics, but turned out to be forbidden by thermodynamics as well. Here, the second-law-abiding demons have the ability to perform the discrimination of non-orthogonal quantum states only with a certain efficiency limited by the fulfillment of the second law of thermodynamics. We should nevertheless still call them demons since, as long as the thermodynamic bound is looser than the quantum information one---as we will see is indeed the case---, these entities can perform operations that are forbidden by quantum mechanics (but might not be forbidden by other proposals beyond standard quantum theory).
The modified cycle is represented in Fig.~\ref{fig: modified cycle}. There, it becomes apparent that steps 1, 3, 4, and 5 of the cycle are the same as in Sec.~\ref{Section: Peres' demon}, and so is the work involved in them. The only differences we ought to analyze are in step 2 and the resetting of the demons' memory.
\begin{figure*}
\caption{Modified cycle resulting from having demons that cannot \textit{perfectly} distinguish $\ket{\psi_1}$ from $\ket{\psi_2}$.}
\label{fig: modified cycle}
\end{figure*}
In this new setup, the demons are only capable of distinguishing $\ket{\psi_1}$ and $\ket{\psi_2}$ with some limited efficiency: namely, we assume that for some $\delta \in [0,1]$,
\begin{equation}\label{eq: prob. success}
P_s=\operatorname{Prob}(\textrm{``guessing correctly''})=\frac{1+\delta}{2}.
\end{equation}
Note that $\delta=0$ corresponds to the case in which the demons are not capable of discriminating one state from the other at all, while $\delta=1$ corresponds to the scenario studied in Sec.~\ref{Section: Peres' demon}.
In the second step of the cycle, the demons operate the red and blue walls imperfectly, and unlike in the previous setup, the process will in general stop before the walls reach the ends of the cylinder. Specifically, a fraction \mbox{$(1-\delta)/2$} of the particles that were in state $\ket{\psi_1}$ are mistakenly identified as being in state $\ket{\psi_2}$. These particles now exert pressure on the red wall, and are constrained to move between it and the leftmost end of the cylinder. Similarly, a fraction $(1-\delta)/2$ of the particles that were in state $\ket{\psi_2}$ are mistakenly identified as being in state $\ket{\psi_1}$, and they exert pressure on the blue wall, constrained to move between it and the rightmost end of the cylinder (see top right picture in Fig.~\ref{fig: modified cycle}, for clarity). Thus, the red wall feels pressure
\begin{itemize}
\item[-] on its left, from those particles initially in state $\ket{\psi_1}$ that were measured to be in state $\ket{\psi_2}$, and
\item[-] on its right, from those particles that were correctly measured to be in state $\ket{\psi_2}$, which move freely between the red wall and the rightmost end of the cylinder (the blue wall is transparent to them, since the demons share the memory, i.e., their measurements are consistent).
\end{itemize}
Since the red wall is transparent to the rest of particles, it stops moving whenever these two pressures equalize. Let $x$ be the fraction of the total volume to the left of the red wall once it reaches equilibrium (cf. Fig.~\ref{fig: modified cycle}), then, by Eq.~\eqref{eq: prob. success},
\begin{equation}
\frac{1-\delta}{2} \,\frac{N k_{\textsc{b}} T}{2xV}=\frac{1+\delta}{2} \, \frac{N k_{\textsc{b}} T}{2(1-x)V} \; \Rightarrow \; x=\frac{1-\delta}{2}.
\end{equation}
An analogous treatment for the blue wall yields
\begin{equation}
y=\frac{1-\delta}{2},
\end{equation}
where now $y$ is the fraction of the total volume to the right of the blue wall once it reaches equilibrium (cf. Fig.~\ref{fig: modified cycle}). The work involved in these ``incomplete expansions'' is then
\begin{align}\label{Eq: W'2}
&W'_2=-\frac{1-\delta}{4} N k_{\textsc{b}} T \int_{V/2}^{xV} \frac{\mathrm{d} v}{v} - \frac{1+\delta}{4} N k_{\textsc{b}} T \int_{V/2}^{(1-x)V} \frac{\mathrm{d} v}{v} \nonumber \\
&\phantom{==\;\;\;} - \frac{1-\delta}{4} N k_{\textsc{b}} T \int_{V/2}^{yV} \frac{\mathrm{d} v}{v} - \frac{1+\delta}{4} N k_{\textsc{b}} T \int_{V/2}^{(1-y)V} \frac{\mathrm{d} v}{v} \nonumber \\
&\phantom{\;}=-Nk_{\textsc{b}} T \ln 2 \, \bigg( 1 + \frac{1-\delta}{2}\log \frac{1-\delta}{2} + \frac{1+\delta}{2} \log\frac{1+\delta}{2} \bigg) \nonumber \\
&\;= -N k_{\textsc{b}} T \ln 2 \, \bigg[ 1-H \bigg( \frac{1+\delta}{2}\bigg) \bigg],
\end{align}
where $H(p)$ is again the Shannon entropy of the binary distribution with probability $p$. Notice in particular that when $\delta=1$, $H(1)=0$ and we recover the result in Eq.~\eqref{eq: W2}.
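As a consistency check (purely illustrative), the following Python sketch evaluates the four expansion integrals in Eq.~\eqref{Eq: W'2} numerically for a sample value of $\delta$ and compares them with the closed form $-N k_{\textsc{b}} T \ln 2\,[1-H((1+\delta)/2)]$, again in units where $N k_{\textsc{b}} T = 1$.
\begin{verbatim}
# Check the closed form of W'_2 against direct integration (N*k_B*T = 1).
import numpy as np
from scipy.integrate import quad

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

delta, V = 0.6, 1.0                  # illustrative accuracy parameter and total volume
x = y = (1 - delta) / 2              # equilibrium wall positions found above

def expansion_work(fraction, v_final):
    # -(fraction of N) * integral of dv/v from V/2 to v_final
    integral, _ = quad(lambda v: 1.0 / v, V / 2, v_final)
    return -fraction * integral

W2_prime = (expansion_work((1 - delta) / 4, x * V)
            + expansion_work((1 + delta) / 4, (1 - x) * V)
            + expansion_work((1 - delta) / 4, y * V)
            + expansion_work((1 + delta) / 4, (1 - y) * V))

closed_form = -np.log(2) * (1 - binary_entropy((1 + delta) / 2))
assert np.isclose(W2_prime, closed_form)
print(W2_prime, closed_form)
\end{verbatim}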
It is worth remarking that, as before, the measurements, with an associated work cost $W'_{\text{meas}}$, might have affected the state of the particles. One might wonder whether the post-measurement states of the particles \textit{measured to be} in state $\ket{\psi_k}$ ($k=1,2$) are all the same or, on the contrary, they are different depending on their initial state. We argue here that, for the process to be optimal (and so we need it to be if we want to extract an upper bound for $\delta$), these states have to be the same. If they were not, then they would be physically distinguishable to some extent, and additional measurements could be performed by the demons to refine their initial guessing (i.e., there would be still some information left in the post-measurement states that could be extracted by the demons). Thus, we can assume here that all particles measured to be in state $\ket{\psi_k}$ (whether correctly or incorrectly) are left in the same post-measurement state $\ket{\psi'_k}$. Another consequence of this is that, once a particle is measured, it is unequivocally identified with the state of its measurement outcome by both demons (since they share their memory), and it will behave accordingly with respect to both walls. This condition is tantamount to requiring that the outcomes of sequential measurements performed by the demons have to be consistent.
Moreover, with the previous assumption, the mixture of gases at the end of step 2 turns out to be homogeneous throughout all the cylinder. Take, for instance, the leftmost portion. It has $(1-\delta)N/4$ particles in state $\ket{\psi'_2}$ that are constrained to that region of the cylinder. But we can also find there a fraction of the particles that were successfully measured to be in state $\ket{\psi_1}$ and freely move between the leftmost portion of the gas and the central one, only constrained by the blue wall. Specifically, a fraction $(1-\delta)/(1+\delta)$ of the $(1+\delta)N/4$ particles in this situation will be found on average on the leftmost region of the cylinder. This amounts precisely to $(1-\delta)N/4$ particles in state $\ket{\psi'_1}$. Thus, the quantum state of a random particle of the gas in this region can be described by the mixed state
\begin{equation}
\hat\rho'=\frac{1}{2}\ket{\psi'_1}\!\!\bra{\psi'_1}+\frac{1}{2} \ket{\psi'_2}\!\!\bra{\psi'_2}.
\end{equation}
The same reasoning can be carried out for the other two regions of the cylinder, with the same conclusion.
Finally, before proceeding with the third step of the cycle, the demons are instructed to perform a joint resetting of the memory and the gas, with a work cost $W'_{\text{reset}}$. After this step, the memory is taken back to its default state, and the ensemble of particles in the gas can be described by
\begin{equation}
\hat\rho=\frac{1}{2}\ket{\psi_1}\!\!\bra{\psi_1} + \frac{1}{2} \ket{\psi_2}\!\!\bra{\psi_2}.
\end{equation}
As in Sec.~\ref{Section: Peres' demon}, we assume that the measurement-resetting process can be performed in a reversible way, so that
\begin{equation}\label{measurement and resetting modified}
W'_{\text{meas}} + W'_{\text{reset}}=0.
\end{equation}
Once the modified cycle is performed, using the work calculated in Eq.~\eqref{Eq: W'2} for the modified step 2, the condition given in Eq.~\eqref{measurement and resetting modified} for the work involved in the measurement and the resetting, and retrieving the work involved in steps 1, 3, 4, and 5 from Eqs.~\eqref{eq: W1},~\eqref{eq: W3},~\eqref{eq: W4}, and~\eqref{eq: W5}, we can calculate the net work received by the system during the cycle,
\begin{equation}
W'_t= -N k_{\textsc{b}} T \bigg[ 1 - H\bigg( \frac{1+\delta}{2} \bigg) - S(\hat\rho) \bigg] \ln 2.
\end{equation}
If we now impose the constraint that the second law is satisfied, i.e., that $W'_t \geq 0$, then this translates into
\begin{equation}
1 - H\bigg( \frac{1+\delta}{2} \bigg) - S(\hat\rho) \leq 0.
\end{equation}
Notice that as $\delta$ increases from 0 to 1, $H[(1+\delta)/2]$ decreases, and therefore the left-hand side of the previous inequality increases. Thus, the optimal $\delta$ that we can achieve while fulfilling the second law, which we denote $\delta_{th}$, is the one that saturates the inequality,
\begin{equation}\label{eq: thermodynamic bound}
H\bigg( \frac{1+\delta_{th}}{2} \bigg)= 1-S(\hat\rho).
\end{equation}
This is the thermodynamic bound that we were looking for.
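Since Eq.~\eqref{eq: thermodynamic bound} is implicit in $\delta_{th}$, it is convenient to solve it numerically. The following Python sketch (illustrative only; it assumes SciPy is available and uses the relation $S(\hat\rho)=H[(1+\cos\theta)/2]$ derived in the next section) brackets the unique root.
\begin{verbatim}
# Solve H((1+delta_th)/2) = 1 - S(rho) for delta_th by root bracketing.
import numpy as np
from scipy.optimize import brentq

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def delta_th(cos_theta):
    # thermodynamic bound on the discrimination advantage for overlap cos_theta
    S_rho = binary_entropy((1 + cos_theta) / 2)
    # H((1+d)/2) decreases monotonically for d in [0,1], so the root is unique
    return brentq(lambda d: binary_entropy((1 + d) / 2) - (1 - S_rho), 0.0, 1.0)

print(delta_th(np.cos(np.pi / 3)))   # illustrative overlap cos(theta) = 1/2
\end{verbatim}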
\section{Comparison between bounds}\label{Section: Discussion}
Now that we have obtained the thermodynamic bound, let us compare it with the bound given by the Holevo-Helstrom theorem~\cite{Holevo1974,Helstrom1976} within quantum information theory. The theorem establishes that the optimal success probability in the discrimination of two quantum states $\hat\rho_1$ and $\hat\rho_2$ is given by
\begin{equation}
P_s= \frac{1}{2} + \frac{1}{4}|\!| \hat\rho_1-\hat\rho_2 |\!|_1 \; \Rightarrow \; \delta_{\textsc{qi}}=\frac{1}{2}|\!| \hat\rho_1-\hat\rho_2 |\!|_1,
\end{equation}
where we used that $P_s=(1+\delta_\textsc{qi})/2$, with $\delta_\textsc{qi}$ representing how much better than random we can perform in the optimal case according to quantum information. For the particular case of two pure states, \mbox{$\hat\rho_1=\ket{\psi_1}\!\!\bra{\psi_1}$} and \mbox{$\hat\rho_2=\ket{\psi_2}\!\!\bra{\psi_2}$}, we can evaluate the bound in terms of the overlap between both states: let $\theta \in [0,\pi/2]$ be such that
\begin{equation}
|\!\braket{\psi_1}{\psi_2}\!|=\cos \theta,
\end{equation}
then we have (see Appendix~\ref{Appendix: results in terms of the angle} for details)
\begin{equation}
|\!| \hat\rho_1-\hat\rho_2 |\!|_1=2\sin\theta.
\end{equation}
The optimal bound that one can obtain in quantum information theory is therefore given by
\begin{equation}
\delta_{\textsc{qi}}=\sin\theta.
\end{equation}
On the other hand, in Appendix~\ref{Appendix: results in terms of the angle} we show that
\begin{equation}
S(\hat\rho)=H\bigg( \frac{1+\cos\theta}{2} \bigg),
\end{equation}
where, as in Secs.~\ref{Section: Peres' demon} and~\ref{Section: Modified cycle and bound}, $\hat\rho$ is given by Eq.~\eqref{eq: rho}. From Eq.~\eqref{eq: thermodynamic bound},
\begin{equation}\label{eq: thermo bound explicit}
H\bigg( \frac{1+\delta_{th}}{2} \bigg)= 1-H\bigg( \frac{1+\cos\theta}{2} \bigg).
\end{equation}
For each \mbox{$\theta \in [0,\pi/2]$}, this implicit equation can be solved numerically for $\delta_{th}$. In Fig.~\ref{fig: both bounds}, both bounds $\delta_{\textsc{qi}}$ and $\delta_{th}$ are represented as functions of $\cos\theta$.
\begin{figure}
\caption{Optimal performance for quantum state discrimination achievable within the constraints imposed by thermodynamics (orange) and quantum information theory (blue).}
\label{fig: both bounds}
\end{figure}
\begin{figure}
\caption{Relative comparison between the thermodynamic bound and the Holevo-Helstrom bound, \mbox{$(\delta_{th}-\delta_{\textsc{qi}})/\delta_{th}$}, as a function of $\cos\theta$.}
\label{fig: relative comparison}
\end{figure}
As expected, the thermodynamic bound is looser than the quantum information one. The relative difference $(\delta_{th}-\delta_{\textsc{qi}})/\delta_{th}$ plotted in Fig.~\ref{fig: relative comparison} reveals that when the two states considered are almost orthogonal, the thermodynamic bound is close to the (optimal) bound given by the Holevo-Helstrom theorem. As the two states get closer to each other, the thermodynamic bound stops being a good first estimation of the quantum information one, although it does show the same behaviour $\delta_{th} \to 0$ as $\cos\theta \to 1$.
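For completeness, the following self-contained Python sketch (illustrative) reproduces the trend shown in Figs.~\ref{fig: both bounds} and~\ref{fig: relative comparison} by tabulating $\delta_{\textsc{qi}}=\sin\theta$, $\delta_{th}$, and their relative difference over a grid of overlaps.
\begin{verbatim}
# Tabulate delta_QI = sin(theta) against the thermodynamic bound delta_th.
import numpy as np
from scipy.optimize import brentq

def H(p):
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def delta_th(cos_theta):
    target = 1 - H((1 + cos_theta) / 2)      # right-hand side of the implicit equation
    return brentq(lambda d: H((1 + d) / 2) - target, 0.0, 1.0)

for cos_theta in np.linspace(0.05, 0.95, 7):
    d_qi = np.sin(np.arccos(cos_theta))      # Holevo-Helstrom bound
    d_th = delta_th(cos_theta)               # thermodynamic bound
    print(f"cos(theta)={cos_theta:.2f}  delta_QI={d_qi:.3f}  "
          f"delta_th={d_th:.3f}  rel.diff={(d_th - d_qi) / d_th:.3f}")
\end{verbatim}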
The looseness of the thermodynamic bound is revealing how far one could go if we were out of the regime of validity of quantum mechanics---since the only aspect of quantum mechanics that we used in order to derive the thermodynamic bound was its linearity. Consider, for instance, the operation performed by the demons in the measurement process. This operation is something that we assume the demons, with their supposed abilities to discriminate between $\ket{\psi_1}$ and $\ket{\psi_2}$, can perform. However, in general this operation might not be a quantum channel. As a matter of fact, whenever the assumed $\delta$ is above the quantum information limit $\delta_\textsc{qi}$ but below the thermodynamic bound $\delta_{th}$, this operation is indeed \textit{not a quantum channel}. As far as we know, it is an impossible operation, but it could be a possible one in some kind of beyond-quantum theory. All we are saying here is that it is forbidden by quantum mechanics, but not by thermodynamics alone.
The slackness of $\delta_{th}$ with respect to $\delta_\textsc{qi}$ also constitutes yet one more indicator that the von Neumann entropy is in general not appropriate for representing thermodynamic entropy~\cite{Strasberg2019,Safranek2019,Strasberg2021,Alipour2022,Adam2022}. In this regard, even if we consider different cycles and obtain different thermodynamic bounds, we would not anticipate these to be more restrictive than the quantum information bound $\delta_\textsc{qi}$. This is so because thermodynamic entropy is always coarser than the von Neumann entropy~\cite{Strasberg2021}, and therefore no process forbidden by thermodynamics should be expected to be allowed by quantum mechanics.
Finally, it is worth comparing the thermodynamic bound with the one we can obtain from the limit on the accessible information imposed by Holevo's bound. Specifically, suppose we only took into account the information accessed by the measurement performed by the demons, before the resetting is performed. This quantity, for a single random particle of the gas, is represented by the mutual information
\begin{equation}\label{Eq: mutual info gas memory}
I(\text{gas}:\text{memory})=H_\text{gas} + H_\text{memory} - H_{\text{joint}},
\end{equation}
where $H_{\text{gas}}$, $H_{\text{memory}}$, and $H_{\text{joint}}$ are the Shannon entropies of the random variables described by the state of the gas particle ($\ket{\psi_1}$ or $\ket{\psi_2}$), the state of the memory (1 or 2, depending on the measurement outcome), and their joint state (formed by the state of the gas and the state measured by the demons). Since exactly half of the particles of the gas are in each quantum state, the probability distribution that describes the state of the gas particles is given by $p_{\text{gas}}(1) = p_{\text{gas}}(2) = 1/2$, and hence
\begin{equation}
H_{\text{gas}} = H\bigg( \frac{1}{2} \bigg) = 1.
\end{equation}
On the other hand, the probability that a memory bit registers a 1 is given by the probability that a demon measures that a random particle of the gas is in state $\ket{\psi_1}$. Since the accuracy of the demons at guessing the state of the particles is given in Eq.~\eqref{eq: prob. success} by $(1+\delta)/2$, then, by the law of total probability,
\begin{equation}
p_{\text{memory}}(1) = \frac{1+\delta}{2}\, p_{\text{gas}}(1) + \frac{1-\delta}{2} \, p_{\text{gas}}(2) = \frac{1}{2},
\end{equation}
and thus $p_{\text{memory}}(2)=1/2$ as well. Therefore,
\begin{equation}
H_{\text{memory}} = H\bigg( \frac{1}{2} \bigg) = 1.
\end{equation}
Finally, the joint distribution of the quantum state of a random gas particle and the outcome of its measurement by the demons is given by
\begin{align}
p_{\text{joint}}(j,k) & = \text{Prob}\big(\ket{\psi_j},k\big) \\
&= \frac{1+\delta}{4} \, \delta_{jk} + \frac{1-\delta}{4} \, (1 - \delta_{jk}), \nonumber
\end{align}
for $j,k \in \{1,2\}$, where $\delta_{jk}$ is the Kronecker delta. Thus,
\begin{align}
H_{\text{joint}} & = - \sum_{j,k=1}^{2} p_{\text{joint}}(j,k) \,\log p_{\text{joint}}(j,k) \nonumber \\
& = - \frac{1+\delta}{2} \,\log \frac{1+\delta}{4} - \frac{1-\delta}{2} \,\log \frac{1-\delta}{4} \nonumber \\
& = 1 - \frac{1+\delta}{2}\,\log\frac{1+\delta}{2} - \frac{1-\delta}{2}\,\log\frac{1-\delta}{2} \nonumber \\
& = 1 + H \bigg( \frac{1+\delta}{2} \bigg).
\end{align}
From Eq.~\eqref{Eq: mutual info gas memory}, we conclude that
\begin{equation}
I(\text{gas}:\text{memory}) = 1 - H \bigg( \frac{1+\delta}{2} \bigg).
\end{equation}
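As a sanity check (illustrative only), the following Python sketch computes $I(\text{gas}:\text{memory})$ directly from the joint distribution and verifies that it equals $1-H[(1+\delta)/2]$ for an arbitrary sample value of $\delta$.
\begin{verbatim}
# Mutual information between the gas state and the demons' memory.
import numpy as np

def shannon(probs):
    probs = np.asarray(probs, dtype=float)
    probs = probs[probs > 0]
    return -np.sum(probs * np.log2(probs))

delta = 0.35                                      # illustrative discrimination advantage
p_joint = np.array([[(1 + delta) / 4, (1 - delta) / 4],
                    [(1 - delta) / 4, (1 + delta) / 4]])  # rows: gas state, cols: outcome

H_gas = shannon(p_joint.sum(axis=1))              # marginal over the memory
H_memory = shannon(p_joint.sum(axis=0))           # marginal over the gas
H_joint = shannon(p_joint.ravel())

I = H_gas + H_memory - H_joint
assert np.isclose(I, 1 - shannon([(1 + delta) / 2, (1 - delta) / 2]))
print(I)
\end{verbatim}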
Then, Holevo's theorem imposes a bound on accessible information that yields
\begin{equation}
1-H\bigg(\frac{1+\delta}{2}\bigg) \leq S(\hat\rho),
\end{equation}
which naturally leads to another quantum information bound, $\delta_{\text{Hol}}$, satisfying
\begin{equation}\label{eq: Holevo bound}
H\bigg(\frac{1+\delta_\text{Hol}}{2}\bigg) = 1 - H\bigg(\frac{1+\cos\theta}{2}\bigg).
\end{equation}
However, this implicit equation is exactly the same as Eq.~\eqref{eq: thermo bound explicit}, which defines the thermodynamic bound, and therefore \mbox{$\delta_{\text{Hol}}=\delta_{th}$}. This is consistent with the results of M. Plenio~\cite{Plenio1999}, who obtained Holevo's bound from Landauer's principle. It also suggests that the thermodynamic bound on accessible information obtained in~\cite{Maruyama2005} could be improved to reach Holevo's bound if one incorporates a memory resetting step in the cycle. Indeed, both the present work and~\cite{Plenio1999} suggest that Holevo's theorem can be derived from strictly thermodynamic arguments, whereas the Holevo--Helstrom theorem cannot.
\section{Conclusion}\label{Section: Conclusion}
We have shown that thermodynamics sets a constraint on quantum state discrimination. Specifically, we established that enforcing the fulfillment of the second law of thermodynamics sets an upper limit to the accuracy with which two different pure quantum states can be distinguished. Here, we considered a specific cycle for an ideal gas whose particles have an internal quantum degree of freedom that is dynamically decoupled from their positions and momenta. We introduced two walls operated by a couple of \textit{demons} that are responsible for distinguishing the gas particles being in one state or another with some efficiency $(1+\delta)/2$. The key detail is that the work involved in some steps of the cycle depends on this efficiency. That dependency allows us to prove that whenever $\delta$ exceeds a particular \textit{thermodynamic bound}, energy can be drained from the heat bath and be completely converted into a positive amount of extracted work, in violation of the second law. This thermodynamic bound coincides with the one we can deduce from Holevo's bound on accessible information, but was found to be always looser than the one given by the Holevo-Helstrom theorem that restricts quantum state discrimination within quantum information theory.
This result has the potential to give hints on the relationship between von Neumann entropy and thermodynamic entropy~\cite{Strasberg2019,Safranek2019,Strasberg2021,Alipour2022,Adam2022}, and more specifically on how to define the latter in full generality when dealing with quantum systems. The arguments used here should also generalize to more than two states, and from pure to mixed states in general, which may give first estimations for quantum state discrimination problems that otherwise generally require solving a semidefinite program problem~\cite{Yuen1975,Bae2015,Watrous2018}.
The way the thermodynamic bound is obtained is agnostic to the specific details of the quantum formalism. The feasibility of the proposed cycle only relies on linearity and the perfect discrimination of orthogonal states, and these are therefore the only assumptions we need to make about the underlying theory. Notice also that the demons employed in the cycle are not Maxwell's demons: they have a (shared) memory, we deal with its erasure during the cycle, and they do not break any thermodynamic law. The reason why it is convenient to use this artifact is that we did not want to be specific about the physical mechanism involved in the discrimination of the states, and the so-called demons represent a suitable abstraction that allows us to focus strictly on the flow of information of the measurement-resetting steps of the cycle. The generality of the approach means that the thermodynamic bound is universal: as long as the theory governing the internal degree of freedom is linear with respect to the quantum states, it should be satisfied, even if we were considering hidden variables, or in general any formalism beyond quantum mechanics that might allow for improvements on the Holevo-Helstrom bound.
\section{Even mixture of non-orthogonal states}\label{Appendix: Even mixture of non-orthogonal states}
In this Appendix, we show that as long as the states are non-orthogonal, neither of the eigenvalues of $\hat\rho$, which we denoted $c$ and $1-c$ in Eq.~\eqref{eq: rho diagonalization}, is equal to $1/2$. Equivalently, if $c$ were $1/2$, then $\ket{\psi_1}$ and $\ket{\psi_2}$ would be orthogonal. Indeed, assume $c=1/2$; then we would have
\begin{align}
\bra{\psi_1} \hat\rho \ket{\psi_1}&=\frac{1}{2}\big(|\!\braket{\psi_1}{\psi_1}\!|^2+|\!\braket{\psi_1}{\psi_2}\!|^2\big)=\frac{1}{2}\big(|\!\braket{\psi_1}{\phi_1}\!|^2+|\!\braket{\psi_1}{\phi_2}\!|^2\big).
\end{align}
However, since $\ket{\phi_1}$ and $\ket{\phi_2}$ form an orthonormal basis of the subspace of the Hilbert space where $\hat\rho$ has support, and since $\ket{\psi_1}$ is a unit vector that necessarily lives in this subspace,
\begin{equation}
|\!\braket{\psi_1}{\psi_1}\!|^2=|\!\braket{\psi_1}{\phi_1}\!|^2+|\!\braket{\psi_1}{\phi_2}\!|^2=1,
\end{equation}
which implies that
\begin{equation}
\!\braket{\psi_1}{\psi_2}\!=0,
\end{equation}
as claimed.
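The claim can also be checked numerically. The following Python sketch (with an arbitrary pair of states) verifies that the even mixture of two non-orthogonal states has no eigenvalue equal to $1/2$, while an orthogonal pair does.
\begin{verbatim}
# Eigenvalues of the even mixture for non-orthogonal versus orthogonal pairs.
import numpy as np

theta = np.pi / 5                                 # illustrative angle, cos(theta) != 0
psi1 = np.array([1.0, 0.0])
psi2 = np.array([np.cos(theta), np.sin(theta)])   # <psi1|psi2> = cos(theta) > 0

rho = 0.5 * np.outer(psi1, psi1) + 0.5 * np.outer(psi2, psi2)
assert not np.any(np.isclose(np.linalg.eigvalsh(rho), 0.5))   # c != 1/2

psi2_orth = np.array([0.0, 1.0])                  # orthogonal case for comparison
rho_orth = 0.5 * np.outer(psi1, psi1) + 0.5 * np.outer(psi2_orth, psi2_orth)
assert np.allclose(np.linalg.eigvalsh(rho_orth), [0.5, 0.5])  # c = 1/2 only here
print(np.linalg.eigvalsh(rho))
\end{verbatim}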
\section{Results in terms of the angle $\theta$}\label{Appendix: results in terms of the angle}
In this Appendix, for the sake of completeness, we show the derivation of two simple results used in Sec.~\ref{Section: Discussion} to compare the thermodynamic bound with the quantum information ones.
Let $\hat\rho_1=\ket{\psi_1}\!\!\bra{\psi_1}$ and $\hat\rho_{2}=\ket{\psi_2}\!\!\bra{\psi_2}$ be two pure states. We can always write
\begin{equation}
\braket{\psi_1}{\psi_2}=e^{\mathrm{i}\phi}\cos{\theta},
\end{equation}
for some $\phi\in[0,2\pi)$ and $\theta \in [0,\pi/2]$. Consider first the case in which $\theta \neq 0$. Then,
\begin{equation}
\ket{\psi_2}=e^{\mathrm{i}\phi} \cos\theta \ket{\psi_1} + \sin\theta \ket{\varphi},
\end{equation}
where
\begin{equation}
\ket{\varphi}=\frac{1}{\sin\theta} (\ket{\psi_2}-\braket{\psi_1}{\psi_2}\ket{\psi_1})
\end{equation}
is unit norm and orthogonal to $\ket{\psi_1}$. Therefore, we can write
\begin{align}
\hat\rho_2&=\cos^2\theta \ket{\psi_1}\!\!\bra{\psi_1} + e^{\mathrm{i}\phi} \cos\theta \sin\theta \ket{\psi_1}\!\!\bra{\varphi} + e^{-\mathrm{i}\phi} \cos\theta \sin\theta \ket{\varphi}\!\!\bra{\psi_1} + \sin^2\theta \ket{\varphi}\!\!\bra{\varphi}.
\end{align}
Thus,
\begin{align}
\hat\rho_1-\hat\rho_2&=(1-\cos^2\theta) \ket{\psi_1}\!\!\bra{\psi_1} - e^{\mathrm{i}\phi} \cos\theta \sin\theta \ket{\psi_1}\!\!\bra{\varphi} - e^{-\mathrm{i}\phi} \cos\theta \sin\theta \ket{\varphi}\!\!\bra{\psi_1} - \sin^2\theta \ket{\varphi}\!\!\bra{\varphi},
\end{align}
and since $\ket{\psi_1}$ and $\ket{\varphi}$ are orthogonal, the eigenvalues of this operator are just the roots of the characteristic polynomial
\begin{align}
p_{-}(\lambda)&=\left| \begin{array}{cc}
1-\cos^2\theta-\lambda & -e^{\mathrm{i}\phi}\cos\theta\sin\theta \\
-e^{-\mathrm{i}\phi}\cos\theta\sin\theta & -\sin^2\theta-\lambda \\
\end{array} \right| =\lambda^2-\sin^2\theta,
\end{align}
which are
\begin{equation}
\lambda_{-}^{(\pm)}=\pm\sin\theta.
\end{equation}
Thus,
\begin{equation}\label{Eq: trace norm difference}
|\!|\hat\rho_1-\hat\rho_2|\!|_1=|\lambda_{-}^{(+)}|+|\lambda_{-}^{(-)}|=2\sin\theta.
\end{equation}
Similarly,
\begin{align}
\hat\rho&=\frac{1}{2}(\hat\rho_1+\hat\rho_2) =\frac{1+\cos^2\theta}{2}\ket{\psi_1}\!\!\bra{\psi_1} + \frac{e^{\mathrm{i}\phi}}{2}\cos\theta\sin\theta\ket{\psi_1}\!\!\bra{\varphi} + \frac{e^{-\mathrm{i}\phi}}{2}\cos\theta\sin\theta\ket{\varphi}\!\!\bra{\psi_1}+\frac{\sin^2\theta}{2}\ket{\varphi}\!\!\bra{\varphi}.
\end{align}
The eigenvalues of $\hat\rho$ are then the roots of the characteristic polynomial
\begin{align}
p_{+}(\lambda)&=\left| \begin{array}{cc}
\frac{1+\cos^2\theta}{2}-\lambda & \frac{e^{\mathrm{i}\phi}}{2}\cos\theta\sin\theta \\
\frac{e^{-\mathrm{i}\phi}}{2}\cos\theta\sin\theta & \frac{\sin^2\theta}{2}-\lambda \\
\end{array} \right| =\lambda^2-\lambda+\frac{\sin^2\theta}{4},
\end{align}
namely
\begin{equation}
\lambda_{+}^{(\pm)}=\frac{1\pm\cos\theta}{2}.
\end{equation}
We conclude that
\begin{align}\label{Eq: entropy rho}
S(\hat\rho)&=-\lambda_{+}^{(+)}\log\lambda_{+}^{(+)}-\lambda_{+}^{(-)}\log\lambda_{+}^{(-)}=H\bigg( \frac{1+\cos\theta}{2}\bigg).
\end{align}
The case $\theta=0$ is dealt with easily, since in that case $\hat{\rho}_1-\hat{\rho}_2=0$ and thus
\begin{equation}
|\!|\hat\rho_1-\hat\rho_2|\!|_1 = 0,
\end{equation}
which coincides with the application of Eq.~\eqref{Eq: trace norm difference} when $\theta=0$. Similarly, in this case $\hat{\rho}=\hat{\rho}_1=\hat{\rho}_2$ is a pure state, which implies that
\begin{equation}
S(\hat{\rho})=0,
\end{equation}
in agreement with Eq.~\eqref{Eq: entropy rho}, since $H(1)=0$.
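Both identities can be spot-checked numerically. The following Python sketch (with arbitrary illustrative values of $\theta$ and $\phi$) constructs $\hat\rho_1$, $\hat\rho_2$ and $\hat\rho$ explicitly and verifies Eqs.~\eqref{Eq: trace norm difference} and~\eqref{Eq: entropy rho}.
\begin{verbatim}
# Spot-check ||rho1 - rho2||_1 = 2 sin(theta) and S(rho) = H((1+cos(theta))/2).
import numpy as np

theta, phi = 0.9, 1.7                             # illustrative parameters
psi1 = np.array([1.0, 0.0], dtype=complex)
psi2 = np.array([np.exp(1j * phi) * np.cos(theta), np.sin(theta)])

rho1 = np.outer(psi1, psi1.conj())
rho2 = np.outer(psi2, psi2.conj())
rho = 0.5 * (rho1 + rho2)

trace_norm = np.sum(np.abs(np.linalg.eigvalsh(rho1 - rho2)))
eigs = np.linalg.eigvalsh(rho)
S = -np.sum(eigs * np.log2(eigs))
H = lambda p: -p * np.log2(p) - (1 - p) * np.log2(1 - p)

assert np.isclose(trace_norm, 2 * np.sin(theta))
assert np.isclose(S, H((1 + np.cos(theta)) / 2))
print(trace_norm, S)
\end{verbatim}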
\twocolumngrid
\end{document}
\begin{document}
\title[On the topology of Symplectic Calabi--Yau $4$--manifolds]{On the topology of Symplectic Calabi--Yau $4$--manifolds}
\author{Stefan Friedl}
\address{Mathematisches Institut\\ Universit\"at zu K\"oln\\ Germany}
\email{[email protected]}
\author{Stefano Vidussi}
\address{Department of Mathematics, University of California,
Riverside, CA 92521, USA} \email{[email protected]} \thanks{S. Vidussi was partially supported by NSF grant
\#0906281.}
\begin{abstract}
Let $M$ be a $4$-manifold with residually finite fundamental group $G$ having $b_1(G) > 0$. Assume that $M$ carries a symplectic structure with trivial canonical class $K = 0 \in H^2(M)$. Using a theorem of Bauer and Li, together with some classical results in $4$--manifold topology, we show that for a large class of groups $M$ is determined up to homotopy and, in favorable circumstances, up to homeomorphism by its fundamental group. This is analogous to what was proven by Morgan--Szab\'o in the case of $b_1 = 0$ and provides further evidence for the conjectural classification of symplectic $4$--manifolds with $K = 0$. As a byproduct, we obtain a result of some independent interest, namely that the fundamental group of a surface bundle over a surface is large, except in the obvious cases.
\end{abstract}
\maketitle
\vspace{-0.55cm}
\section{Introduction}
A classification, even conjectural, of symplectic manifolds of (real) dimension $4$ can be considered at the moment out of reach. In this realm we must currently content ourselves with tackling this problem for some limited class of manifolds where the problem becomes more tractable. An example of this, which mimics the approach common in complex geometry, is to study manifolds of a given Kodaira dimension (see \cite{Li06a}). This approach has allowed a complete classification, up to diffeomorphism, in the case of Kodaira dimension $\kappa = -\infty$.
The next step is to tackle the case of $\kappa = 0$, where some encouraging results, described in part below, are already available. For the sake of presentation, here we will be concerned with the case of trivial canonical class $K = 0$, the difference being almost immaterial, see \cite[Theorem 2.4 and Proposition 6.3]{Li06a}.
Examples of symplectic manifolds with $K = 0$ are familiar, but quite exceptional: in particular, the only known example with trivial fundamental group is the $K3$ surface. Motivated by this fact, Morgan and Szab\'o proved in \cite{MS97} that a simply connected symplectic $4$--manifold with $K = 0$ is homotopy equivalent (hence homeomorphic, by Freedman's work) to the $K3$ surface. In fact, as remarked in \cite[Corollary 1.4]{Bau08},
their result extends to all manifolds with $b_1 = 0$ as long as the fundamental group has nontrivial finite quotients. From the vantage point of this note, it is convenient to rephrase the main result of \cite{MS97} in terms of fundamental groups.
We begin by introducing the following notation.
\begin{definition} A closed $4$--manifold $M$ is a \textit{Symplectic Calabi--Yau} (SCY for short) manifold if it admits a symplectic structure with $K = 0 \in H^2(M;\Z)$.
A finitely presented group is an \emph{SCY group} if it is the fundamental group of an SCY $4$--manifold. \end{definition}
(Henceforth all groups will be implicitly assumed to be finitely presented.) We will state a weaker version of the main result of \cite{MS97}, restricting ourselves to residually finite groups, more manageable for our purposes.
\begin{theorem} \textbf{\emph{(Morgan--Szab\'o)}} \label{thm:morsza} Let $G$ be a residually finite SCY group with $b_1(G) = 0$. Then the following holds true:
\begin{enumerate}
\item $G$ is the trivial group;
\item the corresponding SCY manifolds are homotopy equivalent, hence unique up to homeomorphism.
\end{enumerate} \end{theorem}
At this point we can ask if results similar to those of Theorem \ref{thm:morsza} hold for groups with $b_1(G) > 0$. The geography of known SCY manifolds with positive first Betti number, as we discuss in Section \ref{sec:examples}, is fairly limited and consists exclusively of infrasolvmanifolds, all (finitely covered by) torus bundles over a torus. Donaldson \cite{Do08} and Li \cite{Li06a} have suggested the possibility that this list is complete.
This suggestion is supported by work of Bauer and Li in \cite{Bau08,Li06b}:
\begin{theorem} \textbf{\emph{(Bauer, Li)}} \label{thm:bauerli} Let $G$ be a SCY group with $b_1(G) > 0$. Then the following holds true:
\begin{enumerate}
\item $2 \leq b_{1}(G) \leq 4$;
\item the corresponding SCY manifolds satisfy $\chi(M) = \sigma(M) = 0$.
\end{enumerate} \end{theorem}
If $M$ is a SCY manifold, then so are all of its finite covers. The theorem above therefore entails that $2 \leq b_1(G) \leq vb_1(G) \leq 4$, where $vb_{1}(G) = \mbox{sup}\{ b_{1}(G_{i}) | G_{i} \leq_{f.i.} G\}$.
The purpose of this note is to discuss how to bridge the gap between the statement of Theorem \ref{thm:bauerli} and that of Theorem \ref{thm:morsza}, i.e. to attempt to give a topological classification for $b_1 > 0$ that mimics parts (1) and (2) of Theorem \ref{thm:morsza}.
Regarding part (1), our work will be mostly devoted to applying Theorem \ref{thm:bauerli} to compute certain group--theoretic invariants of SCY groups, and to show that they are consistent with those of infrasolvmanifold groups. Its main merit
is to show that Theorem \ref{thm:bauerli} constrains SCY groups quite effectively, at least within some interesting classes of groups. In particular we will show, as a consequence of the recent work of Agol and Wise \cite{Ag12,Wi12}, the following alternative, which is possibly of independent interest:
\begin{theorem} Let $G$ be the fundamental group of a surface bundle over a surface $\Sigma_{h} \hookrightarrow M \to \Sigma_{g}$; then $G$ is either large or $\mbox{max}\,(g,h) \leq 1$. \end{theorem}
(Recall that a group $G$ is large if a finite index subgroup surjects onto a nonabelian free group. In such case, $vb_1(G) = \infty$.\footnote{
While preparing the final version of this paper we learned that, independently, R. \.{I}. Baykur (see \cite{Ba12}) and T. J. Li--Y. Ni (see \cite{LNi12}) obtained, under the same assumptions, similar conclusions on the virtual Betti number.})
While it appears very unlikely that the constraints of Theorem \ref{thm:bauerli} characterize SCY groups, we will show that for the $3$--dimensional analog (namely determining, among all $3$--manifolds, the class of those manifolds $N$ that fiber over $S^1$ with a fibration with Euler class $e = 0 \in H^2(N;\Z)$) the condition $1 \leq b_1(G) \leq vb_1(G) \leq 3$ is ``almost" sufficient to characterize the class.
(The ``almost" is due to the presence of $S^1 \theta} \def\rep{\mathcal{R}} \def\ord{\mbox{ord}} \def\lra{\Leftrightarrowimes S^2$.) This is a rather straightforward consequence of highly nontrivial facts on $3$--manifold groups (culminating in \cite{Ag12,Wi12}).
Our strongest
results are about part (2) and show, for a large class of SCY groups, the uniqueness of the homotopy type for the corresponding manifold:
\begin{theorem} \label{thm:main} Let $G$ be a residually finite SCY group with $b_1(G) > 0$. Assume $H^2(G;\Z[G]) = 0$. Then the corresponding SCY manifolds are homotopy equivalent Eilenberg--Maclane spaces $K(G,1)$. \end{theorem}
In the framework of residually finite fundamental groups, Theorem \ref{thm:main} reduces the study up to homotopy of SCY $4$--manifolds with $b_1 > 0$ to determining SCY groups, under the assumption that $H^2(G;\Z[G]) = 0$.
This assumption holds for large classes of groups, in particular all virtual duality groups of virtual cohomological dimension at least $3$ (see \cite[Proposition~VIII.11.3]{Br94}). Remarkably for us, this is the case for the fundamental groups of all known examples of SCY manifolds, as they are Poincar\'e duality groups of dimension $4$. These groups are virtually poly--$\Z$ hence residually finite. Also, as for virtually poly--$\Z$ groups the Borel conjecture
holds true (see \cite{FJ90}) in dimension 4, we thus have the following:
\begin{corollary} \label{cor:unique} Let $G$ be a SCY group arising as fundamental group of an infrasolvmanifold. Then the corresponding SCY manifolds are unique up to homeomorphism.
\end{corollary}
Combined with Theorem \ref{thm:morsza} the corollary asserts that, for all known examples, the fundamental group determines in fact the homeomorphism type of SCY $4$--manifolds.
We want to stress that the results presented above sit at the intersection of symplectic topology and $4$--manifold topology. Namely, they provide constraints on the type of $M$ that emerge only in this dimension; it is otherwise known by \cite{FP11} that in dimension $6$ for any finitely presented group $G$, there exists a symplectic manifold $Z$ with canonical class $K = 0$ and $\pi_1(Z) = G$ and much latitude on the choice of higher Betti numbers. (In dimension $2$, of course, the only admissible $G$ is $\Z^2$.)
We end with a comment regarding the classification up to diffeomorphism. It is often expected that in dimension $4$ every homeomorphism class of smooth manifolds admits multiple, or even infinitely many, smoothings. This expectation, however, is founded mostly on our understanding of simply--connected $4$--manifolds. Eilenberg--Maclane spaces may well exhibit a different behavior. Recently, Stern asked in \cite{St12} whether (symplectic) $4$--dimensional Eilenberg--Maclane spaces have at most one smooth structure. The same result of uniqueness up to diffeomorphism for all examples covered by Corollary \ref{cor:unique} would follow, as discussed at the end of Section \ref{sec:topclass}, from a conjecture of Baldridge and Kirk \cite[Conjecture 23]{BK07}, which is motivated by a different circle of ideas. It is therefore not out of the question to expect that Corollary \ref{cor:unique} holds in the smooth category as well. Similarly, all known constructions of exotic simply connected $4$--manifolds fail to produce a symplectic manifold with $K = 0$ nondiffeomorphic to the $K3$ surface. We can add, for the case of SCY manifolds, a further ingredient: potentially nondiffeomorphic symplectic smoothings with $K = 0$ would have to have, even for all finite covers,
the same Seiberg--Witten invariants. In fact, Taubes' constraints imply that $K = 0$ is the only basic class, and the same situation occurs for all finite covers. This implies in particular that different smoothings would not be distinguishable with any of the known smooth invariants.
\section{SCY groups} \label{sec:groups}
\subsection{Examples of SCY manifolds with $b_{1} > 0$} \label{sec:examples}
To the best of the authors' knowledge, all known SCY manifolds with $b_{1} > 0$ are \textit{infrasolvmanifolds}. (We refer to \cite{Hi02} for the rather elaborate definition of infrasolvmanifold and a discussion of their basic properties, from which we will draw in what follows.) In fact, these have appeared in the literature in various forms, and we attempt here to describe them in a uniform way:
\\ -- $T^2$--bundles over $T^2$. These manifolds are symplectic, by \cite{Ge92}, and it is well--known
(see e.g. \cite{Li06a}) that for these manifolds $K = 0$. $T^2$--bundles over $T^2$ admit a solvmanifold structure, and in fact they constitute the entire class of solvmanifolds with $2 \leq b_1 \leq 4$ (see e.g. \cite[Proposition 1]{Ha05});
\\ -- $S^1$--bundles over a torus bundle $T^2 \hookrightarrow N^3 \rightarrow S^1$, with the $S^1$--bundle restricting trivially to the fiber $T^2$. These manifolds are symplectic, by \cite{FGM91}, and the fact that $K = 0$ follows e.g. from \cite[Corollary 2.4]{McS96}. As these manifolds can be described as mapping tori of a selfdiffeomorphism of $T^3$, they admit an infrasolvmanifold structure, see \cite[Chapter 8]{Hi02}. Some of these manifolds have the structure of $T^2$--bundles over $T^2$ (hence solvmanifolds) on the nose while other have an abelian cover that does (see \cite{FV11a}) but it is not clear if they all do;
\\ -- Cohomologically symplectic infrasolvmanifolds. (Manifolds $M$ for which there exist a class in $H^2(M;\R)$ of positive square.) These are symplectic, by \cite{Ka11}. As they are covered by a $T^2$--bundle over $T^2$, $K$ must be torsion hence (using \cite[Corollary 2.4]{McS96} if $b_{+} = 1$) they have $K = 0$.
In fact, by the observations above, the class of symplectic infrasolvmanifolds includes all known examples of SCY manifolds. (It is not clear to the authors if there exist, in dimension $4$, symplectic infrasolvmanifolds which are not actual solvmanifolds; if that does not happen, the list would therefore reduce to $T^2$--bundles over $T^2$.)
In \cite{FV11a} the authors showed that, for $4$--manifolds that admit a circle action, the list is complete.
The manifolds above are all Eilenberg--Maclane spaces, hence their fundamental groups are Poincar\'e duality groups of cohomological dimension $4$. As fundamental groups of infrasolvmanifolds, they are virtually poly--$\Z$, hence residually finite. As $b_{1}(M) > 0$, these manifolds fiber over $S^1$ (\cite[Lemma 3.14]{Hi02}) hence by \cite{Lu94a} the
$L^2$--Betti number $b_{1}^{(2)}(M)$ vanishes. Two other integral invariants, the Hausmann--Weinberger invariant $q(G)$ (\cite{HW85}) and the Kotschick invariant $p(G)$ (\cite{Ko93}) defined respectively as
\[ q(G) = \mbox{inf} \{ \chi(X) \}, \ \ p(G) = \mbox{inf} \{ \chi(X) - |\sigma(X)| \}\] (where the infimum is taken among all $4$--manifolds $X$ with $\pi_{1}(X) = G$) vanish as well (\cite[Corollary 3.12.4]{Hi02}).
\subsection{Bauer--Li constraints}
If we assume the point of view that symplectic infrasolvmanifolds should exhaust the class of SCY manifolds, we must look for evidence that SCY groups satisfy conditions known to hold for fundamental groups of that class of manifolds, such as the vanishing of the invariants discussed in the previous section.
In this section we will use the constraints of Theorem \ref{thm:bauerli} to determine these invariants for SCY groups.
First of all, part (1) of Theorem \ref{thm:bauerli}, together with L\"uck's Approximation Theorem for $L^2$--invariants (\cite{Lu94}), yields the following:
\begin{lemma} \label{lemma:lueck} Let $G$ be a residually finite SCY group with $b_{1}(G) > 0$. Then the $L^2$--Betti number $ b_{1}^{(2)}(G) = 0$. \end{lemma}
\begin{proof} As $G$ is residually finite, there exists a nested cofinal sequence of normal finite index subgroups $G_{i} \lhd G$. For this sequence, $\lim_{i} [G:G_i] = \infty$. By \cite{Lu94}, we have \[ b_{1}^{(2)}(G) = \lim_{i} \frac{b_1(G_i)}{[G:G_i]}. \] As $G$ is SCY, so is each finite index subgroup $G_{i} \lhd G$. It then follows from Theorem \ref{thm:bauerli} that all Betti numbers $b_1(G_i)$ are bounded above by $4$, hence $b_{1}^{(2)}(G) = 0$. \end{proof}
This result allows us to determine immediately the two other integral invariants of $G$, $q(G)$ and $p(G)$. Lemma \ref{lemma:lueck} implies, by a fairly standard argument, that these invariants vanish:
\begin{proposition} \label{cor:min}
Let $G$ be a residually finite SCY group with $b_{1}(G) > 0$.
Then $q(G) = p(G) = 0$. \end{proposition}
\begin{proof} For any $4$--manifold $X$ with $\pi_{1}(X) = G$
we have by standard facts of $L^2$-invariants (see e.g. \cite{Lu02}) that
\[ \chi(X) = 2 b_{0}^{(2)}(X) -2 b_{1}^{(2)}(X) + b_{2}^{(2)}(X) = b_{2}^{(2)}(X).\]
Here we used the fact that $G$ is infinite, which implies $b_{0}^{(2)}(X) = b_{0}^{(2)}(G) = 0$, and Lemma \ref{lemma:lueck}, which implies $b_{1}^{(2)}(X) = b_{1}^{(2)}(G) = 0$. Therefore, $\chi(X) \geq 0$ and $\chi(X) = b_{2}^{(2)}(X) = b_{2,+}^{(2)}(X) + b_{2,-}^{(2)}(X) \geq |b_{2,+}^{(2)}(X) - b_{2,-}^{(2)}(X)| = |\sigma(X)|$, from which $q(G) \geq 0$ and $p(G) \geq 0$ follow. On the other hand, by definition of SCY group there exists a SCY manifold $M$ with $\pi_{1}(M) = G$ for which the equalities are attained. \end{proof}
A good amount of wishful thinking may give the expectation that the conditions $2 \leq b_1(G) \leq vb_1(G) \leq 4$, $q(G) = p(G) = 0$ characterize symplectic infrasolvmanifold groups, with the exclusion of the somewhat exceptional $\Z^2$ (the $2$--dimensional symplectic infrasolvmanifold group), see below. (The first condition is certainly not sufficient, as the example of $\Z^3$ shows; here, $q(\Z^3) = 2$, see \cite[Lemma 5.1]{Ko94}.) Most likely, this expectation is unfounded: it is however interesting to compare it with an (imperfect) analog of this problem, namely a characterization of $3$--manifolds that fiber over the circle with trivial Euler class in terms of virtual Betti numbers. We have the following.
\begin{proposition}
Let $G$ be a closed orientable $3$--manifold group
such that $1 \leq b_{1}(G) \leq vb_1(G) \leq 3$;
then either $G$ is the fundamental group of a (unique up to homeomorphism) $3$--manifold $N$ that fibers over the circle with Euler class $e = 0 \in H^2(N;\Z)$ or $G$ is infinite cyclic. \end{proposition}
\begin{proof} First, assume that $G$ is freely indecomposable. By Kneser's Theorem, this is equivalent to $G$ being the fundamental group of a prime $3$--manifold. As a consequence $G$ is either finite, infinite cyclic, or the fundamental group of a $3$--dimensional aspherical manifold $N$. We exclude the first case, for which $b_1(G) = 0$, and proceed with the last one. If $N$ is atoroidal, then by the work of Agol and Wise
(\cite{Wi12} and \cite{Ag12}) we have $vb_1(G) = \infty$, see also \cite{AFW12} for references. Similarly, if $N$ contains an incompressible torus
which is not a virtual fiber of a fibration, then $vb_{1}(G) = \infty$ by \cite{Koj87,Lu88}. Therefore, $N$ must contain a virtual torus fiber, that is promoted to a torus fiber in a cover $\pi : {\tilde N} \to N$. Now, as $b_{1}(G) > 0$, there is a nontrivial class $\phi \in H^{1}(N)$ that lifts to a nontrivial class $\pi^{*} \phi \in H^{1}({\tilde N})$. As ${\tilde N}$ is a torus bundle, by standard facts all nontrivial elements of $H^{1}({\tilde N})$ correspond to fibrations, hence $\pi^{*} \phi$ is a fibered class. But then so must be $\phi$, namely $N$ itself is a $T^2$--bundle over $S^1$. To complete the proof, observe that if $G$ is a nontrivial free product of fundamental groups of prime $3$--manifolds, as all $3$--manifold groups are residually finite, it is easy to see that $vb_1(G) = \infty$ on the nose.
\end{proof}
\subsection{Surface bundle groups}
All known SCY manifolds are finitely covered by a $T^2$--bundle over $T^2$, hence the corresponding SCY groups are virtually surface bundle groups. Virtual surface bundle groups are therefore a good starting point to probe how restrictive the constraints of Theorem \ref{thm:bauerli} are. The results on the numerical invariants described above provide a partial answer to this question. For those groups, L\"uck proved in \cite{Lu94a} the vanishing of $b_{1}^{(2)}(G)$, so Lemma \ref{lemma:lueck} is inconclusive. On the other hand, Kotschick proved in \cite[Theorem 3.8]{Ko94} that aspherical manifolds realize the values of $q(G)$ and $p(G)$; this, and \cite[Theorem 2]{Ko98}, entail that $q(G) \geq p(G) > 0$ whenever $G$ is the fundamental group of a surface bundle over a surface where base and fiber have genus greater than $1$. Proposition \ref{cor:min} applies then to show that such a $G$ is not an SCY group.
We will see that we can complete this result by verifying (with one exception) that, in the class of groups that arise as fundamental groups of a surface bundle over a surface, only the groups discussed in Section \ref{sec:examples} are SCY. This is a consequence of the following result, which has some interest \textit{per se}. We denote by $\Sigma_{g}$ an orientable surface of genus $g$.
\begin{theorem} \label{thm:surfacesurface}
Let $\Sigma_{h} \hookrightarrow M \rightarrow \Sigma_{g}$ be a surface bundle over a surface. Then $\pi_1(M)$ is large if and only if $\mbox{max}(g,h) > 1$.
\end{theorem}
\begin{proof} The ``only if" part of the statement is elementary. When $g > 1$, the ``if" part follows from the surjectivity of the map $\pi_1(M) \to \pi_{1}(\Sigma_{g})$ and the fact that the fundamental group of a surface of genus $g$ admits a surjection onto the free group on $g$ generators. So the interesting case is $\Sigma_{h} \hookrightarrow M \rightarrow T^2$, for $h > 1$. Choose a homology basis $\{s,t\}$ of $H_1(T^2)$. This determines a marking $S^1_{s} \times S^1_{t}$ of the base of the fibration.
Correspondingly, we can consider the fibration $M \rightarrow S^1_{s}$ with $3$--dimensional fiber $F_t$ that is itself a $\Sigma_{h}$--bundle over $S^1_{t}$, the monodromy of the latter fibration being the restriction of the monodromy of $M$ along $S^{1}_{t}$. The fibration $M \rightarrow S^1_{s}$ arises then as mapping torus of an automorphism $\phi : F_t \to F_t$. By the Nielsen--Thurston classification of automorphisms of surfaces, we can now split the problem in three cases, depending on the isotopy class of the monodromy of $F_t$.
\begin{enumerate}
\item The monodromy of $F_{t} \rightarrow S^1_t$ is pseudo--Anosov. In this case $F_t$ is hyperbolic, whence $Aut(F_t)$ is a finite group so that $\phi \in Aut(F_t)$ can be assumed to have finite order $p$. The cover of $M$ determined by \[ \pi_{1}(M) \rightarrow \pi_{1}(S^1_{s}) \rightarrow \Z_p \] is then the product $F_t \times S^{1}_s$. By the work of Agol and Wise
(see again \cite{Ag12,Wi12,AFW12}), $\pi_{1}(F_t)$ is large, and the result follows.
\item The monodromy of $F_{t} \rightarrow S^1_t$ is periodic. Denote by $q$ its period. The cover of $M$ determined by \[ \pi_{1}(M) \rightarrow \pi_{1}(S^1_{t}) \rightarrow \Z_q \] is then the product $F_s \times S^{1}_t$, where $F_s$ is the $\Sigma_{h}$--bundle over $S^1_{s}$, the monodromy of the latter fibration being the restriction of the monodromy of $M$ along $S^{1}_{s}$. Invoking the work of Agol and Wise again, $\pi_{1}(F_s)$ is large and the result follows.
\item The monodromy of $F_{t} \rightarrow S^1_t$ is reducible.
This implies (see e.g. \cite[Theorem~2.15]{CSW11}) that, after suitable isotopy of the monodromy, there is an incompressible JSJ torus $T \subset F_{t}$, intersecting each fiber in a disjoint union of circles preserved by the monodromy.
Recall that $\pi_1(T) \subset \pi_1(F_t)$ is separable by \cite{LN91}, i.e. for any $\gamma \in \pi_{1}(F_{t}) \setminus \pi_1(T)$, there exists an epimorphism to a finite group $\alpha : \pi_{1}(F_{t}) \to Q$ such that $\alpha(\gamma) \notin \alpha(\pi_{1}(T))$. We now need the following virtual extension result:
\begin{claim}
Let $G = \Gamma \rtimes \Z$ where $\Gamma$ is a finitely generated group, and let $\a : \Gamma \rightarrow Q$ be an epimorphism onto a finite group; then there exists an integer $d$ such that $\a$ extends to the normal
subgroup $G_d = \Gamma \rtimes d \Z \lhd G$ with $\a (\gamma,m) = \a(\gamma)$. \end{claim}
This claim is well-known (see e.g. \cite{Bu11}), but for the reader's convenience we give a quick proof.
We denote by $\phi$ the automorphism of $\Gamma$ which corresponds to the semidirect product.
Note that $\hom(\Gamma,Q)$ is a finite set (here we need that $\Gamma$ is finitely generated). It follows that
$\Gamma$ contains only finitely many subgroups such that the quotient is isomorphic to $Q$. Therefore there exists an $r\in \Bbb{N}$ such that $\phi^r(\mbox{Ker}(\a))=\mbox{Ker}(\a)$. In particular $\phi^r$ induces an automorphism of $Q = \Gamma/\mbox{Ker}(\a)$.
Since $\Gamma/\mbox{Ker}(\a)$ is finite there exists an $s\in \Bbb{N}$ such that $\phi^{rs}$ induces the identity on $\Gamma/\mbox{Ker}(\a)$.
We write $d:=rs$.
We will now show that $\mbox{Ker}(\a) \rtimes d\Z$ is a normal subgroup of $\Gamma\rtimes d\Z$,
which clearly implies the claim.
Let $(g,kd)\in \mbox{Ker}(\a)\rtimes d\Z$ and $(h,ld)\in \Gamma\rtimes \Z$. Then
\[ \begin{array}{rcl} (h,ld)^{-1}(g,kd)(h,ld)&=&(\phi^{-ld}(h^{-1}),-ld)(g,kd)(h,ld)\\
&=&(\phi^{-ld}(h^{-1}),-ld)(g\phi^{kd}(h),kd+ld)\\
&=&(\phi^{-ld}(h^{-1})\phi^{-ld}(g)\phi^{kd-ld}(h),kd)\\
&=&(\phi^{-ld}(h^{-1}g\phi^{kd}(h)),kd).\end{array} \]
Recall that $\phi^{kd}(h)h^{-1}$ lies in $\mbox{Ker}(\a)$; it thus follows that $h^{-1}g\phi^{kd}(h)$ lies in $\mbox{Ker}(\a)$.
Furthermore $\phi^d$ preserves $\mbox{Ker}(\a)$; it thus follows that $\phi^{-ld}(h^{-1}g\phi^{kd}(h))$ also lies in $\mbox{Ker}(\a)$. This concludes the proof of the claim.
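For a toy illustration of the claim (the specific choices of $\Gamma$, $\phi$ and $Q$ below are ours and play no role elsewhere), take $\Gamma = \Z$, $\phi(n) = -n$ and $\a : \Z \rightarrow Q = \Z_3$ the reduction mod $3$. Then $\mbox{Ker}(\a) = 3\Z$ is $\phi$--invariant, so $r = 1$, and $\phi$ induces $x \mapsto -x$ on $\Z_3$, which has order $2$, so $s = 2$ and $d = 2$. Accordingly
\[ \a(\gamma, 2m) := \a(\gamma) \]
is a homomorphism on $\Z \rtimes 2\Z$ (as $\phi^{2m} = \mathrm{id}$ there), while the same formula fails to be a homomorphism on all of $\Z \rtimes \Z$.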
The claim applies to the fundamental group of a fibration over $S^1$, and asserts that an epimorphism from the fundamental group of the fiber virtually extends, and does so in such a way that the epimorphism is constant along the orbit of the $\Z$--action on the fundamental group of the fiber.
Using the claim for $G = \pi_1(M) = \pi_{1}(F_t) \rtimes \pi_{1}(S^1_{s})$ it now follows from \cite[Proposition 5]{Koj87}
that, possibly after going to a finite cover of $M$, we can assume that $T \subset F_{t}$ is nonseparating.
Also, as the number of JSJ tori of $F_{t}$ is finite (see e.g. \cite[Theorem 3.4]{Bo02})
we can assume (perhaps up to going to a cover of $M$ and up to an isotopy of $\phi$), that the automorphism $\phi : F_t \to F_t$ restricts to an automorphism of $T$, which implies that there exists a nonseparating $3$--manifold $\Lambda \subset M$ that restricts to $T$ in each fiber of $M \rightarrow S^1_{s}$. We can now proceed along the lines of \cite[Lemma 2.4]{Lub96} to complete the proof: consider the commutative diagram
\[ \xymatrix{ 1 \ar[r] & \pi_1(T) \ar[d] \ar[r] &\pi_1(\Lambda)\ar[d]\ar[r] & \Z \ar[d]^{\cong}\ar[r] & 1 \\ 1 \ar[r] & \pi_1(F_t \setminus \nu T ) \ar[d] \ar[r] &\pi_1(M \setminus \nu \Lambda) \ar[d]\ar[r] & \Z \ar[d]^{\cong} \ar[r] & 1 \\ 1 \ar[r] & \pi_1(F_t) \ar[r] &\pi_1(M)\ar[r] & \Z \ar[r] & 1 ,}\]
where $\nu$ denotes an open tubular neighborhood. Note that the leftmost and rightmost vertical maps are injective. It follows that the middle vertical maps are injective as well.
We now pick an identification $\overline{ \nu \Lambda} = \Lambda\times [-1,1]\subset M$. We write $C:=M\setminus \nu \Lambda$
and given $\tau \in [-1,1]$ we write $\Lambda_\tau:=\Lambda\times \tau$. By the above we have a subgroup inclusion $\pi_1(\Lambda_\tau) \leq \pi_{1}(C) \leq \pi_{1}(M)$ for $\tau = \pm 1$.
We can now view $\pi_1(M)$ as an HNN extension
\[ \pi_1(M)=\ll \pi_1(C),t\,|\, \pi_1(\Lambda_{-1})=t\pi_1(\Lambda_1)t^{-1}\rr.\]
By separability, there exists an epimorphism $\a : \pi_{1}(F_t) \rightarrow Q$ onto a finite group such that $\a(\pi_1(T)) \lneq \a(\pi_{1}(F_{t} \setminus \nu T))$.
This entails, using the claim again, that (perhaps on a cover of $M$) we have an epimorphism $\a : \pi_1(M) \rightarrow Q$ such that $\a(\pi_1(\Lambda_{\pm 1})) \lneq \a(\pi_{1}(C))$.
We now write $A:=\a(\pi_1(C))$ and $B_\pm:=\alpha(\pi_1(\Lambda_{\pm 1})) \lneq A$.
It follows from the properties of an HNN extension that $\a$ induces an epimorphism
\[ \pi_1(M)=\ll \pi_1(C),t\,|\, \pi_1(\Lambda_{-1})=t\pi_1(\Lambda_1)t^{-1}\rr\to
K:=\ll A,t\,|\, B_-=tB_+t^{-1}\rr.\]
Since $A$ is a finite group it follows from \cite[Prop~11~p.~120]{Se80} and from \cite[Exercise~3~p.~123]{Se80} that $K$ admits a finite index subgroup which is a nonabelian free group. In particular, $\pi_1(M)$ contains a finite index subgroup that surjects onto a nonabelian free group, i.e. $\pi_1(M)$ is large.
\end{enumerate}
\end{proof}
We note that the proof of Theorem \ref{thm:surfacesurface} implies, more generally, that $\pi_1(M)$ is large whenever $M$ is a bundle over $S^1$
with fiber a closed irreducible $3$-manifold with non-trivial JSJ decomposition.
Denoting by $\pi_g$ the fundamental group of an orientable surface of genus $g$ we obtain
from Theorem \ref{thm:surfacesurface} immediately the following result:
\begin{proposition} Let $1 \rightarrow \pi_{h} \rightarrow G \rightarrow \pi_{g} \rightarrow 1$ be an extension of a surface group $\pi_g$ by a surface group $\pi_h$. Then, if $G$ is a SCY group, either $g = h = 1$ or $G$ is the trivial group or $\Z^2$.
\end{proposition}
All fundamental groups allowed by this proposition are known to be SCY \textit{except} for $G = \Z^2$. We are not aware of any method to exclude this case. If such a manifold did exist, though, it would be homeomorphic to $S^2 \times T^2$ (see e.g. \cite[Corollary 6.11.1]{Hi02}) but as $K = 0$ it would not be diffeomorphic to it, a very interesting manifold indeed.
\begin{remark} While it is not clear whether the constraints of Theorem \ref{thm:bauerli} are sufficient to characterize SCY groups, we can ask whether other constraints are available. The authors of this note proved in \cite{FV11a} that if a symplectic manifold $M$ with trivial canonical class carries a free circle action, then a stronger version of Theorem \ref{thm:bauerli} holds, namely $vb_1(M;\F_{p}) \leq 4$
for any prime $p$ (the conditions $\chi(M) = \sigma(M) = 0$ are trivially satisfied). The interest of this enhanced result is that it allows one to use, in suitable circumstances, information on the growth of \textit{mod $p$} homology, like the Lubotzky alternative for linear groups (see e.g. ~\cite[Window~9,~Corollary~18]{LS03} and \cite[Theorem 1.3]{La09}), that is stronger than what is available with $\Z$--coefficients. It is perhaps not unreasonable to conjecture therefore that any symplectic $4$--manifold with $K = 0$ satisfies $vb_1(M;\F_{p}) \leq 4$. As a corollary of a result of Lackenby \cite[Theorem 1.10]{La10} we can assert a weak form of this result: Given any epimorphism $\phi : \pi_1(M) \to \Z$ (which exists as $b_1(M) \geq 1$), the cyclic covers $M_k$ with fundamental group $\phi^{-1}(k\Z)$ must satisfy $\limsup_{k} b_{1}(M_{k};\F_p) < \infty$, otherwise $\pi_1(M)$ would be large.
\end{remark}
\section{The Homeomorphism Type of SCY $4$--manifolds} \label{sec:topclass}
In this section we prove Theorem \ref{thm:main}. We start with the following theorem, which paraphrases a result of Eckmann (\cite[Theorem 6]{E97}):
\begin{theorem} \textbf{\emph{(Eckmann)}} \label{thm:eckmann} Let $M$ be a $4$--manifold with $b_{0}^{(2)}(M) = b_{1}^{(2)}(M) = b_{2}^{(2)}(M) = 0$ whose fundamental group $G = \pi_{1}(M)$ satisfies $H^2(G,\Z[G]) = 0$; then either $G$ is virtually infinite cyclic, or $M = K(G,1)$. \end{theorem}
Combining this theorem with Lemma \ref{lemma:lueck} we obtain the following result, equivalent to Theorem \ref{thm:main}:
\begin{theorem} \label{thm:comb} Let $M$ be an SCY $4$--manifold with fundamental group $G$. Assume that $G := \pi_{1}(M)$ is residually finite and satisfies $b_{1}(G) > 0$ and $H^2(G,\Z[G]) = 0$. Then $M = K(G,1)$. \end{theorem}
\begin{proof} As $G = \pi_1(M)$ is infinite, $b_{0}^{(2)}(M)$ vanishes, and so does $b_{1}^{(2)}(M)$, by Lemma \ref{lemma:lueck}. The Euler characteristic $\chi(M) = 2 b_{0}^{(2)}(M) -2 b_{1}^{(2)}(M) + b_{2}^{(2)}(M)$ equals zero by Theorem \ref{thm:bauerli}, hence $b_{2}^{(2)}(M) = 0$ as well. We can then apply Eckmann's Theorem. The case where $G$ is virtually cyclic, i.e. $\Z \lhd_{f.i.} G$, can be excluded: by Theorem \ref{thm:bauerli} we have $b_{1}(G) \geq 2$ and the Betti number is nondecreasing on finite index subgroups. The statement now follows. \end{proof}
Checking the details of the proof of Theorem \ref{thm:eckmann} it is possible to see that the group $H^2(G,\Z[G])$ coincides with $\pi_2(M)$, and that this is the only obstruction to $M$ being aspherical. The condition $H^2(G,\Z[G]) = 0$ is not too restrictive; in particular, it is satisfied by all (virtual) duality groups of (virtual) cohomological dimension at least $3$, see \cite[Sections~10, 11]{Br94}.
By considering the examples of SCY infrasolvmanifolds we have the corollary:
\begin{corollary} Let $G$ be a SCY group arising as fundamental group of an infrasolvmanifold. Then the corresponding symplectic manifolds are unique up to homeomorphism. \end{corollary}
\begin{proof} Let $M$ be any SCY infrasolvmanifold, and denote $G = \pi_{1}(M)$. The manifold $M$ is an Eilenberg--Maclane space of type $K(G,1)$, from which we deduce that $G$ is a Poincar\'e duality group of cohomological dimension $4$ and $H^2(G,\Z[G]) = 0$. Moreover, $G$ is virtually poly--$\Z$ hence residually finite by a classical result due to Hirsch. We can then apply Theorem \ref{thm:main} to show that any SCY $X$ with fundamental group equal to $G$ is homotopy equivalent to $M$. But with virtually poly--$\Z$ groups we can apply the machinery of \cite{FQ90} to deduce that $X$ is actually homeomorphic to $M$, see e.g. \cite[Theorem 2.16]{FJ90}. \end{proof}
Baldridge and Kirk \cite[Conjecture 23]{BK07} formulated the following conjecture:
Let $X$ and $Y$ be 4-manifolds which realize the minimum of the Hausmann--Weinberger invariant of a group $G$.
If $X$ and $Y$ have equivalent intersection forms, then they are in fact diffeomorphic.
We will now argue that this conjecture implies in particular that SCY manifolds with a given group $G$ are unique up to diffeomorphism.
We first show that
for a SCY manifold, the intersection form is determined by the fundamental group $G$. In fact by Theorem \ref{thm:bauerli} the rank is determined by $b_1(G)$ and the signature is always zero (hence the form is indefinite), and as the characteristic element $K$ vanishes, the parity is even.
The argument is thus completed by the fact that a SCY manifold realizes the minimum value $q(G) = 0$.
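To spell this out in the extreme case allowed by Theorem \ref{thm:bauerli} (the numerical example is only meant as an illustration): if $b_{1}(G) = 4$, then $\chi(M) = 0$ forces
\[ b_{2}(M) = 2b_{1}(M) - 2 = 6, \]
and the unique indefinite even unimodular form of rank $6$ and signature $0$ is $3H$, with $H = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$; this is precisely the intersection form of $T^4$. The cases $b_{1}(G) = 2, 3$ give $H$ and $2H$ respectively.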
\end{document}
|
\begin{document}
\title{On local Type~I singularities of the Navier-Stokes equations and Liouville theorems}
\begin{abstract}
We prove that suitable weak solutions of the Navier-Stokes equations exhibit Type~I singularities if and only if there exists a non-trivial mild bounded ancient solution satisfying a Type~I decay condition. The main novelty is in the reverse direction, which is based on the idea of zooming out on a regular solution to generate a singularity.
By similar methods, we prove a Liouville theorem for ancient solutions of the Navier-Stokes equations bounded in $L^3$ along a backward sequence of times.
\end{abstract}
\section{Introduction}
In this paper, we consider potential singularities of the Navier-Stokes equations from the perspective of Liouville theorems. The main idea is to ``zoom in" on the singularity and classify the limiting objects. This approach is highly effective in the regularity theory of minimal surfaces~\cite{degiorgi65}, semilinear heat equations~\cite{gigakohnasymptoticallyselfsimilar}, harmonic maps~\cite{schoenuhlenbeckregularitytheory}, and many other PDEs.
Unlike the above examples, the three-dimensional Navier-Stokes equations have no known critical conserved quantities or monotonicity formulae. Because of this issue, Type~I conditions are typically imposed on the solutions; that is, we often ask that a critical quantity is finite near the singularity. For example, in the famous paper~\cite{escauriazasereginsverak}, Escauriaza, Seregin, and {\v S}ver{\'a}k demonstrated, via Liouville theorems, that $L^\infty_t L^3_x$ solutions do not form singularities. The axisymmetric case is exceptional because $r v_\theta$ satisfies a maximum principle, and in this case, Seregin and {\v S}ver{\'a}k proved that interior Type~I blow-up does not occur~\cite{sereginsverakaxisymmetric}. Liouville theorems were also used by Tsai in~\cite{tsai} and other authors (see~\cite{necasruzsverak,chaewolfasymptoticallyselfsimilar,guevaraphucselfsim}) to exclude self-similar singularities in quite general situations. However, many questions concerning feasible Type~I scenarios, e.g., discretely self-similar blow-up, remain completely open. We refer to~\cite{sereginshilkinliouvillesurvey} for a recent survey of regularity results based on Liouville theorems.
A central object in the Liouville theory is the class of \emph{mild bounded ancient solutions}, which arise naturally as ``blow-up limits" of singular solutions (see~\cite{kochnadirashvili}).
These are defined to be solutions which satisfy the integral equation formulation of the Navier-Stokes equations and are bounded (in fact, smooth) for all backward times. The assumption that the solution is mild simply excludes the ``parasitic solutions" $v=\vec{c}(t)$, $q = -\vec{c} \,'(t) \cdot x$. At a conceptual level, classifying mild bounded ancient solutions serves to determine the possible model solutions on which a Navier-Stokes singularity must be based.
In~\cite{kochnadirashvili}, G. Koch, Seregin, {\v S}ver{\'a}k, and Nadirashvili conjectured that \emph{mild bounded ancient solutions are constant}.
Remarkably, the same authors proved that this is true in two dimensions and in the axisymmetric case without swirl (see Theorems 5.1-5.2 therein). A special case of the conjecture was recently verified by Lei, Ren, and Zhang in~\cite{lei2019ancient} when the solution is axisymmetric and periodic in the $z$-variable (see also~\cite{carrillo2018decay}). If true, the conjecture excludes Type~I singularities and implies that $D$-solutions of the steady Navier-Stokes equations are constant.\footnote{This is in contrast to the focusing semilinear heat equation $\partial_t u - \Delta u = |u|^{p-1} u$, for which there is a non-trivial ground state whenever $p \geq p_c := \frac{n+2}{n-2}$.} If there is a counterexample to the conjecture, it is conceivable that it already occurs in the axisymmetric class.
We are interested in a weak version of the above conjecture obtained by restricting to mild bounded ancient solutions having Type~I decay in backward time. With this modification, we can clarify the relationship between these solutions and Type~I singularities:
\begin{theorem}
\label{thm:localver}
The following are equivalent:
\begin{itemize}
\item There exists a suitable weak solution with a Type~I singular point.
\item There exists a non-trivial mild bounded ancient solution with $\mathbf{I} < \infty$.
\end{itemize}
\end{theorem}
The relevant terminology will be defined below, as there is some subtlety in the formulation of Type~I, see~\eqref{eq:typeinotion}. The quantity $\mathbf{I}$ is defined in~\eqref{eq:idef}. For suitable weak solutions, see Definition~\ref{suitabledef}.
The main novelty of Theorem~\ref{thm:localver} is in the reverse direction. Our idea is to zoom out on an ancient (but regular) solution to generate a singular solution. This is known as the ``blow-down limit" in free boundary problems, and it has not yet been exploited in the Navier-Stokes literature. Our primary tools are known and consist of estimates in Morrey spaces and the persistence of singularities introduced by Rusin and {\v S}ver{\'a}k in~\cite{rusinsver}. In principle, constructing ancient solutions with Type~I decay is a (difficult) route to obtaining Navier-Stokes singularities.
We will use a rather weak notion of Type~I in terms of the rescaled energy.
Let $z = (x,t) \in \mathbb{R}^{3+1}$, $Q(z,r) = B(x,r) \times ]t-r^2,t[$ be a parabolic ball, $Q' = Q(z,r)$, and
\begin{equation}
A(Q') = \esssup_{t-r^2<t'<t} \frac{1}{r} \int_{B(x,r)} |v(x',t')|^2 \, dx',
\end{equation}
\begin{equation}
C(Q') = \frac{1}{r^2} \int_{Q(z,r)} |v|^3 \, dz',
\end{equation}
\begin{equation}
D(Q') = \frac{1}{r^2} \int_{Q(z,r)} \left\lvert q-[q]_{x,r}(t') \right\rvert^{\frac{3}{2}} \, dz',
\end{equation}
\begin{equation}
E(Q') = \frac{1}{r} \int_{Q(z,r)} |\nabla v|^2 \, dz'.
\end{equation}
If $\omega \subset \mathbb{R}^{3+1}$ is open and $(v,q)$ is defined on $\omega$, then
\begin{equation}
\label{eq:idef}
\mathbf{I}(\omega) = \sup_{Q' \subset \omega} \left[ A(Q') + C(Q') + D(Q') + E(Q') \right].
\end{equation}
If $\omega$ is unspecified, we use $\omega = \mathbb{R}^3 \times \mathbb{R}_-$. Together, $v \equiv \text{const.}$ and $\mathbf{I} < \infty$ imply $v \equiv 0$.
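All four quantities (and hence $\mathbf{I}$) are invariant under the natural scaling of the equations, a fact used repeatedly below; as a representative computation (the notation $v_\lambda$, $z_\lambda$ is introduced here only for this remark), if $v_\lambda(x',t') = \lambda v(\lambda x', \lambda^2 t')$ and $z_\lambda = (\lambda x, \lambda^2 t)$, then a change of variables gives
\begin{equation}
C(v_\lambda; Q(z,r)) = \frac{1}{r^2} \int_{Q(z,r)} |\lambda v(\lambda x', \lambda^2 t')|^3 \, dz' = \frac{1}{(\lambda r)^2} \int_{Q(z_\lambda, \lambda r)} |v|^3 \, dz'' = C(v; Q(z_\lambda, \lambda r)),
\end{equation}
and similarly for $A$, $E$, and $D$ (with the pressure rescaled as $q_\lambda(x',t') = \lambda^2 q(\lambda x', \lambda^2 t')$).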
If $v$ is not essentially bounded in any parabolic ball centered at $z$, we say that $z$ is a \emph{singular point}.
Finally, if there exists a parabolic ball $Q'$ centered at the singular point $z$ and
\begin{equation}
\label{eq:typeinotion}
\mathbf{I}(Q') < \infty,
\end{equation}
then we say that $z$ is a \emph{Type I} singularity.
Observe that~\eqref{eq:typeinotion} is adapted to the minimal requirements needed to make sense of the local energy inequality and partial regularity theory. In particular, $\mathbf{I}(Q') \ll 1$ implies regularity. Our notion is also natural because it follows from boundedness of a variety of quantities considered to be Type~I in the literature, e.g.,
\begin{equation}
\label{eq:TypeIinspacetime}
\tag{a}
\sup_{x,t} \left( |x^*-x| + \sqrt{T^*-t} \right) |v(x,t)|,
\end{equation}
\begin{equation}
\label{eq:TypeIweakL3}
\tag{b}
\sup_{t} \norm{v(\cdot,t)}_{L^{3,\infty}},
\end{equation}
\begin{equation}
\label{eq:TypeILp}
\tag{$\text{c}_p$}
\sup_t (T^*-t)^{\frac{1}{2}-\frac{3}{2p}} \norm{v(\cdot,t)}_{L^p},
\end{equation}
\begin{equation}
\label{eq:TypeILinfty}
\tag{$\text{c}_\infty$}
\sup_{t} \sqrt{T^*-t} \norm{v(\cdot,t)}_{L^{\infty}},
\end{equation}
in the class of suitable weak solutions, see Lemma~\ref{lem:weakserrin}. In Theorem~\ref{thm:localvertypei}, we prove a version of Theorem~\ref{thm:localver} in the context of~\eqref{eq:TypeIinspacetime}-\eqref{eq:TypeILp} ($3 < p < \infty$) using Calder{\'o}n-type energy estimates, introduced in~\cite{calderon90}. Historically,~\eqref{eq:TypeILinfty} has been considered important, in part due to its success in the work of Giga and Kohn (see~\cite{gigakohnasymptoticallyselfsimilar}). However, an important distinction is that~\eqref{eq:TypeILinfty} is not well suited to the reverse direction, see Remark~\ref{rmkonpinfty}. Note that boundedness of one of~\eqref{eq:TypeIinspacetime}-\eqref{eq:TypeILinfty} is not known to imply boundedness of the other quantities.\footnote{Moreover, it does not appear to hold for other equations, e.g., the harmonic map heat flow or parabolic-elliptic Keller-Segel system in two dimensions. However, in the context of mild solutions, one may say that~\eqref{eq:TypeILp} for $p_1$ implies~\eqref{eq:TypeILp} for $p_2 \geq p_1$ (in a slightly smaller time interval), and in particular, implies~\eqref{eq:TypeILinfty}. Clearly,~\eqref{eq:TypeIinspacetime} implies~\eqref{eq:TypeIweakL3}. Of course, many more quantities are possible, e.g., space-time Lorentz norms, quantities involving the vorticity, quantities involving Besov spaces (see~\cite{sereginzhou2018}), etc.}
\\
In this paper, we also prove a Liouville theorem for ancient solutions with Type~I decay along a backward sequence of times. The Liouville theorem of Escauriaza, Seregin, and {\v S}ver{\'a}k in~\cite{escauriazasereginsverak} states that an ancient suitable weak solution in $L^\infty_t L^3_x$ vanishing identically at time $t=0$ must be trivial. It is natural to ask whether the condition on vanishing can be removed; is a mild ancient solution in $L^\infty_t L^3_x$ necessarily zero? Yes, since estimates of the form
\begin{equation}
\label{eq:exampleest}
\norm{v}_{L^\infty(Q(R/2))} \leq R^{-1} f \big( \norm{v}_{L^\infty_t L^3_x(Q(R))} \big)
\end{equation}
were considered by Dong and Du in~\cite{dongducritical},
where $f > 0$ is an increasing function. Hence, one may simply allow $R \to \infty$ in~\eqref{eq:exampleest}.\footnote{We thank Hongjie Dong for informing us of this proof. It is possible to prove~\eqref{eq:exampleest} using a compactness argument, persistence of singularities, and the local regularity result for $L^\infty_t L^3_x(Q)$ solutions in~\cite{escauriazasereginsverak}.
A similar Liouville theorem was proven in~\cite{sereginliouvillehalfspacel2infty} for ancient solutions in $L^\infty_t L^2_x(\mathbb{R}^2_+ \times ]-\infty,0[)$ by duality methods.}
On the other hand, the analogous result along a sequence of times is less obvious, and we prove it in the sequel:
\begin{theorem}
\label{thm:liouville}
If $v$ is a mild ancient solution satisfying
\begin{equation}
\sup_{k \in \mathbb{N}} \norm{v(\cdot,t_k)}_{L^3} < \infty
\end{equation}
for a sequence of times $t_k \downarrow -\infty$, then
\begin{equation}
v \equiv 0.
\end{equation}
\end{theorem}
The proof relies essentially on zooming out and the persistence of singularities, as in Theorem~\ref{thm:localver}. In this case, to control the solution, we use the theory of weak $L^{3,\infty}$ solutions developed in~\cite{sereginsverakweaksols,barkersereginsverakstability}, where $L^{3,\infty} = L^3_{\rm weak}$ is the Lorentz space/weak Lebesgue space. We prove a more quantitative version in Theorem~\ref{thm:moregeneralliouville}.
\\
Without Type~I assumptions, it is unclear what the existence of non-constant mild bounded ancient solutions says about the regularity theory. For example, the one-dimensional viscous Burgers equation is easily seen to be regular, but it admits non-constant traveling wave solutions $f(x-ct)$.\footnote{One may obtain other mild bounded ancient solutions of 1d viscous Burgers by solving the backward heat equation using a superposition of solutions $f(x_0) \exp\left[(x-x_0)+t\right]$ and applying the Cole-Hopf transformation.} These solutions are easily upgraded to higher dimensions by writing $u(x,t) = f(x\cdot \vec{n} - ct)$. Regarding Navier-Stokes solutions, as there are no non-constant mild bounded ancient solutions in two dimensions~\cite{kochnadirashvili}, no such ``upgrade" is possible. The analogous results in the half-space remain open.\footnote{The relevant literature includes~\cite{gigahsumaekawaplaner,sereginsverakrescalinghalfspace,barkersereginmbas,sereginliouvillehalfspacel2infty}. Since the writing of this paper, Seregin has shown an analogue of Theorem~\ref{thm:localver} in the half-space~\cite{seregintypeihalfspace}. We remark that the relationship between various formulations of Type~I is less clear in the half-space.}
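Returning for concreteness to the Burgers example above (the parameters $a > 0$ and $c \in \mathbb{R}$ below are arbitrary): one explicit family of such traveling waves is given by the classical viscous shock profiles
\begin{equation}
u(x,t) = f(x - ct), \qquad f(\xi) = c - a \tanh\Big( \frac{a \xi}{2} \Big),
\end{equation}
which are bounded, non-constant, smooth solutions of $\partial_t u + u \partial_x u = \partial_x^2 u$ for all $t \in \mathbb{R}$, hence in particular mild bounded ancient solutions of the one-dimensional equation.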
\section{Preliminaries}
\label{sec:preliminaries}
In this section, we recall some known facts about suitable weak solutions. We refer to~\cite{escauriazasereginsverak,sereginnotes,sereginsverakaxisymmetric,sereginsverakhandbook} for a review of the partial regularity theory; in particular, \cite{sereginsverakaxisymmetric,sereginsverakhandbook} contain many excellent heuristics.
Let $z = (x,t) \in \mathbb{R}^{3+1}$, $r>0$, and $Q' = Q(z,r)$ a parabolic~ball. We also write $Q(r) = Q(0,r)$ and $Q = Q(1)$.
\begin{definition}[Suitable weak solution]
\label{suitabledef}
We say that $(v,q)$ is a \emph{suitable weak solution} in $Q'$ if
\begin{equation}
v \in L^\infty_t L^2_x \cap L^2_t H^1_x(Q') \text{ and } q \in L^{\frac{3}{2}}(Q'),
\end{equation}
$(v,q)$ satisfies the Navier-Stokes equations on $Q'$ in the sense of distributions,
\begin{equation}
\left\lbrace
\begin{aligned}
\partial_t v - \Delta v + v \cdot \nabla v + \nabla q &= 0 \\
\div v &= 0
\end{aligned}
\right.
\end{equation}
and $(v,q)$ satisfies the local energy inequality,
\begin{equation}
\label{localenergyienq}
\int_{B(x,r)} \zeta |v(y,t')|^2 \, dy + 2 \int_{t-r^2}^{t'} \int_{B(x,r)} \zeta |\nabla v|^2 \, dy \, ds \leq \int_{t-r^2}^{t'} \int_{B(x,r)} |v|^2 (\partial_t + \Delta) \zeta + (|v|^2 + 2q) v \cdot \nabla \zeta \, dy \,ds,
\end{equation}
for all non-negative $\zeta \in C^\infty_0(B(x,r) \times ]t-r^2,t])$ and almost every $t' \in ]t-r^2,t]$.\footnote{By weak continuity in time, one may remove the ``almost every" restriction.}
Finally, we say that $v$ is a suitable weak solution in $Q'$ (without reference to the pressure) if there exists $q \in L^{\frac{3}{2}}(Q')$ such that $(v,q)$ is suitable in $Q'$.
\end{definition}
The following lemma is proven in~\cite[Theorem 2.2]{lin}. The proof relies on the local energy inequality~\eqref{localenergyienq}, the embedding $L^\infty_t L^2_x \cap L^2_t H^1_x(Q) \hookrightarrow L^{\frac{10}{3}}(Q)$, and the Aubin-Lions lemma.
\begin{lemma}[Compactness]
\label{lem:compactnessforsuitable}
Let $(v^{(k)},q^{(k)})_{k \in \mathbb{N}}$ be a sequence of suitable weak solutions on $Q$ satisfying
\begin{equation}
\sup_{k \in \mathbb{N}} \norm{v^{(k)}}_{L^3(Q)} + \norm{q^{(k)}}_{L^{\frac{3}{2}}(Q)} < \infty.
\end{equation}
Then there exists a suitable weak solution $(u,p)$ on $Q(R)$ for all $0 < R < 1$ such that
\begin{equation}
\label{eq:convergenceofsuitableweaksols}
v^{(k)} \to u \text{ in } L^{3}_{{\rm loc}}(B \times ]-1,0]), \quad q^{(k)} \rightharpoonup p \text{ in } L^{\frac{3}{2}}_{{\rm loc}}(B \times ]-1,0]),
\end{equation}
along a subsequence.
\end{lemma}
The next proposition is our primary tool. It is contained in Lemma 2.1 and Lemma 2.2 of~\cite{rusinsver}. However, as the statement therein is slightly different, we include a proof for completeness.
\begin{proposition}[Persistence of singularities]
\label{pro:appearanceofsingularity}
Let $(v^{(k)},q^{(k)})_{k \in \mathbb{N}}$ be a sequence of suitable weak solutions on $Q$ satisfying \eqref{eq:convergenceofsuitableweaksols}.
If
\begin{equation}
\limsup_{k \to \infty} \; \norm{v^{(k)}}_{L^\infty(Q(R))} = \infty \text{ for all } 0 < R < 1,
\end{equation}
then
\begin{equation}
u \text{ has a singularity at the space-time origin}.
\end{equation}
\end{proposition}
\begin{proof}
We prove the contrapositive. Suppose that $u \in L^\infty(Q(R))$ for some $0<R<1$. Let $\epsilon > 0$ (to be fixed later). Then there exists $0 < R_0 < R$ (depending also on $\epsilon$) satisfying, for all $0 < r \leq R_0$,
\begin{equation}
\frac{1}{r^2} \int_{Q(r)} |u|^3 \, dx \, dt \leq \epsilon.
\end{equation}
This is because $u \in L^\infty(Q(R))$ is a subcritical assumption.
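Indeed, for $0 < r \leq R$ one has the elementary bound (recorded only to make the choice of $R_0$ explicit)
\begin{equation}
\frac{1}{r^2} \int_{Q(r)} |u|^3 \, dx \, dt \leq \frac{|Q(r)|}{r^2} \norm{u}_{L^\infty(Q(R))}^3 = \frac{4\pi}{3} \, r^3 \, \norm{u}_{L^\infty(Q(R))}^3,
\end{equation}
which tends to zero as $r \to 0$, so any $R_0$ with $\frac{4\pi}{3} R_0^3 \norm{u}_{L^\infty(Q(R))}^3 \leq \epsilon$ works.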
Rescaling, we may set $R_0 = 1$. By the strong convergence in~\eqref{eq:convergenceofsuitableweaksols}, for $k$ sufficiently large (depending on $0 < r \leq 1$),
\begin{equation}
\label{vkest}
\frac{1}{r^2} \int_{Q(r)} |v^{(k)}|^3 \, dx \, dt \leq 2\epsilon.
\end{equation}
We decompose the pressure as $q^{(k)} = \tilde{q}^{(k)} + h^{(k)}$, where
\begin{equation}
\label{qkdef}
\tilde{q}^{(k)} = (-\Delta)^{-1} \div \div (\varphi v^{(k)} \otimes v^{(k)}),
\end{equation}
$\varphi \in C^\infty_0(B)$ ($0 \leq \varphi \leq 1$) satisfies $\varphi \equiv 1$ on $B(3/4)$, and $h^{(k)}(\cdot,t)$ is harmonic in $B(3/4)$.
By~\eqref{qkdef} and Calder{\'o}n-Zygmund estimates,
\begin{equation}
\label{qkest}
\frac{1}{r^2} \int_{Q(r)} |\tilde{q}^{(k)}|^{\frac{3}{2}} \,dx\,dt \leq \frac{C}{r^2} \int_{Q(r)} |v^{(k)}|^3 \, dx \,dt.
\end{equation}
By the triangle inequality and~\eqref{qkest} (with $r=1$),
\begin{equation}
\label{hkunitscaleest}
\int_{Q} |h^{(k)}|^{\frac{3}{2}} \,dx \,dt \leq C \sup_{l \in \mathbb{N}} \left( \int_{Q} |v^{(l)}|^{3} + |q^{(l)}|^{\frac{3}{2}} \, dx \,dt \right) \leq CM.
\end{equation}
($M > 0$ depends on $\epsilon$ through $R_0$.)
By H{\"o}lder's inequality and interior regularity for harmonic functions, whenever $0 < r \leq 1/2$,
\begin{equation}
\label{hkscaleinvarest}
\frac{1}{r^2} \int_{Q(r)} |h^{(k)}|^{\frac{3}{2}} \, dx \,dt \leq C r \int_{-\frac{1}{4}}^{0} \norm{ h^{(k)}(\cdot,t) }_{L^{\infty}(B(1/2))}^{\frac{3}{2}} dt \leq CMr.
\end{equation}
Finally, one may combine~\eqref{vkest},~\eqref{qkest}, and~\eqref{hkscaleinvarest} and fix $\epsilon$ and $r$ sufficiently small to obtain
\begin{equation}
\limsup_{k \to \infty} \frac{1}{r^2} \int_{Q(r)} |v^{(k)}|^3 + |q^{(k)}|^{\frac{3}{2}} \leq \epsilon_{\rm CKN},
\end{equation}
where $\epsilon_{\rm CKN} > 0$ is the constant in the $\epsilon$-regularity criterion. This ensures
\begin{equation}
\limsup_{k \to \infty} \sup_{Q(r/2)} |v^{(k)}| \leq \frac{C_{\rm CKN}}{r},
\end{equation}
as desired.
\end{proof}
Since the forward direction of Theorem~\ref{thm:localver} deals with local solutions, it is useful to locally mimic the situation of the ``first singular time" in the Cauchy problem. The following proposition follows from partial regularity, see \cite[Lemma 3.2]{kukavicarusinzianeJMFM2017} and \cite[Theorem 3]{sereginsverakhandbook}.
\begin{proposition}[Regular cylinder lemma]
\label{pro:goodsingulartime}
If $v$ is a suitable weak solution in $Q$ with singular point at the space-time origin, then there exist $z^* \in B(1/2) \times ]-1/4,0]$ and $0<R<1/2$ satisfying
\begin{equation}
v \in L^{\infty}(Q(z^*,R) \setminus Q(z^*,r)) \text{ for all } 0 < r < R.
\end{equation}
\end{proposition}
It is possible to combine Proposition~\ref{pro:goodsingulartime} and Bogovskii's operator to truncate the solution, see~\cite{neustupapenel1999},~\cite[Remark 12.3]{taolocalizationcompactness}, and~\cite{albrittonbarkerlocalregII}. We will not require this here.
As discussed in the introduction, boundedness of other widely considered critical quantities is known to imply $\mathbf{I}(Q') < \infty$. For example, this is true of the weak Lebesgue spaces:
\begin{lemma}[Weak Serrin implies Type~I]
\label{lem:weakserrin}
If $v$ is a suitable weak solution on $Q$ with
\begin{equation}
v \in L^{q,\infty}_t L^{p,\infty}_x(Q),
\end{equation}
where $3 \leq p \leq \infty$ and $2 \leq q \leq \infty$ satisfy the Ladyzhenskaya-Prodi-Serrin condition
\begin{equation}
\label{eq:Ladyzhenprodi}
\frac{3}{p} + \frac{2}{q} = 1,
\end{equation}
then, for all $Q' = Q(R)$ with $0 < R < 1$,
\begin{equation}
\label{eq:weakserrinimpliedbound}
\mathbf{I}(Q') < \infty.
\end{equation}
\end{lemma}
Notice that having one of~\eqref{eq:TypeIinspacetime}-\eqref{eq:TypeILinfty} bounded is enough to apply Lemma~\ref{lem:weakserrin} (for suitable weak solutions).
It is already known that absolute smallness in the above $L^{q,\infty}_t L^{p,\infty}_x$ spaces (with the exception of the case $q = 2$) implies regularity, see~\cite{kimkozonoweakspaces}.
To prove Lemma~\ref{lem:weakserrin}, we use the critical Morrey-type quantities
\begin{equation}
M^{s,l}(Q') = \frac{1}{r^{\kappa}} \int_{t-r^2}^t \left( \int_{B(x,r)} |v|^s \, dx' \right)^{\frac{l}{s}} \, dt',
\end{equation}
where $\kappa = l (2/l+3/s-1)$, defined for $1 \leq s,l \leq \infty$ (with the obvious modification when $l=\infty$).
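As a quick consistency check of the normalization (the particular choice $(s,l) = (3,3)$ is only an example), $\kappa = 3\,(2/3 + 1 - 1) = 2$ and
\begin{equation}
M^{3,3}(Q') = \frac{1}{r^2} \int_{Q(z,r)} |v|^3 \, dz' = C(Q'),
\end{equation}
so this family of quantities contains the rescaled energy $C$ defined above.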
The next lemma asserts that finiteness of rescaled energies $A,C,E$ (see~\cite{SereginCriticalMorrey2006}) or critical Morrey-type quantities $M^{s,l}$ (see~\cite[Theorem 6]{sereginsverakhandbook} and~\cite{SereginZajaczkowskiPOMI2006}) implies Type~I bounds for suitable weak solutions.
\begin{lemma}[Morrey-type estimates]
\label{lem:Morreytypeestimates}
Suppose $(v,q)$ is a suitable weak solution in $Q$ with
\begin{equation}
\min_{s,l} \left\lbrace \sup_{Q' \subset Q} A(Q'), \sup_{Q' \subset Q} C(Q'), \sup_{Q' \subset Q} E(Q'), \sup_{Q' \subset Q} M^{s,l}(Q') \right\rbrace < \infty,
\end{equation}
where $s>3/2$, $l>1$ are finite and required to satisfy\footnote{
The statement in~\cite{sereginsverakhandbook} also contains the requirement $3/s+2/l-3/2 > \max \{ 2/l,1/2-1/l \}$. However, this requirement can be avoided by decreasing $s$ and/or $l$ using embeddings of Morrey spaces.}
\begin{equation}
\frac{3}{s} + \frac{2}{l} < 2.
\end{equation}
Then, for all $Q' = Q(R)$ with $0 < R < 1$,
\begin{equation}
\label{eq:morreyIbound}
\mathbf{I}(Q') < \infty.
\end{equation}
\end{lemma}
For the above result to hold, it is crucial that $(v,q)$ is already assumed to be suitable, since the proof relies on the local energy inequality. Indeed, the estimate which gives~\eqref{eq:morreyIbound} depends on the background quantities $C(1)$ and $D(1)$.
\begin{proof}[Proof of Lemma~\ref{lem:weakserrin}]
Let $\delta > 0$ be sufficiently small, so that $s=p-\delta$ and $l=q-\delta$ satisfy the requirements of Lemma~\ref{lem:Morreytypeestimates}. Then the embedding properties of Lorentz spaces imply
\begin{equation}
M^{s,l}(Q')^{\frac{1}{l}} \leq C\norm{v}_{L^l_t L^s_x(Q')} \leq C\norm{v}_{L^{q,\infty}_t L^{p,\infty}_x(Q)},
\end{equation}
for all parabolic balls $Q' \subset Q$.
\end{proof}
\section{Proof of Theorem~\ref{thm:localver}}
We now prove Theorem~\ref{thm:localver}. As the forward direction is essentially known, we focus on the reverse direction. The forward direction is also valid in the local setting with curved boundary without Type~I assumptions, see~\cite{albrittonbarkerlocalregII}.
\begin{proof}[Proof]
\textbf{Forward direction}. Suppose that $v$ is a suitable weak solution in $Q$ with singularity at the space-time origin and $\mathbf{I}(Q) < \infty$. By Proposition~\ref{pro:goodsingulartime}, we may assume that $v \in L^{\infty}(Q \setminus Q(r))$ for all $0 < r \leq 1$. This may require considering an earlier singularity than the original. It is proven in~\cite[Theorem 2.8]{sereginsverakaxisymmetric} and~\cite[Section 5]{sereginsverakhandbook} that, under an appropriate rescaling procedure, such a solution (even without the Type~I assumption) gives rise to a non-trivial mild bounded ancient solution $u$. It is clear from the rescaling procedure in~\cite{sereginsverakaxisymmetric,sereginsverakhandbook} that $u$ will satisfy $\mathbf{I} < \infty$.
\textbf{Reverse direction}. Suppose that $v$ is a non-trivial mild bounded ancient solution satisfying $\mathbf{I} < \infty$. By translating in space-time as necessary, we have
\begin{equation}
\label{eqN}
\norm{v}_{L^\infty(Q)} = N > 0.
\end{equation}
Consider the sequence $(v^{(k)})_{k \in \mathbb{N}}$ of suitable weak solutions
\begin{equation}
\label{rescaling}
v^{(k)}(x,t) = k v(k x, k^2 t), \quad (x,t) \in Q(2).
\end{equation}
By the uniform estimate
\begin{equation}
\label{eq:uniformest}
\sup_{k \in \mathbb{N}} \mathbf{I}(v^{(k)},Q(2)) < \infty
\end{equation}
and Lemma~\ref{lem:compactnessforsuitable}, there exists a subsequence and a suitable weak solution $(u,p)$ with
\begin{equation}
v^{(k)} \to u \text{ in } L^3(Q) \text{ and } q^{(k)} \rightharpoonup p \text{ in } L^{\frac{3}{2}}(Q).
\end{equation}
Moreover,~\eqref{eqN} and~\eqref{rescaling} give
\begin{equation}
\norm{v^{(k)}}_{L^\infty(Q(1/k))} = k N \to \infty.
\end{equation}
Hence, Proposition~\ref{pro:appearanceofsingularity} implies that $u$ is singular at the space-time origin. Finally,
\begin{equation}
\mathbf{I}(u) < \infty
\end{equation}
follows from~\eqref{eq:uniformest}. That is, the singularity is Type~I.
\end{proof}
We now address other formulations of Type~I.
\begin{theorem}
\label{thm:localvertypei}
Let $3 \leq p < \infty$.
The following are equivalent:
\begin{itemize}
\item There exists a suitable weak solution in $Q$ with singularity at the space-time origin and
\begin{equation}
\label{eq:typei}
\esssup_{-1<t<0} (-t)^{\frac{1}{2}-\frac{3}{2p}} \norm{v}_{L^{p,\infty}(B)} < \infty.
\end{equation}
\item There exists a mild bounded ancient solution satisfying
\begin{equation}
\label{eq:typeiancient}
\esssup_{t<0} (-t)^{\frac{1}{2}-\frac{3}{2p}} \norm{v}_{L^{p,\infty}} < \infty.
\end{equation}
\end{itemize}
\end{theorem}
\begin{remark}
\label{rmkonpinfty}
It is noteworthy that $p=\infty$ is omitted despite being a popular formulation of Type~I. This is because $\sup_{t < 0} \sqrt{-t} \norm{v(\cdot,t)}_{L^\infty} < \infty$ alone does not appear to guarantee $\mathbf{I} < \infty$, or even that the local energy is finite up to (and including) the blow-up time. This is related to the fact that no global-in-time weak solution theory is known for $L^\infty$ initial data. However, the forward direction remains valid because Lemma~\ref{lem:weakserrin} implies $\mathbf{I}(Q(1/2)) < \infty$ (with an estimate depending on the quantities $C(1)$ and $D(1)$ for suitable weak solutions).
When $p>3$, it is possible to prove Theorem~\ref{thm:localvertypei} with mild solutions replacing suitable weak solutions. One could also consider $\sup_{x,t} \left(|x| + \sqrt{-t} \right) |v|$, Lorentz spaces $L^{p,q}$ ($1 < q \leq \infty$), etc.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{thm:localvertypei} (Reverse direction)]
Let $3 \leq p < \infty$. We allow the constants below to depend implicitly on $p$. It suffices to prove that a mild bounded ancient solution satisfying~\eqref{eq:typeiancient} also satisfies $\mathbf{I} < \infty$. By translating in space-time and rescaling, we only need to demonstrate
\begin{equation}
\label{eq:sufficestoshow}
A(1/2) + C(1/2) + D(1/2) + E(1/2) \leq C(M),
\end{equation}
where
\begin{equation}
\sup_{-1<t<0} (-t)^{\frac{1}{2} - \frac{3}{2p}} \norm{v(\cdot,t)}_{L^{p,\infty}} \leq M.
\end{equation}
We utilize a Calder{\'o}n-type splitting, see~\cite{calderon90,Jiasver2013,albrittonblowupcriteria}. Decompose $a := v(\cdot,-1) = \tilde{u_0} + \bar{u_0}$, where
\begin{equation}
\tilde{u_0} = \mathbb{P} \left( \mathbf{1}_{\{ |a| > \lambda M \}} a \right),
\end{equation}
and $\lambda > 0$ will be determined later. This gives
\begin{equation}
\norm{\tilde{u_0}}_{L^2} \leq C(\lambda,M) \; \text{ and } \; \norm{\bar{u_0}}_{L^{2p}} \leq C_0(\lambda,M),
\end{equation}
where $C_0(\lambda,M) \to 0$ as $\lambda \to 0^+$.
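These two bounds follow from a standard layer-cake computation, which we record for convenience (using only $\norm{a}_{L^{p,\infty}} \leq M$ with $p \geq 3$, the notation $d(s) = |\{|a| > s\}| \leq (M/s)^p$, and the boundedness of $\mathbb{P}$ on $L^2$ and $L^{2p}$):
\begin{equation}
\int_{\{|a| > \lambda M\}} |a|^2 \, dx \leq (\lambda M)^2 d(\lambda M) + \int_{\lambda M}^{\infty} 2s \, d(s) \, ds \leq \frac{p}{p-2} \, \lambda^{2-p} M^{2},
\end{equation}
\begin{equation}
\int_{\{|a| \leq \lambda M\}} |a|^{2p} \, dx \leq \int_{0}^{\lambda M} 2p \, s^{2p-1} d(s) \, ds \leq 2 \, \lambda^{p} M^{2p},
\end{equation}
the second estimate being the reason why $C_0(\lambda,M) \to 0$ as $\lambda \to 0^+$.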
We decompose the solution as
\begin{equation}
\label{eq:vdecomp}
v = V + U,
\end{equation}
where $V \in C([-1,0];L^{2p})$ is the mild solution of the Navier-Stokes equations on $\mathbb{R}^3 \times ]-1,0[$ with initial data $\bar{u_0}$. When $0 < \lambda \ll 1$ (depending on $M$), $V$ is guaranteed to exist on $\mathbb{R}^3 \times ]-1,0[$, and
\begin{equation}
\label{eq:Vest}
\norm{V}_{L^\infty_t L^{2p}_x(\mathbb{R}^3 \times ]-1,0[)} + \norm{(t+1)^{\frac{1}{2}} \nabla V}_{L^\infty_t L^{2p}_x(\mathbb{R}^3 \times ]-1,0[)} \leq 1.
\end{equation}
By the Calder{\'o}n-Zygmund estimates and pressure representation $Q = (-\Delta)^{-1} \div \div V \otimes V$,
\begin{equation}
\label{eq:Qest}
\norm{Q}_{L^\infty_t L^p_x(\mathbb{R}^3 \times ]-1,0[)} \leq C.
\end{equation}
The correction $U$ solves a perturbed Navier-Stokes equations with initial data $\tilde{u_0}$ and zero forcing term. It is possible to show that $U$ (which belongs to subcritical spaces) belongs to the energy space on $\mathbb{R}^3 \times ]-1,0[$ and satisfies the energy inequality. (There is standard perturbation theory involved, using that $v$ and $V$ are mild solutions, see~\cite{albrittonblowupcriteria} for details.) A Gronwall-type argument implies
\begin{equation}
\label{eq:Uest}
\norm{U}_{L^\infty_t L^2_x(\mathbb{R}^3 \times ]-1,0[)} + \norm{\nabla U}_{L^2(\mathbb{R}^3 \times ]-1,0[)} + \norm{U}_{L^{\frac{10}{3}}(\mathbb{R}^3 \times ]-1,0[)} \leq C(\lambda,M).
\end{equation}
Using $P = (-\Delta)^{-1} \div \div (U \otimes U + V \otimes U + U \otimes V)$, Calder{\'o}n-Zygmund estimates, and H{\"o}lder's inequality, we obtain
\begin{equation}
\label{eq:Pest}
\norm{P}_{L^{\frac{3}{2}}(Q)} \leq C(\lambda,M).
\end{equation}
Combining~\eqref{eq:vdecomp} with~\eqref{eq:Vest}-\eqref{eq:Pest} completes the proof of the reverse direction. We omit the proof of the forward direction.
\end{proof}
\section{Proof of Theorem~\ref{thm:liouville}}
We will now prove the Liouville theorem. In fact, we will prove the following, more quantitative generalization to the Lorentz space $L^{3,\infty}$. Let $\mathbb{B}$ denote the subspace of $\dot B^{-1}_{\infty,\infty}$ whose functions $f$ satisfy
\begin{equation}
f(\lambda \cdot) \to 0 \text{ in the sense of distributions as } \lambda \to \infty.
\end{equation}
\begin{theorem}[Liouville theorem]
\label{thm:moregeneralliouville}
For all $M > 0$, there exists a constant $\epsilon = \epsilon(M) > 0$ satisfying the following property. Suppose that $v$ is a mild ancient solution\footnote{In this section, we consider mild solutions belonging to the class $L^\infty_{t,{\rm loc}} L^\infty_x(\mathbb{R}^3 \times ]-\infty,0[)$.} such that
\begin{equation}
\label{vboundedseqtimes}
\norm{v(\cdot,t_k)}_{L^{3,\infty}} \leq M
\end{equation}
for a sequence $t_k \downarrow -\infty$. If
\begin{equation}
{\rm dist}_{\dot B^{-1}_{\infty,\infty}}(v(\cdot,0),\mathbb{B}) \leq \epsilon,
\end{equation}
then
\begin{equation}
\limsup_{k \to \infty} \sqrt{|t_k|/2} \norm{v}_{L^\infty(Q(\sqrt{|t_k|/2}))} < \infty.
\end{equation}
Hence,
\begin{equation}
v \equiv 0.
\end{equation}
\end{theorem}
We will use the theory of weak $L^{3,\infty}$ solutions developed in~\cite{barkersereginsverakstability}. These are defined to be suitable weak solutions of the Navier-Stokes equations with initial data $u_0 \in L^{3,\infty}$ that additionally satisfy a decomposition $v = V + U$, where $V(\cdot,t) = S(t) u_0$ is the Stokes evolution of the initial data and $U$ belongs to the energy space with $\norm{U(\cdot,t)}_{L^2} \to 0$ as $t \to 0^+$. We will also use the following proposition, which is proven in~\cite{globalweakbesov} by contradiction and backward uniqueness arguments.
\begin{proposition}[Auxiliary proposition]
\label{pro:auxiliarypro}
For all $M>0$, there exists a constant $\epsilon_0 = \epsilon_0(M) > 0$ satisfying the following property. Suppose that $v$ is a weak $L^{3,\infty}$ solution on $\mathbb{R}^3 \times ]0,1[$ satisfying
\begin{equation}
\norm{v(\cdot,0)}_{L^{3,\infty}} \leq M
\end{equation}
and
\begin{equation}
\norm{v(\cdot,1)}_{\dot B^{-1}_{\infty,\infty}} \leq \epsilon_0.
\end{equation}
Then
\begin{equation}
v \text{ is essentially bounded in } \mathbb{R}^3 \times ]1/4,1[.
\end{equation}
\end{proposition}
In fact, one may give pointwise bounds for $v$ on $\mathbb{R}^3 \times ]1/4,1[$, but this will not be necessary.
\begin{proof}[Proof of Theorem~\ref{thm:moregeneralliouville}]
Suppose otherwise. That is, there exists a mild ancient solution $v$ satisfying
\begin{equation}
\norm{v(\cdot,t_{k})}_{L^{3,\infty}} \leq M
\end{equation}
for a sequence $t_k \downarrow -\infty$,
\begin{equation}
\label{eq:distance}
{\rm dist}_{\dot B^{-1}_{\infty,\infty}}(v(\cdot,0),\mathbb{B}) \leq \epsilon_0/2,
\end{equation}
with $\epsilon_0 = \epsilon_0(M) > 0$ as in Proposition~\ref{pro:auxiliarypro}, and
\begin{equation}
\limsup_{k \to \infty} \sqrt{|t_{k}|/2} \norm{v}_{L^\infty(Q(\sqrt{|t_{k}|/2}))} = \infty.
\end{equation}
Regarding \eqref{eq:distance}, we decompose $v(\cdot,0) = U + W$,
where $U \in \mathbb{B}$ and $\norm{W}_{\dot B^{-1}_{\infty,\infty}} \leq \epsilon_0$.
We construct a sequence $(v^{(k)})_{k \in \mathbb{N}}$ of mild solutions on $\mathbb{R}^3 \times ]-1,0[$ by rescaling appropriately:
\begin{equation}
v^{(k)}(x,t) = \sqrt{|t_{k}|} v(\sqrt{|t_{k}|} x, |t_{k}| t).
\end{equation}
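Note that this rescaling is tuned precisely so that the hypothesis at time $t_k$ becomes a bound at time $-1$: indeed $v^{(k)}(x,-1) = \sqrt{|t_{k}|}\, v(\sqrt{|t_{k}|}\, x, t_k)$, and the $L^{3,\infty}$ quasi-norm is invariant under $f \mapsto \mu f(\mu \, \cdot)$ for $\mu > 0$, since
\begin{equation}
\big| \{ \mu |f(\mu \, \cdot)| > s \} \big| = \mu^{-3} \big| \{ |f| > s/\mu \} \big| \leq \Big( \frac{\norm{f}_{L^{3,\infty}}}{s} \Big)^{3};
\end{equation}
this gives the bound $\norm{v^{(k)}(\cdot,-1)}_{L^{3,\infty}} \leq M$ recorded below.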
Since $v$ is mild, it is not difficult to show that $v^{(k)}$ is a weak $L^{3,\infty}$ solution on $\mathbb{R}^3 \times ]-1,0[$.
Moreover,
\begin{equation}
\norm{v^{(k)}(\cdot,-1)}_{L^{3,\infty}} \leq M,
\end{equation}
\begin{equation}
\label{eq:rescalingforvat0}
v^{(k)}(\cdot,0) = \sqrt{|t_{k}|}\, v(\sqrt{|t_{k}|} \cdot,0) = U^{(k)} + W^{(k)},
\end{equation}
where $U^{(k)}$ and $W^{(k)}$ correspond to $U$ and $W$, appropriately rescaled,
and
\begin{equation}
\label{eq:vkgrowing}
\norm{v^{(k)}}_{L^\infty(Q(1/2))} \to \infty.
\end{equation}
Regarding~\eqref{eq:rescalingforvat0}, we find that $U^{(k)} \to 0$ in the sense of distributions and $\norm{W^{(k)}}_{\dot B^{-1}_{\infty,\infty}} \leq \epsilon_0$. Hence, there exists $W^\infty$ satisfying $\norm{W^\infty}_{\dot B^{-1}_{\infty,\infty}} \leq \epsilon_0$ and
\begin{equation}
v^{(k)}(\cdot,0) \to W^\infty \text{ in the sense of distributions}
\end{equation}
along a subsequence.
Next, we recall a compactness result for the above sequence of weak $L^{3,\infty}$ solutions (see~\cite{barkersereginsverakstability,globalweakbesov}). There exists a weak $L^{3,\infty}$ solution $v^\infty$ on $\mathbb{R}^3 \times ]-1,0[$ and a subsequence such that
\begin{equation}
v^{(k)}(\cdot,-1) \wstar v^\infty(\cdot,-1) \text{ in } L^{3,\infty},
\end{equation}
where $\norm{v^\infty(\cdot,-1)}_{L^{3,\infty}} \leq M$,
\begin{equation}
v^{(k)} \to v^\infty \text{ in } L^{3}_{{\rm loc}}(\mathbb{R}^3 \times ]-1,0]),
\end{equation}
\begin{equation}
q^{(k)} \rightharpoonup q^\infty \text{ in } L^{\frac{3}{2}}_{{\rm loc}}(\mathbb{R}^3 \times ]-1,0]),
\end{equation}
and
\begin{equation}
v^{(k)}(\cdot,0) \to v^\infty(\cdot,0) \text{ in the sense of distributions}.
\end{equation}
In particular,
\begin{equation}
v^\infty(\cdot,0) = W^\infty.
\end{equation}
By Proposition~\ref{pro:auxiliarypro}, $v^\infty$ is essentially bounded in $\mathbb{R}^3 \times ]-3/4,0[$.
We claim that $v^\infty$ has a singular point $z^* \in \overline{Q(1/2)}$. Indeed, due to~\eqref{eq:vkgrowing}, there exists $z^* \in \overline{Q(1/2)}$ such that
\begin{equation}
\limsup_{k \to \infty} \norm{v^{(k)}}_{L^\infty(Q(z^*,R))} = \infty \text{ for all } 0 < R < 1/4,
\end{equation}
and we may invoke Proposition~\ref{pro:appearanceofsingularity}. This contradicts the essential boundedness of $v^\infty$ in $\mathbb{R}^3 \times ]-3/4,0[$ and completes the proof.
\end{proof}
We conclude with a few remarks:
\begin{remark}
The proof also implies that, if there exists a non-trivial mild ancient solution satisfying~\eqref{vboundedseqtimes}, then there exists a singular weak $L^{3,\infty}$ solution $v^\infty$ on $\mathbb{R}^3 \times ]-1,0[$. By considering the energy-class correction $u^\infty(\cdot,t) = v^\infty - S(t) v(\cdot,-1)$
after the initial time,
one obtains a singular weak Leray--Hopf solution with a subcritical forcing term.
Using the theory of weak Besov solutions developed in~\cite{globalweakbesov}, similar statements hold when $L^{3,\infty}$ is replaced by $\dot{B}^{-1+\frac{3}{p}}_{p,\infty}$ and when $L^{3}$ is replaced by $\dot{B}^{-1+\frac{3}{p}}_{p,p}$ ($p>3$). While similar results remain unknown in ${\rm BMO}^{-1}$, a mild ancient solution satisfying $\norm{v(\cdot,t_k)}_{{\rm BMO}^{-1}} \to 0$ as $t_k \to -\infty$ must be identically zero. This follows from the perturbation theory in~\cite{kochtataru}.
Similar statements seem to hold \emph{mutatis mutandis} in the half-space with a different decomposition of the pressure, e.g., the one in~\cite{barkerser16blowup} (see also the weak $L^3(\mathbb{R}^3_+)$ solution theory developed in~\cite{Tuanthesis}). It is interesting to note that, in the half-space case, one has the option to zoom out on an interior or boundary point.
\end{remark}
\subsubsection*{Acknowledgments}
The authors would like to thank Gregory Seregin, Vladim\'{\i}r {\v S}ver{\'a}k, and Julien Guillod for helpful suggestions on a preliminary version of the paper. DA was supported by the NDSEG Graduate Fellowship and a travel grant from the Council of Graduate Students at the University of Minnesota.
\end{document}
|
\begin{document}
\twocolumn[
\title{Efficient polarization squeezing in optical fibers}
\author{Joel~Heersink, Vincent~Josse, Gerd~Leuchs and Ulrik~L.~Andersen}
\affiliation{Institut f\"ur Optik, Information und Photonik, Max--Planck Forschungsgruppe, Universit\"at Erlangen--N\"urnberg, G\"{u}nther-Scharowsky-Str. 1, Bau 24, 91058, Erlangen, Germany}
\begin{abstract}
We report on a novel and efficient source of polarization squeezing based on a single pass through an optical fiber. By simply passing the Kerr-squeezed light exiting the fiber through a carefully aligned $\lambda/2$ waveplate and splitting it on a polarization beam splitter, we find polarization squeezing of up to $5.1 \pm 0.3$~dB. The experimental setup allows for the direct measurement of the squeezing angle.
\end{abstract}
]
\ocis{190.3270, 190.4370, 270.5290, 270.6570.}
\maketitle
The budding field of quantum information holds promise to revolutionize communication. In particular the field of quantum information with continuous variables has received much attention in the last decades with the realization, for instance, of quantum teleportation, quantum cryptography and dense coding.\cite{braunstein.book} Most of these protocols require the use of squeezed (or entangled) states of light that are generally detected using a strong local oscillator, which makes the scalability of quantum networks difficult. Recently, polarization squeezing has attracted much attention \cite{chirkin93.qe,grangier87.prl,bowen02.prl,heersink03.pra,josse03.prl} as it can be measured in direct detection,\cite{korolkova02.pra} making it especially attractive for cryptography and other quantum communication applications. Furthermore, the fluctuations of the polarization variables can be mapped onto the collective fluctuations of an atomic ensemble, paving the way to quantum memory.\cite{hald99.prl} To date polarization squeezing has been achieved using the nonlinear interactions in Optical Parametric Amplifiers,\cite{bowen02.prl} optical fibers \cite{heersink03.pra} and atomic media.\cite{josse03.prl}
In this paper we present the results of a novel and efficient method of bright pulse polarization squeezing generation using the Kerr effect in an optical fiber. This setup is greatly simplified relative to other schemes and leads to high squeezing levels. Further, this method enables us to completely characterize all quadratures of the state exiting an optical fiber. In particular we measured the angle by which the squeezed uncertainty region has been rotated by the nonlinear Kerr effect.
\begin{figure}
\caption{Representation in phase space of a) the effect of Kerr squeezing on a coherent beam, and of b) the light state exiting the fiber in our setup.}
\label{fig:phase_space}
\end{figure}
It has been known for a long time that the optical Kerr effect in glass fibers can generate quadrature squeezing. This third-order nonlinear effect ($\chi^3$), also referred to as self-phase modulation, produces an intensity-dependent change in the refractive index. The squeezing generation can be understood in a single-mode picture represented in Fig.~\ref{fig:phase_space}(a): since different amplitudes experience different rotations in phase space, the fluctuation circle (corresponding to shot noise) of the input field is transformed into an ellipse. The squeezed quadrature is rotated by the angle $\theta_{sq}$ relative to the amplitude quadrature. Since the Kerr effect conserves photon number, the amplitude fluctuations remain at the shot noise level, preventing direct detection of the squeezing.
Polarization squeezing overcomes this problem. It can be generated by overlapping two quadrature squeezed beams produced in the two orthogonal polarization modes of the fiber, visualized by $ \hat{a}_{x}$ and $ \hat{a}_{y}$ in Fig.~\ref{fig:phase_space}(b). This setup, seen in Fig.~\ref{fig:setup}, is similar to our previous experiment producing polarization squeezing,\cite{heersink03.pra} where instead two amplitude squeezed beams were generated. These were produced in an asymmetric Sagnac interferometer \cite{kitagawa86.pra,schmitt98.prl,krylov98.ol} in which a strong Kerr squeezed pulse is transformed into amplitude squeezing by interference with a weak auxiliary pulse; squeezing by this principle can also be achieved using a Mach--Zehnder interferometer.\cite{fiorentino01.pra} This interference of a strong squeezed beam and a weak ``coherent'' beam, however, gives rise to a loss in squeezing due to the dissimilarity of the pulses. In the present setup we avoid this destructive effect by mutually interfering two strong Kerr squeezed pulses in a Stokes measurement, with the potential to measure more squeezing. Formally, this interference of equally squeezed pulses is reminiscent of earlier experiments producing vacuum squeezing.\cite{bergman91.ol,rosenbluh91.prl} Further advantages of the present setup are a greater robustness against input power fluctuations and the ability to measure squeezing at all powers.
To describe the quantum polarization state of a light field it is helpful to introduce the quantum Stokes parameters.\cite{korolkova02.pra} These are defined in analogy to their classical counterparts:
\begin{eqnarray}
\hat{S}_{0} &= \hat{a}^{\dagger}_{x} \hat{a}_{x} + \hat{a}^{\dagger}_{y} \hat{a}_{y}, \quad
\hat{S}_{1} &= \hat{a}^{\dagger}_{x} \hat{a}_{x} - \hat{a}^{\dagger}_{y} \hat{a}_{y}, \nonumber \\
\hat{S}_{2} &= \hat{a}^{\dagger}_{x} \hat{a}_{y} + \hat{a}^{\dagger}_{y} \hat{a}_{x}, \quad
\hat{S}_{3} &= i(\hat{a}^{\dagger}_{y} \hat{a}_{x} - \hat{a}^{\dagger}_{x} \hat{a}_{y}).
\label{eqn:stokes_def}
\end{eqnarray}
As a consequence of the non-commutation of the photon annihilation and creation operators, $\hat{a}_{x/y}$ and $\hat{a}^{\dagger}_{x/y}$, these Stokes parameters obey the relations: $ [\hat{S}_0,\hat{S}_i] = 0$ and $ [\hat{S}_i,\hat{S}_j] = 2i\epsilon_{ijk}\hat{S}_k$ with $i,j,k \in \{1,2,3\}$. These relations lead to Heisenberg inequalities for the fluctuations of these parameters that depend on the mean polarization state.\cite{korolkova02.pra} For instance, let us consider the situation where the modes $ \hat{a}_{x}$ and $\hat{a}_{y}$ have the same amplitude but are phase shifted by $\pi/2$ (Fig.~\ref{fig:phase_space}(b)): $\left\langle \hat{a}_{x} \right\rangle=i\left\langle \hat{a}_{y} \right\rangle=i\alpha/\sqrt{2}$, $\alpha$ being a real number. This light is circularly polarized ($\langle\hat{S}_1\rangle=\langle\hat{S}_2\rangle=0$; $\langle\hat{S}_3\rangle=\langle\hat{S}_0\rangle=\alpha^2$) and the only non-trivial Heisenberg inequality is $\Delta^{2} \hat{S}_{1}\, \Delta^{2} \hat{S}_{2}\geq{}|\langle\hat{S}_3\rangle|^{2}= \alpha^{4}$, where $\Delta^{2} \hat{S}_j$ refers to the variance $\langle\hat{S}_j^2\rangle - \langle\hat{S}_j\rangle^2$. Polarization squeezing is achieved if the variance $\Delta^2\hat{S}_{\theta}$ (the variance of a general Stokes parameter rotated by an angle $\theta$ in the $\hat{S}_1$-$\hat{S}_2$ plane) is below the shot noise level:
\begin{equation}
\Delta^{2} \hat{S}_{\theta} \! \leq{} \! |\langle \hat{S}_{3} \rangle | \! = \! \alpha^{2} \quad{} \textnormal{where} \quad{} \hat{S}_{\theta} \! = \! \cos{\theta} \, \hat{S}_1 \! + \! \sin{\theta} \, \hat{S}_2.
\label{eqn:stokes_uncertainty}
\end{equation}
To fully characterize the polarization fluctuations, one can express the fluctuations of the Stokes parameters in terms of the noise of the linearly polarized modes $ \hat{a}_{x}$ and $\hat{a}_{y}$. Since the fluctuations are small compared to the mean values ($\delta \hat{a}_x,\,\delta \hat{a}_y \ll \alpha$), we find:
\begin{eqnarray}
\delta\hat{S}_{\theta}&= &\alpha (\delta\hat{X}_{x,\theta}-\delta\hat{X}_{y,\theta})/\sqrt{2}.
\label{eqn:stokes_linearization}
\end{eqnarray}
Here $\hat{X}_{x/y, \theta}$ corresponds to the quadrature rotated by an angle $\theta$ for $x$ and $y$ polarizations, visualized in Fig.~\ref{fig:phase_space}(b) and Fig.~\ref{fig:results_rotation}. The amplitude and phase quadratures are found for $\theta = 0$ and $\theta = \pi/2$.
The modes $x$ and $y$ propagate through the same fiber with identical amplitudes. They experience the same nonlinearity, and thus have the same quadrature noise $\Delta^{2} \hat{X}_{x,\theta}=\Delta^{2} \hat{X}_{y,\theta}\equiv\Delta^{2} \hat{X}_{\theta}$. We assume the noise to be uncorrelated since the modes are temporally and spatially separated by the fiber birefringence. With this assumption, the measured noise corresponds to the noise of the rotated quadrature, $\Delta^2 \hat{S}_{\theta} = \alpha^2 \Delta^2 \hat{X}_{\theta}$. Polarization squeezing is therefore observed when the measurement is rotated by the squeezing angle $\theta_{sq}$, i.e., onto the squeezed quadrature. This leads to two conclusions: i) we can easily produce polarization squeezing using this setup, and ii) by measuring the fluctuations of the Stokes parameters we can characterize the light state, in particular its quadrature noise $\Delta^2 \hat{X}_{\theta}$.
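As a simple numerical illustration of this single-mode picture and of Eq.~(\ref{eqn:stokes_linearization}), the following short Python sketch (our own illustration, not part of the experiment; the squeezing parameter and the number of samples are arbitrary) draws Gaussian quadrature fluctuations for two independent, identically Kerr-squeezed modes and evaluates $\Delta^2\hat{S}_\theta/|\langle\hat{S}_3\rangle|$ while the projection angle is scanned. The ellipse orientation is fixed such that the amplitude noise stays at the shot noise level, and the printed ratio dips below unity only near $\theta_{sq}$.
\begin{verbatim}
# Illustrative Monte Carlo check of Var(S_theta) = alpha^2 Var(X_theta)
# for two uncorrelated, identically Kerr-squeezed modes (toy parameters).
import numpy as np

rng = np.random.default_rng(1)
alpha, r, N = 1.0e3, 1.0, 200_000
# choose theta_sq such that Var(X_0) = 1 (amplitude noise at shot level)
theta_sq = np.arcsin(np.sqrt((1 - np.exp(-2*r)) /
                             (np.exp(2*r) - np.exp(-2*r))))

def kerr_mode(n):
    # quadrature fluctuations (dX, dP) of one mode in shot-noise units
    d_sq = rng.normal(0.0, np.exp(-r), n)   # squeezed ellipse axis
    d_as = rng.normal(0.0, np.exp(+r), n)   # anti-squeezed ellipse axis
    c, s = np.cos(theta_sq), np.sin(theta_sq)
    return c*d_sq - s*d_as, s*d_sq + c*d_as

dXx, dPx = kerr_mode(N)                     # mode a_x
dXy, dPy = kerr_mode(N)                     # mode a_y, independent

for theta in np.linspace(0.0, np.pi, 7):
    dXx_t = np.cos(theta)*dXx + np.sin(theta)*dPx
    dXy_t = np.cos(theta)*dXy + np.sin(theta)*dPy
    dS = alpha*(dXx_t - dXy_t)/np.sqrt(2.0) # linearized Stokes noise
    print(theta, np.var(dS)/alpha**2)       # =1 at theta=0, <1 near theta_sq
\end{verbatim}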
The experimental setup is seen in Fig.~\ref{fig:setup}. We use a Cr$^{4+}$:YAG laser emitting 130~fs FWHM pulses at 1497~nm at a repetition rate of 163~MHz. This beam is coupled into the two orthogonal polarization axes of a 13.3~m polarization maintaining fiber (3M FS-PM-7811, 6~$\mu$m core diameter). In this manner we generate two independent Kerr squeezed beams in a single pass through a fiber. After the fiber, the pulses' intensities are adjusted to be identical and they are aligned to temporally overlap. The fiber's polarization axes exhibit a strong birefringence (beat length 1.67~mm) that must be compensated. To minimize losses at the fiber output, we precompensate the pulses in an unbalanced Michelson-like interferometer that introduces a tunable delay between the polarizations.\cite{heersink03.pra,fiorentino01.pra} At the output, 0.1\% of the light is used as an input signal for a control loop which maintains a constant relative phase of $\pi/2$ between the exiting pulses, producing a circularly polarized beam (not explicitly shown).
\begin{figure}
\caption{Experimental setup for efficient polarization squeezing generation.}
\label{fig:setup}
\end{figure}
We measure the noise of a given Stokes parameter with the half waveplate $(\lambda /2)$ and a PBS. The PBS outputs are measured directly using detectors with Epitaxx-500 photodiodes and a low pass filter ($\leq$40~MHz) to avoid AC saturation due to the laser repetition rate. The measurement frequency is 17.5~MHz on a HP8595E spectrum analyzer with a resolution bandwidth of 300~kHz and a video bandwidth of 30~Hz. The difference between the two photocurrents is given by:
\begin{equation}
i_- \; \propto \; \cos{4\Phi}\,\hat{S}_1\!+\!\sin{4\Phi}\,\hat{S}_2\;=\;\hat{S}_{4\Phi},
\label{eqn:stokes_measurements}
\end{equation}
where $\Phi$ is the rotation angle of the waveplate compared to the $x$ axis. Rotating the waveplate effectively rotates the measured Stokes parameter in the $\hat{S}_1$-$\hat{S}_2$ plane, allowing a complete measurement of the polarization noise. Using the fact that the amplitude fluctuations of the two individual modes $x$ and $y$ are at the shot noise level, the Heisenberg uncertainty limit is determined by measuring $\Delta^2\hat{S}_1$. This shot noise calibration was checked using a coherent beam with the same amplitude directly from the laser.
A plot of the measured noise as the $\lambda /2$ waveplate is rotated is seen in Fig.~\ref{fig:results_rotation}. The pulse energy is 83.7~pJ (soliton energy 56$\pm$4~pJ). We find an oscillation between very large noise and squeezing, as expected from the rotation of a squeezed state. Plotted on the x-axis is the projection angle $\theta$, i.e., the angle by which the state has been rotated in phase space, inferred from the waveplate angle ($\theta=4\Phi$). For $\theta=0$, an $\hat{S}_1$ measurement, we find a noise value equal to the shot noise. Rotation of the state by $\theta_{sq}$ makes the squeezing in the system observable by projecting out only the squeezed axis of the uncertainty ellipse. Further rotation brings a rapid increase in noise as the excess phase noise, composed of the anti-squeezing and classical thermal noise, becomes visible. This is similar to experiments using local oscillators; however, here no stabilization is needed after the production of the polarization squeezed state. This may be important for experiments with long acquisition times, e.g., state tomography.
\begin{figure}
\caption{Noise against phase-space rotation angle for the rotation of the measurement $\lambda /2$ waveplate for a pulse energy of 83.7~pJ using 13.3~m 3M~FS-PM-7811 fiber. Inset: Schematic of the projection principle for angle $\theta$. Results are corrected for $-86.1\pm0.1$~dBm electronic noise.}
\label{fig:results_rotation}
\end{figure}
The squeezed and anti-squeezed quadratures and the squeezing angle, $\theta_{sq}$, of this state were investigated as a function of pulse energy (Fig.~\ref{fig:results_13.3m}). The maximum observed squeezing is $-5.1\pm0.3$~dB at an energy of 83.7~pJ.
Squeezing saturation is seen at high power, likely due to the overwhelming excess phase noise which distorts the uncertainty ellipse. The losses of the setup were found to be 20.5\%: 4\% from the fiber end, 7.8\% from optical elements and 10\% from the photodiodes. Correcting for these losses, we infer a generated polarization squeezing of up to $-8.8\pm0.8$~dB. This value agrees better with theoretical predictions.\cite{drummond01.josab} Investigating the squeezing angle $\theta_{sq}$, we find that the rotation of the uncertainty region necessary to observe squeezing decreases with increasing power. This is expected as, despite an increasing anti-squeezing, the amplitude noise of a Kerr squeezed beam remains constant. Saturation is also apparent in $\theta_{sq}$, making a clean projection of the squeezing difficult. This polarization squeezing setup could be further investigated for different fiber lengths and types, and a theoretical description of the setup could be developed.
\begin{figure}
\caption{Results for 13.3~m 3M FS-PM-7811 fiber as a function of pulse energy: a) the squeezing and excess phase noise and b) the squeezing angle. The energy at which a first order soliton is generated (56$\pm$4~pJ) is shown by the dashed line. Results are corrected for $-86.1\pm0.1$~dBm electronic noise.}
\label{fig:results_13.3m}
\end{figure}
We see potential for further improvement of our $-5.1\pm0.3$~dB squeezing produced in our novel and efficient setup, namely with better photodiodes and an all-fiber setup in which losses are minimized. The development of specialty microstructured fibers with lower classical phase noise is expected to also improve our result,\cite{korn04.iqec} bringing us yet closer to the theoretically predicted maximal fiber squeezing.\cite{drummond01.josab} Polarization and quadrature entanglement can be directly produced from this resource, both important tools in many quantum communication protocols.
This work was funded by Project 1078 of the DFG. The authors thank O. Gl\"{o}ckl for help. U.L.A. thanks the Alexander von Humboldt Foundation.
\end{document}
|
\begin{document}
\title{{\LARGE{A constructive method for linear extensions of Zadeh's fuzzy order}}}
\author{\small{Abdelkader STOUTI}}
\date{\small{Laboratory of Mathematics and Applications, Faculty of Sciences and Techniques,
University Sultan Moulay Slimane, P.O. Box 523, 23000 Beni-Mellal, MOROCCO. \\E-mail: [email protected]}}\maketitle
\begin{center}
\begin{abstract}
\noindent In this paper, we give a constructive method for linear
extensions of Zadeh's fuzzy orders. We also characterize Zadeh's
fuzzy orders by their linear extensions.
\end{abstract}
\end{center}
\noindent{\it 2000 Mathematics Subject Classification}: 03E72,
04A72, 06A05, 06A06.
\noindent{\it Keywords}: fuzzy set, fuzzy relation, Zadeh's fuzzy
order, linear Zadeh's fuzzy order, linear extension.
\noindent \baselineskip=20pt
\section{Introduction and preliminaries}
In 1971, Zadeh \cite{Zadeh71} introduced the notion of fuzzy
order. Since then, the theory of fuzzy order has been developed by
many authors (see, for example, [1-4] and [6, 8, 9, 12, 13]). In
this paper, we first establish a constructive method for linear
extensions of Zadeh's fuzzy orders. Secondly, we prove that every
Zadeh's fuzzy order defined on a finite set is the fuzzy
intersection of all its linear Zadeh's fuzzy extensions.
This paper is organized as follows. In the first section, we
recall some well-known definitions and results. In the second
section, we first give the key result of the present paper (see
Lemma 2.2). For the case of finite sets, a constructive method for
linear extensions of Zadeh's fuzzy orders is given in the second
section (see Theorem 2.1). In the third section, we characterize
Zadeh's fuzzy orders by their linear extensions (see Theorem 3.1).
In the fourth section, we give some examples of building linear
extensions of Zadeh's fuzzy orders.
Next, we recall some well-known definitions and results.
Let $X$ be a nonempty set. A fuzzy subset $A$ of $X$ is
characterized by its membership function $A : X \rightarrow [0,
1]$ and $A(x)$ is interpreted as the degree of membership of the
element $x$ in the fuzzy subset $A$ for each $x\in X.$
\begin{df} \cite{Zadeh71}. Let $X$ be a nonempty set.
A fuzzy relation $r$ on $X$ is a function $r :X\times X
\longrightarrow [0 , 1]$. For every $x, y\in X,$ the value
$r(x,y)$ is called the grade of membership of $(x,y)$ in $r$ and
means how far $x$ and $y$ are related under $r.$
\end{df}
In \cite{Zadeh71}, Zadeh gave the following definition of fuzzy
order.
\begin{df} \cite{Zadeh71}. Let $X$ be a nonempty crisp set. A Zadeh's fuzzy
order on $X$ is a fuzzy subset $r$ of $X\times X$ satisfying the
following three properties:
(i) for all $x \in X,$ $r(x, x)=1$, (Z-fuzzy reflexivity);
(ii) for all $x, y \in X,$ $x\not=y$ and $r(x, y) > 0 $ imply
$r(y, x)=0$ (Z-fuzzy antisymmetry);
(iii) for all $x, z \in X,$ $r(x, z) \geq \sup_{y \in X}\left[
\min\{ r(x, y), r(y, z)\} \right],$ (Z-fuzzy transitivity).
\end{df}
A nonempty set $X$ with a Zadeh's fuzzy order $r$
defined on it is called a Zadeh's fuzzy ordered set (for short,
Z-foset) and we denote it by $(X, r).$
If $Y$ is a subset of a Z-foset $(X, r)$, then the restriction of
$r$ to $Y\times Y$ is a Z-fuzzy order on $Y$, called the induced fuzzy
order.
A Z-fuzzy order $r$ is linear (total) on $X$ if for every $x, y
\in X,$ we have $r(x, y) > 0$ or $r(y, x) > 0.$ A Z-fuzzy ordered
set $(X, r)$ in which $r$ is linear is called a Z-fuzzy chain. If
there is at least two elements $x,y \in X$ such that $r(x,y)=r(y, x)=0,$
then the Z-fuzzy order $r$ is said to be a partial Z-fuzzy order.
Let $(X, r)$ be a Z-fuzzy ordered set and $A$ be a subset of $X$.
(a) An element $u \in X$ is a $r$-upper bound of $A$ if $r(x, u) >
0$ for all $x \in A.$ If $u$ is $r$-upper bound of $A$ and $u \in
A,$ then $u$ is called a greatest element of $A.$ The $r$-lower
bound and least element are defined analogously.
(b) An element $m \in A$ is called a maximal element of $A$ if,
whenever $r(m, x) > 0$ for some $x \in A,$ then $x=m.$ Minimal elements are
defined similarly.
(c) An element $s \in X$ is the $r$-supremum of $A$ if $s$ is an
$r$-upper bound of $A$ and for every $r$-upper bound $u$ of $A,$ we
have $r(s, u) > 0.$ When $s$ exists, we shall write
$s=\sup_{r}(A).$ Similarly, $l \in X$ is the $r$-infimum of $A$ if
$l$ is an $r$-lower bound of $A$ and for every $r$-lower
bound $k$ of $A,$ we have $r(k, l) > 0.$ When $l$ exists we shall
write $l=\inf_{r}(A).$
\begin{df}
Let $X$ be a nonempty set and $r$, $r'$ be two Zadeh's fuzzy
orders on $X$. We say that $r'$ is an extension of $r$ if
$r(x,y)\leq r'(x,y),$ for every $(x, y) \in X^{2}.$
\end{df}
\begin{df}
Let $(X, r)$ be a nonempty Zadeh's fuzzy ordered set and let $a,
b$ be two elements of $X.$ We say that $a$ and $b$ are
incomparable in $(X, r)$ if $r(a,b)=r(b,a)=0.$
\end{df}
\noindent Let $x, y \in \RR.$ Then, we set $\max\{x, y\}=x\vee y
\hbox{ and } \min\{x, y\}=x\wedge y.$
\begin{exm}
\noindent 1. Let $r_{1}$ and $r_{2}$ be the two fuzzy
relations defined on $\RR$ by:
$$r_{1}(x,y) =
\left\{\begin{array}{ll}
1 & \hbox{ if } x=y, \\
\min(1 , \frac{y-x}{2}) & \hbox{ if } x<y, \\
0 & \hbox{ if } x> y,
\end{array}
\right.$$ and $$r_{2}(x,y) = \left\{
\begin{array}{ll}
1 & \hbox{ if } x=y,\\
0.75 & \hbox{ if } x > y,\\
0 & \hbox{ if } x < y.
\end{array}
\right.$$
Then, $r_{1}$ and $r_{2}$ are two Zadeh's fuzzy orders
on $\RR.$
\noindent 2. Let $X=\{a, b, c\}$ and $r$ be the fuzzy relation
defined on $X$ by the following table:
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& a &b & c \\
\hline
a & 1 & 0 & $\gamma$ \\
b & 0 & 1 & 0 \\
c & 0 & 0 & 1 \\
\hline
\end{tabular}
\end{center}
Then, $r$ is a Zadeh's fuzzy order on $X.$
\noindent 3. Let $X=\{a, b, c, d\}$ and $r$ be the fuzzy relation
defined on $X$ by the following matrix:
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
& a & b & c & d \\
\hline
a & 1 & 0 & $\gamma$ & $\lambda$ \\
b & 0 & 1 & $\alpha$ & $\beta$ \\
c & 0 & 0 & 1 & 0 \\
d & 0 & 0 & 0 & 1 \\
\hline
\end{tabular}
\end{center}
Then, $r$ is a Zadeh's fuzzy order on $X.$
\end{exm}
\section{A constructive method for linear extensions of Zadeh's fuzzy orders defined on finite sets}
In this section, we will show how one can construct a linear
extension of a partial Zadeh's fuzzy order defined on a finite
nonempty set $X.$ More precisely, we will prove the following
result.
\begin{thm}
Every Zadeh's fuzzy order on a finite nonempty set $X$ can be
extended to a linear Zadeh's fuzzy order on $X$.
\end{thm}
In order to prove Theorem 2.1, we will need the following
technical lemma.
\begin{lem} Let $(X, r)$ be a nonempty Zadeh's
fuzzy ordered set and let $a, b$ be two elements of $X$ such that
$r(b, a)=0.$ Then, there exists at least one Zadeh's fuzzy order $r'$ on $X$
which extends $r$ such that $r'(a,b)=1$ and $r'(b,a)=0.$
\end{lem}
\noindent{\bf Proof}. Let $(X, r)$ be a Zadeh's fuzzy ordered set
and let $a, b\in X$ such that $r(b,a)=0.$ Let $r'$ be the fuzzy
relation defined on $X$ by setting
$$r'(x,y)=\max \{r(x,y) , \min(r(x,a),r(b,y))\}.$$
In what follows, we will write $r'(x,y)=r(x,y)\vee(r(x,a)\wedge
r(b,y)).$ We claim that $r'$ is a Zadeh's fuzzy order on $X$ which
extends $r.$
\noindent Claim 1. The fuzzy relation $r^{'}$ is Z--fuzzy
reflexive. Let $x \in X.$ Then, we have
$r'(x,x)=r(x,x)\vee(r(x,a)\wedge r(b,x))=1\vee(r(x,a)\wedge
r(b,x))=1.$ Thus, $r'$ is Z-fuzzy reflexive.
\noindent Claim 2. The fuzzy relation $r^{'}$ is Z--fuzzy
antisymmetric. Indeed, let $x,y \in X$ such that $r'(x,y)>0$ and
$x\neq y.$ Then, as $r'(y,x)=r(y,x)\vee(r(y,a)\wedge r(b,x)),$ we
have four cases to study.
\noindent First case. We have: $r'(x,y)= r(x,y)>0$ and
$r(x,a)\wedge r(b,y)>0.$ As $r$ is Z--fuzzy antisymmetric, we
get $r(y,x)=0.$ Hence, we obtain $r'(y,x)=r(y,a)\wedge r(b,x).$ On
the other hand since $r$ is Z--fuzzy transitive, so we get
$r(b,a)\geq r(b,y)\wedge r(y,a).$ Since $r(b,a)=0$ and $r(b,y)>0,$
then we obtain $r(y,a)=0.$ Thus, we have $r'(y,x)=0.$
\noindent Second case. We have: $r'(x,y)= r(x,y)>0$ and
$r(x,a)\wedge r(b,y)=0.$ Then, as $r$ is Z--fuzzy antisymmetric,
we get $r(y,x)=0.$ Hence, $r'(y,x)=r(y,a)\wedge r(b,x).$ Since
$r(x,a)\wedge r(b,y)=0,$ so we have $r(x,a)=0$ or $r(b,y)=0.$
(a) First subcase. We have: $r(x,a)=0.$ Then, since $r$ is
Z--fuzzy transitive we get $r(x,a)\geq r(x,y)\wedge r(y,a).$ As
$r(x,a)=0$ and $r(x,y)>0,$ so we obtain $r(y,a)=0.$ Hence, we get
$r'(y,x)=0.$
(b) Second subcase. We have: $r(b,y)=0.$ Then, as $r$ is Z--fuzzy
transitive, we get $r(b,y)\geq r(b,x)\wedge r(x,y).$ Since
$r(b,y)=0$ and $r(x,y)>0,$ so we have $r(b,x)=0.$ Thus, we obtain
$r'(y,x)=0.$
\noindent Third case. We have: $r'(x,y)= r(x,a)\wedge r(b,y)>0$
and $r(x,y)>0.$ Then, as $r$ is Z--fuzzy antisymmetric we get
$r(y,x)=0.$ Hence, we obtain $r'(y,x)=r(y,a)\wedge r(b,x).$ Then,
since $r$ is Z--fuzzy transitive we get $r(b,a)\geq r(b,y)\wedge
r(y,a).$ As $r(b,a)=0$ and $r(b,y)>0,$ so we get $r(y,a)=0.$ Thus,
we obtain $r'(y,x)=0.$
\noindent Fourth case. We have: $r'(x,y)= r(x,a)\wedge r(b,y)>0$
and $r(x,y)=0.$ So, as $r$ is Z--fuzzy transitive, we get
$r(b,a)\geq r(b,x)\wedge r(x,a).$ Since $r(b,a)=0$ and $r(x,a)>0,$
then we obtain $r(b,x)=0.$ Hence, we have $r(y,a)\wedge r(b,x)=0.$
So, we get $r'(y,x)=r(y,x).$ Then, by using the Z--fuzzy
transitivity of $r$ we obtain $r(b,x)\geq r(b,y)\wedge r(y,x).$ As
$r(b,x)=0$ and $r(b,y)>0,$ so we obtain $r(y,x)=0.$ Hence, we get
$r'(y,x)=0.$ Therefore, $r'$ is Z--fuzzy antisymmetric.
\noindent Claim 3. The fuzzy relation $r^{'}$ is Z--fuzzy
transitive. Indeed, let $x,y,z \in X$. Then, we have four cases to
study.
\noindent First case. We have: $r^{'}(x,y)=r(x,y)$ and
$r^{'}(y,z)=r(y,z).$ Then we get $r^{'}(x,y)\wedge
r^{'}(y,z)=r(x,y)\wedge r(y,z).$ Since $r$ is Z-fuzzy transitive,
so we obtain $r(x,z)\geq r(x,y)\wedge r(y,z).$ Hence, $r(x,z)\geq
r^{'}(x,y)\wedge r^{'}(y,z).$ On the other hand since
$r'(x,z)=r(x,z)\vee(r(x,a)\wedge r(b,z)),$ then we get
$r'(x,z)\geq r(x,z).$ Thus, we have $r^{'}(x,z)\geq
r^{'}(x,y)\wedge r^{'}(y,z).$
\noindent Second case. We have: $r^{'}(x,y)=r(x,a)\wedge r(b,y)$
and $r^{'}(y,z)=r(y,a)\wedge r(b,z).$ So we get, $r^{'}(x,y)\wedge
r^{'}(y,z)=r(x,a)\wedge r(b,y)\wedge r(y,a)\wedge r(b,z).$ Since
$r(b,a)=0$ and $r$ is Z-fuzzy transitive, so we get $r(b,a)\geq
r(b,y)\wedge r(y,a).$ Then, we obtain $r(b,y)\wedge r(y,a)=0.$
Hence, we have $r^{'}(x,y)\wedge r^{'}(y,z)=0.$ Thus,
$r^{'}(x,z)\geq r^{'}(x,y)\wedge r^{'}(y,z).$
\noindent Third case. We have: $r^{'}(x,y)=r(x,y)$ and
$r^{'}(y,z)=r(y,a)\wedge r(b,z).$ Then, we get $r^{'}(x,y)\wedge
r^{'}(y,z)=r(x,y)\wedge r(y,a)\wedge r(b,z).$ On the other hand,
as $r'(x,z)=r(x,z)\vee(r(x,a)\wedge r(b,z)),$ so $r'(x,z)\geq
r(x,a)\wedge r(b,z).$ As $r$ is Z-fuzzy transitive, then we get
$r(x,a)\geq r(x,y)\wedge r(y,a).$ Hence, we obtain $r'(x,z)\geq
r(x,y)\wedge r(y,a)\wedge r(b,z).$ Thus, we have $r^{'}(x,z)\geq
r^{'}(x,y)\wedge r^{'}(y,z).$
\noindent Fourth case. We have: $r^{'}(x,y)=r(x,a)\wedge r(b,y)$
and $r^{'}(y,z)=r(y,z).$ So we get, $r^{'}(x,y)\wedge
r^{'}(y,z)=r(x,a)\wedge r(b,y)\wedge r(y,z).$ On the other hand as
$r'(x,z)=r(x,z)\vee(r(x,a)\wedge r(b,z)),$ so we get $r'(x,z)\geq
r(x,a)\wedge r(b,z).$ In addition, by using the Z-fuzzy
transitivity of $r$ we get $r(b,z)\geq r(b,y)\wedge r(y,z).$
Hence, we obtain $r'(x,z)\geq r(x,a)\wedge r(b,y)\wedge r(y,z).$
So, we obtain $r^{'}(x,z)\geq r^{'}(x,y)\wedge r^{'}(y,z).$ Then,
we deduce that $r'(x,z) \geq \min\{r'(x,y),r'(y,z)\},$ for every
$y \in X.$ Thus, $r'$ is Z-fuzzy transitive. Therefore, $r'$ is a
Zadeh's fuzzy order on $X.$
\noindent Claim 4. The Zadeh's fuzzy order $r^{'}$ is an extension
of $r.$ Indeed, as for all $(x,y) \in X^{2},$ we have
$r'(x,y)=\max \{r(x,y), \min(r(x,a),r(b,y))\}\geq r(x,y).$ So,
$r'$ is an extension of $r.$ Moreover, we have
$$r'(a,b)=\max \{r(a,b), \min(r(a,a),r(b,b))\}=1$$
and
$$r'(b,a)=\max \{r(b,a),
\min(r(b,a),r(b,a))\}=r(b,a)=0.$$ \qed
Now we are able to give the proof of Theorem 2.1.
\noindent{\bf Proof of Theorem 2.1.} Let $X=\{x_{1}, ...,
x_{n}\}$ be a finite set and let $r$ be a Zadeh's fuzzy order on
$X$. If $x_{1}$ is comparable to every $x_{i}$ for $i\in \{2,...,
n\},$ it is ok. If note, by applying at most $n-1$ times Lemma
2.2, we get a Zadeh's fuzzy order $r_{1},$ say, which extends $r$
and satisfies
$$r_{1}(x_{1}, x_{i}) > 0 \hbox{ or } r_{1}(x_{i}, x_{1}) > 0 \hbox{ for every }
i \in \{1, 2, ..., n\}.$$ Now, if $x_{2}$ is comparable to every
$x_{i}$ for $i\in \{3,..., n\},$ there is nothing more to do. If it is not the case,
by applying Lemma 2.2 at most $n-2$ times, we get a Zadeh's
fuzzy order $r_{2}$ which extends $r_{1}$ such that
$$r_{2}(x_{j}, x_{i}) > 0 \hbox{ or } r_{2}(x_{i}, x_{j}) > 0 \hbox{ for every }
i \in \{1, 2,..., n\}\hbox{ and }j=1, 2.$$
By induction for every $k \in \{1,..., n-1\},$ there exists a Zadeh's fuzzy
order $r_{k},$ say, which extends $r_{k-1}$ and satisfies the
following:
$$r_{k}(x_{j}, x_{i}) > 0 \hbox{ or }r_{k}(x_{i}, x_{j}) > 0 \hbox{ for every }
j \in \{1, 2, ..., n\}\hbox{ and }i=1, 2, ..., k.$$ Since $r\leq
r_{1} \leq r_{2} \leq ...\leq r_{n-2} \leq r_{n-1},$ we get $r \leq
r_{n-1}.$ Thus, we have obtained a linear Zadeh's fuzzy order
$r_{n-1}$ which extends $r.$ \qed
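The construction in this proof is easy to implement. The following short Python sketch (our own illustration; a Zadeh's fuzzy order on a finite set $X$ is represented as a dictionary mapping each pair $(x,y)$ to its membership degree in $[0,1]$) applies the extension step of Lemma 2.2 to one incomparable pair at a time until the resulting Zadeh's fuzzy order is linear.
\begin{verbatim}
def extension_step(r, X, a, b):
    # One application of Lemma 2.2 (it is assumed that r(b, a) = 0):
    # r'(x, y) = max( r(x, y), min(r(x, a), r(b, y)) ).
    return {(x, y): max(r[(x, y)], min(r[(x, a)], r[(b, y)]))
            for x in X for y in X}

def linear_extension(r, X):
    r = dict(r)
    while True:
        pair = next(((a, b) for a in X for b in X
                     if a != b and r[(a, b)] == 0 and r[(b, a)] == 0),
                    None)
        if pair is None:      # no incomparable pair is left: r is linear
            return r
        r = extension_step(r, X, *pair)
\end{verbatim}
Each step makes the chosen pair comparable and, being a pointwise maximum, never destroys an existing comparability; hence the loop terminates after at most $\frac{n(n-1)}{2}$ steps, in accordance with the remark below.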
\begin{rem} Let $k$ be the number of applications of
Lemma 2.2 in the proof of Theorem 2.1. Then, the natural number
$k$ satisfies $k\leq \frac{m}{2},$ where $m=card(A)$ and
$$A=\{(x_{i},x_{j})\in X^{2}~~\mbox {such that}~~
r(x_{i},x_{j})=r(x_{j},x_{i})=0\}.$$ As $m \leq n(n-1),$ we
get $k \leq \frac{n(n-1)}{2}.$ \end{rem}
\section{A characterization of Zadeh's fuzzy orders by their linear
extensions} In this section, we will characterize Zadeh's fuzzy
orders which are defined on a finite nonempty set $X$ by their
linear extensions. More precisely, we will prove the following
result.
\begin{thm}
Every Zadeh's fuzzy order on a finite nonempty set $X$ is the
fuzzy intersection of all linear Zadeh's fuzzy orders which extend
it.
\end{thm}
In order to prove Theorem 3.1, we will need the following
technical lemma.
\begin{lem} Let $(X, r)$ be a finite nonempty Zadeh's
fuzzy ordered set and let $a, b$ be two elements of $X$ such that
$r(a, b) > 0.$ Then, there exists at least one linear Zadeh's fuzzy order $s$ on $X$
which extends $r$ such that $s(a,b)=r(a, b).$
\end{lem}
\noindent{\bf Proof}. Let $(X, r)$ be a nonempty Zadeh's fuzzy
ordered set and let $a, b$ be two elements of $X$ such that $r(a,
b) > 0.$ If $r$ is linear then we take $s=r$ and Lemma 3.2 is
proved. If it is not the case, then from Theorem 2.1 there exists
a linear Zadeh's fuzzy order $r^{'}$ on $X$ which extends $r.$ If
$r(a,b)= r^{'}(a, b),$ then we take $s=r^{'}$ and Lemma 3.2 is
proved. If it is not the case, then we have $r(a,b)<
r^{'}(a, b).$ So, we set $\beta=r(a,b).$ Hence, we get $0 < \beta
< 1.$ Next, we will define the following fuzzy relation $s$ on $X$
by setting:
$$s(x,y)=\left\{\begin{array}{c}
r^{'}(x, y), ~~~~~if ~~ r(x,y)> \beta ; \\
\beta \wedge r^{'}(x, y), ~if ~~ r(x,y)\leq \beta.\\
\end{array}\right.$$
\noindent We claim that $s$ is a Zadeh's fuzzy order which extends $r$ and
satisfies $s(a, b)=r(a, b).$
\noindent Claim 1. The fuzzy relation $s$ is a linear Zadeh's
fuzzy order on $X$.
\noindent (i) Z--fuzzy reflexivity. For all $x \in X$ we have
$r(x,x)=1>\beta$, then we get $s(x,x)=r^{'}(x,x)=1.$ Thus $s$ is
fuzzy reflexive.
\noindent (ii) Z--fuzzy antisymmetry. Let $x,y \in X$ such that
$s(x,y)>0$ and $x\neq y.$ To show that $s(y, x)=0,$ we have to
consider two subcases.
First subcase. We have: $s(x,y)=r^{'}(x,y)>0.$ Then, $r(x,y)>
\beta.$ As $r^{'}$ is Z--fuzzy antisymmetric, so we get
$r^{'}(y,x)=0$. On the other hand, since $r(x,y)> \beta>0$ and $r$
is antisymmetric, then we get $r(y,x)=0\leq \beta.$ Hence,
$s(y,x)=\beta \wedge r^{'}(y, x)=0$. Thus, $s$ is fuzzy
antisymmetric.
Second subcase. We have: $s(x,y)=\beta \wedge r^{'}(x, y)>0.$
Then, we get $r^{'}(x, y)>0.$ By the fuzzy
antisymmetry of $r^{'}$ we obtain $r^{'}(y,x)=0.$ As $r(y, x) \leq
r^{'}(y, x),$ then we obtain $r(y, x)=0 \leq \beta.$ Hence, we get
$s(y,x)=\beta \wedge r^{'}(y,x)=0.$ Thus, $s$ is fuzzy antisymmetric.
\noindent (iii) Z--fuzzy transitivity. Let $x,y,z \in X$. To prove
that $s(x, z) \geq \min\{s(x, y), s(y, z) \},$ we have to distinguish
the following four cases.
\noindent First case. We have: $s(x,y)=r^{'}(x,y)$ and
$s(y,z)=r^{'}(y,z).$ Then, we get $r(x,y)> \beta$ and $r(y,z)>
\beta$. As $r$ is Z--fuzzy transitive, so $r(x,z)\geq
r(x,y)\wedge r(y,z).$ Hence, $r(x,z)> \beta$. Then, we obtain
$s(x,z)=r^{'}(x,z)$. From the fuzzy transitivity of $r^{'}$ we get
$r^{'}(x,z)\geq r^{'}(x,y)\wedge r^{'}(y,z).$ Thus, $s(x,z)\geq
s(x,y)\wedge s(y,z).$
\noindent Second case. We have: $s(x,y)=r^{'}(x,y)$ and
$s(y,z)=\beta \wedge r^{'}(y,z).$ Then, we get $r(x,y)> \beta$
and $r(y,z)\leq \beta.$ So, we obtain $r(x,y)\wedge r(y,z)\leq
\beta.$ Next, we have two subcases to consider.
First subcase. We have: $r(x,z)> \beta.$ So, we get
$s(x,z)=r^{'}(x,z).$ As $r^{'}$ is Z--fuzzy transitive,
we obtain $r^{'}(x,z)\geq r^{'}(x,y)\wedge r^{'}(y,z).$ So, we
get $r^{'}(x,z)\geq r^{'}(x,y)\wedge (\beta \wedge r^{'}(y,z)).$ Thus,
we obtain $s(x,z)\geq s(x,y)\wedge s(y,z).$
Second subcase. We have: $r(x,z)\leq \beta.$ Then, $s(x,z)=\beta
\wedge r^{'}(x,z)$. Since $r^{'}$ is Z--fuzzy transitive, hence
we get $r^{'}(x,z)\geq r^{'}(x,y)\wedge r^{'}(y,z)$ and
$\beta \wedge r^{'}(x,z)\geq r^{'}(x,y)\wedge
(\beta \wedge r^{'}(y,z)).$ Thus, $s(x,z)\geq s(x,y)\wedge
s(y,z)$.
Third case. We have: $s(x,y)=\beta \wedge r^{'}(x,y)$ and
$s(y,z)=r^{'}(y,z).$ Then, we get $r(x,y)\leq \beta$ and $r(y,z)>
\beta.$ So, $r(x,y)\wedge r(y,z)\leq \beta.$ For this case we have
two subcases to study.
First subcase. We have: $r(x,z)> \beta.$ Then, we get
$s(x,z)=r^{'}(x,z).$ Since $r^{'}$ is Z--fuzzy transitive, so we
have $r^{'}(x,z)\geq r^{'}(x,y)\wedge r^{'}(y,z)$ and
$r^{'}(x,z)\geq (\beta \wedge r^{'}(x,y))\wedge r^{'}(y,z).$ Thus,
we obtain $s(x,z)\geq s(x,y)\wedge s(y,z).$
Second subcase. We have: $r(x,z)\leq \beta.$ Then, $s(x,z)=\beta
\wedge r^{'}(x,z).$ Since $r^{'}$ is Z--fuzzy transitive, we get
$r^{'}(x,z)\geq r^{'}(x,y)\wedge r^{'}(y,z)$ and $\beta \wedge
r^{'}(x,z)\geq (\beta \wedge r^{'}(x,y))\wedge r^{'}(y,z).$ Thus,
$s(x,z)\geq s(x,y)\wedge s(y,z).$
Fourth case. We have: $s(x,y)=\beta \wedge r^{'}(x,y)$ and
$s(y,z)=\beta \wedge r^{'}(y,z).$ Then, we get $r(x,y)\leq \beta$
and $r(y,z)\leq \beta.$ So, $r(x,y)\wedge r(y,z)\leq \beta$. For
this case we have two subcases to distinguish.
First subcase. We have: $r(x,z)> \beta.$ Then, we have
$s(x,z)=r^{'}(x,z).$ As $r^{'}$ is fuzzy transitive, so we obtain
$r^{'}(x,z)\geq r^{'}(x,y)\wedge r^{'}(y,z).$ Hence,
$r^{'}(x,z)\geq (\beta \wedge r^{'}(x,y))\wedge (\beta \wedge
r^{'}(y,z)).$ Thus, we get $s(x,z)\geq s(x,y)\wedge s(y,z).$
Second subcase. We have: $r(x,z)\leq \beta.$ Then, we get
$s(x,z)=\beta \wedge r^{'}(x,z).$ Since $r^{'}$ is
Z--fuzzy transitive, so we obtain $r^{'}(x,z)\geq r^{'}(x,y)\wedge
r^{'}(y,z)$ and $\beta \wedge r^{'}(x,z)\geq (\beta \wedge r^{'}(x,y))\wedge (\beta \wedge r^{'}(y,z)).$
Thus, we get $s(x,z)\geq s(x,y)\wedge s(y,z).$ Thus, we obtain
$s(x,z) \geq \min\{s(x,y),s(y,z)\}$, for all $y \in X.$ Hence,
$s$ is fuzzy transitive. Therefore, $s$ is a Zadeh's
fuzzy order on $X.$
\noindent Claim 2. The Z--fuzzy order relation $s$ is linear.
Indeed, let $x, y \in X,$ such that $x\neq y$. Then, since
$r^{'}$ is a linear Zadeh's fuzzy order we get $r^{'}(x,y)>0$ or
$r^{'}(y,x)>0.$
First case. We have: $r^{'}(x,y)>0.$ Then, $s(x, y)=r^{'}(x,y)$ or
$s(x, y)=\beta \wedge r^{'}(x, y).$ So, we obtain $s(x, y) > 0.$
Second case. We have: $r^{'}(y, x)>0.$ Then, $s(y, x)=r^{'}(y, x)$
or $s(y, x)=\beta \wedge r^{'}(y,x).$ So, we get $s(y, x) > 0.$
\noindent Claim 3. We have: $r(x,y)\leq s(x,y).$ To show this,
we will distinguish two cases.
First case. We have: $r(x,y)> \beta.$ Then, $s(x,y)=r^{'}(x, y).$
As $r^{'}$ is an extension of $r,$ so $r(x,y) \leq s(x, y).$
Second case. We have: $r(x,y)\leq \beta.$ Then, $s(x, y)=\beta
\wedge r^{'}(x, y).$ Since $r(x,y)\leq \beta$ and $r(x, y) \leq
r^{'}(x, y),$ we get $r(x, y) \leq \beta \wedge r^{'}(x, y).$ Thus, we
have $r(x, y) \leq s(x,y).$
Finally, since $r(a, b)=\beta$ and $\beta < r^{'}(a, b),$ we obtain
$s(a, b)=\beta \wedge r^{'}(a, b)=\beta=r(a, b).$ Therefore, Lemma 3.2 is
proved.\qed
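In the same dictionary representation as before, the capped extension $s$ constructed in this proof can be written as a one-line transformation (a sketch with our own naming; here \texttt{r\_lin} denotes a linear Zadeh's fuzzy extension of \texttt{r}, obtained for instance by Theorem 2.1, and \texttt{beta} stands for $\beta=r(a,b)$).
\begin{verbatim}
def capped_extension(r, r_lin, X, beta):
    # s(x, y) = r_lin(x, y)             if r(x, y) >  beta
    #         = min(beta, r_lin(x, y))  if r(x, y) <= beta
    return {(x, y): r_lin[(x, y)] if r[(x, y)] > beta
            else min(beta, r_lin[(x, y)])
            for x in X for y in X}
\end{verbatim}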
Next, we are ready to give the proof of Theorem 3.1.
\noindent{\bf Proof of Theorem 3.1.} Let $r$ be a Zadeh's fuzzy order on a
nonempty finite set $X.$ If $r$ is a linear Zadeh's fuzzy order,
then the assertion is clear. If not, let $A$ be the set of all linear Zadeh's
fuzzy orders on $X$ which extend $r$. From Theorem 2.1,
$A$ is nonempty. Let $s_{0}$ be the fuzzy intersection of all elements of $A.$
Then, we have $$s_{0}(x, y)=\inf_{s \in A} s(x, y),\hbox{ for
every }(x, y) \in X^{2}.$$
\noindent Claim 1. The fuzzy relation $s_{0}$ is a Zadeh's fuzzy
order on $X.$
(a) Z--fuzzy reflexivity. Let $x \in X.$ Then, $s(x, x)=1$ for
every $s \in A.$ So, $s_{0}(x, x)=1.$ Thus, $s_{0}$ is Z--fuzzy
reflexive.
(b) Z--fuzzy antisymmetry. Let $x, y \in X$ such that $x\not=y$
and $s_{0}(x, y) > 0.$ Let $s \in A.$ Then, $s(x, y) > 0.$ On the
other hand, we know that $s$ is Z--fuzzy antisymmetric. So, we get
$s(y, x)=0.$ Hence, $s_{0}(y, x)=\inf_{s \in A} s(y, x)=0.$ Thus,
$s_{0}$ is Z--fuzzy antisymmetric.
(c) Z--fuzzy transitivity. Let $x, y, z \in X$ and $s\in A.$ Then,
we have
$$s_{0}(x, y) \leq s(x, y) \hbox{ and } s_{0}(y, z) \leq s(y,
z).$$ Hence, we get
$$\min\{s_{0}(x, y), s_{0}(y, z)\} \leq \min\{s(x, y), s(y,
z)\}.$$ On the other hand, we know that $s$ is Z--fuzzy
transitive. So, we get $$\min\{s(x, y), s(y, z)\} \leq s(x, z).$$
Then, we obtain, $$\min\{s_{0}(x, y), s_{0}(y, z)\} \leq s(x, z),
\hbox{ for every } s \in A.$$ Since $s_{0}(x, z)=\inf_{s \in A}
s(x, z),$ so we get $$\min\{s_{0}(x, y), s_{0}(y, z)\} \leq
s_{0}(x, z).$$ Thus, $s_{0}$ is Z--fuzzy transitive. Therefore,
$s_{0}$ is a Zadeh's fuzzy order on $X.$
\noindent Claim 2. We have: $r \leq s_{0}.$ Indeed if $x, y \in
X,$ then we get
$$r(x, y) \leq s(x, y),\hbox{ for every } s \in A.$$
So, we obtain
$$r(x, y) \leq s_{0}(x, y),\hbox{ for every } (x, y)\in X^{2}.$$
Thus, we have $r \leq s_{0}.$
\noindent Claim 3. For every $a, b \in X,$ we have
$$( r(a, b)=r(b, a)=0 ) \Rightarrow ( s_{0}(a, b)=s_{0}(b, a)=0
).$$ Indeed, assume that $r(a, b)=r(b, a)=0.$ Then, from Lemma 2.2 and Theorem
2.1, there exist two linear Zadeh's fuzzy orders extending $r,$ say $s_{1}$
and $s_{2},$ such that
$$ \left\{ \begin{array}{c}
s_{1}(a, b)=1 \\
s_{1}(b, a)=0
\end{array}
\right.$$ and
$$\left\{\begin{array}{c}
s_{2}(a, b)=0 \\
s_{2}(b, a)=1.
\end{array}
\right. $$ \\ As $s_{0} \leq s_{1}$ and $s_{0} \leq s_{2},$
we get $s_{0}(a, b)=s_{0}(b, a)=0.$
\noindent Claim 4. We have: $r=s_{0}.$ Indeed, we know that $r\leq
s_{0}.$ Assume, for the sake of contradiction, that $r\not=s_{0}.$ Then, there are $a, b
\in X$ such that $r(a, b) < s_{0}(a, b).$ Hence, we get $s_{0}(a,
b)> 0.$ As $s_{0}$ is Z-fuzzy antisymmetric, so we obtain
$s_{0}(b, a)=0.$ On the other hand, we know that $r \leq s_{0}.$
Then, $r(b, a)=0.$ Now, we claim that $r(a, b) > 0.$ On the
contrary assume that $r(a, b)=0.$ So, we get $r(a, b)=r(b, a)=0.$
By using Claim 3, we obtain $s_{0}(a, b)=s_{0}(b, a)=0.$ This
contradicts the fact that $s_{0}(a, b) > 0.$ As $r(a, b)>
0,$ from Lemma 3.2 there exists $s \in A$ such that $s(a,
b)=r(a, b).$ Since $s_{0} \leq s,$ we get $s_{0}(a, b) \leq
r(a, b).$ This contradicts our assumption that $r(a,
b) < s_{0}(a, b).$ Therefore, we obtain $r=s_{0}.$ \qed
\section{Examples of construction of linear extensions of Zadeh's fuzzy orders }
In this section we will give some examples of construction of
linear extensions of Zadeh's fuzzy orders by applying Theorem 2.1.
\noindent 1. Let $X=\{a, b, c\}$ and $r$ be the Zadeh's fuzzy
order on $X$ defined by the following table
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& a & b & c \\
\hline
a & 1 & 0 & $\gamma$ \\
b & 0 & 1 & 0 \\
c & 0 & 0 & 1 \\
\hline
\end{tabular}
\end{center}
\noindent By using Lemma 2.2, we get in the first step a Zadeh's
fuzzy order $r_{1}$ which extends $r$ and is defined by
$r_{1}(x,y)=\max \{r(x,y) , \min(r(x,a),r(b,y))\}$. Then, we
represent $r_{1}$ by the following table
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& a &b & c \\
\hline
a & 1 & 1 & $\gamma$ \\
b & 0 & 1 & 0 \\
c & 0 & 0 & 1 \\
\hline
\end{tabular}
\end{center}
Since $r_{1}$ is not linear, we apply Lemma 2.2 again to
$r_{1}.$ So, we get a Zadeh's fuzzy order $r_{2}$ extending
$r_{1}$ (and hence also $r$), defined by
$$r_{2}(x,y)=\max \{r_{1}(x,y) , \min(r_{1}(x,b),r_{1}(c,y))\}.$$ We
can represent $r_{2}$ by the following table
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& a &b & c \\
\hline
a & 1 & 1 & 1 \\
b & 0 & 1 & 1 \\
c & 0 & 0 & 1 \\
\hline
\end{tabular}
\end{center}
The procedure stops here because $r_{2}$ is a linear Zadeh's fuzzy
order extending $r$.
In this example the number of applications of Lemma 2.2 is $k=2.$
In this case, we have $k=\frac{m}{2}$ such that $m=card(A)$ with
$A=\{(a,b), (b,a), (b,c), (c,b)\}.$
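This computation can also be checked mechanically. The following small Python script (our own check; the concrete value $\gamma=0.5$ is chosen only for illustration) reproduces the two tables above by applying the extension step of Lemma 2.2 twice.
\begin{verbatim}
X = ["a", "b", "c"]
gamma = 0.5                       # illustrative value of gamma
r = {(x, y): 1.0 if x == y else 0.0 for x in X for y in X}
r[("a", "c")] = gamma

def step(r, a, b):                # extension step of Lemma 2.2
    return {(x, y): max(r[(x, y)], min(r[(x, a)], r[(b, y)]))
            for x in X for y in X}

r1 = step(r, "a", "b")            # first step:  r1(a, b) = 1
r2 = step(r1, "b", "c")           # second step: r2(b, c) = 1
for x in X:
    print([r2[(x, y)] for y in X])
# output rows: 1 1 1 / 0 1 1 / 0 0 1, i.e. the last table above
\end{verbatim}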
\noindent 2. Let $X=\{a, b, c, d\}$ and $r$ be the Zadeh's fuzzy
order on $X$ defined by the following matrix:
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
& a & b & c & d \\
\hline
a & 1 & 0 & $\gamma$ & $\lambda$ \\
b & 0 & 1 & $\alpha$ & $\beta$ \\
c & 0 & 0 & 1 & 0 \\
d & 0 & 0 & 0 & 1 \\
\hline
\end{tabular}
\end{center}
\noindent By using Lemma 2.2, we get in the first step a Zadeh's
fuzzy order $r_{1}$ which extends $r$ and is defined by
$r_{1}(x,y)=\max \{r(x,y) , \min(r(x,a),r(b,y))\}.$ Hence, we
represent $r_{1}$ by the following table:
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
& a & b & c & d \\
\hline
a & 1 & 1 & $\gamma \vee \alpha$ & $\lambda \vee \beta$ \\
b & 0 & 1 & $\alpha$ & $\beta$ \\
c & 0 & 0 & 1 & 0 \\
d & 0 & 0 & 0 & 1 \\
\hline
\end{tabular}
\end{center}
As $r_{1}$ is not linear, we apply Lemma 2.2 again to
$r_{1}.$ So, we obtain a Zadeh's fuzzy order $r_{2},$ say, which
extends $r_{1}$ and is defined by
$$r_{2}(x,y)=\max \{r_{1}(x,y) , \min(r_{1}(x,c),r_{1}(d,y))\}.$$ Then, we
represent $r_{2}$ by the following table:
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
& a & b & c & d \\
\hline
a & 1 & 1 & $\gamma \vee \alpha$ & $\lambda \vee \beta \vee \gamma \vee \alpha$ \\
b & 0 & 1 & $\alpha$ & $\beta \vee \alpha$ \\
c & 0 & 0 & 1 & 1 \\
d & 0 & 0 & 0 & 1 \\
\hline
\end{tabular}
\end{center}
The procedure stops here because $r_{2}$ is a linear Zadeh's fuzzy
order extending $r$. In this example the number of applications of
Lemma 2.2 is $k=2.$ Thus, we have $k=\frac{m}{2}$ where
$m=card(A)$ with $A=\{(a,b), (b,a), (c,d), (d,c)\}.$
\noindent 3. Let $X=\{x_{1}, x_{2}, x_{3}, x_{4}, x_{5}, x_{6},
x_{7}\}$ and $r$ be the Zadeh's fuzzy order on $X$ defined by the
following matrix:
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
& $x_{1}$ &$ x_{2}$ & $x_{3}$ & $x_{4}$ & $x_{5}$& $x_{6}$& $x_{7}$\\
\hline
$x_{1}$ & 1 & 0 & 0 & 0.55 & 0.40 &0.45 &0.60\\
$x_{2}$ & 0 & 1 & 0 & 0.60 & 0.50 & 0.35 &0.75\\
$x_{3}$ & 0.15 & 0 & 1 & 0.30 & 0.70 &0.80 &0.90\\
$x_{4}$ & 0 & 0 & 0 & 1 & 0 &0.15 &0\\
$x_{5}$ & 0 & 0 & 0 & 0 & 1 &0.30 &0.25\\
$x_{6}$ & 0 & 0& 0 & 0 & 0 &1 &0\\
$x_{7}$ & 0 & 0& 0 & 0 & 0 &0.20 &1\\
\hline
\end{tabular}
\end{center}
\noindent By applying Lemma 2.2 a finite number of times, we
obtain a linear extension of $r$, which we represent by the
following matrix
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
& $x_{1}$ &$ x_{2}$ & $x_{3}$ & $x_{4}$ & $x_{5}$& $x_{6}$& $x_{7}$\\
\hline
$x_{1}$ & 1 & 1 & 0 & 0.60 & 0.60 &0.45 &0.75\\
$x_{2}$ & 0 & 1 & 0 & 0.60 & 0.60 & 0.35 &0.75\\
$x_{3}$ & 0.15 & 0.15 & 1 & 0.30 & 0.70 &0.80 &0.90\\
$x_{4}$ & 0 & 0 & 0 & 1 & 1 &0.30 &0.25\\
$x_{5}$ & 0 & 0 & 0 & 0 & 1 &0.30 &0.25\\
$x_{6}$ & 0 & 0& 0 & 0 & 0 &1 &0\\
$x_{7}$ & 0 & 0& 0 & 0 & 0 &0.20 &1\\
\hline
\end{tabular}
\end{center}
\end{document}
|
\begin{document}
\mainmatter
\title{General Analysis Tool Box for\\ Controlled Perturbation}
\titlerunning{General Analysis Tool Box for Controlled Perturbation}
\author{Ralf Osbild}
\institute{Saarbr{\"u}cken, March 29, 2012}
\maketitle
\begin{abstract}
The implementation of reliable and efficient geometric algorithms
is a challenging task.
The reason is the following conflict:
On the one hand,
computing with rounded arithmetic
may question the reliability of programs
while, on the other hand,
computing with exact arithmetic may be too expensive and hence inefficient.
One solution is the implementation of
\emph{controlled perturbation algorithms}, which
combine the speed of floating-point arithmetic with
a protection mechanism that nonetheless guarantees reliability.
This paper is concerned with the question of
how the performance of controlled perturbation algorithms
can be analyzed in theory.
We answer this question with the presentation of a
\emph{general analysis tool box} for controlled perturbation algorithms.
This tool box is separated into independent components
which are presented individually with their interfaces.
This way, the tool box supports alternative approaches
for the derivation of the most crucial bounds.
We present three approaches for this task.
Furthermore,
we have thoroughly reworked the concept of controlled perturbation
in order to include rational function based predicates into the theory;
polynomial based predicates are included anyway.
In addition, we introduce object-preserving perturbations.
Moreover,
the tool box is designed such that it reflects the actual behavior
of the controlled perturbation algorithm at hand
without any simplifying assumptions.
\keywords{
controlled perturbation,
reliable geometric computing,\\
floating-point computation,
numerical robustness problems.
}
\end{abstract}
\section{Introduction}
\subsection{Robust Geometric Computing}
It is a notoriously difficult task to cope with
rounding errors in computing~\cite{F70,DH02}.
In computational geometry,
predicates are decided on the sign of mathematical expressions.
If rounding errors cause a wrong decision of the predicate,
geometric algorithms may fail in various ways:
inconsistency of the data (e.g., contradictory topology),
loops that do not terminate or
loops that terminate unexpectedly~\cite{KMPSY08}.
In addition,
the thoughtful processing of degenerate cases
makes the implementation of geometric algorithms laborious~\cite{BKOS00}.
The meaning of degeneracy always depends on the context
(e.g., three points on a line, four points on a circle).
There are several ways to overcome the numerical robustness issues
and to deal with degenerate inputs.
The \emph{exact computation paradigm}~\cite{JRZ91,KLN91,MN94,FW96,Y97,LEDA99}
suggests an implementation of an exact arithmetic.
This is established by a number representation of variable precision
(i.e., variable bit length)
or the use of symbolic values which are not evaluated
(e.g., roots of integers).
There are several implementations of such
number types~\cite{CGAL,CORE02,GMPFR11,MPFI,LEDA99}.
Each program must be developed carefully such that
it can deal with all possible degenerate cases.
The software libraries {\sc Leda} and {\sc Cgal}
follow the exact computation paradigm~\cite{LEDA99,KN04,FGK00}.
The paradigm was also taken as a basis in~\cite{BEH05,Sh97,HK05}.
As opposed to that,
the \emph{topology oriented approach}~\cite{SI92,Im96,SII00}
is based on an arithmetic of finite precision.
To avoid numerical robustness issues,
the main guideline is the maintenance of the topology.
This objective requires individual alterations of the algorithm at hand
and it seems that it cannot be turned into an easy-to-use general framework.
Furthermore,
this approach must also cope with degenerate inputs.
However,
the speed of floating-point arithmetic may be worth the trouble;
in addition with other accelerations,
Held~\cite{He01} has implemented a very fast computation
of the Voronoi diagram of line segments.
There are also \emph{problem-oriented} solutions.
In computational geometry,
the sign of determinants decides an interesting class of predicates.
For example,
the side-of-line or the in-circle predicate in the plane
belong to this class
and are used in the computation of Delaunay diagrams.
Some publications attack the numerical issues in
the evaluation of determinants directly~\cite{ABD97,BM00}.
The previous approaches have in common
that they primarily focus on the numerical issues.
Other approaches are originated from the degeneracy issue.
A slight perturbation of the input seems to solve this problem.
There are different approaches which are based on perturbation.
The \emph{symbolic perturbation},
see for example \cite{EM90,Y90a,Y90b,EC95,ECS97,S98,M95},
provides a general way to distort inputs such that
degeneracies do not occur.
This definitely provides a shorter route
for the presentation of geometric algorithms.
Practically this approach requires exact arithmetic
to avoid robustness issues.
Therefore the pitfall in this approach is that,
if the concept requires very small perturbations,
it implicates a high precision and possibly a slow implementation.
In this paper we focus on \emph{controlled perturbation}.
This variant was introduced
by Halperin et al.~\cite{HS98}
for the computation of spherical arrangements.
There a perturbed input is a random point
in the neighborhood of the initial input.
It is unlikely, but not impossible, that the perturbed input is degenerate.
Therefore the algorithm has a repeating perturbation process
with two objectives:
Finding an input
that does not contain degeneracies and
that leads to numerically robust floating-point evaluations.
Halperin et al.~have presented
mechanisms to respond to inappropriate perturbations.
Moreover,
they have argued formally
under which conditions there is a chance for a successful termination
of their algorithm.
\emph{Controlled perturbation leads to numerically robust implementations
of algorithms
which use non-exact arithmetic and
which do not need to process degenerate cases.}
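To fix ideas, the following schematic sketch (ours, not taken from any of the cited implementations; all names, the guard mechanism and the parameter values are placeholders) shows the generic structure of such an algorithm for planar point input: the input is perturbed at random, the floating-point algorithm is run with guarded predicates, and the whole process is repeated with a larger perturbation bound whenever a guard reports an unreliable sign evaluation.
\begin{verbatim}
import random

class UnreliableSign(Exception):
    """Raised by a guarded predicate whenever rounding errors
    could flip the sign of the evaluated expression."""

def controlled_perturbation(points, algorithm,
                            delta=1e-6, growth=2.0, max_rounds=50):
    for _ in range(max_rounds):
        perturbed = [(x + random.uniform(-delta, delta),
                      y + random.uniform(-delta, delta))
                     for (x, y) in points]
        try:
            # guarded predicates inside 'algorithm' raise UnreliableSign
            # on (near-)degenerate configurations
            return algorithm(perturbed)
        except UnreliableSign:
            delta *= growth   # retry with a larger perturbation bound
    raise RuntimeError("no admissible perturbation found")
\end{verbatim}
In the actual scheme one may, alternatively or additionally, increase the working precision between rounds; the analysis developed in this paper is concerned with quantifying how large the perturbation bound and the precision must be for such a loop to terminate quickly with high probability.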
This idea of controlled perturbation
was applied to further geometric problems afterwards:
The arrangement of polyhedral surfaces~\cite{HR99},
the arrangement of circles~\cite{HL04},
Voronoi diagrams and Delaunay triangulations~\cite{K04,FKMS05}.
However, the presentation of each specific algorithm
has required a specific analysis of its performance.
This broaches the subject of
a \emph{general method} to analyze controlled perturbation algorithms.
We remark that
controlled perturbation has also a shady side:
Although it solves the problem for the perturbed input exactly,
it does not solve it for the initial input.
Furthermore,
it is non-obvious how to receive a solution for the initial input in general.
In case the input is highly degenerate,
the running time of the algorithm may increase significantly
after the perturbation~\cite{BMS94,ABS97}.
In this case,
the specialized treatment of degeneracies may be much faster.
\subsection{Our contribution}
The study of a \emph{general method}
to analyze controlled perturbation algorithms
is a joint work with Kurt Mehlhorn and Michael Sagraloff.
We first presented the idea
in \cite{MOS06}.
Then Caroli~\cite{C07} studied the applicability of the method
for predicates which are used for
the computation of arrangements of circles (according to~\cite{HL04})
and the computation of Voronoi diagrams of line segments
(according to~\cite{Bu96,Se96}).
Our significantly improved journal article
contains, furthermore, a detailed discussion of
the analysis of multivariate polynomials~\cite{MOS11}.
Independent of former publications,
the author has redeveloped the topic from scratch
to design a sophisticated tool box for the analysis
of controlled perturbation algorithms.
The tool box is valid for floating-point arithmetic,
guides step by step through the analysis and
allows alternative components.
Furthermore,
the solutions of two open problems
are integrated into the theory.
We briefly present our achievements below.
\emph{We present a general tool box to
analyze algorithms and their predicates.}
The tool box is subdivided into independent components and their interfaces.
Step-by-step instructions for the analysis
are associated with each component.
Interfaces represent bounds that are used in the analysis.
The result is a precision function or a probability function.
Furthermore,
necessary conditions for the analysis
are derived from the interfaces
(e.g., the notion of \emph{criticality} differs from former publications).
\emph{We present alternative approaches to derive necessary bounds.}
Because we have subdivided the tool box into
independent components and their interfaces,
it is possible to make alternative components available
in the most crucial step of the analysis.
The \emph{direct approach} is based on the geometric meaning of predicates,
the \emph{bottom-up approach} is based on the composition of functions,
and the \emph{top-down approach} is a coordinate-wise analysis of functions.
Similar direct and top-down approaches are presented in~\cite{MOS06,MOS11}.
This is the first time that a bottom-up approach is presented
for this task.
\emph{The result of the analysis is valid
for floating-point arithmetic.}
A random floating-point number generator
that guarantees a uniform distribution
was introduced in~\cite{MOS11}.
But, so far, the result of the analysis was never proven to be valid
for the finite set of floating-point numbers
since the Lebesgue measure cannot take sets of measure zero
into account.
To overcome this issue,
we define a specialized perturbation generator and
pay attention to the finiteness in the analysis, namely,
in the success probability,
in the (non-)exclusion of points and
in the usage of the Lebesgue measure.
\emph{We present an alternative analysis of multivariate polynomials.}
An analysis of multivariate polynomials,
which resembles the top-down approach,
is presented in~\cite{MOS11}.
Here we present an alternative analysis
which makes use of the bottom-up approach.
\emph{We solve the open problem of analyzing rational functions.}
We include poles of rational functions into the theory and
describe the treatment of floating-point range errors in the analysis.
We suggest a general way to guard rational functions in practice and
we show how to analyze the behavior of these guards in theory.
\emph{We solve the open problem of object-preserving perturbations.}
We introduce a perturbation generator that makes it possible
to perturb the location of input objects without deforming the objects themselves.
To achieve this goal,
we have designed the perturbation
such that
the relative floating-point input specifications of the objects
are preserved despite the use of rounded arithmetic.
\emph{We suggest an implementation
that is in accordance with the analysis tool box.}
We define a fixed-precision perturbation generator and
extend it to be object-preserving.
We explain the particularities
in the practical treatment of range errors
that occur especially in the case of rational functions.
Finally, we show how to realize guards for rational functions.
\subsection{Content}
In this paper we present a tool box
for a general analysis of controlled perturbation algorithms.
In Section~\ref{sec-cp-algo},
we present the basic design principles of controlled perturbation
from a practical point of view.
Fundamental quantities and definitions of the analysis
are introduced in Section~\ref{sec-fund-quant-def}.
The \emph{general analysis tool box}
and all of its components
are briefly introduced
in Section~\ref{sec-ana-tool-box}.
Its detailed presentation is structured in two parts:
The \emph{function analysis} and the \emph{algorithm analysis.}
Geometric algorithms base their decisions on geometric predicates
which are decided by the signs of real-valued functions.
Therefore the analysis of algorithms
requires a general analysis of such functions.
The \emph{function analysis}
is visualized
in Figure~\ref{fig-illu-ana-func} on Page~\pageref{fig-illu-ana-func}.
Since the analysis is performed with real arithmetic,
we must also prove its validity for actual floating-point inputs.
This validation is anchored in Section~\ref{sec-validation}.
The function analysis itself works in two stages.
The required bounds form the interface between the stages
and are presented
in Section~\ref{sec-nec-con-func}.
The \emph{method of quantified relations}
represents the actual analysis in the second stage and is introduced
in Section~\ref{sec-meth-quan-rela}.
The derivation of the bounds in the first stage uses
the \emph{direct approach} of Section~\ref{sec-direct-approach},
the \emph{bottom-up approach}
of Section~\ref{sec-bottom-up}, or
the \emph{top-down approach}
of Section~\ref{sec-top-down},
together with an \emph{error analysis}
which is introduced in Section~\ref{sec-guards-safetybounds}.
In Section~\ref{sec-rational-function}
we extend the analysis and the implementation
such that both properly deal with floating-point range errors.
As examples,
we present the analysis of
\emph{multivariate polynomials} in Section~\ref{sec-bottom-up}
and the analysis of
\emph{rational functions} in Section~\ref{sec-ana-rational-func}.
The \emph{algorithm analysis}
is visualized
in Figure~\ref{fig-illu-ana-algo} on Page~\pageref{fig-illu-ana-algo}.
The algorithm analysis also works in two stages.
In the first stage, we perform the function analyses and
derive some algorithm specific bounds.
The analysis itself in the second stage is represented by the
\emph{method of distributed probability}.
The algorithm analysis is entirely presented in Section~\ref{sec-ana-algo}.
Furthermore, we present a general way
to \emph{implement} controlled perturbation algorithms
in Section~\ref{sec-gen-cp-imple}
such that
our analysis tool box can be applied to them.
Even more,
we suggest a way to implement \emph{object-preserving perturbations}
in Section~\ref{sec-pertub-policy}.
A \emph{quick reference}
to the most important definitions of this paper
can be found in the appendix in Section~\ref{sec-append-identifiers}.
\section{Controlled Perturbation Algorithms}
\label{sec-cp-algo}
This section contains an introduction
to the basic principles for controlled perturbation algorithms.
We have already mentioned
that implementations of geometric algorithms
must address degeneracy issues and numerical robustness issues.
We review floating-point arithmetic
in Section~\ref{sec-intro-floating-point}
and present the basic design principles
of controlled perturbation algorithms
in Section~\ref{sec-intro-cp-algo}.
\subsection{Floating-point Arithmetic}
\label{sec-intro-floating-point}
Variable precision arithmetic is necessary
for a general implementation of controlled perturbation algorithms.
We explain this statement with the following thought experiment\footnote{
This consideration is fully consistent with
Halperin et~al.~\cite{HL04}:
if the augmented perturbation parameter $\delta$
exceeds a given threshold $\Delta$,
the precision is augmented and $\delta$ is reset.}
that can be skipped on first reading:
Assume we compute an arrangement of $n$ circles incrementally
with a fixed precision arithmetic.
Let us further assume that there is an upper bound
on the radius of the circles.
Then, because of the fixed precision,
the number of distinguishable intersections per circle must be limited.
Hence the computation of a dense arrangement
gets stuck after a certain number of insertions
unless we allow circles to be moved (perturbed) further away
from their initial location.
Asymptotically,
this policy transforms a very dense arrangement
into an arrangement of almost uniformly distributed circles.
Therefore we demand that the precision of the arithmetic can be chosen
arbitrarily large.
A \emph{floating-point number} is given by
a sign, a mantissa, a radix and a signed exponent.
In the regular case,
its value is defined as
\begin{eqnarray*}
\text{value}
&:=&
\text{sign} \cdot \text{mantissa} \cdot \text{radix}^\text{exponent}.
\end{eqnarray*}
Without loss of generality,
we assume the radix to be 2.
The bit length
of the mantissa is called \emph{precision}
$L\label{def-L-inline}$.
We denote the \emph{bit length of the exponent} by
$K\label{def-K-inline}$.
The discrete set of regular floating-point numbers is a subset of
the rational numbers.
Furthermore,
this set is finite
for fixed $L$ and $K$.
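As a small illustration (not part of the formal development),
the following minimal C++ sketch decomposes a double into the three
constituents of the value formula above, using the standard functions
\texttt{std::frexp} and \texttt{std::ldexp};
note that \texttt{std::frexp} normalizes the mantissa to $[\frac{1}{2},1)$,
which is one of several equivalent conventions.
The code and its identifiers are ours.
\begin{verbatim}
#include <cmath>
#include <cstdio>

int main() {
  double x = -6.25;                    // example value

  // std::frexp returns a mantissa in [0.5, 1) and writes the exponent.
  int exponent = 0;
  double mantissa = std::frexp(std::fabs(x), &exponent);
  int sign = (x < 0.0) ? -1 : 1;

  // Reassemble the value: sign * mantissa * 2^exponent.
  double value = sign * std::ldexp(mantissa, exponent);

  std::printf("sign=%d mantissa=%.17g exponent=%d\n", sign, mantissa, exponent);
  std::printf("reassembled value=%.17g\n", value);   // prints -6.25
  return 0;
}
\end{verbatim}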
A \emph{floating-point arithmetic}
defines the number representation (the radix, $L$ and $K$),
the operations,
the rounding policy
and the exception handling for floating-point numbers
(see Goldberg~\cite{G91}).
A technical standard for
fixed precision floating-point arithmetic
is IEEE 754-2008 (see \cite{IEEE08}).
Nowadays,
the built-in types single, double and quadruple precision
are usual for radix 2.
There are several software libraries that offer
\emph{variable\footnote{
With variable we subsume all types of arithmetic
that support arbitrarily large precisions.
Some are called variable precision, multiple precision
or arbitrary precision.}
precision floating-point arithmetic}.
{\sc Cgal} provides the multi-precision floating-point number type
{\tt MP\_Float} (see the {\sc Cgal} manual~\cite{CGAL}).
{\sc Core} provides
the variable precision floating-point number type
{\tt CORE::BigFloat} (see~\cite{CORE02}).
And
{\sc Leda} provides
the variable precision floating-point number type
{\tt leda\_bigfloat} (see the {\sc Leda} book~\cite{LEDA99}).
Be aware that
the rounding policy and exception handling of certain libraries
may differ from the IEEE standard.
Since our analysis partially presumes\footnote{
A standardized behavior of floating-point operations
is presumed in Section~\ref{sec-guards-safetybounds}.}
this standard,
we must ensure that the arithmetic in use is appropriate.
The {\sc Gnu} Multiple Precision Floating-Point Reliable Library,
for example,
``provides the four rounding modes from the IEEE 754-1985 standard,
plus away-from-zero, as well as for basic operations as for other
mathematical functions'' (see the {\sc Gnu Mpfr} manual~\cite{GMPFR11}).
Moreover,
{\sc Gnu Mpfr} is used for
the multiple precision interval arithmetic
which is provided by the
Multiple Precision Floating-point Interval library
(see the {\sc Gnu Mpfi} manual~\cite{MPFI}).
Variable precision arithmetic
is more expensive than built-in fixed precision arithmetic.
We remark that, in practice, we try to solve the problem at hand
with built-in arithmetic first
and, in addition, try to make use of floating-point filters.
Throughout the paper we use the following notations.
\begin{definition}[floating-point]\label{def-fp-numbers}
Let $L,K\in\mathbb{N}$.
By $\mathbb{F}LK$ we denote:
\\
\indent
1. The set of floating-point numbers
with radix 2, precision $L$
and $K$-bit exponent. \\
\indent
2. The floating-point arithmetic
that is induced by the set characterized in 1.
\\
Furthermore, we define the suffix $\mathbb{R}F$ for sets and expressions:
\\
\indent
1. Let $k\in\mathbb{N}$ and let $X\subset\mathbb{R}^k$.
Then $X\mathbb{R}F := X \cap \mathbb{F}^k$. \\
\indent
2. $f(x)\mathbb{R}F$
denotes the floating-point value of $f(x)$
evaluated with arithmetic $\mathbb{F}$.
\end{definition}
\noindent
That means,
by $X\mathbb{R}F$ we denote the restriction of $X$ to
its subset that can be represented with floating-point numbers in $\mathbb{F}$.
To simplify the notation
we omit the indices $L$ or $K$ of $\mathbb{F}LK$
whenever they are given by the context.
For the same reason
we have already omitted the dimension $k$ in the suffix $\mathbb{R}F$.
\subsection{Basic Controlled Perturbation Implementations}
\label{sec-intro-cp-algo}
Rounding errors of floating-point arithmetic
may influence the result of predicate evaluations.
Wrong predicate evaluations may cause
erroneous results of the algorithm
and even lead to non-robust implementations
(see Kettner et al.~\cite{KMPSY08}).
In order to get correct and robust implementations,
we introduce guards which certify the reliability of predicate evaluations
(see~\cite{F97,BFS01,MOS11}).
\begin{definition}[guard]\label{def-guard}
Let $\mathbb{F}$ be a floating-point arithmetic
and let $f:X\to\mathbb{R}$ be a function
with $X\subset\mathbb{R}^k$.
We call a predicate $\mathbb{G}G_{f}:X\to\text{\{true, false\}}$
a \emph{guard for $f\!$ on $X$} if
\begin{eqnarray*}
\text{$\mathbb{G}G_{f}(x)$ is true}
\mathbb{F}ORMSEP &\mathbb{R}ightarrow& \mathbb{F}ORMSEP
\text{\rm sign}(f(x)\mathbb{R}F) = \text{\rm sign}(f(x))
\end{eqnarray*}
for all $x\in X\mathbb{R}F$.
Presumed that there is such a predicate $\mathbb{G}G_f$,
we say that an input $x\in X\mathbb{R}F$ is \emph{guarded}
if $\mathbb{G}G_{f}(x)$ is true
and \emph{unguarded}
if $\mathbb{G}G_{f}(x)$ is false.
\end{definition}
That means, guards certify the sign of function evaluations.
A design of guards is presented in Section~\ref{sec-guards-safetybounds}.
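To give a first impression of what such a guard can look like in code,
the following C++ sketch certifies a sign only if the computed magnitude
exceeds an error bound;
the bound anticipates the fp-safety bounds of
Definition~\ref{def-fpsafetybound}.
The struct and all identifiers are ours, and the concrete bound
in the example is only a placeholder.
\begin{verbatim}
#include <cmath>
#include <cstdio>
#include <functional>
#include <vector>

// Sketch of a guard in the spirit of the definition above: the evaluation is
// declared reliable only if the computed magnitude exceeds a bound S(L) on
// the accumulated rounding error (an fp-safety bound).  All names are ours.
struct GuardedSign {
  std::function<double(const std::vector<double>&)> f_fp;  // fp evaluation of f
  double safety_bound;                                      // S_{inf f}(L)

  // Returns +1 or -1 for a certified sign, 0 if the input is unguarded.
  int operator()(const std::vector<double>& x) const {
    const double v = f_fp(x);
    if (std::fabs(v) > safety_bound)   // guard G_f(x) is true
      return (v > 0.0) ? +1 : -1;
    return 0;                          // guard G_f(x) is false
  }
};

int main() {
  // Example predicate: sign of a 2x2 determinant a*d - b*c.
  GuardedSign det2{
    [](const std::vector<double>& x) { return x[0]*x[3] - x[1]*x[2]; },
    1e-12   // placeholder; a real bound comes from an error analysis
  };
  std::printf("certified sign: %d\n", det2({2.0, 1.0, 1.0, 3.0}));  // prints 1
  return 0;
}
\end{verbatim}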
By means of guards
we can implement geometric algorithms
such that they can either verify or disprove their result.
\begin{definition}[guarded algorithm]\label{def-guarded-algo}
We call an algorithm ${\cal A}_\text{\rm G}$ a \emph{guarded algorithm}
if there is a guard for each predicate evaluation
and if the algorithm halts either with the correct combinatorial result
or with the information that a guard has failed.
If ${\cal A}_\text{\rm G}$ halts with the correct result,
we also say that ${\cal A}_\text{\rm G}$ is \emph{successful}, and
we say that ${\cal A}_\text{\rm G}$ has \emph{failed}
if a guard has failed.
\end{definition}
Let $\bar{y}$ be an input of ${\cal A}_\text{\rm G}$.
In case ${\cal A}_\text{\rm G}(\bar{y})$ is successful,
we obtain the desired result
for input $\bar{y}$.
Of course,
the situation is unsatisfying
if ${\cal A}_\text{\rm G}$ fails.
Therefore we introduce controlled perturbation
(see Halperin et al.~\cite{HL04}):
We execute ${\cal A}_\text{\rm G}$ for randomly perturbed inputs $y$
(i.e., random points in the neighborhood of $\bar{y}$)
\emph{until} ${\cal A}_\text{\rm G}$ terminates successfully.
Furthermore,
we increase the precision $L$
of the floating-point arithmetic $\mathbb{F}$
after each failure
in the hope of improving the chance of success.
(It is the task of the analysis to give evidence.)
We summarize this idea
in the provisional controlled perturbation algorithm \mbox{$\BACP$}
which is shown in Algorithm~\ref{algo-plaincp}.
The general controlled perturbation algorithm
is presented on page~\pageref{algo-cp} in Section~\ref{sec-ana-algo}.
\begin{algorithm}
\caption{: $\BACP({\cal A}_\text{\rm G}, \bar{y}, \bar{U}U_\delta)$}
\label{algo-plaincp}
\begin{algorithmic}
\STATE \emph{/* initialization */}
\STATE $L \leftarrow$ precision of built-in floating-point arithmetic
\REPEAT
\STATE \emph{/* run guarded algorithm */}
\STATE $y \leftarrow$ random point in $\bar{U}UU_\delta(\bar{y})\mathbb{R}FL$
\STATE $\omega \leftarrow {\cal A}_\text{\rm G}(y,\mathbb{F}L)$
\STATE \emph{/* adjust parameters */}
\IF{${\cal A}_\text{\rm G}$ failed}
\STATE $L \leftarrow 2L$
\ENDIF
\UNTIL{${\cal A}_\text{\rm G}$ succeeded}
\STATE \emph{/* return perturbed input $y$ and result $\omega$ */}
\RETURN $(y,\omega)$
\end{algorithmic}
\end{algorithm}
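To relate the pseudocode to an actual program,
the following C++ sketch renders the same loop.
The perturbation routine and the guarded algorithm are supplied by the caller;
all identifiers are ours, and the toy instance in \texttt{main}
is purely illustrative.
\begin{verbatim}
#include <cmath>
#include <cstdio>
#include <optional>
#include <random>
#include <utility>

// A deliberately simplified rendering of the loop above.  PerturbFn draws a
// random point in the perturbation area for precision L, and GuardedAlgo runs
// the guarded algorithm A_G with precision L, returning std::nullopt if some
// guard failed.  Both are supplied by the caller; all identifiers are ours.
template <class Input, class Result, class PerturbFn, class GuardedAlgo>
std::pair<Input, Result>
controlled_perturbation(const Input& ybar, PerturbFn perturb, GuardedAlgo run_guarded) {
  unsigned L = 53;                              // precision of built-in doubles
  for (;;) {
    Input y = perturb(ybar, L);                 // random point near ybar
    if (std::optional<Result> omega = run_guarded(y, L))
      return {y, *omega};                       // success: return (y, omega)
    L *= 2;                                     // failure: double the precision
  }
}

int main() {
  // Toy instance: the perturbation jitters the input on a coarse grid and the
  // "guarded algorithm" pretends to need 212 mantissa bits before any guard
  // holds, so the loop visibly doubles L twice before it succeeds.
  std::mt19937 rng{42};
  auto perturb = [&rng](double ybar, unsigned /*L*/) {
    return ybar + std::uniform_int_distribution<int>(-4, 4)(rng) * std::ldexp(1.0, -20);
  };
  auto run_guarded = [](double y, unsigned L) -> std::optional<int> {
    if (L < 212) return std::nullopt;      // simulated guard failure
    return (y > 0.0) ? 1 : -1;             // certified sign
  };
  auto [y, sign] = controlled_perturbation<double, int>(1.0, perturb, run_guarded);
  std::printf("perturbed input %.17g has certified sign %d\n", y, sign);
  return 0;
}
\end{verbatim}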
We see that there is an implementation of $\BACP({\cal A}_\text{\rm G})$
for every guarded algorithm ${\cal A}_\text{\rm G}$,
or, in other words,
for every algorithm that is based only on geometric predicates
that can be guarded.
It is important to note that
this does not necessarily imply that $\BACP$ performs well.
It is the main objective of this paper
to develop a general method to analyze the performance
of controlled perturbation algorithms ${\cal A}_\text{\rm CP}$.
\section{Fundamental Quantities and Definitions}
\label{sec-fund-quant-def}
Our main aim is the derivation of a general method
to analyze controlled perturbation algorithms.
In order to achieve this,
we introduce fundamental quantities first.
In Section~\ref{sec-basic-quantities}
we define the quantities that describe the situation
which we want to analyze.
We encounter and discuss many issues
during the definition of the success probability
in Section~\ref{sec-succ-prob}.
\emph{This is the first presentation of a
detailed modelling of the floating-point success probability.}
Controlled perturbation specific quantities are introduced
in Section~\ref{sec-further-quantities}.
(Further analysis specific bounds
are defined in the presentation of the analysis later on.)
The overview in Section~\ref{sec-over-func-argu}
summarizes the classification of inputs
in practice and in the analysis.
In Section~\ref{sec-veri-succ}
we present conditions
under which we may \emph{apply} controlled perturbation to a predicate
in practice
and under which we can actually \emph{justify} its application
in theory.
\subsection{Perturbation, Predicate, Function}
\label{sec-basic-quantities}
Here we define the quantities
that are needed to describe the initial situation:
the original input,
the perturbation area,
the perturbation parameter,
the perturbed input,
the input value bound,
functions that realize geometric predicates,
and
predicate descriptions.
In the analysis
we assume that the \emph{original input $\bar{y}$\label{def-ybar-inline}}
of a controlled-perturbation algorithm ${\cal A}_\text{\rm CP}$
consists of $n\label{def-n-inline}$
floating-point numbers,
that means, $\bar{y}\in\mathbb{F}^n$ or, as we prefer to say, $\bar{y}\in\mathbb{R}^n\mathbb{R}F$.
At this point
we do not care for a geometrical interpretation
of the input of ${\cal A}_\text{\rm CP}$.
We remark that this is no restriction:
a complex number can be represented by two numbers;
a vector can be represented by the sequence of its components;
geometric objects can be represented by their coordinates and measures;
and so on.
A circle in the plane, for example,
can be represented by a 6-tuple
(the coordinates of three distinct points on the circle)
or by a 3-tuple (the coordinates of the center and the radius).
And, to carry the example on,
an input of $m$ circles can be interpreted as a tuple
$\bar{y}\in\mathbb{R}^{n}\mathbb{R}F$
with $n:=6m$ if we choose the first variant.
We define the \emph{perturbation of $\bar{y}$}
as a random additive distortion of its components.\footnote{
There is no unique definition of perturbation in geometry
(see the introduction in~\cite{S98}).}
We call $\bar{U}U_\delta(\bar{y})\subset\mathbb{R}^n$
a \emph{perturbation area} with
\emph{perturbation parameter $\delta$\label{def-pert-para-inline}}
if
\begin{quote}
1. $\delta\in\mathbb{R}_{>0}^n$, \\
2. $y\in\bar{U}U_\delta(\bar{y})$ implies $|y_i-\bar{y}_i|\le\delta_i$ for $1\le i\le n$ and \\
3. $\bar{U}U_\delta(\bar{y})$ contains an (open) neighborhood of $\bar{y}$.
\end{quote}
Note that $\bar{U}U_\delta(\bar{y})$ is not a discrete set
whereas $\bar{U}U_\delta(\bar{y})\mathbb{R}F$ is finite.
In our example,
if we allow a circular perturbation of the $3m$ points which
define the $m$ input circles,
the perturbation area is the Cartesian product of $3m$ planar discs.
We make the observation that
even if we consider the input as a plain sequence of numbers,
the perturbation area may look very special---we cannot
neglect the geometrical interpretation here!
In this context,
we define an
\emph{axis-parallel perturbation area}
$U_\delta(\bar{y})\label{def-U-inline}$
as a box that is centered at $\bar{y}$
and has edge length $2\delta_i$ parallel to the $i$-th main axis
(and we always denote it by the Latin letter $U$ instead of $\bar{U}U$).
This definition significantly simplifies the shape of the perturbation area.
Naturally,
the perturbed input must also be a vector of floating-point numbers.
For now,
we denote the \emph{perturbed input} by
$y\in\bar{U}U(\bar{y})\mathbb{R}F\label{def-y-inline}$.
(We remark that we refine this definition
on page~\pageref{inline-random-grid}).
The analysis of ${\cal A}_\text{\rm CP}$
depends on the analysis of ${\cal A}_\text{\rm G}$ and its predicates
(see Section~\ref{sec-ana-algo}).
We remember that
a \emph{geometric predicate},
which is true or false,
is decided by the sign of a
\emph{real-valued function} $f$.\label{def-f-inline}
Therefore we introduce further quantities to describe such functions.
We assume that $f$ is a $k$-ary real-valued function and
that $k\ll n$.
We further assume that we evaluate $f$ at $k$ distinct perturbed input values,
that means,
we evaluate $f(y_{\sigma(1)},\ldots,y_{\sigma(k)})$
where $\sigma:\{1,\ldots,k\}\to\{1,\ldots,n\}$ is injective.
The mapping $\sigma$ is injective to guarantee
that the variables in the formula of $f$ are independent of each other.
To not get the indices mixed up in the analysis,
we rename the argument list of $f$ into
$x_i := y_{\sigma(i)}\label{def-x-inline}$
for $1\le i\le k$.
In the same way we also rename the affected input values
$\bar{x}_i := \bar{y}_{\sigma(i)}\label{def-xbar-inline}$.
We denote the set of \emph{valid arguments for $f$} by $A\label{def-A-inline}$.
In the analysis,
${e_\text{\rm max}}$ implicitly describes an upper bound
on the absolute values of the perturbed input values
as follows:
\begin{eqnarray}\label{def-e}\label{for-e-min}
{e_\text{\rm max}} &:=& \min
\left\{
e'\in\mathbb{N} \,:\,
\text{$|\bar{y}_i| + \delta_i \le 2^{e'}$ for all $1\le i \le n$}
\right\}.
\end{eqnarray}
We call ${e_\text{\rm max}}$ the \emph{input value parameter}.
Be aware that this is just a bound on the arguments of $f$
and not a bound on the absolute value of $f$.
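As a small aside,
the following C++ sketch computes ${e_\text{\rm max}}$ directly from
Formula~(\ref{def-e});
the brute-force search over exponents and all identifiers are ours
(we take $0\in\mathbb{N}$ here).
\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// Computes the input value parameter e_max: the smallest natural number e'
// with |ybar_i| + delta_i <= 2^{e'} for all i.
int input_value_parameter(const std::vector<double>& ybar,
                          const std::vector<double>& delta) {
  double bound = 0.0;
  for (std::size_t i = 0; i < ybar.size(); ++i)
    bound = std::max(bound, std::fabs(ybar[i]) + delta[i]);

  int e = 0;
  while (std::ldexp(1.0, e) < bound) ++e;      // increase e until 2^e >= bound
  return e;
}

int main() {
  std::vector<double> ybar  = {3.5, -10.0, 0.25};
  std::vector<double> delta = {0.5,   1.0, 0.25};
  std::printf("e_max = %d\n", input_value_parameter(ybar, delta));  // prints 4
  return 0;
}
\end{verbatim}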
At the moment we assume that
the absolute value of $f$ is bounded on $A$
and that the size $K$ of the exponent of the floating-point arithmetic $\mathbb{F}LK$
is sufficiently large to avoid overflow errors
during the evaluation of $f$.
In Section~\ref{sec-rational-function},
we drop this assumption and discuss the treatment of range issues.
Below we summarize the basic quantities
which are needed for the analysis
of a function $f$.
\begin{definition}\label{def-predi-con}
We call $\text{\rm pr}ED$
a \emph{predicate description} if:
\begin{quote}
1. $k\in\mathbb{N}$, \\
2. $A\subset\mathbb{R}^k$, \\
3. $\delta\in\mathbb{R}_{>0}^k$, \\
4. ${e_\text{\rm max}}$ is as it is defined in Formula~(\ref{def-e}), \\
5. $\bar{U}_\delta(A)\subset[-2^{{e_\text{\rm max}}},2^{{e_\text{\rm max}}}]^k$ and \\
6. $f:\bar{U}_\delta(A)\to\mathbb{R}$.
\end{quote}
\end{definition}
\noindent
Predicate descriptions are used throughout the paper.
We extend the notion
in Definition~\ref{def-predi-con-2} on page~\pageref{def-predi-con-2}
and in Definition~\ref{def-predi-con-3} on page~\pageref{def-predi-con-3}.
\subsection{Success Probability, Grid Points}
\label{sec-succ-prob}
The controlled-perturbation algorithm ${\cal A}_\text{\rm CP}$ terminates eventually
if there is a positive probability that ${\cal A}_\text{\rm G}$ terminates successfully.
The latter condition is fulfilled if $f$ has the property:
The probability of a successful evaluation of $f$
gets arbitrarily close to the certain event
just by increasing the precision $L$.
We call this property \emph{applicability}
and specify it
in Section~\ref{sec-veri-succ}.
In this section we derive a definition for the success probability
that is appropriate for the analysis
and that is valid for floating-point evaluations.
We begin with the question:
What is the least probability that
a guarded evaluation of $f$ is successful in a run of ${\cal A}_\text{\rm G}$
under the arithmetic $\mathbb{F}$?
We assume
that each random point is chosen with the same probability.
Then the answer is
\begin{eqnarray*}
\text{\rm pr} (f\mathbb{R}F)
&:=&
\min_{\bar{x}\in A} \;
\frac
{\mathbb{C}ARDI{
\left\{
x\in\bar{U}_\delta(\bar{x})\mathbb{R}F
\,:\,
\text{$\mathbb{G}G(x)$ is true}
\right\}
}}
{\mathbb{C}ARDI{\bar{U}_\delta(\bar{x})\mathbb{R}F}}.
\end{eqnarray*}
This definition reflects the actual behavior of $f$:
the probability is the number of guarded (floating-point) inputs
divided by the total number of inputs,
and it considers the worst case over all perturbation areas.
\subsection*{Issue~1: Floating-point arithmetic is hard to analyze directly}
Because floating-point arithmetic and its rounding policy
can hardly be analyzed directly,
we aim at deriving a corresponding formula for real arithmetic.
In real space,
we use the Lebesgue measure\footnote{
Measure Theory: The Lebesgue measure is defined in Forster~\cite{F11}.}
$\mu\label{def-mu-inline}$
to determine the volume of areas.
Therefore we are looking for a formula like
\begin{eqnarray}\label{for-prob-guess}
\text{\rm pr} (\text{$f$})
&:=&
\min_{\bar{x}\in A} \;
\frac
{\mu\left(
\left\{
x\in\bar{U}_\delta(\bar{x})
\,:\,
\text{$\mathbb{G}G'(x)$ is true}
\right\}
\right)}
{\mu({\bar{U}_\delta(\bar{x})})}
\end{eqnarray}
where the predicate $\mathbb{G}G':\bar{U}_\delta(A)\to\text{\{true, false\}}$
equals $\mathbb{G}G$
at arguments with floating-point representation.
\subsection*{Issue~2: The set of floating-point numbers has measure zero}
It is well-known that
the set $\bar{U}_\delta(\bar{x})\mathbb{R}F$ is finite and
that its superset $\bar{U}_\delta(\bar{x})\mathbb{R}Q$ is a set of measure zero.
Be aware that
the fraction in Formula~(\ref{for-prob-guess})
does not change
if we redefine $f$ on a set of measure zero.
This implies some bizarre situations.
For example,\footnote{
Note that
there are finite sets of exceptional points that
lead to similar counter-examples
since every exception influences the practical behavior of the function
(and $L$ is finite).}
\newcommand{\FT}{{f_\text{true}}}
\newcommand{\FF}{{f_\text{false}}}
let $\FF:\bar{U}_\delta(A)\to\mathbb{R}$ be
\begin{eqnarray*}
\FF(x)&:=&\left\{
\begin{array}{r@{\quad:\quad}l}
f(x) & x\not\in\bar{U}_\delta(A)\mathbb{R}Q \\
0 & \text{otherwise}
\end{array} \right.
\end{eqnarray*}
and let $\FT:\bar{U}_\delta(A)\to\mathbb{R}$ be
\begin{eqnarray*}
\FT(x)&:=&\left\{
\begin{array}{r@{\quad:\quad}l}
f(x) & x\not\in\bar{U}_\delta(A)\mathbb{R}Q \\
B & \text{otherwise}
\end{array} \right.
\end{eqnarray*}
where $B\in\mathbb{R}_{>0}$ is large enough to guarantee
that the guard $\mathbb{G}G$ evaluates to true in the latter case.
Be aware that
$\text{\rm pr}(\FF) = \text{\rm pr}(\FT)$
due to Formula~(\ref{for-prob-guess}),
whereas the two implementations ``${\cal A}_\text{\rm G}$ with $\FT$'' and ``${\cal A}_\text{\rm G}$ with $\FF$''
behave in completely opposite ways:
the former is always successful, whereas
the latter never succeeds.
We remark that
the assumption ``$f$ is (upper) continuous almost everywhere''
does not solve the issue because
``almost everywhere'' means
``with the exception of a set of measure zero.''
We have to introduce several restrictions
to be able to deal with situations like this.
\subsection*{Issue~3: There is no general relation between $\text{\rm pr}(f\mathbb{R}F)$ and $\text{\rm pr}(f)$}
This problem is already visible in the one-dimensional case.
\begin{example}\label{ex-density-1}
Let $\mathbb{F}=\mathbb{F}_{2,3}$ be the floating-point arithmetic with
$L=2$ and $K=3$.
In addition
let $U=[0,2]$, $R_1=[0,1]$ and $R_2=[1,2]$ be intervals.
The situation is depicted in Figure~\ref{fig-fp-ratio-f}.
\begin{figure}
\caption{Distribution of the discrete set $\mathbb{F}$ in the interval $U=[0,2]$.}
\label{fig-fp-ratio-f}
\end{figure}
\noindent
What is the probability that a randomly chosen point $x\in U$
lies inside of $R_1$,
respectively $R_2$,
for points in $U$ or $U\mathbb{R}F$?
Note that $R_1$ and $R_2$ have the same length.
For $R_1=[0,1]$ we have
\begin{eqnarray*}
\text{\rm pr}(R_1) = \frac{1}{2} \mathbb{F}ORMSEP {<} \mathbb{F}ORMSEP \text{\rm pr}(R_1\mathbb{R}F) = \frac{17}{21},
\end{eqnarray*}
that means,
the probability is higher for floating-point arithmetic.
On the other hand,
for $R_2=[1,2]$ we have
\begin{eqnarray*}
\text{\rm pr}(R_2) = \frac{1}{2} \mathbb{F}ORMSEP {>} \mathbb{F}ORMSEP \text{\rm pr}(R_2\mathbb{R}F) = \frac{5}{21},
\end{eqnarray*}
that means,
the probability is higher for real arithmetic.
$\bigcirc$
\end{example}
We derive from Example~\ref{ex-density-1}
that there is no general relation between $\text{\rm pr}(f\mathbb{R}F)$ and $\text{\rm pr}(f)$
because of the distribution of $\mathbb{F}$.
\subsection*{Issue~4: Distribution of $\mathbb{F}$ is non-uniform}
Because the discrete set of floating-point numbers
is non-uniformly distributed in general,
we smartly alter the perturbation policy:
We restrict the random choice of floating-point numbers to selected numbers
that lie on a regular grid.
\begin{definition}[grid]\label{def-grid-points}
Let ${e_\text{\rm max}}$ be as it is defined in Formula~(\ref{def-e}) and
let $\mathbb{F}LK$ be a floating-point arithmetic (with ${e_\text{\rm max}} \ll 2^{K-1}$).
We define
\begin{eqnarray}\label{for-def-tau}
\tau
&:=& 2^{{e_\text{\rm max}}-L-1}\label{def-tau-inline}.
\end{eqnarray}
We call
\begin{eqnarray}\label{for-def-grid}
\mathbb{G}LKE
&:=& \left\{
\lambda\tau \,:\, \text{$\lambda\in\mathbb{Z}$ and $\lambda\tau\in[-2^{e_\text{\rm max}},2^{e_\text{\rm max}}]$}
\right\}
\end{eqnarray}
the \emph{grid points induced by ${e_\text{\rm max}}$ with respect to $\mathbb{F}LK$}
and we call $\tau$ the \emph{grid unit of $\mathbb{G}LKE$}.
Furthermore,
we denote the grid points $\mathbb{G}$ inside of a set $X\subset\mathbb{R}^k$ by
\begin{eqnarray*}
X\mathbb{R}G &:=& X\cap \mathbb{G}^k.
\end{eqnarray*}
\end{definition}
Again we omit the indices
whenever they do not deserve special attention.
We observe that
the grid unit $\tau$ is the maximum distance between two adjacent
points in $\mathbb{F} \cap [-2^{e_\text{\rm max}},2^{e_\text{\rm max}}]$.
We observe further that
the grid points $\mathbb{G}$ form a subset of $\mathbb{F}$.
Be aware that the symbol $\mathbb{F}$ represents a set or an arithmetic
whereas the symbol $\mathbb{G}$ always represents a set.
It is important to see that the underlying arithmetic is still $\mathbb{F}$.
We have introduced $\mathbb{G}$
only to change the definition of the \emph{original perturbation area}
into $\bar{U}UU_\delta(\bar{y})\mathbb{R}G\label{inline-random-grid}$.
This leads to the \emph{final version of the success probability of $f$}:
The least probability that
a guarded evaluation of $f$ is successful for inputs in $\mathbb{G}$
under the arithmetic $\mathbb{F}$ is
\begin{eqnarray}\label{for-prob-rest-G}
\text{\rm pr} (f\mathbb{R}G)
&:=&
\min_{\bar{x}\in A} \;
\frac
{\mathbb{C}ARDI{
\left\{
x\in\bar{U}_\delta(\bar{x})\mathbb{R}G
\,:\,
\text{$\mathbb{G}G(x)$ is true}
\right\}
}}
{\mathbb{C}ARDI{\bar{U}_\delta(\bar{x})\mathbb{R}G}}.
\end{eqnarray}
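For a toy one-dimensional predicate, this ratio can be computed exhaustively;
the following C++ sketch does exactly that.
The guard threshold and all parameters are placeholders chosen by us.
\begin{verbatim}
#include <cmath>
#include <cstdint>
#include <cstdio>

int main() {
  // Exhaustive evaluation of the ratio above for a toy predicate in one
  // dimension: f(x) = x with the (placeholder) guard |x| > 1e-3, perturbation
  // area [xbar - delta, xbar + delta] with xbar = 0, delta = 1, and grid unit
  // tau = 2^{e_max - L - 1}.  All numbers are illustrative.
  const double xbar = 0.0, delta = 1.0;
  const int e_max = 1, L = 6;
  const double tau = std::ldexp(1.0, e_max - L - 1);

  std::int64_t guarded = 0, total = 0;
  const std::int64_t lo = static_cast<std::int64_t>(std::ceil ((xbar - delta) / tau));
  const std::int64_t hi = static_cast<std::int64_t>(std::floor((xbar + delta) / tau));
  for (std::int64_t lambda = lo; lambda <= hi; ++lambda) {
    const double x = lambda * tau;         // a grid point in the perturbation area
    ++total;
    if (std::fabs(x) > 1e-3) ++guarded;    // guard G(x) is true
  }
  std::printf("guarded/total = %lld/%lld\n",
              (long long)guarded, (long long)total);   // prints 128/129
  return 0;
}
\end{verbatim}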
Before we continue this consideration,
we add a remark on the implementation of the perturbation area
$\bar{U}UU_\delta(\bar{y})\mathbb{R}G$.
\begin{remark}\label{perturb-implem-inline}
Because the points in $\mathbb{G}$ are uniformly distributed,
the implementation of the perturbation
is significantly simplified
to the random choice of an integer $\lambda$ in Formula~(\ref{for-def-grid}).
This functionality is made available by
basically all higher programming languages.
Apart from that
we generate floating-point numbers with the largest possible number
of trailing zeros.
{This possibly reduces the rounding error in practice.}
\end{remark}
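The following C++ sketch makes the remark concrete:
for an axis-parallel perturbation area it draws, per coordinate,
a random integer $\lambda_i$ and returns the grid point $\lambda_i\tau$
with $\tau=2^{{e_\text{\rm max}}-L-1}$ as in Formula~(\ref{for-def-tau}).
The code and all identifiers are ours and only sketch one possible realization;
the general implementation is discussed in Section~\ref{sec-gen-cp-imple}.
\begin{verbatim}
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

// Draws, independently per coordinate, a random grid point lambda_i * tau in
// the axis-parallel perturbation area [ybar_i - delta_i, ybar_i + delta_i].
std::vector<double> perturb_on_grid(const std::vector<double>& ybar,
                                    const std::vector<double>& delta,
                                    int e_max, int L, std::mt19937_64& rng) {
  const double tau = std::ldexp(1.0, e_max - L - 1);     // grid unit
  std::vector<double> y(ybar.size());
  for (std::size_t i = 0; i < ybar.size(); ++i) {
    // admissible lambda: lambda*tau in [ybar_i - delta_i, ybar_i + delta_i]
    const std::int64_t lo = static_cast<std::int64_t>(std::ceil ((ybar[i] - delta[i]) / tau));
    const std::int64_t hi = static_cast<std::int64_t>(std::floor((ybar[i] + delta[i]) / tau));
    std::uniform_int_distribution<std::int64_t> lambda(lo, hi);
    y[i] = lambda(rng) * tau;    // small integer times a power of two: exact
  }
  return y;
}

int main() {
  std::mt19937_64 rng{7};
  std::vector<double> ybar  = {1.0, -0.5};
  std::vector<double> delta = {0.125, 0.125};
  std::vector<double> y = perturb_on_grid(ybar, delta, /*e_max=*/1, /*L=*/10, rng);
  std::printf("perturbed input: (%.10g, %.10g)\n", y[0], y[1]);
  return 0;
}
\end{verbatim}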
\subsection*{Issue~5: Projection of $\bar{U}UU_\delta(\bar{y})\mathbb{R}G$ is non-uniform}
The \emph{original perturbation area}
$\bar{U}UU_\delta(\bar{y})\mathbb{R}G$
is a discrete set of uniformly distributed points
of which every point is chosen with the same probability.
As a consequence,
the \emph{predicate perturbation area} $\bar{U}_\delta(\bar{x})\mathbb{R}G$
is also uniformly distributed.
But it is important to see that
this does not imply
that all points in the projected grid
appear with the same probability!
We illustrate, explain and solve this issue in Section~\ref{sec-ana-algo}.
For now
we continue our consideration
under the assumption
that all points in $\bar{U}_\delta(\bar{x})\mathbb{R}G$ are uniformly distributed and
randomly chosen with the same probability.
\subsection*{Issue~6: Analyses for various perturbation areas may differ}
In the determination of $\text{\rm pr}(f\mathbb{R}G)$ in Formula~(\ref{for-prob-rest-G}),
we encounter the difficulty
to find the minimum ratio
between the {guarded} and {all possible} inputs
\emph{for all possible perturbation areas},
that means, for all $\bar{x}\in A$.
We can address this problem with a simple worst-case consideration
if we cannot gain (or do not want to gain)
further insight into the behavior of $f$:
We just expect that,
whatever could negatively affect the analysis of $f$ within
the total predicate perturbation area $\bar{U}_\delta(A)$,
affects the perturbation area $\bar{U}_\delta(\bar{x})$
under consideration.
This way, we safely obtain a lower bound on the minimum.
\subsection*{Issue~7: There is no general relation between $\text{\rm pr}(f\mathbb{R}G)$ and $\text{\rm pr}(f)$}
\begin{example}\label{ex-density-2}
We continue Example~\ref{ex-density-1}.
In addition
let $R_3=[\frac{1}{10},\frac{9}{10}]$ be an interval.
Because $U\subseteq[-2^1,2^1]$,
we have ${e_\text{\rm max}}=1$ and $\tau=2^{{e_\text{\rm max}}-L-1}=\frac{1}{4}$.
The situation is depicted in Figure~\ref{fig-fp-ratio-g}.
\begin{figure}
\caption{The distribution of the grid points $\mathbb{G}$ in the interval $U=[0,2]$.}
\label{fig-fp-ratio-g}
\end{figure}
\noindent
Again we compare the continuous and the discrete case:
What is the probability that a randomly chosen point $x\in U$
lies inside of $R_1$ ($R_2$ or $R_3$, respectively)?
The probability is now higher
for $R_1$ and $R_2$ in the discrete case
\begin{eqnarray*}
\text{\rm pr}(R_1) =
\text{\rm pr}(R_2) = \frac{1}{2}
\quad < \quad
\text{\rm pr}(R_1\mathbb{R}G) =
\text{\rm pr}(R_2\mathbb{R}G) = \frac{5}{9},
\end{eqnarray*}
and higher for $R_3$
\begin{eqnarray*}
\text{\rm pr}(R_3) = \frac{2}{5} \quad > \quad \text{\rm pr}(R_3\mathbb{R}G) = \frac{1}{3}
\end{eqnarray*}
in the real case.
$\bigcirc$
\end{example}
We make the observation that
the restriction to points in $\mathbb{G}$
does not entirely solve the initial problem:
We still cannot relate the probability $\text{\rm pr}(f)$ with $\text{\rm pr}(f\mathbb{R}G)$ in general.
To improve the estimate,
we need another trick that we indicate in Example~\ref{ex-density-3}:
\emph{If we make the interval
slightly larger,
we can safely establish the desired inequality.}
\begin{example}\label{ex-density-3}
Let $\tau$ be the grid unit of $\mathbb{G}$.
We define three intervals $R\subset\mathbb{R}AUG\subset U$.
Let $U\subset\mathbb{R}$
be a closed interval of length $\lambda_0\tau$
with $\lambda_0\in\mathbb{N}$.
Let $\mathbb{R}AUG\subset U$
be an interval of length at least $\tau$
of the form $\mathbb{R}AUG:=[a-\frac{\tau}{2},b+\frac{\tau}{2}]$
for $a,b\in\mathbb{R}$ with $a\le b$.
Finally, we define $R:=[a,b]$.
In addition let $\lambda\in\mathbb{N}$ be such that
\begin{eqnarray*}
\lambda\tau \;\; \le \;\; \mu(\mathbb{R}AUG) \;\; < \;\; (\lambda+1)\tau.
\end{eqnarray*}
We observe that the number of grid points in
$R\mathbb{R}G$ and $\mathbb{R}AUG\mathbb{R}G$
is bounded by
\begin{eqnarray*}
\lambda-1 \;\;\le\;\; \mathbb{C}ARDI{R\mathbb{R}G} \;\;\le\;\;
\lambda \;\;\le\;\; \mathbb{C}ARDI{\mathbb{R}AUG\mathbb{R}G} \;\;\le\;\;
\lambda+1.
\end{eqnarray*}
Moreover, we make the important observation that
\begin{eqnarray*}
\frac{\mathbb{C}ARDI{R\mathbb{R}G}}{\mathbb{C}ARDI{U\mathbb{R}G}}
\;\; \le \;\; \frac{\lambda}{\lambda_0+1}
\;\; \le \;\; \frac{\lambda}{\lambda_0}
\;\; = \;\; \frac{\lambda\tau}{\lambda_0\tau}
\;\; \le \;\; \frac{\mu(\mathbb{R}AUG)}{\mu(U)}.
\end{eqnarray*}
That means,
it is at least as likely that a random point in $U$ lies inside of $\mathbb{R}AUG$
as that a random point in $U\mathbb{R}G$ lies inside of $R\mathbb{R}G$.
The inequality
\begin{eqnarray*}
\text{\rm pr}(R\mathbb{R}G) &\le& \text{\rm pr}(\mathbb{R}AUG)
\end{eqnarray*}
is valid independently of the actual choice or location of $R$.
$\bigcirc$
\end{example}
\subsection*{Issue~8: There is still no general relation between $\text{\rm pr}(f\mathbb{R}G)$ and $\text{\rm pr}(f)$}
The probability $\text{\rm pr}(f)$
is defined as the ratio of volumes.
The definition is, in particular,
independent of the location and shape of the involved sets.
As an example,
we consider the three different (shaded) regions
in Figure~\ref{fig-volume-location-shape}
which all have the same volume.
\begin{figure}\label{fig-volume-location-shape}
\end{figure}
We make the important observation that
the shape and location matter
if we derive the induced ratio for points in $\mathbb{G}$.
The discrepancy between the ratios
is caused by the implicit assumption
that the grid unit $\tau$ is sufficiently small.
(Asymptotically, the ratios approach the same limit
in the three illustrated examples for $\tau\to 0$.)
Be aware that making this assumption explicit
leads to a second constraint on the precision $L$
which we call the \emph{grid unit condition}.
To solve this issue,
we need a way to adjust the grid unit $\tau$ to the shape of $R$.
We address this issue in general in Section~\ref{sec-relate-tau-gamma}.
For now
we continue our consideration under the assumption that
this problem is solved.
\subsection*{Summary and validation of $\text{\rm pr}(f\mathbb{R}G)$}
We summarize our considerations so far.
The analysis of a guarded algorithm
{must} reflect its actual behavior.
(What would be the meaning of the analysis, otherwise?)
Therefore we have defined the success probability
of a floating-point evaluation of $f$
in Formula~(\ref{for-prob-rest-G})
such that it is based on the behavior of guards.
Furthermore,
we have studied the interrelationship
between the success probability
for floating-point and real arithmetic
to prepare the analysis in real space.
Be aware that
we have introduced a specialized perturbation on a regular grid $\mathbb{G}$
(in practice and in analysis)
which is necessary for the derivation of the interrelationship.
Moreover,
we now make this relationship explicit for a single interval.
(The general relationship is formulated
in Section~\ref{sec-relate-tau-gamma}.)
\begin{example}\label{ex-density-4}
(Continuation of Example~\ref{ex-density-3}.)
Let $f:U\to\mathbb{R}$.
We assume the following property of $R$:
{If $x\in U\mathbb{R}G$ lies outside of $R$
then the guard $\mathbb{G}G(x)$ is true.}
Then we have
\begin{eqnarray*}
\text{\rm pr}(f\mathbb{R}G)
&=&
\frac
{\mathbb{C}ARDI{
\left\{
x\in U\mathbb{R}G
\,:\,
\text{$\mathbb{G}G(x)$ is true}
\right\}
}}
{\mathbb{C}ARDI{U\mathbb{R}G}} \\
&\ge&
1 \,-\, \frac{\mathbb{C}ARDI{R\mathbb{R}G}}{\mathbb{C}ARDI{U\mathbb{R}G}} \\
&\ge&
1 \,-\, \frac{\mu(\mathbb{R}AUG)}{\mu(U)}.
\end{eqnarray*}
We conclude:
\emph{If we prove
by means of abstract mathematics
that
\begin{eqnarray*}
1 \,-\, \frac{\mu(\mathbb{R}AUG)}{\mu(U)} &\ge& p
\end{eqnarray*}
for a probability $p\in(0,1)$,
we have implicitly proven that}
\begin{eqnarray*}
\text{\rm pr}(f\mathbb{R}G) &\ge& p
\end{eqnarray*}
for a randomly chosen grid point in $\mathbb{G}$.
Be aware that $\text{\rm pr}(f\mathbb{R}G)$
is defined only by discrete quantities.
$\bigcirc$
\end{example}
\subsection*{Warning: processing exceptional points}
In this paragraph we explain
why it is far from obvious
how to process exceptional points in general.
Assume that we want to exclude the set $D\subset A$
from the analysis.
This changes our success probability from Formula~(\ref{for-prob-rest-G})
into
\begin{eqnarray*}
\text{\rm pr} (f\mathbb{R}G)
&=&
\min_{\bar{x}\in A} \;
\frac
{\mathbb{C}ARDI{
\left\{
x\in\bar{U}_\delta(\bar{x})\mathbb{R}G
\,:\,
\text{$\mathbb{G}G(x)$ is true}
\right\}
\setminus D
}}
{\mathbb{C}ARDI{\bar{U}_\delta(\bar{x})\mathbb{R}G}} \\
&\ge&
\min_{\bar{x}\in A} \;
\frac
{ \max \left\{
0, \;
{
\left|\left\{
x\in\bar{U}_\delta(\bar{x})\mathbb{R}G
\,:\,
\text{$\mathbb{G}G(x)$ is true}
\right\}\right|
}
-
{|D|}
\right\}
}
{\mathbb{C}ARDI{\bar{U}_\delta(\bar{x})\mathbb{R}G}}.
\end{eqnarray*}
To obtain a practicable solution,
it is reasonable to assume that $D$ is finite and, moreover, that
$|D|\ll\mathbb{C}ARDI{\bar{U}_\delta(\bar{x})\mathbb{R}G}$.
This changes the relation in Example~\ref{ex-density-4}
into:
\begin{eqnarray*}
\text{\rm pr}(f\mathbb{R}G)
&\ge&
\max \left\{
0, \;
1 - \frac{\mu(\mathbb{R}AUG)}{\mu(U)}
-
\frac
{|D|}
{\mathbb{C}ARDI{\bar{U}_\delta(\bar{x})\mathbb{R}G}}
\right\}.
\end{eqnarray*}
It is important to see
that this estimate still contains two quantities
that depend on the floating-point arithmetic.
But our plan was to get rid of this dependency.
In spite of the simplifying assumptions
it is non-obvious how to perform the analysis in real space in general.
\emph{Our suggested solution to this issue is to avoid
exceptional points.
Alternatively,
we declare them critical (see the next section),
which triggers the exclusion of a neighborhood around them.}
\subsection{Fp-safety Bound, Critical Set, Region of Uncertainty}
\label{sec-further-quantities}
\subsection*{The fp-safety bound}
We introduce a predicate that can certify
the correct sign of floating-point evaluations.
The essential part of this predicate is the fp-safety bound.
We show in Section~\ref{sec-guards-safetybounds}
that there are fp-safety bounds for a wide class of functions.
\begin{definition}[lower fp-safety bound]\label{def-fpsafetybound}
Let $\text{\rm pr}ED$ be a predicate description.
Let
$S_{\inf f}:\mathbb{N}\to\mathbb{R}_{\ge 0}$ be a monotonically decreasing function
that maps a precision $L$ to a non-negative value.
We call $S_{\inf f}$ a \emph{(lower) fp-safety bound for $f\!$ on $A$}
if the statement
\begin{eqnarray}\label{for-fpsafety-final}
|f(x)| > S_{\inf f}(L)
\mathbb{F}ORMSEP &\mathbb{R}ightarrow& \mathbb{F}ORMSEP
\text{\rm sign}(f(x)\mathbb{R}FL) = \text{\rm sign}(f(x))
\end{eqnarray}
is true for every precision
$L\in\mathbb{N}$ and for all $x\in \bar{U}_\delta(A)\mathbb{R}FL$.
\end{definition}
For the time being,
we consider $K$ to be a constant.
We drop this assumption
in Section~\ref{sec-rational-function}
where we introduce \emph{upper} fp-safety bounds.
Until then
we only consider \emph{lower} fp-safety bounds.
\subsection*{The critical set}
Next we introduce a classification of the points in $\bar{U}_\delta(A)$
that depends on their neighborhood.
(We refine the definition on Page~\pageref{def-critical-set-second}.)
\begin{definition}[critical]\label{def-critical-set}
Let $\text{\rm pr}ED$ be a predicate description.
We call a point $c\in\bar{U}_\delta(\bar{x})$
\emph{critical} if
\begin{eqnarray}\label{for-def-crit}
\inf_{x\in U_\varepsilon(c)\setminus\{c\}} \; \left|f(x)\right| &=& 0
\end{eqnarray}
on a neighborhood $U_\varepsilon(c)$
for arbitrarily small $\varepsilon>0$.
Furthermore,
we call zeros of $f$ that are not critical \emph{less-critical}.
Points that are neither critical nor less-critical
are called \emph{non-critical}.
We define
the \emph{critical set $C_{f,\delta}$
of $f$ at $\bar{x}\in A$ with respect to $\delta$}
as the union of critical and less-critical points within $\bar{U}_\delta(\bar{x})$.
\end{definition}
In other words,
we call $c$ critical
if there is a Cauchy sequence\footnote{
Analysis: Cauchy sequence is defined in Forster~\cite{F06}.}
$(a_i)_{i\in \mathbb{N}}$ in $\bar{U}_\delta(\bar{x})\setminus\{c\}$
where $\lim_{i\to\infty} a_i=c$
and $\lim_{i\to\infty} f(a_i)=0$.
We remember that
the metric space\footnote{
Topology: Metric space and completeness are defined in Jänich~\cite{J01}.}
$\mathbb{R}^k$ is complete;
hence the limit of the Cauchy sequence $(a_i)$ exists,
and it lies inside the closed set
$\bar{U}_\delta(\bar{x})$.
Sometimes we omit the indices
of the critical set $C$
if they are given by the context.
\begin{example}
We consider the three functions
that are depicted in Figure~(\ref{fig-crit-set-compare}).
Let $f_1(x)=x^2$.
Let $f_2(x)=x^2$ for $x\ne 0$ and $f_2(0)=2$.
Let $f_3(x)=x^2+1$ for $x\not\in \{-2,1\}$ and $f_3(-2)=0$ and $f_3(1)=0.2$.
\begin{figure}
\caption{Examples of critical, less-critical and non-critical points.}
\label{fig-crit-set-compare}
\end{figure}
The point $x=0$ in Picture~(a)
is a zero and a critical point for $f_1$.
In (a), every argument $x\ne 0$ is non-critical for $f_1$.
In (b), $f_2$ is non-zero at $x=0$, but $x=0$ is a critical point for $f_2$.
In (c), the argument $x=-2$ is less-critical for $f_3$ and
the argument $x=1$ is non-critical for $f_3$.
$\bigcirc$
\end{example}
What is the difference between critical and less-critical points?
We observe
that the point $c$ is excluded from its neighborhood
in Formula~(\ref{for-def-crit}).
Zeros of $f$ would trivially be critical otherwise.
Furthermore,
we observe that zeros of continuous functions are always critical.
For our purpose it is important to see that
the infimum of $|f|$ is positive
if we exclude the less-critical points \emph{themselves}
and \emph{neighborhoods} of critical points.
Be aware
that we could technically treat both kinds differently in the analysis
and still ensure that the result of the analysis is
valid for floating-point arithmetic.
Purely for simplicity, we deal with them in the same way
by adding both kinds of points to the critical set;
for the same reason, we also add exceptional points to the critical set.
\subsection*{The region of uncertainty}
The next construction is a certain environment of the critical set.
\begin{definition}[region of uncertainty]\label{def-region-uncertainty}
Let $\text{\rm pr}ED$ be a predicate description.
In addition
let $\gamma\in\mathbb{R}_{>0}^k$.
We call
\begin{eqnarray}\label{for-region-uncert}
R_{f,\gamma}(\bar{x})
&:=&
\bar{U}_\delta(\bar{x})
\;\cap\;
\left( \bigcup_{c\in C_{f,\delta}(\bar{x})} U_\gamma(c) \right)
\end{eqnarray}
the \emph{region of uncertainty for $f$
induced by $\gamma$ with respect to $\bar{x}$.}
\end{definition}
In our presentation
we use the axis-parallel boxes $U_\gamma(c)$
to define
the specific $\gamma$-neighborhood of $C$;
other shapes require adjustments, see Section~\ref{sec-relate-tau-gamma}.
The sets $U_\gamma(c)$ are open and
the complement of $R_{f,\gamma}(\bar{x})$ in $\bar{U}_\delta(\bar{x})$
is closed.
We omit the indices
of the region of uncertainty $R$
if they are given by the context.
The vector $\gamma\label{def-gamma-inline}$
defines the tuple of componentwise distances to $c$.
The presentation requires a formal definition of the
set of all admissible $\gamma$.
This set is either a box or a line.
Let $\hat\gamma\in\mathbb{R}_{>0}^k$.
Then we define the unique open axis-parallel box
with vertices $0$ and $\hat\gamma$ as
\begin{eqnarray*}\label{def-GAB-inline}
\mathbb{G}ABG &:=& \left\{ \gamma'=(\gamma'_1,\ldots,\gamma'_k)
: \text{$\gamma'_i\in(0,\hat\gamma_i)$ for all $1\le i\le k$} \right\}
\end{eqnarray*}
and the open diagonal from 0 to $\hat\gamma$ inside of $\mathbb{G}ABG$ as
\begin{eqnarray*}\label{def-GAL-inline}
\mathbb{G}ALG &:=& \left\{ \gamma : \text{$\gamma = \lambda \hat\gamma$ with $\lambda\in(0,1)$} \right\}.
\end{eqnarray*}
It is important
that the $\gamma_i$ can be chosen arbitrarily small whereas
the upper bounds $\hat\gamma_i$ are only introduced for technical reasons;
we assume that $\hat\gamma$ is ``sufficiently'' small.\footnote{
It is fine to ignore this information during first reading.
More information and the formal bound is given
in Remark~\ref{rem-def-region-suit}.2
on Page~\pageref{rem-def-region-suit}.}
Occasionally we omit $\hat\gamma$.
We have already seen
that there is a need to augment the region of uncertainty
(see Issues~7 and~8
in Section~\ref{sec-succ-prob}).
This task is accomplished
by
the mapping $\gamma\mapsto{\text{\rm aug}}(\gamma):=\frac{\gamma}{t}$
for $t\in(0,1)$.
For technical reasons we remark that
$\gamma\in\mathbb{G}ABG$ if ${\text{\rm aug}}(\gamma)\in\mathbb{G}ABG$, and
$\gamma\in\mathbb{G}ALG$ if ${\text{\rm aug}}(\gamma)\in\mathbb{G}ALG$.
We call $R_{f,{\text{\rm aug}}(\gamma)}\label{inline-def-aug-rou}$
the \emph{augmented region of uncertainty for $f$ under} ${\text{\rm aug}}(\gamma)$.
By $\mathbb{G}amma\label{def-Gamma-inline}$
we denote the \emph{set of valid augmented $\gamma$}
and include it in the predicate description.
\begin{definition}\label{def-predi-con-2}
We extend Definition~\ref{def-predi-con} and
call $\text{\rm pr}EDG$ a \emph{predicate description} if:
7. $\mathbb{G}amma=\mathbb{G}ALG$ or $\mathbb{G}amma=\mathbb{G}ABG$
for a sufficiently small $\hat\gamma\in\mathbb{R}_{>0}^k$.
\end{definition}
\subsection{Overview: Classification of the Input}
\label{sec-over-func-argu}
In practice and in the analysis
we deal with real-valued functions whose signs decide predicates.
The arguments of these functions belong to the perturbation area.
In this section
we give an overview of the various characteristics for function arguments
that we have introduced so far.
We strictly distinguish between terms of practice and terms of the analysis.
The diagram of the practice-oriented terms
is shown in Figure~\ref{fig-practice-oriented}.
We consider the discrete perturbation area $U_{\!\delta}\,\mathbb{R}G$.
Controlled perturbation algorithms ${\cal A}_\text{\rm CP}$
are designed with the intent
to avoid the implementation of degenerate cases and
to compute the combinatorially correct solution.
Therefore
the guards in the embedded algorithm ${\cal A}_\text{\rm G}$
must fail for the zero set and
for arguments whose evaluations lead to wrong signs.
The guard is designed such that
the evaluation is definitely fp-safe
if the guard does not fail (light shaded region).
Unfortunately
there is no convenient way
to count (or bound) the number of arguments in $U_{\!\delta}\,\mathbb{R}G$
for which the guard fails.
That is the reason why we perform the analysis with real arithmetic
and introduce further terms.
\begin{figure}
\caption{The diagram of the practice-oriented terms.}
\label{fig-practice-oriented}
\end{figure}
The diagram of the analysis-oriented terms
is shown in Figure~\ref{fig-analysis-oriented}.
We consider the real perturbation area $U_{\!\delta}$.
Instead of the zero set,
we consider the critical set
(see Definition~\ref{def-critical-set}).
The critical set is a superset of the zero set.
Then we choose the region of uncertainty
as a neighborhood of the critical set
(see Definition~\ref{def-region-uncertainty}).
We augment the region of uncertainty to obtain a result
that is also valid for floating-point evaluations.
We intend to prove fp-safety outside of the augmented region of uncertainty
(i.e., on the light shaded region).
Therefore we design an fp-safety bound that holds
outside of the region.
This way we can guarantee that
the evaluation of a guard (in practice)
only fails on a subset of the augmented region (in the analysis).
\begin{figure}
\caption{The diagram of the analysis-oriented terms (shown in black).}
\label{fig-analysis-oriented}
\end{figure}
\subsection{Applicability and Verifiability of Functions}
\label{sec-veri-succ}
We study the circumstances
under which we may \emph{apply} controlled perturbation to a predicate
in practice
and under which we can actually \emph{verify} its application
in theory.
We stress that we talk about a \emph{qualitative} analysis here;
the desired \emph{quantitative} analysis is derived in the following sections.
Furthermore, we want to remark that
\emph{verifiability} is not necessary
for the presentation of the analysis tool box.
However,
the distinction between applicability, verifiability and analyzability
was important for the author during the development of the topic.
We keep it in the presentation
because it may also be helpful to the reader.
In any case, this section can be skipped, and even
assuming equality between verifiability and analyzability
will do no harm.
\subsection*{In practice}
We specify the function property that
the probability of a successful evaluation of $f$
gets arbitrarily close to the certain event
by increasing the precision.
\begin{definition}[applicable]
\label{def-func-app}
Let $\text{\rm pr}ED$ be a predicate description.
We call $f$ \emph{applicable}
if for every $p\in(0,1)$ there is $L_p\in\mathbb{N}$ such that
the guarded evaluation of $f$
is successful
at a randomly perturbed input $x\in \bar{U}_\delta(\bar{x})\mathbb{R}GL$
with probability at least $p$
for every precision $L\in\mathbb{N}$ with $L\ge L_p$
and every $\bar{x}\in A$.
\end{definition}
Applicable functions can safely be used
in guarded algorithms:
Since the precision $L$ is increased (without limit)
after a predicate has failed,
the success probability gets arbitrarily close to 1
for each predicate evaluation.
As a consequence,
the success probability of ${\cal A}_\text{\rm G}$ gets arbitrarily close to 1, too.
\subsection*{In the qualitative analysis}
Unfortunately
we cannot check directly
if $f$ is applicable.
Therefore we introduce two properties
that imply applicability.
\begin{definition}\label{def-verifiable}
Let $\text{\rm pr}EDL$ be a predicate description.
\begin{itemize}
\item
{(region-condition).}
For every $p\in(0,1)$ there is $\gamma\in\mathbb{R}plus^k$ such that
the geometric failure probability is bounded in the way
\begin{eqnarray}
\label{for-def-region-const}
\frac{\mu(R_\gamma(\bar{x}))}{\mu(U_\delta(\bar{x}))} &\le& (1-p)
\end{eqnarray}
for all $\bar{x}\in A$.
We call this condition the \emph{region-condition}.
\item
{(safety-condition).}
\label{def-safety-cond}
There is a fp-safety bound $S_{\inf f}:\mathbb{N}\to\mathbb{R}_{>0}$
on $\bar{U}_\delta(A)$ with\footnote{
Technically,
the assumption $S_{\inf f}(L)\stackrel{!}{>}0$ is no restriction.}
\begin{eqnarray}\label{for-safety-cond}
\lim_{L\to\infty}S_{\inf f}(L) &=& 0.
\end{eqnarray}
We call this condition
the \emph{safety-condition}.
\item
{(verifiable).}
We call $f$
\emph{verifiable on $\bar{U}_\delta(A)$ for controlled perturbation}
if $f$ fulfills the region- and safety-condition.
\end{itemize}
\end{definition}
\noindent
The region-condition guarantees the adjustability of the
volume of the region of uncertainty.
Note that
the region-condition is actually a condition on the critical set.
It states that the critical set is sufficiently ``sparse''.
The safety-condition guarantees the adjustability of the fp-safety bound.
It states that
for every $\varphi>0$
there is a precision ${L_\text{\rm safe}}\in\mathbb{N}$ with the property that
\begin{eqnarray}\label{for-def-safety-const}
S_{\inf f}(L)\le\varphi
\end{eqnarray}
for all $L\in\mathbb{N}$ with $L\ge {L_\text{\rm safe}}$.
We give an example of a verifiable function.
\begin{example}
Let $A\subset\mathbb{R}$ be an interval,
let $\delta\in\mathbb{R}_{>0}$ and
let $f:\bar{U}_\delta(A)\to\mathbb{R}$
be a univariate polynomial\footnote{
We avoid the usual notation $f\in\mathbb{R}[x]$
to emphasize that the domain of $f$ \emph{must be bounded.}}
of degree $d$ with real coefficients, i.e.,
\begin{eqnarray*}
f(x) &=& a_d \cdot x^d + a_{d-1} \cdot x^{d-1} +
\ldots + a_1 \cdot x + a_0.
\end{eqnarray*}
We show that $f$ is verifiable.
Part 1 (region-condition).
Because of the fundamental theorem of algebra
(e.g., see Lamprecht~\cite{L93}),
$f$ has at most $d$ real roots.
Therefore the size of the critical set $C_f$ is bounded by $d$
and the volume of the region of uncertainty
$R_\gamma(\bar{x})$ is upper-bounded by $2d\gamma$.
For a given $p\in(0,1)$
we then choose
\begin{eqnarray*}
\gamma &:=& \frac{(1-p)\delta}{d}
\end{eqnarray*}
which fulfills the region-condition because of
\begin{eqnarray*}
\frac{\mu(R_\gamma(\bar{x}))}{\mu(U_\delta(\bar{x}))}
&\le& \frac{2\gamma d}{2\delta} \;\; = \;\; 1-p.
\end{eqnarray*}
Part 2 (safety-condition).
Corollary~\ref{col-unipolysafety} on page~\pageref{col-unipolysafety}
provides the fp-safety bound
\begin{eqnarray*}
S_{\inf f}(L) &:=& (d+2) \, \max_{1\le i\le d} |a_i| \;\, 2^{{e_\text{\rm max}}(d+1)+1-L}
\end{eqnarray*}
for univariate polynomials.
Since $S_{\inf f}(L)$ converges to zero as $L$ approaches infinity,
the safety-condition is fulfilled.
Therefore $f$ is verifiable.
$\bigcirc$
\end{example}
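For illustration,
the following C++ sketch puts the two ingredients of this example together:
it evaluates $f$ with Horner's scheme and certifies the sign with the
fp-safety bound from Part~2.
The code and its identifiers are ours;
it is only a minimal sketch of how such a guard could be realized,
not the implementation discussed in Section~\ref{sec-gen-cp-imple}.
\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// Horner evaluation of f(x) = a_d x^d + ... + a_1 x + a_0,
// coefficients given as a = {a_0, a_1, ..., a_d}.
double horner(const std::vector<double>& a, double x) {
  double v = 0.0;
  for (std::size_t i = a.size(); i-- > 0; ) v = v * x + a[i];
  return v;
}

// The fp-safety bound of the example above:
//   S(L) = (d+2) * max_{1<=i<=d} |a_i| * 2^{e_max*(d+1)+1-L}.
double safety_bound(const std::vector<double>& a, int e_max, int L) {
  const int d = static_cast<int>(a.size()) - 1;
  double amax = 0.0;
  for (int i = 1; i <= d; ++i) amax = std::max(amax, std::fabs(a[i]));
  return (d + 2) * amax * std::ldexp(1.0, e_max * (d + 1) + 1 - L);
}

// Guard in the sense of the guard definition: returns +1/-1 for a certified
// sign and 0 if the evaluation is not fp-safe for the given precision L.
int guarded_sign(const std::vector<double>& a, double x, int e_max, int L) {
  const double v = horner(a, x);
  if (std::fabs(v) > safety_bound(a, e_max, L))
    return (v > 0.0) ? +1 : -1;
  return 0;
}

int main() {
  std::vector<double> a = {-2.0, 0.0, 1.0};       // f(x) = x^2 - 2, d = 2
  const int e_max = 2, L = 53;                    // arguments bounded by 2^2
  std::printf("sign near sqrt(2): %d\n", guarded_sign(a, 1.4142, e_max, L)); // -1
  std::printf("sign at x = 3:     %d\n", guarded_sign(a, 3.0,    e_max, L)); // +1
  return 0;
}
\end{verbatim}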
We show that,
if a function is verifiable,
it has a positive lower bound on its absolute value
outside of its region of uncertainty.
\begin{lemma}\label{lem-veri-minval}
Let $\text{\rm pr}EDL$ be a predicate description
and let $f$ be verifiable.
Then for every $\gamma\in\mathbb{R}plus^k$, there is $\varphi\in\mathbb{R}_{>0}$ with
\begin{eqnarray}
\label{for-def-veri-minval}
\varphi \le |f(x)|
\end{eqnarray}
for all $x\in \bar{U}_\delta(\bar{x})\setminus R_\gamma(\bar{x})$ and
for all $\bar{x}\in A$.
\end{lemma}
\begin{proof}
We assume the opposite;
that is,
there are $\bar{x}\in A$ and $\gamma\in\mathbb{R}plus^k$ such that
for every $i\in\mathbb{N}$
there is $a_i\in\bar{U}_\delta(\bar{x})\setminus R_\gamma(\bar{x})$
with $|f(a_i)| < \frac{1}{i}$.
Then $(a_i)_{i\in\mathbb{N}}$ is a bounded sequence and,
since $\bar{U}_\delta(\bar{x})\setminus R_\gamma(\bar{x})$ is closed,
it has an accumulation point in this set.
By construction this accumulation point belongs to the critical set
and hence to $R_\gamma(\bar{x})$.
This is a contradiction.
\qed
\end{proof}
Finally we prove that verifiability of functions implies applicability.
\begin{lemma}\label{lem-veri-is-app}
Let $\text{\rm pr}EDL$ be a predicate description
and let $f$ be verifiable.
Then $f$ is applicable.
\end{lemma}
\begin{proof}
Let $p\in(0,1)$.
By the region-condition there is a $\gamma$ such that the volume
of the region of uncertainty satisfies
$\mu(R_\gamma(\bar{x}))\le(1-p)\,\mu(U_\delta(\bar{x}))$ for all $\bar{x}\in A$
(see Definition~\ref{def-verifiable}).
In addition
there is a precision ${L_\text{\rm grid}}$ such that
we may interpret this region as an augmented region $R_{{\text{\rm aug}}(\gamma)}$
(see Theorem~\ref{theo-validation}).
Furthermore,
there must be a positive lower bound on $|f|$ outside of $R_\gamma$
(see Lemma~\ref{lem-veri-minval}).
Moreover,
there must be a precision ${L_\text{\rm safe}}$
for which the fp-safety bound is smaller than the bound on $|f|$.
Be aware
that this implies
that the guarded evaluation of $f$ is successful
at a randomly perturbed input
with probability at least $p$
for every precision $L\ge \max\{{L_\text{\rm safe}},{L_\text{\rm grid}}\}$.
That means,
$f$ is applicable
(see Definition~\ref{def-func-app}).
\qed
\end{proof}
\section[General Analysis Tool Box (Introduction)]{General Analysis Tool Box}
\label{sec-ana-tool-box}
The general analysis tool box
to analyze controlled perturbation algorithms
is presented in the remainder of the paper.
We call the presentation a {tool box}
because its components are strictly separated from each other
and sometimes allow alternative derivations.
In particular,
we present three ways
to analyze functions.
Here we briefly introduce the tool box
and refer to the detailed presentation of its components
in the subsequent sections.
\emph{The decomposition of the analysis into well-separated components
and their precise description is an innovation of this presentation.}
\begin{figure}
\caption{Illustration of the various ways to analyze functions.}
\label{fig-illu-ana-func}
\end{figure}
The tool box is subdivided into components.
At first we explain the \emph{analysis of functions}.
The diagram in
Figure~\ref{fig-illu-ana-func}
illustrates three ways to analyze functions.
We subdivide the function analysis into two stages.
The analysis itself in the second stage requires three necessary bounds,
also known as the \emph{interface},
which are defined in Section~\ref{sec-nec-con-func}:
\emph{region-suitability},
\emph{value-suitability} and
\emph{safety-suitability}.
In Section~\ref{sec-meth-quan-rela}
we introduce the \emph{method of quantified relations}
which represents the actual analysis in the second stage.
In the first stage,
we pay special attention to the derivation of two bounds of the interface
and suggest three different ways to solve the task.
We show in Section~\ref{sec-direct-approach}
how the bounds can be derived
in a \emph{direct approach} from geometric measures.
Furthermore,
we show how to build up the bounds for the desired function
from simpler functions in a
\emph{bottom-up approach}
in Section~\ref{sec-bottom-up}.
Moreover,
we present a derivation of the bounds
by means of a ``sequence of bounds'' in a \emph{top-down approach}
in Section~\ref{sec-top-down}.
Finally,
we show how we can derive the third necessary bound of the interface
with an \emph{error analysis}
in Section~\ref{sec-guards-safetybounds}.
We deal with the \emph{analysis of algorithms}
in Section~\ref{sec-ana-algo}.
The idea is illustrated in
Figure~\ref{fig-illu-ana-algo} on page~\pageref{fig-illu-ana-algo}.
Again we subdivide the analysis into two stages.
The actual analysis of algorithms
is the \emph{method of distributed probability}
which represents the second stage
and is explained in Section~\ref{sec-meth-distri-prob}.
The \emph{interface} between the stages is subdivided into two groups.
Firstly,
there are algorithm prerequisites
(to the left of the dashed line in the figure).
These bounds are defined and derived
in Section~\ref{sec-nec-con-algo}:
\emph{evaluation-suitability},
\emph{predicate-suitability} and
\emph{perturbation-suitability}.
Secondly,
there are predicate prerequisites
(to the right of the dashed line in the figure).
These are determined by means of function analyses.
\section[Justification of the Floating-point Analysis (Validation)]{Justification of Analyses in Real Space}
\label{sec-validation}
This section addresses the problem
of deriving the success probability for floating-point evaluations
from the success probability
that we determine
in real space.
Analyses in real space are without meaning for
controlled perturbation implementations (which use floating-point arithmetic),
unless we determine a reliable relation
between floating-point and real arithmetic.
To achieve this goal,
we introduce an additional constraint on the precision
in Section~\ref{sec-relate-tau-gamma}
and summarize our efforts in the determination of the success probability
in Section~\ref{sec-over-validation}.
\emph{This is the first presentation that
adjusts the precision of the floating-point arithmetic to
the shape of the region of uncertainty.}
\subsection{The Grid Unit Condition}
\label{sec-relate-tau-gamma}
Here we adjust the distance of grid points
(i.e., the grid unit $\tau$)
to the ``width'' of the region of uncertainty $\gamma$.
As we have seen in Issue~8
in Section~\ref{sec-succ-prob},
the grid unit $\tau$ must be sufficiently small
(i.e., $L$ must be sufficiently large)
to derive a reliable probability $\text{\rm pr}(f\mathbb{R}G)$
from $\text{\rm pr}(f)$.
The problem is illustrated in Figure~\ref{fig-volume-location-shape}
on page~\pageref{fig-volume-location-shape}.
We call this additional constraint on $L$
the \emph{grid unit condition}
\begin{eqnarray}\label{for-grid-unit-con}
L &\ge& {L_\text{\rm grid}}
\end{eqnarray}
for a certain ${L_\text{\rm grid}}\in\mathbb{N}$.
Informally,
we demand that $\tau\ll\gamma$.
Here we show
how to derive the threshold ${L_\text{\rm grid}}$ formally.
We refine the concept of the augmented region of uncertainty
which we have mentioned briefly in Section~\ref{sec-succ-prob}.
The discussion of Issue~7
suggests an additive augmentation $\gamma={\text{\rm aug}}(\gamma')$
that fulfills
\begin{eqnarray*}
\tau_0
&\stackrel{(I)}{\le}&
\gamma'_i
\;\; \stackrel{(II)}{\le} \;\;
\gamma_i-\tau_0
\end{eqnarray*}
for all $1\le i\le k$
where $\tau_0$ is an upper bound on the grid unit.
However,
in the analysis
it is easier to handle a multiplicative augmentation
\begin{eqnarray*}
\gamma
&\stackrel{(III)}{:=}&
\frac{\gamma'}{t}
\end{eqnarray*}
for a factor $t\in(0,1)$,
that means,
we define ${\text{\rm aug}}(\gamma') := \frac{\gamma'}{t}$.
We call $\frac{1}{t}\label{def-t-inline}$
the \emph{augmentation factor} for the region of uncertainty.
Together this leads to the implications
\begin{eqnarray*}
\text{$(I)$ and $(III)$}
&\Rightarrow&
\tau_0 \;\;\le\;\; t \cdot \min_{1\le i\le k} \gamma_i \\
\text{$(II)$ and $(III)$}
&\Rightarrow&
\tau_0 \;\;\le\;\; (1-t) \cdot \min_{1\le i\le k} \gamma_i \\
\text{and consequently}
&\Rightarrow&
\tau_0 \;\;\stackrel{(IV)}{\le}\;\; \min\left\{t,1-t\right\} \cdot \min_{1\le i\le k} \gamma_i.
\end{eqnarray*}
Furthermore,
we demand that $\tau_0$ is the largest power of $2$ that satisfies $(IV)$,
which turns $(IV)$ into the equality
\begin{eqnarray*}
\tau_0
&\stackrel{(V)}{=}&
2^{\left\lfloor
\log_2
\left(
\min\left\{t,1-t\right\} \cdot \min_{1\le i\le k} \gamma_i
\right)
\right\rfloor}.
\end{eqnarray*}
Due to
Formula~(\ref{def-tau-inline})
in Definition~\ref{def-grid-points}
we also know that
\begin{eqnarray*}
\tau_0
&\stackrel{(VI)}{=}& 2^{{e_\text{\rm max}}-{L_\text{\rm grid}}-1}.
\end{eqnarray*}
Therefore we can deduce ${L_\text{\rm grid}}$ from $(V)$ and $(VI)$ as
\begin{eqnarray}\label{for-def-lgrid}
{L_\text{\rm grid}}(\gamma) &:=&
{e_\text{\rm max}}
-1
- \left\lfloor
\log_2 \left(
\min\left\{t,1-t\right\} \cdot \min_{1\le i\le k} \gamma_i
\right)
\right\rfloor.
\end{eqnarray}
As an example,
for $t=\frac{1}{2}$
we obtain
${L_\text{\rm grid}}(\gamma) = {e_\text{\rm max}} - \lfloor \log_2 \min_{1\le i\le k} \gamma_i \rfloor$.
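For concreteness, the following Python sketch merely restates
Formula~(\ref{for-def-lgrid}) in executable form;
the function name and its argument layout are illustrative
and not part of the formal development.
\begin{verbatim}
import math

def l_grid(gamma, t, e_max):
    # Threshold L_grid(gamma) of Formula (for-def-lgrid); gamma is the tuple
    # of widths gamma_1, ..., gamma_k, t in (0,1), e_max the exponent bound.
    assert 0.0 < t < 1.0
    bound = min(t, 1.0 - t) * min(gamma)
    return e_max - 1 - math.floor(math.log2(bound))

# For t = 1/2 this reduces to e_max - floor(log2(min(gamma))).
\end{verbatim}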
We refine the notion of a predicate description.
\begin{definition}\label{def-predi-con-3}
We extend Definition~\ref{def-predi-con-2} and
call $\text{\rm pr}EDGt$ a \emph{predicate description} if:
8. $t\in(0,1)$.
\end{definition}
Now we are able to summarize the construction above.
\begin{theorem}\label{theo-validation}
Let $\text{\rm pr}EDGt$ be a predicate description.
Then
\begin{eqnarray}\label{for-grid-impact}
\frac{\mu\left(R_{\gamma}(\bar{x})\right)}{\mu\left(U_\delta(\bar{x})\right)}
\ge
\frac{\left|R_{t\gamma}(\bar{x})\cap\mathbb{G}\right|}{\left|U_\delta(\bar{x})\cap\mathbb{G}\right|}
\end{eqnarray}
for all precisions $L\ge{L_\text{\rm grid}}(\gamma)$
where
${L_\text{\rm grid}}$ is defined in Formula~(\ref{for-def-lgrid}).
\end{theorem}
\begin{remark}
We add some remarks on the grid unit condition.
1.
Inequality~(\ref{for-grid-impact}) guarantees that
the success probability for grid points is at least
the success probability that is derived from the volumes of the regions.
This finally justifies the analysis in real space.
2.
Be aware that
the grid unit condition is a fundamental constraint:
It does not depend on the function that realizes the predicate,
the dimension of the (projected or full) perturbation area,
the perturbation parameter or
the critical set.
The threshold ${L_\text{\rm grid}}$
mainly depends on
the augmentation factor $\frac{1}{t}$ and
$\gamma$.
In particular
we observe that an additional bit of the precision is sufficient
to fulfill the grid unit condition for $\frac{\gamma}{2}$, i.e.
\begin{eqnarray*}
{L_\text{\rm grid}}\left(\frac{\gamma}{2}\right)
&=&
{L_\text{\rm grid}}(\gamma) + 1.
\end{eqnarray*}
3.
We have defined the region of uncertainty $R_f$
by means of axis-parallel boxes $U_\gamma(c)$
for $c\in C_f$
in Definition~\ref{def-region-uncertainty}.
If $R_f$ is defined in a different way,
we must appropriately adjust
the derivation of ${L_\text{\rm grid}}$ in this section.
4.
We observe that the grid unit condition solves
Issue~8 from Section~\ref{sec-succ-prob}.
Now we reconsider the example in Figure~\ref{fig-volume-location-shape}
on page~\pageref{fig-volume-location-shape}.
We observe
that the grid unit in Picture~(a) fulfills the grid unit condition
whereas the condition fails in Pictures~(b) and~(c).
Obviously, $\tau\gg\gamma$ in the latter cases.
$\bigcirc$
\end{remark}
\subsection{Overview: Prerequisites of the Validation}
\label{sec-over-validation}
It is important to see that the analysis
\emph{must} reflect the behavior of
the underlying floating-point implementation
of a controlled perturbation algorithm
to yield a meaningful result.
Solely to achieve this goal,
we have introduced some principles
that we summarize below.
The items are meant to be reminders, not explanations.
\begin{itemize}
\item
We guarantee that the perturbed input lies on the grid $\mathbb{G}$.
\item
We analyze an augmented region of uncertainty.
\item
The region of uncertainty
is a union of axis-parallel boxes and, especially,
intervals in the 1-dimensional case.
There are lower bounds on the measures of the box.
\item
The grid unit condition is fulfilled.
\item
We do not exclude isolated points,
unless we can prove that their exclusion does not change the
floating-point probability.
It is always safe to exclude environments of points.
\item
We analyze $\eta$ runs of ${\cal A}_\text{\rm G}$ at a time
(see Section~\ref{sec-meth-distri-prob}).
\end{itemize}
With these principles at hand
we are able to derive a valid analysis in real space.
\section[Necessary Conditions for the Analysis of Functions (Interface)]
{Necessary Conditions for the Analysis of Functions}
\label{sec-nec-con-func}
The method of quantified relations,
which is introduced in the next section,
actually performs the analysis of real-valued functions.
Here we prepare its applicability.
In Section~\ref{sec-analyzability-func}
we present three necessary conditions:
the \emph{region-, value- and safety-suitability.}
Together these conditions are also sufficient to apply the method.
Because these conditions are deduced in the first stage
of the function analysis
(see Sections~\ref{sec-direct-approach}--\ref{sec-guards-safetybounds})
and are used in the second stage
(see Section~\ref{sec-meth-quan-rela}),
we also refer to them as the interface between the two stages
(see Figure~\ref{fig-illu-interface}).
\emph{This is the first time
that we precisely define the prerequisites of the function analysis.}
The definitions are followed by an example.
In Section~\ref{sec-over-func-prop}
we summarize all function properties.
\begin{figure}
\caption{The interface between the two stages
of the analysis of functions.}
\label{fig-illu-interface}
\end{figure}
\subsection{Analyzability of Functions}
\label{sec-analyzability-func}
Here we define and explain the three function properties
that are necessary for the analysis.
Their associated bounding functions constitute
the interface between the two stages.
Informally, the properties have the following meanings:
\begin{itemize}
\item
We can reduce the volume of the region of uncertainty
to any arbitrarily small value (region-suitability).
\item
There are positive and finite limits on the absolute value of $f$
outside of the region of uncertainty (value-suitability).
\item
We can reduce the rounding error in the floating-point evaluation of $f$
to any arbitrarily small value
(safety-suitability).
\end{itemize}
\subsection*{The region-suitability}
The region-suitability is a geometric condition
on the neighborhood of the critical set.
We demand that
we can adjust the volume of the region of uncertainty
to any arbitrarily small value.
For technical reasons we need an invertible bound.
\begin{definition}[region-suitable]\label{def-region-suit}
Let $\text{\rm pr}EDL$ be a predicate description.
We call $f$ \emph{region-suitable}
if the critical set of $f$ is either empty
or if there is an invertible upper-bounding function\footnote{
Instead of $\nu_f$
we can also use its complement $\chi_f$.
See the following Remark~\ref{rem-def-region-suit}.4 for details.}
\begin{eqnarray*}\label{def-nu-inline}
\nu_f : \mathbb{G}AL \to\mathbb{R}_{>0}
\end{eqnarray*}
on the volume of the region of uncertainty
that has the property:
For every $p\in(0,1)$ there is $\gamma\in\mathbb{G}AL$ such that
\begin{eqnarray}\label{for-defregsuit}
\frac{\mu(R_\gamma(\bar{x}))}{\mu(U_\delta(\bar{x}))}
&\le& \frac{\nu_f(\gamma)}{\mu(U_\delta(\bar{x}))}
\;\;\le\;\; (1-p)
\end{eqnarray}
for all $\bar{x}\in A$.
\end{definition}
\begin{remark}\label{rem-def-region-suit}
We add several remarks on the definition above.
1. Region-suitability is related to the region-condition
in the following way:
The criterion for region-suitability results from the
replacement of $\mu(R_\gamma(\bar{x}))$
in Formula~(\ref{for-def-region-const})
with a function $\nu_f$.
This changes the region-condition in Definition~\ref{def-verifiable}
into a quantitative bound.
2. Of course,
controlled perturbation cannot work
if the region of uncertainty covers the entire perturbation area of $\bar{x}$.
We have said that
we consider $\gamma\in\mathbb{G}ALG$
for a ``sufficiently'' small $\hat\gamma\in\mathbb{R}_{>0}^k$.
That means formally,
we postulate
$\nu_f(\hat\gamma) \ll \mu(U_\delta(\bar{x}))$.
To keep the notation as plain as possible,
we keep this fact in mind
but do not make the condition explicit in our statements.
3. The invertibility of the bounding function $\nu_f$ is essential for
the method of quantified relations
as we see in the proof of Theorem~\ref{theo-quan-rela}.
There it is used to deduce
the parameter $\gamma$
from the volume of the region of uncertainty---with
the exception of an empty critical set
which does not imply any restriction on $\gamma$.
4a. The function
$\nu_f$ provides an upper bound on the volume of the
region of uncertainty within the perturbation area of $\bar{x}$.
Sometimes it is more convenient to consider its complement
\begin{eqnarray}
\label{def-chi-region-suit}
\chi_f(\gamma)
&:=&
\mu\left(U_\delta(\bar{x})\right) \, - \, \nu_f(\gamma).
\end{eqnarray}
The function $\chi_f(\gamma)$
provides a lower bound on the volume of
the \emph{region of provable fp-safe inputs.}
4b. The special case $\nu_f\equiv 0$
corresponds to the special case
$\chi_f\equiv \mu\left(U_\delta(\bar{x})\right)$.
Then the critical set is empty and
there is no region of uncertainty.
This implies that $\varphi_f(\gamma)$
can also be chosen as a constant function
(see the value-suitability below).
4c. Based on Formula~(\ref{def-chi-region-suit}),
we can demand the existence of an invertible function
$\chi_f:\mathbb{G}AL\to\mathbb{R}_{>0}$
instead of $\nu_f$
in the definition of region-suitability.
That means,
either $\chi_f\equiv\mu\left(U_\delta(\bar{x})\right)$
or $\chi_f:\mathbb{G}AL\to\mathbb{R}_{>0}$ is an invertible function.
5. We make the following observations about region-suitability:
(a) If the critical set is finite, $f$ is region-suitable.
(b) If the critical set contains an open set, $f$ cannot be region-suitable.
(c) If the critical set is a set of measure zero,
it does not imply that $f$ is region-suitable.
Be aware that these properties are not equivalent:
If $f$ is region-suitable,
the critical set is a set of measure zero.
But a critical set of measure zero
does not necessarily imply that $f$ is region-suitable:
In topology we learn that $\mathbb{Q}$ is dense\footnote{
Topology: ``$\mathbb{Q}$ is dense in $\mathbb{R}$''
means that $\overline{\mathbb{Q}}=\mathbb{R}$.
For example, see Jänich~\cite[p.~63]{J01}.}
in $\mathbb{R}$;
hence any open $\varepsilon$-neighborhood of $\mathbb{Q}$ equals $\mathbb{R}$.
In set theory we learn that\footnote{
Set Theory:
Cardinalities of (infinite) sets are denoted by $\aleph_i$.
For example, see Deiser~\cite[162ff]{D04}.}
$|\mathbb{Q}|=\aleph_0 < 2^{\aleph_0}=|\mathbb{R}|$,
so $\mathbb{Q}$ is a ``small'' set, in particular one of measure zero;
nevertheless, $f$ cannot be region-suitable if its critical set
is (locally) as dense as $\mathbb{Q}$.
$\bigcirc$
\end{remark}
\subsection*{The inf-value-suitability}
The inf-value-suitability is a condition on the behavior of
the function $f$.
We demand that
there is a positive lower bound on the absolute value of $f$
outside of the region of uncertainty.
\begin{definition}[inf-value-suitable]\label{def-value-suit}
Let $\text{\rm pr}EDL$ be a predicate description.
We call $f$ \emph{(inf-)value-suitable}
if there is a lower-bounding function
\begin{eqnarray*}\label{def-varphi-inline}
\varphi_{\inf f} : \mathbb{G}AL \to\mathbb{R}plus
\end{eqnarray*}
on the absolute value of $f$
that has the property:
For every $\gamma\in\mathbb{G}AL$, we have
\begin{eqnarray}
\varphi_{\inf f}(\gamma)
&\le& |f(x)|
\end{eqnarray}
for all $x\in \bar{U}_\delta(\bar{x})\setminus R_\gamma(\bar{x})$ and
for all $\bar{x}\in A$.
\end{definition}
We extend this definition
by an upper bound on the absolute value of $f$
in Section~\ref{sec-rational-function}
and call this property sup-value-suitability;
until then
we call the inf-value-suitability simply the value-suitability and
also write $\varphi_f$ instead of $\varphi_{\inf f}$.
The criterion for value-suitability results from
the replacement of the constant $\varphi$
in Formula~(\ref{for-def-veri-minval})
with the bounding function $\varphi_f$.
This changes the existence statement of
Lemma~\ref{lem-veri-minval} into a quantitative bound.
\subsection*{The inf-safety-suitability}
The inf-safety-suitability is a condition on the error analysis
of the floating-point evaluation of $f$.
We demand that
we can adjust the rounding error in the evaluation of $f$
to any arbitrarily small value.
For technical reasons we demand an invertible bound.\footnote{
We leave the extension to non-invertible or discontinuous
bounds to the reader;
we do not expect that there is any need in practice.}
\begin{definition}[inf-safety-suitable]\label{def-safety-suit}
Let $\text{\rm pr}ED$ be a predicate description.
We call $f$ \emph{(inf-)safety-suitable}
if there is an injective fp-safety bound $S_{\inf f}(L):\mathbb{N}\to\mathbb{R}_{>0}$
that fulfills the safety-condition
in Formula~(\ref{for-safety-cond})
and if
\begin{eqnarray*}
S_{\inf f}^{-1} : (0,S_{\inf f}(1)] \to \mathbb{R}_{>0}
\end{eqnarray*}
is a strictly monotonically decreasing real continuation of its inverse.
\end{definition}
We extend the definition by sup-safety-suitability
in Section~\ref{sec-rational-function};
until then
we call the inf-safety-suitability simply the safety-suitability.
\subsection*{The analyzability}
Based on the definitions above,
we next define analyzability,
relate it to verifiability and
give an example for the definitions.
\begin{definition}[analyzable]\label{def-analyzable}
We call $f$ \emph{analyzable}
if it is region-, value- and safety-suitable.
\end{definition}
\begin{lemma}
\label{lem-ana-is-veri}
Let $f$ be analyzable. Then $f$ is verifiable.
\end{lemma}
\begin{proof}
If $f$ is analyzable, $f$ is especially region-suitable.
Then the region-condition in Definition~\ref{def-verifiable}
is fulfilled because of the bounding function $\nu_f$.
In addition $f$ must also be safety-suitable.
Then the safety-condition in Definition~\ref{def-verifiable}
is fulfilled because of the bounding function $S_{\inf f}$.
Together both conditions imply that $f$ is verifiable.
\qed
\end{proof}
We support the definitions above with the example
of univariate polynomials.
Because we refer to this example later on,
we formulate it as a lemma.
\begin{lemma}\label{lem-uni-poly-ana}
Let $f$ be the univariate polynomial
\begin{eqnarray}\label{for-unipolyrep}
f(x) &=& a_d \cdot x^d + a_{d-1} \cdot x^{d-1} +
\ldots + a_1 \cdot x + a_0
\end{eqnarray}
of degree $d$
and let $\text{\rm pr}EDL$ be a predicate description for $f$.
Then $f$ is analyzable on $\bar{U}_\delta(A)$
with the following bounding functions
\begin{eqnarray}
\nu_f(\gamma) &:=& 2d\gamma \label{for-unipolynu} \\
\varphi_f(\gamma) &:=& |a_d| \cdot \gamma^d \label{for-unipolyphi} \\
S_{\inf f}(L) &:=& (d+2) \, \max_{1\le i\le d} |a_i| \;\, 2^{{e_\text{\rm max}}(d+1)+1-L}. \nonumber
\end{eqnarray}
\end{lemma}
\begin{proof}
For a moment
we consider the complex continuation of the polynomial,
i.e.\ $f\in\mathbb{C}[z]$.
Because of the fundamental theorem of algebra
(e.g., see Lamprecht~\cite{L93}),
we can factorize $f$ in the way
\begin{eqnarray*}
f(z) &=& a_d \cdot \prod_{i=1}^d (z-\zeta_i)
\end{eqnarray*}
since $f$ has $d$ (not necessarily distinct) roots $\zeta_i\in\mathbb{C}$.
Now let $\gamma\in\mathbb{R}_{>0}$.
Then we can lower bound the absolute value of $f$ by
\begin{eqnarray*}
|f(z)| &\ge& |a_d| \cdot \gamma^d
\end{eqnarray*}
for all $z\in\mathbb{C}$
whose distance to every (complex) root of $f(z)$ is at least $\gamma$.
Naturally, the last estimate
is especially true for real arguments $x$
whose distance to the orthogonal projection of the complex roots $\zeta_i$
onto the real axis is at least $\gamma$.
So we set the critical set to\footnote{
Complex Analysis:
The function $\mathrm{Re}(z)$ maps a complex number $z$ to its real part.
For example, see Fischer et al.~\cite{FL05}.}
$C_f(\bar{x}):=\{ \mathrm{Re}(\zeta_i) : 1\le i\le d\} \cap \bar{U}_\delta(\bar{x})$.
This validates the bound $\varphi_f$
and implies that $f$ is value-suitable.
Furthermore,
the size of $C_f$ is upper-bounded by $d$ for all $\bar{x}\in A$;
hence the region of uncertainty is a union of at most $d$ intervals
of length $2\gamma$ each.
This validates the bound $\nu_f$.
Because $\nu_f$ is invertible,
$f$ is region-suitable.
The bounding function $S_{\inf f}(L)$ is proven
in Corollary~\ref{col-unipolysafety}
in Section~\ref{sec-guards-safetybounds}.
Because $S_{\inf f}(L)$ is invertible, $f$ is also safety-suitable.
As a consequence, $f$ is \emph{analyzable} with the given bounds.
\qed
\end{proof}
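As a small illustration, the bounding functions of the lemma can be
written down directly.
The following Python sketch restates
Formulas~(\ref{for-unipolynu}) and~(\ref{for-unipolyphi})
and the fp-safety bound of the lemma;
the coefficient layout and all names are illustrative assumptions.
\begin{verbatim}
def uni_poly_bounds(coeffs, e_max):
    # Bounding functions of the lemma for f(x) = a_d x^d + ... + a_0,
    # given as coeffs = [a_0, ..., a_d] with d >= 1 (layout is illustrative).
    d = len(coeffs) - 1
    assert d >= 1
    a_d = abs(coeffs[-1])
    a_max = max(abs(a) for a in coeffs[1:])      # max_{1<=i<=d} |a_i|
    nu = lambda gamma: 2 * d * gamma             # volume of region of uncertainty
    phi = lambda gamma: a_d * gamma ** d         # min |f| outside the region
    s_inf = lambda L: (d + 2) * a_max * 2.0 ** (e_max * (d + 1) + 1 - L)
    return nu, phi, s_inf
\end{verbatim}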
We admit that we have chosen a quite simple example.
But a more complex example would have been a waste of energy since
we present
\emph{three general approaches to derive the bounding functions
for the region- and value-suitability}
in Sections~\ref{sec-direct-approach},~\ref{sec-bottom-up}
and~\ref{sec-top-down}.
That means, for more complex examples we use more convenient tools.
A well-known approach to derive the bounding function for the safety-bound
is given in Section~\ref{sec-guards-safetybounds}.
\subsection{Overview: Function Properties}
\label{sec-over-func-prop}
At this point,
we have introduced all properties that are necessary
to precisely characterize functions
in the context of the analysis.
So let us take a short break to see what we have defined and related so far.
We have summarized the most important implications
in Figure~\ref{fig-func-prop-implic}.
\begin{figure}
\caption{The illustration summarizes the implications of
the various function properties that we have defined in this paper.
A function that is region-, value- and safety-suitable
at the same time is also analyzable (see Definition~\ref{def-analyzable}).}
\label{fig-func-prop-implic}
\end{figure}
Controlled perturbation is \emph{applicable}
to a certain class of functions.
But only for a subset of those functions,
we can actually \emph{verify} that
controlled perturbation works in practice---without
the necessity, or even ability, to analyze their performance.
We remember
that no condition on the absolute value is needed
for verifiability
because it is not a quantitative property.
A subset of the verifiable functions
represents the set of \emph{analyzable} functions in a quantitative sense.
For those functions there are \emph{suitable bounds} on
the maximum volume of the region of uncertainty,
on the minimum absolute value outside of this region
and on the maximum rounding error.
In the remaining part of the paper,
we are only interested in the class of analyzable functions.
\section[The Method of Quantified Relations (2nd Stage)]
{The Method of Quantified Relations}
\label{sec-meth-quan-rela}
The method of quantified relations actually performs
the function analysis
in the second stage.
The component and its interface are illustrated in Figure~\ref{fig-ana-qr}.
We introduce the method in Section~\ref{sec-meth-qr-pres}.
Its input consists of three bounding functions
that are associated with the three suitability properties
from the last section.
The applicability does not depend on any other condition.
The method provides general instructions
to relate the three given bounds.
The prime objective is to derive a relation between
the probability of a successful floating-point evaluation
and the precision of the floating-point arithmetic.
More precisely,
the method provides a precision function $L(p)$
or a probability function $p(L)$.
\emph{This is the first presentation
of step-by-step instructions for the second stage
of the function analysis.}
An example of its application follows
in Section~\ref{sec-meth-qr-ex}.
\begin{figure}
\caption{The method of quantified relations
and its interface.}
\label{fig-ana-qr}
\end{figure}
\subsection{Presentation}
\label{sec-meth-qr-pres}
There are no further prerequisites than
the three necessary suitability properties
from the last section.
Therefore we can immediately state the main theorem
of this section whose proof contains the
method of quantified relations.
\begin{theorem}[quantified relations]\label{theo-quan-rela}
Let $\text{\rm pr}EDL$ be a predicate description and
let $f$ be analyzable.
Then there is a method
to determine a precision function $L_f:(0,1)\to\mathbb{N}$
such that the guarded evaluation of $f$
at a randomly perturbed input
is successful with probability at least $p\in(0,1)$
for every precision $L\in\mathbb{N}$ with $L\ge L_f(p)$.
\end{theorem}
\begin{proof}
We show in six steps how we can determine
a precision function $L_f(p)$ which has the property:
If we use a floating-point arithmetic with precision $L_f(p)$
for a given $p\in(0,1)$,
the evaluation of $f(x)\mathbb{R}G$ is guarded
with success probability of at least $p$
for a randomly chosen $x\in\bar{U}_\delta(\bar{x})\mathbb{R}G$
and for any $\bar{x}\in A$.
An overview of the steps is given in Table~\ref{tab-steps-qr}.
Usually we begin with Step~1.
However, there is an exception:
In the special case that $\nu_f\equiv 0$,
the bounding function $\varphi$ can be chosen as a constant;
see Remark~\ref{rem-def-region-suit}.4 for details.
Then we just skip the first four steps and begin with Step~5.
\begin{table}
\centerline{\fbox{
\begin{minipage}{0.9\columnwidth}
Step 1: relate probability with volume of region of uncertainty (define ${\varepsilon_{\nu}}$)\\
Step 2: relate volume of region of uncertainty with distances (define $\gamma$)\\
Step 3: relate distances with floating-point grid (choose $t$)\\
Step 4: relate new distances with minimum absolute value (define $\varphi$)\\
Step 5: relate minimum absolute value with precision (define ${L_\text{\rm safe}}$)\\
Step 6: relate ${L_\text{\rm safe}}$ with ${L_\text{\rm grid}}$ (define ${L_\text{\rm grid}}$ and $L_f$)
\end{minipage}}}
\caption{Instructions for performing the method of quantified relations.}
\label{tab-steps-qr}
\end{table}
{Step 1 (define ${\varepsilon_{\nu}}$).}
We derive
an upper bounding function ${\varepsilon_{\nu}(p)}$ on the volume of the
augmented region of uncertainty
from the success probability $p$
in the way
\begin{eqnarray}\label{for-epsilon-nu}
{\varepsilon_{\nu}(p)}
&:=& (1-p)\cdot\mu(U_\delta) \\
&=& (1-p)\cdot\prod_{i=1}^k (2\delta_i). \nonumber
\end{eqnarray}
That means,
a randomly chosen point $x\in U_\delta(\bar{x})$
lies inside of a given region of volume at most $\varepsilon_\nu(p)$
with probability at most $1-p$,
that is, outside of it with probability at least $p$.
Be aware
that we argue about the \emph{real space} in this step.
{Step 2 (define $\gamma$).}
We know that
there is $\gamma\in\mathbb{R}_{>0}^k$ that fulfills
the region-condition in Definition~\ref{def-verifiable}
because $f$ is verifiable.
Since $f$ is even region-suitable,
we can also determine such $\gamma\in\mathbb{G}AL$
by means of the inverse of the bounding function $\nu_f$.
The existence and invertibility of $\nu_f$
is guaranteed by Definition~\ref{def-region-suit}.
Hence we define the function
\begin{eqnarray}\label{for-gamma-epsilon-nu}
\gamma(p) &:=& \nu_f^{-1}({\varepsilon_{\nu}(p)}) \in\mathbb{G}AL.
\end{eqnarray}
We remember that there is an alternative definition of the
region-suitability which we have mentioned
in Remark~\ref{rem-def-region-suit}.4.
Surely it is also possible to use the bounding function $\chi_f$
instead of $\nu_f$ in the method of quantified relations directly;
the alternative Steps~$1'$ and~$2'$
are introduced in Remark~\ref{rem-meth-quan-rela}.2.
{Step 3 (choose $t$).}
We aim at a result
that is valid for floating-point arithmetic
although we base the analysis on real arithmetic
(see Section~\ref{sec-validation}).
We choose\footnote{
The analysis works for any choice.
However, finding the best choice is an optimization problem.}
$t\in(0,1)$
and define $R_{t\gamma}$ as
the normal sized region of uncertainty.
Due to Theorem~\ref{theo-validation},
the probability that
a random point $x\in U_\delta(\bar{x})\mathbb{R}G$
lies inside of $R_{t\gamma}(\bar{x})\mathbb{R}G$
is at most
the probability that a random point $x\in U_\delta(\bar{x})$
lies inside of $R_\gamma(\bar{x})$.
Consequently,
if a randomly chosen point
lies outside of the augmented region of uncertainty with probability $p$,
it lies outside of the normal sized region of uncertainty with probability
at least $p$.
Our next objective is to guarantee
a floating-point safe evaluation
outside of the
\emph{normal sized} region of uncertainty.
{Step 4 (define $\varphi$).}
Now we want to determine the minimum absolute value
outside of the region of uncertainty
$R_{t\gamma}(\bar{x})$.
We have proven in Lemma~\ref{lem-veri-minval}
that a positive minimum exists.
Because $f$ is value-suitable,
we can use the bounding function $\varphi_f$
for its determination
(see Definition~\ref{def-value-suit}).
That means, we consider
\begin{eqnarray*}
\varphi(p) &:=& \varphi_f(t\cdot\gamma(p)).
\end{eqnarray*}
{Step 5 (define ${L_\text{\rm safe}}$).}
So far we have fixed the region of uncertainty and
have determined the minimum absolute value
outside of this region.
Now we can use the safety-condition
from Definition~\ref{def-verifiable}
to determine a precision ${L_\text{\rm safe}}$
which implies fp-safe evaluations outside of $R_{t\gamma}$.
That means,
we want that Formula~(\ref{for-def-safety-const})
is valid for every $L\in\mathbb{N}$ with $L\ge {L_\text{\rm safe}}$.
Again we use the property that $f$ is analyzable
and use the inverse of the fp-safety bound $S_{\inf f}^{-1}$
in Definition~\ref{def-safety-suit}
to deduce the precision from the minimum absolute value $\varphi(p)$ as
\begin{eqnarray}
{L_\text{\rm safe}}(p)
&=&
\left\lceil
S_{\inf f}^{-1} \left(
\varphi_f \left(
t\cdot\nu_f^{-1}\left({\varepsilon_{\nu}\left(p\right)}\right)
\right)\right)
\right\rceil.
\label{for-def-lsafe}
\end{eqnarray}
{Step 6 (define ${L_\text{\rm grid}}$ and $L_f$).}
We enumerate the component functions of $\nu_f^{-1}$
in the way
$\nu_f^{-1}(\varepsilon)=(\nu_{1}^{-1}(\varepsilon),\ldots,\nu_{k}^{-1}(\varepsilon))$.
Then we deduce the bound ${L_\text{\rm grid}}$ from
Formula~(\ref{for-def-lgrid}) and
Formula~(\ref{for-gamma-epsilon-nu}) in the way
\begin{eqnarray}\label{for-lgrid-in-proof}
{L_\text{\rm grid}}(p) &:=&
{e_\text{\rm max}}
-1
- \left\lfloor
\log_2 \left(
\min\left\{t,1-t\right\}
\cdot \min_{1\le i\le k} \nu_i^{-1}(\varepsilon_\nu(p))
\right)
\right\rfloor.
\end{eqnarray}
Finally we define the \emph{precision function $L_f(p)$}
pointwise as
\begin{eqnarray}\label{def-lf-final}
L_f(p) &:=& \max\left\{{L_\text{\rm safe}}(p), {L_\text{\rm grid}}(p)\right\}.
\end{eqnarray}
Due to the used estimates,
any precision $L\in\mathbb{N}$ with $L\ge L_f(p)$ is a solution.
\qed
\end{proof}
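For readers who prefer an algorithmic view,
the six steps can be condensed as follows.
The Python sketch below assumes that the (inverse) bounding functions
of an analyzable $f$ are available as callables and
that $\nu_f^{-1}$ returns the tuple of widths $\gamma$;
all names are illustrative, and the component-wise details of Step~6
are hidden in the callable for ${L_\text{\rm grid}}$.
\begin{verbatim}
import math

def precision_function(p, mu_U, nu_inv, phi, s_inf_inv, l_grid, t):
    # Sketch of Steps 1-6; nu_inv, phi, s_inf_inv and l_grid stand for the
    # bounding functions of an analyzable f and the grid threshold of
    # Formula (for-def-lgrid).  All names are illustrative.
    eps_nu = (1.0 - p) * mu_U                        # Step 1
    gamma = nu_inv(eps_nu)                           # Step 2 (tuple of widths)
    # Step 3: the factor t in (0,1) relates the augmented and the
    # normal-sized region of uncertainty.
    phi_min = phi(tuple(t * g for g in gamma))       # Step 4
    l_safe = math.ceil(s_inf_inv(phi_min))           # Step 5
    return max(l_safe, l_grid(gamma))                # Step 6
\end{verbatim}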
\begin{remark}\label{rem-meth-quan-rela}
We add some remarks on the theorem above.
1.
It is important to see that
${L_\text{\rm safe}}$ is derived from the \emph{volume} of $R_f$
and is based on the region- and safety condition in
Definition~\ref{def-verifiable}
whereas ${L_\text{\rm grid}}$ is derived from the \emph{narrowest width} of $R_f$
and is based on the grid unit condition in Section~\ref{sec-relate-tau-gamma}.
Of course,
$L_f(p)$ must be large enough to fulfill both constraints.
2.
As we have seen,
we can also use the function $\chi_f$ to define the region-suitability
in Definition~\ref{def-region-suit}.
Therefore we can modify the first two steps of
the method of quantified relations as follows:
\newline
Step $1'$ (define $\varepsilon_{\chi}$).
Instead of Step~1,
we define a bounding function ${\varepsilon_{\chi}(p)}$
on the volume of the complement of $R_f$
from the given success probability $p$.
That means, we replace Formula~(\ref{for-epsilon-nu}) with
\begin{eqnarray*}
{\varepsilon_{\chi}(p)}
&:=& p \cdot \mu(U_\delta) \\
&=& p \cdot \prod_{i=1}^k \left(2\delta_i\right).
\end{eqnarray*}
Step $2'$ (define $\gamma$).
Then we can determine $\gamma(p)$
with the inverse of the bounding function $\chi_f$.
That means, we replace Formula~(\ref{for-gamma-epsilon-nu}) with
\begin{eqnarray*}
\gamma(p) &:=& \chi_f^{-1}({\varepsilon_{\chi}(p)}) \in\mathbb{G}AL
\end{eqnarray*}
which finally changes Formula~(\ref{for-def-lsafe}) into
\begin{eqnarray*}
{L_\text{\rm safe}}(p)
&=&
\left\lceil
S_{\inf f}^{-1} \left(
\varphi_f \left(
t\cdot
\chi_f^{-1}\left(
{\varepsilon_{\chi}\left(p
\right)}\right)
\right)\right)
\right\rceil.
\end{eqnarray*}
We make the observation that
these changes do not affect the correctness of
the method of quantified relations.
3.
It is important to see
that the method of quantified relations
is absolutely independent of the derivation of the bounding functions
which are associated with the necessary suitability properties.
Especially in Step~2,
$\gamma$ is determined solely by means of the function $\nu^{-1}$.
We illustrate this generality with the examples in
Figure~\ref{fig-crit-set-nu}.
\begin{figure}
\caption{Visualization of $\nu^{-1}$ for different regions of uncertainty over the same critical set.}
\label{fig-crit-set-nu}
\end{figure}
The three pictures show different regions of uncertainty for
the \emph{same} critical set
and the \emph{same} volume $\varepsilon_\nu$.
This is because the regions of uncertainty
result from different functions $\nu^{-1}$.
We could say that the function $\nu^{-1}$ ``knows'' how to distribute
the region of uncertainty around the critical set
because of its definition in the first stage
of the analysis. For example:
(a) as local neighborhoods,
(b) as axis-parallel stripes, or
(c) as neighborhoods of local minima of
$f$ (the dashed line).
(We remark that case (c) presumes that $f$ is continuous.)
Naturally, different functions $\nu^{-1}$ lead to different values of $\gamma$
as is illustrated in the pictures.
Be aware that
the method of quantified relations
itself is absolutely independent of the \emph{derivation} of $\nu$
and especially independent of the \emph{approach}
by which $\nu$ is derived.
(We present three different approaches soon.)
4.
If $f$ is analyzable and $\varphi_f$ invertible,
we can also derive the success probability $p$
from a precision $L$.
We observe that the function $\varepsilon_\nu$
in Formula~(\ref{for-epsilon-nu}) is always invertible.
Therefore
we can transform Formula~(\ref{for-def-lsafe})
and~(\ref{for-lgrid-in-proof}) into
\begin{eqnarray*}
{p_\text{\rm inf}}(L)\label{for-pinf-inline}
&:=&
\varepsilon_\nu^{-1}
\left( \nu_f
\left( \frac{1}{t} \cdot \varphi_f^{-1}
\left( S_{\inf f}(L)
\right) \right) \right) \\
{p_\text{\rm grid}}(L)\label{for-pgrid-inline}
&:=&
\varepsilon_\nu^{-1} \left(
\nu_*\left(
\frac{2^{-L+{e_\text{\rm max}}-1}}
{\min\{t,1-t\}}
\right)
\right),
\end{eqnarray*}
respectively,
where $\nu_*^{-1}$
is the least growing component function of $\nu_f^{-1}$
and $\nu_*$ is the inverse of $\nu_*^{-1}$.
This leads to
the (preliminary) \emph{probability function\label{def-prob-func-inline}}
$p_f:\mathbb{N}\to(0,1)$,
\begin{eqnarray*}
p_f(L) &:=& \min \left\{ {p_\text{\rm inf}}(L), {p_\text{\rm grid}}(L) \right\}
\end{eqnarray*}
for parameter $t\in(0,1)$.
We develop the final version of the probability function
in Section~\ref{sec-range-error-analysis}.
Self-evidently
we can also derive appropriate bounding functions for
$\chi$ instead of $\nu$ (see Remark~2).
$\bigcirc$
\end{remark}
\subsection{Example}
\label{sec-meth-qr-ex}
To get familiar with the usage of
the method of quantified relations,
we give a detailed application in the proof
of the following lemma.
\begin{lemma}
Let $f$ be a univariate polynomial of degree $d$
as shown in Formula~(\ref{for-unipolyrep})
and let $\text{\rm pr}EDL$ be a predicate description.
Then we obtain for $f$:
\begin{eqnarray}\label{for-theo-uni-lsafe}
{L_\text{\rm safe}}(p) &:=& \left\lceil -d\log_2(1-p) \; + \; c_u \right\rceil
\end{eqnarray}
where
\begin{eqnarray*}
c_u &:=&
\log_2\frac
{(d+2) \cdot \max_{1\le i\le d}|a_i| \cdot 2^{{e_\text{\rm max}} (d+1) +1}}
{|a_d| \cdot (t\delta/d)^d}.
\end{eqnarray*}
\end{lemma}
\begin{proof}
The polynomial $f$ is analyzable
because of Lemma~\ref{lem-uni-poly-ana}.
Therefore we can determine ${L_\text{\rm safe}}$
with the first 5 steps of the method of quantified relations
(see Theorem~\ref{theo-quan-rela}).\\
Step 1:
Since the perturbation area $U_\delta(\bar{x})$ is an interval of length $2\delta$,
the region of uncertainty has a volume of at most
\begin{eqnarray*}
{\varepsilon_{\nu}(p)} &:=& 2\delta (1-p).
\end{eqnarray*}
Step 2:
Next we deduce $\gamma$ from the inverse of
the function in Formula~(\ref{for-unipolynu}),
that means,
from $\nu_f^{-1}(\varepsilon)=\frac{\varepsilon}{2d}$.
We obtain
\begin{eqnarray*}
\gamma(p) &:=& \nu^{-1}_f({\varepsilon_{\nu}(p)})
\;\; = \;\; \frac{{\varepsilon_{\nu}(p)}}{2d}
\;\; = \;\; \frac{\delta(1-p)}{d}.
\end{eqnarray*}
Step 3:
We choose $t\in(0,1)$.
\newline
Step 4:
Due to Formula~(\ref{for-unipolyphi}),
the absolute value of $f$
outside of the region of uncertainty
is lower-bounded by the function
\begin{eqnarray*}
\varphi(p)
&:=&
|a_d| \cdot (t\cdot\gamma(p))^d \\
&=&
|a_d| \cdot \left( \frac{t\delta(1-p)}{d} \right)^d.
\end{eqnarray*}
Step 5:
A fp-safety bound $S_{\inf f}$
is provided by Corollary~\ref{col-unipolysafety}
in Formula~(\ref{for-sfl-univariate}).
The inverse of this function
at $\varphi(p)$ is
\begin{eqnarray*}
S_{\inf f}^{-1}(\varphi(p))
&=& \log_2
\frac
{(d+2) \cdot \max_{1\le i\le d}|a_i| \cdot 2^{{e_\text{\rm max}} (d+1) +1}}
{\varphi(p)}.
\end{eqnarray*}
Due to Formula~(\ref{for-def-lsafe}),
this leads to
\begin{eqnarray*}
{L_\text{\rm safe}}(p)
&:=& \left\lceil S_{\inf f}^{-1}(\varphi(p)) \right\rceil\\
&=& \left\lceil \log_2\frac
{(d+2) \cdot \max_{1\le i\le d}|a_i| \cdot 2^{{e_\text{\rm max}} (d+1) +1}}
{|a_d| \cdot (t\delta(1-p)/d)^d}
\right\rceil \\
&=& \left\lceil -d\log_2(1-p) \; + \; \log_2\frac
{(d+2) \cdot \max_{1\le i\le d}|a_i| \cdot 2^{{e_\text{\rm max}} (d+1) +1}}
{|a_d| \cdot (t\delta/d)^d}
\right\rceil
\end{eqnarray*}
as was claimed in the lemma.
\qed
\end{proof}
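As a plausibility check, the result of the lemma can be evaluated
numerically.
The following Python sketch restates Formula~(\ref{for-theo-uni-lsafe});
the coefficient layout and all names are illustrative, and
$\delta$, ${e_\text{\rm max}}$ and $t$ are taken from the
(assumed) predicate description.
\begin{verbatim}
import math

def l_safe_uni_poly(p, coeffs, delta, e_max, t=0.5):
    # L_safe(p) of Formula (for-theo-uni-lsafe) for coeffs = [a_0, ..., a_d];
    # delta, e_max and t are the parameters of the predicate description.
    d = len(coeffs) - 1
    a_d = abs(coeffs[-1])
    a_max = max(abs(a) for a in coeffs[1:])
    c_u = math.log2((d + 2) * a_max * 2.0 ** (e_max * (d + 1) + 1)
                    / (a_d * (t * delta / d) ** d))
    return math.ceil(-d * math.log2(1.0 - p) + c_u)
\end{verbatim}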
Since the formula for ${L_\text{\rm safe}}(p)$
in the lemma above looks rather complicated,
we interpret it here.
We observe that $c_u$ is a constant
because it is defined only by constants:
The degree $d$ and the coefficients $a_i$ are defined by $f$,
the parameters ${e_\text{\rm max}}$ and $\delta$ are given by the input,
and $t$ is a fixed choice.
We make the asymptotic behavior
${L_\text{\rm safe}}(p) = O\left(-d\log_2(1-p)\right)$
for $p\to1$
explicit in the following corollary:
We show that
$d$ additional bits of the precision are sufficient
to halve the failure probability.
\begin{corollary}
Let $f$ be a univariate polynomial of degree $d$
and let ${L_\text{\rm safe}}:(0,1)\to\mathbb{N}$
be the precision function in Formula~(\ref{for-theo-uni-lsafe}).
Then
\begin{eqnarray*}
{L_\text{\rm safe}}\left(\frac{1+p}{2}\right) &=& {L_\text{\rm safe}}(p)+d.
\end{eqnarray*}
\end{corollary}
\begin{proof}
Due to Formula~(\ref{for-theo-uni-lsafe}) we have:
\begin{eqnarray*}
{L_\text{\rm safe}}\left(\frac{1+p}{2}\right) &=&
\left\lceil -d \log_2 \left( 1- \left( \frac{1+p}{2} \right) \right) + c_u \right\rceil \\[0.5ex]
&=&
\left\lceil -d \log_2 \left( \frac{1-p}{2} \right) + c_u \right\rceil \\[0.5ex]
&=&
\left\lceil -d \left( \log_2 (1-p) - \log_2 (2) \right) + c_u \right\rceil \\
&=&
\left\lceil -d \log_2 (1-p) + d + c_u \right\rceil \\
&=&
\left\lceil -d \log_2 (1-p) + c_u \right\rceil + d \\
&=&
{L_\text{\rm safe}}(p) + d
\end{eqnarray*}
Because $d$ is a natural number,
we can pull it out of the ceiling brackets.
\qed
\end{proof}
\section[The Direct Approach Using Estimates (1st Stage, rv-suit)]{The Direct Approach Using Estimates}
\label{sec-direct-approach}
This approach derives the bounding functions
which are associated with region- and value-suitability
in the first stage of the analysis
(see Figure~\ref{fig-ana-direct}).
It is partially based on
the geometric interpretation of the function $f$ at hand.
More precisely, it presumes that the critical set of $f$
is embedded in geometric objects
for which we know simple mathematical descriptions
(e.g., lines, circles, etc.).
The derivation of bounds from geometric interpretations
is also presented in~\cite{MOS06,MOS11}.
In Section~\ref{sec-direct-present}
we explain the derivation of the bounds.
In Section~\ref{sec-direct-example}
we show some examples.
\begin{figure}
\caption{The direct approach and its interface.}
\label{fig-ana-direct}
\end{figure}
\subsection{Presentation}
\label{sec-direct-present}
The steps of the direct approach are summarized
in Table~\ref{tab-steps-direct}.
To facilitate the presentation of the geometric interpretation,
we assume that the function $f$ is continuous everywhere
and that we do not allow any exceptional points.
Then the critical set of $f$ equals the zero set of $f$.
Hence the region of uncertainty is an environment of the zero set
in this case.
We define the region of uncertainty $R_\gamma$
as it is defined in Formula~(\ref{for-region-uncert}).
In the first step,
we choose $\mathbb{G}AL$
which is the domain of $\gamma$.
Or in other words, we choose $\hat\gamma$.
Sometimes,
certain choices of $\mathbb{G}AL$
may be more useful than others, e.g.,
cubic environments where $\hat\gamma_i=\hat\gamma_j$
for all $1\le i,j\le k$.
Now assume that we have chosen $\mathbb{G}AL$.
In the second step,
we estimate
(an upper bound on)
the volume of the region of uncertainty $R_{\gamma}$
by a function $\nu_f(\gamma)$ for $\gamma\in\mathbb{G}AL$.
In the direct approach,
we hope that a geometric interpretation of the zero set
supports the estimation.
For that purpose it would be helpful
if the region of uncertainty is embedded in a line, a circle,
or any other geometric structure that we can easily describe mathematically.
Assume further that we have fixed the bound $\nu_f$.
In the third step,
we need to determine a function $\varphi_f(\gamma)$
on the minimum absolute value of $f$
outside of $R_\gamma$.
This is the most difficult step in the direct approach:
\emph{Although geometric interpretation may be helpful in the second step,
mathematical considerations are necessary to derive $\varphi_f$.}
Therefore we hope that $\varphi_f$ is ``obvious'' enough
to get guessed.
If there is no chance to guess $\varphi_f$,
we need to try one of the alternative approaches
of the next sections, that means,
the bottom-up approach or the top-down approach.
\begin{table}[h]
\centerline{\fbox{
\begin{minipage}{0.9\columnwidth}
Step 1: choose the set $\mathbb{G}AL$ (define $\hat\gamma$) \\
Step 2: estimate $\nu_f(\gamma)$ in dependence on $\mathbb{G}AL$
(define $\nu_f$) \\
Step 3: estimate $\varphi_f(\gamma)$ in dependence on $\nu_f(\gamma)$
(define $\varphi_f$)
\end{minipage}}}
\caption{Instructions for performing the direct approach.}
\label{tab-steps-direct}
\end{table}
\subsection{Examples}
\label{sec-direct-example}
We present two examples that
use the direct approach to derive the bounds
for the region-value-suitability.
\begin{example}
\newcommand{\bar{U}X}{{u_x}}
\newcommand{{v_x}}{{v_x}}
\newcommand{\mathbb{Q}X}{{q_x}}
\newcommand{\bar{U}Y}{{u_y}}
\newcommand{{v_y}}{{v_y}}
\newcommand{\mathbb{Q}Y}{{q_y}}
\newcommand{\mathbb{G}UX}{\gamma_{u_x}}
\newcommand{\mathbb{G}VX}{\gamma_{v_x}}
\newcommand{\mathbb{G}QX}{\gamma_{q_x}}
\newcommand{\mathbb{G}UY}{\gamma_{u_y}}
\newcommand{\mathbb{G}VY}{\gamma_{v_y}}
\newcommand{\mathbb{G}QY}{\gamma_{q_y}}
We consider the $\text{\rm in\_box}$ predicate in the plane.
Let $u$ and $v$ be two opposite vertices of the box
and let $q$ be the query point.
Then $\text{\rm in\_box}(u,v,q)$ is decided by the sign of the function
\begin{eqnarray}
f (u,v,q)
&=&
f (\bar{U}X,\bar{U}Y, {v_x},{v_y}, \mathbb{Q}X,\mathbb{Q}Y) \nonumber \\
&:=&
\max \left\{
\left(\mathbb{Q}X-\bar{U}X\right)\left(\mathbb{Q}X-{v_x}\right), \;
\left(\mathbb{Q}Y-\bar{U}Y\right)\left(\mathbb{Q}Y-{v_y}\right)
\right\}. \label{for-inbox-uvq}
\end{eqnarray}
The function
is negative if $q$ lies inside of the box,
it is zero if $q$ lies on the boundary,
and it is positive if $q$ lies outside of the box.
Step~1:
We choose an arbitrary
$\hat\gamma=(\hat\gamma_\bar{U}X,\hat\gamma_\bar{U}Y,\hat\gamma_{v_x},\hat\gamma_{v_y},\hat\gamma_\mathbb{Q}X,\hat\gamma_\mathbb{Q}Y)\in\mathbb{R}_{>0}^6$.
Step~2:
The box is defined by $u$ and $v$.
This fact is true
independent of the choices for $\mathbb{G}UX$, $\mathbb{G}UY$, $\mathbb{G}VX$ and $\mathbb{G}VY$.
We observe that
the largest box inside of the perturbation area $U_\delta$
is the boundary of $U_\delta$ itself.
This observation leads to the upper bound
\begin{eqnarray*}
\nu_f(\gamma)
&=&
\nu_f (\mathbb{G}UX,\mathbb{G}UY, \mathbb{G}VX,\mathbb{G}VY, \mathbb{G}QX,\mathbb{G}QY) \\
&:=&
4 \left( \mathbb{G}QX \delta_y \;+\; \mathbb{G}QY \delta_x \right)
\end{eqnarray*}
on the volume of the region of uncertainty
if we take the horizontal distance $\mathbb{G}QX$
and the vertical distance $\mathbb{G}QY$ from the boundary of the box into account.
That means,
$\nu_f$ depends on the distances $\mathbb{G}QX$ and $\mathbb{G}QY$
of the query point $q$
from the zero set.
Step 3:
The evaluation of Formula~(\ref{for-inbox-uvq})
at query points where
$\mathbb{Q}X$ has distance $\mathbb{G}QX$ from $\bar{U}X$ or ${v_x}$, and
$\mathbb{Q}Y$ has distance $\mathbb{G}QY$ from $\bar{U}Y$ or ${v_y}$, leads to
\begin{eqnarray*}
\varphi_f(\gamma)
&:=&
\min\left\{
\left| \gamma^2_\mathbb{Q}X - \mathbb{G}QX \cdot | {v_x}-\bar{U}X |\right|, \;
\left| \gamma^2_\mathbb{Q}Y - \mathbb{G}QY \cdot | {v_y}-\bar{U}Y |\right|
\right\}.
\end{eqnarray*}
The derived bounds fulfill the desired properties.
$\bigcirc$
\end{example}
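A minimal Python sketch of the predicate function and the derived value
bound may help to check the formulas above;
all names are illustrative, and the box is assumed to be given by the
opposite corners $u$ and $v$.
\begin{verbatim}
def in_box_f(u, v, q):
    # Sign of f decides in_box(u, v, q); see Formula (for-inbox-uvq).
    return max((q[0] - u[0]) * (q[0] - v[0]),
               (q[1] - u[1]) * (q[1] - v[1]))

def phi_in_box(gamma_qx, gamma_qy, u, v):
    # Value bound phi_f of Step 3 above.
    return min(abs(gamma_qx ** 2 - gamma_qx * abs(v[0] - u[0])),
               abs(gamma_qy ** 2 - gamma_qy * abs(v[1] - u[1])))
\end{verbatim}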
\begin{example}
\newcommand{\mathbb{C}X}{{c_x}}
\newcommand{\mathbb{C}Y}{{c_y}}
\newcommand{\mathbb{Q}X}{{q_x}}
\newcommand{\mathbb{Q}Y}{{q_y}}
\newcommand{\mathbb{G}CX}{\gamma_{c_x}}
\newcommand{\mathbb{G}CY}{\gamma_{c_y}}
\newcommand{\mathbb{G}R}{\gamma_{r}}
\newcommand{\mathbb{G}QX}{\gamma_{q_x}}
\newcommand{\mathbb{G}QY}{\gamma_{q_y}}
We consider the $\text{\rm in\_circle}$ predicate in the plane.
Let $c$ be the center of the circle,
let $r>0$ be its radius,
and let $q$ be the query point.
Then $\text{\rm in\_circle}(c,r,q)$ is decided by the sign of the function
\begin{eqnarray}
f(c,r,q)
&=& f(\mathbb{C}X,\mathbb{C}Y,r,\mathbb{Q}X,\mathbb{Q}Y) \nonumber \\
&:=&
\left(\mathbb{Q}X-\mathbb{C}X\right)^2 \;+\;
\left(\mathbb{Q}Y-\mathbb{C}Y\right)^2 \;-\;
r^2 \label{for-incirc-crq}
\end{eqnarray}
The function is negative if $q$ lies inside of the circle,
it is zero if $q$ lies on the circle,
and it is positive if $q$ lies outside of the circle.
Step~1:
We choose
$\hat\gamma=(\hat\gamma_\mathbb{C}X,\hat\gamma_\mathbb{C}Y,\hat\gamma_r,\hat\gamma_\mathbb{Q}X,\hat\gamma_\mathbb{Q}Y)\in\mathbb{R}_{>0}^5$
where $\hat\gamma_\mathbb{Q}X=\hat\gamma_\mathbb{Q}Y$.
In addition,
we choose $\hat\gamma_r< r$ for simplicity.
Step~2:
The largest circle that fits into the perturbation area $U_\delta$ has radius
$\min\left\{\delta_x,\delta_y\right\}$.
If we intersect any larger circle with $U_\delta$,
the total length of the circular arcs inside of $U_\delta$
cannot be larger than
$2\pi \cdot \min\left\{\delta_x,\delta_y\right\}$.
This bounds the total length of the zero set.
Now we define the region of uncertainty
by spherical environments:
The region of uncertainty is the union of open discs
of radius $\mathbb{G}QX$ which are located at the zeros.
Then the width of the region of uncertainty
is given by the diameter of the discs, i.e., by $2\mathbb{G}QX$.
As a consequence,
\begin{eqnarray*}
\nu_f(\gamma)
&=& \nu_f(\mathbb{G}CX,\mathbb{G}CY,\mathbb{G}R,\mathbb{G}QX,\mathbb{G}QY) \\
&:=&
4\pi\mathbb{G}QX \cdot \min\left\{\delta_x,\delta_y\right\}
\end{eqnarray*}
is an upper bound on the volume of $R_\gamma$.
That means,
$\nu_f$ depends on the distance $\mathbb{G}QX$ of the query point $q$
from the zero set.
Step~3:
The absolute value of Formula~(\ref{for-incirc-crq}) is minimal
if the query point $q$ lies inside of the circle
and has distance $\mathbb{G}QX$ from it.
This leads to
\begin{eqnarray*}
\varphi_f(\gamma)
&:=&
\left|
\left( r \;-\; \mathbb{G}QX \right)^2 \;-\; r^2
\right| \\
&=&
\mathbb{G}QX \left( 2r - \mathbb{G}QX \right).
\end{eqnarray*}
The derived bounds fulfill the desired properties.
$\bigcirc$
\end{example}
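Again, a short Python sketch restates the predicate function and the two
derived bounds; all names are illustrative, and the value bound presumes
$\gamma_{q_x} < 2r$.
\begin{verbatim}
import math

def in_circle_f(c, r, q):
    # Sign of f decides in_circle(c, r, q); see Formula (for-incirc-crq).
    return (q[0] - c[0]) ** 2 + (q[1] - c[1]) ** 2 - r ** 2

def nu_in_circle(gamma_qx, delta_x, delta_y):
    # Region bound nu_f of Step 2 above.
    return 4.0 * math.pi * gamma_qx * min(delta_x, delta_y)

def phi_in_circle(gamma_qx, r):
    # Value bound phi_f of Step 3 above, assuming gamma_qx < 2r.
    return gamma_qx * (2.0 * r - gamma_qx)
\end{verbatim}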
\section[The Bottom-up Approach Using Calculation Rules (1st Stage, rv-suit)]
{The Bottom-up Approach Using Calculation Rules}
\label{sec-bottom-up}
In the first stage of the analysis,
this approach derives the bounding functions
which are associated with the region- and value-suitability
(see Figure~\ref{fig-ana-bu}).
We can apply this approach to certain composed functions.
That means,
if $f$ is composed by $g$ and $h$,
we can derive the bounds for $f$ from the bounds for $g$ and $h$
under certain conditions.
We present some mathematical constructs
which preserve the region- and value-suitability
and introduce useful calculation rules for their bounds.
Namely we introduce
the \emph{lower-bounding rule} in Section~\ref{sec-rule-lower-bound},
the \emph{product rule} in Section~\ref{sec-rule-product} and
the \emph{min rule} and \emph{max rule} in Section~\ref{sec-rule-min-max}.
We point to a general way to formulate rules
in Section~\ref{sec-rule-general}.
The list of rules is far from complete.
Nevertheless,
they are already sufficient to
derive the bounding functions
for multivariate polynomials
as we show in Section~\ref{sec-multivariate-poly}.
\emph{With the bottom-up approach we present an entirely new approach
to derive the bounding functions for the region-suitability and
value-suitability.
Furthermore we present a new way to analyze multivariate polynomials.}
\begin{figure}
\caption{The bottom-up approach and its interface.}
\label{fig-ana-bu}
\end{figure}
\subsection{Lower-bounding Rule}
\label{sec-rule-lower-bound}
Our first rule states
that every function is region-value-suitable
if there is a lower bounding function which is region-value-suitable.
Note that there are no further restrictions on $f$.
\begin{theorem}[lower bound]\label{theo-lower-bound}
Let $\text{\rm pr}EDL$ be a predicate description.
If there is a region-value-suitable function
$g:\bar{U}_\delta(A)\to\mathbb{R}$ and $c\in\mathbb{R}_{>0}$ where
\begin{eqnarray}\label{for-ruleboundfun}
|f(x)| &\ge& c \, |g(x)|,
\end{eqnarray}
then $f$ is also region-value-suitable
with the following bounding functions:
\begin{eqnarray*}
\nu_f(\gamma) &:=& \nu_g(\gamma) \\
\varphi_f(\gamma) &:=& c\varphi_g(\gamma).
\end{eqnarray*}
If $f$ is in addition safety-suitable, $f$ is analyzable.
\end{theorem}
\begin{proof}
Part 1 (region-suitable).
Let $(a_i)_{i\in\mathbb{N}}$ be a sequence in the set $U_\delta(\bar{x})$ with
$\lim_{i\to\infty} f(a_i)=0$.
Then Formula~(\ref{for-ruleboundfun}) implies that
$\lim_{i\to\infty} g(a_i)=0$.
That means,
critical points of $f$ are critical points of $g$.
Therefore we set $C_f(\bar{x}) :=C_g(\bar{x})$.
As a consequence the region bound $\nu_f(\gamma) := \nu_g(\gamma)$
is sufficient for the region-suitability of $f$.
Part 2 (value-suitable).
Because we set $C_f(\bar{x}) = C_g(\bar{x})$,
we have $R_f(\bar{x}) = R_g(\bar{x})$.
Due to Formula~(\ref{for-ruleboundfun}),
the minimum absolute value of $f$
outside of the region of uncertainty $R_f(\bar{x})$
is bounded by the minimum absolute value of $g$
outside of the (same) region of uncertainty $R_g(\bar{x})$.
Hence the bound
$\varphi_f(\gamma) = c\varphi_g(\gamma)$
is sufficient for the value-suitability of $f$.
Part 3 (analyzable). Trivial.
\qed
\end{proof}
\subsection{Product Rule}
\label{sec-rule-product}
The next rule states
that the product of region-value-suitable functions
is also region-value-suitable.
Furthermore,
we show how to derive appropriate bounds.
\begin{theorem}[product]\label{theo-product}
Let $(f, k, A_g\times A_{gh}\times A_h, \delta, {e_\text{\rm max}}, \mathbb{G}AL, t)$
be a predicate description where
$A_g\subset\mathbb{R}^j$,
$A_{gh}\subset\mathbb{R}^{\ell-j}$ and
$A_h\subset\mathbb{R}^{k-\ell}$
for
$j\in\mathbb{N}_{0}$ and $\ell,k\in\mathbb{N}$ with $j\le \ell\le k$.
If there are two region-value-suitable functions
\begin{eqnarray*}
g&:&\bar{U}_{(\delta_1,\ldots,\delta_\ell)}(A_g\times A_{gh})\to\mathbb{R} \\
h&:&\bar{U}_{(\delta_{j+1},\ldots,\delta_k)}(A_{gh}\times A_h)\to\mathbb{R}
\end{eqnarray*}
such that
\begin{eqnarray*}
f(x_1,\ldots,x_k) &=& g(x_1,\ldots,x_\ell) \cdot h(x_{j+1},\ldots,x_k),
\end{eqnarray*}
then $f$ is also
region-value-suitable with the following bounding functions:
\begin{eqnarray}
\varphi_{f}(\gamma) &:=&
\varphi_g(\gamma_1,\ldots,\gamma_\ell) \cdot
\varphi_h(\gamma_{j+1},\ldots,\gamma_k) \label{for-phi-prod-rule} \\
\nu_{f}(\gamma)
&:=& \min
\left\{ \prod_{i=1}^k (2\delta_i), \right. \nonumber \\
&&\qquad\left.\label{for-nug-plus-nuh}
\nu_g(\gamma_1,\ldots,\gamma_\ell) \!\!\! \prod_{i=\ell+1}^k \!\!\! (2\delta_i)
\, + \,
\nu_h(\gamma_{j+1},\ldots,\gamma_k) \prod_{i=1}^j (2\delta_i)\right\}.
\end{eqnarray}
Furthermore, if $j=\ell$,
we can replace
the last equation
by the tighter bound
\begin{eqnarray}\label{for-beta-product-rule}
\chi_{f}(\gamma) &:=&
\chi_g(\gamma_1,\ldots,\gamma_j) \cdot
\chi_h(\gamma_{j+1},\ldots,\gamma_k).
\end{eqnarray}
If $f$ is in addition safety-suitable, $f$ is analyzable
(independent of $j=\ell$).
\end{theorem}
\begin{proof}
Part 1 (value-suitable).
Let $x\in U_\delta(\bar{x})$ such that
$(x_1,\ldots,x_\ell)$
does not lie in the region of uncertainty\footnote{
To avoid confusion,
we occasionally add the function name to the index of
the region of uncertainty or the perturbation area
within the proof, e.g.\ $R_{f,\gamma}$ and $U_{f,\delta}$.}
of $g$, that means
\begin{eqnarray}\label{for-g-region-condition}
(x_1,\ldots,x_\ell)
\not\in
R_{g,(\gamma_1,\ldots,\gamma_\ell)}(\bar{x}_1,\ldots,\bar{x}_\ell),
\end{eqnarray}
and that $(x_{j+1},\ldots,x_k)$ does not lie in the region of uncertainty
of $h$, that means
\begin{eqnarray}\label{for-h-region-condition}
(x_{j+1},\ldots,x_k)
\not\in
R_{h,(\gamma_{j+1},\ldots,\gamma_k)}(\bar{x}_{j+1},\ldots,\bar{x}_k).
\end{eqnarray}
Because $g$ and $h$ are value-suitable,
we obtain:
\begin{eqnarray*}
|f(x)| &=& |g(x_1,\ldots,x_\ell)| \cdot |h(x_{j+1},\ldots,x_k)| \\
&\ge& \varphi_g(\gamma_1,\ldots,\gamma_\ell) \cdot
\varphi_h(\gamma_{j+1},\ldots,\gamma_k) \\
&=& \varphi_f(\gamma)
\end{eqnarray*}
on the absolute value of $f$.
Part 2 (region-suitable).
Because of the argumentation above,
we must construct
the region of uncertainty $R_f$
such that $x\in\mathbb{R}^k$ lies outside of $R_f$
only if the conditions in Formula~(\ref{for-g-region-condition})
and~(\ref{for-h-region-condition}) are fulfilled.
Case $j=\ell$.
Then the arguments of $g$ and $h$ are disjoint.
This case is illustrated in Figure~\ref{fig-region-ag-ah}.
\begin{figure}
\caption{Case $j=\ell$:
The (dark shaded) complement of $R_f$
is the Cartesian product of the complement of $R_g$
and the complement of $R_h$.}
\label{fig-region-ag-ah}
\end{figure}
We observe that for each point $(x_1,\ldots,x_j)$ outside of $R_g$
and each point $(x_{\ell+1},\ldots,x_k)$ outside of $R_h$
their concatenation $x$ lies outside of $R_f$.
Therefore we determine the volume of the complement of $R_f$
inside of the perturbation area as
\begin{eqnarray*}
\mu\left(U_{f,\delta}(\bar{x})\setminus R_{f,\gamma}(\bar{x})\right)
&=&
\mu\left(U_{g,(\delta_1,\ldots,\delta_j)}(\bar{x}_1,\ldots,\bar{x}_j) \right.\\
&&\qquad\qquad \left.\setminus R_{g,(\gamma_1,\ldots,\gamma_j)}(\bar{x}_1,\ldots,\bar{x}_j)\right) \\
&&
\cdot\;\mu\left(U_{h,(\delta_{\ell+1},\ldots,\delta_k)}(\bar{x}_{\ell+1},\ldots,\bar{x}_k) \right.\\
&&\qquad\qquad \left.\setminus R_{h,(\gamma_{\ell+1},\ldots,\gamma_k)}(\bar{x}_{\ell+1},\ldots,\bar{x}_k)\right).
\end{eqnarray*}
As a consequence Formula~(\ref{for-beta-product-rule}) is true.
Case $j<\ell$.
In contrast to the discussion above,
$g$ and $h$
share the arguments $x_{j+1},\ldots,x_\ell$.
This case is illustrated in Figure~\ref{fig-region-ag-agh-ah}.
\begin{figure}
\caption{Case $j<\ell$:
The (light shaded) region of uncertainty $R_f$ is the union of
two Cartesian products.}
\label{fig-region-ag-agh-ah}
\end{figure}
We denote the projection onto the first $j$ coordinates
(respectively, onto the last $k-\ell$ coordinates)
by $\pi_{\le j}$ (respectively, $\pi_{> \ell}$).
In this case,
Formula~(\ref{for-beta-product-rule})
need not hold.
That is why
we define $R_f$ as
\begin{eqnarray*}
R_{f,\gamma}(\bar{x})
&:=&
R_{g,(\gamma_1,\ldots,\gamma_\ell)}(\bar{x}_1,\ldots,\bar{x}_\ell)
\times
\pi_{>\ell} (\bar{U}_\delta(\bar{x})) \\
&& \cup \;
\pi_{\le j} (\bar{U}_\delta(\bar{x}))
\times
R_{h,(\gamma_{j+1},\ldots,\gamma_k)}(\bar{x}_{j+1},\ldots,\bar{x}_k).
\end{eqnarray*}
Now we can upper-bound the volume of $R_f$
by means of $\nu_g$ and $\nu_h$
which leads immediately to the sum in the last line of
Formula~(\ref{for-nug-plus-nuh}).
Of course,
the volume of the region of uncertainty is
bounded by the volume of the perturbation area
which justifies the first line of Formula~(\ref{for-nug-plus-nuh}).
This finishes the proof.
\qed
\end{proof}
\subsection{Min Rule, Max Rule}
\label{sec-rule-min-max}
The next two rules state that the minimum and maximum
of finitely many region-value-suitable functions
are also region-value-suitable.
Furthermore,
we show how to derive appropriate bounds.
\begin{theorem}[min, max]\label{theo-ruleminmax}
Let $g$ and $h$ be two region-value-suitable functions
as defined in Theorem~\ref{theo-product}.
Then the functions
\begin{eqnarray*}
f_\text{\rm min},f_\text{\rm max}&:&\bar{U}_{\delta}(A_g \times A_{gh} \times A_h)\to\mathbb{R},\\
f_\text{\rm min}(x_1,\ldots,x_k) &:=& \min \{g(x_1,\ldots,x_\ell), h(x_{j+1},\ldots,x_k)\}\\
f_\text{\rm max}(x_1,\ldots,x_k) &:=& \max \{g(x_1,\ldots,x_\ell), h(x_{j+1},\ldots,x_k)\}
\end{eqnarray*}
are region-value-suitable
with bounds $\varphi_{f_\text{\rm min}}$ and $\nu_{f_\text{\rm min}}$ for $f_\text{\rm min}$
and bounds $\varphi_{f_\text{\rm max}}$ and $\nu_{f_\text{\rm max}}$ for $f_\text{\rm max}$
where
\begin{eqnarray}
\nonumber
\varphi_{f_\text{\rm min}}(\gamma) &:=&
\min \{ \varphi_g(\gamma_1,\ldots,\gamma_\ell),
\varphi_h(\gamma_{j+1},\ldots,\gamma_k)\} \\
\label{for-phi-max-rule}
\varphi_{f_\text{\rm max}}(\gamma) &:=&
\max \{ \varphi_g(\gamma_1,\ldots,\gamma_\ell),
\varphi_h(\gamma_{j+1},\ldots,\gamma_k)\}\\%
\nu_{f_\text{\rm min}}(\gamma) := \nu_{f_\text{\rm max}}(\gamma)
&:=& \nu_f(\gamma)
\text{ (see Formula~(\ref{for-nug-plus-nuh}))}. \nonumber
\end{eqnarray}
Furthermore, if $j=\ell$,
we can replace
$\nu_{f_\text{\rm min}}(\gamma)$ and $\nu_{f_\text{\rm max}}(\gamma)$
by the tighter bounds
\begin{eqnarray*}
\chi_{f_\text{\rm min}}(\gamma) := \chi_{f_\text{\rm max}}(\gamma)
&:=&
\chi_g(\gamma_1,\ldots,\gamma_j) \cdot
\chi_h(\gamma_{j+1},\ldots,\gamma_k).
\end{eqnarray*}
If $f_\text{\rm min}$ (respectively $f_\text{\rm max}$) is in addition safety-suitable,
it is also analyzable
(independently of whether $j=\ell$).
\end{theorem}
\begin{proof}
The line of argumentation follows exactly
the proof of Theorem~\ref{theo-product}.
\qed
\end{proof}
\subsection{General Rule}
\label{sec-rule-general}
We do not claim that the list of rules is complete.
On the contrary,
we suggest that the approach may be extended by further rules.
We emphasize that the bottom-up approach is constructive:
We build new region-value-suitable functions from
already proven region-value-suitable functions.
The argumentation always follows the proof of the product rule,
that means,
the compound of $g$ and $h$ inherits the desired property from $g$ and $h$:
(a) \emph{outside of the union} of the regions of uncertainty
for \emph{shared} arguments,
and (b) \emph{inside of the Cartesian product of the complement}
of the regions of uncertainty
for \emph{disjoint} arguments
(see Figure~\ref{fig-region-ag-ah}).
We remark that,
if we want to derive the bounds for a specific function $f$,
we first need to determine the parse tree of $f$ according to the known rules;
this may be a non-obvious task in general.
The instructions
of the bottom-up approach are summed up in the following table.
\begin{table}[h]
\centerline{\fbox{
\begin{minipage}{0.9\columnwidth}
Step 1: determine parse tree according to the rules \\
Step 2: determine bounds bottom-up according to the parse tree
\end{minipage}}}
\caption{Instructions for performing the bottom-up approach.}
\label{tab-steps-bu}
\end{table}
\subsection{Example: Multivariate Polynomials}
\label{sec-multivariate-poly}
It is important to see
that the rules lead to a generic approach
to construct entire classes of region-value-suitable functions.
In the following we use this approach to analyze multivariate polynomials.
(A different way to analyze multivariate polynomials
was presented before in~\cite{MOS11}.)
So far we know
that univariate polynomials are region-value-suitable.
Now we show
how we transfer
the region-value-suitability property of $(k-1)$-variate polynomials
to $k$-variate polynomials
by means of the product rule
and the lower bound rule.
Moreover,
we completely analyze $k$-variate polynomials afterwards.
\subsection*{Preparation}
We prepare the analysis of multivariate polynomials with further definitions.
Let $k\in\mathbb{N}$.
For $\beta\in\mathbb{N}_0^k$ and $x\in\mathbb{R}^k$
we define $x^\beta$
as the term
$x^\beta:=x_1^{\beta_1} \cdot \ldots \cdot x_k^{\beta_k}$.
Next
we define the reverse lexicographic order\footnote{
For lexicographic order see Cormen~et~al.~\cite{CLR90}.}
on $k$-tuples.
Let $\alpha,\beta\in\mathbb{N}_0^k$.
Then we define $\alpha\,{\prec}\,\beta\label{def-lex-inline}$
if and only if
there is $\ell\in\{1,\ldots,k\}$
such that
$\alpha_{j}=\beta_{j}$
for all $\ell < j \le k$
and
$\alpha_{\ell}<\beta_{\ell}$.
In addition
we denote by ${\cal P}(k)$
the set of bijective functions
$\sigma:\{1,\ldots,k\} \to \{1,\ldots,k\}$.
In other words,
${\cal P}(k)$ is
the set of
permutations\footnote{
Algebra: For permutation see Lamprecht~\cite{L93}.}
of $\{1,\ldots,k\}$.
Now let $\alpha,\beta\in\mathbb{N}_0^k$
and let $\sigma\in{\cal P}(k)$.
We define the \emph{permutation $\sigma$ of a tuple
$\alpha=(\alpha_1,\ldots,\alpha_k)$} by
$\sigma(\alpha) :=
\left(\alpha_{\sigma^{-1}(1)}, \ldots, \alpha_{\sigma^{-1}(k)} \right)$.
Further we define the
\emph{reverse lexicographic order
after the permutation\label{def-lexafter-inline} $\sigma$} as
\begin{eqnarray*}
\alpha \,{\prec}_\sigma\, \beta
\mathbb{F}ORMSEP & :\Longleftrightarrow & \mathbb{F}ORMSEP
\sigma(\alpha)\,{\prec}\, \sigma(\beta).
\end{eqnarray*}
Let ${\cal I}\subset\mathbb{N}_0^k$ be finite.
We denote the set of largest elements in ${\cal I}$ by
\begin{eqnarray*}
{\Ind_{\rm max}} &:=& \left\{
\beta\in{\cal I} \,:\,
\text{there is }
\sigma\in{\cal P}(k)
\text{ such that }
\alpha{\prec}_\sigma\beta
\text{ for all }
\alpha\in{\cal I}, \alpha\ne\beta
\right\}.
\end{eqnarray*}
We observe
that there may be $\beta\in{\cal I}$ which do not belong to ${\Ind_{\rm max}}$.
We observe further
that different permutations may lead to the same local maximum.
For each $\beta\in{\Ind_{\rm max}}$ we collect these permutations in the set
\begin{eqnarray*}
{\cal P}_{\beta}(k) &:=& \left\{
\sigma\in{\cal P}(k) \,:\,
\beta = \max\nolimits_{{\prec}_\sigma}{\cal I}
\right\}.
\end{eqnarray*}
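The following Python sketch (with names of our own choosing)
mirrors these definitions and computes ${\Ind_{\rm max}}$
together with the sets ${\cal P}_\beta(k)$
for a small example index set.
\begin{verbatim}
from itertools import permutations

def rev_lex_less(alpha, beta):
    """alpha strictly precedes beta in the reverse lexicographic order."""
    for j in reversed(range(len(alpha))):
        if alpha[j] != beta[j]:
            return alpha[j] < beta[j]
    return False

def permute(alpha, sigma):
    """sigma(alpha): position i receives alpha_{sigma^{-1}(i)} (0-indexed)."""
    inv = [0] * len(sigma)
    for i, s in enumerate(sigma):
        inv[s] = i
    return tuple(alpha[inv[i]] for i in range(len(alpha)))

def max_under(index_set, sigma):
    """Maximum of the index set under 'reverse lex after sigma'."""
    best = None
    for alpha in index_set:
        if best is None or rev_lex_less(permute(best, sigma), permute(alpha, sigma)):
            best = alpha
    return best

# Exponent tuples of an example polynomial in k = 2 variables.
index_set = {(3, 0), (0, 2), (1, 1)}
k = 2
ind_max = {max_under(index_set, sigma) for sigma in permutations(range(k))}
perms_for = {beta: [sigma for sigma in permutations(range(k))
                    if max_under(index_set, sigma) == beta] for beta in ind_max}
print(ind_max)     # {(0, 2), (3, 0)}; the tuple (1, 1) is never a maximum
print(perms_for)
\end{verbatim}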
\subsection*{The region- and value-suitability}
We prove that all multivariate polynomials are region-value-suitable.
\begin{lemma}\label{lem-multi-poly-suit}
Let $\text{\rm pr}EDL$ be a predicate description for
the $k$-variate polynomial ($k\ge 2$)
\begin{eqnarray*}
f(x) := \sum_{\iota\in{\cal I}} a_\iota x^\iota
\end{eqnarray*}
where ${\cal I}\subset\mathbb{N}_0^k$ is finite
and $a_\iota\in\mathbb{R}_{\neq 0}$ for all $\iota\in{\cal I}$.
Then $f$ is region-value-suitable.
For every $\beta\in{\Ind_{\rm max}}$,
the following bounding functions are valid:
\begin{eqnarray*}
\varphi_f(\gamma) &:=& |a_\beta| \cdot \gamma^\beta\\
\chi_f(\gamma) &:=& \prod_{i=1}^{k} 2\left(\delta_i-\beta_i\gamma_i\right).
\end{eqnarray*}
\end{lemma}
\begin{proof}
Preparing consideration.
Let $\beta\in{\Ind_{\rm max}}$ and let $\sigma\in{\cal P}_\beta(k)$.
Once chosen, $\beta$ and $\sigma$ are fixed in this proof.
Because of the reverse lexicographic order,
the maximal exponent of $x_{\sigma(k)}$ in $f(x)$ is $\beta_{\sigma(k)}$.
Therefore we can write $f$ as
\begin{eqnarray*}
f(x) = b_{\beta_{\sigma(k)}} \cdot x_{\sigma(k)}^{\beta_{\sigma(k)}}
+ b_{\beta_{\sigma(k)}-1} \cdot x_{\sigma(k)}^{\beta_{\sigma(k)}-1}
+ \ldots
+ b_1 \cdot x_{\sigma(k)}
+ b_0
\end{eqnarray*}
where the $b_i(x_{\sigma(1)},\ldots,x_{\sigma(k-1)})$
are $(k-1)$-variate polynomials
for $0\le i\le\beta_{\sigma(k)}$.
For a moment
we consider the complex continuation of the polynomial $f$,
i.e.\ $f\in\mathbb{C}[z]$.
Furthermore we \emph{assume}\footnote{
We discuss the assumption in Part~2 of the proof.}
that the value of $b_{\beta_{\sigma(k)}}$ is not zero.
Then there are $\beta_{\sigma(k)}$ (not necessarily distinct)
functions $\zeta_i:\mathbb{C}^{k-1}\to\mathbb{C}$ such that
we can write $f$ in the way
\begin{eqnarray*}
f(z) &=& b_{\beta_{\sigma(k)}}(z_{\sigma(1)},\ldots,z_{\sigma(k-1)})
\cdot
\prod_{i=1}^{\beta_{\sigma(k)}}
(z_{\sigma(k)}-\zeta_i(z_{\sigma(1)},\ldots,z_{\sigma(k-1)})).
\end{eqnarray*}
We remark that if we consider $f$ as a polynomial in $z_{\sigma(k)}$
with parameterized coefficients $b_i$,
then the functions $\zeta_i$ define the parameterized roots.
Even if the location of the roots is variable,
the total number of the roots is definitely bounded by $\beta_{\sigma(k)}$.
In case that $z_{\sigma(k)}$ has a distance of at least
$\gamma_{\sigma(k)}$ to the values $\zeta_i$,
we can lower bound the absolute value of $f$ by
\begin{eqnarray}\label{for-factorizef2}
|f(z)| &\ge&
{\left|b_{\beta_{\sigma(k)}}\left(z_{\sigma(1)},\ldots,z_{\sigma(k-1)}\right)\right|}
\cdot
{\gamma_{\sigma(k)}^{\beta_{\sigma(k)}}}
\end{eqnarray}
In particular, this bound holds for real arguments.
Before we end the consideration in the complex space,
we add a remark.
Sagraloff et al.~\cite{SY11,MOS11} suggested a way to improve this estimate:
While preserving the \emph{total} region-bound $\varphi_f$,
it is possible to redistribute the region of uncertainty
around the zeros of $f$ in a way where
the amount of the \emph{individual} region-contribution per zero may differ;
they have shown that a certain redistribution improves the estimate
in Formula~(\ref{for-factorizef2}).
Next we use mathematical induction to prove
that $f$ is region-value-suitable.
Part~1 (basis). Let $j=1$.
Due to Lemma~\ref{lem-uni-poly-ana}
univariate polynomials are region-value-suitable.
Part~2 (inductive step). Let $1<j\le k$.
We define the function $g_j$ as
\begin{eqnarray*}
g_j\left(z_{\sigma(1)},\ldots,z_{\sigma(j-1)}\right)
&:=&
{b_{\beta_{\sigma(j)}}\left(z_{\sigma(1)},\ldots,z_{\sigma(j-1)}\right)}.
\end{eqnarray*}
Since $g_j$ is a polynomial in $j-1$ variables,
$g_j$ is region-value-suitable by induction.
Because of Theorem~\ref{theo-lower-bound},
the function $|g_j|$ is region-value-suitable
with the same bounds.
Furthermore, we define the functions
\begin{eqnarray*}
h_j(z_{\sigma(j)})
&:=& {\gamma_{\sigma(j)}^{\beta_{\sigma(j)}}} \\
\varphi_{h_j}(\gamma_{\sigma(j)})
&:=& \gamma_{\sigma(j)}^{\beta_{\sigma(j)}} \\
\nu_{h_j}(\gamma_{\sigma(j)})
&:=& 2\beta_{\sigma(j)}\gamma_{\sigma(j)}.
\end{eqnarray*}
Obviously
$h_j$ is region-value-suitable.
Here $f_k:=f$ and, for $j<k$, $f_j:=g_{j+1}$ denotes the leading coefficient
of $f_{j+1}$ with respect to $x_{\sigma(j+1)}$;
in particular, $f_j$ is a polynomial in $x_{\sigma(1)},\ldots,x_{\sigma(j)}$
and $g_j$ is its leading coefficient with respect to $x_{\sigma(j)}$.
Applying the estimate in Formula~(\ref{for-factorizef2}) to $f_j$,
we have $|f_j|\ge|g_j|\cdot h_j$
whenever $x_{\sigma(j)}$ has a distance of at least $\gamma_{\sigma(j)}$
to the parameterized roots.
Then the product $|g_j|\cdot h_j$ is also region-value-suitable
because of Theorem~\ref{theo-product}.
Be aware that the construction
of the estimate in Formula~(\ref{for-factorizef2})
is based on the assumption that the coefficient
$b_{\beta_{\sigma(j)}}$ of $f_j$ is not zero.
We observe that this is only guaranteed outside of
the region of uncertainty of $g_j$.
We observe further
that the construction in the proof of Theorem~\ref{theo-lower-bound}
preserves the region of uncertainty,
that means, $R_{g_j}\subset R_{f_j}$.
Therefore the assumption is justified and
we can conclude that $f_j$ is region-value-suitable.
It remains to show that the claimed bounding functions
$\varphi_f$ and $\chi_f$ are valid.
Part~3 ($\varphi_f$).
The basis $j=1$
follows from Lemma~\ref{lem-uni-poly-ana}:
\begin{eqnarray*}
\varphi_{f_1}\left(\gamma_{\sigma(1)}\right)
&:=&
\left|a_\beta\right| \cdot \gamma_{\sigma(1)}^{\beta_{\sigma(1)}}
\end{eqnarray*}
(Be aware
that the real coefficient $a_\beta$
is contained in every $g_j$ for $1< j\le k$.)
Now let $1<j\le k$.
For the induction step
we need the following observation:
Because of the reverse lexicographic order,
the maximal exponent of $x_{\sigma(j-1)}$
in the parameterized coefficient
$b_{\beta_{\sigma(j)}}(x_{\sigma(1)},\ldots,x_{\sigma(j-1)})$
is $\beta_{\sigma(j-1)}$.
We have
\begin{eqnarray*}
\varphi_{f_j}\left(\gamma_{\sigma(1)},\ldots,\gamma_{\sigma(j)}\right)
&:=&
\left|a_\beta\right|
\cdot
\prod_{\ell=1}^j \gamma_{\sigma(\ell)}^{\beta_{\sigma(\ell)}}
\end{eqnarray*}
The case $j=k$ proves the claim.
Part~4 ($\chi_f$).
The basis $j=1$ follows
from Lemma~\ref{lem-uni-poly-ana}:
\begin{eqnarray*}
\chi_{f_1}\left(\gamma_{\sigma(1)}\right)
&:=&
2\left(\delta_{\sigma(1)}-\beta_{\sigma(1)}\gamma_{\sigma(1)}\right).
\end{eqnarray*}
Now let $1<j\le k$.
Because the argument lists of $g_j$ and $h_j$ are disjoint,
we apply Formula~(\ref{for-beta-product-rule}) and obtain
\begin{eqnarray*}
\chi_{f_j}\left(\gamma_{\sigma(1)},\ldots,\gamma_{\sigma(j)}\right)
&:=&
\prod_{\ell=1}^j
2\left(\delta_{\sigma(\ell)}-\beta_{\sigma(\ell)}\gamma_{\sigma(\ell)}\right).
\end{eqnarray*}
The case $j=k$ proves the claim.
\qed
\end{proof}
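As a small illustration of the lemma,
the following Python sketch merely evaluates the two bounding functions
for an assumed polynomial, an assumed choice of $\beta\in{\Ind_{\rm max}}$,
and assumed values of $\gamma$ and $\delta$;
it is not part of the proof.
\begin{verbatim}
def phi_f(a_beta, beta, gamma):
    """phi_f(gamma) = |a_beta| * gamma^beta."""
    value = abs(a_beta)
    for b, g in zip(beta, gamma):
        value *= g ** b
    return value

def chi_f(beta, gamma, delta):
    """chi_f(gamma) = prod_i 2*(delta_i - beta_i*gamma_i)."""
    value = 1.0
    for b, g, d in zip(beta, gamma, delta):
        value *= 2.0 * (d - b * g)
    return value

# Example: f(x1, x2) = 2*x1^3 - x2^2 + x1*x2 with beta = (3, 0), a_beta = 2.
beta, a_beta = (3, 0), 2.0
gamma, delta = (0.05, 0.05), (1.0, 1.0)
print(phi_f(a_beta, beta, gamma))  # lower bound on |f| outside the region of uncertainty
print(chi_f(beta, gamma, delta))   # lower bound on the fp-safe volume per perturbation box
\end{verbatim}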
\subsection*{The analysis}
Now we prove the analyzability of multivariate polynomials
and apply the approach of quantified relations
to derive a precision function.
\begin{theorem}[multivariate polynomial]\label{theo-multipoly-ana}
Let $f$ be a $k$-variate polynomial ($k\ge2$) of total degree $d$
as defined in Lemma~\ref{lem-multi-poly-suit} and
let $\text{\rm pr}EDL$ be a predicate description for $f$
with cubical neighborhoods
$\delta_i=\delta_j$ and $\gamma_i=\gamma_j$
for all $1\le i,j \le k$.
Then $f$ is analyzable.
Furthermore,
we obtain the bounding function
\begin{eqnarray}\label{for-theo-multi-lsafe}
{L_\text{\rm safe}}(p)
&\ge&
\left\lceil
- \beta^* \log_2 \left(1-\sqrt[k]{p}\right)
\;+\; \mathbb{C}M(\beta)
\right\rceil
\end{eqnarray}
where
\begin{eqnarray*}
\mathbb{C}M(\beta) &:=&
\log_2\frac
{(d+1+\lceil\log_2|{\cal I}|\rceil)
\cdot |{\cal I}|
\cdot \max_{\iota\in{\cal I}}|a_\iota|
\cdot 2^{{e_\text{\rm max}} d+{\beta^*}+1}
\cdot \hat\beta^{\beta^*}}
{|a_\beta| \cdot
{(t\delta_1)}^{\beta^*}}
\end{eqnarray*}
for $\beta\in{\Ind_{\rm max}}$
and $\hat\beta:=\max_{1\le i\le k}\beta_i$
and $\beta^*:=\sum_{i=1}^{k} \beta_i$.
\end{theorem}
We observe that $\hat\beta\le d$ and $\beta^*\le d$.
Note that the choice of $\beta\in{\Ind_{\rm max}}$ is an optimization problem:
We suggest choosing $\beta$
such that the constant $\beta^*$
in the asymptotic bound
${L_\text{\rm safe}}(p) = O\left(-\beta^*\log(1-\sqrt[k]{p})\right)$
for $p\to1$
is small.
\begin{proof}
Part~1 (analyzable).
Let $\beta\in{\Ind_{\rm max}}$.
Due to Lemma~\ref{lem-multi-poly-suit},
$f$ is region-value-suitable.
In addition Corollary~\ref{cor-multipolysafety}
provides a fp-safety bound $S_{\inf f}(L)$
for $k$-variate polynomials
in Formula~(\ref{for-sfl-multivariate}).
The function $S_{\inf f}(L)$ converges to zero and is invertible.
It follows that $f$ is safety-suitable and thus analyzable.
Part~2 (analysis).
We apply the approach of quantified relations.
Let $\delta_1,\gamma_1\in\mathbb{R}_{>0}$ and
$\delta_1=\delta_i$ and $\gamma_1=\gamma_i$
for all $1\le i\le k$.
In addition let $\hat\beta := \max_{1\le i\le k} \beta_i$.
Step~$1'$:
At first we derive an upper bound $\varepsilon_\chi$ on the volume
of the complement of the region of uncertainty
according to the precision $p$.
Naturally we obtain
\begin{eqnarray*}
{\varepsilon_{\chi}}(p)
&:=& p \prod_{i=1}^{k} 2\delta_i
\;\; = \;\; p \left(2\delta_1\right)^k.
\end{eqnarray*}
Step~$2'$:
Because of the cubical neighborhood we redefine
\begin{eqnarray*}
\chi_f(\gamma) &:=&
2^k \left( \delta_1 - \hat\beta \gamma_1 \right)^k.
\end{eqnarray*}
Then we use $\varepsilon_\chi$ and $\chi_f$ to determine $\gamma_1$:
\begin{eqnarray*}
\chi_f(\gamma) &=& {\varepsilon_{\chi}(p)} \\
\Leftrightarrow
\makebox[3.0cm]
{
$2^k \left(\delta_1-\hat\beta\gamma_1\right)^k$}
&=& p \, 2^k \, \delta_1^k\\
\Leftrightarrow
\makebox[3.0cm]
{
$\left(1-\frac{\hat\beta\gamma_1}{\delta_1}\right)^k$}
&=& p \\
\Rightarrow
\makebox[3.0cm]
{
$1-\frac{\hat\beta\gamma_1}{\delta_1}$}
&=& \sqrt[k]{p} \\
\Leftrightarrow
\makebox[3.0cm]
{
$\gamma_1(p)$}
&:=& \frac{\delta_1\left(1-\sqrt[k]{p}\right)}{\hat\beta}
\end{eqnarray*}
Step~3:
Since $\gamma$ represents the augmented region of uncertainty,
the normal sized region is induced by $t\gamma$. \\
\noindent
Step~4:
Now we fix the bound $\varphi_f$ on the absolute value and set
\begin{eqnarray*}
\varphi(p)
&=& \varphi_f(t\gamma(p)) \\
&=& |a_\beta| \cdot \left(t\gamma(p)\right)^\beta \\
&=& |a_\beta| \cdot \prod_{i=1}^k \left(t\gamma_i(p)\right)^{\beta_i} \\
&=& |a_\beta| \cdot \left(t\gamma_1(p)\right)^{\beta^*} \\
&=& |a_\beta| \cdot
\left(\frac{t\delta_1 \left(1-\sqrt[k]{p}\right)}{\hat\beta}\right)^{\beta^*}
\end{eqnarray*}
where $\beta^*:=\sum_{i=1}^k \beta_i$. \\
\noindent
Step~5:
To derive the bound on the precision,
we consider the inverse of Formula~(\ref{for-sfl-multivariate})
which is
\begin{eqnarray*}
S_{\inf f}^{-1}(\varphi(p))
&=&
\log_2\frac
{(d+1+\lceil\log_2|{\cal I}|\rceil)
\cdot |{\cal I}| \cdot \max|a_\iota| \cdot 2^{{e_\text{\rm max}} d+1}}
{\varphi(p)}\\
&=&
\log_2\frac
{(d+1+\lceil\log_2|{\cal I}|\rceil)
\cdot |{\cal I}| \cdot \max|a_\iota| \cdot 2^{{e_\text{\rm max}} d+1}
\cdot (2\hat\beta)^{\beta^*} }
{|a_\beta| \cdot ({t\delta_1\left(1-\sqrt[k]{p}\right)})^{\beta^*}}\\
&=&
-{\beta^*} \log_2
{\left(1-\sqrt[k]{p}\right)}
\\
&& + \log_2\frac
{(d+1+\lceil\log_2|{\cal I}|\rceil)
\cdot |{\cal I}| \cdot \max|a_\iota| \cdot 2^{{e_\text{\rm max}} d+1}
\cdot (2\hat\beta)^{\beta^*} }
{|a_\beta| \cdot {(t\delta_1)}^{\beta^{*}}}.
\end{eqnarray*}
Finally the claim follows from
${L_\text{\rm safe}}(p):=\left\lceil S_{\inf f}^{-1}(\varphi(p))\right\rceil$.
\qed
\end{proof}
The formula for ${L_\text{\rm safe}}(p)$
in the theorem above looks rather complicated.
Therefore we study the asymptotic behavior
${L_\text{\rm safe}}(p) = O\left(-d\log(1-\sqrt[k]{p})\right)$
for $p\to1$
in the following corollary:
We show that ``slightly'' more than
$d$ additional bits of the precision are sufficient
to halve the failure probability.
\begin{corollary}
Let $f$ be a $k$-variate polynomial ($k\ge 2$) of total degree $d$
and let ${L_\text{\rm safe}}:(0,1)\to\mathbb{N}$
be the precision function in Formula~(\ref{for-theo-multi-lsafe}).
Then
\begin{eqnarray*}
{L_\text{\rm safe}}\left(\frac{1+p}{2}\right)
&\le&
{L_\text{\rm safe}}(p) + \left\lceil \lambda\beta^* \right\rceil
\end{eqnarray*}
where $\beta^*=\sum_{i=1}^{k}\beta_i\le d$ and
\begin{eqnarray*}
\lambda &:=&
\log_2 \left(
\frac{1 - \sqrt[k]{p}}
{1 - \sqrt[k]{\frac{1+p}{2}}}
\right).
\end{eqnarray*}
\end{corollary}
\begin{proof}
All quantities are as defined in Theorem~\ref{theo-multipoly-ana}.
We obtain:
\begin{eqnarray*}
{L_\text{\rm safe}}\left(\frac{1+p}{2}\right)
&=&
\left\lceil
-\beta^*
\log_2 \left(
1-\sqrt[k]{\frac{1+p}{2}}
\right)
\; + \; \mathbb{C}M(\beta)
\right\rceil \\
&=&
\left\lceil
-\beta^*
\log_2 \left( \left(1-\sqrt[k]{p}\right)\cdot
\frac{1-\sqrt[k]{\frac{1+p}{2}}}{1-\sqrt[k]{p}}
\right)
\; + \; \mathbb{C}M(\beta)
\right\rceil \\
&=&
\left\lceil
-\beta^*
\log_2 \left(1-\sqrt[k]{p}\right)
-\beta^*
\log_2
\left(\frac{1-\sqrt[k]{\frac{1+p}{2}}}{1-\sqrt[k]{p}}
\right)
\; + \; \mathbb{C}M(\beta)
\right\rceil \\
&\le&
{L_\text{\rm safe}}(p)
+\left\lceil
\beta^*
\log_2
\left(\frac{1-\sqrt[k]{p}}{1-\sqrt[k]{\frac{1+p}{2}}}
\right) \right\rceil.
\end{eqnarray*}
This proves the claim.
\qed
\end{proof}
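The following Python sketch evaluates ${L_\text{\rm safe}}(p)$
according to Formula~(\ref{for-theo-multi-lsafe}) and compares
the cost of halving the failure probability with the bound of the corollary;
all numerical parameters are assumed example values
and do not stem from a concrete predicate.
\begin{verbatim}
import math

def cm_beta(d, n_terms, max_abs_coeff, e_max, beta_hat, beta_star, a_beta, t, delta1):
    """C_M(beta) from the theorem; all parameters are assumed example values."""
    numerator = ((d + 1 + math.ceil(math.log2(n_terms))) * n_terms * max_abs_coeff
                 * 2.0 ** (e_max * d + beta_star + 1) * beta_hat ** beta_star)
    denominator = abs(a_beta) * (t * delta1) ** beta_star
    return math.log2(numerator / denominator)

def l_safe(p, k, beta_star, cm):
    """L_safe(p) = ceil( -beta* * log2(1 - p^(1/k)) + C_M(beta) )."""
    return math.ceil(-beta_star * math.log2(1.0 - p ** (1.0 / k)) + cm)

# Illustrative parameters (assumptions only):
k, d, beta_star, beta_hat = 2, 3, 3, 3
cm = cm_beta(d=d, n_terms=3, max_abs_coeff=2.0, e_max=10,
             beta_hat=beta_hat, beta_star=beta_star, a_beta=2.0, t=0.5, delta1=1.0)
for p in (0.9, 0.95, 0.99):
    lam = math.log2((1 - p ** (1 / k)) / (1 - ((1 + p) / 2) ** (1 / k)))
    print(p,
          l_safe(p, k, beta_star, cm),                              # L_safe(p)
          l_safe((1 + p) / 2, k, beta_star, cm),                    # halved failure prob.
          l_safe(p, k, beta_star, cm) + math.ceil(lam * beta_star)) # corollary bound
\end{verbatim}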
\section[The Top-down Approach Using Replacements (1st Stage, rv-suit)]{The Top-down Approach Using Replacements}
\label{sec-top-down}
This approach derives the bounding functions
which are associated with region- and value-suitability
in the first stage of the analysis
(see Figure~\ref{fig-ana-td}).
In the bottom-up approach
we consider a sequence of functions
which is incrementally built-up from simple functions and
\emph{ends up} at the function $f$ under consideration.
In contrast to that
we now construct a (different) sequence of functions top-down
that \emph{begins} with $f$ and is built up
by dealing with the arguments of $f$ coordinatewise.
However,
the top-down approach works in two phases:
In the first phase we just derive the auxiliary functions and
in the second phase
we determine the bounds for the region- and value-suitability
bottom-up.
That is why we call this approach also \emph{pseudo-top-down}.
\begin{figure}
\caption{The top-down approach and its interface.}
\label{fig-ana-td}
\end{figure}
We remark that the idea of developing a top-down approach is not new:
The idea was first introduced by Mehlhorn et al.~\cite{MOS06} and
their journal article appeared in~\cite{MOS11}.
\emph{As opposed to previous publications,
our top-down approach is different for several reasons:
It is designed to fit to the method of quantified relations
and
it is based on our general conditions to analyze functions
(we do not need auxiliary constructions like exceptional points,
continuity or a finite zero set).}
New definitions are introduced in Section~\ref{sec-td-preparation}.
We define the basic idea
of a \emph{replacement} in Section~\ref{sec-td-single-rep}.
Afterwards
we show how we can apply a \emph{sequence of replacements}
to the function under consideration
in Section~\ref{sec-td-multi-rep}.
We present the top-down approach to derive the bounding functions
in Section~\ref{sec-td-first-analysis}.
Next we consider an example in Section~\ref{sec-td-example}.
For clarity,
we finally answer selected questions
in Section~\ref{sec-td-deeper-insight}.
\subsection{Definitions}
\label{sec-td-preparation}
We prepare the presentation with various definitions
and begin with a projection.
Let $\ell,k\in\mathbb{N}$ with $\ell\le k$, let $I:=\{1,\ldots,k\}$
and let
\begin{eqnarray*}
s:\{1,\ldots,\ell\}\to I
\end{eqnarray*}
be an injective mapping.
Then we define the projection
\begin{eqnarray*}
\pi_s(x)\label{def-projection-pi}
&:=&
\left(x_{s(1)},\ldots,x_{s(\ell)}\right).
\end{eqnarray*}
In a natural way we extend the projection to sets $X\subset\mathbb{R}^k$ by
\begin{eqnarray*}
\pi_s(X)
&:=& \left\{
\pi_s(x) \,:\, x\in X
\right\}.
\end{eqnarray*}
Since we often make use of the projection $\pi$
in the context of an index $i\in I$,
we define the following abbreviations in their obvious meaning:
\begin{eqnarray*}
{\pi_{i}} (x) &:=&
(x_i), \\
{\pi_{<i}} (x) &:=&
(x_1,\ldots,x_{i-1}), \\
{\pi_{>i}} (x) &:=&
(x_{i+1},\ldots,x_k), \\
{\pi_{\neq i}} (x) &:=&
(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_k).
\end{eqnarray*}
We remark on these contextual definitions
that the greatest index $k$
is always given implicitly by the set $I$ of indices.
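For illustration only, these projections correspond to simple tuple operations;
the following Python sketch is 0-indexed, whereas the text uses 1-based indices.
\begin{verbatim}
# The projections as tuple slicing (i is the 1-based index of the text).
def pi_eq(x, i):   return (x[i - 1],)                 # pi_i
def pi_lt(x, i):   return tuple(x[: i - 1])           # pi_{<i}
def pi_gt(x, i):   return tuple(x[i:])                # pi_{>i}
def pi_neq(x, i):  return tuple(x[: i - 1] + x[i:])   # pi_{!=i}

x = (10, 20, 30, 40)
print(pi_eq(x, 2), pi_lt(x, 2), pi_gt(x, 2), pi_neq(x, 2))
# (20,) (10,) (30, 40) (10, 30, 40)
\end{verbatim}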
The usage of such orthogonal projections leads to
the following condition on the set $A$ of projected inputs:
\emph{It is a necessary condition in the top-down analysis
that $A$ as well as the perturbation area $\bar{U}_\delta(A)$
are closed axis-parallel boxes without holes.}
We briefly motivate the next notation:
Assume that the function $f$ has a $k$-ary argument.
During the analysis of $f$,
we often bind $k-1$ of these variables to values
given in a $(k-1)$-tuple, say $\bar{x}i$.
We do this to study the local behavior of $f$
in dependence on a single free argument, say $x_i$.
\begin{definition}[free-variable star]
\label{def-free-var-star}
Let $\text{\rm pr}EDB$ be a predicate description
where $A$ is an axis-parallel box without holes.
In addition
let $I:=\{1,\ldots,k\}$ and
let $i\in I$.
For each $(k-1)$-tuple
$\bar{x}i := (\bar{x}i_1,\ldots,\bar{x}i_{i-1},\bar{x}i_{i+1},\ldots,\bar{x}i_k) \in {\pi_{\neq i}}(A)$
we define the function $\mathbb{F}SIX(x_i)$ as
\begin{eqnarray*}
& \mathbb{F}SIX : {\pi_{i}}(A) \to \mathbb{R},\\
& x_i \, \mapsto \,
\mathbb{F}SIX(x_i) =
f(x_1,\ldots,x_k)|_{x_j=\bar{x}i_j \; \forall j\in I\!, \; j\neq i}
\,=\,
f(\bar{x}i_1,\ldots,\bar{x}i_{i-1},x_i,\bar{x}i_{i+1},\ldots,\bar{x}i_k).
\end{eqnarray*}
\end{definition}
In other words,
we consider $\mathbb{F}SIX$
as the function $f$
where $x_i$ is a free variable
and all remaining variables are bound to the tuple $\bar{x}i$.
We illustrate the definition with an example and
consider the function $f(x_1,x_2,x_3):=3x_1^2+2x_2^3-4x_3$.
Then $f^{*2}_{(4,7)}$ is a function in $x_2$ and we have
\begin{eqnarray*}
f^{*2}_{(4,7)}(x_2)
&=& f(x_1,x_2,x_3)|_{x_1=4 \,\wedge\, x_3=7}\\
&=& 3 \cdot 4^2 + 2 x_2^3 - 4 \cdot 7 = 2 x_2^3+20.
\end{eqnarray*}
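The following Python sketch (an illustration with names of our own)
realizes the free-variable star by binding all arguments except the $i$-th one;
it reproduces the example above.
\begin{verbatim}
def free_var_star(f, i, bound_args):
    """Return f^{*i}_{bound_args}: a univariate function of x_i (i is 1-based)."""
    def f_star(x_i):
        args = list(bound_args)
        args.insert(i - 1, x_i)   # re-insert the free variable at position i
        return f(*args)
    return f_star

f = lambda x1, x2, x3: 3 * x1**2 + 2 * x2**3 - 4 * x3
f_star_2 = free_var_star(f, 2, (4, 7))   # the example from the text
print(f_star_2(1), f_star_2(2))          # 22, 36, i.e. 2*x2^3 + 20
\end{verbatim}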
We sometimes do not attach the tuple $\bar{x}i$ to $\mathbb{F}SI$
to relieve the reading
if $\bar{x}i$ is uniquely defined by the context.
Once we focus on the function $\mathbb{F}SIX$ in one variable, say $x_i$,
we are interested in its induced critical set.
Surely this critical set depends on the choice of $\bar{x}i$.
We have seen that the region-suitability is a necessary condition
for the analyzability of the function.
Therefore the next definition is used to mark those $\bar{x}i$
for which $\mathbb{F}SIX$ is or is not region-suitable.
\begin{definition}[region-regularity]
\label{def-reg-reg}
Let $\text{\rm pr}EDB$ be a predicate description
where $A$ is an axis-parallel box without holes.
We call $\bar{x}i\in {\pi_{\neq i}}(A)$ \emph{region-regular}
if $\mathbb{F}SIX$ is region-suitable on ${\pi_{i}}(A)$.
Otherwise we call $\bar{x}i$ \emph{non-region-regular}.
\end{definition}
Finally we remark that the region-suitability of $\mathbb{F}SIX$ implies that
the functions $\nu_{\mathbb{F}SIX}$ and $\chi_{\mathbb{F}SIX}$ exist.
If $i$ is fixed, there are families of functions $\mathbb{F}SIX$
(and hence families of functions $\nu_{\mathbb{F}SIX}$ and $\chi_{\mathbb{F}SIX}$)
that depend on the region-regular $\bar{x}i$.
We examine these families in the next paragraph.
\subsection{Single Replacement}
\label{sec-td-single-rep}
From now on we consider the following setting:
\emph{Let $\text{\rm pr}EDB$ be a predicate description
where $A$ is an axis-parallel box without holes
and let $I:=\{1,\ldots,k\}$.}
In addition we denote the domain of $f$ by ${\rm dom}(f)$.
We develop the top-down approach step-by-step.
For a given index $i\in I$,
our first aim is to lower-bound the absolute value of $f$
by a function $g$
whose argument lists differ solely in the $i$-th position:
While $f$ depends on $x_i\in {\pi_{i}}(U_\delta(A))$,
the function $g$ depends on
a new variable $\gamma_i\in{\pi_{i}}(\mathbb{G}AB)$.
Hence we say that the construction of $g$
is motivated by the \emph{replacement of $x_i$ with $\gamma_i$}
in the argument list of $f$.
Now we present the construction of the function $g$
for a fixed index $i\in I$.
We focus on the functions $\mathbb{F}SIX$
to study the local behavior of $f$ in its \mbox{$i$-th} argument.
We are interested in tuples $\bar{x}i\in{\pi_{\neq i}}({\rm dom}(f))$
for which $\mathbb{F}SIX$ is region-suitable.
We collect these points in the set
\begin{eqnarray*}
{X}_{f,i} &:=& \left\{ \bar{x}i\in{\pi_{\neq i}}({\rm dom}(f)) :
\text{$\bar{x}i$ is region-regular}
\right\}.
\end{eqnarray*}
To understand our interest in the set ${X}_{f,i}$,
we remind ourselves about the following fact:
For region-regular $\bar{x}i$,
open neighborhoods of the critical set $C_{\mathbb{F}SIX}$
are guaranteed to exist for any given (arbitrarily small) volume.
This is not true for non-region-regular points
which therefore must belong to the critical set
of the objective function.
Next we define the objective function $g$. Let
\begin{eqnarray*}
g :
{\pi_{<i}} ({\rm dom}(f))
\times {\pi_{i}} (\mathbb{G}AB)
\times {\pi_{>i}} ({\rm dom}(f))
\to \mathbb{R}_{\ge 0},
\end{eqnarray*}
be the function with the pointwise definition
\begin{eqnarray}
g(\bar{x}i_1,\ldots,\bar{x}i_{i-1},\gamma_i,\bar{x}i_{i+1}\ldots,\bar{x}i_k)
&:=& \left\{
\begin{array}{l@{\quad:\quad}l}
0 & \text{$\bar{x}i\not\in X_{f,i}$}
\\[0.5ex]
\inf\limits_{\text{(C1)}}
\;
\inf\limits_{\text{(C2)}}
\;
\left|\mathbb{F}SIX(x_i)\right|
& \text{$\bar{x}i\in X_{f,i}$}
\end{array} \right. \label{for-g-inf-inf} \\[1ex]
\text{(C1)} &:& {\bar{x}_i\in {\pi_{i}}(A)} \nonumber \\
\text{(C2)} &:& {x_i\in\bar{U}_{\mathbb{F}SI\!,\delta_i}(\bar{x}_i)
\setminus R_{\mathbb{F}SI\!,\gamma_i}(\bar{x}_i)} \nonumber
\end{eqnarray}
for all $\bar{x}i\in{\pi_{\neq i}}({\rm dom}(f))$ and all $\gamma_i\in{\pi_{i}}(\mathbb{G}AB)$.
The domains ${\rm dom}(f)$ and ${\rm dom}(g)$ only differ in the $i$-th coordinate.
Whenever $\bar{x}i$ is non-region-regular,
we set $g$ to zero.
(We remark that this is essential for the sequence of replacements
in Section~\ref{sec-td-multi-rep} since
this handling triggers the exclusion of
an open neighborhood of $\bar{x}i$---and
not just the exclusion of the point $\bar{x}i$ itself.)
In case $\bar{x}i$ is region-regular,
we set $g$ to the infimum of the absolute value of $f$
outside of the region of uncertainty for the various $\bar{x}_i$.
Note that we must consider the infimum
in the definition of $g$ in Formula~(\ref{for-g-inf-inf})
because $|\mathbb{F}SIX|$ does not need to have a minimum.
We do not assume that $f$ is continuous or semi-continuous.
\begin{definition}
We call the presented construction of the function $g$ the
\emph{function resulting from the replacement of $f$'s argument
$x_i$ with $\gamma_i$}.
We denote the replacement by
$\mathbb{R}EP(f,{x_i} \to \gamma_{i}).$
\end{definition}
We summarize the steps during the replacement of an argument of $f$
and emphasize the relation between the quantities:
Let $f$ be given.
Then we begin with the consideration of the auxiliary function $f^{*i}_{\bar{x}i}$.
We use it to determine the auxiliary set
of region-regular points ${X}_{f,i}$.
To determine the function $g$ afterwards,
we examine $f^{*i}_{\bar{x}i}$ again,
but now only for the points in ${X}_{f,i}$.
In the proof of the analysis in Section~\ref{sec-td-first-analysis},
we use the statement that the replacement $\mathbb{R}EP(f,{x_i} \to \gamma_{i})$
results in a positive function that lower bounds the absolute value of $f$
in a certain sense.
We formalize and prove this statement in the next lemma.
\begin{lemma}\label{lem-sequence-of-lower-bounds}
Let $\text{\rm pr}EDB$ be a predicate description
where $A$ is an axis-parallel box without holes,
let $I:=\{1,\ldots,k\}$ and
let $i\in I$.
Moreover, let $g:=\mathbb{R}EP(f,x_i \to \gamma_i)$.
Then we have
\begin{eqnarray}\label{for-f-larger-rep}
|f(\bar{x}i_1,\ldots,\bar{x}i_{i-1},x_i,\bar{x}i_{i+1}\ldots,\bar{x}i_k)|
\ge g(\bar{x}i_1,\ldots,\bar{x}i_{i-1},\gamma_i,\bar{x}i_{i+1}\ldots,\bar{x}i_k)
> 0
\end{eqnarray}
for all region-regular points $\bar{x}i\in {X}_{f,i}$,
for all $\gamma_i\in{\pi_{i}}(\mathbb{G}AB)$,
for all ${\bar{x}_i\in {\pi_{i}}(A)}$
and for all
${x_i\in\bar{U}_{\mathbb{F}SI\!,\delta_i}(\bar{x}_i) \setminus R_{\mathbb{F}SI\!,\gamma_i}(\bar{x}_i)}$.
\end{lemma}
\begin{proof}
The left inequality in Formula~(\ref{for-f-larger-rep})
follows immediately from the construction of
the function $g=\mathbb{R}EP(f,x_i \to \gamma_i)$
because we only consider points
lying outside of the region of uncertainty $R_{\mathbb{F}SI,\gamma_i}(\bar{x}_i)$.
To prove the right inequality
in Formula~(\ref{for-f-larger-rep}),
we assume that there is a region-regular $\bar{x}i\in{X}_{f,i}$
and $\gamma_i\in{\pi_{i}}(\mathbb{G}AB)$ such that
the objective function
\mbox{$g(\bar{x}i_1,\ldots,\bar{x}i_{i-1},\gamma_i,\bar{x}i_{i+1}\ldots,\bar{x}i_k)=0$}.
This implies, for $\bar{x}_i\in{\pi_{i}}(A)$,
the existence of a sequence $(a_j)_{j\in \mathbb{N}}$
in the area
${\bar{U}_{\mathbb{F}SI\!,\delta_i}(\bar{x}_i) \setminus R_{\mathbb{F}SI\!,\gamma_i}(\bar{x}_i)}$
for which $\lim_{j\to\infty} \mathbb{F}SIX(a_j) = 0$.
Consequently, after passing to a convergent subsequence if necessary
(which exists because the considered area is bounded),
the limit $a:=\lim_{j\to\infty} a_j$ must belong to the critical set.
Since the region of uncertainty $R_{\mathbb{F}SI\!,\gamma_i}$
guarantees the exclusion of the open $\gamma_i$-neighborhood
of the critical set---which
includes the open $\gamma_i$-neighborhood of $a$---almost
all points of the sequence $(a_j)_{j\in \mathbb{N}}$
must also lie in $R_{\mathbb{F}SI\!,\gamma_i}$.
This leads to a contradiction to the assumption
and proves the claim.
\qed
\end{proof}
We add the remark that
the right inequality in Formula~(\ref{for-f-larger-rep})
presumes that $\bar{x}i$ is region-regular
as is stated in the lemma.
We obtain $g\equiv 0$
if ${X}_{f,i}$ is the empty set.
We continue with a simple example
that illustrates the method to determine
$\mathbb{R}EP(f,x_i \to \gamma_i)$.
\begin{example}
Let $f (x_1,x_2) = x_1^2 + x_2^2$.
Then $I=\{1,2\}$.
In addition let $i=2$ and
let $A$ be an axis-parallel rectangle that contains the origin $(0,0)$.
We consider $f^{*2}_{\bar{x}i_1} (x_2) = \bar{x}i_1^2 + x_2^2 $.
Since $f^{*2}$ is region-suitable,
this leads to ${X}_{f,2}=\pi_{\neq 2}(A) = \pi_{1}(A)$.
We obtain
\begin{eqnarray*}
g (\bar{x}i_1, \gamma_2)
&:=& \left\{
\begin{array}{l@{\quad:\quad}l}
\gamma_2^2 & \bar{x}i_1=0 \\[0.5ex]
\bar{x}i_1^2 & \text{otherwise.}
\end{array}
\right.
\end{eqnarray*}
The critical set of $g$ contains a single point in the case $\bar{x}i_1=0$
and is empty in the other case.
$\bigcirc$
\end{example}
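A crude numerical sketch in Python can approximate the replacement for this example:
we take the infimum of $|f|$ over a grid outside of the $\gamma_2$-neighborhood of the
(numerically detected) critical set;
the grid resolution, the interval chosen for $\pi_2(A)$, and $\delta_2$ are arbitrary assumptions.
\begin{verbatim}
import numpy as np

def rep_x2(f, x1_bar, gamma2, delta2=1.0, a2=(-1.0, 1.0), n=2001):
    """Approximate g(x1_bar, gamma2) = REP(f, x2 -> gamma2) at x1_bar:
    infimum of |f(x1_bar, .)| over the union of the perturbation
    neighborhoods of pi_2(A), outside the gamma2-neighborhood of the
    critical set of x2 -> f(x1_bar, x2)."""
    x2 = np.linspace(a2[0] - delta2, a2[1] + delta2, n)
    values = np.abs(f(x1_bar, x2))
    # grid approximation of the critical set: points where |f| (numerically) vanishes
    critical = x2[values < 1e-12]
    if critical.size:
        outside = np.all(np.abs(x2[:, None] - critical[None, :]) >= gamma2, axis=1)
    else:
        outside = np.ones_like(x2, dtype=bool)
    return values[outside].min()

f = lambda x1, x2: x1**2 + x2**2
print(rep_x2(f, x1_bar=0.0, gamma2=0.1))   # approx. gamma2^2 = 0.01
print(rep_x2(f, x1_bar=0.5, gamma2=0.1))   # approx. x1_bar^2 = 0.25
\end{verbatim}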
We end this subsection with two observations.
Firstly, although $g(\bar{x}i_1,\gamma_2)>0$ in the example above,
the infimum
\begin{eqnarray*}
\inf_{\bar{x}i_1\in{X}_{f,2}\,\wedge\,\bar{x}i_1\neq0} \; g(\bar{x}i_1,\gamma_2) &=& 0.
\end{eqnarray*}
Secondly, if the lower-bounding function $g$
is region-value-suitable,
the function $f$ is also region-value-suitable
because of Theorem~\ref{theo-lower-bound}.
This observation is the driving force of the top-down approach.
\subsection{Sequence of Replacements}
\label{sec-td-multi-rep}
So far we know how a variable $x_i$
of the argument list of the function $f$ under consideration
can be replaced with a new variable $\gamma_i$.
The advantage of the new variable $\gamma_i$ is
that it reflects, in a certain sense, the distance to the critical set.
We announce that, as opposed to $x_i$,
the variable $\gamma_i$ is appropriate for the analysis.
A benefit of $\gamma_i$ is that
it is not necessary
to study the precise location of the critical set;
the knowledge about the ``width'' of the critical set is sufficient.
The idea behind the top-down approach is to apply
the replacement procedure
$k$ times in a row to replace all original arguments $(x_1,\ldots,x_k)$ of $f$
by the new substitutes $(\gamma_1,\ldots,\gamma_k)\in\mathbb{G}AB$.
To keep the presentation as general as possible,
we keep the order of the $k$ replacements variable.
Let $\sigma: I \to I$ be a bijective function
that defines the order
in which we replace the arguments of $f$.
We interpret $\sigma(i)=j$ as
the replacement of $x_j$ with $\gamma_j$ in the $i$-th step.
Now we look for a recursive definition
to derive the sequence $g_1, \ldots, g_k$ of functions
that result from these replacements.
We define the basis of the recursion as
$g_0:=f$ with $g_0:\bar{U}_\delta(A)\to\mathbb{R}$
and ${\rm dom}(g_0)=\bar{U}_\delta(A)$.
We set
$g_i := \mathbb{R}EP(g_{i-1}, x_{\sigma(i)} \to \gamma_{\sigma(i)})$
for $i\in I$.
In other words:
We focus on the replacement of $x_{\sigma(i)}$ in step $i\in I$,
that means,
we assume that we have just derived the functions $g_1,\ldots,g_{i-1}$.
We then
determine the set
of region-regular points
\begin{eqnarray*}
{X}_{g_{i-1},\sigma(i)} &:=& \left\{ \bar{x}i\in\pi_{\neq\sigma(i)}({\rm dom}(g_{i-1})) :
\text{$\bar{x}i$ is region-regular}
\right\}\!,
\end{eqnarray*}
that means, we check if the function
\begin{eqnarray*}
& g_{i-1,\bar{x}i}^{*\sigma(i)} :
\pi_{\sigma(i)} ({\rm dom}(g_{i-1})) \to \mathbb{R}_{\ge 0}, \\
& x_{\sigma(i)} \, \mapsto \,
g_{i-1,\bar{x}i}^{*\sigma(i)}\!\left(x_{\sigma(i)}\right) =
g_{i-1}\!\left(\bar{x}i_1,\ldots,\bar{x}i_{\sigma(i)-1},x_{\sigma(i)},\bar{x}i_{\sigma(i)+1},\ldots,\bar{x}i_k\right)
\end{eqnarray*}
is region-suitable for a given $\bar{x}i$.
Thereafter, we
define the domain of the succeeding function
$g_i$ as
\begin{eqnarray*}
& g_i :
\pi_{<\sigma(i)} ({\rm dom}(g_{i-1}))
\times \pi_{\sigma(i)} (\mathbb{G}AB)
\times \pi_{>\sigma(i)} ({\rm dom}(g_{i-1})) \to \mathbb{R}_{\ge 0}
\end{eqnarray*}
and
use ${X}_{g_{i-1},\sigma(i)}$ to
define
$g_i(\bar{x}i_1,\ldots,\bar{x}i_{\sigma(i)-1},\gamma_{\sigma(i)},\bar{x}i_{\sigma(i)+1}\ldots,\bar{x}i_k)$
\begin{eqnarray}
&:=& \left\{
\begin{array}{l@{\quad:\quad}l}
0 & \text{$\bar{x}i\not\in X_{g_{i-1},\sigma(i)}$}
\\[0.5ex]
\inf\limits_{\text{(C1)}}
\;
\inf\limits_{\text{(C2)}}
\;
\left|g_{i-1}^{*\sigma(i)}(x_{\sigma(i)})\right|
& \text{$\bar{x}i\in X_{g_{i-1},\sigma(i)}$}
\end{array} \right. \label{for-gi-inf-inf} \\[1ex]
\text{(C1)} &:& {\bar{x}_{\sigma(i)}\in \pi_{\sigma(i)}({\rm dom}(g_{i-1}))} \nonumber \\
\text{(C2)} &:& {x_{\sigma(i)}\in\bar{U}_{g_{i-1}^{*\sigma(i)},\delta_{\sigma(i)}}(\bar{x}_{\sigma(i)})
\setminus R_{g_{i-1}^{*\sigma(i)},\gamma_{\sigma(i)}}(\bar{x}_{\sigma(i)})} \nonumber
\end{eqnarray}
for all $\bar{x}i\in{\pi_{\neq \sigma(i)}}({\rm dom}(g_{i-1}))$ and all $\gamma_{\sigma(i)}\in\pi_{\sigma(i)}(\mathbb{G}AB)$.
We summarize the relation between the quantities
during the $i$-th replacement
in Figure~\ref{fig-ana-depend-multi-rep}.
(The striped quantities are introduced later.)
\begin{figure}
\caption{Illustration of the dependencies during the $i$-th replacement.
The white-colored quantities are defined in Section~\ref{sec-td-multi-rep}.}
\label{fig-ana-depend-multi-rep}
\end{figure}
The definitions above are chosen such that
the function $g_i$ exists.
After the \mbox{$k$-th} step,
the recursion ends with
$g_k:\mathbb{G}AB \to \mathbb{R}_{\ge 0}$.
We remark that,
if we apply this mechanism to functions
which are not admissible for controlled perturbation,
the sequence of replacements will end-up with a function $g_k$
that fails the analysis from the next section.
\begin{example}
\label{ex-inbox-gi-dom-crit}
We get back to the 2-dimensional $\text{\rm in\_box}$-predicate.
For this example it is sufficient to assume
that the box is fixed somehow and
that the only argument of the predicate is the query point $q=(x_1,x_2)$.
This time we consider the various domains and critical sets
of the functions $g_i$ that result from the sequence of replacements.
(The order of the replacements is not important for this example.)
The situation is illustrated in Figure~\ref{fig-crit-set-inbox-1}.
Picture (a) shows the domain (shaded region)
of the function $f=g_0$ itself.
We know that the critical set is the boundary of the query box.
After the replacement $\mathbb{R}EP(g_0,x_1\to\gamma_1)$,
the first argument belongs to the set $\pi_1(\mathbb{G}AB)$
resulting in an altered domain (see Picture (b)).
We make two observations.
Firstly, the critical set of $g_1$ is formed by two horizontal lines
that are caused by the top and bottom part of the box $C_{g_0}$.
What is the reason for that?
If we consider the absolute value of $g_0$
while moving its argument
along a horizontal line that
passes through the top or bottom line segment of the box
($x_2$ is fixed then),
it leads to a mapping that is zero on an open interval;
in this case the mapping cannot be region-suitable.
Secondly, there are no further contributions to the critical set of $g_1$.
What is the reason?
If we consider the absolute value of $g_0$ along a horizontal line that
passes through the interior of the box,
it leads to a mapping which is region-suitable.
\begin{figure}
\caption{Illustration of the various domains and critical sets
that result from the sequence of replacements
for the 2-dimensional $\text{\rm in\_box}$-predicate.}
\label{fig-crit-set-inbox-1}
\end{figure}
Picture (c) shows the situation after the second replacement
$\mathbb{R}EP(g_1,x_2\to\gamma_2)$.
The function $g_2$ is positive on its entire domain $\mathbb{G}AB$.
The reason for this is that,
if we consider the absolute value of $g_1$ along a vertical line
($\gamma_1$ is fixed then),
it leads to a mapping which is region-suitable.
$\bigcirc$
\end{example}
\subsection{Derivation and Correctness of the Bounds}
\label{sec-td-first-analysis}
Although we have replaced each $x_i$ with $\gamma_i$
in the argument list of $f$ in a top-down manner,
we are not able to determine the bounds $\nu_f$ and $\varphi_f$
in the same way.
To achieve this goal,
we need to go through the collected information bottom-up again.
The reason is that, at the time we arrive at a function, say $g_{i-1}$,
we cannot check directly if $g_{i-1}$ is region- and value-suitable.
Instead of this,
we want that these properties are inherited from the successor $g_i$
to the predecessor.
We will see that,
once we arrive at $g_k$,
we can easily check if $g_k$ has the desired properties.
This way we can possibly show
that $g_0$, i.e.\ $f\!,$ is also region-value-suitable.
Therefore we divide the analysis in two phases.
The first phase consists of the deduction of $g_k$
via the sequence of replacements
and is already presented in the last section.
The second phase consists of the deduction of the bounding functions
$\varphi_f$ and $\chi_f$ and
is the subject of this section.
We begin with an auxiliary statement
which claims that $g_k$ is non-decreasing in each argument
under certain circumstances.
\begin{lemma}\label{lem-gk-non-decreasing}
Let $\text{\rm pr}EDB$ be a predicate description
where $A$ is an axis-parallel box without holes
and let $I:=\{1,\ldots,k\}$.
Let $\sigma:I\to I$ be bijective,
i.e., an order on the elements of $I$.
Finally,
let $g_0:=f$ and $g_j:=\mathbb{R}EP(g_{j-1},x_{\sigma(j)}\to\gamma_{\sigma(j)})$
for all $1\le j \le k$,
i.e.,
$g_k$ is the resulting function after the $k$ replacements.
If the function $g_k$ is positive\footnote{
That is why we have defined $\mathbb{G}AB$ as an \emph{open} set.},
it is non-decreasing
in $\gamma_i$
on ${\pi_{i}}(\mathbb{G}AB)$
for all $i\in I$.
\end{lemma}
\begin{proof}
We refer to the explicit definition of $g_i$ in Formula~(\ref{for-gi-inf-inf})
that reflects the replacement of the $i$-th argument:
For growing $\gamma_{\sigma(i)}$
we shrink the domain for $x_{\sigma(i)}$
due to condition~(C2).
Formally,
for $\gamma',\gamma''\in\pi_{\sigma(i)}(\mathbb{G}AB)$
with $\gamma'<\gamma''$
the corresponding regions of uncertainty are related in the way
\begin{eqnarray*}
& R_{g_{i-1}^{*\sigma(i)},\gamma'}(\bar{x}_{\sigma(i)})
\;\subset\;
R_{g_{i-1}^{*\sigma(i)},\gamma''}(\bar{x}_{\sigma(i)}).
\end{eqnarray*}
Because the function value of $g_i$
is defined by the infimum absolute value,
the function $g_i$
must be non-decreasing in its $i$-th argument $\gamma_{\sigma(i)}$
for region-regular $\bar{x}i$
by construction.
The same argumentation is true for each of the $k$ replacements and
is independent of the actual sequence of replacements.
This finishes the proof.
\qed
\end{proof}
The domain of the function $g_k$ is naturally $\mathbb{G}AB$.
Even if $\mathbb{G}AB$ has the same cardinality as $\mathbb{R}$
for $k\ge2$,
it is non-obvious how to define an invertible function $\chi_{g_k}$
on $\mathbb{G}AB$.
But such a bound is required to use the method of quantified relations.
For that purpose we restrict the domain in the analysis
to $\mathbb{G}AL$:
It is true that the elements of $\gamma\in\mathbb{G}AL$
are now interlinked,
but the important fact is that
we can still choose them arbitrarily close to zero.
To further prepare the analysis,
we have to focus on a peculiarity of the auxiliary function
$g_{i-1,\bar{x}i}^{*\sigma(i)}$ for a given $i\in I$.
Remember that
$\nu_{g_{i-1,\bar{x}i}^{*\sigma(i)}}$ and $\chi_{g_{i-1,\bar{x}i}^{*\sigma(i)}}$
are families of functions with parameter
$\bar{x}i\in {X}_{g_{i-1},\sigma(i)}$.
Therefore we are facing the following issue:
For a given $i\in I$,
how can we deal with these two families of functions?
The first solution that comes into mind
is to replace each family with just one bounding function---so
this is what we do.
That means, we define the pointwise limits of these families as
\begin{eqnarray}
\hat\nu_{g_{i-1}^{*\sigma(i)}}\left(\gamma_{\sigma(i)}\right) &:=&
\sup_{\bar{x}i\in{X}_{f,i}} \; \nu_{g_{i-1,\bar{x}i}^{*\sigma(i)}}\left(\gamma_{\sigma(i)}\right) \nonumber
\end{eqnarray}
and
\begin{eqnarray}
\hat\chi_{g_{i-1}^{*\sigma(i)}}\left(\gamma_{\sigma(i)}\right) &:=&
\inf_{\bar{x}i\in{X}_{f,i}} \; \chi_{g_{i-1,\bar{x}i}^{*\sigma(i)}}\left(\gamma_{\sigma(i)}\right)
\label{for-def-chi-star}
\end{eqnarray}
for $\gamma\in\mathbb{G}AB$
and make use of these new bounds in the analysis.
To illustrate this extra work in the analysis,
we have added
the two striped quantities in Figure~\ref{fig-ana-depend-multi-rep}.
Now we are ready to present the top-down approach to analyze
real-valued functions.
We claim and prove the results in the following theorem.
\begin{theorem}[top-down approach]\label{theo-top-down-first-version}
Let $\text{\rm pr}EDB$ be a predicate description
where $A$ is an axis-parallel box without holes
and let $I:=\{1,\ldots,k\}$.
Let $\sigma:I\to I$ be bijective,
i.e., an order on the elements of $I$.
Finally,
let $g_0:=f$ and $g_j:=\mathbb{R}EP(g_{j-1},x_{\sigma(j)}\to\gamma_{\sigma(j)})$
for all $1\le j \le k$.
We define $\varphi_f$ and $\chi_f$ as
\begin{eqnarray*}
\varphi_{f}(\gamma)
&:=& g_k(\gamma) \\
\chi_{f}(\gamma)
&:=& \prod_{j=1}^k \,
\hat\chi_{g_{j-1}^{*\sigma(j)}}\!\left(\gamma_{\sigma(j)}\right).
\end{eqnarray*}
If $g_k$ is positive on $\,\mathbb{G}AB$ and
$\chi_f$ is invertible on\footnote{
Remember that $\mathbb{G}AL\subset\mathbb{G}AB$.}
$\,\mathbb{G}AL$,
then $f$ is region-value-suitable
with the bounding functions\footnote{
Remember that we can use $\nu_f$
instead of $\chi_f$
because of Formula~(\ref{def-chi-region-suit}).}
$\varphi_f$ and $\chi_f$.
\end{theorem}
\begin{proof}
We prove the claim in three parts.
First we show that
there are certain bounding functions
$\varphi_{g_k}$ and $\chi_{g_k}$
for which $g_k$ is region-value-suitable.
Afterwards we prove that,
if the function $g_i$ has such bounding functions,
then $g_{i-1}$ has also appropriate bounding functions.
And in the end we deduce the claim of the theorem.
Part~1 (basis).
We assume that
$g_k$ is positive on the open set $\,\mathbb{G}AB$, that means,
we consider the function $g_k:\mathbb{G}AB\to\mathbb{R}_{>0}$.
At first we decompose the domain into two parts
(see Figure~\ref{fig-gamma-box-decomposition}).
Let $\gamma\in\mathbb{G}AL$.
We define the unique open axis-parallel box
with opposite vertices $\gamma$ and\footnote{
Remember that we have introduced $\hat\gamma$
to define $\mathbb{G}ABG$ and $\mathbb{G}ALG$.
More information and the formal bound is given
in Remark~\ref{rem-def-region-suit}.2
on Page~\pageref{rem-def-region-suit}.}
$\hat\gamma$ as
\begin{eqnarray*}
\mathbb{G}ASG &:=&
\left\{
\gamma'\in\mathbb{G}AB :
\text{$\gamma_i \le \gamma'_i$ for all $i\in I$}
\right\}\!.
\end{eqnarray*}
We denote its complement within the $\mathbb{G}AB$ by
\begin{eqnarray*}
\mathbb{G}ARG &:=& \mathbb{G}AB \setminus \mathbb{G}ASG.
\end{eqnarray*}
We think of $\mathbb{G}ARG$ as the region of uncertainty and
$\mathbb{G}ASG$ as the region
whose floating-point numbers are guaranteed to evaluate fp-safe.
\begin{figure}
\caption{This is an exemplified 2-dimensional illustration
of the decomposition of the $\mathbb{G}AB$ into $\mathbb{G}ASG$ and $\mathbb{G}ARG$.}
\label{fig-gamma-box-decomposition}
\end{figure}
We claim that $g_k$ is region-value-suitable on $\mathbb{G}AB$
in the following sense:
We set the bounding functions to
\begin{eqnarray*}
\varphi_{g_k}(\gamma) &:=& g_k(\gamma) \\
\chi_{g_k}(\gamma)
&:=&
\prod_{j=1}^k \,
\left( \hat\gamma_j - \gamma_j \right)
\end{eqnarray*}
and claim that two statements are fulfilled for every
$\gamma\in\mathbb{G}AL$:
\begin{enumerate}
\item
The absolute value of $g_k(\gamma')$ is at least $\varphi_{g_k}(\gamma)$
for all points $\gamma'\in\mathbb{G}ASG$.
\item
The volume of $\mathbb{G}ASG$ is $\chi_{g_k}(\gamma)$.
\end{enumerate}
To prove the first statement,
we consider the function value of $g_k$
along a path of $k$ axis-parallel line segments
from $\gamma$ to $\gamma'$.
The path starts at
$\gamma=(\gamma_1,\ldots,\gamma_k)$,
connects
the $(k-1)$ points
$(\gamma'_1,\ldots,\gamma'_j,\gamma_{j+1},\ldots,\gamma_k)$
with $1\le j<k$
in ascending order of $j$
and ends at
$\gamma'=(\gamma'_1,\ldots,\gamma'_k)$.
Along this path,
the function value of $g_k$ is non-decreasing
because of Lemma~\ref{lem-gk-non-decreasing}:
For all $i\in I\!,$
the function $g_k$
is non-decreasing in its $i$-th argument
$\gamma_i\in{\pi_{i}}(\mathbb{G}AB)$
for fixed $\bar{x}i\in{\pi_{\neq i}}(\mathbb{G}AB)$.
The proof of the second statement is straightforward:
Because the box is axis-parallel,
its volume is the product of its edge-lengths.
We make the observation that the function $\chi_{g_k}(\gamma)$
is strictly monotonically increasing on $\mathbb{G}AL$
and hence must be invertible on this domain.
We conclude the first part of the proof:
\emph{For a given $\gamma\in\mathbb{G}AL$,
we have shown
that the function value of $g_k$
is at least $\varphi_{g_k}(\gamma)$
on an area of volume $\chi_{g_k}(\gamma)$.}
This way we have found evidence
that $g_k$ is region-value-suitable
in the meaning above.
Part~2 (induction).
We claim:
\emph{For $i\in I$ and $\gamma\in\mathbb{G}AL$,
the function value of $g_{i-1}$
is at least $\varphi_{g_{i-1}}(\gamma)$
on an area of volume $\chi_{g_{i-1}}(\gamma)$ with}
\begin{eqnarray}
\varphi_{g_{i-1}}(\gamma)
&:=& \varphi_{g_i} (\gamma)
\label{for-proof-phi-star}
\\
\chi_{g_{i-1}}(\gamma)
&:=& \chi_{g_{i}}(\gamma)
\, \cdot \,
\frac { \hat\chi_{g_{i-1}^{*\sigma(i)}} \left(\gamma_{\sigma(i)}\right) }
{\hat\gamma_{\sigma(i)} - \gamma_{\sigma(i)}}.
\label{for-proof-chi-star}
\end{eqnarray}
We prove the claim by mathematical induction for descending $i\in I$.
Basis ($i=k$).
Due to the first part,
we can base the proof on the bounding functions
$\varphi_{g_k}$ and $\chi_{g_k}$.
Induction step ($i\in I$).
We assume that the bounding functions
are true for all $j\in I$ with $i\le j\le k$
and prove the claim for $i-1$.
This is what we do next.
Remember the definition
$g_i := \mathbb{R}EP\!\left(g_{i-1}, x_{\sigma(i)} \to \gamma_{\sigma(i)}\right)$.
In the step backwards
from $g_i$ to $g_{i-1}$,
we observe the following difference in their two axis-parallel domains
due to condition (C2) of Formula~(\ref{for-gi-inf-inf}):
The counterpart to the situation in which
the \mbox{$\sigma(i)$-th} argument of $g_i$ lies in
$\pi_{\sigma(i)}\left(\mathbb{G}ASG\right)$
is the situation in which
the \mbox{$\sigma(i)$-th} argument of $g_{i-1}$ lies in
\begin{eqnarray}\label{for-region-regular-gi-1}
\bar{U}_{g_{i-1}^{*\sigma(i)},\,\delta_{\sigma(i)}} \!\! \left(\bar{x}_{\sigma(i)}\right)
\; \setminus \;
R_{g_{i-1}^{*\sigma(i)},\,\gamma_{\sigma(i)}} \!\! \left(\bar{x}_{\sigma(i)}\right)
\end{eqnarray}
and belongs to the region-regular case.
Furthermore,
the volume of this area is guaranteed to be at least
$\hat\chi_{g^{*\sigma(i)}_{i-1}}\left(\gamma_{\sigma(i)}\right)$
due to Formula~(\ref{for-def-chi-star}).
Because the axis-parallel domains of $g_i$ and $g_{i-1}$
do not differ in directions other than the $\sigma(i)$-th main axis,
their volumes (which are the products of the edge lengths)
differ solely by a factor.
Therefore we can estimate the volume $\chi_{g_{i-1}}(\gamma)$
by the product $\chi_{g_{i}}(\gamma)$
where we replace the factor
where we replace the factor
$({\hat\gamma_{\sigma(i)} - \gamma_{\sigma(i)}})$
by $\hat\chi_{g_{i-1}^{*\sigma(i)}}\!(\gamma_{\sigma(i)})$;
this validates Formula~(\ref{for-proof-chi-star}).
Because of Lemma~\ref{lem-sequence-of-lower-bounds},
the lower-bounding function $\varphi_{g_i}$
is also a lower bound on the absolute value of $g_{i-1}$
on the area which is defined
in Formula~(\ref{for-region-regular-gi-1}).
This validates Formula~(\ref{for-proof-phi-star}).
Part~3 (conclusion).
So far we have shown that
\emph{for a given $\gamma\in\mathbb{G}AL$,
the function value of $f=g_0$
is at least $\varphi_{f}(\gamma)$
on an area of volume $\chi_{f}(\gamma)$ because}
\begin{eqnarray*}
\varphi_{f}(\gamma)
&=&
\varphi_{g_0}(\gamma)
\;\; = \;\;
\varphi_{g_1}(\gamma)
\;\; = \;\;
\cdots
\;\; = \;\;
\varphi_{g_k}(\gamma)
\;\; = \;\;
g_k(\gamma)
\end{eqnarray*}
and because
\begin{eqnarray*}
\chi_{f}(\gamma)
&=&
\chi_{g_{0}}(\gamma) \\
&=&
\chi_{g_{1}}(\gamma)
\, \cdot \,
\frac { \hat\chi_{g_{0}^{*\sigma(1)}} \left(\gamma_{\sigma(1)}\right) }
{\hat\gamma_{\sigma(1)} - \gamma_{\sigma(1)}} \\
&=&
\chi_{g_{2}}(\gamma)
\, \cdot \,
\frac { \hat\chi_{g_{1}^{*\sigma(2)}} \left(\gamma_{\sigma(2)}\right) }
{\hat\gamma_{\sigma(2)} - \gamma_{\sigma(2)}}
\, \cdot \,
\frac { \hat\chi_{g_{0}^{*\sigma(1)}} \left(\gamma_{\sigma(1)}\right) }
{\hat\gamma_{\sigma(1)} - \gamma_{\sigma(1)}} \\[1.5ex]
&& \vdots \\[0.5ex]
&=&
\chi_{g_{k}}(\gamma)
\, \cdot \,
\prod_{i=1}^k \,
\frac { \hat\chi_{g_{i-1}^{*\sigma(i)}} \left(\gamma_{\sigma(i)}\right) }
{\hat\gamma_{\sigma(i)} - \gamma_{\sigma(i)}} \\
&=&
\prod_{j=1}^k \,
\left( \hat\gamma_j - \gamma_j \right)
\, \cdot \,
\prod_{i=1}^k \,
\frac { \hat\chi_{g_{i-1}^{*\sigma(i)}} \left(\gamma_{\sigma(i)}\right) }
{\hat\gamma_{\sigma(i)} - \gamma_{\sigma(i)}} \\
&=&
\prod_{i=1}^k \,
\left( \hat\gamma_{\sigma(i)} - \gamma_{\sigma(i)} \right)
\, \cdot \,
\prod_{i=1}^k \,
\frac { \hat\chi_{g_{i-1}^{*\sigma(i)}} \left(\gamma_{\sigma(i)}\right) }
{\hat\gamma_{\sigma(i)} - \gamma_{\sigma(i)}} \\
&=&
\prod_{i=1}^k \,
{ \hat\chi_{g_{i-1}^{*\sigma(i)}} \left(\gamma_{\sigma(i)}\right) }.
\end{eqnarray*}
If $\chi_f$ is in addition
invertible on the domain $\,\mathbb{G}AL$,
$f$ is region-value-suitable.
This finishes the proof.
\qed
\end{proof}
One prerequisite in the last theorem is
that $g_k$ is positive on the open $\mathbb{G}AB$.
We make the observation that we cannot validate this property
unless we have determined
the entire sequence of replacements from $f=g_0$ down to $g_k$.
That means,
it is possible that the analysis fails at the end of the first phase.
Furthermore, we make the observation
that the bounding functions $\varphi_f$ and $\chi_f$
are actually derived {bottom-up} in the second phase.
That means,
although we technically determine the sequence of functions $g_i$
in a top-down manner on the surface,
the validity of the formulas is derived bottom-up afterwards.
We summarize the steps of the top-down approach
in Figure~\ref{fig-analysis-3}.
\begin{figure}
\caption{Instructions for performing the top-down approach.
The illustration reflects the steps
in which the quantities are determined
according to Theorem~\ref{theo-top-down-first-version}.}
\label{fig-analysis-3}
\end{figure}
\subsection{Examples}
\label{sec-td-example}
\begin{example}\label{ex-inbox-topdown}
We use the top-down approach to determine the bounding functions
$\varphi_{\text{\rm in\_box}}$ and $\chi_{\text{\rm in\_box}}$
for the predicate $\text{\rm in\_box}$.
Again, we assume that the box is fixed somehow and
that the only argument of the predicate is the query point.
(The remaining parameters have little influence on the analysis.)
The predicate can be realized, for example, by the function
\begin{eqnarray*}
f(x) &:=& \min_{1\le i\le k} \, \left\{ \ell_i^2-(x_i-c_i)^2 \right\}
\end{eqnarray*}
where $c\in\mathbb{R}^k$ is the center of the axis-parallel box
and its edge lengths are given by $2\ell$.
We eliminate the variables in ascending order from
$x_1$ to $x_k$, that means, we set $\sigma(i):=i$ for all $1\le i\le k$.
Part~1~($\varphi_{\text{\rm in\_box}}$).
To determine $\varphi_{\text{\rm in\_box}}$
we need $g_k$,
to determine $g_k$
we need the entire sequence of replacements,
and to determine $g_i$
we need to determine the value of the ``$\inf\inf$'' expression
in dependence on $\gamma_i$
in Formula~(\ref{for-gi-inf-inf}).
This is what we do next.
Because of the symmetry of $f$,
the following discussion is valid for all coordinates $x_i$.
To prepare the replacement of variables,
we examine the function $f^{*i}$
for the region-regular case (see Figure~\ref{fig-inbox-1}).
\begin{figure}
\caption{An illustration that supports
the relation between the quantities of $f^{*i}$.}
\label{fig-inbox-1}
\end{figure}
The critical set $C_{f^{*i}}$ contains two points,
namely $c_i-\ell_i$ and $c_i+\ell_i$.
By $\gamma_i$ we denote the minimal distance of $x_i$
to a point in $C_{f^{*i}}$.
(Again
we assume that $\hat\gamma_i$ must be less than $\ell_i$;
otherwise the interior of the box
would be covered entirely by the region of uncertainty
and the predicate would lose its meaning.)
The absolute value of $f$ grows in the distance to $C_{f^{*i}}$.
To determine a guaranteed lower bound on the absolute value of $f$,
we assume that the distance of $x_i$ to $C_{f^{*i}}$ is exactly $\gamma_i$.
In addition we make the observation
that $|f|$ grows slower towards the interior of the box
than away from the box;
therefore we must also assume that $x_i$
lies between $c_i-\ell_i$ and $c_i+\ell_i$
to get a convincing bound.
This leads to the worst-case consideration $|x_i-c_i|=\ell_i-\gamma_i$.
We make use of the binomial formula
to derive the inequality
\begin{eqnarray*}
\left| \ell_i^2 - \left( x_i - c_i \right)^2 \right|
&\ge&
\left| \ell_i^2 - \left( \ell_i-\gamma_i \right)^2 \right| \\
&=&
\left| 2 \ell_i \gamma_i - \gamma_i^2\right| \\
&=&
\left| \left( 2 \ell_i - \gamma_i \right) \gamma_i \right|.
\end{eqnarray*}
Next we define the functions $g_i$ as
\begin{eqnarray*}
g_i (\gamma_1,\ldots,\gamma_i,x_{i+1},\ldots,x_{k})
&:=&
\min \;
\Bigl(
\bigl\{
(2\ell_j-\gamma_j)\gamma_j : 1\le j\le i
\bigr\}
\;\cup\;
\bigl\{
\ell_j^2 - \left( x_j-c_j \right)^2 : i < j \le k
\bigr\}
\Bigr)
\end{eqnarray*}
and in the end, the sequence of replacements leads to
\begin{eqnarray*}
\varphi_\text{\rm in\_box} (\gamma)
&:=&
g_k(\gamma) \\
&=&
\min_{1\le j\le k}
\left( 2 \ell_j-\gamma_j \right) \gamma_j.
\end{eqnarray*}
Part~2~($\chi_{\text{\rm in\_box}}$).
Now we determine a bound on the volume of
the complement of the region of uncertainty.
For every $i\in I$,
a valid bounding function is given by
\begin{eqnarray*}
\hat\chi_{g^{*i}_{i-1}}\left(\gamma_{i}\right)
&=& 2 \delta_i - 4 \gamma_{i}.
\end{eqnarray*}
This results in the following bound on the total volume:
\begin{eqnarray*}
\chi_{\text{\rm in\_box}}\left(\gamma\right)
&=& \prod_{i=1}^k \,
{ \hat\chi_{g_{i-1}^{*i}} \left(\gamma_{i}\right) } \\
&=& \prod_{i=1}^{k} \left( 2 \delta_i - 4 \gamma_{i} \right).
\end{eqnarray*}
Now that we have determined the bounding functions
$\varphi_{\text{\rm in\_box}}$ and $\chi_{\text{\rm in\_box}}$,
it would be possible to finish the analysis
with the method of quantified relations---but
this is not our interest in this section.
$\bigcirc$
\end{example}
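For illustration,
the two bounding functions that we have just derived
can be evaluated numerically.
The following Python sketch is only an illustration of the formulas above;
the helper names are our own and do not refer to any existing implementation.
\begin{verbatim}
# Sketch (assumption): numerical evaluation of the bounding functions
# derived above for the in_box predicate.

def phi_in_box(gamma, ell):
    # phi(gamma) = min_j (2*ell_j - gamma_j) * gamma_j
    return min((2 * l - g) * g for g, l in zip(gamma, ell))

def chi_in_box(gamma, delta):
    # chi(gamma) = prod_i (2*delta_i - 4*gamma_i)
    vol = 1.0
    for g, d in zip(gamma, delta):
        vol *= 2 * d - 4 * g
    return vol

# Example: a box with half edge lengths ell = (1, 1),
# perturbation parameter delta = (2, 2) and gamma = (0.1, 0.2).
print(phi_in_box([0.1, 0.2], [1.0, 1.0]))  # lower bound on |f| outside R_f
print(chi_in_box([0.1, 0.2], [2.0, 2.0]))  # volume bound for the complement
\end{verbatim}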
\begin{example}
This is the continuation of Examples~\ref{ex-inbox-gi-dom-crit}
and~\ref{ex-inbox-topdown}.
Here we want to investigate the regions of uncertainty
for the various functions $g_i$.
More precisely,
we are interested in the correlation between the regions
which are defined bottom-up in the second phase of the approach.
Figure~\ref{fig-crit-set-inbox-2}
visualizes the regions of uncertainty for the functions $g_i$.
The regions of uncertainty are light shaded
whereas their complements are dark shaded.
The decomposition is initiated by the choice of $\gamma\in\mathbb{G}AB$.
Since each component $\gamma_i$ is positive,
neighborhoods of the critical set are added to the region of uncertainty
on the way back up to $g_0$.
\begin{figure}
\caption{Illustration of the regions of uncertainty
for the various domains in the analysis of
the 2-dimensional $\text{\rm in\_box}$ predicate.}
\label{fig-crit-set-inbox-2}
\end{figure}
As we have seen in Example~\ref{ex-inbox-gi-dom-crit},
the upper line segment of $C_{g_0}$
causes the upper line of $C_{g_1}$.
Conversely, we can now see that the upper line of $C_{g_1}$
causes a region of uncertainty around the
\emph{line which passes through} the upper line segment of $C_{g_0}$.
Be aware
that our top-down approach is designed such that this behavior
is forced for all non-region-regular situations.
This implies that our method does not need any kind of exceptional sets.
On the contrary,
there are no restrictions on the measure of the critical sets at all:
the only thing that matters is
whether $f$ is region-suitable or not.
$\bigcirc$
\end{example}
\subsection{Further Remarks}
\label{sec-td-deeper-insight}
\newcommand{\CEXT}{{C^\text{\rm ext}}}
A different concept of
the top-down approach is published in \cite{MOS11}.
Although both presentations rest upon the same motivation,
there are some technical differences in the realization.
To avoid misunderstandings in the presentation
and gain a deeper insight into our approach,
we end this section with selected questions.
\emph{Does $f$ have to be (upper- or lower-) continuous
to be top-down analyzable?}
No, we do not assume any kind of continuity in our approach.
Points of discontinuity may be critical,
but they do not have to be critical.
\emph{May we assume that $f$ is continuous?}
No, the top-down approach is defined recursively and
the auxiliary functions $g_i$ are not continuous in general.
Consider for example the continuous polynomial
$f(x_1,x_2) := x_1^2 + x_2^2 - 1$
which is the planar ``in unit circle'' predicate.
Then $g_1(x_1,\gamma_2)$ is not continuous at four points
for fixed $\gamma_2$.
The function is illustrated in Figure~\ref{fig-exa-in-circle}.
That is the reason
why the top-down approach \emph{must} work for discontinuous functions.
\begin{figure}
\caption{Exemplified drawing of the ``in unit circle'' predicate
after the first replacement.
The function values on the interval $[-1,1]$
vary with $\gamma_2$.}
\label{fig-exa-in-circle}
\end{figure}
\emph{Does a critical set of measure zero imply that $f$ is region-suitable?}
No, not in general.
A notorious example is the density of $\mathbb{Q}$ in $\mathbb{R}$.
Let $A\subset\mathbb{R}$ be an interval.
Although $A\cap\mathbb{Q}$ is a set of measure zero,
there is no $\varepsilon>0$ such that the neighborhood
$U_\varepsilon(A\cap\mathbb{Q})$ has a volume smaller than $\mu(A)$.
But the latter is a necessary criterion for region-suitability
and the applicability of controlled perturbation.
\emph{Does region-suitability imply a finite critical set?}
No.
A counter-example is the function $x\cdot\sin\left(\frac{1}{x}\right)$
which is region-suitable
although it has infinitely many zeros in any finite neighborhood of zero.
(By the way, this function is also value-suitable.)
We summarize:
\emph{Critical sets of region-suitable functions are countable,
but not every countable critical set implies region-suitability.}
\emph{Is it possible to neglect isolated points in the analysis?}
We may never exclude critical points from the analysis;
they are always used to define the region of uncertainty.
We may exclude less-critical points
provided that we adjust the success-probability ``by hand''.
We may neglect non-critical points
provided that we still determine
the correct inf-value-suitable bound $\varphi_{\inf f}$.
(See also Section~\ref{sec-succ-prob}.)
\emph{May we add additional points to the critical set?}
Yes, we may add points to the critical set
provided that $f$ is still guaranteed to be region-suitable.
(See also Section~\ref{sec-succ-prob}.)
\emph{Can we decide if $f$ is top-down analyzable
without developing the sequence of replacements?}
It is a necessary condition for the top-down analyzability of $f$
that $g_k$ is positive everywhere.
It is not clear
how we can guarantee this property in general without deriving $g_k$.
\section[Determining the Lower Fp-safety Bound (1st Stage, s-suit)]{Determining the Lower Fp-safety Bound}
\label{sec-guards-safetybounds}
Here we introduce the design of
guards and fp-safety bounds.
Guards are necessary to implement guarded evaluations in ${\cal A}_\text{\rm G}$.
In Section~\ref{sec-fea-guards}
we explain how guards can be implemented for a wide class of functions
including polynomials.
To analyze the behavior of guards,
we introduce fp-safety bounds
in Section~\ref{sec-fea-fpsafety}.
We explain how we determine the fp-safety bound
in the analysis
(see Figure~\ref{fig-ana-fea}).
Furthermore,
we prove the fp-safety bounds
which we have used in previous sections.
\begin{figure}
\caption{An error analysis is used
to derive the bounding function for the safety-suitability
in the first stage of the analysis.}
\label{fig-ana-fea}
\end{figure}
\subsection{Implementing Guarded Evaluations}
\label{sec-fea-guards}
Our presentation of guarded evaluations
is based on rounding error analyses
following the approach in~\cite{F97,BFS01,MOS11}.
A refinement is presented in the appendix of~\cite{MOS11}.
\subsection*{Rounding Error Analysis}
The implementation of guards is based on maximum error bounds.
To determine the error bounds we use rounding error analyses.
Note that the error bound of a function $f$
depends on the formula $E$ that realizes $f$
and, especially, on the chosen \emph{sequence of evaluation}.
In Table~\ref{def-fperrorbound-inline}
we cite some rules to determine error bounds.
Expressions $E$ that are composed of addition,
subtraction, multiplication and absolute value can be bounded
by the value $B_E$ in the last row of the table.
This includes the evaluation of polynomials;
for further operators see~\cite{F97,BFS01,MOS11}.
The quantities $\text{\rm ind}_E$ and $\text{\rm sup}_E$
are derived according to the sequence of evaluation of $E$.
The value $\text{\rm ind}_x$ is 0 if $x\in\mathbb{F}L$, and it is 1 if $x$ is rounded.
\begin{example}
We determine the error bound for the expression
\begin{eqnarray*}
E(x_1,\ldots,x_k)\mathbb{R}F
&=& (((a\cdot x_1)\cdot x_2)\cdots x_k)\mathbb{R}F
\end{eqnarray*}
where $k\in\mathbb{N}$, $a\in\mathbb{R}$ is a coefficient and
$x\in U_\delta(\bar{x})\mathbb{R}G\subseteq[-2^{e_\text{\rm max}},2^{e_\text{\rm max}}]^k$.
A worst-case consideration leads to
$\text{\rm ind}_{a}=1$ and $\text{\rm sup}_{a}=\mathbb{C}ARDI{a\mathbb{R}F}$ for the coefficient and
$\text{\rm ind}_{x_i}=0$ and $\text{\rm sup}_{x_i}=\mathbb{C}ARDI{x_i\mathbb{R}F}$ for $1\le i\le k$.
Then we obtain
$\text{\rm ind}_{ax_1}=2$ and $\text{\rm sup}_{ax_1}=\mathbb{C}ARDI{ax_1\mathbb{R}F}$
after the first multiplication.
Taking all multiplications into account,
we get
$\text{\rm ind}_{E}=k+1$ and $\text{\rm sup}_{E}=\mathbb{C}ARDI{ax_1\cdots x_k\mathbb{R}F}$.
According to Table~\ref{def-fperrorbound-inline}
we obtain the \emph{dynamic error bound}
\begin{eqnarray*}
B_E(L,x) &=&
(k+1) \cdot \mathbb{C}ARDI{ax_1\cdots x_k\mathbb{R}F} \cdot 2^{-L}
\end{eqnarray*}
and the \emph{static error bound}
\begin{eqnarray*}
B_E(L) &=&
(k+1) \cdot \mathbb{C}ARDI{a\mathbb{R}F} \cdot 2^{k{e_\text{\rm max}}-L}
\end{eqnarray*}
where $2^{e_\text{\rm max}}$ is an upper bound on the absolute value of a perturbed input.
$\bigcirc$
\end{example}
\begin{remark}\label{rem-errorbound-zero}
We make the important observation that
the bound $B_E(L)$ approaches zero when $L$ approaches infinity,
that means,
\begin{eqnarray*}
\lim_{L\to\infty} B_E(L) &=& 0.
\end{eqnarray*}
Furthermore we observe
that \emph{all error bounds which are derived from
Table~\ref{def-fperrorbound-inline}
have this property.}
$\bigcirc$
\end{remark}
\begin{table}[t]
\begin{eqnarray*}
\begin{array}{|@{\qquad}c@{\qquad}|@{\qquad}c@{\qquad}|@{\qquad}c@{\qquad}|}\hline
\tabrule E & \text{\rm sup}_E & \text{\rm ind}_E
\\\hline\hline
\tabrule x & \mathbb{C}ARDI{x\,\mathbb{R}F} & \text{0 or 1}
\\\hline
\tabrule E_1\pm E_2
& (\text{\rm sup}_{E_1}+\text{\rm sup}_{E_2})\mathbb{R}F
& 1+\max\left\{\text{\rm ind}_{E_1},\text{\rm ind}_{E_2}\right\}
\\\hline
\tabrule E_1 \cdot E_2
& (\text{\rm sup}_{E_1} \cdot \text{\rm sup}_{E_2})\mathbb{R}F
& 1 + \text{\rm ind}_{E_1} + \text{\rm ind}_{E_2}
\\\hline
\tabrule |E|
& \text{\rm sup}_{E}
& \text{\rm ind}_E
\\\hline\hline
\multicolumn{3}{|c|}{\tabrule B_E := \text{\rm ind}_E \cdot \text{\rm sup}_E \cdot 2^{-L}}
\\\hline
\end{array}
\end{eqnarray*}
\caption{This table reprints parts of Table~2.1
in Funke~\cite[p.~11]{F97}.
The row for $|E|$ is added by us.}
\label{def-fperrorbound-inline}
\end{table}
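For illustration,
the rules of Table~\ref{def-fperrorbound-inline}
can be read as a small recursion over the expression tree.
The following Python sketch is our own illustration
and not part of the cited implementations;
it computes $\text{\rm sup}_E$, $\text{\rm ind}_E$ and the static bound $B_E$
for expressions built from leaves, addition, subtraction,
multiplication and absolute value.
\begin{verbatim}
# Sketch (assumption): recursive computation of sup_E, ind_E and
# B_E = ind_E * sup_E * 2^(-L) following the rules of the table.
# An expression is represented by the pair (sup_E, ind_E).

def leaf(sup, rounded):
    # a leaf x with upper bound 'sup' on |x|; ind is 1 if x is rounded
    return (sup, 1 if rounded else 0)

def add(e1, e2):                    # covers E1 + E2 and E1 - E2
    (s1, i1), (s2, i2) = e1, e2
    return (s1 + s2, 1 + max(i1, i2))

def mul(e1, e2):
    (s1, i1), (s2, i2) = e1, e2
    return (s1 * s2, 1 + i1 + i2)

def absval(e):                      # |E| changes neither sup nor ind
    return e

def static_bound(e, L):
    sup, ind = e
    return ind * sup * 2.0 ** (-L)

# Example: E = (a*x1)*x2 with |a| <= 3 (rounded), |x_i| <= 2**emax (exact).
emax = 10
E = mul(mul(leaf(3.0, True), leaf(2.0 ** emax, False)),
        leaf(2.0 ** emax, False))
print(static_bound(E, L=53))        # (k+1) * |a| * 2^(k*emax - L), k = 2
\end{verbatim}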
\subsection*{Guarded Evaluation}
In guarded algorithms ${\cal A}_\text{\rm G}$
every predicate evaluation $f(x)\mathbb{R}F$
must be protected by a guard $\mathbb{G}G_f(x)$ that verifies
the sign of the result.
Guards can be implemented
using the dynamic
(or the weaker static) error bounds.
Let $B_f(L,x)$ be an upper bound on the rounding error of $f(x)\mathbb{R}F$
for floating point arithmetic $\mathbb{F}L$,
that means,
\begin{eqnarray}\label{for-error-bound}
B_f(L,x) &\ge& \mathbb{C}ARDI{f(x)\mathbb{R}FL - f(x)}.
\end{eqnarray}
Then we can immediately derive the implication
\begin{eqnarray}\label{for-guard-dynamic}
\mathbb{C}ARDI{f(x)\mathbb{R}F} > B_f(L,x)
\mathbb{F}ORMSEP &\mathbb{R}ightarrow& \mathbb{F}ORMSEP
\text{\rm sign}(f(x)\mathbb{R}FL) = \text{\rm sign}(f(x)).
\end{eqnarray}
We use the inequality on the left-hand side to construct a
\emph{guard $\mathbb{G}G_f$ for $f$} where
\begin{eqnarray*}
\mathbb{G}G_f(x)
&:=&
\big( \; \mathbb{C}ARDI{f(x)\mathbb{R}F} > B_f(L,x) \, \big).
\end{eqnarray*}
If $\mathbb{G}G_f(x)$ is true, the floating-point evaluation of $f$ at $x$ yields the correct sign.
Note that this definition is in accordance with Definition~\ref{def-guard}
on Page~\pageref{def-guard}.
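For illustration only,
a guarded sign evaluation along the lines of the definition above
could be realized as in the following Python sketch;
the evaluation routine and the error-bound routine
are assumptions of the sketch.
\begin{verbatim}
# Sketch (assumption): a guarded sign computation.  'f_fl' evaluates f
# with the current floating-point arithmetic and 'B_f' returns the
# dynamic error bound B_f(L, x).

def guarded_sign(f_fl, B_f, L, x):
    value = f_fl(x)
    if abs(value) > B_f(L, x):         # the guard for f
        return 1 if value > 0 else -1  # the sign is certified
    return None                        # guard failed: precision insufficient
\end{verbatim}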
\subsection{Analyzing Guards With Fp-safety Bounds}
\label{sec-fea-fpsafety}
Now we explain how to analyze the behavior of guards
according to~\cite{F97,BFS01,MOS11}.
Remember that we perform the analysis in real space.
The implication
\begin{eqnarray}\label{for-fpsafety-static}
\mathbb{C}ARDI{f(x)} > 2B_f(L, x)
\mathbb{F}ORMSEP &\mathbb{R}ightarrow& \mathbb{F}ORMSEP
\mathbb{C}ARDI{f(x)\mathbb{R}F} > B_f(L, x)
\end{eqnarray}
is true because of Formula~(\ref{for-error-bound}).
The inequality on the left hand side
is a relation that we can safely verify in real space.
We can always use the static error bound
to construct a \emph{fp-safety bound $S_{\inf f}$ for $f$}
\begin{eqnarray*}
S_{\inf f}(L) &:=& 2B_f(L)
\end{eqnarray*}
where $B_f(L)$ is the static error bound.
Note that this definition is in accordance with
Definition~\ref{def-fpsafetybound}
on Page~\pageref{def-fpsafetybound}
because
the implications
in Formulas~(\ref{for-guard-dynamic})
and~(\ref{for-fpsafety-static})
guarantee the desired implication in Formula~(\ref{for-fpsafety-final}).
\emph{Because of Remark~\ref{rem-errorbound-zero},
the fp-safety bound $S_{\inf f}(L)$
fulfills the safety-condition on page~\pageref{def-safety-cond}
by construction.}
Next we derive a \mbox{fp-safety} bound for univariate polynomials.
\begin{corollary}\label{col-unipolysafety}
Let $f$ be a univariate polynomial
\begin{eqnarray}\label{for-2-unipolyrep}
f(x) &=& a_d \cdot x^d + a_{d-1} \cdot x^{d-1} +
\ldots + a_1 \cdot x + a_0
\end{eqnarray}
of degree $d$.
Then
\begin{eqnarray}\label{for-sfl-univariate}
S_{\inf f}(L) &:=& (d+2) \cdot \max_{1\le i\le d} |a_i| \cdot 2^{{e_\text{\rm max}}(d+1)+1-L}
\end{eqnarray}
is a fp-safety bound for $f$ on $[-2^{e_\text{\rm max}},2^{e_\text{\rm max}}]$ where ${e_\text{\rm max}}\in\mathbb{N}$.
\end{corollary}
\begin{proof}
We apply the error analysis of this section.
We evaluate Formula~(\ref{for-2-unipolyrep}) from the right to the left.
For a static error bound we get
\begin{eqnarray*}
B_f(L)
&:=& \text{\rm ind}_f \cdot \text{\rm sup}_f \cdot 2^{-L}
\;\;=\;\;
(d+2) \cdot \left(\max_{1\le i\le d} |a_i| \cdot 2^{{e_\text{\rm max}}(d+1)}\right) \cdot 2^{-L}.
\end{eqnarray*}
Finally we set the fp-safety bound to $S_{\inf f}(L) := 2B_f(L)$.
\qed
\end{proof}
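The bound of Corollary~\ref{col-unipolysafety} is straightforward to evaluate.
The following Python sketch is merely an illustration;
it conservatively includes $a_0$ in the maximum,
which can only enlarge the bound and therefore keeps it valid.
\begin{verbatim}
# Sketch (assumption): lower fp-safety bound for a univariate polynomial
# with coefficients a_0, ..., a_d on [-2^emax, 2^emax].

def S_inf_univariate(coeffs, emax, L):
    d = len(coeffs) - 1
    amax = max(abs(a) for a in coeffs)   # conservatively includes a_0
    return (d + 2) * amax * 2.0 ** (emax * (d + 1) + 1 - L)

# Example: f(x) = 3x^2 - x + 5 on [-2^4, 2^4] with precision L = 53.
print(S_inf_univariate([5.0, -1.0, 3.0], emax=4, L=53))
\end{verbatim}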
Multiplications usually cause larger rounding errors
than additions.
Surprisingly,
the evaluation of univariate polynomials
with the Horner scheme\footnote{
For Horner scheme see Hotz~\cite{H90}.}
(which minimizes the number of multiplications)
does not lead to
a smaller error bound than the one we have derived in the proof.
Next we derive an error bound for $k$-variate polynomials.
We define $x^\iota:=x_1^{\iota_1} \cdot \ldots \cdot x_k^{\iota_k}$
for $\iota\in\mathbb{N}_0^k$ and $x\in\mathbb{R}^k$.
\begin{corollary}\label{cor-multipolysafety}
Let $f$ be the $k$-variate polynomial ($k\ge 2$)
\begin{eqnarray*}
f(x) &:=& \sum_{\iota\in{\cal I}} a_\iota x^\iota
\end{eqnarray*}
where ${\cal I}\subset\mathbb{N}_0^k$ is finite
and $a_\iota\in\mathbb{R}_{\neq 0}$ for all $\iota\in{\cal I}$.
Let $d$ be the total degree of $f$ and
let $\mathbb{N}T$ be the number of terms in $f$.
Then
\begin{eqnarray}\label{for-sfl-multivariate}
S_{\inf f}(L) &:=& (d+1+\lceil\log \mathbb{N}T\rceil) \cdot
\mathbb{N}T \cdot \max_{\iota\in{\cal I}} |a_\iota| \cdot
2^{{e_\text{\rm max}} d+1-L}
\end{eqnarray}
is a fp-safety bound for $f$ on $[-2^{e_\text{\rm max}},2^{e_\text{\rm max}}]^k$ where ${e_\text{\rm max}}\in\mathbb{N}$.
\end{corollary}
\begin{proof}
We begin with the determination of the error bound $B_f$.
The maximum absolute value of the term $a_\iota x^\iota$
is obviously upper-bounded by
the product of a bound on $a_\iota$ and a bound on $x^\iota$.
Because $|x_i|\le 2^{e_\text{\rm max}}$ for all $1\le i \le k$
we have
\begin{eqnarray*}
\text{\rm sup}_{a_\iota x^\iota}
&\le& \max_{\iota\in{\cal I}} |a_\iota| \cdot 2^{{e_\text{\rm max}} d}.
\end{eqnarray*}
Since we know the number $\mathbb{N}T$ of terms in $f$\!,
we can then upper-bound $\text{\rm sup}_f$ by
\begin{eqnarray*}
\text{\rm sup}_f
&\le& \mathbb{N}T \cdot \max_{\iota\in{\cal I}} |a_\iota| \cdot 2^{{e_\text{\rm max}} d}.
\end{eqnarray*}
In addition, we have $\text{\rm ind}_{a_\iota x^\iota}=d+1$
since we evaluate $d$ multiplications and
only $a_\iota$ may not be in the set $\mathbb{F}$.
(Remember that, because of the perturbation,
the values $x_i$ belong to the grid $\mathbb{G}$
which is a subset of $\mathbb{F}$.)
To keep $\text{\rm ind}_f$ as small as possible,
we sum up the $\mathbb{N}T$ terms pairwise such that the tree of evaluation
has depth $\lceil\log \mathbb{N}T\rceil$.
This leads to $\text{\rm ind}_f=d+1+\lceil\log \mathbb{N}T\rceil$.
Therefore we conclude that
\begin{eqnarray*}
B_f(L) &=& \left(d+1+\lceil\log \mathbb{N}T\rceil\right) \cdot
\left(\mathbb{N}T \cdot \max_{\iota\in{\cal I}} |a_\iota| \cdot 2^{{e_\text{\rm max}} d}\right)
\cdot 2^{-L}.
\end{eqnarray*}
As usual we set $S_{\inf f}(L):=2 B_f(L)$.
\qed
\end{proof}
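Analogously,
the bound of Corollary~\ref{cor-multipolysafety}
can be evaluated from a list of terms.
Again, the following Python sketch is only an illustration
with hypothetical helper names.
\begin{verbatim}
import math

# Sketch (assumption): lower fp-safety bound for a k-variate polynomial
# given as a list of terms (a_iota, iota) with exponent vectors iota.

def S_inf_multivariate(terms, emax, L):
    NT = len(terms)                               # number of terms
    d = max(sum(iota) for _, iota in terms)       # total degree
    amax = max(abs(a) for a, _ in terms)
    return ((d + 1 + math.ceil(math.log2(NT))) * NT * amax
            * 2.0 ** (emax * d + 1 - L))

# Example: f(x, y) = 2x^2y - 3y + 1 on [-2^4, 2^4]^2.
terms = [(2.0, (2, 1)), (-3.0, (0, 1)), (1.0, (0, 0))]
print(S_inf_multivariate(terms, emax=4, L=53))
\end{verbatim}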
\section{The Treatment of Range Errors (All Components)}
\label{sec-rational-function}
In this section we address a floating-point issue that
is caused by poles of rational functions.
So far,
the implementation and analysis of functions
has been based on the fact that
signs of floating-point evaluations
are only unreliable on certain neighborhoods of zero.
Now we argue that signs of evaluations may
also be unreliable on neighborhoods of poles.
We do this in order to embed rational functions
into our theory.
In Section~\ref{sec-range-error-implement}
we extend the previous implementation considerations such that
they can deal with range errors.
In Section~\ref{sec-range-error-analysis}
we expand the analysis
to range errors of the floating-point arithmetic $\mathbb{F}$.
\emph{This is the first presentation that gains generality by
the practical and theoretical treatment of range errors
which, for example, are caused by poles of rational functions.}
\subsection{Extending the Implementation}
\label{sec-range-error-implement}
We examine the simple rational function $f(x)=\frac{1}{x}$.
It is well-known that the function value of $f$
at the pole $x=0$ does not exist in $\mathbb{R}$
(unless we introduce the unsigned symbolic value $\pm\infty$,
see Forster~\cite{F06}).
We make the important observation that
we cannot determine the function value of $f$
{in a neighborhood of a pole}
with floating-point arithmetic $\mathbb{F}LK$
because the absolute value of $f$ may be \emph{too large.}
Moreover,
we observe that the \emph{sign of $f$ may change}
on a neighborhood of a pole.
Both observations suggest that
\emph{poles play a similar role like zeros
in the context of controlled perturbation.}
Now we extend the implementation
such that it is able to deal with range errors.
We extend the implementation of guarded evaluations
in the following way:
If the absolute value of $f$ cannot be represented
with the floating-point arithmetic $\mathbb{F}LK$ because it is too large,
we abort ${\cal A}_\text{\rm G}$ with the notification of a \emph{range error}.
We do not care about the source of the range error:
It may be ``division by zero'' or ``overflow.''
The implementation of the second guard per evaluation is straightforward.
Some programming languages provide an exception handling that
can be used for this objective.
In addition we must change the implementation
of the controlled perturbation algorithm ${\cal A}_\text{\rm CP}$.
If ${\cal A}_\text{\rm G}$ fails because of a \emph{range error,}
we increase the bit length $K$ of the exponent
(instead of the precision $L$).
Be aware that we talk about the exponent,
that means,
an additive augmentation of the bit length
implies a multiplicative augmentation of the range.
These simple changes guarantee that the floating-point arithmetic $\mathbb{F}LK$
gets adjusted to the necessary dimensions in neighborhoods of poles
or in regions where the function value is extremely large.
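The adjustment rule that we have just described can be summarized
in a small sketch.
The following Python fragment is purely illustrative;
the routine \texttt{run\_guarded} and its return values are assumptions
of the sketch and do not refer to an existing implementation.
\begin{verbatim}
import math

# Sketch (assumption): parameter adjustment in the presence of range
# errors.  'run_guarded' is assumed to return "ok", "precision" (a guard
# failed) or "range" (overflow or division by zero was signalled).

def adjust(run_guarded, L, K, psi_L=2.0, psi_K=1):
    while True:
        outcome = run_guarded(L, K)
        if outcome == "ok":
            return L, K
        elif outcome == "range":
            K += psi_K                    # additive growth of the exponent
        else:
            L = math.ceil(psi_L * L)      # multiplicative growth of L
\end{verbatim}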
\subsection{Extending the Analysis of Functions}
\label{sec-range-error-analysis}
For the purpose of dealing with range errors in the analysis,
we need to adapt several parts of the analysis tool box.
Below we present the necessary changes and extensions in the same order
in which we have developed the theory.
\subsection*{Criticality and the region-suitability}
The changes to deal with range errors
affect the interface
between the two stages of the analysis of functions.
At first we extend the definition of criticality.
We demand that certain points
(e.g.~poles of rational functions)
are {critical}, too, and
refine Definition~\ref{def-critical-set} in the following way.
\begin{definition}[critical]\label{def-critical-set-second}
Let $\text{\rm pr}ED$ be a predicate description.
We call a point $c\in\bar{U}_\delta(\bar{x})$
\emph{critical} if
\begin{eqnarray*}
\inf_{x\in U_\varepsilon(c)\setminus\{c\}} \; \left|f(x)\right|
= 0
\mathbb{F}ORMSEP & \text{\rm or} & \mathbb{F}ORMSEP
\sup_{x\in U_\varepsilon(c)\setminus\{c\}} \; \left|f(x)\right|
= \infty
\end{eqnarray*}
on a neighborhood $U_\varepsilon(c)$
for infinitesimally small $\varepsilon>0$.
Furthermore,
we call $c$ \emph{less-critical}
if $c$ is not critical,
but $f(c)=0$ or $c$ is a pole.
Points that are neither critical nor less-critical
are called \emph{non-critical}.
\end{definition}
For simplicity and as before,
we define the
\emph{critical set $C_{f,\delta}\label{def-crit-set-final-inline}$}
to be the union of critical and less-critical points within $\bar{U}_\delta(\bar{x})$.
Be aware that the new definition of criticality
may expand the region of uncertainty.
As a consequence it affects the \emph{region-suitability}
and the bound $\nu_f$, respectively $\chi_f$.
Note that
Definition~\ref{def-critical-set-second}
guarantees that we exclude neighborhoods of poles from now on.
Because we have integrated poles into the definition of criticality,
we have implicitly adapted the region-suitability.
\subsection*{The sup-value-suitability}
So far we have only considered $\inf|f|$
outside of the region of uncertainty.
But to get a quantified description of range issues in the analysis,
we need to consider $\sup|f|$ as well.
What we have called value-suitability so far
is now called, more precisely, \emph{inf-value-suitability}.
Its bounding function, that we have called $\varphi_f(\gamma)$ so far,
is now called
$\varphi_{\inf f}(\gamma)\label{def-phi-inf-final-inline}$.
In addition to Definition~\ref{def-value-suit}
we introduce \emph{sup-value-suitability},
that means,
there is an upper-bounding function
$\varphi_{\sup f}(\gamma)\label{def-phi-sup-final-inline}$
on the absolute value of $f$
outside of the region of uncertainty $R_f$.
We show how the new bound is determined with the bottom-up approach
later on.
Based on the new terminology,
we call $f$ \emph{(totally) value-suitable}
if $f$ is both: inf-value-suitable and sup-value-suitable.
\subsection*{The sup-safety-suitability and analyzability}
We also extend Definition~\ref{def-safety-suit}.
What we have called safety-suitability so far
is now called, more precisely, \emph{inf-safety-suitability}.
Its bounding function
$S_{\inf f}(L)\label{for-Sinf-final-inline}$
is now called the \emph{lower fp-safety bound}.
In addition
we introduce \emph{sup-safety-suitability},
that means,
there is an invertible upper-bounding function $S_{\sup f}(K)$
on the absolute value of $f$
with the following meaning:
If we know that
\begin{eqnarray*}
|f(x)| &\le& S_{\sup f}(K),
\end{eqnarray*}
then $f(x)\mathbb{R}F$ is definitely a finite number in $\mathbb{F}LK$.
We call $S_{\sup f}(K)$ the \emph{upper fp-safety bound}.
Such a bound is trivially given by\footnote{
Firstly,
the largest floating-point number that is representable with $\mathbb{F}LK$ is
$(2-2^{-L}) 2^{2^{K-1}}$.
Secondly,
we must take the maximal floating-point rounding error into account.}
\begin{eqnarray*}
S_{\sup f}(K)\label{for-Ssup-final-inline}
&:=&
2^{2^{K-1}} - \, S_{\inf f}(L).
\end{eqnarray*}
Based on the new terminology,
we call $f$ \emph{(totally) safety-suitable}
if $f$ is both: inf-safety-suitable and sup-safety-suitable.
As a consequence,
we call $f$ \emph{analyzable}
if $f$ is region-suitable, value-suitable (both subtypes)
and safety-suitable (both subtypes).
\subsection*{The method of quantified relations}
Next we extend the method of quantified relations
such that the new bounds on the range of floating-point arithmetic
are included into the analysis.
In addition
to the precision function $L_f(p)$,
we determine the bounding function
\begin{eqnarray*}\label{for-Kf-inline}
K_f(p)
&:=&
\left\lceil
S_{\sup f}^{-1} \left(
\varphi_{\sup f} \left(
t\cdot\nu_f^{-1}\left({\varepsilon_{\nu}\left(p\right)}\right)
\right)\right)
\right\rceil.
\end{eqnarray*}
That means,
we deduce the maximum absolute value of $f$
outside of the region of uncertainty from the probability;
afterwards
we use the upper fp-safety bound
to deduce the necessary bit length of the exponent.
The derivation of $K_f(p)$ is completely analogous to the derivation of ${L_\text{\rm safe}}(p)$
in Steps~1--5 of the method of quantified relations.
We summarize our results so far:
If we have the bounding functions of the interface of the function analysis,
we know that the floating-point arithmetic $\mathbb{F}_{L_f(p),K_f(p)}$
is sufficient to safely evaluate $f$
at a random grid point in the perturbation area
with probability $p$.
Furthermore,
we can derive a probability function $p_f$
if $f$ is analyzable and $\varphi_{\inf f}$ and $\varphi_{\sup f}$
are both invertible.
Analogously to the definition of ${p_\text{\rm inf}}(L)$
in Remark~\ref{rem-meth-quan-rela}.4,
we derive the additional bound on the probability
\begin{eqnarray*}\label{for-supsafe-inline}
{p_\text{\rm sup}}(K)
&:=&
\varepsilon_\nu^{-1}
\left( \nu_f
\left( \frac{1}{t} \cdot \varphi_{\sup f}^{-1}
\left( S_{\sup f}(K)
\right) \right) \right)
\end{eqnarray*}
from $K_f(p)$.
This leads to the final \emph{probability function\label{def-2-prob-func-inline}}
$p_f:\mathbb{N}\times\mathbb{N}\to(0,1)$ where
\begin{eqnarray*}\label{for-pfLK-inline}
p_f(L,K) &:=&
\min \left\{
{p_\text{\rm inf}}(L),
\, {p_\text{\rm sup}}(K),
\, {p_\text{\rm grid}}(L)
\right\}
\end{eqnarray*}
for parameter $t\in(0,1)$.
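For illustration,
the composition of $K_f(p)$ and of the probability function $p_f(L,K)$
from the bounding functions of the interface
can be written down directly;
in the following Python sketch all bounding functions
are assumed to be given as callables.
\begin{verbatim}
import math

# Sketch (assumption): composition of K_f(p) and p_f(L, K) from the
# bounding functions, which are passed in as callables.

def K_f(p, S_sup_inv, phi_sup, nu_inv, eps_nu, t):
    return math.ceil(S_sup_inv(phi_sup(t * nu_inv(eps_nu(p)))))

def p_f(L, K, p_inf, p_sup, p_grid):
    # the final probability function p_f(L, K)
    return min(p_inf(L), p_sup(K), p_grid(L))
\end{verbatim}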
\subsection*{The bottom-up approach}
Now we extend the calculation rules of the bottom-up approach
to also derive the bounding function $\varphi_{\sup f}(\gamma)$
from simpler sup-value-suitable functions.
At first we replace the lower-bounding rule in Theorem~\ref{theo-lower-bound}
by the following sandwich-rule.
\begin{theorem}[sandwich]\label{theo-sandwich}
Let $\text{\rm pr}EDL$ be a predicate description.
If there is a region-value-suitable function
$g:\bar{U}_\delta(A)\to\mathbb{R}$ and $c_1,c_2\in\mathbb{R}_{>0}$ where
\begin{eqnarray*}
c_1 \, |g(x)|
& \le &
|f(x)|
\;\; \le \;\;
c_2 \, |g(x)|,
\end{eqnarray*}
then $f$ is also region-value-suitable
with the following bounding functions:
\begin{eqnarray*}
\nu_f(\gamma) &:=& \nu_g(\gamma) \\
\varphi_{\inf f}(\gamma) &:=& c_1 \varphi_{\inf g}(\gamma) \\
\varphi_{\sup f}(\gamma) &:=& c_2 \varphi_{\sup g}(\gamma).
\end{eqnarray*}
If $f$ is in addition safety-suitable, $f$ is analyzable.
\end{theorem}
\begin{proof}
The region-suitability and inf-value-suitability
follows from the proof of Theorem~\ref{theo-lower-bound}.
The sup-value-suitability is proven similarly to Part~2
of the mentioned proof.
\qed
\end{proof}
Next we extend the product rule in Theorem~\ref{theo-product}.
We just add the assignment
\begin{eqnarray*}
\varphi_{\sup f}(\gamma) &:=&
\varphi_{\sup g}(\gamma_1,\ldots,\gamma_\ell) \cdot
\varphi_{\sup h}(\gamma_{j+1},\ldots,\gamma_k).
\end{eqnarray*}
after Formula~(\ref{for-phi-prod-rule}).
Its proof follows Part~1 of the proof of Theorem~\ref{theo-product}.
Finally we extend the min-rule and the max-rule
in Theorem~\ref{theo-ruleminmax}.
We add the two assignments
\begin{eqnarray*}
\varphi_{\sup f_\text{\rm min}}(\gamma) &:=&
\min \{ \varphi_{\sup g}(\gamma_1,\ldots,\gamma_\ell),
\varphi_{\sup h}(\gamma_{j+1},\ldots,\gamma_k)\} \\
\varphi_{\sup f_\text{\rm max}}(\gamma) &:=&
\max \{ \varphi_{\sup g}(\gamma_1,\ldots,\gamma_\ell),
\varphi_{\sup h}(\gamma_{j+1},\ldots,\gamma_k)\}.
\end{eqnarray*}
after Formula~(\ref{for-phi-max-rule}).
Again, its proof follows Part~1 of the proof of Theorem~\ref{theo-product}.
\subsection*{The top-down approach}
Similar to the functions $\varphi_{\inf g_i}$,
which are simply called $\varphi_{g_i}$
in the overview in Figure~\ref{fig-analysis-3},
we determine
the functions $\varphi_{\sup g_i}$ in the second phase
of the pseudo-top-down approach
in a bottom-up fashion.
This completes the integration of the range considerations
into the analysis tool box.
Be aware that all changes presented in this section
do not restrict the applicability of the analysis tool box
in any way.
On the contrary,
{they are necessary for the correctness and generality
of the tool box.}
\section{The Analysis of Rational Functions}
\label{sec-ana-rational-func}
We have just solved the arithmetical issues
that occur in the implementation and
analysis of rational functions.
Besides we must solve technical issues in the implementation of guards
and, moreover, provide a general technique to derive a quantitative
analysis for rational functions.
\emph{This is the first presentation that contains
the implementation and analysis of rational functions.}
Let $f:=\frac{g}{h}$
be a rational function,
that means, let $g$ and $h$ be multivariate polynomials.
Let $k$ be the number of arguments of $f$,
i.e., we consider $f(x)$ where $x=(x_1,\ldots,x_k)$.
The arguments of $g$ and $h$ may be any subsequence of $x$,
but each $x_i$ is at least an argument of $g$ or an argument of $h$.
We know that $g$ and $h$ are analyzable
(see Section~\ref{sec-multivariate-poly}).
At first we discuss the implementation of guards for rational functions.
We make the important observation that---independent
of the evaluation sequences of $g$ and $h$---the
\emph{division} of the value of $g$
by the value of $h$
is the \emph{very last operation} in the evaluation of $f$.
Because of the standardization of floating-point arithmetic
(e.g., see \cite{IEEE08}),
the sign of $f$ is computed correctly
if the signs of $g$ and $h$ are computed correctly.
Therefore it is sufficient for an implementation
of a predicate that branches on the sign of a rational function $f$
to use the guard
$\mathbb{G}G_f := \left(\mathbb{G}G_g \wedge \mathbb{G}G_h\right)$.
But how do we analyze this predicate,
that means, how can we relate the known quantities?
Let $x$ be given.
In the case that
the (dependent) arguments of $g$ and $h$
lie outside of their region of uncertainty,
we can deduce the relation
\begin{eqnarray}\label{for-rat-func-limits}
\frac{S_{\inf g}}{S_{\sup h}}
\;\; \le \;\;
f(x)
\;\; \le \;\;
\frac{S_{\sup g}}{S_{\inf h}}.
\end{eqnarray}
Unfortunately this is not what we need.
This way,
we can only deduce the value of $f$ from the values of $g$ and $h$,
but not vice versa:
If $f(x)$ fulfills Formula~(\ref{for-rat-func-limits}),
we cannot deduce that the guards $\mathbb{G}G_g$ and $\mathbb{G}G_h$ are true.
For example, assume that $f(x)=1$;
then we know that the values of $g$ and $h$ are equal,
but we do not know if their values are fp-safe or close to zero.
Therefore we choose a different way
to analyze the behavior of guard $\mathbb{G}G_f$.
Since $g$ and $h$ are multivariate polynomials,
we can analyze the behavior of $\mathbb{G}G_g$ and $\mathbb{G}G_h$
and derive the precision functions $L_g(p)$ and $L_h(p)$
as we have seen in earlier sections.
If we demand that $g$ and $h$ evaluate successfully with probability
$\frac{1+p}{2}$ each,
then $f$ evaluates successfully with probability at least $p$
since the sum of the failure probabilities of $g$ and $h$
is at most $(1-p)$.
This leads to the precision function
\begin{eqnarray*}
L_f(p)
&:=&
\max \left\{
L_g\left( \frac{1+p}{2}\right),
L_h\left( \frac{1+p}{2}\right)
\right\}
\end{eqnarray*}
which reflects the behavior of $\mathbb{G}G_f$
and therefore analyzes the behavior of an implementation of
the rational function evaluation of $f$.
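For illustration,
the precision function for a rational function is a simple composition
of the precision functions of numerator and denominator;
the following Python sketch assumes that $L_g$ and $L_h$
are given as callables.
\begin{verbatim}
# Sketch (assumption): precision function for f = g/h, given the
# precision functions L_g and L_h of numerator and denominator.

def L_rational(p, L_g, L_h):
    q = (1.0 + p) / 2.0   # demanded success probability for g and h each
    return max(L_g(q), L_h(q))
\end{verbatim}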
\section{General Analysis of Algorithms (Composition)}
\label{sec-ana-algo}
So far we have only presented components of the tool box
which are used
to analyze functions.
Now we introduce the components
which are used
to analyze controlled-perturbation algorithms ${\cal A}_\text{\rm CP}$.
Figure~\ref{fig-illu-ana-algo} illustrates the analysis of algorithms.
Similar to the analysis of functions,
the algorithm analysis has two stages.
The \emph{interface} between the stages is introduced
in Section~\ref{sec-nec-con-algo}.
It consists of necessary algorithm properties
(to the left of the dashed line)
and the analyzability of the used predicates
(to the right of the dashed line).
There we also show
how to determine the bounds associated with the algorithm properties.
In Section~\ref{sec-algo-prop}
we give an overview of algorithm properties.
The \emph{method of distributed probability}
represents the actual analysis of algorithms
and is presented in Section~\ref{sec-meth-distri-prob}.
\begin{figure}
\caption{Illustration of the analysis
of controlled-perturbation algorithms.}
\label{fig-illu-ana-algo}
\end{figure}
\subsection{Necessary Conditions for the Analysis of Algorithms}
\label{sec-nec-con-algo}
Next we introduce several properties of controlled-perturbation algorithms.
Sometimes we use the same names for algorithm and function properties
to emphasize the analogy.
We describe to which algorithms we can \emph{apply} controlled perturbation,
for which we can \emph{verify} that they terminate,
and which we can \emph{analyze} in a quantitative way
because they are \emph{suitable} for the analysis.
In particular,
three properties are necessary for the analyzability of algorithms:
evaluation-, predicate- and perturbation-suitability.
However,
the three conditions are not sufficient for the analysis of algorithms
since there are also prerequisites on the used predicates.
In this section
we define the various properties of controlled-perturbation algorithms,
explain how we obtain the bounding functions
that are associated with the necessary conditions,
and show how the algorithm properties are related with each other.
\begin{definition}
\label{def-algo-prop}
Let ${\cal A}_\text{\rm CP}$ be a controlled perturbation algorithm.
\begin{itemize}
\item\label{def-lacp}
(applicable).
We call ${\cal A}_\text{\rm CP}$ \emph{applicable}
if there is a precision function
$L_{{\cal A}_\text{\rm CP}}:(0,1)\times\mathbb{N}\to\mathbb{N}$
and $\eta\in\mathbb{N}$
with the property:
At least one out of $\eta$ runs
of the embedded guarded algorithm ${\cal A}_\text{\rm G}$
is expected to terminate successfully
for a randomly perturbed input of size $n\in \mathbb{N}$
with probability at least $p\in(0,1)$
for every precision $L\in\mathbb{N}$ with $L\ge L_{{\cal A}_\text{\rm CP}}(p,n)$.
\item
(verifiable).
We call ${\cal A}_\text{\rm CP}$ \emph{verifiable}
if the following conditions are fulfilled:\\
1. All used predicates are verifiable. \\
2. The perturbation area ${\cal U}_{{\cal A}_\text{\rm CP},\delta}(\bar{y})$
contains an open neighborhood of $\bar{y}$.\\
3. The total number of predicate evaluations is bounded. \\
4. The number of predicate types is bounded.
\item\label{def-eval-suit}
(evaluation-suitable).
We call ${\cal A}_\text{\rm CP}$ \emph{evaluation-suitable}
if the total number of predicate evaluations
is upper-bounded by a function $\mathbb{N}E:\mathbb{N}\to\mathbb{N}$
in dependence on the input size $n$.
\item\label{def-pred-suit}
(predicate-suitable).
We call ${\cal A}_\text{\rm CP}$ \emph{predicate-suitable}
if the number of different predicates
is upper-bounded by a function $\mathbb{N}P:\mathbb{N}\to\mathbb{N}$
in dependence on the input size $n$.
\item\label{def-algo-pert-suit}
(perturbation-suitable).
Let ${\cal U}_{{\cal A}_\text{\rm CP},\delta}(\bar{y})$
be the perturbation area of ${\cal A}_\text{\rm CP}$
around $\bar{y}$;
we assume that ${\cal U}_{{\cal A}_\text{\rm CP},\delta}(\bar{y})$
is scalable with parameter $\delta$
and that it has a fixed shape,
e.g., cube, box, sphere, ellipsoid, etc.
We call ${\cal A}_\text{\rm CP}$ \emph{perturbation-suitable}
if there is a bounding function $V\!:\mathbb{R}_{>0}^k\to\mathbb{R}_{>0}$
with the property
that there is an open axis-parallel box $U_{{\cal A}_\text{\rm CP},\delta}(\bar{y})$
with volume at least $V(\delta)$
and $U_{{\cal A}_\text{\rm CP},\delta}(\bar{y}) \subset {\cal U}_{{\cal A}_\text{\rm CP},\delta}(\bar{y})$.
\item
(analyzable).
We call ${\cal A}_\text{\rm CP}$ \emph{analyzable}
if the following conditions are fulfilled:\\
1. All used predicates are analyzable. \\
2. ${\cal A}_\text{\rm CP}$ is evaluation-suitable,
predicate-suitable and perturbation-suitable.
\end{itemize}
\end{definition}
\begin{remark}\label{rem-algo-prop}
We add some remarks on the definitions above.
1. The applicability of an algorithm has a strong meaning:
For every arbitrarily large success probability $p\in(0,1)$ and
for every arbitrarily large input size $n\in\mathbb{N}$
there is still a \emph{finite} precision
that fulfills the requirements.
As a matter of fact,
a controlled perturbation algorithm reaches this precision
after \emph{finitely} many steps.
Because in addition the success probability
is monotonically growing during the execution of ${\cal A}_\text{\rm CP}$,
we conclude:
If the algorithm ${\cal A}_\text{\rm CP}$ is applicable,
its execution is guaranteed to terminate.
2. In the definition of applicability,
we define the precision function $L_{{\cal A}_\text{\rm CP}}(p,n)$
as a function in the desired success probability $p$
and the input size $n$.
Naturally,
the bound also depends on other quantities like
the perturbation parameter $\delta$,
an upper bound on the absolute input values
or the maximum rounding-error.
However,
the latter quantities have some influence in the determination
of the bounding functions in the analysis of functions.
Here they occur as parameters in formula $L_{{\cal A}_\text{\rm CP}}$
and are not mentioned as arguments.
3. We remark on the perturbation-suitability that
we allow, in practice, any shape of the perturbation area ${\cal U}_{{\cal A}_\text{\rm CP},\delta}$
that fulfills the condition in the definition.
As opposed to that
we have assumed that the perturbation area $U_{f,\delta}$
in the analysis of functions is an axis-parallel box.
This looks contradictory and needs further explanation.
As a matter of fact,
there is just one perturbation $y\in {\cal U}_{{\cal A}_\text{\rm CP},\delta}(\bar{y})\mathbb{R}GLK$
of the input
before we try to evaluate the whole sequence of predicates.
We assume that the random perturbation
is chosen from a discrete uniform distribution
in a subset of the $n$-dimensional space.
In contrast to that,
the function $f_i$ has just $k_i \ll n$ arguments $x=(x_1,\ldots,x_{k_i})$.
Mathematically speaking,
we determine the input $x$ of $f_i$ by an orthogonal projection of
$y$ onto a $k_i$-dimensional plane.
Now we make the following important observation:
\emph{If we examine the orthogonal projection onto a
$k_i$-dimensional plane,
the projected points do not occur with the same probability in general.}
We refer to Figure~\ref{fig-distri-square}
and Figure~\ref{fig-distri-sphere}.
Despite this observation,
we prove in Section~\ref{sec-meth-distri-prob}
that there is an implementation of ${\cal A}_\text{\rm CP}$
that we can analyze---presumed that we know the bounding function $V$
that is mentioned in the definition above.
\begin{figure}
\caption{(a) The original perturbation area ${\cal U}$.}
\label{fig-distri-square}
\end{figure}
\begin{figure}
\caption{(a) The original perturbation area ${\cal U}$.}
\label{fig-distri-sphere}
\end{figure}
4. We remark on the predicate-suitability
that the number $\mathbb{N}P\in\mathbb{N}$ of different predicates is usually fixed
for a geometric algorithm.
Anyway,
since we will see that the analysis can also be performed
for a function $\mathbb{N}P(n)$,
we keep the presentation as general as possible.
$\bigcirc$
\end{remark}
Next we explain
how we determine the three bounding functions
which are associated with the three necessary algorithm properties.
We refer to Figure~\ref{fig-illu-ana-algo}.
If the number $\mathbb{N}P$ of used predicates is fixed, we just count them;
otherwise we perform a complexity analysis to determine the bounding function
$\mathbb{N}P(n)$.
We usually determine the bounding function $\mathbb{N}E(n)$
on the number of predicate evaluations
with a complexity analysis, too.
The bound $\eta$ results from a geometric consideration:
We only need to determine the real volume of the solid perturbation area.
If the perturbation area has an ordinary shape,
its computation is straightforward.
We consider an example of this.
\begin{example}
Let the input of ${\cal A}_\text{\rm CP}$ be $m$ points in the plane,
that means, $n=2m$.
In addition
let the perturbation area for each point be a disc of radius $\delta$.
Then the axis-parallel square of maximum volume inside of such a disc
has edge length $\delta\sqrt{2}$.
We obtain:
\begin{eqnarray*}
\eta &:=&
\left\lceil
\frac{\mu({\cal U}_{\delta})}{\mu(U_\delta)}
\right\rceil \\
&=&
\left\lceil
\frac{\mu(\text{$m$ discs of radius $\delta$})}
{\mu(\text{$m$ squares of edge length $\delta\sqrt{2}$})}
\right\rceil \\
&=&
\left\lceil
\frac{m\cdot \pi \delta^2}{m\cdot2\delta^2}
\right\rceil \\
&=&
2.
\end{eqnarray*}
We observe that the bound $\eta$ does not depend on $m$ (or $n$).
$\bigcirc$
\end{example}
\noindent
Now we state and prove the implications of algorithm properties.
\begin{lemma}\label{lem-algo-ana-is-veri}
Let algorithm ${\cal A}_\text{\rm CP}$ be analyzable. Then ${\cal A}_\text{\rm CP}$ is verifiable.
\end{lemma}
\begin{proof}
This is trivially true.
\qed
\end{proof}
\begin{lemma}\label{lem-algo-veri-is-app}
Let algorithm ${\cal A}_\text{\rm CP}$ be verifiable. Then ${\cal A}_\text{\rm CP}$ is applicable.
\end{lemma}
\begin{proof}
\newcommand{{{\cal L}_{p,n}}}{{{\cal L}_{p,n}}}
To show that ${\cal A}_\text{\rm CP}$ is applicable,
we prove the following existence.
\emph{There is $\eta\in\mathbb{N}$
such that
for every $p\in(0,1)$ and every $n\in\mathbb{N}$
there is a precision ${{\cal L}_{p,n}}$
with the property:
For a randomly perturbed input of size $n$,
at least one out of $\eta$ runs of ${\cal A}_\text{\rm G}$
is expected to terminate successfully
with probability at least $p$
for every precision $L\in\mathbb{N}$ with $L\ge {{\cal L}_{p,n}}$.}
Then the function $L_{{\cal A}_\text{\rm CP}}(p,n) := {{\cal L}_{p,n}}$
has the desired property
which proves the claim.
At first we show that there is an appropriate $\eta\in\mathbb{N}$.
Because ${\cal A}_\text{\rm CP}$ is verifiable,
the perturbation area
${\cal U}_{{\cal A}_\text{\rm CP},\delta}(\bar{y})$
contains an \emph{open} set around $\bar{y}$.
Therefore there is an open axis-parallel box around $\bar{y}$ with
$U_\delta(\bar{y})\subset{\cal U}_{{\cal A}_\text{\rm CP},\delta}(\bar{y})$.
Then there is also a natural number
\begin{eqnarray*}
\eta &:=&
\left\lceil
\frac{\mu\left({\cal U}_{{\cal A}_\text{\rm CP},\delta}(\bar{y})\right)}
{\mu\left(U_\delta(\bar{y})\right)}
\right\rceil.
\end{eqnarray*}
That means,
if we randomly choose $\eta$
points from a uniformly distributed grid in
${\cal U}_{{\cal A}_\text{\rm CP},\delta}(\bar{y})\mathbb{R}GLK$,
we may expect that at least one point also lies inside $U_\delta(\bar{y})$.
Let $p\in(0,1)$ and let $n\in\mathbb{N}$.
In addition
let $y\in U_\delta(\bar{y})\mathbb{R}GLK$
be randomly chosen.
Since ${\cal A}_\text{\rm CP}$ is verifiable,
there is an upper bound $\mathbb{N}E\in\mathbb{N}$
on the total number of predicate evaluations.
Therefore we can distribute the total failure probability $(1-p)$
among the $\mathbb{N}E$ predicate evaluations.
Hence there is a probability
\begin{eqnarray*}
\varrho &:=&
\frac{1-p}{\mathbb{N}E}.
\end{eqnarray*}
Obviously
${\cal A}_\text{\rm G}(y)$ is successful with probability $p$
if every predicate evaluation fails with probability at most $\varrho$.
Let $\mathbb{N}P\in\mathbb{N}$ be the number of different predicates in ${\cal A}_\text{\rm G}$
which are decided by the functions $f_1,\ldots,f_\mathbb{N}P$.
Because ${\cal A}_\text{\rm CP}$ is verifiable,
all used predicates are verifiable and thus applicable.
Then Definition~\ref{def-func-app}
implies the existence of
precision functions $L_{f_1},\ldots,L_{f_\mathbb{N}P}$.
Therefore there is a precision
\begin{eqnarray*}
{\cal L}_{p,n}
&:=&
\max_{1\le i \le \mathbb{N}P} \;
L_{f_i} \left(1 - \varrho\right)
\end{eqnarray*}
which has the desired property
because of Definition~\ref{def-func-app}.
This finishes the proof.
\qed
\end{proof}
As a consequence of Lemma~\ref{lem-algo-ana-is-veri}
and Lemma~\ref{lem-algo-veri-is-app}
the controlled perturbation implementation ${\cal A}_\text{\rm CP}$
terminates with certainty and
yields the correct result for the perturbed input
if ${\cal A}_\text{\rm CP}$ is analyzable.
\subsection{Overview: Algorithm Properties}
\label{sec-algo-prop}
An overview of the defined algorithm properties
is shown in Figure~\ref{fig-algo-prop}.
The meanings are:
A controlled perturbation algorithm ${\cal A}_\text{\rm CP}$ is guaranteed to terminate
if ${\cal A}_\text{\rm CP}$ is \emph{applicable}
(see Remark~\ref{rem-algo-prop}.1).
If ${\cal A}_\text{\rm CP}$ is \emph{verifiable},
we can prove that ${\cal A}_\text{\rm CP}$ terminates---even
if we are not able to analyze its performance.
And finally,
we can give a quantitative analysis of the performance of ${\cal A}_\text{\rm CP}$
if ${\cal A}_\text{\rm CP}$ is analyzable.
The implications are:
An evaluation-, perturbation- and predicate-suitable algorithm
that uses solely analyzable predicates is analyzable
(see Definition~\ref{def-algo-prop}).
An analyzable algorithm is also verifiable
(see Lemma~\ref{lem-algo-ana-is-veri}).
And a verifiable algorithm is also applicable
(see Lemma~\ref{lem-algo-veri-is-app}).
\begin{figure}
\caption{The illustration summarizes the implications of
the various algorithm properties that we have defined in this section.}
\label{fig-algo-prop}
\end{figure}
\subsection{The Method of Distributed Probability}
\label{sec-meth-distri-prob}
Here we state the main theorem of this section.
The proof contains the method of distributed probability
which is used to analyze complete algorithms.
Figure~\ref{fig-illu-ana-algo}
shows the component and its interface.
\begin{theorem}[distributed probability]
Let ${\cal A}_\text{\rm CP}$ be analyzable.
Then there is a general method
to determine a precision function
$L_{{\cal A}_\text{\rm CP}}:(0,1)\times\mathbb{N}\to\mathbb{N}$ and
$K_{{\cal A}_\text{\rm CP}}:(0,1)\times\mathbb{N}\to\mathbb{N}$
and $\eta\in\mathbb{N}$ with the property:
At least one out of $\eta$ runs
of the embedded guarded algorithm ${\cal A}_\text{\rm G}$
is expected to terminate successfully for a randomly perturbed input of size $n$
with probability at least $p\in(0,1)$
for every arithmetic $\mathbb{F}LK$
where $L\ge L_{{\cal A}_\text{\rm CP}}(p,n)$ and $K\ge K_{{\cal A}_\text{\rm CP}}(p,n)$.
\end{theorem}
\begin{proof}
We prove the claim in three steps:
At first we derive $\eta\in\mathbb{N}$
from the shape of the region of uncertainty.
Then we determine a bound on the
failure probability of each predicate evaluation.
And finally we analyze each predicate type
to determine the worst-case precision.
An overview of the steps is given in Table~\ref{tab-steps-dp}.
\begin{table}[b]
\centerline{\fbox{
\begin{minipage}{0.9\columnwidth}
Step 1: determine ``in axis-parallel box'' probability (define $\eta$)\\
Step 2: determine ``per evaluation'' probability (define $\rho$)\\
Step 3: compose precision function
(define $L_{{\cal A}_\text{\rm CP}}$ and $K_{{\cal A}_\text{\rm CP}}$)
\end{minipage}}}
\caption{Instructions for performing the method of distributed probability.}
\label{tab-steps-dp}
\end{table}
Step~1 (define $\eta$).
We define $\eta$ as the ratio
\begin{eqnarray*}
\eta
&=&
\left\lceil
\frac{\mu\left({\cal U}_{{\cal A}_\text{\rm CP},\delta}(\bar{y})\right)}{V(\delta)}
\right\rceil.
\end{eqnarray*}
That means,
if we randomly choose $\eta$
points from a uniformly distributed grid in
${\cal U}_{{\cal A}_\text{\rm CP},\delta}(\bar{y})\mathbb{R}GLK$,
we may expect that at least one point also lies inside
$U_{{\cal A}_\text{\rm CP},\delta}(\bar{y})$.
Step~2 (define $\rho$).
Let $p\in(0,1)$
be the desired success probability of the guarded algorithm ${\cal A}_\text{\rm G}$.
Then $(1-p)$ is the failure probability of ${\cal A}_\text{\rm G}$.
There are at most $\mathbb{N}E(n)$
predicate evaluations for an input of size $n$.
That means, the guarded algorithm succeeds
if and only if we evaluate all predicates successfully in a row
for \emph{the same} perturbed input.
We observe that the evaluations do not have to be independent.
Therefore we define the failure probability of each predicate evaluation
as the function
\begin{eqnarray*}
\varrho(p,n) &:=& \frac{1-p}{\mathbb{N}E(n)}
\end{eqnarray*}
in dependence on $p$ and $n$.
Step~3 (define $L_{{\cal A}_\text{\rm CP}}$ and $K_{{\cal A}_\text{\rm CP}}$).
There are at most $\mathbb{N}P(n)$ different predicates.
Let $f_1,\ldots,f_{\mathbb{N}P(n)}$
be the functions that realize these predicates.
Since all functions are analyzable,
we determine their precision function $L_{f_i}$
with the presented methods of our analysis tool box.
Then we define the precision function for the algorithm as
\begin{eqnarray*}
L_{{\cal A}_\text{\rm CP}} (p,n)
&:=& \max_{1\le i\le \mathbb{N}P(n)}
\; L_{f_i} (1-\varrho(p,n)) \\
&=& \max_{1\le i\le \mathbb{N}P(n)}
\; L_{f_i}
\left(1-\frac{1-p}{\mathbb{N}E(n)}\right).
\end{eqnarray*}
Analogously we define
\begin{eqnarray*}
K_{{\cal A}_\text{\rm CP}} (p,n)
&=& \max_{1\le i\le \mathbb{N}P(n)}
\; K_{f_i}
\left(1-\frac{1-p}{\mathbb{N}E(n)}\right).
\end{eqnarray*}
Then every arithmetic $\mathbb{F}LK$
with $L\ge L_{{\cal A}_\text{\rm CP}}(p,n)$ and $K\ge K_{{\cal A}_\text{\rm CP}}(p,n)$
has the desired property by construction.
\qed
\end{proof}
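For illustration,
the three steps of the method of distributed probability
can be assembled as in the following Python sketch;
the per-predicate precision functions and the bound $\mathbb{N}E(n)$
are assumed to be given.
\begin{verbatim}
import math

# Sketch (assumption): the method of distributed probability.
# 'L_fs' and 'K_fs' are lists of per-predicate precision functions,
# 'N_E' maps the input size n to a bound on the number of evaluations.

def eta(volume_area, volume_box):
    # Step 1: draws needed until one point falls into the box
    return math.ceil(volume_area / volume_box)

def L_acp(p, n, L_fs, N_E):
    # Steps 2 and 3: distribute the failure probability (1-p) over the
    # N_E(n) evaluations and take the worst case over all predicates
    rho = (1.0 - p) / N_E(n)
    return max(L(1.0 - rho) for L in L_fs)

def K_acp(p, n, K_fs, N_E):
    rho = (1.0 - p) / N_E(n)
    return max(K(1.0 - rho) for K in K_fs)
\end{verbatim}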
\section{General Controlled Perturbation Implementations}
\label{sec-gen-cp-imple}
We present a general way
to implement controlled perturbation algorithms ${\cal A}_\text{\rm CP}$
to which we can apply our analysis tool box.
The algorithm template is illustrated as Algorithm~\ref{algo-cp}.
It is important to see that
all statements which are necessary for the controlled perturbation management
are simply wrapped around the function call of ${\cal A}_\text{\rm G}$.
\begin{algorithm}
\caption{: ${\cal A}_\text{\rm CP}({\cal A}_\text{\rm G}, \bar{y}, {\cal U}_\delta, \psi, \eta)$}
\label{algo-cp}
\begin{algorithmic}
\STATE \emph{/* initialization */}
\STATE $L \leftarrow$ precision of built-in floating-point arithmetic
\STATE $K \leftarrow$ exponent bit length of built-in floating-point arithmetic
\STATE ${e_\text{\rm max}} \leftarrow$ determine upper bound $2^{e_\text{\rm max}}$ on $|\bar{y}_i|+\delta$
\REPEAT
\STATE \emph{/* run guarded algorithm */}
\FOR{$i=1$ \TO $\eta$}
\STATE $y \leftarrow$ random point in ${\cal U}_\delta(\bar{y})\mathbb{R}GLKE$
\STATE $\omega \leftarrow {\cal A}_\text{\rm G}(y,\mathbb{F}LK)$
\IF{${\cal A}_\text{\rm G}$ succeeded}
\STATE leave the for-loop
\ENDIF
\ENDFOR
\STATE \emph{/* adjust parameters */}
\IF{${\cal A}_\text{\rm G}$ failed}
\IF{floating point overflow error occurred}
\STATE \emph{/* guard failed because of range error */}
\STATE $K \leftarrow K + \psi_K$
\ELSE
\STATE \emph{/* guard failed because of insufficient precision */}
\STATE $L \leftarrow \lceil\psi_L \cdot L\rceil$
\ENDIF
\ENDIF
\UNTIL{${\cal A}_\text{\rm G}$ succeeded}
\STATE \emph{/* return perturbed input $y$ and result $\omega$ */}
\RETURN $(y,\omega)$
\end{algorithmic}
\end{algorithm}
Remember that the original perturbation area is
${\cal U}_\delta(\bar{y})\mathbb{R}G$.
The implementation of a uniform perturbation
seems to be a non-obvious task for most shapes.
Therefore we propose axis-parallel perturbation areas
in applications.
(For example, we can replace spherical perturbation areas with cubes
that are contained in them.)
For axis-parallel areas there is the special bonus
that the perturbation is composed of random integral numbers
as we have explained in Remark~\ref{perturb-implem-inline}.
\label{intext-def-psi}
An argument of the controlled perturbation implementation
is the tuple $\psi=(\psi_L, \psi_K)\in\mathbb{R}\times\mathbb{N}$ of constants
which are used for the augmentation of $L$ and $K$.
The real constant $\psi_L>1$
is used for a multiplicative augmentation of $L$,
and the natural number $\psi_K$
is used for an additive augmentation of $K$.
\label{intext-algo-ana-vari-delta}
We remark that there is a variant of Algorithm~\ref{algo-cp}
that also allows the increase of perturbation parameter $\delta$.
Beginning with $\delta={\delta_\text{\rm min}}\in\mathbb{R}_{>0}^k$,
we augment the perturbation parameter $\delta$
by a real factor $\psi_\delta>1$
each time we repeat the for-loop.
When we leave the for-loop, we reset $\delta$ to ${\delta_\text{\rm min}}$.
We observe that this strategy implies an upper-bound on
the perturbation parameter by ${\delta_\text{\rm max}}:={\delta_\text{\rm min}} \cdot \psi_{\delta}^{\eta-1}$.
This is the bound that we use in the analysis.
To keep the presentation clear,
we do not express variable perturbation parameters explicitly in the code.
A variable precision floating-point arithmetic
is necessary for an implementation of ${\cal A}_\text{\rm CP}$.
When we increase the precision
in order to evaluate complex expressions successfully,
the evaluation of simple expressions
starts suffering from the wasted bits.
Therefore we suggest floating-point filters
as they are used in interval arithmetic.
That means,
we use a multi-precision arithmetic that refines the precision
on demand up to the given $L$.
If it is necessary to exceed $L$, ${\cal A}_\text{\rm G}$ fails.
In the analysis we use
this threshold on the precision.
\section{Perturbation Policy}
\label{sec-pertub-policy}
The meaning of perturbation is introduced
in Section~\ref{sec-basic-quantities} and
its implementation is explained in Remark~\ref{perturb-implem-inline}
on Page~\pageref{perturb-implem-inline}.
So far we have considered the original input
to be the point $\bar{y}\in\mathbb{R}^n$
which is the concatenation of \emph{all} coordinates of
\emph{all} input points
for the geometric algorithm ${\cal A}_\text{\rm CP}$.
In contrast to that,
we now care for the geometric interpretation of the input
and consider it as a sequence of geometric objects
$\mathbb{G}O_1, \ldots, \mathbb{G}O_m$.
Then a perturbation of the input is the sequence of perturbed objects.
In this section we define two different perturbation policies:
The \emph{pointwise} perturbation
in Section~\ref{sec-pert-indi}
and the \emph{object-preserving} perturbation
in Section~\ref{sec-pert-robust}.
The latter has the property
that the topology of the input object is preserved.
\emph{This is the first presentation that
integrates object-preserving perturbations
in the controlled-perturbation theory.}
\subsection{Pointwise Perturbation}
\label{sec-pert-indi}
For pointwise perturbations
we assume that the geometric object is
given by a sequence of points.
A circle in the plane, for example, is given by three points.
Another example is the polygon
in Figure~\ref{fig-pert-point}(a)
which is represented by the sequence of four vertices
$abcd$.
\begin{figure}
\caption{Example of a \emph{pointwise perturbation}.}
\label{fig-pert-point}
\end{figure}
The \emph{pointwise perturbation} of a geometric object
is the sequence of individually perturbed points
of its description,
i.e., randomly chosen points of their neighborhoods.
Figure~\ref{fig-pert-point}(b)
shows a pointwise perturbed polygon $a'b'c'd'$ for our example.
Because the perturbations are independent of each other,
this policy is quite easy to implement.
But we observe that pointwise perturbations do not preserve
the structure of the input object in general:
The original polygon $abcd$
is simple whereas the perturbed polygon $a'b'c'd'$ in our example is not.
And the orientation of a circle
that is defined by three perturbed points
may differ from the orientation of the circle
that is defined by the original points.
Be aware
that our analysis is particularly designed for pointwise perturbations.
We suggest to apply this perturbation policy to inputs
that are disturbed by nature, e.g., scanned data.
\subsection{Object-preserving Perturbation}
\label{sec-pert-robust}
For object-preserving perturbations
we assume that the geometric object is
given by an anchor point and a sequence of fixed measurements.\footnote{
The measurements may be given explicitly or implicitly.
Both are fine.}
A circle in the plane, for example,
is given by a center (anchor point)
and a radius (fixed measurement).
Another example is the polygon $abcd$
in Figure~\ref{fig-pert-object}(a)
which is given by an anchor point, say $a$,
and implicitly by the sequence of vectors (the measurements)
pointing from $a$ to $b$, from $a$ to $c$, and from $a$ to $d$.
\begin{figure}
\caption{Example of an \emph{object-preserving perturbation}}
\label{fig-pert-object}
\end{figure}
The \emph{object-preserving perturbation} of a geometric object
is a pointwise perturbation of its anchor point
while maintaining all given measurements.
Figure~\ref{fig-pert-object}(b)
shows polygon $a'b'c'd'$
that results from an object-preserving perturbation.
There we have $b':=a'+{b}-{a}$, etc.
We observe that this perturbation is actually a translation
of the object and hence preserves
the structure of the input object in any respect:
its orientation, measurements and angles.
The object-preserving perturbation of a circle,
for example,
changes its location but not its radius.
The input must provide further information
to support object-preserving perturbations.
For the explicit representation,
this policy requires a \emph{labeling} of input values
as \emph{anchor points} (perturbable)
or \emph{measurements} (constant).
For the implicit representation,
the policy requires the subdivision of the input
into single objects;
then, for each object, we make one of its points the anchor point
and derive the measurements for the remaining points.
To allow the object-preserving perturbation,
the implementation must offer the labeling of values or
the distinction of input objects.
In this context
it is pleasant to observe that our perturbation area $\bar{U}_\delta(A)\mathbb{R}G$
supports object-preservation because it is composed of a regular grid.
\emph{If the original object is represented without rounding error,
the perturbed object is represented exactly as well.}
Of course,
{we can always apply object-preserving perturbations
to finite-precision input objects.}\footnote{
This is true
because we can derive a sufficient grid unit
from the given fixed-precision input.}
We suggest applying this policy to inputs
that result from computer-aided design (CAD):
By design, the measurements are often multiples of a certain unit
which can be used as an upper bound on the grid unit.
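A corresponding sketch for this policy (again illustrative only, with hypothetical names, reusing the grid snapping of the pointwise sketch) perturbs the anchor point and translates the remaining points by the same offset, so that all measurements are maintained exactly:
\begin{verbatim}
# Illustrative sketch only: perturb the anchor point on the grid and keep
# all measurements (vectors from the anchor to the other points) fixed,
# i.e. b' := a' + (b - a), etc.; the whole object is merely translated.
import random

def perturb_object(anchor, measurements, delta, tau):
    steps = int(delta / tau)
    new_anchor = tuple(c + tau * random.randint(-steps, steps) for c in anchor)
    vertices = [new_anchor]
    for vec in measurements:
        vertices.append(tuple(a + m for a, m in zip(new_anchor, vec)))
    return vertices

# Polygon a b c d given by anchor a and the vectors a->b, a->c, a->d.
a = (0.0, 0.0)
vectors = [(4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
print(perturb_object(a, vectors, delta=0.25, tau=2.0 ** -6))
\end{verbatim}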
How can we analyze object-preserving perturbations?
We consider the analysis of function $f$ that realizes a predicate.
For pointwise perturbations
we demand in Section~\ref{sec-basic-quantities}
that $f$ only depends on input values.
For object-preserving perturbations
we only allow dependencies on anchor points:
Every other point in the description of the object
must be replaced in the formula by an expression that
depends on the anchor point of the affected object.
Be aware that these expressions can be resolved error-free
due to the fixed-point grid $\mathbb{G}$.
Then the new formula
depends only on anchor points (variables) and measurements (constants).
The dependency of the function on the variables is analyzed as before.
Finally we remark that we do not recommend perturbation policies
that are based on scaling, stretching, shearing or rotation
since the perturbed input cannot be represented error-free in general.
\section{Appendix: List of Identifiers}
\label{sec-append-identifiers}
\newcommand{{\cal I}ex}[3]{
\noindent
\begin{minipage}[t]{.15\columnwidth}\begin{flushleft}{#1}\end{flushleft}\end{minipage}\hspace*{\fill}
\begin{minipage}[t]{.71\columnwidth}{#2}\end{minipage}\hspace*{\fill}
\begin{minipage}[t]{.10\columnwidth}\begin{flushright}{#3}\end{flushright}\end{minipage}
}
\newcommand{{\cal I}exsec}[1]
{\subsection*{{#1}\hspace*{\fill}Page}}
Page numbers refer to definitions of the identifiers.
References to preliminary definitions are parenthesized.\\
{\cal I}exsec{Algorithms}
{\cal I}ex{${\cal A}$}{
the given geometric algorithm ${\cal A}(\bar{y})$.}{-}
{\cal I}ex{${\cal A}_\text{\rm G}$}{
the guarded version ${\cal A}_\text{\rm G}(y,\mathbb{F}LK)$ of algorithm ${\cal A}$,
i.e., all predicate evaluations are guarded.}{\pageref{def-guarded-algo}}
{\cal I}ex{${\cal A}_\text{\rm CP}$}{
the controlled perturbation version ${\cal A}_\text{\rm CP}({\cal A}_\text{\rm G},\bar{y},\delta,\psi)$
of algorithm ${\cal A}$.
The implementation of ${\cal A}_\text{\rm CP}$ makes usage of ${\cal A}_\text{\rm G}$.}{\pageref{algo-cp}}
{\cal I}exsec{Sets and Number Systems}
{\cal I}ex{$\mathbb{C}$}{the set of complex numbers.}{-}
{\cal I}ex{$\mathbb{F}LK$}{1. the set of floating point numbers
with radix 2
whose precision has up to $L$ digits and
whose exponent has up to $K$ digits.\\
2. the floating point arithmetic that is induced this way.}{\pageref{def-fp-numbers}}
{\cal I}ex{$\mathbb{G}LKE$}{the set of grid points.
They are a certain subset of the floating point numbers $\mathbb{F}LK$
within the interval $[-2^{e_\text{\rm max}},2^{e_\text{\rm max}}]$.}{\pageref{def-grid-points}}
{\cal I}ex{$\mathbb{N}$; $\mathbb{N}_0$}{the set of natural numbers;
set of natural numbers including zero.}{-}
{\cal I}ex{$\mathbb{Q}$}{the set of rational numbers.}{-}
{\cal I}ex{$\mathbb{R}$; $\mathbb{R}_{>0}$; $\mathbb{R}_{\neq 0}$}{the set of real numbers;
set of positive real numbers;
set of real numbers excluding zero.}{-}
{\cal I}ex{$\mathbb{Z}$}{the set of integer numbers.}{-}
{\cal I}ex{$X\mathbb{R}FLK$}{the restriction of a set $X$ to points in $\mathbb{F}LK$.}{\pageref{def-fp-numbers}}
{\cal I}ex{$X\mathbb{R}GLKE$}{the restriction of a set $X$ to points in $\mathbb{G}LKE$.}{\pageref{def-grid-points}}
{\cal I}exsec{Identifiers of the Analysis}
{\cal I}ex{$A$}{the set of valid projected arguments $\bar{x}$ for $f$.}{\pageref{def-A-inline}}
{\cal I}ex{$B_E(L)$}
{a floating point error bound on the arithmetic expression $E$.}
{\pageref{def-fperrorbound-inline}}
{\cal I}ex{$C_f(\cdot)$}{the critical set of $f$.}
{(\pageref{def-critical-set}), \pageref{def-crit-set-final-inline}}
{\cal I}ex{$\mathbb{G}G_f$}{a guard for $f$ on the domain $X$.}{\pageref{def-guard}}
{\cal I}ex{$K$}{the bit length of the exponent (see $\mathbb{F}LK$).}{\pageref{def-K-inline}}
{\cal I}ex{$K_f(p)$}{a lower bound on the bit length of the exponent.}
{\pageref{for-Kf-inline}}
{\cal I}ex{$L$}{the bit length of the precision (see $\mathbb{F}LK$).}{\pageref{def-L-inline}}
{\cal I}ex{$L_{{\cal A}_\text{\rm CP}}(p,n)$}{the precision function of ${\cal A}_\text{\rm CP}$.}{\pageref{def-lacp}}
{\cal I}ex{$L_f(p)$}{the precision function of $f$.}{\pageref{def-lf-final}}
{\cal I}ex{${L_\text{\rm grid}}$}{a bound on the precision; caused by the grid unit condition.}{(\pageref{for-def-lgrid}), \pageref{for-lgrid-in-proof}}
{\cal I}ex{${L_\text{\rm safe}}$}{a bound on the precision; caused by the region- and safety-condition.}{\pageref{for-def-lsafe}}
{\cal I}ex{$\mathbb{N}E(n)$}{an upper-bound on the number of predicate evaluations.}{\pageref{def-eval-suit}}
{\cal I}ex{$\mathbb{N}P(n)$}{an upper-bound on the number of different predicates.}{\pageref{def-pred-suit}}
{\cal I}ex{$R_{f,\gamma}(\cdot)$}{the region of uncertainty of $f$.}
{\pageref{def-region-uncertainty}}
{\cal I}ex{$R_{f,{\text{\rm aug}}(\gamma)}(\cdot)$}{the augmented region of uncertainty of $f$.}
{\pageref{inline-def-aug-rou}}
{\cal I}ex{$S_{\inf f}(L)$}{the lower fp-safety bound.}
{\pageref{def-fpsafetybound}, (\pageref{for-Sinf-final-inline})}
{\cal I}ex{$S_{\sup f}(K)$}{the upper fp-safety bound.}
{\pageref{for-Ssup-final-inline}}
{\cal I}ex{$U_{f,\delta}(\cdot)$}{the perturbation area of $f$;
its shape is an axis-parallel box.}{\pageref{def-U-inline}}
{\cal I}ex{$U_{{\cal A}_\text{\rm CP},\delta}(\cdot)$}{the perturbation area of ${\cal A}_\text{\rm CP}$;
its shape is an axis-parallel box.}{\pageref{def-algo-pert-suit}}
{\cal I}ex{${\cal U}_{{\cal A}_\text{\rm CP},\delta}(\cdot)$}{the perturbation area of ${\cal A}_\text{\rm CP}$;
it may have any shape.}{\pageref{def-algo-pert-suit}}
{\cal I}ex{${e_\text{\rm max}}$}{the input value parameter
(see Formula~(\ref{for-e-min})).}{\pageref{for-e-min}}
{\cal I}ex{$f$}{the real-valued function $f:\bar{U}_\delta(A)\to\mathbb{R}$
under consideration.
We assume that the sign of $f$ decides a geometric predicate.}
{\pageref{def-f-inline}}
{\cal I}ex{$k$}{the arity of $f$.}
{\pageref{def-f-inline}}
{\cal I}ex{$n$}{the size of input $\bar{y}$.}
{\pageref{def-n-inline}}
{\cal I}ex{$p_f(L,K)$}{the probability function of $f$.}
{(\pageref{def-prob-func-inline}), \pageref{for-pfLK-inline}}
{\cal I}ex{${p_\text{\rm grid}}(L)$}
{a bound on the probability; caused by the grid unit condition.}
{\pageref{for-pgrid-inline}}
{\cal I}ex{${p_\text{\rm inf}}(L)$}
{a bound on the probability; caused by the region- and inf-safety-condition.}
{\pageref{for-pinf-inline}}
{\cal I}ex{${p_\text{\rm sup}}(K)$}
{a bound on the probability; caused by the sup-safety-condition.}
{\pageref{for-supsafe-inline}}
{\cal I}ex{$\text{\rm pr}(f\mathbb{R}G)$}{the least probability that
a guarded evaluation of $f$ is successful for inputs in $\mathbb{G}$
under the arithmetic $\mathbb{F}$.}{\pageref{for-prob-rest-G}}
{\cal I}ex{$\frac{1}{t}$}{the augmentation factor for the region of uncertainty.}
{\pageref{def-t-inline}}
{\cal I}ex{$\bar{x}$}{the arguments of $f$; projection of $\bar{y}$.}
{\pageref{def-xbar-inline}}
{\cal I}ex{$x$}{the perturbed arguments of $f$; projection of $y$.}
{\pageref{def-x-inline}}
{\cal I}ex{$\bar{y}$}{the original input to the algorithm.}
{\pageref{def-ybar-inline}}
{\cal I}ex{$y$}{the perturbed input $y\in U_\delta(\bar{y})$.}
{\pageref{def-y-inline}}
{\cal I}ex{$\delta$}{the perturbation parameter
which bounds the maximum amount of perturbation componentwise.}{\pageref{def-pert-para-inline}}
{\cal I}ex{$\gamma$}{the tuple of
componentwise distances to the critical set.}{\pageref{def-gamma-inline}}
{\cal I}ex{$\mathbb{G}amma$}{the set of valid augmented $\gamma$.}{\pageref{def-Gamma-inline}}
{\cal I}ex{$\mathbb{G}AB$}{like $\mathbb{G}amma$; the set is an axis parallel box.}{\pageref{def-GAB-inline}}
{\cal I}ex{$\mathbb{G}AL$}{like $\mathbb{G}amma$; the set is a line.}{\pageref{def-GAB-inline}}
{\cal I}ex{$\nu_f(\gamma)$}{an upper-bound
on the volume of $R_{f,\gamma}$.}
{\pageref{def-nu-inline}}
{\cal I}ex{$\tau$}{the grid unit.}{\pageref{def-tau-inline}}
{\cal I}ex{$\varphi_{\inf f}(\gamma)$}
{a lower-bound
on the absolute value of $f$ outside of $R_{f,\gamma}$.}
{\pageref{def-varphi-inline}, (\pageref{def-phi-inf-final-inline})}
{\cal I}ex{$\varphi_{\sup f}(\gamma)$}
{an upper-bound
on the absolute value of $f$ outside of $R_{f,\gamma}$.}
{\pageref{def-phi-sup-final-inline}}
{\cal I}ex{$\chi_f(\gamma)$}{a lower-bound on the complement
of $\nu_f$ within the perturbation area.}{\pageref{def-chi-region-suit}}
{\cal I}ex{$\psi$}{the tuple $\psi=(\psi_L, \psi_K)\in\mathbb{R}\times\mathbb{N}$ is used
for the augmentation of $L$ and $K$.}{\pageref{intext-def-psi}}
{\cal I}exsec{Miscellaneous}
{\cal I}ex{$\mu(\cdot)$}{the Lebesgue measure.}{\pageref{def-mu-inline}}
{\cal I}ex{$\pi(\cdot)$}
{the projection of points and sets, e.g., ${\pi_{i}}$, ${\pi_{<i}}$, ${\pi_{>i}}$, ${\pi_{\neq i}}$.}
{\pageref{def-projection-pi}}
{\cal I}ex{${\prec}$}{the reverse lexicographic order.}{\pageref{def-lex-inline}}
{\cal I}ex{${\prec}_\sigma$}{the reverse lexicographic order
after the permutation of the operands.}{\pageref{def-lexafter-inline}}
\addcontentsline{toc}{chapter}{Bibliography}
\end{document}
|
\begin{document}
\title[Infinitesimal and local rigidity of mappings]{Infinitesimal and local rigidity \\
of mappings of CR manifolds}
\author{Giuseppe della Sala}
\address{Department of Mathematics, American University of Beirut (AUB)}
\email{[email protected]}
\author{Bernhard Lamel}
\address{Fakult\"at f\"ur Mathematik, Universit\"at Wien}
\email{[email protected]}
\author{Michael Reiter}
\address{Fakult\"at f\"ur Mathematik, Universit\"at Wien}
\email{[email protected]}
\subjclass[2010]{Primary 32H02; Secondary 32V40, 58E40}
\thanks{The first author was supported by the FWF project P24878-N25, and would also like to
thank the Center for Advanced Mathematical Sciences (CAMS) at AUB. The second author was supported by the FWF-Project I382 and QNRF-Project NPRP 7-
511-1-098. The third author was supported by the FWF-Project P28873}
\begin{abstract}
A holomorphic mapping $H$ between two real-analytic CR manifolds $M$ and $M'$ is said to be locally rigid if any other holomorphic map $F\colon M \to M'$ which is close enough to $H$ is obtained by composing $H$ with suitable automorphisms of $M$ and $M'$. With the aim of reducing the local rigidity problem to a linear one, we provide sufficient infinitesimal conditions. Furthermore we study some topological properties of the action of the automorphism group on the space of nondegenerate mappings from $M$ to $M'$.
\end{abstract}
\maketitle
\section{Introduction}
\label{intro}
Let $M \subset \mathbb{C}^N$ be a (real-analytic) generic submanifold. We define ${\rm Aut}_0 (M)$ to be the group of the germs $\sigma$ of biholomorphic maps $\mathbb C^N\to \mathbb C^N$, defined around $0$, such that $\sigma(0) = 0$ and $\sigma(M)\subset M$. We denote the Lie algebra of ${\rm Aut}_0 (M)$ by $\mathfrak{hol}_0(M)$.
Let now $M$ and $M'$ be germs of generic real-analytic CR submanifolds in $\mathbb{C}^N$ and $\mathbb{C}^{N'}$ respectively, and let $\mathcal H(M,M')$ denote the space of germs of holomorphic mappings which send $M$ into $M'$. The group $G= {\rm Aut}_0 (M) \times {\rm Aut}_0 (M') $, which we call the \emph{isotropy group}, acts on $\mathcal H(M,M')$ via $H \mapsto \sigma' \circ H \circ \sigma^{-1}$, where $(\sigma, \sigma')\in G$ and $H \in \mathcal H(M,M')$. If we endow all of these sets with their natural (inductive limit) topologies, they become topological spaces and groups, respectively. We are interested in studying the topological properties of this (continuous) group action. More precisely we would like to continue our study of local rigidity of mappings, a notion we introduced in \cite{dSLR15}: we say a map $H\in \mathcal H(M,M')$ is locally rigid if it projects to an isolated point in the quotient $\mathcal H(M,M') / G$ (for a formal definition, see Definition \ref{locrig}).
Our aim is to provide linear -- and thus easier to compute --
sufficient conditions for local rigidity. In order to state a criterion in this direction,
we say a holomorphic section $V$ of $T^{1,0}(\mathbb C^{N'})|_{H(\mathbb C^N)}$,
which vanishes at $0$, is an \emph{infinitesimal deformation} of $H \in \mathcal H(M,M')$
if $\real V$ is tangent to $M'$ along $H(M)$ (for the formal definition see Definition \ref{def:infdef}). We denote the set of infinitesimal deformations of $H$ by $\mathfrak {hol}_0 (H)$; it forms a real vector space.
We are particularly interested in sets of mappings satisfying certain generic nondegeneracy conditions introduced in \cite{La}: in Definition \ref{defNondeg} below we formally introduce the set of finitely nondegenerate mappings.
We are now going to state our main results, which are built on the approach and techniques of our recent work \cite{dSLR15}. In this paper we exploit our method of infinitesimal deformations to study more general situations. The first result is a generalization of Theorem~1 of \cite{dSLR15}.
\begin{theorem}\label{infTrivial}
Let $M$ be a germ of a generic minimal real-analytic submanifold through $0$ in $\mathbb{C}^N$, and $M'$ be
a germ of a generic real-analytic submanifold in $\mathbb{C}^{N'}$. Let $H\in \mathcal H(M,M') $ be a germ
of a finitely nondegenerate map satisfying
$\dim_{\mathbb R} \mathfrak {hol}_0 (H) = 0$. Then $H$ is an isolated point in $\mathcal H(M,M')$, and in particular, $H$ is locally rigid.
\end{theorem}
In the next result we are going to relax the assumption $\dim_{\mathbb R} \mathfrak {hol}_0 (H) = 0$. In order to do so we will restrict to the case when $M \subset \mathbb{C}^N$ and $M' \subset \mathbb{C}^{N'}$ are strictly pseudoconvex hypersurfaces, so that $M$ and $M'$ have CR dimension $n=N-1$ and $n'=N'-1$ respectively. Moreover, since the embeddings of spheres have been studied extensively \cites{We, Fa2, Da, Hu, HJ, Ji, Le, Re2, Re3} (see \cite{dSLR15} for a more thorough discussion and additional references), we assume that $M$ or $M'$ is not biholomorphically equivalent to a sphere.
The following result is a generalization of Theorem 2 of \cite{dSLR15} in the setting of strictly pseudoconvex hypersurfaces.
\begin{theorem}\label{suffcon2intro}
Let $M \subset\mathbb{C}^N$ and $M' \subset \mathbb{C}^{N'}$ be germs of strictly pseudoconvex real-analytic hypersurfaces through $0$, where at least one of $M$ or $M'$ is not spherical. If $H\in \mathcal H(M,M') $ is a germ of a $2$-nondegenerate map that satisfies
$\dim_{\mathbb R} \mathfrak {hol}_0 (H) = \dim_{\mathbb R}\mathfrak{hol}_0(M')$, then $H$ is locally rigid.
\end{theorem}
We note that the assumption of $2$-nondegeneracy implies that $N'\leq \frac{N(N+1)} 2$.
The outline of the paper is as follows: The proofs of our main results are provided in the final section \ref{proof}. Before that, we fix notation in section \ref{prelim} and give a jet parametrization result for finitely nondegenerate maps in section \ref{s:jetparam}. Then we study infinitesimal deformations in section \ref{s:infdef} and deduce some crucial properties of the action of isotropies on the space of maps in section \ref{s:propiso}.
\section{Preliminaries}
\label{prelim}
This section is devoted to introducing some standard notation. For details and proofs, we refer the reader to e.g. \cite{BER2}.
For a generic real-analytic CR submanifold $M\subset \mathbb C^N$ we denote by $n$ its CR dimension and by $d$ its real codimension so that $N=n+d$. It is well-known (cf. \cite{BER2}) that one can choose \emph{normal coordinates}
$(z,w)\in \mathbb C^n_z\times \mathbb C^d_w=\mathbb C^N$ such that the {\em complexification} $\mathcal M\subset \mathbb C^{2N}_{z,\chi,w,\tau}$ of $M$ is
given by
\[ w = Q(z,\chi, \tau), \ \ {\rm (equivalently:} \ \tau = \overline Q(\chi,z, w) {\rm )},\]
for a suitable germ of holomorphic map $Q:\mathbb C^{2n+d}\to\mathbb C^{d}$ satisfying the following equations
\begin{equation}
\label{NormalCoord}
Q(z,0,\tau) \equiv Q(0,\chi,\tau) \equiv \tau, \ \ \ \ Q(z,\chi, \overline Q (\chi, z, w)) \equiv w.
\end{equation}
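For instance, for the Heisenberg hypersurface $\mathbb H^{n+1}=\{(z,w)\in\mathbb{C}^{n+1}: {\rm Im}\, w = \|z\|^2\}$, where $\langle z,\chi\rangle = \sum_{j=1}^n z_j\chi_j$ and $\|z\|^2=\langle \overline z, z\rangle$, one may take $Q(z,\chi,\tau) = \tau + 2i\langle z,\chi\rangle$; the relations \eqref{NormalCoord} are then immediate, e.g.
\[ Q\bigl(z,\chi,\overline Q(\chi,z,w)\bigr) = \bigl(w - 2i\langle \chi,z\rangle\bigr) + 2i \langle z,\chi\rangle = w.\]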
Given a defining equation $\rho' \in (\mathbb{C}\{Z',\bar Z'\})^{d'}$ for $M'$, we recall that
a germ of a mapping $H(z,w)\in (\mathbb C\{z,w\})^{N'}$ with $H(0,0)=0$, belongs to $\mathcal H(M,M')$ if and only if it solves the \emph{mapping equation}
\begin{align}
\label{mapeq}
\rho'(H(z,w),\overline H(\chi,\tau))= 0 \quad {\rm for}\ w = Q(z,\chi,\tau).
\end{align}
We endow $(\mathbb{C}\{Z\})^{N'}$, which we consider as the space of germs at $0$ of holomorphic maps from $\mathbb{C}^N$ to $\mathbb{C}^{N'}$, with the natural direct limit topology and consider $\mathcal H(M,M') \subset (\mathbb{C}\{Z\})^{N'}$ with the induced topology.
We denote by $L_j$ and $\bar L_j$, $j=1,\ldots, n$, a commuting basis of the germs of CR and anti-CR vector fields, respectively, tangent to $\mathcal M$. Furthermore it will be convenient to consider the following vector fields, which are also tangent to $\mathcal M$:
\[T_\ell = \frac{\partial}{\partial w_\ell} + \sum_{k=1}^d \overline Q^k_{w_\ell}(\chi,z,w) \frac{\partial}{\partial \tau_k}, \ \ \ \ S_j = \frac{\partial}{\partial z_j} + \sum_{k=1}^d\overline Q^k_{z_j}(\chi,z,w) \frac{\partial}{\partial \tau_k} \]
where $1\leq \ell \leq d$, $1\leq j\leq n$.
In the sequel we denote by $\mathbb{N} = \{1,2,3,\ldots\}$ the set of natural numbers and write $\mathbb{N}_0 = \mathbb{N} \cup \{0\}$. The notion of nondegeneracy we are interested in was introduced in \cite{La}. For our purposes we will also need a slightly weaker one:
\begin{definition}\label{defNondeg}
Let $M'=\{\rho' = 0\}$, where $\rho' =(\rho_1',\ldots, \rho'_{d'}) \in (\mathbb C\{Z',\zeta'\})^{d'}$ is a local defining function for $M'$. Given a holomorphic map
$H=(H_1,\ldots,H_{N'})\in (\mathbb C\{z,w\})^{N'}$, a fixed sequence $(\iota_1,\ldots,\iota_{N'})$ of
multiindices $\iota_m\in\mathbb{N}_0^n$ and integers $\ell^1,\ldots, \ell^{N'}$ with $1 \leq \ell^j \leq d'$, we consider the determinant
\begin{equation} \label{folcon}
s = \det \left(\begin{array}{ccc}
L^{\iota_1}\rho'_{\ell^1, Z_1'}(H(z,w),\overline H(\chi,\tau)) & \cdots & L^{\iota_1}\rho'_{\ell^1,Z_{N'}'}(H(z,w),\overline H(\chi,\tau)) \\ \vdots & \ddots & \vdots \\
L^{\iota_{N'}}\rho'_{\ell^{N'}, Z_1'}(H(z,w),\overline H(\chi,\tau)) & \cdots & L^{\iota_{N'}} \rho'_{\ell^{N'}, Z_{N'}'}(H(z,w),\overline H(\chi,\tau))\end{array}\right).
\end{equation}
We define the open set $\mathcal F_{k} \subset \mathcal H(M,M')$ as the set of maps $H$ for which there exists a sequence of multiindices $(\iota_1,\ldots,\iota_{N'})$ with $k = \max_{1 \leq m \leq N'}|\iota_m|$ and integers $\ell^1,\ldots, \ell^{N'}$ as above such that $s(0)\neq 0$.
We will say that $H$ with $H(M) \subset M'$ is {\em $k_0$-nondegenerate} if $k_0 = \min \{ k\colon H \in \mathcal{F}_k \}$ is a finite number.
\end{definition}
Note that the definition of both $\mathcal F_{k_0}$ and the space of $k_0$-nondegenerate maps are independent of the choice of coordinates (see
\cite[Lemma 14]{La}), hence these spaces are invariant under the action of $G$. Also notice that in the setup of Theorem \ref{suffcon2intro} the space of $2$-nondegenerate maps coincides with the set $\mathcal F_2$.
Our first main goal in this paper is to study the following property:
\begin{definition}\label{locrig}
Let $M$ and $M'$ be germs of submanifolds in $\mathbb C^N$ (resp. $\mathbb C^{N'}$) around $0$,
and let $H$ be a mapping of $M$ into $M'$.
We say that $H$ is \emph{locally rigid} if $H$ projects to an isolated point in the
quotient $\faktor{\mathcal H(M,M')}{G}$ of the space $\mathcal H(M,M')$ of holomorphic mappings from $M$ to $M'$ with respect to the group of isotropies $G$.
\end{definition}
\begin{remark}\label{rem:equcon}
It is easy to show that $H \in \mathcal H(M,M')$ is locally rigid according to the definition above if and only if there exists a neighborhood $U$ of $H$ in $(\mathbb C\{Z\})^{N'}$ such that for every $\hat H\in \mathcal H(M,M')\cap U$ there is $g\in G$ such that $\hat H=g H$. In other words, $H$ is locally rigid if and
only if all the maps in $\mathcal H(M,M')$ which are close enough to $H$ are equivalent to $H$ (see Remark 12 in \cite{dSLR15}).
\end{remark}
\section{Jet parametrization}
\label{s:jetparam}
In order to prove our main theorems we will show that in an appropriate sense, the infinitesimal deformations can be considered as a tangent space, by deducing a jet parametrization result for maps in $\mathcal F_{k}$ based on the work in \cites{La, BER99, JL}. First we will introduce some notation.
Let $H: \mathbb{C}^N \to \mathbb{C}^{N'}$ be a germ of a holomorphic map and let $k$ be an integer. We denote by $j_0^k H$ the \emph{$k$-jet of $H$ at $0$}, that is the collection of all derivatives of order $\leq k$ of the components of $H$ at $0$. The space of all $k$-jets at $0$ will be denoted by $J_0^k$ (we drop the dependence on $N$ and $N'$, which will remain fixed, for better readability). We denote by $\Lambda$ coordinates in $J_0^k$ and write
$\Lambda = (\Lambda',\Lambda_{N'}) = (\Lambda_1,\ldots,\Lambda_{N'-1},\Lambda_{N'})$ with $\Lambda_j=(\Lambda_j^{\alpha,\beta})$, where $\alpha \in \mathbb{N}_0^{n}, \beta \in \mathbb{N}_0^d$ and $0\leq |\alpha|+|\beta| \leq k$. We have
\[ \Lambda = j_0^k H \text{ if and only if } \Lambda_j^{\alpha,\beta} = \frac{1}{\alpha!\beta!}
\frac{\partial^{|\alpha|+|\beta|} H_j}{\partial z^\alpha \partial w^\beta} (0). \]
We can identify $k$-th order jets with polynomial maps of degree at most $k$ (taking $\mathbb{C}^N$ into $\mathbb{C}^{N'}$) and will do so freely in the sequel. In particular, the composition of a jet with a jet is defined, as well as the composition of jets with other maps, provided that the source and target dimensions allow it.
We also need to recall the definition of certain subsets of $\mathbb C^N$, commonly referred to as the \emph{Segre sets}. In order to do so we need to introduce some notation. For any $j\in \mathbb{N}$ let $(x_1,\ldots, x_j)$ ($x_\ell\in \mathbb C^n$) be coordinates for $\mathbb C^{nj}$. The \emph{Segre map} of order $q\in \mathbb{N}$ is the map $S^q_0:\mathbb C^{nq}\to \mathbb C^N$ inductively defined as follows:
\[S^1_0(x_1) = (x_1,0), \ \ S^q_0(x_1,\ldots,x_q) = \left (x_1, Q\left(x_1,\overline S^{q-1}_0(x_2,\ldots,x_q) \right)\right)\]
where we denote by $\overline S^{q-1}_0$ the power series whose coefficients are conjugate to the ones of $S^{q-1}_0$ and $Q$ is a map as given in \eqref{NormalCoord}. In particular if $w-Q(z,\chi,\tau)=0$ is a local defining equation of the complexification of a CR submanifold $M \subset \mathbb{C}^N$, we say the Segre map $S^q_0$ is associated to $M$. The $q$-th Segre set $\mathcal S^q_0\subset \mathbb C^N$ is then the image of the map $S^q_0$. In what follows we will use the notation $x^{[j;k]} = (x_j,\ldots,x_k)$. It is known from the Baouendi-Ebenfelt-Rothschild minimality criterion \cite{BER1} that if $M$ is minimal at $0$, then
$S_0^j$ is generically of full rank if $j$ is large enough. We recall that a germ of a CR submanifold $M \subset \mathbb{C}^N$ is called \emph{minimal at $p \in M$} if there is no germ of a CR submanifold $\tilde M \subsetneq M$ of $\mathbb{C}^N$ through $p$ having the same CR dimension as $M$ at $p$.
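For instance, for the Heisenberg hypersurface $\mathbb H^{n+1}=\{{\rm Im}\, w = \|z\|^2\}$ with $Q(z,\chi,\tau)=\tau+2i\langle z,\chi\rangle$ as above, one computes
\[ S^1_0(x_1)=(x_1,0), \qquad S^2_0(x_1,x_2) = \bigl(x_1,\, 2i\langle x_1, x_2\rangle\bigr),\]
and $S^2_0$ is already generically of full rank, so that one may take $\mathbf t = 2$ in this case.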
\begin{theorem}\label{jetparam}
Let $M\subset \mathbb C^N$ be the germ of a real-analytic, generic minimal submanifold, $0\in M$, and let $M'\subset \mathbb C^{N'}$ be a real-analytic generic submanifold germ. Let $k_0 \in \mathbb{N}$ and $\mathbf t \leq d+1$ be the minimum integer, such that the Segre map $S^{\mathbf t}_0$ of order $\mathbf t$ associated to $M$ is generically of full rank. First suppose that $\mathbf t$ is even.
There exists a finite collection of polynomials
$q_{j}(\Lambda)$ on $J_0^{\mathbf t k_0}$ for $j \in J$, where $J$ is a suitable finite index set, open neighborhoods $\mathcal U_j$ of $\{0\} \times U_j$ in $\mathbb C^N \times J_0^{\mathbf t k_0}$, where $U_j = \{q_j\neq 0\}$ and holomorphic maps
$\Phi_j \colon \mathcal U_j \to \mathbb{C}^{N'} $ satisfying $\Phi_j(0,\Lambda) = 0$, which are of the form
\begin{align} \label{rationalJetParam}
\Phi_j (Z,\Lambda) = \sum_{\alpha\in \mathbb{N}_0^N} \frac{p_j^{\alpha} (\Lambda)}{q_j(\Lambda)^{d^j_\alpha}} Z^\alpha, \quad p_j^\alpha, q_j \in \mathbb{C}[\Lambda], \quad d^j_\alpha\in\mathbb{N}_0,
\end{align}
such that the following holds:
\begin{itemize}
\item For every $H\in\mathcal F_{k_0}$, in particular for every $k_0$-nondegenerate map $H$, there exists $j \in J$ such that $j_0^{\mathbf t k_0} H \in U_j$.
\item For every $H\in\mathcal F_{k_0} $ with $j_0^{\mathbf t k_0} H \in U_j$ we have \[ H(Z) = \Phi_j (Z,j_0^{\mathbf t k_0} H).\]
\end{itemize}
In particular, there exist (real) polynomials $c^j_k$, $k\in\mathbb{N}$ on $J_0^{\mathbf t k_0}$ such that
\begin{align}
\label{defEquationJetParam}
j_0^{\mathbf t k_0} \mathcal F_{k_0} = \bigcup_{j \in J}\{ \Lambda\in J_0^{\mathbf t k_0} \colon q_j(\Lambda) \neq 0, \, c^j_k (\Lambda, \bar \Lambda) =0 \}.
\end{align}
Analogous statements hold for $\mathbf t$ odd; in this case all $p_j^{\alpha}$ and $q_j$ in the expansion of $\Phi_j$ in \eqref{rationalJetParam} depend antiholomorphically on $\Lambda$.
\end{theorem}
The proof of Theorem \ref{jetparam} will be split up into several lemmas. We define $K(t) = |\{ \alpha \in \mathbb{N}_0^N: |\alpha| \leq t\}|$.
\begin{lemma}\label{lem:basicIdentity}
Let $M$ and $M'$ be as before. Fix multiindices $(\iota_1,\ldots,\iota_{N'})$ and integers $\ell^1,\ldots, \ell^{N'}$ as above. Let $k_0 = \max_{1\leq m \leq N'} |\iota_m|$. There is a holomorphic map $\Psi: \mathbb{C}^N \times \mathbb{C}^N \times \mathbb{C}^{K(k_0) N'} \to \mathbb{C}^{N'}$ such that for every holomorphic map $H: \mathbb{C}^N \to \mathbb{C}^{N'}$ satisfying \eqref{mapeq} and $s(0)\neq 0$, where $s$ is given as in \eqref{folcon}, we have
\begin{align}
\label{basicIdentity}
H(Z) = \Psi(Z,\zeta, \partial^{k_0} \bar H(\zeta)),
\end{align}
for $(Z,\zeta)$ in a neighborhood of $0$ in $\mathcal M$, where $\partial^{k_0}$ denotes the collection of all derivatives up to order $k_0$. Furthermore there exist polynomials $P_{\alpha, \beta}, Q$ and integers $e_{\alpha,\beta}$ such that
\begin{align}
\label{basicIdentityRational}
\Psi(Z,\zeta,W) = \sum_{\alpha,\beta \in \mathbb{N}_0^N} \frac{P_{\alpha,\beta}(W)}{Q^{e_{\alpha,\beta}}(W)} Z^{\alpha} \zeta^{\beta}.
\end{align}
\end{lemma}
In Lemma \ref{lem:basicIdentity} the statement up until \eqref{basicIdentity} is a reformulation of Prop. 25 in \cite{La}. The expansion in \eqref{basicIdentityRational} follows from the way the implicit function theorem is applied in the proof of Prop. 25, in a similar fashion as in \cites{BER99, JL}.
From now on all the jet parametrization mappings that will appear in the following lemmas will depend on the multiindices and integers fixed in Lemma \ref{lem:basicIdentity}. For the sake of readability we do not write this dependence explicitly.
\begin{lem}\label{lem:derivBasicIdentity}
Under the assumptions of Lemma \ref{lem:basicIdentity} the following holds: For all $\ell \in \mathbb{N}$ there exists a holomorphic mapping $\Psi_{\ell}: \mathbb{C}^N \times \mathbb{C}^N \times \mathbb{C}^{K(k_0 + \ell) N'} \to \mathbb{C}^{N'}$ such that for every holomorphic map $H: \mathbb{C}^N \to \mathbb{C}^{N'}$ satisfying \eqref{mapeq} and $s(0)\neq 0$, where $s$ is given as in \eqref{folcon}, we have
\begin{align}
\label{derivBasicIdentity}
\partial^{\ell} H(Z) = \Psi_{\ell}(Z,\zeta, \partial^{k_0+ \ell} \bar H(\zeta)),
\end{align}
for $(Z,\zeta)$ in a neighborhood of $0$ in $\mathcal M$, where $\partial^{\ell}$ denotes the collection of all derivatives up to order $\ell$. Furthermore there exist polynomials $P^{\ell}_{\alpha, \beta},Q_{\ell}$ and integers $e^{\ell}_{\alpha,\beta}$ such that
\begin{align}
\label{derivBasicIdentityRational}
\Psi_{\ell}(Z,\zeta,W) = \sum_{\alpha,\beta \in \mathbb{N}_0^N} \frac{P^{\ell}_{\alpha,\beta}(W)}{Q_{\ell}^{e^{\ell}_{\alpha,\beta}}(W)} Z^{\alpha} \zeta^{\beta}.
\end{align}
\end{lem}
Lemma \ref{lem:derivBasicIdentity} follows by differentiating \eqref{basicIdentity} along the vector fields $S$ and $T$ introduced above, see Cor. 26 of \cite{La}.
The next step is to evaluate (\ref{basicIdentity}) along the Segre sets.
\begin{cor}\label{cor:iterationSegre}
Under the assumptions of Lemma \ref{lem:basicIdentity} the following holds: For fixed $q \in \mathbb{N}$ there exists a holomorphic mapping $\varphi_{q}: \mathbb{C}^{qn} \times \mathbb{C}^{K(q k_0) N'}\to \mathbb{C}^{N'}$ such that for every holomorphic map $H: \mathbb{C}^N \to \mathbb{C}^{N'}$ satisfying \eqref{mapeq} and $s(0)\neq 0$, where $s$ is given as in \eqref{folcon}, we have
\begin{align}
\label{iterationSegre}
H(S^q_0(x^{[1;q]})) = \varphi_{q}(x^{[1;q]}, j_0^{q} H).
\end{align}
Furthermore there exist (holomorphic) polynomials $R^{q}_{\gamma},S_{q}$ and integers $m^q_{\gamma}$ such that
\begin{equation}
\label{iterationSegreRational}
\varphi_{q}(x^{[1;q]},\Lambda) =\begin{cases} \sum_{\gamma \in \mathbb{N}_0^{qn}} \frac{R^{q}_{\gamma}( \Lambda)}{S_{q}^{m^{q}_{\gamma}}(\Lambda)} (x^{[1;q]})^{\gamma} & q \text{ even }\\
\sum_{\gamma \in \mathbb{N}_0^{qn}} \frac{R^{q}_{\gamma}( \bar \Lambda)}{S_{q}^{m^{q}_{\gamma}}(\bar \Lambda)} (x^{[1;q]})^{\gamma} & q \text{ odd }.\end{cases}
\end{equation}
\end{cor}
\begin{proof}
Fix $q\in \mathbb{N}$. We begin by putting $Z=S^q_0(x^{[1;q]})$ and $\zeta = \bar S^{q-1}_0(x^{[2;q]})$, so that $(Z,\zeta)\in \mathcal M$, in the identity (\ref{basicIdentity}), in order to obtain
\begin{align}\label{firsteval}
H(S^q_0(x^{[1;q]})) = \Psi(S^q_0(x^{[1;q]}),\bar S^{q-1}_0(x^{[2;q]}), \partial^{k_0} \bar H(\bar S^{q-1}_0(x^{[2;q]}))).
\end{align}
This equation means that one can determine the value of any solution of (\ref{mapeq}), at least along $\mathcal S^q_0$, by knowing the values of its derivatives along $\mathcal S^{q-1}_0$. To determine the latter, we put $\zeta = \overline S^{q-1}_0(x^{[2;q]})$ and $Z = S^{q-2}_0(x^{[3;q]})$ (again so that $(Z,\zeta) \in \mathcal M$) in the conjugate of (\ref{derivBasicIdentity}):
\begin{equation}\label{secondeval}
\partial^{\ell} \bar H(\overline S^{q-1}_0(x^{[2;q]})) = \bar \Psi_{\ell}(\overline S^{q-1}_0(x^{[2;q]}),S^{q-2}_0(x^{[3;q]}), \partial^{k_0+ \ell} H(S^{q-2}_0(x^{[3;q]}))).
\end{equation}
By substituting (\ref{secondeval}) for $\ell=k_0$ into (\ref{firsteval}), we get that the values of $H$ along $\mathcal S^q_0$ are determined by the values of its $2k_0$-th order jet along $\mathcal S^{q-2}_0$. Iterating this argument $q$ times we prove \eqref{iterationSegre}. To show \eqref{iterationSegreRational} we use \eqref{basicIdentityRational} and \eqref{derivBasicIdentityRational} at every step; the desired expansion follows by a (cumbersome but) straightforward computation. In particular one derives that $j_0^{q k_0} H \in \{S_q \neq 0\}$ whenever $s(0) \neq 0$. For more details see \cites{BER99, JL}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{jetparam}] By the choice of $\mathbf t\leq d+1$, the Segre map $S^{\mathbf t}_0$ is generically of maximal rank, and we can therefore define the finite number $\nu (S^{\mathbf t}_0)$ as the minimum order of vanishing of a minor of maximal size of the Jacobian of $S^{\mathbf t}_0$.
We can thus appeal to Theorem~5 from \cite{JL} and obtain that there exist a neighborhood $\mathcal V$ of $S^{\mathbf t}_0$ in $(\mathbb C\{x^{[1;\mathbf t]}\})^N$ and a holomorphic map
\[\phi:\mathcal V\times \mathbb C\{x^{[1;\mathbf t]}\} \to \mathbb C\{Z\}\]
such that $\phi(A,h\circ A) = h$ for all $A\in \mathcal V$ with $\nu(A) = \nu(S^{\mathbf t}_0)$, and for all $h\in\mathbb C\{Z\}$.
Now, define $J$ as the set of all the sequences of multiindices $(\iota_1,\ldots,\iota_{N'})$ and integers $\ell^1,\ldots, \ell^{N'}$ as in Lemma \ref{lem:basicIdentity} with $k_0 = \max_{1\leq m \leq N'} |\iota_m|$.
For any $j \in J$, Corollary \ref{cor:iterationSegre} with $q=\mathbf t$ provides the existence of a map $\varphi_{\mathbf t} = \varphi_{\mathbf t,j}$ satisfying \eqref{iterationSegre}. We set $\Phi_j(\cdot,\Lambda) = \phi(S^{\mathbf t}_0, \varphi_{\mathbf t,j}(\cdot,\Lambda))$ so that by the
properties of $\varphi_{\mathbf t,j}$ and $\phi$ the map $\Phi_j$ depends holomorphically on $\Lambda = j_0^{\mathbf t k_0} H$ (or $\bar \Lambda$, respectively, if $\mathbf t$ is odd). By setting $q_j(\Lambda,\bar \Lambda) = S_{\mathbf t,j}(\Lambda)$, where $S_{\mathbf t,j}$ is given in \eqref{iterationSegreRational}, a direct computation using \eqref{iterationSegreRational} and Thm. 5 in \cite{JL} (more precisely (42) of Thm. 6 of \cite{JL}) allows one to derive the expansion in \eqref{rationalJetParam}.
It follows from Corollary \ref{cor:iterationSegre} and Thm.~5 in \cite{JL} that
$H(Z) = \Phi_j(Z, j^{\mathbf tk_0}_0H)$
whenever $H$ is a solution of (\ref{mapeq}) and $s(0)\neq 0$, where $s$ is given as in \eqref{folcon} with the sequence of multiindices corresponding to $j$. In particular if $H$ is $k_0$-nondegenerate by definition there exists $j \in J$ such that $s(0)\neq 0$, which by the arguments above is equivalent to the condition $j_0^{\mathbf t k_0} H \in U_j$.
Finally, the
remaining statement can be proved by setting
$H(Z) =
\Phi_j(Z,\Lambda)$ in (\ref{mapeq}) and expanding it as a power series in
$(z,\chi,\tau)$: the coefficients of this power series depend polynomially on
$\Lambda,\overline \Lambda$, so that the defining equations (\ref{defEquationJetParam}) can be obtained
by setting all the coefficients to $0$.
\end{proof}
\section{Infinitesimal deformations}
\label{s:infdef}
In the following we refer to the notation of Theorem \ref{jetparam}. For any $j \in J$ let $A_j\subset U_j$ be the real-analytic set defined as
\begin{equation}\label{defas}
A_j = \{\Lambda\in U_j \colon c^j_k(\Lambda,\overline \Lambda) = 0, k\in \mathbb{N}\}.
\end{equation}
By Theorem \ref{jetparam} putting $A \coloneqq \bigcup_j A_j$ we have $j_0^{\mathbf t k_0}(\mathcal F_{k_0}) = A$; in particular $A$ contains the set of $\mathbf t k_0$-jets of all $k_0$-nondegenerate mappings of $M$ into $M'$.
In fact we can say more:
\begin{lemma}\label{PhiHomo}
For every $j\in J$ we define $\mathcal F_{k_0, j} \coloneqq \mathcal F_{k_0} \cap (j_0^{\mathbf t k_0})^{-1}(U_j)$.
The map $\Phi_j:A_j\to\mathcal F_{k_0,j}$ is a homeomorphism.
\end{lemma}
This result is proved exactly as Lemma 19 in \cite{dSLR15} as a direct consequence of Theorem \ref{jetparam}.
For each $j \in J$ the restriction of $\Phi_j$ to $\mathcal U_j \cap (\mathbb C^N \times A_j)$ gives rise to a map
\[A_j \ni \Lambda \to \Phi_j(\Lambda) \in (\mathbb C\{Z\})^{N'}, \ \Phi_j(\Lambda) (Z) = \Phi_j(Z,\Lambda)\]
from $A_j$ to the space $(\mathbb C\{Z\})^{N'}$.
Let $X\subset A_j$ be any regular (real-analytic) submanifold, and fix $\Lambda_0 \in X$. In what follows we focus on $\Phi_j|_X$.
Note that if we restrict to a small enough neighborhood of $\Lambda_0$ in $X$ (which we again denote by $X$)
the maps $\Phi_j(\Lambda) $ for all $\Lambda\in X$ all have a common radius of convergence $R$,
so that we can consider the restriction of $\Phi_j$ to $X$ as a map
valued in the Banach space ${\rm Hol}(\overline{B_R(0)}, \mathbb C^{N'})$, the space of holomorphic mappings from $B_R(0)$ to $\mathbb{C}^{N'}$, which are continuous up to $\overline{B_R(0)}$, where $B_R(0)$ denotes the ball with radius $R >0$ in $\mathbb{C}^N$.
We also remark that the map $\Phi_j: X \to (\mathbb C\{Z\})^{N'}$ is of class $C^\infty$. We consider its Fr\'echet derivative $D \Phi_j(\Lambda_0)$ at $\Lambda_0$ as a map $T_{\Lambda_0}X \to T_{\Phi_j(\Lambda_0)}(\mathbb C\{Z\})^{N'}\cong (\mathbb C\{Z\})^{N'}$.
We will need a special subspace of $T_{\Phi_j(\Lambda_0)}(\mathbb C\{Z\})^{N'}$. The following definition, already stated in Section \ref{intro}, was first given in \cite{CH}, see also \cite{dSLR15}.
\begin{definition}\label{def:infdef} Let $M$ and $M'$ be as above and
$H\colon (\mathbb C^N,0) \to (\mathbb C^{N'},0)$ a map with $H(M) \subset M'$. Then a vector
\[V = \sum_{j=1}^{N'} \alpha_j(Z)\frac{\partial}{\partial Z_j'} \in T_{H}(\mathbb C\{Z\})^{N'}\]
is an \emph{infinitesimal deformation of $H$} if the real part of $V$ is tangent to $M'$ along $H(M)$, i.e.\ if
for one (and hence every) defining function $\rho'=(\rho'_1,\ldots, \rho'_{d'})$ of $M'$, the components of $V$ satisfy the following linear system
\begin{equation}\label{infdef}
{\rm Re}\left(\sum_{j=1}^{N'}\alpha_j(Z)\frac{\partial \rho'_{\ell}}{\partial Z_j'}(H(Z),\overline {H(Z)})\right)=0 \ \ {\rm for} \ Z\in M, \ \ell = 1, \ldots, d'.
\end{equation}
We denote this subspace of $T_{H}(\mathbb C\{Z\})^{N'}$ by $\mathfrak {hol}_0 (H)$.
\end{definition}
With the same proof as in \cite{dSLR15} we derive the following property, which motivates the definition above:
\begin{lemma}\label{contain}
The image of $T_{\Lambda_0}X$ by $D\Phi_j(\Lambda_0)$ is contained in $\mathfrak {hol}_0 (\Phi_j(\Lambda_0))$.
\end{lemma}
The next lemmas give some properties of infinitesimal deformations that will be needed in section \ref{proof} to give the proofs of our main theorems.
The first lemma comes from the jet parametrization for solutions of \eqref{infdef} obtained in section 5 in \cite{dSLR15}. Its proof is precisely the one of Cor. 32 in \cite{dSLR15} using Prop. 29 instead of Prop. 31 and keeping $Q$ fixed.
\begin{lemma}\label{semicont}
Fix $M$ and $M'$ given as above. For any $H \in \mathcal F_{k_0}$, the dimension $\dim_{\mathbb{R}}(\mathfrak{hol}_0(H))$ of the space of infinitesimal deformations of $H$ is finite. Moreover, the function $\dim_{\mathbb{R}}(\mathfrak{hol}_0(\cdot)): \mathcal F_{k_0} \to \mathbb{N}_0$ is upper semicontinuous, i.e.\ for any $H \in \mathcal F_{k_0}$ there exists a neighborhood $\mathcal V$ of $H$ in $\mathcal F_{k_0}$ such that for any $H'\in \mathcal V$ we have $\dim_{\mathbb{R}}(\mathfrak{hol}_0(H'))\leq \dim_{\mathbb{R}}(\mathfrak{hol}_0(H))$.
\end{lemma}
The following lemma follows from Theorem \ref{jetparam}, Lemma \ref{contain} and Lemma \ref{semicont} with the same proof as Lemma 23 in \cite{dSLR15}.
\begin{lemma} \label{dim10}
Let $\Lambda_0\in A_j$, and suppose that
$\dim_{\mathbb R} \mathfrak {hol}_0 (\Phi_j(\Lambda_0)) = \ell$.
Then there exists a neighborhood $U$ of $\Lambda_0$ in $J_0^{\mathbf t k_0}$ such that,
if $X\subset A_j$ is a submanifold such that $X\cap U\neq \emptyset$, then $\dim_{\mathbb R}(X) \leq \ell$.
\end{lemma}
\section{Properties of the group action}
\label{s:propiso}
In this section we deduce some results which will be used to prove Theorem \ref{suffcon2intro}. Thus we consider strictly pseudoconvex hypersurfaces $M \subset \mathbb{C}^N$ and $M' \subset \mathbb{C}^{N'}$. In the coordinates introduced in section \ref{prelim} this means that the CR dimensions of $M$ and $M'$ are equal to $n=N-1$ and $n'=N'-1$, respectively (and $d=d'=1$).
More specifically we are interested in describing some properties of the action of the isotropy group on $2$-nondegenerate embeddings, or more precisely on the set of their $4$-jets (which by Theorem \ref{jetparam} parametrize $\mathcal F_2$). To this end we first give a brief description of the isotropy groups of the spheres $\mathbb H^{n+1}=\{(z,w) \in \mathbb{C}^{n+1}: {\rm Im}\, w =\|z\|^2\}$. Let $\Gamma_n = \mathbb R^+ \times \mathbb R \times U(n) \times \mathbb C^n$ be a parameter space. Then the map
\begin{equation}\label{param}
\Gamma_n \ni \gamma = (\lambda, r, U, c) \to \sigma_{\gamma}(z,w) = \frac{(\lambda U \ {}^t(z + c w), \lambda^2 w) }{1-2i \langle \overline c, z \rangle + (r - i \|c\|^2)w} \in {\rm Aut_0}(\mathbb H^{n+1})
\end{equation}
is a diffeomorphism between $\Gamma_n$ and ${\rm Aut_0}(\mathbb H^{n+1})$: here we denote by $\langle \cdot, \cdot \rangle$ the product on $\mathbb C^n$ given by $\langle z, \widetilde z \rangle= \sum_{j=1}^n z_j\widetilde z_j$ and we write $\| z \|^2 = \langle \overline z, z \rangle$.
The first property we are going to study is properness: we remind the reader that the action of a topological group $\mathcal G$ on a space $X$ is said to be \emph{proper} if the map $\mathcal G\times X \to X \times X$ given by $(g,x) \mapsto (x,gx)$ is proper.
We will actually prove properness of the action on a particular subset of $J_0^4$: let $E$ be the subset of $J_0^4$ defined by
\[E=\left\{\Lambda_{N'}^{\alpha,0} = 0, |\alpha|\leq 2,\ 0 \neq \Lambda_{N'}^{0,1} = \|\Lambda'^{\beta,0}\|^2, |\beta|=1,\left\langle \overline\Lambda'^{\gamma,0}, \Lambda'^{\delta,0} \right\rangle = 0, \gamma \neq \delta, |\gamma|=|\delta|=1 \right \}.\]
One can verify in a straightforward manner the following properties of $E$:
\begin{itemize}
\item $E$ is a (real algebraic) submanifold of $J_0^4$.
\item For $M=\{{\rm Im}\, w = \|z\|^2 +O(2)\}$ and $M'=\{{\rm Im}\, w' = \|z'\|^2 + O(2)\}$ the set $E$ contains the $4$-th jet of any non-constant map from $M$ to $M'$.
\item $E$ is invariant under the action of ${\rm Aut}_0(\mathbb H^N) \times {\rm Aut}_0(\mathbb H^{N'})$, cf. Lemma 14 in \cite{dSLR15}.
\end{itemize}
\begin{lemma}\label{proact}
Suppose that $M\not \cong \mathbb H^{N}$ is strictly pseudoconvex and $M'=\mathbb H^{N'}$. Then the action of $G = {\rm Aut}_0 (M) \times{\rm Aut}_0 (\mathbb H^{N'})$ on $E$ is proper.
\end{lemma}
\begin{proof}
The proof follows closely the one of \cite[Lemma 15]{dSLR15}. With the same argument as there using the compactness of ${\rm Aut}_0(M)$, which follows from the assumption that $M\not\cong \mathbb H^N$ (see \cite{BV}), we can reduce to showing the following: let $C>1$, and $\{(\Lambda_m, \widetilde \Lambda_m)\}_{m\in\mathbb{N}} \subset E \times E$, $(\sigma'_m)_{m\in \mathbb{N}} \subset {\rm Aut}_0(\mathbb H^{N'})$ be sequences such that $|\Lambda_m|, |\widetilde \Lambda_m|\leq C$, $|(\Lambda_m)_{N'}^{0,1}|, |(\widetilde \Lambda_m)_{N'}^{0,1}|\geq 1/C$ and \begin{equation}\label{e:compose}\widetilde \Lambda_m = \sigma'_m \circ \Lambda_m \end{equation} for all $m\in \mathbb{N}$. Then $\sigma'_m$ admits a convergent subsequence.
Using the parametrization (\ref{param}), this amounts to showing that the preimage $\{\gamma'_m=(\lambda'_m, r'_m, U'_m, c'_m)\}$ of $\sigma'_m$ in the parameter space $\Gamma'$ is relatively compact, that is $|r'_m|, \|c'_m\|, \lambda'_m$ and $1/\lambda'_m$ are bounded, since $U(N'-1)$ is a compact group.
Looking at the $N'$-th component of the first jet of \eqref{e:compose}, we get
\[{\lambda'_m}^2(\Lambda_m)_{N'}^{0,1} = (\widetilde \Lambda_m)_{N'}^{0,1},\]
and hence $\lambda'_m$ and $1/\lambda'_m$ are bounded.
Considering the first $N'-1$ components of the first jet of \eqref{e:compose} we obtain
\[\lambda_m' U_m' \ \Lambda_m'^{0,1} +(\Lambda_m)_{N'}^{0,1}\ c_m' = \widetilde \Lambda_m'^{0,1},\]
hence
\begin{align*}
\|c_m'\| \leq \frac{1}{|(\Lambda_m)_{N'}^{0,1}|} \left\| \widetilde \Lambda_m'^{0,1} - \lambda_m' U_m' \ \Lambda_m'^{0,1} \right\|,
\end{align*}
which implies that $c_m'$ is bounded in $\mathbb C^{N'-1}$. Finally, we consider the $N'$-th component of the second jet of \eqref{e:compose}, which gives us
\[(\widetilde \Lambda_m)_{N'}^{0,2} = -2 {\lambda'}_m^2 \left((\Lambda_m)_{N'}^{0,1}\right)^2 r'_m + R_m,\]
where $R_m$ is a polynomial expression in the second jet of $\Lambda_m$, in $\lambda'_m$ and in the coefficients of $c'_m$ and $U'_m$ (but which does not depend on $r'_m$). This shows that the sequence $r'_m$ is bounded in $\mathbb R$, and concludes the proof.\end{proof}
Next, we consider the case $M=\mathbb H^N$ and $M' \not\cong \mathbb H^{N'}$. The proof is quite similar to the previous one, but does not really reduce to it.
\begin{lemma}\label{proact2}
The action of ${\rm Aut}_0 (\mathbb H^N) \times {\rm Aut}_0 (M')$ on $E$ is proper.
\end{lemma}
\begin{proof}
By the compactness of ${\rm Aut}_0 (M')$ as in the previous lemma it is enough to show the following: let $C>1$, and let $\{(\Lambda_m,\widetilde\Lambda_m)\}_{m\in\mathbb{N}} \subset E \times E$, $(\sigma_m)_{m\in \mathbb{N}} \subset {\rm Aut}_0 (\mathbb H^N)$ be sequences such that $|\Lambda_m|, |\widetilde \Lambda_m|\leq C$, $|(\Lambda_m)_{N'}^{0,1}|, |(\widetilde \Lambda_m)_{N'}^{0,1}|\geq 1/C$ and \begin{equation}\label{e:compose2}\widetilde\Lambda_m = \Lambda_m \circ \sigma_m\end{equation} for all $m\in \mathbb{N}$. Then $\sigma_m$ admits a convergent subsequence.
The $N'$-th component of the first jet of \eqref{e:compose2} gives
\[\lambda_m^2(\Lambda_m)_{N'}^{0,1} = (\widetilde \Lambda_m)_{N'}^{0,1},\]
which implies that the sequence $\lambda_m$ is bounded above and below.
Given $\Lambda \in J_0^4$ we denote by $\Lambda'^{1,0}$ the $(N'-1)\times (N-1)$-matrix given by $(\Lambda_j'^{\alpha,0}), j=1,\ldots,N'-1, \alpha \in \mathbb{N}_0^{N-1}$ with $|\alpha| = 1$. The first $N'-1$ components of (the $(0,1)$-part of) the first jet of \eqref{e:compose2} can then be written as follows:
\[\lambda_m \left (\lambda_m \Lambda_m'^{0,1} + \Lambda_m'^{1,0} U_m c_m \right)= \widetilde \Lambda_m'^{0,1} , \]
therefore
\[ \frac{1}{\lambda_m } \widetilde \Lambda_m'^{0,1} - \lambda_m \Lambda_m'^{0,1}= \Lambda_m'^{1,0} U_m c_m. \]
By definition of $E$ we can write the matrix $\Lambda_m'^{1,0}$ as $\sqrt{(\Lambda_m)_{N'}^{0,1}} A_m$, where $A_m$ is a semi-unitary matrix, i.e. ${}^t \overline A_m A_m = I_{N-1}$, thus we have
\begin{align*}
\| \Lambda_m'^{1,0} U_m c_m \|^2 = (\Lambda_m)_{N'}^{0,1} \| A_m U_m c_m\|^2 = (\Lambda_m)_{N'}^{0,1} \| U_m c_m \|^2 = (\Lambda_m)_{N'}^{0,1} \|c_m \|^2,
\end{align*}
so that by the estimate on $(\Lambda_m)_{N'}^{0,1}$ it holds that
\[ \frac{\|c_m\| }{\sqrt C}\leq \lambda_m \left \|\ \Lambda_m'^{0,1} \right \| +\frac{1}{\lambda_m } \left \| \widetilde \Lambda_m'^{0,1}\right \|,\]
which implies the boundedness of $c_m$ in $\mathbb{C}^{N-1}$. Finally, we consider the $N'$-th component of the second jet of \eqref{e:compose2}, which gives the equation
\[(\widetilde \Lambda_m)_{N'}^{0,2} = -\lambda_m^2 (\Lambda_m)_{N'}^{0,1} r_m + R_m,\]
where $R_m$ is a polynomial expression in the second jet of $\Lambda_m$ and in $\lambda_m$, $c_m$, $U_m$, not depending on $r_m$. This implies that the sequence $r_m$ is bounded in $\mathbb R$, and concludes the proof.
\end{proof}
Next we are going to prove the freeness of the action of the isotropy group of the target manifold. In order to do so, we first introduce for any fixed map $H$, in a way similar to Lemma 17 in \cite{dSLR15}, coordinates such that
\mathbf{E}gin{itemize}
\item the map $H$ is of the form $(z,F(z,w),w)$ for a certain germ of holomorphic function $F: \mathbb C^{N}\to \mathbb C^{\ell}$, where $\ell = N'-N$, such that $F(0) = 0$;
\item the automorphism group of $M'$ at $0$ is a subgroup of ${\rm Aut}_0 (\mathbb H^{N'})$.
\end{itemize}
\begin{lemma}\label{free}
Let $\Lambda\in E$ be the $4$-jet of a map of the form $(z,w)\to(z,F(z,w),w) \in \mathcal F_2$. Then the stabilizer of $\Lambda$ under the action of $G' = \{\id\} \times {\rm Aut}_0(\mathbb H^{N'})$ is trivial.
\end{lemma}
\begin{proof}
For $v \in \mathbb{C}^m$ we denote by $v_{[j;k]}$ the coordinates $(v_j, \ldots, v_k)$ for $1\leq j \leq k\leq m$. First using that $\Lambda\in E$ we deduce $\Lambda_N^{\alpha,0} = \cdots =\Lambda_{N'-1}^{\alpha,0} =0$ for all $\alpha$ such that $|\alpha|=1$. Indeed for all $\alpha$ with $|\alpha|=1$ we have
\[
1 = \Lambda_{N'}^{0,1} = \|\Lambda'^{\alpha,0}\|^2 = \sum_{j=1}^{N-1} |\Lambda_j^{\alpha,0}|^2 + \sum_{j=N}^{N'-1} |\Lambda_j^{\alpha,0}|^2 = 1 + \sum_{j=N}^{N'-1} |\Lambda_j^{\alpha,0}|^2,
\]
since for every $\alpha$ there exists exactly one $j_{\alpha}$, such that $\Lambda_{j_\alpha}^{\alpha,0} = 1$ and $\Lambda_{j}^{\alpha,0} = 0$ for all $j \neq j_{\alpha}, 1\leq j \leq N-1$ (by the form of $\Lambda$). In other words the entries of the last $\ell$ rows of the $(N-1+\ell)\times (N-1)$ matrix $\Lambda'^{1,0}$ defined in the proof of Lemma \ref{proact2} are all zeros, while the rest of the matrix is an $(N-1)\times (N-1)$ identity matrix.
Let $\Lambda$ be as in the assumptions and $\sigma' \in G'$ in the stabilizer of $\Lambda$, that is
\begin{equation}
\label{eq:free}
\Lambda = \sigma' \circ \Lambda.
\end{equation}
Let $\sigma'$ be parametrized by $\gamma' = (\lambda',r',U',c')\in \Gamma'$. We will show that $\sigma' = \id$ by computations analogous to the ones in Lemma \ref{proact}. Looking at the $N'$-th component of the first jet of \eqref{eq:free} we see that $(\lambda')^2 = 1$, hence $\lambda' = 1$ since $\lambda'\in \mathbb R^+$.
Considering the first $N'-1$ components of the first jet of \eqref{eq:free}, we get $U' \Lambda'^{1,0} =\Lambda'^{1,0}$, hence $U'$ is a block diagonal matrix with first block being $I_{N-1}$ and the second block a $\ell \times \ell$ unitary matrix $U''$. Furthermore, we obtain
\begin{align*}
U' \ {}^t\left( 0, \ldots, 0, \Lambda_{[N;N'-1]}^{0,1} \right) + c' ={}^t\left( 0, \ldots, 0, \Lambda_{[N;N'-1]}^{0,1} \right),
\end{align*}
i.e. $c_j' = 0$ for $1\leq j \leq N-1$ and $c'_{[N;N'-1]} = (I_{\ell}-U'')\Lambda_{[N;N'-1]}^{0,1}$.
Since $\Lambda$ comes from a map in $\mathcal F_2$ it follows that there exists a collection of multiindices $\Delta=(\delta_1,\ldots, \delta_{\ell})$ with $|\delta_j|=2$ such that the $\ell\times \ell$-matrix $\Lambda_{[N;N'-1]}^{\Delta,0}$ (whose $(j,k)$-entry is $\Lambda_{j+N-1}^{\delta_k,0}$) is invertible. Indeed, in Definition \ref{defNondeg} we can take the sequence of multiindices $(\iota_1,\ldots, \iota_{N'})$ to be $\iota_1 = 0$, for $2 \leq k \leq N$ the multiindex $\iota_k = (0,\ldots, 0,1,0,\ldots, 0)$ (where the $1$ appears in the $(k-1)$-th entry) and $\iota_{N+j}=\delta_j$ for $1\leq j \leq \ell$.
With this choice the determinant $s$ at $0$ of the matrix in Definition \ref{defNondeg} is
\[s(0) = \det \left(\begin{array}{cccccc}
0 & 0 & \cdots & 0 & 0 & 1 \\
1 & 0 & \cdots & 0 &0 & 0 \\
0 & 1 & \cdots & 0 & 0 & 0 \\
\vdots & \vdots &\ddots & \vdots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & 0 & 0 \\
\Lambda_1^{\Delta,0} & \Lambda_{2}^{\Delta,0} & \cdots & \Lambda_{N-1}^{\Delta,0} & \Lambda_{[N;N'-1]}^{\Delta,0} & 0
\end{array}\right)
= \pm \det \Lambda_{[N;N'-1]}^{\Delta,0},\]
so that $ \det \Lambda_{[N;N'-1]}^{\Delta,0} \neq 0$. Using this fact and considering the $[N;N'-1]$ components of the second jet of \eqref{eq:free} we have:
\begin{align*}
U''\Lambda_{[N;N'-1]}^{\Delta} & =\Lambda_{[N;N'-1]}^{\Delta,0}, \ {i.e.}\ U'' = I_{\ell}, c_{[N;N'-1]}' = 0,
\end{align*}
since $\Lambda_{[N;N'-1]}^{\Delta}$ is invertible. Finally taking into account the previous computations the remaining equation in the $2$-jet of the $N'$-th component in \eqref{eq:free} becomes
\begin{align*}
-r' & = \Lambda_{N'}^{0,2} = 0,
\end{align*}
which shows that $\sigma'=\id$.
\end{proof}
\section{Proofs of the main results}
\label{proof}
In this section we are going to prove Theorems \ref{infTrivial} and \ref{suffcon2intro}.
\begin{theorem}\label{suffcon1}
Let $M,M'$ be as in Theorem \ref{jetparam}, let $A$ be defined as in (\ref{defas}), $\Lambda_0\in A$, and let $j\in J$ be such that $\Lambda_0 \in A_j$. If $\dim_{\mathbb R} \mathfrak {hol}_0 (\Phi_j(\Lambda_0)) = 0$, then $\Phi_j(\Lambda_0)$ is isolated in $\mathcal F_{k_0}$.
\end{theorem}
\begin{proof}
By Lemma \ref{dim10}, there is a neighborhood $U$ of $\Lambda_0$ in $J_0^{\mathbf t k_0}$ such that $U\cap A_j$ does not contain any manifold of positive
dimension. It follows that $U\cap A_j$ is a discrete set: by Lemma \ref{PhiHomo} we have that $\Phi_j(\Lambda_0)$ is in turn isolated in $\mathcal F_{k_0}$.
\end{proof}
Now we have all the ingredients to give a proof of Theorem \ref{infTrivial}:
\begin{proof}[Proof of Theorem \ref{infTrivial}]
Let $H$ be a finitely nondegenerate map. Then there exists an integer $k_0$ such that $H \in \mathcal F_{k_0}$ (see Definition \ref{defNondeg}). Set $\Lambda_0 = j_0^{\mathbf t k_0} H$, then there exists $j\in J$, such that $\Lambda_0 \in A_j$ and $H = \Phi_j(\Lambda_0)$. By Theorem \ref{suffcon1} $H$ is isolated in $\mathcal F_{k_0}$, but since $\mathcal F_{k_0}$ is an open set of $\mathcal H(M,M')$ (see again Definition \ref{defNondeg}), it follows that $H$ is isolated in $\mathcal H(M,M')$. In particular $H$ is locally rigid by Remark \ref{rem:equcon}.
\end{proof}
Let us now turn to Theorem \ref{suffcon2intro}.
In this setting the jet space in Theorem \ref{jetparam} can be taken to be $J_0^4$, since $k_0 = 2$ and $\mathbf t = 2$. The proof of Theorem \ref{suffcon2intro} follows from the theorem below in the same way as Theorem \ref{infTrivial} follows from Theorem \ref{suffcon1}.
\begin{theorem}\label{suffcon2}
Let $M,M'$ be as in Theorem \ref{suffcon2intro} and let $A \subset J_0^4$ be defined as in (\ref{defas}), and let $\Lambda_0\in A$. Let $j \in J$ be such that $\Lambda_0 \in A_j$ and $\dim_{\mathbb R} \mathfrak {hol}_0 (\Phi_j(\Lambda_0)) = \dim_{\mathbb R}\mathfrak{hol}_0(M') = \ell$. Then $\Phi_j(\Lambda_0)$ is locally rigid.
\end{theorem}
\begin{proof}
The result can be proved using the properness and freeness of the action of isotropies established in Lemmas \ref{proact}, \ref{proact2} and \ref{free}. This allows us to employ the local slice theorem for free and proper actions, and the conclusion can be obtained by arguing exactly as in the proof of Theorem 25 in \cite{dSLR15}.
\end{proof}
\begin{bibdiv}
\begin{biblist}
\bib{BER1}{article}{
author={Baouendi, M. S.},
author={Ebenfelt, P.},
author={Rothschild, L. P.},
title={Algebraicity of holomorphic mappings between real algebraic sets
in ${\bf C}^n$},
journal={Acta Math.},
volume={177},
date={1996},
number={2},
pages={225--273},
issn={0001-5962},
review={\MR{1440933 (99b:32030)}},
doi={10.1007/BF02392622},
}
\bib{BER2}{book}{
author={Baouendi, M. S.},
author={Ebenfelt, P.},
author={Rothschild, L. P.},
title={Real submanifolds in complex space and their mappings},
series={Princeton Mathematical Series},
volume={47},
publisher={Princeton University Press},
place={Princeton, NJ},
date={1999},
pages={xii+404},
isbn={0-691-00498-6},
review={\MR{1668103 (2000b:32066)}},
}
\bib{BER99}{article}{
author={Baouendi, M. S.},
author={Ebenfelt, P.},
author={Rothschild, L. P.},
title={Rational dependence of smooth and analytic CR mappings on their
jets},
journal={Math. Ann.},
volume={315},
date={1999},
number={2},
pages={205--249},
issn={0025-5831},
review={\MR{1721797 (2001b:32075)}},
doi={10.1007/s002080050365},
}
\bib{BV}{article}{
author={Beloshapka, V. K.},
author={Vitushkin, A. G.},
title={Estimates of the radius of convergence of power series that give
mappings of analytic hypersurfaces},
language={Russian},
journal={Izv. Akad. Nauk SSSR Ser. Mat.},
volume={45},
date={1981},
number={5},
pages={962--984, 1198},
issn={0373-2436},
review={\MR{637612 (83f:32017)}},
}
\bib{CH}{article}{
author={Cho, Chung-Ki},
author={Han, Chong-Kyu},
title={Finiteness of infinitesimal deformations of CR mappings of CR
manifolds of nondegenerate Levi form},
journal={J. Korean Math. Soc.},
volume={39},
date={2002},
number={1},
pages={91--102},
issn={0304-9914},
review={\MR{1872584 (2002j:32036)}},
doi={10.4134/JKMS.2002.39.1.091},
}
\bib{Da}{article}{
author={D'Angelo, John P.},
title={Proper holomorphic maps between balls of different dimensions},
journal={Michigan Math. J.},
volume={35},
date={1988},
number={1},
pages={83--90},
issn={0026-2285},
review={\MR{931941 (89g:32038)}},
doi={10.1307/mmj/1029003683},
}
\bib{dSLR15}{article}{
author = {{Della Sala}, Giuseppe and Lamel, Bernhard and Reiter, Michael},
doi = {10.1090/tran/6885},
fjournal = {Transactions of the American Mathematical Society},
issn = {0002-9947},
journal = {Trans. Amer. Math. Soc.},
mrclass = {32H02 (32V40)},
mrnumber = {3695846},
number = {11},
pages = {7829--7860},
title = {Local and infinitesimal rigidity of hypersurface embeddings},
url = {http://dx.doi.org/10.1090/tran/6885},
volume = {369},
year = {2017},
}
\bib{Fa2}{article}{
author={Faran, James J.},
title={Maps from the two-ball to the three-ball},
journal={Invent. Math.},
volume={68},
date={1982},
number={3},
pages={441--475},
issn={0020-9910},
review={\MR{669425 (83k:32038)}},
doi={10.1007/BF01389412},
}
\bib{Hu}{article}{
author={Huang, Xiaojun},
title={On a linearity problem for proper holomorphic maps between balls
in complex spaces of different dimensions},
journal={J. Differential Geom.},
volume={51},
date={1999},
number={1},
pages={13--33},
issn={0022-040X},
review={\MR{1703603 (2000e:32020)}},
}
\bib{HJ}{article}{
author={Huang, Xiaojun},
author={Ji, Shanyu},
title={Mapping $\bold B^n$ into $\bold B^{2n-1}$},
journal={Invent. Math.},
volume={145},
date={2001},
number={2},
pages={219--250},
issn={0020-9910},
review={\MR{1872546 (2002i:32013)}},
doi={10.1007/s002220100140},
}
\bib{Ji}{article}{
author={Ji, Shanyu},
title={A new proof for Faran's theorem on maps between $\Bbb B^2$ and
$\Bbb B^3$},
conference={
title={Recent advances in geometric analysis},
},
book={
series={Adv. Lect. Math. (ALM)},
volume={11},
publisher={Int. Press, Somerville, MA},
},
date={2010},
pages={101--127},
review={\MR{2648940 (2011c:32023)}},
}
\bib{JL}{article}{
author={Juhlin, Robert},
author={Lamel, Bernhard},
title={Automorphism groups of minimal real-analytic CR manifolds},
journal={J. Eur. Math. Soc. (JEMS)},
volume={15},
date={2013},
number={2},
pages={509--537},
issn={1435-9855},
review={\MR{3017044}},
doi={10.4171/JEMS/366},
}
\bib{La}{article}{
author={Lamel, Bernhard},
title={Holomorphic maps of real submanifolds in complex spaces of
different dimensions},
journal={Pacific J. Math.},
volume={201},
date={2001},
number={2},
pages={357--387},
issn={0030-8730},
review={\MR{1875899 (2003e:32066)}},
doi={10.2140/pjm.2001.201.357},
}
\bib{Le}{article}{
author={Lebl, Ji{\v{r}}{\'{\i}}},
title={Normal forms, Hermitian operators, and CR maps of spheres and
hyperquadrics},
journal={Michigan Math. J.},
volume={60},
date={2011},
number={3},
pages={603--628},
issn={0026-2285},
review={\MR{2861091}},
doi={10.1307/mmj/1320763051},
}
\bib{Re2}{article}{
author = {Reiter, Michael},
doi = {10.1007/s12220-015-9594-6},
fjournal = {Journal of Geometric Analysis},
issn = {1050-6926},
journal = {J. Geom. Anal.},
mrclass = {32H02 (32V30)},
mrnumber = {3472839},
number = {2},
pages = {1370--1414},
title = {Classification of holomorphic mappings of hyperquadrics from {$\Bbb{C}^2$} to {$\Bbb{C}^3$}},
url = {http://dx.doi.org/10.1007/s12220-015-9594-6},
volume = {26},
year = {2016},
}
\bib{Re3}{article}{
author = {Reiter, Michael},
doi = {10.2140/pjm.2016.280.455},
fjournal = {Pacific Journal of Mathematics},
issn = {0030-8730},
journal = {Pacific J. Math.},
mrclass = {32H02 (32V30 57S05 57S25 58D19)},
mrnumber = {3453979},
number = {2},
pages = {455--474},
title = {Topological aspects of holomorphic mappings of hyperquadrics from {$\mathbb{C}^2$} to {$\mathbb{C}^3$}},
url = {http://dx.doi.org/10.2140/pjm.2016.280.455},
volume = {280},
year = {2016},
}
\bib{We}{article}{
author={Webster, S. M.},
title={The rigidity of C-R hypersurfaces in a sphere},
journal={Indiana Univ. Math. J.},
volume={28},
date={1979},
number={3},
pages={405--416},
issn={0022-2518},
review={\MR{529673 (80d:32022)}},
doi={10.1512/iumj.1979.28.28027},
}
\end{biblist}
\end{bibdiv}
\end{document}
\begin{document}
\title{Stochastic jump processes for non-Markovian quantum dynamics}
Relaxation and decoherence phenomena in open quantum systems
\cite{Breuer2002} can often be modelled with sufficient accuracy
by a quantum Markov processes in which the open system's density
matrix is governed by a relatively simple quantum Markovian master
equation with Lindblad structure \cite{GORINI,Lindblad}. However,
non-Markovian quantum systems featuring strong memory effects play
an increasingly important role in many fields of physics such as
quantum optics~\cite{Gardiner96a}, solid state physics~\cite{SS},
and quantum information science~\cite{QIP}. Further applications
include non-Markovian extensions of quantum process tomography,
quantum control \cite{CONTROL}, and quantum transport
\cite{TRANSPORT}.
The non-Markovian quantum dynamics of open systems is
characterized by pronounced memory effects, finite revival times
and non-exponential behavior of damping and decoherence, resulting
from long-range correlation functions and from the dynamical
relevance of large correlations and entanglement in the initial
state. As a consequence the theoretical treatment of non-Markovian
quantum dynamics is generally extremely demanding, both from the
analytical and from the computational point of view~\cite{HPB08}.
Even if one is able to derive an appropriate non-Markovian master
equation or some other mathematical formulation of the dynamics,
the numerical simulation of such processes turns out to be a very
difficult and time-consuming task, especially for high-dimensional
Hilbert spaces.
From classical physics it is known that Monte Carlo techniques
provide efficient tools for the numerical simulation of complex
systems. This fact was one of the motivations to introduce the
Monte Carlo wave function technique
\cite{DCM1992,Dum1992,Carmichael1993} which provides efficient
quantum simulation techniques in the regime of Markovian dynamics.
Several generalizations of the Monte Carlo approach to
non-Markovian dynamics have been developed which are based on
suitable extensions of the underlying reduced system's Hilbert
space \cite{IMAMOGLU,Garraway1997,Breuer99a,Breuer2004}.
Recently, an efficient alternative simulation algorithm for the
treatment of non-Markovian open system dynamics has been proposed
\cite{Piilo2007} that does not require any extension of the state
space. The purpose of the present paper is to develop a
mathematical formulation of this algorithm in terms of a
stochastic Schr\"odinger equation (SSE) in the open system's
Hilbert space. We demonstrate that this formulation gives rise to
a new type of piecewise deterministic quantum jump process (PDP).
Quantum master equations are often derived from an underlying
microscopic theory by employing some approximation scheme. An
appropriate scheme is the time-convolutionless (TCL) projection
operator technique which leads to a time-local first-order
differential equation for the density matrix
\cite{KUBO63,ROYER72,SHIBATA2}. It will be shown that TCL
master equations allow a stochastic unravelling of the form
developed here. Generally, the use of a certain approximation
technique may lead to violations of the positivity of the master
equation. We demonstrate that positivity violations are closely
linked to singularities of the SSE at which the stochastic process
breaks down. Hence, a great advantage of the present stochastic
formulation consists in the fact that it naturally prevents the
generation of unphysical solutions and that it could thus lead to
important insights into the structural characterization of
positive non-Markovian evolution equations.
The most general structure of the TCL master equation is given by
\footnote{This statement is a direct consequence of Lemma 2.3 of
Ref. \cite{GORINI}; for various alternative forms see
\cite{Breuer2002,Breuer2004}.}
\begin{equation} \label{MASTER-EQ}
\frac{d}{dt}\rho(t) = {\mathcal{L}}_t\rho(t)
= -i[H(t),\rho(t)] + {\mathcal{D}}_t\rho(t),
\end{equation}
where
\begin{eqnarray*}
{\mathcal{D}}_t\rho = \sum_m \Delta_m(t)
\left[ C_m(t)\rho C_m^{\dagger}(t)
- \frac{1}{2} \left\{C_m^{\dagger}(t)C_m(t),\rho \right\}\right].
\end{eqnarray*}
The time-dependent generator ${\mathcal{L}}_t$ consists of a
commutator term describing the unitary part of the evolution and a
dissipator ${\mathcal{D}}_t$. The latter involves a summation over
the various decay channels labelled by $m$ with corresponding
time-dependent decay rates $\Delta_m(t)$ and arbitrary
time-dependent system operators $C_m(t)$.
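As a concrete instance (an illustration on our part, not tied to a
specific microscopic model), a two-level system with a single channel
$C_1(t)=\sigma_-$ and rate $\Delta_1(t)$ gives
\begin{eqnarray*}
\frac{d}{dt}\rho(t) = -i[H(t),\rho(t)]
+ \Delta_1(t) \left[ \sigma_-\rho(t)\sigma_+
- \frac{1}{2} \left\{\sigma_+\sigma_-,\rho(t) \right\}\right],
\end{eqnarray*}
i.~e., Eq.~(\ref{MASTER-EQ}) with a single summand in the dissipator.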
In the simplest case the rates $\Delta_m$ as well as the
Hamiltonian $H$ and the operators $C_m$ are assumed to be
time-independent. Equation (\ref{MASTER-EQ}) then represents a
master equation in Lindblad form \cite{GORINI,Lindblad} which
generates a semigroup of completely positive dynamical maps known
as quantum Markov process. For arbitrary time-dependent operators
$H(t)$ and $C_m(t)$, and for $\Delta_m(t) \geq 0$ the generator of
the master equation (\ref{MASTER-EQ}) is still in Lindblad form
for each fixed time $t$ and leads to a two-parameter family of
completely positive dynamical transformations \cite{SPOHN} which
may be referred to as a time-dependent quantum Markov process
\cite{EISI}. An entirely different situation emerges if one or
several of the $\Delta_m(t)$ become temporarily negative, which
expresses the presence of strong memory effects in the reduced
system dynamics. The process is then said to be non-Markovian. Of
course, the physical interpretation of the master equation
requires that it preserves the positivity of the density matrix
$\rho$. The formulation of general mathematical and physical
conditions that guarantee the preservation of positivity is,
however, an unsolved problem of central importance in the field of
non-Markovian quantum dynamics.
We emphasize that the emergence of temporarily negative
$\Delta_m(t)$ in the master equation is a natural phenomenon in
the non-Markovian regime which does not, in general, imply that the
complete positivity of the corresponding quantum dynamical map is
violated. An example is discussed in Ref.~\cite{Breuer2002} where
the exact non-Markovian master equation of an analytically
solvable model is constructed.
The fundamental difference between Markovian and time-dependent
Markovian processes on the one hand and non-Markovian processes on
the other hand can also be seen very clearly if one attempts to
apply the standard stochastic formulations to the master equation
(\ref{MASTER-EQ}). For both a Markovian and a time-dependent
Markovian dynamics the standard unravelling through a stochastic
quantum jump process can indeed be applied. This means that in
both cases one can formulate an appropriate PDP for the reduced
system's state vector $|\psi(t)\rangle$ in such a way that the
expectation value
\begin{equation} \label{EXPEC}
\rho(t) = {\mathrm{E}}[|\psi(t)\rangle\langle\psi(t)|]
= \int d\psi \, P[|\psi\rangle,t] \, |\psi\rangle\langle\psi|
\end{equation}
satisfies the master equation (\ref{MASTER-EQ}). Here, we have
expressed the expectation value ${\mathrm{E}}$ in terms of an
integration over the Hilbert space of states of the open quantum
system
with the unitarily invariant volume element
$d\psi \equiv D\psi D\psi^*$,
and introduced the corresponding
probability density functional $P[|\psi\rangle,t]$ which is
defined as the probability density of finding at time $t$ the
state vector $|\psi\rangle$ \cite{Breuer2002}. However, the
essential feature of a non-Markovian dynamics is the temporary
appearance of negative decay rates. The use of the standard
unravellings unavoidably leads in this case to negative jump
probabilities, which clearly indicates the decisive difference
between Markovian and non-Markovian quantum processes.
To account for the sign of the decay rates we decompose
$\Delta_m(t)$ into a positive and a negative part defined by
$\Delta_m^{\pm}(t)=\frac{1}{2}\big[|\Delta_m(t)|\pm\Delta_m(t)\big]$.
The master equation (\ref{MASTER-EQ}) can then be written in the
form
\begin{eqnarray} \label{MASTER-EQ2}
\frac{d}{dt}\rho &=& -i[H(t),\rho] \\
&+& \sum_k\Delta_k^+(t) \left[ C_k(t)\rho C_k^{\dagger}(t)
- \frac{1}{2} \left\{C_k^{\dagger}(t)C_k(t),\rho \right\}
\right] \nonumber \\
&-& \sum_l \Delta_l^-(t)\left[ C_l(t)\rho C_l^{\dagger}(t)
- \frac{1}{2} \left\{C_l^{\dagger}(t)C_l(t),\rho \right\}
\right]. \nonumber
\end{eqnarray}
In order to better distinguish the positive and the negative
channels we label the former by an index $k$ and the latter by an
index $l$. Note that $\Delta_k^+(t)\geq 0$ and $\Delta_l^-(t)\geq
0$ and that for Markovian or time-dependent Markovian processes we
have $\Delta_l^-(t)=0$.
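For illustration, a single channel with the hypothetical rate
$\Delta(t)=\gamma\cos(\nu t)$, $\gamma>0$, has
$\Delta^+(t)=\gamma\cos(\nu t)$ and $\Delta^-(t)=0$ whenever
$\cos(\nu t)\geq 0$, while $\Delta^+(t)=0$ and
$\Delta^-(t)=\gamma|\cos(\nu t)|$ whenever $\cos(\nu t)<0$; at each
instant at most one of the two channels is open.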
We can now formulate the central result of this paper. Namely, the
stochastic Schr\"odinger equation given by
\begin{eqnarray} \label{SSE}
d|\psi(t)\rangle &=& -i G(t) |\psi(t)\rangle dt \nonumber \\
&+&\sum_k\left[\frac{C_k(t)|\psi(t)\rangle}{||C_k(t)|\psi(t)\rangle||}
- |\psi(t)\rangle \right] dN_k^+(t) \nonumber \\
&+&\sum_l \int d\psi' \left[ |\psi'\rangle - |\psi(t)\rangle \right]
dN_{l,\psi'}^-(t)
\end{eqnarray}
yields an unravelling of the master equation (\ref{MASTER-EQ2})
through a non-Markovian piecewise deterministic process. The first
term on the r.h.s. of Eq.~(\ref{SSE}) represents the normalized
deterministic drift of the process which is generated by
\begin{eqnarray*}
G(t) &=& H(t) - \frac{i}{2} \sum_m \Delta_m(t) \\
&& \times \Big[ C_m^{\dagger}(t)C_m(t)
- \langle\psi(t)|C_m^{\dagger}(t)C_m(t)|\psi(t)\rangle \Big].
\end{eqnarray*}
The instantaneous and random quantum jumps are described by the
second and the third line of Eq.~(\ref{SSE}). The quantities
$dN_k^+(t)$ and $dN_{l,\psi'}^-(t)$ are random Poisson increments
satisfying the relations
\begin{eqnarray} \label{PROPS}
dN_k^+(t) dN_{k'}^+(t) &=& \delta_{kk'}dN_k^+(t), \nonumber \\
dN_{l,\psi'}^-(t) dN_{l',\psi''}^-(t)
&=& \delta_{ll'} \delta\big(|\psi'\rangle-|\psi''\rangle\big)
dN_{l,\psi'}^-(t), \nonumber \\
dN_k^+(t) d N_{l,\psi'}^-(t)& =& 0,
\end{eqnarray}
and having expectation values
\begin{eqnarray}
{\mathrm{E}}[dN_k^+(t)] &=&
\Delta_k^+(t)\langle\psi(t)|C_k^{\dagger}(t)C_k(t)|\psi(t)\rangle dt,
\label{EXPEC1} \\
{\mathrm{E}}[dN_{l,\psi'}^-(t)] &=& \Delta_l^-(t)
\frac{P\left[|\psi'\rangle, t\right]}{P\left[|\psi\rangle, t\right]}
\langle\psi'|C_l^{\dagger}(t)C_l(t)|\psi'\rangle \nonumber \\
&& \times \; \delta\left( |\psi(t)\rangle -
\frac{C_l(t) |\psi'\rangle}{||C_l(t) |\psi'\rangle||}\right) dt.
\label{EXPEC2}
\end{eqnarray}
Here, the delta functional on Hilbert space is defined by $\int
d\psi \delta(|\psi\rangle-|\psi_0\rangle) F[|\psi\rangle] =
F[|\psi_0\rangle]$, where $F[|\psi\rangle]$ is an arbitrary smooth
functional.
The physical meaning of the properties in (\ref{PROPS}) is that
there cannot be two or more jumps simultaneously in a given
realization of the process and at a given moment of time. Suppose
first that the dynamics is Markovian or time-dependent Markovian.
We then have $dN_{l,\psi'}^-(t)=0$ and the SSE (\ref{SSE}) reduces
to the stochastic differential equation of the standard PDP
unravelling. According to the second line of Eq.~(\ref{SSE}) the
quantum jumps are represented by an instantaneous change of the
state vector,
\begin{eqnarray*}
|\psi(t)\rangle \longrightarrow
\frac{C_k(t)|\psi(t)\rangle}{||C_k(t)|\psi(t)\rangle||},
\end{eqnarray*}
and by virtue of Eq.~(\ref{EXPEC1}) this jump occurs at the rate
\begin{equation} \label{RATE1}
\Gamma_+ = \Delta_k^+(t)\langle\psi(t)|C_k^{\dagger}(t)C_k(t)|\psi(t)\rangle.
\end{equation}
The term in the third line of the SSE (\ref{SSE}) describes the
negative channels which are crucial for the unravelling of
non-Markovian dynamics. The corresponding jumps are given by
instantaneous transitions from the actual state $|\psi(t)\rangle$
to some state $|\psi'\rangle$. To account for all possible target
states of the negative channel jumps we perform in this term an
integration over $|\psi'\rangle$. According to the delta
functional in Eq.~(\ref{EXPEC2}) the target state $|\psi'\rangle$
of the possible jumps is related to the source state
$|\psi\rangle$ by
$|\psi\rangle=C_l|\psi'\rangle/||C_l|\psi'\rangle||$. Hence,
negative jumps correspond to a reversal of certain positive jumps,
obtained by interchanging the role of source and target state. The
quantity $dN_{l,\psi'}^-(t)$ is the Poisson increment for the
negative jumps via channel $l$. From Eq.~(\ref{EXPEC2}) we infer
that the state vector $|\psi\rangle$ can perform a jump to a state
vector in some volume element $d\psi'$ of Hilbert space around
$|\psi'\rangle$ with the rate (for simplicity we omit the time
arguments)
\begin{eqnarray*}
\Gamma_- =
\Delta_l^- \frac{P\left[|\psi'\rangle\right]d\psi'}{P\left[|\psi\rangle\right]d\psi}
\langle\psi'|C_l^{\dagger}C_l|\psi'\rangle
\delta\left( |\psi\rangle - \frac{C_l |\psi'\rangle}{||C_l |\psi'\rangle||}\right)
d\psi.
\end{eqnarray*}
In an ensemble of realizations of the process the quantity
$P[|\psi'\rangle]d\psi'/P[|\psi\rangle]d\psi$ can be interpreted
as $N'/N$, where $N'$ is the number of realizations in volume
element $d\psi'$ and $N$ is the number of realizations in element
$d\psi$. Then we can identify $\delta \left( |\psi\rangle - C_l
|\psi'\rangle / ||C_l|\psi'\rangle||\right) d\psi = 1$. Hence, the
negative channel jumps from $|\psi\rangle$ to $|\psi'\rangle$
occur at the rate
\begin{equation} \label{RATE2}
\Gamma_- = \Delta_l^- \frac{N'}{N}
\langle\psi'|C_l^{\dagger}C_l|\psi'\rangle.
\end{equation}
The comparison with the rate (\ref{RATE1}) for positive jumps
shows two crucial differences. First, the rate for the positive
jumps is proportional to the expectation value of
$C_k^{\dagger}C_k$ in the source state, while the rate for the
negative jumps is proportional to the expectation value of
$C_l^{\dagger}C_l$ in the target state. Hence, again the roles of
source and target state have been interchanged. Second, the
negative jump rates carry an additional factor of $N'/N$, the
ratio of the number of ensemble members in the target state to the
number of members in the source state. Note that due to the
presence of this factor the SSE (\ref{SSE}) is not a stochastic
differential equation in the usual sense because the expectation
values of the random increments (\ref{EXPEC2}) depend explicitly
on the full probability density.
To determine these
increments at a certain time $t$ one has to know the probability
density $P[|\psi\rangle,t]$. Within a numerical simulation this is
achieved by propagating simultaneously an ensemble of realizations
from which $P[|\psi\rangle,t]$ can then be estimated
self-consistently.
As an important consequence and as a result of
the non-Markovian character of the dynamics we thus find certain
correlations between different realizations of the process.
It may seem at first sight that the correlations between
the realizations require that a huge number of realizations of the
process has to be generated simultaneously in order to obtain a
sufficiently accurate estimate for the probability density.
However, when the realizations of the process are generated on a
computer there is no need to have $N_i$ identical copies of the
state $|\psi_i\rangle$ to obtain $P\left[|\psi_i\rangle\right]$.
It is sufficient to have only a single copy of $|\psi_i\rangle$
and to keep track of the corresponding integer number $N_i$. This
allows one to optimize the numerical implementation of the process and
to perform simulations in an efficient way~\cite{Piilo2007}.
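To make this bookkeeping concrete, the following Python sketch (our
illustration only, not the implementation of Ref.~\cite{Piilo2007};
it assumes a two-level system, a single channel $C=\sigma_-$, $H=0$,
and a hypothetical rate $\Delta(t)=\cos t$) propagates such an
ensemble with integer occupation numbers:
\begin{verbatim}
# Illustrative sketch only; not the implementation of Ref. [Piilo2007].
# Assumptions: two-level system, single channel C = sigma_-, H = 0,
# hypothetical rate Delta(t) = cos(t).
import numpy as np

C = np.array([[0., 0.], [1., 0.]], dtype=complex)  # sigma_-: |e> -> |g>
CdC = C.conj().T @ C                                # |e><e|

def Delta(t):
    return np.cos(t)   # becomes temporarily negative

def simulate(dt=1e-3, t_max=5.0, N_total=10000):
    # Two groups of realizations: the deterministically evolving state
    # psi (N1 members) and the jump target |g> (N0 members).
    psi = np.array([1., 0.], dtype=complex)        # start in |e>
    N1, N0 = N_total, 0
    rho_ee = []
    for step in range(int(t_max / dt)):
        d = Delta(step * dt)
        # deterministic drift generated by G(t) (here H = 0)
        expect = (psi.conj() @ CdC @ psi).real
        psi = psi - 0.5 * d * (CdC @ psi - expect * psi) * dt
        psi = psi / np.linalg.norm(psi)
        if d >= 0:
            # positive channel: members jump psi -> |g>
            p = d * (psi.conj() @ CdC @ psi).real * dt
            n = np.random.binomial(N1, min(p, 1.0))
            N1, N0 = N1 - n, N0 + n
        else:
            # negative channel: reverse jumps |g> -> psi; the rate carries
            # the ratio N1/N0 of target to source occupation numbers
            w = (-d) * (psi.conj() @ CdC @ psi).real
            if w > 0 and N0 == 0:
                raise RuntimeError("breakdown: source state unoccupied")
            if N0 > 0:
                n = np.random.binomial(N0, min(w * (N1 / N0) * dt, 1.0))
                N0, N1 = N0 - n, N1 + n
        rho_ee.append(N1 * abs(psi[0])**2 / N_total)  # E[<e|psi><psi|e>]
    return np.array(rho_ee)
\end{verbatim}
The \texttt{RuntimeError} branch corresponds to the breakdown
scenario analyzed below, in which a negative channel is open while
the source state of the reverse jump is unoccupied.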
To prove that the expectation value (\ref{EXPEC}) for the process
obtained from the SSE (\ref{SSE}) indeed satisfies the master
equation (\ref{MASTER-EQ2}) we start from
\begin{eqnarray*}
d(|\psi\rangle\langle\psi|) =
|d\psi\rangle\langle\psi| + |\psi\rangle\langle d\psi|
+ |d\psi\rangle\langle d\psi|.
\end{eqnarray*}
Taking the expectation value of this relation, expressing the
increments $|d\psi\rangle$ through the SSE (\ref{SSE}), and using
the properties (\ref{PROPS}) we find
\begin{eqnarray*}
d\rho &=& -i[H,\rho]dt - \sum_m\frac{\Delta_m}{2} \left\{C_m^{\dagger}C_m,\rho \right\} dt
\nonumber \\
&+& \sum_m\Delta_m \; {\mathrm{E}}\left[ \langle\psi|C_m^{\dagger}C_m|\psi\rangle
|\psi\rangle\langle\psi|\right] dt \nonumber \\
&+& {\mathrm{E}}\Bigg[ \sum_k\left( \frac{C_k|\psi\rangle\langle\psi|C_k^{\dagger}}
{||C_k|\psi\rangle||^2} - |\psi\rangle\langle\psi|
\right) dN_k^+ \Bigg] \\
&+&{\mathrm{E}}\left[ \sum_{l} \int d\psi' \left(
|\psi'\rangle\langle\psi'| - |\psi\rangle\langle\psi| \right)
dN_{l,\psi'}^- \right].
\end{eqnarray*}
Using here the expectation values of the increments from
Eq.~(\ref{EXPEC2}) one immediately obtains the required master
equation (\ref{MASTER-EQ2}). Hence, we have proven the validity of
SSE (\ref{SSE}) which is the central result of the paper.
It is worth mentioning that while there exist
stochastic Schr\"odinger equations of diffusion type for
non-Markovian systems~\cite{Strunz1,Strunz2}, to the best of our
knowledge our SSE is the first representation through a stochastic
quantum jump process in the reduced system's Hilbert space.
It is important to note that the expectation value for
$dN_{l,\psi'}^-(t)$ in Eq.~(\ref{EXPEC2}) is not well defined when
the denominator becomes equal to zero, i.~e. $P\left[|\psi\rangle,
t\right]=0$, where $|\psi\rangle =C_l |\psi'\rangle / ||C_l
|\psi'\rangle||$, or alternatively $N=0$ in Eq.~(\ref{RATE2}). The
stochastic process breaks down at this point since there exists an
open negative channel but there are no realizations which are in
the source state of the corresponding jump. The formulation of
general conditions on the structure of the master equation
(\ref{MASTER-EQ}) that ensure the absence of such singularities in
the corresponding SSE (\ref{SSE}) seems to be a difficult problem.
However, it is quite easy to demonstrate that a breakdown of the
process necessarily takes place if the master equation violates
positivity. In fact, within the stochastic formulation developed
here the density matrix $\rho(t)$ of the open system is given by
the expectation value (\ref{EXPEC}) which represents, by the very
construction, a positive matrix. Therefore, if the master equation
violates positivity at some point of time the stochastic dynamics
must necessarily cease to exist. The present method thus signals
the point of violation of positivity of the density matrix.
To prove this statement we denote the state space, i.~e., the set
of all density matrices of the open system by ${\mathcal{S}}$. Let
us assume that the master equation (\ref{MASTER-EQ}) violates
positivity. Hence, there is an initial state $\rho(0)$ and a
corresponding solution $\rho(t)$ of the master equation which
leaves the state space ${\mathcal{S}}$ after some point of time
$t=t_0$. At this point $\rho(t_0)=\rho_0$ reaches the boundary of
${\mathcal{S}}$. Let
$\lambda(t)=\langle\varphi(t)|\rho(t)|\varphi(t)\rangle$ be the
lowest eigenvalue of $\rho(t)$ with corresponding eigenvector
$|\varphi(t)\rangle$. Lying on the boundary, $\rho_0$ must have at
least one zero eigenvalue with corresponding eigenvector
$|\varphi_0\rangle=|\varphi(t_0)\rangle$, i.~e., we have
$\lambda(t_0)=\langle\varphi_0|\rho_0|\varphi_0\rangle=0$. Hence,
an appropriate condition implying the violation of positivity is
given by the inequality $\dot{\lambda}(t_0)<0$~\footnote{It is
assumed here that $\dot{\lambda}(t_0)$ does not vanish, which is
obviously the generic case.}. The Hellmann-Feynman theorem yields
\begin{eqnarray*}
\dot{\lambda}(t_0) = \langle\varphi_0|\dot{\rho}(t_0)|\varphi_0\rangle
= \langle\varphi_0|{\mathcal{L}}_{t_0}\rho_0|\varphi_0\rangle,
\end{eqnarray*}
and we find the following condition for the violation of positivity
\begin{equation} \label{COND}
\langle\varphi_0|{\mathcal{L}}_{t_0} \rho_0 |\varphi_0\rangle < 0.
\end{equation}
Consider now an ensemble representation of $\rho_0$ that is
generated through the SSE (\ref{SSE}): $\rho_0 = \sum_i p_i
|\psi_i\rangle\langle\psi_i| $ with
$\langle\psi_i|\psi_i\rangle=1$, $p_i > 0$ and $\sum_i p_i=1$. We
then have
\begin{eqnarray*}
\langle\varphi_0|\rho_0|\varphi_0\rangle = \sum_i p_i
|\langle\varphi_0|\psi_i\rangle|^2 = 0.
\end{eqnarray*}
It follows that $|\varphi_0\rangle$ is orthogonal to all members
of the ensemble, i.~e., $\langle\varphi_0|\psi_i\rangle=0$.
Evaluating condition (\ref{COND}) one therefore finds
\begin{eqnarray*}
\langle\varphi_0|{\mathcal{L}}_{t_0} \rho_0 |\varphi_0\rangle
= \sum_{m,i} p_i \Delta_m(t_0) |\langle\varphi_0|C_m|\psi_i\rangle|^2
< 0.
\end{eqnarray*}
Hence, there must exist indices $m$ and $i$ such that
$\Delta_m(t_0)<0$ and $\langle\varphi_0|C_m|\psi_i\rangle\neq 0$.
It follows that $C_m|\psi_i\rangle$ has a nonzero component in the
direction of $|\varphi_0\rangle$ and that the state vector
$|\psi\rangle=C_m|\psi_i\rangle/||C_m|\psi_i\rangle||$ does not
belong to the ensemble $\{|\psi_i\rangle\}$. In other words,
$P\left[|\psi\rangle,t_0 \right]=0$. We conclude that the point of
violation of positivity implies the breakdown of the SSE
(\ref{SSE}) because there exists an open channel with negative
rate while the probability of being in the source state of the
corresponding jump vanishes.
Of course, the formal mathematical solution of the master equation
(\ref{MASTER-EQ}) does not halt at the point of time when the
positivity is lost: the dynamics continues to reduce the occupation
probability of a given state beyond the zero-point. However, the
evolution given by the SSE (\ref{SSE}) stops at the zero-point
since the number of realizations in a given state cannot, by
construction, have negative values. Thus, the stochastic process
developed here identifies the point of time where the master
equation loses the positivity, preventing excursions to unphysical
solutions. While the stochastic unravelling of the
master equation is in general not unique, we expect that the
connection between a breakdown of the positivity and a singularity
of the SSE holds for all stochastic representations of the form
constructed here.
In conclusion, we have derived a piecewise deterministic process
which describes the dynamics of non-Markovian systems. The
stochastic Schr\"odinger equation constructed reveals the
fundamental mathematical and physical difference between
time-local master equations which appear with positive and with
negative rates. The corresponding Poisson increments have a
distinct structure and the negative rate process clearly shows how
non-Markovian effects are manifested.
Markovian and non-Markovian processes are widely used for the
modelling of dynamical systems in many areas of physics, chemistry
and biophysics. Our results indicate how to treat master equations
with negative rates and memory effects also for classical systems.
In fact, the standard simulation algorithm for a classical
Markovian master equation corresponding to the stochastic wave
function method is known as the Gillespie algorithm \cite{Gillespie}.
The method proposed here could therefore lead to the development
of an efficient non-Markovian generalization of the Gillespie
algorithm and thus opens the way to many further studies in the
dynamics of complex systems.
\acknowledgments
One of us (HPB) gratefully acknowledges financial support
within a Fellowship of the Hanse-Wissenschaftskolleg,
Delmenhorst.
This work has also been supported by the Academy of
Finland (Project No.~115982) and the Magnus Ehrnrooth Foundation.
We thank K.-A. Suominen, S. Maniscalco, and K.
H\"ark\"onen for stimulating discussions.
\end{document}
\begin{document}
\title{The symbolic defect of an ideal}
\author{Federico Galetto}
\address{Department of Mathematics and Statistics\\
McMaster University, Hamilton, ON, L8S 4L8}
\email{[email protected]}
\author{Anthony V. Geramita${}^\dag$}
\thanks{${}^\dag$ Deceased, June 22, 2016.}
\author[Y.S. SHIN]{Yong-Su Shin${}^*$}
\address{Department of Mathematics,
Sungshin Women's University, Seoul, Korea, 136-742}
\email{[email protected] }
\thanks{${}^*$This research was supported by a grant
from Sungshin Women's University. Corresponding author}
\author{Adam Van Tuyl${}^{**}$}
\address{Department of Mathematics and Statistics\\
McMaster University, Hamilton, ON, L8S 4L8}
\email{[email protected]}
\thanks{${}^{**}$This research was supported
in part by NSERC Discovery Grant 2014-03898.}
\keywords{symbolic powers, regular powers, points, star configurations}
\subjclass[2010]{13A15, 14M05}
\begin{abstract}
Let $I$ be a homogeneous ideal of $\Bbbk[x_0,\ldots,x_n]$. To compare
$I^{(m)}$, the $m$-th symbolic power of $I$, with $I^m$, the regular
$m$-th power, we introduce the $m$-th symbolic defect of $I$,
denoted $\sdef(I,m)$. Precisely, $\sdef(I,m)$ is
the minimal number of generators of the $R$-module $I^{(m)}/I^m$, or
equivalently, the minimal number of generators one must add to $I^m$
to make $I^{(m)}$. In this paper, we take the first step towards
understanding the symbolic defect by considering the case that $I$
is either the defining ideal of a star configuration or the ideal
associated to a finite set of points in $\mathbb{P}^2$. We are
specifically interested in identifying ideals $I$ with
$\sdef(I,2) = 1$.
\end{abstract}
\maketitle
\section{Introduction}\label{sec:intro}
Let $I$ be a homogeneous ideal of $R = \Bbbk[x_0,\ldots,x_n]$. For any
positive integer $m$, let $I^{(m)}$ denote the $m$-th symbolic power
of $I$. In general, we have $I^m \subseteq I^{(m)}$, but equality may
fail. During the last decade, there has been interest in the
so-called ``ideal containment problem,'' that is, for a fixed integer $m$,
find the smallest integer $r$ such that $I^{(r)} \subseteq I^m$. The
papers \cite{BH,BH2,D+,DHST,ELS,HH,HoHu,SS} are a small subset
of the articles on this problem.
In this note, we are also interested in comparing regular and symbolic
powers of ideals, but we wish to investigate a relatively unexplored direction
by measuring the ``difference'' between the two ideals $I^m$ and $I^{(m)}$.
More precisely, because $I^m \subseteq I^{(m)}$, the quotient $I^{(m)}/I^m$
is a finitely generated graded $R$-module.
For any $R$-module $M$, let $\mu(M)$ denote the number of minimal generators
of $M$. We then define the
{\it $m$-th symbolic defect of $I$} to be the invariant
\[\sdef(I,m) := \mu(I^{(m)}/I^m),\]
that is, the minimal number of generators of $I^{(m)}/I^m$.
We will call the sequence
\[\left\{\sdef(I,m)\right\}_{m \in \mathbb{N}}\]
the {\it symbolic defect sequence}.
Note that $\sdef(I,m)$ counts the number of generators we need
to add to $I^m$ to make $I^{(m)}$; this invariant can be viewed
as a measure of the failure of $I^m$ to
equal $I^{(m)}$. For example, $\sdef(I,m) = 0$ if and only if
$I^m = I^{(m)}$.
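For a quick first example (anticipating the star configurations of
Section \ref{sec:symbolic-square-star}), let
$I = \langle x_0x_1, x_0x_2, x_1x_2\rangle \subseteq \Bbbk[x_0,x_1,x_2]$
be the ideal of the three coordinate points of $\mathbb{P}^2$. A
direct computation gives $I^{(2)} = I^2 + \langle x_0x_1x_2\rangle$,
and $x_0x_1x_2 \notin I^2$ since $I^2$ is generated in degree four, so
$\sdef(I,2) = 1$.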
We know of only a few papers that have studied the module
$I^{(m)}/I^m$. This list includes: Arsie and Vatne's paper \cite{AV}
which considers the Hilbert function of $I^{(m)}/I^m$; Huneke's work
\cite{Huneke} which considers $P^{(2)}/P^2$ when $P$ is a height two
prime ideal in a local ring of dimension three; Herzog's paper
\cite{Herzog} which studies the same family of ideals as Huneke using
tools from homological algebra; Herzog and Ulrich's paper \cite{HeUl}
and Vasconcelos's paper \cite{Vas} which also consider a similar
situation to Huneke, but with the assumption that $P$ is generated by
three elements; and Schenzel's work \cite{Shen:2} which describes some
families of prime ideals $P$ of monomial curves with the property that
$P^{(2)}/P^2$ is cyclic (see the comment after \cite[Theorem
2]{Shen:2}).
The introduction of the symbolic defect sequence raises a number of
interesting questions. For example, how large can
$\sdef(I,m)$ be? How does $\sdef(I,m)$ compare to
$\sdef(I,m+1)$? And so on. In some sense, these questions are
difficult since one needs to know both $I^{(m)}$ and $I^m$. To gain
some initial insight into the behavior of the symbolic defect
sequence, in this paper we focus on two cases: (1) $I$ is the defining
ideal of a star configuration, and (2) $I$ is the homogeneous ideal
associated to a set of points in $\mathbb{P}^2$. In both cases, we
can tap into the larger body of knowledge about these ideals.
To provide some additional focus to our paper, we consider the
following question:
\begin{ques}\label{mainquestion}
What homogeneous ideals $I$ of $\Bbbk[x_0,\ldots,x_n]$ have
$\sdef(I,2) = 1$?
\end{ques}
\noindent
Because one always has $\sdef(I,1) = 0$, Question
\ref{mainquestion} is in some sense the first non-trivial case to
consider. Note that when $\sdef(I,2) = 1$, then from an
algebraic point of view, the ideal $I^2$ is almost equal to $I^{(2)}$
except that it is missing exactly one generator.
We now give an outline of the results of this paper. In Section 2, we
provide the relevant background, and recall some useful tools about
powers of ideals and their symbolic powers.
In Sections 3 through 5, we study $\sdef(I,m)$ when $I$
defines a star configuration. Note that in this paper, when we refer
to star configurations, the forms that define the star configurations
are forms of any degree, not necessarily linear, as is required in
other papers. Our main strategy to compute $\sdef(I,m)$ is to
find an ideal $J$ such that $I^{(m)} = J + I^m$, and then to show that
all the minimal generators of $J$ are required. The recent
techniques using matroid ideals developed by Geramita, Harbourne,
Migliore, and Nagel \cite{GHMN} will play a key role in our proofs.
Our results will imply a similar decomposition found by
Lampa-Baczy{\'n}ska and Malara \cite{LBM} which considers only star
configurations defined using monomial ideals.
In Section 3 we also compute some values of $\sdef(I,m)$ with
$m \geqslant 3$ for some special families of star configurations. Section
4 complements Section 3 by showing that under some extra hypotheses,
$\sdef(I,2)=1$ can force a geometric condition. Specifically,
we show that if ${\mathbb X}$ is a set of points in $\mathbb{P}^2$ with a
linear graded resolution, and if $\sdef(I,2) =1$, then $I$
must be the ideal of a linear star configuration of points in
$\mathbb{P}^2$. In Section 5 we apply our results of Section 3 to
compute the graded minimal free resolution of $I^{(2)}$ when $I$
defines a star configuration of codimension two in $\mathbb{P}^n$.
This result gives a partial generalization of a result of Geramita,
Harbourne, and Migliore \cite{GHM} (see Remark \ref{generalize}).
In Section 6, we turn our attention to general sets of points in
$\mathbb{P}^2$. Our main result is a classification of the general
sets of points whose defining ideals $I_{\mathbb X}$ satisfy
$\sdef(I_{\mathbb X},2) = 1$.
\begin{thm_nonumber}[Theorem \ref{generalpoints}]
Let ${\mathbb X}$ be a set of $s$ general points in $\mathbb{P}^2$ with
defining ideal $I_{\mathbb X}$. Then
\begin{enumerate}
\item[$(i)$] $\sdef(I_{\mathbb X},2) = 0$ if and only if $s = 1,2$ or
$4$.
\item[$(ii)$] $\sdef(I_{\mathbb X},2) = 1$ if and only if
$s =3, 5, 7$, or $8$.
\item[$(iii)$] $\sdef(I_{\mathbb X},2) > 1$ if and only if $s=6$ or
$s \geqslantqslant 9$.
\end{enumerate}
\end{thm_nonumber}
\noindent
Our proof relies on a deep result of Alexander-Hirschowitz \cite{AH:1}
on the Hilbert functions of general double points, and some results of
Catalisano \cite{C}, Harbourne \cite{H1}, and Id\`a \cite{I} on the
graded minimal free resolutions of double points. We end this paper
with an example to show that the symbolic defect sequence is not
monotonic by computing some values of $\sdef(I_{\mathbb X},m)$ when
${\mathbb X}$ is eight general points in $\mathbb{P}^2$ (see Example
\ref{8points}).
\noindent {\bf Acknowledgments.} Work on this project began in August
2015 when Y.S.~Shin and A.~Van Tuyl visited A.V.~(Tony) Geramita at
his house in Kingston, ON. F.~Galetto joined this project in late
September of the same year. Tony Geramita, however, became quite ill
in late December 2015 while in Vancouver, BC, and after a six month
battle with his illness, he passed away on June 22, 2016 in Kingston.
During his illness, we (the remaining co-authors) kept Tony up-to-date
of the status on the project, and when his health permitted, he would
contribute ideas to this paper. He was looking forward to returning
to Kingston, and turning his attention to this paper. Unfortunately,
this was not to be. Although Tony was not able to see the final
version of this paper, we feel that his contributions warrant an
authorship. Those familiar with Tony's work will hopefully
recognize Tony's interests and contributions to the
topics in this paper. Tony is greatly missed.
We would also like to thank Brian Harbourne and Alexandra Seceleanu
for their helpful comments. Part of this paper was written at the
Fields Institute; the authors thank the institute for its hospitality.
Finally, we would like to thank the referees for their helpful suggestions
and corrections.
\section{Background results}\label{sec:background}
We review the required background. We continue to use the notation of
the introduction. Let $I$ be a homogeneous ideal of
$R = \Bbbk[x_0,\ldots,x_n]$. The {\it $m$-th symbolic power of $I$},
denoted $I^{(m)}$, is defined to be
\[I^{(m)} = \bigcap_{P \in {\rm Ass}(I)} (I^mR_P \cap R)\] where
${\rm Ass}(I)$ denotes the set of associated primes of $I$ and $R_P$
is the ring $R$ localized at the prime ideal $P$.
\begin{rem}
There is some ambiguity in the literature concerning the notion of
symbolic powers. The intersection in the definition is sometimes
taken over all associated primes and sometimes just over the minimal
primes of $I$. In general, these two possible definitions yield
different results. However, they agree in the case of radical
ideals.
\end{rem}
In general, $I^m \subseteq I^{(m)}$, but the reverse containment may
fail. If $\sdef(I,m) = s$, then there exist $s$ homogeneous
forms $F_1,\ldots,F_s$ of $R$ such that
\[I^{(m)}/I^m = \langle F_1 + I^m, \ldots, F_s + I^m \rangle \subseteq
R/I^m.\] It follows that
$I^{(m)} = \langle F_1,\ldots,F_s \rangle + I^m.$ Note that the ideal
$\langle F_1,\ldots,F_s \rangle$ is not unique. Indeed, if
$G_1,\ldots,G_s$ is another set of coset representatives such that
$I^{(m)}/I^m = \langle G_1 + I^m, \ldots, G_s + I^m \rangle$, we still
have $I^{(m)} = \langle G_1,\ldots,G_s \rangle + I^m$, but
$\langle F_1,\ldots, F_s \rangle$ and $\langle G_1,\ldots,G_s \rangle$
may be different ideals.
We state some simple facts about $\sdef(I,m)$.
\begin{lem} \label{sdefectlemma} Let $I$ be a homogeneous radical
ideal of $R$.
\begin{enumerate}[label=(\roman*)]
\item $\sdef(I,1) = 0$.
\item If $I$ is a complete intersection, then
$\sdef(I,m) = 0$ for all $m \geqslant 1$.
\end{enumerate}
\end{lem}
\begin{proof}
$(i)$ This fact is trivial. $(ii)$ This result follows from
Zariski-Samuel \cite[Appendix 6, Lemma 5]{ZS}.
\end{proof}
Recall that $I$ is a {\it generic complete intersection} if the
localization of $I$ at any minimal associated prime of $I$ is a
complete intersection. A result of \cite{Cetal,SV,Wey} will prove
useful:
\begin{thm}[{\cite[Corollary 2.6]{Cetal}, \cite{SV}, \cite{Wey}}]\label{resix2}
Let $I$ be a homogeneous ideal of $\Bbbk[x_0,\ldots,x_n]$ that is
perfect, codimension two, and a generic complete intersection. If
\[0 \longrightarrow F \longrightarrow G \longrightarrow I
\longrightarrow 0\] is a graded minimal free resolution of $I$,
then
\[0 \longrightarrow \bigwedge^2 F \longrightarrow F\otimes G \longrightarrow {\rm Sym}^2 G \longrightarrow I^2
\longrightarrow 0\] is a graded minimal free resolution of $I^2$.
\end{thm}
\begin{rem}
Weyman's paper \cite{Wey} gives the resolution of
${\rm Sym}^2(I)$. As shown in \cite{Cetal,SV}, the hypotheses on
$I$ imply that ${\rm Sym}^2(I) \cong I^2$.
\end{rem}
Many of our arguments make use of Hilbert functions. The {\it Hilbert
function} of $R/I$, denoted ${\text {\bf H}}_{R/I}$, is the numerical function
${\text {\bf H}}_{R/I}:\mathbb{N} \rightarrow \mathbb{N}$ defined by
\[{\text {\bf H}}_{R/I}(i) := \dim_\Bbbk R_i - \dim_\Bbbk I_i\]
where $R_i$, respectively $I_i$, denotes the $i$-th graded component
of $R$, respectively $I$.
Our primary focus is to understand $\sdef(I,m)$ when $I$
defines either a star configuration or a set of points in
$\mathbb{P}^2$. In the next section, we introduce star configurations
in more detail. For now, we review the relevant background about sets
of points in $\mathbb{P}^2$.
Let ${\mathbb X} = \{P_1,\ldots,P_s\}$ be a set of distinct points in
$\mathbb{P}^2$. If $I_{P_i}$ is the ideal associated to $P_i$ in
$R = \Bbbk[x_0,x_1,x_2]$, then the homogeneous ideal associated to ${\mathbb X}$
is the ideal $I_{\mathbb X} = I_{P_1} \cap \cdots \cap I_{P_s}$.
The next lemma allows us to describe $I_{\mathbb X}^{(m)}$; although this
result is well-known, we have included a proof for completeness.
\begin{lem} Let ${\mathbb X} = \{P_1,\ldots,P_s\} \subseteq \mathbb{P}^2$ be a
set of $s$ distinct points with associated ideal
$I_{\mathbb X} = I_{P_1} \cap \cdots \cap I_{P_s}$. Then for all $m \geqslant 1$,
$I^{(m)}_{\mathbb X} = I^m_{P_1} \cap \cdots \cap I^m_{P_s}$.
\end{lem}
\begin{proof}
The associated primes of $I_{\mathbb X}$ are the ideals $I_{P_i}$ with
$i=1,\ldots,s$. Because localization commutes with products, we
have
\[I^m_{\mathbb X} R_{I_{P_i}} = (I_{\mathbb X} R_{I_{P_i}})^m = (I_{P_i}R_{P_i})^m =
I_{P_i}^mR_{P_i}.\] Note that the second equality follows from the
fact that $I_{P_i}$ is the only associated prime of $I_{\mathbb X}$ contained
in $I_{P_i}$. Since $I_{P_i}^mR_{P_i} \cap R = I_{P_i}^m$, the
result follows.
\end{proof}
For sets of points in $\mathbb{P}^2$, the symbolic defect sequence
will either be all zeroes, or all values of the sequence, except the
first, will be nonzero. Moreover, we can completely classify when the
symbolic defect sequence is all zeroes.
\begin{thm}\label{completeintersection}
Let ${\mathbb X} \subseteq \mathbb{P}^2$ be any set of points. Then the
following are equivalent:
\begin{enumerate}
\item[$(i)$] $I_{\mathbb X}$ is a complete intersection.
\item[$(ii)$] $\sdef(I_{\mathbb X},m) = 0$ for all $m \geqslant 1$.
\item[$(iii)$] $\sdef(I_{\mathbb X},m) = 0$ for some $m \geqslant 2$.
\end{enumerate}
\end{thm}
\begin{proof}
Lemma \ref{sdefectlemma} shows $(i) \Rightarrow (ii)$, and
$(ii) \Rightarrow (iii)$ is immediate. For $(iii) \Rightarrow (i)$,
it was noted in \cite[Remark 2.12(i)]{Cetal} that when ${\mathbb X}$ is not a
complete intersection of points in $\mathbb{P}^2$, then
$I_{\mathbb X}^m \neq I_{\mathbb X}^{(m)}$ for all $m \geqslant 2$. This also
follows from \cite[Theorem 2.8]{HU} or \cite[Corollary
2.5]{Huneke}.
\end{proof}
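For instance, a single point of $\mathbb{P}^2$ is a complete
intersection of two linear forms, two distinct points are a complete
intersection of a linear form and a suitable conic, and four general
points are a complete intersection of two conics; in all of these
cases $\sdef(I_{\mathbb X},m) = 0$ for every $m$, in agreement with
Theorem \ref{generalpoints}.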
\section{Symbolic squares of star configurations}
\label{sec:symbolic-square-star}
In this section, we will consider $\sdef(I,2)$ when $I$ defines a star
configuration. In fact, we prove a stronger result by finding an
ideal $J$ such that $I^{(2)} = J + I^2$. It is interesting to note
that the ideal $J$ will also be a star configuration. For
completeness, we begin with the relevant background on star
configurations.
\begin{defn}
Let $n$, $c$ and $s$ be positive integers with
$1\leqslant c\leqslant \min\{n,s\}$. Let
$\mathcal{F} = \{F_1,\ldots,F_s\}$ be a set of forms in
$R=\Bbbk[x_0,x_1,\ldots,x_n]$ with the property that all subsets of
$\mathcal{F}$ of cardinality $c+1$ are regular sequences in
$R$. Define an ideal of $R$ by setting
\begin{equation*}
I_{c,\mathcal{F}} = \bigcap_{1\leqslant i_1<\ldots<i_{c}\leqslant s}
\langle F_{i_1},\ldots,F_{i_c} \rangle.
\end{equation*}
The vanishing locus of $I_{c,\mathcal{F}}$ in $\mathbb{P}^n$ is
called a \emph{star configuration}.
When the forms $F_1,\ldots,F_s$ are all linear,
we will typically use $L_i$ in place of $F_i$ and write
$\mathcal{L}=\{L_1,\ldots,L_s\}$ in place of
$\mathcal{F} = \{F_1,\ldots,F_s\}$, and we will call the vanishing
locus of $I_{c,\mathcal{L}}$ a \emph{linear star configuration}.
\end{defn}
\begin{rem}
A.V.~Geramita is attributed with first coining the term star
configuration to describe the variety defined by
$I_{c,\mathcal{F}}$. The name is inspired by the fact that when
$n=c=2$, and $s=5$, the placement of the five lines
$\mathcal{L} = \{L_1,\ldots,L_5\}$ that define a linear star
configuration resembles a star. In this case, the locus of
$I_{c,\mathcal{L}}$ is a set of 10 points corresponding to the
intersections between these lines. It should be noted that linear
star configurations were classically called $l$-laterals (e.g. see
\cite{Do}). On the other hand, our more general definition follows
\cite{GHMN}, where the geometric objects are called hypersurface
configurations. This more general definition of star configurations
evolved through a series of papers (see \cite{AS:1,PS:1,GHMN}); in
particular, the codimension 2 case was studied before the general
case. Star configurations have been shown to have many nice
algebraic properties, but at the same time, can be used to exhibit
extremal properties. The references \cite{BH,BH2,CGVT, GHM,GMS:1}
form a small sample of papers that have studied the ideals
$I_{c,\mathcal{F}}$.
\end{rem}
\begin{rem}
Geometrically, the vanishing locus in $\mathbb{P}^n$ of the ideal
$\langle F_{i_1},\ldots,F_{i_c} \rangle$ is a complete intersection
of codimension $c$ obtained by intersecting the hypersurfaces
defined by the forms $F_{i_1},\ldots,F_{i_c}$. A star configuration
is then a union of such complete intersections.
\end{rem}
\begin{rem}
While the definition of a star configuration makes sense for
$s<n+1$, such cases are less interesting (cf. \cite[Remark
2.2]{GHM}). Therefore we will always assume that $s\geqslant n+1$.
\end{rem}
\begin{thm}
\label{pro:star-generators}
Let $I_{c,\mathcal{F}}$ be the defining ideal of a star
configuration in $\mathbb{P}^n$, with
$\mathcal{F}=\{F_1,\ldots,F_s\}$. Then
\begin{equation*}
\{F_{i_1} \cdots F_{i_{s-c+1}} \mid
1 \leqslant i_1 < \ldots < i_{s-c+1} \leqslant s\}
\end{equation*}
is a minimal generating set of $I_{c,\mathcal{F}}$.
\end{thm}
\begin{proof}
See \cite[Theorem 2.3]{PS:1} for generation (see also
\cite[Proposition 2.3 (4)]{GHMN}) and \cite[Corollary 3.5]{PS:1}
for minimality.
\end{proof}
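As an illustration (with a hypothetical choice of forms), take $n=2$,
$c=2$, and $\mathcal{F}=\{x_0,\,x_1,\,x_2,\,x_0+x_1+x_2\}$; any three
of these linear forms form a regular sequence, the star configuration
is the set of six pairwise intersection points of the four lines, and
Theorem \ref{pro:star-generators} says that $I_{2,\mathcal{F}}$ is
minimally generated by the four cubics $F_{i_1}F_{i_2}F_{i_3}$ with
$1 \leqslant i_1 < i_2 < i_3 \leqslant 4$.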
We will make use of the following decomposition of the $m$-th symbolic
power; this follows from \cite[Theorem 3.6 (i)]{GHMN}.
\begin{thm}
\label{pro:symbolic-powers-star}
Let $I_{c,\mathcal{F}}$ be the defining ideal of a star
configuration in $\mathbb{P}^n$, with
$\mathcal{F}=\{F_1,\ldots,F_s\}$. For all $m\geqslant 1$, we have
\begin{equation*}
I_{c,\mathcal{F}}^{(m)} = \bigcap_{1\leqslant i_1<\ldots<i_{c}\leqslant s}
\langle F_{i_1},\ldots,F_{i_c} \rangle^m.
\end{equation*}
\end{thm}
We will first consider the case of a linear star configuration
$I_{c,\mathcal{L}}$ in $\mathbb{P}^n$, with
$\mathcal{L}=\{L_1,\ldots,L_s\}$ when $|\mathcal{L}|=n+1$. In this
context, we can reduce to the case of monomial ideals. Then,
following \cite{GHMN}, we will apply our results to obtain
corresponding statements for arbitrary star configurations.
\subsection{The monomial case}
\label{sec:monomial-case}
Let $I_{c,\mathcal{L}}$ be the defining ideal of a linear star
configuration in $\mathbb{P}^n$, with
$\mathcal{L}=\{L_1,\ldots,L_s\}$. Suppose that
$|\mathcal{L}|=n+1$. Then, up to a change of variables, we may assume
that the hyperplanes forming the star configuration are defined by the
coordinate functions $x_0,x_1,\ldots,x_n$. By Theorem
\ref{pro:symbolic-powers-star}, we have
\begin{equation*}
I^{(m)}_{c,\mathcal{L}} = \bigcap_{0\leqslant i_1 < \ldots < i_c \leqslant n}
\langle x_{i_1},\ldots,x_{i_c}\rangle^m.
\end{equation*}
Clearly, $I_{c,\mathcal{L}}$ and its symbolic powers are monomial
ideals. A monomial $p = x_0^{a_0} x_1^{a_1} \cdots x_n^{a_n}$ belongs
to $I^{(m)}_{c,\mathcal{L}}$ if and only if it satisfies the condition
\begin{equation}
\label{eq:1}
a_{i_1} + a_{i_2} + \cdots + a_{i_c} \geqslant m \text{ for all }
0\leqslant i_1 < \cdots < i_c \leqslant n.
\end{equation}
Let $\operatorname{Supp} (p)$ denote the support of $p$, i.e.,
$\operatorname{Supp} (p) = \{ x_i \mid x_i \text{ divides } p\}$.
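For instance (with the illustrative values $n=3$, $c=2$, and $m=2$),
the monomial $x_0x_1x_2x_3$ satisfies \eqref{eq:1}, since any two of
its exponents sum to $2$, whereas $x_0^2x_1^2$ does not, because
$a_2+a_3 = 0 < 2$; accordingly, $x_0x_1x_2x_3 \in
I^{(2)}_{2,\mathcal{L}}$ while $x_0^2x_1^2 \notin
I^{(2)}_{2,\mathcal{L}}$.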
We are now able to describe an ideal $M$ with the property that
$I^{(m)}_{c,\mathcal{L}} = I^m_{c,\mathcal{L}} + M$.
\begin{thm}
\label{thm:symbolic-monomial}
Let $\mathcal{L} = \{x_0,\ldots,x_n\}$.
Then $I^{(m)}_{c,\mathcal{L}} = I^m_{c,\mathcal{L}} + M$, where $M$
is the ideal generated by all monomials satisfying equation
\eqref{eq:1} whose support has cardinality at least $n-c+3$.
\end{thm}
\begin{proof}
Clearly $I^{(m)}_{c,\mathcal{L}} \supseteq I^m_{c,\mathcal{L}} + M$.
To show the other containment, consider a monomial
$p = x_0^{a_0} x_1^{a_1} \ldots x_n^{a_n} \in
I^{(m)}_{c,\mathcal{L}}$. Since $p\in I^{(m)}_{c,\mathcal{L}}$, we
have $p\in I_{c,\mathcal{L}}$. Then
$|\operatorname{Supp} (p)| \geqslant n-c+2$ by Theorem
\ref{pro:star-generators}.
If $|\operatorname{Supp} (p)| = n-c+2$, then the complement of
$\operatorname{Supp} (p)$ in $\{x_0,x_1,\ldots,x_n\}$ has
cardinality $c-1$. Therefore we can write
\begin{equation*}
\{x_0,x_1,\ldots,x_n\} \setminus \operatorname{Supp}(p)
= \{x_{j_1},\ldots,x_{j_{c-1}}\}.
\end{equation*}
For each $x_i \in \operatorname{Supp}(p)$, equation \eqref{eq:1}
implies that
\begin{equation*}
a_i = a_i + a_{j_1} + \ldots + a_{j_{c-1}} \geqslant m.
\end{equation*}
Thus $p$ is a multiple of
\begin{equation*}
\prod_{x_i \in \operatorname{Supp} (p)} x_i^m =
\Bigg(\prod_{x_i \in \operatorname{Supp} (p)} x_i\Bigg)^m
\end{equation*}
which is the $m$-th power of a generator of $I_{c,\mathcal{L}}$ by
Theorem \ref{pro:star-generators}. Therefore
$p\in I^m_{c,\mathcal{L}}$.
On the other hand, if $|\operatorname{Supp} (p)| \geqslant n-c+3$,
then $p \in M$ by definition.
\end{proof}
For $m=2$ and $m=3$, we can improve upon the statement of Theorem
\ref{thm:symbolic-monomial}.
\begin{cor}
\label{cor:symb-square-monomial}
Let $\mathcal{L} = \{x_0,\ldots,x_n\}$.
We have
$I^{(2)}_{c,\mathcal{L}} = I_{c-1,\mathcal{L}} +
I^2_{c,\mathcal{L}}$.
\end{cor}
\begin{proof}
By \cite[Lemma 2.13]{GHM}, we have
$I_{c-1,\mathcal{L}} \subseteq I^{(2)}_{c,\mathcal{L}}$, which
implies the containment
$I^{(2)}_{c,\mathcal{L}} \supseteq I_{c-1,\mathcal{L}} +
I^2_{c,\mathcal{L}}$ (these containments hold for any
linear star configuration ideal, not just a monomial
star configuration ideal). To prove the other
containment, we use the fact that our ideals are monomial ideals.
Consider a monomial
$p = x_0^{a_0} x_1^{a_1} \ldots x_n^{a_n} \in
I^{(2)}_{c,\mathcal{L}}$. As observed in the proof of Theorem
\ref{thm:symbolic-monomial},
$|\operatorname{Supp} (p)| \geqslant n-c+2$ and, in the case of
equality, $p\in I^{2}_{c,\mathcal{L}}$. Assume
$|\operatorname{Supp} (p)| \geqslant n-c+3$. Then $p$ is divisible
by one of the generators of $I_{c-1,\mathcal{L}}$ described in
Theorem \ref{pro:star-generators}. Therefore
$p \in I_{c-1,\mathcal{L}}$.
\end{proof}
\begin{rem}
The above result was first proved in \cite[Corollary 3.7, Corollary
4.5]{LBM} in the special cases that $n=c=2$, and $n=c=3$. The above
statement is also mentioned in \cite[Remark 4.6]{LBM}, but no proof
is given.
\end{rem}
\begin{cor}
\label{cor:symb-cube-monomial}
Let $\mathcal{L} = \{x_0,\ldots,x_n\}$.
If $c \geqslant 3$, we have
$I^{(3)}_{c,\mathcal{L}} = I_{c-2,\mathcal{L}} + I_{c-1,\mathcal{L}}
I_{c,\mathcal{L}} + I^3_{c,\mathcal{L}}$.
\end{cor}
\begin{proof}
We require $c \geqslant 3$ so that the ideals on the right
hand side are defined. We first show that
$I_{{c-2},\mathcal{L}} \subseteq I^{(3)}_{c,\mathcal{L}}$. Recall
that
$$
I_{c-2,\mathcal{L}} = \langle x_{i_1} \cdots x_{i_{n-c+4}} \mid 0\leqslant
i_1<\cdots < i_{n-c+4}\leqslant n \rangle.
$$
Consider any subset $A = \{x_{i_1},\ldots,x_{i_c}\}$ of
$\{x_0,x_1,\dots,x_n\}$ with $|A|=c$, and consider any generator
$m = x_{i_1} \cdots x_{i_{n-c+4}}$ of $I_{c-2,\mathcal{L}}$. Then at
least three of the variables of $A$, say $x_i,x_j,$ and $x_k$, appear
in ${\rm Supp}(m) = \{ x_{i_1}, \ldots, x_{i_{n-c+4}}\}$. Because
$x_ix_jx_k \in \langle x_{i_1},\ldots,x_{i_c} \rangle^3$, this means
that $m \in \langle x_{i_1},\ldots,x_{i_c} \rangle^3$. But this
implies that every generator $m$ of $I_{{c-2},\mathcal{L}}$ satisfies
\[m \in \left(\bigcap_{0 \leqslant i_1 < \cdots < i_c \leqslant n} \langle
x_{i_1},\ldots,x_{i_c} \rangle^3 \right)=
I_{c,\mathcal{L}}^{(3)}.\] In other words,
$I_{{c-2},\mathcal{L}} \subseteq I^{(3)}_{c,\mathcal{L}}$.
By \cite[Lemma 2.13]{GHM}, we have
$I_{{c-1},\mathcal{L}} \subseteq I_{c,\mathcal{L}}^{(2)}$. This
result allows us to conclude that
\begin{equation*}
I_{c-1,\mathcal{L}} I_{c,\mathcal{L}} \subseteq
I^{(2)}_{c,\mathcal{L}} I_{c,\mathcal{L}}
\subseteq I^{(3)}_{c,\mathcal{L}}.
\end{equation*}
Therefore we have the containment
$I^{(3)}_{c,\mathcal{L}} \supseteq I_{c-2,\mathcal{L}} +
I_{c-1,\mathcal{L}} I_{c,\mathcal{L}} + I^3_{c,\mathcal{L}}$. To
prove the other containment, we again exploit the fact that our ideals
are all monomial.
Consider a monomial
$p = x_0^{a_0} x_1^{a_1} \ldots x_n^{a_n} \in
I^{(3)}_{c,\mathcal{L}}$. By Theorem \ref{thm:symbolic-monomial},
$|\operatorname{Supp} (p)| \geqslant n-c+2$ and, in the case of
equality, $p\in I^{3}_{c,\mathcal{L}}$. Let
$|\operatorname{Supp} (p)| =n-c+3$. In this case, the complement of
$\operatorname{Supp} (p)$ in $\{x_0,x_1,\ldots,x_n\}$ has cardinality
$c-2$, so we can write
\begin{equation*}
\{x_0,x_1,\ldots,x_n\} \setminus \operatorname{Supp}(p)
= \{x_{j_1},\ldots,x_{j_{c-2}}\}.
\end{equation*}
For each pair $x_{i_1},x_{i_2} \in \operatorname{Supp}(p)$, equation
\eqref{eq:1} implies that
\begin{equation*}
a_{i_1} + a_{i_2} = a_{i_1} + a_{i_2} + a_{j_1} + \ldots + a_{j_{c-2}} \geqslant 3.
\end{equation*}
Thus either $a_{i_1} \geqslant 2$ or $a_{i_2} \geqslant 2$. Repeating
the same argument for all pairs $x_{i_1},x_{i_2}$ in
$\operatorname{Supp}(p)$, it follows that there are $n-c+2$ elements
$x_h \in \operatorname{Supp} (p)$ such that $x_h^2 \mid p$. Hence $p$
is divisible by a monomial of the form
\begin{equation*}
x_{k_0} x_{k_1}^2 \ldots x_{k_{n-c+2}}^2 =
(x_{k_0} x_{k_1} \ldots x_{k_{n-c+2}})
(x_{k_1} \ldots x_{k_{n-c+2}}),
\end{equation*}
and therefore $p \in I_{c-1,\mathcal{L}} I_{c,\mathcal{L}}$ by Theorem
\ref{pro:star-generators}. As in the previous proof,
if $|\operatorname{Supp} (p)| \geqslant n-c+4$, then $p$ is divisible
by a generator of $I_{c-2,\mathcal{L}}$, which completes the proof.
\end{proof}
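To see the shape of this decomposition in the smallest case (an
illustration only), take $n=c=3$ and $\mathcal{L}=\{x_0,x_1,x_2,x_3\}$:
then $I_{1,\mathcal{L}} = \langle x_0x_1x_2x_3\rangle$,
$I_{2,\mathcal{L}}$ is generated by the four squarefree cubics,
$I_{3,\mathcal{L}}$ by the six squarefree quadrics, and the corollary
reads
$I^{(3)}_{3,\mathcal{L}} = \langle x_0x_1x_2x_3\rangle +
I_{2,\mathcal{L}}\,I_{3,\mathcal{L}} + I^3_{3,\mathcal{L}}$.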
\begin{thm}
\label{cor:monomial-sdefect}
Let $\mathcal{L} = \{x_0,\ldots,x_n\}$. We have
$\sdef (I_{c,\mathcal{L}},m) =1$ if and only if $c=m =2$.
\end{thm}
\begin{proof}
Let $c=m=2$. By Theorem \ref{pro:star-generators},
$I_{c-1,\mathcal{L}}=I_{1,\mathcal{L}}=\langle x_0 x_1
\cdots x_n \rangle$ is a principal ideal generated in degree
$n+1$. In contrast, $I_{c,\mathcal{L}}^2$ is generated in degree
$2n$. Therefore, the equality
$I^{(2)}_{c,\mathcal{L}} = I_{c-1,\mathcal{L}} +
I^2_{c,\mathcal{L}}$ of Corollary \ref{cor:symb-square-monomial},
implies that $I^{(2)}_{c,\mathcal{L}}/I^2_{c,\mathcal{L}}$ has a
single minimal generator. Thus $\sdef (I_{c,\mathcal{L}},m) =1$.
Conversely, assume $\sdef (I_{c,\mathcal{L}},m) =1$. By Theorem
\ref{thm:symbolic-monomial},
$I_{c,\mathcal{L}}^{(m)} = I_{c,\mathcal{L}}^m + M$, where $M$ is
the monomial ideal generated by all monomials satisfying equation
\eqref{eq:1} whose support has cardinality at least $n-c+3$. Since
$\sdef (I_{c,\mathcal{L}},m) =1$, we deduce $M\neq 0$. Given any
monomial $p \in M$, we must have
\begin{equation*}
n+1 \geqslantqslantslant |\operatorname{Supp} (p) | \geqslantqslantslant n-c+3.
\end{equation*}
This implies $c\geqslantqslantslant 2$. For any choice of indices
$0\leqslantqslantslant i_{1} < \cdots < i_{n-c+3} \leqslantqslantslant n$, the monomial
\begin{equation}
\lambdabel{eq:3}
p = x_{i_1} x_{i_2}^{m -1} x_{i_3}^{m -1} \cdots x_{i_{n-c+3}}^{m-1}
\end{equation}
satisfies the condition in equation \eqref{eq:1}, and therefore
$p\in M$. We claim that $p$ is a minimal generator of $M$. If it were
not, then we could divide $p$ by a variable in its support and
obtain a new monomial still in $M$. However, if we divide $p$ by any
variable in its support, we either obtain a monomial whose support
has cardinality less than $n-c+3$ or one that violates equation
\eqref{eq:1}. Thus the claim holds. Note also that the degree of
$p$ is $(m -1)(n-c+2)+1$, and this is strictly smaller than the
degree of a minimal generator of $I_{c,\mathcal{L}}^m$, i.e.,
$m (n-c+2)$. It follows that the residue class of $p$ can be taken
as a minimal generator of
$I_{c,\mathcal{L}}^{(m)} / I_{c,\mathcal{L}}^m$. Hence each monomial
of the same form as $p$ contributes 1 to
$\sdef (I_{c,\mathcal{L}}, m)$. Now, if $c>2$ or $m>2$, the freedom
in the choice of the indices $i_{1}, \ldots, i_{n-c+3}$ implies that
$\sdef (I_{c,\mathcal{L}}, m) > 1$.
\end{proof}
\subsection{The general case}
\label{sec:general-case}
To extend the results of the monomial case to arbitrary star
configurations, we recall a powerful theorem of Geramita, Harbourne,
Migliore, and Nagel \cite[Theorem 3.6 (i)]{GHMN}.
\begin{thm}
\label{thm:GHMN-main}
Let $I_{c,\mathcal{F}}$ be the defining ideal of a star
configuration in $\mathbb{P}^n$, with
$\mathcal{F}=\{F_1,\ldots,F_s\}\subseteq
R=k[x_0,x_1,\ldots,x_n]$. Let $S=k[y_1,\ldots,y_s]$ and define a
ring homomorphism $\varphi \colon S\to R$ by setting
  $\varphi (y_i) = F_i$ for $1\leqslant i\leqslant s$. If $I$ is an ideal of
  $S$, then we write $\varphi_* (I)$ to denote the ideal of $R$
  generated by $\varphi (I)$. Let $\mathcal{L} =
\{y_1,\ldots,y_s\}$. Then, for each positive integer $m$, we have
\begin{equation*}
I_{c,\mathcal{F}}^{(m)} = \varphi_* (I_{c,\mathcal{L}})^{(m)} =
\varphi_* (I_{c,\mathcal{L}}^{(m)}).
\end{equation*}
\end{thm}
Since the operator $\varphi_*$ commutes with ideal sums and products,
Theorem \ref{thm:GHMN-main} applied to our results from the previous
section gives the following more general statements.
\begin{thm}
Let $I_{c,\mathcal{F}}$ be the defining ideal of a star
configuration in $\mathbb{P}^n$, with
$\mathcal{F}=\{F_1,\ldots,F_s\}$. Then
$I^{(m)}_{c,\mathcal{F}} = I^m_{c,\mathcal{F}} + M$, where $M$ is
the ideal generated by all products $F_1^{a_1} \cdots F_s^{a_s}$
such that:
\begin{enumerate}
  \item $|\{i \mid a_i > 0\}| \geqslant s-c+2$;
  \item
    $\forall\, 1\leqslant i_1 < \cdots < i_c \leqslant s,\ a_{i_1} +
    a_{i_2} + \cdots + a_{i_c} \geqslant m$.
\end{enumerate}
\end{thm}
\begin{cor}
\label{cor:general-symb-square}
We have
$I^{(2)}_{c,\mathcal{F}} = I_{c-1,\mathcal{F}} +
I^2_{c,\mathcal{F}}$.
\end{cor}
\begin{cor} We have
  $\sdef(I_{c,\mathcal{F}},2) \leqslant
  \binom{s}{c-2}$. Furthermore, if
  $\mathcal{F} = \mathcal{L} = \{L_1,\ldots,L_s\}$, that is, if
  $I_{c,\mathcal{L}}$ is a linear star configuration, then
$\sdef(I_{c,\mathcal{L}},2) = \binom{s}{c-2}$.
\end{cor}
\begin{proof}
By Corollary \ref{cor:general-symb-square},
$I^{(2)}_{c,\mathcal{F}} = I_{c-1,\mathcal{F}} +
I^2_{c,\mathcal{F}}$. By Theorem \ref{pro:star-generators}, the
  ideal $I_{c-1,\mathcal{F}}$ is generated by
  $\binom{s}{s-c+2} = \binom{s}{c-2}$ minimal
  generators, so we need to add at most $\binom{s}{c-2}$ generators
  to $I^2_{c,\mathcal{F}}$ to generate $I^{(2)}_{c,\mathcal{F}}$.
If $\mathcal{F} = \mathcal{L}$, by Theorem \ref{pro:star-generators}
$I^2_{c,\mathcal{L}}$ is generated by forms of degree $2(s-c+1)$.
On the other hand, again by Theorem \ref{pro:star-generators}, the
ideal $I_{c-1,\mathcal{L}}$ is generated by generators of degree
$s-c+2$. Since $s-c+2 < 2(s-c+1)$, all the generators of
$I_{c-1,\mathcal{L}}$ need to be added to $I^2_{c,\mathcal{L}}$ to
generate $I_{c,\mathcal{L}}^{(2)}$, i.e., none of them are
redundant.
\end{proof}
\begin{rem}
In the above proof, we appealed to the degrees of the elements of
$\mathcal{L}$ to justify why all the generators of
$I_{c-1,\mathcal{L}}$ are required. In the general case, it may
happen that some of the minimal generators of $I_{c-1,\mathcal{F}}$
have degree larger than a minimal generator of
$I_{c,\mathcal{F}}^2$, thus preventing us from generalizing this
argument.
\end{rem}
The following are also immediate consequences of results from the
previous section.
\begin{cor}
We have
$I^{(3)}_{c,\mathcal{F}} = I_{c-2,\mathcal{F}} + I_{c-1,\mathcal{F}}
I_{c,\mathcal{F}} + I^3_{c,\mathcal{F}}$. In particular,
\[\sdef(I_{c,\mathcal{F}},3) \leqslantqslant \binom{s}{c-3} +
\binom{s}{c-2}\binom{s}{c-1}.\]
\end{cor}
\begin{thm}\label{sdefect-one-classify}
We have $\sdef (I_{c,\mathcal{F}},m) =1$ if and only if $c=m =2$.
\end{thm}
\subsection{Powers of codimension two linear star configurations}
We round out this section by considering the higher $m$-th symbolic
powers of the linear star configuration $I_{2,\mathcal{L}}$ in
$\mathbb{P}^2$. Note that in this case the linear star configuration
defines a collection of points in $\mathbb{P}^2$. By applying \cite[Corollary 3.9]{HH}
of Harbourne and Huneke (and see also
\cite[Example 3.9]{Cetal} for additional details), we have the
following relationship between the regular and symbolic powers of
$I_{2,\mathcal{L}}$ in $\mathbb{P}^2$.
\begin{thm}\label{higherpowers}
Suppose that $I_{2,\mathcal{L}}$ defines a linear star configuration
in $\mathbb{P}^2$. Then
\[I_{2,\mathcal{L}}^{(2m)} = (I_{2,\mathcal{L}}^{(2)})^m ~~\mbox{
for all $m \geqslantqslant 1$.}\]
\end{thm}
We can then derive bounds on some of the values of the symbolic defect
sequence.
\begin{thm}
Suppose that $I_{2,\mathcal{L}}$ defines a linear star configuration
in $\mathbb{P}^2$. Then
\[\sdef(I_{2,\mathcal{L}},2m) \leqslant 1+|\mathcal{L}|(m-1) ~~\mbox{for
  all $m\geqslant 1$.}\]
\end{thm}
\begin{proof}
Suppose that $\mathcal{L} = \{L_1,\ldots,L_s\}$. By Corollary
\ref{cor:general-symb-square} we have
  \[I_{2,\mathcal{L}}^{(2)} = \langle L_1\cdots L_s \rangle +
  I_{2,\mathcal{L}}^2\] since
  $I_{1,\mathcal{L}} = \langle L_1\cdots L_s \rangle$. Let
$L = L_1\cdots L_s$. It then follows by Theorem \ref{higherpowers}
that
\begin{eqnarray*}
    I_{2,\mathcal{L}}^{(2m)} & = & \left[\langle L \rangle + I_{2,\mathcal{L}}^2\right]^m \\
    & = & \langle L \rangle^m + \langle L \rangle^{m-1}I_{2,\mathcal{L}}^2
    + \langle L \rangle^{m-2}I_{2,\mathcal{L}}^4
    + \cdots + \langle L \rangle^{1}I_{2,\mathcal{L}}^{2m-2} + I_{2,\mathcal{L}}^{2m}.
\end{eqnarray*}
Since $I_{2,\mathcal{L}}$ is generated by forms of degree $(s-1)$,
  we can use a degree argument to show that none of the generators
  of
  $\langle L \rangle^m + \langle L \rangle^{m-1}I_{2,\mathcal{L}}^2 +
  \langle L \rangle^{m-2}I_{2,\mathcal{L}}^4 + \cdots + \langle L
  \rangle^{1}I_{2,\mathcal{L}}^{2m-2}$ belong to
  $I_{2,\mathcal{L}}^{2m}$.
  Define
  $J_{2a} = \langle \frac{L^{2a}}{L_i^{2a}} ~|~ i=1,\ldots,s
  \rangle$ for $a = 1,\ldots,m-1$. We claim that for
$1\leqslant a \leqslant m-1$,
\begin{multline*}
    \langle L \rangle^m + \langle L \rangle^{m-1}I_{2,\mathcal{L}}^2
    + \cdots + \langle L \rangle^{m-a+1}I_{2,\mathcal{L}}^{2(a-1)} +
    \langle L \rangle^{m-a}I_{2,\mathcal{L}}^{2a}
    \\
    = \langle L \rangle^m + \langle L \rangle^{m-1}J_2 + \cdots +
    \langle L \rangle^{m-a+1}J_{2(a-1)} + \langle L
    \rangle^{m-a}I_{2,\mathcal{L}}^{2a}.
\end{multline*}
Indeed, the ideal on the right is contained in the ideal on the
left because each generator of $J_{2a}$ is a generator of
$I_{2,\mathcal{L}}^{2a}$.
For the reverse containment, we do induction on $a$. It is
straightforward to check that
$\lambdangle L \longrightarrowngle^m + \lambdangle L \longrightarrowngle^{m-1}I_{2,\mathcal{L}}^2
= \lambdangle L \longrightarrowngle^m + \lambdangle L \longrightarrowngle^{m-1}J_2$ for the base
case. Assume now that {$2 \leqslantqslant a \leqslant m-1$}. By induction on $a$,
\begin{multline*}
\lambdangle L \longrightarrowngle^m + \lambdangle L \longrightarrowngle^{m-1}I_{2,\mathcal{L}}^2
+ \cdots + \lambdangle L \longrightarrowngle^{m-a+1}I_{2,\mathcal{L}}^{2(a-1)} =
\lambdangle L \longrightarrowngle^m + \lambdangle L \longrightarrowngle^{m-1}J_2 + \cdots +
\lambdangle L \longrightarrowngle^{m-(a-1)}J_{2(a-1)}.
\end{multline*}
To finish the proof of the claim, we need to show that
\[
    \langle L \rangle^{m-a}I_{2,\mathcal{L}}^{2a} \subseteq \langle
    L \rangle^m + \langle L \rangle^{m-1}J_2 + \cdots + \langle L
    \rangle^{m-a+1}J_{2(a-1)} + \langle L \rangle^{m-a}J_{2a} .\]
Because of Theorem \ref{pro:star-generators}, $I_{2,\mathcal{L}}$
is generated by elements of the form $F_i = L/L_i$ for some
$i=1,\ldots,s$. So, a generator of $I_{2,\mathcal{L}}^{2a}$ has
the form $F_{i_1}F_{i_2}\cdots F_{i_{2a}}$ where
$i_1,\ldots,i_{2a}$ need not be distinct. If
$i_1 = \cdots = i_{2a} = i$, then the generator
$F_{i_1}F_{i_2}\cdots F_{i_{2a}} = \frac{L^{2a}}{L_i^{2a}}$ of
$I_{2,\mathcal{L}}^{2a}$ is also a generator of $J_{2a}$, so $L^{m-a}F_{i_1}F_{i_2}\cdots F_{i_{2a}} \in \lambdangle L \longrightarrowngle^{m-a}J_{2a}$. If at least two of $i_1,\ldots,i_{2a}$ are distinct, say $i_1 \neq i_2$, then
\[F_{i_1}F_{i_2}\cdots F_{i_{2a}} = F_{i_1}L_{i_1}\frac{F_{i_2}}{L_{i_1}}F_{i_3} \cdots F_{i_{2a}}
= L\frac{F_{i_2}}{L_{i_1}}F_{i_3} \cdots F_{i_{2a}}.\]
But then
  \[
  L^{m-a}F_{i_1}F_{i_2}\cdots F_{i_{2a}} =
  L^{m-a+1} \frac{F_{i_2}}{L_{i_1}}F_{i_3} \cdots F_{i_{2a}} \in
  \langle L \rangle^{m-(a-1)}I_{2,\mathcal{L}}^{2(a-1)}.
  \]
By induction, we then have
$$
  L^{m-a}F_{i_1}F_{i_2}\cdots F_{i_{2a}} \in \langle L \rangle^m + \langle L \rangle^{m-1}J_2
  + \cdots +
  \langle L \rangle^{m-a+1}J_{2(a-1)} +
  \langle L \rangle^{m-a}J_{2a}.
$$
This now verifies the claim.
  To complete the proof, note that to form
  $I_{2,\mathcal{L}}^{(2m)}$, we can add all of the generators
  of $\langle L \rangle^m + \langle L \rangle^{m-1}J_2
  + \cdots + \langle L \rangle^{1}J_{2m-2}$ to $I_{2,\mathcal{L}}^{2m}$.
  This ideal has at most
  $1+s(m-1)$ minimal generators (our generating set may not be minimal)
  since each ideal $J_{2a}$ has $s$ generators,
  so $\sdef(I_{2,\mathcal{L}},2m) \leqslant 1 + s(m-1)$.
\end{proof}
\section{A geometric consequence}
By Theorem \ref{sdefect-one-classify},
if $I_{c,\mathcal{L}}$ is
a linear star configuration in
$\mathbb{P}^n$ of codimension two, then $\sdef(I_{c,\mathcal{L}},2) =1$ since $c =2$.
If $n =2$, then the linear star configuration
defined by $I_{c,\mathcal{L}}$ is a collection of points in $\mathbb{P}^2$,
and thus, there exist sets of points ${\mathbb X}$ in $\mathbb{P}^2$
with $\sdef(I_{\mathbb X},2) =1$.
In general, it would be interesting to classify all the ideals $I_{\mathbb X}$
of sets of points ${\mathbb X}$ in $\mathbb{P}^2$ with $\sdef(I_{\mathbb X},2) =1$.
In this section, we show under some additional hypotheses, that
if ${\mathbb X}$ is a set of points in $\mathbb{P}^2$ with $\sdef(I_{\mathbb X},2) =1$,
then ${\mathbb X}$ must be a linear star configuration.
We first recall some facts about the defining ideals of points in
$\mathbb{P}^2$; many of these results are probably known to the
experts, but for completeness, we include their proofs. Recall that
for any homogeneous ideal $I \subseteq R$, we let
$\alpha(I) = \min\{i ~|~ I_i \neq 0 \}$. Note that for any
$m \geqslantqslant 1$, $\alpha(I^m) = m \alpha(I)$.
The following is the so-called {\em Dubreil's inequality}
(see \cite{Ca, DGM}), but an elementary proof (which we now give)
is also possible.
\begin{lem}\label{genslem}
Let $\mathbb{X} \subseteq \mathbb{P}^2$ be a finite set of points.
If $\alpha=\alpha(I_{\mathbb{X}})$, then $I_{\mathbb{X}}$ has
at most $\alpha+1$ minimal generators of degree $\alpha$.
\end{lem}
\begin{proof}
Because $\alpha = \alpha(I_{\mathbb{X}})$, the Hilbert function
of $\mathbb{X}$ at $\alpha-1$ is
${\bf H}_{R/I_{\mathbb X}}(\alpha-1) = \dim_\Bbbk R_{\alpha-1} = \binom{\alpha+1}{2}$.
If $I_{\mathbb{X}}$ has $d > \alpha+1$ generators of degree
$\alpha$, then ${\bf H}_{R/I_{\mathbb X}}(\alpha) = \binom{\alpha+2}{2} - d <
\binom{\alpha+2}{2} - (\alpha+1) = \binom{\alpha+1}{2}$.
In other words, ${\bf H}_{R/I_{\mathbb X}}(\alpha-1) > {\text {\bf H}}_{R/I_{\mathbb X}}(\alpha)$,
contradicting the fact that the Hilbert functions of sets of points
must be non-decreasing functions \cite[cf.~proof of Proposition 1.1 (2)]{GM}.
\end{proof}
The next lemma is a classification of those sets of points
which have exactly $\alpha+1$ minimal generators of degree $\alpha$.
\begin{lem}\label{equivconditions}
Let ${\mathbb X}$ be a set of points of $\mathbb{P}^2$. Then the following are
equivalent:
\begin{enumerate}
\item[$(i)$] The ideal $I_{\mathbb X}$ has $\alpha+1$ minimal generators of degree
$\alpha = \alpha(I_{\mathbb{X}})$;
\item[$(ii)$] The set ${\mathbb X}$ is a set of $\binom{\alpha+1}{2}$ points in
${\mathbb P}^2$ having generic Hilbert function, i.e.,
\[{\bf H}_{R/I_{\mathbb X}}(i) = \min\{\dim_\Bbbk R_i,|{\mathbb X}|\} ~~~\mbox{for all $i \geqslantqslant 0$; and}\]
\item[$(iii)$] The ideal $I_{\mathbb X}$ has a graded linear resolution.
\end{enumerate}
\end{lem}
\begin{proof}
$(i) \Rightarrow (ii)$ If $I_{\mathbb X}$ has $\alpha+1$ minimal generators of degree $\alpha$, it follows that
\[\binom{\alpha+1}{2} = {\text {\bf H}}_{R/I_{\mathbb X}}(\alpha-1) = {\text {\bf H}}_{R/I_{\mathbb X}}(\alpha) = \binom{\alpha+2}{2} -
\binom{\alpha+1}{1}.\]
Because the Hilbert function of a set of points in $\mathbb{P}^2$ is a strictly increasing
function until it reaches $|{\mathbb X}|$, we have $|{\mathbb X}| = \binom{\alpha+1}{2}$, and
the Hilbert function of $R/I_{\mathbb X}$ is given by
\[{\text {\bf H}}_{R/I_{\mathbb X}}(t) = \min\leqslantft\{\dim_\Bbbk R_t, \binom{\alpha+1}{2}\right\} ~~\mbox{for all $t \geqslantqslant 0$}.\]
$(ii) \Rightarrow (iii)$
If $R/I_{\mathbb X}$ has the generic Hilbert function, one can use
Section 3 of \cite{L} to deduce that the resolution is
\[
0 \longrightarrow R^\alpha(-(\alpha+1)) \longrightarrow
R^{\alpha+1}(-\alpha) \longrightarrow R \longrightarrow R/I_{\mathbb X} \longrightarrow 0,\]
i.e., the graded resolution is linear.
$(iii) \Rightarrow (i)$
Assume that $I_{\mathbb X}$ has a linear graded free resolution
\[
0 \longrightarrow R^{\beta-1}(-(\alpha+1)) \longrightarrow
R^{\beta}(-\alpha) \longrightarrow R \longrightarrow R/I_{\mathbb X} \longrightarrow 0.\]
Since ${\text {\bf H}}_{R/I_{\mathbb X}}(t)={\text {\bf H}}_{R/I_{\mathbb X}}(t+1)$ for $t \gg 0$,
we get that
\begin{multline*}
\dim_\Bbbk R_t-\beta \dim_\Bbbk R_{t-\alpha}+(\beta-1) \dim_\Bbbk R_{t-(\alpha+1)} \\
= \dim_\Bbbk R_{t+1}-\beta \dim_\Bbbk R_{(t+1)-\alpha}+(\beta-1)\dim_\Bbbk R_{(t+1)-(\alpha+1)}.
\end{multline*}
This proves that $\beta=\alpha+1$, i.e., $I_{\mathbb X}$ has $\alpha+1$
minimal generators of degree $\alpha$.
\end{proof}
\begin{lem}\label{degreebound}
Let ${\mathbb X}$ be a set of points of $\mathbb{P}^2$, and
suppose that any of the three equivalent conditions
of Lemma \ref{equivconditions} holds.
If $I_{\mathbb X}^{(2)} = I_{\mathbb X}^2 + \langle F_1,\ldots,F_r \rangle$, i.e.,
the $F_i$ comprise a minimal set of homogeneous generators of $I_{\mathbb X}^{(2)}$ modulo $I_{\mathbb X}^2$,
then ${\rm deg} (F_i)<2\alpha(I_{\mathbb X})$ for all $i=1,\ldots,r$.
\end{lem}
\begin{proof} We first observe that because $I_{\mathbb X}$ is an ideal of points, then the saturation of $I_{\mathbb X}^2$ is $I_{\mathbb X}^{(2)}$. If $d$ is
the saturation degree of $I_{\mathbb X}^2$, i.e., the smallest integer $d$ such that $(I_{\mathbb X}^{(2)})_t = (I_{\mathbb X}^2)_t$ for all $t \geqslantqslant d$, then
it is known that ${\rm reg}(I_{\mathbb X}^2) \geqslantqslant d$ (see, for example, the
introduction of \cite{BG}).
Again, because $I_{\mathbb X}$ is an ideal of points, we have
$$
2{\rm reg}(I_{\mathbb X})\geqslantqslant {\rm reg}(I_{\mathbb X}^2)\geqslantqslant \alpha(I_{\mathbb X}^2) =2\alpha(I_{\mathbb X})=2{\rm reg}(I_{\mathbb X}),
$$
where the first inequality follows from \cite[Theorem 1.1]{GGP}, and the
last equality holds from the fact that $I_{\mathbb X}$ has a linear resolution.
Thus, we get that
${\rm reg}(I_{\mathbb X}^2)=2\alpha(I_{\mathbb X})$, or in
other words, $I_{\mathbb X}^2$ and $I_{\mathbb X}^{(2)}$ agree in degrees
$\geqslant {\rm reg}(I_{\mathbb X}^2)=2\alpha(I_{\mathbb X})$. Therefore, the minimal generators $F_i$ of
$I_{\mathbb X}^{(2)}$ modulo $I_{\mathbb X}^2$ have degrees less than $2\alpha(I_{\mathbb X})=2\alpha$, as we wished.
\end{proof}
When $I_{\mathbb X}$ is the homogeneous ideal of a finite set of points
${\mathbb X}$ in $\mathbb{P}^2$, it is well known that $I_{\mathbb X}$ is both perfect
and has codimension two. In addition, $I_{\mathbb X}$ is a generic complete
intersection because $I_{\mathbb X}$ is a radical ideal in a regular
ring and the minimal associated primes of $I_{\mathbb X}$ are simply
the ideals of the points $P \in {\mathbb X}$, and when we localize $I_{\mathbb X}$ at $I_P$,
we get the maximal ideal of $\Bbbk[x_0,x_1,x_2]$ localized at $I_P$,
which is a complete intersection.
We can thus apply Theorem \ref{resix2} to any homogeneous
ideal of a finite set of points in $\mathbb{P}^2$. In particular,
we record this fact as a lemma.
\begin{lem}\label{nummingens}
Let ${\mathbb X} \subseteq \mathbb{P}^2$ be a finite set of points.
Suppose that $I_{\mathbb X}$ has $d$ minimal generators of degree
$\alpha = \alpha(I_{\mathbb X})$. Then $I_{\mathbb X}^2$ has $\binom{d+1}{2}$ minimal generators
of degree $\alpha(I_{\mathbb X}^2) = 2\alpha$. In particular,
\[{\text {\bf H}}_{R/I_{\mathbb X}^2}(2\alpha) = \binom{2\alpha+2}{2} - \binom{d+1}{2}.\]
\end{lem}
\begin{proof}
If $F_1,\ldots,F_d$ are the $d$ minimal generators of degree $\alpha =
\alpha(I_{\mathbb X})$,
then by Theorem \ref{resix2}, the elements of
$\{F_iF_j ~\mid~ 1 \leqslantqslant i \leqslantqslant j \leqslantqslant d \}$
will all be minimal generators of $I_{\mathbb X}^2$. Each generator will have
degree $\alpha(I_{\mathbb X}^2) = 2\alpha$ and there are
$\binom{d+1}{2}$ such generators. For the last statement,
since $I_{\mathbb X}^2$ has no generators of degree $< 2\alpha$, we have
$\dim_\Bbbk (I_{\mathbb X}^2)_{2\alpha} = \binom{d+1}{2}$.
\end{proof}
We also require a result
of Bocci and Chiantini. Statement $(i)$ can be found in the
introduction of \cite{BC:1}, while $(ii)$ is \cite[Theorem 3.3]{BC:1}.
\begin{thm}\label{BCresults}
Let ${\mathbb X} \subseteq \mathbb{P}^2$ be a set of points.
\begin{enumerate}
\item[$(i)$] Then $\alpha(I_{\mathbb{X}}^{(2)}) \geqslantqslant \alpha(I_{\mathbb{X}})+1$.
\item[$(ii)$] If
$\alpha(I_{\mathbb X}^{(2)}) = \alpha(I_{\mathbb X}) +1$, then ${\mathbb X}$ is a linear
star configuration of points or a set of collinear points.
\end{enumerate}
\end{thm}
We now come to the main result of this section.
\begin{thm}\label{partialconverse}
Let ${\mathbb X}$ be a set of $\binom{\alpha+1}{2}$ points of $\mathbb{P}^2$
with the generic Hilbert function.
If $\sdef(I_{\mathbb X},2) = 1$, then
${\mathbb X}$ is a linear star configuration.
\end{thm}
\begin{proof}
  Since $\sdef(I_{\mathbb X},2) = 1$, there exists a form $F$ such that
  $I_{\mathbb X}^{(2)} = \langle F\rangle + I_{\mathbb X}^2$.
By Lemma \ref{degreebound}, ${{\rm deg}} F < 2\alpha$.
We now show that we must, in fact, have ${\rm deg} F \leqslantqslant \alpha+1$.
By Lemma \ref{equivconditions},
$I_{\mathbb{X}}$ has $\alpha+1$ generators of degree $\alpha$.
By Lemma \ref{nummingens}, the ideal $I_{\mathbb{X}}^2$ will have
$\binom{\alpha+2}{2}$ minimal generators of degree $2\alpha$.
Because $I_{\mathbb{X}}^2 \subseteq I_{\mathbb{X}}^{(2)}$, this means
\begin{eqnarray*}
{\text {\bf H}}_{R/I_{\mathbb{X}}^{(2)}}(2\alpha) &\leqslantqslant& {\text {\bf H}}_{R/I_{\mathbb{X}}^2}(2\alpha)
= \binom{2\alpha+2}{2} - \binom{\alpha+2}{2}\\
& = & \frac{(2\alpha+2)(2\alpha+1) - (\alpha+2)(\alpha+1)}{2}
= \frac{3\alpha^2+3\alpha}{2}.
\end{eqnarray*}
Suppose that ${\rm deg} F > \alpha+1$. Because $I_{\mathbb{X}}^2$ is generated
by forms of degree $2\alpha$ or larger, we have
\[(I_{\mathbb{X}}^{(2)})_{2\alpha-1} = [\langle F\rangle + I_{\mathbb{X}}^2]_{2\alpha-1} =
[\langle F\rangle]_{2\alpha-1},\]
and consequently,
\[{\text {\bf H}}_{R/I_{\mathbb{X}}^{(2)}}(2\alpha-1) = {\text {\bf H}}_{R/\langle F\rangle}(2\alpha-1)
= \binom{2\alpha+1}{2} - \dim_\Bbbk \langle F\rangle_{2\alpha-1}.\]
If ${\rm deg} F = d$, then $\langle F\rangle \cong R(-d)$ as graded $R$-modules,
so $\dim_\Bbbk \langle F\rangle_{2\alpha -1} = \dim_\Bbbk R_{2\alpha-1-d} = \binom{2\alpha-d+1}{2}$.
Because $d \geqslantqslant \alpha+2$, we have
\begin{eqnarray*}
{\text {\bf H}}_{R/I_{\mathbb{X}}^{(2)}}(2\alpha-1) &= &
\binom{2\alpha+1}{2} - \binom{2\alpha-d+1}{2} \\
& \geqslantqslant & \binom{2\alpha+1}{2} - \binom{2\alpha-(\alpha +2)+1}{2} \\
& = & \frac{(2\alpha+1)(2\alpha) - (\alpha-1)(\alpha-2)}{2} \\
& = & \frac{4\alpha^2 + 2\alpha -(\alpha^2- 3\alpha +2)}{2}
= \frac{3\alpha^2+ 5\alpha - 2}{2}.
\end{eqnarray*}
Since $\sdef(I_{\mathbb X},2) \neq 0$, $\mathbb{X}$ is not a complete intersection, and
thus ${\mathbb X}$ cannot be a set of points on a line. Consequently, $\alpha \geqslantqslant 2$.
But then we must have
\[ {\text {\bf H}}_{R/I_{\mathbb{X}}^{(2)}}(2\alpha-1) \geqslantqslant
\frac{3\alpha^2+ 5\alpha - 2}{2} >
\frac{3\alpha^2+3\alpha}{2} \geqslantqslant {\text {\bf H}}_{R/{I_\mathbb{X}^{(2)}}}(2\alpha).\]
This is a contradiction, so ${\rm deg} F \leqslantqslant \alpha+1$ as claimed.
Because ${{\rm deg}} F > \alpha$ by Theorem \ref{BCresults}, we must have
${\rm deg} F = \alpha+1$. Hence
$\alpha(I_{\mathbb{X}}^{(2)}) = \alpha(I_{\mathbb{X}}) +1$. Theorem
\ref{BCresults} then implies that ${\mathbb X}$ is either a linear star
configuration or a set of collinear points. If ${\mathbb X}$ were a set of
collinear points, then Theorem \ref{completeintersection} would imply
that $\sdef(I_{\mathbb X},2) = 0$ because collinear points form a complete
intersection. Thus ${\mathbb X}$ must be a linear star configuration in
$\mathbb{P}^2$.
\end{proof}
\begin{rem}
As we will see in Section 6, there exist sets of points ${\mathbb X}$ in
$\mathbb{P}^2$ with $\sdef(I_{\mathbb X},2)=1$ that are not linear star
configurations.
\end{rem}
\begin{rem}
It is natural to ask if a similar type of result holds for points in
$\mathbb{P}^n$ with $n \geqslantqslant 3$, i.e., if $\sdef(I_{\mathbb X},2)=1$,
along with some suitable hypotheses on ${\mathbb X}$, implies that ${\mathbb X}$ must
be a linear star configuration. However, this cannot happen.
Indeed, if such a set of points ${\mathbb X}$ existed, then
${\mathbb X} = V(I_{n,\mathcal{L}})$ for some $n \geqslantqslant 3$ and set of linear
forms $\mathcal{L}$, because ${\mathbb X}$ is a zero-dimensional scheme. But
then we would have $\sdef(I_{n,\mathcal{L}},2)=1$,
contradicting Theorem \ref{sdefect-one-classify}.
\end{rem}
\section{Application: Resolutions of squares of star configurations in ${\mathbb P}^n$}
In this section, we use Corollary \ref{cor:general-symb-square} to
describe a minimal free resolution of the symbolic square of the
defining ideal $I_{2,\mathcal{F}}$ of a codimension two star
configuration in $\mathbb{P}^n$.
\begin{lem}\label{L:20170926-501}
Let $I_{2,\mathcal{F}}$ be the defining ideal of a star
configuration of codimension two in $\mathbb{P}^n$. Assume
$\mathcal{F}=\{F_1,\ldots,F_s\}$, where $F_1,\ldots,F_s$ are forms
of degrees $1\leqslantqslant d_1\leqslantqslant \cdots \leqslantqslant d_s$, and let
$d=d_1+\cdots+d_s$. Then a graded minimal free resolution of
$I_{2,\mathcal{F}}^2$ has the form
\begin{equation*}
    0 \to R^{\binom{s-1}{2}}(-2d) \to \bigoplus_{1\leqslant i\leqslant s}
    R^{s-1}(-(2d-d_i)) \to \bigoplus_{1\leqslant i\leqslant j\leqslant s} R(-(2d-(d_i+d_j)))
    \to I_{2,\mathcal{F}}^2 \to 0.
\end{equation*}
\end{lem}
\begin{proof} By \cite[Theorem 3.4]{PS:1}, the ideal
$I_{2,\mathcal{F}}$ has a graded minimal free resolution of the form
$$
0 \to R^{s-1}(-d) \to \bigoplus _{1\leqslantqslant i\leqslantqslant s}R(-(d-d_i)) \to
I_{2,\mathcal{F}} \to 0.
$$
Recall that
$$
I_{2,\mathcal{F}}=\bigcap_{1\leqslantqslant i < j\leqslantqslant s} \lambdangle F_i,F_j\longrightarrowngle.
$$
Let $P$ be a minimal prime of $I_{2,\mathcal{F}}$ in $R$. Then $P$ has
height $2$.
\noindent {\em Claim.} There exists a unique pair $(i,j)$ such that
$\lambdangle F_i,F_j\longrightarrowngle \subseteq P$.
\noindent {\em Proof of Claim.} The existence of the pair follows from
\cite[Prop.~1.11]{AM}. Assume
$\lambdangle F_\alpha,F_\beta\longrightarrowngle \subseteq P$ for some indices
$\alpha,\beta$ with $\{\alpha,\beta\}\ne \{i,j\}$. Without loss of
generality, we may assume that $\alpha\ne i,j$. Then
$F_i,F_j, F_\alpha \in P$, which is a contradiction, since
$F_i,F_j,F_\alpha$ form a regular sequence of length 3 but $P$ has
height 2.
$\Box$
It follows from the claim that
\begin{equation*}
  (I_{2,\mathcal{F}})_P =\bigcap_{1\leqslant k < l \leqslant s} \langle F_k,F_l\rangle_P
  = \langle F_i,F_j\rangle_P = \big\langle \tfrac{F_i}{1},\tfrac{F_j}{1}\big\rangle.
\end{equation*}
Since localization preserves regular sequences,
$(I_{2,\mathcal{F}})_P$ is a complete intersection ideal in $R_P$. We
deduce that $I_{2,\mathcal{F}}$ is a generic complete intersection
ideal. Since $I_{2,\mathcal{F}}$ is also a perfect codimension two
ideal, we can apply Theorem
\ref{resix2} to derive the stated graded minimal free resolution of
$I_{2,\mathcal{F}}^2$.
\end{proof}
\begin{lem}\label{L:20160715-602}
Let $I_{2,\mathcal{F}}$ be the defining ideal of a star
configuration of codimension two in $\mathbb{P}^n$. Assume
$\mathcal{F}=\{F_1,\ldots,F_s\}$, and set $F=F_1\cdots F_s$. Then
\begin{enumerate}[label=(\roman*)]
  \item\label{colon-lemma-i}
    $ [I_{2,\mathcal{F}}^2:F]=\big\langle F_{i_1}\cdots F_{i_{s-2}}
    \mid 1\leqslant i_1 < \dots < i_{s-2} \leqslant s \big\rangle =I_{3,\mathcal{F}}$;
  \item\label{colon-lemma-ii}
    $I_{2,\mathcal{F}}^2 \cap \langle F\rangle = F[I_{2,\mathcal{F}}^2:F]$.
\end{enumerate}
\end{lem}
\begin{proof} $\ref{colon-lemma-i}$ First, recall that
$$
  I_{2,\mathcal{F}}^2=\bigg\langle \frac{F^2}{F_iF_j}
  \,\bigg|\, 1\leqslant i\leqslant j\leqslant s\bigg\rangle.
$$
Given indices $1\leqslantqslant i_1 < \dots < i_{s-2} \leqslantqslant s$, let
$\{i_{s-1},i_s\}$ be the complement of $\{i_1, \dots, i_{s-2}\}$ in
$\{1,\ldots,s\}$. Then we have
$$
(F_{i_1}\cdots F_{i_{s-2}}) F =\frac{F^2}{F_{i_{s-1}}
F_{i_{s}}} \in I_{2,\mathcal{F}}^2,
$$
and so $F_{i_1}\cdots F_{i_{s-2}}\in [I_{2,\mathcal{F}}^2:F]$.
Conversely, let $G\in [I_{2,\mathcal{F}}^2 : F]$. Since
$G F \in I_{2,\mathcal{F}}^2$, we have that
\begin{equation}
\lambdabel{EQ:20160715-101}
G F = \sum_{1\leqslantqslant i\leqslantqslant s} A_i \frac{F^2}{F_i^2}
+ \sum_{1\leqslantqslant i< j\leqslantqslant s} B_{i,j} \frac{F^2}{F_iF_j}
\end{equation}
for some $A_i, B_{i,j}\in R$.
\noindent
{\em Claim.} For every $1\leqslantqslant i \leqslantqslant s$, $F_i$ divides $A_i$.
\noindent {\em Proof of Claim.} For $i=1$,
$$
G F=A_1 \frac{F^2}{F_1^2}+ \sum_{2\leqslantqslant i\leqslantqslant
s} A_i \frac{F^2}{F_i^2}+ \sum_{1\leqslantqslant i< j\leqslantqslant s}
B_{i,j} \frac{F^2}{F_iF_j}.
$$
Hence
\begin{equation*}
G F-
\sum_{1\leqslantqslant i< j\leqslantqslant s} B_{i,j} \frac{F^2}{F_iF_j} -
\sum_{2\leqslantqslant i\leqslantqslant s} A_i \frac{F^2}{F_i^2} = A_1 \frac{F^2}{F_1^2}.
\end{equation*}
For all $h\neq 1$, $F_1, F_h$ is, by assumption, a regular
sequence. This implies that $F_1$ and $F_h$ are coprime. Therefore
$F_1$ must divide $A_1$ because $F_1$ divides every
term on the left hand side. Similarly, one can show that $F_i$ divides
$A_i$ for all $1\leqslantqslant i\leqslantqslant s$.
$\Box$
Let $A_i=F_i A_i'$ for some $A_i'\in R$. We can rewrite
equation~\eqref{EQ:20160715-101} as
$$
G F= \sum_{1\leqslantqslant i\leqslantqslant s} A_i' \frac{F^2}{F_i}+ \sum_{1\leqslantqslant i< j\leqslantqslant s}
B_{i,j} \frac{F^2}{F_iF_j}.
$$
Dividing both sides by $F$, we obtain
$$
G=\sum_{1\leqslantqslant i\leqslantqslant s} A_i' \frac{F}{F_i}+ \sum_{1\leqslantqslant i<
j\leqslantqslant s} B_{i,j} \frac{F}{F_iF_j}
$$
proving that $G$ is in
$\big\lambdangle F_{i_1}\cdots F_{i_{s-2}} \mid 1\leqslantqslant i_1 < \dots <
i_{s-2} \leqslantqslant s \big\longrightarrowngle$.
\noindent
$\ref{colon-lemma-ii}$ If $G \in I_{2,\mathcal{F}}^2 \cap \lambdangle F\longrightarrowngle$, then
$G = G'F \in I_{2,\mathcal{F}}^2$. So
$G' \in [I_{2,\mathcal{F}}^2:F]$, and thus
$G = FG' \in F[I_{2,\mathcal{F}}^2:F]$.
Conversely, if $H \in F[I_{2,\mathcal{F}}^2:F]$, we have $H = FH'$
with $H' \in [I_{2,\mathcal{F}}^2:F]$. It is then immediate that
$H \in I_{2,\mathcal{F}}^2 \cap \lambdangle F\longrightarrowngle$, which completes the proof of
this lemma.
\end{proof}
\begin{thm}\label{T:20160710-802}
Let $I_{2,\mathcal{F}}$ be the defining ideal of a star
configuration of codimension two in $\mathbb{P}^n$. Assume
$\mathcal{F}=\{F_1,\ldots,F_s\}$, where $F_1,\ldots,F_s$ are forms
of degrees $1\leqslantqslant d_1\leqslantqslant \cdots \leqslantqslant d_s$, and let
$d=d_1+\cdots+d_s$. Then a graded minimal free resolution of
$I_{2,\mathcal{F}}^{(2)}$ has the form
\begin{equation*}
    0 \to \bigoplus_{1\leqslant i\leqslant s} R(-(2d-d_i)) \to \left(
      \bigoplus_{1\leqslant i\leqslant s}R(-(2d-2d_i)) \right) \oplus
    R(-d)\to I_{2,\mathcal{F}}^{(2)} \to 0.
\end{equation*}
\end{thm}
\begin{proof}
Let $F = F_1\cdots F_s$. Thanks to Corollary
\ref{cor:general-symb-square}, there is a short exact sequence
\begin{equation*}
    0 \to I_{2,\mathcal{F}}^2 \cap \langle F\rangle
    \to I_{2,\mathcal{F}}^2 \oplus \langle F\rangle
    \to I_{2,\mathcal{F}}^{(2)} \to 0.
\end{equation*}
We proceed to describe a minimal free resolution of the left term.
  By Lemma~\ref{L:20160715-602} $\ref{colon-lemma-i}$, $[I_{2,\mathcal{F}}^2:F] = I_{3,\mathcal{F}}$. By \cite[Theorem 3.4]{PS:1}, a minimal free resolution of $I_{3,\mathcal{F}}$ has the form
\begin{equation*}
    0 \to R^{\binom{s-1}{2}}(-d) \to
    {\displaystyle\bigoplus_{1\leqslant i\leqslant s}}R^{s-2}(-(d-d_i))\to
    {\displaystyle\bigoplus_{1\leqslant i<j\leqslant s} R(-(d-(d_i+d_j)))} \to I_{3,\mathcal{F}} \to 0.
\end{equation*}
By Lemma~\ref{L:20160715-602} $\ref{colon-lemma-ii}$, we have
  $I_{2,\mathcal{F}}^2 \cap \langle F\rangle=F[I_{2,\mathcal{F}}^2:F]
= F I_{3,\mathcal{F}}$. Since $F$ has degree $d$, to obtain a
minimal free resolution of $F I_{3,\mathcal{F}}$ it is enough to add
$d$ to the degrees of the generators of the free modules in the
resolution above. More explicitly, a minimal free resolution of
  $I_{2,\mathcal{F}}^2 \cap \langle F\rangle$ has the form
\begin{equation*}
    0 \to R^{\binom{s-1}{2}}(-2d) \to
    {\bigoplus_{1\leqslant i\leqslant s}}R^{s-2}(-(2d-d_i))\to
    {\bigoplus_{1\leqslant i<j\leqslant s} R(-(2d-(d_i+d_j)))} \to I_{2,\mathcal{F}}^2 \cap \langle F\rangle \to 0.
\end{equation*}
Next we describe a minimal free resolution of the middle term.
  We found a minimal free resolution for $I^2_{2,\mathcal{F}}$ in Lemma~\ref{L:20170926-501}. Since $0\to R(-d) \to \langle F\rangle \to 0$ is a minimal free resolution of $\langle F\rangle$, we can take a direct sum of the resolutions of the two ideals to obtain the complex
\begin{equation*}
    0\to R^{\binom{s-1}{2}}(-2d)
    \to \bigoplus_{1\leqslant i\leqslant s}R^{s-1}(-(2d-d_i)) \to
    \left(\bigoplus_{1\leqslant i\leqslant j\leqslant s} R(-(2d-(d_i+d_j)))\right) \oplus R(-d),
\end{equation*}
  which is a minimal free resolution of $I_{2,\mathcal{F}}^2 \oplus \langle F\rangle$.
Our goal is to describe a minimal free resolution of the right term in the short exact sequence. Using a mapping cone construction \cite{HS}, we obtain a free resolution of $I_{2,\mathcal{F}}^{(2)}$ of the form
\begin{multline*}
    0 \to R^{\binom{s-1}{2}}(-2d) \to
    \left(\bigoplus_{1\leqslant i\leqslant s}R^{s-2}(-(2d-d_i)) \right)
    \oplus R^{\binom{s-1}{2}}(-2d) \to\\
    \to \left(\bigoplus_{1\leqslant i<j\leqslant s} R(-(2d-(d_i+d_j)))\right) \oplus
    \left( \bigoplus_{1\leqslant i\leqslant s}R^{s-1}(-(2d-d_i))\right) \to\\
    \to \left(\bigoplus_{1\leqslant i\leqslant j\leqslant s} R(-(2d-(d_i+d_j)))\right) \oplus R(-d).
\end{multline*}
The ideal $I_{2,\mathcal{F}}^{(2)}$ is Cohen-Macaulay by
\cite[Corollary 3.7]{GHMN}. In particular, a graded minimal free
resolution of $I_{2,\mathcal{F}}^{(2)}$ has length 1. Hence the
$R^{\binom{s-1}{2}}(-2d)$ at the end of the resolution must cancel
out the $R^{\binom{s-1}{2}}(-2d)$ in the penultimate module. In
addition, $\bigoplus_{1\leqslantqslant i\leqslantqslant s} R^{s-2}(-(2d-d_i))$ must cancel
with part of $\bigoplus_{1\leqslantqslant i\leqslantqslant s} R^{s-1}(-(2d-d_i))$ in
homological degree two. After these cancellations, we are left with
the smaller resolution
\begin{multline*}
    0\to \left(\bigoplus_{1\leqslant i<j\leqslant s} R(-(2d-(d_i+d_j)))\right)
    \oplus
    \left(\bigoplus_{1\leqslant i\leqslant s}R(-(2d-d_i)) \right) \to\\
    \to \left(\bigoplus_{1\leqslant i\leqslant j\leqslant s} R(-(2d-(d_i+d_j)))\right)
    \oplus R(-d)
\end{multline*}
of $I_{2,\mathcal{F}}^{(2)}$.
By Corollary \ref{cor:general-symb-square},
$I_{2,\mathcal{F}}^{(2)}$ has exactly one generator of degree $d$,
namely $F$, and the rest of the generators have degrees
  $2d-(d_i+d_j)$. The generators of degree $2d-(d_i+d_j)$ with
  $i\neq j$ are redundant since they are multiples of $F$. Each
redundant generator gives rise to a relation of degree
$2d-(d_i+d_j)$ that expresses the redundant generator in terms of
the minimal ones. As such, we can remove these redundant generators
along with the corresponding relations. We are then left with
\begin{equation*}
    0 \to \bigoplus_{1\leqslant i\leqslant s} R(-(2d-d_i)) \to \left(
      \bigoplus_{1\leqslant i\leqslant s}R(-(2d-2d_i)) \right) \oplus
    R(-d)\to I_{2,\mathcal{F}}^{(2)} \to 0.
\end{equation*}
This resolution must now be minimal since no further cancellation is
possible.
\end{proof}
\begin{rem}\label{generalize}
If $I_{2,\mathcal{L}}$ defines a linear star configuration in $\mathbb{P}^n$,
our formula agrees with the formula of \cite[Theorem 3.2]{GHM} with
$c=2$; thus Theorem \ref{T:20160710-802} is a generalization of \cite{GHM}
in the sense that the star configuration
need not be linear.
\end{rem}
\section{General sets of points}
In this section, we study general sets ${\mathbb X}$ of
points in $\mathbb{P}^2$. Specifically,
we characterize when $\sdef(I_{\mathbb X},2) = 1$.
Recall that a property holds for {\em a general set} of $s$ points in ${\mathbb P}^n$
if the subset of $({\mathbb P}^n)^s$ for which it holds contains a nonempty open subset.
If $\mathbb{X} \subseteq \mathbb{P}^n$ is a general set of points, then $\mathbb{X}$ has the
{\it generic Hilbert function},
that is,
\[
{\text {\bf H}}_{R/I_\mathbb{X}}(i) = \min\{\dim_\Bbbk R_i, |\mathbb{X}|\}
~~\mbox{for all $i \geqslantqslant 0$}.
\]
The key ingredient that we require is the following
famous result of Alexander and Hirschowitz which
computes the Hilbert function of $R/I_{\mathbb{X}}^{(2)}$
when $\mathbb{X}$ is a set of general points in
$\mathbb{P}^n$ (we have specialized their
result to $\mathbb{P}^2$). Roughly speaking, except
if $s =2$ or $5$, the Hilbert function of $R/I_{\mathbb{X}}^{(2)}$ is the
generic Hilbert function of $3|\mathbb{X}|$ points.
\begin{thm}[{\cite[Theorem 2]{AH:1}}]
\label{AH}
Let $\mathbb{X}$ be a set of $s$ general points
in $\mathbb{P}^2$. If $s \neq 2,5$, then
\[{\text {\bf H}}_{R/I_{\mathbb{X}}^{(2)}}(i) = \min\{\dim_\Bbbk R_i, 3s \} ~~\mbox{for
all $i \geqslantqslant 0$.}\]
If $s = 5$, then
\[{\text {\bf H}}_{R/I_{\mathbb{X}}^{(2)}}(i) =
\begin{cases}
\min\{\dim_\Bbbk R_i, 3s \} & i \neq 4 \\
14 & i = 4.
\end{cases}
\]
\end{thm}
In fact, the graded minimal free resolution of
$I_{\mathbb X}$ and $I_{\mathbb X}^{(2)}$ for $s$ general
points in $\mathbb{P}^2$ is known. The resolution of $I_{\mathbb X}$ and $I_{\mathbb X}^{(2)}$
is the cumulative work of many people.
For sets ${\mathbb X}$ of simple points, the minimal resolution of $I_{\mathbb X}$
was worked out by Geramita and Maroscia \cite{GM}, Geramita,
Gregory, and Roberts \cite{GGR}, and Lorenzini \cite{L}.
For $I^{(2)}_{\mathbb X}$, Catalisano's work \cite{C} determines the resolution
of $I^{(2)}_{\mathbb X}$ for $s\leqslant 5$, while Id\`a \cite{I} handles the case of $s>5$
(thereby recovering known results when $s\leqslant 9$, and proving a conjecture
of Harbourne \cite[Conjecture 6.3]{H1} in the special case of $m=2$).
We record only the consequences we need.
\begin{lem}\label{specialcases}
Let ${\mathbb X}$ be a set of $s$ general points in $\mathbb{P}^2$.
\begin{enumerate}
\item[$(i)$] If $s = 5$, then the graded minimal free resolution
of $I_{\mathbb X}$, respectively $I_{\mathbb X}^{(2)}$, is
\[0 \longrightarrow R^2(-4) \longrightarrow R(-2) \oplus R^2(-3)
\longrightarrow I_{\mathbb{X}} \longrightarrow 0, ~~\mbox{respectively}\]
\[0 \rightarrow R^2(-6) \oplus R(-7) \rightarrow R(-4) \oplus R^3(-5)
\rightarrow I_{\mathbb X}^{(2)} \rightarrow 0.\]
\item[$(ii)$]
If $s = 7$, then the graded minimal free resolution
of $I_{\mathbb X}$, respectively $I_{\mathbb X}^{(2)}$, is
\[0 \rightarrow R(-4) \oplus R(-5) \rightarrow R^3(-3) \rightarrow I_{\mathbb X}
\rightarrow 0, ~~\mbox{respectively}\]
\[0 \rightarrow R^6(-7) \rightarrow R^7(-6) \rightarrow I_{\mathbb X}^{(2)} \rightarrow 0.\]
\item[$(iii)$]
If $s = 8$, then the graded minimal free resolution
of $I_{\mathbb X}$, respectively $I_{\mathbb X}^{(2)}$, is
\[0 \rightarrow R^2(-5) \rightarrow R^2(-3) \oplus R(-4) \rightarrow I_{\mathbb X}
\rightarrow 0, ~~\mbox{respectively}\]
\[0 \rightarrow R^3(-8) \rightarrow R^4(-6) \rightarrow I_{\mathbb X}^{(2)} \rightarrow 0.\]
\item[$(iv)$]
If $s = 9$, then the graded minimal free resolution
of $I_{\mathbb X}$, respectively $I_{\mathbb X}^{(2)}$, is
\[0 \rightarrow R^3(-5) \rightarrow R(-3)\oplus R^3(-4)
\rightarrow I_{\mathbb X}
\rightarrow 0, ~~\mbox{respectively}\]
\[0 \rightarrow R^6(-8) \rightarrow R(-6) \oplus R^6(-7)
\rightarrow I_{\mathbb{X}}^{(2)} \rightarrow 0.\]
\end{enumerate}
\end{lem}
We now present the main result of this section.
\begin{thm}\label{generalpoints}
Let ${\mathbb X}$ be a set of $s$ general points in $\mathbb{P}^2$.
Then
\begin{enumerate}
\item[$(i)$] $\sdef(I_{\mathbb X},2) = 0$ if and only if $s = 1,2$ or
$4$.
\item[$(ii)$] $\sdef(I_{\mathbb X},2) =1$ if and only if $s =3, 5, 7$,
or $8$.
\item[$(iii)$] $\sdef(I_{\mathbb X},2) > 1$ if and only if $s=6$ or $s \geqslantqslant 9$.
\end{enumerate}
\end{thm}
\begin{proof}
By Theorem \ref{completeintersection}, $\sdef(I_{\mathbb X},2) = 0$ if and
only if ${\mathbb X}$ is a complete intersection. But a set of $s$
general points is a complete intersection if and only if $s =1,2,$ or $4$
(e.g., \cite[Exercise 11.9]{Harris}). This proves $(i)$.
We next consider the special cases of $s=3,5,6,7,8,9$.
If $s = 3$, then $\mathbb{X}$ is also a linear star configuration.
Indeed, for each pair of points $P_i,P_j$ with $i \neq j$, take the unique
line $L_{i,j}$ through those two points. Then $I_{\mathbb X} = I_{2,\mathcal{L}}$ where
$\mathcal{L} = \{L_{1,2},L_{1,3},L_{2,3}\}$.
Then $\sdef(I_{\mathbb X},2) = 1$ by Theorem
\ref{sdefect-one-classify}.
  For the cases $s = 5,6,7,$ and $9$, we first observe that
  \[\dim_\Bbbk(I_\mathbb{X}^{(2)}/I_\mathbb{X}^2)_{\alpha(I_\mathbb{X}^{(2)}/I_\mathbb{X}^2)}
  \leqslant \sdef(I_{\mathbb X},2) \leqslant \sum_{t \geqslant 0} \dim_\Bbbk (I_\mathbb{X}^{(2)}/I_\mathbb{X}^2)_t.
  \]
We can use Theorem \ref{resix2} and Lemma \ref{specialcases} to
find the Hilbert functions of $R/I_{{\mathbb X}}^{(2)}$ and $R/I_{\mathbb X}^2$ for
  $s = 5,6,7,$ and $9$. In these four cases, we will find
  that $\dim_\Bbbk (I_\mathbb{X}^{(2)}/I_\mathbb{X}^2)_t = 0$ for all $t \neq
  \alpha(I_\mathbb{X}^{(2)}/I_\mathbb{X}^2)$, and consequently the
  above inequalities give
  \[\dim_\Bbbk(I_\mathbb{X}^{(2)}/I_\mathbb{X}^2)_{\alpha(I_\mathbb{X}^{(2)}/I_\mathbb{X}^2)}
  = \sdef(I_{\mathbb X},2).\]
Furthermore, we can use these Hilbert functions to compute
the symbolic defect; precisely,
\[\sdef(I_{\mathbb X},2) = \begin{cases}
1 & \mbox{if $s = 5,7$} \\
3 & \mbox{if $s = 6,9$}.
\end{cases}
\]
When $s = 8$, the Hilbert functions of $I_{\mathbb X}^{(2)}$ and $I_{\mathbb X}^{2}$ disagree
in two degrees, so the above approach does not work.
Instead, if $s =8$, then Lemma \ref{specialcases} $(iii)$ implies that
$\alpha(I_{\mathbb X}) = 3$ and $I_{\mathbb X}$ has two minimal generators
of degree 3. So, $I_{\mathbb X}^2$ has three minimal generators of degree 6.
By Lemma \ref{specialcases} $(iii)$,
$\alpha(I_{{\mathbb X}}^{(2)}) = 6$ and $I_{\mathbb X}^{(2)}$ has four minimal generators
of degree 6.
So, there exists a form $F \in (I_{\mathbb X}^{(2)})_6 \setminus (I_{\mathbb X}^2)_6$.
But since $I_{\mathbb X}^{(2)}$ is generated by
these four generators of degree 6,
$I_{\mathbb{X}}^{(2)} = \lambdangle F \longrightarrowngle + I_{\mathbb{X}}^2$,
that is, $\sdef(I_{\mathbb X},2) = 1$.
  Going forward, we now assume that $s \geqslant 10$. Our goal is to
  show that $\sdef(I_{\mathbb X},2) > 1$. To do this, we will first show
  that if $\sdef(I_{\mathbb X},2) =1$ and $F$ is any homogeneous form
  such that $I_{\mathbb X}^{(2)} = \langle F \rangle + I_{\mathbb X}^2$, then
  the degree of $F$ is restricted.
  Below, $\alpha = \alpha(I_{\mathbb X})$.
\noindent
{\it Claim.} If $s \geqslant 10$ and $I_{\mathbb X}^{(2)} = \langle F \rangle + I_{\mathbb X}^2$,
then ${\rm deg}{F} \geqslant 2\alpha -1$.
\noindent
{\it Proof of Claim.} Suppose that $d = {\rm deg} F \leqslantqslant 2\alpha -2$.
Because $s \geqslantqslant 10$,
\[{\text {\bf H}}_{R/I_{\mathbb{X}}^{(2)}}(d) = {\text {\bf H}}_{R/I_{\mathbb{X}}^{(2)}}(d+1) = 3|\mathbb{X}|
~~\mbox{by Theorem \ref{AH}}.\]
  On the other hand, since $I_{\mathbb{X}}^{(2)} = \langle F \rangle
  + I_{\mathbb{X}}^2$ and ${\rm deg} F \leqslant 2\alpha-2$, we have
  \[\dim_\Bbbk (I_{\mathbb{X}}^{(2)})_d = \dim_\Bbbk \langle F \rangle_d = 1 ~~~\mbox{and}
  ~~\dim_\Bbbk (I_{\mathbb{X}}^{(2)})_{d+1} = \dim_\Bbbk \langle F \rangle_{d+1} = 3.\]
But this then means that
\begin{eqnarray*}
\binom{d+2}{2} - 1 &= & {\text {\bf H}}_{R/I_{\mathbb{X}}^{(2)}}(d) = 3|\mathbb{X}|
= {\text {\bf H}}_{R/I_{\mathbb{X}}^{(2)}}(d+1) = \binom{d+3}{2} - 3.
\end{eqnarray*}
So $d$, the degree of $F$, would have to satisfy
\[
\binom{d+2}{2} - \binom{d+3}{2} +2 = 0 \Leftrightarrow d = 0.
\]
But ${\rm deg} F > 0$. So, ${\rm deg} F \geqslantqslant 2\alpha-1$.
$\Box$
Now suppose that $s \geqslantqslant 10$ and $\sdef(I_{\mathbb X},2) = 1$.
Consequently, there is a homogeneous form $F$
such that $I_{\mathbb X}^{(2)} = \lambdangle F \longrightarrowngle + I_{\mathbb X}^2$,
and furthermore, by the above claim, ${\rm deg} F \geqslantqslant 2\alpha-1$
where $\alpha = \alpha(I_{\mathbb X})$.
We now consider the cases ${\rm deg} F = 2\alpha-1$,
and ${\rm deg} F \geqslantqslant 2\alpha$ separately.
\noindent
{\it Case 1.}  $I_{\mathbb X}^{(2)} = \langle F \rangle + I_{\mathbb X}^2$
with ${\rm deg} F = 2\alpha -1$.
\noindent
If ${\rm deg} F = 2\alpha-1$, then we first claim that $\alpha \leqslantqslant 11$.
Indeed, by Theorem \ref{AH}
\[{\text {\bf H}}_{R/I_{\mathbb{X}}^{(2)}}(2\alpha-2) = \binom{2\alpha}{2} \leqslant 3|\mathbb{X}|
= {\text {\bf H}}_{R/I_{\mathbb{X}}^{(2)}}(2\alpha-1) = \binom{2\alpha+1}{2} -1\]
where the last equality follows from the fact that $I_{\mathbb{X}}^{(2)}$
has exactly one generator of degree $2\alpha-1$. On the other hand,
we know that $|\mathbb{X}| < \binom{\alpha+2}{2}$ since
$s$ general points have the generic Hilbert function,
so $\alpha$ is by definition the smallest number $i$ such that $\binom{i+2}{2} >
s = |\mathbb{X}|$. Combining these inequalities, we have
\[\binom{2\alpha}{2} \leqslant 3|\mathbb{X}| < 3\binom{\alpha+2}{2},\]
or equivalently, $\alpha$ must satisfy
\[
\frac{2\alpha(2\alpha-1)}{2}-\frac{3(\alpha+2)(\alpha+1)}{2} < 0
\Leftrightarrow \alpha^2 -11\alpha-6 < 0.
\]
But the last inequality only holds if $\alpha \leqslantqslant 11$.
Since we are also assuming that $|{\mathbb X}| \geqslantqslant 10$,
we have $4 \leqslantqslant \alpha \leqslantqslant 11$.
  As we noted above, if ${\rm deg} F = 2\alpha-1$,
  then $3|\mathbb{X}| = \binom{2\alpha+1}{2}-1$ must also be satisfied.
  Via a direct calculation, we see that $\binom{2\alpha+1}{2}-1$
  is divisible by $3$ with $4 \leqslant \alpha \leqslant 11$ if and only
  if $\alpha = 5, 8, 11$.
  So, if $3|{\mathbb X}| = \binom{2\alpha+1}{2}-1$ with $4 \leqslant \alpha \leqslant 11$,
  then we have $|{\mathbb X}| = 54/3 = 18$, or $|{\mathbb X}| = 135/3 = 45$, or $|{\mathbb X}| = 252/3 = 84$.
In all other cases, we cannot have
$I_{\mathbb{X}}^{(2)} = \lambdangle F \longrightarrowngle + I_{\mathbb{X}}^2$.
However, if $|{\mathbb X}| = 45$, then $\alpha(I_{\mathbb X}) = 9$, not $8$.
Also, if $|{\mathbb X}| = 84$, then $\alpha(I_{\mathbb X}) = 12$, not $11$. If
$|{\mathbb X}| = 18$, then $\alpha(I_{\mathbb X}) = 5$. So we need
a separate argument to show that $I_{\mathbb{X}}^{(2)} \neq \lambdangle F \longrightarrowngle + I_{\mathbb{X}}^2$.
So, let $s = 18$ with $\alpha = 5$ and suppose that
  $I_{\mathbb{X}}^{(2)} = \langle F \rangle + I_{\mathbb{X}}^2$ with ${\rm deg}{F} = 9$.
  Then $(I_{\mathbb{X}}^{(2)})_{10} = (\langle F \rangle + I_{\mathbb{X}}^2)_{10}$. Now
by Theorem \ref{AH}, $\dim_\Bbbk (I_{\mathbb{X}}^{(2)})_{10} = 12$.
On the other hand, $I_{\mathbb{X}}$ has three generators of degree $\alpha = 5$,
so by Lemma \ref{nummingens}, $I_{\mathbb{X}}^2$ has six generators of
degree $2\alpha = 10$, and no smaller generators. So
  $\dim_\Bbbk (\langle F \rangle + I_{\mathbb{X}}^2)_{10} \leqslant \dim_\Bbbk (\langle F \rangle)_{10} +
  \dim_\Bbbk (I_{\mathbb{X}}^2)_{10} = 3+6 = 9$. So, by a dimension count, we cannot
  have $I_{\mathbb{X}}^{(2)} = \langle F \rangle + I_{\mathbb{X}}^2$.
  To summarize this case, if $s \geqslant 10$, there is no set
  of $s$ general points with
  $I_{\mathbb X}^{(2)} = \langle F \rangle + I_{\mathbb X}^2$ and ${\rm deg} F = 2\alpha-1$.
\noindent
{\it Case 2.} $I_{\mathbb X}^{(2)} = \langle F \rangle + I_{\mathbb X}^2$ with ${\rm deg} F \geqslant 2\alpha$.
\noindent
If ${\rm deg} F \geqslantqslant 2\alpha$, then we claim that $\alpha \leqslantqslant 7$.
Indeed, since $I_{\mathbb{X}}^{(2)}$ will be generated by forms of
degree $2\alpha$ or larger, we have
\[{\text {\bf H}}_{R/I_{\mathbb{X}}^{(2)}}(2\alpha-1) = \binom{2\alpha+1}{2} \leqslantqslant 3|\mathbb{X}|. \]
On the other hand, $|\mathbb{X}| < \binom{\alpha+2}{2}$. Combining these
two inequalities gives
\[\binom{2\alpha+1}{2} \leqslantqslant 3|\mathbb{X}| < 3\binom{\alpha+2}{2}.\]
So, $\alpha$ must satisfy
\[(2\alpha+1)(2\alpha) < 3(\alpha+2)(\alpha+1) \Leftrightarrow \alpha^2-7\alpha-6 < 0 \Leftrightarrow \alpha \leqslantqslant 7.\]
Moreover, because $s \geqslantqslant 10$, we have $4 \leqslantqslant \alpha \leqslantqslant 7$,
or equivalently, $10 \leqslantqslant s = |\mathbb{X}| \leqslantqslant 35$.
Let $d = \binom{\alpha+2}{2} - |\mathbb{X}|$, that is, $d$ is the number
of minimal generators of $I_{\mathbb{X}}$ of degree $\alpha$.
  If ${\rm deg} F = 2\alpha$ and $I_{\mathbb{X}}^{(2)} = \langle F \rangle + I_{\mathbb{X}}^2$,
  then $I_{\mathbb{X}}^{(2)}$ has $\binom{d+1}{2}+1$ minimal generators of degree $2\alpha$.
  If ${\rm deg} F > 2\alpha$ and $I_{\mathbb{X}}^{(2)} = \langle F \rangle
  + I_{\mathbb{X}}^2$, then
  $I_{\mathbb{X}}^{(2)}$ has $\binom{d+1}{2}$ minimal generators of degree $2\alpha$.
So, we will have
\[{\text {\bf H}}_{R/I_{\mathbb{X}}^{(2)}}(2\alpha) = 3|\mathbb{X}| =
\left\{
\begin{array}{ll}
\displaystyle\binom{2\alpha+2}{2} -\binom{d+1}{2}- 1 & \mbox{if ${\rm deg} F = 2\alpha$},\\[2ex]
\displaystyle\binom{2\alpha+2}{2} -\binom{d+1}{2} & \mbox{if ${\rm deg} F > 2\alpha$}.
\end{array}
\right.
\]
Thus, to summarize,
  if $I_{\mathbb{X}}^{(2)} = \langle F \rangle + I_{\mathbb{X}}^2$ with
  ${\rm deg} F \geqslant 2\alpha$, then
  \begin{enumerate}
  \item[$(a)$] $10 \leqslant |\mathbb{X}| \leqslant 35$,
  \item[$(b)$] $\binom{2\alpha+1}{2} \leqslant 3|\mathbb{X}| < \binom{2\alpha+2}{2}$,
and
\item[$(c)$] either
$3|\mathbb{X}| =
\binom{2\alpha+2}{2} -\binom{d+1}{2}- 1$ or $3|\mathbb{X}| =
\binom{2\alpha+2}{2} -\binom{d+1}{2}$ must hold with
$d = \binom{\alpha+2}{2} - |\mathbb{X}|$.
\end{enumerate}
A direct computation for each value $10 \leqslantqslant |{\mathbb X}| \leqslantqslant 35$ shows
that no value of $|{\mathbb X}|$ satisfies both of $(b)$ and $(c)$.
Table \ref{table2} explicitly verifies this statement;
note that in the table, (T) denotes true
and (F) denotes false.
\footnotesize
\begin{table}[h!]
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$|\mathbb{X}|$ & $\alpha = \alpha(I_{\mathbb{X}})$ &
$\binom{2\alpha+1}{2} \leqslantqslant 3|\mathbb{X}| < \binom{2\alpha+2}{2}$ & $d$ & $\binom{2\alpha+2}{2} -\binom{d+1}{2}- 1
= 3|\mathbb{X}|$
& $\binom{2\alpha+2}{2} -\binom{d+1}{2} = 3|\mathbb{X}|$\\
\hline
\hline
10 & 4 & $36 \leqslantqslant 30 < 45$ (F) & & & \\
11 & 4 & $36 \leqslantqslant 33 < 45$ (F) & & & \\
12 & 4 & $36 \leqslant 36 < 45$ (T) & 3& $38 = 36$ (F)& $39 =36$ (F)\\
13 & 4 & $36 \leqslantqslant 39 < 45$ (T) & 2& $41 = 39$ (F)& $42 =39$ (F) \\
14 & 4 & $36 \leqslantqslant 42 < 45$ (T) & 1& $43 = 42$ (F)& $44 =42$ (F)\\
\hline
\hline
15 & 5 & $55 \leqslantqslant 45 < 66$ (F) & & & \\
16 & 5 & $55 \leqslantqslant 48 < 66$ (F) & & & \\
17 & 5 & $55 \leqslantqslant 51 < 66$ (F) & & & \\
18 & 5 & $55 \leqslantqslant 54 < 66$ (F) & & & \\
19 & 5 & $55 \leqslantqslant 57 < 66$ (T) & 2& $62 =57$ (F)& $63=57$ (F)\\
20 & 5 & $55 \leqslantqslant 60 < 66$ (T) & 1& $64 =60$ (F)& $65=60$ (F)\\
\hline
\hline
21 & 6 & $78 \leqslantqslant 63 < 91$ (F) & & & \\
22 & 6 & $78 \leqslantqslant 66 < 91$ (F) & & & \\
23 & 6 & $78 \leqslantqslant 69 < 91$ (F) & & & \\
24 & 6 & $78 \leqslantqslant 72 < 91$ (F) & & & \\
25 & 6 & $78 \leqslantqslant 75 < 91$ (F) & & & \\
26 & 6 & $78 \leqslantqslant 78 < 91$ (T) &2 &$87=78$ (F) &$88 =78$ (F) \\
27 & 6 & $78 \leqslantqslant 81 < 91$ (T) &1 &$89=81$ (F) &$90=81$ (F) \\
\hline
\hline
28 & 7 & $105 \leqslantqslant 84 < 120$ (F) & & & \\
29 & 7 & $105 \leqslantqslant 87 < 120$ (F) & & & \\
30 & 7 & $105 \leqslantqslant 90 < 120$ (F) & & & \\
31 & 7 & $105 \leqslantqslant 93 < 120$ (F) & & & \\
32 & 7 & $105 \leqslantqslant 96 < 120$ (F) & & & \\
33 & 7 & $105\leqslantqslant 99 < 120$ (F) & & & \\
34 & 7 & $105 \leqslantqslant 102 < 120$ (F) & & & \\
35 & 7 & $105\leqslantqslant 105 < 120$ (T) &1 &$118=105$ (F) &$119=105$ (F) \\
\hline
\end{tabular}
\\
\caption{Comparing inequalities and equalities with ${\rm deg} F \geqslantqslant 2\alpha$}\lambdabel{table2}
\end{table}
\normalsize
To summarize this case, if $s \geqslant 10$, there is no set
of $s$ general points with
$I_{\mathbb X}^{(2)} = \langle F \rangle + I_{\mathbb X}^2$ and ${\rm deg} F \geqslant 2\alpha$. Thus,
combining this case with the previous case, we see that
if $s \geqslant 10$, then $\sdef(I_{\mathbb X},2) > 1$,
thus completing the proof.
\end{proof}
\begin{rem}
The special case $s=6$ in Theorem \ref{generalpoints} can also
be explained by appealing to Theorem \ref{partialconverse}. The ideal
$I_{\mathbb X}$ of six general points in $\mathbb{P}^2$ has a linear resolution.
So, if $\sdef(I_{\mathbb X},2) =1$, then
the six points must be a linear star configuration by Theorem \ref{partialconverse}, and in particular,
three of the six points must be on the same line. But
six general points cannot form a star configuration, since no three of
the six points lie on a line.
\end{rem}
\begin{exmp}\label{8points}
As mentioned in the introduction, there are many questions
one can ask about the symbolic defect sequence. We end this
section with an example to show that the symbolic defect sequence
need not be a non-decreasing sequence. Consider the ideal
  $I_{\mathbb X}$ when ${\mathbb X}$ is a set of eight general
  points in $\mathbb{P}^2$. Using \texttt{Macaulay2} \cite{Mt},
  we found that the symbolic defect sequence $\{\sdef(I_{\mathbb X},m)\}_{m=1}^{\infty}$
begins
\[0, 1, 3, 6, 10, 9, 7\]
and thus, the symbolic defect sequence can decrease. Understanding the
long term behavior of this sequence would be of interest.
\end{exmp}
\end{document}
\begin{document}
\title{A stochastic algorithm for deterministic multistage optimization problems}
\begin{abstract}
Several attempts to dampen the curse of dimensionality problem of the Dynamic
Programming approach for solving multistage optimization problems have been
investigated. One popular way to address this issue is the Stochastic Dual
Dynamic Programming method (SDDP) introduced by Pereira and Pinto in 1991 for
Markov Decision Processes. Assuming that the value function is convex (for a
minimization problem), one builds a non-decreasing sequence of lower (or
outer) convex approximations of the value function. Those convex
approximations are constructed as a supremum of affine cuts.
On continuous time deterministic optimal control problems, assuming that the
value function is semiconvex, Zheng Qu, inspired by the work of McEneaney,
introduced in 2013 a stochastic max-plus scheme that builds upper (or inner)
non-increasing approximations of the value function.
In this note, we build a common framework for both the SDDP and a discrete
time version of Zheng Qu's algorithm to solve deterministic multistage
optimization problems. Our algorithm generates monotone approximations of the
value functions as a pointwise supremum, or infimum, of basic (affine or
quadratic for example) functions which are randomly selected. We give
sufficient conditions on the way basic functions are selected in order to
ensure almost sure convergence of the approximations to the value function on
a set of interest.
\end{abstract}
\section{Introduction}
Throughout this paper, we aim to study a deterministic optimal control problem
with discrete time. Informally, given a time $t$ and a state $x_t \in \mathbb{X}$, one
can apply a control $u_t \in \mathbb{U}$ and the next state is given by the dynamic
$f_t$, that is $x_{t+1} = f_t \left( x_t, u_t \right)$. Then, one wants to
minimize the sum of costs $c_t\left( x_t, u_t \right)$ induced by the controls
starting from a given state $x_0$ and during a given time horizon
$T$. Furthermore, one can add some final restrictions on the states at time $T$
which will be modeled by an additional cost function $\psi$ depending only on
the final state $x_T$. We will call such optimal control problems
\emph{multistage optimization problems}, and \emph{switched multistage
optimization problems} if the controls are both continuous and discrete:
\begin{subequations}
\label{MultistageProblem}
\begin{align}
\min_{ \substack{x=(x_0, \ldots, x_T)\in \mathbb{X}^{T+1} \\ u = (u_0, \ldots u_{T-1})\in \mathbb{U}^T }}
& \sum_{t=0}^{T-1} c_t \np{x_t, u_t} + \psi(x_T) \\
\text{ s.t. }
& \forall t \in \ce{0,T{-}1} \; , \enspace
\ x_{t+1} = f_t\np{x_t, u_t} \text{ and } x_0 \in \mathbb{X} \ \text{ given}\; .
\end{align}
\end{subequations}
One can solve the multistage Problem~\eqref{MultistageProblem} by Dynamic
Programming as introduced by Richard Bellman around
1950~\cite{Be1954,Dr2002}. This method breaks the multistage
Problem~\eqref{MultistageProblem} into $T$ sub-problems that one can solve by
backward recursion over time. More precisely, denoting by
$\mathcal{B}_t : \overline{\mathbb{R}}^\mathbb{X} \to \overline{\mathbb{R}}^\mathbb{X}$ the operator from the set of
functions over $\mathbb{X}$ that may take infinite values to itself, defined by
\begin{equation}
\label{BellmanOperator}
\mathcal{B}_t\np{\phi} : x\mapsto \min_{u\in \mathbb{U}} \Bp{c_t(x,u) + \phi\bp{f_t(x,u)}}\; ,
\end{equation}
one can show (see for example~\cite{Be2016}) that solving Problem~\eqref{MultistageProblem} amounts to solving the following sequence of sub-problems:
\begin{equation}\label{DynamicProgramming}
V_T = \psi \quad \text{ and }\quad \forall t\in \ce{0,T-1} \quad V_t = \mathcal{B}_t(V_{t+1})\; .
\end{equation}
We will call each operator $\mathcal{B}_t$ the \emph{Bellman operator} at time $t$ and
each equation in~\eqref{DynamicProgramming} will be called the \emph{Bellman
equation} at time $t$. Lastly, the function $V_t$ defined in
Equation~\eqref{DynamicProgramming} will be called the (Bellman) \emph{value
function} at time $t$. Note that the value of Problem~\eqref{MultistageProblem} is
equal to the value function $V_0$ at point $x_0$, that is
$V_0 \left( x_0 \right)$, whereas solving the sequence of
sub-problems given by Equation~\eqref{DynamicProgramming} means computing the value functions
$V_t$ at each point $x\in\mathbb{X}$ and time $t\in \ce{0,T{-}1}$.
We will state several assumptions on these operators in Section~\ref{sec:lAlgorithme}
under which we will devise an algorithm to solve the system of Bellman Equation~\eqref{DynamicProgramming},
also called the Dynamic Programming formulation of
the multistage problem. Let us stress that although we want to solve
the multistage Problem~\eqref{MultistageProblem}, we will mostly focus on its
(equivalent) Dynamic Programming formulation given by Equation~\eqref{DynamicProgramming}.
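For illustration, the following Python sketch implements the backward recursion~\eqref{DynamicProgramming} on a finite grid of states and controls; the dynamics, costs and grids below are illustrative assumptions and are not part of the framework studied in the sequel.
\begin{verbatim}
# Illustrative sketch: backward Dynamic Programming on a finite grid.
import numpy as np

T = 5
states = np.linspace(-2.0, 2.0, 41)      # discretized state space (assumption)
controls = np.linspace(-1.0, 1.0, 21)    # discretized control space (assumption)

def f(t, x, u):                          # illustrative linear dynamics
    return x + u

def c(t, x, u):                          # illustrative quadratic stage cost
    return x ** 2 + 0.1 * u ** 2

def psi(x):                              # illustrative final cost
    return x ** 2

def project(x):                          # map a state back onto the grid
    return states[np.argmin(np.abs(states - x))]

V = [None] * (T + 1)
V[T] = {x: psi(x) for x in states}
for t in range(T - 1, -1, -1):           # Bellman operator applied pointwise
    V[t] = {x: min(c(t, x, u) + V[t + 1][project(f(t, x, u))] for u in controls)
            for x in states}

print(V[0][project(0.5)])                # approximation of V_0(x_0) for x_0 = 0.5
\end{verbatim}
The cost of such a scheme is driven by the size of the state grid, which motivates the approximation methods discussed below.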
One issue of using Dynamic Programming to solve multistage optimization problems
is the so-called \emph{curse of dimensionality}~\cite{Be1954}. That is, when the
state space $\mathbb{X}$ is a vector space, grid-based methods to compute the value
functions have a complexity which is exponential in the dimension of the state
space $\mathbb{X}$. One popular algorithm
(see~\cite{Gi.Le.Ph2015,Gu2014,Gu.Ro2012,Pe.Pi1991,Sh2011,Zo.Ah.Su2018}) that
aims to dampen the curse of dimensionality is the Stochastic Dual Dynamic
Programming algorithm (or SDDP for short) introduced by Pereira and Pinto in
1991. Assuming that the cost functions $c_t$ are convex and the dynamics $f_t$
are linear, the value functions defined in the Dynamic Programming
formulation~\eqref{DynamicProgramming} are convex~\cite{Gi.Le.Ph2015}. Under
these assumptions, the SDDP algorithm aims to build lower (or outer)
approximations of the value functions as suprema of affine functions and thus,
does not rely on a discretization of the state space. In the aforementioned
references, this approach is used to solve stochastic multistage convex
optimization problems, however in this article we will restrict our study to
deterministic multistage convex optimization problems as formulated in
Problem~\eqref{MultistageProblem}. Still, the SDDP algorithm can be applied to
our framework. One of the main drawbacks of the SDDP algorithm (in the stochastic
case) is the lack of an efficient stopping criterion: it builds lower
approximations of the value functions, but upper (or inner) approximations are
built through a Monte-Carlo scheme that is costly, and the associated stopping
criterion is not deterministic. We follow another path to provide upper
approximations, as explained now.
In~\cite[Ch.\ 8]{Qu2013} and~\cite{Qu2014}, Qu devised an algorithm which builds
upper approximations of a Bellman value function arising in an infinite horizon
and continuous time framework where the set of controls is both discrete and
continuous. This work was inspired by the work of McEneaney~\cite{Mc2007} using
techniques coming from tropical algebra, also called max-plus or min-plus
techniques. Assume that $\mathbb{X}=\mathbb{R}^n$ and that for each fixed discrete control the
cost functions are convex quadratic and the dynamics are linear in both the
state and the continuous control. If the set of discrete controls is finite,
then exploiting the min-plus linearity of the Bellman operators $\mathcal{B}_t$, one can
show that the value functions can be computed as a finite pointwise infimum of
convex quadratic functions:
\begin{equation*}
V_t = \inf_{\phi_t \in F_t} \phi_t\; ,
\end{equation*}
where $F_t$ is a finite set of convex quadratic forms. Moreover, in this
framework, the elements of $F_t$ can be explicitly computed through the
Discrete Algebraic Riccati Equation (DARE~\cite{La.Ro1995}). Thus, an
approximation scheme that computes an increasing sequence of subsets
$\left(F_t^k\right)_{k\in \mathbb{N}}$ of $F_t$ yields an algorithm that
converges after a finite number of improvements
\begin{equation*}
V_t^k := \inf_{\phi_t \in F_t^k} \phi_t \approx \inf_{\phi_t \in F_t} \phi_t = V_t.
\end{equation*}
However, the size of the set of functions $F_t$ that needs to be computed
grows exponentially with $T-t$. In~\cite{Qu2014}, in order to address the
exponential growth of $F_t$, Qu introduced a probabilistic scheme that adds
to $F_t^k$ the ``best'' (given the current approximations) element of
$F_t$ at some point drawn on the unit sphere.
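As an illustration of this representation, the following Python sketch stores an approximation of $V_t$ as a pointwise infimum of convex quadratic functions and evaluates it at a point; the quadratic data below are illustrative assumptions and the Riccati-based computation of the elements of $F_t$ is not reproduced.
\begin{verbatim}
# Illustrative sketch: a value function stored as a pointwise infimum of
# convex quadratics phi(x) = x'Qx + b'x + c (data chosen arbitrarily here,
# not computed through the DARE).
import numpy as np

F_t = [  # each triple (Q, b, c) represents one basic function phi in F_t^k
    (np.array([[1.0, 0.0], [0.0, 2.0]]), np.array([0.0, 0.0]), 0.0),
    (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([1.0, -1.0]), 0.3),
]

def V_approx(x, F):
    """Pointwise infimum over the quadratic basic functions in F at the point x."""
    return min(x @ Q @ x + b @ x + c for (Q, b, c) in F)

x = np.array([0.2, -0.4])
print(V_approx(x, F_t))   # upper approximation of the value function at x
\end{verbatim}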
Our work aims to build a general algorithm that encompasses both a deterministic
version of the SDDP algorithm and an adaptation of Qu's work to a discrete time
and finite horizon framework.
The remainder of this paper is structured as follows. In
Section~\ref{sec:lAlgorithme}, we make several assumptions on the Bellman
operators $\mathcal{B}_t$ and define an algorithm which builds approximations of the
value functions as a pointwise optimum (\emph{i.e.} either a pointwise infimum
or a pointwise supremum) of basic functions in order to solve the associated
Dynamic Programming formulation~\eqref{DynamicProgramming} of the multistage
Problem~\eqref{MultistageProblem}. At each iteration, the so-called basic
function that is added to the current approximation will have to satisfy two key
properties at a randomly drawn point, namely, \emph{tightness} and \emph{validity}. A key
feature of the proposed algorithm is that it can yield either upper or lower
approximations. More precisely,
\noindent $\bullet$ if the basic functions are affine, then approximating the
value functions by a pointwise supremum of affine functions will yield the SDDP
algorithm;
\noindent $\bullet$ if the basic functions are quadratic convex, then
approximating the value functions by a pointwise infimum of convex quadratic
functions will yield an adaptation of Qu's min-plus algorithm.
In Section~\ref{sec:Convergence}, we study the convergence of the approximations
of the value functions generated by our algorithm at a given time
$t\in \ce{0,T}$. We use an additional assumption on the random points on which
current approximations are improved, which states that they need to cover a ``rich enough'' set
and show that the approximating sequence converges almost surely (over the
draws) to the Bellman value function on a set of interest.
In the last sections, we will specify our algorithm to three special cases. In
Section~\ref{SDDP_Example}, we prove that when building lower approximations as
a supremum of affine cuts, the condition on the draws is satisfied on the
optimal current trajectory, as done in SDDP. Thus, we get another point of view
on the usual (see~\cite{Gi.Le.Ph2015,Sh2011}) asymptotic convergence of SDDP,
in the deterministic case. In Section~\ref{sec:Exemples_switch}, we describe an
algorithm which builds upper approximations as an infimum of quadratic forms. It
will be a step toward addressing the issue of computing efficient upper
approximations for the SDDP algorithm. In Section~\ref{sec:ToyExample}, we
present on a toy example some numerical experiments where we simultaneously
compute lower approximations of the value functions by a deterministic version
of SDDP and upper approximations by a discrete time version of Qu's min-plus
algorithm.
\begin{figure}
\caption{Lower (SDDP) and upper (min-plus) approximations of the value functions.}
\label{fig:UpperLower}
\end{figure}
\section{Notations and definitions}
\label{sec:lAlgorithme}
In the sequel, we will use the following notations
\noindent $\bullet$ $\mathbb{X} := \mathbb{R}^n$, endowed with its euclidean structure and its Borel $\sigma$-algebra
denotes the \emph{set of states}.
\noindent $\bullet$ $T$, a finite integer that we will call the time \emph{horizon}.
\noindent $\bullet$ $\mathop{\mathrm{opt}}$, denotes a generic operation that is either the \emph{pointwise infimum} or
the \emph{pointwise supremum} of functions which we will call the \emph{pointwise optimum}.
\noindent $\bullet$ $\overline{\mathbb{R}}$, denotes the extended real line endowed with the operations
$+\infty + \np{-\infty} = -\infty + \infty = +\infty$.
\noindent $\bullet$ $\mathop{\mathrm{dom}}{\phi}$, denotes the \emph{domain} of $\phi \in \left(\overline{\mathbb{R}}\right)^\mathbb{X}$ defined as
the subset of $\mathbb{X}$ in which $\phi(x)\in \mathbb{R}$.
\noindent $\bullet$ $Fb_t$ and $Fbb_t$, denote for every $t\in \ce{0,T}$, two subsets of the set $\left(\overline{\mathbb{R}}\right)^\mathbb{X}$
such that $Fb_t \subset Fbb_t$.
\noindent $\bullet$ $\phi$ is said to be a \emph{basic function} if it is an element of $Fb_t$ for some $t\in \ce{0,T}$.
\noindent $\bullet$ $\indi{X}$ denotes, for every set $X \subset \mathbb{X}$, the function equal to
$0$ on $X$ and $+ \infty$ elsewhere.
\noindent $\bullet$ For every $t\in \ce{0,T}$ and every set of basic functions $F_t \subset Fb_t$, we denote by $\mathcal{V}_{F_t}$ its pointwise optimum,
$\mathcal{V}_{F_t} := \mathop{\mathrm{opt}}_{\phi \in F_t} \phi$, that is
\begin{equation}
\label{VoptF}
\begin{array}{ccll}
\mathcal{V}_{F_t} : & \mathbb{X} & \longrightarrow & \overline{\mathbb{R}} \\
& x & \longmapsto & \mathop{\mathrm{opt}} \left\{\phi (x) \mid \phi \in F_t \right\}.
\end{array}
\end{equation}
\noindent $\bullet$ $\left(\mathcal{B}_t\right)_{t\in \ce{0,T-1}}$ denotes a sequence of $T$ operators from $\overline{\mathbb{R}}^\mathbb{X}$ to $\overline{\mathbb{R}}^\mathbb{X}$,
called the \emph{Bellman operators}.
\noindent $\bullet$ $\left( V_t \right)_{t\in \ce{0,T}}$, denotes, for a fixed function $\psi : \mathbb{X} \to \overline{\mathbb{R}}$,
a sequence of \emph{value functions} given by the system of Bellman Equations~\eqref{DynamicProgramming}.
Now, we make several assumptions on the structure of
Problem~\eqref{DynamicProgramming}. These assumptions will be satisfied in the
examples developed in Sections~\ref{SDDP_Example} to~\ref{sec:ToyExample}.
These assumptions will make it possible to propagate, backward in time, the
regularity of the value function at the final time $t=T$ to
the value function at the initial time $t=0$.
\begin{assumption}[Structural assumptions]
\label{Assumptions} \
\begin{myenumerate}
\item\label{Stability-pointwise-optimum} \textbf{Stability by pointwise optimum:}
for every $t\in \ce{0,T}$, if $F_t \subset Fb_t$ then
$\mathcal{V}_{F_t} \in Fbb_t$.
\item\label{Stability-pointwise-convergence} \textbf{Stability by pointwise convergence:} for every $t\in \ce{0,T}$ if a sequence of functions $\np{\phi^k}_{k\in \mathbb{N}}\subset Fbb_t$ converges pointwise to $\phi$ on the domain of $V_t$, then $\phi \in Fbb_t$.
\item \label{CommonRegularity} \textbf{Common regularity:} for every $t\in \ce{0,T}$, there exists a common (local) modulus of continuity of all $\phi\in Fbb_t$, \emph{i.e.} for every $x\in \mathop{\mathrm{dom}}\np{V_t}$, there exists $\omega_{t,x} : \mathbb{R}_+ \to \mathbb{R}_+ \cup \{+\infty \}$ which is increasing, equal to $0$ at $0$, continuous at $0$ and such that for every $\phi \in Fbb_t$ and every $x' \in \mathop{\mathrm{dom}}\np{V_t}$, we have $\lvert \phi(x) - \phi (x') \rvert \leq \omega_{t,x} \np{\lVert x - x' \rVert}$.
\item\label{Final-condition} \textbf{Final condition:} the value function $V_T$ at time $T$ is a pointwise optimum
for some given subset $F_T$ of~$Fb_T$, that is $\psi := \mathcal{V}_{F_T}$.
\item \label{StabilityBellman}
\textbf{Stability by the Bellman operators:} for every $t\in \ce{0,T-1}$, if $\phi \in Fbb_{t+1}$,
then $\mathcal{B}_{t}\left( \phi \right)$ belongs to $Fbb_{t}$.
\item\label{order-preserving} \textbf{Order preserving operators:} for every $t\in \ce{0,T-1}$, the operators $\mathcal{B}_t$ are \emph{order preserving}, \emph{i.e.} if $\phi,
\varphi \in Fbb_{t+1}$ are such that $\phi \leq \varphi$, then $\mathcal{B}_t
\left(\phi\right) \leq \mathcal{B}_t\left(\varphi\right)$.
\item\label{Additively-subhomogeneous} \textbf{Additively subhomogeneous operators: } for every time step $t\in \ce{0,T-1}$, and every given compact set $K_t$, there exists $M_t> 0$ such that the operator $\mathcal{B}_t$ restricted to $K_t$ is \emph{additively subhomogeneous} over $Fbb_{t+1}$, meaning that for every constant function $\lambda\geq 0$ and every function $\phi \in Fbb_{t+1}$, we have
\[
\mathcal{B}_t\left( \phi + \lambda \right) + \indi{K_t} \leq \mathcal{B}_t\left(\phi\right) + \lambda M_t + \indi{K_t}.
\]
\item\label{proper-value} \textbf{Proper value functions:} the solution $\left(V_t\right)_{t\in \ce{0,T}}$ to the Bellman equations~\eqref{DynamicProgramming} never takes the value $-\infty$ and is not identically equal to $+\infty$.
\item\label{optimal-sets} \textbf{Compactness condition:} for every $t\in \ce{0,T-1}$ and every compact
set $K_t \subset \mathop{\mathrm{dom}} \np{ V_t}$,
there exists a compact set
$K_{t+1} \subset \mathop{\mathrm{dom}} \np{ V_{t+1} }$ such that,
for every function
$\phi \in Fbb_{t+1}$ and constant $\lambda \geq 0$, we have
\[
\mathcal{B}_t\left( \phi + \lambda + \indi{K_{t+1}} \right) \leq \mathcal{B}_t \left( \phi + \lambda \right)
+ \indi{K_t}.
\]
\end{myenumerate}
\end{assumption}
\begin{remark}
Assumption~\ref{Assumptions}-\eqref{CommonRegularity} ensures that the domain of each function
of $Fbb_t$ includes the domain of $V_t$. Note that if $Fbb_t$ is the
set of all functions satisfying Assumption~\ref{Assumptions}-\eqref{CommonRegularity}, then
Assumption~\ref{Assumptions}-\eqref{Stability-pointwise-convergence} is trivially
satisfied. Also note that, as in~\cite{Gi.Le.Ph2015}, the domain of $V_t$ is assumed to be known.
\end{remark}
\begin{remark}
Note that Assumptions~\ref{Assumptions}-\eqref{proper-value} and~\ref{Assumptions}-\eqref{optimal-sets} do not
depend on whether $\mathop{\mathrm{opt}} = \inf$ or $\mathop{\mathrm{opt}} = \sup$, as the optimal control problem
that we consider is formulated as a minimization problem.
\end{remark}
\begin{lemma}\label{ContinuiteVt}
For every $t\in \ce{0,T}$ we have that $V_t \in Fbb_t$.
\end{lemma}
\begin{proof}
By Assumption~\ref{Assumptions}-\eqref{Final-condition} and
Assumption~\ref{Assumptions}-\eqref{Stability-pointwise-optimum}, $V_T$ is in
$Fbb_T$. Now, assume that for some $t\in \ce{0,T-1}$ we have that
$V_{t+1} \in Fbb_{t+1}$. By Assumption~\ref{Assumptions}-\eqref{StabilityBellman},
we have that $V_t = \mathcal{B}_t\np{V_{t+1}} \in Fbb_t$ which ends the proof by
backward induction.
\end{proof}
From a set of basic functions $F_t \subset Fb_t$, one can build its
pointwise optimum $\mathcal{V}_{F_t} = \mathop{\mathrm{opt}}_{\phi \in F_t} \phi$. We build a
monotone sequence of approximations of the value functions as optima of basic
functions which will be computed through \emph{compatible selection functions}
as defined below. We illustrate this definition in Figure~\ref{fig:TightValid}.
\begin{definition}[Compatible selection function]
\label{det:CompatibleSelection} Let a time step $t \in \ce{0,T-1}$ be fixed.
A \emph{compatible selection function}, or simply \emph{selection function}, is a function $\Selection{}{}$ from
$2^{Fb_{t+1}} \times \mathbb{X}$ to $Fb_t$ satisfying the two following properties
\noindent -- \textbf{Validity}: for every set of basic functions $F_{t+1} \subset Fb_{t+1}$ and every $x\in \mathbb{X}$, we
have $\Selection{F_{t+1}, x}{} \leq \mathcal{B}_t\left(\mathcal{V}_{F_{t+1}}\right)$
(resp. $\Selection{F_{t+1}, x}{} \geq \mathcal{B}_t\left(\mathcal{V}_{F_{t+1}}\right)$) when
$\mathop{\mathrm{opt}} = \sup$ (resp. $\mathop{\mathrm{opt}} = \inf$).
\noindent -- \textbf{Tightness}: for every set of basic functions $F_{t+1} \subset Fb_{t+1}$ and every $x\in \mathbb{X}$
the functions $\Selection{F_{t+1}, x}{}$ and $\mathcal{B}_t\left( \mathcal{V}_{F_{t+1}}
\right)$ coincide at point $x$, that is
\(
\Selection{F_{t+1}, x}{x} = \mathcal{B}_t\left( \mathcal{V}_{F_{t+1}} \right)\np{x}.
\)
For $t= T$, we say that $\mathcal{S}_T : 2^{Fb_T} \times \mathbb{X} \to
Fb_T$ is a \emph{compatible selection function} if
it is \emph{valid} and \emph{tight}.
There, $\mathcal{S}_T$ is \emph{valid} if, for every $F_T \subset Fb_T$ and $x\in \mathbb{X}$, the function
$\mathcal{S}_T\left( F_T, x \right)$ remains below (resp. above) the value
function at time $T$ when $\mathop{\mathrm{opt}} = \sup$ (resp. $\mathop{\mathrm{opt}} = \inf$). Moreover,
the function $\mathcal{S}_T$ is \emph{tight} if it coincides with the value function at point $x$,
that is for every $F _T\subset Fb_T$ and $x\in \mathbb{X}$, we have
\( \mathcal{S}_T\left[ F_T, x \right]\left(x\right) = V_T(x).
\)
\end{definition}
\begin{figure}
\caption{Tightness and validity of a compatible selection function (Definition~\ref{det:CompatibleSelection}).}
\label{fig:TightValid}
\end{figure}
Note that the Tightness assumption only asks for equality at the point $x$
between the functions $\phi_t^\sharp\left(F_{t+1},x\right)$ and
$\mathcal{B}_t\left(\mathcal{V}_{F_{t+1}}\right)$ and not necessarily everywhere. The only
global property between the functions $\phi_t^\sharp\left(F_{t+1},x\right)$
and $\mathcal{B}_t\left(\mathcal{V}_{F_{t+1}}\right)$ is an inequality given by the
validity assumption.
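To make Definition~\ref{det:CompatibleSelection} concrete, the following Python sketch gives a compatible selection function at the final time $T$ in the special case $\mathop{\mathrm{opt}} = \sup$ and $V_T$ equal to a finite supremum of affine functions: returning an affine function that attains the supremum at the trial point is tight there and valid everywhere. This special case is purely illustrative; the selection functions used later rely on the one-stage problems of Sections~\ref{SDDP_Example} and~\ref{sec:Exemples_switch}.
\begin{verbatim}
# Illustrative sketch: a valid and tight selection function at time T when
# opt = sup and V_T is the supremum of finitely many affine functions a.x + b.
import numpy as np

final_cuts = [  # affine functions whose supremum defines V_T (assumption)
    (np.array([1.0, 0.0]), 0.0),
    (np.array([-1.0, 2.0]), 0.5),
]

def V_T(x):
    return max(a @ x + b for (a, b) in final_cuts)

def selection_T(F_T, x):
    """Return an affine cut below V_T everywhere (validity) and equal to V_T
    at the trial point x (tightness)."""
    return max(final_cuts, key=lambda cut: cut[0] @ x + cut[1])

x = np.array([0.3, 0.7])
a, b = selection_T(set(), x)
assert abs((a @ x + b) - V_T(x)) < 1e-12  # tightness at the trial point
\end{verbatim}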
In Algorithm~\ref{Tropical Dynamic Programming} we will generate, for every time $t$, a sequence of
random points of crucial importance that we will call \emph{trial points}. They
will be the ones where the selection functions will be evaluated, given the set
$F_t^k$ which characterizes the current approximation. In order to generate
those points, we will assume that we have at our disposal an Oracle which,
given $T{+}1$ sets of functions (characterizing the current approximations),
computes $T{+}1$ compact sets and a probability law.
\begin{definition}[Oracle]
\label{AssumptionOracle}
The Oracle takes as input $T{+}1$ sets of functions $F = \np{F_0,\ldots, F_T}$
included in $Fb_0,\ldots, Fb_T$ respectively. Its output consists of
$T{+}1$ compact sets $K_0, \ldots, K_T$, each included in $\mathbb{X}$, and of a
probability measure ${\mathbb{P}}_F$ on the space
$\mathbb{X}^{T+1}$
which are such that
\noindent -- Initialization: if $F_t = \emptyset$ for every $t\in \ce{0,T}$, then the Oracle returns $T{+}1$ given compact sets and a given probability measure.
\noindent -- For every $t\in \ce{0,T}$, $K_t \subset \mathop{\mathrm{dom}}\left(V_t\right)$.
\noindent -- The support of ${\mathbb{P}}_F$ is $K_0 \times \ldots \times K_T$.
\end{definition}
For every time $t\in \ce{0,T}$, we construct a sequence of functions
$\left( V_t^{k} \right)_{k\in \mathbb{N}}$ belonging to $Fbb_t$ as
follows. For every time $t\in \ce{0,T}$ and for every $k \geq 0$, we build a subset $F_t^{k}$ of the set $Fb_t$ and define the sequence of functions by pointwise optimum
\begin{equation}
\label{Vktdef}
V_t^{k} := \mathcal{V}_{F_t^k} = \mathop{\mathrm{opt}}_{\phi \in F_t^{k}} \phi\; .
\end{equation}
As described here, the functions are just byproducts of Algorithm~\ref{Tropical Dynamic Programming}, which only
describes the way the sets $F_t^{k}$ are computed.
As the following algorithm was inspired by Qu's work which uses tropical algebra techniques, we will call this algorithm ``Tropical Dynamic Programming''.
\begin{algorithm}
\caption{Tropical Dynamic Programming \ (TDP)}
\label{Tropical Dynamic Programming}
\begin{algorithmic} \REQUIRE{For every $t\in \ce{0,T}$, $\Selection{}{}$ a compatible selection function and a $\text{Trial point Oracle}$ satisfying Definition~\ref{AssumptionOracle}.}
\ENSURE{For every $t\in \ce{0,T}$, a sequence of sets
$\left(F_t^k\right)_{k\in\mathbb{N}}$ and the associated sequence $V_t^k = \mathop{\mathrm{opt}}_{\phi \in F_t^k} \phi$.}
\STATE{Define for every $t\in \ce{0,T}$,
$F_t^0 := \emptyset$.}
\FOR{$k\geq 0$}
\STATE{\emph{Forward phase}}
\STATE{Compute $\left(K_0^k, \dots K_T^k, \mathbb{P}^{k}\right) = \text{Oracle}\left(F^k_0, \dots, F^k_T \right).$}
\STATE{Draw trial points $\left(x_t^{k}\right)_{t\in \ce{0,T}}$ over $K_0^{k} \times K_1^{k} \times \ldots \times K_T^{k}$ according to $\mathbb{P}^{k}$ knowing the past iterations.}
\STATE{\emph{Backward phase}}
\STATE{Compute $\phi_T^{k+1} := \mathcal{S}_T \left[F^{k}_{T}, x_T^{k}\right]$.}
\STATE{Define $F_T^{k+1} := F_T^{k} \cup \left\{ \phi_T^{k+1} \right\}$.}
\FOR{$t$ from $T-1$ to $0$}
\STATE{Compute $\phi_t^{k+1} := \Selection{F^{k+1}_{t+1}, x_t^{k}}{}$.}
\STATE{Define $F_t^{k+1} := F_t^{k} \cup \left\{ \phi_t^{k+1} \right\}$.}
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
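For readers who prefer code, the following Python skeleton mirrors the structure of Algorithm~\ref{Tropical Dynamic Programming}; the arguments \texttt{oracle}, \texttt{selections} and \texttt{selection\_T} stand for a trial point Oracle and compatible selection functions and are left abstract, so this is a structural sketch rather than a complete solver.
\begin{verbatim}
# Structural sketch of Tropical Dynamic Programming; the Oracle and the
# selection functions are supplied by the user (they are problem dependent).
import numpy as np

def pointwise_opt(F, x, opt=min):
    """V_F(x) = opt of phi(x) over phi in F; an empty F gives +inf when
    opt = min (and -inf when opt = max)."""
    if not F:
        return np.inf if opt is min else -np.inf
    return opt(phi(x) for phi in F)

def tropical_dynamic_programming(T, oracle, selections, selection_T, n_iter):
    """oracle(F) returns T+1 trial points x_0^k, ..., x_T^k;
    selections[t](F_next, x) and selection_T(F_T, x) must be compatible
    selection functions in the sense of the definition above."""
    F = [[] for _ in range(T + 1)]        # F_t^0 is the empty set for every t
    for _ in range(n_iter):
        trial_points = oracle(F)          # forward phase
        F[T].append(selection_T(F[T], trial_points[T]))
        for t in range(T - 1, -1, -1):    # backward phase
            F[t].append(selections[t](F[t + 1], trial_points[t]))
    return F
\end{verbatim}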
At each iteration, Algorithm~\ref{Tropical Dynamic Programming} generates a trial point $x_t^k$ which only depends on the data available at the current iteration. We loosely explain this point. Define for every $k\in \mathbb{N}$,
$F^k = \np{F_t^k}_{t\in \ce{0,T}}$ and $x^k = \np{x_t^k}_{t\in \ce{0,T}}$. Then, there exists a deterministic function $\xi$ and a sequence of independent random variables $\np{W^k}_{k\in \mathbb{N}}$ such that for every $k\in \mathbb{N}$, $x^k = \xi\np{F^k, W^{k}}$ where $\np{W^{k}}_{k \in \mathbb{N}}$ is furthermore independent from $F^0$. Throughout the remainder of the article, denote by $\np{\Omega,\mathcal{F}, \mathbb{P}}$ a probability space on which the random variables $\np{W^k}_{k\in \mathbb{N}}$ are defined and independent.
We will denote by $+$ the Minkowski sum between sets, by $\mathbb{B}$ the unit
closed euclidean ball of $\mathbb{X}^{T+1}$ and for every $x \in \mathbb{X}^{T+1}$ and radius
$r>0$, $B(x,r)$ is the euclidean open ball of radius $r$ centered at
$x$. Furthermore, we define for every $t$, $K_t^* := \limsup_{k\in \mathbb{N}} K_t^k$
the set of all possible limit points of $K_t^k$. We make the following
assumption on the Oracle which, loosely stated, ensures that if a state $x_t$ is
close to $K_t^*$, then $x_t$ is almost a limit point of the sequence of trial
points $\np{x_t^k}_{k\in \mathbb{N}}$.
\begin{assumption}[Trial point assumption]
\label{TrialpointAssumption}
For every radius $r' > 0$, there exists $r >0$ such that
\begin{equation}
\label{eq:TrialpointAssumption}
\forall x \in \mathbb{X}
\; , \enspace
\mathbb{P}
\Bc{\bp{x \in \limsup_{k\in \mathbb{N}}\footnotemark K^k + r\mathbb{B}} \Rightarrow x \in \limsup_{k\in \mathbb{N}} B(x^k,r') } = 1
\; .
\end{equation}
\end{assumption}
\footnotetext{See~\cite[Definition 4.1 p. 109]{Ro.We2009}.}
Remark that $\np{\limsup_{k\in \mathbb{N}} K^k} + r\mathbb{B} = \limsup_{k\in \mathbb{N}} \np{K^k + r\mathbb{B}}$, hence the lack of parentheses. The following lemma gives some more insight into the Trial point assumption.
\begin{lemma}
\label{lemmatrial}
Consider the sequence of trial points $\np{x^k}_{k\in \mathbb{N}}$ generated by Algorithm~\ref{Tropical Dynamic Programming} with an Oracle satisfying
Assumption~\ref{TrialpointAssumption}. Given $r' > 0$ and $x\in \mathbb{X}$, for every $r'' > r'$, $\mathbb{P}$-a.s.,
\begin{equation}
\label{eq:lemmatrial1}
x \in \limsup_{k\in \mathbb{N}} B(x^k, r') \Rightarrow x^k \in B(x,r'') \ \text{for infinitely many indices} \ k\in \mathbb{N}
\; .
\end{equation}
Conversely, given $r'' >0$ and $x\in \mathbb{X}$, for every $r' > r''$, $\mathbb{P}$-a.s.
\begin{equation}
\label{eq:lemmatrial2}
x^k \in B(x,r'') \ \text{for infinitely many indices} \ k\in \mathbb{N} \Rightarrow x \in \limsup_{k\in \mathbb{N}} B(x^k, r')
\; .
\end{equation}
\end{lemma}
\begin{proof}
First, we prove Equation~\eqref{eq:lemmatrial1}. Fix $r''>r'>0$ and assume that $x \in \limsup_{k\in \mathbb{N}} B(x^k, r')$, $\mathbb{P}$-a.s.
Then, there exists an increasing function $\sigma : \mathbb{N} \to \mathbb{N}$ and a sequence
$\np{y^{\sigma(k)}}_{k\in \mathbb{N}} \subset r'\mathbb{B}$ such that
$x^{\sigma(k)} + y^{\sigma(k)} \underset{k\to +\infty}{\longrightarrow} x$. As
$r'' - r' > 0$, there exists a rank $k_0\in \mathbb{N}$ such that when $k \geq k_0$ we have
\(
\lVert x - \np{x^{\sigma(k)} + y^{\sigma(k)}} \rVert \leq r'' - r'
\). By triangle inequality, we have
\[
\lVert x - x^{\sigma(k)}\rVert \leq \lVert x - \np{x^{\sigma(k)} + y^{\sigma(k)}}\rVert + \lVert y^{\sigma(k)} \rVert\\
\leq (r''-r') + r' = r'',
\] \emph{i.e.} $\mathbb{P}$-a.s., for every $k \geq k_0$, $x \in B(x^k, r'')$, which yields
Equation~\eqref{eq:lemmatrial1}.
Second, we prove Equation~\eqref{eq:lemmatrial2}. Fix $r'> r''> 0$ and assume that
$x^k \in B(x,r'')$ for infinitely many indices $k\in \mathbb{N}$. Thus,
$\mathbb{P}$-a.s, there exists an increasing function $\sigma : \mathbb{N} \to \mathbb{N}$ and a
sequence $\np{y^{\sigma(k)}}_{k\in \mathbb{N}} \subset r''\mathbb{B}$ such that
$x^{\sigma(k)} - x = y^{\sigma(k)}$. As $r' > r''$, $\mathbb{P}$-a.s.,
$x \in B(x^{\sigma(k)}, r')$ for every $k\in \mathbb{N}$, since
\(
x = x^{\sigma(k)} - y^{\sigma(k)}.
\)
Hence, we obtain Equation~\eqref{eq:lemmatrial2}.
\end{proof}
Now, we give two examples of Oracles that satisfy the Trial point assumption~\ref{TrialpointAssumption}. They are used respectively in Section~\ref{SDDP_Example} and~\ref{sec:Exemples_switch}.
\begin{example}[Independent uniform draws over the unit sphere]
\label{example:sphere}
Consider the Oracle which constantly outputs $T+1$ times the unit euclidean
sphere $\mathbf{S}$ of $\mathbb{X}$ and the uniform probability measure
$\mathbb{P}^k := \sigma_U$ of $\mathbf{S}^{T+1}$ on $\mathbb{X}^{T+1}$\footnote{For every
$A \in \mathcal{B}\np{\mathbb{X}^{T+1}}$,
$\sigma_U(A) = C \, \mathrm{Leb}\np{\pi_{\mathbf{S}^{T+1}}^{-1}\np{A \cap
\mathbf{S}^{T+1}}}$, where $\mathrm{Leb}$ is the Lebesgue measure on
$\mathbb{X}^{T+1}$, $\pi_{\mathbf{S}^{T+1}}$ is the euclidean projector on $\mathbf{S}^{T+1}$
restricted to the ball $B\np{0,1}^{T+1}$ without $0$ and $C$ a normalization constant.}. Here, we have for every
$k\in \mathbb{N}$, $K^k = \mathbf{S}^{T+1}$. Fix an arbitrary $r' > 0$
and set $r= r'/2$, we prove that
\begin{equation*}
\forall x \in \mathbb{X}, {\mathbb{P}}\Bc{ x \in \np{ \mathbf{S}^{T+1} + r\mathbb{B}} \Rightarrow x \in \limsup_{k\in \mathbb{N}} B(x^k,r') } = 1 \; .
\end{equation*}
\begin{proof}
Fix $x\in \mathbf{S}^{T+1} + r\mathbb{B}$, we need to show that we have
\(
\mathbb{P}\bc{x \in \limsup_{k\in \mathbb{N}} B(x^k,r') } = 1
\). Now, fix $r''>0$ such that $r < r'' < r'$. Using Lemma~\ref{lemmatrial}-\eqref{eq:lemmatrial2}, it is enough to show that
\begin{equation}
\label{eq:oracle1}
\mathbb{P}\nc{ x^k \in B(x,r'') \ \text{for infinitely many indices} \ k\in \mathbb{N}} = 1.
\end{equation}
As $r'' > r$ and $x$ is distant from $\mathbf{S}^{T+1}$ by less than $r$, the
quantity
$\mathbb{P}\nc{x^k \in B(x,r'')} = \sigma_U \nc{B(x,r'') \cap \mathbf{S}^{T+1}}$ is a
positive constant in $k$. Thus, we have that
\(
\sum_{k\in \mathbb{N}} \mathbb{P}\bc{x^k \in B(x,r'')} = +\infty
\).
Moreover, the events $\np{x^k \in B(x,r'')}_{k\in \mathbb{N}}$ are independent,
thus by the Borel-Cantelli Lemma, Equation~\eqref{eq:oracle1} holds.
\end{proof}
\end{example}
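The draws of this Oracle can be obtained by normalizing independent Gaussian vectors, as in the following Python sketch (a standard sampling method, given here for illustration only).
\begin{verbatim}
# Illustrative sketch: one iteration of the Oracle of this example, drawing
# T+1 independent points uniformly on the unit euclidean sphere of X = R^n.
import numpy as np

def draw_trial_points(T, n, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    g = rng.standard_normal((T + 1, n))          # independent Gaussian vectors
    return g / np.linalg.norm(g, axis=1, keepdims=True)

points = draw_trial_points(T=4, n=3)
print(np.linalg.norm(points, axis=1))            # every norm equals 1
\end{verbatim}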
\begin{example}[Dirac on the current optimal trajectory]
\label{example:opt_traj}
The sequence of probability measures $\np{{\mathbb{P}}^k}_{k\in \mathbb{N}}$ is recursively built as follows:
\noindent -- Set ${\mathbb{P}}^0 := \np{\delta_{x_0^0}, \ldots \delta_{x_T^0}}$ where $x_t^0 \in K_t^0$ for every $t\in \ce{0,T}$.
\noindent -- Given sets of functions $F_0^k, \ldots, F_T^k$, start by fixing $x_0^{k} = x_0$ and
compute forward in time, for $t\in \ce{0,T{-}1}$, optimal controls by
\(
u_t^k \in \mathop{\arg\min}_u \mathcal{B}_t^u\np{\mathcal{V}_{F_{t+1}^k}}(x_t^k),
\)
and successive states by
\(
x_{t+1}^k = f_t\np{x_t^k, u_t^k}
\).
\noindent -- Define the probability measure ${\mathbb{P}}^k := \np{\delta_{x^k_0}, \ldots, \delta_{x^k_T}}$.
Consider the Oracle which, given sets of functions $F_0^k, \ldots, F_T^k$,
outputs the singleton $\na{x^k} = \ba{\np{x^k_t}_{t\in \ce{0,T}}}$ and
the probability measure
${\mathbb{P}}^k := \np{\delta_{x^k}}$ defined at the previous step. Fix $r' > 0$, take $r = r' >0$ and
$x \in \mathbb{X}^{T+1}$. We obtain that
\begin{equation*}
{\mathbb{P}}\Bc{ x \in \bp{\limsup_{k\in \mathbb{N}} \underbrace{\{x^k\} + r\mathbb{B}}_{= B(x^k, r)} }^c \ \text{or} \ x \in \limsup_{k\in \mathbb{N}} B(x^k,r) } = 1 \; ,
\end{equation*}
which is equivalent to the Trial point assumption with $K^k = \{x^k\}$.
\end{example}
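The forward pass of this Oracle can be sketched as follows in Python; the dynamics, costs and the finite control grid below are illustrative assumptions, whereas in the setting of Section~\ref{SDDP_Example} the minimization over controls is a one-stage convex problem.
\begin{verbatim}
# Illustrative sketch of the forward pass of the 'current optimal trajectory'
# Oracle: each control minimizes the stage cost plus the current approximation.
import numpy as np

controls = np.linspace(-1.0, 1.0, 21)    # finite control grid (assumption)

def f(t, x, u):                          # illustrative dynamics
    return 0.9 * x + u

def c(t, x, u):                          # illustrative stage cost
    return x ** 2 + 0.1 * u ** 2

def forward_trajectory(x0, T, V_approx):
    """V_approx[t](x) evaluates the current approximation of V_t at x."""
    xs = [x0]
    for t in range(T):
        u = min(controls,
                key=lambda u: c(t, xs[t], u) + V_approx[t + 1](f(t, xs[t], u)))
        xs.append(f(t, xs[t], u))
    return xs                            # the Dirac measures are put on these points

traj = forward_trajectory(0.5, T=3, V_approx=[lambda x: 0.0] * 4)
print(traj)
\end{verbatim}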
\section{Almost sure convergence on the set of accumulation points}
\label{sec:Convergence}
In this section, we will prove the convergence result stated in
Theorem~\ref{ConvergenceTheorem}. For this purpose, we state several crucial properties
of the approximation functions $\left(V_t^k\right)_{k\in \mathbb{N}}$ generated by
Algorithm~\ref{Tropical Dynamic Programming}. They are direct consequences of the facts that the Bellman
operators are order preserving and that the basic functions building our
approximations are computed through compatible selection
functions. The Algorithm~\ref{Tropical Dynamic Programming} is stochastic, as trial points are drawn at each
iteration from $\mathbb{P}^k$. Therefore, equalities, inequalities and statements
where the functions $V_t^k$ are involved hold $\mathbb{P}$-almost surely. However, for
the sake of simplicity, we will refrain from always adding $\mathbb{P}$-almost surely
in equalities, inequalities and some statements.
\begin{lemma}
\label{ProprietesFaciles}
The sequences of functions $\left( V_t^{k} \right)_{k\in \mathbb{N}}$,
for every $t\in \ce{0,T}$, given by Equation~\eqref{Vktdef} and
produced by Algorithm~\ref{Tropical Dynamic Programming} satisfy the following properties.
\begin{enumerate}
\item\label{proofitema} \textbf{Monotone approximations:} for every indices
$k< k'$ and every $t\in \ce{0,T}$, we have that
$V_t^k \geq V_t^{k'} \geq V_t$
when $\mathop{\mathrm{opt}} = \inf$ and $V_t^k \leq V_t^{k'} \leq V_t$ when $\mathop{\mathrm{opt}} = \sup$.
\item\label{proofitemb} For every $k\in \mathbb{N}$ and every $t\in \ce{0,T-1}$, we have that
$
\mathcal{B}_t\left( V_{t+1}^k\right) \leq V_t^k$ when $\mathop{\mathrm{opt}} = \inf$
and $\mathcal{B}_t\left( V_{t+1}^k\right) \geq
V_t^k$ when $\mathop{\mathrm{opt}} = \sup$.
\item\label{proofitemc}
For every $k\geq 1$ and every $t\in \ce{0,T-1}$, we have
\(
\mathcal{B}_t\left( V_{t+1}^k \right)\left( x_t^{k-1} \right) =
V_t^k \left(x_t^{k-1} \right)
\).
\item\label{proofitemd} For every $k\geq 1$, we have
\( V_{T}^k\left( x_T^{k-1} \right) = V_T \left(x_T^{k-1} \right) \).
\end{enumerate}
\end{lemma}
\begin{proof} We prove each point when $\mathop{\mathrm{opt}} = \inf$. The case $\mathop{\mathrm{opt}} = \sup$ is similar and left to the reader.
\noindent $\bullet$~\eqref{proofitema} (left inequality).
Let $t\in \ce{0,T}$ be fixed.
By construction of Algorithm~\ref{Tropical Dynamic Programming}, the sequence of sets
$\bp{F_t^k}_{k\in \mathbb{N}}$ is non-decreasing. Now, using the definition of the sequence $\left(V_t^k\right)_{k\in \mathbb{N}}$
given by Equation~\eqref{Vktdef} we have that
for every $x \in \mathbb{X}$,
\( V_t^{k+1}(x) = \mathcal{V}_{F^{k+1}_t}(x) = \inf_{\phi\in F^{k+1}_t}\phi(x) \leq \inf_{\phi
\in F^{k}_t} \phi(x) = \mathcal{V}_{F^{k}_t}(x) = V_t^k(x)
\) and thus the sequence $\left(V_t^k \right)_{k\in \mathbb{N}}$ is non-increasing.
\noindent $\bullet$~\eqref{proofitemb}. We prove the assertion by induction on
$k\in \mathbb{N}$. For $k=0$, as $F_t^0 = \emptyset$, we have $V_t^0 = +\infty$
for all $t\in \ce{0,T-1}$ and thus the assertion is true. Now, assume that for
some $k\in \mathbb{N}$, we have for all $t\in \ce{0,T-1}$
\begin{equation}
\label{eq:HR}
\mathcal{B}_t\left( V_{t+1}^{k} \right) \leq V_t^k \; .
\end{equation}
Since $\bp{V_{t+1}^{k'}}_{k'\in \mathbb{N}}$ is non-increasing by the already proved Item~\eqref{proofitema} and
$\mathcal{B}_t$ is order preserving using Assumption~\ref{Assumptions}-\eqref{order-preserving}, we have that
$\mathcal{B}_t\left( V_{t+1}^{k+1} \right)\leq \mathcal{B}_t\np{V_{t+1}^{k}}$. This last inequality combined with induction assumption
given by Equation~\eqref{eq:HR} gives the inequality
\begin{equation}
\label{eq:HRbis} \mathcal{B}_t\left( V_{t+1}^{k+1} \right) \leq V_t^k\; .
\end{equation}
Moreover, we also have that
\begin{align}
\mathcal{B}_t\left( V_{t+1}^{k+1} \right)
& \mathop{=}_{(\text{by }~\eqref{Vktdef})} \mathcal{B}_t\left( \mathcal{V}_{F_{t+1}^{k+1}}\right)
\mathop{\le}_{(\text{by } \mathcal{S}_t \text{ validity at } x_t^k)}
\Selection{F_{t+1}^{k+1}, x^k_t}{}
= \phi^{k+1}_t \; ,
\label{eq:HRter}
\end{align}
where the last equality is obtained by definition of function $\phi^{k+1}_t$ in Algorithm~\ref{Tropical Dynamic Programming}.
Thus, combining Equation~\eqref{eq:HRbis} and~\eqref{eq:HRter} we have that
$ \mathcal{B}_t\np{V_{t+1}^{k+1}} \le \inf \bp{V_t^{k} , \phi_t^{k+1}}$.
Finally, using Equation~\eqref{Vktdef} and Algorithm~\ref{Tropical Dynamic Programming}, we have that
\begin{equation*}
\inf \bp{V_t^{k} , \phi_t^{k+1}} =
\inf \Bp{\inf_{\phi \in F_t^{k} } \phi, \phi_t^{k+1}} =
\inf_{\phi \in F_t^{k} \cup \{ \phi_t^{k+1}\}} \phi =
\inf_{\phi \in F_t^{k+1}} \phi =
V_t^{k+1}\; .
\end{equation*}
Thus, we obtain that
\(
\mathcal{B}_t\left( V_{t+1}^{k+1} \right) \leq \inf \bp{V_t^{k} , \phi_t^{k+1}}= V_t^{k+1}
\), which gives the induction assumption for $k+1$ and concludes the proof of~\eqref{proofitemb}.
\noindent $\bullet$~\eqref{proofitemc}. As the selection function $\Selection{}{}$ is tight in the sense of
Definition~\ref{det:CompatibleSelection}, we have by construction of Algorithm~\ref{Tropical Dynamic Programming} that
$\mathcal{B}_t \np{V_{t+1}^k}\np{x_t^{k-1}} = \phi_t^k \np{x_t^{k-1}}$.
Combining this equation with Item~\eqref{proofitemb}
and the definition of $V_t^k$, one gets Lemma~\ref{ProprietesFaciles}-\eqref{proofitemc}.
\noindent $\bullet$~\eqref{proofitemd}. Similarly,
we have that
$V_{T} \np{ x_T^{k-1}}= \phi_T^k \np{x_T^{k-1}}$,
which combined with the inequality given in Item~\eqref{proofitema}
and the definition of $V_T^k$ gives Lemma~\ref{ProprietesFaciles}-\eqref{proofitemd}.
\noindent $\bullet$~\eqref{proofitema} (right inequality).
We prove that $V_t^k \geq V_t$ for all $k\in \mathbb{N}$ and all $t\in \ce{0,T}$.
Fix $k \in \mathbb{N}$, we show that $V_t^k \geq V_t$ for all $t\in \ce{0,T}$ by backward recursion on time $t$.
For $t=T$, by validity of the selection functions
given in Definition~\ref{det:CompatibleSelection}, for every $\phi \in F_T^k$, we have that $\phi \geq
V_T$. Thus $V_T^k = \mathcal{V}_{F_T^k} = \inf_{\phi \in F_T^k} \phi \geq V_T$. Now, suppose
that for some $t \in \ce{0,T-1}$, we have that $V_{t+1} \le V_{t+1}^k$. Then, using the definition of the value function
in Equation~\eqref{DynamicProgramming}, the fact that the Bellman operators are order preserving and the inequality already proved
in Item~\eqref{proofitemb} we obtain that:
\(
V_t = \mathcal{B}_t \bp{V_{t+1}} \le \mathcal{B}_t \bp{V_{t+1}^k} \le V_t^{k}\; ,
\)
which gives the assertion for time $t$.
This ends the proof.
\end{proof}
In the following two propositions, we state that the sequences
$\left( V_t^k \right)_{k\in \mathbb{N}}$ and
$\np{\mathcal{B}_t \left( V_{t+1}^k \right)}_{k\in \mathbb{N}}$ converge uniformly on any compact
included in the domain of $V_t$. The limit function $V_t^*$ of
$\left( V_t^k \right)_{k\in \mathbb{N}}$ will be a natural candidate to be equal to the
value function $V_t$.
\begin{lemma}
\label{lemma:unifconv}
Fix $t\in \ce{0,T}$. Let $\np{\phi^k}_{k\in \mathbb{N}}$ be a monotonic sequence in
$Fbb_t$ such that there exists $\phi_1, \phi_2 \in Fbb_t$ satisfying
for every $k\in \mathbb{N}$ \( \phi_1 \leq \phi^k \leq \phi_2. \) Then, the
sequence $\np{\phi^k}_{k\in \mathbb{N}}$ converges uniformly on every compact set
included in $\mathop{\mathrm{dom}}\np{V_t}$ to a function $\phi^* \in Fbb_t$.
\end{lemma}
\begin{proof}
The proof relies on the Arzela-Ascoli~theorem~\cite[Theorem
2.13.30~p.347]{Sc1995}. Since $\phi_1$ and $\phi_2$ belong to $Fbb_t$,
they are finite on $\mathop{\mathrm{dom}}\np{V_t}$. Then, the sequence of functions
$\np{\phi^k}_{k\in \mathbb{N}}$ is monotonic and bounded, so it converges pointwise
on $\mathop{\mathrm{dom}}\np{V_t}$ to a limit function $\phi^*$. By
Assumption~\ref{Assumptions}-\eqref{Stability-pointwise-convergence}, this
implies that $\phi^* \in Fbb_t$.
Now, fix a compact set $K\subset \mathop{\mathrm{dom}}\np{V_t}$. First, since
$\np{\phi^k}_{k\in \mathbb{N}} \subset Fbb_t$, we have that for every $k\in \mathbb{N}$,
$\mathop{\mathrm{dom}}\np{\phi^k}$ contains $\mathop{\mathrm{dom}}\np{V_t}$ and the sequence
$\np{\phi^k}_{k \in \mathbb{N}}$ shares a common modulus of continuity. Second, the
continuous functions $\lvert \phi_1 \rvert$ and $\lvert \phi_2 \rvert$ are
bounded from above by quantities independent of $x$ on the compact $K$, thus,
$\sup_{k\in \mathbb{N}} \sup_{x\in K} \lvert \phi^k\np{x} \rvert$ is finite. Hence,
by the Arzela-Ascoli theorem, the monotonic sequence $\np{\phi^k}_{k\in \mathbb{N}}$
converges uniformly on the compact $K$ to the continuous function $\phi^*$.
\end{proof}
\begin{proposition}[Existence of an approximating limit]
\label{ExistenceLimit}
Let $t\in \ce{0,T}$ be fixed, the sequence of
functions $\left( V_t^k \right)_{k\in \mathbb{N}}$ defined by Equation~\eqref{Vktdef} and Algorithm~\ref{Tropical Dynamic Programming} $\mathbb{P}$-a.s.\ converges
uniformly on every compact set included in the domain of $V_t$ (solution of Equation~\eqref{DynamicProgramming}) to a function $V_t^* \in Fbb_t$.
\end{proposition}
\begin{proof}
By Lemma~\ref{ProprietesFaciles}-\eqref{proofitema}, for every $k\geq 1$ we
have that \( V_t^1 \leq V_t^k \leq V_t \), when $\mathop{\mathrm{opt}} = \sup$ (and the
inequalities are reversed when $\mathop{\mathrm{opt}} = \inf$). Now, we have that
$V_t^1 \in Fbb_t$ and by Lemma~\ref{ContinuiteVt}, the mapping $V_t$ is also in
$Fbb_t$. Moreover, by Lemma~\ref{ProprietesFaciles}-\eqref{proofitema},
the sequence $\np{V_t^k}_{k \geq 1}$ is monotonic. Thus, by
Lemma~\ref{lemma:unifconv}, we have that $\np{V_t^k}_{k\geq 1}$ converges uniformly
on every compact set included in $\mathop{\mathrm{dom}}\np{V_t}$ to a function
$V_t^* \in Fbb_t$.
This ends the proof.
\end{proof}
\begin{proposition}
\label{ConvergenceBellmanImages}
Let $t\in \ce{0,T-1}$ be fixed and $V_{t+1}^*$ be the function defined in Proposition~\ref{ExistenceLimit}. The sequence $\mathcal{B}_t \left( V_{t+1}^k \right)$ $\mathbb{P}$-a.s.\
converges uniformly to the continuous function $\mathcal{B}_t \left(V_{t+1}^*\right)$
on every compact set included in the domain of $V_t$.
\end{proposition}
\begin{proof}
First we consider the case $\mathop{\mathrm{opt}} = \inf$. As the sequence
$\left( V_{t+1}^k \right)_{k\in \mathbb{N}}$ is non-increasing and using the fact that the operator $\mathcal{B}_t$ is order preserving, the sequence
$\np{ \mathcal{B}_t \np{ V_{t+1}^k }}_{k\in \mathbb{N}}$ is also non-increasing.
Moreover, we have that
\begin{align*}
V_t^1 & \geq V_t^k \tag{\text{Lemma~\ref{ProprietesFaciles}-\eqref{proofitema}}} \\
& \geq \mathcal{B}_t\np{V_{t+1}^k} \tag{\text{Lemma~\ref{ProprietesFaciles}-\eqref{proofitemb}}} \\
& \geq \mathcal{B}_t \np{V_{t+1}} \tag{\text{Lemma~\ref{ProprietesFaciles}-\eqref{proofitema}}} \\
& = V_t.
\end{align*}
Thus, by Lemma~\ref{lemma:unifconv}, the sequence of functions $\np{\mathcal{B}_t\np{V_{t+1}^k}}_{k \geq 1}$ converges uniformly on every compact set included in $\mathop{\mathrm{dom}}\np{V_t}$ to a function $\phi \in Fbb_t$. Let $K_t$ be a given compact set
included in $\mathop{\mathrm{dom}} \np{V_t}$. We now show that the function $\phi$ is equal to $\mathcal{B}_t\left( V_{t+1}^*\right)$ on the given compact $K_t$ or equivalently
we show that $\phi + \indi{K_t} = \mathcal{B}_t\left( V_{t+1}^* \right) + \indi{K_t}$. As already shown in Proposition~\ref{ExistenceLimit},
we have that $V_{t+1}^k \geq V_{t+1}^*$, which combined with the fact that the operator $\mathcal{B}_t$ is order preserving,
gives, for every $k\geq 1$, that
\(
\mathcal{B}_t \np{ V_{t+1}^k } \geq \mathcal{B}_t\np{V_{t+1}^*}.
\)
Now, adding the mapping $\indi{K_t}$ on both sides of the previous inequality and taking the limit as $k$ goes to infinity,
we have that
\begin{equation*}
\phi + \indi{K_t} \geq \mathcal{B}_t\left( V_{t+1}^* \right) + \indi{K_t}.
\end{equation*}
For the converse inequality, start by recalling that,
by the compactness condition
(see Assumption~\ref{Assumptions}-\eqref{optimal-sets}), there exists a compact set $K_{t+1} \subset \mathop{\mathrm{dom}} \np{V_{t+1}}$ such that, for every $\phi \in Fbb_{t+1}$ and every $\lambda \geq 0$, we have that
\begin{equation}
\label{eq:tmp111}
\mathcal{B}_t\left( \phi + \lambda + \indi{K_{t+1}} \right) \leq \mathcal{B}_t \left( \phi + \lambda \right) + \indi{K_t}.
\end{equation}
Now, by Proposition~\ref{ExistenceLimit}, the non-increasing sequence $\left(V_{t+1}^k \right)_{k\in \mathbb{N}}$ converges
uniformly to $V_{t+1}^*\in Fbb_{t+1}$ on the compact set $K_{t+1}$. Thus,
for any fixed $\epsilon > 0$, there exists an integer $k_0 \in \mathbb{N}$,
such that we have
\[
V_{t+1}^k \leq V_{t+1}^k + \indi{K_{t+1}} \leq V_{t+1}^* + \epsilon + \indi{K_{t+1}}\,,
\]
for all $k \geq k_0$.
By~Assumption~\ref{Assumptions}-\eqref{order-preserving} and~Assumption~\ref{Assumptions}-\eqref{Additively-subhomogeneous}, the operator $\mathcal{B}_t$ is order preserving and additively $M_t$-subhomogeneous, thus we get using Equation~\eqref{eq:tmp111} that
\begin{align*}
\mathcal{B}_t\left( V_{t+1}^k \right) \leq \mathcal{B}_t\left ( V_{t+1}^k + \indi{K_{t+1}} \right)
& \leq \mathcal{B}_t\np{V_{t+1}^* + \epsilon +\indi{K_{t+1}}}, \tag{by Assumption~\ref{Assumptions}-\eqref{order-preserving}} \\
& \leq \mathcal{B}_t\np{V_{t+1}^* + \epsilon} + \indi{K_{t}}, \tag{by~Equation~\eqref{eq:tmp111}} \\
& \leq \mathcal{B}_t\np{V_{t+1}^*} + M_t\epsilon + \indi{K_t} \tag{by~Assumption~\ref{Assumptions}-\eqref{Additively-subhomogeneous}}\; .
\end{align*}
Adding $\indi{K_t}$ on the left hand side, we have for every $k\geq k_0$ that
\(
\mathcal{B}_t\left(V_{t+1}^k\right) + \indi{K_t} \leq \mathcal{B}_t\left( V_{t+1}^* \right) + M_t\epsilon + \indi{K_t} .
\)
Thus, taking the limit when $k$ goes to infinity we obtain that
\begin{equation*}
\phi + \indi{K_t} \leq \mathcal{B}_t\left( V_{t+1}^* \right) + M_t \epsilon + \indi{K_t} .
\end{equation*}
The result has been proved for all $\epsilon >0$ and we have thus shown that
$\phi = \mathcal{B}_t\left( V_{t+1}^*\right)$ on the compact set $K_t$. We conclude that
$\left( \mathcal{B}_t\left( V_{t+1}^k\right) \right)_{k\in \mathbb{N}}$ converges uniformly to
the function $\mathcal{B}_t\left( V_{t+1}^*\right)$ on the compact set $K_t$.
For the case $\mathop{\mathrm{opt}} = \sup$, \emph{mutatis mutandis}, we have that
\(
\mathcal{B}_t\np{V_{t+1}^k} \leq \mathcal{B}_t\np{V_{t+1}^*}.
\)
Similarly, as the sequence $\np{V_{t+1}^k}$ is non-decreasing and $\mathcal{B}_t$ is
order preserving, one gets that for every $k$ large enough
\[
\mathcal{B}_t\np{V_{t+1}^*} \geq \mathcal{B}_t\np{V_{t+1}^*} + \indi{K_{t+1}} \geq \mathcal{B}_t\np{V_{t+1}^* + \epsilon + \indi{K_{t+1}}}.
\]
Thus, by Equation~\eqref{eq:tmp111} and $M_t$-sub-homogeneity we have that
\(
\mathcal{B}_t\np{V_{t+1}^*} + \indi{K_t} \leq \mathcal{B}_t\np{V_{t+1}^k} + M_t \epsilon + \indi{K_t},
\)
which yields the result when $k$ goes to infinity. This ends the proof.
\end{proof}
We want to exploit the fact that our approximations of the final cost function
are exact in the sense that we have equality between $V_T^k$ and $V_T$ at the
points drawn in Algorithm~\ref{Tropical Dynamic Programming}, that is, the tightness assumption of the
selection function is much stronger at time $T$ than for times $t<T$. Thus we
want to propagate the information backward in time: starting from time $t=T$ we
want to deduce information on the approximations for times $t<T$.
In order to show that $V_t = V_t^*$ on some set $S_t$, a dissymmetry between upper and lower approximations is emphasized. We introduce the notion of optimal sets $\left( S_t \right)_{t\in \ce{0,T}}$ with respect to a sequence of functions $\left(\phi_t\right)_{t\in \ce{0,T}}$ as a condition on the sets $\left( S_t \right)_{t\in \ce{0,T}}$ such that in order to compute the restriction of $\mathcal{B}_t \left( \phi_{t+1} \right)$ to $S_t$, one only needs to know $\phi_{t+1}$ on the set $S_{t+1}$. The Figure~\ref{fig:OptimalSets} illustrates this notion.
\begin{definition}[Optimal sets]
\label{optimalDraws}
Let $\left(\phi_t\right)_{t\in \ce{0,T}}$ be $T{+}1$ functions on $\mathbb{X}$. A
sequence of sets $\left( S_t \right)_{t\in \ce{0,T}}$ is said to be
\emph{$\left(\phi_t\right)$-optimal} if for every $t\in \ce{0,T-1}$, we have
\begin{equation}
\label{eq:optimalSets}
\mathcal{B}_t\left( \phi_{t+1} + \indi{S_{t+1}} \right) + \indi{S_t} = \mathcal{B}_t \left( \phi_{t+1} \right) + \indi{S_t}.
\end{equation}
\end{definition}
\begin{figure}
\caption{Illustration of the notion of optimal sets (Definition~\ref{optimalDraws}).}
\label{fig:OptimalSets}
\end{figure}
When approximating from below, the optimality of sets is only needed for the
limit functions $\left(V_t^*\right)_{t\in \ce{0,T}}$, whereas when approximating
from above, one needs the optimality of sets with respect to the value functions
$\left(V_t\right)_{t\in \ce{0,T}}$. It seems easier to ensure the $\np{V_t^*}$-optimality of
sets than $\np{V_t}$-optimality as the function $V_t^*$ is known through the
sequence $\left( V_t^k \right)_{k\in \mathbb{N}}$, whereas the function $V_t$ is,
\emph{a priori}, unknown. This fact is discussed in
Sections~\ref{SDDP_Example} and~\ref{sec:Exemples_switch}.
\begin{lemma}[Uniqueness in restricted Bellman Equations]
\label{UnicityBellman}
Let $\left( X_t \right)_{t \in \ce{0,T}}$ be a sequence of sets such that for
every $t\in \ce{0,T}$, $X_t \subset \mathop{\mathrm{dom}}\np{V_t}$ and which is
\noindent -- $\np{V_t}$-optimal when $\mathop{\mathrm{opt}} = \inf$,
\noindent -- $\np{V_t^*}$-optimal when $\mathop{\mathrm{opt}} = \sup$.
Assume that the sequence of functions $\left(V_t^* \right)_{t\in \ce{0,T}}$ satisfies the following restricted Bellman Equations:
\begin{equation}
\label{RestrictedBellmanEquation}
V_T^* + \indi{X_T}= \psi + \indi{X_T}
\quad \text{ and }\quad
\forall t\in \ce{0,T-1}, \ \mathcal{B}_t\left( V_{t+1}^* \right) + \indi{X_t} = V_{t}^* + \indi{X_t}
\; .
\end{equation}
Then, for every $t\in \ce{0,T}$ and every $x\in X_t$, we have that $V_{t}^* (x) = V_t(x)$.
\end{lemma}
\begin{proof}
We prove the lemma by backward induction on time $t\in \ce{0,T}$. We first treat the case $\mathop{\mathrm{opt}} = \inf$.
At time $t=T$, since $V_T$ is given by Equation~\eqref{DynamicProgramming}, we have $V_T= \psi$. We therefore have
by Equation~\eqref{RestrictedBellmanEquation} that $V_T^* + \indi{X_T} = \psi + \indi{X_T} = V_T + \indi{X_T}$, which gives the fact that
functions $V_T^*$ and $V_T$ coincide on the set $X_T$.
Now, let time $t\in \ce{0,T-1}$ be fixed and assume that we have $V_{t+1}^*(x) =
V_{t+1}(x)$ for every $x\in X_{t+1}$, or equivalently:
\begin{equation}
\label{eq:tmp0}
V_{t+1}^* + \indi{X_{t+1}} = V_{t+1} + \indi{X_{t+1}}\,.
\end{equation}
Using Lemma~\ref{ProprietesFaciles}-\eqref{proofitema}, the sequence of functions $\np{V_t^k}_{k\in \mathbb{N}}$ is lower bounded by $V_t$. Taking the
limit in $k$, we obtain that $V_t^* \geq V_t$, thus we only have to prove that
$V_t^* \leq V_t$ on $X_t$, that is $V_t^* + \indi{X_t} \leq V_t + \indi{X_t}$.
We successively have:
\begin{align}
V_{t}^* + \indi{X_t}
& = \mathcal{B}_t\left( V_{t+1}^* \right) + \indi{X_t}
\tag{by~\eqref{RestrictedBellmanEquation}}
\\
& \le
\mathcal{B}_t\left( V_{t+1}^* +\indi{X_{t+1}} \right) + \indi{X_t}
\tag{$\mathcal{B}_t$ is order preserving}
\\
& = \mathcal{B}_t\left( V_{t+1} +\indi{X_{t+1}} \right) + \indi{X_t}
\tag{by induction assumption~\eqref{eq:tmp0}}
\\
& = \mathcal{B}_t\left( V_{t+1} \right) + \indi{X_t}
\tag{by~\eqref{eq:optimalSets}, $(X_t)_{{t \in \ce{0,T}}}$ is $(V_t)$-optimal}
\\
& = V_t + \indi{X_t},
\tag{by~\eqref{DynamicProgramming}}
\end{align}
which concludes the proof in the case of $\mathop{\mathrm{opt}} = \inf$.
Now we prove the case $\mathop{\mathrm{opt}} = \sup$ in a similar way by backward induction on time $t\in \ce{0,T}$. As for the case $\mathop{\mathrm{opt}}=\inf$, at time $t=T$, one has $V_T^* + \indi{X_T} = V_T + \indi{X_T}$. Now assume that for some $t\in \ce{0,T-1}$ one has $V_{t+1}^{*} + \indi{X_{t+1}} = V_{t+1} + \indi{X_{t+1}}$. By Lemma~\ref{ProprietesFaciles}-\eqref{proofitema}, the sequence of functions $\np{V_t^k}$ is now upper bounded by $V_t$. Thus, taking the limit in $k$ we obtain that $V_t^* \leq V_t$ and we only need to prove that $V_t^* + \indi{X_t} \geq V_t + \indi{X_t}$. We successively have:
\begin{align}
V_{t} + \indi{X_t}
& = \mathcal{B}_t\left( V_{t+1} \right) + \indi{X_t}
\tag{by~\eqref{DynamicProgramming}}
\\
& \leq
\mathcal{B}_t\left( V_{t+1} +\indi{X_{t+1}} \right) + \indi{X_t}
\tag{$\mathcal{B}_t$ is order preserving}
\\
& = \mathcal{B}_t\left( V_{t+1}^* +\indi{X_{t+1}} \right) + \indi{X_t}
\tag{by induction assumption~\eqref{eq:tmp0}}
\\
& = \mathcal{B}_t\left( V_{t+1}^* \right) + \indi{X_t}
\tag{$(X_t)_{{t \in \ce{0,T}}}$ is $(V_t^*)$-optimal}
\\
& = V_t^* + \indi{X_t}, \tag{by~\eqref{RestrictedBellmanEquation}}
\end{align}
This ends the proof.
\end{proof}
One cannot expect the limit function, $V_t^*$, to be equal everywhere to the
value function, $V_t$, given by Equation~\eqref{DynamicProgramming}. However,
one can expect an (almost sure over the draws) equality between the two
functions $V_t$ and $V_t^*$ on all possible cluster points of sequences
$\left( y_k \right)_{k\in \mathbb{N}}$ with $y_k \in K_t^k$ for all $k\in \mathbb{N}$, that is,
on the set $\limsup K_t^k$.
\begin{theorem}[Convergence of Tropical Dynamic Programming]
\label{ConvergenceTheorem}
Define $K_t^* := \limsup_k K_t^k$, for every time $t\in \ce{0,T}$. Assume
that, $\mathbb{P}$-a.s.\ the sets $\left(K_t^*\right)_{t\in\ce{0,T}}$ are
$\np{V_t}$-optimal when $\mathop{\mathrm{opt}} = \inf$ (resp. $\np{V_t^*}$-optimal when
$\mathop{\mathrm{opt}} = \sup$). Then, $\mathbb{P}$-a.s.\ for every $t \in \ce{0,T}$ the function
$V_t^*$ defined in Proposition~\ref{ExistenceLimit} is equal to the value function $V_t$
on $K_t^*$.
\end{theorem}
\begin{proof} We will only consider the case $\mathop{\mathrm{opt}} = \inf$ as the proof for the case $\mathop{\mathrm{opt}} = \sup$ is analogous.
We will show that Equation~\eqref{RestrictedBellmanEquation} holds $\mathbb{P}$-almost surely with $X_t = K_t^*$, $t\in \ce{0,T}$.
The proof is decomposed in several steps
\noindent $\bullet$ Reformulation using the separability of $\mathbb{X}$.
Let $C := (C_t)_t \subset \mathbb{X}^{T+1}$ be a compact set included in $\mathop{\mathrm{dom}}(V_0)\times \ldots \times \mathop{\mathrm{dom}}(V_T)$. For every $t\in \ce{0,T-1}$, set $\Delta_t : x_t \in \mathbb{X} \to V_t^*(x_t) - \mathcal{B}_t\np{V_{t+1}^*}(x_t) \in \overline{\mathbb{R}}$, $\Delta_T : x_T \in \mathbb{X} \to V_T^*(x_T) - V_T(x_T) \in \overline{\mathbb{R}}$ and $\Delta := \np{\Delta_t}_{t\in \ce{0,T}}$. Also write $K^* := \np{K_t^*}_{t\in \ce{0,T}}$. We want to show that
\begin{equation}
\label{eq:thm1}
\mathbb{P}\Bc{\forall x \in C, \bp{x \in K^* \Rightarrow \Delta(x) = 0} } = 1 \; .
\end{equation}
By continuity of $V_t^* - \mathcal{B}_t\np{V_{t+1}^*}$ (resp. $V_T^* - V_T$) for every $t\in \ce{0,T-1}$ (resp. $t = T$) and compactness of $C$, Equation~\eqref{eq:thm1} is equivalent to
\begin{equation}
\label{eq:thm2}
\mathbb{P} \Bc{\forall \epsilon > 0, \exists r > 0, \forall x \in C, \bp{x \in \np{K^* + r \mathbb{B}} \Rightarrow \Delta(x) \leq \epsilon }} = 1 \; .
\end{equation}
Without loss of generality, by density, one may restrict $\epsilon$ and $r$ to the countable set $\mathbb{Q}_+^*$ and the set $C$ to the set $C \cap \np{\mathbb{Q}^n}^{T+1}$, that is, Equation~\eqref{eq:thm2} is equivalent to
\begin{equation}
\label{eq:thm3}
\forall \epsilon \in \mathbb{Q}_+^*, \exists r \in \mathbb{Q}_+^*, \forall x \in C \cap \np{\mathbb{Q}^n}^{T+1},
\mathbb{P}\bc{ x \in \np{K^* + r\mathbb{B}} \Rightarrow \Delta(x) \leq \epsilon } = 1 \;.
\end{equation}
For the remainder of the proof, we fix $\epsilon \in \mathbb{Q}_+^*$. Now, we exploit
the equicontinuity of the sequence of functions $\np{V_t^k}_{k\in \mathbb{N}}$ and
$\bp{\mathcal{B}_t\np{V_{t+1}^k}}_{k\in \mathbb{N}}$ in order to compute a suitable radius
$r' \in \mathbb{Q}_+^*$ so as to satisfy Equation~\eqref{eq:thm3}. We separate the cases $t=T$
and $t<T$.
\noindent $\bullet$ Equicontinuity and uniform convergence, case
$\mathbf{t = T}$. As the functions $V_T$ and $V_t^k$, for $k\in \mathbb{N}$ are in
$Fbb_T$, they share a common modulus of continuity on the compact
$C_T$. Thus, they share a common uniform modulus of continuity. Hence, there
exists a radius $r_T \in \mathbb{Q}_+^*$, such that for every $x_T \in C_T \cap \mathbb{Q}^n$,
if $y_T \in B\np{x_T, r_T} \cap \mathop{\mathrm{dom}}{V_T}$, then
\begin{equation}
\label{eq:thm4}
\lvert V_T^{k+1}\np{x_T} - V_T^{k+1}(y_T) \rvert \leq \frac{\epsilon}{3} \ \text{and} \ \lvert V_T\np{y_T} - V_T(x_T) \rvert \leq \frac{\epsilon}{3} \; .
\end{equation}
Now, as $\np{V_T^k}_{k\in \mathbb{N}}$ converges uniformly to $V_T^*$ on the compact $C_T \subset \mathop{\mathrm{dom}} V_T$, there exists a rank $k_T \in \mathbb{N}$ such that, if $k\geq k_T$, then for all $x_T \in C_T$,
\begin{equation}
\label{eq:thm5}
\lvert V_T^{k+1}\np{x_T} - V_T^*\np{x_T} \rvert \leq \frac{\epsilon}{3}.
\end{equation}
\noindent $\bullet$ Equicontinuity and uniform convergence, case
$\mathbf{t\in \ce{0,T-1}}$. The sequences $\np{V_t^k}_{k\in \mathbb{N}}$,
resp. $\np{\mathcal{B}_t\np{V_{t+1}^k}}_{k\in \mathbb{N}}$ are uniformly equicontinuous on the
compact $C_t \subset \mathop{\mathrm{dom}}\np{V_t}$. There exists a radius $r_t \in \mathbb{Q}_+^*$ such
that for every $x_t \in C_t$, if $y_t \in B\np{x_t,r_t} \cap \mathop{\mathrm{dom}} V_t$, then
for every $k\in \mathbb{N}$,
\begin{equation}
\label{eq:thm6}
\lvert V_t^{k+1}(x_t) - V_t^{k+1}(y_t) \rvert \leq \frac{\epsilon}{4} \quad \text{and}
\quad \lvert \mathcal{B}_t\np{V_{t+1}^{k+1}}(y_t) - \mathcal{B}_t\np{V_{t+1}^{k+1}}(x_t) \rvert \leq \frac{\epsilon}{4} \; .
\end{equation}
By uniform convergence of the sequence $\np{V_t^k}_{k\in \mathbb{N}}$ (resp. $\np{\mathcal{B}_t\np{V_{t+1}^k}}_{k\in \mathbb{N}}$) to $V_t^*$ (resp. to $\mathcal{B}_t\np{V_{t+1}^*}$) on the compact $C_t \subset \mathop{\mathrm{dom}}(V_t)$, there exists a rank $k_t \in \mathbb{N}$ such that, if $k\geq k_t$, then for every $x_t \in C_t$
\begin{equation}
\label{eq:thm7}
\lvert V_t^*(x_t) - V_t^{k+1}(x_t) \rvert \leq \frac{\epsilon}{4} \ \text{and} \ \lvert \mathcal{B}_t\np{V_{t+1}^{k+1}}(x_t) - \mathcal{B}_t\np{V_{t+1}^*}(x_t)\rvert \leq \frac{\epsilon}{4}
\end{equation}
\noindent $\bullet$ There exists a draw $x_t^{k^*}$ of the sequence of trial points $\np{x_t^k}_{k\in \mathbb{N}}$ arbitrarily close to any given point of $K^*$.
Throughout the remainder of the proof, we fix ranks $k_t \in \mathbb{N}$ and radii $r_t \in \mathbb{Q}_+^*$ defined in Step $2$ and set
\[
\overline{k} := \max_{t\in \ce{0,T}} k_t \in \mathbb{N} \ \text{and} \ \underline{r} := \min_{t\in \ce{0,T}} r_t \; .
\]
By the Trial point assumption (Assumption~\ref{TrialpointAssumption}), there exists $r \in \mathbb{Q}_+^*$ such that, for every $x\in C$,
\begin{equation}
\label{eq:thm8}
\mathbb{P}\nc{x \in K^* + r\mathbb{B} \Rightarrow x \in \limsup_{k\in \mathbb{N}} B\np{x^k, \underline{r}/2}} = 1 \; .
\end{equation}
Now, fix $x\in C \cap \np{\mathbb{Q}^n}^{T+1}$. By Equation~\eqref{eq:thm8}, $\mathbb{P}$-a.s., if $x\in K^* + r\mathbb{B}$ then $x \in \limsup_{k\in \mathbb{N}} B\np{x^k, \underline{r}/2}$, so by Lemma~\ref{lemmatrial}, $x^k \in B(x, \underline{r})$ for infinitely many indices $k\in \mathbb{N}$. Hence, $\mathbb{P}$-a.s., if $x\in K^* + r\mathbb{B}$, then there exists $k^* \geq \overline{k}$ such that
\begin{equation}
\label{eq:thm9}
x^{k^*} \in B(x, \underline{r}).
\end{equation}
\noindent $\bullet$ Conclusion.
When $t = T$, by triangle inequality, $\mathbb{P}$-a.s. we have that
\begin{align*}
\Delta(x_T)
& \leq \underbrace{\lvert V_T^*\np{x_T} - V_T^{k^*+1}\np{x_T} \rvert}_{\leq \epsilon/3 \ \text{by \eqref{eq:thm5} and \eqref{eq:thm9}} } + \underbrace{\lvert V_T^{k^*+1}\np{x_T} - V_T^{k^*+1}\np{x_T^{k^*}}\rvert}_{\leq \epsilon/3 \ \text{by \eqref{eq:thm4} and \eqref{eq:thm9}}}
\\
& \quad + \underbrace{\lvert V_T^{k^*+1}\np{x_T^{k^*}} - V_T\np{x_T^{k^*}} \rvert}_{= 0 \ \text{by Tightness Lemma~\ref{ProprietesFaciles}-\eqref{proofitemd}}}
+ \underbrace{\lvert V_T\np{x_T^{k^*}} - V_T\np{x_T} \rvert}_{\leq \epsilon/3 \ \text{by \eqref{eq:thm4} and \eqref{eq:thm9}} }
\\
& \leq \epsilon \; .
\end{align*}
When $t \in \ce{0,T-1}$, by triangle inequality, $\mathbb{P}$-a.s. we have that
\begin{align*}
\Delta(x_t)
& \leq \underbrace{\lvert V_t^*(x_t) - V_t^{k^*+1}(x_t) \rvert}_{\leq \epsilon/4 \ \text{by \eqref{eq:thm7}}} + \underbrace{\lvert V_t^{k^*+1}(x_t) - V_t^{k^*+1}(x_t^{k^*}) \rvert}_{\leq \epsilon/4 \ \text{by \eqref{eq:thm6} and \eqref{eq:thm9}} }
\\
& \quad + \underbrace{\lvert V_t^{k^*+1}(x_t^{k^*}) - \mathcal{B}_t\np{V_{t+1}^{k^*+1}}(x_t^{k^*})\rvert}_{= \ 0 \ \text{by Lemma~\ref{ProprietesFaciles}-\eqref{proofitemc}}}
\\
& \quad + \underbrace{\lvert \mathcal{B}_t\np{V_{t+1}^{k^*+1}}(x_t^{k^*}) - \mathcal{B}_t\np{V_{t+1}^{k^*+1}}(x_t) \rvert}_{\leq \epsilon/4 \ \text{by \eqref{eq:thm6} and \eqref{eq:thm9}} } + \underbrace{\lvert \mathcal{B}_t\np{V_{t+1}^{k^*+1}}(x_t)- \mathcal{B}_t\np{V_{t+1}^{*}}(x_t) \rvert}_{\leq \epsilon/4 \ \text{by \eqref{eq:thm7}} }
\\
& \leq \epsilon \; .
\end{align*}
Thus, we have shown Equation~\eqref{eq:thm3}, \emph{i.e.} $\mathbb{P}$-a.s., for every $t\in \ce{0,T}$ we have $V_t^* = \mathcal{B}_t\np{V_{t+1}^*}$ on $K_t^*$.
The family $\left( V_t^* \right)_{t\in \ce{0,T}}$
satisfies the restricted Bellman Equation~\eqref{RestrictedBellmanEquation} with the sets
$\left( K_t^* \right)_{t\in \ce{0,T}}$. The conclusion follows from the Uniqueness lemma (Lemma~\ref{UnicityBellman}).
\end{proof}
\section{SDDP selection function: lower approximations in the linear-convex framework}
\label{SDDP_Example}
We will show that our framework encompasses (the deterministic version of) the
SDDP algorithm as described in \cite{Gi.Le.Ph2015}
and yields the same convergence result. Let $\mathbb{X} = \mathbb{R}^n $ be a continuous
state space and $\mathbb{U} = \mathbb{R}^m$ a continuous control space. We want to solve the
following problem
\begin{equation}
\label{pb:linear-convex}
\begin{aligned}
\min_{\substack{x=\np{x_0, \ldots, x_T} \\u = \np{u_0, \ldots u_{T-1}}}} & \sum_{t=0}^{T-1} c_t (x_t, u_t) + \psi(x_T) \\ \text{s.t.} \ & x_0 \in \mathbb{X} \ \text{is given,} \\
& \forall t \in \ce{0,T}, \ x_t \in \mathbb{X}, \\
& \forall t \in \ce{0,T-1}, \ u_t \in \mathbb{U}, \, x_{t+1} = f_t(x_t, u_t).
\end{aligned}
\end{equation}
We make assumptions similar to those found in the SDDP literature (\emph{e.g.}~\cite{Gi.Le.Ph2015}); note that in our formulation, the constraints on the states and controls are embedded in the cost functions. We refer to~\cite{Au.Ek1984} and~\cite{Ro.We2009} for results on set-valued mappings.
\begin{assumption}
\label{hypo:linear-convex}
For all $t \in \ce{0,T-1}$ we assume that:
\begin{enumerate}
\item The dynamic $f_t : \mathbb{X} \times \mathbb{U} \longrightarrow \mathbb{X}$ is linear, $f_t(x, u) = A_t x + B_t u$, for some given matrices $A_t$ and $B_t$ of compatible dimensions.
\item\label{hypo:convexcosts} The cost function $c_t : \mathbb{X} \times \mathbb{U} \longrightarrow \overline{\mathbb{R}}$ is a proper lower semicontinuous (l.s.c.) convex function which is $L_{c_t}$-Lipschitz continuous on its (convex) domain, $\mathop{\mathrm{dom}}\np{c_t}$.
\item The projection on $\mathbb{X}$ of $\mathop{\mathrm{dom}}\np{c_t}$, denoted $X_t$, is a convex polytope with non-empty interior.
\item\label{hypo:multiapplication} Define the set-valued mapping $U_t : \mathbb{X} \rightrightarrows \mathbb{U}$, for every $x\in \mathbb{X}$
\[
U_t\np{x} := \left\{ u \in \mathbb{U} \mid \np{x,u} \in \mathop{\mathrm{dom}}\np{c_t} \ \text{and} \ f_t\np{x,u} \in X_{t+1}\right\},
\]
where we assume that
\noindent $\bullet$ For every $x\in X_t$, $U_t(x)$ is compact.
\noindent $\bullet$ The graph of the set-valued mapping $U_t$ has a non-empty interior.
\noindent $\bullet$ For every $x\in X_t$, there exists $u \in U_t(x)$.\footnote{known as a \emph{Relatively Complete Recourse} assumption.}
\noindent $\bullet$ The set-valued mapping $U_t$ is $L_{U_t}$-\emph{Lipschitz continuous}\footnote{For all $x,x' \in \mathbb{X}$, $U_t(x') \subset U_t(x) + L_{U_t} \lVert x' - x \rVert \mathbb{B}$, where $\mathbb{B}$ denotes the unit ball.} (hence, both upper and lower semicontinuous).
\end{enumerate}
Moreover, at time $t = T$, we assume that $X_T := \mathop{\mathrm{dom}}\np{V_T}\subset \mathbb{X}$ is
convex and compact with non-empty interior, that the final cost function
$\psi : \mathbb{X} \longrightarrow \overline{\mathbb{R}}$ is a proper convex l.s.c. function with
known compact convex domain, and that $\psi$ is $L_T$-Lipschitz continuous on its
domain.
\end{assumption}
\begin{remark}
Under Assumption~\ref{hypo:linear-convex}, the graph of the set-valued mapping $U_t$ is convex, and its domain is $X_t$ by the RCR assumption.
\end{remark}
\begin{remark}
A sufficient condition to ensure that the set-valued mapping $U_t$ is Lipschitz
continuous is given in~\cite[Example 9.35]{Ro.We2009}: $U_t$ is Lipschitz when
its graph is convex polyhedral, which is the classical framework of
SDDP. Moreover a Lipschitz constant can be explicitly computed.
\end{remark}
For every time step $t\in \ce{0,T-1}$, recall that the \emph{Bellman operator}
$\mathcal{B}_t$ is defined, for every function $\phi : \mathbb{X} \rightarrow \overline{\mathbb{R}}$, by:
\begin{equation}
\label{Bellman:linear-convex}
\mathcal{B}_t(\phi ) := \inf_{u \in \mathbb{U}}\mathcal{B}p{ c_t \np{\cdot, u} + \phi\bp{f_t\np{\cdot, u}}} \; .
\end{equation}
Moreover, for every function $\phi : \mathbb{X} \rightarrow \overline{\mathbb{R}}$ and every $\np{ x, u } \in \mathbb{X} \times \mathbb{U}$ we define
\begin{equation}
\mathcal{B}_t^u\left( \phi \right)(x) := c_t\left(x,u\right) + \phi\bp{f_t\np{x, u}} \in \overline{\mathbb{R}} \; .
\end{equation}
The Bellman equations of Problem \eqref{pb:linear-convex} can be written using
the Bellman operators $\mathcal{B}_t$ given by Equation~\eqref{Bellman:linear-convex}:
\begin{equation}
\label{DP:linear-convex}
V_T = \psi \quad\text{and}\quad
\forall t\in \ce{0,T{-}1}, V_t: x \in \mathbb{X} \mapsto \mathcal{B}_t(V_{t+1})(x) \in \overline{\mathbb{R}}\; .
\end{equation}
In Proposition~\ref{SDDP:stability}, we establish a stability property of the Bellman
operators given by Equation~\eqref{Bellman:linear-convex}: the image under the operator
$\mathcal{B}_t$ of a Lipschitz continuous function is again Lipschitz continuous, and we
give an explicit (conservative) Lipschitz constant.
\begin{proposition}
\label{SDDP:stability}
Under Assumption~\ref{hypo:linear-convex}, for every $t\in \ce{0,T-1}$, given a constant
$L_{t+1} > 0$, there exists a constant $L_t > 0$ such that if
$\phi : \mathbb{X} \to \overline{\mathbb{R}}$ is convex l.s.c. proper with domain $X_{t+1}$ and $L_{t+1}$-Lipschitz continuous on
$X_{t+1}$ then $\mathcal{B}_t\left( \phi \right)$ is convex l.s.c. proper with domain $X_t$ and $L_t$-Lipschitz continuous on
$X_t$.
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{SDDP:stability}]
Fix $t\in \ce{0,T-1}$ and let $\phi : \mathbb{X} \to \overline{\mathbb{R}}$ be a convex l.s.c.
proper function with domain $X_{t+1}$ and $L_{t+1}$-Lipschitz continuous
function on $X_{t+1}$. We show that $\mathop{\mathrm{dom}}\bp{\mathcal{B}_t\np{\phi}} = X_t$. Let
$x_t \in X_t$ be arbitrary. By the RCR Assumption, there exists
$u_t \in U_t\np{x_t}$ such that $f_t \left( x_t, u_t \right) \in X_{t+1}$ and
$\np{x_t,u_t} \in \mathop{\mathrm{dom}}\np{c_t}$. As the domain of $\phi$ is $X_{t+1}$, we
have that
\[
\inf_{u \in \mathbb{U}} \mathcal{B}p{c_t\np{x_t,u} + \phi\bp{f_t\np{x_t,u}}} \leq c_t\left(x_t,u_t\right) + \phi \bp{f_t\np{x_t,u_t}} < +\infty \; .
\]
Thus, we have shown that $\mathop{\mathrm{dom}} \bp{ \mathcal{B}_t\np{ \phi}}$ includes $X_t$. Conversely, if $x\notin X_t$, then for every $u\in \mathbb{U}$, we have $c_t\np{x,u} = +\infty$, hence $x\notin \mathop{\mathrm{dom}}\np{\mathcal{B}_t\np{\phi}}$. This implies that $\mathop{\mathrm{dom}}\bp{\mathcal{B}_t\np{\phi}} \subset X_t$ and the equality follows.
Moreover, the above infimum can be restricted to $U_t\np{x}$, which is compact. The function $x \mapsto \mathcal{B}_t\left( \phi \right)(x)$ is convex (resp. l.s.c.) on $X_{t}$, as $\np{x,u} \mapsto c_t\left(x,u\right) + \phi\left(f_t\left(x,u\right)\right)$ is jointly convex (resp. l.s.c., and $U_t(x)$ is compact).
Since $c_t(x, \cdot)$, $\phi$ are l.s.c. and $f_t\np{x, \cdot}$ is continuous, the above infimum is attained. We will denote by $u_x \in U_t\np{x}$ a minimizer, note that $f_t\np{x,u_x} \in X_{t+1}$.
We finally show that the function $\mathcal{B}_t\left( \phi \right)$ is Lipschitz on $X_t$ with a constant $L_t> 0$ that only depends on the data of Problem \eqref{pb:linear-convex}. Fix $x, x' \in X_t$ and denote by $u_{x'} \in U_t\np{x'}$ an optimal control at $x'$, \emph{i.e.} $\mathcal{B}_t^{u_{x'}}\np{\phi}\np{x'}= \mathcal{B}_t\np{\phi}\np{x'}$. For every $u \in U_t\np{x}$, we have that
\begin{align}
\mathcal{B}_t\np{\phi}\np{x}
& \leq \mathcal{B}_t\np{\phi}\np{x'} + \mathcal{B}_t^u\np{\phi}\np{x} - \mathcal{B}_t\np{\phi}\np{x'}
\notag
\\
& = \mathcal{B}_t\np{\phi}\np{x'} + \bp{c_t\np{x,u} - c_t\np{x',u_{x'}}} + \mathcal{B}p{\phi\bp{f_t\np{x,u}} - \phi\bp{f_t\np{x',u_{x'}}}}
\notag
\\
& \leq \mathcal{B}_t\np{\phi}\np{x'} + L_{c_t}\np{ \lVert x - x' \rVert + \lVert u - u_{x'} \rVert}
\label{tmp123456789} \\
& \quad + L_{t+1} \mathcal{B}p{\lambda_{\tiny \text{max}}\np{A_t^TA_t}^{1/2}\lVert x - x' \rVert + \lambda_{\tiny \text{max}}\np{B_t^TB_t}^{1/2}\lVert u - u_{x'} \rVert }
\notag.
\end{align}
Indeed, as the domain of $c_t(x, \cdot)$ contains $U_t(x)$, the domain of $\phi$ is $X_{t+1}$, and $f_t\np{x,u} \in X_{t+1}$ for every $u\in U_t(x)$, Equation~\eqref{tmp123456789} holds for every $u\in U_t(x)$.
Now, we will bound from above $\lVert u - u_{x'} \rVert$ by $\lVert x - x' \rVert$ multiplied by a constant.
By Assumption~\ref{hypo:linear-convex}-\eqref{hypo:multiapplication} the set-valued mapping $U_t$ is $L_{U_t}$-Lipschitz on its domain $X_t$. Hence, by definition, there exists $\tilde{u} \in U_t\np{x}$ such that:
\begin{equation}
\lVert \tilde{u} - u_{x'} \rVert \leq L_{U_t} \lVert x - x' \rVert. \label{tmp:LipschitzIntersection}
\end{equation}
Replacing $u$ by $\tilde{u}$ in Equation~\eqref{tmp123456789}, by Equation~\eqref{tmp:LipschitzIntersection} we deduce that
\(
\mathcal{B}_t\np{\phi}\np{x} - \mathcal{B}_t\np{\phi}\np{x'} \leq L_{t} \lVert x - x' \rVert,
\)
where the Lipschitz constant $L_{t}>0$ only depends on the data of Problem \eqref{pb:linear-convex}. \emph{Mutatis mutandis}, we have that
\(
\mathcal{B}_t\np{\phi}\np{x'} - \mathcal{B}_t\np{\phi}(x) \leq L_{t} \lVert x - x' \rVert,
\)
and the result follows.
\end{proof}
\begin{remark}
Knowing the value function at time $t = T$, by
Proposition~\ref{SDDP:stability} we can compute recursively backward in time
the domain of $V_t$ for each $t < T$: it is equal to the projection on $\mathbb{X}$ of
the domain of the cost function, which is $X_t$ and known to the decision
maker. Moreover using Proposition~\ref{SDDP:stability} we have that, for every
$t$, the value function $V_t$ is convex l.s.c. proper and Lipschitz continuous on
its domain, with a computable constant.
\end{remark}
As lower semicontinuous proper convex functions can be approximated by a supremum of affine functions, for every $t\in \ce{0,T}$ we define $Fb_t^{\text{\tiny SDDP}}$ to be the set of functions $\phi : \mathbb{X} \to \overline{\mathbb{R}}$ of the form $\phi(x) = \langle a, x \rangle + b$ if $x \in X_t$ and $\phi(x) = +\infty$ otherwise, with $a \in \mathbb{X}$, $b\in \mathbb{R}$ and $\lVert a \rVert_2 \leq L_t$. Moreover, we shall denote by $Fbb_t^{\text{\tiny SDDP}}$ the set of proper convex functions $\phi : \mathbb{X} \to \overline{\mathbb{R}}$ with domain $X_t$ which are $L_t$-Lipschitz continuous on $X_t$.
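For intuition, a set $F \subset Fb_t^{\text{\tiny SDDP}}$ can be stored as a collection of pairs $(a,b)$ and the function $\mathcal{V}_F$ evaluated as a pointwise maximum. Below is a minimal sketch (in Python/NumPy, with hypothetical names; the membership test for $X_t$ is assumed to be supplied by the caller).
\begin{verbatim}
import numpy as np

def v_sup(cuts, x, in_X_t=lambda x: True):
    """Pointwise supremum V_F of basic functions of Fb_t^SDDP.

    Each cut is a pair (a, b) with ||a||_2 <= L_t, representing
    x -> <a, x> + b on X_t and +infinity outside X_t."""
    if not in_X_t(x):
        return np.inf
    return max(float(a @ x + b) for (a, b) in cuts)
\end{verbatim}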
\begin{proposition}
\label{propStructurale_SDDP}
Under Assumption~\ref{hypo:linear-convex}, the Problem~\ref{pb:linear-convex} and the
Bellman operators defined in Equation~\eqref{DP:linear-convex} satisfy the structural
assumptions given in Assumption~\ref{Assumptions}.
\end{proposition}
\begin{proof} We prove successively each assumption listed in Assumption~\ref{Assumptions}.
\noindent
$\bullet$\ref{Assumptions}-\eqref{Stability-pointwise-optimum}.
Recall that we are here in the case $\mathop{\mathrm{opt}} = \sup$.
Fix $t\in \ce{0,T}$ and let $F \subset Fb^{\text{\tiny SDDP}}_t$ be a set of
affine $L_t$-Lipschitz continuous functions with domain $X_t$. For every
$x, x' \in X_t$, we have that
\[
\vert \mathcal{V}_F(x) - \mathcal{V}_F\left(x'\right) \vert
= \vert \sup_{\phi \in F} \phi(x) - \sup_{\phi \in F} \phi(x') \vert
\leq \sup_{\phi \in F} \vert \phi(x) - \phi(x') \vert \leq L_t\lVert x - x' \rVert.
\]
Thus, the function $\mathcal{V}_{F}$ is $L_t$-Lipschitz continuous. As a supremum of affine functions is convex and l.s.c.,
$\mathcal{V}_F$ is also convex and l.s.c.; we have thus shown that $\mathcal{V}_F \in Fbb_{t}^{\text{\tiny SDDP}}$.
\noindent $\bullet$\ref{Assumptions}-\eqref{Stability-pointwise-convergence} and
\ref{Assumptions}-\eqref{CommonRegularity}.
By construction, for all $t\in \ce{0,T}$, every element of $Fbb_t^{\text{\tiny SDDP}}$ is $L_t$-Lipschitz continuous. Thus, by the previous point, $Fbb_t^{\text{\tiny SDDP}}$ is also stable by pointwise convergence.
\noindent $\bullet$\ref{Assumptions}-\eqref{Final-condition}.
As $\psi$ is convex proper and $L_T$-Lipschitz continuous on $X_T$, it is a countable (as $\mathbb{R}^n$ is separable) supremum of $L_T$-Lipschitz affine functions.
\noindent $\bullet$\ref{Assumptions}-\eqref{StabilityBellman}.
This has been shown in Proposition~\ref{SDDP:stability}.
\noindent $\bullet$\ref{Assumptions}-\eqref{order-preserving}.
Let $\phi_1$ and
$\phi_2$ be two functions over $\mathbb{X}$ such that $\phi_1 \leq \phi_2$ \emph{i.e.}
for every $x\in \mathbb{X}$, we have $\phi_1(x) \leq \phi_2(x)$. We want to show that
$\mathcal{B}_t\left( \phi_1 \right) \leq \mathcal{B}_t\left(\phi_2 \right)$. Let $x\in \mathbb{X}$, we
have:
\begin{align*}
\mathcal{B}_t\left( \phi_1\right)(x)
& = \inf_{u\in \mathbb{U}} c_t(x,u) + \phi_1\left( f_t(x,u) \right) \\
& \leq \inf_{u\in \mathbb{U}} c_t(x,u) + \phi_2 \left( f_t(x,u) \right) \\
& = \mathcal{B}_t \left( \phi_2 \right)(x).
\end{align*}
\noindent $\bullet$\ref{Assumptions}-\eqref{Additively-subhomogeneous}.
We will
show that $\mathcal{B}_t$ is additively homogeneous, hence one can choose $M_t=1$ in
Assumption~\ref{Assumptions}-\eqref{Additively-subhomogeneous}. Let $\lambda \in \mathbb{R}$ be
a given constant and $\phi$ a given function in $Fbb_{t+1}$. We identify
the constant $\lambda$ with the constant function
$\lambda : x \mapsto \lambda$ and we have for all $x\in \mathbb{X}$:
\begin{align*}
\mathcal{B}_t\left( \lambda + \phi \right)(x)
& = \inf_{u\in \mathbb{U}} \mathcal{B}p{c_t(x,u) + \np{\lambda + \phi}\bp{f_t(x,u)}}
\\
& = \inf_{u\in \mathbb{U}} \mathcal{B}p{ c_t(x,u) + \lambda + \phi\bp{f_t(x,u)}}
\\
& = \lambda + \inf_{u\in \mathbb{U}} \mathcal{B}p{ c_t(x,u) + \phi\bp{f_t(x,u)}}
\\
& = \lambda + \mathcal{B}_t\np{\phi}(x).
\end{align*}
\noindent $\bullet$ \ref{Assumptions}-\eqref{proper-value}.
By backward recursion on
time step $t \in \ce{0,T}$ and by Proposition~\ref{SDDP:stability}, for every time step
$t\in \ce{0,T}$ the function $V_t$ given by the Dynamic Programming
Equation~\eqref{DP:linear-convex} is convex and $L_t$-Lipschitz continuous on $X_t$.
\noindent $\bullet$\ref{Assumptions}-\eqref{optimal-sets}.
Fix $t\in \ce{0,T-1}$, an
arbitrary element $\phi \in Fbb_t^{\text{\tiny SDDP}}$, a constant
$\lambda \geq 0$ and set $\tilde{\phi} := \phi + \lambda$. We will show that
for every compact set $K_t \subset X_t$, there exists a compact set
$K_{t+1} \subset X_{t+1}$ such that
\begin{equation}
\label{tmpwololo}
\mathcal{B}_t \left( \tilde{\phi} + \indi{K_{t+1}} \right) + \indi{K_t} = \mathcal{B}_t \left( \tilde{\phi} \right) + \indi{K_t},
\end{equation}
which will imply the desired result.
Now, Equation~\eqref{tmpwololo} is equivalent to the fact that for every state $x_t \in K_t$, there exists a control $u_t \in U_t\np{x_t}$ such that
\begin{equation*}
f_t \np{ x_t, u_t } \in K_{t+1}
\quad\text{where}\quad
u_t\in \mathop{\arg\min}_{u\in U_t\np{x_t}} \mathcal{B}_t^u \np{\tilde{\phi}}\np{x_t} = \mathop{\arg\min}_{u\in U_t\np{x_t}} \mathcal{B}p{ c_t \np{x_t, u } + \tilde{\phi}\bp{ f_t \np{x_t, u}}}
\; .
\end{equation*}
Set $K_{t+1} := f_t\np{X_t, U_t\np{X_t}}$; it satisfies Equation~\eqref{tmpwololo}, and we now show that it is compact. As $X_t$ is compact and $f_t$ is continuous, it is sufficient to prove that $U_t\np{X_t}$ is compact, which is true as $U_t$ is upper semicontinuous (u.s.c.) and non-empty compact valued, see~\cite[Proposition 11 p.112]{Au.Ek1984}.
This ends the proof.
\end{proof}
Now, we define a compatible selection function for $\mathop{\mathrm{opt}}=\sup$.
Let $t\in \ce{0,T-1}$ be fixed, for any $F \subset Fb_t^{\text{\tiny SDDP}}$ and $x\in \mathbb{X}$, we define the following optimization problem
\begin{subequations}
\begin{align}
\label{NodalProblemSDDP}
\min_{ (x',u,\lambda) \in X_t \times U_t(x') \times \mathbb{R}}
& c_t\np{ x', u} + \lambda \\
\text{s.t.}
& \quad
x' = x \quad\text{and}\quad \phi\bp{f_t\np{ x', u}} \leq \lambda \quad \forall \phi \in F\; .
\end{align}
\end{subequations}
If we denote by $b$ its optimal value and by $a$ a Lagrange multiplier
associated to the constraint $x'- x = 0$ at the optimum, that is such that
$(x',u;\lambda, a)$ is a stationary point of the Lagrangian
$c_t\left( x', u\right) + \lambda -\langle a , x' - x \rangle$, then we define
\[
\phi_t^{\text{\tiny SDDP}}\left( F, x \right) := x'\mapsto \langle a , x' - x \rangle + b + \indi{X_t}(x')\enspace.
\]
Finally, at time $t= T$, for any $F \subset Fb_T^{\text{\tiny SDDP}}$ and $x\in \mathbb{X}$, fix $a \in \partial V_T (x)$ and define
\[
\phi_T^{\text{\tiny SDDP}}\left( F, x \right) := x'\mapsto \langle a , x' - x \rangle + V_T\left( x \right).
\]
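To make the construction concrete, the following sketch (in Python, assuming the \texttt{cvxpy} modeling library, quadratic costs $c_t(x,u)=x^TCx+u^TDu$ and linear dynamics $f_t(x,u)=Ax+Bu$; all names are hypothetical and the remaining state/control constraints are omitted for brevity) solves the nodal problem~\eqref{NodalProblemSDDP} and reads a new cut from the multiplier of the copy constraint $x'=x$. The sign of the multiplier returned by a modeling tool depends on its Lagrangian convention and should be checked on a toy instance.
\begin{verbatim}
import cvxpy as cp
import numpy as np

def sddp_selection(C, D, A, B, cuts, x_bar):
    """Sketch of the nodal problem: minimize c_t(x', u) + lambda subject to
    x' = x_bar and phi(f_t(x', u)) <= lambda for every cut phi in F.
    `cuts` is a list of pairs (a, b), the affine minorants of V_{t+1}."""
    n, m = A.shape[1], B.shape[1]
    xp, u, lam = cp.Variable(n), cp.Variable(m), cp.Variable()
    copy = xp == np.asarray(x_bar, dtype=float)   # carries the multiplier a
    epi = [cp.sum(cp.multiply(a, A @ xp + B @ u)) + b <= lam
           for (a, b) in cuts]
    prob = cp.Problem(
        cp.Minimize(cp.quad_form(xp, C) + cp.quad_form(u, D) + lam),
        [copy] + epi)
    b_val = prob.solve()        # optimal value, i.e. B_t(V_F)(x_bar)
    slope = -copy.dual_value    # sign to be checked against the solver's convention
    return slope, b_val         # new cut: x -> <slope, x - x_bar> + b_val on X_t
\end{verbatim}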
\begin{proposition}
For every time $t\in \ce{0,T}$, the function $\phi_t^{\text{\tiny SDDP}}$ is a compatible selection function
for $\mathop{\mathrm{opt}}=\sup$ in the sense of Definition~\ref{det:CompatibleSelection}.
\end{proposition}
\begin{proof}
Fix $t\in \ce{0,T-1}$, $F \subset Fb_{t+1}^{\text{\tiny SDDP}}$ and $x\in \mathbb{X}$.
Using Equation~\eqref{Bellman:linear-convex} we obtain that
$\mathcal{B}_t\left( \mathcal{V}_F \right) \np{x}$ is equal to $b$, the optimal value of the optimization problem~\eqref{NodalProblemSDDP}.
Thus, since $\phi_t^{\text{\tiny SDDP}}\left( F, x \right)\np{x}=b$ we obtain that the selection function is tight.
It is also valid as $a$ is a subgradient of the convex function $\mathcal{B}_t\left( \mathcal{V}_F \right)$ at $x$. For $t=T$, the selection
function $\phi_T^{\text{\tiny SDDP}}$ is tight and valid by convexity of $V_T$.
\end{proof}
In order to apply the convergence result of
Theorem~\ref{ConvergenceTheorem} when approximating the value
functions from below ($\mathop{\mathrm{opt}} = \sup$), one has to make the draws according to sets
$K_t^k$ such that the sets $K_t^* := \limsup_{k\in \mathbb{N}} K_t^k$ are
$\np{V_t^*}$-optimal. As done in the literature of the Stochastic Dual Dynamic Programming
algorithm (see for example~\cite{Gi.Le.Ph2015} and~\cite{Zo.Ah.Su2018}
or~\cite{Pe.Pi1991}), one can study the case when the draws are made along the
optimal trajectories of the current approximations.
More precisely, for a fixed $k \in \mathbb{N}$, we define a sequence $(x_0^k, x_1^k, \ldots , x_T^k)$ by
\begin{equation*}
x_0^k := x_0
\quad\text{and}\quad
\forall t \in \ce{0,T-1}, \ x_{t+1}^k := f_t \np{ x_t^k, u_t^k}
\; , \enspace
\end{equation*}
where $u_t^k \in \mathop{\arg\min}_u \mathcal{B}_t^u\left( V_t^k \right)\left(x_t^k\right)$. We say
that such a sequence $(x_0^k, x_1^k, \ldots , x_T^k)$ is an \emph{optimal
trajectory for the $k$-th approximations starting from $x_0$}. We show that
optimal trajectories for the current approximations become $\np{V_t^*}$-optimal
when $k$ goes to infinity, using a result of convergence in minimization by
Rockafellar and Wets~\cite[Theorem 7.33]{Ro.We2009}.
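A minimal sketch of such a forward pass (in Python, with hypothetical names; a derivative-free solver is used only because the maximum of affine cuts is nonsmooth, and we assume for simplicity that the control has the same dimension as the state) could read as follows.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def forward_pass(x0, dynamics, costs, cuts_by_time):
    """Optimal trajectory for the current lower approximations
    V_t^k = max of the cuts stored in cuts_by_time[t] (illustration only)."""
    def v_lower(t, x):
        return max(float(a @ x + b) for (a, b) in cuts_by_time[t])
    traj = [np.asarray(x0, dtype=float)]
    for t, (f_t, c_t) in enumerate(zip(dynamics, costs)):
        x_t = traj[-1]
        def obj(u, t=t, x_t=x_t, f_t=f_t, c_t=c_t):
            return c_t(x_t, u) + v_lower(t + 1, f_t(x_t, u))
        u_t = minimize(obj, np.zeros_like(x_t), method="Nelder-Mead").x
        traj.append(f_t(x_t, u_t))
    return traj
\end{verbatim}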
\begin{proposition}
\label{OptimalityOfTrajectories}
For every $k\in \mathbb{N}$, let $(x_0^k, x_1^k, \ldots , x_T^k)$ be an optimal trajectory for the $k$-th approximations starting from $x_0$ and define a sequence of singletons for every $t\in \ce{0,T}$, $K_t^k := \left\{ x_t^k \right\}$.
Then the sets $\np{K_t^*}_{t\in \ce{0,T}}$ defined by $K_t^* := \limsup_k K_t^k$ are $\np{V_t^*}$-optimal.
\end{proposition}
\begin{proof}
Fix $t\in \ce{0,T{-}1}$, we want to show that Equation~\eqref{eq:optimalSets} is satisfied for
$K_t^*$ which is equivalent to prove that for every $x^*_t \in K_t^*$, we have that
\begin{equation}
\label{eq:tmp1}
\mathcal{B}_t\left( V_{t+1}^* + \indi{K_{t+1}^*} \right)\left(x^*_t\right) = \mathcal{B}_t\left( V_{t+1}^* \right)(x^*_t).
\end{equation}
Now, using the definition of the Bellman operators in Equation~\eqref{Bellman:linear-convex} and Equation~\eqref{eq:tmp1} we have to prove that there exists a control $u^*_t \in U_t(x_t^*)$ such that
\begin{equation}
\label{eq:tmp3}
u^*_t \in \bset{ u}{ f_t \np{x^*_t, u} \in K_{t+1}^*}
\cap
\mathop{\arg\min}_{u\in U_t(x_t^*) } \mathcal{B}p{ c_t\np{x_t^*, u} + V_{t+1}^* \bp{ f_t\np{x_t^*, u }}}\; .
\end{equation}
Fix $x_t^* \in K_t^*$ and extracting if needed a subsequence, without loss of
generality, assume that $\np{x_t^k}_{k\in \mathbb{N}}$ converges to $x_t^*$. Fix
$k\in \mathbb{N}$ and the sequence of controls $\np{u_0^k, \ldots, u_{T-1}^k}$
associated with the optimal trajectory for the $k$-th approximations
$\np{x_0^k, \ldots, x_T^k}$. We have that
\begin{equation}
\label{eq:control-k-opt}
u_t^k \in \bset{ u}{f_t\np{x_t^k, u} \in K_{t+1}^k} \cap
\mathop{\arg\min}_{u \in U_t\np{x_t^k}} \mathcal{B}p{c_t\np{x_t^k, u} + V_{t+1}^k\bp{f_t\np{x_t^k, u}}}
\; .
\end{equation}
Extracting, if needed, a subsequence $\np{u_t^n}_{n\in \mathbb{N}}$ of
$\np{u_t^k}_{k\in \mathbb{N}}$, we will show that the sequence $\np{u_t^n}_{n\in \mathbb{N}}$ converges to some
$u_t^* \in \mathop{\arg\min}_{u \in U_t(x_t^*)} c_t\np{x_t^*, u} + V_{t+1}^* \np{
f_t\np{x_t^*,u}}$. Equation~\eqref{eq:tmp3} will be satisfied as for every $n\in \mathbb{N}$,
$f_t\np{x_t^n, u_t^n} \in K_{t+1}^n$, the continuity of $f_t$ and definition of
$K_{t+1}^*$ will ensure that
$u_t^* \in \left\{ u \mid f_t\np{x_t^*, u} \in K_{t+1}^*\right\}$.
We will use the result of convergence in minimization~\cite[Theorem 7.33]{Ro.We2009}. We define
\begin{align*}
& B^k : \mathbb{U} \to \overline{\mathbb{R}}, \ u \mapsto c_t\np{x_t^k, u} + V_{t+1}^k\bp{f_t\np{x_t^k,u}}\; , \\
& B^* : \mathbb{U} \to \overline{\mathbb{R}}, \ u \mapsto c_t\np{x_t^*, u} + V_{t+1}^*\bp{f_t\np{x_t^*,u}}\; .
\end{align*}
Recall that, under Assumption~\ref{hypo:linear-convex}-\eqref{hypo:multiapplication}, the
set-valued mapping $U_t$ has compact values with non-empty interior and is
$L_{U_t}$-Lipschitz continuous. Moreover, the
functions $B^*$ and every $B^k$, $k\in \mathbb{N}$ are convex, l.s.c., proper,
inf-compact, with compact domains $U_t\np{x_t^*}$ and $U_t\np{x_t^k}$,
respectively. As $U_t$ is Lipschitz continuous, the sequence of functions
$\np{B^k}_{k\in \mathbb{N}}$ converges uniformly to $B^*$ on every compact $K$ included
in the interior of $\mathop{\mathrm{dom}}\np{B^*} = U_t\np{x_t^*}$. Thus, by~\cite[Theorem
7.17.c]{Ro.We2009}, $\np{B^k}_{k\in \mathbb{N}}$ epiconverges to $B^*$. Finally,
$\np{u_t^k}_{k\in \mathbb{N}} \subset f_t\np{X_t, U_t\np{X_{t}}}$ which is compact as
$U_t$ is u.s.c. and $f_t$ is continuous. We conclude that we can extract a
converging subsequence out of $\np{u_t^k}_k$. Denoting by
$u_t^* \in U_t\np{x_t^*}$ its limit, by~\cite[Theorem 7.33]{Ro.We2009} we
finally have that
\(
u_t^* \in \mathop{\arg\min}_{u \in \mathbb{U}} B^*\np{u}
\). This ends the proof.
\end{proof}
Hence, when applying TDP with the SDDP selection function, we will refine the approximations along the current optimal trajectories, \emph{i.e.} we use the Oracle defined in Example~\ref{example:opt_traj}. We conclude this section by proving the convergence of the TDP algorithm in the SDDP case.
\begin{theorem}[Lower (outer) approximations of the value functions]
Under Assumption~\ref{hypo:linear-convex}, for every $t\in \ce{0,T}$, denote by
$\left( V_t^k \right)_{k\in \mathbb{N}}$ the sequence of functions generated by
Tropical Dynamic Programming \ with the selection function $\phi_t^{\text{\tiny SDDP}}$ and the draws
made uniformly over the sets $K_t^k$ defined in
Proposition~\ref{OptimalityOfTrajectories}. Then, the sequence
$\left( V_t^k \right)_{k\in \mathbb{N}}$ is non-decreasing, bounded from above by
$V_t$, and converges uniformly to $V_t^*$ on every compact set included in
$\mathop{\mathrm{dom}}\left(V_t \right)$. Moreover, almost surely over the draws, $V_t^* = V_t$
on $\limsup_{k\in \mathbb{N}} K_t^k$.
\end{theorem}
\begin{proof}
As the structural assumptions of Assumption~\ref{Assumptions} are satisfied, the functions
$\phi_t^{\text{\tiny SDDP}}$, $0\leq t\leq T$, are compatible selection functions, and the sets
$\left(K_t^*\right)_{t\in \ce{0,T}}$ are $\np{V_t^*}$-optimal (case
$\mathop{\mathrm{opt}} = \sup$), the result follows from Theorem~\ref{ConvergenceTheorem}.
\end{proof}
\section{A min-plus selection function: upper approximations in the
linear-quadratic framework with both continuous and discrete controls}
\label{sec:Exemples_switch}
In \S\ref{homogeneous_case}, we study the case where the cost functions and
dynamics are homogeneous. We conclude this section in \S\ref{theExample} with an
example which shows that using optimal trajectories of the best current
approximations as trial points in Tropical Dynamic Programming \ may generate functions
$\np{V_t^k}_{k\in \mathbb{N}}$ which do not converge to the value function $V_t$. In Appendix~\ref{homogenization}, we show how one can
use the homogeneous case to solve the non-homogeneous case by augmenting the
state dimension by one.
\subsection{The pure homogeneous case}
\label{homogeneous_case} We will denote by $\mathbb{M}_n$ the set of $n{\times}n$ real matrices and by $\mathbb{S}_{n}\subset \mathbb{M}_n$ the subset of symmetric
matrices.
\begin{definition}[Pure quadratic form]
\label{purequadform}
We say that a function $q : \mathbb{X} \rightarrow \mathbb{R}$ is a pure quadratic form if there exists a symmetric real matrix $M \in \mathbb{S}_n$ such that for every $x\in \mathbb{X}$, we have
\(
q(x) = x^T M x.
\)
Similarly, a function $q : \mathbb{X} \times \mathbb{U} \rightarrow \mathbb{R}$ is a pure quadratic
form if there exist two symmetric real matrices $M_1 \in \mathbb{S}_n$ and
$M_2 \in \mathbb{S}_m$ such that for every $\np{x,u} \in \mathbb{X} \times \mathbb{U}$, we have
\(
q(x,u) = x^T M_1 x + u^T M_2 u.
\)
\end{definition}
Let us stress that pure quadratic forms are not general $2$-homogeneous quadratic forms, in the sense that they lack a mixing term of the form $x^T M u$. In Appendix~\ref{homogenization} we show why we do not lose generality by studying this case instead of general polynomials of degree $2$. Let $\mathbb{X} = \mathbb{R}^n $ be a continuous state space (endowed with its Euclidean and Borel structures), $\mathbb{U} = \mathbb{R}^m$ a continuous control space and $\mathbb{V}$ a finite set of discrete (or switching) controls. We want to solve the following optimization problem
\begin{subequations}
\label{pb:linear-quad-switch}
\begin{align}
\min_{\substack{ (x, u , v) \in \mathbb{X}^{T+1} \times \mathbb{U}^T \times \mathbb{V}^T}}
& \sum_{t=0}^{T-1} c_t^{v_t} (x_t, u_t) + \psi(x_T)
\\
\text{s.t.}
& \quad x_0 \in \mathbb{X} \ \text{given, and}\quad
\forall t \in \ce{0,T-1}, \ x_{t+1} = f_t^{v_t}\np{ x_t, u_t}\; .
\end{align}
\end{subequations}
\begin{assumption}
\label{hypo:linear-quad-switch}
Let $t \in \ce{0,T-1}$ and $v \in \mathbb{V}$ be arbitrary.
\noindent --
The dynamic $f_t^v : \mathbb{X} \times \mathbb{U} \longrightarrow \mathbb{X}$ is linear. That is,
\(
f_t^v(x, u) = A_t^v x + B_t^v u,
\)
for some given matrices $A_t^v$ and $B_t^v$ of compatible dimensions.
\noindent -- The cost function $c_t^v : \mathbb{X} \times \mathbb{U} \longrightarrow \mathbb{R}$ is a pure convex quadratic form,
\(
c_t^v(x,u) = x^T C_t^v x + u^TD_t^v u,
\)
where the matrix $C_t^v$ is symmetric positive semidefinite and the matrix $D_t^v$ is symmetric positive definite.
\noindent -- The final cost function $\psi := \inf_{i\in I_T} \psi_i$ is a finite infimum of pure convex quadratic forms, with matrices $M_i \in \mathbb{S}_n$, $i\in I_T$, where $I_T$ is a finite set, such that there exists a constant $\alpha_T \geq 0$ satisfying for every $i \in I_T$
\(
\quad 0 \preceq M_i \preceq \alpha_T \;\mathrm{Id}
\).
\end{assumption}
One can write the Dynamic Programming equation for Problem~\ref{pb:linear-quad-switch} as follows
\begin{equation}
\label{DP:linear-quad-switch}
V_T = \psi
\quad\text{ and }
\forall t\in \ce{0,T{-}1}, \forall x\in \mathbb{X},
V_t\np{x} = \inf_{v\in \mathbb{V}}\inf_{u \in \mathbb{U}} c_t^v( x, u) + V_{t+1} \bp{f_t^v(x, u)}
\; .
\end{equation}
The following result is crucial in order to study this example: the value functions are $2$-homogeneous, allowing us to restrict their study to the unit sphere.
\begin{proposition}
For every time step $t\in\ce{0,T}$, the value function $V_t$, solution of
Equation~\eqref{DP:linear-quad-switch} is $2$-homogeneous, that is, for
every $x\in \mathbb{X}$ and every $\lambda \in \mathbb{R}$, we have
\(
V_t\np{\lambda x} = \lambda^2 V_t\np{x}
\).
\end{proposition}
\begin{proof}
We proceed by backward recursion on time step $t\in \ce{0,T}$. For $t=T$ it is
true by Assumption~\ref{hypo:linear-quad-switch}. Assume that it is true for some
$t\in \ce{1,T}$. Fix $\lambda \in \mathbb{R}$, then by definition of $V_{t-1}$, for
every $x\in \mathbb{X}$, we have
\begin{align*}
V_{t-1}\left(\lambda x \right)
& = \min_{v \in \mathbb{V}}
\min_{u\in \mathbb{U}} c_{t-1}^v \np{\lambda x, u} +
V_{t}\bp{ f_{t-1}^v\np{\lambda x,u}}
\\
& = \min_{v \in \mathbb{V}}
\min_{u' = u/\lambda \in \mathbb{U}} c_{t-1}^v\np{\lambda x, \lambda u'} +
V_{t}\bp{ f_{t-1}^v\np{\lambda x,\lambda u'}}\; ,
\end{align*}
which yields the result by $2$-homogeneity of $x\mapsto c_{t-1}^v \np{x,u}$,
linearity of $f_{t-1}^v$ and $2$-homogeneity of $V_{t}$.
\end{proof}
Thus, in order to compute $V_t$, one only needs to know its values on the unit
(euclidean) sphere $\mathbf{S}$ as for every non-zero $x \in \mathbb{X}$,
$V_t(x) = \lVert x \rVert^2 \, V_t\bp{\frac{x}{\lVert x \rVert}}$. Hence, we will refine our approximations only on the sphere, that is we will draw trial points uniformly on the sphere and use the Oracle defined in Example~\ref{example:sphere}. Now, for every time
$t\in \ce{0,T{-}1}$ and every switching control $v \in \mathbb{V}$ we define
the \emph{Bellman operator with fixed switching control} $\mathcal{B}_t^v$ for
every function $\phi : \mathbb{X} \rightarrow \overline{\mathbb{R}}$ by:
\[
\mathcal{B}_t^v(\phi ) := \inf_{u \in \mathbb{U}} c_t^v( \cdot, u) + \lVert f_t^v(\cdot, u) \rVert^2\phi \left(\frac{f_t^v(\cdot, u)}{\lVert f_t^v(\cdot, u)\rVert} \right).
\]
For every time $t\in \ce{0,T-1}$ we define the \emph{Bellman operator} $\mathcal{B}_t$ for every function $\phi : \mathbb{X} \to \overline{\mathbb{R}}$ by:
\begin{equation}
\label{Bellman-min-plus}
\mathcal{B}_t\left( \phi \right) := \inf_{v \in \mathbb{V}} \mathcal{B}_t^v \left( \phi \right).
\end{equation}
This definition of the Bellman operator emphasizes that the unit sphere
$\mathbf{S}$ is $(V_t)$-optimal in the sense of Definition~\ref{optimalDraws}. Note that for
$2$-homogeneous functions, we have that
$\mathcal{B}_t^{v}\np{\phi} = \inf_{u\in \mathbb{U}} c_t^{v}\np{\cdot,u} +
\phi\np{f_t^{v}\np{\cdot,u}}$. Using Equation~\eqref{Bellman-min-plus}, one can
rewrite the Dynamic Programming Equation~\eqref{DP:linear-quad-switch} as
\begin{equation}
\label{DP:tmp}
V_T = \psi
\quad\text{ and }\quad
\forall t\in \ce{0,T{-}1}, V_t = \mathcal{B}_t\np{V_{t+1}}
\; .
\end{equation}
Now, in order to apply the Tropical Dynamic Programming \ algorithm to Equation~\eqref{DP:tmp}, we
need to check Assumption~\ref{Assumptions}. Under Assumption~\ref{hypo:linear-quad-switch}, there
exists an interval in the cone of symmetric positive semidefinite matrices which is stable
by every Bellman operator $\mathcal{B}_t$ in the sense of the proposition below. We will
consider the Loewner order on the cone of (real) symmetric positive semidefinite
matrices, \emph{i.e.} for every pair of symmetric matrices
$\left(M_1, M_2\right)$ we say that $M_1 \preceq M_2$ if, and only if,
$M_2 - M_1$ is positive semidefinite. Moreover we will identify a pure quadratic
form with its symmetric matrix, thus when we write an infimum over symmetric
matrices, we mean the pointwise infimum over their associated pure quadratic
forms.
\begin{proposition}[Existence of a stable interval]
\label{StableInterval}
Under Assumption~\ref{hypo:linear-quad-switch}, we define a sequence of positive reals $\np{\alpha_t}_{t\in \ce{0,T}}$ by backward recursion on $t\in \ce{0,T-1}$ such that we have:
\begin{equation}
0 \preceq M \preceq \alpha_{t+1} \;\mathrm{Id} \Rightarrow 0 \preceq \mathcal{B}_t(M) \preceq \alpha_t \;\mathrm{Id},
\end{equation}
where $\alpha_T$ is a given constant by Assumption~\ref{hypo:linear-quad-switch}.
\end{proposition}
\begin{proof}
First, given an arbitrary $t\in \ce{0,T-1}$, we want to show that if
$M \succeq 0$ then $\mathcal{B}_t (M) \succeq 0$. As in
Proposition~\ref{propStructurale_SDDP} one can show that the Bellman operator $\mathcal{B}_t$ is
order preserving. Therefore, if $M \succeq 0$ then
$\mathcal{B}_t (M) \succeq \mathcal{B}_t (0)$. Hence it is enough to show that
$\mathcal{B}_t (0) \succeq 0$. But by Formula~\eqref{equa:Riccati_reduced}, we have that
$\mathcal{B}_t (0) = \min_{v \in \mathbb{V}}C_t^v \succeq 0$ (by
Assumption~\ref{hypo:linear-quad-switch}) hence the result follows.
Second, let $t\in \ce{0,T-1}$ and $\alpha_{t+1} >0$ be fixed. We consider
$\alpha_t > 0$ defined by
\begin{equation}
\label{alphadef}
\alpha_t := \max_{v \in \mathbb{V}} \alpha_{t+1} \lambda_{\tiny \text{max}}\bp{{A_t^v}\np{A_t^v}^T} + \lambda_{\tiny \text{max}}(C_t^v ) > 0
\; ,
\end{equation}
and we prove that if $M\preceq \alpha_{t+1} \;\mathrm{Id}$ then we have that
$\mathcal{B}_t (M) \preceq \alpha_t \;\mathrm{Id}$. For that purpose, consider $M$ such that
$M\preceq \alpha_{t+1} \;\mathrm{Id}$. Then, denoting by $\overline{B}_t^v$ the
matrix
$\;\mathrm{Id} + \alpha_{t+1} B_t^v \np{D_t^v}^{-1} \np{B_t^v}^T$,
we have that
$\lambda_{\tiny \text{min}}(\overline{B}_t^v)= 1 + \alpha_{t+1}\lambda_{\tiny \text{min}}(B_t^v
\np{D_t^v}^{-1} \np{B_t^v}^T)\ge 1$ using the fact that the
matrix $B_t^v \np{D_t^v}^{-1} \np{B_t^v}^T$ is positive
semi-definite by Assumption~\ref{hypo:linear-quad-switch}. Now, we
successively have for any $v \in \mathbb{V}$
\begin{align}
\lambda_{\tiny \text{max}} \bp{\mathcal{B}_t^v \np{M}}
& \le \lambda_{\tiny \text{max}} \bp{\mathcal{B}_t^v \np{\alpha_{t+1} \;\mathrm{Id}}}
\tag{$\mathcal{B}_t^v$ is order preserving}
\\
& = \lambda_{\tiny \text{max}}
\mathcal{B}p{
\alpha_{t+1} \np{A_t^v} ^T \np{\overline{B}_t^v}^{-1} A_t^v + C_t^v
}
\tag{using~\eqref{equa:Riccati_reduced}}
\\
& \le
\alpha_{t+1} \lambda_{\tiny \text{max}}
\mathcal{B}p{
\np{A_t^v} ^T \np{\overline{B}_t^v}^{-1} A_t^v} + \lambda_{\tiny \text{max}}\np{C_t^v}
\tag{by Proposition~\ref{prop:valeurpropre}}
\\
& \le
\alpha_{t+1}
\lambda_{\tiny \text{max}}\bp{A_t^v {A_t^v} ^T}
\lambda_{\tiny \text{max}}\bp{\np{\overline{B}_t^v}^{-1}}
+ \lambda_{\tiny \text{max}}\np{C_t^v}
\tag{by Proposition~\ref{prop:valeurpropre}}
\\
& \le\alpha_{t+1} \lambda_{\tiny \text{max}}\bp{A_t^v {A_t^v} ^T} + \lambda_{\tiny \text{max}}\np{C_t^v}
\tag{as $\lambda_{\tiny \text{max}}\bp{\np{\overline{B}_t^v}^{-1}}= \lambda_{\tiny \text{min}}\bp{\overline{B}_t^v}^{-1} \le 1$ }
\\
& \le \alpha_t \tag{using~\eqref{alphadef}}
\; ,
\end{align}
which gives that $\mathcal{B}_t^v(M) \preceq \alpha_t \;\mathrm{Id}$. Then, the same result follows for the operator
$\mathcal{B}_t$ using Equation~\eqref{Bellman-min-plus}. This ends the proof.
\end{proof}
Using Proposition~\ref{StableInterval}, one can deduce by backward recursion on $t\in \ce{0,T-1}$ the existence of intervals of matrices, in the Loewner order, which are stable by the Bellman operators.
\begin{corollary}
\label{coro:StableBellman}
Under Assumption~\ref{hypo:linear-quad-switch}, using the sequence of positive reals
$\np{\alpha_t}_{t\in \ce{0,T}}$ defined in Proposition~\ref{StableInterval}, we define a
sequence of positive reals $\np{\beta_t}_{t\in \ce{0,T}}$ by
\( \beta_T := \alpha_T\) and \(\forall t\in \ce{0,T{-}1}\), $\beta_t:= \max\np{\alpha_t,\beta_{t+1}}$.
Then, one has that
\[
0 \preceq M \preceq \beta_T \;\mathrm{Id} \Rightarrow \forall t \in \ce{0,T-1}, \ 0
\preceq \mathcal{B}_{t}\np{\ldots \mathcal{B}_{T-2}\np{\mathcal{B}_{T-1}\np{M}}} \preceq \beta_t \;\mathrm{Id} .
\]
\end{corollary}
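The recursion~\eqref{alphadef} and the definition of $\np{\beta_t}_{t\in\ce{0,T}}$ are immediate to implement; a minimal sketch (Python/NumPy, hypothetical container names) is given below.
\begin{verbatim}
import numpy as np

def lambda_max(M):
    # Largest eigenvalue of a symmetric matrix.
    return float(np.max(np.linalg.eigvalsh(M)))

def stable_bounds(alpha_T, A, C):
    """Backward computation of (alpha_t) from Equation (alphadef) and of the
    (beta_t) of the corollary; A[t][v], C[t][v] are the dynamics and cost
    matrices at time t for the switching control v."""
    T = len(A)
    alpha = [0.0] * (T + 1)
    alpha[T] = alpha_T
    for t in reversed(range(T)):
        alpha[t] = max(alpha[t + 1] * lambda_max(A[t][v] @ A[t][v].T)
                       + lambda_max(C[t][v]) for v in range(len(A[t])))
    beta = list(alpha)
    for t in reversed(range(T)):   # beta_t = max(alpha_t, beta_{t+1})
        beta[t] = max(alpha[t], beta[t + 1])
    return alpha, beta
\end{verbatim}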
The basic functions of $Fb^{\text{\tiny min-plus}}_t$ will be pure convex quadratic forms bounded in the Loewner sense between $0$ and $\beta_t \;\mathrm{Id}$,
\[
Fb_t^{\text{\tiny min-plus}} :=
\bset{\phi: x \in \mathbb{X} \mapsto x^TMx \in \mathbb{R}}{ M \in \mathbb{S}_n, \ 0\preceq M \preceq \beta_t \;\mathrm{Id}}\; ,
\]
and we define the following class of functions which will be stable by pointwise infimum of elements in $Fb_t^{\text{\tiny min-plus}}$,
\begin{equation}
\label{def_funcbbQu}
Fbb_t^{\text{\tiny min-plus}} :=
\bset{\mathcal{V}_F}{F \subset Fb_t^{\text{\tiny min-plus}}}\; .
\end{equation}
Exploiting the min additivity of the Bellman operator, which gives that
$\mathcal{B}_t\bp{\inf\np{\phi_1, \phi_2}} = \inf \bp{\mathcal{B}_t\np{\phi_1},
\mathcal{B}_t\np{\phi_2}}$, and the fact that the final cost $\psi$ is a
finite infimum of basic functions, one deduces by backward induction on
$t\in \ce{0,T}$ that the value functions are finite infima of basic functions.
\begin{lemma}
\label{FiniteInfimum}
For every time $t\in \ce{0,T}$, there exists a finite set $F_t$ of convex pure quadratic forms such that
\[
V_t = \inf_{\phi \in F_t} \phi.
\]
\end{lemma}
\begin{proof}
For $t = T$, set $F_T := \left\{ \psi_i \right\}_{i\in I_T}$. Now, assume
that for some $t\in \ce{0,T-1}$, we have that
\( V_{t+1} = \inf_{\phi \in F_{t+1}} \phi \), where $F_{t+1}$ is a
finite set of convex pure quadratic functions. Then, by definition of the
Bellman operators $\mathcal{B}_t$ (see Equation~\eqref{Bellman-min-plus}), we have that
\begin{align*}
V_t & = \mathcal{B}_t\np{V_{t+1}}
= \inf_{v \in \mathbb{V}} \mathcal{B}_t^{v}\np{\inf_{\phi \in F_{t+1} } \phi}
= \inf_{\phi \in F_{t+1} }\inf_{v \in \mathbb{V}}\mathcal{B}_t^{v} \np{\phi}
= \inf_{\phi \in F_{t+1} , v \in \mathbb{V}} \bp{\mathcal{B}_t^{v} \np{\phi}}
\end{align*}
Thus, setting $F_t := \ba{\mathcal{B}_t^{v} \np{\phi}\,\vert\,\phi \in F_{t+1} \text{ and } v \in \mathbb{V}}$,
we obtain that $V_t = \inf_{\phi \in F_t} \phi$, where $F_t$ is a finite set of convex pure quadratic functions.
Backward induction on time $t \in \ce{0,T}$ ends the proof.
\end{proof}
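On pure quadratic forms, each application of $\mathcal{B}_t^{v}$ reduces to a matrix computation. Below is a minimal sketch (Python/NumPy, hypothetical names) of one Bellman step on a finite infimum of pure quadratic forms; it uses the standard discrete-time Riccati formula, which is presumably what Equation~\eqref{equa:Riccati_reduced} encodes.
\begin{verbatim}
import numpy as np

def riccati_step(M, A, B, C, D):
    """Matrix of the pure quadratic form
    x -> min_u [ x^T C x + u^T D u + (A x + B u)^T M (A x + B u) ],
    i.e. the image of x -> x^T M x by B_t^v (D + B^T M B assumed invertible)."""
    S = D + B.T @ M @ B
    K = np.linalg.solve(S, B.T @ M @ A)   # optimal feedback: u = -K x
    return C + A.T @ M @ A - A.T @ M @ B @ K

def bellman_min_plus(quadratics, data_by_switch):
    """Image by the Bellman operator of Equation (Bellman-min-plus) of a finite
    infimum of pure quadratic forms: one Riccati step per pair (form, switch);
    the result is again a finite infimum of pure quadratic forms."""
    return [riccati_step(M, A, B, C, D)
            for M in quadratics
            for (A, B, C, D) in data_by_switch]
\end{verbatim}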
\begin{proposition}
\label{struct_assump_lin-quad-switch}
Under Assumption~\ref{hypo:linear-quad-switch}, the Problem~\ref{pb:linear-quad-switch}
and the Bellman operators defined in Equation~\eqref{DP:linear-quad-switch} satisfy the
structural assumptions given in Assumption~\ref{Assumptions}.
\end{proposition}
\begin{proof} We prove successively each assumption listed in Assumption~\ref{Assumptions}.
\noindent $\bullet$ \ref{Assumptions}-\eqref{Stability-pointwise-optimum}.
By construction, $Fbb_t^{\text{\tiny min-plus}}$ in Equation~\eqref{def_funcbbQu} is stable by pointwise infimum.
\noindent $\bullet$ \ref{Assumptions}-\eqref{Stability-pointwise-convergence} and \ref{Assumptions}-\eqref{CommonRegularity}.
We will show that every element of $Fbb_t^{\text{\tiny min-plus}}$ is $2\beta_t$-Lipschitz continuous on $\mathbf{S}$. Let $\mathcal{V}_F \in Fbb_t^{\text{\tiny min-plus}}$ with $F=\left\{ \phi_i \right\}_{i \in I} \subset Fb_t^{\text{\tiny min-plus}}$, $I \subset \mathbb{N}$, and denote by $M_i$ the symmetric matrix associated with $\phi_i$. Fix $x,y \in \mathbf{S}$; we have successively
\begin{align}
\vert \mathcal{V}_F(x) - \mathcal{V}_F(y) \vert
& = \vert \inf_{i \in I} x^T M_i x - \inf_{i\in I} y^T M_i y \vert \notag \\
& \leq \max_{i \in I} \vert x^T M_i x - y^T M_i y \vert \notag \\
& \leq \max_{i \in I} \vert x^T M_i \left( x-y \right) + y^T M_i\left( x-y \right) \vert \notag \\
& \leq \max_{i \in I} \vert \langle x + y, M_i(x-y) \rangle \vert \tag{$M_i^T = M_i$} \\
& \leq \lVert x + y \rVert \cdot \max_{i \in I} \lVert M_i\left(x-y\right) \rVert \tag{Cauchy-Schwarz} \\
& \leq \lVert x + y \rVert \cdot \max_{i \in I} {\lVert M_i \rVert} \, \lVert x-y \rVert
\notag
\\
& \leq \beta_t {\lVert x + y \rVert} \cdot \lVert x-y \rVert
\tag{$\lVert M_i \rVert \leq \beta_t$}
\\
& \leq 2 \beta_t \lVert x - y \rVert \; , \notag
\end{align}
since $\lVert x + y \rVert \le 2$. Thus, every element of $Fbb_t^{\text{\tiny min-plus}}$ is $2\beta_t$-Lipschitz on
$\mathbf{S}$ and by stability by pointwise infimum, $Fbb_t^{\text{\tiny min-plus}}$ is
stable by pointwise convergence.
\noindent $\bullet$ \ref{Assumptions}-\eqref{Final-condition}.
By Assumption~\ref{hypo:linear-quad-switch}, the final cost function $\psi$ is an element of $Fbb_T^{\text{\tiny min-plus}}$.
\noindent $\bullet$ \ref{Assumptions}-\eqref{StabilityBellman}.
This is given by Corollary~\ref{coro:StableBellman}.
\noindent $\bullet$ \ref{Assumptions}-\eqref{order-preserving}.
Proceed as in Proposition~\ref{propStructurale_SDDP}.
\noindent $\bullet$ \ref{Assumptions}-\eqref{Additively-subhomogeneous}.
Fix a time step $t\in \ce{0,T-1}$, a compact $K_t \subset \mathop{\mathrm{dom}}(V_t)\, (=\mathbb{X})$, a function $\phi \in Fbb_{t+1}^{\text{\tiny min-plus}}$ and a constant $\lambda \geq 0$. By definition of $Fbb_{t+1}^{\text{\tiny min-plus}}$, there exists a finite set $F := \left\{ \phi_i \right\}_{i\in I} \subset Fb_{t+1}^{\text{\tiny min-plus}}$ such that $\phi = \inf_{i \in I} \phi_i$.
By Equation~\eqref{equa:inverseRiccati_determinist}, for each $i\in I$ and $v \in \mathbb{V}$, there exists a linear map $L_i^{v}$ such that
\begin{equation}
\label{eq:tmp123}
\min_{u\in \mathbb{U}} c_t^v\np{x,u} + \phi_i\np{f_t^{v}\np{x,u}}= c_t^{v} \np{x,L_i^{v}(x)} + \phi_i\np{f_t^{v}\np{x,L_i^{v}(x)}}
\enspace ,
\end{equation}
with $\|L_i^{v}\|\leq \alpha_{t+1}C_t$,
where $C_t$ is a constant depending on the parameters of the
control problem only.
Hence the maps $x\mapsto f_t^{v}\np{x,L_i^{v}(x)}$
are linear and their norm are bounded by $(\alpha_{t+1}+1)C'_t$
for some constant $C'_t$ depending on the parameters of the control problem
only. Set $M_t :=((\alpha_{t+1}+1)C'_t \|K_t\|)^2 $,
where $ \|K_t\|$ is the radius of a ball centered in $0$ including $K_t$. Therefore, for $x\in K_t$, we have $\lVert f_t^{v}\np{x,L_i^{v}(x)}\rVert^2 \leq M_t$. Now, for $x\in K_t$, using the bound on~$f_t$ we have
\begin{align*}
\mathcal{B}_t\left(\phi + \lambda \right)\np{x}
& = \min_{\substack{i\in I \\ u\in \mathbb{U} \\ v \in \mathbb{V}}}
c_t^v\np{x,u} + \lVert f_t^{v}\np{x,u}\rVert^2 \np{\phi_i + \lambda}
\np{ \frac{f_t^{v}\np{x,u} }{\lVert f_t^{v}\np{x,u}\rVert} } \\
& \leq \min_{\substack{i\in I \\ v \in \mathbb{V}}}
c_t^{v} \np{x,L_i^{v}(x)} + \lVert f_t^{v}\np{x,L_i^{v}(x)}\rVert^2
\np{\phi_i+\lambda} \np{ \frac{f_t^{v}\np{x,L_i^{v}(x)}}{\lVert f_t^{v}\np{x,L_i^{v}(x)}\rVert}} \\
& \leq \min_{\substack{i\in I \\ v \in \mathbb{V}}} c_t^{v} \np{x,L_i^{v}(x)} + \lVert f_t^{v}\np{x,L_i^{v}(x)}\rVert^2 \phi_i
\np{ \frac{ f_t^{v}\np{x,L_i^{v}(x)}}{\lVert f_t^{v}\np{x,L_i^{v}(x)}\rVert} } +M_t \lambda \\
& = \min_{\substack{i\in I \\ v \in \mathbb{V}}}
\mathcal{B}p{c_t^{v} \np{x,L_i^{v}(x)} + \phi_i \np{f_t^{v}\np{x,L_i^{v}(x)}}} + M_t \lambda
\\
& = \min_{\substack{i\in I \\ u\in \mathbb{U} \\ v \in \mathbb{V}}}
\mathcal{B}p{c_t^v\np{x,u} + \phi_i \bp{ f_t^{v}\np{x,u} }} + M_t \lambda \tag{by~\eqref{eq:tmp123}} \\
& = \mathcal{B}_t\np{\phi}(x) + M_t\lambda \; .
\end{align*}
Hence we obtain the desired result, \(
\mathcal{B}_t\left(\phi + \lambda \right)\np{x} \leq M_t\lambda + \mathcal{B}_t\np{\phi}(x)
\).
\noindent $\bullet$ \ref{Assumptions}-\eqref{proper-value}.
This is a consequence of Lemma~\ref{FiniteInfimum}.
\noindent $\bullet$ \ref{Assumptions}-\eqref{optimal-sets}.
Fix $\phi \in Fbb_t^{\text{\tiny min-plus}}$ and $\lambda \geq 0$, and set $\tilde{\phi} := \phi + \lambda$. For every $x\in \mathbb{X}$, we have that
\begin{align*}
\mathcal{B}_t\np{\tilde{\phi}}\np{x} & = \min_{\np{u,v}\in \mathbb{U} \times \mathbb{V}} c_t^{v}\np{x,u} + \lVert f_t^{v}\np{x,u}\rVert^2 \tilde{\phi}\np{\frac{f_t^{v}\np{x,u}}{\lVert f_t^{v}\np{x,u} \rVert}} \\
& = \min_{\np{u,v}\in \mathbb{U} \times \mathbb{V}} c_t^{v}\np{x,u} + \lVert f_t^{v}\np{x,u}\rVert^2 \np{\tilde{\phi} + \indi{\mathbf{S}}}\np{\frac{f_t^{v}\np{x,u}}{\lVert f_t^{v}\np{x,u} \rVert}} \\
& = \mathcal{B}_t\np{\tilde{\phi} + \indi{\mathbf{S}}}(x),
\end{align*}
which implies the desired result.
\end{proof}
\begin{remark}
We have shown that $\mathcal{B}_t$ is additively subhomogeneous with constant $M_t$. An upper bound of $M_t$ can be computed as in the proof of Proposition~\ref{StableInterval}, by bounding the greatest eigenvalue of each matrices $L_i^{v}$.
\end{remark}
We now define, for any $t\in \ce{0,T}$, a selection function $\phi_t^{\text{\tiny min-plus}}$ and prove that it is a compatible selection function. As each $\mathop{\arg\min}$ mentioned below involves a finite set, selecting an element in the $\mathop{\arg\min}$ raises no issue.
\begin{proposition}
\label{QuIsCompatible}
For every time $t\in \ce{0,T}$, any $F \subset Fb_t^{\text{\tiny min-plus}}$ and any $x\in \mathbb{X}$,
the function $\phi_t^{\text{\tiny min-plus}}$ defined as follows
\begin{equation}
\phi_t^{\text{\tiny min-plus}}\np{F, x}
\in
\begin{cases}
\mathcal{B}_t \mathcal{B}p{ \mathop{\arg\min}_{\phi \in F} \bp{ \mathcal{B}_t \np{\phi}\np{x}}} & \text{for}\quad t\not= T\; , \enspace \\
\mathop{\arg\min}_{\psi_i \in F} \psi_i(x) & \text{for}\quad t=T
\; ,
\end{cases}
\end{equation}
is a \emph{compatible selection} function as defined in Definition~\ref{det:CompatibleSelection}.
\end{proposition}
\begin{proof}
Fix $t=T$. The function $\phi_t^{\text{\tiny min-plus}}$ is tight and valid as $V_T = \psi$. Now fix $t\in \ce{0,T-1}$. Let $F \subset Fbb_{t+1}^{\text{\tiny min-plus}}$ and $x\in \mathbb{X}$ be arbitrary. We have
\begin{align*}
\mathcal{B}_t\np{\mathcal{V}_F}(x)
& = \mathcal{B}_t \bp{ \inf_{\phi \in F} \phi} (x)
\\
& = \inf_{(u,v)\in \mathbb{U} \times \mathbb{V}}
\mathcal{B}p{c_t^v \np{x, u} + \inf_{\phi \in F}\phi \bp{f_t^v\np{x,u}}}
\\
& = \inf_{\phi \in F}\inf_{(u,v)\in \mathbb{U} \times \mathbb{V}}
\mathcal{B}p{c_t^v \np{x, u}+ \phi \bp{f_t^v\np{x,u}}}
\\
& = \inf_{\phi \in F}\bp{ \mathcal{B}_t\left( \phi \right)(x)} \\
& = \phi_t^{\text{\tiny min-plus}}\left( F, x \right)\np{x}
\; .
\end{align*}
Thus, $\phi_t^{\text{\tiny min-plus}}$ is tight.
By similar arguments, we have for every $x' \in \mathbb{X}$ that
\[
\mathcal{B}_t\np{\mathcal{V}_F}(x') = \bp{\inf_{\phi \in F} \mathcal{B}_t\np{\phi}}\np{x'} \leq \phi_t^{\text{\tiny min-plus}}\np{F, x}\np{x'}
\; .
\]
This shows that $\phi_t^{\text{\tiny min-plus}}\np{F, x}$ is valid and ends the proof.
\end{proof}
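A minimal sketch of this selection (Python/NumPy, reusing the hypothetical \texttt{riccati\_step} helper from the sketch above; the selected function is returned as the list of matrices of its Riccati updates, one per switching control, to be read as their pointwise minimum):
\begin{verbatim}
import numpy as np

def min_plus_selection(F, x, data_by_switch):
    """Compatible selection phi_t^min-plus for t < T: pick a pure quadratic
    form in F (given by its matrix) minimizing B_t(phi)(x) and return its
    image by B_t."""
    x = np.asarray(x, dtype=float)
    def image(M):
        return [riccati_step(M, A, B, C, D) for (A, B, C, D) in data_by_switch]
    def value_at_x(M):
        return min(float(x @ N @ x) for N in image(M))
    best = min(F, key=value_at_x)    # an element of the argmin over F
    return image(best)
\end{verbatim}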
We conclude this section by proving the convergence of the TDP algorithm in the min-plus case.
\begin{theorem}[Upper (inner) approximations of the value functions]
For every $t\in \ce{0,T}$, denote by $\left( V_t^k \right)_{k\in \mathbb{N}}$ the
sequence of functions generated by Tropical Dynamic Programming \ with the selection function
$\phi_t^{\text{\tiny min-plus}}$ and the draws made uniformly over the sphere
$K_t := \mathbf{S}$. Under Assumption~\ref{hypo:linear-quad-switch}, the sequence
$\left( V_t^k \right)_{k\in \mathbb{N}}$ is non-increasing, bounded from below by
$V_t$ and converges uniformly to $V_t^*$ on $\mathbf{S}$. Moreover, almost surely
over the draws, $V_t^* = V_t$ on $\mathbf{S}$.
\end{theorem}
\begin{proof}
As the structural assumptions of Assumption~\ref{Assumptions} are satisfied, the functions
$\phi_t^{\text{\tiny min-plus}}$, $0\leq t\leq T$, are compatible selections, and the
unit sphere $\mathbf{S}$ is $\np{V_t}$-optimal (case $\mathop{\mathrm{opt}} = \inf$), we can apply Theorem~\ref{ConvergenceTheorem}.
\end{proof}
\subsection{Optimal trajectories for upper approximations may not converge}
\label{theExample}
We now give an example showing that approximating from above may fail to converge when the
points are drawn along optimal trajectories for the current upper
approximations of $V_t$ (in contrast with Section~\ref{SDDP_Example}, where we approximate
$V_t$ from below). As shown by Proposition~\ref{HomogeneVSnonhomogene}, there is no loss of
generality in considering the framework of \S\ref{homogeneous_case} but with
non-homogeneous functions.
\begin{figure}
\caption{Illustration of the facts deduced from Equation~\eqref{ex:image_bellman} in \S\ref{theExample}.}
\label{ex:figure}
\end{figure}
We consider a (non-homogeneous) problem with only two time steps, that is $T=1$ and $t \in \na{0,1}$ such that
\noindent $\bullet$ The state space $\mathbb{X}$ and the control space $\mathbb{U}$ are equal to $\mathbb{R}$.
\noindent $\bullet$ The linear dynamic is $f(x,u) = x + u$.
\noindent $\bullet$ The quadratic cost is $c(x,u) = x^2 + u^2$.
\noindent $\bullet$ The final cost function is the infimum, $\psi = \inf\np{\psi_1, \psi_2}$, of two given quadratic mappings,
$\psi_1(x)= (x+2)^2 +1$ and $\psi_2(x)= x^2$.
The Bellman operator $\mathcal{B}$, associated to this multistage optimization problem is
defined for every $\phi : \mathbb{X} \to \overline{\mathbb{R}}$ and every $x\in \mathbb{X}$ by
\[
\mathcal{B}\np{\phi}(x) = \min_{u\in \mathbb{U}}\bp{x^2 + u^2 + \phi\np{x+u}} = x^2 + \min_{u\in \mathbb{U}}\bp{u^2 + \phi\np{x+u}}
\; .
\]
For the case where $\phi_{a,b}\np{\cdot} = \np{\cdot+a}^2 + b$ with $a,b \in \mathbb{R}$ one has for every $x\in \mathbb{R}$
\begin{equation}
\label{ex:image_bellman}
\mathcal{B}\np{\phi_{a,b}}\np{x} = \frac{3}{2}x^2 + ax + \frac{a^2}{2} + b.
\end{equation}
Fix $x_0 = x_0^k = -2$ for every $k\in \mathbb{N}$. As described in
Algorithm~\ref{Tropical Dynamic Programming}, the approximations of the value functions $V_1$ and
$V_0$ are initialized to $+\infty$. Thus every control $u \in \mathbb{U}$ is optimal in
the sense that $u \in \mathop{\arg\min}_{u' \in \mathbb{U}} x^2 + (u')^2 + \phi\np{x+u'}$. Hence
if we set $x_1^0 := -1 = f(x_0, 1)$ then $(x_0,x_1^0)$ is an optimal trajectory
as described in Proposition~\ref{OptimalityOfTrajectories}.
We deduce from Equation~\eqref{ex:image_bellman} the following facts, illustrated in Figure~\ref{ex:figure}.
\begin{enumerate}
\item The image of $\psi_2$ at $-2$ by the Bellman operator $\mathcal{B}$ is strictly greater than
the image of $\psi_1$, \emph{i.e.}
\[
\mathcal{B}\np{\psi_2}(-2) > \mathcal{B}\np{\psi_1}\np{-2}.
\]
\item The image by the $k$-th optimal dynamic of $-2$ is $-1$, \emph{i.e.}
setting $u_0^k := \mathop{\arg\min}_{u' \in \mathbb{U}} (-2)^2 + (u')^2 + V_1^k \np{-2+u'}$
(the $\mathop{\arg\min}$ is here a singleton) one has
\[
f\np{-2, u_0^k} = -1.
\]
\item At the final step $t=1$, the best function at the point $-1$ is $\psi_2$, \emph{i.e.}
\[
\psi(-1) = \inf\np{\psi_1(-1), \psi_2(-1)} = \psi_2\np{-1}.
\]
\end{enumerate}
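These facts can also be checked numerically. The following brute-force sketch (Python/NumPy; the grid search over the scalar control is purely illustrative) evaluates the Bellman images at $x_0=-2$ and the optimal successor state.
\begin{verbatim}
import numpy as np

us = np.linspace(-10.0, 10.0, 200001)   # fine grid of scalar controls

def bellman(phi, x):
    # Numerical evaluation of B(phi)(x) = min_u [ x^2 + u^2 + phi(x + u) ].
    return float(np.min(x**2 + us**2 + phi(x + us)))

psi1 = lambda y: (y + 2.0)**2 + 1.0
psi2 = lambda y: y**2

print(bellman(psi1, -2.0), bellman(psi2, -2.0))   # ~5.0 < ~6.0: fact (1)
u_star = us[np.argmin((-2.0)**2 + us**2 + psi2(-2.0 + us))]
print(-2.0 + u_star)                              # ~-1.0: fact (2)
print(min(psi1(-1.0), psi2(-1.0)) == psi2(-1.0))  # True: fact (3)
\end{verbatim}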
From those three facts, one can deduce that, starting from $x_0 = -2$ and $x_1= -1$, the
optimal trajectory for the current approximations will always be sent to
$x_1=-1$. But, as in the proof of Proposition~\ref{QuIsCompatible}, the
image by $\mathcal{B}$ of an infimum is the infimum of the images by $\mathcal{B}$:
\[
V_0(-2) = \mathcal{B}\np{ \inf \np{\psi_1, \psi_2} }(-2) = \inf\np{ \mathcal{B}\np{ \psi_1 }(-2) , \mathcal{B} \np{ \psi_2 }(-2)}.
\]
Thus for every $k \in \mathbb{N}$, we have
\(
V_0(-2) = \mathcal{B}\np{ \psi_1 }(-2) < \mathcal{B} \np{ \psi_2 }(-2) = V_0^k\np{-2}
\). The constant sequence $V_0^k\np{-2}$ fails to converge to $V_0\np{-2}$.
\section{Numerical experiments on a toy example}
\label{sec:ToyExample}
In \S\ref{constrained-linquad}, we propose a toy optimization Problem~\eqref{ConstrainedLQ} on which we run TDP-SDDP and TDP-Minplus. Problem~\eqref{ConstrainedLQ} falls in the framework described in Section~\ref{SDDP_Example}. Thus, we are able to obtain lower approximations of $V_t$ using TDP-SDDP. TDP-Minplus cannot be applied directly. We apply a ``discretization'' step to Problem~\eqref{ConstrainedLQ} (see \S\ref{Discretization}) which yields Problem~\eqref{SwitchedConstrainedLQ} parameterized by an integer $N > 0$. Then we apply to Problem~\eqref{SwitchedConstrainedLQ} a ``homogenization'' step (see \S\ref{Homogeneization}) to obtain Problem~\eqref{HomogeneizedSwitchedConstrainedLQ}. The value functions $V_t$ of the original Problem~\eqref{ConstrainedLQ} are bounded from above by $\widetilde{V}_{t,N}$, the value functions of Problem~\eqref{HomogeneizedSwitchedConstrainedLQ}. We apply TDP-Minplus (described in Section~\ref{sec:Exemples_switch}) to Problem~\eqref{HomogeneizedSwitchedConstrainedLQ}, which gives upper approximations of $\widetilde{V}_{t,N}$ and \emph{a fortiori} of $V_t$. In \S\ref{numerical_experiments}, we present numerical experiments illustrating the convergence of this approximation scheme to $V_t$.
\subsection{A toy example: constrained linear-quadratic framework}
\label{constrained-linquad}
Let $\beta,\gamma$ be two given reals such that $\beta < \gamma$, we study the
following multistage linear quadratic problem involving a constraint on one of
the controls:
\begin{subequations}
\label{ConstrainedLQ}
\begin{align}
\min_{(x,u,v) \in \mathbb{X}^{T+1} \times \mathbb{U}^{T} \times [\beta,\gamma]^{T} }
& \sum_{t=0}^{T-1} c_t(x_t, u_t,v_t) + \psi(x_T)
\\
\text{s.t.}
& \quad x_0 \in \mathbb{X} \ \text{given, and}\quad
\forall t \in \ce{0,T-1}, \ x_{t+1} = f_t(x_t, u_t, v_t) \; ,
\end{align}
\end{subequations}
where $\mathbb{X}=\mathbb{R}^n$ and $\mathbb{U}=\mathbb{R}^m$, with quadratic convex cost functions of the
form \[c_t(x,u,v) = x^T C_t x + u^T D_t u + v^2 d_t,\] where $C_t \in\mathbb{S}^+_n$,
$D_t \in \mathbb{S}^{++}_m$ and $d_t>0$, linear dynamics
$f_t(x,u, v) = A_t x + B_t u + v b_t$, where $A_t$ (resp. $B_t$) is a
$n \times n$ (resp. $n \times m$) matrix, $b_t \in \mathbb{X}$, and final cost function
$\psi : x \mapsto x^T M x$ with $M \in \mathbb{S}^{++}_n$.
For every $t\in \ce{0,T}$, the value function $V_t$ is $L_t$-Lipschitz
continuous and convex. Moreover the Lipschitz constant $L_t > 0$ can be
explicitly computed. As done in Section~\ref{SDDP_Example} we will generate
lower approximations of the value functions $V_t$ through compatible selection
functions $\np{\phi_t^{\text{\tiny SDDP}}}_{t\in \ce{0,T}}$. In this example, the
structural assumptions of Assumption~\ref{Assumptions} are not satisfied, as the sets of states
and controls are not compact. Since we still observe convergence of the lower
approximations generated by TDP with
$\np{\phi_t^{\text{\tiny SDDP}}}_{t\in \ce{0,T}}$ to the value functions, this
suggests that the (classical) framework presented in Section~\ref{SDDP_Example}
can be extended. This will be the object of future work; here we would like
to stress the numerical scheme and results.
\subsection{Discretization of the constrained control}
\label{Discretization}
We approximate Problem~\eqref{ConstrainedLQ} by discretizing the constrained
control in order to obtain an unconstrained switched multistage linear
quadra\-tic problem. More precisely, we fix an integer $N \geq 2$, set
$ v_i = \beta + i\frac{\gamma-\beta}{N-1}$ for every $i \in \ce{0,N-1}$ and set
$\mathbb{V}:= \left\{v_0, v_1, \ldots v_{N-1} \right\}$. Then, we define the following
unconstrained switched multistage linear quadratic problem:
\begin{subequations}
\label{SwitchedConstrainedLQ}
\begin{align}
\min_{(x,u,v) \in \mathbb{X}^{T+1} \times \mathbb{U}^{T} \times \mathbb{V}^{T} }
& \sum_{t=0}^{T-1} c_t^{v_t} (x_t, u_t) + \psi(x_T)
\\
\text{s.t.}
& \quad x_0 \in \mathbb{X} \ \text{given, and}\quad
\forall t \in \ce{0,T-1}, \ x_{t+1} = f_t^{v_t}(x_t, u_t)\; ,
\end{align}
\end{subequations}
where for every $v\in \mathbb{V}$, $f_t^v = f_t\np{\cdot, \cdot, v}$ and
$c_t^{v} = c_t\np{\cdot, \cdot, v}$. As the set of controls of Problem
\eqref{ConstrainedLQ} contains the set of controls of Problem
\eqref{SwitchedConstrainedLQ}, upper approximations of the value functions of
Problem \eqref{SwitchedConstrainedLQ} give upper approximations of the
value functions of Problem \eqref{ConstrainedLQ}. Thus we will construct upper
approximations for Problem~\eqref{SwitchedConstrainedLQ}.
\subsection{Homogenization of the costs and dynamics}
\label{Homogeneization}
We add a dimension to the state space in order to homogenize the costs and
dynamics, when a sequence of switching controls is fixed. Define the following
homogenized costs and dynamics for every $t\in \ce{0,T-1}$ by:
\begin{align*}
\tilde{f_t}^v\np{x,y,u} & = \begin{pmatrix} A_t & vb_t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} B_t \\ 0 \end{pmatrix} u, \\[1ex]
\tilde{c_t}^v (x,y,u) & = \begin{pmatrix} x \\ y \end{pmatrix}^T \begin{pmatrix} C_t & 0 \\ 0 & v^2 d_t \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + u ^T D_t u,
\end{align*}
As the final cost function is already homogeneous, we simply set
$\widetilde{\psi}(x,y) = \begin{pmatrix} x \\ y \end{pmatrix}^T \begin{pmatrix}
M & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$. Using
these homogenized functions we define a multistage optimization problem with one
more (compared to Problem~\eqref{SwitchedConstrainedLQ}) dimension on the state
variable:
\begin{equation}
\label{HomogeneizedSwitchedConstrainedLQ}
\begin{aligned}
& \min_{{(x,y,u,v) \in \mathbb{X}^{T+1} \times \mathbb{R}^{T+1} \times \mathbb{U}^T \times \mathbb{V}^{T}}} \sum_{t=0}^{T-1} \tilde{c_t}^{v_t} (x_t, y_t, u_t) + \widetilde{\psi}(x_T, y_T) \\
& \text{s.t.} \ \left\{
\begin{aligned}
& (x_0, y_0) \in \mathbb{X} \times \mathbb{R} \ \text{is given,} \\
& \forall t \in \ce{0,T-1}, \ (x_{t+1},y_{t+1}) = \tilde{f_t}^{v_t}(x_t, y_t, u_t) \; . \\
\end{aligned}\right.
\end{aligned}
\end{equation}
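For illustration, the homogenized data can be assembled mechanically from the original matrices. The following Python/NumPy sketch (ours, not part of the original formulation; the function name is arbitrary) builds, for a fixed switch value $v$, the augmented matrices appearing in $\tilde{f_t}^v$ and $\tilde{c_t}^v$.
\begin{verbatim}
import numpy as np

def homogenize(A_t, b_t, B_t, C_t, d_t, v):
    """Augmented (n+1)-dimensional matrices for a fixed switch value v."""
    n, m = B_t.shape
    # Dynamics: (x, y) -> (A_t x + v*b_t*y + B_t u, y)
    A_aug = np.block([[A_t, v * b_t.reshape(n, 1)],
                      [np.zeros((1, n)), np.ones((1, 1))]])
    B_aug = np.vstack([B_t, np.zeros((1, m))])
    # Pure quadratic state cost: x^T C_t x + (v^2 d_t) y^2  (u^T D_t u unchanged)
    C_aug = np.block([[C_t, np.zeros((n, 1))],
                      [np.zeros((1, n)), v**2 * d_t * np.ones((1, 1))]])
    return A_aug, B_aug, C_aug
\end{verbatim}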
One can deduce the value functions $V_{t,N}$ of the multistage optimization
problem \eqref{SwitchedConstrainedLQ} (with non-homogeneous costs and dynamics)
from the value functions $\widetilde{V}_{t,N}$ of~\eqref{HomogeneizedSwitchedConstrainedLQ}
(with homogeneous costs and dynamics) by
Proposition~\ref{HomogeneVSnonhomogene}. For every $x\in \mathbb{X}$, we have that
\begin{equation}
\label{Homogeneisation}
V_{t,N}(x) = \widetilde{V}_{t,N}(x,1) \; .
\end{equation}
For every time step $t\in\ce{0,T}$ the value function $\widetilde{V}_{t,N}$ solution of
Problem~\eqref{HomogeneizedSwitchedConstrainedLQ} is $2$-homogeneous. That is,
for every $(x,y)\in \mathbb{X}\times \mathbb{R}$ and every $\lambda \in \mathbb{R}$, we have
$ \widetilde{V}_{t,N}\left(\lambda x, \lambda y \right) = \lambda^2 \widetilde{V}_{t,N} \left( x, y
\right).$ This will allow us to restrict the study of the value functions to the
unit sphere, which is compact.
\subsection{Min-plus upper approximations of the value functions of Problem~\eqref{HomogeneizedSwitchedConstrainedLQ}}
\label{qu-sec}
We apply the results of Section~\ref{sec:Exemples_switch} as follows.
Let $v\in \mathbb{V}$ be a given switching control. In this framework, the operator $\mathcal{B}_t^v$ is defined as in Section~\ref{sec:Exemples_switch}, but with an augmented state. More precisely, for every function $\phi : \mathbb{X} \times \mathbb{R} \to \overline{\mathbb{R}}$ and every point $(x,y) \in \mathbb{X} \times \mathbb{R}$:
\begin{align*}
\mathcal{B}_t^v \left( \phi \right)(x,y) = \inf_{u\in \mathbb{U}} \tilde{c}_t^v(x,y, u) + \lVert \tilde{f}_t^v(x, y, u) \rVert^2\phi\np{\frac{\tilde{f}_t^v(x, y, u)}{\lVert \tilde{f}_t^v(x, y, u) \rVert}} \; .
\end{align*}
Then, for every time $t\in \ce{0,T-1}$,
the Dynamic Programming operator $\mathcal{B}_t $ associated to Problem~\eqref{HomogeneizedSwitchedConstrainedLQ}
satisfies $\mathcal{B}_t \left( \phi \right) := \inf_{v\in \mathbb{V}} \mathcal{B}_t^v \left( \phi \right)$.
A key property of the operators $\mathcal{B}_t^v$ and $\mathcal{B}_t$ is that they are
min-additive, meaning that for all functions
$\phi_1, \phi_2 : \mathbb{X} \times \mathbb{R} \to \overline{\mathbb{R}}$ one has:
\[
\mathcal{B}_t^v \bp{ \inf\np{\phi_1, \phi_2} } = \inf\bp{\mathcal{B}_t^v\np{\phi_1}, \mathcal{B}_t^v\np{\phi_2}}\enspace ,
\]
and a similar equation for $\mathcal{B}_t$. Moreover, by the Riccati formula (see
Equation~\eqref{equa:Riccati_determinist}), the image of a convex quadratic
function by $\mathcal{B}_t^v$ is also a convex quadratic function.
Lemma~\ref{FiniteInfimum} suggests
to use the following set of basic functions:
\[
Fb_t^{\text{\tiny min-plus}} := F_t \ \text{ and } \ Fbb_t^{\text{\tiny min-plus}} := \left\{ \mathcal{V}_F \ \Big\vert \ F \subset Fb_t^{\text{\tiny min-plus}} \right\} \; .
\]
As done in Section~\ref{sec:Exemples_switch}, one could also have considered as
basic functions the quadratic functions bounded in the Loewner sense between $0$
and $\alpha_t \mathrm{I}$, where $\alpha_t > 0$, $t \in \ce{0,T}$, are real numbers
such that, if $\phi$ is a quadratic form bounded between $0$ and
$\alpha_{t+1}\mathrm{I}$, then $\mathcal{B}_t^v\np{\phi}$ is bounded between $0$ and
$\alpha_t\mathrm{I}$.
Moreover, using the aforementioned properties, one will be able to compute
$\mathcal{B}_t^v\np{ \mathcal{V}_F}$ for a given switching control $v$ and
$\mathcal{B}_t\np{ \mathcal{V}_F}$, for any finite set $F$ of convex quadratic functions.
Therefore, given a time $t\in \ce{0,T-1}$, we define the selection function
$\phi_t^{\text{\tiny min-plus}}$ as follows. For any given
$F \subset Fb_{t+1}^{\text{\tiny min-plus}}$ and $(x,y)\in \mathbb{X}\times \mathbb{R}$,
\begin{align*}
& \phi_t^{\text{\tiny min-plus}}\left(F, x, y \right) =\mathcal{B}_t^v(\phi) \\
& \text{for some}\quad (v,\phi)\in \mathop{\arg\min}_{(v,\phi)\in \mathbb{V}\times F} \mathcal{B}_t^v \left( \phi \right) \left( x, y \right).
\end{align*}
Moreover, at time $t=T$, for any $F \subset Fb_T^{\text{\tiny min-plus}}$ and $(x,y) \in \mathbb{X} \times \mathbb{R}$, we set
\[
\phi_T^{\text{\tiny min-plus}}\left(F, x,y \right) = \widetilde{\psi}(x,y) = \psi(x).
\]
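As an illustration of how $\phi_t^{\text{\tiny min-plus}}$ can be implemented when the basic functions are pure quadratic forms, here is a Python/NumPy sketch (ours; the helper names \texttt{riccati\_update} and \texttt{homogenized\_data} are hypothetical, the latter returning the augmented matrices of \S\ref{Homogeneization}). It evaluates $\mathcal{B}_t^v(\phi)$ through the Riccati formula recalled in the appendix and keeps the pair $(v,\phi)$ that is minimal at the drawn point.
\begin{verbatim}
import numpy as np

def riccati_update(M, A, B, C, D):
    # Image of the pure quadratic form z -> z^T M z by B_t^v (see appendix).
    K = np.linalg.inv(D + B.T @ M @ B)
    return C + A.T @ M @ A - A.T @ M @ B @ K @ B.T @ M @ A

def phi_minplus(F, x_aug, switches, homogenized_data, D_t):
    # F: finite set of quadratic forms (symmetric PSD matrices) at time t+1.
    # x_aug: drawn point (x, y) on the unit sphere of X x R.
    best_M, best_val = None, np.inf
    for v in switches:
        A_aug, B_aug, C_aug = homogenized_data(v)
        for M in F:
            M_new = riccati_update(M, A_aug, B_aug, C_aug, D_t)
            val = x_aug @ M_new @ x_aug       # value of B_t^v(phi) at (x, y)
            if val < best_val:
                best_M, best_val = M_new, val
    return best_M
\end{verbatim}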
Motivated by the 2-homogeneity of the value functions, the random draws of TDP
for the basic functions $Fb_t^{\text{\tiny min-plus}}$, $1 \leq t \leq T$ and the
selection functions $\phi_t^{\text{\tiny min-plus}}$ will be made uniformly on the
unit euclidean sphere, which satisfies
Definition~\ref{AssumptionOracle}. Indeed, by $2$-homogeneity, it is enough to
know the value functions of \eqref{HomogeneizedSwitchedConstrainedLQ} on the
sphere to know them on the whole state space.
\subsection{Upper and lower approximations of the value functions}
For a large number of discretization points $N$ (defined in
\S\ref{Discretization}), one would expect that the value functions $V_{t,N}$
of~\eqref{SwitchedConstrainedLQ} approximate the value functions $V_t$
of~\eqref{ConstrainedLQ}.
Indeed, one can show that for every time step $t\in \ce{0,T}$, the approximation
error is bounded by $C_t T/N^2$ in $\mathbb{X}$, for some constant $C_t > 0$.
Thus, for large $N$, we have $V_{t,N} \approx V_t$ and by Equation~\eqref{Homogeneisation}, for every $N \geq 2$, we have
\[
\widetilde{V}_{t,N}\np{\cdot, 1} = V_{t,N}\geq V_t.
\]
In the following theorem we approximate $\widetilde{V}_{t,N}$ from above by the min-plus
algorithm and $V_t$ from below by SDDP;
using the convergence result of Theorem~\ref{ConvergenceTheorem} (and admitting that it still holds for SDDP in this framework), we obtain the following.
\begin{theorem}
\label{ConvergenceTheoremExample}
For every $t\in \ce{0,T}$, denote by
$\left( \overline{V}_t^k \right)_{k\in \mathbb{N}}$ (resp.
$\np{ \underline{V}_t^k}_{k\in \mathbb{N}}$) the sequence of functions generated by TDP
with the selection function $\phi_t^{\text{\tiny min-plus}}$
(resp. $\phi_t^{\text{\tiny SDDP}}$) and the draws made uniformly over the euclidean
sphere of $\mathbb{X}\times \mathbb{R}$ (resp. made as described in
Proposition~\ref{OptimalityOfTrajectories}).
Then the sequence $\left( \overline{V}_t^k \right)_{k\in \mathbb{N}}$
(resp. $\np{\underline{V}_t^k}_{k \in \mathbb{N}}$) is non-increasing
(resp. non-decreasing), bounded from below (resp. above) by $\widetilde{V}_{t,N}$
(resp. $V_t$) and converges uniformly to $\widetilde{V}_{t,N}$ (resp. $V_t$) on any
compact subset of $\mathbb{X}\times \mathbb{R}$ (resp. $K_t^*$ defined in
Proposition~\ref{OptimalityOfTrajectories}).
\end{theorem}
\subsection{Numerical experiments}
\label{numerical_experiments}
The following data was used as a specific case of \eqref{ConstrainedLQ}. For every time $t\in \ce{0,T-1}$,
\begin{align*}
A_t & = (1-0.1)\;\mathrm{Id} \quad & B_t & = \begin{pmatrix} 1 & \cdots & 1 \\ \vdots & & \vdots \\ 1 & \cdots & 1 \end{pmatrix} \quad & b_t & = \begin{pmatrix} 1 \\ \vdots \\ 1\end{pmatrix} \\
C_t & = 0.1 \;\mathrm{Id} \quad & D_t & = 0.1 \;\mathrm{Id} \quad & d_t & = 0.1.
\end{align*}
The time horizon is $T = 15$, the states are in $\mathbb{X} = \mathbb{R}^n$ with $n = 25$, the
unconstrained continuous controls are in $\mathbb{U} = \mathbb{R}^m$ with $m = 3$, the
constrained continuous control is in $[\beta, \gamma]$,
with
$[\beta,\gamma]=[1,5]$ in the first example and $[\beta,\gamma]=[-3,5]$ in the
second one. Moreover, we start from the initial point
$x_0 = 0.2 \; (1 , \ldots , 1 )^T$ when TDP is applied with the selection
function $\phi_t^{\text{\tiny SDDP}}$ and the number of discretization points $N$ is
varying from $5$ to $200$, for TDP with the selection function
$\phi_t^{\text{\tiny min-plus}}$. In Figures~\ref{FirstExample}
and~\ref{SecondExample}, we give graphs representing the values
$\underline{V}_t^k\np{x_t^k}$ and $\overline{V}_t^k\np{x_t^k,1}$ at each time
step $t\in \ce{0,T-1}$ where the sequence of states $\np{x_t^k}_{k\in \mathbb{N}}$ is
the optimal trajectory for the current lower approximations
$\np{\underline{V}_t^k}_{k\in \mathbb{N}}$ defined in Proposition~\ref{OptimalityOfTrajectories}.
From Theorem~\ref{ConvergenceTheoremExample}, we know that for every
$t \in \ce{0,T-1}$ the gap
$\overline{V}_t^k\np{x_t^k,1} - \underline{V}_t^k\np{x_t^k}$ should be close to
$0$ as $k$ increases assuming that $N$ is large enough to have
$V_t \approx V_{t,N}$.
\begin{figure*}
\caption{First example for $\beta=1$, $\gamma=5$ with varying $N$.}
\label{FirstExample}
\end{figure*}
\begin{figure*}
\caption{Second example for $\beta=-3$, $\gamma=5$ with varying $N = 5$ (left),
$N = 50$ (middle) and $N = 200$ (right) after $20$ iterations.}
\label{SecondExample}
\end{figure*}
\begin{figure*}
\caption{Computation time per iteration of TDP with the min-plus and SDDP selection functions.}
\label{TimePlots}
\end{figure*}
On these two examples, we observe two different convergence behaviors. In the first
example, the constrained control has to be greater than $1$, which excludes $0$,
the (nearly) optimal control of the unconstrained problem.
The optimal constrained control is the projection on
$\mathbb{U} \times [\beta,\gamma]$ of the optimal unconstrained control, so the
switching control is most of the time equal to the lower bound $\beta=1$.
From this observation we deduce two properties. First, the upper approximation
given by Qu's algorithm is good, even for a small $N$, as the optimal switch is
(most of the time) equal to $\beta$. Second, this implies that at iteration $k$,
the set $F_t^k$ has small cardinality.
Moreover, when the number of switches is small (e.g. $N = 5$), few
computations of $\mathcal{B}_t^v\np{\phi}(x)$ need to be done in order to compute
$\mathcal{B}_t\np{\phi}(x)$ for some $x \in \mathbb{X}$ and $\phi \in F_t^k$. Thus, as
shown on the left of Figure~\ref{TimePlots}, the computation time of an
iteration of Qu's min-plus algorithm is small compared to that of SDDP, which does not
exploit this property.
In the second example, the constrained control lies in an interval containing
$0$. The switching control changes often, which means more computations. A
compromise between computational time and precision can be achieved (see
Figure~\ref{TimePlots}) in order to make the computational time of Qu's algorithm
similar to that of the SDDP algorithm.
\section*{Conclusion}
In this article we have devised an algorithm, Tropical Dynamic Programming, that encompasses both a
discrete time version of Qu's min-plus algorithm and the SDDP algorithm in the
deterministic case. We have shown in the last section that Tropical Dynamic Programming \ can be
applied to two natural frameworks: one for min-plus and one for SDDP. In the
case where both frameworks intersect, one could apply Tropical Dynamic Programming \ with the
selection functions $\phi_t^{\text{\tiny min-plus}}$ and get non-increasing upper
approximations of the value function. Simultaneously, by applying Tropical Dynamic Programming \
with the selection function $\phi_t^{\text{\tiny SDDP}}$, one would get
non-decreasing lower approximations of the value function. Moreover, we have
shown that the upper approximations are, almost surely, asymptotically equal to
the value function on the whole space of states $\mathbb{X}$ and that the lower
approximations are, almost surely, asymptotically equal to the value function on
a set of interest.
Thus, in those particular cases we get converging bounds for $V_0(x_0)$, which
is the value of the multistage optimization Problem~\ref{MultistageProblem},
along with asymptotically exact minimizing policies. In those cases, we have
shown a possible way to address the issue of computing efficient upper bounds
when running the SDDP algorithm by running in parallel another algorithm (namely
TDP \ with min-plus selection functions).
In Section~\ref{sec:ToyExample} we studied a way to simultaneously build lower
and upper approximations of the value functions using the results of the
previous sections. However the discretization and homogenization scheme that
was described is rapidly limited by the dimension of the control space, due to
the need to discretize the constrained controls. We will provide, in a future
work, a systematic scheme to use SDDP and a min-plus method simultaneously,
which is more efficient numerically and does not rely on a discretization of the
control space. Moreover, we will extend the current framework to multistage
stochastic optimization problems with finite white noise.
\appendix
\section{Algebraic Riccati Equation}
This section gives complementary results for Section~\ref{sec:Exemples_switch}. We use the same framework and notations introduced in Section~\ref{sec:Exemples_switch}.
\begin{proposition}
Fix a discrete control $v \in \mathbb{V}$ and a time step $t\in \ce{0,T-1}$.
\begin{myenumerate}
\item The operator $\mathcal{B}_t^v: \mathbb{S}_n^+\to \mathbb{S}_n^+$ restricted to the pure
quadratic forms (identified with $\mathbb{S}_n^+$, the space of symmetric
positive semidefinite matrices) is given by the \emph{discrete time algebraic Riccati equation}
\begin{align}
\label{equa:Riccati_determinist}
\mathcal{B}_t^v\np{M} = C_t^v + \np{A_t^v}^T M A_t^v -
\np{A_t^v}^T M B_t^v
\bp{D_t^v + \np{B_t^v}^T M B_t^v }^{-1}
\np{B_t^v}^{T} M A_t^v
\; .
\end{align}
\item Moreover, when $M \in \mathbb{S}_n^+$ Equation~\eqref{equa:Riccati_determinist} can be rewritten as
\begin{equation} \label{equa:Riccati_reduced}
\mathcal{B}_t^v(M) = \left(A_t^v\right)^TM\left(I + B_t^v\left(D_t^v\right)^{-1}\left(B_t^v\right)^TM\right)^{-1}A_t^v + C_t^v.
\end{equation}
\end{myenumerate}
\end{proposition}
\begin{proof}\quad
\noindent $\bullet$ We prove Equation~\eqref{equa:Riccati_determinist}. Note that if $M$ is symmetric, then $\mathcal{B}_t^v(M)$ is also symmetric.
Now, let $t \in \na{T{-}1, T{-}2, \ldots, 0}$ and $M \in \mathbb{S}_n^+$ be fixed.
Let $x\in \mathbb{X}$, we have that
\begin{align}
\mathcal{B}_t^v(M)(x)
& = \inf_{u \in \mathbb{U}} x^T C_t^v x + u^T D_t^v u + \lVert f_t^v\left(x,u \right)\rVert^2 \frac{f_t^v\left(x,u \right)^T}{\lVert f_t^v\left(x,u \right)\rVert} M \frac{f_t^v\left(x,u \right)}{\lVert f_t^v\left(x,u \right)\rVert} \notag
\\
& = \inf_{u \in \mathbb{U}} x^T C_t^v x + u^T D_t^v u + f_t^v\left(x,u \right)^T M f_t^v(x,u) \notag
\\
& = x^T C_t^v x + \inf_{u \in \mathbb{U}} u^T D_t^v u + f_t^v(x,u)^T M f_t^v(x,u). \label{equa:RandomRiccatiProof_determinist}
\end{align}
As $u\mapsto f_t^{v}(x,u)$ is linear, $D_t^v \succ 0$ and $M \succeq 0$, we have that
\[
g: u\in \mathbb{U} \mapsto u^T D_t^v u + f_t^v(x,u)^T Mf_t^v(x,u) \in \mathbb{R}
\]
is strictly convex, hence is minimal when $\nabla g(u) = 0$ \emph{i.e.} for $u\in \mathbb{U}$ such that:
\begin{equation}
\label{equa:optimalite_determ}
\bp{D_t^v + \left( B_t^v \right)^T M B_t^v} u
+ \np{B_t^v}^T M \np{A_t^v}x = 0
\; .
\end{equation}
Now we will show that $D_t^v + \left( B_t^v \right)^T M B_t^v$ is invertible. As $M \succeq 0$ and $D_t^{v} \succ 0$, for every nonzero $u \in \mathbb{U}$, we have:
\begin{equation*}
u^T \bp{D_t^v + \np{ B_t^v}^T M B_t^v} u
= \underbrace{u^T D_t^v u}_{> 0} + \underbrace{\left( B_t^v u\right)^T M \left(B_t^v u\right)}_{\geq 0 }> 0
\; .
\end{equation*}
We have shown that the symmetric matrix $D_t^v + \left( B_t^v \right)^T M B_t^v$ is definite positive and thus invertible. We conclude that Equation~\eqref{equa:optimalite_determ} is equivalent to:
\begin{equation}\label{equa:inverseRiccati_determinist}
u = - \bp{D_t^v + \np{B_t^v}^T M B_t^v }^{-1} \left( B_t^v \right)^T M \left( A_t^v \right) x
\; .
\end{equation}
Finally, replacing Equation~\eqref{equa:inverseRiccati_determinist} in Equation~\eqref{equa:RandomRiccatiProof_determinist} we get after simplifications that
\begin{align*}
\mathcal{B}_t^v(M)(x)
& = x^T \Big( C_t^v + \np{A_t^v}^T M A_t^v
\\
& \hspace{1cm} -\np{A_t^v}^T M B_t^v \bp{D_t^v + \np{B_t^v}^T MB_t^v}^{-1}
\np{B_t^v}^{T} M A_t^v \Big) x
\; ,
\end{align*}
which gives Equation~\eqref{equa:Riccati_determinist}.
\noindent $\bullet$ Equation~\eqref{equa:Riccati_reduced} follows from~\cite[Proposition 12.1.1 page 271]{La.Ro1995}.
\end{proof}
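As a numerical sanity check (ours, not part of the original text), the two forms \eqref{equa:Riccati_determinist} and \eqref{equa:Riccati_reduced} can be compared on random positive semidefinite data:
\begin{verbatim}
import numpy as np

def riccati_full(M, A, B, C, D):
    K = np.linalg.inv(D + B.T @ M @ B)          # Eq. (Riccati_determinist)
    return C + A.T @ M @ A - A.T @ M @ B @ K @ B.T @ M @ A

def riccati_reduced(M, A, B, C, D):
    n = A.shape[0]                               # Eq. (Riccati_reduced)
    return A.T @ M @ np.linalg.inv(np.eye(n)
                                   + B @ np.linalg.inv(D) @ B.T @ M) @ A + C

rng = np.random.default_rng(0)
n, m = 4, 2
A = rng.standard_normal((n, n)); B = rng.standard_normal((n, m))
C = (lambda G: G @ G.T)(rng.standard_normal((n, n)))               # PSD
D = (lambda H: H @ H.T + np.eye(m))(rng.standard_normal((m, m)))   # PD
M = (lambda S: S @ S.T)(rng.standard_normal((n, n)))               # PSD
assert np.allclose(riccati_full(M, A, B, C, D),
                   riccati_reduced(M, A, B, C, D))
\end{verbatim}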
\section{Smallest and greatest eigenvalues}
\label{Appendix:Linear_Algebra}
Here we recall some inequalities involving the smallest and largest eigenvalues of a matrix.
Denote by $\lambda_{\tiny \text{min}}(M)$ the smallest eigenvalue of a real symmetric matrix $M$
(every eigenvalue of $M$ is real) and by $\lambda_{\tiny \text{max}}(M)$ its largest eigenvalue.
\begin{proposition}
\label{prop:valeurpropre}
Let $n>0$ be given. We have the following matrix inequalities.
\begin{subequations}
\begin{align}
& \forall (A, B) \in \mathbb{S}_{n}^2
\; , \enspace \lambda_{\tiny \text{max}}(A + B) \leq \lambda_{\tiny \text{max}}(A) + \lambda_{\tiny \text{max}}(B)
\; .
\label{eq:vpmax-sum}
\\
& \forall (A,B) \in \mathbb{M}_n{\times}\mathbb{S}_{n}
\; , \enspace
\lambda_{\tiny \text{max}}(A^TBA) \leq \lambda_{\tiny \text{max}}(A^TA) \lambda_{\tiny \text{max}}(B)
\; .
\label{eq:vpmax-prod}
\end{align}
\end{subequations}
\end{proposition}
\newcommand{\spectral}[1]{{\lVert #1 \rVert}_{\text{sp}}}
\begin{proof}
For any matrix $M \in \mathbb{M}_n$, the spectral norm $\spectral{M}$ of $M$ (see~\cite[Theorem 1.4.2]{Ci1989})
is the matrix norm subordinate to the euclidean norm on $\mathbb{R}^n$. When the matrix $M\in \mathbb{S}_n$ is real symmetric, we have that $\spectral{M}= \lambda_{\tiny \text{max}}\np{M}$, and
for any real matrix $M \in \mathbb{M}_n$ we have that $\lambda_{\tiny \text{max}}\np{M^T M} = \lambda_{\tiny \text{max}}\np{M M^T} = \spectral{M}^2$.
\noindent $\bullet$ Fix $A,B \in \mathbb{S}_n$, we prove Equation~\eqref{eq:vpmax-sum}. As $A+B\in \mathbb{S}_n$ and using the fact that a subordinate matrix norm is a norm
we have that
\(
\lambda_{\tiny \text{max}}\np{A + B} = \spectral{A + B}\leq \spectral{A}+\spectral{B} = \lambda_{\tiny \text{max}}(A) + \lambda_{\tiny \text{max}}(B)
\).
\noindent $\bullet$ Fix $\np{A,B} \in \mathbb{M}_n{\times}\mathbb{S}_{n}$. We prove Equation~\eqref{eq:vpmax-prod} as follows
\begin{align*}
\lambda_{\tiny \text{max}}\np{A^TBA}
& = \spectral{A^TBA}
\tag{as $A^TBA \in \mathbb{S}_n$}
\\
& \leq \spectral{A^T} \spectral{B}\spectral{A}
\tag{$\spectral{\cdot}$ is submultiplicative as a matrix norm}
\\
& = \spectral{A}^2 \spectral{B}= \lambda_{\tiny \text{max}}(A^TA) \lambda_{\tiny \text{max}}(B)
\; .
\end{align*}
This ends the proof.
\end{proof}
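The two inequalities can also be checked numerically on random matrices; the short script below (ours, for illustration only) does so with NumPy.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 5
sym = lambda X: (X + X.T) / 2
lmax = lambda S: np.linalg.eigvalsh(S)[-1]

A_s, B_s = sym(rng.standard_normal((n, n))), sym(rng.standard_normal((n, n)))
# (eq:vpmax-sum): subadditivity of the largest eigenvalue on symmetric matrices
assert lmax(A_s + B_s) <= lmax(A_s) + lmax(B_s) + 1e-12
# (eq:vpmax-prod), with A an arbitrary square matrix and B symmetric
A = rng.standard_normal((n, n))
assert lmax(A.T @ B_s @ A) <= lmax(A.T @ A) * lmax(B_s) + 1e-12
\end{verbatim}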
\section{Homogenization}
\label{homogenization}
\newcommand{\Homogen}[1]{{\mathcal H}_{#1}}
We explain why, by adding another dimension to the state variable, there is no loss of generality induced by studying pure quadratic forms in Problem~\ref{pb:linear-quad-switch} instead of positive polynomials of degree $2$, nor by studying linear dynamics instead of affine dynamics.
First, we define the operator $\Homogen{2}$ that maps a function $\phi$ defined on a finite dimensional vector space $\mathbb{E}$ to a $2$-homogeneous
function $\Homogen{2}\bp{\phi}$ defined on the extended domain $\mathbb{E} \times \mathbb{R}$ as
follows
\begin{equation}
\label{h2def}
\begin{array}{ccll}
\Homogen{2} : & \overline{\mathbb{R}}^{\mathbb{E}} & \longrightarrow & \overline{\mathbb{R}}^{\mathbb{E} \times \mathbb{R}} \\
& \phi & \longmapsto & \Homogen{2}\bp{\phi}:
\np{z,y} \mapsto y^2 \phi \np{\frac{z}{y}} \ \text{if} \ y \neq 0, \ 0 \ \text{otherwise}.
\end{array}
\end{equation}
Thus, if $\phi$ is a positive polynomial of degree $2$, then $\Homogen{2}\np{\phi}$ is a $2$-homogeneous convex quadratic form (with possibly a mixed term in $x$ and $u$).
In a similar way, we define the operator $\Homogen{1}$ that maps any function $\phi$ defined on a domain $\mathbb{E}$ and taking values in $\mathbb{E}$
to a $1$-homogeneous function $\Homogen{1}\bp{\phi}$ as follows
\begin{equation}
\begin{array}{ccll}
\Homogen{1} : & \mathbb{E}^{\mathbb{E}} & \longrightarrow & \np{\mathbb{E} \times {\mathbb{R}}}^{\mathbb{E} \times {\mathbb{R}}} \\
& \phi & \longmapsto & \Homogen{1}\bp{\phi}: \np{z,y} \mapsto \np{y \phi \np{\frac{z}{y}} ,y} \ \text{if} \ y \neq 0, \ 0 \ \text{otherwise}.
\end{array}
\end{equation}
Now consider $\left(\mathcal{B}_t\right)_{t\in \ce{0,T-1}}$, the Bellman operators associated with Problem~\ref{pb:linear-quad-switch}:
\begin{equation}
\begin{array}{ccll}
\mathcal{B}_t: & \overline{\mathbb{R}}^{\mathbb{X}} & \longrightarrow & \overline{\mathbb{R}}^{\mathbb{X}} \\
& \phi & \longmapsto & \mathcal{B}_t\bp{\phi}: x \mapsto
\min_{\substack{u \in \mathbb{U} \\v \in \mathbb{V}}} c_t^{v} \np{x,u} + \phi \np{ f_t^{v}\np{x,u} }
\; ,
\end{array}
\label{bellman-non-homogene}
\end{equation}
We denote by $\np{\mathcal{B}_t^{\Homogen{}}}_{t\in \ce{0,T-1}}$, the family of Bellman operators obtained through homogenization (with $\mathbb{E}=\mathbb{X}\times\mathbb{U}$) as follows
\begin{equation}
\begin{array}{ccll}
\mathcal{B}_t^{\Homogen{}}: & \overline{\mathbb{R}}^{\mathbb{X}\times {\mathbb{R}}} & \longrightarrow & \overline{\mathbb{R}}^{\mathbb{X} \times {\mathbb{R}}} \\
& \varphi & \longmapsto & \mathcal{B}_t^{\Homogen{}} \bp{\varphi}: \np{x,y} \mapsto \min_{\substack{u \in \mathbb{U} \\v \in \mathbb{V}}} \Homogen{2}\bp{c_t^{v}}\bp{x,u,y}\\
& & & \hspace{3cm} + \varphi \np{ \Homogen{1} \bp{f_t^{v}}\np{x,u,y} } .
\label{bellman-homogeneise}
\end{array}
\end{equation}
The next proposition relates the solution of Problem~\ref{pb:linear-quad-switch} with non-homogeneous functions to the solution of the associated
homogenized problem.
\begin{proposition}
\label{HomogeneVSnonhomogene}
Let $\np{V_t}_{t\in \ce{0,T}}$ (resp. $\np{\widetilde{V}_t}_{t\in \ce{0,T}}$)
denote the solutions of the Dynamic Programming Equation~\eqref{DynamicProgramming}
system of equations associated with the operators $\np{\mathcal{B}_t}_{t\in \ce{0,T-1}}$ defined by Equation~\eqref{bellman-non-homogene}
(resp. $\np{\mathcal{B}^{\Homogen{}}_t}_{t\in \ce{0,T-1}}$ defined by Equation~\eqref{bellman-homogeneise}) and final cost $\psi$
(resp. $\Homogen{2}\bp{\psi}$)). Then, for every $x\in \mathbb{X}$ and $t\in \ce{0,T}$ , we have that
\(
V_t\np{x} = \widetilde{V}_t\np{x,1} \).
\end{proposition}
\begin{proof}
First, it is easy to prove by backward recursion on time $t\in \ce{0,T}$, that the mappings $\widetilde{V}_t$ for every $t\in \ce{0,T}$, are $2$-homogeneous.
Second, we will show by backward recursion on time that, for every $t\in \ce{0,T}$,
\begin{equation}
\label{eqtmp:1}
\widetilde{V}_t =\Homogen{2}\bp{V_t}.
\end{equation}
Then, the result will follow by evaluating Equation~\eqref{eqtmp:1} at $y=1$.
At the final time $t=T$, we have that
\[
\widetilde{V}_T := \Homogen{2}\bp{\psi} = \Homogen{2}\bp{V_T}.
\]
Now, assume that for some $t\in \ce{0,T-1}$, we have that $\widetilde{V}_{t+1} = \Homogen{2}\bp{V_{t+1}}$, for $(x,y)\in \mathbb{X}\times \mathbb{R}$ we successively obtain that
\begin{align*}
\widetilde{V}_t\np{x,y}
& = \mathcal{B}_t^{\Homogen{}}\np{\widetilde{V}_{t+1}}\np{x,y} \\
& = \min_{\substack{u \in \mathbb{U} \\v \in \mathbb{V}}}\Homogen{2}\bp{c_t^{v}}\np{x,u,y} + \widetilde{V}_{t+1}\bp{\Homogen{1}\bp{f_t^{v}}\np{x,u,y}}
\tag{$2$-homogeneity of $\widetilde{V}_{t+1}$} \\
& = \min_{\substack{u \in \mathbb{U} \\v \in \mathbb{V}}}\Homogen{2}\bp{c_t^{v}}\np{x,u,y} +\Homogen{2}\bp{V_{t+1}} \bp{y f_t^{v}\np{\frac{x}{y}, \frac{u}{y}}, y} \tag{Induction hyp. and def.}\\
& = \min_{\substack{u \in \mathbb{U} \\v \in \mathbb{V}}} y^2 c_t^{v}\np{\frac{x}{y}, \frac{u}{y}} +y^2 V_{t+1} \np{f_t^{v}\np{\frac{x}{y}, \frac{u}{y}}}
\tag{by Equation~\eqref{h2def}} \\
& = y^2 \min_{\substack{u' \in \mathbb{U} \\v \in \mathbb{V}}} c_t^{v}\np{\frac{x}{y}, u'} + V_{t+1}\np{f_t^{v}\np{\frac{x}{y},u'}} \tag{$u'= u/y$} \\
& =y^2 \mathcal{B}_t\np{V_{t+1}}\np{\frac{x}{y}} \\
& =\Homogen{2}\bp{V_t}\np{x,y} \tag{by Equation~\eqref{h2def}}\; .
\end{align*}
This ends the proof.
\end{proof}
Lastly, we briefly explain how to get rid of possible mixed terms in $u$ and $x$ in the cost functions after homogenization. That is, there is no loss of generality in considering the case studied in \S\ref{homogeneous_case}, \emph{i.e.} pure quadratic costs and linear dynamics, rather than cost functions which are positive polynomials of degree $2$ together with affine dynamics. From Proposition~\ref{HomogeneVSnonhomogene}, we have seen that one can consider the case where the cost functions are $2$-homogeneous with linear dynamics. Assume (for the sake of simplicity, we omit the discrete control $v$ here) that the cost function $c_t$ is of the form
\[
c_t\np{x,u} := x^T P_1 x + x^T P_2 u + u^T P_3 u,
\]
where $P_1$ and $P_3$ are symmetric positive semidefinite matrices of coherent dimensions and $P_2$ is a matrix of coherent dimensions, with $P_3$ being positive definite. Moreover, fix a $2$-homogeneous convex quadratic form $\psi$ and assume the dynamics $f_t$ to be linear, of the form
\[
f_t\np{x,u} := A x + Bu.
\]
Setting $Q_1 := P_1 - \frac{1}{4}P_2P_3^{-1}P_2^T$, $Q_2 := P_3$ and $L := -\frac{1}{2} P_3^{-1} P_2^{T}$, one has that the cost function $\np{x,u} \mapsto c_t'\np{x,u} := x^T Q_1 x + u^T Q_2 u$ is a quadratic function without mixed term and $\np{x,u} \mapsto f_t'\np{x,u} := \np{A + BL}x + Bu$ is linear. Furthermore, by straightforward computations, one can check that $c_t$ and $f_t$ satisfy:
\begin{equation}
\label{eq:couts-dyn-sans-croises}
c_t\np{x,u + Lx} = c_t'(x,u) \quad \text{and} \quad f_t\np{x,u+Lx} = f_t'(x,u).
\end{equation}
Note that as $Q_2 = P_3$, the matrix $Q_2$ is symmetric definite positive and as
$c_t$ is positive and by Equation~\eqref{eq:couts-dyn-sans-croises} for every $x\in \mathbb{X}$
and $u \in \mathbb{U}$
\[
x^T Q_1 x = c_t'\np{x,0}=c_t\np{x,0 + Lx} \geq 0,
\]
then $Q_1$ is symmetric semidefinite positive. Thus the quadratic function $c_t'$ is convex and a pure quadratic form in the sense of Definition~\ref{purequadform}.
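A quick numerical verification of Equation~\eqref{eq:couts-dyn-sans-croises} (our own sketch, with arbitrary data) reads as follows.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 2
P1 = (lambda G: G @ G.T + np.eye(n))(rng.standard_normal((n, n)))
P3 = (lambda H: H @ H.T + np.eye(m))(rng.standard_normal((m, m)))
P2 = 0.1 * rng.standard_normal((n, m))
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, m))

L  = -0.5 * np.linalg.inv(P3) @ P2.T
Q1 = P1 - 0.25 * P2 @ np.linalg.inv(P3) @ P2.T

c  = lambda x, u: x @ P1 @ x + x @ P2 @ u + u @ P3 @ u
cp = lambda x, u: x @ Q1 @ x + u @ P3 @ u        # Q2 = P3
f  = lambda x, u: A @ x + B @ u
fp = lambda x, u: (A + B @ L) @ x + B @ u

x, u = rng.standard_normal(n), rng.standard_normal(m)
assert np.isclose(c(x, u + L @ x), cp(x, u))
assert np.allclose(f(x, u + L @ x), fp(x, u))
\end{verbatim}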
Consider the Bellman operator associated with the costs $c_t'$ and dynamics $f_t'$:
\begin{equation}
\begin{array}{ccll}
\mathcal{B}_t': & \overline{\mathbb{R}}^{\mathbb{X}} & \longrightarrow & \overline{\mathbb{R}}^{\mathbb{X}} \\
& \phi & \longmapsto & \mathcal{B}_t'\bp{\phi}: x \mapsto
\min_{u \in \mathbb{U}} c_t' \np{x,u} + \phi \np{ f_t'\np{x,u} }
\; ,
\end{array}
\label{bellman-sans-terme-mixte}
\end{equation}
Thus, for any function $\phi \in \overline{\mathbb{R}}^{\mathbb{X}}$ and every $x \in \mathbb{X}$, recall that $\mathbb{U} = \mathbb{R}^m$ is unconstrained, so we have that
\begin{align}
\mathcal{B}_t\np{\phi}\np{x} & = \min_{u \in \mathbb{U}} c_t(x,u) + \phi\np{f_t\np{x,u}} \notag \\
& = \min_{u \in \mathbb{U}} c_t\np{x, u + Lx} + \phi\np{f_t\np{x,u+Lx}} \notag \\
& = \min_{u \in \mathbb{U}} c_t'(x,u) + \phi\np{f_t'\np{x,u}} \notag \\
\mathcal{B}_t\np{\phi}\np{x} & = \mathcal{B}_t'\np{\phi}\np{x} \label{supersupersuper}.
\end{align}
From Equation~\eqref{supersupersuper}, one can deduce by backward recursion (as done in
Proposition~\ref{HomogeneVSnonhomogene}) on the time step $t\in \ce{0,T}$, that the value
functions $V_t$ (resp. $V_t'$) of the Dynamic Programming problem with Bellman
operators $\mathcal{B}_t$ (resp. $\mathcal{B}_t'$) and final cost function $\psi$ (resp. $\psi$ as
well) satisfy \( V_t = V_t'\) \; .
\end{document}
\begin{document}
\markboth{P. Barone}
{Universality Pad\'{e} noise}
\title{On the universality of the distribution of the generalized eigenvalues of a pencil of Hankel random matrices}
\author{Piero Barone}
\address{ Istituto per le Applicazioni del Calcolo ''M. Picone'',
C.N.R.,\\
Via dei Taurini 19, 00185 Rome, Italy \\
[email protected], [email protected]}
\maketitle
\section*{Abstract}
Universality properties of the distribution of the generalized eigenvalues of a pencil of random Hankel matrices, arising in the solution of the exponential interpolation problem of a complex discrete stationary process, are proved under the assumption that every finite set of random variables of the process has a multivariate spherical distribution. An integral representation of the condensed density of the generalized eigenvalues is also derived. The asymptotic behavior of this function turns out to depend only on stationarity and not on the specific distribution of the process.
{\it Key words}: complex moments; Pad\'{e} approximants; random polynomials.
{\it MSC 2000}: 15B52, 60B20, 62E15
\section*{Introduction}
Let us consider the following moment problem: to compute
the complex measure defined on a compact set
$D\subset\mathbb{C}$ by
$$S(z)=\sum_{j=1}^p \gamma_j\delta(z-\zeta_j),\;\;\zeta_j\in \mbox{int}(
D), \;\;\zeta_j\ne\zeta_h \; \forall j\ne h,\;\;\gamma_j\in\mathbb{C}$$ from its complex moments
$$s_k=\int_Dz^kS(z)dz=\int\!\!\!\!\!\int_{D}(x+iy)^k S(x+iy)dx dy,
\;\;k=0,\dots,n-1,\,\;n=2p$$ It turns out that
\begin{eqnarray}s_k=\sum_{j=1}^p \gamma_j\zeta_j^k.\label{modale}\end{eqnarray}
The problem is thus equivalent to solving the complex exponential interpolation problem for the data $s_k,\;k=0,\dots,n-1$. It is well known that the $\zeta_j,\;j=1,\dots,p$ are the generalized eigenvalues of the Hankel pencil $P=[U_1({\underline s}),U_0({\underline s})]$ where
$$U_0({\underline s})=U(s_0,\dots,s_{n-2}),\;\;\;\;U_1({\underline s})=U(s_1,\dots,s_{n-1})$$
and $$U(s_0,\dots,s_{n-2})=\left[\begin{array}{llll}
s_0 & s_{1} &\dots &s_{p-1} \\
s_{1} & s_{2} &\dots &s_{p} \\
\vdots & \vdots &\ddots &\vdots \\
s_{p-1} & s_{p} &\dots &s_{n-2}
\end{array}\right].$$
Moreover $\gamma_j$ are related to the generalized
eigenvector ${\underline u}_j$ of $P$ by $\gamma_j={\underline u}_j^T[s_0,\dots,s_{p-1}]^T$.
\noindent Equivalently, $\zeta_j,\;j=1,\dots,p$ are the roots of the polynomial in
the variable $z$
$$\det[U_1({\underline s})-zU_0({\underline s})]$$ which is the denominator of the Pad\'{e} approximant $[p-1,p](z)$ to the $Z$-transform of $\{s_k\}$ defined by $$f(z)=\sum_{k=0}^\infty s_kz^{-k}.$$
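For illustration (our own sketch, not part of the original text), the $\zeta_j$ can be computed numerically from the moments as the generalized eigenvalues of the pencil $P$, e.g. with NumPy/SciPy:
\begin{verbatim}
import numpy as np
from scipy.linalg import eig, hankel

def pade_poles(s):
    # Generalized eigenvalues of the Hankel pencil [U1, U0] built
    # from the n = 2p moments s_0, ..., s_{n-1}.
    n = len(s); p = n // 2
    U0 = hankel(s[:p], s[p - 1:n - 1])   # entries s_0, ..., s_{n-2}
    U1 = hankel(s[1:p + 1], s[p:n])      # entries s_1, ..., s_{n-1}
    return eig(U1, U0, right=False)

# Two exponential modes are recovered exactly from their 4 moments.
zeta  = np.array([0.9 * np.exp(0.5j), 0.7 * np.exp(-1.2j)])
gamma = np.array([1.0 + 0.5j, 2.0 - 1.0j])
s = np.array([np.sum(gamma * zeta**k) for k in range(4)])
print(np.sort_complex(pade_poles(s)), np.sort_complex(zeta))
\end{verbatim}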
\noindent Denoting random variables by bold characters, let us assume now that we know an even number $n\ge 2p$ of noisy complex moments
$${\bf a}_k=s_k+\mbox{\boldmath $\nu$}_k,\quad k=0,1,2,\dots,n-1
$$ where $\mbox{\boldmath $\nu$}_k$ is a stationary discrete process, and we want to estimate $S(z)$ - i.e. $p,\;(\gamma_j,\zeta_j),j=1,\dots,p$ - from $\{{\bf
a}_k\}_{k=0,\dots,n-1}$. This is a well known difficult ill-posed problem which
is central in many disciplines and appears in the
literature in different forms and contexts (see e.g.
\cite{dsp,donoho,gmv,osb,scharf,vpb}). All the quantities defined above become random. It is therefore relevant to study the distribution of these quantities under suitable hypotheses on the noise affecting the data.
In this work the case $s_k=0,\;\forall k$ will be considered
assuming that the noise is represented by a discrete stationary process, white or colored. For example, the distribution of every finite set of r.v. of the process can be multivariate $\alpha$-stable, which is a class of distributions closed with respect to addition (up to scale and location parameters), a property consistent with the naive concept of noise. Most of the properties proved in the following will require that the distribution of every finite set of r.v. of the process is spherical and that the density function exists (\cite[Sec.1.5]{muir},\cite[Sec.2]{fang}). We recall that
under this assumption white noise is necessarily Gaussian. Therefore, e.g., $\alpha$-stable, non-Gaussian $(\alpha\ne 2)$, spherical processes are colored.
Motivations to consider the pure noise case are twofold. The problem of identifying the presence of a signal in a large noise environment arises in many applied contexts. The distribution of the generalized eigenvalues $\mbox{\boldmath $\zeta$}_j$
of the random pencil
$${\bf
P}=[U({\bf a}_1,\dots,{\bf a}_{n-1}),U({\bf a}_0,\dots,{\bf
a}_{n-2})]$$ in the pure noise case is a reference to detect the presence of a signal. More generally, when solving the noise filtering problem, information on the generalized spectral properties of the noise is required. In the specific class of complex exponential models a generalized spectrum is given by the condensed density (\cite{barjma}) of the generalized eigenvalues $\mbox{\boldmath $\zeta$}_j$
of the random pencil ${\bf P}$ or, equivalently, of the poles
of the $[p-1,p](z)$ random Pad\'{e} approximant
to the $Z$-transform (formal random series)
$${\bf f}(z)=\sum_{k=0}^\infty {\bf a}_k z^{-k}.$$
In \cite{barja} it was proved that if $\{{\bf a}_k\}$ is a complex Gaussian white noise, expressing the condensed density in polar coordinates, the radial density weakly converges to a Dirac distribution centered in $1$ and the phase density is uniform in $(0,2\pi]$. Moreover it was proved that for $n=2$ the condensed density is the uniform measure on the Riemann sphere
$$h_2(z)= \frac{1}{\pi(1+|z|^2)^2}.$$
In \cite{bess} it was conjectured, on the basis of numerical experiments, that the condensed density has an universal behavior i.e., in polar coordinates, the radial density is Lorentzian, centered in $1$ with width dependent on $n$ and the phase density is uniform in $(0,2\pi]$.
In the following we prove that this conjecture is true under the hypothesis that the noise has a spherical distribution, except that the radial density is not Lorentzian. More specifically, assuming only the stationarity of the noise process, it is proved that the condensed density is asymptotically concentrated on the unit circle, independently of the noise distribution. Moreover, it is proved that if the noise distribution is spherical
then, for every $n$, the condensed density is independent of the specific spherical distribution, is invariant under scaling and, when $n=2$, is given by the uniform measure on the Riemann sphere. Furthermore, in polar coordinates $(\rho,\theta)$, the marginal condensed density w.r.t. $\rho$ (radial density) does not depend on $\theta$ and the marginal condensed density w.r.t. $\theta$ is uniform.
Finally, an integral representation of the condensed density is provided in the spherical case, which shows that the radial density is not Lorentzian.
The paper is organized in two sections. In the first one the properties of the condensed density are studied. In the second one three numerical experiments confirming the derived properties are illustrated.
\section{The condensed density of the Pad\'{e} poles}
Let us consider the transformation
$T=(T^{(1)},T^{(2)})$ that maps every
realization ${\underline a}(\omega)$ of ${\bf {\underline a}}=\{{\bf a}_k,k=0,\dots,n-1\}$ to $({\underline{\zeta}}(\omega), {\underline{\gamma}}(\omega))$
given by $
a_k(\omega)=\sum_{j=1}^{n/2}\gamma_j(\omega)\zeta_j(\omega)^{k},\;\;k=0,\dots,n-1,$
where $\omega\in\Omega$ and $\Omega$ is the space of events.
It was proved in \cite[Lem.2]{barja2} that $T({\bf {\underline a}})$ is defined and one-to-one a.e. By noticing that the $\mbox{\boldmath $\zeta$}_j$ are given by $T^{(1)}({\bf {\underline a}})$,
we define the condensed density as
$$h_n(z)=\frac{2}{n}E\left[\sum_{j=1}^{n/2}\delta(z-{\mbox{\boldmath $\zeta$}}_j)\right]$$
where the expectation is with respect to the density of ${\bf {\underline a}}$.
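In practice the condensed density can be estimated by Monte Carlo: draw many realizations of the noise, compute the generalized eigenvalues of the corresponding Hankel pencils and histogram them. The following sketch (ours, using complex Gaussian white noise as a particular spherical case) estimates the radial marginal of $h_n$ for $n=4$.
\begin{verbatim}
import numpy as np
from scipy.linalg import eig, hankel

def pole_moduli(n, n_draws, rng):
    p = n // 2
    out = []
    for _ in range(n_draws):
        a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        U0 = hankel(a[:p], a[p - 1:n - 1])
        U1 = hankel(a[1:p + 1], a[p:n])
        out.append(np.abs(eig(U1, U0, right=False)))
    return np.concatenate(out)

rng = np.random.default_rng(0)
rho = pole_moduli(n=4, n_draws=10_000, rng=rng)
# A histogram of rho estimates the radial marginal of h_4.
\end{verbatim}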
\begin{teo}
If $\{{\bf a}_k\}$ is stationary then, in the sense of weak convergence of measures,
$$\lim_{n \rightarrow\infty} h_n(z)=\frac{1}{2\pi}\,\delta(|z|-1),$$
i.e. $h_n$ converges to the uniform measure on the unit circle.
\end{teo}
\noindent\underline{proof.}
Let us consider the periodic (circular) process $\{\tilde{{\bf a}}_k\}$ obtained by repeating a finite segment of length $p$ of the process $\{{\bf a}_k\}$. Then we have
$${\bf f}(z)=\sum_{k=0}^\infty \tilde{{\bf a}}_k z^{-k}= \sum_{h=0}^{p-1}\tilde{{\bf a}}_h\left(\sum_{k=0}^\infty z^{-h-k p}\right)$$
If $|z|>1$ we get
$${\bf f}(z)=\sum_{h=0}^{p-1}\tilde{{\bf a}}_h z^{-h}\frac{z^p}{z^p-1}$$
hence ${\bf f}(z)$ is a random rational function for every period $p$ and its poles are the roots of unity. Therefore the Pad\'{e} denominator is a deterministic polynomial and the poles' condensed density is the counting measure on its roots.
In the limit for $p\rightarrow \infty$ the periodic process $\{\tilde{{\bf a}}_k\}$ converges in $L^2$-mean to $\{{\bf a}_k\}$ (\cite[Sec. 6]{kaw}) and the counting measure on the roots of unity tends to the uniform measure on the unit circle.
$\Box$
\begin{teo}
If $n=2$ and $\tilde{\bf{\underline a}}=\{\Re[{\bf a}_0],\Im[{\bf a}_0],\Re[{\bf a}_1],\Im[{\bf a}_1]\}$ is spherically distributed with a density, then the poles' condensed density is the uniform measure on the Riemann sphere $$h_2(z)=\frac{1}{\pi(1+|z|^2)^2}.$$
\end{teo}
\noindent\underline{proof}.
If an $n$-dimensional random vector $\tilde{\bf{\underline a}}$ is spherically distributed its joint density is (\cite[th.2.9]{fang}) $$f(\|\tilde{\underline a}\|^2)=\frac{\Gamma(n/2)}{2 \pi^{n/2}}\|\tilde{\underline a}\|^{1-n}g(\|\tilde{\underline a}\|)$$ where $g(\cdot)$ is the density of $\|\tilde{\bf{\underline a}}\|$ and $\|\cdot\|$ denotes the euclidean norm. In the case considered $n=4$.
By making the change of variables $T$ we get ${\mbox{\boldmath $\gamma$}}={\bf a}_0,\;\;{\mbox{\boldmath $\zeta$}}=\frac{{\bf a}_1}{{\bf a}_0}$ and
$$ \|\tilde{\bf{\underline a}}\|^2= |{\mbox{\boldmath $\gamma$}}|^2(1+|{\mbox{\boldmath $\zeta$}}|^2).$$
Noticing that the complex Jacobian of $T$ is $\gamma$, the marginal on ${\mbox{\boldmath $\zeta$}}$ is then
$$h(\zeta)=\int_{\mathbb{C}}\gamma f(T({\underline x}))d\gamma=\frac{1}{2 \pi^2}\int_{\mathbb{C} } \frac{\gamma}{|\gamma|^3(1+|\zeta|^2)^{3/2}}g\left(|\gamma|\sqrt{1+|\zeta|^2}\right)d\gamma.$$
By the change of variables $\tilde{\gamma}=\gamma\sqrt{1+|\zeta|^2}$ and expressing the integral in real coordinates with real Jacobian $|\gamma|^2$ we get
$$h(\zeta)=\frac{1}{2 \pi^2}\int_{\mathbb{R}^2 }\frac{|\tilde{\gamma}|^2}{(1+|\zeta|^2)}\frac{1}{|\tilde{\gamma}|^3}g(|\tilde{\gamma}|)\frac{1}
{1+|\zeta|^2}d\Re{\tilde{\gamma}}\,d\Im{\tilde{\gamma}}=$$
$$\frac{1}{2 \pi^2(1+|\zeta|^2)^2}\int_{\mathbb{R}^2 }\frac{1}{|\tilde{\gamma}|}g(|\tilde{\gamma}|)\,d\Re{\tilde{\gamma}}\,d\Im{\tilde{\gamma}}.
$$
$$
But $$\frac{1}{2\pi}\int_{\mathbb{R}^2 }\frac{1}{|\tilde{\gamma}|}g(|\tilde{\gamma}|)\,d\Re{\tilde{\gamma}}\,d\Im{\tilde{\gamma}}=1$$ because the marginals of a spherical density are spherical and we apply the formula above with $n=2$. $\;\;\;\Box$
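This can be checked empirically: for $n=2$ the single pole is ${\bf a}_1/{\bf a}_0$, and under $h_2$ one has $P(|\mbox{\boldmath $\zeta$}|\le r)=r^2/(1+r^2)$. A short simulation (ours, with Gaussian noise as a particular spherical case) confirms this.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
N = 200_000
a0 = rng.standard_normal(N) + 1j * rng.standard_normal(N)
a1 = rng.standard_normal(N) + 1j * rng.standard_normal(N)
zeta = a1 / a0
# Compare empirical P(|zeta| <= r) with r^2 / (1 + r^2).
for r in (0.5, 1.0, 2.0):
    print(r, np.mean(np.abs(zeta) <= r), r**2 / (1 + r**2))
\end{verbatim}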
\begin{teo}
If $n> 2$ and $\tilde{\bf{\underline a}}=\{\Re[{\bf {\underline a}}],\Im[{\bf {\underline a}}]\}$ is $2n$-variate spherically distributed with a density, then the poles' condensed density is the same independently of the specific distribution of $\tilde{\bf{\underline a}}$. Moreover, in polar coordinates $(\rho,\theta)$, the marginal condensed density w.r.t. $\rho$ does not depend on $\theta$ and the marginal condensed density w.r.t. $\theta$ is uniform. Finally, the condensed density is invariant under scaling of $\tilde{\bf{\underline a}}$.
\end{teo}
\noindent\underline{proof}.
We first notice that the density of ${\bf {\underline a}}$ is
\begin{eqnarray} f(\|{\underline a}\|^2)=\frac{\Gamma(n)}{2 \pi^{n}}\|{\underline a}\|^{1-2 n}g(\|{\underline a}\|)\label{dens}\end{eqnarray} where $g(\cdot)$ is the density of $\|{\bf {\underline a}}\|$ because
$\|\tilde{\bf{\underline a}}\|=\|{\bf {\underline a}}\|$.
We have \begin{eqnarray*}
h_n(z)&=&\frac{2}{n}E\left[\sum_{j=1}^{n/2}\delta(z-{\mbox{\boldmath $\zeta$}}_j)\right]\\&=&
\frac{\Gamma(n)}{2 \pi^{n}}\frac{2}{n}\sum_{j=1}^{n/2}\displaystyle
\int_{\mathbb{C}^n}\delta(z-\zeta_j)\|{\underline a}\|^{1-2 n}g(\|{\underline a}
\|)d{\underline a}
\label{dc1}\end{eqnarray*}
and, by making the change of variables $T$, whose complex Jacobian is (\cite[Th.2]{barja2})
$$ J_C({\underline{\zeta}},{\underline{\gamma}})=
(-1)^{n/2}\prod_{j=1}^{n/2}\gamma_j
\prod_{j<h}(\zeta_j-\zeta_h)^4, $$
we get
\begin{eqnarray*}
h_n(z)&=&
\frac{2}{n}\sum_{j=1}^{n/2}\frac{\Gamma(n)}{2\pi^{n}}\displaystyle
\int_{\mathbb{C}^{n/2}}\int_{\mathbb{C}^{n/2}}\delta(z-\zeta_j)J_C({\underline{\zeta}},{\underline{\gamma}})\left(\sum_{k=0}^{n-1}\left|
\sum_{h=1}^{n/2}\gamma_h\zeta_h^{k}\right|^2\right)^{1/2- n}\cdot \\ &&g\left(\left(\sum_{k=0}^{n-1}\left|
\sum_{h=1}^{n/2}\gamma_h\zeta_h^{k}\right|^2\right)^{1/2}\right)d{\underline{\zeta}}
d{\underline{\gamma}}\\
&=&
\frac{2}{n}\sum_{j=1}^{n/2}\frac{\Gamma(n)}{2\pi^{n}}
\int_{\mathbb{C}^{n/2-1}}\int_{\mathbb{C}^{n/2}}
J_C^*({\underline{\zeta}}^{(j)},z,{\underline{\gamma}})
\left(\sum_{k=0}^{n-1}\left|
\sum_{h\ne j}^{1,n/2} \gamma_h\zeta_h^{k}+\gamma_j
z^{k}\right|^2\right)^{1/2- n}\cdot \\ &&g\left(\left(\sum_{k=0}^{n-1}\left|
\sum_{h\ne j}^{1,n/2} \gamma_h\zeta_h^{k}+\gamma_j
z^{k}\right|^2\right)^{1/2}\right)
d{\underline{\zeta}}^{(j)}d{\underline{\gamma}}
\label{dc2}\end{eqnarray*}
where ${\underline{\zeta}}^{(j)}=\{\zeta_h,h\ne j\}$
and
$$J_C^*({\underline{\zeta}}^{(j)},z,{\underline{\gamma}})=
(-1)^{n/2}\prod_{ h=1}^{1,n/2}\gamma_h\prod_{r<h,(r,h)\ne
j}(\zeta_r-\zeta_h)^4\prod_{r\ne j}(\zeta_r-z)^4 .$$
Let us define
$$Q_j=Q_j({\underline{\zeta}}^{(j)},z)=X_jX^H_j\in\mathbb{C}^{n/2\times n/2},\;\;X_j\in\mathbb{C}^{n/2\times n},\;\;X_j(h,k)=\overline{x}_{hk}^{(j)}$$ where
\begin{eqnarray}x_{hk}^{(j)}=\left\{\begin{array}{ll}
\zeta_h^{k-1}, & h\ne j\\
z^{k-1}, & h=j
\end{array}\right. .\label{matx}\end{eqnarray}
Then
$$\sum_{k=0}^{n-1}\left|
\sum_{h\ne j}^{1,n/2} \gamma_h\zeta_h^{k}+\gamma_j
z^{k}\right|^2={\underline{\gamma}}^HQ_j{\underline{\gamma}}$$
and
\begin{equation}
h_n(z)=\frac{2}{n}\sum_{j=1}^{n/2}h^{(j)}_n(z)\label{dc}\end{equation}
where
\begin{eqnarray} h^{(j)}_n(z)=
\int_{I \! \! \! \! {C}^{n/2-1}}\prod_{r<h,(r,h)\ne
j}(\zeta_r-\zeta_h)^4\prod_{r\ne j}(\zeta_r-z)^4G_j({\underline{\zeta}}^{(j)},z) d{\underline{\zeta}}^{(j)}
\label{condh}\end{eqnarray} and
$$G_j({\underline{\zeta}}^{(j)},z)=\frac{\Gamma(n)}{2 \pi^{n}}\int_{\mathbb{C}^{n/2}}
(-1)^{n/2}\left(\prod_{ h=1}^{1,n/2}\gamma_h\right)
\left({\underline{\gamma}}^HQ_j{\underline{\gamma}}\right)^{1/2- n}g\left(\left({\underline{\gamma}}^HQ_j{\underline{\gamma}}\right)^{1/2}\right)
d{\underline{\gamma}}.$$
We show now that $G_j({\underline{\zeta}}^{(j)},z)$ is independent of the specific form of the spherical density generator $f(\cdot)$.
If $\tilde{\underline{\gamma}}=\{\Re[{\underline{\gamma}}],\Im[{\underline{\gamma}}]\}\in \mathbb{R}^{n}$ and if $\tilde{Q}_j$ is the real isomorph of $Q_j$, i.e.
$$\tilde{Q}_j=\left[\begin{array}{ll}
\Re[Q_j], & -\Im[Q_j]\\
\Im[Q_j], & \;\;\;\Re[Q_j]
\end{array}\right],$$
we have
${\underline{\gamma}}^H Q_j{\underline{\gamma}}=\tilde{\underline{\gamma}}^T\tilde{Q}_j\tilde{\underline{\gamma}}$. But then, by noticing that the real Jacobian of the transformation $T$ is $J_R=|J_C|^2$,
we get
$$G_j({\underline{\zeta}}^{(j)},z)=\frac{\Gamma(n)}{2 \pi^{n}}\int_{\mathbb{R}^n}
\left(\prod_{ h=1}^{1,n/2}\tilde{\underline{\gamma}}^T A_h\tilde{\underline{\gamma}}\right)
\left(\tilde{\underline{\gamma}}^T\tilde{Q}_j\tilde{\underline{\gamma}}\right)^{1/2- n}g\left(\left(\tilde{\underline{\gamma}}^T\tilde{Q}_j\tilde{\underline{\gamma}}\right)^{1/2}\right)
d\tilde{\underline{\gamma}}$$
where $A_h=I_2\otimes{\underline e}_h{\underline e}_h^T$ and ${\underline e}_h$ is the $h$-th column of the identity matrix of order $n/2$.
Let us consider the transformation
$$\hat{\underline{\gamma}}=\tilde{Q}_j^{1/2}\tilde{\underline{\gamma}}$$
which is well defined because $\tilde{Q}_j>0$ and whose Jacobian is $|\tilde{Q}_j|^{1/2}$. We then have
$$G_j({\underline{\zeta}}^{(j)},z)=\frac{\Gamma(n)}{2 \pi^{n}|\tilde{Q}_j|^{1/2}}\int_{\mathbb{R}^n}
\left(\prod_{ h=1}^{1,n/2}\hat{\underline{\gamma}}^T\tilde{Q}_j^{-1/2} A_h\tilde{Q}_j^{-1/2}\hat{\underline{\gamma}}\right)
(\hat{\underline{\gamma}}^T\hat{\underline{\gamma}})^{1/2- n}g\left(\left(\hat{\underline{\gamma}}^T\hat{\underline{\gamma}}\right)^{1/2}\right)
d\hat{\underline{\gamma}}=$$
$$\frac{\Gamma(n)}{\Gamma(n/2) \pi^{n/2}|\tilde{Q}_j|^{1/2}}\frac{\Gamma(n/2)}{2 \pi^{n/2}}\int_{\mathbb{R}^n}
\left(\prod_{ h=1}^{1,n/2}\frac{\hat{\underline{\gamma}}^T\tilde{Q}_j^{-1/2} A_h\tilde{Q}_j^{-1/2}\hat{\underline{\gamma}}}{\hat{\underline{\gamma}}^T\hat{\underline{\gamma}}}\right)
\left(\sqrt{\hat{\underline{\gamma}}^T\hat{\underline{\gamma}}}\right)^{1- n}g\left(\left(\hat{\underline{\gamma}}^T\hat{\underline{\gamma}}\right)^{1/2}\right)
d\hat{\underline{\gamma}}.$$
But then
$$\frac{\Gamma(n/2)}{2 \pi^{n/2}}\int_{\mathbb{R}^n}
\left(\prod_{ h=1}^{1,n/2}\frac{\hat{\underline{\gamma}}^T\tilde{Q}_j^{-1/2} A_h\tilde{Q}_j^{-1/2}\hat{\underline{\gamma}}}{\hat{\underline{\gamma}}^T\hat{\underline{\gamma}}}\right)
\left(\sqrt{\hat{\underline{\gamma}}^T\hat{\underline{\gamma}}}\right)^{1- n}g\left(\left(\hat{\underline{\gamma}}^T\hat{\underline{\gamma}}\right)^{1/2}\right)
d\hat{\underline{\gamma}}$$ can be seen as the expectation of the function $\left(\prod_{ h=1}^{1,n/2}\frac{\hat{\underline{\gamma}}^T\tilde{Q}_j^{-1/2} A_h\tilde{Q}_j^{-1/2}\hat{\underline{\gamma}}}{\hat{\underline{\gamma}}^T\hat{\underline{\gamma}}}\right)$ of an $n$-dimensional spherically distributed r.v.
${\mbox{\boldmath $\gamma$}}$. Therefore
$$G_j({\underline{\zeta}}^{(j)},z)=\frac{\Gamma(n)}{\Gamma(n/2) \pi^{n/2}|\tilde{Q}_j|^{1/2}}E\left[\prod_{ h=1}^{1,n/2}\left(\frac{{\mbox{\boldmath $\gamma$}}^T\tilde{Q}_j^{-1/2} A_h\tilde{Q}_j^{-1/2}{\mbox{\boldmath $\gamma$}}}{{\mbox{\boldmath $\gamma$}}^T{\mbox{\boldmath $\gamma$}}}\right)\right].$$
Let us define the matrix
$$B_{jh}=\frac{\tilde{Q}_j^{-1/2} A_h\tilde{Q}_j^{-1/2}}{\Re[Q_j^{-1}]_{hh}}$$
and prove that $B_{jh}$ is idempotent. In fact
$$B_{jh}^2=\frac{\tilde{Q}_j^{-1/2} A_h\tilde{Q}_j^{-1/2}}{\Re[Q_j^{-1}]_{hh}}\cdot \frac{\tilde{Q}_j^{-1/2} A_h\tilde{Q}_j^{-1/2}}{\Re[Q_j^{-1}]_{hh}}=\frac{\tilde{Q}_j^{-1/2} A_h\tilde{Q}_j^{-1}A_h\tilde{Q}_j^{-1/2}}{\Re[Q_j^{-1}]_{hh}^2}=$$ $$
\frac{\tilde{Q}_j^{-1/2}(I_2\otimes{\underline e}_h{\underline e}_h^T)\tilde{Q}_j^{-1}(I_2\otimes{\underline e}_h{\underline e}_h^T)\tilde{Q}_j^{-1/2}}{\Re[Q_j^{-1}]_{hh}^{2}}.$$
But
\begin{eqnarray*}(I_2\otimes{\underline e}_h{\underline e}_h^T)\tilde{Q}_j^{-1}(I_2\otimes{\underline e}_h{\underline e}_h^T)&=&\left[\begin{array}{ll}
{\underline e}_h{\underline e}_h^T, & 0\\
0, & {\underline e}_h{\underline e}_h^T
\end{array}\right]
\left[\begin{array}{ll}
\Re[Q_j^{-1}], & -\Im[Q_j^{-1}]\\
\Im[Q_j^{-1}], & \;\;\;\Re[Q_j^{-1}]
\end{array}\right]\left[\begin{array}{ll}
{\underline e}_h{\underline e}_h^T, & 0\\
0, & {\underline e}_h{\underline e}_h^T
\end{array}\right]\\&=& \Re[Q_j^{-1}]_{hh}A_h\end{eqnarray*} because the diagonal elements of $\Im[Q_j^{-1}]$ are zero.
Hence
$$B_{jh}^2=\tilde{Q}_j^{-1/2}\frac{\Re[Q_j^{-1}]_{hh}}{(\Re[Q_j^{-1}]_{hh})^{2}}A_h\tilde{Q}_j^{-1/2}=B_{jh}.$$
We then have
\begin{eqnarray}G_j({\underline{\zeta}}^{(j)},z)=\frac{\Gamma(n)}{\Gamma(n/2) \pi^{n/2}|\tilde{Q}_j|^{1/2}}\prod_{ h=1}^{1,n/2}[Q_j^{-1}]_{hh}E\left[\prod_{ h=1}^{1,n/2}\left(\frac{{\mbox{\boldmath $\gamma$}}^TB_{jh}{\mbox{\boldmath $\gamma$}}}{{\mbox{\boldmath $\gamma$}}^T{\mbox{\boldmath $\gamma$}}}\right)\right].\label{condG}\end{eqnarray}
From \cite[Th.1.5.7,ii]{muir} it follows that ${\bf w}_h=\left(\frac{{\mbox{\boldmath $\gamma$}}^T B_{jh}{\mbox{\boldmath $\gamma$}}}{{\mbox{\boldmath $\gamma$}}^T{\mbox{\boldmath $\gamma$}}}\right)$ has the beta distribution with parameters $1$ and $n/2-1$ independently of the distribution of ${\mbox{\boldmath $\gamma$}}$. As a consequence the distribution of $h_n(z)$ too does not depend on the distribution of ${\mbox{\boldmath $\gamma$}}$.
\noindent To prove the last part of the theorem, we notice that the spherical density (\ref{dens}) is invariant under the transformation
$${\underline a}\rightarrow e^{\pm i \frac{\beta}{2}}{\underline a},\;\;\forall \beta.$$
The proof of Theorem 2 in \cite{barja} then holds and provides the requested results. $\;\;\;\Box$
\begin{teo}
If $n> 2$ and $\tilde{\bf{\underline a}}=\{\Re[{\bf {\underline a}}],\Im[{\bf {\underline a}}]\}$ is $2n$-variate spherically distributed with a density, the poles' condensed density is given by
\begin{eqnarray*} h_n(z)=\frac{1}{(2\pi)^{n/2}}
\int_{\mathbb{R}^{n-2}}K_{G_n}(z,{\underline{\zeta}}^{(1)})\prod_{r<h,(r,h)\ne
1}|\zeta_r-\zeta_h|^{2}\prod_{r\ne 1}|\zeta_r-z|^{2}\cdot\\
\frac{\prod_{ h=1}^{1,n/2}\left(\sum_{P_2}\left|s_{P_2}(z,{\underline{\zeta}}^{(1,h)})\right|^2\right)}{ \left(\sum_{P_1}\left|s_{P_1}(z,{\underline{\zeta}}^{(1)})\right|^2\right)^{n/2+1}}d\Re({\underline{\zeta}}^{(1)})d\Im({\underline{\zeta}}^{(1)})\end{eqnarray*}
where
$$K_{G_n}(z,{\underline{\zeta}}^{(1)})=E\left[\prod_{ h=1}^{1,n/2}\left({\bf y}^TB_{1h}{\bf y}\right) \right]$$
and ${\bf y}$ is a $n$-variate standard Gaussian random vector;
\noindent $s_{P_1}(z,{\underline{\zeta}}^{(1)})$ are the Schur functions associated to the partition $$P_1=\{j_1 , j_2 , \dots , j_{n/2}\}$$ which spans the minors of maximal order of $X_1$ (eq. \ref{matx});
\noindent
$s_{P_2}(z,{\underline{\zeta}}^{(1,h)})$ are the Schur functions associated to the partition $$P_2=\{j_1 , j_2 , \dots , j_{n/2-1}\}$$ which spans the minors of maximal order of the matrix obtained by $X_1$ by canceling the $h-$th row.
\end{teo}
\noindent\underline{proof}.
From Theorem 3, without loss of generality, we can assume that ${\mbox{\boldmath $\gamma$}}$ has a zero-mean multivariate Gaussian density with identical covariance matrix. The ratio of quadratic forms ${\bf w}_h=\left(\frac{{\mbox{\boldmath $\gamma$}}^TB_{jh}{\mbox{\boldmath $\gamma$}}}{{\mbox{\boldmath $\gamma$}}^T{\mbox{\boldmath $\gamma$}}}\right)$ can be rewritten as ${\bf w}_h={\bf y}^TB_{jh}{\bf y}$ where ${\bf y}=\frac{{\mbox{\boldmath $\gamma$}}}{\|{\mbox{\boldmath $\gamma$}}\|}$. Therefore ${\bf w}_h$ is a quadratic form in the variables ${\bf y}$ which are uniformly distributed on the $n-$dimensional sphere. But this distribution is spherical with characteristic function
given by (\cite[3.1.1]{fang})
$$\Psi_n({\underline t})=\, _0F_1\left(\frac{n}{2};\frac{1}{4}\|{\underline t}\|^2\right).$$
In \cite[Prop.4]{kan} an explicit expression for
$K_{G_n}(z,{\underline{\zeta}}^{(j)})=E\left[\prod_{ h=1}^{1,n/2}{\bf w}_h\right]$ when ${\bf y}$ is multivariate Gaussian is given, and it is claimed that this is also the result in the spherical case up to a constant which is a function of the characteristic function of the spherical distribution. It turns out that in the present case the constant is given by
$$\beta_n=\frac{n^{n/2-1}}{4}\frac{\Psi_n^{(n/2+1)}(0)}{\left(\Psi_n^{(1)}(0)\right)^{n/2}}=\frac{\Gamma[n/2]}{2^{n/2}\Gamma[n]}.$$
We then have
$$E\left[\prod_{ h=1}^{1,n/2}{\bf w}_h\right]= \beta_n\cdot K_{G_n}(z,{\underline{\zeta}}^{(j)}).$$
Moreover this expression is a symmetric function of $z,{\underline{\zeta}}^{(j)}$.
We then get by equations (\ref{condh}) and (\ref{condG})
\begin{eqnarray*} h^{(j)}_n(z)&=&\frac{1}{(2\pi)^{n/2}}
\int_{\mathbb{C}^{n/2-1}}K_{G_n}(z,{\underline{\zeta}}^{(j)})\prod_{r<h,(r,h)\ne
j}(\zeta_r-\zeta_h)^4\prod_{r\ne j}(\zeta_r-z)^4\cdot \\ &&\frac{1}{|\tilde{Q}_j|^{1/2}}
\prod_{ h=1}^{1,n/2}[Q_j^{-1}]_{hh} d{\underline{\zeta}}^{(j)}.\end{eqnarray*}
Remembering that the real Jacobian of the transformation $T$ is $J_R=|J_C|^2$, we have
\begin{eqnarray*} h^{(j)}_n(z)=\frac{1}{(2\pi)^{n/2}}
\int_{\mathbb{R}^{n-2}}K_{G_n}(z,{\underline{\zeta}}^{(j)})\prod_{r<h,(r,h)\ne
j}|\zeta_r-\zeta_h|^8\prod_{r\ne j}|\zeta_r-z|^8\frac{1}{|Q_j|}\cdot \\
\prod_{ h=1}^{1,n/2}[Q_j^{-1}]_{hh} d\Re({\underline{\zeta}}^{(j)})d\Im({\underline{\zeta}}^{(j)})\end{eqnarray*}
because $|\tilde{Q_j}|=|Q_j|^2$.
But, by Binet-Cauchy formula, we have
$$|Q_j|=\sum_{j_1< j_2<\dots<j_{n/2}}\left|X_j\left(\begin{array}{llll}1 & 2 & \dots & n/2 \\ j_1 & j_2 & \dots & j_{n/2}\end{array}\right)\right|^2$$
where $X_j\left(\begin{array}{llll}1 & 2 & \dots & n/2 \\ j_1 & j_2 & \dots & j_{n/2}\end{array}\right)$ is a minor of maximal order $n/2$ of $X_j$. From \cite{mcdon} we have
$$X_j\left(\begin{array}{llll}1 & 2 & \dots & n/2 \\ j_1 & j_2 & \dots & j_{n/2}\end{array}\right)=s_{P_1}(z,{\underline{\zeta}}^{(j)})\prod_{r<h,(r,h)\ne j}(\zeta_r-\zeta_h)\prod_{r\ne j}(\zeta_r-z)$$
where $s_{P_1}(z,{\underline{\zeta}}^{(j)})$ is the Schur function associated to the partition $P_1=\{j_1 , j_2 , \dots , j_{n/2}\}$, which is a symmetric polynomial with positive integer coefficients (Jack function with $\alpha=1$).
Hence
$$|Q_j|=\prod_{r<h,(r,h)\ne j}|\zeta_r-\zeta_h|^2\prod_{r\ne j}|\zeta_r-z|^2 \sum_{j_1\le j_2<\dots<j_{n/2}}\left|s_{P_1}(z,{\underline{\zeta}}^{(j)})\right|^2.$$
But
$[Q^{-1}_j]_{hh}=\frac{[\mbox{adj}( Q_j)]_{hh}}{|Q_j|}$ and
$$[\mbox{adj}( Q_j)]_{hh}=\sum_{j_1< j_2<\dots<j_{n/2-1}}\left|X_j\left(\begin{array}{lllllll}1, & 2, &\dots & h-1,&h+1\dots & n/2 \\ j_1, & j_2, & \dots& \dots& \dots & j_{n/2-1}\end{array}\right)\right|^2=$$
$$\sum_{P_2}\left|s_{P_2}(z,{\underline{\zeta}}^{(j,h)})\right|^2\prod_{r<k,(r,k)\ne j,h}|\zeta_r-\zeta_k|^2\prod_{r\ne j\ne h}|\zeta_r-z|^2$$
where $P_2=\{j_1 , j_2 , \dots , j_{n/2-1}\}$
and
\begin{eqnarray*} h^{(j)}_n(z)= \frac{1}{(2\pi)^{n/2}}
\int_{\mathbb{R}^{n-2}}K_{G_n}(z,{\underline{\zeta}}^{(j)})\prod_{r<h,(r,h)\ne
j}|\zeta_r-\zeta_h|^{6-n}\prod_{r\ne j}|\zeta_r-z|^{6-n}\cdot\\
\prod_{ h=1}^{1,n/2}\left(\prod_{r<k,(r,k)\ne j,h}|\zeta_r-\zeta_k|^2\prod_{r\ne j\ne h}|\zeta_r-z|^2 \right)\cdot\\
\frac{\prod_{ h=1}^{1,n/2}\left(\sum_{P_2}\left|s_{P_2}(z,{\underline{\zeta}}^{(j,h)})\right|^2\right)}{ \left(\sum_{P_1}\left|s_{P_1}(z,{\underline{\zeta}}^{(j)})\right|^2\right)^{n/2+1}}d\Re({\underline{\zeta}}^{(j)})d\Im({\underline{\zeta}}^{(j)}).\end{eqnarray*}
But it turns out that
$$\prod_{ h=1}^{1,n/2}\left(\prod_{r<k,(r,k)\ne j, h}|\zeta_r-\zeta_k|^2\prod_{r\ne j\ne h}|\zeta_r-z|^2 \right)=\prod_{r<h,(r,h)\ne
j}|\zeta_r-\zeta_h|^{n-4}\prod_{r\ne j}|\zeta_r-z|^{n-4}$$ therefore
\begin{eqnarray*} h^{(j)}_n(z)= \frac{1}{(2\pi)^{n/2}}
\int_{\mathbb{R}^{n-2}}K_{G_n}(z,{\underline{\zeta}}^{(j)})\prod_{r<h,(r,h)\ne
j}|\zeta_r-\zeta_h|^{2}\prod_{r\ne j}|\zeta_r-z|^{2}\cdot\\
\frac{\prod_{ h=1}^{1,n/2}\left(\sum_{P_2}\left|s_{P_2}(z,{\underline{\zeta}}^{(j,h)})\right|^2\right)}{ \left(\sum_{P_1}\left|s_{P_1}(z,{\underline{\zeta}}^{(j)})\right|^2\right)^{n/2+1}}d\Re({\underline{\zeta}}^{(j)})d\Im({\underline{\zeta}}^{(j)}).\end{eqnarray*}
Because of the symmetry of the Schur polynomials and of $K_{G_n}(z,{\underline{\zeta}}^{(j)})$, all the $h^{(j)}_n(z),\;j=1,\dots,n/2$, are equal. Therefore, by equation (\ref{dc}), we get the result.
$\;\;\;\Box$
From the result of the theorem it follows that, expressing $h_n(z)$ in polar coordinates and taking the marginal with respect to the modulus (radial density), we get a much more complicated expression than a Lorentzian function, thus disproving a part of the conjecture mentioned in the Introduction. We also notice that evaluating $h_n(z)$ is not an easy task because computing Schur functions is far from trivial (\cite{sch}).
\section{Numerical examples}
In order to illustrate the results obtained in Theorems 3 and 4, three numerical experiments were performed. In the first one a sample of cardinality $2\cdot 10^6$ of a multivariate complex $\alpha-$stable, centered, symmetric colored noise with $\alpha=0.5$ and $n=4$ was generated. The Pad\'e poles were computed as well as the empirical density of their modulus. In fig.1 this estimate is represented by dots. Then the integral representation of the poles condensed density derived in Theorem 4 was transformed to polar coordinates $z=\rho e^{i\theta}$, with Jacobian $\rho$. By Theorem 3 we know
that the phase distribution is uniform. Therefore the radial distribution is obtained by multiplying the integral representation given in Theorem 4 by $2\pi\rho$. The integral was approximated by numerical quadrature where, in this case, $$K_{G_4}(z,{\underline{\zeta}}^{(j)})= \mbox{tr}(B_1)\mbox{tr}(B_2)+2\mbox{tr}(B_1B_2),$$ and is plotted in fig.1 as a solid line. We notice that the fit is quite accurate.
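For clarity (this is only a restatement of the construction just described, in our notation): since the phase is uniform, the radial density actually plotted is
$$g_n(\rho)=2\pi\rho\, h_n(\rho e^{i\theta}),\;\;\;\mbox{for any fixed phase }\theta,$$
and it is this $g_n$ that is compared with the empirical histogram of the pole moduli in fig.1.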
In the second experiment a $4$-variate complex Gaussian white noise was generated. The same computations as above were performed and the results are plotted in fig.2, confirming that the condensed density in the Gaussian case is the same as in the $\alpha-$stable one, as claimed in Theorem 3.
In the third experiment a sample of cardinality $2\cdot 10^6$ of a $6$-variate complex Gaussian white noise was generated.
The same computations as above were performed. In this case
\begin{eqnarray*} K_{G_6}(z,{\underline{\zeta}}^{(j)})=\mbox{tr}(B_1)\mbox{tr}(B_2)\mbox{tr}(B_3)+2\left[\mbox{tr}(B_1B_2)\mbox{tr}(B_3)\right.+\\\left.
\mbox{tr}(B_1B_3)\mbox{tr}(B_2)+\mbox{tr}(B_2B_3)\mbox{tr}(B_1)\right]+8\mbox{tr}(B_1B_2B_3).\end{eqnarray*} The results are plotted in fig.3. Also in this case the fit is quite accurate.
\begin{thebibliography}{11}
\bibitem{barjma} P. Barone, On the condensed density of the generalized eigenvalues of pencils of
Gaussian random matrices and applications, {\it J. Multiv. Anal.} {\bf 111} (2012) 160--173.
\bibitem{dsp} P. Barone, A black box method for solving the complex exponentials approximation problem,
{\it Digital Signal Proc.} (2012), doi:10.1016/j.dsp.2012.09.005.
\bibitem{barja2} P. Barone, A new transform for solving
the noisy complex exponentials approximation problem, {\it J. Approx.
Theory} {\bf 155} (2008), 1--27.
\bibitem{barja} P. Barone, On the distribution
of poles of Pad\'e approximants to the Z-transform
of complex Gaussian white noise, {\it J. Approx.
Theory} {\bf 132} (2005) 224--240.
\bibitem{bess} D. Bessis, L. Perotti, Universal analytic properties of noise: introducing the J-matrix formalism, {\it J. Phys. A: Math. Theor.} {\bf 42} (2009) 1--15.
\bibitem{sch} J. Demmel, P. Koev, Accurate and efficient evaluation of the Schur and Jack functions, {\it Math. Comput.} {\bf 75} (2006) 223--239.
\bibitem{donoho} D. L. Donoho, Superresolution via sparsity constraints, {\it SIAM J. Math. Anal.} {\bf 23,5} (1992) 1309--1331.
\bibitem{fang} K. Fang, S. Kotz, K. Ng, {\it Symmetric multivariate and related distributions}, (Chapman and Hall, London 1990).
\bibitem{gmv} G. H. Golub, P. Milanfar, J. Varah, A stable numerical method
for inverting shapes from moments, {\it SIAM J. Sci. Comp.},{\bf 21,4} (2004), 1222--1243.
\bibitem{kan} R. Kan, From moments of sum to moments of product, {\it J. Multiv. Anal.} {\bf 99} (2008) 542--554.
\bibitem{kaw} T. Kawata, On the Fourier series of a stationary process. II {\it Zeitschrift f\"{u}r Wahrscheinlichkeitstheorie und verwandte Gebiete} {\bf 13} (1969) 25--38.
\bibitem{mcdon} I. G. Macdonald, {\it Symmetric functions and Hall polynomials}, (Clarendon Press, Oxford 1995).
\bibitem {muir} R. J. Muirhead, { \it Aspects of multivariate statistical theory},
(Wiley, New York 1982).
\bibitem{osb} M. R. Osborne, G. K. Smyth,
A Modified Prony Algorithm for Exponential Function Fitting, {\it SIAM J. Sci. Comput.} {\bf 16} (1995) 119--138.
\bibitem{scharf} L. L. Scharf, {\it Statistical signal processing},
(Addison-Wesley, Reading 1991).
\bibitem{vpb} V. Viti, C. Petrucci, P. Barone, Prony methods in NMR
spectroscopy, {\it International Journal of Imaging Systems and
Technology} {\bf 8} (1997) 565--571.
\end{thebibliography}
\begin{figure}[H]
\hspace*{-0.5in}
\includegraphics[totalheight=5.in]{fig1.eps}
\caption{The poles condensed density for $n=4$, computed by numerical integration (Th.4) (solid) and the empirical density based on $2\cdot 10^6$ replications of a colored noise with a symmetric, centered $\alpha-$stable density with $\alpha=0.5$ (dotted).}
\label{fig1}
\end{figure}
\begin{figure}[H]
\hspace*{-0.5in}
\includegraphics[totalheight=5.in]{fig2.eps}
\caption{The poles condensed density for $n=4$, computed by numerical integration (Th.4) (solid) and the empirical density based on $2\cdot 10^6$ replications of a complex Gaussian white noise (dotted).}
\label{fig2}
\end{figure}
\begin{figure}[H]
\hspace*{-0.5in}
\includegraphics[totalheight=5.in]{fig3.eps}
\caption{The poles condensed density for $n=6$, computed by numerical integration (Th.4) (solid) and the empirical density based on $2\cdot 10^6$ replications of a complex Gaussian white noise (dotted).}
\label{fig3}
\end{figure}
\end{document}
\begin{document}
\title{Noise tailoring for scalable quantum computation via randomized
compiling}
\author{Joel J. \surname{Wallman}}
\affiliation{Institute for Quantum Computing and Department of Applied
Mathematics, University of Waterloo, Waterloo, Canada}
\author{Joseph \surname{Emerson}}
\affiliation{Institute for Quantum Computing and Department of Applied
Mathematics, University of Waterloo, Waterloo, Canada}
\affiliation{Canadian Institute for Advanced Research, Toronto, Ontario M5G
1Z8, Canada}
\date{\today}
\begin{abstract}
Quantum computers are poised to radically outperform their classical
counterparts by manipulating coherent quantum systems. A realistic quantum
computer will experience errors due to the environment and imperfect
control. When these errors are even partially coherent, they present a major
obstacle to performing robust computations. Here, we propose a method for
introducing independent random single-qubit gates into the logical circuit in
such a way that the effective logical circuit remains unchanged. We prove that
this randomization tailors the noise into stochastic Pauli errors, which can
dramatically reduce error rates while introducing little or no experimental
overhead. Moreover, we prove that our technique is robust to the inevitable
variation in errors over the randomizing gates and numerically illustrate the
dramatic reductions in worst-case error that are achievable. Given such
tailored noise, gates with significantly lower fidelity---comparable to
fidelities realized in current experiments---are sufficient to achieve
fault-tolerant quantum computation. Furthermore, the worst-case error rate of
the tailored noise can be directly and efficiently measured through randomized
benchmarking protocols, enabling a rigorous certification of the performance of
a quantum computer.
\end{abstract}
\maketitle
The rich complexity of quantum states and processes enables powerful new
protocols for processing and communicating quantum information, as illustrated
by Shor's factoring algorithm~\cite{Shor1999} and quantum simulation
algorithms~\cite{Lloyd1996}. However, the same rich complexity of quantum
processes that makes them useful also allows a large variety of errors to
occur. Errors in a quantum computer arise from a variety of sources, including
decoherence and imperfect control, where the latter generally lead to coherent (unitary) errors.
It is provably possible to perform a fault-tolerant quantum computation in the
presence of such errors provided they occur with at most some maximum threshold
probability~\cite{Shor1995,Gottesman1996,Knill1998,Aharonov1999,Calderbank2002,Kitaev2003}.
However, the fault-tolerant threshold probability depends upon the error-correcting code and is
notoriously difficult to estimate or bound because of the sheer variety of possible
errors. Rigorous lower bounds on the threshold of the order of $10^{-6}$~\cite{Aharonov1999}
for generic local noise and $10^{-4}$~\cite{Aliferis2007a} and
$10^{-3}$~\cite{Aliferis2008} for stochastic Pauli noise have been obtained for
a variety of codes. While these bounds are rigorous, they are far below
numerical estimates that range from $10^{-2}$~\cite{Knill2005,Wang2011} to
$10^{-1}$~\cite{Duclos-Cianci2010,Wootton2012,Bombin2012}, which are generally
obtained assuming stochastic Pauli noise, largely because the
effect of other errors is too difficult to simulate~\cite{Puzzuoli2014}. While
a threshold for Pauli errors implies a threshold exists for arbitrary errors
(e.g., unitary errors), there is currently no known way to rigorously estimate
a threshold for general noise from a threshold for Pauli noise.
The ``error rate'' due to an arbitrary noise map $\mc{E}$ can be quantified in
a variety of ways. Two particularly important quantities are the average error
rate defined via the gate fidelity
\begin{align}
r(\mc{E}) = 1- \int {\rm d}\psi
\bra{\psi}\mc{E}(\ket{\psi}\!\bra{\psi})\ket{\psi}
\end{align}
and the worst-case error rate (also known as the diamond distance from the
identity)~\cite{Kitaev1997}
\begin{align}\label{eq:diamond_def}
\epsilon(\mc{E})
= \tfrac{1}{2}\| \mc{E} - \mc{I}\|_{\diamond}
= \sup_{\psi} \tfrac{1}{2}\|
\left(\mc{E}\otimes\mc{I}_d-\mc{I}_{d^2}\right)(\psi)\|_1
\end{align}
where $d$ is the dimension of the system $\mc{E}$ acts on, $\|A\|_1 =
\mathrm{Tr}\sqrt{A^{\dagger} A}$ and the maximization is over all $d^2$-dimensional pure
states (to account for the error introduced when acting on entangled states).
The average error rate $r(\mc{E})$ is an experimentally-convenient
characterization of the error rate because it can be efficiently estimated via
randomized benchmarking~\cite{Emerson2005, Emerson2007, Dankert2009, Knill2008,
Magesan2011}. However, the diamond distance is typically the quantity used to
prove rigorous fault-tolerance thresholds~\cite{Aharonov1999}. The average
error rate and the worst-case error rate are related via the
bounds~\cite{Beigi2011, Wallman2014}
\begin{align}\label{eq:fidelity_to_worst}
r(\mc{E}) d^{-1}(d+1)
\leq \epsilon(\mc{E})
\leq \sqrt{r(\mc{E})} \sqrt{d(d+1)}.
\end{align}
The lower bound is saturated by any stochastic Pauli noise, in which case the
worst-case error rate is effectively equivalent to the experimental estimates
obtained efficiently via randomized benchmarking~\cite{Magesan2012a}. While the
upper bound is not known to be tight, there exist unitary channels such that
$\epsilon(\mc{E}) \approx \sqrt{(d+1)r(\mc{E})/4}$, so the scaling with $r$
is optimal~\cite{Sanders2015}.
The scaling of the upper bound of \cref{eq:fidelity_to_worst} is only saturated
by purely unitary noise. However, even a small coherent error relative to
stochastic errors can result in a dramatic increase in the worst-case error.
For example, consider a single qubit noise channel with $ r = 1\times 10^{-4}$,
where the contribution due to stochastic noise processes is $r = 0.83\times
10^{-4}$ and the remaining contribution is from a small unitary (coherent)
rotation error. The worst-case error for such noise is $\epsilon \approx
10^{-2}$, essentially two orders of magnitude greater than the
infidelity~\cite{Sanders2015}.
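To indicate how the quoted figure arises (a back-of-the-envelope estimate of ours, using
\cref{eq:fidelity_to_worst}), attribute to the coherent part an infidelity
$r_{\rm u} = (1-0.83)\times 10^{-4} = 1.7\times 10^{-5}$. For a single qubit ($d=2$),
taking the upper bound of \cref{eq:fidelity_to_worst} as indicative of purely unitary noise,
\begin{align}
\epsilon \approx \sqrt{d(d+1)\,r_{\rm u}} = \sqrt{6\times 1.7\times 10^{-5}}
\approx 1.0\times 10^{-2},
\end{align}
consistent with the value quoted above, whereas the stochastic contribution enters only at
the level $\tfrac{d+1}{d}\times 0.83\times 10^{-4}\approx 1.2\times 10^{-4}$.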
Here we show that by compiling random single-qubit gates into a logical
circuit, noise with arbitrary coherence and spatial correlations can be
tailored into stochastic Pauli noise. We also prove that our technique is
robust to gate-dependent errors which arise naturally due to imperfect
gate calibration. In particular, our protocol is fully robust against arbitrary
gate-dependent errors on the gates that are most difficult to implement, while
imperfections in the easier gates introduce an additional error that is
essentially proportional to the infidelity.
Our randomized compiling technique requires only a small (classical) overhead
in the compilation cost, or, alternatively, can be implemented on the fly
with fast classical control. Stochastic Pauli errors with the same average
error rate $r$ as a coherent error lead to four major advantages for quantum
computation: (i) they have a substantially lower worst-case error rate, (ii)
the worst-case error rate can be directly estimated efficiently and robustly
via randomized benchmarking experiments, enabling a direct comparison to a
threshold estimate to determine if fault-tolerant quantum computation is
possible, (iii) the known fault-tolerant thresholds for Pauli errors are
substantially higher than for coherent errors, and (iv) the average error rate
accumulates linearly with the length of a computation for stochastic Pauli
errors, whereas it can accumulate quadratically for coherent errors.
Randomizing quantum circuits has been previously proposed in
Refs~\cite{Knill2004a,Kern2005}. However, these proposals have specific
limitations that our technique circumvents. The proposal for inserting Pauli
gates before and after Clifford gates proposed in Ref.~\cite{Knill2004a} is a
special case of our technique when the only gates to be implemented are
Clifford gates. However, this technique does not account for non-Clifford gates
whereas our generalized technique does. As a large number of non-Clifford gates
are required to perform useful quantum computations~\cite{Aaronson2004} and are
often more difficult to perform fault-tolerantly, our generalized technique
should be extremely valuable in practice. Moreover, the proposal in
Ref.~\cite{Knill2004a} assumes that the Pauli gates are essentially perfect,
whereas we prove that our technique is robust to imperfections in the Pauli
gates. Alternatively, Pauli-Random-Error-Correction (PAREC) has been shown to
eliminate static coherent errors~\cite{Kern2005}. However, PAREC involves
changing the multi-qubit gates in each step of the computation. As multi-qubit
errors are currently the dominant error source in most experimental platforms
and typically depend strongly on the gate to be performed, it is unclear how
robust PAREC will be against gate-dependent errors on multi-qubit gates and
consequently against realistic noise. By way of contrast, our technique is
completely robust against arbitrary gate-dependent errors on multi-qubit gates.
\section{Results}
\subsection{Standardized form for compiled quantum circuits}
We begin by proposing a standardized form for compiled quantum circuits based
on an operational distinction between `easy' and `hard' gates, that
is, gates that can be implemented in a given experimental platform with
relatively small and large amounts of noise respectively. We also propose a
specific choice of easy and `hard' gates that is well-suited to many
architectures for fault-tolerant quantum computation.
In order to experimentally implement a quantum algorithm, a quantum circuit is
compiled into a sequence of elementary gates that can be directly implemented
or have been specifically optimized. Typically, these elementary gates can be
divided into easy and hard gate sets based either on how many physical qubits
they act on or how they are implemented within a fault-tolerant architecture.
In near-term applications of universal quantum computation without quantum
error correction, such as quantum simulation, the physical error model and
error rate associated with multi-qubit gates will generally be distinct from,
and much worse than, those associated with single qubit gates. In the
long-term, fault-tolerant quantum computers will implement some operations
either transversally (that is, by applying independent operations to a set of
physical qubits) or locally in order to prevent errors from cascading. However,
recent `no-go' theorems establish that for any fault-tolerant scheme, there
exist some operations that cannot be performed in such a
manner~\cite{Eastin2009,Beverland2014} and so must be implemented via
other means, such as magic-state injection~\cite{Bravyi2005} or gauge
fixing~\cite{Bombin2015}.
The canonical division that we consider is to set the easy gates to be the
group generated by Pauli gates and the phase gate $R = |0\rangle\!\langle 0| + i|1\rangle\!\langle
1|$, and the hard gate set to be the Hadamard gate $H$, the $\pi/8$ gate
$\sqrt{R}$ and the two-qubit controlled-Z gate $\Delta(Z) = |0\rangle\!\langle0|\otimes I
+ |1\rangle\!\langle1|\otimes Z$. Such circuits are universal for quantum computation and
naturally suit many fault-tolerant settings, including CSS codes with a
transversal $T$ gate (such as the 15-qubit Reed-Muller code), color codes and
the surface code. While some of the `hard' gates may be easier than others in a
given implementation, it is beneficial to make the set of easy gates as small
as possible since our scheme is completely robust to arbitrary variations in
errors over the hard gates.
With such a division of the gates, the target circuit can be reorganized into a
circuit (the `bare' circuit) consisting of $K$ clock cycles, wherein each cycle
consists of a round of easy gates followed by a round of hard gates applied
to disjoint qubits as in \cref{fig:randomized_compilation}~a. To
concisely represent the composite operations performed in individual rounds, we
use the notational shorthand $\vec{A} = A_1\otimes \ldots \otimes A_n$ and
define $G_k$ to be the product of all the hard gates applied in the $k$th
cycle. We also set $G_K=I$ without loss of generality, so that the circuit
begins and ends with a round of easy gates.
\begin{figure}
\caption{a) Example of a bare circuit that is arranged into cycles wherein each
cycle consists of a round of easy single-qubit gates and a round of hard gates
(here, the hard gates are controlled-NOT gates). b) A randomized circuit
wherein twirling gates have been inserted before and after every easy gate. c)
A randomized circuit wherein the twirling gates have been compiled into the
easy gates, resulting in a new circuit that is logically equivalent to the bare
circuit and has the same number of elementary
gates.}
\label{fig:randomized_compilation}
\end{figure}
\subsection{Randomized compiling}
We now specify how standardized circuits in the above form can be randomized in
order to average errors in the implementations of the elementary gates into an
effective stochastic channel, that is, into a channel $\mc{E}$ that maps any
$n$-qudit state $\rho$ to
\begin{align}
\mc{E}(\rho) = \sum_{P\in \mbf{P}_d\tn{n}} c_P P\rho P^{\dagger},
\end{align}
where $\mbf{P}_d\tn{n}$ is the set of $d^{2n}$ generalized Pauli operators and
the coefficients $c_P$ are a probability distribution over $\mbf{P}_d\tn{n}$.
For qubits ($d=2$), $\mbf{P}_2$ is the familiar set of four Hermitian and
unitary Pauli operators $\{I,X,Y,Z\}$.
Let $\mbf{C}$ denote the group generated by the easy gates and assume that it
contains a subset $\mbf{T}$ such that
\begin{align}\label{eq:reasy}
\mc{E}^{\mbf{T}} = \md{E}_T T^{\dagger} \mc{E} T
\end{align}
is a stochastic channel for any channel $\mc{E}$, where $\md{E}_x f(x) =
|X|^{-1}\sum_{x\in X} f(x)$ denotes the uniform average over a set $X$
(typically a gate set implicit from the context). The canonical example of such
a set is $\mbf{P}_d\tn{n}$ or any group containing $\mbf{P}_d\tn{n}$.
We propose the following randomized compiling technique, where the
randomization should ideally be performed independently for each run of a given
bare circuit. Each round of easy gates $\vec{C}_k$ in the bare circuit of
\cref{fig:randomized_compilation}~a is replaced with a round of randomized
dressed gates
\begin{align}\label{eq:dressed}
\tilde{C}_k = \vec{T}_k \vec{C}_k \vec{T}_{k-1}^c
\end{align}
as in \cref{fig:randomized_compilation}~b, where the $T_{j,k}$ are chosen
uniformly at random from the twirling set $\mbf{T}$ and the correction
operators are set to $\vec{T}_k^c = G_k\vec{T}_k^{\dagger} G_k^{\dagger}$ to undo the
randomization from the previous round. The edge terms $\vec{T}^c_0$
and $\vec{T}_K$ can either be set to the identity or also randomized depending
on the choice of the twirling set and the states and measurements.
The dressed gates should then be compiled into and implemented as a single
round of elementary gates as in \cref{fig:randomized_compilation}~c rather than
being implemented as three separate rounds of elementary gates. In order to
allow the dressed gates to be compiled into a single easy gate, we require
$\vec{T}_k^c\in\mbf{C}\tn{n}$ for all $\vec{T}_k\in\mbf{T}\tn{n}$. The
example with $\mbf{T} = \mbf{P}_d$ that has been implicitly appealed to and
described as ``toggling the Pauli frame'' previously~\cite{Knill2005} is
a special case of the above technique when the hard gates are Clifford gates
(which are defined to be the gates that map Pauli operators to Pauli operators
under conjugation), but breaks down when the hard gates include non-Clifford
gates such as the single-qubit $\pi/8$ gate. For the canonical division into
easy and hard gates from the previous section, we set $\mbf{T} = \mbf{P}_2$,
$\mbf{C}$ to be the group generated by $R$ and $\mbf{P}_2$ (which is isomorphic
to the dihedral group of order 8) and the hard gates to be rounds of $H$,
$\sqrt{R}$ and $\Delta(Z)$ gates. Conjugating a Pauli gate by $H$ or
$\Delta(Z)$ maps it to another Pauli gate, while conjugating by $\sqrt{R}$ maps
$X^x Z^z$ to $R^x X^x Z^z$ (up to a global phase). Therefore the correction
gates, and hence the dressed gates, are all elements of the easy gate set.
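For concreteness (a routine check, not taken from the original text), the conjugation
relations underlying this statement are
\begin{align}
H X H^{\dagger} &= Z, & H Z H^{\dagger} &= X, \notag\\
\Delta(Z)(X\otimes I)\Delta(Z)^{\dagger} &= X\otimes Z, &
\Delta(Z)(Z\otimes I)\Delta(Z)^{\dagger} &= Z\otimes I, \notag\\
\sqrt{R}\, X \,\sqrt{R}^{\dagger} &= e^{-i\pi/4} R X, &
\sqrt{R}\, Z \,\sqrt{R}^{\dagger} &= Z,
\end{align}
so a Pauli twirling gate commuted through a round of hard gates picks up at most factors of
$R$ and therefore remains in the easy group generated by $\mbf{P}_2$ and $R$.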
The tailored noise is not realized in any individual choice of sequences.
Rather, it is the average over independent random sequences. However, while
each term in the tailored noise can have a different effect on an input state
$\rho$, if the twirling gates are independently chosen on each run, then
the expected noise over multiple runs is exactly the tailored noise.
Independently sampling the twirling gates each time the circuit is run
introduces some additional overhead, since the dressed gates (which are
physically implemented) depend on the twirling gates and so need to be
recompiled for each experimental run of a logical circuit. However, this
recompilation can be performed in advance efficiently on a classical computer
or else applied efficiently `on the fly' with fast classical control. Moreover,
this fast classical control is exactly equivalent to the control required in
quantum error correction so imposes no additional experimental burden.
We will prove below that our technique tailors noise satisfying various
technical assumptions into stochastic Pauli noise. We expect the technique will
also tailor more general noise into approximately stochastic noise,
though we leave a fully general proof as an open problem.
\subsection{Robustness to arbitrary independent errors on the hard gates}
We now prove that our randomized compiling scheme results in an average
stochastic noise channel for Markovian noise that depends arbitrarily upon the
hard gates but is independent of the easy gates. Under this assumption, the
noisy implementations of the $k$th round of easy gates $\vec{C}_k$ and hard
gates $G_k$ can be written as $\mc{E}_{\rm e} \vec{C}_k$ and $G_k\mc{E}(G_k)$
respectively, where $\mc{E}_{\rm e}$ and $\mc{E}(G_k)$ are $n$-qubit channels
that can include cross-talk and multi-qubit correlations and $\mc{E}(*)$ can
depend arbitrarily on the argument, that is, on which hard gates are
implemented.
\begin{thm}\label{thm:gate_independent}
Randomly sampling the twirling gates $\vec{T}_k$ independently in each round
tailors the noise at each time step (except the last) into stochastic Pauli
noise when the noise on the easy gates is gate-independent.
\end{thm}
\begin{proof}
The key observation is that if the noise in rounds of easy gates is some fixed
noise channel $\mc{E}_{\rm e}$, then the dressed gates in
\cref{eq:dressed} have the same noise as the bare gates and so
compiling in the extra twirling gates in \cref{fig:noise_compilation}~c
does not change the noise at each time step, as illustrated in
\cref{fig:noise_compilation}~a and d. Furthermore, the correction gates
$T^c_{k,j}$ are chosen so that they are the inverse of the randomizing gates
when they are commuted through the hard gates, as illustrated in
\cref{fig:noise_compilation}~b and c. Consequently, uniformly averaging
over the twirling gates in every cycle reduces the noise in the $k$th cycle to
the tailored noise
\begin{align}\label{eq:rhard}
\mc{T}_k =\md{E}_{\vec{T}} \vec{T}^{\dagger}\mc{E}(G_k) \mc{E}_{\rm e}\vec{T}
\end{align}
where for channels $\mc{A}$ and $\mc{B}$, $\mc{A}\mc{B}$ denotes the channel
whose action on a matrix $M$ is $\mc{A}(\mc{B}(M))$. When $\mbf{T}=\mbf{P}$,
the above channel is a Pauli channel~\cite{Emerson2007}. Moreover, by the
definition of a unitary 1-design~\cite{Dankert2009}, the above sum is
independent of the choice of $\mbf{T}$ and so is a Pauli channel for any
unitary 1-design.
\end{proof}
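As a minimal illustration of \cref{eq:rhard} (our example, not part of the proof), consider
a single qubit with a purely coherent error $\mc{E}(\rho) = U\rho U^{\dagger}$,
$U = \exp(-i\theta Z/2)$, and $\mbf{T} = \mbf{P}_2$. Averaging over the four Pauli twirls
gives
\begin{align}
\md{E}_{T\in\mbf{P}_2}\, T^{\dagger}\mc{E}T\,(\rho)
= \cos^2(\theta/2)\,\rho + \sin^2(\theta/2)\, Z\rho Z,
\end{align}
that is, the coherent rotation is converted into a stochastic phase-flip channel with the
same average error rate.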
\Cref{thm:gate_independent} establishes that the noise in all but the final
cycle can be
exactly tailored into stochastic noise (albeit under somewhat idealized
conditions which will be relaxed below). To account for noise in the final
round, we can write the effect corresponding to a measurement outcome
$|\vec{z}\rangle$ as $\mc{A}(|\vec{z}\rangle\!\langle\vec{z}|)$ for some fixed noise
map $\mc{A}$. If $\mbf{P}\subset\mbf{C}$, we can choose $\vec{T}_K$ uniformly
at random from $\mbf{P}\tn{n}$. A virtual Pauli gate can then be inserted
between the noise map $\mc{A}$ and the idealized measurement effect
$|\vec{z}\rangle\langle\vec{z}|$ by classically relabeling the measurement outcomes to
map $\vec{z}\to \vec{z}\oplus \vec{x}$, where $\oplus$ denotes entry-wise
addition modulo two. Averaging over $\vec{T}_K$ with this relabeling reduces
the noise in the final round of single-qubit Clifford gates and the measurement
to
\begin{align}
\overline{\mc{A}} &= \md{E}_{\vec{P}} \vec{P} \mc{A} \mc{E}_{\rm e} \vec{P}.
\end{align}
This technique can also be applied to quantum non-demolition measurements on a
subset of the qubits (as in, for example, error-correcting circuits), where the
unmeasured qubits have randomizing twirling gates applied before and after the
measurement.
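Explicitly (a one-line check of ours), for a single bit of the outcome string we have
$Z|z\rangle\!\langle z|Z = |z\rangle\!\langle z|$ and
$X|z\rangle\!\langle z|X = |z\oplus 1\rangle\!\langle z\oplus 1|$, so the $Z$-part of the
final twirl acts trivially on the ideal effect while the $X$-part is undone by the classical
relabeling $\vec{z}\to\vec{z}\oplus\vec{x}$ described above.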
\begin{figure}
\caption{a) Fragment of a noisy bare circuit with the $k$th cycle indicated
by the dashed box. b) To tailor the noise into stochastic noise, we
insert random twirling gates before the noise and the corresponding
correction gates immediately after the noise. c) Equivalent circuit
to b, where the correction gates have been commuted through $G_k$, the
round of hard gates. d) The randomized circuit equivalent to b and c,
where the twirling and correction gates have been compiled into the
adjacent easy gates. e) The tailored circuit obtained by averaging over
randomized circuits where $\mc{T}
\label{fig:noise_compilation}
\end{figure}
\subsection{Robustness to independent errors on the easy gates}
By \cref{thm:gate_independent}, our technique is fully robust to the most
important form of gate-dependence, namely, gate-dependent errors on the
hard gates. However, \cref{thm:gate_independent} still requires that the
noise on the easy gates is effectively gate-independent. Because residual
control errors in the easy gates will generally produce small gate-dependent
(coherent) errors, we will show that the benefits of noise tailoring can still
be achieved in this physically realistic setting.
When the noise depends on the easy gates,
the tailored noise in the $k$th cycle from equation~\eqref{eq:rhard} becomes
\begin{align}
\mc{T}_k^{\rm GD} =\md{E}_{\vec{T}_1,\ldots,\vec{T}_k}
\vec{T}_k^{\dagger}\mc{E}(G_k) \mc{E}(\vec{\tilde{C}}_k)\vec{T}_k,
\end{align}
which depends on the previous twirling gates through $\vec{\tilde{C}}_k$ by
\cref{eq:dressed}. This dependence means that we cannot assign independent
noise to each cycle in the tailored circuit.
However, in \cref{thm:gate_dependent} (proven in `Methods') we show
that implementing a circuit with gate-dependent noise
$\mc{E}(\vec{\tilde{C}}_k)$ instead of the corresponding gate-independent noise
\begin{align}
\mc{E}_k^{\mbf{T}} = \md{E}_{\vec{\tilde{C}}_k} \mc{E}(\vec{\tilde{C}}_k)
\end{align}
introduces a relatively small additional error. We show that the additional
error is especially small when $\mbf{T}$ is a group normalized by $\mbf{C}$,
that is, $CTC^{\dagger}\in\mbf{T}$ for all $C\in\mbf{C}$, $T\in\mbf{T}$. This
condition is satisfied in many practical cases, including the scenario where
$\mbf{T}$ is the Pauli group and $\mbf{C}$ is the group generated by Pauli and
$R$ gates. The stronger bound reduces the contributions from every cycle by
orders of magnitude in parameter regimes of interest (i.e.,
$\epsilon\left[\mc{E}(G_{k-1})\mc{E}_{k-1}^{\mbf{T}}\right] \leq 10^{-2}$,
comparable to current experiments), so that the bound on the additional error
grows very slowly with the circuit length.
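As a rough indication of the improvement (the numbers here are illustrative and are ours,
not taken from the text): if each cycle has
$\md{E}_{\vec{\tilde{C}}_k}\|\mc{E}(\vec{\tilde{C}}_k)-\mc{E}_k^{\mbf{T}}\|_\diamond \sim 10^{-3}$
and $\epsilon[\mc{E}(G_{k-1})\mc{E}_{k-1}^{\mbf{T}}]\sim 10^{-2}$, then the simple bound
contributes $\sim 10^{-3}$ per cycle whereas the improved bound contributes
$\sim 2\times 10^{-5}$ per cycle, a reduction of nearly two orders of magnitude.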
\begin{thm}\label{thm:gate_dependent}
Let $\mc{C}_{\rm GD}$ and $\mc{C}_{\rm GI}$ be tailored circuits with
gate-dependent and gate-independent noise on the easy gates respectively. Then
\begin{align}
\|\mc{C}_{\rm GD}-\mc{C}_{\rm GI}\|_\diamond
\leq \sum_{k=1}^K \md{E}_{\vec{T}_1,\ldots,\vec{T}_K}
\|\mc{E}(\vec{\tilde{C}}_k)-\mc{E}_k^{\mbf{T}}\|_\diamond.
\end{align}
When $\mbf{T}$ is a group normalized by $\mbf{C}$, this can be improved to
\begin{align}
\|\mc{C}_{\rm GD}-\mc{C}_{\rm GI}\|_\diamond
&\leq \sum_{k=2}^K 2\md{E}_{\vec{\tilde{C}}_k}\| \mc{E}(\vec{\tilde{C}}_k) -
\mc{E}_k^{\mbf{T}}\|_\diamond\,\epsilon\left[\mc{E}(G_{k-1})\mc{E}_{k-1}^{\mbf{T}}\right]
\notag\\
&\quad + \md{E}_{\vec{\tilde{C}}_1}\|\mc{E}(\vec{\tilde{C}}_1) -
\mc{E}_1^{\mbf{T}}\|_\diamond.
\end{align}
\end{thm}
There are two particularly important scenarios in which the effect of
gate-dependent contributions needs to be considered and which determine the
physically relevant value of $K$. In near-term applications
such as quantum simulators, the bound of \cref{thm:gate_dependent} would be applied to the
entire circuit, while in long-term applications with quantum error correction,
it would be applied to fragments corresponding to rounds of
error correction. Hence under our randomized compiling technique, the noise on
the easy gates imposes a limit either on the useful length of a circuit without
error correction or on the distance between rounds of error correction. It is
important to note that a practical limit on $K$ is already imposed, in the
absence of our technique, by the simple fact that even Pauli noise accumulates
linearly in time, so $r(\mc{T}_k)\ll 1/K$ is already required to ensure that
the output of any realistic circuit remains close to the ideal circuit.
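Concretely (a standard estimate included here for completeness), $K$ rounds of stochastic
Pauli noise with error rate $r$ per round give a total error of roughly
$1-(1-r)^K \approx Kr$ when $Kr \ll 1$, so keeping the output of the computation meaningful
already forces $r \ll 1/K$ irrespective of the compiling strategy.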
While \cref{thm:gate_dependent} provides a very promising bound, it is unclear
how to estimate the quantities $\tfrac{1}{2}\md{E}_{\vec{\tilde{C}}_k}\|
\mc{E}(\vec{\tilde{C}}_k) - \mc{E}_k^{\mbf{T}} \|_\diamond$ without performing
full process tomography. To remedy this, we now provide the following bound in
terms of the infidelity, which can be efficiently estimated via randomized
benchmarking. We expect the following bound is not tight as we use the triangle
inequality to turn the deviation from the average noise into deviations from no
noise, which should be substantially larger. However, even the following loose
bound is sufficient to rigorously guarantee that our technique significantly
reduces the worst-case error, as illustrated in \cref{fig:noise_reduction} for
a two-qubit gate in the bulk of a circuit (i.e., with $k>1$).
The following bound could also be substantially improved if the noise on the
easy gates is known to be close to depolarizing (even if the hard gates
have strongly coherent errors), as quantified by the
unitarity~\cite{Wallman2015,Kueng2015,Wallman2015b}. However, rigorously
determining an improved bound would require analyzing the protocol for
estimating the unitarity under gate-dependent noise, which is currently
an open problem.
\begin{thm}\label{thm:local}
For arbitrary noise,
\begin{align}
\md{E}_{\vec{\tilde{C}}_k}\| \mc{E}(\vec{\tilde{C}}_k) -
\mc{E}_k^{\mbf{T}} \|_\diamond
\leq 2\epsilon(\mc{E}_k^{\mbf{T}}) + 2\sqrt{\md{E}_{\vec{\tilde{C}}_k}
\epsilon[\mc{E}(\vec{\tilde{C}}_k)]^2}.
\end{align}
For $n$-qubit circuits with local noise on the easy gates,
\begin{align}
\md{E}_{\vec{\tilde{C}}_k}\| \mc{E}(\vec{\tilde{C}}_k) -
\mc{E}_k^{\mbf{T}} \|_\diamond &\leq
\sum_{j=1}^n 4\sqrt{6r\Bigl[\mc{E}_{j,k}^{\mbf{T}}\Bigr]}
\end{align}
for $k=1,\ldots,K$, where $\mc{E}_{j,k}^{\mbf{T}}
=\md{E}_{\tilde{C}_{j,k}}\mc{E}_j (\tilde{C}_{j,k})$ is the local noise on the
$j$th qubit averaged over the dressed gates in the $k$th cycle.
\end{thm}
\begin{figure}
\caption{Upper bounds $\epsilon^{\rm ub}
\label{fig:noise_reduction}
\end{figure}
\subsection{Numerical simulations}
Tailoring experimental noise into stochastic noise via our technique provides
several dramatic advantages, which we now illustrate via numerical simulations.
Our simulations are all of six-qubit circuits with the canonical division into
easy and hard gates. That is, the easy gates are composed of Pauli gates and
the phase gate $R = |0\rangle\langle 0| + i|1\rangle\langle 1|$, while the hard gates are the
Hadamard, $\pi/8$ gate $T = \sqrt{R}$ and the two-qubit controlled-Z gate
$\Delta(Z) = |0\rangle\langle0|\otimes I + |1\rangle\langle1|\otimes Z$. Such circuits are
universal for quantum computation and naturally suit many fault-tolerant
settings, including CSS codes with a transversal $T$ gate (such
as the 15-qubit Reed-Muller code), color codes and the surface code.
We quantify the total noise in a noisy implementation $\mc{C}_{\rm noisy}$ of
an ideal circuit $\mc{C}_{\rm id}$ by the variational distance
\begin{align}\label{eq:tvd}
\tau_{\rm noisy} = \sum_j \tfrac{1}{2}|{\rm Pr}(j|\mc{C}_{\rm noisy}) - {\rm
Pr}(j|\mc{C}_{\rm id})|
\end{align}
between the probability distributions for ideal computational basis
measurements after applying $\mc{C}_{\rm noisy}$ and $\mc{C}_{\rm id}$ to a
system initialized in the $\ket{0}\tn{n}$ state. We do not maximize over states
and measurements; rather, our results indicate the effect of noise under
practical choices of preparations and measurements.
For our numerical simulations, we add gate-dependent over-rotations to each
gate, that is, we perturb one of the eigenvectors of each gate $U$ by
$e^{i\delta_U}$. For single-qubit gates, the choice of eigenvector is
irrelevant (up to a global phase), while for the two-qubit $\Delta(Z)$ gate, we
add the phase to the $|11\rangle$ state.
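For reference (our calculation, using the standard relation between the average gate
fidelity and the trace of a unitary error), a single-qubit over-rotation
$V = \mathrm{diag}(1, e^{i\delta_U})$ has infidelity
\begin{align}
r(V) = 1 - \frac{|\mathrm{Tr}\, V|^2 + d}{d(d+1)}
= \frac{2}{3}\sin^2(\delta_U/2) \approx \frac{\delta_U^2}{6}
\end{align}
with $d=2$, which indicates the order of magnitude of $\delta_U$ required to reach a given
target infidelity in the simulations below.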
We perform two sets of numerical simulations to illustrate two particular
properties. First, \cref{fig:vary_noise} shows that our technique introduces a
larger relative improvement as the infidelity decreases, that is, approximately
a factor of two on a log scale, directly analogous to the $r$ versus $\sqrt{r}$ scaling
of the worst-case error (although recall that our simulations are for
computational basis states and measurements and do not maximize the error over
preparations and measurements). For these simulations, we set $\delta_U$ so
that the $\Delta(Z)$ gate has an infidelity of $r[\Delta(Z)]$ and so that all
single-qubit gates have an infidelity of $r[\Delta(Z)]/10$ (regardless of
whether they are included in the ``easy'' or the ``hard'' set). For the bare
circuits (blue circles), each data point is the variational distance of a
randomly-chosen six-qubit circuit with a hundred alternating rounds of easy and
hard gates, each sampled uniformly from the sets of all possible easy and hard
gate rounds respectively. For the tailored circuits (red squares), each data
point is the variational distance between ${\rm Pr}(j|\mc{C}_{\rm id})$ and the
probability distribution ${\rm Pr}(j|\mc{C}_{\rm noisy})$ averaged over $10^3$
randomizations of the bare circuit obtained by replacing the easy gates
with (compiled) dressed gates as in \cref{eq:dressed}.
Second, \cref{fig:vary_gates} shows that the typical error for both the bare
and tailored circuits grows approximately linearly with the length of the
circuit. This suggests that, for typical circuits, the primary reason that the
total error is reduced by our technique is not because it prevents the
worst-case quadratic accumulation of fidelity with the circuit length (although
it does achieve this). Rather, the total error is reduced because the
contribution from each error location is reduced, where the number of error
locations grows linearly with the circuit size. For these simulations, we set
$\delta_U$ so that the $\Delta(Z)$ gate has an infidelity of $10^{-3}$ and the
easy gates have infidelities of $10^{-5}$. For the bare circuits (blue
circles), each data point is the variational distance of a randomly-chosen
six-qubit circuit as above with $K$ alternating rounds of easy and hard gates,
where $K$ varies from five to a hundred. The tailored circuits (red squares)
again give the variational distance between the ideal distribution and the
probability distribution averaged over $10^3$ randomizations of the bare
circuit.
\begin{figure}
\caption{Semilog plots of the error $\tau_{\rm noise}
\label{fig:vary_noise}
\end{figure}
\begin{figure}
\caption{Plots of the error $\tau_{\rm noise}
\label{fig:vary_gates}
\end{figure}
\section{Discussion}
We have shown that arbitrary Markovian noise processes can be reduced to
effective Pauli processes by compiling different sets of uniformly random gates
into sequential operations. This randomized compiling technique can reduce the
worst-case error rate by orders of magnitude and enables threshold estimates
for general noise models to be obtained directly from threshold estimates for
Pauli noise. Physical implementations can then be evaluated by directly
comparing these threshold estimates to the average error rate $r$ estimated via
efficient experimental techniques, such as randomized benchmarking, to determine
whether the experimental implementation has reached the fault-tolerant regime.
More specifically, the average error rate $r$ is that of the tailored channel
for the composite noise on a round of easy and hard gates and this can be
directly estimated using interleaved randomized benchmarking with the relevant
choice of group~\cite{Magesan2012}.
Our technique can be applied directly to gate sets that are universal for
quantum computation, including all elements in a large class of fault-tolerant
proposals. Moreover, our technique only requires \textit{local} gates to
tailor general noise on multi-qubit gates into Pauli noise. Our numerical
simulations in \cref{fig:vary_noise,fig:vary_gates} demonstrate that our
technique can reduce worst-case errors by orders of magnitude. Furthermore, our
scheme will generally
produce an even greater effect as fault-tolerant protocols are scaled up, since
fault-tolerant protocols are designed to suppress errors, for
example, $\epsilon\to \epsilon^k$ for some scale factor $k$ (e.g., number of
levels of concatenation), so any reduction at the physical level is improved exponentially with $k$.
A particularly significant open problem is the robustness of our technique to
noise that remains non-Markovian on a time-scale longer than a typical gate
time. Non-Markovian noise can be mitigated by techniques such as
randomized dynamic decoupling~\cite{Viola2005,Santos2006}, which correspond to
applying random sequences of Pauli operators to echo out non-Markovian
contributions. Due to the random gates compiled in at each time step, we expect
that our technique may also suppress non-Markovian noise in a similar manner.
\section{Methods}
We now prove \cref{thm:gate_dependent,thm:local}.
\begin{thm}\label{thm:gate_dependent_general}
Let $\mc{C}_{\rm GD}$ and $\mc{C}_{\rm GI}$ be tailored circuits with
gate-dependent and gate-independent noise on the easy gates respectively. Then
\begin{align}
\|\mc{C}_{\rm GD}-\mc{C}_{\rm GI}\|_\diamond
\leq \sum_{k=1}^K \md{E}_{\vec{T}_1,\ldots,\vec{T}_K}
\|\mc{E}(\vec{\tilde{C}}_k)-\mc{E}_k^{\mbf{T}}\|_\diamond.
\end{align}
When $\mbf{T}$ is a group normalized by $\mbf{C}$, this can be improved to
\begin{align}
\|\mc{C}_{\rm GD}-\mc{C}_{\rm GI}\|_\diamond
&\leq \sum_{k=2}^K 2\md{E}_{\vec{\tilde{C}}_k}\| \mc{E}(\vec{\tilde{C}}_k) -
\mc{E}_k^{\mbf{T}}\|_\diamond\,\epsilon\left[\mc{E}(G_{k-1})\mc{E}_{k-1}^{\mbf{T}}\right]
\notag\\
&\quad + \md{E}_{\vec{\tilde{C}}_1}\|\mc{E}(\vec{\tilde{C}}_1) -
\mc{E}_1^{\mbf{T}}\|_\diamond.
\end{align}
\end{thm}
\begin{proof}
Let
\begin{align}
\mc{A}_k &=G_k \mc{E}(G_k)\mc{E}(\vec{\tilde{C}}_k)\vec{\tilde{C}}_k
\notag\\
\mc{B}_k &= G_k \mc{E}(G_k)\mc{E}_k^{\mbf{T}} \vec{\tilde{C}}_k,
\end{align}
where $\mc{A}_k$ and $\mc{B}_k$ implicitly depend on the choice of twirling
gates. Then the tailored circuits under gate-dependent and gate-independent
noise are
\begin{align}
\mc{C}_{\rm GD} &= \md{E}_a\mc{A}_{K:1}
\notag\\
\mc{C}_{\rm GI} &= \md{E}_a\mc{B}_{K:1},
\end{align}
respectively, where $\mc{X}_{a:b} = \mc{X}_a \ldots \mc{X}_b$ (note that this
product is non-commutative) with $\mc{A}_{K:K+1} = \mc{B}_{0:1} = \mc{I}$ and
the expectation is over all $\vec{T}_a$ for $a=1,\ldots,K$. Then by a
straightforward induction argument (for example, for $K=2$ one has
$\mc{A}_{2:1}-\mc{B}_{2:1} = \mc{A}_2(\mc{A}_1-\mc{B}_1)+(\mc{A}_2-\mc{B}_2)\mc{B}_1$),
\begin{align}\label{eq:telescope}
\mc{A}_{K:1} - \mc{B}_{K:1}
&= \sum_{k=1}^K \mc{A}_{K:k+1} (\mc{A}_k -
\mc{B}_k) \mc{B}_{k-1:1}
\end{align}
for any fixed choice of the twirling gates. By the triangle inequality,
\begin{align}\label{eq:simple}
\|\mc{C}_{\rm GD} - \mc{C}_{\rm GI} \|_\diamond
&= \|\md{E}_a \sum_{k=1}^K \mc{A}_{K:k+1} (\mc{A}_k - \mc{B}_k) \mc{B}_{k-1:1}
\|_\diamond \notag\\
&\leq \md{E}_a \sum_{k=1}^K \| \mc{A}_{K:k+1} (\mc{A}_k - \mc{B}_k)
\mc{B}_{k-1:1} \|_\diamond \notag\\
&\leq \md{E}_a \sum_{k=1}^K
\|\mc{E}(\vec{\tilde{C}}_k)-\mc{E}_k^{\mbf{T}}\|_\diamond,
\end{align}
where the second inequality follows from the submultiplicativity
\begin{align}
\|\mc{AB}\|_\diamond &\leq \|\mc{A}\|_\diamond \|\mc{B}\|_\diamond
\end{align}
of the diamond norm and the normalization $\|\mc{A}\|_\diamond =1$ which holds
for all quantum channels $\mc{A}$.
We can substantially improve the above bound by evaluating some of the averages
over twirling gates before applying the triangle inequality. In particular,
leaving the average over $\vec{T}_{k-1}$ inside the diamond norm in
\cref{eq:simple} for every term except $k=1$ gives
\begin{align}
\|\mc{C}_{\rm GD} - \mc{C}_{\rm GI} \|_\diamond
&\leq \sum_{k=2}^K \md{E}_{a\neq k-1} \| \md{E}_{k-1} \delta_k\gamma_k
\|_\diamond \notag\\
&\quad + \md{E}_a \|\mc{E}(\vec{\tilde{C}}_1)-\mc{E}_1^{\mbf{T}}\|_\diamond,
\end{align}
where
\begin{align}
\delta_k &= \mc{E}(\vec{\tilde{C}}_k) - \mc{E}_k^{\mbf{T}} \notag\\
\gamma_k &= \vec{\tilde{C}}_k G_{k-1} \mc{E}(G_{k-1})\mc{E}_{k-1}^{\mbf{T}}
\vec{T}_{k-1},
\end{align}
and $\delta_k\gamma_k$ is the only factor of $\mc{A}_{K:k+1} (\mc{A}_k -
\mc{B}_k) \mc{B}_{k-1:1}$ that depends on $\vec{T}_{k-1}$. Substituting
$\mc{E}(G_{k-1})\mc{E}_{k-1}^{\mbf{T}} = (\mc{E}(G_{k-1})\mc{E}_{k-1}^{\mbf{T}}
- \mc{I}) + \mc{I}$ in $\gamma_k$ gives
\begin{align}
\md{E}_{k-1} \delta_k\gamma_k
&= \md{E}_{k-1} \delta_k \vec{\tilde{C}}_k G_{k-1}
\left[\mc{E}(G_{k-1})\mc{E}_{k-1}^{\mbf{T}} - \mc{I}\right] \vec{T}_{k-1}
\notag\\
&\quad + \md{E}_{k-1} \delta_k \vec{T}_k\vec{C}_k G_{k-1} ,
\end{align}
where the only factor in the second term that depends on $\vec{T}_{k-1}$ is
$\delta_k$. When $\mbf{T}$ is a group normalized by $\mbf{C}$,
\begin{align}
\md{E}_{k-1} \delta_k &= \md{E}_{k-1}
\mc{E}(\vec{\tilde{C}}_k) - \mc{E}_k^{\mbf{T}} \notag\\
&= \md{E}_{k-1}
\mc{E}(\vec{C}_k[\vec{C}_k^{\dagger}\vec{T}_k\vec{C}_k]\vec{T}_{k-1}^c) -
\mc{E}_k^{\mbf{T}} \notag\\
&= \md{E}_{\vec{T}'}
\mc{E}(\vec{C}_k\vec{T}') - \mc{E}_k^{\mbf{T}} \notag\\
&= 0
\end{align}
for any fixed value of $\vec{T}_k$, using the fact that $\{hg:g\in\mbf{G}\} =
\mbf{G}$ for any group $\mbf{G}$ and $h\in\mbf{G}$ and that $\md{E}_{\vec{T}'}
\mc{E}(\vec{C}_k\vec{T}')$ is independent of $\vec{T}_k$. Therefore
\begin{align}\label{eq:guts}
\|\mc{C}_{\rm GD} - \mc{C}_{\rm GI}\|_\diamond
&= \|\md{E}_j (\mc{A}_{K:1} - \mc{B}_{K:1}) \|_\diamond \notag\\
&\leq \|\delta_1\|_\diamond + \sum_{k=2}^K \md{E}_{j\neq k-1}\|
\md{E}_{k-1}\delta_k \gamma_k \|_\diamond \notag\\
&\leq \|\delta_1\|_\diamond + \sum_{k=2}^K \md{E}_{j\neq k-1}\|
\md{E}_{\vec{T}_{k-1}}\delta_k \vec{\tilde{C}}_k \notag\\
&\quad\times G_{k-1}\left[\mc{E}(G_{k-1})\mc{E}_{k-1}^{\mbf{T}} -
\mc{I}\right]
\vec{T}_{k-1}\|_\diamond \notag\\
&\leq \sum_{k=2}^K \md{E}_j \| \delta_k\|_\diamond
\|\mc{E}(G_{k-1})\mc{E}_{k-1}^{\mbf{T}} - \mc{I} \|_\diamond \notag\\
&\quad + \|\delta_1\|_\diamond,
\end{align}
where we have had to split the sum over $k$ as $\vec{T}_0$ is fixed to the
identity.
\end{proof}
\begin{thm}
For arbitrary noise,
\begin{align}
\md{E}_{\vec{\tilde{C}}_k}\| \mc{E}(\vec{\tilde{C}}_k) -
\mc{E}_k^{\mbf{T}} \|_\diamond
\leq 2\epsilon(\mc{E}_k^{\mbf{T}}) + 2\sqrt{\md{E}_{\vec{\tilde{C}}_k}
\epsilon[\mc{E}(\vec{\tilde{C}}_k)]^2}.
\end{align}
For $n$-qubit circuits with local noise on the easy gates,
\begin{align}
\md{E}_{\vec{\tilde{C}}_k}\| \mc{E}(\vec{\tilde{C}}_k) -
\mc{E}_k^{\mbf{T}} \|_\diamond &\leq
\sum_{j=1}^n 4\sqrt{6r\Bigl[\mc{E}_{j,k}^{\mbf{T}}\Bigr]}
\end{align}
for $k=1,\ldots,K$, where $\mc{E}_{j,k}^{\mbf{T}}
=\md{E}_{\tilde{C}_{j,k}}\mc{E}_j (\tilde{C}_{j,k})$ is the local noise on the
$j$th qubit averaged over the dressed gates in the $k$th cycle.
\end{thm}
\begin{proof}
First note that, by the triangle inequality,
\begin{align}\label{eq:bad_bound}
\md{E}_{\vec{\tilde{C}}_k}\| \mc{E}(\vec{\tilde{C}}_k) -
\mc{E}_k^{\mbf{T}} \|_\diamond
&= \md{E}_{\vec{\tilde{C}}_k}\| \mc{E}(\vec{\tilde{C}}_k) - \mc{I}
+ \mc{I} - \mc{E}_k^{\mbf{T}} \|_\diamond \notag \\
&\leq \md{E}_{\vec{\tilde{C}}_k}\| \mc{E}(\vec{\tilde{C}}_k) - \mc{I}
\|_\diamond + \|\mc{I} - \mc{E}_k^{\mbf{T}} \|_\diamond \notag \\
&\leq \md{E}_{\vec{\tilde{C}}_k}2\epsilon[\mc{E}(\vec{\tilde{C}}_k)] +
2\epsilon(\mc{E}_k^{\mbf{T}}).
\end{align}
By the Cauchy-Schwarz inequality,
\begin{align}\label{eq:CS_expectation}
\left(\md{E}_{\vec{\tilde{C}}_k}\epsilon[\mc{E}(\vec{\tilde{C}}_k)] \right)^2
&= \left(\sum_{\vec{\tilde{C}}_k} |\# \vec{\tilde{C}}_k|^{-1}
\epsilon[\mc{E}(\vec{\tilde{C}}_k)] \right)^2\notag\\
&\leq \left(\sum_{\vec{\tilde{C}}_k} |\# \vec{\tilde{C}}_k|^{-2} \right)
\left(\sum_{\vec{\tilde{C}}_k}\epsilon[\mc{E}(\vec{\tilde{C}}_k)]^2
\right)\notag\\
&\leq |\# \vec{\tilde{C}}_k|^{-1}
\left(\sum_{\vec{\tilde{C}}_k}\epsilon[\mc{E}(\vec{\tilde{C}}_k)]^2
\right)\notag\\
&= \md{E}_{\vec{\tilde{C}}_k} \epsilon[\mc{E}(\vec{\tilde{C}}_k)]^2 ,
\end{align}
where $\#\vec{\tilde{C}}_k$ is the number of different dressed gates in the
$k$th round.
For local noise, that is, noise of the form $\mc{E}_1\otimes \cdots \otimes \mc{E}_n$
where $\mc{E}_j$ is the noise on the $j$th qubit,
\begin{align}
\epsilon[\mc{E}(\vec{\tilde{C}}_k)]&= \tfrac{1}{2}
\| \bigotimes_{j=1}^n\mc{E}_j(\tilde{C}_{j,k}) - \mc{I} \|_\diamond \notag\\
&\leq \sum_{j=1}^n \tfrac{1}{2}
\| \mc{E}_j(\tilde{C}_{j,k}) - \mc{I} \|_\diamond \notag\\
&\leq \sum_{j=1}^n \epsilon[\mc{E}(\tilde{C}_{j,k})],
\end{align}
where we have used the analog of \cref{eq:telescope} for the tensor product and
\begin{align}\label{eq:diamond_tensor}
\|A\otimes B\|_\diamond &\leq \|A\|_\diamond\|B\|_\diamond,
\end{align}
which holds for all $A$ and $B$ due to the submultiplicativity of the diamond
norm, and the equality $\|A\otimes I\|_\diamond = \|A\|_\diamond$. Similarly,
\begin{align}
\epsilon(\mc{E}_k^{\mbf{T}}) \leq \sum_{j=1}^n \epsilon(\mc{E}_{j,k}^{\mbf{T}})
\end{align}
where $\mc{E}_{j,k}^{\mbf{T}} = \md{E}_{T_{j,k-1},T_{j,k}}
\mc{E}(\tilde{C}_{j,k})$. We then have
\begin{align}
\md{E}_{\vec{\tilde{C}}_k}\epsilon[\mc{E}(\vec{\tilde{C}}_k)]
&\leq \sum_{j=1}^n \md{E}_{\tilde{C}_{j,k}}\epsilon[\mc{E}(\tilde{C}_{j,k})]
\notag\\
&\leq \sum_{j=1}^n
\sqrt{\md{E}_{\tilde{C}_{j,k}}\epsilon[\mc{E}(\tilde{C}_{j,k})]^2}
\end{align}
where the second inequality is due to the Cauchy-Schwarz inequality as in
\cref{eq:CS_expectation}. Returning to \cref{eq:bad_bound}, we have
\begin{align}
\md{E}_{\vec{\tilde{C}}_k}\| \mc{E}(\vec{\tilde{C}}_k) -
\mc{E}_k^{\mbf{T}} \|_\diamond
&\leq \sum_{j=1}^n 2\epsilon(\mc{E}_{j,k}^{\mbf{T}}) +
2\sqrt{\md{E}_{\tilde{C}_{j,k}}\epsilon[\mc{E}(\tilde{C}_{j,k})]^2} \notag\\
&\leq \sum_{j=1}^n 2\sqrt{6r(\mc{E}_{j,k}^{\mbf{T}})} +
2\sqrt{\md{E}_{\tilde{C}_{j,k}}6r[\mc{E}(\tilde{C}_{j,k})]} \notag\\
&\leq \sum_{j=1}^n 4\sqrt{6r(\mc{E}_{j,k}^{\mbf{T}})}
\end{align}
for local noise, where the second inequality follows from
\cref{eq:fidelity_to_worst} with $d=2$ and the third from the linearity of the
infidelity.
\end{proof}
\textit{Acknowledgments}---The authors acknowledge helpful discussions with
Arnaud Carignan-Dugas, David Cory, Steve Flammia, Daniel Gottesman, Tomas
Jochym-O'Connor and Raymond Laflamme. This research was supported by the U.S.
Army Research Office through grant W911NF-14-1-0103, CIFAR, the Government of
Ontario, and the Government of Canada through NSERC and Industry Canada.
\end{document}
\begin{document}
\frontmatter
\begin{titlepage}
\begin{center}
{\Large \textsc{ University of Warsaw} \\
\textsc{Faculty of Physics}\\
\textsc{Institute of Theoretical Physics}}
\vspace*{\stretch{0.67}}
{\bf \LARGE Geometry of Third-Order
Ordinary Differential Equations and Its Applications
\\ in General Relativity}
\vspace*{\stretch{1}}
{\LARGE Micha\l\ Godli\'nski}
\vspace*{\stretch{1}}
\includegraphics[width = .4\linewidth, height = .4\linewidth, angle = 0]{uwlogo.eps} \\
\vspace*{\stretch{1}}
{ \Large PhD thesis written \\ at
the Chair of Theory of Relativity and Gravitation \\
under supervision of \\
\medskip
\LARGE Dr. hab. Pawe\l\ Nurowski} \\
\vspace*{\stretch{2}}
{\Large \textsc{Warsaw 2008}}
\end{center}
\end{titlepage}
\setcounter{page}{2} \ifthenelse{\boolean{@twoside}}{
\thispagestyle{empty}
$\phantom{empty}$\\
}{}
\thispagestyle{empty}
\begin{center} \vspace*{\stretch{1}}
{\large \it I thank Dr. hab. Pawe\l\ Nurowski for his scientific supervision,
for the topic of this dissertation and for numerous suggestions,
without which it would never have been written.
\\
I thank my Family for their love and support. I especially
thank my Parents, my Sisters and Ma\l a R\k{a}czka. \\
I thank God.}
\vspace*{\stretch{2}}
\end{center}
\tableofcontents
\mainmatter
\chapter{Introduction}\label{ch.problem}
\noindent This thesis addresses the problems of equivalence
and geometry of third-order ordinary differential equations (ODEs),
which are stated as follows.
\begin{eqp} Given two real differential equations
\begin{equation}
y'''=F(x,y,y',y'')\label{e10}
\end{equation}
and \begin{equation}
y'''=\bar{F}(x,y,y',y'')\label{e20}
\end{equation} for a real function $y=y(x)$, establish whether or not there
exists a local transformation of variables of a suitable type that
transforms \eqref{e10} into \eqref{e20}. \end{eqp}
\begin{gp}
Determine geometric structures defined by a class of equations
$y'''=F(x,y,y',y'')$ equivalent under certain type of
transformations. Find relations between invariants of the ODEs and
invariants of the geometric structures.
\end{gp}
One may consider equivalence with respect to several types of
transformations; in this work we focus on the three best-known types:
contact, point and fibre-preserving transformations. The
fibre-preserving transformations are those which transform the
independent variable $x$ and the dependent variable $y$ in such a
way that the notion of the independent variable is retained, that
is the transformation of $x$ is a function of $x$ only:
\begin{equation}\label{e.fp}
x\mapsto\bar{x}=\chi(x),\quad\quad\quad\quad y\mapsto\bar{y}=\phi(x,y).
\end{equation} The transformation rules for the derivatives are already
uniquely defined by the above formulae. Let us define the total
derivative to be \begin{equation*}
\mathcal{D}=\partial_x+y'\partial_y+y''\partial_{y'}+y'''\partial_{y''}.
\end{equation*} Then
\begin{subequations}\label{e.tprol}
\begin{align}
y'&\mapsto\frac{{\rm d}\bar{y}}{{\rm d}\bar{x}}=\frac{\mathcal{D} \phi}{\mathcal{D} \chi}, \label{e.tprol1} \\
y''&\mapsto\frac{{\rm d}^2\bar{y}}{{\rm d}\bar{x}^2}= \frac{\mathcal{D}}{\mathcal{D}\chi}\left(\frac{\mathcal{D}\phi}{\mathcal{D} \chi}\right),\label{e.tprol2} \\
y'''&\mapsto \frac{{\rm d}^3\bar{y}}{{\rm d}\bar{x}^3}=\frac{\mathcal{D}}{\mathcal{D}\chi}\left(\frac{\mathcal{D}}{\mathcal{D}\chi}\left(\frac{\mathcal{D}\phi}{\mathcal{D} \chi}\right)\right).\label{e.tprol3}
\end{align}
\end{subequations}
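As a simple illustration of formulae \eqref{e.tprol} (an elementary example, not taken from
the original text), consider the fibre-preserving change of variables $\bar{x}=\chi(x)=x$,
$\bar{y}=\phi(x,y)=y^2$. Then $\mathcal{D}\chi=1$, $\mathcal{D}\phi=2yy'$, and
\begin{align*}
y'&\mapsto 2yy', & y''&\mapsto 2y'^2+2yy'', & y'''&\mapsto 6y'y''+2yy'''.
\end{align*}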
The point transformations of variables mix $x$ and $y$ in an
arbitrary way \begin{equation} \label{e.point}
x\mapsto\bar{x}=\chi(x,y),\quad\quad\quad\quad y\mapsto\bar{y}=\phi(x,y),
\end{equation} with the derivatives transforming as in \eqref{e.tprol}.
The contact transformations are more general still: not only do they
transform the independent and the dependent variables, but also the
first derivative, \begin{equation}\label{e.cont}
\begin{aligned}
x&\mapsto\bar{x}=\chi(x,y,y'),\notag\\
y&\mapsto\bar{y}=\phi(x,y,y'),\\
y'&\mapsto\frac{{\rm d} \bar{y}}{{\rm d}\bar{x}}=\psi(x,y,y').\notag
\end{aligned}
\end{equation} However, the functions $\chi$, $\phi$ and $\psi$ are not
arbitrary here but are subject to \eqref{e.tprol1}, which now yields
two additional constraints \begin{equation*}
\psi=\frac{\mathcal{D}\phi}{\mathcal{D}\chi}\quad \iff \quad
\begin{aligned} &\psi\chi_{y'}=\phi_{y'}, \\
&\psi(\chi_x+y'\chi_y)=\phi_x+y'\phi_y, \end{aligned}
\end{equation*} guaranteeing that ${\rm d}\bar{y}/{\rm d}\bar{x}$ really transforms
like the first derivative. With these conditions fulfilled, the second and
third derivatives transform according to \eqref{e.tprol2}--\eqref{e.tprol3}.
Of course, fibre-preserving transformations are
a subclass of point transformations, just as point transformations form a
subclass of contact ones.
We always assume in this work that ODEs are defined locally by a
smooth real function $F$ and are considered away from
singularities. The transformations are always assumed to be local
diffeomorphisms.
\begin{example} In order to illustrate the equivalence problem let us consider
whether or not the equations
$$ y'''=0 \quad\quad\text{and}\quad\quad y'''=3\frac{y''^2}{y'}$$
are equivalent. As one can easily check they are contact
equivalent, since the transformation \begin{equation*}
\begin{aligned}
x&\mapsto-2y',\\
y&\mapsto2xy'^2-2yy',\\
y'&\mapsto-2xy'+y
\end{aligned} \end{equation*} applied to $y'''=0$ brings
it to the other equation. However, they are not fibre-preserving
equivalent, since the quantity
$$I(x,y,y',y'')=\frac{\partial^2}{\partial y''^{\,2}}F(x,y,y',y'')$$ vanishes for every
equation which is fibre-preserving equivalent to $y'''=0$ but does
not vanish for $y'''=3\frac{y''^2}{y'}$. In order to see this we
apply a fibre-preserving transformation of general form
\eqref{e.point} to $y'''=0$ and check that $I=0$ for the resulting
equation.
\end{example}
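For the second equation in the example this is immediate (a direct check): with
$F(x,y,y',y'')=3\,y''^2/y'$ we have
$$I=\frac{\partial^2 F}{\partial y''^{\,2}}=\frac{6}{y'}\neq 0,$$
whereas, as noted above, $I$ vanishes identically for every equation obtained from $y'''=0$
by a fibre-preserving transformation.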
The above example shows the importance of relative invariants in the
equivalence problem. A relative invariant is a function of $F$ and
its derivatives such that if it vanishes for an equation
$y'''=F(x,y,y',y'')$ then it also vanishes for every equation
equivalent to it, thereby each relative invariant provides us with
a necessary condition for equivalence. Moreover, with the help of
an adequate number of relative invariants we can also formulate
sufficient conditions for equivalence of ODEs, although this
construction is complicated and can hardly ever be carried out to
the very end because of difficult calculations. The second
problem, the problem of geometry, is even more fundamental and in
fact it contains the problem of equivalence, for once a geometry
associated with ODEs is constructed and a relationship between
local invariants of the geometry and of ODEs is found then one can
study the equivalence of ODEs via objects of the associated
geometry.
A pioneering work on geometry of ODEs of arbitrary order is Karl
W\"unschmann's PhD thesis \cite{Wun} written under supervision of
F. Engel in 1905. In this paper K. W\"unschmann observed that
solutions of an $n$th-order ODE
$y^{(n)}=F(x,y,y',\ldots,y^{(n-1)})$ may be considered as both
curves $y=y(x,c_0,c_1,\ldots,c_{n-1})$ in the $xy$ space and
points $c=(c_0,\ldots,c_{n-1})$ in the solution space $\mathbb{R}^n$
parameterized by values of the constants of integration $c_i$. He
defined a relation of $k$th-order contact between infinitesimally
close solutions considered as curves; two solutions $y(x)$ and
$y(x)+{\rm d} y(x)$ corresponding to $c$ and $c+{\rm d} c$ have the
$k$th-order contact if their $k$th jets coincide at some point
$(x_0, y_0)$. W\"unschmann's main question was how the property of
having $(n-2)$nd-order contact for $n=3,4$ and $5$ might be described in
terms of the solution space. In particular he examined third-order
ODEs and showed that there is a distinguished class of ODEs
satisfying a certain condition on the function $F$, which we call
the W\"unschmann condition. For a third-order ODE in this class,
the condition of having first order contact is described by a
second order Monge equation for ${\rm d} c$. This Monge equation is
nothing but the condition that the vector defined by two
infinitesimally close points $c$ and $c+{\rm d} c$ is null with
respect to a Lorentzian conformal metric on the solution space.
The last observation, although not contained in W\"unschmann's
work, follows immediately from his reasoning and was later made by
S.-S. Chern \cite{Chern}, who cited W\"unschmann's thesis.
The main contributions to the point and contact
geometry of third-order ODEs were made by E. Cartan
and S.-S. Chern, respectively, in their classical papers \cite{Car1, Car2, Car3}
and \cite{Chern}. We discuss their approach and results in section
\ref{s.i.sum} of the Introduction; here we only mention that E. Cartan
\cite{Car2} proved that every third-order ODE modulo point
transformations and satisfying two differential conditions on the
function $F$, one of them being the W\"unschmann condition, has a
three-dimensional Lorentzian Einstein-Weyl geometry on its
solution space. E. Cartan also showed how to construct invariants
of this Weyl geometry from relative point invariants of the
underlying ODE. In the same vein S.-S. Chern constructed a
three-dimensional Lorentzian conformal geometry for third-order
ODEs considered modulo contact transformations and satisfying the
W\"unschmann condition. In both these cases the conformal metric
is precisely the metric appearing implicitly in K. W\"unschmann's
thesis. Later H. Sato and A. Yoshikawa \cite{Sat}, applying N.
Tanaka's theory \cite{Tan}, constructed a Cartan normal connection
for arbitrary third-order ODEs (not only those of the W\"unschmann type)
and showed how its curvature is expressed by the contact relative
invariants.
Geometry of third-order ODEs appears in General Relativity and the
theory of integrable systems. E.T. Newman et al
\cite{nsf1,nsf2,nsf3} devised the Null Surface Formulation (NSF),
a description of General Relativity in terms of families of null
hypersurface generated by a Lorentzian metric. The
$2+1$-dimensional version of this formalism \cite{nsf3d} is
equivalent to Chern's conformal geometry for third-order ODEs
\cite{New1, New2, New3}, which was noticed by P. Tod \cite{Tod}
for the first time. We encapsulate results of NSF in section
\ref{s.i.nsf}. Three-dimensional Einstein-Weyl geometry was
studied mainly from the perspective of the theory of twistors and
integrable systems by N. Hitchin \cite{Hitch}, R. Ward
\cite{Ward}, C. LeBrun \cite{Leb} and P. Tod, M. Dunajski et al
\cite{Jt,Dun1,Dun2}; for discussion of the link between the
Einstein-Weyl spaces and third-order ODEs see \cite{Nur1} and
\cite{Tod}.
P. Nurowski, following the ideas of E.
Cartan, proposed a programme of systematic study of geometries
related to differential equations, including second- and
third-order ODEs. In this programme,
\cite{New3,God3,Godode5,Nur1,Nur2,Nur3}, both new and already
known geometries associated with differential equations are
supposed to be constructed by the Cartan equivalence method and
are to be characterized in the language of Cartan connections
associated with them. In particular in \cite{Nur1} P. Nurowski
provided new examples of geometries associated with ordinary
differential equations including a conformal geometry with special
holonomy $G_2$ from ODEs of the Monge type. Partial results on
geometries of third-order ODEs were given in \cite{Nur1, Nur2,
New3, God3} but the full analysis of these geometries has not been
published so far and this thesis, which is a part of the
programme, aims to fill this gap.
The geometry of third-order ODEs is part of the broader issue of the
geometry of differential equations in general. Regarding ODEs of
order two, we owe classical results including construction of
point invariants to S. Lie \cite{Lie} and M. Tresse \cite{Tre}. In
particular E. Cartan \cite{Car5} constructed a two-dimensional
projective differential geometry on the solution spaces of some
second-order ODEs. This geometry was further studied in \cite{NN}
and \cite{Nur3}, the latter paper pursues the analogy between
geometry of three-dimensional CR structures and second-order ODEs
and provides a construction of counterparts of the Fefferman
metrics for the ODEs. Classification of second-order ODEs
possessing Lie groups of fibre-preserving symmetries was done by
L. Hsu and N. Kamran \cite{Hsu}. Geometry on the solution space of
certain fourth-order ODEs (satisfying two differential conditions),
which is given by the four-dimensional irreducible representation
of $GL(2,\mathbb{R})$ and has exotic $GL(2,\mathbb{R})$ holonomy was
discovered and studied by R. Bryant \cite{Bry}, see also
\cite{Nur4}. The $GL(2,\mathbb{R})$ geometry of fifth-order ODEs has
been recently studied by M. Godli\'nski and P. Nurowski
\cite{Godode5}.
More general still are the problems of the existence and properties of
geometries on solution spaces of arbitrary ODEs. The problem of
existence was solved by B. Doubrov \cite{Dubgl}, who proved that
an $n$th-order ODE, $n\geq 3$, modulo contact transformations, has
a geometry based on the irreducible
$n$-dimensional representation of $GL(2,\mathbb{R})$ provided that it
satisfies $n-2$ scalar differential conditions. An implicit
method of constructing these conditions was given in
\cite{Dubwil}. Properties of the $GL(2,\mathbb{R})$ geometries of ODEs
are still an open problem; they were studied in \cite{Godode5},
where the Doubrov conditions were interpreted as higher order
counterparts of the W\"unschmann condition, and by M. Dunajski and
P. Tod \cite{Dun3}.
Almost all the above papers deal with geometries on solution
spaces but one can also consider other geometries, including those
defined on various jet spaces. The most general result on such
geometries \cite{Dub2} comes from T. Morimoto's nilpotent geometry
\cite{Mor1,Mor2}. It states that with a system of ODEs there are
associated a filtration on a suitable jet space together with a
canonical Cartan connection.
\section{Geometry of third-order ODEs --- the present status} \label{s.i.sum}
\noindent We recapitulate the classical results on geometry of
third-order ODEs. We do this mostly in the original spirit of E.
Cartan, emphasizing the role of systems of one-forms and Cartan
connections. The version of Cartan's equivalence method
\cite{Car4} we employ below is explained in books by R. Gardner
\cite{Gar} and P. Olver \cite{Olv2}. Some of its aspects are also
discussed by S. Sternberg \cite{Ste} and S. Kobayashi \cite{Kob1}.
\subsection*{E. Cartan's and S.-S. Chern's approach to ODEs}
They began with the space $\mathcal{J}^2$ of second jets of curves in
$\mathbb{R}^2$ (see \cite{Olv1} and \cite{Olv2} for an extensive
description of jet spaces) with coordinate system $(x,y,p,q)$,
where $p$ and $q$ denote the first and second derivatives $y'$ and
$y''$ of a curve $x\mapsto(x,y(x))$ in $\mathbb{R}^2$, so that this
curve lifts to a curve $x\mapsto(x,y(x),y'(x),y''(x))$ in $\mathcal{J}^2$.
Any solution $y=f(x)$ of $y'''=F(x,y,y',y'')$ is uniquely defined
by a choice of $f(x_0)$, $f'(x_0)$ and $f''(x_0)$ at some $x_0$.
Since that choice is equivalent to a choice of a point in $\mathcal{J}^2$,
there passes exactly one solution $(x,f(x),f'(x),f''(x))$ through
any point of $\mathcal{J}^2$. Therefore the solutions form a (local)
congruence on $\mathcal{J}^2$, which can be described by its annihilating
simple ideal. Let us choose a coframe $(\omega^i)$ on $\mathcal{J}^2$:
\begin{equation}\label{e.omega}\begin{aligned}
\omega^1&={\rm d} y -p\,{\rm d} x, \\
\omega^2&={\rm d} p- q\,{\rm d} x, \\
\omega^3&={\rm d} q- F(x,y,p,q)\,{\rm d} x, \\
\omega^4&={\rm d} x.
\end{aligned}
\end{equation} Each solution $y=f(x)$ is fully described by the two
conditions: the forms $\omega^1$, $\omega^2$, $\omega^3$ vanish on the
curve $t\mapsto(t,f(t),f'(t),f''(t))$ and, since this defines a
solution modulo transformations of $x$, $\omega^4={\rm d} t$ on this
curve.
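As a simple illustration (ours, not part of the original argument, and assuming
Python with \texttt{sympy}), one can verify that the lift of a solution of the
trivial equation $y'''=0$ indeed annihilates $\omega^1,\omega^2,\omega^3$ and
satisfies $\omega^4={\rm d} t$:
\begin{verbatim}
import sympy as sp

t, c0, c1, c2 = sp.symbols('t c0 c1 c2')

# lift of the solution y = c2 x^2 + 2 c1 x + c0 of y''' = 0 to J^2
x = t
y = c2*t**2 + 2*c1*t + c0
p = sp.diff(y, t)
q = sp.diff(y, t, 2)
F = sp.S(0)                               # right-hand side of y''' = F

# dt-coefficients of the pulled-back coframe (e.omega)
omega1 = sp.diff(y, t) - p*sp.diff(x, t)
omega2 = sp.diff(p, t) - q*sp.diff(x, t)
omega3 = sp.diff(q, t) - F*sp.diff(x, t)
omega4 = sp.diff(x, t)

print(omega1, omega2, omega3, omega4)     # 0 0 0 1
\end{verbatim}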
Suppose now that equation \eqref{e10} undergoes a contact, point
or fibre-preserving transformation.\footnote{Although E. Cartan and
S.-S. Chern did not examine fibre-preserving transformations, we
treat them on an equal footing with the others for future
reference.} Then the forms \eqref{e.omega} transform by \begin{equation}\begin{aligned}
\omega^1&\mapsto \bar{\omega}^1=u_1\omega^1, \\
\omega^2&\mapsto \bar{\omega}^2=u_2\omega^1+u_3\omega^2, \\
\omega^3&\mapsto \bar{\omega}^3=u_4\omega^1+u_5\omega^2+u_6\omega^3,\\
\omega^4&\mapsto \bar{\omega}^4=u_8\omega^1+u_9\omega^2+u_7\omega^4,
\end{aligned}\label{e.trans_kor} \end{equation} with some functions $u_1,\ldots,u_9$
defined on $\mathcal{J}^2$ and determined by the particular choice of
transformation, for instance \begin{eqnarray*}
&u_1=\phi_y-\psi\chi_y,\\
&u_7=\mathcal{D}\chi,\quad u_8=\chi_y, \quad u_9= \chi_p.
\end{eqnarray*}
In particular, $u_9=0$ in the point case and $u_8=u_9=0$ in the
fibre-preserving case. Since the transformations are
non-degenerate, the condition $u_1u_3u_6u_7\neq 0$ is always
satisfied and the transformations \eqref{e.trans_kor} form groups.
Thus a class of contact equivalent third-order ODEs is a local
$G$-structure\footnote{We work in the local trivializations of the
$G$-structure.} $G_c\times\mathcal{J}^2$, that is a local subbundle of the
bundle of linear frames on $\mathcal{J}^2$, defined by the property that
the coframe $(\omega^1,\omega^2,\omega^3,\omega^4)$ belongs to it
and the structural group is given by \begin{equation}\label{e.G_c}
G_c= \begin{pmatrix} u_1 & 0 & 0 & 0 \\ u_2 & u_3 & 0 & 0 \\
u_4 & u_5 & u_6& 0\\
u_8 & u_9 & 0 & u_7 \end{pmatrix}. \end{equation} In a like manner, for point and
fibre-preserving transformations we have $G$-structures with groups
$G_p\subset G_c$ given by $u_9=0$ and $G_f\subset G_p$ given by
$u_8=u_9=0$.
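That matrices of the shape \eqref{e.G_c} indeed form a group can be checked
directly. The following small computation is our own illustration (Python with
\texttt{sympy} assumed); it verifies that the shape, in particular the vanishing
$(4,3)$-entry, is preserved under matrix multiplication:
\begin{verbatim}
import sympy as sp

def G_c(u1, u2, u3, u4, u5, u6, u7, u8, u9):
    return sp.Matrix([[u1, 0,  0,  0],
                      [u2, u3, 0,  0],
                      [u4, u5, u6, 0],
                      [u8, u9, 0,  u7]])

a = G_c(*sp.symbols('a1:10'))
b = G_c(*sp.symbols('b1:10'))
P = sp.expand(a*b)

# the product is again of the form (e.G_c): the upper triangle and the (4,3)-entry vanish
print([P[i, j] for i, j in [(0,1), (0,2), (0,3), (1,2), (1,3), (2,3), (3,2)]])
\end{verbatim}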
Thereby the problem of equivalence has been formulated; two ODEs
are contact/point/fibre-preserving equivalent if and only if the
$G_c$/$G_p$/$G_f$-structures which correspond to them are
equivalent. In other words, there exists a
diffeomorphism which transforms one $G$-structure into the other.
On the bundles $G_c\times\mathcal{J}^2$, $G_p\times\mathcal{J}^2$ or $G_f\times\mathcal{J}^2$
there are four fixed, well-defined one-forms
$(\theta^1,\theta^2,\theta^3,\theta^4)$, the components of the
canonical $\mathbb{R}^4$-valued form $\theta$ existing on the frame
bundle of $\mathcal{J}^2$. Let us consider the contact case as an example.
Let $(x)$ denote $x,y,p,q$ and $(g)$ be coordinates in $G_c$ given
by \eqref{e.G_c}. Let us choose a coordinate system $(x,g)$ on
$G_c\times\mathcal{J}^2$ compatible with the local trivialization. Then
$\theta^i$ at the point $(x,g^{-1})$ read \begin{equation}
\label{e.i.canon}\begin{aligned}
\theta^1=&u_1\omega^1, \\
\theta^2=&u_2\omega^1+u_3\omega^2, \\
\theta^3=&u_4\omega^1+u_5\omega^2+u_6\omega^3,\\
\theta^4=&u_8\omega^1+u_9\omega^2+u_7\omega^4.
\end{aligned} \end{equation}
The idea of Cartan's method is the following. Starting from
$G_c\times\mathcal{J}^2$ (or $G_p\times\mathcal{J}^2$ or $G_f\times\mathcal{J}^2$) one
constructs a new principal bundle ${\mathcal P}=H\times\mathcal{J}^2$ (with a new
group $H$) equipped with one fixed coframe
$(\theta^i,\Omega_{\mu})$ built in some geometric way and such
that it encodes all the local invariant information about
$G_c\times\mathcal{J}^2$ (or $G_p\times\mathcal{J}^2$ or $G_f\times\mathcal{J}^2$
respectively) through its structural equations.
\begin{remark} As an illustration of this idea consider its very special
application to a metric structure of signature $(k,l)$ on a
manifold $\mathcal{M}$. The structure defines locally the bundle
$O(k,l)\times\mathcal{M}$, an $O(k,l)$-reduction of the frame bundle. If
$(\omega^i)$ is an orthonormal coframe on $\mathcal{M}$, then the forms
$(\theta^i)$ at a point $(x,g)$ are given by
$\theta^i=(g^{-1})^i_{~j}\omega^j$, $g\in O(k,l)$. Moreover,
associated to $(\omega^i)$ there are the Levi-Civita connection
one-forms $\Gamma^i_{~j}$, which when lifted to $O(k,l)\times\mathcal{M}$
define $\tfrac12 n(n-1)$ connection forms $\Omega^i_{~j}$ by
$$
\Omega^i_{~j}(x,g)=(g^{-1})^i_{~k}\Gamma^k_{~l}(x)\,g^l_{~j}+(g^{-1})^i_{~k}{\rm d}
g^k_{~j}.
$$
Clearly $(\theta^i,\Omega^j_{~k})$ is a coframe on
$O(k,l)\times\mathcal{M}$. It is the Cartan coframe for the metric
structure. The Cartan invariants are the Riemannian curvature $R$,
given by the equations
$$
\begin{aligned} &{\rm d}\theta^i+\Omega^i_{~j}\wedge\theta^j=0, \\
&{\rm d}\Omega^i_{~j}+\Omega^i_{~k}\wedge\Omega^k_{~j}=\tfrac12R^i_{jkl}
\,\theta^k\wedge\theta^l \end{aligned}
$$
and its consecutive covariant
derivatives. Here it is easy to construct the desired coframe
because there is a distinguished $\goth{o}(k,l)$-valued connection ---
the Levi-Civita connection. \end{remark}
In our case the situation is more complicated. There are no natural
candidates for the forms $\Omega_{\mu}$ since there is no
distinguished connection on the bundles $G_{\cdot}\times\mathcal{J}^2$. In
order to resolve such situations E. Cartan developed a method of
constructing the desired bundle ${\mathcal P}$ through a series of
reductions and prolongations of an initial structural group. This
procedure is too complicated to be included in the Introduction; we
postpone the full discussion to section \ref{s.c.proof} of chapter
\ref{ch.contact}, where we follow E. Cartan's and S.-S. Chern's
reasoning. Here we only formulate the main conclusions.
\subsection*{Point equivalence} Examining the
problem of point equivalence E. Cartan constructed for every ODE a
local seven-dimensional bundle ${\mathcal P}$ over $\mathcal{J}^2$, a reduction of
$G_p\times\mathcal{J}^2$, together with a coframe
$(\theta^1,\theta^2,\theta^3,\theta^4,\Omega_1,\Omega_2,\Omega_3)$
on ${\mathcal P}$. The coframe contains all the point invariant information
on the original ODE due to the following theorem, which is the
application of the equivalence method to third-order ODEs.
\begin{theorem}[E. Cartan]\label{th.i.clas} Equations $y'''=F(x,y,y',y'')$ and
$\cc{y}'''=\cc{F}(\cc{x},\cc{y},\cc{y}',\cc{y}'')$ with smooth $F$
and $\cc{F}$ are locally point equivalent if and only if there
exists a local diffeomorphism $\Phi\colon{\mathcal P}\to\cc{{\mathcal P}}$ which maps
the Cartan coframe $(\cc{\theta}^1,\ldots,\cc{\Omega}_3)$ of
$\cc{F}$ to the coframe $(\theta^1,\ldots,\Omega_3)$ of $F$,
$$
\Phi^*\cc{\theta}^i=\theta^i,\qquad
\Phi^*\cc{\Omega}_\mu=\Omega_\mu,\qquad i=1,\ldots,4,\quad
\mu=1,2,3.
$$
$\Phi$ projects to a point transformation
$(x,y)\mapsto(\cc{x}(x,y),\cc{y}(x,y))$ which maps $\cc{F}$ to
$F$.
\end{theorem}
The general idea of determining whether or not $\Phi$ exists is as
follows, see \cite{Olv2}. The structural equations for the Cartan
coframe are
\begin{align}
{\rm d}\theta^1 =&\Omega_1\wedge\theta^1+\theta^4\wedge\theta^2,\nonumber \\
{\rm d}\theta^2 =&\Omega_2\wedge\theta^1+\Omega_3\wedge\theta^2+\theta^4\wedge\theta^3,\nonumber \\
{\rm d}\theta^3=&\Omega_2\wedge\theta^2+(2\Omega_3-\Omega_1)\wedge\theta^3+\inv{A}{1}\theta^4\wedge\theta^1,\nonumber \\
{\rm d}\theta^4 =&(\Omega_1-\Omega_3)\wedge\theta^4+\inv{B}{1}\theta^2\wedge\theta^1
+\inv{B}{2}\theta^3\wedge\theta^1, \nonumber\\
{\rm d}\Omega_1 =&-\Omega_2\wedge\theta^4+(\inv{D}{1}+3\inv{B}{3})\theta^1\wedge\theta^2
+(3\inv{B}{4}-2\inv{B}{1})\theta^1\wedge\theta^3 \label{e.i.dtheta_7d} \\
&+(2\inv{C}{1}-\inv{A}{2})\theta^1\wedge\theta^4-\inv{B}{2}\theta^2\wedge\theta^3, \nonumber \\
{\rm d}\Omega_2=&(\Omega_3-\Omega_1)\wedge\Omega_2+\inv{D}{2}\theta^1\wedge\theta^2+(\inv{D}{1}+\inv{B}{3})\theta^1\wedge\theta^3
+\inv{A}{3}\theta^1\wedge\theta^4 \nonumber \\
&+(2\inv{B}{4}-\inv{B}{1})\theta^2\wedge\theta^3+\inv{C}{1}\theta^2\wedge\theta^4, \nonumber \\
{\rm d}\Omega_3=&(\inv{D}{1}+2\inv{B}{3})\theta^1\wedge\theta^2+2(\inv{B}{4}-\inv{B}{1})\theta^1\wedge\theta^3
+\inv{C}{1}\theta^1\wedge\theta^4+\inv{B}{2}\theta^2\wedge\theta^3, \nonumber
\end{align}
with some explicitly given functions
$\inv{A}{1},\inv{A}{2},\inv{A}{3},\inv{B}{1},\inv{B}{2},\inv{B}{3},
\inv{B}{4},\inv{C}{1},\inv{D}{1},\inv{D}{2}$ on ${\mathcal P}$. Let
$(X_1,\ldots,X_7)$ denote the frame dual to
$(\theta^1,\ldots,\Omega_3)$. Since pull-back commutes with
exterior differentiation, each of these functions is a point
relative invariant of the underlying ODE. Furthermore, the coframe
derivatives $X_1(\inv{A}{1}),\ldots,X_7(\inv{D}{2})$,
$X_1(X_1(\inv{A}{1})),\ldots,X_7(X_7(\inv{D}{2})),\ldots$ of
arbitrary order are also point relative invariants. We calculate
all relevant coframe derivatives for $F$ and $\cc{F}$ up to some
finite order $n$ and gather them into respective functions
${\bf T}\colon{\mathcal P}\to\mathbb{R}^N$ and $\cc{{\bf T}}\colon\cc{{\mathcal P}}\to\mathbb{R}^N$ with
the same target space of dimension $N$ equal to the number of the
invariants. Finally, we examine whether or not the graphs of ${\bf T}$
and $\cc{{\bf T}}$ overlap as manifolds in $\mathbb{R}^N$. If they overlap,
then the sought diffeomorphism $\Phi$ exists between certain
nonempty open sets $U\subset{\bf T}^{-1}(\mathcal{O})$ and
$\cc{U}\subset\cc{{\bf T}}^{-1}(\mathcal{O})$, where $\mathcal{O}$ is the overlap. A
detailed explanation of this procedure is given at the beginning
of chapter \ref{ch.class}; at this stage we only need to know that
the coframe solves the point equivalence problem for ODEs. What
links this problem with the domain of differential geometry is the
fact that the coframe is a geometric object
--- a Cartan connection
--- on ${\mathcal P}\to\mathcal{J}^2$.
In order to introduce the notion of a Cartan connection we first
show how to read the structure of a principal bundle on ${\mathcal P}\to\mathcal{J}^2$
from \eqref{e.i.dtheta_7d}. Fibres of the projection ${\mathcal P}\to\mathcal{J}^2$
are annihilated by the forms
$\theta^1,\theta^2,\theta^3,\theta^4$, and the vector fields
$X_5,X_6,X_7$ are tangent to the fibres. Simultaneously, the
commutation relations of these fields are isomorphic to the
commutators of the three-dimensional algebra\footnote{The symbol
$\goth{g}\semi{.}\goth{h}$ denotes a semidirect product of the Lie algebras
$\goth{g}$ and $\goth{h}$.} $\goth{h}=\mathbb{R}\oplus(\mathbb{R}\semi{.}\mathbb{R})$, and they
define a local action of the group
$H=\mathbb{R}\times(\mathbb{R}\ltimes\mathbb{R})$ on ${\mathcal P}$ for which $X_5,X_6,X_7$
are the fundamental fields.
Now, let us focus on the most symmetric situation, when all the
relative invariants $\inv{A}{1},\ldots,\inv{D}{2}$ vanish. This
case corresponds to an ODE which is point equivalent to the
trivial $y'''=0$. For such an ODE the equations
\eqref{e.i.dtheta_7d} become the Maurer-Cartan equations for the
algebra $\goth{co}(2,1)\semi{.}\mathbb{R}^3$, where
$\goth{co}(2,1)=\mathbb{R}\oplus\goth{so}(2,1)$ is the orthogonal algebra centrally
extended by the dilatations generated by the identity matrix. As a
consequence, ${\mathcal P}$ becomes locally the Lie group
$CO(2,1)\ltimes\mathbb{R}^3$, with the left-invariant fields
$X_1,\ldots,X_7$. Therefore $\mathcal{J}^2$ is the homogeneous space $H\to
CO(2,1)\ltimes\mathbb{R}^3 \to CO(2,1)\ltimes\mathbb{R}^3/H$ acted upon by
$CO(2,1)\ltimes\mathbb{R}^3$. This group acts on $\mathcal{J}^2$ as the group of
point symmetries of $y'''=0$, that is, for any solution $y=f(x)$ of
$y'''=0$ its graph $x\to(x,y(x),y'(x),y''(x))$ in $\mathcal{J}^2$ is
transformed into the graph of another solution of $y'''=0$. The
coframe $(\theta^i,\Omega_\mu)$ can be arranged into the following
matrix \begin{equation}\label{e.i.conp}
\hat{\omega}=\begin{pmatrix} \Omega_3 & 0 & 0 & 0 & 0 \\
\theta^1 & \Omega_3-\Omega_1 & -\theta^4 & 0 & 0\\
\theta^2 & -\Omega_2 & 0 & -\theta^4 & 0 \\
\theta^3 & 0 &-\Omega_2 & \Omega_1-\Omega_3 & 0 \\
0 & \theta^3 & -\theta^2 & \theta^1 & -\Omega_3
\end{pmatrix},
\end{equation} which is the Maurer-Cartan one-form of
$CO(2,1)\ltimes\mathbb{R}^3$. In the language of $\hat{\omega}$ the
equations \eqref{e.i.dtheta_7d} read
$$
{\rm d}\hat{\omega}+\tfrac12[\hat{\omega},\hat{\omega}]=0.
$$
Turning to an arbitrary situation, we see that non-trivial cases
cannot be described in this language, since the invariants
$\inv{A}{1},\ldots,\inv{D}{2}$ do not vanish in general and the
last equation does not hold. We need a new object, the Cartan
connection, defined here following \cite{Kob1}.
\begin{definition}
Let $H\to{\mathcal P}\to\mathcal{M}$ be a principal bundle and let $G$ be a Lie group
such that $H$ is its closed subgroup and $\dim G=\dim{\mathcal P}$. A Cartan
connection of type $(G,H)$ on ${\mathcal P}$ is a one-form $\hat{\omega}$
taking values in the Lie algebra $\goth{g}$ of $G$ and satisfying
the following conditions:
\begin{itemize}
\item[i)] $\hat{\omega}_u\colon T_u{\mathcal P}\to\goth{g}$ for every $u\in{\mathcal P}$ is
an isomorphism of vector spaces, \item[ii)]
$A^*\lrcorner\,\hat{\omega}=A$ for every $A\in\goth{h}$ and the
corresponding fundamental field $A^*$, \item[iii)]
$R^*_h\hat{\omega}=\Ad(h^{-1})\hat{\omega}$ for $h\in H$.
\end{itemize}
\end{definition}
A Cartan connection is then an object that generalizes the notion
of the Maurer-Cartan form on a Lie group. The curvature of a
Cartan connection
$$
\hat{K}={\rm d}\hat{\omega}+\frac{1}{2}[\hat{\omega},\hat{\omega}]
$$
does not have to vanish any longer and measures, as it were, how
far $H\to{\mathcal P}\to\mathcal{M}$ `differs' from the homogeneous space $H\to G\to
G/H$.
In our case the formula \eqref{e.i.conp} defines a
$\goth{co}(2,1)\semi{.}\mathbb{R}^3$-valued Cartan connection, whose
non-vanishing curvature contains the basic point invariants, from
which the full set of invariants can be constructed through
exterior differentiation. This is the basic relation, announced at
the beginning of the Introduction, between the classification of ODEs and
their geometry. In chapter \ref{ch.point} we discuss the properties of
$\hat{\omega}$ in full detail.
The geometry introduced above, however, is not the most
interesting structure one can associate with the ODEs modulo point
transformations. Indeed, the crucial observation E. Cartan made in
his paper \cite{Car2} was that the connection $\hat{\omega}$ may
generate a new type of geometry on the solution space of certain
classes of ODEs.
In order to present the idea of this construction let us invoke
once more the trivial equation $y'''=0$, but now consider how the
symmetry group acts on the solutions regarded as curves in the
$xy$ plane. It is well known that the full group of point
symmetries of $y'''=0$ is generated by the following one-parameter
groups of transformations $(x,\,y)\mapsto\Phi^i_t(x,\,y)$:
\begin{align*}
&\Phi^1_t(x,\,y)=(x,\,y+t), & & \Phi^2_t(x,\,y)=(x,\,y+2xt),\\
&\Phi^3_t(x,\,y)=(x,\,y+x^2t), & & \Phi^4_t(x,\,y)=(x,\,ye^{-t}),\\
&\Phi^5_t(x,\,y)=(xe^{t},\,ye^{t}), & & \Phi^6_t(x,\,y)=(x+t,\,y),\\
& \Phi^7_t(x,y)=\left(\frac{x}{1+xt},\,\frac{y}{(1+xt)^2}\right). & &
\end{align*}
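Each of these transformations maps solutions of $y'''=0$, i.e.\ parabolas, to
parabolas. As an illustration (our own, assuming Python with \texttt{sympy}), the
following fragment checks this for the least obvious generator $\Phi^7_t$:
\begin{verbatim}
import sympy as sp

x, t, u, c0, c1, c2 = sp.symbols('x t u c0 c1 c2')
y = c2*x**2 + 2*c1*x + c0                 # a solution of y''' = 0

# image of the solution curve under Phi^7_t
xbar = x/(1 + x*t)
ybar = y/(1 + x*t)**2

# re-express ybar as a function of xbar = u and differentiate three times
x_of_u = sp.solve(sp.Eq(xbar, u), x)[0]
Y = sp.cancel(ybar.subs(x, x_of_u))
print(sp.expand(Y))                       # a quadratic polynomial in u
print(sp.simplify(sp.diff(Y, u, 3)))      # 0
\end{verbatim}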
A solution to $y'''=0$ is a parabola of the form \begin{equation}\label{solo}
y(x)=c_2x^2+2c_1x+c_0\end{equation} with three arbitrary integration
constants $c_2,c_1,c_0$. Therefore a solution of $y'''=0$ may be
identified with a point $c=(c_2,c_1,c_0)^T$ in the solution space
$\mathcal{S}\cong\mathbb{R}^3$. Transforming \eqref{solo} according to the above
formulae we find that $\Phi^1_t,\Phi^2_t,\Phi^3_t$ are
translations in the solution space $\mathcal{S}$:
$$ \Phi^1_t(c)=\begin{pmatrix} c_2
\\ c_1 \\ c_0-t \end{pmatrix}, \quad \Phi^2_t(c)=\begin{pmatrix}
c_2 \\ c_1-t \\ c_0\end{pmatrix}, \quad \Phi^3_t(c)= \begin{pmatrix} c_2-t \\ c_1 \\
c_0 \end{pmatrix}, $$ while the transformations $\Phi^4_t,\ldots,\Phi^7_t$
generate the three-dimensional irreducible representation of
$CO(2,1)$:
\begin{align*}
&\Phi^4_t(c)=\exp t\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
c, &
&\Phi^5_t(c)=\exp t\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix} c,
\\
&\Phi^6_t(c)=\exp t\begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 2 & 0 \end{pmatrix} c, &
&\Phi^7_t(c)=\exp t\begin{pmatrix} 0 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} c.
\end{align*}
Thereby $\mathcal{S}$ is a homogeneous space equipped with the flat
conformal metric\footnote{The symbol $[g]$ denotes the conformal
metric whose representative is the metric $g$. We also adopt the
following notation:
$\alpha\beta=\tfrac12(\alpha\otimes\beta+\beta\otimes\alpha)$ for
one-forms $\alpha$ and $\beta$.} $[g]$, \begin{equation}\label{e.i.gflat}
g=({\rm d} c_0)({\rm d} c_2)-({\rm d} c_1)^2, \end{equation} which is preserved by $CO(2,1)$:
$$ M^TgM=e^{\lambda} g\quad \text{for}\quad M\in CO(2,1)
\quad\text{and}\quad \lambda\in\mathbb{R}. $$
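A quick consistency check (ours, not part of the original text, assuming Python
with \texttt{sympy}) is that the infinitesimal generators of
$\Phi^4_t,\ldots,\Phi^7_t$ listed above are conformal Killing vectors of
\eqref{e.i.gflat}:
\begin{verbatim}
import sympy as sp

# flat metric g = dc0 dc2 - dc1^2 written in the basis (c2, c1, c0)
g = sp.Matrix([[0, 0, sp.Rational(1, 2)],
               [0, -1, 0],
               [sp.Rational(1, 2), 0, 0]])

# generators of Phi^4, ..., Phi^7 acting linearly on c = (c2, c1, c0)^T
A4 = sp.eye(3)
A5 = sp.diag(1, 0, -1)
A6 = sp.Matrix([[0, 0, 0], [1, 0, 0], [0, 2, 0]])
A7 = sp.Matrix([[0, 2, 0], [0, 0, 1], [0, 0, 0]])

# exp(tA) rescales g by exp(lam*t) if and only if A^T g + g A = lam g
for A in (A4, A5, A6, A7):
    S = A.T*g + g*A
    lam = S.trace()/g.trace()
    print(sp.simplify(S - lam*g))   # zero matrix in each case
\end{verbatim}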
Apart from $[g]$ there is another piece of structure on $\mathcal{S}$. The
symmetry group acting on $\mathcal{S}$ is not the full ten-dimensional
conformal symmetry group of the flat metric, $Conf(2,1)\cong
O(3,2)$, but merely $CO(2,1)\ltimes\mathbb{R}^3$, the pseudo-Euclidean group
extended by dilatations; hence another object is needed to reduce
$O(3,2)$ to $CO(2,1)\ltimes\mathbb{R}^3$. This object is the Weyl
one-form of the flat Weyl geometry. Let us recall that a Weyl
geometry $(g,\phi)$ is a metric $g$ and a one-form $\phi$ given
modulo the transformations $g\to e^{2\lambda} g$,
$\phi\to\phi+{\rm d}\lambda$, see chapter \ref{ch.point}, section
\ref{s.ew.def}. In our case the Weyl geometry is flat, because
there is the flat representative \eqref{e.i.gflat} for which, in
addition, $\phi=0$.
Now a question arises: how can the flat Weyl structure on $\mathcal{S}$ be
reconstructed from the Cartan coframe $(\theta^i,\Omega_\mu)$? In
order to answer this question we will use the method of
construction through fibre bundles and Lie transport, which
differs from E. Cartan's original reasoning and was introduced by
P. Nurowski, cf.\ \cite{Nur0, Nur1}.
To begin with, we observe that ${\mathcal P}$ is always, not only in the
trivial case, a principal bundle $CO(2,1)\to{\mathcal P}\to\mathcal{S}$. Indeed, as
we said, $\mathcal{J}^2$ is foliated by solutions of the underlying ODE,
which are curves in $\mathcal{J}^2$ annihilated by $\omega^1,\omega^2$ and
$\omega^3$. Thus we have a projection $\mathcal{J}^2\to\mathcal{S}$ with the fibres
being the solutions. As a consequence ${\mathcal P}$ is also a bundle over
$\mathcal{S}$ with the leaves of the projection annihilated by the ideal
$(\theta^1,\theta^2,\theta^3)$. On the leaves of ${\mathcal P}\to\mathcal{S}$ there
act the vector fields $X_4,\ldots,X_7$, members of the frame dual
to $(\theta^i,\Omega_\mu)$. In view of \eqref{e.i.dtheta_7d} their
commutation relations are isomorphic to $\goth{co}(2,1)$ and turn each
leaf into an orbit of the free action of $CO(2,1)$, regardless of
whether or not the invariants $\inv{A}{1},\ldots,\inv{D}{2}$
vanish. If in addition $\inv{A}{1}=\ldots=\inv{D}{2}=0$ then
locally ${\mathcal P}\cong CO(2,1)\ltimes\mathbb{R}^3$, which generates the
structure of a homogeneous space on $\mathcal{S}$. Next, still in the trivial
case, let us consider the bilinear form \begin{equation}\label{i.gP}
\hat{g}=2\theta^1\theta^3-(\theta^2)^2\end{equation} and the one-form
$\Omega_3$ on ${\mathcal P}$. We calculate the Lie derivatives of these
objects along the fields $X_4,\ldots,X_7$ tangent to the fibres of
${\mathcal P}\to\mathcal{S}$ and observe that
\begin{equation}\label{e.i.lieg}\begin{aligned}
&\hat{g}(X_j,\cdot)\equiv0\quad\text{for}\quad j=4,5,6,7, \\
&L_{X_4}\hat{g}=0,\quad L_{X_5}\hat{g}=0,\quad L_{X_6}\hat{g}=0,\quad
L_{X_7}\hat{g}=2\hat{g}\end{aligned} \end{equation} and \begin{equation}\label{e.i.lienu} \begin{aligned}
&X_4\lrcorner\,\Omega_3=0,\quad X_5\lrcorner\,\Omega_3=0,\quad
X_6\lrcorner\,\Omega_3=0,\quad
X_7\lrcorner\,\Omega_3=1,\\
&L_{X_j}\Omega_3=0\quad\text{for}\quad j=4,5,6,7. \end{aligned}\end{equation}
These properties allow
us to project the pair $(\hat{g},\Omega_3)$ along ${\mathcal P}\to\mathcal{S}$ to a
Weyl structure $(g,\phi)$ on $\mathcal{S}$. This is precisely the flat Weyl
structure obtained above by the action of the symmetry group.
The construction through Lie derivatives and the projection has an
essential advantage in comparison with the symmetry approach, since
it can be immediately generalized to non-trivial equations. In
fact, the equations \eqref{e.i.lieg}, \eqref{e.i.lienu} hold not
only in the trivial case $y'''=0$ but under the much weaker conditions
$\inv{A}{1}=0$ and $\inv{C}{1}=0$. Calculating the explicit forms
of these invariants Cartan proved that if an ODE given by a
function $F(x,y,p,q)$ satisfies
\begin{align} &\left(\mathcal{D}-\tfrac{2}{3}F_q\right)\left(
\tfrac{1}{6}\mathcal{D} F_q
-\tfrac{1}{9}F_q^2-\tfrac{1}{2}F_p\right)+F_y=0, \label{e.i.wunsch} \\ \notag\\
&\mathcal{D}^2F_{qq}-\mathcal{D} F_{qp}+F_{qy}=0,\label{e.i.cart} \\
\intertext{where}
&\mathcal{D}=\partial_x+p\partial_y+q\partial_p+F\partial_q,
\notag\end{align} then it has a Weyl geometry on its solution
space $\mathcal{S}$. The quantity on the left hand side of the condition
\eqref{e.i.wunsch}, the W\"unschmann invariant, was first found in
\cite{Wun}.
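Both conditions are straightforward to evaluate for a concrete $F$. As an
illustration (ours, assuming Python with \texttt{sympy}), the following sketch
implements \eqref{e.i.wunsch} and \eqref{e.i.cart} and checks that both
expressions vanish for the equation $y'''=3\,y''^2/y'$ considered earlier:
\begin{verbatim}
import sympy as sp

x, y, p, q = sp.symbols('x y p q')

def D(f, F):
    # total derivative along the equation y''' = F(x, y, p, q)
    return sp.diff(f, x) + p*sp.diff(f, y) + q*sp.diff(f, p) + F*sp.diff(f, q)

def wunschmann(F):
    inner = sp.Rational(1, 6)*D(sp.diff(F, q), F) \
            - sp.Rational(1, 9)*sp.diff(F, q)**2 - sp.Rational(1, 2)*sp.diff(F, p)
    return sp.simplify(D(inner, F) - sp.Rational(2, 3)*sp.diff(F, q)*inner
                       + sp.diff(F, y))

def cartan(F):
    return sp.simplify(D(D(sp.diff(F, q, 2), F), F) - D(sp.diff(F, q, p), F)
                       + sp.diff(F, q, y))

F = 3*q**2/p                       # the equation y''' = 3 y''^2 / y'
print(wunschmann(F), cartan(F))    # both vanish
\end{verbatim}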
For the equations satisfying
\eqref{e.i.wunsch} and \eqref{e.i.cart} the bundle
$CO(2,1)\to{\mathcal P}\to\mathcal{S}$ is the bundle of orthonormal frames for the
conformal metric $[g]$ on $\mathcal{S}$, whereas $\hat{\omega}$ of
\eqref{e.i.conp} becomes a $\goth{co}(2,1)\semi{.}\mathbb{R}^3$-valued Cartan
connection. The $\goth{co}(2,1)$-part of $\hat{\omega}$ is the Weyl
connection, and the Weyl curvature is expressed by the invariants
$\inv{B}{1},\inv{B}{2},\inv{B}{3},\inv{B}{4}$, which do not vanish
in general. Cartan proved that all the Weyl geometries constructed
from third-order ODEs are Einstein, that is, the traceless part of
the Ricci tensor of their Weyl connections always vanishes.
Finally, he observed \cite{Car3} that these Einstein-Weyl
geometries are of the most general form; for each three-dimensional
Lorentzian Einstein-Weyl structure there is an ODE satisfying
\eqref{e.i.wunsch}, \eqref{e.i.cart} whose solution space carries
this structure.
\subsection*{Contact equivalence}
Soon after E. Cartan had studied the point equivalence and geometry,
the contact problem was examined by S.-S. Chern using the same
method. After a slightly more complicated construction (we discuss
it in chapter \ref{ch.contact}) Chern built a ten-dimensional
bundle ${\mathcal P}\to\mathcal{J}^2$ equipped with the coframe
$(\theta^1,\theta^2,\theta^3,\theta^4,\Omega_1,\ldots,\Omega_6)$.
For $y'''=0$ the structural equations coincide with the
Maurer-Cartan equations for the algebra $\goth{o}(3,2)\cong\goth{sp}(4,\mathbb{R})$,
which reflects the fact that $O(3,2)$ is the maximal group of
contact symmetries of $y'''=0$.
The next difference between Cartan's and Chern's results is that here
the coframe $(\theta^i,\Omega_\mu)$ for a general ODE is not a
Cartan connection, since it does not transform regularly along the
fibres of the bundle ${\mathcal P}\to\mathcal{J}^2$, i.e.\ it does not fulfill condition
iii) of the definition. The lack of this regularity means that there
are $\Omega_\alpha\wedge\theta^k$ terms with non-constant coefficients in the structural
equations. However, from the point of view of S.-S. Chern, who was
interested in geometries on the solution space analogous to the
Einstein-Weyl geometries, this fault was insignificant and he did
not investigate it.
Leaving this topic aside, let us consider S.-S. Chern's results.
Given the ten-dimensional coframe he noticed that the W\"unschmann
invariant appears in the structural equations and that the
further analysis depends on whether or not it vanishes. If the
W\"unschmann invariant vanishes then the coframe
$(\theta^i,\Omega_\mu)$ becomes an $\goth{o}(3,2)$-valued Cartan normal
conformal connection over $\mathcal{S}$. In this case S.-S. Chern gave an
explicit formula for the underlying Lorentzian conformal metric
$[g]$ in terms of the forms $\omega^1,\omega^2,\omega^3$ on
$\mathcal{J}^2$, see eq. \eqref{e.c.g} in chapter \ref{ch.contact}. Since
the contact symmetry group of $y'''=0$ is $O(3,2)$, the full
conformal group of the flat metric in $2+1$ dimensions, there is
no additional geometric object on $\mathcal{S}$ and we obtain just a
conformal geometry there instead of a Weyl geometry. Later it was
shown \cite{Nur2} that the conformal geometry can be obtained from
the bilinear symmetric field
$\hat{g}=(\theta^2)^2-2\theta^1\theta^3$ on ${\mathcal P}$ by virtue of
conditions analogous to \eqref{e.i.lieg}. The lowest-order
conformal invariant in three dimensions --- the Cotton tensor ---
was also expressed in terms of contact relative invariants of the
ODEs.
Turning to the equations with non-vanishing W\"unschmann
invariant, S.-S. Chern continued the reduction of the bundle ${\mathcal P}$
according to Cartan's method and obtained a five-dimensional
manifold, say ${\mathcal P}_5$, which is a line bundle over $\mathcal{J}^2$ and is
furnished with a coframe $(\theta^1,\ldots,\theta^4,\Omega)$.
Next, he recognized that in the homogeneous case of the equation
$y'''=-y$ the bundle ${\mathcal P}_5$ is the group $\mathbb{R}^2\ltimes\mathbb{R}^3$ and
that it generates a geometry of cones on the solution space. This is
a rather exotic kind of structure and we will not discuss it
here, referring the reader to section \ref{s.c.furred} of chapter
\ref{ch.contact}.
This closes our summary of the known results on the geometry and
classification of third-order ODEs.
\section{Null Surface Formulation}\label{s.i.nsf}
\noindent An intriguing thing about third-order ODEs is that the
Lorentzian geometry on $\mathcal{S}$ was rediscovered fifty years after
S.-S. Chern from a completely different perspective, that
of General Relativity. In a series of papers
\cite{nsf1,nsf2,nsf3,nsf4} E.T. Newman et al proposed and
developed the Null Surface Formulation (NSF), an alternative
approach to General Relativity. The ideas of this approach are the
following. Let $\mathcal{M}^4$ be a four-manifold with coordinates $(x^\mu)$.
Usually, the basic object in General Relativity is a Lorentzian
metric $g$ on $\mathcal{M}^4$ (or the Levi-Civita connection), from which
other objects are derived, the Riemann tensor, the Weyl tensor and
the Einstein tensor, on which dynamics is imposed by the Einstein
equations. One of many objects in Lorentzian geometry is a null
hypersurface. It is by definition a hypersurface $Z(x^\mu)=const$
in $\mathcal{M}^4$ which satisfies the eiconal equation \begin{equation}\label{i.eikon}
g({\rm d} Z, {\rm d} Z)=0,\end{equation} where $g$ is the inverse (contravariant)
metric. For a given $g$ the family of all null hypersurfaces is
fully defined by the above equation.
The NSF adopts the opposite point of view: here one begins with $\mathcal{M}^4$
without any metric but endowed with a two-parameter family of
hypersurfaces. Starting from these data a Lorentzian {\em
conformal} metric is constructed by the requirement that these
hypersurfaces are its null hypersurfaces; the metric is found by
solving the eiconal equation \eqref{i.eikon} with respect to the
components $g^{\mu\nu}$. In this approach it is the family of
hypersurfaces that is the basic concept and the metric is a derived
one. The family is defined by a sufficiently differentiable real
function $Z(x^\goth{m}u,s,s^*)$ on ${\goth{m}athcal M}\times S^2$, where $s$ and $s^*$
are stereographic variables on the sphere $S^2$ and $^*$ denotes
the complex conjugation. Each hypersurface is a level set
$$Z(x^\goth{m}u,s,s^*)=const,$$
with some fixed $s$. The reason why $s\in S^2$ is that the
hypersurfaces are to be null and then $S^2$ becomes the sphere of
null directions. When $x^\goth{m}u$ are fixed and $s$ sweeps out the
sphere we obtain the corresponding hypersurfaces at $x^\goth{m}u$
orthogonal to all null directions. In order to find the metric the
authors assumed that the functions \begin{equation}\label{i.fi} f^0=Z,\quad
f^+=Z_s,\quad f^-=Z_{s^*},\quad f^1=Z_{ss^*} \end{equation} form a coordinate
system on ${\goth{m}athcal M}^4$ for all $s$, introduced the coframe $({\rm d}
f^0,{\rm d} f^+,{\rm d} f^-,{\rm d} f^1)$ and found explicit formulae for
components $ g^{\goth{m}u\nu}$ in this coframe. The eiconal equation
implies $g^{00}=0$ in the coframe $({\rm d} f^A)$ but one must still
take into account vanishing of derivatives of $g^{00}$ with
respect to $s$ and $s^*$. Doing so the authors found that the
conformal metric is uniquely defined by the family of
hypersurfaces provided that the function $Z$ and a real conformal
factor $\mathcal{O}mega^2(x^\goth{m}u,s,s^*)$ for the Lorentzian conformal metric
satisfy two quite complicated complex differential conditions,
referred to as metricity conditions. If they are satisfied then
the components of the conformal Lorentzian metric $[g]$ are
products of $\mathcal{O}mega^2$ and some expressions containing derivatives
of $Z$. Moreover, both metricity conditions and the components
$g^{AB}$ only depend of $Z$ via its derivatives $Z_{ss}$ and
$Z_{s^*s^*}$ and one can eliminate the space-time coordinates in
$\mathcal{O}mega$ and $Z_{ss}$ through \eqref{i.fi} and obtain the
functions \begin{equation}n\Lambda(f^A,s,s^*)=Z_{ss}(x^\goth{m}u(f^A,s,s^*),s,s^*),
\qquad \mathcal{O}mega(f^A,s,s^*)=\mathcal{O}mega(x^\goth{m}u(f^A,s,s^*),s,s^*).\end{equation}n
Thereby information about a Lorentzian metric is encoded by two
complex functions
$\mathcal{O}mega$ and $\Lambda$ of the variables $(s, s^*, Z(s,s^*), Z_s,
Z_{s^*}, Z_{ss^*})$, satisfying the two metricity conditions and
the integrability condition $\frac{{\rm d}^2}{{\rm d}
s^{*2}}\Lambda=\frac{{\rm d}^2}{{\rm d} s^{2}}\Lambda^*$.
The next step was to write down the Einstein equations
$G^{\mu\nu}=\varkappa T^{\mu\nu}$ in the new variables. The
authors proved that applying consecutive derivatives $\partial_s$
and $\partial_{s^*}$ to \begin{equation}\label{i.ein}
G^{\mu\nu}Z_{\mu}Z_{\nu}=\varkappa T^{\mu\nu}Z_{\mu}Z_{\nu},\end{equation}
$Z_\mu=\partial_\mu Z$, one obtains nine out of the ten equations, and
the missing tenth equation, the trace component, is recovered with
the help of the metricity conditions. After suitable
substitutions, the vacuum version of \eqref{i.ein} reduces to the single
equation
$$\Omega_{f^1f^1}-Q[\Lambda]\Omega=0.$$
In this manner the vacuum Einstein equations were reduced to a
set of four complex equations for $\Omega$ and $\Lambda$: the one
above, the two metricity conditions and the integrability
condition for $\Lambda$. Having proven this result in \cite{nsf3},
the authors moved on to the analysis of solutions of the Einstein
equations, their linearization, perspectives for quantization and
other topics we will not cover here. From our point of view the
most interesting is the three-dimensional version of the NSF
\cite{nsf3d,New1, New2, New3}, which leads immediately to
third-order ODEs. We describe the construction in detail.
In the case of $2+1$-dimensional Lorentzian geometry on ${\goth{m}athcal M}^3$ the
space of null directions is diffeomorphic to $S^1$ and a conformal
class can be reconstructed from the one-parameter family of
surfaces
$$ Z(x^i,s)=const, $$ where $(x^i)=(x^0,x^1,x^2)\in{\goth{m}athcal M}^3$ and $s\in
S^1$ real. Let us introduce the functions
\begin{equation}\label{i.ypq}y=Z(x^i,s),\quad p=Z_s(x^i,s),\quad
q=Z_{ss}(x^i,s)\end{equation} and the coframe
$$\sigma^0={\rm d} y,\quad \sigma^1={\rm d} p, \quad \sigma^2={\rm d} q. $$
The third derivative, $Z_{sss}(x^i,s)$, together with
\eqref{i.ypq} define a real function
$$\Lambda(s,y,p,q)=Z_{sss}(x^i(s,y,p,q),s).$$
It is obvious that the variables $(s,\,y,\,p,\,q)$ constitute a
coordinate system on ${\goth{m}athcal J}^2$, the space of second jets of functions
$S^1\to\mathbb{R}$. Moreover, the function $\Lambda$ on ${\goth{m}athcal J}^2$ defines
the third-order ODE \begin{equation}\label{i.3ord}
Z_{sss}=\Lambda(s,Z,Z_s,Z_{ss})\end{equation} for $Z(s)$. The function
$Z(x^i,s)$, with which we have begun, can be identified with the
general solution of this equation, $x^i$ playing the role of three
integration constants. In this manner ${\goth{m}athcal M}^3$ becomes the solution
space of the ODE and ${\goth{m}athcal M}^3\times S^1$ is identified with ${\goth{m}athcal J}^2$,
where the projection ${\goth{m}athcal J}^2\to{\goth{m}athcal M}^3$ is given by
$(x^i,s)\goth{m}apsto(x^i)$. The fibres of this projection are solutions
of \eqref{i.3ord} considered as curves in $J^2$. We also notice
that the total derivative $\frac{{\rm d}}{{\rm d} s}$ applied to a
function on ${\goth{m}athcal J}^2$ coincides with the total derivative $\goth{m}athcal{D}$
$$ \frac{{\rm d}}{{\rm d} s}f(s,y,p,q)=\goth{m}athcal{D} f= (\partial_s+p\partial_y
+q\partial_p+\Lambda\partial_q)f. $$
It follows that the construction of a conformal geometry from the
family of surfaces is fully equivalent to Chern's construction
described earlier and that the metricity conditions contain the
W\"unschmann condition. Let us look at how this construction was
done. The eiconal equation
$$g^{ij}Z_{i}Z_{j}=g^{ij}y_{i}y_{j}=0$$ for the sought metric
$g=g^{ij}\partial_{x^i}\otimes\partial_{x^j}$ implies, as before,
$g^{00}=0$ in the coframe $(\sigma^i)$. Taking $\partial_s g^{00}$
gives $$ g^{ij}y_{i}p_{j}=g^{01}=0. $$ A further differentiation
$\partial_s$ yields
$$ g^{ij}y_{i}q_{j}+g^{ij}p_{i}p_{j}=g^{02}+g^{11}=0. $$
The third and fourth derivatives yield $$ g^{02}\Lambda_q+3g^{12}=0 $$ and $$
g^{02}(\mathcal{D}\Lambda_q-\tfrac13\Lambda_q^2-3\Lambda_p)+3g^{22}=0. $$
At this point all the components of the metric are found and are
proportional to $g^{02}$, which naturally becomes the conformal
factor $\Omega^2(x^i,s)$. The conformal metric in the basis
$(\partial_y,\partial_p,\partial_q)$ reads $$ g^{ij}=\Omega^2\begin{pmatrix} 0 & 0 & 1 \\
0 & -1 & -\tfrac13\Lambda_q
\\ 1 & -\tfrac13\Lambda_q &
-\tfrac13\mathcal{D}\Lambda_q+\tfrac19\Lambda_q^2+\Lambda_p \end{pmatrix}. $$
$\Omega$ is not an arbitrary function of $s$, since
$\Omega^2=g^{02}=g^{ij}y_iq_j$, and applying $\mathcal{D}$ we get $$ \mathcal{D}
\Omega=\frac13\Omega\Lambda_q.$$ This is the first of the metricity
conditions; it is a constraint on $\Omega$ once $\Lambda$
is known. The second condition is obtained by taking the fifth
derivative of the eiconal equation with respect to $s$. It reads
\begin{align*} 0&=g^{ij}(5p_i
\partial^4_s
Z_j+Z_i\partial^5_sZ_j+10\,\partial^2_sZ_i\,\partial^3_sZ_j)=\\
&=5g^{11}(\mathcal{D}\Lambda)_p+5g^{12}(\mathcal{D}\Lambda)_q+g^{02}(\mathcal{D}^2\Lambda)_q
+10(g^{02}\Lambda_y+g^{12}\Lambda_p+g^{22}\Lambda_q).
\end{align*}
Substituting the explicit forms of the metric components we
obtain precisely the W\"unschmann condition \eqref{e.i.wunsch} for
$\Lambda$.
The relations between third-order ODEs and $2+1$-dimensional
conformal geometry in the NSF were further studied in \cite{New1,
New2, New3}. In \cite{New1} it was shown that one can generate the
conformal metric on the solution space starting from the system
\eqref{e.omega} of one-forms $\omega^i$ on $\mathcal{J}^2$ associated with
an ODE. The authors defined the tensor
\begin{equation}\label{i.newg}2\omega^1(\omega^3+a\omega^1+b\omega^2)-(\omega^2)^2\end{equation}
on $\mathcal{J}^2$ with arbitrary functions $a$ and $b$. Next they imposed
the condition that the Lie transport of the above tensor along fibres
of the projection from $\mathcal{J}^2$ to the solution space is conformal. The
construction is parallel to \eqref{i.gP}, and the conditions for
conformal Lie transport fix $a$ and $b$ uniquely and yield the
W\"unschmann condition in a manner similar to \eqref{e.i.lieg} and
\eqref{e.i.lienu}. In \cite{New2} the authors re-proved the
invariance of the conformal geometry so obtained under contact
transformations of the related ODE, which reflects a gauge freedom
in the NSF. Finally, in \cite{New3} explicit formulae were given
for the curvature of the normal conformal connection in terms of
contact invariants of the third-order ODE.
In parallel with the research on the three-dimensional version of
the NSF, much progress was made in understanding the full
four-dimensional formalism in \cite{New2, New3, New4}. Here $(s,
s^*, Z(s,s^*), Z_s, Z_{s^*}, Z_{ss^*}, Z_{ss}, Z_{s^*s^*})$ are
coordinates on the second jet space $\mathcal{J}^2(S^2,\mathbb{R})$ of functions
$S^2\to\mathbb{R}$. Owing to this fact, the complex function $\Lambda$
defines a pair of PDEs for a real function $Z$ of the variables
$s$ and $s^*$ through
\begin{align}
Z_{ss}=&\Lambda(s, s^*, Z, Z_s, Z_{s^*}, Z_{ss^*}), \notag
\\
Z_{s^*s^*}=&\Lambda^*(s, s^*, Z, Z_s, Z_{s^*}, Z_{ss^*}), \notag
\end{align}
while the space-time $\mathcal{M}^4$ is the space of solutions to this
system, $(x^\mu)$ being the constants of integration. Following the ideas
of the three-dimensional construction the authors considered the
six-dimensional submanifold $L$ of $\mathcal{J}^2(\mathbb{R}^2,\mathbb{R})$ given by
the PDEs. They considered the Pfaffian system on $L$ associated
with the PDEs and, following the formula \eqref{i.newg}, they
built a symmetric tensor on $L$ from the Pfaff forms. This tensor
projects to a conformal geometry on $\mathcal{M}^4$ provided that the
metricity conditions for $\Lambda$ are satisfied, which can now be
viewed as generalizations of the W\"unschmann condition. The
Cartan normal conformal connection for this geometry was
constructed in \cite{New4}. The Null Surface Formulation is still an
ongoing project and work continues on obtaining conformal
Einstein equations in terms of invariants of the PDEs.
\section{Results of the thesis}
\noindent The aim of the thesis is threefold.
\begin{itemize}
\item[i)] Constructing new geometries associated with third-order ODEs
modulo contact, point and fibre-preserving transformations of
variables, and considering possible applications to General
Relativity.
\item[ii)] Describing the new geometries, together with the already
known ones, in the language of Cartan connections whose curvature is given
by the respective invariants of ODEs obtained by Cartan's method.
\item[iii)] Applying the Cartan invariants to the classification of
certain types of third-order ODEs.
\end{itemize}
In order to construct the geometries we follow Cartan's
construction outlined in section \ref{s.i.sum} of the Introduction. We
start from the equivalence problems formulated in terms of the
$G$-structures \eqref{e.i.canon} and apply Cartan's method to
obtain manifolds ${\mathcal P}$ equipped with coframes encoding all the
invariant information about the ODEs. Next we show how to read the
principal bundle structures of these manifolds over distinct
bases. Usually it is the structure over $\mathcal{S}$ which is the most
interesting, but we also consider structures over $\mathcal{J}^1$, $\mathcal{J}^2$
and a certain six-dimensional manifold $\mathcal{M}^6$, which appears
naturally. Once the structure of a bundle is established, the
invariant coframe defines a Cartan connection on ${\mathcal P}\to base$,
usually under additional conditions playing a role similar to that of the
W\"unschmann condition. In order to obtain the geometries and the
W\"unschmann-like conditions we often apply P. Nurowski's method
of construction by Lie transport and projection.
In the most symmetric cases, when the underlying ODEs have
transitive symmetry groups, the bundles ${\mathcal P}$ become locally Lie
groups while their bases become homogeneous spaces, providing
homogeneous models for the geometries. Since the dimension of $\mathcal{S}$
and $\mathcal{J}^1$ is three and we build geometries with at least
two-dimensional structural groups, the homogeneous models are
given by the ODEs with at least five-dimensional symmetry groups.
Below we summarize our results. The geometries of the ODEs are
also gathered in table \ref{t.geom} on page \pageref{t.geom}.
Unfortunately, among the fourteen geometries which we consider in this
work, there is no Lorentzian geometry nor any new geometry which
could currently be applied to General Relativity.
\subsection*{Contact geometries}
Chapter \ref{ch.contact} is devoted to the geometries of the ODEs
modulo contact transformations. The only equations possessing an at
least five-dimensional contact symmetry group are the linear
equations with constant coefficients\footnote{We prove this
statement in chapter \ref{ch.class}.}, that is $$ y'''=0 $$
with the symmetry group $O(3,2)$, and $$ y'''=-2\mu y'+y,\qquad
\mu\in \mathbb{R}, $$ mutually non-equivalent for distinct $\mu$,
with the symmetry group $\mathbb{R}^2\ltimes\mathbb{R}^3$. Sections
\ref{s.c.th} to \ref{s.c6d} discuss geometries whose homogeneous
model is generated by $y'''=0$. In section \ref{s.c.th} we state
the main theorem of that chapter, theorem \ref{th.c.1}, which
describes the geometry on $\mathcal{J}^2$. It can be recapitulated as
follows.
\begin{theo}[Theorem \ref{th.c.1}] The contact invariant information
about an equation $y'''=F(x,y,y',y'')$ is given by the following
data:
\begin{itemize}
\item[i)] the principal fibre bundle $H_6\to{\mathcal P}\to\mathcal{J}^2$, where
$\dim{\mathcal P}=10$ and $H_6$ is a six-dimensional subgroup of $O(3,2)$,
\item[ii)] the coframe
$(\theta^1,\theta^2,\theta^3,\theta^4,\Omega_1,\Omega_2,\Omega_3,\Omega_4,\Omega_5,\Omega_6)$
on ${\mathcal P}$ which defines the $\goth{o}(3,2)\cong\goth{sp}(4,\mathbb{R})$ Cartan
normal connection $\hat{\omega}^c$ on ${\mathcal P}$.
\end{itemize}
The coframe and the connection $\hat{\omega}^c$ are given
explicitly in terms of $F$ and its derivatives. There are two
basic relative invariants for this geometry: the W\"unschmann
invariant and $F_{y''y''y''y''}$.\end{theo} This theorem is almost
identical to the result proved in \cite{Sat}; the only new
element we add here is the explicit formula for the connection.
Section \ref{s.c.proof} contains the proof of the theorem, which
repeats S.-S. Chern's construction of the coframe and the
construction of the normal connection of \cite{Sat}.
Sections \ref{s.c.conf} to \ref{s.c6d} discuss the next three
geometries generated by $\hat{\omega}^c$. Section \ref{s.c.conf}
contains the construction of the Lorentzian geometry on the
solution space and, following \cite{Nur2}, gives explicit formulae for
the normal conformal connection. Section \ref{s.c-p} studies a new
type of geometry in the context of third-order ODEs, the
contact-projective structure on $\mathcal{J}^1$. The idea of this geometry
is the following. Consider the solutions of an ODE as a family of
curves in $\mathcal{J}^1$ and ask whether these curves are among the geodesics
of a linear connection. The answer to this question is positive
provided that $F_{y''y''y''y''}=0$, and in this case there is a
whole family of connections for which the solutions are geodesics.
Such a family of connections on $\mathcal{J}^1$ is an example of a contact
projective structure; see D. Fox \cite{Fox} for the general
definition of contact projective structures. Moreover, Tanaka's
theory allows us to define a notion of normal Cartan connection
for these structures in dimension three \cite{Cap, Cap2, Fox}. It
is then a matter of straightforward calculations to check that
$\hat{\omega}^c$ is the normal connection for our contact
projective structure. In section \ref{s.c6d} we construct a
six-dimensional split signature conformal geometry on a certain
six-manifold $\mathcal{M}^6$ over which ${\mathcal P}$ is a bundle. Next we show that
the associated normal conformal connection $\hat{\mathbf{w}}$ has a
special conformal holonomy reduced from $\goth{o}(4,4)$ to
$\goth{o}(3,2)\semi{.}\mathbb{R}^5\subset\goth{o}(4,4)$ and, what is more,
$\hat{\omega}^c$ is precisely the $\goth{o}(3,2)$ part of
$\hat{\mathbf{w}}$. These results are summarized as follows.
\begin{theo}
The connection $\hat{\omega}^c$ of theorem \ref{th.c.1} has a
fourfold interpretation.
\begin{itemize}
\item[1.] It is always the normal $\goth{o}(3,2)$ Cartan connection on
$\mathcal{J}^2$ (the Chern-Sato-Yoshikawa construction).
\item[2.] If the ODE has vanishing W\"unschmann invariant then
$\hat{\omega}^c$ is the normal Lorentzian conformal connection for
the Lorentzian structure on the solution space (the Chern-NSF
construction). \item[3.] If the ODE satisfies $F_{y''y''y''y''}=0$
then $\hat{\omega}^c$ becomes the normal Cartan connection for the
contact projective structure on $\mathcal{J}^1$. \item[4.] $\hat{\omega}^c$
is the $\goth{o}(3,2)$-part of the $\goth{o}(4,4)$ normal conformal connection
for the six-dimensional split conformal geometry on $\mathcal{M}^6$ with
special holonomy $\goth{o}(3,2)\semi{.}\mathbb{R}^5$.
\end{itemize}
\end{theo}
In section \ref{s.c.furred} we turn to geometries, whose
homogeneous models are provided by the equation $y'''=-2\goth{m}u y'+y$.
Following Chern we reduce the bundle ${\mathcal P}$ to its five-dimensional
subbundle. Then we find that
\begin{equation}gin{theo}[Theorem \ref{cor.c.geom5d}]
Every ODE satisfying a certain contact invariant condition
$\inc{a}{}[F]=\goth{m}u=const$ has an $\mathbb{R}^2$ geometry on its solution
space together with an $\mathbb{R}^2$ linear connection built from the
invariant coframe. The action of the algebra $\mathbb{R}^2$ on $\mathcal{S}$ is
given by \begin{equation}n
\begin{pmatrix}
u & v & 0 \\
-\goth{m}u v & u & v \\
v & -\goth{m}u v & u
\end{pmatrix}.
\end{equation}n
\end{theo}
This geometry seems to be a generalization of Chern's `cone
geometry', which he associated with the equation $y'''=-y$ and
briefly mentioned to exist for arbitrary ODEs. In our construction
the action of $\mathbb{R}^2$ depends on the characteristic polynomial
of the respective linear equation, and we get a real cone geometry
provided that this polynomial has three distinct roots.
\subsection*{Point geometries} In sections \ref{s.p.th} to \ref{s.lor3}
of chapter \ref{ch.point} we study the geometries associated with
the ODEs modulo point transformations. Sections \ref{s.p.th} to
\ref{s.w6d} deal with the geometries modelled on $y'''=0$. Our
approach is analogous to the contact case and the results are similar,
with some caveats. The algebra $\goth{g}oth{co}(2,1)\semi{.}\mathbb{R}^3$ is not
semisimple, hence the methods of the Tanaka theory fail here.
Moreover, even its generalization, the Morimoto nilpotent
geometry, does not work in this case, since the point geometry of
third-order ODEs on ${\goth{m}athcal J}^2$ is not given by a filtration. As a consequence,
we do not have a general theory about the existence of Cartan
connections. In the case of ${\goth{m}athcal J}^1$, where we find a point
projective structure, a certain refinement of the contact projective
structure, we are not able to find any connection and suppose that
it does not exist in general. We also construct a six-dimensional
Weyl structure in the split signature, which is related to the
six-dimensional conformal geometry of the contact case. To
summarize
\begin{equation}gin{theo}
The following statements hold
\begin{equation}gin{itemize}
\item[1.] The point invariant information about
$y'''=F(x,y,y',y'')$ is given by the seven-dimensional principal
bundle $H_3\to{\mathcal P}\to{\goth{m}athcal J}^2$ together with the coframe
$\goth{h}p{1},\goth{h}p{2},\goth{h}p{3},\goth{h}p{4},\vp{1},\vp{2},\vp{3}$ on ${\mathcal P}$, which
defines the $\goth{g}oth{co}(2,1)\semi{.}\mathbb{R}^3$ Cartan connection
${\scriptstyle\wedge}\,h{\goth{o}mega}^p$ (Cartan construction.) \item[2.] If the ODE has
vanishing W\"unschmann \eqref{e.i.wunsch} and Cartan
\eqref{e.i.cart} invariants then it has the Einstein-Weyl geometry
on $\mathcal{S}$ and the Weyl connection is given by ${\scriptstyle\wedge}\,h{\goth{o}mega}^p$
(Cartan construction.) \item[3.] If the ODE satisfies
$F_{y''y''y''}=0$ then it has the point-projective structure on
${\goth{m}athcal J}^1$. \item[4.] For any ODE there exists the split signature
six-dimensional Weyl geometry, which is never Einstein.
\end{itemize}
\end{theo}
A new construction, which does not have a contact counterpart, is
considered in section \ref{s.lor3}. This is a Lorentzian {\em
metric structure} on the solution space $\mathcal{S}$. Its construction
follows immediately from the Einstein-Weyl geometry. If the Ricci
scalar of the Weyl connection is non-zero, then it is a weighted
conformal function and may be fixed to a constant by an
appropriate choice of the conformal gauge. The homogeneous models
of this geometry are associated with
$$y'''=\frac{3\,y''^2}{2\,y'}$$
if the Ricci scalar is negative, and
$$y'''=\frac{3y''^2y'}{y'^2+1}$$ if the Ricci scalar is positive.
Both these equations are contact equivalent to $y'''=0$ and their
point symmetry groups are $O(2,2)$ and $O(4)$ respectively.
\subsection*{Fibre-preserving geometries} Sections \ref{s.f}
and \ref{s.fp} of chapter \ref{ch.point} are devoted to the
geometries of ODEs modulo fibre-preserving transformations. We
obtain a seven-dimensional bundle and the
$\goth{g}oth{co}(2,1)\semi{.}\mathbb{R}^3$-valued Cartan connection ${\scriptstyle\wedge}\,h{\goth{o}mega}^f$
on it. Since both ${\scriptstyle\wedge}\,h{\goth{o}mega}^f$ and the ${\scriptstyle\wedge}\,h{\goth{o}mega}^p$ of the point
case take values in the same algebra, these cases are very similar
to each other. Indeed, we show that one can recover
${\scriptstyle\wedge}\,h{\goth{o}mega}^f$ from ${\scriptstyle\wedge}\,h{\goth{o}mega}^p$ just by appending one
function on the bundle. As a consequence, the geometries of the
fibre-preserving case are obtained from their point counterparts
by appending the object generated by the function.
We did not study obvious or uninteresting geometries. In the
point and fibre-preserving cases the geometries of $y'''=-2\goth{m}u y'+y$
are the same as in the contact case, since the
respective symmetry groups are the same. Also the fibre-preserving
geometry on ${\goth{m}athcal J}^1$ does not seem to be worth studying.
\subsection*{Classification of ODEs}
Chapter \ref{ch.class} contains the classification part of this
work. We obtain two results
\begin{equation}gin{itemize}
\item[i)] We characterize regular ODEs admitting large contact and
point symmetry groups, that is, groups of dimension at least
four. We give the conditions, in terms of Cartan invariants, for
an ODE to possess such large symmetries. The classification is given
in tables \ref{t.cc.1} and \ref{t.pp.1} on pages \pageref{t.cc.1}
-- \pageref{t.pp.2}. We give the criteria for contact
linearization of the ODEs. (The fibre-preserving classification of
the ODEs with large symmetries is again parallel to the point
classification and has already been done \cite{God1, Grebot}, see
also Remark \ref{rem.fp}.) \item[ii)] We characterize regular ODEs
fibre-preserving equivalent to II, IV, V, VI, VII and XI reduced
Chazy classes, which are certain polynomial ODEs with the Painlev\'e
property. We give the explicit formulae for the transformations.
\end{itemize}
The condition of regularity assumed above is of a technical nature,
cf.\ the discussion at the beginning of chapter \ref{ch.class}, in
particular definition \ref{def.cc.reg}.
To summarize, the following material contained in this work is
new: sections \ref{s.c-p} to \ref{s.c.furred} of chapter
\ref{ch.contact} excluding theorem \ref{th.c.2}, sections
\ref{s.p-p} to \ref{s.fp} of chapter \ref{ch.point} and the whole
of chapter \ref{ch.class}. Other sections contain a re-formulation
and an extension of already known results.
All our calculations were performed or checked using the symbolic
computation program Maple.
\section{Notation}
In what follows we use the following symbols, in particular $W$
denotes the W\"unschmann invariant.
\begin{equation}gin{align}
F=&F(x,y,p,q), \notag \\
\goth{m}athcal{D}=&\partial_x+p\partial_y+q\partial_p+F\partial_q, \\
K=&\tfrac{1}{6}\goth{m}athcal{D} F_q -\tfrac{1}{9}F_q^2-\tfrac{1}{2}F_p, \label{e.defK} \\
L=&\tfrac13F_{qq}K-\tfrac13F_qK_q-K_p-\tfrac13F_{qy}, \\
M=&2K_{qq}K-2K_{qy}+\tfrac13F_{qq}L-\tfrac23F_qL_q-2L_p, \\
W=&\left(\goth{m}athcal{D}-\tfrac{2}{3}F_q\right)K+F_y, \label{e.defW} \\
Z=&\frac{\goth{m}athcal{D} W}{W} -F_q.
\end{align}
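For the reader's convenience we add a minimal symbolic sketch (written in
sympy rather than in the Maple used for the actual computations in this
work) which evaluates $K$ and $W$ of \eqref{e.defK} and \eqref{e.defW}
directly from a given right-hand side $F(x,y,p,q)$; it is only an
illustration of the formulae above, not part of the construction.
\begin{verbatim}
import sympy as sp

x, y, p, q = sp.symbols('x y p q')

def total_D(h, F):
    # total derivative D = d_x + p d_y + q d_p + F d_q along solutions
    return sp.diff(h, x) + p*sp.diff(h, y) + q*sp.diff(h, p) + F*sp.diff(h, q)

def wuenschmann(F):
    # relative contact invariant W of y''' = F(x, y, y', y'')
    Fq, Fp, Fy = sp.diff(F, q), sp.diff(F, p), sp.diff(F, y)
    K = (sp.Rational(1, 6)*total_D(Fq, F) - sp.Rational(1, 9)*Fq**2
         - sp.Rational(1, 2)*Fp)
    return sp.simplify(total_D(K, F) - sp.Rational(2, 3)*Fq*K + Fy)

print(wuenschmann(sp.Integer(0)))   # 0: the flat model y''' = 0
print(wuenschmann(3*q**2/(2*p)))    # 0: contact equivalent to y''' = 0
print(wuenschmann(y))               # 1: y''' = y has W != 0
\end{verbatim}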
Parentheses denote sets of objects: \begin{equation}n (a_1,\ldots,a_k)\end{equation}n is
the set consisting of $a_1,\ldots,a_k$. In particular this symbol
denotes bases of vector spaces as well as coordinate systems,
frames and coframes on manifolds. The linear span of vectors or
covectors $a_1,\ldots,a_k$ is denoted by \begin{equation}n <a_1,\ldots,a_k>.
\end{equation}n If $a_1,\ldots,a_k$ are vector fields or one-forms on a
manifold, then the above symbol denotes the distribution or the
simple ideal generated by them. The symmetric tensor product of
two one-forms or vector fields $\alpha$ and $\begin{equation}ta$ is denoted by
\begin{equation}n
\alpha\begin{equation}ta=\tfrac12(\alpha\goth{o}times\begin{equation}ta+\begin{equation}ta\goth{o}times\alpha).\end{equation}n
The symbols $A_{(\goth{m}u\nu)}$ and $A_{[\goth{m}u\nu]}$ denote
symmetrization and antisymmetrization of a tensor $A_{\goth{m}u\nu}$
respectively. For a metric $g$ of signature $(k,l)$ the group
$CO(k,l)$ is defined to be
$$CO(k,l)=\{A\in GL(k+l,\mathbb{R})\,|\quad A^TgA=e^\lambda g,\quad\lambda\in\mathbb{R} \}. $$
Its Lie algebra
$$\goth{g}oth{co}(k,l)=\{a\in \goth{g}l(k+l,\mathbb{R})\,|\quad a^Tg+ga=\lambda g,\quad\lambda\in\mathbb{R} \}. $$
A semidirect product of two Lie groups $G$ and $H$, where $G$ acts on
$H$, is denoted by $ G\ltimes H$. A semidirect product of their Lie
algebras is denoted by $\goth{g}\semi{.}\goth{h}$. If the action depends on a
parameter $\goth{m}u$ then we add a subscript: $\ltimes_\goth{m}u$ and
$\semi{\goth{m}u}$.
\begin{equation}gin{landscape}
\begin{equation}gin{table}
\caption{Geometries of third-order ODEs}\label{t.geom}
\renewcommand*{\arraystretch}{1.5}
\begin{equation}gin{tabular}{|c|c|c|c|c|}
\goth{h}line Model & Manifold & Contact &
Point & Fibre-preserving \\
\goth{h}line \goth{h}line
\goth{m}ultirow{8}{*}{$y'''=0$} & \goth{m}ultirow{2}{*}{${\goth{m}athcal J}^2$} &
\goth{m}ultirow{2}{*}{$\goth{o}(3,2)$ connection} &
\goth{m}ulticolumn{2}{|c|}{\goth{m}ultirow{2}{*}{$\goth{g}oth{co}(2,1)\semi{.}\mathbb{R}^3$ connection}} \\
& & & \goth{m}ulticolumn{2}{|c|}{} \\ \cline{2-5}
& \goth{m}ultirow{2}{*}{$\mathcal{S}$} & \goth{m}ultirow{2}{*}{Lorentzian conformal} &
\goth{m}ultirow{2}{*}{Lorentzian Einstein-Weyl} &
\raisebox{-0.25ex}{Lorentzian
Einstein-Weyl} \\
& & & & \raisebox{0.3ex}{with a weighted function} \\\cline{2-5}
& \goth{m}ultirow{2}{*}{${\goth{m}athcal J}^1$} & \goth{m}ultirow{2}{*}{contact projective} &
\goth{m}ultirow{2}{*}{point projective} & \goth{m}ultirow{2}{*}{{\em was not
studied}} \\
& & & & \\ \cline{2-5}
& \goth{m}ultirow{2}{*}{${\goth{m}athcal M}^6$} & \goth{m}ultirow{2}{*}{split conformal}
& \goth{m}ultirow{2}{*}{split Weyl} & split Weyl \\
& & & & with a weighted function \\ \goth{h}line
\goth{m}ultirow{2}{*}{$y'''=\frac{3y'(y'')^2}{1+(y')^2}$} &
\goth{m}ultirow{2}{*}{$\mathcal{S}$} &
\goth{m}ultirow{2}{*}{------} & \raisebox{-0.25ex}{Lorentzian} & \goth{m}ultirow{2}{*}{------} \\
& & & \raisebox{0.3ex}{Ricci$=const> 0$} & \\ \goth{h}line
\goth{m}ultirow{2}{*}{$y'''=\frac{3(y'')^2}{2y'}$} &
\goth{m}ultirow{2}{*}{$\mathcal{S}$} & \goth{m}ultirow{2}{*}{------} &
\raisebox{-0.25ex}{Lorentzian} & \raisebox{-0.25ex}{Lorentzian} \\
& & & \raisebox{0.3ex}{Ricci$=const< 0$} & \raisebox{0.3ex}{with a
one-form} \\ \goth{h}line
\goth{m}ultirow{2}{*}{$y'''=-2\goth{m}u y'+y$} & \goth{m}ultirow{2}{*}{$\mathcal{S}$} &
\goth{m}ulticolumn{3}{|c|}{\goth{m}ultirow{2}{*}{$\mathbb{R}^2$ geometry}}
\\
& & \goth{m}ulticolumn{3}{|c|}{} \\ \goth{h}line
\end{tabular}
\end{table}
\end{landscape}
\chapter[Geometries of ODEs modulo contact transformations]
{Geometries of ODEs considered modulo contact transformations of variables}\label{ch.contact}
\section{Cartan connection on ten-dimensional bundle}\label{s.c.th}
\noindent We formulate the theorem about
the main structure which is associated with third-order ODEs
modulo contact transformations of variables, the $\goth{o}(3,2)$ Cartan
connection on the bundle ${\mathcal P}^c\to{\goth{m}athcal J}^2$. This structure will serve
as a starting point both for analyzing the geometries of ODEs and
for their classification.
\begin{equation}gin{theorem}\label{th.c.1}
Consider an equation $y'''=F(x,y,y',y'')$. The contact invariant
information about this equation is given by the following data
\begin{equation}gin{itemize}
\item[i)] The principal fibre bundle $H_6\to{\mathcal P}^c\to{\goth{m}athcal J}^2$, where
$\dim{\mathcal P}^c=10$, ${\goth{m}athcal J}^2$ is the space of second jets of curves in the
$xy$-plane, and $H_6$ is the following six-dimensional subgroup of
$SP(4,\mathbb{R})$ \begin{equation} \label{e.c.H6} H_6=\begin{pmatrix} \sqrt{u_1}, &
\frac12\frac{u_2}{\sqrt{u_1}}, & -\frac12\frac{u_4}{\sqrt{u_1}}, &
\tfrac{1}{24}\tfrac{u_2^2u_5}{u_1^{3/2}u_3}-\tfrac12\sqrt{u_1}\,u_6 \\\\
0 & \tfrac{u_3}{\sqrt{u_1}}, & -\tfrac{u_5}{\sqrt{u_1}},
& \tfrac12\tfrac{u_2u_5-u_3u_4}{u_1^{3/2}} \\\\
0 & 0 & \tfrac{\sqrt{u_1}}{u_3}, &
-\tfrac12\tfrac{u_2}{\sqrt{u_1}\,u_3} \\\\
0 & 0 & 0 & \tfrac{1}{\sqrt{u_1}}\end{pmatrix}.
\end{equation} \item[ii)] The coframe
$(\theta^1,\theta^2,\theta^3,\theta^4,\mathcal{O}mega_1,\mathcal{O}mega_2,\mathcal{O}mega_3,\mathcal{O}mega_4,\mathcal{O}mega_5,\mathcal{O}mega_6)$
on ${\mathcal P}^c$, which defines the $\goth{o}(3,2)$-valued Cartan normal
connection ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ on ${\mathcal P}^c$ by \begin{equation}\label{e.c.conn_sp}
{\scriptstyle\wedge}\,h{\goth{o}mega}^c=\begin{pmatrix} \tfrac{1}{2}\vc{1} & \tfrac{1}{2}\vc{2} & -\tfrac{1}{2}\vc{4} & -\tfrac{1}{4}\vc{6} \\\\
\goth{h}c{4} & \vc{3}-\tfrac{1}{2}\vc{1} & -\vc{5} & -\tfrac{1}{2}\vc{4} \\\\
\goth{h}c{2} & \goth{h}c{3} & \tfrac{1}{2}\vc{1}-\vc{3} & -\tfrac{1}{2}\vc{2} \\\\
2\goth{h}c{1} & \goth{h}c{2} & -\goth{h}c{4} & -\tfrac{1}{2}\vc{1}
\end{pmatrix}.
\end{equation}
\end{itemize}
Let $(x,y,p,q,u_1,u_2,u_3,u_4,u_5,u_6)$, $(x^i,u_\goth{m}u)$ for short,
be a local coordinate system on ${\mathcal P}^c$, which is compatible with
the local trivialization ${\mathcal P}^c=H_6\times{\goth{m}athcal J}^2$, that is
$(x^i)=(x,y,p,q)$ are coordinates in ${\goth{m}athcal J}^2$ and $(u_\goth{m}u)$ are
coordinates in $H_6$ as in \eqref{e.c.H6}. Then the value of
${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ at the point $(x^i,u_\goth{m}u)$ in ${\mathcal P}^c$ is given by
\begin{equation}n {\scriptstyle\wedge}\,h{\goth{o}mega}^c(x^i,u_\goth{m}u)=u^{-1}\,\goth{o}mega^c\,u+u^{-1}{\rm d} u
\end{equation}n where $u$ denotes the matrix \eqref{e.c.H6} and \begin{equation}n
{\goth{o}mega}^c= \begin{pmatrix}
\tfrac{1}{2}\vc{1}^0&\tfrac{1}{2}\vc{2}^0 & -\tfrac{1}{2}\vc{4}^0 & -\tfrac{1}{4}\vc{6}^0 \\\\
\goth{o}mega^4 & \vc{3}^0-\tfrac{1}{2}\vc{1}^0 & -\vc{5}^0 & -\tfrac{1}{2}\vc{4}^0 \\\\
\goth{o}mega^2 & {\scriptstyle\wedge}\,t{\goth{o}mega}^3 & \tfrac{1}{2}\vc{1}^0-\vc{3}^0 & -\tfrac{1}{2}\vc{2}^0 \\\\
2\goth{o}mega^1 & \goth{o}mega^2 & -\goth{o}mega^4 & -\tfrac{1}{2}\vc{1}^0 \end{pmatrix}\end{equation}n
is the connection ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ calculated at the point
$(x^i,u_1=1,u_2=0,u_3=1,u_4=0,u_5=0,u_6=0)$.
The forms $\goth{o}mega^1,\goth{o}mega^2,{\scriptstyle\wedge}\,t{\goth{o}mega}^3,\goth{o}mega^4$ read
\begin{equation}\label{e.c.om}\begin{equation}gin{aligned}
\goth{o}mega^1=&{\rm d} y-p{\rm d} x, \\
\goth{o}mega^2=&{\rm d} p-q{\rm d} x, \\
{\scriptstyle\wedge}\,t{\goth{o}mega}^3=&{\rm d} q-F{\rm d} x-\tfrac13F_q({\rm d} p-q{\rm d} x)+K({\rm d} y-p{\rm d} x), \\
\goth{o}mega^4=&{\rm d} x.
\end{aligned}
\end{equation} The forms $\vc{1}^0,\ldots,\vc{6}^0$ read
\begin{equation}\label{e.c.Om0}\begin{equation}gin{aligned}
\vc{1}^0=&-K_q\,\goth{o}mega^1, \\
\vc{2}^0=&\left(\tfrac13 W_q+L\right)\,\goth{o}mega^1-K_q\,\goth{o}mega^2-K\goth{o}mega^4, \\
\vc{3}^0=&-K_q\,\goth{o}mega^1+\tfrac16F_{qq}\,\goth{o}mega^2+\tfrac13F_q\goth{o}mega^4, \\
\vc{4}^0=&-(\tfrac13W_{qq}+L_q)\,\goth{o}mega^1+\tfrac12K_{qq}\,\goth{o}mega^2,
\\
\vc{5}^0=&\tfrac12K_{qq}\,\goth{o}mega^1-\tfrac16F_{qqq}\,\goth{o}mega^2-\tfrac16F_{qq}\,\goth{o}mega^4,
\\
\vc{6}^0=&(\tfrac13\goth{m}athcal{D}(W_{qq})-\tfrac43W_{qp} -\tfrac13F_qW_{qq}
+\tfrac13F_{qqq}W+M)\,\goth{o}mega^1+ \\
&+\tfrac13(F_{qqy}-F_{qqq}K-W_{qq})\,\goth{o}mega^2-K_{qq}\,{\scriptstyle\wedge}\,t{\goth{o}mega}^3+ \\
&+(\tfrac23F_{qy}-\tfrac13F_{qq}K-2L-\tfrac43W_q)\,\goth{o}mega^4.
\end{aligned}
\end{equation}
\end{theorem}
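Before turning to normality, one can check directly that the matrix
\eqref{e.c.conn_sp} is indeed $\goth{sp}(4,\mathbb{R})$-valued in the
representation used here. The following small sympy sketch is our own
verification; the antidiagonal symplectic form $J$ appearing in it is an
assumption read off from the shape of the matrix, as it is not written out
in the text.
\begin{verbatim}
import sympy as sp

th1, th2, th3, th4 = sp.symbols('theta1:5')
O1, O2, O3, O4, O5, O6 = sp.symbols('Omega1:7')

# the matrix of (e.c.conn_sp), with the forms treated as commuting symbols
omega = sp.Matrix([
    [O1/2,  O2/2,      -O4/2,      -O6/4],
    [th4,   O3 - O1/2, -O5,        -O4/2],
    [th2,   th3,        O1/2 - O3, -O2/2],
    [2*th1, th2,       -th4,       -O1/2]])

# assumed symplectic form: J = antidiag(1, 1, -1, -1)
J = sp.Matrix([[0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0], [-1, 0, 0, 0]])

# omega lies in sp(4,R) with respect to J iff  omega^T J + J omega = 0
assert omega.T*J + J*omega == sp.zeros(4, 4)
print("the connection matrix is sp(4,R)-valued")
\end{verbatim}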
In the above theorem we used the concept of a normal Cartan connection in
the sense of N. Tanaka \cite{Tan}. A normal Cartan connection is a
connection which takes values in a semisimple graded Lie algebra
and whose curvature satisfies certain algebraic conditions, which
are, so to speak, a generalization of the torsion conditions in
the case of linear connections. We explain this below.
\begin{equation}gin{definition}
A semisimple Lie algebra $\goth{g}$ is graded if it has a vector space
decomposition \begin{equation}n
\goth{g}=\goth{g}_{-k}\goth{o}plus\ldots\goth{o}plus\goth{g}_{-1}\goth{o}plus\goth{g}_0\goth{o}plus\goth{g}_1\goth{o}plus\ldots\goth{o}plus\goth{g}_{k}
\end{equation}n such that \begin{equation}n[\goth{g}_i,\goth{g}_j]\subset \goth{g}_{i+j}\end{equation}n and
$\goth{g}_{-k}\goth{o}plus\ldots\goth{o}plus\goth{g}_{-1}$ is generated by $\goth{g}_{-1}$.
\end{definition}
Let us suppose that $\goth{g}$ is a semisimple graded Lie algebra and
denote $\goth{m}=\goth{g}_{-k}\goth{o}plus\ldots\goth{o}plus\goth{g}_{-1}$,
$\goth{h}=\goth{g}_0\goth{o}plus\ldots\goth{o}plus\goth{g}_k$. Let us consider a $\goth{g}$ valued
Cartan connection ${\scriptstyle\wedge}\,h{\goth{o}mega}$ on a bundle $H\to{\mathcal P}\to{\goth{m}athcal M}$, where
the Lie algebra of $H$ is $\goth{h}$. Fix a point $p\in{\mathcal P}$. The
decomposition $\goth{g}=\goth{m}\goth{o}plus\goth{h}$ defines in $T_p{\mathcal P}$ the complement
$\mathcal{H}_p$ of the vertical space ${\mathcal V}_p$. Therefore we have
$T_p{\mathcal P}={\mathcal V}_p\goth{o}plus\mathcal{H}_p$, ${\scriptstyle\wedge}\,h{\goth{o}mega}({\mathcal V}_p)=\goth{h}$ and
${\scriptstyle\wedge}\,h{\goth{o}mega}(\mathcal{H}_p)=\goth{m}$. The curvature ${\scriptstyle\wedge}\,h{K}_p=$
$({\rm d}{\scriptstyle\wedge}\,h{\goth{o}mega}+{\scriptstyle\wedge}\,h{\goth{o}mega}{\scriptstyle\wedge}\,{\scriptstyle\wedge}\,h{\goth{o}mega})_p$ at $p$ is then
characterized by the tensor $\kappa_p\in\mathcal{H}om({\scriptstyle\wedge}\,edge^2\goth{m},\goth{g})$ given
by
\begin{equation}\label{e.c.kappa}\kappa_p(A,B)={\scriptstyle\wedge}\,h{K}_p({\scriptstyle\wedge}\,h{\goth{o}mega}^{-1}_p(A), {\scriptstyle\wedge}\,h{\goth{o}mega}^{-1}_p(B)),\quad
A,B\in\goth{m}.\end{equation}
The function $\kappa\goth{g}oth{co}lon{\mathcal P}\to\mathcal{H}om({\scriptstyle\wedge}\,edge^2\goth{m},\goth{g})$ is called the
structure function.
In the space $\mathcal{H}om({\scriptstyle\wedge}\,edge^2\goth{m},\goth{g})$ of
$\goth{g}$-valued two-forms let us
define $\mathcal{H}om^1({\scriptstyle\wedge}\,edge^2\goth{m},\goth{g})$ to be the space of all
$\alpha\in\mathcal{H}om({\scriptstyle\wedge}\,edge^2\goth{m},\goth{g})$ fulfilling
$$\alpha(\goth{g}_i,\goth{g}_j)\subset \goth{g}_{i+j+1}\goth{o}plus\ldots\goth{o}plus\goth{g}_k \quad\text{for}\quad i,j<0.$$
Since the Killing form $B$ of $\goth{g}$ is non-degenerate and satisfies
$B(\goth{g}_p,\goth{g}_q)=0$ for $p\neq -q$, one can identify $\goth{m}^*$ with
$\goth{g}_1\goth{o}plus\ldots\goth{o}plus\goth{g}_k$. For a basis $(e_1,\ldots,e_m)$ of
$\goth{m}$ let $(e^*_1,\ldots,e^*_m)$ denote the unique basis of
$\goth{g}_1\goth{o}plus\ldots\goth{o}plus\goth{g}_k$ such that $B(e_i,e^*_j)=\delta_{ij}$.
Tanaka considered the following complex \begin{equation}n
\ldots\longrightarrow\mathcal{H}om({\scriptstyle\wedge}\,edge^{q+1}\goth{m},\goth{g})\goth{o}verset{\partial^*}{\longrightarrow}\mathcal{H}om({\scriptstyle\wedge}\,edge^q\goth{m},\goth{g})\longrightarrow\ldots
\end{equation}n with
$\partial^*\goth{g}oth{co}lon\mathcal{H}om({\scriptstyle\wedge}\,edge^{q+1}\goth{m},\goth{g})\to\mathcal{H}om({\scriptstyle\wedge}\,edge^q\goth{m},\goth{g})$
given by the following formula \begin{equation}n
\begin{equation}gin{aligned}
(\partial^*\alpha)&(A_1{\scriptstyle\wedge}\,\ldots{\scriptstyle\wedge}\, A_q)=\sum_i[e^*_i,\alpha(e_i{\scriptstyle\wedge}\,
A_1{\scriptstyle\wedge}\,\ldots{\scriptstyle\wedge}\, A_q)]
\\&+\tfrac{1}{2}\sum_{i,j}\alpha([e^*_j,A_i]_\goth{m}{\scriptstyle\wedge}\, e_j{\scriptstyle\wedge}\, A_1{\scriptstyle\wedge}\,\ldots{\scriptstyle\wedge}\,\goth{h}at{A_i}{\scriptstyle\wedge}\,\ldots{\scriptstyle\wedge}\, A_q),
\end{aligned}
\end{equation}n where $\alpha\in\mathcal{H}om({\scriptstyle\wedge}\,edge^q\goth{m},\goth{g})$, $A_1,\ldots\,A_q\in\goth{m}$,
$(e_i)$ is any basis in $\goth{m}$ and $[\,\, ,\,]_\goth{m}$ denotes the
$\goth{m}$-component of the bracket with respect to the decomposition
$\goth{g}=\goth{m}\goth{o}plus\goth{h}$. Finally, N. Tanaka \cite{Tan} introduced the
notion of a normal connection; the definition below is given in the
language of \cite{Cap}.
\begin{equation}gin{definition}\label{def.Tanakanorm}
A Cartan connection ${\scriptstyle\wedge}\,h{\goth{o}mega}$ as above is normal if its
structure function $\kappa$ fulfills the following conditions \begin{equation}n
\begin{equation}gin{aligned}
\text{i)}& &\qquad &\kappa\in\mathcal{H}om^1({\scriptstyle\wedge}\,edge^2\goth{m},\goth{g}),\\
\text{ii)}& &\qquad &\partial^*\kappa=0.
\end{aligned}
\end{equation}n
\end{definition}
N. Tanaka considered the above objects in the wider context of geometric
structures associated with Cartan connections. He proved a
one-to-one correspondence between normal connections on ${\mathcal P}\to{\goth{m}athcal M}$
and certain $G$-structures, the so-called $G^{\#}_{~0}$-structures, on
${\goth{m}athcal M}$. Starting from a $G^{\#}_{~0}$-structure one obtains a normal
connection by a procedure of reductions and prolongations, which
is a version of Cartan's equivalence method. It is this
correspondence with $G^\#_0$-structures which makes the normal
connections distinguished. N. Tanaka proved that a normal
connection exists and is unique provided that $\goth{g}$ is a subalgebra of
$\goth{g}(\goth{m},\goth{g}_0)$, the so-called prolongation of $\goth{m}$ and $\goth{g}_0$. The
notion of prolongation in Tanaka's sense would lead us into his
general theory, which is beyond the scope of this paper; in this
work we only need the normality conditions to explicitly construct the
Cartan connection in the single case of theorem \ref{th.c.1}. For
details of Tanaka's theory see \cite{Tan2,Tan}.
\section{Proof of theorem \ref{th.c.1}} \label{s.c.proof}
\noindent We prove theorem \ref{th.c.1} by repeating Chern's
construction of the Cartan connection for the contact equivalence
problem, later supplemented in \cite{Sat}.
We begin with the $G_c$-structure \eqref{e.G_c} on ${\goth{m}athcal J}^2$, which,
according to the Introduction, encodes all the contact invariant
information about the underlying ODE. Let us fix on ${\goth{m}athcal J}^2\times
G_c$ a coordinate system $(x,y,p,q,u_1,\ldots,u_9)$ or
$(x^k,g^i_{j})$ for short, where $(x^k)=(x,y,p,q)$ are the
coordinates on ${\goth{m}athcal J}^2$ and
$$
g^i_{j}=\begin{pmatrix} u_1 & 0 & 0 & 0 \\ u_2 & u_3 & 0 & 0 \\ u_4 & u_5 & u_6 & 0 \\ u_8 & u_9 & 0 & u_7 \end{pmatrix}
$$
are coordinates on $G_c$. We recall that there are four well-defined
forms on $G_c\times{\goth{m}athcal J}^2$, the components of the canonical
form $\theta^i$:
$$
\theta^i(x,g^{-1})=g^i_{~j}\,\goth{o}mega^j(x),\qquad i=1,\ldots,4
$$
with $\goth{o}mega^i$ given by \eqref{e.omega}. We seek a bundle on
which $\theta^i$ are supplemented to a coframe by certain new
one-forms $\mathcal{O}mega_\goth{m}u$ chosen in a well-defined geometric manner.
\subsection{S.-S. Chern's construction} The construction of the bundle ${\mathcal P}^c$ and
the coframe by Cartan's method is the following.
\subsection*{1)} We calculate the exterior derivatives of
$\theta^i$ on $G_c\times{\goth{m}athcal J}^2$ \begin{equation}\label{e.c.red10}\begin{equation}gin{aligned}
{\rm d}\theta^1=&\alpha_1{\scriptstyle\wedge}\,\theta^1+T^1_{~jk}\theta^j{\scriptstyle\wedge}\,\theta^k, \\
{\rm d}\theta^2=&\alpha_2{\scriptstyle\wedge}\,\theta^1+\alpha_3{\scriptstyle\wedge}\,\theta^2+T^2_{~jk}\theta^j{\scriptstyle\wedge}\,\theta^k, \\
{\rm d}\theta^3=&\alpha_4{\scriptstyle\wedge}\,\theta^1+\alpha_5{\scriptstyle\wedge}\,\theta^2+\alpha_6{\scriptstyle\wedge}\,\theta^3+T^3_{~jk}\theta^j{\scriptstyle\wedge}\,\theta^k, \\
{\rm d}\theta^4=&\alpha_8{\scriptstyle\wedge}\,\theta^1+\alpha_9{\scriptstyle\wedge}\,\theta^2+ \alpha_7{\scriptstyle\wedge}\,\theta^4
+T^4_{~jk}\theta^j{\scriptstyle\wedge}\,\theta^k,
\end{aligned} \end{equation}
where $\alpha_\goth{m}u$ are the entries of the matrix ${\rm d}
g^i_{~k}\cdot g^{-1k}_{~j}$ and $T^i_{~jk}$ are some functions on
$G_c\times{\goth{m}athcal J}^2$. Next we collect $T^i_{~jk}\theta^j{\scriptstyle\wedge}\,\theta^k$
terms
\begin{equation}gin{align}
{\rm d}\theta^1=&\left(\alpha_1-T^1_{~12}\theta^2-T^1_{~13}\theta^3-T^1_{~14}\theta^4\right){\scriptstyle\wedge}\,\theta^1\nonumber \\
&+T^1_{~23}\theta^2{\scriptstyle\wedge}\,\theta^3+T^1_{~24}\theta^2{\scriptstyle\wedge}\,\theta^4+T^1_{~34}\theta^3{\scriptstyle\wedge}\,\theta^4,\nonumber \\
{\rm d}\theta^2=&\left(\alpha_2-T^2_{~12}\theta^2-T^2_{~13}\theta^3-T^2_{~14}\theta^4\right){\scriptstyle\wedge}\,\theta^1 \nonumber\\
&+\left(\alpha_3-T^2_{~23}\theta^3-T^2_{~24}\theta^4\right){\scriptstyle\wedge}\,\theta^2+T^2_{~34}\theta^3{\scriptstyle\wedge}\,\theta^4,\label{e.c.red20} \\
{\rm d}\theta^3=&\left(\alpha_4-T^3_{~12}\theta^2-T^3_{~13}\theta^3-T^3_{~14}\theta^4\right){\scriptstyle\wedge}\,\theta^1,\nonumber \\
&+\left(\alpha_5-T^3_{~23}\theta^3-T^3_{~24}\theta^4\right){\scriptstyle\wedge}\,\theta^2
+\left(\alpha_6-T^3_{~34}\theta^4\right){\scriptstyle\wedge}\,\theta^3\nonumber\\
{\rm d}\theta^4=&\left(\alpha_8-T^4_{~12}\theta^2-T^4_{~13}\theta^3-T^4_{~14}\theta^4\right){\scriptstyle\wedge}\,\theta^1,\nonumber \\
&+\left(\alpha_9-T^4_{~23}\theta^3-T^4_{~24}\theta^4\right){\scriptstyle\wedge}\,\theta^2
+\left(\alpha_7+T^4_{~34}\theta^3\right){\scriptstyle\wedge}\,\theta^4\nonumber
\end{align}
and introduce new 1-forms $\pi_\goth{m}u$ which absorb the collected terms.
Equations \eqref{e.c.red10} now read \begin{equation}\label{e.c.red30}\begin{equation}gin{aligned}
{\rm d}\theta^1&=\pi_1{\scriptstyle\wedge}\,\theta^1+\frac{u_1}{u_3 u_7}\theta^4{\scriptstyle\wedge}\,\theta^2, \\
{\rm d}\theta^2&=\pi_2{\scriptstyle\wedge}\,\theta^1+\pi_3{\scriptstyle\wedge}\,\theta^2+\frac{u_3}{u_6u_7}\theta^4{\scriptstyle\wedge}\,\theta^3, \\
{\rm d}\theta^3&=\pi_4{\scriptstyle\wedge}\,\theta^1+\pi_5{\scriptstyle\wedge}\,\theta^2+\pi_6{\scriptstyle\wedge}\,\theta^3,\\
{\rm d}\theta^4&=\pi_8{\scriptstyle\wedge}\,\theta^1+\pi_9{\scriptstyle\wedge}\,\theta^2+\pi_7{\scriptstyle\wedge}\,\theta^4,
\end{aligned}\end{equation}
since $T^1_{~23}=T^1_{~34}=0$, $T^1_{~24}=-u_3u_7/u_1$ and
$T^2_{~34}=-u_3/(u_6u_7)$.
The equations \eqref{e.c.red30} strongly resemble the structural equations
for a linear connection. When $\theta^i$ are components
of the canonical form and $\Gamma^j_{~k}$ are components of a
$\goth{g}$-valued connection then we have
\begin{equation}n{\rm d}\theta^i+\Gamma^i_{~j}{\scriptstyle\wedge}\,\theta^j=\tfrac12T^i_{~jk}\theta^j{\scriptstyle\wedge}\,\theta^k\end{equation}n
with a torsion $T$. In our case $\pi_\goth{m}u$ are the entries of the
matrix $\pi$ in $\goth{g}_c$
\begin{equation}n\pi=\begin{pmatrix} \pi_1 & 0 & 0 & 0 \\ \pi_2 & \pi_3 & 0 & 0 \\
\pi_4 & \pi_5 & \pi_6& 0\\
\pi_8 & \pi_9 & 0 & \pi_7 \end{pmatrix}. \end{equation}n However, $\pi$ is not a
linear connection in general. This is because in this version of Cartan's
method $\pi$ usually does not transform regularly along the fibres of
the bundle (here $G_c\times{\goth{m}athcal J}^2$) and the `curvature'
${\rm d}\pi+\pi{\scriptstyle\wedge}\,\pi$ is not necessarily a tensor. We may think of $\pi$ as a
connection in a broader sense, that is, as a horizontal distribution
on $G_c\times{\goth{m}athcal J}^2$, which is not necessarily right-invariant.
Keeping this in mind we will refer to $T^i_{~jk}$ as torsion. Thus
$\pi$ is a `connection' chosen by the demand that its torsion is
`minimal', i.e. possesses as few terms as possible.
We observe that $\pi_\goth{m}u$, which are candidates for the sought
forms $\mathcal{O}mega_\goth{m}u$, are not uniquely defined by equations
\eqref{e.c.red30}, for example the gauge $\pi_1\to\pi_1+f\theta^1$
leaves \eqref{e.c.red30} unchanged. Therefore our connection is
not uniquely defined by its torsion.
\subsection*{2)} In the next step we reduce the bundle $G_c\times{\goth{m}athcal J}^2$.
We choose its subbundle, say ${\mathcal P}^{(1)}$, characterized by the
property that the torsion coefficients are constant on it. We
choose ${\mathcal P}^{(1)}$ such that $T^1_{~24}=-1,\,T^2_{~34}=-1$ on it.
Thus ${\mathcal P}^{(1)}$ is defined by \begin{equation}\label{e.c.red_u6u7}
u_6=\frac{u_3^2}{u_1},\quad\quad\quad u_7=\frac{u_1}{u_3}.
\end{equation} It is known \cite{Gar,Ste} that such a reduction preserves
the equivalence, in other words, two bundles are equivalent if and
only if their respective reductions are. Here ${\mathcal P}^{(1)}$ has the
seven-dimensional structural group \begin{equation}n G_c^{(1)}=\begin{pmatrix} u_1 & 0 & 0
& 0
\\ u_2 & u_3 & 0 & 0 \\ u_4 & u_5 & \tfrac{u_3^2}{u_1} & 0 \\ u_8 & u_9 & 0
& \tfrac{u_1}{u_3} \end{pmatrix}. \end{equation}n
\subsection*{3)} Next we pull back $\theta^i$ and $\pi_\goth{m}u$ to
${\mathcal P}^{(1)}$. But the new structural group $G^{(1)}_c$ is a
seven-dimensional subgroup of $G_c$, so
$(\theta^1,\ldots,\theta^4,\pi_1,\ldots,\pi_9)$ of
\eqref{e.c.red30} is not a coframe on ${\mathcal P}^{(1)}$ any longer, since
\begin{equation}n \pi_6=2\pi_3-\pi_1 \goth{m}od(\theta^i), \quad\quad\quad
\pi_7=\pi_1-\pi_3
\goth{m}od(\theta^i). \end{equation}n
Taking this into account we recalculate \eqref{e.c.red30} and
gather the torsion terms. We choose the new connection
\begin{equation}n\pi=\begin{pmatrix} \pi_1 & 0 & 0 & 0 \\ \pi_2 & \pi_3 & 0 & 0 \\
\pi_4 & \pi_5 & 2\pi_3-\pi_1& 0\\
\pi_8 & \pi_9 & 0 & \pi_1-\pi_3 \end{pmatrix} \end{equation}n so that its torsion is
minimal again. \begin{equation}\label{e.c.red40}\begin{equation}gin{aligned}
{\rm d}\theta^1&=\pi_1{\scriptstyle\wedge}\,\theta^1+\theta^4{\scriptstyle\wedge}\,\theta^2, \\
{\rm d}\theta^2&=\pi_2{\scriptstyle\wedge}\,\theta^1+\pi_3{\scriptstyle\wedge}\,\theta^2+\theta^4{\scriptstyle\wedge}\,\theta^3,\\
{\rm d}\theta^3&=\pi_4{\scriptstyle\wedge}\,\theta^1+\pi_5{\scriptstyle\wedge}\,\theta^2+(2\pi_3-\pi_1){\scriptstyle\wedge}\,\theta^3
+\left(\frac{3u_5}{u_3}-\frac{3u_2-u_3F_q}{u_1}\right)\theta^4{\scriptstyle\wedge}\,\theta^3,\\
{\rm d}\theta^4&=\pi_8{\scriptstyle\wedge}\,\theta^1+\pi_9{\scriptstyle\wedge}\,\theta^2+(\pi_1-\pi_3){\scriptstyle\wedge}\,\theta^4.
\end{aligned}\end{equation}
\subsection*{4)}
We repeat the steps {\bf 2)} and {\bf 3)}. Firstly we reduce
${\mathcal P}^{(1)}$ to the subbundle ${\mathcal P}^{(2)}\subset{\mathcal P}^{(1)}$ defined by
the property that the only non-constant torsion coefficient
$T^3_{~34}$ in \eqref{e.c.red40} vanishes on it,
\begin{equation}\label{e.c.red_u5}
u_5=\frac{u_3}{u_1}\left(u_2-\frac{1}{3}u_3F_q\right).
\end{equation} Next we recalculate the connection, re-collect the torsion and
make another reduction through the constant torsion condition ($K$
is defined in \eqref{e.defK}) \begin{equation}\label{e.c.red_u4}
u_4=\frac{u^2_3}{u_1}K+\frac{u_2^2}{2u_1}.
\end{equation}
At this stage we have reduced the frame bundle $G_c\times{\goth{m}athcal J}^2$ to
the nine-dimensional subbundle ${\mathcal P}^{(3)}\to{\goth{m}athcal J}^2$, such that its
structural group is the following
\begin{equation}n G^{(3)}_c=\begin{pmatrix} u_1 & 0 & 0 & 0 \\
u_2 & u_3 & 0 & 0 \\
\tfrac{u_2^2}{u_1} & \tfrac{u_2u_3}{u_1} & \tfrac{u_3^3}{u_1} & 0 \\
u_8 & u_9 & 0 & \frac{u_1}{u_3} \end{pmatrix} \end{equation}n and the frame dual to
$(\goth{o}mega^1,\goth{o}mega^2,\goth{o}mega^3-\tfrac13F_q\goth{o}mega^2+K\goth{o}mega^1,\goth{o}mega^4)$
belongs to ${\mathcal P}^{(3)}$. The structural equations on ${\mathcal P}^{(3)}$ read,
after collecting the torsion terms, \begin{equation}\label{e.c.red50}\begin{equation}gin{aligned}
{\rm d}\theta^1&=\pi_1{\scriptstyle\wedge}\,\theta^1+\theta^4{\scriptstyle\wedge}\,\theta^2, \\
{\rm d}\theta^2&=\pi_2{\scriptstyle\wedge}\,\theta^1+\pi_3{\scriptstyle\wedge}\,\theta^2+\theta^4{\scriptstyle\wedge}\,\theta^3, \\
{\rm d}\theta^3&=\pi_2{\scriptstyle\wedge}\,\theta^2+\left(2\pi_3-\pi_1\right){\scriptstyle\wedge}\,\theta^3
+\frac{u_3^3}{u_1^3}W\theta^4{\scriptstyle\wedge}\,\theta^1,\\
{\rm d}\theta^4&=\pi_8{\scriptstyle\wedge}\,\theta^1+\pi_9{\scriptstyle\wedge}\,\theta^2+(\pi_1-\pi_3){\scriptstyle\wedge}\,\theta^4
\end{aligned}\end{equation}
with some one-forms $\pi_1,\pi_2,\pi_3,\pi_8,\pi_9$. The function
$W$, defined in \eqref{e.defW}, is the W\"unsch\-mann invariant,
a contact relative invariant of third-order ODEs. This
means, as we already explained in the Introduction, that every contact
transformation applied to an ODE preserves the condition $W=0$ or
$W\neq 0$. It follows that third-order ODEs $y'''=F(x,y,y',y'')$
and $y'''=\cc{F}(x,y,y',y'')$, satisfying $W[F]= 0$ and
$W[\cc{F}]\neq 0$ respectively, are not contact equivalent.
Thereby, as Chern observed, third-order ODEs fall into two main
contact inequivalent branches: the ODEs satisfying $W\neq 0$, and
those satisfying $W=0$.
Equations \eqref{e.c.red50} still do not define the forms
$\pi_\goth{m}u$ uniquely, but only modulo the following transformations
\begin{equation}gin{align}
\pi_1&\to \pi_1+ 2t_1\theta^1,\nonumber \\
\pi_2&\to \pi_2+t_1\theta^2,\nonumber \\
\pi_3&\to \pi_3+t_1\theta^1, \label{e.c.prol10}\\
\pi_8&\to \pi_8+t_2\theta^1+t_3\theta^2+t_1\theta^4,\nonumber \\
\pi_9&\to \pi_9+t_3\theta^1+t_4\theta^2.\nonumber
\end{align}
That is to say, the torsion in \eqref{e.c.red50} defines the
$\goth{g}^{(3)}_c$ connection $\pi$ only up to \eqref{e.c.prol10}.
At this point, there is no pattern of further reduction. If $W=0$
there are only constant torsion coefficients in \eqref{e.c.red50}
and we do not have any conditions to define a subbundle of
${\mathcal P}^{(3)}$. In these circumstances we prolong ${\mathcal P}^{(3)}$.
\subsection*{5)} The idea of prolongation is the following.
On ${\mathcal P}^{(3)}$ there is no fixed coframe but only the coframe
$(\theta^1,\theta^2,\theta^3,\theta^4,\pi_1,\pi_2,\pi_3,\pi_8,\pi_9)$
given modulo \eqref{e.c.prol10}. But `a coframe given modulo $G$'
is a $G$-structure on ${\mathcal P}^{(3)}$. As a consequence we can deal
with this new structure on ${\mathcal P}^{(3)}$ by means of the Cartan
method. Let us then consider the bundle ${\mathcal P}^{(3)}\times G^{prol}$,
where \begin{equation}n G^{prol}=\begin{pmatrix} 1 & 0 \\ t & 1 \end{pmatrix} \end{equation}n reflects the
freedom \eqref{e.c.prol10} so that the block $t$ reads
\begin{equation}n \begin{pmatrix} 2t_1 & 0 & 0 & 0 \\ 0 & t_1 & 0 & 0 \\ t_1 & 0 & 0 & 0 \\
t_2 & t_3 & 0 & t_1 \\ t_3 & t_4 & 0 & 0 \end{pmatrix}. \end{equation}n On
${\mathcal P}^{(3)}\times G^{prol}$ there exist nine fixed one-forms
$\theta^1,\theta^2,\theta^3,\theta^4,{\mathcal P}i_1,{\mathcal P}i_2,{\mathcal P}i_3,{\mathcal P}i_8,{\mathcal P}i_9$,
given by
\begin{equation}n \begin{pmatrix} \theta^i \\ {\mathcal P}i_\goth{m}u \end{pmatrix} = \begin{pmatrix} 1 & 0 \\
t & 1 \end{pmatrix} \begin{pmatrix} \theta^i \\ \pi_\goth{m}u \end{pmatrix}, \end{equation}n which is the
canonical one-form on ${\mathcal P}^{(3)}\times G^{prol}\to {\mathcal P}^{(3)}$.
\subsection*{6)} Now we apply the method of reductions to the
above structure on ${\mathcal P}^{(3)}\times G^{prol}$. We calculate the
exterior derivatives of $(\theta^i,{\mathcal P}i_\goth{m}u)$. The derivatives of
$\theta^i$ take the form of \eqref{e.c.red50} with $\pi_\goth{m}u$
replaced by ${\mathcal P}i_\goth{m}u$. The derivatives of ${\mathcal P}i_\goth{m}u$, after
collecting and introducing 1-forms $\Lambda_K$ containing ${\rm d}
t_K$, read
\begin{equation}gin{align}
{\rm d}{\mathcal P}i_1=& \Lambda_1{\scriptstyle\wedge}\,\theta^1+{\mathcal P}i_8{\scriptstyle\wedge}\,\theta^2-{\mathcal P}i_2{\scriptstyle\wedge}\,\theta^4,\nonumber \\
{\rm d}{\mathcal P}i_2=& \tfrac{1}{2}\Lambda_1{\scriptstyle\wedge}\,\theta^2-{\mathcal P}i_1{\scriptstyle\wedge}\,{\mathcal P}i_2-{\mathcal P}i_2{\scriptstyle\wedge}\,{\mathcal P}i_3+{\mathcal P}i_8{\scriptstyle\wedge}\,\theta^3+\frac{u_3^3}{u_1^3}W{\mathcal P}i_9{\scriptstyle\wedge}\,\theta^1\nonumber \\
&+2f_1\theta^1{\scriptstyle\wedge}\,\theta^3+f_4\theta^1{\scriptstyle\wedge}\,\theta^4+f_2\theta^2{\scriptstyle\wedge}\,\theta^3+f_5\theta^2{\scriptstyle\wedge}\,\theta^4,\label{e.c.prol30}\\
{\rm d}{\mathcal P}i_3=&\tfrac{1}{2}\Lambda_1{\scriptstyle\wedge}\,\theta^1+{\mathcal P}i_8{\scriptstyle\wedge}\,\theta^2+{\mathcal P}i_9{\scriptstyle\wedge}\,\theta^3+f_1\theta^1{\scriptstyle\wedge}\,\theta^2+f_2\theta^1{\scriptstyle\wedge}\,\theta^3+f_5\theta^1{\scriptstyle\wedge}\,\theta^4+f_3\theta^2{\scriptstyle\wedge}\,\theta^3,\nonumber \\
{\rm d}{\mathcal P}i_8=&\Lambda_2{\scriptstyle\wedge}\,\theta^1+\Lambda_3{\scriptstyle\wedge}\,\theta^1+\tfrac{1}{2}\Lambda_1{\scriptstyle\wedge}\,\theta^4+{\mathcal P}i_9{\scriptstyle\wedge}\,{\mathcal P}i_2+{\mathcal P}i_8{\scriptstyle\wedge}\,{\mathcal P}i_3+f_2\theta^3{\scriptstyle\wedge}\,\theta^4,\nonumber\\
{\rm d}{\mathcal P}i_9=&\Lambda_3{\scriptstyle\wedge}\,\theta^1+\Lambda_4{\scriptstyle\wedge}\,\theta^2+{\mathcal P}i_1{\scriptstyle\wedge}\,{\mathcal P}i_9-2{\mathcal P}i_3{\scriptstyle\wedge}\,{\mathcal P}i_9+{\mathcal P}i_8{\scriptstyle\wedge}\,\theta^4-f_1\theta^1{\scriptstyle\wedge}\,\theta^4+f_3\theta^3{\scriptstyle\wedge}\,\theta^4. \nonumber
\end{align}
where $f_1,f_2,f_3,f_4,f_5$ are functions. This time the forms
$\Lambda_K$ are interpreted as connection forms. We compute
$f_1,\ldots,f_5$ and choose the subbundle ${\mathcal P}^c$ of
${\mathcal P}^{(3)}\times G^{prol}$ by the condition that $f_1,f_2,f_3$ are
equal to zero on ${\mathcal P}^c$. This is done by appropriately specifying the
parameters $t_2,t_3,t_4$ as functions of
$(x$,$y$,$p$,$q$,$u_1,u_2$,$u_3$,$u_8$,$u_9$,$t_1)$; we do not
write these complicated formulae here. The structural equations on
${\mathcal P}^c$ read \begin{equation}\label{e.c.prol40}\begin{equation}gin{aligned}
{\rm d}\theta^1 =&{\mathcal P}i_1{\scriptstyle\wedge}\,\theta^1+\theta^4{\scriptstyle\wedge}\,\theta^2, \\
{\rm d}\theta^2 =&{\mathcal P}i_2{\scriptstyle\wedge}\,\theta^1+{\mathcal P}i_3{\scriptstyle\wedge}\,\theta^2+\theta^4{\scriptstyle\wedge}\,\theta^3, \\
{\rm d}\theta^3 =&{\mathcal P}i_2{\scriptstyle\wedge}\,\theta^2+(2{\mathcal P}i_3-{\mathcal P}i_1){\scriptstyle\wedge}\,\theta^3+A\,\theta^4{\scriptstyle\wedge}\,\theta^1, \\
{\rm d}\theta^4 =&{\mathcal P}i_8{\scriptstyle\wedge}\,\theta^1+{\mathcal P}i_9{\scriptstyle\wedge}\,\theta^2+({\mathcal P}i_1-{\mathcal P}i_2){\scriptstyle\wedge}\,\theta^4, \\
{\rm d}{\mathcal P}i_1 =&\Lambda_1{\scriptstyle\wedge}\,\theta^1+{\mathcal P}i_8{\scriptstyle\wedge}\,\theta^2-{\mathcal P}i_2{\scriptstyle\wedge}\,\theta^4, \\
{\rm d}{\mathcal P}i_2 =&({\mathcal P}i_3-{\mathcal P}i_1){\scriptstyle\wedge}\,{\mathcal P}i_2+A\,{\mathcal P}i_9{\scriptstyle\wedge}\,\theta^1+\tfrac{1}{2}\Lambda_1{\scriptstyle\wedge}\,\theta^2
+{\mathcal P}i_8{\scriptstyle\wedge}\,\theta^3+B\,\theta^1{\scriptstyle\wedge}\,\theta^4+C\,\theta^2{\scriptstyle\wedge}\,\theta^4, \\
{\rm d}{\mathcal P}i_3 =&\tfrac{1}{2}\Lambda_1{\scriptstyle\wedge}\,\theta^1+{\mathcal P}i_8{\scriptstyle\wedge}\,\theta^2+{\mathcal P}i_9{\scriptstyle\wedge}\,\theta^3
+C\,\theta^1{\scriptstyle\wedge}\,\theta^4, \\
{\rm d}{\mathcal P}i_8 =&{\mathcal P}i_9{\scriptstyle\wedge}\,{\mathcal P}i_2+{\mathcal P}i_8{\scriptstyle\wedge}\,{\mathcal P}i_3-2C\,{\mathcal P}i_9{\scriptstyle\wedge}\,\theta^1+\tfrac{1}{2}\Lambda_1{\scriptstyle\wedge}\,\theta^4
+D\,\theta^1{\scriptstyle\wedge}\,\theta^2+2E\,\theta^1{\scriptstyle\wedge}\,\theta^3 \\
&+G\,\theta^1{\scriptstyle\wedge}\,\theta^4+H\,\theta^2{\scriptstyle\wedge}\,\theta^3+J\,\theta^2{\scriptstyle\wedge}\,\theta^4, \\
{\rm d}{\mathcal P}i_9 =&({\mathcal P}i_1-2{\mathcal P}i_3){\scriptstyle\wedge}\,{\mathcal P}i_9+{\mathcal P}i_8{\scriptstyle\wedge}\,\theta^4 +E\,\theta^1{\scriptstyle\wedge}\,\theta^2+H\,\theta^1{\scriptstyle\wedge}\,\theta^3
+J\,\theta^1{\scriptstyle\wedge}\,\theta^4 +L\,\theta^2{\scriptstyle\wedge}\,\theta^3, \\
{\rm d}\Lambda_1 =&\Lambda_1{\scriptstyle\wedge}\,{\mathcal P}i_1+2{\mathcal P}i_8{\scriptstyle\wedge}\,{\mathcal P}i_2+2C\,{\mathcal P}i_8{\scriptstyle\wedge}\,\theta^1-2C\,{\mathcal P}i_9{\scriptstyle\wedge}\,\theta^2
-A\,{\mathcal P}i_9{\scriptstyle\wedge}\,\theta^4+{\scriptstyle\wedge}\,t{M}\,\theta^1{\scriptstyle\wedge}\,\theta^2 \\
&+2(D+AL)\,\theta^1{\scriptstyle\wedge}\,\theta^3+{\scriptstyle\wedge}\,t{N}\,\theta^1{\scriptstyle\wedge}\,\theta^4+2E\,\theta^2{\scriptstyle\wedge}\,\theta^3
+G\,\theta^2{\scriptstyle\wedge}\,\theta^4
\end{aligned}\end{equation}
with certain functions $A,B,C,D,E,G,H,J,L,{\scriptstyle\wedge}\,t{M},{\scriptstyle\wedge}\,t{N}$ on
${\mathcal P}^c$.
The above structural equations \emph{uniquely define the only
remaining auxiliary form $\Lambda_1$}. In this manner we have
constructed the bundle ${\mathcal P}^c\to{\goth{m}athcal J}^2$ and the fixed coframe
associated with the ODE modulo contact transformations. As we
explained in the Introduction, the functions $A,\ldots,{\scriptstyle\wedge}\,t{N}$ and
their coframe derivatives are relative contact invariants for
third-order ODEs.
\subsection{Cartan normal connection from Tanaka's theory}\label{s.c.normalcon}
The above coframe is not fully satisfactory from the geometric
point of view since it does not transform equivariantly along the
fibres of ${\mathcal P}^c\to{\goth{m}athcal J}^2$, that is to say, it does not define a
Cartan connection.
In order to see this we consider the simplest case, related to the
equation $y'''=0$, when all the functions $A,\ldots,{\scriptstyle\wedge}\,t{N}$
vanish. Then \eqref{e.c.prol40} become the Maurer-Cartan equations
for the Lie algebra $\goth{o}(3,2)\goth{g}oth{co}ng\goth{sp}(4,\mathbb{R})$ and ${\mathcal P}^c$ is
locally the Lie group $O(3,2)$. The Maurer-Cartan form on ${\mathcal P}^c$
in the four-dimensional defining representation of $\goth{sp}(4,\mathbb{R})$
is given by \begin{equation}n
{\scriptstyle\wedge}\,t{\goth{o}mega}=\begin{pmatrix} \tfrac{1}{2}{\mathcal P}i_1 & \tfrac{1}{2}{\mathcal P}i_{2} & -\tfrac{1}{2}{\mathcal P}i_{8}& -\tfrac{1}{4}\Lambda_1 \\\\
\goth{h}c{4} & {\mathcal P}i_3-\tfrac{1}{2}{\mathcal P}i_{1} & -{\mathcal P}i_{9} & -\tfrac{1}{2}{\mathcal P}i_{8} \\\\
\goth{h}c{2} & \goth{h}c{3} & \tfrac{1}{2}{\mathcal P}i_{1}-{\mathcal P}i_{3} & -\tfrac{1}{2}{\mathcal P}i_{2} \\\\
2\goth{h}c{1} & \goth{h}c{2} & -\goth{h}c{4} & -\tfrac{1}{2}{\mathcal P}i_{1}
\end{pmatrix}.
\end{equation}n Let $\goth{h}$ be the six-dimensional subalgebra of $\goth{o}(3,2)$
annihilated by the ideal
$<\theta^1,\,\theta^2,\,\theta^3,\,\theta^4>$ and let $H_6$ be the
connected simply-connected Lie subgroup of $SP(4,\mathbb{R})$ with the
algebra $\goth{h}$. Then $H_6\to{\mathcal P}^c\to{\goth{m}athcal J}^2$ is a homogeneous space and
${\scriptstyle\wedge}\,t{\goth{o}mega}$ is a flat Cartan connection of type
$(SP(4,\mathbb{R}),H_6)$ on ${\mathcal P}^c$.
However, in the general case, when $A,\ldots,{\scriptstyle\wedge}\,t{N}$ do not vanish,
this object is not a Cartan connection, since its curvature
${\scriptstyle\wedge}\,t{K}={\rm d} {\scriptstyle\wedge}\,t{\goth{o}mega}+{\scriptstyle\wedge}\,t{\goth{o}mega}{\scriptstyle\wedge}\,{\scriptstyle\wedge}\,t{\goth{o}mega}$ is not
horizontal with respect to the fibration ${\mathcal P}^c\to{\goth{m}athcal J}^2$, that is,
the value of ${\scriptstyle\wedge}\,t{K}$ on a vector tangent to a fibre of ${\mathcal P}^c\to{\goth{m}athcal J}^2$ is
not necessarily zero; for instance ${\scriptstyle\wedge}\,t{K}^1_{~2}$ contains the
term $\tfrac12A {\mathcal P}i_9{\scriptstyle\wedge}\,\theta^1$. On the other hand the
horizontality of ${\scriptstyle\wedge}\,t{K}$ is necessary and locally sufficient for
${\scriptstyle\wedge}\,t{\goth{o}mega}$ to be a Cartan connection.
In order to resolve this problem H. Sato and Y. Yoshikawa
\cite{Sat} found the structural equations for the normal
connection by means of the Tanaka theory. We
recalculate their result in our notation and give the explicit form of
the normal connection, which their paper does not contain.
Let $E^i_{~j}\in\goth{g}l(4,\mathbb{R})$ denote the matrix whose
$(i,j)$ entry is equal to one and whose other entries are zero.
We introduce the following basis in $\goth{sp}(4,\mathbb{R})$
\begin{equation}\label{e.c.basis_sp}
\begin{equation}gin{aligned}
&e_1=2E^4_{~1}, & &e_2=E^3_{~1}+E^4_{~2}, & &e_3=E^3_{~2} \\
&e_4=E^2_{~1}-E^4_{~3}, & &e_5=\tfrac{1}{2}(E^1_{~1}-E^2_{~2}+E^3_{~3}-E^4_{~4}), &
&e_6=\tfrac{1}{2}(E^1_{~2}-E^3_{~4}),\\
&e_7=E^2_{~2}-E^3_{~3}, & &e_8=-\tfrac{1}{2}(E^1_{~3}+E^2_{~4}), & &e_9=-E^2_{~3}, \\
&& &e_{10}= -\tfrac{1}{4}E^1_{~4}. &&
\end{aligned}
\end{equation}
The form ${\scriptstyle\wedge}\,t{\goth{o}mega}$ is given by
$${\scriptstyle\wedge}\,t{\goth{o}mega}=\theta^1e_1+\theta^2 e_2+\theta^3 e_3+\theta^4 e_4+{\mathcal P}i_1e_5+{\mathcal P}i_2e_6+{\mathcal P}i_3e_7+{\mathcal P}i_8e_8+{\mathcal P}i_9e_9+\Lambda_1e_{10}.$$
The algebra $\goth{g}$ has the following grading
$$ \goth{o}(3,2)=\goth{g}_{-3}\goth{o}plus\goth{g}_{-2}\goth{o}plus\goth{g}_{-1}\goth{o}plus\goth{g}_0\goth{o}plus\goth{g}_1\goth{o}plus\goth{g}_2\goth{o}plus\goth{g}_3, $$
where
\begin{equation}n
\begin{equation}gin{aligned}
&\goth{g}_{-3}=<e_1>, & &\goth{g}_{-2}=<e_2>, & &\goth{g}_{-1}=<e_3,e_4>, & & \\
&\goth{g}_{0}=<e_5,e_7>, & &\goth{g}_{1}=<e_6,e_9>, & &\goth{g}_{2}=<e_8>, &
&\goth{g}_3=<e_{10}>
\end{aligned}
\end{equation}n and
$\goth{h}=\goth{g}_0\goth{o}plus\goth{g}_1\goth{o}plus\goth{g}_2\goth{o}plus\goth{g}_3=<e_5,\ldots,e_{10}>$. Let
lower case Latin indices range from $1$ to $4$ and upper case
Latin indices range from $1$ to $10$ throughout this section.
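As a side check (our own, by elementary matrix algebra), the basis
\eqref{e.c.basis_sp} and the grading above can be verified symbolically;
the sketch below encodes $e_1,\ldots,e_{10}$ as $4\times4$ matrices, expands
each commutator in this basis and confirms that
$[\goth{g}_i,\goth{g}_j]\subset\goth{g}_{i+j}$.
\begin{verbatim}
import sympy as sp
from itertools import product

def E(i, j):
    # matrix unit E^i_j: 1 in row i, column j (1-based), zeros elsewhere
    m = sp.zeros(4, 4)
    m[i - 1, j - 1] = 1
    return m

half, quarter = sp.Rational(1, 2), sp.Rational(1, 4)
e = {1: 2*E(4, 1), 2: E(3, 1) + E(4, 2), 3: E(3, 2), 4: E(2, 1) - E(4, 3),
     5: half*(E(1, 1) - E(2, 2) + E(3, 3) - E(4, 4)),
     6: half*(E(1, 2) - E(3, 4)), 7: E(2, 2) - E(3, 3),
     8: -half*(E(1, 3) + E(2, 4)), 9: -E(2, 3), 10: -quarter*E(1, 4)}
grade = {1: -3, 2: -2, 3: -1, 4: -1, 5: 0, 7: 0, 6: 1, 9: 1, 8: 2, 10: 3}

def coefficients(mat):
    # expand mat in the basis e_1, ..., e_10 by solving a linear system
    cs = sp.symbols('c1:11')
    combo = sum((c*e[k] for c, k in zip(cs, e)), sp.zeros(4, 4))
    eqs = [combo[i, j] - mat[i, j] for i in range(4) for j in range(4)]
    sol = sp.solve(eqs, cs, dict=True)[0]
    return {k: sol.get(c, 0) for k, c in zip(e, cs)}

for I, J in product(e, e):
    for k, c in coefficients(e[I]*e[J] - e[J]*e[I]).items():
        assert c == 0 or grade[k] == grade[I] + grade[J]
print("grading respected: [g_i, g_j] is contained in g_{i+j}")
\end{verbatim}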
Suppose now that ${\scriptstyle\wedge}\,h{\goth{o}mega}$ is an $\goth{sp}(4,\mathbb{R})$ Cartan
connection on ${\mathcal P}^c$. Its structure function $\kappa$, defined in
\eqref{e.c.kappa}, decomposes into
$$\kappa=\tfrac{1}{2}\kappa^I_{~ij}\,e_I\goth{o}times e^i{\scriptstyle\wedge}\, e^j,$$
where $(e^I)$ denotes the basis dual to $(e_I)$ and
$\kappa^I_{~ij}=\kappa^I_{~[ij]}$ are functions. Condition i) of
definition \ref{def.Tanakanorm} reads
\begin{equation}gin{align*} &\kappa^1_{~23}=0,& &\kappa^1_{~24}=0, &&\kappa^1_{~34}=0, &&\kappa^2_{~34}=0. \end{align*}
\noindent We read off the structure constants $[e_I,e_J]=c^K_{~IJ} e_K$
of $\goth{sp}(4,\mathbb{R})$ and compute the Killing form $B_{IJ}$ and its
inverse $B^{IJ}$. The operator
$\partial^*\goth{g}oth{co}lon\mathcal{H}om({\scriptstyle\wedge}\,edge^2\goth{m},\goth{g})\to\mathcal{H}om(\goth{m},\goth{g})$ acts as
follows \begin{equation}n
\partial^*(e_I\goth{o}times e^i{\scriptstyle\wedge}\, e^j)=
\left(2\delta^{[i}_{~m}B^{j]K}c^L_{~KI}-\delta^L_{~I}c^{[i}_{~Km}B^{j]K}\right)
e_L\goth{o}times e^m.\end{equation}n We apply $\partial^*$ to the basis
$(e_I\goth{o}times e^i {\scriptstyle\wedge}\, e^j)$ of $\mathcal{H}om^1({\scriptstyle\wedge}\,edge^2\goth{m},\goth{g})$ and find that
the condition $\partial^*\kappa=0$ for normality is equivalent
to the vanishing of the following combinations of $\kappa^I_{~ij}$:
\begin{equation}gin{align}
&\begin{aligned} &\kappa^1_{~14},&& \kappa^2_{~14},&&\kappa^2_{~24},\end{aligned} && \begin{aligned}&\kappa^3_{~24},&&\kappa^3_{~34},&&\kappa^4_{~23},\end{aligned} \notag \\
&\kappa^1_{~12}+\kappa^2_{~13}+\kappa^2_{~24}, && 2\kappa^1_{~12}-\kappa^4_{~24}-\kappa^5_{~34}, \notag \\
&\kappa^1_{~12}-\kappa^3_{~23}-\kappa^7_{~34}, && \kappa^1_{~13}+\kappa^2_{~23}, \notag \\
&\kappa^1_{~13}-\kappa^4_{~34}, && \kappa^2_{~12}+\kappa^4_{~14}+\kappa^5_{~24}, \notag \\
&\kappa^2_{~12}-\kappa^3_{~13}-\kappa^7_{~24}, && \kappa^2_{~12}+\kappa^5_{~24}-\kappa^6_{~34}-\kappa^7_{~24},\notag \\
&\kappa^2_{~13}+\kappa^3_{~23}+\kappa^5_{~34}-\kappa^7_{~34}, && \kappa^2_{~23}+\kappa^4_{~34},\notag \\
&\kappa^3_{~12}-\kappa^5_{~14}+\kappa^6_{~24}+\kappa^7_{~14}, && \kappa^4_{~12}-\kappa^5_{~13}+2\kappa^7_{~13}+\kappa^9_{~24}, \notag \\
&\kappa^4_{~12}-\kappa^6_{~23}-\kappa^8_{~34}-\kappa^9_{~24}, && \kappa^4_{~13}+\kappa^7_{~23}-\kappa^9_{~34}, \notag \\
&\kappa^4_{~14}+\kappa^6_{~34}+\kappa^7_{~24}, && \kappa^4_{~24}-\kappa^5_{~34}+2\kappa^7_{~34},\notag \\
&2\kappa^5_{~12}-2\kappa^8_{~24}-\kappa^{10}_{~34}, && \kappa^5_{~13}+\kappa^6_{~23}-\kappa^8_{~34}, \notag \\
&\kappa^5_{~14}+\kappa^6_{~24}, && \kappa^5_{~23}-2\kappa^7_{~23}-\kappa^9_{~34}, \notag \\
&2\kappa^6_{~12}+2\kappa^8_{~14}+\kappa^{10}_{~24}, && \kappa^6_{~13}+\kappa^7_{~12}+\kappa^8_{~24}+\kappa^9_{~14}. \notag
\end{align}
We write the sought normal Cartan connection ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ as
follows
$${\scriptstyle\wedge}\,h{\goth{o}mega}^c=\theta^1e_1+\theta^2e_2+\theta^3e_3+\theta^4e_4
+\mathcal{O}mega_1e_5+\mathcal{O}mega_2e_6+\mathcal{O}mega_3e_7+\mathcal{O}mega_4e_8+\mathcal{O}mega_5e_9+\mathcal{O}mega_6e_{10}.$$
The forms $\mathcal{O}mega_\goth{m}u$, as yet unknown, are given by
\begin{equation}gin{align*}
&\mathcal{O}mega_1 = {\mathcal P}i_1+a_i\theta^i, && \mathcal{O}mega_2 = {\mathcal P}i_2+b_i\theta^i,
&& \mathcal{O}mega_3 ={\mathcal P}i_3+c_i\theta^i, \nonumber \\
&\mathcal{O}mega_4 ={\mathcal P}i_8+f_i\theta^i, && \mathcal{O}mega_5 ={\mathcal P}i_9+g_i\theta^i,
&& \mathcal{O}mega_6 = \Lambda_1+h_i\theta^i.
\end{align*}
Next, we calculate the curvature
${\scriptstyle\wedge}\,h{K}^c={\rm d}{\scriptstyle\wedge}\,h{\goth{o}mega}^c+{\scriptstyle\wedge}\,h{\goth{o}mega}^c{\scriptstyle\wedge}\,{\scriptstyle\wedge}\,h{\goth{o}mega}^c$, find
the components of the structure function and put them into the
normality conditions. These are satisfied only if all the
functions $a_i,b_i,c_i,f_i,g_i,h_i$ vanish except for
$c_1,h_1,h_2,h_3,h_4$, which are arbitrary, and
\begin{equation}gin{align*}
&a_1=2c_1, && b_1=\tfrac{2}{3}C,&& b_2=c_1, && f_1=\tfrac{2}{3}J, &&
f_4=c_1,
\end{align*}
where $C$ and $J$ are the functions in \eqref{e.c.prol40}.
Finally, we obtain from the $e_1$-component of the Bianchi
identity
${\rm d}{\scriptstyle\wedge}\,h{K}^c={\scriptstyle\wedge}\,h{K}^c{\scriptstyle\wedge}\,{\scriptstyle\wedge}\,h{\goth{o}mega}^c-{\scriptstyle\wedge}\,h{\goth{o}mega}^c{\scriptstyle\wedge}\,{\scriptstyle\wedge}\,h{K}^c$
that
\begin{equation}gin{align*}
&c_1=0, && h_1=\tfrac{4}{3}G-\tfrac{2}{3}{\scriptstyle\wedge}\,t{X}_4(J),&& h_2=\tfrac{2}{3}J, && h_3=0, &&
h_4=-\tfrac{2}{3}C,
\end{align*}
where ${\scriptstyle\wedge}\,t{X}_4$ is the vector field in the frame
$({\scriptstyle\wedge}\,t{X}_1,{\scriptstyle\wedge}\,t{X}_2,{\scriptstyle\wedge}\,t{X}_3,{\scriptstyle\wedge}\,t{X}_4,{\scriptstyle\wedge}\,t{X}_5,{\scriptstyle\wedge}\,t{X}_6,{\scriptstyle\wedge}\,t{X}_7,{\scriptstyle\wedge}\,t{X}_8,{\scriptstyle\wedge}\,t{X}_9,{\scriptstyle\wedge}\,t{X}_{10})$
dual to
$(\theta^1,\theta^2,\theta^3,\theta^4,{\mathcal P}i_1,{\mathcal P}i_2,{\mathcal P}i_3,{\mathcal P}i_8,{\mathcal P}i_9,\Lambda_1)$.
The normal connection ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ has been constructed. The
last thing we must do is to rename the coordinates $u_8\to u_4$,
$u_9\to u_5$ and choose $u_6$ appropriately, so that the formulae
\eqref{e.c.H6} -- \eqref{e.c.Om0} hold. This finishes the proof of
theorem \ref{th.c.1}.
\section{Geometries on ten-dimensional bundle}\label{s.c.bundle}
\subsection{Curvature and its interpretation}
We turn to a discussion of the consequences of theorem \ref{th.c.1}. As
we explained in the Introduction, the basic object that contains the
contact invariants of the ODE is the curvature
\begin{equation}n{\scriptstyle\wedge}\,h{K}^c={\rm d}{\scriptstyle\wedge}\,h{\goth{o}mega}^c+{\scriptstyle\wedge}\,h{\goth{o}mega}^c{\scriptstyle\wedge}\,{\scriptstyle\wedge}\,h{\goth{o}mega}^c. \end{equation}n
It is given by the structural equations for the coframe
$\theta^1,\ldots,\mathcal{O}mega_6$.
\begin{equation}gin{align}
{\rm d}\goth{h}c{1} =&\vc{1}{\scriptstyle\wedge}\,\goth{h}c{1}+\goth{h}c{4}{\scriptstyle\wedge}\,\goth{h}c{2},\nonumber \\
{\rm d}\goth{h}c{2} =&\vc{2}{\scriptstyle\wedge}\,\goth{h}c{1}+\vc{3}{\scriptstyle\wedge}\,\goth{h}c{2}+\goth{h}c{4}{\scriptstyle\wedge}\,\goth{h}c{3},\nonumber \\
{\rm d}\goth{h}c{3}=&\vc{2}{\scriptstyle\wedge}\,\goth{h}c{2}+(2\vc{3}-\vc{1}){\scriptstyle\wedge}\,\goth{h}c{3}+\inc{A}{2}\goth{h}c{2}{\scriptstyle\wedge}\,\goth{h}c{1}
+\inc{A}{1}\goth{h}c{4}{\scriptstyle\wedge}\,\goth{h}c{1}, \nonumber \\
{\rm d}\goth{h}c{4} =&\vc{4}{\scriptstyle\wedge}\,\goth{h}c{1}+\vc{5}{\scriptstyle\wedge}\,\goth{h}c{2}+(\vc{1}-\vc{3}){\scriptstyle\wedge}\,\goth{h}c{4},\nonumber \\
{\rm d}\vc{1} =&\vc{6}{\scriptstyle\wedge}\,\goth{h}c{1}+\vc{4}{\scriptstyle\wedge}\,\goth{h}c{2}-\vc{2}{\scriptstyle\wedge}\,\goth{h}c{4},\nonumber \\
{\rm d}\vc{2}=&(\vc{3}-\vc{1}){\scriptstyle\wedge}\,\vc{2}+\tfrac{1}{2}\vc{6}{\scriptstyle\wedge}\,\goth{h}c{2}+\vc{4}{\scriptstyle\wedge}\,\goth{h}c{3}
+\inc{A}{3}\,\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{2}+\inc{A}{4}\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{4}, \nonumber \\
{\rm d}\vc{3}=&\tfrac{1}{2}\vc{6}{\scriptstyle\wedge}\,\goth{h}c{1}+\vc{4}{\scriptstyle\wedge}\,\goth{h}c{2}+\vc{5}{\scriptstyle\wedge}\,\goth{h}c{3}
+\inc{A}{5}\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{2}+\inc{A}{2}\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{4},\label{e.c.dtheta_10d} \\
{\rm d}\vc{4}=&\vc{5}{\scriptstyle\wedge}\,\vc{2}+\vc{4}{\scriptstyle\wedge}\,\vc{3}+\tfrac{1}{2}\vc{6}{\scriptstyle\wedge}\,\goth{h}c{4}
+(\inc{A}{6}+\inc{B}{2})\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{2} +2\inc{B}{3}\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{3} \nonumber \\
&-\inc{A}{3}\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{4}+\inc{B}{4}\goth{h}c{2}{\scriptstyle\wedge}\,\goth{h}c{3} \nonumber \\
{\rm d}\vc{5} =&(\vc{1}-2\vc{3}){\scriptstyle\wedge}\,\vc{5}+\vc{4}{\scriptstyle\wedge}\,\goth{h}c{4}
+(\inc{A}{7}+\inc{B}{3})\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{2}+\inc{B}{4}\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{3} \nonumber \\
&-\inc{A}{5}\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{4}+\inc{B}{1}\goth{h}c{2}{\scriptstyle\wedge}\,\goth{h}c{3}, \nonumber \\
{\rm d}\vc{6} =&\vc{6}{\scriptstyle\wedge}\,\vc{1}+2\vc{4}{\scriptstyle\wedge}\,\vc{2}+\inc{C}{1}\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{2}
+2\inc{B}{2}\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{3}+\inc{A}{8}\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{4}+2\inc{B}{3}\goth{h}c{2}{\scriptstyle\wedge}\,\goth{h}c{3}, \nonumber
\end{align}
where
$\inc{A}{1},\ldots,\inc{A}{8},\inc{B}{1},\ldots,\inc{B}{4},\inc{C}{1}$
are functions on ${\mathcal P}^c$.
The functions $\inc{A}{1},\ldots,\inc{C}{1}$ are contact relative
invariants of the underlying ODE, that is, whether or not they vanish
is a contact invariant property. The full set of contact
invariants can be constructed by consecutive differentiation of
$\inc{A}{1},\ldots,\inc{C}{1}$ with respect to the frame
$(X_1,X_2,X_3,X_4,X_5,X_6,X_7,X_8,X_9, X_{10})$ dual to
$(\theta^1,$ $\theta^2,$ $\theta^3,$ $\theta^4,$ $\mathcal{O}mega_1,$
$\mathcal{O}mega_2,$ $\mathcal{O}mega_3,$ $\mathcal{O}mega_4,$ $\mathcal{O}mega_5,$ $\mathcal{O}mega_6)$.
Utilizing the identities ${\rm d}^2\mathcal{O}mega_\goth{m}u=0$ we compute the
exterior derivatives of $\inc{A}{i},\inc{B}{j},\inc{C}{1}$, for
instance
$$ {\rm d}\inc{B}{1}=X_1(\inc{B}{1})\goth{h}c{1}+X_2(\inc{B}{1})\goth{h}c{2}+X_3(\inc{B}{1})\goth{h}c{3}
-2\inc{B}{4}\goth{h}c{4}+2\inc{B}{1}\vc{1}-5\inc{B}{1}\vc{3}.$$ From
these formulae it follows that i) $\inc{A}{2},\ldots,\inc{A}{8}$
are expressible in terms of the coframe derivatives of $\inc{A}{1}$, ii)
$\inc{B}{2},\ldots,\inc{B}{4}$ are expressible in terms of the coframe derivatives of
$\inc{B}{1}$, and iii) $\inc{C}{1}$ is a function of the coframe
derivatives of both $\inc{A}{1}$ and $\inc{B}{1}$. Hence there are
two basic invariants for the system \eqref{e.c.dtheta_10d} reading
\begin{equation}n \inc{A}{1}=\frac{u_3^3}{u_1^3}W \qquad
\inc{B}{1}=\frac{u_1^2}{6u_3^5}F_{qqqq} \end{equation}n and all other
invariants can be derived from them.\footnote{In the language of Tanaka's
theory this property means that the curvature of a normal
connection is generated by its harmonic part.}
The simplest case, in which all the contact invariants
$\inc{A}{1},\ldots,\inc{C}{1}$ vanish, corresponds to
$W=F_{qqqq}=0$ and is characterized as follows.
\begin{equation}gin{corollary}\label{cor.c.10d_flat}
For a third-order ODE $y'''=F(x,y,y',y'')$ the following
conditions are equivalent.
\begin{equation}gin{itemize}
\item[i)] The ODE is contact equivalent to $y'''=0$.
\item[ii)] It satisfies the conditions $W=0$, and $F_{qqqq}=0$.
\item[iii)] It has the $\goth{o}(3,2)$ algebra of contact symmetry generators.
\end{itemize}
\end{corollary}
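As an illustration only (our own check, not part of the argument), condition
ii) of the corollary can be tested symbolically on the two homogeneous
models of section \ref{s.lor3}: both satisfy $W=0$ and $F_{qqqq}=0$, in
agreement with the remark in the Introduction that both are contact
equivalent to $y'''=0$.
\begin{verbatim}
import sympy as sp

x, y, p, q = sp.symbols('x y p q')
D = lambda h, F: (sp.diff(h, x) + p*sp.diff(h, y)
                  + q*sp.diff(h, p) + F*sp.diff(h, q))

for F in (3*q**2/(2*p), 3*q**2*p/(p**2 + 1)):
    K = (sp.Rational(1, 6)*D(sp.diff(F, q), F)
         - sp.Rational(1, 9)*sp.diff(F, q)**2
         - sp.Rational(1, 2)*sp.diff(F, p))
    W = D(K, F) - sp.Rational(2, 3)*sp.diff(F, q)*K + sp.diff(F, y)
    print(sp.simplify(W), sp.diff(F, q, 4))   # prints 0 0 for both models
\end{verbatim}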
\subsection{Structure of ${\mathcal P}^c$}
The manifold ${\mathcal P}^c$ is endowed with a threefold bundle structure: it is a
principal bundle over the second jet space ${\goth{m}athcal J}^2$, over the first jet
space ${\goth{m}athcal J}^1$ and over the solution space $\mathcal{S}$. We discuss these
structures consecutively. Let us recall that
$(X_1,X_2,X_3,X_4,X_5,X_6,X_7,X_8,X_9, X_{10})$ denotes the dual
frame to $(\theta^1,$ $\theta^2,$ $\theta^3,$ $\theta^4,$
$\mathcal{O}mega_1,$ $\mathcal{O}mega_2,$ $\mathcal{O}mega_3,$ $\mathcal{O}mega_4,$ $\mathcal{O}mega_5,$
$\mathcal{O}mega_6)$.
The first bundle structure, $H_6\to{\mathcal P}^c\to{\goth{m}athcal J}^2$, has already been
introduced explicitly in theorem \ref{th.c.1}. Here we only show
that it is actually defined by the coframe, since it can be
recovered from its structural equations. Indeed, we see from
\eqref{e.c.dtheta_10d} that the ideal spanned by
$\theta^1,\theta^2,\theta^3,\theta^4$ is closed \begin{equation}n {\rm d}
\theta^i{\scriptstyle\wedge}\,\theta^1{\scriptstyle\wedge}\,\theta^2{\scriptstyle\wedge}\,\theta^3{\scriptstyle\wedge}\,\theta^4=0 \qquad
\text{for}\qquad i=1,2,3,4, \end{equation}n and it follows that its
annihilated distribution $<X_5,X_6,X_7,X_8,X_9,X_{10}>$ is
integrable. Maximal integral leaves of this distribution are
locally fibres of the projection ${\mathcal P}^c\to{\goth{m}athcal J}^2$. Furthermore, the
commutation relations of these vector fields are isomorphic to those
of the Lie algebra of the six-dimensional group $H_6$, hence we
can {\em define} $X_5,\ldots,X_{10}$ to be the fundamental vector
fields associated to the action of $H_6$ on ${\mathcal P}^c$.
In order to explain how ${\mathcal P}^c$ is the bundle
$CO(2,1)\ltimes\mathbb{R}^3\to{\mathcal P}^c\to\mathcal{S}$ let us first describe the
space $\mathcal{S}$ itself. On ${\goth{m}athcal J}^2$ there is the congruence of solutions
of the ODE. A family of solutions passing through a sufficiently
small open set in ${\goth{m}athcal J}^2$ is given by the mapping
$$(x;c_1,c_2,c_3)\goth{m}apsto
(x,f(x;c_1,c_2,c_3),f_x(x;c_1,c_2,c_3),f_{xx}(x;c_1,c_2,c_3)),$$
where $y=f(x;c_1,c_2,c_3)$ is the general solution to
$y'''=F(x,y,y',y'')$ and $(c_1,c_2,c_3)$ are constants of
integration. Thus a solution can be considered as a point in the
three-dimensional real space $\mathcal{S}$ parameterized by the constants
of integration. This space can be endowed with a local structure
of a differentiable manifold if we {\em choose} a parametrization
$(c_1,c_2,c_3)\goth{m}apsto f(x;c_1,c_2,c_3)$ of the solutions and admit
only sufficiently smooth re-parameterizations
$(c_1,c_2,c_3)\goth{m}apsto(\tilde{c}_1,\tilde{c}_2,\tilde{c}_3)$ of the
constants. We always assume that $\mathcal{S}$ is locally a manifold.
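For instance, for the trivial equation $y'''=0$ the general solution is
$f(x;c_1,c_2,c_3)=c_1+c_2x+\tfrac12c_3x^2$, so that $\mathcal{S}$ may be identified
with $\mathbb{R}^3$ equipped with the global coordinates $(c_1,c_2,c_3)$.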
Since ${\goth{m}athcal J}^2$ is a bundle over $\mathcal{S}$, so is ${\mathcal P}^c$, and the fibres of
the projection ${\mathcal P}^c\to\mathcal{S}$ are annihilated by the closed ideal
$<\theta^1,\theta^2,\theta^3>$. On the fibres there act the vector
fields $X_4,X_5,X_6,X_7,X_8,X_9,X_{10}$, which form the Lie
algebra $\goth{g}oth{co}(2,1)\semi{.}\mathbb{R}^3$ and thereby define the action of
$CO(2,1)\ltimes\mathbb{R}^3$ on ${\mathcal P}^c$.
Apart from the projection ${\goth{m}athcal J}^2\to\mathcal{S}$ there is also the projection
${\goth{m}athcal J}^2\to{\goth{m}athcal J}^1$ that takes the second jet $(x,y,p,q)$ of a curve
into its first jet $(x,y,p)$. It gives rise to the third bundle
structure, $H_7\to{\mathcal P}^c\to{\goth{m}athcal J}^1$. Here the tangent distribution is
$<X_3,X_5,X_6,X_7,X_8,X_9,X_{10}>$ and it defines the action of a
seven-dimensional group $H_7$, which of course contains $H_6$.
It appears that, under some conditions, ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ is
a Cartan connection not only over ${\goth{m}athcal J}^2$ but also over $\mathcal{S}$ or ${\goth{m}athcal J}^1$.
\section{Conformal geometry on solution space}\label{s.c.conf}
\noindent This section, on the conformal geometry on $\mathcal{S}$, contains
only results of P. Nurowski \cite{Nur2,Nur1}, see also
\cite{New3}. Let us define on ${\mathcal P}^c$ the symmetric
two-contravariant tensor field
\begin{equation}n{\scriptstyle\wedge}\,h{g}=(\theta^2)^2-2\theta^1\theta^3 \end{equation}n of signature $(++-
\,0\,0\,0\,0\,0\,0\,0)$. The degenerate directions of ${\scriptstyle\wedge}\,h{g}$ are
precisely those tangent to the fibres of ${\mathcal P}^c\to\mathcal{S}$ \begin{equation}n
{\scriptstyle\wedge}\,h{g}(X_i,\cdot)=0,\qquad \text{for}\qquad i=4,5,6,7,8,9,10. \end{equation}n
The Lie derivatives of ${\scriptstyle\wedge}\,h{g}$ along the degenerate directions
are as follows \begin{equation}gin{align}
&L_{X_4}{\scriptstyle\wedge}\,h{g}=\frac{u_3^3}{u_1^3}W(\theta^1)^2,
\qquad\quad
L_{X_7}{\scriptstyle\wedge}\,h{g}=2{\scriptstyle\wedge}\,h{g}, \label{e.c.lie1} \\
\intertext{and}
&L_{X_i}{\scriptstyle\wedge}\,h{g}=0 \qquad\text{for} \qquad i=5,6,8,9,10. \label{e.c.lie2} \end{align}
Thus, provided that $$W=0,$$ all the degenerate directions but $X_7$ are
infinitesimal isometries of ${\scriptstyle\wedge}\,h{g}$, whereas $X_7$ is a conformal symmetry. This
allows us to {\em project} ${\scriptstyle\wedge}\,h{g}$ to the Lorentzian conformal
metric $[g]$ on the solution space $\mathcal{S}$. Since the action of
$CO(2,1)\ltimes\mathbb{R}^3$ on $\mathcal{S}$ is not given explicitly, we can
not write down the explicit formula for $g$. We can only do this
in terms of the coordinate system $(x,y,p,q)$ on ${\goth{m}athcal J}^2$:
\begin{equation}gin{align}
g&=(\goth{o}mega^2)^2-2\goth{o}mega^1{\scriptstyle\wedge}\,t{\goth{o}mega}^3= \label{e.c.g} \\
&=({\rm d} p-q{\rm d} x)^2
-2({\rm d} y-p{\rm d} x)({\rm d} q-F{\rm d} x-\tfrac13F_q({\rm d} p-q{\rm d} x)+K({\rm d} y-p{\rm d}
x)).\notag\end{align}
In order to find the explicit expression for $g$ in a coordinate
system $(c_1,c_2,c_3)$ on $\mathcal{S}$ we would have to find the general
solution $y=f(x;c_1,c_2,c_3)$ of the underlying ODE, then solve
the system \begin{equation}n\left\{ \begin{aligned} y=&f(x;c_1,c_2,c_3), \\
p=&f_{x}(x;c_1,c_2,c_3), \\ q=&f_{xx}(x;c_1,c_2,c_3) \end{aligned} \right.
\end{equation}n with respect to $c_i$ and substitute these formulae into
\eqref{e.c.g}.
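As an illustration take again $y'''=0$, for which $F=0$ and consequently all
the functions built from its derivatives, in particular $K$, vanish. Solving
$y=c_1+c_2x+\tfrac12c_3x^2$, $p=c_2+c_3x$, $q=c_3$ for the constants we get
${\rm d} y-p\,{\rm d} x={\rm d} c_1+x\,{\rm d} c_2+\tfrac12x^2{\rm d} c_3$,
${\rm d} p-q\,{\rm d} x={\rm d} c_2+x\,{\rm d} c_3$ and ${\rm d} q={\rm d} c_3$, so that \eqref{e.c.g} gives
\begin{equation}n
g=({\rm d} c_2+x\,{\rm d} c_3)^2-2({\rm d} c_1+x\,{\rm d} c_2+\tfrac12x^2{\rm d} c_3)\,{\rm d} c_3
=({\rm d} c_2)^2-2\,{\rm d} c_1{\rm d} c_3,
\end{equation}n
a flat Lorentzian metric on $\mathcal{S}$, independent of $x$, as it should be.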
The condition $W=0$ means $\inc{A}{1}=0$, which implies
$\inc{A}{2}=\ldots=\inc{A}{8}=0$. Thus, the structural equations
\eqref{e.c.dtheta_10d} do not contain the non-constant terms
proportional to $\theta^4$ and the curvature ${\scriptstyle\wedge}\,h{K}^c$ is
horizontal over $\mathcal{S}$. As a consequence, ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ is a
connection over $\mathcal{S}$. It appears that it is nothing but the Cartan
normal conformal connection for $[g]$.
\subsection{Normal conformal connection} Following
\cite{Nur2} we introduce the normal conformal connection and discuss its
curvature. Consider $\mathbb{R}^n$ with coordinates $(x^\goth{m}u)$,
$\goth{m}u=1,\ldots,n$ equipped with the flat metric $g=g_{\goth{m}u\nu}{\rm d}
x^\goth{m}u\goth{o}times{\rm d} x^\nu$ of the signature $(k,l)$, $n=k+l$. The
group $Conf(k,l)$ of conformal symmetries of $g_{\goth{m}u\nu}$ consists
of
\begin{equation}gin{itemize}
\item[i)] the subgroup $CO(k,l)=\mathbb{R}\times O(k,l)$ containing the
group $O(k,l)$ of isometries of $g$ and the dilatations,
\item[ii)] the subgroup $\mathbb{R}^n$ of translations, \item[iii)] the
subgroup $\mathbb{R}^n$ of special conformal transformations.
\end{itemize}
The stabilizer of the origin in $\mathbb{R}^n$ is the semidirect
product $CO(k,l)\ltimes\mathbb{R}^n$ of the isometries, the
dilatations, and the special conformal transformations. The flat
conformal space is the homogeneous space
$Conf(k,l)/CO(k,l)\ltimes\mathbb{R}^n$. To this space there is
associated the flat Cartan connection on the bundle
$CO(k,l)\ltimes\mathbb{R}^n\to Conf(k,l)\to \mathbb{R}^n$ with values in the
algebra $\goth{g}oth{co}nf(k,l)$.
By virtue of the M\"obius construction, the group $Conf(k,l)$ is
isomorphic to the orthogonal group
$O(k+1,l+1)$ preserving the metric \begin{equation}n \begin{pmatrix} 0 & 0 & -1 \\
0 & g_{\goth{m}u\nu} & 0 \\ -1 & 0 & 0 \end{pmatrix} \end{equation}n on $\mathbb{R}^{n+2}$. This
isomorphism gives rise to the following representation of
$\goth{g}oth{co}nf(k,l)\goth{g}oth{co}ng\goth{o}(k+1,l+1)$ \begin{equation}\label{e.c.o32} \begin{pmatrix} \phi & g_{\nu\rho}\xi^\rho & 0 \\
v^\goth{m}u & \lambda^\goth{m}u_{~\nu} & \xi^\goth{m}u \\ 0 & g_{\nu\rho}v^\rho &
-\phi \end{pmatrix}. \end{equation} Here the vector $(v^\goth{m}u)\in\mathbb{R}^n$ generates the
translations, the matrix $(\lambda^\goth{m}u_{~\nu})\in\goth{o}(k,l)$
generates the isometries, $\phi$ -- the dilatations, and
$(\xi^\goth{m}u)\in\mathbb{R}^n$ -- the special conformal transformations.
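One can verify directly that a matrix $X$ of the form \eqref{e.c.o32} belongs to
$\goth{o}(k+1,l+1)$: denoting the above metric on $\mathbb{R}^{n+2}$ by $G$, one finds
that $GX$ is antisymmetric, i.e. $X^TG+GX=0$, precisely when
$g_{\goth{m}u\rho}\lambda^\rho_{~\nu}+g_{\nu\rho}\lambda^\rho_{~\goth{m}u}=0$, that is when
$(\lambda^\goth{m}u_{~\nu})\in\goth{o}(k,l)$; the blocks containing $v^\goth{m}u$, $\xi^\goth{m}u$
and $\phi$ satisfy the condition automatically.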
Let us turn to an arbitrary case of a conformal metric $[g]$ of
the signature $(k,l)$ on an $n$-dimensional manifold ${\goth{m}athcal M}$,
$n=k+l>2$. Let us choose a representative $g$ of $[g]$ and
consider an orthogonal coframe $(\goth{o}mega^\goth{m}u)$, in which
$g=g_{\goth{m}u\nu}\,\goth{o}mega^{\goth{m}u}\goth{o}times\goth{o}mega^\nu$ with constant
coefficients $g_{\goth{m}u\nu}$. We calculate the Levi-Civita connection
$\Gamma^\goth{m}u_{~\nu}$ for $g$, its Ricci tensor $R_{\goth{m}u\nu}$ and the
Ricci scalar $R$. Next we define the following one-forms \begin{equation}n
P_\nu=\left( \tfrac{1}{2-n}R_{\nu\rho}+\tfrac{1}{2(n-1)(n-2)}R
g_{\nu\rho} \right)\theta^\rho. \end{equation}n Given these objects we build
the following $\goth{g}oth{co}nf(k,l)$-valued matrix $\goth{o}mega^{conf}$ on $M$
\begin{equation}\label{e.cnc} \goth{o}mega^{conf}=\begin{pmatrix}
0&P_\nu&0\\
\theta^\goth{m}u&\Gamma^\goth{m}u_{~\nu}&g^{\goth{m}u\rho} P_\rho\\
0&g_{\nu\rho}\theta^\rho&0 \end{pmatrix}. \end{equation} This is the normal conformal
connection on ${\goth{m}athcal M}$ in the natural gauge.\footnote{The gauge is
natural since we have started from the Levi-Civita connection, not
from any Weyl connection for $g$, in which case \eqref{e.cnc}
contains a Weyl potential.} Consider now the conformal bundle
$CO(k,l)\ltimes\mathbb{R}^n\to{\mathcal P}\to{\goth{m}athcal M}$, and choose a coordinate system
$(h,x)$ on ${\mathcal P}$ compatible with the local triviality ${\mathcal P}\goth{g}oth{co}ng
CO(k,l)\ltimes\mathbb{R}^n \times{\goth{m}athcal M}$, where $x$ stands for $(x^\goth{m}u)$ in
${\goth{m}athcal M}$ and the matrix $h\in CO(k,l)\ltimes\mathbb{R}^n$ reads \begin{equation}n
h=\begin{equation}gin{pmatrix} {\rm e}^{-\phi}&{\rm
e}^{-\phi}g_{\nu\rho}\xi^\rho &
\frac{1}{2}{\rm e}^{-\phi}\xi^\rho\xi^\sigma g_{\rho\sigma}\\
0&\Lambda^{\goth{m}u}_{~\nu}&\Lambda^\goth{m}u_{~\rho}\xi^\rho\\
0&0&{\rm e}^\phi
\end{pmatrix},
\quad\quad\Lambda^\goth{m}u_{~\rho}\Lambda^\nu_{~\sigma}g_{\goth{m}u\nu}=g_{\rho\sigma}.
\end{equation}n
The normal conformal connection for $g$ is the following
$\goth{g}oth{co}nf(k,l)$-valued one-form on ${\mathcal P}$ \begin{equation}n
{\scriptstyle\wedge}\,h{\goth{o}mega}^{conf}(h,x)=h^{-1}\cdot\pi^*(\goth{o}mega^{conf}(x))\cdot h
+h^{-1}{\rm d} h. \end{equation}n The curvature of the normal conformal
connection is as follows \begin{equation}n
{\scriptstyle\wedge}\,h{K}^{conf}(h,x)=h^{-1}\cdot\pi^*(K^{conf}(x))\cdot h, \end{equation}n
where $K^{conf}$ is the curvature for $\goth{o}mega^{conf}$ on ${\goth{m}athcal M}$ \begin{equation}n
K^{conf}=\begin{pmatrix}
0&D P_\nu&0\\
0& C^\goth{m}u_{~\nu} &g^{\goth{m}u\rho}D P_\rho\\
0&0&0 \end{pmatrix}, \end{equation}n and \begin{equation}n DP_\goth{m}u={\rm d}
P_\goth{m}u+P_\nu{\scriptstyle\wedge}\,\Gamma^\nu_{~\goth{m}u}=\tfrac12P_{\goth{m}u\nu\rho}\goth{o}mega^\nu{\scriptstyle\wedge}\,\goth{o}mega^\rho.
\end{equation}n The curvature contains the lowest-order conformal invariants
for $g$, namely
\begin{equation}gin{itemize}
\item For $n\goth{g}eq4$ \begin{equation}n C^\goth{m}u_{~\nu}=\tfrac12
C^\goth{m}u_{~\nu\rho\sigma}\goth{o}mega^\rho{\scriptstyle\wedge}\,\goth{o}mega^\sigma\end{equation}n is the Weyl
conformal tensor, while \begin{equation}n
P_{\goth{m}u\nu\rho}=\tfrac{1}{n-3}\nabla_{\sigma}C^\sigma_{~\goth{m}u\nu\rho}\end{equation}n
is its divergence. \item For $n=3$ the Weyl tensor identically
vanishes, $C^\goth{m}u_{~\nu}= 0$, and the lowest-order conformal
invariant is the Cotton tensor $P_{\goth{m}u\nu\rho}$. It has five
independent components.
\end{itemize}
The normality of conformal connections, originally defined by E.
Cartan, is the following property. The algebra $\goth{g}oth{co}nf(k,l)$ is
graded: $\goth{g}oth{co}nf(k,l)=\goth{g}_{-1}\goth{o}plus\goth{g}_0\goth{o}plus\goth{g}_1$, where
translations are the $\goth{g}_{-1}$-part, $\goth{g}oth{co}(k,l)$ is the $\goth{g}_0$-part
and the special conformal transformations are the $\goth{g}_1$-part. The
normal connection for $[g]$ is the only conformal connection such
that the $\goth{g}oth{co}(k,l)$-part of its curvature, $C^\goth{m}u_{~\nu}=\tfrac12
C^\goth{m}u_{~\nu\rho\sigma}\goth{o}mega^\rho{\scriptstyle\wedge}\,\goth{o}mega^\sigma$, is traceless:
$C^\rho_{~\nu\rho\sigma}=0$. Cartan normal conformal connections
are normal in the sense of Tanaka.
\subsection{Normal conformal connection from ODEs}
As we mentioned, ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ is the normal conformal
connection over $\mathcal{S}$. In order to see this it is enough to
rearrange ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ according to the representation
\eqref{e.c.o32} \begin{equation}\label{e.c.conn_o}
{\scriptstyle\wedge}\,h{\goth{o}mega}^c=\begin{pmatrix} \vc{3} & -\tfrac{1}{2}\vc{6} & -\vc{4} & -\vc{5} & 0 \\
\goth{h}c{1} & \vc{3}-\vc{1} & -\goth{h}c{4} & 0 & -\vc{5} \\
\goth{h}c{2} & -\vc{2} & 0 & -\goth{h}c{4} & \vc{4} \\
\goth{h}c{3} & 0 &-\vc{2} & \vc{1}-\vc{3} & -\tfrac{1}{2}\vc{6} \\
0 & \goth{h}c{3} & -\goth{h}c{2} & \goth{h}c{1} & -\vc{3}
\end{pmatrix}
\end{equation} and calculate its curvature, which is equal to \begin{equation}n
{\scriptstyle\wedge}\,h{K}^c=\begin{pmatrix} 0 & DP_1 & DP_2 & DP_3 & 0 \\
0 & 0 & 0 & 0 & DP_3 \\
0 & 0 & 0 & 0 & -DP_2 \\
0 & 0 & 0 & 0 & DP_1 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix} \end{equation}n
with the following components of the Cotton tensor on ${\mathcal P}^c$
\begin{equation}gin{align*}
DP_1=&-\tfrac12\inc{C}{1}\,\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{2}-\inc{B}{2}\,\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{3}-\inc{B}{3}\,\goth{h}c{2}{\scriptstyle\wedge}\,\goth{h}c{3}, \notag \\
DP_2=&-\inc{B}{2}\,\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{2}-2\inc{B}{3}\,\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{3}-\inc{B}{4}\,\goth{h}c{2}{\scriptstyle\wedge}\,\goth{h}c{3}, \\
DP_3=&-\inc{B}{3}\,\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{2}-\inc{B}{4}\,\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{3}-\inc{B}{1}\,\goth{h}c{2}{\scriptstyle\wedge}\,\goth{h}c{3}. \notag
\end{align*}
Finally, we pull back these formulae to ${\goth{m}athcal J}^2$ through $u_1=u_3=1$,
$u_2=u_4=u_5=u_6=0$ and get
\begin{equation}gin{align*}
DP_1=&(\tfrac12M_p+\tfrac16F_qM_q+\tfrac16F_{qqq}K_y+K_qL_q-\tfrac16 K^2
F_{qqqq}+\notag \\
&+\tfrac16K_qF_{qqy}-\tfrac16F_{qqyy}-\tfrac13F_{qqq}K_qK+\tfrac13 F_{qqy}K)\goth{o}mega^1{\scriptstyle\wedge}\,\goth{o}mega^2
\notag \\
&+\tfrac12\left(M_q-K_{qqq}K-2K_{qq}K_q+K_{qqy}\right)\goth{o}mega^1{\scriptstyle\wedge}\,{\scriptstyle\wedge}\,t{\goth{o}mega}^3+\notag \\
&-\tfrac12 L_{qq}\goth{o}mega^2{\scriptstyle\wedge}\,{\scriptstyle\wedge}\,t{\goth{o}mega}^3, \\
DP_2=&\tfrac12\left(M_q-K_{qqq}K-2K_{qq}K_q+K_{qqy}\right)\goth{o}mega^1{\scriptstyle\wedge}\,\goth{o}mega^2+\notag \\
&-L_{qq}\goth{o}mega^1{\scriptstyle\wedge}\,{\scriptstyle\wedge}\,t{\goth{o}mega}^3+\tfrac12 K_{qqq}\goth{o}mega^2{\scriptstyle\wedge}\,{\scriptstyle\wedge}\,t{\goth{o}mega}^3, \notag \\
DP_3=&-\tfrac12 L_{qq}\goth{o}mega^1{\scriptstyle\wedge}\,\goth{o}mega^2+\tfrac12 K_{qqq}\goth{o}mega^1{\scriptstyle\wedge}\,{\scriptstyle\wedge}\,t{\goth{o}mega}^3
-\tfrac16F_{qqqq}\goth{o}mega^2{\scriptstyle\wedge}\,{\scriptstyle\wedge}\,t{\goth{o}mega}^3. \notag
\end{align*}
The formulae for the conformal connection and curvature (in a
slightly different notation) are given in \cite{Nur2,New3}.
\section{Contact projective geometry on first jet
space}\label{s.c-p} \noindent The connection ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ gives
rise not only to the above conformal structure but also to a geometry
on the first jet space ${\goth{m}athcal J}^1$. As we know, there are two basic
contact invariants in the curvature ${\scriptstyle\wedge}\,h{K}^c$: $W$ and
$F_{qqqq}$. The condition $W=0$ yields the conformal geometry. Let
us now examine the second possibility
$$F_{qqqq}=0.$$ A quick inspection of the structural equations
\eqref{e.c.dtheta_10d} shows that in this case the curvature does
not contain $\theta^i{\scriptstyle\wedge}\,\theta^3$ terms and thereby ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$
is an $\goth{sp}(4,\mathbb{R})$ Cartan connection on $H_7\to{\mathcal P}^c\to{\goth{m}athcal J}^1$.
A natural question is to what geometric structure ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$
is now related. In order to answer it let us look at the geometry
defined on ${\goth{m}athcal J}^1$ by the solutions of the underlying ODE
$y'''=F(x,y,y',y'')$. As we have already said, the solutions form a
congruence on ${\goth{m}athcal J}^2$, since exactly one solution passes
through each point $(x_0,y_0,p_0,q_0)\in{\goth{m}athcal J}^2$. The $G_c$-structure on
${\goth{m}athcal J}^2$ defined by \eqref{e.G_c} preserves this congruence and also
preserves the contact invariant information about the ODE. The
geometry on ${\goth{m}athcal J}^2$ is then of first order, since it is
characterized by the group $G_c$ acting on the tangent space
$T{\goth{m}athcal J}^2$.
The geometry of an
ODE on ${\goth{m}athcal J}^1$ is of a different kind. The family of solutions is no
longer a congruence. Indeed, through a fixed
$(x_0,y_0,p_0)\in{\goth{m}athcal J}^1$ there pass many solutions, each of them
corresponding to some value of $q_0$, which is given by a choice
of a tangent direction at $(x_0,y_0,p_0)$. Such a choice, however,
cannot be made in an arbitrary way; a solution $y=f(x)$ lifts to
a curve $x\goth{m}apsto(x,f(x),f'(x))$ in ${\goth{m}athcal J}^1$, whose tangent vector
field $\partial_x+f'(x)\partial_y+f''(x)\partial_p$ is always
annihilated by the one-form $\goth{o}mega^1={\rm d} y-p{\rm d} x$ on ${\goth{m}athcal J}^1$.
In this manner all the admissible tangent vectors to the solutions
form a rank-two distribution $\goth{m}athcal{C}$, {\em the contact distribution
on ${\goth{m}athcal J}^1$}, which is annihilated by $\goth{o}mega^1$. This is the first
difference between ${\goth{m}athcal J}^2$ and ${\goth{m}athcal J}^1$: we have the rank-two
distribution $\goth{m}athcal{C}$ instead of the congruence. The second and more
important difference is that we still need to distinguish the
family of solutions among a class of all curves with their tangent
fields in $\goth{m}athcal{C}$.
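For example, for $y'''=0$ the solution $y=\tfrac12x^2$ lifts to the curve
$x\goth{m}apsto(x,\tfrac12x^2,x)$ in ${\goth{m}athcal J}^1$, whose tangent field
$\partial_x+x\partial_y+\partial_p$ is indeed annihilated by
$\goth{o}mega^1={\rm d} y-p\,{\rm d} x$, while the curve
$x\goth{m}apsto(x,\tfrac1{24}x^4,\tfrac16x^3)$, say, is also everywhere tangent to
$\goth{m}athcal{C}$ although it is not the lift of any solution of $y'''=0$.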
There are at least two basic methods of doing this. In the first
method we treat the tangent directions of curves in ${\goth{m}athcal J}^1$ as new
dimensions and move to the space where over each point
$(x,y,p)\in{\goth{m}athcal J}^1$ there is the fibre of all the possible tangent
directions. In this manner the `entangled' family of solutions in
${\goth{m}athcal J}^1$ would be `stretched' to a congruence in the new space.
However, the bundle of tangent directions of ${\goth{m}athcal J}^1$ is nothing but
the second jet space ${\goth{m}athcal J}^2$ and thereby we would come back to the
description given in theorem \ref{th.c.1}.
The other way is to describe the family of solutions in ${\goth{m}athcal J}^1$ as a
family of unparameterized geodesics of a linear connection on
${\goth{m}athcal J}^1$. This approach leads us to the notion of projective
differential geometry.
\subsection{Contact projective geometry}\label{s.cp3}
This geometry has been exhaustively analyzed in \cite{Fox}, see
also \cite{Cap,Cap2}. We will not discuss the general theory here
but focus on an application of the three-dimensional case to the
ODEs. The definition of contact-projective geometry, see D. Fox
\cite{Fox}, adapted to our situation is the following.
\begin{equation}gin{definition}\label{def.c.cp}
A contact projective structure on the first jet space ${\goth{m}athcal J}^1$ is
given by the following data.
\begin{equation}gin{itemize}
\item[i)] The contact distribution $\goth{m}athcal{C}$, that is a distribution
annihilated by $$\goth{o}mega^1={\rm d} y -p{\rm d} x.$$ \item[ii)] A family
of unparameterized curves everywhere tangent to $\goth{m}athcal{C}$ and such
that: a) for a given point and a direction in $\goth{m}athcal{C}$ there is
exactly one curve passing through that point and tangent to that
direction, b) curves of the family are among unparameterized
geodesics for some linear connection on ${\goth{m}athcal J}^1$.
\end{itemize}
\end{definition}
A contact projective structure on ${\goth{m}athcal J}^1$ is equivalently given by
a family of linear connections, whose geodesic spray contains the
family of curves. For $\nabla$ to belong to this class one needs
\begin{equation}\label{e.cp.geod}\nabla_V V=\lambda (V) V \end{equation} along every curve
in the family, where $V$ denotes a tangent field to the considered
curve and $\lambda(V)$ is a function. Given two such connections
$\nabla$ and ${\scriptstyle\wedge}\,t{\nabla}$, their difference is a $(2,1)$-type
tensor field $$A(X,Y)={\scriptstyle\wedge}\,t{\nabla}_X Y-\nabla_X Y,$$ for all $X$
and $Y$. Simultaneously we have $A(V,V)=\goth{m}u(V) V$ for $V\in\goth{m}athcal{C}$,
where $\goth{m}u(V)={\scriptstyle\wedge}\,t{\lambda}(V)-\lambda(V)$ and $\goth{m}u_w$ at a point
$w\in{\goth{m}athcal J}^1$ is a covector on the vector space $\goth{m}athcal{C}_w$. By
polarization we obtain \begin{equation}\label{e.c.A}
A(X,Y)+A(Y,X)=\goth{m}u(X)Y+\goth{m}u(Y)X, \qquad X,Y\in\goth{m}athcal{C}.\end{equation} The connections
associated to a contact projective structure, when considered at a
point, form an affine space characterized by the above $A$.
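The polarization is straightforward: inserting $V=X+Y$ into $A(V,V)=\goth{m}u(V)V$,
using bilinearity of $A$ and subtracting $A(X,X)=\goth{m}u(X)X$ and
$A(Y,Y)=\goth{m}u(Y)Y$, one obtains precisely \eqref{e.c.A}.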
Let us describe $A$ more explicitly. We choose a frame
$(e_1=\partial_y, e_2=\partial_p, e_3=\partial_x+p\partial_y)$ and
denote the dual frame by $(\sigma^1,\sigma^2,\sigma^3)$. In
particular $\goth{o}mega^1=\sigma^1$ and $\goth{m}athcal{C}=<e_2,e_3>$. Let
$i,j,\ldots=1,2,3$ and $I,J,\ldots=2,3$. Now $\nabla_j e_i=
\Gamma^k_{~ij} e_k$,
$A^k_{~ij}={\scriptstyle\wedge}\,t{\Gamma}^k_{~ij}-\Gamma^k_{~ij}$, $\goth{m}u=\goth{m}u_I\sigma^I$
and \eqref{e.c.A} reads \begin{equation}n A^k_{~(IJ)}=\goth{m}u_{(I}
\delta^{k}_{~J)}. \end{equation}n The relevant components are equal to
\begin{equation}gin{align}\label{e.c.Aco}
&A^1_{~22}=0, &&A^1_{~(23)}=0, &&A^1_{~33}=0, \notag \\
&A^2_{~22}=\goth{m}u_2, &&A^2_{~(23)}=\tfrac12\goth{m}u_3,&& A^2_{~33}=0, \\
&A^3_{~22}=0, &&A^3_{~(23)}=\tfrac12\goth{m}u_2, && A^3_{~33}=\goth{m}u_3
\notag \end{align} and the rest of $A^k_{~ij}$ is free. Elementary
calculations assure us that the class of admissible connections is
a $20$-dimensional affine subspace of the $27$-dimensional space of all
linear connections on ${\goth{m}athcal J}^1$: the nine components $A^k_{~(IJ)}$ are determined
by the two parameters $\goth{m}u_2,\goth{m}u_3$, while the remaining $27-9=18$ components
are arbitrary. Another constraint for the
connections is given by \begin{equation}\label{e.c.ompr} (\nabla_V
\goth{o}mega^1)V=\nabla_V(\goth{o}mega^1(V))=0,\qquad V\in\goth{m}athcal{C}. \end{equation} In our frame
this is equivalent to $\Gamma^1_{~(IJ)}=0$. Combining eqs.
\eqref{e.c.Aco} and \eqref{e.c.ompr} we obtain
\begin{equation}gin{proposition}\label{prop.cp.coord}
The following quantities are invariant with respect to a choice of
a connection in the class distinguished by a contact projective
structure on ${\goth{m}athcal J}^1$
\begin{equation}gin{subequations}\label{e.cp.niezm}
\begin{equation}gin{align}
&\Gamma^1_{~22}=0, && \Gamma^1_{~(23)}=0, && \Gamma^1_{~33}=0, && \label{e.cp.niezm1} \\
&\Gamma^3_{~22}, && 2\Gamma^3_{~(23)}-\Gamma^2_{~22}, &&
\Gamma^3_{~33}-2\Gamma^2_{~(23)}, && \Gamma^2_{~33}.
\label{e.cp.niezm2}
\end{align}
\end{subequations}
The connection coefficients are calculated in a frame $(e_i)$ such
that $\goth{m}athcal{C}=<e_2,e_3>$.
The values of the four unspecified combinations \eqref{e.cp.niezm2}
define a contact projective structure.
\end{proposition}
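The invariance of \eqref{e.cp.niezm2} can be checked directly from
\eqref{e.c.Aco}: under $\Gamma\to\Gamma+A$ the combination
$2\Gamma^3_{~(23)}-\Gamma^2_{~22}$ changes by
$2A^3_{~(23)}-A^2_{~22}=\goth{m}u_2-\goth{m}u_2=0$ and $\Gamma^3_{~33}-2\Gamma^2_{~(23)}$
changes by $A^3_{~33}-2A^2_{~(23)}=\goth{m}u_3-\goth{m}u_3=0$, while $\Gamma^3_{~22}$ and
$\Gamma^2_{~33}$ do not change at all, since $A^3_{~22}=A^2_{~33}=0$.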
Among the above connections there is a distinguished subclass of
those connections which covariantly preserve the distribution
$\goth{m}athcal{C}$. We shall call them compatible connections. They satisfy not
only \eqref{e.c.ompr} but a stronger condition \begin{equation}n \nabla_X
\goth{o}mega^1 =\phi(X) \goth{o}mega^1, \qquad \text{for all } X,\end{equation}n with
some one-form $\phi$. We have the following
\begin{equation}gin{proposition}
A compatible connection has non-vanishing torsion.
\begin{equation}gin{proof}
\begin{equation}gin{align*}
{\rm d} \goth{o}mega^1(X,Y)&=\tfrac12((\nabla_X \goth{o}mega^1) Y
-(\nabla_Y\goth{o}mega^1)X+\goth{o}mega^1(T(X,Y)))= \\
&=(\phi{\scriptstyle\wedge}\,\goth{o}mega^1)(X,Y)+\tfrac12\goth{o}mega^1(T(X,Y)), \\
\intertext{thus}
({\rm d}\goth{o}mega^1{\scriptstyle\wedge}\,\goth{o}mega^1)(X,Y,Z)&=\tfrac12(\goth{o}mega^1(T(X,Y))\goth{o}mega^1(Z)+{\rm
cycl. perm.})\neq0.
\end{align*}
The last expression is non-zero because $\goth{o}mega^1$ is a contact form on
${\goth{m}athcal J}^1$, so that ${\rm d}\goth{o}mega^1{\scriptstyle\wedge}\,\goth{o}mega^1\neq0$; hence
$\goth{o}mega^1(T(\cdot,\cdot))$, and thereby the torsion $T$, cannot vanish.
\end{proof}
\end{proposition}
\subsection{Contact projective geometries from ODEs}
It is obvious that the family of solutions of an arbitrary
third-order ODE satisfies the conditions i) and ii a) of
definition \ref{def.c.cp}. (Condition ii a) is satisfied with the
possible exception of the direction $\partial_p$, which belongs to
$\goth{m}athcal{C}$ but is not tangent to any solution in general. However,
this exception is irrelevant since our consideration is local on
$T{\goth{m}athcal J}^1$.) We ask when the solutions form a subfamily of geodesics
for a linear connection.
\begin{equation}gin{lemma} \label{lem.c-p}
A third-order ODE $y'''=F(x,y,y',y'')$ defines a
contact-projective structure on ${\goth{m}athcal J}^1$ if and only if
$F_{qqqq}=0$. Moreover, the quantities \eqref{e.cp.niezm2} are
given by \begin{equation}\label{e.cp.gam}\begin{aligned} &\Gamma^3_{~22}=a_3, &&
2\Gamma^3_{~(23)}-\Gamma^2_{~22}=a_2, \\
&\Gamma^3_{~33}-2\Gamma^2_{~(23)}=a_1, && \Gamma^2_{~33}=-a_0,
\end{aligned}\end{equation} where \begin{equation}n F=a_3q^3+a_2q^2+a_1q+a_0. \end{equation}n
\begin{equation}gin{proof}
The field $V=\tfrac{{\rm d}}{{\rm d} x}$ tangent to a solution
$(x,f(x),f'(x))$ equals $V=f''e_2+e_3$ in the frame $(e_i)$. The
geodesic equations \eqref{e.cp.geod} read
\begin{equation}gin{align*}
& (f'')^2\,\Gamma^1_{~22}+2f''\Gamma^1_{~(23)}+\Gamma^1_{~33}=0, \\
& f'''+(f'')^2\,\Gamma^2_{~22}+2f''\Gamma^2_{~(23)}+\Gamma^2_{~33}=\lambda(V) f'', \\
& (f'')^2\,\Gamma^3_{~22}+2f''\Gamma^3_{~(23)}+\Gamma^3_{~33}=\lambda(V).
\end{align*}
The first of these equations holds for arbitrary values of $f''$, hence it
is equivalent to \eqref{e.cp.niezm1}.
From the remaining equations we have that
$$
f'''=\Gamma^3_{~22}
{f''}^3+(2\Gamma^3_{~(23)}-\Gamma^2_{~22}){f''}^2
+(\Gamma^3_{~33}-2\Gamma^2_{~(23)})f''-\Gamma^2_{~33}
$$
is satisfied along every solution. Comparing this with $f'''=F(x,f,f',f'')$
and using the fact that solutions with every value of $q=f''$ pass through
each point of ${\goth{m}athcal J}^1$, we see that $F$ must be a polynomial of degree at most
three in $q$, i.e. $F_{qqqq}=0$, with the coefficients given by \eqref{e.cp.gam}.
\end{proof}
\end{lemma}
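In the simplest example $y'''=0$ all the coefficients in \eqref{e.cp.gam}
vanish, so the contact projective structure is realized, e.g., by the
connection with $\Gamma^k_{~ij}=0$ in the frame $(e_i)$; its geodesics tangent
to $\goth{m}athcal{C}$ are, apart from the curves in the exceptional direction $\partial_p$
mentioned above, precisely the lifts $x\goth{m}apsto(x,f(x),f'(x))$ of the
solutions $f(x)=c_1+c_2x+\tfrac12c_3x^2$.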
We observe that the condition $F_{qqqq}=0$ yields
$\inc{B}{1}=\inc{B}{2}=\inc{B}{3}=\inc{B}{4}=0$, which removes all
$\theta^i{\scriptstyle\wedge}\,\theta^3$ terms in the curvature and turns
${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ into a connection over ${\goth{m}athcal J}^1$, since in the
curvature there remain only terms proportional to
$\theta^1{\scriptstyle\wedge}\,\theta^2$, $\theta^1{\scriptstyle\wedge}\,\theta^4$ and $\theta^2{\scriptstyle\wedge}\,\theta^4$, which are horizontal with respect to
${\mathcal P}^c\to{\goth{m}athcal J}^1$. Furthermore, the algebra $\goth{sp}(4,\mathbb{R})\goth{g}oth{co}ng\goth{o}(3,2)$
admits, apart from the gradings already mentioned, the following one
\begin{equation}gin{align}
&\goth{sp}(4,\mathbb{R})=\goth{g}_{-2}\goth{o}plus\goth{g}_{-1}\goth{o}plus\goth{g}_0\goth{o}plus\goth{g}_1\goth{o}plus\goth{g}_2,
\label{c.gradJ1} \\ \intertext{which reads in the basis
\eqref{e.c.basis_sp}:}
&\goth{g}_{-2}=<e_1>, \qquad \goth{g}_{-1}=<e_2,e_4 >, \notag \\
&\goth{g}_{0}=<e_3,e_5,e_7,e_9>, \notag \\
&\goth{g}_{1}=<e_6,e_8>, \qquad \goth{g}_{2}=<e_{10}> \notag.
\end{align}
After calculating Tanaka's normality conditions by the method of
chapter \ref{ch.contact} section \ref{s.c.normalcon}, we observe
that ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ is the normal connection with respect to the
grading \eqref{c.gradJ1}. In this manner we have re-proved the
known fact that to a three-dimensional contact projective geometry
there is associated the unique normal $\goth{sp}(4,\mathbb{R})$-valued Cartan
connection.
\begin{equation}gin{proposition}
If the contact projective geometry on ${\goth{m}athcal J}^1$ exists, then
${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ of theorem \ref{th.c.1} is the normal Cartan
connection for this geometry.
\end{proposition}
From ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ one may reconstruct the compatible
connections. To do this we just observe that the first, second and
fourth equations of
\eqref{e.c.dtheta_10d} can be written as \begin{equation}n \begin{pmatrix} {\rm d} \goth{h}p{1} \\
{\rm d} \goth{h}p{2}\\ {\rm d} \goth{h}p{4} \end{pmatrix}
+\goth{u}nderbrace{\begin{pmatrix} -\mathcal{O}mega_1 & 0 & 0 \\
-\mathcal{O}mega_2 & -\mathcal{O}mega_3 & \theta^3 \\
-\mathcal{O}mega_4 & -\mathcal{O}mega_5 & \mathcal{O}mega_3-\mathcal{O}mega_1 \end{pmatrix}}_{\displaystyle
{\scriptstyle\wedge}\,h{\Gamma}}
{\scriptstyle\wedge}\,edge
\begin{pmatrix} \goth{h}p{1} \\ \goth{h}p{2}\\ \goth{h}p{4} \end{pmatrix} = \begin{pmatrix} \goth{h}p{4}{\scriptstyle\wedge}\,\goth{h}p{2}
\\\goth{h}p{4}{\scriptstyle\wedge}\,\goth{h}p{3} \\ 0 \end{pmatrix}. \end{equation}n
The three-by-three matrix denoted by ${\scriptstyle\wedge}\,h{\Gamma}$ is the
$\goth{g}_0\goth{o}plus\goth{g}_1$-part of ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$. The following
proposition holds.
\begin{equation}gin{proposition}For any section $s\goth{g}oth{co}lon{\goth{m}athcal J}^1\to{\mathcal P}^c$ the pull-back
$s^*{\scriptstyle\wedge}\,h{\Gamma}$ written in the coframe
$(s^*\theta^1,s^*\theta^2,s^*\theta^4)$ is a connection compatible
with the contact projective geometry.
\begin{equation}gin{proof}
First we choose the section $s_0\goth{g}oth{co}lon{\goth{m}athcal J}^1\to{\mathcal P}^c$ given by $q=0$,
$u_1=1$, $u_3=1$ and $u_2=u_4=u_5=u_6=0$. We denote
$\Gamma=s^*_0{\scriptstyle\wedge}\,h{\Gamma}$. In the coframe
$\sigma^1=s^*_0\theta^1$, $\sigma^2=s^*_0\theta^2$ and
$\sigma^3=s^*_0\theta^4$ we have
$-s^*_0\mathcal{O}mega_3=\Gamma^2_{~2}=\Gamma^2_{~2k}\sigma^k$,
$s^*_0\theta^3=\Gamma^2_{~3}=\Gamma^2_{~3k}\sigma^k$ and so on.
Equations \eqref{e.cp.gam} follow from \eqref{e.c.om} and
\eqref{e.c.Om0}, provided that $F_{qqqq}=0$.
Next we consider an arbitrary section $s\goth{g}oth{co}lon{\goth{m}athcal J}^1\to{\mathcal P}^c$. In the
local trivialization ${\mathcal P}^c\goth{g}oth{co}ng H_7\times {\goth{m}athcal J}^1$ we have ${\mathcal P}^c\ni
w=(v,x)$, where $v\in H_7$, $x\in{\goth{m}athcal J}^1$ and $s$ is given by
$x\goth{m}apsto (v(x),x)$. Now
$s^*{\scriptstyle\wedge}\,h{\goth{o}mega}^c(x)=v^{-1}(x)s^*_0{\scriptstyle\wedge}\,h{\goth{o}mega}^c(x)v(x)
+v^{-1}(x){\rm d} v(x)$, and $s^*{\scriptstyle\wedge}\,h{\Gamma}$ is the $\goth{g}_0\goth{o}plus\goth{g}_1$
part of $s^*{\scriptstyle\wedge}\,h{\goth{o}mega}^c$. Since the Lie algebra of $H_7$ is
$\goth{g}_0\goth{o}plus\goth{g}_1\goth{o}plus\goth{g}_2$, every $v(x)$ in the connected
component of the identity may be written in the form
$v(x)=v_2(x)v_1(x)=\exp{(t_2(x)A_2(x))}\exp{(t_1(x)A_1(x))}$ with
$A_2(x)\in\goth{g}_2$ and $A_1(x)\in\goth{g}_0\goth{o}plus\goth{g}_1$. It follows that
\begin{equation}n
s^*{\scriptstyle\wedge}\,h{\goth{o}mega}^c(x)=v^{-1}(x)_2\left\{v^{-1}_1(x)s^*_0{\scriptstyle\wedge}\,h{\goth{o}mega}^c(x)v_1(x)
+v^{-1}_1(x){\rm d} v_1(x)\right\}v_2(x) +v^{-1}_2(x){\rm d} v_2(x)\end{equation}n
But the
$\goth{g}_0\goth{o}plus\goth{g}_1$ part of the quantity in the curly brackets is the
connection $\Gamma=s^*_0{\scriptstyle\wedge}\,h{\Gamma}$ written in the coframe
$(s^*\theta^1,s^*\theta^2,s^*\theta^4)$ and $\ad v^{-1}(x)$
transforms it into another compatible connection, according to
\eqref{e.c.Aco}.
\end{proof}
\end{proposition}
\section{Six-dimensional conformal geometry in the split
signature}\label{s.c6d} \noindent Until now we have not proposed
any geometric structure, apart from ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$, that could be
associated with an ODE of generic type. Motivated by S.-S. Chern's
construction we would like to build some kind of conformal
geometry starting from an arbitrary ODE, which does not
necessarily satisfy the W\"unschmann condition.
Let us define the `inverse' of the symmetric tensor field $
{\scriptstyle\wedge}\,h{g}=2\theta^1\theta^3-(\theta^2)^2$ (the tensor field of section \ref{s.c.conf}
taken with the opposite sign, which does not affect the conformal class)
to be ${\scriptstyle\wedge}\,h{g}_{inv}={\scriptstyle\wedge}\,h{g}^{ij}X_i\goth{o}times X_j=2X_1X_3-(X_2)^2$. We
take the $\goth{o}(2,1)$-part of the connection ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ \begin{equation}n
\Gamma= \begin{pmatrix}
\vc{3}-\vc{1} & -\goth{h}c{4} & 0 \\
-\vc{2} & 0 & -\goth{h}c{4} \\
0 &-\vc{2} & \vc{1}-\vc{3} \end{pmatrix}, \end{equation}n and the Levi-Civita symbol
$\epsilon_{ijk}$ in three dimensions. Next we define a new
bilinear form ${\scriptstyle\wedge}\,h{\goth{g}g}$ on ${\mathcal P}^c$ \begin{equation}n {\scriptstyle\wedge}\,h{\goth{g}g}
=\epsilon_{ijk}\,{\scriptstyle\wedge}\,h{g}^{kl}\,\theta^i\,\Gamma^j_{~l}. \end{equation}n The
above method of obtaining ${\scriptstyle\wedge}\,h{\goth{g}g}$ of the split degenerate
signature from ${\scriptstyle\wedge}\,h{g}$ is called the Sparling procedure
\cite{Nur1}. The new metric reads \begin{equation}\label{e.c.g33}
{\scriptstyle\wedge}\,h{\goth{g}g}=2(\mathcal{O}mega_1-\mathcal{O}mega_3)\theta^2-2\mathcal{O}mega_2\theta^1+2\theta^4\theta^3
\end{equation} and was given in \cite{Nur1} in a slightly different context
for the first time. We easily find that its degenerate directions
$X_8,X_9,X_{10},$
and $X_5+X_7$ form an integrable distribution, so that one can consider
the
six-dimensional space ${\goth{m}athcal M}^6$ of its integral leaves. The degenerate
directions $X_8,X_9,$ and $X_{10}$ are isometries
\begin{equation}n L_{X_8}{\scriptstyle\wedge}\,h{\goth{g}g}=L_{X_9}{\scriptstyle\wedge}\,h{\goth{g}g}=L_{X_{10}}{\scriptstyle\wedge}\,h{\goth{g}g}=0,
\end{equation}n
whereas the fourth direction, $X_5+X_7$, is an infinitesimal conformal
symmetry \begin{equation}n L_{(X_5+X_7)}{\scriptstyle\wedge}\,h{\goth{g}g}={\scriptstyle\wedge}\,h{\goth{g}g}.\end{equation}n This
allows us to project ${\scriptstyle\wedge}\,h{\goth{g}g}$ to the split signature conformal
metric $[\goth{g}g]$ on ${\goth{m}athcal M}^6$ without any assumptions about the
underlying ODE.
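The degeneracy of these directions can also be read off directly from
\eqref{e.c.g33}: the forms $\theta^i$ annihilate all of $X_5,\ldots,X_{10}$,
the form $\mathcal{O}mega_2$ annihilates $X_5,X_7,X_8,X_9,X_{10}$, and
$(\mathcal{O}mega_1-\mathcal{O}mega_3)(X_5+X_7)=1-1=0$, whence
${\scriptstyle\wedge}\,h{\goth{g}g}(X_8,\cdot\,)={\scriptstyle\wedge}\,h{\goth{g}g}(X_9,\cdot\,)={\scriptstyle\wedge}\,h{\goth{g}g}(X_{10},\cdot\,)={\scriptstyle\wedge}\,h{\goth{g}g}(X_5+X_7,\cdot\,)=0$.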
It is interesting to study the normal conformal connection
associated to this geometry. Since ${\mathcal P}^c$ is a subbundle of the
conformal bundle over ${\goth{m}athcal M}^6$, we can calculate the $\goth{o}(4,4)$-valued
normal conformal connection \eqref{e.cnc} at once on ${\mathcal P}^c$. It is
as follows.
\begin{equation}gin{equation*}
{\scriptstyle\wedge}\,h{\goth{m}athbf{w}}=\begin{pmatrix} \tfrac12\mathcal{O}mega_1&0&0&\tfrac12\mathcal{O}mega_2&-\tfrac12\mathcal{O}mega_4&-\tfrac12\mathcal{O}mega_6&0&0\\\\
\mathcal{O}mega_1-\mathcal{O}mega_{{3}}&\mathcal{O}mega_3-\tfrac12\mathcal{O}mega_1&\tfrac12\theta_4&\tfrac12\mathcal{O}mega_2&0&-\goth{m}athrm{w}^3_{~5}
&\mathcal{O}mega_5&-\tfrac12\mathcal{O}mega_4\\\\
-\mathcal{O}mega_2&\mathcal{O}mega_2&\tfrac12\mathcal{O}mega_1&\goth{m}athrm{w}^3_{~4}&\goth{m}athrm{w}^3_{~5}&0&\mathcal{O}mega_4&-\tfrac12\mathcal{O}mega_6\\\\
\theta_4&0&0&\mathcal{O}mega_3-\tfrac12\mathcal{O}mega_1&-\mathcal{O}mega_5&-\mathcal{O}mega_4&0&0\\\\
\theta_2&0&0&\theta_3&\tfrac12\mathcal{O}mega_1-\mathcal{O}mega_3&-\mathcal{O}mega_2&0&0\\\\
\theta_1&0&0&\tfrac12\theta_2&-\tfrac12\theta_4&-\tfrac12\mathcal{O}mega_1&0&0\\\\
\theta_3&-\theta_3&-\tfrac12\theta_2&0&-\tfrac12\mathcal{O}mega_2&-\goth{m}athrm{w}^3_{~4}&\tfrac12\mathcal{O}mega_1-\mathcal{O}mega_3&
\tfrac12\mathcal{O}mega_{{2}}\\\\%WIERSZ 7
0&\theta_2&\theta_1&\theta_3&\mathcal{O}mega_1-\mathcal{O}mega_3&-\mathcal{O}mega_2&\theta_4&-\tfrac12\mathcal{O}mega_1
\end{pmatrix},
\end{equation*}
where
\begin{equation}gin{align*}
\goth{m}athrm{w}^3_{~4}=&\inc{A}{4}\theta^1+\inc{A}{2}\theta^2+\inc{A}{1}\theta^4,\\
\goth{m}athrm{w}^3_{~5}=&\tfrac12\mathcal{O}mega_6+\inc{A}{3}\theta^1+\inc{A}{5}\theta^2+\inc{A}{2}\theta^4.
\end{align*}
It appears that this connection is of a very special form. We show
that the algebra of its holonomy group is reduced to
$\goth{o}(3,2)\semi{.}\mathbb{R}^5$. Let us write down the connection as
\begin{equation}gin{align*}
{\scriptstyle\wedge}\,h{\goth{m}athbf{w}}=&(\mathcal{O}mega_1-\mathcal{O}mega_3)e_1-\mathcal{O}mega_2e_2+\theta^4e_3+\theta^2e_4+\theta^1e_5+\theta^3e_6+ \\
&+\mathcal{O}mega_1e_7+\mathcal{O}mega_4e_8+\mathcal{O}mega_5e_9+\mathcal{O}mega_6e_{10}+\goth{m}athrm{w}^3_{~5}e_{11}+\goth{m}athrm{w}^3_{~4}e_{12},
\end{align*}
where $e_1,\ldots,e_{12}$ are appropriate matrices in $\goth{o}(4,4)$.
The space \begin{equation}n V=\,<e_1,\ldots,e_{12}>\,\subset\goth{o}(4,4) \end{equation}n is not
closed under the commutation relations, hence $V$ is not a Lie
subalgebra. However, if we extend $V$ so that it contains three
commutators $e_{13}=[e_3,e_{12}]$, $e_{14}=[e_5,e_{10}]$ and
$e_{15}=[e_5,e_{12}]$ then $<e_1,\ldots,e_{15}>$ is a Lie algebra,
a certain semidirect sum of $\goth{o}(3,2)$ and $\mathbb{R}^5$. Bases of the
factors are the following: \begin{equation}n \mathbb{R}^5=<e_1+2e_7-2e_{14}, e_{11},
e_{12}, e_{13}, e_{15}>, \end{equation}n \begin{equation}n \goth{o}(3,2)=<e_2+e_{13}, e_3, e_4,
e_5, e_6-e_{15},e_7,e_8,e_9,e_{10},e_{14}>. \end{equation}n The matrix of
${\scriptstyle\wedge}\,h{\goth{m}athbf{w}}$ can be transformed to the following conjugate
representation, which reveals its structure clearly \begin{equation}n \begin{pmatrix}
\tfrac{1}{2}\vc{1} & \tfrac{1}{2}\vc{2} & -\tfrac{1}{2}\vc{4} &
-\tfrac{1}{4}\vc{6} &
2\vc{2} & -2\goth{m}athrm{w}^3_{~4} & 2\goth{m}athrm{w}^3_{~5} & 0\\\\
\goth{h}c{4} & \vc{3}-\tfrac{1}{2}\vc{1} & -\vc{5} &-\tfrac{1}{2}\vc{4} &
4\vc{3}-4\vc{1} & -2\vc{2} & 0 & -2\goth{m}athrm{w}^3_{~5} \\\\
\goth{h}c{2} & \goth{h}c{3} & \tfrac{1}{2}\vc{1}-\vc{3} & -\tfrac{1}{2}\vc{2}
&
4\goth{h}c{3} & 0 & 2\vc{2} & 2\goth{m}athrm{w}^3_{~4} \\\\
2\goth{h}c{1} & \goth{h}c{2} & -\goth{h}c{4} & -\tfrac{1}{2}\vc{1} &
0 & -4\goth{h}c{3} & 4\vc{1}-4\vc{3} & -2\vc{2}\\\\
0 & 0 & 0 & 0 & \tfrac{1}{2}\vc{1} & \tfrac{1}{2}\vc{2} & \tfrac{1}{2}\vc{4} & \tfrac{1}{4}\vc{6} \\\\
0 & 0 & 0 & 0 & \goth{h}c{4} & \vc{3}-\tfrac{1}{2}\vc{1} & \vc{5} &\tfrac{1}{2}\vc{4} \\\\
0 & 0 & 0 & 0 & -\goth{h}c{2} & -\goth{h}c{3} & \tfrac{1}{2}\vc{1}-\vc{3} & -\tfrac{1}{2}\vc{2}\\\\%WIERSZ 7
0 & 0 & 0 & 0 & -2\goth{h}c{1} & -\goth{h}c{2} & -\goth{h}c{4} & -\tfrac{1}{2}\vc{1}
\end{pmatrix}. \end{equation}n ${\scriptstyle\wedge}\,h{\goth{m}athbf{w}}$ has the following block structure in
this representation
\begin{equation}n {\scriptstyle\wedge}\,h{\goth{m}athbf{w}}=\begin{pmatrix} {\scriptstyle\wedge}\,h{\goth{o}mega}^c & {\scriptstyle\wedge}\,h{\tau} \\\\
0 & -\sigma {\scriptstyle\wedge}\,h{\goth{o}mega}^c \sigma\end{pmatrix},
\end{equation}n where \begin{equation}n
\sigma=\begin{pmatrix} 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0
\end{pmatrix}.\end{equation}n
Surprisingly enough the $\goth{o}(3,2)$-part of ${\scriptstyle\wedge}\,h{\goth{m}athbf{w}}$, given
by the diagonal blocks, is totally determined by the $\goth{o}(3,2)$
connection ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$. In particular, this relation holds
when $W=0$ and ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ is a conformal connection itself.
In this case we have a rather unexpected link between conformal
connections in dimensions three and six.
\section{Further reduction and geometry on five-dimensional bundle}\label{s.c.furred}
Theorem \ref{th.c.1} is a starting point for further reduction of
the structural group since one can use the non-constant invariants
in \eqref{e.c.dtheta_10d} to eliminate more variables $u_\goth{m}u$.
From this point of view third-order ODEs fall into three main
classes:
\begin{equation}gin{itemize}
\item[i)] $W=0$, $F_{qqqq}=0$,
\item[ii)] $W=0$, $F_{qqqq}\neq 0$,
\item[iii)] $W\neq 0$.
\end{itemize}
Class i) contains the equations equivalent to $y'''=0$ and is
fully characterized by the corollary \ref{cor.c.10d_flat}. Further
reduction cannot be done due to the lack of non-constant structural
functions.
Class ii) is not interesting from the geometric point of view
since it does not contain equations with five-dimensional or
larger symmetry groups, as is proved in theorems
\ref{th.cc.sym} and \ref{th.cc.W4d} of chapter \ref{ch.class}.
Class iii), which leads to a Cartan connection on a
five-dimensional bundle, is studied below. Owing to $W\neq 0$ we
continue reduction by setting $\inc{A}{1}=1$, $\inc{A}{2}=0$,
which gives \begin{equation}n
u_1=\sqrt[3]{W}u_3, \qquad u_5=\frac{1}{3}\frac{W_q}{\sqrt[3]{W^2}}.
\end{equation}n At this moment the auxiliary variable $u_6$, which was
introduced by the prolongation, becomes irrelevant and may be set
equal to zero, $$u_6=0.$$ In the second step we choose \begin{equation}n
u_2=\frac{1}{3}Zu_3 \end{equation}n and finally \begin{equation}n
u_4=\frac{1}{9}\frac{W_q}{\sqrt[3]{W^2}}M-\frac{1}{3}\sqrt[3]{W}Z_q.
\end{equation}n The coframe and the underlying bundle ${\mathcal P}^c$ of theorem
\ref{th.c.1} have thereby been reduced to dimension five, as described in the
following theorem.
\begin{equation}gin{theorem}[S.-S. Chern]\label{th.c.2}
A third-order ODE $y'''=F(x,y,y',y'')$ satisfying the contact
invariant condition $W\neq 0$ and considered modulo contact
transformations of variables, uniquely defines a 5-dimensional
bundle ${\mathcal P}^c_5$ over ${\goth{m}athcal J}^2$ and an invariant coframe
$(\theta^1,\ldots,\theta^4,\mathcal{O}mega)$ on it. In local coordinates
$(x,y,p,q,u)$ this coframe is given by \begin{equation}
\label{e.c.theta_5d}\begin{equation}gin{aligned}
\theta^1=&\sqrt[3]{W}u\goth{o}mega^1, \\
\theta^2=&\frac{1}{3}Z u\goth{o}mega^1+u\goth{o}mega^2, \\
\theta^3=&\frac{u}{\sqrt[3]{W}}\left(K+\frac{1}{18}Z^2\right)\goth{o}mega^1
+\frac{u}{3\sqrt[3]{W}}\left(Z-F_q\right)\goth{o}mega^2+\frac{u}{\sqrt[3]{W}}\goth{o}mega^3, \\
\theta^4=&\left(\frac{1}{9}\frac{W_q}{\sqrt[3]{W^2}}Z-\frac{1}{3}\sqrt[3]{W}Z_q\right)\goth{o}mega^1
+\frac{W_q}{3\sqrt[3]{W^2}}\goth{o}mega^2+\sqrt[3]{W}\goth{o}mega^4, \\
\mathcal{O}mega=& \left(\left(\frac{1}{9}W_q\goth{m}athcal{D} Z-\frac{1}{27}W_qZ^2+\frac{1}{9}W_p Z\right)\frac{1}{W}
-\frac{1}{3}Z_p-\frac{1}{9}F_qZ_q\right)\goth{o}mega^1 \\
&+\left(\frac{W_p}{3W}-\frac{1}{3}Z_q\right)\goth{o}mega^2+
\frac{W_q}{3W}\,\goth{o}mega^3+\frac{1}{3}F_q\,\goth{o}mega^4
+\frac{{\rm d} u}{u}.
\end{aligned}\end{equation}
where $\goth{o}mega^i$ are defined by the ODE via \eqref{e.omega}. The
exterior derivatives of these forms read
\begin{equation}gin{align}
d\theta^1=&\mathcal{O}mega{\scriptstyle\wedge}\,\theta^1-\theta^2{\scriptstyle\wedge}\,\theta^4, \nonumber \\
d\theta^2=&\mathcal{O}mega{\scriptstyle\wedge}\,\theta^2+\inc{a}{}\,\theta^1{\scriptstyle\wedge}\,\theta^4-\theta^3{\scriptstyle\wedge}\,\theta^4,\nonumber\\
d\theta^3=&\mathcal{O}mega{\scriptstyle\wedge}\,\theta^3+\inc{b}{}\,\theta^1{\scriptstyle\wedge}\,\theta^2+\inc{c}{}\,\theta^1{\scriptstyle\wedge}\,\theta^3
-\theta^1{\scriptstyle\wedge}\,\theta^4+\inc{e}\,\theta^2{\scriptstyle\wedge}\,\theta^3+\inc{a}{}\,\theta^2{\scriptstyle\wedge}\,\theta^4, \label{e.c.dtheta_5d} \\
d\theta^4=&\inc{f}{}\,\theta^1{\scriptstyle\wedge}\,\theta^2+\inc{g}{}\,\theta^1{\scriptstyle\wedge}\,\theta^3+\inc{h}{}\,\theta^1{\scriptstyle\wedge}\,\theta^4
+\inc{k}{}\,\theta^2{\scriptstyle\wedge}\,\theta^3-\inc{e}{}\,\theta^2{\scriptstyle\wedge}\,\theta^4, \nonumber \\
d\mathcal{O}mega=&\inc{l}{}\,\theta^1{\scriptstyle\wedge}\,\theta^2+(\inc{f}{}-\inc{a}{}\inc{k}{})\,\theta^1{\scriptstyle\wedge}\,\theta^3
+\inc{m}{}\,\theta^1{\scriptstyle\wedge}\,\theta^4+\inc{g}{}\,\theta^2{\scriptstyle\wedge}\,\theta^3+\inc{h}{}\,\theta^2{\scriptstyle\wedge}\,\theta^4.\nonumber
\end{align}
\end{theorem}
The basic functions for \eqref{e.c.dtheta_5d} (i.e. those generating the
full set of invariants by consecutive coframe
differentiation) are
$\inc{a}{},\inc{b}{},\inc{e}{},\inc{h}{},\inc{k}{}$:
\begin{equation}gin{align}
\inc{a}{}=& \frac{1}{\sqrt[3]{W^2}}\left(K+\frac{1}{18}Z^2+\frac{1}{9}ZF_q-\frac{1}{3}\goth{m}athcal{D} Z\right), \nonumber \\
\inc{b}{}=&\frac{1}{3u\sqrt[3]{W^2}}\bigg(\frac{1}{27}F_{qq}Z^2+\left(K_q-\frac{1}{3}Z_p-\frac{2}{9}F_qZ_q\right)Z+\nonumber\\
&+\left(\frac{1}{3}\goth{m}athcal{D} Z-2K\right)Z_q+Z_y+F_{qq}K-3K_p-K_qF_q-F_{qy}+W_q\bigg), \nonumber \\
\inc{e}{}=&\frac{1}{u}\left(\frac{1}{3}F_{qq}+\frac{1}{W}\left(\frac{2}{9}W_qZ-\frac{2}{3}W_p-\frac{2}{9}W_qF_q \right)\right), \label{e.c.basfun_5d}\\
\inc{h}{}=&\frac{1}{3u\sqrt[3]{W}}\bigg(\left(\frac{1}{9}W_qZ^2-\frac{1}{3}W_pZ+W_y-\frac{1}{3}W_q\goth{m}athcal{D} Z\right)\frac{1}{W}+\nonumber \\
&+\goth{m}athcal{D} Z_q+\frac{1}{3}F_qZ_q\bigg),\nonumber \\
\inc{k}{}=&\frac{1}{u^2\sqrt[3]{W}}\left(\frac{2W_q^2}{9W}-\frac{W_{qq}}{3}\right).
\nonumber
\end{align}
Our next aim is to obtain a Cartan connection. First of all we
study the most symmetric case to find the Lie algebra of a
connection. We assume that all the functions
$\inc{a}{},\ldots,\inc{m}{}$ are constant. Having applied the
exterior derivative to \eqref{e.c.dtheta_5d} we get that
$\inc{b}{},\ldots,\inc{m}{}$ are equal to zero and $\inc{a}{}$ is
an arbitrary real constant $\goth{m}u$. In this case the equations
\eqref{e.c.dtheta_5d} become the Maurer-Cartan equations for the
algebra $\mathbb{R}^2\goth{o}plus_\goth{m}u\mathbb{R}^3$. Straightforward calculations
show that this case corresponds to a general linear equation with
constant coefficients.
\begin{equation}gin{corollary} \label{cor.c.5d_flat}
A third-order ODE is contact equivalent to
$$y'''=-2\goth{m}u y'+y,$$ where $\goth{m}u$ is an arbitrary constant, if and only if
it satisfies
\begin{equation}gin{align}
1)\quad &W\neq 0, \nonumber \\
2)\quad &\frac{1}{\sqrt[3]{W^2}}
\left(K+\frac{1}{18}Z^2+\frac{1}{9}ZF_q-\frac{1}{3}\goth{m}athcal{D} Z\right)=\goth{m}u \label{e400} \\
3)\quad &2W_q^2-3W_{qq}W=0. \nonumber
\end{align}
Such an equation has the five-dimensional algebra
$\mathbb{R}^2\semi{\goth{m}u}\mathbb{R}^3$ of infinitesimal contact symmetries.
The equations with different constants $\goth{m}u_1$ and $\goth{m}u_2$ are
non-equivalent.
\begin{equation}gin{proof}
Assume that $\inc{a}{}=\goth{m}u$, $\inc{k}{}=0$. It follows from
${\rm d}^2\theta^i=0$ and ${\rm d}^2\mathcal{O}mega=0$ that this assumption
makes the other functions in \eqref{e.c.dtheta_5d} vanish. Put
$y'''=-2\goth{m}u y'+y$ into the formulae of theorem \ref{th.c.2} and
check that it satisfies $\inc{a}{}=\goth{m}u$, $\inc{k}{}=0$. Every
equation satisfying $\inc{a}{}=\goth{m}u$, $\inc{k}{}=0$ is contact
equivalent to it by virtue of theorem \ref{th.i.clas} adapted to
this situation.
\end{proof}
\end{corollary}
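The five contact symmetries of $y'''=-2\goth{m}u y'+y$ can also be exhibited
directly: the equation is invariant under translations of $x$, under scalings
$y\goth{m}apsto\lambda y$ and under the addition $y\goth{m}apsto y+y_0$ of an arbitrary
solution $y_0$, which gives $1+1+3=5$ generators; the solutions span the
abelian factor $\mathbb{R}^3$, on which the two remaining symmetries act, in
agreement with the structure $\mathbb{R}^2\semi{\goth{m}u}\mathbb{R}^3$.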
Now we immediately find a family of Cartan connections
\begin{equation}gin{theorem}\label{cor.c.geom5d}
An ODE which satisfies the condition \begin{equation}n
\frac{1}{\sqrt[3]{W^2}}\left(K+\frac{1}{18}Z^2+\frac{1}{9}ZF_q-\frac{1}{3}\goth{m}athcal{D}
Z\right)=\lambda\end{equation}n has the solution space equipped with the
following $\mathbb{R}^2$-valued linear torsion-free connection
\begin{equation}n
{\scriptstyle\wedge}\,idehat{\goth{o}mega}_\lambda=\begin{pmatrix}
-\vc{} & -\goth{h}c{4} & 0 \\
\lambda\goth{h}c{4} & -\vc{} & -\goth{h}c{4} \\
-\goth{h}c{4} & \lambda\goth{h}c{4} & -\vc{}
\end{pmatrix}.
\end{equation}n Its curvature reads
\begin{equation}n
\begin{pmatrix}
R^1_{~1} & R^1_{~2} & 0 \\
-\lambda R^1_{~2} & R^1_{~1} & R^1_{~2} \\
R^1_{~2} & -\lambda R^1_{~2} & R^1_{~1}
\end{pmatrix}
\end{equation}n
with
\begin{equation}gin{align}
R^1_{~1}=&(\lambda\inc{g}{}+\inc{k}{})\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{2}+(\lambda \inc{k}{}-\inc{g}{})\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{3}
-\inc{g}{}\goth{h}c{2}{\scriptstyle\wedge}\,\goth{h}c{3}, \nonumber \\
R^1_{~2}=&-\inc{f}{}\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{2}-\inc{g}{}\goth{h}c{1}{\scriptstyle\wedge}\,\goth{h}c{3}-\inc{k}{}\goth{h}c{2}{\scriptstyle\wedge}\,\goth{h}c{3}. \nonumber
\end{align}
The connection is flat if and only if the related ODE is contact
equivalent to $y'''=-2\lambda y'+y$.
\begin{equation}gin{proof}
The condition $\inc{a}{}=\lambda$ together with its differential
consequences $\inc{b}{}=\inc{c}{}=\inc{e}{}=\inc{h}{}=\inc{m}{}=0$
and $\inc{l}{}=-\inc{k}{}-\lambda \inc{g}{}$ is the necessary and
sufficient condition for the curvature of
${\scriptstyle\wedge}\,idehat{\goth{o}mega}_\lambda$ to be horizontal.
\end{proof}
\end{theorem}
It follows that every ODE as above has its solution space equipped with a geometric structure consisting of
\begin{equation}gin{itemize}
\item[i)] Reduction of $\goth{g}l(3,\mathbb{R})$ to $\mathbb{R}^2$ represented by
\begin{equation}n
\begin{pmatrix}
a_1 & a_2 & 0 \\
-\lambda a_2 & a_1 & a_2 \\
a_2 & -\lambda a_2 & a_1
\end{pmatrix}.
\end{equation}n \item[ii)] A linear torsion-free connection $\Gamma$ taking
values in this $\mathbb{R}^2$.
\end{itemize}
The structure is an example of a geometry with special holonomy.
The algebra $\mathbb{R}^2$ is spanned by the unit matrix and \begin{equation}n
\goth{m}athrm{m}(\lambda) = \begin{pmatrix}
0 & 1 & 0 \\
-\lambda & 0 & 1 \\
1 & -\lambda & 0
\end{pmatrix},
\end{equation}n whose action on $\mathcal{S}$ is more complicated. Its eigenvalue
equation
$$ \det(u\goth{m}athbf{1}-\goth{m}athrm{m}(\lambda))=u^3+2\lambda u-1
$$ is the characteristic polynomial of the linear ODE $y'''=-2\lambda y'+y$. If
$\lambda<-\tfrac{3}{4}\sqrt[3]{2}$ the polynomial has three
distinct real roots and $\goth{m}athrm{m}$ is a generator of non-isotropic
dilatations acting along the eigenspaces. If
$\lambda=-\tfrac{3}{4}\sqrt[3]{2}$ the characteristic polynomial
has two real roots, one of them double, while for other $\lambda$s there is
only one real eigenvalue. The action is diagonalizable only in the case of
three distinct real eigenvalues.
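These statements follow from the discriminant of $u^3+2\lambda u-1$, which
equals $-32\lambda^3-27$: it is positive, giving three distinct real roots,
for $\lambda<-\tfrac{3}{4}\sqrt[3]{2}$, it vanishes, giving a double root, at
$\lambda=-\tfrac{3}{4}\sqrt[3]{2}$, and it is negative, leaving only one real
root, for the remaining values of $\lambda$.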
\chapter[Geometries of ODEs modulo point and fibre-preserving\ldots]
{Geometries of ODEs considered modulo point and fibre-preserving
transformations of variables}\label{ch.point}
\section{Point case: Cartan connection on seven-dimensional
bundle}\label{s.p.th} \noindent Following the scheme of reduction
given in chapter \ref{ch.contact} we construct the Cartan connection
for ODEs considered modulo point transformations.
\begin{equation}gin{theorem}[E. Cartan]\label{th.p.1}
The point invariant information about $y'''=F(x,y,y',y'')$ is
given by the following data
\begin{equation}gin{itemize}
\item[i)] The principal fibre bundle $H_3\to{\mathcal P}^p\to{\goth{m}athcal J}^2$, where
$\dim{\mathcal P}^p=7$, and $H_3$ is the three-dimensional group \begin{equation}
\label{e.p.H3} H_3=\begin{pmatrix} \sqrt{u_1} &
\frac12\frac{u_2}{\sqrt{u_1}} & 0 & 0 \\\\
0 & \tfrac{u_3}{\sqrt{u_1}} & 0 & 0 \\\\
0 & 0 & \tfrac{\sqrt{u_1}}{u_3} &
-\tfrac12\tfrac{u_2}{\sqrt{u_1}\,u_3} \\\\
0 & 0 & 0 & \tfrac{1}{\sqrt{u_1}}\end{pmatrix}.
\end{equation} \item[ii)] The coframe
$(\theta^1,\theta^2,\theta^3,\theta^4,\mathcal{O}mega_1,\mathcal{O}mega_2,\mathcal{O}mega_3)$,
which defines the $\goth{g}oth{co}(2,1)\goth{o}plus_{.}\mathbb{R}^3$-valued Cartan normal
connection ${\scriptstyle\wedge}\,h{\goth{o}mega}^p$ on ${\mathcal P}^p_{7}$ by \begin{equation}\label{e.p.conn_7d}
{\scriptstyle\wedge}\,h{\goth{o}mega}^p=\begin{pmatrix} \tfrac{1}{2}\vp{1} & \tfrac{1}{2}\vp{2} & 0 & 0 \\\\
\goth{h}p{4} & \vp{3}-\tfrac{1}{2}\vp{1} & 0 & 0 \\\\
\goth{h}p{2} & \goth{h}p{3} & \tfrac{1}{2}\vp{1}-\vp{3} & -\tfrac{1}{2}\vp{2} \\\\
2\goth{h}p{1} & \goth{h}p{2} & -\goth{h}p{4} & -\tfrac{1}{2}\vp{1}
\end{pmatrix}.
\end{equation}
\end{itemize}
Let $(x,y,p,q,u_1,u_2,u_3)=(x^i,u_\goth{m}u)$ be a locally trivializing
coordinate system in ${\mathcal P}^p$. Then the value of ${\scriptstyle\wedge}\,h{\goth{o}mega}^p$ at
the point $(x^i,u_\goth{m}u)$ in ${\mathcal P}^p$ is given by \begin{equation}n
{\scriptstyle\wedge}\,h{\goth{o}mega}^p(x^i,u_\goth{m}u)=u^{-1}\,\goth{o}mega^p\,u+u^{-1}{\rm d} u \end{equation}n
where $u$ denotes the matrix \eqref{e.p.H3} and \begin{equation}n\goth{o}mega^p= \begin{pmatrix}
\tfrac{1}{2}\vp{1}^0&\tfrac{1}{2}\vp{2}^0 & 0 & 0 \\\\
{\scriptstyle\wedge}\,t{\goth{o}mega}^4 & \vp{3}^0-\tfrac{1}{2}\vp{1}^0 & 0 & 0 \\\\
\goth{o}mega^2 & {\scriptstyle\wedge}\,t{\goth{o}mega}^3 & \tfrac{1}{2}\vp{1}^0-\vp{3}^0 & -\tfrac{1}{2}\vp{2}^0 \\\\
2\goth{o}mega^1 & \goth{o}mega^2 & -{\scriptstyle\wedge}\,t{\goth{o}mega}^4 & -\tfrac{1}{2}\vp{1}^0
\end{pmatrix}\end{equation}n is the connection ${\scriptstyle\wedge}\,h{\goth{o}mega}^p$ calculated at the point
$(x^i,u_1=1,u_2=0,u_3=1)$.
The forms $\goth{o}mega^1,\goth{o}mega^2,{\scriptstyle\wedge}\,t{\goth{o}mega}^3,{\scriptstyle\wedge}\,t{\goth{o}mega}^4$ read
\begin{equation}gin{align*}
\goth{o}mega^1=&{\rm d} y-p{\rm d} x, \notag \\
\goth{o}mega^2=&{\rm d} p-q{\rm d} x, \\
{\scriptstyle\wedge}\,t{\goth{o}mega}^3=&{\rm d} q-F{\rm d} x-\tfrac13F_q({\rm d} p-q{\rm d} x)+K({\rm d} y-p{\rm d} x),\notag \\
{\scriptstyle\wedge}\,t{\goth{o}mega}^4=&{\rm d} x +\tfrac16F_{qq}({\rm d} y-p{\rm d} x).\notag
\end{align*}
The forms $\vp{1}^0,\vp{2}^0,\vp{3}^0$ read
\begin{equation}gin{align*}
\vp{1}^0=&-(3K_q+\tfrac29F_{qq}F_q+\tfrac23F_{qp})\,\goth{o}mega^1+\tfrac16F_{qq}\goth{o}mega^2, \notag \\
\vp{2}^0=&\left(L+\tfrac16F_{qq}K\right)\,\goth{o}mega^1
-(2K_q+\tfrac19F_{qq}F_q+\tfrac13F_{qp})\,\goth{o}mega^2
+\tfrac16F_{qq}{\scriptstyle\wedge}\,t{\goth{o}mega}^3-K{\scriptstyle\wedge}\,t{\goth{o}mega}^4, \\
\vp{3}^0=&-(2K_q+\tfrac16F_{qq}F_q+\tfrac13F_{qp})\,\goth{o}mega^1+\tfrac13F_{qq}\,\goth{o}mega^2
+\tfrac13F_q{\scriptstyle\wedge}\,t{\goth{o}mega}^4.\notag
\end{align*}
\begin{equation}gin{proof}
We begin with the $G_p$-structure on ${\goth{m}athcal J}^2$ of the Introduction, which
encodes an ODE up to point transformations. In the usual locally
trivializing coordinate system $(x,y,p,q,u_1,\ldots,u_8)$ on
$G_p\times{\goth{m}athcal J}^2$ the fundamental form $\goth{h}p{i}$ is given by
\begin{equation}gin{align*}
\goth{h}p{1}=& u_1\goth{o}mega^1,\\
\goth{h}p{2}=& u_2\goth{o}mega^1+u_3\goth{o}mega^2, \\
\goth{h}p{3}=& u_4\goth{o}mega^1+u_5\goth{o}mega^2+u_6\goth{o}mega^3,\\
\goth{h}p{4}=& u_8\goth{o}mega^1+u_7\goth{o}mega^4.
\end{align*}
We repeat the procedure of section \ref{s.c.proof} of chapter
\ref{ch.contact}. We choose a connection by the minimal torsion
requirement and then reduce $G_p\times{\goth{m}athcal J}^2$ using the constant
torsion property. We differentiate $\goth{h}p{i}$ and gather the
$\goth{h}p{j}{\scriptstyle\wedge}\,\goth{h}p{k}$ terms into \begin{equation}\label{e.p.red10}\begin{equation}gin{aligned}
{\rm d}\goth{h}p{1}&=\vp{1}{\scriptstyle\wedge}\,\goth{h}p{1}+\frac{u_1}{u_3 u_7}\goth{h}p{4}{\scriptstyle\wedge}\,\goth{h}p{2}, \\
{\rm d}\goth{h}p{2}&=\vp{2}{\scriptstyle\wedge}\,\goth{h}p{1}+\vp{3}{\scriptstyle\wedge}\,\goth{h}p{2}+\frac{u_3}{u_6u_7}\goth{h}p{4}{\scriptstyle\wedge}\,\goth{h}p{3}, \\
{\rm d}\goth{h}p{3}&=\vp{4}{\scriptstyle\wedge}\,\goth{h}p{1}+\vp{5}{\scriptstyle\wedge}\,\goth{h}p{2}+\vp{6}{\scriptstyle\wedge}\,\goth{h}p{3},\\
{\rm d}\goth{h}p{4}&=\vp{8}{\scriptstyle\wedge}\,\goth{h}p{1}+\vp{9}{\scriptstyle\wedge}\,\goth{h}p{2}+\vp{7}{\scriptstyle\wedge}\,\goth{h}p{4}
\end{aligned}\end{equation}
with the auxiliary connection forms $\vp{\goth{m}u}$ containing the
differentials of $u_\goth{m}u$ and terms proportional to $\goth{h}p{i}$. Then
we reduce $G_p\times{\goth{m}athcal J}^2$ by
\begin{equation}n
u_6=\frac{u_3^2}{u_1},\quad\quad\quad u_7=\frac{u_1}{u_3}.
\end{equation}n Subsequently, we get formulae identical to
\eqref{e.c.red_u5}, \eqref{e.c.red_u4}:
\begin{equation}gin{align*}
u_5=&\frac{u_3}{u_1}\left(u_2-\frac{1}{3}u_3F_q\right), \\
u_4=&\frac{u^2_3}{u_1}K+\frac{u_2^2}{2u_1}
\end{align*}
and also
\begin{equation}n
u_8=\frac{u_1}{6u_6}F_{qq}.
\end{equation}n After these substitutions the structural equations for
$\goth{h}p{i}$ are the following
\begin{equation}gin{align}\label{e.p.red50}
{\rm d}\goth{h}p{1} =&\vp{1}{\scriptstyle\wedge}\,\goth{h}p{1}+\goth{h}p{4}{\scriptstyle\wedge}\,\goth{h}p{2},\nonumber \\
{\rm d}\goth{h}p{2} =&\vp{2}{\scriptstyle\wedge}\,\goth{h}p{1}+\vp{3}{\scriptstyle\wedge}\,\goth{h}p{2}+\goth{h}p{4}{\scriptstyle\wedge}\,\goth{h}p{3},\nonumber \\
{\rm d}\goth{h}p{3} =&\vp{2}{\scriptstyle\wedge}\,\goth{h}p{2}+(2\vp{3}-\vp{1}){\scriptstyle\wedge}\,\goth{h}p{3}+\inp{A}{1}\goth{h}p{4}{\scriptstyle\wedge}\,\goth{h}p{1},\nonumber \\
{\rm d}\goth{h}p{4} =&(\vp{1}-\vp{3}){\scriptstyle\wedge}\,\goth{h}p{4}+\inp{B}{1}\goth{h}p{2}{\scriptstyle\wedge}\,\goth{h}p{1}
+\inp{B}{2}\goth{h}p{3}{\scriptstyle\wedge}\,\goth{h}p{1}, \nonumber
\end{align}
with some functions $\inp{A}{1},\inp{B}{1},\inp{B}{2}$. But now,
in contrast to the contact case, the forms $\vp{1},\vp{2},\vp{3}$
are defined by the above equations without any ambiguity, thus
there is no need to prolong and we have a rigid coframe on the
seven-dimensional bundle ${\mathcal P}^p\to{\goth{m}athcal J}^2$.
\end{proof}
\end{theorem}
\subsection{Point versus contact objects} Comparing theorem
\ref{th.p.1} to theorem \ref{th.c.1} of the contact case, we see
that the contact and point objects are related as follows. The
point bundle ${\mathcal P}^p$ is a subbundle of ${\mathcal P}^c$, with the embedding
$\sigma\goth{g}oth{co}lon{\mathcal P}^p\to{\mathcal P}^c$ given by \begin{equation}\label{e.p.cp}
u_4=\frac{u_1}{6u_3}F_{qq}, \qquad u_5=0, \qquad u_6=0. \end{equation} In
this paragraph we denote the point coframe of theorem \ref{th.p.1}
by $(\goth{o}verset{p}{\goth{h}p{1}},\ldots,\goth{o}verset{p}{\vp{3}})$ in order to
distinguish it from the contact coframe now denoted by
$(\goth{o}verset{c}{\goth{h}c{1}},\ldots,\goth{o}verset{c}{\vc{6}})$. The point
coframe on ${\mathcal P}^p$ can be constructed from the contact one by the
following formula. \begin{equation}\label{e.p.ctop2}\begin{equation}gin{aligned}
&\goth{o}verset{p}{\goth{h}p{i}}=\sigma^*\goth{o}verset{c}{\goth{h}c{i}}, \qquad i=1,2,3,4, \\
&\goth{o}verset{p}{\vp{1}}=\sigma^*\goth{o}verset{c}{\vc{1}}-2f_1\,\sigma^*\goth{o}verset{c}{\goth{h}c{1}}, \\
&\goth{o}verset{p}{\vp{2}}=\sigma^*\goth{o}verset{c}{\vc{2}}-f_2\,\sigma^*\goth{o}verset{c}{\goth{h}c{1}}
-f_1\,\sigma^*\goth{o}verset{c}{\goth{h}c{2}}, \\
&\goth{o}verset{p}{\vp{3}}=\sigma^*\goth{o}verset{c}{\vc{3}}-f_1\,\sigma^*\goth{o}verset{c}{\goth{h}c{1}},
\end{aligned}\end{equation}
where the functions $f_1$ and $f_2$ are
\begin{equation}gin{align*}
f_1=&\frac{1}{u_1}(K_q+\tfrac19F_{qq}F_q+\tfrac13F_{qp})+\frac{u_2}{12\,u_1u_3}F_{qq},
\notag \\
f_2=&\sigma^*\inc{A}{2}=\frac{u_3}{3u^2_1} W_q\notag.
\end{align*}
The mapping $\sigma$ together with the functions $f_1,f_2$ is a
new piece of structure that allows us to pass from the coarser
contact coframe to the point coframe. It is obvious that $F_{qq}$
and $f_1$ appearing in the above formulae are not contact
invariants. What is more, they are not point invariants either,
for the property $F_{qq}=0$ or $f_1=0$ is not preserved under
point transformations. At the level of geometric objects the
passage from the contact case to the point case is given by the
difference
$$\sigma^*{\scriptstyle\wedge}\,h{\goth{o}mega}^c-{\scriptstyle\wedge}\,h{\goth{o}mega}^p.$$
One can also proceed in the inverse direction and construct the
contact bundle and coframe starting from the point case. In this
approach the bundle ${\mathcal P}^c$ is the extension
${\mathcal P}^c={\mathcal P}^p\times_{H_3}H_6$ of ${\mathcal P}^p$ and in order to get the
contact coframe we must define forms
$\goth{o}verset{c}{\goth{h}c{1}},\ldots,\goth{o}verset{c}{\vc{6}}$ on ${\mathcal P}^p$ in
terms of the point coframe, then define a matrix one-form
${\scriptstyle\wedge}\,h{\goth{o}mega}^c$ on $P^p_7$ and finally lift this ${\scriptstyle\wedge}\,h{\goth{o}mega}^c$
to the Cartan connection on ${\mathcal P}^c$. Since the formulae for
$\goth{o}verset{c}{\goth{h}c{1}},\ldots,\goth{o}verset{c}{\vc{3}}$ are similar to
\eqref{e.p.ctop2} and the formulae for
$\goth{o}verset{c}{\vc{4}},\goth{o}verset{c}{\vc{5}},\goth{o}verset{c}{\vc{6}}$ are
complicated we omit them.
\subsection{Curvature} Further analysis of the coframe of
theorem \ref{th.p.1} is very similar to what we have done in
chapter \ref{ch.contact}. The curvature of the connection is given
by the following exterior differentials of the coframe, cf.
\eqref{e.i.dtheta_7d}
\begin{equation}gin{align}
{\rm d}\goth{h}p{1} =&\vp{1}{\scriptstyle\wedge}\,\goth{h}p{1}+\goth{h}p{4}{\scriptstyle\wedge}\,\goth{h}p{2},\nonumber \\
{\rm d}\goth{h}p{2} =&\vp{2}{\scriptstyle\wedge}\,\goth{h}p{1}+\vp{3}{\scriptstyle\wedge}\,\goth{h}p{2}+\goth{h}p{4}{\scriptstyle\wedge}\,\goth{h}p{3},\nonumber \\
{\rm d}\goth{h}p{3}=&\vp{2}{\scriptstyle\wedge}\,\goth{h}p{2}+(2\vp{3}-\vp{1}){\scriptstyle\wedge}\,\goth{h}p{3}+\inp{A}{1}\goth{h}p{4}{\scriptstyle\wedge}\,\goth{h}p{1},\nonumber \\
{\rm d}\goth{h}p{4} =&(\vp{1}-\vp{3}){\scriptstyle\wedge}\,\goth{h}p{4}+\inp{B}{1}\goth{h}p{2}{\scriptstyle\wedge}\,\goth{h}p{1}
+\inp{B}{2}\goth{h}p{3}{\scriptstyle\wedge}\,\goth{h}p{1}, \nonumber\\
{\rm d}\vp{1} =&-\vp{2}{\scriptstyle\wedge}\,\goth{h}p{4}+(\inp{D}{1}+3\inp{B}{3})\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{2}
+(3\inp{B}{4}-2\inp{B}{1})\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{3} \label{e.p.dtheta_7d} \\
&+(2\inp{C}{1}-\inp{A}{2})\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{4}-\inp{B}{2}\goth{h}p{2}{\scriptstyle\wedge}\,\goth{h}p{3}, \nonumber \\
{\rm d}\vp{2}=&(\vp{3}-\vp{1}){\scriptstyle\wedge}\,\vp{2}+\inp{D}{2}\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{2}+(\inp{D}{1}+\inp{B}{3})\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{3}
+\inp{A}{3}\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{4} \nonumber \\
&+(2\inp{B}{4}-\inp{B}{1})\goth{h}p{2}{\scriptstyle\wedge}\,\goth{h}p{3}+\inp{C}{1}\goth{h}p{2}{\scriptstyle\wedge}\,\goth{h}p{4}, \nonumber \\
{\rm d}\vp{3}=&(\inp{D}{1}+2\inp{B}{3})\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{2}+2(\inp{B}{4}-\inp{B}{1})\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{3}
+\inp{C}{1}\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{4}-2\inp{B}{2}\goth{h}p{2}{\scriptstyle\wedge}\,\goth{h}p{3}, \nonumber
\end{align}
where
$\inp{A}{1},\inp{A}{2},\inp{A}{3},\inp{B}{1},\inp{B}{2},\inp{B}{3},
\inp{B}{4},\inp{C}{1},\inp{D}{1},\inp{D}{2}$ are functions on
${\mathcal P}^p$. All these functions are expressed in terms of the coframe derivatives of
$\inp{A}{1},\inp{B}{1},\inp{C}{1}$, which therefore constitute the
set of basic relative invariants for this problem and read
\begin{equation}gin{align*}
\inp{A}{1}=&\frac{u_3^3}{u^3_1}W, \nonumber \\ \inp{B}{1}=&\frac{1}{u_3^2}\left(\frac{1}{18}F_{qqq}F_q+\frac{1}{36}F_{qq}^2+\frac{1}{6}F_{qqp}\right)
-\frac{u_2}{6u_3^3}F_{qqq}, \\
\inp{C}{1}=&\frac{u_3}{u_1^2}\left(2F_{qq}K+\frac{2}{3}F_qF_{qp}-2F_{qy}+F_{pp} +2W_q\right). \nonumber
\end{align*}
The invariants $\inp{B}{2}$ and $\inp{B}{4}$ will also be needed explicitly:
\begin{equation}gin{align*}
\inp{B}{2}=&\frac{u_1}{6u_3^3}F_{qqq}, \\
\inp{B}{4}=&\frac{1}{u_3^2}
\left(K_{qq}+\frac19F_{qqq}F_q+\frac13 F_{qqp}+\frac{1}{12}F_{qq}^2 \right).
\end{align*}
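For orientation, on the section $u_1=1$, $u_2=0$, $u_3=1$ of ${\mathcal P}^p$ (the same
gauge that is used for the curvature components of the Einstein-Weyl geometry
in section \ref{s.ew}) these relative invariants specialize, by direct
substitution, to
$$\inp{A}{1}=W,\qquad
\inp{B}{1}=\tfrac{1}{18}F_{qqq}F_q+\tfrac{1}{36}F_{qq}^2+\tfrac16F_{qqp},\qquad
\inp{B}{2}=\tfrac16F_{qqq},$$
$$\inp{B}{4}=K_{qq}+\tfrac19F_{qqq}F_q+\tfrac13F_{qqp}+\tfrac{1}{12}F_{qq}^2,\qquad
\inp{C}{1}=2F_{qq}K+\tfrac23F_qF_{qp}-2F_{qy}+F_{pp}+2W_q.$$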
\begin{equation}gin{corollary}\label{cor.p.7d_flat}
For a third-order ODE $y'''=F(x,y,y',y'')$ the following
conditions are equivalent.
\begin{equation}gin{itemize}
\item[i)] The ODE is point equivalent to $y'''=0$.
\item[ii)] It satisfies the conditions $W=0$, $F_{qqq}=0$, $F_{qq}^2+6F_{qqp}=0$
and $$2F_{qq}K+\frac{2}{3}F_qF_{qp}-2F_{qy}+F_{pp}=0.$$
\item[iii)] It has the $\goth{g}oth{co}(2,1)\semi{.}\mathbb{R}^3$ algebra of infinitesimal point symmetries.
\end{itemize}
\end{corollary}
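Note, for orientation, that the algebra $\goth{g}oth{co}(2,1)\semi{.}\mathbb{R}^3$ in iii) has
dimension $4+3=7$, equal to the dimension of the bundle ${\mathcal P}^p$ carrying the
rigid coframe of theorem \ref{th.p.1}. As we recall in chapter \ref{ch.class},
this is the maximal possible dimension of the symmetry group of a coframe on a
seven-dimensional manifold, so the trivial equation realizes the maximally
symmetric point case.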
The manifold ${\mathcal P}^p$, like ${\mathcal P}^c$, is equipped with the threefold
structure of a principal bundle over ${\goth{m}athcal J}^2$, ${\goth{m}athcal J}^1$ and $\mathcal{S}$. Let
$(X_1,X_2,X_3,X_4,X_5,X_6,X_7)$ be the frame dual to
$(\goth{h}p{1},\goth{h}p{2},\goth{h}p{3},\goth{h}p{4},\vp{1},\vp{2},\vp{3})$.
\begin{equation}gin{itemize}
\item ${\mathcal P}^p$ is the bundle $H_3\to{\mathcal P}^p\to{\goth{m}athcal J}^2$ with the
fundamental fields $X_5,X_6,X_7$. \item It is the bundle
$CO(2,1)\to{\mathcal P}^p\to\mathcal{S}$ with the fundamental fields
$X_4,X_5,X_6,X_7$. \item It is also the bundle $H_4\to{\mathcal P}^p\to{\goth{m}athcal J}^1$
with the fundamental fields $X_3,X_5,X_6,X_7$.
\end{itemize}
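These three fibrations are consistent with $\dim{\mathcal P}^p=7$: the fibre $H_3$ over
the four-dimensional ${\goth{m}athcal J}^2$ is three-dimensional, while the fibres $CO(2,1)$
over the three-dimensional solution space $\mathcal{S}$ and $H_4$ over the
three-dimensional ${\goth{m}athcal J}^1$ are both four-dimensional, in agreement with the
numbers of fundamental fields listed above.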
\section{Einstein-Weyl geometry on space of solutions}\label{s.ew}
\noindent We have already described the construction of the
Einstein-Weyl geometry on the solution space in the Introduction. Here
we write it down in a more systematic manner.
\subsection{Weyl geometry}\label{s.ew.def}
A Weyl geometry on ${\goth{m}athcal M}^n$ is a pair $(g,\phi)$, where $g$ is a
metric of signature $(k,l)$, $k+l=n$, and $\phi$ is a one-form,
given modulo the following transformations \begin{equation}n
\phi\to\phi+{\rm d}\lambda, \qquad\qquad g\to e^{2\lambda} g.\end{equation}n In
particular $[g]$ is a conformal geometry. For any Weyl geometry
there exists the Weyl connection; it is the unique torsion-free
connection such that \begin{equation}n
\nabla g=2\phi\goth{o}times g.
\end{equation}n The Weyl connection takes values in the algebra $\goth{g}oth{co}(k,l)$ of
$[g]$. Let $(\goth{o}mega^\goth{m}u)$ be an orthonormal coframe for some $g$
of $[g]$; $g=g_{\goth{m}u\nu}\goth{o}mega^\goth{m}u\goth{o}times\goth{o}mega^\nu$ with all the
coefficients $g_{\goth{m}u\nu}$ being constant. The Weyl connection
one-forms $\Gamma^\goth{m}u_{~\nu}$ are uniquely defined by the
relations
\begin{equation}gin{eqnarray}
&{\rm d}\goth{o}mega^\goth{m}u+\Gamma^\goth{m}u_{~\nu}{\scriptstyle\wedge}\,\goth{o}mega^\nu=0, \notag \\
&\Gamma_{(\goth{m}u\nu)}=-g_{\goth{m}u\nu}\,\phi, \quad \text{where} \quad
\Gamma_{\goth{m}u\nu}=g_{\goth{m}u\rho}\Gamma^{\rho}_{~\nu}. \notag
\end{eqnarray}
The curvature tensor $R^\goth{m}u_{~\nu\rho\sigma}$, the Ricci tensor
${\rm R}ic_{\goth{m}u\nu}$ and the Ricci scalar ${\rm R}$ of a Weyl connection are
defined as follows
\begin{equation}gin{align*} &R^\goth{m}u_{~\nu}={\rm d}\Gamma^\goth{m}u_{~\nu}+\Gamma^\goth{m}u_{~\rho}{\scriptstyle\wedge}\,\Gamma^{\rho}_{~\nu}=
R^\goth{m}u_{~\nu\rho\sigma}\goth{o}mega^\rho{\scriptstyle\wedge}\,\goth{o}mega^\sigma \\
&{\rm R}ic_{\goth{m}u\nu}=R^\rho_{~\goth{m}u\rho\nu}, \\
&{\rm R}={\rm R}ic_{\goth{m}u\nu}g^{\goth{m}u\nu}.
\end{align*}
The Ricci scalar has the conformal weight $-2$, that is, it
transforms as ${\rm R}\to e^{-2\lambda}{\rm R}$ when $g\to e^{2\lambda}g$.
Apart from these objects there is another one, which has no
counterpart in Riemannian geometry: the Maxwell two-form \begin{equation}n
F={\rm d} \phi. \end{equation}n The Maxwell two-form $F_{\goth{m}u\nu}$ is
proportional to the antisymmetric part ${\rm R}ic_{[\goth{m}u\nu]}$ of the
Ricci tensor.
Einstein-Weyl structures are, by definition, those Weyl
structures for which the symmetric trace-free part of the Ricci
tensor vanishes \begin{equation}n
{\rm R}ic_{(\goth{m}u\nu)}-\tfrac{1}{n}R\cdot g_{\goth{m}u\nu} =0.
\end{equation}n
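For orientation we recall a standard fact: if the Maxwell two-form vanishes,
${\rm d}\phi=0$, then locally $\phi={\rm d}\lambda$ and the gauge transformation with
$-\lambda$ brings the structure to $\phi=0$; the Weyl connection is then the
Levi-Civita connection of $g$ and the Einstein-Weyl equations reduce to the
Einstein condition
$${\rm R}ic_{\goth{m}u\nu}=\tfrac1n{\rm R}\, g_{\goth{m}u\nu},$$
which in dimension three forces constant curvature. The genuinely new
Einstein-Weyl structures are therefore those with ${\rm d}\phi\neq0$.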
\subsection{Einstein-Weyl structures from ODEs} In this paragraph
we follow P. Nurowski \cite{Nur1,Nur2}. One sees from the system
\eqref{e.p.dtheta_7d} that the pair $({\scriptstyle\wedge}\,h{g},\vp{3})$, where
\begin{equation}n{\scriptstyle\wedge}\,h{g}=2\goth{h}p{1}\goth{h}p{3}-(\goth{h}p{2})^2\end{equation}n is Lie transported along
the fibres of ${\mathcal P}^p\to\mathcal{S}$ in the following way \begin{equation}n L_{X_4}{\scriptstyle\wedge}\,h{g}=
\inp{A}{1}(\goth{h}p{1})^2,\qquad L_{X_5}{\scriptstyle\wedge}\,h{g}=0,\qquad
L_{X_6}{\scriptstyle\wedge}\,h{g}=0,\qquad L_{X_7}{\scriptstyle\wedge}\,h{g}=2{\scriptstyle\wedge}\,h{g},\end{equation}n and \begin{equation}n
L_{X_4}\vp{3}=\tfrac12\inp{C}{1}\goth{h}p{1}, \end{equation}n
\begin{equation}n\label{e.p.lie5} L_{X_j}\vp{3}=0,\qquad\text{for}\qquad
j=5,6,7. \end{equation}n
\noindent Due to these properties $({\scriptstyle\wedge}\,h{g},\vp{3})$ descends
along ${\mathcal P}^p\to\mathcal{S}$ to the Weyl structure $(g,\phi)$ on the solution
space $\mathcal{S}$ on condition that \begin{equation}n W=0 \end{equation}n and \begin{equation}\label{e.p.cart}
\left(\tfrac{1}{3}\goth{m}athcal{D} F_q
-\tfrac{2}{9}F_q^2-F_p\right)F_{qq}+\tfrac{2}{3}F_qF_{qp}-2F_{qy}+F_{pp}=0.
\end{equation} These conditions are equivalent to E. Cartan's original
conditions \eqref{e.i.wunsch}, \eqref{e.i.cart}. Our form of the
conditions is simpler, because the quantity in \eqref{e.p.cart} is only of
first order in $\goth{m}athcal{D}$.
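Let us also record the relation of \eqref{e.p.cart} to the basic relative
invariants. Assuming the formula $K=\tfrac16\goth{m}athcal{D} F_q-\tfrac19F_q^2-\tfrac12F_p$
for the function $K$ used in the earlier chapters (so that the bracket in
\eqref{e.p.cart} equals $2K$), and using $W_q=0$ whenever $W=0$, the left-hand
side of \eqref{e.p.cart} is, up to the nonvanishing factor $u_3/u_1^2$, just
the relative invariant $\inp{C}{1}$. The two conditions for the descent are
therefore precisely $\inp{A}{1}=0$ and $\inp{C}{1}=0$.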
The conformal metric of the Weyl structure $(g,\phi)$ coincides
with the conformal metric of the contact case and is represented
by
\begin{equation}gin{align*}
g&=2\goth{o}mega^1{\scriptstyle\wedge}\,t{\goth{o}mega}^3-(\goth{o}mega^2)^2= \\
&=2({\rm d} y-p{\rm d} x)({\rm d} q -\tfrac13F_q{\rm d} p+ K{\rm d} y
+(\tfrac13qF_q-pK-F){\rm d} x)-({\rm d} p-q{\rm d} x)^2,
\end{align*} while the Weyl potential is given by
\begin{equation}n\phi=-(2K_q+\tfrac19F_{qq}F_q+\tfrac13F_{qp})({\rm d} y-p{\rm d}
x)+\tfrac13F_{qq}({\rm d} p-q{\rm d} x) +\tfrac13F_q{\rm d} x. \end{equation}n
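For the reader's convenience we note that the first line in the formula for
$g$ above is the product $2\goth{o}mega^1{\scriptstyle\wedge}\,t{\goth{o}mega}^3-(\goth{o}mega^2)^2$ with
$$\goth{o}mega^1={\rm d} y-p\,{\rm d} x,\qquad \goth{o}mega^2={\rm d} p-q\,{\rm d} x,\qquad
{\scriptstyle\wedge}\,t{\goth{o}mega}^3={\rm d} q-F{\rm d} x-\tfrac13F_q({\rm d} p-q{\rm d} x)+K({\rm d} y-p{\rm d} x)$$
(cf. the explicit formulae in theorem \ref{th.f.1} below); collecting the
${\rm d} x$ terms of ${\scriptstyle\wedge}\,t{\goth{o}mega}^3$ indeed produces the coefficient
$\tfrac13qF_q-pK-F$ appearing in the second line.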
The Weyl connection for this geometry, lifted to
$CO(2,1)\to{\mathcal P}^p\to\mathcal{S}$, now viewed as the bundle of orthonormal frames, reads
\begin{equation}n
\Gamma=\begin{pmatrix}
-\vp{1} & -\goth{h}p{4} & 0 \\
-\vp{2} & -\vp{3} & -\goth{h}p{4} \\
0 &-\vp{2} & \vp{1}-2\vp{3}
\end{pmatrix}.
\end{equation}n The curvature is as follows \begin{equation}\label{e.ewcur}
(R^\goth{m}u_{~\nu})=\begin{pmatrix}
R^1_{~1}-F&R^1_{~2}&0\\
R^2_{~1}&-F&R^1_{~2}\\
0&R^2_{~1}&-R^1_{~1}-F \end{pmatrix} \end{equation} with
\begin{equation}gin{eqnarray}
&F={\rm d}\vp{3}=2\inp{B}{3}\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{2}+(2\inp{B}{4}-2\inp{B}{1})\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{3}
-2\inp{B}{2}\goth{h}p{2}{\scriptstyle\wedge}\,\goth{h}p{3},\nonumber \\
&R^1_{~1}=-\inp{B}{3}\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{2}-\inp{B}{4}\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{3}-\inp{B}{2}\goth{h}p{2}{\scriptstyle\wedge}\,\goth{h}p{3},\nonumber \\
&R^1_{~2}=\inp{B}{1}\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{2}+\inp{B}{2}\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{3},\nonumber \\
&R^2_{~1}=-\inp{B}{3}\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{3}+(\inp{B}{1}-2\inp{B}{4})\goth{h}p{2}{\scriptstyle\wedge}\,\goth{h}p{3}.
\nonumber
\end{eqnarray}
The Ricci tensor reads
\begin{equation}n
{\rm R}ic=\begin{equation}gin{pmatrix}
0&-3\inp{B}{3}&3\inp{B}{1}-5\inp{B}{4}\\
3\inp{B}{3}&2\inp{B}{4}&3\inp{B}{2}\\
-3\inp{B}{1}+\inp{B}{4}&-3\inp{B}{2}&0
\end{pmatrix}
\end{equation}n and satisfies the Einstein-Weyl equations \begin{equation}n
{\rm R}ic_{(ij)}=\tfrac{1}{3}R \cdot {g}_{ij}
\end{equation}n with the Ricci scalar $R=6\inp{B}{4}$. In the orthonormal
coframe obtained by setting $u_1=1$, $u_2=0$, $u_3=1$ the components of the
curvature read
\begin{equation}gin{align*}
\inp{B}{1}=&\tfrac{1}{18}F_{qqq}F_q+\tfrac16F_{qqp}+\tfrac{1}{36}F_{qq}^2,
\\
\inp{B}{2}=&\tfrac16F_{qqq}, \\
\inp{B}{3}=&\tfrac16F_{qqy}-\tfrac13F_{qq}K_q-\tfrac16F_{qqq}K-\tfrac{1}{18}F_{qq}F_{qp}
-\tfrac{1}{54}F^2_{qq}F_q-L_q,\\
\inp{B}{4}=&K_{qq}+\frac19F_{qqq}F_q+\tfrac13F_{qqp}+\frac{1}{12}F_{qq}^2.
\end{align*}
\section{Geometry on first jet space}\label{s.p-p}
\noindent In section \ref{s.c-p} of chapter \ref{ch.contact} we
described how certain ODEs modulo contact transformations generate
contact projective geometry on $J^1$. The fact that point
transformations form a subclass within contact transformations
suggests that ODEs modulo point transformations define some
refined version of contact projective geometry. Indeed, the only
object that is preserved by point transformations but is not
preserved by contact transformations is the projection ${\goth{m}athcal J}^1\to
\text{\em xy plane}$, whose fibres are generated by $\partial_p$.
This motivates us to propose the following
\begin{equation}gin{definition}\label{def.p-p}
A point projective structure on ${\goth{m}athcal J}^1$ is a contact projective
structure, such that integral curves of the field $\partial_p$ are
geodesics of the contact projective structure.
\end{definition}
We immediately get
\begin{equation}gin{lemma}
The field $\partial_p$ is geodesic for the contact projective
geometry generated by an ODE provided that the ODE satisfies
$$F_{qqq}=0.$$
\begin{equation}gin{proof}
In the notation of section \ref{s.cp3} of chapter \ref{ch.contact}
we have $\partial_p=e_2$ and, from proposition
\ref{prop.cp.coord}, $\Gamma^1_{~22}=0$. Thus $\nabla_2e_2=\lambda
e_2$ iff $\Gamma^3_{~22}=0$, which is equivalent to $F_{qqq}=0$ by
means of lemma \ref{lem.c-p}.
\end{proof}
\end{lemma}
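For instance, any equation whose right-hand side is at most quadratic in
$y''$, such as $y'''=(y'')^2$, satisfies $F_{qqq}=0$; for all such equations
the integral curves of $\partial_p$, that is the curves $x=const$, $y=const$
in ${\goth{m}athcal J}^1$, are geodesics of the associated contact projective structure.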
However, the condition $F_{qqq}=0$ is not sufficient for the form
\eqref{e.p.conn_7d} to be a Cartan connection for the point
projective structure and we show that there does not exist any
simple way to construct a Cartan connection on ${\mathcal P}^p\to J^1$. The
algebra $\goth{g}oth{co}(2,1)\semi{.}\mathbb{R}^3\subset\goth{o}(3,2)$ inherits the
following grading from \eqref{c.gradJ1}
$$\goth{g}oth{co}(2,1)\semi{.}\mathbb{R}^3=\goth{g}_{-2}\goth{o}plus\goth{g}_{-1}\goth{o}plus\goth{g}_{0}\goth{o}plus\goth{g}_{1}$$
but it is not semisimple, so the Tanaka method cannot be
implemented. Moreover, the broadest generalization of this method,
the Morimoto nilpotent geometry, which handles non-semisimple groups,
also fails in this case. This is because the Morimoto approach
requires the algebra $\goth{g}$ to be equal to the prolongation of its
non-positive part algebra $\goth{g}_{-2}\goth{o}plus\goth{g}_{-1}\goth{o}plus\goth{g}_0$. The
notion of prolongation, as well as an algorithmic procedure for
calculating it, was introduced by N. Tanaka \cite{Tan2}. In
our case the prolongation of $\goth{g}_{-2}\goth{o}plus\goth{g}_{-1}\goth{o}plus\goth{g}_0$ is
larger than $\goth{g}oth{co}(2,1)\semi{.}\mathbb{R}^3$ and equals precisely
$\goth{o}(3,2)$, which yields a contact projective structure. Thus only
the contact case is solved by the methods of the nilpotent
geometry.
Lacking a general theory we must search for a Cartan connection in a
more direct way. Consider then an ODE satisfying $F_{qqq}=0$. It
follows that $\inp{B}{2}=0$ and $\inp{B}{4}=\inp{B}{1}$ in
equations \eqref{e.p.dtheta_7d}. We seek four one-forms
\begin{equation}gin{align*}
\Xi_1=&\vp{1}+a_1\theta^1+a_2\theta^2+a_3\theta^4, \\
\Xi_2=&\vp{2}+b_1\theta^1+b_2\theta^2+b_3\theta^4, \\
\Xi_3=&\vp{3}+c_1\theta^1+c_2\theta^2+c_3\theta^4, \\
\Xi_4=&\goth{h}p{3}+f_1\theta^1+f_2\theta^2+f_3\theta^4,
\end{align*}
with yet unknown functions $a_1,\ldots,f_3$, such that the matrix
\begin{equation}n
\begin{pmatrix} \tfrac{1}{2}\Xi_{1} & \tfrac{1}{2}\Xi_{2} & 0 & 0 \\\\
\goth{h}p{4} & \Xi_{3}-\tfrac{1}{2}\Xi_{1} & 0 & 0 \\\\
\goth{h}p{2} & \Xi_{4} & \tfrac{1}{2}\Xi_{1}-\Xi_{3} & -\tfrac{1}{2}\Xi_{2} \\\\
2\goth{h}p{1} & \goth{h}p{2} & -\goth{h}p{4} & -\tfrac{1}{2}\Xi_{1}
\end{pmatrix}
\end{equation}n is a Cartan connection on ${\mathcal P}^7\to{\goth{m}athcal J}^1$. Calculating the
curvature for this connection we obtain that the horizontality
conditions yield
$${\rm d} a_1=X_1(a_1)\goth{h}p{1}+X_2(a_1)\goth{h}p{2}+\inp{B}{1}\goth{h}p{3}+X_4(a_1)\goth{h}p{4}-a_1\vp{1}-a_2\vp{2}.$$
Unfortunately, no combination of the structural functions
$\inp{A}{1},\ldots,\inp{D}{2}$ and their first-order coframe
derivatives satisfies this condition. Therefore we are not able to
build a Cartan connection for an arbitrary point projective
structure. Moreover, since $\inp{B}{1}$ is a basic point invariant
together with $\inp{A}{1}$ and $\inp{C}{1}$, it seems to us
unlikely that among the coframe derivatives of
$\inp{A}{1},\ldots,\inp{D}{2}$ of any order there exists a
function satisfying the above condition. If such a function
existed it would mean that among the derivatives there is a more
fundamental function from which $\inp{B}{1}$ can be obtained by
differentiation.
Of course, we do have a Cartan connection for the point projective
geometry provided that in addition to $F_{qqq}=0$ the conditions
$\inp{B}{1}=\inp{D}{1}=0$ are imposed. However, the geometric
interpretation of these conditions is unclear.
\section{Six-dimensional Weyl geometry in the split
signature}\label{s.w6d} \noindent The construction of the
six-dimensional split signature conformal geometry given in
chapter \ref{ch.contact} has also its Weyl counterpart in the
point case. A similar construction was done by P. Nurowski
\cite{Nur1}, but he considered only the conformal metric, not the Weyl
geometry. Here, apart from the tensor \begin{equation}n
{\scriptstyle\wedge}\,h{\goth{g}g}=2(\mathcal{O}mega_1-\mathcal{O}mega_3)\theta^2-2\mathcal{O}mega_3\theta^1+2\theta^4\theta^3
\end{equation}n of \eqref{e.c.g33}, we also have the one-form \begin{equation}n
\frac12\vp{3}.\end{equation}n The Lie derivatives of ${\scriptstyle\wedge}\,h{\goth{g}g}$ and $\vp{3}$ along the degenerate
direction $X_5+X_7$ are \begin{equation}n
L_{(X_5+X_7)}{\scriptstyle\wedge}\,h{\goth{g}g}={\scriptstyle\wedge}\,h{\goth{g}g}\qquad \text{and}\qquad
L_{(X_5+X_7)}\vp{3}=0.\end{equation}n In this manner the pair
$({\scriptstyle\wedge}\,h{\goth{g}g},\tfrac12\vp{3})$ generates the six-dimensional
split-signature Weyl geometry $(\goth{g}g,\phi)$ on the six-manifold
${\goth{m}athcal M}^6$, the space of integral curves of $X_5+X_7$. The
associated Weyl connection is $\goth{g}oth{co}(3,3)\semi{.}\mathbb{R}^6$-valued and
has the
following form.
\begin{equation}n
\Gamma^\goth{m}u_{~\nu}=\begin{pmatrix}
0&\tfrac12\goth{h}p{4}&\Gamma^1_{~3}&0&\Gamma^1_{~5}
&\Gamma^1_{~6}\\\\
\tfrac12\vp{2}&\tfrac12\vp{1}-\tfrac12\vp{3}&\Gamma^2_{~3}&-\Gamma^1_{~5}&0&\Gamma^2_{~6}\\\\
\tfrac12\goth{h}p{4}&0&\tfrac12\goth{h}p{3}-\tfrac12\goth{h}p{1}&-\Gamma^1_{~6}&-\Gamma^2_{~6}&0\\\\
0&-\tfrac12\goth{h}p{1}&\tfrac12\goth{h}p{3}&-\vp{3}&-\tfrac12\vp{2}&-\tfrac12\goth{h}p{4}\\\\
\tfrac12\goth{h}p{1}&0&\tfrac12\goth{h}p{2}&-\tfrac12\goth{h}p{4}&-\tfrac12\vp{1}-\tfrac12\vp{3}&0\\\\
-\tfrac12\goth{h}p{3}&-\tfrac12\goth{h}p{2}&0&-\Gamma^1_{~3}&-\Gamma^2_{~3}&\tfrac12\vp{1}-\tfrac32\vp{3}
\end{pmatrix}, \end{equation}n where
\begin{equation}gin{align*}
\Gamma^1_{~3}=&\tfrac12\vp{2}+\tfrac12\inp{A}{2}\goth{h}p{1},\\
\Gamma^2_{~3}=&\inp{A}{3}\goth{h}p{1}+\tfrac12\inp{A}{2}\goth{h}p{2}+\inp{A}{1}\goth{h}p{4}, \\
\Gamma^1_{~5}=&\inp{D}{1}\goth{h}p{1}+\inp{B}{3}\goth{h}p{2}+(\tfrac32\inp{B}{4}-\inp{B}{1})\goth{h}p{3}
+(\inp{C}{1}-\tfrac12\inp{A}{2})\goth{h}p{4}, \\
\Gamma^1_{~6}=&(\tfrac12\inp{B}{4}-\inp{B}{1})\goth{h}p{1}-\inp{B}{2}\goth{h}p{2},\\
\Gamma^2_{~6}=&(\inp{B}{3}+\inp{D}{1})\goth{h}p{1}+\tfrac12\inp{B}{4}\goth{h}p{2}+\inp{B}{2}\goth{h}p{4}.
\end{align*}
In contrast to section \ref{s.c6d} of chapter \ref{ch.contact}, this
connection seems to have full holonomy $CO(3,3)\ltimes\mathbb{R}^6$ and
it is not generated by ${\scriptstyle\wedge}\,h{\goth{o}mega}^p$ in any simple way. It is
also never Einstein-Weyl.
\section{Three-dimensional Lorentzian geometry on solution
space}\label{s.lor3} \noindent The geometries of sections
\ref{s.ew} to \ref{s.w6d} are counterparts of respective
geometries of the contact case. The point classification, however,
contains another geometry, which is new when compared to the
contact case. This is owing to the fact that the Einstein-Weyl
geometry of section \ref{s.ew} has in general the non-vanishing
Ricci scalar, which is a weighted function and can be fixed to a
constant by an appropriate choice of the conformal gauge. Thereby
the Weyl geometry on $\mathcal{S}$ is reduced to a
Lorentzian metric geometry.
These properties of the Weyl geometry are reflected at the level
of the ODEs by the fact that the equations \begin{equation}\label{so22}
y'''=\frac{3}{2}\frac{(y'')^2}{y'} \end{equation}
and \begin{equation}\label{so4}
y'''=\frac{3y'(y'')^2}{{y'}^2+1} \end{equation} are contact equivalent to
the trivial $y'''=0$ by means of corollary \ref{cor.c.10d_flat}
but they are mutually {\em non-equivalent} under point
transformations and possess the $\goth{o}(2,2)$ and $\goth{o}(4)$ algebras of
point symmetries respectively. Both of them generate the same flat
conformal geometry but their Weyl geometries differ. Evaluating
equations \eqref{e.ewcur} we see that the only
non-vanishing component of their curvature is the Ricci scalar,
which is negative for the equation \eqref{so22} and positive for
\eqref{so4}. In these circumstances we perform another reduction step in
the Cartan algorithm, setting the Ricci scalar equal to $\pm 6$
respectively\footnote{We choose $\pm 6$ here to avoid large
numerical factors.}, which means $\inp{B}{4}=\pm1$, and obtain a
six-dimensional subbundle ${\mathcal P}^p_6$ of ${\mathcal P}^p$. The invariant
coframe $(\goth{h}p{1},\goth{h}p{2},\goth{h}p{3},\goth{h}p{4},\vp{1},\vp{2})$ yields the
local structure of $SO(2,2)$ or $SO(4)$ on ${\mathcal P}^p_6$ while the
tensor ${\scriptstyle\wedge}\,h{g}=2\goth{h}p{1}\goth{h}p{3}-(\goth{h}p{2})^2$ descends to a metric
rather than a conformal class on $\mathcal{S}$ by means of conditions
\begin{equation}\label{p.liem} L_{X_5}{\scriptstyle\wedge}\,h{g}=0, \qquad L_{X_6}{\scriptstyle\wedge}\,h{g}=0.\end{equation} The
obtained metrics are locally isometric to the metrics on the
symmetric spaces $SO(2,2)/SO(2,1)$ or $SO(4)/SO(3)$.
In order to generalize this construction to a broader class of
equations we assume that the Ricci scalar of the Einstein-Weyl
geometry is non-zero \begin{equation}n
6K_{qq}+\frac23F_{qqq}F_q+2F_{qqp}+\frac12F_{qq}^2 \neq 0\end{equation}n and
set
$$ u_3=\sqrt{\left|6K_{qq}+\frac23F_{qqq}F_q+2F_{qqp}+\frac12F_{qq}^2\right|}$$
in the coframe of theorem \ref{th.p.1}. The
tensor ${\scriptstyle\wedge}\,h{g}$ on ${\mathcal P}^p_6$ projects to the metric $g$ on $\mathcal{S}$
provided that the conditions \eqref{p.liem} still hold, which is
equivalent to \begin{equation}n W=0 \quad \text{and} \quad
(\goth{m}athcal{D}+\tfrac23F_q)\left(
6K_{qq}+\frac23F_{qqq}F_q+2F_{qqp}+\frac12F_{qq}^2\right)=0. \end{equation}n
The structural equations of the Cartan coframe on ${\mathcal P}^p_6$ then read
\begin{equation}gin{align}
{\rm d}\goth{h}p{1} =&\vp{1}{\scriptstyle\wedge}\,\goth{h}p{1}-\goth{h}p{2}{\scriptstyle\wedge}\,\goth{h}p{4},\nonumber \\
{\rm d}\goth{h}p{2} =&\vp{2}{\scriptstyle\wedge}\,\goth{h}p{1}+\inv{p}{1}\goth{h}p{2}{\scriptstyle\wedge}\,\goth{h}p{3}-\goth{h}p{3}{\scriptstyle\wedge}\,\goth{h}p{4},\nonumber \\
{\rm d}\goth{h}p{3}=&\vp{2}{\scriptstyle\wedge}\,\goth{h}p{2}-\vp{1}{\scriptstyle\wedge}\,\goth{h}p{3}+\inv{p}{2}\goth{h}p{2}{\scriptstyle\wedge}\,\goth{h}p{3},\nonumber \\
{\rm d}\goth{h}p{4} =&\vp{1}{\scriptstyle\wedge}\,\goth{h}p{4}+\inv{p}{3}\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{2}
+\inv{p}{4}\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{3}+\inv{p}{5}\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{4}
-\tfrac12\inv{p}{2}\goth{h}p{2}{\scriptstyle\wedge}\,\goth{h}p{4}+\inv{p}{1}\goth{h}p{3}{\scriptstyle\wedge}\,\goth{h}p{4}, \nonumber\\
{\rm d}\vp{1} =&-\vp{2}{\scriptstyle\wedge}\,\goth{h}p{4}+\inv{p}{2}\vp{2}{\scriptstyle\wedge}\,\goth{h}p{1}
+\inv{p}{6}\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{2}+\inv{p}{7}\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{3}+\inv{p}{4}\goth{h}p{2}{\scriptstyle\wedge}\,\goth{h}p{3}
+\inv{p}{5}\goth{h}p{2}{\scriptstyle\wedge}\,\goth{h}p{4},\notag \\
{\rm d}\vp{2}=&-\vp{1}{\scriptstyle\wedge}\,\vp{2}+\inv{p}{1}\vp{2}{\scriptstyle\wedge}\,\goth{h}p{3}+\inv{p}{8}\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{2}
+\inv{p}{9}\goth{h}p{1}{\scriptstyle\wedge}\,\goth{h}p{3}+\inv{p}{10}\goth{h}p{2}{\scriptstyle\wedge}\,\goth{h}p{3}
+\inv{p}{5}\goth{h}p{3}{\scriptstyle\wedge}\,\goth{h}p{4}, \nonumber
\end{align}
with some functions $\inv{p}{1},\ldots,\inv{p}{10} $ and the
Levi-Civita connection is given by \begin{equation}n
\begin{pmatrix}
\Gamma^1_{~1} & \Gamma^1_{~2} & 0 \\
\Gamma^2_{~1} & 0 & \Gamma^1_{~2} \\
0 & \Gamma^2_{~1} & -\Gamma^1_{~1}
\end{pmatrix},
\end{equation}n
where
\begin{equation}gin{align*}
\Gamma^1_{~1}=&-\mathcal{O}mega_1+\tfrac12\inv{p}{2}\theta^2,\\
\Gamma^1_{~2}=&\tfrac12\inv{p}{2}\theta^1-\inv{p}{1}\theta^2-\theta^4,
\\
\Gamma^2_{~1}=&-\mathcal{O}mega_2+\tfrac12\inv{p}{2}\theta^3.
\end{align*}
The curvature reads \begin{equation}n \begin{pmatrix}
R^1_{~1} & R^1_{~2} & 0 \\
R^2_{~1} & 0 & R^1_{~2} \\
0 & R^2_{~1} & -R^1_{~1}
\end{pmatrix},
\end{equation}n
\begin{equation}gin{align*}
R^1_{~1}=&\tfrac12(\inv{p}{9}-\inv{p}{6})\theta^1{\scriptstyle\wedge}\,\theta^2+(\tfrac14(\inv{p}{2})^2
-\inv{p}{7})\theta^1{\scriptstyle\wedge}\,\theta^3+(\inv{p}{4}+X_2(\inv{p}{1})
+\tfrac12\inv{p}{1}\inv{p}{2})\theta^2{\scriptstyle\wedge}\,\theta^3,\\\\
R^1_{~2}=&(\inv{p}{10}-\tfrac12
X_2(\inv{p}{2})-\tfrac14(\inv{p}{2})^2)\theta^1{\scriptstyle\wedge}\,\theta^2+
(\inv{p}{4}+X_2(\inv{p}{1})+\tfrac12\inv{p}{1}\inv{p}{2})\theta^1{\scriptstyle\wedge}\,\theta^3
\\
&+((\inv{p}{1})^2-X_3(\inv{p}{1}))\theta^2{\scriptstyle\wedge}\,\theta^3, \\\\
R^2_{~1}=&-\inv{p}{8}\theta^1{\scriptstyle\wedge}\,\theta^2+\tfrac12(\inv{p}{6}-\inv{p}{9})\theta^1{\scriptstyle\wedge}\,\theta^3
+(-\inv{p}{10}+\tfrac12X_2(\inv{p}{2})+\tfrac14(\inv{p}{2})^2)\theta^2{\scriptstyle\wedge}\,\theta^3.
\end{align*}
\section[Fibre-preserving case: Cartan connection on seven-dimensional\ldots]
{Fibre-preserving case: Cartan connection on seven-dimensional
bundle}\label{s.f}
\noindent The construction of a Cartan connection for the
fibre-preserving case is very similar to its point counterpart. This is
due to the fact that every point symmetry of $y'''=0$ is
necessarily fibre-preserving and, as a consequence, the bundle we
will construct is also of dimension seven. Starting from the
$G_f$-structure of the Introduction, which is given by the forms
\begin{equation}n\begin{aligned}
\goth{h}f{1}=&u_1\goth{o}mega^1, \\
\goth{h}f{2}=&u_2\goth{o}mega^1+u_3\goth{o}mega^2, \\
\goth{h}f{3}=&u_4\goth{o}mega^1+u_5\goth{o}mega^2+u_6\goth{o}mega^3,\\
\goth{h}f{4}=&u_7\goth{o}mega^4, \\
\end{aligned} \end{equation}n and after the substitutions
\begin{equation}gin{align*}
u_6=&\frac{u_3^2}{u_1},\quad\quad\quad u_7=\frac{u_1}{u_3}, \\
u_5=&\frac{u_3}{u_1}\left(u_2-\frac{1}{3}u_3F_q\right), \\
u_4=&\frac{u^2_3}{u_1}K+\frac{u_2^2}{2u_1}
\end{align*}
we get the following theorem.
\begin{equation}gin{theorem}\label{th.f.1}
An equation $y'''=F(x,y,y',y'')$ modulo fibre-preserving
transformations is described by the coframe
$(\goth{h}f{1},\goth{h}f{2},\goth{h}f{3},\goth{h}f{4},\vf{1},\vf{2},\vf{3})$ which
generates the following Cartan connection
\begin{equation}\label{e.f.conn_7d}
{\scriptstyle\wedge}\,h{\goth{o}mega}^f=\begin{pmatrix} \tfrac{1}{2}\vf{1} & \tfrac{1}{2}\vf{2} & 0 & 0 \\\\
\goth{h}f{4} & \vf{3}-\tfrac{1}{2}\vf{1} & 0 & 0 \\\\
\goth{h}f{2} & \goth{h}f{3} & \tfrac{1}{2}\vf{1}-\vf{3} & -\tfrac{1}{2}\vf{2} \\\\
2\goth{h}f{1} & \goth{h}f{2} & -\goth{h}f{4} & -\tfrac{1}{2}\vf{1}
\end{pmatrix}
\end{equation} on the seven-dimensional bundle $H_3\to{\mathcal P}^f\to{\goth{m}athcal J}^2$. The group
$H_3$ is the same as in the point case
\begin{equation}n H_3=\begin{pmatrix} \sqrt{u_1} &\frac12\frac{u_2}{\sqrt{u_1}} & 0 & 0 \\\\
0 & \tfrac{u_3}{\sqrt{u_1}} & 0 & 0 \\\\
0 & 0 & \tfrac{\sqrt{u_1}}{u_3} &
-\tfrac12\tfrac{u_2}{\sqrt{u_1}\,u_3} \\\\
0 & 0 & 0 & \tfrac{1}{\sqrt{u_1}}\end{pmatrix},
\end{equation}n and the connection is explicitly given by \begin{equation}n
{\scriptstyle\wedge}\,h{\goth{o}mega}^f(x^i,u_\goth{m}u)=u^{-1}\,{\goth{o}mega}^f\,u+u^{-1}{\rm d} u \end{equation}n
where $u\in H_3$ and \begin{equation}n{\goth{o}mega}^f= \begin{pmatrix}
\tfrac{1}{2}\vf{1}^0&\tfrac{1}{2}\vf{2}^0 & 0 & 0 \\\\
{\scriptstyle\wedge}\,t{\goth{o}mega}^4 & \vf{3}^0-\tfrac{1}{2}\vf{1}^0 & 0 & 0 \\\\
\goth{o}mega^2 & {\scriptstyle\wedge}\,t{\goth{o}mega}^3 & \tfrac{1}{2}\vf{1}^0-\vf{3}^0 & -\tfrac{1}{2}\vf{2}^0 \\\\
2\goth{o}mega^1 & \goth{o}mega^2 & -{\scriptstyle\wedge}\,t{\goth{o}mega}^4 & -\tfrac{1}{2}\vf{1}^0
\end{pmatrix}\end{equation}n with
\begin{equation}gin{align*}
\goth{o}mega^1=&{\rm d} y-p{\rm d} x, \notag \\
\goth{o}mega^2=&{\rm d} p-q{\rm d} x, \notag \\
{\scriptstyle\wedge}\,t{\goth{o}mega}^3=&{\rm d} q-F{\rm d} x-\tfrac13F_q({\rm d} p-q{\rm d} x)+K({\rm d} y-p{\rm d} x),\notag \\
\goth{o}mega^4=&{\rm d} x \\
\vf{1}^0=&-K_q\,\goth{o}mega^1+\tfrac13F_{qq}\goth{o}mega^2, \notag \\
\vf{2}^0=&L\,\goth{o}mega^1-K_q\,\goth{o}mega^2+\tfrac13F_{qq}{\scriptstyle\wedge}\,t{\goth{o}mega}^3-K\goth{o}mega^4, \notag \\
\vf{3}^0=&-K_q\,\goth{o}mega^1+\tfrac13F_{qq}\,\goth{o}mega^2
+\tfrac13F_q\goth{o}mega^4.\notag
\end{align*}
\end{theorem}
The exterior differentials of the coframe are equal to
\begin{equation}gin{align}
{\rm d}\goth{h}f{1} =&\vf{1}{\scriptstyle\wedge}\,\goth{h}f{1}+\goth{h}f{4}{\scriptstyle\wedge}\,\goth{h}f{2}+\inf{B}{1}\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{2},\nonumber \\
{\rm d}\goth{h}f{2} =&\vf{2}{\scriptstyle\wedge}\,\goth{h}f{1}+\vf{3}{\scriptstyle\wedge}\,\goth{h}f{2}+\goth{h}f{4}{\scriptstyle\wedge}\,\goth{h}f{3}+\inf{B}{1}\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{3},\nonumber \\
{\rm d}\goth{h}f{3}=&\vf{2}{\scriptstyle\wedge}\,\goth{h}f{2}+(2\vf{3}-\vf{1}){\scriptstyle\wedge}\,\goth{h}f{3}+\inf{A}{1}\goth{h}f{4}{\scriptstyle\wedge}\,\goth{h}f{1}+\inf{B}{1}\goth{h}f{2}{\scriptstyle\wedge}\,\goth{h}f{3},\nonumber \\
{\rm d}\goth{h}f{4} =&(\vf{1}-\vf{3}){\scriptstyle\wedge}\,\goth{h}f{4}, \nonumber\\
{\rm d}\vf{1}
=&-\vf{2}{\scriptstyle\wedge}\,\goth{h}f{4}+(\inf{D}{1}-\inf{B}{2})\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{2}
+\inf{B}{3}\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{3}+ \label{e.f.dtheta_7d} \\
&+(2\inf{C}{1}-\inf{A}{2})\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{4}+\inf{B}{4}\goth{h}f{2}{\scriptstyle\wedge}\,\goth{h}f{3}+\inf{B}{5}\goth{h}f{2}{\scriptstyle\wedge}\,\goth{h}f{4}, \nonumber \\
{\rm d}\vf{2}=&(\vf{3}-\vf{1}){\scriptstyle\wedge}\,\vf{2}+\inf{D}{2}\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{2}+(\inf{D}{1}-2\inf{B}{2})\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{3}
+\inf{A}{3}\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{4} \nonumber \\
&+\inf{B}{6}\goth{h}f{2}{\scriptstyle\wedge}\,\goth{h}f{3}+(\inf{C}{1}-\inf{A}{2})\goth{h}f{2}{\scriptstyle\wedge}\,\goth{h}f{4}+\inf{B}{5}\goth{h}f{3}{\scriptstyle\wedge}\,\goth{h}f{4}, \nonumber \\
{\rm d}\vf{3}=&(\inf{D}{1}-\inf{B}{2})\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{2}+\inf{B}{3}\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{3}
+(\inf{C}{1}-\inf{A}{2})\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{4} \notag \\
&+\inf{B}{4}\goth{h}f{2}{\scriptstyle\wedge}\,\goth{h}f{3}+\tfrac12\inf{B}{5}\goth{h}f{2}{\scriptstyle\wedge}\,\goth{h}f{4}, \nonumber
\end{align}
where
$\inf{A}{1},\inf{A}{2},\inf{A}{3},\inf{B}{1},\inf{B}{2},\inf{B}{3},\inf{B}{4},\inf{B}{5},\inf{B}{6},\inf{C}{1},\inf{D}{1},\inf{D}{2}$
are functions on ${\mathcal P}^f$. All these invariants are expressed in terms of the
coframe derivatives of $\inf{A}{1},\inf{B}{1},\inf{C}{1}$, which
read
\begin{equation}gin{align}
\inf{A}{1}=&\frac{u_3^3}{u^3_1}W, \nonumber \\
\inf{B}{1}=&\frac{1}{3u_3}F_{qq}, \nonumber \\
\inf{C}{1}=&\frac{u_2}{u_1^2}\left(\tfrac19F_{qq}F_q+\tfrac13F_{qp}+K_q\right)+ \nonumber \\
&+\frac{u_3}{u_1^2}\left(\tfrac23F_{qq}K-\tfrac13K_qF_q-K_p-\tfrac23F_{qy}\right).\nonumber
\end{align}
If $\inf{A}{1}=0$ then $\inf{A}{2}=\inf{A}{3}=0$ and if
$\inf{B}{1}=0$ then $\inf{B}{i}=0$ for $i=2,\ldots,6$. In
particular we have \begin{equation}n
{\rm d}\inf{B}{1}=-\inf{B}{2}\goth{h}f{1}+(\inf{B}{6}-\inf{B}{3})\goth{h}f{2}-\inf{B}{4}\goth{h}f{3}-\inf{B}{5}\goth{h}f{4}
-\inf{B}{1}\vf{3}.\end{equation}n The flat case is given by the vanishing of
$\inf{A}{1}$, $\inf{B}{1}$ and $\inf{C}{1}$.
\subsection{Fibre-preserving versus point objects}
An immediate observation about the fibre-preserving objects is
that they are closely related to their point counterparts of
theorem \ref{th.p.1}. The bundle ${\mathcal P}^f$ is also, like ${\mathcal P}^p$, a
subbundle of the contact bundle ${\mathcal P}^c$ given by
$\tau\goth{g}oth{co}lon{\mathcal P}^f\to{\mathcal P}^c$ \begin{equation}\label{e.f.cf} u_4=0, \qquad u_5=0,
\qquad u_6=0, \end{equation} and, as before, the forms
$\goth{o}verset{f}{\goth{h}f{1}},\ldots,\goth{o}verset{f}{\goth{h}f{4}}$ of the
fibre-preserving coframe are given by pull-backs of their contact
counterparts \begin{equation}n
\goth{o}verset{f}{\goth{h}f{i}}=\tau^*\goth{o}verset{c}{\goth{h}c{i}},\qquad i=1,2,3,4.
\end{equation}n This fact suggests that there exists a distinguished
diffeomorphism between ${\mathcal P}^p$ and ${\mathcal P}^f$ given by a similar condition.
Indeed, there is a unique diffeomorphism $\rho\goth{g}oth{co}lon{\mathcal P}^p\to{\mathcal P}^f$
such that
\begin{equation}\label{e.f.pf1}
\goth{o}verset{p}{\goth{h}p{1}}=\rho^*\goth{o}verset{f}{\goth{h}f{1}},\qquad\qquad
\goth{o}verset{p}{\goth{h}p{2}}=\rho^*\goth{o}verset{f}{\goth{h}f{2}},\qquad\qquad
\goth{o}verset{p}{\goth{h}p{3}}=\rho^*\goth{o}verset{f}{\goth{h}f{3}}. \end{equation} It is given by
the identity map in the coordinate systems of theorems
\ref{th.p.1} and \ref{th.f.1}. The remaining one-forms are
transported as follows.\begin{equation}\label{e.f.pf2} \begin{equation}gin{aligned}
\goth{o}verset{p}{\goth{h}p{4}}=&\rho^*(\goth{o}verset{f}{\goth{h}f{4}}+\tfrac12\inf{B}{1}\,\goth{o}verset{f}{\goth{h}f{1}}), \\
\goth{o}verset{p}{\vp{1}}=&\rho^*(\goth{o}verset{f}{\vf{1}}+\inf{B}{5}\,\goth{o}verset{f}{\goth{h}f{1}}
-\tfrac12\inf{B}{1}\,\goth{o}verset{f}{\goth{h}f{2}}),\\
\goth{o}verset{p}{\vp{2}}=&\rho^*(\goth{o}verset{f}{\vf{2}}+\tfrac12\inf{B}{5}\,\goth{o}verset{f}{\goth{h}f{2}}
-\tfrac12\inf{B}{1}\,\goth{o}verset{f}{\goth{h}f{3}}), \\
\goth{o}verset{p}{\vp{3}}=&\rho^*(\goth{o}verset{f}{\vf{3}}+\tfrac12\inf{B}{5}\,\goth{o}verset{f}{\goth{h}f{1}}),
\end{aligned}\end{equation}
where \begin{equation}n
\inf{B}{5}=-\frac{1}{3u_1}(\goth{m}athcal{D}(F_{qq})+\tfrac13F_{qq}F_q). \end{equation}n The
above formulae make it easy to transform the fibre-preserving
coframe into the point coframe. Given the fibre-preserving
coframe $(\goth{o}verset{f}{\goth{h}f{1}},\ldots,\goth{o}verset{f}{\vf{3}})$ we
compute ${\rm d} \goth{o}verset{f}{\goth{h}f{1}}$ and take the coefficient of the
$\goth{o}verset{f}{\goth{h}f{1}}{\scriptstyle\wedge}\,\goth{o}verset{f}{\goth{h}f{2}}$ term. This is the
function $\inf{B}{1}$. Next we compute ${\rm d} \inf{B}{1}$,
decompose it in the fibre-preserving coframe and take minus the
coefficient of $\goth{o}verset{f}{\goth{h}f{4}}$; this is
$\inf{B}{5}$. We substitute these functions together with
$(\goth{o}verset{f}{\goth{h}f{1}},\ldots,\goth{o}verset{f}{\vf{3}})$ into the
right-hand sides of \eqref{e.f.pf1} and \eqref{e.f.pf2}, where $\rho$ is
the identity transformation of ${\mathcal P}^f$, and the point coframe is
explicitly constructed on ${\mathcal P}^f$.
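In other words, denoting by $(X_1,\ldots,X_7)$ the frame dual to the
fibre-preserving coframe, the two functions entering \eqref{e.f.pf1} and
\eqref{e.f.pf2} are simply
$$\inf{B}{1}={\rm d}\goth{o}verset{f}{\goth{h}f{1}}(X_1,X_2),\qquad\qquad \inf{B}{5}=-X_4(\inf{B}{1}),$$
as can be read off from the first equation of \eqref{e.f.dtheta_7d} and from
the formula for ${\rm d}\inf{B}{1}$ given above.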
Now let us consider the inverse construction, from the
point case to the fibre-preserving case. If we have only the
point coframe $(\goth{o}verset{p}{\goth{h}p{1}},\ldots,\goth{o}verset{p}{\vp{3}})$
then we cannot use eq. \eqref{e.f.pf2} since we are not able
to construct the function $\inf{B}{1}$, which is not a point
invariant\footnote{For example the point transformation
$(x,y)\to(y,x)$ destroys the condition $F_{qq}=0$.} and, as such,
does not appear among the functions $\inp{A}{1},\ldots,\inp{D}{2}$ in
\eqref{e.p.dtheta_7d} or among their derivatives. However, if we
consider the point coframe {\em and} the function $\inf{B}{1}$
then the construction is possible, since $\inf{B}{5}$ is given by
the derivative $-X_4(\inf{B}{1})$ along the field $X_4$ of the
{\em point} dual frame. Therefore the passage from the point case
to the fibre-preserving case is possible if we supplement the
connection ${\scriptstyle\wedge}\,h{\goth{o}mega}^p$ with the function $\inf{B}{1}$. This
fact implies that each construction of the point case has a
fibre-preserving counterpart equipped with an additional object
generated by $\inf{B}{1}$.
\section{Fibre-preserving geometry from point
geometry}\label{s.fp}
\subsection{Counterpart of the Einstein-Weyl geometry on $\mathcal{S}$}
This geometry is constructed in the following way. Let
$(\goth{h}f{1},\ldots,\vf{3})$ denote again the fibre-preserving
coframe. Given the objects ${\scriptstyle\wedge}\,h{g}=2\goth{h}f{1}\goth{h}f{3}-(\goth{h}f{2})^2$ and
\begin{equation}n{\scriptstyle\wedge}\,h{\phi}=\vf{3}+\frac12\inf{B}{5}\goth{h}f{1}, \end{equation}n let us also
consider the function $\inf{B}{1}$, and ask under what conditions
the triple $({\scriptstyle\wedge}\,h{g},{\scriptstyle\wedge}\,h{\phi},\inf{B}{1})$ can be projected to a
geometry on $\mathcal{S}$. There are two possibilities here, either
$\inf{B}{1}=0$ or $\inf{B}{1}\neq0$. If $\inf{B}{1}=0$ then it is
easy to see that the pair $({\scriptstyle\wedge}\,h{g},{\scriptstyle\wedge}\,h{\phi})$ generates the
Einstein-Weyl geometry only if $\inf{A}{1}=\inf{C}{1}=0$, which
means that we are in the trivial case $y'''=0$.
Suppose then that $\inf{B}{1}\neq0$. For the geometry on $\mathcal{S}$ to exist
we need not only the conditions for the Lie transport of ${\scriptstyle\wedge}\,h{g}$
and ${\scriptstyle\wedge}\,h{\phi}$ but also \begin{equation}\label{e.lieb}
L_{X_i}\inf{B}{1}=0,\quad \text{for}\quad i=4,5,6,\quad
L_{X_7}\inf{B}{1}=-\inf{B}{1}.\end{equation} If all these conditions are
satisfied then $({\scriptstyle\wedge}\,h{g},{\scriptstyle\wedge}\,h{\phi},\inf{B}{1})$ defines on $\mathcal{S}$ the
Einstein-Weyl geometry $(g,\phi)$ of the point case, which is
equipped with an additional object: a weighted function $f$ which
transforms $f\to e^{-\lambda}f$ when $g\to e^{2\lambda}g$ and is
given by the projection of $\inf{B}{1}$. The conditions for
existence of this geometry are $\inf{A}{1}=\inf{B}{5}=0$, that is
\begin{equation}\label{f.lief} W=0 \qquad\text{and}\qquad
\goth{m}athcal{D}(F_{qq})+\tfrac13F_{qq}F_q=0.\end{equation} As usual, the condition $W=0$
guarantees the existence of $[g]$ and the other condition yields
\eqref{e.lieb}. The proper Lie transport of ${\scriptstyle\wedge}\,h{\phi}$ along
$X_4$ is already guaranteed by the above conditions as their
differential consequence.
\subsection{Counterpart of the Weyl geometry on ${\goth{m}athcal M}^6$}
In a similar vein we show that the triple
$({\scriptstyle\wedge}\,h{\goth{g}g},\tfrac12{\scriptstyle\wedge}\,h{\phi},\inf{B}{1})$, where \begin{equation}gin{eqnarray*}
&{\scriptstyle\wedge}\,h{\goth{g}g}=2(\mathcal{O}mega_1-\mathcal{O}mega_3)\theta^2-2\mathcal{O}mega_3\theta^1+2\theta^4\theta^3,
\\
&{\scriptstyle\wedge}\,h{\phi}=\vf{3}+\frac12\inf{B}{5}\goth{h}f{1}.
\end{eqnarray*}
projects to the
six-dimensional split signature Weyl geometry $(\goth{g}g,\phi)$ of
chapter \ref{ch.point} section \ref{s.w6d} equipped with a
function $f$ of conformal weight $-2$.
\subsection{Counterpart of Lorentzian geometry on $\mathcal{S}$}
Given $(g,\phi,f)$ on $\mathcal{S}$ it is natural to fix the conformal
gauge so that\footnote{With a possible change of the signature to make
$f$ positive.} $f=1$. This is equivalent to another substitution
\begin{equation}n u_3=\tfrac13F_{qq} \end{equation}n in the Cartan reduction algorithm,
which leads us to the bundle ${\mathcal P}^f_6$ with the following
differential system \begin{equation}\begin{aligned}
{\rm d}\goth{h}f{1}=&\,\vf{1}{\scriptstyle\wedge}\,\goth{h}f{1}+\goth{h}f{4}{\scriptstyle\wedge}\,\goth{h}f{2},\\
{\rm d}\goth{h}f{2}=&\,\vf{2}{\scriptstyle\wedge}\,\goth{h}f{1}+\inv{f}{1}\goth{h}f{3}{\scriptstyle\wedge}\,\goth{h}f{2}+\inv{f}{2}\goth{h}f{4}{\scriptstyle\wedge}\,\goth{h}f{2}+\goth{h}f{4}{\scriptstyle\wedge}\,\goth{h}f{3},\\
{\rm d}\goth{h}f{3}=&-\vf{1}{\scriptstyle\wedge}\,\goth{h}f{3}+\vf{2}{\scriptstyle\wedge}\,\goth{h}f{2}+(2-2\inv{f}{3})\goth{h}f{3}{\scriptstyle\wedge}\,\goth{h}f{2}+\inv{f}{4}\goth{h}f{4}{\scriptstyle\wedge}\,\goth{h}f{1}+2\inv{f}{2}\goth{h}f{4}{\scriptstyle\wedge}\,\goth{h}f{3},\\
{\rm d}\goth{h}f{4}=&\,\vf{1}{\scriptstyle\wedge}\,\goth{h}f{4}+\inv{f}{5}\,\goth{h}f{4}{\scriptstyle\wedge}\,\goth{h}f{1}+(\inv{f}{3}-2)\,\goth{h}f{4}{\scriptstyle\wedge}\,\goth{h}f{2}+\inv{f}{1}\,\goth{h}f{4}{\scriptstyle\wedge}\,\goth{h}f{3},\\
{\rm d}\vf{1}=&\,(2\inv{f}{3}-2)\vf{2}{\scriptstyle\wedge}\,\goth{h}f{1}-\vf{2}{\scriptstyle\wedge}\,\goth{h}f{4}+\inv{f}{6}\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{2}+\inv{f}{7}\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{3}+\inv{f}{8}\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{4}
-\inv{f}{5}\goth{h}f{2}{\scriptstyle\wedge}\,\goth{h}f{4}, \\
{\rm d}\vf{2}=&\,\vf{2}{\scriptstyle\wedge}\,\vf{1}-\inv{f}{1}\vf{2}{\scriptstyle\wedge}\,\goth{h}f{3}-\inv{f}{2}\vf{2}{\scriptstyle\wedge}\,\goth{h}f{4}+\inv{f}{9}\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{2}
+\inv{f}{10}\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{3}+\inv{f}{11}\goth{h}f{1}{\scriptstyle\wedge}\,\goth{h}f{4}+\\
&+\inv{f}{12}\goth{h}f{2}{\scriptstyle\wedge}\,\goth{h}f{3}+\inv{f}{13}\goth{h}f{2}{\scriptstyle\wedge}\,\goth{h}f{4}-\inv{f}{5}\goth{h}f{3}{\scriptstyle\wedge}\,\goth{h}f{4}. \\
\end{aligned} \end{equation}
If the conditions \eqref{f.lief}, now equivalent to
$\inv{f}{4}=0=\inv{f}{2}$, are satisfied then ${\scriptstyle\wedge}\,h{g}=2\goth{h}f{1}\goth{h}f{3}-(\goth{h}f{2})^2$
projects to a Lorentzian metric $g$ and \begin{equation}n
{\scriptstyle\wedge}\,h{\phi}=-2\inv{f}{5}\goth{h}f{1}+2\inv{f}{3}\goth{h}f{2}+2\inv{f}{1}\goth{h}f{3}
\end{equation}n projects to a one-form $\phi$. With the Lorentzian metric
there is associated the Levi-Civita connection
$(\Gamma^\goth{m}u_{~\nu})$:
\begin{equation}gin{align*}
\Gamma^1_{~1}=&-\vf{1}+(\inv{f}{3}-1)\goth{h}f{2},\\
\Gamma^1_{~2}=&(\inv{f}{3}-1)\goth{h}f{1}+\inv{f}{1}\goth{h}f{2}-\goth{h}f{4},
\\
\Gamma^2_{~1}=&-\vf{2}+(\inv{f}{3}-1)\goth{h}f{3}.
\end{align*}
The covariant derivative of $\phi$ with respect to
$\Gamma^\goth{m}u_{~\nu}$ is as follows \begin{equation}n \phi_{i;j}=\begin{equation}gin{pmatrix}
-\inv{f}{9}-(\inv{f}{5})^2&\tfrac{1}{2}\inv{f}{6}+\inv{f}{5}(\inv{f}{3}-2)&\inv{f}{12}-\inv{f}{3}(\inv{f}{3}-3)\\
\tfrac{1}{2}\inv{f}{6}+\inv{f}{5}\inv{f}{3}&2\inv{f}{12}-2\inv{f}{3}(\inv{f}{3}-2)&X_2(\inv{f}{1})+\inv{f}{1}\\
\inv{f}{12}-\inv{f}{3}(\inv{f}{3}-1)&X_2(\inv{f}{1})-\inv{f}{1}&X_3(\inv{f}{1})
\end{pmatrix}.
\end{equation}n The one-form $\phi$ and the Ricci tensor satisfy the
following identities
\begin{equation}gin{eqnarray*}
&\nabla_{(i}\phi_{j)}=-{\rm R}ic_{ij}-\phi_i\phi_j+(\phi^k\phi_k+2)g_{ij},\\
&{\rm R}=2\phi^k\phi_k+6,\\
&{\rm d}\phi=-2\ast\phi.
\end{eqnarray*}
The homogeneous model of this geometry is associated with
$y'''=\tfrac32\tfrac{(y'')^2}{y'}$.
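As a direct check of the conditions \eqref{f.lief} for this model, where
$\goth{m}athcal{D}$ denotes the total derivative
$\partial_x+p\partial_y+q\partial_p+F\partial_q$, take
$F=\tfrac32\tfrac{q^2}{p}$; then $F_q=\tfrac{3q}{p}$ and $F_{qq}=\tfrac{3}{p}$, so
$$\goth{m}athcal{D}(F_{qq})+\tfrac13F_{qq}F_q=-\tfrac{3q}{p^2}+\tfrac13\cdot\tfrac{3}{p}\cdot\tfrac{3q}{p}=0,$$
while $W=0$ holds because, as noted above in section \ref{s.lor3}, this
equation is contact equivalent to $y'''=0$.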
\chapter{Classification of third-order ODEs}\label{ch.class}
\noindent Previous chapters provided several geometric
constructions associated with third-order ODEs; now we focus on the
application of the Cartan equivalence method to the classification
of ODEs. Following \cite{Olv2}, chapters 8 -- 14, we describe the
procedure which was outlined in the Introduction.
By means of theorems \ref{th.c.1}, \ref{th.c.2}, \ref{th.p.1} and
\ref{th.f.1} to an ODE modulo contact/point/fibre-preserving
transformations there is associated a Cartan coframe on a bundle
${\mathcal P}\to{\goth{m}athcal J}^2$. From theorem \ref{th.i.clas} (with obvious changes)
we know that the underlying ODEs are equivalent if and only if
the coframes are. Thereby the problem of equivalence of ODEs is
reduced to the equivalence problem of coframes.
Consider two smooth coframes $(\goth{o}mega^i)$ on ${\goth{m}athcal M}^n$ and
$(\cc{\goth{o}mega}^i)$ on $\cc{{\goth{m}athcal M}}^n$. Let $(X_i)$ and $(\cc{X}_i)$ be
the dual frames. We ask under what conditions there exists a local
diffeomorphism ${\mathcal P}hi\goth{g}oth{co}lon {\goth{m}athcal M} \supset U\to \cc{U}\subset \cc{M}$
such that ${\mathcal P}hi^*\cc{\goth{o}mega}^i=\goth{o}mega^i$. To answer this question
we need to define several notions. The structural equations for
$(\goth{o}mega^i)$ read
${\rm d}\goth{o}mega^i=\frac12T^i_{jk}\goth{o}mega^j{\scriptstyle\wedge}\,\goth{o}mega^k$, where
$T^i_{jk}=T^i_{[jk]}$ are smooth functions since the coframe is
smooth. The functions $T^i_{jk}$ and their coframe derivatives
$T^i_{jk|l}=X_l(T^i_{jk})$, $T^i_{jk|lm}=X_m(X_l(T^i_{jk}))$ etc.
of any order are called structural functions of the coframe.
$T^i_{~jk}$ are the structural functions of zero order,
$T^i_{jk|l}$ are the structural functions of first order and so
on. Since pull-back commutes with exterior differentiation all the
structural functions are relative invariants of the equivalence
problem of coframes. We say that smooth functions $f_1,\ldots,f_k$
are independent at a point $w$ if ${\rm d} f_1{\scriptstyle\wedge}\,\ldots{\scriptstyle\wedge}\,{\rm d} f_k\neq
0$ at $w$. The rank of a coframe $(\goth{o}mega^i)$ at $w$ is defined to
be the maximal number of its independent structural functions at
$w$. The order $s$ at $w$ of a coframe of rank $r$ at $w$ is the
smallest natural number such that among structural functions of
order at most $s$ there are $r$ functions independent at $w$.
We call a smooth coframe regular in an open set $U$ if for each
$j\goth{g}eq 0$ the number of its independent structural functions of
order $j$ is constant on $U$. In particular the rank and the order
of a regular coframe are both constant on $U$. From now on we
confine our considerations to coframes regular on sufficiently
small topologically trivial open subsets $U$ of ${\goth{m}athcal M}$. For a
regular coframe of order $s$ one may choose a set of $r$
independent structural functions $I_1,\ldots,I_r$, which are of
order at most $s$, and all the remaining structural functions
$T_\sigma$ of any order are expressible in terms of the $I_j$:
$T_\sigma=f_\sigma(I_1,\ldots,I_r)$. Indeed, all the $(s+1)$-order
functions are of this form by the definition of the order of a
coframe, whereas all the functions of order $s+2$ or greater are
their coframe derivatives. These derivatives depend on $I_j$ and
$I_{j|k}$, but $I_{j|k}$ are functions of order at most $s+1$, so
they are also functions of the $I_j$.
Thereby all the structural functions are described by i) the set
$(I_1,\ldots,I_r)$ and ii) the formulae which characterize how the
$(s+1)$-order functions are expressed by $(I_1,\ldots,I_r)$. Both
these objects may be encoded in the so called classifying
function. The classifying function for a coframe of order $s$ is a
function ${\bf T}\goth{g}oth{co}lon{\goth{m}athcal M}\to\mathbb{R}^N$ given by all the structural
functions of order at most $s+1$, which are lexicographically
ordered with respect to their indices, and $N$ is the number of
these structural functions \begin{equation}n {\bf T}:w\in U\longmapsto
(T^i_{jk},T^i_{jk|l_1},\ldots,T^i_{jk|l_1\ldots
l_{s+1}})\in\mathbb{R}^N. \end{equation}n By smoothness and regularity of the
underlying coframe the graph ${\bf T}(U)$ is an $r$-dimensional
submanifold in $\mathbb{R}^N$. The last ingredient which we need is the
notion of overlapping: two $r$-dimensional submanifolds of
$\mathbb{R}^N$ overlap if their intersection is also an $r$-dimensional
submanifold of $\mathbb{R}^N$. Now we are in a position to cite the
following
\begin{equation}gin{theorem}[\cite{Olv2}, theorem 14.24]\label{th.cc.clas}
Let $(\goth{o}mega^i)$ and $(\cc{\goth{o}mega}^i)$ be smooth, regular coframes
defined, respectively, on ${\goth{m}athcal M}^n$ and $\cc{{\goth{m}athcal M}}^n$. There exists a
local diffeomorphism ${\mathcal P}hi:{\goth{m}athcal M}\to\cc{{\goth{m}athcal M}}$ such that
${\mathcal P}hi^*\cc{\goth{o}mega}^i=\goth{o}mega^i$, $i=1,\ldots,n$, if and only if the
coframes have the same order $s=\cc{s}$ and the graphs ${\bf T}({\goth{m}athcal M})$
and $\cc{{\bf T}}(\cc{{\goth{m}athcal M}})$ of their classifying functions overlap.
Moreover, if $w_0\in{\goth{m}athcal M}$ and $\cc{w}_0 \in \cc{{\goth{m}athcal M}}$ are any points
mapping to the same point \begin{equation}n z_0={\bf T}(w_0)=\cc{{\bf T}}(\cc{w}_0)\in
{\bf T}({\goth{m}athcal M})\cap\cc{{\bf T}}(\cc{{\goth{m}athcal M}})\end{equation}n on the overlap, then there is a
unique equivalence map ${\mathcal P}hi$ such that $\cc{w_0}={\mathcal P}hi(w_0)$.
\end{theorem}
An important notion is a symmetry of a coframe. This is a
diffeomorphism ${\mathcal P}hi:{\goth{m}athcal M}\to{\goth{m}athcal M}$ such that ${\mathcal P}hi^*\goth{o}mega^i=\goth{o}mega^i$.
We have
\begin{equation}gin{theorem}[\cite{Olv2}, theorem 14.26]\label{th.cc.sym}
The symmetry group $G$ of a regular coframe $(\goth{o}mega^i)$ of rank
$r$ on ${\goth{m}athcal M}^n$ is a local Lie group of transformations of dimension
$n-r$.
\end{theorem}
There is a class of coframes which admit a particularly simple
description in this language. These are coframes of rank zero,
all of whose structural functions $T^i_{~jk}$ are constant. These
coframes are of order zero and the image of their classifying
function is a point in $\mathbb{R}^N$. Thus, two rank-zero coframes are
equivalent if and only if their structural constants are equal,
$T^i_{~jk}=\cc{T}^i_{~jk}$. Furthermore, a coframe of rank zero
has an $n$-dimensional Lie symmetry group $G$. The algebra of this
group is precisely the algebra with structural constants
$T^i_{~jk}$ and the coframe may be interpreted as the left-invariant
coframe defining a local structure of $G$ on ${\goth{m}athcal M}$.
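A minimal illustration, outside the ODE context: on ${\goth{m}athcal M}^2=\mathbb{R}^2$ with
coordinates $(x,y)$ take the coframe $\goth{o}mega^1={\rm d} x$, $\goth{o}mega^2=e^{x}{\rm d} y$.
Then ${\rm d}\goth{o}mega^1=0$ and ${\rm d}\goth{o}mega^2=\goth{o}mega^1{\scriptstyle\wedge}\,\goth{o}mega^2$, so the only
non-zero structural functions are the constants $T^2_{~12}=-T^2_{~21}=1$, the
coframe has rank zero, and its symmetry group is two-dimensional and
non-abelian: it consists of the transformations $(x,y)\mapsto(x+a,\,e^{-a}y+b)$,
i.e. it is locally the affine group of the line, for which
$(\goth{o}mega^1,\goth{o}mega^2)$ is a left-invariant coframe.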
Let us turn to ODEs. In order to
determine whether or not two given ODEs are
contact/point/fibre-preserving equivalent one constructs the
respective invariant coframes for both the ODEs, calculates the
structural functions, which are now relative invariants for the
underlying ODEs, and applies theorem \ref{th.cc.clas} provided
that the coframes are regular. In particular,
the contact/point/fibre-preserving symmetry group of the underlying
ODE is isomorphic to the symmetry group of the associated coframe.
In practice there are three difficulties in this approach. The first
difficulty is that the coframes and invariants we have built so
far are defined on manifolds ${\mathcal P}$ larger than ${\goth{m}athcal J}^2$. As a
consequence, the invariants contain not only $x,y,p,q$ but the
auxiliary bundle variables $u_\goth{m}u$ as well. Since it is natural to
describe ODEs in terms of $x,y,p,q$ only, first of all we must
finish the reduction of the group parameters, so that finally no
free $u_\goth{m}u$ remains and we obtain a coframe on ${\goth{m}athcal J}^2$ from which
we compute invariants of ODEs depending on $x,y,p,q$ only.
The second problem is that the above method requires regularity of the
Cartan coframes. Thus we have to restrict our considerations to the
class of regular ODEs defined as follows.
\begin{equation}gin{definition}\label{def.cc.reg}
An ODE $y'''=F(x,y,y',y'')$ is regular if it is given by a locally
smooth function $F$ such that the Cartan coframe obtained after
maximal possible reduction of the structural group is regular.
\end{definition}
The third and essential obstacle to the full classification is that
the task of deciding whether or not two graphs of functions
${\bf T}\goth{g}oth{co}lon{\goth{m}athcal J}^2\to \mathbb{R}^N$ overlap is highly nontrivial,
particularly in our cases, where the components of ${\bf T}$ are
compound functions depending on $x,y,p,q$ through $F$. Taking this
into account we restrict the classification to two classes of
equations for which we are able to find compact criteria for the
equivalence.
\begin{equation}gin{itemize}
\item[i)] The regular equations possessing large contact or point
symmetry groups, that is symmetry groups of dimension at least
four. \item[ii)] The regular equations fibre-preserving equivalent
to reduced Chazy classes II, IV -- VII and XII.
\end{itemize}
For these two families we carry out the classification to the very
end; however, we also provide a partial result in the case of
totally arbitrary ODEs. This partial result, theorems
\ref{th.cc.nW4d} and \ref{th.cc.W4d}, is the explicit
construction of the invariant coframes on ${\goth{m}athcal J}^2$ in the contact
case without further analysis of the classifying function.
\section{Equations with large contact symmetry
group}\label{s.cc.large}
\noindent This class of ODEs is particularly convenient for
characterization since, as we shall see, to these equations we can
always associate a Cartan coframe of rank zero, with the only
exception of ODEs contact equivalent to general linear ODEs
\eqref{e.cc.glin}. These exceptional ODEs are characterized by the
fact that their symmetry group acts intransitively on ${\goth{m}athcal J}^2$, cf.
corollary \ref{cor.cc.linear} and proposition \ref{prop.cc.int}.
Let us begin with the ten-dimensional coframe of theorem
\ref{th.c.1}. As we said in section \ref{s.c.furred} of chapter
\ref{ch.contact}, ODEs fall into three main classes:
\begin{equation}gin{itemize}
\item $W=0$ and $F_{qqqq}=0$, \item $W\neq 0$, \item $W=0$ but
$F_{qqqq}\neq 0$.
\end{itemize}
If $F_{qqqq}=W=0$ then we are in the situation of corollary
\ref{cor.c.10d_flat} and this is the only case of the ODEs with a
ten-dimensional contact symmetry group, since for any other ODE
there are non-constant relative invariants in equations
\eqref{e.c.dtheta_10d} and the dimension of the symmetry group is
less than ten. Below we consider the case $W\neq 0$.
\subsection{ODEs with $W\neq 0$}
For these equations we have the five-dimensional coframe of
theorem \ref{th.c.2}:
\begin{equation}gin{align}
d\theta^1=&\mathcal{O}mega{\scriptstyle\wedge}\,\theta^1-\theta^2{\scriptstyle\wedge}\,\theta^4, \nonumber \\
d\theta^2=&\mathcal{O}mega{\scriptstyle\wedge}\,\theta^2+\inc{a}{}\,\theta^1{\scriptstyle\wedge}\,\theta^4-\theta^3{\scriptstyle\wedge}\,\theta^4,\nonumber\\
d\theta^3=&\mathcal{O}mega{\scriptstyle\wedge}\,\theta^3+\inc{b}{}\,\theta^1{\scriptstyle\wedge}\,\theta^2+\inc{c}{}\,\theta^1{\scriptstyle\wedge}\,\theta^3
-\theta^1{\scriptstyle\wedge}\,\theta^4+\inc{e}{}\,\theta^2{\scriptstyle\wedge}\,\theta^3+\inc{a}\,\theta^2{\scriptstyle\wedge}\,\theta^4, \label{e.cc.dtheta_5d} \\
d\theta^4=&\inc{f}{}\,\theta^1{\scriptstyle\wedge}\,\theta^2+\inc{g}{}\,\theta^1{\scriptstyle\wedge}\,\theta^3
+\inc{h}{}\,\theta^1{\scriptstyle\wedge}\,\theta^4
+\inc{k}{}\,\theta^2{\scriptstyle\wedge}\,\theta^3-\inc{e}{}\,\theta^2{\scriptstyle\wedge}\,\theta^4, \nonumber \\
d\mathcal{O}mega=&\inc{l}{}\,\theta^1{\scriptstyle\wedge}\,\theta^2+(\inc{f}{}-\inc{a}{}\inc{k}{})\,\theta^1{\scriptstyle\wedge}\,\theta^3
+\inc{m}{}\,\theta^1{\scriptstyle\wedge}\,\theta^4
+\inc{g}{}\,\theta^2{\scriptstyle\wedge}\,\theta^3+\inc{h}{}\,\theta^2{\scriptstyle\wedge}\,\theta^4.\nonumber
\end{align}
with functions $\inc{a}{},\inc{b}{},\inc{e}{},\inc{h}{},\inc{k}{}$ given by
\begin{equation}gin{align}
\inc{a}{}=& \frac{1}{\sqrt[3]{W^2}}\left(K+\frac{1}{18}Z^2+\frac{1}{9}ZF_q-\frac{1}{3}\goth{m}athcal{D} Z\right), \nonumber \\
\inc{b}{}=&\frac{1}{3u\sqrt[3]{W^2}}\bigg(\frac{1}{27}F_{qq}Z^2+\left(K_q-\frac{1}{3}Z_p-\frac{2}{9}F_qZ_q\right)Z+\nonumber\\
&+\left(\frac{1}{3}\goth{m}athcal{D} Z-2K\right)Z_q+Z_y+F_{qq}K-3K_p-K_qF_q-F_{qy}+W_q\bigg), \nonumber \\
\inc{e}{}=&\frac{1}{u}\left(\frac{1}{3}F_{qq}+\frac{1}{W}\left(\frac{2}{9}W_qZ-\frac{2}{3}W_p-\frac{2}{9}W_qF_q \right)\right), \label{e.cc.basfun_5d}\\
\inc{h}{}=&\frac{1}{3u\sqrt[3]{W}}\bigg(\left(\frac{1}{9}W_qZ^2-\frac{1}{3}W_pZ+W_y-\frac{1}{3}W_q\goth{m}athcal{D} Z\right)\frac{1}{W}+\nonumber \\
&+\goth{m}athcal{D} Z_q+\frac{1}{3}F_qZ_q\bigg),\nonumber \\
\inc{k}{}=&\frac{1}{u^2\sqrt[3]{W}}\left(\frac{2W_q^2}{9W}-\frac{W_{qq}}{3}\right). \nonumber
\end{align}
This means, in view of theorem \ref{th.cc.sym}, that if $W\neq 0$
then we have symmetry groups of dimension at most five. We proved
in corollary \ref{cor.c.5d_flat} that the assumption
$\inc{a}{}=const$ and $\inc{k}{}=0$ implies that the underlying
ODE has a five-dimensional symmetry group. Now we easily see
that there are no other equations with a five-dimensional symmetry
group here. In fact, this property requires that all the functions
in \eqref{e.cc.dtheta_5d} are constants, in particular
$\inc{a}{}=const$ and $\inc{k}{}=const$. But $\inc{k}{}\sim
u^{-2}$ so it is a constant function on the bundle ${\mathcal P}^c_5$ iff it
vanishes and we are back in corollary \ref{cor.c.5d_flat}. At this
stage we know that all the ODEs with $W\neq 0$ other than
$y'''=-2\goth{m}u y'+y$ have the symmetry group of dimension at most
four. Next we reduce the last free bundle parameter $u$. The
Cartan algorithm bifurcates at this point:
\begin{itemize}
\item[i)] The functions $\inc{b}{},\inc{e}{},\inc{h}{}$ and $\inc{k}{}$ in \eqref{e.cc.dtheta_5d} vanish.
\item[ii)] At least one function among $\inc{b}{},\inc{e}{},\inc{h}{},\inc{k}{}$ does not vanish.
\end{itemize}
We will discuss both these possibilities consecutively.
\paragraph{i)} Utilizing the Jacobi identity for
\eqref{e.cc.dtheta_5d} we get that all the functions but
$\inc{a}{}$ vanish, $\inc{a}{}=\inc{a}{}(x)$ and ${\rm d}\inc{a}{}
=(\inc{a}{})_4\theta^4$, ${\rm d}
(\inc{a}{})_4=(\inc{a}{})_{44}\theta^4$ etc., so neither
$\inc{a}{}$ nor any of its derivatives depends on the last
auxiliary variable $u$, and the full reduction of the structural
group cannot be done. We check that the ODE \begin{equation}\label{e.cc.glin}
y'''=-2\mu(x)y'+(1-\mu'(x))y, \end{equation} with an arbitrary smooth
function $\mu(x)$, satisfies
$\inc{b}{}=\inc{e}{}=\inc{h}{}=\inc{k}{}=0$ and
$\inc{a}{}=\mu(x)$. Furthermore, a linear third-order ODE of
general form satisfies either $W=0$, in which case it is
equivalent to $y'''=0$, or $W\neq 0$, in which case
$\inc{b}{},\inc{e}{},\inc{h}{}$ and $\inc{k}{}$ vanish. Thus case
i) describes the ODEs satisfying $W\neq 0$ and linearizable
through contact transformations; in particular, the linear
equations with constant coefficients are distinguished by the
additional condition $\inc{a}{}=const$.
\begin{corollary} \label{cor.cc.linear}
The following third-order ODEs are linearizable via contact
transformations of variables
\begin{itemize}
\item The equations satisfying $W=F_{qqqq}=0$. These are equivalent to
\begin{equation*} y'''=0 \end{equation*}
and admit the group $O(3,2)$ of contact symmetries.
\item The equations satisfying $W\neq0$, $\inc{b}{}=\inc{e}{}=\inc{h}{}=\inc{k}{}=0$ and
$\inc{a}{}=\mu(x)$, where $\mu(x)$ is any smooth non-constant real function. These are
equivalent to the general linear equation
\begin{equation*} y'''=-2\mu(x)y'+(1-\mu'(x))y\end{equation*}
and admit a four-dimensional group of contact symmetries. The
symmetry group acts on three-dimensional orbits in ${\mathcal J}^2$. If
such an equation is given in the above form, then symmetries are
generated by \begin{equation*} \begin{aligned} V_i&=f_i\partial_y+f'_i\partial_p+f''_i\partial_q, \qquad i=1,2,3\\
V_4&=y\partial_y+p\partial_p+q\partial_q,\end{aligned} \end{equation*}
where $f_1,f_2,f_3$ are any three functionally independent
solutions of the ODE.
\item The equations satisfying $W\neq0$, $\inc{k}{}=0$ and
$\inc{a}{}=\mu\in\mathbb{R}$. These are equivalent to the general linear
equation with constant coefficients \begin{equation*} y'''=-2\mu y'+y \end{equation*}
and admit the group $\mathbb{R}^2\ltimes_\mu\mathbb{R}^3$ of contact
symmetries.
\end{itemize}
The equations with different values of $\mu(x)$ or $\mu$ are
non-equivalent.
\end{corollary}
\paragraph{ii)} In this case we use a non-vanishing function among
$\inc{b}{},\inc{e}{},\inc{h}{},\inc{k}{}$ in
\eqref{e.cc.dtheta_5d} to do the last reduction and eventually
obtain a coframe on ${\mathcal J}^2$. The substitution is as follows.
\begin{itemize}
\item[1.] If $\inc{k}{}\neq 0$ then we set $\inc{k}{}=\epsilon=\pm
1$ depending on the sign of the quantity
$\tfrac{1}{\sqrt[3]{W}}(\tfrac{2W_q^2}{9W}-\tfrac{W_{qq}}{3})$, which gives the substitution
\begin{equation}\label{e.cc.red10}
u=\frac{1}{\sqrt[6]{|W|}}\sqrt{\left|\frac{2W_q^2}{9W}-\frac{W_{qq}}{3}\right|}
\end{equation}
in the coframe of theorem \ref{th.c.2}. \item[2.] If $\inc{k}{}=0$
and $\inc{e}{}\neq 0$, then we set $\inc{e}{}=1$ and substitute
\begin{equation*}
u=\frac{1}{3}F_{qq}+\frac{1}{W}\left(\frac{2}{9}W_qZ-\frac{2}{3}W_p-\frac{2}{9}W_qF_q \right).
\end{equation*}
\item[3.] If $\inc{k}{}=\inc{e}{}=0$ and $\inc{h}{}\neq 0$, then
we set $\inc{h}{}=1$ and
\begin{align*}
u=&\frac{1}{3\sqrt[3]{W}}\bigg(\left(\frac{1}{9}W_qZ^2-\frac{1}{3}W_pZ+W_y-\frac{1}{3}W_q\mathcal{D} Z\right)\frac{1}{W}+ \\
&+\mathcal{D} Z_q+\frac{1}{3}F_qZ_q\bigg).\nonumber
\end{align*}
\item[4.] Finally, if $\inc{k}{}=\inc{e}{}=\inc{h}{}=0$ and
$\inc{b}{}\neq 0$, then we set $\inc{b}{}=1$ and
\begin{align}
u=&\frac{1}{3\sqrt[3]{W^2}}\bigg(\frac{1}{27}F_{qq}Z^2+\left(K_q-\frac{1}{3}Z_p-\frac{2}{9}F_qZ_q\right)Z\label{e.cc.red20}\\
&+\left(\frac{1}{3}\mathcal{D} Z-2K\right)Z_q+Z_y+F_{qq}K-3K_p-K_qF_q-F_{qy}+W_q\bigg). \nonumber
\end{align}
\end{itemize}
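The branching above can also be summarized schematically. The following
sketch (an illustration added here for convenience, not part of the
construction itself; \texttt{k}, \texttt{e}, \texttt{h}, \texttt{b} are
assumed to be SymPy expressions representing the functions
\eqref{e.cc.basfun_5d}) encodes only the case analysis 1.--4.:
\begin{verbatim}
# Illustrative sketch of branches 1.-4.: normalize the first
# non-vanishing function among k, e, h, b; if all vanish we are
# back in case i) (the general linear ODEs).
import sympy as sp

def choose_normalization(k, e, h, b):
    if sp.simplify(k) != 0:
        return "branch 1: set k = +-1, u as in (e.cc.red10)"
    if sp.simplify(e) != 0:
        return "branch 2: set e = 1"
    if sp.simplify(h) != 0:
        return "branch 3: set h = 1"
    if sp.simplify(b) != 0:
        return "branch 4: set b = 1, u as in (e.cc.red20)"
    raise ValueError("b = e = h = k = 0: case i)")
\end{verbatim}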
In this manner we obtain
\begin{theorem}\label{th.cc.nW4d}
Let $y'''=F(x,y,y',y'')$ be an ODE such that i) $W\neq 0$, ii) the
functions $\inc{b}{},\inc{e}{},\inc{h}{},\inc{k}{}$ in
\eqref{e.cc.basfun_5d} do not vanish simultaneously. The contact
invariant information on this ODE is given by the following Cartan
coframe $(\theta^1,\theta^2,\theta^3,\theta^4)$ on ${\mathcal J}^2$:
\begin{equation}\label{e510}\begin{aligned}
\theta^1=&u\sqrt[3]{W}\omega^1, \\
\theta^2=&u\left(\frac{1}{3}Z\omega^1+\omega^2\right), \\
\theta^3=&\frac{u}{\sqrt[3]{W}}\left(\left(K+\frac{1}{18}Z^2\right)\omega^1
+\frac{1}{3}\left(Z-F_q\right)\omega^2+\omega^3\right), \\
\theta^4=&\left(\frac{1}{9}\frac{W_q}{\sqrt[3]{W^2}}Z-\frac{1}{3}\sqrt[3]{W}Z_q\right)\omega^1
+\frac{W_q}{3\sqrt[3]{W^2}}\omega^2+\sqrt[3]{W}\omega^4,
\end{aligned}\end{equation}
where $u$ is given by eq. \eqref{e.cc.red10} --
\eqref{e.cc.red20}, depending on which functions among
$\inc{b}{},\inc{e}{},\inc{h}{},\inc{k}{}$ are non-zero.
\end{theorem}
The exterior derivatives of the above coframe read
\begin{equation}\label{e.cc.dtheta_nW4d}\begin{aligned}
d\theta^1 =&\,a\,\theta^1{\scriptstyle\wedge}\,\theta^2+\inc{I}{1}\,\theta^1{\scriptstyle\wedge}\,\theta^3
+\inc{I}{2}\,\theta^1{\scriptstyle\wedge}\,\theta^4-\theta^2{\scriptstyle\wedge}\,\theta^4, \\
d\theta^2 =& \,e\,\theta^1{\scriptstyle\wedge}\,\theta^2+\inc{I}{3}\,\theta^1{\scriptstyle\wedge}\,\theta^4
+\inc{I}{1}\,\theta^2{\scriptstyle\wedge}\,\theta^3+\inc{I}{2}\,\theta^2{\scriptstyle\wedge}\,\theta^4
-\theta^3{\scriptstyle\wedge}\,\theta^4, \\
d\theta^3 =&\,h\,\theta^1{\scriptstyle\wedge}\,\theta^2+k\,\theta^1{\scriptstyle\wedge}\,\theta^3
-\,\theta^1{\scriptstyle\wedge}\,\theta^4+\inc{I}{4}\,\theta^2{\scriptstyle\wedge}\,\theta^3
+\inc{I}{3}\,\theta^2{\scriptstyle\wedge}\,\theta^4+\inc{I}{2}\,\theta^3{\scriptstyle\wedge}\,\theta^4,\\
d\theta^4 =& \,m\,\theta^1{\scriptstyle\wedge}\,\theta^2+\,n\,\theta^1{\scriptstyle\wedge}\,\theta^3
+\,s\,\theta^1{\scriptstyle\wedge}\,\theta^4+\inv{\epsilon}{1}\,\theta^2{\scriptstyle\wedge}\,\theta^3
-(a+\inc{I}{4})\,\theta^2{\scriptstyle\wedge}\,\theta^4,
\end{aligned}\end{equation}
where $\inv{\epsilon}{1}=\pm 1,\,0$ and $\inc{I}{1}$,
$\inc{I}{2}$, $\inc{I}{3}$, $\inc{I}{4}$, $a$, $e$, $h$, $k$, $m$,
$n$, $s$ are functions.
The most important invariants read
\begin{align*}
\inv{\epsilon}{1}=&\sgn\left(2W_q^2-3WW_{qq}\right),\notag \\
\inc{I}{1}=&\frac{9W_{qqq}W^2-9W_{qq}W_qW+4W_q^3}{2(2W_q^2-3WW_{qq})\sqrt{\left|2W_q^2-3WW_{qq}\right|}},\notag \\
\inc{I}{2}=&\frac{1}{6\sqrt[3]{W}(3W_{qq}W-2W_q^2)}\Big((6WF_q-6ZW)W_{qq}+4ZW^2_q-4F_qW_q^2\notag \\
&+(3F_{qq}W-12W_p-6WZ_q)W_q+18WW_{qp}-9W^2Z_{qq}-9W^2F_{qqq}\Big),\notag \\
\inc{I}{3}=&\frac{1}{\sqrt[3]{W^2}}\left( K-\frac{1}{3}\mathcal{D} Z+\frac{1}{18}Z^2+\frac{1}{9}F_qZ\right), \\
\inc{I}{4}=&\frac{1}{6\sqrt[3]{W}(2W_q^2-WW_{qq})\sqrt{\left|2W_q^2-WW_{qq}\right|}}\Big(9F_qWW_{qqq}-9ZWW_{qqq}\notag \\
&+(15F_qWW_q-15ZWW_q-27W_pW+18WF_{qq}-18WZ_q)W_{qq} \notag \\
&+8F_qW_q^3-8ZW_q^3+(6WZ_q+24W_p-9F_{qq}W)W_q^2 \notag \\
&-(9W^2Z_{qq}+18WW_{qp}+9W^2F_{qqq})W_q+27W_{qqp}W^2 \Big).\notag
\end{align*}
\subsection{Equations satisfying $W\neq 0$ and admitting large contact symmetry
groups}\label{s.cc.nw} Apart from the contact linearizable
equations, all the equations with $W\neq 0$ admitting a large contact symmetry group are
characterized by the coframe \eqref{e510}. By virtue of theorem
\ref{th.cc.sym}, an ODE associated with the coframe \eqref{e510}
admits a four-dimensional symmetry group if all its relative
invariants in \eqref{e.cc.dtheta_nW4d} are constant.
As we said, two ODEs associated with the coframe \eqref{e510} and
admitting a four-dimensional contact symmetry group are equivalent
if and only if their respective invariants have the same constant
value.
Below we describe our method of finding these equations. First we
assume that the invariants
$\inc{I}{1},\inc{I}{2},\inc{I}{3},\inc{I}{4}$ are constant, which
is a necessary condition for a large symmetry group. Then we close
the system \eqref{e.cc.dtheta_nW4d} and the identities
${\rm d}^2\hc{i}=0$ give us information about the remaining
invariants, for example
\begin{equation*}\begin{aligned}{\rm d}^2\hc{1}=&(\inc{I}{1}\inc{I}{3}
-\inc{I}{2}\inc{I}{4}+e+s)\hc{1}{\scriptstyle\wedge}\,\hc{4}{\scriptstyle\wedge}\,\hc{2}+
(a\inc{I}{1}+\inc{I}{1}\inc{I}{4}-\inv{\epsilon}{1}\inc{I}{2}+n)\hc{1}{\scriptstyle\wedge}\,\hc{3}{\scriptstyle\wedge}\,\hc{2}+\\
&+(a-\inc{I}{1}\inc{I}{2})\hc{1}{\scriptstyle\wedge}\,\hc{4}{\scriptstyle\wedge}\,\hc{3}+ {\rm d}
a{\scriptstyle\wedge}\,\hc{1}{\scriptstyle\wedge}\,\hc{2}=0. \end{aligned} \end{equation*} This equation yields
${\rm d}^2\hc{1}{\scriptstyle\wedge}\,\hc{2}=0=(a-\inc{I}{1}\inc{I}{2})\hc{1}{\scriptstyle\wedge}\,\hc{4}{\scriptstyle\wedge}\,\hc{3}{\scriptstyle\wedge}\,\hc{2}$,
hence $a=\inc{I}{1}\inc{I}{2}=const$ and we get that $n$ and $e+s$
are also constants expressed by $\inc{I}{j}$. Next
${\rm d}^2\hc{2}{\scriptstyle\wedge}\,\hc{2}=0$ gives $e-k+s=0$ and
${\rm d}^2\hc{3}{\scriptstyle\wedge}\,\hc{1}=0$ gives $k=const$, so $a,e,k,n$ and $s$ are
constant. Continuing this reasoning we also find that $h$ and $m$
are constant. As a consequence
$\inc{I}{1},\inc{I}{2},\inc{I}{3},\inc{I}{4}=const$ is the
necessary and sufficient condition for an ODE to admit a large
symmetry group.
In these circumstances the identities ${\rm d}^2\hc{i}=0$ become a
system of quadratic algebraic equations for $\inc{I}{1},\ldots,s$.
We solve this system by the method of consecutive substitutions.
Doing so we find that the system has no solutions if
$\inv{\epsilon}{1}=0$. This fact implies that the ODEs we are
looking for exist only in branch 1 above, defined by the
normalization \eqref{e.cc.red10}. In the case
$\inv{\epsilon}{1}=\pm 1$ the system is underdetermined and we
express all the invariants in terms of $\inv{\epsilon}{1}$ and
$\inc{I}{1}$, whose value is arbitrary except for $\inc{I}{1}=
0,\,\pm 3/\sqrt{2}$:
\begin{align*}
\inc{I}{3}=&\frac{3(\inc{I}{2})^2}{8(\inc{I}{1})^2}(3\inv{\epsilon}{1}-(\inc{I}{1})^2),
&
\inc{I}{4}=&\frac{\inc{I}{2}}{2\inc{I}{1}}(3\inv{\epsilon}{1}-2(\inc{I}{1})^2),\\\\
a=&\inc{I}{1}\inc{I}{2}, &
e=&\frac{1}{8 \inc{I}{1}}(\inc{I}{2})^2(9\inv{\epsilon}{1}-5(\inc{I}{1})^2),
\\\\
h=&-\frac{3\inv{\epsilon}{1}}{16(\inc{I}{1})^3}(16(\inc{I}{1})^2
-3(\inc{I}{2})^3(\inc{I}{1})^2+9(\inc{I}{2})^3\inv{\epsilon}{1}), &
n=&-\frac12\inv{\epsilon}{1}\inc{I}{2},\\\\
m=&\frac{\inv{\epsilon}{1}(\inc{I}{2})^3}{8(\inc{I}{1})^2}((\inc{I}{2})^2-9\inv{\epsilon}{1}),&
k=&\frac{5(\inc{I}{2})^2}{8\inc{I}{1}}(3\inv{\epsilon}{1}-(\inc{I}{1})^2),
\\\\
s=&-\frac{3\inv{\epsilon}{1}(\inc{I}{2})^2}{4\inc{I}{1}},& &
\end{align*}
and
$$\inc{I}{2}=2\sqrt[3]{\frac{(\inc{I}{1})^2}{(2(\inc{I}{1})^2-9\inv{\epsilon}{1})}}.$$
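The above relations can be evaluated numerically. The following sketch
(added for illustration; it is a plain transcription of the displayed
formulae, with \texttt{eps1} and \texttt{I1} standing for
$\inv{\epsilon}{1}$ and $\inc{I}{1}$) returns all the remaining constant
invariants for an admissible pair $(\inv{\epsilon}{1},\inc{I}{1})$:
\begin{verbatim}
# Evaluate the constant invariants from eps1 = +-1 and I1,
# transcribing the relations displayed above.
import math

def invariants(eps1, I1):
    if eps1 not in (1, -1) or I1 == 0 or abs(abs(I1) - 3/math.sqrt(2)) < 1e-12:
        raise ValueError("need eps1 = +-1 and I1 != 0, +-3/sqrt(2)")
    t = I1**2 / (2*I1**2 - 9*eps1)            # argument of the cube root
    I2 = 2*math.copysign(abs(t)**(1.0/3), t)  # real cube root
    I3 = 3*I2**2/(8*I1**2)*(3*eps1 - I1**2)
    I4 = I2/(2*I1)*(3*eps1 - 2*I1**2)
    a = I1*I2
    e = I2**2/(8*I1)*(9*eps1 - 5*I1**2)
    h = -3*eps1/(16*I1**3)*(16*I1**2 - 3*I2**3*I1**2 + 9*I2**3*eps1)
    n = -0.5*eps1*I2
    m = eps1*I2**3/(8*I1**2)*(I2**2 - 9*eps1)
    k = 5*I2**2/(8*I1)*(3*eps1 - I1**2)
    s = -3*eps1*I2**2/(4*I1)
    return dict(I2=I2, I3=I3, I4=I4, a=a, e=e, h=h, n=n, m=m, k=k, s=s)
\end{verbatim}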
Thereby we have obtained all the structural equations which {\em
may be generated} by ODEs admitting large symmetry groups.
However, it is not yet clear whether all admissible values of the
invariants are actually realized by ODEs. In order to find the
equations we apply two approaches. The first approach is as
follows. We choose some simple ODEs, for example $F=q^\alpha$ or
$F=e^q$, calculate the equations \eqref{e.cc.dtheta_nW4d} for them
and check whether they satisfy the large symmetry conditions. By this
method we find ODEs which realize some, but not all, of the
admissible values of $\inv{\epsilon}{1}$ and $\inc{I}{1}$. In
order to find the remaining ODEs we use the fact that when all the
invariants are constant, the coframe $(\hc{i})$ is a
left-invariant coframe of a four-dimensional Lie group. This group
is precisely the symmetry group of the underlying ODE, which
acts on ${\mathcal J}^2$, and the frame dual to $(\hc{i})$ is a system of
infinitesimal symmetry generators for the differential equation.
We integrate the coframe $(\hc{i})$, that is we find the explicit
formulae for the forms $\hc{i}$ which satisfy the equations
\eqref{e.cc.dtheta_nW4d} with constant invariants
$\inc{I}{1},\ldots,s$. Having found formulae for the symmetry
generators, we find their common invariant functions on ${\mathcal J}^2$,
which are our desired ODEs. When integrating the coframe $(\hc{i})$
we used M. MacCallum's classification of real four-dimensional Lie
algebras \cite{MacC} to transform the frames into a simple
canonical form, which simplified the calculations.
In this way we have found the full list of third-order ODEs
satisfying the condition $W\neq 0$ and admitting a group of contact
symmetries of dimension at least four. We gather these equations
in table \ref{t.cc.1} together with the equations of the branch $W=0$,
which is analyzed below. Two equations of the same type given in
the tables are equivalent if and only if the constants they
involve are equal. For example, $F=q^\mu$ and $F=q^\nu$ are
equivalent provided that $\mu=\nu$. We give the necessary and
sufficient conditions for any ODE to be contact equivalent to any
member of the list. We also attach a description of the symmetry
algebra. Our result agrees with the part of B. Doubrov and B.
Komrakov's classification \cite{Dub1} referring to third-order
ODEs; however, we provide other canonical forms of the considered
equations.
\subsection{ODEs with $W=0$ and $F_{qqqq}\neq0$}
We repeat the scheme explained above. This time we start from the
coframe of theorem \ref{th.c.1}. The following substitutions \begin{equation*}
\begin{aligned} &u_1=u_3^2\sqrt{\left|\frac{u_3}{F_{qqqq}}\right|}, &
&u_2=-3u_3\frac{K_{qqq}}{F_{qqqq}}, \\
&u_4=u_3\sqrt{\left|\frac{u_3}{F_{qqqq}}\right|}\left(\frac{3K_{qqqq}}{F_{qqqq}}-\frac{12F_{qqqqq}K_{qqq}}{5F^2_{qqqq}}\right),
&
&u_5=-u_3\sqrt{\left|\frac{u_3}{F_{qqqq}}\right|}\frac{F_{qqqqq}}{5F_{qqqq}},
\\ &u_6=0 & & \end{aligned} \end{equation*} reduce the bundle ${\mathcal P}^c$ to a five-dimensional
subbundle. The last reduction is as follows.
\begin{itemize}
\item[1.] If
\begin{equation}\label{e.cc.red40}
2L_{qq}F_{qqqq}-3K^2_{qqq}\neq 0,
\end{equation} then \begin{equation}
u_3=\sqrt[3]{9L_{qq}-\frac{27K^2_{qqq}}{2F_{qqqq}}}.
\end{equation}
\item[2.] If $2L_{qq}F_{qqqq}-3K^2_{qqq}=0$ but
\begin{equation}\label{e.cc.red50}
5F_{qqqqqq}F_{qqqq}-6F^2_{qqqqq}\neq 0,
\end{equation}
then
\begin{equation}
u_3=\frac{25F^3_{qqqq}}{5F_{qqqqqq}F_{qqqq}-6F^2_{qqqqq}}.
\end{equation}
\item[3.] If $2L_{qq}F_{qqqq}-3K^2_{qqq}=0$ and $5F_{qqqqqq}F_{qqqq}-6F^2_{qqqqq}=0$, but
\begin{equation}\label{e.cc.red60} \frac{1}{3}F_{qq}+\frac{1}{F_{qqqq}}\left(\frac{18}{5}K_{qqqq}+\frac{2}{5}F_{qqqqp}+\frac{2}{15}F_qF_{qqqqq}\right)-\frac{12F_{qqqqq}K_{qqq}}{5F^2_{qqqq}}\neq 0
\end{equation}
then
\begin{equation}\label{e.cc.red70} u_3=\frac{1}{3}F_{qq}+\frac{1}{F_{qqqq}}\left(\frac{18}{5}K_{qqqq}+\frac{2}{5}F_{qqqqp}+\frac{2}{15}F_qF_{qqqqq}\right)
-\frac{12F_{qqqqq}K_{qqq}}{5F^2_{qqqq}}.
\end{equation}
\end{itemize}
\begin{remark}
The three quantities defined in \eqref{e.cc.red40},
\eqref{e.cc.red50}, \eqref{e.cc.red60} cannot vanish
simultaneously, because that would contradict the
condition $F_{qqqq}\neq 0$.
\end{remark}
Thus we have the following
\begin{theorem}\label{th.cc.W4d}
Let $y'''=F(x,y,y',y'')$ be an ODE satisfying $W=0$ and
$F_{qqqq}\neq 0$. The contact invariant information on the ODE is
given by the following Cartan coframe
$(\theta^1,\theta^2,\theta^3,\theta^4)$ on ${\mathcal J}^2$:
\begin{align*}
\theta^1=&u^2\sqrt{\left|\frac{u}{F_{qqqq}}\right|}\omega^1,\nonumber \\
\theta^2=&u\left(-\frac{3K_{qqq}}{F_{qqqq}}\omega^1+\omega^2\right),\nonumber \\
\theta^3=&\sqrt{\left|\frac{F_{qqqq}}{u}\right|}\left(\left(K+\frac{9K^2_{qqq}}{2F^2_{qqqq}}
\right)\omega^1-\left(\frac{F_q}{3}+\frac{3K_{qqq}}{F_{qqqq}}\right)\omega^2+\omega^3\right), \\
\theta^4=&u\sqrt{\left|\frac{u}{F_{qqqq}}\right|}\left
(\left(\frac{3K_{qqq}}{F_{qqqq}}-\frac{12F_{qqqqq}K_{qqq}}{5F^2_{qqqq}}\right)\omega^1
-\frac{F_{qqqqq}}{5F_{qqqq}}\omega^2+\omega^4\right),\nonumber
\end{align*}
where $u$ is the function of $x,y,p,q$ given by the formulae
\eqref{e.cc.red40} -- \eqref{e.cc.red70}.
\end{theorem}
The exterior derivatives of the above forms are the following
\begin{align*}
d\theta^1 &=a\,\theta^1{\scriptstyle\wedge}\,\theta^2+\inc{I}{5}\,\theta^1{\scriptstyle\wedge}\,\theta^3
+\inc{I}{6}\,\theta^1{\scriptstyle\wedge}\,\theta^4-\theta^2{\scriptstyle\wedge}\,\theta^4, \nonumber \\
d\theta^2 &=e\,\theta^1{\scriptstyle\wedge}\,\theta^2+\inv{\epsilon}{2}\,\theta^1{\scriptstyle\wedge}\,\theta^4
+\frac{2}{5}\inc{I}{5}\,\theta^2{\scriptstyle\wedge}\,\theta^3+\frac{2}{5}\inc{I}{6}\,\theta^2{\scriptstyle\wedge}\,\theta^4
-\theta^3{\scriptstyle\wedge}\,\theta^4, \nonumber \\
d\theta^3 &=f\,\theta^1{\scriptstyle\wedge}\,\theta^2+g\,\theta^1{\scriptstyle\wedge}\,\theta^3+\inc{I}{7}\,\theta^2{\scriptstyle\wedge}\,\theta^3
+\inv{\epsilon}{2}\,\theta^2{\scriptstyle\wedge}\,\theta^4-\frac{1}{5}\inc{I}{6}\,\theta^3{\scriptstyle\wedge}\,\theta^4,\label{e610} \\
d\theta^4 &=k\,\theta^1{\scriptstyle\wedge}\,\theta^2+l\,\theta^1{\scriptstyle\wedge}\,\theta^3
+m\,\theta^1{\scriptstyle\wedge}\,\theta^4+\inc{I}{8}\,\theta^2{\scriptstyle\wedge}\,\theta^3
+s\,\theta^2{\scriptstyle\wedge}\,\theta^4-\frac{3}{5}\inc{I}{5}\,\theta^3{\scriptstyle\wedge}\,\theta^4, \nonumber
\end{align*}
where $\inv{\epsilon}{2}=\pm 1,0$ is defined through
$$\inv{\epsilon}{2}=\sgn (2F_{qqqq}L_{qq}-3K_{qqq}^2)$$ and
$\inc{I}{5},\inc{I}{6},\inc{I}{7},\inc{I}{8},a,e,f,g,k,l,m,s$ are
functions of $x,y,p,q$. We do not display these functions explicitly, since
they are complicated; they can be calculated immediately from
theorem \ref{th.cc.W4d}. We apply the above procedure of seeking the
ODEs with a four-dimensional symmetry group and insert the results
in table \ref{t.cc.1} on pages \pageref{t.cc.1} --
\pageref{t.cc.2}.
Finally, we have the following geometric description of general
linearizable ODEs.
\begin{proposition}\label{prop.cc.int}
The only smooth third-order ODEs admitting large contact symmetry
group acting intransitively on ${\mathcal J}^2$ are the equations contact
equivalent to \begin{equation*} y'''=-2\mu(x)y'+(1-\mu'(x))y, \end{equation*} with an
arbitrary smooth function $\mu(x)\neq const$.
\begin{proof}
We showed in corollary \ref{cor.cc.linear} that the above
equations admit a 4-dimensional contact symmetry group acting on
3-dimensional orbits in ${\mathcal J}^2$. All other ODEs admitting a large
contact symmetry group $G$ possess Cartan coframes of rank zero.
These coframes were explicitly constructed in theorem \ref{th.c.1}
for the case $W=0$, $F_{qqqq}=0$, in theorems \ref{th.c.2} and
\ref{th.cc.nW4d} for the case $W\neq 0$, and in theorem
\ref{th.cc.W4d} for the case $W=0$, $F_{qqqq}\neq 0$. The coframe
for the case $W=0$, $F_{qqqq}=0$ generates on ${\mathcal J}^2$ the local
structure of the homogeneous space $SP(4,\mathbb{R})/H_6$. The coframes
of theorems \ref{th.cc.nW4d} and \ref{th.cc.W4d} equip ${\mathcal J}^2$ with
the local structure of $G$. In both these cases the action of $G$ on
${\mathcal J}^2$ is transitive.
\end{proof}
\end{proposition}
\section{Equations with large point symmetry
group}\label{s.pp.large}
\noindent Given the detailed description of the ODEs with large
contact symmetry groups, it is already easy to find the equations
admitting large point symmetry groups. This follows from the fact
that any equation possessing point symmetries has at least the
same number of contact symmetries (`number of symmetries' here means
the dimension of the symmetry group). Therefore the equations
with large point symmetry groups lie entirely within the classes
with large contact symmetry groups. As a consequence, we only need
to do the full reduction for the equations contact equivalent to
those in table \ref{t.cc.1} and analyze the existence of point
symmetries. The reduction procedure for point transformations
is fully analogous to the contact case. We have the following main
branches:
\begin{itemize}
\item[i)] Linear and point linearizable equations equivalent to
$y'''=0$ with the 7-dimensional algebra $\goth{co}(2,1)\semi{.}\mathbb{R}^3$
of point symmetries. \item[ii)] Non-linearizable equations
admitting the 6-dimensional algebras $\goth{o}(2,2)$ or $\goth{o}(4)$ of point
symmetries. They are equivalent to $y'''=\tfrac{3y''^2}{2y'}$ or
$y'''=\tfrac{3y'y''^2}{y'^2+1}$ respectively. These classes are
new when compared to the contact classification. \item[iii)] The
linear and point linearizable equations which satisfy $W\neq 0$.
They are equivalent to $y'''=-2\mu(x)y'+(1-\mu'(x))y$ and have a
5-dimensional or a 4-dimensional group of symmetries.
\item[iv)] All the remaining equations satisfying $W\neq 0$.
\item[v)] All the remaining equations satisfying $W=0$.
\end{itemize}
Branches i) to iii) are relatively simple to characterize in terms
of point invariants. Branch i) has been described in corollary
\ref{cor.p.7d_flat}. Branch ii), which is contact equivalent to $y'''=0$,
has been discussed in section \ref{s.lor3} of chapter
\ref{ch.point}, and the description of iii) is based on the reduction
to a five-dimensional coframe, as in section \ref{s.cc.nw}. After
\begin{equation*} u_1=\sqrt[3]{W}u_3, \qquad\qquad u_2=\tfrac13Zu_3 \end{equation*} we
obtain a five-dimensional coframe $(\hp{1},\ldots,\hp{4},\vp{})$
with the following basic invariants
\begin{align}
\inc{a}{}=&
\frac{1}{\sqrt[3]{W^2}}\left(K+\frac{1}{18}Z^2+\frac{1}{9}ZF_q-\frac{1}{3}\mathcal{D}
Z\right), \notag \\
\inp{b}{}=&\frac{1}{3u\sqrt[3]{W^2}}\bigg(\left(\frac{1}{12}F_{qq}+\frac{1}{18}Z_q\right)Z^2
+\left(K_q-\frac{1}{3}Z_p-\frac{1}{9}F_qZ_q+\frac{1}{18}F_{qq}F_q\right)Z+\nonumber\\
&-\frac{1}{6}F_{qq}\mathcal{D} Z-KZ_q+Z_y+\frac32F_{qq}K-3K_p-K_qF_q-F_{qy}\bigg), \nonumber \\
\inp{e}{}=&\frac{1}{u}\left( \frac16F_{qq} -\frac13Z_q
+\frac{1}{W}\left(\frac29W_qZ-\frac23W_p-\frac29W_qF_q\right)\right),
\notag \\
\inp{h}{}=&\frac{1}{3u\sqrt[3]{W}}
\bigg(\left(\frac{1}{18}W_qZ^2-\left(\frac{1}{3}W_p+\frac19W_qF_q\right)Z+W_y
-W_qK\right)\frac{1}{W}+\nonumber \\
&-3K_q-\frac{1}{3}F_{qq}F_q-F_{qp}\bigg),\nonumber \\
\inp{k}{}=&\frac13\frac{W_q}{\sqrt[3]{W^2}u}. \notag
\end{align}
The linearizable equations are described by conditions expressed
in terms of these invariants, as given in table \ref{t.pp.1}.
The branches iv) and v) need a more thorough study. We consider iv)
first. We know from section \ref{s.cc.nw} that any equation with a
large point symmetry group in this branch necessarily satisfies
$3WW_{qq}-2W_q^2\neq 0$. This implies $W_q\neq 0$, which allows us
to reduce the last free group parameter via \begin{equation*}
u_3=\frac{1}{3}\frac{W_q}{\sqrt[3]{W^2}}. \end{equation*} The set of basic
invariants is the following
\begin{align*} \inp{I}{1}=&-3\frac{W_{qq}W}{W_q^2}, \notag \\
\inp{I}{2}=&\frac{1}{\sqrt[3]{W}W_q}\left(3W_p+W_qF_q-W_qZ-3WZ_q-3F_{qq}W
\right), \notag \\
\inp{I}{3}=&-\frac{\sqrt[3]{W^2}}{2W_q}\left(2Z_q+F_{qq}\right),
\\
\inp{I}{4}=&\frac{1}{18\sqrt[3]{W^2}}\left(Z^2-6\mathcal{D}
Z+18K+2ZF_q\right),\notag
\end{align*} and the above invariants are (up to constant factors) the
$T^1_{13}$, $T^1_{14}$, $T^2_{13}$, and $T^2_{14}$ coefficients in
the structural equations \begin{equation*} {\rm d}
\hp{i}=\frac12T^i_{jk}\hp{j}{\scriptstyle\wedge}\,\hp{k},\qquad T^i_{jk}=-T^i_{kj}\end{equation*}
for the coframe. The ODEs with large point symmetry groups which
fall into this branch are types II.2, II.3, IV, and VI of table
\ref{t.pp.1}.
We turn to the branch v). The coframe is given either by \begin{equation*}
u_1=-\frac{3 F_{qqq}^5}{4 F_{qqqq}^3},\qquad u_2=\frac{F_{qqq}^2N}{2F_{qqqq}},
\qquad u_3=\frac{F_{qqq}^2}{2F_{qqqq}}, \end{equation*}
provided that $F_{qqqq}\neq0$ or, if $F_{qqqq}=0$ but
$F_{qqq}\neq0$, by
\begin{equation}gin{align*}
u_1=&-\frac{1}{36F_{qqq}^4}\left(6F_{qqqp}+5F_{qqq}F_{qq}\right)^3,\\
u_2=&-\frac{1}{6F_{qqq}^2}\left(6F_{qqqp}+5F_{qqq}F_{qq}\right)N,\\
u_3=&-\frac{1}{6F_{qqq}}\left(6F_{qqqp}+5F_{qqq}F_{qq}\right),
\end{align*} where \begin{equation*}
N=F_{qqp}+\frac{1}{6}F_{qq}^2+\frac{1}{3}F_{qqq}F_{q}. \end{equation*} For
$F_{qqqq}\neq 0$ we have the following basic invariants
\begin{align*}
\inp{I}{5}=&\frac{F_{qqq}F_{qqqqq}}{F_{qqqq}^2}, \notag \\
\inp{I}{6}=&\frac{F_{qqqq}}{F_{qqq}^4}\left(\frac83F_{qqqq}-12F_{qqq}K_{qqq}+\frac59F_{qqqq}F_{qq}^2+20F_{qqqq}K_{qq} \right), \notag \\
\inp{I}{7}=&\frac{F_{qqqq}}{F_{qqq}^4}\left(
6N_qF_{qqq}-6NF_{qqqq}+F_{qq}F_{qqq}^2\right) \\
\inp{I}{8}=&-\frac{2}{27}\frac{F_{qqqq}^4}{F_{qqq}^8}\big(4NF_qF_{qqq}+6\mathcal{D}
NF_{qqq}+\notag \\
&-9N^2-F_{qq}^2N-36K_{qq}N-6F_{qqq}^2K\big), \notag
\end{align*}
which are obtained from the coefficients $T^1_{13}, T^1_{14},
T^2_{13}$ and $T^2_{14}$ in the structural equations.
For the branch $F_{qqqq}=0$, $F_{qqq}\neq0$, which contains only
one class of equations with large point symmetry groups, namely the
equations equivalent to $F=q^3$, we have the invariants \begin{equation*}
\inp{I}{9}=T^1_{12}, \qquad \inp{I}{10}=T^1_{14}, \qquad
\inp{I}{11}=T^2_{14}.\end{equation*}
Properties of the ODEs admitting a large point symmetry group are
given in table \ref{t.pp.1} on pages \pageref{t.pp.1} --
\pageref{t.pp.2}. In the point classification we also have a
counterpart of proposition \ref{prop.cc.int}.
\begin{remark}\label{rem.fp}
The fibre-preserving classification of the ODEs admitting a large
symmetry group is parallel to the point classification and has
already been done \cite{God1, Grebot}. The main difference is that
types I.3, II.3, and IX do not admit four-dimensional
fibre-preserving symmetry groups.
\end{remark}
\section{Fibre-preserving equivalence to certain reduced Chazy equations}
\noindent An ordinary differential equation in the {\em complex}
domain is said to have the Painlev\'e property if its general
solution does not have movable branch points, that is branch
points whose location depends on integration constants,
\cite{Cos}. The problem of classifying the third-order Painlev\'e
ODEs which are polynomials in $y,y'$, and $y''$ and are locally
analytic in $x$ was studied by J. Chazy \cite{Cha}, who considered
the polynomial equations modulo the following transformations \begin{equation*}
x\to \chi(x),\qquad\qquad y\to \alpha(x)y+\beta(x), \end{equation*} which are
a subclass of complex analytic fibre-preserving transformations.
J. Chazy found thirteen classes of ODEs satisfying the
Painlev\'e property. Each of these classes has a particularly simple
representative -- the reduced Chazy class -- obtained by a certain
limit procedure. Here we are interested in the reduced Chazy classes
II, IV, V, VI, VII and XI with $\sigma\neq11$. They are as follows:
\begin{flalign*}
II: && F=&-2yq-2p^2,&&&\\
IV: && F=&-3yq-3p^2-3y^2p,&&&\\
V: && F=&-2yq-4p^2-2y^2p,&&&\\
VI: && F=&-yq-5p^2-y^2p,&&&\\
VII: && F=&-yq-2p^2+2y^2p,&&&\\
XI: &&
F=&-2yq-2p^2+\frac{24}{\sigma^2-1}\left(p+y^2\right)^2,&&\sigma\in\mathbf{N},\
\sigma\neq 1,6k.&
\end{flalign*}
All of them have the form
\begin{equation}
F=\kappa yq+\lambda p^2+ \mu y^2p+\nu y^4,\label{e.ch.e}
\end{equation} with some constant numbers $\kappa$, $\lambda$, $\mu$, $\nu$.
We exclude type XI for $\sigma=11$ for technical reasons.
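For convenience, the constants of the individual classes, read off
directly from the list above (for class XI after expanding the square),
are
\begin{equation*}
\begin{array}{l|cccc}
 & \kappa & \lambda & \mu & \nu \\ \hline
\mathrm{II} & -2 & -2 & 0 & 0\\
\mathrm{IV} & -3 & -3 & -3 & 0\\
\mathrm{V} & -2 & -4 & -2 & 0\\
\mathrm{VI} & -1 & -5 & -1 & 0\\
\mathrm{VII} & -1 & -2 & 2 & 0\\
\mathrm{XI} & -2 & -2+\tfrac{24}{\sigma^2-1} & \tfrac{48}{\sigma^2-1} & \tfrac{24}{\sigma^2-1}
\end{array}
\end{equation*}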
We aim to find necessary and sufficient conditions for a {\em
regular real} third-order ODE to be fibre-preserving equivalent to
one of the above equations. We find the Cartan coframe for such
equations and the explicit formulae for the fibre-preserving
invariants. Next we find the functional relations between the
invariants, which allows us to describe the classifying
function ${\bf T}$ explicitly and, consequently, to describe its image.
The first step is to calculate the structural equations of the
fibre-preserving coframe of theorem \ref{th.f.1} for the Chazy
types. They are as follows:
\begin{align}
{\rm d}\hf{1} =&\vf{1}{\scriptstyle\wedge}\,\hf{1}+\hf{4}{\scriptstyle\wedge}\,\hf{2},\nonumber \\
{\rm d}\hf{2} =&\vf{2}{\scriptstyle\wedge}\,\hf{1}+\vf{3}{\scriptstyle\wedge}\,\hf{2}+\hf{4}{\scriptstyle\wedge}\,\hf{3},\nonumber \\
{\rm d}\hf{3}=&\vf{2}{\scriptstyle\wedge}\,\hf{2}+(2\vf{3}-\vf{1}){\scriptstyle\wedge}\,\hf{3}+\inf{A}{1}\hf{4}{\scriptstyle\wedge}\,\hf{1},\nonumber \\
{\rm d}\hf{4} =&(\vf{1}-\vf{3}){\scriptstyle\wedge}\,\hf{4}, \nonumber\\
{\rm d}\vf{1} =&-\vf{2}{\scriptstyle\wedge}\,\hf{4}+(2\inf{C}{1}-\inf{A}{2})\hf{1}{\scriptstyle\wedge}\,\hf{4}, \nonumber \\
{\rm d}\vf{2}=&(\vf{3}-\vf{1}){\scriptstyle\wedge}\,\vf{2}+\inf{A}{3}\hf{1}{\scriptstyle\wedge}\,\hf{4}
+(\inf{C}{1}-\inf{A}{2})\hf{2}{\scriptstyle\wedge}\,\hf{4}, \nonumber \\
{\rm d}\vf{3}=&(\inf{C}{1}-\inf{A}{2})\hf{1}{\scriptstyle\wedge}\,\hf{4}. \nonumber
\end{align}
In this system the functions $\inp{B}{i}$, $i=1\ldots6$,
$\inp{D}{1}$, and $\inp{D}{2}$ vanish, which is equivalent to the
following three fibre-preserving invariant conditions
\begin{equation}
F_{qq}=0,\qquad F_{qpp}=0, \qquad
F_{ppp}=2F_{qpy}-\frac23F^2_{qp}.\label{e.ch.10}
\end{equation}
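These conditions are verified directly by the normal form \eqref{e.ch.e}.
A small SymPy check (added here as an illustration; $x,y,p,q$ are treated
as independent coordinates on ${\mathcal J}^2$) reads:
\begin{verbatim}
# Check that F = kappa*y*q + lam*p**2 + mu*y**2*p + nu*y**4
# satisfies the three fibre-preserving conditions (e.ch.10).
import sympy as sp

x, y, p, q, kappa, lam, mu, nu = sp.symbols('x y p q kappa lambda mu nu')
F = kappa*y*q + lam*p**2 + mu*y**2*p + nu*y**4

c1 = sp.diff(F, q, 2)                                  # F_qq
c2 = sp.diff(F, q, p, p)                               # F_qpp
c3 = sp.diff(F, p, 3) - 2*sp.diff(F, q, p, y) \
     + sp.Rational(2, 3)*sp.diff(F, q, p)**2           # F_ppp - 2F_qpy + (2/3)F_qp^2
assert c1 == 0 and c2 == 0 and sp.simplify(c3) == 0
\end{verbatim}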
The most important non-vanishing invariants read
\begin{align*}
&\inf{A}{2}=\frac{1}{u_1^3}\left(\left(\lambda-\frac{7\kappa}{6}\right )u_2u_3
+\left(\goth{m}u+\frac{\kappa\lambda}{3}+\frac{\kappa^2}{18}\right
)yu_3^2\right ), \notag \\
&\inf{C}{1}-\inf{A}{2}=\frac{\kappa u_3}{3u_1^2}, \\
&X_4(\inf{C}{1}-\inf{A}{2})=-\left ( \frac23\kappa u_2u_3+\frac19\kappa^2 yu_3^2\right)
\frac{1}{u_1^3}.\notag
\end{align*}
For the Chazy classes one can reduce the parameters through
\begin{equation}\label{e.ch.20} \inf{A}{2}=1, \quad
\inf{C}{1}-\inf{A}{2}=\frac13,\quad
X_4(\inf{C}{1}-\inf{A}{2})=0.\end{equation} This leads to the
four-dimensional system
\begin{equation}
\begin{aligned}
d\theta^1 &=\inw{a}\,\theta^1{\scriptstyle\wedge}\,\theta^4-\theta^2{\scriptstyle\wedge}\,\theta^4,\\
d\theta^2 &=-2\tau\,\theta^1{\scriptstyle\wedge}\,\theta^2+\inw{b}\,\theta^1{\scriptstyle\wedge}\,\theta^4
+2\inw{a}\,\theta^2{\scriptstyle\wedge}\,\theta^4-\theta^3{\scriptstyle\wedge}\,\theta^4,\\
d\theta^3 &=\left (\tfrac{\lambda}{\kappa}-\tfrac{2}{3}\right )\theta^1{\scriptstyle\wedge}\,\theta^2
-3\tau\,\theta^1{\scriptstyle\wedge}\,\theta^3
+\inw{c}\,\theta^1{\scriptstyle\wedge}\,\theta^4+\inw{b}\,\theta^2{\scriptstyle\wedge}\,\theta^4+3\inw{a}\,\theta^3{\scriptstyle\wedge}\,\theta^4, \\
d\theta^4 &=\tau\,\theta^1{\scriptstyle\wedge}\,\theta^4,
\end{aligned}\label{e.ch.dth}
\end{equation}
where \begin{equation*}
\tau=\frac{\mu}{\kappa^2}+\frac{\lambda}{6\kappa}+\frac{1}{4},
\end{equation*} and $\inw{a},\,\inw{b},\,\inw{c}$ are functions on ${\mathcal J}^2$. We
check by direct calculations that in this case the coframe is of
order one and all the invariants are generated by $\inw{a}$ and
$\inw{a}_4=X_4(\inw{a})$ through the following formulae
\begin{align}\label{e.ch.syz}
\inw{b}=&\frac{1}{\tau}\left (\frac{1}{3}-\frac{\lambda}{\kappa}\right)\inw{a}-\frac{1}{2\tau}
+\frac{1}{12\tau^2}\left (\frac{1}{3}-\frac{\lambda}{\kappa} \right ),\notag \\
\inw{c}=&\frac{1}{\tau}\left (\frac{\lambda}{\kappa}-\frac{7}{6} \right)\inw{a}_4
+\frac{1}{2\tau}\left (\frac{\lambda}{\kappa}-\frac{7}{6} \right)\inw{a}^2
+\left (-\frac{1}{\tau}+\frac{1}{6\tau^2}\left (\frac{\lambda}{\kappa}-\frac{7}{6} \right) \right )\inw{a}
-\frac{1}{2\tau^2} \notag\\
&+\frac{1}{36\tau^3}\left
(\frac{\lambda}{\kappa}-\frac{144\nu}{\kappa^3}+\frac{3}{2}\right),\notag \\
\inw{a}_1=&-2\tau \inw{a}-\frac{1}{6},\qquad \inw{a}_2=\tau, \qquad \inw{a}_3= 0, \\
\inw{a}_{41}=& -3\tau \inw{a}_4+2\tau \inw{a}^2+\left (\frac{\lambda}{\kappa}-\frac{1}{6} \right )\inw{a}
+\frac{1}{12\tau}\left (\frac{\lambda}{\kappa}-\frac{1}{3} \right )+\frac{1}{2},\notag \\
\inw{a}_{42}=& -4\tau \inw{a}-\frac{1}{6},\qquad \inw{a}_{43}=\tau, \notag\\
\inw{a}_{44}=& -7\inw{a}_4 \inw{a}-\frac{\inw{a}_4}{6\tau}-6\inw{a}^3+\frac{1}{\tau}\left (\frac{\lambda}{\kappa}-1 \right ) \inw{a}^2
+\left (\frac{1}{\tau}+\frac{1}{6\tau^2}\left (\frac{\lambda}{\kappa}-\frac{1}{2} \right )\right )\inw{a}
\notag\\
&+\frac{1}{6\tau^2}+\frac{1}{\tau^3}\left (\frac{\nu}{\kappa^3}-\frac{1}{72} \right ),\notag
\end{align}
where, as usual, $\inw{a}_i=X_i(\inw{a})$,
$\inw{a}_{ij}=X_j(X_i(\inw{a}))$. These algebraic formulae, when
differentiated, enable us to express all other derivatives of
$\inw{a}$, $\inw{b}$ and $\inw{c}$ in terms of $\inw{a}$ and
$\inw{a}_4$. In order to do this we only have to consecutively
substitute the coframe derivatives of $\inw{a}$, $\inw{b}$ and
$\inw{c}$ with the right hand sides of \eqref{e.ch.syz}, for
instance
$$\inw{a}_{11}=-2\tau \inw{a}_1=4\tau^2 \inw{a}+\frac{\tau}{3},
\qquad \qquad \inw{a}_{12}=-2\tau\inw{a}_2=-2\tau^2,$$ etc. Thereby
the non-constant components of the classifying function
${\bf T}\colon{\mathcal J}^2\to\mathbb{R}^N$ are completely characterized by
$$(x,y,p,q)\mapsto(\inw{a},\inw{b},\inw{c},\inw{a}_1,\inw{a}_2,\inw{a}_3,
\inw{a}_4,\inw{a}_{41},\inw{a}_{42},\inw{a}_{43},\inw{a}_{44}).
$$
The graph of this function in $\mathbb{R}^{11}$ is parameterized by
$\inw{a}$ and $\inw{a}_4$.
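The following numerical sketch (added for illustration; it is a plain
transcription of \eqref{e.ch.syz}, with \texttt{r} standing for
$\lambda/\kappa$, \texttt{w} for $\nu/\kappa^3$ and \texttt{tau} for
$\tau$) evaluates these components at a point $(\inw{a},\inw{a}_4)$ of the
parameter plane:
\begin{verbatim}
# Non-constant components of the classifying function at (a, a4),
# transcribed from (e.ch.syz); r = lambda/kappa, w = nu/kappa**3.
def classifying_point(a, a4, tau, r, w):
    b = (1/tau)*(1/3 - r)*a - 1/(2*tau) + (1/(12*tau**2))*(1/3 - r)
    c = ((1/tau)*(r - 7/6)*a4 + (1/(2*tau))*(r - 7/6)*a**2
         + (-1/tau + (1/(6*tau**2))*(r - 7/6))*a - 1/(2*tau**2)
         + (1/(36*tau**3))*(r - 144*w + 3/2))
    a1, a2, a3 = -2*tau*a - 1/6, tau, 0.0
    a41 = -3*tau*a4 + 2*tau*a**2 + (r - 1/6)*a + (1/(12*tau))*(r - 1/3) + 1/2
    a42, a43 = -4*tau*a - 1/6, tau
    a44 = (-7*a4*a - a4/(6*tau) - 6*a**3 + (1/tau)*(r - 1)*a**2
           + (1/tau + (1/(6*tau**2))*(r - 1/2))*a
           + 1/(6*tau**2) + (1/tau**3)*(w - 1/72))
    return (a, b, c, a1, a2, a3, a4, a41, a42, a43, a44)
\end{verbatim}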
Let us consider an arbitrary third-order ODE. It is locally
fibre-preserving equivalent to one of the Chazy classes if and
only if the graphs of the respective classifying functions overlap. This
is only possible if i) conditions \eqref{e.ch.10} are satisfied,
ii) the reduction defined by the conditions \eqref{e.ch.20} is
possible, and iii) after the reduction the equations
\eqref{e.ch.dth} and \eqref{e.ch.syz} hold. The reduction
\eqref{e.ch.20} is possible iff
\begin{equation}\label{e.ch.30}\begin{aligned}
P=&\mathcal{D} F_{qp}-F_{qy}\neq0, \\
Q=&2W_p-\mathcal{D} W_q+F_qW_q\neq0. \end{aligned} \end{equation} After the reduction to
dimension four given by \begin{equation*}
u_1 =\frac{2P^2}{Q}, \qquad u_2 =\frac{2F_q P^3}{3Q}-\frac{2P^2\mathcal{D} P}{Q},
\qquad u_3=-\frac{4P^3}{Q}
\end{equation*} we get the frame \begin{equation}\label{e.ch.frame}\begin{aligned}
X_1=&\frac{Q}{P^2}\left (\frac{Q}{4W_q}-\frac{F_q}{6} \right)\partial_p
+\frac{Q}{P^2}\left( \frac{Q^2}{16W_q^2}-\frac{F_q^2}{36}-\frac{K}{2}\right )\partial_q, \\
X_2=&-\frac{Q^2}{4P^3}\partial_p-\frac{Q^3}{8P^3W_q}\partial_q, \\
X_3=&\frac{Q^3}{8P^4}\partial_q, \\
X_4=&-\frac{2P}{Q}\mathcal{D}.
\end{aligned}\end{equation}
Equations \eqref{e.ch.dth} are satisfied if and only if
\begin{align}\label{e.ch.40}
&2 W_{pp}-W_{qy}+F_{qp}W_q=0,\notag\\
&W_q\mathcal{D} P-P\mathcal{D} W_q=0,\notag\\
&P_y+\frac{1}{3}PF_{qp}=0,\notag\\
&Q_y+\frac{1}{3}QF_{qp}-2\tau P^2=0,\\
&K_p+\frac{1}{2}F_{qy}-\frac{5}{36}F_qF_{qp}+\frac{1}{W_q}
\left (F_{qp}\mathcal{D} W_q-\frac{1}{12}F_qW_{qy}+\frac{1}{2}\mathcal{D} W_{qy}\right)\notag\\
&-\frac{3W_{qy}\mathcal{D} W_q}{4W_q^2}+\left (\frac{2}{3}-\frac{\lambda}{\kappa}\right)P=0.\notag
\end{align}
Finally, equations \eqref{e.ch.syz} must be satisfied by functions
\begin{equation}\label{e.ch.inv}\begin{aligned}
\inw{a}=&\frac{P}{W_q}+\frac{1}{Q}\left (4\mathcal{D} P-\frac{2}{3}F_qP -\frac{2PW_p}{W_q}\right )-\frac{2P\mathcal{D} Q}{Q^2},\\
\inw{b}=&\left \{ \frac{5}{2W_q^2}+\frac{1}{Q}\left(-\frac{10F_q}{3W_q}-\frac{10W_p}{W_q^2}\right)
+\frac{1}{Q^2}\left(-4F_p-4K-\frac{2}{9}F_q^2+\right.\right. \\
&\left.\left.+\frac{1}{W_q}\left(\frac{20}{3}W_pF_q-4\mathcal{D} W_p+2\mathcal{D} Q\right )
+\frac{10W_p^2}{W_q^2}\right)\right\}P^2,\\
\inw{c}=&\frac{8P^3W}{Q^3},
\end{aligned}
\end{equation} and by their derivatives $\inw{a}_4$,
$\inw{a}_{41},\ldots,\inw{a}_{44}$ with respect to the frame
$X_1,\ldots,X_4$. Therefore, by means of theorem \ref{th.cc.clas}
we have
\begin{proposition}
An ODE is locally fibre-preserving equivalent to one of the Chazy
classes II, IV, V, VI, VII or XI for $\sigma\neq11$ in a
neighbourhood of a point $j_0\in{\mathcal J}^2$ if and only if i) the ODE
satisfies the conditions \eqref{e.ch.10}, \eqref{e.ch.30},
\eqref{e.ch.40}, and \eqref{e.ch.syz} with the invariants
$\inw{a}$, $\inw{b}$, $\inw{c}$, $\inw{a}_4$, and
$\inw{a}_{41},\ldots,\inw{a}_{44}$ given by \eqref{e.ch.frame} and
\eqref{e.ch.inv}, ii) the values of $\inw{a}(j_0)$ and
$\inw{a}_4(j_0)$ for the ODE and the Chazy class coincide.
\end{proposition}
Given these criteria for the equivalence it is
interesting to find a transformation of variables transforming an
ODE
$$\frac{{\rm d}^3 y}{{\rm d} x^3}=F\left(x,y,\frac{{\rm d} y}{{\rm d} x},\frac{{\rm d}^2 y}{{\rm d} x^2}\right)$$
to its equivalent Chazy class \begin{equation*}
\frac{{\rm d}^3 \cc{y}}{{\rm d} \cc{x}^3}=\kappa \cc{y}\frac{{\rm d}^2 \cc{y}}{{\rm d} \cc{x}^2}
+\lambda \left(\frac{{\rm d} \cc{y}}{{\rm d} \cc{x}}\right)^2
+\mu \frac{{\rm d} \cc{y}}{{\rm d} \cc{x}}\cc{y}^2+\nu \cc{y}^4.
\end{equation*} We show that the transformation \begin{equation*}
\cc{y}=\phi(x,y),\qquad\qquad \cc{x}=\chi(x)
\end{equation*}
may be easily found.
Let us apply the above (arbitrary) fibre-preserving transformation
to a Chazy class and calculate, for the ODE so obtained
(which is a general ODE equivalent to the Chazy classes), the
quantities $P$, $Q$ and $F_q-pF_{qp}$:
\begin{align}\label{r1670}
P=&-\kappa\chi_x\phi_y,\notag \\
Q=&2\kappa^2\tau\,\chi_x^2\,\phi_y\,\phi,\\
F_q-pF_{qp}=&\kappa\chi_x\phi+3\frac{\chi_{xx}}{\chi_x}-3\frac{\phi_{xy}}{\phi_y}.\notag
\end{align}
From the first and second equations we get
\begin{align*}
(\log |\phi|)_y=&2\tau\frac{P^2}{Q},\\
\chi_x=&-\frac{Q}{2\tau P \phi}.
\end{align*}
Putting this into the third equation of \eqref{r1670} we obtain \begin{equation*}
(\log |\phi|)_x=\frac{1}{2}\left(\log\left|\frac{Q^2}{P^3}\right|\right)_x-\frac{\kappa Q}{12\tau P}
+\frac{1}{6}(pF_{qp}-F_q).
\end{equation*} Finally, after integration of $\chi_x$ and $\phi_x$ we have
\begin{equation} \label{e.ch.tr}\begin{aligned} &\cc{y}=\frac{c_1Q}{|P|^{\frac{3}{2}}}
\exp\left\{\int_{x_0}^x\left(-\frac{\kappa Q}{12\tau P}
+\frac{1}{6}(pF_{qp}-F_q)\right)dx+2\tau\int_{y_0}^y\left.\frac{P^2}{Q}\right|_{x=x_0}dy\right\},\\
&\cc{x}=\frac{1}{2\tau}\int^x_{x_0}\frac{Q}{P\phi}dx+c_2. \end{aligned}\end{equation}
We summarize this calculation as follows.
\begin{proposition}
If there exists a fibre-preserving transformation from an equation
$y'''=F(x,y,y',y'')$ to a reduced Chazy type II, IV -- VII or XI
with $\sigma \neq 11$, then it is given by the inverse of eq.
\eqref{e.ch.tr}, where $P$ and $Q$ are calculated for
$y'''=F(x,y,y',y'')$ according to formulae \eqref{e.ch.30}.
\end{proposition}
\ifthenelse{\boolean{@twoside}}{
\fancyhead[CO]{\iffloatpage{\goth{h}eadfont{tables}}{\rightmark}}
\fancyhead[CE]{\iffloatpage{}{\leftmark}}}{
\fancyhead[CO]{\iffloatpage{}{\leftmark}}}
\addtocounter{table}{1}
\begin{equation}gin{sidewaystable}
\caption{Equations admitting large contact
symmetry groups}\label{t.cc.1}
\renewcommand*{\arraystretch}{1.5}
\begin{equation}gin{tabular}
{|c|c|c|c|c|c|c|c|}
\goth{h}line
& \goth{m}ulticolumn{2}{|c|}{Equation} & \goth{m}ulticolumn{3}{|c|}{Characterization}
& Symmetry algebra $\goth{g}$ & $\dim\goth{g}$ \\ \goth{h}line \goth{h}line
I & $F=0$ & & \goth{m}ulticolumn{3}{|c|}{$W=0,\qquad F_{qqqq}=0$} & $\goth{o}(3,2)$ & $10$ \\ \goth{h}line
\goth{m}ultirow{3}{*}{II} & \goth{m}ultirow{3}{*}{$F=-2\goth{m}u p+y$} & \goth{m}ultirow{3}{*}{$\goth{m}u\in\mathbb{R}$} &
\goth{m}ulticolumn{3}{|c|}{\goth{m}ultirow{3}{*}{$W\neq 0,\quad\inc{a}{}=\goth{m}u,\quad\inc{k}{}=0$}} &
$[V_1,V_4] =-\goth{m}u V_2+V_3,\, [V_1,V_5] =V_1$, & \goth{m}ultirow{3}{*}{$5$} \\
& & & \goth{m}ulticolumn{3}{|c|}{} & $[V_2,V_4] =V_1-\goth{m}u V_3,\, [V_2,V_5] =V_2$, & \\
& & & \goth{m}ulticolumn{3}{|c|}{} & $[V_3,V_4] =V_2,\, [V_3,V_5] =V_3$, & \\ \goth{h}line
\goth{m}ultirow{2}{*}{III} & \goth{m}ultirow{2}{*}{$F=-2\goth{m}u(x)p+(1-\goth{m}u'(x))y$}
& & \goth{m}ulticolumn{3}{|c|}{\goth{m}ultirow{2}{*}{$W\neq 0,\quad
\inc{a}{}=\goth{m}u(x),
\quad\inc{b}{}=\inc{e}{}=\inc{h}{}=\inc{k}{}=0$}} &
$[V_1,V_4]=V_1,\, [V_2,V_4]=V_2$, & \goth{m}ultirow{8}{*}{$4$} \\
& & & \goth{m}ulticolumn{3}{|c|}{} & $[V_3,V_4]=V_3$, & \\ \cline{1-7}
\goth{m}ultirow{2}{*}{IV} & \goth{m}ultirow{2}{*}{$F=q^{3/2+1/(2\sqrt{\goth{m}u})}$}
& \goth{m}ultirow{2}{*}{$\goth{m}u>0,\,\neq\tfrac{1}{9}$} &
\goth{m}ultirow{2}{*}{$0<1+4\inv{\epsilon}{1}(\inc{I}{1})^{-2}\neq\tfrac{1}{9}$}
& \goth{m}ultirow{2}{*}{$\goth{m}u=1+4\inv{\epsilon}{1}(\inc{I}{1})^{-2}$} &
\goth{m}ultirow{6}{*}{\rotatebox{270}{$\inc{I}{1},\inc{I}{2},\inc{I}{3},\inc{I}{4}=const$}
\rotatebox{270}{$W\neq 0,\quad\inv{\epsilon}{1}\neq 0,$}} &
{$[V_1,V_4]=2V_1,\, [V_2,V_4]=(1+\sqrt{\goth{m}u})V_2,$} & \\
& & & & & & {$[V_2,V_3]=V_1, \,[V_3,V_4]=(1-\sqrt{\goth{m}u})V_3$} & \\
\cline{1-5} \cline{7-7}
\goth{m}ultirow{2}{*}{V} & \goth{m}ultirow{2}{*}{$F=(q^2+1)^{\frac{3}{2}}\,
\exp\left (\frac{{\rm arc\,tg}\,q}{\sqrt{\goth{m}u}}\right)$} &
\goth{m}ultirow{2}{*}{$\goth{m}u>0$} & \goth{m}ultirow{2}{*}{$\inv{\epsilon}{1}=-1,\,1-4(\inc{I}{1})^{-2}<0$} &
\goth{m}ultirow{2}{*}{$\goth{m}u=4(\inc{I}{1})^{-2}-1$} & & $[V_1,V_4]=2V_1,\, [V_2,V_4]=V_2-\sqrt{\goth{m}u}V_3$ & \\
& & & & & & $[V_2,V_3]=V_1,\, [V_3,V_4]=\sqrt{\goth{m}u} V_2+V_3$ & \\
\cline{1-5} \cline{7-7}
\goth{m}ultirow{2}{*}{VI} & \goth{m}ultirow{2}{*}{$F=\exp q$} & &
\goth{m}ultirow{2}{*}{$\inv{\epsilon}{1}=-1,\,\inc{I}{1}=-2$} &
& & $[V_1,V_4]=2V_1,\, [V_2,V_4]=V_2+V_3$ & \\
& & & & & & $[V_2,V_3]=V_1,\, [V_3,V_4]=V_3$ & \\ \goth{h}line
\end{tabular}
\end{sidewaystable}
\ifthenelse{\boolean{@twoside}}{\fancyhead[CE]{\leftmark}}{\fancyhead[CO]{\leftmark}}
\addtocounter{table}{-1}
\begin{equation}gin{sidewaystable}
\label{t.cc.2}\caption{Equations admitting large contact symmetry
groups}
\renewcommand*{\arraystretch}{1.5}
\begin{equation}gin{tabular}
{|c|c|c|c|c|c|c|c|}
\goth{h}line
& \goth{m}ulticolumn{2}{|c|}{Equation} & \goth{m}ulticolumn{3}{|c|}{Characterization} & Symmetry algebra $\goth{g}$
& $\dim\goth{g}$ \\ \goth{h}line
\goth{m}ultirow{2}{*}{VII} &
$F=\goth{m}u\left(\frac{q^2}{1-p^2}-p^2+1\right)^{3/2}$ &
\goth{m}ultirow{2}{*}{$\goth{m}u>0$} & $\inv{\epsilon}{2}=1$, &
\goth{m}ultirow{4}{*}{$\goth{m}u=\sqrt{\left|\frac{9(\inc{I}{7})^3}{9(\inc{I}{7})^3-2}\right|}$}
& \goth{m}ultirow{14}{*}{\rotatebox{270}{$W=0,\quad F_{qqqq}\neq 0,\quad
\inc{I}{5},\inc{I}{6},\inc{I}{7},\inc{I}{8}=const$}}
& \goth{m}ultirow{2}{*}{$\goth{u}(2)$} & \goth{m}ultirow{14}{*}{$4$} \\
& $-3\frac{q^2p}{1-p^2}+p^3-p^2$ & & $0<\inc{I}{7}<\frac{\sqrt[3]{6}}{3}$, & & & & \\[0.5ex]
\cline{1-4} \cline{7-7}
\goth{m}ultirow{3}{*}{VIII} & \goth{m}ultirow{3}{*}{$F=\goth{m}u\frac{(2qy-p^2)^{3/2}}{y^2}$} & $0<\goth{m}u<1$
& $\inv{\epsilon}{2}=1,\,\inc{I}{7}<0$
&
& & \goth{m}ultirow{8}{*}{$\goth{g}l(2,\mathbb{R})$} & \\ \cline{3-4}
& & $\goth{m}u>1$ & $\inv{\epsilon}{2}=-1,\,\inc{I}{7}>\frac{\sqrt[3]{6}}{3}$ & & & & \\ \cline{3-5}
& & $\goth{m}u=1$ & $\inv{\epsilon}{2}=\inc{I}{7}=0,\,\inc{I}{8}=1$ & & & & \\ \cline{1-5}
\goth{m}ultirow{2}{*}{IX} &
\goth{m}ultirow{2}{*}{$F=4\goth{m}u(q-p^2)^{3/2}+6qp-4p^3$} &
\goth{m}ultirow{2}{*}{$\goth{m}u>0$} &
\goth{m}ultirow{2}{*}{$\inv{\epsilon}{2}=-1,\,0<\inc{I}{7}<\frac{\sqrt[3]{6}}{3}$}
&
\goth{m}ultirow{4}{*}{$\goth{m}u=\sqrt{\left|\frac{9(\inc{I}{7})^3}{9(\inc{I}{7})^3-2}\right|}$} & & & \\
& & & & & & & \\ \cline{1-4}
\goth{m}ultirow{3}{*}{X} &
\goth{m}ultirow{3}{*}{$F=\goth{m}u\left(\frac{q^2}{p^2}+p^2\right)^{3/2}+3\frac{q^2}{p}+p^3$}
& $0<\goth{m}u<1$ & $\inv{\epsilon}{2}=-1,\,\inc{I}{7}<0$
&
& & & \\
\cline{3-4}
& & $\goth{m}u>1$ & $\inv{\epsilon}{2}=1,\,\inc{I}{7}>\frac{\sqrt[3]{6}}{3}$ & & & & \\ \cline{3-5}
& & $\goth{m}u=1$ & $\inv{\epsilon}{2}=\inc{I}{7}=0,\,\inc{I}{8}=-1$ & & & & \\ \cline{1-5} \cline{7-7}
\goth{m}ultirow{2}{*}{XI} & \goth{m}ultirow{2}{*}{$F=(q^2+1)^{3/2}$} &&
\goth{m}ultirow{2}{*}{$\inv{\epsilon}{2}=1,\,\inc{I}{7}=\frac{\sqrt[3]{6}}{3}$} &&& $[V_2,V_4]=-V_3,\,\,[V_2,V_3]=V_1,$ & \\
&&&&&& $[V_3,V_4]=V_2$ & \\ \cline{1-5} \cline{7-7}
\goth{m}ultirow{2}{*}{XII} & \goth{m}ultirow{2}{*}{$F=q^{3/2}$} &&
\goth{m}ultirow{2}{*}{$\inv{\epsilon}{2}=-1,\,\inc{I}{7}=\frac{\sqrt[3]{6}}{3}$} &&& $[V_2,V_4]=V_2,\,\,[V_2,V_3]=V_1,$ & \\
&&&&&& $[V_3,V_4]=-V_3$ & \\ \goth{h}line
\end{tabular}
\end{sidewaystable}
\begin{equation}gin{sidewaystable}
\caption{Equations admitting large point symmetry
groups}\label{t.pp.1}
\renewcommand*{\arraystretch}{1.5}
\begin{equation}gin{tabular}
{|c|c|c|c|c|c|c|c|}
\goth{h}line
& \goth{m}ulticolumn{2}{|c|}{Equation} & \goth{m}ulticolumn{3}{|c|}{Characterization}
& Symmetry algebra $\goth{g}$ & $\dim\goth{g}$
\\ \goth{h}line \goth{h}line
\goth{m}ultirow{2}{*}{I.1} & \goth{m}ultirow{2}{*}{$F=0$} & &
\goth{m}ultirow{2}{*}{$F_{qq}^2+6F_{qqp}=0$} &
\goth{m}ulticolumn{2}{|c|}{\goth{m}ultirow{4}{*}{$F_{qq}K+\frac{1}{3}F_qF_{qp}-F_{qy}+\frac12F_{pp}=0$}}
& \goth{m}ultirow{2}{*}{$\goth{g}oth{co}(2,1)\semi{.}\mathbb{R}^3$}
& \goth{m}ultirow{2}{*}{$7$} \\ & & & & \goth{m}ulticolumn{2}{|c|}{} & & \\
\cline{1-4} \cline{7-8}
\goth{m}ultirow{2}{*}{I.2} & \goth{m}ultirow{2}{*}{$F=\frac32\frac{q^2}{p}$} &
& \goth{m}ultirow{2}{*}{$F_{qq}^2+6F_{qqp}<0$} &
\goth{m}ulticolumn{2}{|c|}{\goth{m}ultirow{4}{*}{$W=0,\qquad F_{qqq}=0$}} &
\goth{m}ultirow{2}{*}{$\goth{o}(2,2)$}
& \goth{m}ultirow{2}{*}{$6$} \\
& & & & \goth{m}ulticolumn{2}{|c|}{} & & \\
\cline{1-4} \cline{7-8}
\goth{m}ultirow{2}{*}{I.3} & \goth{m}ultirow{2}{*}{$F=\frac{3q^2p}{1+p^2}$} &
&\goth{m}ultirow{2}{*}{$F_{qq}^2+6F_{qqp}>0$} & \goth{m}ulticolumn{2}{|c|}{} &
\goth{m}ultirow{2}{*}{$\goth{o}(4)$} & \goth{m}ultirow{2}{*}{$6$} \\
& & & & \goth{m}ulticolumn{2}{|c|}{} & & \\ \goth{h}line
\goth{m}ultirow{2}{*}{I.4} & \goth{m}ultirow{2}{*}{$F=q^3$} &
\goth{m}ultirow{2}{*}{} &
\goth{m}ulticolumn{3}{|c|}{\goth{m}ultirow{2}{*}{$W=F_{qqqq}=0,\quad
\inp{I}{9}=-\frac25,\quad \inp{I}{10}=\frac{1}{25},\quad
\inp{I}{11}=0 $}} &
{$[V_1,V_4]=2V_1,\, [V_2,V_4]=\tfrac43 V_2,$} & \goth{m}ultirow{2}{*}{4}\\
& & & \goth{m}ulticolumn{3}{|c|}{} & {$[V_2,V_3]=V_1, \,[V_3,V_4]=-\tfrac23 V_3$} & \\
\goth{h}line
\goth{m}ultirow{2}{*}{II.1} & \goth{m}ultirow{2}{*}{$F=-2\goth{m}u p+y$} &
\goth{m}ultirow{2}{*}{$\goth{m}u\in\mathbb{R}$} &
\goth{m}ulticolumn{3}{|c|}{\goth{m}ultirow{2}{*}{$W\neq 0,\quad
\inc{a}{}=\goth{m}u,\quad \inp{e}{}=\inp{k}{}=0 $}} &
\goth{m}ultirow{2}{*}{as in table \ref{t.cc.1}} & \goth{m}ultirow{2}{*}{5} \\
& & & \goth{m}ulticolumn{3}{|c|}{} & & \\ \goth{h}line
\goth{m}ultirow{2}{*}{II.2} & \goth{m}ultirow{2}{*}{$F=\goth{m}u\frac{q^2}{p}$} &
\goth{m}ultirow{2}{*}{$\goth{m}u > \tfrac32,\neq3$} &
\goth{m}ultirow{2}{*}{$\inp{I}{2}\notin[\,0,\sqrt[3]{4}\,]$} &
\goth{m}ultirow{2}{*}{$\inp{I}{2}=\sqrt[3]{\frac{(2\goth{m}u-3)^2}{\goth{m}u(\goth{m}u-3)}}$}
& \goth{m}ultirow{2}{*}{\raisebox{-2ex}{$W\neq0,\,\,\inp{I}{1}=-2$}} &
\goth{m}ultirow{2}{*}{$[V_1,V_2]=V_1,\, [V_3,V_4]=V_3$} & \goth{m}ultirow{3}{*}{4}\\
& & & & & & & \\ \cline{1-5}\cline{7-8}
\goth{m}ultirow{2}{*}{II.3} &
\goth{m}ultirow{2}{*}{$F=\frac{3p+\goth{m}u}{p^2+1}q^2$} &
\goth{m}ultirow{2}{*}{$\goth{m}u>0$} &
\goth{m}ultirow{2}{*}{$\inp{I}{2}\in(0,\sqrt[3]{4})$} &
\goth{m}ultirow{2}{*}{$\inp{I}{2}=\sqrt[3]{\frac{4\goth{m}u^2}{\goth{m}u^2+9}}$} &
\goth{m}ultirow{2}{*}{\raisebox{2ex}{$\inp{I}{2},\,\inp{I}{3},\,\inp{I}{4}=const$}}
& $[V_1,V_2]=V_3,\, [V_3,V_1]=V_2$ & \goth{m}ultirow{3}{*}{4}\\
& & & & & & $[V_3,V_4]=V_3,\, [V_2,V_4]=V_2$ & \\ \goth{h}line
\end{tabular}
\end{sidewaystable}
\begin{equation}gin{sidewaystable}
\addtocounter{table}{-1} \label{t.pp.2} \caption{Equations
admitting large point symmetry groups}
\renewcommand*{\arraystretch}{1.5}
\begin{equation}gin{tabular}
{|c|c|c|c|c|c|c|c|}
\goth{h}line
& \goth{m}ulticolumn{2}{|c|}{Equation} & \goth{m}ulticolumn{3}{|c|}{Characterization}
& Symmetry algebra $\goth{g}$ & $\dim\goth{g}$
\\ \goth{h}line
\goth{m}ultirow{2}{*}{III} & \goth{m}ultirow{2}{*}{$F=-2\goth{m}u(x)p+(1-\goth{m}u'(x))y$}
& & \goth{m}ulticolumn{3}{|c|}{\goth{m}ultirow{2}{*}{$W\neq 0,\quad
\inc{a}{}=\goth{m}u(x), \quad \inp{b}{}=\inp{e}{}=\inp{h}{}=\inp{k}{}=0
$}} &
\goth{m}ultirow{12}{*}{as in table \ref{t.cc.1}} & \goth{m}ultirow{12}{*}{4} \\
& & & \goth{m}ulticolumn{3}{|c|}{} & & \\ \cline{1-6}
\goth{m}ultirow{2}{*}{IV} & \goth{m}ultirow{2}{*}{$F=q^{\goth{m}u}$} &
\goth{m}ultirow{2}{*}{$\goth{m}u\neq0,1,\tfrac32,3$} &
\goth{m}ultirow{2}{*}{$\inp{I}{1}\neq-3$} &
\goth{m}ultirow{2}{*}{$\inp{I}{1}=\frac{4-3\goth{m}u}{\goth{m}u-1}$}
&\goth{m}ultirow{2}{*}{\raisebox{-1.7ex}{$W\neq0$}} &
& \\
& & & & & & & \\
\cline{1-5}
\goth{m}ultirow{2}{*}{VI} & \goth{m}ultirow{2}{*}{$F=\exp q$} & &
\goth{m}ulticolumn{2}{|c|}{\goth{m}ultirow{2}{*}{$\inp{I}{1}=-3$}}
& \goth{m}ultirow{2}{*}{\raisebox{1.7ex}{$\inp{I}{1},\inp{I}{2},\,\inp{I}{3},\,\inp{I}{4}=const$}}
& & \\
& & & \goth{m}ulticolumn{2}{|c|}{} & & & \\ \cline{1-6}
\goth{m}ultirow{2}{*}{VIII} &
\goth{m}ultirow{2}{*}{$F=\goth{m}u\frac{(2qy-p^2)^{3/2}}{y^2}$} &
\goth{m}ultirow{2}{*}{$\goth{m}u>0$} & \goth{m}ultirow{2}{*}{$\inp{I}{8}>-\tfrac32$}
&\goth{m}ultirow{2}{*}{$\inp{I}{8}=-\tfrac32+\tfrac{2}{\goth{m}u^2}$} &
\goth{m}ultirow{3}{*}{\raisebox{-2ex}{$F_{qqqq}\neq 0,\,\,W=0$}} &
& \\
& & & & & & & \\ \cline{1-5}
\goth{m}ultirow{2}{*}{IX} &
\goth{m}ultirow{2}{*}{$F=4\goth{m}u(q-p^2)^{3/2}+6qp-4p^3$} &
\goth{m}ultirow{2}{*}{$\goth{m}u>0$} & \goth{m}ultirow{2}{*}{$\inp{I}{8}<-\tfrac32$}
&\goth{m}ultirow{2}{*}{$\inp{I}{8}=-\tfrac32-\tfrac{2}{\goth{m}u^2}$} &
\goth{m}ultirow{3}{*}{} & & \\
& & & & &
\goth{m}ultirow{3}{*}{\raisebox{3ex}{$\inp{I}{5},\inp{I}{6},\,\inp{I}{7},\,\inp{I}{8}=const$}} & &
\\ \cline{1-5}
\goth{m}ultirow{2}{*}{XII} & \goth{m}ultirow{2}{*}{$F=q^{3/2}$} & &
\goth{m}ulticolumn{2}{|c|}{\goth{m}ultirow{2}{*}{$\inp{I}{8}=-\tfrac32$}} &
& & \\
&& & \goth{m}ulticolumn{2}{|c|}{} & & & \\ \goth{h}line
\end{tabular}
\end{sidewaystable}
\ifthenelse{\boolean{@twoside}}{
\renewcommand{\chaptermark}[1]{\goth{m}arkboth{\goth{h}eadfont{#1}}{\goth{h}eadfont{#1}}}}{
\renewcommand{\chaptermark}[1]{\goth{m}arkboth{\goth{h}eadfont{#1}}{}}}
\begin{equation}gin{thebibliography}{99}
\bibitem{Wun} K. W\"unschmann {\it \"Uber Ber\"uhrungsbedingungen bei
Integralkurven von Differentialgleichungen}, Dissertation,
Greifswald, 1905
\bibitem{Car1} E. Cartan, Les espaces generalises et l'integration
de certaines classes d'equations differentielles, {\it C. R. Acad.
Sci.} {\bf 206} (1938) 1425
\bibitem{Car2} E. Cartan, La geometria de las ecuaciones diferenciales de tercer orden,
{\it Oeuvres Compl\`etes}, Part III vol. 2, Paris:
Gauthier-Villars, 1955 (original 1941)
\bibitem{Car3} E. Cartan, Sur une classe d'espaces de Weyl
{\it Ann. Sc. Ec. Norm.Sup.} {\bf 60} (1943) 1
\bibitem{Chern} S.-S. Chern, The geometry of the differential
equation $y'''=F(x,y,y',y'')$, {\it Selected Papers} vol. I,
Springer-Verlag, 1978 (original 1940)
\bibitem{Bry} R. Bryant, Two exotic holonomies in dimension four, path
geometries, and twistor theory, {\it Proc. Symp. Pure Math.} {\bf
53} (1991) 33.
\bibitem{Cap} A. Cap, Two constructions with parabolic geometries,
{\it Rend. Circ. Mat. Palermo Suppl.} {\bf 79} (2006) 11, {\tt
arXiv:math/0504389}
\bibitem{Cap2} A. Cap and H. Schichl,
Parabolic geometries and canonical Cartan connections,
{\it Hokkaido Math. J.} {\bf 29} (2000) 453
\bibitem{Car4} E. Cartan, Les probl\`emes d'\'equivalence, in:
{\it Oeuvres Compl\`etes}, Part III, vol. 2, Paris:
Gauthier-Villars, 1955
\bibitem{Car5} E. Cartan, Sur les vari\'et\'es \`a connexion projective, {\it
Bull. Soc. Math.} {\bf 52} (1924) 205
\bibitem{Cha} J. Chazy, Sur les \'equations diff\'erentielles du troisi\'eme
ordre et d'ordre sup\'erieur dont l'int\'egrale g\'en\'erale a ses
points critiques fixes, {\it Acta Math.} {\bf 34} (1911) 317
\bibitem{Cos} C. Cosgrove, Chazy classes IX -- XI of third-order differential
equations, {\it Studies in Applied Mathematics} {\bf 104} (2000)
171
\bibitem{Dubwil} B. Doubrov, Contact trivialization of ordinary differential
equations, {\it Differential Geometry and Its Applications} (2001)
73
\bibitem{Dubgl} B. Doubrov, Generalized Wilczynski invariants for non-linear ordinary
differential equations, (2007) {\tt arXiv:math/0702251}
\bibitem{Dub1} B. Doubrov and B.Komrakov, Contact Lie algebras of vector fields on the plane,
{\it Geometry and Topology} {\bf 3} (1999) 1
\bibitem{Dub2} B. Doubrov, B. Komrakov and T. Morimoto,
Equivalence of holonomic differential equations, {\it Lobachevskij
Journal of Mathematics} {\bf 3} (1999) 39
\bibitem{Dun1} M. Dunajski, L.J. Mason and P. Tod, Einstein-Weyl geometry,
the dKP equation and twistor theory, {\it J. Geom. Phys.} {\bf 37}
(2001) 63
\bibitem{Dun2} M. Dunajski and P. Tod,
Einstein-Weyl structures from Hyper-K\"ahler metrics with conformal Killing vectors,
{\it Differ. Geom. Appl.} {\bf 14} (2001) 39
\bibitem{Dun3} M. Dunajski and P. Tod, Paraconformal geometry of
$n$th order ODEs, and exotic holonomy in dimension four, {\it J.
Geom. Phys.} {\bf 56} (2006) 1790
\bibitem{nsf3d} D. Forni, M. Iriondo and C. Kozameh, Null surfaces formulation in three dimensions,
{\it J. Math. Phys.} {\bf 41} (2000) 5517
\bibitem{Fox} D. Fox, Contact projective structures, {\it Indiana University Mathematics
Journal} {\bf 54} (2005) 1547, {\tt arXiv:math/0402332}
\bibitem{nsf2} S. Fritelli, C. Kozameh and E.T. Newman,
Lorentzian metrics from characteristic surfaces, {\it J. Math.
Phys.} {\bf 36} (1995) 4975
\bibitem{nsf3} S. Fritelli, C. Kozameh and E.T. Newman,
GR via characteristic surfaces, {\it J. Math. Phys.} {\bf 36}
(1995) 4984
\bibitem{nsf4} S. Fritelli, E.T. Newman and C. Kozameh,
On the dynamics of the characteristic surfaces, {\it J. Math.
Phys.} {\bf 36} (1995) 6397
\bibitem{New2} S. Fritelli, N. Kamran and E.T. Newman,
Differential equations and conformal geometry, {\it J. Geom.
Phys.} {\bf 43} (2002) 133
\bibitem{New1} S. Fritelli, C. Kozameh and E.T. Newman,
Differential Geometry from Differential Equations, {\it Commun.
Math. Phys.} {\bf 223} (2001) 383
\bibitem{New3} S. Fritelli, C. Kozameh, E.T. Newman and P. Nurowski,
Cartan normal conformal connections from differential equations,
{\it Class. Quantum Grav.} {\bf 19} (2002) 5235
\bibitem{New4} E. Gallo, C. Kozameh, E.T. Newman and K. Perkins,
Cartan normal conformal connections from pairs of second-order
PDEs, {\it Class. Quantum Grav.} {\bf 21} (2004) 4063
\bibitem{Gar} R. Gardner, {\it The Method of Equivalence and Its Applications},
Philadelphia: SIAM, 1989
\bibitem{God1} M. Godli\'{n}ski, {\it Cartan's Method of Equivalence and Sasakian Structures} (in Polish),
Master Thesis, University of Warsaw, 2001
\bibitem{God3} M. Godli\'{n}ski and P. Nurowski, Third-order ODEs and four-dimensional
split signature Einstein metrics, {\it J. Geom. Phys.} {\bf 56}
(2006) 344
\bibitem{Godode5} M. Godli\'{n}ski and P. Nurowski, $GL(2,\mathbb{R})$ geometry of ODE's, (2007) {\tt
arXiv:0710.0297}
\bibitem{Grebot} G. Grebot, The characterization of third order ordinary
differential equations admitting a transitive fiber-preserving
point symmetry group, {\it Journ. Math. Anal. Appl.} {\bf 206}
(1997) 364
\bibitem{Hitch} N. J. Hitchin, Complex manifolds and Einstein's equations,
{\it Twistor Geometry and Non-linear Systems}
({\it Lecture Notes in Mathematics}) vol. 970 Berlin: Springer, 1982
\bibitem{Hsu} L. Hsu and N. Kamran, Classification of second-order ordinary differential
equations admitting Lie groups of fiber-preserving symmetries,
{\it Proc. London Math. Soc.} {\bf 58} (1989) 387
\bibitem{Jt} P. E. Jones and P. Tod, Minitwistor spaces and Einstein-Weyl spaces,
{\it Class. Quantum Grav.} {\bf 2} (1985) 565
\bibitem{Kob1} S. Kobayashi, {\it Transformation Groups in Differential Geometry},
N.Y., Heidelberg, Berlin: Springer-Verlag, 1972
\bibitem{nsf1} C. Kozameh and E.T. Newman, Theory of light cone cuts of null infinity
{\it J. Math. Phys.} {\bf 24} (1983) 2481
\bibitem{Leb} C. LeBrun, Explicit self-dual metrics on $CP^2 \#
\cdots \# CP^2$, {\it J. Diff. Geom.} {\bf 34} (1991) 233
\bibitem{Lie} S. Lie, Klassifikation und Integration von
gew\"ohnlichen Differentialgleichungen zwischen $x$, $y$, die eine
Gruppe von Transformationen gestatten III, {\it Gesammelte
Abhandlungen} vol. 5, Leipzig: Teubner, 1924
\bibitem{MacC} M.A.H. MacCallum, On the Classification of the Real Four-Dimensional Lie Algebras,
{\it On Einstein's Path Essays in Honor of Engelbert Schucking},
New-York: Springer-Verlag, 1999
\bibitem{Mor1} T. Morimoto, Geometric structures on filtered manifolds,
{\it Hokkaido Math. J.} {\bf 22} (1993) 263
\bibitem{Mor2} T. Morimoto, Lie Algebras, Geometric Structures
and Differential Equations on Filtered Manifolds, in:
{\it Advanced Studies in Pure Mathematics} 37, 2002,
Lie Groups, Geometric Structures and Differential Equations -- One
Hundred Years after Sophus Lie--, 205-252
\bibitem{NN} E.T. Newman and P. Nurowski,
Projective connections associated with second-order ODEs, {\it
Class. Quantum Grav.} {\bf 20} (2003) 2325
\bibitem{Nur0} P. Nurowski, On a certain formulation of the Einstein
equations, {\it J. Math. Phys.} {\bf 39} (1998) 5477
\bibitem{Nur1} P. Nurowski, Differential equations and conformal structures,
{\it J. Geom. Phys.} {\bf 55} (2005) 19
\bibitem{Nur4} P. Nurowski, Comment on GL(2,R) geometry of 4th order ODEs, (2007) {\tt
arXiv:0710.1658}
\bibitem{Nur2} P. Nurowski, Notes on Cartan connections, unpublished
\bibitem{Nur3} P. Nurowski and G.A.J. Sparling,
Three-dimensional CR structures and second-order ordinary
differential equations, {\it Class. Quantum Grav.} {\bf 20} (2003)
4995
\bibitem{Olv1} P.J. Olver, {\it Applications of Lie Groups to Differential Equations},
Second Edition, Graduate Texts in Mathematics, vol. 107, New York:
Springer--Verlag, 1993
\bibitem{Olv2} P.J. Olver, {\it Equivalence, Invariants and Symmetry},
Cambridge: Cambridge University Press, 1995
\bibitem{Sat} H. Sato and A.Y. Yoshikawa,
Third order ordinary differential equations and Legendre
connection, {\it J. Math. Soc. Japan} {\bf 50} No. 4 (1998) 993
\bibitem{Ste} S. Sternberg, {\it Lectures on Differential Geometry},
Englewood Cliffs, N.J.: Prentice-Hall, 1965
\bibitem{Tan2} N. Tanaka, On differential systems, graded Lie algebras and pseudo-groups,
{\it J. Math. Kyoto Univ} {\bf 10} (1970) 1
\bibitem{Tan} N. Tanaka, On the equivalence problems associated
with simple graded Lie algebras, {\it Hokkaido Math. J.} {\bf 8} (1979) 23
\bibitem{Tod} K.P. Tod, Einstein-Weyl spaces and third-order differential equations,
{\it J. Math. Phys.} {\bf 41} (2000) 5572
\bibitem{Tre} M. A. Tresse, D\'etermination des invariants
ponctuels de l'\'equation diff\'erentielle ordinaire du second ordre
$y''=\omega(x,y,y')$, Leipzig: Hirzel, 1896
\bibitem{Ward} R.S. Ward, Einstein-Weyl spaces and $SU(\infty)$ Toda fields,
{\it Class. Quantum Grav.} {\bf 7} (1990) L95
\end{thebibliography}
\end{document}
|
\begin{document}
\title{Balanced presentations of the trivial group and four-dimensional geometry}
\author{ Boris Lishak and Alexander Nabutovsky }
\maketitle
\begin{abstract}
We prove that 1) There exist infinitely many non-trivial
codimension one ``thick" knots in $\mathbb{R}^5$; 2) For each
closed four-dimensional smooth manifold $M$ and
for each sufficiently small positive $\varepsilon$ the set of isometry classes
of Riemannian metrics with volume equal to $1$ and injectivity radius greater than $\varepsilon$ is disconnected; 3) For each closed four-dimensional
$PL$-manifold $M$ and any $m$ there exist arbitrarily large values of $N$ such that
some two triangulations of $M$ with $<N$ simplices cannot be connected
by any sequence of $<M_m(N)$ bistellar transformations, where
$M_m(N)=\exp(\exp(\ldots \exp (N)))$ ($m$ times).
\end{abstract}
\section{Main results.} \label{Main}
{\bf 1.1.} The goal of this paper is to extend results of [N1], [N2], [N3]
to the four-dimensional situation.
\begin{Thm} Let $M$ be any closed four-dimensional Riemannian manifold.
Let $I_{\varepsilon}(M)$ denote the space of isometry classes of Riemannian
metrics on $M$ with volume equal to $1$ and injectivity radius greater than
$\varepsilon$. (This space is endowed with the Gromov-Hausdorff metric $d_{GH}$.)
Then for all sufficiently small $\varepsilon >0$ $I_\varepsilon(M)$ is disconnected,
and, moreover, can be represented as the union of two non-empty subsets
$A_1,\ A_2$ such that for any $\mu_1\in A_1$, $\mu_2\in A_2$
$d_{GH}(\mu_1,\ \mu_2)>{\varepsilon\over 10}$.
\par
Furthermore, for each $m$ let $\exp_m(x)$ denote $\exp(\exp(\ldots(\exp x)))$ ($m$ times).
Then for each $m$ for all sufficiently small $\varepsilon$ there exist $\mu,\nu\in I_{\varepsilon}(M)$ with the following property. Let
$\mu_1=\mu, \mu_2,\ldots , \mu_N=\nu$ be a sequence of isometry classes of Riemannian metrics on $M$
of volume one such that for each $i$ $d_{GH}(\mu_i,\ \mu_{i+1})\leq {\varepsilon\over 10}$. Then
$\inf_i inj(\mu_i)\leq {1\over \exp_m({1\over\varepsilon})}$.
\end{Thm}
A (stronger) analog of this theorem for $n>4$ as well as for a class of closed four-dimensional manifolds
representable as the connected sum of any closed $4$-manifold and several copies of $S^2\times S^2$ can be found in [N2] (Theorem 1 and section 5.A).
(More precisely, ``several" means $14$. The minimal number of copies of $S^2\times S^2$ required for the method
of [N2] to work is equal to the number of relators in the finite presentations of a sequence of groups for which the triviality problem is algorithmically unsolvable;
cf. [Sh1], [Sh2]).
The next theorem is a four-dimensional analog of Theorem 11 from [N2].
For each smooth manifold $M$ define $Al_1(M)$ as the space of $C^1$-smooth
Alexandrov spaces of curvature $-1\leq K\leq 1$, $C^1$-diffeomorphic to $M$
(cf. [BN] for a definition of Alexandrov spaces with two-sided bounds on
sectional curvature).
A result of I. Nikolaev ([Ni]) implies that all of them are Gromov-Hausdorff
limits of sequences of smooth Riemannian structures on $M$. The classical Gromov-Cheeger compactness theorem implies that all elements of $Al_1(M)$ are
$C^{1,\alpha}$-smooth Riemannian structures on $M$ for any $\alpha\in (0,1)$.
We can consider diameter as a functional on $Al_1(M)$.
\begin{Thm} Let $M$ be a closed $4$-dimensional manifold such that
either its Euler characteristic is not equal to zero, or its simplicial
volume is not equal to zero. Then diameter regarded as a functional on $Al_1(M)$ has infinitely many local minima. The set of values of diameter at its local minima on $Al_1(M)$ is unbounded.
\end{Thm}
The assumptions about $M$ imply a uniform positive lower bound for the volume of
all elements of $Al_1(M)$. Now the Gromov-Cheeger theorem implies
the compactness of sublevel sets of $diam$, $diam^{-1}((0,x])$, on $Al_1(M)$
for all values of $x$. Now we see that it is sufficient to prove that there
exists an unbounded sequence of values of $x$ such that the set of all smooth
Riemannian structures on $M$ with $-1\leq K\leq 1$ and $diam\leq x$
is disconnected, and, moreover, can be represented as a union of two non-empty
subsets with disjoint closures. After noticing that the classical
Cheeger inequality implies that for all such smooth Riemannian structures
the injectivity radius will be bounded below by an explicit positive
function of $x$ (that behaves as $const\ \exp(-3x)$) we see that this
theorem is similar to the previous one, and, in fact, has a very similar
proof.
Theorem 11 in [N2] should not be confused with a much deeper and significantly
more difficult main theorem in [NW1] (see also [NW2] and [W]) that
does not have the assumption that a smooth manifold $M$ of dimension
greater than four has either a non-zero Euler characteristic
or a non-zero simplicial volume, and, therefore, one lacks an a priori uniform
positive lower bound for the volumes of the considered metrics.
At the moment we are
not able to prove a four-dimensional analog of the main theorem of [NW1].
{\bf 1.2.} In order to state the next theorem define {\it crumpledness}
(a.k.a {\it ropelength}) of an embedded
closed manifold $X^n$ in a complete Riemannian manifold $Y^{n+k}$
as $\kappa(X^n)={vol^{1\over n}(X^n)\over r(X^n)}$, where $r(X^n)$ denotes the
injectivity
radius of the normal exponential map of $X^n$. Informally speaking, $r(X^n)$
can be interpreted as the smallest radius of a nonself-intersecting tube
around $X^n$. This functional was defined in [N1] for hypersurfaces
and named ``crumpledness", but in later papers on ``thick" knots in $\mathbb{R}^3$ it had been given a new name ``ropelength", as it can be interpreted
as the length of a similar knot such that the maximal radius
of a nonself-intersecting tube around this knot is equal to one (i.e.
it is the length of a similar knot tied on ``thick" rope of radius one).
One of the ideas of [N1] was that one can similarly consider higher-dimensional
``thick" knots. Two knots (=embeddings of $S^n$ in $\mathbb{R}^{n+k}$)
belong to the same $x$-thick knot type
if they both are in the same path component of the sublevel set
$\kappa^{-1}((0,x])$ of $\kappa$.
\par
To state our main result about ``thick" knots it is convenient
to first introduce a space of non-parametrized $C^{1,1}$-smooth embeddings
$E_n=Emb(S^n, \mathbb{R}^{n+1})/Diff(S^n)$ of $S^n$ into $\mathbb{R}^{n+1}$,
and then define $Knot_{n,1}$ as the quotient of $E_n$ with respect
to the action of the group generated by isometries and homotheties
of the ambient Euclidean space $\mathbb{R}^{n+1}$.
The choice of smoothness is motivated by the facts that 1) $r(\Sigma^n)>0$
for every $C^{1,1}$-smooth closed hypersurface; 2) $r$ is an upper semi-continuous functional on $E_n$ and, therefore, on $Knot_{n,1}$ (Theorem 5.1.1 of
[N1]); and 3) The sublevel sets
$\kappa^{-1}((0,x])$ in $Knot_{n,1}$ are compact (see [N1], proof of
Theorem 5.2.1). These facts are true for all dimensions $n$.
Now for each $x$ we can consider ``thick" knot $x$-types as subsets of either $E_n$ or $Knot_{n,1}$. A knot $x_1$-type
and $x_2$-type are {\it distinct} if they do not intersect in $E_n$ (or, equivalently, in $Knot_{n,1}$). (Assuming that, say, $x_1\leq x_2$, this is
equivalent to the $x_1$-knot type not being a subset of the $x_2$-knot type.)
Our next results imply that there exist non-trivial types of ``thick" four-dimensional knot types of codimension one.
\begin{Thm}
There exists an infinite sequence of distinct $x_i$-knot types in $E_4$ (correspondingly, $Knot_{4,1}$),
where $x_i$ is an unbounded increasing sequence. Moreover, there exists an unbounded increasing sequence
of $x_i$, which are the values of $\kappa$ at its local minima $k_i$ on $E_4$ (or, equivalently, $Knot_{4,1}$).
Further, for each $m$ one can find such a sequence of numbers $\{x_i\}$ and knots $k_i$ with the additional
property that any isotopy between $k_i$ and the standard $4$-sphere of radius $1$ in $\mathbb{R}^5$ must pass through
hypersurfaces, where the value of $\kappa$ is greater than $\exp_m(x_i)$.
\end{Thm}
\noindent
{\bf Remarks.} {\bf 1.} The second assertion of the theorem is stronger than the first assertion, as each local minimum of $\kappa$
with value $x$ gives rise to an $x$-knot type that consists of one knot, if the local minimum is strict, and a connected set
of knots in $\kappa^{-1}(\{x\})$ otherwise. On the other hand, the second assertion immediately follows from the first assertion
and the compactness of sublevel sets of $\kappa$ (see Theorem 5.1.1 in [N1]).
\par\noindent
{\bf 2.} The local minima of $\kappa$ were called self-clenching hypersurfaces
in [N1]. The idea behind this metaphor is that one can imagine that this
hypersurface is made of very thin material that bends but cannot be stretched.
If it also cannot be squeezed, then the ``thick" hypersurface
is tightly
folded in $\mathbb{R}^5$. It can move (other than rigid body movement) only
if the local minimum is not strict, and only by ``sliding movements", so that
at each moment of time it is still a local minimum of $\kappa$ (i.e. it cannot
be unfolded into a less crumpled shape).
\par\noindent
{\bf 3.} Two very interesting questions are whether or not
there exist non-trivial ``thick" knots of codimension one in $\mathbb{R}^3$
and $\mathbb{R}^4$. The second of these questions is related to
the smooth Schoenflies conjecture, that asserts that each smooth embedding
of $S^3$ into $\mathbb{R}^4$ is isotopic to the standard round sphere of radius one. (This fact is known for all other dimensions). Note, that if
the smooth Schoenflies conjecture turns out to be false one can still ask
whether or not there are non-trivial ``thick" knot types in the component
of $Knot_{3,1}$ that consists of $3$-spheres in $\mathbb{R}^4$ that are isotopic to the round sphere. It seems almost ``self-evident" that there are no
non-trivial ``thick" knots $S^1\subset \mathbb{R}^2$, but we do not know a proof of this fact
and are not aware of any publications in this direction.
\par
{\bf 1.3.} To state our third result for every closed four-dimensional $PL$-manifold $M$
consider the set of all simplicial isomorphism classes of simplicial
complexes PL-homeomorphic to $M$. For brevity, we call them {\it triangulations} of $M$. The discrete set $T(M)$ of all triangulations of $M$ can be turned into a metric space using {\it bistellar transformations}. Bistellar transformations
are operations that transform one triangulation into another as follows.
Let $T_1$ be a triangulation of $M$. Assume that it contains a
simplicial subcomplex $K$ that
consists of $k$, $1\leq k\leq 5$, $4$-dimensional simplices (together with their faces)
and is simplicially isomorphic to a subcomplex $C$ of the boundary of a
$5$-dimensional simplex $\partial \Delta^5$. To perform the corresponding
bistellar transformation one first removes these $k$ simplices (and all
their faces) and then attaches the closure
of the complement $\partial\Delta^5\setminus C$ to the boundary of $K$
(which is simplicially isomorphic to the boundary of
$\partial\Delta^5\setminus C$).
Since we exchange one PL-disc (triangulated with $k$ $4$-simplices) for
another (triangulated with $6-k$ $4$-simplices), we obtain a triangulation
$T_2$ of the same manifold. Moreover, endow
$T_1$ and $T_2$ with length metrics such that each simplex is
a flat regular simplex with side length one. In this case it is easy to see
that
$T_1$ and $T_2$ will be bi-Lipschitz homeomorphic, and the Lipschitz
constants of the homeomorphism and its inverse
will not exceed an absolute constant
that can be
explicitly evaluated. U. Pachner proved that every two triangulations
of the same closed $PL$-manifold can be connected by a finite sequence
of bistellar transformations ([P]). Now one can define the distance
$d_{Bist}(T_1,T_2)$ on $T(M)$ as the minimal number of bistellar
transformations required to transform $T_1$ into $T_2$.
\begin{Thm} For each $4$-dimensional closed PL-manifold $M$ and each
positive integer value of $m$ there
exist arbitrarily large values of $N$ and two triangulations $T_1$, $T_2$
with $\leq N$ simplices such that $d_{Bist}(T_1,\ T_2) >\exp_m(N)$.
(In other words, $T_1$ and $T_2$ cannot be connected by any sequence
of less than $\exp_m(N)$ bistellar transformations).
\end{Thm}
A stronger version of this theorem had been proven in [N3] for all manifolds
of dimension greater than four as well as all four-dimensional manifolds that can be represented as
a connected sum with $k$ copies of $S^2\times S^2$, where the value of
$k$ can be chosen as $14$ using
[Sh1], [Sh2]. Note that results of [N3] and, especially, Theorem 1.4
for $M=S^4$
have potential implications for
four-dimensional Euclidean Quantum Gravity (see [N4] and references there).
\section{Proofs.} \label{Proofs}
{\bf 2.1. Balanced finite presentations of the trivial group.}
In [L] one of the authors has constructed a sequence of finite
balanced presentations of the trivial group. These finite presentations
have two generators and two relations. They can be described
as the Baumslag-Gersten group $B=<x, t\vert x^{x^t}=x^2>$
with an added second relation that depends on a parameter $n$.
(Here $a^b$ denotes $aba^{-1}$.)
To describe this relation note that there exists a sequence of
words $v_n$ in $B$ of length $O(n)$ representing $x^{E([\log_2\ n])}$,
where $E(m)$ denotes $2^{2^{\dots^2}}$ ($m$ times). Clearly, $v_n$
commutes with all powers of $x$. Words $w_n=[v_n,x]$ represent
the trivial element but one needs to apply the only
relation at least $E([\log_2 n]-const)$ times to establish this fact. (Thus, the Dehn function
of the one relator group $B$ grows faster than any tower of exponentials
of a fixed height of $n$, cf. [Ge], [Pl].)
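For orientation, the first few values of the tower function just defined are (directly from the definition)
$$E(1)=2,\quad E(2)=2^2=4,\quad E(3)=2^4=16,\quad E(4)=2^{16}=65536,\quad E(5)=2^{65536},$$
so, as the height $[\log_2 n]$ grows with $n$, the quantity $E([\log_2 n])$ eventually exceeds any tower of exponentials of a fixed height.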
The extra relation added to $B$ is
$[v_n, x^3][v_n, x^5][v_n, x^7]=t$, where $[a,b]$ denotes the commutator
$aba^{-1}b^{-1}$. The most important property of this sequence of groups
is that any representation of either $x$ or $t$ as a product of conjugates
of the two relators and their inverses will require at least $E([\log_2 n]-2)$
multipliers. Also, note that these finite presentations satisfy the Andrews-Curtis
conjecture. The importance of the last observation is in the fact that
when one constructs a representation complex $K$ of such a finite presentation
(that is, a $2$-complex
with one $0$-dimensional cell, two $1$-dimensional
cells corresponding to the generators and two $2$-dimensional cells
corresponding to the relators), embeds it in $\mathbb{R}^5$,
takes the boundary of a small neighborhood of the embedding, and smoothes
it out, one obtains not merely a smooth homotopy $4$-sphere that must be homeomorphic to $S^4$ by virtue of Freedman's proof of the $4$-dimensional
Poincar\'e conjecture, but a manifold that is diffeomorphic to $S^4$. This fact can be demonstrated without the $4$-dimensional Poincar\'e conjecture using instead
the fact that the operations in the Andrews-Curtis conjecture
correspond to certain diffeomorphisms of the underlying
manifold (``handle slidings"). A sequence of these diffeomorphisms
corresponding to handle slides will eventually result in the standard sphere
that corresponds to the representation $2$-complex of the trivial
finite presentation of the trivial group (cf. [BHP]).
The resulting smooth hyperspheres in $\mathbb{R}^5$ that
will be denoted by $S^4(v_n)$ can, after rescaling, be interpreted as elements of $I_{\varepsilon_n}(S^4)$
(for an appropriate $\varepsilon_n$) or $Al_1(S^4)$. We can also interpret them as elements of $Knot_{4,1}$ or $E_4$. Finally, we can construct
a hypersphere triangulated into flat simplices (instead of a
smooth hypersphere). It is easy to see that the number of
simplices will grow linearly with the length of the word $v_n$ in the
Baumslag-Gersten group that was used to construct, first, the balanced
finite presentation of the trivial group and, then, a $4$-dimensional sphere.
Similarly, in the smooth case, ${vol^{1\over 4}\over inj}$, ${vol^{1\over 4}\over r}$ and $\vert K\vert diam^2$
will be bounded above by an exponential function of
$const$ $n$ for some $const$ (in fact, one can ensure much better
bounds, but we do not need this).
These hypersurfaces in $\mathbb{R}^5$ constructed using
the balanced presentations of the trivial group introduced
in [L] will be used in the proofs of all our results.
But note that in this construction one can alternatively
use another family of balanced presentations
of the trivial group with similar properties
that were independently discovered by Martin Bridson.
We were not aware of his work until this paper
had almost been finished. But after [L]
appeared on the arXiv, Bridson e-mailed us and wrote that he had found
such finite presentations in 2003. Although they were mentioned
in his ICM-2006 talk ([B], p. 977), he has never published or posted any
details of his construction on the internet.
Two weeks after the appearance of [L] his preprint [B2] also appeared
on the arXiv.
Note that in his ICM-2006 talk Bridson expresses a hope
that such finite presentations of the trivial group
can be used to extend results of Nabutovsky
and Weinberger on the sublevel sets of diameter on moduli spaces
to dimension $4$ (which is something that we were not yet able to accomplish).
So, his work [B2] was also partially motivated by potential applications
that are similar in spirit to our results in this paper.
Even earlier, in the 90s, the second author attempted to prove the results
of this paper using balanced finite presentations of
the trivial group obtained from $B$ in the most obvious way, namely, by adding the second relation $w_n=t$. Yet he was not able
to verify that these balanced presentations have the desired properties.
{\bf 2.2. The filling length.} Following [N2]
we are going to use the following characteristic of simply-connected
closed Riemannian manifolds that measures how
``difficult" it is to contract closed curves.
We define it as the supremum over all closed curves $\gamma$ of the ratio
${fl(\gamma)\over length(\gamma)}$, where the
filling length $fl(\gamma)$ denotes the infimum
over all homotopies $H=(\gamma_t)_{t\in [0,1]}$, $\gamma_0=\gamma$,
contracting $\gamma$ to a point (=constant curve) $\gamma_1$ of the maximal
length $\sup_t\ length(\gamma_t)$ of the closed curves arising during
the homotopy $H$. We are going to denote this quantity by $Fl$ and regard
it as a functional on a considered space of (isometry classes) of
Riemannian metrics.
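In symbols, restating the definition just given:
$$fl(\gamma)\ =\ \inf_{H}\ \sup_{t\in[0,1]}\ length(\gamma_t),
\qquad
Fl\ =\ \sup_{\gamma}\ \frac{fl(\gamma)}{length(\gamma)},$$
where the infimum is taken over all homotopies $H=(\gamma_t)_{t\in[0,1]}$, $\gamma_0=\gamma$, contracting $\gamma$ to a point $\gamma_1$, and the supremum is taken over all (non-constant) closed curves $\gamma$ in the manifold.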
To see that $Fl<+\infty$ first note that all sufficiently short curves
$\gamma$ can be contracted to a point without length increase (and,
therefore, $fl(\gamma)=length(\gamma)$). On the other hand for long curves
$\gamma$ we can choose any point $z$ and, connecting $z$ with a sequence
of sufficiently close points on $\gamma$ by minimal geodesics, reduce the contraction
of $\gamma$ to consecutive contractions of triangles formed by a very short arc
of $\gamma$ and two minimal geodesics between $z$ and two very close
points on $\gamma$.
\varepsilon$, where $d$ is the diameter of the Riemannian manifold and $\varepsilon$ is arbitrarily small. This easily implies that ${fl(\gamma)\over length(\gamma)}
\longrightarrow 1$ as $length(\gamma)\longrightarrow\infty$. This fact
was first noticed by M. Gromov ([Gr]). (The existence of the supremum for closed curves
of length $\leq 2d+\varepsilon$ follows from the compactness of the set of
Lipschitz curves of length $\leq 2d+\varepsilon$ parametrized by the arclength.)
Gromov also introduced the term ``filling length" and the notation $fl$
(with a slightly different meaning than what we use here).
Note that $Fl$ can also be defined
for all simply-connected length spaces such that for some positive
$\varepsilon$ all closed curves of length $\leq \varepsilon$ can be contracted
to a point without length increase. So, in particular,
we can consider $Fl$ as a functional on the spaces of triangulations
of closed manifolds after we endow each simplex of the maximal dimension
by the metric of a regular flat simplex with side length $1$. (Actually, it is
easy to see that $Fl$ will not depend on the choice of side length here.)
Our observation is that if $S^4(v_n)$ is a (smooth or PL) sphere
constructed starting from the word $v_n$ in the Baumslag-Gersten group
as explained above (using either the idea from [L] or the idea from [B2]),
then:
\begin{Pro}
The value of $Fl_n=Fl(S^4(v_n))$ grows faster than any finite tower of exponentials of $n$.
\end{Pro}
{\bf Proof.} Indeed, if not, then we can prove that the area of van Kampen diagrams
for the generators of the finite presentation used to construct $S^4(v_n)$ will also be bounded by towers of exponentials
of $n$ of a fixed height. In the proof below we use the same notation $const$
for different constants that can be, in principle, evaluated.
The idea is that one can choose a way to represent each closed curve $\gamma$
of length $\leq x$ by a word of length $\leq const\ n\ x$
so that if two closed
curves $\gamma_1$ and $\gamma_2$ are $const$-close, then
the corresponding words can be connected by a sequence of at most
$const\ x$ relations. In order to achieve this we first project $\gamma$
to the embedding of the representation complex of the balanced presentation
in $\mathbb{R}^5$. Recall that $S^4(v_n)$ is the smoothed-out boundary
of a small tubular neighborhood of the representation complex, so this step
will increase the length by at most $const\ n$ factor. Denote the projection
of $\gamma$ to the embedded representation complex by $\tilde\gamma$.
Note that if $D$ is a Riemannian $2$-disc one can choose
a way to replace each arc with endpoints on $\partial D$ by a shortest
arc of $\partial D$ with the same endpoints. An ambiguity in the situation
when the distance between the endpoints along the boundary is equal to ${\vert\partial D\vert\over 2}$ leads to a discontinuity, and arcs corresponding
to two very close curves can together almost form the boundary of $D$.
Consider now one of the two $2$-cells in the representation complex
and a connected smooth arc $A$
in its interior with end points on its boundary.
The boundary of the $2$-cell has some self-intersections that appeared
as the result of taking the quotient map, when the cell was attached
to the $1$-skeleton. Yet we can canonically lift $A$ to the $2$-disc
with the same Riemannian metric in the interior and with nonself-intersecting
boundary, then extend $A$ to the boundary, replace it by a shortest
arc of the boundary with the same endpoints, and, finally, project this arc back
to the $1$-skeleton of the embedded representation complex.
Now take each component of the intersection of
$\tilde\gamma$ with $e_i$, $i=1,2$ and replace it by a minimal
arc of the boundary $\partial e_i$ with the same endpoints as
explained above. This
will result in a length increase by at most $const\ n$ factor.
The mentioned ambiguity in the case when the points of intersection
with $\partial e_i$ can be connected in the lift of $\partial e_i$
by two arcs of length ${\vert\partial e_i\vert\over 2}$ is
a source of discontinuity of this process.
If such a discontinuity arises for a pair of close curves $\gamma_1$, $\gamma_2$, then
the resulting arcs in the $1$-skeleton of the
representation complex will (almost) form the boundary of $e_i$.
Also, note that such ambiguity
(discontinuity) can arise only for a sufficiently long arc of $\gamma\bigcap e_i$, and, therefore, the number of such occurrences for a
closed curve $\gamma$ is bounded by
$const\ x$. Now note that once $\gamma$ is replaced by a closed curve
in the $1$-skeleton of the representation complex, we can assign to it
a word, and conclude that these words for sufficiently close curves can be transformed one into the other by an application of at most $const\ x$
relations.
Now we observe that a well-known and easy argument implies that
for each $\delta>0$ there
exists a $\delta$-net in the space of closed curves of length
$\leq x$ in the constructed Riemannian manifold of cardinality bounded
by $\exp(const\ {x\over\delta})$.
Therefore,
each closed curve can be contracted to a point by a discretized
homotopy that consists of at most $\exp(const\ x)$ ``jumps" of
``length" $\leq const$ so that each ``jump" corresponds to a sequence of
not more than $const\ x$ applications of the relations for words
corresponding to the curves.
Finally, let $\gamma_0$ be a closed curve that represents one of the generators of
the considered finite presentation. It can be contracted to a point
through curves of length $\leq Fl_n\cdot length(\gamma_0)$. Therefore,
one obtains an at most exponential in $const\ Fl_n$
upper bound for the number of
the relations required to demonstrate that the generator is, indeed, trivial.
This completes the proof of the proposition.
Let $M_0$ be any closed simply connected
$4$-dimensional Riemannian manifold.
We can form a Riemannian connected sum of $M_0$ with the spheres $S^4(v_n)$
in an obvious way and observe that $Fl$ for resulting Riemannian
manifolds grows faster than any tower of exponentials of $n$ of a fixed
height.
{\bf 2.3. Proof of theorems.} It is now easy
to prove the main theorems for simply connected manifolds. In order to prove Theorem 1.4
recall that each bistellar transformation leads to a bi-Lipschitz homeomorphism
of the underlying simplicial complexes regarded as metric spaces, where each face of dimension four
is given the metric of the regular $4$-simplex with the side length $1$.
The Lipschitz constants for the map and its inverse do not exceed an absolute
constant $const$. Now note that in this situation $Fl$ cannot change by more than
the factor $const^2$. The value of $Fl$ for the boundary $\partial\Delta^5$
of the regular
$5$-simplex (endowed with the standard metric) is $1$. Therefore, the value
of $Fl$ on each triangulation of $S^4$ that can be connected with $\partial\Delta^5$ by at most $M$ bistellar transformations is at most $const^{2M}$.
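Spelled out, the previous bound gives, for every triangulation $T$ of $S^4$,
$$d_{Bist}(T,\ \partial\Delta^5)\ \geq\ \frac{\log Fl(T)}{2\log const}.$$
Since, by Proposition 2.1, $Fl$ of the triangulated spheres $S^4(v_n)$ grows faster than any tower of exponentials of $n$, while their number of simplices $N$ grows only linearly in $n$, the right-hand side eventually exceeds $\exp_m(N)$ for any fixed $m$.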
This fact immediately implies the assertion of the theorem.
The proofs of the first three theorems are similar. The idea is to prove
that if the assertion does not hold, then $Fl_n$ is bounded above
by a tower of exponentials of $n$ of a fixed height, and this would
contradict the assertion of Proposition 2.1.
One can follow [N2] to finish the proof of Theorem 1.1. One starts from
the observation that if two Riemannian structures in $I_\varepsilon(M)$
are ${\varepsilon\over 8.5}$-close (in the Gromov-Hausdorff metric), then
the values of $Fl$ can differ by a factor that does not exceed $1000000$
(Lemma 2 in [N2]). (The idea is that if $M_1$ and $M_2$ are close
Riemannian manifolds and one can contract any closed curve in $M_2$ through
not too long curves, one
can try to contract any closed curve $\gamma$ in $M_1$ by 1) discretizing
it, moving points to the closest points in $M_2$ and connecting them
by minimal geodesics, thereby obtaining a closed curve $\gamma_2$
that can be regarded as a ``transfer" of $\gamma$ to $M_2$; 2) Contracting
$\gamma_2$ through not too long curves in $M_2$; 3) Discretizing this
homotopy and transferring closed curves in the discretization back to
$M_1$; 4) Connecting the transfers of the
nearest closed curves by homotopies in $M_1$, thus, obtaining a homotopy
contracting $\gamma$ in $M_1$.)
The second observation used in [N2] is that one can use the
well-known proof of the fact that $I_{\varepsilon}(M)$ is precompact to
give an explicit upper bound of the form $\exp({const\over \varepsilon^9})$
(in the four-dimensional case) for the cardinality of an $\varepsilon/20$-net in $I_{\varepsilon}(M)$. This estimate can then be used to conclude that any two Riemannian structures in the same connected component of $I_{\varepsilon}(M)$ can be connected by a sequence
of $\varepsilon/9$-long ``jumps", so that the number of jumps does not
exceed $\exp({const \over \varepsilon^9})$ (see the proof of Lemma 3
in [N2]). Combining this estimate with the previous observation
we see that the ratio of values of $Fl$
at any two elements of the same connected
component of $I_{\varepsilon}(M)$
is bounded by a double exponential of a power of
${1\over \varepsilon}$ (and, thus, by a triple exponential
function of ${1\over\varepsilon}$ for all sufficiently
small values of $\varepsilon$). This estimate can be generalized to a stronger
equivalence relation on $I_{\varepsilon}(M)$ than being in the same connected component,
namely, the transitive closure of the relation ``to be ${\varepsilon\over 9}$-close in the Gromov-Hausdorff metric".
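To make the double-exponential bound explicit (using the factor of at most $10^6$ per jump and the bound $\exp({const\over \varepsilon^9})$ on the number of jumps): for two such Riemannian structures $\mu_1,\ \mu_2$,
$$\frac{Fl(\mu_1)}{Fl(\mu_2)}\ \leq\ \big(10^6\big)^{\exp({const\over \varepsilon^9})}\ =\ \exp\Big(6\ln 10\cdot\exp\big({const\over \varepsilon^9}\big)\Big)\ \leq\ \exp\Big(\exp\big({const'\over \varepsilon^9}\big)\Big)$$
for an appropriate constant $const'$ and all sufficiently small $\varepsilon$.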
A comparison of these triply exponential upper bounds with lower bounds
for $Fl$ that grow faster than any tower of exponentials of a fixed height
of $n$ yields the assertion of Theorem 1.1.
As noted above, Theorem 1.2 would follow from the disconnectedness
of the sublevel sets $diam^{-1}((0,x])$ of the diameter on $Al_1(M)$; the
injectivity radius is bounded below by $\exp(-const\ x)$ on these sets.
Now one can use the same argument as in the proof of Theorem 1.1.
To prove Theorem 1.3 we can rescale the hypersurface
to have the value of the volume
equal to $1$. Now note that the definition of $\kappa$ implies that
$\kappa\geq \vert k\vert$, where $k$ denotes any of the
principal curvatures of the hypersurface.
This implies the obvious upper bound for the
absolute values of its sectional curvatures, when it is
regarded as a Riemannian manifold. It is not difficult to establish an
upper bound for the diameter of the hypersurface in the inner
metric (which immediately follows from Theorem 1.1 in [T]). Now
the Cheeger inequality implies an explicit lower bound for
the injectivity radius of the hypersurface that behaves as
$\exp(-const\ \kappa^{const})$, and we can prove the disconnectedness
of sublevel sets of $\kappa$ for an unbounded sequence of values
of $x$ exactly as we proved the disconnectedness of $I_{\varepsilon}(M)$.
The proofs of Theorems 1.1, 1.2 and 1.4
in the case of a nonsimply connected manifold can be based
on the same ideas. We form a Riemannian connected sum of $M$ endowed
with some Riemannian metric with $S^4(v_n)$. Now $Fl$ is not defined,
but we can look at how much the length of the closed
curves corresponding to the
generators of the balanced presentations must be increased before they
can be contracted to a point. An argument similar to the proof of
Proposition 2.1 implies that the growth of this quantity with $n$ is
faster than any tower of exponentials of a fixed height. On the other
hand the arguments in this section can be used to demonstrate
that the connectedness
assumptions imply a much slower growth of this quantity.
{\bf Acknowledgements.} This work has been partially supported by NSERC
Accelerator and Discovery Grants
of the second named author (A.N.).
{\bf References.}
\noindent
[BN] V. Berestovskii, I. Nikolaev, ``Multidimensional generalized Riemannian
spaces", in ``Geometry IV", ed. Yu. G. Reshetnyak, Encyclopedia of
Mathematical Sciences, vol. 70, Springer, 1993.
\par\noindent
[BHP] W. Boone, W. Haken, and V. Poenaru, ``On Recursively Unsolvable Problems in Topology and Their Classification", Contributions to Mathematical Logic (H. Arnold Schmidt, K. Sch\"utte, and H. J. Thiele, eds.), North-Holland, Amsterdam, 1968.
\par\noindent
[B] M. Bridson, ``Non-positive curvature and complexity for finitely presented groups", Proceedings of the ICM, Madrid, Spain, 2006, 961-987,
European Mathematical Society, 2006.
\par\noindent
[B2] M. Bridson, ``The complexity of balanced presentations and the Andrews-Curtis
conjecture", arXiv:1504.04261
\par\noindent
[Ge] S. M. Gersten, Dehn functions and $l_1$-norms of finite presentations, Algorithms and Classification in Combinatorial Group Theory (Berkeley, CA, 1989) (G. Baumslag, C. Miller, eds.), Math. Sci. Res. Inst. Publ. 23, Springer-Verlag (1992),
195-224.
\par\noindent
[Gr] M. Gromov, ``Metric structures for Riemannian and non-Riemannian spaces", Birkhauser, 1998.
\par\noindent
[L] B. Lishak, ``Balanced finite presentations of the trivial group",
arXiv:1504.00418.
\par\noindent
[N1] A. Nabutovsky, ``Non-recursive functions, knots ``with thick ropes",
and self-clenching ``thick" hypersurfaces", Comm. Pure Appl. Math. 48(1995), 381-428.
\par\noindent
[N2] A. Nabutovsky, ``Disconnectedness of sublevel sets of some Riemannian functionals", Geom. Funct. Analysis (GAFA), 6(1996), 703-725.
\par\noindent
[N3] A. Nabutovsky, ``Geometry of the space of triangulations of a compact manifold", Comm. Math. Phys. 181(1996), 303-330.
\par\noindent
[N4] A. Nabutovsky, ``Combinatorics of the space of Riemannian structures and
logic phenomena of Euclidean Quantum Gravity", in ``Perspectives in Riemannian
Geometry", ed. by V. Apostolov et al., CRM Proceedings and Lecture Notes,
vol. 40, 223-248, AMS, Providence, RI, 2006.
\par\noindent
[N5] A. Nabutovsky, ``Morse landscapes of Riemannian functionals and related
problems", Proceedings of ICM-2010, vol. 2, 862-881, Hindustan Book Agency,
New Delhi, 2010.
\par\noindent
[NW1] A. Nabutovsky, S. Weinberger, ``Variational problems for Riemannian functionals and arithmetic groups", Publications d'IHES, 92(2000), 5-62.
\par\noindent
[NW2] A. Nabutovsky, S. Weinberger, ``The fractal nature of Riem/Diff I",
Geom. Dedicata 101(2003), 1-54.
\par\noindent
[Ni] I. Nikolaev, ``Bounded curvature closure of the set of compact
Riemannian manifolds", Bull. of the AMS 24(1992), 171-177.
\par\noindent
[M] J. Milnor, ``Lectures on the $h$-cobordism theorem", Princeton University Press, Princeton, NJ, 1965.
\par\noindent
[P] U. Pachner, ``P.L. homeomorphic manifolds are equivalent by elementary
shellings", Europ. J. Combinatorics 12(1991), 129-145.
\par\noindent
[Pl] A. N. Platonov, An isoperimetric function of the Baumslag-Gersten group, Moscow Univ. Math.
Bull. 59 (2004), no. 3, 12-17.
\par\noindent
[Sh1] M. A. Stan'ko, ``On the Markov theorem on algorithmic
nonrecognizability of manifolds", J. of Mathematical Sci., 146(2007), 5622-5623.
\par\noindent
[Sh2] M. A. Stan'ko, ``The Markov theorem and algorithmically
unrecognizable combinatorial manifolds", Izv. Akad. Nauk SSSR, Ser. Math., 68(2004), 207-224.
\par\noindent
[T] P. Topping, ``Relating diameter and mean curvature for submanifolds of
Euclidean space", Comment. Math. Helv. 83(2008), 539-546.
\par\noindent
[W] S. Weinberger, ``Computers, rigidity and moduli", Princeton University Press, 2004.
\end{document}
|
\begin{document}
\title{A Scalable Semidefinite Relaxation Approach to Grid Scheduling}
\begin{abstract}
Determination of the most economic strategies for supply and transmission of electricity is a daunting computational challenge. Due to theoretical barriers, so-called NP-hardness, the amount of effort to optimize the schedule of generating units and route of power, can grow exponentially with the number of decision variables. Practical approaches to this problem involve legacy approximations and ad-hoc heuristics that may undermine the efficiency and reliability of power system operations, that are ever growing in scale and complexity. Therefore, the development of powerful optimization methods for detailed power system scheduling is critical to the realization of smart grids and has received significant attention recently. In this paper, we propose for the first time a computational method, which is capable of solving large-scale power system scheduling problems with thousands of generating units, while accurately incorporating the nonlinear equations that govern the flow of electricity on the grid. The utilization of this accurate nonlinear model, as opposed to its linear approximations, results in a more efficient and transparent market design, as well as improvements in the reliability of power system operations. We design a polynomial-time solvable third-order semidefinite programming (TSDP) relaxation, with the aim of finding a near globally optimal solution for the original NP-hard problem. The proposed method is demonstrated on the largest available benchmark instances from real-world European grid data, for which provably optimal or near-optimal solutions are obtained.
\end{abstract}
\begin{center} July 2017 \end{center}
\BCOLReport{17.03}
The development of systematic algorithms for optimal allocation and scheduling of resources can be traced back to Nobel Laureate Wassily Leontief's seminal work on input-output analysis \cite{leontief1963multiregional}, and the development of simplex algorithm by George Dantzig \cite{dantzig2016linear}, which was driven by the United States military planning problems during World War II. Ever since, various application domains have relied heavily on optimization theory as a tool for design, planning, and decision-making.
An economically significant application is the operation of power grids, with the aim of efficient and secure supply of electricity.
Grid operations are currently planned with legacy frameworks that are far from producing near-optimal solutions at the scale and detail required by the next-generation grids.
In the past decade, various potential approaches have been identified for enhancements in grid operation through more accurate models for the flow of power, control of network topology, and taking uncertainties of demand and renewable generation into consideration.
The realization of the aforementioned directions is expected to offer considerable improvements in the efficiency and reliability of the power grids \cite{cain2012history}. However, due to the ever-growing size and scope of grids, the scalability of algorithms for solving detailed and accurate models remains as the primary bottleneck \cite{national2016analytic,clack2017evaluation,giannakis2013monitoring}.
\begin{figure*}
\caption{Day-ahead scheduling of a notional power system, with three vertices and two generating units. (A) The optimal operational strategy based on the available forecast of the demand and wind generation. The shaded period represents peak hours. (B) Off-peak configuration of the network, during which the expensive generator is not committed. (C) Peak configuration, during which the expensive generator contributes to accommodate transmission limits.}
\label{fig:UC_OPF}
\end{figure*}
Building an optimal day-ahead plan for the operation of a nationwide grid is a daunting challenge, in part, due to the presence of thousands of generating units, whose on/off status need to be determined.
Algorithms for finding the most economic plan with binary on/off decisions give rise to massive search trees. Another challenge is posed by the nonlinearity of the physical laws describing the flow of electricity.
This paper presents the first computational method that is capable of solving day-ahead power system scheduling problems of realistic size, that are built upon accurate physical models.
More specifically, the proposed method accomplishes the integrated optimization of two fundamental problems, faced by system operators and utilities on a daily basis:
\begin{enumerate}
\item {\bf Unit Commitment (UC):} The problem of scheduling generating units throughout a planning horizon, based on demand forecasts and technological constraints.
\item {\bf Optimal Power Flow (OPF):} The problem of determining an operating point for the network that delivers power from suppliers to consumers as economically as possible, subject to physical constraints.
\end{enumerate}
Advanced algorithms for UC and OPF can contribute to the efficiency and transparency of power markets by improving operational decisions and pricing mechanisms \cite{lipka2017running,7778217}.
Figure \ref{fig:UC_OPF} exemplifies the optimal unit commitment plan for operating a notional power grid for the day-ahead, based on the available forecast of the demand and renewable generation. A cheap/slow unit provides the base generation for the entire planning horizon, while an expensive/fast generating unit is committed during the peak hours to avoid the violation of transmission limits.
The contributions of this paper are twofold. First the unit commitment problem is convexified via a family of linear and third-order semidefinite programming (TSDP) constraints. This convex relaxation achieves near-globally optimal solutions for UC problems with nearly $100,000$ binary variables. Second, a family of TSDP constraints are introduced to relax the power flow equations. The combination results in a tractable method for solving coupled UC-OPF optimization problems. The proposed method offers unprecedented scalability and improves upon the existing literature, in terms of the practical feasibility and efficiency of solutions, by allowing the joint optimization of commitment and power flow decisions based on an accurate nonlinear model for power grids.
\subsection{Semidefinite Programming Relaxation}
Since a wide range of physical phenomena and dynamical systems can be modeled by polynomial functions, polynomial optimization has received significant research interest.
Our methodology is aligned with a popular framework for the study of polynomial optimization, that involves convex hull characterization of algebraic varieties through hierarchies of semidefinite programs (SDPs) \cite{lasserre2001global,lasserre2006convergent}.
Performance guarantees and extensions of SDP hierarchies have since been investigated by several papers \cite{laurent2009sums,gouveia2010theta,sojoudi2014exactness,belotti2015conic},
as well as their applications in various areas such as quantum information theory \cite{tomamichel2016quantum,nagy2016epr}, compressed sensing \cite{javanmard2016phase,candes2015phase}, graph theory \cite{bandeira2016inference,aflalo2015convex}, statistics \cite{chandrasekaran2013computational}, operation of infrastructure networks \cite{lavaei2012zero,coogan2017offset}, and other branches of optimization theory \cite{park2015semidefinite}. The primary challenge for the application of SDP hierarchies beyond small-scale instances is the rapid growth of dimension, which necessitates a detailed study of sparsity and structural patterns to boost the efficiency
\cite{muramatsu2003new,kim2003second,kim2003exact,bao2011semidefinite,natarajan2013penalized}.
\begin{table*}[!hp]
\setlength{\tabcolsep}{2pt}
\centering \small
\caption{The performance of the TSDP relaxation algorithm for 24-hour horizon scheduling of benchmark systems with hourly epochs using the linear DC and nonlinear AC models.}\label{table_res_AC}
\scalebox{0.8}{
\begin{tabular}{lr||rrr|rr||rrr}
\hline
& & \multicolumn{5}{c||}{Linear DC Model} & \multicolumn{3}{c}{Nonlinear AC Model} \\
\hline
Test Case\!\! & Number & Ratio of TSDP & TSDP & TSDP & CPLEX & CPLEX & Ratio of TSDP & TSDP & TSDP \\
& of Units & Inexact Binaries & Gap & Time & Gap & Time & Inexact Binaries & Gap & Time \\
\hline
IEEE 118\!\! & $54$ & $0~/~1,296$ & $0\%$ & $3\mathrm{s}$ & $0\%$ & $28\mathrm{s}$ & $0~/~1,296$ & $0.01\%$ & $11\mathrm{s}$\\
IEEE 300\!\! & $69$ & $0~/~1,656$ & $0\%$ & $3\mathrm{s}$ & $0\%$ & $67\mathrm{s}$ & $0~/~1,656$ & $0.34\%$ & $41\mathrm{s}$\\
\hline
PEGASE 1354\!\! & $193$ & $28.4~/~4,632$ & $0.09\%$ & $18\mathrm{s}$ & $8.57\%$ & $10,800\mathrm{s}^\dagger$ & $26.0~/~4,632$ & $1.24\%$ & $486\mathrm{s}$ \\
PEGASE 2869\!\! & $392$ & $24.5~/~9,408$ & $0.01\%$ & $35\mathrm{s}$ & $-$ & $10,800\mathrm{s}^\dagger$ & $33.8~/~9,408$ & $0.42\%$ & $2,175\mathrm{s}$ \\
PEGASE 9241\!\! & $1,153$ & $75.4~/~27,672$ & $0.05\%$ & $137\mathrm{s}$ & $-$ & $10,800\mathrm{s}^\dagger$ & $226.5~/~27,672$ & $2.73\%$ & $56,351\mathrm{s}$ \\
PEGASE 13659\!\! & $4,077$ & $29.5~/~97,848$ & $0.22\%$ & $266\mathrm{s}$ & $-$ & $10,800\mathrm{s}^\dagger$ & $995.3~/~97,848$ & $1.21\%$ & $116,064\mathrm{s}$ \\
\hline
\multicolumn{9}{l}{}
\\ \\
\multicolumn{9}{l}{\scriptsize $~^{\dagger}$ Solver is terminated after $3$ hours.}
\end{tabular}
}
\end{table*}
\subsection{Review of Unit Commitment}
Economic scheduling of power generation units has been extensively investigated since the early 1960s, to handle predictable demand variations throughout the day-ahead. Extensions of the problem have later been studied to capture practical limits of network and security requirements, among other considerations. The reader is referred to \cite{allen2012price} for a detailed survey of the conventional formulations and computational methods for unit commitment.
Recent policy and modeling proposals for electricity market operation and unit commitment include stochastic and robust optimization frameworks, under load and renewable generation uncertainty \cite{bitar2012bringing,bertsimas2013adaptive,phan2013optimization,yu2015impacts,lorca2016multistage,zhao2016unit,sundar2017modified}.
Additionally, incorporating other operational decisions into a comprehensive UC problem has been envisioned with a goal of co-optimizing multiple aspects of day-ahead planning, such as the optimal power flow \cite{bai2009semi,castillo2016unit,lipka2017running}, network topology control \cite{hedman2010co}, demand response \cite{wu2013hourly}, air quality control \cite{kerl2015new}, as well as scheduling of deferrable loads \cite{subramanian2013real}.
From a computational perspective, unit commitment algorithms rely on bounds from polynomial-time solvable relaxations for pruning search trees and certifying closeness to global optimality. Such relaxations can be generated through partial characterization of the convex hull of the feasible solutions \cite{ostrowski2012tight,lee2004min,damci2016polyhedral,geng2017alternative}. Additionally, in the presence of nonlinear price functions, conic inequalities can be adopted to strengthen the convex relaxations \cite{akturk2009strong,frangioni2009computational,bai2009semi,jabr2013rank}. Recently, a strong convex relaxation is proposed in \cite{fattahi2016conic} through a combination of reformulation-linearization and semidefinite programming techniques, which works very well on small instances of the unit commitment problem. Distributed methods are investigated in \cite{kargarian2015distributed} and \cite{papavasiliou2015applying} with the aim of leveraging high-performance computing platforms for solving large-scale unit commitment problems. Nevertheless, the improvements in run-time are reported to diminish with more than $15$ parallel workers \cite{papavasiliou2013comparative}. In terms of scalability, the proposed approach here significantly improves upon the above-referenced computational methods in the number of generating units as well as network size; notwithstanding, that our numerical experiments are conducted on a workstation with a single CPU.
\subsection{Review of Optimal Power Flow}
The optimal power flow problem is concerned with the determination of power flows and injections across the grid, for the optimal transmission and distribution of electricity. An accurate formulation of power flow in a transmission line includes nonconvex nonlinear equations, that substantively increase the computational complexity of the optimization problem. Consequently, the development of a framework for the joint optimization of UC and OPF has remained an open problem with significant economic impact as highlighted in \cite{baldick2005design}.
To this end, one of the most promising directions is based on the semidefinite programming relaxation of the power flow equations \cite{lavaei2012zero}. This approach to OPF has since been widely investigated and improved upon, through geometric analysis of feasible regions \cite{zhang2013geometry,kocuk2016inexactness,coffrin2016strengthening,chen2016bound,chen2017spatial}, and under certain graph-theoretic assumptions \cite{madani2015convex,gan2015exact}. Various studies have leveraged the sparsity of power networks for reducing the computational burden of solving semidefinite relaxations and developing distributed frameworks
\cite{molzahn2013implementation,andersen2014reduced,bose2015equivalent,madani2016promises,guo2016case,zhang2017distributed}.
More recently, other approaches such as homotopy continuation \cite{mehta2016numerical}, for finding all solutions to power flow equations, and dynamic programming, in the presence of discrete variables \cite{dvijotham2017graphical}, have been studied. Additionally, several extensions of OPF have been recently studied under more general settings, to address considerations such as the security of operation \cite{madani2016promises}, robustness \cite{dorfler2016breaking}, energy storage \cite{marley2016solving}, and the uncertainty of generation \cite{anese2017chance}. The reader is referred to \cite{capitanescu2016critical} for a detailed survey on OPF.
\section*{Experimental Results}
This section gives a brief summary of the experiments with
the proposed third-order semidefinite programming (TSDP) approach on large-scale instances of day-ahead scheduling.
The goal is to determine the least-cost dispatch, that is, the on/off status and the amount of power produced by the generating units throughout the day ahead for meeting the load (demand) subject to the network transmission and technological constraints.
We consider
real-world benchmark grids based on IEEE and European data with up to $13,659$ buses (vertices) and $4,077$ generating units. The
planning horizon is divided into $24$ hourly intervals. For the largest benchmark, the model includes almost 100,000 binary decision variables.
Table \ref{table_res_AC} presents the average results for ten Monte Carlo demand simulations for each benchmark network.
The computations are performed on a workstation with a single CPU.
The details of data generation and experiments are discussed in the Methods and Materials section.
\subsection{Linear DC Model}
We first consider the approximate linear DC model, which is typically used by the electric power industry to formulate transmission of power in day-ahead scheduling problems.
For all experiments, the proposed TSDP relaxation yields integer values for more than $99.5\%$ of binary variables. Moreover, the objective values of the recovered (feasible) scheduling decisions are provably within $0.22\%$ of global optimality for all benchmarks. The average performance of the TSDP relaxation, based on the DC model, is reported in columns three, four and five of Table \ref{table_res_AC}. Even for the largest benchmark, near-optimal solutions are obtained within a few minutes.
For comparison, the results with the commercial mixed-integer solver CPLEX, which is widely used by the system operators, are provided in columns six and seven. Although small-scale problems, based on IEEE data, are solved fast by CPLEX, no feasible solution is found after three hours of computation for the largest three benchmarks.
\subsection{Nonlinear AC Model}
If an accurate nonlinear AC model for the flow of electricity is adopted, CPLEX, Gurobi and other commonly used off-the-shelf optimizers cannot be employed due to the presence of non-convex power constraints. For the largest benchmark system in Table \ref{table_res_AC}, the aforementioned nonlinear model results in a mixed-integer nonlinear optimization problem with $97,848$ binary variables, as well as $983,448$ non-convex quadratic constraints. For all experiments based on this network, our algorithm has been able to find solutions (with maximum power mismatch within $10^{-5}$ per-unit) that are on average within $2.73\%$ from global optimality. Moreover, for small- to medium-sized cases, all solutions are obtained in less than $40$ minutes and within $1.24\%$ gap from global optimality.
\section*{Notation}
The following notation is used in this paper: Bold letters are used for vectors and matrices, while italic letters with subscript indices refer to the entries of a vector or matrix. $\mathbb{R}$, $\mathbb{C}$, and $\mathbb{H}_n$ denote the sets of real numbers, complex numbers, and $n\times n$ Hermitian matrices, respectively. The letter ``$i$'' is reserved for the imaginary unit. The superscripts $(\cdot)^\ast$ and $(\cdot)^\top$ represent the conjugate transpose and transpose operators, respectively. The notations $\mathrm{real}\{\cdot\}$, $\mathrm{imag}\{\cdot\}$, and $|\cdot|$ represent the real part, imaginary part, and element-wise absolute value of a scalar or matrix, respectively. The notations $\mathbf{X}_{\bullet,k}$ and $\mathbf{X}_{l,\bullet}$ refer to the $k$-th column and the $l$-th row of matrix $\mathbf{X}$, respectively. Additionally, $\mathrm{diag}\{\mathbf{X}\}$ denotes the vertical vector whose entries are given by the diagonal elements of $\mathbf{X}$. The notation $\mathbf{X}\succeq0$ means that $\mathbf{X}$ is Hermitian and positive semidefinite. The notation $\mathbf{X}\otimes\mathbf{Y}$ refers to the Kronecker product of the matrices $\mathbf{X}$ and $\mathbf{Y}$.
Given an $n\times n$ matrix $\mathbf{X}$ and $\mathcal{S}_1,\mathcal{S}_2\subseteq\{1,\ldots,n\}$, define $\mathbf{X}[\mathcal{S}_1,\mathcal{S}_2]$ to be the $|\mathcal{S}_1|\times|\mathcal{S}_2|$ submatrix of $\mathbf{X}$ with rows and columns from $\mathcal{S}_1$ and $\mathcal{S}_2$, respectively. Throughout the paper, non-positive subscript indices refer to known initial values.
\section*{Power System Scheduling}
The power system scheduling problem seeks to find the most economic operation plan for a set of generating units throughout a time horizon to meet the demand for electricity, subject to technological constraints. Let $\mathcal{G}=\{1,2,\ldots,G\}$ denote the set of generating units, whose schedule and the amount of contribution to the grid are to be determined. In order to formulate the problem as a static optimization, it is common practice to divide the planning horizon into a set of discrete time intervals $\mathcal{T}=\{1,2,\ldots,T\}$, e.g., hourly time slots for day-ahead scheduling.
Let $x_{g,t}\in\{0,1\}$ be a binary variable indicating whether or not the generating unit $g\in\mathcal{G}$ is committed for production in the time slot $t\in\mathcal{T}$. If $x_{g,t}=1$, the unit is active and expected to produce power within its capacity limitations; otherwise, no power is produced by $g$ in this time slot. Additionally, let $c_{g,t}$ be the production cost of unit $g$ during the interval $t$. There are two types of power exchange between generating units and loads in a power system: i) active power, and ii) reactive power. Active power is the actual product that is traded to meet the demand, whereas reactive power is a technical term representing the oscillating exchanges between generators and loads that help maintain voltages. Let $p_{g,t}$ and $q_{g,t}$ be the amounts of active and reactive power, respectively, produced by unit $g\in\mathcal{G}$ in time interval $t\in\mathcal{T}$. The overall power injection by generating unit $g\in\mathcal{G}$ can be expressed as the complex number $p_{g,t} + i q_{g,t}$, which is referred to as {\it complex power}, where ``$i$'' denotes the imaginary unit.
A distinctive feature of our approach is the ability to jointly optimize unit commitment and power flow for an accurate model of the grid. In this paper, the constraints of this large-scale optimization problem are divided into two classes:
\begin{enumerate}
\item {\bf Unit constraints} that model the capacity and technological limitations of generating units, and
\item {\bf Network constraints} that model the laws of physics governing the flow of electricity across the grid, such as conservation of energy, as well as the transmission capacity limitations and demand requirements, throughout the planning horizon.
\end{enumerate}
Using the notation introduced above, we formulate power system scheduling as the following optimization problem:
\begin{subequations}\label{PSS}\begin{align}
& \underset{
\begin{subarray}{c}
\\
\mathbf{x},\mathbf{p},\mathbf{q},\mathbf{c}\in\mathbb{R}^{G\! \times\! T}\!\!\!\!\!\!\!\!\!\!
\end{subarray}
}{\text{minimize}}
& & \sum_{g=1}^{G}{\sum_{t=1}^{T}{c_{g,t}}} &&&& \label{PSS_obj}\\
& \text{subject to}
& & (\mathbf{x}^{\top}_{g,\bullet},\mathbf{p}^{\top}_{g,\bullet},\mathbf{q}^{\top}_{g,\bullet},\mathbf{c}^{\top}_{g,\bullet})\in\mathcal{U}_g &&&& \hspace{-0.4cm}\forall g\in\mathcal{G}, \label{cons_unit}\\
& & & (\mathbf{p}_{\bullet, t},\mathbf{q}_{\bullet, t})\in\mathcal{N}_t &&&& \hspace{-0.36cm}\forall t\in\mathcal{T},\label{cons_network}
\end{align}\end{subequations}
with respect to decision variables
$\mathbf{x}\equiv [x_{g,t}]$,
$\mathbf{c}\equiv [c_{g,t}]$,
$\mathbf{p}\equiv[p_{g,t}]$ and
$\mathbf{q}\equiv[q_{g,t}]$. Optimization problem [\ref{PSS_obj}]--[\ref{cons_network}] minimizes the overall cost of producing power subject to unit and network constraints [\ref{cons_unit}] and [\ref{cons_network}], respectively. For every generating unit $g\in\mathcal{G}$, the quadruplet $(\mathbf{x}^{\top}_{g,\bullet},\mathbf{p}^{\top}_{g,\bullet},\mathbf{q}^{\top}_{g,\bullet},\mathbf{c}^{\top}_{g,\bullet})\in\mathbb{R}^{T\times 4}$ characterizes the scheduling decision, throughout the planning horizon, while for every time slot $t\in\mathcal{T}$, the pair $(\mathbf{p}_{\bullet, t},\mathbf{q}_{\bullet, t})\in\mathbb{R}^{G\times 2}$ accounts for the generation profile. The price functions and technological limitations of generating units are described by the sets $\mathcal{U}_1,\mathcal{U}_2,\ldots,\mathcal{U}_G\subset\mathbb{R}^{T\times 4}$, while the demand information and network data across the time slots are given by $\mathcal{N}_1,\mathcal{N}_2,\ldots,\mathcal{N}_T\subset\mathbb{R}^{G\times 2}$.
The binary unit commitment decisions and nonlinearity of network equations are the primary sources of computational complexity for solving the problem [\ref{PSS_obj}]--[\ref{cons_network}].
As a result, there has been a huge body of research devoted to finding convex relaxations for power system scheduling and its related problems, by means of tools and techniques from the area of mathematical programming. In the following, we first describe the families of sets $\{\mathcal{U}_g\}_{g\in\mathcal{G}}$ and $\{\mathcal{N}_t\}_{t\in\mathcal{T}}$, given by the unit commitment and network constraints of power system scheduling. We then introduce convex surrogates for them which lead to a class of computationally tractable and, yet, accurate relaxations of problem [\ref{PSS_obj}]--[\ref{cons_network}].
\subsection{Unit Constraints}
Following is a definition of the family $\{\mathcal{U}_g\}_{g\in\mathcal{G}}$, which is based on a number of practical limitations on the operation of generating units.
\begin{definition}
For every generating unit $g\in\mathcal{G}$, define $\mathcal{U}_g$ to be the set of all quadruplets
$(\mathbf{x}^{\top}_{g,\bullet},\mathbf{p}^{\top}_{g,\bullet},\mathbf{q}^{\top}_{g,\bullet},\mathbf{c}^{\top}_{g,\bullet})\in\mathbb{R}^{T\times 4}$ that satisfy constraints {\rm[\ref{cons_bin}]}, {\rm[\ref{cons_cost}]}, {\rm[\ref{cons_cap}]}, {\rm[\ref{cons_min}]}, and {\rm[\ref{cons_ramp}]}, for all $t\in\mathcal{T}$.
\end{definition}
Note that non-positive indices refer to given initial values.
In the remainder of this section, we detail each of the above-mentioned constraints.
\paragraph{Production Costs:}
The cost of operating a unit $g\in\mathcal{G}$ within different time intervals is a quadratic function of the active power produced by the unit. In addition, there is a fixed cost $\gamma_g$ associated with every interval during which the generator is committed (i.e., $x_{g,t}=1$), as well as a startup cost $\gamma_g^{\uparrow}$ and a shutdown cost $\gamma_g^{\downarrow}$ that are enforced on time slots at which the unit $g$ changes status. Therefore, the price of operating unit $g$ at time $t$ can be described through the nonlinear equation [\ref{cons_cost}],
where $\alpha_g$ and $\beta_g$ are nonnegative coefficients.
\paragraph{Generation Capacity:}
If a generating unit $g\in\mathcal{G}$ is committed at time $t\in\mathcal{T}$, the amount of active power $p_{g,t}$ and reactive power $q_{g,t}$ produced in that time slot must lie within capacity limitations of the unit. In other words, if $x_{g,t}=1$, then we have $p_{g,t}\in[\ubar{p}_g,\bar{p}_g]$ and $q_{g,t}\in[\ubar{q}_g,\bar{q}_g]$, where $\ubar{p}_g$, $\bar{p}_g$, $\ubar{q}_g$ and $\bar{q}_g$ are the given lower and upper bounds for unit $g$.
Constraints [\ref{cons_cap_p}]--[\ref{cons_cap_q}] ensure that the amount of power produced by unit $g$ is zero if $x_{g,t}=0$, and within the capacity limits if $x_{g,t}=1$.
\paragraph{Minimum Up \& Down Time Limits:}
Technical considerations often prohibit frequent changes in the status of generating units. Once a unit starts producing power, there is a minimum time before it can be turned off, and once the unit is turned off, it cannot be immediately activated again. Denote by $m^{\uparrow}_g$ and $m^{\downarrow}_g$ the minimum times for which the generating unit $g\in\mathcal{G}$ is required to remain active and inactive, respectively. The minimum up and down limits for unit $g$ are enforced through constraints [\ref{cons_min_u}]--[\ref{cons_min_l}].
\paragraph{Ramp Rate Limits:}
The rate of change in the amount of power produced by a generating unit is often constrained, depending on the type of the generator. Denote by $r_g$ the maximum variation of active power generation allowed for unit $g\in\mathcal{G}$ between two adjacent time intervals in which the unit is committed. Similarly, define $s_g$ as the maximum amount of active power that can be generated by unit $g$ immediately after startup or prior to shutdown. The ramp rate limits of unit $g\in\mathcal{G}$ are expressed through constraints [\ref{cons_ramp_pu}] and [\ref{cons_ramp_pl}].
Observe that if either $x_{g,t-1} = 0$ or $x_{g,t} = 0$, the constraints in [\ref{cons_ramp_pu}] and [\ref{cons_ramp_pl}] reduce to
$|p_{g,t}|\leq s_g$. Alternatively, if $x_{g,t-1} = x_{g,t} = 1$, the above constraints imply that $|p_{g,t}-p_{g,t-1}|\leq r_g$.
\begin{table}[t]
\line(1,0){360}\\
\begin{flushleft}
{\bf Unit Constraints:}
\end{flushleft}
\begin{align}
x_{g,t} \in \{0,1\}\label{cons_bin}
\end{align}
\begin{align}
&c_{g,t} = \alpha_g p^2_{g,t} + \beta_g p_{g,t} +\nonumber\\
&\quad\quad\;\;\,\gamma_g x_{g,t} + \gamma_g^{\uparrow} (1-x_{g,{t-1}})x_{g,t} + \gamma_g^{\downarrow} x_{g,t-1}(1-x_{g,t}),\label{cons_cost}
\end{align}
\begin{subequations}\label{cons_cap}\begin{align}
& \ubar{p}_g x_{g,t} \leq p_{g,t} \leq \bar{p}_g x_{g,t}\label{cons_cap_p}\\
& \ubar{q}_g x_{g,t} \leq q_{g,t} \leq \bar{q}_g x_{g,t}.\label{cons_cap_q}
\end{align}\end{subequations}
\begin{subequations}\label{cons_min}\begin{align}
&\quad\;\;\, x_{g,t} \geq x_{g,\tau}-x_{g,\tau-1}, \quad \forall\tau\in\{t-m^{\uparrow}_g+1,\ldots,t\},\label{cons_min_u}\\
&1-x_{g,t} \geq x_{g,\tau-1}-x_{g,\tau}, \quad \forall\tau\in\{t-m^{\downarrow}_g+1,\ldots,t\}.\label{cons_min_l}
\end{align}\end{subequations}
\begin{subequations}\label{cons_ramp}\begin{align}
&p_{g,t} - p_{g,t-1} \leq r_g x_{g,t-1} + s_g (1-x_{g,t-1}),\label{cons_ramp_pu} \\
&p_{g,t-1} - p_{g,t} \leq r_g x_{g,t} + s_g (1-x_{g,t}), \label{cons_ramp_pl}
\end{align}\end{subequations}
\line(1,0){360}\\
\begin{flushleft}
{\bf AC Network Constraints:}
\end{flushleft}
\begin{subequations}\label{cons_AC}\begin{align}
\mathbf{d}_{t}+\mathrm{diag}\{\mathbf{v}_{\bullet, t}\mathbf{v}^{\ast}_{\bullet, t}\mathbf{Y}^{\ast}_t\}&=\mathbf{C}^{\top}(\mathbf{p}_{\bullet, t}+i\mathbf{q}_{\bullet, t})\label{cons_AC1}\\
\lvert\mathrm{diag}\{\vec{\mathbf{C}}_t\mathbf{v}_{\bullet, t}\mathbf{v}^{\ast}_{\bullet, t}\vec{\mathbf{Y}}_t^{\ast}\}\rvert&\leq \mathbf{f}_{\mathrm{max};t}\label{cons_AC2}\\
\lvert\mathrm{diag}\{\cev{\mathbf{C}}_t\mathbf{v}_{\bullet, t}\mathbf{v}^{\ast}_{\bullet, t}\cev{\mathbf{Y}}_t^{\ast}\}\rvert&\leq \mathbf{f}_{\mathrm{max};t}\label{cons_AC3}\\
\mathbf{v}_{\mathrm{min}}\leq\lvert\mathbf{v}_{\bullet, t}\rvert&\leq \mathbf{v}_{\mathrm{max}}\label{cons_AC4}
\end{align}\end{subequations}
\line(1,0){360}\\
\begin{flushleft}
{\bf DC Network Constraints:}
\end{flushleft}
\begin{subequations}\label{cons_DC}\begin{align}
\mathbf{q}_{\bullet, t}&=0\label{cons_DC1}\\
\mathrm{real}\{\mathbf{d}_{t}\}+\mathbf{B}_t \boldsymbol{\theta}_{\bullet, t} &=\mathbf{C}^{\top} \mathbf{p}_{\bullet, t}\label{cons_DC2}\\
\lvert \vec{\mathbf{B}}_t \boldsymbol{\theta}_{\bullet, t} \rvert&\leq \mathbf{f}_{\mathrm{max};t}\label{cons_DC3}
\end{align}\end{subequations}
\line(1,0){360}
\caption{Unit and network constraints in power system scheduling.}
\label{fig:table_cons}
\end{table}
\subsection{Network Constraints}
In this part, we focus on network considerations in power system scheduling. The transmission of electricity from suppliers to consumers is carried out through an interconnected network whose topology throughout each time interval $t\in\mathcal{T}$ can be modeled as a directed graph $\mathcal{H}_t=(\mathcal{V},\mathcal{E}_t)$, with $\mathcal{V}$ and $\mathcal{E}_t$ as the set of vertices and edges, respectively. In power system terminology, vertices are referred to as ``buses'', and edges are called ``lines'' or ``branches'' of the network. Each generating unit is associated with (located at) one of the buses. Define the unit incidence matrix $\mathbf{C}\in\{0,1\}^{G\times\mathcal{V}}$ to be a binary matrix whose entry $(g,k)$ is equal to one, if and only if the generating unit $g$ belongs to bus $k$. Additionally, define the pair of matrices $\vec{\mathbf{C}}_t,\cev{\mathbf{C}}_t\in\{0,1\}^{\mathcal{E}_t\times\mathcal{V}}$ as the initial and final incidence matrices, respectively. The entry $(k,l)$ of $\vec{\mathbf{C}}_t$ is equal to one, if and only if line $l$ starts at bus $k$, while the entry $(k,l)$ of $\cev{\mathbf{C}}_t$ equals one, if and only if line $l$ ends at bus $k$.
The steady state voltages across the network are sinusoidal functions with a global frequency. As a result, the voltage function at each bus can be characterized by its amplitude and phase difference from a reference bus. Therefore, for each $k\in\mathcal{V}$ and $t\in\mathcal{T}$, a complex number $v_{k;t}$ is defined, whose magnitude $|v_{k;t}|$ and angle $\angle v_{k;t}$, respectively, account for the amplitude and phase of the voltage at the bus $k$, in time interval $t$.
Define $\mathbf{v}\equiv[v_{k;t}]\in\mathbb{C}^{\mathcal{V}\times \mathcal{T}}$ and $\boldsymbol{\theta}\equiv[\angle v_{k;t}]\in\mathbb{R}^{\mathcal{V}\times \mathcal{T}}$, to be the matrices encapsulating complex voltage and phase angle values, respectively.
The two widely used models for power networks are discussed next. The first one is the accurate Alternating Current (AC) model, which incorporates the nonlinear power flow equations. The next one is the Direct Current (DC) model, which is a simplified version of the AC model and can be described by linear equalities.
The use of nonlinear AC power flow equations introduces substantial complexity into power system optimization problems. However, various physical phenomena, such as network losses and reactive power flows are captured by the AC model, while ignored by the DC model. As a result, it is desirable to adopt the AC model, in order to determine better operation strategies.
Figure \ref{fig:FigureAC} illustrates a highly non-convex feasible region of voltage angles, enforced by the demand and technological constraints, in a simple four bus network that is described exactly by the AC model.
One of the primary benefits of the proposed method in this paper, is the possibility of adopting the AC model in large-scale power system scheduling problems.
\subsubsection{Alternating Current Power Flow Model}
\begin{figure}
\caption{(A) A four bus power system from \cite{zimmerman2011matpower}.}
\label{fig:FigureAC}
\end{figure}
In the AC model, characteristics of the network in a time interval $t\in\mathcal{T}$, can be described by a triplet of admittance matrices $\vec{\mathbf{Y}}_t,\cev{\mathbf{Y}}_t\in\mathbb{C}^{\mathcal{E}_t\times \mathcal{V}}$ and $\mathbf{Y}_t\in\mathbb{C}^{\mathcal{V}\times \mathcal{V}}$, that govern the flow of power throughout the network.
Next, we define the family $\{\mathcal{N}^{\mathrm{AC}}_t\}_{t\in\mathcal{T}}$
and give a brief description for each constraint.
\begin{definition}
For every time interval $t\in\mathcal{T}$, let $\mathcal{N}^{\mathrm{AC}}_t$ be the set of pairs
$(\mathbf{p}_{\bullet,t},\mathbf{q}_{\bullet,t})\in\mathbb{R}^{G\times 2}$, for which there exists a vector of complex voltages $\mathbf{v}_{\bullet,t}\in\mathbb{C}^{\mathcal{V}}$ satisfying the constraints {\rm[\ref{cons_AC1}]}--{\rm[\ref{cons_AC4}]}.
\end{definition}
\paragraph{AC Power Balance Equation:}
Constraint [\ref{cons_AC1}] is referred to as the {\it power balance equation}, which accounts for the conservation of energy at all buses of the network. The vector $\mathbf{d}_t\in\mathbb{C}^{\mathcal{V}}$ denotes the demand forecast at each bus in interval $t$, whose real and imaginary parts account for active and reactive power demands, respectively. Observe that the overall complex power produced by generating units located at each bus $k\in\mathcal{V}$ is given by the $k$-th entry of $\mathbf{C}^{\top}(\mathbf{p}_{\bullet, t}+i\mathbf{q}_{\bullet, t})$. Finally, the $k$-th entry of the vector $\mathrm{diag}\{\mathbf{v}_{\bullet, t}\mathbf{v}^{\ast}_{\bullet, t}\mathbf{Y}^{\ast}_t\}$ is equal to the amount of complex power exchanged between bus $k$ and the rest of the network. The voltages across the network are adjusted in such a way that the overall complex power produced at each bus equals the sum of the power consumption and power exchange of that bus, at all times. This requirement is enforced by constraint [\ref{cons_AC1}].
\paragraph{AC Thermal Limits:}
Due to thermal losses, the flow entering a line may differ from the flow leaving the line at the other end. For each time interval $t\in\mathcal{T}$, the complex power flows entering the lines of the network through their starting and ending buses are given by the vectors $\mathrm{diag}\{\vec{\mathbf{C}}_t\mathbf{v}_{\bullet, t}\mathbf{v}^{\ast}_{\bullet, t}\vec{\mathbf{Y}}_t^{\ast}\}$ and $\mathrm{diag}\{\cev{\mathbf{C}}_t\mathbf{v}_{\bullet, t}\mathbf{v}^{\ast}_{\bullet, t}\cev{\mathbf{Y}}_t^{\ast}\}$, respectively. Constraints [\ref{cons_AC2}] and [\ref{cons_AC3}] restrict the flow of power to within the thermal limits of the lines, $\mathbf{f}_{\mathrm{max};t}\in\mathbb{R}^{\mathcal{E}_t}$, for each $t\in\mathcal{T}$.
\paragraph{Voltage Magnitude Limits:}
In order for power system components to operate properly, the voltage magnitude at each bus needs to remain within a prespecified range, given by vectors $\mathbf{v}_{\mathrm{min}},\mathbf{v}_{\mathrm{max}}\in\mathbb{R}^{\mathcal{V}}$. Voltage magnitude limits are enforced through the constraint [\ref{cons_AC4}].
The nonlinear AC network constraints [\ref{cons_AC1}]--[\ref{cons_AC4}] pose a significant challenge for solving power system optimization problems based on a full model. As a result, typically a simplified version of the AC model is considered in practice which is explained next.
\subsubsection{Direct Current Power Flow Model}
The DC model can be formulated by ignoring the reactive powers, voltage magnitude deviations from their nominal values, and network losses. Under this model, the network is described by means of susceptance matrices $\vec{\mathbf{B}}_t\in\mathbb{R}^{\mathcal{E}_t\times \mathcal{V}}$ and $\mathbf{B}_t\in\mathbb{R}^{\mathcal{V}\times \mathcal{V}}$. Moreover, the flow of active power across the network in each time interval $t\in\mathcal{T}$ is expressed with respect to the vector of voltage angles $\boldsymbol{\theta}_{\bullet,t}\in\mathbb{R}^{\mathcal{V}}$.
\begin{definition}
For every time interval $t\in\mathcal{T}$, let $\mathcal{N}^{\mathrm{DC}}_t$ be the set of pairs
$(\mathbf{p}_{\bullet,t},\mathbf{q}_{\bullet,t})\in\mathbb{R}^{G\times 2}$, for which there exists a vector of voltage phase values $\boldsymbol{\theta}_{\bullet,t}\in\mathbb{R}^{\mathcal{V}}$ satisfying constraints {\rm[\ref{cons_DC1}]}--{\rm[\ref{cons_DC3}]}.
\end{definition}
Constraint [\ref{cons_DC2}] is a simplified alternative to the power balance equation [\ref{cons_AC1}], in which the vector $\mathbf{B}_t \boldsymbol{\theta}_{\bullet, t}\in\mathbb{R}^{\mathcal{V}}$ contains approximate values for the active power exchanges between each bus and the rest of the network. Additionally, thermal limits are enforced through the constraint [\ref{cons_DC3}], in which $\vec{\mathbf{B}}_t \boldsymbol{\theta}_{\bullet, t}\in\mathbb{R}^{\mathcal{E}_t}$ is the vector of approximate active power flows on the lines. Notice that, since network losses are ignored under the DC model, the power flows in the two directions of each line are equal in magnitude, and it suffices to enforce one inequality per line.
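As a small illustration of these equations (with hypothetical data, not taken from the benchmarks used in this paper), consider a two-bus network with a single line of susceptance $b$, one generator located at bus $1$, and an active power demand $d$ at bus $2$. Dropping the time index, the susceptance matrices are
\begin{align*}
\mathbf{B}=\begin{bmatrix} b & -b\\ -b & b\end{bmatrix},\qquad
\vec{\mathbf{B}}=\begin{bmatrix} b & -b\end{bmatrix},
\end{align*}
so the balance equation [\ref{cons_DC2}] reads $b(\theta_{1}-\theta_{2}) = p_{1}$ at bus $1$ and $d + b(\theta_{2}-\theta_{1}) = 0$ at bus $2$; together these force $p_{1}=d$, while the thermal constraint [\ref{cons_DC3}] requires the line flow $|b(\theta_{1}-\theta_{2})| = d$ to remain below $f_{\mathrm{max}}$.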
\section*{Methodology}
In order to tackle a general power system scheduling problem of the form [\ref{PSS_obj}]--[\ref{cons_network}], we develop third-order semidefinite programming (TSDP) relaxations for the families of sets $\{\mathcal{U}_g\}_{g\in\mathcal{G}}$ and $\{\mathcal{N}_t\}_{t\in\mathcal{T}}$, which lead to a computationally tractable algorithm. The proposed approach involves introducing additional variables, each serving as a proxy for a quadratic monomial. We design a class of inequalities to strengthen the relation between each proxy variable and the monomial it represents.
\begin{table*}[t]
\line(1,0){360}
\begin{subequations}\label{cons_KK}
\begin{align}\label{cons_K}
\!\!\!\!\!\!\!
\left(\!\!
\begin{bmatrix}
& \hspace{-1.7mm} +1 &\hspace{-1.7mm} \\
1 &\hspace{-1.7mm} -1 &\hspace{-1.7mm} \\
&\hspace{-1.7mm} &\hspace{-1.7mm} +1 \\
1 &\hspace{-1.7mm} &\hspace{-1.7mm} -1
\end{bmatrix}\!\!\otimes\!\!
\begin{bmatrix}
&\hspace{-1mm} -\ubar{p}_g &\hspace{-1.2mm} &\hspace{-1.2mm} +1 &\hspace{-1mm} \\
&\hspace{-1mm} +\bar{p}_g &\hspace{-1.2mm} &\hspace{-1.2mm} -1 &\hspace{-1mm} \\
&\hspace{-1mm} &\hspace{-1.2mm} -\ubar{p}_g &\hspace{-1.2mm} &\hspace{-1mm} +1 \\
&\hspace{-1mm} &\hspace{-1.2mm} +\bar{p}_g &\hspace{-1.2mm} &\hspace{-1mm} -1 \\
s_g&\hspace{-1.2mm} r_g\!-\!s_g &\hspace{-1mm} &\hspace{-1.2mm} +1 &\hspace{-1mm} -1 \\
s_g&\hspace{-1.2mm} &\hspace{-1mm} r_g\!-\!s_g &\hspace{-1.2mm} -1 &\hspace{-1mm} +1 \\
\end{bmatrix}\right)
\!\!\times\!\!
\begin{bmatrix}
\mathbf{e}^{\top}_1\\
\mathbf{e}^{\top}_2\!\!+\!\mathbf{e}^{\top}_{6}\!\!+\!\mathbf{e}^{\top}_{7}\\
\mathbf{e}^{\top}_3\!\!+\!\mathbf{e}^{\top}_{11}\!\!+\!\mathbf{e}^{\top}_{13}\\
\mathbf{e}^{\top}_4\!+\!\mathbf{e}^{\top}_{9}\\
\mathbf{e}^{\top}_5\!+\!\mathbf{e}^{\top}_{15}\\
\mathbf{e}^{\top}_8\!+\!\mathbf{e}^{\top}_{12}\\
\mathbf{e}^{\top}_{14}\\
\mathbf{e}^{\top}_{10}
\end{bmatrix}^{\!\!\top}\!\!\!\!\times\!\!
\begin{bmatrix}
1\\
x_{g,t-1}\\
x_{g,t}\\
p_{g,t-1}\\
p_{g,t}\\
u_{g,t}\\
y_{g,t}\\
z_{g,t}
\end{bmatrix}\!\!\geq\! 0,\!\!\!\!
\end{align}
\line(1,0){360}
\begin{align}\label{cons_Kd}
\!\!\!\!\!\!\!
\left(
\begin{bmatrix}
& \hspace{-1.5mm} +1 &\hspace{-1.5mm} \\
1 &\hspace{-1.5mm} -1 &\hspace{-1.5mm} \\
&\hspace{-1.5mm} +\bar{p}_g &\hspace{-1.5mm} -1 \\
&\hspace{-1.5mm} -\ubar{p}_g &\hspace{-1.5mm} +1
\end{bmatrix}\!\!\otimes\!\!
\begin{bmatrix}
+1\\
-1\\
+1\\
-1
\end{bmatrix}^{\!\!\top}
\right)
\!\!\times\!\!
\begin{bmatrix}
\dot{\mathbf{e}}^{\top}_1\\
\dot{\mathbf{e}}^{\top}_2\\
\dot{\mathbf{e}}^{\top}_3+\dot{\mathbf{e}}^{\top}_{5}+\dot{\mathbf{e}}^{\top}_{7}\!\!\\
\dot{\mathbf{e}}^{\top}_4\\
\dot{\mathbf{e}}^{\top}_6\\
\dot{\mathbf{e}}^{\top}_8\\
\dot{\mathbf{e}}^{\top}_{9}+\dot{\mathbf{e}}^{\top}_{11}\\
\dot{\mathbf{e}}^{\top}_{10}\\
\dot{\mathbf{e}}^{\top}_{12}
\end{bmatrix}^{\top} \hspace{-2.0mm}\times
\begin{bmatrix}
1\\
x_{g,t-2}\\
x_{g,t-1}\\
x_{g,t}\\
u_{g,t-1}\\
u_{g,t}\\
p_{g,t-1}\\
z_{g,t-1}\\
y_{g,t}
\end{bmatrix}
\geq 0,
\end{align}
\line(1,0){360}
\begin{align}\label{cons_Ku}
\!\!\!\!\!\!\!
\left(
\begin{bmatrix}
& \hspace{-1.5mm} +1 &\hspace{-1.5mm} \\
1 &\hspace{-1.5mm} -1 &\hspace{-1.5mm} \\
&\hspace{-1.5mm} +\bar{p}_g &\hspace{-1.5mm} -1 \\
&\hspace{-1.5mm} -\ubar{p}_g &\hspace{-1.5mm} +1
\end{bmatrix}\!\!\otimes\!\!
\begin{bmatrix}
-1\\
+1\\
-1
\end{bmatrix}^{\top}
\right)
\!\!\times\!\!
\begin{bmatrix}
\ddot{\mathbf{e}}^{\top}_1\\
\ddot{\mathbf{e}}^{\top}_2+\ddot{\mathbf{e}}^{\top}_{5}\\
\ddot{\mathbf{e}}^{\top}_3\\
\ddot{\mathbf{e}}^{\top}_4\\
\ddot{\mathbf{e}}^{\top}_6\\
\ddot{\mathbf{e}}^{\top}_8\\
\ddot{\mathbf{e}}^{\top}_7\\
\ddot{\mathbf{e}}^{\top}_9
\end{bmatrix}^{\top} \hspace{-2.0mm}\!\!\times\!
\begin{bmatrix}
x_{g,t-2}\\
x_{g,t-1}\\
x_{g,t}\\
u_{g,t-1}\\
u_{g,t}\\
p_{g,t-1}\\
z_{g,t-1}\\
y_{g,t}
\end{bmatrix}\geq 0.
\end{align}
\line(1,0){360}\\
\begin{flushleft}
{\scriptsize $\otimes$ denotes the Kronecker product of two matrices.
$\left\{\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_{15}\right\}$,
$\left\{\dot{\mathbf{e}}_1,\dot{\mathbf{e}}_2,\ldots,\dot{\mathbf{e}}_{12}\right\}$ and
$\left\{\ddot{\mathbf{e}}_1,\ddot{\mathbf{e}}_2,\ldots,\ddot{\mathbf{e}}_{9}\right\}$
denote the standard basis vectors for
$\mathbb{R}^{15}$, $\mathbb{R}^{12}$, and $\mathbb{R}^{9}$,
respectively.
} \end{flushleft}
\end{subequations}
\end{table*}
In this work, we propose a convex relaxation of the power system scheduling problem [\ref{PSS_obj}]--[\ref{cons_network}], which is built by substituting the unit and AC network feasible sets with their convex surrogates
$\{\mathcal{U}^{\mathrm{TSDP}}_g\}_{g\in\mathcal{G}}$ and
$\{\mathcal{N}^{\mathrm{TSDP}}_t\}_{t\in\mathcal{T}}$, respectively:
\begin{subequations}\label{PSS_relax}\begin{align}
& \underset{
\begin{subarray}{c}
\\
\mathbf{x},\mathbf{p},\mathbf{q},\mathbf{c}\in\mathbb{R}^{G\! \times\! T}\!\!\!\!\!\!\!\!\!\!
\end{subarray}
}{\text{minimize}}
& & \sum_{g=1}^{G}{\sum_{t=1}^{T}{c_{g,t}}} &&&& \label{PSS_relax_obj}\\
& \text{subject to}
& & (\mathbf{x}^{\top}_{g,\bullet},\mathbf{p}^{\top}_{g,\bullet},\mathbf{q}^{\top}_{g,\bullet},\mathbf{c}^{\top}_{g,\bullet})\in\mathcal{U}^{\mathrm{TSDP}}_g &&&& \hspace{-0.4cm}\forall g\in\mathcal{G}, \label{cons_unit_relax}\\
& & & (\mathbf{p}_{\bullet, t},\mathbf{q}_{\bullet, t})\in\mathcal{N}^{\mathrm{TSDP}}_t &&&& \hspace{-0.36cm}\forall t\in\mathcal{T},\label{cons_network_relax}
\end{align}\end{subequations}
Due to convexity of the sets $\{\mathcal{U}^{\mathrm{TSDP}}_g\}_{g\in\mathcal{G}}$ and
$\{\mathcal{N}^{\mathrm{TSDP}}_t\}_{t\in\mathcal{T}}$, the problem [\ref{PSS_relax_obj}]--[\ref{cons_network_relax}] can be solved in polynomial time. Moreover, since $\mathcal{U}_g\subseteq\mathcal{U}^{\mathrm{TSDP}}_g$ and $\mathcal{N}^{\mathrm{AC}}_t\subseteq\mathcal{N}^{\mathrm{TSDP}}_t$, for every $g\in\mathcal{G}$ and $t\in\mathcal{T}$, respectively, the optimal cost of problem [\ref{PSS_relax_obj}]--[\ref{cons_network_relax}] is a lower bound on the optimal cost of problem [\ref{PSS_obj}]--[\ref{cons_network}]. If an optimal solution to the problem [\ref{PSS_relax_obj}]--[\ref{cons_network_relax}] satisfies the original constraints [\ref{cons_unit}] and [\ref{cons_network}], then the relaxation is exact and a provably globally optimal solution to problem [\ref{PSS_obj}]--[\ref{cons_network}] is obtained. Otherwise, a rounding procedure is adopted to transform the optimal solution of [\ref{PSS_relax_obj}]--[\ref{cons_network_relax}] into a feasible and near-optimal solution of [\ref{PSS_obj}]--[\ref{cons_network}].
\subsection{Relaxation of Unit Constraints}
Each unit feasible set $\mathcal{U}_g$ is a semialgebraic set, with constraints [\ref{cons_bin}] and [\ref{cons_cost}] as the sources of nonconvexity.
In this work, we create a family of convex surrogates $\{\mathcal{U}^{\mathrm{TSDP}}_g\}_{g\in\mathcal{G}}$, by enforcing a collection of linear and conic inequalities.
To this end, define auxiliary variables
$\mathbf{u},
\mathbf{y},
\mathbf{z},
\mathbf{o}\in\mathbb{R}^{G\times T}$, whose components account for monomials $x_{g,t-1}x_{g,t}$,
$p_{g,t-1}x_{g,t}$,
$x_{g,t-1}p_{g,t}$ and
$p^2_{g,t}$, respectively. In other words, if the relaxation is exact, the equations
\begin{subequations}
\begin{align}
&u_{g,t}=x_{g,t-1}x_{g,t},\quad
&&y_{g,t}=p_{g,t-1}x_{g,t},\label{mono1}\\
&z_{g,t}=x_{g,t-1}p_{g,t},\quad
&&o_{g,t}=p^2_{g,t},\label{mono2}
\end{align}
\end{subequations}
hold true at optimality.
To capture the binary requirement on the commitment decisions, the following convex inequalities, which are referred to as ``McCormick constraints'', are enforced:
\begin{align}
\max\{0, x_{g,t-1}+x_{g,t}-1\} \leq u_{g,t} \leq \min\{x_{g,t-1},x_{g,t}\}.\label{relax_mcc}
\end{align}
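As a quick check (not part of the original derivation), observe that [\ref{relax_mcc}] pins $u_{g,t}$ to the product it represents on every binary point: if $x_{g,t-1}=x_{g,t}=1$, then $1\leq u_{g,t}\leq 1$, whereas if $x_{g,t-1}x_{g,t}=0$, then $0\leq u_{g,t}\leq 0$. Hence $u_{g,t}=x_{g,t-1}x_{g,t}$ whenever the commitment variables are binary, while [\ref{relax_mcc}] remains linear, and therefore convex, over the relaxation.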
Now, constraint [\ref{cons_cost}] can be cast in the following linear form, with respect to the auxiliary variables:
\begin{align}
&c_{g,t} = \alpha_g o_{g,t} + \beta_g p_{g,t} +\nonumber\\
&\quad\quad\;\;\,\gamma_g x_{g,t} + \gamma_g^{\uparrow} (x_{g,t}-u_{g,t}) + \gamma_g^{\downarrow} (x_{g,t-1}-u_{g,t}).\label{relax_cost}
\end{align}
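Indeed, substituting the identities [\ref{mono1}]--[\ref{mono2}] into [\ref{relax_cost}] recovers the original cost equation [\ref{cons_cost}]: with $u_{g,t}=x_{g,t-1}x_{g,t}$ one has $x_{g,t}-u_{g,t}=(1-x_{g,t-1})x_{g,t}$ and $x_{g,t-1}-u_{g,t}=x_{g,t-1}(1-x_{g,t})$, while $o_{g,t}=p^2_{g,t}$ turns $\alpha_g o_{g,t}$ into $\alpha_g p^2_{g,t}$. Hence [\ref{relax_cost}] is a linear restatement of [\ref{cons_cost}] that is exact whenever the auxiliary variables equal the monomials they represent.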
Finally, we relax the nonconvex equations [\ref{mono1}]--[\ref{mono2}] with the following conic constraints: \\
\begin{subequations}\label{conic0}
\noindent\begin{minipage}{0.422\hsize}
\begin{equation}\label{conic1}
\!\!\begin{bmatrix}
x_{g,t}\! &\!\! u_{g,t}\! &\!\! y_{g,t}\! \\
u_{g,t}\! &\!\! x_{g,t-1}\! &\!\! p_{g,t-1}\! \\
y_{g,t}\! &\!\! p_{g,t-1}\! &\!\! o_{g,t-1}\!
\end{bmatrix}\!\succeq 0,
\notag
\addtocounter{equation}{1}
\end{equation}
\end{minipage}
\begin{minipage}{0.262\hsize}
\begin{equation} \label{conic2}
\notag
\begin{bmatrix}
x_{g,t-1}\! &\!\! u_{g,t}\! &\!\! z_{g,t} \\
u_{g,t}\! &\!\! x_{g,t}\! &\!\! p_{g,t} \\
z_{g,t}\! &\!\! p_{g,t}\! &\!\! o_{g,t}
\end{bmatrix}\!\succeq 0,
\end{equation}
\end{minipage}\begin{minipage}{0.3\hsize}
\;\;\normalfont[\begin{NoHyper}\ref{conic1},\ref{conic2}\end{NoHyper}]
\end{minipage}
\end{subequations}
\noindent
as well as a number of linear inequalities that are stated next.
\begin{definition}
For each $g\in\mathcal{G}$, define $\mathcal{U}^{\mathrm{TSDP}}_g\subset\mathbb{R}^{T\times 4}$ to be the set of all quadruplets
$(\mathbf{x}^{\top}_{g,\bullet},\mathbf{p}^{\top}_{g,\bullet},\mathbf{q}^{\top}_{g,\bullet},\mathbf{c}^{\top}_{g,\bullet})$, for which there exists $(\mathbf{u}^{\top}_{g,\bullet}, \mathbf{y}^{\top}_{g,\bullet}, \mathbf{z}^{\top}_{g,\bullet}, \mathbf{o}^{\top}_{g,\bullet})\in\mathbb{R}^{T\times4}$, such that for every $t\in\mathcal{T}$, the following constraints hold true:
\begin{itemize}
\item[i)] The linear inequalities {\rm[\ref{cons_cap}]}, {\rm[\ref{cons_min}]}, {\rm[\ref{cons_ramp}]},
\item[ii)] The conic and linear constraints {\rm[\ref{relax_mcc}]}, {\rm[\ref{relax_cost}]}, {\rm[\ref{conic1}]}, {\rm[\ref{conic2}]} and {\rm[\ref{cons_K}]},
\item[iii)] The linear inequalities {\rm[\ref{cons_Kd}]}, if $m^{\downarrow}_g>1$,
\item[iv)] The linear inequalities {\rm[\ref{cons_Ku}]}, if $m^{\uparrow}_g>1$.
\end{itemize}
\end{definition}
Notice that for each $g\in\mathcal{G}$, the relaxed feasible set $\mathcal{U}^{\mathrm{TSDP}}_g$ is defined by means of conic and linear inequalities, all of which are convex. The validity of these inequalities is proven in the Appendix.
The definition of $\mathcal{U}^{\mathrm{TSDP}}_g$ involves $2T$ third-order semidefinite constraints that can be enforced efficiently. Additionally, the overall number of inequalities grows linearly with respect to $T$, which is an improvement upon existing methods. Moreover, since the ramp and minimum up \& down constraints are incorporated into the valid inequalities, the present convex relaxation offers more accurate bounds in the case of severe load variations.
\subsection{Relaxation of Network Constraints}
A state-of-the-art method for the convex relaxation of AC power flow equations, given in \cite{madani2016promises}, incorporates an auxiliary matrix variable $\mathbf{W}_t\in\mathbb{H}_n$ for each $t\in\mathcal{T}$, accounting for $\mathbf{v}_{\bullet, t}\mathbf{v}^{\ast}_{\bullet, t}$. Using the matrix $\mathbf{W}_t\in\mathbb{H}_n$, the AC network constraints [\ref{cons_AC1}]--[\ref{cons_AC4}] can be convexified as follows:
\begin{subequations}\label{relax_AC}\begin{align}
\mathbf{d}_{t}+\mathrm{diag}\{\mathbf{W}_t\;\mathbf{Y}^{\ast}_t\}&=\mathbf{C}^{\top}(\mathbf{p}_{\bullet t}+i\mathbf{q}_{\bullet t})\label{relax_AC1}\\
\lvert\mathrm{diag}\{\vec{\mathbf{C}}_t\;\mathbf{W}_t\;\vec{\mathbf{Y}}_t^{\ast}\}\rvert&\leq \mathbf{f}_{\mathrm{max};t}\label{relax_AC2}\\
\lvert\mathrm{diag}\{\cev{\mathbf{C}}_t\;\mathbf{W}_t\;\cev{\mathbf{Y}}_t^{\ast}\}\rvert&\leq \mathbf{f}_{\mathrm{max};t}\label{relax_AC3}\\
\mathbf{v}_{\mathrm{min}}\leq\lvert\mathrm{diag}\{\mathbf{W}_t\}\rvert&\leq \mathbf{v}_{\mathrm{max}}\label{relax_AC4}
\end{align}\end{subequations}
In formulation [\ref{relax_AC1}]--[\ref{relax_AC4}], the structure of matrix $\mathbf{W}_t$, i.e.,
\begin{align}
\mathbf{W}_t=\mathbf{v}_{\bullet, t}\mathbf{v}^{\ast}_{\bullet, t}
\label{W_fac}
\end{align}
is ignored, to make the model polynomially solvable. To remedy the absence of the non-convex equation [\ref{W_fac}], the relaxation can be strengthened through a combination of conic constraints, with the aim of enforcing the relation between $\mathbf{W}_t$ and $\mathbf{v}_{\bullet, t}$, implicitly.
Observe that an arbitrary matrix $\mathbf{W}_t\in\mathbb{H}_n$ can be factored as $\mathbf{v}_{\bullet, t}\mathbf{v}^{\ast}_{\bullet, t}$ if and only if it is rank-one and positive semidefinite:
\begin{align}
\mathrm{rank}\{\mathbf{W}_t\} = 1
\quad \wedge \quad
\mathbf{W}_t\succeq 0.\label{rank_psd}
\end{align}
Although a rank constraint on $\mathbf{W}_t$ cannot be enforced efficiently, employing the convex constraint $\mathbf{W}_t\succeq 0$ leads to the semidefinite programming (SDP) relaxation of AC network constraints. For larger-scale systems, a graph-theoretic analysis divides the set of buses into several overlapping subsets $\mathcal{A}_1,\mathcal{A}_2,\ldots,\mathcal{A}_A\subseteq\mathcal{V}$, such that the relaxation could be represented with smaller conic constraints:
\begin{align} \label{relax:sdp}
\mathbf{W}_t[\mathcal{A}_k,\mathcal{A}_k]\succeq 0,\qquad\forall k\in\{1,2,\ldots,A\},
\end{align}
where for every $\mathcal{X}\subseteq\{1,\ldots,n\}$, the notation $\mathbf{W}_t[\mathcal{X},\mathcal{X}]$ represents the $|\mathcal{X}|\times|\mathcal{X}|$ principal submatrix of $\mathbf{W}_t$, whose rows and columns are chosen from $\mathcal{X}$. Choosing
$\mathcal{A}_1,\mathcal{A}_2,\ldots,\mathcal{A}_A$,
based on the bags of an arbitrary tree decomposition of the network, leads to an equivalent but more efficient SDP relaxation \cite{madani2016promises}. A weaker but far more tractable approach is the second-order cone programming (SOCP) relaxation, which uses conic constraints of the following form:
\begin{align} \label{relax:socp}
\begin{bmatrix}
W_{k_1,k_1}\! &\!\! W_{k_1,k_2}\! \\
W_{k_2,k_1}\! &\!\! W_{k_2,k_2}\!
\end{bmatrix}\!\succeq 0,\qquad \forall (k_1,k_2)\in\mathcal{E}_t
\end{align}
To achieve a better balance between the strength of the convex relaxation and scalability, in this paper,
we use a third-order semidefinite programming (TSDP) relaxation which is described as follows:
\begin{definition}
For each $t\in\mathcal{T}$, let $\mathcal{N}^{\mathrm{TSDP}}_t\subset\mathbb{R}^{G\times 2}$ be the set of pairs $(\mathbf{p}_{\bullet,t},\mathbf{q}_{\bullet,t})$, for which there exists a Hermitian matrix $\mathbf{W}_t\in\mathbb{H}_n$ that satisfies constraints {\rm[\ref{relax_AC1}]}--{\rm[\ref{relax_AC4}]} and the following third-order semidefinite constraints
\begin{align}
\begin{bmatrix}
W_{k_1,k_1}\! &\!\! W_{k_1,k_2}\! &\!\! W_{k_1,k_3} \\
W_{k_2,k_1}\! &\!\! W_{k_2,k_2}\! &\!\! W_{k_2,k_3} \\
W_{k_3,k_1}\! &\!\! W_{k_3,k_2}\! &\!\! W_{k_3,k_3}
\end{bmatrix}\!\succeq 0,
\end{align}
for every $(k_1,k_2,k_3)\in\bigcup_{k=1}^{A} \mathcal{A}_k\times\mathcal{A}_k\times\mathcal{A}_k$,
where
$\mathcal{A}_1,\mathcal{A}_2,\ldots,\mathcal{A}_A\subseteq\mathcal{V}$,
are the bags associated with an arbitrary tree decomposition of the network $\mathcal{H}_t$, for which $|\mathcal{A}_k|\geq 3$, $\forall k \in\{1,\ldots,A\}$.\label{def3}
\end{definition}
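To make the bookkeeping in Definition \ref{def3} concrete, the following Python sketch (illustrative only; the function and variable names are ours, and the actual experiments use CVX and MOSEK) enumerates the bus triples drawn from the bags and checks the corresponding $3\times 3$ principal submatrices of a candidate matrix $\mathbf{W}_t$ for positive semidefiniteness:
\begin{verbatim}
import itertools
import numpy as np

def third_order_psd_violations(W, bags, tol=1e-8):
    # Return the bus triples whose 3x3 principal submatrix of W fails
    # positive semidefiniteness (smallest eigenvalue below -tol).
    violations, seen = [], set()
    for bag in bags:
        for triple in itertools.combinations(sorted(bag), 3):
            if triple in seen:
                continue
            seen.add(triple)
            idx = np.array(triple)
            sub = W[np.ix_(idx, idx)]
            lam_min = np.linalg.eigvalsh(sub)[0]  # ascending eigenvalues
            if lam_min < -tol:
                violations.append((triple, lam_min))
    return violations

# Toy example (hypothetical data): a rank-one W = v v* on a 4-bus network,
# with the bags of a path-like tree decomposition.
v = np.array([1.0, 1.02*np.exp(1j*0.10), 0.98*np.exp(-1j*0.05), 1.01])
W = np.outer(v, v.conj())
bags = [{0, 1, 2}, {1, 2, 3}]
print(third_order_psd_violations(W, bags))  # prints [] since W is PSD
\end{verbatim}
In an exact relaxation the optimal $\mathbf{W}^{\mathrm{opt}}_t$ is rank-one and passes this check by construction; within the solver the same submatrix conditions are imposed as constraints rather than checked after the fact.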
\begin{figure}
\caption{Comparisons between the performance of TSDP and SOCP relaxations of the AC network constraints on ten instances of the PEGASE 1354-bus system. The differences between the optimality gaps are shown.}
\label{table_res_OPF}
\end{figure}
\section*{Comparison of the models}
We have compared the performance of the proposed TSDP relaxation
(Definition \ref{def3}) with
the SOCP relaxation given by [\ref{relax:socp}]
in terms of: i) the upper bound (UB) given by the cost of the feasible solution returned; and ii) the convex lower bound (LB) on the optimal cost. To that end, ten instances based on each European system
in Table \ref{table_res_AC} are considered.
In all instances of the largest PEGASE 2869-bus, 9241-bus and 13659-bus cases, the Newton-Raphson local search method successfully converges to a feasible operating point when started with an initial point from the TSDP relaxation (discussed in detail in the section Recovering a Feasible Solution); however, it fails to converge when started with points from the SOCP relaxation. For the instances on the smallest PEGASE 1354-bus system, both the SOCP and TSDP relaxations lead to feasible solutions through the Newton-Raphson local search.
Figure \ref{table_res_OPF} displays the optimal relaxation objective value (LB) and the cost of the resulting feasible solution (UB) for this system. Although very fast to solve, the SOCP relaxation is substantially worse than the TSDP relaxation, as seen in Figure \ref{table_res_OPF}.
Motivated by the aforementioned observation, we propose the TSDP relaxation to convexify the AC network equations [\ref{cons_AC1}]--[\ref{cons_AC4}], in power system scheduling.
\section*{Discussion and Conclusions}
In this paper, we study the problem of optimizing grid operation throughout a planning horizon, based on the available resources for supply and transmission of electricity. This fundamental problem has been heavily investigated for decades and needs to be solved on a daily basis by independent system operators and utility companies. The challenge is twofold: first, determining a massive number of highly correlated binary decisions that account for the commitment of generators; second, finding the most economic transmission strategy in accordance with the laws of physics and technological limitations.
We propose a third-order semidefinite programming (TSDP) method that is equipped with an accurate physical model for the flow of electricity and offers massive scalability in the number of generating units and grid size. While co-optimization of supply and transmission under the full physical model has long been put forward as a direction to boost the efficiency and reliability of operation, scalability has been the main bottleneck to-date. Significant improvement over the state-of-the-art methods is validated on day-ahead grid scheduling problems, on the largest publicly available real-world data.
Given the simplicity of the linear algebraic operations on $3\times 3$ Hermitian matrices, a direction of interest is to build a highly parallel numerical algorithm for solving large-scale TSDP problems on graphics processing units and high-performance computing facilities.
\section*{Materials and Methods}
This section details the procedure for recovering a feasible solution to the scheduling problem from the optimal solution of the relaxed problem [\ref{PSS_relax_obj}]--[\ref{cons_network_relax}], as well as the data generation procedure.
\subsection*{Recovering a Feasible Solution}
Let $(
\mathbf{x}^{\mathrm{opt}},
\mathbf{p}^{\mathrm{opt}},
\mathbf{q}^{\mathrm{opt}},
\mathbf{c}^{\mathrm{opt}}
)$ be an optimal solution to the relaxed problem [\ref{PSS_relax_obj}]--[\ref{cons_network_relax}].
If all entries of $\mathbf{x}^{\mathrm{opt}}$ turn out to be integer, and there exists a matrix
$\mathbf{v}\in\mathbb{C}^{\mathcal{V}\times \mathcal{T}}$ that satisfies constraints [\ref{cons_AC1}]--[\ref{cons_AC4}], then $(
\mathbf{x}^{\mathrm{opt}},
\mathbf{p}^{\mathrm{opt}},
\mathbf{q}^{\mathrm{opt}},
\mathbf{c}^{\mathrm{opt}}
)$ is a globally optimal solution to problem [\ref{PSS_obj}]--[\ref{cons_network}].
However, the relaxation is often inexact, and solutions to the relaxed problem [\ref{PSS_relax_obj}]--[\ref{cons_network_relax}] are not necessarily feasible for problem [\ref{PSS_obj}]--[\ref{cons_network}]. In such cases, a recovery process is needed to transform $(
\mathbf{x}^{\mathrm{opt}},
\mathbf{p}^{\mathrm{opt}},
\mathbf{q}^{\mathrm{opt}},
\mathbf{c}^{\mathrm{opt}}
)$ to a feasible and near-optimal solution for problem [\ref{PSS_obj}]--[\ref{cons_network}].
\begin{algorithm}[t]
{\small
\caption{\small Recovering a Feasible Solution}
\label{alg}
\begin{algorithmic}
\Require{The optimal unit commitment solution $\mathbf{x}^{\mathrm{opt}}\in\mathbb{R}^{G\times T}$ to problem [\ref{PSS_relax_obj}]--[\ref{cons_network_relax}]:}
\For{$g=1,\ldots,G$}
\For{$t=1-\max\{m^{\uparrow}_g,m^{\downarrow}_g\},\ldots,0$}
\State Set $x^{\mathrm{feas}}_{g,t}$ according to the initial state of unit $g$.
\EndFor
\EndFor
\For{$g=1,\ldots,G$}
\For{$t=1,\ldots,T$}
\State $a^{\uparrow}\!\!\gets\!\max\{x^{\mathrm{feas}}_{g,\tau}-x^{\mathrm{feas}}_{g,\tau-1}\;|\;\forall\tau\! \in\!\{t-m^{\uparrow}_g+1,\ldots,t-1\}\}$
\State $a^{\downarrow}\!\!\gets\!\max\{x^{\mathrm{feas}}_{g,\tau-1}-x^{\mathrm{feas}}_{g,\tau}\;|\;\forall\tau\! \in\!\{t-m^{\downarrow}_g+1,\ldots,t-1\}\}$
\If{$a^{\uparrow}=a^{\downarrow}=1$}
\State Declare failure.
\Else
\If{$a^{\uparrow}=1$}
\State $x^{\mathrm{feas}}_{g,t}\gets 1$
\EndIf
\If{$a^{\downarrow}=1$}
\State $x^{\mathrm{feas}}_{g,t}\gets 0$
\EndIf
\If{$a^{\uparrow}=a^{\downarrow}=0$}
\State $x^{\mathrm{feas}}_{g,t} \gets\mathrm{round}\{x^{\mathrm{opt}}_{g,t}+0.25\} $
\EndIf
\EndIf
\EndFor
\EndFor
\State \Return $\mathbf{x}^{\mathrm{feas}}$
\end{algorithmic}}
\end{algorithm}
As demonstrated by Table \ref{table_res_AC}, on average, only a small portion of the binary variables remain fractional after solving the proposed TSDP relaxation problem.
In all of our experiments, a feasible candidate for $\mathbf{x}$ is obtained through Algorithm \ref{alg}, which simply rounds each entry of $\mathbf{x}^{\mathrm{opt}}$ subject to the minimum up and down time constraints [\ref{cons_min_u}] and [\ref{cons_min_l}].
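For illustration, a direct Python transcription of Algorithm \ref{alg} is sketched below (the data structures, the biased rounding written as the threshold $x^{\mathrm{opt}}_{g,t}\geq 0.25$, and the failure handling are our choices; it assumes that the initial statuses are supplied for $t\leq 0$ down to $1-\max\{m^{\uparrow}_g,m^{\downarrow}_g\}$):
\begin{verbatim}
def recover_commitments(x_opt, x_init, m_up, m_dn, T):
    # x_opt[g][t]: relaxed commitment values for t = 1..T
    # x_init[g][t]: known binary statuses for t <= 0
    G = len(x_opt)
    x = [dict(x_init[g]) for g in range(G)]   # start from the initial states
    for g in range(G):
        for t in range(1, T + 1):
            a_up = max((x[g][tau] - x[g][tau - 1]
                        for tau in range(t - m_up[g] + 1, t)), default=0)
            a_dn = max((x[g][tau - 1] - x[g][tau]
                        for tau in range(t - m_dn[g] + 1, t)), default=0)
            if a_up == 1 and a_dn == 1:
                raise RuntimeError("failure at unit %d, period %d" % (g, t))
            if a_up == 1:        # a recent startup keeps the unit on
                x[g][t] = 1
            elif a_dn == 1:      # a recent shutdown keeps the unit off
                x[g][t] = 0
            else:                # free period: round{x_opt + 0.25}
                x[g][t] = 1 if x_opt[g][t] >= 0.25 else 0
    return x
\end{verbatim}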
Another challenge is finding a feasible voltage profile
$
\mathbf{v}=[
\mathbf{v}_{\bullet, 1}|
\mathbf{v}_{\bullet, 2}|
\ldots|
\mathbf{v}_{\bullet, T}
]\in\mathbb{C}^{\mathcal{V}\times \mathcal{T}},
$
based on a solution to the relaxed problem [\ref{PSS_relax_obj}]--[\ref{cons_network_relax}].
If the rank constraints [\ref{rank_psd}] are not satisfied at optimality, then the relaxation of network equations is not exact and it is not possible to factorize the resulting matrices
$\mathbf{W}^{\mathrm{opt}}_1,
\mathbf{W}^{\mathrm{opt}}_2,\ldots,$ $
\mathbf{W}^{\mathrm{opt}}_T$,
in the form of equation [\ref{W_fac}]. A ``recovery algorithm'' is introduced in \cite{madani2016promises} for finding an approximate vector of voltages $\hat{\mathbf{v}}_{\bullet, t}$ based on $\mathbf{W}^{\mathrm{opt}}_t$, which minimizes the overall mismatch (i.e., violation of network equations). In order to obtain voltage profiles with no mismatch, we feed the outcome of the recovery algorithm from \cite{madani2016promises} as the initial point to the Newton-Raphson local search algorithm. This procedure is described next:
\begin{enumerate}
\item Find a feasible matrix of commitment decisions $\mathbf{x}^{\mathrm{feas}}$ via Algorithm~\ref{alg}.
\item For every $t=1,\ldots,T$:
\begin{enumerate}
\item[i)] Obtain an approximate voltage profile $\hat{\mathbf{v}}_{\bullet, t}$ from $\mathbf{W}^{\mathrm{opt}}_t$ based on the recovery algorithm in \cite{madani2016promises}.
\item[ii)] Start with
$\mathbf{p}^{\mathrm{opt}}_{\bullet, t}$,
$\mathbf{q}^{\mathrm{opt}}_{\bullet, t}$ and
$\hat{\mathbf{v}}_{\bullet, t}$, as the initial point to
search locally for a triplet of vectors
$\mathbf{p}^{\mathrm{feas}}_{\bullet, t}\in\mathbb{R}^G$,
$\mathbf{q}^{\mathrm{feas}}_{\bullet, t}\in\mathbb{R}^G$ and
$\mathbf{v}^{\mathrm{feas}}_{\bullet, t}\in\mathbb{C}^{\mathcal{V}}$,
that minimizes the objective function
$\sum^{G}_{g=1}{\alpha_{g} p^2_{g,t} + \beta_{g} p_{g,t}}$, subject to the constraints [\ref{cons_AC1}]--[\ref{cons_AC4}] and
\begin{subequations}
\begin{align}
&\ubar{q}_g x^{\mathrm{feas}}_{g,t} \leq q_{g,t} \leq \bar{q}_g x^{\mathrm{feas}}_{g,t}\\
&\ubar{p}_g x^{\mathrm{feas}}_{g,t} \leq p_{g,t} \leq \bar{p}_g x^{\mathrm{feas}}_{g,t}\\
&p_{g,t}\geq p^{\mathrm{feas}}_{g,t-1} - r_g x^{\mathrm{feas}}_{g,t} - s_g (1-x^{\mathrm{feas}}_{g,t}) \\
&p_{g,t}\leq p^{\mathrm{feas}}_{g,t-1} + r_g x^{\mathrm{feas}}_{g,t-1} + s_g (1-x^{\mathrm{feas}}_{g,t-1}).
\end{align}
\end{subequations}
\item[iii)]
Derive the feasible cost values $c^{\mathrm{feas}}_{g,t}$, according to the equation [\ref{cons_cost}].
\end{enumerate}
\item Report $(\mathbf{x}^{\mathrm{feas}},\mathbf{p}^{\mathrm{feas}},\mathbf{q}^{\mathrm{feas}})$ as the output schedule/dispatch and $\mathbf{v}^{\mathrm{feas}}$ as the corresponding voltage profile. The following quantity serves as an upper bound on the relative distance from global optimality (a justification is given after this list):
\begin{align}
\!\!\mathrm{Gap}\leq 100\times\frac{
\sum_{t=1}^{T}{\sum_{g=1}^{G}{(c^{\mathrm{feas}}_{g,t}-c^{\mathrm{opt}}_{g,t})}}}
{\sum_{t=1}^{T}{\sum_{g=1}^{G}{c^{\mathrm{feas}}_{g,t}}}}
\end{align}
\end{enumerate}
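To see why this quantity bounds the optimality gap, let $c^{\star}$ denote the (unknown) optimal cost of problem [\ref{PSS_obj}]--[\ref{cons_network}]. Since [\ref{PSS_relax_obj}]--[\ref{cons_network_relax}] is a relaxation and the recovered schedule is feasible, we have $\sum_{t,g} c^{\mathrm{opt}}_{g,t} \leq c^{\star} \leq \sum_{t,g} c^{\mathrm{feas}}_{g,t}$, and therefore
\begin{align*}
100\times\frac{\sum_{t=1}^{T}\sum_{g=1}^{G} c^{\mathrm{feas}}_{g,t}-c^{\star}}{\sum_{t=1}^{T}\sum_{g=1}^{G} c^{\mathrm{feas}}_{g,t}}
\;\leq\;
100\times\frac{\sum_{t=1}^{T}\sum_{g=1}^{G}\bigl(c^{\mathrm{feas}}_{g,t}-c^{\mathrm{opt}}_{g,t}\bigr)}{\sum_{t=1}^{T}\sum_{g=1}^{G} c^{\mathrm{feas}}_{g,t}},
\end{align*}
so the reported $\mathrm{Gap}$ is indeed an upper bound on the true relative distance from global optimality.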
We have used the procedure described above for the experiments presented in Table \ref{table_res_AC}, and in all cases, a feasible solution could be found within the constraint violation tolerance of $10^{-5}$ per-unit.
\subsection*{Data Generation}
The network data for IEEE and European systems is obtained from the MATPOWER package \cite{zimmerman2011matpower,josz2016ac}. Hourly load changes for the day-ahead at all buses are considered proportional to the numbers reported in \cite{khodaei2010transmission}.
In each experiment, the cost coefficients
$\alpha_g$, $\beta_g$, $\gamma_g$, $\gamma^{\downarrow}_g$ and $\gamma^{\uparrow}_g$
are chosen uniformly between zero and $~1~\$/(\mathrm{MW.h})^2$, $~10~\$/(\mathrm{MW.h})$, $~100~\$$, $~30~\$$ and $~50~\$$, respectively.
The ramp limits of each generating unit are set to
$r_g = s_g = \max\{\bar{p}_g/4,\ubar{p}_g\}$. For each generating unit, the minimum up and down limits $m^{\uparrow}_g$ and $m^{\downarrow}_g$ are randomly selected in such a way that $m^{\uparrow}_g-1$ and $m^{\downarrow}_g-1$ have Poisson distribution with parameter $4$. The initial status of generators at time period $t=0$ is found by solving a single period economic dispatch problem corresponding to the demand at time $t=1$. For each generating unit $g\in\mathcal{G}$, it is assumed that the initial status has been maintained exactly since time period $t=-t^{(0)}_g$, where $t^{(0)}_g$ has Poisson distribution with parameter $4$. For simplicity, all of the generating units with negative capacity are removed.
All simulations are run in MATLAB using a workstation with an Intel 3.0 GHz, 12-core CPU, and 256 GB RAM. The CVX package version 3.0 \cite{cvx} and MOSEK version 8.0 \cite{mosek} are used for solving semidefinite programming problems. The data set as well as the log files of the optimization runs are available for download at:
\href{http://ieor.berkeley.edu/~atamturk/data/tsdp}{http://ieor.berkeley.edu/$\sim$atamturk/data/tsdp}.
\subsection*{Acknowledgement}
A. Atamt\"urk is supported, in part,
by grant FA9550-10-1-0168 from the Office of the Assistant Secretary of Defense for Research and Engineering.
\pagebreak
\section*{Appendix}
A proof of validity for conic and linear inequalities [\ref{conic0}] and [\ref{cons_KK}] is provided in this section.
\begin{proposition}
Inequalities {\rm[\ref{conic1}]}, {\rm[\ref{conic2}]} and {\rm[\ref{cons_K}]} are valid for every pair of vectors $(\mathbf{x},\mathbf{p})\in\mathbb{R}^{G\times T}$ that satisfy constraints
{\rm[\ref{cons_bin}]}, {\rm[\ref{cons_cap_p}]}, {\rm[\ref{cons_ramp_pu}]}, {\rm[\ref{cons_ramp_pl}]}, {\rm[\ref{cons_min_u}]} and {\rm[\ref{cons_min_l}]}. Additionally, if
$m^{\downarrow}_g \geq 2$, then constraint {\rm[\ref{cons_Kd}]} and if $m^{\uparrow}_g \geq 2$, then constraint {\rm[\ref{cons_Ku}]} is valid.
\end{proposition}
\textit{\textbf{Proof:}}
For every
$(g,t)\in\{1,\ldots,G\}\times\{1,\ldots,T\}$, define the vector of monomials:
\begin{align}
\boldsymbol{\delta}_{g,t} \equiv [&x_{g,t-1},\; x_{g,t},\; p_{g,t-1},\; p_{g,t},\nonumber\\
&\ubar{h}_{g,t-1},\; \bar{h}_{g,t-1},\; \ubar{h}_{g,t},\; \bar{h}_{g,t},\; \ubar{a}_{g,t},\; \bar{a}_{g,t},\nonumber\\
&x_{g,t-1}\ubar{h}_{g,t-1},\; x_{g,t-1}\bar{h}_{g,t-1},\; x_{g,t-1}\ubar{h}_{g,t},\nonumber\\ &x_{g,t-1}\bar{h}_{g,t},x_{g,t-1}\ubar{a}_{g,t},\; x_{g,t-1}\bar{a}_{g,t},\nonumber\\
&x_{g,t}\ubar{h}_{g,t-1},\; x_{g,t}\bar{h}_{g,t-1},\; x_{g,t}\ubar{h}_{g,t},\nonumber\\
&x_{g,t}\bar{h}_{g,t},\; x_{g,t}\ubar{a}_{g,t},\; x_{g,t}\bar{a}_{g,t}]^{\top},
\end{align}
where
\begin{subequations}
\begin{align}
\ubar{h}_{g,t} &\equiv \sqrt{p_{g,t}- \ubar{p}_g x_{g,t}}\;,\qquad
\bar{h}_{g,t} \equiv \sqrt{\bar{p}_g x_{g,t} - p_{g,t}}\;, \\
\ubar{a}_{g,t} &\equiv \sqrt{s_g + (r_g - s_g) x_{g,t-1} + p_{g,t-1} - p_{g,t} }\;, \\
\bar{a}_{g,t} &\equiv \sqrt{s_g + (r_g - s_g) x_{g,t} - p_{g,t-1} + p_{g,t} }\;.
\end{align}
\end{subequations}
Define $\boldsymbol{\Delta}_{g,t}$ as the $22\times22$ symmetric matrix formed by multiplying $\boldsymbol{\delta}_{g,t}$ by its transpose:
\begin{align}
\boldsymbol{\Delta}_{g,t}\equiv\boldsymbol{\delta}_{g,t}\boldsymbol{\delta}_{g,t}^{\top}.
\end{align}
Observe that $\boldsymbol{\Delta}_{g,t}$ is positive semidefinite, and as a consequence, every principal submatrix of $\boldsymbol{\Delta}_{g,t}$ is positive semidefinite as well. Considering the submatrices $\boldsymbol{\Delta}_{g,t}[\{2,1,3\},\{2,1,3\}]$ and
$\boldsymbol{\Delta}_{g,t}[\{1,2,4\},\{1,2,4\}]$ yields the conic constraints [\ref{conic1}] and [\ref{conic2}], respectively. Moreover, the constraint [\ref{cons_K}] encapsulates $24$ linear inequalities, and it is straightforward to verify that inequalities $k$, $k+6$, $k+12$ and $k+18$ can be deduced from
\begin{align}
\!\!\boldsymbol{\Delta}_{g,t}[\{k+4,k+10,k+16\},\{k+4,k+10,k+16\}]\succeq 0,\!
\end{align}
for each $k=1,2,\ldots,6$. This completes the proof of [\ref{cons_K}].
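For instance, writing the submatrix $\boldsymbol{\Delta}_{g,t}[\{2,1,3\},\{2,1,3\}]$ out explicitly gives
\begin{align*}
\boldsymbol{\Delta}_{g,t}[\{2,1,3\},\{2,1,3\}]=
\begin{bmatrix}
x_{g,t}^2 & x_{g,t}x_{g,t-1} & x_{g,t}p_{g,t-1}\\
x_{g,t-1}x_{g,t} & x_{g,t-1}^2 & x_{g,t-1}p_{g,t-1}\\
p_{g,t-1}x_{g,t} & p_{g,t-1}x_{g,t-1} & p_{g,t-1}^2
\end{bmatrix},
\end{align*}
which, using $x^2_{g,t}=x_{g,t}$ and $x^2_{g,t-1}=x_{g,t-1}$ for binary commitments, the identity $x_{g,t-1}p_{g,t-1}=p_{g,t-1}$ implied by [\ref{cons_cap_p}], and the definitions $u_{g,t}=x_{g,t-1}x_{g,t}$, $y_{g,t}=p_{g,t-1}x_{g,t}$ and $o_{g,t-1}=p^2_{g,t-1}$, is exactly the matrix appearing in [\ref{conic1}].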
In order to prove the validity of constraint [\ref{cons_Kd}], suppose that $m^{\downarrow}_g \geq 2$, and consider the following vector of monomials:
\begin{align}
\boldsymbol{\phi}^{\downarrow}_{g,t} \equiv [&w^{\downarrow}_{g,t},\;
x_{g,t-1}w^{\downarrow}_{g,t},\;
\ubar{h}_{g,t-1}w^{\downarrow}_{g,t},\;
\bar{h}_{g,t-1}w^{\downarrow}_{g,t}]^{\top},
\end{align}
where
\begin{align}
w^{\downarrow}_{g,t} \equiv \sqrt{1-x_{g,t-2}+x_{g,t-1}-x_{g,t}}\;.
\end{align}
Observe that all four inequalities encapsulated in [\ref{cons_Kd}] can be concluded from the conic inequality $\boldsymbol{\phi}^{\downarrow}_{g,t}(\boldsymbol{\phi}^{\downarrow}_{g,t})^{\top}\succeq0$.
If $m^{\uparrow}_g \geq 2$, the validity of the constraint [\ref{cons_Ku}] can be similarly proven by defining
\begin{align}
w^{\uparrow}_{g,t} \equiv\sqrt{x_{g,t-2}-x_{g,t-1}+x_{g,t}}\;,
\end{align}
and forming the vector of monomials
\begin{align}
\boldsymbol{\phi}^{\uparrow}_{g,t} \equiv [&w^{\uparrow}_{g,t},\;
x_{g,t-1}w^{\uparrow}_{g,t},\;
\ubar{h}_{g,t-1}w^{\uparrow}_{g,t},\;
\bar{h}_{g,t-1}w^{\uparrow}_{g,t}]^{\top}.
\end{align}
Finally, the four inequalities from [\ref{cons_Ku}] can be inferred from the conic inequality $\boldsymbol{\phi}^{\uparrow}_{g,t}(\boldsymbol{\phi}^{\uparrow}_{g,t})^{\top}\succeq0$.
\qed
\end{document}
\begin{document}
\begin{abstract}
We explore the notion of a span of cospans and define, for them, horizontal and vertical composition. These compositions satisfy the interchange law if working in a topos $\cat{C}$ and if the span legs are monic. A bicategory is then constructed from $\cat{C}$-objects, $\cat{C}$-cospans, and doubly monic spans of $\cat{C}$-cospans. The primary motivation for this construction is an application to graph rewriting.
\end{abstract}
\title{Spans of cospans}
\author{Daniel Cicala}
\maketitle
\section{Introduction}
There is currently interest in studying complex networks through the simpler networks of which they are comprised. This point of view is known as \textit{compositionality}. Various flavors of graphs (directed, weighted, colored, etc.) play an important role in this program because they are particularly well suited to model networks \cite{Baez_CompFrameMarkovProcess,Baez_CompFrameLinearNetworks,RoseSabadinWalters_SepAlgNCospansGraphs,RoseSabadinWalters_CalcColimsComp}. By adding a bit of structure to graphs, we can decide how to glue graphs together to make larger graphs.
The structure that we want to consider is given by choosing two subsets of nodes, named \textit{inputs} and \textit{outputs}. When the inputs of one graph equal the outputs of another, we can glue the graphs together. One method for adding this structure is to use a cospan of graphs $I \to G \leftarrow O$ where $I$ and $O$ are discrete. Then gluing graphs together becomes a matter of composing cospans, which B\'{e}nabou \cite{Benabou_Bicats} described using pushouts in the context of a bicategory whose morphisms are cospans and $2$-morphisms are maps of cospans. Rebro \cite{Rebro_Span2} extended this idea to give a bicategory whose $2$-morphisms are cospans of cospans. See the figure below to see what this means.
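For a toy illustration of this gluing (our example, not taken from the cited sources): in $\cat{Set}$, composing the cospans $\{x\} \to \{x,y\} \leftarrow \{y\}$ and $\{y\} \to \{y,z\} \leftarrow \{z\}$ by pushing out over $\{y\}$ yields
\[
\{x\} \longrightarrow \{x,y\}+_{\{y\}}\{y,z\} \cong \{x,y,z\} \longleftarrow \{z\},
\]
so the two pieces are glued along their shared boundary point $y$. Composing cospans of graphs whose feet are discrete works in the same way, identifying the outputs of the first graph with the inputs of the second.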
\begin{figure}
\caption{Various morphisms of cospans}
\label{fig.MapOfCospans}
\label{fig.CospanOfCospans}
\label{fig.SpanOfCospans}
\end{figure}
The goal of this paper is to explore another tool to study networks: spans of cospans (see Figure \ref{fig.SpanOfCospans}). Grandis and Par\'{e} studied these in the context of intercategories and showed that lax interchange held. We will build a bicategory from certain spans of cospans. Both B\'{e}nabou and Rebro only required sufficient colimits to compose maps of cospans and cospans of cospans, respectively. For us, we require sufficient limits and colimits while also ensuring they play well together. For this reason, we start with a topos $\cat{C}$, then let our $0$-cells be the objects of $\cat{C}$, $1$-cells the cospans in $\cat{C}$, and $2$-cells isomorphism classes of spans (with monic legs) of cospans. Our main theorem is that this construction, named $\cat{MonSp(Csp(C))}$, really gives a bicategory.
The specific motivation for this particular construction is to create a framework to house the rewriting of graphs (with inputs and outputs). Applying our construction to the topos $\cat{Graph}$, we consider $\cat{Rewrite}$, the full sub-bicategory of $\cat{MonSp(Csp(Graph))}$ whose objects are the discrete graphs. The $2$-cells of $\cat{Rewrite}$ represent all possible ways to rewrite one graph into another that respect the inputs and outputs. In this paper, graph rewrites are performed using the double pushout method. This is explained in Section \ref{sec.Rewriting}, though \cite{Ehrig_GraphGramAlgAp} and \cite{LackSoboc_AdhesiveCategories} contain more detailed accounts. Here is an example of what a $2$-cells in $\cat{Rewrite}$ will look like:
\[
\label{NoRef_IntroRewrite2Cell}
\diagram{NoRef_IntroRewrite2Cell}
\]
Inputs are depicted with `$\circ$', outputs with `$\square$', and nodes depicted with `$\bullet$' are neither. The picture above illustrates the rewriting of the top graph to the bottom graph by deleting an edge and adding a loop.
The structure of this paper is as follows. We begin Section \ref{sec.SpansOfCospans} by defining spans of cospans and two ways to compose them. After declaring that we will be working in a topos and that the legs of the spans of cospans will be monic, we show that the compositions satisfy an interchange law. This section ends with a construction of a bicategory whose $2$-cells are spans of cospans. Strictly speaking, there is no need for the full strength of a topos, so in Section \ref{sec.Disc on Assump}, we discuss how we might weaken our assumptions so that interchange still holds. Then in Section \ref{sec.Rewriting} we give a brief introduction to graph rewriting and present our motivating example: $\cat{Rewrite}$, a bicategory that contains all possible ways to rewrite graphs.
The author would like to thank John Baez for many helpful discussions as well as Blake Pollard and Jason Erbele for contributing the Boolean algebra counterexample in Section \ref{sec.Disc on Assump}.
\section{Spans of cospans}
\label{sec.SpansOfCospans}
We begin by introducing spans of cospans. Given that our motivation is to have these be $2$-cells in some bicategory, we also show how to compose them. In fact, there are two ways to do so and these will correspond to what will eventually be horizontal and vertical composition. Of course, we would like for these compositions to play nicely together and so we finish this section by showing that an interchange law holds, under certain assumptions.
Before we begin, a few remarks on convention are in order. First, throughout this paper, tailed arrows ``$\rightarrowtail$" refer to monics, and two headed arrows ``$\twoheadrightarrow$" to quotient maps. Also, hooked arrows ``$\hookrightarrow$'' are canonical inclusions, which will be labeled $\iota_x$ when its codomain is $x$. To declutter the diagrams, only arrows that will eventually be referenced are given names. Spans $A \leftarrow B \to C$ are denoted by $B \colon A \xrightarrow{\mathit{sp}} C$ and cospans $X \to Y \leftarrow Z$ by $Y \colon X \xrightarrow{\mathit{csp}} Z$. This notation is vague, but context should dispense with any confusion. Note that, when first defining spans of cospans below, the object names might seem to be oddly chosen. The intention is to develop a consistency that will carry into the proof of the interchange law, at which point, the naming will seem more methodical.
\subsection{Spans of cospans and their compositions}
Suppose that we are working in a category $\cat{C}$. Given a parallel pair of cospans $L$, $S \colon X \xrightarrow{\mathit{csp}} Y$, a \emph{span of cospans} is a span $S' \colon L \xrightarrow{\mathit{sp}} S$ such that
\[
\label{NoRef_SpanOfCospanDef}
\diagram{NoRef_SpanOfCospanDef}
\]
commutes. Given spans of cospans $S'_1,S'_2 \colon L \xrightarrow{\mathit{sp}} S$, a morphism of spans of cospans is a $\cat{C}$-morphism $S'_1 \to S'_2$ such that the diagram
\begin{equation}
\label{diag.2-cell iso}
\diagram{Diag_2CellIso}
\end{equation}
commutes. This is an isomorphism of spans of cospans exactly when the $\cat{C}$-morphism is.
There are two different ways to turn compatible pairs of spans of cospans into a single span of cospans. Actually, we are foreshadowing that spans of cospans will soon be $2$-cells in a bicategory. Instead of beating around the bush, we immediately name these assignments for what they are: vertical and horizontal composition. As we will see, $\cat{C}$ should have enough limits and colimits for the compositions to be defined. Also, for the present moment, compositions will only be defined up to isomorphism. We will also hold off on looking for the typical properties composition should satisfy until we introduce the bicategorical structure.
Take a pair of spans of cospans $S' \colon L \xrightarrow{\mathit{sp}} S$ and $S'' \colon S \xrightarrow{\mathit{sp}} L'$. Define \textit{vertical composition} $\circ_v$ by
\begin{equation}
\label{eq.VertComp}
S'' \circ_v S' \coloneqq S' \times_S S'' \colon L \xrightarrow{\mathit{sp}} L'.
\end{equation}
Diagrammatically, this is
\[
\label{NoRef_VertCompDef}
\diagram{NoRef_VertCompDef}
\]
Now, let $L,S \colon X \xrightarrow{\mathit{csp}} Y$ and $R,T \colon Y \xrightarrow{\mathit{csp}} Z$ be cospans and let $S' \colon L \xrightarrow{\mathit{sp}} S$ and $T' \colon R \xrightarrow{\mathit{sp}} T$ be spans of cospans. Define the assignment \textit{horizontal composition} $\circ_h$ by
\begin{equation}
\label{eq.HorComp}
T' \circ_h S' \coloneqq S' +_Y T' \colon L +_Y R \xrightarrow{\mathit{sp}} S +_Y T,
\end{equation}
which corresponds to
\[
\label{NoRef_HorCompDef}
\diagram{NoRef_HorCompDef}
\]
At this point, it is natural to ask whether the interchange law holds between vertical and horizontal composition. It does, but not without some further assumptions.
\subsection{The interchange law}
Let $\cat{C}$ be a topos with chosen pushouts and let both legs of each span of cospans be monic. To see examples of where the interchange law fails without these assumptions, see Section \ref{sec.Disc on Assump}.
The first thing we want to do is to show that the vertical and horizontal compositions are well-defined, up to isomorphism. To this end, we give a lemma that will be put to work several times during the course of this section.
\begin{lem}
\label{lem.helpful little lemma}
Given a diagram
\begin{equation}
\label{diag.helpful little lemma}
\diagram{Diag_HelpfulLittleLemma}
\end{equation}
we get a pushout
\begin{equation}
\label{diag.helpful pushout}
\diagram{Diag_HelpfulPushout}
\end{equation}
such that the canonical arrows $\gamma$
and $\gamma'$ are monic.
\end{lem}
\begin{proof}
Using its universal property, we see that $\gamma$ factors through $B'+C$ as seen in the diagram
\[
\label{NoRef_HelpfulLemmaProof1}
\diagram{NoRef_HelpfulLemmaProof1}
\]
It is straightforward to check that the squares are both pushouts. By Lemma \ref{lem.adhesive properties}, we get that $\gamma$ must be monic, and also that $\gamma'$ will be monic once \eqref{diag.helpful pushout} is shown to be a pushout.
One can check that the right hand square commutes by using the universal property of $B+C$. To see that this square is a pushout, set up a cocone $D$
\begin{equation}
\label{diag.helpful lemmma cocone}
\diagram{Diag_HelpfulLemmmaCocone}
\end{equation}
Then $d'\iota_{B'}$, $d'\iota_{C'}$, and $D$ form a cocone under the span $B' \leftarrow A \to C'$ on the bottom face of diagram \eqref{diag.helpful little lemma}. This induces the canonical map $\gamma'' \colon B'+_AC' \to D$. It follows that $d'\iota_{B'}=\gamma'' q' \iota_{B'}$ and $d'\iota_{C'}=\gamma'' q' \iota_{C'}$. Therefore, $d'=\gamma'' q'$ by the universal property of coproducts.
Furthermore, $dq\iota_B$, $dq\iota_C$, and $D$ form a cocone under the span $B \leftarrow A \to C$ on the top face of diagram \eqref{diag.helpful little lemma}. Then $dq\iota_B = d'\gamma\iota_B = \gamma'' q'\gamma\iota_B = \gamma'' \gamma' q \iota_B$ and $dq\iota_C = d'\gamma\iota_C = \gamma'' q'\gamma\iota_C = \gamma'' \gamma' q \iota_C$, meaning that both $d$ and $\gamma''\gamma'$ are the canonical map $B+_AC \to D$ induced by this cocone. Hence $d=\gamma''\gamma'$.
The universality of $\gamma''$ with respect to diagram \eqref{diag.helpful lemmma cocone} follows from the universality of $\gamma''$ with respect to $B'+_AC'$.
\end{proof}
\begin{lem}
Vertical and horizontal composition of
spans of cospans respects monics.
\end{lem}
\begin{proof}
The result for vertical composition
follows from the fact that pullbacks
respect monics. The result for horizontal
composition follows from applying Lemma
\ref{lem.helpful little lemma} to the
diagrams
\[
\diagram{NoRef_CompRespectMonics1}
\quad
\diagram{NoRef_CompRespectMonics2}
\qedhere
\]
\end{proof}
The interchange law requires that, given composable spans of cospans
\begin{equation}
\label{diag.2cells interchanged}
\diagram{Diag_2CellsInterchanged}
\end{equation}
there is an isomorphism:
\begin{equation}
\label{eq.interchange equation}
\left( S' \circ_\t{v} S'' \right) \circ_\t{h}
\left( T' \circ_\t{v} T'' \right) \cong
\left( S' \circ_\t{h} T' \right) \circ_\t{v}
\left( S'' \circ_\t{h} T'' \right).
\end{equation}
The left hand side corresponds to first applying vertical composition then horizontal composition. The right hand side swaps the order of composition. This isomorphism will later strengthen to an equality when isomorphism classes of spans of cospans are the $2$-cells of a bicategory.
It is straightforward, using \eqref{eq.VertComp} and \eqref{eq.HorComp}, to see that \eqref{eq.interchange equation} reduces to finding an isomorphism
\begin{equation}
\label{eq.interchange simplified}
(S' \times_S S'') +_Y (T' \times_T T'')
\cong
(S' +_Y T') \times_{S+_YT} (S'' +_Y T'')
\end{equation}
of spans of cospans.
To simplify our notation, write:
\begin{gather*}
A \coloneqq (S' \times_SS'') + (T' \times_TT''), \quad
A_Y \coloneqq (S' \times_SS'') +_Y (T' \times_TT''),\\
B \coloneqq (S'+T') \times_{S+T} (S''+T''), \quad
B_Y \coloneqq (S'+_YT') \times_{S+_YT} (S''+_YT'').
\end{gather*}
Now, apply Lemma \ref{lem.helpful little lemma} to the diagram
\[
\label{NoRef_GettingAMaps}
\diagram{NoRef_GettingAMaps}
\]
to get the pushout
\[
\label{NoRef_AMaps}
\diagram{NoRef_AMaps}
\]
Similarly, we get pushouts
\[
\diagram{NoRef_APrimeMaps}
\quad
\diagram{NoRef_APrimePrimeMaps}
\]
Now, $A$ forms a cone over the cospan $S+T \colon S'+T' \xrightarrow{\mathit{csp}} S'' + T''$ via the maps $a$, $a'$, and $a''$. And so, we get a canonical map $\theta \colon A \to B$.
\begin{lem}
\label{lem:pullback over subobject}
Given cospans $Y$, $W \colon X \xrightarrow{\mathit{csp}} Z$ such that the legs of $W$ factor through a monic $Y \rightarrowtail W$, there is a unique isomorphism $X \times_Y Z \cong X \times_W Z$.
\end{lem}
\begin{proof}
Via the projection maps, $X \times_Y Z$ forms a cone over the cospan $W \colon X \xrightarrow{\mathit{csp}} Z$ and, also, $X \times_W Z$ forms a cone over the cospan $Y \colon X \xrightarrow{\mathit{csp}} Z$, though the latter requires the monic $Y \rightarrowtail W$ to do so. Universality implies that the induced maps are mutual inverses and they are the only such pair.
\end{proof}
\begin{lem}
\label{lem.Theta Iso}
The map $\theta \colon A \to B$ is an isomorphism.
\end{lem}
\begin{proof}
Because colimits are stable under pullback \cite[Thm.~4.7.2]{MacLaneMoerdijk_SheavesGeomLogic}, we get an isomorphism
\[
\gamma \colon (S'\times_{S+T}S'') +(S'\times_{S+T}T'') +(T'\times_{S+T}S'') +(T'\times_{S+T}T'') \to B.
\]
But $S'\times_{S+T}T''$ and $T'\times_{S+T}S''$ are initial. To see this, recall that in a topos, all maps to the initial object are isomorphisms. Now, consider the diagram
\[
\label{NoRef_ShowingInitialObject}
\diagram{NoRef_ShowingInitialObject}
\]
whose lower right square is a pullback because coproducts are disjoint in topoi. Similarly, $T'\times_{S+T}S''$ is initial. Hence we get a canonical isomorphism
\begin{equation} \label{eq:B second iso}
\gamma' \colon (S'\times_{S+T}S'')+(T'\times_{S+T}T'') \to B
\end{equation}
that factors through $\gamma$. But Lemma \ref{lem:pullback over subobject}
gives unique isomorphisms $S' \times_{S} S'' \cong S' \times_{S+T} S''$ and $T'\times_{T} T'' \cong T' \times_{S+T} T''$. This produces a canonical isomorphism
\[
\gamma'' \colon A \to (S'\times_{S+T}S'')+(T'\times_{S+T}T'').
\]
One can show that $\theta = \gamma' \circ \gamma''$ using universal properties.
\end{proof}
Now, let us consider the following diagram:
\begin{equation}
\label{diag.the big cube}
\diagram{Diag_BigCube}
\end{equation}
where $\theta_Y$ and $\psi$ are the canonical maps. Observe that $\psi$ factors through $\theta_Y$ in the above diagram; this follows from the universal property of pullbacks. We also have, by the previous lemma, that the top square is a pullback.
\begin{lem}
\label{lem.theta_Y iso}
The map $\theta_Y \colon A_Y \to B_Y$ is an isomorphism.
\end{lem}
\begin{proof}
Because we are working in a topos, it suffices to show that $\theta_Y$ is both monic and epic. It is monic because $a'_Y$ is monic.
To see that $\theta_Y$ is epic, it suffices to show that $\psi$ is epic. The front and rear right faces of \eqref{diag.the big cube} are pushouts by Lemma \ref{lem.helpful little lemma}. Then because the top and bottom squares of \eqref{diag.the big cube} are pullbacks consisting of only monomorphisms, Lemma \ref{lem.vk dual} implies that the front and rear left faces are pushouts. However, as pushouts over monos, Lemma \ref{lem.adhesive properties} tells us they are pullbacks. But in a topos, regular epis are stable under pullback, and so $\psi$ is epic.
\begin{comment}
Observe the subdiagrams
\[
\begin{tikzcd}[row sep=1em,column sep=1em]
{A}
\ar[d,twoheadrightarrow]
\ar[r,"a'"] &
{S'+T'}
\ar[r]
\ar[d,twoheadrightarrow] &
{S+T}
\ar[d,twoheadrightarrow] \\
{A_Y}
\ar[r,"a'_Y"] &
{S'+_YT'}
\ar[r]
& {S+_YT}
\end{tikzcd}
\begin{tikzcd}[row sep=1em,column sep=1em]
{A}
\ar[d,twoheadrightarrow]
\ar[r,"a''"] &
{S''+T''}
\ar[r]
\ar[d,twoheadrightarrow] &
{S+T}
\ar[d,twoheadrightarrow] \\
{A_Y}
\ar[r,"a''_Y"] &
{S''+_YT''}
\ar[r] &
{S+_YT}
\end{tikzcd}
\]
of \eqref{diag.the big cube}, each of whose outer and left squares are pushouts. As in Lemma \ref{lem.helpful little lemma}, the right squares are pushouts. Since the top and bottom squares in \eqref{diag.the big cube} are pullbacks consisting solely of monomorphisms, Lemma \ref{lem.vk dual} implies that the faces
\begin{equation}
\label{diag.showing epic}
\begin{tikzcd}[row sep=1em,column sep=1em]
{A}
\ar[r,rightarrowtail]
\ar[d,"\psi"] &
{S'+T'}
\ar[d,twoheadrightarrow] \\
{B_Y}
\ar[r,rightarrowtail] &
{S'+_YT'}
\end{tikzcd}
\begin{tikzcd}[row sep=1em,column sep=1em]
{A}
\ar[r,rightarrowtail]
\ar[d,"\psi"] &
{S''+T''}
\ar[d,twoheadrightarrow] \\
{B_Y}
\ar[r,rightarrowtail] &
{S''+_YT''}
\end{tikzcd}
\end{equation}
are pushouts. However, as pushouts over
monos,
Lemma \ref{lem.adhesive properties} tells
us
they are pullbacks. In a topos, regular
epis
are stable under pullback. Applying this
to
\eqref{diag.showing epic}, we conclude
that
$\psi$ is epic.
\end{comment}
\end{proof}
It remains to show that $\theta_Y$ serves as
an isomorphism between spans of cospans. This
amounts to showing that
\begin{equation}
\label{diag.theta 2-cell iso}
\diagram{Diag_Theta2CellIso}
\end{equation}
commutes. Here $g$ and $k$ are induced from applying vertical composition before horizontal, $h$ from applying horizontal composition before vertical, $j$ is from composing in either order, $f$ is from \eqref{eq.HorComp}, and $p$ is from \eqref{diag.the big cube}. The top and bottom faces commute by construction.
\begin{lem}
The inner triangles of diagram \eqref{diag.theta 2-cell iso} commute. That is, we have $k=fp \theta_Y$ and $h=\theta_Yg$.
\end{lem}
\begin{proof}
To see that $k=fp\theta_Y$, consider the diagram
\[
\label{NoRef_2CellIso}
\diagram{NoRef_2CellIso}
\]
The bottom face is exactly the pushout diagram from which $f$ was obtained. Universality implies that $k = f a'_Y$ and, as seen in \eqref{diag.the big cube}, $a'_Y = p \theta_Y$.
That $h=\theta_Yg$ follows from
\[
fph=j=kg=fp\theta_Yg
\]
and the fact that $fp$ is monic.
\end{proof}
Of course, we have only shown that two of the four inner triangles commute, but we can replicate our arguments to show the remaining two commute as well. This lemma was the last step in proving the following interchange law.
\begin{thm}
\label{thm.interchange law}
Given diagram \eqref{diag.2cells interchanged} in a topos, there is a canonical isomorphism $(S' \times_S S'') +_Y (T' \times_T T'') \cong (S' +_Y T') \times_{S +_Y T} ( S'' +_Y T'')$.
\end{thm}
\subsection{Constructing the bicategory}
Let $\cat{C}$ be any topos. We will commence construction of a bicategory named $\cat{MonSp(Csp(C))}$, or $\widetilde{\mathbf{C}}$ for short. The $0$-cells of $\widetilde{\mathbf{C}}$ are just the $\cat{C}$-objects. For $0$-cells $X$ and $Y$, build a category $\widetilde{\mathbf{C}}(X,Y)$ whose objects are $\cat{C}$-cospans and morphisms are isomorphism classes of $\cat{C}$-spans of cospans whose legs are both monic. Composition in $\widetilde{\mathbf{C}} (X,Y)$ is the vertical composition $\circ_v$ introduced in \eqref{eq.VertComp}. It is straightforward to check that associativity holds and that spans of cospans whose legs are identity serve as identities.
The composition functor is given by an assignment
\[
\otimes \colon \widetilde{\mathbf{C}}(Y,Z) \times \widetilde{\mathbf{C}}(X,Y) \to \widetilde{\mathbf{C}}(X,Z)
\]
that acts on $1$-cells by $(T,S) \mapsto S \otimes_Y T$ and on $2$-cells by horizontal composition $\circ_h$ from \eqref{eq.HorComp}. It is straightforward to check that $\otimes$ preserves identities. Theorem \ref{thm.interchange law} ensures that $\otimes$ preserves composition.
For every $0$-cell $X$, the identity functor $\cat{1} \to \widetilde{\mathbf{C}} (X,X)$ picks out the $2$-cell with all identity maps on $X$. The associator is made of $2$-cells
\[
R+_XS+_YT \colon (R+_XS)+_YT \xrightarrow{\mathit{sp}} R+_X(S+_YT).
\]
The right unitor is made of $2$-cells $S \colon S+_YY \xrightarrow{\mathit{sp}} S$. Likewise, the left unitor has $2$-cells $T \colon T \xrightarrow{\mathit{sp}} Y+_YT$. The legs for each of the above are the obvious choices. The pentagon and triangle identities follow from the associativity, up to isomorphism, of pushouts.
Given all of the data just laid out, we have the main theorem of the paper.
\begin{thm}
If $\cat{C}$ is a topos, then $\cat{MonSp(Csp(C))}$ is a bicategory.
\end{thm}
\section{A discussion on the assumptions}
\label{sec.Disc on Assump}
Can we expand the domain on which this construction works? Apart from ensuring sufficiently many limits and colimits, the primary source of roadblocks is the interchange law. As we discuss below, it is not absolutely necessary to work strictly within a topos, but we do so in order to be expeditious. To lay out, one by one, the requirements for our interchange law to hold would be exhausting and leave us little energy to work through its proof. To do this is even less reasonable given that we can just shout ``topos'' and move on. However, listing these requirements is interesting enough to take a look at here.
Before digging deeper into the properties used, let's convince ourselves of the necessity for monic legs within our span of cospans.
\begin{ex}
Consider the category $\cat{Set}$ of sets and functions. We will relax the assumption that the legs of the spans of cospans are both monic. Indeed, suppose that $S'$, $S''$, and $T'$ are two-element sets and $S$, $Y$, $T$, and $T''$ are singletons. The functions can be any of the limited choices we have. After several routine calculations, we determine that $(S' \times_S S'') +_Y (T' \times_T T'')$ has cardinality $5$ and $(S' +_Y T') \times_{S+_YT} (S''+_YT'')$ has cardinality $6$.
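To make the count explicit (a quick sketch of the bookkeeping; the particular choice of functions does not matter because $S$, $T$, $Y$, and $T''$ are singletons), note that a pullback over a singleton is a product and a pushout over the singleton $Y$ glues along a single point, so
\[
\left| (S' \times_S S'') +_Y (T' \times_T T'') \right| = (2\cdot 2)+(2\cdot 1)-1 = 5,
\qquad
\left| (S' +_Y T') \times_{S+_YT} (S''+_YT'') \right| = (2+2-1)\cdot(2+1-1) = 6.
\]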
\end{ex}
So we see that even in the archetypal topos, the legs of the spans must be monic for the interchange law to hold. But even if they are, this law may fail if our category $\cat{C}$ is not a topos. The next example illustrates this.
\begin{ex}
Consider the Boolean algebra on a two-element set. This is the category $0 \to 1$ with products given by meet and coproducts given by join. Note that this is not a topos. Indeed, the only non-identity morphism is both monic and epic but, as no inverse exists, it is not an isomorphism. Recalling the interchange equation \eqref{eq.interchange equation}, suppose that we have $Y=S''=T'=0$ and $S=S'=T=T''=1$. It is straightforward to check that $(S' \times_S S'') +_Y (T' \times_T T'') = 0$ and $(S' +_Y T') \times_{S+_YT} (S''+_YT'')=1$. That is, the interchange equation does not hold. Because this Boolean algebra can be embedded into any other, it follows that interchange does not hold for any Boolean algebra.
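As a quick check of these values, recall that in this poset pullbacks are meets and pushouts are joins (when they exist), so
\[
(S' \times_S S'') +_Y (T' \times_T T'') = (1\wedge 0)\vee(0\wedge 1) = 0,
\qquad
(S' +_Y T') \times_{S+_YT} (S''+_YT'') = (1\vee 0)\wedge(0\vee 1) = 1.
\]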
\end{ex}
Now that we are convinced that we actually do need to assume something for interchange to work, we can list our requirements. The obvious place to start is by asking for our category to have enough limits and colimits.
In Lemma \ref{lem.helpful little lemma}, we use that pushouts respect monics. This occurs in topoi. Indeed, this occurs in adhesive categories which are discussed in Section \ref{sec.Rewriting}.
In Lemma \ref{lem.Theta Iso}, we use a couple of facts. First, we use that coproducts are disjoint, which means that a pullback over coproduct inclusions is initial. We also use that colimits -- particularly coproducts -- are stable under pullback. The full force of a topos is not needed to satisfy this requirement. Indeed, looking at \cite[Thm.~1.4.9]{MacLaneMoerdijk_SheavesGeomLogic}, we simply need our category to be locally cartesian closed.
In Lemma \ref{lem.theta_Y iso}, we use that a monic epimorphism is an isomorphism in topoi. This is surely not true in general, but it is true that a monic regular epimorphism is always an isomorphism. Notice that the vertically aligned epimorphisms in diagram \eqref{diag.the big cube} are all regular since they are all coequalizers. For instance $S+T \twoheadrightarrow S+_YT$ is a coequalizer over $Y \rightrightarrows S+T$. So we can merely ask for pullbacks to preserve regular epimorphisms. Note that this does happen in regular categories, which form a larger class than topoi.
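To spell out this last point (a small unpacking of the coequalizer claim), write the legs of the cospans as $Y \to S$ and $Y \to T$; then $S+_YT$ is the coequalizer of the parallel pair of composites into the coproduct,
\[
Y \rightrightarrows S+T \twoheadrightarrow S+_YT,
\]
where the two parallel arrows are $Y \to S \hookrightarrow S+T$ and $Y \to T \hookrightarrow S+T$.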
Also in Lemma \ref{lem.theta_Y iso}, we make use of Lemmas \ref{lem.adhesive properties} and \ref{lem.vk dual}. These hold in any adhesive category.
It is clear that the properties of adhesive categories play an important role in ensuring that $\widetilde{\mathbf{C}}$ is a bicategory, as does being locally cartesian closed and regular. Certainly, topoi are a well-known, large family of categories that are contained in the intersection of all of these classes.
\section{An Informal Introduction to Graph Rewriting}
\label{sec.Rewriting}
For those who are not familiar with rewriting systems, and graph rewriting in particular, we provide a brief introduction. For a more in-depth and rigorous viewpoint, see \cite{Baader_TermRewritingAllThat}, \cite{Ehrig_GraphGramAlgAp}, or \cite{LackSoboc_AdhesiveCategories}.
There are many methods of rewriting found throughout mathematics, computer science, and linguistics. The general idea is that we begin with a collection of rules, a collection of terms and a way to apply rules to certain, compatible terms. When applied to a term, a rule replaces a sub-term with a new sub-term. A simple, very informal example is found within the English language. Consider a set of rules
\[
\{ \t{(noun)} \mapsto x : x \t{ is an
English noun} \}.
\]
We can apply any one of these rules to
\[
\t{`The (noun) is behind you.'}
\]
to obtain heaps of grammatically correct, if potentially spooky, sentences.
The first rewriting methods were uni-dimensional, in that they were concerned with replacing a string of characters or letters. Many attempts to define multi-dimensional rewriting systems came up short in application and execution, but Ehrig, Pfender, and Schneider developed a categorical approach using graph morphisms and pushouts \cite{Ehrig_GraphGramAlgAp} that has since been studied extensively. This approach came to be known as \textit{double pushout graph rewriting}. This is what we are interested in and will consider in our bicategory $\cat{Rewrite}$ introduced below.
Here is how double pushout graph rewriting works. A \emph{production} is a span
\[
p: L \leftarrowtail K \rightarrowtail R
\]
of graphs with monic legs. Some authors call this a `linear production' to distinguish from spans with potentially non-monic legs, but we will not adopt this convention here. Given a production $p$, and a graph morphism $L \to C$, called a \emph{matching map}, such that there exists a diagram
\begin{equation}
\label{diag:derivation}
\diagram{Diag_Derivation}
\end{equation}
consisting of two pushout squares, we say that $D$ is a \emph{direct derivation} of $C$ and write $C \rightsquigarrow_E D$ or just $C \rightsquigarrow D$. It is common enough to decorate the arrow `$\rightsquigarrow$' with more information: for instance, the name of the production or the matching map. But that will not be necessary here. Observe that the objects $E$ and $D$ need not exist, but when they do, they are unique up to isomorphism \cite[Lemma 4.5]{LackSoboc_AdhesiveCategories}.
A \emph{grammar} $(\mathcal{G},\mathcal{P})$ is a set of graphs $\mathcal{G}$ paired with a
set of productions $\mathcal{P}$. A \textit{derivation of the grammar} is a string of direct derivations
\[
G_0 \rightsquigarrow G_1 \rightsquigarrow \dotsm \rightsquigarrow G_n
\]
from productions in $\mathcal{P}$ and with $G_0 \in \mathcal{G}$. We say that $G_n$ is a \emph{rewrite of $G_0$}. The \textit{language} $\mathcal{L}(\mathcal{G},\mathcal{P})$ generated by the grammar is the collection of all graphs $G$ such that there is a derivation $G_0 \rightsquigarrow^\ast G$ of the grammar. The idea is that one will study a language. Exactly what properties are interesting is beyond the scope of this discussion. Interested readers should consult the references mentioned at the beginning of this section. Instead of going deeper into the subject, we will briefly zoom out.
Searching for a general framework for term graph rewriting, Lack and Sobocinski \cite{LackSoboc_AdhesiveCategories} introduced a class of categories they call \emph{adhesive}. Roughly, a category is adhesive if it has pullbacks and pushouts along monomorphisms, and certain exactness conditions between pullbacks and pushouts hold. This is not a trivial class of categories given that topoi are adhesive \cite{LackSoboc_ToposesAdhesive}. Because of this, we were able to use the following lemmas, which were proven for adhesive categories, in our construction. However, we will just present them for topoi.
\begin{lem}[{\cite[Lemmas
4.2-3]{LackSoboc_AdhesiveCategories}}]
\label{lem.adhesive properties}
In a topos, monomorphisms are stable under pushout. Also, pushouts along monomorphisms
are pullbacks.
\end{lem}
\begin{lem}[{\cite[Lemma
6.3]{LackSoboc_AdhesiveCategories}}]
\label{lem.vk dual}
In a topos, consider a cube
\[
\label{NoRef_AdhsiveDualCube}
\diagram{NoRef_AdhsiveDualCube}
\]
whose top and bottom faces consist of only monomorphisms. If the top face is a pullback
and the front faces are pushouts, then the bottom face is a pullback if and only if the
back faces are pushouts.
\end{lem}
A good portion of the theory for double pushout graph rewriting has been extended to adhesive categories. So, while our focus is and will be on double pushout graph rewriting, there may be variations of the bicategory $\cat{Rewrite}$ (introduced below) that are of interest to computer scientists. For instance, the \emph{Schanuel topos} was used to model the $\pi$-calculus \cite{Fiore_OpModelsProcessCalulii} and adhesive categories allow us to extend rewriting to such settings. Our contribution is a bicategorical framework to house the rewriting as $2$-cells, though only for double pushout graph rewriting.
\subsection{$\cat{Rewrite}$}
\label{sec.Rewrite}
Here, we introduce the bicategory $\cat{Rewrite}$ as promised. To prepare, we begin by introducing a slight generalization of the double pushout graph rewriting concepts discussed above.
An \emph{interface} $(I,O)$ is a pair of discrete graphs and a \emph{production with interface} $(I,O)$, or simply \emph{$(I,O)$-production}, is a cospan of spans
\[
\label{NoRef_IOProduction}
\diagram{NoRef_IOProduction}
\]
Think of $I$ and $O$ as choosing inputs and outputs. Given a production with interface $(I,O)$, we say that a graph $G'$ is a \emph{direct $(I,O)$-derivation} of $G$ if there is a diagram
\[
\label{NoRef_IODerivation}
\diagram{NoRef_IODerivation}
\]
where the bottom squares are pushouts. We denote this as $G \rightsquigarrow G'$. An \emph{$(I,O)$-grammar} $(\mathcal{G},\mathcal{P})$ consists of a collection of graphs $\mathcal{G}$ and another of $(I,O)$-productions $\mathcal{P}$. Again, an $(I,O)$-grammar generates a language consisting of all graphs $G$ such that there is a chain of direct $(I,O)$-derivations $G_0 \rightsquigarrow G_1 \rightsquigarrow \dotsm \rightsquigarrow G_n=G$ from $\mathcal{P}$ such that $G_0 \in \mathcal{G}$. However, this time, we require such a chain to respect the inputs and outputs in the sense that
\begin{equation}
\label{diag.DerivationChain}
\diagram{Diag_DerivationChain}
\end{equation}
Now we can put these productions with interfaces into our bicategorical framework.
Consider the full sub-bicategory of $\widetilde{\mathbf{C}}$ for $\cat{C} := \cat{Graph}$ with objects the finite, discrete graphs. Here, a $1$-cell is a cospan $G \colon I \xrightarrow{\mathit{csp}} O$ where $G$ is a graph and $I$, $O$ are discrete graphs. A $2$-cell between $1$-cells $G'$ and $G''$ is a span of cospans $G \colon G' \xrightarrow{\mathit{sp}} G''$ which we think of as an $(I,O)$-derivation $G' \rightsquigarrow G''$. We will call this sub-bicategory $\cat{Rewrite}$ because the $2$-cells are exactly all the possible ways to rewrite one graph into another so that inputs and outputs are preserved. Given any $2$-cell $G \colon G' \xrightarrow{\mathit{sp}} G''$ in $\cat{Rewrite}$, the diagram
\[
\label{NoRef_2CellToRewrite}
\diagram{NoRef_2CellToRewrite}
\]
gives an $(I,O)$-derivation $G' \rightsquigarrow_G G''$. Conversely, any $(I,O)$-derivation \eqref{diag.DerivationChain} can be made into composable $2$-cells $E_i \colon G_i \xrightarrow{\mathit{sp}} G_{i+1}$ where the maps from $I$ and $O$ are the evident composites. Then, the vertical composition of the resulting $2$-cells gives us the desired span of cospans.
To better illustrate this, we will provide a concrete example of the dictionary between $\cat{Rewrite}$ and double pushout graph rewriting. Suppose we were given an $(I,O)$-derivation, with $I=\{\ast\}=O$, induced from the following double pushout graph rewriting diagram
\[
\label{NoRef_DPOGraphRewritingExample}
\diagram{NoRef_DPOGraphRewritingExample}
\]
where the functions are described by the labeling, the inputs are circled, and the outputs are squared. In words, we have rewritten the graph on the lower left by removing an edge $a \to c$ and adding a loop $c \to c$. The pullback of the span
\[
\label{NoRef_DPOGraphRewritingExample2}
\diagram{NoRef_DPOGraphRewritingExample2}
\]
is the graph
\[
\label{NoRef_DPOGraphRewritingExample3}
\diagram{NoRef_DPOGraphRewritingExample3}
\]
The corresponding $2$-cell is the diagram
\[
\label{NoRef_DPOGraphRewritingExample4}
\diagram{NoRef_DPOGraphRewritingExample4}
\]
Here we witness the advantage that graph rewriting has over graph morphisms in the realm of expressivity. There is no way to replace the $2$-cell above with a map of graph cospans.
There is nothing inherently special, from a mathematical point of view, about working with graphs and their morphisms. It is in applications where graphs gain importance. We can actually construct bicategories analogous to $\cat{Rewrite}$ from any topos.
\section{Conclusion and further work}
Our primary motivation for constructing this bicategory is as a way to study the gluing of graphs together in a way compatible with chosen input and output nodes. To this end, we defined a bicategory $\cat{Rewrite}$. This is a very large bicategory and in practice, one begins with a grammar and studies the resulting language. So, in an upcoming collaboration with Kenny Courser, we will look at relating languages to sub-bicategories of $\cat{Rewrite}$ generated by a grammar. In this same paper, we will study the structure of $\cat{MonSp(Csp(C))}$ alongside similar bicategories.
\begin{comment}
Though introduced by B\'{e}nabou
\cite{Benabou_Bicats}, we will present the
definition of a bicategory as given by
Leinster in \cite{Leinster_Bicats}.
\begin{defn}[Bicategory]
A \emph{bicategory} $\cat{B}$ consists of
the
following data:
\begin{itemize}
\item a collection of objects, or
$0$-cells, $\ob (\cat{B})$,
\item categories $\cat{B}(X,Y)$, for
all
$X,Y \in \ob (\cat{B})$, whose objects
we
call $1$-cells and morphisms $2$-cells,
\item composition functors
\begin{align*}
\otimes_{XYZ} \colon \cat{B}(Y,Z)
\times \cat{B}(X,Y) & \to
\cat{B}(X,Z)\\
(g,f) & \mapsto g \circ f \\
(\eta,\varepsilon) & \mapsto \eta
\circ_{\t{h}} \varepsilon
\end{align*}
for all $X,Y,Z \in \ob (\cat{B})$,
\item identity functors $I_X \colon
\cat{1}
\to \cat{B}(X,X)$, for all $X \in \ob
(\cat{B})$,
\item natural isomorphisms
\[
\alpha_{WXYZ} \colon \otimes_{WXZ}
(\otimes_{XYZ} \times \id) \Rightarrow
\otimes_{WYZ} (\id \times
\otimes_{WXY})
\]
for all $W,X,Y,Z \in \ob (\cat{B})$
called
the \emph{associators}, and
\item natural transformations
\begin{align*}
r_{XY} & \colon \otimes_{XXY} (\id
\times I_X ) \Rightarrow \id, \t{
and } \\
\ell & \colon \otimes_{XYY} (I_Y
\times \id ) \Rightarrow \id
\end{align*}
called the right and left unitors.
\end{itemize}
This data is subject to the equations
\begin{gather*}
(\id \otimes \alpha) \alpha (\alpha
\otimes \id) = \alpha \alpha, \\
(\id \otimes \ell) \alpha = r \otimes
\id .
\end{gather*}
called the pentagon and triangle
identities,
respectively.
\end{defn}
Next, we will introduce the notion of a
topos. There is a rich theory of topos that
we will not even begin to explore here, but
instead point the interested reader towards
\cite{BarrWells_ToposTriplesTheories,
Johnstone_ElephantV1,
MacLaneMoerdijk_SheavesGeomLogic} which are
just some of the many great books on the
subject. We are only interested in topos in so
far as they are a large, familiar, and easily
described class of categories for which our
construction works. There is a larger class of
categories on which we could construct our
bicategory of spans and cospans, but
describing it would require an unwieldy list
of properties. {\color{red} this sentence may
not be needed if we include a discussion on
the assumptions in the paper}
\begin{defn}[Topos]
In a category with a terminal object $1$,
a \emph{subobject classifier} is a
morphism $t \colon 1 \to \Omega$ with the
property that, for any monic $m \colon X
\to Y$, there is a unique morphism $\chi_m
\colon Y \to \Omega$ such that $m$ is the
pullback of $t$ along $\chi_m$. A
\emph{topos} is a category that has finite
colimits, is cartesian closed, and has a
subobject classifier.
\end{defn}
\end{comment}
\begin{comment}
Using its universal property, we see that
$\gamma$ factors through $B'+C$ as seen in
diagram
\[
\begin{tikzcd}
{B}
\ar[r,hookrightarrow]
\ar[d,rightarrowtail,"b"] &
{B+C}
\ar[d,"b+C"]&
{} \\
{B'}
\ar[r,hookrightarrow]&
{B'+C}
\ar[d,"B' + c"]
\ar[r,hookleftarrow] &
{C}
\ar[d,rightarrowtail, "c"]\\
{} &
{B'+C'}
\ar[r,hookleftarrow] &
{C'}
\end{tikzcd}
\]
It is straightforward {\color{red} IS IT?}
to check that the
squares are both pushouts, which preserve
monics (Lemma \ref{lem.adhesive
properties}). Hence $\gamma$ is monic. By
Lemma \ref{lem.adhesive properties}, the
monotonicity of $\gamma'$ will follow from
the right square being a pushout.
\end{comment}
\begin{comment}
\begin{lem}
Consider a diagram
\begin{equation}
\begin{tikzcd}[column sep=1.5em,row
sep=tiny]
{}
& {}
& {B}
\ar[ddd,rightarrowtail,"b" near
start]
\ar[rd,hookrightarrow,"\iota_B"]
& {}
& {}
\\ {A}
\ar[rru]
\ar[rd]
\ar[ddd,"="]
& {}
& {}
& {B+C}
\ar[ddd,dashed,"\psi"]
\ar[r,twoheadrightarrow,"q"]
& {B+_AC}
\ar[ddd,dashed,"\psi'"]
\\ {}
& {C}
\ar[rru,crossing over,
hookrightarrow,"\iota_C" near
start]
& {}
& {}
& {}
\\ {}
& {}
& {B'}
\ar[dr,hookrightarrow,"\iota_{B'}"]
& {}
& {}
\\ {A}
\ar[rru]
\ar[rd]
& {}
& {}
& {B'+C'}
\ar[r,twoheadrightarrow,"q'"]
& {B'+_AC'}
\\ {}
& {C'}
\ar[rru,hookrightarrow,
"\iota_{C'}"]
& {}
& {}
& {}
\ar[from=3-2,to=6-2,crossing
over,rightarrowtail,swap, "c" near
start]
\end{tikzcd}
\end{equation}
whose front and back faces commute and
where $\psi$ and $\psi'$ are the canonical
arrows. Then $\psi$ and $\psi'$ are monic
and
the right hand square is a pushout.
\end{lem}
\begin{proof}
Apply Lemmas \ref{lem.pushout along mono
and
injection} and \ref{lem.adhesive
properties}
to the diagram
\[
\begin{tikzcd}
{B}
\ar[r,hookrightarrow]
\ar[d,rightarrowtail,"b"] &
{B+C}
\ar[d,"b+C"]&
{} \\
{B'}
\ar[r,hookrightarrow]&
{B'+C}
\ar[d,"B' + c"]&
{C}
\ar[l,hookrightarrow]
\ar[d,rightarrowtail, "c"]\\
{} &
{B'+C'} &
{C'}
\ar[l,hookrightarrow]
\end{tikzcd}
\]
to obtain the monotonicity of $b+C$ and
$B'+c$. By universality,
$\psi=(B'+c)(b+C)$
hence $\psi$ is monic. By Lemma
\ref{lem.adhesive properties}, the
monotonicity of $\psi'$ will follow from
the
right square being a pushout.
One can check that the right hand square
commutes by using the universal property
of
$B+C$. To see that this square is a
pushout, set up a cocone $D$
\begin{equation}
\begin{tikzcd}[column sep=1.5em,row
sep=1.5em]
{B+C}
\ar[r,twoheadrightarrow, "q"]
\ar[d,"\psi"] &
{B+_AC}
\ar[d,"\psi'"]
\ar[ddr,bend left,"d"]&
{} \\
{B'+C'}
\ar[r,twoheadrightarrow,"q'"]
\ar[drr,bend right,"d'"]&
{B'+_AC'} &
{} \\
{} &
{} &
{D}
\end{tikzcd}
\end{equation}
Then $d'\iota_{B'}$, $d'\iota_{C'}$, and
$D$
form a cocone under the span $C'
\leftarrow C
\to B'$ on the bottom face of diagram
\eqref{diag.helpful little lemma}. This
induces the canonical map $\psi'' \colon
B'+_AC' \to D$. It follows that
$d'\iota_{B'}=\psi'' q' \iota_{B'}$ and
$d'\iota_{C'}=\psi'' q' \iota_{C'}$.
Therefore, $d'=\psi' q'$ by the universal
property of coproducts.
Furthermore, $dq\iota_B$, $dq\iota_C$, and
$D$
form a cocone under the span $C \leftarrow
A
\to B$ on the top face of diagram
\eqref{diag.helpful little lemma}. Then
$dq\iota_B = d'\psi\iota_B = \psi''
q'\psi\iota_B = \psi'' \psi' q \iota_B$
and
$dq\iota_C = d'\psi\iota_C = \psi''
q'\psi\iota_C = \psi'' \psi' q \iota_C$
meaning that both $d$ and $\psi''\psi'$
satisfy the canonical map $B+_AC \to D$.
Hence $d=\psi''\psi'$.
The universality of $\psi''$ with respect
to
Diagram \eqref{diag.helpful lemmma cocone}
follows from the universality of $\psi''$
with
respect to $B'+_AC'$.
\end{proof}
\end{comment}
\begin{comment}
First, we find the left hand side of
\eqref{eq.interchange equation} which
corresponds to the composite $2$-cell
resulting from applying vertical composition
before horizontal composition. To compute
this, we first find the $2$-cells $S'
\circ_\t{v} S''$ and $T' \circ_\t{v} T''$:
\[
\begin{tikzcd}[column sep=1em,row sep=1.5em]
{} &
{L} &
{} &
{R} &
{} \\
{X}
\ar[ru, bend left=20]
\ar[r]
\ar[rd, bend right=20]&
{S' \times_SS''}
\ar[u,rightarrowtail]
\ar[d,rightarrowtail]&
{Y}
\ar[lu, bend right=20]
\ar[l]
\ar[ld, bend left=20]
\ar[ru, bend left=20]
\ar[r]
\ar[rd, bend right=20]&
{T' \times_TT''}
\ar[u,rightarrowtail]
\ar[d,rightarrowtail] &
{Z}
\ar[lu, bend right=20]
\ar[l]
\ar[ld, bend left=20] \\
{} &
{L'} &
{} &
{R'} &
{}
\end{tikzcd}
\]
The resulting horizontal composite is the
$2$-cell
\begin{equation}
\label{diag.vert then hori}
\begin{tikzcd}[column sep=1em,row sep=1.5em]
{} &
{L+_YR} &
{} \\
{X}
\ar[ru, bend left=20]
\ar[r]
\ar[rd, bend right=20]&
{(S' \times_SS'') +_Y (T' \times_TT'')}
\ar[u,rightarrowtail,"f"]
\ar[d,rightarrowtail] &
{Z}
\ar[lu, bend right=20]
\ar[l]
\ar[ld, bend left=20]\\
{} &
{L'+_YR'} &
{}
\end{tikzcd}
\end{equation}
Next, we find the right hand side of
\eqref{eq.interchange equation} which
corresponds to composing horizontally, then
vertically. To compute this, we first get
composites $S' \circ_\t{h} T'$ and $S''
\circ_\t{h} T''$:
\begin{equation}
\label{diag.hori then vert 1st step}
\begin{tikzcd}[column sep=1em]
{} &
{L+_YR} &
{} \\
{} &
{S'+_YT'}
\ar[u,rightarrowtail,"g"]
\ar[d,rightarrowtail]&
{} \\
{X}
\ar[ruu, bend left=20]
\ar[ru, bend left=20]
\ar[r]
\ar[rd, bend right=20]
\ar[rdd, bend right=20]&
{S+_YT}&
{Z}
\ar[luu, bend right=20]
\ar[lu, bend right=20]
\ar[l]
\ar[ld, bend left=20]
\ar[ldd, bend left=20]\\
{} &
{S''+_YT''}
\ar[u,rightarrowtail]
\ar[d,rightarrowtail]&
{} \\
{} &
{L'+_YR'} &
{}
\end{tikzcd}
\end{equation}
Then vertical composition gives us the $2$-cell
\begin{equation}
\begin{tikzcd}[column sep=1em,row sep=1.5em]
\label{diag:hori then vert}
{} &
{L+_YR} &
{} \\
{X}
\ar[ru, bend left=20]
\ar[r]
\ar[rd, bend right=20] &
{(S' +_Y T') \times_{S +_YT} (S'' +_Y
T'')}
\ar[u,rightarrowtail]
\ar[d,rightarrowtail] &
{Z}
\ar[lu, bend right=20]
\ar[l]
\ar[ld, bend left=20]\\
{} &
{L'+_YR'} &
{}
\end{tikzcd}
\end{equation}
\end{comment}
\begin{comment}
To finish, we will show that equations
\[
S'\times_{S+T}S'' = S'\times_{S}S''
\t{ and }
T'\times_{S+T}T'' = T'\times_{T}T''
\]
hold in the poset $\Sub(S)$ of
subobjects of $S$. First, we have
\begin{equation}
\label{diag.S'x_S'' <= S'x_{S+T}S''}
\begin{tikzcd}[column sep=tiny,row
sep=tiny]
{S' \times_SS''}
\ar[rrrd,bend left=20]
\ar[rddd,bend right=20] &
{} &
{} &
{} \\
{} &
{S' \times_{S+T} S''}
\ar[rr]
\ar[dd] &
{} &
{S''}
\ar[dd]
\ar[ld] \\
{} &
{} &
{S}
\ar[rd,hookrightarrow] &
{} \\
{} &
{S'}
\ar[rr]
\ar[ru]&
{} &
{S+T}
\end{tikzcd}
\end{equation}
where the maps $S' \to S+T$ and $S'' \to
S+T$
are the evident compositions. This gives a
canonical map $S' \times_S S'' \to S'
\times_{S+T} S''$ which is a monomorphism
because all of the other maps in diagram
\eqref{diag.S'x_S'' <= S'x_{S+T}S''} are
monomorphisms. Hence
\begin{equation} \label{eq.S'x_SS'' <=
S'x_{S+T}S''}
S'\times_{S+T}S'' \leq S'\times_{S}S''
\end{equation}
in $\Sub{S}$.
Similarly, we have the diagram
\begin{equation}
\label{diag.S'x_{S+T}S'' <= S'x_{S}S''}
\begin{tikzcd}[column sep=tiny,row
sep=tiny]
{S' \times_{S+T} S''}
\ar[rrrd,bend left=20]
\ar[rddd,bend right=20] &
{} &
{} &
{} \\
{} &
{S' \times_{S} S''}
\ar[rr]
\ar[dd] &
{} &
{S''}
\ar[dd]
\ar[ld] \\
{} &
{}&
{S}
\ar[rd,hookrightarrow] &
{} \\
{} &
{S'}
\ar[rr]
\ar[ru]&
{} &
{S+T}
\end{tikzcd}
\end{equation}
which gives a canonical map $S'
\times_{S+T}
S'' \to S' \times_S S''$. Hence
\begin{equation}
\label{eq.S'x_{S+T}S'' <= S'x_S+S''}
S'\times_SS'' \leq S'\times_{S+T}S''
\end{equation}
in $\Sub{S}$.
Combining \eqref{eq.S'x_SS'' <=
S'x_{S+T}S''}
and \eqref{eq.S'x_{S+T}S'' <= S'x_S+S''},
we
have that $S'\times_SS'' =
S'\times_{S+T}S''$
in $\op{Sub}(S)$. Back in $\cat{C}$, this
equality weakens to an isomorphism.
Similarly, we get that $T'\times_TT''
\cong
T'\times_{S+T}T''$ in $\cat{C}$. Recalling
\eqref{eq:B second iso}, we have that $A
\cong
B$.
\end{comment}
\end{document}
|
\begin{document}
\begin{frontmatter}
\title{A single trapped ion in a finite range trap}
\author[]{M. Bagheri Harouni \corref{cor1}}
\ead{[email protected]}
\author[]{M. Davoudi Darareh}
\ead{[email protected]}
\address{Department of Physics,
Faculty of Science, University of Isfahan, Hezar Jerib, Isfahan,
81746-73441, Iran} \cortext[cor1]{Corresponding author. Tel.:+98
311 7932435 ; fax: +98 311 7932409
E-mail address: [email protected] (M. Bagheri Harouni)}
\begin{abstract}
This paper presents a method to describe the dynamics of an ion
confined in a realistic finite range trap. We model this realistic
potential with a solvable one and obtain the dynamical variables
(raising and lowering operators) of the model potential. We then
consider the coherent interaction of an ion confined in such a
finite range trap and show that the steady state of its
center-of-mass motion is a special kind of nonlinear coherent
state. Physical properties of this state and their dependence on
the finite range of the potential are studied.
\end{abstract}
\begin{keyword}
Nonclassical property \sep Finite range trap \sep Trapped ion
\sep Nonlinear coherent state
\PACS 42.50.Dv, 42.50.Gy
\end{keyword}
\end{frontmatter}
\section{Introduction}
\indent Single trapped ions represent elementary quantum systems
that are approximately isolated from the environment
\cite{leibfried}. In these systems both the internal electronic
states and the external center-of-mass motional states (external
states) can be coupled to and manipulated by light fields. This
makes trapped ion systems well suited for quantum optical and
quantum dynamical studies under well-controlled conditions.
Motivated by the strong analogy between cavity quantum
electrodynamics and the trapped ion system, various theoretical and
experimental proposals have been made on how to create nonclassical
and arbitrary states of motion of trapped ions. Preparation of
number states \cite{roos}, and of coherent states, quadrature
squeezed number states and superpositions of number states, has
been considered in this system experimentally \cite{meekhof} and
theoretically \cite{heinzen}. Experimental preparation of the
Schr\"{o}dinger-cat state has also been reported \cite{monroe1}.
Theoretical schemes for the generation of arbitrary center-of-mass
motional states of a trapped ion are described in \cite{gardiner}.
Moreover, the possibility of generating even and odd coherent
states of the center-of-mass motion of a trapped ion has been
considered \cite{filho}. A new scheme for the preparation of
nonclassical motional states of trapped ions is investigated in
\cite{wang}. Recently, preparation of Dicke states in an ion chain
has been considered theoretically and experimentally \cite{hume}.
In addition to the above-mentioned attempts, preparation of
different families of nonlinear coherent states \cite{manko} has
also been studied theoretically \cite{vogel1,mahd}.\\
\indent On the other hand, the trapped ion system has found
applications in quantum information and quantum computation
\cite{doerk}. For quantum information processing with trapped ions,
the preparation of certain special states is important. Among
these, entangled states are of crucial importance. Preparation of
entangled states of trapped ions has been considered recently
\cite{blatt}. For quantum computation applications, preparation of
the two-dimensional cluster state has been considered
\cite{wunderlich}. Because of some similarities between the trapped
ion system and the Jaynes-Cummings model \cite{jaynes}, the trapped
ion system has been used to realize different generalizations of
the Jaynes-Cummings model, which have found applications in quantum
information \cite{militello}.\\
\indent In all of the above-mentioned efforts on the trapped ion
system, it is assumed that the ion is confined in a harmonic
oscillator-shaped potential whose extension is taken to be
infinite. Hence, the range of the confining trap is infinite.
However, in a realistic experimental setup the dimension of the
trap is finite, and the realistic trapping potential is not a
harmonic oscillator potential but a truncated and modified one
within the extension of the trap. In this paper, we assume that the
confining potential for the ion has a finite range. We model this
confining potential with a solvable one. By using the concept of
the $f$-deformed oscillator \cite{manko}, we treat the ion trapped
in a confining potential with finite range as an $f$-deformed
oscillator, and in this context we obtain the raising and lowering
operators (dynamical variables) of this potential. The finite range
effects of this model are relevant for traps of the order of the
nano-scale, called nano Paul traps, which have attracted a great
deal of attention recently \cite{nanotrap}. It is worth noting that
the model confining potential considered here has also been used
for other confined physical systems, such as the Bose-Einstein
condensate \cite{wang1} and carriers in a quantum well
\cite{harrison}. The $f$-deformed oscillator approach that we adopt
here has been used before for some other confined systems
\cite{malek1}.\\ \indent This paper presents a method to describe
the dynamics of an ion confined in a finite range trap. We will
show that the stationary state of the center-of-mass motion of the
trapped ion is a special kind of nonlinear coherent state whose
properties depend on the range of the confining potential. The
outline of the paper is as follows. Section \ref{sec1} deals with
the scheme for the model potential, and in this section we obtain
the dynamical variables of this potential in the context of the
$f$-deformed oscillator. In Sec. \ref{sec2} we propose a coherent
interaction scheme for an ion confined in a finite range potential
and consider its dynamics in the steady state. In this section we
obtain an eigenvalue equation for the state of the center-of-mass
motion of the ion. In Sec. \ref{sec3} we summarize the definition
of nonlinear coherent states and show that the steady state of the
ion motion can be considered as a nonlinear coherent state.
Physical properties of this system are investigated in this
section. Section \ref{sec4} is devoted to the conclusion.
\section{Algebraic approach for a particle in a finite range potential}\label{sec1}
\indent To consider an ion in a finite range trap, we try to model
the potential energy function of the realistic trap by an
analytically solvable potential. For comparing new results with
previous ones we are looking for a potential which reduces to the
harmonic oscillator potential in a specific limit of its
parameters. A potential which has this property is the modified
P\"{o}schl-Teller (MPT) potential ~\cite{MPT}. The MPT potential
has the following form
\begin{equation}\label{mpt potential}
V(x)=D\,\tanh^2(\frac{x}{\delta}),
\end{equation}
where $D$ is the depth of the well, $\delta$ determines the range
of the potential, and $x$ gives the relative distance from the
equilibrium position. The well depth can be defined as
$D=\frac{1}{2}m\omega^2\delta^2$, with $m$ the mass of the particle
and $\omega$ the angular frequency of the harmonic oscillator, so
that in the limiting case $D\rightarrow \infty$ (or
$\delta\rightarrow \infty$), keeping the product $m\omega^2$
finite, the MPT potential energy reduces to the harmonic potential
energy, $\lim_{D\rightarrow \infty}
V(x)=\frac{1}{2}m\omega^2x^2$. Solving the Schr\"{o}dinger
equation, the energy eigenvalues for the MPT potential are
obtained as~\cite{landau}
\begin{equation}\label{mpt energy 1}
E_n=D-\frac{\hbar^2\omega^2}{4D}(s-n)^2, \hspace{1cm} n=0, 1,
2, \cdots, [s]
\end{equation}
in which $s=(\sqrt{1+(\frac{4D}{\hbar\omega})^2}-1)/2$, and $[s]$
stands for the largest integer smaller than $s$. The MPT oscillator
quantum number $n$ cannot be larger than the maximum number of
bound states $[s]$, because of the dissociation condition $s-n\geq
0$. A detailed description of this energy spectrum can be found
in~\cite{Davoudi}. By introducing the dimensionless parameter
$N=\frac{4D}{\hbar\omega}=\frac{2m\omega\delta^2}{\hbar}$, the
bound energy spectrum in equation (\ref{mpt energy 1}) can be
rewritten as
\begin{equation}\label{mpt energy 2}
E_n=\hbar\omega[-\frac{n^2}{N}+(\sqrt{1+\frac{1}{N^2}}-\frac{1}{N})n+
\frac{1}{2}(\sqrt{1+\frac{1}{N^2}}-\frac{1}{N})].
\end{equation}
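To see how this follows from equation (\ref{mpt energy 1}), note
that $D=\frac{N\hbar\omega}{4}$,
$\frac{\hbar^2\omega^2}{4D}=\frac{\hbar\omega}{N}$ and
$s=\frac{\sqrt{1+N^2}-1}{2}=\frac{N}{2}(\sqrt{1+\frac{1}{N^2}}-\frac{1}{N})$,
so that
\[
E_n=\frac{N\hbar\omega}{4}-\frac{\hbar\omega}{N}\Big(\frac{\sqrt{1+N^2}-1}{2}-n\Big)^2,
\]
and expanding the square yields equation (\ref{mpt energy 2}). As a
numerical illustration, for $N=10$ one finds $s\simeq 4.52$, so the
trap supports only the five bound states $n=0,1,\ldots,4$.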
The relation (\ref{mpt energy 2}) shows a nonlinear dependence on
the quantum number $n$, so that different energy levels are not
equally spaced. As is evident, $N$ is a dimensionless parameter,
and from now on we refer to this parameter as the depth of the
trap. It is clear that in the limiting case $D\rightarrow \infty$
(or $N\rightarrow \infty$) the energy spectrum of the quantum
harmonic oscillator is recovered, i.e.,
$E_n=\hbar\omega(n+\frac{1}{2})$. This means that for finite values
of $D$ (or finite values of $\delta$) we have a deformed quantum
oscillator, whose natural deformation away from the quantum
harmonic oscillator can be amplified by decreasing $D$ or $N$.
Thus, the well depth of this potential, which identifies its range,
controls how well it approximates the harmonic oscillator
potential, and it can also be considered as a controllable physical
deformation parameter. It is interesting to note that the
dimensionless parameter $N$ can also be written as
$N=(\frac{\delta}{\Delta x})^2$, where $\Delta
x=\sqrt{\frac{\hbar}{2m\omega}}$ is the ground state wave function
spread, which for typical traps is of the order of a nanometer
$(nm)$ \cite{leibfried}. The parameter $\delta$, which determines
the range of the potential, would be of the same order of magnitude
as the ion-electrode distance in a Paul trap system. It follows
that if the trap size is of the order of $nm$, the finite range
effects of the trap become important. Such Paul traps have been
considered recently \cite{nanotrap}.\\
\indent It has been shown \cite{malek1} that each quantum system
with an unequally spaced energy spectrum can be considered as an
$f$-deformed oscillator. Therefore, according to the energy
spectrum of the MPT potential, this system can be considered as an
$f$-deformed oscillator~\cite{Davoudi}. On the other hand, the
$f$-deformed quantum oscillator \cite{manko}, as a nonlinear
oscillator with a specific kind of nonlinearity, is characterized
by the following deformed dynamical variables $\hat{A}$ and
$\hat{A}^\dag$
\begin{eqnarray}\label{fd}
\hat{A}&=&\hat{a}f(\hat{n})=f(\hat{n}+1)\hat{a},\nonumber\\
\hat{A}^\dag&=&f(\hat{n})\hat{a}^\dag=\hat{a}^\dag f(\hat{n}+1),
\hspace{1.5cm} \hat{n}= \hat{a}^\dag\hat{a},
\end{eqnarray}
\noindent where $\hat{a}$ and $\hat{a}^\dag$ are usual boson
annihilation and creation operators $([\hat{a}, \hat{a}^\dag]=1)$,
respectively. The real deformation function $f(\hat{n})$ is a
nonlinear operator-valued function of the harmonic number operator
$\hat{n}$, which introduces some nonlinearities to the system.
From equation (\ref{fd}), it follows that the $f$-deformed
operators $\hat{A}$, $\hat{A}^\dag$ and $\hat{n}$ satisfy the
following closed algebra
\begin{eqnarray}\label{algebrafd}
&[\hat{A}, \hat{A}^\dag]=&(\hat{n}+1)f^2(\hat{n}+1)-\hat{n}f^2(\hat{n}),\nonumber\\
&[\hat{n}, \hat{A}]=&-\hat{A},\hspace{1.5cm} [\hat{n},
\hat{A}^{\dag}]=\hat{A}^{\dag}.
\end{eqnarray}
\noindent The above-mentioned algebra, represents a deformed
Heisenberg-Weyl algebra whose nature depends on the nonlinear
deformation function $f(\hat{n})$. An $f$-deformed oscillator is a
nonlinear system characterized by a Hamiltonian of the harmonic
oscillator form
\begin{equation}\label{hamiltf}
\hat{H}=\frac{\hbar\omega}{2}(\hat{A}^\dag\hat{A}+\hat{A}\hat{A}^\dag).
\end{equation}
Using equation (\ref{fd}) and the number state representation
$\hat{n}|n \rangle=n|n \rangle$, the eigenvalues of the
Hamiltonian (\ref{hamiltf}) can be written as
\begin{equation}\label{energyf}
E_n=\frac{\hbar\omega}{2}[(n+1)f^2(n+1)+nf^2(n)].
\end{equation}
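Here we have used the relations, following directly from equation
(\ref{fd}),
\[
\hat{A}^\dag\hat{A}=\hat{n}f^2(\hat{n}),\qquad
\hat{A}\hat{A}^\dag=(\hat{n}+1)f^2(\hat{n}+1),
\]
which are diagonal in the number basis.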
\indent It is worth noting that in the limiting case
$f(n)\rightarrow 1$, the deformed algebra (\ref{algebrafd}) and
the deformed energy eigenvalues (\ref{energyf}) will reduce to the
conventional Heisenberg-Weyl algebra and the harmonic oscillator spectrum, respectively.\\
\indent Comparing the bound energy spectrum of the MPT oscillator,
equation (\ref{mpt energy 2}), and the energy spectrum of an
$f$-deformed oscillator, equation (\ref{energyf}), we obtain the
corresponding deformation function for the MPT oscillator as
\begin{equation}\label{ff}
f^2(\hat{n})=\sqrt{1+\frac{1}{N^2}}-\frac{\hat{n}}{N}.
\end{equation}
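As a consistency check, substituting this deformation function into
equation (\ref{energyf}) gives
\[
E_n=\frac{\hbar\omega}{2}\Big[(n+1)\Big(\sqrt{1+\tfrac{1}{N^2}}-\tfrac{n+1}{N}\Big)
+n\Big(\sqrt{1+\tfrac{1}{N^2}}-\tfrac{n}{N}\Big)\Big]
=\hbar\omega\Big[-\frac{n^2}{N}+\Big(\sqrt{1+\tfrac{1}{N^2}}-\tfrac{1}{N}\Big)n
+\frac{1}{2}\Big(\sqrt{1+\tfrac{1}{N^2}}-\tfrac{1}{N}\Big)\Big],
\]
which reproduces the bound spectrum (\ref{mpt energy 2}).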
Furthermore, the ladder operators of the bound eigenstates of the
MPT Hamiltonian can be written in terms of the conventional
operators $\hat{a}$ and $\hat{a}^\dag$ as follows
\begin{equation}\label{a mpt}
\hat{A}=\hat{a}\sqrt{\sqrt{1+\frac{1}{N^2}}-\frac{\hat{n}}{N}},
\hspace{0.5cm}
\hat{A}^\dag=\sqrt{\sqrt{1+\frac{1}{N^2}}-\frac{\hat{n}}{N}}\hat{a}^\dag.
\end{equation}
\noindent These two operators satisfy the deformed Heisenberg-Weyl
commutation relation
\begin{equation}\label{algebra mpt}
[\hat{A},
\hat{A}^\dag]=\sqrt{1+\frac{1}{N^2}}-\frac{2\hat{n}+1}{N}.
\end{equation}
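This expression is obtained by inserting the deformation function
(\ref{ff}) into the general commutator of equation
(\ref{algebrafd}),
\[
[\hat{A},\hat{A}^\dag]=(\hat{n}+1)f^2(\hat{n}+1)-\hat{n}f^2(\hat{n}).
\]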
As is clear, in the limiting case $f(n)\rightarrow
1\;(N\rightarrow\infty)$ this deformed commutation relation reduces
to the conventional commutation relation
$[\hat{a},\hat{a}^{\dag}]=1$.\\ \indent As a result, in this
section we conclude that the ion trapped in the MPT potential can
be considered as an $f$-deformed oscillator obeying a specific kind
of $f$-deformed Heisenberg-Weyl algebra.\\ \indent In the
following, we will consider the coherent interaction of a single
trapped ion in a finite range trap with light fields. Then, we will
generate nonlinear coherent states of the ionic vibrational motion
in a finite range trap, and finally we will investigate some
physical properties of these states, such as their number
distribution, quadrature squeezing and phase-space distribution.
\section{Ion dynamics in a finite range trap}\label{sec2}
\indent As is usual in theoretical treatments of trapped ion
systems, the confining potential is taken to be a spatially
varying, high-frequency, time-dependent field, the so-called Paul
trap, $V(\vec r,t)$. It is known that the motion of a particle
inside such a high-frequency trap can be treated by averaging over
the fast motion (the part of the particle displacement whose
frequency equals that of the trap fields). In this approach a
particle confined in such a trap experiences a static effective
potential \cite{landau1}. Usually this static potential is assumed
to be a three-dimensional harmonic oscillator-like potential, which
in one direction (the $x$-direction) can be written as
$V(x)=\frac{1}{2}m\omega^2x^2$ \cite{leibfried}. As is
conventional, the ion is cooled to the ground state of the trap; in
this situation, since other energy scales, such as the energy
spacing between two adjacent levels of the trap, are small compared
with the trap depth, the trap is assumed to extend to infinity.
However, in a realistic experimental setup the dimension of the
trap is finite, and the realistic trapping potential is not a
harmonic oscillator potential extending to infinity but a truncated
and modified one within the extension of the trap. Thus, the
realistic confining potential becomes flat near the edge of the
trap and can be simulated by the tanh-shaped potential, which in
one dimension (the $x$-direction) can be written as
$V(x)=D\tanh^2(\frac{x}{\delta})$. In this paper, we investigate
some effects which originate from the finite range of the
trap.\\ \indent According to the previous section,
we model this trapped ion as an $f$-deformed quantum oscillator.
Therefore, the oscillator-like Hamiltonian of this system can be
written as
\begin{equation}\label{heh}
\hat{H}_t=\frac{\hbar\omega}{2}(\hat A\hat A^\dag+\hat A^\dag\hat A),
\end{equation}
where we interpret the operator $\hat A\;(\hat A^\dag)$ as the
operator whose action causes the transition of the ion
center-of-mass motion to the lower (upper) energy state of the
trap. These operators are given in Eq. (\ref{a mpt}). In fact, the
Hamiltonian (\ref{heh}) is related to the external degrees of
freedom of the ion. According to the resonant condition, the ion
is treated as a two-level system with ground state $|g\rangle$
and excited state $|e\rangle$. Then, the internal degrees of
freedom of the ion can be expressed in terms of the electronic flip
operators $\hat S_z=|e\rangle\langle e|-|g\rangle\langle g|$, $\hat
S^+=|e\rangle\langle g|$ and $\hat S^-=|g\rangle\langle e|$, which
satisfy the usual $su(2)$ algebra. On the other hand, with the
help of suitable laser fields, the internal levels of the
trapped ion can be coherently coupled to each other and to the
external motional degrees of freedom of the ion. Therefore, the
total Hamiltonian of the system may be given as
\begin{equation}\label{}
\hat H=\hat H_0+\hat H_{int}(t),
\end{equation}
where $\hat H_0=\hat H_t+\hbar\omega_i\hat S_z$, with $H_t$ given
in Eq. (\ref{heh}), describes the free motion of the internal and
external degrees of freedom of the ion. Here, $\hbar\omega_i$
refers to the energy difference of internal states of the ion,
$\hbar\omega_i=E_e-E_g$. The interaction of the ion with the laser
fields is described by $\hat H_{int}(t)$ and is written as
\begin{equation}\label{hint}
\hat H_{int}(t)=g\left[E_0e^{-i(k_0\hat x-\omega_it)}+E_1e^{-i[k_1\hat
x-(\omega_i-\omega_n)t]}\right]\hat S^+ +\;H.c.\;,
\end{equation}
in which $g$ is the coupling constant, $k_0$ and $k_1$ are the wave
numbers of the driving laser fields, and $\omega_n$ refers to the
energy of the lower vibrational sideband with respect to the
electronic transition of the ion, i.e., $\omega_n$ is the frequency of
the ion transition between energy levels of the finite range trap.
Because the energy spectrum of the trap depends on the level
number and we consider a transition between specific sideband
levels, we indicate the explicit dependence of the transition frequency on $n$.
In the above Hamiltonian, $\hat H_{int}(t)$,
$\hat x$ is the operator of the center-of-mass position and may be
defined as \cite{malek1}
\begin{equation}\label{}
\hat x=\frac{\eta}{k_l}(\hat A+\hat A^\dag),
\end{equation}
where $\eta$ is the Lamb-Dicke parameter and $k_l$ is the wave
number associated with the characteristic length of the trap;
we assume $k_l\simeq k_0\simeq k_1$. The interaction
Hamiltonian (\ref{hint}) can be written as
\begin{equation}\label{}
\hat
H_{int}(t)=\hbar e^{i\omega_it}\left[\Omega_0+\Omega_1e^{-i\omega_nt}\right]e^{i\eta(\hat A+\hat
A^\dag)}\hat S^++H.c.\;,
\end{equation}
where $\Omega_0=\frac{gE_0}{\hbar}$ and $\Omega_1=\frac{gE_1}{\hbar}$
are the Rabi frequencies of the laser fields tuned to the
electronic transition and to the lower sideband, respectively. The
interaction Hamiltonian in the interaction picture with respect to
the $\hat H_0$ can be written as
\begin{equation}\label{hint1}
\hat H_I=\hbar\Omega_1\hat
S^+\left[\frac{\Omega_0}{\Omega_1}+e^{-i\omega_nt}\right]\exp\left[i\eta\left(e^{-i\hat\nu_nt}\hat A+\hat A^\dag
e^{i\hat\nu_nt}\right)\right]+H.c.\;,
\end{equation}
where $\hat\nu_n=\frac{\omega}{2}[(\hat n+2)f^2(\hat n+2)-\hat
nf^2(\hat n)]$. In this relation the function $f(\hat n)$ is given
by Eq. (\ref{ff}).\\ \indent By using the vibrational
rotating-wave approximation \cite{vogel1} and applying the
disentangling approach of \cite{garcia} to the exponential term
appearing in Eq. (\ref{hint1}), the interaction
Hamiltonian (\ref{hint1}) may be written as
\begin{equation}\label{}
\hat H_I=\hbar\Omega_1\hat S^+\left[\frac{\Omega_0}{\Omega_1}F_0(\hat n,\eta)+g(\eta)F_1(\hat n,\eta)\hat
a\right]+H.c.\;,
\end{equation}
where the function $F_j(\hat n,\eta)\;(j=0,1)$ is defined by
\begin{equation}\label{maindef}
F_j(\hat
n,\eta)=\sum_{l=0}^n\frac{[g(\eta)]^{2l}}{l!(l+j)!}\frac{f(\hat n)!f(\hat n+j)!}{[f(\hat n-l)!]^2}\frac{\hat n!}{(
\hat n-l)!}M(\hat n-l).
\end{equation}
The functions appearing in this equation are
defined as follows:
\begin{eqnarray}
g(\eta)&=&\frac{i}{\sqrt{\gamma}}\tan(\sqrt{\gamma}\eta),\hspace{1cm}X_n=\beta-\gamma(2n+1),\nonumber\\
M(n)&=&e^{-\frac{X_n}{\gamma}\ln(\cos(\sqrt{\gamma}\eta))},
\end{eqnarray}
where $\gamma=\frac{1}{N}$, $\beta=\sqrt{1+\frac{1}{N^2}}$ and
$\hat n$ is an operator whose eigenvalues, $n$, label the
excited energy levels inside the trap. It is worth
noting that in the limiting case $N\rightarrow\infty$, which is
equivalent to $f(n)\rightarrow 1$, the system reduces to that of an ion
confined in a harmonic oscillator-shaped trap, which has
been considered in \cite{vogel1}. In this limit the function $F_j(\hat n,\eta)$,
given in Eq.
(\ref{maindef}), reduces to its counterpart for the harmonic oscillator-shaped trap \cite{vogel1}.
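The following minimal Python sketch (our own illustration, not part of the original treatment) assembles $F_j(\hat n,\eta)$ of Eq. (\ref{maindef}) from the definitions above; it assumes that the deformation function $f(n)$ of Eq. (\ref{ff}), which is not reproduced here, is supplied as a callable.
\begin{verbatim}
# Minimal numerical sketch of F_j(n, eta) in Eq. (maindef).
# The deformation function f(n) of Eq. (ff) must be supplied by the user.
import math

def F(j, n, eta, N, f):
    gamma = 1.0 / N
    beta = math.sqrt(1.0 + 1.0 / N**2)
    g = 1j * math.tan(math.sqrt(gamma) * eta) / math.sqrt(gamma)

    def ffact(m):                 # f(m)! = f(m) f(m-1) ... f(0)
        out = 1.0
        for k in range(m + 1):
            out *= f(k)
        return out

    def M(m):                     # M(m) = exp(-(X_m/gamma) ln cos(sqrt(gamma) eta))
        X = beta - gamma * (2 * m + 1)
        return math.exp(-(X / gamma) * math.log(math.cos(math.sqrt(gamma) * eta)))

    total = 0j
    for l in range(n + 1):
        term = g**(2 * l) / (math.factorial(l) * math.factorial(l + j))
        term *= ffact(n) * ffact(n + j) / ffact(n - l)**2
        term *= math.factorial(n) / math.factorial(n - l)
        term *= M(n - l)
        total += term
    return total
\end{verbatim}
Any such numerical evaluation should, of course, reproduce the harmonic-oscillator limit discussed above when $N\rightarrow\infty$.\\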
\indent The time evolution of the system is characterized by the
master equation
\begin{equation}\label{master}
\frac{d\hat\rho}{dt}=-\frac{i}{\hbar}[\hat H_I,\hat\rho]+\frac{\Gamma}{2}(2\hat S^-\hat\rho^{'}\hat S^+-\hat S^+\hat S^-\hat\rho-\hat\rho\hat S^+\hat
S^-),
\end{equation}
where $\Gamma$ is the spontaneous emission rate. To account for
the recoil of spontaneously emitted photons, the first term of the
damping part of the master equation contains
\begin{equation}\label{}
\hat\rho^{'}=\frac{1}{2}\int_{-1}^1dzY(z)e^{ik_l\hat xz}\hat\rho
e^{-ik_l\hat xz},
\end{equation}
where $Y(z)$ is the angular distribution of the spontaneous emission and
$\hat\rho$ is the vibronic density operator.\\ \indent In the
long-time limit, the ion will be populated in the ground state
$|g\rangle$ as a consequence of atomic spontaneous emission. In
this case, the steady-state solution of the master equation
(\ref{master}) can be assumed to be
$\hat\rho_{ss}=|g\rangle|\psi\rangle\langle\psi|\langle g|$, where
$|\psi\rangle$ stands for the vibronic motion of the ion. The
stationary solution of Eq. (\ref{master}) can be found by setting
$\frac{d\hat\rho}{dt}=0$ and since
\begin{equation}\label{}
\hat S^-|g\rangle\langle g|=\hat S^+\hat S^-|g\rangle\langle
g|=|g\rangle\langle g|\hat S^+\hat S^-=0,
\end{equation}
we obtain
\begin{equation}\label{}
[\hat H_I,\hat\rho_{ss}]=0.
\end{equation}
From this equation, we find that the vibronic state $|\psi\rangle$
satisfies the following equation
\begin{equation}\label{NLCS}
\hat ah(\hat
n)|\psi\rangle=\chi|\psi\rangle,\hspace{1.5cm}\chi=-\frac{\Omega_0}{g(\eta)\Omega_1}.
\end{equation}
In this equation $h(\hat n)=F_1(\hat n-1,\eta)/F_0(\hat
n-1,\eta)$.
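Equation (\ref{NLCS}) determines the Fock-space amplitudes of $|\psi\rangle$ recursively: writing $|\psi\rangle=\sum_n c_n|n\rangle$ and using $\hat a|n\rangle=\sqrt n\,|n-1\rangle$, one gets $c_{n+1}=\chi\, c_n/[\sqrt{n+1}\, h(n+1)]$. The following short Python sketch (our own illustration, with an arbitrary truncation, and with $h(n)$ supplied as a callable, e.g. evaluated with the earlier sketch via $h(n)=F_1(n-1,\eta)/F_0(n-1,\eta)$) computes the resulting vibrational number distribution $p(n)=|\langle n|\psi\rangle|^2$ after normalization.
\begin{verbatim}
# Sketch: Fock amplitudes of the steady state of Eq. (NLCS),
#   c_{n+1} = chi * c_n / (sqrt(n+1) * h(n+1)),
# truncated at a user-chosen n_max; h(n) must be supplied by the user.
import math

def number_distribution(chi, h, n_max=60):
    c = [1.0 + 0j]                        # unnormalized c_0
    for n in range(n_max):
        c.append(chi * c[n] / (math.sqrt(n + 1) * h(n + 1)))
    norm = sum(abs(x)**2 for x in c)
    return [abs(x)**2 / norm for x in c]  # p(n) = |<n|psi>|^2
\end{verbatim}
This recursion is consistent with the general expansion of nonlinear coherent states recalled in the next section, with $f=h$ and $\alpha=\chi$.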
\section{Nonlinear coherent states of ionic vibrational motion and their physical
properties}\label{sec3} \indent Similar to the definition of the
canonical coherent states \cite{glauber}, the coherent state of a
generalized $f$-deformed oscillator is defined as a right
eigenstate of the generalized annihilation operator $\hat A=\hat
af(\hat n)$, as follows:
\begin{equation}\label{}
\hat A|\alpha,f\rangle=\alpha|\alpha,f\rangle.
\end{equation}
Due to the appearance of the nonlinear deformation function $f(\hat
n)$ in the definition of these states, they are called nonlinear
coherent states. According to this definition, the vibronic state of
the ion in the steady state, Eq. (\ref{NLCS}), is a nonlinear
coherent state with
\begin{eqnarray}
f(\hat n)&=&h(\hat n)=\frac{F_1(\hat n-1,\eta)}{F_0(\hat n-1,\eta)},\nonumber\\
\alpha&=&\chi=-\frac{\Omega_0}{g(\eta)\Omega_1}.
\end{eqnarray}
Nonlinear coherent states can be expanded in terms of the usual
Fock states $(\hat n|n\rangle=n|n\rangle)$ as follows
\begin{equation}\label{}
|\alpha,f\rangle=N_f\sum_n\frac{\alpha^n}{\sqrt{n!}f(n)!}|n\rangle,\hspace{1cm}
N_f=\left[\sum_n\frac{|\alpha|^{2n}}{n![f(n)!]^2}\right]^{-\frac{1}{2}},
\end{equation}
where $f(n)!=f(n)f(n-1)\cdots f(0)$. Thus, the steady state of the
ion in Eq. (\ref{NLCS}) is a special kind of nonlinear
coherent state whose properties are determined by the function
$h(\hat n)$. This function is characterized by the Lamb-Dicke
parameter $\eta$ and by the quantum number $n$, which refers to the level
of vibronic excitation. Moreover, according to Eq.
(\ref{NLCS}), the nonlinear coherent state of the ion depends on the
complex parameter $\chi$, which is controlled by the Rabi
frequencies of the lasers, the Lamb-Dicke parameter and the
parameter $\gamma$, which is governed by the range of the trap. In order to gain
some insight into the physical properties of this family of nonlinear
coherent states, we consider some of their statistical properties.
In Fig. (\ref{f1}) we show the vibrational number
distribution of this state, $p(n)=|\langle n|\psi\rangle|^2$. In
all of the plots in this figure, the Lamb-Dicke parameter and the
ratio $\frac{\Omega_0}{\Omega_1}$ are chosen as $\eta=0.22$ and
$\frac{\Omega_0}{\Omega_1}=0.85$, respectively. It can be seen
that the vibrational number distribution depends sensitively on
the depth (or range) of the trap. In some cases it is possible to
prepare a superposition of several Fock states; in particular, by
choosing proper values of the depth of the trap, such as $N=30$,
it is possible to prepare a superposition of two or three Fock states.
An interesting property of this vibrational number distribution is that we can also
prepare a highly excited Fock state of the external motion of the ion ($N=45$)
\cite{vogel2}; in this case, with high probability, a single Fock state is prepared.
By increasing the depth of the trap ($N=75$), the vibrational number distribution
becomes a superposition of Fock states again, and the
distribution of the Fock states is approximately symmetric about
the most probable number state. Thus, for
fixed values of the physical parameters $\eta$ and
$\frac{\Omega_0}{\Omega_1}$ and different values of the trap
depth, we can prepare different states, even a highly excited Fock
state.\\ \indent
In Fig. (\ref{f2}) we have plotted the quadrature
squeezing of the state $|\psi\rangle$, Eq. (\ref{NLCS}). The physical
parameters for this plot are chosen as
$\eta=0.25$, $\frac{\Omega_0}{\Omega_1}=0.31$, and the phase of the
quadrature operator is chosen as $\frac{\pi}{4}$. This figure
depicts the squeezing behavior versus the depth of the trap. It is
evident that for some values of the depth, the state (\ref{NLCS})
exhibits quadrature squeezing. Hence, in addition to the
remarkable properties of the vibrational number distribution, this
state possesses another nonclassical property; indeed, the nonclassical
properties of nonlinear coherent states are among their most
important features \cite{mancini}.\\ \indent In Fig. (\ref{f3}),
we have shown the contour plots of the $Q$ function of the state
(\ref{NLCS}). In this figure, different plots belong to different
depths of the trap, with $\eta=0.75$ and
$\frac{\Omega_0}{\Omega_1}=0.9$. In the case of $N=7$ (plot (a))
the $Q$ function contains contributions at several amplitudes. This feature
implies the occurrence of quantum interference effects inherent in
this state. The $Q$ function also displays several localized regions where it becomes
extremely small; this phenomenon is related to the separate peaks
of the number distribution of state (\ref{NLCS}), which are rather
close together. By increasing the depth of the trap, in plot (b),
$N=26$, and plot (c), $N=45$, this strong structure of the $Q$
function disappears. In these cases the $Q$ function has a single
peak, which indicates that the separate peaks of the number distribution are
reduced. On the other hand, the cross section of the $Q$
function is not symmetric, which shows that for the selected values
of the parameters the associated quadrature operator exhibits
squeezing. With a further increase of the depth of the trap,
in plot (d), $N=75$, the structure of the $Q$ function becomes
more pronounced than in plots (b) and (c). In this case, the state exhibits
quadrature squeezing and we expect that quantum interference
occurs again.\\ \indent
To obtain more information
about the nature of the state (\ref{NLCS}), we have considered its
associated Wigner function, $W(\alpha)$. The Wigner function for
different values of the Lamb-Dicke parameter and of the depth of the
trap is shown in Fig. (\ref{f4}). In this figure the ratio
$\frac{\Omega_0}{\Omega_1}$ is set equal to $0.9$. Negative
values of the Wigner function are a signature of the nonclassical
nature of the associated state and, as can be seen, in all cases the
Wigner function takes negative values. To examine the effects of the Lamb-Dicke
parameter, in plots \ref{f4}(a)-\ref{f4}(c) we have
decreased the Lamb-Dicke parameter while the depth of the trap is
kept constant. The Wigner function in plot \ref{f4}(a) shows the
occurrence of quantum interference. Decreasing the
Lamb-Dicke parameter splits the peaks of the Wigner function into two
groups, which yields a coherent superposition of two quantum
states. It is also evident that decreasing the Lamb-Dicke parameter
decreases the amplitude of the Wigner function. In addition to
the Lamb-Dicke parameter effects, the dependence of the Wigner
function on the depth of the trap is considered in plots
\ref{f4}(d)-\ref{f4}(f). In plot \ref{f4}(d), for the
selected parameters, the Wigner function is split into two parts,
each consisting of several peaks, which is a signature of a superposition
of two coherent states. By increasing the
depth of the trap, these two parts begin to mix and
quantum interference occurs.
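For completeness, we indicate how quantities of the kind plotted in Figs. (\ref{f2}) and (\ref{f3}) can be evaluated from the Fock amplitudes $c_n$ of $|\psi\rangle$ (for instance, those produced by the sketch at the end of Section \ref{sec2}). The following lines are a minimal illustration of our own; they assume the usual (non-deformed) quadrature operator $\hat X_\theta=(\hat a e^{-i\theta}+\hat a^\dag e^{i\theta})/2$, for which squeezing corresponds to $\langle(\Delta\hat X_\theta)^2\rangle<1/4$, and the standard Husimi definition $Q(\alpha)=|\langle\alpha|\psi\rangle|^2/\pi$.
\begin{verbatim}
# Sketch (our own): quadrature variance and Husimi Q function from the
# Fock amplitudes c[0..n_max] of |psi>.  Assumed conventions:
#   X_theta = (a e^{-i theta} + a^dag e^{i theta})/2, squeezing <=> Var < 1/4,
#   Q(alpha) = |<alpha|psi>|^2 / pi.
import cmath, math

def quadrature_variance(c, theta):
    a    = sum(c[n].conjugate() * c[n+1] * math.sqrt(n+1) for n in range(len(c)-1))
    a2   = sum(c[n].conjugate() * c[n+2] * math.sqrt((n+1)*(n+2)) for n in range(len(c)-2))
    nbar = sum(n * abs(c[n])**2 for n in range(len(c)))
    mean   = (cmath.exp(-1j*theta) * a).real
    square = (2*(cmath.exp(-2j*theta) * a2).real + 2*nbar + 1) / 4.0
    return square - mean**2

def husimi_Q(c, alpha):
    overlap = sum(c[n] * alpha.conjugate()**n / math.sqrt(math.factorial(n))
                  for n in range(len(c)))
    return math.exp(-abs(alpha)**2) * abs(overlap)**2 / math.pi
\end{verbatim}
These relations are standard and are included only to make the quantities shown in the figures explicit.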
\section{Conclusion}\label{sec4}
\indent We have studied the dynamics of a single trapped ion in a
finite range trap. In the context of $f$-deformed
oscillators, we have shown that an ion confined in a finite range
trap can be treated as an $f$-deformed oscillator. By modelling
the realistic potential with the modified P\"{o}schl-Teller
potential, we have obtained the dynamical variables (raising and
lowering operators) of this system. Moreover, we have proposed a
scheme for the preparation of a special family of nonlinear coherent
states. Such states could be generated as stationary states of the
center-of-mass motion of a laser-driven trapped ion in a finite
range trap while it interacts with a bichromatic laser field. When
the motional state is this nonlinear coherent state, the ion is
decoupled from the driving laser field; any perturbation of
this motional state then switches the interaction back on, which
leads to a self-stabilization of the state. We have shown
that the prepared motional state of the ion has nonclassical
features which strongly depend on the depth of the trap. These
states show coherence effects such as localization of their
phase-space distribution and splitting into two or more sub-states,
the latter leading to quantum interference. According to the
profile of the $Q$ function of these states, they exhibit
quadrature squeezing, and for specific values of the physical
parameters we have calculated their quadrature squeezing. It is
shown that the nonclassical nature of the prepared states depends
on the depth of the trap, so that for specific values of the depth
both quantum interference and quadrature squeezing occur, while
for other values the state exhibits quadrature squeezing only.\\
\indent In view of the interesting properties of the states generated in
this paper, states of this type and the physical system under
consideration might be of more general interest. First of all,
a single trapped ion in a finite range trap has a finite
dimensional Hilbert space. As mentioned before, the number of
energy levels in this system is controlled by the depth of the
trap. As is well known, the size (dimension) of the Hilbert space
is of crucial importance in some quantum phenomena,
such as decoherence. Due to developments in trapped-ion experimental
setups, it seems possible to design an experiment probing
Hilbert-space-size effects, and our system can thus be considered as an
experimental setup to investigate such effects. Second, this system turns out to be of
interest for the realization of quantum groups. If we take a look
at the Hamiltonian (\ref{hint}), it seems that in the Lamb-Dicke
regime $(\eta\ll 1)$ this system can be considered as a
realization of a deformed Jaynes-Cummings model. In the Lamb-Dicke limit, the exponential in Eq. (\ref{hint}) can be
expanded to lowest order, resulting in the operator $g'(\hat A\hat
S^++\hat A^{\dag}\hat S^-)$, which corresponds to the deformed
Jaynes-Cummings model (in this relation $g'=\eta g$). In addition,
it has been shown that there is a relation between the operators $\hat
A$ and $\hat A^\dag$ in Eq. (\ref{a mpt}) and the $q$-deformed
algebra \cite{Davoudi}. Therefore, our model can be considered as
a realization of $q$-deformed and general deformed Jaynes-Cummings
models, in which the Lamb-Dicke parameter plays an important role.
Third, in recent types of Paul traps, the so-called
nano Paul traps \cite{nanotrap}, the finite range effects of the
trapping potential are more important. It seems that our model,
which takes these finite range effects into account, can provide a
theoretical description for investigating nano Paul traps. In
a nutshell, our model provides
an experimental setup for investigating Hilbert-space-size effects and
for the realization of $q$-deformed and general $f$-deformed algebras.
\\{\bf Acknowledgments}\\
The authors wish to thank The Office of Graduate Studies and
Research Vice President of The
University of Isfahan for their support.
{\bf Figure captions}\\
Fig. 1. The vibrational number distribution for four
values of the depth of the trap. The values of the depth, $N$, are
indicated on each plot; $\eta=0.22$ and
$\frac{\Omega_0}{\Omega_1}=0.85$.\\
Fig. 2. Quadrature squeezing versus the depth of the trap. In
this plot $\eta=0.25$, $\frac{\Omega_0}{\Omega_1}=0.31$ and the
quadrature operator phase is chosen as $\frac{\pi}{4}$.\\
Fig. 3. Contour plots of the $Q$ function for $\eta=0.75$ and
$\frac{\Omega_0}{\Omega_1}=0.9$.
In this figure light regions indicate large values of the function. Each plot belongs to
a specific value of the depth of the trap: in plot (a) $N=7$, in plot (b) $N=26$, in plot (c) $N=45$
and in plot (d)
$N=75$.\\
Fig. 4. Plots of the Wigner function for different values of the
Lamb-Dicke parameter and of the depth of the trap, which are indicated on
each plot. In all plots the ratio $\frac{\Omega_0}{\Omega_1}$ is set equal to $0.9$.
\end{document}
\begin{document}
\title[Sharp error term in LLT and mixing for Lorentz gases with infinite horizon]{Sharp error term in local limit theorems
and mixing for Lorentz gases with infinite horizon}
\author{Fran\c{c}oise P\`ene}
\address{Univ Brest, Universit\'e de Brest, LMBA,
UMR CNRS 6205,
6 avenue Le Gorgeu, 29238 Brest cedex, France}
\email{[email protected]}
\author{Dalia Terhesiu}
\address{Mathematisch Instituut,
University of Leiden,
Niels Bohrweg 1, 2333 CA Leiden}
\email{[email protected]}
\keywords{local limit theorem with speed, rates of mixing, infinite measure, Lorentz gases with infinite horizon}
\begin{abstract}
We obtain sharp error rates in the local limit theorem for the Sinai billiard map (one and two dimensional) with infinite horizon.
This result allows us to further obtain higher order terms and thus sharp error rates in the speed of mixing of
dynamically H{\"o}lder observables for the planar and tubular infinite horizon Lorentz gases in the map (discrete time) case. We also obtain an asymptotic estimate for the tail probability of the first return time to the initial cell.
In the process, we study families of transfer operators for
infinite horizon Sinai billiards perturbed with the free flight function and
obtain higher order expansions for the associated families of eigenvalues and eigenprojectors.
\end{abstract}
\maketitle
\section{Introduction}
\subsection*{Lorentz gas}
The Lorentz gas was introduced in \cite{Lorentz} to model the displacement
of electrons in metals. This model describes the evolution of a point particle moving with unit speed, travelling freely between elastic reflections off pairwise disjoint, strictly convex obstacles (with smooth boundary) located $\mathbb Z^d$-periodically (with $d\in\{1,2\}$) in the plane if $d=2$, or on the tube $\mathbb T\times\mathbb R=\mathbb R^2/(\mathbb Z\times\{0\})$ if $d=1$.
These obstacles are written $\mathcal{O}_j+\ell$ with $j\in\mathcal J$ and $\ell\in\mathbb Z^d$ (where $\mathcal J$ is a non-empty finite set). In this model,
the phase space consists of positions-unit velocity vectors, which we call configurations.
\begin{figure}
\caption{A trajectory in a $\mathbb Z^2$-periodic planar domain ($d=2$)}
\label{fig1}
\end{figure}
\begin{figure}
\caption{A trajectory in a $\mathbb Z$-periodic tubular domain ($d=1$)}
\label{fig1}
\end{figure}
In this paper, we are interested in discrete time Lorentz gases (the dynamical system corresponding to the collision times) with infinite horizon.
The horizon is said to be infinite if there exists an infinite trajectory intersecting no obstacle and it is said to be finite otherwise. Understanding the stochastic behaviour of the Lorentz gas in the infinite horizon case is much more challenging than the finite horizon case
and below we recall the main differences along with previous results.
The exposition below focuses on the $\mathbb Z^2$-periodic case, but we mention that similar statements hold for the $\mathbb Z$-periodic tubular model.
The space $M$ of configurations of this dynamical system is the set
of postcollisional unit vectors based on
$\bigcup_{j\in\mathcal{J},\ \ell\in\mathbb Z^2}(\partial \mathcal{O}_j+\ell)$. For any $\ell\in\mathbb Z^2$, we define the $\ell$-th cell $M_\ell$ as the set of configurations with position in $\bigcup_{j\in\mathcal J}(\partial\mathcal{O}_j+\ell)$. We write $\kappa_n$ for the label in $\mathbb Z^2$ of the
cell in which the particle is at the $n$-th collision time:
\[
\kappa_n=N\quad\mbox{if the position of the particle is based in }\bigcup_{j\in\mathcal J}(\partial\mathcal{O}_j+N)\mbox{ at the }n\mbox{-th collision time}\, .
\]
The finiteness of the horizon is equivalent to the uniform boundedness of $\kappa_1$.
Whereas the model we consider is purely deterministic (positions and velocities at collisions can be computed explicitly in terms of the initial ones), $(\kappa_n)_{n\ge 0}$ behaves asymptotically as a random walk on $\mathbb Z^2$. In the finite horizon case,
$(\kappa_n)_{n\ge 0}$ behaves asymptotically as a simple symmetric random walk, while in the infinite horizon case it behaves as a symmetric random walk with displacements of infinite variance.
It is worth noticing that the dynamics of the discrete time Lorentz gas is given by the sequence of couples $(X_n,\kappa_n)_{n\ge 1}$ where $X_n$ with values in $M_0$
is the configuration modulo $\mathbb Z^2$ of the position at the $n$-th collision time. More precisely,
\[
X_n=(q,\vec v)\in M_0\quad\mbox{if the configuration at the $n$-th collision time is }(q+\kappa_n,\vec v)\, .
\]
In this representation $(\kappa_n)_n$ corresponds to the evolution of the Lorentz gas at a macroscopic scale while $(X_n)_n$ corresponds to its evolution at a microscopic scale.
The dynamics of $(X_n)_n$ is referred to as the Sinai billiard; recall that the ergodicity of this dynamical system was established in the seminal work of Sinai \cite{Sinai70}.
Several limit laws for the Lorentz process have been obtained for the invariant probability measure $\mathbb P=\bar\mu$ on $M_0$ absolutely continuous with respect to Lebesgue measure and for which $(X_n)_{n\ge 0}$ and $(\kappa_{n+1}-\kappa_n=\kappa(X_n))_{n\ge 0}$ are stationary processes.
Under this invariant probability measure, $\kappa_1$ is not square integrable when the horizon is infinite, whereas it
is (as already mentioned) bounded when the horizon is finite.
Limit properties of discrete time finite horizon Lorentz gases
have been obtained in several recent works, among which we mention~\cite{SV04, pene09IHP, FPBS10, Pene18, PTho18,PTho19a}.
Very recent notable progress for continuous time
Lorentz gases with finite
horizon has been made for mixing local limit theorem by Dolgopyat and Nandori~\cite{DN1, DN2}, for mixing rates by Dolgopyat, Nandori, P{\`e}ne~\cite{DNP} and for suitable versions of CLT by P{\`e}ne and Thomine~ \cite{PTho19a}.
Obtaining the analogue of any of the said results in the infinite horizon case is very challenging because $\kappa_1$ has infinite variance. The main difficulty in carrying out similar arguments when $\kappa_1$ has infinite variance
comes down to the weak regularity properties of a family $(P_t)_t$ of perturbed operators.
Whereas in the finite horizon case the eigenelements of these operators are $C^\infty$ in $t$, in the infinite horizon case the family of operators is
just continuous in $t$ as a family of operators from the Young Banach space $\mathcal{B}$ to $L^p$
(see additional explanations in Section \ref{technicalkeyresults}). In this paper we obtain refined expansions going beyond mere continuity estimates and use this to answer unsolved problems in the discrete time model with infinite horizon.
In particular, we obtain: i) higher order (optimal) local limit theorem and mixing; ii) first order expansion of the tail probability of the first return time to the initial cell.
In what follows we provide a simplified statement of our main results, recalling and comparing with previous results.
\subsection*{Previous results: CLT and Local Limit Theorem (LLT) for $\kappa_n$}
When the horizon is finite, it has been proved in \cite{BunimovichSinai:1981,BCS91,Young98} that $(\kappa_n)_n$ behaves asymptotically as a Gaussian random variable with the standard normalization in $\sqrt{n}$, that is
\[
\forall A_i<B_i,\quad \lim_{n\rightarrow +\infty}
\mathbb P
\left(\frac{\kappa_n}{\sqrt{n}}\in [A_1,B_1]\times[A_2,B_2]\right)=\mathbb P(W\in [A_1,B_1]\times[A_2,B_2])\, ,
\]
where $W$ is a Gaussian random variable.
When the horizon is infinite, it has been conjectured in \cite{Bleher} and proved rigorously \cite{SV07} that the Lorentz gas is
superdiffusive and more precisely that $(\kappa_n)_n$ satisfies the CLT
with nonstandard normalization in $\sqrt{n\log n}$, i.e.
\[
\forall A_i<B_i,\quad \lim_{n\rightarrow+\infty}
\mathbb P
\left(\frac{\kappa_n}{\sqrt{n\log n}}\in [A_1,B_1]\times[A_2,B_2]\right)=\mathbb P(W\in [A_1,B_1]\times[A_2,B_2])\, ,
\]
where $W$ is some Gaussian random variable.
These two different behaviours can be heuristically explained by the fact that $\kappa_1$ is uniformly bounded when the horizon is finite and is not square integrable when the horizon is infinite, more precisely that
\begin{equation}\label{tailprobakappa}
\mathbb P(|\kappa_1|>N)\approx N^{-2}\, .
\end{equation}
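At a heuristic level, \eqref{tailprobakappa} also explains the extra $\sqrt{\log n}$ in the normalization. Indeed, the truncated second moment grows logarithmically:
\[
\mathbb E\left[|\kappa_1|^2\mathbf 1_{\{|\kappa_1|\le M\}}\right]=\int_0^M 2x\,\mathbb P(|\kappa_1|>x)\, dx-M^2\,\mathbb P(|\kappa_1|>M)\approx \log M\, ,
\]
so that $\kappa_1$ lies in the non-standard domain of attraction of the Gaussian law, and the usual normalization $a_n$, determined (up to constants) by $n\,\mathbb E[|\kappa_1|^2\mathbf 1_{\{|\kappa_1|\le a_n\}}]\approx a_n^2$, is indeed of order $\sqrt{n\log n}$. This back-of-the-envelope computation is only an illustration; the rigorous result is the one of \cite{SV07} recalled above.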
While~\cite{SV07} focuses on the case where only one obstacle (modulo $\mathbb Z^2$) is tangent to a given line, we consider the more general case and establish the following formula for the asymptotic variance matrix of $W$,
generalizing \cite{SV07}:
\begin{equation}\label{ExprSigma2}
(a_{i,j})_{i,j=1,2}
:=\frac 1{|\partial\bar Q|}\sum_{C\in\mathcal C}\frac {
\mathfrak d_{C}^2}{|w_C|} w_C\otimes w_C
\, ,
\end{equation}
where $w\otimes w$ represents the matrix $\left(\begin{array}{cc}w_1^2&w_1w_2\\w_1w_2&w_2^2\end{array}\right)$ if $w=(w_1,w_2)$ and where $\mathcal C$ is the set of different ``corridors'', distinct modulo $\mathbb Z^2$ (see Figure~\ref{fig2} and the beginning of Section \ref{freeflight} for details), that can be drawn in $Q$; for each corridor $C\in\mathcal C$, $\mathfrak d_C$ is its width and $w_C$ is a vector in $\mathbb Z^2$ in the direction of the corridor with coprime coordinates.
\begin{figure}
\caption{Corridors for two different periodic billiard domains}
\label{fig2}
\end{figure}
We now recall the local version of the said CLT, namely the LLT, which gives the asymptotic behaviour of $\mathbb P(\kappa_n=0)$, that is, of the probability that the particle returns to the initial cell in $\mathbb Z^2$ at the $n$-th reflection time.
Such LLTs have been obtained by Sz\'asz and Varj\'u in \cite{SV04}
in the finite horizon case and in \cite{SV07} in the infinite horizon case, stating that
\begin{equation}\label{LLT0}
\mathbb P(\kappa_n=0)\sim \frac{\Phi(0)}{a_n^2}\quad\mbox{as}\ n\rightarrow+\infty\, ,
\end{equation}
with $a_n=\sqrt{n}$ in the finite horizon case and $a_n=\sqrt{n\log n}$ in the
infinite horizon case. Here $\Phi$ is the density function of the corresponding Gaussian random variable $W$. Further, \cite{SV07} uses a version of the local theorem to deduce the recurrence of $(X_n,\kappa_n)_{n\ge 1}$.
We recall that when the horizon is finite, the recurrence comes directly from the CLT thanks to a general argument due to Conze \cite{Conze99} and Schmidt \cite{Schmidt98}.
When the horizon is finite, a sharp error rate in the LLT
has been obtained by P{\`e}ne~\cite{pene09IHP}
and further extensions including expansions of any order in the LLT have been shown in \cite{Pene18}.
In this paper, we establish several extensions of the LLT in the infinite horizon case, which is much more delicate due to the lack of finite variance.
We emphasize that previous results on LLT~\cite{SV07} and mixing~\cite{pene09IHP}
for the infinite measure preserving, infinite horizon Lorentz maps reduce to first order terms.
\subsection*{Tail probability of the first return time to the initial cell (see Theorem~\ref{THMreturntime})}
Let $\tau_0$ be the first return time to the initial cell, that is
\[
\tau_0:=\min\{n\ge 1\, :\, \kappa_n=(0,0)\}\, .
\]
When the horizon is finite, it was proved in \cite{DSV} that $\mathbb P(\tau_0>N)\sim\frac 1{\Phi(0)\log N}$. When the horizon is infinite, we show that
\begin{equation}\label{returntime0}
\mathbb P(\tau_0>N)\sim\frac 1{\Phi(0)\log\log N}\quad\mbox{as}\quad N\rightarrow +\infty\, .
\end{equation}
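Let us give a heuristic explanation (not a proof) of the iterated logarithm, based on the LLT \eqref{LLT0}: the expected number of returns to the initial cell up to time $N$ is
\[
\sum_{n=1}^N\mathbb P(\kappa_n=0)\approx\sum_{n=2}^N\frac{\Phi(0)}{n\log n}\approx \Phi(0)\log\log N\, ,
\]
and, as in the classical theory of recurrent random walks, one expects $\mathbb P(\tau_0>N)$ to be asymptotically inverse to this quantity. This is consistent with \eqref{returntime0} as well as with Theorem~\ref{THMreturntime} below, where $\Phi(0)=\Phi_{\Sigma^2}(0)=(2\pi\sqrt{\det\Sigma^2})^{-1}$ when $d=2$; in the finite horizon case the same heuristic, with $\sum_{n\le N}\Phi(0)/n\approx\Phi(0)\log N$, matches the result of \cite{DSV} recalled above.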
\subsection*{Study of long free flights (see Lemma~\ref{lemm-tail})}
An important ingredient of our proofs for the higher order LLT and mixing is a higher order expansion of the tail of $\kappa$. We recall that, in the case where only one obstacle (modulo $\mathbb Z^2$) is tangent to an infinite line contained in the billiard domain, \cite[Proposition 6]{SV07} shows that
\[
\mathbb P(\kappa_1=L+Nw)=\frac{\mathfrak a}{|Nw|^3}+o(N^{-3})\, .
\]
This form was enough in \cite{SV07} for obtaining the LLT. In our proofs, we need an error of $\mathcal O(N^{-4})$.
Our Lemma~\ref{lemm-tail} provides (under an additional regularity assumption) a precise estimate of the following form (with explicit constants $\mathfrak a$ and $\mathfrak a'$):
\begin{equation}\label{Plongtraj}
\mathbb P(\kappa_1=L+Nw)=\frac{\mathfrak a}{|Nw|^3}+\frac{\mathfrak a'}{|Nw|^4}+o(N^{-4})\, .
\end{equation}
\subsection*{Class of functions} All our results below hold for dynamically H{\"o}lder observables and refer to Section \ref{mainresults} for precise definition
and further assumptions, where required.
\subsection*{Higher order in Mixing LLT (see Theorem~\ref{THM0} for details)}
We study a stronger version of the LLT, the mixing LLT (MLLT)
which consists in establishing
\begin{equation}\label{MLLT0}
\mathbb E[f(X_0).
\mathbf 1_{\{\kappa_n=N\}}.g(X_n)]\sim \frac{\Phi(0)\mathbb E[f(X_0)]
\mathbb E[g(X_0)]}{n\log n} \quad\mbox{as}\ n\rightarrow +\infty\, ,
\end{equation}
when $\mathbb E[f(X_0)]\mathbb E[g(X_0)]\ne 0$. MLLT is about asymptotic independence of $(X_0,\kappa_n,X_n)$ as $n\rightarrow +\infty$.
When $N=0$, we prove in particular that
\begin{align*}
\mathbb E[f(X_0).
\mathbf 1_{\{\kappa_n=0\}}.g(X_n)]&=\frac{\Phi(0)\mathbb E[f(X_0)]
\mathbb E[g(X_0)]}{n\log(n\log n)} +O\left( \frac 1{n(\log n)^2}\right)\, ,
\end{align*}
with additional error terms detailed in Theorem~\ref{THM0}.
This result ensures in particular that
\[
\mathbb E[f(X_0).
\mathbf 1_{\{\kappa_n=0\}}.g(X_n)]=\mathbb E[f(X_0)]
\mathbb E[g(X_0)] \Phi(0)\left(\frac 1{n\log n}-\frac{\log\log n}{n(\log n)^2}\right) +O\left( \frac 1{n(\log n)^2}\right)
\]
providing a second order term in \eqref{MLLT0}.
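For the reader's convenience, we note that the second display follows from the first simply by expanding the iterated logarithm,
\[
\frac 1{n\log(n\log n)}=\frac 1{n\log n}\cdot\frac 1{1+\frac{\log\log n}{\log n}}=\frac 1{n\log n}-\frac{\log\log n}{n(\log n)^2}+O\left(\frac{(\log\log n)^2}{n(\log n)^3}\right)\, ,
\]
the last error term being absorbed in $O\left( \frac 1{n(\log n)^2}\right)$.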
When $N\ne 0$ is fixed, we obtain an intermediate term ensuring in particular that
\[
\mathbb E[f(X_0).
\mathbf 1_{\{\kappa_n=\lfloor N\sqrt{n
\log n
}\rfloor\}}.g(X_n)]=- \frac{ \nabla\Phi(N) \cdot \mathfrak K(f,g)}{(n\log(n\log n))^{\frac {3}2}} +O\left( \frac 1{(n\log n)^{\frac {3}2}\log n}\right)
\]
if $\mathbb E[f(X_0)] \mathbb E[g(X_0)]=0$, with $\mathfrak K(f,g)$ an $\mathbb R^2$-valued bilinear form linearly independent of $\mathbb E[f(X_0)]
\mathbb E[g(X_0)]$.
\subsection*{Mixing of general observables in the infinite measure case (Theorem~\ref{THM1})}
The local limit theorem for $\kappa_n$ is strongly related to the notion of mixing of dynamical systems preserving an infinite measure, that is the study of the behaviour of quantities of the form
\[
\int_M f(X_0,\kappa_0).g(X_n,\kappa_n)\, d\mu\, ,
\]
where $\mu$ is the measure absolutely continuous with respect to the Lebesgue measure which is invariant under $(X_n,\kappa_n)\mapsto (X_{n+1},\kappa_{n+1})$.
A mixing result without error term has been established in~\cite[Theorems 1.1]{Pene18}. In the present Theorem \ref{THM1} we improve this result to
\begin{align}\label{EQTHM1}
&\int_M f(X_0,\kappa_0).g(X_n,\kappa_n)\, d\mu=O\left(n^{-\frac 32}(\log n)^{-\frac 52}\right)\\
\nonumber&+ \left(\frac {\Phi(0)}{n\log(n\log n)}+O((\log n)^{-1})\right)\int_Mf(X_0,\kappa_0)\, d\mu\, \int_Mg(X_0,\kappa_0)\, d\mu\, .
\end{align}
In the \emph{finite} horizon case, an expansion of any order of the form $\int_M f(X_0,\kappa_0).g(X_n,\kappa_n)\, d\mu=\sum_{m=1}^{K}\frac{c_m(f,g)}{n^{m}}+o(n^{-K})$, with $c_1(f,g)=\Phi(0)\int_Mf(X_0)\,d\mu\int_Mg(X_0)\,d\mu$ and with $(c_m(f,g))_m$ linearly independent,
has been established in \cite{Pene18}. Such a result implies, in particular, that for any positive integer $m$, there exist couples of observables $(f,g)$ such that $\int_M f(X_0,\kappa_0).g(X_n,\kappa_n)\, d\mu\approx n^{-m}$.
In the infinite horizon case, \eqref{EQTHM1} does not imply directly the optimal result for zero mean observables and we address this in a result of independent interest.
\subsection*{Mixing of zero integral observables in the infinite measure case (Theorem~\ref{THEOCob})}
In the infinite horizon case, we obtain different
rates of mixing
for null integral functions $f,g$.
In particular, we show that
\begin{equation}\label{decor1int0}
\int_M f(X_0,\kappa_0).g(X_n,\kappa_n)\, d\mu\approx (n\log (n\log n))^{-1}n^{-m}\, ,
\end{equation}
when $g$ and $f$ are coboundaries of respective orders $k,\ell$ with $k+\ell=m$
(a coboundary of order $m$ is a function $g$ of the form $g(X_n,\kappa_n)=g_0(X_n,\kappa_n)-g_0(X_{n-1},\kappa_{n-1})$, where
$g_0$ is a coboundary of order $m-1$, considering here that a function with non null integral is a coboundary of order 0)
and that
\begin{equation}\label{decor2int0}
\int_M f(X_0,\kappa_0).g(X_n,\kappa_n)\, d\mu\approx (n\log (n\log n)) ^{-2}\,
\end{equation}
when $f=\phi.(\mathbf 1_{M_N}+\mathbf 1_{M_{-N}}-2\,\mathbf 1_{M_0})$ with $\phi$ and $g$
having non null integral.
\subsection*{Method of proof and main challenges}
We heavily exploit that the discrete infinite horizon Lorentz gas is a $\mathbb Z^d$ extension of the dynamical system
$X_n\mapsto X_{n+1}$, that is of the form $(X_n,\kappa_n)\mapsto (X_{n+1},\kappa_{n+1}=\kappa_n+\kappa(X_n))$. We refer to Section \ref{mainresults} for further details.
We recall that the LLT (without error term) established by Sz\'asz and Varj\'u in~\cite[Theorem 13]{SV07}
uses the abstract results~\cite[Theorem 2]{BalintGouezel06} of B{\'a}lint and Gou{\"e}zel, which establish the non standard CLT for observables of infinite variance (so, not $L^2$) acting on exponential Young towers.
The method in~\cite{BalintGouezel06} was developed to establish the non standard CLT for the stadium billiard, which, among other things, was possible due to the work of Young~\cite{Young98} and Chernov~\cite{Chernov99}.
A classical tool for establishing LLTs for chaotic dynamical systems is the perturbed transfer operator method.
As clarified in Section \ref{technicalkeyresults}, a serious challenge for obtaining error terms in the LLT in~\cite[Theorem 13]{SV07} is that
we need 'sufficiently high' expansions (not just continuity) for the families of eigenvalues and eigenprojectors associated with the transfer operator perturbed with non $L^2$ functions.
To give a first insight into this difficulty we point out that given the Young Banach space $\mathcal{B}$ (see Section~\ref{technicalkeyresults} for definition),
the family of operators is continuous as a family of elements of $\mathcal{L}(\mathcal{B}\rightarrow L^p)$ for some $p>1$ (but not as elements of $\mathcal{L}(\mathcal{B})$).
Propositions~\ref{PROP1} and~\ref{prop-lambda} provide the expansions for the families of eigenvalues and eigenprojectors we use in the proofs of our main results, already mentioned.
The proofs of Propositions~\ref{PROP1} and~\ref{prop-lambda} build on the framework put forward in~\cite{BalintGouezel06}
using several geometrical estimates established in~\cite{SV07}. Under assumptions specific to Young towers for Sinai billiards with infinite horizon, Propositions~\ref{prop-lambda} and~\ref{PROP1} can be viewed as refined versions of the main technical results in~\cite{BalintGouezel06}.
For a summary of the new ingredients used in the proofs of these propositions we refer to the text after the statement of Proposition~\ref{prop-expproj} in Section \ref{sec:Pi}.
Let us conclude this introduction with a few remarks on the main examples of infinite measure preserving systems of physical interest.
We have already recalled infinite measure preserving periodic Lorentz gases. As already mentioned, LLTs for Sinai billiards can be translated into first order mixing for periodic Lorentz gases.
A different type of systems of physical interest are intermittent maps, preserving an infinite measure.
To fix notation, we recall a well known interval map, namely the Liverani Saussol Vaienti map \cite{LSV}, $T:[0,1]\to [0,1]$, $T(x)=x(1+2^\alpha x^\alpha)$ if $x\le 1/2$ and $T(x)=2x-1$ if $x>1/2$. Such maps can be viewed as one sided Markov renewal chains with heavy dependencies.
We only consider $\alpha\ge 1$, as in this case $T$ preserves an infinite measure.
First order mixing for such maps was obtained by Gou\"ezel~\cite{Gouezel11} and by Melbourne and Terhesiu~\cite{MT12}.
In these works, the $1/\alpha$-stable LLT (for the first return map of intermittent maps, a much simpler dynamical setting than that of Sinai billiards) is a minor part of the mechanism; in fact, for $\alpha<2$ this type of LLT can be bypassed (see~\cite{MT12}).
In short, the mechanisms for obtaining mixing are highly non trivial generalizations of:
\begin{itemize}
\item[i)] the procedure of obtaining the asymptotic of renewal sequences for simple symmetric random walks (in the sense that the LLT is the only required ingredient), in the case of periodic Lorentz gases;
\item[ii)] proofs of strong renewal theorems (for renewal sequences with infinite mean) for one sided Markov renewal chains, in the case of intermittent maps.
\end{itemize}
Higher order mixing for (not necessarily Markov) infinite measure preserving intermittent interval maps have been obtained first in~\cite{MT12} and refined in~\cite{Terhesiu16}; such results have been
generalized to suspension flows in~\cite{MelTer17, BMT18}.
We do not know whether results similar to the results on mixing rates for mean zero
functions and coboundaries in the setup of Lorentz maps (in~\cite{Pene18} and also
in Theorem~\ref{THEOCob} of the present paper) hold for infinite measure preserving intermittent interval maps; the previous results~\cite{MT12, Terhesiu16, MelTer17, BMT18} for
zero integral
observables are confined to big $O$ error terms.
\subsection*{Outline of the paper}
In Section \ref{mainresults}, we introduce
the precise version of the $\mathbb Z^d$-periodic billiard model with infinite horizon that we consider and state our main results, Theorems~\ref{THMreturntime},~\ref{THM0},~\ref{THM1}
and~\ref{THEOCob}. In Section~\ref{technicalkeyresults}, we present our key technical results, Propositions~\ref{PROP1} and~\ref{prop-lambda}.
In Section \ref{freeflight}, we obtain an expansion for the probability of long free flights, which is crucial for Proposition~\ref{prop-lambda}.
In Section \ref{sec:Pi}, we prove our first key result Proposition~\ref{PROP1}, stating
an expansion of the dominating eigenprojector.
In Section \ref{sec:lambda}, we prove our second key result Proposition~\ref{prop-lambda}, which gives an expansion of the eigenvalue using results contained in the two previous sections.
In Section \ref{LLT}, we state an expansion in the LLT
in a general context and use it to prove our main results,
as well as a general decorrelation result for some $\mathbb Z^d$-extensions.
Some further technical estimates, as well as the proof of Theorem~\ref{THMreturntime}, are included in the Appendix.
\section{Model and main results}\label{mainresults}
We start by presenting the two dimensional case, that is when $d=2$.
We consider a planar billiard domain $Q$ given by
$
Q:=\mathbb R^2\setminus\bigcup_{j \in\mathcal J,\, \ell\in\mathbb Z^2} \mathcal O_{j,\ell}$,
with $\mathcal J$ a non-empty finite set and with $\mathcal O_{j,\ell}:=\mathcal O_{j}+\ell$, where the $\mathcal O_j$ are open convex sets with $C^3$ boundary and nonzero curvature, such that the $\mathcal O_{j,\ell}$ have pairwise disjoint closures.
We assume that the billiard has infinite horizon, i.e. that $Q$ contains at least one line.
When $d=1$, we replace $Q$ and the $\mathcal{O}_{j,\ell}$ by their quotients modulo $\mathbb Z\times\{0\}$, that is $Q$ is a subset of the tube $\mathbb T\times\mathbb R$.
\begin{figure}
\caption{A periodic billiard domain with 4 infinite horizon directions}
\label{fig1}
\end{figure}
We denote by $(M,T,\mu)$ the original billiard dynamical system corresponding to collision times. The configuration space $M$ is the set of couples of
position and velocity $(q,\vec v)$ with $q\in \partial Q$ and $\vec v$
a unit reflected vector, i.e. a unit vector $\vec v$ oriented inside $Q$.
The billiard map $T$ maps a configuration $(q,\vec v)$ corresponding to a collision time to the configuration
corresponding to the next collision time. The measure $\mu$ is the measure on $M$ with density proportional to $\cos\varphi$, where $\varphi$ is the angle of $\vec v$ with the normal vector to $\partial Q$ directed inside $Q$, normalized so that
$\mu(\{(q,\vec v)\in M\, :\, q\in\bigcup_{j\in\mathcal J} \partial \mathcal O_{j}\})=1$.
The infinite measure preserving dynamical system $(M,T,\mu)$ is canonically isomorphic to the $\mathbb Z^d$-extension of $(\bar M,\bar T,\bar\mu)$
by $\kappa:\bar M\rightarrow\mathbb Z^d$,
where $(\bar M,\bar T,\bar\mu)$ is the probability preserving billiard dynamical system in the billiard domain in
$\bar Q
=Q/\mathbb Z^2\subset \mathbb T^2$ if $d=2$ (and $\bar Q
=Q/(\{0\}\times\mathbb Z)\subset \mathbb T^2$ if $d=1$)
and $\bar\mu$ the probability measure with density proportional to $\cos\varphi$.
Let us give the formula for the asymptotic variance matrix $\Sigma^2$.
We take $\Sigma^2=(a_{i,j})_{i,j=1,...,d}$ with $(a_{i,j})_{i,j=1,2}$ given in \eqref{ExprSigma2}.
Note that \eqref{ExprSigma2} coincides with the formula of \cite[Theorem 20]{SV07} in the case of a single obstacle, since in this case a corridor corresponds to four points $x\in R_0$ s.t. $Tx=x$ (two positions, one on each side of the corridor, and two directions $\pm\frac{w_C}{|w_C|}$).
{\bf We assume $\Sigma^2$ invertible}, i.e. the interior of $Q$ contains at least $d$ unbounded lines not parallel to each other (one may observe that when $d=1$ the invertibility of $\Sigma^2=a_{1,1}$ just means that $a_{1,1}\ne 0$).
For any $x\in\mathbb R^d$, we write $\Phi_{\Sigma^2}(x):=\frac{e^{-\frac 12 \Sigma^{-2}x\cdot x}}{\sqrt{(2\pi)^d\det \Sigma^2}}$; $\Phi_{\Sigma^2}$
is the density function of a Gaussian distribution with expectation $0$ and variance
matrix $\Sigma^2$.
We set $\tau_0:=\min\{n\ge 1\, :\, \kappa_n=(0,0)\}$ with $\kappa_n:=\sum_{k=0}^{n-1}\kappa\circ\bar T^k$.
\begin{theo}[Tail probability of the first return time in the initial cell]\label{THMreturntime}
\begin{align}
\label{returntime}
\mbox{If }d=2 :&\quad\bar\mu(\tau_0>N)\sim\frac {2\pi \sqrt{\det \Sigma^2}}{\log\log N}\quad\mbox{as}\quad N\rightarrow +\infty\\
\label{returntimedim1}
\mbox{If }d=1 :&\quad
\bar\mu\left(\tau_0>N\right)\sim \sqrt{\frac {2 a_{1,1}\log N}{\pi N}}\quad\mbox{as}\quad N\rightarrow +\infty\, .
\end{align}
\end{theo}
Our other main theorems will require the following additional assumption (ensuring \eqref{Plongtraj}):
\begin{equation}\label{H0}
\mbox{$\partial Q$ is $C^4$ at points $q\in\partial Q$ such that the tangent line to $\partial Q$ at $q$ is contained in $Q$.}
\end{equation}
Let us introduce the class of smooth functions we consider. Let $R_0\subset M$ be the set of reflected vectors that are tangent to $\partial Q$. The billiard map $T$
defines a $C^1$-diffeomorphism from $M\setminus(R_0\cup T^{-1}R_0)$ onto $M\setminus(R_0\cup T R_0)$.
For any integers $k\le k'$, we write $\xi_k^{k'}$ for the partition
of $M\setminus \bigcup_{j=k}^{k'} T^{-j}R_0$ into connected components and set $\xi_k^\infty:=\bigvee _{j\ge k}\xi_k^j$.
For any $\phi:M\rightarrow\mathbb R$ and any $\eta\in(0,1)$, we set
\begin{equation}\label{DefiHolder}
L_{\phi,\eta}:=\sup_{k\ge 0}\sup_{A\in \xi_{-k}^k}\sup_{x,y\in A}\frac{|\phi(x)-\phi(y)|}{\eta^k}
\quad\mbox{and}\quad
\Vert\phi\Vert_{(\eta)}:=\Vert\phi\Vert_\infty+L_{\phi,\eta}\, .
\end{equation}
For $\phi: \bar M\rightarrow \mathbb R$, we define $R_0$, $\xi_{k}^j$, $L_{\phi,\eta}$, $\Vert\phi\Vert_{(\eta)}$ in the same way with $(\bar M,\bar T)$ instead of $(M,T)$.
\begin{theo}[Mixing local limit theorem]\label{THM0}
Assume
\eqref{H0}.
Let $\phi,\psi: \bar M\rightarrow \mathbb R$ be two measurable functions such that $\Vert \phi\Vert_{(\eta)}+\Vert \psi\Vert_{(\eta)}<\infty$,
then, uniformly in $N\in\mathbb Z^d$,
\begin{align}\label{LLTfinal}
&\int_{\bar M} \phi 1_{\{\kappa_n=N\}}\psi\circ \bar T^n\, d\bar\mu
=\frac{\mathbb E_{\bar\mu}[\phi]\mathbb E_{\bar\mu}[\psi]}{(n\log( n\log n))^{\frac d2}}\left(\Phi_{\Sigma^2}\left(\frac N{\sqrt{n\log (n\log n)}}\right)
+O((\log n)^{-1})\right)\\
\nonumber& - \nabla\Phi_{\Sigma^2}\left(\frac N{\sqrt{n\log (n\log n)}}\right)
\cdot\frac{\mathfrak K(\phi,\psi)}{(n\log (n\log n))^{
\frac{d+1}2}}
+O\left(\frac{
1}{(n\log n)^{\frac {d+1}2}\log n}\right)\, .
\end{align}
with $\mathfrak K(\phi,\psi):=\mathbb E_{\bar\mu}[\psi]\sum_{j\ge 0}
\mathbb E_{\bar\mu}[\kappa\circ \bar T^{
j}\phi]+\mathbb E_{\bar\mu}[\phi]\sum_{j\le -1}
\mathbb E_{\bar\mu}[\kappa\circ \bar T^{
j}\psi]$, these sums being absolutely convergent, and
with $\nabla$ the gradient operator.
\end{theo}
Observe that the first two terms of \eqref{LLTfinal} both contain expansions, since the first term can be rewritten as $
\frac {\mathbb E_{\bar\mu}[\phi]\mathbb E_{\bar\mu}[\psi]}{(n\log n)^{\frac d2}}\left(
\Phi_{\Sigma^2}\left(\frac N{\sqrt{n\log n}}\right)
\left(1+
\left(\frac{\Sigma^{-2}N\cdot N}{n\log n}-d\right)\frac{\log\log n}{2\log n}\right)
+ O\left(\frac 1{\log n}\right)
\right)$ and the second one
$-\nabla\Phi_{\Sigma^2}\left(\frac N{\sqrt{n\log n}}\right)
\cdot\frac {\mathfrak K(\phi,\psi)}{(n\log n)^{\frac{
d+1}2}}
\left(1-\frac12\left(d+2-\frac{\Sigma^{-2}N\cdot N}{n\log n}\right)
\frac{\log\log n}{\log n}\right)$.
The assumption that the interior of $Q$ contains at least $d$ non parallel infinite lines ensures that $\det \Sigma^2\ne 0$.
For any $N\in\mathbb Z^d$, we write $M_N$ for the set of $(q,v)\in M$ such that
$q\in \bigcup_{j\in\mathcal J}\partial \mathcal O_{j,N}$.
\begin{rem}
Observe that the bilinear forms $\mathbb E_\mu[\phi]\mathbb E_{\mu}[\psi]$
and $\mathfrak K(\phi,\psi)$ are linearly independent.
Indeed, under the assumptions of Theorem~\ref{THM0}, $\mathfrak K(\phi-\phi\circ \bar T,\psi)=-\mathbb E_\mu[\psi]\mathbb E_\mu[\kappa.\phi\circ \bar T]$,
which is non zero in general.
\end{rem}
\begin{theo}
[Decorrelation in infinite measure]
\label{THM1}
Assume \eqref{H0}.
Let $\eta,\gamma\in(0,1)$.
Let $\phi,\psi:M\rightarrow\mathbb R$ be two measurable observables such that
$
\sum_{N\in\mathbb Z^2}(1+
|N|^\gamma
)\left(\Vert \phi 1_{M_N}\Vert_{\infty}+\Vert \psi 1_{M_N}\Vert_{\infty}\right)<\infty$
and
$\sum_{N\in\mathbb Z^2}L_{\phi 1_{M_N},\eta}<\infty$.
Then
\[
\int_M \phi. \psi\circ T^n\, d\mu=
\frac {
\Phi_{\Sigma^2}(0)}{(n\log (n\log n))^{\frac d2}}
\int_M\phi\, d\mu\, \int_M\psi\, d\mu+O\left(\frac 1{(n\log n)^{\frac d2}\log n}\right)\, .
\]
If moreover
$
\sum_{N\in\mathbb Z^2}(1+
|N|^{1+\gamma}
)\left(\Vert \phi 1_{M_N}\Vert_{\infty}+\Vert \psi 1_{M_N}\Vert_{\infty}\right)<\infty$, then
\begin{align*}
\int_M \phi. \psi\circ T^n\, d\mu&=
\frac {
\Phi_{\Sigma^2}(0)}{(n\log (n\log n))^{\frac d2}}
\left(1
+O\left(\frac 1{\log n}\right)\right)\int_M\phi\, d\mu\, \int_M\psi\, d\mu\\
&+O\left(\frac 1{(n\log n)^{\frac {d+1}2}
\log n
}\right)\, .
\end{align*}
\end{theo}
Again, in the above result, $(n\log(n\log n))^{-\frac d2}$ can be replaced by $\frac{1-\frac d2\frac{\log\log n}{\log n}}{(n\log n)^{\frac d2}}$ providing a second term in $\frac{\log\log n}{(n\log n)^{\frac d2}\log n}$.
When $\phi$ or $\psi$ has zero mean, Theorem~\ref{THM1} only provides
an estimate in $O(\cdot)$.
Luckily, our method enables us to establish sharp decorrelation rates
for zero mean observables under natural regularity assumptions. This includes smooth
coboundaries.
\begin{theo}[Sharper decorrelation rates for particular functions with zero integral]\label{THEOCob}
Assume \eqref{H0}.
Let $\gamma\in(0,1)$.
\begin{itemize}
\item[(a)]
Let
$\phi,\psi:M\rightarrow\mathbb C$ be observables such that
$\sum_{N\in\mathbb Z^d}(1+
|N|^\gamma
)\left(\Vert \phi 1_{M_N}\Vert_{\infty}+\Vert \psi 1_{M_N}\Vert_{\infty}\right)<\infty$
and
$\sum_{N\in\mathbb Z^d}L_{\phi 1_{M_N},\eta}<\infty$.
Then
\begin{align*}
\int_M \phi.\psi\circ(id-T)^m\circ T^n\, d\mu&=-
\frac {\Phi_{\Sigma^2}(0)
\int_M\phi\, d\mu\, \int_M\psi\, d\mu}{(n\log (n\log n))^{\frac d2}n^m}
(-2)^{-m}d(d+2)\cdots(d+2m-2)\\
&\ \ \ +O\left((n\log n)^{-\frac d2}n^{-m}(\log n)^{-1}\right)\, ,
\end{align*}
with $(-2)^{-m}d(d+2)\cdots(d+2m-2)=(-1)^m m!$ when $d=2$.
\item[(b)] Let $N\in\mathbb Z^d$.
If $\phi:M\rightarrow\mathbb C$ is invariant by translation of positions by $\mathbb Z^d$ and satisfies $\Vert\phi\Vert_{(\eta)}<\infty$ and if there exists
$\delta\in(0,1]$ such that $\sum_{N\in\mathbb Z^d}\left(\Vert \psi 1_{M_N}\Vert_{(\eta)}+N^\delta \Vert \psi 1_{M_N}\Vert_{\infty}\right)<\infty$,
then, setting $f_0=\phi(1_{M_N}+1_{M_{-N}}-2\times 1_{M_0})$,
\begin{align*}
\int_M f_0.\psi\circ T^n\, d\mu
&=-
\frac {\Phi_{\Sigma^2}(0)\int_{M_0}\phi\, d\mu\, \int_M\psi\, d\mu}{(n\log (n\log n))^{\frac d2+1}}\left(\Sigma^{-2}N\cdot N
+O((\log n)^{-1})\right) \\
&
+O\left(\frac{\log n}{(n\log n)^{\frac {d+3}2}}+a_n^{-d-2-\delta}\right)
\, .
\end{align*}
\end{itemize}
\end{theo}
Again, $(n\log (n\log n))^{-\frac d2-m}$ in the above formulas can be replaced by
$\frac{1-\frac{(d+2m)\log\log n}{2\log n}}{(n\log n)^{\frac d2+m}}$,
providing an expansion with two terms.
Let us make several observations on this last result.
First, whereas in the finite horizon case, we only have leading terms in $n^{-d/2-m}$ in the decorrelation of smooth functions, in the infinite horizon case we can have leading terms in $n^{-m}(n\log n)^{-d/2}$
but also in $(n\log n)^{-d/2-1}$.
Other orders are possible. For example, we can easily adapt our proof
to obtain a sharp decorrelation rate in $n^{-\frac d2-m-1}(\log n)^{-d/2-1}$ in case (b) with $\psi$ a coboundary of order $m$.
Observe that, when $m=1$, Case (a) of Theorem~\ref{THEOCob} corresponds to the study of $\int_M \phi.\psi\circ T^n\, d\mu-\int_M \phi.\psi\circ T^{n-1}\, d\mu$ and the dominating term given by (a) is equivalent to the difference between the two leading terms of $\int_M \phi.\psi\circ T^n\, d\mu$
and of $\int_M \phi.\psi\circ T^{n-1}\, d\mu$ obtained in Theorem~\ref{THM1}.
The leading term is of order $(n\log n)^{-\frac d2}-((n+1)\log (n+1))^{-\frac d2}\sim \frac d2\, (n\log n)^{-\frac d2-1}\log n $.
Observe that the case when $\phi$ is a coboundary and the case of two coboundaries are both included in item (a) of Theorem~\ref{THEOCob}. Indeed, by $T$-invariance of $\mu$,
\begin{align*}
\int_{M}\phi\circ(id-T).\psi\circ T^n\, d\mu
&= \int_{M}\phi.\psi\circ T^{n}\, d\mu-\int_{M}\phi.\psi\circ T^{n-1}\, d\mu\\
&=- \int_{M}\phi.\psi\circ(id-T)\circ T^{n-1}\, d\mu\, ,
\end{align*}
and thus
\[
\int_{M}\phi\circ(id-T)^r.\psi\circ(id-T)^s\circ T^n\, d\mu=(-1)^r
\int_{M}\phi.\psi\circ(id-T)^{r+s}\circ T^{n-r}\, d\mu\, .
\]
Theorems~\ref{THM0} and~\ref{THM1} are contained in the more technical Theorems~\ref{LLT20} and~\ref{LLT2} (valid for a class of less regular observables)
which are consequences
of Theorem~\ref{LLT1}, which gives higher order terms in the LLT and in the speed of mixing
under abstract assumptions on families of eigenvalues and eigenprojectors. Most of our work consists in proving the results contained in the next section,
enabling the application of Theorem~\ref{LLT1} to the quotiented tower $(\Delta,f,\mu)$ and $\hat\kappa$.
\section{Key technical estimates}\label{technicalkeyresults}
We focus on the case $d=2$, since the corresponding results in the case $d=1$ follow from them.
We do not assume here that the interior of $Q$ contains at least $d$ non parallel infinite lines.
Our results are based on Fourier analysis and Young towers.
It is known that $(\bar M,\bar T,\bar\mu)$ is a factor, under a projection written $\mathfrak p_1:\bar\Delta\rightarrow \bar M$, of
a Young tower $(\bar\Delta,\bar f,\bar\mu_\Delta)$ with stable
and unstable curves. By factorizing/collapsing the stable curves we can reduce it to a one-dimensional Young tower $(\Delta, f,\mu_\Delta)$ by $\mathfrak p_2:\bar\Delta\rightarrow\Delta$.
Throughout we let $\hat\kappa:\Delta\to\mathbb{Z}^d$ be the version of $\kappa$ on $\Delta$, that is $\hat\kappa\circ\mathfrak p_2=\kappa\circ\mathfrak p_1$ (the existence of such a $\hat\kappa$ comes from the fact that $\kappa$ is constant on the stable curves).
Let $P$ be the transfer operator for $(\Delta, f,\mu_\Delta)$. We consider the family $(P_t)_{t\in\mathbb R^d}$ of perturbations of $P$ given by
$P_t:=P\left(e ^{i t\cdot \hat\kappa}\cdot \right)$, where $\cdot$ denotes the standard scalar product on $\mathbb R^d$. Note that $P_0\equiv P$.
As shown in~\cite{SV07} thanks to \cite{Young98,Chernov99} there exist $\beta\in(0,\pi]$ and $\theta\in(0,1)$ such that for every
$t\in[-\beta,\beta]^d$,
\begin{equation}
\label{spgap-Sz}
P_t^n=\lambda_t^n\Pi_t+N_t^n\, ,
\end{equation}
\begin{equation}
\label{spgap-Sz-bis}
\mbox{with}\quad\quad\quad\quad\sup_{t\in[-\beta,\beta]^d}\Vert N_t^n\Vert_{\mathcal B}+\sup_{t\in[-\pi,\pi]^d\setminus[-\beta,\beta]^d}\Vert P_t^n\Vert_{\mathcal B}=O\left(\theta^n\right)\, ,
\end{equation}
\begin{equation}
\label{spgap-Sz-bisbis}
\mbox{and}\quad\quad\quad\quad
\lim_{t\rightarrow 0}\left\Vert \Pi_t-\mathbb E_{\mu_\Delta}[\cdot]\mathbf 1\right\Vert_{\mathcal{L}(\mathcal{B}\rightarrow L^1(\mu_\Delta))}=0\, ,
\end{equation}
where $\mathcal B$ is a complex Banach space of $\mathbb C$-valued and $\mu_\Delta$-integrable functions
(considered by Young in \cite{Young98}).
As in the finite horizon case, $(P_t)_t$ defines a family of operators on $\mathcal{B}$.
But, whereas in the finite horizon case $t\mapsto P_t$ is $C^\infty$ from $\mathbb R^d$ to $\mathcal{L}(\mathcal{B})$, the set of continuous linear operators on $\mathcal{B}$, in the infinite horizon case we can just say that $t\mapsto P_t$ is
continuous from $\mathbb R^d$ to $\mathcal{L}(\mathcal{B}\rightarrow L^1(\mu_\Delta))$.
Additionally, the derivative of $P_t$ at $0$ should be $ P(i\hat\kappa\cdot)$, which is neither in $\mathcal{L}(\mathcal{B})$ nor in $\mathcal{L}(L^1)$
(see Lemma~\ref{discont}), but is in $\mathcal{L}(L^p\rightarrow L^q)$
as soon as $\frac 1p+\frac 12< \frac 1q$.
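One way to see this last claim (a standard estimate) is as follows: since $\mu_\Delta$ is an invariant probability measure, $P$ is a contraction on each $L^q(\mu_\Delta)$, while $\hat\kappa$ has the same distribution as $\kappa$, so that \eqref{tailprobakappa} ensures $\hat\kappa\in L^r(\mu_\Delta)$ for every $r<2$; hence, by the H{\"o}lder inequality with $\frac 1q=\frac 1p+\frac 1r$ and $r<2$,
\[
\Vert P(i\hat\kappa\, h)\Vert_{L^q(\mu_\Delta)}\le \Vert \hat\kappa\, h\Vert_{L^q(\mu_\Delta)}\le \Vert\hat\kappa\Vert_{L^r(\mu_\Delta)}\,\Vert h\Vert_{L^p(\mu_\Delta)}\, .
\]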
As shown in~\cite[Proposition 6]{SV07}, $\bar\mu(\kappa=L+N\vec w)= C_{L,\vec w}N^{-3}(1+o(1))$ for some $C_{L,\vec w}\ge 0$, strictly positive for some $L$ if there exists a line of direction $\vec w$ contained inside $Q$.
Combining this with~\eqref{spgap-Sz}, ~\eqref{spgap-Sz-bis}, several lemmas obtained inside~\cite[Proof of Theorem 13]{SV07}
and~\cite[Proposition 4.17]{BalintGouezel06}, Sz\'asz and Varj\'u established in \cite{SV07} the following estimate
\begin{equation}\label{devlambda}
\lambda_t=1-\Sigma^2 t\cdot t\, \log (1/|t|)(1+o(1)),\quad\mbox{as}\ t\rightarrow 0\, .
\end{equation}
Inside the proof of our Proposition~\ref{prop-lambda} below, which gives a higher order expansion of $\lambda_t$ at $t=0$, we provide a precise summary of the results in~\cite{SV07} needed to obtain~\eqref{devlambda}.
Let us recall that \eqref{spgap-Sz}, \eqref{spgap-Sz-bis}, \eqref{spgap-Sz-bisbis}
and \eqref{devlambda}
imply the LLT for $\bar T$ and
thus first order mixing (speed of mixing) for $T$ as in~\cite[Theorem 1.1]{Pene18}.
Here we are interested in higher order terms in both the LLT and mixing for suitable classes of functions.
\begin{prop}\label{PROP1}
There exists a functional Banach space $\mathcal B_0$ of bounded functions
and $\Pi'_0 \in\mathcal L(\mathcal B_0\rightarrow L^s(\mu_\Delta))$
for every $s\in[1,2)$ such that, for any $p>2$, any $p'\in[1,\frac43)$,
any $\gamma\in (1,\min(2\frac{p-1}p,\frac 4{p'}-2))$, there exists $C>0$
such that
$\|\Pi_t-\Pi_0-t\cdot \Pi_0'\|_{\mathcal L(\mathcal{B}_0\rightarrow L^{p'}(\mu_\Delta))}\le C\,|t|^\gamma$ as $t\rightarrow 0$.
\end{prop}
Proposition~\ref{PROP1} follows directly from Proposition~\ref{prop-expproj} by taking $v=1_\Delta$.
\begin{rem}
A precise formula for $\Pi'_0$ is given by~\eqref{eq-derivPi0} of Proposition~\ref{prop-expproj} together with~\eqref{eq1yderivpi} of Proposition~\ref{prop-expproj-base}.
\end{rem}
\begin{prop}
\label{prop-lambda}
Assume \eqref{H0}.
As $t\to 0$,
$\lambda_t=1- \Sigma^2 t\cdot t\, \log(|t|^{-1})
+O(|t|^2)$,
with the notations introduced at the beginning of Section~\ref{freeflight}.
\end{prop}
Since the norms of $\mathbb R^d$ are all metrically equivalent, Proposition~\ref{prop-lambda} is true for any choice of norm $|\cdot|$ on $\mathbb R^d$.
\section{Estimate of the probability of long free flights}\label{freeflight}
Our proof of Proposition~\ref{prop-lambda} provided in Section~\ref{sec:lambda}
is based on estimates established in Section~\ref{sec:Pi} and on a higher order expansion of the tail of the free flight $\kappa$.
Lemma~\ref{lemm-tail} below gives the required expansion (to be precise, we only use the expansion up to $O(N^{-4})$). We mention that Lemma~\ref{lemm-tail} can be regarded as a refinement of \cite[Proposition 6]{SV07}. First, it provides a higher order expansion of the tail of the free flight,
and second,
we compute the dominating term without any restriction on
the number of types
of scatterers at the boundary of a given corridor.
As in \cite{SV07}, we introduce the notion of corridor.
We call corridor $C$
a strip contained in $Q$ delimited by $(A+\mathbb R w)\cup(B+L+\mathbb R w)$ for some
$A,B\in \bigcup_{j\in\mathcal J}\partial\mathcal O_j$ and $L,w\in\mathbb Z^2$.
We write $\mathcal C$ for the set of corridors.
For any corridor $C\in\mathcal C$, we write $\mathfrak d_{C}$ for its width and
$w_{C}$ for its direction, that is, the prime $w\in\mathbb Z^2\setminus\{0\}$ with nonnegative first coordinate (and with positive second coordinate if the first one vanishes) such that $C$ contains a line of direction
$w_{C}$.
\begin{figure}
\caption{A corridor}
\label{fig3}
\end{figure}
Observe that a corridor is fully determined by $A$ and that, given $A$ defining a corridor, there are two possible choices of prime $w\in\mathbb Z^2$, one being the opposite of the other.
We write $\mathcal A$ for the finite set of $(A,w)\in (\bigcup_{j\in\mathcal J}\partial\mathcal O_j)\times\mathbb Z^2$, with $w$ prime, such that there exists $(B,L)$ as above and $d_{A}$ for the width of the corresponding corridor.
\begin{rem}
Let $(A,w)\in \mathcal A$; then $(A, v=w/| w|)\in\bigcap_{k\in\mathbb Z} T^{k}(R_0)$.
With the above notations, let $\mathcal O$ be the obstacle containing $B+L$.
If
$(q',v')\in\bigcap_{k\in\mathbb Z} T^{k}(R_0)\setminus\{(A,v)\}$ is close to $(A,v)$,
the line $q'+\mathbb R v'$ passes between $\mathcal{O}+Mw$ and $\mathcal{O}+(M+1)w$ for some $M\in\mathbb Z$ without passing through them. But this is impossible if $ |\angle( v, v')|<\arctan\frac{2r}{|w|}$, with $r$ the minimal radius of curvature of $\mathcal O$. Thus $(A,v)$ is isolated in $\bigcap_{k\in\mathbb Z} T^{k}(R_0)$. Therefore $\mathcal A$ is finite.
\end{rem}
Given $L,w\in\mathbb Z^2$,
we write $E_{L, w}$ for the set of $(A,B)$ such that $(A,w)\in \mathcal A$
and $B\in\partial Q$ is as described above (on the other line delimiting the corridor).
\begin{lem}
\label{lemm-tail}
Assume \eqref{H0}.
Let $ w\in\mathbb Z^2$ be prime and let $L\in\mathbb Z^2$.
Then
\[
\bar\mu(\kappa=L+N w)
=\sum_{(A,B)\in E_{L, w}}\left(\frac{ d_{A}^2\, \mathfrak a_{(A,B)}}{2|\partial\bar Q|\, |w|N^3 }+\frac{\mathfrak a'_{(A,B)}}{2|\partial \bar Q|\, (|w|\, N)^4}\right)
+o(N^{-4})\, ,
\]
with
$\mathfrak a_{(A,B)}:= \frac{AA'\, B''B}{|w| ^2}$ and
\[
\mathfrak a'_{(A,B)}:=
\frac{3d_A^2AA'\, B''B(AA'+B''B-2\frac{w.\overrightarrow{A(B+L)}}{|w|})}2
+
\frac{d_A^3}2\left( B''B\left(\frac 1{\mathfrak c_{A}}-\frac 1{\mathfrak c_{A'}}
\right)+AA'\left(\frac 1{ \mathfrak c_{B}}-\frac 1{ \mathfrak c_{B''}}\right)\right)\, .
\]
where $B''$ is the first point of $\partial Q$ met by the half-line $B-\mathbb R_+ w$
and where $A'$ is the first point of $\partial Q$ met by the half-line $A+\mathbb R_+ w$ and where $\mathfrak c_{E}$ is the curvature of $\partial Q$ at any $E\in\partial Q$.
\end{lem}
For example, in the case of a billiard with a single original obstacle (i.e. if $\mathcal J$ has a single element),
$\bar\mu(\kappa=(N,1))=\frac{d^2}{2|\partial \bar Q|}(N^{-3}+ 3(1-h)N^{-4}+o(N^{-4}))$ (then $A=A'$, $B=B''$ and so $\frac 1{\mathfrak c_A}-\frac 1{\mathfrak c_{A'}}=\frac 1{\mathfrak c_B}-\frac 1{\mathfrak c_{B''}}=0$), where $d$ is the vertical distance between two different obstacles and $h$ is the horizontal gap between the highest point and the lowest point of the obstacle ($h$ can be negative or positive).
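To illustrate this formula on the most classical configuration (a purely illustrative computation, under the assumption that this configuration satisfies our standing hypotheses): if the obstacles are disks of radius $r\in(0,\frac 12)$ centred at the points of $\mathbb Z^2$, then $d=1-2r$, $h=0$ and $|\partial\bar Q|=2\pi r$, so that the above expansion reads
\[
\bar\mu(\kappa=(N,1))=\frac{(1-2r)^2}{4\pi r}\left(N^{-3}+3N^{-4}+o(N^{-4})\right)\, .
\]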
\begin{cor}\label{lemm-tailrem}
Assume \eqref{H0}. Let us fix a corridor $C$.
The sum $\mathfrak a_C$ of the $\mathfrak a_{(A,B)}$ over the couples $(A,B)$ associated to $C$ is 2.
\end{cor}
\begin{proof}
The corridor $C$ is delimited by two lines $\mathcal{L}$ and $\mathcal{L}'$.
Let $(A_i)_{i\in \mathbb Z/N\mathbb Z}$ (resp. $(B_j)_{j\in \mathbb Z/N'\mathbb Z}$) be the points of $\bigcup_{j=1}^{\mathcal J}\partial \mathcal{O}_j\cap (\mathcal{L}+\mathbb Z^2)$ (resp. $\bigcup_{j=1}^{\mathcal J}\partial \mathcal{O}_j\cap(\mathcal{L}'+\mathbb Z^2)$) ordered so that $A'_i=A_{i+1}+\ell_i$ and $B''_j=B_{j-1}+\ell'_j$ for some $\ell_i,\ell'_j\in\mathbb Z^2$.
Then $\sum_i\overrightarrow{A_iA'_i}=\sum_j\overrightarrow{B''_jB_j}= w_C$,
and so \[
\mathfrak a_C=\sum_{i,j}\left(\mathfrak a_{(A_i,B_j)}+\mathfrak a_{(B_j,A_i)}\right)
=\sum_{i,j}\frac{A_iA'_i\, B''_jB_j+B_jB'_j\, A''_iA_i}{|w_C|^2}
=2\, .
\]
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lemm-tail}]
Observe that, for every $\varepsilon$, outside the $\varepsilon$-neighbourhood of $\bigcap_{k\in\mathbb Z} T^{k} (R_0)$, $\kappa$ is uniformly bounded. Moreover, outside the $\varepsilon$-neighbourhood of points $(A_0, w/| w|)$
with $(A_0,B)\in E_{L, w}$, for some $N_0$ large enough, $\kappa\not\in L+(N_0+\mathbb N) w$.
Therefore, it is enough to prove that, for any
$(A_0,B)\in E_{L, w}$ and any $i_0,i_1\in\mathcal J$ so that $A_0\in\partial \mathcal O_{i_0}$ and $B\in\partial \mathcal O_{i_1}$,
\begin{equation}\label{mu0L+Nw}
\mu(M_{(i_0 ,0)}\cap T^{-1}(M_{(i_1,L+N w)}))=\frac{d_0^2 \, \mathfrak a_{(A_0,B)}}{2|\partial\bar Q|\, |w|\, N^3}
+\frac{\mathfrak a'_{(A_0,B)}}{2|\partial \bar Q|\, (|w|\, N)^4} + o(N^{-4})\, ,
\end{equation}
where
$d_0:=d_{A_0}$ is the width of the corresponding corridor and $M_{(i,\ell)}$ is the set of elements of $ M$ based at a point of $\partial \mathcal O_{(i,\ell)}$.
Set $B_0:=B+L$.
Let $\mathcal A_N$ be the image set of
$M_{(i_0 ,0)}\cap T^{-1}(M_{(i_1,\ell_1+N w)})$ by the projection map $(q,\vec v)\mapsto (A,\vec v)$
where $A$ is the intersection point of $q+\mathbb R\, \vec v$
with the line $A_0+\mathbb R w^\perp$, where $ w^\perp$ is the unit vector perpendicular to $\vec w$
directed inside $Q$ at $A_0$.
By the invariance of $\bar\mu$ under the billiard map
and under the inversion $(q,\vec v)\mapsto (q,-\vec v)$,
and using coordinates $(x,\alpha)$ on $(A_0+\mathbb R w^\perp)\times S^1$,
with $x$ the abscissa on the line $A_0+\mathbb R w^\perp$ and $\alpha$ the angle between $w$ and $\vec v$, we obtain that
\begin{equation}\label{mu0L+Nw2}
\mu(M_{(i_0 ,0)}\cap T^{-1}(M_{(i_1,\ell_1+N w)}))=\frac 1{2|\partial \bar Q|}\int_{\mathcal A_N}\cos\alpha\, dx\, d\alpha \, .
\end{equation}
\begin{figure}
\caption{Projected point $A$.}
\label{fig0}
\end{figure}
We write $\mathcal O'$ for the first obstacle touched by $A_0+\mathbb R_+ w$
(at $A'_0\in\partial\mathcal O'$) and $\mathcal O''$ for the first obstacle touched by $B_0-\mathbb R_+ w$ (at $B''\in\partial\mathcal O''$).
Let us write $\vec v_\alpha$ for the unit vector making angle $\alpha$ with $ w$, for $\alpha\in[0,\pi/2]$, and let us consider $x$ close to 0.
First, for a given position $A=A_0+x w^\perp$ close to $A_0$ with $x<0$,
$(A,\vec v_\alpha)$ is in $\mathcal A_N$ if and only if
the first obstacle (other than $\mathcal O_{(i_0,0)}$) met by
the half-line $A+\vec v_\alpha\, \mathbb R_+$ is $\mathcal O_{(i_1,\ell_1+N w)}$, that is
if and only if
$ \max(\beta,\alpha_{N})\le \alpha<\alpha'_{N}$,
where we write $\beta$ for the angle such that the line $A+\vec v_\beta\mathbb R$ is tangent to $\mathcal O'$ from above, $\alpha_{N}$ for the angle such that the line $A+\vec v_{\alpha_N} \mathbb R$
is tangent to $\mathcal O_{(i_1,\ell_1+N w)}$ from underneath (at some point $B'_N$) and $\alpha'_{N}$ for the angle such that the line $A+\vec v_{
\alpha'_N} \mathbb R$
is tangent to $\mathcal O''+N w$ from underneath (at some point $B''_N$).
\begin{figure}
\caption{The $\alpha_i$ and $\alpha'_i$ in a corridor}
\label{fig1b}
\end{figure}
Second, for a given position $A=A_0+x w^\perp$ close to $A_0$ with $x>0$,
$(A,\vec v_\alpha)$ is in $\mathcal A_N$ if and only if
the first obstacle met by
the half-line $A+\vec v_\alpha\, \mathbb R_+$ is $\mathcal O_{(i_1,\ell_1+Nw)}$ and if
$A+\vec v_\alpha\, \mathbb R_-$ intersects $\mathcal O_{(i_0,0)}$, that is if
$\max(\beta',\alpha_{N})\le \alpha<\alpha'_{N}$,
with $\beta'$ the smallest positive angle such that $A+\vec v_{\beta'}\mathbb R$
is tangent to $\partial \mathcal O_{i_0}$.
Thus, the integral \eqref{mu0L+Nw2} becomes
\begin{eqnarray}\label{mu0L+Nw3}
\frac {I_N^{-}+I_N^+}{2|\partial \bar Q|}:=\frac {1}{2|\partial \bar Q|}
\left( \int_{x^-}^0 \int_{\max( \beta,\alpha_N) }^{\alpha'_N} \cos\alpha\, d\alpha \, dx \,
+ \int_0^{x^+} \int_{\max( \beta',\alpha_N)}^{\alpha'_N} \cos\alpha\, d\alpha \, dx\right)\, .
\end{eqnarray}
The proof continues now by giving precise estimates of
$\beta,\beta'$ and $\alpha_N, \alpha'_N$ and also of the numbers $x^- < 0 < x^+$ beyond which there are no solutions $\alpha$ to respectively
$ \max(\beta,\alpha_{N})\le \alpha<\alpha'_{N}$ and $\max(\beta',\alpha_{N})\le \alpha<\alpha'_{N}$ anymore.\\
\noindent\underline{\bf Estimate of $\beta$.}
Let $q(u)=A'_0-q_1(u)\frac{w}{|w|}-q_2(u)w^\perp$ be the parametrization of $\partial\mathcal{O}'$ by arc-length such that $q(0)=A'_0$ (i.e. $(q_1(0),q_2(0))=(0,0)$) and $A\in q(s)+\mathbb R q'(s)$.
Observe that $(q_1'(0),q'_2(0))=(1,0)$ and
$(q_1''(0),q''_2(0))=(0,\mathfrak c_{A'_0})$ where $\mathfrak c_{A'_0}$ is the curvature of $\partial \mathcal O'$ at $A'_0$.
Then
$\tan\beta=\frac{q'_2(s)}{q'_1(s)}= \frac{\mathfrak c_{A'_0}s+\frac {q_2'''(0)}2 s^2}{1+\frac {q_1'''(0)}2 s^2}+ O(s^3)$.
So
$\beta= \mathfrak c_{A'_0}s+\frac {q_2'''(0)}2 s^2+ O(s^3)$,
with $s$ such that
\begin{align*}
AA_0&=q_2(s)+(A_0A'_0-q_1(s))\frac{q'_2(s)}{q'_1(s)}=\frac{\mathfrak c_{A'_0}}2s^2
+ (A_0A'_0-s)(\mathfrak c_{A'_0} s+\frac {q_2'''(0)}2s^2)+O(s^3)
\end{align*}
Thus $
s=\frac{AA_0}{A_0A'_0\mathfrak c_{A'_0}}\left(1-\frac 1{2}\left(\frac{q_2'''(0)}{\mathfrak c_{A'_0}}
-\frac 1{A_0A'_0}\right)\frac{AA_0}{A_0A'_0\mathfrak c_{A'_0}}\right)+O((AA_0)^3)$.
It follows that
\begin{equation}\label{beta}
\beta=\frac{AA_0}{A_0A'_0}+ \frac{(AA_0)^2}{2(A_0A'_0)^3\mathfrak c_{A'_0}}+ O((AA_0)^3)\, .
\end{equation}
\noindent\underline{\bf Estimate of $\beta'$.}
We proceed analogously for $\beta'$. This time
we consider the parametrization $q(u)=A_0-q_1(u)\frac{w}{|w|}-q_2(u)w^\perp$ of $\partial\mathcal{O}_{(i_0,0)}$ by arclength such that $q(0)=A_0$ (i.e. $(q_1(0),q_2(0))=(0,0)$) and $A\in q(s)+\mathbb R q'(s)$.
Note that $(q_1'(0),q'_2(0))=(1,0)$ and
$(q_1''(0),q''_2(0))=(0,\mathfrak c_{A_0})$, where $\mathfrak c_{A_0}$ denotes the curvature of $\partial \mathcal O_{(i_0,0)}$ at $A_0$.
Then
$\tan\beta'=\frac{q'_2(s)}{q'_1(s)}=\mathfrak c_{A_0}s+O(s^2)$,
with $s$ such that
$-AA_0=q_2(s)-q_1(s)q'_2(s)=-\frac{\mathfrak c_{A_0}}2s^2
+O(s^3)$
and so
\begin{equation}\label{beta'}
\beta'
= \sqrt{2\mathfrak c_{A_0}AA_0}+O(AA_0)\, .
\end{equation}
\noindent\underline{\bf Estimates of $\alpha_N$ and of $\alpha'_N$.}
Note that $\alpha'_N$ coincides with $\alpha_{N+m_1}$
for another choice of the point $B_0$ and for some $m_1\in \mathbb Z$ (depending on this choice). Thus it is enough to estimate $\alpha_N$.
The notation $O(\cdot)$ below is uniform in $A$.
Let $B_N:=B_0+N w$.
We parametrize $ \partial\mathcal O_{(i_1,\ell_1+N w)}$ by arc-length as $q(u)=B_N+q_1(u)\frac{w}{|w|}+q_2(u)w^\perp$ such that $q(0)=B_N$, $q(s)=B'_N$, $q'(0)=\frac{w}{|w|}$ and $q''(0)=\mathfrak c_{B_0}w^\perp$ where $\mathfrak c_{B_0}$ is the curvature of $\partial \mathcal O_{(i_1,\ell_1)}$ at $B_0$.
Since $\langle q'(s),q'(s)\rangle=1$,
$\langle q''(s),q'(s)\rangle=0$,
$0=\langle q'''(0),q'(0)\rangle+\langle q''(0),q''(0)\rangle=q'''_1(0)+\mathfrak c_{B_0}^2$
and
so $q'''_1(0)=-\mathfrak c_{B_0}^2$.
Then
$B_NB'_N
=|q(s)-q(0)|
= s
-\frac {\mathfrak c_{B_0}^2}{24} s^3 +o(s^3)$,
and so
\begin{align}
\nonumber \tan\alpha_N=&\frac {q'_2(s)}{q'_1(s)}=\frac {\mathfrak c_{B_0}s+\frac 12q_2'''(0)s^2+\frac 16q_2''''(0)s^3+o(s^3)}{1+\frac 12q_1'''(0)s^2
+o(s^2)}\\
\label{tanalphaN1}&=\mathfrak c_{B_0}B_NB'_N+\frac{q_2'''(0)}{2}(B_NB'_N)^2+\mathfrak b_{B_0}(B_NB'_N)^3+o((B_NB'_N)^3)\, ,
\end{align}
with $\mathfrak b_{B_0}:=\frac{q_2''''(0)}{6}+\frac{13\mathfrak c_{B_0}^3}{24}$.
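Let us record, as a mere consistency check (and not as an additional step of the proof), where the three contributions to $\mathfrak b_{B_0}$ come from: the term $\frac{q_2''''(0)}6$ from the numerator of the first line, the term $\frac{\mathfrak c_{B_0}^3}2$ from the expansion of the denominator (using $q_1'''(0)=-\mathfrak c_{B_0}^2$), and the term $\frac{\mathfrak c_{B_0}^3}{24}$ from the substitution $s=B_NB'_N+\frac{\mathfrak c_{B_0}^2}{24}(B_NB'_N)^3+o((B_NB'_N)^3)$; indeed
\[
\frac{q_2''''(0)}{6}+\frac{\mathfrak c_{B_0}^3}{2}+\frac{\mathfrak c_{B_0}^3}{24}=\frac{q_2''''(0)}{6}+\frac{13\,\mathfrak c_{B_0}^3}{24}=\mathfrak b_{B_0}\, .
\]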
Also
\begin{align*}
\tan \alpha_N&=\tan\left(T_{B_N} \partial \mathcal O_{(i_1,\ell_1+N w)}, \overrightarrow{AB'_N}\right)=
\frac{ w^\perp\cdot \overrightarrow{AB'_N}}{ w\cdot \overrightarrow{AB'_N}/| w|}
=
\frac{d'_0(A)+w^\perp\cdot \overrightarrow{B_NB'_N}
}{|w|\, N+\frac{w.(\overrightarrow{A_0B_0}+\overrightarrow{B_NB'_N})}{|w|}}\, ,
\end{align*}
with
$d'_0(A):=w^\perp\cdot \overrightarrow{AB_0}=d_0-w^\perp\cdot\overrightarrow{A_0A}$.
Thus $\alpha_N=O(N^{-1})$, $B_NB'_N=O(N^{-1})$, $A_0A=O(N^{-1})$.
Since $B_N\in \partial \mathcal O_{(i_1,\ell_1+N w)}$ and since
$T_{B_N}\partial \mathcal O_{(i_1,\ell_1+N w)}=B_N+\mathbb R w$, it follows that
$ w^\perp\cdot \overrightarrow{B_NB'_N}=q_2(s)=\frac 12
\mathfrak c_{B_0}(B_NB'_N)^2+o(N^{-2})$ and
$w\cdot \overrightarrow{B_NB'_N}
=|w|B_NB'_N
+o\left(N^{-2}\right)$.
So
\begin{align}
\tan \alpha_N
\label{a2bis}&=
\frac{d'_0(A)}{\frac{w.\overrightarrow{A_0B_N}}{|w|}}+
\frac 12
\frac{\mathfrak c_{B_0}(B_NB'_N)^2
}{\frac{w.\overrightarrow{A_0B_N}}{|w|}}
-\frac{d'_0(A) B_NB'_N}{(\frac{w.\overrightarrow{A_0B_N}}{|w|})^2}
+o(N^{-3})
\, .
\end{align}
Identifying \eqref{tanalphaN1} with \eqref{a2bis}, we obtain
that
\[
B_NB'_N= \frac{d_1}{ \frac{w.\overrightarrow{A_0B_N}}{|w|}}+\frac{d_2}{(\frac{w.\overrightarrow{A_0B_N}}{|w|})^2}+\frac{d_3}{(\frac{w.\overrightarrow{A_0B_N}}{|w|})^3}+o\left(N^{-3}\right)\, ,\]
with
$
d_1:=\frac{d'_0(A)}{\mathfrak c_{B_0}}$,
$d_2:=-\frac{q_2'''(0)d_1^2}{2\mathfrak c_{B_0}}$,
$d_3:=
-\frac{d_1^2}{2}
-q_2'''(0)\frac{d_1d_2}{\mathfrak c_{B_0}}-\mathfrak b_{B_0}\frac{d_1^3}{\mathfrak c_{B_0}}$.
Using \eqref{tanalphaN1} and $\arctan u=u-\frac {u^3}3+O(u^5)$, it follows that
\begin{align}
\label{alphaN}
\alpha_N= \frac{d'_0(A)}{\frac{w.\overrightarrow{A_0B_N}}{|w|}} -\frac{ \frac{(d'_0(A))^2}{2 \mathfrak c_{B_0}}+\frac{(d'_0(A))^3}{3} }{(\frac{w.\overrightarrow{A_0B_N}}{|w|})^3}+o\left(N^{-3}\right)\, .
\end{align}
Let us recall that
$d'_0(A):=w^\perp\cdot \overrightarrow{AB_0}$.
Therefore, we also have
\begin{equation}\label{alpha'N}
\alpha'_N=\frac{d'_0(A)}{\frac{w.\overrightarrow{A_0B''_N}}{|w|}} -\frac{ \frac{(d'_0(A))^2}{2 \mathfrak c_{B''}}+\frac{(d'_0(A))^3}{3} }{(\frac{w.\overrightarrow{A_0B''_N}}{|w|})^3}+o\left(N^{-3}\right) \, .
\end{equation}
Hence
\begin{equation}\label{alpha'N-alphaN}
\alpha'_N-\alpha_N=d'_0(A)\left(\frac 1{\frac{w.\overrightarrow{A_0B''_N}}{|w|}}
-\frac{1}{\frac{w.\overrightarrow{A_0B_N}}{|w|}}\right)
+\frac{d_0^2}{2(\frac{w.\overrightarrow{A_0B''_N}}{|w|})^3}\left(\frac 1{ \mathfrak c_{B_0}}-\frac 1{ \mathfrak c_{B''}}\right)
+o\left(N^{-3}\right) \, .
\end{equation}
\noindent\underline{\bf Conclusion}
Observe that since $\alpha_N=O(N^{-1})$ and $\alpha'_N-\alpha_N=O(N^{-2})$,
\[
\int_{\max(\beta'',\alpha_N)}^{\alpha'_N}\cos(\varphi)\, d\varphi=\sin \alpha'_N- \sin\max(\beta'',\alpha_N)=\alpha'_N-\max(\beta'',\alpha_N)+O(N^{-4})\, ,
\]
for $\beta''\in\{\beta,\beta'\}$.
Recalling \eqref{mu0L+Nw3}, since $AA_0=O(N^{-1})$, it follows that
\[
I_N^++I_N^-=O(N^{-5})+\int_{0}^{x^+}\max(0,\alpha'_N- \max(\beta',\alpha_N))\, dx+\int_{x^-}^{0}\max(0,\alpha'_N-\max(\beta,\alpha_N))\, dx\, .
\]
Due to \eqref{alpha'N} and \eqref{beta'}, and since $A_0A=O(N^{-1})$,
for $N$ large enough, $\beta'<\alpha'_N$ if and only if
\[
\sqrt{2\mathfrak c_{A_0}AA_0}+O(AA_0)
< \frac{d_0}{|w|N}+O(N^{-2})\, , \mbox{i.e.}\quad
AA_0
< \frac{d_0^2}{2\mathfrak c_{A_0}|w|^2N^2}+O(N^{-3})\, .
\]
Analogously the condition $\alpha_N<\beta'<\alpha'_N$ is satisfied if and only if
$ AA_0
= \frac{d_0^2}{2\mathfrak c_{A_0}|w|^2N^2}+O(N^{-3})$.
Therefore, using the fact that $\alpha'_N-\alpha_N=O(N^{-2})$, we obtain
\begin{align}
I_N^+&=\int_{0}^{ \frac{d_0^2}{2\mathfrak c_{A_0}|w|^2N^2}+O(N^{-3})}\!\!\!\!\left(\alpha'_N-\alpha_N\right)\, dx
= \frac{d_0^3 B''B_0}{2\mathfrak c_{A_0}|w|^4N^4}+O(N^{-5})\, ,\label{IntAN+}
\end{align}
where we used \eqref{alpha'N-alphaN}.
Now \eqref{alpha'N} combined with \eqref{beta} implies that $\beta<\alpha'_{N}$ if and only if
$$
\frac{AA_0}{A_0A'_0}+ \frac{(AA_0)^2}{2(A_0A'_0)^3\mathfrak c_{A'_0}}+ O((AA_0)^3)<\frac{d_0+A_0A}{\frac{w.\overrightarrow{A_0B''_N}}{|w|}}+O(N^{-3})
$$
i.e.
$
A_0A<\mathfrak T'_N:=\frac{d_0A_0A'_0}{\frac{w.\overrightarrow{A_0B''_N}}{|w|}}+\left(d_0(A_0A'_0)^2-\frac{d_0^2}{2\mathfrak c_{A'_0}}\right)
\frac {1}{(\frac{w.\overrightarrow{A_0B''_N}}{|w|})^2}
+O(N^{-3})$.
Analogously
$\beta<\alpha_{N}$ if and only if
\begin{equation}\label{beta<alphaN}
A_0A<\mathfrak T_N:=\frac{d_0A_0A'_0}{\frac{w.\overrightarrow{A_0B_N}}{|w|}}+\left(d_0(A_0A'_0)^2-\frac{d_0^2}{2\mathfrak c_{A'_0}}\right)
\frac {1}{(\frac{w.\overrightarrow{A_0B_N}}{|w|})^2}
+O(N^{-3})\, .
\end{equation}
So
$\mathfrak T'_N-\mathfrak T_N\sim \frac{d_0A_0A'_0\, B''B_0}{(\frac{w.\overrightarrow{A_0B_N}}{|w|})^2}$.
Since $\alpha_N=O(N^{-1})$ and
$\alpha'_N-\alpha_N=O(N^{-2})$, it follows that
\begin{equation}
I_N^-
=\int_{-\mathfrak T'_N}^{0}(\alpha'_N-\max(\beta,\alpha_N))\, dx\, +O(N^{-5})\, .
\end{equation}
Now observe that
$\alpha_N<\beta<\alpha'_N$ if and only if $\mathfrak T_N<A_0A<\mathfrak T'_N$
and in this case:
\[
\alpha'_N-\max(\beta,\alpha_N)=\alpha'_N-\beta
=(\mathfrak T'_N-AA_0)\left(\frac 1{A_0A'_0}-\frac 1{\frac{w.\overrightarrow{A_0B''_N}}{|w|}}\right)+O(N^{-3})\, .
\]
Otherwise $\alpha'_N-\max(\beta,\alpha_N)=\alpha'_N-\alpha_N$ and so
\begin{align*}
\alpha'_N-\max(\beta,\alpha_N)&=(d_0+A_0A)\left(\frac 1{\frac{w.\overrightarrow{A_0B''_N}}{|w|}}-\frac{1}{\frac{w.\overrightarrow{A_0B_N}}{|w|}}\right)
+\frac{d_0^2}{2(\frac{w.\overrightarrow{A_0B_N}}{|w|})^3}\left(\frac 1{ \mathfrak c_{B_0}}-\frac 1{ \mathfrak c_{B''}}\right)
+o\left(N^{-3}\right) \, .
\end{align*}
Therefore
$
I_N^-
=\frac{\Gamma_1}{(\frac{w.\overrightarrow{A_0B_N}}{|w|})^3}+\frac{\Gamma_2}{(\frac{w.\overrightarrow{A_0B_N}}{|w|})^4}+o(N^{-4})$,
with $\Gamma_1:=d_0^2A_0A'_0\, B''B_0$ and
\[
\Gamma_2:= \frac{3d_0^2\, A_0A'_0B''B_0(A_0A'_0+B''B_0)}{2}
-\frac{d_0^3B''B_0}{2\mathfrak c_{A'_0}}+\frac{d_0^3A_0A'_0}{2}\left(\frac 1{ \mathfrak c_{B_0}}-\frac 1{ \mathfrak c_{B''}}\right) \, .
\]
Thus, due to \eqref{mu0L+Nw2}, \eqref{mu0L+Nw3} and \eqref{IntAN+}, we conclude that
\begin{align*}
\bar\mu(M_{(i_0 ,0)}\cap T^{-1}(M_{(i_1,L+N w)}))
&=\frac{\Gamma_1}{2|\partial \bar Q|(\frac{w.\overrightarrow{A_0B_N}}{|w|})^3}+\frac{\Gamma_3}{2|\partial \bar Q|(\frac{w.\overrightarrow{A_0B_N}}{|w|})^4}+o(N^{-4})\, ,
\end{align*}
with
$
\Gamma_3:=\Gamma_2+\frac{d_0^3 B''B_0}{2\mathfrak c_{A_0}}
$.
We obtain \eqref{mu0L+Nw} by noticing that, since $\overrightarrow{B_0B_N}=N w$,
\[
\frac 1{(\frac{w.\overrightarrow{A_0B_N}}{|w|})^3}=
\frac 1{(N|w|+\frac{w.\overrightarrow{A_0B_0}}{|w|})^3}=
\frac 1{(N|w|)^3}\left(1-3\frac{w.\overrightarrow{A_0B_0}}{N|w|^2}\right)+o(N^{-4})\, ,
\]
and analogously that
$
\frac 1{(\frac{w.\overrightarrow{A_0B_N}}{|w|})^4}=
\frac 1{(N|w|)^4}+o(N^{-4})$.
\end{proof}
\section{Regularity of the projector $t\mapsto\Pi_t$ at $t=0$}
\label{sec:Pi}
In this section we state and prove Proposition~\ref{prop-expproj}, which is a generalization of Proposition~\ref{PROP1}. We do not assume \eqref{H0}.
We write $\pi_0:\Delta\rightarrow Y$ for the vertical projection from $\Delta$ to
its base $Y=\Delta_0$ given by $\pi_0(x,\ell)=(x,0)$ and $\omega:\Delta\rightarrow\mathbb N$ for the level map given by $\omega((x,\ell))=\ell$.
To simplify notations we write $\mu_\Delta(h)$ for $\int_\Delta h\, d\mu_\Delta$.
\subsection{Banach spaces and regularity of the dominating eigenprojector}\label{sec:Banach}
We start by recalling results from \cite{BCS91,Young98,Chernov99,SV07}.
First recall that the operator $P$ can be written as follows
\begin{equation}\label{formulaP}
\forall h\in L^1(\mu_\Delta),\quad Ph(x)=\sum_{y\in f^{-1}(x)}e^{-\alpha(y)}h(y)\, ,\quad x\in \Delta\, ,
\end{equation}
with $\alpha\equiv 0$ outside the base of $\Delta$.
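Let us recall (this is the standard convention for Young towers \cite{Young98}, stated here only for the reader's orientation) that \eqref{formulaP} means that $P$ is the transfer operator of $f$ with respect to $\mu_\Delta$, i.e. that it is characterized by the duality relation
\[
\int_\Delta Ph\, .\, g\ d\mu_\Delta=\int_\Delta h\, .\, g\circ f\ d\mu_\Delta\, ,\quad h\in L^1(\mu_\Delta),\ g\in L^\infty(\mu_\Delta)\, ;
\]
this is consistent with $\Pi_0=\mathbb E_{\mu_\Delta}[\cdot]\mathbf 1$ in \eqref{spgap-Sz-bisbis}.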
We write $s_0(\cdot,\cdot)$ for the separation time for $f$ on $\mathbb{D}elta$ corresponding to the separation time $s(\cdot,\cdot)$ in \cite{Young98}.
In particular, if $s_0(x,y)\ge n$ then the corresponding elements in $\bar M$ (or in $M_N$) are in the
same atom of $\xi_0^n$.
Recall the class of $\eta$-H\"older functions, $\eta\in (0,1)$, defined in \eqref{DefiHolder}.
Choose $\beta\in(\eta,1)$ large enough so that $|\alpha(y_1)-\alpha(y_2)|\le C_\alpha \beta^{s_0(y_1,y_2)+1}$
if $s_0(y_1,y_2)\ge 1$.
The condition $\beta>\eta$ will ensure that the Banach spaces $\mathcal B_0$ and $\mathcal B$ described below will be tailored to the approximation of observables
considered in Theorem~\ref{THM1}.
We define the space $\mathcal B_0$ of Lipschitz functions $h:\Delta\rightarrow\mathbb C$ with respect to the metric $\beta^{s_0(\cdot,\cdot)}$, with norm
\[
\Vert h\Vert_{\mathcal B_0}:=\Vert h\Vert_\infty+\esup_{x,y\in\Delta\, :\, s_0(x,y)\ge 1}\frac{|h(x)-h(y)|}{\beta^{s_0(x,y)}}\, .
\]
Let $p>2$ and choose $\varepsilon>0$ small enough so that in particular $\sum_{\ell\ge 0} e^{p\ell\varepsilon}\mu_\Delta(\Delta_\ell)<\infty$,
writing $\Delta_\ell
$ for the $\ell$-th floor of the tower $\Delta$.
We let $\mathcal B$ be the space
of functions $h:\Delta\rightarrow\mathbb C$ such that the following quantity is finite
\[
\Vert h\Vert_{\mathcal B}:=\sup_{\ell\ge 0}e^{-\ell\varepsilon}\left(\Vert h_{|\Delta_\ell}\Vert_\infty+\esup_{x,y\in\Delta_\ell\, :\, s_0(x,y)\ge 1}\frac{|h(x)-h(y)|}{\beta^{s_0(x,y)}}\right)\, .
\]
The choice of $\varepsilon$
ensures that the Young Banach space $\mathcal B$ is continuously embedded in $L^p(\mu_\Delta)$. While $\mathcal{B}_0$ is independent of $p$, $\mathcal{B}$ depends on $p$ via $\varepsilon$.
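Indeed (a one-line check), if $h\in\mathcal B$ and $x\in\Delta_\ell$ then $|h(x)|\le e^{\ell\varepsilon}\Vert h\Vert_{\mathcal B}$, so that
\[
\int_\Delta |h|^p\, d\mu_\Delta\le \Vert h\Vert_{\mathcal B}^p\sum_{\ell\ge 0}e^{p\ell\varepsilon}\mu_\Delta(\Delta_\ell)<\infty
\]
by the above choice of $\varepsilon$.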
\begin{lem}\label{discont}
Let $H:g\mapsto P(\hat\kappa g)$. Then
$H^3(\mathbf 1)\not\in L^1(\mu_\Delta)$.
As a consequence, since $\mathbf 1\in\mathcal{B}\subset L^1(\mu_\Delta)$, $P'_0=iH$ acts neither on $\mathcal{B}$ nor on $L^1(\mu_\Delta)$.
\end{lem}
\begin{proof}
Note that
$H^3(\mathbf 1)=
P(\hat\kappa P(\hat\kappa P(\hat\kappa)))
=P^3(\hat\kappa\circ f^2. \hat\kappa\circ f. \hat\kappa)$,
using the fact that $\hat\kappa P(g)=P(g.\hat\kappa\circ f)$.
In particular $H^3(\mathbf 1)$ is integrable if and only if
$
\kappa\circ \bar T^2. \kappa\circ\bar T.\kappa$
is integrable, i.e. (by $\bar T$-invariance of $\bar\mu$) if and only if $
\kappa\circ \bar T.\kappa. \kappa\circ \bar T^{-1}
$ is integrable.
Let us prove that this random variable is not integrable.
Due to \cite[Proposition 9]{SV07}, there exist $K,K_0>0$ such that $|\kappa\circ \bar T^{-1}|$ and $|\kappa\circ\bar T|$
are both larger than $K\sqrt{|\kappa|}-K_0$.
Since $\kappa^2$ is not integrable, neither is
$\left| \kappa\circ\bar T. \kappa. \kappa\circ \bar T^{-1}
\right|$.
\end{proof}
In what follows we exploit that $\mathcal{B}$ is continuously embedded in $L^p(\mu_\Delta)$ and satisfies~\eqref{spgap-Sz} and~\eqref{spgap-Sz-bis}.
Using only the information on the tail of the free flight, namely $\bar\mu(|\kappa|>N)=O( N^{-2})$, we obtain the following estimate.
\begin{lem}
\label{lem-expproj} Let $q\in[1,\frac{2p}{p+2})$ (so that $2(\frac 1q-\frac 1p)>1$) and $\gamma\in(1, 2\left(\frac 1q-\frac 1p\right))$. Then there exists $C=C(q,\gamma)>0$ such that for all $t\in[-\pi,\pi]^2$,
$\|P_t-P_0-t\cdot P_0'\|_{\mathcal{B}\to L^q}\le C |t|^\gamma$,
with $P'_0:=P(i\hat\kappa \cdot)$.
\end{lem}
\begin{proof}
Note that $q<p$.
Set $r:=\frac{qp}{p-q}$ so that $\frac 1q=\frac 1p+\frac 1r$. With this notation,
the upper bound on $\gamma$ can be rewritten $\gamma r<2$ and
\begin{align*}
\|P_t-&P_0-t\cdot P_0'\|_{\mathcal{B}\to L^q}\ll\|P_t-P_0-t\cdot P_0'\|_{L^p\to L^q}\\
&\le\left\|v\mapsto P\left((e^{it\cdot\hat\kappa}-1-it\cdot \hat\kappa )v\right)\right\|_{L^p\to L^q}\\
&\le\left\|v\mapsto (e^{it\cdot\hat\kappa}-1-it\cdot \hat\kappa )v\right\|_{L^p\to L^q}\le\left\|e^{it\cdot\hat\kappa}-1-it\cdot \hat\kappa \right\|_{L^r}\\
&\le 2\left\||t|^\gamma|\hat\kappa|^\gamma \right\|_{L^r}
=2|t|^\gamma\left(\mathbb E\left[|\hat\kappa|^{\gamma r}\right]\right)^{1/r}=O(|t|^\gamma)\, ,
\end{align*}
where we used the H\"older inequality at the penultimate line and, at the last line, the fact that $\hat\kappa$ admits moments of every order smaller than 2. We have also used the fact that $|e^{ix}-1-ix|\le \min(2|x|, |x|^2)\le 2|x|^\gamma$ for any $\gamma\in[1,2]$ and all $x\in\mathbb R$ (applied to $x=t\cdot\hat\kappa$), together with the fact that $\gamma r<2$.
\end{proof}
Given $q\in[1,2)$, up to taking $p>2q/(2-q)$ large enough (and so up to changing our choice of $\mathcal B$), we can take $\gamma<2/q$ as close to $2/q$ as we wish
(in particular as close to 2 as we wish if
$q=1$).
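For instance (a purely numerical illustration of these constraints): for $q=1$ any $p>2$ is admissible and $\gamma$ can be taken arbitrarily close to $2(1-\frac 1p)$, hence arbitrarily close to $2$ for large $p$; for $q=\frac 43$ one needs $p>4$, and then $\gamma$ can be taken arbitrarily close to $2(\frac 34-\frac 1p)<\frac 32$.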
Let $Y:=\Delta_0$ be the base of the tower $\Delta$. Throughout, we let $\mu_Y=\mu_\Delta(\cdot|Y)$.
As shown in~\cite{Chernov99, SV07},
the height of the tower $(\Delta, f,\mu_\Delta)$, which we denote by $\sigma:Y\to \mathbb{N}$, has exponential tail $\mu_Y(\sigma>n)\ll \theta_1^n$ for some $\theta_1\in(0,1)$.
Using Lemma~\ref{lem-expproj} and building on the arguments used in~\cite{BalintGouezel06}, we obtain the following expansion of the eigenprojector $\Pi_t$.
\begin{prop}
\label{prop-expproj}
Let $b\in(p,+\infty]$.
For every $w\in\mathcal B_0$, every $v\in L^b(\mu_\Delta)$ constant on each $(a,\ell)$ (with $a\in\alpha$ and $0\le \ell<\sigma(a)$) such that $vw\in\mathcal B$,
there exists $\Pi_0'(vw)$ belonging to $L^{\eta}(\mu_\Delta)$ for every $\eta\in [1,2)$ such that $1_Y\Pi'_0(vw)\in\mathcal B_0$.\\
Moreover, for every $p'\in(1, 4/3)$ and every $\gamma\in (1,\min(2\frac{p-1}p,\frac 4{p'}-2))$, there exist $C,\delta>0$ such that, for every $(v,w)$ as above and all $t\in B_\delta(0)$,
\[
\|(\Pi_t-\Pi_0-t\cdot \Pi_0')(vw)\|_{L^{
p'}(\mu_\Delta)}\le C\,|t|^\gamma (\|vw\|_{\mathcal B}+
\|w\|_{\mathcal B_0}\|v\|_{L^{b}(\mu_\Delta)})
\]
and $\Vert \Pi'_0(vw)\Vert_{L^\eta}\le C (\|vw\|_{\mathcal B}+
\|w\|_{\mathcal B_0}\|v\|_{L^{b}(\mu_\Delta)})$.
Moreover $\Pi'_0$ satisfies
\begin{equation}
\label{eq-derivPi0}
\Pi_0' (vw)(x)=\Pi_0' (vw)\circ\pi_0(x)+
i\sum_{m=0}^{\omega(x)-1}\hat\kappa\circ f^m(\pi_0(x))\int_\Delta vw\, d\mu_\Delta\, .
\end{equation}
\end{prop}
We postpone the proof of Proposition~\ref{prop-expproj} to the end of this section. We remark that Proposition~\ref{prop-expproj} cannot be proved
merely via the continuity arguments in~\cite{KellerLiverani99}, which is why we resort to building on the arguments in~\cite{BalintGouezel06}.
The key elements used in the proof of Proposition~\ref{prop-expproj} are: i) to obtain Lemma~\ref{lem-eigf2}, which gives
the expansion of $t\mapsto\int_Y\Pi_t v\, d\mu_Y$ for suitable functions $v$, as in Proposition~\ref{prop-expproj-base}; ii) the key observation in i) is that it is sensible to study the smoothness in $t$ of $\int_Y\Pi_t\, d\mu_Y$ first, and to use this to define $\mathbf 1_Y\Pi'_0$
and, further, to control $\|1_Y\Pi_0'v\|_{\mathcal{B}_0}$ for suitable functions $v$;
iii) to use i) and ii) together with formula \eqref{eq-derivPi0} to control $\Pi_0'v$ in $L^{p'}$
for suitable $v$ and $p'$. For further details on the use of Lemma~\ref{lem-eigf2} in defining $\mathbf 1_Y\Pi'_0$ we refer
to the last paragraph in Subsection~\ref{subsec-def}.
\subsection{Regularity of $t\mapsto \mathbf 1_Y\Pi_t$.}
\label{subsec-def}
Recall that $\sigma$ corresponds to the first return time of $f$ to the base $Y$.
In what follows, we let $F=f^\sigma:Y\rightarrow Y$ with
$F(y)=f^{\sigma(y)}(y)$ for all $y\in Y$ be the first return map.
Recall that $(Y, F,\mu_Y)$ is a Gibbs Markov map
with respect to a suitable countable partition $\mathcal{Y}$ and that $\sigma$ is constant on each atom
of $\mathcal{Y}$ (the required definitions are recalled below).
Let $R: L^1(\mu_Y)\to L^1(\mu_Y)$ be the transfer operator of the Gibbs Markov base map $(Y, F=f^\sigma,\mu_Y)$ given by
\begin{equation}\label{defR}
Rv(x):=\sum_{y\, :\, Fy=x}\chi(y)v(y)=\sum_{n\ge 1}P^n(1_{Y\cap\{\sigma=n\}}v)\, ,
\end{equation}
with $\chi=\frac{d\mu_Y}{d\mu_Y\circ F}:Y\to\mathbb{R}$.
In particular, on $f^{-1}(Y)$, $e^{-\alpha}=\chi\circ\pi_0$.
Define $d_\beta(y,y')=\beta^{s(y,y')}$ where the separation time for $F$, $s(y,y')$, is the least integer $n\ge0$
such that $F^ny$ and $F^ny'$ lie in distinct partition elements in $\alpha$.
The partition $\mathcal{Y}$ separates trajectories, i.e. $s(y,y')=\infty$ if and only if $y=y'$; thus $d_\beta$ is a metric.
The map $F$ is a {\em (full-branch) Gibbs-Markov map}, which means that
\begin{itemize}
\item $F|_a:a\to Y$ is a measurable bijection for each $a\in\mathcal{Y}$, and
\item
$|\log\chi(y)-\log\chi(y')|\le C_\alpha d_\beta(y,y')$ for all $y,y'\in a$, $a\in\mathcal{Y}$
(since $s(\cdot,\cdot)\le s_0(\cdot,\cdot)$).
\end{itemize}
A consequence of this definition is that there is a constant $C>0$ such that
\begin{align} \label{eq:GM}
\chi(y)\le C\mu_Y(a) \qquad\text{and} \qquad
|\chi(y)-\chi(y')|\le C\mu_Y(a)d_\beta(y,y'),
\end{align}
for all $a\in\mathcal{Y}$ and $y,y'\in a$.
Since $F$ is Gibbs Markov, it follows that
(see, for instance,~\cite[Section 5]{Sarig02}):
\begin{itemize}
\item
The space $(\mathcal{B}_1,\Vert\cdot\Vert_{\mathcal B_0})$ of $\beta$-H\"older continuous functions on $Y$ contains constant functions and $\mathcal{B}_1\subset L^\infty( \mu_Y)$.
Note that $\mathcal{B}_1$ corresponds to functions $h\in\mathcal B_0$ supported on $Y$.
For this reason, and for the reader's convenience, we also write $\Vert\cdot\Vert_{\mathcal B_0}$
for the norm of $\mathcal{B}_1$ (in order to avoid the introduction of unnecessary notation).
\item $R$ is quasi-compact on $\mathcal{B}_1$ and $1$ is a simple eigenvalue for $R$, isolated in
the spectrum of $R$.
\end{itemize}
Building on the argument of~\cite[Lemma 3.15]{BalintGouezel06}, in this section we obtain
\begin{prop}
\label{prop-expproj-base}
Let $b\in(p,+\infty]$ and $\gamma\in (1,2\frac{p-1}p)$.
There exists $C>0$ such that,
for every $w\in\mathcal B_0$ and $v\in L^b(\mu_\Delta)$ constant on each $(a,\ell)$ (with $a\in\alpha$ and $0\le \ell<\sigma(a)$)
so that $vw\in\mathcal B$, the following holds true
\[\|1_Y(\Pi_t-\Pi_0-t\cdot \Pi_0')(vw)\|_{\mathcal{B}_0}\le C\,|t|^\gamma (\|vw\|_{\mathcal B}+\|w\|_{\mathcal B_0}\|v\|_{L^{b}(\mu_\Delta)})\, ,
\]
\begin{equation}
\label{eq1yderivpi}
\mbox{with}\quad\quad\quad\quad\quad 1_Y\Pi'_0(vw):=\mu_\Delta(vw)Q'_0(1_Y)+c_0(vw)1_Y\in\mathcal{B}_0\, ,
\end{equation}
\[
\mbox{and}\quad\quad\quad
c_0(vw):=i \left(\sum_{j\ge 0}\int_\Delta \hat\kappa\circ f^j \, .vw\, d\mu_\Delta+ \mu_\Delta( vw )\sum_{j\ge 0}\int_{\Delta} \hat\kappa\,\frac{ 1_{Y}\circ f^{j+1}}{\mu_\Delta(Y)}\, d\mu_\Delta\right)\, ,
\]
and with $Q'_0(1_Y)\in\mathcal B_1$ given by \eqref{eq-Qtderiv} and $\Vert 1_Y\Pi'_0(vw)\Vert_\infty\le C'
\|w\|_{\mathcal B_0}\|v\|_{L^{b}(\mu_\Delta)}$.
\end{prop}
\begin{rem}
The expression of $c_0(vw)$ comes from Lemma~\ref{lem-eigf2}.
As shown in Lemma~\ref{lemma-lemexpk1y} below, the sums in the formula for $c_0(vw)$ are absolutely convergent and are $O( \|w\|_{\mathcal B_0}\|v\|_{L^{b}(\mu_\Delta)})$.
\end{rem}
\begin{rem}\label{int00}
Under the assumptions of Proposition~\ref{prop-expproj},
if $\mu_\Delta(vw)=0$, then
\[
\Pi'_0(vw)=\Pi'_0(vw)\circ\pi_0=c_0(vw)=i \sum_{j\ge 0}\int_\Delta \hat\kappa\circ f^j \, .vw\, d\mu_\Delta\, .
\]
If moreover $vw=v'w'-(v'w')\circ f$ is a coboundary of functions of the same kind, then
\[
\Pi'_0(vw)=\Pi'_0(vw)\circ\pi_0=c_0(vw)=
-i \int_\Delta \hat\kappa \, .(v'w')\circ f\, d\mu_\Delta\, .
\]
\end{rem}
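Let us sketch, only for the reader's convenience, how the second formula of this remark follows from the first one. By $f$-invariance of $\mu_\Delta$, $\int_\Delta \hat\kappa\circ f^j \, .(v'w')\circ f\, d\mu_\Delta=\int_\Delta \hat\kappa\circ f^{j-1} \, .v'w'\, d\mu_\Delta$ for every $j\ge 1$, so that the series telescopes:
\[
\sum_{j\ge 0}\int_\Delta \hat\kappa\circ f^j \, .\left(v'w'-(v'w')\circ f\right)\, d\mu_\Delta
=\lim_{J\rightarrow +\infty}\int_\Delta \hat\kappa\circ f^J \, .v'w'\, d\mu_\Delta-\int_\Delta \hat\kappa \, .(v'w')\circ f\, d\mu_\Delta\, ,
\]
and the above limit vanishes by decay of correlations (writing $\int_\Delta \hat\kappa\circ f^J \, .v'w'\, d\mu_\Delta=\int_\Delta \hat\kappa\, .\,P^J(v'w')\, d\mu_\Delta$ and using \eqref{spgap-Sz}, \eqref{spgap-Sz-bis}, $\hat\kappa\in L^{p/(p-1)}(\mu_\Delta)$, $v'w'\in\mathcal B$ and $\mu_\Delta(\hat\kappa)=0$).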
Recall that $\sigma$ is the first return time of $f$ to the base $Y$.
Write $\hat \kappa_n:=\sum_{j=0}^{n-1}\hat\kappa\circ f^j$
and note that $\hat\kappa_\sigma:=\sum_{j=0}^{\sigma(\cdot)-1}\hat\kappa\circ f^j$ is the induced (to the base $Y$) version of $\hat\kappa$.
In order to define $1_Y\Pi'_0(vw)$ we will justify that the derivative at $t=0$ of the left hand side of the following identity
$$1_Y\Pi_t (vw)=\int_Y\Pi_t (vw)\, d\mu_Y\,\frac{Q_t(1_Y)}{\int_Y Q_t (1_Y)\, d\mu_Y} \, $$
is well defined. Here, $Q_t$ is an eigenprojector of a perturbation $\tilde R_t$ of $R$.
More precisely, while $\Pi_t$ is the eigenprojector of $ P(\lambda_t^{-1}e^{it\hat\kappa}\cdot)$ associated to the eigenvalue 1, $Q_t$ is the main eigenprojector of $\tilde R_t=R(\lambda_t^{-\sigma(\cdot)}e^{it\hat\kappa_\sigma}\cdot)$ associated to the eigenvalue 1 (see below for the formal definition of $\tilde R_t$).
The displayed formula above allows us to exploit the fact that its right hand side only involves
$\int_Y\Pi_t (vw)\, d\mu_Y$ and $Q_t$. As explained below, $Q_t$ is much easier to understand than $\Pi_t$. Among other technical lemmas, in the next subsection, we obtain the required expansion (in norm) for $Q_t$ (see Lemma~\ref{lem-eigf}).
Lemma~\ref{lem-eigf2} below allows us to control the derivative of $\int_Y\Pi_t (vw)\, d\mu_Y$ at $0$.
\subsection{Technical lemmas}
\label{subsec-techlem}
We start with the following lemma on the integrability of $\hat\kappa_\sigma$.\begin{lem}
\label{lem-indk} For any $r\in (1,2)$, $\int_Y|\hat\kappa_\sigma|^r\, d\mu_Y<\infty$.
\end{lem}
\begin{proof} First, due to the H\"older inequality,
\[
\int_Y|\hat\kappa_\sigma|^r\, d\mu_Y
=\sum_{n\ge 1}\int_{Y\cap\{\sigma=n\}}|\hat\kappa_n|^r\, d\mu_Y
\le\sum_{n\ge 1}\mu_Y(\sigma=n)^{1/q}\left(\int_Y|\hat\kappa_n|^{rp}1_{\{\sigma=n\}}\, d\mu_Y\right)^{1/p}\, ,
\]
with $p\in(1,2/r)$ and $q=p/(p-1)$ so that $1/p+1/q=1$.
Let $s=rp/(rp-1)$ so that $1/(rp)+1/s=1$.
Using the H{\"o}lder inequality for inner products, we have that for any $n\ge 1$
\[
|\hat\kappa_n|^{rp} =|\sum_{j=0}^{n-1}\hat\kappa \circ f^j\cdot 1|^{rp}
\le
\left(\left|\sum_{j=0}^{n-1}1\right|^{1/s} \left(\sum_{j=0}^{n-1}|\hat\kappa \circ f^j|^{rp}\right)^{1/rp}\right)^{rp}\\
\le n^{rp/s}\sum_{j=0}^{n-1}|\hat\kappa \circ f^j|^{rp}\, ,
\]
and thus
$ \int_Y|\hat\kappa_\sigma|^r\, d\mu_Y
\le\sum_{n\ge 1}\mu_Y(\sigma=n)^{1/q}n^{r/s}\left(\int_Y\sum_{j=0}^{n-1}|\hat\kappa \circ f^j|^{rp} 1_{\{\sigma=n\}}\, d\mu_Y\right)^{1/p}$,
which leads to
\[
\int_Y|\hat\kappa_\sigma|^r\, d\mu_Y \le\sum_{n\ge 1}C_1\theta_1^{n/q}n^{r/s}\left((\mu_\Delta(Y))^{-1}\int_\Delta|\hat\kappa |^{rp}\, d\mu_\Delta\right)^{1/p}<\infty\, .
\]
\end{proof}
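For instance (pure bookkeeping, only to fix ideas): for $r=\frac 32$ one may take $p=\frac 54\in(1,\frac 2r)$, so that $q=5$, $rp=\frac{15}8<2$ and $s=\frac{15}7$; the polynomial factor in $n$ is then absorbed by the exponential decay of $\mu_Y(\sigma=n)^{1/5}$.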
We define $\tilde R_t:=\sum_{n\ge 1}\lambda_t^{-n}P_t^n(1_{Y\cap\{\sigma=n\}}\cdot)=\sum_{n\ge 1}\lambda_t^{-n}R_n(e^{it\hat\kappa_n}\cdot)$,
with $R_nv:=R(1_{\{\sigma=n\}}v)=P^n(1_{Y\cap\{\sigma=n\}}v)$.
The next lemma
provides some useful estimates on $R_n$.
\begin{lem}\label{lem-Rn}
Let $b\in(2,+\infty]$.
Fix $q\in (\frac b{b-1},2)$ (with convention $\frac b{b-1}=1$ if $b=\infty$) and $\varepsilon_0\in(0,1)$. Then, for every $\gamma\in [1,2/q)$, there exist $C_0$ and $\rho\in(0,1)$ so that for all $t$ small enough, all $w \in \mathcal{B}_1$, all $n\ge 1$ and all $v_Y\in L^b(\mu_Y)$ constant on each atom of the partition $\alpha$,
\begin{align}
\label{eq-rnv0}
\|R_n((e^{it\hat\kappa_\sigma}-1-it\hat\kappa_\sigma) v_Y w)\|_{\mathcal{B}_0}
&\le C_0 \rho^n |t|^\gamma\, \|\hat\kappa_\sigma\|^\gamma_{L^{\gamma q}(\mu_Y)}\, \|v_Y\|_{ L^{b}(\mu_Y)} \| w \|_{\mathcal{B}_0}\\
\label{eq-rnv1}\Vert R_n(\hat \kappa_\sigma v_Y w)\Vert_{\mathcal{B}_0} &\le C_0\rho^n \|v_Y\|_{L^b(\mu_Y)}
\| \hat \kappa_\sigma\|^\gamma_{L^{\gamma q}(\mu_Y)} \| w\|_{\mathcal{B}_0}\, , \\
\label{eq-rnv2} \Vert R_n(v_Y w)\Vert_{\mathcal{B}_0} &\le C_0\rho^n \|v_Y\|_{L^2(\mu_Y)} \| w\|_{\mathcal{B}_0}\, .
\end{align}
\end{lem}
Note that, in this lemma, $1<\gamma<2(1-\frac 1b)$ and that up to adapting the value of $q$, we can take $\gamma$
as close to $2(1-\frac 1b)$ as we wish.
\begin{proof}
By the arguments used in ~\cite[Proof of Proposition 12.1]{MelTer17} and exploiting that $v_Y$ and $\hat\kappa_\sigma$ are constant on every $a\in\mathcal{A}$,
\begin{equation}\label{RnDA}
\forall w\in\mathcal{B}_1,\ \forall h\in L^1(\mu_Y),\quad \|R_n( h w)\|_{\mathcal{B}_0}\ll \|1_{\{\sigma=n\}} h\|_{L^1(\mu_Y)}\|w\|_{\mathcal{B}_0}\, ,
\end{equation}
for $h\in\{(e^{it\hat\kappa_\sigma}-1-it\hat\kappa_\sigma) v_Y, v_Y,\hat\kappa_\sigma, \hat\kappa_\sigma v_Y\}$.
A justification of \eqref{RnDA} based on ~\cite[Proof of Proposition 12.1]{MelTer17} is provided in Appendix~\ref{sec-RnDA}.
We note that since $\sigma$ has exponential tail, equation~\eqref{RnDA} and H{\"o}lder inequality imply~\eqref{eq-rnv1} and \eqref{eq-rnv2}.
Next, by Lemma~\ref{lem-indk},
for any $r \in (1,2)$,
$\hat\kappa_\sigma\in L^r(\mu_Y)$.
Since $\gamma q\in[1,2)$, using the same argument as in the proof of Lemma~\ref{lem-expproj} combined with the H\"older inequality, we have
\[
\int_Y 1_{\{\sigma=n\}}|(e^{it\cdot \hat\kappa_\sigma}-1-it\cdot \hat\kappa_\sigma)\, v_Y|\, d\mu_Y\ll \Vert v_Y\Vert_{L^b(\mu_Y)}\Vert |t\cdot \hat\kappa_\sigma|^\gamma\Vert_{L^q(\mu_Y)} (\mu_Y(\sigma=n))^{1-\frac 1b-\frac 1q}\, ,
\]
which leads to \eqref{eq-rnv0}.
Note that $\gamma q<2$ ensures that $|\hat\kappa_\sigma|^\gamma\in L^{q}(\mu_Y)$.
The result follows from the previous two displayed inequalities since $\sigma$ has exponential tail.~\end{proof}
Note that $\tilde R_0=R$ and that \eqref{devlambda}
implies that $\lambda_0'= \frac{d}{dt}\lambda_t|_{t=0}=0$.
The next lemma shows that $t\mapsto \tilde R_t$, viewed as a family of operators on $\mathcal B_1$, is differentiable at $t=0$
with derivative $\tilde R'_0:=R(i\hat\kappa_\sigma \cdot)=\sum_{n\ge 1} R_n(i\hat\kappa_n \cdot)$, and that this is also true if we replace $w\in\mathcal B_1$
by $v_Yw$ as in Lemma~\ref{lem-Rn}.
\begin{lem}
\label{lem-deriv}
Let $b,q,\gamma$ as in Lemma~\ref{lem-Rn}.
Then for every $\gamma\in [1,2/q)$, there exists
$C>0$ such that for $t$ small enough and all $w \in \mathcal{B}_1$
and for any $v_Y$ as in Lemma~\ref{lem-Rn},
\begin{align}
\|R(v_Yw)\|_{\mathcal{B}_0} &\le C \|v_Y\|_{ L^{b}(\mu_Y)}\Vert w\Vert_{\mathcal{B}_0},\label{Rp-0}\\
\|\tilde R'_0(v_Yw)\|_{\mathcal{B}_0} &\le C \|\hat\kappa_\sigma\|^\gamma_{L^{\gamma q}(\mu_Y)}\, \|v_Y\|_{ L^{b}(\mu_Y)}\Vert w\Vert_{\mathcal{B}_0},\label{Rp-a}\\
\|(\tilde R_t-\tilde R_0-t\cdot \tilde R'_0)(v_Y w)\|_{\mathcal{B}_0} &\le C |t|^\gamma(\|\hat\kappa_\sigma\|_{L^{\gamma q}(\mu_Y)}^\gamma\, \|v_Y\|_{ L^{b}(\mu_Y)})\Vert w\Vert_{\mathcal{B}_0}\, .\label{Rp-c}
\end{align}
\end{lem}
\begin{proof}
Summing \eqref{eq-rnv1} (resp. \eqref{eq-rnv2}) gives \eqref{Rp-a} (resp. \eqref{Rp-0}).
Next, note that
\begin{align*}
&\ \|(\tilde R_t-\tilde R_0-t\cdot \tilde R'_0)(v_Y w)\|_{\mathcal{B}_0} \le \sum_{n\ge 1} |\lambda_t^{-n}|\, \|R_n((e^{it\cdot \hat\kappa_\sigma}-1-it\cdot \hat\kappa_\sigma)(v_Yw))\|_{\mathcal{B}_0}\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \sum_{n\ge 1} |\lambda_t^{-n}-1|\left(\|R_n (v_Yw)\|_{\mathcal{B}_0}+\|R_n(\hat\kappa_\sigma (v_Yw))\|_{\mathcal{B}_0}\right)\\
&\le C_0|t|^\gamma(\|\hat\kappa_\sigma\|^\gamma_{L^{\gamma q}(\mu_Y)}\, \|v_Y\|_{ L^{b}(\mu_Y)}) \sum_{n\ge 1}
\rho^{\frac n2}\Vert w\Vert_{\mathcal{B}_0} \\
&\ \ \ + |1-\lambda_t|
\sum_{n\ge 1} n
\, |\lambda_t^{-n}|
\left(\|R_n (v_Yw)\|_{\mathcal{B}_0}+
t\|R_n(\hat\kappa_\sigma (v_Yw))\|_{\mathcal{B}_0}\right)\, ,
\end{align*}
where in the last inequality we have used Lemma~\ref{lem-Rn}
for a suitable $\rho$
together with the fact that $ |\lambda_t^{-n}-1|= |1-\lambda_t|\left|\sum_{k=1}^{n}\lambda_t^{-k}\right|$
and $|\lambda^{-1}_t|<\rho^{-\frac 12}$ if $t$ is small enough (due to \eqref{devlambda}).
We conclude by applying \eqref{eq-rnv1} and \eqref{eq-rnv2}
combined with $|\lambda^{-1}_t|<\rho^{-\frac 12}$.
\end{proof}
Recall that $\Pi_t$ acts on $\mathcal{B}$. The next lemma is a restatement of~\cite[Lemma 3.14]{BalintGouezel06} in terms of the
eigenprojection $\Pi_t$, as opposed to the (normalized) eigenvector $\frac{\Pi_t 1}{\int_\Delta \Pi_t 1\, d\mu_\Delta}$, as there.
\begin{lem}
\label{lem-projbase}For any $t$ small enough
and for any $v\in\mathcal{B}$,
$\tilde R_t(1_Y\Pi_tv)=(1_Y\Pi_t v)$.
\end{lem}
\begin{proof}
Since $\Pi_tv\in\mathcal{B}$, observe that $1_Y\Pi_tv\in\mathcal B_1$.
For all $x\in f^{-1}(Y)$,
\begin{align}
\label{eq-Pit}
\Pi_t v(x)=\lambda_t^{-{\omega(x)}}P_t^{\omega(x)}(\Pi_t v)(x)=\lambda_t^{-{\omega(x)}} e^{it\cdot \hat\kappa_{\omega(x)}(
\pi_0(x))}\Pi_t (v)\circ\pi_0(x)\, .
\end{align}
Therefore, for all $y\in Y$,
\begin{align*}
\Pi_t v(y)&=\lambda_t^{-1}P_t\Pi_tv(y)=\sum_{x\in f^{-1}(y)}\chi\circ\pi_0(x)\lambda_t^{-1}e^{it\cdot\hat\kappa(x)}\Pi_t(v)(x)\\
&=\sum_{z\in F^{-1}(y)}\chi(z)\lambda_t^{-\sigma(z)} e^{it\cdot \hat\kappa_{\sigma(z)}(z)}\Pi_t (v)(z)=\tilde R_t(1_Y\Pi_tv)(y)\, ,
\end{align*}
due to \eqref{eq-Pit}, since $\omega(x)+1=\sigma(\pi_0(x))$ and since, for every $x\in f^{-1}(Y)$,
$x=f^{\omega(x)}(\pi_0(x))$.~\end{proof}
By Lemma~\ref{lem-projbase}, for $t$ small enough, the eigenvalue $\tilde\lambda_t$ of $\tilde R_t$ associated with the eigenfunction $1_Y\Pi_t v$
satisfies $\tilde\lambda_t=1$; this lemma tells us how the projection acts on $\mathcal{B}$.
Let $Q_t$ be the eigenprojection for $\tilde R_t$ associated with $\tilde\lambda_t=1$.
Since $1$ is an isolated eigenvalue in the spectrum of $\tilde R_0=R$ and $\tilde R_t$ is a continuous family of operators (by Lemma~\ref{lem-deriv}),
we have that $\tilde \lambda_t=1$ is isolated in the spectrum of $\tilde R_t$ for every $t$ small enough.
Hence, there exists $\delta_0>0$ so that for any $\delta\in(0,\delta_0)$,
\begin{equation}
\label{eq-Qt}
Q_t=\frac 1{2i\pi}\int_{|\xi-1|=\delta}(\xi I-\tilde R_t)^{-1}\, d\xi.
\end{equation}
Since $(\xi I-\tilde R_0)^{-1}$ and $\tilde R'_0$ are well defined operators on $\mathcal{B}_1\subset L^\infty(\mu_Y)$, so
is the derivative at $t=0$ of $t\mapsto Q_t\in\mathcal{B}_1$, and we write
\begin{align}
\label{eq-Qtderiv}
Q'_0=\frac 1{2i\pi}\int_{|\xi-1|=\delta}(\xi I-\tilde R_0)^{-1}\tilde R'_0(\xi I-\tilde R_0)^{-1}\, d\xi\, ,
\end{align}
for $\delta>0$ small enough.
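Formally, \eqref{eq-Qtderiv} is obtained by differentiating \eqref{eq-Qt} at $t=0$ under the integral sign, using the resolvent identity $(\xi I-\tilde R_t)^{-1}-(\xi I-\tilde R_0)^{-1}=(\xi I-\tilde R_0)^{-1}(\tilde R_t-\tilde R_0)(\xi I-\tilde R_t)^{-1}$ together with Lemma~\ref{lem-deriv}; the corresponding rigorous estimates are carried out in the proof of Lemma~\ref{lem-eigf} below.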
Recall that $Q_0, Q_0'$ are well defined in $\mathcal{B}_1$.
The next result shows that $\|Q_0'h\|_{\mathcal{B}_0}$ is also well defined for $h=v_Yw$ with $v_Y,w$ as in Lemma~\ref{lem-Rn}.
\begin{lem}
\label{lem-eigf}
Let $b,q,\gamma$ as in Lemma~\ref{lem-Rn}.
Then there exist $C_1, C_2>0$ such that for $t$ small enough,
for every $v_Y\in L^{b}(\mu_Y)$ constant on each $a\in\alpha$ and every $w\in \mathcal{B}_1$,
\begin{align*}
\Vert Q_t(v_Yw)\Vert_{\mathcal{B}_0}&\le C_1
\Vert w\Vert_{\mathcal{B}_0}\|v_Y\|_{L^b(\mu_Y)}\, ,\\
\|Q_0'(v_Yw)
\|_{\mathcal{B}_0}&\le C_1\|\hat\kappa_\sigma\|_{L^{\gamma q}(\mu_Y)}^\gamma\, \|v_Y\|_{ L^{b}(\mu_Y)}\Vert w\Vert_{\mathcal{B}_0}\, ,\\
\|(Q_t -Q_0-tQ_0')(v_Yw)\|_{\mathcal{B}_0}&\le C_2 |t|^\gamma(\|\hat\kappa_\sigma\|_{L^{\gamma q}(\mu_Y)}^\gamma\, \|v_Y\|_{ L^{b}(\mu_Y)})\Vert w\Vert_{\mathcal{B}_0}\, .
\end{align*}
\end{lem}
\begin{proof}
The first estimate will come from our estimates of $\|(Q_t -Q_0-tQ_0')(v_Yw)\|_{\mathcal{B}_0}$ and $\|Q_0'(v_Yw) \|_{\mathcal{B}_0}$
and from \eqref{eq-Qt} ensuring that
\begin{align*}\label{eq-xiR0Y}
\Vert Q_0(v_Yw)\Vert_{\mathcal{B}_0}&=\frac 1{2\pi}
\left\|\int_{|\xi-1|=\delta}(\xi I-\tilde R_0)^{-1}(v_Yw) d\xi\right\Vert_{\mathcal{B}_0}\\
&=\frac 1{2\pi}\left\|\int_{|\xi-1|=\delta}\!\!\!\!\!\! (\xi I- R)^{-1}(v_Yw-\mu_Y(v_Yw)) d\xi+\mu_Y(v_Yw)\int_{|\xi-1|=\delta}\!\!\!\!\!\! (\xi I-1)^{-1}1_Y\, d\xi\right\Vert_{\mathcal{B}_0}
\end{align*}
Thus
$\Vert Q_0(v_Yw)\Vert_{\mathcal{B}_0}\le \frac 1{2\pi}\left( \sum_{j\ge 1}\int_{|\xi-1|=\delta}|\xi|^{-j-1}\|R^j(v_Yw-\mu_Y(v_Yw))\|_{\mathcal{B}_0} d\xi\right)+ |\mu_Y(v_Yw)|$. So
\[
\Vert Q_0(v_Yw)\Vert_{\mathcal{B}_0}\ll \left(\sum_{j\ge 1}(1-\delta)^{-j-1}\rho^{j-1}\|v_Y\|_{L^b(\mu_Y)}\|w\|_{\mathcal{B}_0}\right)+ |\mu_Y(v_Yw)|\ll \Vert w\Vert_{\mathcal{B}_0}\Vert v_Y\Vert_{L^b(\mu_Y)}\, ,
\]
\]
for some $\rho\in(0,1)$, where we used \eqref{Rp-0} combined with the spectral properties of $R$, up to take $\delta$ small enough so that $(1-\delta)^{-1}\rho<1$.
Second
\begin{align*}
&(Q_t -Q_0)= \frac 1{2i\pi}\int_{|\xi-1|=\delta}(\xi I-\tilde R_t)^{-1}(\tilde R_0- \tilde R_t)(\xi I-\tilde R_0)^{-1} d\xi\\\
&=E_1(t)+E_2(t)= \frac 1{2i\pi}\int_{|\xi-1|=\delta}(\xi I-\tilde R_0)^{-1}(\tilde R_0- \tilde R_t)(\xi I-\tilde R_0)^{-1} \, d\xi\\
&+\frac 1{2i\pi}
\int_{|\xi-1|=\delta}\Big((\xi I-\tilde R_t)^{-1}-(\xi I-\tilde R_0)^{-1}\Big)(\tilde R_0- \tilde R_t)(\xi I-\tilde R_0)^{-1} d\xi\, .
\end{align*}
Note that
\[
\|E_2(t)( v_Yw)\|_{\mathcal{B}_0}
\ll \int_{|\xi-1|=\delta} \Big\|(\xi I-\tilde R_t)^{-1}-(\xi I-\tilde R_0)^{-1} \Big\|_{\mathcal{B}_0}
\|(\tilde R_0- \tilde R_t)(\xi I-\tilde R_0)^{-1}(v_Yw)\|_{\mathcal{B}_0}\, d\xi.
\]
Recall that for all $\xi$ so that $|\xi-1|=\delta$, $\|(\xi I-\tilde R_t)^{-1}\|_{\mathcal{B}_0}\ll 1$, for all $t$ small enough.
This together with Lemma~\ref{lem-deriv} with $v_Y=1_Y$ implies that for all $t$ small enough,
\[
\left\|(\xi I-\tilde R_t)^{-1}-(\xi I-\tilde R_0)^{-1}\right\|_{\mathcal{B}_0}
=\left\|(\xi I-\tilde R_0)^{-1}(\tilde R_t-\tilde R_0)(\xi I-\tilde R_t)^{-1}\right\|_{\mathcal{B}_0}
\ll |t|.
\]
Hence
$
\|E_2(t)(v_Yw )\|_{\mathcal{B}_0}\ll |t| \int_{|\xi-1|=\delta}\|(\tilde R_0- \tilde R_t)(\xi I-\tilde R_0)^{-1}(v_Yw)\|_{\mathcal{B}_0}\, d\xi$.
To simplify notations, we write $\mu_Y(\cdot)$ for $\int_Y\cdot\, d\mu_Y$.
We claim that
\begin{equation}
\label{eq-xiRvY}
\int_{|\xi-1|=\delta}\!\!\!\!\!\!\!\!\! \|(\tilde R_0-\tilde R_t)(\xi I-\tilde R_0)^{-1}(v_Yw)\|_{\mathcal{B}_0}\, d\xi\ll |t| \|\hat\kappa_\sigma\|_{L^{\gamma q}(\mu_Y)}^\gamma \|v_Y\|_{L^b(\mu_Y)}\|w\|_{\mathcal{B}_0}
\, .
\end{equation}
This
implies that
\begin{align*}
\|E_2(t)(v_Y w)\|_{\mathcal{B}_0}\ll |t|^2\|\hat\kappa_\sigma
\|_{L^{\gamma q}(\mu_Y)}^\gamma\|v_Y\|_{L^b(\mu_Y)}\|w\|_{\mathcal{B}_0}\, .
\end{align*}
Next, compute that
\begin{align*}
E_1(t)=tQ_0' +E(t)&=\frac t{2i\pi}\int_{|\xi-1|=\delta}(\xi I-\tilde R_0)^{-1}\tilde R'_0(\xi I-\tilde R_0)^{-1} \, d\xi\\
&+\frac 1{2i\pi} \int_{|\xi-1|=\delta}(\xi I-\tilde R_0)^{-1}(\tilde R_0- \tilde R_t-t\tilde R'_0)(\xi I-\tilde R_0)^{-1} \, d\xi\, .
\end{align*}
Proceeding as in estimating $E_2(t)(v_Yw)$ above and using the first part of the conclusion in Lemma~\ref{lem-deriv}
and a formula analogous to \eqref{eq-xiRvY},
\begin{align}
\|Q_0'(v_Yw )\|_{\mathcal{B}_0}&\ll \int_{|\xi-1|=\delta}\|\tilde R'_0(\xi I-\tilde R_0)^{-1}(v_Yw)\|_{\mathcal{B}_0}\, d\xi\nonumber\\
&\ll\|\hat\kappa_\sigma\|_{L^{\gamma q}(\mu_Y)}^\gamma
\| v_Y\|_{L^b(\mu_Y)}
\|w\|_{\mathcal{B}_0} \, ,\label{claim}
\end{align}
and thus, the second part of the conclusion follows.
Finally, similarly to~\eqref{eq-xiRvY}, we claim that
\begin{align}
\label{eq-claim2}
\nonumber \|E(t)(v_Y w)\|_{\mathcal{B}_0}&\ll \int_{|\xi-1|=\delta}\|(\tilde R_0- \tilde R_t-t\cdot \tilde R'_0)(\xi I-\tilde R_0)^{-1}(v_Yw)\|_{\mathcal{B}_0}\, d\xi\\
&\ll |t|^\gamma\, \|\hat\kappa_\sigma\|_{L^{\gamma q}(\mu_Y)}^\gamma\|v_Y\|_{L^b(\mu_Y)}\|w\|_{\mathcal{B}_0}\, .
\end{align}
The third part of the conclusion follows by putting all the above together.
It remains to prove the claims~\eqref{eq-xiRvY}, \eqref{claim} and~\eqref{eq-claim2}.
Note that for any operator $\tilde P$ bounded in $\mathcal{B}_1$, we have
\begin{align*}
&\int_{|\xi-1|=\delta}\|\tilde P(\xi I-\tilde R_0)^{-1}(v_Yw)\|_{\mathcal{B}_0}\, d\xi\\
&=\int_{|\xi-1|=\delta} \Big\|\tilde P(\xi I-\tilde R_0)^{-1} (v_Yw-\mu_Y(v_Yw)1_Y)+ (\xi-1)^{-1}\mu_Y( v_Yw)\tilde P 1_Y\Big\|_{\mathcal{B}_0}\, d\xi\, .
\end{align*}
In the sequel, we take $\tilde P\in\{Id, \tilde R_0- \tilde R_t,\tilde R'_0,\tilde R_0-\tilde R_t-t\tilde R'_0\}$.
Since $\tilde R_0=R$ has a spectral gap in $\mathcal{B}_1$ with decomposition $R^j=\mu_Y(\cdot)1_Y+N'_j$ with $\|N'_j \|_{\mathcal{B}_0}\le C\rho^j$ for some $C>0$ and some $\rho<1$, for every $j\ge 1$, we can write
\[
R^j(v_Yw-\mu_Y(v_Yw)1_Y)=R^{j-1}R(v_Y-\mu_Y(v_Yw)1_Y)=
N'_{j-1}(R(v_Yw-\mu_Y(v_Yw)1_Y)),
\]
since $\mu_Y(R(v_Yw-\mu_Y(v_Yw)))=0$.
Note that $v_Yw-\mu_Y(v_Yw)1_Y$ need not belong to $\mathcal{B}_1$. However, by the first two displayed inequalities in Appendix~\ref{sec-RnDA}
(with $vw$ replaced by $v_Yw$ and $\mu_Y(v_Yw)1_Y$),
\[
\Big\|R(v_Yw-\mu_Y(v_Yw))\|_{\mathcal{B}_0}\le C' \|w\|_{\mathcal B_0}\|v_Y\|_{L^1(\mu_Y)}\, ,
\]
for some $C'>0$. Thus, there exist $C">0$ and $\rho<1$ so that for $j\ge 1$,
\begin{equation}\label{majoRj}
\Big\|R^j(v_Yw-\mu_Y(v_Yw))\Big\|_{\mathcal{B}_0}\le C"\rho^j
\|w\|_{\mathcal B_0}\|v_Y\|_{L^1(\mu_Y)}
\end{equation}
Below we write $\gamma(\tilde P)$ for a nonnegative exponent depending on the operator $\tilde P\in\{\tilde R_0-\tilde R_t,\tilde R'_0 ,\tilde R_0-\tilde R_t-t\tilde R'_0\}$.
Using the fact that $\tilde R_0=R$ and that $(\xi I-R)^{-1}=\sum_{j\ge 0}\xi^{-j-1}R^j$ and putting the above together, we obtain
\begin{align*}
&\int_{|\xi-1|=\delta} \|\tilde P(\xi I-R)^{-1} v_Yw\|_{\mathcal{B}_0}\, d\xi\le\int_{|\xi-1|=\delta} \left\| \xi^{-1}\tilde P(v_Yw-\mu_Y( v_Yw))\right\|_{\mathcal{B}_0}\, d\xi\\
&\ \ \ \ \ \ \ \ +\int_{|\xi-1|=\delta}\left\Vert \sum_{j\ge 1} \xi^{-j-1}\tilde P R^{j}(v_Yw-\mu_Y( v_Yw))+ (\xi-1)^{-1}\mu_Y( v_Yw)\tilde P 1\right\Vert_{\mathcal{B}_0}\, d\xi\\
&\ll t^{\gamma(\tilde P)} \left(\|\hat\kappa_\sigma\|_{L^{\gamma q}(\mu_Y)}^\gamma\, \|v_Y\|_{L^b(\mu_Y)}\|w\|_{\mathcal{B}_0}+\sum_{j\ge 1}\int_{|\xi-1|=\delta} |\xi|^{-j-1}\Big\| R^{j}(v_Yw-\mu_Y( v_Yw))\Big\|_{\mathcal{B}_0}\, d\xi\right)\\
&\ll t^{\gamma(\tilde P)} \left(\|\hat\kappa_\sigma\|_{L^{\gamma q}(\mu_Y)}^\gamma\, \|v_Y\|_{L^b(\mu_Y)}\|w\|_{\mathcal{B}_0}+\sum_{j\ge 1}\int_{|\xi-1|=\delta} |\xi|^{-j-1}\rho^j \Vert v_Y\Vert_{L^1(\mu_Y)}\|w\|_{\mathcal{B}_0}\, d\xi\right)\, ,
\end{align*}
where we used Lemma \ref{lem-deriv} and \eqref{majoRj}, with $\gamma(\tilde R_0-\tilde R_t)=1$, $\gamma(\tilde R'_0)=0$ and $\gamma(\tilde R_0-\tilde R_t-t\tilde R'_0)=\gamma$.
So
\[
\int_{|\xi-1|=\delta} \|\tilde P(\xi I-R)^{-1} (v_Yw)\|_{\mathcal{B}_0}\, d\xi\ll
t^{\gamma(\tilde P)} \|\hat\kappa_\sigma\|_{L^{\gamma q}(\mu_Y)}^\gamma \|v_Y\|_{L^b(\mu_Y)}\|w\|_{\mathcal{B}_0}
\left( 1+\delta \sum_{j\ge 1} \rho^j(1-\delta)^{-j}\right)
\]
The claims follow by choosing $\delta$
so that
$\sum_{j\ge 2}\rho^j|\xi|^{-j}\le \sum_{j\ge 2}\rho^j(1-\delta)^{-j}<\infty$.
~\end{proof}
\subsection{Expansion of $t\mapsto\int_Y\Pi_t\, d\mu_Y$}
The next estimate, of independent interest, requires a more careful analysis and strongly exploits that the modulus is outside of the integral.
Its proof uses arguments somewhat similar to the ones in~\cite{KellerLiverani99} together with arguments exploiting symmetries on the tower $\Delta$.
Recall that $p>2$.
\begin{lem}\label{lem-eigf2}
\label{cor-betcont}
Let $b\in(p,+\infty]$. Let $c_0 (vw)$ be as defined in
Proposition~\ref{prop-expproj-base}, namely
\[
c_0(vw)=i \left(\sum_{j\ge 0}\int_\Delta \hat\kappa\circ f^j \, .vw\, d\mu_\Delta+ \int_\Delta vw \, d\mu_\Delta\sum_{j\ge 0}\int_{\Delta} \hat\kappa\, \frac{1_{Y}\circ f^{j+1}}{\mu_\Delta(Y)}\, d\mu_\Delta\right)\in\mathbb C^2\, .
\]
Then for every $(v,w)$ as in
Proposition~\ref{prop-expproj}
and for every $\gamma\in (1,2\frac{p-1}p)$,
\begin{equation*}
\left|\int_Y (\Pi_t-\Pi_0)(vw)\, d\mu_Y-c_0(vw)\cdot t \right|\ll |t|^\gamma (\Vert vw\Vert_{\mathcal B}+\Vert w\Vert_{\mathcal B_0}\|v\|_{L^b(\mu_\Delta)}) \mbox{ as }t \to 0\,.
\end{equation*}
\end{lem}
\begin{proof}
Let $\gamma\in \left(1, 2\frac{p-1}p\right)$, i.e. $1<\gamma<2\left( 1-\frac 1p\right)<2\left( 1-\frac 1b\right)$. Fix $\varepsilon>0$ so that $\gamma+\varepsilon<2\left( 1-\frac 1p\right)<2$.
Fix $1<q<\frac {2b}{b+2}$ close enough to 1 so that $1<\gamma+\varepsilon<2\left(\frac 1q-\frac 1b\right)$.
Let $p_1\in(1,2)$.
Consider $\theta\in(0,1)$ satisfying \eqref{spgap-Sz-bis} and such that $\theta_1^{\frac {(p_1-1)}{p_1}}<\theta$. Set $\theta_0:=\sqrt{\theta}$
and $r:=\theta_0^{1/\vartheta}$ with $\vartheta
>2$ large enough (i.e. $r$ close enough to 1) so that
\begin{equation}\label{condr}
\frac{2(\gamma+\varepsilon)}{\vartheta-1}<\varepsilon\quad\mbox{and}\quad
\frac{\left(\gamma+\varepsilon\right)(\vartheta-2)}{\vartheta-1}>\gamma-1\, .
\end{equation}
We choose $\delta$ small enough so that $(1-\delta)^{-\frac p{p-1}}\theta_1<1$,
$(1-\delta)^{-\frac b{b-1}}\theta_1^{
\frac b{b-1}(\frac 1p-\frac 1{b})}<1$,
\begin{equation}
\label{eq-delta}
\delta+r<1, \quad
(1-\delta)^{-1}\rho<1,\quad
(1-\delta)^{-1}\theta_0<1
\end{equation}
and so that
$\Pi_t=\frac 1{2i\pi}\int_{|\xi-1|=\delta}(\xi I-P_t)^{-1}\, d\xi$ for every $t$ in a small neighbourhood of 0 (this is possible thanks to~\cite{KellerLiverani99} since
1 is isolated in the spectrum of $P_0$).
We will make use of this choice from equation~\eqref{eq-enm} onwards.
Recall that $(\xi I-P_t)^{-1}-(\xi I-P_0)^{-1}=(\xi I-P_0)^{-1}(P_t-P_0)(\xi I-P_t)^{-1}$, and so
\begin{align*}
&\int_Y(\Pi_t -\Pi_0) (vw) \, d\mu_Y=\frac 1{2i\pi}\int_Y \int_{|\xi-1|=\delta}(\xi I-P_0)^{-1}(P_t-P_0)(\xi I-P_t)^{-1} (vw)\, d\xi\, d\mu_Y\\
&=I_1(t)+I_2(t)=\frac 1{2i\pi}\int_Y \int_{|\xi-1|=\delta}(\xi I-P_0)^{-1}(P_t- P_0)(\xi I-P_0)^{-1} (vw)\, d\xi\, d\mu_Y\\
&+\frac 1{2i\pi}\int_Y \int_{|\xi-1|=\delta}(\xi I-P_0)^{-1}(P_t- P_ 0)\Big((\xi I-P_t)^{-1}-(\xi I-P_0)^{-1}\Big) (vw)\, d\xi\, d\mu_Y\, .
\end{align*}
\begin{itemize}
\item Let us start with the computation of $I_1(t)$. Setting
$N':=\lfloor(\gamma+\varepsilon)\log |t|/\log \theta_0\rfloor$,
\begin{align}
I_1(t)
&=\frac 1{2i\pi}\int_Y \int_{|\xi-1|=\delta}(\xi I-P_0)^{-1}(P_t- P_0)(\xi I-P_0)^{-1} (vw)\, d\xi\, d\mu_Y\nonumber\\
&= I_{1,N'}(t)+I'_{1,N'}(t)=\frac 1{2i\pi}\sum_{j=0}^{N'-1}\int_Y \int_{|\xi-1|=\delta}\xi^{-j-1}P_0^j(P_t- P_0)(\xi I-P_0)^{-1} (vw)\, d\xi\, d\mu_Y\nonumber\\
&\ \ \ +\frac 1{2i\pi}\int_Y \int_{|\xi-1|=\delta}\xi^{-N'}(\xi I-P_0)^{-1}P_0^{N'}(P_t- P_0)(\xi I-P_0)^{-1} (vw)\, d\xi\, d\mu_Y\, .\label{eqI1}
\end{align}
Observe that $|\xi-1|=\delta$ implies $|\xi^{-1}|\le(1-\delta)^{-1}<r^{-1}$.
Due to Lemma \ref{lem-expproj},
\begin{align*}
& \left|I_{1,N'}(t)-\frac 1{2i\pi}\sum_{j=0}^{N'-1}\int_Y \int_{|\xi-1|=\delta}\xi^{-j-1}P_0^j(t\cdot P'_0)(\xi I-P_0)^{-1} (vw)\, d\xi\, d\mu_Y \right|\\
&\ll r^{-N'}\left\Vert (P_t- P_0-t\cdot P'_0)(\xi I-P_0)^{-1} (vw)\right\Vert_{L^1(\mu_\Delta)}\ll r^{-N'}t^{\gamma+\varepsilon}\Vert vw\Vert_{\mathcal B}\, ,
\end{align*}
since our assumption on $\varepsilon,q$ ensures that $\gamma+\varepsilon<2\left(\frac 1q-\frac 1p\right)$.
Therefore
\begin{equation}
\left|I_{1,N'}(t)-
\frac 1{2\pi}
\sum_{j=0}^{N'-1}\int_{f^{-j-1}Y} \int_{|\xi-1|=\delta}\xi^{-j-1}(t\cdot \kappa)(\xi I-P_0)^{-1} (vw)\, d\xi\, d\mu_Y \right|\ll r^{-N'}t^{\gamma+\varepsilon}\Vert vw\Vert_{\mathcal B}\, .
\end{equation}
Moreover,
since $\sup_{\vert \xi-1\vert=\delta}\left\Vert(\xi I-P_0)^{-1}\right\Vert_{\mathcal B}<\infty$, using the Fubini theorem, we obtain
\begin{align*}
&I'_{1,N'}(t)
=\frac 1{2i\pi}\int_Y \int_{|\xi-1|=\delta}\xi^{-N'}(\xi I-P_0)^{-1}(\Pi_0+N_0^{N'})(P_t- P_0)(\xi I-P_0)^{-1} (vw)\, d\xi\, d\mu_Y\nonumber\\
&= \frac 1{2i\pi \mu_\Delta(Y)}\int_{|\xi-1|=\delta}\!\!\!\!\!\!\!\! \xi^{-N'}\Pi_0\left(1_Y(\xi I-P_0)^{-1}1\right)\Pi_0((P_t- P_0)(\xi I-P_0)^{-1} (vw)) \, d\xi +O(
r^{-N'}\theta^{N'}
)\nonumber\\
&=
\frac 1{2\pi\mu_\Delta(Y)}
\int_{|\xi-1|=\delta}\xi^{-N'}\Pi_0\left(1_Y(\xi I-P_0)^{-1}1\right)\Pi_0(t\cdot \hat\kappa(\xi I-P_0)^{-1} (vw)) \, d\xi\,\\
&+O((r^{-N'}|t|^{\gamma+\varepsilon}+r^{-N'}\theta^{N'}
)\Vert vw\Vert_{\mathcal B})\, ,
\end{align*}
where we used again Lemma~\ref{lem-expproj}.
Since $r=\theta_0^{1/\vartheta}$, $\vartheta>2$ and $\theta=\theta_0^{1/2}$, we obtain
\begin{equation}\label{I1Formule}
I_1(t)=
\frac 1{2\pi}
t\cdot A_{1,N'}(t) +O((r^{-N'}t^{\gamma+\varepsilon}+\theta_0^{\frac{(\vartheta-2)N'}{2}}
)\Vert vw\Vert_{\mathcal B})\, ,
\end{equation}
\begin{align}
\label{eq-1prime}
A_{1,N'}(t)&:=U_{N'}(t)+V_{N'}(t)=
\sum_{j=0}^{N'-1}\int_{Y} \int_{|\xi-1|=\delta}\xi^{-j-1}P^{j+1}\left( \hat \kappa(\xi I-P_0)^{-1} (vw)\right)\, d\xi\, d\mu_Y\\
\nonumber &+\frac 1{2\pi\mu_\Delta(Y)}\int_{|\xi-1|=\delta}\xi^{-N'}\Pi_0\left(1_Y(\xi I-P_0)^{-1}1\right)\Pi_0( \hat\kappa(\xi I-P_0)^{-1} (vw)) \, d\xi\, .
\end{align}
Due to our choice of $N'$ and since
$\theta_0=r^\vartheta
$, we have
\begin{equation}\label{estimN'}
r^{N'\vartheta}=\theta_0^{N'} \ll t^{\gamma+\varepsilon}
\end{equation}
so $r^{-N'} \ll t^{-\frac{\gamma+\varepsilon
}{\vartheta}}\ll t^{-\varepsilon}
$, due to our assumptions on $\vartheta$.
Note that
\[
(\xi I-P_0)^{-1}(vw)=\sum_{j\ge 0} \xi^{-j-1} P_0^j(vw)\quad\mbox{and}\quad
(\xi I-P_0)^{-1}1_\mathbb{D}elta= (\xi-1)^{-1}1_\mathbb{D}elta
\]
and so $V_{N'}(t)=\frac 1{\mu_\mathbb{D}elta(Y)}\int_{|\xi-1|=\delta}\sum_{j\ge 0}\xi^{-N'-j-1}(\xi-1)^{-1}\mu_\mathbb{D}elta(Y)\Pi_0( \hat\kappa P_0^j (vw)) \, d\xi$. Thus
\begin{align*}
V_{N'}(t)&=\frac 1{\mu_\mathbb{D}elta(Y)}\sum_{j\ge 0}\int_{|\xi-1|=\delta}\xi^{-N'-j-1}(\xi-1)^{-1}\mu_\mathbb{D}elta(Y)\Pi_0( \hat\kappa N_0^j (vw)) \, d\xi
\end{align*}
since $P_0^j=\mu_\mathbb{D}elta(\cdot)1_\mathbb{D}elta+N_0^j$, since $\Pi_0(\hat\kappa)=0$ and since $\sum_{j\ge 1}|\xi^{-j}\Pi_0(\hat \kappa N_0^j(vw))|\ll \sum_{j\ge 1}|(1-\delta)^{-1}\theta|^j\Vert \hat\kappa\Vert_{L^{p/(p-1)}(\mu_\mathbb{D}elta)}\Vert vw\Vert_{\mathcal B}<\infty$, due to our choice of $\delta$.
Therefore, due to the Cauchy integral formula,
\begin{align}
\label{formule TN'}
V_{N'}(t)&=2i\pi \sum_{j\ge 0}\Pi_0( \hat\kappa N_0^j (vw))
=2i\pi \sum_{j\ge 0}\int_\mathbb{D}elta \hat\kappa\circ f^j \, vw\, d\mu_\mathbb{D}elta\, .
\end{align}
Combining this with \eqref{I1Formule} and \eqref{eq-1prime}, we obtain
\begin{equation}\label{I1Formulebis}
I_1(t)=
t\cdot \left(\frac 1{2\pi}U_{N'}(t)+i\sum_{j\ge 0}\int_\mathbb{D}elta \hat\kappa\circ f^j \, .vw\, d\mu_\mathbb{D}elta\right) +O(|t|^\gamma\Vert vw\Vert_\mathcal B)\, .
\end{equation}
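In the computation of $V_{N'}(t)$ above (and again for $U_{2,N'}(t)$ below), the Cauchy integral formula is used in the following elementary form: since $\delta<1$, the circle $|\xi-1|=\delta$ encloses the simple pole at $\xi=1$ but not the singularity at $\xi=0$, so that, for every integer $m\ge 0$,
\[
\int_{|\xi-1|=\delta}\xi^{-m}(\xi-1)^{-1}\, d\xi=2i\pi\, .
\]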
We compute that
\begin{align}
\label{eq-s12v}
&U_{N'}(t) =\frac 1{\mu_\mathbb{D}elta(Y)}\sum_{j=0}^{N'-1} \int_{|\xi-1|=\delta}\xi^{-j-1}\int_{\mathbb{D}elta} 1_Y P^{j+1}\left( \hat \kappa(\xi I-P_0)^{-1} (vw)\right)\, d\mu_\mathbb{D}elta\, d\xi\\
\nonumber &=\frac 1{\mu_\mathbb{D}elta(Y)}\sum_{j=0}^{N'-1}\int_{|\xi-1|=\delta}\xi^{-j-1}\int_{\mathbb{D}elta} \hat \kappa(\xi I-P_0)^{-1} (vw). 1_Y\circ f^{j+1}\, d\mu_\mathbb{D}elta\, d\xi=\frac {U_{1,N'}(t)+U_{2,N'}(t)}{\mu_\mathbb{D}elta(Y)}\, ,
\end{align}
\begin{align*}
U_{1,N'}(t) &=\sum_{j=0}^{N'-1}\int_{|\xi-1|=\delta}\xi^{-j-1}\int_{\mathbb{D}elta} \hat \kappa\, (\xi I-P_0)^{-1} \left(vw-\mu_\mathbb{D}elta(vw)\right)\, 1_Y\circ f^{j+1}\, d\mu_\mathbb{D}elta\, d\xi\, ,\\
U_{2,N'}(t)&=\mu_\mathbb{D}elta( vw) \sum_{j=0}^{N'-1}\int_{|\xi-1|=\delta}\xi^{-j-1}\,(\xi-1)^{-1}\,\int_{\mathbb{D}elta} \hat \kappa\, 1_Y\circ f^{j+1}\, d\mu_\mathbb{D}elta\, d\xi\, ,
\end{align*}
since $(\xi I-P_0)^{-1}1_\mathbb{D}elta=(\xi-1)^{-1}1_\mathbb{D}elta$.
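This identity is immediate from the fact that $P_0$ is the transfer operator of the probability-preserving system $(\mathbb{D}elta,f,\mu_\mathbb{D}elta)$, so that $P_01_\mathbb{D}elta=1_\mathbb{D}elta$; indeed,
\[
(\xi I-P_0)\big((\xi-1)^{-1}1_\mathbb{D}elta\big)=(\xi-1)^{-1}\big(\xi 1_\mathbb{D}elta-P_01_\mathbb{D}elta\big)=(\xi-1)^{-1}(\xi-1)1_\mathbb{D}elta=1_\mathbb{D}elta\, .
\]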
Due to the Cauchy
integral
formula,
\begin{align}
U_{2,N'}(t):=& 2i\pi\mu_\mathbb{D}elta( vw) \sum_{j=0}^{N'-1}\int_{\mathbb{D}elta} \hat\kappa\, 1_{Y}\circ f^{j+1}\, d\mu_\mathbb{D}elta\nonumber \\
&=2i\pi \mu_\mathbb{D}elta( vw) \sum_{j\ge 0}\int_{\mathbb{D}elta} \hat\kappa\, 1_{Y}\circ f^{j+1}\, d\mu_\mathbb{D}elta +O(\theta_0^{N'}\Vert vw\Vert_{L^1(\mu_\mathbb{D}elta)})\nonumber\\
&=
2i\pi \mu_\mathbb{D}elta( vw) \sum_{j\ge 0}\int_{\mathbb{D}elta} \hat\kappa\, 1_{Y}\circ f^{j+1}\, d\mu_\mathbb{D}elta +O(|t|^\gamma\Vert vw\Vert_{\mathcal B})\, ,\label{U2N't}
\end{align}
where in the penultimate line we have used that $\int_{\mathbb{D}elta} \hat\kappa\, 1_{Y}\circ f^{j}\, d\bar\mu_\mathbb{D}elta=O(\theta_0^j)$ (see Lemma~\ref{lemma-lemexpk1y} below) and in the last line we have used~\eqref{estimN'}.
Next, we estimate $U_{1,N'}$ defined in~\eqref{eq-s12v}.
Since $v|_{(a,\ell)}=C_{(a,\ell)}$, where the $C_{(a,\ell)}$ are constants, $U_{1,N'}(t)$ can be rewritten as follows:
\begin{align}
\nonumber U_{1,N'}(t)&=\sum_{j=0}^{N'-1}\int_{|\xi-1|=\delta}\xi^{-j-1}\int_{\mathbb{D}elta}\sum_{r\ge0} \xi^{-r-1} P_0^{r}\sum_{(a,\ell)}\left(vw1_{(a,\ell)}-\int_{(a,\ell)} vw\, d\mu_\mathbb{D}elta\right)\, (\hat\kappa1_Y\circ f^j)\, d\mu_\mathbb{D}elta\, d\xi\\
\nonumber&=\sum_{j=0}^{N'-1}\int_{|\xi-1|=\delta}\xi^{-j-1}\sum_{r\ge0} \xi^{-r-1}\sum_{(a,\ell)}\int_{\mathbb{D}elta} P_0^{r}\left(vw1_{(a,\ell)}-\int_{(a,\ell)} vw\, d\mu_\mathbb{D}elta\right)\, (\hat\kappa1_Y\circ f^j)\, d\mu_\mathbb{D}elta\, d\xi\\
\nonumber&=\sum_{j=0}^{N'-1}\int_{|\xi-1|=\delta}\xi^{-j-1}\sum_{r\ge0} \xi^{-r-1}\sum_{(a,\ell)}C_{(a,\ell)}\int_{\mathbb{D}elta} \left(w1_{(a,\ell)}- \mu_\mathbb{D}elta(w1_{(a,\ell)})\right)\,(\hat\kappa\, 1_Y\circ f^j)\circ f^r\, d\mu_\mathbb{D}elta\, d\xi\\
\nonumber&= \sum_{r\ge0} \sum_{(a,\ell)}\sum_{j=0}^{N'-1}C_{(a,\ell)}\int_{\mathbb{D}elta} \left(w1_{(a,\ell)}- \mu_\mathbb{D}elta(w1_{(a,\ell)})\right)\,(\hat\kappa\, 1_Y\circ f^j)\circ f^r\, d\mu_\mathbb{D}elta\, \int_{|\xi-1|=\delta}\xi^{-j-r-2}\, d\xi
\end{align}
and thus $U_{1,N'}(t)=0$, since $\xi\mapsto\xi^{-j-r-2}$ is holomorphic inside the contour $|\xi-1|=\delta$ (recall that $\delta<1$), so that each of the contour integrals above vanishes. The interchange of sums and integrals in the above
equations is justified by Lemma~\ref{lem-justif...}. This is the only part of the proof where this assumption is crucially used.
Combining \eqref{I1Formulebis}, \eqref{eq-s12v}, \eqref{U2N't} and $U_{1,N'}(t)=0$, we obtain that
\begin{align*}
I_1(t)&=i t\cdot \left( \sum_{j\ge 0}\int_\mathbb{D}elta \hat\kappa\circ f^j \, .vw\, d\mu_\mathbb{D}elta+ \int_\mathbb{D}elta vw \, d\mu_\mathbb{D}elta\sum_{j\ge 0}\int_{\mathbb{D}elta} \hat\kappa\, \frac{1_{Y}\circ f^{j+1}}{\mu_\mathbb{D}elta(Y)}\, d\bar\mu_\mathbb{D}elta\right)\\
&+O(t^\gamma(\Vert vw\Vert_{\mathcal B}+\|w\|_{\mathcal B_0}\|v\|_{L^b(\mu_\mathbb{D}elta)}))\, .
\end{align*}
\item Let us prove that
$| I_2(t)|\ll |t|^{\gamma}\Vert vw\Vert_{\mathcal B}$.
This will give the conclusion.
First, compute that for any $N\ge 1$,
\begin{align}
\label{eq-i2}
\nonumber I_2(t)&=\int_Y \int_{|\xi-1|=\delta} (\xi I-P_0)^{-1}(P_t-P_0)(\xi I-P_0)^{-1}(P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\, d\xi\, d\mu_Y\\
\nonumber &=\sum_{j=0}^{N-1}\int_{|\xi-1|=\delta} \xi^{-j-1} \int_Y P_0^j (P_t-P_0)(\xi I-P_0)^{-1}(P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\,d\mu_Y\, d\xi\\
\nonumber &+\int_Y \int_{|\xi-1|=\delta} \xi^{-N} (\xi I-P_0)^{-1} P_0^N (P_t-P_0)(\xi I-P_0)^{-1}(P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\, d\xi\, d\mu_Y\\
&=S_N(t)+E_N(t).
\end{align}
We further decompose $S_N$ and $E_N$, choosing
$N:=\lfloor
(\gamma+\varepsilon)\frac{\log |t|}{\log(\theta_0/r)}\rfloor$ to prove the claim.
We first deal with $S_N$.
\begin{align}
\label{eq-sn}
\nonumber S_N(t)&=\sum_{j=0}^{N-1} \sum_{\ell=0}^{N-1}\int_{|\xi-1|=\delta} \xi^{-j
-1}\xi^{-\ell
-1} \int_Y P_0^j (P_t-P_0)P_0^\ell (P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\,d\mu_Y\, d\xi\\
\nonumber &+\sum_{j=0}^{N-1} \int_Y\int_{|\xi-1|=\delta} \xi^{-j
-1} P_0^j (P_t-P_0) (\xi I-P_0)^{-1}\xi^{-N} P_0^N(P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\, d\xi\, d\mu_Y\\
&=S_{N, N}(t)+E_{N, N}(t).
\end{align}
Set $\gamma_0=\frac {\gamma+\varepsilon}2\in(0,1)$.
Note that $\gamma+\varepsilon<2\left(1-\frac 1p\right)$ implies that $\frac p{p-1}\gamma_0<1$. For the inner integral appearing in $S_{N,N}(t)$, we then have
\begin{align}
\Big |\int_Y P_0^j &(P_t-P_0)P_0^\ell (P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\,d\mu_Y\Big|\nonumber\\
&=\Big |\int_Y (e^{it\cdot\hat\kappa}-1)P_0^\ell (P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\,d\mu_Y\Big|
\nonumber\\
&\ll \|(\xi I-P_t)^{-1}(vw)\|_{\mathcal{B}}\,\left\Vert ((e^{it\cdot\hat\kappa}-1)1_Y)\circ f^{\ell+1}\, (e^{it\cdot\hat\kappa}-1)\right\Vert_{L^{
\frac p{p-1}}
(\mu_\mathbb{D}elta)}\nonumber\\
&\ll
|t|^{2\gamma_0}\|\hat\kappa\|_{L^{2\gamma_0\frac p{p-1}}}^{2\gamma_0}\Vert vw\Vert_{\mathcal B}\, ,\label{AAA1}
\end{align}
where
we have used the H\"older inequality combined with
$\mathcal B\hookrightarrow L^p(\mu_
\mathbb{D}elta
)$ and finally,
in the last line, we used
the inequality $|e^{it\cdot\hat\kappa}-1|\ll |t\hat\kappa|^{\gamma_0}$ combined with
the Cauchy-Schwarz inequality. Hence,
using the fact that $|\xi|^{-1}\le (1-\delta)^{-1}\le r^{-1}$,
\begin{equation}
\label{eq-snm}
|S_{N, N}(t)|\ll r^{-2N} |t|^{2\gamma_0}=r^{-2N} |t|^{\gamma+\varepsilon}.
\end{equation}
By~\eqref{spgap-Sz}, we have $P_0^N=\Pi_0+N_0^N$ with $\|N_0^N\|_{\mathcal{B}}\ll \theta_0^N$. Hence,
\begin{align*}
E_{N, N}(t)&=\sum_{j=0}^{N-1} \int_Y \int_{|\xi-1|=\delta} \xi^{-j
-1} P_0^j (P_t-P_0)\xi^{-N}(\xi I-P_0)^{-1}\Pi_0\Big((P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\, \Big)\, d\xi\, d\mu_Y\\
&+\sum_{j=0}^{N-1}\int_Y \int_{|\xi-1|=\delta} \xi^{-j
-1
} P_0^j (P_t-P_0)\xi^{-N} (\xi I-P_0)^{-1} N_0^N\Big((P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\, \Big)\, d\xi\, d\mu_Y\\
&=E_{N, N}^1(t)+E_{N, N}^2(t).
\end{align*}
But for any $\xi$ such that $|\xi-1|=\delta$,
\begin{align*}
\Big|\Pi_0\Big((P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\, \Big)\Big|&\ll \int_\mathbb{D}elta |(P_t- P_ 0)
(\xi I-P_t)^{-1} (vw)
| \, d\mu_\mathbb{D}elta\\
&\ll \|P_t- P_ 0\|_{\mathcal{B}\to L^1}\Vert vw\Vert_{\mathcal B}\ll |t| \Vert vw\Vert_{\mathcal B}\, .
\end{align*}
Hence,
\begin{align*}
|E_{N, N}^1(t)|\ll |t|\, \Vert vw\Vert_{\mathcal B} \sum_{j=0}^{N-1} \int_\mathbb{D}elta \int_{|\xi-1|=\delta} | \xi^{-j
-1
}|\, | \xi^{-N}|\, |(P_t-P_0)(\xi I-P_0)^{-1} (1_Y)|\, d\xi\, d\mu_\mathbb{D}elta.
\end{align*}
The above together with~\eqref{eq-delta} implies that
\begin{equation}
\label{eq-enm}
|E_{N, N}^1(t)|\ll |t|\, r^{-2N}\, \|P_t- P_ 0\|_{\mathcal{B}\to L^1} \Vert vw\Vert_{\mathcal B}\ll |t|^2\, r^{-2N} \Vert vw\Vert_{\mathcal B}\, .
\end{equation}
Recall that $\|N_0^N\|_{\mathcal{B}}\ll \theta_0^N$. This together with~\eqref{eq-delta} implies that
\begin{equation}
\label{eq-enm2}
|E_{N, N}^2(t)|\ll (\theta_0/r^2)^{N}\, \|P_t- P_ 0\|_{\mathcal{B}\to L^1} \Vert vw\Vert_{\mathcal B}\ll |t|\, (\theta_0/r^2)^{N} \Vert vw\Vert_{\mathcal B}\, .
\end{equation}
This together with~\eqref{eq-sn}, \eqref{eq-snm}, \eqref{eq-enm} and~\eqref{eq-enm2}
implies that
\begin{equation}
\label{eq-snest}
|S_N(t)|\ll (r^{-2N}|t|^{\gamma+\varepsilon}
+|t|^2 r^{-2N}
+ |t|(\theta_0/r^2)^N) \Vert vw\Vert_{\mathcal B}
\, .
\end{equation}
Proceeding as above in estimating $E_{N,N}(t)$, we compute that
\begin{align*}
& E_N(t)=E_{N}^1(t)+E_{N}^2(t)\\
\nonumber&=\int_Y \int_{|\xi-1|=\delta} \xi^{-N} (\xi I-P_0)^{-1} \Pi_0\Big( (P_t-P_0)(\xi I-P_0)^{-1}(P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\Big)\, d\xi\, d\mu_Y\\
\nonumber &+\int_Y \int_{|\xi-1|=\delta} \xi^{-N} (\xi I-P_0)^{-1} N_0^N\Big( (P_t-P_0)(\xi I-P_0)^{-1}(P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\Big)\, d\xi\, d\mu_Y\, .
\end{align*}
Similarly to~\eqref{eq-enm2}, $|E_{N}^2(t)|\ll (\theta_0/r)^{N} \Vert vw\Vert_{\mathcal B}$. We need to decompose $E_N^1(t)$ further.
To start, we reduce the analysis to $\Pi_0\Big( (P_t-P_0)(\xi I-P_0)^{-1}(P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\Big)$,
for $\xi$ such that $|\xi-1|=\delta$. With this notation, we compute that for any $L\ge 1$,
\begin{align*}
\Pi_0\Big( (P_t-P_0)&(\xi I -P_0)^{-1}(P_t - P_ 0)(\xi I-P_t)^{-1} (vw)\Big)\\
=&\int_Y (P_t-P_0)(\xi I-P_0)^{-1}(P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\, d\mu\\
=&S'_L(t) +E'_L(t)=\sum_{j=0}^{L-1}\xi^{-j-1} \int_Y (P_t-P_0)P_0^j (P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\, d\mu \\
&+\int_Y (P_t-P_0)(\xi I-P_0)^{-1}\xi^{-L}P^L(P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\, d\mu\, .
\end{align*}
Due to \eqref{AAA1},
$|S'_{L}(t)|\ll
r^{-L}\, |t|^{\gamma+\varepsilon} \Vert vw\Vert_{\mathcal B}$.
Next, to estimate $E'_L(t)$, we repeat the argument used in estimating $E_{N,N}(t)$ above. Namely,
\begin{align*}
|E'_L(t)|&\ll \Big|\int_Y (P_0-P_t)(\xi I-P_0)^{-1}\xi^{-L}\Pi_0(P_t- P_ 0)(\xi I-P_t)^{-1} (vw)\, d\mu \Big|\\
&+\Big|\int_Y (P_0-P_t)(\xi I-P_0)^{-1}\xi^{-L}N_0^L(P_t- P_ 0)(\xi I-P_t)^{-1}(vw)\, d\mu\Big|\\
&\ll (r^{-L}\, |t|^2+ |t|\, (\theta_0/r)^L) \Vert vw\Vert_{\mathcal B}\, .
\end{align*}
Therefore
$\left|E_N(t)\right|\ll\left((\theta_0/r)^N+ r^{-N}\left(r^{-L}\, |t|^{\gamma+\varepsilon}+ |t|\, (\theta_0/r)^L\right)\right) \Vert vw\Vert_{\mathcal B}$.
Gathering this and \eqref{eq-snest}, we obtain
\begin{equation}
\label{eq-i2tuselat}
|I_2(t)|\ll \left(r^{-2N}|t|^{\gamma+\varepsilon}
+ |t|(\theta_0/r^2)^N+(\theta_0/r)^N+ r^{-N}\left(r^{-L}\, |t|^{\gamma+\varepsilon}+ |t|\, (\theta_0/r)^L\right)\right) \Vert vw\Vert_{\mathcal B}\, .
\end{equation}
Recall that
$N:=\lfloor
(\gamma+\varepsilon)\frac{\log |t|}{\log(\theta_0/r)}\rfloor$, so that
$(\theta_0/r)^N\sim |t|^{\gamma+\varepsilon}$.
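For completeness, let us make this comparison explicit. For $0<|t|<1$, both $\log|t|$ and $\log(\theta_0/r)$ are negative, so the definition of $N$ as an integer part yields
\[
|t|^{\gamma+\varepsilon}\le(\theta_0/r)^{N}\le \frac{r}{\theta_0}\,|t|^{\gamma+\varepsilon}\, .
\]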
Recall also that $\theta_0= r^\vartheta$ with $\vartheta>
2$ so that
$
\frac{2(\gamma+\varepsilon)}{\vartheta-1}<\varepsilon$
and
$\frac{\left(\gamma+\varepsilon\right)(\vartheta-2)}{\vartheta-1}>\gamma-1$.
Hence
$\theta_0/r=r^{\vartheta-1}$ and $r^{-N}\sim |t|^{-\frac{\gamma+\varepsilon}{\vartheta-1}}$
and $(\theta_0/r^2)^N=r^{N(\vartheta-2)}\approx |t|^{\frac{(\gamma+\varepsilon)(\vartheta-2)}{\vartheta-1}}$ thus
\begin{align*}
|I_2(t)|&\ll \left( |t|^{\gamma+\varepsilon-2\frac{\gamma+\varepsilon}{\vartheta-1}}
+| t|^{1+\frac{(\gamma+\varepsilon)(\vartheta-2)}{\vartheta-1}}+|t|^{\gamma+\varepsilon}+ |t|^{-\frac{\gamma+\varepsilon}{\vartheta-1}}
\left(r^{-L}\, |t|^{\gamma+\varepsilon}+ |t|\, (\theta_0/r)^L\right)\right) \Vert vw\Vert_{\mathcal B}\\
&\ll \left(|t|^{\gamma} + |t|^{-\frac{\gamma+\varepsilon}{\vartheta-1}}
\left((\theta_0/r)^{-\frac{L}{\vartheta-1}}\, |t|^{\gamma+\varepsilon}+ |t|\, (\theta_0/r)^L\right)\right) \Vert vw\Vert_{\mathcal B}\, ,
\end{align*}
due to our choice of $\vartheta$.
Now, choose $L:=\left\lfloor\frac{(\gamma+\varepsilon-1)\log |t|}{\log(\theta_0/r)}\right\rfloor$, so, due to our choice of $\vartheta$,
$$
|I_2(t)|
\ll \left(|t|^{\gamma} + |t|^{-\frac{\gamma+\varepsilon}{\vartheta-1}}
\left( |t|^{\gamma+\varepsilon-\frac{\gamma+\varepsilon-1}{\vartheta-1}}+ |t|^{\gamma+\varepsilon}\right)\right) \Vert vw\Vert_{\mathcal B}\ll |t|^{\gamma} \Vert vw\Vert_{\mathcal B} \, .$$
\end{itemize}
\end{proof}
\subsection{Proof of the expansion of $\Pi_t$: proofs of Propositions~\ref{prop-expproj-base} and~\ref{prop-expproj}}
We first provide the proof of Proposition~\ref{prop-expproj-base}, relying on our technical Lemmas~\ref{lem-eigf} and~\ref{lem-eigf2}, and then we prove Proposition~\ref{prop-expproj}.\\
\begin{pfof}{Proposition~\ref{prop-expproj-base}}
We proceed as in~\cite[Proof of Lemma 3.14]{BalintGouezel06} using our estimates.
Since 1 is a simple eigenvalue of $\tilde R_t$, and since $Q_t(1_Y)$ and $1_Y\Pi_t(vw)$ are both eigenfunctions of $\tilde R_t$ in $\mathcal B_1$ associated with the eigenvalue 1, these two vectors are proportional and so,
\begin{equation}\label{1YPit}
1_Y\Pi_t(vw)=\mu_Y(1_Y\Pi_t(vw))\frac{Q_t(1_Y)}{\mu_Y(Q_t(1_Y))}=\left(\mu_\mathbb{D}elta(vw)+t\cdot c_0(vw)+\boldsymbol{\varepsilon}^{(1)}_t
\right)\frac{Q_t(1_Y)}{\mu_Y(Q_t(1_Y))}\, ,
\end{equation}
with
$
\boldsymbol{\varepsilon}^{(1)}_t:=\mu_Y(1_Y(\Pi_t-\Pi_0-t\cdot c_0)(vw))=O(|t|^\gamma(\|vw\|_{\mathcal B}+\|w\|_{\mathcal B_0}\|v\|_{L^b(\mu_\mathbb{D}elta)}))$
in $\mathbb C$,
by Lemma~\ref{cor-betcont}.
Moreover, due to Lemma~\ref{lem-eigf},
$
\boldsymbol{\varepsilon}^{(2)}_t:=(Q_t-Q_0-t\cdot Q'_0)(1_Y)=O(|t|^\gamma)$
in $\mathcal B_1$.
Hence, still in $\mathcal B_1$,
\begin{align*}
\frac{Q_t(1_Y)}{\mu_Y(Q_t(1_Y))}&=
\frac{1_Y+t\cdot Q'_0(1_Y)+\boldsymbol{\varepsilon}^{(2)}_t}
{\mu_Y(1_Y+t\cdot Q'_0(1_Y)+\boldsymbol{\varepsilon}^{(2)}_t)}
=1_Y+t\cdot(Q'_0(1_Y)-\mu_Y(Q'_0(1_Y))1_Y)+O(|t|^\gamma)\\
&=1_Y+t\cdot Q'_0(1_Y)+O(|t|^\gamma)\, ,
\end{align*}
since differentiating $Q_t^2(1_Y)=Q_t(1_Y)$ at $t=0$, combined with $Q_01_Y=1_Y$, leads to $\mu_Y(Q'_01_Y)=Q_0Q'_01_Y=0$.
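In detail (writing, as in this identity, $Q_0h=\mu_Y(h)1_Y$ for the rank-one projection), the differentiation gives
\[
Q'_0Q_0(1_Y)+Q_0Q'_0(1_Y)=Q'_0(1_Y)\, ,\quad\mbox{hence}\quad
\mu_Y(Q'_01_Y)\,1_Y=Q_0Q'_0(1_Y)=Q'_0(1_Y)-Q'_0Q_0(1_Y)=0\, .
\]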
Combining the above estimate with \eqref{1YPit}, we conclude that
\[
1_Y\Pi_t(vw)=\mu_\mathbb{D}elta(vw)+t\cdot \left(c_0(vw)+\mu_\mathbb{D}elta(vw)Q'_0(1_Y)\right)+O\left(|t|^\gamma(\|vw\|_{\mathcal B}+\|w\|_{\mathcal B_0}\|v\|_{L^b(\mu_\mathbb{D}elta)})\right)\, ,
\]
which leads to \eqref{eq1yderivpi}.
~\end{pfof}
\begin{rem}
\label{rem:derivopi}
By the same reasoning used in obtaining $\mu_Y(Q'_01_Y)=Q_0Q'_01_Y=0$, with $\Pi_0'$ instead of $Q_0'$ and $1$ instead of $1_Y$, we have $\mu_\mathbb{D}elta(\Pi'_01)=0$.
\end{rem}
Using Proposition~\ref{prop-expproj-base} and equation~\eqref{eq-Pit}, we can now complete the proof of Proposition~\ref{prop-expproj}.
\begin{pfof}{Proposition~\ref{prop-expproj}}
To simplify notation, we write $\|(v,w)\|:=\|vw\|_{\mathcal B}+\|w\|_{\mathcal B_0}\|v\|_{L^b(\mu_\mathbb{D}elta)}$.
Starting from~\eqref{eq-Pit},
$\Pi_t(vw)(x)=\lambda_t^{-\omega(x)} e^{it\cdot\hat\kappa_{\omega(x)}\circ \pi_0(x)}\Pi_t (vw)\circ\pi_0(x)$.
Using the fact that $\lambda_0=1$ and $\lambda'_0=0$, we conclude that
$t\mapsto \Pi_t(vw)(x)$ is differentiable at $0$, with derivative
\begin{align}
\label{eq-derivPi0ult}
\Pi_0' (vw)(x)&=
i[\hat\kappa_{\omega(x)}\Pi_0 (vw)]\circ\pi_0(x)
+\Pi_0' (vw)\circ\pi_0(x),
\end{align}
where we use the definition of $1_Y\Pi'_0$ in~\eqref{eq1yderivpi}.
This provides the last formula of Proposition~\ref{prop-expproj}.
With this definition, since $\Pi'_0$ is uniformly bounded on $Y$, for every $\eta\in(1,2)$
\begin{align*}
\Vert \Pi'_0 (vw)\Vert^\eta_{L^\eta(\mu_\mathbb{D}elta)}
&\ll \Vert 1_Y\Pi'_0 (vw)\Vert_\infty^\eta+
\sum_{n\ge 1}\int_{\{\sigma>n\}}|\hat\kappa_n|^\eta\, d\mu_Y\|vw\|_{L^1(\mu_Y)}\\
&\ll \left(1+ \sum_{n\ge 1}(\mu_Y(\sigma>n))^{\frac{2-\eta}{2+\eta}}\Vert|\hat\kappa_n|^\eta\Vert_{L^\frac{\eta+2}{2\eta}(\mu_Y)}\right)\|(v,w)\|\\
&\ll\left( 1+\sum_{n\ge 1}\theta_1^{\frac{2-\eta}{2+\eta}n} n^\eta \Vert\hat\kappa\Vert_{L^\frac{\eta+2}{2}(\mu_\mathbb{D}elta)}^\eta\right)\|(v,w)\|=O(\|(v,w)\|)\, .
\end{align*}
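The second inequality above uses the H\"older inequality on $\{\sigma>n\}$ with exponents $\frac{2+\eta}{2-\eta}$ and $\frac{\eta+2}{2\eta}$ (both larger than $1$ since $\eta\in(1,2)$), which are indeed conjugate:
\[
\frac{2-\eta}{2+\eta}+\frac{2\eta}{2+\eta}=1\, .
\]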
Formula \eqref{eq-derivPi0ult} together with~\eqref{eq-Pit} implies that
\begin{align}
\label{eq-expbigP}
(\Pi_t-&\Pi_0-t\cdot \Pi_0')(vw)(x)=
\left[(e^{it\cdot\hat\kappa_{\omega(x)}}-1-it\cdot \hat\kappa_{\omega(x)})\Pi_0 (vw)\right]
\circ\pi_0(x)\\
&\nonumber
+\left[(e^{it\cdot\hat\kappa_{\omega(x)}}-1)t\cdot\Pi'_0(vw)\right]\circ\pi_0(x)
+\left[e^{it\cdot\hat\kappa_{\omega(x)}}
(\Pi_t-\Pi_0-t\cdot\Pi_0') (vw)\right]\circ\pi_0(x)
\\
&
+(\lambda_t^{-\omega(x)}-1)
[e^{it\hat\kappa_{\omega(x)}}\Pi_t (vw)]\circ\pi_0(x)\, .\nonumber
\end{align}
This
leads to
$\left\Vert(\Pi_t-\Pi_0-t\cdot \Pi_0')(vw)\right\Vert_{L^{p'}(\mu_{\mathbb{D}elta})}\le I_1(t)+I_2(t)+I_3(t)+I_4(t)$.
First
\begin{align*}
(I_1(t))^{p'}&:=\int_{\mathbb{D}elta}\left| e^{it\hat\kappa_{\omega(x)}}-1-it\cdot \hat\kappa_{\omega(x)}\right|^{p'}\circ\pi_0(x)d\mu_{\mathbb{D}elta}(x)\left(\int_{Y}|vw|\, d\mu_Y\right)^{p'}\\
&\ll\int_{\mathbb{D}elta}\left|t\cdot \hat\kappa_{\omega(x)}\circ\pi_0(x)\right|^{p'\gamma} d\mu_{\mathbb{D}elta}(x)\Vert vw\Vert_{L^1(\mu_\mathbb{D}elta)}^{p'}\\
&\ll \sum_{n\ge 1}\int_{\sigma> n} \left|t \cdot \hat\kappa_{n}\right|^{p'\gamma} d\mu_{Y}\Vert vw\Vert_{L^1(\mu_\mathbb{D}elta)}^{p'}
\end{align*}
Thus $(I_1(t))^{p'}\ll\sum_{n\ge 1}|t|^{p'\gamma}\mu_Y(\sigma\ge n)^{\frac {2-\gamma}{2+\gamma}} \Vert \hat\kappa_n^{p'\gamma}\Vert_{L^{\frac{\gamma+2}{2\gamma}}(\mu_Y)}\Vert vw\Vert_{L^1(\mu_\mathbb{D}elta)} ^{p'}$, and it follows that
\[
(I_1(t))^{p'}\ll\sum_{n\ge 1}|t|^{p'\gamma}\mu_Y(\sigma> n)^{\frac {2-\gamma}{2+\gamma}} \Vert \hat\kappa_n\Vert^{p'\gamma}_{L^{p'\frac{\gamma+2}{2}}(\mu_Y)}\Vert vw\Vert_{L^1(\mu_\mathbb{D}elta)}^{p'}\ll |t|^{p'\gamma}\|vw\|^{p'}_{\mathcal B}\, ,
\]
for any $\gamma\in(1,\frac 4{p'}-2)$, since $p'\frac{\gamma+2}2\in(1,2)$ and $\Vert\hat\kappa_n\Vert_{L^{p'\frac{\gamma+2}{2}}(\mu_Y)}\le n\Vert \hat\kappa\Vert_{L^{p'\frac{\gamma+2}{2}}(\mu_\mathbb{D}elta)}$ and $\mu_Y(\sigma>n)\ll \theta_1^n$.
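Let us also record why $p'\frac{\gamma+2}2\in(1,2)$ for such $\gamma$ (here we only use $p'\ge 1$): the upper bound comes from $\gamma<\frac 4{p'}-2$, since
\[
p'\,\frac{\gamma+2}{2}<p'\,\frac{\frac 4{p'}}{2}=2\, ,
\]
while the lower bound comes from $\gamma>1$, which gives $p'\frac{\gamma+2}2>\frac{3p'}2>1$.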
Second, using $|e^{iy}-1|\le|y|$, we obtain
\begin{align*}
(I_2(t))^{p'}&:=|t|^{p'}\,\int_{\mathbb{D}elta}\left|(e^{it\cdot \hat\kappa_{\omega(x)}}-1)\Pi'_0(vw)\right|^{p'}\circ\pi_0\, d\mu_{\mathbb{D}elta}(x)
\ll |t|^{2p'}\sum_{n\ge 1}\int_{\sigma> n} |\hat\kappa_n\Pi'_0(vw)|^{p'}\, d\mu_Y \\
&\ll |t|^{2p'} \Vert 1_Y\Pi'_0(vw)\Vert_\infty \sum_{n\ge 1}\mu(\sigma> n)^{\frac {2-p'}{p'+2}}\Vert\hat\kappa_n^{p'}\Vert_{L^{\frac{p'+2}{2p'}}(\mu_\mathbb{D}elta)}\\
&\ll |t|^{2p'} \Vert 1_Y\Pi'_0(vw)\Vert_\infty \ll |t|^{2p'}\|(v,w)\|,
\end{align*}
proceeding as for $I_1$ and using $\Vert 1_Y\Pi'_0(vw)\Vert_\infty\ll\|(v,w)\|$.
Third
\begin{align}
\label{I3}
(I_3(t))^{p'}&:=
\int_{\mathbb{D}elta}\left|
e^{it\cdot \hat\kappa_{\omega(x)}}
(\Pi_t-\Pi_0-t\Pi_0') (vw)\right|^{p'}\circ\pi_0(x)\,d\mu_{\mathbb{D}elta}(x)\\
&=O\left(|t|^{\gamma p'}(\Vert vw\Vert_{
\mathcal B}+\Vert w\Vert_{\mathcal B_0}\|v\|_{L^b(\mu_\mathbb{D}elta)})^{p'}\right)\, ,
\end{align}
with $\gamma$ as in Proposition~\ref{prop-expproj-base}.
For the last term, we observe that the expansion of $\lambda$ at $0$ implies in particular that $1>|\lambda_t|>e^{-a_0\frac {|t|^2\log(1/|t|)} 2}$ if $t$ is small enough and so
\begin{align*}
|I_4(t)|^{p'} &=\int_{\mathbb{D}elta} \left|(\lambda_t^{-\omega(x)}-1)
e^{it\cdot \hat\kappa_{\omega(x)}}\Pi_t
(vw)
\right|^{p'}\circ\pi_0(x)\, d\mu_\mathbb{D}elta(x)\\
&\le \int_{\mathbb{D}elta}\left (\omega(x)\, |\lambda_t-1|\, e^{a_0\frac {|t|^2\log(1/|t|)} 2(\omega(x)-1)}\left\Vert 1_Y\Pi_t (vw)\right\Vert_\infty\right)^{p'}\, d\mu_\mathbb{D}elta(x)\\
&\ll (|t|^2\log(1/|t|)\left\Vert 1_Y\Pi_t (vw)\right\Vert_\infty)^{p'} \sum_{n\ge 1}\int_{\sigma> n}n^{p'} e^{a_0p'\frac {n|t|^2\log(1/|t|)} 2}\, d\mu_Y
\end{align*}
and so $|I_4(t)|^{p'}\ll (|t|^2\log(1/|t|)\left\Vert 1_Y\Pi_t (vw)\right\Vert_\infty)^{p'} \sum_{n\ge 1}n^{p'} e^{a_0p'\frac {n|t|^2\log(1/|t|)} 2}\mu_Y(\sigma> n)$. Thus
\[
|I_4(t)|^{p'}\ll (|t|^2\log(1/|t|) \|(v,w)\|)^{p'}\, ,
\]
for small $|t|$
since $\mu_Y(\sigma>n)\ll \theta_1^n$ and
$\left\Vert 1_Y\Pi_t (vw)\right\Vert_\infty\ll\|(v,w)\|$ (see \eqref{1YPit}).
So, taking $\gamma$ as in Proposition~\ref{prop-expproj},
$\left\Vert(\Pi_t-\Pi_0-t\cdot \Pi_0')(vw)\right \|_{L^{p'}(\mu_{\mathbb{D}elta})}
\ll |t|^\gamma\, \left(\|vw\|_{\mathcal B}+\Vert w\Vert_{\mathcal B_0}\|v\|_{L^b(\mu_\mathbb{D}elta)}\right)$.
\end{pfof}
\section{Expansion of $\lambda_t$: Proof of Proposition~\ref{prop-lambda} }
\label{sec:lambda}
Let $v_t$ be the eigenfunction of $P_t$ associated with $\lambda_t$ so that $\mu_\mathbb{D}elta(v_t)=1$, that is $v_t=\frac{\Pi_t 1_\mathbb{D}elta}{\mu_\mathbb{D}elta(\Pi_t 1_\mathbb{D}elta)}$.
Using a classical argument (see, for instance,~\cite{BalintGouezel06}), we write
\begin{align}
1-\lambda_t&=\int_{\mathbb{D}elta} (P_0-P_t)v_t\, d\mu_\mathbb{D}elta=\int_{\mathbb{D}elta}(1-e^{it\cdot\hat\kappa})v_t\, d\mu_\mathbb{D}elta\nonumber\\
&=\int_{\mathbb{D}elta} (1-e^{it\cdot\hat\kappa})\, d\mu_\mathbb{D}elta+\int_{\mathbb{D}elta}(1-e^{it\cdot\hat\kappa})(v_t-v_0)\, d\mu_\mathbb{D}elta:=\Psi(t) +V(t)\, .\label{eq-lambdato}
\end{align}
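For completeness, the two equalities in the first line of \eqref{eq-lambdato} are elementary: recall that $\mu_\mathbb{D}elta(v_t)=1$, $P_tv_t=\lambda_tv_t$, $P_th=P_0(e^{it\cdot\hat\kappa}h)$ and $\int_\mathbb{D}elta P_0h\, d\mu_\mathbb{D}elta=\int_\mathbb{D}elta h\, d\mu_\mathbb{D}elta$, so that
\[
\int_{\mathbb{D}elta} (P_0-P_t)v_t\, d\mu_\mathbb{D}elta=1-\lambda_t\int_{\mathbb{D}elta} v_t\, d\mu_\mathbb{D}elta=1-\lambda_t
\quad\mbox{and}\quad
\int_{\mathbb{D}elta} (P_0-P_t)v_t\, d\mu_\mathbb{D}elta=\int_{\mathbb{D}elta}(1-e^{it\cdot\hat\kappa})v_t\, d\mu_\mathbb{D}elta\, .
\]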
Estimates of $V(t)$ and $\Psi(t)$ are given respectively in Lemmas~\ref{lem:V} and \ref{lem:Psi}.
The term $\Psi(t)$ is the purely scalar part; it will be estimated as if we were dealing with the characteristic function of an i.i.d.\ process,
and for this we only need to exploit Lemma~\ref{lemm-tail}.
The function $V(t)$ will be estimated via the estimates used in the proof of Proposition~\ref{prop-expproj}.
We start with the latter.
\begin{lem}\label{lem:V} There exists $C>0$ such that for all $t\in B_\delta(0)$, $|V(t)|\le Ct^2$.
\end{lem}
\begin{proof}
Note that
$v_t-v_0=\frac{\Pi_t(1_\mathbb{D}elta)-\mu_\mathbb{D}elta(\Pi_t(1_\mathbb{D}elta))}{\mu_\mathbb{D}elta(\Pi_t(1_\mathbb{D}elta))}$.
Therefore it is enough to prove that
\begin{equation}\label{Goal}
\left|
\int_{\mathbb{D}elta}(1-e^{it\cdot\hat\kappa})(\Pi_t(1_\mathbb{D}elta)-\mu_\mathbb{D}elta(\Pi_t(1_\mathbb{D}elta)))\, d\mu_\mathbb{D}elta\right|\ll t^2\, .
\end{equation}
We know from Proposition~\ref{prop-expproj} that
$\mu_\mathbb{D}elta((\Pi_t-\Pi_0)(1_\mathbb{D}elta))=O(t)$, which implies that
\begin{equation}\label{Esti1}
\left|\int_{\mathbb{D}elta}(1-e^{it\cdot\hat\kappa})\mu_\mathbb{D}elta((\Pi_t-\Pi_0)(1_\mathbb{D}elta))\, d\mu_\mathbb{D}elta\right| \ll t^2\Vert \hat\kappa\Vert_{L^1(\mu_\mathbb{D}elta)}\ll t^2\, .
\end{equation}
It remains to estimate
$V_0(t):=\int_{\mathbb{D}elta}(1-e^{it\cdot\hat\kappa})(\Pi_t-\Pi_0)(1_\mathbb{D}elta)\, d\mu_\mathbb{D}elta$.
To this end, thanks to \eqref{eq-Pit}, we use the following decomposition similar to \eqref{eq-expbigP}
\begin{align}
\label{eq-expbigPbis}
(\Pi_t-\Pi_0)1_\mathbb{D}elta(x)=&\lambda_t^{-\omega(x)} e^{it\hat\kappa_{{\omega(x)}}
(\pi_0(x))}\Pi_t (1_\mathbb{D}elta)\circ\pi_0(x)-1\nonumber\\
=&
(e^{it\cdot\hat\kappa_{\omega(x)}(\pi_0(x))}-1)
+
\left[e^{it\cdot\hat\kappa_{\omega(x)}}
(\Pi_t-\Pi_0) (1_\mathbb{D}elta)\right]\circ\pi_0(x)\\
&+(\lambda_t^{-\omega(x)}-1)
[e^{it\cdot\hat\kappa_{\omega(x)}}\Pi_t (1_\mathbb{D}elta)]\circ\pi_0(x)\, ,
\nonumber
\end{align}
which leads to
\begin{equation}\label{DecompJ}
\left|\int_{\mathbb{D}elta}(1-e^{it\cdot\hat\kappa})(\Pi_t-\Pi_0)(1_\mathbb{D}elta)\, d\mu_\mathbb{D}elta\right|\le J_1(t)+J_2(t)+J_3(t)\, ,
\end{equation}
\begin{align*}
J_1(t)&:=\left|\int_{\mathbb{D}elta}(1-e^{it\cdot\hat\kappa(x)}) (e^{it\cdot \hat\kappa_{\omega(x)}(\pi_0(x))}-1)\, d\mu_{\mathbb{D}elta}(x)\right|\, ,\nonumber\\
J_2(t)&:=
\int_{\mathbb{D}elta}\left|
(1-e^{it\cdot\hat\kappa(x)})
e^{it\hat\kappa_{\omega(x)}}
(\Pi_t-\Pi_0) (1_\mathbb{D}elta)\right|\circ\pi_0(x)\,d\mu_{\mathbb{D}elta}(x)\, ,\\
J_3(t)&:=
\int_{\mathbb{D}elta} \left|(1-e^{it\cdot\hat\kappa(x)})(\lambda_t^{-\omega(x)}-1)
e^{it\kappa_{\omega(x)}}\Pi_t
(1_\mathbb{D}elta)
\right|\circ\pi_0(x)\, d\mu_\mathbb{D}elta(x)\, .
\end{align*}
First let us prove that
\begin{align}
J_1(t)=O(t^2)\quad \mbox{as }t\rightarrow 0\, .
\label{MajoJ1}
\end{align}
Indeed,
\begin{align}
J_1(t)&=\left|\int_Y \sum_{k=0}^{\sigma-1}(1-e^{it\cdot \hat\kappa\circ f^k})
(1-e^{it\cdot \hat\kappa_k})\, d\mu_\mathbb{D}elta\right|
\le t^2\int_{Y
} \sum_{k=1}^{\sigma-1}|\hat\kappa\circ f^k|\, |\hat\kappa_{k}|\, d\mu_\mathbb{D}elta
\label{TOTO}\, .
\end{align}
To prove that $J_1(t)=O(t^2)$, we will use the fact that, due to~\cite[Propositions 11--12]{SV07} and~\cite[Lemma 16]{SV07}, there exists $\varepsilon>0$ such that, for every $V$,
\[
\mu_\mathbb{D}elta\left(A_{n,V}\right)=O(n^{-3-\varepsilon})\quad \mbox{with }A_{n,V}:=\left\{\hat\kappa=n,\ \exists |j|\le V\log (n+2),\ |\hat\kappa\circ f^j|>n^{4/5}\right\}.
\]
For any $y\in Y$, we write $\ell(y)$ for the largest integer in $\{1,...,\sigma(y)-1\}$ such that
\[
N(y):=\sup_{k=1,...,\sigma(y)-1}|\hat\kappa\circ f^k(y)|=|\hat\kappa\circ f^{\ell(y)}(y)|\, .
\]
Set $Y_n:=Y\cap\{\hat \kappa\circ f^{\ell(y)}=n\}$,
$Y'_n:=\left\{y\in Y_n\, :\ \sigma(y)< b\log (n+2)\right\}$
and also
$
Y^{(0)}_n:=\left\{y\in Y'_n\, :\ \forall j<\sigma(y),\ |\hat\kappa\circ f^j|\le n^{4/5}\right\}\, .$
Notice that
\begin{align*}
&\int_{Y} \sum_{k=1}^{\sigma-1}|\hat\kappa\circ f^k|\, |\hat\kappa_{k}|\, d\mu_\mathbb{D}elta\le \sum_{n\ge 0}\int_{Y'_n} \sum_{k=1}^{b\log(n+2)-1}|\hat\kappa\circ f^k|\, |\hat\kappa_{k}|\, d\mu_\mathbb{D}elta+\sum_{n\ge 0} \int_{Y_n\setminus Y'_n} n^2\sigma\, d\mu_\mathbb{D}elta\\
&\le \sum_{n\ge 0}\int_{Y^{(0)}_n} b\, n^{9/5}\log (n+2)\, d\mu_\mathbb{D}elta+\sum_{n\ge 0}\int_{Y'_n\setminus Y_n^{(0)}}n^2b\log(n+2)\, d\mu_\mathbb{D}elta
+\sum_{n\ge 0} n^2\mathbb E_{\mu_\mathbb{D}elta}[\sigma 1_{Y\setminus Y'_n}]\\
&\le \sum_{n\ge 0}b\, n^{9/5}\log (n+2)
\mu_\mathbb{D}elta(Y'_n)
+\sum_{n\ge 0}n^2b\log(n+2)\mu_\mathbb{D}elta(Y'_n\setminus Y_n^{(0)})\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \
+\sum_{n\ge 0} n^2\!\!\!\sum_{m\ge b\log(n+2)}\mu_\mathbb{D}elta(\sigma>m)\, .
\end{align*}
The last term of the above right hand side is less than
$\sum_{n\ge 0} n^2\sum_{m\ge b\log(n+2)}C_1\theta_1^m\le \sum_{n\ge 0} O( n^2 \theta_1^{ b\log(n+2)})<\infty$ by taking $b$ large enough. The other terms are dominated by
\begin{align*}
&\ \sum_{n\ge 0}O\left(n^{9/5}\log (n+2)
\sum_{m=0}^{b\log(n+2)}\mu_\mathbb{D}elta(\hat\kappa=n)
\right)
+\sum_{n\ge 0}n^2\log(n+2)\sum_{m=0}^{b\log(n+2)}\mu_\mathbb{D}elta(A_{n,b})
\\
&\le \sum_{n\ge 0}O\left(n^{9/5-3}(\log (n+2))^2 \right)+\sum_{n\ge 0}n^{2-3-\varepsilon}(\log(n+2))^2<\infty\, .
\end{align*}
Combined with~\eqref{TOTO}, this ends the proof of \eqref{MajoJ1}.
Second, due to Proposition~\ref{prop-expproj-base},
\begin{align}
J_2(t)&=
\int_{\mathbb{D}elta}\left|
(1-e^{it\cdot\hat\kappa(x)})
e^{it\hat\kappa_{\omega(x)}}
(\Pi_t-\Pi_0) (1_\mathbb{D}elta)\right|\circ\pi_0(x)\,d\mu_{\mathbb{D}elta}(x)\nonumber\\
&\ll \int_{\mathbb{D}elta}\left|t\cdot
\kappa(x)
(\Pi_t-\Pi_0) (1_\mathbb{D}elta)\right|\circ\pi_0(x)\,d\mu_{\mathbb{D}elta}(x)\ll t^2 \Vert\hat\kappa\Vert_{L^1(\mu_\mathbb{D}elta)}\, ,\label{MajoJ2}
\end{align}
and finally, using the fact that $1>|\lambda_t|>e^{-\frac {t^2\log(1/|t|)} 2}$ if $t$ is small enough,
\begin{align}
|J_3(t)| &=\int_{\mathbb{D}elta} \left|(1-e^{it\cdot\hat\kappa(x)})(\lambda_t^{-\omega(x)}-1)
e^{it\kappa_{\omega(x)}}\Pi_t
(1_\mathbb{D}elta)
\right|\circ\pi_0(x)\, d\mu_\mathbb{D}elta(x)\nonumber\\
&\le \int_{\mathbb{D}elta} |t\cdot\hat\kappa(x)|\omega(x)\, |\lambda_t-1|\, e^{\frac {t^2\log(1/|t|)} 2(\omega(x)-1)}\left\Vert 1_Y\Pi_t (1_\mathbb{D}elta)\right\Vert_\infty\, d\mu_\mathbb{D}elta(x)\nonumber\\
&\ll t^3\log(1/|t|) \sum_{n\ge 1}\int_{Y\cap\{\sigma> n\}}|\hat\kappa\circ f^n|n e^{\frac {nt^2\log(1/|t|)} 2}\, d\mu_Y\nonumber
\end{align}
Thus $|J_3(t)|\ll t^3\log(1/|t|) \sum_{n\ge 1}n e^{\frac {nt^2\log(1/|t|)} 2}\Vert\hat\kappa\Vert_{L^p(\mu_\mathbb{D}elta)}(\mu_Y(\sigma> n))^{1/q}$ and it follows that
\begin{align}
|J_3(t)|&\ll t^3\log(1/|t|) \sum_{n\ge 1}n e^{\frac {nt^2\log(1/|t|)} 2}\Vert\hat\kappa\Vert_{L^p(\mu_\mathbb{D}elta)}(C_1\theta_1^n)^{1/q}=O(t^2)\, ,
\label{MajoJ3}
\end{align}
by taking $p<2$ close to 2 and using the fact that for $|t|$ small enough
$e^{\frac {t^2\log(1/|t|)} 2}\theta_1<1$.
Combining \eqref{Esti1}, \eqref{DecompJ}, \eqref{MajoJ1}, \eqref{MajoJ2} and \eqref{MajoJ3},
we obtain \eqref{Goal} and conclude.~\end{proof}
To complete the proof of Proposition~\ref{prop-lambda}, we still need to estimate $\Psi(t)$ in~\eqref{eq-lambdato}.
The next lemma can be viewed as an extension of the asymptotics of the purely scalar quantity in~\cite[Proof of Theorem 3.1]{ADb},
under the conclusion of Lemma~\ref{lemm-tail}, via the arguments used in~\cite{MT12, Terhesiu16}.
\begin{lem}\label{lem:Psi}
Let $\hat\kappa_0:\mathbb{D}elta\rightarrow \mathbb Z^d$ be a $\mu_\mathbb{D}elta$-centered function. Set $\Psi(t):=\int_{\mathbb{D}elta} (1-e^{it\cdot\hat\kappa_0})\, d\mu_\mathbb{D}elta$.
Assume that there exists $M_0>0$ such that $\{|\hat\kappa_0|\ge M_0\}=\bigcup_{(L,w)\in
\mathcal S}\{\hat\kappa_0\in L+\mathbb N w\}$
with $\mathcal S$ a finite subset of $(\mathbb Z^2)^2$ such that the lattices $L+\mathbb N w\subset\mathbb Z^d$ are disjoint for distinct $(L,w)\in \mathcal S$ outside the open ball $B(0,M_0)$.
Assume moreover that
\[
\forall (L,w)\in \mathcal S,\quad\mu_\mathbb{D}elta\left(\hat\kappa_0= L+ n w\right)=\frac{c_{L,w}}{n^3}+O(n^{-4}),\quad\mbox{as}\quad n\rightarrow+\infty\, .
\]
Then, as $t\to 0$, $\Psi(t)=\sum_{(L,w)\in\mathcal S}\frac{c_{L,w}}2(t\cdot w)^2\log||t|^{-1}|+O(t^2)$.
\end{lem}
\begin{proof}
We start by writing $\Psi(t)=\int_\mathbb{D}elta (1-e^{it\cdot\hat\kappa_0})\, d\mu_\mathbb{D}elta
=\int_\mathbb{D}elta (1+it\cdot\hat\kappa_0-e^{it\cdot\hat\kappa_0})\, d\mu_\mathbb{D}elta
$. So
\begin{align*}
\Psi(t)
&=\sum_{(L,w)\in\mathcal S}\sum_{n= 1}^{\lfloor |t|^{-1}\rfloor}(1+it\cdot (L+nw)-e^{it\cdot (L+nw)})\mu_{\mathbb{D}elta}(\hat\kappa_0=L+nw)+O(t^2)\\
&=\sum_{(L,w)\in\mathcal S}\sum_{n= 1}^{\lfloor |t|^{-1}\rfloor}(1+it\cdot (L+nw)-e^{it\cdot (L+nw)})\left(\frac{c_{L,w}}{n^3}+O(n^{-4})\right)+O(t^2)\\
&=\sum_{(L,w)\in\mathcal S}\sum_{n= 1}^{\lfloor |t|^{-1}\rfloor}(1+it\cdot (L+nw)-e^{it\cdot (L+nw)})\frac{c_{L,w}}{n^3}+O(t^2)\, ,
\end{align*}
using the properties of $M_0,\mathcal S$.
Setting $a_{L,w,n}:=\frac{c_{L,w}}{n^3}$, $A_{L,w,n}:=\sum_{k\ge n}a_{L,w,k}$ and $b_{L,w,n}:=1+it\cdot (L+nw)-e^{it\cdot (L+nw)}$,
using the Abel transform, we obtain
\begin{align*}
&\Psi(t)=O(t^2)+\sum_{(L,w)\in\mathcal S}\sum_{n=1}^{\lfloor |t|^{-1}\rfloor}a_{L,w,n}b_{L,w,n}\\
&=O(t^2)+\sum_{(L,w)\in\mathcal S}\left\{A_{L,w,1}b_{L,w,1}-A_{L,w,\lfloor|t|^{-1}\rfloor+1}b_{L,w,\lfloor|t|^{-1}\rfloor}+\sum_{n=2}^{\lfloor |t|^{-1}\rfloor}A_{L,w,n}(b_{L,w,n}-b_{L,w,n-1})\right\}\\
&=O(t^2)+\sum_{n=2}^{\lfloor |t|^{-1}\rfloor}\sum_{(L,w)\in\mathcal S}\left\{\left(\frac{c_{L,w}}{2n^2}+O(n^{-3})\right)(it\cdot w+e^{it\cdot(L+(n-1)w)}-e^{it\cdot(L+nw)})\right\}
\end{align*}
and thus
\begin{align*}
\Psi(t)&=O(t^2)+\sum_{n=2}^{\lfloor |t|^{-1}\rfloor}\sum_{(L,w)\in\mathcal S}\left\{\left(\frac{c_{L,w}}{2n^2}+O(n^{-3})\right)(it\cdot w(1-e^{it\cdot(L+nw)}) +O(t^2))\right\}\\
&=O(t^2)+\sum_{n=2}^{\lfloor |t|^{-1}\rfloor}\sum_{(L,w)\in\mathcal S}\left\{\left(\frac{c_{L,w}}{2n^2}+O(n^{-3})\right)(it\cdot w)(-it\cdot(L+nw)+O(t^2n^2))\right \}\\
&=O(t^2)+\sum_{(L,w)\in\mathcal S}(t\cdot w)^2\sum_{n=2}^{\lfloor |t|^{-1}\rfloor}\frac{c_{L,w}}{2n}=O(t^2)+\sum_{(L,w)\in\mathcal S}\frac{c_{L,w}}2(t\cdot w)^2\log||t|^{-1}|.
\end{align*}
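In the Abel-transform step above we used the elementary asymptotics of the coefficients $A_{L,w,n}$, obtained by comparison with $\int_n^{+\infty}x^{-3}\, dx=\frac 1{2n^2}$, together with the explicit form of the increments of $b_{L,w,n}$:
\[
A_{L,w,n}=\sum_{k\ge n}\frac{c_{L,w}}{k^3}=\frac{c_{L,w}}{2n^2}+O(n^{-3})\quad\mbox{and}\quad
b_{L,w,n}-b_{L,w,n-1}=it\cdot w+e^{it\cdot(L+(n-1)w)}-e^{it\cdot(L+nw)}\, .
\]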
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop-lambda}]
We first observe that Lemma~\ref{lemm-tail} ensures that the general conditions of
Lemma~\ref{lem:Psi} are satisfied with
$c_{L,w}:=
\sum_{(A,B)\in E_{L,w}}\frac{ d_{A}^2\, \mathfrak a_{(A,B)}}{2|\partial\bar Q|\, |w| }$,
and with
$\mathcal S$ the set of $(L,w)\in(\mathbb Z^2)^2$ such that $w$ has co-prime coordinates, $L\in\mathcal E_w$ (where $\mathcal E_w$ is the set of $L\in\mathbb Z^2$ such that $L\cdot w\ge 0>(L-w)\cdot w$), and there exists $(A,B)\in E_{L,w}$.
The finiteness of $\mathcal S$ then comes from the finiteness of $\mathcal A$ and of the possibilities for $B$ once $(A,w)$ is fixed; the finiteness of the possibilities for $L$ once $A,w,B$ are fixed comes from our constraint on $L$.
The disjointness assumption of Lemma~\ref{lem:Psi} comes from our first conditions on $w$ and $L$. Since $1-\lambda_t=\Psi(t)+V(t)$, due to Lemmas~\ref{lem:V} and~\ref{lem:Psi},
we know that
\[
1-\lambda_t=\sum_{(L,w)\in\mathcal S}\frac{c_{L,w}}2(t\cdot w)^2\log||t|^{-1}|+O(t^2)\, .
\]
For any prime $w\in\mathbb Z^2$, due to Lemma~\ref{lemm-tail}
and Corollary~\ref{lemm-tailrem},
\[
\sum_{L\, :\, (L,w)\in\mathcal S}c_{L,w}=\sum_{C\in\mathcal C\, :\, w_C=\pm w}
\frac{ \mathfrak d_{C}^2}{|\partial\bar Q|\, |w_C| }
\, .\]
Therefore
$1-\lambda_t
=2 \sum_{C\in\mathcal C}
\frac{ \mathfrak d_{C}^2}{2|\partial\bar Q|\, |w_C| }
(t\cdot w_C)^2\log||t|^{-1}|+O(t^2)$.
\end{proof}
\section{Expansions in the local limit theorem and of the decorrelation rate}\label{LLT}
\subsection{Expansion in the local limit theorem in a general context}
Let $(\mathbb{D}elta, f,\mu_\mathbb{D}elta)$ be a probability preserving dynamical system with transfer
operator $P$.
Let $p,p_0,p_1\in[1,+\infty]$ with $p_0\le p_1$.
Set $\hat \kappa_n:=\sum_{k=0}^{n-1}\hat\kappa\circ f^k$, with $\hat\kappa:\mathbb{D}elta\rightarrow\mathbb Z^d$ integrable with zero mean. Assume that, for every $t\in[-\pi,\pi]^d$, the operator $P_t:h\mapsto P(e^{i t\cdot\hat\kappa}h)$ acts on a complex Banach space $\mathcal B$ of functions $h:\mathbb{D}elta\rightarrow\mathbb C$ and satisfies the following properties.
Assume that $\hat\kappa\in L^p(\mu_\mathbb{D}elta)$, that $\mathcal B\hookrightarrow L^{p_1}(\mu_\mathbb{D}elta)$ and
that there exists $\beta\in(0,\pi)$ such that for every
$t\in[-\beta,\beta]^d$,
\begin{equation}\label{quasicompact}
P_t^n=\lambda_t^n\Pi_t+N_t^n\, ,
\end{equation}
\begin{equation}\label{borneexpo}
\sup_{t\in[-\beta,\beta]^d}\Vert N_t^n\Vert_{\mathcal L(\mathcal B,L^{p_0}(\mu_\mathbb{D}elta))}+\sup_{t\in[-\pi,\pi]^d\setminus[-\beta,\beta]^d}\Vert P_t^n\Vert_{\mathcal L(\mathcal B,L^{p_0}(\mu_\mathbb{D}elta))}=O\left(\theta^n\right)\, ,\quad\mbox{with}\ 0<\theta<1\, .
\end{equation}
Assume moreover that there exists an invertible positive symmetric matrix $A$ such that
\begin{equation}\label{lambda}
\lambda_t=1- A t\cdot t\, \log (1/|t|)+O(t^2)\, ,\quad\mbox{i.e.}\quad \lambda_t=e^{- A t \cdot t\, \log (1/|t|)+O(t^2)}\, .
\end{equation}
Moreover assume either that there exists
$\gamma\in(0,1]$ such that
\begin{equation}\label{Pi}
\Pi_t:=
\mathbb E_{\mu_{\mathbb{D}elta}}\left[\cdot\right]\mathbf 1_{\mathbb{D}elta}+O\left( |t|^\gamma\right)\quad\mbox{in}\quad\mathcal L\left(\mathcal B, L^{p_0}(\mu_{\mathbb{D}elta})\right)
\, ,
\end{equation}
or that
there exists
$\gamma'\in(0,1]$ such that
\begin{equation}\label{Pidiff}
\exists \Pi'_0\in\mathcal L(\mathcal B,L^{p_0}(\mu_\mathbb{D}elta)),\
\Pi_t:=\mathbb E_{\mu_{\mathbb{D}elta}}\left[\cdot\right]\mathbf 1_{\mathbb{D}elta}+t\cdot \Pi'_0+ O\left( t^{1+\gamma'}\right)\ \mbox{in}\ \mathcal L(\mathcal B,L^{p_0}(\mu_\mathbb{D}elta))\, .
\end{equation}
Note that, due to Proposition~\ref{PROP1},
under \eqref{H0}, both~\eqref{Pi} and~\eqref{Pidiff} are satisfied for the tower $(\mathbb{D}elta,f,\mu_\mathbb{D}elta)$ associated to the discrete infinite horizon Lorentz gas.
Let $q_0,q_1\in[1,+\infty]$ be such that $\frac 1{q_i}+\frac 1 {p_i}= 1$ for $i=0,1$.
Let us define
\begin{equation}\label{I0I2}
I_{0}(X):=\frac{e^{-\frac 1{2} A^{-1}X\cdot X}}{\sqrt{(2\pi)^{d}\det A}},\quad
I_{2}(X):=
-
\frac{ A^{-1} X\cdot X-d}{\sqrt{(2\pi)^{d}\det A}}\, e^{-\frac 1{2} A^{-1}X\cdot X}\, ,
\end{equation}
\begin{equation}\label{I1I3}
I_{1}(X):=
-i
\frac{A^{-1}X}{\sqrt{(2\pi)^{d}\det(A)}}e^{-\frac 1{2} A^{-1}X\cdot X},\quad
I_{3}(X):=
-i
\frac{A^{-1}X(d+2- A^{-1} X\cdot X)}{\sqrt{(2\pi)^{d}\det(A)}}\, e^{-\frac 1{2} A^{-1}X\cdot X}
.
\end{equation}
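Let us also record, as it may help in keeping track of the normalizations in \eqref{I0I2} and \eqref{I1I3}, that a direct computation from these formulas gives
\[
I_1=i\nabla I_0\quad\mbox{and}\quad I_3=i\nabla I_2\, ,
\]
which is consistent with the Fourier representations of these functions used in the proof of Proposition~\ref{LLT1} below.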
Under these assumptions, we state the following general local limit theorem with expansion, which can be read first with $k=0$. The generalization to $k=O(n/\log n)$ will be useful in the proof of our main results (Theorems~\ref{THM0} and \ref{THM1}), due to the approximation of the observables $\phi,\psi$ by functions that are constant on every stable curve. We set $a_n:=\sqrt{n\log n}$.
\begin{prop}\label{LLT1} Assume~\eqref{quasicompact}--\eqref{lambda}.
If \eqref{Pi} holds, then,
for every $h\in\mathcal B
$, for $g\in L^{q_0}(\mu_\mathbb{D}elta)$, uniformly in $N\in\mathbb Z^d$
\begin{align}\label{eq:LLT1}
&\mathbb E_{\mu_{\mathbb{D}elta}}[h\mathbf 1_{\{\hat \kappa_n\circ f^k=N\}}g\circ f^n]=
\frac {\mathbb E_{\mu_{\mathbb{D}elta}}[h]\mathbb E_{\mu_{\mathbb{D}elta}}[g]}{a_n^d}
\left(I_0\left(\frac N{a_n}\right)-\frac {I_2\left(\frac N{a_n}\right)\log\log n +O(1)}{2\log n}
\right)\\
&
+O\left(
a_n^{-d}(a_n^{-\gamma}+(k/a_n)^{\min(1,\frac p{q_1},\frac p{p_0})})
\Vert g\Vert_{L^{q_0}(\mu_\mathbb{D}elta)}
\sup_t\Vert P_t^kP^kh\Vert_{\mathcal B}
\right)\, ,\nonumber
\end{align}
for $k=0$ (without additional assumption on $p_1$) and uniformly in
$k\le C n/\log n
$.\\
If \eqref{Pidiff} holds, then, for every $h\in\mathcal B$, for $g\in L^{q_0}(\mu_\mathbb{D}elta)$, uniformly in $N\in\mathbb Z^d$,
\begin{align*}
&\mathbb E_{\mu_{\mathbb{D}elta}}[h\mathbf 1_{\{\hat \kappa_n\circ f^k=N\}}g\circ f^n]\\
&=\frac 1{a_n^d}\mathbb E_{\mu_{\mathbb{D}elta}}[h]\mathbb E_{\mu_{\mathbb{D}elta}}[g]
\left(I_0\left(\frac N{a_n}\right)-\frac {I_2\left(\frac N{a_n}\right)\log\log n+O(1)}{2\log n}
\right)\\
&+\frac {
C_k(g,h)}{a_n^{d+1}}
\left(I_{1}\left(\frac N{a_n}\right)-\frac {I_3\left(\frac N{a_n}\right)}2 \frac{\log\log n}{\log n}\right)+O\left(\frac{\Vert g\Vert_{L^{q_0}(\mu_\mathbb{D}elta)}\sup_u\Vert P_u^{k}P^kh\Vert_{\mathcal B}}{a_n^{d+1}\log n}\right)\\
&
+O\left(\frac {1 }{(2\pi )^d(a_n)^{d+1}}\int_{[-\beta a_n,\beta a_n]^d}\!\!\!\!\!\!\!\!\! |u|e^{-\frac{c_0\min(|u|^{2-\epsilon},|u|^{2+\epsilon})}2} |\mathbb E_{\mu_{\mathbb{D}elta}}[g\Pi'_0(P_{u/a_n}^kP^kh-P^{2k}h)]|\, du\right)
\, ,
\end{align*}
with
$C_k(g,h):= \mathbb E_{\mu_{\mathbb{D}elta}}[g\Pi'_0P^{2k}h]+ i\mathbb E_{\mu_{\mathbb{D}elta}}[\hat\kappa_k g]\mathbb E_{\mu_\mathbb{D}elta}[h]+i\mathbb E_{\mu_\mathbb{D}elta}[g] \mathbb E_{\mu_\mathbb{D}elta}[\hat\kappa_kP^kh]$,
for $k=0$ (without additional assumption on $p_1,p_0,q_0$) and uniformly in
$k\le C a_n^{\frac{\tilde\gamma}{1+\tilde\gamma}}/(\log n)^{1/(1+\tilde\gamma)}$
with $\tilde\gamma=\min(1,\frac{p(q_0-1)-q_0}{q_0},\frac{p-q_1}{q_1})$
if
$q_1<p$, $q_0<+\infty$, $\frac 1{q_0}+\frac 1 {p}<1$.
\end{prop}
When $k=0$, one can take $p_0=p_1=1$, $q_0=q_1=\infty$, and the last error term in the second estimate of Proposition~\ref{LLT1} vanishes.
\begin{rem}
If $k=0$ and $\mathbb E_{\mu_\mathbb{D}elta}[g]\mathbb E_{\mu_\mathbb{D}elta}[h]=0$,
the second estimate of Proposition~\ref{LLT1} provides an estimate in $O(a_n^{-d-1}/\log n)$ if $N$ is fixed or uniformly bounded, but provides
an estimate with a leading term (nonzero in general) of order $a_n^{-d-1}$ if $N$ has order $a_n$.
\end{rem}
\begin{proof}[Proof of Proposition~\ref{LLT1}]
We assume that $\beta<1$, that
\begin{equation}\label{lambda0}
\forall t\in[-\beta,\beta]^d,\quad e^{-2 At\cdot t\log(1/|t|)}\le \left|\lambda_t\right|\le e^{-\frac 12 At\cdot t\log(1/|t|)}\,
\end{equation}
for $|\cdot|$ the supremum norm on $\mathbb R^d$
and that
$\forall y>x>\beta^{-1}$, $\frac 12 (x/y)^\varepsilon \le \log(x)/\log(y)\le 2 (y/x)^{\varepsilon}$
(using for example Karamata's representation of slowly varying functions).
This last condition will imply that, for every $n$ large enough (so that $a_n>\beta^{-1}$) and for every $u\in[-\beta a_n,\beta a_n]^d$, the following
inequalities hold:
\begin{equation}\label{logx/logy}
\frac 12\min(|u|^\varepsilon,|u|^{-\varepsilon})\le \log(a_n/|u|)/\log a_n \le 2\max(|u|^\varepsilon,|u|^{-\varepsilon})\, .
\end{equation}
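Let us briefly indicate how \eqref{logx/logy} follows from the previous condition. For $a_n>\beta^{-1}$ and $0<|u|\le\beta a_n$, both $a_n$ and $a_n/|u|$ are at least $\beta^{-1}$; applying the condition with $(x,y)=(a_n/|u|,a_n)$ when $|u|>1$ and with $(x,y)=(a_n,a_n/|u|)$ when $|u|<1$ (the case $|u|=1$ being trivial) gives
\[
\frac 12 |u|^{-\varepsilon}\le \frac{\log(a_n/|u|)}{\log a_n}\le 2|u|^{\varepsilon}\ \mbox{ if }|u|>1\, ,
\qquad
\frac 12 |u|^{\varepsilon}\le \frac{\log(a_n/|u|)}{\log a_n}\le 2|u|^{-\varepsilon}\ \mbox{ if }|u|<1\, ,
\]
which is \eqref{logx/logy}.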
Let $h,g$ be as in the statement of Proposition~\ref{LLT1}.
Note that $P_t^n(H.G\circ f^n) =GP_t^n(H)$ (since $\int_\mathbb{D}elta h P(H \cdot G\circ f)\,d\mu_{\mathbb{D}elta}= \int_\mathbb{D}elta ( h\circ f)\, H\, (G\circ f) \,d\mu_{\mathbb{D}elta} = \int_\mathbb{D}elta h\,G\, P(H) \,d\mu_{\mathbb{D}elta}$). Thus,
\begin{align*}
\mathbb E_{\mu_{\mathbb{D}elta}}&[h\mathbf 1_{\{\hat \kappa_n\circ f^k=N\}}g\circ f^n]
=\frac 1{(2\pi)^d}\int_{[-\pi,\pi]^d}e^{-it\cdot N}\mathbb E_{\mu_{\mathbb{D}elta}}\left[he^{it\cdot\hat \kappa_n\circ f^k}g\circ f^n\right]\, dt\\
&=\frac 1{(2\pi)^d}\int_{[-\pi,\pi]^d}e^{-it\cdot N}\mathbb E_{\mu_{\mathbb{D}elta}}\left[P^{n+k}\left(he^{it\cdot\hat \kappa_n\circ f^k}g\circ f^n\right)\right]\, dt\\
&
=\frac 1{(2\pi)^d}\int_{[-\pi,\pi]^d}e^{-it\cdot N}\mathbb E_{\mu_{\mathbb{D}elta}}\left[P^{k}\left(P^n\left(he^{it\cdot\hat \kappa_{n-k}\circ f^k}(ge^{it\cdot\hat \kappa_k})\circ f^n\right)\right)\right]\, dt\, .
\end{align*}
Thus $\mathbb E_{\mu_{\mathbb{D}elta}}(h\mathbf 1_{\{\hat \kappa_n\circ f^k=N\}}g\circ f^n)
=\frac 1{(2\pi)^d}\int_{[-\pi,\pi]^d}e^{-it\cdot N}\mathbb E_{\mu_{\mathbb{D}elta}}\left[P_t^k\left( gP^n\left(he^{it\cdot\hat \kappa_{n-k}\circ f^k}\right)\right)\right]\, dt$ and so
\[
\mathbb E_{\mu_{\mathbb{D}elta}}[h\mathbf 1_{\{\hat \kappa_n\circ f^k=N\}}g\circ f^n]=\frac 1{(2\pi)^d}\int_{[-\pi,\pi]^d}e^{-it\cdot N}\mathbb E_{\mu_{\mathbb{D}elta}}\left[ P_t^k \left(gP_t^{n-2k}P_t^kP^kh\right)\right]\, dt\, .
\]
Now, due to \eqref{quasicompact} and \eqref{borneexpo}, $\mathbb E_{\mu_{\mathbb{D}elta}}(h\mathbf 1_{\{\hat \kappa_n\circ f^k=N\}}g\circ f^n)$ is equal to
\begin{align}
\nonumber&\frac 1{(2\pi)^d}\int_{[-\beta,\beta]^d}e^{-it\cdot N}\lambda_t^{n-2k}\mathbb E_{\mu_{\mathbb{D}elta}}\left[e^{it\cdot\hat\kappa_k}g\Pi_tP_t^kP^kh\right]\, dt+O(\theta^{n-2k}\Vert g\Vert_{L^{q_0}(\mu_\mathbb{D}elta)}\sup_t\Vert P_t^kP^kh\Vert_{\mathcal B})\\
\label{FormulaInt}&=\frac 1{(2\pi a_n)^d}\int_{[-\beta a_n,\beta a_n]^d}e^{-i\frac u{a_n}\cdot N}\lambda_{u/a_n}^{n-2k}\mathbb E_{\mu_{\mathbb{D}elta}}\left[e^{i\frac u{a_n}\cdot\hat\kappa_k}g\Pi_{u/a_n}P_{u/a_n}^kP^kh\right]\, du\\
&\ \ \ +O\left(\theta^{n-2k}\Vert g\Vert_{L^{q_0}(\mu_\mathbb{D}elta)}\sup_t\Vert P_t^kP^kh\Vert_{\mathcal B}\right)\, .\nonumber
\end{align}
Observe that $a_n^2=n\log n\sim 2 n\log(a_n)$, since $2\log(a_n)=\log n+\log\log n\sim\log n$.
This, combined
with~\eqref{lambda0} and~\eqref{logx/logy},
ensures that there exist $c_0,c'_0>0$ such that,
for every $n$ large enough
and for every $u\in[-\beta a_n,\beta a_n]^d$,
\begin{equation}\label{bornelambda}
e^{-c'_0\max(|u|^{2-\epsilon},|u|^{2+\epsilon})}\le e^{-\frac {2n}{a_n^2} Au\cdot u\log(a_n/|u|)}\le \left|\lambda_{u/a_n}^{n} \right|\le e^{-\frac {n}{2a_n^2} Au\cdot u\log(a_n/|u|)}\le
e^{-c_0\min(|u|^{2-\epsilon},|u|^{2+\epsilon})} \, .
\end{equation}
Therefore, using \eqref{Pi}, we obtain that
$$\left\Vert\lambda_{u/a_n}^{n-2k}\left(\Pi_{u/a_n}-\mathbb E_{\mu_{\mathbb{D}elta}}[\cdot]\mathbf 1_{\mathbb{D}elta}\right)\right\Vert_{\mathcal L(\mathcal B,L^{p_0}(\mu_{\mathbb{D}elta}))}\le K_1 e^{-c_0\min(|u|^{2+\epsilon},|u|^{2-\epsilon})}(u/a_n)^\gamma \, ,$$
and so
\begin{align}
&\mathbb E_{\mu_{\mathbb{D}elta}}(h\mathbf 1_{\{\hat \kappa_n\circ f^k=N\}}g\circ f^n)=O\left(a_n^{-d-\gamma}\Vert g\Vert_{L^{q_0}(\mu_\mathbb{D}elta)}\sup_t\Vert P_t^kP^kh\Vert_{\mathcal B}\right)\nonumber\\
&+\frac 1{(2\pi a_n)^d}\int_{[-\beta a_n,\beta a_n]^d}\mathbb E_{\mu_{\mathbb{D}elta}}[e^{i\frac u{a_n}\cdot\hat\kappa_k}g]\, \mathbb E_{\mu_{\mathbb{D}elta}}[e^{i\frac u{a_n}\cdot\hat\kappa_k\circ f^k}h]e^{-i\frac u{a_n}\cdot N}\lambda_{u/a_n}^{n-2k}\, du\nonumber\\
&=\frac 1{(2\pi a_n)^d} \mathbb E_{\mu_{\mathbb{D}elta}}[g] \, \mathbb E_{\mu_{\mathbb{D}elta}}[h]\int_{[-\beta a_n,\beta a_n]^d}e^{-i\frac u{a_n}\cdot N}\lambda_{u/a_n}^{n-2k}\, du\nonumber\\
&\ \ \ +O(
a_n^{-d}(a_n^{-\gamma}+(k/a_n)^{\min(1,\frac p{q_1})}+(k/a_n)^{\min(1,\frac p{p_0})})
\Vert g\Vert_{\mathbb L^{q_0}(\mu_\mathbb{D}elta)}
\sup_t\Vert P_t^kP^kh\Vert_{\mathcal B}
)\, ,\label{CCC1}
\end{align}
due to \eqref{bornelambda}
combined with the two following estimates
\begin{align*}
\mathbb E_{\mu_\mathbb{D}elta}[(e^{i\frac u{a_n}\cdot\hat\kappa_k}-1)g]&=O\left(|u|^{\gamma_0}\frac{k^{\gamma_0}\Vert\hat\kappa\Vert^{\gamma_0}_{L^{p}(\mu_\mathbb{D}elta)}\Vert g\Vert_{\mathbb L^{q_0}(\mu_\mathbb{D}elta)}}
{(a_n)^{\gamma_0}}\right)\\
\mathbb E_{\mu_\mathbb{D}elta}[(e^{i\frac u{a_n}\cdot\hat\kappa_k\circ f^k}-1)P^kh]&=O\left(|u|^{\gamma_0}\frac{k^{\gamma_0}\Vert\hat\kappa\Vert^{\gamma_0}_{L^{p}(\mu_\mathbb{D}elta)}\Vert h\Vert_{\mathbb L^{p_1}(\mu_\mathbb{D}elta)}}
{(a_n)^{\gamma_0}}\right)\, ,
\end{align*}
with $\gamma_0:=\min(1,\frac p{p_0}
,\frac p{q_1})$.
Here we have used H{\"o}lder inequality,
the fact that
$|e^{ix}-1|\le 2|x|^{\gamma_0}$
and
the following inequality applied with $s=\gamma_0$ and $r\in\{p_0,q_1\}$
\begin{equation}\label{normrs}
\forall r\ge 1,\quad\forall s\ge 1/r,\quad
\||\hat\kappa_k|^{s}\|_{L^r(\mu_\mathbb{D}elta)}=
\|\hat\kappa_k\|^{s}_{L^{rs}(\mu_\mathbb{D}elta)}\le
k^{s} \|\hat\kappa\|^s_{L^{rs}(\mu_\mathbb{D}elta)}
\, ,
\end{equation}
where we used the triangular inequality for $\Vert\cdot\Vert_{L^{rs}(\mu_\mathbb{D}elta)}$
and the fact that $\Vert\hat\kappa\circ f^m\Vert_{L^{rs}(\mu_\mathbb{D}elta)}=\Vert\hat\kappa\Vert_{L^{rs}(\mu_\mathbb{D}elta)}$ by $f$-invariance of $\mu_\mathbb{D}elta$.
Moreover, due to \eqref{lambda
}
and \eqref{bornelambda}
and since $k=O(n/\log n)$,
\begin{align}
\label{lambdan-k}\lambda_{u/a_n}^{n-2k}
&=\lambda_{u/a_n}^{n} \lambda_{u/a_n}^{-2k}
= e^{-\frac n{ a_n^2} A u\cdot u\, \left(\log (a_n/|u|)\right)+O(n|u|^2/a_n^2)} e^{O(\frac kn\max(|u|^{2-\epsilon},|u|^{2+\epsilon}))}\\
\nonumber&= e^{-\frac 1{2 \log n} A u\cdot u\, \left(\log n+\log(\log n/|u|^2)\right)+O(\max(|u|^{2-\epsilon},|u|^{2+\epsilon})/\log n)}\\
\nonumber&=e^{-\frac 1{2} A u\cdot u\, \left(1+\frac{\log(\log n/|u|^2)}{\log n}\right)}
+O\left(e^{-\frac{c_0\min(|u|^{2-\epsilon},|u|^{2+\epsilon})}2}\frac{\max(|u|^{2-\varepsilon},|u|^{2+\varepsilon})}{\log n}\right)\, .
\end{align}
Therefore
\begin{align*}
&\int_{[-\beta a_n,\beta a_n]^d}u^\ell e^{-i\frac u{a_n}\cdot N}\lambda_{u/a_n}^{n-2k}\, du=\int_{\mathbb R^d}u^\ell e^{-i\frac u{a_n}\cdot N}e^{-\frac 1{2} A u\cdot u\, \left(1+\frac{\log(\log n/|u|^2)}{\log n}\right)}\, du+O\left(\frac 1{\log n}\right)\nonumber\\
=&\int_{\mathbb R^d}u^\ell e^{-i\frac u{a_n}\cdot N}e^{-\frac 1{2} A u\cdot u}
\left(1 -\frac 12 A u\cdot u\frac{\log(\log n/|u|^2)}{\log n}\right. \nonumber\\
&\left. \ \ \ +O\left(e^{\frac 12 Au\cdot u\frac{|\log(\log n/|u|^2)|}{\log n}}
\frac{
|u|^4(\log(\log n/|u|^2))^2}{(\log n)^2}\right)\right)\, du+O\left(\frac 1{\log n}\right)\, ,\nonumber
\end{align*}
where we used $e^{x}=1+x+O(e^{|x|}x^2)$. Thus
\begin{align}
&\int_{[-\beta a_n,\beta a_n]^d}u^\ell e^{-i\frac u{a_n}\cdot N}\lambda_{u/a_n}^{n-2k}\, du
=\int_{\mathbb R^d}u^\ell e^{-i\frac u{a_n}\cdot N}e^{-\frac 1{2} A u\cdot u}
\left(1 -\frac 12 A u\cdot u\frac{\log(\log n/|u|^2)}{\log n}\right)\nonumber\\
&\ \ \ +O\left(|u|^{4+\ell}
e^{-\frac 1{2} A u\cdot u\left(1-\frac{\log\log n}{\log n}-2\frac{|\log |u||}{\log n}\right)}\frac{(\log\log n)^2+|\log |u||^2}{(\log n)^2}\right)\, du+O\left(\frac 1{\log n}\right)\, , \nonumber
\end{align}
\begin{align}
\int_{[-\beta a_n,\beta a_n]^d}u^\ell &e^{-i\frac u{a_n}\cdot N}\lambda_{u/a_n}^{n-2k}\, du=\int_{\mathbb R^d}u^\ell e^{-i\frac u{a_n}\cdot N}e^{-\frac 1{2} A u\cdot u}
\left(1 -\frac 12 A u\cdot u\frac{\log(\log n/|u|^2)}{\log n}\right)\nonumber\\
&\ \ +O\left(|u|^{4+\ell}
e^{-\frac 1{2} A u\cdot u\left(\frac 12 - 2\max(|u|,|u|^{-1})\right)}\frac{(\log\log n)^2+|\log |u||^2}{(\log n)^2}\right)\, du+O\left(\frac 1{\log n}\right)\nonumber\\
=&\int_{\mathbb R^d}u^\ell e^{-i\frac u{a_n}\cdot N}e^{-\frac 1{2} A u\cdot u}
\left(1 -\frac 12 A u\cdot u\frac{\log\log n}{\log n}\right)\, du+O\left(\frac 1{\log n}\right)\, .\label{intlambda}
\end{align}
Applying this formula with $\ell=0$ and using
$(k/a_n)^{\min(1/p_1,1/q_1)}=O((\log n)^{-1})$,
\eqref{CCC1} becomes
\eqref{eq:LLT1}
with $I_{2k}(x):=\frac 1{(2\pi)^{d}}\int_{\mathbb R^d}e^{-i u\cdot x}\left( A u\cdot u\right)^{k}e^{-\frac 1{2} A u\cdot u}\, du$, which coincide with \eqref{I0I2}.
Indeed, with the change of variable $v=A^{1/2}u$,
$I_0(x)=\frac{\Phi(A^{-1/2}x)}{\sqrt{\det A}}$ and $I_2(x)=-\frac{(\mathbb{D}elta \Phi)(A^{-1/2}x)}{\sqrt{\det A}}$, where $\mathbb{D}elta$ is the Laplacian operator and $\Phi(x):=\frac 1{(2\pi)^{d}}\int_{\mathbb R^d}e^{-i u\cdot x}
e^{-\frac 1{2} u\cdot u}\, du=\frac{e^{-\frac 12 x\cdot x}}{\sqrt{(2\pi)^d}}$ is the standard $d$-dimensional Gaussian density.
This ends the proof of the first assertion of Proposition~\ref{LLT1}.
\\
Assume from now on \eqref{Pidiff}.
Then,
\begin{align*}
\mathbb E_{\mu_\mathbb{D}elta}[P_t^k(g\Pi_tP_t^k P^k h)]&=\mathbb E_{\mu_\mathbb{D}elta}[e^{it\cdot\hat\kappa_k}g]\mathbb E_{\mu_\mathbb{D}elta}[P_t^kP^k h]+ t\cdot\mathbb E_{\mu_\mathbb{D}elta}[e^{i t\cdot\hat\kappa_k}g\Pi'_0P_t^kP^k h]\\
&\ \ \ \ \ \ +O(t^{1+\gamma'}\Vert g\Vert_{L^{q_0}(\mu_\mathbb{D}elta)}\Vert P_t^kP^k h\Vert_{\mathcal B})\, ,
\end{align*}
and so \eqref{FormulaInt} leads to
\begin{align*}
&\mathbb E_{\mu_{\mathbb{D}elta}}[h\mathbf 1_{\{\hat \kappa_n\circ f^k=N\}}g\circ f^n]
=\frac 1{(2\pi a_n)^d}\int_{[-\beta a_n,\beta a_n]^d}e^{-i\frac u{a_n}\cdot N}\lambda_{u/a_n}^{n-2k}\\
&
\ \ \ \ \ \ \ \ \ \ \ \ \ \left(\mathbb E_{\mu_{\mathbb{D}elta}}[e^{i\frac u{a_n}\cdot\hat\kappa_k}g]\, \mathbb E_{\mu_{\mathbb{D}elta}}[e^{i\frac u{a_n}\cdot\hat\kappa_k\circ f^k}h]+\frac u{a_n}\cdot\mathbb E_{\mu_\mathbb{D}elta}[e^{i \frac u{a_n}\cdot\hat\kappa_k}g\Pi'_0 P_{\frac u{a_n}}^kP^kh]
\right)
\, du\\
& \ \ \ \ \ \ \ \ \ \ \ \ \ +O\left(a_n^{-d-1-\gamma'}\Vert g\Vert_{L^{q_0}(\mu_\mathbb{D}elta)}\sup_t\Vert P_t^kP^kh\Vert_{\mathcal B}\right)\, .
\end{align*}
If $k=0$, we go directly to \eqref{AAA2b}. Otherwise we proceed as follows.
Since $|e^{ix}-1-ix|\le 2|x|^{1+\widetilde\gamma}$ for
$\widetilde\gamma:=\min(1,\frac{p(q_0-1)}{q_0}-1,\frac p{q_1}-1)\in (0,1]$
(since $\frac 1{q_0}+\frac 1 {p}<1$ and $q_1<p$),
\begin{align*}
\mathbb E_{\mu_\mathbb{D}elta}\left[\left(e^{i\frac u{a_n}\cdot\hat\kappa_k}-1-i\frac u{a_n}\cdot\hat\kappa_k\right)g\right]
& =O\left(\frac{1}{a_n^{1+\widetilde\gamma}}\mathbb E_{\mu_{\mathbb{D}elta}}\left[\left|u\cdot\hat\kappa_k\right|^{1+\widetilde\gamma} |g|\right]\right)\\
&=O\left(\frac{|u|^{1+\widetilde\gamma}}{a_n^{1+\widetilde\gamma}}\left\Vert |\hat\kappa_k|^{1+\widetilde\gamma}\right\Vert_{L^{\frac{q_0}{q_0-1}}(\mu_\mathbb{D}elta)} \left\Vert g\right\Vert_{L^{q_0}(\mu_\mathbb{D}elta)}\right)\, ,
\end{align*}
due to the H\"older inequality,
noting that $\frac{q_0}{q_0-1}(1+\widetilde\gamma)\le p$.
Thus, using \eqref{normrs} with $s=1+\widetilde\gamma$ and $r=\frac{q_0}{q_0-1}$,
we obtain
\begin{align*}
\mathbb E_{\mu_\mathbb{D}elta}\left[\left(e^{i\frac u{a_n}\cdot\hat\kappa_k}-1-i\frac u{a_n}\cdot\hat\kappa_k\right)g\right]=O\left(|u|^{1+\widetilde\gamma}\frac{k^ {1+\widetilde\gamma}\Vert\hat\kappa\Vert^{1+\widetilde\gamma}_{L^{\frac{q_0}{q_0-1}(1+\widetilde\gamma)}(\mu_\mathbb{D}elta)}\Vert g\Vert_{\mathbb L^{q_0}(\mu_\mathbb{D}elta)}}
{a_n^{1+\widetilde\gamma}}\right)\, ,
\end{align*}
\[
\mathbb E_{\mu_\mathbb{D}elta}[(e^{i\frac u{a_n}\cdot\hat\kappa_k}-1-i\frac u{a_n}\cdot\hat\kappa_k) P^kh]=O\left(|u|^{1+\widetilde\gamma}\frac{k^{1+\widetilde\gamma}\Vert\hat\kappa^{1+\widetilde\gamma}\Vert_{L^{q_1}(\mu_\mathbb{D}elta)}\Vert P^kh\Vert_{\mathbb L^{p_1}(\mu_\mathbb{D}elta)}}
{a_n^{1+\widetilde\gamma}}\right)\, ,
\]
noticing that $(1+\widetilde\gamma)\frac{q_0}{q_0-1}\le p$ and $(1+\widetilde\gamma)q_1\le p$.
Therefore
\begin{align*}
&\mathbb E_{\mu_{\mathbb{D}elta}}[h\mathbf 1_{\{\hat \kappa_n\circ f^k=N\}}g\circ f^n]
=\frac 1{(2\pi a_n)^d} \mathbb E_{\mu_{\mathbb{D}elta}}[g] \, \mathbb E_{\mu_{\mathbb{D}elta}}[h]\int_{[-\beta a_n,\beta a_n]^d}e^{-i\frac u{a_n}\cdot N}\lambda_{u/a_n}^{n-2k}\, du\\
&+\frac 1{(2\pi )^d(a_n)^{d+1}}\int_{[-\beta a_n,\beta a_n]^d}
u\cdot D_{n,k}(g,h) e^{-i\frac u{a_n}\cdot N}\lambda_{u/a_n}^{n-2k} \, du\\
&
+O\left(\frac{\Vert g\Vert_{L^{q_0}(\mu_\mathbb{D}elta)}\sup_t\Vert P_t^kP^kh\Vert_{\mathcal B}}{a_n^{d+1}\log n}\right)\, ,
\end{align*}
with
$
D_{n,k}(g,h):=\left(\mathbb E_{\mu_{\mathbb{D}elta}}[ i\hat\kappa_k (g\Pi_0(h)+\Pi_0(g) P^kh) +e^{i \frac u{a_n}\cdot\hat\kappa_k\circ f^k}g\Pi'_0 P_{\frac u{a_n}}^kP^kh]\right)$,
since $k^{1+\widetilde\gamma}\ll a_n^{\widetilde\gamma}/\log n$
due to our assumption on $k$.
Recall that $\Pi'_0\in\mathcal L(\mathcal B, L^{p_0}(\mu_\mathbb{D}elta))$. Choosing
$\gamma_2=\min(1,\frac{p_0}{q_0})\in (0,1]$ (since $q_0<\infty$)
so that $1\le\gamma_2q_0\le p_0$,
\begin{align*}
\left|\mathbb E_{\mu_{\mathbb{D}elta}}[(e^{i \frac u{a_n}\cdot\hat\kappa_k\circ f^k}-1)g\Pi'_0 P^k_{u/a_n}P^kh]
\right|&\le 2|u/a_n|^{\gamma_2}\mathbb E_{\mu_\mathbb{D}elta}[|\hat\kappa_k|^{\gamma_2}\Pi'_0P^k_{u/a_n}P^kh]\\
&\le 2|u/a_n|^{\gamma_2} k^{\gamma_2}\Vert\hat\kappa\Vert_{L^{\gamma_2 q_0}(\mu_\mathbb{D}elta)}^{\gamma_2}\Vert \Pi'_0P^k_{u/a_n}P^kh\Vert_{L^{p_0}(\mu_\mathbb{D}elta)}\, ,
\end{align*}
since $\Vert \hat\kappa_k\Vert_{L^{q_0\gamma_2}(\mu_\mathbb{D}elta)}\le k\Vert\hat\kappa\Vert_{L^{q_0\gamma_2}(\mu_\mathbb{D}elta)}$ (by triangular inequality and $f$- invariance of $\mu_\mathbb{D}elta$). So
\begin{align}\label{AAA2b}
&\mathbb E_{\mu_{\mathbb{D}elta}}(h\mathbf 1_{\{\hat S_n=N\}}g\circ f^n)=
\frac 1{(2\pi a_n)^d}\int_{[-\beta a_n,\beta a_n]^d}e^{-i\frac u{a_n}\cdot N}\lambda_{u/a_n}^{n-2k}\, du\, \mathbb E_{\mu_\mathbb{D}elta}[g]\mathbb E_{\mu_\mathbb{D}elta}[h]\\
\nonumber&+
\frac {1 }{(2\pi )^d(a_n)^{d+1}}\int_{[-\beta a_n,\beta a_n]^d} ue^{-i\frac u{a_n}\cdot N}\lambda_{u/a_n}^{n-2k}\mathcal E_k(g,h,u/a_n)\, du\,
+O(a_n^{-d-1}/\log n)\, ,
\end{align}
since $(k/a_n)^{\gamma_2}\ll (\log n)^{-1}$ and with
\[
\mathcal E_k(g,h,t):= \mathbb E_{\mu_{\mathbb{D}elta}}[g\Pi'_0P_t^kP^kh]+i\mathbb E_{\mu_{\mathbb{D}elta}}[\hat\kappa_k g]\mathbb E_{\mu_\mathbb{D}elta}[h]+i\mathbb E_{\mu_\mathbb{D}elta}[g] \mathbb E_{\mu_\mathbb{D}elta}[\hat\kappa_kP^kh]\, .
\]
Therefore
\begin{align*}
&\mathbb E_{\mu_{\mathbb{D}elta}}(h\mathbf 1_{\{\hat S_n=N\}}g\circ f^n)=
\frac 1{(2\pi a_n)^d}\int_{[-\beta a_n,\beta a_n]^d}e^{-i\frac u{a_n}\cdot N}\lambda_{u/a_n}^{n-2k}\, du\, \mathbb E_{\mu_\mathbb{D}elta}[g]\mathbb E_{\mu_\mathbb{D}elta}[h]\\
&+\frac {\mathcal E_k(g,h,0) }{(2\pi )^d(a_n)^{d+1}}\cdot\int_{[-\beta a_n,\beta a_n]^d} ue^{-i\frac u{a_n}\cdot N}\lambda_{u/a_n}^{n-2k}\, du\, +O(a_n^{-d-1}/\log n)\\
&+O\left(\frac {1 }{(2\pi )^d(a_n)^{d+1}}\int_{[-\beta a_n,\beta a_n]^d} |u|e^{-\frac{c_0\min(|u|^{2-\epsilon},|u|^{2+\epsilon})}2} |\mathbb E_{\mu_{\mathbb{D}elta}}[g\Pi'_0(P_{u/a_n}^kP^kh-P^{2k}h)]|\, du\right)\, ,
\end{align*}
due to \eqref{bornelambda}.
Due to \eqref{intlambda} for $\ell=0$ and $\ell=1$,
$\mathbb E_{\mu_{\mathbb{D}elta}}(h\mathbf 1_{\{\hat S_n=N\}}g\circ f^n)$ is equal to
\begin{align*}
&
\frac 1{a_n^d}
\left(I_{0}\left(\frac N{a_n}\right)-\frac 12
I_{2}\left(\frac N{a_n}\right)\frac{\log\log n}{\log n}+O\left(\frac 1{\log n}\right)\right)
\mathbb E_{\mu_\mathbb{D}elta}[g]\mathbb E_{\mu_\mathbb{D}elta}[h]\\
&+\frac {\mathcal E_k(g,h,0)}{a_n^{d+1}}\cdot \left(I_{1}\left(\frac N{a_n}\right)-\frac 12
I_{3}\left(\frac N{a_n}\right)\frac{\log\log n}{\log n}\right)+
O\left(a_n^{-d-1}/\log n\right)\\
&+O\left(\frac {1 }{(2\pi )^d(a_n)^{d+1}}\int_{[-\beta a_n,\beta a_n]^d} |u|e^{-\frac{c_0\min(|u|^{2-\epsilon},|u|^{2+\epsilon})}2} |\mathbb E_{\mu_{\mathbb{D}elta}}[g\Pi'_0(P_{u/a_n}^kP^kh-P^{2k}h)]|\, du\right)\, ,
\end{align*}
with
$I_{2k+1}(x):=\frac 1{(2\pi)^d}\int_{\mathbb R^d}u( Au\cdot u)^k e^{-iu\cdot x}e^{-\frac 12 Au\cdot u}\, du=
i(-1)^{k}
\frac{(\mathbb{D}elta_{1,k}\Phi)(A^{-\frac 12}x)}{\sqrt{\det A}}$, with $\mathbb{D}elta_{1,k}\Phi(x):=A^{-\frac 12}\left(\sum_{j=1}^d\frac{\partial^{2k+1}}{\partial x_i\,\partial x_j^{2k}}\Phi(x)\right)_{i=1,...,d}$, which leads to \eqref{I1I3},
ending the proof of the Proposition.
\end{proof}
\subsection{Decorrelation expansion for $\mathbb Z^d$-extensions}
We are now interested in the decorrelation expansion for $\mathbb Z^d$-extensions satisfying the set-up of Proposition~\ref{LLT1}.
In this subsection, we consider the $\mathbb Z^d$-extension $(\Omega,\nu,S)$ of $(\mathbb{D}elta,f,\mu_\mathbb{D}elta)$
with step function $\hat\kappa:\mathbb{D}elta\rightarrow\mathbb Z^d$, where
$\Omega=\mathbb{D}elta\times\mathbb Z^d$, the transformation $S$ is defined by $S(x,L)=(fx,L+\hat\kappa(x))$ and preserves
the infinite measure $\nu=\mu_\mathbb{D}elta\otimes \mathfrak m_d$ with $\mathfrak m_d$ being the counting measure on $\mathbb Z^d$.
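Purely as an illustration of this skew-product structure (with a toy expanding map of the interval in place of the tower map $f$ and a two-valued step function; none of the names below refer to the billiard model, and numpy is assumed to be available), the following minimal Python sketch simulates $S(x,L)=(fx,L+\hat\kappa(x))$ and records the sequence of visited cells.
\begin{verbatim}
import numpy as np

# Toy Z-extension: the map x -> 3x mod 1 stands in for the tower map f,
# and a two-valued function stands in for the step function kappa_hat.
def f(x):
    return (3.0 * x) % 1.0

def kappa_hat(x):
    return 1 if x < 0.5 else -1

def S(x, L):
    # one step of the skew product S(x, L) = (f(x), L + kappa_hat(x))
    return f(x), L + kappa_hat(x)

x, L = np.random.rand(), 0
cells = [L]
for _ in range(1000):
    x, L = S(x, L)
    cells.append(L)        # cell of Z visited at each time
print(cells[:20])
\end{verbatim}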
\begin{cor}\label{CORO}
Assume~\eqref{quasicompact}--\eqref{lambda}.
Suppose that \eqref{Pi} holds with $\gamma>0$.
Let $\phi,\psi:\mathbb{D}elta\times\mathbb Z^d\rightarrow\mathbb R$ be two observables
such that
$
\sum_{N\in\mathbb Z^d}\left[\Vert\phi(\cdot,N)\Vert_{\mathcal B}+\Vert\psi(\cdot,N)\Vert_{L^{q_2}(\mu_\Delta)}\right]<\infty
$
and
$
\sum_{N\in\mathbb Z^d}|N|^\gamma\left[\left\vert\int_\Delta \phi(\cdot,N)\, d\mu_\Delta\right|+\left\vert\int_\Delta \psi(\cdot,N)\, d\mu_\Delta\right|\right]<\infty$.
Then
\[
\int_\Omega \phi. \psi\circ S^n\, d\nu=\frac {I_0(0)}{ (n\log n)^{\frac d2}}\left(1
-
\frac d2\frac{\log\log n}{\log n}\right)\int_\Omega\phi\, d\nu\, \int_\Omega\psi\, d\nu+O\left(\frac 1{(n\log n)^{\frac d2}\log n}\right)\, .
\]
Suppose that \eqref{Pidiff} (instead of \eqref{Pi}) holds with
$\gamma'>0$
and assume that there exists $\gamma_0>0$ such that
$
\sum_{N\in\mathbb Z^d}(1+|N|^{\gamma_0})\left(\Vert\psi(\cdot,N)\Vert_{L^{q_0}(\mu_\Delta)}+\Vert \phi(\cdot,N)\Vert_{\mathcal B}\right)<\infty\, .
$
Then
\[
\int_\Omega \phi. \psi\circ S^n\, d\nu=\frac {I_0(0)}{(n\log n)^{\frac d2}}\left(1-\frac{d\log\log n+O(1)}{2\log n}\right)\int_\Omega\phi\, d\nu\, \int_\Omega\psi\, d\nu+O\left(\frac 1{(n\log n)^{\frac {d+1}2}\log n}\right)\, .
\]
\end{cor}
\begin{proof}
Applying Proposition~\ref{LLT1} to the couples $(\phi(\cdot,N_1),\psi(\cdot,N_2))$ with $k=0$ leads to
\begin{align*}
&\int_\Omega \phi.\psi\circ S^n\, d\nu=\sum_{N_1,N_2\in\mathbb Z^d}\mathbb E_{\mu_\Delta}\left[\phi(\cdot,N_1)
\mathbf 1_{\{\hat\kappa_n=N_2-N_1\}}\, \psi(f^n(\cdot), {N_2})\right]\\
&=\sum_{N_1,N_2\in\mathbb Z^d}\frac 1{a_n^d}\mathbb E_{\mu_\Delta}[\phi(\cdot,N_1)]\mathbb E_{\mu_\Delta}[\psi(\cdot,N_2)]
\left(I_0\left(\frac {N_2-N_1}{a_n}\right)-\frac {I_2\left(\frac {N_2-N_1}{a_n}\right)}2 \frac{\log\log n}{\log n}\right)\\
&\ \ \ \ \ \ \ \ \ \ +O\left(\frac{1}{a_n^d\log n}\right)\, .
\end{align*}
Thus $\int_\Omega \phi.\psi\circ S^n\, d\nu
=\frac {I_{0}(0)}{a_n^d}\sum_{N_1,N_2\in\mathbb Z^d}\mathbb E_{\mu_\Delta}[\phi(\cdot,N_1)]\, \mathbb E_{\mu_\Delta}[\psi(\cdot,N_2)]\left(1-\frac d2\frac{\log\log n}{\log n}\right)+O\left(\frac{1}{a_n^d\log n}\right)$.
We used the fact that, for every $\gamma\in(0,2]$, there exists $C_\gamma$
such that, for every $X\in\mathbb R^d$, $|I_0(X)-I_0(0)|+|I_2(X)-I_2(0)|\le C_\gamma |X|^\gamma$, together with $|N_2-N_1|\le|N_2|+|N_1|$ and our assumptions on $\phi$ and $\psi$.
Setting $\tilde I_k(n,x):=I_k(x)-\frac {I_{k+2}(x)}2\frac{\log\log n}{\log n}$ and applying
the second point of Proposition~\ref{LLT1}, we obtain that $\int_\Omega \phi.\psi\circ S^n\, d\nu$ is equal to
\begin{align*}
&\frac 1{a_n^d}\sum_{N_1,N_2\in\mathbb Z^d}\mathbb E_{\mu_{\mathbb{D}elta}}[\phi(\cdot,N_1)]\mathbb E_{\mu_{\mathbb{D}elta}}[\psi(\cdot,N_2)]
\left(\tilde I_0\left(n,\frac {N_2-N_1}{a_n}\right)+O\left(\frac 1{\log n}\right)\right)\\
&+\frac 1{(a_n)^{d+1}}\sum_{N_1,N_2\in\mathbb Z^d}\mathbb E_{\mu_{\mathbb{D}elta}}[\psi(\cdot,N_2)\Pi'_0\phi(\cdot,N_1)] \tilde I_1\left(n,\frac {N_2-N_1}{a_n}\right)
+O\left(\frac{1}{a_n^{d+1}\log n}\right)\\
&=\frac {I_0(0)}{a_n^d}\sum_{N_1,N_2\in\mathbb Z^d}\!\!\!\! \mathbb E_{\mu_{\mathbb{D}elta}}[\phi(\cdot,N_1)]\mathbb E_{\mu_{\mathbb{D}elta}}[\psi(\cdot,N_2)]
\left(1
-\frac d2\frac{\log\log n}{\log n}+O\left(\frac 1{\log n}\right)\right)+O\left(\frac{1}{a_n^{d+1}\log n}\right)\, ,
\end{align*}
where we used the same argument as before with $\gamma\in(1,2]$ combined
with the fact that $I_1$ and $I_3$ are uniformly $\gamma_0$-H\"older with $\gamma_0\in(0,1)$ and that $I_1(0)=I_3(0)=0$.
\end{proof}
Observe that when $\phi$ or $\psi$ has zero integral, Corollary \ref{CORO}
only provides an upper bound (given by the term in $O(\cdot)$).
Nevertheless, the method used to establish Proposition~\ref{LLT1} and Corollary \ref{CORO} enables us to obtain explicit decorrelation rates
for some specific but natural
observables of the $\mathbb Z^d$-extension with zero integral, including coboundaries.
Before stating these decorrelation results, let us introduce the following notation.
Set
\begin{align}
I_{\ell_1,\ell_2,N}(x)&:= \frac1{(2\pi)^d}\int_{\mathbb R^d} (-iu\cdot N)^{\ell_1}(-A u\cdot u)^{\ell_2}e^{-iu\cdot x}e^{-\frac 12Au\cdot u}\, du\nonumber\\
&= \frac 1{(2\pi)^d\sqrt{\det A}}\int_{\mathbb R^d} (-iu\cdot A^{-1/2} N)^{\ell_1}(- u\cdot u)^{\ell_2}e^{-iu\cdot A^{-1/2}x}e^{-\frac 12u\cdot u}\, du\nonumber\\
&=
\frac{\Delta_{\ell_1,\ell_2}\Phi(A^{-\frac 12}x,N)}{\sqrt{\det A}}\, ,
\label{formulaI}
\end{align}
where $\Delta_{\ell_1,\ell_2}\Phi(x,N):=(A^{-\frac 12}N\cdot\nabla)^{\ell_1}\Delta^{\ell_2}\Phi (x)$,
with $\Phi(x)=\frac{e^{-\frac 12 x\cdot x}}{(2\pi)^{d/2}}$, and where $\nabla$ and $\Delta$ are the usual
gradient and Laplacian operators. Recall that, for $\psi:\mathbb R^d\rightarrow \mathbb C$, $\nabla\psi=\left(\frac{\partial}{\partial x_i}\psi\right)_{i=1,...,d}$ and $\Delta\psi=(\nabla\cdot\nabla)\psi=\sum_{i=1}^d\frac{\partial^2}{\partial^2 x_i}\psi$.
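Purely as a sanity check of formula \eqref{formulaI}, and not as part of the argument, the following minimal Python sketch compares a direct numerical integration with the Gaussian-derivative expression in the simplest case $d=1$, $A=1$, $(\ell_1,\ell_2)=(1,1)$, for which both sides reduce to $N\,\Phi'''(x)$ (the numerical values of $N$ and $x$ below are arbitrary, and the availability of numpy is assumed).
\begin{verbatim}
import numpy as np

# Check of (formulaI) for d = 1, A = 1, (ell_1, ell_2) = (1, 1):
# both sides equal N * Phi'''(x), Phi the standard Gaussian density.
N, x = 2.0, 0.7
u = np.linspace(-20.0, 20.0, 400001)
integrand = (-1j * u * N) * (-u**2) * np.exp(-1j * u * x) * np.exp(-u**2 / 2)
lhs = (integrand.sum() * (u[1] - u[0])).real / (2.0 * np.pi)   # Riemann sum

Phi  = lambda t: np.exp(-t**2 / 2) / np.sqrt(2.0 * np.pi)
Phi3 = lambda t: (3.0 * t - t**3) * Phi(t)     # third derivative of Phi
rhs = N * Phi3(x)
print(lhs, rhs)                                # the two values agree closely
\end{verbatim}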
\begin{prop}\label{LLT00}
Assume assumptions of Proposition~\ref{LLT1} with
\eqref{Pi}.
\begin{itemize}
\item[(A)] (Coboundary) If $\phi(x,\ell)=h_\ell(x)
$
and $\psi(x,\ell)=g_\ell(x)
$, with
$\sum_{N\in\mathbb Z^d}\left(\Vert g_N\Vert_{L^{q_2}(\mu_\Delta)}+\Vert h_N\Vert_{\mathcal B}\right)<\infty$,
and
$\sum_{N\in\mathbb Z^d}|N|^\delta\left(\Vert g_N\Vert_{L^{1}(\mu_\Delta)}+\Vert h_N\Vert_{L^{1}(\mu_\Delta)}\right)<\infty$ for some $\delta\in(0,1)$,
then, for every integer $m\ge 1$, $\int_\Omega \phi\circ(id-S)^m.\psi\circ S^n\, d\nu$ is equal to
\begin{align*}
&\frac {(\log n)^m+m\log\log n(\log n)^{m-1}}{2^ma_n^{d+2m}}
\left(
I_{0,m,0}(0)+
I_{0,m+1,0}(0) \frac{\log\log n}{2\log n}\right)\\
&\times \int_\Omega \phi\, d\nu\, \int_\Omega \psi\, d\nu+O(a_n^{-d-2m}(\log n)^{m-1})\, .
\end{align*}
\end{align*}
\item [(B)] Let $N\in\mathbb Z^d$ be fixed.
If $\phi(x,\ell)=h(x)(1_{\ell=N}+1_{\ell=-N}-2\times 1_{\ell=0})$
and $\psi(x,\ell)=g(x)$
with $g\in L^{q_2}(\mu_\mathbb{D}elta)$ and $h\in\mathcal{B}$,
then
\begin{align*}
\int_\Omega \phi.\psi\circ S^n\, d\nu&=-
\frac {A^{-1}N\cdot N}{a_n^{d+2}\sqrt{(2\pi)^d\det(A)}}\mathbb E_{\mu_\mathbb{D}elta}[h]\, \mathbb E_{\mu_\mathbb{D}elta}[g]\left(1-
\frac{(d+2)\log\log n}{2\log n}+O\left((\log n)^{-1}\right)\right)\\
&+O(a_n^{-d-2-\min(\delta,\gamma)})
\, .\end{align*}
\end{itemize}
\end{prop}
Let us observe that item (B) of Proposition~\ref{LLT00} (with $h=g=1_\Delta$) implies in particular that, for any fixed $N\in\mathbb Z^d\setminus\{0\}$,
$\mu_\Delta(\hat\kappa_n=N)+\mu_\Delta(\hat\kappa_n=-N)-2\mu_\Delta(\hat\kappa_n=0)$ is of order $a_n^{-d-2}$
as $n\rightarrow +\infty$.
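Purely as an illustration of this order of magnitude, and in a much simpler setting than the one treated here (a lazy simple random walk on $\mathbb Z$, for which $a_n\sim\sqrt n$ without the logarithmic correction of the infinite-horizon billiard, so that $a_n^{-d-2}$ with $d=1$ becomes $n^{-3/2}$), the following minimal Python sketch computes the exact law of the walk by convolution; it is only an analogue, not an instance, of the $\mathbb Z^d$-extensions considered in this section.
\begin{verbatim}
import numpy as np

# Lazy simple random walk on Z: steps -1, 0, +1 with prob 1/4, 1/2, 1/4.
step = np.array([0.25, 0.5, 0.25])       # law of one step on {-1, 0, 1}
dist = np.array([1.0])                   # law of S_0 = 0
for n in range(1, 2001):
    dist = np.convolve(dist, step)       # law of S_n on {-n, ..., n}
    if n in (500, 1000, 2000):
        c = n                            # index of the site 0
        second_diff = dist[c + 1] + dist[c - 1] - 2 * dist[c]
        print(n, second_diff * n**1.5)   # roughly constant in n
\end{verbatim}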
\begin{proof}[Proof of Proposition~\ref{LLT00}]
Set
$
J_{\ell_1,\ell_2,n,N,N_0}:=\frac 1{(2\pi)^d}\int_{[-\beta a_n,\beta a_n]^d}e^{-iu\cdot N/a_n}(-iu\cdot N_0)^{\ell_1}(-Au\cdot u)^{\ell_2}\lambda_{u/a_n}^{n-m}\, du\, ,
$
with $m=0$ in item (B).
Observe first that, due to \eqref{lambdan-k}
\begin{equation}
\label{diffJ1}
\forall\delta_0\in(0,1],\quad
J_{\ell_1,\ell_2,n,N,N_0}=J_{\ell_1,\ell_2,n,0,N_0} + O\left(\frac{N_0^{\ell_1}N^{\delta_0}}{a_n^{\delta_0}}\right)
\end{equation}
using $|e^{ix}-1|\le 2 |x|^{\delta_0}$,
and, moreover, due to \eqref{intlambda}
\begin{align}\label{relationIJ}
J_{\ell_1,\ell_2,n,0,N_0}&=I_{\ell_1,\ell_2,N_0}(0)
+I_{\ell_1,\ell_2+1,N_0}(0) \frac{\log\log n}{2\log n}+O\left(\frac {N_0^{\ell_1}}{\log n}\right)\, ,
\end{align}
for every $\delta\in [0,1]$.
Note that $I_{\ell_1,\ell_2,N}$ has same parity as $\ell_1$ and thus that
$I_{\ell_1,\ell_2,N}(0)=0$ if
$\ell_1$ is an odd number.
\begin{itemize}
\item Assume
the assumptions of (A).
Then
\begin{align*}
&\int_\Omega \phi\circ(id-S)^m.\psi\circ S^n\, d\nu
=\sum_{r=0}^m\frac{m!(-1)^r}{r!\, (m-r)!}
\int_\Omega \phi\circ S^r.\psi\circ S^n\, d\nu\\
&=\sum_{N_1,N_2\in\mathbb Z^d}\sum_{r=0}^m\frac{m!(-1)^r}{r!\, (m-r)!}
\mathbb E_{\mu_\mathbb{D}elta}\left[h_{N_1}1_{\{\hat\kappa_{n-r}=N_2-N_1\}}g_{N_2}\circ f^{n-r}\right]
\, .
\end{align*}
Thus, due to \eqref{FormulaInt} with $k=0$ and since $\sum_{r=0}^m\frac{m!(-1)^r\lambda_t^{n-r}}{r!(m-r)!}=(-1)^m\lambda_t^{n-m}(1-\lambda_t)^m$, $\int_\Omega \phi\circ(id-S)^m.\psi\circ S^n\, d\nu$ is equal to
\begin{align*}
&\frac {(-1)^m}{(2\pi)^da_n^d}\sum_{N_1,N_2\in\mathbb Z^d}
\int_{[-\beta a_n,\beta a_n]^d}
e^{-iu\cdot(N_2-N_1)/a_n}\lambda_{u/a_n}^{n-m}(1-\lambda_{u/a_n})^m\mathbb E_{\mu_{\mathbb{D}elta}}[g_{N_2}\Pi_{u/a_n}h_{N_1}]\, du\\
&\ \ \ \ \ \ \ +O\left(\sum_{N_1,N_2\in\mathbb Z^d}\theta^n
\Vert g_{N_2}\Vert_{L^{q_1}(\mu_\mathbb{D}elta)}\Vert h_{N_1}\Vert_{\mathcal B}\right)\\
\end{align*}
that can be rewritten
\begin{align*}
&\frac {(-1)^m}{(2\pi)^da_n^{d+2m}}\sum_{N_1,N_2\in\mathbb Z^d}
\int_{[-\beta a_n,\beta a_n]^d}e^{-iu\cdot(N_2-N_1)/a_n}\lambda_{u/a_n}^{n-m}(Au\cdot u\log(|a_n/u|))^m\mathbb E_{\mu_\Delta}[g_{N_2}\Pi_{u/a_n}h_{N_1}]\, du\\
&+O\left(\sum_{N_1,N_2\in\mathbb Z^d}\!\!\!\!\!\! a_n^{-d-2m}
\Vert g_{N_2}\Vert_{L^{q_1}(\mu_\Delta)}\Vert h_{N_1}\Vert_{\mathcal B}
\left(1+\int_{[-\beta a_n,\beta a_n]^d}\!\!\!\!\!\!\! \!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\! e^{-c_0\min(|u|^{2-\epsilon},|u|^{2+\epsilon})}|u|^{2m}(\log(|a_n/u|))^{m-1} du\right)\right)
\end{align*}
where we used \eqref{lambda} and \eqref{bornelambda}. Now writing
$\log(|a_n/u|)=\log a_n+O(\log|u|)$ and using \eqref{Pi}, we obtain that
$\int_\Omega \phi\circ(id-S)^m.\psi\circ S^n\, d\nu$ is equal to
\begin{align}
&
\label{cobfinal}
\sum_{N_1,N_2\in\mathbb Z^d}\!\!\!\!
\frac {(\log a_n)^m}{a_n^{d+2m}} J_{0,m,n,0,0}\mathbb E_{\mu_\mathbb{D}elta}[g_{N_2}]\mathbb E_{\mu_\mathbb{D}elta}[h_{N_1}]+O(a_n^{-d-2m}(\log a_n)^{m-1}
\sum_{N_1,N_2}\Vert g_{N_2}\Vert_{L^{q_2}(\mu_\mathbb{D}elta)}\Vert h_{N_1}\Vert_{\mathcal B}),
\end{align}
due to \eqref{diffJ1} with $\delta_0=\delta$, to $\Pi_0=\mathbb E_{\mu_\mathbb{D}elta}[\cdot]1_\mathbb{D}elta$ and to our summability assumptions.
This ends the proof of (A) since $\int_\Omega\phi\, d\nu=\sum_{N\in\mathbb Z^d}\mathbb E_{\mu_\mathbb{D}elta}[h_{N}]$ and $\int_\Omega\psi\, d\nu=\sum_{N\in\mathbb Z^d}\mathbb E_{\mu_\mathbb{D}elta}[g_{N}]$.
\item Assume the assumptions of (B).
Then, due to \eqref{FormulaInt} with $k=0$, we obtain
\begin{align*}
&\int_\Omega \phi.\psi\circ S^n\, d\nu
=\mathbb E_{\mu_\Delta}[h( 1_{\{\hat \kappa_n\circ f^k=N\}}+ 1_{\{\hat \kappa_n\circ f^k=-N\}}-2\times 1_{\{\hat \kappa_n\circ f^k=0\}})g\circ f^n]\\
&=
\frac 1{(2\pi)^da_n^d}\int_{[-\beta a_n,\beta a_n]^d}\!\!\!\!\! (e^{-iu\cdot N/a_n}+e^{iu\cdot N/a_n}-2)\lambda_{u/a_n}^n\mathbb E_{\mu_\Delta}[g\Pi_{u/a_n}h]\, du+O(\theta^n
\Vert g\Vert_{L^{q_1}(\mu_\Delta)}\Vert h\Vert_{\mathcal B})\, .
\end{align*}
Therefore $\int_\Omega \phi.\psi\circ S^n\, d\nu$ is equal to
\begin{align*}
&
\frac 1{(2\pi)^da_n^{d+2}}\int_{[-\beta a_n,\beta a_n]^d}(iu\cdot N)^2\lambda_{u/a_n}^n\mathbb E_{\mu_\Delta}[g\Pi_{u/a_n}h]\, du+O(a_n^{-d-4}
\Vert g\Vert_{L^{q_1}(\mu_\Delta)}\Vert h\Vert_{\mathcal B})\\
&=
\frac {1}{a_n^{d+2}}\mathbb E_{\mu_\Delta}[g]\mathbb E_{\mu_\Delta}[h] J_{2,0,n,0,N}
+O(a_n^{-d-2-\gamma}
\Vert g\Vert_{L^{q_1}(\mu_\Delta)}\Vert h\Vert_{\mathcal B})\, ,
\end{align*}
due to \eqref{Pi}.
We end the proof of (B) by using \eqref{relationIJ} combined with
\begin{equation}\label{I20N0}
I_{2,0,N}(0)=-\frac {A^{-1}N\cdot N}{\sqrt{(2\pi)^d\det(A)}}
,\quad I_{2,1,N}(0)=\frac {(d+2)A^{-1}N\cdot N}{\sqrt{(2\pi)^d\det(A)}}
\, .
\end{equation}
\end{itemize}
\end{proof}
\subsection{Decorrelation for the $\mathbb Z^d$-periodic billiard map}\label{resultsbill}
We consider the setting of Section~\ref{mainresults} with \eqref{H0}. Recall that the billiard map $T$ can be represented as the $\mathbb Z^d$-extension of $(\bar M,\bar T,\bar\mu)$ by a step function
$\kappa:\bar M\rightarrow \mathbb Z^d$ with $d\in\{1,2\}$ (the step function of the one-dimensional case being the first coordinate of the step function of the two-dimensional case), and that
there exists $\hat\kappa:\Delta\rightarrow\mathbb Z^d$ satisfying
$\hat\kappa\circ\mathfrak p_2=\kappa\circ\mathfrak p_1$.
In the following we treat these two models together.
We consider the Banach spaces $\mathcal B_0$ and $\mathcal B$ defined in Section
\ref{sec:Banach}.
As recalled in Section~\ref{sec:Banach}, with these choices, \eqref{spgap-Sz} and \eqref{spgap-Sz-bis} hold.
Since $\mathcal B_0$ is continuously embedded in $\mathcal B$,
\eqref{quasicompact} and \eqref{borneexpo} hold true with the Banach space $\mathcal B_0$ and for any $p_1\in [1,+\infty)$.
Moreover \eqref{lambda} has been proved in Proposition~\ref{prop-lambda} and,
due to Proposition~\ref{prop-expproj} (the fact that the symmetric matrix is invertible follows from the assumption of total dimension of the horizon),
\eqref{Pidiff} holds true (and hence so does \eqref{Pi} with any $\gamma\in(0,1]$) for $\mathcal B_0$ and any $p_0\in(1,\frac 43)$.
\begin{rem}
The second item of Proposition~\ref{LLT1} applies providing a mixing local limit theorem for $\mathbb E_{\mu}(\phi\mathbf 1_{\{ \kappa_n=N\}}\psi\circ \bar T^n)$ with $\phi$ and $\psi$ both constant on stable curves, with $\phi$ dynamically H\"older and $\psi\in L^{q_0}(\bar\mu)$. Corollary~\ref{CORO} applies analogously.
\end{rem}
Our goal is to prove Theorems \ref{THM0} and \ref{THM1} for general dynamically H\"older observables. To this end, we will use approximations by functions on $\mathbb{D}elta$ and Proposition \ref{LLT1} with $k=k(n)\rightarrow+\infty$.
Recall that we have defined
$\xi_k^{k'}$, $\xi_k^\infty$ before Theorem~\ref{THM0}.
For any $\phi:M \rightarrow \mathbb C$
or $\phi:\bar M\rightarrow \mathbb C$
and $-\infty< k\le k'\le \infty$, we define the local variation
$\omega_k^{k'}(\phi, x):= \sup_{y\in\xi_{k}^{k'}(x)}|\phi( x)-\phi(y)|$,
where $\xi_{k}^{k'}( x)$ is the element of $\xi_k^{k'}$
containing $x$.
We start
with a local limit theorem (generalizing Theorem~\ref{THM0}). Set $\kappa_n:=\sum_{k=0}^{n-1}\kappa\circ \bar T^k$.
\begin{theo}\label{LLT20}
\begin{itemize}
\item[(a)]
Let $(k(n))_n$ be a sequence of integers diverging to $+\infty$ such that $k(n)=O( a_n/\log n)$.
Let $N\in\mathbb Z^d$ and $\phi,\psi: \bar M\rightarrow \mathbb C$ be two bounded measurable functions,
then
\begin{align}\nonumber
&\int_{\bar M} \phi 1_{\{\kappa_n=N\}}\psi\circ \bar T^n\, d\bar\mu= \frac 1{a_n^d}
\left(I_0\left(\frac{N}{a_n}\right)-\frac 12 \, I_2\left(\frac{N}{a_n}\right)\frac{\log\log n}{\log n}
\right)\int_{\bar M}\phi\, d\bar\mu \int_{\bar M}\psi\, d\bar\mu\\
\label{error}&+
O\left(a_n^{-d}I_0\left(\frac{N}{2a_n}\right)
\left(\Vert\phi\Vert_{\infty}\Vert\omega_{-k(n)}^\infty(\psi,\cdot)\Vert_{L^1(\bar\mu)}+\Vert\psi\Vert_{\infty}\Vert\omega_{-k(n)}^{k(n)}(\phi,\cdot)\Vert_{L^1(\bar\mu)}\right)
+\frac {\Vert\phi\Vert_\infty\Vert\psi\Vert_\infty}{a_n^d\log n}\right)\, ,
\end{align}
uniformly in $N\in\mathbb Z^d$. In particular, if $\lim_{k\rightarrow +\infty} \Vert \omega_{-k}^k(\phi,\cdot)
+ \omega_{-k}^{\infty}(\psi,\cdot)\Vert_{L^1(\bar\mu)}=0$,
then the error term in \eqref{error} is in $o(a_n^{-d})$, and if
$\int_{\bar M}(\omega_{-k(n)}^{k(n)}(\phi,\cdot)
+ \omega_{-k(n)}^{\infty}(\psi,\cdot))\, d\bar\mu=O((\log n)^{-1})$,
then the error in \eqref{error} is in $O\left(\frac 1{a_n^d\log n}\right)$.\\
\item[(b)]
Assume that $k(n)=O(a_n^{\frac 12-u})$ for some $u\in(0,1)$
and that there exists $p_2>2$ such that, for $h\in\{\phi,\psi\}$,
$\|\omega_{-k(n)}^{k(n)}(h,\cdot)\|_\infty
=O((\log n)^{-1})$,
and $\|\omega_{-k(n)}^{k(n)}(h,\cdot)\|_{L^1(\bar\mu)}=O((a_n\log n)^{-1})$
and
$\Vert \omega_{-k(n)}^{k(n)}(h,\cdot)\Vert_{L^{p_2}(\bar\mu) }=o(k(n)^{-1}/\log n)\quad\mbox{and}\quad\sum_{j\ge k(n)}\Vert\omega_{-j}^{j}(h,\cdot)\Vert
_{L^{p_2}(\bar\mu)}=O((\log n)^{-1})$,
then
the numeric series
$A_+(\phi):=\sum_{j\ge 0}
\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}.\phi]$ and $A_-(\psi):=\sum_{j\le -1}
\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}.\psi]$ converge absolutely
and $\int_{\bar M} \phi 1_{\{\kappa_n=N\}}\psi\circ \bar T^n\, d\bar\mu$ is equal to
\begin{align}\label{LLTfinal0}
&
\frac 1{a_n^d}
\left(I_0\left(\frac{N}{a_n}\right)-\frac 12 I_2\left(\frac{N}{a_n}\right) \frac{\log\log n}{\log n} + O\left(\frac 1{\log n}\right)
\right)\mathbb E_{\bar\mu}[\phi]\mathbb E_{\bar\mu}[\psi]\\
\nonumber&+i\frac {\mathbb E_{\bar\mu}[\psi]
A_+(\phi)
+\mathbb E_{\bar\mu}[\phi]
A_-(\psi)
}{a_n^{d+1}}
\left(
I_{1}\left(\frac {N}{a_n}\right)
-\frac {I_3\left(\frac {N}{a_n}\right)}2 \frac{\log\log n}{\log n}\right)
+O\left(\frac{a_n^{-d-1}}{\log n}\right)\nonumber\, .
\end{align}
\end{itemize}
\end{theo}
\begin{proof}
For the first assertion, we use the first part of Proposition~\ref{LLT1}
with $p>2$ (close to 2), $p_1=\infty$ ($q_1=1$), $p_0\in(1,4/3)$ ($p_0<p$) and $q_0>4$ so that $\min(1,\frac p{q_1},\frac p{p_0})=1$.
Moreover $k/a_n
\ll (\log n)^{-1}$.
We assume from now on, without loss of generality, that $\phi,\psi$ take their values in
$\mathbb R$.
To simplify notations, we write $k=k(n)$.
We define $\phi^{(k)}$ and $\psi^{(k)}$ by
\begin{equation}\label{phikpsik}
\phi^{(k)}(\bar x):=\inf_{\xi_{-k}^k( x)} \phi_+-\sup_{\xi_{-k}^k( x)} \phi_-
\quad\mbox{and}\quad
\psi^{(k)}( \bar x):=\inf_{\xi_{-k}^\infty(x)} \psi_+ - \sup_{\xi_{-k}^\infty(x)} \psi_-
\, ,
\end{equation}
where $\phi_+=\max(\phi,0)$, $\psi_+=\max(\psi,0)$, $\phi_-=\max(-\phi,0)$, $\psi_-=\max(-\psi,0)$.
Observe that, for every $x\in\bar M$,
$|\phi^{(k)}(x)|\le|\phi(x)|$ and $|\psi^{(k)}(x)|\le|\psi(x)|$,
\begin{equation}\label{0MA1}
\left|\phi^{(k)}( x)- \phi(x)\right|\le \omega_{-k}^k(\phi,x)\le 2\Vert \phi\Vert_\infty
\quad\mbox{and}\quad
\left|\psi^{(k)}( x)- \psi(x)\right|\le \omega_{-k}^\infty(\psi,x)\le 2\Vert\psi\Vert_\infty.
\end{equation}
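Purely as an illustration of the construction \eqref{phikpsik} and of the second set of bounds in \eqref{0MA1}, the following minimal Python sketch uses a dyadic partition of $[0,1)$ as a stand-in for the cells $\xi_{-k}^k$ and an arbitrary test function taking both signs; nothing below refers to the actual billiard phase space, and the availability of numpy is assumed.
\begin{verbatim}
import numpy as np

# Toy version of phi^{(k)}: dyadic cells of mesh 2^{-k} play the role of
# xi_{-k}^k(x); phi is an arbitrary continuous test function.
phi = lambda x: np.sin(7.0 * x) - 0.2
k = 8
xs = np.linspace(0.0, 1.0, 100000, endpoint=False)
cells = np.floor(xs * 2**k).astype(int)           # dyadic cell index of each x
vals = phi(xs)
vplus, vminus = np.maximum(vals, 0.0), np.maximum(-vals, 0.0)

inf_plus  = np.array([vplus[cells == c].min()  for c in range(2**k)])
sup_minus = np.array([vminus[cells == c].max() for c in range(2**k)])
phi_k = inf_plus[cells] - sup_minus[cells]        # the approximation phi^{(k)}

sup_cell = np.array([vals[cells == c].max() for c in range(2**k)])[cells]
inf_cell = np.array([vals[cells == c].min() for c in range(2**k)])[cells]
omega = np.maximum(sup_cell - vals, vals - inf_cell)   # omega_{-k}^{k}(phi, x)

print(np.all(np.abs(phi_k - vals) <= omega + 1e-12))   # True, as in (0MA1)
print(np.abs(phi_k - vals).mean())                     # L^1 error, small for large k
\end{verbatim}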
Since $\phi^{(k)}\circ \bar T^k$ and $\psi^{(k)}\circ \bar T^k$ are constant
on the stable curves, there exist $\tilde\phi^{(k)},\tilde\psi^{(k)}:\mathbb{D}elta\rightarrow\mathbb C$ such that $\tilde\phi^{(k)}\circ \mathfrak p_2=\phi^{(k)}\circ \bar T^k\circ\mathfrak p_1$,
$\tilde\psi^{(k)}\circ \mathfrak p_2=\psi^{(k)}\circ \bar T^k\circ\mathfrak p_1$.
Observe that $\Vert \tilde\psi^{(k)}\Vert_{\infty}\le
\Vert \psi^{(k)}\Vert_\infty$.
Note that $\tilde\phi^{(k)}:\mathbb{D}elta\rightarrow\mathbb C$
is constant on balls of the form $\{y\in\mathbb{D}elta\, :\, s(x,y)\ge 2k\}$ for every $x\in\mathbb{D}elta$.
This will be useful to show that
\begin{equation}\label{PukPuphi}
\sup_u\Vert P_u^kP^k\tilde\phi^{(k)}\Vert_{\mathcal B_0}\ll\Vert\phi^{(k)}\Vert_\infty\, .
\end{equation}
To this end, due to \eqref{formulaP},
for every $x_1,x_2\in\mathbb{D}elta$ such that $s_0(x_1,x_2)\ge 1$
\begin{align*}
&\left |P_u^kP^k\tilde\phi^{(k)}(x_1)-P_u^kP^k\tilde\phi^{(k)}(x_2)\right|\\
&\le \sum_{(y_1,y_2)\, :\, s(y_1,y_2)\ge 2k+1,\ f^{2k}(y_i)=x_i}
\left| e^{-\alpha_{2k}(y_1)+iu\hat\kappa_k(y_1)}\tilde\phi^{(k)}(y_1)-
e^{-\alpha_{2k}(y_2)+iu\hat\kappa_k(y_2)}\tilde\phi^{(k)}(y_2) \right|\\
&= \sum_{(y_1,y_2)\, :\, s(y_1,y_2)\ge 2k+1,\ f^{2k}(y_i)=x_i}
\left| \left(e^{-\alpha_{2k}(y_1)}-
e^{-\alpha_{2k}(y_2)}\right)e^{iu\hat\kappa_k(y_1)}\tilde\phi^{(k)}(y_1) \right|\, ,
\end{align*}
where we used the notation $\alpha_{m}:=\sum_{k=0}^{m-1}\alpha\circ f^k$.
We end the proof of \eqref{PukPuphi} by noticing that
\begin{align*}
\left|e^{-\alpha_{2k}(y_1)}-e^{-\alpha_{2k}(y_2)}\right|&\le
(e^{-\alpha_{2k}(y_1)}+e^{-\alpha_{2k}(y_2)})\left|\alpha_{2k}(y_1)-\alpha_{2k}(y_2)\right|\\
&\le (e^{-\alpha_{2k}(y_1)}+e^{-\alpha_{2k}(y_2)})\sum_{m=0}^{2k-1} C_\alpha \beta^{s(y_1,y_2)+1-m}\\
&\le (e^{-\alpha_{2k}(y_1)}+e^{-\alpha_{2k}(y_2)})C_\alpha \beta^{s(x_1,x_2)}(1-\beta)^{-1}\, .
\end{align*}
Applying the first item of Proposition
\ref{LLT1} to the Banach space $\mathcal B_0$ and to the couples
$(h,g)=(\tilde\phi^{(k)},\tilde\psi^{(k)})$ with $q_1>2$, we obtain that
\begin{align}
\nonumber&\int_{\bar M} \phi^{(k)}1_{\{\kappa_n=N\}}\psi^{(k)} \circ \bar T^n \, d\bar\mu\\
\label{AAAA1}&=\frac 1{a_n^d}
\left(I_0\left(\frac{N}{a_n}\right)-I_2\left(\frac{N}{a_n}\right) \frac{\log\log n}{2\, \log n } \right)\int_{\bar M} \phi^{(k)}\, d\bar\mu\, \int_{\bar M} \psi^{(k)}\, d\bar\mu
+O\left(\frac{ \Vert \psi\Vert_{
\infty
}\Vert \phi\Vert_\infty}{(a_n)^{d}\, \log n }\right)\, ,
\end{align}
where we used the fact that $|\psi^{(k)}|\le |\psi|$, $|\phi^{(k)}|\le |\phi|$ and
$k/a_n\ll (\log n)^{-1}$.
Moreover
\begin{equation}\label{diffint}
\left|\int_{\bar M} \phi\, d\bar\mu\, \int_{\bar M} \psi\, d\bar\mu-
\int_{\bar M} \phi^{(k)}\, d\bar\mu\, \int_{\bar M} \psi^{(k)}\, d\bar\mu\right|
\le \Vert \psi\Vert_{\infty}\Vert\omega_{-k}^k(\phi,\cdot)\Vert_{L^1(\bar\mu)}+\Vert \phi\Vert_{\infty}\Vert\omega_{-k}^{\infty}(\psi,\cdot)\Vert_{L^1(\bar\mu)}\,
\end{equation}
and, setting $\phi^{(k,+)}(x):=\sup_{\xi_{-k}^k( x)} |\phi|$ and
$\psi^{(k,+)}( x):=\sup_{\xi_{-k}^\infty(x)} |\psi|$,
\begin{align}
&\int_{\bar M}1_{\{\kappa_n=N\}}|\phi.\psi\circ \bar T^n-\phi^{(k)}.\psi^{(k)}\circ \bar T^n|\, d\bar\mu\nonumber\\
&\le
\int_{\bar M}1_{\{\kappa_n=N\}}\left(|\phi|.|\psi-\psi^{(k)}|\circ \bar T^n+|\phi-\phi^{(k)}|.|\psi^{(k)}|\circ\bar T^n\right)\, d\bar\mu\nonumber\\
&\le
\int_{\bar M}1_{\{\kappa_n=N\}}\left(\phi^{(k,+)}.\omega_{-k}^{+\infty}(\psi,\cdot)\circ \bar T^n+\omega_{-k}^{k}(\phi,\cdot).\psi^{(k,+)}\circ \bar T^n\right)\, d\mu\label{0majoint}
\end{align}
which is in
\begin{align*}
&O\left(\ I_0\left(\frac{N}{2a_n}\right)a_n^{-d}\left(|\bar\mu(\phi^{(k,+)})| \bar\mu(\omega_{-k}^{\infty}(\psi,\cdot))+\bar\mu(\omega_{-k}^{k}(\phi,\cdot))\, |\bar\mu(\psi^{(k,+)})|\right)
+\frac{\log\log n}{(a_n)^d\log n}\Vert\phi\Vert_\infty\Vert\psi\Vert_\infty\right) \nonumber
\end{align*}
applying \eqref{AAAA1} with $\phi,\psi$ replaced respectively by $(\phi^{(k,+)},\omega_{-k}^{\infty}(\psi))$ and by $(\omega_{-k}^k(\phi),\psi^{(k,+)})$,
and using $|I_0(x)+I_2(x)|\ll I_0(x/2)$.
This ends the proof of \eqref{error}.
Assume now the assumptions of (b)
and let us prove \eqref{LLTfinal0}.
We replace \eqref{phikpsik} by
\begin{equation}\label{phikpsik2}
\phi^{(k)}(\bar x):=\mathbb E_{\bar\mu}[\phi|\xi_{-k}^k( x)]
\quad\mbox{and}\quad
\psi^{(k)}(\bar x):=\mathbb E_{\bar\mu}[\psi|\xi_{-k}^k( x)]
\, .
\end{equation}
Observe that
$\mathbb E_{\bar\mu}[\phi^{(k)}]=\mathbb E_{\bar\mu}[\phi]$,
$\mathbb E_{\bar\mu}[\psi^{(k)}]=\mathbb E_{\bar\mu}[\psi]$ and
\[
\mathbb E_{\mu_\mathbb{D}elta}[\hat\kappa_k P^k \tilde\phi^{(k)}]=\mathbb E_{\bar\mu}[\kappa_k\circ \bar T^k \phi^{(k)}\circ \bar T^k]=\mathbb E_{\bar\mu}[\kappa_k \phi^{(k)}]
=\mathbb E_{\bar\mu}[\kappa_k \phi]\, ,
\]
due to \eqref{phikpsik2} since $\kappa_k$ is $\xi_{-k}^k$-measurable
and similarly,
setting $\kappa_{-k}:=\sum_{m=-k}^{-1}\kappa\circ \bar T^{m}$,
\[
\mathbb E_{\mu_\mathbb{D}elta}[\hat\kappa_k \tilde\psi^{(k)}]=\mathbb E_{\bar\mu}[\kappa_k \psi^{(k)}\circ \bar T^k]=\mathbb E_{\bar\mu}[\kappa_k\circ \bar T^{-k} \psi^{(k)}]=\mathbb E_{\bar\mu}[\kappa_{-k} \psi^{(k)}]
=\mathbb E_{\bar\mu}[\kappa_{-k} \psi].
\]
To prove \eqref{LLTfinal0},
we apply the second item of Proposition \ref{LLT1} to the Banach space $\mathcal B_0$ and to the couples
$(h,g)=(\tilde\phi^{(k)},\tilde\psi^{(k)})$ with $p<2$ (close to 2), $q_0>4$ (large), $p_1=\infty$, $q_1=1$ so that
the condition $k(n)\le C a_n^{\frac{\tilde\gamma}{1+\tilde\gamma}}/(\log n)^{1/(1+\tilde\gamma)}$ of Proposition~\ref{LLT1}
holds. We obtain that
\begin{equation}
\label{BBB1}
\int_{\bar M} \phi^{(k)}1_{\{\kappa_n=N\}}\psi^{(k)} \circ \bar T^n \, d\bar\mu
=J_1+J_2+J_3+O\left(\frac{\Vert \psi\Vert_{L^{q_0}(\bar\mu)}\Vert \phi\Vert_\infty}{a_n^{d+1}\log n}\right)
\end{equation}
with
\[
J_1:=\frac 1{a_n^d}
\left(I_0\left(\frac{N}{a_n}\right)-\frac 12 I_2\left(\frac{N}{a_n}\right) \frac{\log\log n}{\log n} + O\left(\frac 1{\log n}\right)
\right)
\mathbb E_{\bar\mu}[\phi]\mathbb E_{\bar\mu}[\psi]\, ,
\]
\begin{align*}
J_2&:=\frac {
C_k(\tilde\psi^{(k)},\tilde\phi^{(k)})}{(a_n)^{d+1}}\left(I_{1}\left(\frac {N}{a_n}\right)-\frac {I_3\left(\frac {N}{a_n}\right)}2 \frac{\log\log n}{\log n}\right)
+O(a_n^{-d-1}(\log n)^{-1})\, ,
\end{align*}
\[
J_3:=O\left(\frac {1 }{(2\pi )^d(a_n)^{d+1}}\int_{[-\beta a_n,\beta a_n]^d}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! |u|e^{-\frac{c_0\min(|u|^{2-\epsilon},|u|^{2+\epsilon})}2} |\mathbb E_{\mu_\Delta}[\tilde\psi^{(k)}\Pi'_0(P_{u/a_n}^kP^k\tilde\phi^{(k)}-P^{2k}\tilde\phi^{(k)})]|\, du\right)\, .
\]
Note that $\phi=\phi_0+\bar\mu(\phi)$ and $\psi=\psi_0+\bar\mu(\psi)$, with $\phi_0=\phi-\bar\mu(\phi)$ and $\psi_0=\psi-\bar\mu(\psi)$.
The rest of the proof reduces to estimating $J_2$ and $J_3$.
To study these two terms,
we shall exploit the expression of $\Pi'_0$ obtained in Propositions~\ref{prop-expproj} and~\ref{prop-expproj-base}.
For $J_3$ we exploit that
\begin{equation}
\label{eq-blabla2}
\Pi'_0
w=\mathbb E_{\bar\mu}[w]H+i\sum_{j\ge 0}
\mathbb E_{\mu_\Delta}[\hat\kappa\circ f^{j}w]1_\Delta\, ,
\end{equation}
with
$\displaystyle
H(x):=Q'_0(1_Y)\circ\pi_0(x)+i\sum_{m=0}^{\omega(x)-1}\hat\kappa\circ f^m\circ\pi_0(x)+\frac i{\mu_\Delta(Y)}\sum_{j\ge 0}\mathbb E_{\mu_\Delta}[\hat\kappa 1_Y\circ f^{j+1}]$.
Writing $t$ for $u/a_n$ in the expression of $J_3$ and using the above formula for $\Pi'_0$ we note that
\begin{align}
\label{eq-blabla1}
&\left|\mathbb E_{\mu_\Delta}\left[\tilde\psi^{(k)}
\Pi'_0(P_t^{k}P^k-P^{2k})\tilde\phi^{(k)}\right]\right|\\
\nonumber &=\left\vert\mathbb E_{\bar\mu}[(e^{it\kappa_k}-1)\phi^{(k)}]\mathbb E_{\mu_\Delta}\left[\tilde\psi^{(k)} H\right]+i\mathbb E_{\bar\mu}[\psi^{(k)}]\sum_{j\ge 0}
\nonumber \mathbb E_{\mu_\Delta}[\hat\kappa P^j(P_t^kP^k-P^{2k})
\tilde\phi^{(k)}
]\right\vert\\
&\le |t| k\Vert\kappa\Vert_{L^1(\bar\mu)} \Vert \phi\Vert_\infty\Vert\psi\Vert_\infty \Vert H\Vert_{L^1(\bar\mu)}+\Vert\psi\Vert_\infty E_k
\, ,
\end{align}
\begin{align*}
E_k
&=\sum_{j= 0}^{
\log n-1}\left|\mathbb E_{\bar\mu}[\kappa\circ \bar T^{k+j}(e^{it\kappa_k}-1)\phi^{(k)}]\right|+\sum_{j\ge
\log n}
\Vert\hat\kappa\Vert_{L^{q_1}}\theta^j\Vert (P_t^kP^k-P^{2k})
\tilde\phi^{(k)}\Vert_{\mathcal B_0}\\
&
\ll \sum_{j= 0}^{
\log n-1} \Vert \kappa\circ \bar T^{k+j}(t\kappa_k)^{p-1}
\Vert_{L^1(\bar\mu)} \Vert\phi\Vert_\infty+\sum_{j\ge
\log n}
\theta^j\Vert\phi\Vert_\infty
\, ,
\end{align*}
\begin{align*}
E_k&\ll
\log n
\Vert \kappa\Vert_{L^{p}(\bar\mu)}\Vert t\kappa_k
\Vert^{p-1}_{L^{p}(\bar\mu)} \Vert\phi\Vert_\infty+
\theta^{\log n}\Vert\phi\Vert_\infty\ll
(|t k|^{p-1}\log n +\theta^{\log n})
\Vert\phi\Vert_\infty\, .
\end{align*}
This together with equation~\eqref{eq-blabla1} and $k(n)\ll a_n^{\frac 12-u}$ implies that $J_3=O(a_n^{-d-1}/\log n)$.
The rest of the proof is devoted to the study of $J_2$.
We will
use
\[
\mathbb E_{\mu_\Delta}\left[\tilde\psi^{(k)}
\Pi'_0P^{2k}\tilde\phi^{(k)}\right]=\mathbb E_{\bar\mu}[\phi^{(k)}]\mathbb E_{\mu_\Delta}\left[\tilde\psi^{(k)} H\right]+i\mathbb E_{\bar\mu}[\psi^{(k)}]\sum_{j\ge 0}
\mathbb E_{\bar\mu}[\kappa\circ \bar T^{k+j}\phi^{(k)}]\, ,
\]
obtained from~\eqref{eq-blabla2} with $w=\tilde\phi^{(k)}$, after multiplication by $\tilde\psi^{(k)}$ and integration. Thus
\begin{align}\label{intermediate}
J_2&=\frac {\mathfrak C_{k(n)}(\phi,\psi)}{(a_n)^{d+1}}
\left(I_{1}\left(\frac N{a_n}\right)-\frac {I_3\left(\frac N{a_n}\right)}2 \frac{\log\log n}{\log n}\right)
+O\left(a_n^{-d-1}/\log n\right)\, ,
\end{align}
with $\mathfrak C_k(\phi,\psi):=C_k(\tilde\psi^{(k)},\tilde\phi^{(k)})$, that is
\begin{align}
\label{eq:Ck}
\mathfrak C_k(\phi,\psi)&=\mathbb E_{\mu_\Delta}[\tilde \psi^{(k)}\Pi'_0P^{2k}\tilde\phi^{(k)}]+ i\mathbb E_{\bar\mu}[\kappa_{-k} \psi^{(k)}]\mathbb E_{\bar\mu}[\phi^{(k)}]+i\mathbb E_{\bar\mu}[ \psi^{(k)}] \mathbb E_{\bar\mu}[\kappa_k\phi^{(k)}]\\
&=
\mathbb E_{\bar\mu}[\phi]
\left(\mathbb E_{\mu_\mathbb{D}elta}\left[\tilde\psi^{(k)} H\right]+i\mathbb E_{\bar\mu}[\kappa_{-k} \psi^{(k)}]\right)+i
\mathbb E_{\bar\mu}[\psi]
\sum_{j\ge 0}
\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}\phi^{(k)}]\, .\nonumber
\end{align}
Thus
\begin{align}\label{intermediate2}
&\int_{\bar M} \phi^{(k)}1_{\{\kappa_n=N\}}\psi^{(k)} \circ \bar T^n \, d\bar\mu\\
\nonumber&=\frac 1{a_n^d}
\left(I_0\left(\frac{N}{a_n}\right)-\frac 12 I_2\left(\frac{N}{a_n}\right) \frac{\log\log n}{\log n} + O\left(\frac 1{\log n}\right)
\right)\mathbb E_{\bar\mu}[\phi]\mathbb E_{\bar\mu}[\psi]\\
\nonumber&+ \frac {\mathfrak C_{k(n)}(\phi,\psi)}{(a_n)^{d+1}}
\left(I_{1}\left(\frac N{a_n}\right)-\frac {I_3\left(\frac N{a_n}\right)}2 \frac{\log\log n}{\log n}\right) +O(a_n^{-d-1}/\log n)\, .
\end{align}
In particular, since $|\psi^{(k)}|\le\psi^{(k,+)}\le |\psi|+\omega_{-k}^k(\psi,\cdot)$ with $\psi^{(k,+)}(x):=\sup_{y\in\xi_{-k}^k(x)}|\psi(y)|$,
\begin{align*}
&\left|\int_{\bar M} 1_{\{\kappa_n=N\}}(\phi.\psi \circ \bar T^n-\phi^{(k)}.\psi^{(k)}\circ\bar T^n) \, d\bar\mu\right|\\
&\le
\int_{\bar M} 1_{\{\kappa_n=N\}}\left(\left|\phi.\omega_{-k}^k(\psi,\cdot) \circ \bar T^n\right|+\left|\omega_{-k}^k(\phi,\cdot).\psi^{(k,+)}\circ\bar T^n\right|\right) \, d\bar\mu
\nonumber
\end{align*}
which is dominated, up to a multiplicative constant by
\begin{align*}
\nonumber&\ll\frac {\Vert\phi\Vert_\infty\Vert\omega_{-k}^k(\psi,\cdot)\Vert_{L^1(\bar\mu)}+\Vert\psi^{(k,+)}\Vert_\infty\Vert\omega_{-k}^k(\phi,\cdot)\Vert_{L^1(\bar\mu)}}{a_n^d} \\
\nonumber&+ \frac {\mathfrak C_{k(n)}(|\phi|,\omega_{-k}^k(\psi,\cdot))
+\mathfrak C_{k(n)}(\omega_{-k}^k(\phi,\cdot),|\psi^{(k,+)}|))
}{(a_n)^{d+1}}
+O(a_n^{-d-1}/\log n)\\
&= O(a_n^{-d-1}/\log n)\, ,
\end{align*}
since for $(h,g)=(|\phi|,\omega_{-k}^k(\psi,\cdot))$ or $(h,g)=(\omega_{-k}^k(\phi,\cdot),\psi^{(k,+)})$,
\[
\mathfrak C_{k(n)}(h,g)\ll \| g\|_\infty \|h\|_\infty + k(n)(\|g\|_{L^{p_2}(\bar\mu)}
\|h\|_{L^1(\bar\mu)}+\|h\|_{L^{p_2}(\bar\mu)}
\|g\|_{L^1(\bar\mu)})\, ,
\]
(since $\kappa\in L^{p_2/(p_2-1)}(\bar\mu)$)
and using our assumptions.
Therefore
\begin{align}\label{intermediate3}
&\int_{\bar M} \phi 1_{\{\kappa_n=N\}}\psi \circ \bar T^n \, d\bar\mu\\
\nonumber&=\frac 1{a_n^d}
\left(I_0\left(\frac{N}{a_n}\right)-\frac 12 I_2\left(\frac{N}{a_n}\right) \frac{\log\log n}{\log n} + O\left(\frac 1{\log n}\right)
\right)\int_{\bar M} \phi\, d\bar\mu\, \int_{\bar M} \psi\, d\bar\mu\\
\nonumber&+ \frac {\mathfrak C_{k(n)}(\phi,\psi)}{(a_n)^{d+1}}
\left(I_{1}\left(\frac N{a_n}\right)-\frac {I_3\left(\frac N{a_n}\right)}2 \frac{\log\log n}{\log n}\right) +O(a_n^{-d-1}/\log n)\, .
\end{align}
Note that $\phi=\phi_0+\bar\mu(\phi)$ and $\psi=\psi_0+\bar\mu(\psi)$, with $\phi_0=\phi-\bar\mu(\phi)$ and $\psi_0=\psi-\bar\mu(\psi)$. So
\begin{align}\label{decompdecor}
&\int_{\bar M} \phi 1_{\{\kappa_n=N\}}\psi \circ \bar T^n \, d\bar\mu\\
\nonumber&=\int_{\bar M} \phi_0 1_{\{\kappa_n=N\}}\psi\circ \bar T^n \, d\bar\mu
+\bar\mu(\phi)\int_{\bar M} 1_{\{\kappa_n=N\}}\psi_0 \circ \bar T^n \, d\bar\mu
+\bar\mu(\phi)\bar\mu(\psi)\bar\mu(\kappa_n=N)\, .
\end{align}
Applying \eqref{intermediate3} with $(1_{\bar M},1_{\bar M})$ instead of $(\phi,\psi)$ leads to
\begin{equation}\label{decor11}
\bar\mu(\kappa_n=N)=\frac 1{a_n^d}
\left(I_0\left(\frac{N}{a_n}\right)-\frac 12 I_2\left(\frac{N}{a_n}\right) \frac{\log\log n}{\log n} + O\left(\frac 1{\log n}\right)
\right) +O(a_n^{-d-1}/\log n)\, ,
\end{equation}
since $\mathbb E_{\mu_\mathbb{D}elta}[\Pi'_01]=0$ (see Remark~\ref{rem:derivopi}).
Now let us study $\int_{\bar M} \phi_0 1_{\{\kappa_n=N\}}\psi\circ \bar T^n \, d\bar\mu$.
In what follows we show that $\mathfrak C_{k(n)}(\phi_0,\psi)$
converges as $n\rightarrow +\infty$ with rate in $O((\log n)^{-1})$. Note that
\begin{equation}\label{int0}
\mathfrak C_{k(n) }(
\phi_0
,\psi)=i\mathbb E_{\bar\mu}[\psi]\sum_{j\ge 0}
\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}\phi^{(k(n))}]\, .
\end{equation}
Let us prove the two following estimates
\begin{equation}\label{limit}
\left\varepsilonrt
\sum_{j\ge 0}
\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}\phi^{(k(n))}]-
\sum_{j\ge 0}
\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}\phi]\right|=O((\log n)^{-1})\, ,
\end{equation}
\begin{equation}\label{finitesum}
\sum_{j\ge 0}\left|
\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}\phi]\right|<\infty\, .
\end{equation}
\begin{itemize}
\item \textbf{Proof of~\eqref{finitesum}.} First, for
every integers $j,k_j\ge 0$, we have
\begin{align*}
\left|\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}\phi]\right|
&\le
|\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}\phi^{(k_j)}]|+
|\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}(\phi-\phi^{(k_j)})]|\\
&\le|\mathbb E_{\mu_\Delta}[\hat\kappa P^{j+k_j}(\tilde\phi^{(k_j)}-\mu_\Delta(\tilde\phi^{(k_j)}))]|+\Vert \kappa\Vert_{L^{\frac {p_2}{p_2-1}}(\bar\mu)}
\Vert\omega_{-k_j}^{k_j}(\phi,\cdot)\Vert_{L^{p_2}(\bar\mu)}\, .
\end{align*}
Now using the fact that $\Vert P^{2k_j}\tilde\phi^{(k_j)}-\mu_\Delta(\tilde\phi^{(k_j)})\Vert_{\mathcal B_0}\ll\Vert\phi\Vert_\infty$ (which comes from \eqref{PukPuphi}) and using \eqref{spgap-Sz-bis} for $t=0$, we conclude that
\begin{equation}\label{majo}
\left|\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}\phi]\right|\le C \Vert \hat\kappa\Vert_{L^{q_1
}(\mu_\Delta)}\theta^{j-k_j}\Vert\phi\Vert_\infty+\Vert \kappa\Vert_{L^{\frac {p_2}{p_2-1}}(\bar\mu)}\Vert\omega_{-k_j}^{k_j}(\phi,\cdot)\Vert_{L^{p_2}(\bar\mu)}\, .
\end{equation}
Taking $k_j=\lfloor j/2\rfloor$,
we obtain
\begin{equation}
\sum_{j\ge 0} \left|\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}\phi]\right|\ll \sum_{j\ge 0} \left(\Vert \hat\kappa\Vert_{L^{\frac p{p-1}}}\theta^{ j/2}\Vert\phi\Vert_\infty+\Vert\omega_{-\lfloor j/2\rfloor}^{\lfloor j/2\rfloor}(\phi,\cdot)\Vert_{L^{p_2}(\bar\mu)}\right)<\infty\, .
\end{equation}
This ends the proof of \eqref{finitesum}.
\item
\textbf{Proof of~\eqref{limit}}. Recall that $\|\mathbb E_{\bar\mu}[\psi^{(k)}-\psi]\|
\le\Vert\omega_{-k}^{k}(\psi,\cdot)\Vert_{L^1(\bar\mu)}=O((\log n)^{-1})$.
Observe that, due to \eqref{majo} with $k_j:=\lfloor j/2\rfloor$ and $k=k(n)$,
\begin{align*}
&\sum_{j\ge 0}
\left|\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}(\phi^{(k)}-\phi)]\right|
=\sum_{j=0}^ {2k-1}\left|\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}(\phi^{(k)}-\phi)]\right|+\sum_{j\ge 2k} \left|\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}(\phi^{(k)}-\phi)]\right|\\
&\le
2k \Vert \kappa\Vert_{L^{\frac{p_2}{p_2-1}}(\bar\mu)}
\Vert \omega_{-k}^k(\phi,\cdot)\Vert_{L^{p_2}(\bar\mu) }\\
&\ \ \ \ \ \ \ \
+\sum_{j\ge 2k}\left(C
\Vert \hat\kappa\Vert_{L^{q_1
}(\mu_\mathbb{D}elta)}\theta^{j/2}2\Vert\phi\Vert_\infty+\Vert \kappa\Vert_{L^{\frac {p_2}{p_2-1}}(\bar\mu)}\Vert\omega_{-\lfloor j/2\rfloor}^{\lfloor j/2\rfloor}(\phi,\cdot)\Vert_{L^{p_2}(\bar\mu)}\right)\, ,
\end{align*}
which is $O((\log n)^{-1})$
due to our assumptions.
This ends the proof of~\eqref{limit}.
\end{itemize}
Applying \eqref{int0} combined with \eqref{limit} and \eqref{finitesum}, we obtain
\begin{align}\label{DDD1}
\mathfrak C_k(\phi_0,\psi)=i\mathbb E_{\bar\mu}[\psi]\sum_{j\ge 0}\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}\phi]
+O((\log n)^{-1})\, ,
\end{align}
and thus, due to \eqref{intermediate3},
\begin{equation}\label{phi0psi}
\int_{\bar M} \phi_0 1_{\{\kappa_n=N\}}\psi \circ \bar T^n \, d\bar\mu
= \frac {i\mathbb E_{\bar\mu}[\psi]\sum_{j\ge 0}\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}\phi]}{(a_n)^{d+1}}\tilde I_1\left(n,\frac N{a_n}\right)
+O\left(\frac{a_n^{-d-1}}{\log n}\right)\, ,
\end{equation}
with
$\tilde I_1(n,x):=
I_{1}\left(x\right)-\frac {I_3\left(x\right)}2 \frac{\log\log n}{\log n}$.
Now let us prove that a similar formula holds for $(1,\psi_0)$, where $\psi_0=\psi-\bar\mu(\psi)$ thanks to time reversibility.
Let $\tau:\bar M\rightarrow\bar M$ be
given by $\tau(q,\vec v)=(q,2(\vec n_q,\vec v)\vec n_q-\vec v)$ where $\vec n_q$
is the unit normal vector to $\partial Q$ pointing into $Q$. Observe that $\tau$ preserves $\bar\mu$ and satisfies the following relations:
$\tau\circ\tau=id$, $\tau\circ\bar T^n\circ \tau=\bar T^{-n}$ and $\kappa\circ\bar T^k\circ\tau=-\kappa\circ \bar T^{-k-1}$.
In particular, $\kappa_n\circ \tau\circ \bar T^n=-\kappa_n$.
Therefore
\begin{align*}
&\int_{\bar M}
1_{\{\kappa_n=N\}}\psi_0\circ \bar T^n\, d\bar\mu
=\int_{\bar M}
1_{\{\kappa_n\circ \tau=N\}}\psi_0\circ \bar T^n\circ\tau\, d\bar\mu\\
&=\int_{\bar M}
1_{\{\kappa_n\circ \tau=N\}}\psi_0\circ\tau\circ \bar T^{-n}\, d\bar\mu
=\int_{\bar M}
1_{\{\kappa_n\circ \tau\circ \bar T^n=N\}}\psi_0\circ\tau \, d\bar\mu
\\
&=\int_{\bar M}\psi_0\circ\tau1_{\{\kappa_n=-N\}}
\, d\bar\mu\, .
\end{align*}
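The elementary algebraic properties of the reflection defining $\tau$ that were used above can be checked numerically; the following minimal Python sketch (with an arbitrary unit normal and unit velocity, unrelated to any specific billiard table, and assuming numpy is available) verifies that the reflection is an involution and preserves the speed. It is only a sanity check, not part of the proof.
\begin{verbatim}
import numpy as np

# Check that v -> 2 (n.v) n - v is an involution and preserves |v|.
rng = np.random.default_rng(0)
theta_n, theta_v = rng.uniform(0, 2 * np.pi, 2)
n = np.array([np.cos(theta_n), np.sin(theta_n)])   # unit normal n_q
v = np.array([np.cos(theta_v), np.sin(theta_v)])   # unit velocity

refl = lambda w: 2 * np.dot(n, w) * n - w
print(np.allclose(refl(refl(v)), v))               # involution: tau o tau = id
print(np.isclose(np.linalg.norm(refl(v)), 1.0))    # the speed is preserved
\end{verbatim}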
Observe that
$\omega_{-k}^k(\psi_0\circ\tau,x)=\omega_{-k}^k(\psi_0,\tau(x))$ and so,
the composition by $\tau$ preserves the $L^r$ norms of $\omega_{-k}^k$.
Thus, applying \eqref{phi0psi} to the couple $(\psi_0\circ \tau,1)$ instead of $(\phi_0,\psi)$, we obtain
\begin{align*}
&\int_{\bar M} 1_{\{\kappa_n=N\}}\psi_0 \circ \bar T^n \, d\bar\mu
= \frac {i\sum_{j\ge 0}\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}\psi\circ\tau]}{(a_n)^{d+1}}\tilde I_1\left(n,-\frac N{a_n}\right)
+O\left(\frac{a_n^{-d-1}}{\log n}\right)\\
&= \frac {i\sum_{j\le -1}\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}\psi]}{(a_n)^{d+1}}
\tilde I_1\left(n,\frac N{a_n}\right)+O\left(\frac{a_n^{-d-1}}{\log n}\right)\, ,
\end{align*}
since $\mathbb E_{\bar\mu}[\kappa\circ \bar T^{j}\psi\circ\tau]=-\mathbb E_{\bar\mu}[\kappa\circ \bar T^{-j-1}\psi]$
and $-\tilde I_1(n,-x)=\tilde I_1(n,x)$.
The claimed estimate follows from
this last estimate combined with~\eqref{decompdecor}, \eqref{decor11} and
\eqref{phi0psi}.
\end{proof}
For all $N\in\mathbb Z^d$, we set $\phi_N:=\phi 1_{M_N}$.
Observe that $\omega_{-k}^\infty (h,\cdot)\le\omega_{-k}^k(h,\cdot)\le L_{h,\eta}\eta^k$. Therefore, Theorem~\ref{THM0} (resp. Theorem~\ref{THM1}) is a direct consequence of the previous (resp. following) theorem (for Theorem~\ref{THM1}
we also use the remark just after its statement).
\begin{theo}\label{LLT2}
Let $(k(n))_n$ be a sequence of integers diverging to $+\infty$ such that $k(n)=O( a_n/\log n)$.
Let $\phi,\psi: M\rightarrow \mathbb C$ be two measurable functions.
\begin{itemize}
\item[(I)] If
$\sum_{N\in\mathbb Z^d}(\Vert \phi_N\Vert_\infty+\Vert \psi_N\Vert_\infty)<\infty$ and
$\lim_{k\rightarrow +\infty}
\int_M(\omega_{-k}^k(\phi,\cdot)
+ \omega_{-k}^{\infty}(\psi,\cdot))\, d\mu=0$,
then
\begin{equation}\label{MIXBILL}
\int_M \phi.\psi\circ T^n\, d\mu= \frac{I_0(0)}{a_n^d}\int_M\phi\, d\mu
\int_M\psi\, d\mu+o(a_n^{-d})\, .
\end{equation}
\item[(II)] If moreover there exists
$\gamma\in(0,1)$ such that
$\sum_N |N|^\gamma(\Vert\phi_N\Vert_\infty+\Vert\psi_N\Vert_\infty)<\infty$,
and if
$\int_M(\omega_{-k(n)}^{k(n)}(\phi,\cdot)
+ \omega_{-k(n)}^{\infty}(\psi,\cdot))\, d\mu=O((\log n)^{-1})$,
then
\begin{equation}\label{MIXBILL2}
\int_M \phi.\psi\circ T^n\, d\mu= \frac {I_0\left(0\right)}{a_n^d}
\left(1-
\frac{d\log\log n}{2\log n}\right)
\int_M\phi\, d\mu \int_M\psi\, d\mu
+O\left(\frac 1{a_n^d\log n}\right)\, .
\end{equation}
\item[(III)]
If moreover $k(n)\ll a_n^{\frac 12-u}$ for some $u\in(0,1)$ and if
there exists $\gamma\in(0,1)$ such that
$\sum_N|N|^{1+\gamma}(\Vert\phi_N\Vert_\infty+\Vert\psi_N\Vert_\infty)<\infty$,
and
$\int_M(\omega_{-k(n)}^{k(n)}(\phi,\cdot)
+ \omega_{-k(n)}^{k(n)}(\psi,\cdot))\, d\mu=O((a_n\log n)^{-1})$,
then
\begin{align*}\int_M \phi.\psi \circ T^n \, d\mu&=\frac {I_0\left(0\right)}{a_n^d}
\left(1- \frac{d\log\log n +O(1)}{2\log n}
\right)\int_{ M} \phi\, d\mu\, \int_{ M} \psi\, d\mu
+O\left(\frac 1{a_n^{d+1}\log n}\right)\, .
\end{align*}
\end{itemize}
\end{theo}
\begin{proof}
For every $x\in M$ we write $\bar x\in\bar M$ for the class of $x$ modulo $\mathbb Z^d$ for the position.
Let us write $\bar\phi_N(\bar x)=\phi_N(x)$, for every $x\in M_N$.
Observe that
\[
\int_M \phi.\psi \circ T^n \, d\mu=
\sum_{N_1,N_2\in\mathbb Z^d}
\int_{\bar M} \bar\phi_{N_1}\mathbf 1_{\{\kappa_n=N_2-N_1\}}
\bar\psi_{N_2} \circ \bar T^n \, d\bar\mu\, .
\]
Now,
setting again $\tilde I_0(n,x):=I_0(x)-\frac {I_{2}(x)}2\frac{\log\log n}{\log n}$
and applying \eqref{error}, we obtain
\begin{align}\label{EEE1}
&\int_M \phi.\psi \circ T^n \, d\mu=\sum_{N_1,N_2\in\mathbb Z^d} \frac 1{a_n^d}
\tilde I_0\left(n ,\frac{N_2-N_1}{a_n}\right)
\int_{M}\phi_{N_1}\, d\mu \int_{M}\psi_{N_2}\, d\mu\\
\nonumber&+
O\left(a_n^{-d}\sum_{N_1,N_2\in \mathbb Z^d}I_0\left(\frac{N_2-N_1}{2a_n}\right)
\Vert\phi_{N_1}\Vert_{\infty}\Vert\omega_{-k(n)}^\infty(\psi_{N_2},\cdot)\Vert_{L^1(\bar\mu)}\right)\\
\nonumber&+O\left(a_n^{-d}\sum_{N_1,N_2\in \mathbb Z^d}I_0\left(\frac{N_2-N_1}{2a_n}\right)\Vert\psi_{N_2}\Vert_{\infty}\Vert\omega_{-k(n)}^{k(n)}(\phi_{N_1},\cdot)\Vert_{L^1(\bar\mu)}\right)
+O\left(\frac {\sum_{N_1\in\mathbb Z^d}\Vert\phi_{N_1}\Vert_\infty\sum_{N_2\in\mathbb Z^d}\Vert\psi_{N_2}\Vert_\infty}{a_n^d\log n}\right)\, .
\end{align}
\eqref{MIXBILL} follows from \eqref{EEE1} and the Lebesgue dominated convergence theorem, since
$I_0$ and $I_2$ are bounded and since $I_0$ is continuous at 0.
Assume now the assumptions of (II).
Since $|I_0(X)-I_0(0)|+|I_2(X)-I_2(0)|=O(|X|^\gamma)$,
replacing $I_0\left(\frac{N_2-N_1}{a_n}\right)-\frac 12 \, I_2\left(\frac{N_2-N_1}{a_n}\right)\cdots$ by $I_0\left(0\right)-\frac 12 \, I_2\left(0\right)\cdots$ in \eqref{EEE1} leads to an error term in $O(a_n^{-d-\gamma})$. Since $I_0$ is bounded and
due to our assumptions,
the first error term in
\eqref{EEE1} is in $O(a_n^{-d}/\log n)$. This completes the proof of
\eqref{MIXBILL2}.
Finally, we assume the assumptions of (III).
We start from \eqref{BBB1} for $(\bar\phi_{N_1},\bar\psi_{N_2},N_2-N_1)$ instead of $(\phi,\psi,N)$.
Our assumptions combined with $|I_{2j}(x)-I_{2j}(0)|\ll |x|^{1+\gamma}$
ensure that we can replace $I_0(N/a_n)$ and $I_2(N/a_n)$ by $I_0(0)$ and $I_2(0)$ in $J_1$,
up to an error (after summation over $N_1,N_2\in\mathbb Z^d$) in
$O(a_n^{-d-1-\gamma})$.
Our assumptions
ensure that we can replace
$\phi^{(k)}$ and $\psi^{(k)}$ by respectively $\phi$ and $\psi$ in $J_1$ up to a total error (after summation)
in $O(a_n^{-d-1}/\log n)$.
The fact that
$|I_{2j+1}(x)|\ll x^{\gamma}$, combined with
our conditions,
implies that the contribution (after summation) of
$J_2$ is in $O(a_n^{-d-1}/\log n)$.
We prove as in the previous result that
the contribution (after summation) of
$J_3$ is in $O(a_n^{-d-1}/\log n)$. We conclude using $I_2(0)=dI_0(0)$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{THEOCob}]
We proceed as in the proof of Proposition~\ref{LLT00}, with the same notation
$I_{\ell_1,\ell_2,N}$ introduced just before Proposition~\ref{LLT00}, with $A=\Sigma^2$, and with
\[
J_{\ell_1,\ell_2,n,N,N_0}:=\frac 1{(2\pi)^d}\int_{[-\beta a_n,\beta a_n]^d}e^{-iu\cdot N/a_n}(-iu\cdot N_0)^{\ell_1}(-Au\cdot u)^{\ell_2}\lambda_{u/a_n}^{n-2k}\, du\, .
\]
Using again \eqref{lambdan-k} and \eqref{intlambda}, we observe that
\begin{equation}
\label{diffJ}
\forall\delta_0\in(0,1],\quad
J_{\ell_1,\ell_2,n,N,N_0}=J_{\ell_1,\ell_2,n,0,N_0} + O\left(\frac{N_0^{\ell_1}N^{\delta_0}}{a_n^{\delta_0}}\right)\, ,
\end{equation}
\begin{align}\label{relationIJ-bill}
J_{\ell_1,\ell_2,n,0,N_0}&=I_{\ell_1,\ell_2,N_0}(0)
+
I_{\ell_1,\ell_2+1,N_0}(0) \frac{\log\log n}{2\log n}+O\left(\frac {N_0^{\ell_1}}{\log n}\right)\, .
\end{align}
For every $N\in\mathbb Z^d$, we
consider the functions $h_N,g_N:\bar M\rightarrow \mathbb C$
such that, for any $x\in M_N$, $h_N(\bar x):=\phi_N(x)$ and $g_N(\bar x):=\psi_N(x)$
where $\bar x\in\bar M$ is the class of $x$ modulo $\mathbb Z^d$ for the position.
We take $k=k(n):=\lfloor (d+m) (\log n)/|\log \eta|\rfloor$ with $m:=3/2$
in the setting of the second item (which implies that $\eta^{k(n)}\ll a_n^{-d-2m}$) and define the approximating functions
$\phi^{(k)},\psi^{(k)}:M\rightarrow\mathbb R$ and $h_N^{(k)},g_N^{(k)}:\bar M\rightarrow\mathbb C$ given by
$\phi^{(k)}(x):=\inf_{\xi_{-k}^k( x)} \phi_+-\sup_{\xi_{-k}^k( x)} \phi_-$,
$\psi^{(k)}( x):=\inf_{\xi_{-k}^\infty(x)} \psi_+ - \sup_{\xi_{-k}^\infty(x)} \psi_-
$,
$
h_N^{(k)}(\bar x):=\inf_{\xi_{-k}^k( \bar x)} (h_N)_+-\sup_{\xi_{-k}^k( \bar x)} (h_N)_-$ and
$g_N^{(k)}(\bar x):=\inf_{\xi_{-k}^k( \bar x)} (g_N)_+-\sup_{\xi_{-k}^k( \bar x)} (g_N)_-$.
Due to our choice of $k$ and to our assumptions on $\phi,\psi$,
\begin{equation}\label{ESTI1}
\int_M \phi.\psi\circ(id-T)^m\circ T^n\, d\mu=\int_M \phi^{(k)}.\psi^{(k)}\circ(id-T)^m\circ T^n\, d\mu+O\left(\eta^{k(n)}\right)\, .
\end{equation}
The error term in the previous formula
is in $O(a_n^{-d-2m})$.
So we focus on the following integral
\[
\int_M \phi^{(k)}.\psi^{(k)}\circ T^n\, d\mu =\sum_{N_1,N_2\in\mathbb Z^d}\mathbb E_{\bar\mu}[ h_{N_1}^{(k)}1_{\{\kappa_n=N_2-N_1\}}g_{N_2}^{(k)}\circ \bar T^n]\, .
\]
Recall from the proof of Theorem~\ref{LLT20} that, due to the definition of
$h_N^{(k)},g_N^{(k)}$, there exist
$\tilde h_N^{(k)},\tilde g_N^{(k)}:\Delta\rightarrow\mathbb R$
such that $
h_N^{(k)}\circ \bar T^k\circ\mathfrak p_1=\tilde h_N^{(k)}\circ\mathfrak p_2$,
$g_N^{(k)}\circ \bar T^k\circ\mathfrak p_1=\tilde g_N^{(k)}\circ\mathfrak p_2$,
\begin{equation}
\label{controltilde}
\Vert \tilde g_N^{(k)}\Vert_\infty\le\Vert g_N\Vert_\infty\quad\mbox{and}\quad
\sup_{t}\Vert P_t^kP^k\tilde h_N^{(k)}\Vert_{\mathcal B_0}\le\Vert h_N\Vert_\infty\, ,
\end{equation}
and thus, as seen at the beginning of the proof of Proposition~\ref{LLT1},
\begin{align*}\nonumber
\int_M \phi^{(k)}.\psi^{(k)}\circ T^n\, d\mu&=\sum_{N_1,N_2\in\mathbb Z^d}
\mathbb E_{\mu_\Delta}\left[ \tilde h_{N_1}^{(k)}1_{\{\hat\kappa_n\circ f^k=N_2-N_1\}}\tilde g_{N_2}^{(k)}\circ f^n\right]\\
&=\sum_{N_1,N_2\in\mathbb Z^d}\frac 1{(2\pi)^d}\int_{[-\pi,\pi]^d}
e^{-it\cdot(N_2-N_1)}
\mathbb E_{\mu_\Delta}\left[P_t^k( \tilde g_{N_2}^{(k)}P_t^{n-2k}P_t^kP^k\tilde h_{N_1}^{(k)})\right]\, dt\, .
\end{align*}
\begin{itemize}
\item Let us assume the assumptions of item (a) of the theorem. Then
\begin{align}
&\int_M \phi^{(k)}.\psi^{(k)}\circ(I-T)^m\circ T^n\, d\mu=\sum_{j=0}^m\frac{m!}{j!(m-j)!}(-1)^j \int_M \phi^{(k)}.\psi^{(k)}\circ T^{n+j}\, d\mu\nonumber\\
&= \sum_{N_1,N_2\in\mathbb Z^d}\frac 1{(2\pi)^d}\int_{[-\pi,\pi]^d}
e^{-it\cdot(N_2-N_1)}\sum_{j=0}^m\frac{m!}{j!(m-j)!}(-1)^j
\mathbb E_{\mu_\Delta}\left[P_t^k (\tilde g_{N_2}^{(k)}P_t^{n+j-2k}P_t^kP^k\tilde h_{N_1}^{(k)})\right]\, dt
\nonumber\\
&= \sum_{N_1,N_2\in\mathbb Z^d}\frac 1{(2\pi)^d}\int_{[-\pi,\pi]^d}
e^{-it\cdot(N_2-N_1)}
\mathbb E_{\mu_\Delta}\left[ P_t^k(\tilde g_{N_2}^{(k)}(I-P_t)^mP_t^{n-2k}P_t^kP^k\tilde h_{N_1}^{(k)})\right]\, dt\, . \nonumber
\end{align}
Now, as seen in the proof of the first item of Proposition~\ref{LLT00} combined with
\eqref{controltilde}
\begin{align}
\nonumber
&\int_M \phi^{(k)}.\psi^{(k)}\circ(I-T)^m\circ T^n\, d\mu =\frac 1{(2\pi)^da_n^d}\sum_{N_1,N_2\in\mathbb Z^d}\int_{[-\beta a_n,\beta a_n]^d} e^{-iu\cdot(N_2-N_1)/a_n}\lambda_{u/a_n}^{n-2k}(1-\lambda_{u/a_n})^m\\
&\ \ \ \times\mathbb E_{\mu_{\mathbb{D}elta}}[P_{u/a_n}^k(\tilde g^{(k)}_{N_2}\Pi_{u/a_n}P_{u/a_n}^kP_{u/a_n}\tilde h^{(k)}_{N_1})]\, du +O\left(\theta^n\Vert g_{N_2}\Vert_{\infty}\Vert h_{N_1}\Vert_{\infty}\right)\nonumber\\
&=-\frac {(\log n+\log\log n)^m}{2^ma_n^{d+2m}}\left(O((\log n)^{m-1})+\sum_{N_1,N_2\in\mathbb Z^d}
J_{0,m,n,0,0}
\mathbb E_{\mu_\mathbb{D}elta}[P_{u/a_n}^k\tilde g^{(k)}_{N_2}]\mathbb E_{\mu_\mathbb{D}elta}[P_{u/a_n}^kP_{u/a_n}\tilde h^{(k)}_{N_1}]\right)\, ,\label{ESTI3}
\end{align}
where we used
\eqref{lambda}, \eqref{bornelambda}, \eqref{diffJ}, \eqref{Pi} and
$\log(|a_n/u|)=\log a_n+O(\log|u|)$ exactly as we obtained \eqref{cobfinal} in the proof of Proposition \ref{LLT00}.
Moreover
\begin{align}
\mathbb E_{\mu_\mathbb{D}elta}[P_{u/a_n}^kP_{u/a_n}\tilde h^{(k)}_{N_1}]
&=\mathbb E_{\mu_\mathbb{D}elta}[\tilde h^{(k)}_{N_1}]
+O\left( \frac{k}{a_n}\Vert\kappa\Vert_{L^1(\bar\mu)} \Vert h_{N_1}\Vert_\infty\right)\nonumber\\
&=\mathbb E_{\bar\mu}[ h_{N_1}]
+O\left( \frac{k}{a_n}\Vert\kappa\Vert_{L^1(\bar\mu)} \Vert h_{N_1}\Vert_{(\eta)}\right)\, ,\label{errorEsph}
\end{align}
\begin{equation}\label{errorEspg}
\mbox{and}\quad
\mathbb E_{\mu_\mathbb{D}elta}[P_{u/a_n}^k\tilde g^{(k)}_{N_2}]
=\mathbb E_{\bar\mu}[ g_{N_2}]
+O\left( \frac{k}{a_n}\Vert\kappa\Vert_{L^1(\bar\mu)} \Vert g_{N_2}\Vert_{(\eta)}\right)\, ,
\end{equation}
Combining this with \eqref{ESTI1}, \eqref{ESTI3}, with our summability assumptions and with \eqref{relationIJ-bill}, up to an error in $O(a_n^{-d-2m}(\log n)^{m-1})$,
we obtain a dominating term in
\[
-\frac{(\log(n\log n))^m}{2^m(n\log n)^{\frac d2+m}}\left(I_{0,m,0}(0)+I_{0,m+1,0}(0)\frac{\log\log n}{2\log n}\right)\int_{M}\phi\, d\mu\int_M\psi\, d\mu\, .
\]
But, due to~\eqref{formulaI}, $I_{0,k,0}=
\frac{\Delta^k\Phi(0)}{\sqrt{\det \Sigma^2}}$ and so
\begin{align*}
I_{0,k,0}
&=\sum_{i_1,...,i_k=1}^d\frac{\partial^{2k}}{\partial^2 x_{i_1}\cdots\partial^2 x_{i_k}}\frac{\Phi}{\sqrt{\det \Sigma^2}}(0)
=\sum_{k_1+\cdots+k_d=k}\frac{k!}{k_1!...k_d!}\frac{\partial^{2k}}{\partial^{2k_1} x_{1}\cdots\partial^{2k_d} x_{d}}\frac{\Phi}{\sqrt{\det \Sigma^2}}(0)\\
&=\sum_{k_1+\cdots+k_d=k}\frac{k!\Phi(0)}{k_1!...k_d!\sqrt{\det \Sigma^2}}\prod_{j=1}^d(-1)^{k_j}\mathbb E[Z_j^{2k_j}]
=\frac{(-1)^k\Phi(0)\mathbb E[(Z_1^2+...+Z_d^2)^{k}]}{\sqrt{\det \Sigma^2}}\\
&=(-1)^k\Phi_{\Sigma^2}(0)d(d+2)...(d+2k-2)\, ,
\end{align*}
where the $Z_j$ are independent standard Gaussian random variables.
Here we used that $\frac{\partial^{2k}}{\partial^{2k} x_j}\Phi(x)=P_{2k}(x_j)\Phi(x)$,
where $P_{2k}$ is a polynomial such that $P_{2k}(0)=\frac{(-1)^k(2k)!}{2^k\, k!}=(-1)^k\mathbb E[Z_j^{2k}]$, as well as the well-known formula for the moments of the chi-squared distribution of $Z_1^2+...+Z_d^2$ (see the numerical sketch after this proof).
So $I_{0,m,0}(0)+I_{0,m+1,0}(0)\frac{\log\log n}{2\log n}=
I_{0,m,0}(0)\left(1-\frac{(d+2m)\log\log n}{2\log n}\right)$ and we conclude by the comments after the statement of Theorem~\ref{THEOCob}.
\item Let us prove item (b):
\begin{align*}
&\int_M \phi.\psi\circ T^n\, d\mu
=\int_M \phi^{(k)}.\psi^{(k)}\circ T^n\, d\mu+O(\eta^k)\\
&=\sum_{N_2\in\mathbb Z^d}\mathbb E_{\mu_{\mathbb{D}elta}}[\tilde h_{0}^k( 1_{\{\hat \kappa_n\circ f^k=N_2-N\}}+ 1_{\{\hat \kappa_n\circ f^k=N_2+N\}}-2\times 1_{\{\hat \kappa_n\circ f^k=N\}})\tilde g_{N_2}^{(k)}\circ f^n]+O(\eta^k)\\
&=\sum_{N_2\in\mathbb Z^d}\frac 1{(2\pi)^d}\int_{[-\pi,\pi]^d}e^{it\cdot N_2}(e^{-it\cdot N}+e^{it\cdot N}-2)\mathbb E_{\mu_{\mathbb{D}elta}}[P_t^k(\tilde g_{N_2}^{(k)}P_t^{n-2k}P_t^kP^k\tilde h_0^{(k)})]\, dt\, ,
\end{align*}
\begin{align*}
&\int_M \phi.\psi\circ T^n\, d\mu=O(\theta^n\sum_{N_2\in\mathbb Z^d}
\Vert g_{N_2}\Vert_{L^{q_0}(\mu_\mathbb{D}elta)}\Vert h_0\Vert_{\infty})\\
&+\sum_{N_2\in\mathbb Z^d}
\frac 1{(2\pi)^d}\int_{[-\beta ,\beta]^d}e^{it\cdot N_2}(e^{-it\cdot N}+e^{it\cdot N}-2)\lambda_{t}^{n-2k}\mathbb E_{\mu_{\mathbb{D}elta}}[P_{t}^k(\tilde g_{N_2}^{(k)}\Pi_{t}P_{t}^kP^k\tilde h_0^{(k)})]\, dt\, .
\end{align*}
Now using the change of variable $t=u/a_n$, the expansion of the exponential and
expansion \eqref{Pi} of $\Pi$ with $\gamma=1$, we obtain that $\int_M \phi.\psi\circ T^n\, d\mu$ is equal to
\begin{align*}
&\sum_{N_2\in\mathbb Z^d}
\frac 1{(2\pi)^da_n^{d+2}}\int_{[-\beta a_n,\beta a_n]^d}e^{iu\cdot N_2/a_n}(iu\cdot N)^2\lambda_{u/a_n}^{n-2k}\mathbb E_{\mu_{\mathbb{D}elta}}[P_{u/a_n}^k\tilde g_{N_2}^{(k)}]\mathbb E_{\mu_\mathbb{D}elta}[P_{u/a_n}^kP^k\tilde h_0^{(k)}]\, du\\
&+O\left(a_n^{-d-3}\sum_{N_2\in\mathbb Z^d}
\Vert g_{N_2}\Vert_{L^{q_0}(\mu_\mathbb{D}elta)}\Vert h_0\Vert_{\infty}\right)\, ,
\end{align*}
\begin{align*}
&\int_M \phi.\psi\circ T^n\, d\mu=O\left((\log n)a_n^{-d-3}\sum_{N_2\in\mathbb Z^d}
\Vert g_{N_2}\Vert_{L^{q_0}(\mu_\mathbb{D}elta)}\Vert h_0\Vert_{\infty}\right)\\
&+\sum_{N_2\in\mathbb Z^d}
\frac 1{(2\pi)^da_n^{d+2}}\int_{[-\beta a_n,\beta a_n]^d}e^{iu\cdot N_2/a_n}(iu\cdot N)^2\lambda_{u/a_n}^{n-2k}\mathbb E_{\mu_{\mathbb{D}elta}}[\tilde g_{N_2}^{(k)}]\mathbb E_{\mu_\mathbb{D}elta}[\tilde h_0^{(k)}]\, du\, ,
\end{align*}
due to \eqref{errorEsph} and \eqref{errorEspg}.
Thus
\begin{align*}
\int_M \phi.\psi\circ T^n\, d\mu&=-
\frac {1}{a_n^{d+2}}\sum_{N_2\in\mathbb Z^d}\mathbb E_{\mu_{\mathbb{D}elta}}[\tilde g_{N_2}^{(k)}]\mathbb E_{\mu_\mathbb{D}elta}[\tilde h_0^{(k)}] J_{2,0,n,N_2,N}
+O\left((\log n)a_n^{-d-3}\right)\\
&=-
\frac {1}{a_n^{d+2}}J_{2,0,n,0,N}\sum_{N_2\in\mathbb Z^d}\mathbb E_{\mu_{\mathbb{D}elta}}[\tilde g_{N_2}^{(k)}]\mathbb E_{\mu_\mathbb{D}elta}[\tilde h_0^{(k)}]
+O\left((\log n)a_n^{-d-3}+a_n^{-d-2-\delta}\right)\, ,
\end{align*}
\begin{align*}
\int_M \phi.\psi\circ T^n\, d\mu=&-
\frac {1}{a_n^{d+2}}J_{2,0,n,0,N}\sum_{N_2\in\mathbb Z^d}\mathbb E_{\bar\mu}[g_{N_2}]\mathbb E_{\bar\mu}[h_0]
+O\left((\log n)a_n^{-d-3}+a_n^{-d-2-\delta}\right)\\
&=-
\frac {1}{a_n^{d+2}}J_{2,0,n,0,N}\int_M \psi\, d\mu \mathbb E_{\bar\mu}[h_0]
+O\left((\log n)a_n^{-d-3}+a_n^{-d-2-\delta}\right)\, ,
\end{align*}
using \eqref{diffJ} and then our definition of $k(n)$.
We conclude
by \eqref{relationIJ-bill} and \eqref{I20N0}.
\end{itemize}
\end{proof}
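For the reader's convenience, the moment identity $\mathbb E[(Z_1^2+\cdots+Z_d^2)^k]=d(d+2)\cdots(d+2k-2)$ used in the computation of $I_{0,k,0}$ above can be checked quickly by Monte Carlo simulation. The following minimal Python sketch (the sample size and the random seed are arbitrary, and numpy is assumed to be available) is only a sanity check, not part of the proof.
\begin{verbatim}
import numpy as np

# Monte Carlo check of E[(Z_1^2 + ... + Z_d^2)^k] = d (d+2) ... (d+2k-2)
# for i.i.d. standard Gaussian random variables Z_1, ..., Z_d.
rng = np.random.default_rng(1)
for d, k in [(1, 2), (2, 2), (2, 3), (3, 2)]:
    Z = rng.standard_normal((2_000_000, d))
    mc = np.mean(np.sum(Z**2, axis=1) ** k)
    exact = np.prod([d + 2 * j for j in range(k)])
    print(d, k, mc, exact)       # Monte Carlo value close to the exact product
\end{verbatim}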
\appendix
\section{Tail probability of the return time to the initial cell}
This appendix is devoted to the proof of Theorem~\ref{THMreturntime} (without assuming \eqref{H0}).
Since $\tau_0$ is constant along stable curves, there exists
$\hat\tau_0:\mathbb{D}elta\rightarrow \mathbb N$ such that $\tau_0\circ \mathfrak p_1=\hat\tau_0\circ\mathfrak p_2$.
We use the classical Dvoretzky and Erd\"os argument \cite{DE} combined with the estimates provided by our Proposition~\ref{PROP1} and~\eqref{LLT0} (we do not need Proposition~\ref{prop-lambda} here).
Considering the last visit time $n$ to the 0-cell before time $N$, we observe that
\[
1=\sum_{n=0}^N\bar\mu\left(\kappa_n=0,\ \tau_0\circ \bar T^n>N-n\right)
=\sum_{n=0}^N\mathbb E_{\mu_{\mathbb{D}elta}}\left[\mathbf 1_{\{\hat\tau_0>N-n\}}P^n(\mathbf 1_{\{\hat \kappa_n=0\}})\right]\, .
\]
Moreover, it follows from \eqref{spgap-Sz}, \eqref{spgap-Sz-bis}, \eqref{spgap-Sz-bisbis} and Proposition~\ref{PROP1} that
\begin{align*}
&P^n(\mathbf 1_{\{\hat \kappa_n=0\}})-\bar\mu(\kappa_n=0)=\frac 1{(2\pi)^d}\int_{[-\pi,\pi]^d}(Id-\Pi_0)P_t^n(\mathbf 1)\, dt\\
&=\frac 1{(2\pi)^d}\int_{[-\beta,\beta]^d}\lambda_t^n(Id-\Pi_0)\Pi_t(\mathbf 1)\, dt+O(\theta^n)\\
&=
O\left(\theta^n+\int_{[-\beta,\beta]^d}e^{-na|t|^2\log(|t|^{-1})}|t|^\gamma\, dt\right)=
O\left(\theta^n+(n\log n)^{-\frac {d+\gamma}2}\right)
\end{align*}
in $L^{p'}(\mu_{\bar\Delta})$ with $p'>1$ and $\gamma>1$ as in Proposition~\ref{PROP1}, using $\Pi_0(\Pi'_0(\mathbf 1))=0$.
Therefore
\[1=
\sum_{n=0}^N\left[\bar\mu(\tau_0>N-n)\bar\mu(\kappa_n=0)
+\varepsilon_{n,N}
\right]\, ,
\]
with $\left|\varepsilon_{n,N}\right|=O\left(\bar\mu(\tau_0>N-n)^{\frac 1{q'}}(\theta^n+(n\log n)^{-\frac {d+\gamma}2})\right)$ uniformly in $(n,N)$
(with $\frac 1{q'}+\frac 1{p'}=1$).
Since $(M,\mu,T)$ is recurrent, $\lim_{N\rightarrow +\infty}\bar\mu(\tau_0>N)=0$, so
$\lim_{N\rightarrow +\infty}\sum_{n=0}^N\varepsilon_{n,N}=0$ and
\begin{equation}\label{centralEQ}
1=\lim_{N\rightarrow +\infty}
\sum_{n=0}^N\bar\mu(\tau_0>N-n)\bar\mu(\kappa_n=0)\, .
\end{equation}
In particular $1\ge \limsup_{N\rightarrow +\infty}\bar\mu(\tau_0>N)
\sum_{n=0}^N\bar\mu(\kappa_n=0)$.
It follows from $\bar\mu(\kappa_n=0)\sim\frac{\Phi_{\Sigma^2}(0)}{(n\log n)^{\frac d2}}$
(\cite{SV07}) that
$\sum_{n=0}^N\bar\mu(\kappa_n=0)\sim\Phi_{\Sigma^2}(0)\mathfrak e_N$
with $\mathfrak e_N=\log\log N$ if $d=2$ and with
$\mathfrak e_N=\sqrt{\frac N{\log N}}$ if $d=1$.
Therefore we obtain the following upper bound:
\begin{equation}\label{limsupreturntime}
\limsup_{N\rightarrow +\infty}\Phi_{\Sigma^2}(0)\mathfrak e_N \bar\mu(\tau_0>N) \le 1\, .
\end{equation}
\begin{itemize}
\item \underline{Assume $d=2$}. It remains to prove that the lower bound coincides with the above upper bound. To this end, starting from \eqref{centralEQ} observe that, for $0\le m_N\le N$,
\begin{align*}
1&\le \liminf_{N\rightarrow +\infty} \left(\sum_{n=0}^{m_N-1}\bar\mu(\tau_0>N-n)\bar\mu(\kappa_n=0)+
\sum_{n=m_N}^N
\bar\mu(\kappa_n=0)\right)\\
&\le \liminf_{N\rightarrow +\infty}\left(\bar\mu(\tau_0>N-m_N)\sum_{n=0}^{m_N-1}\bar\mu(\kappa_n=0)+\sum_{n=m_N}^N\bar\mu(\kappa_n=0)\right)\, .
\end{align*}
Applying this with $N=N'\lfloor\log N'\rfloor$ and $m_N=N'\left(\lfloor\log N'\rfloor-1\right)$, we obtain
\begin{align*}
1
&\le \liminf_{N\rightarrow +\infty}\bar\mu(\tau_0>N')\Phi_{\Sigma^2}(0)\log\log (N')+O\left((\log N')^{-2}
\right)\, ,
\end{align*}
since $\sum_{n=N'(\lfloor\log N'\rfloor-1)}^{N'\lfloor\log N'\rfloor}\bar\mu(\kappa_n=0)=O\left(\log\frac{\log(N'\lfloor \log N'\rfloor)+\log(1-\frac 1{\lfloor \log N'\rfloor}) }{\log(N'\lfloor \log N'\rfloor)}\right)=O\left((\log N')^{-2}
\right)$ and $\log\log m_N\sim\log \log N'$.
Combining this with \eqref{limsupreturntime}, we finally obtain \eqref{returntime}.
\item \underline{Assume $d=1$}.
The upper bound \eqref{limsupreturntime}
ensures that the sequence of nonincreasing functions $((\mathfrak e_N\bar\mu(\tau_0 >xN))_{x\in(0,\infty)})_{N\ge 1}$ admits limit points for pointwise convergence (except possibly at discontinuity points of the limit). Consider a subsequence indexed by $(N_k)_k$ converging to a function $\psi$.
It follows from \eqref{centralEQ} that, for all $y\in(0,\infty)$,
\begin{equation}\label{lim1}
\lim_{N\rightarrow +\infty}\int_{-1}^{\lfloor Ny\rfloor}\bar\mu(\tau_0>\lfloor Ny\rfloor-x)\bar\mu(\kappa_{\lceil x\rceil}=0)\, dx =1\, .
\end{equation}
Note that $\int_{[-1,\epsilons N]\cup [\lfloor Ny\rfloor-N\epsilons,\lfloor Ny\rfloor]}
\bar\mu(\tau_0>\lfloor Ny\rfloor-x)\bar\mu(\kappa_{\lceil x\rceil}=0)\, dx$ is less than
\[
\sqrt{\frac{\log(N/2)} {N/2}} c'\mathfrak e_{\epsilons N}+ \sum_{k=1}^{N\epsilons}\sqrt{\frac{\log(k)}k}\frac{c'}{\sqrt{(N/2)\log(N/2)}}\le c '' \sqrt{\frac{\epsilons N\log( N)}{N\log (\epsilons N)}}
\]
which, combined with \eqref{lim1} leads to
\[
\limsup_{N\rightarrow +\infty}\left|\int_{\epsilons N}^{\lfloor Ny\rfloor-N\epsilons}\bar\mu(\tau_0>\lfloor Ny\rfloor-x)\bar\mu(\kappa_{\lceil x\rceil}=0)\, dx-1\right|=O(\sqrt{\epsilons})\, .
\]
But, for every $\varepsilon>0$, $\int_{\epsilons N}^{\lfloor Ny\rfloor-N\epsilons}\bar\mu(\tau_0>\lfloor Ny\rfloor-x)\bar\mu(\kappa_{\lceil x\rceil}=0)\, dx$ is equal to
\begin{align*}
&\int_{
\epsilons}^{(\lfloor Ny\rfloor-N\epsilons)/N}\mathfrak e_N\bar\mu(\tau_0>\lfloor Ny\rfloor-Nu)(\sqrt{N\log N}\bar\mu(\kappa_{\lceil Nu\rceil}=0))\, du\\
&\rightarrow \int_{\epsilons}^{y-\epsilons}\psi(y-u)\frac{\Phi_{\Sigma^2}(0)}{\sqrt{u}}\, du\, ,\quad
\mbox{as }N=N_k\rightarrow +\infty\, ,
\end{align*}
due to the dominated convergence theorem since \eqref{limsupreturntime}
and \eqref{LLT0} ensure that
\begin{align*}
N\bar\mu(\tau_0> &\lfloor Ny\rfloor-Nu)\bar\mu(\kappa_{\lceil Nu\rceil}=0)\le cN\frac {1}{\mathfrak e_{N_\varepsilon}\sqrt{ N\epsilons\log( N\epsilons)}}=O(\varepsilon^{-1})\, .
\end{align*}
We conclude that $\int_{0}^y\psi(y-u)\frac{\Phi_{\Sigma^2}(0)}{\sqrt{u}}\, du=1$ for every $y\in(0,\infty)$.
Observe that $\psi_0(z):=\frac 1{a\sqrt{z}}$ with $a=\Phi_{\Sigma^2}(0)\int_0^1\frac {dt}{\sqrt{t(1-t)}}$
is a solution of $\int_{0}^y\psi(y-u)\frac{\Phi_{\Sigma^2}(0)}{\sqrt{u}}\, du=1$.
Recall that $\int_0^1\frac {dt}{\sqrt{t(1-t)}}
=\pi$ (via, for example, the Euler Beta function; a short computation is displayed after this list), so $a=\pi\Phi_{\Sigma^2}(0)=\sqrt{\frac \pi{2\Sigma^2}}$.
Thus, for all $y\in(0,\infty)$, $\int_{0}^y(\psi-\psi_0)(y-u)\frac{\Phi_{\Sigma^2}(0)}{\sqrt{u}}\, du=0$.
Moreover, due to \eqref{limsupreturntime}, $\psi(x)\le C/\sqrt{x}$ and thus $\psi-\psi_0$ is integrable. We conclude by Titchmarsh's convolution theorem (see e.g. \cite{Doos}) that $\psi-\psi_0\equiv 0$. Thus $\psi_0$ is the unique limit point, and so we have proved \eqref{returntimedim1}.
\end{itemize}
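For the reader's convenience, here is the short and standard computation, via the Euler Beta and Gamma functions, behind the identity $\int_0^1\frac{dt}{\sqrt{t(1-t)}}=\pi$ used in the case $d=1$:
\[
\int_0^1\frac {dt}{\sqrt{t(1-t)}}=\int_0^1 t^{\frac 12-1}(1-t)^{\frac 12-1}\, dt=B\left(\tfrac 12,\tfrac 12\right)=\frac{\Gamma(\tfrac 12)^2}{\Gamma(1)}=\pi\, .
\]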
\section{Justifying equation~\eqref{RnDA}}
\label{sec-RnDA}
Let $v:\Delta\rightarrow\mathbb R$
satisfy the assumptions
of
Lemma~\ref{lem-Rn} and let $w\in\mathcal B_1$.
Using the pointwise formula $R (\hat\kappa_\sigma v w)(y)=\sum_{a\in\mathcal{A}} e^{\chi (y_a)} w(y_a) (v\hat\kappa_\sigma)(y_a)$
(and in particular~\eqref{eq:GM}), the arguments used in the proof of~\cite[Proposition 12.1]{MelTer17} show that
the $\|\cdot \|_{\mathcal{B}_0}$ norm of $R (v\hat\kappa_\sigma\cdot)$ is bounded by
$\|v\hat\kappa_\sigma\|_{L^1(\mu_Y)}$.
We recall the main inequalities, which in turn will help us justify equation~\eqref{RnDA}.
An important assumption used throughout~\cite[Proof of Proposition 12.1]{MelTer17}
is~\cite[Assumption (A1)]{MelTer17}. In our case, $v\hat\kappa_\sigma$ is constant on the partition elements $a\in\mathcal{Y}$ and thus it automatically satisfies
\cite[Assumption (A1)]{MelTer17}, which here translates into
$|\sup_a (v\hat\kappa_\sigma)-\inf_a (v\hat\kappa_\sigma)|\le C \inf_a (v\hat\kappa_\sigma)$, for all $a\in\mathcal{Y}$ and some $C>0$.
This allows for a direct application of the arguments in~\cite[Proof of Proposition 12.1]{MelTer17}.
By the argument in~\cite[Proof of Proposition 12.1 a)]{MelTer17} with $\varphi\in\{v\hat\kappa_\sigma,v1_Y\}$, we have that, for some $C>0$,
\[
\|R (\varphi w)\|_\infty\le C \|w\|_\infty \|v\hat\kappa_\sigma\|_{L^1(\mu_Y)}\, ,
\]
since $\varphi$ is constant on the partition elements $a\in\mathcal{Y}$.
Recall that $|\cdot|_{\mathcal{B}_0}$ is the seminorm used in defining the norm $\|\cdot\|_{\mathcal{B}_0}$. A simplified version of~\cite[Proof of Proposition 12.1 b)]{MelTer17}
(which can deal with the $p$-derivative in $t$, for $p\in (1,2)$, of $R(e^{it\hat\kappa_\sigma}v)$) with $\varphi\in\{v1_Y,v\hat\kappa_\sigma, 1_{\{\sigma=n\}} v\hat\kappa_\sigma, 1_{\{\sigma=n\}}(e^{it \hat\kappa_\sigma}-1-it\hat\kappa_\sigma)v\}$
ensures that
\[
\|R (\varphi w)\|_{\mathcal{B}_0}
\le C \|w\|_{\mathcal{B}_0} \|\varphi\|_{L^1(\mu_Y)}\, ,
\]
since the function $\varphi$ is constant on the partition elements $a\in\mathcal{A}$.
In particular, $\|R (1_{\{\sigma=n\}} v)w\|_{\mathcal{B}_0}\le C\|w\|_{\mathcal{B}_0} \|1_{\{\sigma=n\}} 1_Y v\|_{L^1(\mu_Y)}$.
\section{On the constant $c_0(v,w)$ }
\label{sec-convsum}
We show that the sums in the expression of $c_0(v,w)$
in
Lemma~\ref{lem-eigf2} are absolutely convergent.
Since we do not know if $\hat\kappa$ is in $\mathcal B$, our strategy is to use $1_{(a,l)}$ and exploit that $\hat\kappa$ is constant on $(a,l)
$. The argument below is delicate since the $\mathcal B$-norm of $1_{(a,l)}$ increases exponentially fast in $\sigma(a)$.
\begin{lem}
\label{lemma-Ppower}
$\sup_{a\in Y,\ l\in\{0,...,\sigma(a)-1\}}\|P_0^{\sigma(a,l)}(\cdot 1_{(a,l)})\|_{\mathcal L(\mathcal B_0\rightarrow \mathcal{B})}<\infty
$.
\end{lem}
\begin{proof}
Let $y\in Y\times\{ \ell_0\}$ with $\ell_0\ne 0$ and $w\in\mathcal{B}_0$. We have $P^{\sigma(a,l)}(w1_{(a,l)})(y)=0$ and so $P^{\sigma(a,l)}(w1_{(a,l)}-\mu_\Delta(w1_{(a,l)}))(y)=-\mu_\Delta(w1_{(a,l)})$.
Let $y\in Y\times\{0\}$, we have $P^{\sigma(a,l)}(w1_{(a,l)})(y)=\chi(y_{a})w(f^ly_a)$,
where $y_{a}$ is the unique element of $a\cap f^{-\sigma(a)}(\{y\})=a\cap F^{-1}(\{y\})$.
Thus,
\begin{align*}
&\sup_{x,y\in Y\times\{0\}}\frac{P^{\sigma(a,l)}(w1_{(a,l)})(x)-P^{\sigma(a,l)}(w1_{(a,l)})(y)}{\beta^{s_0(x,y)}}\\
&\le \sup_{x,y\in Y} \frac{e^{-\alpha_{\sigma(a,l)}(f^l(x_a))}-e^{-\alpha_{\sigma(a,l)}(f^l(y_a))}
}{\beta^{s_0(x,y)}}\|w1_{(a,l)}\|_\infty
+ \sup_{x,y\in Y} \frac{w(f^l(x_a))-w(f^l(y_a))}{\beta^{s_0(x,y)}}\sup_{a} \chi\, \\
&\le \|w\|_{\mathcal{B}_0}2\sup_{x\in Y} e^{-\alpha_{\sigma_a}(f^l(x_a))}
\left(1+\sum_{k=0}^{\sigma(a,l)-1}\frac{|\alpha(f^{l+k}(x_a))-\alpha(f^{l+k}(y_a))|}{\beta^{s_0(x,y)}}\right)
\le C \|w\|_{\mathcal{B}_0}\mu_\Delta(a)\, ,
\end{align*}
where we have used that $s_0(x,y)=s_0(x_a,y_a)-\sigma(a,l)\le s_0(x_a,y_a)$, that $\left|\alpha(x)-\alpha(y)\right|\le C'_1\beta^{s_0(x,y)}$ as soon as
$s_0(x,y)\ge 1$ and finally the distortion bounds. Moreover, as required,
$$\sup_{(x,y)\in Y\times\{0\}}P^{\sigma(a,l)}(w 1_{(a,l)})(x) \le \|w\|_\infty\sup_{(a,l)}e^{\alpha_{\sigma(a,l)}}\le C\|w\|_\infty\mu_\Delta(a)\, .$$~\end{proof}
\begin{lem}
\label{lemma-lemexpk1y}
There exist $\theta_0,\theta_2\in (0,1)$ so that $|\int_{\Delta} \hat\kappa\, 1_{Y}\circ f^{j}\, d\mu_\Delta|=O(\theta_0^j)$ and
so that, for all $w\in\mathcal{B}$,
$|\int_{\Delta} \hat\kappa\circ f^{j}\, w\, d\mu_\Delta|=O(\theta_2^j
\|w\|_{\mathcal{B}})$.
\end{lem}
\begin{proof} Let $p_1<2$ and $\varepsilon\in(0,1)$.
Note that $\mu_\Delta(\sigma\ge m)\le \sum_{k\ge m}k\mu_Y(\sigma\ge k)\ll\theta_1^{m(1-\varepsilon)}$.
Using H\"older inequality and Lemma~\ref{lemma-Ppower}, we compute that
\begin{align*}
&\int_{\Delta} \hat\kappa\, 1_{Y}\circ f^{j}\, d\mu_\Delta=\int_{\Delta}\left(\hat\kappa 1_{\{\sigma<j/2\}}-\mu_\Delta(\hat\kappa 1_{\{\sigma<j/2\}})\right) \, 1_Y\circ f^{j} \, d\mu_\Delta+2\| \hat\kappa\|_{L^{p_1}(\mu_\Delta)} (\mu_\Delta(\sigma\ge j/2))^{\frac {p_1-1}{p_1}}\\
&\ \ \ =\sum_{m\in\mathbb Z^d}m\sum_{(a,l)\, :\, \hat\kappa(a,l)=m,\, \sigma(a,l)<j/2} \, \int_\Delta P^j(1_{(a,l)}-\mu_{\Delta}((a,l))) 1_Y \, d\mu_\Delta+O\left(\theta_1^{\frac{(1-\varepsilon)j(p_1-1)}{2p_1}}\right)\, .
\end{align*}
Thus $\int_{\Delta} \hat\kappa\, 1_{Y}\circ f^{j}\, d\mu_\Delta$ is dominated, up to a multiplicative constant, by
\begin{align*}
& \sum_{m\in\mathbb Z^d}|m|\sum_{(a,l)\, :\, \hat\kappa(a,l)=m,\, \sigma(a,l)<j/2} \theta^{j-\sigma(a,l)} \mu_\Delta((a,l))+O\left(\theta_1^{\frac{(1-\varepsilon)j(p_1-1)}{2p_1}}\right)
\ll \theta^{j/2}
+
\theta_1^{\frac{(1-\varepsilon)j(p_1-1)}{2p_1}}
\, .
\end{align*}
If $w\in\mathcal{B}$ and $p>2$, \eqref{spgap-Sz-bis} ensures that
$
\left|\int_{\Delta}\hat\kappa\circ f^j\, w\, d\mu_\Delta\right|\le C\Vert \hat \kappa\Vert_{L^{\frac p{p-1}}(\mu_\Delta)}\theta^j\Vert w\Vert_{\mathcal{B}}$.
\end{proof}
We end with a technical lemma (of flavour somewhat similar to that of Lemma~\ref{lemma-lemexpk1y}) used in the proof of Lemma~\ref{cor-betcont}.
\begin{lem}
\label{lem-justif...}
Let $\delta$ be as in the proof of Lemma~\ref{lem-eigf2}.
There exists $C>0$ so that, for any
$v\in L^b(\mu_\Delta)$ and $w\in\mathcal{B}_0$ satisfying the assumptions on $v,w$ in the statement of Proposition~\ref{prop-expproj} and any $\xi\in\mathbb{C}$ so that
$|\xi|<(1-\delta)^{-1}$,
\[
\sum_{(a,\ell)}\sum_{r\ge 0}|\xi|^{-r}\int_{\Delta} \left| P_0^{r}\left(vw1_{(a,\ell)}-\int_{(a,l)} vw\, d\mu_\Delta\right)\, (\hat\kappa1_Y\circ f^j)\right| d\mu_\Delta\le
C\|w\|_{\mathcal B_0}
\Vert v\Vert_{L^b(\mu_\Delta)}\, .
\]
\end{lem}
\begin{proof}
Note that $
\Vert P_0^{\sigma(a)-\ell}\left(wv1_{(a,\ell)}-\mu_{\Delta}(wv1_{(a,\ell)})\right)\Vert_{\mathcal B}\ll \|w\|_{\mathcal B_0}\mu_\Delta(|v|\, 1_{(a,\ell)})$
since $v$ is constant on the elements $a\in\mathcal{Y}$ and due to Lemma~\ref{lemma-Ppower}. Thus,
\begin{align*}
&J_1:=\sum_{(a,\ell)}\sum_{r\ge\sigma(a)-\ell}|\xi|^{-r}\int_{\Delta} \left| P_0^{r}\left(vw1_{(a,\ell)}-\int_{(a,l)} vw\, d\mu_\Delta\right)\, (\hat\kappa1_Y\circ f^j)\right| d\mu_\Delta\\
&\le\sum_{(a,\ell)}\sum_{r\ge\sigma(a)-\ell} (1-\delta)^{-r} C_0\theta^{r-\sigma(a)+\ell}\|w\|_{\mathcal B_0}\mu_\Delta(|v|\, 1_{(a,\ell)})\Vert \hat\kappa\Vert_{L^{p/(p-1)}(\mu_\Delta)}\, ,
\end{align*}
\begin{align*}
J_1&\ll \sum_{(a,\ell)}(1-\delta)^{\ell-\sigma(a)}\|w\|_{\mathcal B_0}\Vert v1_{(a,\ell)}\Vert_{L^1(\mu_\Delta)}\le \left(\sum_{(a,\ell)}(1-\delta)^{\frac p{p-1}(\ell-\sigma(a))} \mu_\Delta(a,\ell) \right)^{\frac {p-1}p}\!\!\!\!\!\!\!\! \|w\|_{\mathcal B_0}\Vert v\Vert_{L^p(\mu_\Delta)}\\
&\ll \left(\sum_{a}(1-\delta)^{-\frac p{p-1}\sigma(a)} \mu_Y(a) \right)^{\frac {p-1}p} \!\!\!\!\!\!\|w\|_{\mathcal B_0}\Vert v\Vert_{L^p(\mu_\Delta)}\ll
\left(\sum_{m\ge 1}(1-\delta)^{-\frac p{p-1}m}
\theta_1^m
\right)^{\frac {p-1}p}\!\!\!\!\!\! \|w\|_{\mathcal B_0}\Vert v\Vert_{L^p(\mu_\Delta)}.
\end{align*}
In the previous displayed chain of equations we have used that $\mu_Y(\sigma\ge m)\ll \theta_1^m$, $(1-\delta)^{-1}\theta<1$
and that $(1-\delta)^{-\frac p{p-1}}\theta_1<1$. This takes care of the tails of the above sums.
Next, we deal with the partial sums (up to $\sigma$) in the chain of equations below~\eqref{U2N't} leading to $U_{1,N'}(t)=0$.
\begin{align*}
&J_2:=\sum_{(a,\ell)}\sum_{r=0}^{\sigma(a)-\ell}|\xi|^{-r}\int_{\Delta} \left| P_0^{r}\left(vw1_{(a,\ell)}-\mu_\Delta(vw1_{(a,l)})\right)\, (\hat\kappa1_Y\circ f^j)\right| d\mu_\Delta\\
&\ll \sum_{(a,\ell)}(1-\delta)^{\ell-\sigma(a)}\left\Vert vw1_{(a,\ell)}-\int_{(a,l)} vw\, d\mu_\Delta\right\Vert_{L^p(\mu_\Delta)}\Vert \hat\kappa\Vert_{L^{\frac p{p-1}}(\mu_\Delta)}\, ,
\end{align*}
\begin{align*}
J_2&\ll \sum_{(a,\ell)}(1-\delta)^{\ell-\sigma(a)}\|w\|_{\mathcal B_0}|C_{(a,\ell)}|(\mu_{\Delta}((a,\ell)))^{\frac 1b+(\frac 1p-\frac 1{b})}\\
&\ll \left(\sum_{(a,\ell)}(1-\delta)^{\frac b{b-1}(\ell-\sigma(a))}(\mu_{\Delta}((a,\ell)))^{\frac b{b-1}(\frac 1p-\frac 1{b})}\right)^{\frac{b-1}b}\|w\|_{\mathcal B_0}
\left(\sum_{(a,\ell)}|C_{(a,\ell)}|^b(\mu_{\Delta}((a,\ell)))\right)^{\frac 1b}\, .
\end{align*}
It follows that $J_2\ll
\left(\sum_{a}(1-\delta)^{-\frac b{b-1}\sigma(a)}(\mu_{Y}(a))^{\frac b{b-1}(\frac 1p-\frac 1{b})}\right)^{\frac{b-1}b}\|w\|_{\mathcal B_0}\Vert v\Vert_{L^b(\mu_\Delta)}$ and finally
\[
J_2\ll \left(\sum_{m\ge 1}(1-\delta)^{-\frac b{b-1}m}\theta_1^{m\frac b{b-1}(\frac 1p-\frac 1{b})}\right)^{\frac{b-1}b}\|w\|_{\mathcal B_0}
\Vert v\Vert_{L^b(\mu_\Delta)}\, .
\]
\end{proof}
\paragraph{\bf Acknowledgements.}
The research of FP was partially supported by the IUF, Institut Universitaire de France.
The research of DT was partially supported by EPSRC grant EP/S019286/1.
We wish also to thank the mathematical departments of the Universities of Brest and Exeter for their hospitality.
We thank the referees for their careful reading of the manuscript which helped us to substantially improve the presentation.
\end{document}
\begin{document}
\pagestyle{plain}
\begin{abstract}
Building on previous work by Mummert, Saadaoui and Sovine (\cite{Mummert}),
we study the logic underlying the web of implications and nonimplications
which constitute the so called reverse mathematics zoo. We introduce a
tableaux system for this logic and natural deduction systems for important
fragments of the language.
\end{abstract}
\title{The logic of the reverse mathematics zoo}
\tableofcontents
\section{Introduction}
Reverse mathematics is a wide ranging research program in the foundations of
mathematics: its goal is to systematically compare the strength of
mathematical theorems by establishing equivalences, implications and
nonimplications over a weak base theory. Currently, reverse mathematics is
carried out mostly in the context of subsystems of second-order arithmetic
and very often a specific system known as $\mathsf{RCA}_0$ is used as the
base theory.
The earlier reverse mathematics research, leading to Steve Simpson's
fundamental monograph \cite{sosoa}, highlighted the fact that most
mathematical theorems formalizable in second order arithmetic were in fact
either provable within $\mathsf{RCA}_0$ or equivalent to one of four other
specific subsystems, linearly ordered in terms of provability strength. This
is summarized by the \emph{Big Five} terminology coined by Antonio Montalb\'{a}n
in \cite{Mont}. However in recent years there has been a change in the
reverse mathematics main focus: following Seetapun's breakthrough result that
Ramsey theorem for pairs is not equivalent to any of the Big Five systems, a
plethora of statements, mostly in countable combinatorics, have been shown to
form a rich and complex web of implications and nonimplications. The first
paper featuring complex and non-linear diagrams representing statements of
second order arithmetics appears to be \cite{HS} (notice that the diagrams
appearing in \cite{wqo,interval} are of a different sort, as they deal with
properties of mathematical objects, rather than with mathematical
statements). Nowadays diagrams of this kind are a common feature of reverse
mathematics papers. This is called the zoo of reverse mathematics, a
terminology coined by Damir Dzhafarov when he designed \lq\lq a program to
help organize relations among various mathematical principles, particularly
those that fail to be equivalent to any of the big five subsystems of
second-order arithmetic\rq\rq. This program is available at \cite{rmzoo}.
Ludovic Patey's web site features a manually maintained zoo
(\cite{Pateyzoo}). The recent monograph \cite{Hirschfeldt}, devoted to a
small portion of the zoo, features a whole chapter of diagrams. These
diagrams cover also situations where a different base theory (e.g.\
$\mathsf{RCA}$, which is $\mathsf{RCA}_0$ with unrestricted induction) is
used, or where only the first order consequences are considered.
Actually, the zoo is not peculiar to subsystems of second order arithmetic.
For example, the study of weak forms of the Axiom of Choice and the
relationships between them has a long tradition in set theory: \cite{AC}
consists of a catalog of 383 forms of the Axiom of Choice and of their
equivalent statements. Connected to the book, there is also the web page
\cite{ACwww}, which claims also to be able to produce zoo-like tables;
unfortunately the site appears to be no longer maintained and, as of December
2015, the links are broken.
Mummert, Saadaoui and Sovine in \cite{Mummert} introduced a framework for
discussing the logic that is behind the web of implications and
nonimplications in the reverse mathematics zoo. They called their system
s-logic, introducing its syntax and semantics and proposing a tableaux system
for satisfiability of sets of s-formulas, and inference systems for two
fragments of s-logic (called $\mathcal{F}_1$ and $\mathcal{F}_2$, with the
first a subset of the second) that are important in the applications.
The present paper can be viewed as a continuation of \cite{Mummert}. Our goal
is to improve the systems introduced by Mummert, Saadaoui and Sovine and show
how widespread automated theorem proving tools can be used to deal
efficiently with s-logic. As a byproduct, our analysis also points out that,
notwithstanding the fact that the semantics for s-logic borrows some ideas
from the one for modal logic, s-logic is actually much closer to
propositional logic than to modal logic.
Here is the plan of the paper. After reviewing s-logic, in Section
\ref{sec:obs} we make some observations about its semantics. Using these, in
Section \ref{sec:tableau} we are able to simplify the tableaux system of
Mummert, Saadaoui and Sovine. Our formulation brings it closer to the
familiar tableaux systems for propositional logic, and thus, using an
efficient implementation of the latter, leads to more efficient algorithms.
Moreover, in Section \ref{sec:nd}, we improve also the treatment of the
fragments $\mathcal{F}_1$ and $\mathcal{F}_2$ by proposing natural deduction
systems for them. We also consider a new natural fragment of s-logic
$\mathcal{F}_3$, which includes $\mathcal{F}_2$ and for which we provide a
sound and complete natural deduction system. In Section \ref{sec:prolog} we
show how logical consequence between formulas of $\mathcal{F}_2$ (and hence
of $\mathcal{F}_1$) can be treated by using standard propositional Prolog:
this provides an efficient way of answering queries about whether a certain
implication or nonimplication follows from a database of known zoo facts.
\section{Basic observations about s-logic}\label{sec:obs}
For the reader's convenience, we start with a brief review of s-logic as
introduced in \cite{Mummert}.
We start from a set of propositional variables and we build propositional
formulas in the usual way, using the connectives $\neg$, $\land$, $\vee$,
and $\rightarrow$. An \emph{s-formula} is a formula of the form $A \strictif
B$ or $A \not\strictif B$, where $A$ and $B$ are propositional formulas. The
first type of s-formula is called positive or $\strictif$ s-formula, the
second one is negative or $\not\strictif$ s-formula. Notice that the
definition of s-formula is not recursive, and thus if $\alpha$ and $\beta$
are s-formulas neither $\alpha \land \beta$ nor $\alpha \strictif \beta$ are
s-formulas.
The intended meaning of $A \strictif B$ is that statement $A$ implies
statement $B$, over the fixed weak base theory. On the other hand $A
\not\strictif B$ asserts that $A \strictif B$ does not hold. In practice,
this happens when we have a model of the base theory in which $A$ holds and
$B$ does not (a counterexample to $A \strictif B$).
The semantics of s-logic is based on the notion of \emph{frame}, which is
just a nonempty set of valuations. Here by valuation we mean the usual notion
for propositional logic, i.e.\ a function assigning to every propositional
variable one of the truth values $T$ and $F$.
A frame $W$ \emph{satisfies} the positive s-formula $A \strictif B$ if for
every valuation $v \in W$ such that $v(A) = T$ we have also $v(B) = T$. $W$
satisfies the negative s-formula $A \not\strictif B$ if there exists a
valuation $v \in W$ such that $v(A) = T$ and $v(B) = F$.
Once we have the notion of satisfaction we can introduce in the usual way
notions such as \emph{satisfiability} of a set of s-formulas $\Gamma$ (there
exists a frame satisfying every member of $\Gamma$) and \emph{logical
consequence} between a set of s-formulas $\Gamma$ and a given s-formula
$\alpha$ (every frame satisfying $\Gamma$ satisfies also $\alpha$): for the
latter we use the notation $\Gamma \models_s \alpha$.
We point out that although $\rightarrow$ and $\strictif$ (and their
negations) are superficially similar, there are important differences between
them. For example, if $X$ and $Y$ are propositional variables, the set of
s-formulas $\{X \not\strictif Y, Y \not\strictif X\}$ is satisfiable (by a
frame with two valuations), while the \lq\lq corresponding\rq\rq\ set of
propositional formulas $\{\neg (X \rightarrow Y), \neg (Y \rightarrow X)\}$
is unsatisfiable. Expressing the same example in terms of logical
consequence, we have that although $\neg (X \rightarrow Y) \models Y
\rightarrow X$ in propositional logic, it is certainly not the case that $X
\not\strictif Y \models_s Y \strictif X$. Notice that in these examples we
are using s-formulas from $\mathcal{F}_1$.
Mummert, Saadaoui and Sovine introduced also the following fragments of
s-logic:
\begin{definition}\label{f1f2}
The fragments $\mathcal{F}_1$ and $\mathcal{F}_2$ of s-logic are:
\begin{itemize}
\item $\mathcal{F}_1$ is the set of all s-formulas of the forms $X\strictif
Y$ and $X\not \strictif Y$, where $X,Y$ are propositional variables;
\item $\mathcal{F}_2$ is the set of all s-formulas of the forms $A\strictif
Y$ and $A\not \strictif Y$, where $A$ is a nonempty conjunction of
propositional variables and $Y$ is a single propositional variable.
\end{itemize}
\end{definition}
As pointed out in \cite{Mummert}, $\mathcal{F}_1$ captures the basic
implications and nonimplications in reverse mathematics, while in
$\mathcal{F}_2$ we can express also results such as the equivalence between
Ramsey Theorem for pairs with two colors and the conjunction between the same
theorem restricted to stable colorings and the cohesiveness principle. Notice
that we do not need to consider also s-formulas with conjunctions of
propositional variables after $\strictif$, as $\Gamma \models_s A \strictif X
\land Y$ if and only if $\Gamma \models_s A \strictif X$ and $\Gamma
\models_s A \strictif Y$, while $\Gamma \models_s A \not\strictif X \land Y$
if and only if $\Gamma \models_s A \not\strictif X$ or $\Gamma \models_s A
\not\strictif Y$.
We introduce another fragment of s-logic, which is a natural generalization
of the fragment ${\mathcal F_2}$, and captures some implications between
members of the reverse mathematics zoo escaping ${\mathcal F_2}$. Recall, for
example, that the statement about the existence of iterates of continuous
mappings of the closed unit interval into itself was proved in \cite{FSY} to
be equivalent to the disjunction of weak K\"{o}nig's lemma and
$\mathbf{\Sigma}^0_2$-induction.
\begin{definition}\label{f3}
$\mathcal{F}_3$ is the set of all s-formulas of the forms $C \strictif D$ and
$C \not\strictif D$, where $C$ and $D$ are a nonempty conjunction of
propositional variables and a nonempty disjunction of propositional
variables, respectively.
\end{definition}
Here we do not need to consider also s-formulas with disjunctions of
propositional variables before $\strictif$, as $\Gamma \models_s X \lor Y
\strictif A$ if and only if $\Gamma \models_s X \strictif A$ and $\Gamma
\models_s Y \strictif A$, while $\Gamma \models_s X \lor Y \not\strictif A$
if and only if $\Gamma \models_s X \not\strictif A$ or $\Gamma \models_s Y
\not\strictif A$.
We now make a couple of useful basic observations about the semantics of
s-logic which use the following definition.
\begin{definition}
Given a set of s-formulas $\Gamma$, the sets of s-formulas $\Gamma^+$ and
$\Gamma^-$ are defined as
\[
\Gamma^+:=\{A\strictif B: A\strictif B \in \Gamma\},\quad
\Gamma^-:=\{A\not \strictif B: A \not\strictif B \in \Gamma\},
\]
while $\Gamma^+_{prop}$ is the set of propositional formulas
\[
\Gamma^+_{prop}:=\{A\rightarrow B: A\strictif B\in \Gamma\}.
\]
\end{definition}
\begin{lemma}\label{lemma:sat}
Let $\Gamma$ be a set of s-formulas. The following are equivalent:
\begin{enumerate}
\item $\Gamma$ is satisfiable;
\item the set of s-formulas
\[
\Gamma^+ \cup \{A \not \strictif B\}
\]
is satisfiable, for each $A \not \strictif B \in \Gamma^-$;
\item the set of propositional formulas
\[
\Gamma^+_{prop} \cup \{A, \neg B\}
\]
is satisfiable (in the usual sense of propositional logic), for each $A
\not \strictif B \in \Gamma^-$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) implies (2) is immediate.
To prove that (2) implies (3) fix $A \not\strictif B \in \Gamma^-$. Since
$\Gamma^+ \cup \{A \not\strictif B\}$ is satisfiable, there exists a frame
$W$ which satisfies this set of s-formulas; hence there exists a valuation $v
\in W$ with $v(A)=T$, $v(B)=F$. Since $W \models X \strictif Y $ for all $X
\strictif Y \in \Gamma^+$ we have that $v(X) = T$ implies $v(Y) =T$ for each
such s-formula. Hence $v$ satisfies the set of propositional formulas
$\Gamma^+_{prop} \cup \{ A, \neg B\}$.
For (3) implies (1), suppose (3) holds, and for each $A \not \strictif B \in
\Gamma^- $ let $w_{A \not \strictif B}$ be a valuation satisfying the set of
propositional formulas $\Gamma^+_{prop} \cup \{ A, \neg B\}$. Let $W$ be the
frame consisting of all these valuations: $W=\{w_{A \not \strictif B}: A \not
\strictif B \in \Gamma\}$. It is easily seen that $W$ satisfies $\Gamma$.
\end{proof}
\begin{corollary}\label{cor:unsat}
A set of s-formulas $\Gamma$ is unsatisfiable if and only if there exists $A
\not \strictif B\in \Gamma^-$ such that $\Gamma^+\models A \strictif B$. In
particular, every set of positive s-formulas is satisfiable.
\end{corollary}
Lemma \ref{lemma:sat} suggests a fairly simple algorithm for the
satisfiability problem for sets of s-formulas. In fact given the set of
s-formulas $\Gamma$ one needs only to check whether for each $A \not
\strictif B \in \Gamma^-$ the set of propositional formulas $\Gamma^+_{prop}
\cup \{A, \neg B\}$ is satisfiable. Given the constant improvement in the
efficiency of SAT-solvers (see e.g. \cite{SAT1,SAT2}), this is in fact a
quite efficient way of dealing with the problem.
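To make the procedure concrete, here is a minimal, purely illustrative Python sketch of the reduction of Lemma \ref{lemma:sat}(3); the tuple encoding of formulas and the function names are ours, and the brute-force truth-table search stands in for the call to an off-the-shelf SAT-solver one would use in practice:
\begin{verbatim}
from itertools import product

# Propositional formulas as nested tuples:
#   ('var', 'X'), ('not', f), ('and', f, g), ('or', f, g), ('imp', f, g)
# s-formulas: ('pos', A, B) encodes A strictif B, ('neg', A, B) its negation.

def variables(f):
    if f[0] == 'var':
        return {f[1]}
    return set().union(*(variables(g) for g in f[1:]))

def evaluate(f, v):
    op = f[0]
    if op == 'var': return v[f[1]]
    if op == 'not': return not evaluate(f[1], v)
    if op == 'and': return evaluate(f[1], v) and evaluate(f[2], v)
    if op == 'or':  return evaluate(f[1], v) or evaluate(f[2], v)
    if op == 'imp': return (not evaluate(f[1], v)) or evaluate(f[2], v)
    raise ValueError(op)

def prop_satisfiable(formulas):
    # Brute-force truth-table search (a SAT-solver would go here).
    vs = sorted(set().union(set(), *(variables(f) for f in formulas)))
    return any(all(evaluate(f, dict(zip(vs, bits))) for f in formulas)
               for bits in product([True, False], repeat=len(vs)))

def s_satisfiable(gamma):
    # Gamma is satisfiable iff, for every negative s-formula A, B in Gamma^-,
    # Gamma^+_prop together with {A, not B} is propositionally satisfiable.
    pos = [('imp', a, b) for kind, a, b in gamma if kind == 'pos']
    neg = [(a, b) for kind, a, b in gamma if kind == 'neg']
    return all(prop_satisfiable(pos + [a, ('not', b)]) for a, b in neg)

X, Y = ('var', 'X'), ('var', 'Y')
print(s_satisfiable([('neg', X, Y), ('neg', Y, X)]))  # True
print(s_satisfiable([('pos', X, Y), ('neg', X, Y)]))  # False
\end{verbatim}
Replacing the truth-table search by a SAT-solver call yields exactly the procedure described above, with one propositional satisfiability test per negative s-formula in $\Gamma$.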
\begin{corollary}
The problem of satisfiability for a (finite) set of s-formulas has the same
complexity as propositional satisfiability, i.e.\ it is NP-complete.
\end{corollary}
\begin{proof}
The problem is in NP because, if we fix a finite set of s-formulas $\Gamma$
and set $n = |\Gamma|$ and $k = |\Gamma^+|$, using the last point of the
previous Lemma, we can reduce the satisfiability of $\Gamma$ to the
satisfiability of $n-k$ sets of propositional formulas each of cardinality
$k+1$.
The problem is NP-complete because it essentially contains propositional
satisfiability.
\end{proof}
The previous corollary implies that with respect to complexity s-logic is
more similar to propositional logic than to modal logic (recall that
satisfiability for propositional logic is NP-complete, while satisfiability
for the modal logic K is PSPACE-complete).
Next, we consider logical consequence among s-formulas.
\begin{lemma}\label{lemma:logcons}
Let $\Gamma$ be a satisfiable set of s-formulas. For any propositional
formulas $A$ and $B$ we have:
\begin{enumerate}[(i)]
\item $\Gamma \models_s A \strictif B$ if and only if $\Gamma^+
\models_s A \strictif B$ if and only if $\Gamma^+_{prop} \models A
\rightarrow B$;
\item $\Gamma \models_s A \not\strictif B$ if and only if there exists an
s-formula $E \not\strictif F \in \Gamma^-$ such that
\[\Gamma^+, A \strictif B \models_s E \strictif F,\]
if and only if there exists an s-formula $E \not\strictif F \in
\Gamma^-$ such that
\[\Gamma^+_{prop}, A \rightarrow B \models E \rightarrow F.\]
\end{enumerate}
\end{lemma}
\begin{proof}
$(i)$ If $\Gamma \models_s A \strictif B$ then $\Gamma \cup \{A \not\strictif
B\}$ is unsatisfiable. By Lemma \ref{lemma:sat}, there exists $E
\not\strictif F \in \Gamma^-\cup\{A \not \strictif B\}$ such that
$\Gamma^+\cup \{ E \not\strictif F\}$ is unsatisfiable. Since $\Gamma$ is
satisfiable, $E \not\strictif F$ must be $A \not\strictif B$, and hence
$\Gamma^+\models_s A \strictif B$. The converse is obvious. The equivalence
between $\Gamma^+\models_s A \strictif B$ and $\Gamma^+_{prop} \models A
\rightarrow B$ follows easily from Lemma \ref{lemma:sat}.
As for $(ii)$, $\Gamma \models_s A \not\strictif B$ iff the set of s-formulas
$\Gamma \cup\{A \strictif B\}$ is unsatisfiable iff (by Lemma
\ref{lemma:sat}) there exists $E\not \strictif F \in \Gamma^-$ such that
$\Gamma^+ \cup\{A \strictif B\} \cup\{E\not \strictif F\}$ is unsatisfiable
iff there exists $E\not \strictif F \in \Gamma^-$ such that $\Gamma^+, A
\strictif B \models_s E \strictif F$ iff $\Gamma^+_{prop}, A \rightarrow B
\models E \rightarrow F$.
\end{proof}
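Continuing the illustrative sketch given after Lemma \ref{lemma:sat} (same hypothetical encoding and helper functions), the two cases of the previous Lemma translate into the following consequence test for a satisfiable $\Gamma$:
\begin{verbatim}
def s_entails(gamma, query):
    # Assumes gamma is satisfiable (the hypothesis of the lemma).
    pos = [('imp', a, b) for kind, a, b in gamma if kind == 'pos']
    neg = [(a, b) for kind, a, b in gamma if kind == 'neg']
    kind, a, b = query
    if kind == 'pos':
        # Gamma entails A strictif B  iff  Gamma^+_prop, A, not B is unsat.
        return not prop_satisfiable(pos + [a, ('not', b)])
    # Negative query: some E, F in Gamma^- must satisfy
    # Gamma^+_prop, A -> B |= E -> F, i.e. Gamma^+_prop, A -> B, E, not F unsat.
    return any(not prop_satisfiable(pos + [('imp', a, b), e, ('not', f)])
               for e, f in neg)

X, Y = ('var', 'X'), ('var', 'Y')
print(s_entails([('neg', X, Y)], ('pos', Y, X)))  # False
\end{verbatim}
The final line double-checks the earlier remark that $X \not\strictif Y \nvDash_s Y \strictif X$.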
The previous Lemma says that only positive s-formulas are needed to check
whether a positive s-formula is logical consequence of a satisfiable set of
s-formulas. Moreover, if only positive s-formulas are considered, their logic
does not differ substantially from propositional logic, because $\strictif$
behaves exactly as $\rightarrow$.
If we want to prove that a negative s-formula is logical consequence of a
satisfiable set of s-formulas then differences with propositional logic do
appear. The previous Lemma tells us that the collection of $\not\strictif$
s-formulas which are logical consequences of some $\not\strictif$ s-formulas
(i.e.\ typically from the existence of different models showing that the
implications fail) and some $\strictif$ s-formulas is just the union of the
consequences of a single $\not\strictif$ s-formula and the given set of
$\strictif$ s-formulas. In other words, having two models available
simultaneously gives no new information. This might again suggest that
s-logic is not substantially different from propositional logic.
Nevertheless, the deductive meta-properties of s-logic and propositional
logic differ, as shown by the following example.
\begin{example}\label{ex1}
In propositional logic, if $A,B,C,D$ are propositional variables and $\alpha$
is a formula, we have: \[\hbox{if\ } \Gamma, A\rightarrow C \models \alpha \quad \hbox{\
and\ } \quad \Gamma, B\rightarrow C \models \alpha \quad \hbox{\ then \ }
\quad \Gamma, A\land B \rightarrow C\models \alpha.\]
This is not the case in s-logic because, for example: \vskip5pt
\[A\not \strictif D, B\not \strictif D, A \strictif C \models_s C\not \strictif D,\]
\[A\not \strictif D, B\not \strictif D, B \strictif C\models_s C\not \strictif D\]
but
\[
A \not\strictif D, B \not\strictif D, A \land B \strictif C \nvDash_s C \not\strictif D.
\]
In fact the set of s-formulas $\{A \not\strictif D, B \not\strictif D, A
\land B \strictif C, C \strictif D\}$ is satisfied e.g.\ by the frame $W =
\{v_1,v_2\}$ with $v_1(A) = v_2(B) = T$, $v_1(B) = v_2(A) = v_1(D) = v_2(D) =
v_1(C) = v_2(C) = F$.
\end{example}
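The three claims in the example can also be checked mechanically with the illustrative sketch from this section (same hypothetical encoding; none of this replaces the frame argument above, it merely confirms it):
\begin{verbatim}
A, B, C, D = [('var', x) for x in 'ABCD']
base = [('neg', A, D), ('neg', B, D)]
print(s_entails(base + [('pos', A, C)], ('neg', C, D)))              # True
print(s_entails(base + [('pos', B, C)], ('neg', C, D)))              # True
print(s_entails(base + [('pos', ('and', A, B), C)], ('neg', C, D)))  # False
\end{verbatim}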
\section{Tableaux for s-logic}\label{sec:tableau}
Another application of Lemma \ref{lemma:sat} regards the existence of a
tableaux system to check unsatisfiability of finite set of s-formulas. In
\cite{Mummert}, the authors introduce a tableaux system which keeps track of
valuations in the syntax. For this reason the tableaux are unusual compared
e.g.\ to the standard tableaux described in a textbook such as \cite{BenAri}
(see \S2.6, where they are called semantic tableaux). In fact to deal with
strict non-implication the system considers not only s-formulas, but also
so-called {\em world formulas}, that is, pairs $(A,v)$ where $A$ is a
propositional formula and $v$ represents a variable for a propositional
evaluation. The tableaux system of \cite{Mummert} contains e.g.\ the
following rule (where $\Gamma$ is a set of s- and world formulas, and $v$ is
new for $\Gamma$):\footnote{here and below we adopt the convention that the
premisses of a rule are above their consequence, while in \cite{Mummert} the
reverse convention is adopted}
\[
{{\Gamma, A \not\strictif B }
\over
{\Gamma, (A,v), (\neg B,v)} }
\]
The tableaux system of \cite{Mummert} has also the peculiarity of not
discharging the formulas which are used in a step (this is instead a common
feature of tableaux systems for propositional logic, see \cite[Algorithm
2.64]{BenAri}). This is motivated by the fact that positive s-formulas are in
fact universal assertions about the collection of all possible worlds, and
thus might be used again on a different world. However Lemma \ref{lemma:sat}
shows that this precaution is superfluous, because the unsatisfiability of a
set of s-formulas depends only on a single world, the one witnessing the
satisfiability of one of the negative s-formulas that imply the
unsatisfiability of the whole set.
A straightforward application of Lemma \ref{lemma:sat} leads to a more
traditional tableaux system, which has the advantage of dealing only with
propositional formulas, except for the first (root) step. This system can be
described as follows. The rules of the system are given by the standard rules
of a traditional tableaux system for propositional logic plus the {\em
$\not\strictif$-rule}, which is:
\[
{{\Gamma, A\not \strictif B}
\over
{\Gamma^+_{prop}, A, \neg B}},
\]
subsuming the rule
\[
{{\Gamma}
\over
{\Gamma^+_{prop}}}
\]
when $\Gamma^- = \emptyset$.
Notice that, starting from $\Gamma, A \not\strictif B, C \not\strictif D$,
the $\not\strictif$-rule allows us to derive either $\Gamma^+_{prop}, A, \neg B$
or $\Gamma^+_{prop}, C, \neg D$.
\begin{definition}
A tableau for a set of s-formulas $\Gamma$ is a finite tree $T$ such that:
\begin{enumerate}[(a)]
\item the root of $T$ is labeled by $\Gamma$, while the inner nodes are
labelled by sets of propositional formulas;
\item the label of the child of the root is obtained from the label of the
root by an application of the $\not\strictif$-rule;
\item the label of every other node is obtained from the label of its
parent by one of the standard propositional tableaux inference rules
(see e.g.\ \cite[Algorithm 2.64]{BenAri}).
\end{enumerate}
A path through a tableau is closed if it contains a node for which the label
contains both $A$ and $\neg A$ for some propositional formula $A$. A tableau is
closed if every maximal branch is closed.
\end{definition}
Notice that, in contrast with the propositional case, a given set of
s-formulas might have both closed and non-closed tableaux. In fact to obtain
a closed tableau we must pick the \lq\lq right\rq\rq\ negative s-formula when
we apply the $\not\strictif$-rule to construct the child of the root, as is
easily seen for the set of s-formulas $\{A \not\strictif A, A \not\strictif
B\}$.
Applying Lemma \ref{lemma:sat} we immediately obtain:
\begin{corollary}
A set of s-formulas $\Gamma$ is unsatisfiable if and only if there exists a
tableau for $\Gamma$ in which every branch is closed.
\end{corollary}
The previous corollary is useful in practice, because to check satisfiability
of s-formulas after the first step we use a standard tableaux system for
propositional logic.
However, the tableaux system presented here and the one proposed in
\cite{Mummert} are hybrid systems, where s-formulas and propositional
formulas coexist. Hence neither system is appropriate to study s-logic for
itself, and compare its deductive properties with the ones of propositional
logic, as we did in Example \ref{ex1}. What are the rules of s-logic, and can
we have a calculus dealing exclusively with s-formulas? As in \cite{Mummert},
we answer these questions for some fragments of s-logic which are relevant to
the practice of reverse mathematics. In our case these are the ones
introduced in Definition \ref{f1f2} (considered also in \cite{Mummert}) but
also the fragment $\mathcal{F}_3$ introduced in Definition \ref{f3}.
\section{Natural deductions for fragments of s-logic}\label{sec:nd}
Lemma \ref{lemma:logcons} is especially useful when dealing with the
fragments of Definitions \ref{f1f2} and \ref{f3}. In \cite{Mummert} sound and
complete deductive systems for $\mathcal{F}_1$ and $\mathcal{F}_2$ are
presented.
The system for $\mathcal{F}_1$ consists of the following axioms and rules:
\begin{description}
\item[(Axiom)] $X \strictif X$, where $X$ is a propositional variable;
\item[(HS)] From $A \strictif X$ and $ X \strictif Y$ deduce $A \strictif
Y$;
\item[(N)] From $X \not \strictif Y$, $X \strictif W$ and $Z \strictif Y$
deduce $W\not \strictif Z$.
\end{description}
The system for $\mathcal{F}_2$ consists of the following axioms and rules:
\begin{description}
\item[(Axiom)] $X\strictif X$, where $X$ is a propositional variable;
\item[(W)] From $A \strictif Y$, deduce $B \strictif Y$, where $B$ is any
conjunction such that every conjunct of $A$ is also a conjunct of $B$;
\item[(HS)] From $X \land B \strictif Y$ and $A \strictif X$, deduce $A
\land B \strictif Y$;
\item[(N)] From $A \not\strictif X$, $A \land Z \strictif X$, and $A
\strictif Y$ for each conjunct $Y$ of $B$, deduce $B \not\strictif Z$.
\end{description}
We propose natural deduction calculi for ${\mathcal F_2}$ and for ${\mathcal
F_1}$, differing from the systems in \cite{Mummert} because of a simpler rule
for negative s-formulas. We also introduce a natural deduction system for
${\mathcal F_3}$. These systems are presented in a style similar to
\cite{HuthRyan} (see \S1.2.3 for a summary of natural deduction for
propositional logic).
\subsection{A Natural Deduction Calculus for ${\mathcal F_2}$}\label{F2}
The Natural Deduction Calculus for ${\mathcal F_2}$ has the following axioms
and rules, where $X,Y,Z,X_i, \dots$ are propositional variables, $A,B,C,
\dots$ are arbitrary (possibly empty) conjunctions of propositional
variables, $\alpha$ is an arbitrary ${\mathcal F_2}$ formula, and $\Gamma$
and $\Gamma'$ are sets of ${\mathcal F_2}$ s-formulas:
\[
\hbox{\textbf{(Axiom)}:}\quad X \strictif X
\]
\begin{prooftree}
\AxiomC{$\Gamma$}
\noLine
\UnaryInfC{$\triangledown$}
\noLine
\UnaryInfC{$A\strictif Y$}
\LeftLabel{\textbf{(conj1)}:\quad}
\UnaryInfC{$A\land B\strictif Y$}
\end{prooftree}
\begin{prooftree}
\AxiomC{$\Gamma$}
\noLine
\UnaryInfC{$\triangledown$}
\noLine
\UnaryInfC{$X_1 \land \ldots \land X_n \strictif Y$}
\LeftLabel{\textbf{(conj2)}:\quad}
\UnaryInfC{$X_{i_1} \land \ldots \land X_{i_k} \strictif Y$,}
\end{prooftree}
where $\{X_{i_1}, \dots, X_{i_k}\} = \{X_1, \dots, X_n\}$ as {\em sets} of
propositional variables.
\begin{prooftree}
\AxiomC{$\Gamma$}
\noLine
\UnaryInfC{$\triangledown$}
\noLine
\UnaryInfC{$A \strictif Y$}
\AxiomC{$\Gamma'$}
\noLine
\UnaryInfC{$\triangledown$}
\noLine
\UnaryInfC{$Y\land B\strictif Z$}
\LeftLabel{\textbf{(trans)}:\quad}
\BinaryInfC{$A\land B\strictif Z$}
\end{prooftree}
\begin{prooftree}
\AxiomC{$\Gamma$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$A\strictif B$ }
\AxiomC{$\Gamma$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$A\not\strictif B$ }
\LeftLabel{\textbf{($\bot$)}:\quad}
\BinaryInfC{$\alpha$ }
\end{prooftree}
For negative s-formulas we want a rule allowing us to construct a proof of $A
\not\strictif X$ from hypotheses $\Gamma, C \not\strictif Y$, whenever we
have a proof of $C \strictif Y$ from hypotheses $\Gamma, A \strictif X$:
\begin{prooftree}
\AxiomC{$\Gamma'$}
\noLine
\UnaryInfC{$\triangledown$}
\noLine
\UnaryInfC{$C \not\strictif Y$}
\AxiomC{$\Gamma, [A \strictif X]$}
\noLine
\UnaryInfC{$\triangledown$}
\noLine
\UnaryInfC{$C \strictif Y$}
\LeftLabel{\textbf{(neg)}:\quad}
\BinaryInfC{$A \not\strictif X$}
\end{prooftree}
Let $\Gamma \rhd_{\mathcal F_2} \alpha$ denote the existence of a natural
deduction proof (in the system just described) of the ${\mathcal F_2}$
s-formula $\alpha$ from hypothesis in the set of ${\mathcal F_2}$ s-formulas
$\Gamma$.
\begin{example}\label{ex:N2}
Here is a deduction showing that
\[
A \not\strictif X, A \land Z \strictif X, A \strictif Y_1, \ldots, A \strictif Y_n
\rhd_{\mathcal F_2} Y_1 \land \dots \land Y_n \not\strictif Z,
\]
corresponding to rule (N) in the ${\mathcal F_2}$ system of \cite{Mummert}:
\begin{center}
\begin{prooftree}
\AxiomC{$A \not \strictif X$} \AxiomC{$A \strictif Y_n$ } \AxiomC{$A \strictif Y_2$ } \AxiomC{$A \strictif Y_1$ }
\AxiomC{$[Y_1\land \ldots\land Y_n\strictif Z] $}
\BinaryInfC{$A \land Y_2 \land \dots \land Y_n \strictif Z$}
\doubleLine
\BinaryInfC{$A \land Y_3 \land \dots \land Y_n \strictif Z$}
\noLine
\UnaryInfC{\vdots}
\noLine
\UnaryInfC{$A \land Y_n \strictif Z$}
\doubleLine
\BinaryInfC{$A \strictif Z$}
\AxiomC{$A \land Z \strictif X$}
\doubleLine
\BinaryInfC{$A \strictif X$}
\BinaryInfC{$Y_1 \land \ldots \land Y_n \not\strictif Z$}
\end{prooftree}
\end{center}
Here double lines indicate combined applications of (conj2) and (trans), the
top step consists of an application of (trans), and the last step is an
application of (neg).
\end{example}
One can easily prove that all ${\mathcal F_2}$ rules are sound with respect
to s-logical consequence. As for completeness, we divide the proof into
cases, depending on the satisfiability of the set of premisses $\Gamma$.
\begin{lemma}\label{lem:F2positive}
If $\Gamma$ is a satisfiable set of ${\mathcal F_2}$ s-formulas and $\alpha$
is a ${\mathcal F_2}$ s-formula such that $\Gamma \models_s \alpha$ then
$\Gamma \rhd_{\mathcal F_2} \alpha$.
\end{lemma}
\begin{proof}
To prove the Lemma we rely on Theorem 17 from \cite{Mummert}, which says that
if $\Gamma$ is satisfiable\footnote{actually, the hypothesis in
\cite{Mummert} is that $\Gamma$ is consistent, but an inspection of the proof
reveals that the right hypothesis is the one of satisfiability.} and
$\Gamma\models_s \alpha$ then $\alpha$ is derivable from $\Gamma$ using the
rules (Axiom), (W), (HS), and (N). Hence, to show that $\alpha$ is derivable
in our system it is enough to show the existence of natural deduction proofs
for rules (W), (HS), and (N). The only nontrivial case is rule (N), which is
dealt with in Example \ref{ex:N2}.
\end{proof}
To finish the completeness proof for $\rhd_{\mathcal F_2}$, we have to
consider the case when $\Gamma$ is unsatisfiable, where we need to prove that
$\Gamma \rhd_{\mathcal F_2} \alpha$, for any ${\mathcal F}_2$ s-formula
$\alpha$.
\begin{lemma}\label{lem:unsat}
If $\Gamma$ is unsatisfiable, then for any ${\mathcal F_2}$ s-formula $\alpha$
we have $\Gamma \rhd_{\mathcal F_2} \alpha$.
\end{lemma}
\begin{proof}
By Corollary \ref{cor:unsat}, if $\Gamma$ is unsatisfiable then there exists
$A \not \strictif B \in \Gamma^-$ such that $\Gamma^+ \models_s A \strictif
B$. Since $\Gamma^+$ is satisfiable (again by Corollary \ref{cor:unsat}), by
Lemma \ref{lem:F2positive} we have $\Gamma^+ \rhd_{\mathcal F_2} A \strictif
B$. Hence $\Gamma \rhd_{\mathcal F_2} A \strictif B$, and $\Gamma
\rhd_{\mathcal F_2} \alpha$ follows by rule ($\bot$).
\end{proof}
Putting all the results of this subsection together, we obtain:
\begin{theorem}
If $\Gamma$ is a set of ${\mathcal F}_2$ s-formulas and $\alpha$ is a
${\mathcal F}_2$ s-formula, then
\[
\Gamma \models_s \alpha \qquad \Leftrightarrow \qquad \Gamma \rhd_{\mathcal F_2} \alpha.
\]
\end{theorem}
\subsection{A Natural Deduction Calculus for ${\mathcal F_1}$}
The Natural Deduction Calculus for ${\mathcal F_1}$ has the following axioms
and rules (where $X,Y,Z,W$ are propositional variables, $\alpha$ is an
${\mathcal F_1}$ s-formula, and $\Gamma$ and $\Gamma'$ are sets of ${\mathcal
F_1}$ s-formulas):
\[
\hbox{\textbf{(Axiom)}:}\quad X \strictif X
\]
\begin{prooftree}
\AxiomC{$\Gamma$}
\noLine
\UnaryInfC{$\triangledown$}
\noLine
\UnaryInfC{$X\strictif Y$}
\AxiomC{$\Gamma'$ }
\noLine
\UnaryInfC{$\triangledown$}
\noLine
\UnaryInfC{$Y\strictif Z$}
\LeftLabel{\textbf{(trans)}:\quad}
\BinaryInfC{$X\strictif Z$}
\end{prooftree}
\begin{prooftree}
\AxiomC{$\Gamma'$}
\noLine
\UnaryInfC{$\triangledown$}
\noLine
\UnaryInfC{$Y \not\strictif Z$}
\AxiomC{$\Gamma, [X \strictif W]$}
\noLine
\UnaryInfC{$\triangledown$}
\noLine
\UnaryInfC{$Y \strictif Z$}
\LeftLabel{\textbf{(neg)}:\quad} \BinaryInfC{$X \not\strictif W$}
\end{prooftree}
\begin{prooftree}
\AxiomC{$\Gamma$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$A\strictif B$ }
\AxiomC{$\Gamma'$}
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$A\not\strictif B$ }
\LeftLabel{\textbf{($\bot$)}:\quad}
\BinaryInfC{$\alpha$ }
\end{prooftree}
Let $\Gamma \rhd_{\mathcal F_1} \alpha$ denote the existence of a natural
deduction proof (in the system just described) of the ${\mathcal F_1}$
s-formula $\alpha$ from hypothesis in the set of ${\mathcal F_1}$ s-formulas
$\Gamma$.
\begin{example}\label{ex:N1}
Here is a deduction showing that
\[
X \not\strictif Y, X \strictif W, Z \strictif Y
\rhd_{\mathcal F_1} W \not\strictif Z,
\]
corresponding to rule (N) in the ${\mathcal F_1}$ system of \cite{Mummert}:
\begin{center}
\begin{prooftree}
\AxiomC{$X \strictif W$} \AxiomC{$[W \strictif Z]$ }
\BinaryInfC{$X \strictif Z$} \AxiomC{$Z \strictif Y $ }
\BinaryInfC{$X \strictif Y$} \AxiomC{$X \not \strictif Y$ }
\BinaryInfC{$W\not\strictif Z$}
\end{prooftree}
\end{center}
Here we employed (trans) twice and (neg) for the last step.
\end{example}
As for the case of the ${\mathcal F_2}$ system, the soundness of
$\rhd_{\mathcal F_1}$ is easily proved, and left to the reader. For
completeness, we may follow the same line of the completeness proof for
$\rhd_{\mathcal F_2}$, dividing the proof into cases, depending on whether
$\Gamma$ is a satisfiable set of ${\mathcal F_1}$ s-formulas or not. The case
where $\Gamma$ is satisfiable can be dealt using Theorem 20 from
\cite{Mummert}, and consists in proving the ${\mathcal F_1}$ rules of
\cite{Mummert} in our system. The only nontrivial case is rule (N), which is
dealt with in Example \ref{ex:N1}. In the case where $\Gamma$ is
unsatisfiable, we may proceed using rule $\bot$ as we did for $\rhd_{\mathcal
F_2}$. Hence:
\begin{theorem}
If $\Gamma$ is a set of ${\mathcal F}_1$ s-formulas and $\alpha$ is a
${\mathcal F}_1$ s-formula, then
\[
\Gamma\models_s \alpha \qquad \Leftrightarrow \qquad \Gamma \rhd_{\mathcal F_1} \alpha.
\]
\end{theorem}
\subsection{A Natural Deduction Calculus for ${\mathcal F_3}$}
We now consider the fragment $\mathcal F_3$ introduced in Definition
\ref{f3}. In considering an $\mathcal F_3$ s-formula $C\strictif D$ or $C\not
\strictif D$ we denote by $C_i$ a propositional variable which is a
$C$-conjunct and by $D_j$ a propositional variable which is a $D$-disjunct.
In order to capture derivability in fragment $\mathcal F_3$, we extend our
natural deduction calculus for $\mathcal F_2$ with the following two rules:
\begin{prooftree}
\AxiomC{$\Gamma$}
\noLine
\UnaryInfC{$\triangledown$}
\noLine
\UnaryInfC{$A\strictif B$}
\LeftLabel{\textbf{(disj1)}:\quad}
\UnaryInfC{$A\strictif D$}
\end{prooftree}
where $\{B_{1}, \dots, B_{n}\} \subseteq \{D_1, \dots, D_h\}$ as {\em sets} of
propositional variables.
\begin{prooftree}
\AxiomC{$\Gamma$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$A\strictif B$ }
\AxiomC{$\Gamma, [A\strictif B_1]$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$C\strictif E$}
\AxiomC{$\ldots$}
\noLine
\UnaryInfC{$ $}
\AxiomC{$\Gamma, [A\strictif B_n]$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$C\strictif E$}
\LeftLabel{\textbf{(disj2)}:\quad }
\QuaternaryInfC{$C\strictif E$}
\end{prooftree}
where $B= B_1 \lor \dots \lor B_n$ and $\Gamma$ is a set of positive
s-formulas.
\begin{lemma}
Rules (disj1) and (disj2) are sound in s-logic.
\end{lemma}
\begin{proof}
Soundness of rule (disj1) is immediate.
As for rule (disj2), suppose $\Gamma$ is a positive set of $\mathcal F_3$
s-formulas and $B= B_1 \lor \dots \lor B_n$ is such that:
\begin{itemize}
\item $\Gamma \models_s A \strictif B$;
\item $\Gamma, A \strictif B_i \models_s C\strictif E$ for each $i = 1,
\dots, n$.
\end{itemize}
We want to prove that $\Gamma \models_s C \strictif E$. Since $\Gamma$
contains only positive s-formulas, by Corollary \ref{cor:unsat} each set
$\Gamma, A \strictif B_i$ is satisfiable. Hence we may apply Lemma
\ref{lemma:logcons} obtaining:
\[
\Gamma^+_{prop}, A \rightarrow B_i \models C \rightarrow E.
\]
Similarly we obtain $\Gamma^+_{prop} \models A \rightarrow B$, that is
$\Gamma^+_{prop}, A \models B$. By propositional reasoning it follows that
$\Gamma^+_{prop} \models C \rightarrow E$. Hence, by Lemma
\ref{lemma:logcons} again, $\Gamma \models_s C \strictif E$.
\end{proof}
Notice that the restriction to a positive set of s-formulas $\Gamma$ in rule
(disj2) is necessary because without this hypothesis the rule is no longer
sound. To see this consider e.g.\ the set
\[
\Gamma=\{A \strictif B_1 \lor B_2, A \not\strictif B_1, A \not\strictif B_2 \}.
\]
$\Gamma$ is satisfiable, while each set $\Gamma \cup \{ A \strictif B_i\}$,
for $i=1,2$, is unsatisfiable. It follows that any formula $C \strictif D$
(with $C, D$ new for $\Gamma$) is an s-consequence of both sets $\Gamma \cup
\{ A \strictif B_i\}$. Moreover, $\Gamma \models_s A \strictif B_1 \lor B_2$,
but $C \strictif D$ is not an s-consequence of $\Gamma$.
We denote $\mathcal F_3$-derivability by $\rhd_{\mathcal F_3}$. In proving
the completeness of the $\mathcal F_3$ system we shall use also the following
three rules, that will be shown to be derivable in our system in the next
Lemma.
\begin{prooftree}
\AxiomC{$\Gamma$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$B\strictif A$ }
\AxiomC{$\Gamma $ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$C\strictif B_1$}
\AxiomC{$\ldots$}
\noLine
\UnaryInfC{$ $}
\AxiomC{$\Gamma $ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$C\strictif B_n$}
\LeftLabel{\textbf{($\mathbf{r_2}$)}:\quad }
\QuaternaryInfC{$C\strictif A$}
\end{prooftree}
where $B = B_1 \land \dots \land B_n$.
\begin{prooftree}
\AxiomC{$\Gamma$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$D\strictif E$ }
\AxiomC{$\Gamma $ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$D\land E_1\strictif F$}
\AxiomC{$\ldots$}
\noLine
\UnaryInfC{$ $}
\AxiomC{$\Gamma $}
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$D \land E_n\strictif F$}
\LeftLabel{\textbf{($\mathbf{r_3}$)}:\quad }
\QuaternaryInfC{$D\strictif F$}
\end{prooftree}
where $E = E_1 \lor \dots \lor E_n$.
\begin{prooftree}
\AxiomC{$\Gamma\qquad \qquad \qquad \Gamma$ }
\noLine
\UnaryInfC{$\nabla \qquad \ldots \qquad \nabla$}
\noLine
\UnaryInfC{$A^1\strictif B^1 \qquad \qquad A^n\strictif B^n $ }
\AxiomC{$\ldots$}
\noLine
\UnaryInfC{$ $}
\AxiomC{$\Gamma, [A^1\strictif B^1_{h_1},\ldots, A^n\strictif B^n_{h_n} ]$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$C\strictif E$}
\AxiomC{$\ldots$}
\noLine
\UnaryInfC{$ $}
\LeftLabel{\textbf{(disj2gen)}:\quad }
\QuaternaryInfC{$C\strictif E$}
\end{prooftree}
In (disj2gen), we require $\Gamma$ to be a set of positive s-formulas, and we
have a premise
\begin{prooftree}
\AxiomC{$\Gamma, A^1\strictif B^1_{h_1},\ldots, A^n\strictif B^n_{h_n}$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$C\strictif E$}
\end{prooftree}
for every choice of indices $h_1, \dots, h_n$ such that $B^i_{h_i}$ is a
disjunct of $B^i$.
\begin{lemma}
($r_2$) is a derived rule in the $\mathcal F_2$ system, while ($r_3$) and
(disj2gen) are derived rules in the $\mathcal F_3$ system.
\end{lemma}
\begin{proof}
First, we provide a proof for $(r_2)$ in the $\mathcal F_2$ system.
\begin{prooftree}
\AxiomC{$\Gamma$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$B_1\land\ldots\land B_n\strictif A$ }
\AxiomC{$\Gamma $ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$C\strictif B_1$}
\BinaryInfC{$C\land B_2\land \ldots \land B_n\strictif A$}
\AxiomC{$\Gamma $ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$C\strictif B_2$}
\doubleLine
\BinaryInfC{$C\land B_3\land \ldots \land B_n\strictif A$}
\noLine
\UnaryInfC{$\vdots$}
\UnaryInfC{$C\land B_n\strictif A$}
\AxiomC{$\Gamma $ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$C\strictif B_n$}
\doubleLine
\BinaryInfC{$C\strictif A$}
\end{prooftree}
where in the first step we apply (trans) and then, in correspondence of each
double line, we use a combination of applications of (trans) and (conj2).
We now show how to derive $(r_3)$ in the $\mathcal F_3$ system.
\begin{prooftree}
\AxiomC{$\Gamma$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$D\strictif E$ }
\AxiomC{$[D\strictif E_1]$ }
\AxiomC{$\Gamma $ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$D\land E_1\strictif F$}
\doubleLine
\BinaryInfC{$D\strictif F$ }
\AxiomC{$\ldots$}
\noLine
\UnaryInfC{$ $}
\AxiomC{$[D\strictif E_n]$ }
\AxiomC{$\Gamma $ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$D\land E_n\strictif F$}
\doubleLine
\BinaryInfC{$D\strictif F$ }
\QuaternaryInfC{$D\strictif F$}
\end{prooftree}
Again, double lines indicate a combination of applications of (trans) and
(conj2), while in the final step we use (disj2).
As for rule (disj2gen), suppose $B^1= B^1_1 \lor \dots \lor B^1_h$. We can apply
rule (disj2) to
\[
\Gamma \rhd_{\mathcal F_3} A^1 \strictif B^1
\]
and all premisses of the form
\[
\Gamma, A^1\strictif B^1_j, A^2 \strictif B^2_{h_2}, \dots, A^n\strictif B^n_{h_n} \rhd_{\mathcal F_3} C\strictif E,
\]
for $j=1, \dots, h$, obtaining, for all choices of indices $h_2, \dots, h_n$
such that $B^i_{h_i}$ is a disjunct of $B^i$, that
\[
\Gamma, A^2 \strictif B^2_{h_2},\ldots, A^n \strictif B^n_{h_n} \rhd_{\mathcal F_3} C \strictif E;
\]
In other words, we succeeded in eliminating $A^1\strictif B^1$ from the
premisses. In the same way, by applying (disj2) we can successively eliminate
$A^2 \strictif B^2, \dots, A^n \strictif B^n$, eventually deriving $\Gamma
\rhd_{\mathcal F_3} C \strictif E$, as desired.
\end{proof}
In order to prove the completeness of $\mathcal F_3$-derivability we need a
preliminary Lemma.
\begin{lemma}\label{lem:disjprop}
Suppose $\Gamma$ is a set of positive $\mathcal F_3$ s-formulas such that
$\Gamma \ntriangleright_{\mathcal F_3} C \strictif E$. Then there exists a
set of positive $\mathcal F_3$ s-formulas $\Delta$, closed under
$\rhd_{\mathcal F_3}$, such that:
\begin{itemize}
\item $\Delta \supseteq \Gamma$;
\item $\Delta \ntriangleright_{\mathcal F_3} C \strictif E$;
\item for all positive $\mathcal F_3$ s-formulas $A \strictif B \in
\Delta$ there exists $i$ such that $A \strictif B_i \in \Delta$.
\end{itemize}
\end{lemma}
\begin{proof}
Without loss of generality, we may suppose that $\Gamma$ is closed under
$\rhd_{\mathcal F_3}$. Let $\{\alpha_1, \alpha_2, \dots\}$ be an enumeration
of the positive ${\mathcal F_3}$ s-formulas, with $\alpha_j = A^j \strictif
B^j$.
We claim that there exists a sequence $\Gamma_0 =\Gamma, \Gamma_1, \dots,
\Gamma_n, \ldots$ of sets of positive $\mathcal F_3$ s-formulas, each closed
under $\rhd_{\mathcal F_3}$, with the following properties:
\begin{itemize}
\item $\Gamma_{n} \ntriangleright_{\mathcal F_3} C \strictif E$;
\item if, for $j \leq n$, $\Gamma_{n} \rhd_{\mathcal F_3} \alpha_j$, then
there exists $h$ such that $A^j \strictif B^j_h \in \Gamma_{n+1}$.
\end{itemize}
We start by setting $\Gamma_0 = \Gamma$. Suppose now we already defined
$\Gamma_n$ such that $\Gamma_{n} \ntriangleright_{\mathcal F_3} C \strictif
E$. Let $j_1, \dots, j_h \leq n$ be the list of all indices up to $n$ such
that $\Gamma_{n} \rhd_{\mathcal F_3} \alpha_{j_i}$. Then there must exist a
choice of indices $h_{j_1}, \dots, h_{j_h}$ such that $B^{j_i}_{h_{j_i}}$ is
a disjunct of $B^{j_i}$, and
\[
\Gamma_{n}, A^{j_1} \strictif B^{j_1}_{h_{j_1}}, \dots, A^{j_h} \strictif B^{j_h}_{h_{j_h}}
\ntriangleright_{\mathcal F_3} C \strictif E.
\]
In fact, if this were not the case, using rule (disj2gen), we would obtain
that $\Gamma_{n} \rhd_{\mathcal F_3} C \strictif E$. We fix such $h_{j_1},
\dots, h_{j_h}$ and let $\Gamma_{n+1}$ be the closure of $\Gamma_{n} \cup
\{A^{j_1} \strictif B^{j_1}_{h_{j_1}}, \dots, A^{j_h} \strictif B^{j_h}_{h_{j_h}}\}$ under
$\rhd_{\mathcal F_3}$. This proves the claim.
Finally, it is straightforward to check that $\Delta = \bigcup_n \Gamma_n$ has the
required properties.
\end{proof}
We split the proof of the completeness of $\rhd_{\mathcal F_3} $ into cases,
depending on the satisfiability of $\Gamma$ and on the type of the formula to
be derived. We start with:
\begin{lemma}\label{lem:positive}
Suppose $\Gamma$ is a satisfiable set of $\mathcal F_3$ s-formulas and $C
\strictif E$ is a positive $\mathcal F_3$ s-formula such that $\Gamma
\models_s C \strictif E$. Then $\Gamma \rhd_{\mathcal F_3} C \strictif E$.
\end{lemma}
\begin{proof}
We reason by contradiction. If $\Gamma \ntriangleright_{\mathcal F_3} C
\strictif E$ then also $\Gamma^+ \ntriangleright_{\mathcal F_3} C\strictif E$,
since $\Gamma^+ \subseteq \Gamma$. By applying the previous Lemma to $\Gamma^+$ we find a set of
positive $\mathcal F_3$ s-formulas $\Delta \supseteq \Gamma^+$, closed under
${\rhd_{\mathcal F_3}}$, such that
\[
\Delta \ntriangleright_{\mathcal F_3} C \strictif E,
\]
and for all $\mathcal F_3$ s-formulas $A \strictif B$, if $A \strictif B \in
\Delta$ then there exists $i$ with $A \strictif B_i \in \Delta$.
Let $w$ be the valuation defined by setting, for each propositional variable
$X$:
\[
w(X)=
\begin{cases}
T & \hbox{if $C \strictif X \in \Delta$;}\\
F & \hbox{if $C \strictif X \notin \Delta$.}
\end{cases}
\]
We claim that $w(\Delta)= T$, and $w( C\strictif E)=F$.
If $B \strictif A \in \Delta$ and $w(B)=T$, then, since $B=B_1 \land \dots
\land B_n$, we have $w(B_i)=T$ for all $i$. By definition of $w$, for all $i$
we have $C \strictif B_i \in \Delta$, and by rule ($r_2$) we obtain $C
\strictif A \in \Delta$. By the property of $\Delta$ there exists $i$ such
that $C\strictif A_i \in \Delta$. Hence $w(A_i)=T$ and therefore $w(A)=T$ as
well. This proves that $w(B \strictif A)=T$, for all $B \strictif A \in
\Delta$.
Let us now show that $w(C \strictif E)=F$. Since $w(C)=T$, it suffices to
prove that $w(E_i)=F$, for all $i$. If $w(E_i)=T$ for some $i$, then $C
\strictif E_i \in \Delta$ and $C \strictif E \in \Delta$ would follow by rule
(disj1), contradicting $\Delta \ntriangleright_{\mathcal F_3} C \strictif E$.
Having established the claim, we conclude the proof as follows. For all
negative $\mathcal F_3$ s-formulas $\alpha = A \not\strictif B \in \Gamma$,
let $v_\alpha$ be a valuation such that $v_\alpha(\Gamma^+)=T$, $v_\alpha(A)=T$
and $v_\alpha(B)=F$. Such a $v_\alpha$ exists, because by hypothesis $\Gamma$
is satisfiable. Then the frame $W = \{w\} \cup \{v_\alpha: \alpha \in
\Gamma^-\}$ is such that $W \models \Gamma$ and $W \nvDash C \strictif E$,
contradicting our hypothesis.
\end{proof}
Next, we consider the case in which $\Gamma$ is satisfiable, but the formula
to be derived is negative.
\begin{lemma}
Suppose $\Gamma$ is a satisfiable set of $\mathcal F_3$ s-formulas and $C
\not\strictif G$ is a negative $\mathcal F_3$ s-formula such that $\Gamma
\models_s C \not\strictif G$. Then $\Gamma \rhd_{\mathcal F_3} C
\not\strictif G$.
\end{lemma}
\begin{proof}
This proof follows the corresponding proof in \cite{Mummert} with minor
adjustments. We reason again by contradiction, supposing (without loss of
generality) that $\Gamma$ is closed under $\rhd_{\mathcal F_3}$ and $ C\not
\strictif G \not \in \Gamma$. For any $\alpha= D \not\strictif E \in
\Gamma^-$, we will find a valuation $w_\alpha$ with $w_\alpha(\Gamma^+)=T$,
$w_\alpha(D)=T$ and $w_\alpha(E)=F$, and either $w_\alpha( C)=F$ or
$w_\alpha(G)=T$. Once this is done, we may set
\[
W= \{w_\alpha : \alpha \in \Gamma^-\}
\]
and find a contradiction, since $W$ is a frame satisfying $\Gamma$ but
failing to satisfy $C \not\strictif G$.
Fix $\alpha=D\not \strictif E\in \Gamma^-$. Since $\Gamma$ is satisfiable,
there exists a valuation $w$ with $w(\Gamma^+)=T$, $w(D)=T$, and $w(E)=F$. In
order to find $w_\alpha$ we may suppose that all the valuations $w$ with
these properties satisfy also $w(C)=T$ (otherwise we may choose such a $w$
for $w_\alpha$). Consider the set of positive s-formulas $\Gamma^+ \cup \{C
\strictif G\}$. Then
\[
\Gamma^+ \cup \{C \strictif G\} \ntriangleright_{\mathcal F_3} D \strictif E,
\]
otherwise, since $\Gamma\rhd_{\mathcal F_3} D \not \strictif E$, we would
have $\Gamma \rhd_{\mathcal F_3} C \not\strictif G$ by the (neg) rule. By
Lemma \ref{lem:disjprop} there exists a set of positive formulas
$\Delta\supseteq \Gamma^+\cup\{C\strictif G\}$, closed under $\rhd_{\mathcal
F_3}$, such that $\Delta \ntriangleright_{\mathcal F_3} D \strictif E$, and
for all $A,B$, if $A \strictif B \in \Delta$ then there exists $i$ with $A
\strictif B_i \in \Delta$. We claim that $D \strictif C_i \in \Delta$ for
every $i$. To see this, we consider the valuation $w$ defined as
\[
w(X)=
\begin{cases}
T & \hbox{if $D \strictif X \in \Delta$;}\\
F & \hbox{if $D \strictif X \notin \Delta$.}
\end{cases}
\]
As in Lemma \ref{lem:positive}, it is not difficult to check that
$w(\Delta)=T$, and $w (D \strictif E)=F$. By the previous hypothesis, we have
$w(C)=T$, that is, $w(C_i)=T$ for all $i$. By definition of $w$ this implies
$D \strictif C_i \in \Delta$.
Next, consider the valuation $v_i$ defined as
\[
v_i(X)=
\begin{cases}
T & \hbox{if $D\land G_i \strictif X \in \Delta$;}\\
F & \hbox{if $D\land G_i \strictif X \notin \Delta$.}
\end{cases}
\]
We claim that there exists $i$ with $v_i(E)=F$. Otherwise, we have
$v_i(E)=T$, for all $i$. This means that for all $i$ there exists $j$ with
$v_i(E_j)=T$, that is, by definition of $v_i$, $D \land G_i \strictif E_j \in
\Delta$. It follows that, for all $i$, $D \land G_i \strictif E \in \Delta$.
Consider the following natural deduction, which uses first ($r_2$) and then
($r_3$):
\begin{prooftree}
\AxiomC{$\Delta$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$C\strictif G$ }
\AxiomC{$\Delta$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$D\strictif C_1$}
\AxiomC{$\ldots$}
\noLine
\UnaryInfC{$ $}
\AxiomC{$\Delta$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$D\strictif C_k$}
\QuaternaryInfC{$D\strictif G$}
\AxiomC{$\Delta$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$D\land G_1\strictif E$}
\AxiomC{$\ldots$}
\noLine
\UnaryInfC{$ $}
\AxiomC{$\Delta$ }
\noLine
\UnaryInfC{$\nabla$}
\noLine
\UnaryInfC{$D\land G_n\strictif E$}
\QuaternaryInfC{$D\strictif E$}
\end{prooftree}
This contradicts $\Delta \ntriangleright_{\mathcal F_3} D \strictif E$.
Thus we can pick $i$ such that $v_i(E)=F$. We have $v_i(D)=T$, $v_i(E)=F$,
and $v_i(G)=T$, since $D \land G_i \strictif G_i \in \Delta$ and $G$ is a
disjunction. Moreover, as before, $v_i(\Delta)=T$: if $A\strictif B\in
\Delta$ and $v_i(A)=T$, then $ D\land G_i \strictif A_j\in \Delta$, for all
$j$. By rule ($r_2$) we obtain $D\land G_i \strictif B\in \Delta$, and by the
properties of $\Delta$ there exists $h$ with $D\land G_i \strictif B_h\in
\Delta$; hence, $v_i(B_h)=T$, and $v_i(B)=T$. It follows that
$v_i(\Gamma^+)=T$, and we may choose such a $v_i$ as $w_\alpha$, finishing
the proof.
\end{proof}
The two previous results prove that, if $\Gamma$ is a satisfiable set of
${\mathcal F}_3$ s-formulas, then for any ${\mathcal F}_3$ s-formula $\alpha$
such that $\Gamma \models_s \alpha$ we have $\Gamma \rhd_{\mathcal F_3}
\alpha$.
To finish the completeness proof for $\rhd_{\mathcal F_3}$, we still have to
consider the case when $\Gamma$ is unsatisfiable. In this case we have to
prove that $\Gamma \rhd_{\mathcal F_3} \alpha$, for any ${\mathcal F}_3$
s-formula $\alpha$, and we may repeat the proof of Lemma
\ref{lem:unsat}. Hence:
\begin{lemma}
If $\Gamma$ is unsatisfiable, then for any ${\mathcal F}_3$ s-formula
$\alpha$ we have $\Gamma \rhd_{\mathcal F_3} \alpha$.
\end{lemma}
Putting all results of this section together, we obtain:
\begin{theorem}
If $\Gamma$ is a set of ${\mathcal F}_3$ s-formulas and $\alpha$ is a
${\mathcal F}_3$ s-formula, then
\[
\Gamma \models_s \alpha \qquad \Leftrightarrow \qquad \Gamma \rhd_{\mathcal F_3} \alpha.
\]
\end{theorem}
\section{${\mathcal F_2}$ and Prolog}\label{sec:prolog}
In this section we show how standard Prolog may be used to deal with logical
consequence in $\mathcal{F}_2$. Since some readers might be unfamiliar with
Prolog, we recall here the basic constructs of this programming language
(restricting ourselves to the propositional setting), following \cite{Nerode}
(see \S I.10, and especially Definition 10.4).
Propositional Prolog deals with \emph{Horn clauses} (finite sets of literals
containing at most one positive literal), thought of as disjunctions of their
elements. When the Horn clause contains (exactly) one positive literal $\{Y,
\neg X_1, \dots, \neg X_n\}$ it is a \emph{program clause} and we write $Y
:\!\!-\ X_1, \dots, X_n$. If $n>0$ we think of the program clause as
representing $X_1 \land \dots \land X_n \rightarrow Y$ and we call it a
\emph{rule}. If $n=0$, the program clause is a \emph{fact} and
we write $Y :\!\!-\ $. If the Horn clause has only negative literals $\{\neg X_1,
\dots, \neg X_n\}$ we call it a \emph{goal} and write $:\!\!-\ X_1, \dots, X_n$.
A \emph{Prolog program} is a set of program clauses.
The typical situation is that we are given a Prolog program, and we want to
know whether a conjunction of facts $Y_1, \dots, Y_k$ is a logical consequence
of the given facts and rules. To this end we add the goal $\{\neg Y_1, \dots,
\neg Y_k\}$ to the program and ask whether the resulting set of Horn clauses
is unsatisfiable. This is the case if and only if, by repeatedly applying the
resolution rule to the elements of the set, starting with the goal, we obtain
the empty clause. Prolog works by searching all possible ways of applying the
resolution rule with these constraints: if the search succeeds we have a
\emph{refutation} of the goal from the program.
We can now go back to our study of the $\mathcal{F}_2$ fragment of s-logic.
\begin{definition}
Given a set $\Gamma$ of $\mathcal{F}_2$ s-formulas, define $Prolog(\Gamma^+)$
to be the following Prolog program:
\[
Prolog(\Gamma^+) = \{Z :\!\!-\ A_1, \dots, A_n \mid A_1 \land \ldots \land A_n \strictif Z \in \Gamma^+\}.
\]
\end{definition}
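As a small illustration (with purely hypothetical formulas), suppose that
$\Gamma^+ = \{X_1 \land X_2 \strictif Y,\; Y \strictif W\}$. Then
\[
Prolog(\Gamma^+) = \{\, Y :\!\!-\ X_1, X_2\, , \quad W :\!\!-\ Y \,\},
\]
i.e., each positive s-formula becomes the program clause whose head is its
consequent and whose body lists the conjuncts of its antecedent.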
We have:
\begin{lemma}\label{lemma:prolog}
Let $\Gamma$ be a set of $\mathcal{F}_2$ s-formulas and $A \strictif Y$ be a
$\mathcal{F}_2$ s-formula, where $A = A_1 \land \dots \land A_n$.
\begin{enumerate}[(i)]
\item $\Gamma \models_s A \strictif Y$ if and only if there is a refutation
of the goal $:\!\!-\ Y$ from the Prolog program
\[
Prolog(\Gamma^+) \cup \{A_1:\!\!-\ , \dots, A_n:\!\!-\ \};
\]
\item $\Gamma \models_s A \not\strictif Y$ if and only if there exists
$Z_1 \land \dots \land Z_m \not\strictif W \in
\Gamma^-$ and a refutation of the goal $:\!\!-\ W$ from the Prolog program
\[
Prolog(\Gamma^+) \cup \{Y :\!\!-\ A_1, \ldots, A_n, Z_1:\!\!-\ , \dots, Z_m:\!\!-\ \}.
\]
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}[(i)]
\item From Lemma \ref{lemma:logcons}.i we have that $\Gamma \models_s A
\strictif Y$ if and only if $\Gamma^+_{prop}, A \models Y$. Since
$\Gamma$ is a set of $\mathcal{F}_2$-formulas, the elements in
$\Gamma^+_{prop}$ are (essentially) rules, while $A$ is equivalent to
the conjunction of the facts $A_1:\!\!-\ , \dots, A_n:\!\!-\ $. Since $Y$ is a
positive literal, the equivalence follows from the completeness of
Propositional Prolog.
\item From Lemma \ref{lemma:logcons}.ii we have that $\Gamma \models_s A
\not\strictif Y$ if and only if there exists $Z_1 \land \dots \land
Z_m \not\strictif W \in \Gamma^-$ such that
\[
\Gamma^+_{prop}, A \rightarrow Y, Z_1 \land \dots \land Z_m \models W.
\]
As before, the equivalence follows from interpreting this logical
consequence in terms of Prolog and applying the completeness of
Propositional Prolog.\qedhere
\end{enumerate}
\end{proof}
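Continuing the illustration given after the Definition above (again with
hypothetical formulas), point (i) of Lemma \ref{lemma:prolog} says that
checking $\Gamma \models_s X_1 \land X_2 \strictif W$ amounts to asking for a
refutation of the goal $:\!\!-\ W$ from the program
\[
\{\, Y :\!\!-\ X_1, X_2\, , \quad W :\!\!-\ Y\, , \quad X_1 :\!\!-\ \, , \quad X_2 :\!\!-\ \,\}.
\]
Resolution reduces the goal $:\!\!-\ W$ first to $:\!\!-\ Y$ and then to
$:\!\!-\ X_1, X_2$, which is refuted using the two facts; hence the consequence
holds.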
Lemma \ref{lemma:prolog} suggests an efficient way of checking logical
consequence between $\mathcal{F}_2$ s-formulas using a well-known
programming language such as Prolog; moreover, it only requires goals
consisting of a single literal.
\end{document}
\begin{document}
\title{On a theorem of Braden}
\author{V.~Drinfeld and D.~Gaitsgory}
\dedicatory{Dedicated to E.~Dynkin}
\date{\today}
\begin{abstract}
We give a new proof of Braden's theorem (\cite{Br}) about \emph{hyperbolic restrictions}
of constructible sheaves/D-modules. The main geometric ingredient in the proof is
a 1-parameter family that degenerates a given scheme $Z$ equipped with a $\BG_m$-action
to the product of the attractor and repeller loci.
\end{abstract}
\maketitle
\tableofcontents
\section*{Introduction}
\ssec{The setting for Braden's theorem}
\sssec{}
Given a scheme $Z$ (or algebraic space) of finite type over a field $k$ of characteristic 0,
let $\Dmod(Z)$ denote the DG category of D-modules on it.
If $f:Z_1\to Z_2$ is a morphism of such schemes one has the de Rham direct image functor
$f_{\bullet}:\Dmod(Z_1)\to \Dmod(Z_2)$ and the $!$-pullback functor
$f^!:\Dmod(Z_2)\to \Dmod(Z_1)$.
One also has the \emph{partially} defined functor $f^\bullet:\Dmod(Z_2)\to \Dmod(Z_1)$, left adjoint to
$f_{\bullet}\,$.
\sssec{}
Suppose now that $Z$ is equipped with an action of the group $\BG_m\,$. Let $Z^+$ (resp., $Z^-$) denote the
corresponding attractor (resp., repeller) locus, see Sects.~\ref{ss:attr} and \ref{ss:repeller}
for the definitions. Let $Z^0$ denote the locus of $\BG_m$-fixed points.
Consider the diagram
\begin{equation} \label{e:square with arrow prev}
\xy
(30,0)*+{Z^+}="Y";
(0,-30)*+{Z^-}="Z";
(30,-30)*+{Z.}="W";
(-5,5)*+{Z^0}="U";
{\ar@{->}_{p^-} "Z";"W"};
{\ar@{->}^{p^+} "Y";"W"};
{\ar@{->}^{i^-} "U";"Z"};
{\ar@{->}_{i^+} "U";"Y"};
\endxy
\end{equation}
Let $\Dmod(Z)^{\BG_m\on{-mon}}\subset \Dmod(Z)$ be the full subcategory consisting of $\BG_m$-monodromic\footnote{The definition of $\BG_m$-monodromic object is recalled in Subsect.~\ref{sss:mon}.} objects.
In the context of D-modules, Braden's theorem \cite{Br}
(inspired by a result\footnote{In \cite{GM} M.~Goresky and R.~MacPherson work in a purely
topological setting. They work with correspondences rather than torus actions. According to \cite[Prop. 9.2]{GM},
under a certain condition (which is satisfied if the correspondence comes from a $\BG_m$-action) one has
$A_4^\bullet\buildrel{\sim}\over{\longrightarrow} A_5^\bullet\,$, where $A_i^\bullet$ is defined in \cite[Prop. 4.5]{GM}.
This is the prototype of Braden's theorem.} from \cite{GM}) says that
the composed functors
$$(i^+)^\bullet\circ (p^+)^! \text{ and } (i^-)^!\circ (p^-)^\bullet, \quad \Dmod(Z)\to \Dmod(Z^0)$$
are both defined \footnote{The issue here is that in the context of D-modules
the $\bullet$-pullback functor is only partially defined.}
on objects of $\Dmod(Z)^{\BG_m\on{-mon}}$
and we have a canonical isomorphism
\begin{equation} \label{e:Braden preview}
(i^+)^\bullet\circ (p^+)^!|_{\Dmod(Z)^{\BG_m\on{-mon}}} \simeq (i^-)^!\circ (p^-)^\bullet|_{\Dmod(Z)^{\BG_m\on{-mon}}}\;\; .
\end{equation}
\sssec{}
In his paper \cite{Br}, T.~Braden formulated and proved his theorem assuming that $Z$ is a normal algebraic variety.
Although his formulation is enough for practical purposes, we prefer to formulate and prove this theorem for
\emph{algebraic spaces} of finite type over a field (without any normality or separateness conditions).
In this more general context, the representability of the functors defining
the attractor $Z^+$ (and other related spaces such as $\wt{Z}$ from \secref{sss:behind} below)
is no longer obvious; it is established in \cite{Dr}.
\ssec{Why should we care?}
Braden's theorem is hugely important in geometric representation theory.
\sssec{}
Here is a typical application
in the context of Lusztig's theory of induction and restriction of character sheaves.
Take $Z=G$, a connected reductive group. Let $P\subset G$ be a parabolic, and let $P^-$ be an opposite parabolic,
so that $M:=P\cap P^-$ identifies with the Levi quotient of both $P$ and $P^-$. Denote the corresponding
closed embeddings by
$$M\overset{i^+}\hookrightarrow P\overset{p^+}\hookrightarrow G \text{ and }
M\overset{i^-}\hookrightarrow P^-\overset{p^-}\hookrightarrow G.$$
Then the claim is that we have a canonical isomorphism of functors $$\Dmod(G)^{\on{Ad}_G\on{-mon}}\to \Dmod(M)^{\on{Ad}_M\on{-mon}}$$
between the corresponding categories of $\on{Ad}$-monodromic D-modules:
\begin{equation} \label{e:restr}
(i^+)^\bullet\circ (p^+)^!\simeq (i^-)^!\circ (p^-)^\bullet.
\end{equation}
The proof is immediate from \eqref{e:Braden preview}: the corresponding $\BG_m$-action is the adjoint action
corresponding to a co-character $\BG_m\to M$, which maps to the center of $M$ and is dominant regular with
respect to $P$. \footnote{Unfortunately, it seems that this particular proof of the isomorphism
\eqref{e:restr}, although very simple and well-known in the folklore, does not appear in the published literature.}
\sssec{}
For other applications of Braden's theorem see
\cite{Ach}, \cite{AC}, \cite{AM}, \cite{Bi-Br}, \cite{GH}, \cite{Ly1}, \cite{Ly2}, \cite{MV}, \cite{Nak}.
\ssec{The new proof}
The goal of this paper is to give an alternative proof of Braden's theorem. The reason for our decision
to publish it is that
\noindent(a) the new proof gives another point of view on ``what Braden's theorem is really about";
\noindent(b) a slight modification of the new proof of Braden's theorem allows one to prove a new result in the geometric theory of automorphic forms, see \cite[Thm.~1.2.5]{DrGa3}.
Let us explain the idea of the new proof.
\sssec{Braden's theorem as an adjunction}
Let us complete the diagram \eqref{e:square with arrow prev} to
\begin{equation} \label{e:square with arrow prev enh}
\xy
(30,0)*+{Z^+}="Y";
(0,-30)*+{Z^-}="Z";
(30,-30)*+{Z,}="W";
(-5,5)*+{Z^0}="U";
{\ar@{->}_{p^-} "Z";"W"};
{\ar@{->}^{p^+} "Y";"W"};
{\ar@{->}^{i^-} "U";"Z"};
{\ar@{->}_{i^+} "U";"Y"};
{\ar@<-1.3ex>_{q^+} "Y";"U"};
{\ar@<1.3ex>^{q^-} "Z";"U"};
\endxy
\end{equation}
where
$$q^+:Z^+\to Z^0 \text{ and } q^-:Z^-\to Z^0$$
are the corresponding contraction maps.
First, we observe that the functors $(i^+)^\bullet$ and $(i^-)^!$, when restricted to the corresponding monodromic categories,
are isomorphic to $(q^+)_\bullet$ and $(q^-)_!$, respectively. Hence,
the isomorphism \eqref{e:Braden preview} can be
rewritten as
\begin{equation} \label{e:Braden preview q}
(q^+)_\bullet\circ (p^+)^!|_{\Dmod(Z)^{\BG_m\on{-mon}}} \simeq (q^-)_!\circ (p^-)^\bullet|_{\Dmod(Z)^{\BG_m\on{-mon}}}.
\end{equation}
Next we observe that the functor $(q^-)_!\circ (p^-)^\bullet$ is the left adjoint functor of
$(p^-)_\bullet\circ (q^-)^!$. Hence,
\emph{the isomorphism \eqref{e:Braden preview q} can be restated as the assertion that the functors
$$(q^+)_\bullet\circ (p^+)^!|_{\Dmod(Z)^{\BG_m\on{-mon}}} \text{ and } (p^-)_\bullet\circ (q^-)^!|_{\Dmod(Z)^{\BG_m\on{-mon}}}$$
form an adjoint pair.}
\sssec{The geometry behind the adjunction} \label{sss:behind}
It turns out that the co-unit for this adjunction, i.e., the map
\begin{equation} \label{e:counit preview}
(q^+)_\bullet\circ (p^+)^!\circ (p^-)_\bullet\circ (q^-)^!\to \on{Id}_{\Dmod(Z^0)}\, ,
\end{equation}
is easy to write down (just as in the original form of Braden's theorem, a map in
one direction is obvious).
The crux of the new proof consists of writing down the unit for the adjunction, i.e., the corresponding map
\begin{equation} \label{e:unit preview}
\on{Id}_{\Dmod(Z)^{\BG_m\on{-mon}}}\to (p^-)_\bullet\circ (q^-)^! \circ (q^+)_\bullet\circ (p^+)^!|_{\Dmod(Z)^{\BG_m\on{-mon}}}\, .
\end{equation}
The map \eqref{e:unit preview} comes from a certain geometric construction described in \secref{s:deg}.
Namely, we construct a $1$-parameter ``family"
\footnote{The quotation marks are due to the fact that this
``family" is not flat, in general. If $Z$ is affine then each $\wt{Z}_t$ is a closed subscheme of $Z\times Z$.
If $Z$ is separated, then for each $t$ the map
$\wt{Z}_t\to Z\times Z$ is a monomorphism (but not necessarily a locally closed embedding).}
of schemes (resp., algebraic spaces) $\wt{Z}_t$ mapping to
$Z\times Z$ (here $t\in \BA^1$) such that for $t\ne 0$ the scheme (resp., algebraic space) $\wt{Z}_t$ is the graph of the map $t:Z\buildrel{\sim}\over{\longrightarrow} Z$,
and $\wt{Z}_0$ is isomorphic to $Z^+\underset{Z^0}\times Z^-$.
\ssec{Other sheaf-theoretic contexts} \label{ss:other}
This paper is written in the context of D-modules on schemes (or more generally,
algebraic spaces of finite type) over a field $k$ of characteristic 0.
However, Braden's theorem can be stated in other sheaf-theoretic contexts, where the role of
the DG category $\Dmod(Z)$ is played by a certain triangulated category $D(Z)$.
The two other contexts that we have in mind are as follows:
\noindent(i) $k$ is any field, and $D(Z)$ is the derived category of ${\mathbb Q}_\ell$-sheaves with constructible cohomologies,
\noindent(ii) $k=\BC$, and $D(Z)$ is the derived category of sheaves of $R$-modules
with constructible cohomologies (where $R$ is any ring).
In these two contexts the new proof of Braden's theorem presented in this article goes through with the following modifications:
First, the functors $f^\bullet$ and $f_!$ are always defined, so one should not worry about pro-categories.\footnote{Pro-categories are considered in Appendix~\ref{s:pro}.}
Second, the definition of the $G$-monodromic category $D(Z)^{G\on{-mon}}$ (where $G$ is any algebraic group, e.g., the group $\BG_m$) should be slightly different from the definition of $\Dmod(Z)^{G\on{-mon}}$
given in \secref{sss:mon}.
Namely, $D(Z)^{G\on{-mon}}$ should be defined as the full subcategory of $D(Z)$ \emph{strongly generated} by the essential
image of the pullback functor $D(Z/G)\to D(Z)$ (i.e., its objects are those
objects of $D(Z)$ that can be obtained from objects lying in the image of the above pullback
functor by a \emph{finite} iteration of the procedure of taking the cone of a morphism).
\ssec{Some conventions and notation}
\sssec{}
In Sects. \ref{s:actions} and \ref{s:deg}
we will work over an arbitrary ground field $k$, and in Sects. \ref{s:Braden1}-\ref{s:Verifying}
we will assume that $k$ has characteristic $0$ (because we will be working with D-modules).
\sssec{}
In this article all schemes, algebraic spaces, and stacks are assumed to be ``classical" (as opposed to derived).
\sssec{}
When working with D-modules, our conventions follow those of \cite[Sects. 5 and 6]{DrGa1}. The
only notational difference is that for a morphism $f:Z_1\to Z_2$, we will denote the
direct image functor $\Dmod(Z_1)\to \Dmod(Z_2)$ by $f_\bullet$ (instead of $f_{\dr,*}$),
and similarly for the left adjoint, $f^\bullet$ (instead of $f^*_{\dr}$).
\sssec{}
Given an algebraic space (or stack) $Z$ of finite type over a field $k$ of characteristic 0, we
let $\Dmod(Z)$ denote the DG category of D-modules on it.
Our conventions regarding DG categories follow those of \cite[Sect. 0.6]{DrGa1}.
On the other hand, the reader may prefer to replace each time the DG category $\Dmod(Z)$ by its
homotopy category, which is a triangulated category.\footnote{However, the most natural approach to
\emph{constructing} the triangulated category of D-modules on an algebraic stack is to construct the corresponding DG category first, as is done in \cite{DrGa1}.} Then the formulations and proofs of the main results of this article will remain valid. Moreover, once we know that the morphism~\eqref{e:counit preview} in the triangulated setting is the co-unit of an adjunction, it follows that the same is true in the DG setting.
\sssec{}
In Appendix \ref{s:pro} we define the notion of pro-completion $\on{Pro}(\bC)$ of a DG category $\bC$.
The reader who prefers to
stay in the triangulated world, can replace it by the category
of all covariant triangulated functors from the homotopy category
$\on{Ho}(\bC)$ to the homotopy category of complexes of $k$-vector spaces. (Note that the category of such functors is not
necessarily triangulated, but this is of no consequence for us.)
\ssec{Organization of the paper}
\sssec{}
Sects. \ref{s:actions}-\ref{s:deg} are devoted to the geometry of $\BG_m$-actions on algebraic spaces.
Let $Z$ be an algebraic space of finite type over the ground field $k$, equipped with a $\BG_m$-action.
In \secref{s:actions} we define the attractor $Z^+$ and the repeller $Z^-$ by
\begin{equation} \label{e:attr&repel}
Z^+:=\GMaps(\BA^1,Z), \quad\quad Z^-:=\GMaps(\BA^1_-\, ,Z),
\end{equation}
where $\GMaps$ stands for the space of $\BG_m$-equivariant maps and $\BA^1_-:=\BP^1-\{\infty\}$ (or equivalently, $\BA^1_-$
is the affine line equipped with the $\BG_m$-action opposite to the usual one).
The basic facts on $Z^\pm$ are formulated in \secref{s:actions}; the proofs of the more difficult statements are given in~\cite{Dr}.
As was already mentioned in ~\secref{sss:behind}, in the proof of Braden's theorem we use a certain 1-parameter family of
algebraic spaces $\wt{Z}_t$, $t\in\BA^1$. These spaces are defined and studied in
\secref{s:deg}. The definition is formally similar to \eqref{e:attr&repel}: namely,
\[
\wt{Z}_t:=\GMaps (\BX_t\,, Z),
\]
where $\BX_t$ is the hyperbola $\tau_1\cdot \tau_2=t$ and the action of $\lambda\in\BG_m$ on
$\BX_t$ is defined by
$$\tilde\tau_1=\lambda\cdot \tau_1\, , \quad\quad\tilde\tau_2=\lambda^{-1}\cdot\tau_2\, .$$
Note that $\BX_0$ is the union of the two coordinate axes, which meet at the origin; accordingly,
$\wt{Z}_0$ identifies with $Z^+\underset{Z^0}\times Z^-$ (as promised in \secref{sss:behind}).
\sssec{}
In \secref{s:Braden1} we first state Braden's theorem in its original formulation, and then reformulate
it as a statement that certain two functors are adjoint (with the specified co-unit of the adjunction).
In \secref{s:unit} we carry out the main step in the proof of \thmref{t:Braden adj} by constructing the
unit morphism for the adjunction.
The geometric input in the construction of the unit is the family
$t\rightsquigarrow \wt{Z}_t$ mentioned above. The input from the theory of D-modules is the
\emph{specialization map}
$$\on{Sp}_\CK:\CK_1\to \CK_0\, ,$$
where $\CK$ is a $\BG_m$-monodromic object in $\Dmod(\BA^1\times \CY)$ (for any algebraic space/stack $\CY$),
and where $\CK_1$ and $\CK_0$ are the !-restrictions of $\CK$ to $\{1\}\times \CY$ and $\{0\}\times \CY$,
respectively. The map $\on{Sp}_\CK$ is a simplified version of the specialization map that goes from nearby
cycles to the !-fiber.
In \secref{s:Verifying} we show that the unit and co-unit indeed satisfy the adjunction property.
In Appendix \ref{s:pro} we define the notion of pro-completion $\on{Pro}(\bC)$ of a DG category $\bC$.
\ssec{Acknowledgements}
We thank A.~Beilinson, T.~Braden, J.~Konarski, and A.~J.~Sommese for helpful discussions.
The research of V. D. is partially supported by NSF grants DMS-1001660 and DMS-1303100.
The research of D. G. is partially supported by NSF grant DMS-1063470.
\section{Geometry of $\BG_m$-actions: fixed points, attractors, and repellers} \label{s:actions}
In this section we review the theory of action of the multiplicative group $\BG_m$ on a scheme or algebraic space $Z$.
Specifically, we are concerned with the fixed-point locus, denoted by $Z^0$, as well as the attractor/repeller spaces,
denoted by $Z^+$ and $Z^-$, respectively.
The main results of this section are \propref{p:Z^0closed} (which says that the fixed-point locus is closed),
Theorem~\ref{t:attractors} (which ensures representability of attractor/repeller sets),
and \propref{p:Cartesian} (the latter is used in the construction of the unit of the adjunction given in
\secref{sss:defining co-unit}).
In the case of a scheme equipped with a locally linear $\BG_m$-action these
results are well known (in a slightly different language).
\ssec{$k$-spaces} \label{ss:k spaces}
\sssec{}
We fix a field $k$ (of any characteristic).
By a $k$-\emph{space} (or simply \emph{space}) we mean a contravariant functor $Z$ from the category of
affine schemes to that of sets which is a sheaf for the fpqc topology. Instead of $Z(\Spec(R))$ we write simply
$Z(R)$; in other words, we consider $Z$ as a covariant functor on the category of $k$-algebras.
Note that for any scheme $S$ we have $Z(S)=\Maps (S,Z)$, where $\Maps$ stands for the set of morphisms between spaces.
Usually we prefer to write $\Maps (S,Z)$ rather than $Z(S)$.
We write $\on{pt}:=\Spec(k)$.
\sssec{}
General spaces will appear only as ``intermediate" objects.
For us, the really geometric objects are \emph{algebraic spaces} over $k$. We will be using the definition
of algebraic space from \cite{LM} (which goes back to M.~Artin).
\footnote{In particular, quasi-separatedness is included in the definition of algebraic space. Thus
the quotient $\BA^1/\BZ$ (where the discrete group $\BZ$ acts by translations)
is \emph{not} an algebraic space.}
Any quasi-separated scheme (in particular, any scheme of finite type) is an algebraic space.
The reader may prefer to restrict his attention to schemes, and even to separated schemes,
as this will cover most of the cases of interest to which the main result of this paper, i.e.,
\thmref{t:braden original}, is applied.
Note that in the definition of spaces, instead of considering affine schemes as ``test schemes", one can consider
algebraic spaces (any fpqc sheaf on the category of affine schemes uniquely extends to an fpqc sheaf on the category of
algebraic spaces).
\sssec{}
A morphism of spaces $f:Z_1\to Z_2$ is said to be a \emph{monomorphism} if
the corresponding map
$$\Maps (S,Z_1)\to\Maps (S,Z_2)$$
is injective for any scheme $S$. In particular, this applies if $Z_1$ and $Z_2$ are algebraic spaces.
It is known that a morphism \emph{of finite type} between schemes (or algebraic spaces) is a monomorphism
if and only if each of its geometric fibers is a reduced scheme with at most one point.
A morphism of algebraic spaces is said to be \emph{unramified} if it has locally finite presentation and its geometric fibers are
finite and reduced.
\ssec{The space of $\BG_m$-equivariant maps} \label{ss:GMaps}
\sssec{}
Let $Z_1$ and $Z_2$ be spaces. We define the space $\MMaps(Z_1,Z_2)$ by
$$\Maps(S,\MMaps(Z_1,Z_2)):=\Maps(S\times Z_1,Z_2)$$
(the right-hand side is easily seen to be an fpqc sheaf with respect to $S$).
\sssec{}
Let $Z_1,Z_2$ be spaces equipped with an action of $\BG_m$. Then we define the space
$\GMaps(Z_1,Z_2)$ as follows: for any scheme $S$,
\begin{equation}
\Maps (S,\GMaps(Z_1,Z_2)):=\Maps (S\times Z_1,Z_2)^{\BG_m}
\end{equation}
(the right-hand side is again easily seen to be an fpqc sheaf with respect to $S$).
The action of $\BG_m$ on $Z_2$ induces a $\BG_m$-action on
$\GMaps(Z_1,Z_2)$.
\sssec{}
Note that even if $Z_1$ and $Z_2$ are schemes, the space $\GMaps(Z_1,Z_2)$ does not have to be a scheme
(or an algebraic space), in general.
\ssec{The space of fixed points} \label{ss:fixed_points}
\sssec{}
Let $Z$ be a space equipped with an action of $\BG_m$. Then we set
\begin{equation}
Z^0:=\GMaps(\on{pt},Z).
\end{equation}
Note that $Z^0$ is a subspace of $Z$ because $\Maps (S,Z^0)=\Maps (S,Z)^{\BG_m}$ is a subset of
$\Maps (S,Z)$.
\begin{defn}
$Z^0$ is called \emph{the subspace of fixed points} of $Z$.
\end{defn}
\sssec{}
We have the following result:
\begin{prop} \label{p:Z^0closed}
If $Z$ is an algebraic space (resp. scheme) of finite type then so is $Z^0$. Moreover, the morphism
$Z^0\to Z$ is a closed embedding.
\end{prop}
The assertion of the proposition is nearly tautological if $Z$ is separated. This case will suffice for most of
the cases of interest to which the main result of this paper applies.
The proof in general is given in \cite[Prop. 1.2.2]{Dr}. It is not difficult; the only surprise is that $Z^0\subset Z$ is closed even if $Z$ is not separated. (Explanation in characteristic zero: $Z^0$ is the subspace of zeros of the vector field on $Z$ corresponding to the $\BG_m$-action.)
\begin{example} \label{ex:fixed-affine}
Suppose that $Z$ is an affine scheme $\Spec(A)$. A $\BG_m$-action on $Z$ is the same as a
$\BZ$-grading on $A$. Namely, the $n$-th component of $A$ consists of
$f\in \Gamma(Z,\CO_Z)$ such that $f(\lambda\cdot z)=\lambda^n\cdot f(z)$.
It is easy to see that $Z^0=\Spec(A^0)$, where $A^0$ is the maximal graded quotient
algebra of $A$ concentrated in degree 0 (in other words, $A^0$ is the quotient of $A$
by the ideal generated by homogeneous elements of non-zero degree).
\end{example}
\ssec{Attractors} \label{ss:attr}
\sssec{}
Let $Z$ be a space equipped with an action of $\BG_m$. Then we set
\begin{equation} \label{e:attr}
Z^+:=\GMaps(\BA^1,Z),
\end{equation}
where $\BG_m$ acts on $\BA^1$ by dilations.
\begin{defn}
$Z^+$ is called the \emph{attractor} of $Z$.
\end{defn}
\sssec{Pieces of structures on $Z^+$} \label{sss:structures}
\noindent(i) $\BA^1$ is a monoid with respect to multiplication. The action of $\BA^1$ on itself induces an
$\BA^1$-action on $Z^+$, which extends the $\BG_m$-action defined in \secref{ss:GMaps}.
\noindent(ii) Restricting a morphism $\BA^1\times S\to Z$ to $\{1\}\times S$ one gets a morphism $S\to Z$. Thus we get a
$\BG_m$-equivariant morphism $p^+:Z^+\to Z$.
Note that if $Z$ is \emph{separated} (i.e., the diagonal morphism $Z\to Z\times Z$ is a closed embedding),
then $p^+:Z^+\to Z$ is a \emph{mono}morphism. To see this, it suffices to interpret
$p^+$ as the composition
\[
\GMaps (\BA^1,Z)\to \GMaps (\BG_m,Z)=Z.
\]
Thus if $Z$ is separated then $p^+$ identifies $Z^+(S)$ with the subset of those points $f:S\to Z$ for which the map $S\times \BG_m\to Z$,
defined by $(s,t)\mapsto t\cdot f(s)$, extends to
a map $S\times \BA^1\to Z$; informally, the limit
\begin{equation} \label{e:limit}
\underset{t\to 0}{\on{lim}}\,\, t\cdot z
\end{equation}
should exist.
\noindent(iii) Recall that $Z^0=\GMaps (\on{pt}, Z)$.
The $\BG_m$-equivariant maps $0:\on{pt}\to\BA^1$ and $\BA^1\to \on{pt}$ induce the maps
$$q^+:Z^+\to Z^0 \text{ and } i^+:Z^0\to Z^+,$$ such that $q^+\circ i^+=\id_{Z^0}$, and the composition $p^+\circ i^+$
is equal to the canonical embedding $Z^0\hookrightarrow Z$.
Note that if $Z$ is separated then for $z\in Z^+(S)\subset Z(S)$ the point $q^+(z)$ is the limit \eqref{e:limit}.
\sssec{The case of a contracting action} \label{sss:contracting}
Let $Z$ be a separated space. Then it is clear that
if a $\BG_m$-action on $Z$ can be extended to an action of the monoid $\BA^1$ then such an extension is unique.
In this case we will say that the $\BG_m$-action is \emph{contracting}.
\begin{prop} \label{p:contracting}
Let $Z$ be a separated space of finite type equipped with a $\BG_m$-action.
The morphism $p^+:Z^+\to Z$ is an isomorphism if and only if the $\BG_m$-action on $Z$ is
contracting.
\end{prop}
\begin{proof}
The ``only if" assertion follows from \secref{sss:structures}(i). For the ``if" assertion, we note that
the $\BA^1$-action on $Z$ defines a morphism $g:Z\to Z^+$ such that the composition of the maps
\begin{equation} \label{e:mono-epi}
Z\overset{g}\longrightarrow Z^+\overset{p^+}\longrightarrow Z
\end{equation}
equals $\id_Z$. Since the map $p^+$ is a monomorphism (see \secref{sss:structures}(ii)), the assertion follows.
\end{proof}
\begin{rem} \label{r:contracting}
In \cite[Prop. 1.4.15]{Dr} it will be shown that if $Z$ is an \emph{algebraic} space of finite type, then
the assertion of \propref{p:contracting} remains valid even if $Z$ is not separated: i.e., $p^+$
is an isomorphism if and only if the $\BG_m$-action on $Z$ can be extended to an $\BA^1$-action;
moreover, such an extension is unique.
\end{rem}
\sssec{The affine case} \label{sss:attractors-affine}
Suppose that $Z$ is affine, i.e., $Z=\Spec(A)$, where $A$ is a $\BZ$-graded commutative algebra.
It is easy to see that in this case $Z^+$ is represented by the affine scheme $\Spec(A^+)$, where $A^+$ is the
maximal $\BZ^{\geq 0}$-graded quotient algebra of $A$ (in other words, the quotient of $A$ by the ideal generated
by all homogeneous elements of $A$ of strictly negative degrees).
By Example~\ref{ex:fixed-affine}, $Z^0=\Spec(A^0)$, where $A^0$ is the maximal graded quotient algebra of
$A$ (or equivalently, of $A^+$) concentrated in degree 0. Since the algebra $A^+$ is $\BZ^{\geq 0}$-graded, $A^0$ identifies with the
$0$-th graded component of $A^+$. Thus we obtain the homomorphisms $A^0\hookrightarrow A^+\twoheadrightarrow A^0$. They correspond to the morphisms
$$Z^0\overset{\;\;q^+}\longleftarrow Z^+\overset{\;\;i^+}\longleftarrow Z^0\,.$$
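For example (a toy illustration of the above), take $Z=\BA^2=\Spec k[x,y]$ with the hyperbolic action
$\lambda\cdot(a,b)=(\lambda\cdot a,\lambda^{-1}\cdot b)$, so that $x$ has degree $1$ and $y$ has degree $-1$.
Then $A^+=k[x,y]/(y)\simeq k[x]$ and $A^0=k[x,y]/(x,y)\simeq k$; i.e., $Z^+$ is the $x$-axis and $Z^0$ is the origin,
and the map $q^+$ contracts the $x$-axis to the origin, in agreement with the limit description \eqref{e:limit}.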
\sssec{Attractors of open/closed subspaces}
We have:
\begin{lem} \label{l:U^+}
Let $Z$ be a space equipped with a $\BG_m$-action, and let $Y\subset Z$
be a $\BG_m$-stable subspace.
\noindent{\em(i)} Suppose that $Y\to Z$ is an open embedding.
Then the subspace $Y^+\subset Z^+$ equals $(q^+)^{-1}(Y^0)$, where $q^+$ is the natural morphism $Z^+\to Z^0$.
\noindent{\em(ii)} Suppose that $Y\to Z$ is a closed embedding.
Then the subspace $Y^+\subset Z^+$ equals $(p^+)^{-1}(Y)$, where $p^+$ is the natural morphism $Z^+\to Z$.
\end{lem}
\begin{proof}
Let $Y\to Z$ be an open embedding. For any test scheme $S$, we have to show that if
$$f:S\times \BA^1\to Z$$
is a $\BG_m$-equivariant morphism such that $S\times \{0\}\subset f^{-1}(Y)$ then
$f^{-1}(Y)=S\times \BA^1$. This is clear because $f^{-1}(Y)\subset S\times \BA^1$ is open and $\BG_m$-stable.
Let $Y\to Z$ be a closed embedding. An $S$-point of $(p^+)^{-1}(Y)$ is a $\BG_m$-equivariant morphism
$f:S\times \BA^1\to Z$ such that
$S\times \BG_m\subset f^{-1}(Y)$. Since $f^{-1}(Y)$ is closed in $S\times \BA^1$ this implies that
$f^{-1}(Y)=S\times \BA^1$, i.e., $f(S\times \BA^1)\subset Y$.
\end{proof}
\ssec{Representability of attractors} \label{ss:Results_attractors}
\sssec{}
We have the following assertion:
\begin{thm} \label{t:attractors}
Let $Z$ be an algebraic space of finite type equipped with a $\BG_m$-action. Then
\noindent\emph{(i)} $Z^+$ is an algebraic space of finite type;
\noindent\emph{(ii)} The morphism $q^+:Z^+\to Z^0$ is affine.
\end{thm}
The proof of this theorem is given in \cite[Thm. 1.4.2]{Dr}. Here we will prove a particular case (see \secref{sss:loc lin}),
sufficient for most of the cases of interest to which the main result of this paper applies.
Combining \thmref{t:attractors} with \propref{p:Z^0closed}, we obtain:
\begin{cor}
\noindent\emph{(i)} If $Z$ is a separated algebraic space of finite type then so is $Z^+$.
\noindent\emph{(ii)} If $Z$ is a scheme of finite type then so is $Z^+$.
\end{cor}
\begin{proof}
Follows from Theorem~\ref{t:attractors}(ii) because by \propref{p:Z^0closed}, $Z^0$ is a closed subspace of~$Z$.
\end{proof}
\sssec{The case of a locally linear action} \label{sss:loc lin}
\begin{defn} \label{d:locally linear}
An action of $\BG_m$ on a scheme $Z$ is said to be \emph{locally linear} if
$Z$ can be covered by open affine subschemes preserved by the $\BG_m$-action.
\end{defn}
\begin{rem} \label{r:q-proj loc lin}
Suppose that $Z$ admits a $\BG_m$-equivariant locally closed embedding into
a projective space $\BP(V)$, where $\BG_m$ acts linearly on $V$. Then the action
of $\BG_m$ is locally linear.
For this reason, locally linear actions include most of the cases of interest that come
up in practice.
\end{rem}
\begin{rem} \label{r:locally linear}
If $k$ is \emph{algebraically closed} and $Z_{\red}$ is a \emph{normal} separated\footnote{We do not know if separateness is really
necessary in Sumihiro's theorem.} scheme of finite type over $k$, then by a theorem of H.~Sumihiro, \emph{any action of $\BG_m$ on
$Z$ is locally linear}. (The proof of this theorem is contained in \cite{Sum} and also in \cite[p.20-23]{KKMS} and \cite{KKLV}.)
\end{rem}
\sssec{}
Let us prove \thmref{t:attractors} in the case of a locally linear action on a scheme. First, we note that \lemref{l:U^+}(i) reduces the assertion
to the case when $Z$ is affine. In the latter case, the assertion is manifest from \secref{sss:attractors-affine}.
\ssec{Further results on attractors} \label{ss:further contractors}
The results of this subsection are included for completeness; they will not be used
for the proof of the main theorem of this paper.
We let $Z$ be an
algebraic space of finite type equipped with a $\BG_m$-action.
\sssec{}
We have:
\begin{prop} \label{p: p^+}
\noindent{\em(i)} If $Z$ is separated then $p^+:Z^+\to Z$ is a monomorphism.
\noindent{\em(ii)} If $Z$ is an affine scheme then $p^+:Z^+\to Z$ is a closed embedding.
\noindent{\em(iii)} If $Z$ is proper then each geometric fiber of $p^+:Z^+\to Z$ is reduced and has exactly one geometric point.
\noindent{\em(iv)} The fiber of $p^+:Z^+\to Z$ over any geometric point of $Z^0\subset Z$ is reduced and has exactly one geometric point.
\end{prop}
\begin{proof}
Point (i) has been proved in \secref{sss:structures}(ii). Point (ii) is manifest from \secref{sss:attractors-affine}. Point (iii)
follows from point (i) and the fact that any morphism from $\BA^1-\{ 0\}$ to a proper scheme extends to the whole $\BA^1$.
After base change, point (iv) is equivalent to the following lemma:
\begin{lem} \label{l:constant}
If $f:\BA^1\to Z$ is a $\BG_m$-equivariant morphism such that
$f(1)\in Z^0$ then $f$ is constant.
\end{lem}
\end{proof}
\begin{proof}[Proof of \lemref{l:constant}]
The map $\on{pt}\to Z$, corresponding to $f(1)\in Z(k)$, is a closed embedding (whether or not $Z$ is separated).
Hence, the assertion follows from \lemref{l:U^+}(ii).
\end{proof}
\begin{example} \label{ex:P^1}
Let $Z$ be the projective line $\BP^1$ equipped with the usual action of $\BG_m\,$. Then $p^+:Z^+\to Z$ is the
canonical morphism $\BA^1\sqcup\{\infty\}\to\BP^1$. In particular, $p^+$ is \emph{not a locally closed embedding.}
\end{example}
\begin{example} \label{e:P^1 glued}
Let $Z$ be the curve obtained from $\BP^1$ by gluing $0$ with $\infty$. Equip $Z$ with the $\BG_m$-action induced by
the usual action on $\BP^1$. The map $\BP^1\to Z$ induces a map $(\BP^1)^+\to Z^+$. It is easy to see that the composed map
$$\BA^1\hookrightarrow (\BP^1)^+\to Z^+$$
is an isomorphism $\BA^1\simeq Z^+$.
\end{example}
\begin{rem}
Suppose that the action of $\BG_m$ is locally linear. Then \propref{p: p^+}(ii) and \lemref{l:U^+} imply that
the map $p^+$ is, \emph{Zariski locally on the source}, a locally closed embedding.
Note, however, that this is not the case in general, as can be seen from Example \ref{e:P^1 glued}.
\end{rem}
\sssec{}
In the example of $\BP^1$, the restriction of $p^+:Z^+\to Z$ to each connected component of $Z^+$ \emph{is} a locally closed embedding.
This turns out to be true in a surprisingly large class of situations (but there are also important examples when this is false):
\begin{thm}
Let $Z$ be a separated scheme over an algebraically closed field $k$ equipped with a $\BG_m$-action.
Then each of the following conditions ensures that the restriction of $p^+:Z^+\to Z$ to each connected component
\footnote{Using the $\BA^1$-action on $Z^+$, it is easy to see that each connected component of $Z^+$ is the preimage of a connected component of
$Z^0$ with respect to the map $q^+:Z^+\to Z^0\,$.} of $Z^+$ is a locally closed embedding:
\noindent{\em(i)} $Z$ is smooth;
\noindent{\em(ii)} $Z$ is normal and quasi-projective;
\noindent{\em(iii)} $Z$ admits a $\BG_m$-equivariant locally closed embedding into
a projective space $\BP(V)$, where $\BG_m$ acts linearly on $V$.
\end{thm}
Case (i) is due to A.~Bia{\l}ynicki-Birula \cite{Bia}. Case (iii) immediately follows from the easy case $Z=\BP(V)$. Case (ii) turns out to be a particular
case of (iii) because by Theorem~1 from \cite{Sum}, if $Z$ is normal and quasi-projective then it admits a $\BG_m$-equivariant locally
closed embedding into a projective space.
\begin{rem}
In case (i) the condition that $Z$ be a scheme (rather than an algebraic space) is essential, as shown by A.~J.~Sommese \cite{Som}.
In case (ii) the quasi-projectivity condition is essential, as shown by
J.~Konarski \cite{Kon} using a method developed by J.~Jurkiewicz \cite{Ju1,Ju2}.
In this example $Z$ is a 3-dimensional toric variety which is proper but not projective; it is constructed by drawing a certain picture on a 2-sphere,
see the last page of \cite{Kon}.
In case (ii) normality is clearly essential, see Example \ref{e:P^1 glued}.
\end{rem}
\ssec{Differential properties}
The results of this subsection are included for the sake of completeness and will not be needed for the sequel.
We let $Z$ be an algebraic space of finite type, equipped with an action of $\BG_m$.
\sssec{}
First, we have:
\begin{lem} \label{l:TZ0}
For any $z\in Z^0$ the tangent space\footnote{We define the tangent space by $T_zZ:=(T_z^*Z)^*$, where
$T_z^*Z$ is the fiber of $\Omega^1_{Z/k}$ at $z$. (The equality $T_z^*Z=m_z/m_z^2$ holds
\emph{if the residue field of $z$ is finite and separable} over $k$.)} $T_zZ^0\subset T_zZ$ equals $(T_zZ)^{\BG_m}$.
\end{lem}
\begin{proof}
We can assume that the residue field of $z$ equals $k$ (otherwise do base change). Then compute $T_zZ^0$ in terms of morphisms
$\Spec k[\varepsilon ]/(\varepsilon^2 )\to Z^0$.
\end{proof}
\sssec{}
Next we claim:
\begin{prop} \label{p:unrami}
Let $Z$ be an algebraic space of finite type equipped with a $\BG_m$-action. Then the
map $p^+:Z^+\to Z$ is unramified.
\end{prop}
\begin{proof}
We can assume that $k$ is algebraically closed. Then we have to check that for any
$\zeta\in Z^+(k)$ the map of tangent spaces
\begin{equation} \label{e:differential}
T_{\zeta}Z^+\to T_{p^+({\zeta})}Z
\end{equation}
induced by $p^+:Z^+\to Z$ is injective. Let $f:\BA^1\to Z$ be the $\BG_m$-equivariant morphism corresponding to $\zeta$. Then
\begin{equation} \label{e:TZ+l}
T_{\zeta}Z^+=\Hom_{\BG_m}(f^*(\Omega^1_Z),\CO_{\BA^1}),
\end{equation}
and the map \eqref{e:differential} assigns to a $\BG_m$-equivariant morphism
$\varphi :f^*(\Omega^1_Z)\to\CO_{\BA^1}$ the corresponding map between fibers at $1\in\BA^1$.
So the kernel of \eqref{e:differential} consists of those
$\varphi\in\Hom_{\BG_m}(f^*(\Omega^1_Z),\CO_{\BA^1})$ for which $\varphi |_{\BA^1-\{ 0\}}=0$.
This implies that $\varphi =0$ because $\CO_{\BA^1}$
has no nonzero sections supported at $0\in\BA^1$.
\end{proof}
\sssec{}
Finally, we claim:
\begin{prop} \label{p:smoothness}
Suppose that $Z$ is smooth. Then $Z^0$ and $Z^+$ are smooth. Moreover, the morphism
$q^+:Z^+\to Z^0$ is smooth.
\end{prop}
\begin{proof}
We will only prove that $q^+$ is smooth. (Smoothness of $Z^0$ can be proved similarly, and smoothness of $Z^+$ follows.)
It suffices to check that $q^+$ is formally smooth.
Let $R$ be a $k$-algebra and $\bar R=R/I$, where
$I\subset R$ is an ideal with $I^2=0$. Let $\bar f :\Spec(\bar R)\times \BA^1\to Z$ be a $\BG_m$-equivariant morphism and let
$\bar f_0:\Spec(\bar R)\to Z^0$
denote the restriction of $\bar f$ to
$$\Spec(\bar R)\overset{\on{id}\times \{0\}}\hookrightarrow \Spec(\bar R)\times \BA^1\,.$$
Let $\varphi :\Spec(R)\to Z^0$ be a morphism extending
$\bar f_0\,$. We have to extend $\bar f$ to a $\BG_m$-equivariant morphism $f :\Spec(R)\times\BA^1\to Z$ so that $f_0=\varphi$, where
$f_0:=f|_{\Spec(R)\times \{0\}}$.
Since $Z$ is smooth, we can find a \emph{not necessarily equivariant} morphism
$f :\Spec(R)\times \BA^1\to Z$ extending $\bar f$ with $f_0=\varphi$. Then standard arguments show
that the obstruction to the existence of a $\BG_m$-equivariant $f$ with the required properties belongs to
$$H^1(\BG_m\, ,M), \quad
M:=H^0(\Spec(\bar R)\times\BA^1\, ,\bar f^*(\Theta_Z)\otimes\CJ )\underset{\bar R}\otimes I,$$
where $\Theta_Z$ is the tangent bundle of $Z$ and $\CJ\subset\CO_{\Spec(R)\times\BA^1}$
is the ideal of the zero section. But $H^1$ of $\BG_m$ with coefficients in any $\BG_m$-module is zero.
\end{proof}
\ssec{Repellers} \label{ss:repeller}
\sssec{}
Set $\BA^1_-:=\BP^1-\{\infty\}$; this is a monoid with respect to multiplication containing $\BG_m$ as a subgroup.
One has an isomorphism of monoids
\begin{equation} \label{e:inversion}
\BA^1\buildrel{\sim}\over{\longrightarrow} \BA^1_-\, , \quad\quad t\mapsto t^{-1}.
\end{equation}
\sssec{}
Given a space $Z$, equipped with a $\BG_m$-action, we set
\begin{equation} \label{e:repel}
Z^-:=\GMaps(\BA^1_-,Z).
\end{equation}
\begin{defn}
$Z^-$ is called the \emph{repeller} of $Z$.
\end{defn}
\sssec{}
Just as in \secref{sss:structures} one defines an
action of the monoid $\BA^1_-$ on $Z^-$ extending the action of $\BG_m\,$, a
$\BG_m$-equivariant morphism $p^-:Z^-\to Z$, and $\BA^1_-$-equivariant morphisms
$q^-:Z^-\to Z^0$ and $i^-:Z^0\to Z^-$ (where $Z^0$ is equipped with the trivial $\BA^1_-$-action).
One has $q^-\circ i^-=\id_{Z^0}\,$, and the composition $p^-\circ i^-$ is equal to the canonical embedding $Z^0\hookrightarrow Z$.
Using the isomorphism \eqref{e:inversion}, one can identify $Z^-$ with the attractor for the inverse
action of $\BG_m$ on $Z$ (this identification is $\BG_m$-\emph{anti}-equivariant). Thus the results on attractors from
Sects.~\ref{sss:attractors-affine} and \ref{ss:Results_attractors} imply similar results for repellers.
In particular, if $Z$ is the spectrum of a $\BZ$-graded algebra $A$ then $Z^-$ canonically identifies with
$\Spec (A^-)$, where $A^-$ is the maximal $\BZ^{\leq 0}$-graded quotient algebra of $A$.
\ssec{Attractors and repellers}
In this subsection we let $Z$ be an algebraic space of finite type, equipped with an action of $\BG_m$.
\sssec{}
First, we claim:
\begin{lem} \label{l:closed}
The morphisms $i^{\pm}:Z^0\to Z^{\pm}$ are closed embeddings.
\end{lem}
\begin{proof}
It suffices to consider $i^+$. By Theorem~\ref{t:attractors}(ii), the morphism $q^+:Z^+\to Z^0$ is affine and hence separated. One has $q^+\circ i^+=\id_{Z^0}\,$.
So $i^+$, being a section of a separated morphism, is a closed embedding.
\end{proof}
\sssec{}
Now consider the fiber product $Z^+\underset{Z}\times Z^-$ formed using the maps
$p^{\pm}:Z^{\pm}\to Z$.
\begin{prop} \label{p:Cartesian}
The map
\begin{equation} \label{e:open-closed}
j:=(i^+,i^-):Z^0\to Z^+\underset{Z}\times Z^-
\end{equation}
is both an open embedding and a closed one (i.e., is the embedding of a union of some connected components).
\end{prop}
\begin{rem}
If $Z$ is affine then $j$ is an isomorphism (this immediately follows from the explicit description of $Z^{\pm}$ in the affine case, see
Sects. \ref{sss:attractors-affine} and \ref{ss:repeller}).
In general, $j$ is not necessarily an isomorphism. To see this, note that by \eqref{e:attr} and \eqref{e:repel}, we have
\begin{equation} \label{e:fiberprod}
Z^+\underset{Z}\times Z^-=\GMaps(\BP^1,Z)
\end{equation}
(where $\BP^1$ is equipped with the usual $\BG_m$-action), and a $\BG_m$-equivariant map $\BP^1\to Z$ does not have to be constant, in general.
\end{rem}
\begin{proof}[Proof of \propref{p:Cartesian}]
We will give a proof in the case when $Z$ is a scheme; the case of an arbitrary algebraic space is treated in \cite[Prop. 1.6.2]{Dr}.
Writing $j$ as
\[
Z^0=Z^0\underset{Z}\times Z^0\,\overset{(i^+,i^-)}\longrightarrow\, Z^+\underset{Z}\times Z^-,
\]
and using \lemref{l:closed}, we see that $j$ is a closed embedding.
To prove that $j$ is an open embedding, we note that the following diagram is Cartesian:
$$
\CD
Z^0 @>{\sim}>> \MMaps^{\BG_m}(\on{pt},Z) @>>> \MMaps(\on{pt},Z) \\
@V{j}VV @VVV @VVV \\
Z^+\underset{Z}\times Z^- @>{\sim}>> \MMaps^{\BG_m}(\BP^1,Z) @>>> \MMaps(\BP^1,Z).
\endCD
$$
Now, the required result follows from the next (easy) lemma:
\begin{lem}
For a scheme $Z$, the map
$$Z=\MMaps(\on{pt},Z) \to \MMaps(\BP^1,Z)$$
induced by the projection $\BP^1\to \on{pt}$ is an open embedding.
\end{lem}
\end{proof}
\begin{cor} \label{c:contractive}
\noindent{\em(i)} If the map $p^+:Z^+\to Z$ is an isomorphism then so are the maps
$Z^0\overset{i^-}\longrightarrow Z^-\overset{q^-}\longrightarrow Z^0$.
\noindent{\em(ii)} If the map $p^-:Z^-\to Z$ is an isomorphism then so are the maps
$Z^0\overset{i^+}\longrightarrow Z^+\overset{q^+}\longrightarrow Z^0$.
\end{cor}
\begin{proof}
Let us prove (ii). By \propref{p:Cartesian}, the morphism $i^+:Z^0\to Z^+$ is an open embedding.
It remains to show that any point $\zeta\in Z^+$ is contained in $i^+(Z^0)$. Set
\[
U_{\zeta}:=\{t\in\BA^1\,|\, t\cdot\zeta\in i^+(Z^0)\}.
\]
We have to show that $1\in U_{\zeta}$. But $U_{\zeta}$ is an open $\BG_m$-stable subset of $\BA^1$ containing $0$, so $U_{\zeta}=\BA^1$.
\end{proof}
\section{Geometry of $\BG_m$-actions: the key construction} \label{s:deg}
We keep the conventions and notation of \secref{s:actions}. The goal of this section is, given an algebraic space $Z$ equipped with
a $\BG_m$-action, to study a certain canonically defined algebraic space $\wt{Z}$, equipped with a morphism
$\wt{Z}\to\BA^1\times Z\times Z$, such that for $t\in\BA^1-\{0\}$ the fiber $\wt{Z}_t$ equals
the graph of the map $t:Z\buildrel{\sim}\over{\longrightarrow} Z$, and the fiber $\wt{Z}_0\,$, corresponding to $t=0$, equals $Z^+\underset{Z^0}\times Z^-$.
As was mentioned in \secref{sss:behind}, the space $\wt{Z}$ is the main geometric ingredient in the proof of
\thmref{t:Braden adj}. However, the reader can skip this section now and return to it when the time comes.
The main points of this section are the following. In \secref{ss:tilde Z} we define the space $\wt{Z}$ and the main pieces of structure on it
(e.g., the morphism $\wt{p}:\wt{Z}\to\BA^1\times Z\times Z$ and the action of $\BG_m\times \BG_m$ on $\wt{Z}$). In \secref{ss:repr of inter}
we address the question of representability of $\wt{Z}$. In \secref{ss:fiber products} we prove
\propref{p:2open embeddings}, which plays a key role in Sect.~\ref{s:Verifying}.
\ssec{A family of hyperbolas} \label{ss:hyperb}
\sssec{} \label{sss:family of hyperbolas}
Set $\BX:=\BA^2=\Spec k[\tau_1,\tau_2]$. Throughout the paper we equip $\BX$ with the structure of a scheme over
$\BA^1$, defined by the map
\[
\BA^2\to\BA^1 ,\quad (\tau_1,\tau_2)\mapsto \tau_1\cdot \tau_2\, .
\]
Let $\BX_t$ denote the fiber of $\BX$ over $t\in\BA^1$; in other words, $\BX_t\subset\BA^2$ is the curve defined by the equation
$\tau_1\cdot \tau_2=t$. If $t\ne 0$ then $\BX_t$ is a hyperbola, while $\BX_0$ is the ``coordinate cross" $\tau_1\cdot \tau_2=0$.
One has $\BX_0=\BX_0^+\cup\BX_0^-$, where
\begin{equation} \label{e:X0+-}
\BX_0^+:=\{(\tau_1,\tau_2)\in\BA^2\,|\, \tau_2=0\}\, , \quad \BX_0^-:=\{(\tau_1,\tau_2)\in\BA^2\,|\, \tau_1=0\}\, .
\end{equation}
\sssec{The schemes $\BX_S\,$} \label{sss:X_S}
For any scheme $S$ over $\BA^1$ set
\begin{equation}
\BX_S:=\BX \underset{\BA^1}\times S\, .
\end{equation}
If $S=\Spec(R)$ we usually write $\BX_R$ instead of $\BX_S\,$.
\sssec{} \label{sss:structure X}
We will need the following pieces of structure on $\BX$:
\noindent{(i)} The projection $\BX\to \BA^1$ admits two canonically defined sections:
\begin{equation} \label{e:two sections}
\sigma_1(t)=(1,t) \text{ and } \sigma_2(t)=(t,1).
\end{equation}
\noindent{(ii)}
The scheme $\BX$ carries a tautological action of the monoid $\BA^1\times \BA^1$:
$$(\lambda_1,\lambda_2)\cdot (\tau_1,\tau_2)=(\lambda_1\cdot \tau_1,\lambda_2\cdot \tau_2)\,.$$
This action covers the action of $\BA^1\times \BA^1$ on $\BA^1$, given by the product
map $\BA^1\times \BA^1\to \BA^1$ and the tautological action of $\BA^1$ on itself.
\noindent{(iii)} In particular, we obtain an action of $\BG_m\times \BG_m$ on $\BX$.
This action covers the action of $\BG_m\times \BG_m$ on $\BA^1$, given by the product
map $\BG_m\times \BG_m\to \BG_m$ and the tautological action of $\BG_m$ on $\BA^1$.
\sssec{The $\BG_m$-action on $\BX_S\,$} \label{sss:theaction}
Consider the action of the anti-diagonal copy of $\BG_m$ on the scheme $\BX$ from \secref{sss:structure X}(iii). That is,
\begin{equation} \label{e:hyperbolic}
\lambda\cdot (\tau_1,\tau_2):=(\lambda\cdot \tau_1,\; \lambda{}^{-1}\cdot \tau_2).
\end{equation}
This action preserves the morphism $\BX\to\BA^1$, so for any scheme $S$ over $\BA^1$ one obtains
an action of $\BG_m$ on $\BX_S\,$.
\begin{rem} \label{r:theaction}
If $S$ is over $\BA^1-\{ 0\}$, then $\BX_S$ is $\BG_m$-equivariantly isomorphic to $S\times \BG_m$
by means of either of the maps $\sigma_1$ or $\sigma_2$.
\end{rem}
\ssec{Construction of the interpolation} \label{ss:tilde Z}
\sssec{}
Given a space $Z$ equipped with a $\BG_m$-action, define
$\wt{Z}$ to be the following space over $\BA^1$: for any scheme $S$ over $\BA^1$ we set
$$\Maps_{\BA^1} (S, \wt{Z}):=\Maps (\BX_S,Z)^{\BG_m},$$
where $\BX_S$ is acted on by $\BG_m$ as in \secref{sss:theaction}.
In other words, for any scheme $S$, an $S$-point of $\wt{Z}$ is a pair consisting of a morphism
$S\to\BA^1$ and a $\BG_m$-equivariant morphism $\BX_S\to Z$.
Note that for any $t\in\BA^1(k)$ the fiber $\wt{Z}_t$ has the following description:
\begin{equation}
\wt{Z}_t=\GMaps (\BX_t\, ,Z).
\end{equation}
\sssec{} \label{sss:tilde p}
The sections $\sigma_1$ and $\sigma_2$ (see \secref{sss:structure X}(i)) define morphisms
$$\pi_1:\wt{Z}\to Z \text{ and } \pi_2:\wt{Z}\to Z,$$
respectively.
Let
\begin{equation} \label{e:tilde p}
\wt{p}:\wt{Z}\to \BA^1\times Z\times Z
\end{equation}
denote the morphism whose first component is the tautological projection $\wt{Z}\to \BA^1$, and
the second and the third components are $\pi_1$ and $\pi_2$, respectively.
\sssec{} \label{sss:action of G_m^2}
Note that the action of the group $\BG_m\times \BG_m$ on $\BX$ from \secref{sss:structure X}(iii)
gives rise to an action of
$\BG_m\times \BG_m$ on $\wt{Z}$. This action has the following properties:
\noindent{(i)} It is compatible with the canonical map $\wt{Z}\to \BA^1$
via the multiplication map $\BG_m\times \BG_m\to \BG_m$ and the \emph{inverse} of the
canonical action of $\BG_m$ on $\BA^1$.
\noindent{(ii)} It is compatible with $\pi_1:\wt{Z}\to Z$ via
the projection on the first factor $\BG_m\times \BG_m\to \BG_m$
and the initial action of $\BG_m$ on $Z$.
\noindent{(iii)} It is compatible with $\pi_2:\wt{Z}\to Z$ via
the projection on the second factor $\BG_m\times \BG_m\to \BG_m$
and the \emph{inverse} of the initial action of $\BG_m$ on $Z$.
\sssec{} \label{sss:anti-diagonal}
Restricting to the \emph{anti-diagonal} copy of $\BG_m\subset \BG_m\times \BG_m$
(i.e., $\lambda\mapsto (\lambda,\lambda^{-1})$), we obtain an action
of $\BG_m$ on $\wt{Z}$. (It is the same action as the one induced by the initial action of $\BG_m$ on $Z$).
This $\BG_m$-action on $\wt{Z}$ preserves the projection $\wt{Z}\to \BA^1$.
Both maps $\pi_1$ and $\pi_2$ are $\BG_m$-equivariant.
\sssec{}
For $t\in \BA^1$ let
\begin{equation} \label{e:tilde p_t}
\wt{p}_t:\wt{Z}_t\to Z\times Z
\end{equation}
denote the morphism induced by \eqref{e:tilde p} (as before, $\wt{Z}_t$ stands for the fiber of $\wt{Z}$ over $t$).
It is clear that $(\wt{Z}_1,\wt{p}_1)$ identifies with $(Z,\Delta_Z:Z\to Z\times Z)$.
More generally, for $t\in \BA^1-\{0\}$ the pair $(\wt{Z}_t,\wt{p}_t)$ identifies with the graph of the map $Z\to Z$ given
by the action of $t\in \BG_m\,$.
More precisely, we have:
\begin{prop} \label{p:outside 0}
The morphism \eqref{e:tilde p} induces an isomorphism between
$$\BG_m\underset{\BA^1}\times \wt{Z}$$
and the graph of the action morphism $\BG_m\times Z\to Z$.
\qed
\end{prop}
\sssec{}
We are now going to construct a canonical morphism
\begin{equation} \label{e:0 fiber}
\wt{Z}_0\to Z^+\underset{Z^0}\times Z^-.
\end{equation}
Recall that $\wt{Z}_0=\GMaps (\BX_0\, ,Z)$ and
$\BX_0=\BX_0^+\cup\BX_0^-$, where $\BX_0^+$ and $\BX_0^-$ are defined by formula~\eqref{e:X0+-}.
One has $\BG_m$-equivariant isomorphisms
\begin{equation}
\BA^1\buildrel{\sim}\over{\longrightarrow}\BX_0^+, \; \; s\mapsto (s,0); \quad\quad\quad \BA^1_-\buildrel{\sim}\over{\longrightarrow}\BX_0^-, \; \; s\mapsto (0,s^{-1}).
\end{equation}
They define a morphism
$$\wt{Z}_0=\GMaps (\BX_0\, ,Z)\to\GMaps (\BX_0^+ ,Z)\buildrel{\sim}\over{\longrightarrow}\GMaps (\BA^1 ,Z)=Z^+$$
and a similar morphism $\wt{Z}_0\to Z^-$.
By construction, the following diagram commutes:
\begin{equation} \label{e:over Z times Z}
\xymatrix{
\wt{Z}_0 \ar[d]^{}\ar[r]^{\wt{p}_0}& Z\times Z\\\
Z^+\underset{Z^0}\times Z^-\ar@{->}[r]^{}&Z^+\times Z^-.\ar[u]_{{p^+}\times {p^-}}
}
\end{equation}
\sssec{}
We now claim:
\begin{prop} \label{p:tilde Z_0}
Let $Z$ be a scheme. Then the map \eqref{e:0 fiber} is an isomorphism.
\end{prop}
\begin{proof}
Follows from the fact that for an affine scheme $S$, the diagram
$$
\CD
S\times \on{pt} @>>> S\times \BX_0^+ \\
@VVV @VVV \\
S\times \BX_0^- @>>> S\times \BX_0
\endCD
$$
is a push-out diagram \emph{in the category of schemes}.
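At the level of rings (a sketch of the affine computation, with $S=\Spec(A)$): the claim amounts to the isomorphism
\[
A[\tau_1,\tau_2]/(\tau_1\cdot\tau_2)\buildrel{\sim}\over{\longrightarrow} A[\tau_1]\underset{A}\times A[\tau_2],\quad
c+\tau_1\cdot f(\tau_1)+\tau_2\cdot g(\tau_2)\mapsto \bigl(c+\tau_1\cdot f(\tau_1),\; c+\tau_2\cdot g(\tau_2)\bigr),
\]
so a morphism from $S\times \BX_0$ to an affine scheme is the same as a pair of morphisms from $S\times \BX_0^+$ and $S\times \BX_0^-$ that agree on $S\times \on{pt}$.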
\end{proof}
\begin{rem} \label{r:tilde Z_0}
In \cite[Prop. 2.1.11]{Dr} it is shown that the map \eqref{e:0 fiber} is an isomorphism
more generally when $Z$ is an algebraic space.
\end{rem}
\begin{rem} \label{r:tilde Z_0'}
Combining the isomorphism \eqref{e:0 fiber} with the isomorphism $\wt{Z}_1\simeq Z$,
we can interpret $\wt{Z}$ as an $\BA^1$-family\footnote{In general, this ``family" is not flat, see the example from \remref{r:nonflat}.} of spaces interpolating between $Z$ and its ``degeneration"
$Z^+\underset{Z^0}\times Z^-$. Hence, the title of the subsection.
\end{rem}
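For a toy example of this interpolation (a hedged illustration, anticipating \propref{p:2new tilde} below and not needed for the sequel): let $Z=\BA^1$ with the weight $1$ action $\lambda\cdot z=\lambda z$. Then $Z^+=\BA^1$, $Z^0=Z^-=\on{pt}$ (the origin), and $\wt{Z}$ identifies, via $\wt{p}$, with
\[
\{(t,x_1,x_2)\in\BA^1\times Z\times Z\,|\, x_2=t\cdot x_1\}:
\]
over $t\ne 0$ the fiber is the graph of multiplication by $t$, while the fiber over $t=0$ is $\{x_2=0\}\simeq Z^+\simeq Z^+\underset{Z^0}\times Z^-$.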
\ssec{Basic properties of the interpolation}
\sssec{}
We have:
\begin{prop} \label{p:closed and open}
\noindent{\em(i)} Let $Y\subset Z$ be a $\BG_m$-stable closed subspace. Then the diagram
\[
\xymatrix{
\wt{Y} \ar[d]_{\wt{p}_Y} \ar[r]^{}& \wt{Z} \ar[d]^{\wt{p}_Z}\\\
\BA^1\times Y\times Y\;\ar@{^{(}->}[r]^{}&\BA^1\times Z\times Z
}
\]
is Cartesian. In particular, the morphism $\wt{Y}\to\wt{Z}$ is a closed embedding.
\noindent{\em(ii)} Let $Y\subset Z$ be a $\BG_m$-stable open subspace. Then the above diagram identifies
$\wt{Y}$ with an open subspace of the fiber product
\[
\wt{Z}\underset{\BA^1\times Z\times Z}\times (\BA^1\times Y\times Y)\, .
\]
In particular, the morphism $\wt{Y}\to\wt{Z}$ is an open embedding.
\end{prop}
\begin{proof}
Set $$\oBX:=\BX-\{0\},$$
where $0\in \BX$ is the zero in $\BX=\BA^2$. For $S\to \BA^1$, set
$\oBX_S:=\BX_S\underset{\BX}\times \oBX$.
(i) Let $S$ be a scheme over $\BA^1$ and $f:\BX_S\to Z$ a $\BG_m$-equivariant morphism.
Formula~\eqref{e:two sections} defines two sections of the map $\BX_S\to S$. We have to
show that if $f$ maps these sections to $Y\subset Z$ then $f(\BX_S)\subset Y$. By
$\BG_m$-equivariance, we have
$$f(\oBX_S)\subset Y\,.$$
Since $\oBX_S$ is schematically dense in $\BX_S$ this implies that
$f(\BX_S)\subset Y$.
(ii) Just as before, we have a $\BG_m$-equivariant morphism $f:\BX_S\to Z$ such that
$f(\oBX_S)\subset Y$.
The problem is now to show that the set
\[
\{ s\in S\,|\,\BX_s\subset f^{-1}(Y)\}
\]
is open in $S\,$.
The complement of this set equals $\pr_S(\BX_S -f^{-1}(Y))$, where $\pr_S :\BX_S\to S$ is the projection.
The set $\pr_S(\BX_S -f^{-1}(Y))$ is closed in $S$ because
$\BX_S -f^{-1}(Y)$ is a closed subset of $\BX_S -\oBX_S$, while
the morphism $\BX_S -\oBX_S\to S$ is closed (in fact, it is a closed embedding).
\end{proof}
\sssec{}
Next we claim:
\begin{prop} \label{sss:props tilde p}
Let $Z$ be separated. Then the map
$$\wt{p}:\wt{Z}\to \BA^1\times Z\times Z$$
is a monomorphism.
\end{prop}
\begin{proof}
As before, set $\oBX:=\BX-\{0\}$, where $0\in \BX$ is the zero in $\BX=\BA^2$.
Given a map $S\to \wt{Z}$, the corresponding $\BG_m$-equivariant map
$$\oBX_S\to Z$$
is completely determined by the composition
$$S\to \wt{Z} \overset{\wt{p}}\longrightarrow \BA^1\times Z\times Z\,.$$
Now use the fact that $\oBX_S$ is schematically dense in $\BX_S$.
\end{proof}
\begin{cor}
If $Z$ is separated then so is $\wt{Z}$.
\end{cor}
\sssec{The affine case}
We are going to prove:
\begin{prop} \label{p:2new tilde}
Assume that $Z$ is an affine scheme of finite type. Then the morphism $\wt{p}:\wt{Z}\to \BA^1\times Z\times Z$ is a closed embedding.
In particular, $\wt{Z}$ is an affine scheme of finite type.
\end{prop}
\begin{proof}
If $Z$ is a closed subscheme of an affine scheme $Z'$ and the proposition holds for $Z'$ then it holds for
$Z$ by \propref{p:closed and open}(i). So we are reduced to the case that $Z$ is a finite-dimensional
vector space equipped with a linear $\BG_m$-action.
If the proposition holds for affine schemes $Z_1$ and $Z_2$ then it holds for $Z_1\times Z_2\,$.
So we are reduced to the case that $Z=\BA^1$ and $\lambda\in\BG_m$ acts on $\BA^1$ as multiplication by
$\lambda^n$, $n\in\BZ$.
In this case it is straightforward to compute $\wt{Z}$ directly.
In particular, one checks that $\wt{p}$ identifies $\wt{Z}$ with the closed subscheme of
$\BA^1\times Z\times Z$ defined by the equation $x_2=t^n\cdot x_1$ if $n\ge 0$ and by the equation
$x_1=t^{-n}\cdot x_2$ if $n\le 0$ (here $t,x_1,x_2$ are the coordinates on $\BA^1\times Z\times Z=\BA^3$).
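Here is a sketch of this computation for $n\ge 0$ (the case $n\le 0$ is symmetric). Write $S=\Spec(R)$ with $t\in R$. A $\BG_m$-equivariant morphism $\BX_R\to\BA^1$ is the same as an element of $R[\tau_1,\tau_2]/(\tau_1\tau_2-t)$ that is homogeneous of degree $n$ for the grading in which $\tau_1,\tau_2$ have degrees $1,-1$. Since $\tau_1^a\tau_2^b\equiv t^b\cdot\tau_1^{a-b}$, this graded piece equals $R\cdot\tau_1^n$, so such a morphism has the form $x\cdot\tau_1^n$ for a unique $x\in R$. Evaluating at the sections $\sigma_1=(1,t)$ and $\sigma_2=(t,1)$ gives
\[
x_1=x,\qquad x_2=t^n\cdot x,\qquad\text{i.e.,}\qquad x_2=t^n\cdot x_1\, .
\]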
\end{proof}
\ssec{Representability of the interpolation} \label{ss:repr of inter}
\sssec{}
We have the following assertion, which is proved in \cite[Thm. 2.2.2 and Prop. 2.2.3]{Dr}:
\begin{thm} \label{t:tildeZ}
Let $Z$ be an algebraic space (resp., scheme) of finite type equipped with a $\BG_m$-action. Then
$\wt{Z}$ is an algebraic space (resp., scheme) of finite type.
\end{thm}
Below we will give a proof in the case when $Z$ is a scheme and the action of $\BG_m$ on $Z$ is locally linear.
This case will be sufficient for the applications in this paper.
\begin{proof}
By assumption, $Z$ can be covered by open affine $\BG_m$-stable subschemes $U_i$. By \propref{p:2new tilde}, each
$\wt{U}_i$ is an affine scheme of finite type. By \propref{p:closed and open}(ii), for each $i$ the
canonical morphism $\wt{U}_i\to\wt{Z}$ is an open embedding. It remains to show that $\wt{Z}$ is
covered by the open subschemes $\wt{U}_i$.
It suffices to check that for each $t\in\BA^1$ the fiber $\wt{Z}_t$ is covered by the open subschemes
$(\wt{U}_i)_t$. For $t\ne 0$ this is clear from \propref{p:outside 0}. It remains to consider the case $t=0$.
By \propref{p:tilde Z_0}, $\wt{Z}_0=Z^+\underset{Z^0}\times Z^-$. So a point of $\wt{Z}_0$ is a pair
$(z^+,z^-)\in Z^+\times Z^-$ such that $q^+(z^+)=q^-(z^-)$. The point $q^+(z^+)=q^-(z^-)$ is contained in some
$U_i\,$. By Lemma~\ref{l:U^+}(i), we have $z^+,z^-\in U_i\,$. So our point $(z^+,z^-)\in\wt{Z}_0$ belongs to
$ (\wt{U}_i)_0\,$.
\end{proof}
\sssec{The contracting situation}
Let $Z$ be an algebraic space of finite type, and assume that the $\BG_m$-action on $Z$ is contracting,
i.e., the $\BG_m$-action can be extended to an action of the monoid $\BA^1$ (recall that such an extension is unique, see
\secref{sss:contracting} including \remref{r:contracting}). In this case we claim:
\begin{prop} \label{p:2contracting}
\noindent{\em(i)} The morphism $\wt{p}:\wt{Z}\to \BA^1\times Z\times Z$ identifies $\wt{Z}$
with the graph of the $\BA^1$-action on $Z$; in particular, the composition
\begin{equation} \label{e:first iso}
\wt{Z}\overset{\wt{p}}\longrightarrow\BA^1\times Z\times Z\to\BA^1\times Z\times \on{pt}=\BA^1\times Z
\end{equation}
is an isomorphism.
\noindent{\em(ii)} The inverse of \eqref{e:first iso} is the morphism
\begin{equation} \label{e:beta}
Z\times \BA^1\to\wt{Z},
\end{equation}
corresponding to the $\BG_m$-equivariant map $Z\times \BX\to Z$, defined by
\[
(z,\tau_1,\tau_2)\mapsto\tau_1\cdot z\, ,\quad\quad (\tau_1,\tau_2)\in\BX\, ,\; z\in Z.
\]
\end{prop}
\begin{proof}
Let $\alpha :\wt{Z}\to\BA^1\times Z$ denote the composition \eqref{e:first iso} and
$\beta :\BA^1\times Z\to\wt{Z}$ the morphism~\eqref{e:beta}. It is easy to see that $\alpha\circ\beta=\id$.
In order to prove that $\beta\circ \alpha=\id$, it is enough to show that $\alpha$ is a monomorphism. By
\thmref{t:tildeZ}, we are dealing with a morphism between algebraic spaces of finite type, so being a
monomorphism is a fiber-wise condition. Thus, it suffices to show that $\alpha$ induces an isomorphism between fibers
over any $t\in\BA^1$.
For $t\ne 0$ this follows from \propref{p:outside 0}. If $t=0$ then by \propref{p:tilde Z_0}
(resp., Remark \ref{r:tilde Z_0} in the case of algebraic spaces), the morphism in question is the composition
$$Z^+\underset{Z^0}\times Z^-\to Z^+\overset{p^+}\longrightarrow Z\,.$$
By \propref{p:contracting} (resp., Remark \ref{r:contracting} in the non-separated case), $p^+$ is an isomorphism,
and the projection $q^-:Z^-\to Z^0$ is also an isomorphism by \corref{c:contractive}(i).
\end{proof}
\sssec{}
From \propref{p:2contracting} we formally obtain the following one:
\begin{prop} \label{p:dilating}
Let $Z$ be an algebraic space, and assume that the \emph{inverse} of the $\BG_m$-action on $Z$ is contracting. Then:
\noindent{\em(i)} the morphism $\wt{p}:\wt{Z}\to \BA^1\times Z\times Z$ is a monomorphism, which identifies
$\wt{Z}$ with
\[
\{(t,z_1,z_2\,)\in\BA^1\times Z\times Z\,|\, z_1=t^{-1}\cdot z_2\,\};
\]
in particular, the composition
\begin{equation} \label{e:second iso}
\wt{Z}\overset{\wt{p}}\longrightarrow\BA^1\times Z\times Z\to\BA^1\times \on{pt}\times Z=\BA^1\times Z
\end{equation}
is an isomorphism.
\noindent{\em(ii)} The inverse of \eqref{e:second iso} is the morphism
\begin{equation} \label{e:2beta}
Z\times \BA^1\to\wt{Z},
\end{equation}
corresponding to the $\BG_m$-equivariant map $Z\times \BX\to Z$, defined by
\[
(z,\tau_1,\tau_2)\mapsto\tau_2^{-1}\cdot z\, ,\quad\quad (\tau_1,\tau_2)\in\BX\, ,\; z\in Z.
\]
\end{prop}
\ssec{Further properties of the interpolation}
The material in this subsection is included for completeness and will not be used in the sequel.
Throughout this subsection, $Z$ will be an algebraic space of finite type equipped
with a $\BG_m$-action.
\sssec{}
We claim:
\begin{prop} \label{p:2smoothness}
If $Z$ is smooth then the canonical morphism $\wt{Z}\to\BA^1$ is smooth.
\end{prop}
\begin{proof}
It suffices to check formal smoothness. We proceed just as in the proof of \propref{p:smoothness}.
Let $R$ be a $k$-algebra equipped with a morphism ${\mathcal S}pec(R)\to\BA^1$. Let $\bar R=R/I$, where
$I\subset R$ is an ideal with $I^2=0$. Let $\bar f\in\Maps (\BX_{\bar R}\, , Z)^{\BG_m}$.
We have to show that $\bar f$ can be lifted to
an element of $\Maps (\BX_R\, , Z)^{\BG_m}$. Since $\BX_R$ is affine and $Z$ is smooth there
is no obstruction to lifting $\bar f$ to an element of $\Maps (\BX_R, Z)$. The standard arguments
show that the obstruction to existence of a $\BG_m$-equivariant lift is in $H^1(\BG_m\, ,M)$, where
$M:=H^0(\BX_{\bar R}\, ,\bar f^*(\Theta_Z))\underset{\bar R}\otimes I$ and $\Theta_Z$ is the tangent bundle of $Z$.
But $H^1$ of $\BG_m$ with coefficients in any $\BG_m$-module is zero.
\end{proof}
\sssec{}
Let $Z$ be affine. In this case, by \propref{p:2new tilde}, the morphism $\wt{p}$ identifies $\wt{Z}$ with the closed subscheme
$\wt{p}(\wt{Z})\subset\BA^1\times Z\times Z$. By \propref{p:outside 0}, the intersection of $\wt{p}(\wt{Z})$ with the open subscheme
$$\BG_m\times Z\times Z\subset \BA^1\times Z\times Z$$
is equal to the graph of the action map $\BG_m\times Z\to Z$.
Hence, $\wt{Z}$ contains the closure of the graph in $\BA^1\times Z\times Z$.
\begin{rem} \label{r:nonflat}
In general, this containment is
not an equality. E.g., this happens if $Z$ is the hypersurface in $\BA^{2n}$ defined by the equation
$x_1\cdot y_1+\ldots+x_n\cdot y_n=0$ and the $\BG_m$-action on $Z$ is defined by
$\lambda(x_1\, ,\dots,x_n\, ,y_1\, ,\ldots, y_n)=
(\lambda\cdot x_1\, ,\ldots,\lambda\cdot x_n\, ,\lambda^{-1}\cdot y_1\, ,\ldots,\lambda^{-1}\cdot y_n)$.
\end{rem}
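A sketch of the dimension count behind this example (not needed elsewhere): for the above $Z$ one has $Z^+=\{y=0\}\simeq\BA^n$, $Z^-=\{x=0\}\simeq\BA^n$ and $Z^0=\{0\}$, so
\[
\dim \wt{Z}_0=\dim\bigl(Z^+\underset{Z^0}\times Z^-\bigr)=2n,
\]
whereas the graph of the action map inside $\BG_m\times Z\times Z$ has dimension $\dim Z+1=2n$, so the fiber of its closure over $0\in\BA^1$ has dimension at most $2n-1$. Hence the closure cannot contain all of $\wt{p}(\wt{Z})\cap(\{0\}\times Z\times Z)$. In particular, the fiber dimension of $\wt{Z}\to\BA^1$ jumps at $t=0$, which is why this family is not flat.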
However, one has the following:
\begin{prop}
If $Z$ is affine and smooth then
$$\wt{p}(\wt{Z})=\overline{\Gamma},$$
where $\Gamma\subset\BG_m\times Z\times Z$ is the graph of the action map $\BG_m\times Z\to Z$
and $\overline{\Gamma}$ denotes its scheme-theoretical closure in $ \BA^1\times Z\times Z\,$.
\end{prop}
Indeed, this immediately follows from \propref{p:2smoothness}.
\sssec{}
We claim:
\begin{prop} \label{p:props tilde p'}
The morphism $\wt{p}:\wt{Z}\to \BA^1\times Z\times Z$ is unramified.
\end{prop}
\begin{proof}
The morphism $\wt{p}$ is of finite presentation (because $\wt{Z}$ and $ \BA^1\times Z\times Z$ have finite type over $k$). It remains to check the condition on the geometric fibers of $\wt{p}$.
Over $\BA^1-\{0\}$, it follows from \propref{p:outside 0}.
Over $0\in \BA^1$ it follows from \propref{p:unrami} combined with \propref{p:tilde Z_0} (for schemes) and
Remark \ref{r:tilde Z_0} (for arbitrary algebraic spaces).
\end{proof}
\sssec{}
Recall that according to \propref{p:2new tilde}, if $Z$ is affine, the map $\wt{p}$ is a closed embedding.
Note, however, that if $Z$ is the projective line $\BP^1$ equipped with the usual $\BG_m$-action then the map
$\wt{p}:\wt{Z}\to \BA^1\times Z\times Z$ is \emph{not a closed} embedding (because, e.g., the scheme
$\wt{Z}_0=Z^+\underset{Z^0}\times Z^-$ is not proper).
We have the following assertion:
\begin{prop} \label{p:Pn}
Let $Z$ be a projective space $\BP^n$ equipped with an arbitrary $\BG_m$-action.
Then the morphism $\wt{p}:\wt{Z}\to \BA^1\times Z\times Z$ is a locally closed embedding.
\end{prop}
\begin{proof}
For a suitable coordinate system in $\BP^n$, the $\BG_m$-action is given by
\[
\lambda *(z_0:\ldots :z_n)=(\lambda^{m_0}\cdot z_0: \ldots :\lambda^{m_n}\cdot z_n),\quad \lambda\in\BG_m \, .
\]
Let $U_i\subset Z=\BP^n$ denote the open subset defined by the condition $z_i\ne 0$. It is affine, so by
\propref{p:2new tilde}, the canonical morphism $\wt{U}_i\to\BA^1\times U_i\times U_i$ is a closed embedding.
Thus to finish the proof of the proposition, it suffices to show that
$\wt{p}^{-1}(\BA^1\times U_i\times U_i)=\wt{U}_i\,$. By \propref{p:outside 0},
$\wt{p}^{-1}(\BG_m\times U_i\times U_i)=\BG_m\underset{\BA^1}\times \wt{U}_i\,$. So it remains to prove that the
morphism $\wt{p}_0:\wt{Z}_0\to Z\times Z$ has the following property:
$(\wt{p}_0)^{-1}(U_i\times U_i)=(\wt{U}_i)_0\,$. Identifying $\wt{Z}_0$ with $Z^+\underset{Z^0}\times Z^-$ and
using \lemref{l:U^+}(i), we see that it remains to prove the following lemma:
\begin{lem} \label{l:Pn}
Let $z^+,z^-\in\BP^n$. Suppose that
\[
\lim_{\lambda\to 0}\lambda*z^+=\lim_{\lambda\to\infty }\lambda*z^-=\zeta\, .
\]
If $z^+,z^-\in U_i$ then $\zeta\in U_i\,$.
\end{lem}
\end{proof}
\begin{proof}[Proof of \lemref{l:Pn}]
Write $z^+=(z^+_0:\ldots :z^+_n)$, $z^-=(z^-_0:\ldots :z^-_n)$, $\zeta =(\zeta_0:\ldots :\zeta_n)$.
We have $z^{\pm}_i\ne 0$, and the problem is to show that $\zeta_i\ne 0$.
Suppose that $\zeta_i= 0$. Choose $j$ so that $\zeta_j\ne 0\,$. Then $z^{\pm}_j\ne 0$ and
\[
\lim_{\lambda\to 0}\lambda^{m_i-m_j}\cdot (z^+_i/z^+_j)=\zeta_i/\zeta_j=0, \quad
\lim_{\lambda\to \infty}\lambda^{m_i-m_j}\cdot (z^-_i/z^-_j)=\zeta_i/\zeta_j=0\, .
\]
This means that $m_i>m_j$ and $m_i<m_j$ at the same time, which is impossible.
\end{proof}
\sssec{}
As a corollary of \propref{p:Pn}, combined with \propref{p:closed and open},
we obtain that if $Z$ admits a $\BG_m$-equivariant locally closed embedding into
a projective space $\BP(V)$, where $\BG_m$ acts linearly on $V$, then the morphism
$\wt{p}:\wt{Z}\to \BA^1\times Z\times Z$ is a locally closed
embedding. (Recall, however, that the map $p^{\pm}:Z^{\pm}\to Z$ is typically not a locally closed embedding,
see Example \ref{ex:P^1}.)
\sssec{}
More generally, suppose that the $\BG_m$-action on $Z$ is locally linear. Then the proof of
\thmref{t:tildeZ} shows that in this case the map $\wt{p}$ is, \emph{Zariski locally on the source}, a
locally closed embedding.
However, even this is not the case for a general $Z$:
Consider the curve $Z$ obtained from $\BP^1$ by gluing $0$ with $\infty$.
Equip $Z$ with the $\BG_m$-action induced by the usual action on $\BP^1$. Then $\wt{p}:\wt{Z}\to \BA^1\times Z\times Z$ is
not a locally closed embedding, locally on the source. In fact, already $\wt{p}_0:\wt{Z}_0\to Z\times Z$ is not a locally closed embedding
locally on the source (because the maps $p^{\pm}:Z^{\pm}\to Z$ are not).
\ssec{Some fiber products} \label{ss:fiber products}
In this subsection we let $Z$ be an algebraic space of finite type, equipped with an action of $\BG_m$.
\sssec{}
In \secref{sss:tilde p} we defined morphisms $\pi_1,\pi_2:\wt{Z}\to Z$.
In \secref{s:Verifying} we will need to consider the fiber product
\begin{equation} \label{e:fibered1}
Z^-\underset{Z}\times \wt{Z}\, ,
\end{equation}
formed using $\pi_1:\wt{Z}\to Z$, and the fiber product
\begin{equation} \label{e:fibered2}
\wt{Z}\underset{Z}\times Z^+ \, ,
\end{equation}
formed using $\pi_2:\wt{Z}\to Z$.
\sssec{} \label{sss:defining the 2 maps}
Consider the composition
\begin{equation} \label{e:embedding1}
\BA^1\times Z^+\to\wt{Z^+}=\wt{Z^+}\underset{Z^+}\times Z^+ \to \wt{Z}\underset{Z}\times Z^+,
\end{equation}
where the first arrow is the morphism \eqref{e:beta} for the space $Z^+$ and the second arrow is induced by
the morphism $p^+:Z^+\to Z$. Consider also the similar composition
\begin{equation} \label{e:embedding2}
\BA^1\times Z^-\to\wt{Z^-}=Z^-\underset{Z^-}\times\wt{Z^-} \to Z^-\underset{Z}\times\wt{Z},
\end{equation}
where the first arrow is the morphism \eqref{e:2beta} for the space $Z^-$.
In \secref{s:Verifying} we will need the following result.
\begin{prop} \label{p:2open embeddings}
The compositions \eqref{e:embedding1} and \eqref{e:embedding2} are open embeddings.
\end{prop}
Note that unlike the situation of \propref{p:Cartesian}, these embeddings are usually not closed.
\begin{rem}
By Propositions~\ref{p:2contracting} and \ref{p:dilating}, the maps $\BA^1\times Z^+\to\wt{Z^+}$ and
$\BA^1\times Z^-\to\wt{Z^-}$ are isomorphisms, so \propref{p:2open embeddings} means that the morphisms
\[
\wt{Z^+}\to\wt{Z}\underset{Z}\times Z^+ ,\quad \wt{Z^-}\to Z^-\underset{Z}\times\wt{Z}
\]
are open embeddings.
\end{rem}
\begin{rem}
In the course of the proof of \propref{p:2open embeddings} we will see that if $Z$ is affine,
then the maps \eqref{e:embedding1} and \eqref{e:embedding2} are isomorphisms.
\end{rem}
\sssec{}
We will prove \propref{p:2open embeddings} assuming that the action of $\BG_m$ on $Z$ is
locally linear. The general case is considered in \cite[Prop. 3.1.3]{Dr}.
We will show that \eqref{e:embedding1} is an open embedding. The case of \eqref{e:embedding2}
is similar.
\begin{proof}
First, \propref{p:closed and open}(ii) and \lemref{l:U^+}(i) allow us to
reduce the assertion to the case when $Z$ is affine. In the
affine case we will show that the map \eqref{e:embedding1} is an isomorphism.
Next, it follows from \propref{p:closed and open}(i) and \lemref{l:U^+}(ii)
that if $Z\to Z'$ is a closed embedding
and \eqref{e:embedding1} is an isomorphism for $Z'$, then it is also an isomorphism for $Z$.
This reduces the assertion to the case when $Z$ is a vector space equipped with a linear action
of $\BG_m$.
Third, it is easy to see that if $Z=Z_1\times Z_2$, and \eqref{e:embedding1} is an isomorphism for $Z_1$
and $Z_2$, then it is an isomorphism for $Z$. This reduces the assertion further to the case when either
the action of $\BG_m$ on $Z$ or its inverse is contracting.
Suppose that the action is contracting. In this case $Z^+\simeq Z$ by \propref{p:contracting},
and under this identification the map
$$\wt{Z^+}\underset{Z^+}\times Z^+ \to \wt{Z}\underset{Z}\times Z^+,$$
appearing in \eqref{e:embedding1}, is the identity map.
Suppose that the inverse of the given $\BG_m$-action on $Z$ is contracting. By \corref{c:contractive}(ii),
we can identify $Z^+\simeq Z^0$, and by \propref{p:dilating}, $\wt{Z}\simeq \BA^1\times Z$. Under
these identifications, the map \eqref{e:embedding1} is the identity map
$$\BA^1\times Z^0\to (\BA^1\times Z)\underset{Z}\times Z^0\simeq \BA^1\times Z^0\,.$$
\end{proof}
\section{Braden's theorem} \label{s:Braden1}
From now on we will assume that the ground field $k$ has characteristic 0 (because we will be working with D-modules).
The goal of this section is to state Braden's theorem (\thmref{t:braden original}) in the context of D-modules,
and reduce it to another statement (\thmref{t:Braden adj}) that says that certain two functors are adjoint.
Braden's theorem applies to any algebraic space $Z$ of finite type over $k$, equipped with an action
of $\BG_m$. The reader may prefer to restrict his attention to the case of $Z$ being a scheme or even a separated
scheme.
Furthermore, because of Remark \ref{r:q-proj loc lin}, for most applications, it is sufficient to consider the case
when the $\BG_m$-action on $Z$ is locally linear, which would make the present paper self-contained, as the main technical results in Sects.
\ref{s:actions}-\ref{s:deg} were proved only in this case.
\ssec{Statement of Braden's theorem: the original formulation} \label{ss:original statement}
\sssec{} \label{sss:mon}
Let $G$ be an algebraic group. If $Z$ is an algebraic space of finite type equipped with a $G$-action, then
$$\Dmod(Z)^{G\on{-mon}}\subset \Dmod(Z)$$ stands for the
full subcategory generated by the essential image of the pullback functor
$\Dmod(Z/G)\to \Dmod(Z)$, where $Z/G$ denotes the quotient \emph{stack}. Here one can use either the $!$- or the $\bullet$-pullback: this
makes no difference as the morphism $Z\to Z/G$ is smooth, and hence the two pullback functors differ by the cohomological shift by $2\cdot\dim(G)$.
Note that if the $G$-action is trivial then
$\Dmod(Z)^{G\on{-mon}}=\Dmod(Z)$ (because the morphism $Z\to Z/G$ admits a section).
\sssec{}
From now on let $Z$ be an algebraic space of finite type equipped with a $\BG_m$-action. Consider the commutative diagram
\begin{equation} \label{e:square with arrow}
\xy
(0,0)*+{Z^+\underset{Z}\times Z^-}="X";
(30,0)*+{Z^+}="Y";
(0,-30)*+{Z^-}="Z";
(30,-30)*+{Z.}="W";
(-20,20)*+{Z^0}="U";
{\ar@{->}_{p^-} "Z";"W"};
{\ar@{->}^{p^+} "Y";"W"};
{\ar@{->}_{'p^-} "X";"Y"};
{\ar@{->}^{'p^+} "X";"Z"};
{\ar@{->}_{j} "U";"X"};
{\ar@{->}_{i^-} "U";"Z"};
{\ar@{->}^{i^+} "U";"Y"};
\endxy
\end{equation}
(The definitions of $Z^0$, $Z^{\pm}$, $i^{\pm}$, and $p^{\pm}$ were given in Sects.~\ref{ss:fixed_points},
\ref{ss:attr}, and \ref{ss:repeller}.)
Recall that by \propref{p:Cartesian}, the morphism
$j:Z^0\to Z^+\underset{Z}\times Z^-$ is an open embedding (and also a closed one).
We consider the categories
$$\Dmod(Z)^{\BG_m\on{-mon}},\,\, \Dmod(Z^+)^{\BG_m\on{-mon}},\,\, \Dmod(Z^-)^{\BG_m\on{-mon}}$$ and
$$\Dmod(Z^0)^{\BG_m\on{-mon}}=\Dmod(Z^0).$$
Consider the functors
$$(p^+)^!:\Dmod(Z)^{\BG_m\on{-mon}}\to \Dmod(Z^+)^{\BG_m\on{-mon}} \text{ and }
(i^-)^!:\Dmod(Z^-)^{\BG_m\on{-mon}}\to \Dmod(Z^0).$$
The formalism of pro-categories (see Appendix \ref{s:pro}) also provides the functors
$$(p^-)^\bullet: \Dmod(Z)^{\BG_m\on{-mon}}\to \on{Pro}(\Dmod(Z^-)^{\BG_m\on{-mon}})$$ and
$$(i^+)^\bullet: \Dmod(Z^+)^{\BG_m\on{-mon}}\to \on{Pro}(\Dmod(Z^0)),$$
left adjoint in the sense of Sect. A.3 to
$$(p^-)_\bullet:\Dmod(Z^-)^{\BG_m\on{-mon}}\to \Dmod(Z)^{\BG_m\on{-mon}}$$ and
$$(i^+)_\bullet: \Dmod(Z^0)\to \Dmod(Z^+)^{\BG_m\on{-mon}},$$
respectively.
\sssec{}
Consider the composed functors
$$(i^+)^\bullet\circ (p^+)^! \text{ and } (i^-)^!\circ (p^-)^\bullet,\quad
\Dmod(Z)^{\BG_m\on{-mon}}\to \on{Pro}(\Dmod(Z^0)).$$
They are called the functors of \emph{hyperbolic restriction}.
\sssec{}
We claim that there is a canonical natural transformation
\begin{equation} \label{e:Braden trans}
(i^+)^\bullet\circ (p^+)^! \to (i^-)^!\circ (p^-)^\bullet.
\end{equation}
Namely, the natural transformation \eqref{e:Braden trans} is obtained via the $((i^+)^\bullet,(i^+)_\bullet)$-adjunction
from the natural transformation
\begin{equation} \label{e:the natural transformation}
(p^+)^! \to (i^+)_\bullet\circ (i^-)^!\circ (p^-)^\bullet,
\end{equation}
defined in terms of diagram \eqref{e:square with arrow} as follows.
Note that since $j:Z^0\to Z^+\underset{Z}\times Z^-$ is an \emph{open embedding} (see \propref{p:Cartesian}),
the functor $j^!$ is left adjoint to
$j_\bullet\,$. Now define the morphism \eqref{e:the natural transformation} to be the composition
\begin{multline*}
(p^+)^!\to (p^+)^!\circ (p^-)_\bullet \circ (p^-)^\bullet \simeq ({}'p^-)_\bullet\circ ({}'p^+)^!\circ (p^-)^\bullet\to \\
\to ({}'p^-)_\bullet\circ j_\bullet\circ j^! \circ ({}'p^+)^! \circ (p^-)^\bullet
\simeq (i^+)_\bullet\circ (i^-)^!\circ (p^-)^\bullet,
\end{multline*}
where $(p^+)^!\circ (p^-)_\bullet\simeq ({}'p^-)_\bullet\circ ({}'p^+)^!$
is the base change isomorphism and the map
$$\on{Id}\to j_\bullet\circ j^!$$
comes from the $(j^!,j_\bullet)$-adjunction.
\sssec{}
We are now ready to state Braden's theorem:
\begin{thm} \label{t:braden original}
The functors
$$(i^+)^\bullet\circ (p^+)^! \text{ and } (i^-)^!\circ (p^-)^\bullet, \quad \Dmod(Z)^{\BG_m\on{-mon}}\to \on{Pro}(\Dmod(Z^0))$$
take values in $\Dmod(Z^0)\subset \on{Pro}(\Dmod(Z^0))$
and the map
\eqref{e:Braden trans}
is an isomorphism.
\end{thm}
\begin{rem}
As we will see in \secref{sss:contr}, the fact that the functor $(i^+)^\bullet\circ (p^+)^!$ takes values in
$\Dmod(Z^0)\subset \on{Pro}(\Dmod(Z^0))$ is easy to prove.
The fact that the functor $(i^-)^!\circ (p^-)^\bullet$
takes values in $\Dmod(Z^0)$ will follow \emph{a posteriori} from the isomorphism with
$(i^+)^\bullet\circ (p^+)^!$.
\end{rem}
\ssec{Contraction principle}
\sssec{}
Assume for a moment that the $\BG_m$-action on $Z$ extends\footnote{By Remark \ref{r:contracting}, such an extension is unique if it exists.}
to an action of the monoid $\BA^1$. (Informally, this means
that the $\BG_m$-action on $Z$ contracts it onto the fixed point locus $Z^0$.)
\begin{prop} \label{p:simple Braden}
In the above situation we have the following:
\noindent{\em(a)} The left adjoint $i^\bullet:\Dmod(Z)\to \on{Pro}(\Dmod(Z^0))$ of $i_\bullet$ sends
$\Dmod(Z)^{\BG_m\on{-mon}}$ to $\Dmod(Z^0)$, and we have a canonical isomorphism
$$i^\bullet|_{\Dmod(Z)^{\BG_m\on{-mon}}}\simeq q_\bullet|_{\Dmod(Z)^{\BG_m\on{-mon}}} \; .$$
More precisely, for each $\CF\in \Dmod(Z)^{\BG_m\on{-mon}}$
the natural map
$$q_\bullet(\CF)\to q_\bullet\circ i_\bullet\circ i^\bullet(\CF)=(q\circ i)_\bullet\circ i^\bullet(\CF)=i^\bullet(\CF)$$
is an isomorphism.
\noindent{\em(b)} The left adjoint $q_!:\Dmod(Z)\to \on{Pro}(\Dmod(Z^0))$ of $q^!$ sends
$\Dmod(Z)^{\BG_m\on{-mon}}$ to $\Dmod(Z^0)$, and we have a canonical isomorphism
$$q_!|_{\Dmod(Z)^{\BG_m\on{-mon}}}\simeq i^!|_{\Dmod(Z)^{\BG_m\on{-mon}}} \; .$$
More precisely, for each $\CF\in \Dmod(Z)^{\BG_m\on{-mon}}$
the natural map
$$i^!(\CF)\to i^!\circ q^!\circ q_!(\CF) =(q\circ i)^!\circ q_!(\CF) =q_! (\CF)$$
is an isomorphism.
\end{prop}
For the proof see \cite[Theorem C.5.3]{DrGa2}.
\sssec{}
Note that we can reformulate point (a) of \propref{p:simple Braden} above as the statement that
the (iso)morphism
$$q_\bullet\circ i_\bullet\to \on{Id}_{\Dmod(Z^0)}$$
defines the co-unit of an adjunction between
$$q_\bullet:\Dmod(Z)^{\BG_m\on{-mon}}\rightleftarrows \Dmod(Z^0):i_\bullet.$$
Similarly, point (b) of \propref{p:simple Braden} can be reformulated as the statement that
the (iso)morphism
$$i^!\circ q^! \to \on{Id}_{\Dmod(Z^0)}$$
defines the co-unit of an adjunction between
$$i^!:\Dmod(Z)^{\BG_m\on{-mon}} \rightleftarrows \Dmod(Z^0):q^!.$$
\ssec{Reformulation of Braden's theorem} \label{ss:Reformulation of Braden}
\sssec{} \label{sss:contr}
We return to the set-up of \thmref{t:braden original}. By \propref{p:simple Braden}, we obtain
canonical isomorphisms
$$(i^+)^\bullet\simeq (q^+)_\bullet \text{ and } (i^-)^!\simeq (q^-)_!.$$
In particular, we obtain that the functor
$$(i^+)^\bullet\circ (p^+)^!\simeq (q^+)_\bullet \circ (p^+)^!$$
sends $\Dmod(Z)^{\BG_m\on{-mon}}$ to $\Dmod(Z^0)$.
In addition, we see that the functor
$$(i^-)^!\circ (p^-)^\bullet\simeq (q^-)_!\circ (p^-)^\bullet$$
is the left adjoint functor to $(p^-)_\bullet\circ (q^-)^!$.
\sssec{} \label{sss:defining co-unit}
Now define a natural transformation
\begin{equation} \label{e:Braden co-unit}
\left((q^+)_\bullet \circ (p^+)^!\right)\circ \left((p^-)_\bullet\circ (q^-)^!\right)\to \on{Id}_{\Dmod(Z^0)}
\end{equation}
to be the composition
\begin{multline*}
(q^+)_\bullet \circ (p^+)^! \circ (p^-)_\bullet\circ (q^-)^!\simeq
(q^+)_\bullet \circ ({}'p^-)_\bullet\circ ({}'p^+)^! \circ (q^-)^! \to \\
\to (q^+)_\bullet \circ ({}'p^-)_\bullet\circ j_\bullet\circ j^! \circ ({}'p^+)^! \circ (q^-)^!
\simeq (q^+)_\bullet \circ (i^+)_\bullet \circ (i^-)^!\circ (q^-)^! \simeq \\
\simeq (q^+\circ i^+)_\bullet\circ (q^-\circ i^-)^!=
\on{Id}_{\Dmod(Z^0)}.
\end{multline*}
The above natural transformation corresponds to the diagram
$$
\xy
(-20,0)*+{Z}="X";
(20,0)*+{Z^0.}="Y";
(0,20)*+{Z^+}="Z";
(-40,20)*+{Z^-}="W";
(-60,0)*+{Z^0}="U";
(-20,40)*+{Z^-\underset{Z}\times Z^+}="V";
(-20,85)*+{Z^0}="T";
{\ar@{->}^{p^+} "Z";"X"};
{\ar@{->}_{q^+} "Z";"Y"};
{\ar@{->}_{p^-} "W";"X"};
{\ar@{->}^{q^-} "W";"U"};
{\ar@{->}_{'p^-} "V";"Z"};
{\ar@{->}^{'p^+} "V";"W"};
{\ar@{->}_{j} "T";"V"};
{\ar@{->}^{i^-} "T";"W"};
{\ar@{->}_{i^+} "T";"Z"};
{\ar@{->}^{\on{id}} "T";"Y"};
{\ar@{->}_{\on{id}} "T";"U"};
\endxy
$$
\sssec{}
The natural transformation \eqref{e:Braden co-unit} gives rise to (and is determined by) a natural transformation
\begin{equation} \label{e:reform Braden trans}
(q^+)_\bullet \circ (p^+)^!\to \left((p^-)_\bullet\circ (q^-)^!\right)^L\simeq (q^-)_!\circ (p^-)^\bullet.
\end{equation}
Here $\left((p^-)_\bullet\circ (q^-)^!\right)^L$ denotes the left adjoint of
$(p^-)_\bullet\circ (q^-)^!$ in the sense of Sect. A.3.
It follows by a diagram chase that the following diagram of natural transformations commutes:
$$
\CD
(q^+)_\bullet \circ (p^+)^! @>{\text{\eqref{e:reform Braden trans}}}>> (q^-)_!\circ (p^-)^\bullet \\
@A{\sim}AA @A{\sim}AA \\
(i^+)^\bullet \circ (p^+)^! @>{\text{\eqref{e:Braden trans}}}>> (i^-)^!\circ (p^-)^\bullet
\endCD
$$
Hence, the assertion of \thmref{t:braden original} follows from the next one:
\begin{thm} \label{t:Braden adj}
The natural transformation \eqref{e:Braden co-unit} is the co-unit
of an adjunction for the functors
$$(q^+)_\bullet \circ (p^+)^!:\Dmod(Z)^{\BG_m\on{-mon}}\rightleftarrows \Dmod(Z^0):(p^-)_\bullet\circ (q^-)^!$$
\end{thm}
\ssec{The equivariant version} \label{ss:equivariant version}
\sssec{} \label{sss:Consider now}
Consider now the stacks
$$\CZ:=Z/\BG_m\, ,\,\, \CZ^0:=Z^0/\BG_m\, ,\,\, \CZ^{\pm}:=Z^{\pm}/\BG_m$$
and the morphisms
$$ \sfp^{\pm}:\CZ^{\pm}\to \CZ\, ,\,\,\sfq^{\pm}:\CZ^{\pm}\to\CZ^0$$
induced by the morphisms
$$p^{\pm}:Z^{\pm}\to Z ,\,\,q^{\pm}:Z^{\pm}\to Z^0$$
from Sects.~\ref{sss:structures} and \ref{ss:repeller}.
\sssec{}
The construction of the natural transformation \eqref{e:Braden co-unit} can be rendered verbatim to produce a natural transformation
\begin{equation} \label{e:Braden co-unit equiv}
\left((\sfq^+)_\bullet \circ (\sfp^+)^!\right)\circ \left((\sfp^-)_\bullet\circ (\sfq^-)^!\right)\to \on{Id}_{\Dmod(\CZ^0)}.
\end{equation}
We will prove the following version of \thmref{t:Braden adj}:
\begin{thm} \label{t:Braden adj equiv}
The natural transformation \eqref{e:Braden co-unit equiv} is the co-unit
of an adjunction for the functors
$$(\sfq^+)_\bullet \circ (\sfp^+)^!:\Dmod(\CZ)\rightleftarrows \Dmod(\CZ^0):(\sfp^-)_\bullet\circ (\sfq^-)^!$$
\end{thm}
Let us prove that \thmref{t:Braden adj equiv} implies \thmref{t:Braden adj}.
\begin{proof}
We need to show that for $\CM\in \Dmod(Z)^{\BG_m\on{-mon}}$ and $\CN\in \Dmod(Z^0)^{\BG_m\on{-mon}}$, the map
$${\mathcal H}om_{\Dmod(Z)^{\BG_m\on{-mon}}}\left(\CM,(p^-)_\bullet\circ (q^-)^!(\CN)\right)\to
{\mathcal H}om_{\Dmod(Z^0)^{\BG_m\on{-mon}}}\left((q^+)_\bullet \circ (p^+)^!(\CM),\CN\right),$$
induced by \eqref{e:Braden co-unit}, is an isomorphism.
By the definition of $\Dmod(Z)^{\BG_m\on{-mon}}$, we can assume that $\CM$ is the $\bullet$-pullback of some
$\CM'\in \Dmod(\CZ)$. Let $\CN'$ denote the $\bullet$-direct image of $\CN$ under the canonical map $Z^0\to \CZ^0$.
Since all the maps $Z\to \CZ$, $Z^0\to \CZ^0$ and $Z^{\pm}\to \CZ^{\pm}$
are smooth, we have the following commutative diagram (with the vertical arrows being isomorphisms by adjunction):
$$
\CD
{\mathcal H}om\left(\CM,(p^-)_\bullet\circ (q^-)^!(\CN)\right) @>>>
{\mathcal H}om\left((q^+)_\bullet \circ (p^+)^!(\CM),\CN\right) \\
@V{\sim}VV @VV{\sim}V \\
{\mathcal H}om\left(\CM',(\sfp^-)_\bullet\circ (\sfq^-)^!(\CN')\right) @>>>
{\mathcal H}om\left((\sfq^+)_\bullet \circ (\sfp^+)^!(\CM'),\CN'\right).
\endCD
$$
Hence, if the bottom horizontal arrow is an isomorphism, then so is the top one.
\end{proof}
\section{Construction of the unit} \label{s:unit}
In this section we will perform the main step in the proof of \thmref{t:Braden adj equiv}; namely, we will
construct the \emph{unit} for the adjunction between the functors
$(\sfq^+)_\bullet \circ (\sfp^+)^!$ and $(\sfp^-)_\bullet\circ (\sfq^-)^!$.
\ssec{The specialization map} \label{ss:specialization}
In this subsection we describe the general set-up for the specialization map.
The concrete situation in which this set-up will
be applied is described in Sects. \ref{sss:concrete situation}-\ref{sss:concrete} below.
\sssec{}
Let $\CY$ be an algebraic \emph{stack} \footnote{We use the conventions from \cite[Sect. 1.1]{DrGa1}
for algebraic stacks. We refer the reader to \cite[Sect. 6]{DrGa1} for a review of the DG category
of D-modules on algebraic stacks of finite type.} of finite type. Consider the stack $\BA^1\times \CY$, and let
$\iota_1$ and $\iota_0$ be the maps $\CY\to \BA^1 \times \CY$ corresponding to
the points $1$ and $0$ of $\BA^1$, respectively. Let $\pi$ denote the projection $\BA^1\times \CY\to \CY$.
Let $\CK$ be an object of $\Dmod(\BA^1\times \CY)^{\BG_m\on{-mon}}$, where
$$\Dmod(\BA^1\times \CY)^{\BG_m\on{-mon}}\subset \Dmod(\BA^1\times \CY)$$
is the full subcategory generated by the essential image of the pullback functor
$$\Dmod\left((\BA^1/\BG_m)\times \CY\right)\to \Dmod(\BA^1\times \CY).$$
Set
$$\CK_1:=\iota_1^!(\CK), \quad\quad \CK_0:=\iota_0^!(\CK).$$
We are going to construct a canonical map
\begin{equation} \label{e:specialization}
\on{Sp}_\CK:\CK_1\to \CK_0\, ,
\end{equation}
which will depend functorially on $\CK$. We will call it the \emph{specialization map}.
\begin{rem}
The map \eqref{e:specialization} is a simplified version of the specialization map
that goes from the nearby cycles functor to the !-fiber.
\end{rem}
\sssec{}
First, note that \propref{p:simple Braden}(b) and the definition of the category $\Dmod(-)$ for an algebraic stack
\footnote{According to \cite[Sect. 6.1.1]{DrGa1}, an object of $\Dmod(\CY )$ is a ``compatible collection" of objects of $\Dmod(S )$
for all schemes $S$ of finite type mapping to $\CY$.} imply that the functor $\pi_!$, left adjoint to
$\pi^!:\Dmod(\CY)\to \Dmod(\BA^1\times \CY)$, is defined on the subcategory
$\Dmod(\BA^1\times \CY)^{\BG_m\on{-mon}}$, and the natural transformation
$$\iota_0^!\to \iota_0^!\circ \pi^!\circ \pi_!\simeq \pi_!$$
is an isomorphism.
Now, we construct the natural transformation \eqref{e:specialization} as
$$\iota_1^!(\CK)\simeq \pi_!\circ (\iota_1)_!\circ \iota_1^!(\CK) \to \pi_!(\CK)\simeq \iota_0^!(\CK),$$
where the first isomorphism holds because $\pi\circ\iota_1=\on{id}_\CY$ (so that $\pi_!\circ (\iota_1)_!\simeq \on{Id}$), the last isomorphism is the one constructed above, and the morphism $\pi_!\circ (\iota_1)_!\circ \iota_1^!(\CK)\to \pi_!(\CK)$ comes from the
$((\iota_1)_!,\iota_1^!)$-adjunction. Note that the functor $(\iota_1)_!$ is well-defined
because $\iota_1$ is a closed embedding.
\sssec{} \label{sss:specialization for constant}
It is easy to see that if $\CK=\omega_{\BA^1}\times \CK_\CY$ for some $\CK_\CY\in \Dmod(\CY)$,
then the map \eqref{e:specialization} is the identity endomorphism of
$$\iota_1^!(\CK)\simeq \CK_\CY\simeq \iota_0^!(\CK).$$
\sssec{} \label{sss:functoriality of specialization}
It is also easy to see from the construction that the natural transformation \eqref{e:specialization}
is functorial with respect to maps between algebraic stacks in the following sense.
Let $f:\CY'\to \CY$ be a map. Then for $\CK':=(\id_{\BA^1}\times f)^!(\CK)$ the diagram
$$
\CD
\CK'_1 @>{\on{Sp}_{\CK'}}>> \CK'_0 \\
@A{\sim}AA @AA{\sim}A \\
f^!(\CK_1) @>{f^!(\on{Sp}_\CK)}>> f^!(\CK_0) \\
\endCD
$$
commutes.
Let now $f$ be representable and quasi-compact. Then for $\CK'\in \Dmod(\BA^1\times \CY')^{\BG_m\on{-mon}}$
and
$$\CK:=(\id_{\BA^1}\times f)_\bullet(\CK'),$$ the diagram
$$
\CD
\CK_1 @>{\on{Sp}_\CK}>> \CK_0 \\
@V{\sim}VV @VV{\sim}V \\
f_\bullet(\CK'_1) @>{f_\bullet(\on{Sp}_{\CK'})}>> f_\bullet(\CK'_0) \\
\endCD
$$
also commutes.
\ssec{Digression: functors given by kernels} \label{ss:kernels}
\sssec{}
According to \cite[Definition 1.1.8]{DrGa1}, an algebraic stack of finite type over $k$ is said to be QCA if the automorphism
groups of its geometric points are affine.
If $f:\CY\to \CY'$ is a morphism between QCA stacks then one has a canonically defined functor
$$f_\blacktriangle:\Dmod(\CY)\to \Dmod(\CY')$$
defined in \cite[Sect. 9.3]{DrGa1}.
\begin{rem}
The functor $f_\blacktriangle$ is a ``renormalized version" of the usual functor $f_\bullet$ of de Rham
direct image (see \cite[Sect. 7.4]{DrGa1}). The problem with the functor $f_\bullet$ is that it is very
poorly behaved unless the morphism $f$ is representable \footnote{Or, more generally, \emph{safe}
in the sense of \cite[Definition 10.2.2]{DrGa1}.}. For example, it fails to satisfy the projection
formula and \emph{is not compatible with compositions}, see \cite[Sect. 7.5]{DrGa1} for more
details. The functor $f_\blacktriangle$ cures all these drawbacks, and it equals the usual functor
$f_\bullet$ if $f$ is representable.
\end{rem}
\sssec{} \label{sss:functors and kernels}
Let $\CY_1$ and $\CY_2$ be QCA algebraic stacks. For an object $\CQ\in \Dmod(\CY_1\times \CY_2)$, consider the functor
$$\sF_\CQ:\Dmod(\CY_1)\to \Dmod(\CY_2),\quad \CM\mapsto (\on{pr}_2)_\blacktriangle(\on{pr}_1^!(\CM)\sotimes \CQ),$$
where $\on{pr}_i:\CY_1\times \CY_2\to \CY_i$ are the two projections, and $\sotimes$ is the usual tensor product
on the category of D-modules.
We will refer to $\CQ$ as the \emph{kernel} of the functor $\sF_\CQ$.
In fact, it follows from \cite[Corollary 8.3.4]{DrGa1}
that the assignment $\CQ\rightsquigarrow \sF_\CQ$ defines an equivalence between the category $\Dmod(\CY_1\times \CY_2)$
and the DG category of \emph{continuous} \footnote{Recall that a functor between cocomplete DG categories is said to be
continuous if it commutes with arbitrary direct sums.} functors $\Dmod(\CY_1)\to \Dmod(\CY_2)$.
For example, if $\CY_1=\CY_2=\CY$, then for
$$\CQ:=(\Delta_\CY)_\blacktriangle(\omega_\CY)\in \Dmod(\CY\times \CY)$$
the corresponding functor $\sF_\CQ$ is the identity functor on $\Dmod(\CY)$.
Here $\omega_{\CY}\in \Dmod(\CY)$ denotes the dualizing complex on a stack $\CY$.
\sssec{} \label{sss:corr}
More generally, let
\begin{equation} \label{e:corr}
\xy
(-15,0)*+{\CY_1}="X";
(15,0)*+{\CY_2}="Y";
(0,15)*+{\CY_0}="Z";
{\ar@{->}_{f_1} "Z";"X"};
{\ar@{->}^{f_2} "Z";"Y"};
\endxy
\end{equation}
be a diagram of QCA algebraic stacks. Set
$$\CQ:=(f_1\times f_2)_\blacktriangle(\omega_{\CY_0})\in \Dmod(\CY_1\times \CY_2).$$
Then, by the projection formula, the functor $\sF_\CQ$ identifies with $(f_2)_\blacktriangle\circ (f_1)^!$.
\sssec{} \label{sss:two routes}
The reader who is reluctant to use the (potentially unfamiliar) functor $f_\blacktriangle$ can proceed along either
of the following two routes:
\noindent(i) The usual functor of direct image $f_\bullet$ is well-behaved when restricted to the subcategory
$\Dmod(\CY)^+$ of bounded below (=eventually coconnective) objects. It is easy to see that working with
this subcategory would be sufficient for the proof of \thmref{t:Braden adj equiv}. \footnote{Note, however,
that if one redefines the assignment $\CQ\rightsquigarrow \sF_\CQ$ using $(\on{pr}_2)_\bullet$ instead of
$(\on{pr}_2)_\blacktriangle$ then one obtains a \emph{different} functor, even when evaluated on
$\Dmod(\CY_1)^+$.}
This strategy can be used in order to adapt the proof of \thmref{t:Braden adj equiv} to the context of $\ell$-adic
sheaves.
\noindent(ii) One can use the following assertion.
\begin{lem}
Suppose that the morphism $f_2:\CY_0\to\CY_2$ is representable. Then
\noindent{\em(i)}
The kernel $\CQ:=(f_1\times f_2)_\blacktriangle(\omega_{\CY_0})$ is canonically isomorphic to
$(f_1\times f_2)_\bullet(\omega_{\CY_0})$;
\noindent{\em(ii)} The functor
$$\sF_\CQ:\Dmod(\CY_1)\to \Dmod(\CY_2),\quad
\CM\mapsto (\on{pr}_2)_\blacktriangle(\on{pr}_1^!(\CM)\sotimes \CQ)\simeq (f_2)_\blacktriangle\circ (f_1)^! (\CM )$$
is canonically isomorphic to the functor
$$\CM\mapsto (\on{pr}_2)_\bullet(\on{pr}_1^!(\CM)\sotimes \CQ).$$
\end{lem}
\begin{proof}
Since $f_2:\CY_0\to\CY_2$ is representable, so is the morphism
$f_1\times f_2:\CY_0\to\CY_1\times\CY_2\,$. This implies (i).
We have canonical isomorphisms
$$\on{pr}_1^!(\CM)\sotimes \CQ\simeq (f_1\times f_2)_\blacktriangle (f_1^!(\CM ))\simeq (f_1\times f_2)_\bullet (f_1^!(\CM ))$$
(the first one holds by projection formula and the second because $f_1\times f_2$ is representable). So
$$(\on{pr}_2)_\bullet(\on{pr}_1^!(\CM)\sotimes \CQ)\simeq
((\on{pr}_2)_\bullet\circ (f_1\times f_2)_\bullet) (f_1^!(\CM )).$$
One also has
$$\sF_\CQ (\CM )\simeq (f_2)_\blacktriangle\circ (f_1)^! (\CM )\simeq (f_2)_\bullet (f_1^!(\CM ))=
((\on{pr}_2)\circ (f_1\times f_2))_\bullet (f_1^!(\CM )).$$
Finally, the fact that $f_1\times f_2$ is representable (see \cite[Proposition 7.5.7]{DrGa1}
\footnote{For any composable morphisms $g,g'$ between stacks one has a morphism $g_\bullet\circ g'_\bullet\to (g\circ g')_\bullet\,$,
which is \emph{not necessarily an isomorphism}. However, it is an isomorphism if $g'$ is representable. In
\cite[Proposition 7.5.7]{DrGa1} this is proved if $g'$ is schematic, but the same proof applies if $g'$ is only representable.}) implies that
$$((\on{pr}_2)\circ (f_1\times f_2))_\bullet\simeq (\on{pr}_2)_\bullet\circ (f_1\times f_2)_\bullet\,.$$
\end{proof}
\ssec{The unit of adjunction: plan of the construction}
\sssec{} \label{sss:reformulating}
In \secref{sss:Consider now} we introduced the stacks
$$\CZ:=Z/\BG_m\, ,\,\, \CZ^0:=Z^0/\BG_m\, ,\,\, \CZ^{\pm}:=Z^{\pm}/\BG_m$$
and the morphisms
$\sfp^{\pm}:\CZ^{\pm}\to \CZ\,$, $\sfq^{\pm}:\CZ^{\pm}\to\CZ^0.$
Now consider the diagram
\begin{equation} \label{e:unit diagram}
\xy
(-20,0)*+{\CZ^0}="X";
(20,0)*+{\CZ.}="Y";
(0,20)*+{\CZ^-}="Z";
(-40,20)*+{\CZ^+}="W";
(-60,0)*+{\CZ}="U";
(-20,40)*+{\CZ^+\underset{\CZ^0}\times \CZ^-}="V";
{\ar@{->}_{\sfq^-} "Z";"X"};
{\ar@{->}^{\sfp^-} "Z";"Y"};
{\ar@{->}^{\sfq^+} "W";"X"};
{\ar@{->}_{\sfp^+} "W";"U"};
{\ar@{->}^{'\sfq^+} "V";"Z"};
{\ar@{->}_{'\sfq^-} "V";"W"};
\endxy
\end{equation}
Our goal is to construct a canonical morphism from $\on{Id}_{\Dmod(\CZ)}$ to the composed functor
\begin{equation} \label{e:other comp}
\left((\sfp^-)_\bullet\circ (\sfq^-)^!\right)\circ \left((\sfq^+)_\bullet \circ (\sfp^+)^!\right):\Dmod(\CZ)\to \Dmod(\CZ).
\end{equation}
The good news is that all morphisms in diagram \eqref{e:unit diagram} are representable.
In particular, $\sfp^-$ and $\sfq^+$ are representable, so $(\sfp^-)_\bullet=(\sfp^-)_\blacktriangle$ and
$(\sfq^+)_\bullet =(\sfq^+)_\blacktriangle\,$.
Thus, the problem is to construct a canonical morphism from
$\on{Id}_{\Dmod(\CZ)}$ to the composed functor
\begin{equation} \label{e:non-dangerous}
\left((\sfp^-)_\blacktriangle\circ (\sfq^-)^!\right)\circ \left((\sfq^+)_\blacktriangle \circ (\sfp^+)^!\right):\Dmod(\CZ)\to \Dmod(\CZ).
\end{equation}
Using base change\footnote{Since we have switched to the renormalized direct images, we can apply base change and do other standard manipulations.}, we further identify the functor \eqref{e:non-dangerous} with
\begin{equation} \label{e:other comp triangle}
(\sfp^-\circ {}'\sfq^+)_\blacktriangle\circ (\sfp^+\circ {}'\sfq^-)^!,
\end{equation}
where $'\sfq^+$ and $'\sfq^-$ are as in diagram \eqref{e:unit diagram}.
\sssec{} \label{sss:q0&1}
Set $$\CQ_0:=(\sfp^+\times \sfp^-)_\blacktriangle(\omega_{\CZ^+\underset{\CZ^0}\times \CZ^-})\in \Dmod(\CZ\times \CZ).$$
Then the functor \eqref{e:other comp triangle} (and, hence, \eqref{e:other comp})
is canonically isomorphic to $\sF_{\CQ_0}\,$.
The identity functor $\Dmod(\CZ)\to \Dmod(\CZ)$ equals $\sF_{\CQ_1}$,
where
$$\CQ_1:=(\Delta_\CZ)_\blacktriangle(\omega_\CZ)\in \Dmod(\CZ\times \CZ).$$
\sssec{}
In \secref{ss:constructing map of kernels} we will construct a canonical map
\begin{equation} \label{e:map on kernels}
\CQ_1\to \CQ_0\, .
\end{equation}
By Sects. \ref{sss:reformulating}-\ref{sss:q0&1} and \ref{sss:functors and kernels}, the map of kernels
\eqref{e:map on kernels}
induces a natural transformation
\begin{equation} \label{e:Braden unit}
\on{Id}_{\Dmod(\CZ)}\to \left((\sfp^-)_\blacktriangle\circ (\sfq^-)^!\right)\circ \left((\sfq^+)_\blacktriangle \circ (\sfp^+)^!\right)
\end{equation}
between the corresponding functors.
In \secref{s:Verifying} we will prove that the natural transformations \eqref{e:Braden unit}
and \eqref{e:Braden co-unit equiv} satisfy the properties of unit and co-unit of an adjunction between the functors
$(\sfq^+)_\blacktriangle \circ (\sfp^+)^!$ and $(\sfp^-)_\blacktriangle\circ (\sfq^-)^!$.
\ssec{Constructing the morphism \eqref{e:map on kernels}} \label{ss:constructing map of kernels}
We will first define an object $$\CQ\in\Dmod(\BA^1\times \CZ\times \CZ)^{\BG_m\on{-mon}}\, ,$$ which
``interpolates" between $\CQ_1$ and $\CQ_0$. We will then define \eqref{e:map on kernels} to be the
specialization morphism $\on{Sp}_\CQ\,$.
\sssec{} \label{sss:concrete situation}
Recall the algebraic space $\wt{Z}$ from \secref{s:deg} and
set $$\wt\CZ:=\wt{Z}/\BG_m,\quad \wt\CZ_t:=\wt{Z}_t/\BG_m\simeq \wt\CZ\underset{\BA^1}\times \{t\}$$
(the action of $\BG_m$ on $\wt\CZ$ was defined in \secref{sss:anti-diagonal}).
Consider the morphisms
$$\wt\sfp :\wt\CZ\to \BA^1\times \CZ\times \CZ \,\,\,\text{ and } \,\,\,\wt\sfp_t :\wt\CZ_t\to \CZ\times \CZ$$
induced by the maps \eqref{e:tilde p} and \eqref{e:tilde p_t}, respectively.
Set
$$\CQ:=\wt\sfp_\blacktriangle(\omega_{\wt\CZ})\in \Dmod(\BA^1\times \CZ\times \CZ).$$
\sssec{} \label{sss:Q mon}
We claim that $\CQ$ belongs to the subcategory $\Dmod(\BA^1\times \CZ\times \CZ)^{\BG_m\on{-mon}}$.
In fact, we claim that $\CQ$ is the pullback of a canonically defined object of the category $\Dmod(\BA^1/\BG_m\times \CZ\times \CZ)$.
Indeed, this follows from the existence of the Cartesian diagram
$$
\CD
\wt\CZ @>{=}>> \wt{Z}/\BG_m @>>> \wt{Z}/\BG_m\times \BG_m \\
@V{\wt\sfp}VV @V{\wt{p}/\BG_m}VV @VVV \\
\BA^1\times \CZ \times \CZ @>{=}>> \BA^1\times Z/\BG_m\times Z/\BG_m @>>> \BA^1/\BG_m\times Z/\BG_m\times Z/\BG_m,
\endCD
$$
where $\BG_m\times \BG_m$ acts on $\wt{Z}$ as in \secref{sss:action of G_m^2}.
\sssec{} \label{sss:concrete}
Recall that the pair $(\wt{Z}_1,\wt{p}_1)$ identifies with $(Z,\Delta_Z)$, and the pair
$(\wt{Z}_0,\wt{p}_0)$ identifies with $(Z^+\underset{Z^0}\times Z^-,p^+\times p^-)$.
Therefore, the pair $(\wt\CZ_1,\wt\sfp_1)$ identifies with $(\CZ,\Delta_\CZ)$, and the pair
$(\wt\CZ_0,\wt\sfp_0)$ identifies with $(\CZ^+\underset{\CZ^0}\times \CZ^-,\sfp^+\times \sfp^-)$.
Hence, by base change, the objects $\CQ_1$ and $\CQ_0$ from \secref{sss:q0&1} identify with the !-restrictions of
$\CQ$ to
$$\{1\}\times \CZ\times \CZ\to \BA^1\times \CZ\times \CZ \text{ and }
\{0\}\times \CZ\times \CZ\to \BA^1\times \CZ\times \CZ,$$
respectively.
Now, the sought-for map \eqref{e:map on kernels} is given by the map $\on{Sp}_\CQ$
of \eqref{e:specialization}.
\section{Verifying the adjunction properties} \label{s:Verifying}
In \secref{sss:Consider now} we introduced the stacks
$$\CZ:=Z/\BG_m\, ,\,\, \CZ^0:=Z^0/\BG_m\, ,\,\, \CZ^{\pm}:=Z^{\pm}/\BG_m$$
and the morphisms
$\sfp^{\pm}:\CZ^{\pm}\to \CZ\,$, $\sfq^{\pm}:\CZ^{\pm}\to\CZ^0.$
In
Sects. \ref{ss:Reformulation of Braden}-\ref{ss:equivariant version}
we constructed a natural transformation
$$\left((\sfq^+)_\bullet \circ (\sfp^+)^!\right)\circ \left((\sfp^-)_\bullet\circ (\sfq^-)^!\right)\to \on{Id}_{\Dmod(\CZ^0)}
\, ,$$
see formula \eqref{e:Braden co-unit equiv}. Since the morphisms $\sfp^-$ and $\sfq^+$ are representable we have
$(\sfp^-)_\bullet=(\sfp^-)_\blacktriangle$ and
$(\sfq^+)_\bullet =(\sfq^+)_\blacktriangle\,$. So the above natural transformation \eqref{e:Braden co-unit equiv}
can be rewritten as a natural transformation
\begin{equation} \label{e:2Braden co-unit}
\left((\sfq^+)_\blacktriangle \circ (\sfp^+)^!\right)\circ \left((\sfp^-)_\blacktriangle\circ (\sfq^-)^!\right)\to \on{Id}_{\Dmod(\CZ^0)}\, .
\end{equation}
In \secref{s:unit} we constructed a natural transformation
\begin{equation} \label{e:2Braden unit}
\on{Id}_{\Dmod(\CZ)}\to \left((\sfp^-)_\blacktriangle\circ (\sfq^-)^!\right)\circ \left((\sfq^+)_\blacktriangle \circ (\sfp^+)^!\right),
\end{equation}
see formula \eqref{e:Braden unit}.
To prove Theorem~\ref{t:Braden adj equiv}, it suffices to
show that the compositions
\begin{equation} \label{e:first composition}
(\sfp^-)_\blacktriangle\circ (\sfq^-)^!\to \left((\sfp^-)_\blacktriangle\circ (\sfq^-)^!\right)\circ \left((\sfq^+)_\blacktriangle \circ
(\sfp^+)^!\right)\circ \left((\sfp^-)_\blacktriangle\circ (\sfq^-)^!\right)\to (\sfp^-)_\blacktriangle\circ (\sfq^-)^!
\end{equation}
and
\begin{equation} \label{e:second composition}
(\sfq^+)_\blacktriangle \circ (\sfp^+)^!\to \left((\sfq^+)_\blacktriangle \circ
(\sfp^+)^!\right)\circ \left((\sfp^-)_\blacktriangle\circ (\sfq^-)^!\right)\circ \left((\sfq^+)_\blacktriangle \circ (\sfp^+)^!\right)\to
(\sfq^+)_\blacktriangle \circ (\sfp^+)^!
\end{equation}
corresponding to \eqref{e:2Braden co-unit} and \eqref{e:2Braden unit} are isomorphic to \footnote{In the future we will
skip the words ``isomorphic to" in similar situations. (This is a slight abuse of language since we work with the DG categories of
D-modules rather than with their homotopy categories.)} the identity morphisms.
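In other words, writing $F:=(\sfq^+)_\blacktriangle \circ (\sfp^+)^!$, $G:=(\sfp^-)_\blacktriangle\circ (\sfq^-)^!$, $u$ for \eqref{e:2Braden unit} and $c$ for \eqref{e:2Braden co-unit}, the compositions \eqref{e:first composition} and \eqref{e:second composition} are
$$(G\, c)\circ (u\, G):G\to G\circ F\circ G\to G \quad\text{and}\quad (c\, F)\circ (F\, u):F\to F\circ G\circ F\to F,$$
so the assertion that they are the identity is just the pair of triangle identities for the would-be adjunction between $F$ and $G$ (modulo the usual caveats involved in checking this in the DG setting).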
We will do so for the composition \eqref{e:first composition}. The case of \eqref{e:second composition}
is similar and will be left to the reader.
The key point of the proof is \secref{sss:key}, which relies on the geometric \propref{p:2open embeddings}.
More precisely, we use the part of \propref{p:2open embeddings} about $Z^-$.
To treat the composition \eqref{e:second composition}, one has to use the part of \propref{p:2open embeddings} about $Z^+$.
\ssec{The diagram describing the composed functor}
\sssec{The big diagram}
We will use the notation
\begin{equation} \label{e:BIG}
\BIG:=\left((\sfp^-)_\blacktriangle\circ (\sfq^-)^!\right)\circ \left((\sfq^+)_\blacktriangle \circ (\sfp^+)^!\right)\circ
\left((\sfp^-)_\blacktriangle\circ (\sfq^-)^!\right).
\end{equation}
By base change, $\BIG$ is given by pull-push along the following diagram:
\begin{equation} \label{e:comp diag 1}
\xy
(-20,0)*+{\CZ^0}="X";
(20,0)*+{\CZ\, .}="Y";
(0,20)*+{\CZ^-}="Z";
(-40,20)*+{\CZ^+}="W";
(-60,0)*+{\CZ}="U";
(-20,40)*+{\CZ^+\underset{\CZ^0}\times \CZ^-}="V";
(-100,0)*+{\CZ^0}="T";
(-80,20)*+{\CZ^-}="S";
(-60,40)*+{\CZ^-\underset{\CZ}\times \CZ^+}="R";
(-40,60)*+{\CZ^-\underset{\CZ}\times \CZ^+\underset{\CZ^0}\times \CZ^-}="Q";
{\ar@{->}_{\sfq^-} "Z";"X"};
{\ar@{->}^{\sfp^-} "Z";"Y"};
{\ar@{->}^{\sfq^+} "W";"X"};
{\ar@{->}_{\sfp^+} "W";"U"};
{\ar@{->} "V";"Z"};
{\ar@{->} "V";"W"};
{\ar@{->}_{\sfq^-} "S";"T"};
{\ar@{->}^{\sfp^-} "S";"U"};
{\ar@{->} "R";"S"};
{\ar@{->} "R";"W"};
{\ar@{->} "Q";"R"};
{\ar@{->} "Q";"V"};
\endxy
\end{equation}
\sssec{Some notation}
Set
\begin{equation} \label{e:tilde Z-}
\wt{Z}^-:=Z^-\underset{Z}\times \wt{Z}\, ,
\end{equation}
where the fiber product is formed using the composition
\[
\wt{Z} \overset{\wt{p}}\longrightarrow \BA^1\times Z\times Z\to Z\times Z \overset{\on{pr}_1}\longrightarrow Z
\]
(i.e., the morphism $\pi_1:\wt{Z}\to Z$ from \secref{sss:tilde p}).
For $t\in \BA^1$ set $\wt{Z}^-_t:=Z^-\underset{Z}\times \wt{Z}_t\,$.
Let $\wt{p}^-:\wt{Z}^-\to \BA^1\times Z^-\times Z$ denote the map obtained by base change from
$$\wt{p}:\wt{Z} \to \BA^1\times Z\times Z\,.$$
Let
$$r: \wt{Z}^-\to \BA^1\times Z^0\times Z$$
denote the composition of $\wt{p}^-:\wt{Z}^-\to \BA^1\times Z^-\times Z$ with the morphism
$$\id_{\BA^1}\times q^-\times \id_Z:\BA^1\times Z^-\times Z\to\BA^1\times Z^0\times Z.$$
Let $r_t:\wt{Z}^-_t\to Z^0\times Z$ denote the morphism induced by $r: \wt{Z}^-\to \BA^1\times Z^0\times Z\,$.
\sssec{More notation} \label{sss:stacky notation}
Recall that
$$\wt\CZ:=\wt{Z}/\BG_m,\quad \wt\CZ_t:=\wt{Z}_t/\BG_m\simeq \wt\CZ\underset{\BA^1}\times \{t\}\, ,$$
where $\wt{Z}$ is the algebraic space from \secref{s:deg}
(the action of $\BG_m$ on $\wt\CZ$ was defined in \secref{sss:anti-diagonal}).
Set
$$\wt\CZ^-:=\CZ^-\underset{\CZ}\times \wt{\CZ}=\wt Z^-/\BG_m\, ,\quad\quad
\wt\CZ^-_t:=\CZ^-\underset{\CZ}\times \wt{\CZ}_t=\wt Z^-_t/\BG_m\,.$$
Let
$$\sfr:\wt\CZ^-\to \BA^1\times \CZ^0\times \CZ\, ,\quad \sfr_t:\wt\CZ^-_t\to \CZ^0\times \CZ$$
be the morphisms induced by $r: \wt{Z}^-\to \BA^1\times Z^0\times Z$ and $r_t:\wt{Z}^-_t\to Z^0\times Z$, respectively. In particular, we have the morphisms $\sfr_0$ and $\sfr_1$ corresponding to $t=0$ and $t=1$.
\sssec{A smaller diagram describing the functor $\BIG$} \label{sss:small diagram}
By \propref{p:tilde Z_0}, we have an isomorphism $\wt{Z}_0\buildrel{\sim}\over{\longrightarrow} Z^+\underset{Z^0}\times Z^-$.
The corresponding isomorphism
$$\wt{Z}^-_0:=Z^-\underset{Z}\times \wt{Z}_0\buildrel{\sim}\over{\longrightarrow} Z^-\underset{Z}\times Z^+\underset{Z^0}\times Z^-$$
induces
an isomorphism
$$\wt\CZ^-_0\simeq \CZ^-\underset{\CZ}\times \CZ^+\underset{\CZ^0}\times \CZ^-.$$
Thus the upper term of diagram \eqref{e:comp diag 1} is $\wt\CZ^-_0$. The compositions
\[
\CZ^-\underset{\CZ}\times \CZ^+\underset{\CZ^0}\times \CZ^-\to \CZ^-\underset{\CZ}\times \CZ^+\to
\CZ^-\overset{\, \sfq^-}\longrightarrow \CZ^0 \quad \mbox{and}\quad
\CZ^-\underset{\CZ}\times \CZ^+\underset{\CZ^0}\times \CZ^-\to \CZ^+\underset{\CZ^0}\times \CZ^-\to \CZ^-
\overset{\, \sfp^-}\longrightarrow \CZ
\]
from diagram \eqref{e:comp diag 1} are equal, respectively, to the compositions
\[
\wt{\CZ}^-_0\overset{\sfr_0}\longrightarrow \CZ^0\times \CZ \overset{\on{pr}_1}\longrightarrow \CZ^0
\quad \mbox{and}\quad
\wt{\CZ}^-_0\overset{\sfr_0}\longrightarrow \CZ^0\times \CZ \overset{\on{pr}_2}\longrightarrow \CZ
\]
(the morphism $\sfr_0$ was defined in \secref{sss:stacky notation}).
Hence, the functor $\BIG$ is given by pull-push along the diagram
\begin{equation} \label{e:comp diag B}
\xy
(-20,0)*+{\CZ^0}="X";
(20,0)*+{\CZ\, .}="Y";
(0,20)*+{\wt\CZ^-_0}="Z";
{\ar@{->}_{\on{pr}_1\circ \sfr_0} "Z";"X"};
{\ar@{->}^{\on{pr}_2\circ \sfr_0} "Z";"Y"};
\endxy
\end{equation}
\ssec{The natural transformations at the level of kernels} \label{ss:nat trans via kernels}
The goal of this subsection is to describe the natural transformations
$$\BIG\to (\sfp^-)_\blacktriangle\circ (\sfq^-)^! \text{ and } (\sfp^-)_\blacktriangle\circ (\sfq^-)^! \to \BIG$$
at the level of kernels.
\sssec{The kernel corresponding to $\BIG$}
Set
$$\CS:=\sfr_\blacktriangle (\omega_{\wt\CZ^-})\in \Dmod(\BA^1\times \CZ^0\times \CZ),$$
where $\sfr:\wt\CZ^-\to \BA^1\times \CZ^0\times \CZ$ was defined in \secref{sss:stacky notation}.
As in \secref{sss:Q mon}, one shows that
$$\CS\in \Dmod(\BA^1\times \CZ^0\times \CZ)^{\BG_m\on{-mon}}\, .$$
Set also
$$\CS_0:=(\sfr_0)_\blacktriangle (\omega_{\wt\CZ^-_0})\in \Dmod(\CZ^0\times \CZ),\quad \quad
\CS_1:=(\sfr_1)_\blacktriangle (\omega_{\wt\CZ^-_1})\in \Dmod(\CZ^0\times \CZ).$$
By \secref{sss:small diagram}, the functor $\BIG$ identifies with $\sF_{\CS_0}\,$.
\sssec{The kernel corresponding to $(\sfp^-)_\blacktriangle\circ (\sfq^-)^! $}
Now set
$$\CT:=(\sfq^-\times \sfp^-)_\blacktriangle(\omega_{\CZ^-}).$$
We have
$$(\sfp^-)_\blacktriangle\circ (\sfq^-)^! \simeq \sF_{\CT}\, .$$
\sssec{} \label{sss:j tilde}
Recall the open embedding
$$j:Z^0\hookrightarrow Z^-\underset{Z}\times Z^+,$$
see \propref{p:Cartesian}.
Let $j^-$ denote the corresponding open embedding
$$Z^-\hookrightarrow Z^-\underset{Z}\times Z^+\underset{Z^0}\times Z^-\simeq Z^-\underset{Z}\times \wt{Z}_0=: \wt{Z}^-_0,$$
obtained by base change.
Let $\sfj^-$ denote the corresponding open embedding
$$\CZ^-\hookrightarrow \wt\CZ^-_0.$$
Note that the composition
\[
\CZ^-\overset{\,\sfj^-}\hookrightarrow \wt\CZ^-_0\overset{\sfr_0}\longrightarrow \CZ^0\times \CZ
\]
equals $\sfq^-\times \sfp^-$.
\sssec{The morphism $\BIG\to(\sfp^-)_\blacktriangle\circ (\sfq^-)^!$ at the level of kernels}
\label{sss:1at the level of kernels}
Recall that the morphism $\BIG\to(\sfp^-)_\blacktriangle\circ (\sfq^-)^!$ comes from the morphism
$$\left((\sfq^+)_\blacktriangle \circ (\sfp^+)^!\right)\circ \left((\sfp^-)_\blacktriangle\circ (\sfq^-)^!\right)\to \on{Id}_{\Dmod(\CZ^0)}$$
constructed in Sects. \ref{ss:Reformulation of Braden}-\ref{ss:equivariant version}.
By construction, the natural transformation $$\BIG\to(\sfp^-)_\blacktriangle\circ (\sfq^-)^!$$
corresponds to the map of kernels
\begin{equation} \label{e:S_0 to T}
\CS_0\to \CT
\end{equation}
equal to the composition
$$\CS_0:=(\sfr_0)_\blacktriangle (\omega_{\wt\CZ^-_0})\to
(\sfr_0)_\blacktriangle\circ \sfj^-_\blacktriangle (\omega_{\CZ^-}) \buildrel{\sim}\over{\longrightarrow} (\sfq^-\times \sfp^-)_\blacktriangle(\omega_{\CZ^-})=:\CT,$$
where the first arrow comes from
$$\omega_{\wt\CZ^-_0}\to \sfj^-_\bullet\circ (\sfj^-)^\bullet(\omega_{\wt\CZ^-_0})\simeq \sfj^-_\bullet(\omega_{\CZ^-})\simeq
\sfj^-_\blacktriangle(\omega_{\CZ^-}).$$
\sssec{The isomorphism $\CT\simeq \CS_1\,$}
The (tautological) identification $\wt{Z}_1\simeq Z$ defines an identification
\begin{equation} \label{e:tautological identification}
\wt\CZ^-_1\simeq \CZ^-,
\end{equation}
so that
the morphism $\sfr_1:\wt\CZ^-_1\to \CZ^0\times \CZ$ identifies with $\sfq^-\times \sfp^-$.
Hence, we obtain a tautological identification
\begin{equation} \label{e:T to S_1}
\CT\simeq \CS_1\, .
\end{equation}
\sssec{The morphism $(\sfp^-)_\blacktriangle\circ (\sfq^-)^!\to\BIG$ at the level of kernels}
\label{sss:2at the level of kernels}
The map $\on{Sp}_\CS$ of \eqref{e:specialization} defines a canonical map
\begin{equation} \label{e:S_1 to S_0}
\CS_1\to \CS_0\, .
\end{equation}
By \secref{sss:functoriality of specialization}, the natural transformation
$(\sfp^-)_\blacktriangle\circ (\sfq^-)^!\to\BIG$
comes from the map
\begin{equation} \label{e:T to S_0}
\CT\to \CS_1\to \CS_0\, ,
\end{equation}
equal to the composition of \eqref{e:T to S_1} and \eqref{e:S_1 to S_0}.
\sssec{Conclusion}
Thus, in order to prove that the composition \eqref{e:first composition} is the identity map,
it suffices to show that the composed map
\begin{equation} \label{e:composed kernels}
\CT\to \CS_1\to \CS_0\to \CT
\end{equation}
is the identity map on $\CT$.
\ssec{Passing to an open substack}
\sssec{}
Recall the open embedding
$$j^-:Z^-\hookrightarrow \wt{Z}^-_0$$
introduced in \secref{sss:j tilde}.
Let $\overset{\circ}{\wt{Z}}{}^-$ denote the open subset of $\wt{Z}^-$ obtained by removing the closed
subset
$$\left(\wt{Z}^-_0-Z^-\right)\subset \wt{Z}^-_0\subset \wt{Z}^-.$$
Let $\overset{\circ}{\wt\CZ}{}^-$ denote the corresponding open substack of $\wt\CZ^-$.
Let $\overset{\circ}{\wt\CZ}{}^-_t$ denote the fiber of $\overset{\circ}{\wt\CZ}{}^-$ over $t\in \BA^1$.
By definition, the open embedding
$$\sfj^-:\CZ^-\hookrightarrow \wt\CZ^-_0$$
defines an \emph{isomorphism}
\begin{equation} \label{e:fiber at 0 open}
\CZ^-\buildrel{\sim}\over{\longrightarrow} \overset{\circ}{\wt\CZ}{}^-_0.
\end{equation}
Note that the isomorphism $\CZ^-\buildrel{\sim}\over{\longrightarrow} \wt\CZ{}^-_1$ of \eqref{e:tautological identification} still defines an isomorphism
\begin{equation} \label{e:fiber at 1 open}
\CZ^-\buildrel{\sim}\over{\longrightarrow} \overset{\circ}{\wt\CZ}{}^-_1.
\end{equation}
\sssec{}
Let
$$\osfr:\overset{\circ}{\wt\CZ}{}^-\to \BA^1\times \CZ^0\times \CZ\quad \text{ and } \quad
\osfr_t:\overset{\circ}{\wt\CZ}{}^-_t\to \CZ^0\times \CZ$$
denote the morphisms induced by the maps $\sfr$ and $\sfr_t$ from
\secref{sss:stacky notation}.
Set
$$\oCS:=\osfr_\blacktriangle(\omega_{\overset{\circ}{\wt\CZ}{}^-}),$$
and also
$$\oCS_0:=(\osfr_0)_\blacktriangle(\omega_{\overset{\circ}{\wt\CZ}{}^-_0})\quad \text{ and } \quad
\oCS_1:=(\osfr_1)_\blacktriangle(\omega_{\overset{\circ}{\wt\CZ}{}^-_1}).$$
The open embedding $\overset{\circ}{\wt\CZ}{}^-\hookrightarrow \wt\CZ^-$ gives rise to the maps
$$\CS\to \oCS,\quad \CS_0\to \oCS_0, \quad \CS_1\to \oCS_1\, .$$
As in
Sects. \ref{sss:1at the level of kernels}-\ref{sss:2at the level of kernels}, we have the natural transformations
\begin{equation} \label{e:composed kernels open}
\CT\to \oCS_1\to \oCS_0\to \CT.
\end{equation}
Moreover, the diagram
$$
\CD
\CT @>>> \CS_1 @>>> \CS_0 @>>> \CT \\
@V{\on{id}}VV @VVV @VVV @VV{\on{id}}V \\
\CT @>>> \oCS_1 @>>> \oCS_0 @>>> \CT
\endCD
$$
commutes.
Hence,
in order to show that the composed map \eqref{e:composed kernels} is the identity map,
\emph{it suffices to show that the composed map \eqref{e:composed kernels open} is the identity map. }
We will do this in the next subsection.
\ssec{The key argument}
\sssec{} \label{sss:key}
Recall now the open embedding
$$\BA^1\times Z^-\to Z^-\underset{Z}\times \wt{Z}=:\wt{Z}^-$$
of \eqref{e:embedding2}.
By definition, it induces an isomorphism
$$\BA^1\times Z^-\simeq \overset{\circ}{\wt{Z}}{}^-.$$
Dividing by the action of $\BG_m$, we obtain an isomorphism
\begin{equation} \label{e:key}
\BA^1\times \CZ^-\simeq \overset{\circ}{\wt\CZ}{}^-.
\end{equation}
Under this identification, we have:
\begin{itemize}
\item
The map $\osfr:\overset{\circ}{\wt\CZ}{}^-\to \BA^1\times \CZ^0\times \CZ$
identifies with the map $\BA^1\times \CZ^-\to\BA^1\times \CZ^0\times \CZ$ induced by
$\on{id}_{\BA^1}:\BA^1\to \BA^1$ and $(\sfq^-\times \sfp^-): \CZ^-\to \CZ^0\times \CZ\,$.
\item
The isomorphism $\CZ^-\buildrel{\sim}\over{\longrightarrow} \overset{\circ}{\wt\CZ}{}^-_1$ of \eqref{e:fiber at 1 open}
corresponds to the identity map
$$\CZ^-\to (\BA^1\times \CZ^-)\underset{\BA^1}\times \{1\}\simeq \CZ^-.$$
\item
The isomorphism $\CZ^-\buildrel{\sim}\over{\longrightarrow} \overset{\circ}{\wt\CZ}{}^-_0$ of \eqref{e:fiber at 0 open}
corresponds to the identity map
$$\CZ^-\to (\BA^1\times \CZ^-)\underset{\BA^1}\times \{0\}\simeq \CZ^-.$$
\end{itemize}
\sssec{}
Hence, we obtain that the composition \eqref{e:composed kernels open} identifies with
$$\CT\simeq \iota_1^!(\omega_{\BA^1}\boxtimes \CT) \overset{\on{Sp}}\longrightarrow \iota_0^!(\omega_{\BA^1}\boxtimes \CT) \simeq \CT,$$
where $\on{Sp}:=\on{Sp}_{\omega_{\BA^1}\boxtimes \CT}$
is the specialization map \eqref{e:specialization} for the object
$$\omega_{\BA^1}\boxtimes \CT \in \Dmod(\BA^1\times \CZ^0\times \CZ).$$
The fact that the above map is the identity map on $\CT$ follows from \secref{sss:specialization for constant}.
\appendix
\section{Pro-categories} \label{s:pro}
\noindent{\bf A.1.} For a DG category $\bC$ let $\on{Pro}(\bC)$ denote its pro-completion, thought of as the DG category opposite to
that of covariant exact functors $\bC\to \Vect$, where $\Vect$ denotes the DG category of complexes of $k$-vector spaces.
\footnote{A way to deal with set-theoretical difficulties is to require
that our functors commute with $\kappa$-filtered colimits for some cardinal $\kappa$, see
\cite[Def.~5.3.1.7]{Lur}.}
The Yoneda embedding defines
a fully faithful functor $\bC\to \on{Pro}(\bC)$. Any object in $\on{Pro}(\bC)$ can be written as a filtered limit
(taken in $\on{Pro}(\bC)$) of co-representable functors.
\noindent{\bf A.2.}
A functor $\sF:\bC'\to \bC''$
between DG categories induces a functor denoted also by $\sF$
$$\on{Pro}(\bC')\to \on{Pro}(\bC'')$$ by applying the
\emph{right Kan extension} of the functor
$$\bC'\overset{\sF}\longrightarrow \bC''\hookrightarrow \on{Pro}(\bC'')$$
along the embedding $\bC'\to \on{Pro}(\bC')$.
The same construction can be phrased as follows: for $\wt\bc'\in \on{Pro}(\bC')$, thought
of as a functor $\bC'\to \Vect$, the object $\sF(\wt\bc')$, thought of as a functor $\bC''\to \Vect$, is the
\emph{left Kan extension} of $\wt\bc'$ along the functor $\sF:\bC'\to \bC''$.
Explicitly, if $\wt\bc\in \on{Pro}(\bC')$ is written as $\underset{i\in I}{\underset{\longleftarrow}{\lim}}\, \bc_i$
with $\bc_i\in \bC'$, then
$$\sF(\wt\bc)\simeq \underset{i\in I}{\underset{\longleftarrow}{\lim}}\, \sF(\bc_i),$$
as objects of $\on{Pro}(\bC'')$ and
$$\sF(\wt\bc)\simeq \underset{i\in I}{\underset{\longrightarrow}{\lim}}\, \CMaps_{\bC''}(\sF(\bc_i),-),$$
as functors $\bC''\to \Vect$.
\noindent{\bf A.3.}
Let $\sG:\bC'\to \bC''$ be a functor between DG categories. We can speak of its left adjoint $\sG^L$
as a functor $\bC''\to \on{Pro}(\bC')$. Namely, for $\bc''\in \bC''$ the object $\sG^L(\bc'')\in \on{Pro}(\bC')$,
thought of as a functor $\bC'\to \Vect$ is given by
$$(\sG^L(\bc''))(\bc')=\CMaps_{\bC''}(\bc'',\sG(\bc')).$$
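For example, if $\sG$ admits a left adjoint in the usual sense, i.e., a functor $\sG':\bC''\to \bC'$ equipped with functorial
isomorphisms $\CMaps_{\bC''}(\bc'',\sG(\bc'))\simeq \CMaps_{\bC'}(\sG'(\bc''),\bc')$, then $\sG^L(\bc'')$ is the image of
$\sG'(\bc'')$ under the Yoneda embedding $\bC'\to \on{Pro}(\bC')$; in general, $\sG^L(\bc'')$ need not lie in the essential image
of this embedding.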
\noindent{\bf A.4.} We let the same symbol $\sG^L$ also denote the functor $\on{Pro}(\bC'')\to \on{Pro}(\bC')$ obtained as the right Kan extension of
$\sG^L:\bC''\to \on{Pro}(\bC')$ along
$\bC''\hookrightarrow \on{Pro}(\bC'')$.
The functor $\sG^L$ is the left adjoint of the functor
$\sG:\on{Pro}(\bC')\to \on{Pro}(\bC'')$.
We can also think of $\sG^L$ as follows: for
$\wt\bc''\in \on{Pro}(\bC'')$, thought of as a functor $\bC''\to \Vect$, the object $\sG^L(\wt\bc'')$, thought of as a functor
$\bC'\to \Vect$ is given by
$$(\sG^L(\wt\bc''))(\bc')=\wt\bc''(\sG(\bc')).$$
\end{document}
\begin{document}
\author{Ian Whitehead}
\title{Affine Weyl Group Multiple Dirichlet Series: Type $\widetilde{A}$}
\maketitle
\begin{abstract}
We define a multiple Dirichlet series whose group of functional equations is the Weyl group of the affine Kac-Moody root system $\widetilde{A}_n$, generalizing the theory of multiple Dirichlet series for finite Weyl groups. The construction is over the rational function field $\mathbb{F}_q(t)$, and is based upon four natural axioms from algebraic geometry. We prove that the four axioms yield a unique series with meromorphic continuation to the largest possible domain and the desired infinite group of symmetries.
\end{abstract}
\section{Introduction}
We will construct a multiple Dirichlet series of the form
\begin{align} \label{series}
&Z(x_0, x_1, \ldots x_n) \nonumber \\
&=\sum_{f_0, f_1, \ldots f_n \in \mathbb{F}_q[t] \text{ monic}} \res{f_0}{f_1}\res{f_1}{f_2}\cdots \res{f_{n-1}}{f_n}\res{f_n}{f_0} x_0^{\deg f_0}x_1^{\deg f_1}\cdots x_n^{\deg f_n}
\end{align}
where $n\geq 2$, $q$ is a prime power, and $\res{\,}{\,}$ denotes the quadratic residue character in $\mathbb{F}_q[t]$. This is a new generalization of the Weyl group multiple Dirichlet series developed in papers of Brubaker-Bump-Friedberg, Chinta-Gunnells, and others. The difference is that the product of characters here is based on the following Dynkin diagram:
\includegraphics[scale=.5]{A_n}
\noindent which corresponds to the affine Kac-Moody root system $\widetilde{A}_n$ rather than a finite root system. This gives the series a higher level of complexity: it will extend to a meromorphic function of $n+1$ variables with an infinite group of symmetries, the Weyl group of $\widetilde{A}_n$. And it will shed light on the behavior of still-conjectural automorphic forms on the Kac-Moody Lie group.
Multiple Dirichlet series originated as a tool to compute moments in families of L-functions. Goldfeld and Hoffstein computed the first moment of quadratic Dirichlet L-functions at the central point using a double Dirichlet series \cite{GH}; the second \cite{BFH3} and third \cite{DGH} moments have been computed using similar methods. An essential step is to replace the sums of characters like those appearing in \ref{series} with weighted sums of characters. This guarantees that the series will have a group of functional equations. The choice of weights, or correction polynomials, leads to difficult combinatorial questions, which inspired the development of Weyl group multiple Dirichlet series \cite{BBF1, BBF2, CG1, CG2}. This beautiful theory constructs, for any integer $N$ and (finite) root system $\Phi$, a multiple Dirichlet series built from $N$th power Gauss sums, whose group of functional equations is the Weyl group of $\Phi$. The series can be understood in terms of a metaplectic Casselman-Shalika formula, or as a generating function of combinatorial data on crystals associated with $\Phi$. One of the crowning achievements of this field is the Eisenstein conjecture, which connects multiple Dirichlet series back to automorphic forms; it states that each Weyl group multiple Dirichlet series appears as the constant coefficient in the Whittaker expansion of an Eisenstein series on the $N$-fold metaplectic cover of the algebraic group associated to $\Phi$. It is proven in types $A$ \cite{BBF1} and $B$ \cite{FZ}.
To go beyond the first three moments of quadratic L-functions requires multiple Dirichlet series with infinite Kac-Moody Weyl groups of functional equations. This immediately makes the series much more intricate--infinitely many symmetries imply infinitely many poles, which will accumulate at a natural boundary of meromorphic continuation. Furthermore, unlike in the finite case, the symmetries do not give sufficient information to completely pin down the series; we will show that there are infinitely many series satisfying the desired functional equations. Lee and Zhang \cite{LZ} generalize the averaging construction of Chinta and Gunnells to construct power series with meromorphic continuation to the boundary, satisfying the functional equations, for all symmetrizable Kac-Moody groups. However, their construction does not naturally contain the character sums and L-functions we would like our multiple Dirichlet series to count. Bucur and Diaconu \cite{BD}, in the special case of $\widetilde{D}_4$, define a multiple Dirichlet series satisfying the functional equations by making an assumption about one of its residues. They use this series to compute the fourth moment of quadratic L-functions over rational function fields. Their construction is close in spirit to ours, but is slightly different and likely will not satisfy the Eisenstein conjecture.
The new approach developed in forthcoming work of Diaconu and Pasol \cite{DP}, and explored in the author's thesis \cite{W}, is to construct multiple Dirichlet series axiomatically. There are four axioms \ref{twistedmult}-\ref{initialconditions}, which arise from the geometry of parameter spaces of hyperelliptic curves. Sums of characters, in particular of $\res{f_0}{f_1}\res{f_1}{f_2}\cdots \res{f_{n-1}}{f_n}\res{f_n}{f_0}$ as some of the $f_i$ range over fixed degree $a_i$ and others are held constant, can be interpreted as point counts on these parameter spaces. The weighted sums of characters introduced below are point counts after the spaces are desingularized and compactified. Axiom \ref{localglobal} is a duality statement, and Axiom \ref{dominance} is a cohomological purity statement. The axiomatic construction matches previous constructions in the case of finite root systems \cite{W}. The main theorem of this paper is that, in the case of the affine root system $\widetilde{A}_n$, the four axioms produce a unique series, with meromorphic continuation to the boundary and functional equations. We expect the same theorem to hold for all affine root systems, and, if the axioms are modified as in \cite{DP}, for all Kac-Moody root systems.
Because this is a first foray into Kac-Moody multiple Dirichlet series, we have restricted our attention to type $\widetilde{A}$ and to quadratic characters. It would certainly be feasible to replace these characters with $N$th power residue symbols or Gauss sums. Another restriction: our construction is over the rational function field $\mathbb{F}_q(t)$ only. Over number fields, proving meromorphic continuation of the analogous series is extremely difficult--in the case of $\widetilde{D}_4$ over $\mathbb{Q}$, it is equivalent to computing the fourth moment of quadratic $L$-functions over $\mathbb{Q}$. However, the axioms do resolve all combinatorial problems in the construction over arbitrary global fields. The $p$-parts, or local weights, constructed in this paper will still be correct. They only need to be glued together using a different twisted multiplicativity relation for primes in the field. This means that $\mathbb{F}_q(t)$ plays a privileged role in the theory: it is the only field where Axioms \ref{localglobal} and \ref{dominance} hold, where the $p$-part of the series is reflected in the full series $Z$.
The proof has three sections. First we show directly that the four axioms imply the desired functional equations of the series, and that any multivariable power series satisfying these functional equations is completely determined by its diagonal coefficients. These diagonal coefficients relate to imaginary roots in the $\widetilde{A}_n$ root system, which play a subtle but critical role in the combinatorics. Next, we take a residue of the series, setting the odd-numbered $x_i$ to $q^{-1}$. We give a formula relating the diagonal residue coefficients to the original diagonal series coefficients; hence, the residue determines the full series, but unlike the series, the residue admits an Euler product formula. Balancing the effect of Axiom \ref{localglobal} on the residue coefficients against the effect of Axiom \ref{dominance} on the series coefficients, we prove the existence and uniqueness of the series. Finally, we combine this result with a close examination of the functional equations to prove an explicit formula for the residue, as a product of function field zeta functions. The meromorphic continuation of this product implies the meromorphic continuation of the full series $Z$. Only this last section does not generalize easily to other affine types.
We end this introduction with the example of the $\widetilde{A}_3$ residue formula, and its relevance to both analytic moment conjectures and the Eisenstein conjecture. We prove
\begin{align} \label{A3residue}
&q^2 \text{Res}_{x_1=x_3=q^{-1}} Z(x_0, x_1, x_2, x_3) \nonumber \\
&=\prod \limits_{m=0}^{\infty} (1-x_0^{2m+2}x_2^{2m})^{-1}(1-qx_0^{2m+2}x_2^{2m})^{-1} \nonumber \\
& \qquad \quad(1-x_0^{2m}x_2^{2m+2})^{-1}(1-qx_0^{2m}x_2^{2m+2})^{-1} \nonumber \\
& \qquad \quad (1-x_0^{2m+2}x_2^{2m+2})^{-2}(1-qx_0^{2m+2}x_2^{2m+2})^{-2} \nonumber \\
& \qquad \quad (1-x_0^{2m+1}x_2^{2m+1})^{-1}(1-qx_0^{2m+1}x_2^{2m+1})^{-1}.
\end{align}
We may evaluate the $\widetilde{A}_3$ multiple Dirichlet series as
\begin{align}
Z(q^{-s_0}, x, q^{-s_2}, x)&=\sum_{f_0, f_1, f_2, f_3} \res{f_1 f_3}{f_0 f_2} q^{-s_0 \deg f_0} q^{-s_2 \deg f_2} x^{\deg f_1 f_3} \nonumber \\
&=\sum_f L(s_0, \chi_f)L(s_2, \chi_f) \sigma_0(f) x^{\deg f}
\end{align}
where $f=f_1f_3$ and $\sigma_0(f)$ denotes the number of divisors of $f$. Then this residue, evaluated at $x_0=x_2=q^{-1/2}$, gives an asymptotic for the second moment of $L(1/2, \chi_f)$, weighted by $\sigma_0(f)$, over the function field $\mathbb{F}_q(t)$. It is possible to sieve for squarefree $f$ as well. Along with the fourth moment of $L(1/2, \chi_f)$, this is a first application of Kac-Moody root systems to number theory.
It is perhaps too early to precisely formulate an Eisenstein conjecture in the Kac-Moody setting; metaplectic Kac-Moody Eisenstein series are still conjectural, although recent work of Braverman, Garland, Kazhdan, Miller, and Patnaik makes progress constructing non-metaplectic Eisenstein series on affine Kac-Moody groups over function fields \cite{BK, BGKP, G, GMP}. In particular, a forthcoming paper of Patnaik \cite{P} gives the affine analogue of the Casselman-Shalika formula for Whittaker coefficients of these series. Crucially, Patnaik's formula contains poles corresponding to imaginary roots in the affine root system. We expect such poles to play an important role in Eisenstein series and their Whittaker coefficients for all Kac-Moody groups, including metaplectic. However, their presence cannot be detected from functional equations; different techniques are required. The series in this paper does have poles corresponding to imaginary roots. Of the factors in formula \ref{A3residue}, the first five correspond to real roots in the $\widetilde{A}_3$ root system; the remaining three, however, correspond to imaginary roots. They can be compared to the factor $m$ and the factors indexed by imaginary roots $\alpha^{\vee}$ in Patnaik's formula $1.1$. Such factors do not appear in residues of the multiple Dirichlet series constructed by Lee and Zhang or Bucur and Diaconu. They provide some initial evidence that our construction is the correct one for the Eisenstein conjecture.
\textbf{Acknowledgements}:
Great thanks to Adrian Diaconu and Dorian Goldfeld, who advised the dissertation project that led to this paper. Their wise advice and consistent support was indispensable at every stage of the process. Anna Pusk\'{a}s contributed a key idea that appears in section 5: the translation from $a$'s to $\delta$'s. I have had invaluable conversations about this project with Jeff Hoffstein, Ben Brubaker, Gautam Chinta, Paul Gunnells, Nikos Diamantis, Sol Friedberg, Kyu-Hwan Lee, Holley Friedlander, Manish Patnaik, and Jordan Ellenberg.
\section{Notation and Preliminaries}
Let $q$ be a prime power, with $q\equiv 1 \mod 4$, and let $\mathbb{F}_q$ be the field with $q$ elements. A polynomial in $\mathbb{F}_q[t]$ is called prime if it is monic and irreducible. For $f, g \in \mathbb{F}_q[t]$, let $\res{f}{g}$ denote the quadratic residue symbol, which is multiplicative in both $f$ and $g$. In this context, we have an extremely simple quadratic reciprocity law: for $f, g$ monic, $\res{f}{g}=\res{g}{f}$.
We define the function field zeta function,
\begin{equation}
\zeta(x)=\sum_{f \in \mathbb{F}_q[t] \text{ monic}} x^{\deg f}=\prod_{p \in \mathbb{F}_q[t] \text{ prime}}(1-x^{\deg p})^{-1}=(1-qx)^{-1}
\end{equation}
and the quadratic Dirichlet L-function: for $g\in\mathbb{F}_q[t]$ monic and squarefree,
\begin{equation}
L(x, \chi_g)=\sum_{f \in \mathbb{F}_q[t] \text{ monic}} \res{f}{g} x^{\deg f}=\prod_{p \in \mathbb{F}_q[t] \text{ prime}}(1-\res{p}{g}x^{\deg p})^{-1}.
\end{equation}
Usually, these are written as series in the variable $q^{-s}$; we have substituted the variable $x$ to emphasize the fact that these series are polynomials (or, in the case of $\zeta$, a rational function) in $x$.
The quadratic L-functions satisfy the following functional equations: if $\deg g$ is odd, then
\begin{equation}
L(x, \chi_g) = (q^{1/2}x)^{\deg g -1} L(q^{-1}x^{-1}, \chi_g),
\end{equation}
and if $\deg g$ is even, then
\begin{equation}
(1-x)^{-1}L(x, \chi_g)= (q^{1/2}x)^{\deg g -2} (1-q^{-1}x^{-1})^{-1}L(q^{-1}x^{-1}, \chi_g).
\end{equation}
They also satisfy the Riemann hypothesis: all roots have $|x|=q^{-1/2}$.
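To make the notation concrete, here is a minimal computational sketch (our own illustration over $\mathbb{F}_5[t]$; the helper functions are ad hoc and not part of the construction). It computes the residue symbol by the Euler criterion, expands $L(x,\chi_g)$ for the squarefree cubic $g=t^3+t+1$ directly from the definition, and prints the coefficients, which can be checked against the odd-degree functional equation above (for this $g$ one expects $a_2=q\,a_0$ and $a_3=0$).
\begin{verbatim}
# Minimal illustrative sketch over F_5[t] (q = 5, a prime = 1 mod 4).
# Polynomials are tuples of coefficients, constant term first.
# Helper names are ad hoc; only irreducibles of degree <= 3 are needed here.
from itertools import product

q = 5

def trim(f):
    f = list(f)
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    return tuple(f)

def pol_mul(f, g):
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % q
    return trim(h)

def pol_rem(f, g):
    # remainder of f modulo a monic polynomial g
    f, d = list(trim(f)), len(g) - 1
    while len(f) - 1 >= d and f != [0]:
        c, s = f[-1], len(f) - 1 - d
        for i, gi in enumerate(g):
            f[s + i] = (f[s + i] - c * gi) % q
        f = list(trim(f))
    return tuple(f)

def monics(d):
    # all monic polynomials of degree exactly d
    return [trim(t + (1,)) for t in product(range(q), repeat=d)]

# monic irreducibles of degree <= 3 (a reducible quadratic or cubic has a root)
IRR = monics(1)[:]
for d in (2, 3):
    IRR += [f for f in monics(d)
            if all(pol_rem(f, p) != (0,) for p in monics(1))]

def residue_symbol(f, g):
    # quadratic residue symbol (f/g) for monic g of degree <= 3
    sym = 1
    for p in IRR:
        k, pk = 0, p
        while len(pk) <= len(g) and pol_rem(g, pk) == (0,):
            k, pk = k + 1, pol_mul(pk, p)
        if k == 0:
            continue
        r = pol_rem(f, p)
        if r == (0,):
            return 0
        # Euler criterion in F_q[t]/(p): (f/p) = 1 iff f^((|p|-1)/2) = 1 mod p
        e, acc, base = (q ** (len(p) - 1) - 1) // 2, (1,), r
        while e:
            if e & 1:
                acc = pol_rem(pol_mul(acc, base), p)
            base = pol_rem(pol_mul(base, base), p)
            e >>= 1
        sym *= (1 if acc == (1,) else -1) ** k
    return sym

g = (1, 1, 0, 1)   # g = t^3 + t + 1, squarefree over F_5
coeffs = [sum(residue_symbol(f, g) for f in monics(m)) for m in range(4)]
print("coefficients of L(x, chi_g):", coeffs)
# the odd-degree functional equation predicts a_2 = q*a_0 and a_3 = 0
print("a_2 == q*a_0 ?", coeffs[2] == q * coeffs[0], "  a_3 == 0 ?", coeffs[3] == 0)
\end{verbatim}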
The combinatorics of affine root systems play a critical behind-the-scenes role in the proofs below. However, for the sake of readability, we have suppressed this theory almost entirely in the exposition. Many of the type $\widetilde{A}$ results here generalize to all affine types; see \cite{W}.
\section{Axioms and Functional Equations}
Let $q$ be a prime power, congruent to $1$ modulo $4$, and let $n\geq 2$ be an integer. For $f_0, f_1, \ldots f_n\in \mathbb{F}_q[t]$ monic, we wish to define local weights $H(f_0, f_1, \ldots f_n)\in \mathbb{C}$. Informally, $H(f_0, f_1, \ldots f_n)$ is a weighted version of
\begin{equation}
\res{f_0}{f_1}\res{f_1}{f_2}\cdots\res{f_{n-1}}{f_n}\res{f_n}{f_0}
\end{equation}
where $\res{\,}{\,}$ denotes the quadratic residue symbol. The local weights determine global coefficients: for nonnegative integers $a_0, a_1, \ldots a_n$,
\begin{equation}
c_{a_0, a_1, \ldots a_n}(q)=\sum_{\substack{f_0, f_1, \ldots f_n\in \mathbb{F}_q[t] \text{ monic} \\ \deg f_i=a_i}} H(f_0, f_1, \ldots f_n).
\end{equation}
We will give four axioms, originally due to Diaconu and Pasol \cite{DP}, which uniquely determine the local weights and global coefficients. These axioms describe the behavior of $c$ and $H$ as $q$ varies.
\begin{axiom}[Twisted Multiplicativity]\label{twistedmult}
Suppose that $\gcd(f_0 f_1 \cdots f_n, g_0 g_1 \cdots g_n)=1$. Then
\begin{equation}
H(f_0g_0, f_1g_1, \ldots f_ng_n)=H(f_0, f_1, \ldots f_n)H(g_0, g_1, \ldots g_n)\prod_{i \mod n+1} \res{f_i}{g_{i+1}}\res{g_i}{f_{i+1}}.
\end{equation}
\end{axiom}
With this axiom in hand, it suffices to describe $H(p^{a_0}, p^{a_1}, \ldots p^{a_n})$ for $p\in\mathbb{F}_q[t]$ prime.
\begin{axiom}[Local-to-Global Principle]\label{localglobal}
The global coefficients $c_{a_0, a_1, \ldots a_n}(q)$ and local weights $H(p^{a_0}, p^{a_1}, \ldots p^{a_n})$ are polynomials in $q$ and $|p|:=q^{\deg p}$ respectively, of degrees at most $a_0+a_1+\cdots +a_n$, and
\begin{equation}
H(p^{a_0}, p^{a_1}, \ldots p^{a_n})=|p|^{a_0+a_1+\cdots +a_n}c_{a_0, a_1, \ldots a_n}(|p|^{-1}).
\end{equation}
\end{axiom}
\begin{axiom}[Dominance Principle]\label{dominance}
The polynomial $H(p^{a_0}, p^{a_1}, \ldots p^{a_n})$ has degree less than $\frac{1}{2}(a_0+a_1+\cdots +a_n-1)$ in $|p|$; equivalently, $c_{a_0, a_1, \ldots a_n}(q)$ has nonzero terms only in degrees greater than $\frac{1}{2}(a_0+a_1+\cdots +a_n+1)$. The only exceptions are $H(1,\ldots 1)$, $H(1, \ldots 1, p, 1, \ldots 1)$, $c_{0,\ldots 0}(q)$, and $c_{0, \ldots 0, 1, 0, \ldots 0}(q)$.
\end{axiom}
The dominance principle will mean that the local weights are as small as possible under Axiom \ref{localglobal}. The final axiom is just a normalization condition:
\begin{axiom}[Initial Conditions]\label{initialconditions}
We have $H(1, \ldots 1, f_i, 1,\ldots 1)=1$ for all $f_i$, and $c_{0, \ldots 0, a_i, 0, \ldots 0}(q)=q^{a_i}$ for all $a_i$.
\end{axiom}
We define the quadratic $\widetilde{A}_n$ multiple Dirichlet series over the rational function field $\mathbb{F}_q(t)$ as
\begin{align}
Z(x_0,x_1,\ldots x_n):=&\sum_{f_0, f_1, \ldots f_n \in \mathbb{F}_q[t] \text{ monic}} H(f_0, f_1, \ldots f_n)x_0^{\deg f_0}x_1^{\deg f_1}\cdots x_n^{\deg f_n} \nonumber \\
&=\sum_{a_0, a_1, \ldots a_n \geq 0} c_{a_0, a_1, \ldots a_n}(q)x_0^{a_0}x_1^{a_1}\cdots x_n^{a_n}
\end{align}
The main theorem of this paper is as follows:
\begin{theorem}\label{main}
There exists a unique choice of local weights $H(f_0, f_1, \ldots f_n)$ and global coefficients $c_{a_0, a_1, \ldots a_n}(q)$ satisfying the four axioms. Moreover, these give rise to a multiple Dirichlet series $Z(x_0,x_1,\ldots x_n)$ with meromorphic continuation to $|x_0x_1\cdots x_n|<q^{-(n+1)/2}$ and group of functional equations isomorphic to the affine Weyl group of $\widetilde{A}_n$.
\end{theorem}
We will prove the last statement first; that is, assuming we have chosen weights and coefficients satisfying the axioms, the resulting series satisfies the functional equations. For now, these functional equations are identities of formal power series only, but they hold as identities of functions once $Z$ is meromorphically continued. We will require some additional definitions of single-variable series contained within $Z$: let
\begin{align}
&\Lambda_{a_0, \ldots \hat{a_i}, \ldots a_n}(x_i)=\sum_{a_i \geq 0} c_{a_0, \ldots a_i, \ldots a_n}(q)x_i^{a_i} \\
&\lambda_{p^{a_0}, \ldots \hat{p^{a_i}}, \ldots p^{a_n}}(x_i)=\sum_{a_i \geq 0} H(p^{a_0}, \ldots p^{a_i}, \ldots p^{a_n})x_i^{a_i \deg p} \\
&L_{f_0, \ldots \hat{f_i}, \ldots f_n}(x_i)=\sum_{f_i \in \mathbb{F}_q[t] \text{ monic}} H(f_0, \ldots f_i, \ldots f_n) x_i^{\deg f_i}
\end{align}
where we use the notation $\hat{a}$ or $\hat{f}$ for an omitted index. The local series $\lambda$ can be obtained from the global series $\Lambda$ by substituting $q\mapsto |p|^{-1}$, $x_i \mapsto |p|x_i^{\deg p}$ and multiplying by $|p|^{a_0+\cdots+\hat{a_i}+\cdots+a_n}$.
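To spell out this substitution, set $\Sigma:=a_0+\cdots+\hat{a_i}+\cdots+a_n$; then Axiom \ref{localglobal} gives
\begin{align}
\lambda_{p^{a_0}, \ldots \hat{p^{a_i}}, \ldots p^{a_n}}(x_i)&=\sum_{a_i \geq 0} |p|^{\Sigma+a_i}\, c_{a_0, \ldots a_i, \ldots a_n}(|p|^{-1})\, x_i^{a_i \deg p} \nonumber \\
&=|p|^{\Sigma}\sum_{a_i \geq 0} c_{a_0, \ldots a_i, \ldots a_n}(|p|^{-1})\left(|p|\, x_i^{\deg p}\right)^{a_i}.
\end{align}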
\begin{prop}\label{axiomsimplyfes}
Fix all but one $a_i$, for $i$ modulo $n+1$. If $a_{i-1}+a_{i+1}$ is odd, then $\Lambda_{a_0, \ldots \hat{a_i}, \ldots a_n}(x_i)$ and $\lambda_{p^{a_0}, \ldots \hat{p^{a_i}}, \ldots p^{a_n}}(x_i)$ are polynomials of degrees $a_{i-1}+a_{i+1}-1$, $(a_{i-1}+a_{i+1}-1)\deg p$ respectively, satisfying:
\begin{align}
&(q^{1/2} x_i)^{a_{i-1}+a_{i+1}-1} \Lambda_{a_0, \ldots \hat{a_i}, \ldots a_n}(q^{-1}x_i^{-1}) = \Lambda_{a_0, \ldots \hat{a_i}, \ldots a_n}(x_i) \\
&(q^{1/2} x_i)^{(a_{i-1}+a_{i+1}-1)\deg p} \lambda_{p^{a_0}, \ldots \hat{p^{a_i}}, \ldots p^{a_n}}(q^{-1}x_i^{-1}) = \lambda_{p^{a_0}, \ldots \hat{p^{a_i}}, \ldots p^{a_n}}(x_i).
\end{align}
If $a_{i-1}+a_{i+1}$ is even, then these series are rational functions with denominators $1-qx_i$, $1-x_i^{\deg p}$, and numerators of degrees $a_{i-1}+a_{i+1}$, $(a_{i-1}+a_{i+1})\deg p$ respectively, satisfying:
\begin{align}
&(q^{1/2} x_i)^{a_{i-1}+a_{i+1}}(1-x_i^{-1})\Lambda_{a_0, \ldots \hat{a_i}, \ldots a_n}(q^{-1}x_i^{-1}) = (1-qx_i)\Lambda_{a_0, \ldots \hat{a_i}, \ldots a_n}(x_i) \\
&(q^{1/2} x_i)^{(a_{i-1}+a_{i+1})\deg p}(1-q^{-\deg p}x_i^{-\deg p})\lambda_{p^{a_0}, \ldots \hat{p^{a_i}}, \ldots p^{a_n}}(q^{-1}x_i^{-1}) \nonumber \\
& \quad = (1-x_i^{\deg p})\lambda_{p^{a_0}, \ldots \hat{p^{a_i}}, \ldots p^{a_n}}(x_i).
\end{align}
\end{prop}
\begin{proof}
Notice that in each case, the functional equations of $\Lambda$ and $\lambda$ are equivalent. The proof is by induction on $\sum_{j\neq i}a_j$. If $\sum_{j\neq i}a_j=0$, the proposition follows from Axiom \ref{initialconditions}. For the inductive step, fix $f_0, \ldots \hat{f_i}, \ldots f_n \in \mathbb{F}_q[t]$ monic of degrees $a_0, \ldots \hat{a_i}, \ldots a_n$. Then, by Axiom \ref{twistedmult}, we have the following Euler product formula:
\begin{align}
&L_{f_0, \ldots \hat{f_i}, \ldots f_n}(x_i) \nonumber \\
&=\left(\prod_{j\neq i, i-1} \prod_{p|f_j} \res{p^{v_p(f_j)}}{p^{-v_p(f_{j+1})}f_{j+1}}\right) \nonumber \\ & \prod_p \left(\sum_{a_i=0}^{\infty} H(p^{v_p(f_0)},\ldots p^{a_i}, \ldots p^{v_p(f_n)}) \res{p^{a_i}}{p^{-v_p(f_{i-1}f_{i+1})}f_{i-1}f_{i+1}} x_i^{a_i \deg p}\right)
\end{align}
where $v_p(f)$ denotes the number of times $p$ divides $f$. Moreover, if we set $g$ to be the squarefree part of $f_{i-1}f_{i+1}$, then all but finitely many of the Euler factors match those of
\begin{equation}
L(x_i, \chi_g)=\prod_p (1-\res{p}{g}x_i^{\deg p})^{-1}.
\end{equation}
The only primes whose Euler factors do not match are those where $p|f_0\cdots\hat{f_i}\cdots f_n$. At such a prime, the ratio of the Euler factors is
\begin{align}
&\mu_p(x_i)=\frac{\lambda_{p^{v_p(f_0)}, \ldots \hat{p^{v_p(f_i)}}, \ldots p^{v_p(f_n)}}(\res{p}{p^{-v_p(g)}g} x_i)}{(1-\res{p}{g}x_i^{\deg p})^{-1}} \nonumber \\
&=\left\lbrace \begin{array}{cc} \lambda_{p^{v_p(f_0)}, \ldots \hat{p^{v_p(f_i)}}, \ldots p^{v_p(f_n)}}(\pm x_i) & \text{if }v_p(f_{i-1}f_{i+1}) \text{ odd} \\ (1 \mp x_i^{\deg p}) \lambda_{p^{v_p(f_0)}, \ldots \hat{p^{v_p(f_i)}}, \ldots p^{v_p(f_n)}}(\pm x_i) & \text{if } v_p(f_{i-1}f_{i+1}) \text{ even} \end{array} \right.
\end{align}
Assuming that the proposition holds for $\lambda_{p^{v_p(f_0)}, \ldots \hat{p^{v_p(f_i)}}, \ldots p^{v_p(f_n)}}$, then this ratio
is a polynomial of degree $d_p=(v_p(f_{i-1}f_{i+1})-1)\deg p$ if $v_p(f_{i-1}f_{i+1})$ is odd, or $v_p(f_{i-1}f_{i+1})\deg p$ if $v_p(f_{i-1}f_{i+1})$ is even, with functional equation
\begin{equation}
(q^{1/2}x_i)^{d_p} \mu_p(q^{-1}x_i^{-1})=\mu_p(x_i).
\end{equation}
Combining these local functional equations with the functional equation of $L(x_i, \chi_g)$, we obtain for $a_{i-1}+a_{i+1}$ odd:
\begin{equation}
(q^{1/2}x_i)^{a_{i-1}+a_{i+1}-1}L_{f_0, \ldots \hat{f_i}, \ldots f_n}(q^{-1}x_i^{-1})=L_{f_0, \ldots \hat{f_i}, \ldots f_n}(x_i)
\end{equation}
and for $a_{i-1}+a_{i+1}$ even:
\begin{equation}
(q^{1/2}x_i)^{a_{i-1}+a_{i+1}}(1-x_i^{-1})L_{f_0, \ldots \hat{f_i}, \ldots f_n}(q^{-1}x_i^{-1})=(1-qx_i)L_{f_0, \ldots \hat{f_i}, \ldots f_n}(x_i).
\end{equation}
By definition, we have
\begin{equation}
\Lambda_{a_0, \ldots \hat{a_i}, \ldots a_n}(x_i)=\sum_{\substack{f_0, \ldots \hat{f_i}, \ldots f_n \\ \deg f_j=a_j}} L_{f_0, \ldots \hat{f_i}, \ldots f_n}(x_i)
\end{equation}
and we may assume inductively that each $L_{f_0, \ldots \hat{f_i}, \ldots f_n}(x_i)$ satisfies the desired functional equations, except for
\begin{align}
&L_{p^{a_0}, \ldots \hat{p^{a_i}}, \ldots p^{a_n}}(x_i) \nonumber \\
&=\left\lbrace \begin{array}{cc} \lambda_{p^{a_0}, \ldots \hat{p^{a_i}}, \ldots p^{a_n}}(x_i) & \text{if } a_{i-1}+a_{i+1} \text{ odd} \\ \frac{1-x_i}{1-qx_i} \lambda_{p^{a_0}, \ldots \hat{p^{a_i}}, \ldots p^{a_n}}(x_i) & \text{if } a_{i-1}+a_{i+1} \text{ even} \end{array} \right.
\end{align}
when $p$ is a prime of degree $1$. We conclude, in the case of $a_{i-1}+a_{i+1}$ odd:
\begin{align}
&(q^{1/2}x_i)^{a_{i-1}+a_{i+1}-1}(\Lambda_{a_0, \ldots \hat{a_i}, \ldots a_n}(q^{-1}x_i^{-1})-q\lambda_{p^{a_0}, \ldots \hat{p^{a_i}}, \ldots p^{a_n}}(q^{-1}x_i^{-1})) \nonumber \\ &=\Lambda_{a_0, \ldots \hat{a_i}, \ldots a_n}(x_i)-q\lambda_{p^{a_0}, \ldots \hat{p^{a_i}}, \ldots p^{a_n}}(x_i)
\end{align}
and in the case of $a_{i-1}+a_{i+1}$ even:
\begin{align}
&(q^{1/2}x_i)^{a_{i-1}+a_{i+1}}((1-x_i^{-1})\Lambda_{a_0, \ldots \hat{a_i}, \ldots a_n}(q^{-1}x_i^{-1})\nonumber \\
& \quad -q(1-q^{-1}x_i^{-1})\lambda_{p^{a_0}, \ldots \hat{p^{a_i}}, \ldots p^{a_n}}(q^{-1}x_i^{-1})) \nonumber \\ &=(1-qx_i)\Lambda_{a_0, \ldots \hat{a_i}, \ldots a_n}(x_i)-q(1-x_i)\lambda_{p^{a_0}, \ldots \hat{p^{a_i}}, \ldots p^{a_n}}(x_i).
\end{align}
Finally, we apply Axiom \ref{dominance}. Consider the above two functional equations as identities of power series in $x_i$ and $q$. Comparing the terms whose degree in $q$ exceeds $\frac{1}{2}(a_0+\cdots+\hat{a_i}+\cdots+a_n)$ plus half their degree in $x_i$ yields the desired functional equation for $\Lambda$. Comparing the remaining terms yields the desired functional equation for $\lambda$.
\end{proof}
For $i$ modulo $n+1$ let $\sigma_i:\mathbb{C}^{n+1}\to \mathbb{C}^{n+1}$ be given by
\begin{equation}
(\sigma_i(x_0, x_1, \ldots x_n))_j=\left\lbrace \begin{array}{cc} q^{-1}x_i^{-1} & \text{if } j\equiv i \\ q^{1/2}x_ix_j & \text{if } j\equiv i\pm 1 \\ x_j & \text{otherwise}. \end{array}\right.
\end{equation}
These transformations generate the group
\begin{equation}
\langle\sigma_i\mid\sigma_i^2=1,\ \sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1},\ \sigma_i\sigma_j=\sigma_j\sigma_i \text{ for } j\neq i\pm 1\rangle
\end{equation}
which is the $\widetilde{A}_n$ affine Weyl group. Then $Z$ has functional equations
\begin{align}
& Z_{a_{i+1}+a_{i-1} \text{ odd}}(\sigma_i(x_0,\ldots x_n))=q^{1/2} x_i Z_{a_{i+1}+a_{i-1} \text{ odd}}(x_0, \ldots x_n), \label{oddfe}\\
& (1-x_i^{-1}) Z_{a_{i+1}+a_{i-1} \text{ even}}(\sigma_i(x_0, \ldots x_n))= (1-qx_i) Z_{a_{i+1}+a_{i-1} \text{ even}}(x_0, \ldots x_n) \label{evenfe}
\end{align}
where $Z_{a_{i+1}+a_{i-1} \text{ odd/even}}$ denotes the sum of terms $c_{a_0,\ldots a_n}(q)x_0^{a_0}\cdots x_n^{a_n}$ with $a_{i-1}+a_{i+1}$ odd or even, respectively.
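As a quick sanity check on the definition of the $\sigma_i$ (a small numerical sketch of our own, taking $q=5$ and $n=3$), one can verify the involution, braid, and commutation relations directly on a random point of $(\mathbb{C}^*)^{n+1}$:
\begin{verbatim}
# Numerical sketch: check the affine Weyl group relations for the sigma_i,
# with q = 5 and n = 3 (indices taken mod n+1).  Illustration only.
import math, random

q, n = 5.0, 3
r = math.sqrt(q)

def sigma(i, x):
    y = list(x)
    y[i % (n + 1)] = 1.0 / (q * x[i % (n + 1)])
    for j in (i - 1, i + 1):
        y[j % (n + 1)] = r * x[i % (n + 1)] * x[j % (n + 1)]
    return y

def close(x, y):
    return all(abs(a - b) < 1e-9 for a, b in zip(x, y))

random.seed(0)
x = [random.uniform(0.5, 1.5) for _ in range(n + 1)]

print("involutions :", all(close(sigma(i, sigma(i, x)), x)
                           for i in range(n + 1)))
print("braid       :", all(close(sigma(i, sigma(i + 1, sigma(i, x))),
                                 sigma(i + 1, sigma(i, sigma(i + 1, x))))
                           for i in range(n + 1)))
print("commutation :", all(close(sigma(i, sigma(j, x)), sigma(j, sigma(i, x)))
                           for i in range(n + 1) for j in range(n + 1)
                           if (j - i) % (n + 1) not in (0, 1, n)))
\end{verbatim}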
We may also define the $p$-part of $Z$,
\begin{equation}
Z_p(x_0, \ldots x_n)=\sum_{a_0, \ldots a_n} H(p^{a_0}, \ldots p^{a_n})x_0^{a_0\deg p}\cdots x_n^{a_n\deg p}
\end{equation}
and obtain similar local functional equations.
The functional equations are identities of formal power series, which may be translated into linear recurrences on the coefficients. If $a_{i-1}+a_{i+1}$ is odd, then we have
\begin{equation} \label{oddrecurrence}
c_{\ldots a_i, \ldots}(q)=q^{a_i-(a_{i-1}+a_{i+1}-1)/2} c_{\ldots a_{i-1}+a_{i+1}-1-a_i, \ldots}(q)
\end{equation}
and if $a_{i-1}+a_{i+1}$ is even, then
\begin{align} \label{evenrecurrence}
&c_{\ldots a_i, \ldots}(q) \nonumber \\
&=q c_{\ldots a_i-1, \ldots}(q) + q^{a_i-(a_{i-1}+a_{i+1})/2}(c_{\ldots a_{i-1}+a_{i+1}-a_i, \ldots}(q)-q c_{\ldots a_{i-1}+a_{i+1}-a_i-1, \ldots}(q)).
\end{align}
Note that any coefficient with $a_i > \frac{1}{2}(a_{i-1}+a_{i+1})$ can be rewritten in terms of lower coefficients. The only undetermined coefficients are the diagonals, $c_{a, a, \ldots a}(q)$.
On the other hand, the transformations $\sigma_i$ leave the form $x_0x_1\cdots x_n$ invariant, so we may multiply $Z$ by an arbitrary power series in this variable, thereby obtaining an arbitrary family of diagonal coefficients, without affecting the functional equations. We have proved the following:
\begin{prop}\label{diagonal}
A series $Z(x_0, \ldots x_n)=\sum\limits_{a_0, \ldots a_n} c_{a_0, \ldots a_n}(q)x_0^{a_0}\cdots x_n^{a_n}$ which satisfies the functional equations \ref{oddfe} and \ref{evenfe} is uniquely determined by its diagonal coefficients $c_{a, \ldots a}(q)$.
\end{prop}
In fact, by this proposition, the ratio of two power series satisfying the functional equations must be a diagonal series. By inspecting the recursive formulas, one also sees that if the $c_{a, \ldots a}(q)$ satisfy Axiom \ref{dominance}, then so do all the coefficients.
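The reduction underlying Proposition \ref{diagonal} can be made completely explicit. The following sketch (our own illustration, with the diagonal coefficients kept as symbolic placeholders and coefficients with a negative index treated as $0$) repeatedly applies the recurrences \ref{oddrecurrence} and \ref{evenrecurrence} at a position where $a_i>\frac{1}{2}(a_{i-1}+a_{i+1})$:
\begin{verbatim}
# Sketch: reduce c_{a_0,...,a_n}(q) to the diagonal coefficients using the
# recurrences above.  d_a stands for the (undetermined) diagonal c_{a,...,a}(q);
# coefficients with a negative entry are taken to be 0.  Illustration only.
import sympy

q = sympy.Symbol('q')

def coeff(a):
    a = tuple(a)
    if any(ai < 0 for ai in a):
        return sympy.Integer(0)
    if len(set(a)) == 1:
        return sympy.Symbol('d_%d' % a[0])
    m = len(a)
    # a position whose entry exceeds the average of its cyclic neighbours
    i = next(i for i in range(m)
             if 2 * a[i] > a[(i - 1) % m] + a[(i + 1) % m])
    s = a[(i - 1) % m] + a[(i + 1) % m]
    def repl(v):
        b = list(a); b[i] = v; return b
    if s % 2 == 1:   # odd case
        return q**(a[i] - (s - 1)//2) * coeff(repl(s - 1 - a[i]))
    return (q * coeff(repl(a[i] - 1))   # even case
            + q**(a[i] - s//2) * (coeff(repl(s - a[i]))
                                  - q * coeff(repl(s - a[i] - 1))))

# n = 2 examples: express c_{0,2,0}(q) and c_{2,2,0}(q) through the diagonal
print(sympy.expand(coeff((0, 2, 0))))
print(sympy.expand(coeff((2, 2, 0))))
\end{verbatim}
The first output, $q^{2}d_0$ with $d_0=c_{0,0,0}(q)=1$, recovers $c_{0,2,0}(q)=q^{2}$, as required by Axiom \ref{initialconditions}.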
The last result of this section guarantees the existence of a meromorphic power series satisfying the functional equations. It is proven by a generalization of the averaging procedure developed by Chinta and Gunnells \cite{CG1}. A proof for the affine root system $\widetilde{D}_4$ appears in Bucur-Diaconu \cite{BD}; Lee and Zhang \cite{LZ} show that the construction works in all symmetrizable Kac-Moody root systems. The notation of Lee and Zhang differs somewhat from ours; in particular, they construct the $p$-part $Z_p$ of a series--a change of variables is required to obtain the global series $Z$. For a proof in the notation of this paper, see \cite{W}.
We construct an infinite product which describes all the poles of $Z$ implied by the functional equations: let
\begin{equation}
\Delta(x_0, \ldots x_n)=\prod_{m=0}^{\infty} \prod_{\substack{i, j \mod n+1 \\ i\not\equiv j+1}} (1-q(qx_0^2\cdots qx_n^2)^m(qx_i^2\cdots qx_j^2)).
\end{equation}
$\Delta$ is best understood as a deformed Weyl denominator, a product over positive real roots in the $\widetilde{A}_n$ root system. It converges for $|x_0\cdots x_n|<q^{-(n+1)/2}$. Then we have:
\begin{prop} \label{averaging}
There exists a power series $Z_{\text{avg}}(x_0, \ldots x_n)$ satisfying the functional equations \ref{oddfe} and \ref{evenfe}, such that $\Delta(x_0, \ldots x_n)Z_{\text{avg}}(x_0, \ldots x_n)$ has analytic continuation to $|x_0\cdots x_n|<q^{-(n+1)/2}$.
\end{prop}
The series $Z_{\text{avg}}$ does not satisfy our axioms, but it will be crucial in proving the meromorphic continuation of our series $Z$.
\section{Existence and Uniqueness}
To simplify computations leading to the proof of the main theorem, we restrict our consideration to a particular residue $R$ of the series $Z$. Such residues are essential when we apply Tauberian theorems to $Z$ to obtain analytic results. The series $Z$ can be recovered from the residue $R$, but $R$ is simpler than $Z$ because its coefficients are multiplicative, not twisted multiplicative; hence, $R$ has an Euler product. We will observe a symmetry in the Euler product formula which is equivalent to the local-to-global axiom. This, together with the dominance axiom, leads to an explicit formula for $R$. The meromorphic continuation of $R$ implies that of $Z$.
We will have two separate cases: $n$ odd and $n$ even. The computation for $n$ odd is more straightforward; for $n$ even, the analogous results are somewhat more complicated, but not essentially different.
For $n$ odd, we define
\begin{equation}
R(x_0, x_2, \ldots x_{n-1})=(-q)^{(n+1)/2} \text{Res}_{x_1=x_3=\cdots=x_n=q^{-1}} Z(x_0, \ldots x_n)
\end{equation}
and for $n$ even,
\begin{equation}
R(x_0, x_2, \ldots x_n)=(-q)^{n/2} \text{Res}_{x_1=x_3=\cdots=x_{n-1}=q^{-1}} Z(q^{-1/4}x_0, x_1, \ldots x_{n-1}, q^{-1/4}x_n)
\end{equation}
where the first and last variables are multiplied by $q^{-1/4}$ to simplify later formulas. At first, we treat this residue as a formal power series only. Taking a residue may not be a well-defined operation on arbitrary power series, but in this case it is. We may multiply $Z(x_0, \ldots x_n)$ by $1-qx_i$ and then evaluate at $x_i=q^{-1}$; by Proposition \ref{axiomsimplyfes} this involves taking only finite sums of coefficients, so it gives a well-defined series. This is the meaning of $-q\text{Res}_{x_i=q^{-1}} Z(x_0, \ldots x_n)$. Once we give an explicit formula for $R$ in the following section, we will see that it is a meromorphic function, and the residue of a meromorphic function $Z$, in the desired domain.
If $Z$ and $Z'$ are two series satisfying the functional equations \ref{oddfe} and \ref{evenfe}, with residues $R$ and $R'$, then by Proposition \ref{diagonal},
\begin{equation}
\frac{Z(x_0, \ldots x_n)}{Z'(x_0, \ldots x_n)}=Q(x_0\cdots x_n)
\end{equation}
a diagonal series. Therefore,
\begin{equation}
\frac{R(x_0, x_2,\ldots x_{2\lfloor n/2 \rfloor})}{R'(x_0, x_2,\ldots x_{2\lfloor n/2 \rfloor})}=Q(q^{-(n+1)/2}x_0x_2\cdots x_{2\lfloor n/2 \rfloor})
\end{equation}
And the same holds if we only take the diagonal parts of the power series: given $Z(x_0, \ldots x_n)=\sum_{a_0,\ldots a_n}c_{a_0, \ldots a_n}(q)x_0^{a_0}\cdots x_n^{a_n}$, we define $Z_{\text{diag}}(x)=\sum_a c_{a, \ldots a}(q) x^a$, and similarly for $Z'_{\text{diag}}(x)$, $R_{\text{diag}}(x)$, and $R'_{\text{diag}}(x)$. Then
\begin{equation}
\frac{Z_{\text{diag}}(x)}{Z'_{\text{diag}}(x)}=\frac{R_{\text{diag}}(q^{(n+1)/2}x)}{R'_{\text{diag}}(q^{(n+1)/2}x)}=Q(x).
\end{equation}
Equivalently, the ratio
\begin{equation}
\frac{R_{\text{diag}}(x)}{Z_{\text{diag}}(q^{-(n+1)/2}x)}=:P(x)
\end{equation}
is the same for all series satisfying the functional equations \ref{oddfe} and \ref{evenfe}. It does not depend on the choice of diagonal coefficients $c_{a, \ldots a}(q)$. $P(x)$ can be thought of as the diagonal part of the residue if we take $c_{0,\ldots 0}(q)=1$ and $c_{a,\ldots a}(q)=0$ for all $a>0$.
Thus we can recover a full series $Z(x_0, \ldots x_n)$ satisfying the functional equations from its residue $R$, or even from $R_{\text{diag}}$.
Next, we prove two formulas for the coefficients of $R$ in terms of the coefficients of $Z$:
\begin{prop} \label{residueformula1}
Suppose the coefficients of $Z(x_0, \ldots x_n)$ are $c_{a_0, \ldots a_n}(q)$. Then for $n$ odd, the coefficient of $R(x_0, x_2, \ldots x_{n-1})$ at $x_0^{a_0}x_2^{a_2}\cdots x_{n-1}^{a_{n-1}}$ is
\begin{equation}
q^{-2(a_0+a_2+\cdots +a_{n-1})} c_{a_0, a_0+a_2, a_2, a_2+a_4, \ldots a_{n-1}, a_{n-1}+a_0}(q).
\end{equation}
For $n$ even, the coefficient at $x_0^{a_0}x_2^{a_2}\cdots x_{n}^{a_{n}}$ is
\begin{equation}
q^{3(a_0+a_n)/4-2(a_0+a_2+\cdots+a_n)}c_{a_0, a_0+a_2, a_2, a_2+a_4, \ldots a_{n-2}+a_n, a_n}(q).
\end{equation}
In particular, the nonzero coefficients of the residue must have all $a_i$ odd or all $a_i$ even.
\end{prop}
\begin{proof}
Fix all indices except for one $a_i$ with $i$ odd. Then by Proposition \ref{axiomsimplyfes}, we have
\begin{align}
&\Lambda_{a_0, \ldots \hat{a_i}, \ldots a_n}(x_i) \nonumber \\
&= \sum_{a_i=0}^{a_{i-1}+a_{i+1}-1} c_{\ldots a_{i-1}, a_i, a_{i+1} \ldots}(q) x_i^{a_i} + \frac{c_{\ldots a_{i-1}, a_{i-1}+a_{i+1}, a_{i+1} \ldots}(q) x_i^{a_{i-1}+a_{i+1}}}{1-qx_i}
\end{align}
and $c_{\ldots a_{i-1}, a_{i-1}+a_{i+1}, a_{i+1} \ldots}(q)=0$ if $a_{i-1}+a_{i+1}$ is odd.
Taking $-q \text{Res}_{x_i=q^{-1}}$ gives $q^{-a_{i-1}-a_{i+1}}c_{\ldots a_{i-1}, a_{i-1}+a_{i+1}, a_{i+1} \ldots}(q)$. If we repeat this process for all $i\leq n$ odd, we obtain the desired formula. The rearrangements of power series implicit in this computation are only reorderings of finite sums, by Proposition \ref{axiomsimplyfes}.
\end{proof}
We may apply the functional equations $\sigma_i$ for $i \leq n$ odd to the residue. Since only terms with $a_{i-1}+a_{i+1}$ even contribute to the residue, we only need the even part of the functional equation. Let $Z_\text{same}$ denote the part of the power series $Z$ with $a_0+a_2$, $a_2+a_4$, $a_4+a_6$, etc. all even, i.e. where $a_0$, $a_2$, $a_4$, etc. have the same parity. In the case of $n$ odd, we obtain:
\begin{align} \label{residuevalue1}
R(x_0, x_2, \ldots x_{n-1}) &=(-q)^{(n+1)/2} \text{Res}_{x_1=x_3=\cdots=x_n=q^{-1}} Z(x_0, \ldots x_n) \nonumber \\
&=(-q)^{(n+1)/2} \text{Res}_{x_1=x_3=\cdots=x_n=q^{-1}} Z_{\text{same}}(x_0, \ldots x_n) \nonumber \\
&=(1-q)^{(n+1)/2}Z_{\text{same}}(q^{-1}x_0,1, q^{-1}x_2, 1, \ldots q^{-1}x_{n-1}, 1)
\end{align}
and for $n$ even:
\begin{equation} \label{residuevalue2}
R(x_0, x_2, \ldots x_n)=(1-q)^{n/2}Z_{\text{same}}(q^{-3/4} x_0,1, q^{-1}x_2, 1, \ldots q^{-1}x_{n-2}, 1, q^{-3/4}x_n).
\end{equation}
We can now prove a second formula for the coefficients of $R$ in terms of the local coefficients $H(f_0, \ldots f_n)$ rather than the global coefficients $c_{a_0, \ldots a_n}(q)$:
\begin{prop} \label{residueformula2}
If $Z$ is a series defined by local weights $H$ as:
\begin{equation}
Z(x_0, \ldots x_n)=\sum_{f_0, \ldots f_n} H(f_0, \ldots f_n) x_0^{\deg f_0}\cdots x_n^{\deg f_n}
\end{equation}
then for $n$ odd we have:
\begin{align}
&R(x_0, x_2, \ldots x_{n-1}) \nonumber \\
&=\sum_{f_0, f_2, \ldots f_{n-1}} \frac{H(f_0, f_0f_2, f_2, f_2f_4,\ldots f_{n-1}, f_{n-1}f_0)}{q^{\deg f_0+\deg f_2 + \cdots +\deg f_{n-1}}} x_0^{\deg f_0}x_2^{\deg f_2}\cdots x_{n-1}^{\deg f_{n-1}}
\end{align}
and for $n$ even:
\begin{align}
&R(x_0, x_2, \ldots x_n) \nonumber \\
&=\sum_{f_0, f_2, \ldots f_n} \frac{H(f_0, f_0f_2, f_2, f_2f_4,\ldots f_{n-2}f_n, f_n)}{q^{\frac{3}{4}\deg f_0+\deg f_2 + \cdots +\deg f_{n-2}+\frac{3}{4}\deg f_n}} x_0^{\deg f_0}x_2^{\deg f_2}\cdots x_n^{\deg f_n}.
\end{align}
In particular, only tuples of polynomials $f_0, f_2, f_4,\ldots$ with $f_i f_{i+2}$ a perfect square for all $i$, i.e. tuples of polynomials with the same squarefree part, contribute to the residue.
\end{prop}
\begin{proof}
First note that $H(f_0, f_0f_2, f_2, f_2f_4,\ldots)$ indeed vanishes if any $f_if_{i+2}$ is not a perfect square. If a prime $p$ divides $f_if_{i+2}$ an odd number of times, then $H(p^{v_p(f_0)}, p^{v_p(f_0f_2)}, p^{v_p(f_2)}, p^{v_p(f_2f_4)},\ldots)$ must vanish by the local functional equations. Then, by Axiom \ref{twistedmult}, $H(f_0, f_0f_2, f_2, f_2f_4,\ldots)=0$.
Now we use equations \ref{residuevalue1} and \ref{residuevalue2} as a starting point. Fix all but one $f_i$ and assume $\deg f_{i-1}f_{i+1}$ is even. Then, as in the proof of Proposition \ref{axiomsimplyfes}, the series $\sum_{f_i} H(f_0, \ldots f_n)x_i^{\deg f_i}$ matches the L-function $L(x_i, \chi_{f_{i-1}f_{i+1}})$ up to multiplication by correction polynomials. Unless $f_{i-1}f_{i+1}$ is a perfect square, this L-function has a trivial zero at $x_i=1$. If $f_{i-1}f_{i+1}$ is a perfect square, then this series matches the zeta function $(1-qx_i)^{-1}$ up to multiplication by correction polynomials of the form
\begin{align}
(1-x_i^{\deg p})\left(\sum_{a_i=0}^{v_p(f_{i-1}f_{i+1})-1} H(\ldots p^{v_p(f_{i-1})},p^{a_i}, p^{v_p(f_{i+1})}, \ldots) x_i^{a_i\deg p}\right) \nonumber\\
+ H(\ldots p^{v_p(f_{i-1})}, p^{v_p(f_{i-1}f_{i+1})}, p^{v_p(f_{i+1})}, \ldots) x_i^{v_p(f_{i-1}f_{i+1})\deg p}
\end{align}
for each prime $p$ dividing $f_{i-1}f_{i+1}$. Thus, evaluating the series $\sum_{f_i} H(f_0, \ldots f_n)x_i^{\deg f_i}$ at $x_i=1$ and multiplying by $1-q$ gives $H(\ldots f_{i-1}, f_{i-1}f_{i+1}, f_{i+1},\ldots)$. Repeating this process for all $i\leq n$ odd and using equations \ref{residuevalue1} and \ref{residuevalue2} gives the desired result. Again by Proposition \ref{axiomsimplyfes}, the rearrangements of power series implicit in this proof are only reorderings of finite sums.
\end{proof}
Notice that the terms $H(f_0, f_0f_2, f_2, f_2f_4,\ldots)$ appearing in the residue are multiplicative, not twisted multiplicative. That is, if $\gcd(f_0f_2\cdots f_{2\lfloor n/2\rfloor}, g_0g_2\cdots g_{2\lfloor n/2\rfloor})=1$, then
\begin{equation}
H(f_0g_0, f_0f_2g_0g_2, f_2g_2, f_2f_4g_2g_4,\ldots)=H(f_0, f_0f_2, f_2, f_2f_4,\ldots)H(g_0, g_0g_2, g_2, g_2g_4,\ldots).
\end{equation}
Indeed, since all the $f_if_{i+2}$ and $g_ig_{i+2}$ terms are square, they do not contribute to the twists in Axiom \ref{twistedmult}. The only possible factor of $-1$ comes from $\res{f_0}{g_n} \res{f_n}{g_0}$ in the case when $n$ is even, but since $f_0$ and $f_n$ must have the same squarefree part, and so must $g_0$ and $g_n$, this product of residues is $1$.
Therefore, we may write an Euler product expression for $R$:
\begin{equation}
R(x_0, x_2, \ldots x_{2\lfloor n/2 \rfloor})=\prod_{p \text{ prime}} R_p(x_0, x_2, \ldots x_{2\lfloor n/2 \rfloor})
\end{equation}
where if $n$ is odd,
\begin{align}
&R_p(x_0, x_2, \ldots x_{n-1}) \nonumber\\ &= \sum_{a_0, a_2, \ldots a_{n-1}} \frac{H(p^{a_0}, p^{a_0+a_2}, p^{a_2}, p^{a_2+a_4}, \ldots p^{a_{n-1}}, p^{a_{n-1}+a_0})}{q^{(a_0+a_2+\cdots+a_{n-1})\deg p}} x_0^{a_0\deg p} x_2^{a_2\deg p}\cdots x_{n-1}^{a_{n-1}\deg p}
\end{align}
and if $n$ is even,
\begin{align}
&R_p(x_0, x_2, \ldots x_n) \nonumber\\ &= \sum_{a_0, a_2, \ldots a_n} \frac{H(p^{a_0}, p^{a_0+a_2}, p^{a_2}, p^{a_2+a_4}, \ldots p^{a_{n-2}+a_n}, p^{a_n})}{q^{(\frac{3}{4}a_0+a_2+\cdots+a_{n-2}+\frac{3}{4}a_n)\deg p}} x_0^{a_0\deg p} x_2^{a_2\deg p}\cdots x_n^{a_n\deg p}.
\end{align}
Let us compare these equations to Proposition \ref{residueformula1}, and apply Axiom \ref{localglobal}. We see that $R_p$ may be obtained from $R$ by substituting $q\mapsto q^{-\deg p}$ and $x_i\mapsto x_i^{\deg p}$.
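For instance, when $n$ is odd the bookkeeping is as follows: the indices of $H$ in the Euler factor sum to $3(a_0+a_2+\cdots+a_{n-1})$, so Axiom \ref{localglobal} gives
\begin{align}
\frac{H(p^{a_0}, p^{a_0+a_2}, \ldots p^{a_{n-1}}, p^{a_{n-1}+a_0})}{q^{(a_0+a_2+\cdots+a_{n-1})\deg p}}
&=\frac{|p|^{3(a_0+a_2+\cdots+a_{n-1})}\, c_{a_0, a_0+a_2, \ldots a_{n-1}, a_{n-1}+a_0}(|p|^{-1})}{|p|^{a_0+a_2+\cdots+a_{n-1}}} \nonumber \\
&=|p|^{2(a_0+a_2+\cdots+a_{n-1})}\, c_{a_0, a_0+a_2, \ldots a_{n-1}, a_{n-1}+a_0}(|p|^{-1}),
\end{align}
which is the coefficient $q^{-2(a_0+a_2+\cdots +a_{n-1})} c_{a_0, a_0+a_2, \ldots a_{n-1}, a_{n-1}+a_0}(q)$ of $R$ from Proposition \ref{residueformula1} with $q$ replaced by $q^{-\deg p}$; the substitution $x_i\mapsto x_i^{\deg p}$ is visible from the exponents $a_i\deg p$.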
We may write $R(x_0, x_2, \ldots)$ as a product of terms $(1-q^{\beta}x_0^{\alpha_0}x_2^{\alpha_2}\cdots )^{-\gamma}$, where $\alpha_i\in \mathbb{Z}$, and $\beta\in \mathbb{Z}$ for $n$ odd, or $\frac{1}{2}\mathbb{Z}$ for $n$ even (not $\frac{1}{4}\mathbb{Z}$ because $\alpha_0, \alpha_n$ must have the same parity). Any such formal power series can be expressed uniquely in this way. Then comparing to the Euler product formula for the function field zeta function, we conclude that such factors come in pairs.
\begin{property} \label{symmetry} If $(1-q^{\beta}x_0^{\alpha_0}x_2^{\alpha_2}\cdots )^{-\gamma}$ is a factor of $R$, then so is $(1-q^{1-\beta}x_0^{\alpha_0}x_2^{\alpha_2}\cdots )^{-\gamma}$.
\end{property}
This symmetry is equivalent to Axiom \ref{localglobal} for the full series $Z$: it implies the local-to-global property for coefficients $c_{a_0, a_0+a_2, a_2, a_2+a_4,\ldots}(q)$ and local weights $H(f_0, f_0f_2, f_2, f_2f_4, \ldots)$, and all other coefficients can be obtained from these via compatible global and local functional equations.
We are now ready to prove the existence and uniqueness statements of the main theorem. We first introduce several new notations based on the expression of $R$ as a product of terms $(1-q^{\beta}x_0^{\alpha_0}x_2^{\alpha_2}\cdots )^{-\gamma}$. We let $R^{\flat}$ denote the product of terms with $\beta \leq 0$ and $R^{\sharp}$ denote the product of terms with $\beta \geq 1$. If $n$ is even, we also have $R^{\natural}$ with $\beta=\frac{1}{2}$. We let $R_1$ denote the product of diagonal factors (with $\alpha_0=\alpha_2=\alpha_4=\cdots$), and let $R_0$ denote the product of off-diagonal factors. In the following section, we will explicitly compute $R_0$, and show that it satisfies Property \ref{symmetry}. For now, we assume this. We also have $R_0^{\flat}$, $R_0^{\natural}$, $R_0^{\sharp}$, $R_1^{\flat}$, $R_1^{\natural}$, $R_1^{\sharp}$, and the diagonal parts of each of these.
Recall that
\begin{equation}
P(x)Z_{\text{diag}}(q^{-(n+1)/2}x)=R_{\text{diag}}(x)=R_{0, \text{ diag}}(x)R_1(x) \label{diagonaltoresidue}
\end{equation}
for any series $Z$ satisfying the functional equations with residue $R$. We must show that there exists a unique choice of $Z_{\text{diag}}$ satisfying Axiom \ref{dominance} and $R_1$ satisfying Property \ref{symmetry} which make this equation true. The resulting series $Z$ with residue $R$ will satisfy the four axioms.
For $Z_{\text{diag}}(q^{-(n+1)/2}x)$ to satisfy the dominance axiom, its coefficients (other than the constant coefficient $1$) must be polynomials divisible by $q$. If it is written as a product of terms $(1-q^{\beta}x^{\alpha})^{-\gamma}$, then all these terms will have $\beta \geq 1$. Thus, when $n$ is odd,
\begin{align} \label{flat}
R_1^{\flat}(x)&=(P(x)Z_{\text{diag}}(q^{-(n+1)/2}x)R_{0, \text{ diag}}(x)^{-1})^{\flat} \nonumber \\
&=(P(x)R_{0, \text{ diag}}(x)^{-1})^{\flat}
\end{align}
and both $P(x)$ and $R_{0, \text{ diag}}(x)^{-1}$ are fixed. Hence, $R_1^{\flat}$ is uniquely determined. Then we must choose $R_1^{\sharp}$ to satisfy Property \ref{symmetry}, and $Z_{\text{diag}}$ to satisfy equation \ref{diagonaltoresidue}. In the case of $n$ even, $R_1^{\flat}R_1^{\natural}$ is uniquely determined, and the conclusion is the same as before.
\section{Computing the Residue}
In this section, we prove explicit formulas for the residue.
\begin{prop} \label{Rformula}
If the series $Z(x_0, \ldots x_n)$ satisfies Axioms \ref{twistedmult}-\ref{initialconditions}, and $n$ is odd, then the residue $R(x_0, x_2, \ldots x_{n-1})$ is as follows:
\begin{align}\label{residueodd}
&R(x_0, x_2, \ldots x_{n-1}):=(-q)^{(n+1)/2} \text{Res}_{x_1=x_3=\cdots=x_n=q^{-1}} Z(x_0, \ldots x_n) \nonumber \\
&=\prod_{m=0}^{\infty} (1-(x_0 x_2\cdots x_{n-1})^{2m+1})^{-1}(1-q (x_0 x_2\cdots x_{n-1})^{2m+1})^{-1} \nonumber\\
& \qquad \prod_{\substack{i,j \text{ mod } n+1 \\ \text{even}}} (1-(x_0 x_2\cdots x_{n-1})^{2m}(x_i x_{i+2}\cdots x_j)^2)^{-1} \nonumber \\
&\hspace{6em} (1-q (x_0 x_2\cdots x_{n-1})^{2m}(x_i x_{i+2}\cdots x_j)^2)^{-1}
\end{align}
and if $n$ is even, then:
\begin{align} \label{residueeven}
&R(x_0, x_2, \ldots x_n):=(-q)^{n/2} \text{Res}_{x_1=x_3=\cdots=x_{n-1}=q^{-1}} Z(q^{-1/4}x_0, x_1, \ldots x_{n-1}, q^{-1/4}x_n) \nonumber \\
&=\prod_{m=0}^{\infty} (1-(x_0x_2\cdots x_n)^{2m+2})^{-n/2}(1-q(x_0x_2\cdots x_n)^{2m+2})^{-n/2} \nonumber \\
& \qquad \quad \prod_{\substack{0\leq i<n \\ \text{even}}} (1-q^{1/2}(x_0x_2\cdots x_n)^{2m}(x_0x_2\cdots x_i)^2)^{-1} \nonumber \\
&\hspace{6em} (1-q^{1/2}(x_0x_2\cdots x_n)^{2m}(x_{i+2}x_{i+4}\cdots x_n)^2)^{-1} \nonumber \\
&\qquad \quad \prod_{\substack{0< i\leq j <n \\ \text{even}}} (1-(x_0x_2\cdots x_n)^{2m}(x_ix_{i+2}\cdots x_j)^2)^{-1} \nonumber \\
&\hspace{6em} (1-q(x_0x_2\cdots x_n)^{2m}(x_ix_{i+2}\cdots x_j)^2)^{-1} \nonumber \\
&\hspace{6em} (1-(x_0x_2\cdots x_n)^{2m}(x_0x_2\cdots x_{i-2} x_{j+2} x_{j+4}\cdots x_n)^2)^{-1} \nonumber \\
&\hspace{6em} (1-q(x_0x_2\cdots x_n)^{2m}(x_0x_2\cdots x_{i-2} x_{j+2} x_{j+4}\cdots x_n)^2)^{-1}.
\end{align}
\end{prop}
These products define meromorphic functions in the domain $|x_0x_2\cdots x_{2\lfloor n/2 \rfloor}|<1$. By Proposition \ref{averaging} we have a series $Z_{\text{avg}}$, satisfying the same functional equations as $Z$, with meromorphic continuation to $|x_0x_1\cdots x_n|<q^{-(n+1)/2}$. Its residue $R_{\text{avg}}$ is meromorphic in the same domain as $R$. The ratio
\begin{equation}
\frac{R(x_0, x_2, \ldots x_{2\lfloor n/2 \rfloor})}{R_{\text{avg}}(x_0, x_2, \ldots x_{2\lfloor n/2 \rfloor})}=Q(q^{-(n+1)/2}x_0x_2\cdots x_{2\lfloor n/2 \rfloor})
\end{equation}
is a power series in one variable $x_0x_2\cdots x_{2\lfloor n/2 \rfloor}$, and $Q(x)$ is meromorphic for $|x|<q^{-(n+1)/2}$. Thus
\begin{equation}
Z(x_0,\ldots x_n)=Z_{\text{avg}}(x_0, \ldots x_n)Q(x_0\cdots x_n)
\end{equation}
is meromorphic for $|x_0x_1\cdots x_n|<q^{-(n+1)/2}$. Hence, if we can prove Proposition \ref{Rformula} we will complete the proof of Theorem \ref{main}.
First we compute the residue up to a diagonal factor, using functional equations. Let $i$ be even, and, if $n$ is even, $0<i<n$. The functional equations $\sigma_i \sigma_{i-1} \sigma_{i+1} \sigma_i$ yield:
\begin{align}
&Z(\ldots x_{i-2}, x_{i-1}, x_i, x_{i+1}, x_{i+2}, \ldots) \nonumber \\
&=\frac{1}{16} \sum_{\epsilon_1, \epsilon_2, \epsilon_3, \epsilon_4 = \pm 1} \epsilon_2 \epsilon_3 q^{-2} x_{i-1}^{-2} x_i^{-4}x_{i+1}^{-2}(\epsilon_4 q^{-1/2}-\frac{1-\epsilon_2 \epsilon_3 q x_{i-1} x_i x_{i+1}}{1-\epsilon_2 \epsilon_3 q^2 x_{i-1} x_i x_{i+1}}) \nonumber \\
&(\epsilon_3 q^{-1/2}-\frac{1-\epsilon_1 q^{1/2} x_i x_{i+1}}{1-\epsilon_1 q^{3/2} x_i x_{i+1}}) (\epsilon_2 q^{-1/2}-\frac{1-\epsilon_1 q^{1/2} x_{i-1} x_i}{1-\epsilon_1 q^{3/2} x_{i-1} x_i}) (\epsilon_1 q^{-1/2}-\frac{1-x_i}{1-q x_i}) \nonumber \\
&Z(\ldots \epsilon_2 q x_{i-2} x_{i-1} x_i, \epsilon_1 \epsilon_2 \epsilon_3 \epsilon_4 x_{i-1}, \frac{\epsilon_2 \epsilon_3}{q^2 x_{i-1} x_i x_{i+1}}, \epsilon_1 \epsilon_2 \epsilon_3 \epsilon_4 x_{i+1}, \epsilon_3 q x_i x_{i+1} x_{i+2}, \ldots).
\end{align}
If we take the residue, only the terms with $\epsilon_1 \epsilon_2 \epsilon_3 \epsilon_4=1$ will contribute, and, by Proposition (\ref{residueformula1}), only even powers of $\epsilon_2$ and $\epsilon_3$ will appear. Thus the residue in fact has functional equations with scalar cocycle:
\begin{equation} \label{resfe}
R(\ldots x_{i-2}, x_i, x_{i+2}, \ldots) = (*) R(\ldots x_{i-2} x_i, \frac{1}{x_i}, x_i x_{i+2}, \ldots)
\end{equation}
where
\begin{align}
&(*)=\frac{1}{16} \sum_{\epsilon_1, \epsilon_2, \epsilon_3 = \pm 1} \epsilon_2 \epsilon_3 q^{2} x_i^{-4}(\epsilon_1 \epsilon_2 \epsilon_3 q^{-1/2}-\frac{1-\epsilon_2 \epsilon_3 q^{-1}x_i}{1-\epsilon_2 \epsilon_3 x_i}) \nonumber \\
&(\epsilon_3 q^{-1/2}-\frac{1-\epsilon_1 q^{-1/2} x_i}{1-\epsilon_1 q^{1/2} x_i}) (\epsilon_2 q^{-1/2}-\frac{1-\epsilon_1 q^{-1/2} x_i}{1-\epsilon_1 q^{1/2} x_i}) (\epsilon_1 q^{-1/2}-\frac{1-x_i}{1-q x_i}) \nonumber \\
&=\frac{(1-x_i^{-2})(1-qx_i^{-2})}{(1-x_i^2)(1-qx_i^2)}.
\end{align}
Applying the transformation $(\ldots x_{i-2}, x_i, x_{i+2}, \ldots)\mapsto (\ldots x_{i-2} x_i, \frac{1}{x_i}, x_i x_{i+2}, \ldots)$ to \ref{residueodd} or \ref{residueeven} just permutes the factors, except for the two factors $(1-x_i^2)^{-1}(1-qx_i^2)^{-1}$, which are replaced by $(1-x_i^{-2})^{-1}(1-qx_i^{-2})^{-1}$. Therefore the formulas of Proposition \ref{Rformula} satisfy the functional equations \ref{resfe}.
Let $R$ and $R'$ be two power series satisfying the functional equations \ref{resfe}. Then the ratio $R/R'$ is invariant under $(\ldots x_{i-2}, x_i, x_{i+2}, \ldots)\mapsto (\ldots x_{i-2} x_i, \frac{1}{x_i}, x_i x_{i+2}, \ldots)$ for $i$ even and, if $n$ is even, $0<i<n$. In the case of $n$ odd, this immediately implies that $R/R'$ is a diagonal power series, so the formula \ref{residueodd} is correct up to diagonal factors.
If $n$ is even, we require additional functional equations. There are two functional equations corresponding to the transformations $(\sigma_0\sigma_1\cdots\sigma_n)^2$ and $(\sigma_n\sigma_{n-1}\cdots\sigma_0)^2$. We will describe the $(\sigma_0\sigma_1\cdots\sigma_n)^2$ functional equation; the other one is similar. We have
\begin{align}
&R(x_0, x_2, x_4, \ldots x_{n-4}, x_{n-2}, x_n) \nonumber \\ &=(*) R(x_0^3x_2^3x_4^2\cdots x_n^2, x_4, x_6, \ldots x_{n-2}, x_0x_n, x_0^{-3}x_2^{-2}\cdots x_n^{-2})
\end{align}
where
\begin{equation}
(*)=\frac{1-q^{1/2}(x_0x_2\cdots x_n)^{-4} x_0^{-2}}{1-q^{1/2}(x_0x_2\cdots x_n)^4 x_0^2} \prod\limits_{m=0}^1 \prod\limits_{\substack{0\leq i<n \\ \text{even}}} \frac{(1-q^{1/2}(x_0x_2\cdots x_n)^{-2m}(x_0x_2\cdots x_i)^{-2})}{(1-q^{1/2}(x_0x_2\cdots x_n)^{2m}(x_0x_2\cdots x_i)^2)}
\end{equation}
(the transformation is slightly different when $n=2$).
Finally, suppose $n$ is even and $n>2$. Consider the transformation $\sigma_0\sigma_1\sigma_n\sigma_{n-1}\sigma_0\sigma_n\sigma_1\sigma_0$. This leads to a functional equation
\begin{align}
&R(x_0, x_2, x_4, \ldots x_{n-4}, x_{n-2}, x_n) \nonumber \\ &=(*) R(x_n^{-1}, x_0x_2x_n, x_4, \ldots x_{n-4}, x_0x_{n-2}x_n, x_0^{-1})
\end{align}
where
\begin{equation}
(*)=\frac{(1-q^{1/2}x_0^{-2})(1-q^{1/2}x_n^{-2})(1-x_0^{-2}x_n^{-2})(1-qx_0^{-2}x_n^{-2})}{(1-q^{1/2}x_0^2)(1-q^{1/2}x_n^2)(1-x_0^2x_n^2)(1-qx_0^2x_n^2)}
\end{equation}
(the functional equation is slightly different when $n=4$).
It is straightforward to show that the formula \ref{residueeven} satisfies these additional functional equations, and that they determine it up to diagonal factors. This completes the computation of $R$ up to diagonal factors. Note that the off-diagonal part $R_0$ satisfies Property \ref{symmetry}.
With the off-diagonal factors in hand, we can compute all of $R$. In fact, by Property \ref{symmetry} we need only compute $R^{\flat}$ and $R^{\natural}$. For $n$ odd, we must show:
\begin{align}\label{residueoddflat}
&R^{\flat}(x_0, x_2, \ldots x_{n-1}) \nonumber \\
&=\prod_{m=0}^{\infty} (1-(x_0 x_2\cdots x_{n-1})^{2m+1})^{-1} \prod_{\substack{i,j \text{ mod } n+1 \\ \text{even}}} (1-(x_0 x_2\cdots x_{n-1})^{2m}(x_i x_{i+2}\cdots x_j)^2)^{-1}
\end{align}
where only the first factor and the factors with $j\equiv i-2$ are not yet determined. For $n$ even, we must show:
\begin{align} \label{residueevenflat}
&R^{\flat}(x_0, x_2, \ldots x_n) \nonumber \\
&=\prod_{m=0}^{\infty} \prod_{\substack{i,j \text{ mod } n+2 \\ \text{even} \\i\neq 0, \, j\neq n}} (1-(x_0x_2\cdots x_n)^{2m}(x_ix_{i+2}\cdots x_j)^2)^{-1}
\end{align}
where only the factors with $j \equiv i-2$ are not yet determined. Further, $R^{\natural}$ has no diagonal factors.
Before starting the computation, we state a useful lemma on the combinatorics of the partition function.
\begin{lemma}
The generating function of integer partitions $\delta^{(0)}\geq \delta^{(1)}\geq \delta^{(2)}\geq\ldots$ such that $\sum\limits _{j\equiv k \mod n} \delta^{(j)} =a_k$, regarded as a series in the variables $x_1, \ldots x_n$ tracking the sums $a_1, \ldots a_n$, is
\begin{equation}
\prod_{m=0}^{\infty} \prod_{j=1}^{n} (1-(x_1x_2\cdots x_j)(x_1x_2\cdots x_n)^m)^{-1}
\end{equation}
\end{lemma}
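As a sanity check, for $n=1$ the constraint is simply $\sum_j \delta^{(j)}=a_1$, and the product in the lemma reduces to Euler's generating function for ordinary partitions:
\begin{equation*}
\prod_{m=0}^{\infty} (1-x_1^{m+1})^{-1}=\prod_{k=1}^{\infty}(1-x_1^{k})^{-1}=\sum_{a\geq 0} p(a)\,x_1^{a},
\end{equation*}
where $p(a)$ denotes the number of partitions of $a$.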
As a consequence, the generating function of $n$-tuples of integer partitions $\delta_1^{(j)}, \delta_2^{(j)}, \ldots \delta_{n}^{(j)}$ such that $\sum\limits _{i+j \equiv k \mod n} \delta_i^{(j)} =a_k$ is
\begin{equation} \label{example}
\prod_{m=0}^{\infty} \prod_{i, j \text{ mod } n} (1-(x_ix_{i+1}\cdots x_j)(x_1x_2\cdots x_n)^m)^{-1}.
\end{equation}
The next proposition establishes, in the case of $n$ odd, that the diagonal part of $R^{\flat}$ matches the diagonal part of Equation \ref{residueoddflat}. Recall that
\begin{equation}
P(x)Z_{\text{diag}}(q^{-(n+1)/2}x)=R_{\text{diag}}(x)
\end{equation}
and $Z_{\text{diag}}(q^{-(n+1)/2}x)=1+O(q)$. Here $P(x)=\sum\limits_a p_a(q)x^{a}$ and $p_a(q)$ is the value of $q^{-a(n+1)}c_{a, 2a, \ldots a, 2a}(q)$ in a series satisfying the functional equations whose diagonal coefficients are $c_{0, 0, \ldots 0}=1$, $c_{a, a, \ldots a}=0$ for all $a>0$. We explicitly compute the part of $P(x)$ which has degree $0$ in the variable $q$.
\begin{prop}\label{oddcombinatorics}
Let $n$ be odd. Then
\begin{align}
&P(x_0 x_2\cdots x_{n-1}) \nonumber \\
&=\bigg{(}\prod_{m=0}^{\infty} (1-(x_0 x_2\cdots x_{n-1})^{2m+1})^{-1} \nonumber \\
&\hspace{3em} \prod_{\substack{i,j \text{ mod } n+1 \\ \text{even}}} (1-(x_0 x_2\cdots x_{n-1})^{2m}(x_i x_{i+2}\cdots x_j)^2)^{-1}\bigg{)}_{\text{diag}} + O(q).
\end{align}
\end{prop}
This implies that
\begin{align}
&R^{\flat}_{\text{diag}}(x_0 x_2\cdots x_{n-1}) \nonumber \\
&=\bigg{(}\prod_{m=0}^{\infty} (1-(x_0 x_2\cdots x_{n-1})^{2m+1})^{-1} \nonumber \\
&\hspace{3em} \prod_{\substack{i,j \text{ mod } n+1 \\ \text{even}}} (1-(x_0 x_2\cdots x_{n-1})^{2m}(x_i x_{i+2}\cdots x_j)^2)^{-1}\bigg{)}_{\text{diag}}.
\end{align}
\begin{proof}
The proof requires closely examining the combinatorics of the recurrences on coefficients of $Z$. Recall the statement of the recurrence associated to functional equation $\sigma_i$: for $a_{i-1}+a_{i+1}$ odd,
\begin{equation}
c_{\ldots a_i, \ldots}=q^{a_i-(a_{i-1}+a_{i+1}-1)/2} c_{\ldots a_{i-1}+a_{i+1}-1-a_i, \ldots}
\end{equation}
and for $a_{i-1}+a_{i+1}$ even, applying equation \ref{evenrecurrence} repeatedly gives
\begin{equation}
c_{\ldots a_i, \ldots}=q^{a_i-(a_{i-1}+a_{i+1})/2}(c_{\ldots (a_{i-1}+a_{i+1})/2, \ldots} + \sum_{a_i'=a_{i-1}+a_{i+1}-a_i}^{(a_{i-1}+a_{i+1})/2-1} (c_{\ldots a_i', \ldots} -q c_{\ldots a_i'-1, \ldots})).
\end{equation}
Starting with $c_{a_0, \ldots a_n}$ we will apply the recurrences in the following order: first, reduce as far as possible with the odd $\sigma_i$, then reduce the result as far as possible with the even $\sigma_i$, then reduce that result as far as possible with the odd $\sigma_i$, and so on. Any coefficient will eventually be reduced to a linear combination of diagonal coefficients in this way. The lowest term in $p_a(q)$ represents the number of paths from $c_{a, 2a, a, 2a, \ldots, a, 2a}$ to $c_{0, 0,\ldots 0}$ via these recurrences, gaining as small a power of $q$ as possible.
Given any $c_{a_0, \ldots a_n}$, assuming without loss of generality that $\sum\limits_{i \text{ odd}} a_i \geq \sum\limits_{i \text{ even}} a_i$, we apply the recurrences $\sigma_i$ for $i$ odd to reduce as far as possible. Any coefficient $c_{a_0, a_1', \ldots a_{n-1}, a_n'}$ in the resulting expression now has $\sum\limits_{i \text{ odd}} a_i' \leq \sum\limits_{i \text{ even}} a_i$. Furthermore, it is multiplied by a factor of at least $q^{\sum\limits_{i \text{ odd}} a_i - \sum\limits_{i \text{ even}} a_i}$, and more than this if any of the $a_i$ for $i$ odd could not be reduced. If we continue reducing this way until we reach $c_{0, 0, \ldots 0}$, it will be multiplied by a factor of at least $q^{\text{Max}(\sum\limits_{i \text{ odd}} a_i, \sum\limits_{i \text{ even}} a_i)}$. In particular, reducing $c_{a, 2a, a, 2a, \ldots, a, 2a}$ to $c_{0, 0,\ldots 0}$ involves multiplying by at least $q^{a(n+1)}$. This is the correct order since one possible path is
\begin{equation}
c_{a, 2a, a, 2a, \ldots a, 2a} \to q^{a(n+1)/2} c_{a, 0, a, 0, \ldots a, 0} \to q^{a(n+1)} c_{0, 0, \ldots 0}.
\end{equation}
Therefore $p_a(q)$ is a polynomial in $q$ with nonzero constant term.
Because we are only considering the lowest term in $p_a(q)$, we can discard all terms in the $\sigma_i$ recurrence with a factor greater than $q^{a_i-(a_{i-1}+a_{i+1})/2}$. This leads to greatly simplified recurrences: if $a_{i-1}+a_{i+1}$ is even, then
\begin{equation} \label{evenrec}
c_{\ldots a_i, \ldots}=q^{a_i-(a_{i-1}+a_{i+1})/2}\sum_{a_i'=a_{i-1}+a_{i+1}-a_i}^{(a_{i-1}+a_{i+1})/2} c_{\ldots a_i', \ldots}
\end{equation}
and if $a_{i-1}+a_{i+1}$ is odd, then
\begin{equation} \label{oddrec}
c_{\ldots a_i, \ldots}=0.
\end{equation}
We have now reduced the problem of computing the constant coefficient in $p_a(q)$ to counting chains of nonnegative integer indices:
\begin{align*}
&a_0, &&a_1, &&a_2, &&a_3, &&\ldots &&a_{n-1}, &&a_n\\
&a_0', &&a_1', &&a_2', &&a_3', &&\ldots &&a_{n-1}', &&a_n'\\
&a_0'', &&a_1'', &&a_2'', &&a_3'', &&\ldots &&a_{n-1}'', &&a_n''\\
&\cdots &&\cdots &&\cdots &&\cdots &&\cdots &&\cdots &&\cdots\\
&a_0^{(\ell)}, &&a_1^{(\ell)}, &&a_2^{(\ell)}, &&a_3^{(\ell)}, &&\ldots &&a_{n-1}^{(\ell)}, &&a_n^{(\ell)}\\
\end{align*}
such that:
\begin{condition} \label{ends}
we have the boundary conditions $(a_0, a_1, \ldots a_{n-1}, a_n)=(a, 2a, \ldots a, 2a)$ and $(a_0^{(\ell)}, a_1^{(\ell)}, \ldots a_{n-1}^{(\ell)}, a_n^{(\ell)})=(0, 0 \ldots 0, 0)$
\end{condition}
\begin{condition}
$a_i^{(j)}=a_i^{(j+1)}$ if $i$ is even and $j$ is even, or if $i$ is odd and $j$ is odd.
\end{condition}
\begin{condition} \label{same}
$a_i^{(j)}+a_{i+2}^{(j)}$ is even for all $i, j$.
\end{condition}
\begin{condition} \label{ineq}
For $i$ even and $j$ odd, or $i$ odd and $j$ even,
\begin{equation}
a_i^{(j)} \geq \frac{1}{2}(a_{i-1}^{(j)}+a_{i+1}^{(j)}) \geq a_{i}^{(j+1)} \geq a_{i-1}^{(j)}+a_{i+1}^{(j)}-a_i^{(j)}.
\end{equation}
\end{condition}
Note that the indices $i$ are still numbered modulo $n+1$ here.
We will rephrase this counting problem once before solving it. For $i$ modulo $n+1$ even, and $1 \leq j \leq \ell$, let $\delta_i^{(j)}=a_{i+j-1}^{(j-1)}-a_{i+j-2}^{(j)}$. The last inequality of condition (\ref{ineq}) implies that
\begin{equation}
\delta_i^{(1)}\geq\delta_i^{(2)}\geq\delta_i^{(3)}\geq\cdots\geq \delta_i^{(\ell)}\geq 0
\end{equation}
Thus a chain of indices as above gives rise to an $(n+1)/2$-tuple of integer partitions $\delta_0^{(j)}, \delta_2^{(j)}, \ldots \delta_{n-1}^{(j)}$ satisfying the following two conditions, which correspond to conditions \ref{same} and \ref{ends}.
\begin{condition}
For fixed $j$, the $\delta_i^{(j)}$ are either all even or all odd.
\end{condition}
\begin{condition} \label{tele}
$\sum\limits_j \delta_{i-2j}^{(j)}=a$ for all $i$.
\end{condition}
We can reconstruct the chain of indices $a_i^{(j)}$ from the partitions $\delta_i^{(j)}$: for $i$, $j$ both odd or both even, \begin{equation}
a_i^{(j)}=\sum\limits_{k=j+1}^\ell \delta_{i+j+2-2k}^{(k)}.
\end{equation}
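As a brief justification of this reconstruction (a telescoping computation using the boundary values $a_i^{(\ell)}=0$): the definition of $\delta$ can be rewritten as $a_i^{(j)}=a_{i-1}^{(j+1)}+\delta_{i-j}^{(j+1)}$ for $i$ and $j$ of equal parity, and iterating this relation down to level $\ell$ gives
\begin{equation*}
a_i^{(j)}=\sum_{r=1}^{\ell-j}\delta_{i-j+2-2r}^{(j+r)}+a_{i-(\ell-j)}^{(\ell)}=\sum_{k=j+1}^{\ell}\delta_{i+j+2-2k}^{(k)}.
\end{equation*}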
By comparing such expressions, we find that for any $i$, $j$ both odd or both even, $a_i^{(j)}\geq a_{i-1}^{(j+1)}$ and $a_i^{(j)}\geq a_{i+1}^{(j+1)}$. This appears to be a stronger condition than \ref{ineq}; in fact, it must be equivalent.
To count the $\delta_i^{(j)}$, first note that there exists a unique strictly decreasing partition $\gamma$ such that for all $i$, there exists a partition $\widetilde{\delta}_i$ with even entries such that $\delta_i=\widetilde{\delta}_i + \gamma^*$. Here $*$ denotes the conjugate partition. We may take $\gamma$ to be the set $\lbrace j : \delta_i^{(j)} \text{ odd}\rbrace$, in decreasing order. If $\gamma_1$ and $\gamma_2$ have this same property, then $\gamma_1^*+\gamma_2^*$ has all even entries, and, since $\gamma_1$ and $\gamma_2$ are strictly decreasing, this implies that they are equal.
Since the generating function of strictly decreasing partitions is the same as the generating function of odd partitions, $\prod\limits_{k=0}^{\infty}(1-x^{2k+1})^{-1}$, the first factor of \ref{residueoddflat} will account for the choice of $\gamma$.
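For completeness, the equality of these two generating functions is Euler's classical identity
\begin{equation*}
\prod_{k=1}^{\infty}(1+x^{k})=\prod_{k=1}^{\infty}\frac{1-x^{2k}}{1-x^{k}}=\prod_{k=0}^{\infty}(1-x^{2k+1})^{-1}.
\end{equation*}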
Now it suffices to count $(n+1)/2$-tuples of even partitions $\widetilde{\delta}_i^{(j)}$ satisfying condition (\ref{tele}). But by the logic of equation \ref{example}, this is precisely the diagonal part of the second factor of \ref{residueoddflat}.
\end{proof}
This completes the computation, and the proof of the main theorem, in the case of $n$ odd. In the case of $n$ even, the proof is similar, supplemented by a lemma which does not hold in the odd case.
\begin{lemma}
Suppose $n$ is even. Then $P(x)$ is an even power series in $x$.
\end{lemma}
\begin{proof}
A list of indices $a_0, a_1, \ldots a_n$ can be broken into blocks of consecutive even or odd indices. The list $a, 2a, a, 2a, \ldots a$ for $a$ odd has a single even-length block of odd indices: the first and last $a$. Since the recurrences can only change the parity of an index if the sum of its neighbors is even, they preserve the property of having an odd number of even-length blocks of odd indices. In particular, there is no path from $a, 2a, a, 2a, \ldots a$ to $0, 0, \ldots 0$ via the recurrences.
\end{proof}
Since the off-diagonal factors of $R$ are all even in $x_0, x_2, \ldots x_n$ and $P(x)$ is even, the diagonal factors of $R^{\flat}R^{\natural}$ must be even as well. Thus $R$ is an even power series, which is not obvious a priori. In particular, $R^{\natural}$ cannot contain diagonal factors, so it suffices to describe the diagonal part of $R^{\flat}$. The following proposition is the analogue of \ref{oddcombinatorics}. The proof is parallel, so many details are omitted.
\begin{prop}
Let $n$ be even. Then
\begin{align}
&P(x_0x_2\cdots x_n) \nonumber \\
&=\left(\prod_{m=0}^{\infty} \prod_{\substack{i,j \text{ mod } n+2 \\ \text{even} \\i\neq 0, \, j\neq n}} (1-(x_0x_2\cdots x_n)^{2m}(x_ix_{i+2}\cdots x_j)^2)^{-1}\right)_{\text{diag}}+O(q^{1/2}).
\end{align}
\end{prop}
This implies that
\begin{align}
&R^{\flat}_{\text{diag}}(x_0x_2\cdots x_n) \nonumber \\
&=\left(\prod_{m=0}^{\infty} \prod_{\substack{i,j \text{ mod } n+2 \\ \text{even} \\i\neq 0, \, j\neq n}} (1-(x_0x_2\cdots x_n)^{2m}(x_ix_{i+2}\cdots x_j)^2)^{-1}\right)_{\text{diag}}
\end{align}
which completely determines $R$.
\begin{proof}
We apply the $\sigma_i$ recurrences to $c_{a, 2a, a, 2a,\ldots a}(q)$ in the following order: first, apply $\sigma_1\sigma_3\cdots\sigma_{n-1}$, then $\sigma_2\sigma_4\cdots \sigma_{n}$, then $\sigma_3\sigma_5\cdots\sigma_{n+1}$ (recall that $\sigma_{n+1}=\sigma_0$), etc. Eventually, every index will be reduced to zero by these recurrences.
If we apply the recurrences $\sigma_1\sigma_3\cdots\sigma_{n-1}$ to an arbitrary $c_{a_0, a_1,\ldots a_n}$, we obtain a linear combination of lower coefficients $c_{a_0, a_1', a_2, a_3', \ldots a_n}$, each of which is multiplied by $q$ to the power of at least $a_1+a_3+\cdots+a_{n-1}-(a_2+a_4+\cdots+a_{n-2})-\frac{1}{2}(a_0+a_n)$, and more than this if any of the $a_1, a_3, \ldots a_{n-1}$ could not be reduced. Repeating this process with $\sigma_2 \sigma_4\cdots\sigma_n$ and so on until we reach $c_{0, 0, \ldots 0}$, we gain a factor of at least $q^{a_1+a_3+\cdots+a_{n-1}+a_n/2}$. This lower bound is actually the correct order for the coefficient $c_{a, 2a, a, 2a,\ldots a}$ with $a$ even, since one possible path is
\begin{equation}
c_{a, 2a, a, 2a,\ldots a} \to q^{a(n-1)/2}c_{a, 0, a, 0,\ldots a} \to q^{an-3a/2}c_{a, 0, \ldots 0} \to q^{an-a/2}c_{0, \ldots 0}
\end{equation}
Hence $p_a(q)$ is a polynomial in $q^{1/2}$ with nonzero constant coefficient.
Once again, any term in the $\sigma_i$ recurrence which carries a power of $q$ greater than $a_i-\frac{1}{2}(a_{i-1}+a_{i+1})$ can be ignored, and we have the simplified recurrences \ref{evenrec} and \ref{oddrec}.
We must count chains of nonnegative indices:
\begin{align*}
&a_0, &&a_1, &&a_2, &&a_3, &&\ldots &&a_{n-1}, &&a_n \\
&a_0', &&a_1', &&a_2', &&a_3', &&\ldots &&a_{n-1}', &&a_n' \\
&a_0'', &&a_1'', &&a_2'', &&a_3'', &&\ldots &&a_{n-1}'', &&a_n'' \\
&\cdots &&\cdots &&\cdots &&\cdots &&\cdots &&\cdots &&\cdots \\
&a_0^{(\ell)}, &&a_1^{(\ell)}, &&a_2^{(\ell)}, &&a_3^{(\ell)}, &&\ldots &&a_{n-1}^{(\ell)}, &&a_n^{(\ell)} \\
\end{align*}
such that:
\begin{condition} We have $(a_0, a_1, a_2, a_3, \ldots a_{n-1}, a_n)=(a, 2a, a, 2a, \ldots, 2a, a)$ and $(a_0^{(\ell)}, a_1^{(\ell)}, a_2^{(\ell)}, a_3^{(\ell)}, \ldots a_{n-1}^{(\ell)}, a_n^{(\ell)})=(0,0,0,0,\ldots 0,0)$.
\end{condition}
\begin{condition} All $a_i^{(j)}$ are even. (This is the analogue of condition \ref{same} when $n$ is even.)
\end{condition}
\begin{condition} If $i\in \lbrace j, j+2, j+4, \ldots j+n \rbrace$, then $a_{i}^{(j)}=a_{i}^{(j+1)}$.
\end{condition}
\begin{condition}\label{evenineq} If $i\in \lbrace j+1, j+3, j+5, \ldots j+n-1 \rbrace$, then
\begin{equation}
a_{i}^{(j)}\geq \frac{1}{2}(a_{i-1}^{(j)}+a_{i+1}^{(j)})\geq a_{i}^{(j+1)}\geq a_{i-1}^{(j)}+a_{i+1}^{(j)}-a_{i}^{(j)}
\end{equation}
\end{condition}
To further simplify, for $i \in \lbrace 2, 4, 6, \ldots n \rbrace$ and $1 \leq j \leq \ell$, let $\delta_i^{(j)}=a_{i+j-1}^{(j-1)}-a_{i+j-2}^{(j)}$. Then by condition \ref{evenineq} above, we have $\delta_i^{(1)}\geq \delta_i^{(2)} \geq \delta_i^{(3)} \geq \cdots \geq \delta_i^{(\ell)} \geq 0$. To simplify notation, we also set $\delta_0^{(j)}=0$ for all $j$, and we take the lower index $i$ of $\delta_i$ modulo $n+2$ instead of $n+1$. We are now counting $\frac{n}{2}+1$-tuples of integer partitions $\delta_0, \delta_2, \delta_4, \ldots \delta_n$ such that:
\begin{condition} All $\delta_i^{(j)}$ are even.
\end{condition}
\begin{condition} $\delta_0^{(j)}=0$
\end{condition}
\begin{condition} $\sum_{j=1}^{\ell} \delta_{i-2j}^{(j)}=a$ for all $i$.
\end{condition}
The indices $a_i^{(j)}$ can all be recovered from the partitions $\delta_i$ satisfying these conditions.
By the logic of Equation \ref{example}, the generating function of such sets of partitions is the diagonal part of the series:
\begin{equation}
\prod_{m=0}^{\infty} \prod_{\substack{i,j \text{ mod } n+2 \\ \text{even} \\i\neq 0}} (1-(x_0x_2\cdots x_n)^{2m}(x_ix_{i+2}\cdots x_j)^2)^{-1}
\end{equation}
which is the same as the diagonal part of
\begin{equation}
\prod_{m=0}^{\infty} \prod_{\substack{i,j \text{ mod } n+2 \\ \text{even} \\i\neq 0, \, j\neq n}} (1-(x_0x_2\cdots x_n)^{2m}(x_ix_{i+2}\cdots x_j)^2)^{-1}.
\end{equation}
\end{proof}
This completes the proof of the main theorem in the case of $n$ even.
\end{document}
\begin{document}
\title{Flow Characteristics in a Crowded Transport Model }
\author{Martin Burger\thanks{Institut f\"ur Numerische und Angewandte Mathematik, Westf\"alische Wilhelms-Universit\"at (WWU) M\"unster. Einsteinstr. 62, D 48149 M\"unster, Germany. e-mail: [email protected] } \and Jan-Frederik Pietschmann\thanks{Numerical Analysis and Scientific Computing, Department of Mathematics, TU Darmstadt, Dolivostr. 15, 64293 Darmstadt, Germany. e-mail: [email protected] } }
\maketitle
\begin{abstract}
The aim of this paper is to discuss the appropriate modelling of in- and outflow boundary conditions for nonlinear drift-diffusion models for the transport of particles including size exclusion and their effect on the behaviour of solutions. We use a derivation from a microscopic asymmetric exclusion process and its extension to particles entering or leaving on the boundaries. This leads to specific Robin-type boundary conditions for inflow and outflow, respectively. For the stationary equation we prove the existence of solutions in a suitable setup. Moreover, we investigate the flow characteristics for small diffusion, which yields the occurrence of a maximal current phase in addition to well-known one-sided boundary layer effects for linear drift-diffusion problems. In a one-dimensional setup we provide rigorous estimates in terms of $\varepsilon$, which confirm three different phases. Finally, we derive a numerical approach to solve the problem also in multiple dimensions. This provides further insight and allows for the investigation of more complicated geometric setups.
\end{abstract}
\section{Introduction}
Transport phenomena of crowded particles and their mathematical modelling have received considerable attention recently, driven by a variety of important applications in biology and social sciences, e.g. transport of ions and macromolecules through channels and nanopores (cf. \cite{Burger2012,Dreyer2013,Dreyer2014,eisenberg2013steric,horng2012pnp}), cargo transport by molecular motors on microtubuli (cf. \cite{ciandrini2014stepping,leduc2012molecular,reese2011crowding}), collective cell migration (cf. \cite{painterhillen,plank2012models,simpson2011models,dyson2014importance}), tumour growth (cf. \cite{jacksonbyrne,stelzer}) or dynamics of human pedestrians (cf. \cite{cividini2013crossing,jelic2012properties,schlakepietschmann}). Such applications naturally lead to questions related to boundary (or interface) conditions restricting in- or outflow of particles, and the resulting characteristics of flow. In ion channels the characteristics are relations between bath concentrations (boundary values) and current, in pedestrian motion one is interested in flow and evacuation properties depending on exit doors, and the movement of cargo between microtubuli, respectively its delivery to the desired site, acts as a similar boundary condition. A variety of computational investigations of such issues have been performed, partly with additional complications such as electrostatic interactions, chemotaxis, or herding. Such simulations can give hints on the flow behaviour, but it becomes difficult to understand the causes and asymptotic regimes for certain effects.
Therefore we investigate in detail a canonical simple model with in- and outflow boundary conditions in this paper. To this end, we assume that the boundary $\partial\Omega$ of our domain is subdivided into three parts: Inflow $\Gamma$, outflow $\Sigma$ and insulating $\partial\Omega\setminus (\Gamma \cup \Sigma)$ with $\Gamma \cap \Sigma = \emptyset$. Then for $x \in \Omega \subset \mathbb{R}^n$, $t>0$ and $\rho=\rho(x,t)$ we consider the equation
\begin{align}\label{eq:basic}
\partial_t \rho + \nabla \cdot j = 0,\quad j = - D \nabla \rho + \rho (1-\rho) u,
\end{align}
with boundary conditions
\begin{align}\label{eq:bc1}
- j\cdot n &= \alpha(1-\rho),\quad\text{on }\Gamma,\\\label{eq:bc2}
j\cdot n &= \beta\rho,\quad\text{on }\Sigma,\\
j\cdot n &= 0,\quad\text{on }\partial\Omega\setminus (\Gamma \cup \Sigma)\label{eq:bc3},
\end{align}
where $u:\mathbb{R}^n \to \mathbb{R}^n$ is a given velocity field. The model we use is derived from the paradigmatic asymmetric exclusion process (cf. \cite{chou2011non}), with appropriate modifications to include realistic in- and outflow boundaries. In a simple one-dimensional setup, this model was investigated recently in \cite{Wood2009} including stochastic entrance and exit conditions, exhibiting three different phases of behaviour. We will take a continuum limit of that model and verify that these three phases are still present under the same conditions on parameters and demonstrate how the model generalizes to multiple dimensions and multiple species going in potentially different directions. The (formal) continuum limit naturally leads to the case of nonlinear convection dominating the diffusion, hence the limit of diffusion tending to zero is natural, and indeed the appearing boundary layers are separating the different phases.
Let us mention that the study of continuum limits of microscopic particle models with size exclusion effects is a very timely research topic. The majority of the rigorous analysis is however carried out for closed systems, i.e. under no-flux boundary conditions or on the whole space, where such systems possess a gradient flow structure that can be well exploited either with transport metrics (cf. \cite{AGS,carrillo2010nonlinear,liero2013gradient}) or with entropy dissipation techniques (cf. \cite{carrillo2001entropy,Burger2010}). The case of non-closed systems has been studied at the continuum level mainly for Dirichlet boundary conditions, where at least the modelling is obvious. In the case of general inflow and outflow conditions the modelling of boundary conditions needs to be adapted to the specific approach for deriving continuum equations (cf. \cite{erban2007reactive}), which seems to have been ignored by most authors in the past. Moreover, the case of non-equilibrium boundary conditions poses additional challenges on the analysis, in particular existence proofs for stationary solutions cannot be carried out anymore by explicit computations or energy minimization arguments. Nonetheless, some arguments can still benefit from the underlying gradient flow structure in the energy, in particular a transformation to dual variables (also called entropy variables) is quite beneficial for existence proofs, since it yields maximum principles that do not hold for the original variables (cf. \cite{Burger2010,Burger2012,Juengel2014}). In this paper we will use similar ideas and extend them from Dirichlet to inflow and outflow boundary conditions.
This paper is organized as follows: In Section \ref{sec:modelling} we present the model for several species and give more details about the nonlinear boundary conditions. In Section \ref{sec:basic} we present existence proofs for a single species. We separately treat the cases when the velocity field is either a given divergence free vector field or the gradient of a potential function. In Section \ref{sec:asymptotic} we investigate the behaviour for a small diffusion coefficient and compare this to the results presented in \cite{Wood2009}. Finally, in Section \ref{sec:numerics}, we introduce a discontinuous Galerkin scheme and present examples in one and two spatial dimensions.
\section{Modelling}\label{sec:modelling}
Crowding models based on (totally) asymmetric exclusion processes as well as their mean-field continuum limits have gained strong attention recently (cf. e.g. \cite{penington2011building,simpsonmulti} and the references above). The main paradigm is to model jumps of particles on a discrete lattice with jump probabilities consisting of unoriented parts (diffusion) and oriented drifts (transport). The exclusion is incorporated by avoiding jumps to an occupied cell. Using standard continuum limits (rescaling time and space to have grid sizes and typical waiting times converge to zero) as well as simple mean-field closure assumptions, which can also be made rigorous (cf. \cite{giacomin}), one obtains partial differential equations of the form
\begin{equation}
\partial_t \rho_i + \nabla \cdot (j_i) = 0, \quad j_i = - D_i (\rho_0 \nabla \rho_i - \rho_i \nabla \rho_0) + \rho_i \rho_0 u_i, \qquad \text{in } \Omega \times (0,T), \label{basiceqn1}
\end{equation}
where $x \in \Omega \subset \mathbb{R}^n$, $t>0$, $\rho_i=\rho_i(x,t)$ is the density of the $i$-th species of particles with diffusion coefficient $D_i$ and velocity field $u_i:\mathbb{R}^n \to \mathbb{R}^n$, $i=1,\ldots,M$. The free-space density $\rho_0$ is given by
\begin{equation}
\rho_0 = 1- \sum_{i=1}^M \rho_i.
\end{equation}
In the previously well-investigated case of a potential field $u_i = D_i \nabla V_i$ for some $V_i:\mathbb{R}^n \to \mathbb{R}$ (cf. \cite{Burger2010}), the system can be recast in gradient form
\begin{equation}
\partial_t \rho_i = \nabla \cdot ( D_i \rho_0 \rho_i \nabla (\partial_{\rho_i} E[\rho_1,\ldots,\rho_M]) ),
\end{equation}
with the entropy functional
\begin{equation}\label{eq:entropy}
E[\rho_1,\ldots,\rho_M] = \int_\Omega \left( \sum_{i=1}^M ( \rho_i \log \rho_i - \rho_i V_i) + \rho_0 \log \rho_0 \right)~dx.
\end{equation}
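For orientation, here is a quick check of this structure in the single-species case ($M=1$, $D_1=1$, writing $\rho=\rho_1$ and $V=V_1$, so that $\rho_0=1-\rho$ and $u=\nabla V$): since $\partial_{\rho}E[\rho]=\log\rho-\log\rho_0-V$, we have
\begin{equation*}
\rho_0\rho\,\nabla\big(\partial_{\rho}E[\rho]\big)=\rho_0\nabla\rho+\rho\nabla\rho-\rho\rho_0\nabla V=\nabla\rho-\rho(1-\rho)\nabla V=-j,
\end{equation*}
so that $\partial_t\rho=-\nabla\cdot j$ is indeed of the stated gradient-flow form; the same computation reappears in Section \ref{sec:basic} in terms of the entropy variable $\psi$.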
The above differential equations have been studied in detail with potential fields and no-flux boundary conditions, when the system is indeed a gradient flow and stationary solutions can be characterised as minimisers of the entropy at fixed mass (cf. \cite{Burger2010}). In many practical applications different boundary conditions and non-zero flow is of fundamental importance however. In \cite{Burger2012} the case of mixed no-flux and Dirichlet boundary conditions has been studied in a model for charged particles coupled with the Poisson equation. Here we want to focus on in- and outflow boundaries, as recently also used in one-dimensional stochastic models \cite{Wood2009}.
\subsection{Inflow Boundary Conditions}
We assume that particles of the $i$-th species enter the domain $\Omega$ on a subregion $\Gamma_i \subset \partial \Omega$ with rate $\alpha_i > 0$. Without exclusion principle, this would simply mean in the continuum that the normal flux equals $\alpha_i$. Modelling volume exclusion in the discrete setting means that the particle can only enter a grid cell adjacent to the boundary if it is empty. Hence, the probability of entering is modified to $\alpha_i \rho_0$, and we deduce the boundary condition
\begin{equation}\label{eq:bc_in}
- j_i \cdot n = \alpha_i \rho_0 \qquad \text{on } \Gamma_i.
\end{equation}
Note the negative sign in front of the normal flux since we use the convention of a normal oriented outward.
The boundary condition can be rewritten as
$$
D_i \rho_0 \rho_i \nabla \left( \log \frac{\rho_i}{\rho_0} \right) \cdot n = \rho_0 ( \alpha_i + \rho_i u_i \cdot n),
$$
which clarifies the role of the normal velocity at the inflow boundary. The inflow rate can balance the normal velocity only if $u_i \cdot n \leq 0$. On the other hand we will have $\rho_i \leq 1$, so that balancing is only possible for $\alpha_i \leq 1$.
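For completeness, the rewriting above only uses the elementary identity
\begin{equation*}
\rho_0\nabla\rho_i-\rho_i\nabla\rho_0=\rho_0\rho_i\,\nabla\log\frac{\rho_i}{\rho_0},
\end{equation*}
so that $-j_i\cdot n = D_i\rho_0\rho_i\nabla\left(\log\frac{\rho_i}{\rho_0}\right)\cdot n-\rho_i\rho_0\, u_i\cdot n$; inserting \eqref{eq:bc_in} then gives the displayed form, and the analogous computation applies to the outflow condition below.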
\subsection{Outflow Boundary Conditions}
Outflow boundaries are more straightforward to model, we assume that particles of the $i$-th species leave the domain $\Omega$ on a subregion $\Sigma_i \subset \partial \Omega \setminus \Gamma_i$ with rate $\beta_i$. Thus,
\begin{equation}\label{eq:bc_out}
j_i \cdot n = \beta_i \rho_i \qquad \text{on } \Sigma_i.
\end{equation}
Again we can rewrite the boundary condition in the form
$$
- D_i \rho_0 \rho_i \nabla \left( \log \frac{\rho_i}{\rho_0} \right) \cdot n = \rho_i ( \beta_i - \rho_0 u_i \cdot n),
$$
hence $u_i \cdot n \geq 0$ is needed for balancing.
Note also that we could easily include no-flux boundaries by simply setting $\beta_i = 0$, but we rather shall consider them explicitly, i.e. we have
\begin{align}\label{eq:bc_no}
j_i \cdot n = 0\quad \text{on }\; \partial\Omega \setminus \left(\bigcup_{i=1}^M\Gamma_i \cup \bigcup_{i=1}^M \Sigma_i\right).
\end{align}
\section{Basic Properties}\label{sec:basic}
As detailed in the introduction, we shall from now on restrict our analysis to the case of a single active species ($M=1$). If we further assume that the diffusion coefficient $D$ is normalized to $1$, equation \eqref{basiceqn1} considerably simplifies to
\begin{equation*}
\partial_t \rho + \nabla \cdot( - \nabla \rho + \rho (1-\rho) u) = 0,
\end{equation*}
with $\rho$ denoting the density of a single species and supplemented with boundary conditions \eqref{eq:bc1}--\eqref{eq:bc3}. We mention that with similar arguments as in \cite{jacksonbyrne,stelzer}, the case $M=1$ can also be derived from standard continuum fluid mechanical models adding a congestion constraint $\rho_0 = 1 - \rho$.
Due to the boundary conditions there is obviously no mass conservation in the system. However, there is still a natural balance condition between in- and outflow, i.e., if $\rho$ solves \eqref{eq:basic} then
\begin{equation}
\partial_t \int_\Omega \rho~dx = \int_{\Gamma} \alpha \rho_0~d\sigma - \int_{\Sigma} \beta \rho ~d\sigma,
\end{equation}
where $\alpha$ and $\beta$ denote the in- and outflow rate for $\rho$, respectively; the identity follows by integrating \eqref{eq:basic} over $\Omega$ and inserting the boundary conditions \eqref{eq:bc1}--\eqref{eq:bc3}. In the stationary case the two integrals need to balance, which implies an interesting coupling in the balance conditions (via $\rho_0$) if $M > 1$. Note also that in an evacuation case, i.e. $\Gamma = \emptyset$, the mass in the system is monotonically decreasing, as expected. Finally, we state the following assumptions for later use
\begin{assumption}\label{ass:prelim}
(A1) $\Omega \subset \mathbb{R}^n$, $n=1,2,3$ with boundary $\partial\Omega$ of class $C^2$.
(A1') In addition to (A1), let $\Gamma$ and $\Sigma$ be such that a weak solution $w$ of the Poisson equation with right-hand side in $L^2(\Omega)$ and Neumann data $\partial_n w = f$ satisfies $w \in H^2(\Omega)$ for any $f$ such that
$$ f|_\Sigma \in H^{1/2}(\Sigma), \quad f|_\Gamma \in H^{1/2}(\Gamma), \quad f|_{\partial \Omega \setminus (\Gamma \cup \Sigma)} \equiv 0. $$
(A2) $0\le \alpha \le 1$ and $0 \le \beta \le 1$.
(A3) $u\in [W^{1,\infty}(\Omega)]^n$ such that $\nabla\cdot u = 0$, $u\cdot n = -1$ on $\Gamma$, $u\cdot n = 1$ on $\Sigma$, and $u\cdot n = 0$ on $\partial \Omega \setminus( \Gamma \cup \Sigma)$.
(A3') $u=\nabla V$, $V\in H^1(\Omega)$.
\end{assumption}
\subsection{Existence of Stationary Solutions}
We shall present two different proofs, one for a given velocity vector field $u$ and a second one where $u = \nabla V$ for some potential $V$, in which case we can employ a transformation to so-called ``entropy variables''.
\begin{thm}[incompressible case]\label{thm:incompressible} Let the assumptions (A1), (A1'), (A2) and (A3) hold. Then, the equation
\begin{align*}
\nabla\cdot (-\nabla \rho + \rho(1-\rho)u) = 0,
\end{align*}
supplemented with the boundary conditions \eqref{eq:bc1}--\eqref{eq:bc3} has at least one solution $\rho \in H^1(\Omega) \cap L^\infty(\Omega)$ such that
\begin{equation}
\min\{\alpha,1-\beta\} \le \rho(x) \le \max \{ \alpha, 1-\beta \} \label{rhobounds}
\end{equation}
\end{thm}
\begin{proof}
The proof is based on Schauder's fixed-point theorem. We define the set
\begin{align*}
\mathcal{M} = \left\{ \rho \in L^\infty(\Omega)\cap H^1(\Omega)\;|\;\min\{\alpha,1-\beta\} \le \rho \le \max \{ \alpha, 1-\beta \} , \int_\Omega |\nabla \rho|^2dx\leq C \right\}
\end{align*}
with $C= \Vert u\Vert_\infty^2 + 2 \vert \partial \Omega \vert$.
For given $\bar \rho \in \mathcal{M}$, we define the operator $S:\mathcal{M}\rightarrow H^2(\Omega)$ that maps $\bar\rho$ to the solution of the linearized problem
\begin{align}\label{eq:lin}
- \nabla\cdot \nabla \rho + (1-2 \bar \rho) \nabla \rho \cdot u = 0,
\end{align}
supplemented with the boundary conditions
\begin{align}\label{eq:lin_bc1}
\nabla\rho\cdot n &= (\alpha - \rho )(1-\bar\rho),\quad\text{on }\Gamma,\\
-\nabla\rho\cdot n &= (\beta - (1-\rho) )\bar \rho,\quad\text{on }\Sigma,\\
\nabla\rho\cdot n &= 0,\quad\text{on }\partial\Omega\setminus (\Gamma \cup \Sigma)\label{eq:lin_bc3}.
\end{align}
Note that we linearized the boundary conditions differently on $\Gamma$ and $\Sigma$, this will be crucial later on. Standard theory for linear elliptic equations (and our assumption on the regularity of the boundary), cf. \cite{Grisvard1985,Ladyzhenskaya1968}, ensures a maximum principle and existence of a weak solution, subsequently the existence of a solution $\rho\in H^2(\Omega)$ since the prerequisites of (A1') are satisfied. In order to apply Schauder's fixed-point theorem, we have to prove that $S$ is self-mapping from $\mathcal{M}$ to $\mathcal{M}$, continuous and compact.
{\bf Self-mapping:}
Equation \eqref{eq:lin} satisfies a maximum principle with vanishing normal derivative on $\partial \Omega \setminus (\Gamma \cup \Sigma)$, and thus (by Hopf's maximum principle) $\rho$ attains its maximum on $\Gamma \cup \Sigma$. We have to distinguish the following cases:
\begin{itemize}
\item $\rho$ attains its maximum on $\Gamma$ and thus $\nablabla\rho\cdot n\ge 0$. Since by assumption $(1-\bar\rho) \ge 0$, this implies $\alpha + \rho u\cdot n \ge 0$. As $u\cdot n = - 1 $ on $\Gamma$ we conclude $\rho \le \alpha$.
\item If $\rho$ attains its maximum on $\Sigma$ we conclude, since $u\cdot n = 1$, $\rho\le 1-\beta$.
\end{itemize}
If $\rho$ attains its minimum on the boundary we use the same arguments to conclude $\alpha \le \rho$ and $ 1-\beta \le \rho$. Finally, the $L^2$-bound on $\nabla \rho$ follows by using the weak formulation with test function $\rho$ and applying the bounds $0\leq \rho \leq 1$ and $0 \leq \bar \rho \leq 1$.
{\bf Continuity:} To show continuity of $S$ we take a sequence $\bar\rho_k$ in $L^\infty(\Omega)\cap H^1(\Omega)$ such that $\bar\rho_k \rightarrow \bar\rho$. We have to show that the sequence $\rho_k = S(\bar\rho_k)$ converges to some $\rho$ and that $\rho = S(\bar\rho)$. Since $\rho_k \in H^2(\Omega)$ we know that there exist $\tilde\rho$ and a subsequence such that $\rho_{k_j} \rightharpoonup \tilde{\rho}$ weakly in $H^2(\Omega)$.
Thus
\begin{align*}
\int_\Omega \nabla \rho_{k_j}\cdot \nabla\phi\,{\rm d} x - \int_\Omega \rho(1-\rho_{k_j})u\cdot \nabla \phi\,{\rm d} x \rightarrow \int_\Omega \nabla \tilde{\rho}\cdot\nabla\phi\,{\rm d} x - \int_\Omega \rho(1-\tilde\rho)u\cdot \nabla \phi\,{\rm d} x,
\end{align*}
i.e. $\tilde\rho$ solves \eqref{eq:lin}. The continuity of the trace operator allows us to pass to the limit in the boundary conditions \eqref{eq:lin_bc1}--\eqref{eq:lin_bc3} as well. The maximum principle discussed above implies that this solution is unique and the uniqueness of limits therefore yields $\tilde\rho = \rho$.
{\bf Compactness:} The compactness of the operator $S$ follows from the fact that the embedding $H^2(\Omega)\hookrightarrow L^\infty(\Omega)\cap H^1(\Omega)$ is compact for $n\le 3$. This completes the proof.
\end{proof}
Next we treat the potential case $u = \nabla V$, where we obtain the following
\begin{thm}[potential case]\label{thm:M1_potential} Let the assumptions (A1), (A2) and (A3') hold. Then, the equation
\begin{align}\label{eq:M1_potential}
\nabla\cdot (-\nabla\rho + \rho(1-\rho)\nabla V) = 0,\quad x\in\Omega,
\end{align}
supplemented with the boundary conditions \eqref{eq:bc1}--\eqref{eq:bc3} has at least one solution $\rho \in H^1(\Omega) \cap L^\infty(\Omega)$ such that $0 \le \rho \le 1$.
\end{thm}
\begin{proof}
Our proof is based on an approximation procedure, applied to the equation in \emph{entropy variables} which are defined as the variation of the entropy functional with respect to the density $\rho$. Since we are in the case $M=1$, the entropy functional \eqref{eq:entropy} reduces to
\begin{align*}
E[\rho] = \int_\Omega \left( \rho\ln(\rho) - \rho V + (1-\rho)\ln (1-\rho) \right)~dx.
\end{align*}
Then, we introduce the entropy variable as
\begin{align*}
\psi := \partial_{\rho} E[\rho] = \log \rho - \log \rho_0 - V.
\end{align*}
Using elementary calculations, we can express the original density $\rho$ as
\begin{align*}
\rho = \frac{e^{\psi+V}}{1+e^{\psi+V}},\;\text{and}\;\rho_0 = \frac{1}{1+e^{\psi+V}}.
\end{align*}
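Indeed, by definition $\psi+V=\log\frac{\rho}{1-\rho}$, so that
\begin{equation*}
e^{\psi+V}=\frac{\rho}{1-\rho}
\quad\Longleftrightarrow\quad
\rho=\frac{e^{\psi+V}}{1+e^{\psi+V}},\qquad \rho_0=1-\rho=\frac{1}{1+e^{\psi+V}}.
\end{equation*}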
Applying this transformation to \eqref{eq:M1_potential}, \eqref{eq:bc1}--\eqref{eq:bc3} yields the nonlinear equation
\begin{align}\label{eq:trans_entropy}
- \nabla\cdot\left(\frac{e^{\psi+V}}{\left(1+e^{\psi+V}\right)^2}\nabla \psi \right) = 0,
\end{align}
supplemented with the boundary conditions
\begin{align}\label{eq:bc_in_ent_1}
\frac{e^{ \psi+V}}{(1+e^{ \psi+V})^2}\nabla \psi\cdot n &= \alpha \frac{1}{1+e^{ \psi+V}},\quad\text{on }\Gamma,\\\label{eq:bc_out_ent_1}
-\frac{e^{ \psi+V}}{(1+e^{ \psi+V})^2}\nabla \psi\cdot n &= \beta\frac{e^{ \psi+V}}{1+e^{ \psi+V}},\quad\text{on }\Sigma,\\\label{eq:bc_no_ent_1}
\frac{e^{ \psi+V}}{(1+e^{ \psi+V})^2}\nabla \psi\cdot n &= 0,\quad\text{on }\partial\Omega\setminus (\Gamma \cup \Sigma).
\end{align}
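The computation behind this transformation is the flux identity (using $\nabla\psi=\frac{1}{\rho\rho_0}\nabla\rho-\nabla V$ and $\rho\rho_0=\frac{e^{\psi+V}}{(1+e^{\psi+V})^2}$)
\begin{equation*}
-\nabla\rho+\rho(1-\rho)\nabla V=-\rho\rho_0\,\nabla\psi=-\frac{e^{\psi+V}}{\left(1+e^{\psi+V}\right)^2}\,\nabla\psi,
\end{equation*}
i.e. the original flux equals $-A(\psi,V)\nabla\psi$ in the notation introduced in the next step.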
We will now apply an approximation procedure to this equation and proceed in several steps.
{\bf Existence for an auxiliary problem}: To simplify our notation we introduce the function $A(\psi,V) := \frac{e^{\psi+V}}{\left(1+e^{\psi+V}\right)^2}$ and for $\delta > 0$ we consider the problem
\begin{align}\label{eq:approx1}
-\nabla\cdot( A(\psi^\delta,V) \nabla\psi^\delta) + \delta\psi^\delta = 0.
\end{align}
To prove existence of \eqref{eq:approx1}, we use a fixed-point argument. For given $\tilde\psi \in L^2(\Omega)$ we define $ \tilde A_\delta(x) = A(\tilde\psi(x),V(x)) + \delta$ which yields the linear equation
\begin{align}\label{eq:ent_aux}
-\nabla\cdot( \tilde A_\delta \nabla\tilde{\psi}^\delta) + \delta\tilde{\psi}^\delta = 0,
\end{align}
subject to the nonlinear boundary conditions
\begin{align}\label{eq:ent_aux_bc}
\tilde A_\delta\nabla\tilde{\psi}^\delta\cdot n = \left\{\begin{array}{ll}
\alpha\frac{1}{1+e^{\tilde{\psi}^\delta+V}}, &\text{on }\Gamma,\\
-\beta\frac{e^{\tilde{\psi}^\delta+V}}{1+e^{\tilde{\psi}^\delta+V}}, &\text{on }\Sigma,\\
0, & \text{on }\partial\Omega\setminus (\Gamma \cup \Sigma).
\end{array}\right.
\end{align}
The corresponding weak formulation, for $\varphi \in H^1(\Omega)$, is given by
\begin{align}\label{eq:weak_linear}
0=\int_\Omega \left( \tilde A_\delta \nabla\tilde{\psi}^\delta\cdot\nabla\varphi + \delta \tilde{\psi}^\delta\varphi\right)\,{\rm d} x - \alpha\int_\Gamma \frac{1}{1+e^{\tilde{\psi}^\delta+V}}\varphi\,{\rm d} \sigma + \beta\int_\Sigma\frac{e^{\tilde{\psi}^\delta+V}}{1+e^{\tilde{\psi}^\delta+V}}\varphi\,{\rm d}\sigma.
\end{align}
This is the Euler-Lagrange equation to the nonlinear minimisation problem for the energy functional
\begin{align*}
E(\tilde{\psi}^\delta) = \frac12 \int_\Omega \left(\tilde A_\delta|\nabla\tilde{\psi}^\delta|^2 + \delta |\tilde{\psi}^\delta|^2\right)\,{\rm d} x - \alpha\int_\Gamma F(\tilde{\psi}^\delta,V)\,{\rm d}\sigma + \beta\int_\Sigma G(\tilde{\psi}^\delta,V)\,{\rm d}\sigma,
\end{align*}
where $F$ and $G$ are chosen such that
\begin{align*}
\partial_\psi F(\psi,V) = \frac{1}{1+e^{\psi+V}},\quad \partial_\psi G(\psi,V) = \frac{e^{\psi+V}}{1+e^{\psi+V}}.
\end{align*}
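One admissible choice (antiderivatives in $\psi$ are only determined up to functions of $V$, which merely shift the boundary integrals by quantities independent of $\tilde{\psi}^\delta$) is, for instance,
\begin{equation*}
F(\psi,V)=-\log\left(1+e^{-(\psi+V)}\right),\qquad G(\psi,V)=\log\left(1+e^{\psi+V}\right).
\end{equation*}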
Note that $-F$ and $G$ are convex in $\psi$, since their second derivatives are non-negative, so both boundary terms of the energy functional are convex. Furthermore, due to
\begin{align*}
\tilde A_\delta(x) = \frac{e^{\tilde{\psi} + V}}{(1+e^{\tilde{\psi} + V})^2} + \delta =\frac{1}{2(1+\cosh(\tilde{\psi} + V))} + \delta,
\end{align*}
we have that $\tilde A_\delta(x) \in L^\infty(\Omega)$, uniformly with $\delta \le \tilde A_\delta(x) \le \delta + 1/4$. Thus $E(\tilde{\psi}^\delta)$ is coercive with respect to the $H^1$ norm and due to its convexity we conclude (cf. \cite[Section 8.2, theorems 2 and 3]{Evans98}) the existence of a unique minimiser $\tilde{\psi}^\delta \in H^1(\Omega)$, which is by definition a weak solution to \eqref{eq:ent_aux}. Furthermore, since $G(\tilde{\psi}^\delta,V) \ge 0$ and $F(\tilde{\psi}^\delta,V) < \infty$, we infer the $L^2$ a-priori estimate
\begin{align}\label{eq:schauder_apriori}
\int_\Omega |\tilde{\psi}^\delta|^2 \,{\rm d} x \le \tilde C + \alpha\int_\Gamma F(\tilde{\psi}^\delta,V)\,{\rm d}\sigma \le C_{\mathcal{M}}.
\end{align}
Here the constant $C_{\mathcal{M}}$ depends on the geometry, $\alpha$, $\beta$, and $\delta$. This result allows us to define the nonlinear operator $\tilde K:L^2(\Omega)\to H^1(\Omega)$ mapping $\tilde\psi$ to the solution of \eqref{eq:ent_aux}--\eqref{eq:ent_aux_bc}. Our aim is to apply Schauder's fixed point theorem in the set
\begin{align*}
\mathcal{M} := \{ \psi \in L^2(\Omega)\,|\, \|\psi\|_{L^2(\Omega)}^2 \le C_{\mathcal M} \}.
\end{align*}
To this end we denote by $I_{H^1(\Omega)\hookrightarrow L^2(\Omega)}$ the compact embedding of $H^1$ into $L^2$ and define the operator $K:\mathcal M \to \mathcal M$ by $K = I_{H^1(\Omega)\hookrightarrow L^2(\Omega)}\circ \tilde K$. Since the a-priori estimate \eqref{eq:schauder_apriori} implies that $K$ is self-mapping, it remains to show its continuity. We consider a sequence $\tilde\psi_n$ that converges to $\tilde \psi$ in $L^2(\Omega)$. This yields a sequence $\tilde{\psi}_n^\delta \in H^1(\Omega)$ having a weak limit. Since $\tilde A_\delta$ is uniformly bounded in $L^\infty(\Omega)$, an application of Lebesgue's dominated convergence theorem yields $\tilde A_\delta(\tilde \psi_n) \to \tilde A_\delta(\tilde \psi)$ in $L^2(\Omega)$ and thus we can pass to the limit in the first integral of the weak formulation \eqref{eq:weak_linear}, i.e.
\begin{align*}
\int_\Omega \tilde A_\delta(\tilde\psi_n) \nabla \tilde \psi^\delta_n \cdot\nabla\varphi\,{\rm d} x \to \int_\Omega \tilde A_\delta(\tilde \psi) \nabla\tilde \psi^\delta \cdot\nabla\varphi\,{\rm d} x.
\end{align*}
Uniqueness of the weak solution to \eqref{eq:weak_linear} (due to the convexity of the energy $E$) thus implies $K(\tilde \psi_n) \to K(\tilde\psi)$ in $L^2(\Omega)$. Thus Schauder's fixed point theorem yields the existence of a solution $\psi^\delta$ to the auxiliary problem \eqref{eq:ent_aux}--\eqref{eq:ent_aux_bc}.
{\bf Limit $\mathbf{\delta \to 0}$}: To this end, we define
\begin{align*}
\rho^\delta = \frac{e^{\psi^\delta+V}}{1+e^{\psi^\delta+V}}.
\end{align*}
Then $\rho^\delta\in H^1(\Omega)$ and satisfies the equation
\begin{align}\label{eq:ent_approx}
\nabla\cdot\left(-\nabla\rho^\delta + \rho^\delta(1-\rho^\delta)\nabla V \right) - \delta\Delta \psi^\delta + \delta\psi^\delta = 0,
\end{align}
with the boundary conditions
\begin{align}
\left(-\nabla\rho^\delta + \rho^\delta(1-\rho^\delta)\nabla V \right)\cdot n = \left\{\begin{array}{ll}
-\alpha(1-\rho^\delta), &\text{on }\Gamma,\\
\beta\rho^\delta, &\text{on }\Sigma,\\
0, & \text{on }\partial\Omega\setminus (\Gamma \cup \Sigma).
\end{array}\right.
\end{align}
Again, we consider the weak form given by
\begin{align}\label{eq:rho_delta_weak}
0&= \int_\Omega \left(\nabla\rho^\delta - \rho^\delta(1-\rho^\delta)\nabla V \right)\cdot\nabla\varphi +\delta\nabla\psi^\delta\cdot\nabla\varphi+ \delta\psi^\delta\varphi\,{\rm d} x \\
& - \alpha\int_\Gamma (1-\rho^\delta)\varphi\,{\rm d}\sigma + \beta\int_\Sigma \rho^\delta\varphi\,{\rm d}\sigma, \quad \varphi \in H^1(\Omega).
\end{align}
Our aim is to derive a-priori estimates on $\rho^\delta$ by choosing the test function $\varphi=\psi^\delta$. We have
\begin{align}\label{eq:estimate}
0&=\int_\Omega (\nabla\rho^\delta\cdot \nabla\psi^\delta - \rho^\delta (1-\rho^\delta)\nabla V\cdot \nabla \psi^\delta)\,{\rm d} x + \delta\int_\Omega |\nabla\psi^\delta|^2 + (\psi^\delta)^2\,{\rm d} x\\\nonumber
&- \alpha\int_\Gamma (1-\rho^\delta)\psi^\delta\,{\rm d}\sigma + \beta\int_\Sigma \rho^\delta\psi^\delta\,{\rm d}\sigma.
\end{align}
We estimate each term separately, noting that $\nabla\psi^\delta = \frac{1}{\rho^\delta(1-\rho^\delta)}\nabla\rho^\delta - \nabla V$. For the first term we have
\begin{align*}
& \int_\Omega \frac{|\nabla\rho^\delta|^2}{\rho^\delta(1-\rho^\delta)}\,{\rm d} x - 2 \int_\Omega \nabla V\cdot\nabla \rho^\delta\,{\rm d} x + \int_\Omega \rho^\delta(1-\rho^\delta)|\nabla V|^2\,{\rm d} x\\
&\ge \frac{1}{2}\int_\Omega \frac{|\nabla\rho^\delta|^2}{\rho^\delta(1-\rho^\delta)}\,{\rm d} x - \int_\Omega \rho^\delta(1-\rho^\delta)|\nabla V|^2\,{\rm d} x\\
&\ge 2\int_\Omega |\nabla\rho^\delta|^2\,{\rm d} x - \frac{1}{4} \int_\Omega |\nabla V|^2\,{\rm d} x,
\end{align*}
where we used Cauchy's inequality to estimate the mixed term and the fact that $\rho^\delta(1-\rho^\delta) \le 1/4$. For the second term we estimate
\begin{align*}
&-\alpha\int_\Gamma (1-\rho^\delta)\psi^\delta \,{\rm d}\sigma = \alpha \int_\Gamma \left((1-\rho^\delta)\log \frac{1-\rho^\delta}{\rho^\delta} + 2\rho^\delta - 1 \right)\,{\rm d} \sigma
+\alpha\int_\Gamma \left(-(1-\rho^\delta)V + 1 - 2\rho^\delta\right)\,{\rm d}\sigma.
\end{align*}
The first term in this equation is a Kullback-Leibler distance and thus non-negative. As $V\in H^1(\Omega)$, the trace theorem yields $\left. V\right|_{\partial\Omega} \in L^2(\partial\Omega)$ and since, by definition $0\le \rho^\delta \le 1$, the second term is bounded. For the third term of \eqref{eq:estimate} we write
\begin{align*}
\beta\int_\Sigma \rho^\delta\psi^\delta\,{\rm d}\sigma = \beta \int_\Sigma \left(\rho^\delta \log \frac{\rho^\delta}{1-\rho^\delta} + 2\rho^\delta - 1 \right)\,{\rm d} \sigma - \beta\int_\Sigma \left(-\rho^\delta V - 1 + 2\rho^\delta\right)\,{\rm d} \sigma.
\end{align*}
By the same arguments as above, we conclude that the first term is non-positive while the second one is bounded.
Summarizing, we obtain
\begin{align*}
\int_\Omega |\nabla\rho^\delta|^2\,{\rm d} x \le \frac{1}{8} \int_\Omega |\nabla V|^2\,{\rm d} x + \alpha\int_\Gamma \left(-(1-\rho^\delta)V + 1 - 2\rho^\delta\right)\,{\rm d}\sigma - \beta\int_\Sigma \left(-\rho^\delta V - 1 + 2\rho^\delta\right)\,{\rm d} \sigma.
\end{align*}
These estimates yield an a-priori bound for $\rho^\delta$ in $H^1(\Omega)$. Due to $0\le \rho^\delta \le 1$, this allows us to pass to the limit in the weak formulation \eqref{eq:rho_delta_weak}. In particular we have, by passing to subsequences if necessary,
\begin{align*}
\int_\Omega \nabla\rho^\delta\cdot\nabla\varphi\,{\rm d} x &\to \int_\Omega \nabla\rho\cdot\nabla\varphi\,{\rm d} x &\text{ since } \nabla\rho^\delta\rightharpoonup\nabla \rho\text{ in }L^2(\Omega),\\
\int_\Omega \rho^\delta(1-\rho^\delta)\nabla V\cdot\nabla\varphi\,{\rm d} x &\to \int_\Omega \rho(1-\rho)\nabla V\cdot\nabla\varphi\,{\rm d} x &\text{ since } \rho^\delta \to \rho \text{ in }L^2(\Omega),\\
\delta\int_\Omega \nabla\psi^\delta\cdot\nabla\varphi + \psi^\delta\varphi\,{\rm d} x &\to 0 &\text{ since } \psi^\delta,\,\nabla\psi^\delta\in L^2(\Omega),\\
- \alpha\int_\Gamma (1-\rho^\delta)\varphi\,{\rm d}\sigma + \beta\int_\Sigma \rho^\delta\varphi\,{\rm d}\sigma &\to - \alpha\int_\Gamma (1-\rho)\varphi\,{\rm d}\sigma + \beta\int_\Sigma \rho\varphi\,{\rm d}\sigma &\text{ since } \rho^\delta \to \rho \text{ in }L^2(\partial\Omega).
\end{align*}
\end{proof}
Note that in the potential case we can only conclude that the density $\rho$ takes values between zero and one; for a stronger result depending on the in- and outflow parameters we need to return to the assumptions for incompressible velocity fields:
\begin{cor}\label{cor:M1_max} Let the assumptions of theorem \ref{thm:M1_potential} and additionally $\Delta V = 0$, $\partial_n V = -1$ on $\Gamma$, $\partial_n V = 1$ on $\Sigma$, and
$\partial_n V=0$ on $\partial \Omega \setminus (\Gamma \cup \Sigma)$ hold. Then, the solution $\rho$ to \eqref{eq:M1_potential} supplemented with \eqref{eq:bc1}--\eqref{eq:bc3} satisfies the bounds
\begin{equation}
\min\{\alpha,1-\beta\} \le \rho(x) \le \max \{ \alpha, 1-\beta \}
\end{equation}
\end{cor}
\begin{proof}
Since for $\Delta V = 0$ the equation \eqref{eq:M1_potential} fulfills a maximum principle, the assertion follows by similar arguments as in the proof of Theorem \ref{thm:incompressible}.
\end{proof}
\begin{rem} Note that testing the weak form \eqref{eq:rho_delta_weak} with the entropy variable $\psi^\delta$ yields estimates analogous to those that are obtained by the entropy dissipation method in the time dependent case. In fact, if equation \eqref{eq:ent_approx} featured the additional term $\partial_t \rho$ (parabolic case), then differentiating the entropy functional with respect to time would yield
\begin{align*}
\partial_t E(\rho^\delta) = \int_\Omega \left[\nabla\cdot\left(-\nabla\rho^\delta + \rho^\delta(1-\rho^\delta)\nabla V - \delta \nabla \psi^\delta\right) + \delta\psi^\delta\right] (\ln \rho^\delta - \ln (1-\rho^\delta) - V)\;dx.
\end{align*}
Recalling the definition $\psi^\delta = (\ln \rho^\delta - \ln (1-\rho^\delta ) - V)$ and after integration by parts one obtains
\begin{align}\label{eq:ent_diss_approx}
\partial_t E(\rho^\delta) &= -\int_\Omega \left(-\nabla\rho^\delta + \rho^\delta(1-\rho^\delta)\nabla V \right)\cdot \nabla \psi^\delta - \delta |\nabla \psi^\delta|^2 + \delta |\psi^\delta|^2\;dx\\
&-\alpha \int_\Gamma (1-\rho^\delta)\psi^\delta\;d\sigma + \beta\int_\Sigma \rho^\delta\;d\sigma.\nonumber
\end{align}
This means that in the entropy dissipation we obtain the same terms as in equation \eqref{eq:estimate}. While their sum is zero in the stationary case, in the parabolic case we obtain boundedness by integrating \eqref{eq:ent_diss_approx} with respect to time, which yields
\begin{align*}
E(\rho^\delta) &+ \int_0^T \int_\Omega \left(-\nabla\rho^\delta + \rho^\delta(1-\rho^\delta)\nabla V \right)\cdot \nabla \psi^\delta - \delta |\nabla \psi^\delta|^2 + \delta |\psi^\delta|^2\;dx\,dt\\
&-\alpha \int_0^T\int_\Gamma (1-\rho^\delta)\psi^\delta\;d\sigma\,dt + \beta\int_0^T\int_\Sigma \rho^\delta\;d\sigma\,dt \le E(\rho_0) \le C,
\end{align*}
where $\rho_0$ denotes the initial datum.
\end{rem}
\begin{rem} We finally mention that for convenience we used a diffusion coefficient equal to one, but all results of this section remain true for an arbitrary value $D > 0$ and even for regular spatially varying coefficients. This is important for the following section, where we study the natural case of a small diffusion coefficient.
\end{rem}
\section{Asymptotic Unidirectional Flow Characteristics}\label{sec:asymptotic}
In the following we discuss in detail the flow properties of the single species model for small diffusion $D = \varepsilon \ll 1$, in particular for the stationary solution $\rho \in H^1(\Omega) \cap L^\infty(\Omega)$ of
\begin{equation}
\nabla \cdot( -\varepsilon \nabla \rho + \rho (1-\rho) u) = 0, \qquad \text{in } \Omega \label{unifloweq}
\end{equation}
with boundary conditions \eqref{eq:bc1}--\eqref{eq:bc3}.
We are interested in the asymptotic behaviour as $\varepsilon \downarrow 0$, in particular the boundary layers and the asymptotic flow patterns, which we expect to be characterised by three different phases as in \cite{Wood2009}:
\begin{itemize}
\item An {\em influx-limited} phase with an asymptotically low density corresponding to a density of outgoing particles on $\Sigma$.
\item An {\em outflux-limited} phase with an asymptotically high density corresponding to a density of incoming particles on $\Gamma$, with a boundary layer on $\Sigma$ created by lower outflux rates.
\item A {\em maximal current } phase with asymptotic density $\frac{1}2$ and boundary layers both at $\Sigma$ and $\Gamma$, which occurs at high in- and outflow rates.
\end{itemize}
First we note that a direct conclusion from the maximum principle \eqref{rhobounds} is the non-appearance of a maximal current phase if
\begin{align*}
\frac{1}2 \notin \left[ \min\{\alpha, (1-\beta) \} , \max\{\alpha,(1-\beta) \} \right].
\end{align*}
In this case the maximum principle implies that the densities are bounded away from $\frac{1}2$ uniformly in $\varepsilon$.
\subsection{Characterization of Phases in the One-Dimensional Flow}
We now turn to the one-dimensional case with constant velocity, where we can use a scaling of space and flow such that $\Omega=[0,1]$,
$u \equiv 1$, $\Gamma =\{0\}$, and $\Sigma=\{1\}$. This setting corresponds exactly to the continuum limit of the setting in \cite{Wood2009}, and we will rigorously show that the continuum limit indeed exhibits the same behaviour as the TASEP with stochastic entrance and exit conditions, with transitions between the phases at exactly the same parameter values.
For convenience we restate the one-dimensional version of \eqref{unifloweq}, \eqref{eq:bc1}--\eqref{eq:bc3} as
\begin{equation}
- \varepsilon \partial_{xx} \rho + \partial_x ( \rho (1-\rho) ) = 0 \qquad \text{in } (0,1), \label{uniflow1deq}
\end{equation}
with boundary conditions
\begin{align}
&\varepsilon \partial_x \rho = (1- \rho) ( \rho - \alpha) & \text{at } x=0, \\
&\varepsilon \partial_x \rho = \rho (1 - \rho - \beta) & \text{at } x=1. \label{uniflow1dbc2}
\end{align}
A first result particular for the one-dimensional case is the uniqueness of a solution:
\begin{prop}
There exists exactly one weak solution $\rho \in H^1(\Omega)$ of \eqref {uniflow1deq}-\eqref{uniflow1dbc2}.
\end{prop}
\begin{proof}
Let $\rho_1$ and $\rho_2$ be two solutions, then $w=\rho_1-\rho_2$ satisfies
$$ - \varepsilon \partial_{xx} w + \partial_x ((1-\rho_1-\rho_2) w ) = 0 $$
with boundary conditions
\begin{align*}
&-\varepsilon \partial_x w + (1-\rho_1-\rho_2) w = - \alpha w & \text{at } x=0, \\
&-\varepsilon \partial_x w + (1-\rho_1-\rho_2) w = \beta w & \text{at } x=1.
\end{align*}
Now let $V \in H^2([0,1])$ be such that $-\varepsilon \partial_x V =(1-\rho_1-\rho_2)$ and $w=e^V v$. Then, $v$ is the weak solution of
$$ \partial_x( e^V \partial_x v) = 0 $$
in $(0,1)$ with boundary conditions
\begin{align*}
&\varepsilon \partial_x v = \alpha v & \text{at } x=0, \\
&\varepsilon \partial_x v = -\beta v & \text{at } x=1.
\end{align*}
Using the weak formulation of this boundary value problem with test function $v$ implies
$$ \varepsilon\int_0^1 e^V |\partial_x v|^2~dx + \alpha e^{V(0)} v(0)^2 + \beta e^{V(1)} v(1)^2 = 0, $$
which yields $\partial_x v \equiv 0$ and $v(0)=0$, hence $v \equiv 0$ and thus uniqueness of the solution.
\end{proof}
We start our analysis of the flow properties with a simple calculation relating the difference of $\rho$ to the constant state $\frac{1}2$ to the boundary values:
\begin{lemma} \label{1dlemmax}
Let $\rho \in H^1([0,1])$ be the unique weak solution of \eqref {uniflow1deq}-\eqref{uniflow1dbc2}. Then the estimate
\begin{equation}
\int_0^1 \left(\rho - \frac{1}2 \right)^2 ~dx + \beta \rho(1) - \frac{1}4 \leq \varepsilon \vert 1 - \alpha - \beta \vert
\end{equation}
holds.
\end{lemma}
\begin{proof}
Using the test function $\varphi(x)=x$ in the weak form of \eqref {uniflow1deq} and adding and subtracting $\frac{1}4$ we find
$$ \int_0^1 (\varepsilon \partial_x \rho + (\rho - \frac{1}2)^2)~dx + \beta \rho(1) - \frac{1}4 = 0 .$$
Integrating the first term exactly, $\int_0^1 \varepsilon\partial_x\rho~dx = \varepsilon(\rho(1)-\rho(0))$, and using the a priori bounds from the maximum principle for the boundary values concludes the proof.
\end{proof}
Lemma \ref{1dlemmax} will yield the desired asymptotic estimate if we can guarantee that $\beta \rho(1) \geq \frac{1}4$, such that the second term on the left-hand side is nonnegative. Note also that in spatial dimension one the flux is constant, thus we find
$\beta \rho(1) = \alpha (1- \rho(0))$, i.e. the above result could equally be formulated in terms of $\alpha$ respectively the inflow boundary value. To prove the latter under appropriate conditions is the objective of the next result:
\begin{thm}[Maximal Current Phase] \label{1dflowthm1}
Let $\rho \in H^1([0,1])$ be the unique weak solution of \eqref {uniflow1deq}-\eqref{uniflow1dbc2} and let
$$ \min\{\alpha,\beta\} \geq \frac{1}2.$$
Then the estimate
\begin{equation}
\int_0^1 \left(\rho - \frac{1}2 \right)^2 ~dx \leq \varepsilon \vert 1 - \alpha - \beta \vert
\end{equation}
holds and furthermore, we have $j \ge 1/4$.
\end{thm}
\begin{proof}
Using Lemma \ref{1dlemmax} it suffices to show $\beta \rho(1) \geq 1/4$, which we carry out by contradiction.
Assume $ \rho(1) =\frac{1}{4\beta} - \delta$ with $\delta > 0$. Since $\beta \geq \frac{1}2$ and $\alpha \geq \frac{1}2$ we conclude in particular
$$ \rho(1) =\frac{1}{4\beta} - \delta \leq \frac{1}2 \qquad \text{ and} \qquad \rho(0) =1 - \frac{\beta}\alpha \rho(1) \geq 1- \frac{1}{4\alpha} + \frac{\beta}\alpha \delta \geq \frac{1}2+ \frac{\beta}\alpha \delta. $$
Let $H$ be a smooth monotone function such that
$$ H(0)=0, \qquad H(1)=1, \qquad \text{supp}(H') \subset (\frac{1}2 - \gamma, \frac{1}2+\gamma) $$
with $\gamma <\min\{\delta,\frac{\beta}{\alpha}\delta\}$.
Now we choose the test function
$ \varphi = H(\rho)$
in the weak form of \eqref {uniflow1deq} again with $\frac{1}4$ added and subtracted.
Then we find
$$ \int_0^1 \left( \varepsilon H'(\rho) |\partial_x \rho|^2 - H'(\rho)\rho (1-\rho) \partial_x \rho \right)~dx + \beta \rho(1) (H(\rho(1)) - H(\rho(0))) = 0. $$
Using the nonnegativity of the first term and rewriting the second term yields
$$ (\beta \rho(1) - \frac{1}4) (H(\rho(1)) - H(\rho(0))) \leq - \int_0^1 H'(\rho) (\rho - \frac{1}2)^2 \partial_x \rho ~dx
= F(\rho(0)) - F(\rho(1)), $$
where $F$ satisfies $F'(p) = H'(p) (p-\frac{1}2)^2$ and $F(0)=0$. With the properties of $H$ it is straightforward to see that
$$ H(\rho(1)) - H(\rho(0)) = -1, \qquad F(\rho(1)) = 0, \qquad F(\rho(0)) \leq 3 \gamma^2 . $$
Hence, we conclude
$$ \beta\delta = \frac{1}4 - \beta \rho(1) \leq 3 \gamma^2 , $$
which is a contradiction for $\gamma$ sufficiently small. Since the flux is constant, the fact that $j \ge 1/4$ follows immediately from \eqref{eq:bc2}.
\end{proof}
We remark that the strategy of proof of Theorem \ref{1dflowthm1} is reminiscent of entropy solution concepts for conservation laws and parabolic equations (cf. \cite{karlsen}), where roughly speaking the Heaviside function applied to $\rho - c$ for arbitrary constant $c$ multiplied with a nonnegative smooth function is used as a test function to define entropy inequalities. The function $H$ in the above proof will indeed approximate the Heaviside function of $\rho - \frac{1}2$ as $\gamma$ tends to zero.
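For concreteness, one admissible (by no means unique) choice of $H$ in the above proof is the normalised primitive of a standard bump function: for any $0<\tilde\gamma<\gamma$ set
\begin{align*}
H(p) = \frac{\int_{-\infty}^{p} \eta\big(\tfrac{s-1/2}{\tilde\gamma}\big)\,{\rm d} s}{\int_{-\infty}^{\infty} \eta\big(\tfrac{s-1/2}{\tilde\gamma}\big)\,{\rm d} s},
\qquad
\eta(s) = \begin{cases} e^{-1/(1-s^2)}, & |s|<1,\\ 0, & |s|\geq 1.\end{cases}
\end{align*}
Then $H$ is smooth and monotone with $H(0)=0$, $H(1)=1$ and $\text{supp}(H') \subset (\frac{1}2 - \gamma, \frac{1}2+\gamma)$.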
In the inflow- and outflow-limited case the analysis is easier; an estimate like in Lemma \ref{1dlemmax} suffices:
\begin{thm}[Inflow- and Outflow Limited Phases] \label{1dflowthm2}
Let $\rho \in H^1([0,1])$ be the unique weak solution of \eqref {uniflow1deq}-\eqref{uniflow1dbc2} and let
$$ \max\{\alpha,\beta\} < \frac{1}2.$$
Then for $\alpha < \beta$ the estimate
\begin{equation}
\int_0^1 |\rho - \alpha| ~dx \leq \varepsilon \frac{1- \alpha - \beta}{\beta - \alpha} \end{equation}
holds, while for $\alpha > \beta$
\begin{equation}
\int_0^1 |\rho - 1 + \beta| ~dx \leq \varepsilon \frac{1- \alpha - \beta}{\alpha-\beta} \end{equation}
holds.
\end{thm}
\begin{proof}
Note that the maximum principle implies $\alpha \leq \rho \leq 1-\beta$ in any of the two cases.
We only detail the case $\frac{1}2 > \beta \geq \alpha$, as the other one is analogous. Using the test function $\varphi(x) = 1-x$ in the weak formulation and some rewriting we have
$$ 0 = \int_0^1 (\varepsilon \partial_x \rho + \rho^2 - \rho )~dx + \alpha (1- \rho (0) ). $$
With some rearranging and the bounds on $\rho$ we have
$$ \beta \int_0^1 (\rho - \alpha) ~dx \leq \int_0^1 (1- \rho)(\rho - \alpha)~dx \leq \varepsilon (1-\beta-\alpha) + \alpha \int_0^1\rho~dx - \alpha \rho(1). $$
Since $\rho \geq \alpha$ we have
$$ (\beta-\alpha) \int_0^1 |\rho - \alpha| ~dx \leq \varepsilon (1-\beta-\alpha). $$
\end{proof}
Summing up, we have shown exactly the same behaviour for our continuum model as \cite{Wood2009} did for the discrete TASEP.
\subsection{Explicit Solutions in One Spatial Dimension}\label{sec:explicit}
In this section, we briefly discuss explicit solutions to the one-dimensional equations \eqref {uniflow1deq}--\eqref{uniflow1dbc2}. This derivation is mostly based on basic calculus and the details are presented in the appendix. However, this approach allows us to clarify the role of the parameter $\varepsilon$ with respect to the phase diagram. In particular, we can show that for $\varepsilon > 0$, maximal current can occur for values of $\alpha,\,\beta$ that are strictly smaller than $1/2$.
Since in one space dimension, the flux $j$ is constant, we can set $j=1/4$ and integrate \eqref{uniflow1deq} and obtain the first order ordinary differential equation
\begin{align*}
-\varepsilon\partial_x\rho + \rho(1-\rho) = \frac{1}{4},
\end{align*}
which is also known as ``logistic equation with harvesting'' in the context of population dynamics, cf. \cite{Brauer1975,Cooke1986}. Solving this equation subject to the boundary conditions, an elementary calculation shows that one of the following conditions on $\alpha$ and $\beta$ has to hold in order to obtain a continuous solution:
\begin{align}\label{eq:contex1}
\frac{1}{2}\frac{1+2\varepsilon}{4\varepsilon+1} < \alpha &< \frac{1}{2}\qquad\text{ and }\qquad \beta = \frac{1}{2}\frac{4\alpha\varepsilon+2\alpha-1}{8\alpha\varepsilon+2\alpha-2\varepsilon-1},\\\label{eq:contex2}
\frac{1}{2}\frac{1+2\varepsilon}{4\varepsilon+1} < \beta &< \frac{1}{2}\qquad\text{ and }\qquad \alpha = \frac{1}{2}\frac{4\beta\varepsilon+2\beta-1}{8\beta\varepsilon+2\beta-2\varepsilon-1}.
\end{align}
Interestingly, maximal flux is achieved for values of $\alpha$, $\beta < 1/2$, which is in contrast to the discrete model \cite{Wood2009}. To illustrate this, we depict the changes in the phase diagram for different values of $\varepsilon$ in figure \ref{fig:phase_eps}. In section \ref{sec:num1d} we present numerical results based on a discretization of \eqref{uniflow1deq}--\eqref{uniflow1dbc2} that confirm this observation.
\begin{figure}
\caption{In this phase diagram, the solid lines separate the regions of $j\ge 1/4$ (maximal flow) and $j< 1/4$ for the values $\varepsilon = 0.01$ (blue), $\varepsilon = 0.1$ (red) and $\varepsilon = 0.5$ (green). The dashed lines correspond to the discrete case of \cite{Wood2009}.}
\label{fig:phase_eps}
\end{figure}
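For the reader who wishes to reproduce the solid lines in figure \ref{fig:phase_eps}, the following short Python snippet (ours, purely illustrative) evaluates the relation \eqref{eq:contex1} exactly as stated above; in particular it returns the value of $\beta$ quoted in section \ref{sec:num1d} for $\alpha=0.4912$ and $\varepsilon=0.01$.
\begin{verbatim}
import numpy as np

def beta_of_alpha(alpha, eps):
    # relation (eq:contex1): outflow rate beta paired with inflow rate alpha
    return 0.5 * (4*alpha*eps + 2*alpha - 1) / (8*alpha*eps + 2*alpha - 2*eps - 1)

# reproduces the value 0.603773585... quoted in the numerical section
print(beta_of_alpha(0.4912, 0.01))

# trace part of the boundary of the maximal-current region for eps = 0.1
eps = 0.1
a_min = 0.5 * (1 + 2*eps) / (4*eps + 1)   # lower end of the admissible alpha-range
for alpha in np.linspace(a_min, 0.5, 6)[1:-1]:
    print(f"alpha = {alpha:.4f}, beta = {beta_of_alpha(alpha, eps):.4f}")
\end{verbatim}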
\section{Numerical Solution}\label{sec:numerics}
In this section we will describe the numerical method that we used and present some examples in one and two space dimensions. Our implementation is based on the discontinuous Galerkin finite element method, which is well-suited for convection dominated problems, see \cite{DiPietro2012} and the references therein. We will not give any details regarding error estimates and the convergence of our algorithm; this remains future work.
\subsection{Setting and Discontinuous Galerkin Scheme}\label{sec:scheme}
Let us recall some well-known notations and definitions, cf. \cite{DiPietro2012}. We start by dividing our domain into elements which are triangles in two space dimensions and intervals in 1D. For simplicity, we shall only discuss the two-dimensional case from now on. We cover the domain $\Omega \subset \mathbb{R}^2$ by a finite collection of triangles which we denote by $\mathcal{T}_h$, where $h$ refers to the diameter of the largest triangle. Furthermore, we denote by $F$ the mesh faces which are characterised by one of the following two conditions:
\begin{enumerate}
\item Either there are distinct triangles $T_1$ and $T_2$ such that $F = \partial T_1 \cap \partial T_2$; then $F$ is an interface,
\item or there is $T\in \mathcal T_h$ such that $F = \partial T \cap \partial\Omega$; then $F$ is a boundary face.
\end{enumerate}
We denote by $\mathcal F_h^i$ the set of all interfaces, $\mathcal F_h^b$ the boundary faces and by $\mathcal F_h$ the union of these two sets. Furthermore, $ n_F$ is the normal vector of a facet, pointing outward. On $\mathcal{T}_h$ we introduce the broken polynomial space
\begin{align*}
V_h = \{ v \in L^2(\Omega) \;:\; \forall\, T \in\mathcal{T}_h,\, \left. v \right|_T \in \mathcal{P}^1(T)\,\},
\end{align*}
where $\mathcal{P}^1(T)$ denotes polynomials of degree one on $T$.
For a scalar function $v$, smooth enough for the expression $\left. v\right|_{F}$ for all $F \in \mathcal F$ to make sense, we define interface averages and jumps in the following way
\begin{align*}
\avg{v}_F(x) &:= \frac{1}{2} ( \left. v\right|_{T_1}(x) + \left. v\right|_{T_2}(x) ), \text{ for a.e. }x\in F, &\text{(average)},\\
\jump{v}_F(x) &:=\left. v\right|_{T_1}(x) - \left. v\right|_{T_2}(x),\text{ for a.e. }x\in F, &\text{(jump)}.
\end{align*}
With these definitions at hand, we can state our discontinuous Galerkin scheme. Starting from the weak formulation of a linearized version of \eqref{eq:basic} we consider
\begin{align}\label{eq:weak}
\underbrace{\varepsilon\int_\Omega\nabla \rho\cdot\nabla\phi\,{\rm d} x + \int_\Omega \rho(1-\tilde\rho)u\cdot\nabla\phi\,{\rm d} x}_{=:a(\rho,\phi;\tilde\rho)} + \underbrace{\alpha\int_{\Gamma}\rho\phi\,{\rm d} s + \beta\int_{\Sigma}\rho\phi\,{\rm d} s}_{=: a_F(\rho,\phi)} = \underbrace{\alpha\int_\Gamma \phi\,{\rm d} s}_{=:f(\phi)},\qquad\phi \in H^1(\Omega),
\end{align}
with $\tilde\rho \in H^1(\Omega)\cap L^\infty(\Omega)$ given. In order to obtain a discrete solution $\rho_h \in V_h$ we define the bilinear form
\begin{align*}
a(\rho_h,\phi_h ; \tilde \rho ) = a^{\rm swip}(\rho_h,\phi_h) + a^{\rm upw}(\rho_h,\phi_h ; \tilde \rho),
\end{align*}
with a symmetric weighted interior penalty method for the diffusion given by
\begin{align*}
a^{\rm swip}(\rho_h,\phi_h) &= \int_\Omega \varepsilon \nabla_h\rho_h\cdot\nabla_h\phi_h\,{\rm d} x - \sum_{F\in\mathcal{F}_h} \varepsilon\int_F \left(\avg{\nabla_h\rho_h}\cdot n_F\jump{\phi_h} + \jump{\rho_h}\avg{\nabla_h\phi_h}\cdot n_F\right)\,{\rm d}\sigma \\
&+ \sum_{F\in\mathcal{F}_h}\eta\frac{\varepsilon}{h_F}\int_F\jump{\rho_h}\jump{\phi_h}\,{\rm d}\sigma
\end{align*}
and an upwind scheme for the advection part
\begin{align*}
a^{\rm upw}(\rho_h,\phi_h ; \tilde\rho_h) &= \int_\Omega -\rho_h((1-\tilde\rho_h)u\cdot\nabla_h \phi_h)\,{\rm d} x + \sum_{F\in\mathcal{F}_h^i}\int_F (1-\tilde \rho_h)u\cdot n_F \avg{\rho_h} \avg{\phi_h}\,{\rm d}\sigma\\
&+ \sum_{F\in\mathcal{F}_h^i}\frac{1}{2}|(1-\tilde\rho_h)u\cdot n_F|\int_F \jump{\rho_h}\jump{\phi_h}\,{\rm d}\sigma,
\end{align*}
again with $\tilde \rho_h \in V_h$ given. The local length scale $h_F$ is defined as $h_F = \frac{1}{2}(h_{T_1} + h_{T_2})$, where $T_1$ and $T_2$ are the two triangles adjacent to face $F$. In order to obtain a solution to the original nonlinear problem \eqref{eq:basic} we employ the following semi-implicit iteration scheme: For $u^n_h$ given find $u_h^{n+1}\in V_h$ s.t.
\begin{align}\label{eq:scheme}
(u_h^{n+1},\phi_h) + \tau(a(u_h^{n+1},\phi_h ; u_h^n) + a_F(u_h^{n+1},\phi_h)) = (u_h^n,\phi_h) + \tau f(\phi_h),\qquad \forall \phi_h \in V_h,
\end{align}
with a relaxation parameter $\tau > 0$. Thus in each step one has to solve the following system of linear equations
\begin{align*}
(M+\tau A) \underline{u}^{n+1} = (M\underline{u}^n+\tau\underline{f}),
\end{align*}
where $\underline{u}$ denotes the vector of coefficients of $u$ in the linear finite element basis, $A$ is the matrix corresponding to the bilinear form $(a + a_F)$, and $M$ denotes the mass matrix. The vector $\underline{f}$ stems from the term $f(\phi_h)$ on the r.h.s. of \eqref{eq:scheme}, and $u^n$ is the solution of the previous step. In all experiments below we chose $u_0=1/2$ and $\tau=0.01$. Note that this scheme can be interpreted as a semi-implicit time discretization of the parabolic version of \eqref{eq:basic} with time step size $\tau$.
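To make the iteration \eqref{eq:scheme} concrete, the following minimal Python sketch carries it out at the linear-algebra level. It assumes that the sparse mass matrix $M$, a routine \texttt{assemble\_A} returning the matrix of $a(\cdot,\cdot;u^n)+a_F(\cdot,\cdot)$ for a given iterate, and the load vector $\underline f$ are provided; their assembly from the discrete forms above is not shown, and all names are ours.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import spsolve

def semi_implicit_solve(assemble_A, M, f, u0, tau=0.01, max_iter=1000, tol=1e-10):
    # fixed-point iteration (M + tau*A(u^n)) u^{n+1} = M u^n + tau*f
    u = u0.copy()
    for _ in range(max_iter):
        A = assemble_A(u)              # upwind/SWIP matrix linearised at u^n
        u_new = spsolve((M + tau * A).tocsc(), M @ u + tau * f)
        if np.linalg.norm(u_new - u) < tol:   # stationary state reached
            return u_new
        u = u_new
    return u
\end{verbatim}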
\subsection{Results in one spatial dimension}\label{sec:num1d}
In one space dimension, we used MATLAB to implement the scheme described above. We will present several examples in the following and consider the domain $\Omega = [0,1]$ discretized by $n=200$ elements.
\subsubsection*{Different phases}
First we present some examples to illustrate the occurrence of the three different phases (namely \emph{influx limited}, \emph{outflux limited} and \emph{maximal current}) that are analysed in section \ref{sec:asymptotic} (and also in \cite{Wood2009}). We performed simulations for $\varepsilon = 0.1,\, 0.01,\, 0.001$. For $\alpha$ and $\beta$ we chose the values $0.2,\,0.4,\,0.6$ and $0.4,\,0.2,\,0.7$, respectively. The numerical results confirm the predicted occurrence of three phases and the results are shown in figure \ref{fig:phases}.
\begin{figure}
\caption{Some results of the 1D code for $\varepsilon=0.1$ (blue), $\varepsilon=0.05$ (black), and $\varepsilon=0.01$ (red). Top left: $\alpha=0.2$, $\beta=0.4$, Top right: $\alpha=0.4$, $\beta=0.2$, Bottom left: $\alpha=0.6$, $\beta=0.7$, Bottom right: $\alpha=0.5$, $\beta=0.5$}
\label{fig:phases}
\end{figure}
\subsubsection*{Maximal current for $\alpha<1/2$ or $\beta<1/2$}
Here we present numerical evidence for the results of section \ref{sec:explicit}, namely the occurrence of the maximal flow phase for $\alpha,\,\beta < 1/2$. We chose $\varepsilon=0.01$ which yields $\frac{1}{2(2\varepsilon+1)}=0.4902$. We chose $\alpha=0.4912$ and by \eqref{eq:contex1}, the corresponding $\beta$ is $0.603773585$. To illustrate the case $\beta < 1/2$ we simply interchange the roles of $\alpha$ and $\beta$. Both results are depicted in Figure \ref{fig:alpha_jmax} and confirm the results from section \ref{sec:explicit}.
\begin{figure}
\caption{For $\varepsilon = 0.01$, the resulting density (left) and flux (right) is depicted for the values $\alpha=0.4912$, $\beta=0.6043$ (top) and $\alpha=0.6043$, $\beta=0.4912$ (bottom). The maximal flux $j=1/4$ is observed in both cases.}
\label{fig:alpha_jmax}
\end{figure}
To further explore the behaviour of the flux, we used the discontinuous Galerkin scheme introduced above to numerically produce a phase diagram by sampling the values of $\alpha,\,\beta$ from $0$ to $1$ with a stepsize of $0.01$. For $\varepsilon = 0.1$ we compared the contour line $j=1/4$ with \eqref{eq:contex1} or \eqref{eq:contex2}, respectively, see figure \ref{fig:phasenum}.
\begin{figure}
\caption{For $\varepsilon = 0.1$, the left picture shows a phase diagram generated using the discontinuous Galerkin method described above for values of $\alpha,\,\beta = 0,0.001,0.002,\ldots,1$. On the right side, contour lines for several values of the flux $j$ are depicted. The line for $j=1/4$ (blue line) is compared to the analytical results of section \ref{sec:explicit}.}
\label{fig:phasenum}
\end{figure}
\subsection{Results in two spatial dimensions}
In two spatial dimensions, we used the software package FeniCS, \cite{Logg2010,Logg2012}, to implement the scheme described in section \ref{sec:scheme}. We present several examples using the domain sketched in figure \ref{fig:sketch2d}, i.e. a corridor of length $2$ and height $1$ with two entrances on the left and two exits on the right boundary. The upper entrance and exit are located at $0.65 < y < 0.85$, the lower ones at $0.15 < y < 0.35$. For each entrance we have a different inflow rate $\alpha_i$, $i=1,2$, while we have outflow rates $\beta_i$, $i=1,2$, for the exits.
\subsubsection*{Maximum principle}
In all examples in this section, we use a velocity field given as the gradient of some potential. From theorem \ref{thm:M1_potential} and corollary \ref{cor:M1_max} we know that for general potentials $V$ we only have $0\le \rho \le 1$ while for $V$ satisfying the assumptions of corollary \ref{cor:M1_max} we have that $\min\{\alpha,1-\beta\} \le \rho(x) \le \max \{ \alpha, 1-\beta \}$. To illustrate this, let $V_m$ be given as the solution to the equation
\begin{align*}
-\Delta V_m &= 0\;\text{in}\;\Omega, \\
\partial_n V_m &= -1\;\text{ on }\Gamma,\\
\partial_n V_m &= 1\;\text{ on }\Sigma,\\
\partial_n V_m&=0\;\text{ on }\partial \Omega \setminus (\Gamma \cup \Sigma),
\end{align*}
with the normalization condition $\int_{\partial\Omega} u \;d\sigma(x) = C$. On the discrete level, we use a mixed method to discretize this equation in order to ensure that the condition $\nabla\cdot \nabla V_m = 0$ is fulfilled exactly. The normalization constraint is achieved by setting an arbitrary boundary node to zero. The resulting velocity field $u_m = \nabla V_m$ is depicted in Figure \ref{fig:nablau}. Alternatively we chose $V_l(x)=x$ which yields $u_l = \nabla V_l = (1,0)^t$. In our first example we then chose $\alpha_1 = 0.2$, $\alpha_2 = 0.4$, $\beta_1 = 0.4$ and $\beta_2 = 0.2$ and apply our scheme with $V=V_l$ and $V=V_m$, respectively. The results shown in figure \ref{fig:2dlane} exhibit the expected behaviour, namely the maximum principle \eqref{rhobounds} holds only for $V=V_m$. Furthermore, the results show that, due to the asymmetric in- and outflow rates, some of the ``particles'' entering at $\alpha_2$ indeed move over to the exit at $\beta_1$. This indicates that the model may be able to also predict lane formation in the case with more than one active species (i.e. \eqref{basiceqn1} with $M>1$), see also \cite{schlakepietschmann}.
\subsubsection*{High densities and obstacles}
In a second example, we explored the situation of maximal flow by using the values $\alpha_1 = 0.6$, $\alpha_2 = 0.9$, $\beta_1 = 0.9$ and $\beta_2 = 0.6$ and $V=V_l$. Note that in both cases we observe that the maximum principle of theorem \ref{thm:incompressible} does not hold on the parts of the boundary between the in- and outflow boundaries, since $u_l \cdot n \neq 0$ on the no-flux boundary. The results are shown in figure \ref{fig:2dmax}.
Since this example shows that high densities can occur between the two exits, we modify the domain by adding a round obstacle in front of the doors as shown in figure \ref{fig:2dhole}. This is motivated by results from models for human crowd motion, where in some cases an obstacle in front of the exits can improve the situation. Indeed, our results show that the density between the two exits decreases, however at the price of a large density in front of the obstacle itself. Furthermore, the transition from high to low densities observed in figure \ref{fig:2dlane} is shifted towards the entrances.
\begin{figure}
\caption{Sketch of the geometry for the two-dimensional simulations. We consider a corridor of length $2$ and height $1$ with two entrances and two exits, located on the left and right boundary, respectively.}
\label{fig:sketch2d}
\end{figure}
\begin{figure}
\caption{The vector field $\nabla V_m$. The non-homogeneous Neumann boundary conditions yield a velocity field that transports density away from the entrances and towards the exits, thus facilitating the transport.}
\label{fig:nablau}
\end{figure}
\begin{figure}
\caption{Simulation with rates $\alpha_1 = 0.2$, $\alpha_2 = 0.4$, $\beta_1 = 0.4$ and $\beta_2 = 0.2$. Above the results for $V=V_l$ are shown and even though the velocity field points in $x$-direction only, density is transported towards the larger exit. For $V=V_m$ (below) one clearly sees that the maximum principle is satisfied while still some density is transported to the lower exit. }
\label{fig:2dlane}
\end{figure}
\begin{figure}
\caption{Simulation with $V=V_l$ and rates $\alpha_1 = 0.6$, $\alpha_2 = 0.9$, $\beta_1 = 0.9$ and $\beta_2 = 0.6$, i.e. in the regime of maximal flow.}
\label{fig:2dmax}
\end{figure}
\begin{figure}
\caption{Introducing an obstacle dramatically decreases the density in front of the two exits, however at the cost of a slightly increased density in front of the obstacle. The rates are $\alpha_1 = 0.2$, $\alpha_2 = 0.4$, $\beta_1 = 0.4$ and $\beta_2 = 0.2$.}
\label{fig:2dhole}
\end{figure}
\section{Summary \& Outlook}
In this paper we analysed a model for crowded transport with a single active species. We started by giving some details about the modelling and then proceeded with two existence proofs in the stationary case. Next we analysed the flow characteristics of our model in the case of small diffusion. In one space dimension, we were able to recover three different phases that were already observed in the stochastic model \cite{Wood2009}. Further investigation showed, however, that the continuous model can produce fluxes that exceed the value $j=1/4$, which do not occur on the discrete level. We concluded by presenting some numerical examples in one and two spatial dimensions.\\
Our analysis and especially the numerical examples in two spatial dimensions suggest that interesting phenomena can occur when dealing with more than one active species. Since each species has its own in- and outflow rate, it is not clear whether one would again observe different phases, clearly separated by certain values of these parameters. Also, to prove existence in the case $M>1$ becomes much more involved. Regarding the numerical discretization, a scheme that uses the reformulated problem in entropy variables might be an alternative to the direct approach used here.
\section*{Appendix}
In this appendix we will detail the calculation of explicit solutions to the one-dimensional equation \eqref{uniflow1deq}. Since the flux $j$ is constant in 1D, we can integrate this equation to obtain the ordinary differential equation
\begin{align}\label{eq:integrated}
-\varepsilon\partial_x\rho + \rho(1-\rho) = j
\end{align}
supplemented with the boundary conditions $j= \alpha(1-\rho(0))$ and $j=\beta \rho(1)$. We shall separately discuss the cases of constant density, maximal flux and the general case $j < 1/4$:
\begin{enumerate}
\item $\rho = const$: Working with \eqref{eq:integrated}, the problem reduces to an overdetermined algebraic system of equations, namely
\begin{align*}
j &= \alpha ( 1 - \rho ),\\
j &= \beta \rho,\\
j &= \rho(1-\rho).
\end{align*}
This is solvable if and only if $\alpha + \beta = 1$ and we obtain $\rho = \alpha = 1-\beta$ and $j=\alpha(1-\alpha)=\beta(1-\beta)$. In particular, the maximal flux $j = 1/4$ is achieved for $\alpha=\beta=1/2$, only.
\item $j = 1/4$: Note that \eqref{eq:integrated} with $j=1/4$ is also known as ``logistic equation with harvesting'' in the context of population dynamics, cf. \cite{Brauer1975,Cooke1986}. In this case, \eqref{eq:integrated} becomes
\begin{align*}
-\varepsilon\partial_x \rho - \left(\rho - \frac{1}{2}\right)^2 = 0,
\end{align*}
which yields
\begin{align}\label{eq:sol_explicit_14}
\rho(x) = \frac{1}{2} + \frac{\varepsilon}{x+c},
\end{align}
with the constant $c$ to be determined by the boundary conditions. We have
\begin{align*}
\frac{1}{4} &= \alpha(1-\rho(0)) = \alpha\left(\frac12 - \frac{\varepsilon}{c_1}\right)\qquad\Rightarrow c_1 = \frac{4\alpha\varepsilon}{2\alpha-1},\\
\frac{1}{4} &= \beta\rho(1) = \beta\left(\frac12 + \frac{\varepsilon}{1+c_2}\right)\qquad\Rightarrow c_2 = \frac{4\beta\varepsilon}{2\beta-1}-1.
\end{align*}
In order to obtain a single, continuous solution we have to ensure the two conditions
\begin{align*}
c_1 = c_2\text{ and either } c_1 = c_2 > 0 \text{ or } c_1 = c_2 < -1.
\end{align*}
Very elementary but rather tedious calculations show that this amounts to the following two conditions on $\alpha,\,\beta$ and $\varepsilon$
\begin{align*}
\frac{1}{2}\frac{1+2\varepsilon}{4\varepsilon+1} < \alpha &< \frac{1}{2}\qquad\text{ and }\qquad \beta = \frac{1}{2}\frac{4\alpha\varepsilon+2\alpha-1}{8\alpha\varepsilon+2\alpha-2\varepsilon-1}\qquad\text{for }c>0,\\
\frac{1}{2}\frac{1+2\varepsilon}{4\varepsilon+1} < \beta &< \frac{1}{2}\qquad\text{ and }\qquad \alpha = \frac{1}{2}\frac{4\beta\varepsilon+2\beta-1}{8\beta\varepsilon+2\beta-2\varepsilon-1}\qquad\text{for }c<-1,
\end{align*}
with $c:=c_1=c_2$.
As already discussed in section \ref{sec:explicit}, this is interesting since solutions with maximal flux occur for values of $\alpha$, $\beta < 1/2$, which is in contrast to the discrete model \cite{Wood2009}. See also section \ref{sec:num1d} for some numerical examples that confirm this observation.
\item $j \neq 1/4$: In this case the explicit solution is given by
\begin{align}
\rho(x) = \frac{1}{2} - \frac{\sqrt{4j-1}}{2}\tan\left(\frac{1}{2}\frac{\sqrt{4j-1}(x+c)}{\varepsilon}\right).
\end{align}
The boundary conditions reduce to the following nonlinear algebraic system
\begin{align*}
j &= \alpha \left(\frac12+\frac{\sqrt{4j-1}}{2}\tan\left(\frac{1}{2}\frac{\sqrt{4j-1}(1+c)}{\varepsilon}\right)\right),\\
j &= \beta\left(\frac12 - \frac{\sqrt{4j-1}}{2}\tan\left(\frac{1}{2}\frac{\sqrt{4j-1}c}{\varepsilon}\right)\right).
\end{align*}
Solving these two conditions for the constant $c$, we finally end up with a non-linear equation determining $j$ given by
\begin{align*}
\arctan\left(\frac{2j-\alpha}{\alpha\sqrt{4j-1}}\right) = \frac{\sqrt{4j-1}}{2\varepsilon} + \arctan\left(\frac{2j-\beta}{\beta\sqrt{4j-1}}\right).
\end{align*}
This equation can be solved for example by applying Newton's method; a short numerical sketch of this step is given after this list.
\end{enumerate}
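The following Python sketch (ours, for illustration only) carries out this last step numerically. It takes the nonlinear equation exactly as stated above, which requires $j>1/4$ so that $\sqrt{4j-1}$ is real, scans for a sign change of the residual and refines it by Brent's method rather than Newton's method; the parameter values in the final line are an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def flux_residual(j, alpha, beta, eps):
    # residual of the nonlinear equation determining j (requires j > 1/4)
    s = np.sqrt(4.0 * j - 1.0)
    return (np.arctan((2.0 * j - alpha) / (alpha * s))
            - s / (2.0 * eps)
            - np.arctan((2.0 * j - beta) / (beta * s)))

def solve_flux(alpha, beta, eps, j_max=2.0, n_scan=2000):
    # scan (1/4, j_max] for a sign change, then refine it with Brent's method;
    # returns None if no sign change is found in the scanned interval
    js = np.linspace(0.25 + 1e-9, j_max, n_scan)
    r = np.array([flux_residual(j, alpha, beta, eps) for j in js])
    idx = np.where(np.sign(r[:-1]) != np.sign(r[1:]))[0]
    if len(idx) == 0:
        return None
    return brentq(flux_residual, js[idx[0]], js[idx[0] + 1],
                  args=(alpha, beta, eps))

print(solve_flux(alpha=0.55, beta=0.9, eps=0.1))
\end{verbatim}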
Note that using these calculations, one can compute the solution to \eqref{uniflow1deq} (with $u = 1$) explicitly for $j=1/4$ or $\rho=const$, and for other values of $j$ by solving a system of non-linear equations.
\end{document}
|
\begin{document}
\author{Weronika Wrzos-Kaminska\footnote{Clare College, Cambridge, Trinity Lane, Cambridge CB2 1TL, UK} }
\title{A simpler winning strategy for Sim}
\maketitle
\begin{abstract}
We give a simple human-playable winning strategy for the second player in the game
of Sim.
\end{abstract}
\section{Introduction}
In the game of Sim, two players, P1 and P2, compete on a complete graph of six vertices ($K_6$). Each of the players has a colour, say red for P1 and blue for P2. The players alternate turns, starting with P1, claiming one of the previously uncoloured edges of the $K_6$ and colouring it in their colour.
The first player to be forced to form a triangle of their own colour loses. The game of Sim was first introduced by Simmons in 1969 \cite{Simmons}, and has since then attracted a lot of attention. A technical report by Slany \cite{Slany} provides detailed information about Sim and other Ramsey games, as well as further references to literature on Sim. For more information on this and other mis\`ere-type games, see \cite{Leader}.
It is well known that any two-colouring of the edges of $K_6$ must contain a monochromatic triangle (as the Ramsey number $R(3,3)$ equals 6). Thus, it is impossible for the game of Sim to end in a tie. Consequently, one of the two players must possess a winning strategy, i.e. a strategy that enables the player to win no matter what moves their opponent makes. Computer searches have shown that it is the second player who possesses a winning strategy for the game of Sim \cite{Simmons}. Mead, Rosa and Huang \cite{Mead} have provided an explicit winning strategy, remarking however that `a simpler (in terms of the rules to be followed) winning strategy is still desirable'. In this paper, we present a different, rather simpler, winning strategy for the game of Sim.
\section{Some Definitions}
Before describing the strategy, we need to introduce some terminology.
Firstly, by a position of a game after $k$ moves, we mean a subgraph of $K_6$ with $k$ coloured edges, each either red or blue.
Given a position, we can partition the edges of $K_6$ as $E(K_6) = R \cup B \cup N,$
where $R$ is the set of red edges (edges that have already been claimed by P1), $B$ is the set of blue edges (edges that have already been claimed by P2) and $N$ is the set of all uncoloured edges.
By a \textit{P1-allowed move} we mean an uncoloured edge such that colouring it red would not complete a monochromatic triangle. That is, any move that does not lead to an immediate loss for P1 is a P1-allowed move. A \textit{P2-allowed move} is defined analogously.
Moreover, we will say that a set X of uncoloured edges is a \textit{P1-allowed set} if $X \cup R$ does not contain any triangles.
Again, we define a \textit{P2-allowed set} in the analogous way.
Given a position P, we define a \textit{mini-board} of P to be a complete subgraph M of the $K_6$ such that:
\begin{enumerate}[label=(\roman*), nosep]
\item M contains all the coloured edges and at least one P2-allowed move
\item M is minimal with respect to the above condition, that is, no proper subgraph $K \subset M$ satisfies (i).
\end{enumerate}
Clearly, a mini-board exists whenever the position has at least one P2-allowed move. Note that the mini-board, whenever it exists, is unique up to isomorphism.
\begin{figure}
\caption{Example of a position together with the two (isomorphic) mini-boards. The black edges are representing uncoloured edges. Note that the subgraph spanned by the vertices $\{1,2,3,4\}
\label{fig2}
\end{figure}
Finally, given a mini-board M, we will say that a set of edges $X \subset E(K_6)$ is a \textit{P1-allowed set on M} if $X \subset E(M)$ and it is a P1-allowed set. By a \textit{maximum P1-allowed set on M} we will mean a P1-allowed set on M of maximum size among all the P1-allowed sets on M.
As always, we define \textit{P2-allowed set on M} and \textit{maximum P2-allowed set on M} in a similar way.
\begin{figure}
\caption{For the above position (to the left) and the mini-board M (to the right), we have that the set $\{35, 45 \}
\label{fig1}
\end{figure}
\section{The strategy}
We are now ready to define the winning strategy for P2: \\
Whenever it is P2's turn, they should fix an arbitrary mini-board M of the current position. They should then pick a move according to the following rules:
\begin{enumerate}
\item Consider only P2-allowed moves on M.
\item Pick the move(s) belonging to the greatest number of maximum P2-allowed sets on M.
\item Pick the move(s) belonging to the greatest number of maximum P1-allowed sets on M.
\end{enumerate}
The rules should be interpreted in a hierarchical order as follows: P2 should start by applying Rule 1. If this does not determine their move uniquely, then Rule 2 should be applied as a `tie-breaker' to distinguish between the moves satisfying Rule 1. If this still does not determine their move, then Rule 3 should be applied as a further tie-breaker. If the move is still not determined uniquely, then P2 may pick arbitrarily among the moves which are left after applying all three rules.
This strategy does not necessarily determine the move uniquely up to isomorphism. However, if there is more than one move left after applying all the rules, then any of the remaining moves will lead to a win for P2.
An exhaustive computer search shows that this strategy is indeed winning for P2.
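To make the above rules concrete, the following Python sketch (ours; it is not the program used for the exhaustive search mentioned above) computes, for a given position, the P2-allowed moves and the maximum P2-allowed (or P1-allowed) sets; for brevity it works on the full board rather than on a mini-board. Rules 2 and 3 then amount to counting, for each allowed move, in how many of the returned maximum sets it appears.
\begin{verbatim}
from itertools import combinations

VERTICES = range(1, 7)
EDGES = [frozenset(e) for e in combinations(VERTICES, 2)]  # the 15 edges of K6

def has_triangle(edges):
    # True if the given edge set contains a triangle
    es = set(edges)
    return any(frozenset({a, b}) in es and frozenset({b, c}) in es
               and frozenset({a, c}) in es
               for a, b, c in combinations(VERTICES, 3))

def allowed_moves(own, other):
    # uncoloured edges that can be added to 'own' without closing a triangle;
    # allowed_moves(B, R) gives the P2-allowed moves
    coloured = set(own) | set(other)
    return [e for e in EDGES
            if e not in coloured and not has_triangle(set(own) | {e})]

def maximum_allowed_sets(own, other):
    # all maximum-size sets X of uncoloured edges with X plus 'own' triangle-free;
    # maximum_allowed_sets(R, B) gives the maximum P1-allowed sets
    free = [e for e in EDGES if e not in set(own) | set(other)]
    for k in range(len(free), 0, -1):
        found = [set(X) for X in combinations(free, k)
                 if not has_triangle(set(own) | set(X))]
        if found:
            return found
    return [set()]

# example: P1 (red) owns edges 12 and 34, P2 (blue) owns edge 13
R = {frozenset({1, 2}), frozenset({3, 4})}
B = {frozenset({1, 3})}
print(len(allowed_moves(B, R)), "P2-allowed moves")
print(len(maximum_allowed_sets(B, R)), "maximum P2-allowed sets")
\end{verbatim}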
\end{document}
|
\begin{document}
\begin{frontmatter}
\title{Abstract stochastic evolution equations in M-type 2 Banach spaces}
\runtitle{T\d{a} Vi$\hat{\d{e}}$t T\^{o}n and Atsushi Yagi }
\begin{aug}
\author{T\d{a} Vi$\hat{\d{e}}$t T\^{o}n\thanksref{t2}\ead[label=e1]{[email protected]}}
and
\author{ Atsushi Yagi\thanksref{t3}\ead[label=e2]{[email protected]}}
\thankstext{t2}{This work is supported by the Japan Society for the Promotion of Science.}
\thankstext{t3}{This work is supported by Grant-in-Aid for Scientific Research (No. 20340035) of the Japan Society for the Promotion of Science.}
\runauthor{T\d{a} Vi$\hat{\d{e}}$t T$\hat{\rm o}$n and Atsushi Yagi}
\affiliation{Osaka University}
\address{Department of Information and Physical Science\\
Graduate School of Information Science and Technology\\
Osaka University\\
Suita Osaka 565-0871, Japan\\
\printead{e1}\\
\phantom{E-mail:\ }\printead*{e2}}
\end{aug}
\begin{abstract}
This paper is devoted to studying abstract stochastic evolution equations in M-type 2 Banach spaces. First, we handle nonlinear evolution equations with multiplicative noise. The existence and uniqueness of local and global mild solutions under linear growth and Lipschitz conditions on coefficients are presented. The regular dependence of solutions on initial data is also studied. Second, we investigate linear evolution equations with additive noise. The existence and uniqueness of strict and mild solutions and their regularity are shown. Finally, we explore semilinear evolution equations with additive noise. We concentrate on the existence, uniqueness and regular dependence of solutions on initial data.
\end{abstract}
\begin{keyword}[class=MSC]
\kwd[Primary ]{60H15}
\kwd{35R60}
\kwd[; secondary ]{47D06}
\end{keyword}
\begin{keyword}
\kwd{stochastic evolution equations}
\kwd{analytic semigroups}
\kwd{M-type 2 Banach spaces}
\end{keyword}
\end{frontmatter}
\section {Introduction}
The theory of stochastic partial differential equations in Hilbert spaces has been studied since the 1970s. The basic theoretical problem of existence and uniqueness of solutions and the problem of regularity and regular dependence of solutions on initial data are still of great interest today.
Two main approaches are known to the abstract stochastic evolution equations, namely, the variational methods and the semigroup methods.
Some early work in the first approach is due to Bensoussan and Temam \cite{BensoussanTemam1,BensoussanTemam2}.
The fundamental work on monotone stochastic evolution equations is also due to Pardoux \cite{Pardoux1,Pardoux2}. There are many other important contributions in this approach, for example Krylov-Rosovskii \cite{Krylov}, Pr\'{e}v\^{o}t and R\"{o}ckner \cite{PrevotRockner}, Viot \cite{Viot1,Viot2}, Gawarecki and Mandrekar \cite {GawareckiMandrekar} and references therein.
Let us introduce the second approach. The semigroup methods, which were initiated by the invention of the analytic semigroups in the middle of the last century, are characterized by precise formulas representing the solutions of the Cauchy problem for deterministic evolution equations (see Hille \cite{Hille} and Yosida \cite{Yosida}).
The analytical semigroup $S(t)=e^{-tA}$ generated by a linear operator $-A$ provides directly a fundamental solution to the Cauchy problem for an autonomous linear evolution equation
\begin{equation*}
\begin{cases}
\frac{dX}{dt}+AX=F(t), \quad\quad 0<t\leq T,\\
X(0)=X_0,
\end{cases}
\end{equation*}
and the solution is given by the formula $X(t)=S(t)X_0+\int_0^t S(t-s)F(s)ds.$ Similarly, a solution to the Cauchy problem for an autonomous nonlinear evolution equation
\begin{equation*}
\begin{cases}
\frac{dX}{dt}+AX=F(t,X), \quad\quad 0<t\leq T,\\
X(0)=X_0,
\end{cases}
\end{equation*}
can be obtained as a solution of an integral equation $X(t)=S(t)X_0+\int_0^t S(t-s)F(s,X(s))ds.$ For these problems, the solution formulas provide us important information on solutions such as uniqueness, regularity, smoothing effect and so forth. Especially, for nonlinear problems, one can derive Lipschitz continuity of solutions with respect to the initial values, even their Fr\'{e}chet differentiability. This powerful approach has been used in the study of stochastic evolution equations in Hilbert spaces.
Some early work was proposed by Dawson \cite{Dawson0,Dawson}, Curtain and Falb \cite{CurtainFalb}. More recent important contributions are due to Da Prato and his collaborators, see for example \cite{prato-1,pratoFlandoliPriolaRockner,prato0,prato}, and references therein.
In this paper, we study abstract stochastic evolution equations in M-type 2 Banach spaces by using the semigroup methods.
Let $E$ be an M-type 2 real separable Banach space and $\mathcal B(E)$ be the Borel $\sigma$\,{-}\,field on $E$. Let $\{w_t, t\geq 0\}$ be a one-dimensional Brownian motion on a complete probability space $(\Omega, \mathcal F,\mathbb P)$ with a filtration $\{\mathcal F_t\}_{t\geq 0}$ satisfying the usual conditions (see for example Arnold \cite{Arnold}, Friedman \cite{Fried}, Karatzas-Shreve \cite{IS}).
We proceed to study abstract stochastic evolution equations of the form
\begin{equation} \label{E2}
\begin{cases}
dX+AXdt=F(t,X)dt+ G(t,X)dw_t,\\
X(0)=\xi,
\end{cases}
\end{equation}
where $A\,{:}\,\mathcal D(A)\subset E\to E$ is a densely defined, closed linear operator in $E$. The functions $F$ and $ G$ are $E$-valued random variables. The initial value $ \xi$ is an $\mathcal F_0$-measurable random variable.
A linear form of \eqref{E2} in an M-type 2 and UMD Banach space, i.e. $F(t,x)=F(t)$ is depending only on $t$ and $G(t,x)=Bx$, where $B$ is a linear operator from the space to itself, is investigated in Brze\'{z}niak \cite{Brzezniak}. The author showed a sufficient condition on the operator $A$ and the linear operator $B$, which is assumed to be bounded as an operator from $\mathcal D(A)$ into $\mathcal D_A(\frac{1}{2},2)$, under which \eqref{E2} has a strict solution.
Our objective is to study the existence and uniqueness, regularity and dependence on initial data of solutions of \eqref{E2}. Our contribution is threefold. First, we handle nonlinear evolution equations with multiplicative noise. Second, we investigate linear evolution equations with additive noise. Finally, we concentrate on semilinear evolution equations with additive noise.
The work on linear and semilinear evolution equations with multiplicative noise is in preparation \cite{Ton1,Ton2}.
Let us describe the content of the present paper. In Section \ref{section2}, we introduce the stochastic integral in the M-type 2 Banach space $E$, concepts of solutions of \eqref{E2}, function spaces with values in $E$ and an integral inequality of Volterra type.
In Section \ref{section3}, we show existence of solutions of \eqref{E2} as well as regular dependence of solutions on initial data under some conditions on the coefficients $F$ and $G$. This section contains three subsections. In the first subsection, we assume that $F$ and $G$ satisfy the linear growth and Lipschitz conditions.
Theorem \ref{linear growth and Lipschitz conditions} gives the existence of global mild solutions of \eqref{E2}. The proof of the theorem is similar to that in Da Prato-Zabczyk \cite{prato}.
In the second subsection, we assume that $F$ and $G$ satisfy the local linear growth and local Lipschitz conditions.
Theorem \ref{local existence theorem} shows the existence of maximal local mild solutions of \eqref{E2}. The proof is based on Lemma \ref{lem1}, Lemma \ref{lem2} and Theorem \ref{linear growth and Lipschitz conditions}. Note that these lemmas come from earlier results in the theory of ordinary differential equations (see for example Friedman \cite{Fried}).
Theorem \ref{dependence of local mild solution on the initial data} gives the dependence of the maximal local mild solution on the initial data.
In the last subsection, we assume that $F$ and $G$ satisfy the linear growth and local Lipschitz conditions.
Theorem \ref{thm1} demonstrates the existence of global mild solutions of \eqref{E2}. Theorem \ref{dependence of global mild solution on the initial data} presents the dependence of the global mild solutions on the initial data.
In Section \ref{section4}, we concentrate on a class of equations of \eqref{E2}, namely, linear evolution equations with additive noise. We assume that $F(t,x)=F(t)$ and $G(t,x)=G(t)$ are depending only on $t$. These functions are considered in the spaces $\mathcal F^{\beta,\sigma}((0,T];E) $ and $\mathcal B((0,T];E),$ which will be defined in Section \ref{section2}. Theorem \ref{StrictSolutions} gives the existence of strict solutions.
Theorem \ref{regularity theorem autonomous linear evolution equation} explores the regularity of mild solutions without the assumption
$|AS(t)|\leq c_\delta t^{-\delta}, t\in [0,T], \delta\in (0,\beta)$ of Theorem \ref{StrictSolutions}.
In Section \ref{section5}, we set $F(t,x)=F_1(x)+F_2(t)$ and $G(t,x)=G(t),$ where $F_1, F_2$ and $G$ are depending only on $x$ and $t$, respectively. The corresponding equation \eqref{semilinear evolution equation} is then the form of semilinear evolution equations with additive noise. We suppose that the function $F_1$ is defined only on a subset of the space $E$, namely $\mathcal D(F_1)= \mathcal D(A^\eta),$ and that $F_1$ satisfies a Lipschitz condition on its domain (see \eqref{AbetaLipschitzcondition}). To treat \eqref{semilinear evolution equation} we require that the initial condition takes values in a smaller space, say $\mathcal D(A^\beta)$.
Theorem \ref{semilinear evolution equationTheorem1} proves the existence of mild solutions in the function space $ \mathcal C((0,T_{F,G,\xi}];\mathcal D(A^\eta))\cap \mathcal C([0,T_{F,G,\xi}];\mathcal D(A^\beta))$. Theorem \ref{semilinear evolution equationMoreRegular} gives a stronger regularity under more regular initial values. Theorem \ref{continuityofsolutionsininitialdata} presents some results on regular dependence of solutions on initial data. Theorem \ref{semilinear evolution equationTheorem2} shows the existence of solutions for a critical case of the Lipschitz condition on $F_1$. Finally, Theorem \ref{continuityofsolutionsininitialdata2} explores the regular dependence of solutions on initial data for this critical case.
\section{Preliminary} \label{section2}
\subsection{Stochastic integrals in M-type 2 Banach spaces}
This subsection reviews the construction and some properties of the stochastic integral in M-type 2 real separable space $E$. All details in this subsection one can find in the work of Dettweiler \cite{Dettweiler1,Dettweiler,Dettweiler3}, Pisier \cite{Pisier0,Pisier} and Brze\'{z}niak \cite{Brzezniak}.
\begin{definition}[Pisier \cite{Pisier}] \label{MType2BanachSpace}
A Banach space $E$ is of $M$-type 2 (or martingale type 2) if there is a constant $c(E)$ such that for all $E$-valued martingales $\{M_n\}_n$ the inequality
$$\sup_n \mathbb E|M_n|^2 \leq c(E) \sum_{n\geq 0}\mathbb E |M_n-M_{n-1}|^2$$
holds with the convention $M_{-1}=0.$
\end{definition}
\begin{example}
Every $L^p$ space with $p\in[2,\infty)$ is of $M$-type 2.
\end{example}
Let us first define the stochastic integral for step functions. Let $f:[0,T]\times \Omega\to E$ be an adapted random step function, i.e. there exist sequences $\{t_i\}_0^n: 0=t_0<\cdots<t_n=T$ and $\{f_i\}_0^{n-1}: f_i\in L^2(\Omega,\mathcal F_{t_i}, \mathbb P; E)$ such that $f(s)=f_i$ a.e. for $s\in[t_i,t_{i+1})$. Then the stochastic integral of $f$ on $[0,T]$ with respect to $w_t$ is defined by
$$I_T(f):=\int_0^T f(t)dw_t=\sum_0^{n-1} (w_{t_{i+1}}-w_{t_i})f_i.$$
It is obvious that $I_T(f)$ is $\mathcal F_T$-measurable. In addition, by putting
$$M_k=\sum_{i=0}^k (w_{t_{i+1}}-w_{t_i})f_i,$$
then $I_T(f)=M_{n-1}$ and $\{M_k\}_{k=0}^{n-1}$ is a martingale. In view of Definition \ref{MType2BanachSpace}, we have
\begin{align}
\mathbb E|I_T(f)|^2\leq& \sup_k \mathbb E|M_k|^2
\leq c(E) \sum_{k=0}^{n-1}\mathbb E| (w_{t_{k+1}}-w_{t_k})f_k |^2 \label{Eq-1}\\
= & c(E) \sum_{k=0}^{n-1}\mathbb E|w_{t_{k+1}}-w_{t_k}|^2 \mathbb E|f_k |^2 \notag\\
=& c(E) \sum_{k=0}^{n-1}(t_{k+1}-t_k) \mathbb E|f_k |^2 \notag\\
=& c(E) \int_0^T \mathbb E|f(t)|^2dt. \notag
\end{align}
Denote by $\mathcal P_\infty$ the predictable $\sigma$\,{-}\,field on $\Omega_{\infty}=[0,\infty)\times \Omega$ generated by sets of the form
$$(s,t]\times K_1, \,\, 0\leq s<t<\infty, K_1\in \mathcal F_s \text{ and } \{0\}\times K_2,\,\, K_2\in \mathcal F_0,$$
and denote by $\mathcal P_T$ the restriction of $\mathcal P_\infty$ to $\Omega_T=[0,T]\times \Omega.$
A process $\phi$ is called predictable if it is measurable from $(\Omega_T, \mathcal P_T)$ into $(E,\mathcal B(E))$.
We then denote by $\mathcal N^2(0,T)$ the set of all $E$\,{-}\,valued predictable processes $\phi$ such that $\mathbb E\int_0^T |\phi(t)|^2 dt<\infty$. Thanks to the inequality \eqref{Eq-1}, one can define the stochastic integral for functions in $\mathcal N^2(0,T)$ (a set $\mathcal N^2(a,b), a,b\in \mathbb R,$ and the integral for functions in $\mathcal N^2(a,b)$ are defined similarly). Indeed, due to \eqref{Eq-1}, it is not hard to show that the limit in the following definition exists and does not depend on the actual choice of step functions.
\begin{definition}[stochastic integrals]
Let $f\in \mathcal N^2(0,T)$. The stochastic integral of $f$ on $[0,t], 0\leq t\leq T$ is defined by
$$I_t(f):=\int_0^t f(s) dw_s=\lim_{n\to\infty} \int_0^t \phi_n(s)dw_s \hspace{1cm} (\text{limit in } L^2(\mathbb P))
$$
where $\{\phi_n\}_n$ is a sequence of step functions such that
$$\lim_{n\to\infty} \mathbb E \int_0^t|f(s)-\phi_n(s)|^2 ds=0.$$
\end{definition}
\begin{proposition}\label{IntegralInequality}
Let $f\in \mathcal N^2(0,\infty)$. Then
\begin{itemize}
\item [{\rm (i)}] $\mathbb E|I_t(f)|^2\leq c(E) \int_0^t \mathbb E|f(s)|^2ds$, \quad $t\geq 0$.
\item [{\rm (ii)}] $\{I(f)\}_t$ is an $E$-valued continuous martingale and $I(f)\in \mathcal N^2(0,\infty).$
\item [{\rm (iii)}] For any $p>1, T>0$
$$\mathbb E \sup_{[0,T]} \Big|\int_0^t \phi(s) dw_s\Big|^p \leq \Big(\frac{p}{p-1}\Big)^p c_p(E) \mathbb E\Big[\int_0^T |\phi(s)|^2 ds\Big]^{\frac{p}{2}},$$
where $c_p(E)$ is some constant depending only on $p, E$.
\end{itemize}
\end{proposition}
\subsection{Concept of solutions}
Throughout this paper, if not specified, we always assume that $F$ and $G$ are measurable from $ (\Omega_T \times E, \mathcal P_T\times \mathcal B(E))$ into $(E,\mathcal B(E))$. In addition, we assume that $(-A)$ generates a strongly continuous semigroup $S(t)=e^{-tA}, t\geq 0$ on $E.$ We then set $M_t=\sup_{0\leq s\leq t} |S(s)|.$
Following the definition of local solutions to stochastic differential equations (e.g. Arnold \cite{Arnold}, Friedman \cite{Fried}, Mao \cite{Mao}), we present a definition of local mild solutions for \eqref{E2}.
\begin{definition} \label{def0}
Let $\tau$ be a stopping time such that $\tau\leq T$ a.s.
A predictable $E$-valued continuous process $\{X(t) , t\in [0,\tau)\}$ is called a local mild solution of \eqref{E2} if the following conditions are satisfied. \begin{itemize}
\item [\rm (i)] There exists a sequence $\{\tau_k\}_{k=1}^\infty$ of stopping times such that $0\leq \tau_k\leq \tau_{k+1}, k\geq 1 $ and $\lim_{k\to\infty} \tau_k=\tau$ \, a.s.
\item [\rm (ii)] For $ t\in[ 0,T] $ and $k\geq 1$, $S(t-\cdot)G(\cdot,X(\cdot))\in \mathcal N^2(0,t\wedge \tau_k)$, i.e.
$$\mathbb E\int_0^{t\wedge \tau_k} |S(t-s)G(s,X(s))|^2 ds<\infty. $$
\item [\rm (iii)] For $ t\in[ 0,T] $ and $k\geq 1$
$$\int_0^{t\wedge \tau_k} |S(t-s)F(s,X(s))| ds<\infty \hspace{2cm} \text{ a.s. } $$
\item [\rm (iv)] For $ t\in[ 0,T] $ and $k\geq 1$
\begin{equation*}
\begin{aligned}
X(t\wedge \tau_k)=&S(t\wedge\tau_k)\xi +\int_0^{t\wedge\tau_k}S(t-s) F(s, X(s))ds\\
& +\int_0^{t\wedge\tau_k} S(t-s) G(s,X(s))dw_s \hspace{2cm}\text{a.s. } \end{aligned}
\end{equation*}
\end{itemize}
If in addition $\lim_{t\uparrow \tau} |X(t)|=\infty$ on $\{\tau< T\},$ then $\{X(t) , t\in [0,\tau)\}$ is called a maximal local mild solution. A maximal local mild solution $\{X(t),0\leq t<\tau\}$ is said to be unique if any other maximal local mild solution $ \{\bar X(t),0\leq t<\bar \tau\}$ is indistinguishable from it, means that $\mathbb P\{\tau=\bar\tau\}=\mathbb P\{X(t)=\bar X(t) \text { for every } t\in [0,\tau)\}=1$.
\end{definition}
The following definition of global mild solutions is presented in Da Prato-Zabczyk \cite{prato} and Brze\'{z}niak \cite{Brzezniak}.
\begin{definition}\label{Def1}
A predictable $E$-valued continuous process $X(t), t\in [0,T]$ is called a global mild solution (briefly, a mild solution) of \eqref{E2} if $X\in \mathcal N(0,T)$
and for every $t\in[0,T]$
\begin{align}
\label{DefinitionGlobalMildSolutions}X(t)=&S(t)\xi +\int_0^tS(t-s) F(s, X(s))ds\\
&+ \int_0^t S(t-s) G(s,X(s))dw_s \hspace{2cm} \text{a.s.}\notag
\end{align}
\end{definition}
It is obvious that a maximal local mild solution $\{X(t),t\in [0,\tau)\}$ of \eqref{E2} is a mild solution on $[0,T]$ if $\tau=T$ a.s. and $X$ satisfies \eqref{DefinitionGlobalMildSolutions} and is continuous at $t=T$.
\begin{definition}\label{Def2}
A predictable $E$-valued continuous process $X(t) , t\in [0,T]$ is called a strict solution of \eqref{E2} if
\begin{itemize}
\item [\rm(i)] $G(\cdot,X(\cdot))\in \mathcal N^2(0,T)$,
\item [\rm(ii)] $|\int_0^t F(s,X(s))ds| <\infty, $ \hspace{1cm} $t\in[0,T]$,
\item [\rm(iii)] $X(t)\in D(A) $ and $\int_0^t \mathbb E|X(s)|_{D(A)}^2 ds<\infty,$ \hspace{1cm}$t\in (0,T],$
\item [\rm(iv)] for every $t\in(0,T]$
$$X(t)=\xi -\int_0^t AX(s)ds+\int_0^t F(s, X(s))ds+ \int_0^t G(s,X(s))dw_s \hspace{1cm} \text{a.s.}$$
\end{itemize}
A strict (mild) solution $\{X(t),0\leq t\leq T\}$ is said to be unique if any other strict (mild) solution $ \{\bar X(t),0\leq t\leq T\}$ is indistinguishable from it, means that $\mathbb P\{X(t)=\bar X(t) \text { for every } t\in [0,T]\}=1$.
\end{definition}
\subsection{Function spaces with values in a Banach space}
Let $I$ be an interval of the real line. By $\mathcal B(I;E)$, we denote the space of uniformly bounded $E$-valued functions on $I$. The space is a Banach space with the supremum norm
$$|G|_{\mathcal B(I;E)}=\sup_{t\in I} |G(t)|, \quad\quad G\in \mathcal B(I;E).$$
Denote by $\mathcal C(I;E)$ the space of $E$-valued continuous functions. The following well-known result is used very often.
\begin{theorem}
Let $A$ be a closed linear operator of $E$ and $a,b\in \mathbb R, a\leq b.$
\begin{itemize}
\item [{\rm (i)}] If $G\in \mathcal C([a,b];E)$ and $AG\in \mathcal C([a,b];E)$ then
$$A\int_a^b G(t)dt=\int_a^b AG(t)dt.$$
\item [{\rm (ii)}] If $G\in \mathcal N^2(a,b) $ and $AG\in \mathcal N^2(a,b)$ then
$$A\int_a^b G(t)dw_t=\int_a^b AG(t)dw_t.$$
\end{itemize}
\end{theorem}
For an exponent $\sigma>0$, $\mathcal C^\sigma([a,b];E), a\leq b$ denotes the space of functions which are H\"{o}lder continuous on $[a,b]$ with exponent $\sigma$. The space is equipped with norm
$$|G|_{\mathcal C^\sigma([a,b];E)}=\sup_{a\leq s<t\leq b} \frac{|G(t)-G(s)|}{(t-s)^\sigma}.$$
The Kolmogorov test gives a sufficient condition for a stochastic process to be H\"{o}lder continuous.
\begin{theorem}[Kolmogorov test, see e.g. Da Prato-Zabczyk \cite{prato} ] \label{Kolmogorov test}
Let $\zeta(t), t\in [0,T]$ be an $E$-valued stochastic process such that for some constants $c>0, \epsilon_i>0, i=1,2$ and all $t,s\in [0,T]$
\begin{equation} \label{Kolmogorov test condition}
\mathbb E|\zeta(t)-\zeta(s)|^{\epsilon_1}\leq c |t-s|^{1+\epsilon_2}.
\end{equation}
Then $\zeta$ has a version whose $\mathbb P$-almost all trajectories are H\"{o}lder continuous functions with an arbitrary exponent smaller than $\frac{\epsilon_2}{\epsilon_1}$.
\end{theorem}
When the process $\zeta(t)$ in Theorem \ref{Kolmogorov test} is a Gaussian process, one can weaken the condition \eqref{Kolmogorov test condition}.
\begin{theorem} \label{Kolmogorov testGaussian}
Let $\zeta(t), t\in [0,T]$ be an $E$-valued Gaussian process such that $\mathbb E \zeta(t)=0, t\geq 0$, and that for some constants $c>0, \epsilon\in (0,1]$ and all $t,s\in [0,T]$
$$
\mathbb E|\zeta(t)-\zeta(s)|^2\leq c |t-s|^\epsilon.
$$
Then there exists a modification of $\zeta$ with $\mathbb P$-almost all trajectories being H\"{o}lder continuous functions with an arbitrary exponent smaller than $\frac{\epsilon}{2}$.
\end{theorem}
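As a simple illustration of Theorem \ref{Kolmogorov testGaussian} (not needed in the sequel), consider a standard real-valued Brownian motion $B(t)$, $t\in[0,T]$, regarded as a centered Gaussian process with values in $E=\mathbb R$. Since
$$\mathbb E|B(t)-B(s)|^2=|t-s|, \hspace{2cm} t,s\in[0,T],$$
the theorem applies with $\epsilon=1$ and recovers the classical fact that almost all Brownian trajectories are H\"{o}lder continuous with an arbitrary exponent smaller than $\frac{1}{2}$.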
For two exponents $0<\sigma<\beta\leq 1$ we define a function space $\mathcal F^{\beta, \sigma}((0,T]; E)$ as follows, see Yagi \cite{yagi} ($\mathcal F^{\beta, \sigma}((a,b]; E), a<b$ is defined similarly). The space $\mathcal F^{\beta, \sigma}((0,T]; E)$ consists of all continuous functions $f(t)$ on $(0,T]$ (resp. $[0,T]$) when $0<\beta<1$ (resp. $\beta=1$) having the following three properties:
\begin{enumerate}
\item When $\beta<1$, $t^{1-\beta} f(t) $ has a limit as $t\to 0$.
\item The function $f$ is H\"{o}lder continuous with exponent $\sigma$ and with the weight $s^{1-\beta+\sigma}$, i.e.
$$\sup_{0\leq s<t\leq T} \frac{s^{1-\beta+\sigma}|f(t)-f(s)|}{(t-s)^\sigma}=\sup_{0\leq t\leq T}\sup_{0\leq s<t}\frac{s^{1-\beta+\sigma}|f(t)-f(s)|}{(t-s)^\sigma}<\infty.$$
\item
\begin{equation} \label{Fbetasigma3}
\lim_{t\to 0} \sup_{0\leq s\leq t}\frac{s^{1-\beta+\sigma}|f(t)-f(s)|}{(t-s)^\sigma}=0.
\end{equation}
\end{enumerate}
Then $\mathcal F^{\beta, \sigma}((0,T]; E)$ becomes a Banach space with norm
$$|f|_{\mathcal F^{\beta, \sigma}}=\sup_{0\leq t\leq T} t^{1-\beta} |f(t)|+ \sup_{0\leq s<t\leq T} \frac{s^{1-\beta+\sigma}|f(t)-f(s)|}{(t-s)^\sigma}.$$
The following useful inequalities follow directly from the definition. For every $ f\in \mathcal F^{\beta, \sigma}((0,T]; E)$ and $0<s<t\leq T$ we have
\begin{equation} \label{FbetasigmaSpaceProperty}
\begin{cases}
|f(t)|\leq |f|_{\mathcal F^{\beta, \sigma}} t^{\beta-1}, \\
|f(t)-f(s)| \leq |f|_{\mathcal F^{\beta, \sigma}} (t-s)^{\sigma} s^{\beta-\sigma-1}.
\end{cases}
\end{equation}
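Indeed, both bounds in \eqref{FbetasigmaSpaceProperty} are read off directly from the two terms of the norm $|\cdot|_{\mathcal F^{\beta,\sigma}}$:
\begin{align*}
|f(t)|&=t^{\beta-1}\cdot t^{1-\beta}|f(t)|\leq |f|_{\mathcal F^{\beta, \sigma}}\, t^{\beta-1},\\
|f(t)-f(s)|&=(t-s)^{\sigma} s^{\beta-\sigma-1}\cdot \frac{s^{1-\beta+\sigma}|f(t)-f(s)|}{(t-s)^{\sigma}}\leq |f|_{\mathcal F^{\beta, \sigma}}\,(t-s)^{\sigma} s^{\beta-\sigma-1}.
\end{align*}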
In addition, it is not hard to show that
\begin{equation} \label{FbetaFgammasigmaSpaceProperty}
\mathcal F^{\gamma,\sigma} ((0,T];E)\subset \mathcal F^{\beta,\sigma} ((0,T];E), \hspace{2cm} 0<\sigma<\beta<\gamma\leq 1.
\end{equation}
The space $\mathcal F^{\beta, \sigma}((0,T]; E)$ is not a trivial space. Indeed, we have
\begin{remark}[Yagi \cite{yagi}]
When $0<\sigma<\beta<1$, $f(t) =t^{\beta-1} g(t) \in \mathcal F^{\beta, \sigma}((0,T]; E),$ where $g(t)$ is any $E$-valued function on $[0,T]$ such that
$g\in \mathcal C^\sigma([0,T];E)$
and $g(0)=0.$
When $0<\sigma<\beta=1$, the space $ \mathcal F^{1, \sigma}((0,T]; E)$ includes the space of H\"{o}lder continuous functions with exponent $\sigma.$
\end{remark}
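As a minimal concrete instance of the first statement, one may take $g(t)=t^\sigma x_0$ for a fixed vector $x_0\in E$; then $g\in\mathcal C^\sigma([0,T];E)$ (because $t^\sigma-s^\sigma\leq (t-s)^\sigma$ for $0\leq s\leq t$) and $g(0)=0$, so the remark yields
$$t^{\beta+\sigma-1}x_0 \in \mathcal F^{\beta, \sigma}((0,T]; E), \hspace{2cm} 0<\sigma<\beta<1.$$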
\subsection{Integral inequality of Volterra type}
Let us introduce a useful inequality of Volterra type that will be used in this paper. The proof of the inequality can be found, for example, in Yagi \cite{yagi}.
\begin{lemma} \label{Integral inequality of Volterra type}
Let $a\geq 0, b>0, \mu_1>0 $ and $\mu_2>0$ be constants.
\begin{itemize}
\item [{\rm (i)}]
Let $\Gamma$ be the gamma function. Then the function defined by the series
$$E_{\mu_1,\mu_2}(t)=\sum_{n=0}^\infty \frac{t^{n\mu_2}}{\Gamma(\mu_1+n\mu_2)}, \hspace{1cm} 0\leq t<\infty$$
satisfies the estimate
$$E_{\mu_1,\mu_2}(t)\leq \frac{2}{\Gamma_0 \mu_2} (1+t)^{2-\mu_1} e^{t+1}, \hspace{1cm} 0\leq t<\infty,$$
where $\Gamma_0=\min_{0<s<\infty} \Gamma(s)$.
\item [{\rm (ii)}]
Let $\varphi(t,s)$ be a nonnegative continuous function defined for $0\leq s<t\leq T.$ If $\varphi(t,s)$ satisfies the integral inequality
$$\varphi(t,s) \leq a (t-s)^{\mu_1-1}+b \int_s^t (t-r)^{\mu_2-1}\varphi(r,s)dr, \quad\quad 0\leq s<t\leq T,$$
then
$$\varphi(t,s)\leq a \Gamma(\mu_1) (t-s)^{\mu_1-1} E_{\mu_1,\mu_2} ([b\Gamma(\mu_2)]^{\frac{1}{\mu_2}}(t-s)),\quad\quad 0\leq s<t\leq T.$$\end{itemize}
\end{lemma}
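As a sanity check for Lemma \ref{Integral inequality of Volterra type}, consider the classical case $\mu_1=\mu_2=1$. Then $E_{1,1}(t)=\sum_{n=0}^\infty \frac{t^{n}}{n!}=e^{t}$, and part {\rm (ii)} reduces to the familiar Gronwall-type implication
$$\varphi(t,s)\leq a+b\int_s^t \varphi(r,s)dr,\quad 0\leq s<t\leq T \qquad\Longrightarrow\qquad \varphi(t,s)\leq a\, e^{b(t-s)},\quad 0\leq s<t\leq T.$$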
\section{Nonlinear evolution equations with multiplicative noise} \label{section3}
\subsection{Existence of global mild solutions under linear growth and Lipschitz conditions}
In this subsection, we shall show existence and uniqueness of global mild solutions of \eqref{E2}
under linear growth and Lipschitz conditions on $F$ and $G$. The proof is similar to that in Da Prato-Zabczyk \cite{prato}.
\begin{theorem}[global existence]\label{linear growth and Lipschitz conditions}
Assume that $F$ and $G$ satisfy two conditions:
\begin{itemize}
\item [{\rm (i)}] The linear growth condition
\begin{equation} \label{The linear growth condition}
|F(t,x)|+|G(t,x)|\leq c_1(1+|x|), \quad\quad x\in E, t\in[0,T].
\end{equation}
\item [{\rm (ii)}] The Lipschitz condition
\begin{equation} \label{The Lipschitz condition}
|F(t,x)-F(t,y)|+|G(t,x)-G(t,y)| \leq c_2 |x-y|, \quad \quad x, y \in E, t\in[0,T],
\end{equation}
\end{itemize}
where $c_i>0 \,(i=1,2) $ are some positive constants.
Suppose further that $\mathbb E|\xi|^p< \infty$ for some $p\geq 2$. Then there exists a unique mild solution $X(t)$ to \eqref{E2} on $[0,T]$. Furthermore, it satisfies the estimate
\begin{equation} \label{EEE}
\sup_{0\leq t\leq T}\mathbb E |X(t)|^{p} \leq \alpha(1+\mathbb E|\xi|^{p}),
\end{equation}
where $\alpha=\alpha(c_1,p,M_T,T)>0$ is some constant depending only on $c_1, p, M_T$ and $T.$
\end{theorem}
\begin{proof} We shall use the Banach fixed point theorem and the Gronwall lemma. The proof is divided into three steps.
{\bf Step 1.} Let us show existence of a mild solution.
Put
\begin{align}
\mathcal Q_1(Y)(t)&=\int_0^t S(t-s) F(s,Y(s)) ds, \label{Q1Definition} \\
\mathcal Q_2(Y)(t)&=\int_0^t S(t-s) G(s,Y(s))dw_s, \label{Q2Definition}\\
\mathcal Q(Y)(t)&=S(t) \xi + \mathcal Q_1(Y)(t)+\mathcal Q_2(Y)(t) \notag
\end{align}
and let $\mathcal E_p(0,\bar T)$ $(\bar T\leq T)$ be the set of all $E$-valued predictable processes $Y(t)$ on $[0,\bar T]$ such that
$\sup_{[0,\bar T]} \mathbb E |Y(t)|^p<\infty. $ Then up to indistinguishability, $\mathcal E_p(0,\bar T)$ is a Banach space with norm
$$||Y||_{p,\bar T}=[\sup_{t\in[0,\bar T]} \mathbb E |Y(t)|^p]^\frac{1}{p}.$$
Let us show that $\mathcal Q(\mathcal E_p(0,\bar T))\subset \mathcal E_p(0,\bar T)$. Indeed, by using the H\"{o}lder inequality, we have
\begin{align}
||\mathcal Q_1(Y)||_{p,\bar T}^p &\leq \sup_{t\in[0,\bar T]} \mathbb E\Big[\int_0^t |S(t-s) F(s,Y(s))| ds\Big]^p\notag\\
&\leq M_{\bar T}^p \mathbb E \Big[\int_0^{\bar T} |F(s,Y(s))| ds\Big]^p\notag\\
&\leq (c_1M_{\bar T})^p \mathbb E \Big[\int_0^{\bar T} [1+|Y(s)| ]ds\Big]^p\notag\\
&\leq (c_1M_{\bar T})^p {\bar T}^{p-1} \mathbb E \int_0^{\bar T} [1+|Y(s)| ]^pds\notag\\
&\leq (c_1M_{\bar T})^p (2{\bar T})^{p-1} \mathbb E \int_0^{\bar T} [1+|Y(s)|^p ]ds \label{Eq30}\\
&\leq (c_1{\bar T}M_{\bar T})^p 2^{p-1} [1+||Y||_{p,\bar T}^p ]<\infty, \quad \quad\quad Y\in \mathcal E_p(0,{\bar T}),\notag
\end{align}
Here we used the inequality $(a+b)^p \leq 2^{p-1} (a^p+b^p)$, $a, b\geq 0.$
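The latter inequality is an immediate consequence of the convexity of the function $x\mapsto x^p$ on $[0,\infty)$ for $p\geq 1$:
$$\Big(\frac{a+b}{2}\Big)^p\leq \frac{a^p+b^p}{2}, \hspace{2cm} a,b\geq 0.$$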
Thus, $\mathcal Q_1(\mathcal E_p(0,{\bar T}))\subset \mathcal E_p(0,{\bar T})$. Furthermore, due to Proposition \ref{IntegralInequality}, we have
\begin{align}
||\mathcal Q_2(Y)||_{p,\bar T}^p&=\sup_{t\in[0,{\bar T}]} \mathbb E \Big|\int_0^t S(t-s) G(s,Y(s))dw_s\Big|^p\notag\\
&\leq \Big(\frac{p}{p-1}\Big)^p c_p(E) \mathbb E \Big[\int_0^t |S(t-s) G(s,Y(s))|^2 ds\Big]^{\frac{p}{2}}\notag\\
&\leq \Big(\frac{pM_{\bar T}}{p-1}\Big)^p c_p(E) \mathbb E \Big[\int_0^{\bar T} |G(s,Y(s))|^2 ds\Big]^{\frac{p}{2}}\notag\\
&\leq \Big(\frac{c_1pM_{\bar T}}{p-1}\Big)^p c_p(E) \mathbb E \Big[\int_0^{\bar T} [1+|Y(s)|]^2 ds\Big]^{\frac{p}{2}}\notag\\
&\leq \Big(\frac{c_1pM_{\bar T}}{p-1}\Big)^p c_p(E) {\bar T}^{\frac{p-2}{2}} \mathbb E \int_0^{\bar T} [1+|Y(s)|]^p ds\notag\\
&\leq \Big(\frac{c_1pM_{\bar T}}{p-1}\Big)^p c_p(E) {\bar T}^{\frac{p-2}{2}} 2^{p-1} \mathbb E \int_0^{\bar T} [1+|Y(s)|^p] ds\notag\\
&= \Big(\frac{c_1pM_{\bar T}}{p-1}\Big)^p c_p(E) {\bar T}^{\frac{p-2}{2}} 2^{p-1} \int_0^{\bar T} [1+\mathbb E|Y(s)|^p] ds\label{Eq31}\\
&\leq \Big(\frac{c_1pM_{\bar T}}{p-1}\Big)^p c_p(E) {\bar T}^{\frac{p}{2}} 2^{p-1} [1+||Y||_{p,\bar T}^p]<\infty, \quad \quad Y\in \mathcal E_p(0,{\bar T}).\notag
\end{align}
Therefore, $\mathcal Q_2(\mathcal E_p(0,{\bar T}))\subset \mathcal E_p(0,{\bar T})$. We thus have shown that $\mathcal Q(\mathcal E_p(0,{\bar T}))\subset \mathcal E_p(0,{\bar T})$.
Let us next verify that $\mathcal Q$ is a contraction mapping of $\mathcal E_p(0,{\bar T})$, provided $\bar T>0$ is sufficiently small. For any $Y_1, Y_2 \in \mathcal E_p(0,{\bar T})$ we have
\begin{align*}
||\mathcal Q_1(Y_1)- \mathcal Q_1(Y_2)||_{p,\bar T}^p&=\sup_{t\in[0,{\bar T}]} \mathbb E \Big|\int_0^t S(t-s) (F(s,Y_1(s))-F(s,Y_2(s))) ds\Big|^p\\
&\leq M_{\bar T}^p \sup_{t\in[0,{\bar T}]} \mathbb E\Big[\int_0^t |F(s,Y_1(s))-F(s,Y_2(s))| ds\Big]^p\\
&\leq (c_2M_{\bar T})^p \mathbb E \Big[\int_0^{\bar T} |Y_1(s)-Y_2(s)| ds\Big]^p\\
&\leq (c_2M_{\bar T})^p {\bar T}^{p-1} \mathbb E \int_0^{\bar T} |Y_1(s)-Y_2(s)|^p ds\\
&\leq (c_2{\bar T}M_{\bar T})^p ||Y_1-Y_2||_{p,\bar T}^p
\end{align*}
and
\begin{align*}
|&|\mathcal Q_2(Y_1)- \mathcal Q_2(Y_2)||_{p,\bar T}^p\\
&=\sup_{t\in[0,{\bar T}]} \mathbb E \Big|\int_0^t S(t-s) [G(s,Y_1(s))-G(s,Y_2(s))] dw_s\Big|^p\\
&\leq \Big(\frac{p}{p-1}\Big)^p c_p(E)\sup_{t\in[0,{\bar T}]} \mathbb E \Big[\int_0^t |S(t-s) [G(s,Y_1(s))-G(s,Y_2(s))]|^2 ds\Big]^{\frac{p}{2}}\\
&\leq \Big(\frac{pM_{\bar T}}{p-1}\Big)^p c_p(E)\sup_{t\in[0,{\bar T}]} \mathbb E \Big[\int_0^t |G(s,Y_1(s))-G(s,Y_2(s))|^2 ds\Big]^{\frac{p}{2}}\\
&\leq \Big(\frac{c_2pM_{\bar T}}{p-1}\Big)^p c_p(E) \mathbb E \Big[\int_0^{\bar T} |Y_1(s)-Y_2(s)|^2 ds\Big]^{\frac{p}{2}}\\
&\leq \Big(\frac{c_2pM_{\bar T}}{p-1}\Big)^p c_p(E) {\bar T}^{\frac{p-2}{2}}\mathbb E \int_0^{\bar T} |Y_1(s)-Y_2(s)|^p ds\\
&\leq \Big(\frac{c_2pM_{\bar T}}{p-1}\Big)^p c_p(E) {\bar T}^{\frac{p}{2}}||Y_1-Y_2||_{p,\bar T}^p.
\end{align*}
Hence,
\begin{align*}
||\mathcal Q(Y_1)- \mathcal Q(Y_2)||_{p,\bar T}&\leq ||\mathcal Q_1(Y_1)- \mathcal Q_1(Y_2)||_{p,\bar T}+||\mathcal Q_2(Y_1)- \mathcal Q_2(Y_2)||_{p,\bar T}\\
&\leq c_2M_{\bar T} \sqrt{{\bar T}} \Big[ \sqrt{{\bar T}}+ \frac{pc_p^{\frac{1}{p}}(E)}{p-1} \Big] ||Y_1-Y_2||_{p,\bar T}.
\end{align*}
Therefore, if
\begin{equation} \label{BarTsmallCondition}
c_2M_{\bar T} \sqrt{{\bar T}} \Big[ \sqrt{{\bar T}}+ \frac{pc_p^{\frac{1}{p}}(E)}{p-1} \Big]<1,
\end{equation}
then $\mathcal Q$ is a contraction in $\mathcal E_p(0,{\bar T})$.
Since $\mathcal Q$ maps $\mathcal E_p(0,{\bar T})$ into itself and is a contraction with respect to the norm of $\mathcal E_p(0,{\bar T})$, $\mathcal Q$ has a unique fixed point $X\in \mathcal E_p(0,{\bar T}).$ This shows that $X(t)$ is a mild solution to \eqref{E2} on $[0,{\bar T}]$. In view of \eqref{BarTsmallCondition}, this solution can be extended to $[0,T]$ by considering the equation successively on the intervals $[0,\bar T], [\bar T, 2\bar T],\dots.$ Furthermore, the continuity of $X$ on $[0,T]$ follows from the continuity of the stochastic integral (see Proposition \ref{IntegralInequality}).
{\bf Step 2.} Let us verify uniqueness of the mild solution.
Let $X_1$ and $X_2$ be mild solutions of \eqref{E2}. Then for every $t\in[0,T]$ we have
\begin{align*}
&\mathbb E|X_1(t)-X_2(t)|^2\\
=&\mathbb E \Big|\int_0^t S(t-s)[F(s,X_1(s))-F(s,X_2(s))]ds \\
&+ \int_0^t S(t-s)[G(s,X_1(s))-G(s,X_2(s))]dw_s\Big|^2\\
\leq& 2 \mathbb E \Big|\int_0^t S(t-s)[F(s,X_1(s))-F(s,X_2(s))]ds\Big|^2\\
&+2 \mathbb E \Big|\int_0^t S(t-s)[G(s,X_1(s))-G(s,X_2(s))]dw_s\Big|^2\\
\leq& 2 M_T^2\mathbb E \Big[\int_0^t |F(s,X_1(s))-F(s,X_2(s))|ds\Big]^2\\
&+2c(E) \mathbb E \int_0^t |S(t-s)[G(s,X_1(s))-G(s,X_2(s))]|^2ds\\
\leq& 2 c_2^2M_T^2\mathbb E \Big[\int_0^t |X_1(s)-X_2(s)|ds\Big]^2\\
&+2c(E) M_T^2\mathbb E \int_0^t |G(s,X_1(s))-G(s,X_2(s))|^2ds\\
\leq &2c_2^2[t+c(E)] M_T^2\mathbb E \int_0^t |X_1(s)-X_2(s)|^2ds\\
\leq &2c_2^2[T+c(E)]M_T^2 \int_0^t \mathbb E|X_1(s)-X_2(s)|^2ds.
\end{align*}
The Gronwall lemma then yields that
$\mathbb E|X_1(t)-X_2(t)|^2=0 $ for every $ t\in [0,T].$ Since $X_1$ and $X_2$ are continuous, they are indistinguishable.
{\bf Step 3.} Let us finally verify the estimate \eqref{EEE}.
In view of \eqref{Eq30} and \eqref{Eq31}, we have
\begin{align*}
&\sup_{s\in[0,t]} \mathbb E |X(s)|^p\\
=&||X||_{p,t}^p=||\mathcal Q(X)||_{p,t}^p\\
\leq& [||S(\cdot) \xi ||_{p,t}+ ||\mathcal Q_1(X) ||_{p,t}
+||\mathcal Q_2(X) ||_{p,t}]^p\\
\leq& 3^p[||S(\cdot) \xi ||_{p,t}^p+ ||\mathcal Q_1(X) ||_{p,t}^p
+||\mathcal Q_2(X) ||_{p,t}^p]\\
\leq& 3^p\Big[M_T^p\mathbb E|\xi |^p+ (c_1M_t)^p (2t)^{p-1} \mathbb E \int_0^{t} [1+|X(s)|^p ]ds\\
&+ \Big(\frac{c_1pM_t}{p-1}\Big)^p c_p(E) t^{\frac{p-2}{2}} 2^{p-1} \int_0^t [1+\mathbb E|X(s)|^p] ds\Big]\\
\leq&3^pM_T^p\mathbb E|\xi |^p+ \Big [(3c_1M_t)^p (2t)^{p-1}+\Big(\frac{3c_1pM_t}{p-1}\Big)^p c_p(E) t^{\frac{p-2}{2}} 2^{p-1} \Big] \\
& \times \int_0^{t} [1+\sup_{r\in[0,s]}\mathbb E|X(r)|^p ]ds\\
\leq&3^pM_T^p\mathbb E|\xi |^p+ \Big [(3c_1M_T)^p (2T)^{p-1}+\Big(\frac{3c_1pM_T}{p-1}\Big)^p c_p(E) T^{\frac{p-2}{2}} 2^{p-1} \Big] \\
& \times \Big[T+\int_0^{t} \sup_{r\in[0,s]}\mathbb E|X(r)|^p ds\Big], \hspace{2cm} t\in [0,T].
\end{align*}
Then the Gronwall lemma again provides \eqref{EEE}.
\end{proof}
\subsection{Existence and regular dependence on initial data of local mild solutions under local linear growth and local Lipschitz conditions}
Let us first explore existence and uniqueness of local mild solutions of \eqref{E2} under local linear growth and local Lipschitz conditions on $F$ and $G$. Following the ideas in \cite{Fried}, we shall establish two lemmas.
\begin{lemma} \label{lem1}
Let $(\alpha_1,\alpha_2)\subset \mathbb R$, $\Omega_0\subset \Omega$ and $\Phi_i\in \mathcal N^2(\alpha_1,\alpha_2)$, $i=1,2$.
If
$$ {\bf1}_{\Omega_0}\Phi_1(t)={\bf1}_{\Omega_0}\Phi_2(t) \hspace{1cm} \text{ for all } t\in (\alpha_1,\alpha_2),$$
then
$${\bf1}_{\Omega_0}\int_{\alpha_1}^{\alpha_2}\Phi_1(t)dw_t={\bf1}_{\Omega_0}\int_{\alpha_1}^{\alpha_2}\Phi_2(t)dw_t \quad\quad\quad\text{a.s.}$$
\end{lemma}
The proof of Lemma \ref{lem1} for the case $E=\mathbb R$ can be found in \cite[Lemma 2.11]{Fried}. In fact, the arguments carry over to any $M$-type 2 separable Banach space, so we omit the proof.
\begin{lemma} \label{lem2}
Consider two equations of the form \eqref{E2}:
\begin{equation} \label{E3}
\begin{cases}
dX_i+AX_idt=F_i(t,X_i)dt+ G_i(t,X_i)dw_t,\\
X_i(0)=\xi_i, \hspace{2cm} i=1,2.
\end{cases}
\end{equation}
Assume that there exists a constant $c>0$ such that
$$|F_i(t,x)-F_i(t,y)|+|G_i(t,x)-G_i(t,y)|\leq c|x-y|$$
and
$$|F_i(t,x)|^2+|G_i(t,x)|^2 \leq c^2 (1+|x|^2) $$
for $i=1,2, x,y\in E$ and $ t\in [0,T].$
Suppose further that
$F_1(t,x)=F_2(t,x),$ $ G_1(t,x)=G_2(t,x)$ for $|x|\leq n, 0\leq t\leq T$ with some $n>0$,
and that $ \xi_1=\xi_2$ for a.e. $\omega$ for which either $|\xi_1(\omega)|<n$ or $|\xi_2(\omega)|<n$. Denote $\tau_i=\inf\{t: |X_i(t)|>n\}$ with the convention $\inf \emptyset=T.$ Then
$$\mathbb P(\tau_1=\tau_2)=1,$$
$$\mathbb P\{\sup_{0< t\leq \tau_1}|X_1(t)-X_2(t)|=0\}=1.$$
\end{lemma}
\begin{proof}
On account of Theorem \ref{linear growth and Lipschitz conditions}, there exists a mild solution $X_i(t)$ of \eqref{E3} on $[0,T]$. Consider a function $\phi: [0,T] \to \mathbb R$ defined by
\begin{equation*}
\phi(t)=
\begin{cases}
1 \text{ if } |X_1(s)| \leq n \text{ for all } 0\leq s\leq t,\\
0 \text{ in all other cases}.
\end{cases}
\end{equation*}
Then for $t\in [0,T]$ we have
\begin{equation*}
\begin{aligned}
\begin{cases}
\phi(t) (\xi_1-\xi_2)=0 &\quad\quad\text{ a.s., }\\
\phi(t)S(t)(\xi_1-\xi_2)=0 &\quad\quad\text{ a.s., }\\
\phi(t)={\bf 1}_{\{\phi(t)=1\}}\phi(t), &\\
\phi(t)=\phi(t)^2. &
\end{cases}
\end{aligned}
\end{equation*}
Therefore,
\begin{equation*}
\begin{aligned}
\phi(t)[X_1(t)-X_2(t)]=&\phi(t)\int_0^t S(t-s) [F_1(s, X_1(s))-F_2(s,X_1(s))] ds \\
&+\phi(t)\int_0^t S(t-s) [F_2(s, X_1(s))-F_2(s,X_2(s))] ds \\
&+\phi(t)\int_0^t S(t-s) [G_1(s, X_1(s))-G_2(s,X_1(s))] dw_s\\
&+\phi(t)\int_0^t S(t-s) [G_2(s, X_1(s))-G_2(s,X_2(s))]dw_s\\
=&J_1+J_2+J_3+J_4.
\end{aligned}
\end{equation*}
When $\phi(t)=1$, $F_1(s, X_1(s))=F_2(s,X_1(s))$ for every $s\in[0,t]$. Hence, $J_1=0$. In addition, we have $G_1(s, X_1(s))=G_2(s,X_1(s))$ on $\{\phi(t)=1\}$ for all $s\in[0,t]$. Lemma \ref{lem1} then provides that
\begin{align*}
J_3&={\bf1}_{\{\phi(t)=1\}}\phi(t)\int_0^t S(t-s) [G_1(s, X_1(s))-G_2(s,X_1(s))] dw_s\\
&={\bf1}_{\{\phi(t)=1\}}\int_0^t S(t-s) [G_1(s, X_1(s))-G_2(s,X_1(s))] dw_s\\
&={\bf1}_{\{\phi(t)=1\}}\int_0^t 0 dw_s=0.
\end{align*}
Hence,
\begin{align}
&\mathbb E\phi(t)|X_1(t)-X_2(t)|^2=\mathbb E|\phi(t)[X_1(t)-X_2(t)]|^2=\mathbb E|J_2+J_4|^2 \notag\\
&\leq 2\mathbb E|J_2|^2+2\mathbb E|J_4|^2. \label{Eq32}
\end{align}
Let us estimate $\mathbb E|J_2|^2$ and $\mathbb E|J_4|^2.$
Since $\phi(t)$ decreases in $t$, we have
\begin{equation*}
\begin{aligned}
2|J_2|^2&\leq 2\Big|\int_0^t \phi(s)S(t-s) [F_2(s, X_1(s))-F_2(s,X_2(s))] ds\Big|^2\\
&\leq 2t\int_0^t \phi(s)|S(t-s)[F_2(s, X_1(s))-F_2(s,X_2(s))]|^2 ds\\
&\leq 2tM_t^2\int_0^t \phi(s)| F_2(s, X_1(s))-F_2(s,X_2(s))|^2 ds\\
&\leq 2tc^2M_t^2\int_0^t \phi(s)| X_1(s)-X_2(s)|^2 ds, \hspace{2cm} t\in [0,T].
\end{aligned}
\end{equation*}
Thus,
\begin{equation} \label{Eq33}
\mathbb E|J_2|^2\leq 2Tc^2 M_T^2 \int_0^t \mathbb E\phi(s)| X_1(s)-X_2(s)|^2 ds, \hspace{2cm} t\in [0,T].
\end{equation}
On the other hand, we have $\phi(s)=1$ on $\{\phi(t)=1\} $ for every $s\in [0,t]$. Lemma \ref{lem1} again provides that
\begin{equation*}
\begin{aligned}
2|J_4|^2&=2\Big|\phi(t){\bf1}_{\{\phi(t)=1\}}\int_0^t S(t-s) [G_2(s, X_1(s))-G_2(s,X_2(s))] dw_s\Big|^2\\
&=2\Big|{\bf1}_{\{\phi(t)=1\}}\int_0^t S(t-s) [G_2(s, X_1(s))-G_2(s,X_2(s))] dw_s\Big|^2\\
&=2\Big|{\bf1}_{\{\phi(t)=1\}}\int_0^t \phi(s)S(t-s) [G_2(s, X_1(s))-G_2(s,X_2(s))] dw_s\Big|^2\\
&\leq 2\Big|\int_0^t \phi(s)S(t-s) [G_2(s, X_1(s))-G_2(s,X_2(s))] dw_s\Big|^2.
\end{aligned}
\end{equation*}
Using Proposition \ref{IntegralInequality}, we then obtain that
\begin{align}
2\mathbb E|J_4|^2\leq &2c(E)\mathbb E \int_0^t |\phi(s)S(t-s) [G_2(s, X_1(s))-G_2(s,X_2(s))]|^2 ds\notag\\
\leq &2c(E)M_t^2\mathbb E \int_0^t \phi(s)|G_2(s, X_1(s))-G_2(s,X_2(s))|^2 ds\notag\\
\leq &2c^2 c(E) M_T^2 \int_0^t \mathbb E\phi(s)| X_1(s)-X_2(s)|^2 ds,
\hspace{2cm} t\in [0,T]. \label{Eq34}
\end{align}
Substituting \eqref{Eq33} and \eqref{Eq34} into \eqref{Eq32}, we observe that
\begin{align*}
\mathbb E\phi(t)&|X_1(t)-X_2(t)|^2 \\
\leq & 2c^2[T+c(E)]M_T^2\int_0^t \mathbb E\phi(s)| X_1(s)-X_2(s)|^2 ds, \hspace{1cm} t\in [0,T].
\end{align*}
Thanks to the Gronwall lemma, we verify that $\mathbb E\phi(t)|X_1(t)-X_2(t)|^2=0$ for every $t\in[0,T].$ As a consequence,
$$\phi(t)|X_1(t)-X_2(t)|^2=0 \quad\quad\quad \text{ a.s.} $$
From the definition of $\phi$, it is then clear that $X_1(t)=X_2(t)$ a.s. for $t\in (0,\tau_1]$ and $\mathbb P(\tau_2\geq \tau_1)=1.$ Similarly, $X_1(t)=X_2(t)$ a.s. for $t\in (0,\tau_2]$ and $\mathbb P(\tau_1\geq \tau_2)=1.$ We thus have shown that
$$\mathbb P\{X_1(t)=X_2(t)\} =\mathbb P(\tau_1= \tau_2)=1, \hspace{2cm} t\in (0,\tau_1]. $$
In addition, by the continuity of $X_1(t)$ and $X_2(t)$, we conclude that $$\sup_{0< t\leq \tau_1} |X_1(t)-X_2(t)|^2=0 \hspace{2cm} \text{ a.s. }$$
This completes the proof.
\end{proof}
\begin{theorem}[local existence] \label{local existence theorem}
Suppose that for any $n>0$ there exist $c_n>0$ and $\bar c_n>0$ such that whenever $ |x|\leq n, |y|\leq n$ and $ t\in [0,T],$ the following two conditions hold:
\begin{itemize}
\item [{\rm (i)}] The local growth condition
$$
|F(t,x)|+|G(t,x)| \leq \bar c_n (1+|x|).
$$
\item [{\rm (ii)}] The local Lipschitz condition
\begin{equation} \label{The local Lipschitz condition}
|F(t,x)-F(t,y)|+|G(t,x)-G(t,y)| \leq c_n |x-y|.
\end{equation}
\end{itemize}
Then there exists a unique maximal local mild solution $\{X(t),t\in[0,\tau)\}$ to \eqref{E2}. Furthermore, there exists a constant $\alpha=\alpha(\bar c_n, M_T, T)>0$ depending only on $\bar c_n, M_T$ and $T$ such that
\begin{equation} \label{XtMinTaunEstimate}
\mathbb E |X(t\wedge \tau_n)|^2\leq \alpha(1+\mathbb E|\xi|^{2}), \hspace{1cm} t\geq 0, n=0,1,\dots,
\end{equation}
where $\{\tau_n\}_{n=0}^\infty$ is a sequence of stopping times defined by
$$
\tau_n=\inf\{t\in[0,T]: |X(t)|>n\}
$$
with the convention $\inf\emptyset=T$ and $\tau_0=0$ a.s.
\end{theorem}
\begin{proof}
Let us first show existence of a maximal local mild solution by using the truncation method. For $n=1,2,\dots,$ we denote
\begin{equation*}
\begin{aligned}
F_n(t,x)=&
\begin{cases}
\begin{aligned}
&F(t,x) & \hspace{1cm} \text{if }& |x|\leq n,\\
&F(t,x)(2-\frac{|x|}{n}) & \text{if } &n<|x|\leq 2n,\\
&0 & \text{if }& |x|> 2n,
\end{aligned}
\end{cases}\\
G_n(t,x)=&
\begin{cases}
\begin{aligned}
&G(t,x) \hspace*{2cm}& \text{ if }& |x|\leq n,\\
&G(t,x)(2-\frac{|x|}{n}) & \text{ if }& n<|x|\leq 2n,\\
&0 & \text{ if }& |x|> 2n,
\end{aligned}
\end{cases}\\
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\xi_{n_0}=&
\begin{cases}
\xi \hspace*{1cm}\text{ if } |\xi|\leq n,\\
0 \hspace*{1cm}\text{ if } |\xi|> n.
\end{cases}
\end{aligned}
\end{equation*}
It is easy to see that the measurable functions $F_n$ and $G_n$ satisfy the linear growth and global Lipschitz conditions. On account of Theorem \ref{linear growth and Lipschitz conditions}, there exists a unique mild solution $X_n(t)$ on $[0,T]$ of the system
\begin{equation} \label{H2}
\begin{cases}
dX_n+AX_ndt=F_n(t,X_n)dt+G_n(t,X_n)dw_t,\\
X_n(0)=\xi_{n_0}.
\end{cases}
\end{equation}
Define the stopping times
\begin{equation} \label{stoppingTimes}
\tau_n=\inf\{t\in[0,T]: |X_n(t)|>n\}, \hspace{2cm} n=1,2,\dots.
\end{equation}
By Lemma \ref{lem2}, we observe that
$$X_n(t)=X_m(t) \hspace{1cm} \text{ a.s. if } 0\leq t \leq \tau_n \text{ and }m>n.$$
Hence, the sequence $\{\tau_n\}_n$ increases and has a limit $\tau=\lim_{n\to\infty} \tau_n\leq T$ a.s. We then define $\{X(t), 0\leq t<\tau\}$ by
\begin{equation} \label{DefintionOfX(t)}
X(t)=X_n(t), \hspace{2cm} t\in[\tau_{n-1},\tau_n], n\geq 1.
\end{equation}
It is clear that $X(t)$ is continuous and $X\in \mathcal N^2(0,t\wedge \tau_n)$ for every $t\geq 0, n\geq 1$. In addition, on $\{\tau<T\}$ we have
$$\liminf_{t\to\tau}|X(t)|\geq \liminf_{n\to\infty}|X(\tau_n)|=\liminf_{n\to\infty} |X_n(\tau_n)|=\infty.$$
On the other hand, using Lemma \ref{lem1}, we obtain that
\begin{align}
X(t\wedge \tau_n)=&X_n(t\wedge \tau_n)\notag\\
=&S(t\wedge \tau_n)\xi_{n_0}+\int_0^{t\wedge \tau_n} S(t\wedge \tau_n-s) F_n(s,X_n(s))ds\notag\\
&+\int_0^{t\wedge \tau_n} S(t\wedge \tau_n-s) G_n(s,X_n(s))dw_s\notag\\
=&S(t\wedge \tau_n)\xi_{n_0}+\int_0^{t\wedge \tau_n} S(t\wedge \tau_n-s) F(s,X(s))ds\label{XtMinTaunEquation}
\\
&+\int_0^{t\wedge \tau_n} S(t\wedge \tau_n-s) G(s,X(s))dw_s. \notag\end{align}
Therefore, $\{X(t), t\in [0,\tau)\}$ is a maximal local mild solution of \eqref{E2}.
Let us now verify the estimate \eqref{XtMinTaunEstimate}.
Applying the same argument as in the proof of the estimate \eqref{EEE} in Theorem \ref{linear growth and Lipschitz conditions} to the stochastic integral equation \eqref{XtMinTaunEquation}, we conclude that
$$\mathbb E |X(t\wedge \tau_n)|^2\leq \alpha(1+\mathbb E|\xi_{n_0}|^{2})\leq \alpha(1+\mathbb E|\xi|^{2}),\hspace{1cm} t\geq 0, n\geq 1,$$
where $\alpha=\alpha(\bar c_n, M_T, T)>0$ is some constant depending only on $\bar c_n, M_T$ and $T.$ Clearly, by \eqref{stoppingTimes} and \eqref{DefintionOfX(t)},
$$
\tau_n=\inf\{t\in[0,T]: |X(t)|>n\}.
$$
The estimate \eqref{XtMinTaunEstimate} thus has been verified.
Let us finally verify uniqueness of the solution. Let $\{\bar X(t), t\in [0,\bar \tau)\}$ be another maximal local mild solution of \eqref{E2}. Denote
$$\bar \tau_n= \inf\{t\in [0, \bar\tau): |\bar X(t)|> n\} \quad \text{ and } \quad
\tau_n^*=\tau_n\wedge \bar \tau_n.$$
Then the sequence $\{\tau_n^*\}_n $ increases and converges to $\tau\wedge \bar \tau$ a.s. as $n\to \infty$.
For $t\geq 0$ and $n=1,2,\dots,$ we have
\begin{align}
\mathbb E&|X(t\wedge \tau_n^*)-\bar X(t\wedge \tau_n^*)|^2\notag\\
\leq&2\mathbb E \Big|\int_0^{t\wedge \tau_n^*} S(t\wedge \tau_n^*-s)\{F(s,X(s))-F(s,\bar X(s))\}ds\Big|^2\notag\\
&+2\mathbb E \Big|\int_0^{t\wedge \tau_n^*} S(t\wedge \tau_n^*-s)\{G(s,X(s))-G(s,\bar X(s))\}dw_s\Big|^2\notag\\
\leq&2TM_T^2\mathbb E \int_0^{t\wedge \tau_n^*} |F(s,X(s))-F(s,\bar X(s))|^2ds\notag\\
&+2c(E)\mathbb E \int_0^{t\wedge \tau_n^*} |S(t\wedge \tau_n^*-s)\{G(s,X(s))-G(s,\bar X(s))\}|^2 ds\notag\\
\leq &2Tc_n^2M_T^2 \mathbb E\int_0^{t\wedge \tau_n^*} |X(s)-\bar X(s)|^2ds\notag\\
&+2c(E)M_T^2\mathbb E \int_0^{t\wedge \tau_n^*} |G(s,X(s))-G(s,\bar X(s))|^2 ds\notag\\
\leq &2(T+c(E)) c_n^2M_T^2\mathbb E \int_0^{t\wedge \tau_n^*} |X(s)-\bar X(s)|^2ds\notag\\
= &2(T+c(E)) c_n^2M_T^2\mathbb E \int_0^t {\bf1}_{[0, \tau_n^*]}(s) |X(s\wedge \tau_n^*)-\bar X(s\wedge \tau_n^*)|^2ds\notag\\
\leq &2(T+c(E)) c_n^2M_T^2\int_0^t \mathbb E |X(s\wedge \tau_n^*)-\bar X(s\wedge \tau_n^*)|^2ds. \label{H4}
\end{align}
The Gronwall lemma then gives that $\mathbb E|X(t\wedge \tau_n^*)-\bar X(t\wedge \tau_n^*)|^2=0.$ Hence, $X(t\wedge \tau_n^*)=\bar X(t\wedge \tau_n^*)$ a.s. for every $n\geq1$ and $t\geq 0.$ Letting $n\to \infty$, we conclude that $X(t)=\bar X(t)$ a.s. for every $t\in [ 0, \tau\wedge \bar \tau).$
On the other hand, if $\mathbb P\{\tau<\bar \tau\}>0$ then for almost every $\omega\in \{\tau<\bar \tau\}$, $\bar X(t,\omega)$ is continuous at $\tau(\omega)$. We then arrive at a contradiction
$$\infty>|\bar X(\tau(\omega),\omega)|=\lim_{n\to\infty} |\bar X(\tau_n(\omega),\omega)|=\lim_{n\to\infty} | X(\tau_n(\omega),\omega)|=\infty.$$
Therefore, $\mathbb P \{\tau<\bar \tau\}=0$. Similarly, we have $\mathbb P \{\tau>\bar \tau\}=0.$ We thus have shown that $\mathbb P \{\tau=\bar \tau\}=1.$ The theorem is proved.
\end{proof}
Let us now study dependence of the maximal local mild solution on the initial data. It turns out that the maximal local mild solution $\{X(t),t\in[0,\tau)\}$ of \eqref{E2}, which is shown in Theorem \ref{local existence theorem}, depends continuously on the initial data in the sense specified in the following theorem.
\begin{theorem} \label{dependence of local mild solution on the initial data}
Let the assumptions on the functions $F$ and $G$ of Theorem \ref{local existence theorem} be satisfied. Let $\{X(t),t\in[0,\tau)\}$ and $\{\bar X(t),t\in[0,\bar\tau)\}$ be the maximal local mild solutions of \eqref{E2} with the initial values $\xi$ and $\bar \xi$, respectively. Then there exist two positive constants $C_1=C_1( c_n, M_T, T)$ and $C_2=C_2( c_n, \bar c_n, M_T, T)$ satisfying the estimates
\begin{align}
&\mathbb E|X(t\wedge \tau_n)-\bar X(t\wedge \tau_n)|^2 \leq C_1 \mathbb E|\xi-\bar\xi|^2 \label{Eq49}
\end{align}
and
\begin{align}
&\mathbb E|X(t\wedge \tau_n)- X(s\wedge \tau_n)|^2\label{Eq50}\\
& \leq C_2 [\mathbb E|[S(t\wedge \tau_n-s\wedge \tau_n)-I]X(s\wedge \tau_n)|^2 +(1+ \mathbb E|\xi|^{2}) (t-s)]\notag
\end{align}
for every $0\leq s\leq t\leq T$,
where $\{\tau_n\}_{n=0}^\infty$ is a sequence of stopping times defined by
$$
\tau_n=\inf\{t\in[0,T]: |X(t)|>n \text{ or } |\bar X(t)|>n\}.
$$
\end{theorem}
\begin{proof}
Let us first prove \eqref{Eq49}. The case $\mathbb E|\xi-\bar\xi|^2=\infty$ is obvious. Hence, we may assume that $\mathbb E|\xi-\bar\xi|^2<\infty.$ In view of the proof of Theorem \ref{local existence theorem} (see \eqref{XtMinTaunEquation}), we have
\begin{align}
X(t\wedge \tau_n)=&S(t\wedge \tau_n)\xi+\int_0^{t\wedge \tau_n} S(t\wedge \tau_n-r) F(r,X(r))dr \label{Eq52}\\
&+\int_0^{t\wedge \tau_n} S(t\wedge \tau_n-r) G(r,X(r))dw_r, \notag\end{align}
and
\begin{align*}
\bar X(t\wedge \tau_n)=&S(t\wedge \tau_n)\bar\xi+\int_0^{t\wedge \tau_n} S(t\wedge \tau_n-r) F(r,\bar X(r))dr\\
&+\int_0^{t\wedge \tau_n} S(t\wedge \tau_n-r) G(r,\bar X(r))dw_r. \notag\end{align*}
Similarly to \eqref{H4}, we obtain that
\begin{align*}
\mathbb E&|X(t\wedge \tau_n)-\bar X(t\wedge \tau_n)|^2\notag\\
\leq &3\mathbb E|S(t\wedge \tau_n)(\xi-\bar\xi)|^2\notag\\
&+3(T+c(E)) c_n^2M_T^2\int_0^t \mathbb E |X(r\wedge \tau_n)-\bar X(r\wedge \tau_n)|^2dr \notag\\
\leq &3M_T^2\mathbb E|\xi-\bar\xi|^2\notag\\
&+3(T+c(E)) c_n^2M_T^2\int_0^t \mathbb E |X(r\wedge \tau_n)-\bar X(r\wedge \tau_n)|^2dr,
\hspace{1cm} t\in[0,T].
\end{align*}
The Gronwall lemma then provides \eqref{Eq49}.
Let us now verify \eqref{Eq50}. By the semigroup property, from \eqref{Eq52} we observe that
\begin{align*}
X&(t\wedge \tau_n)-X(s\wedge \tau_n)\notag\\
=&S(t\wedge \tau_n-s\wedge \tau_n)\Big[S(s\wedge \tau_n)\xi+\int_0^{s\wedge \tau_n} S(s\wedge \tau_n-r) F(r,X(r))dr\\
&+\int_0^{s\wedge \tau_n} S(s\wedge \tau_n-r) G(r,X(r))dw_r\Big]+\int_{s\wedge \tau_n}^{t\wedge \tau_n} S(t\wedge \tau_n-r) F(r,X(r))dr\\
&+\int_{s\wedge \tau_n}^{t\wedge \tau_n} S(t\wedge \tau_n-r) G(r,X(r))dw_r-X(s\wedge \tau_n)\notag\\
=&[S(t\wedge \tau_n-s\wedge \tau_n)-I]X(s\wedge \tau_n)+\int_{s\wedge \tau_n}^{t\wedge \tau_n} S(t\wedge \tau_n-r) F(r,X(r))dr\\
&+\int_{s\wedge \tau_n}^{t\wedge \tau_n} S(t\wedge \tau_n-r) G(r,X(r))dw_r.
\end{align*}
Using the local growth condition on $F$ and $G$, we then observe that
\begin{align*}
\mathbb E&|X(t\wedge \tau_n)-X(s\wedge \tau_n)|^2\notag\\
\leq &3\mathbb E|[S(t\wedge \tau_n-s\wedge \tau_n)-I]X(s\wedge \tau_n)|^2\notag\\
&+3\mathbb E\Big|\int_{s\wedge \tau_n}^{t\wedge \tau_n} S(t\wedge \tau_n-r) F(r,X(r))dr\Big|^2\notag\\
&+3\mathbb E\Big|\int_{s\wedge \tau_n}^{t\wedge \tau_n} S(t\wedge \tau_n-r) G(r,X(r))dw_r\Big|^2\notag\\
\leq &3\mathbb E|[S(t\wedge \tau_n-s\wedge \tau_n)-I]X(s\wedge \tau_n)|^2\notag\\
&+3\mathbb E(t\wedge \tau_n-s\wedge \tau_n)\int_{s\wedge \tau_n}^{t\wedge \tau_n} |S(t\wedge \tau_n-r)|^2 |F(r,X(r))|^2dr\notag\\
&+3c(E)\mathbb E\int_{s\wedge \tau_n}^{t\wedge \tau_n} |S(t\wedge \tau_n-r)|^2|G(r,X(r))|^2dr\notag\\
\leq &3\mathbb E|[S(t\wedge \tau_n-s\wedge \tau_n)-I]X(s\wedge \tau_n)|^2\notag\\
&+6\bar c_n^2M_T^2T\mathbb E\int_{s\wedge \tau_n}^{t\wedge \tau_n} [1+|X(r)|^2]dr\notag\\
&+6\bar c_n^2M_T^2c(E)\mathbb E\int_{s\wedge \tau_n}^{t\wedge \tau_n} [1+|X(r)|^2]dr\notag\\
\leq &3\mathbb E|[S(t\wedge \tau_n-s\wedge \tau_n)-I]X(s\wedge \tau_n)|^2\notag\\
&+6\bar c_n^2M_T^2[T+c(E)]\int_s^t [1+\mathbb E|X(r\wedge \tau_n)|^2]dr.\notag
\end{align*}
By virtue of \eqref{XtMinTaunEstimate}, we obtain that
\begin{align*}
\mathbb E&|X(t\wedge \tau_n)-X(s\wedge \tau_n)|^2\notag\\
\leq &3\mathbb E|[S(t\wedge \tau_n-s\wedge \tau_n)-I]X(s\wedge \tau_n)|^2\notag\\
&+6 \bar c_n^2M_T^2[T+c(E)] (1+\alpha+\alpha \mathbb E|\xi|^{2}) (t-s),\notag
\end{align*}
where $\alpha=\alpha(\bar c_n, M_T, T)$ is some positive constant. Thus, \eqref{Eq50} has been verified.
\end{proof}
\subsection{Existence and regular dependence on initial data of global mild solutions under linear growth and local Lipschitz conditions}
Let us first show existence and uniqueness of mild solutions of \eqref{E2} under linear growth and local Lipschitz conditions on $F$ and $G$.
\begin{theorem}[global existence]\label{thm1}
Suppose that $F$ and $G$ satisfy the linear growth condition \eqref{The linear growth condition} and the local Lipschitz condition \eqref{The local Lipschitz condition} and that
$\mathbb E|\xi|^2< \infty$. Then
\begin{itemize}
\item [{\rm (i)}] there exists a unique mild solution $X(t)$ of \eqref{E2} on $[0,T]$ such that
\begin{equation} \label{EX(t)2Estimate}
\mathbb E |X(t)|^2 \leq \alpha_1(1+\mathbb E|\xi|^2), \hspace{1.5cm} t\in [0,T],
\end{equation}
where $\alpha_1$ is some constant depending only on $c_1, M_T$ and $ T.$
\item [{\rm (ii)}] For any $p> 2$ there exists a constant $\alpha_2>0$ depending only on $c_1, p, M_T$ and $ T$ such that
\begin{equation} \label{EEEglobal existence}
\mathbb E\sup_{0\leq t\leq T} |X(t)|^{p} \leq \alpha_2(1+\mathbb E|\xi|^{p}).
\end{equation}
\end{itemize}
\end{theorem}
\begin{proof}
\text{}
\begin{itemize}
\item Proof of (i).
\end{itemize}
It is already known by Theorem \ref{local existence theorem} that there exists a unique maximal local mild solution $\{X(t),t\in [0,\tau)\}$ to \eqref{E2} such that
\begin{align*}
\begin{cases}
\mathbb E |X(t\wedge \tau_k)|^2\leq \alpha_1(1+\mathbb E|\xi|^{2}), &\hspace{1cm}t\geq 0, k\geq 1, \\
|X(\tau_{k})|=k \ \text{ on } \{\tau_k<T\}, &\hspace{1cm} k\geq 1,
\end{cases}
\end{align*}
where $\alpha_1>0$ is some constant depending only on $c_1, M_T$ and $ T.$
Let us first verify that $\tau=T$ a.s. Indeed, if this statement is false, then there would exist $t_0\in(0,T)$ and $\epsilon \in (0,1)$ such that $\mathbb P\{\tau<t_0\}>\epsilon.$ Hence, by denoting $\Omega_k=\{\tau_k\leq t_0\}$, there exists $k_0\in \mathbb N\setminus \{0\}$ such that
$\mathbb P(\Omega_k)\geq \epsilon $ for all $ k\geq k_0.$ Since the sequence $\{\Omega_k\}_k$ is decreasing, we observe that $\mathbb P \big(\bigcap_{j\geq k_0} \Omega_j\big)\geq\epsilon.$ From this, for every $k\geq k_0$ we have
\begin{align*}
\alpha_1(1+\mathbb E|\xi|^{2}) &\geq \mathbb E |X(t_0\wedge \tau_k)|^2\\
&\geq \mathbb E |{\bf 1}_{\bigcap_{j\geq k_0} \Omega_j}X(t_0\wedge \tau_k)|^2\\
&\geq \mathbb E |{\bf 1}_{\bigcap_{j\geq k_0} \Omega_j}X(\tau_k)|^2\\
&=k^2 \mathbb P \big(\bigcap_{j\geq k_0} \Omega_j\big)\geq\epsilon k^2.
\end{align*}
Letting $k\to \infty$, we arrive at a contradiction, since the left-hand side $\alpha_1(1+\mathbb E|\xi|^{2})$ is finite.
Thus, $\tau=T$ a.s.
Let us now show that $X(t)$ is defined and is continuous at $t=T$ and that $X$ satisfies the estimate \eqref{EX(t)2Estimate}.
Since $\tau=T$ a.s., we have
\begin{equation} \label{Eq-2}
X(t)=S(t) \xi+ \mathcal Q_1(X)(t) +\mathcal Q_2(X)(t), \hspace{1cm} t\in [0,T),
\end{equation}
where $\mathcal Q_1$ and $\mathcal Q_2$ are defined by \eqref{Q1Definition} and \eqref{Q2Definition}, respectively.
Using the same argument as in the proof of the estimate \eqref{EEE} of Theorem \ref{linear growth and Lipschitz conditions} to the stochastic integral equation \eqref{Eq-2}, we observe that
\begin{equation} \label{Eq-3}
\mathbb E|X(t)|^2 \leq \alpha_1 (1+\mathbb E |\xi|^2), \hspace{1cm} t\in [0,T),
\end{equation}
where $\alpha_1$ is some constant depending only on $c_1, M_T$ and $ T$. Clearly, the estimate \eqref{Eq-3} shows that $\mathcal Q_1(X)(t)$ and $\mathcal Q_2(X)(t)$ are defined at $t=T$. Furthermore, Proposition \ref{IntegralInequality} provides the continuity of $\mathcal Q_2(X)$ at $t=T$. The continuity of $\mathcal Q_1(X)$ at $t=T$ can easily be seen from \eqref{Eq-3} and the equality
\begin{align*}
&\mathcal Q_1(X)(T)-\mathcal Q_1(X)(t)\\
=&\int_0^T S(T-t)S(t-s) F(s,X(s))ds-\int_0^tS(t-s)F(s,X(s))ds\\
=& [S(T-t)-I]\int_0^tS(t-s)F(s,X(s))ds +\int_t^TS(T-s)F(s,X(s))ds
\end{align*}
with any $ t\in [0,T)$. Therefore, by setting
$$X(T)=S(T) \xi+ \mathcal Q_1(X)(T) +\mathcal Q_2(X)(T),$$
we conclude that the obtained process $\{X(t), t\in [0,T]\}$ is a unique mild solution of \eqref{E2} on $[0,T]$. Furthermore, the estimate \eqref{EX(t)2Estimate} follows from \eqref{Eq-3} and the continuity of $X(t)$ on $[0,T]$.
\begin{itemize}
\item Proof of (ii).
\end{itemize}
The case $\mathbb E|\xi|^p=\infty$ is obvious. Therefore, we can assume that $\mathbb E|\xi|^p<\infty.$ To simplify the proof, we shall use the notation $C$ to denote positive constants which are determined only by $c_1, p, M_T$ and $T$; its value may change from occurrence to occurrence. Since
$$X(t)=S(t) \xi+ \mathcal Q_1(X)(t) +\mathcal Q_2(X)(t),$$
for every $s\in [0,T]$ we have
\begin{equation*}
\begin{aligned}
|X(s)|^{p}
\leq &C\Big[|\xi|^{p}+\Big\{\int_0^s|S(s-u) F(u, X(u))|du\Big\}^{p}+|\mathcal Q_2(X)(s)|^{p}\Big]\\
\leq&C\Big[|\xi|^{p}+\Big\{\int_0^s| F(u, X(u))|du\Big\}^{p}+|\mathcal Q_2(X)(s)|^{p}\Big]\\
\leq&C\Big[|\xi|^{p}+\Big\{\int_0^s (1+| X(u)|)du\Big\}^{p}+ |\mathcal Q_2(X)(s)|^{p}\Big]\\
\leq&C\Big[|\xi|^{p}+\int_0^s (1+| X(u)|^{p})du+|\mathcal Q_2(X)(s)|^{p}\Big].
\end{aligned}
\end{equation*}
Hence,
\begin{equation*}
\begin{aligned}
\mathbb E&\sup_{0\leq s\leq t}|X(s)|^{p} \\
&\leq C\Big[\mathbb E |\xi|^{p}+\int_0^t (1+\mathbb E\sup_{0\leq u\leq s}| X(u)|^{p})ds+\mathbb E\sup_{0\leq s\leq t}|\mathcal Q_2(X)(s)|^{p}\Big].
\end{aligned}
\end{equation*}
On the other hand, applying the Burkholder-Davis-Gundy inequality \cite[Theorem 3.28]{IS} to the continuous martingale $\{\mathcal Q_2(X)(s),s\in [0,T]\}$, we see that
\begin{equation*}
\begin{aligned}
\mathbb E\sup_{0\leq s\leq t} |\mathcal Q_2(X)(s)|^{p}&\leq C \mathbb E \Big[\int_0^t|S(s-u)G(u,X(u))|^2du\Big ]^{\frac{p}{2}}\\
&\leq C \mathbb E \Big[\int_0^t|G(u,X(u))|^2du\Big ]^{\frac{p}{2}}\\
&\leq C \mathbb E \Big[\int_0^t(1+|X(u)|^2)du\Big ]^{\frac{p}{2}}\\
&\leq C\int_0^t[1+\mathbb E\sup_{0\leq u\leq s}| X(u)|^{p}]ds.
\end{aligned}
\end{equation*}
Therefore, we have shown that
$$\mathbb E\sup_{0\leq s\leq t}|X(s)|^{p}\leq C\Big [1+ \mathbb E |\xi|^{p}+\int_0^t \mathbb E\sup_{0\leq u\leq s}| X(u)|^{p}ds\Big].$$
By the Gronwall lemma, we conclude that
$$\mathbb E\sup_{0\leq s\leq T}|X(s)|^{p}\leq C(1+\mathbb E|\xi|^{p}).$$
The proof is complete.
\end{proof}
In the remainder of this subsection, we establish the continuous dependence of the global mild solution on the initial data.
\begin{theorem} \label{dependence of global mild solution on the initial data}
Let \eqref{The linear growth condition} and \eqref{The local Lipschitz condition} be satisfied. Let $X$ and $\bar X$ be the global mild solutions of \eqref{E2} with initial values $\xi$ and $\bar \xi$, respectively, where $\mathbb E|\xi|^2+\mathbb E|\bar \xi|^2< \infty$. Then there exist two positive constants $C_1=C_1( c_1, M_T, T)$ and $C_2=C_2( c_1, \bar c_n, M_T, T)$ such that
\begin{align*}
&\mathbb E|X(t)-\bar X(t)|^2 \leq C_1 \mathbb E|\xi-\bar\xi|^2
\end{align*}
and
\begin{align*}
&\mathbb E|X(t)- X(s)|^2 \leq C_2 [\mathbb E|[S(t-s)-I]X(s)|^2 +(1+ \mathbb E|\xi|^{2}) (t-s)]\notag
\end{align*}
for every $0\leq s\leq t\leq T$.
\end{theorem}
As the proof of this theorem is similar to that of Theorem \ref{dependence of local mild solution on the initial data}, we omit it.
\section{Linear evolution equations with additive noise} \label{section4}
This section handles linear evolution equations with additive noise: the functions $F$ and $G$ are assumed to depend only on $t$, i.e. $F(t,X)=F(t)$ and $G(t,X)=G(t).$
Let us rewrite the equation \eqref{E2} in the form
\begin{equation} \label{autonomous linear evolution equation}
\begin{cases}
dX+AXdt=F(t)dt+ G(t)dw_t,\\
X(0)=\xi.
\end{cases}
\end{equation}
We shall explore existence of strict and mild solutions and regularity of solutions of \eqref{autonomous linear evolution equation}.
Throughout this section, we suppose that
\begin{itemize}
\item the spectrum $\sigma(A)$ of $A$ is contained in an open sectorial domain $\Sigma_{\varpi}$:
\begin{equation} \label{spectrumsectorialdomain}
\sigma(A) \subset \Sigma_{\varpi}=\{\lambda \in \mathbb C: |\arg \lambda|<\varpi\}, \quad \quad 0<\varpi<\frac{\pi}{2},
\end{equation}
\item there exists a constant $M_{\varpi}>0$ such that
\begin{equation} \label{resolventnorm}
||(\lambda-A)^{-1}|| \leq \frac{M_{\varpi}}{|\lambda|}, \quad\quad\quad \quad \lambda \notin \Sigma_{\varpi}.
\end{equation}
\item the non-random functions $F\colon[0,T]\to E$ and $G\colon[0,T]\to E$ satisfy one of the following two conditions:
\begin{equation} \label{FGSpace1}
F\in \mathcal F^{\beta, \sigma}((0,T];E), G\in \mathcal F^{\beta+\frac{1}{2}, \sigma} ((0,T];E)
\end{equation}
\hspace{2.5cm} with $0<\sigma<\beta\leq \frac{1}{2},$
\begin{equation} \label{FGSpace2}
F\in \mathcal F^{\beta, \sigma}((0,T];E) \, \text{ with } 0<\sigma<\beta\leq 1 \text{ and } G\in \mathcal B([0,T];E).
\end{equation}
\end{itemize}
The following lemma presents some useful results. The proof can be found in the monograph \cite{yagi}.
\begin{lemma}
The linear operator $A$ satisfies the following properties.
\begin{itemize}
\item [\rm (i)] $(-A)$ generates an analytic semigroup $S(t)=e^{-tA}$.
\item [\rm (ii)]
\begin{equation} \label{EstimateIofAnu}
|A^\nu S(t)| \leq \iota_\nu t^{-\nu}, \hspace{2cm} t\in (0,T], \nu \in [0,\infty),
\end{equation}
where $ \iota_\nu=\sup_{0<t\leq T} t^\nu |A^\nu S(t)|<\infty$. In particular,
\begin{equation} \label{EstimateIofS(t)Maximum}
|S(t)|\leq \iota_0, \hspace{2cm} t\in[0,T].
\end{equation}
\item [\rm (iii)] There exists $\nu>0$ such that
\begin{equation} \label{EstimateIofS(t)}
|S(t)|\leq \iota_0 e^{-\nu t}, \hspace{2cm} t\in[0,T].
\end{equation}
\item [\rm (iv)] For every $\theta\in (0,1]$
\begin{equation} \label{EstimateIofS(t)-IA-theta}
|[S(t)-I]A^{-\theta}|\leq \frac{\iota_{1-\theta}}{\theta} t^\theta, \hspace{2cm} t\in[0,T].
\end{equation}
\item [\rm (v)]
\begin{equation} \label{Eq53}
AS(\cdot)U \in \mathcal F^{\beta,\sigma}((0,T];E) \hspace{2cm} \text{ for every }U\in \mathcal D(A^\beta).
\end{equation}
\end{itemize}
\end{lemma}
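For the reader's convenience, we sketch how {\rm (iv)} follows from {\rm (ii)} (the complete arguments are given in \cite{yagi}). Writing
$$[S(t)-I]A^{-\theta}=-\int_0^t AS(r)A^{-\theta}dr=-\int_0^t A^{1-\theta}S(r)dr$$
and applying \eqref{EstimateIofAnu} with $\nu=1-\theta$, we obtain
$$|[S(t)-I]A^{-\theta}|\leq \int_0^t \iota_{1-\theta}\, r^{\theta-1}dr=\frac{\iota_{1-\theta}}{\theta}\, t^{\theta}, \hspace{2cm} t\in[0,T].$$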
\subsection{Existence of strict solutions}
This subsection presents existence of strict solutions of the autonomous linear evolution equation \eqref{autonomous linear evolution equation}.
\begin{theorem} \label{StrictSolutions}
Assume that
\begin{equation} \label{Eq35}
\begin{aligned}
&\text{there exist } \delta\in (0,\frac{1}{2}) \text{ and } c_{\delta}>0 \text{ such that }|AS(t)| <c_{\delta} t^{-\delta}\\
& \text{for every } t\in (0,T].
\end{aligned}
\end{equation}
Then there exists a unique strict solution of \eqref{autonomous linear evolution equation} in the space $\mathcal C((0,T];\mathcal D(A))$.
\end{theorem}
\begin{proof}
Let us observe that the integral $\int_0^t S(t-s)G(s)dw_s$ is well-defined and is continuous. Indeed, it suffices to show that $\int_0^t |S(t-s)G(s)|^2ds$ is finite for $t\in (0,T]$.
If \eqref{FGSpace1} takes place then from \eqref{FbetasigmaSpaceProperty} and \eqref{EstimateIofS(t)Maximum},
we have
\begin{align*}
\int_0^t|S(t-s)G(s)|^2 ds& \leq \int_0^t \iota_0^2 |G|_{\mathcal F^{\beta+\frac{1}{2}, \sigma}}^2 s^{2\beta-1}ds\\
&=\frac{\iota_0^2 |G|_{\mathcal F^{\beta+\frac{1}{2}, \sigma}}^2 t^{2\beta}}{2\beta}<\infty, \hspace{2cm} t\in [0,T].
\end{align*}
Otherwise, if \eqref{FGSpace2} takes place then from \eqref{EstimateIofS(t)Maximum}, we have
$$\int_0^t|S(t-s)G(s)|^2 ds \leq \iota_0^2 t |G|_{\mathcal B((0,T]; E)}^2 <\infty, \quad \quad\quad t\in [0,T].$$
On the other hand, by \eqref{FbetasigmaSpaceProperty} and \eqref{EstimateIofS(t)}, we have
\begin{equation} \label{Eq0}
\int_0^t|S(t-s) F(s)|ds\leq \iota_0 |F|_{\mathcal F^{\beta,\sigma}} \int_0^t e^{-\nu (t-s)} s^{\beta-1}ds \leq \frac{\iota_0 |F|_{\mathcal F^{\beta,\sigma}} t^\beta }{\beta}.
\end{equation}
Hence, $\int_0^tS(t-s) F(s)ds$ is continuous on $[0,T]$.
We thus have shown that \eqref{autonomous linear evolution equation} has a unique mild solution
$X(t)=I_1(t)+I_2(t),$ where
\begin{equation}\label{XI1I2}
\begin{aligned}
I_1(t)&=S(t)\xi +\int_0^tS(t-s) F(s)ds, \\
I_2(t)&= \int_0^t S(t-s) G(s)dw_s.
\end{aligned}
\end{equation}
We shall show that this mild solution is a strict solution in the space $\mathcal C((0,T];\mathcal D(A))$. For this purpose, we divide the proof into two steps.
{\bf Step 1}. Let us verify that $I_1\in \mathcal C((0,T];\mathcal D(A)) $ and satisfies the integral equation
\begin{equation} \label{I1equation}
I_1(t)+\int_0^t AI_1(s)ds=\xi+ \int_0^t F(s)ds, \hspace{1cm} t\in (0,T].
\end{equation}
Let $A_n=A(1+\frac{A}{n})^{-1}, n\in \mathbb N\setminus \{0\}$ be the Yosida approximation of $A$. Then $A_n$ satisfies \eqref{spectrumsectorialdomain} and \eqref{resolventnorm} uniformly and generates an analytic semigroup $S_n(t)$ (see e.g. \cite{yagi}). Furthermore, for every $\nu \in [0,\infty)$ we have
\begin{equation} \label{LimitOfAnnuSn}
\lim_{n\to\infty} A_n^\nu S_n(t)=A^\nu S(t) \quad\quad\text{ (limit in } \mathcal L(E))
\end{equation}
and there exists $\varsigma_\nu>0$ independent of $n$ such that for every $t\in (0,T]$
\begin{equation} \label{EstimateIofAnnu}
\begin{aligned}
\begin{cases}
|A_n^\nu S_n(t)| \leq \varsigma_\nu t^{-\nu} &\quad\quad\text { if } \nu>0,\\
|A_n^\nu S_n(t)| \leq \varsigma_\nu e^{-\varsigma_\nu t} &\quad\quad\text { if } \nu=0.
\end{cases}
\end{aligned}
\end{equation}
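In particular, each $A_n$ is a bounded operator on $E$. Indeed, writing $A_n=nA(n+A)^{-1}=n[I-n(n+A)^{-1}]$ and using \eqref{resolventnorm} with $\lambda=-n\notin \Sigma_\varpi$, we obtain
$$||A_n||\leq n\,\big[1+||n(n+A)^{-1}||\big]\leq n\,(1+M_\varpi), \hspace{2cm} n=1,2,\dots.$$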
Consider a function
$$I_{1n}(t)=S_n(t)\xi +\int_0^tS_n(t-s) F(s)ds, \hspace{2cm} t\in [0,T].$$
Due to \eqref{LimitOfAnnuSn} and \eqref{EstimateIofAnnu}, $\lim_{n\to \infty} I_{1n}(t)=I_1(t)$ a.s. Since $A_n$ is bounded, we verify that
$$dI_{1n}=[-A_n I_{1n}+F(t)]dt, \hspace{2cm} t\in (0,T].$$
From this equation, for any $\epsilon \in (0,T)$ we obtain that
\begin{equation}\label{EquationOfI1n}
I_{1n}(t)=I_{1n}(\epsilon)+\int_\epsilon^t[F(s)-A_nI_{1n}(s)]ds, \quad\quad t\in [\epsilon,T].
\end{equation}
Let us now examine the convergence of $A_nI_{1n}$. We have
\begin{align}
A_n&I_{1n}(t) \notag\\
=&A_nS_n(t)\xi+\int_0^t A_nS_n(t-s)[F(s)-F(t)]ds + \int_0^t A_nS_n(t-s)ds F(t)\notag\\
=&A_nS_n(t)\xi+\int_0^t A_nS_n(t-s)[F(s)-F(t)]ds + [I-S_n(t)]F(t). \label{Eq36}
\end{align}
Using \eqref{FbetasigmaSpaceProperty} and \eqref{EstimateIofAnnu}, we observe that
\begin{align}
|A_n&I_{1n}(t)|\notag\\
\leq &\varsigma_1 t^{-1} |\xi|+\int_0^t \varsigma_1 |F|_{\mathcal F^{\beta,\sigma}} (t-s)^{\sigma-1} s^{\beta-\sigma-1} ds+(1+\varsigma_0 e^{-\varsigma_0 t}) |F|_{\mathcal F^{\beta,\sigma}} t^{\beta-1}\notag\\
=&\varsigma_1 |\xi| t^{-1} + [1+\varsigma_1 {\bf B}(\beta-\sigma, \sigma) +\varsigma_0 e^{-\varsigma_0 t} ] |F|_{\mathcal F^{\beta,\sigma}} t^{\beta-1}, \quad \quad t\in (0,T], \label{EstimateOfNormOfAnI1n}
\end{align}
where ${\bf B} (\cdot,\cdot)$ is the beta function.
Furthermore, due to \eqref{LimitOfAnnuSn} and \eqref{Eq36},
$$
\lim_{n\to\infty}A_nI_{1n}(t)=W(t),
$$
where
$$W(t)=AS(t)\xi+\int_0^t AS(t-s)[F(s)-F(t)]ds + [I-S(t)]F(t).$$
Let us verify that $W(t)$ is continuous on $(0,T]$. Indeed, let $t_0\in (0,T]$. Using \eqref{FbetasigmaSpaceProperty} and \eqref{EstimateIofAnu}, for every $t\geq t_0$ we have
\begin{align}
&|W(t)-W(t_0)|\notag\\
\leq & |AS(t_0)[S(t-t_0)-I]\xi| + |[I-S(t)]F(t)-[I-S(t_0)]F(t_0)|\notag \\
&+\Big| \int_{t_0}^t AS(t-s)[F(s)-F(t)]ds+\int_0^{t_0} AS(t-s)ds[F(t_0)-F(t)]\notag\\
&+\int_0^{t_0} S(t-t_0)AS(t_0-s)[F(s)-F(t_0)]ds\notag\\
&-\int_0^{t_0} AS(t_0-s)[F(s)-F(t_0)]ds\Big|\notag\\
\leq & \iota_1 t_0^{-1}|S(t-t_0)\xi-\xi| + |[I-S(t)]F(t)-[I-S(t_0)]F(t_0)|\notag \\
&+ \int_{t_0}^t |AS(t-s)| |F(s)-F(t)|ds+|[S(t-t_0)-S(t)][F(t_0)-F(t)]|\notag\\
&+\int_0^{t_0} |[S(t-t_0)-I]AS(t_0-s)[F(s)-F(t_0)]|ds\notag\\
\leq & \iota_1 t_0^{-1}|S(t-t_0)\xi-\xi| + |[I-S(t)]F(t)-[I-S(t_0)]F(t_0)| \notag\\
&+ \int_{t_0}^t \iota_1 |F|_{\mathcal F^{\beta, \sigma}} (t-s)^{\sigma-1} s^{\beta-\sigma-1}ds+|S(t-t_0)-S(t)| |F(t_0)-F(t)|\notag\\
&+|S(t-t_0)-I|\int_0^{t_0} \iota_1|F|_{\mathcal F^{\beta, \sigma}} (t_0-s)^{\sigma-1} s^{\beta-\sigma-1}ds\notag\\
\leq & \iota_1 t_0^{-1}|S(t-t_0)\xi-\xi| + |[I-S(t)]F(t)-[I-S(t_0)]F(t_0)|\notag \\
&+ \iota_1 |F|_{\mathcal F^{\beta, \sigma}} t_0^{\beta-\sigma-1} \int_{t_0}^t(t-s)^{\sigma-1} ds+|S(t-t_0)-S(t)| |F(t_0)-F(t)|\notag\\
&+ \iota_1|F|_{\mathcal F^{\beta, \sigma}} {\bf B}(\beta-\sigma,\sigma) t_0^{\beta-1} |S(t-t_0)-I|\notag\\
\leq & \iota_1 t_0^{-1}|S(t-t_0)\xi-\xi| + |[I-S(t)]F(t)-[I-S(t_0)]F(t_0)| \label{Eq26}\\
&+ \frac{\iota_1 |F|_{\mathcal F^{\beta, \sigma}} }{\sigma} t_0^{\beta-\sigma-1} (t-t_0)^\sigma+|S(t-t_0)-S(t)| |F(t_0)-F(t)|\notag\\
&+ \iota_1|F|_{\mathcal F^{\beta, \sigma}} {\bf B}(\beta-\sigma,\sigma) t_0^{\beta-1} |S(t-t_0)-I|.\notag
\end{align}
Thus, $\lim_{t\searrow t_0}W(t)=W(t_0).$ Similarly, we obtain that $\lim_{t\nearrow t_0}W(t)=W(t_0).$ Hence, $W(t)$ is continuous at $t=t_0$ and then at every point in $(0,T]$.
By the above arguments, we have
$$I_1(t)=\lim_{n\to\infty} I_{1n}(t)=\lim_{n\to\infty} A_n^{-1} A_n I_{1n}(t)=A^{-1}W(t).$$
This shows that $I_1(t) \in \mathcal D(A)$ and $AI_1(t)=W(t)\in \mathcal C((0,T];E).$
Let us prove that $\int_0^t W(s)ds$ exists. Indeed, by virtue of \eqref{FbetasigmaSpaceProperty}, \eqref{EstimateIofAnu} and \eqref{EstimateIofS(t)Maximum}, we have
\begin{align*}
\int_0^t |[I-S(r)]F(r)|dr &\leq (1+\iota_0) |F|_{\mathcal F^{\beta,\sigma}}\int_0^t r^{\beta-1}dr\\
&=\frac{(1+\iota_0) |F|_{\mathcal F^{\beta,\sigma}} t^{\beta}}{\beta}, \hspace{2cm} t\in [0,T],
\end{align*}
and
\begin{align*}
&\int_0^t \Big|\int_0^s AS(s-r)[F(r)-F(s)]dr\Big| ds\\
&\leq \int_0^t \int_0^s |AS(s-r)| |F(r)-F(s)|dr ds\\
&\leq \iota_1 |F|_{\mathcal F^{\beta, \sigma}} \int_0^t \int_0^s (s-r)^{\sigma-1} r^{\beta-\sigma-1}dr ds\\
&=\frac{\iota_1 |F|_{\mathcal F^{\beta, \sigma}} {\bf B}(\beta-\sigma, \sigma) t^\beta}{\beta}, \hspace{2cm} t\in [0,T].
\end{align*}
These estimates show that $\int_0^t [I-S(r)]F(r)dr$ and $\int_0^t \int_0^s AS(s-r)[F(r)-F(s)]dr ds $ exist for $t\in [0,T].$ Therefore, the existence of $\int_0^t W(s)ds$ for $t\in [0,T]$ follows from the equality
\begin{align*}
\int_0^t W(s)ds=&\int_0^t AS(r)\xi dr+\int_0^t \int_0^s AS(s-r)[F(r)-F(s)]dr ds\\
& +\int_0^t [I-S(r)]F(r)dr\\
=&[I-S(t)]\xi+\int_0^t \int_0^s AS(s-r)[F(r)-F(s)]dr ds\\
& +\int_0^t [I-S(r)]F(r)dr.
\end{align*}
In view of \eqref{EstimateOfNormOfAnI1n}, by applying the Lebesgue dominated convergence theorem to \eqref{EquationOfI1n}, for any $\epsilon \in (0,T)$ we obtain that
\begin{equation} \label{Eq27}
I_1(t)=I_1(\epsilon)+\int_\epsilon^t[F(s)-AI_1(s)]ds, \quad\quad t\in [\epsilon,T].
\end{equation}
From \eqref{Eq0} and \eqref{XI1I2}, $\lim_{\epsilon\to 0} I_1(\epsilon) =\xi$. Letting $\epsilon \to 0$ in \eqref{Eq27}, we then obtain the equation \eqref{I1equation}.
{\bf Step 2}. Let us observe that $I_2\in \mathcal C([0,T]; \mathcal D(A)) $ and satisfies the equation
\begin{equation} \label{I2equation}
I_2(t)+ \int_0^t AI_2(s)ds=\int_0^t G(u)dw_u, \quad\quad t\in (0,T].
\end{equation}
If \eqref{FGSpace1} takes place, then by using \eqref{FbetasigmaSpaceProperty} and \eqref{Eq35}, we have
\begin{align*}
\int_0^t&|AS(t-s)G(s)|^2 ds \leq \int_0^t c_\delta^2(t-s)^{-2\delta}|G|_{\mathcal F^{\beta+\frac{1}{2}, \sigma}}^2 s^{2\beta-1}ds\\
&=c_\delta^2 |G|_{\mathcal F^{\beta+\frac{1}{2}, \sigma}}^2 t^{2(\beta-\delta)} {\bf B}(2\beta,1-2\delta)<\infty, \hspace{1cm} t\in (0,T].
\end{align*}
Otherwise, if \eqref{FGSpace2} takes place then
\begin{align*}
\int_0^t|AS(t-s)G(s)|^2 ds &\leq c_\delta^2 \int_0^t (t-s)^{-2\delta} |G|_{\mathcal B((0,T]; E)}^2ds\\
&=\frac{ c_\delta^2 t^{1-2\delta}|G|_{\mathcal B((0,T]; E)}^2}{1-2\delta} <\infty, \hspace{1cm}t\in (0,T].
\end{align*}
Hence, the integral $\int_0^t AS(t-s)G(s)dw_s$, $t\in (0,T]$, is well-defined and continuous.
Since $A$ is closed, we obtain that
$$AI_2(t)=\int_0^t AS(t-s)G(s)dw_s, \hspace{2cm} t\in (0,T].$$
Using the stochastic Fubini theorem, we have
\begin{equation*}
\begin{aligned}
A \int_0^t I_2(s)ds&=\int_0^t \int_0^s AS(s-u) G(u)dw_uds\\
&=\int_0^t \int_u^t AS(s-u) G(u)dsdw_u\\
&=\int_0^t [G(u)-S(t-u)G(u)]dw_u\\
&=\int_0^t G(u)dw_u-\int_0^t S(t-u)G(u)dw_u\\
&=\int_0^t G(u)dw_u-I_2(t), \hspace{2cm} t\in (0,T].
\end{aligned}
\end{equation*}
This means that $I_2$ satisfies \eqref{I2equation}.
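In the computation above, the third equality relies on the elementary semigroup identity
$$\int_u^t AS(s-u)ds=-\int_u^t \frac{d}{ds}S(s-u)ds=I-S(t-u), \hspace{2cm} 0\leq u<t,$$
applied for each fixed $u$.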
From these two steps, we conclude that $X(t)$ is a strict solution in the space $\mathcal C((0,T];\mathcal D(A))$.
\end{proof}
\begin{remark} \label{remark1}
In {\bf Step 1} of the proof of Theorem \ref{StrictSolutions}, the assumption \eqref{Eq35} is not used. Using this assumption, one could shorten the proof of that step; however, we do not present such a proof, because it would not be useful for the study of regularity of mild solutions in the next theorems. In fact, this assumption serves only to guarantee the existence of the integral $\int_0^t AS(t-s) G(s)dw_s$, $t\in (0,T].$
\end{remark}
\subsection{Regularity of mild solutions}
In this subsection, we will study regularity of mild solutions of \eqref{autonomous linear evolution equation} without the condition \eqref{Eq35} in Theorem \ref{StrictSolutions}. The initial value is considered in the domain $\mathcal D(A^\beta).$
\begin{theorem} \label{regularity theorem autonomous linear evolution equation}
Suppose that $\xi \in \mathcal D(A^\beta)$ a.s. Then there exists a mild solution $X$ of \eqref{autonomous linear evolution equation} with the regularity
\begin{equation*} \label{regularity theorem autonomous linear evolution equationXSpace}
X \in \mathcal C([0,T];\mathcal D(A^\beta)) \cap \mathcal C^\beta([0,T]; E).
\end{equation*}
Furthermore, $X$ satisfies the estimates
\begin{align}
&\mathbb E |X(t)|^2 \leq \rho_1[\mathbb E|\xi|^2 + |F|_{\mathcal F^{\beta,\sigma}}^2+ |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2], \label{estimateofstrictsolutionCase2}
\end{align}
when \eqref{FGSpace1} takes place, and
\begin{align}
&\mathbb E |X(t)|^2 \leq \rho_2[\mathbb E|\xi|^2+ |F|_{\mathcal F^{\beta,\sigma}}^2+ (1-e^{-\rho_2t}) |G|_{\mathcal B([0,t]; E)}^2 ], \label{estimateofstrictsolution}
\end{align}
when \eqref{FGSpace2} takes place. Here, $\rho_1$ and $\rho_2$ are two positive constants depending only on $A, \beta$ and $\sigma$.
\end{theorem}
\begin{proof}
The existence of the mild solution $X(t)=I_1(t)+I_2(t)$ of \eqref{autonomous linear evolution equation} is already shown in Theorem \ref{StrictSolutions}, where
$I_1(t)$ and $ I_2(t)$ are defined by \eqref{XI1I2}. First, let us assume that the condition \eqref{FGSpace1} takes place. The proof for this case is divided into several steps.
{\bf Step 1}. Let us show that $I_1(t)\in \mathcal D(A^\beta) $ for $t\in [0,T]$ and that
$A^\beta I_1\in \mathcal C([0,T]; E).
$
Since $S(t)$ is strongly continuous, we have
$$\lim_{t\to s}|A^\beta S(t)\xi-A^\beta S(s)\xi|=\lim_{t\to s}|[S(t)-S(s)]A^\beta \xi|=0, \hspace{1cm} s\in [0,T].$$
Therefore, $A^\beta S(t)\xi$ is continuous on $[0,T]$. Because of
$$A^\beta I_1(t)=A^\beta S(t)\xi+A^\beta\int_0^t S(t-s)F(s)ds,$$
it suffices to show that $A^\beta \int_0^t S(t-s)F(s)ds$ is well-defined and continuous on $[0,T]$.
Let us verify the first assertion. Using the inequalities \eqref{FbetasigmaSpaceProperty} and \eqref{EstimateIofAnu}, we have
\begin{align}
\int_0^t|A^\beta S(t-s)F(s)|ds&\leq \int_0^t|A^\beta S(t-s)| |F(s)|ds \notag\\
&\leq |F|_{\mathcal F^{\beta,\sigma}} \iota_\beta\int_0^t(t-s)^{-\beta} s^{\beta-1}ds\notag\\
&= |F|_{\mathcal F^{\beta,\sigma}} \iota_\beta\int_0^1u^{\beta-1}(1-u)^{-\beta}du\notag\\
&=\iota_\beta |F|_{\mathcal F^{\beta,\sigma}} {\bf B}(\beta,1-\beta), \hspace{2cm} t\in[0,T]. \label{Eq4}
\end{align}
Hence, $\int_0^tA^\beta S(t-s)F(s)ds$ is well-defined. Since $A^\beta$ is closed, we obtain that
$$A^\beta\int_0^t S(t-s)F(s)ds=\int_0^tA^\beta S(t-s)F(s)ds.$$
Let us now verify the second assertion, i.e. the continuity of $A^\beta\int_0^t S(t-s)F(s)ds$ on $[0,T]$. Fix $t_0\in[0,T]$. We consider two cases.
Case 1: $t_0>0$. For every $t\geq t_0$ we have
\begin{align*}
&\left|\int_0^tA^\beta S(t-s)F(s)ds-\int_0^{t_0}A^\beta S(t_0-s)F(s)ds\right|\\
&\leq \left|\int_0^{t_0}A^\beta [S(t-s)-S(t_0-s)]F(s)ds\right|+\left|\int_{t_0}^tA^\beta S(t-s)F(s)ds\right|\\
&\leq \int_0^{t_0}|A^\beta S(t_0-s)| |F(s)|ds|S(t-t_0)-I| +\int_{t_0}^t|A^\beta S(t-s)| |F(s)|ds.
\end{align*}
Using \eqref{FbetasigmaSpaceProperty} and \eqref{EstimateIofAnu}, we observe that
\begin{align*}
&\left|\int_0^tA^\beta S(t-s)F(s)ds-\int_0^{t_0}A^\beta S(t_0-s)F(s)ds\right|\\
&\leq \int_0^{t_0} \iota_\beta (t_0-s)^{-\beta} |F|_{\mathcal F^{\beta,\sigma}}s^{\beta-1}ds|S(t-t_0)-I|+\int_{t_0}^t\iota_\beta (t-s)^{-\beta} \sup_{s\in[t_0,t]}|F(s)|ds\\
&= |F|_{\mathcal F^{\beta,\sigma}} \iota_\beta {\bf B}(\beta,1-\beta) |S(t-t_0)-I| + \frac{\iota_\beta\sup_{s\in[t_0,t]}|F(s)|(t-t_0)^{1-\beta}}{{1-\beta}} \\
&\to 0 \text { as } t\searrow t_0.
\end{align*}
This means that $\int_0^tA^\beta S(t-s)F(s)ds$ is right-continuous at $t=t_0$. Similarly, we can show that it is left-continuous at $t=t_0$. Therefore, $A^\beta\int_0^t S(t-s)F(s)ds$ is continuous at $t=t_0$.
Case 2: $t_0=0$. By property (1) of the space $\mathcal F^{\beta,\sigma}((0,T];E),$ we may put $z=\lim_{t\searrow 0} t^{1-\beta}F(t)$. We have
\begin{align*}
\Big|A^\beta & \int_0^t S(t-s)F(s)ds\Big|\notag\\
\leq&\Big|\int_0^t A^\beta S(t-s)[F(s)-F(t)]ds\Big|+\Big|\int_0^t A^\beta S(t-s)F(t)ds\Big|\notag\\
=&\Big|\int_0^t A^\beta S(t-s)[F(s)-F(t)]ds\Big|+\Big|[I-S(t)]A^{\beta-1}F(t)\Big|\notag\\
\leq &\int_0^t |A^\beta S(t-s)| |F(t)-F(s)|ds
+| t^{\beta-1}[I-S(t)]A^{\beta-1} [t^{1-\beta}F(t)-z]|\notag\\
&+| t^{\beta-1}[I-S(t)]A^{\beta-1} z|.\notag
\end{align*}
Using \eqref{Fbetasigma3}, \eqref{EstimateIofAnu} and \eqref{EstimateIofS(t)-IA-theta}, we obtain that
\begin{align*}
\limsup_{t\searrow 0}&\left|A^\beta\int_0^t S(t-s)F(s)ds\right|\notag\\
\leq & \iota_\beta \limsup_{t\searrow 0}\int_0^t (t-s)^{-\beta}|F(t)-F(s)|ds\notag\\
&+\frac{\iota_\beta}{1-\beta}\limsup_{t\searrow 0}|t^{1-\beta}F(t)-z|\notag\\
&+\limsup_{t\searrow 0}| t^{\beta-1}[I-S(t)]A^{\beta-1} z|\notag\\
= & \iota_\beta \limsup_{t\searrow 0}\int_0^t (t-s)^{\sigma-\beta}s^{-1+\beta-\sigma} \frac{ s^{1-\beta+\sigma}|F(t)-F(s)|}{(t-s)^\sigma}ds\notag\\
&+\frac{\iota_\beta}{1-\beta}\limsup_{t\searrow 0}|t^{1-\beta}F(t)-z|\notag\\
&+\limsup_{t\searrow 0}| t^{\beta-1}[I-S(t)]A^{\beta-1} z|\notag\\
\leq &\iota_\beta {\bf B}(\beta-\sigma, 1-\beta+\sigma)\limsup_{t\searrow 0} \sup_{s\in[0,t)}\frac{ s^{1-\beta+\sigma}|F(t)-F(s)|}{(t-s)^\sigma}\notag\\
&+\limsup_{t\searrow 0}| t^{\beta-1}[I-S(t)]A^{\beta-1} z|\notag\\
=&\limsup_{t\searrow 0}| t^{\beta-1}[I-S(t)]A^{\beta-1} z|. \notag
\end{align*}
Since $\mathcal D(A^\beta)$ is dense in $E$, there exists a sequence $\{z_n\}_n$ in $\mathcal D(A^\beta)$ that converges to $z$ as $n\to \infty.$ Using \eqref{EstimateIofS(t)-IA-theta}, we observe that
\begin{align*}
&\limsup_{t\searrow 0}\left|A^\beta\int_0^t S(t-s)F(s)ds\right|\\
&\leq\limsup_{t\searrow 0} |t^{\beta-1}[I-S(t)]A^{\beta-1} (z-z_n)|+\limsup_{t\searrow 0} |t^{\beta-1}[I-S(t)]A^{-1}A^{\beta} z_n|\\
&\leq \frac{\iota_\beta }{1-\beta}|z-z_n|+\iota_0 \limsup_{t\searrow 0} t^\beta |A^{\beta} z_n|\\
&= \frac{\iota_\beta }{1-\beta}|z-z_n|, \hspace{2cm} n=1,2,\dots.
\end{align*}
Letting $n\to\infty$, we obtain that $\lim_{t\searrow 0}A^\beta\int_0^t S(t-s)F(s)ds=0.$ This means that $A^\beta\int_0^t S(t-s)F(s)ds$ is right-continuous at $t=t_0$.
From these two cases, we conclude that $A^\beta\int_0^t S(t-s)F(s)ds$ is continuous at every point $t_0\in[0,T]$. The second assertion is thus proved.
{\bf Step 2}. Let us verify that
$
A^\beta X\in \mathcal C([0,T]; E).
$
In view of \eqref {FbetasigmaSpaceProperty} and \eqref{EstimateIofAnu}, we have
\begin{align}
\int_0^t |A^\beta S(t-s) G(s) |^2 ds &\leq \iota_\beta^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 \int_0^t (t-s)^{-2\beta} s^{2\beta-1}ds \notag \\
&=\iota_\beta^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 {\bf B}(2\beta, 1-2\beta)< \infty. \label{Eq5}
\end{align}
Thus, $\int_0^t A^\beta S(t-s) G(s)dw_s $ is well-defined. Since $A^\beta$ is closed, we obtain that
$$ A^\beta I_2(t) =\int_0^t A^\beta S(t-s) G(s)dw_s. $$
Thanks to Proposition \ref{IntegralInequality}, $ A^\beta I_2$ is a continuous square integrable martingale on $[0,T]$. On account of {\bf Step 1}, we conclude that
$$A^\beta X=A^\beta I_1+A^\beta I_2 \in \mathcal C([0,T]; E).$$
{\bf Step 3}. Let us show that $I_1$ is $\beta$\,{-}\,H\"older continuous on $[0,T]$.
From \eqref{EstimateIofAnu}, \eqref{EstimateIofS(t)Maximum}, \eqref{XI1I2} and \eqref{I1equation}, for every $0\leq s<t\leq T$ we observe that
\begin{align}
|I_1(t)-I_1(s)|=&\Big|\int_s^t F(u)du-\int_s^t AI_1(u) du\Big| \notag \\
=&\Big|\int_s^t F(u)du-\int_s^t AS(u)\xi du-\int_s^t \int_0^uAS(u-r)F(r)drdu\Big| \notag \\
\leq& \Big|\int_s^t [F(u)-AS(u)\xi] du\Big|+\int_s^t \Big|\int_0^uAS(u-r)F(u)dr\Big|du \notag \\
&+\int_s^t \Big|\int_0^uAS(u-r)[F(u)-F(r)]dr\Big|du \notag\\
\leq & \Big|\int_s^t [F(u)-AS(u)\xi] du\Big|+\int_s^t |[I-S(u)]F(u)|du \label{Eq2} \\
&+\int_s^t \int_0^u|AS(u-r)| |F(u)-F(r)|drdu \notag \\
\leq & \int_s^t [|F(u)-AS(u)\xi |+(1+\iota_0)|F(u)|]du \notag \\
&+\iota_1\int_s^t \int_0^u(u-r)^{-1} |F(u)-F(r)|drdu \notag \\
=&I_{11}(s,t)+I_{12}(s,t). \label{Eq28}
\end{align}
We shall give estimates for $I_{11}$ and $I_{12}$.
Since $\xi\in \mathcal D(A^\beta)$ a.s., we have
$AS(\cdot)\xi \in \mathcal F^{\beta,\sigma}((0,T];E)$ a.s. (see \eqref{Eq53}). Therefore,
$$F(\cdot)-AS(\cdot)\xi \in \mathcal F^{\beta,\sigma}((0,T];E) \hspace{2cm} \text{ a.s.}$$
In view of \eqref {FbetasigmaSpaceProperty}, we see that
\begin{align*}
I_{11}(s,t)&\leq \int_s^t [|F(\cdot)-AS(\cdot)\xi|_ {\mathcal F^{\beta,\sigma}} u^{\beta-1}+|F|_ {\mathcal F^{\beta,\sigma}}(1+\iota_0) u^{\beta-1}]du\\
&=\frac{|F(\cdot)-AS(\cdot)\xi|_ {\mathcal F^{\beta,\sigma}} + |F|_ {\mathcal F^{\beta,\sigma}}(1+\iota_0)}{\beta} (t^\beta-s^\beta)\\
&\leq \frac{|F(\cdot)-AS(\cdot)\xi|_ {\mathcal F^{\beta,\sigma}} + |F|_ {\mathcal F^{\beta,\sigma}}(1+\iota_0)}{\beta} (t-s)^\beta.
\end{align*}
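In the last line (and again in \eqref{Eq29}, \eqref{Eq37} and \eqref{ExpextationOfI2tI2sSquare} below) we used the elementary subadditivity of the map $r\mapsto r^\beta$ for $0<\beta<1$:
\begin{equation*}
t^\beta=\big((t-s)+s\big)^\beta\leq (t-s)^\beta+s^\beta,\qquad 0\leq s\leq t,
\end{equation*}
so that $t^\beta-s^\beta\leq (t-s)^\beta$.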
Meanwhile, using \eqref {FbetasigmaSpaceProperty}, we have
\begin{align}
I_{12}(s,t)=&\iota_1\int_s^t \int_0^u(u-r)^{\sigma-1} r^{\beta-1-\sigma}\frac{r^{1-\beta+\sigma}|F(u)-F(r)|}{(u-r)^\sigma}drdu \notag\\
\leq & \iota_1|F|_{\mathcal F^{\beta,\sigma}}\int_s^t \int_0^u(u-r)^{\sigma-1} r^{\beta-\sigma-1}drdu\notag\\
= & \iota_1|F|_{\mathcal F^{\beta,\sigma}}\int_s^t u^{\beta-1}\int_0^1(1-v)^{\sigma-1} v^{\beta-\sigma-1}dvdu\notag\\
= & \iota_1|F|_{\mathcal F^{\beta,\sigma}} {\bf B}(\beta-\sigma,\sigma)\int_s^t u^{\beta-1}du\notag\\
= & \frac{\iota_1|F|_{\mathcal F^{\beta,\sigma}} {\bf B}(\beta-\sigma,\sigma)}{\beta} (t^\beta-s^\beta)\notag\\
\leq& \frac{\iota_1|F|_{\mathcal F^{\beta,\sigma}} {\bf B}(\beta-\sigma,\sigma)}{\beta} (t-s)^\beta. \label{Eq29}
\end{align}
Thus, $I_1(\cdot)$ is $\beta$\,{-}\,H\"older continuous on $[0,T]$.
{\bf Step 4}. Let us show that $X $ is $\beta$-H\"older continuous on $[0,T]$.
On account of {\bf Step 3}, it suffices to show that
$I_2\in \mathcal C^\beta([0,T];E)$. Let $0\leq s<t\leq T$; then
$$I_2(t)-I_2(s)=\int_s^t S(t-r)G(r)dw_r +\int_0^s [S(t-r)-S(s-r)]G(r)dw_r.$$
Since the integrals on the right-hand side are independent and have zero expectation, we have
\begin{align*}
\mathbb E& |I_2(t)-I_2(s)|^2 \notag\\
=&\mathbb E \Big|\int_s^t S(t-r)G(r)dw_r\Big|^2 +\mathbb E\Big|\int_0^s [S(t-r)-S(s-r)]G(r)dw_r\Big|^2 \notag\\
\leq & c(E)\Big[\int_s^t |S(t-r)|^2 | G(r)|^2 dr+ \int_0^s |S(t-r)-S(s-r)|^2 |G(r)|^2dr\Big]. \notag
\end{align*}
Using \eqref {FbetasigmaSpaceProperty}, \eqref{EstimateIofAnu} and \eqref{EstimateIofS(t)Maximum}, we then observe that
\begin{align*}
\mathbb E& |I_2(t)-I_2(s)|^2 \notag\\
\leq & c(E)|G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 \Big [\iota_0^2 \int_s^t r^{2\beta-1}dr+ \int_0^s\Big |\int_{s-r}^{t-r} AS(u) du\Big|^2 r^{2\beta-1}dr \Big ] \notag\\
\leq &c(E)|G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 \Big [\frac {\iota_0^2 (t^{2\beta}-s^{2\beta})}{2\beta}+\iota_1^2 \int_0^s \Big(\int_{s-r}^{t-r} u^{-1}du\Big)^2 r^{2\beta-1}dr \Big ]. \notag
\end{align*}
Writing the exponent $-1$ as $-1=-\beta+(\beta-1)$, we have
\begin{align}
\Big(\int_{s-r}^{t-r} u^{-1}du\Big)^2&\leq \Big[\int_{s-r}^{t-r} (s-r)^{-\beta}u^{\beta-1}du\Big]^2\notag\\
&= (s-r)^{-2\beta}\frac{[(t-r)^\beta-(s-r)^\beta]^2}{\beta^2}\notag\\
&\leq (s-r)^{-2\beta}\frac{(t-s)^{2\beta}}{\beta^2}. \label{Eq37}
\end{align}
Hence,
\begin{align}
\mathbb E& |I_2(t)-I_2(s)|^2 \notag\\
\leq &c(E)|G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 \Big [\frac {\iota_0^2 (t^{2\beta}-s^{2\beta})}{2\beta}+\frac{\iota_1^2(t-s)^{2\beta}}{\beta^2} \int_0^s (s-r)^{-2\beta} r^{2\beta-1}dr \Big ] \notag\\
\leq &c(E)|G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 \Big [\frac {\iota_0^2}{2\beta}+\frac{\iota_1^2 {\bf B}(2\beta, 1-2\beta) }{\beta^2} \Big ] (t-s)^{2\beta}.\label{ExpextationOfI2tI2sSquare}
\end{align}
In addition, by the definition of the stochastic integral, $ I_2(t)$ is a Gaussian process. Theorem \ref{Kolmogorov testGaussian} then provides that $I_2\in \mathcal C^\beta([0,T];E)$.
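For orientation only (the precise statement of Theorem \ref{Kolmogorov testGaussian} is not reproduced here), we recall why a second-moment bound suffices for a Gaussian process: all moments of a centred Gaussian random vector are controlled by its second moment, so \eqref{ExpextationOfI2tI2sSquare} yields, for every integer $m\geq 1$, a constant $C_m$ with
\begin{equation*}
\mathbb E|I_2(t)-I_2(s)|^{2m}\leq C_m (t-s)^{2m\beta},\qquad 0\leq s<t\leq T,
\end{equation*}
which is precisely the type of moment estimate to which Kolmogorov-type continuity criteria apply.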
{\bf Step 5}. Let us prove \eqref{estimateofstrictsolutionCase2}.
Taking $s=0$ and $t\in (0,T]$ in \eqref{Eq2} yields
\begin{align*}
|I_1(t)&-I_1(0)|\\
\leq& \Big|\int_0^t [F(u)-AS(u)\xi] du\Big|+\int_0^t |[I-S(u)]F(u)|du\\
&+\int_0^t \int_0^u|AS(u-r)| |F(u)-F(r)|drdu\\
\leq& \int_0^t [1+|I-S(u)|] |F(u)|du+\Big|\int_0^t AS(u)\xi du\Big|\\
&+\iota_1\int_0^t \int_0^u(u-r)^{-1} |F(u)-F(r)|drdu.
\end{align*}
In view of \eqref{FbetasigmaSpaceProperty}, \eqref{EstimateIofS(t)Maximum}, \eqref{Eq28} and \eqref{Eq29}, we have
\begin{align*}
|I_1(t)&-I_1(0)|\\
\leq& \int_0^t (2+\iota_0) |F(u)|du+|[I-S(t)]\xi|+I_{12}(0,t)\\
\leq& (2+\iota_0) \int_0^t |F|_{\mathcal F^{\beta,\sigma}} u^{\beta-1}du+(1+\iota_0)|\xi|+\frac{\iota_1|F|_{\mathcal F^{\beta,\sigma}} {\bf B}(\beta-\sigma,\sigma)}{\beta} t^\beta\\
=& \frac{[2+\iota_0+\iota_1 {\bf B}(\beta-\sigma,\sigma)] t^\beta |F|_{\mathcal F^{\beta,\sigma}}}{\beta} +(1+\iota_0)|\xi|.
\end{align*}
Then
$$|I_1(t)| \leq (2+\iota_0)|\xi|+\frac{[2+\iota_0+\iota_1 {\bf B}(\beta-\sigma,\sigma) ] t^\beta |F|_{\mathcal F^{\beta,\sigma}}}{\beta}, \quad\quad t\in (0,T].$$
Obviously, this inequality also holds true for $t=0$. As a consequence, there exists $c_1>0 $ depending only on $A, \beta$ and $\sigma$ such that
\begin{equation}\label{estimateofstrictsolutionI1}
\mathbb E|I_1(t)|^2\leq c_1 [\mathbb E|\xi|^2 + |F|_{\mathcal F^{\beta,\sigma}}^2], \hspace{1cm} t\in [0,T].
\end{equation}
On the other hand, using \eqref{FbetasigmaSpaceProperty} and \eqref{EstimateIofS(t)}, we have
\begin{align}
\mathbb E|I_2(t)|^2 &\leq c(E) \int_0^t |S(t-s)G(s)|^2 ds \notag\\
&\leq c(E) \iota_0^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 \int_0^t e^{-2\nu(t-s)} s^{2\beta-1}ds \notag\\
&\leq c(E) c_{\nu,\beta} \iota_0^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2, \hspace{2cm} t\in [0,T], \label{estimateofstrictsolutionI2Case2}
\end{align}
where $c_{\nu,\beta} =\sup_{t\in [0,\infty)} \int_0^t e^{-2\nu(t-s)} s^{2\beta-1}ds <\infty.$
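For completeness, here is one way to see that $c_{\nu,\beta}$ is finite, assuming $\nu>0$ as the exponential decay in \eqref{EstimateIofS(t)} suggests. For $t\leq 1$ we simply have $\int_0^t e^{-2\nu(t-s)}s^{2\beta-1}ds\leq \int_0^t s^{2\beta-1}ds\leq \frac{1}{2\beta}$, while for $t\geq 1$, splitting the integral at $\frac{t}{2}$ gives
\begin{equation*}
\int_0^t e^{-2\nu(t-s)}s^{2\beta-1}ds\leq e^{-\nu t}\int_0^{t/2}s^{2\beta-1}ds+\Big(\frac{t}{2}\Big)^{2\beta-1}\int_{t/2}^t e^{-2\nu(t-s)}ds\leq \frac{(t/2)^{2\beta}e^{-\nu t}}{2\beta}+\frac{(t/2)^{2\beta-1}}{2\nu},
\end{equation*}
and the right hand side is bounded uniformly in $t\geq 1$ (recall that $2\beta-1<0$).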
Combining \eqref{estimateofstrictsolutionI1} and \eqref{estimateofstrictsolutionI2Case2}, we conclude that
\begin{align*}
\mathbb E|X(t)|^2 \leq & 2\mathbb E[ |I_1(t)|^2 + |I_2(t)|^2]\\
\leq &2 c_1[\mathbb E|\xi|^2 + |F|_{\mathcal F^{\beta,\sigma}}^2]+2c(E) c_{\nu,\beta} \iota_0^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2,
\end{align*}
from which \eqref{estimateofstrictsolutionCase2} follows.
From these steps, the assertion of the theorem under the condition \eqref{FGSpace1} is proved.
Let us now assume that the condition \eqref{FGSpace2} holds. As in the previous case, we verify that the assertions in {\bf Steps 1-4} remain true. Indeed, the assertions in {\bf Step 1} and {\bf Step 3} are obviously true, since they are independent of the conditions \eqref{FGSpace1} and \eqref{FGSpace2}.
To show the assertion in {\bf Step 2}, it suffices to verify that $\int_0^t A^\beta S(t-s) G(s)dw_s$ is well-defined. This follows from the fact that
\begin{align*}
\int_0^t |A^\beta S(t-s) G(s) |^2 ds &\leq \iota_\beta^2|G|_{\mathcal B([0,t];E)}^2 \int_0^t (t-s)^{-2\beta}ds\\
&=\frac{\iota_\beta^2}{1-2\beta}|G|_{\mathcal B([0,t];E)}^2 t^{1-2\beta}<\infty,
\end{align*}
where we used the property \eqref{EstimateIofAnu}.
To obtain the assertion in {\bf Step 4}, it suffices to prove that $I_2$ is $\beta$\,{-}\,H\"older continuous on $[0,T]$. Indeed, let $0\leq s<t\leq T$. Using \eqref{EstimateIofAnu} and \eqref{EstimateIofS(t)Maximum},
we have
\begin{align*}
\mathbb E& |I_2(t)-I_2(s)|^2 \notag\\
=&\mathbb E \Big|\int_s^t S(t-r)G(r)dw_r\Big|^2 +\mathbb E\Big|\int_0^s [S(t-r)-S(s-r)]G(r)dw_r\Big|^2 \notag\\
\leq & c(E)\Big [\int_s^t |S(t-r)|^2 | G(r)|^2 dr+ \int_0^s |S(t-r)-S(s-r)|^2 |G(r)|^2dr \Big] \notag\\
\leq & c(E)|G|_{\mathcal B([0,T];E)}^2 \Big [\iota_0^2(t-s) + \int_0^s\Big |\int_{s-r}^{t-r} AS(u) du\Big|^2 dr \Big ] \notag\\
\leq & c(E)|G|_{\mathcal B([0,T];E)}^2 \Big [\iota_0^2(t-s) +\iota_1^2 \int_0^s \Big(\int_{s-r}^{t-r} u^{-1}du\Big)^2 dr \Big ].
\end{align*}
Fix $\alpha \in (0,\frac{1}{2})$. Writing the exponent $-1$ as $-1=-\alpha+(\alpha-1)$, we have
$$\Big(\int_{s-r}^{t-r} u^{-1}du\Big)^2\leq (s-r)^{-2\alpha} \frac{(t-s)^{2\alpha}}{\alpha^2}, \hspace{1cm} (\text{see } \eqref{Eq37}).$$
Therefore,
\begin{align*}
\mathbb E& |I_2(t)-I_2(s)|^2 \notag\\
\leq &c(E)|G|_{\mathcal B([0,T];E)}^2 \Big [\iota_0^2(t-s) +\frac{\iota_1^2}{\alpha^2} (t-s)^{2\alpha} \int_0^s (s-r)^{-2\alpha} dr \Big ] \notag\\
\leq &c(E)|G|_{\mathcal B([0,T];E)}^2 \Big [\iota_0^2(t-s) +\frac{\iota_1^2 s^{1-2\alpha}}{\alpha^2(1-2\alpha)} (t-s)^{2\alpha} \Big]\notag\\
= &c(E)|G|_{\mathcal B([0,T];E)}^2 \Big [\iota_0^2(t-s)^{1-2\alpha}+\frac{\iota_1^2 s^{1-2\alpha}}{\alpha^2(1-2\alpha)} \Big ](t-s)^{2\alpha} \notag\\
\leq &c(E)|G|_{\mathcal B([0,T];E)}^2 \Big [\iota_0^2+\frac{\iota_1^2 }{\alpha^2(1-2\alpha)} \Big ]T^{1-2\alpha} (t-s)^{2\alpha}.
\end{align*}
Applying Theorem \ref{Kolmogorov testGaussian} to the Gaussian process $I_2(t)$, we observe that $I_2\in \mathcal C^\alpha([0,T];E)$ for any $\alpha \in (0,\frac{1}{2}).$ In particular, $I_2\in \mathcal C^\beta([0,T];E).$
Let us finally verify \eqref{estimateofstrictsolution}.
Obviously, the estimate \eqref{estimateofstrictsolutionI1} still holds true, since it depends on neither \eqref{FGSpace1} nor \eqref{FGSpace2}.
Meanwhile, using \eqref{EstimateIofS(t)}, we have
\begin{align}
\mathbb E|I_2(t)|^2 &\leq c(E) \int_0^t |S(t-s)G(s)|^2 ds \notag\\
&\leq c(E) \iota_0^2 |G|_{\mathcal B([0,t];E)}^2 \int_0^t e^{-2\nu(t-s)}ds \notag\\
&=\frac{c(E) \iota_0^2 |G|_{\mathcal B([0,t];E)}^2}{2\nu} (1-e^{-2\nu t}), \hspace{1cm} t\in [0,T]. \label{estimateofstrictsolutionI2}
\end{align}
Combining \eqref{estimateofstrictsolutionI1} and \eqref{estimateofstrictsolutionI2}, we obtain that
$$\mathbb E|X(t)|^2\leq 2 c_1[\mathbb E|\xi|^2 + |F|_{\mathcal F^{\beta,\sigma}}^2]+\frac{c(E) \iota_0^2 |G|_{\mathcal B([0,t];E)}^2}{\nu} (1-e^{-2\nu t}), \quad t\in [0,T].$$
Thus, \eqref{estimateofstrictsolution} has been verified.
This completes the proof.
\end{proof}
The following corollary is a direct consequence of Theorems \ref{StrictSolutions}-\ref{regularity theorem autonomous linear evolution equation}.
\begin{corollary}
Assume that \eqref{Eq35} holds true and that $\xi \in \mathcal D(A^\beta)$ a.s. Then there exists a strict solution $X$ of \eqref{autonomous linear evolution equation} in the space:
$$ X \in \mathcal C((0,T];\mathcal D(A)) \cap \mathcal C([0,T];\mathcal D(A^\beta))\cap \mathcal C^\beta([0,T];E).$$
Furthermore, $X$ satisfies the estimate \eqref{estimateofstrictsolutionCase2} when \eqref{FGSpace1} takes place, and satisfies the estimate \eqref{estimateofstrictsolution} when \eqref{FGSpace2} takes place.
\end{corollary}
\section{Semilinear evolution equations with additive noise} \label{section5}
In this section, we study semilinear evolution equations with additive noise: the function $F(t,x)$ is split into two parts, one depending only on $x$ and the other only on $t$, i.e. $F(t,x)=F_1(x)+F_2(t)$, and the function $G(t,x)=G(t)$ depends only on $t$. Let us rewrite \eqref{E2} in the form of a semilinear evolution equation with additive noise:
\begin{equation} \label{semilinear evolution equation}
\begin{cases}
dX+AXdt=[F_1(X)+F_2(t)]dt+ G(t)dw_t, \hspace{1cm} t\in(0,T],\\
X(0)=\xi,
\end{cases}
\end{equation}
where $F_1$ is measurable from $(\Omega_T\times E, \mathcal P_T\times \mathcal B(E))$ to $(E,\mathcal B(E)),$ $F_2$ and $G$ are non-random measurable functions from $[0,T]$ to $E$.
Let us fix constants $\eta, \beta, \sigma $ such that
\begin{equation*}
\begin{cases}
0<\eta<\frac{1}{2}, \\
0\lor (2\eta-\frac{1}{2})<\beta<\eta, \\
0<\sigma <\beta.
\end{cases}
\end{equation*}
We suppose further that
\begin{itemize}
\item [(H1)] $F_1$ is defined on the domain $ \mathcal D(A^\eta)$ and satisfies a Lipschitz condition of the form
\begin{equation} \label{AbetaLipschitzcondition}
|F_1(x)-F_1(y)|\leq c_{F_1} [ |A^\eta(x-y)|+|A^{\tilde\beta} (x-y)|], \quad\quad x,y\in \mathcal D(A^\eta)
\end{equation}
with $\tilde\beta=\beta$, where $c_{F_1}$ is some positive constant (an illustrative example is given right after this list of hypotheses).
\item [(H2)] $F_2 \in \mathcal F^{\beta,\sigma} ((0,T];E). $
\item [(H3)] $ G\in \mathcal F^{\beta+\frac{1}{2},\sigma} ((0,T];E).$
\end{itemize}
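As an illustration of {\rm (H1)} (this example is ours and is not taken from the equations treated later; it is only meant to fix ideas): if $B_1$ and $B_2$ are bounded linear operators on $E$ and $f_0\in E$, then the affine map
\begin{equation*}
F_1(x)=B_1A^\eta x+B_2A^\beta x+f_0,\qquad x\in \mathcal D(A^\eta),
\end{equation*}
satisfies \eqref{AbetaLipschitzcondition} with $\tilde\beta=\beta$ and $c_{F_1}=\max\{|B_1|,|B_2|\}$, since $|F_1(x)-F_1(y)|\leq |B_1||A^\eta(x-y)|+|B_2||A^\beta(x-y)|$ for all $x,y\in\mathcal D(A^\eta)$.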
\subsection{Existence of local mild solutions}
\begin{theorem}\label{semilinear evolution equationTheorem1}
Let \eqref{spectrumsectorialdomain}, \eqref{resolventnorm}, {\rm (H1)}, {\rm (H2)} and {\rm (H3)} be satisfied.
Let $\xi\in \mathcal D(A^\beta)$ such that $\mathbb E|A^\beta\xi|^2<\infty$. Then \eqref{semilinear evolution equation} possesses a unique local mild solution $X$ in the function space:
\begin{equation}\label{semilinear evolution equationTheorem1Regularity}
X\in \mathcal C((0,T_{F_1,F_2,G,\xi}];\mathcal D(A^\eta))\cap \mathcal C([0,T_{F_1,F_2,G,\xi}];\mathcal D(A^\beta)),
\end{equation}
where $T_{F_1,F_2,G,\xi}$ depends only on $|F_2|_{\mathcal F^{\beta,\sigma}}^2$, $|G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2$, $\mathbb E |F_1(0)|^2$ and $\mathbb E |A^\beta\xi|^2$. Furthermore, $X$ satisfies the estimates
\begin{equation}\label{semilinear evolution equationExpectationAbetaXSquare}
\mathbb E|X(t)|^2+\mathbb E |A^\beta X(t)|^2 \leq C_{F_1,F_2,G,\xi}, \hspace{2cm} t\in [0,T_{F_1,F_2,G,\xi}],
\end{equation}
and
\begin{equation}\label{semilinear evolution equationExpectationAetaXSquare}
\mathbb E |A^\eta X(t)|^2 \leq C_{F_1,F_2,G,\xi} \Big[t^{-2(\eta-\beta)}+t^{2(1+\beta-2\eta)} +t^{2(1-\eta)}\Big], \hspace{0.5cm} t\in (0,T_{F_1,F_2,G,\xi}]
\end{equation}
with some constant $C_{F_1,F_2,G,\xi}$ depending only on $|F_2|_{\mathcal F^{\beta,\sigma}}^2, |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2, \mathbb E |F_1(0)|^2$ and $ \mathbb E |A^\beta\xi|^2.$
\end{theorem}
\begin{proof}
We shall use the fixed point theorem for contractions to prove existence and uniqueness of a local solution.
For each $S\in (0,T]$, we set the underlying space:
\begin{align*}
\Xi (S)=&\{Y\in \mathcal C((0,S];\mathcal D(A^\eta)) \cap \mathcal C([0,S];\mathcal D(A^\beta)) \text{ such that }\\
& \sup_{0<t\leq S} t^{2(\eta-\beta)} \mathbb E|A^\eta Y(t)|^2+ \sup_{0\leq t\leq S}\mathbb E|A^\beta Y(t)|^2 <\infty \}.
\end{align*}
Then up to indistinguishability, $\Xi (S)$ is a Banach space with norm
\begin{equation} \label{norminXi(S)}
|Y|_{\Xi (S)}=\Big[\sup_{0<t\leq S} t^{2(\eta-\beta)} \mathbb E|A^\eta Y(t)|^2+ \sup_{0\leq t\leq S}\mathbb E|A^\beta Y(t)|^2\Big]^{\frac{1}{2}}.
\end{equation}
Let us fix a constant $\kappa>0$ such that
\begin{equation} \label{Eq45}
\frac{\kappa^2}{2}> C_1\vee C_2,
\end{equation}
where the constants $C_1$ and $C_2$ will be fixed below.
Consider the subset $\Upsilon(S)$ of $\Xi (S)$ consisting of all functions $Y\in \Xi (S)$ such that
\begin{equation} \label{Upsilon(S)Definition}
\max\left\{\sup_{0<t\leq S} t^{2(\eta-\beta)} \mathbb E|A^\eta Y(t)|^2,
\sup_{0\leq t\leq S}\mathbb E|A^\beta Y(t)|^2\right\} \leq \kappa^2.
\end{equation}
Obviously, $\Upsilon(S) $ is a nonempty closed subset of $\Xi (S)$.
For $Y\in \Upsilon(S)$, we define a function on $[0,S]$ by
\begin{equation} \label{DefinitionOfFunctionPhi}
\Phi Y(t)=S(t) \xi+\int_0^t S(t-s)[F_1(Y(s))+F_2(s)]ds + \int_0^t S(t-s)G(s) dw_s.
\end{equation}
Our goal is then to verify that $\Phi$ is a contraction mapping from $\Upsilon(S)$ into itself, provided that $S$ is sufficiently small, and that the fixed point of $\Phi$ is the desired solution of \eqref{semilinear evolution equation}. For this purpose, we divide the proof into several steps.
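For orientation, the fixed point will be obtained by the classical Banach/Picard iteration: starting from an arbitrary $X^{(0)}\in \Upsilon(S)$ one sets
\begin{equation*}
X^{(n+1)}=\Phi X^{(n)},\qquad n=0,1,2,\dots;
\end{equation*}
once $\Phi$ maps $\Upsilon(S)$ into itself and is a strict contraction with respect to the norm \eqref{norminXi(S)}, these iterates converge geometrically in $\Xi(S)$, and the limit lies in $\Upsilon(S)$ (which is closed) and is the unique fixed point $X=\Phi X$.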
{\bf Step 1}. Let us show that $ \Phi Y \in \Upsilon(S)$ for every $Y \in \Upsilon(S)$. Let $Y\in \Upsilon(S)$.
Due to \eqref{AbetaLipschitzcondition} and \eqref{Upsilon(S)Definition}, we observe that
\begin{align}
\mathbb E|F_1(Y(t))|^2 &\leq \mathbb E[c_{F_1}|A^\eta Y(t)|+c_{F_1}|A^\beta Y(t)|+|F_1(0)|]^2\notag \\
&\leq 3[c_{F_1}^2 \mathbb E |A^\eta Y(t)|^2+c_{F_1}^2 \mathbb E|A^\beta Y(t)|^2 +\mathbb E|F_1(0)|^2]\label{Eq10}\\
&\leq 3[c_{F_1}^2 \kappa^2 t^{2(\beta-\eta)} + c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2], \hspace{1cm} t\in (0,S]. \label{Eq3}
\end{align}
First, we shall show that $ \Phi Y$ satisfies \eqref{Upsilon(S)Definition}. For $\theta \in [\beta, \frac{1}{2})$, from \eqref{Upsilon(S)Definition} and \eqref{DefinitionOfFunctionPhi}, we have
\begin{align*}
&t^{2(\theta-\beta)}\mathbb E|A^\theta\{\Phi Y\}(t)|^2 \notag\\
\leq & 3t^{2(\theta-\beta)}\mathbb E \Big[ |A^\theta S(t) \xi|^2+\Big|\int_0^tA^\theta S(t-s)[F_1(Y(s))+F_2(s)]ds\Big|^2 \notag\\
&+\Big |\int_0^tA^\theta S(t-s) G(s) dw_s\Big|^2 \Big] \notag\\
\leq & 3t^{2(\theta-\beta)} |A^{\theta-\beta} S(t)|^2\mathbb E|A^\beta \xi|^2\notag\\
&+6t^{2(\theta-\beta)}\mathbb E\Big|\int_0^tA^\theta S(t-s)F_1(Y(s))ds\Big|^2 \notag\\
&+6t^{2(\theta-\beta)}\mathbb E\Big|\int_0^t A^\theta S(t-s)F_2(s)ds\Big|^2 \notag\\
& +3c(E)t^{2(\theta-\beta)} \int_0^t|A^\theta S(t-s) G(s)|^2 ds. \notag
\end{align*}
On account of \eqref{FbetasigmaSpaceProperty} and \eqref{EstimateIofAnu}, we observe that
\begin{align*}
&t^{2(\theta-\beta)}\mathbb E|A^\theta\{\Phi Y\}(t)|^2 \notag\\
\leq & 3\iota_{\theta-\beta}^2 \mathbb E|A^\beta \xi|^2+6t^{2(\theta-\beta)} \iota_\theta^2 \mathbb E \Big[\int_0^t (t-s)^{-\theta}|F_1(Y(s))|ds\Big]^2\notag\\
&+6t^{2(\theta-\beta)} \iota_\theta^2 |F_2|_{\mathcal F^{\beta,\sigma} }^2 \Big[\int_0^t(t-s)^{-\theta}s^{\beta-1}ds\Big]^2 \notag\\
&+3c(E) \iota_\theta^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 t^{2(\theta-\beta)}\int_0^t(t-s)^{-2\theta} s^{2\beta-1} ds \notag\\
\leq & 3\iota_{\theta-\beta}^2 \mathbb E|A^\beta \xi|^2+6t^{1+2(\theta-\beta)} \iota_\theta^2 \int_0^t (t-s)^{-2\theta}\mathbb E |F_1(Y(s))|^2ds\notag\\
&+6 \iota_\theta^2 |F_2|_{\mathcal F^{\beta,\sigma} }^2 {\bf B}(\beta,1-\theta)^2 +3c(E) \iota_\theta^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 {\bf B}(2\beta,1-2\theta). \notag
\end{align*}
In view of \eqref{Eq3} and the inequality $t^{1+2(\theta-\beta)}\leq t^{1+2(\theta-\eta)}$ for $0<t\leq 1$ (recall that $S$ is eventually taken small, so we may assume $S\leq 1$ without loss of generality), we obtain that
\begin{align}
&t^{2(\theta-\beta)}\mathbb E|A^\theta\{\Phi Y\}(t)|^2 \notag\\
\leq & 3\iota_{\theta-\beta}^2 \mathbb E|A^\beta \xi|^2\notag\\
&+18t^{1+2(\theta-\beta)} \iota_\theta^2 \int_0^t (t-s)^{-2\theta}[c_{F_1}^2 \kappa^2 s^{2(\beta-\eta)} + c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2]ds\notag\\
&+6 \iota_\theta^2 |F_2|_{\mathcal F^{\beta,\sigma} }^2 {\bf B}(\beta, 1-\theta)^2+3c(E) \iota_\theta^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 {\bf B}(2\beta, 1-2\theta) \notag\\
\leq & 3\iota_{\theta-\beta}^2 \mathbb E|A^\beta \xi|^2+18 \iota_\theta^2 c_{F_1}^2 \kappa^2 t^{1+2(\theta-\eta)}\int_0^t (t-s)^{-2\theta} s^{2(\beta-\eta)} ds\notag\\
&+\frac{18 \iota_\theta^2 [ c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2] }{1-2\theta} t^{2(1-\beta)}\notag\\
&+6 \iota_\theta^2 |F_2|_{\mathcal F^{\beta,\sigma} }^2 {\bf B}(\beta, 1-\theta)^2+3c(E) \iota_\theta^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 {\bf B}(2\beta, 1-2\theta) \notag\\
= & 3\iota_{\theta-\beta}^2 \mathbb E|A^\beta \xi|^2+6 \iota_\theta^2 |F_2|_{\mathcal F^{\beta,\sigma} }^2 {\bf B}(\beta, 1-\theta)^2\label{tthetabetaAthetaPhiYestimate}\\
&+3c(E) \iota_\theta^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 {\bf B}(2\beta, 1-2\theta)\notag\\
&+ 18\iota_\theta^2 c_{F_1}^2 \kappa^2 {\bf B}( 1+2\beta-2\eta, 1-2\theta) t^{2(1+\beta-2\eta)}\notag\\
&+\frac{18 \iota_\theta^2 [ c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2] }{1-2\theta} t^{2(1-\beta)}. \notag
\end{align}
We apply these estimates with $\theta=\eta$ and $\theta=\beta$. Then it is observed that if $C_1$ and $C_2$ are fixed in such a way that
\begin{equation} \label{Eq42}
\begin{aligned}
C_1>&3\iota_{\eta-\beta}^2 \mathbb E|A^\beta \xi|^2+6 \iota_\eta^2 |F_2|_{\mathcal F^{\beta,\sigma} }^2 {\bf B}(\beta, 1-\eta)^2\\
&+3c(E) \iota_\eta^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 {\bf B}(2\beta, 1-2\eta),\\
C_2>&3\iota_0^2 \mathbb E|A^\beta \xi|^2+6 \iota_\beta^2 |F_2|_{\mathcal F^{\beta,\sigma} }^2 {\bf B}(\beta, 1-\beta)^2\\
&+3c(E) \iota_\beta^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 {\bf B}(2\beta, 1-2\beta),
\end{aligned}
\end{equation}
and if $S$ is sufficiently small, we have
\begin{align}
t^{2(\eta-\beta)}\mathbb E|A^\eta\{\Phi Y\}(t)|^2 \leq & C_1+ 18\iota_\eta^2 c_{F_1}^2 \kappa^2 {\bf B}( 1+2\beta-2\eta, 1-2\eta) t^{2(1+\beta-2\eta)}\notag\\
&+\frac{18 \iota_\eta^2 [ c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2] }{1-2\eta} t^{2(1-\beta)}\notag\\
\leq& \frac{\kappa^2}{2}+ 18\iota_\eta^2 c_{F_1}^2 \kappa^2 {\bf B}( 1+2\beta-2\eta, 1-2\eta) t^{2(1+\beta-2\eta)}\notag\\
&+\frac{18 \iota_\eta^2 [ c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2] }{1-2\eta} t^{2(1-\beta)}\notag\\
<&\kappa^2, \hspace{3cm} t\in (0,S],\label{Eq43}
\end{align}
and
\begin{align}
\mathbb E|A^\beta\{\Phi Y\}(t)|^2
\leq& C_2+ 18\iota_\beta^2 c_{F_1}^2 \kappa^2 {\bf B}( 1+2\beta-2\eta, 1-2\beta) t^{2(1+\beta-2\eta)}\notag\\
&+\frac{18 \iota_\beta^2 [ c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2] }{1-2\beta} t^{2(1-\beta)}\notag\\
\leq&\frac{\kappa^2}{2}+18\iota_\beta^2 c_{F_1}^2 \kappa^2 {\bf B}( 1+2\beta-2\eta, 1-2\beta) t^{2(1+\beta-2\eta)}\notag\\
&+\frac{18 \iota_\beta^2 [ c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2] }{1-2\beta} t^{2(1-\beta)}\notag\\
<&\kappa^2, \hspace{3cm} t\in (0,S]. \label{Eq44}
\end{align}
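For definiteness, since $1+\beta-2\eta>0$ and $1-\beta>0$, one admissible smallness condition on $S$ (in addition to $S\leq 1$) is
\begin{equation*}
18\iota_\theta^2 c_{F_1}^2 \kappa^2 {\bf B}( 1+2\beta-2\eta, 1-2\theta)\, S^{2(1+\beta-2\eta)}+\frac{18 \iota_\theta^2 [ c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2] }{1-2\theta}\, S^{2(1-\beta)}<\frac{\kappa^2}{2}
\qquad \text{for } \theta\in\{\beta,\eta\},
\end{equation*}
which makes both \eqref{Eq43} and \eqref{Eq44} hold.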
We thus have shown that
\begin{equation*}
\max\left\{\sup_{0<t\leq S} t^{2(\eta-\beta)} \mathbb E|A^\eta \Phi Y(t)|^2,
\sup_{0\leq t\leq S}\mathbb E|A^\beta \Phi Y(t)|^2\right\} \leq \kappa^2.
\end{equation*}
This means that $ \Phi Y$ satisfies \eqref{Upsilon(S)Definition}.
Now, we shall show that
$$\Phi (Y)\in \mathcal C((0,S];\mathcal D(A^\eta))\cap \mathcal C([0,S];\mathcal D(A^\beta)).$$
We divide $\Phi Y$ into two parts: $\Phi Y(t)=\Psi Y(t)+ I_2(t),$
where
$${\Psi Y}(t)=S(t) \xi+\int_0^t S(t-s)[F_1(Y(s))+F_2(s)]ds,$$
and $I_2(t)$ is defined by \eqref{XI1I2}.
As seen in {\bf Step 2} of Theorem \ref{regularity theorem autonomous linear evolution equation},
$$
\begin{cases}
A^\beta I_2(t)=A^\beta \int_0^t S(t-s) G(s) dw_s =\int_0^t A^\beta S(t-s)G(s) dw_s,\\
A^\beta I_2 \in \mathcal C([0,S];E).
\end{cases}
$$
Furthermore, by using \eqref {FbetasigmaSpaceProperty} and \eqref{EstimateIofAnu}, we have
\begin{align*}
&\int_0^t |A^\eta S(t-s) G(s) |^2 ds \leq \iota_\eta^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 \int_0^t (t-s)^{-2\eta} s^{2\beta-1}ds\\
&=\iota_\eta^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 {\bf B}(2\beta, 1-2\eta) t^{2(\beta-\eta)}< \infty, \hspace{1cm} t\in (0,S].
\end{align*}
By the definition of stochastic integrals, $\int_0^t A^\eta S(t-s) G(s)dw_s, \,t \in (0,S] $ is well-defined and belongs to the space $ \mathcal C((0,S];E).$ We thus have verified that
$$I_2\in \mathcal C((0,S];\mathcal D(A^\eta))\cap \mathcal C([0,S];\mathcal D(A^\beta)).$$
Therefore, to finish {\bf Step 1}, it suffices to show that
$$\Psi Y\in \mathcal C((0,S];\mathcal D(A^\eta))\cap \mathcal C([0,S];\mathcal D(A^\beta)).$$
For $0<s<t\leq S$, by the semigroup property we observe that
\begin{align*}
\Psi Y(t)-\Psi Y(s)=&S(t-s)S(s)\xi+ S(t-s)\int_0^s S(s-r)[F_1(Y(r))+F_2(r)]dr\\
& -\Psi Y(s)+\int_s^t S(t-r)[F_1(Y(r))+F_2(r)]dr\\
=&[S(t-s)-I]\Psi Y(s)+\int_s^t S(t-r)[F_1(Y(r))+F_2(r)]dr.
\end{align*}
Let $\rho\in (\frac{1}{2}, 1-\eta)$. In view of \eqref{EstimateIofAnu} and \eqref{EstimateIofS(t)-IA-theta}, we have
\begin{align*}
&|A^\eta[\Psi Y(t)-\Psi Y(s)]| \notag\\
\leq &|[S(t-s)-I]A^{-\rho}| |A^{\eta+\rho}\Psi Y(s)|+\int_s^t |A^\eta S(t-r)| [|F_1(Y(r))|+|F_2(r)|]dr \notag\\
\leq &\frac{\iota_{1-\rho}(t-s)^\rho }{\rho} \Big |A^{\eta+\rho}\Big[S(s) \xi+\int_0^s S(s-r)[F_1(Y(r))+F_2(r)]dr\Big]\Big| \notag \\
&+\iota_\eta\int_s^t (t-r)^{-\eta} [|F_1(Y(r))|+|F_2(r)|]dr \notag\\
\leq &\frac{\iota_{1-\rho}(t-s)^\rho }{\rho} |A^{\eta+\rho-\beta}S(s)| |A^\beta \xi|\notag\\
&+\frac{\iota_{1-\rho}(t-s)^\rho }{\rho} \int_0^s |A^{\eta+\rho} S(s-r)| |F_1(Y(r))|dr \notag \\
&+\frac{\iota_{1-\rho}(t-s)^\rho }{\rho} \int_0^s |A^{\eta+\rho} S(s-r)| |F_2(r)|dr\notag\\
&+\iota_\eta\int_s^t (t-r)^{-\eta} |F_1(Y(r))|dr+\iota_\eta\int_s^t (t-r)^{-\eta} |F_2(r)|dr \notag\\
\leq &\frac{\iota_{1-\rho}\iota_{\eta+\rho-\beta}(t-s)^\rho }{\rho} s^{-\eta-\rho+\beta} |A^\beta \xi|\notag\\
&+\frac{\iota_{1-\rho}\iota_{\eta+\rho} (t-s)^\rho }{\rho}\int_0^s (s-r)^{-\eta-\rho} |F_1(Y(r))|dr \notag\\
&+\frac{\iota_{1-\rho}\iota_{\eta+\rho} |F_2|_{\mathcal F^{\beta,\sigma}}(t-s)^\rho }{\rho} \int_0^s (s-r)^{-\eta-\rho} r^{\beta-1} dr \notag \\
& +\iota_\eta\int_s^t (t-r)^{-\eta} |F_1(Y(r))|dr\notag \\
&+\iota_\eta|F_2|_{\mathcal F^{\beta,\sigma}}\int_s^t (t-r)^{-\eta} r^{\beta-1} dr \notag\\
= &\frac{\iota_{1-\rho} \iota_{\eta+\rho-\beta} }{\rho}|A^\beta \xi| s^{\beta-\eta-\rho}(t-s)^\rho \notag\\
&+\frac{\iota_{1-\rho} \iota_{\eta+\rho}|F_2|_{\mathcal F^{\beta,\sigma}}{\bf B}(\beta,1-\eta-\rho)}{\rho} s^{\beta-\eta-\rho}(t-s)^\rho \notag \\
&+\iota_\eta|F_2|_{\mathcal F^{\beta,\sigma}}\int_s^t (t-r)^{-\eta}r^{\beta-1}dr \notag\\
&+\frac{\iota_{1-\rho}\iota_{\eta+\rho} (t-s)^\rho }{\rho}\int_0^s (s-r)^{-\eta-\rho} |F_1(Y(r))|dr \notag\\
& +\iota_\eta\int_s^t (t-r)^{-\eta} |F_1(Y(r))|dr.\notag
\end{align*}
Writing $\beta-1=(\eta+\rho-1)+(\beta-\eta-\rho)$, where both exponents are negative, we have
\begin{align*}
\int_s^t (t-r)^{-\eta} r^{\beta-1} dr &\leq \int_s^t (t-r)^{-\eta} (r-s)^{\eta+\rho-1} s^{\beta-\eta-\rho} dr\\
&= {\bf B}(\eta+\rho,1-\eta) s^{\beta-\eta-\rho}(t-s)^\rho.
\end{align*}
Hence,
\begin{align*}
&|A^\eta[\Psi Y(t)-\Psi Y(s)]| \notag\\
\leq &\frac{\iota_{1-\rho}\iota_{\eta+\rho-\beta} }{\rho} |A^\beta \xi| s^{\beta-\eta-\rho}(t-s)^\rho \notag\\
&+\Big[\frac{\iota_{1-\rho} \iota_{\eta+\rho} {\bf B}(\beta,1-\eta-\rho) }{\rho} +\iota_\eta{\bf B}(\eta+\rho,1-\eta)\Big]|F_2|_{\mathcal F^{\beta,\sigma}}s^{\beta-\eta-\rho}(t-s)^\rho \notag\\
&+\frac{\iota_{1-\rho}\iota_{\eta+\rho}}{\rho} (t-s)^\rho \int_0^s (s-r)^{-\eta-\rho} |F_1(Y(r))|dr \notag\\
&+\iota_\eta\int_s^t (t-r)^{-\eta} |F_1(Y(r))|dr.\notag
\end{align*}
Taking the expectation of the square of both sides of the above inequality, we see that
\begin{align*}
\mathbb E&|A^\eta[\Psi Y(t)-\Psi Y(s)]|^2 \notag\\
\leq &\frac{4\iota_{1-\rho}^2\iota_{\eta+\rho-\beta}^2}{\rho^2} \mathbb E|A^\beta \xi|^2 s^{2(\beta-\eta-\rho)}(t-s)^{2\rho} \notag\\
& +4\Big[\frac{\iota_{1-\rho}\iota_{\eta+\rho} {\bf B}(\beta,1-\eta-\rho) }{\rho} +\iota_\eta{\bf B}(\eta+\rho,1-\eta)\Big]^2\notag\\
&\times |F_2|_{\mathcal F^{\beta,\sigma}}^2s^{2(\beta-\eta-\rho)}(t-s)^{2\rho} \notag\\
&+\frac{4\iota_{1-\rho}^2\iota_{\eta+\rho}^2}{\rho^2} (t-s)^{2\rho}\mathbb E\Big[ \int_0^s (s-r)^{-\eta-\rho} |F_1(Y(r))|dr\Big]^2 \notag\\
&+4\iota_\eta^2\mathbb E\Big[\int_s^t (t-r)^{-\eta} |F_1(Y(r))|dr\Big]^2.\notag
\end{align*}
Since
\begin{align*}
&\Big[ \int_0^s (s-r)^{-\eta-\rho} |F_1(Y(r))|dr\Big]^2 \\
&=\Big[ \int_0^s (s-r)^{\frac{-\eta-\rho}{2}} (s-r)^{\frac{-\eta-\rho}{2}}|F_1(Y(r))|dr\Big]^2 \\
&\leq \int_0^s (s-r)^{-\eta-\rho}dr \int_0^s (s-r)^{-\eta-\rho}|F_1(Y(r))|^2dr\\
&=\frac{s^{1-\eta-\rho}}{1-\eta-\rho} \int_0^s (s-r)^{-\eta-\rho}|F_1(Y(r))|^2dr,
\end{align*}
we have
\begin{align}
\mathbb E&|A^\eta[\Psi Y(t)-\Psi Y(s)]|^2 \label{Eq25}\\
\leq &\frac{4\iota_{1-\rho}^2 \iota_{\eta+\rho-\beta}^2}{\rho^2} \mathbb E|A^\beta \xi|^2 s^{2(\beta-\eta-\rho)}(t-s)^{2\rho} \notag\\
& +4\Big[\frac{\iota_{1-\rho}\iota_{\eta+\rho} {\bf B}(\beta,1-\eta-\rho)}{\rho} +\iota_\eta{\bf B}(\eta+\rho,1-\eta)\Big]^2\notag\\
&\times |F_2|_{\mathcal F^{\beta,\sigma}}^2s^{2(\beta-\eta-\rho)}(t-s)^{2\rho} \notag\\
&+\frac{4\iota_{1-\rho}^2\iota_{\eta+\rho}^2}{\rho^2(1-\eta-\rho)} (t-s)^{2\rho} s^{1-\eta-\rho} \int_0^s (s-r)^{-\eta-\rho} \mathbb E|F_1(Y(r))|^2dr \notag\\
&+4\iota_\eta^2(t-s) \int_s^t (t-r)^{-2\eta}\mathbb E |F_1(Y(r))|^2dr.\notag
\end{align}
Using \eqref{Eq3}, we have
\begin{align}
& \int_0^s (s-r)^{-\eta-\rho} \mathbb E|F_1(Y(r))|^2dr \notag\\
\leq &
3c_{F_1}^2 \kappa^2\int_0^s (s-r)^{-\eta-\rho} r^{2(\beta-\eta)} dr +3[ c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2]\int_0^s (s-r)^{-\eta-\rho} dr \notag\\
= &
3c_{F_1}^2 \kappa^2 {\bf B}(1+2\beta-2\eta,1-\eta-\rho) s^{1+2\beta-3\eta-\rho} \label{Eq38}\\
&+\frac{3[ c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2] s^{1-\eta-\rho}}{1-\eta-\rho}, \notag
\end{align}
and
\begin{align}
&\int_s^t (t-r)^{-2\eta}\mathbb E |F_1(Y(r))|^2dr\notag\\
&\leq 3\int_s^t (t-r)^{-2\eta}[c_{F_1}^2 \kappa^2 r^{2(\beta-\eta)} + c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2]dr\notag\\
&=3 c_{F_1}^2 \kappa^2 \int_s^t (t-r)^{-2\eta}r^{2(\beta-\eta)} dr+\frac{3[ c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2]}{1-2\eta} (t-s)^{1-2\eta}.\label{Eq39}
\end{align}
Writing $2(\beta-\eta)=(\beta-\frac{1}{2})+(\frac{1}{2}+\beta-2\eta)$, we have
\begin{align}
\int_s^t (t-r)^{-2\eta} r^{2(\beta-\eta)} dr&\leq \int_s^t (t-r)^{-2\eta} (r-s)^{\beta-\frac{1}{2}} t^{\frac{1}{2}+\beta-2\eta}dr\notag\\
&={\bf B}(\frac{1}{2}+\beta,1-2\eta) t^{\frac{1}{2}+\beta-2\eta} (t-s)^{\frac{1}{2}+\beta-2\eta}. \label{Eq40}
\end{align}
By virtue of \eqref{Eq38}, \eqref{Eq39} and \eqref{Eq40}, it follows from \eqref{Eq25} that
\begin{align*}
\mathbb E&|A^\eta[\Psi Y(t)-\Psi Y(s)]|^2 \notag\\
\leq &\frac{4\iota_{1-\rho}^2 \iota_{\eta+\rho-\beta}^2}{\rho^2} \mathbb E|A^\beta \xi|^2 s^{2(\beta-\eta-\rho)}(t-s)^{2\rho} \notag\\
& +4\Big[\frac{\iota_{1-\rho}\iota_{\eta+\rho} {\bf B}(\beta,1-\eta-\rho)}{\rho} +\iota_\eta{\bf B}(\eta+\rho,1-\eta)\Big]^2\notag\\
&\times |F_2|_{\mathcal F^{\beta,\sigma}}^2s^{2(\beta-\eta-\rho)}(t-s)^{2\rho} \notag\\
&+\frac{12\iota_{1-\rho}^2\iota_{\eta+\rho}^2c_{F_1}^2 \kappa^2 {\bf B}(1+2\beta-2\eta,1-\eta-\rho) }{\rho^2(1-\eta-\rho)} (t-s)^{2\rho} s^{2(1+\beta-2\eta-\rho)} \notag\\
&+\frac{12\iota_{1-\rho}^2\iota_{\eta+\rho}^2[ c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2] }{\rho^2(1-\eta-\rho)^2} (t-s)^{2\rho} s^{2(1-\eta-\rho)} \notag\\
&+12 \iota_\eta^2c_{F_1}^2 \kappa^2 {\bf B}(\frac{1}{2}+\beta,1-2\eta) t^{\frac{1}{2}+\beta-2\eta} (t-s)^{\frac{3}{2}+\beta-2\eta}\notag\\
&+\frac{12\iota_\eta^2[ c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2]}{1-2\eta} (t-s)^{2(1-\eta)}.
\end{align*}
Since this estimate holds for any $\rho\in (\frac{1}{2}, 1-\eta),$ and since $1<\frac{3}{2}+\beta-2\eta<2(1-\eta)$, the Kolmogorov test then provides that $A^\eta \Psi Y $ is H\"older continuous on $(0,S]$ with an arbitrary exponent smaller than $\frac{1+2\beta}{4}-\eta.$ As a consequence, $\Psi Y\in \mathcal C((0,S];\mathcal D(A^\eta)).$
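To make the threshold explicit: on a compact subinterval of $(0,S]$ the powers of $s$ appearing above are bounded, and the smallest exponent of $(t-s)$ is $\frac{3}{2}+\beta-2\eta$ (indeed $2\rho$ can be taken arbitrarily close to $2(1-\eta)$, and $\frac{3}{2}+\beta-2\eta<2(1-\eta)$ since $\beta<\frac{1}{2}$). Writing
\begin{equation*}
\frac{3}{2}+\beta-2\eta=1+\gamma_0,\qquad \gamma_0=\frac{1}{2}+\beta-2\eta>0,
\end{equation*}
the Kolmogorov test applied to these second moments gives H\"older continuity with every exponent smaller than $\frac{\gamma_0}{2}=\frac{1+2\beta}{4}-\eta$, which is the threshold stated above.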
Similarly, we also have
\begin{align*}
\mathbb E&|A^\beta[\Psi Y(t)-\Psi Y(s)]|^2 \notag\\
\leq &\frac{4\iota_{1-\rho}^2 \iota_\rho^2}{\rho^2} \mathbb E|A^\beta \xi|^2 s^{-2\rho}(t-s)^{2\rho} \notag\\
& +4\Big[\frac{\iota_{1-\rho}\iota_{\beta+\rho} {\bf B}(\beta,1-\beta-\rho)}{\rho} +\iota_\beta{\bf B}(\beta+\rho,1-\beta)\Big]^2\notag\\
&\times |F_2|_{\mathcal F^{\beta,\sigma}}^2s^{-2\rho}(t-s)^{2\rho} \notag\\
&+\frac{12\iota_{1-\rho}^2\iota_{\beta+\rho}^2c_{F_1}^2 \kappa^2 {\bf B}(1,1-\beta-\rho) }{\rho^2(1-\beta-\rho)} (t-s)^{2\rho} s^{2(1-\eta-\rho)} \notag\\
&+\frac{12\iota_{1-\rho}^2\iota_{\beta+\rho}^2[ c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2] }{\rho^2(1-\beta-\rho)^2} (t-s)^{2\rho} s^{2(1-\beta-\rho)} \notag\\
&+12 \iota_\beta^2c_{F_1}^2 \kappa^2 {\bf B}(\frac{1}{2}+\beta,1-2\beta) t^{\frac{1}{2}-\beta} (t-s)^{\frac{3}{2}-\beta}\notag\\
&+\frac{12\iota_\beta^2[ c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2]}{1-2\beta} (t-s)^{2(1-\beta)}.
\end{align*}
The Kolmogorov test again provides that $\Psi Y\in \mathcal C((0,S];\mathcal D(A^\beta)).$
It remains to show that $A^\beta \Psi Y$ is continuous at $t=0$. Indeed, we already know by Theorem \ref{regularity theorem autonomous linear evolution equation} that
$$A^\beta\Big[S(t) \xi+\int_0^t S(t-s)F_2(s)ds + \int_0^t S(t-s)G(s) dw_s\Big]$$
is continuous at $t=0.$ Meanwhile, using \eqref{EstimateIofAnu} and \eqref{Eq3}, we have
\begin{align}
\mathbb E&\Big|A^\beta \int_0^t S(t-s)F_1(Y(s))ds\Big|^2 \notag\\
& \leq\mathbb E\Big[\int_0^t |A^\beta S(t-s)| |F_1(Y(s))|ds\Big]^2\notag\\
&\leq \iota_\beta^2 \mathbb E \Big[\int_0^t (t-s)^{-\beta}|F_1(Y(s))| ds\Big]^2\notag\\
&\leq \iota_\beta^2 t \int_0^t (t-s)^{-2\beta}\mathbb E|F_1(Y(s))|^2 ds\notag\\
&\leq 3\iota_\beta^2 t \int_0^t (t-s)^{-2\beta}[c_{F_1}^2 \kappa^2 s^{2(\beta-\eta)} + c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2] ds\notag\\
&= 3\iota_\beta^2 \Big[c_{F_1}^2 \kappa^2 {\bf B}(1+2\beta-2\eta,1-2\beta)t^{2(1-\eta)} +\frac{c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2 }{1-2\beta} t^{2(1-\beta)}\Big] \label{ExpectationOfAbetaSF1YSquare}\\
& \to 0 \hspace{2cm} \text{ as } t\searrow 0. \notag
\end{align}
Therefore, there exists a decreasing sequence $\{t_n\}$ converging to $0$ such that, almost surely,
$$\lim_{n\to\infty} A^\beta \int_0^{t_n} S(t_n-s)F_1(Y(s))ds=0.$$
By the continuity of $A^\beta \int_0^{t} S(t-s)F_1(Y(s))ds$ on $(0,S]$, we conclude that
$$\lim_{t\to 0} A^\beta \int_0^t S(t-s)F_1(Y(s))ds=0,$$
i.e. $A^\beta \int_0^t S(t-s)F_1(Y(s))ds$ is continuous at $t=0$.
We thus have shown that $A^\beta\Psi Y$
is continuous at $t=0.$
{\bf Step 2}. Let us show that $\Phi$ is a contraction mapping of $\Xi (S)$, provided $S>0$ is sufficiently small. Let $Y_1,Y_2\in \Xi (S)$ and $\theta\in [0, \frac{1}{2}).$ From \eqref{DefinitionOfFunctionPhi}, we see that
\begin{align*}
&t^{2(\theta-\beta)}\mathbb E|A^\theta[\Phi Y_1(t)-\Phi Y_2(t)]|^2 \notag \\
=&t^{2(\theta-\beta)} \mathbb E\Big|\int_0^t A^\theta S(t-s)[F_1(Y_1(s))-F_1(Y_2(s))]ds\Big|^2 \notag\\
\leq&t^{2(\theta-\beta)} \mathbb E\Big[\int_0^t |A^\theta S(t-s)| |F_1(Y_1(s))-F_1(Y_2(s))|ds\Big]^2. \notag
\end{align*}
By virtue of \eqref{EstimateIofAnu}, \eqref{AbetaLipschitzcondition} and \eqref{norminXi(S)}, we have
\begin{align*}
&t^{2(\theta-\beta)}\mathbb E|A^\theta[\Phi Y_1(t)-\Phi Y_2(t)]|^2 \notag \\
\leq& c_{F_1}^2\iota_\theta^2 t^{2(\theta-\beta)}\notag \\
&\times \mathbb E\Big[\int_0^t (t-s)^{-\theta} \{ |A^\eta(Y_1(s)-Y_2(s))|+ |A^\beta (Y_1(s)-Y_2(s))|\}ds\Big]^2 \notag\\
\leq& c_{F_1}^2\iota_\theta^2 t^{1+2(\theta-\beta)}\notag \\
&\times \mathbb E\int_0^t (t-s)^{-2\theta} \{ |A^\eta(Y_1(s)-Y_2(s))|+ |A^\beta (Y_1(s)-Y_2(s))|\}^2ds \notag\\
\leq& 2c_{F_1}^2\iota_\theta^2 t^{1+2(\theta-\beta)} \int_0^t (t-s)^{-2\theta} \mathbb E |A^\eta(Y_1(s)-Y_2(s))|^2ds \notag\\
&+2c_{F_1}^2\iota_\theta^2 t^{1+2(\theta-\beta)} \int_0^t (t-s)^{-2\theta} \mathbb E |A^\beta (Y_1(s)-Y_2(s))|^2ds \notag\\
\leq&2c_{F_1}^2\iota_\theta^2 t^{1+2(\theta-\beta)} \int_0^t (t-s)^{-2\theta} s^{2(\beta-\eta)} |Y_1-Y_2|_{{\Xi (S)}}^2ds \notag\\
&+2c_{F_1}^2\iota_\theta^2 t^{1+2(\theta-\beta)} \int_0^t (t-s)^{-2\theta} |Y_1-Y_2|_{{\Xi (S)}}^2ds \notag\\
=&2c_{F_1}^2\iota_\theta^2 \Big[{\bf B}(1+2\beta-2\eta,1-2\theta) t^{2(1-\eta)}+\frac{t^{2(1-\beta)}}{1-2\theta}\Big] |Y_1-Y_2|_{{\Xi (S)}}^2 \notag\\
\leq &2c_{F_1}^2 \Big[\iota_\theta^2{\bf B}(1+2\beta-2\eta,1-2\theta)+\frac{\iota_\theta^2 S^{2(\eta-\beta)}}{1-2\theta}\Big] S^{2(1-\eta)} |Y_1-Y_2|_{{\Xi (S)}}^2. \notag
\end{align*}
Applying these estimates with $\theta=\eta$ and $\theta=\beta$, we conclude that
\begin{align}
&|\Phi Y_1-\Phi Y_2|_{{\Xi (S)}}^2 \notag\\
=&\sup_{0<t\leq S} t^{2(\eta-\beta)} \mathbb E|A^\eta [\Phi Y_1(t)-\Phi Y_2(t)]|^2+ \sup_{0\leq t\leq S}\mathbb E|A^\beta [\Phi Y_1(t)-\Phi Y_2(t)]|^2\notag\\
\leq &2c_{F_1}^2 \Big[\iota_\eta^2{\bf B}(1+2\beta-2\eta,1-2\eta)+\iota_\beta^2{\bf B}(1+2\beta-2\eta,1-2\beta)\label{Eq46}\\
&\hspace*{0.5cm}+\frac{\iota_\eta^2S^{2(\eta-\beta)}}{1-2\eta}+\frac{\iota_\beta^2S^{2(\eta-\beta)}}{1-2\beta}\Big] S^{2(1-\eta)} |Y_1-Y_2|_{{\Xi (S)}}^2. \notag
\end{align}
This shows that $\Phi$ is a contraction on $\Xi (S)$, provided $S>0$ is sufficiently small.
{\bf Step 3}. Let us show existence of a local mild solution.
Let $S>0$ be so small that $\Phi$ maps $\Upsilon(S)$ into itself and is a contraction with respect to the norm of $\Xi (S)$. By {\bf Step 1} and {\bf Step 2}, such an $S=T_{F_1,F_2,G,\xi}$ can be determined in terms of $\mathbb E|F_1(0)|^2, |F_2|_{\mathcal F^{\beta,\sigma}}^2, |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2$ and $\mathbb E |A^\beta\xi|^2$. Thanks to the fixed point theorem, there exists a unique function $X\in \Upsilon(T_{F_1,F_2,G,\xi})$ such that $X=\Phi X$.
This means that $X$ is a mild solution of \eqref{semilinear evolution equation} in the function space:
$$
X\in \mathcal C((0,T_{F_1,F_2,G,\xi}];\mathcal D(A^\eta))\cap \mathcal C([0,T_{F_1,F_2,G,\xi}];\mathcal D(A^\beta)).
$$
{\bf Step 4}. Let us verify the estimate \eqref{semilinear evolution equationExpectationAbetaXSquare}.
We have
\begin{align}
X(t)=&\Big[S(t) \xi+\int_0^t S(t-s)F_2(s)ds + \int_0^t S(t-s)G(s) dw_s \Big] \label{Eq8}\\
&+\int_0^t S(t-s)F_1(X(s))ds \notag \\
=&X_1(t)+X_2(t), \hspace{2cm} t\in [0,T_{F_1,F_2,G,\xi}]. \notag
\end{align}
On account of Theorem \ref{regularity theorem autonomous linear evolution equation}, we have an estimate for the first term
\begin{align*}
\mathbb E |X_1(t)|^2 \leq \rho_1[\mathbb E|\xi|^2 + |F_2|_{\mathcal F^{\beta,\sigma}}^2+ |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2],
\end{align*}
where $\rho_1$ is a positive constant depending only on $A, \beta$ and $\sigma$.
For the second term, in view of \eqref{EstimateIofS(t)Maximum} and \eqref{Eq3}, we observe that
\begin{align*}
\mathbb E| X_2(t)|^2
&\leq\mathbb E\Big[\int_0^t | S(t-s)| |F_1(X(s))|ds\Big]^2\notag\\
&\leq t \int_0^t \iota_0^2\mathbb E|F_1(X(s))|^2 ds\notag\\
&\leq 3t \int_0^t \iota_0^2[c_{F_1}^2 \kappa^2 s^{2(\beta-\eta)} + c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2] ds\notag\\
&=\frac{3c_{F_1}^2 \kappa^2\iota_0^2}{1+2(\beta-\eta)} t^{2(1+\beta-\eta)}+3[c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2]\iota_0^2 t^2
\end{align*}
for every $t\in [0,T_{F_1,F_2,G,\xi}]$. Hence,
\begin{align}
\mathbb E| X(t)|^2 \leq &2 \mathbb E| X_1(t)|^2 +2 \mathbb E| X_2(t)|^2 \notag\\
\leq &2\rho_1[\mathbb E|\xi|^2 + |F_2|_{\mathcal F^{\beta,\sigma}}^2+ |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2]\notag\\
&+\frac{6c_{F_1}^2 \kappa^2\iota_0^2}{1+2(\beta-\eta)} t^{2(1+\beta-\eta)}+6[c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2]\iota_0^2 t^2\notag\\
\leq &2\rho_1[\mathbb E|\xi|^2 + |F_2|_{\mathcal F^{\beta,\sigma}}^2+ |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2]\label{Eq6}\\
&+\frac{6c_{F_1}^2 \kappa^2\iota_0^2}{1+2(\beta-\eta)} T^{2(1+\beta-\eta)}+6[c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2]\iota_0^2 T^2 \notag
\end{align}
for every $t\in [0,T_{F_1,F_2,G,\xi}]$.
On the other hand, by virtue of \eqref{EstimateIofS(t)Maximum}, \eqref{Eq4}, \eqref{Eq5}, \eqref{ExpectationOfAbetaSF1YSquare} and \eqref{Eq8}, for every $t\in [0,T_{F_1,F_2,G,\xi}]$ we observe that
\begin{align}
\mathbb E&| A^\beta X(t)|^2\notag\\
\leq & 4\mathbb E |S(t)A^\beta \xi|^2+4\mathbb E\Big|\int_0^t A^\beta S(t-s)F_2(s)ds\Big|^2 \notag\\
&+4\mathbb E\Big|\int_0^t A^\beta S(t-s)F_1(X(s))ds\Big|^2+ 4\mathbb E\Big|\int_0^t A^\beta S(t-s)G(s) dw_s\Big|^2\notag\\
\leq & 4\iota_0^2 \mathbb E |A^\beta \xi|^2+4\iota_\beta^2 |F_2|_{\mathcal F^{\beta,\sigma}}^2 {\bf B}(\beta,1-\beta)^2\notag\\
&+ 12\iota_\beta^2 \Big[c_{F_1}^2 \kappa^2 {\bf B}(1+2\beta-2\eta,1-2\beta)t^{2(1-\eta)} +\frac{c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2 }{1-2\beta} t^{2(1-\beta)}\Big]\notag \\
&+ 4c(E)\int_0^t |A^\beta S(t-s) G(s) |^2 ds\notag\\
\leq & 4\iota_0^2 \mathbb E |A^\beta \xi|^2+4\iota_\beta^2 |F_2|_{\mathcal F^{\beta,\sigma}}^2 {\bf B}(\beta,1-\beta)^2\notag\\
&+ 12\iota_\beta^2 \Big[c_{F_1}^2 \kappa^2 {\bf B}(1+2\beta-2\eta,1-2\beta)t^{2(1-\eta)} +\frac{c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2 }{1-2\beta} t^{2(1-\beta)}\Big]\notag \\
&+ 4c(E)\iota_\beta^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 {\bf B}(2\beta, 1-2\beta)\notag\\
\leq & 4\iota_0^2 \mathbb E |A^\beta \xi|^2+4\iota_\beta^2 |F_2|_{\mathcal F^{\beta,\sigma}}^2 {\bf B}(\beta,1-\beta)^2+ 4c(E)\iota_\beta^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 {\bf B}(2\beta, 1-2\beta)\label{Eq7}\\
&+ 12\iota_\beta^2 \Big[c_{F_1}^2 \kappa^2 {\bf B}(1+2\beta-2\eta,1-2\beta)T^{2(1-\eta)} +\frac{c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2 }{1-2\beta} T^{2(1-\beta)}\Big]. \notag
\end{align}
Combining \eqref{Eq6} and \eqref{Eq7}, we obtain \eqref{semilinear evolution equationExpectationAbetaXSquare}.
{\bf Step 5}. Let us verify the estimate \eqref{semilinear evolution equationExpectationAetaXSquare}. From \eqref{EstimateIofAnu} and \eqref{Eq8},
we observe that
\begin{align*}
\mathbb E&| A^\eta X(t)|^2\notag\\
\leq & 4\mathbb E |A^\eta S(t) \xi|^2+4\Big[\int_0^t |A^\eta S(t-s)| |F_2(s)|ds\Big]^2 \notag\\
&+4\mathbb E\Big[\int_0^t |A^\eta S(t-s)| |F_1(X(s))|ds\Big]^2+ 4c(E) \int_0^t |A^\eta S(t-s)G(s)|^2 ds\notag\\
=& 4\mathbb E |A^{\eta-\beta} S(t) A^\beta \xi|^2+4J_1+4J_2+4J_3\notag\\
\leq & 4\iota_{\eta-\beta}^2 \mathbb E |A^\beta \xi|^2 t^{-2(\eta-\beta)}+4J_1+4J_2+4J_3, \hspace{2cm} t\in (0,T_{F_1,F_2,G,\xi}].\notag
\end{align*}
We shall give estimates for $J_1, J_2$ and $J_3$.
For $J_1$, similarly to \eqref{Eq4}, we have
$$J_1\leq \iota_\eta^2 |F_2|_{\mathcal F^{\beta,\sigma}}^2 {\bf B}(\beta,1-\eta)^2 t^{2(\beta-\eta)}.$$
For $J_2$, similarly to \eqref{ExpectationOfAbetaSF1YSquare}, we conclude that
$$J_2\leq 3\iota_\eta^2 \Big[c_{F_1}^2 \kappa^2 {\bf B}(1+2\beta-2\eta,1-2\eta)t^{2(1+\beta-2\eta)} +\frac{c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2 }{1-2\eta} t^{2(1-\eta)}\Big].$$
For $J_3$, similarly to \eqref{Eq5}, we obtain that
$$J_3\leq c(E) \iota_\eta^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 {\bf B}(2\beta, 1-2\eta) t^{2(\beta-\eta)}.$$
Thus, \eqref{semilinear evolution equationExpectationAetaXSquare} is verified.
{\bf Step 6}. Let us finally show uniqueness of the local mild solution. Let $\bar X$ be any other local mild solution to \eqref{semilinear evolution equation} on the interval $[0,T_{F_1,F_2,G,\xi}]$ which belongs to the space $\mathcal C((0,T_{F_1,F_2,G,\xi}];\mathcal D(A^\eta))\cap \mathcal C([0,T_{F_1,F_2,G,\xi}];\mathcal D(A^\beta))$.
The formula
$$\bar X(t)=S(t) \xi+\int_0^t S(t-s)[F_1(\bar X(s))+F_2(s)]ds + \int_0^t S(t-s)G(s) dw_s $$
combined with \eqref{Eq8} yields that
$$X(t)-\bar X(t)=\int_0^t S(t-s)[F_1(X(s))-F_1(\bar X(s))] ds, \quad\quad t\in [0,T_{F_1,F_2,G,\xi}].$$
We can then repeat the same arguments as in {\bf Step 2} to deduce that
\begin{align}
&|X-\bar X|_{\Xi (\bar T)}^2\label{Eq9}\\
&\leq 2c_{F_1}^2 \Big[\iota_\eta^2{\bf B}(1+2\beta-2\eta,1-2\eta)+\iota_\beta^2{\bf B}(1+2\beta-2\eta,1-2\beta)+\frac{\iota_\eta^2{\bar T}^{2(\eta-\beta)}}{1-2\eta}\notag\\
&\hspace{0.8cm}+\frac{\iota_\beta^2{\bar T}^{2(\eta-\beta)}}{1-2\beta}\Big] {\bar T}^{2(1-\eta)} |X-\bar X|_{\Xi (\bar T)}^2 \hspace{1cm} \text{ for any } \bar T\in(0,T_{F_1,F_2,G,\xi}].\notag
\end{align}
Let $\bar T$ be a positive constant such that
\begin{align*}
&2c_{F_1}^2 \Big[\iota_\eta^2{\bf B}(1+2\beta-2\eta,1-2\eta)+\iota_\beta^2{\bf B}(1+2\beta-2\eta,1-2\beta)\notag\\
&+\frac{\iota_\eta^2\bar T^{2(\eta-\beta)}}{1-2\eta}+\frac{\iota_\beta^2\bar T^{2(\eta-\beta)}}{1-2\beta}\Big] \bar T^{2(1-\eta)}<1.
\end{align*}
It then follows from \eqref{Eq9} that $X(t)=\bar X(t)$ a.s. for every $t\in [0,\bar T].$
We repeat the same procedure with initial time $\bar T$ and initial value $X(\bar T)=\bar X(\bar T)$ to derive that $X(\bar T +t)=\bar X(\bar T +t)$ a.s. for every $t\in [0,\bar T]$; that is, $X(t)=\bar X(t)$ a.s. on the larger interval $[0,2\bar T]$. Repeating this procedure finitely many times, the extended interval covers the given interval $[0,T_{F_1,F_2,G,\xi}]$. Therefore, $X(t)=\bar X(t)$ a.s. for every $t\in [0,T_{F_1,F_2,G,\xi}]$.
\end{proof}
\begin{corollary}[global existence] \label{semilinear evolution equationTheorem1corollary}
Assume that in Theorem \ref{semilinear evolution equationTheorem1} the constant $ C_{F_1,F_2,G,\xi}$ is independent of $T_{F_1,F_2,G,\xi}$ for every $\xi\in \mathcal D(A^\beta)$ such that $\mathbb E|A^\beta \xi|^2<\infty$. Then \eqref{semilinear evolution equation} possesses a unique mild solution on the interval $[0,T].$
\end{corollary}
\begin{proof}
Let us extend the functions $F_2$ and $G$ to functions $\bar F_2$ and $\bar G$ defined on the whole interval $[0,\infty)$ by putting $\bar F_2(t)\equiv F_2(T)$ and $\bar G(t)\equiv G(T)$ for $T<t<\infty$. It is obvious that
$$|\bar F_2|_{\mathcal F^{\beta,\sigma}((a,b];E)}\leq |F_2|_{\mathcal F^{\beta,\sigma}((a,b];E)}\, \text{ and }\, |\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}((a,b];E)}\leq |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}((a,b];E)}$$
for any interval $(a,b]\subset [0,\infty).$
Let $\bar \xi=X(\frac{T_{F_1,F_2,G,\xi}}{2}).$ We consider the problem
\begin{equation} \label{Eq24}
\begin{cases}
dY+AYdt=[F_1(Y)+F_2(t)]dt+G(t)dw_t, \quad\quad \frac{T_{F_1,F_2,G,\xi}}{2}<t<\infty,\\
Y(\frac{T_{F_1,F_2,G,\xi}}{2})=\bar \xi.
\end{cases}
\end{equation}
Thanks to Theorem \ref{semilinear evolution equationTheorem1}, \eqref{Eq24} has a local mild solution $Y(t)$ for every
$\frac{T_{F_1,F_2,G,\xi}}{2}\leq t\leq \frac{3T_{F_1,F_2,G,\xi}}{2}. $
By the uniqueness of the solution, $X(t)=Y(t)$ a.s. for $t\in [\frac{T_{F_1,F_2,G,\xi}}{2},T_{F_1,F_2,G,\xi}]$. This means that we have constructed a local mild solution to \eqref{semilinear evolution equation} on the interval $[0,\frac{3T_{F_1,F_2,G,\xi}}{2}]$. The fact that $C_{F_1,F_2,G,\xi}$ is independent of $T_{F_1,F_2,G,\xi}$ allows us to repeat this procedure indefinitely. In each step the local solution is extended by an interval of fixed length $\frac{T_{F_1,F_2,G,\xi}}{2}$; hence, after finitely many steps, the extended interval covers $[0,T]$.
\end{proof}
\subsection{Regularity for more regular initial data}
This subsection shows regularity of local mild solutions for more regular initial values. For any $F_2\in \mathcal F^{\gamma, \sigma}((0,T];E),$ $ \max\{\beta,\frac{1}{2}-\eta\}<\gamma< \frac{1}{2},$ and any initial value $\xi\in \mathcal D(A^\gamma)$ satisfying $\mathbb E|A^\gamma\xi|^2<\infty$, we can verify a stronger regularity than \eqref{semilinear evolution equationTheorem1Regularity} for the local mild solution of \eqref{semilinear evolution equation}.
\begin{theorem}\label{semilinear evolution equationMoreRegular}
Let \eqref{spectrumsectorialdomain}, \eqref{resolventnorm}, {\rm (H1)} and {\rm (H3)} be satisfied.
Let $F_2\in \mathcal F^{\gamma, \sigma}((0,T];E)$, $ \max\{\beta,\frac{1}{2}-\eta\}<\gamma< \frac{1}{2}$, and $\xi\in \mathcal D(A^\gamma)$ such that $\mathbb E|A^\gamma\xi|^2<\infty$. Then \eqref{semilinear evolution equation} possesses a unique local mild solution $X$ in the function space:
\begin{equation*}
X\in \mathcal C((0,T_{F_1,F_2,G,\xi}];\mathcal D(A^\eta))\cap \mathcal C([0,T_{F_1,F_2,G,\xi}];\mathcal D(A^\gamma)),
\end{equation*}
where $T_{F_1,F_2,G,\xi}$ depends only on $|F_2|_{\mathcal F^{\beta,\sigma}}^2$, $|G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2$, $\mathbb E |F_1(0)|^2$ and $\mathbb E |A^\gamma\xi|^2$. In addition, $X$ satisfies the estimate
\begin{align}
\mathbb E|X(t)|^2+t^{2(\gamma-\beta)}\mathbb E |A^\gamma X(t)|^2 \leq C_{F_1,F_2,G,\xi}, \quad\quad t\in [0,T_{F_1,F_2,G,\xi}]\label{semilinear evolution equationExpectationAbetaXSquareMoreRegular}
\end{align}
with some constant $C_{F_1,F_2,G,\xi}$ depending only on $|F_2|_{\mathcal F^{\beta,\sigma}}^2, |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2, \mathbb E |F_1(0)|^2$ and $ \mathbb E |A^\gamma\xi|^2.$
\end{theorem}
\begin{proof}
Since the embedding of $\mathcal D(A^\gamma)$ in $\mathcal D(A^\beta)$ is continuous, we have $\xi\in \mathcal D(A^\beta)$ and $\mathbb E|A^\beta\xi|^2<\infty.$ In addition, due to \eqref{FbetaFgammasigmaSpaceProperty},
we also have
$F_2\in \mathcal F^{\beta, \sigma}((0,T];E).$ Therefore, by Theorem \ref{semilinear evolution equationTheorem1}, \eqref{semilinear evolution equation} possesses a unique mild solution $X$ in the function space \eqref{semilinear evolution equationTheorem1Regularity}
which satisfies the estimate
\eqref{semilinear evolution equationExpectationAbetaXSquare}.
It now remains only to show that $X\in \mathcal C([0,T_{F_1,F_2,G,\xi}];\mathcal D(A^\gamma))$ and that $X$ satisfies \eqref{semilinear evolution equationExpectationAbetaXSquareMoreRegular}. For this purpose, we shall divide the proof into four steps. Throughout the proof, $C_{F_1,F_2,G,\xi}$ denotes a universal constant which is determined in each occurrence by $F_1,F_2,G$ and $\xi$.
{\bf Step 1}. Let us verify that
\begin{equation} \label{Eq11}
\mathbb E|A^\eta X(t)|^2\leq C_{F_1,F_2,G,\xi} t^{-2\varrho}, \hspace{1cm} t\in (0,T_{F_1,F_2,G,\xi}],
\end{equation}
where $\varrho=\max\{1-\eta-\gamma,\eta-\beta\}\in (0,\frac{1}{2})$.
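Note that $\varrho$ indeed lies in $(0,\frac{1}{2})$:
\begin{equation*}
0<\eta-\beta<\eta<\tfrac{1}{2},\qquad 0<1-\eta-\gamma<\tfrac{1}{2},
\end{equation*}
the second chain following from $\eta,\gamma<\frac{1}{2}$ and $\gamma>\frac{1}{2}-\eta$.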
From \eqref{Eq8}, we have
\begin{align*}
A^\eta X(t)=&A^{\eta-\gamma} S(t) A^\gamma \xi+\int_0^t A^\eta S(t-s)[F_1(X(s))+F_2(s)]ds \notag\\
& + \int_0^t A^\eta S(t-s)G(s) dw_s. \notag
\end{align*}
Using \eqref{FbetasigmaSpaceProperty} and \eqref{EstimateIofAnu}, we then obtain that
\begin{align*}
\mathbb E &|A^\eta X(t)|^2 \notag\\
\leq & 4 |A^{\eta-\gamma} S(t)|^2\mathbb E |A^\gamma \xi|^2
+4\mathbb E\Big|\int_0^t |A^\eta S(t-s)| |F_1(X(s))|ds\Big|^2\notag\\
&+4\mathbb E\Big[\int_0^t |A^\eta S(t-s)||F_2(s)|ds\Big]^2
+ 4 \mathbb E \Big|\int_0^t A^\eta S(t-s)G(s) dw_s\Big|^2 \notag\\
\leq & C_\xi t^{\min\{-2(\eta-\gamma),0\}}
+4\iota_\eta^2 \mathbb E\Big[\int_0^t (t-s)^{-\eta} |F_1(X(s))|ds\Big]^2\notag\\
& +4\iota_\eta^2 |F_2|_{\mathcal F^{\gamma,\sigma}}^2\Big[\int_0^t (t-s)^{\eta-1} s^{\gamma-1}ds\Big]^2\notag\\
&+ 4c(E) \int_0^t |A^\eta S(t-s)|^2|G(s)|^2 ds \notag\\
\leq & C_\xi t^{\min\{-2(\eta-\gamma),0\}} +4\iota_\eta^2 t \int_0^t (t-s)^{-2\eta} \mathbb E|F_1(X(s))|^2ds\notag\\
& +4\iota_\eta^2 |F_2|_{\mathcal F^{\gamma,\sigma}}^2 {\bf B}(\gamma,\eta)^2 t^{2(\eta+\gamma-1)} + 4c(E) \iota_\eta^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 \int_0^t (t-s)^{-2\eta} s^{2\beta-1} ds. \notag
\end{align*}
By \eqref{semilinear evolution equationExpectationAbetaXSquare} and \eqref{Eq10}, we have
\begin{align*}
4\iota_\eta^2 &t \int_0^t (t-s)^{-2\eta} \mathbb E|F_1(X(s))|^2ds\notag\\
\leq &12\iota_\eta^2 t \int_0^t (t-s)^{-2\eta} [c_{F_1}^2 \mathbb E |A^\eta X(s)|^2+c_{F_1}^2 \mathbb E|A^\beta X(s)|^2 +\mathbb E|F_1(0)|^2]ds\notag\\
\leq &12\iota_\eta^2c_{F_1}^2 t \int_0^t (t-s)^{-2\eta} \mathbb E |A^\eta X(s)|^2ds\notag\\
&+\frac{12\iota_\eta^2 [c_{F_1}^2 C_{F_1,F_2,G,\xi}+\mathbb E|F_1(0)|^2] t^{2(1-\eta)}}{1-2\eta}.\notag
\end{align*}
Therefore,
\begin{align}
\mathbb E &|A^\eta X(t)|^2 \notag\\
\leq & C_\xi t^{\min\{-2(\eta-\gamma),0\}} +12\iota_\eta^2c_{F_1}^2 t \int_0^t (t-s)^{-2\eta} \mathbb E |A^\eta X(s)|^2ds\notag\\
&+\frac{12\iota_\eta^2 [c_{F_1}^2 C_{F_1,F_2,G,\xi}+\mathbb E|F_1(0)|^2] t^{2(1-\eta)}}{1-2\eta}\notag\\
& +4\iota_\eta^2 |F_2|_{\mathcal F^{\gamma,\sigma}}^2 {\bf B}(\gamma,\eta)^2 t^{2(\eta+\gamma-1)} + 4c(E) \iota_\eta^2 |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 \int_0^t (t-s)^{-2\eta} s^{2\beta-1} ds\notag\\
\leq & C_\xi t^{\min\{-2(\eta-\gamma),0\}} +C_{F_1,F_2,G,\xi} t^{-2\varrho}\notag\\
&+12\iota_\eta^2c_{F_1}^2 t\int_0^t (t-s)^{-2\eta} \mathbb E |A^\eta X(s)|^2ds \notag\\
\leq & C_{F_1,F_2,G,\xi} t^{-2\varrho}
+12\iota_\eta^2c_{F_1}^2 t\int_0^t (t-s)^{-2\eta} \mathbb E |A^\eta X(s)|^2ds, \hspace{0.5cm} t\in (0,T_{F_1,F_2,G,\xi}],\label{Eq12}
\end{align}
where we used the estimates
\begin{equation*}
\begin{cases}
\begin{aligned}
&t^{2(\eta+\gamma-1)}\leq Ct^{-2\varrho}, \hspace{2cm} &t\in (0,T_{F_1,F_2,G,\xi}],\\
&t^{2(\beta-\eta)} \leq Ct^{-2\varrho}, \hspace{2cm} &t\in (0,T_{F_1,F_2,G,\xi}],\\
&t^{\min\{-2(\eta-\gamma),0\}} \leq Ct^{-2\varrho}, \hspace{2cm} &t\in (0,T_{F_1,F_2,G,\xi}]\,\,
\end{aligned}
\end{cases}
\end{equation*}
with some constant $C$ depending only on $T_{F_1,F_2,G,\xi}.$
Then the function $q(t)= t^{2\varrho} \mathbb E |A^\eta X(t)|^2$ satisfies
\begin{equation} \label{Eq48}
q(t)\leq C_{F_1,F_2,G,\xi}+12\iota_\eta^2c_{F_1}^2 t^{1+2\varrho}\int_0^t (t-s)^{-2\eta} s^{-2\varrho}q(s)ds.
\end{equation}
Let us solve the integral inequality as follows. Let $\epsilon>0$ denote a small parameter. For $0\leq t\leq \epsilon$ we have
\begin{align*}
q(t)\leq& C_{F_1,F_2,G,\xi}+12\iota_\eta^2c_{F_1}^2 t^{1+2\varrho}\int_0^t (t-s)^{-2\eta} s^{-2\varrho}ds \sup_{s\in[0,\epsilon]} q(s)\\
=&C_{F_1,F_2,G,\xi}+12\iota_\eta^2c_{F_1}^2 t^{2(1-\eta)}{\bf B}(1-2\varrho,1-2\eta) \sup_{s\in[0,\epsilon]} q(s)\\
\leq&C_{F_1,F_2,G,\xi}+12\iota_\eta^2c_{F_1}^2 \epsilon^{2(1-\eta)}{\bf B}(1-2\varrho,1-2\eta) \sup_{s\in[0,\epsilon]} q(s).
\end{align*}
Hence,
$$[1-12\iota_\eta^2c_{F_1}^2 \epsilon^{2(1-\eta)}{\bf B}(1-2\varrho,1-2\eta)]\sup_{s\in[0,\epsilon]} q(s) \leq C_{F_1,F_2,G,\xi}.$$
If $\epsilon$ is taken sufficiently small so that $12\iota_\eta^2c_{F_1}^2 \epsilon^{2(1-\eta)}{\bf B}(1-2\varrho,1-2\eta)\leq \frac{1}{2}$, then we obtain that
\begin{equation} \label{Eq13}
\sup_{s\in[0,\epsilon]} q(s) \leq C_{F_1,F_2,G,\xi}.
\end{equation}
Meanwhile, for $\epsilon<t\leq T_{F_1,F_2,G,\xi}$ we have
\begin{align*}
q(t)\leq& C_{F_1,F_2,G,\xi}+12\iota_\eta^2c_{F_1}^2 t^{1+2\varrho}\int_0^\epsilon (t-s)^{-2\eta} s^{-2\varrho}ds \sup_{s\in[0,\epsilon]} q(s)\\
&+12\iota_\eta^2c_{F_1}^2 t^{1+2\varrho}\int_\epsilon^t (t-s)^{-2\eta} \max\{ \epsilon^{-2\varrho}, T^{-2\varrho}\} q(s)ds \\
\leq& C_{F_1,F_2,G,\xi}+12\iota_\eta^2c_{F_1}^2\max\{ \epsilon^{-2\varrho}, T^{-2\varrho}\}T^{1+2\varrho}\int_\epsilon^t (t-s)^{-2\eta} q(s)ds.
\end{align*}
Lemma \ref{Integral inequality of Volterra type} then provides that
\begin{equation} \label{Eq14}
\sup_{t\in [\epsilon, T_{F_1,F_2,G,\xi}]} q(t)\leq C_{F_1,F_2,G,\xi}.
\end{equation}
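For the reader's convenience, we note the type of result being invoked here (the precise formulation of Lemma \ref{Integral inequality of Volterra type} is assumed to be of this kind): if a bounded nonnegative function $q$ satisfies
\begin{equation*}
q(t)\leq a+b\int_\epsilon^t (t-s)^{-2\eta}q(s)\,ds,\qquad \epsilon\leq t\leq T_{F_1,F_2,G,\xi},
\end{equation*}
with constants $a,b\geq 0$ and $2\eta<1$, then $q(t)\leq C(a,b,\eta,T_{F_1,F_2,G,\xi})$ on $[\epsilon,T_{F_1,F_2,G,\xi}]$; this is the classical Gronwall-type inequality with a weakly singular kernel.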
Thus, \eqref{Eq11} follows from \eqref{Eq13} and \eqref{Eq14}.
{\bf Step 2}. Let us verify that $X(t)\in \mathcal D(A^\gamma)$ for every $t\in [0,T_{F_1,F_2,G,\xi}]$.
By virtue of \eqref{Eq8}, it suffices to show that the integrals
$\int_0^t A^\gamma S(t-s)F_2(s)ds,$ $ \int_0^t A^\gamma S(t-s)G(s) dw_s$ and $ \int_0^t A^\gamma S(t-s)F_1(X(s))ds $ are well-defined.
The first integral exists. Indeed, using \eqref{FbetasigmaSpaceProperty} and \eqref{EstimateIofAnu}, we have
\begin{align}
\int_0^t |A^\gamma S(t-s)F_2(s)|ds\leq & \iota_\gamma |F_2|_{\mathcal F^{\gamma, \sigma}}\int_0^t (t-s)^{-\gamma} s^{\gamma-1} ds \notag\\
=&\iota_\gamma |F_2|_{\mathcal F^{\gamma, \sigma}} {\bf B}(\gamma, 1-\gamma)
<\infty, \hspace{0.5cm} t\in [0,T_{F_1,F_2,G,\xi}]. \label{Eq15}
\end{align}
To show existence of the second integral, we have to verify that $\int_0^t |A^\gamma S(t-s)G(s)|^2 ds<\infty.$ Indeed, by \eqref{FbetasigmaSpaceProperty} and \eqref{EstimateIofAnu}, we see that
\begin{align}
\int_0^t |A^\gamma &S(t-s)G(s)|^2 ds \leq \int_0^t |A^\gamma S(t-s)|^2|G(s)|^2 ds \notag \\
\leq &\iota_\gamma^2 |G|_{\mathcal F^{\beta+\frac{1}{2}, \sigma}}^2\int_0^t (t-s)^{-2\gamma} s^{2\beta-1} ds \notag\\
=&\iota_\gamma^2 |G|_{\mathcal F^{\beta+\frac{1}{2}, \sigma}}^2 {\bf B}(2\beta,1-2\gamma) t^{2(\beta-\gamma)}
<\infty, \quad \quad t\in (0,T_{F_1,F_2,G,\xi}]. \label{Eq16}
\end{align}
We shall finish this step by showing that the last integral is well-defined.
From \eqref{EstimateIofAnu}, \eqref{semilinear evolution equationExpectationAbetaXSquare}, \eqref{Eq10} and \eqref{Eq11}, we observe that
\begin{align}
\mathbb E\int_0^t &|A^\gamma S(t-s)F_1(X(s))|^2ds \notag\\
\leq & \int_0^t |A^\gamma S(t-s)|^2 \mathbb E|F_1(X(s))|^2ds \notag \\
\leq & 3\iota_\gamma^2 \int_0^t (t-s)^{-2\gamma} [c_{F_1}^2 \mathbb E |A^\eta X(s)|^2+c_{F_1}^2 \mathbb E|A^\beta X(s)|^2 +\mathbb E|F_1(0)|^2]ds\notag\\
\leq & 3\iota_\gamma^2C_{F_1,F_2,G,\xi} \int_0^t (t-s)^{-2\gamma} [s^{-2\varrho}+1]ds\notag\\
=&3\iota_\gamma^2C_{F_1,F_2,G,\xi} \Big[{\bf B}(1-2\varrho, 1-2\gamma) t^{1-2\varrho-2\gamma}+\frac{t^{1-2\gamma}}{1-2\gamma}\Big]\label{Eq17}\\
<&\infty, \hspace{3cm} t\in (0,T_{F_1,F_2,G,\xi}].\notag
\end{align}
Consequently, $\int_0^t |A^\gamma S(t-s)F_1(X(s))|^2ds<\infty$ a.s. Therefore, $$\int_0^t |A^\gamma S(t-s)F_1(X(s))|ds<\infty \quad\text{ a.s.} $$
{\bf Step 3}. Let us show the estimate \eqref{semilinear evolution equationExpectationAbetaXSquareMoreRegular}. From \eqref{Eq8}, we observe that
\begin{align*}
\mathbb E &|A^\gamma X(t)|^2 \\
\leq& 4 \mathbb E|A^\gamma S(t)\xi|^2 + 4\mathbb E\Big[\int_0^t |A^\gamma S(t-s)F_1(X(s))|ds\Big]^2\notag\\
&+ 4\Big[\int_0^t |A^\gamma S(t-s)F_2(s)|ds\Big]^2+4\mathbb E \Big|\int_0^t A^\gamma S(t-s)G(s) dw_s\Big|^2\notag\\
\leq& 4 \mathbb E|A^\gamma S(t)\xi|^2 + 4t\mathbb E\int_0^t |A^\gamma S(t-s)F_1(X(s))|^2ds\notag\\
&+ 4\Big[\int_0^t |A^\gamma S(t-s)F_2(s)|ds\Big]^2+4c(E) \int_0^t |A^\gamma S(t-s)G(s) |^2ds.\notag
\end{align*}
Using \eqref{EstimateIofS(t)}, \eqref{Eq15}, \eqref{Eq16} and \eqref{Eq17}, we have
\begin{align*}
\mathbb E &|A^\gamma X(t)|^2 \\
\leq& 4\iota_0^2 e^{-2\nu t} \mathbb E|A^\gamma\xi|^2
+ C_{F_1,F_2,G,\xi} \Big[{\bf B}(1-2\varrho, 1-2\gamma) t^{2(1-\varrho-\gamma)}+\frac{t^{2(1-\gamma)}}{1-2\gamma}\Big]\notag\\
&+ 4\iota_\gamma^2 |F_2|_{\mathcal F^{\gamma, \sigma}}^2 {\bf B}(\gamma, 1-\gamma)^2+4c(E) \iota_\gamma^2 |G|_{\mathcal F^{\beta+\frac{1}{2}, \sigma}}^2 {\bf B}(2\beta,1-2\gamma) t^{2(\beta-\gamma)}\notag\\
\leq &C_{F_1,F_2,G,\xi}t^{-2(\gamma-\beta)}, \hspace{3cm}t\in (0,T_{F_1,F_2,G,\xi}],
\end{align*}
where we used the estimates
\begin{equation*}
\begin{cases}
\begin{aligned}
&4\iota_0^2 \mathbb E|A^\gamma\xi|^2 e^{-2\nu t}\leq Ct^{-2(\gamma-\beta)}, &\hspace{2cm} t\in (0,T_{F_1,F_2,G,\xi}],\\
&t^{2(1-\varrho-\gamma)}\leq Ct^{-2(\gamma-\beta)}, &\hspace{2cm} t\in (0,T_{F_1,F_2,G,\xi}],\\
&t^{2(1-\gamma)} <Ct^{-2(\gamma-\beta)}, & \hspace{2cm} t\in (0,T_{F_1,F_2,G,\xi}],\\
&4\iota_\gamma^2 |F_2|_{\mathcal F^{\gamma, \sigma}}^2 {\bf B}(\gamma, 1-\gamma)^2<Ct^{-2(\gamma-\beta)}, & \hspace{2cm} t\in (0,T_{F_1,F_2,G,\xi}]\,\,
\end{aligned}
\end{cases}
\end{equation*}
with some constant $C$ depending only on $T_{F_1,F_2,G,\xi}.$
Thus,
$$t^{2(\gamma-\beta)}\mathbb E |A^\gamma X(t)|^2 \leq C_{F_1,F_2,G,\xi},\hspace{2cm}t\in [0,T_{F_1,F_2,G,\xi}].$$
This, together with \eqref{semilinear evolution equationExpectationAbetaXSquare}, then derives \eqref{semilinear evolution equationExpectationAbetaXSquareMoreRegular}.
{\bf Step 4}. Let us verify that $A^\gamma X\in \mathcal C([0,T_{F_1,F_2,G,\xi}];E)$.
Similarly to \eqref{Eq10}, for $t\in (0,T_{F_1,F_2,G,\xi}]$ we have
$$\mathbb E|F_1(X(t))|^2 \leq 3[c_{F_1}^2 \mathbb E |A^\eta X(t)|^2+c_{F_1}^2 \mathbb E|A^\beta X(t)|^2 +\mathbb E|F_1(0)|^2].$$
Due to \eqref{semilinear evolution equationExpectationAbetaXSquare} and \eqref{Eq11}, we observe that
\begin{align}
\mathbb E|F_1(X(t))|^2& \leq C_{F_1,F_2,G,\xi} ( t^{-2\varrho}+1)\notag\\
&\leq C_{F_1,F_2,G,\xi} t^{-2\varrho}, \hspace{2cm} t\in (0,T_{F_1,F_2,G,\xi}]. \label{Eq18}
\end{align}
First, we shall show that $A^\gamma X\in \mathcal C((0,T_{F_1,F_2,G,\xi}];E)$.
By repeating the same argument as in verifying \eqref{Eq25} in {\bf Step 1} of the proof of Theorem \ref{semilinear evolution equationTheorem1}, for every $\rho\in (\frac{1}{2},1-\gamma)$ we see that
\begin{align*}
\mathbb E&|A^\gamma[X(t)-X(s)]|^2 \notag\\
\leq &\frac{4\iota_{1-\rho}^2\iota_{\gamma+\rho-\beta}^2}{\rho^2} \mathbb E|A^\beta \xi|^2 s^{2(\beta-\gamma-\rho)}(t-s)^{2\rho} \notag\\
&+4\Big[\frac{\iota_{1-\rho} \iota_{\gamma+\rho} {\bf B}(\beta,1-\gamma-\rho) }{\rho} +\iota_\gamma{\bf B}(\gamma+\rho,1-\gamma)\Big]^2\notag\\
&\times |F_2|_{\mathcal F^{\beta,\sigma}}^2s^{2(\beta-\gamma-\rho)}(t-s)^{2\rho} \notag\\
&+\frac{4\iota_{1-\rho}^2\iota_{\gamma+\rho}^2(1-\gamma-\rho)}{\rho^2} (t-s)^{2\rho} s^{1-\gamma-\rho} \int_0^s (s-r)^{-\gamma-\rho} \mathbb E|F_1(X(r))|^2dr \notag\\
&+4\iota_\gamma^2(t-s) \int_s^t (t-r)^{-2\gamma}\mathbb E |F_1(X(r))|^2dr.\notag
\end{align*}
Using \eqref{Eq18}, we obtain that
\begin{align}
\mathbb E&|A^\gamma[X(t)-X(s)]|^2 \notag\\
\leq &\frac{4\iota_{1-\rho}^2\iota_{\gamma+\rho-\beta}^2}{\rho^2} \mathbb E|A^\beta \xi|^2 s^{2(\beta-\gamma-\rho)}(t-s)^{2\rho} \notag\\
&+4\Big[\frac{\iota_{1-\rho} \iota_{\gamma+\rho} {\bf B}(\beta,1-\gamma-\rho) }{\rho} +\iota_\gamma{\bf B}(\gamma+\rho,1-\gamma)\Big]^2\notag\\
&\times |F_2|_{\mathcal F^{\beta,\sigma}}^2s^{2(\beta-\gamma-\rho)}(t-s)^{2\rho} \notag\\
&+C_{F_1,F_2,G,\xi} (t-s)^{2\rho} s^{1-\gamma-\rho} \int_0^s (s-r)^{-\gamma-\rho} r^{-2\varrho} dr \notag\\
&+C_{F_1,F_2,G,\xi}(t-s) \int_s^t (t-r)^{-2\gamma} r^{-2\varrho} dr\notag \\
\leq &C_{F_1,F_2,G,\xi}s^{2(\beta-\gamma-\rho)}(t-s)^{2\rho} \notag\\
&+C_{F_1,F_2,G,\xi} s^{2(1-\gamma-\rho-\varrho)} (t-s)^{2\rho} \notag\\
&+C_{F_1,F_2,G,\xi} (t-s) \int_s^t (t-r)^{-2\gamma} r^{-2\varrho} dr.\notag
\end{align}
Let us estimate the latter integral. Fix $\epsilon \in (0, \min\{1-2\gamma, 2\varrho\})$. Since
$$r^{-2\varrho} =r^{-\epsilon} r^{\epsilon-2\varrho}<(r-s)^{-\epsilon} s^{\epsilon-2\varrho}, \hspace{2cm} r\in (s,t),$$
we have
\begin{align*}
\int_s^t (t-r)^{-2\gamma} r^{-2\varrho} dr\leq &s^{\epsilon-2\varrho}\int_s^t (t-r)^{-2\gamma}(r-s)^{-\epsilon} dr\\
=&{\bf B}(1-\epsilon, 1-2\gamma) s^{\epsilon-2\varrho}(t-s)^{1-2\gamma-\epsilon}.
\end{align*}
Hence,
\begin{align*}
\mathbb E&|A^\gamma[X(t)-X(s)]|^2 \notag\\
\leq &C_{F_1,F_2,G,\xi}[s^{2(\beta-\gamma-\rho)}(t-s)^{2\rho}+ s^{2(1-\gamma-\rho-\varrho)} (t-s)^{2\rho} + s^{\epsilon-2\varrho}(t-s)^{2-2\gamma-\epsilon}].\notag
\end{align*}
Since $2\rho> 1$ and $2-2\gamma-\epsilon>1$, the Kolmogorov test then provides that $A^\gamma X\in \mathcal C((0,T_{F_1,F_2,G,\xi}];E)$.
Now, we shall verify that $A^\gamma X$ is continuous at $t=0$. By using \eqref{Eq18} (instead of \eqref{Eq3}), we repeat the same argument as in showing the continuity of $A^\beta \Psi Y$ at $t=0$ in {\bf Step 1} of the proof of Theorem \ref{semilinear evolution equationTheorem1}. We then obtain the continuity of $A^\gamma X$ at $t=0$. Thus, it is concluded that $A^\gamma X\in \mathcal C([0,T_{F_1,F_2,G,\xi}];E)$.
\end{proof}
\subsection{Regular dependence of solutions on initial data}
Let $\mathcal B_1$ and $\mathcal B_2$ be bounded balls
\begin{align}
\mathcal B_1=\{f\in \mathcal F^{\beta,\sigma}((0,T];E): |f|_{\mathcal F^{\beta,\sigma}}\leq R_1\}, \quad 0<R_1<\infty, \label{mathcal B1}\\
\mathcal B_2=\{f\in \mathcal F^{\beta+\frac{1}{2},\sigma}((0,T];E): |f|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}\leq R_2\}, \quad 0<R_2<\infty \label{mathcal B2}
\end{align}
of the spaces $\mathcal F^{\beta,\sigma}((0,T];E)$ and $\mathcal F^{\beta+\frac{1}{2},\sigma}((0,T];E)$, respectively.
And let $B_A$ be a set of random variables
\begin{equation} \label{BABall}
B_A=\{\xi: \xi\in \mathcal D(A^\beta) \text{ a.s. and } \mathbb E |A^\beta \xi|^2\leq R_3^2\}, \quad 0<R_3<\infty.
\end{equation}
According to Theorem \ref{semilinear evolution equationTheorem1}, for every $F_2\in \mathcal B_1, G\in \mathcal B_2$ and $\xi\in B_A$, there exists a local solution of \eqref{semilinear evolution equation} on some interval $[0,T_{F_1,F_2,G,\xi}]$. Furthermore, by virtue of {\bf Step 1} and {\bf Step 2} of the proof of Theorem
\ref{semilinear evolution equationTheorem1},
\begin{equation} \label{Eq41}
\begin{aligned}
&\text{ there is a time } T_{\mathcal B_1, \mathcal B_2, B_A}>0 \text{ such that } \\
&[0,T_{\mathcal B_1, \mathcal B_2, B_A}]\subset [0,T_{F_1,F_2,G,\xi}] \text{ for all } (F_2,G,\xi)\in \mathcal B_1\times \mathcal B_2\times B_A.
\end{aligned}
\end{equation}
Indeed, in view of \eqref{Eq43}, \eqref{Eq44} and \eqref{Eq46}, $T_{F_1,F_2,G,\xi}$ can be chosen to be any time $S$ satisfying the conditions
\begin{align*}
18\iota_\beta^2 &c_{F_1}^2 \kappa^2 {\bf B}( 1+2\beta-2\eta, 1-2\beta) S^{2(1+\beta-2\eta)}\notag\\
&+\frac{18 \iota_\beta^2 [ c_{F_1}^2 \kappa^2 +\mathbb E|F_1(0)|^2] }{1-2\beta} S^{2(1-\beta)} \leq \frac{\kappa^2}{2},
\end{align*}
and
\begin{align*}
&2c_{F_1}^2 \Big[\iota_\eta^2{\bf B}(1+2\beta-2\eta,1-2\eta)+\iota_\beta^2{\bf B}(1+2\beta-2\eta,1-2\beta)\notag\\
&\hspace*{0.7cm}+\frac{\iota_\eta^2S^{2(\eta-\beta)}}{1-2\eta}+\frac{\iota_\beta^2S^{2(\eta-\beta)}}{1-2\beta}\Big] S^{2(1-\eta)} <1,
\end{align*}
where $\kappa$ is defined by \eqref{Eq45} and \eqref{Eq42}. Consequently, we can choose $T_{F_1,F_2,G,\xi}$ such that it depends continuously on the norms $|F_2|_{\mathcal F^{\beta,\sigma}}, |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}$ and $\mathbb E |A^\beta \xi|^2$. This implies \eqref{Eq41}.
We shall show continuous dependence of solutions on $(F_2,G,\xi)$ in the sense specified in the following theorem.
\begin{theorem}\label{continuityofsolutionsininitialdata}
Let \eqref{spectrumsectorialdomain}, \eqref{resolventnorm}, {\rm (H1)}, {\rm (H2)} and {\rm (H3)} be satisfied.
Let $X$ and $\bar X$ be the solutions of \eqref{semilinear evolution equation} for the data $(F_2,G,\xi)$ and $(\bar F_2,\bar G,\bar \xi)$ in $\mathcal B_1\times \mathcal B_2\times B_A$, respectively. Then there exists a constant $C_{\mathcal B_1, \mathcal B_2, B_A}$ depending only on $\mathcal B_1, \mathcal B_2$ and $ B_A$ such that
\begin{align}
&t^{2\eta}\mathbb E |A^\eta[X(t)-\bar X(t)]|^2+t^{2\eta}\mathbb E|A^{\beta} [X(t)-\bar X(t)]|^2+ \mathbb E|X(t)-\bar X(t)|^2 \label{Eq22}\\
\leq &C_{\mathcal B_1, \mathcal B_2, B_A}[\mathbb E |\xi-\bar \xi|^2+ t^{2\beta} |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2 + t^{2\beta} |G-\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2],\notag
\end{align}
and
\begin{align}
&t^{2(\eta-\beta)}[\mathbb E |A^\eta[X(t)-\bar X(t)]|^2+\mathbb E|A^{\beta} [X(t)-\bar X(t)]|^2] \label{Eq23}\\
& \leq C_{\mathcal B_1, \mathcal B_2, B_A}[\mathbb E |A^\beta(\xi-\bar \xi)|^2+ |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2+ |G-\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2] \notag
\end{align}
for every $ t\in (0, T_{\mathcal B_1, \mathcal B_2, B_A}].$
\end{theorem}
\begin{proof}
This theorem is proved by arguments analogous to those in the proof of Theorem \ref{semilinear evolution equationTheorem1}.
Indeed, let us first verify \eqref{Eq22}. If $\mathbb E |\xi-\bar \xi|^2=\infty$, then the conclusion is obvious. Therefore, we may assume that
$\mathbb E |\xi-\bar \xi|^2<\infty.$
First, we shall give an estimate for
$$t^{2\eta}\mathbb E [ |A^\eta[X(t)-\bar X(t)]|^2+|A^{\beta} [X(t)-\bar X(t)]|^2].$$
For $\theta\in [0,\frac{1}{2})$ and $0<t\leq T_{\mathcal B_1, \mathcal B_2, B_A}$, by using \eqref{FbetasigmaSpaceProperty}, \eqref{EstimateIofAnu} and \eqref{AbetaLipschitzcondition}, we observe that
\begin{align*}
&t^\theta |A^\theta[X(t)-\bar X(t)]|\\
= & \Big|t^\theta A^\theta S(t)(\xi-\bar \xi)+\int_0^t t^\theta A^\theta S(t-s)[F_1(X(s))-F_1(\bar X(s))]ds\notag\\
&+\int_0^t t^\theta A^\theta S(t-s) [F_2(s)-\bar F_2(s)]ds+\int_0^t t^\theta A^\theta S(t-s) [G(s)-\bar G(s)]dw_s\Big| \notag\\
\leq & \iota_\theta |\xi-\bar \xi|\notag\\
&+\iota_\theta c_{F_1}\int_0^t t^\theta (t-s)^{-\theta} [ |A^\eta[X(s)-\bar X(s)]|+|A^{\beta} [X(s)-\bar X(s)]|] ds\notag\\
&+\iota_\theta |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}} \int_0^t t^\theta (t-s)^{-\theta}s^{\beta-1}ds\notag\\
&+\Big|\int_0^t t^\theta A^\theta S(t-s) [G(s)-\bar G(s)]dw_s\Big| \notag\\
= & \iota_\theta |\xi-\bar \xi|+\iota_\theta |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}} {\bf B}(\beta,1-\theta)t^\beta\notag\\
& +\iota_\theta c_{F_1}\int_0^t t^\theta (t-s)^{-\theta} [ |A^\eta[X(s)-\bar X(s)]|+|A^{\beta} [X(s)-\bar X(s)]|] ds\notag\\
&+\Big|\int_0^t t^\theta A^\theta S(t-s) [G(s)-\bar G(s)]dw_s\Big|. \notag
\end{align*}
Thus,
\begin{align}
&\mathbb E|t^\theta A^\theta[X(t)-\bar X(t)]|^2\notag\\
\leq & 4\iota_\theta^2 \mathbb E |\xi-\bar \xi|^2+4\iota_\theta^2 |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2 {\bf B}(\beta,1-\theta)^2 t^{2\beta}\notag\\
& +4\iota_\theta^2 c_{F_1}^2 t^{2\theta} \mathbb E \Big[ \int_0^t (t-s)^{-\theta} [ |A^\eta[X(s)-\bar X(s)]|+|A^{\beta} [X(s)-\bar X(s)]|] ds\Big]^2\notag\\
&+4\mathbb E \Big|\int_0^t t^\theta A^\theta S(t-s) [G(s)-\bar G(s)]dw_s\Big|^2 \notag\\
\leq & 4\iota_\theta^2 \mathbb E |\xi-\bar \xi|^2+4\iota_\theta^2 |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2 {\bf B}(\beta,1-\theta)^2 t^{2\beta} \notag\\
&+4\iota_\theta^2 c_{F_1}^2 t^{2\theta+1} \int_0^t (t-s)^{-2\theta} \mathbb E [ |A^\eta[X(s)-\bar X(s)]|+|A^{\beta} [X(s)-\bar X(s)]|]^2 ds\notag\\
&+4c(E)\iota_\theta^2 |G-\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2\int_0^t t^{2\theta} (t-s)^{-2\theta} s^{2\beta -1}ds \notag\\
\leq & 4\iota_\theta^2 \mathbb E |\xi-\bar \xi|^2+4\iota_\theta^2 {\bf B}(\beta,1-\theta)^2 t^{2\beta} |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2\label{Eq19}\\
&+4c(E)\iota_\theta^2 {\bf B}(2\beta, 1-2\theta)t^{2\beta} |G-\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2\notag\\
&+8\iota_\theta^2 c_{F_1}^2 t^{2\theta+1} \int_0^t (t-s)^{-2\theta} \mathbb E [ |A^\eta[X(s)-\bar X(s)]|^2+|A^{\beta} [X(s)-\bar X(s)]|^2] ds.\notag
\end{align}
Applying these estimates with $\theta=\beta$ and $\theta=\eta$, we have
\begin{align*}
&\mathbb E| A^\beta[X(t)-\bar X(t)]|^2\notag\\
\leq & 4\iota_\beta^2 \mathbb E |\xi-\bar \xi|^2t^{-2\beta}+4\iota_\beta^2 {\bf B}(\beta,1-\beta)^2 |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2\notag\\
&+4c(E)\iota_\beta^2 {\bf B}(2\beta, 1-2\beta) |G-\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2\notag\\
&+8\iota_\beta^2 c_{F_1}^2 t \int_0^t (t-s)^{-2\beta} \mathbb E [ |A^\eta[X(s)-\bar X(s)]|^2+|A^{\beta} [X(s)-\bar X(s)]|^2] ds,\notag
\end{align*}
and
\begin{align*}
&t^{2\eta} \mathbb E|A^\eta[X(t)-\bar X(t)]|^2\notag\\
\leq & 4\iota_\eta^2 \mathbb E |\xi-\bar \xi|^2+4\iota_\eta^2 {\bf B}(\beta,1-\eta)^2 t^{2\beta} |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2\notag\\
&+4c(E)\iota_\eta^2 {\bf B}(2\beta, 1-2\eta)t^{2\beta} |G-\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2\notag\\
&+8\iota_\eta^2 c_{F_1}^2 t^{2\eta+1} \int_0^t (t-s)^{-2\eta} \mathbb E [ |A^\eta[X(s)-\bar X(s)]|^2+|A^{\beta} [X(s)-\bar X(s)]|^2] ds.\notag
\end{align*}
By putting
$$q(t)=t^{2\eta}\mathbb E [ |A^\eta[X(t)-\bar X(t)]|^2+|A^{\beta} [X(t)-\bar X(t)]|^2], $$
we then obtain an integral inequality
\begin{align}
q(t)\leq & 4(\iota_\beta^2 t^{2(\eta-\beta)}+\iota_\eta^2 )\mathbb E |\xi-\bar \xi|^2\notag\\
&+4[\iota_\beta^2 {\bf B}(\beta,1-\beta)^2t^{2\eta}+\iota_\eta^2 {\bf B}(\beta,1-\eta)^2 t^{2\beta} ] |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2\notag\\
&+4c(E)[\iota_\beta^2 {\bf B}(2\beta, 1-2\beta)t^{2\eta}+\iota_\eta^2 {\bf B}(2\beta, 1-2\eta)t^{2\beta}] |G-\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 \notag\\
&+8 c_{F_1}^2 t^{2\eta+1}\int_0^t [ \iota_\beta^2 (t-s)^{-2\beta}+\iota_\eta^2 (t-s)^{-2\eta}] s^{-2\eta}q(s) ds\notag\\
\leq & C_{\mathcal B_1, \mathcal B_2, B_A}[\mathbb E |\xi-\bar \xi|^2+ t^{2\beta} |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2+ t^{2\beta} |G-\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2] \label{Eq47}\\
&+8 c_{F_1}^2 t^{2\eta+1}\int_0^t [ \iota_\beta^2 (t-s)^{-2\beta}+\iota_\eta^2 (t-s)^{-2\eta}] s^{-2\eta}q(s) ds\notag
\end{align}
for $0<t\leq T_{\mathcal B_1, \mathcal B_2, B_A}.$
We use the same techniques as in solving the integral inequality \eqref{Eq48} to solve \eqref{Eq47}. Arguing first in a small interval $[0,\epsilon]$ and then in the other interval $[\epsilon, T_{\mathcal B_1, \mathcal B_2, B_A}],$ we obtain that
\begin{align}
t^{2\eta}&\mathbb E [ |A^\eta[X(t)-\bar X(t)]|^2+|A^{\beta} [X(t)-\bar X(t)]|^2]
=q(t)\notag\\
&\leq C_{\mathcal B_1, \mathcal B_2, B_A}[\mathbb E |\xi-\bar \xi|^2+ t^{2\beta} |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2+ t^{2\beta} |G-\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2] \label{Eq20}
\end{align}
for every $t\in (0, T_{\mathcal B_1, \mathcal B_2, B_A}].$
Now, we shall give an estimate for $\mathbb E|X(t)-\bar X(t)|^2$. Taking $\theta=0$ in \eqref{Eq19}, we have
\begin{align*}
\mathbb E&|X(t)-\bar X(t)|^2
\leq 4\iota_0^2 \mathbb E |\xi-\bar \xi|^2+ 4\iota_0^2 {\bf B}(\beta,1)^2 t^{2\beta} |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2\notag\\
&+4 c(E)\iota_0^2 {\bf B}(2\beta, 1)t^{2\beta} |G-\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2 +8\iota_0^2 c_{F_1}^2 t\int_0^t s^{-2\eta}q(s)ds.\notag
\end{align*}
Using \eqref{Eq20}, we observe that
\begin{align*}
t\int_0^t & s^{-2\eta}q(s)ds\\
\leq &C_{\mathcal B_1, \mathcal B_2, B_A} t\int_0^t s^{-2\eta}[\mathbb E |\xi-\bar \xi|^2+ s^{2\beta} |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2+ s^{2\beta} |G-\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2]ds\notag\\
\leq &\frac{C_{\mathcal B_1, \mathcal B_2, B_A} t^{2(1-\eta)}\mathbb E |\xi-\bar \xi|^2 }{1-2\eta}\notag\\
&+\frac{C_{\mathcal B_1, \mathcal B_2, B_A} t^{2(1+\beta-\eta)}
[|F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2+ |G-\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2] }{1+2\beta-2\eta}.
\end{align*}
Therefore,
\begin{align}
&\mathbb E|X(t)-\bar X(t)|^2 \label{Eq21}\\
&\leq C_{\mathcal B_1, \mathcal B_2, B_A}[\mathbb E |\xi-\bar \xi|^2+ t^{2\beta} |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2+ t^{2\beta} |G-\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2]\notag
\end{align}
for $t\in (0, T_{\mathcal B_1, \mathcal B_2, B_A}]. $
By \eqref{Eq20} and \eqref{Eq21}, \eqref{Eq22} has been verified.
Let us now show \eqref{Eq23}. By substituting the estimate
$$|A^\theta S(t) (\xi-\bar \xi)|\leq \iota_{\theta-\beta} t^{\beta-\theta} |A^\beta (\xi-\bar \xi)|$$
with $\theta=\beta$ and $\theta=\eta$ for
$|A^\theta S(t) (\xi-\bar \xi)|\leq \iota_\theta t^{-\theta} |\xi-\bar \xi|,$ we obtain a similar result to \eqref{Eq19}:
\begin{align*}
&\mathbb E|t^\theta A^\theta[X(t)-\bar X(t)]|^2\notag\\
\leq & 4 \iota_{\theta-\beta}^2 t^{2(\beta-\theta)} \mathbb E |A^\beta (\xi-\bar \xi)|^2
+4\iota_\theta^2 {\bf B}(\beta,1-\theta)^2 t^{2\beta} |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2\notag\\
&+4c(E)\iota_\theta^2 {\bf B}(2\beta, 1-2\theta)t^{2\beta} |G-\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2\notag\\
&+8\iota_\theta^2 c_{F_1}^2 t^{2\theta+1} \int_0^t (t-s)^{-2\theta} \mathbb E [ |A^\eta[X(s)-\bar X(s)]|^2+|A^{\beta} [X(s)-\bar X(s)]|^2] ds.\notag
\end{align*}
Using the same arguments as in verifying \eqref{Eq20}, we conclude that \eqref{Eq23} holds true.
This completes the proof.
\end{proof}
\subsection{Case $\tilde\beta=0$}
This subsection investigates the critical case of the Lipschitz condition \eqref{AbetaLipschitzcondition} when $\tilde\beta=0.$ We assume that
\begin{itemize}
\item [(H1')] $F_1$ is defined on the domain $ \mathcal D(A^\eta)$ and \eqref{AbetaLipschitzcondition} holds with $\tilde\beta=0,$ i.e.
\begin{equation} \label{AbetaLipschitzconditionBeta0}
|F_1(x)-F_1(y)|\leq c_{F_1} [ |A^\eta(x-y)|+|x-y|], \quad\quad x,y\in \mathcal D(A^\eta).
\end{equation}
\end{itemize}
We can then generalize some results of Theorem \ref{semilinear evolution equationTheorem1}.
\begin{theorem}\label{semilinear evolution equationTheorem2}
Let \eqref{spectrumsectorialdomain}, \eqref{resolventnorm}, {\rm (H1')}, {\rm (H2)} and {\rm (H3)} be satisfied.
Assume that $\mathbb E|\xi|^2 < \infty$. Then \eqref{semilinear evolution equation} possesses a unique mild solution $X$ in the function space:
$$
X\in \mathcal C((0,T_{F_1,F_2,G,\xi}];\mathcal D(A^\eta)),
$$
where $T_{F_1,F_2,G,\xi}$ depends only on the squared norms $|F_2|_{\mathcal F^{\beta,\sigma}}^2, |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2$ and $\mathbb E |F_1(0)|^2$. In addition, $X$ satisfies the estimate
\begin{align}
\mathbb E|X(t)|^2 \leq C_{F_1,F_2,G,\xi}, \quad\quad t\in [0,T_{F_1,F_2,G,\xi}]\label{semilinear evolution equationExpectationAbetaXSquareBeta0}
\end{align}
with some constant $C_{F_1,F_2,G,\xi}$ depending only on $|F_2|_{\mathcal F^{\beta,\sigma}}^2, |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2, \mathbb E |F_1(0)|^2$ and $\mathbb E|\xi|^2.$
\end{theorem}
\begin{proof}
The proof is analogous to that of Theorem \ref{semilinear evolution equationTheorem1}. For each $S\in (0,T]$, we set the Banach space:
\begin{align*}
\Xi (S)=&\{Y\in \mathcal C((0,S];\mathcal D(A^\eta)) \cap \mathcal C([0,S];E) \text{ such that }\\
& \sup_{0<t\leq S} t^{2\eta} \mathbb E|A^\eta Y(t)|^2+ \sup_{0\leq t\leq S}\mathbb E|Y(t)|^2 <\infty \}
\end{align*}
with norm
$$|Y|_{\Xi (S)}=\Big[\sup_{0<t\leq S} t^{2\eta} \mathbb E|A^\eta Y(t)|^2+ \sup_{0\leq t\leq S}\mathbb E|Y(t)|^2\Big]^{\frac{1}{2}}. $$
Consider a nonempty closed subset $\Upsilon(S) $ of $\Xi (S)$ which consists of all functions $Y\in \Xi (S)$ such that
\begin{equation} \label{Upsilon(S)DefinitionBeta0}
\max\{\sup_{0<t\leq S} t^{2\eta} \mathbb E|A^\eta Y(t)|^2,
\sup_{0\leq t\leq S}\mathbb E|Y(t)|^2\} \leq \kappa^2
\end{equation}
with $\kappa>0 $ which will be fixed appropriately.
Similarly to the proof of Theorem \ref{semilinear evolution equationTheorem1}, if we choose $\kappa>0 $ dependent only on $|F_2|_{\mathcal F^{\beta,\sigma}}^2, |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2, \mathbb E |F_1(0)|^2$ and $ \mathbb E |\xi|^2,$ and if $S$ is sufficiently small, then the mapping $\Phi$ defined by \eqref{DefinitionOfFunctionPhi} maps the set $\Upsilon(S) $ into itself and is a contraction with respect to the norm of $\Xi (S)$.
Consequently, $\Phi$ possesses a unique fixed point $X\in \Upsilon(S)$, i.e. for every $t\in [0,S]$, $X(t)=\Phi X(t)$ a.s. This means that $X$ is a local mild solution of \eqref{semilinear evolution equation}. Following {\bf Step 4} and {\bf Step 6} in the proof of Theorem \ref{semilinear evolution equationTheorem1}, the estimate \eqref{semilinear evolution equationExpectationAbetaXSquareBeta0} and uniqueness of the solution are verified.
\end{proof}
By the same arguments as in Corollary \ref{semilinear evolution equationTheorem1corollary}, global mild solutions of \eqref{semilinear evolution equation} can be constructed.
\begin{corollary}[global existence]
Assume that in Theorem \ref{semilinear evolution equationTheorem2} the constant $ C_{F_1,F_2,G,\xi}$ is independent of $T_{F_1,F_2,G,\xi}$ for every initial value $\xi\in E$. Then \eqref{semilinear evolution equation} possesses a unique mild solution on the interval $[0,T].$
\end{corollary}
The next theorem shows regularity of local mild solutions for more regular initial values. The proof of the theorem is similar to that of Theorem \ref{semilinear evolution equationMoreRegular}, so we omit it.
\begin{theorem}
Let \eqref{spectrumsectorialdomain}, \eqref{resolventnorm}, {\rm (H1')}, {\rm (H2)} and {\rm (H3)} be satisfied.
Let $F_2\in \mathcal F^{\gamma, \sigma}((0,T];E)$, $ \max\{\beta,\frac{1}{2}-\eta\}<\gamma< \frac{1}{2}$, and $\xi\in \mathcal D(A^\gamma)$ such that $\mathbb E|A^\gamma\xi|^2<\infty$. Then \eqref{semilinear evolution equation} possesses a unique mild solution $X$ in the function space:
\begin{equation*}
X\in \mathcal C((0,T_{F_1,F_2,G,\xi}];\mathcal D(A^\eta))\cap \mathcal C([0,T_{F_1,F_2,G,\xi}];\mathcal D(A^\gamma)),
\end{equation*}
where $T_{F_1,F_2,G,\xi}$ depends only on the squared norms $|F_2|_{\mathcal F^{\beta,\sigma}}^2, |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2, \mathbb E |F_1(0)|^2$ and $ \mathbb E |A^\gamma\xi|^2.$ In addition, $X$ satisfies the estimate
\begin{align*}
\mathbb E|X(t)|^2+t^{2\gamma}\mathbb E |A^\gamma X(t)|^2 \leq C_{F_1,F_2,G,\xi}, \quad\quad t\in [0,T_{F_1,F_2,G,\xi}], \label{semilinear evolution equationExpectationAbetaXSquareMoreRegularBeta0}
\end{align*}
with some constant $C_{F_1,F_2,G,\xi}$ depending only on $|F_2|_{\mathcal F^{\beta,\sigma}}^2, |G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2, \mathbb E |F_1(0)|^2$ and $ \mathbb E |A^\gamma\xi|^2.$
\end{theorem}
Let us finally verify continuous dependence of solutions on initial data.
\begin{theorem}\label{continuityofsolutionsininitialdata2}
Let \eqref{spectrumsectorialdomain}, \eqref{resolventnorm}, {\rm (H1')}, {\rm (H2)} and {\rm (H3)} be satisfied. Let $X$ and $\bar X$ be the solutions of \eqref{semilinear evolution equation} for the data $(F_2,G,\xi)$ and $(\bar F_2,\bar G,\bar \xi)$ in $\mathcal B_1\times \mathcal B_2\times B_A$, respectively, where $\mathcal B_1$, $ \mathcal B_2$ and $ B_A$ are defined by \eqref{mathcal B1}, \eqref{mathcal B2} and \eqref{BABall}.
Then
\begin{align}
&t^{2\eta}\mathbb E |A^\eta[X(t)-\bar X(t)]|^2+t^{2\eta}\mathbb E|X(t)-\bar X(t)|^2 \notag\\
& \leq C_{\mathcal B_1, \mathcal B_2, B_A}[\mathbb E |\xi-\bar \xi|^2+ t^{2\beta} |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2+ t^{2\beta} |G-\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2], \notag
\end{align}
and
\begin{align}
&t^{2(\eta-\beta)}[\mathbb E |A^\eta[X(t)-\bar X(t)]|^2+\mathbb E|X(t)-\bar X(t)|^2] \notag\\
& \leq C_{\mathcal B_1, \mathcal B_2, B_A}[\mathbb E |A^\beta(\xi-\bar \xi)|^2+ |F_2-\bar F_2|_{\mathcal F^{\beta,\sigma}}^2+ |G-\bar G|_{\mathcal F^{\beta+\frac{1}{2},\sigma}}^2] \notag
\end{align}
for $t\in (0, T_{\mathcal B_1, \mathcal B_2, B_A}].$
\end{theorem}
As the proof of this theorem is quite analogous to that of Theorem \ref{continuityofsolutionsininitialdata}, we omit it.
\end{document}
\begin{document}
\title{A Parameter Setting Heuristic for the Quantum Alternating Operator Ansatz}
\author{James Sud}
\email{[email protected]}
\thanks{Majority of this work completed at USRA}
\affiliation{Quantum Artificial Intelligence Laboratory (QuAIL), NASA Ames Research Center, Moffett Field, CA, 94035, USA}
\affiliation{USRA Research Institute for Advanced Computer Science (RIACS), Mountain View, CA, 94043, USA}
\affiliation{Department of Computer Science, University of Chicago, 5730 S Ellis Ave, Chicago, IL 60637, USA}
\author{Stuart Hadfield}
\affiliation{Quantum Artificial Intelligence Laboratory (QuAIL), NASA Ames Research Center, Moffett Field, CA, 94035, USA}
\affiliation{USRA Research Institute for Advanced Computer Science (RIACS), Mountain View, CA, 94043, USA}
\author{Eleanor Rieffel}
\affiliation{Quantum Artificial Intelligence Laboratory (QuAIL), NASA Ames Research Center, Moffett Field, CA, 94035, USA}
\author{Norm Tubman}
\affiliation{Quantum Artificial Intelligence Laboratory (QuAIL), NASA Ames Research Center, Moffett Field, CA, 94035, USA}
\author{Tad Hogg}
\affiliation{Quantum Artificial Intelligence Laboratory (QuAIL), NASA Ames Research Center, Moffett Field, CA, 94035, USA}
\date{\today}
\begin{abstract}
Parameterized quantum circuits are widely studied approaches to tackling challenging optimization problems. A prominent example is the Quantum Alternating Operator Ansatz (QAOA), a generalized approach that builds on the alternating structure of the Quantum Approximate Optimization Algorithm. Finding high-quality parameters efficiently for QAOA remains a major challenge in practice. In this work, we introduce a classical strategy for parameter setting, suitable for cases in which the number of distinct cost values grows only polynomially with the problem size, such as is common for constraint satisfaction problems.
The crux of our strategy is that we replace the cost function expectation value step of QAOA with a classical model that also has parameters which play an analogous role to the QAOA parameters, but can be efficiently evaluated classically. This model is based on empirical observations of QAOA, where variable configurations with the same cost have the same amplitudes from step to step, which we define as Perfect Homogeneity. Perfect Homogeneity is known to hold exactly for problems with particular symmetries. More generally, high overlaps between QAOA states and states with Perfect Homogeneity have been empirically observed in a number of settings. Building on this idea, we define a Classical Homogeneous Proxy for QAOA in which this property holds exactly, and which yields information describing both states and expectation values. We classically determine high-quality parameters for this proxy, and then use these parameters in QAOA, an approach we label the Homogeneous Heuristic for Parameter Setting. We numerically examine this heuristic for MaxCut on unweighted Erd\H{o}s-R\'{e}nyi random graphs. For up to $3$ QAOA levels we demonstrate that the heuristic is easily able to find parameters that match approximation ratios corresponding to previously-found globally optimized approaches. For levels up to $20$ we obtain parameters using our heuristic with approximation ratios monotonically increasing with depth, while a strategy that uses parameter transfer instead fails to converge with comparable classical resources. These results suggest that our heuristic may find good parameters in regimes that are intractable with noisy intermediate-scale quantum devices. Finally, we outline how our heuristic may be applied to wider classes of problems.
\end{abstract}
\maketitle
\section{Introduction}\label{introduction}
The Quantum Alternating Operator Ansatz (QAOA)~\cite{hadfield19} is a widely-studied parameterized quantum algorithm for tackling combinatorial optimization problems. It uses the alternating structure of Farhi et al.’s Quantum Approximate Optimization Algorithm~\cite{farhi14}, a structure that was also studied in special cases in much earlier work \cite{hogg00, hogg00-1}. Recent work has explored regimes for which suitable parameters for QAOA are difficult to pre-compute. This includes extensive research viewing QAOA as a variational quantum-classical algorithm (VQA), in which results from runs on quantum hardware are fed into a classical outer loop algorithm for updating the parameters. Optimizing parameters for VQAs can be quite challenging, as one typically needs to search over a large parameter space with a complex cost landscape. While a successful algorithm does not require optimal parameters, only parameters good enough for a given target outcome, finding good parameters can be difficult due to the number of local optima \cite{cerezo21, wierichs20}, and in some cases barren plateaus \cite{mcclean18, larocca22}. Moreover, the high levels of noise present on current quantum devices can exacerbate these challenges~\cite{wang21}.
We propose a novel approach to QAOA parameter optimization that maps the QAOA circuit to a simpler classical model. The crux of our approach is that in the parameter objective function (as introduced below), we replace the cost function expectation value, which is typically evaluated either on quantum hardware or using expensive classical evaluation, with a simpler model that has parameters that play an analogous role to the QAOA parameters, but can be efficiently evaluated classically. This approach is based on the observation, originally due to Hogg \cite{hogg00, hogg00-1}, that one may leverage a classical \lq\lq mean-field\rq\rq\ or \lq\lq homogeneous\rq\rq\ model where variable configurations with the same cost have the same amplitudes from step to step, which we define as Perfect Homogeneity. Extending this idea, we define a Classical Homogeneous Proxy for QAOA (``proxy'' for short) in which this property holds exactly, and which yields information describing both states and expectation values. We classically determine high-quality parameters for this proxy, and then use these parameters in QAOA, an approach we label the Homogeneous Heuristic for Parameter Setting (``heuristic'' for short). This heuristic is visualized in Fig.~\ref{fig:flowchart}.
The heuristic is appropriate for classes of constraint satisfaction problems (CSPs) in which instances consist of a number of clauses polynomial in the number of variables, with each clause acting on a subset of the variables. For such problems, the number of distinct cost function values, and thus the computation time of the proxy, is polynomially bounded in the number of variables, as desired. We describe the proxy for four such classes of CSPs: random kSAT, MaxCut on unweighted Erd\H{o}s-R\'{e}nyi model graphs, random MaxEkLin2, and random Max-k-XOR. For these examples, the proxy leverages information only about the class (as well as the number of variables and clauses), rather than the problem instance, so that the proxy describes states and expectation values for the entire class. We then explore the heuristic for parameter setting on classes of MaxCut problems. This heuristic returns a set of parameters for the entire \textit{class}, which can then be tested on instances from that class. Our results indicate that for MaxCut, the heuristic on $20$-node graphs is able to -- for $p=1$, $2$, and $3$ -- identify parameters yielding approximation ratios comparable to those returned by the commonly-used strategy of transferring globally-optimized parameters~\cite{lotshaw21,shaydulin21_qaoakit}. We then demonstrate out to depth $p=20$ that the heuristic identifies parameters yielding monotonically increasing approximation ratios as the depth of the algorithm increases. A parameter setting strategy that uses an identical outer loop parameter update scheme but does not leverage the proxy fails to converge given identical classical resources.
The practical implication of this work is that for classes of optimization problems such as CSPs with a fixed number of randomly selected clauses on a fixed number of variables, the Classical Homogeneous Proxy for QAOA can be computed with solely classical resources in time scaling polynomially with respect to $p$ as well as $n$. Thus, the proxy avoids the issue of exponentially growing resources (with respect to $p$ or $n$) of using exact classical evaluation of expectation values, and avoids the noise present on NISQ devices. The Homogeneous Heuristic for Parameter Setting leverages this proxy, so the subroutine evaluating cost expectation values may be much quicker. However, the parameter landscape may still be challenging to optimize over, especially in cases in which the number of free parameters grows with the problem size. Nevertheless, we demonstrate for MaxCut that our heuristic is able to outperform previous results of parameter transfer \cite{lotshaw21}, indicating the potential of our heuristic approach (which may also be combined with other advancements in parameter setting, as discussed briefly later in the paper) to achieve further improvements in practice.
\begin{figure}
\caption{Flowchart of parameter setting procedure using the Classical Homogeneous Proxy for QAOA. The boxes to the left of the dashed line denote inputs that can be viewed as classical pre-computations, as described in Sec.~\ref{homogeneous_approximation}.}
\label{fig:flowchart}
\end{figure}
\emph{Quantum Alternating Operator Ansatz}: We here briefly describe the structure of QAOA \cite{farhi14,hogg00, hogg00-1,hadfield19}. Given a cost function $c(x)$ to optimize, a QAOA circuit consists of $p$ repeated applications of mixing and cost Hamiltonians $B$ and $C$ in alternation, parameterized by $2p$ parameters $(\vec{\gamma}, \vec{\beta})$:
\begin{equation}\label{eq:qaoa_ansatz}
\ket{\Psi(\vec{\gamma}, \vec{\beta})} = e^{-i\beta_pB}e^{-i\gamma_pC}\cdots e^{-i\beta_1B}e^{-i\gamma_1C} \ket{\psi_0},
\end{equation}
where $\ket{\psi_0}$ is an easily-prepared initial state. There is freedom in choosing the Hamiltonians $B$ and $C$, although typically (as is followed in this work), $C$ is chosen such that $C\ket{x}=c(x)\ket{x}$ for all $x$, and $B$ is a mixer that is simple to implement on hardware. More general operators and initial states for QAOA are proposed in \cite{hadfield19}. In this work we choose the X mixer $B = \sum_{i=1}^n X_i$, where $n$ represents the total number of qubits in the circuit, and the starting state $\ket{\psi_0}$ is chosen to be the uniform superposition over all $2^n$ bitstrings, as originally considered in \cite{farhi14}. The goal of QAOA is then to produce a state $\ket{\Psi(\vec{\gamma}, \vec{\beta})}$ such that repeated sampling in the computational basis yields either an optimal or high-quality approximate solution to the problem. Finding such good parameters is the parameter setting problem and may be approached in a number of ways with different tradeoffs, ranging from black-box optimization techniques to application specific approaches. We refer to fixed QAOA parameters $p,\vec{\gamma},\vec{\beta}$ as a \textit{parameter schedule}.
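As a concrete illustration of Eq.~\eqref{eq:qaoa_ansatz}, the following minimal Python/NumPy sketch builds the QAOA state for the X mixer and uniform initial state by direct statevector simulation, and evaluates the cost expectation value by brute force. It is practical only for small $n$, and the array \texttt{costs}, holding $c(x)$ for every bitstring $x$ (indexed with qubit $0$ as the most significant bit), is an assumed input rather than part of any particular problem definition.
\begin{verbatim}
import numpy as np

def qaoa_state(costs, gammas, betas):
    # costs: length-2^n array of c(x); gammas, betas: length-p angle arrays.
    costs = np.asarray(costs, dtype=float)
    dim = len(costs)
    n = int(np.log2(dim))
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # uniform superposition
    for gamma, beta in zip(gammas, betas):
        psi = np.exp(-1j * gamma * costs) * psi          # phase layer exp(-i gamma C)
        c, s = np.cos(beta), -1j * np.sin(beta)          # exp(-i beta X) = [[c, s], [s, c]]
        for j in range(n):                               # mixer layer exp(-i beta B)
            psi = psi.reshape(2 ** j, 2, 2 ** (n - j - 1))
            psi = np.stack([c * psi[:, 0, :] + s * psi[:, 1, :],
                            s * psi[:, 0, :] + c * psi[:, 1, :]], axis=1)
        psi = psi.reshape(dim)
    return psi

def brute_force_expected_cost(costs, gammas, betas):
    # <gamma, beta | C | gamma, beta> evaluated from the full statevector.
    costs = np.asarray(costs, dtype=float)
    psi = qaoa_state(costs, gammas, betas)
    return float(np.abs(psi) ** 2 @ costs)
\end{verbatim}
This brute-force evaluation is precisely the step that the Classical Homogeneous Proxy introduced below replaces with an efficient classical computation.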
We now describe QAOA as a VQA: given some classical cost function $c(x)$ on $n$ variable bitstrings $\{0,1\}^n$, and a quantum circuit ansatz with parameters $\vec{\Theta}$, one defines a parameter objective function $f(\vec{\Theta})$ and optimizes over $\vec{\Theta}$ with respect to $f$. In order to optimize over $\vec{\Theta}$, a two-part cycle is typically employed. First, a classical outer loop chooses one or more parameters $\vec{\Theta}$ for which evaluations of $f(\vec{\Theta})$ or its partial derivatives are requested, based on initial or prior cycle information. Then, a subroutine evaluates $f$ and its derivatives for all $\vec{\Theta}$ requested by the outer loop. Given these evaluations, the parameter update scheme can then update the current best $\vec{\Theta}$ and choose a new set of parameters to test. Throughout this work we refer to the outer loop as the parameter update scheme and the inner subroutine as parameter objective function evaluation. Typically in QAOA, $f(\vec{\Theta})= f(\vec{\gamma},\vec{\beta})$ is taken to be $\braket{\vec{\gamma},\vec{\beta}|C|\vec{\gamma},\vec{\beta}}$. This choice of $f$ measures the expected cost function value obtained from sampling the QAOA state, which we refer to as the typical parameter objective function. In this work, we will replace it with what we define as the homogeneous parameter objective function, which utilizes the Classical Homogeneous Proxy for QAOA.
\emph{Related Work}: There have been numerous parameter setting strategies proposed for QAOA. Most of these strategies focus on improving the parameter update scheme, while keeping the typical parameter objective function. These strategies range from the simplest approach of directly applying classical optimizers to approaches incorporating more sophisticated machine learning techniques \cite{khairy20, zhou20, crooks18, shaydulin19, alam2020accelerating, verdon19, wilson21, skolik2021layerwise, rivera2021avoiding}. In practice, however, efficiently finding high-quality parameters remains a challenging task. Global optimization strategies often get stuck in navigating the parameter optimization landscape due to barren plateaus \cite{mcclean18, wang21, larocca22} or multitudes of local optima \cite{cerezo21, wierichs20}, and the problem becomes especially challenging as the number of parameters grows due to the curse of dimensionality. In some circumstances parameter optimization can even become NP-hard~\cite{bittel2021training}. There have been multiple methods proposed that attempt to alleviate these issues. The first of these is to initialize parameters close to values informed by previous information or intuition \cite{streif20, shaydulin21_symmetries, brady21}. One such example is to use parameters that represent a discretized linear quantum annealing schedule \cite{sack21, wurtz21_counteradiabaticity}. Another example is based on the principle that globally optimal parameters for random QAOA instances drawn from the same distribution tend to concentrate \cite{brandao18, farhi14, zhou20, lee21}. Thus, if globally optimal parameters are known for one, or many instances of a specific class of problems, these exact parameters (or median parameters for more than one instance) empirically perform well on a new, unseen instance from the same class \cite{galda21, shaydulin21_qaoakit}. Another approach for improving parameter setting is re-parameterizing the circuit, which places constraints on the allowed parameter schedules, thereby reducing the number of parameters so that they are optimized more easily. In some cases, this re-parameterization can be informed by optimized parameters for typical instances, or by connections to quantum annealing \cite{zhou20, brady2021behavior, brady21, shaydulin21_symmetries, wurtz21_counteradiabaticity}.
An alternative paradigm for parameter setting is to modify the parameter objective function itself. Indeed, one can find closed form expressions for expected cost as a function of parameters in some cases, for example MaxCut at $p=1$ \cite{wang18}, or $p=2$ for high-girth regular graphs \cite{marwaha21}. Moreover, when applicable one can take advantage of problem locality considerations (such as graphs of bounded vertex degree for MaxCut) to compute the necessary expectation values without requiring the full quantum state vector~\cite{farhi14,wang18}. In these cases, then, one could optimize parameters with respect to these expressions rather than by evaluating the entire dynamics. Other examples of parameter objective function modification include using conditional value at risk \cite{barkoutsos20} and Gibbs-like functions \cite{li20}, which are closely related to the usual parameter objective function. Similar in spirit to our approach, recent work~\cite{sung2020using,shaffer22,mueller2022accelerating} has proposed the use of surrogate models, which use quantum circuit measurement outcomes to construct an approximate parameter objective function. In contrast, our approach is entirely classical, and the parameters it outputs may be used directly, or possibly improved further, given access to a quantum device. Additionally, a related perspective was recently proposed in \cite{DiezValle22}, wherein the authors study the connection between single-layer QAOA states and pseudo-Boltzmann states where computational basis amplitudes are also expressed as functions of the corresponding costs, i.e., perfectly homogeneous in our terminology. While~\cite{DiezValle22} provides additional motivation for our approach, the authors there do not consider cases beyond $p=1$ and so their results do not immediately apply to parameter setting in the same way.
\emph{Outline of paper}: In Sec.~\ref{homogeneous_approximation} we define the Classical Homogeneous Proxy for QAOA. In Sec.~\ref{n_dists_random}, we demonstrate how to implement the proxy for classes of CSPs with a fixed, polynomial number of randomly drawn clauses. In Sec.~\ref{homogeneous_parameter_setting} we introduce the Homogeneous Heuristic for Parameter Setting. In Sec.~\ref{validity_of_approximation} we provide numerical evidence for the efficacy of the proxy and the heuristic applied to MaxCut on unweighted Erd\H{o}s-R\'{e}nyi model graphs. Finally in Sec.~\ref{results} we present the results of the heuristic for MaxCut on the same class of graphs. We conclude in Sec.~\ref{discussion} with a discussion of our results and several future research directions.
\section{Classical Homogeneous Proxy for QAOA}\label{homogeneous_approximation}
This section summarizes and generalizes the approach of \cite{hogg00}, updating the notation to match current notation for QAOA and allowing for easier extension to other CSPs. In this section, we describe the Classical Homogeneous Proxy for QAOA from a sum-of-paths perspective~\cite{hogg00,hadfield2021analytical} using a set of classical equations that ensure what we presently define as Perfect Homogeneity.
\begin{definition}\label{def:perfect_homogeneity}
\textbf{Perfect Homogeneity}: For a given state $\ket{\Psi}=\sum_{x\in\{0,1\}^n} q(x) \ket{x}$, where $q(x)$ is the amplitude of bitstring $x$ when written in the computational basis, $\ket{\Psi}$ has Perfect Homogeneity if and only if the amplitude $q(x)$ of all $x$ can be written solely as a function of $c(x)$, i.e. $Q(c(x))$. Then $\ket{\Psi}=\sum_{x\in\{0,1\}^n} Q(c(x)) \ket{x}$.
\end{definition}
Perfect Homogeneity trivially holds for QAOA states with non-degenerate cost functions where each bitstring $x$ has a unique cost $c(x)$, as well as the maximally degenerate case where the cost function is constant. A less trivial example is the class of cost functions that depend only on the Hamming weight of each bitstring, $c(x)=c(|x|)$, such as the Hamming ramp $c(x) \sim |x|$. For this class, symmetry arguments~\cite{shaydulin21_symmetries} imply that the QAOA state retains Perfect Homogeneity for any choice of QAOA circuit depth and parameters. Indeed, useful intuition for homogeneity can be gained from the symmetry perspective; in \cite[Lem. 1]{shaydulin21_symmetries} it is shown that for a classical cost function with symmetries compatible with the QAOA mixer and initial state, bitstrings that are connected by these symmetries will have identical QAOA amplitudes. In this case the corresponding QAOA states belong to a Hilbert space of reduced dimension. Perfect Homogeneity is an even stronger condition: all bitstrings with the \emph{same cost} have identical amplitudes, not just those connected by mixer-compatible symmetries. Hence, QAOA states can be expected to be Perfectly Homogeneous only in limited cases, though these states may be near to Perfectly Homogeneous states in some settings~\cite{hogg00}. We consider QAOA applied to cost functions with polynomially many distinct cost values, such that states may not have Perfect Homogeneity. Classically, however, we may assert Perfect Homogeneity to construct our proxy.
Additional intuition comes from the case of strictly $k$-local problem Hamiltonians (such as, for instance, MaxCut), where the QAOA state is easily shown to be Perfectly Homogeneous to leading-order in $|\gamma|$, independent of $\beta$, with amplitude of bitstring $x$ given by $\frac{1}{\sqrt{2^n}} ( 1-i\gamma c(x) e^{i 2k})$ in the $p=1$ case. Similar analysis for the higher $p$ case also yields Perfect Homogeneity to first order in the $\gamma_j$ parameters. This connection is considered further in Sec.~\ref{numerical_overlaps}.
Given this intuition, we begin from the sum-of-paths perspective~\cite{hadfield2021analytical} for QAOA to provide preliminary analysis for our approach.
\subsection{Classical Homogeneous Proxy for QAOA
from the Sum-of-Paths Perspective for QAOA}\label{sum_of_paths_perspective}
The amplitude $q_{\ell}(x)$ of a bitstring $x$ at step $\ell$ induced by applying a layer of QAOA with parameters $(\gamma, \beta)$ to a QAOA state with $\ell-1$ layers can be expressed succinctly as~\cite{hadfield2021analytical}
\begin{equation}\label{eq:general_amp_sop}
q_{\ell}(x) = \braket{x|\vec{\gamma},\vec{\beta}}=\sum_y q_{\ell-1}(y) \cos^{n-d_{xy}}\beta(-i\sin\beta)^{d_{xy}} e^{-i\gamma c_y},
\end{equation}
where $d_{xy}$ is the Hamming distance between bitstrings $x$ and $y$, $\cos^{n-d_{xy}}\beta\,(-i\sin\beta)^{d_{xy}}=\bra{x}e^{-i\beta B} \ket{y}$ are the mixing operator matrix elements, $c_y$ is the cost of bitstring $y$, and the sum is over all $2^n$ bitstrings $y$ in the computational basis.
Grouping the terms in Eq.~\eqref{eq:general_amp_sop}, we can rewrite the sum as
\begin{equation}
q_{\ell}(x) = \sum_{d,c} \cos^{n-d}\beta(-i\sin\beta)^d e^{-i\gamma c} \sum_{y|d_{xy}=d, c_y=c} q_{\ell-1}(y)
\end{equation}
where the sum over $d$ runs over $[0,n]$ and the sum over $c$ runs over the set of unique costs allowed by the cost function, which we describe in Sec.~\ref{cost_distributions}. If it is the case that, for each cost $\cp$, the amplitudes $q_{\ell-1}(x)$ of all bitstrings with cost $\cp$ are identical, then we can substitute $q(x)$ with $Q(c(x))$ for all $x$. This is exactly Perfect Homogeneity as described in Def.~\ref{def:perfect_homogeneity}. For the rest of this text we use this upper case $Q(c(x))$ to distinguish a function of cost $c=c(x)$ rather than the bitstring $x$ itself. This substitution yields
\begin{equation}\label{eq:pre_homog_sop}
q_{\ell}(x) = \sum_{d,c} \cos^{n-d}\beta(-i\sin\beta)^d e^{-i\gamma c} Q_{\ell-1}(c) n(x;d,c),
\end{equation}
where $n(x;d,c)$ represents the number of bitstrings with cost $c$ that are of Hamming distance $d$ from $x$. We note that for this equation, and all following equations, this evolution is no longer restricted to unitary evolution. As such, the values $q$ and $Q$ no longer represent strictly quantum amplitudes, but rather track an object that is an analogue to the quantum amplitude. We now introduce the major modification to the sum-of-paths equations. For this, we replace each distribution $n(x;d,c)$ with some distribution $N(\cp;d,c)$, where $c_x=\cp$ and we again adopt the upper case $N$ to distinguish a new distribution that solely depends on the cost of the bitstring. This distribution will generally differ from the original, but in this work we exhibit cases where $N(\cp;d,c)$ is efficiently computable and can adequately replace $n(x;d,c)$ for the purpose of parameter setting. These cases are discussed in Sec.~\ref{n_dists_random} and the effectiveness of the replacement is numerically explored in Sec.~\ref{validity_of_approximation}.
Using $N(\cp;d,c)$ distributions to replace $n(x;d,c)$, then, we can further rewrite the sum as
\begin{equation}\label{eq:homog_sop}
q_{\ell}(x) = Q_{\ell}(\cp) = \sum_{d,c} \cos^{n-d}\beta(-i\sin\beta)^d e^{-i\gamma c} Q_{\ell-1}(c) N(\cp;d,c),
\end{equation}
where $Q_{\ell}(\cp)$ is used to make explicit that the substitutions yield an expression for $q_{\ell}(x)$ that depends solely on the cost $\cp$ of bitstring $x$. This leads to a crucial point for our analysis: if we start in a state observing Perfect Homogeneity, and if the distributions $n(x;d,c)$ can be replaced by distributions $N(\cp;d,c)$, which solely depend on the cost $\cp$ of $x$, then subsequent layers retain the Perfect Homogeneity of the state, yielding analogues of amplitudes $Q(\cp)$. Thus, Eq.~\eqref{eq:homog_sop} is a recursive equation yielding Perfectly Homogeneous analogues of quantum states, which we label the Classical Homogeneous Proxy for QAOA. We note, importantly, that the proxy does not return the amplitude analogue of a particular bitstring $x$, but rather the amplitude analogue shared by all bitstrings with cost $\cp$, as implicit knowledge of which bitstrings $x$ have cost $\cp$ would solve the underlying objective function. In Sec.~\ref{validity_of_approximation} we argue that there are cases where these amplitude analogues $Q(\cp)$ are close to all $q(x)$ with $c_x=\cp$.
\subsection{Cost Distributions}\label{cost_distributions}
In order to evaluate Eq.~\eqref{eq:homog_sop}, we note that the set of costs to which indices $c$ and $\cp$ belong must be defined. Ideally, this set represents exactly the unique objective function values allowed by the underlying cost function. For optimization problems, however, this set is not known in advance; determining it exactly would amount to solving the underlying problem. Instead, we can form a reasonable estimate by determining upper and lower bounds on the cost function value, as well as by excluding values which can be efficiently determined to be unattainable by the cost function. In this work we denote this estimated set of unique costs as $\mathcal{C}$. As an example, for CSPs with binary valued clauses, the cost function counts the number of satisfied clauses, trivially yielding $\mathcal{C}=\{0,1,...,m\}$, where $m$ is the number of clauses. The set $\mathcal{C}$ also allows us to define the initial state $Q_0(\cp)$ for the algorithm. QAOA typically begins with a uniform superposition over all bitstrings $x$, such that $q_0(x) = \frac{1}{\sqrt{2^n}}$ for all $x$. Thus we can set $Q_0(\cp)=\frac{1}{\sqrt{2^n}}$ for all $\cp$ in $\mathcal{C}$.
While Eq.~\eqref{eq:homog_sop} yields analogues of amplitudes $Q_{\ell}(\cp)$, one may also wish to use the Classical Homogeneous Proxy for QAOA to return an estimate of the expected value of the cost function. This requires computing a distribution $P(\cp)$ for all $\cp$ in $\mathcal{C}$, representing the probability a randomly chosen bitstring has cost $\cp$, averaged over the entire class. Much like in the case of computing $\mathcal{C}$, this computation is approximate, as a perfect computation of this distribution would solve the underlying objective function. Examples of estimations of $P(\cp)$ are shown in Sec.~\ref{n_dists_random}. In order to compute our estimate of the expected cost, then, we simply sum over costs $\cp$, weighting by the expected number of bitstrings with cost $\cp$ ($2^n P(\cp)$) and the squared amplitude analogue $|Q_{\ell}(\cp)|^2$,
\begin{equation}\label{eq:homog_sop_cost}
\widetilde{\braket{C}}= \sum_{\cp} 2^n P(\cp)|Q_{\ell}(\cp)|^2 \cp,
\end{equation}
with the tilde indicating the use of the proxy and that this is no longer a strictly normalized quantum expectation value. This is exactly the equation we use for the homogeneous parameter objective function as described in the introduction. Note that $P(c')$ does not give a bona fide probability distribution unless it is re-normalized by dividing by $\sum_{\cp} 2^n P(\cp)|Q_{\ell}(\cp)|^2$ at each step; nevertheless we have found these factors remain close to unity for the parameter setting experiments considered below and so we neglect them going forward, which yields further computational savings. It is straightforward to introduce these factors into Eq.~\ref{eq:homog_sop_cost} if desired in other applications.
The set $\mathcal{C}$ and estimate to cost $\widetilde{\braket{C}}$ are critically determined by the class of problems one is working with. Examples of these values for a sample of classes is given in Sec.~\ref{n_dists_random}.
\subsection{Algorithm for Computing the Classical Homogeneous Proxy for QAOA}
Formalizing the results in this section, we present Algorithm~\ref{alg:homog_apx_evol}, which describes how, given QAOA parameters $p,\vec{\gamma},\vec{\beta}$, a set of possible costs $\mathcal{C}$, and assumed cost distributions $N(\cp;d,c)$, we can compute the Classical Homogeneous Proxy for QAOA using Eqs.~\eqref{eq:homog_sop} and \eqref{eq:homog_sop_cost}.
\begin{algorithm}
\caption{Classical Homogeneous Proxy for QAOA}\label{alg:homog_apx_evol}
\begin{algorithmic}
\STATE \textbf{Input}: $p$, $\vec{\gamma}$, $\vec{\beta}$, $n$, $m$, $\mathcal{C}$, $N(\cp;d,c)$ $\forall \cp\in \mathcal{C}$, $P(\cp)$ $\forall \cp\in \mathcal{C}$.
\STATE \textbf{Output}: $Q_p(\cp) \; \forall \cp\in \mathcal{C}$, optionally $\widetilde{\braket{C}}$
\STATE $Q_0(\cp) \gets 1/\sqrt{2^n} \; \forall \cp \in \mathcal{C}$
\FOR{$\ell$ in $[1,p]$}
\STATE $Q_{\ell}(\cp) \gets \sum_{d,c} \cos^{n-d}\beta_\ell\,(-i\sin\beta_\ell)^d e^{-i\gamma_\ell c} Q_{\ell-1}(c) N(\cp;d,c) \quad \forall \cp \in \mathcal{C}$
\ENDFOR
\STATE If desired, $\widetilde{\braket{C}} \gets \sum_{\cp} 2^n P(\cp)|Q_{p}(\cp)|^2 \cp$
\\\hrulefill
\end{algorithmic}
\end{algorithm}
\emph{Time Complexity}: Given $N(\cp;d,c)$ as well as $Q_{\ell-1}(\cp)$ for all $\cp$ in $\mathcal{C}$, the calculation of each amplitude $Q_{\ell}(\cp)$ using Eq.~\eqref{eq:homog_sop} is $\mathcal{O}(n|\mathcal{C}|)$. Thus, the calculation of all $|\mathcal{C}|$ amplitudes $Q_{\ell}(\cp)$ is $\mathcal{O}(n|\mathcal{C}|^2)$. Computing all $Q_{p}(\cp)$ elements then is $\mathcal{O}(n p |\mathcal{C}|^2)$. If desired, the evaluation of the cost function, given in Eq.~\eqref{eq:homog_sop_cost}, involves a sum over $\mathcal{C}$ elements, thus is $\mathcal{O}(|\mathcal{C}|)$.
Thus, if $|\mathcal{C}|$ is $O(\mathrm{poly}(n))$, then Algorithm \ref{alg:homog_apx_evol} is efficient. Indeed, we show such a case in the following section, demonstrating how to calculate the necessary pre-computations of $\mathcal{C}$, as well as $N(\cp;d,c)$ and $P(\cp)$ for all $\cp$ in $\mathcal{C}$.
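For concreteness, the following Python/NumPy sketch implements Algorithm~\ref{alg:homog_apx_evol} under the assumption (natural for the CSPs considered in this work) that $\mathcal{C}=\{0,1,\dots,m\}$; the arrays \texttt{N} (indexed as \texttt{N[cp, d, c]}) and \texttt{P} are assumed to come from the pre-computation described in Sec.~\ref{n_dists_random} and are supplied here as inputs.
\begin{verbatim}
import numpy as np

def homogeneous_proxy(n, m, gammas, betas, N, P):
    # Classical Homogeneous Proxy for QAOA (sketch), with costs C = {0, ..., m}.
    # N[cp, d, c]: expected number of bitstrings of cost c at Hamming distance d
    #              from a bitstring of cost cp (class average).
    # P[cp]: probability a random bitstring has cost cp (class average).
    costs = np.arange(m + 1)
    dists = np.arange(n + 1)
    Q = np.full(m + 1, 2.0 ** (-n / 2), dtype=complex)   # Q_0(c') = 1/sqrt(2^n)
    for gamma, beta in zip(gammas, betas):
        mix = np.cos(beta) ** (n - dists) * (-1j * np.sin(beta)) ** dists
        phase = np.exp(-1j * gamma * costs) * Q          # e^{-i gamma c} Q_{l-1}(c)
        # homogeneous recursion: Q_l(c') = sum_{d,c} mix[d] * phase[c] * N[c', d, c]
        Q = np.einsum('d,c,pdc->p', mix, phase, N)
    # homogeneous cost estimate, re-normalization factors neglected as in the text
    cost_estimate = float(np.sum(2.0 ** n * P * np.abs(Q) ** 2 * costs))
    return Q, cost_estimate
\end{verbatim}
The returned \texttt{cost\_estimate} is the homogeneous parameter objective function, which may be handed to any classical optimizer over $(\vec{\gamma},\vec{\beta})$ in place of a quantum or exact classical evaluation of the cost expectation value.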
\section{Cost distributions for Randomly Drawn CSPs}
\label{n_dists_random}
In this section we demonstrate how to compute $\mathcal{C}$ and $\widetilde{\braket{C}}$ for a number of random CSPs. For the Classical Homogeneous Proxy of Sec.~\ref{sum_of_paths_perspective}, we assumed that for all $\cp$ in $\mathcal{C}$, we are given distributions $N(\cp;d,c)$ that suitably estimate $n(x;d,c)$ for all $x$ with cost $\cp$. Obtaining these distributions can be viewed as a pre-computation step, and when derived from the properties of a particular problem class they may then be applied for any instance within. Here, we identify common random classes of optimization problems where $N(\cp;d,c)$ can be efficiently computed for the entire class. In particular, we focus on CSPs with a fixed number~$m$ of Boolean clauses, each acting on $k$ variables selected at random from the set of $n$ variables. The types of allowed clauses are determined by the problem, for example SAT problems consider disjunctive clauses. We note that the analysis here generalizes a similar method in \cite{hogg00} applied to 3-SAT, allowing for easy extension to any CSP matching the above criteria. For these problems, the procedure is as follows, noting that \emph{all calculations done here are averaged over the entire class}: we can first rewrite the expected number of bitstrings with cost $c$ at distance $d$ from a random bitstring of cost $\cp$ as,
\begin{equation}\label{eq:rand_n_dist_start}
N(\cp;d,c) = \binom{n}{d}P(c|d,\cp),
\end{equation}
where $P(c|d,\cp)$ represents the probability that a bitstring at distance $d$ from a bitstring with cost $\cp$ has cost $c$. This probability can then be rewritten as
\begin{equation}\label{eq:p_of_cprime_given_d_and_c}
P(c|d,\cp) = \frac{P(\cp,c|d)}{P(\cp)} ,
\end{equation}
where $P(\cp,c|d)$ represents the probability that two bitstrings separated by Hamming distance $d$ have costs $c$ and $\cp$, and $P(\cp)$ represents the probability that a randomly chosen bitstring has cost $\cp$. The numerator can be calculated as follows:
\begin{equation}
P(\cp,c|d) = \sum_{b=\max(0, \cp+c-m)}^{\min(\cp,c)} P(b, \cp-b, c-b | d),
\end{equation}
where $P(b, \cp-b, c-b | d)$ represents the probability that two randomly chosen bitstrings with Hamming distance $d$ have $b$ satisfied clauses in common, along with costs $\cp$ and $c$, respectively. This expression can be evaluated via the multinomial distribution
\begin{equation}
P(b, \cp-b, c-b | d) = \frac{m!}{b!(\cp-b)!(c-b)!(m+b-(\cp+c))!} \Prob{both}^b \Prob{one}^{(\cp+c)-2b} \Prob{neither}^{m+b-(\cp+c)},
\end{equation}
where $\Prob{both}$, $\Prob{one}$, and $\Prob{neither}$ represent the probability that a randomly selected clause is satisfied by both, one, or neither of the bitstrings separated by Hamming distance $d$, respectively. Since $\Prob{both} + 2\Prob{one} + \Prob{neither} = 1$, one only needs to calculate two of these three variables.
The previous equations then allow computing $N(\cp;d,c)$ for any value $\cp$ as
\begin{equation}\label{eq:rand_n_dist_end}
N(\cp;d,c) = \binom{n}{d} \frac{1}{P(\cp)}\sum_{b=\max(0, \cp+c-m)}^{\min(\cp,c)} \frac{m!}{b!(\cp-b)!(c-b)!(m+b-(\cp+c))!} \Prob{both}^b \Prob{one}^{(\cp+c)-2b} \Prob{neither}^{m+b-(\cp+c)}.
\end{equation}
We summarize this approach in Algorithm~\ref{alg:homog_apx_precomp}.
\begin{algorithm}
\caption{Classical Homogeneous Proxy for QAOA Pre-computation for Randomly Drawn Cost Function}\label{alg:homog_apx_precomp}
\begin{algorithmic}
\STATE \textbf{Input}: $n$, $m$, Description of problem class
\STATE \textbf{Output}: $\mathcal{C}$; $N(\cp;d,c)$, $P(\cp)$ $\forall \cp\in \mathcal{C}$
\STATE Determine a suitable $\mathcal{C}$ via Sec.~\ref{cost_distributions}.
\STATE Compute $\Prob{both}$, $\Prob{one}$, $\Prob{neither}$ and $P(\cp) \; \forall \cp\in \mathcal{C}$ given the problem class
\STATE Use these values to compute $N(\cp;d,c) \; \forall \cp\in \mathcal{C}$ via Eq.~\eqref{eq:rand_n_dist_end}
\\\hrulefill
\end{algorithmic}
\end{algorithm}
\emph{Time Complexity}: There are $|\mathcal{C}|$ distributions $N(\cp;d,c)$ with $(n+1)|\mathcal{C}|$ elements each. For fixed $\cp$, $d$, and $c$, we must sum over up to $|\mathcal{C}|$ terms, each of which can be evaluated in $\mathcal{O}(|\mathcal{C}|)$ steps, dominated by the evaluation of factorials of numbers of size $\mathcal{O}(|\mathcal{C}|)$. Thus the evaluation of all distributions is $\mathcal{O}(n|\mathcal{C}|^4)$. We once again note that if $|\mathcal{C}|$ is $O(\mathrm{poly}(n))$, Algorithm \ref{alg:homog_apx_precomp} runs in polynomial time.
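As an illustration, Eq.~\eqref{eq:rand_n_dist_end} can be transcribed directly into Python as below; the clause probabilities $\Prob{both}$, $\Prob{one}$, and $\Prob{neither}$ (which depend on $d$ and on the problem class) and $P(\cp)$ are assumed to be supplied by the caller, as in Algorithm~\ref{alg:homog_apx_precomp}.
\begin{verbatim}
from math import comb, factorial

def N_dist(cp, d, c, n, m, P_cp, p_both, p_one, p_neither):
    """Sketch of Eq. (rand_n_dist_end): expected number of bitstrings at
    Hamming distance d with cost c, given a bitstring of cost cp."""
    total = 0.0
    for b in range(max(0, cp + c - m), min(cp, c) + 1):
        multinom = (factorial(m)
                    / (factorial(b) * factorial(cp - b) * factorial(c - b)
                       * factorial(m + b - (cp + c))))
        total += (multinom * p_both ** b
                  * p_one ** ((cp + c) - 2 * b)
                  * p_neither ** (m + b - (cp + c)))
    return comb(n, d) * total / P_cp
\end{verbatim}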
We now demonstrate Algorithm~\ref{alg:homog_apx_precomp} for several common problem classes.
\subsection{MaxCut}\label{example_maxcut}
We first analyze MaxCut on Erd\H{o}s-R\'{e}nyi random graphs~$\mathcal{G}(n,p_e)$. In this model, a graph $G$ is generated by scanning over all possible $\binom{n}{2}$ edges in an $n$ node graph, and including each edge with independent probability $p_e$. The MaxCut problem on $G$ is to partition the $n$ nodes into two sets such that the number of cut edges crossing the partition is maximized. With respect to the class~$\mathcal{G}(n,p_e)$ of graphs the cost function to be maximized over configurations $x$ is
\begin{equation}\label{eq:maxcut_cost_fn}
c(x) = \sum_{\langle i,j \rangle \in \binom{n}{2}} e_{ij}\,(x_i \oplus x_j),
\end{equation}
where $e_{ij}$ are independent binary random variables that take on value $0$ with probability $1-p_e$ and $1$ with probability $p_e$. We use this cost function to evaluate the relevant distributions in Eq.~\eqref{eq:rand_n_dist_end}. We start by noting that for a fixed graph $G$ with $m$ edges we have $0\leq c(x) \leq m$, and the expected number of edges $\mathbb{E}[m]$ in graphs drawn from~$\mathcal{G}(n,p_e)$ is $p_e \binom{n}{2}$, with a standard deviation proportional to the square root of this value as determined by the binomial distribution. Hence, as $n$ becomes large the expected number of edges concentrates to $p_e \binom{n}{2}$, and so for simplicity in the remainder of this work we set $\mathcal{C}=\{0,1,\dots,\lceil \mathbb{E}[m]\rceil \}$. Note that in practice, to accommodate the possibility of a given instance with maximum cut greater than this quantity, one may enlarge $\mathcal{C}$ by several standard deviations as desired.
We can then easily calculate $P(\cp)$, the probability a random bitstring has cost $\cp$ for Eq.~\eqref{eq:maxcut_cost_fn}. For a bitstring drawn uniformly at random, the probability of satisfying any given clause $x_i \oplus x_j$ is $1/2$, as there are two satisfying assignments ($01$ and $10$), and two non-satisfying assignments ($00$ and $11$). Thus, the probability $P(\cp)$ also follows a binomial distribution and is simply
\begin{equation}\label{eq:pc_maxcut}
P(\cp) = \left(\frac{1}{2}\right)^m \binom{m}{\cp}.
\end{equation}
We can then calculate $\Prob{both}$, $\Prob{one}$ and $\Prob{neither}$, where we show all three for didactic purposes (since $\Prob{both} + 2\Prob{one} + \Prob{neither} =1$). To compute these, consider two randomly chosen bitstrings $y$ and $z$, separated by Hamming distance $d$. We note that there are $(n-d)$ bits in common between $y$ and $z$ and $d$ bits that differ. Thus $y_i \oplus y_j = z_i \oplus z_j$ requires that $i$ and $j$ are either both from the set of $(n-d)$ common bits or both from the set of $d$ differing bits, which has probability
\begin{equation}
\Prob{same} = \frac{\binom{n-d}{2}}{\binom{n}{2}} + \frac{\binom{d}{2}}{\binom{n}{2}}.
\end{equation}
Since $\Prob{same}$ represents the probability that $y_i \oplus y_j = z_i \oplus z_j$, and for a random clause and a random bitstring the probability that $y_i \oplus y_j=1$ is $1/2$ (and likewise for $y_i \oplus y_j=0$ by symmetry), we see that $\Prob{both}=\Prob{neither}=\frac{1}{2}\Prob{same}$. Thus we have
\begin{equation}
\Prob{both} = \Prob{neither} = \frac{1}{2} \left( \frac{\binom{n-d}{2}}{\binom{n}{2}} + \frac{\binom{d}{2}}{\binom{n}{2}} \right).
\end{equation}
For completeness, we can calculate $\Prob{one}$, which is $(1-\Prob{both}-\Prob{neither})/2$ as noted in Sec.~\ref{sum_of_paths_perspective}. For this term we need $y_i \oplus y_j \neq z_i \oplus z_j$, which occurs if one of $i$ or $j$ is chosen from the $(n-d)$ bits in common and the other is chosen from the $d$ differing bits. This probability is
\begin{equation}
\Prob{dif} = \frac{\binom{n-d}{1}\binom{d}{1}}{\binom{n}{2}}.
\end{equation}
Thus, $\Prob{one}$, which is the probability that specifically $y$ satisfies the clause and $z$ does not (rather than either ordering), is half of $\Prob{dif}$, so
\begin{equation}
\Prob{one} = \frac{1}{2} \left( \frac{\binom{n-d}{1}\binom{d}{1}}{\binom{n}{2}} \right).
\end{equation}
Using these quantities, $N(\cp;d,c)$ is then obtained from Eq.~\eqref{eq:rand_n_dist_end}.
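For concreteness, the MaxCut quantities derived in this subsection can be collected as in the following sketch (the interface is an illustrative choice):
\begin{verbatim}
from math import comb

def maxcut_class_probabilities(n, d, m, cp):
    """Sketch of the MaxCut class quantities: P(cp) and the clause
    probabilities for two bitstrings at Hamming distance d."""
    P_cp = 0.5 ** m * comb(m, cp)          # Eq. (pc_maxcut)
    pairs = comb(n, 2)
    # both endpoints among the n-d common bits, or both among the d
    # differing bits (clause value preserved)
    p_same = (comb(n - d, 2) + comb(d, 2)) / pairs
    p_both = p_neither = 0.5 * p_same
    # exactly one endpoint among the d differing bits (clause value flips)
    p_one = 0.5 * (n - d) * d / pairs
    return P_cp, p_both, p_one, p_neither
\end{verbatim}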
\subsection{MaxE3Lin2/Max-3-XOR}
We next consider the MaxE3Lin2 problem, which generalizes MaxCut. QAOA for MaxE3Lin2 was analyzed by Farhi et al. \cite{farhi14}, who showed an advantage over the best classical approximation algorithms known at the time, which in turn inspired better classical approaches \cite{barak2015beating,hastings19}. The analogous random class of MaxE3Lin2 problems has cost function
\begin{equation}\label{eq:maxe3lin2}
c(x) = \sum_{a<b<c} d_{abc}(x_a \oplus x_b \oplus x_c \oplus z_{abc}),
\end{equation}
where the $z_{abc}$ and $d_{abc}$ are independent random variables with equal probability of being $0$ or $1$.
Using Eq.~\eqref{eq:maxe3lin2} we can again calculate the relevant probability distributions. First note that a random bitstring $x$ will satisfy an individual clause (i.e., term in the sum) with probability $1/2$, as $(x_a + x_b + x_c) \mod 2$ has an equal probability of being $0$ or $1$. Thus,
\begin{equation}
P(\cp) = \left(\frac{1}{2}\right)^m \binom{m}{\cp},
\end{equation}
using the binomial distribution. Then we note that, as in Sec.~\ref{example_maxcut}, the probability that $(y_a + y_b + y_c) \mod 2 = (z_a + z_b + z_c) \mod 2$ for two random bitstrings $y,z$ with $d(y,z)=d$ is given by
\begin{equation}
\Prob{same} = \frac{\binom{n-d}{3}+\binom{d}{2}\binom{n-d}{1}}{\binom{n}{3}},
\end{equation}
since the value of $(x_a + x_b + x_c) \mod 2$ is preserved if none or two of $x_a,x_b,x_c$ are flipped, which is satisfied if $a,b,c$ are all from the $n-d$ identical bits, or two of $a,b,c$ are chosen from the $d$ differing bits. Thus, we can easily compute $\Prob{both}=\Prob{neither}=\Prob{same}/2$, since $(y_a + y_b + y_c) \mod 2 = (z_a + z_b + z_c) \mod 2$ is equally likely to equal $0$ or $1$. $\Prob{one}$ can then be calculated by a similar argument or by taking $\Prob{one}=(1-\Prob{both}-\Prob{neither})/2$. We note that the probability distributions calculated here are exactly equivalent to those for the Max-3-XOR problem, where all $z_{abc}$ in Eq.~\eqref{eq:maxe3lin2} are taken to be $1$.
\subsection{MaxEkLin2/Max-k-XOR}
The MaxEkLin2 problem is a further generalization of MaxE3Lin2 to clauses of fixed size $k$, where we replace $a,b,c$ with $a_1,\dots,a_k$ in the cost function and the sum is taken over hyperedges of size~$k$. This class of problems was previously studied for QAOA in~\cite{marwaha2022bounds,chou2022limitations}. For each $k$, $P(\cp)$ is the same as for MaxE3Lin2 above. However, $\Prob{same}$ is given by
\begin{equation}
\Prob{same} = \frac{\sum_{l=0}^{\lfloor k/2 \rfloor}\binom{d}{2l}\binom{n-d}{k-2l}}{\binom{n}{k}},
\end{equation}
where the terms in the sum represent all possible ways to choose an even number of the $k$ clause variables from the $d$ differing bits, so that the clause value is preserved. Then we again have $\Prob{both}=\Prob{neither}=\Prob{same}/2$ and $\Prob{one}=(1-\Prob{both}-\Prob{neither})/2$. A similar calculation for the Max-k-XOR problem again yields identical probability distributions.
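A minimal sketch of this expression (assuming $k\le n$) reads:
\begin{verbatim}
from math import comb

def p_same_kxor(n, d, k):
    """Sketch: probability a random size-k clause takes the same value on
    two bitstrings at Hamming distance d (MaxEkLin2 / Max-k-XOR)."""
    total = sum(comb(d, 2 * l) * comb(n - d, k - 2 * l)
                for l in range(k // 2 + 1))
    return total / comb(n, k)
\end{verbatim}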
\subsection{Rand-k-SAT}
The case of random k-SAT is analyzed by Hogg in \cite{hogg00}. This cost function is defined as the sum over $m$ clauses of a logical OR of $k$ variables randomly drawn from a set of $n$ variables, each of which may be negated.
In the notation used in this paper, the distributions of interest are
\begin{equation}
P(\cp) = 2^{-km}(2^{k}-1)^{m-\cp}\binom{m}{\cp},\quad \Prob{both}=\frac{2^{-k}\binom{n-d}{k}}{\binom{n}{k}},\quad \Prob{one}=2^{-k}-\Prob{both},\quad
\Prob{neither}=1-2\Prob{one}-\Prob{both}.
\end{equation}
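A direct transcription of these expressions is sketched below; note that the quoted form of $P(\cp)$ is consistent with $\cp$ counting the unsatisfied clauses of an assignment, which is the convention we assume in this illustrative snippet.
\begin{verbatim}
from math import comb

def rand_ksat_probabilities(n, d, m, k, cp):
    """Sketch of the random k-SAT quantities quoted above (following Hogg),
    assuming cp counts unsatisfied clauses."""
    # algebraically equal to 2^{-km} (2^k - 1)^{m - cp} binom(m, cp)
    P_cp = comb(m, cp) * (2.0 ** -k) ** cp * (1 - 2.0 ** -k) ** (m - cp)
    p_both = 2.0 ** -k * comb(n - d, k) / comb(n, k)
    p_one = 2.0 ** -k - p_both
    p_neither = 1.0 - 2.0 * p_one - p_both
    return P_cp, p_both, p_one, p_neither
\end{verbatim}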
\section{Homogeneous Heuristic for Parameter Setting}\label{homogeneous_parameter_setting}
Leveraging the Classical Homogeneous Proxy for QAOA, here we propose a strategy for finding good algorithm parameters, which we call the Homogeneous Heuristic for Parameter Setting, as pictured in Fig.~\ref{fig:flowchart} and formalized in Algorithm~\ref{alg:homog_param_set}.
\begin{algorithm}[h]
\caption{Homogeneous Heuristic for Parameter Setting}\label{alg:homog_param_set}
\begin{algorithmic}
\STATE \textbf{Input}: $p$, $\vec{\gamma_{\mathrm{in}}}$, $\vec{\beta_{\mathrm{in}}}$, $n$, $m$, $\mathcal{C}$, $N(\cp;d,c)$ $\forall \cp\in \mathcal{C}$, $P(\cp)$ $\forall \cp\in \mathcal{C}$, constraints on $(\vec{\gamma}, \vec{\beta})$.
\STATE \textbf{Output}: $\vec{\gamma_{\mathrm{out}}}$, $\vec{\beta_{\mathrm{out}}}$
\STATE{$\vec{\gamma}, \vec{\beta} \gets \vec{\gamma_{\mathrm{in}}}, \vec{\beta_{\mathrm{in}}}$ }
\WHILE{Desired}
\STATE{1) Using any suitable parameter update scheme, perform one update of $(\vec{\gamma}, \vec{\beta})$}
\STATE{2) Evaluate the homogeneous parameter objective function (Eq.~\eqref{eq:homog_sop_cost}) using the Classical Homogeneous Proxy for QAOA for all $(\vec{\gamma}, \vec{\beta})$ required to perform the next parameter update in step 1)}
\ENDWHILE
\STATE{$\vec{\gamma_{\mathrm{out}}}, \vec{\beta_{\mathrm{out}}} \gets \vec{\gamma}, \vec{\beta}$ }
\\\hrulefill
\end{algorithmic}
\end{algorithm}
Here, a ``suitable parameter update scheme'' is intended to encapsulate a wide variety of general or specific approaches proposed for this task in the literature, ``Desired'' denotes that the while loop can be iterated until the update scheme terminates or some desired convergence criterion is met, and ``Constraints on $(\vec{\gamma}, \vec{\beta})$'' denotes any restrictions on the domain of values allowed for $\vec{\gamma}, \vec{\beta}$, including restrictions to schedules of a prescribed form such as the linear ramps introduced in Sec.~\ref{numerical_overlaps}.
With the heuristic, we replace the typical cost expectation value with the homogeneous parameter objective function, where each function evaluation takes time as determined by Algorithms~\ref{alg:homog_apx_evol} and~\ref{alg:homog_apx_precomp}. This heuristic is purposefully defined in broad terms, in order to maintain complete freedom in the choice of parameter update schemes. Thus, one can still apply a myriad of approaches explored in parameter setting literature, such as parameter initialization, re-parameterization, and the use of different global or local optimization algorithms.
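As one deliberately simple instantiation of Algorithm~\ref{alg:homog_param_set}, the sketch below pairs the homogeneous objective with an off-the-shelf SciPy optimizer. It reuses the \texttt{homogeneous\_proxy} sketch given after Algorithm~\ref{alg:homog_apx_evol}; the choice of BFGS and of a linear-ramp initialization are illustrative assumptions, not prescriptions.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def homogeneous_parameter_setting(p, n, costs, N, P, x0=None):
    """Sketch of the Homogeneous Heuristic with a generic update scheme."""
    if x0 is None:
        x0 = np.concatenate([np.linspace(0.1, 0.5, p),   # initial gammas
                             np.linspace(0.5, 0.1, p)])  # initial betas
    def objective(x):
        gammas, betas = x[:p], x[p:]
        _, cost_estimate = homogeneous_proxy(gammas, betas, n, costs, N, P)
        return -cost_estimate        # maximize the proxy cost estimate
    res = minimize(objective, x0, method='BFGS')
    return res.x[:p], res.x[p:]
\end{verbatim}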
On the other hand, we emphasize that while our approach can significantly speed up the parameter setting task, it is by no means a panacea. Indeed, in cases where the number of parameters to optimize grows with the problem size (e.g., when $p=n$), this problem suffers generically from the curse of dimensionality, as well as other potential difficulties such as barren plateaus or plentiful local optima. Hence the incorporation of a variety of parameter setting strategies or approximations that seek to ameliorate these difficulties within our approach is well-motivated.
\section{Numerically Investigating the Classical Homogeneous Proxy for QAOA for MaxCut}\label{validity_of_approximation}
In this section, we explore the application of the Classical Homogeneous Proxy for QAOA to MaxCut on Erd\H{o}s-R\'{e}nyi $\mathcal{G}(n, p_e)$ model graphs as considered in Sec.~\ref{example_maxcut}. We first numerically study the accuracy of replacing $n(x;d,c)$ distributions with $N(\cp;d,c)$ distributions as calculated via the methods presented in Sec.~\ref{n_dists_random}. We then numerically show that the proxy maintains large overlaps with full classical statevector simulation of QAOA for certain parameter schedules. Finally, we provide a toy example for a small graph at $p=3$, empirically showing that the homogeneous and typical parameter objective function correlate significantly for important parameter regimes.
\subsection{Viability of Replacement Distance and Cost Distributions}
For these experiments, we first choose ten $\mathcal{G}(10,1/3)$ Erd\H{o}s-R\'{e}nyi model graphs, and calculate the $n(x;d,c)$ for all bitstrings $x$ in $\{0,1\}^n$, all $d$ in $[0,n]$, and all $c$ in $[0,m]$, with $m$ being an upper bound on the maximum cost (here we use $p_e \binom{n}{2}$, as described in Sec.~\ref{example_maxcut}, which in this case gives $m=15$). For each cost $\cp$, we consider $n(x;d,c)$ for all states $x$ with cost $\cp$ across all $10$ graphs. In order to evaluate the viability of replacing $n(x;d,c)$ with $N(\cp;d,c)$, we present the following intuition: the better that $N(\cp;d,c)$ estimates the average of $n(x;d,c)$ over all $x$ with $c(x)=\cp$, and the less that $n(x;d,c)$ deviates over $x$ with $c(x)=\cp$, the better $N(\cp;d,c)$ should estimate $n(x;d,c)$ for all $x$ with $c(x)=\cp$. We thus aim to numerically demonstrate the extent to which the analytically derived $N(\cp;d,c)$ estimate the average $n(x;d,c)$, and the extent to which $n(x;d,c)$ deviates from its average. We first demonstrate the latter. To do this, we take the element-wise averages of these distributions. This average is one way of computing the distributions $N(\cp;d,c)$, as described in Sec.~\ref{sum_of_paths_perspective}. We also take the element-wise standard deviations of these distributions. In Fig.~\ref{fig:n10_er_p3overn_n_dev_ten_graphs} we display the element-wise ratio of standard deviation to mean of these distributions for $\cp=7$, chosen because $P(\cp)$ is maximal near $7$, such that this term has large weight in Eq.~\eqref{eq:homog_sop}.
\begin{figure}
\caption{Heat map of the standard deviation/mean of $N(\cp;d,c)$ distributions for 10 instances of $\mathcal{G}(10,1/3)$ Erd\H{o}s-R\'{e}nyi model graphs.}
\label{fig:n10_er_p3overn_n_dev_ten_graphs}
\end{figure}
From the figure, we see that for costs near $m/2$ ($7.5$) and distances near $n/2$ ($5$), there is minimal deviation in the $N(\cp;d,c)$ distributions among multiple instances of Erd\H{o}s-R\'{e}nyi model graphs and multiple bitstrings $x$ with cost $\cp$. We note that the relative deviation is highest at the edges of the plot, where $d \rightarrow 0$, $d \rightarrow n$, $c \rightarrow c_{min}$, and $c \rightarrow c_{max}$. We note, however, that at these points the expected value of $n(x;d,c)$ is smaller, such that the contribution of these deviations to the sum in Eq.~\eqref{eq:pre_homog_sop} is less than that of terms with distance and cost near the center of the distribution. As an example, there are $\binom{10}{5}=252$ bitstrings at distance $5$ from a given bitstring $x$, as opposed to $\binom{10}{1}=10$ bitstrings at distance $1$. A similar argument can be made using the values of $P(\cp)$ derived in Sec.~\ref{n_dists_random}. This numerical evidence thus suggests that replacing $n(x;d,c)$ with $N(\cp;d,c)$ determined via an averaging over all $x$ with cost $\cp$ over multiple instances may introduce deviations from Eq.~\eqref{eq:homog_sop} precisely in the cases that contribute least to Eq.~\eqref{eq:general_amp_sop}, allowing for near-homogeneous evolution.
This result provides evidence that the deviation in $n(x;d,c)$ is small for the \lq\lq bulk\rq\rq\ of contributions to the sum in Eq.~\eqref{eq:general_amp_sop}, such that we can replace $n(x;d,c)$ with $N(\cp;d,c)$, the average over $x$ with $c(x)=\cp$ for the \emph{entire class} of problems. We then would like to understand how the analytically derived expressions for $N(\cp;d,c)$ in Sec.~\ref{n_dists_random} estimate these averaged distributions. We perform the comparison of $N(\cp;d,c)$ computed by averaging and the methods in Sec.~\ref{n_dists_random} for MaxCut on Erd\H{o}s-R\'{e}nyi model graphs in Fig.~\ref{fig:n_apx_comparison}, showing the Pearson correlation coefficient between the two distributions for each $\cp$ in $[0,m]$. We likewise display $P(\cp)$, in order to elucidate the dominant distributions in the sum of Eq.~\eqref{eq:homog_sop}.
\begin{figure}
\caption{(middle) Pearson Correlation Coefficients between $N(\cp;d,c)$, calculated through the averaging over 10 $\mathcal{G}(10,1/3)$ graphs, and via the analytical methods of Sec.~\ref{n_dists_random}.}
\label{fig:n_apx_comparison}
\end{figure}
From the figure we see that for dominant terms (with high $P(\cp)$), the two distributions align well visually, corresponding to a high correlation coefficient. For less important terms, the analytical distributions do not match the average over many instances, but the effect of this mismatch may be reduced due to the lesser weight on these terms, as determined by $P(\cp)$, in the sum of Eq.~\eqref{eq:homog_sop}.
Combined, Figs.~\ref{fig:n10_er_p3overn_n_dev_ten_graphs} and~\ref{fig:n_apx_comparison} show that for dominant terms, there is little deviation in $n(x;d,c)$ distributions from their average $N(\cp;d,c)$, and the analytically computed distributions match these average distributions well. Thus, they together indicate that the analytical methods for calculating $N(\cp;d,c)$ should approximate $n(x;d,c)$ with $c(x)=\cp$ for terms that dominate the sum in Eq.~\eqref{eq:homog_sop}.
\subsection{Numerical Overlaps}\label{numerical_overlaps}
To further investigate the viability of the Classical Homogeneous Proxy for QAOA, we perform numerical simulations of the proxy on MaxCut on Erd\H{o}s-R\'{e}nyi graphs $\mathcal{G}(10, .5)$, and display the squared overlap between classical full statevector simulation and the proxy as a function of $p$ for various parameter schedules motivated by QAOA literature.
For this analysis, we choose linear ramp parameter schedules, inspired by quantum annealing. In particular, we fix a starting and ending point for $\vec{\gamma}$ and $\vec{\beta}$, which is kept constant regardless of $p$, and the schedule is defined as
\begin{equation}\label{eq:ramp_schedule}
\gamma_{\ell} = \gamma_1 + (\gamma_f - \gamma_1)\frac{\ell}{p}, \quad
\beta_{\ell} = \beta_1 + (\beta_f - \beta_1)\frac{\ell}{p}
\end{equation}
for each layer ${\ell}$. Given these linear ramp schedules, the squared overlaps between the proxy and the exact quantities are calculated as follows. First, statevector simulation is performed using HybridQ, an open-source package for large-scale quantum circuit simulation \cite{mandra21}. Next, the $N(\cp;d,c)$ distributions are computed according to Sec.~\ref{n_dists_random}. Then, for the purposes of comparison, we compute the proxy slightly differently from above, by starting with the uniform superposition over all $2^n$ bitstrings and simply plugging in $N(c(x);d,c)$ for all $n(x;d,c)$ in Eq.~\eqref{eq:pre_homog_sop}, keeping all $2^n$ amplitudes $q^{\ell}(x)$ at each step $\ell$. This allows us to compute the overlap between the proxy and the true state using the standard quantum state overlap, although we lose the gain in complexity from performing the proxy on only the set of unique costs, as we are working with $2^n$ bitstrings rather than $|\mathcal{C}|$ costs. Then, using this method, we plot the squared overlaps between the proxy and the exact state as a function of $p$ with varying values of $\gamma_1$ and $\gamma_f$ in Fig.~\ref{fig:ramp_homog}.
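For reference, the ramp schedules of Eq.~\eqref{eq:ramp_schedule}, implemented exactly as written, can be generated by the following illustrative helper:
\begin{verbatim}
import numpy as np

def linear_ramp(gamma_1, gamma_f, beta_1, beta_f, p):
    """Sketch of the linear ramp schedule of Eq. (ramp_schedule)."""
    ell = np.arange(1, p + 1)
    gammas = gamma_1 + (gamma_f - gamma_1) * ell / p
    betas = beta_1 + (beta_f - beta_1) * ell / p
    return gammas, betas
\end{verbatim}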
\begin{figure}
\caption{Squared overlap between $p=20$ MaxCut QAOA run on a $\mathcal{G}(10,0.5)$ graph and the Classical Homogeneous Proxy.}
\label{fig:ramp_homog}
\end{figure}
From the figure, we can see that the overlap gradually decreases as the number of QAOA layers increases. However, the decline is less dramatic when $\gamma_1$ and $\gamma_f$ are lower in magnitude. Thus, we see that as we move towards the $\gamma \ll 1$ regime for these problems (or, more precisely $\gamma \ll \|C\|$~\cite{hadfield2021analytical}), the true QAOA state remains closer to the proxy even as the algorithm progresses, as remarked in Sec.~\ref{homogeneous_approximation}. We stress that this behavior is empirical and the numerics are limited to the MaxCut examples analyzed presently. This behavior does, however, align with the analytical fact mentioned in Sec.~\ref{homogeneous_approximation} that to first order in $\gamma$, QAOA states are Perfectly Homogeneous for strictly k-local Hamiltonians.
\subsection{Parameter Objective Function Landscapes at Low Depth}\label{landscapes}
In order to provide an explicit illustrative example of the efficacy of the Homogeneous Heuristic for Parameter Setting in Alg.~\ref{alg:homog_param_set}, we depict both the typical and homogeneous parameter objective functions as a function of $\gamma_3$ and $\beta_3$ for a randomly drawn $\mathcal{G}(8,1/2)$ graph and QAOA with $p=3$. In this example, our aim is to visualize similarities between the two parameter objective functions rather than to exhaustively find optimal parameters for the graph. As such, we borrow $\gamma_1$, $\gamma_2$, $\beta_1$ and $\beta_2$, optimized from a $20$ node instance in Sec.~\ref{low_depth}. These landscapes are shown in Fig.~\ref{fig:n8_er_landscapes}.
\begin{figure}
\caption{Parameter objective function landscapes displayed as heat maps as a function of $\gamma_3$ and $\beta_3$ for $p=3$ MaxCut QAOA on a $\mathcal{G}(8,1/2)$ graph.}
\label{fig:n8_er_landscapes}
\end{figure}
It is visually clear that the two landscapes have significant differences. For the typical parameter objective function, there exists a clearly defined, Gaussian-like peak (yellow) and valley (blue). For the homogeneous parameter objective function, there exists a similarly-located peak, albeit vertically compressed, and the corresponding valley comprises almost the entire rest of the landscape. However, we can see that in the small $\gamma$ regime in particular, the landscapes qualitatively look very similar. This behavior is suggested by Fig.~\ref{fig:ramp_homog}, where we see quantitatively that the extent to which the Classical Homogeneous Proxy for QAOA overlaps with the true QAOA evolution grows as $\gamma$ decreases.
Additionally, as seen in previous studies of QAOA parameters \cite{zhou20, crooks18, shaydulin21_qaoakit, lotshaw21}, optimal values of $\gamma$ tend to remain relatively small (exact values are not described as they depend on $n$, $p$, and the cost function being used), especially at the beginning of the algorithm. This suggests that, while the homogeneous and typical parameter objective functions may deviate significantly in general, they maintain significant correlations in \emph{relevant} parameter regimes, specifically those which are near the maximum in the landscape. Indeed, for the task of parameter setting, we expect qualitative feature mapping of the landscape to be much more important than a precise matching of objective function values.
It is also worthwhile to consider the difference in computational resources needed to produce the two plots. For the typical parameter objective function, in order to classically evaluate the evolution of the algorithm in each cell of the presented 30 by 30 landscapes, we perform full statevector simulation. Farhi et al. show in \cite{farhi14} that, in order to compute Pauli observable expectation values for the typical parameter objective function, one only needs to include qubits that are within the reverse causal cone generated by the qubits involved in the observable and the quantum circuit implementing QAOA. However, for the example analyzed here, at $p=3$, this reverse causal cone includes all $8$ qubits for each observable, so in order to classically compute the evolution, we perform a full-state simulation. In comparison, deriving the evolution of the proxy took roughly one-fiftieth of the time required for simulating full QAOA. We note that it is possible to efficiently evaluate each cell on an actual quantum computer, and that if one only wants expectation values given parameters rather than the full state evolution, there are more efficient classical methods (e.g., \cite{farhi14,wang18}). Current difficulties for this approach, however, include noise resulting both from finite sampling error as well as from the effects of imperfect quantum hardware.
\section{Results}\label{results}
In this section we present numerical evidence supporting the Homogeneous Heuristic for Parameter Setting, again using MaxCut on Erd\H{o}s-R\'{e}nyi model graphs as a target application. Due to the array of possible techniques implementing the parameter update scheme as mentioned in Sec.~\ref{homogeneous_parameter_setting}, we do not wish to provide an exhaustive comparison of the heuristic to previous literature, but rather demonstrate regimes where the heuristic provides parameters that are either comparable with previous results, or that yield increasing performance out to larger values of $p$ where we are not aware of prior methods in the literature successfully returning well-performing parameter schedules.
\subsection{Global Optimization at Low-Depth}\label{low_depth}
Here we present results of the Homogeneous Heuristic for Parameter Setting, as well as comparisons to the transfer of parameters method outlined in Lotshaw et al. \cite{lotshaw21}, implemented using the QAOAKit software package \cite{shaydulin21_qaoakit}. Lotshaw et al. show that using one set of median (over the entire dataset at a given $n$ and $p$) parameters performs similarly to optimized parameters for each instance. Thus, we directly pull the obtained parameters from QAOAKit, which first calculates optimal parameters for all connected non-isomorphic graphs up to size $9$ at $p=1$, $2$, and $3$. For each~$p$, the median over all parameters is calculated, and these median parameters are directly applied to ten Erd\H{o}s-R\'{e}nyi graphs from $\mathcal{G}(20,1/2)$, yielding average and standard deviation of expectation values for these median parameters over the ten graphs. To compare with these transferred parameters, we display the approximation ratio achieved by parameters that are optimized with the heuristic, as described in Sec.~\ref{homogeneous_parameter_setting}, over the same ten graphs. Here the approximation ratio is defined as follows,
\begin{equation}
\mathrm{Apx \; Ratio} = \braket{C}/c_{opt},
\end{equation}
i.e., the expected cost value returned by true QAOA, divided by the true optimal value $c_{opt}$, as determined via brute force search over all $2^n$ bitstrings. For this experiment, as well as all experiments below, the state throughout the algorithm and $\braket{C}$ are computed via full statevector simulation, even for parameters returned via the heuristic. The comparison between the heuristic and parameter transfer is shown in Fig.~\ref{fig:apx_by_p_comparison}.
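For clarity, the approximation ratio as defined here can be evaluated as in the following sketch, where (as in the text) $c_{opt}$ for MaxCut is obtained by brute force over all $2^n$ bitstrings; the helper names are illustrative.
\begin{verbatim}
from itertools import product

def maxcut_cost(edges, x):
    """Number of edges cut by the assignment x (a tuple of 0/1 values)."""
    return sum(1 for i, j in edges if x[i] != x[j])

def approximation_ratio(edges, n, expected_cost):
    """Sketch: <C> / c_opt, with c_opt found by brute force."""
    c_opt = max(maxcut_cost(edges, x) for x in product((0, 1), repeat=n))
    return expected_cost / c_opt
\end{verbatim}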
\begin{figure}
\caption{Box plots of approximation ratios obtained by the transfer of median of optimal parameters from $10$ $\mathcal{G}(20,1/2)$ graphs.}
\label{fig:apx_by_p_comparison}
\end{figure}
As we can see in Fig.~\ref{fig:apx_by_p_comparison}, for low depth the heuristic performs comparably to parameter transfer. On an instance-by-instance basis, the approximation ratio achieved by homogeneous parameter setting minus that achieved by parameter transfer was $-.0037 \pm .0062$, $.0164 \pm .0148$, and $.0097 \pm .0183$ for $p=1$, $2$, and $3$, respectively. We do not see statistically significant differences between the two methods at any of the three depths analyzed, although the average performance of the heuristic is slightly higher in the latter two cases. This numerical evidence indicates that the method is competitive. Furthermore, the optimal parameters in the transfer case require the optimization of smaller QAOA instances, which may incur a tradeoff between the size of problem one wishes to train parameters on and the accuracy of the parameter transfer onto larger and larger instances. The parameters for comparison were pulled directly from QAOAKit data tables, so our purpose here is not to provide a full timing comparison between the two methods. However, this demonstrates that our polynomially-scaling heuristic performs comparably with other techniques used in the literature.
\subsection{Parameter Optimization at Higher Depth}\label{high_depth}
To elucidate how the Homogeneous Heuristic for Parameter Setting scales with QAOA depth~$p$, we further depict box plots of the approximation ratio for a new set of ten Erd\H{o}s-R\'{e}nyi graphs $\mathcal{G}(20,.5)$ at $p$ up to $20$. For these experiments, we further restrict to linear ramp parameter schedules as described in Eq.~\eqref{eq:ramp_schedule}, reducing the number of parameters from $2p$ to~$4$. We introduce this re-parameterization because having $2p$ free parameters, even at this moderate $p$, results in optimization routines that do not converge in a reasonable time on the hardware specified below. The results for these runs are shown in Fig.~\ref{fig:apx_by_p_ramp_sched}.
\begin{figure}
\caption{Box plots of approximation ratios for parameters found via homogeneous parameter setting for the same $10$ $\mathcal{G}(20,.5)$ graphs.}
\label{fig:apx_by_p_ramp_sched}
\end{figure}
From this figure, we see that the heuristic, when implemented with linear ramp schedules, results in monotonic improvement of approximation ratios as $p$ increases. Notably, for this regime of $n=20$, $p=20$, we were not able to find previous works that efficiently return optimized parameter schedules, even when restricted to linear ramps. Thus, these results demonstrate a regime in which the heuristic is able to return well-performing parameters where doing so appears intractable for current devices and strategies, whether quantum, classical, or hybrid.
\textit{Numerical details}: For our simulations in this section, all calculations (excluding those pulled from the QAOAKit database) were performed using a laptop with Intel i7-10510U CPUs @ 1.80GHz and 16 GB of RAM, with no parallelization utilized. For the $20$ node graphs, all experiments clocked in under $6$ hours, where the longest times were for fully parameterized $p=3$ circuits (6 parameters). Parameters were seeded using linear ramp schedules from \cite{zhou20} and parameter optimization was performed using the standard Broyden–Fletcher–Goldfarb–Shanno algorithm \cite{fletcher87} from the SciPy package \cite{virtanen20}.
\section{Discussion}\label{discussion}
In this work we formalized the concepts of Perfect Homogeneity and the Classical Homogeneous Proxy for QAOA. We demonstrated how to derive the necessary quantities and efficiently evaluate the proxy for constraint satisfaction problems with a fixed, polynomial number of randomly chosen clauses. We then provided numerical evidence to support the use of the proxy for estimating the evolution and cost expectation value of QAOA. Finally, we applied these results to construct the Homogeneous Heuristic for Parameter Setting, and implemented this strategy for a class of MaxCut instances on graphs up to $n=20$ and $p=20$. Our results show that the heuristic on this class easily yields parameters at $p=1$, $2$, and $3$ that are comparable to those returned by parameter transfer. We further demonstrated that we are able to optimize parameters out to $p=20$ by restricting to a linear ramp schedule, obtaining the desirable property of monotonically increasing approximation ratios as the number of QAOA layers is increased. Notably, we found that the proxy seems to estimate both the state and cost expectation of QAOA well in the particular cases when $\gamma$ remains relatively small throughout the algorithm, as well as for quantum annealing-inspired linear ramp schedules. These ramp schedules have frequently been proposed as empirically well-performing schedules \cite{zhou20,lotshaw21,wurtz21_counteradiabaticity}, which supports the idea that the proxy may more accurately estimate QAOA expectation values for important parameter regimes and schedules of interest, even if these estimates may diverge somewhat in the case of arbitrarily chosen parameters.
Several interesting research questions and future directions directly follow from our results. An immediate question is to better understand the relationship between the problem class specified, the resulting distributions $N(\cp;d,c)$ and $P(\cp)$ used for the proxy, and the effect on the parameters returned by the Homogeneous Heuristic for Parameter Setting, especially with respect to a given problem instance to be solved. For example, a fixed instance can be drawn from a number of different possible classes, so changing the class considered can have a significant effect on the parameters returned and the resulting performance. One approach to address this issue would be to extend the derivations of $N(\cp;d,c)$ and $P(\cp)$ to incorporate instance-specific information beyond just the problem class. A naive example in this vein would be to estimate the distributions via Monte Carlo sampling of bitstrings and their costs for the given instance. Furthermore, including instance-specific information appears a promising route to explicitly extending the heuristic beyond random problem classes, which could be used to study parameter schedules and performance in the worst-case setting. Finally, it is worthwhile to explore adaptations of our approach to cases where the number of unique possible costs may become large. In this case, one could imagine binning together costs close in value such that an effective cost function with far fewer possible costs is produced, to which the proxy may be applied.
In terms of generalizing both the methods and scope of our approach, we first re-emphasize that parameter optimization for parameterized quantum circuits consists of two primary components: a parameter update scheme outer loop, and a parameter objective function evaluation subroutine. The inner subroutine is typically evaluated using the quantum computer. The key idea of our approach is to replace the inner subroutine with an efficiently computable classical strategy based on the assumption of Perfect Homogeneity. Hence a natural extension is to consider other efficiently computable proxies for the inner loop. For example, in cases where the problem instance comes with a high degree of classical symmetry, the dimension of the effective Hilbert space can be drastically reduced, and so the evaluation and optimization of the typical parameter objective can be sped up significantly~\cite{shaydulin21_symmetries}. Similarly, different proxies may follow from related ideas and results in the literature such as the small-parameter analysis of~\cite{hadfield2021analytical}, the pseudo-Boltzmann approximation of~\cite{DiezValle22}, and classical or quantum surrogate models \cite{sung2020using,shaffer22}. We remark that a promising direction that appears relatively straightforward in light of our results is to extend the analysis of~\cite{DiezValle22} to QAOA levels beyond $p=1$. Finally, an important direction is to explicitly generalize our approach to algorithms beyond QAOA and, more generally, to problems beyond combinatorial optimization, such as the parameter setting problem for Variational Quantum Eigensolvers. Generally, it is important to better understand and characterize regimes where such classical proxies are most advantageous, such as when the noisy computation and measurements of real-world quantum devices are taken into account, as well as to what degree undesirable effects such as barren plateaus may apply when such proxies are utilized for parameter setting.
\onecolumngrid
\end{document}
\begin{document}
\title{Removing Depth-Order Cycles Among Triangles: \ An Efficient Algorithm
Generating Triangular Fragments}
\begin{abstract}
More than 25 years ago, inspired by applications in computer graphics, Chazelle~\emph{et al.}\xspace (FOCS 1991)
studied the following question: Is it possible to cut any set of $n$ lines
or other objects in ${\mathbb R}^3$ into a
subquadratic number of fragments such that the resulting fragments admit a depth order?
They managed to prove an $O(n^{9/4})$ bound on the number of fragments, but only for
the very special case of bipartite weavings of lines. Since then only little progress
was made, until a recent breakthrough by Aronov and Sharir (STOC 2016) who
showed that $O(n^{3/2}\polylog n)$ fragments suffice for any set of lines.
In a follow-up paper Aronov, Miller and Sharir (SODA 2017) proved an $O(n^{3/2+\varepsilon})$ bound
for triangles, but their method uses high-degree algebraic arcs to perform the cuts.
Hence, the resulting pieces have curved boundaries.
Moreover, their method uses polynomial partitions, for which
currently no algorithm is known.
Thus the most natural version of the problem is still wide open:
Is it possible to cut any collection of $n$ disjoint triangles in ${\mathbb R}^3$ into a subquadratic
number of triangular fragments that admit a depth order? And if so, can we
compute the cuts efficiently?
We answer this question by presenting an algorithm that cuts any set of $n$ disjoint
triangles in ${\mathbb R}^3$ into $O(n^{7/4}\polylog n)$ triangular fragments that
admit a depth order. The running time of our algorithm is $O(n^{3.69})$.
We also prove a refined bound that depends on the number, $K$, of intersections between the
projections of the triangle edges onto the $xy$-plane: we show that
$O(n^{1+\varepsilon} + n^{1/4} K^{3/4}\polylog n)$ fragments suffice to obtain a depth order.
This result extends to $xy$-monotone surface patches bounded by a constant number
of bounded-degree algebraic arcs in general position, constituting the first
subquadratic bound for surface patches. Finally, as a byproduct of our approach
we obtain a faster algorithm to cut a set of lines into $O(n^{3/2}\polylog n)$
fragments that admit a depth order. Our algorithm for lines runs in $O(n^{5.38})$ time,
while the previous algorithm uses $O(n^{8.77})$ time.
\end{abstract}
\section{Introduction}
\label{se:intro}
Let $T$ and $T'$ be two disjoint triangles (or other objects) in ${\mathbb R}^3$.
We say that $T$ is \emph{below} $T'$---or, equivalently, that $T'$ is
\emph{above} $T$---when there is a vertical line~$\ell$ intersecting both $T$ and $T'$
such that $\ell\cap T$ has smaller $z$-coordinate than $\ell\cap T'$. We denote this
relation by $T\prec T'$. Note that two triangles may be unrelated by the
$\prec$-relation, namely when their vertical projections onto the $xy$-plane are disjoint.
Now let $\ensuremath{\mathcal{T}}$ be a collection of $n$ disjoint triangles in ${\mathbb R}^3$.
A \emph{depth order} (for the vertical direction) on $\ensuremath{\mathcal{T}}$ is a total order on
$\ensuremath{\mathcal{T}}$ that is consistent with the
$\prec$-relation, that is, an ordering $T_1,\ldots,T_n$ of the triangles such
that $T_i\prec T_j$ implies $i<j$.
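To make this definition concrete, the following short Python sketch (our illustration, not an algorithm from the works discussed below) extracts a depth order from a precomputed $\prec$-relation by topological sorting, and reports failure exactly when the relation contains a cycle:
\begin{verbatim}
from collections import defaultdict, deque

def depth_order(num_objects, below_pairs):
    """Return an ordering consistent with the below-relation, or None if
    the relation contains a cycle (Kahn's algorithm).
    below_pairs contains pairs (i, j) meaning object i lies below object j."""
    succ = defaultdict(list)
    indeg = [0] * num_objects
    for i, j in below_pairs:
        succ[i].append(j)
        indeg[j] += 1
    queue = deque(k for k in range(num_objects) if indeg[k] == 0)
    order = []
    while queue:
        k = queue.popleft()
        order.append(k)
        for j in succ[k]:
            indeg[j] -= 1
            if indeg[j] == 0:
                queue.append(j)
    return order if len(order) == num_objects else None
\end{verbatim}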
Depth orders play an important role in many applications. For example, the
Painter's Algorithm from computer graphics performs hidden-surface
removal by rendering the triangles forming the objects in a scene one by one,
drawing each triangle ``on top of'' the already drawn ones. To give the correct result the
Painter's Algorithm must handle the triangles in depth order with respect to the viewing direction.
Several object-space hidden-surface removal algorithms and ray-shooting data structures
need a depth order as well. Depth orders also play a role when one wants
to assemble a product by putting its constituent parts one by one into place
using vertical translations~\cite{wl-grma-94}.
The problem of computing a depth order for a given set of objects
has therefore received considerable
attention~\cite{aks-rsisf-95,b-rsdoh-93,bg-vrscd-08,bos-cvdo-94}.
However, a depth order does not always exist since
there can be \emph{cyclic overlap}, as illustrated in Fig.~\ref{fi:cyclic-overlap}(i).
In such cases the algorithms above simply report that no depth order exists.
What we would then like to do is to cut the triangles into fragments
such that the resulting set of fragments is acyclic (that is, admits a depth order).
This gives rise to the following problem: How many fragments are needed in the worst case
to ensure that a depth order exists? And how efficiently can we compute
a set of cuts resulting in a small set of fragments admitting a depth order?
\begin{figure}
\caption{(i) Three triangles with cyclic overlap. (ii) A bipartite weaving.}
\label{fi:cyclic-overlap}
\end{figure}
The problem of bounding the worst-case number of fragments needed to remove
all cycles from the depth-order relation has a long history.
In the special case of lines (or line segments)
one can easily get rid of all cycles using $O(n^2)$ cuts:
project the lines onto the $xy$-plane and cut each line at
its intersection points with the other lines.
A lower bound on the worst-case number of cuts is $\Omega(n^{3/2})$~\cite{cegpsss-ccclr-92}.
It turned out to be amazingly hard to get any subquadratic upper bound.
In 1991 Chazelle~\emph{et al.}\xspace~\cite{cegpsss-ccclr-92} obtained such a bound, but
only for so-called bipartite weavings; see Fig.~\ref{fi:cyclic-overlap}(ii).
Moreover, their $O(n^{9/4})$ bound is
still far away from the $\Omega(n^{3/2})$ lower bound. Later Aronov~\emph{et al.}\xspace~\cite{aks-ctcls-05}
obtained a subquadratic upper bound for general sets of lines, but they only get rid of all triangular
cycles---that is, cycles consisting of three lines---and their bounds are only slightly subquadratic:
they use $O(n^{2-1/69}\log^{16/69} n)$ cuts to remove all triangular cycles.
(They obtained a slightly better bound of $O(n^{2-1/34}\log^{8/17}n)$ for removing
all so-called elementary triangular cycles.) Finally, several authors studied the
algorithmic problem of computing a minimum-size complete cut set---a \emph{complete cut set}
is a set of cuts that removes all cycles from the depth-order relation---for a set of lines
(or line segments). Solan~\cite{s-ccoris-98} and Har-Peled and Sharir~\cite{hs-oplpa-01}
gave algorithms that produce complete cut sets of size roughly $O(n\sqrt{\mbox{{\sc opt}}\xspace})$,
where $\mbox{{\sc opt}}\xspace$ is the minimum size of any complete cut set for the given lines.
Aronov~\emph{et al.}\xspace~\cite{abgm-ccrs-08} showed that
this problem is {\sc np}-hard, and they presented an algorithm that computes
a complete cut set of size $O(\mbox{{\sc opt}}\xspace \cdot \log\mbox{{\sc opt}}\xspace\cdot \log\log\mbox{{\sc opt}}\xspace)$
in $O(n^{4+2\omega} \log^2n)=O(n^{8.764})$ time, where
$\omega<2.373$ is the exponent of the best matrix-multiplication algorithm.
Eliminating depth cycles from a set of triangles
is even harder than it is for lines. The trivial bound
on the number of fragments is $O(n^3)$, which can for instance be obtained by taking a vertical
cutting plane containing each triangle edge. Paterson and Yao~\cite{py-ebsph-90}
showed already in 1990 that any set of disjoint triangles admits a so-called
\emph{binary space partition} (BSP) of size $O(n^2)$, which immediately implies
an $O(n^2)$ bound on the number of fragments needed to remove all
cycles. Indeed, a BSP ensures that the resulting set of triangle fragments is acyclic
for any direction, not just for the vertical direction.
Better bounds on the size of BSPs are known for fat objects (or, more generally,
low-density sets)~\cite{b-lsbsp-00} and for axis-aligned
objects~\cite{agmv-bspfr-00,py-obspo-92,t-bspaafr-08}, but
for arbitrary triangles there is an $\Omega(n^2)$ lower bound on the worst-case
size of a BSP~\cite{c-cpp-86}. Thus to get a subquadratic bound on the number of fragments
needed to obtain a depth order, one needs a different approach.
In 2016, using Guth's polynomial partitioning technique~\cite{g-ppsv-15},
Aronov and Sharir~\cite{as-atbedc-16} achieved a breakthrough in the area
by proving that any set of $n$ lines in ${\mathbb R}^3$ in general position
can be cut into $O(n^{3/2} \polylog n)$ fragments such that the resulting set
of fragments admits a depth order. A complete cut set of size $O(n^{3/2}\polylog n)$ can
then be computed using the algorithm of Aronov~\emph{et al.}\xspace\cite{abgm-ccrs-08} mentioned above.
(They also gave a more refined bound for line segments, which depends on the number
of intersections, $K$, between the segments in the projection. More precisely, they
show that $O(n + n^{1/2} K^{1/2}\polylog n)$ cuts suffice.)
In a follow-up paper, Aronov, Miller and Sharir~\cite{ams-adcat-17}
extended the result to triangles: they show that, for any fixed $\varepsilon>0$,
any set of disjoint triangles in general position can be cut into
$O(n^{3/2+\varepsilon})$ fragments that admit a depth order. This may seem to almost settle the problem
for triangles, but the result of Aronov, Miller and Sharir has two serious drawbacks.
\myitemize{
\item The technique does not result in triangular
fragments, since it cuts the triangles using algebraic arcs. The degree of these
arcs is exponential in the parameter $\varepsilon$ appearing in the $O(n^{3/2+\varepsilon})$ bound.
\item The technique, while being in principle constructive, does not give an efficient
algorithm, since currently no algorithms are known for constructing
Guth's polynomial partitions.
}
Arguably, the natural way to pose the problem for triangles is that one requires
the fragments to be triangular as well---polygonal fragments can always be decomposed
further into triangles, without increasing the number of fragments asymptotically---so
especially the first drawback is a major one. Indeed, Aronov, Miller and Sharir
state that ``\emph{It is a natural open problem to determine
whether a similar bound can be achieved with straight cuts [\ldots].
Even a weaker bound, as long as it is subquadratic and generally applicable, would be of
great significance.}'' Another open problem stated by Aronov, Miller and Sharir
is to extend the result to surface patches: ``\emph{Extending the technique to curved objects (e.g.,
spheres or spherical patches) is also a major challenge.}''
\paragraph{Our contribution.}
We prove that any set $\ensuremath{\mathcal{T}}$ of $n$ disjoint triangles in ${\mathbb R}^3$ can be cut
into $O(n^{7/4}\polylog n)$ triangular fragments that admit a depth order.
Thus we overcome the first drawback of the method of Aronov, Miller and Sharir
(although admittedly our bound is not as sharp as theirs).
We also overcome the second drawback, by presenting an
algorithm to perform the cuts in $O(n^{5/2+\omega/2}\log^2 n)=O(n^{3.69})$ time.
Here $\omega<2.373$ is, as above, the exponent of the best matrix-multiplication algorithm.
As a byproduct, we improve the time to compute a complete cut set of size $O(n^{3/2}\polylog n)$ for
a collection of lines: we show that a simple trick reduces the
running time from $O(n^{4+2\omega}\log^2 n)$ to $O(n^{3+\omega}\log^2 n)$.
We also present a more refined approach that yields a bound of
$O(n^{1+\varepsilon} + n^{1/4} K^{3/4}\polylog n)$ on the number of fragments, where
$K$ is the number of intersections between the triangles in the projection.
This result extends to $xy$-monotone surface patches bounded by a constant number
of bounded-degree algebraic arcs in general position. Thus we make
progress on all open problems posed by Aronov, Miller and Sharir.
Finally, as a minor contribution we get rid of the non-degeneracy
assumptions that Aronov and Sharir~\cite{as-atbedc-16} make when eliminating
cycles from a set of segments. Most degeneracies can be handled by a straightforward
perturbation argument, but one case---parallel segments that overlap
in the projection---requires some new ideas. Being able
to handle degeneracies for segments implies that our method for
triangles can handle degeneracies as well.
\section{Eliminating cycles among triangles}
\label{se:alg}
\paragraph{Overview of the method.}
We first prove a proposition
that gives conditions under which the existence of a depth order for a set of
triangles is implied by the existence of a depth order for the triangle edges.
The idea is then to take a complete cut set for the triangle edges---there is
such a cut set of size $O(n^{3/2}\polylog n)$ by the results of Aronov and Sharir---and
``extend'' the cuts (by taking vertical planes through the cut points)
so that the conditions of the proposition are met.
A straightforward extension would generate too many triangle fragments, however.
Therefore our cutting procedure has two phases. In the first phase we localize
the problem by partitioning space into regions such that (i) the collection
of regions admits a depth order, and (ii) each region is intersected by only
few triangles. (This localization is also the key to speeding up the algorithm
for lines.) In the second phase we then locally (inside each region)
extend the cuts from a complete cut set for the edges,
so that the conditions of the proposition are met.
\paragraph{Notation and terminology.}
Let $\ensuremath{\mathcal{T}}$ denote the given set of disjoint non-vertical triangles, let $\ensuremath{\mathcal{E}}$ denote the set
of edges of the triangles in $\ensuremath{\mathcal{T}}$, and let $\ensuremath{\mathcal{V}}$ denote the set of vertices of the triangles.
We assume the triangles in $\ensuremath{\mathcal{T}}$ are closed. However, at the places where a triangle is cut it becomes
open. Thus the edges of a triangle fragment that are (parts of) edges in $\ensuremath{\mathcal{E}}$ are part
of the fragment, while edges that are induced by cuts are not.
We denote the (vertical) projection of an object $o$ in ${\ensuremath{\mathcal{B}}bb R}^3$ onto the $xy$-plane by $\proj{o}$.
\paragraph{A proposition relating depth orders for edges to depth orders for triangles.}
We define a \emph{column} to be a 3-dimensional region
$C_\Delta := \Delta\times(-\infty,+\infty)$, where $\Delta$ is an open convex polygon on the $xy$-plane.
Our cutting procedure is based on the following proposition.
\begin{proposition}\label{prop:column}
Let $\ensuremath{\mathcal{T}}$ be a set of disjoint triangles in ${\ensuremath{\mathcal{B}}bb R}^3$, and let $\ensuremath{\mathcal{E}}$ be the set
of edges of the triangles in $\ensuremath{\mathcal{T}}$.
Let $C_\Delta$ be a column whose interior does not contain a vertex of any triangle in~$\ensuremath{\mathcal{T}}$,
and let $\ensuremath{\mathcal{T}}_\Delta := \{ T\cap C_\Delta : T\in \ensuremath{\mathcal{T}} \}$ and $\ensuremath{\mathcal{E}}_{\Delta} := \{ e\cap C_\Delta : e\in \ensuremath{\mathcal{E}} \}$.
Then $\ensuremath{\mathcal{T}}_{\Delta}$ admits a depth order if $\ensuremath{\mathcal{E}}_{\Delta}$ admits a depth order.
\end{proposition}
\begin{proof}
For a triangle $T_i\in\ensuremath{\mathcal{T}}$, define $P_i := T_i \cap C_\Delta$. Thus $\ensuremath{\mathcal{T}}_{\Delta} = \{ P_i : T_i\in\ensuremath{\mathcal{T}} \}$.
Assume $\ensuremath{\mathcal{E}}_{\Delta}$ admits a depth order and suppose for a contradiction that $\ensuremath{\mathcal{T}}_{\Delta}$ does not.
Consider a cycle $\ensuremath{\mathcal{C}} := P_0\prec P_1\prec \cdots \prec P_{k-1}\prec P_0$ in $\ensuremath{\mathcal{T}}_{\Delta}$.
As observed by Aronov~\emph{et al.}\xspace~\cite{ams-adcat-17}
we can associate a closed curve in ${\mathbb R}^3$ to $\ensuremath{\mathcal{C}}$, as follows.
For each pair $P_i,P_{i+1}$ of consecutive polygons in~$\ensuremath{\mathcal{C}}$---here and in the rest
of the proof indices are taken modulo~$k$---let $b_i\in P_i$ and $a_{i+1}\in P_{i+1}$ be points such that
the segment~$b_i a_{i+1}$ is vertical. We refer to the
closed polygonal curve whose ordered set of vertices
is $b_1, a_2, b_2, a_3, \ldots, a_k, b_k, a_1$ as a
\emph{witness curve} for $\ensuremath{\mathcal{C}}$.
We call the vertical segments $b_i a_{i+1}$ the \emph{connections} of $\Gamma(\ensuremath{\mathcal{C}})$,
and we call the segments $a_i b_i$ the \emph{links} of $\Gamma(\ensuremath{\mathcal{C}})$. Since the
connections are vertical, we have $\proj{a_{i+1}}=\proj{b_{i}}$ and so we can write
$\proj{\Gamma(\ensuremath{\mathcal{C}})}$ as $\proj{a_1},\proj{a_2},\ldots,\proj{a_k},\proj{a_1}$.
Note that $\proj{a_i}\in \proj{P_{i-1}}\cap \proj{P_i}$ for all~$i$.
In general, the points $a_i$ and $b_i$ can be chosen in many ways and so there are
many possible witness curves.
We will need a specific witness curve, as specified next.
We say that a link $a_i b_i$ is \emph{good} if $a_i$ and $b_i$
lie on the same edge of their polygon~$P_i$---this
edge is also an edge in $\ensuremath{\mathcal{E}}_{\Delta}$---and we say that $a_i b_i$ is \emph{bad} otherwise.
We now define the \emph{weight} of a witness curve $\Gamma$ to be the
number of bad links in $\Gamma$, and we define $\Gamma(\ensuremath{\mathcal{C}})$
to be any minimum-weight witness curve for $\ensuremath{\mathcal{C}}$.
Now consider a minimal cycle $\ensuremath{\mathcal{C}}^*:= P_0 \prec P_1\prec \cdots\prec P_{k-1}\prec P_0$ in $\ensuremath{\mathcal{T}}_{\Delta}$.
(A cycle is minimal if any strict subset of polygons from the cycle is acyclic.)
We will argue that we
can find a cycle in $\ensuremath{\mathcal{E}}_{\Delta}$ consisting of edges of the polygons in $\ensuremath{\mathcal{C}}^*$, thus contradicting
that $\ensuremath{\mathcal{E}}_{\Delta}$ admits a depth order.
\begin{claiminproof}
All links $a_i b_i$ of $\Gamma(\ensuremath{\mathcal{C}}^*)$ are good.
\end{claiminproof}
\begin{proofinproof}
Consider any link $a_i b_i$.
Observe that $\proj{a_{i-1}}$ and $\proj{a_{i+2}}$ must both lie
outside $\proj{P_i}$, otherwise $\ensuremath{\mathcal{C}}^*$ is not minimal. Consider $\Delta\setminus \proj{P_i}$,
the complement of $\proj{P_i}$ inside the column base~$\Delta$. The region
$\Delta\setminus\proj{P_i}$ consists of one or more connected components. (It cannot be empty,
since then $P_i$ could not be part of any cycle in $\ensuremath{\mathcal{T}}_{\Delta}$.)
Each connected component is separated from $\proj{P_i}$ by a single edge of~$\proj{P_i}$,
since by assumption $T_i$ does not have a vertex inside~$C$
and so $\proj{P_i}$ does not have a vertex inside~$\ensuremath{\mathcal{D}}elta$ either.
We now consider two cases, as illustrated in Fig.~\ref{fi:prop-fig}.
\begin{figure}
\caption{The two cases in the proof of Proposition~\protect\ref{prop:column}.}
\label{fi:prop-fig}
\end{figure}
\\[2mm]
\noindent \emph{Case~A: $\proj{a_{i-1}}$ and $\proj{a_{i+2}}$ lie in different components of~$\Delta\setminus\proj{P_i}$.}
Let $\proj{p}$ be the point where $\proj{a_{i-1}a_{i}}$ enters $\proj{P_i}$.
Let $p\in P_i$ project onto $\proj{p}$ and let $e$ be the edge of $P_i$ containing~$p$.
(Possibly $\proj{p} = \proj{a_i}$.) Since $\proj{a_{i+2}}$ lies in a different
connected component of $\Delta\setminus\proj{P_i}$ than $\proj{a_{i-1}}$, the projection $\proj{\Gamma(\ensuremath{\mathcal{C}}^*)}$
must cross $\proj{e}$ a second time, at some point~$\proj{q}$.
This leads to a contradiction with the minimality of~$\ensuremath{\mathcal{C}}^*$.
To see this, let $q\in\Gamma(\ensuremath{\mathcal{C}}^*)$ be a point projecting onto~$\proj{q}$
and let $P_j$ be such that $q\in P_j$. Then $j\not\in\{i-1,i,i+1\}$, because
$a_{i-1}b_{i-1}$, $a_ib_i$, and $a_{i+1}b_{i+1}$ are the only links
of $\Gamma(\ensuremath{\mathcal{C}}^*)$ on $P_{i-1}$, $P_i$, and $P_{i+1}$, respectively.
But since $\overline{P_i}\cap\overline{P_j}\neq\emptyset$ we have $P_i\prec P_j$ or $P_j\prec P_i$,
and so $j\not\in\{i-1,i,i+1\}$ contradicts that $\ensuremath{\mathcal{C}}^*$ is minimal. Thus Case~A cannot happen.
\\[2mm]
\noindent \emph{Case~B: $\proj{a_{i-1}}$ and $\proj{a_{i+2}}$ lie in the same component of~$\Delta\setminus\proj{P_i}$.}
In this case $a_i b_i$ must be a good link, because $a_i$ and $b_i$ must both lie on
the edge~$e$ bordering the component of~$\Delta\setminus\proj{P_i}$ that contains $\proj{a_{i-1}}$ and $\proj{a_{i+2}}$.
Indeed, if $\proj{a_i}$ and/or $\proj{a_{i+1}}$ did not lie on~$e$ then
we can obtain a witness curve of lower weight for $\ensuremath{\mathcal{C}}^*$, namely
if we replace $a_i$ by the point~$p$ such that
$\proj{p} = \proj{a_{i-1} a_{i}} \cap \proj{e}$ and we replace $a_{i+1}$ by the point~$q$ such that
$\proj{q} = \proj{a_{i+1} a_{i+2}} \cap \proj{e}$. (Note that if $a_{i-1}=a_{i+2}$,
which happens when $\ensuremath{\mathcal{C}}^*$ consists of only three polygons, then the argument still goes through.)
\\[2mm]
\noindent Thus $a_i b_i$ is a good link, as claimed.
\end{proofinproof}
If all links $a_i b_i$ of $\Gamma(\ensuremath{\mathcal{C}}^*)$ are good then $\ensuremath{\mathcal{C}}^*$
gives a cycle in $\ensuremath{\mathcal{E}}_{\Delta}$, contradicting that $\ensuremath{\mathcal{E}}_{\Delta}$ admits a depth order.
Hence, the assumption that $\ensuremath{\mathcal{T}}_{\Delta}$ contains a cycle is false.
\end{proof}
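As a concrete, if naive, illustration of the depth-order relation used above, the following Python sketch builds the below/above relation for a set of disjoint non-vertical segments in ${\mathbb R}^3$ and extracts a depth order by topological sorting, or reports that a cycle exists. It runs in $O(n^2)$ time, ignores degenerate configurations, and is an illustration only, not part of the constructions in this paper.
\begin{verbatim}
# Naive illustration: build the "s is below t" relation at projected
# crossings and topologically sort it (lower segments are drawn first).
from collections import deque

def below(s, t, eps=1e-12):
    """True/False if proj(s), proj(t) cross and s is below/above t there;
    None if the projections do not cross (degenerate cases ignored)."""
    (ax, ay, az), (bx, by, bz) = s
    (cx, cy, cz), (dx, dy, dz) = t
    den = (bx - ax) * (dy - cy) - (by - ay) * (dx - cx)
    if abs(den) < eps:                        # parallel projections: skipped
        return None
    u = ((cx - ax) * (dy - cy) - (cy - ay) * (dx - cx)) / den
    v = ((cx - ax) * (by - ay) - (cy - ay) * (bx - ax)) / den
    if not (0 <= u <= 1 and 0 <= v <= 1):
        return None
    zs = az + u * (bz - az)                   # height of s at the crossing
    zt = cz + v * (dz - cz)                   # height of t at the crossing
    return zs < zt

def depth_order(segments):
    """Return a back-to-front order of the segment indices, or None if the
    below/above relation contains a cycle (no depth order exists)."""
    n = len(segments)
    succ = [[] for _ in range(n)]
    indeg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            b = below(segments[i], segments[j])
            if b is None:
                continue
            lo, hi = (i, j) if b else (j, i)  # draw the lower segment first
            succ[lo].append(hi)
            indeg[hi] += 1
    queue = deque(k for k in range(n) if indeg[k] == 0)
    order = []
    while queue:
        k = queue.popleft()
        order.append(k)
        for m in succ[k]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order if len(order) == n else None
\end{verbatim}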
\paragraph{The cutting procedure.}
A naive way to apply Proposition~\ref{prop:column} would be the following:
compute a complete cut set $X$ for the set $\ensuremath{\mathcal{E}}$ of triangle edges, and take
a vertical plane parallel to the $yz$-plane through each point in $\ensuremath{\mathcal{V}}\cup X$.
This subdivides ${\mathbb R}^3$ into columns $C_\Delta$ (where each column base $\Delta$
is an infinite strip). These columns do not contain triangle vertices
and the edge fragments inside each column are acyclic,
and so the triangle fragments we obtain are acyclic. Unfortunately this straightforward
approach generates too many fragments. Hence, we first subdivide
space such that we do not cause too much fragmentation when we take the
vertical planes through $\ensuremath{\mathcal{V}}\cup X$. The crucial idea is to create the subdivision
based on the projections of the triangle edges. This allows us to use an efficient
2-dimensional partitioning scheme resulting in cells that are
intersected by only a few projected triangle edges. The 2-dimensional subdivision
will then be extended into ${\mathbb R}^3$, to obtain 3-dimensional regions
in which we can take vertical planes through $\ensuremath{\mathcal{V}}\cup X$ without
creating too many fragments. We cannot completely ignore
the triangles themselves, however, when we extend the 2-dimensional subdivision
into~${\mathbb R}^3$---otherwise we would already create too many fragments in this phase.
Thus we create a hierarchical 2-dimensional subdivision, and we use the hierarchy
to avoid cutting the input triangles into too many fragments.
Next we make these ideas precise.
Let $L$ be a set of $n$ lines in the plane. A \emph{$(1/r)$-cutting} for $L$ is
a partition $\Xi$ of the plane into triangular\footnote{The cells
may be unbounded, that is, we also allow wedges, half-planes, and the entire plane, as cells.}
cells such that the interior of any cell $\Delta\in\Xi$ intersects at most $n/r$ lines from~$L$.
We say that a cutting $\Xi$ \emph{$c$-refines} a cutting~$\Xi'$, where $c$ is
some constant, if every cell $\Delta\in \Xi$ is contained in a unique parent cell
$\Delta'\in\Xi'$, and each cell in $\Xi'$ contains at most $c$ cells from~$\Xi$.
An \emph{efficient hierarchical $(1/r)$-cutting} for $L$~\cite{m-rsehc-93} is a sequence
$\Psi := \Xi_0,\Xi_1,\ldots,\Xi_k$ of cuttings such that there are constants $c,\rho$ such that
the following four conditions are met:
\myenumerate{
\item[(i)] $\rho^{k-1} < r \leqslant \rho^k$;
\item[(ii)] $\Xi_0$ is the single cell ${\mathbb R}^2$;
\item[(iii)] $\Xi_i$ is a $(1/\rho^i)$-cutting for $L$ of size $O(\rho^{2i})$, for all $0\leqslant i\leqslant k$;
\item[(iv)] $\Xi_i$ is a $c$-refinement of $\Xi_{i-1}$, for all $1\leqslant i\leqslant k$.
}
It is known that for any set $L$ and any parameter $r$ with $1\leqslant r\leqslant n$,
an efficient hierarchical $(1/r)$-cutting exists and can be
computed in $O(nr)$ time~\cite{c-chdc-93,m-rsehc-93}.
We can view $\Psi$ as a tree in which each node $u$ at level~$i$ corresponds to a cell
$\Delta_u\in\Xi_i$, and a node $v$ at level~$i$ is the child of a node~$u$ at level~$i-1$
if $\Delta_v\subseteq\Delta_u$.
Our cutting procedure now proceeds in two steps. Recall that $\ensuremath{\mathcal{T}}$ denotes the given set of $n$ triangles
in~${\mathbb R}^3$, and $\ensuremath{\mathcal{E}}$ the set of $3n$~edges of the triangles in~$\ensuremath{\mathcal{T}}$.
\begin{enumerate}
\item \label{step1}
We start by constructing an efficient hierarchical $(1/r)$-cutting for $L$, with $r=n^{3/4}$,
where $L$ is the set of lines containing the edges in $\proj{\ensuremath{\mathcal{E}}}$.
Next we cut the projection $\proj{T}$ of each triangle $T\in\ensuremath{\mathcal{T}}$ into pieces. This is done
by executing the following recursive process on~$\Psi$, starting at its root.
Suppose we reach a node~$u$ of the tree.
If $\Delta_u\subseteq \proj{T}$ or $u$ is a leaf, then $\Delta_u\cap \proj{T}$
is one of the pieces of $\proj{T}$.
Otherwise, we recursively visit all children $v$ of $u$ such that $\Delta_v\cap \proj{T} \neq \emptyset$.
After cutting each projected triangle $\proj{T}$ in this manner, we cut the original triangles
$T\in\ensuremath{\mathcal{T}}$ accordingly. Let $\ensuremath{\mathcal{T}}_1$ denote the resulting collection of polygonal pieces.
We extend the 2-dimensional cutting $\Xi_k$ into ${\mathbb R}^3$ by erecting vertical walls through
each of the edges in $\Xi_k$. Thus we create a column $C_\Delta := \Delta\times (-\infty,\infty)$
for each cell $\Delta\in \Xi_k$. Next, we cut each column $C_\Delta$ into vertical prisms by slicing it
with each triangle $T\in\ensuremath{\mathcal{T}}$ that completely cuts through $C_\Delta$ (that is, we
slice the column with each triangle $T$
such that $\Delta\subseteq \proj{T}$). Let~$\ensuremath{\mathcal{S}}$ denote the resulting
3-dimensional subdivision.
\item \label{step2}
For each prism $\sigma$ in the subdivision~$\ensuremath{\mathcal{S}}$, proceed as follows.
Let $\ensuremath{\mathcal{T}}_1(\sigma)\subseteq \ensuremath{\mathcal{T}}_1$ be the set of pieces that have an edge intersecting
the interior of~$\sigma$, and let $\ensuremath{\mathcal{E}}(\sigma) := \{ e\cap \sigma : e\in\ensuremath{\mathcal{E}}\}$.
Note that $\ensuremath{\mathcal{E}}(\sigma)$ is the set of edges of the pieces in $\ensuremath{\mathcal{T}}_1(\sigma)$,
where we only take the edges in the interior of $\sigma$.
Let $X(\sigma)$ be a complete cut set for $\ensuremath{\mathcal{E}}(\sigma)$,
and let $\ensuremath{\mathcal{V}}(\sigma)\subseteq \ensuremath{\mathcal{V}}$ be the set of triangle vertices in the interior of~$\sigma$.
For each point $q\in X(\sigma)\cup \ensuremath{\mathcal{V}}(\sigma)$, take a plane $h(q)$ containing $q$ and
parallel to the $yz$-plane, and let $H(\sigma)$ be the resulting set of planes.
Cut every piece $P\in \ensuremath{\mathcal{T}}_1(\sigma)$ into fragments using the planes in $H(\sigma)$.
\end{enumerate}
We denote the set of fragments generated in Step~\ref{step2} inside a prism~$\sigma$
by $\ensuremath{\mathcal{T}}_2(\sigma)$, and we denote the set of pieces in $\ensuremath{\mathcal{T}}_1$ that do not have an edge
crossing the interior of any prism~$\sigma\in\ensuremath{\mathcal{S}}$ by $\ensuremath{\mathcal{T}}_1^*$.
(Note that $\ensuremath{\mathcal{T}}_1^{*}$ contains all pieces generated at internal nodes of $\Psi$.)
Then $\ensuremath{\mathcal{T}}_2 := \ensuremath{\mathcal{T}}_1^* \cup \bigcup_{\sigma\in\ensuremath{\mathcal{S}}} \ensuremath{\mathcal{T}}_2(\sigma)$ is our final set of
fragments.
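To make Step~\ref{step1} concrete, the following Python sketch shows the recursive traversal of $\Psi$ that cuts a projected triangle into pieces. It uses the Shapely library for the 2-dimensional predicates and bounded toy cells; the node interface and all names are illustrative assumptions, not our actual implementation (in particular, the paper allows unbounded cells, which are not modelled here).
\begin{verbatim}
# Minimal sketch of Step 1, assuming the hierarchical cutting Psi is a tree
# of nodes with .cell (a shapely Polygon for Delta_u) and .children.
from shapely.geometry import Polygon

class Node:
    def __init__(self, cell, children=()):
        self.cell = cell              # the cell Delta_u
        self.children = list(children)

def cut_projected_triangle(node, proj_T, pieces):
    # Stop when Delta_u is covered by proj(T) or u is a leaf of Psi;
    # the piece contributed at this node is Delta_u intersected with proj(T).
    if proj_T.contains(node.cell) or not node.children:
        piece = node.cell.intersection(proj_T)
        if not piece.is_empty:
            pieces.append(piece)
        return
    # Otherwise recurse into the children whose cells meet proj(T).
    for child in node.children:
        if child.cell.intersects(proj_T):
            cut_projected_triangle(child, proj_T, pieces)

# Toy example: a 2-level hierarchy over a square, and one projected triangle.
root = Node(Polygon([(0, 0), (4, 0), (4, 4), (0, 4)]),
            [Node(Polygon([(0, 0), (2, 0), (2, 4), (0, 4)])),
             Node(Polygon([(2, 0), (4, 0), (4, 4), (2, 4)]))])
proj_T = Polygon([(1, 1), (3, 1), (3, 3)])
pieces = []
cut_projected_triangle(root, proj_T, pieces)
print(len(pieces))   # 2 pieces, one per child cell
\end{verbatim}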
\begin{lemma}\label{le:number-of-fragments}
The set $\ensuremath{\mathcal{T}}_2$ of triangle fragments resulting from the procedure above is acyclic,
and the size of $\ensuremath{\mathcal{T}}_2$ is $O(n^{7/4}+ |X| \cdot n^{1/4})$, where
$X := \bigcup_{\sigma\in\ensuremath{\mathcal{S}}} X(\sigma)$.
\end{lemma}
\begin{proof}
To prove that $\ensuremath{\mathcal{T}}_2$ is acyclic, define $\ensuremath{\mathcal{S}}^*$ to be the set of (open) prisms
in $\ensuremath{\mathcal{S}}$, and consider the set $\ensuremath{\mathcal{S}}^* \cup \ensuremath{\mathcal{T}}_1^{*}$.
\begin{claiminproof}
The set $\ensuremath{\mathcal{S}}^* \cup \ensuremath{\mathcal{T}}_1^{*}$ admits a depth order.
\end{claiminproof}
\begin{proofinproof}
By construction, for any object~$o_i\in \ensuremath{\mathcal{S}}^* \cup \ensuremath{\mathcal{T}}_1^{*}$ there is
a node $u\in\Psi$ such that $\proj{o_i} = \Delta_u$. Hence,
for any two objects $o_1,o_2\in \ensuremath{\mathcal{S}}^* \cup \ensuremath{\mathcal{T}}_1^{*}$ we have
\begin{equation}
\proj{o_1}\subseteq\proj{o_2}, \hspace{5mm} \mbox{or} \hspace{5mm}
\proj{o_2}\subseteq\proj{o_1}, \hspace{5mm} \mbox{or} \hspace{5mm}
\proj{o_1}\cap\proj{o_2}=\emptyset. \label{eq1}
\end{equation}
This implies that $\ensuremath{\mathcal{S}}^* \cup \ensuremath{\mathcal{T}}_1^{*}$ is acyclic.
Indeed, suppose for a contradiction
that $\ensuremath{\mathcal{S}}^* \cup \ensuremath{\mathcal{T}}_1^{*}$ does not admit a depth order. Consider a minimal
cycle $\ensuremath{\mathcal{C}}^*:= o_1 \prec o_2\prec \cdots\prec o_{k}\prec o_1$. Obviously $k\geqslant 3$.
But then (\ref{eq1}) implies that we can remove $o_1$ or $o_2$ and still
have a cycle, contradicting the minimality of~$\ensuremath{\mathcal{C}}^*$.
\end{proofinproof}
The claim above implies that $\ensuremath{\mathcal{T}}_2$ is acyclic if each of the sets $\ensuremath{\mathcal{T}}_2(\sigma)$ is acyclic.
To see that $\ensuremath{\mathcal{T}}_2(\sigma)$ is acyclic, note that the planes in $H(\sigma)$
partition~$\sigma$ into subcells that do not contain a point from $X(\sigma)$
in their interior. Hence the set of edges of the fragments in such a subcell
is acyclic---if this were not the case, then there would be a cycle left
in $\ensuremath{\mathcal{E}}(\sigma)$, contradicting that $X(\sigma)$ is a complete cut set for $\ensuremath{\mathcal{E}}(\sigma)$.
Moreover, a subcell does not contain any point from $\ensuremath{\mathcal{V}}(\sigma)$ in its interior,
and so it does not contain a vertex of any fragment in its interior.
We can therefore use Proposition~\ref{prop:column} to conclude that within each
subcell, the fragments are acyclic; the fact that the subcell is strictly
speaking not a column---it may be bounded from above and/or
below by a piece in $\ensuremath{\mathcal{T}}_1^*$---clearly does not invalidate the conclusion.
Since the fragments in each subcell of $\sigma$ are acyclic and the subcells
are separated by vertical planes, $\ensuremath{\mathcal{T}}_2(\sigma)$ must be acyclic.
It remains to prove that $|\ensuremath{\mathcal{T}}_2|=O(n^{7/4}+ |X|\cdot n^{1/4})$.
We start by bounding~$|\ensuremath{\mathcal{T}}_1|$. To this end, consider a triangle $T\in \ensuremath{\mathcal{T}}$
and let $P\in \ensuremath{\mathcal{T}}_1$ be a piece generated for $T$ in Step~\ref{step1}. Let $v$ be the
node in $\Psi$ where $P$ was created. Then the cell $\Delta_u$ of the parent $u$ of $v$ is intersected by
an edge of $\proj{T}$. Since each node in $\Psi$ has $O(1)$ children and each
cell $\Delta\in\Xi_i$ intersects at most $n/\rho^i$ projected triangle edges, this means that
\[
|\ensuremath{\mathcal{T}}_1| = O\left( \sum_{i=0}^{k-1} \sum_{\Delta\in\Xi_i} n/\rho^i \right)
= O\left( \sum_{i=0}^{k-1} \rho^{2i} \cdot (n/\rho^i) \right)
= O(n \rho^{k})
= O(n r)
= O(n^{7/4}).
\]
The number of additional fragments created in Step~\ref{step2} can be bounded by
observing that each prism $\sigma$ in the subdivision~$\ensuremath{\mathcal{S}}$ intersects at most
$n/r = O(n^{1/4})$ triangle edges, and so $|\ensuremath{\mathcal{T}}_1(\sigma)|=O(n^{1/4})$. If we now sum
the number of additional fragments over all prisms $\sigma$ in the subdivision~$\ensuremath{\mathcal{S}}$ we obtain
\[
\begin{array}{lll}
\mbox{number of additional fragments in Step~\ref{step2}} & \leqslant & \sum_{\sigma\in\ensuremath{\mathcal{S}}} |H(\sigma)| \cdot |\ensuremath{\mathcal{T}}_1(\sigma)| \\[2mm]
& \leqslant & O(n^{1/4}) \cdot \big( \sum_{\sigma\in\ensuremath{\mathcal{S}}} |X(\sigma)| + \sum_{\sigma\in\ensuremath{\mathcal{S}}} |\ensuremath{\mathcal{V}}(\sigma)| \big) \\[2mm]
& = & O(n^{1/4} (|X|+n) ).
\end{array}
\]
\end{proof}
Lemma~\ref{le:number-of-fragments} leads to the following result.
\begin{corollary}
Suppose that any set of $n$ lines has a complete cut set of size $\gamma(n)$.
Then any set $\ensuremath{\mathcal{T}}$ of $n$ disjoint triangles in ${\mathbb R}^3$
can be cut into $O(n^{7/4} + \gamma(3n)\cdot n^{1/4})$ triangular fragments such that the
resulting set of fragments admits a depth order.
\end{corollary}
\begin{proof}
Define $\mbox{{\sc opt}}\xspace$ to be the minimum size of a complete cut set for $\ensuremath{\mathcal{E}}$ and,
for a prism~$\sigma\in\ensuremath{\mathcal{S}}$, define $\mbox{{\sc opt}}\xspace_\sigma$ to be the minimum size of a complete cut set for
$\ensuremath{\mathcal{E}}(\sigma)$. Then $\sum_{\sigma\in\ensuremath{\mathcal{S}}} \mbox{{\sc opt}}\xspace_\sigma \leqslant \mbox{{\sc opt}}\xspace$. Indeed, if $X_{\smallopt}$
denotes a minimum-size complete cut set for~$\ensuremath{\mathcal{E}}$, then $X_{\smallopt}\cap \sigma$ must eliminate
all cycles from $\ensuremath{\mathcal{E}}(\sigma)$. Since $\mbox{{\sc opt}}\xspace \leqslant \gamma(3n)$, the bound on the number of fragments
generated by our cutting procedure is as claimed.
The procedure above cuts the triangles in $\ensuremath{\mathcal{T}}$ into constant-complexity polygonal
fragments, which we can obviously cut further into triangular fragments without increasing the
number of fragments asymptotically.
\end{proof}
The results of Aronov and Sharir~\cite{as-atbedc-16} thus imply that any set of
$n$ triangles can be cut into $O(n^{7/4}\polylog n)$ fragments such that the resulting
set of fragments is acyclic. (Aronov and Sharir assume general position, but in the
appendix we show this is not necessary.)
\begin{figure}
\caption{An example showing that one sometimes needs more cuts to eliminate all cycles
from $\ensuremath{\mathcal{T}}_1(\sigma')$ than from $\ensuremath{\mathcal{E}}(\sigma')$.}
\label{fi:edges-vs-triangles}
\end{figure}
\begin{remark}
We use a factor $O(n^{1/4})$ more cuts than Aronov and Sharir need for the case of segments.
Observe that we already generate up to
$\Theta(n^{7/4})$ fragments in Step~\ref{step1}, since we take $r=n^{3/4}$.
To reduce the total number of fragments to $O(n^{3/2}\polylog n)$ using our approach,
we would need to set $r:=\sqrt{n}$ in Step~\ref{step1}. In Step~\ref{step2}
we could then only use the set $\ensuremath{\mathcal{V}}(\sigma)$ to generate the vertical planes in~$H(\sigma)$.
This would lead to vertical prisms that do not have any vertex in their interior,
while only using $O(n^{3/2})$ fragments so far. However, each such
prism~$\sigma'\subseteq \sigma$ can contain
up to $\Theta(\sqrt{n})$ triangle fragments. Hence, we cannot afford to compute
a cut set $X(\sigma')$ for $\ensuremath{\mathcal{E}}(\sigma')$ and cut each triangle fragment in $\sigma'$
with a vertical plane containing each $q\in X(\sigma')$. One may hope that if we can
eliminate all cycles from $\ensuremath{\mathcal{E}}(\sigma')$ using $|X(\sigma')|$ cuts, then we can also eliminate
all cycles from $\ensuremath{\mathcal{T}}_1(\sigma')$ using $|X(\sigma')|$ cuts. Unfortunately this is not the
case, as shown in Fig.~\ref{fi:edges-vs-triangles}.
In the example, there are two cycles in $\ensuremath{\mathcal{T}}_1(\sigma')$: the green triangle together with the blue segments
and the green triangle with the red segments. The set $\ensuremath{\mathcal{E}}(\sigma')$
also contains two cycles. The cycles from $\ensuremath{\mathcal{E}}(\sigma')$ can be eliminated
by cutting the green edge at the point indicated by the arrow. However, a single
cut of the green triangle cannot eliminate both cycles from~$\ensuremath{\mathcal{T}}_1(\sigma')$.
Indeed, to eliminate the blue-green cycle the cut should separate
(in the projection) the parts of the blue edges projecting onto the green triangle,
while to eliminate the red-green cycle the cut should separate
the parts of the red edges projecting onto the green triangle---but a single
cut cannot do both.
The example can be generalized to sets $\ensuremath{\mathcal{T}}_1(\sigma')$ of arbitrary
size, so that all cycles in $\ensuremath{\mathcal{E}}(\sigma')$ can be eliminated by a single cut,
while eliminating cycles from $\ensuremath{\mathcal{T}}_1(\sigma')$ requires $\Omega(|\ensuremath{\mathcal{T}}_1(\sigma')|)$ cuts.
Thus a more global reasoning is needed to improve our bound.
\end{remark}
\section{Efficient algorithms to compute complete cut sets}
\paragraph{The algorithm for triangles.}
The hierarchical cutting $\Psi$ can be computed in $O(nr)=O(n^{7/4})$ time~\cite{c-chdc-93,m-rsehc-93},
and it is easy to see that we can compute the set $\ensuremath{\mathcal{T}}_1$ within the same
time bound. Constructing the 3-dimensional subdivision~$\ensuremath{\mathcal{S}}$ can trivially be done
in $O(n^{5/2})$ time, by checking for each of the $O(n^{3/2})$ columns and each triangle
$T\in \ensuremath{\mathcal{T}}$ if $T$ slices the column. Next we need to find the sets $\ensuremath{\mathcal{T}}_1(\sigma)$
for each prism~$\sigma$ in~$\ensuremath{\mathcal{S}}$. The computation of the hierarchical cutting also
tells us for each cell $\Delta\in\Xi_k$ which projected triangle edges intersect~$\Delta$.
It remains to check, for each triangle $T$ corresponding to such an edge, which
of the $O(n)$ prisms of the column~$C_{\Delta}$ is intersected by~$T$.
Thus, we spend $O(n)$ time for each of the $O(n^{7/4})$ triangle pieces in $\ensuremath{\mathcal{T}}_1$,
so the total time to compute the sets $\ensuremath{\mathcal{T}}_1(\sigma)$ is $O(n^{11/4})$.
Next we need to compute the cut sets $X(\sigma)$. To this end we use
the algorithm by Aronov~\emph{et al.}\xspace~\cite{abgm-ccrs-08}, which computes a
complete cut set of size $O(\mbox{{\sc opt}}\xspace_\sigma \cdot \log \mbox{{\sc opt}}\xspace_\sigma \cdot\log\log \mbox{{\sc opt}}\xspace_\sigma)$,
where $\mbox{{\sc opt}}\xspace_\sigma$ is the minimum size of a complete cut set for $\ensuremath{\mathcal{E}}(\sigma)$.
Thus $|X|$, the total size of all cut sets $X(\sigma)$ we compute, is bounded by
\[
O\left( \sum_{\sigma\in\ensuremath{\mathcal{S}}} \mbox{{\sc opt}}\xspace_{\sigma} \cdot \log \mbox{{\sc opt}}\xspace_{\sigma} \cdot\log\log \mbox{{\sc opt}}\xspace_{\sigma} \right)
=
O(\mbox{{\sc opt}}\xspace \cdot \log \mbox{{\sc opt}}\xspace \cdot\log\log \mbox{{\sc opt}}\xspace)
=
O(n^{3/2}\polylog n).
\]
Now define $n_\sigma := |\ensuremath{\mathcal{T}}_1(\sigma)|$.
Since the algorithm of Aronov~\emph{et al.}\xspace runs in time $O(m^{4+2\omega}\log^2 m)$
for $m$ segments, the total running time is
\[
O\left( \sum_{\sigma\in\ensuremath{\mathcal{S}}} n_{\sigma}^{4+2\omega}\log^2 n_\sigma \right).
\]
Since $n_{\sigma}=O(n^{1/4})$ for all $\sigma$
and $\sum_{\sigma\in\ensuremath{\mathcal{S}}} n_{\sigma} = O(n^{7/4})$, the total time to compute the
sets $X(\sigma)$ is
\[
O\left( \sum_{\sigma\in\ensuremath{\mathcal{S}}} n_{\sigma}^{4+2\omega}\log^2 n_\sigma \right)
=
O\left( n^{3/2} \cdot (n^{1/4})^{4+2\omega}\log^2 n \right)
=
O(n^{5/2+\omega/2}\log^2 n).
\]
Finally, for each prism~$\sigma$ we cut all triangles in $\ensuremath{\mathcal{T}}_1(\sigma)$
by the planes in $H(\sigma)$ in a brute-force manner, in total time~$O(n^{7/4}\polylog n)$.
The following theorem summarizes our main result.
\begin{theorem}\label{thm:triangles}
Any set $\ensuremath{\mathcal{T}}$ of $n$ disjoint non-vertical triangles in ${\mathbb R}^3$
can be cut into $O(n^{7/4}\polylog n)$ triangular fragments such that the
resulting set of fragments admits a depth order. The time needed to compute the
cuts is $O(n^{5/2+\omega/2}\log^2 n)$, where $\omega<2.373$ is the exponent
in the running time of the best matrix-multiplication algorithm.
\end{theorem}
\paragraph{A fast algorithm for lines.}
The running time in Theorem~\ref{thm:triangles} is better than the running time
obtained by Aronov and Sharir~\cite{as-atbedc-16} to compute a complete cut set
for a set of lines in~${\ensuremath{\mathcal{B}}bb R}^3$. The reason is that we apply
the algorithm of Aronov~\emph{et al.}\xspace\cite{abgm-ccrs-08} locally, on a set of segments
whose size is significantly smaller than~$n$. We can use the same idea to speed
up the algorithm to compute a complete cut
set of size $O(n^{3/2}\polylog n)$ for a set $L$ of $n$ lines in ${\ensuremath{\mathcal{B}}bb R}^3$.
To this end we project $L$ onto the $xy$-plane, and compute a $(1/r)$-cutting $\Xi$
for $\proj{L}$ of size~$O(r^2)$, with $r:=\sqrt{n}$.
We then cut each
line $\ell\in L$ at the points where its projection~$\proj{\ell}$ is cut by the cutting
(that is, where $\proj{\ell}$ crosses the boundary of a cell~$\Delta$ in~$\Xi$).
Up to this point we make only $O(nr)=O(n^{3/2})$ cuts, which does not
affect the worst-case asymptotic bound on the number of cuts.
Each cell~$\Delta$ of the cutting defines a vertical column $C_{\Delta}$.
Within each column, we apply the algorithm of Aronov~\emph{et al.}\xspace~\cite{abgm-ccrs-08}
to compute a cut set of size
$O(\mbox{{\sc opt}}\xspace_{{\Delta}} \cdot \log \mbox{{\sc opt}}\xspace_{{\Delta}} \cdot\log\log \mbox{{\sc opt}}\xspace_{{\Delta}})$,
where $\mbox{{\sc opt}}\xspace_{{\Delta}}$ is the size of an optimal cut set inside the column.
In total this gives
$O(\mbox{{\sc opt}}\xspace \cdot \log \mbox{{\sc opt}}\xspace \cdot\log\log \mbox{{\sc opt}}\xspace) = O(n^{3/2}\polylog n)$ cuts
in time $O(n \cdot (n^{1/2})^{4+2\omega}) = O(n^{3+\omega})$.
This leads to the following result.
\begin{theorem}\label{thm:lines}
For any set $L$ of $n$ disjoint lines in ${\mathbb R}^3$, we can compute
in $O(n^{3+\omega})$ time a set of $O(n^{3/2}\polylog n)$ cut points on the lines such that the
resulting set of fragments admits a depth order, where $\omega<2.373$ is the exponent
in the running time of the best matrix-multiplication algorithm.
\end{theorem}
\section{A more refined bound and an extension to surface patches}
Let $\ensuremath{\mathcal{T}}$ be a set of disjoint surface patches in~${\mathbb R}^3$. We assume each surface
patch is $xy$-monotone, that is, each vertical line intersects a patch in a single
point or not at all, and we assume each surface patch is bounded by a constant number
of bounded-degree algebraic arcs. We refer to the arcs bounding a surface patch as
the \emph{edges} of the surface patch. We assume the edges are in general position
as defined by Aronov and Sharir~\cite{as-atbedc-16}, except that edges of the same
patch may share endpoints.
We will show how to cut the patches from $\ensuremath{\mathcal{T}}$ into
fragments such that the resulting fragments admit a depth order. The total
number of fragments will depend on~$K$, the number of intersections between the
projections of the edges: for any fixed $\varepsilon>0$, we can tune our procedure
so that it generates $O(n^{1+\varepsilon} + n^{1/4} K^{3/4} \polylog n)$ fragments.
Trivially this implies that the same
intersection-sensitive bound holds for triangles.
The extension of our procedure to obtain an intersection-sensitive
bound for surface patches is fairly straightforward.
First we observe that the analog of Proposition~\ref{prop:column} still holds, where
the base of the column can now have curved edges. In fact, the proof holds verbatim,
if we allow the links of the witness cycles $\Gamma(\ensuremath{\mathcal{C}})$ that connect points
$a_i$ and $b_i$ on the same surface patch to be curved.
Now, instead of using efficient hierarchical cuttings~\cite{c-chdc-93,m-rsehc-93}
we recursively generate a sequence of cuttings using the intersection-sensitive
cuttings of De~Berg and Schwarzkopf~\cite{bs-ca-95}. (This is somewhat similar
to the way in which Aronov and Sharir~\cite{as-atbedc-16} obtain an intersection-sensitive
bound on the number of cuts needed to eliminate all cycles for a set of line segments in ${\ensuremath{\mathcal{B}}bb R}^3$.)
Below we give the details.
Let $\ensuremath{\mathcal{E}}$ denote the set of $O(n)$ edges of the surface patches in~$\ensuremath{\mathcal{T}}$.
A $(1/r)$-cutting for $\proj{\ensuremath{\mathcal{E}}}$ is a subdivision of ${\mathbb R}^2$ into
trapezoidal cells, such that the interior of each cell intersects at most $n/r$
edges from~$\proj{\ensuremath{\mathcal{E}}}$. Here a trapezoidal cell
is a cell bounded by at most two segments that are parallel to the $y$-axis and at most two pieces of
edges in~$\proj{\ensuremath{\mathcal{E}}}$ (at most one bounding it from above and at most one bounding it from below).
Set $r := \min(n^{5/4}/K^{1/4},n)$. Let $\rho$ be a sufficiently large constant, and let $k$ be
such that $\rho^{k-1}<r\leqslant \rho^k$; the exact value of $\rho$ depends on the
desired value of $\varepsilon$ in the final bound. We recursively construct a hierarchy
$\Psi := \Xi_0,\Xi_1,\ldots,\Xi_k$ of cuttings such that $\Xi_i$ is a
$(1/\rho^i)$-cutting for~$\proj{\ensuremath{\mathcal{E}}}$, as follows. The initial cutting~$\Xi_0$ is
the entire plane~${\mathbb R}^2$. To construct $\Xi_i$ we take each cell $\Delta$
of $\Xi_{i-1}$ and we construct a $(1/\rho)$-cutting for the set
$\proj{\ensuremath{\mathcal{E}}}_\Delta := \{ \proj{e}\cap \Delta : \proj{e}\in\proj{\ensuremath{\mathcal{E}}}\}$.
De~Berg and Schwarzkopf~\cite{bs-ca-95} have shown that there is such a cutting
consisting of $O(\rho + K_{\Delta}\rho^2/n_\Delta^2)$ cells, where $n_\Delta := |\proj{\ensuremath{\mathcal{E}}}_\Delta|$
and $K_{\Delta}$ is the number of intersections inside~$\Delta$. One easily
shows by induction on~$i$ that for each cell $\Delta$ in $\Xi_{i-1}$ we have\footnote{Strictly
speaking this is not true, as $n$ denotes the number of patches and not the
number of edges. To avoid cluttering the notation we allow ourselves this
slight abuse of notation.}
$n_\Delta \leqslant n/\rho^{i-1}$. Hence, by combining the cuttings $\Xi_\Delta$
over all $\Delta\in\Xi_{i-1}$ we obtain a $(1/\rho^i)$-cutting~$\Xi_i$.
Let $|\Xi_i|$ be the number of cells in $\Xi_i$. Then $|\Xi_0|=1$ and, for a suitable
constant~$D$ (which depends on the degree of the edges and follows from
the construction of De~Berg and Schwarzkopf~\cite{bs-ca-95}),
we have
\[
\begin{array}{llll}
|\Xi_i| & \leqslant & \sum_{\Delta\in\Xi_{i-1}} D (\rho + K_{\Delta}\rho^2/n_\Delta^2) & \\[2mm]
& \leqslant & D\rho \cdot |\Xi_{i-1}| + D K\rho^{2i}/n^2 & \mbox{(since $\sum_{\Delta\in\Xi_{i-1}} K_\Delta\leqslant K$ and $n_\Delta \leqslant n/\rho^{i-1}$)} \\[2mm]
& \leqslant & D^i \rho^i + \frac{DK}{n^2} \sum_{j=0}^i D^{j} \rho^{2i-j} & \\[2mm]
& \leqslant & D^i \rho^i + \frac{DK}{n^2} \cdot 2\rho^{2i} & \mbox{(assuming $\rho>2D$).}
\end{array}
\]
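As a quick numerical sanity check of the bound just derived, the following small Python sketch iterates the recurrence for $|\Xi_i|$ and verifies that it stays below the closed form $D^i\rho^i + 2DK\rho^{2i}/n^2$; the parameter values $D$, $\rho$, $n$, $K$, $k$ are arbitrary illustrative choices (with $\rho>2D$, as assumed above).
\begin{verbatim}
# Illustration only: iterate |Xi_i| <= D*rho*|Xi_{i-1}| + D*K*rho^(2i)/n^2
# and compare with the closed-form bound D^i*rho^i + 2*D*K*rho^(2i)/n^2.
D, rho, n, K, k = 3, 8, 10**5, 10**7, 6    # arbitrary values with rho > 2D

size = 1.0                                  # |Xi_0| = 1
for i in range(1, k + 1):
    size = D * rho * size + D * K * rho**(2 * i) / n**2
    closed = D**i * rho**i + 2 * D * K * rho**(2 * i) / n**2
    assert size <= closed, (i, size, closed)
print("recurrence stays below the closed-form bound for i = 1..%d" % k)
\end{verbatim}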
Now we can proceed exactly as before. Thus we first traverse the hierarchy~$\Psi$
with each patch~$T\in\ensuremath{\mathcal{T}}$, associating $T$ to nodes $u$ such that $\Delta_u\subseteq\proj{T}$
and $\Delta_{\mathrm{parent}(u)}\not\subseteq\proj{T}$, and to
the leaves that we reach. This partitions $T$ into a number of fragments.
The resulting set~$\ensuremath{\mathcal{T}}_1$ of fragments generated over all triangles $T\in\ensuremath{\mathcal{T}}$ has total size
\begin{eqnarray}
|\ensuremath{\mathcal{T}}_1| & = & O\left( \textstyle{\sum}_{i=0}^{k-1} \textstyle{\sum}_{\Delta\in\Xi_i} n/\rho^i \right) \label{eq:T1} \\
& = & O\left( \textstyle{\sum}_{i=0}^{k-1} \left( D^i \rho^i + \frac{DK}{n^2} \rho^{2i} \right) \cdot (n/\rho^i) \right) \nonumber \\
& = & O\left( n \textstyle{\sum}_{i=0}^{k-1}D^i + \frac{DK}{n} \textstyle{\sum}_{i=0}^{k-1} \rho^{i} \right) \nonumber \\
& = & O\left( n D^k + DKr/n \right) \nonumber
\end{eqnarray}
If we now set $\rho := D^{1/\varepsilon}$ then $D^k =\rho^{k\varepsilon} = O(r^{\varepsilon})=O(n^{\varepsilon})$, and so
$|\ensuremath{\mathcal{T}}_1|=O(n^{1+\varepsilon} + Kr/n)$. We then decompose ${\ensuremath{\mathcal{B}}bb R}^3$ into a subdivision~$\ensuremath{\mathcal{S}}$
consisting of prisms~$\sigma$
that each intersect at most $n/r$ surface patches, take a minimum-size complete cut set~$X(\sigma)$
for the edges inside each prism~$\sigma$, and generate a set $H(\sigma)$ of cutting planes
through the points in $X(\sigma)\cup \ensuremath{\mathcal{V}}(\sigma)$. (Here $\ensuremath{\mathcal{V}}(\sigma)$ is, as before,
the set of vertices of the surface patches in the interior of~$\sigma$.)
Since Aronov and Sharir~\cite{as-atbedc-16} proved that any set of $n$ bounded-degree
algebraic arcs in general position\footnote{We assumed the edges are in general
position, but edges of the same patch may share endpoints. However, there
are no endpoints in the interior of~$\sigma$, and so the only degeneracy that can
happen is if two edges of the same patch share an endpoint that lies on the
boundary of~$\sigma$. In this case we can slightly shorten and perturb the
edges to remove this degeneracy as well.}
admits a complete cut set of size $O(n + (nK)^{1/2}\polylog n)$---the
constant of proportionality and the exponent of the polylogarithmic factor depend
on the degree of the arcs---we have
\[
\sum_{\sigma\in\ensuremath{\mathcal{S}}} |X(\sigma)| = O\left( n + (nK)^{1/2}\polylog n \right)
\]
and so the number of additional fragments created in Step~\ref{step2} is bounded by
\[
O(n/r) \cdot \left( \sum_{\sigma\in\ensuremath{\mathcal{S}}} (|\ensuremath{\mathcal{V}}(\sigma)|+ |X(\sigma)|) \right)
=
O\left( n^2/r + (n^{3/2}K^{1/2}/r)\polylog n \right).
\]
By picking $r := \min(n^{5/4}/K^{1/4},n)$ our final bound on the number of fragments becomes
$O(n^{1+\varepsilon} + n^{1/4} K^{3/4}\polylog n)$.
\begin{theorem}\label{thm:patches}
Let $\ensuremath{\mathcal{T}}$ be a set of $n$ disjoint $xy$-monotone surface patches in ${\mathbb R}^3$, each
bounded by a constant number of constant-degree algebraic arcs in general position.
Then for any fixed $\varepsilon>0$ we can cut $\ensuremath{\mathcal{T}}$ into $O(n^{1+\varepsilon} + n^{1/4} K^{3/4}\polylog n)$ fragments
that admit a depth order, where $K$ is the number
of intersections between the projections of the edges of the surface patches in~$\ensuremath{\mathcal{T}}$.
The constant of proportionality and the exponent of the polylogarithmic factor depend
on the degree of the edges. The expected time needed to compute the cuts is
$O\left( n^{1+\varepsilon} + K^{(3+\omega)/2+\varepsilon} / n^{(1+\omega)/2} \right)$,
where $\omega<2.373$ is the exponent
in the running time of the best matrix-multiplication algorithm.
\end{theorem}
\begin{proof}
The bound on the number of fragments follows from the discussion above. To prove the time
bound we first note that an intersection-sensitive cutting of size
$O(\rho + K_{\Delta}\rho^2/n_\Delta^2)$ can be computed in expected time
$O(n_\Delta \log \rho + K_\Delta \rho/n_\Delta)$~\cite{bs-ca-95}.
Hence, constructing the hierarchy takes expected time
\[
\begin{array}{lll}
O\left( \sum_{i=0}^{k-1} \sum_{\Delta\in\Xi_i} \left( \frac{n}{\rho^i}\log \rho + K_{\Delta} \frac{\rho}{ (n/\rho^i)} \right) \right)
& = &
O\left( \sum_{i=0}^{k-1} \sum_{\Delta\in\Xi_i} \frac{n}{\rho^i}\log \rho \right) +
O\left( \sum_{i=0}^{k-1} K \frac{\rho^{i-1}}{n} \right).
\end{array}
\]
The first term is the same as in Equation~(\ref{eq:T1})
except for the extra $\log\rho$-factor (which is a constant), so this term is still bounded by
$O(n^{1+\varepsilon} + Kr/n)$. Since $\rho^{k-1}\leqslant r$, the second term is bounded by $O(Kr/n)$,
which is dominated by the first term. Thus the total expected time to compute the set $\ensuremath{\mathcal{T}}_1$ is
$O(n^{1+\varepsilon} + Kr/n)$.
In the second stage of the algorithm we use the algorithm of Aronov~\emph{et al.}\xspace~\cite{abgm-ccrs-08}
on the set $\ensuremath{\mathcal{E}}(\sigma)$ of edge fragments inside each prism~$\sigma\in \ensuremath{\mathcal{S}}$. Aronov~\emph{et al.}\xspace
only explicitly state their result for line segments, but it is easily checked that
it works for curves as well; the fact that, for example, there can already be cyclic
overlap between a pair of curves has no influence on the algorithm's approximation
factor or running time. (The crucial property still holds that cut
points can be ordered linearly along a curve, and this is sufficient for the algorithm to work.)
Thus the time needed to compute all cut sets $X(\sigma)$ is
\[
O\left( \sum_{\sigma\in\ensuremath{\mathcal{S}}} n_{\sigma}^{4+2\omega}\log^2 n_\sigma \right),
\]
where $n_\sigma$ is the number of edges inside~$\sigma$.
Since $n_\sigma = O(n/r)$ and $\sum_{\sigma\in\ensuremath{\mathcal{S}}} n_{\sigma} = O(|\ensuremath{\mathcal{T}}_1|) = O(n^{1+\varepsilon} + Kr/n)$,
computing the cut sets takes
\[
O\left( \frac{n^{4+2\omega+\varepsilon}}{r^{3+2\omega}} + K \left( \frac{n}{r} \right)^{2+2\omega+\varepsilon} \right)
\]
time (for a slightly larger $\varepsilon$ than before).
Because we picked $r := \min(n^{5/4}/K^{1/4},n)$, the time to compute the cut sets is
\[
O\left( n^{1+\varepsilon} + n^{\frac{1}{4}-\omega/2+\varepsilon}\cdot K^{\frac{3}{4}+\omega/2} + \frac{K^{\frac{3}{2}+\omega/2+\varepsilon}}{n^{\frac{1}{2}+\omega/2}} \right)
\ \ = \ \
O\left( n^{1+\varepsilon} + K^{\frac{3}{2}+\omega/2+\varepsilon} / n^{\frac{1}{2}+\omega/2} \right),
\]
which dominates the time for the first stage.
Finally, cutting the patches inside each prism~$\sigma$ in a brute-force manner
takes time linear in the maximum number of fragments we generate, namely
$O(n^{1+\varepsilon} + n^{1/4} K^{3/4}\polylog n)$. Thus the total expected time
is
\[
O\left( n^{1+\varepsilon} + K^{\frac{3}{2}+\omega/2+\varepsilon} / n^{\frac{1}{2}+\omega/2} \right).
\]
Observe that for $K=n^2$ we obtain essentially the same bound as in
Theorem~\ref{thm:triangles}.
\end{proof}
\section{Dealing with degeneracies}
We first show how to deal with degeneracies when eliminating cycles from a set of segments
(or lines) in~${\ensuremath{\mathcal{B}}bb R}^3$, and then we argue that the method for triangles presented in
the main text does not need any non-degeneracy assumptions either. (We do not deal with
removing the non-degeneracy assumptions for the case of surface patches.)
\paragraph{Degeneracies among segments.}
Let $S=\{s_1,\ldots,s_n\}$ be a set of disjoint segments in~${\mathbb R}^3$.
(Even though we allow degeneracies, we do not allow the segments in $S$ to intersect or touch,
since then the problem is not well-defined. If the segments
are defined to be relatively open, then we can also allow an
endpoint of one segment to coincide with an endpoint of, or lie in the interior of, another segment.)
We can assume without loss of generality that $S$ does not contain vertical segments,
since eliminating all cycles from the non-vertical segments in $S$ also eliminates all
cycles when we include the vertical segments.
Aronov and Sharir~\cite{as-atbedc-16} make the following non-degeneracy assumptions:
\myenumerate{
\item[(i)] no endpoint of one segment projects onto any other segment;
\item[(ii)] no three segments are concurrent (that is, pass through a common point) in the projection;
\item[(iii)] no two segments in $S$ are parallel.
}
The main difficulty arises from type~(iii) degeneracies where
parallel segments overlap in the projection. The problem is that a small
perturbation will reduce the intersection in the projection to a single point,
and cutting one of the segments at the intersection is effective for the perturbed
segments but not necessarily for the original segments. Next we describe how we handle this
and how to deal with the other degeneracies as well.
First we slightly extend each segment in $S$---segments that are relatively open
would be slightly shortened---to get rid of degeneracies of type~(i),
and we slightly translate each segment to make sure no two segments
intersect in more than a single point in the projection. (The translations are not necessary, but they
simplify the following description and bring out more clearly how
the $\prec$-relations between parallel segments are treated.)
Next, we slightly perturb each segment such that all degeneracies disappear and
any two non-parallel segments whose projections intersect
before the perturbation still do so after the perturbation. This gets rid of
degeneracies of types~(ii) and~(iii). Let $s'_i$ denote the segment $s_i$
after the perturbation, and define $S' := \{s'_1,\ldots,s'_n\}$. The set $S'$
has the following properties:
\myitemize{
\item for any two non-parallel segments $s_i,s_j\in S$ we have $s_i\prec s_j$
if and only if $s'_i\prec s'_j$;
\item the order of intersections along segments in the projection is preserved in the following sense:
if $\proj{s'_i}\cap\proj{s'_j}$ lies before $\proj{s'_i}\cap\proj{s'_k}$ along $\proj{s'_i}$
as seen from a given endpoint of $\proj{s'_i}$, then $\proj{s_i}\cap\proj{s_j}$ does
not lie behind $\proj{s_i}\cap\proj{s_k}$ along $\proj{s_i}$ as seen from the
corresponding endpoint of~$\proj{s_i}$;
\item if $s_i$ and $s_j$ are parallel then $\proj{s'_i}$ and $\proj{s'_j}$ do not intersect.
}
We will show how to obtain
a complete cut set for $S$ from a complete cut set $X'$ for $S'$.
The cut set for $S$ will consist of a cut set $X$ that is derived from $X'$ plus
a set $Y$ of $O(n\log n)$ additional cuts, as explained next.
\begin{itemize}
\item Let $q'\in X'$ be a cut point on a segment $s'_i\in S'$.
Let $s'_j\in S'$ be the segment such that $\proj{s'_i}\cap\proj{s'_j}$
is the intersection point on $\proj{s'_i}$ closest to $\proj{q'}$,
with ties broken arbitrarily. (We can assume that $s'_j$ exists, since
if $\proj{s'_i}$ does not intersect any projected segment then
the cut point~$q$ is useless and can be ignored.) Now we put into $X$ the
point $q\in s_i$ such that $\proj{q}=\proj{s_i}\cap\proj{s_j}$.
(It can happen that several cut points along $s'_i$ generate the same
cut point along~$s_i$. Obviously we need to insert only one of them into~$X$.)
The crucial property of the cut point $q\in X$ generated for $q'\in X'$ is the following:
\myitemize{
\item if $\proj{q'}$ coincides with a certain intersection
along $\proj{s'_i}$ then $\proj{q}$ coincides with the corresponding intersection
along $\proj{s_i}$;
\item if $\proj{q'}$ separates two intersections along
$\proj{s'_i}$ then $\proj{q}$ separates the corresponding intersections along~$\proj{s_i}$
or it coincides with at least one of them.
}
By treating all cut points in $X'$ in this manner, we obtain the set~$X$.
\item The set~$Y$ deals with parallel segments in $S$ whose projections overlap.
It is defined as follows. Let $S(X)$ be the set of fragments resulting from
cutting the segments in $S$ at the cut points in~$X$. Partition $S(X)$ into
subsets~$S_{\ell}(X)$ such that $S_\ell(X)$ contains all fragments from $S(X)$
projecting onto the same line~$\ell$. Consider such a subset $S_{\ell}(X)$
and assume without loss of generality that $\ell$ is the $x$-axis.
Construct a segment tree~\cite{bcko-cgaa-08} for the projections of the fragments in~$S_\ell(X)$.
Each projected fragment $\proj{f}$ is stored at $O(\log |S_\ell(X)|)=O(\log n)$
nodes of the segment tree,
which induces a subdivision of $\proj{f}$ into $O(\log n)$ intervals.
We put into $Y$ the $O(\log n)$ points on $f$ whose projections define these intervals (see the sketch after this list).
The crucial property of segment trees that we will need is the following:
\myitemize{
\item Let $I_v$ denote the interval corresponding to a node~$v$. Then for any two
nodes~$v,w$ we either have $I_v\subseteq I_w$ (when $v$ is a descendent of $w$),
or we have $I_v\supseteq I_w$ (when $v$ is an ancestor of $w$), or otherwise
the interiors of $I_v$ and $I_w$ are disjoint. Hence, a similar property holds
for the projections of the sub-fragments resulting from cutting the fragments
in $S_\ell(X)$ as explained above.
}
Doing this for all fragments $s_i\in S_\ell(X)$ and for all subsets $S_\ell(X)$ gives us
the extra cut set~$Y$.
\end{itemize}
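The following Python sketch (hypothetical names, illustration only) shows the segment-tree step on a set of projected fragments, given as intervals on the $x$-axis: for each fragment it reports the canonical node intervals into which the tree subdivides it, and the interior endpoints of those canonical intervals are the cut points contributed to $Y$. By construction, any two canonical intervals are either nested or have disjoint interiors, which is exactly the property used in the proof below.
\begin{verbatim}
# Sketch of the extra cuts Y for one set S_ell(X) of projected fragments.
def build_elementary(intervals):
    xs = sorted({x for iv in intervals for x in iv})
    return list(zip(xs[:-1], xs[1:]))            # elementary intervals

def canonical_pieces(elem, lo, hi, a, b, out):
    """Report maximal node ranges of elem[lo:hi] fully inside [a, b]
    (the canonical nodes at which the interval [a, b] is stored)."""
    left, right = elem[lo][0], elem[hi - 1][1]
    if right <= a or b <= left:                  # node range disjoint from [a,b]
        return
    if a <= left and right <= b:                 # node range contained in [a,b]
        out.append((left, right))
        return
    mid = (lo + hi) // 2
    canonical_pieces(elem, lo, mid, a, b, out)
    canonical_pieces(elem, mid, hi, a, b, out)

def extra_cut_xs(intervals):
    elem = build_elementary(intervals)
    cuts = {}
    for (a, b) in intervals:
        pieces = []
        canonical_pieces(elem, 0, len(elem), a, b, pieces)
        # interior endpoints of the canonical pieces become cut points in Y
        cuts[(a, b)] = sorted({x for p in pieces for x in p} - {a, b})
    return cuts

print(extra_cut_xs([(0, 4), (1, 3), (2, 6), (5, 6)]))
\end{verbatim}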
\begin{lemma}\label{le:remove-degen}
The set $X\cup Y$ is a complete cut set for $S$.
\end{lemma}
\begin{proof}
Let $F$ denote the set of fragments resulting from cutting the segments in~$S$
at the points in~$X\cup Y$, and suppose for a contradiction that $F$ still contains a cycle.
Let $\ensuremath{\mathcal{C}}:=f_0\prec f_1\prec\cdots\prec f_{k-1}\prec f_0$ be a minimal cycle in $F$,
and let $s_i\in S$ be the segment containing~$f_i$.
As explained above, the cut points in $Y$ guarantee that for any two
parallel fragments in~$F$ whose projections overlap, one is contained in the other in the projection.
This implies that two consecutive fragments $f_i,f_{i+1}$ in $\ensuremath{\mathcal{C}}$ cannot be parallel: if they were,
then $\proj{f_i}\subseteq \proj{f_{i+1}}$ (or vice versa) which contradicts that
$\ensuremath{\mathcal{C}}$ is minimal. Hence, any two consecutive fragments are non-parallel. Now consider
the witness curve $\Gamma(\ensuremath{\mathcal{C}})$ for $\ensuremath{\mathcal{C}}$. Since consecutive fragments in $\ensuremath{\mathcal{C}}$ are
non-parallel, $\Gamma(\ensuremath{\mathcal{C}})$ is unique. Let $\Gamma'$ be the corresponding curve
for $S'$, that is, $\Gamma'$ visits the segments $s'_0,s'_1,\ldots,s'_{k-1},s'_0$
from $S'$ in the given order---recall that $f_i\subseteq s_i$ and that $s'_i$ is
the perturbed segment $s_i$---and it steps from $s'_i$ to $s'_{i+1}$ using
vertical connections. Since $X'$ is a complete cut set for $S'$, there must be a
link of $\Gamma'$, say on segment $s'_i$, that contains a cut point~$q'\in X'$.
In other words, $\proj{q'}$ separates $\proj{s'_{i-1}}\cap \proj{s'_{i}}$ from
$\proj{s'_{i}}\cap \proj{s'_{i+1}}$, or it coincides with one of these points.
But then the cut point~$q\in X$ corresponding to $q'$ must separate
$\proj{s_{i-1}}\cap \proj{s_{i}}$ from $\proj{s_{i}}\cap \proj{s_{i+1}}$
or coincide with one of these points, thus cutting the witness curve $\Gamma(\ensuremath{\mathcal{C}})$---a contradiction.
\end{proof}
\begin{theorem}\label{th:degenerate}
Suppose any non-degenerate set of $n$ disjoint segments can be cut into $\gamma(n)$ fragments
in $T(n)$ time such that the resulting set of fragments admits a depth order.
Then any set of $n$ disjoint segments can be cut into $O(\gamma(n)\log n)$ fragments
in $T(n)+O(n^2)$ time such that the resulting set of fragments admits a depth order.
\end{theorem}
\begin{proof}
The bound on the number of fragments immediately follows from the discussion above.
The overhead term in the running time is caused by the computation of the perturbed set~$S'$, which
can be done in $O(n^2)$ time if we compute the full arrangement in the projection.
\end{proof}
\paragraph{Degeneracies among triangles.}
Recall that the cuts we make on the triangles are induced by vertical planes, and that
a triangle becomes open where it is cut. When a triangle is completely contained
in the cutting plane, however, it is not well defined what happens. One option is to say
that the triangle completely disappears; another option is to
say that the triangle is not cut at all. Since vertical triangles can be ignored
in the Painter's Algorithm, we will simply assume that no triangle in $\ensuremath{\mathcal{T}}$ is vertical.
However, we can still have other degeneracies, such as edges of different
triangles being parallel or triples of projected edges being concurrent.
Fortunately, the fact that we do not need non-degeneracy assumptions for segments immediately implies
that we can handle such cases. Indeed, degeneracies are
not a problem for the hierarchical cuttings we use in Step~\ref{step1} of our procedure,
and in Step~\ref{step2} we only assumed non-degeneracy when computing the cut set $X(\sigma)$ for
the edge set~$\ensuremath{\mathcal{E}}(\sigma)$---and Theorem~\ref{th:degenerate} implies we can get rid of
the non-degeneracy assumptions in this step. Note that the $O(n^2)$ overhead term in
Theorem~\ref{th:degenerate} is subsumed by the time needed to apply the algorithm of Aronov~\emph{et al.}\xspace~\cite{abgm-ccrs-08}.
\section{Concluding remarks}
\label{se:concl}
We proved that any set of $n$ disjoint triangles in ${\mathbb R}^3$ can be cut into
$O(n^{7/4}\polylog n)$ triangular fragments that admit a depth order, thus providing
the first subquadratic bound for this important setting of the problem. We also proved
a refined bound that depends on the number of intersections of the triangle edges
in the projection, and generalized the result to $xy$-monotone surface patches.
The main open problem is to tighten the gap between our bound and the $\Omega(n^{3/2})$
lower bound on the worst-case number of fragments needed: is it possible to cut
any set of triangles into roughly $n^{3/2}$ triangular fragments that admit a depth
order, or is this only possible by using curved cuts? One would expect the former,
but curved cuts seem unavoidable in the approach of Aronov, Miller and Sharir~\cite{ams-adcat-17}
and it seems very hard to push our approach to obtain any $o(n^{7/4})$ bound.
\end{document}
\begin{document}
\title{FedSSC: Shared Supervised-Contrastive Federated Learning\\
}
\author{
\IEEEauthorblockN{Sirui Hu*}
\IEEEauthorblockA{\textit{SEAS} \\
\textit{Harvard University}\\
siruihu\\
@g.harvard.edu}
\and
\IEEEauthorblockN{Ling Feng*}
\IEEEauthorblockA{\textit{HSPH} \\
\textit{Harvard University}\\
lingfeng\\
@hsph.harvard.edu}
\and
\IEEEauthorblockN{Xiaohan Yang*}
\IEEEauthorblockA{\textit{SEAS} \\
\textit{Harvard University}\\
xiaohan\_yang\\@g.harvard.edu}
\and
\IEEEauthorblockN{Yongchao Chen*}
\IEEEauthorblockA{\textit{SEAS} \\
\textit{Harvard University}\\
yongchaochen\\@fas.harvard.edu}
}
\maketitle
\begin{abstract}
Federated learning is widely used to perform decentralized training of a global model on multiple devices while preserving the data privacy of each device. However, it suffers from heterogeneous local data on each training device, which makes it harder to reach the same level of accuracy as centralized training. Supervised Contrastive Learning, which has been shown to outperform cross-entropy training, minimizes the distance in feature space between points belonging to the same class and pushes away points from different classes. We propose Shared Supervised-Contrastive Federated Learning, in which devices share their learned class-wise feature representations with each other and add a supervised-contrastive learning loss as a regularization term to foster the feature-space learning. The loss minimizes the cosine distance between a sample's feature representation and the averaged feature representation of the same class from another device, and maximizes the distance to averaged representations of different classes. When added on top of the MOON regularization term, this new regularization term is found to outperform the other state-of-the-art regularization terms in addressing the heterogeneous data distribution problem.
\end{abstract}
\begin{IEEEkeywords}
federated learning, contrastive learning, representation sharing, non-IID
\end{IEEEkeywords}
\section{Introduction}
Federated Learning \cite{zhang2022fine, zhu2021data, li2020federated, chen2022anomalous, huang2022learn} has become a hot research topic in recent years due to its applications in many fields where participants do not want to share private training data but still want a high-quality co-trained model. However, the usually non-IID distribution of the training data across participating clients harms model convergence and model accuracy to a large extent. Supervised Contrastive Learning, which minimizes the distance in feature space between points belonging to the same class and pushes away points from different classes, was found to outperform cross-entropy training \cite{simclr,simsiamese}. MOON \cite{moon} is one of the state-of-the-art regularization terms and has been found to be very effective in addressing the heterogeneity problem in federated learning by utilizing the similarity between model representations to correct the local training of individual parties. Inspired by these two ideas, we propose Shared Supervised-Contrastive Federated Learning (FedSSC) to tackle the heterogeneity problem. In FedSSC, devices share with each other their learned class-wise feature embeddings and add a supervised-contrastive learning loss as a regularization term to foster the feature-space learning. The loss minimizes the cosine distance between the current sample's feature embedding and the averaged feature embedding from another device if they are in the same class, and conversely maximizes the distance if they are in different classes. When added on top of the MOON regularization term, this new regularization term is found to outperform the other state-of-the-art regularization terms by reaching higher accuracy and converging in fewer rounds.
\section{Problem to Solve}
\subsection{Problem Statement}
We are particularly interested in the non-IID setting of federated learning. Specifically, we assume that there are $N$ devices, and each of them has the training data $D_i$ where $i \in \{1, 2, ..., N\}$. Each $D_i$ consists of a different distribution of class labels. Our goal is to learn a machine learning model without local devices directly sharing training data to solve
\begin{equation} \label{eq:loss term}
\arg\min_{w} L(w) = \sum_{i=1}^N \frac{|D_i|}{|D|} L_i(w)
\end{equation}
where $L_i(w)$ is the empirical loss of the local device.
\subsection{Background}
Federated learning is widely used to perform decentralized training of a global model on multiple devices while preserving the data privacy of each device. One of the most basic yet popular models is FedAvg \cite{fedavg} algorithm, in which a central server aggregates the local model weights on each device to build a global model without directly accessing the local training data. However, it suffers from heterogeneous local data on each training device which increases the difficulty of reaching the same level of accuracy as the centralized training. To tackle the heterogeneity, MOON proposes to add a regularization term to
prevent local feature representation of the image from being too far from the global feature representation of the same image \cite{moon}. MOON outperformed other regularization terms like FedProx \cite{fedprox}.
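For reference, the following minimal NumPy sketch (hypothetical names, illustration only) shows the FedAvg aggregation step described above, in which client weights are averaged in proportion to local dataset sizes.
\begin{verbatim}
# Sketch of FedAvg aggregation: weighted average of client parameters.
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """client_weights: list (per client) of lists of NumPy arrays (per layer);
    client_sizes: number of local samples n_i per client."""
    total = float(sum(client_sizes))
    agg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n_i in zip(client_weights, client_sizes):
        for layer, w in enumerate(weights):
            agg[layer] += (n_i / total) * w
    return agg

# toy example: two clients, one "layer" each
w_global = fedavg_aggregate([[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]],
                            [10, 30])
print(w_global)   # [array([2.5, 3.5])]
\end{verbatim}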
\subsection{Research Goal}
The goal of our research is to improve the federated learning algorithm in non-IID scenarios without sharing raw data across devices. Specifically, we would like to introduce an extra regularization for class-wise feature space through supervised contrastive loss on top of the MOON regularization term.
\subsection{Evaluation Metrics}
To compare our method with other existing approaches, we evaluate the model performance by the top-1 accuracy of the global model on an isolated test set. Moreover, we use the number of communication rounds needed to achieve the same level of accuracy as our metric for convergence speed.
\section{Approach}
Inspired by the idea of Supervised Contrastive Learning, in addition to the MOON loss FedSSC utilizes class-wise average feature maps shared by other devices to correct local training and to tackle the heterogeneity problem. The objective function for the local device is composed of three parts: 1) the typical supervised learning loss term calculated with cross-entropy ($l_{class}$), 2) the MOON loss ($l_{moon}$), and 3) the global class-wise contrastive loss ($l_{glob}$). With $\tau$ being the temperature, $l_{moon}$ uses the projected feature representations $z$, $z_{glob}$, and $z_{prev}$, obtained by passing the same image into the current model, the current round's global model, and the previous epoch's model, respectively \cite{moon}.
\begin{equation} \label{eq:moon_loss}
l_{moon}= -\log \frac{\exp(\mathrm{sim}(z,z_{glob})/\tau)}{\exp(\mathrm{sim}(z,z_{glob})/\tau)+\exp(\mathrm{sim}(z,z_{prev})/\tau)}
\end{equation}
Similarly $l_{glob}$ takes the temperature $\tau$, the current projected feature representations $z^i$ in class $i$ and the shared global class-wise projected feature representations $zs_{glob}$ with a total of $|K|$ classes. For the shared global representation in the same class as the image, we treat them as a positive pair, whereas any other shared global representations in a different class as negative pairs.
\begin{equation} \label{eq:global_loss}
l_{glob}= -\log \frac{\exp(\mathrm{sim}(z^i,zs^i_{glob})/\tau)}{\sum_{k \in K} \exp(\mathrm{sim}(z^i,zs^k_{glob})/\tau)}
\end{equation}
To construct $zs_{glob}$, we first have each local device report to the global server its class-wise averaged projected feature representations at the last local epoch, computed with Equation \ref{eq:global_rep}, assuming that the device has $N$ training samples, $N^k$ of which belong to class $k$, and writing $y_j$ for the label of sample $j$. In each communication round, for each class the global server randomly selects a device that has at least 10 images of that class locally as the source of that class's class-wise feature representation.
\begin{equation} \label{eq:global_rep}
zs^k_{glob} = \frac{\sum_{j=1}^{N} z_j \cdot 1_{y_j=k}}{N^k}
\end{equation}
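As a concrete illustration of Equation~\ref{eq:global_rep}, the following small NumPy sketch (hypothetical names and array shapes, not our actual implementation) averages a device's projected representations per class using the sample labels.
\begin{verbatim}
# Sketch of the class-wise averaging of projected representations.
import numpy as np

def classwise_representations(z, y, num_classes):
    """z: (N, d) projected representations, y: (N,) integer labels.
    Returns a dict class -> averaged representation, for classes present."""
    zs = {}
    for k in range(num_classes):
        mask = (y == k)
        if mask.any():                    # only classes with N^k > 0 samples
            zs[k] = z[mask].mean(axis=0)
    return zs

z = np.random.randn(8, 4)
y = np.array([0, 0, 1, 1, 1, 2, 2, 2])
print({k: v.shape for k, v in classwise_representations(z, y, 3).items()})
\end{verbatim}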
As shown in Equation \ref{eq:loss}, we can tune the two parameters $\mu_{moon}$ and $\mu_{glob}$ to weight the MOON loss and the global class-wise contrastive loss differently. The local objective is to minimize $l$.
\begin{equation} \label{eq:loss}
l = l_{class} + \mu_{moon}*l_{moon} + \mu_{glob} * l_{glob}\\
\end{equation}
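The following minimal NumPy sketch (an illustration under assumed shapes and names, not the code used in our experiments) computes the two regularizers of Equations~\ref{eq:moon_loss} and~\ref{eq:global_loss}, with cosine similarity as $\mathrm{sim}(\cdot,\cdot)$, and combines them as in Equation~\ref{eq:loss} (the cross-entropy term is omitted).
\begin{verbatim}
# Sketch of the MOON and global class-wise contrastive regularizers.
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def moon_loss(z, z_glob, z_prev, tau=0.5):
    pos = np.exp(cos(z, z_glob) / tau)
    neg = np.exp(cos(z, z_prev) / tau)
    return -np.log(pos / (pos + neg))

def global_contrastive_loss(z, label, zs_glob, tau=0.5):
    # zs_glob: dict class -> shared averaged representation
    sims = {k: np.exp(cos(z, rep) / tau) for k, rep in zs_glob.items()}
    return -np.log(sims[label] / sum(sims.values()))

def fedssc_regularizer(z, label, z_glob, z_prev, zs_glob,
                       mu_moon=1.0, mu_glob=1.0, tau=0.5):
    return (mu_moon * moon_loss(z, z_glob, z_prev, tau)
            + mu_glob * global_contrastive_loss(z, label, zs_glob, tau))

d = 8
zs_glob = {k: np.random.randn(d) for k in range(10)}
print(fedssc_regularizer(np.random.randn(d), 3,
                         np.random.randn(d), np.random.randn(d), zs_glob))
\end{verbatim}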
\begin{algorithm}
\caption{FedSSC Framework}\label{alg:cap}
\begin{algorithmic}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\Require number of communication rounds $T$,
number of devices $P$, number of local
epochs $E$, temperature $\tau$ , learning rate $\eta$,
hyper-parameter $\mu$, total number of data $N$, number of data in device$_i$ is $N_i$
\Ensure Final global model $w^T$
\\ \textit{\textbf{Global Server}} :
\\ \text{Initialize global model $w^0$}
\\ \text{and classwise feature representation $zs^0$}
\For{$t=0,1,...,T-1$}
\For{$i=1,...,P$}
\State send the $w^t$ and the $zs^t$ to device $i$
\State $w_i^{t+1} , zs_i^{t+1} \gets$ \textbf{LocalTraining}($i,w^t,zs^{t}$)
\EndFor
\State $w^{t+1}\gets \sum_{i=1}^P \frac{N_i}{N} w_i^{t+1}$
\State $zs^{t+1}\gets \sum_i^P \frac{zs_i^{t+1}}{P} $
\EndFor
\State return $w^T$
\\
\\ \textit{\textbf{LocalTraining}} :
\State $w_0^t = w^t$
\For{$i=0,...,E-1$}
\For{each batch b = ($x,y$) of $N_i$}
\State $l_{class} \gets CrossEntropyLoss(F_{w_i^t}(x),y)$
\State $z \gets Proj(Enc(w_i^t;x))$
\State $z_{glob} \gets Proj(Enc(w^t;x))$
\State $z_{prev} \gets Proj(Enc(w_{i}^{t-1};x))$
\State $l \gets l_{class} + \mu_{moon}*l_{moon}(z,z_{glob},z_{prev}) + \mu_{glob} * l_{glob}(z,zs^t)$
\State $w_{i+1}^t \gets w_i^t - \eta \nabla l $
\EndFor
\EndFor
\For{each class c $\in$ C}
\State $zs^c \gets \frac{1}{N^c}\sum_{j=1}^{N_i} Proj(Enc(w_E^t;x_j)) \cdot 1_{\{y_j=c\}}$
\EndFor
\State return $w_E^t$, $zs$\\
\end{algorithmic}
\end{algorithm}
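The server-side aggregation in the framework above can be sketched as follows (an illustration under our assumptions about the data layout: model weights as PyTorch state dicts and shared representations as dicts keyed by class; the actual implementation may differ):
\begin{verbatim}
import torch

def aggregate_weights(local_states, local_sizes):
    # w^{t+1} <- sum_i (N_i / N) * w_i^{t+1}  (size-weighted FedAvg).
    total = float(sum(local_sizes))
    return {key: sum((n / total) * s[key].float()
                     for s, n in zip(local_states, local_sizes))
            for key in local_states[0]}

def aggregate_representations(local_reps):
    # zs^{t+1}: average the reported class-wise representations over devices;
    # a device that did not report a class is skipped for that class.
    classes = set().union(*(r.keys() for r in local_reps))
    return {k: torch.stack([r[k] for r in local_reps if k in r]).mean(dim=0)
            for k in classes}
\end{verbatim}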
\section{Intellectual Points}
Our contributions are two-fold. First, most prior works focus on regularizing local devices' weights or only the feature representations of local images \cite{moon},\cite{fedprox}. In contrast, our approach directly takes advantage of feature representations from other devices, without sharing the raw data, by regularizing each sample's local representation with its corresponding global class-wise feature representation. The global class-wise feature representation of each class comes from a randomly selected device in each round.
Furthermore, our experiments show that the global representation contrastive loss and the MOON loss are complementary. We considered losses other than the MOON loss by modifying its negative pair, but the model could easily collapse or the performance would fall below FedAvg. Moreover, even without the MOON loss, simply adding our regularization term on top of the supervised learning loss achieves the same level of accuracy as MOON. However, neither term alone outperforms the other; combining them is the key to our improvement.
\section{Work Performed}
\subsection{Dataset}
We used CIFAR-10 as our experiment dataset because it is relatively small and widely used in previous papers. To simulate the non-IID scenario, we follow MOON's setup and use the Dirichlet distribution to generate a dataset $D_i$ for each device. Specifically, we sample $p_k \sim Dir_N(\beta)$ for each class $k$ and allocate a $p_{k,j}$ proportion of the samples of class $k$ to device $j$. Our default is $\beta=0.5$, which simulates a severe non-IID situation; the larger $\beta$ is, the closer each device's data is to IID. With this Dirichlet partition, each local device can hold the same total number of samples as the others while having a different class distribution.
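A minimal sketch of such a Dirichlet label-skew partition is shown below (illustrative; the report does not include the partitioning code, and any step that equalizes the per-device totals is omitted here):
\begin{verbatim}
import numpy as np

def dirichlet_partition(labels, num_devices, beta=0.5, seed=0):
    # For each class, draw a Dirichlet(beta) proportion vector over devices and
    # split that class's sample indices accordingly (label-skew non-IID split).
    rng = np.random.default_rng(seed)
    device_idx = [[] for _ in range(num_devices)]
    for k in np.unique(labels):
        idx = np.where(labels == k)[0]
        rng.shuffle(idx)
        p = rng.dirichlet(beta * np.ones(num_devices))
        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)
        for dev, part in enumerate(np.split(idx, cuts)):
            device_idx[dev].extend(part.tolist())
    return device_idx
\end{verbatim}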
\begin{figure*}
% Panels (plots omitted): $\beta=0.2$, $\beta=0.5$, $\beta=1$, $\beta=5$.
\caption{Performance comparison under different $\beta$ of non-IID scenarios.}
\label{fig:sub2A}
\label{fig:sub1A}
\label{fig:beta_exp}
\end{figure*}
\subsection{Implementation Details}
\textbf{Main Differences Compared with MOON} Our approach is implemented as an extension to MOON with two main differences. In particular, we modify the loss function to include supervised contrastive loss for shared representations. Furthermore, we change the device-to-server communication to sharing class-wise average representations of the local model, in addition to the local model's weights. To avoid bias from limited data points, we only share the representation if the device has abundant samples for the corresponding class (i.e., more than 10 samples). Moreover, when distributing representations from the server to a device, we randomly sample $k$ representations for each class and take the average of them to represent the specific class. If the server has fewer than $k$ representations for a class, we will average everything we have to represent that class.
\textbf{Model Architecture} Considering the time limit of this study, we used a simple CNN with two convolution layers, two max-pooling layers, and two fully connected layers as the encoder. Each convolution layer and each fully connected layer is followed by a ReLU activation.
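For concreteness, a LeNet-style sketch of such an encoder for $32\times 32$ CIFAR-10 inputs is shown below (the channel widths and the output dimension are our assumptions; the text above only fixes the layer types):
\begin{verbatim}
import torch.nn as nn

class SimpleCNNEncoder(nn.Module):
    # Two conv + max-pool blocks and two fully connected layers, each followed
    # by ReLU, as described above. Input: (B, 3, 32, 32) CIFAR-10 images.
    def __init__(self, out_dim=84):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, out_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.fc(self.features(x))
\end{verbatim}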
\subsection{Experiment Setups}
To evaluate the performance of our method, we compared it with FedAvg and MOON, where MOON is expected to perform better than FedAvg in the non-IID setting. By default, we set $\mu_{MOON}=5$, as it is the best parameter reported in the original paper \cite{moon}. Besides, we set the batch size to $64$ and use the SGD optimizer with a learning rate of $0.01$, a weight decay of $0.00001$, and a momentum of $0.9$.
Furthermore, to mimic traditional two-stage contrastive learning, we decrease the weight of the global representation contrastive loss over the communication rounds. Specifically, after $T_0$ warmup rounds during which the initial weight is kept, we set the weight at round $i$ with the following linear decay, where $\mu_{glob, s}$ is the initial weight, $\mu_{glob, e}$ is the end weight, and $T_{0}$ is the number of warmup rounds. In our experiment, we set $\mu_{glob, s} = 1$, $\mu_{glob, e} = 0.0001$, $T=100$, and $T_0=5$.
$$
\mu_{glob, i} = \mu_{glob, s} - \frac{i - T_{0}}{T-T_{0}} \,(\mu_{glob, s} - \mu_{glob, e}), \qquad T_0 \le i \le T.
$$
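A small helper implementing this schedule could read as follows (a sketch under our reading of the formula above, i.e. the weight is held at $\mu_{glob,s}$ during warmup and then decays linearly):
\begin{verbatim}
def mu_glob(round_idx, mu_start=1.0, mu_end=1e-4, total_rounds=100, warmup=5):
    # Weight of the global class-wise contrastive loss at a given round.
    if round_idx <= warmup:
        return mu_start
    frac = (round_idx - warmup) / (total_rounds - warmup)
    return mu_start - frac * (mu_start - mu_end)
\end{verbatim}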
To simplify our experiments, we assume that no device will reject the server's request, and we will not encounter communication failures. In other words, all devices will participate in each communication round. In our default setting, we set the number of devices to 10. We utilized $1$ GPU and $10$ CPUs for each experimental setting. The training took less than 1 day to finish for all the experiments.
\section{Results}
\begin{table}[h]
\label{results-summary}
\caption{Overall performance and efficiency for different methods.}
\begin{small}
\begin{sc}
\begin{tabular}{lcc}
\hline
Method & Top-1 Accuracy & Num of Comms (0.68 Acc)\\
\hline
FedAvg & 0.658 & more than 100\\
MOON & 0.686 & 61 \\
FedSSC & \textbf{0.693} & \textbf{41}\\
\hline
\end{tabular}
\end{sc}
\end{small}
\end{table}
\subsection{Overall Performance}
In the default setting, our method \textsc{FedSSC} outperforms \textsc{FedAvg} by $3.4\%$ and \textsc{MOON} by $0.7\%$. The improvement compared to \textsc{FedAvg} is significant, while the increase from \textsc{MOON} is smaller. However, \textsc{FedSSC} reaches the $68\%$ level of accuracy at just $41$ communication rounds, while \textsc{MOON} needs $61$ rounds, and \textsc{FedAvg} cannot reach it within $100$ rounds. In summary, our approach performs better and is more efficient than previous methods.
\begin{figure*}
% Panel (a) (plot omitted): Losses w/o model contrastive learning
\label{fig:loss_exp_sub1}
% Panel (b) (plot omitted): Losses w/ local contrastive learning
\label{fig:loss_exp_sub2}
% Panel (c) (plot omitted): Different $k$ of representations shared
\label{fig:loss_exp_sub3}
\caption{Performance comparison for alternative loss components, where the blue line is our proposed approach.}
\label{fig:loss_exp}
\end{figure*}
\subsection{Different Non-IID Scenarios}
To better evaluate \textsc{FedSSC}, we compared its performance with other approaches under various non-IID scenarios, including $\beta \in \{0.2, 0.5, 1, 5\}$. From Figure \ref{fig:beta_exp}, we can see that as we increase $\beta$, the differences between the methods decrease since the heterogeneity is less severe. For both $\beta=0.2$ and $0.5$, \textsc{FedSSC} slightly outperforms \textsc{MOON} and is significantly better than \textsc{FedAvg}. Also, it converges faster than the others.
\subsection{Alternative Loss}
Beyond our proposed loss, we ran extensive experiments with other variants that could potentially learn a better representation. In particular, we tried two-stage contrastive learning, where we train the encoder while freezing the classifier for the first 90 rounds and then train the classifier while freezing the encoder for the last 10 rounds. Furthermore, we experimented with another alternative that adds negative and positive pairs from the same local batch. Figure \ref{fig:loss_exp_sub1} shows that our proposed approach outperforms these alternatives by a large margin, especially in the earlier rounds. This is understandable because training the encoder with only a supervised contrastive loss takes more rounds to converge than using the supervised classification loss; the two-stage variant could potentially outperform our current approach if trained for more rounds.
Furthermore, we attempted to remove $l_{MOON}$ completely by setting $\mu_{MOON}=0$; this still reaches the same level of accuracy as MOON, but not as the combined loss. From Figure \ref{fig:loss_exp_sub2}, we can see that $l_{MOON}$ plays an important role: if we remove it, the performance drops even if we share representations from all devices (i.e., \textsc{FedProc} \cite{contrafl-1}). Moreover, we experimented with different numbers of shared representations $k$ while keeping $\mu_{MOON}=0$. Figure \ref{fig:loss_exp_sub3} shows that sharing representations does boost the performance, although increasing $k$ does not make a big difference. These experimental results demonstrate that $l_{MOON}$ and $l_{glob}$ are complementary to each other.
\section{Related Work}
Recently, many methods have been proposed to improve model accuracy and data usage in heterogeneous data environments, since heterogeneity has long been a key problem for federated learning in fields such as finance, medicine, and social media, where participants do not want to share private data.
\textbf{Federated Learning}
Based on the work of FedAvg \cite{fedavg}, many methods have been proposed to alleviate the heterogeneous distribution problem. FedProx \cite{fedprox} adds an extra regularization term to push local model weights towards the global model weights. The SCAFFOLD \cite{scaffold} method uses variance reduction to correct for heterogeneity during local training. SphereFed \cite{sphere} makes use of a frozen classification head to increase the similarity between the global and local feature spaces. Generally, most previous methods focus on bringing local and global models closer together.
\textbf{Contrastive learning.} Methods such as SimCLR \cite{simclr} and MOCO \cite{moco} have become promising self-supervised approaches in Computer Vision in recent years. BYOL \cite{byol} and Simsiam\cite{simsiamese} have extended the idea of contrastive learning to have zero negative samples, while SupCon \cite{supcon} proposed a supervised contrastive learning approach.
Some researchers have combined contrastive learning with federated learning to mitigate the heterogeneous distribution problem \cite{moon}, \cite{contrafl-2}, \cite{contrafl-1}. Our approach builds upon the idea of MOON \cite{moon}, where the positive pair consists of the local and global representations of the same image, and the negative pair consists of the current local representation and the local representation from the previous communication round.
\textbf{Representation sharing}. Recently, representation sharing has become another direction for handling heterogeneous data distributions. Several works share clients' image-level or class-level representations with others. FedProc \cite{contrafl-1} proposes sharing both the local features and the local model weights; the local features are averaged class-wise, which achieves better performance than MOON or FedProx on CIFAR-10 and CIFAR-100. Another recent work, FedPCL \cite{contrafl-2}, applies individual-level feature sharing on the MNIST dataset.
\section{Conclusion}
In summary, our work proposes to utilize contrastive learning and representation sharing to mitigate the non-IID problem. The experiments show that our method is orthogonal to other federated learning methods and can outperform state-of-the-art models in typical settings; both accuracy and convergence speed improve noticeably. Admittedly, more experiments are needed to test the applicability and performance of our method with different neural network architectures and datasets.
\section{Contribution Statement}
All four authors contributed equally to this work. All members participated fully in reviewing the literature, developing model ideas, coding the different models, tuning hyperparameters, and writing the final report.
\end{document}
|
\begin{document}
\setcitestyle{numbers}
\title{Asymptotics for sums of a function of normalized independent sums}
\author{Kamil Marcin Kosi\'nski}
\email{[email protected]}
\address{Wydzia{\l} Matematyki, Informatyki i Mechaniki, Uniwersytet Warszawski, Warszawa, Poland}
\curraddr{Korteweg-de Vries Institute for Mathematics, University of Amsterdam, P.O. Box 94248,
1090 GE Amsterdam, The Netherlands}
\date{September 17, 2008}
\subjclass[2000]{Primary 60F05}
\keywords{Products of sums of iid rv's, Limit distribution, Central limit theorem, Lognormal distribution}
\begin{abstract}
We derive a central limit theorem for sums of a function of independent sums of independent and
identically distributed random variables. In particular we show that previously known result
from Rempa{\l}a and Weso{\l}owski (Statist. Probab. Lett. 74 (2005) 129--138), which can be obtained by applying
the logarithm as the function, holds true under weaker assumptions.
\end{abstract}
\maketitle
\section{Introduction}
Throughout this paper let $(X_{k,n})_{k=1,\ldots,n}$; $n=1,2\ldots$ be a triangular array of
independent and identically distributed (iid) random variables (rv's) with the same distribution as $X$.
Let us define the (mutually independent) partial sums $S_{n,n}=\sum_{k=1}^n X_{k,n}$. We will
work with real functions $f$ defined at least on an interval $I$ such that $\mathbb{P}(X\in I)=1$.
We will also write $\log^+ x$ for $\log (x\vee 1)$.
\par The asymptotic behavior of the product of partial sums of a sequence of iid rv's
has been studied in several papers (see e.g. \citet{LuiQi}
for a brief review). In particular it was shown in \citet{Remiweso} that
if $(X_n)$ is a sequence of iid positive square integrable rv's with $\mathbb{E} X_1=\mu$,
$\Var(X_1)=\sigma^2>0$, then setting $S_n=\sum_{k=1}^n X_k$ and $\gamma=\sigma/\mu$ we have as $n\to\infty$
\[
\left(\prod_{k=1}^n\frac{S_k}{k\mu}\right)^{\frac{1}{\gamma\sqrt{n}}}\dto e^{\sqrt{2}\mathcal{N}},
\]
where $\dto$ stands for convergence in distribution and $\mathcal{N}$ is a standard normal random variable.
This result was extended in \citet{Qi} and \citet{LuiQi} to a general limit theorem covering
the case when the underlying distribution is integrable and belongs to the domain of attraction of a stable law with index from
the interval $[1,2]$.
\par This study brought an interest to the array case, where we no longer consider a sequence
$(X_n)$ but a triangular array $(X_{k,n})$. In \citet{Remiweso2} the analogous result
was obtained, namely
\begin{equation}
\label{result:rw}
\left(n^{\frac{\gamma^2}{2}}\prod_{k=1}^n\frac{S_{k,k}}{k\mu}\right)^{\frac{1}{\gamma\sqrt{\log n}}}\dto e^{\mathcal{N}},
\end{equation}
under the assumption $\mathbb{E} |X|^p<\infty$ for some $p>2$.
\par The purpose of this paper is to show that the above result holds true under the assumption $\mathbb{E} X^2(\log^+ |X|)^{1/2}<\infty$.
However, it is no longer true in general when only $\mathbb{E} X^2<\infty$ is assumed.
We will show that under this assumption a different normalisation is needed.
Furthermore, we will set our discussion in a more general setting. It is straightforward that \eqref{result:rw}
is a simple corollary from
\[
\frac{\sum_{k=1}^n f(S_{k,k}/k)-b_n}{a_n}\dto\mathcal{N},
\]
if one sets $f(x)=\log x$ and chooses the sequences $a_n$, $b_n$ properly.
\section{Main result}
\label{main}
\begin{thm}
\label{theorem1} Suppose that $\mathbb{E} |X|^2<\infty$ and denote $\mu=\mathbb{E} X$, $\sigma^2=\Var(X)$.
Let $f$ be a real function with bounded third derivative on some neighbourhood of $\mu$.
Then as $n\to\infty$
\[
\frac{\sum_{k=1}^n f(S_{k,k}/k)-b_n}{a_n}\dto\sigma f'(\mu)\mathcal{N},
\]
where
\[
a_n=\sqrt{\log n}\,,\quad b_n= n f(\mu)+\frac{f''(\mu)}{2}\sum_{k=1}^n\frac{1}{k}\mathbb{E}|X-\mu|^21_{\{|X-\mu|\le\sigma k\}}.
\]
\end{thm}
\begin{rmk}
If we strengthen the assumption of the square integrability of random variable $X$ to $\mathbb{E} X^2(\log^+|X|)^{1/2}<\infty$,
then we can take the sequence $\tilde{b}_n=n f(\mu)+\frac{f''(\mu)\sigma^2}{2}\log n$ instead of $b_n$.
To see this we should show $\tilde{b}_n-b_n=o(\sqrt{\log n})$, which since
$\log n-\sum_{k=1}^n\frac{1}{k}=O(1)$
is equivalent to
\[
Q_n:=\sum_{k=1}^n\frac{1}{k}\left(\sigma^2-\mathbb{E}|X-\mu|^21_{\{|X-\mu|\le\sigma k\}}\right)=o(\sqrt{\log n}).
\]
Observe that $Q_n$ is positive and
\begin{align*}
Q_n&=
\sum_{k=1}^n\frac{1}{k}\mathbb{E}|X-\mu|^21_{\{|X-\mu|>\sigma k\}}
=\mathbb{E}|X-\mu|^2\sum_{\sigma k< |X-\mu|,\, k\le n}\frac{1}{k}\\
&\sim\mathbb{E}|X-\mu|^2\log^+\left(n\wedge(|X-\mu|/\sigma)\right)=:\tilde{Q}_n.
\end{align*}
Therefore, if $\mathbb{E} X^2(\log^+ |X|)^{1/2}<\infty$, then the function under the expectation in $\tilde{Q}_n/\sqrt{\log n}$ is dominated by $|X-\mu|^2\big(\log^+(|X-\mu|/\sigma)\big)^{1/2}$ (because $\log^+(n\wedge y)\le\sqrt{\log n}\,\sqrt{\log^+ y}$ for $y>0$), which is integrable under our moment assumption, and it tends to $0$ pointwise; hence we can use the Dominated Convergence Theorem and
infer that $\tilde{b}_n-b_n=o(\sqrt{\log n})$.
\end{rmk}
\begin{rmk}
On the other hand, for some $\varepsilon>0$ define a random variable $X$ by setting
$\mathbb{P}(X=\pm k_n)=C/(2 k_n^2 n^2)$ and $\mathbb{P}(X=0)=1-\sum_{n}\mathbb{P}(|X|=k_n)$,
where $k_n=e^{n^{2+\varepsilon}}$
and $C=6/\pi^2$. Then we simply have $\mu=0$, $\sigma^2=1$ and $\mathbb{E} X^2(\log^+ |X|)^{1/2}=\infty$.
Moreover, one can check that $\limsup_n Q_n/\sqrt{\log n}=\limsup_n \tilde{Q}_n/\sqrt{\log n}=\infty$,
which means that we cannot use $\tilde{b}_n$ in general.
\end{rmk}
Now let us take any positive (i.e. $I\subset(0,\infty)$), nondegenerate random variable $X$ with $\mathbb{E} X^2(\log^+ |X|)^{1/2}<\infty$
and $f(x)=\mu\log\left(x/\mu\right)$. Theorem \ref{theorem1} then yields \eqref{result:rw}, that is,
the result from \citet{Remiweso2}, but under weaker assumptions. Our argument shows that their result
holds true even under the assumption of square integrability alone, although the normalizing sequences should then be different.
Namely, instead of the term $n^{\frac{\gamma^2}{2}}$ in \eqref{result:rw} we should have
$\exp(\frac{1}{2\mu^2}\sum_{k=1}^n\frac{1}{k}\mathbb{E}|X-\mu|^21_{\{|X-\mu|\le\sigma k\}})$.
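For the reader's convenience, here is the short computation behind this reduction under the stronger moment assumption. With $f(x)=\mu\log(x/\mu)$ we have $f(\mu)=0$, $f'(\mu)=1$ and $f''(\mu)=-1/\mu$, so we may take $\tilde{b}_n=-\frac{\sigma^2}{2\mu}\log n$, and Theorem \ref{theorem1} gives
\[
\frac{\mu\log\prod_{k=1}^n\frac{S_{k,k}}{k\mu}+\frac{\sigma^2}{2\mu}\log n}{\sqrt{\log n}}\dto\sigma\mathcal{N}.
\]
Dividing by $\sigma$ and writing $\gamma=\sigma/\mu$ we obtain
\[
\frac{1}{\gamma\sqrt{\log n}}\log\left(n^{\frac{\gamma^2}{2}}\prod_{k=1}^n\frac{S_{k,k}}{k\mu}\right)\dto\mathcal{N},
\]
which is precisely the logarithm of \eqref{result:rw}.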
\par The proof of Theorem \ref{theorem1} relies on Taylor's expansion of the function $f$ in a neighbourhood of $\mu$. The linear term
in this expansion obeys a version of the classical Central Limit Theorem; the remaining terms are negligible,
mainly due to the Strong Law of Large Numbers (SLLN). These assertions are made precise by a series of lemmas.
\begin{lem}
\label{lemma:CLT}
Under the assumptions of Theorem \ref{theorem1} with $\sigma>0$
\[
\frac{1}{\sigma\sqrt{\log n}}\sum_{k=1}^n \left(\frac{S_{k,k}}{k}-\mu\right)\dto\mathcal{N}\as n.
\]
\end{lem}
\begin{proof}
We may assume $\mathbb{E} X=0$ and $\mathbb{E} X^2=1$ by a simple normalization argument.
Since
\[
\Var\left(\sum_{k=1}^n \frac{S_{k,k}}{k}\right)=\sum_{k=1}^n\frac{\Var(S_{k,k})}{k^2}=\sum_{k=1}^n\frac{1}{k}\sim \log n,
\]
then to complete the proof it is sufficient to show the Lindeberg condition for the array $(\frac{S_{k,k}}{k\sqrt{\log n}})_{k\le n}$,
that is
\[
\mathop{\text{\Large$\forall$}}_{r>0}\quad\frac{1}{\log n}\sum_{k=1}^n\mathbb{E} \left(\frac{S_{k,k}}{k}\right)^2 1_{\{|S_{k,k}/k|>r \sqrt{\log n}\}}=o(1).
\]
Since $\{(S_{k,k}/\sqrt{k})^2\}$ is uniformly integrable,
\[
\sup_{k\in\mathbb{N}} \mathbb{E} \left(\frac{S_{k,k}}{\sqrt{k}} \right)^2 1_{\{|S_{k,k}/\sqrt{k}|>r\sqrt{\log n}\}}\to0\as n.
\]
Moreover, $(S_{k,k}/k)^2=k^{-1}(S_{k,k}/\sqrt{k})^2$ and, for $k\ge 1$, $\{|S_{k,k}/k|>r \sqrt{\log n}\}\subset\{|S_{k,k}/\sqrt{k}|>r\sqrt{\log n}\}$, so the expression in the Lindeberg condition is bounded by
\[
\left(\frac{1}{\log n}\sum_{k=1}^n\frac{1}{k}\right)\sup_{k\in\mathbb{N}} \mathbb{E} \left(\frac{S_{k,k}}{\sqrt{k}} \right)^2 1_{\{|S_{k,k}/\sqrt{k}|>r\sqrt{\log n}\}},
\]
which tends to $0$, proving the Lindeberg condition.
\end{proof}
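As a quick numerical illustration of Lemma \ref{lemma:CLT} (not needed for the proof), the following sketch, assuming a Python environment with numpy and exponentially distributed summands ($\mu=\sigma=1$), estimates the mean and standard deviation of the normalized sum, which should be close to $0$ and $1$:
\begin{verbatim}
import numpy as np

# Monte Carlo illustration of Lemma 1 for Exp(1) summands (mu = sigma = 1):
# the statistic (1/sqrt(log n)) * sum_k (S_{k,k}/k - mu) is roughly N(0, 1).
rng = np.random.default_rng(1)
n, reps = 500, 200
stats = []
for _ in range(reps):
    total = sum(rng.exponential(1.0, size=k).mean() - 1.0 for k in range(1, n + 1))
    stats.append(total / np.sqrt(np.log(n)))
print(round(float(np.mean(stats)), 3), round(float(np.std(stats)), 3))
\end{verbatim}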
To establish the SLLN we will refer to \citet{Hsu} Law of Large Numbers (cf. \citet{Li}
for partial bibliographies and brief discussions).
\begin{lem}[Hsu-Robbins LLN]
\label{lemma:hsu-r}
For a sequence $(X_n)$ of iid rv's with $\mathbb{E} X_1=0$ and $\mathbb{E} X_1^2<\infty$ the series
\begin{equation}
\label{hsu-r:statement}
\sum_{n=1}^\infty\mathbb{P}(|S_n/n|>t)
\end{equation}
converges for every $t>0$.
\end{lem}
The condition \eqref{hsu-r:statement} implies $S_n/n\to 0$ a.s. by the Borel-Cantelli lemma.
Moreover, if $X_1$ in Lemma \ref{lemma:hsu-r} has the same distribution as $X$ in Theorem \ref{theorem1},
then $\mathbb{P}(|S_n/n|>t)=\mathbb{P}(|S_{n,n}/n|>t)$, i.e. $S_{n,n}/n\to 0$ a.s. as well.
\par Before we proceed, we need some technical results derived from the elementary fact about the moments of sums of iid rv's
(e.g., \citet[p. 23]{Hall}).
\begin{lem}[Rosenthal's inequality]
\label{Rosenthal}
If $(X_n)$ is a sequence of iid rv's with the zero mean,
then for any $p\ge 2$
\[
\mathbb{E}|S_n|^p\leq C_p\left(n\mathbb{E} |X_1|^p+n^{p/2}(\mathbb{E} X_1^2)^{p/2}\right),
\]
where $C_p$ is a constant depending only on $p$.
\end{lem}
We will use this version of Rosenthal's inequality to prove the following lemma, which
in turn will simplify a number of steps in the next one. The latter will play a crucial role in the proof
of the main theorem.
\begin{lem}
\label{lemma:cor}
Under the assumptions of Theorem \ref{theorem1}, for every $p>2$
\[
\mathbb{E}\sum_{k=1}^\infty\left(\frac{|\tilde{T}_k|}{k}\right)^p<\infty,
\]
where $\tilde{T}_k=\sum_{i=1}^k\left( X_{i,k}1_{\{|X_{i,k}|\le k\}}-\mathbb{E} X_{i,k}1_{\{|X_{i,k}|\le k\}}\right)$.
\end{lem}
\begin{proof}
Let $Y_k\de X_{i,k}1_{\{|X_{i,k}|\le k\}}-\mathbb{E} X_{i,k}1_{\{|X_{i,k}|\le k\}}$, then by Lemma \ref{Rosenthal}
\begin{align*}
\mathbb{E}\sum_{k=1}^\infty\left(\frac{|\tilde{T}_k|}{k}\right)^p&\le
C_p\sum_{k=1}^\infty\frac{1}{k^p}\left(k\mathbb{E} |Y_k|^p+k^{p/2}(\mathbb{E} Y_k^2)^{p/2}\right)\\
&\le C_p 2^p\left(\sum_{k=1}^\infty\frac{1}{k^{p-1}}\mathbb{E} |X|^p1_{\{|X|\le k\}}+\sum_{k=1}^\infty\frac{1}{k^{p/2}}(\mathbb{E} X^2)^{p/2}\right)\\
&= C_p2^p\left(\mathbb{E} |X|^p\sum_{k\ge |X|}^\infty k^{1-p}+(\mathbb{E} X^2)^{p/2}\sum_{k=1}^\infty\frac{1}{k^{p/2}}\right)\\
&\le C_p2^p\left( C \mathbb{E} |X|^2+(\mathbb{E} X^2)^{p/2}\sum_{k=1}^\infty\frac{1}{k^{p/2}}\right)<\infty,
\end{align*}
for some positive constant $C$.
\end{proof}
\begin{lem}
\label{lemma:remainder}
Under the assumptions of Theorem \ref{theorem1} we have
\begin{align}
\label{remainder:1}
\sum_{k=1}^n\left[\left(\frac{S_{k,k}-k\mu}{k}\right)^2-\frac{\mathbb{E}|X-\mu|^21_{\{|X-\mu|\le\sigma k\}}}{k}\right]&
=O_\mathbb{P}(1),\\
\label{remainder:2}
\sum_{k=1}^n\left|\frac{S_{k,k}-k\mu}{k}\right|^3 & = O_\mathbb{P}(1).
\end{align}
\end{lem}
\begin{proof} To simplify the notation we will write $S_k$ for $S_{k,k}$.
First note that to show \eqref{remainder:1} it is sufficient to prove that
\[
\sum_{k=1}^n\left(\frac{S_k^2}{k^2}-\frac{\mathbb{E}|X|^21_{\{|X|\le k\}}}{k}\right)=O_\mathbb{P}(1),
\]
for a normalized random variable $X$.
Take any $\varepsilon>0$ and let $T_k:=\sum_{i=1}^k X_{i,k}1_{\{|X_{i,k}|\le k\}}$.
Then $\sum_{k=1}^\infty\mathbb{P}(S_k\ne T_k)\le \sum_{k=1}^\infty k\mathbb{P}(|X|>k)<\infty$,
because $\mathbb{E} X^2<\infty$. Hence we can take $R$ big
enough that $\sum_{k=R}^\infty\mathbb{P}(S_k\ne T_k)<\varepsilon/3$ and $M$ big enough that
\[
\mathbb{P}\left(\left|\sum_{k=1}^{R-1}\left(\frac{S_k^2}{k^2}-\frac{\mathbb{E}|X|^21_{\{|X|\le k\}}}{k}\right)\right|>M/2\right)<\varepsilon/3.
\]
So all we need to show is
\[
\mathbb{P}\left(\left|\sum_{k=R}^n\left(\frac{T_k^2}{k^2}
-\frac{\mathbb{E}|X|^21_{\{|X|\le k\}}}{k}\right)\right|>M/2\right)<\varepsilon/3,
\]
which is implied by
\begin{equation}
\label{co}
\sum_{k=1}^n\frac{T_k^2-b_k}{k^2}=O_\mathbb{P}(1)
\end{equation}
with $b_k:=k\mathbb{E}|X|^21_{\{|X|\le k\}}$. Observe that
\[
T_k=\sum_{i=1}^k\left( X_{i,k}1_{\{|X_{i,k}|\le k\}}-\mathbb{E} X_{i,k}1_{\{|X_{i,k}|\le k\}}\right)+c_k=:\tilde{T}_k+c_k,
\]
where $c_k=k\mathbb{E} X1_{\{|X|\le k\}}$ and
\[
\sum_{k=1}^n\frac{T_k^2-b_k}{k^2} =
\sum_{k=1}^n\frac{\tilde{T}_k^2-b_k}{k^2}+
\sum_{k=1}^n\frac{c_k^2}{k^2}+
2\sum_{k=1}^n\frac{c_k\tilde{T}_k}{k^2}
=:I_1+I_2+I_3.
\]
Recall that $\mathbb{E} X=0$ so that
\[
|c_k|=|k\mathbb{E} X 1_{\{|X|\le k\}}|=|k\mathbb{E} X 1_{\{|X|> k\}}|\le\mathbb{E} |X|^2=1,
\]
thus $I_2=O(1)$. We also have $I_3=O_\mathbb{P}(1)$ because $I_3$ is bounded in $L^2$
\[
\mathbb{E}\left(\sum_{k= 1}^n\frac{c_k\tilde{T}_k}{k^2}\right)^2 =
\sum_{k=1}^n\mathbb{E}\left(\frac{c_k\tilde{T}_k}{k^2}\right)^2
\le\sum_{k=1}^n\frac{1}{k^{4}}k\Var(X1_{\{|X|\le k\}})
\le \mathbb{E} X^2\sum_{k=1}^n\frac{1}{k^{3}}=O(1).
\]
$I_1$ can be rewritten as
\[
I_1=\sum_{k=1}^n\frac{\tilde{T}_k^2-\mathbb{E} \tilde{T}_k^2}{k^2}
-\sum_{k=1}^n\frac{c_k^2}{k^3}
=:I_{11}-I_{12}.
\]
But $I_{12}=O(1)$ since $|c_k|\le 1$. So in order to show \eqref{co} it is enough to
show that
\[
K_n:=\sum_{k=1}^n\frac{\tilde{T}_k^2-\mathbb{E} \tilde{T}_k^2}{k^2}=O_\mathbb{P}(1),
\]
where $\tilde{T}_k$ is a sum of independent, mean zero rv's with the same distribution as
$X1_{\{|X|\le k\}}-\mathbb{E} X1_{\{|X|\le k\}}$. This however follows from Lemma \ref{lemma:cor} with $p=4$.
Indeed
\[
\mathbb{E} K_n^2 =\Var(K_n)=\sum_{k=1}^n\frac{1}{k^4}\Var(\tilde{T}_k^2)\le
\mathbb{E}\sum_{k=1}^\infty\left(\frac{\tilde{T}_k}{k}\right)^4<\infty,
\]
so the proof of \eqref{remainder:1} is complete.
\par To prove \eqref{remainder:2} it suffices to show that
$
\sum_{k=1}^n\left|\frac{S_k}{k}\right|^3=O_\mathbb{P}(1)
$
for normalized $X$, which by the same arguments as above is implied by
\[
\sum_{k=1}^n\left|\frac{T_k}{k}\right|^3=O_\mathbb{P}(1).
\]
Using the same notation we have $|T_k|^3=|\tilde{T}_k+c_k|^3\le4|\tilde{T}_k|^3+4$. Thus
\[
\sum_{k=1}^n\left|\frac{T_k}{k}\right|^3 \le
4\left(\sum_{k=1}^n\left|\frac{\tilde{T}_k}{k}\right|^3+
\sum_{k=1}^n\frac{1}{k^3}\right)
=:I_4+I_5.
\]
We obviously have $I_5=O(1)$ and by Lemma \ref{lemma:cor} with $p=3$ we get
boundedness of $I_4$ in $L^1$, which completes the proof.
\end{proof}
Now we are in a position to prove the main theorem.
\begin{proof}[Proof of Theorem \ref{theorem1}]
Take $a_n$ and $b_n$ as in the claim and denote
$c_k=\mathbb{E}|X-\mu|^21_{\{|X-\mu|\le\sigma k\}}$ so $b_n=\sum_{k=1}^n(f(\mu)+f''(\mu)\frac{c_k}{2k})$.
By Taylor's expansion,
\[ f\left(\frac{S_{k,k}}{k}\right)=f(\mu)+f'(\mu)\left(\frac{S_{k,k}}{k}-\mu\right)+\frac{f''(\mu)}{2}\left(\frac{S_{k,k}}{k}-\mu\right)^2
+O\left(\left|\frac{S_{k,k}}{k}-\mu\right|^3\right)\,\text{a.s.},
\]
as a consequence of the SLLN and the assumption of boundedness of $f^{(3)}$ around $\mu$.
Using Lemma \ref{lemma:remainder} we have
\begin{align*}
\frac{\sum_{k=1}^nf(S_{k,k}/k)-b_n}{a_n}&=
\frac{f'(\mu)}{a_n}\sum_{k=1}^n\left(\frac{S_{k,k}}{k}-\mu\right)+\frac{f''(\mu)}{2a_n}
\sum_{k=1}^n\left[\left(\frac{S_{k,k}}{k}-\mu\right)^2-\frac{c_k}{k}\right]\\
&\qquad+O\left(\frac{1}{a_n}\sum_{k=1}^n\left|\frac{S_{k,k}}{k}-\mu\right|^3\right)\,\text{a.s.}\\
&=\frac{f'(\mu)}{a_n}\sum_{k=1}^n\left(\frac{S_{k,k}}{k}-\mu\right)+o_\mathbb{P}(1).
\end{align*}
By Lemma \ref{lemma:CLT}
\[
\frac{f'(\mu)}{a_n}\sum_{k=1}^n\left(\frac{S_{k,k}}{k}-\mu\right)\dto\sigma f'(\mu)\mathcal{N},
\]
completing the proof.
\end{proof}
\end{document}
|
\begin{document}
\date{}
\maketitle
\begin{abstract}
The kissing number problem asks for the maximal number $k(n)$ of equal size nonoverlapping
spheres in $n$-dimensional space that can touch another sphere of the same size.
This problem in dimension three was the subject of a famous discussion between Isaac Newton
and David Gregory in 1694. In three dimensions the problem was finally solved only in 1953 by Sch\"utte and van der Waerden.
In this paper we present a solution of a long-standing problem about the kissing number in four dimensions. Namely, the equality $k(4)=24$ is proved.
The proof is based on a modification of Delsarte's method.
\end{abstract}
\section {Introduction}
The {\it kissing number} $k(n)$ is the highest number of equal nonoverlapping spheres in ${\bf R}^n$ that can touch another sphere of the same size. In three dimensions the kissing number problem is asking how many white billiard balls can
{\em kiss} (touch) a black ball.
The most symmetrical configuration, 12 billiard balls around another, is obtained when the 12 balls are placed at positions corresponding to the vertices of a regular icosahedron concentric with the central ball. However, these 12 outer balls do not kiss each other and may all be moved freely. So perhaps, if all of them were moved to one side, a 13th ball would fit in?
This problem was the subject of a famous discussion between Isaac Newton
and David Gregory in 1694.
It is commonly said that Newton believed the answer was 12 balls, while Gregory
thought that 13 might be possible. However, Casselman \cite{Cas} found some puzzling features in this story.
The Newton-Gregory problem is often called the {\it thirteen spheres problem}. Hoppe \cite{Hop} thought he had solved the problem in 1874. However, his proof contained a mistake; an analysis of this mistake was published by Hales \cite{Hales} in 1994.
Finally, this problem was solved by Sch\"utte and van der Waerden in 1953 \cite{SvdW2}. A subsequent two-page sketch of a proof was given by Leech \cite{Lee} in 1956. The thirteen spheres problem continues to be of interest, and several new proofs have been published in the last few years \cite{Hs,Ma,Bor,Ans,Mus2}.
Note that $k(4)\ge 24$.
Indeed, the unit sphere in ${\bf R}^4$ centered at $(0,0,0,0)$ has 24 unit spheres around it, centered at the points $(\pm\sqrt{2},\pm\sqrt{2},0,0)$, with any choice of signs and any ordering of the coordinates. The convex hull of these 24 points yields a famous 4-dimensional regular polytope - the ``24-cell". Its facets are 24 regular octahedra.
Coxeter proposed upper bounds on $k(n)$ in 1963 \cite{Cox}; for $n=4, 5, 6, 7,$ and 8 these bounds were 26, 48, 85, 146, and 244, respectively. Coxeter's bounds are based on the conjecture that equal size spherical caps on a sphere can be packed no denser than the packing in which the Delaunay triangulation with vertices at the centers of the caps consists of regular simplices. This conjecture was proved by B\"or\"oczky in 1978 \cite{Bor1}.
The main progress on the kissing number problem in high dimensions was made at the end of the 1970s. In 1978, Kabatiansky and Levenshtein found an asymptotic upper bound $2^{0.401n(1+o(1))}$ for $k(n)$ \cite{Kab}. (The currently known lower bound is $2^{0.2075n(1+o(1))}$ \cite{Wyn}.) In 1979,
Levenshtein \cite{Lev2}, and independently Odlyzko and Sloane \cite{OdS} (=
\cite[Chap.13]{CS}), using Delsarte's method, proved that
$k(8)=240$ and $k(24)=196560$. This proof is surprisingly short, clean, and technically easier than all proofs in three dimensions.
However, $n=8, 24$ are the only dimensions in which this method gives a precise result. For other dimensions (for instance, $n=3, 4$) the upper bounds exceed the lower bounds.
In \cite{OdS} the Delsarte method was applied in dimensions up to 24 (see \cite[Table 1.5]{CS}). For comparison with Coxeter's bounds on $k(n)$ for $n=4, 5, 6, 7,$ and 8, this method gives 25, 46, 82, 140, and 240, respectively. (For $n=3$, Coxeter's and Delsarte's methods gave only $k(3)\le 13$ \cite{Cox,OdS}.)
Improvements in the upper bounds on kissing numbers (for $n<24$) were rather modest during the following years
(see \cite[Preface to Third Edition]{CS} for a brief review and references). Arestov and Babenko \cite{AB1} proved that
the bound $\; k(4)\le25\; $ cannot be improved using Delsarte's method.
Hsiang \cite{Hs1} claims a proof of $k(4)=24$. His work has not yet received a positive peer review.
If $M$ unit spheres kiss the unit sphere in ${\bf R}^n$, then the set of kissing points
is an arrangement on the central sphere such that the (Euclidean) distance between any two points is at least 1. So the kissing number problem can be stated in another way: how many points can be placed on the surface of ${\bf S}^{n-1}$ so that the angular separation between any two points is at least $\pi/3$?
This leads to an important generalization: a finite subset $X$ of ${\bf S}^{n-1}$ is called a {\it spherical $\psi$-code} if for
every pair $(x,y)$ of $X$ the inner product $x\cdot y\le \cos{\psi},$ i.e. the minimal angular separation is at least $\psi.$ Spherical codes have many applications. The main application outside mathematics is in the design of signals for data transmission and storage. There are interesting applications to the numerical evaluation of $n$-dimensional integrals \cite[Chap.3]{CS}.
Delsarte's method (also known in coding theory as Delsarte's linear programming method or Delsarte's scheme) is widely used for finding bounds for codes.
This method is described in
\cite{CS,Kab} ({see also \cite{PZ} for a beautiful exposition}).
In this paper we present an extension of the Delsarte method
that allows us to prove the bound $k(4)<25$, i.e. $k(4)=24$. This extension also yields a proof of $\; k(3)<13$ \cite{Mus2}.
The first version of these proofs used numerical solutions of some nonconvex constrained optimization problems \cite{Mus} (see also \cite{PZ}). Now, using a geometric approach, we reduce them to relatively simple computations.
The paper is organized as follows: Section 2 shows that the main theorem, $k(4)=24$, follows easily from two lemmas, Lemma A and Lemma B. Section 3 reviews the Delsarte method and gives a proof of Lemma A. Section 4 extends Delsarte's bounds and reduces the upper bound problem for $\psi$-codes to an optimization problem. Section 5 reduces the dimension of the corresponding optimization problem. Section 6 develops a numerical method for the solution of this optimization problem and gives a proof of Lemma B.
\noindent{\bf Acknowledgment.} I wish to thank Eiichi Bannai,
Dmitry Leshchiner, Sergei Ovchinnikov, Makoto Tagami, G\"unter Ziegler, and especially anonymous referees of this paper for helpful discussions and useful comments.
I am very grateful to Ivan Dynnikov who pointed out a gap in arguments on earlier draft of \cite{Mus}.
\section {The Main Theorem}
Let us introduce the following polynomial of degree nine:\footnote{The polynomial $f_4$ was found by the linear programming method (see details in the Appendix).
This method for $n=4,$ $z=1/2,$ $d=9,$ $N=2000,$ $t_0=0.6058$ gives $E\approx 24.7895.$ For $f_4$ coefficients were changed to ``better looking" ones with $E\approx 24.8644.$}
$$f_4(t): = \frac{1344}{25}\,t^9 - \frac{2688}{25}\,t^7 + \frac{1764}{25}\,t^5 + \frac{2048}{125}\,t^4 - \frac{1229}{125}\,t^3 - \frac{516}{125}\,t^2 -
\frac{217}{500}\,t - \frac{2}{125}$$
\noindent{\bf Lemma A.} {\em Let $X=\{x_1,\ldots,x_M\}$ be points in the unit sphere ${\bf S}^3$. Then
$$
S(X)=\sum\limits_{i=1}^M\sum\limits_{j=1}^M {f_4(x_i\cdot x_j)}\ge M^2.
$$
}
We give a proof of Lemma A in the next section.
\noindent{\bf Lemma B.} {\em Suppose $X=\{x_1,\ldots,x_M\}$ is a subset of ${\bf S}^3$ such that the angular separation between any two distinct points $x_i, x_j$ is at least $\pi/3$. Then
$$
S(X)=\sum\limits_{i=1}^M\sum\limits_{j=1}^M {f_4(x_i\cdot x_j)} < 25M.
$$
}
A proof of Lemma B is given in the end of Section 6.
\noindent{\bf Main Theorem.}
$\quad k(4)=24.$
\begin{proof} Let $X$ be a spherical $\pi/3$-code in ${\bf S}^3$ with $M=k(4)$ points.
Then $X$ satisfies the assumptions in Lemmas A, B. Therefore, $M^2\le S(X) <25M.$
From this $M<25$ follows, i.e. $M\le 24$. From the other side we have $k(4)\ge 24$,
showing that $M=k(4)=24.$
\end{proof}
\section {Delsarte's method}
From here on we will speak of $x\in {\bf S}^{n-1}$ alternatively of points in ${\bf S}^{n-1}$ or of vectors in ${\bf R}^n.$
Let $X = \{x_1, x_2,\ldots, x_M\}$ be any finite subset of the unit sphere ${\bf S}^{n-1}
\subset{\bf R}^n,\\ {\bf S}^{n-1}=\{x: x\in {\bf R}^n,$ $x\cdot x=||x||^2=1\}.$
By $\phi_{i,j}=\mathop{\rm dist}\nolimits(x_i,x_j)$ we denote the spherical (angular) distance between $x_i,\, x_j.$ Clearly, $ \cos{\phi_{i,j}}=x_i\cdot x_j.$
\noindent{\bf 3-A. Schoenberg's theorem.}
Let $u_1, u_2,\ldots, u_M$ be any real numbers. Then
$$ ||\sum u_ix_i||^2 = \sum\limits_{i,j} \cos{\phi_{i,j}}u_iu_j \ge 0,$$ or equivalently
the Gram matrix $\Big(\cos{\phi_{i,j}}\Big)$ is positive semidefinite.
Schoenberg \cite{Scho} extended this property to Gegenbauer polynomials $G_k^{(n)}$.
He proved that {\em the matrix $\Big(G_k^{(n)}(\cos{\phi_{i,j}})\Big)$ is positive semidefinite for any finite $X\subset {\bf S}^{n-1}$.}
Schoenberg proved also that the converse holds: {\em if $f(t)$ is a real polynomial and for any finite $X\subset{\bf S}^{n-1}$ the matrix $\big(f(\cos{\phi_{i,j}})\big)$ is positive semidefinite, then $f(t)$ is a linear combination of $G_k^{(n)}(t)$ with nonnegative coefficients.}
\noindent{\bf 3-B. The Gegenbauer polynomials.}
Let us recall the definition of the Gegenbauer polynomials. The polynomials $C_k^{(n)}(t)$ are defined by the expansion
$$(1-2rt+r^2)^{(2-n)/2} = \sum\limits_{k=0}\limits^{\infty}r^kC_k^{(n)}(t).$$
Then the polynomials $G_k^{(n)}(t): = C_k^{(n)}(t)/C_k^{(n)}(1)$ are called {\it Gegenbauer} or {\it ultraspherical} polynomials. (So the normalization of $G_k^{(n)}$ is determined by the condition $G_k^{(n)}(1)=1.$)
Also the Gegenbauer polynomials $G_k^{(n)}$ can be defined by the recurrence formula:
$$G_0^{(n)}=1,\;\; G_1^{(n)}=t,\; \ldots,\; G_k^{(n)}=\frac {(2k+n-4)\,t\,G_{k-1}^{(n)}-(k-1)\,G_{k-2}^{(n)}} {k+n-3}$$
They are orthogonal on the interval $[-1,1]$ with respect to the weight function $\rho(t)=(1-t^2)^{(n-3)/2}$ (see details in \cite{Car,CS,Erd,Scho}). In the case $n=3,\; G_k^{(n)}$ are Legendre polynomials $P_k,$ and $G_k^{(4)}$ are Chebyshev polynomials of the second kind (but with a different normalization than usual, $U_k(1)=1$),
$$ G_k^{(4)}(t)=U_k(t) = \frac {\sin{((k+1)\phi)}}{(k+1)\sin{\phi}}, \quad t=\cos{\phi}, \quad k=0,1,2,\ldots$$
For instance, $\;\; U_0=1,\;\;\; U_1=t,\;\;\; U_2=(4t^2-1)/3,\;\;\; U_3=2t^3-t,\\ U_4=(16t^4-12t^2+1)/5,\;\ldots,\;
U_9=(256t^9-512t^7+336t^5-80t^3+5t)/5.$
\noindent{\bf 3-C. Delsarte's inequality.} If a symmetric matrix $A$ is positive semidefinite, then the sum of all its entries is nonnegative. Schoenberg's theorem implies that
the matrix $\big(G_k^{(n)}(t_{i,j})\big)$ is positive semidefinite, where $t_{i,j}:=\cos{\phi_{i,j}}$. Then
$$\sum\limits_{i=1}^M\sum\limits_{j=1}^M {{G_k^{(n)}(t_{i,j})}} \ge 0 \eqno (3.1)$$
\begin{defn} We denote by ${\sf G}_n^+$ the set of continuous functions $f:[-1,1]\to {\bf R}$ representable as series
$$
f(t)=\sum\limits_{k=0}^\infty {c_kG_k^{(n)}(t)}
$$
whose coefficients satisfy the following conditions:
$$c_0>0,\quad c_k\ge 0 \; \mbox { for } k=1,2,\ldots, \quad
f(1)=\sum\limits_{k=0}^\infty {c_k}<\infty.$$
\end{defn}
Suppose $f\in{\sf G}_n^+$ and
let $$S(X)=S_f(X):=\sum\limits_{i=1}^M\sum\limits_{j=1}^M{f(t_{i,j})}.$$ Using $(3.1),$ we get
$$S(X)=
\sum\limits_{k=0}^\infty c_k\left(\sum\limits_{i=1}^M\sum\limits_{j=1}^M {G_k^{(n)}(t_{i,j})}\right)\ge
\sum\limits_{i=1}^M\sum\limits_{j=1}^M {c_0G_0^{(n)}(t_{i,j})} = c_0M^2.$$
Then
$$S(X)\ge c_0M^2. \eqno (3.2)$$
\noindent{\bf 3-D. Proof of Lemma A.}
\begin{proof}
The expansion of $f_4$ in terms of $U_k=G_k^{(4)}$ is
$$
f_4 = U_0 + 2\,U_1 + \frac{153}{25}\,U_2 + \frac{871}{250}\,U_3 +
\frac{128}{25}\,U_4
+\frac{21}{20}\,U_9
$$
We see that $f_4\in{\sf G}_4^+$ with $c_0=1$. So Lemma A follows from $(3.2)$.
\end{proof}
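As a sanity check (added for illustration; the identity can of course be verified by hand), the following short script, assuming a Python environment with sympy, confirms the expansion above symbolically and that all coefficients are nonnegative with $c_0=1$:
\begin{verbatim}
import sympy as sp

t = sp.Symbol('t')

def U(k):
    # G_k^{(4)} via G_k = ((2k+n-4) t G_{k-1} - (k-1) G_{k-2})/(k+n-3), n = 4.
    g = [sp.Integer(1), t]
    for j in range(2, k + 1):
        g.append(sp.expand((2*j*t*g[j-1] - (j-1)*g[j-2]) / (j+1)))
    return g[k]

f4 = (sp.Rational(1344, 25)*t**9 - sp.Rational(2688, 25)*t**7
      + sp.Rational(1764, 25)*t**5 + sp.Rational(2048, 125)*t**4
      - sp.Rational(1229, 125)*t**3 - sp.Rational(516, 125)*t**2
      - sp.Rational(217, 500)*t - sp.Rational(2, 125))

c = {0: 1, 1: 2, 2: sp.Rational(153, 25), 3: sp.Rational(871, 250),
     4: sp.Rational(128, 25), 9: sp.Rational(21, 20)}
assert sp.expand(f4 - sum(ck * U(k) for k, ck in c.items())) == 0
print("expansion verified, c_0 =", c[0])
\end{verbatim}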
\noindent{\bf 3-E. Delsarte's bound.}
Let $X=\{x_1,\ldots,x_M\}\subset {\bf S}^{n-1}$ be a spherical $\psi$-code, i.e. for all $i\neq j,$ $t_{i,j}=\cos{\phi_{i,j}}=x_i\cdot x_j\le z:=\cos{\psi},$ i.e. $t_{i,j}\in [-1,z]$ (but $t_{i,i}=1$).
Suppose $f\in{\sf G}_n^+$ and
$f(t)\le 0 \; \, \mbox{ for all} \; \; t\in [-1,z],\; $
then $\; f(t_{i,j})\le 0\; $ for all $\; i\neq j.\;$ That implies
$$S_f(X)=Mf(1)+2f(t_{1,2})+\ldots+2f(t_{M-1,M}) \le Mf(1).$$ If we combine this with $(3.2),$ then we get $M \le f(1)/c_0.$
{\em Let $A(n,\psi)$ be the maximal size of a $\psi$-code in ${\bf S}^{n-1}$.} Then we have:
$$A(n,\psi) \le \frac {f(1)}{c_0}\eqno (3.3)$$
The inequality $(3.3)$ plays a crucial role in the Delsarte method (see details in \cite{AB1, AB2, Boyv, CS, Del1, Del2, Kab, Lev2, OdS}). If $z=1/2$ and $c_0=1$, then $(3.3)$ implies $$k(n)=A(n,\pi/3)\le f(1).$$ For $n=8$ and $24$, Levenshtein \cite{Lev2}, and independently Odlyzko and Sloane \cite{OdS}, found suitable polynomials $f(t)$: $f(t)\le 0 \; \, \mbox{ for all} \; \; t\in [-1,1/2],\;
f\in{\sf G}_n^+,\; c_0=1$ with
$$
f(1)=240\; \, \mbox{for} \; \, n=8;\quad \mbox {and} \quad f(1)=196560 \; \, \mbox{for} \; \, n=24.
$$
Then
$$k(8)\le 240, \quad k(24) \le 196560.$$ For $n=8, \, 24$ the minimal vectors in sphere packings $E_8$ and Leech lattice give these kissing numbers. Thus $k(8)=240,$ and $k(24)=196560.$
When $n=4,$ a polynomial $f$ of degree 9 with $f(1)\approx 25.5585$ was found in \cite{OdS}. This implies
$24\le k(4) \le 25.$
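To give the reader a feeling for how such polynomials are produced, here is a rough numerical sketch of the linear program from 3-E for $n=4$, $z=1/2$ and degree $9$ (an illustration only, assuming a Python environment with numpy and scipy; the constraint $f(t)\le 0$ is imposed on a finite grid, so the computed value is only an approximation of the optimum $f(1)\approx 25.5585$):
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def U_values(k_max, t):
    # Values of U_0, ..., U_{k_max} at the points t (recurrence for G_k^{(4)}).
    vals = [np.ones_like(t), t.copy()]
    for k in range(2, k_max + 1):
        vals.append((2*k*t*vals[k-1] - (k-1)*vals[k-2]) / (k+1))
    return np.array(vals)

d, z = 9, 0.5
t = np.linspace(-1.0, z, 2001)
U = U_values(d, t)
# minimize f(1) = 1 + sum_k c_k subject to sum_k c_k U_k(t) <= -1 on [-1, z]
# and c_k >= 0, which is the Delsarte LP with c_0 = 1.
res = linprog(c=np.ones(d), A_ub=U[1:].T, b_ub=-np.ones_like(t),
              bounds=[(0, None)] * d, method="highs")
print("approximate Delsarte bound for k(4):", 1 + res.fun)
\end{verbatim}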
\section {An extension of Delsarte's method.}
\noindent{\bf 4-A. An extension of Delsarte's bound.}
Let $f(t)$ be any real function on the interval $[-1,1]$. For a given $\psi$, let $z:=\cos{\psi}$.
Consider points $y_0,y_1,\ldots,y_m$ on the sphere ${\bf S}^{n-1}$
such that
$$y_i\cdot y_j\le z \; \mbox{ for all }\; i\neq j,\quad f(y_0\cdot y_i)> 0 \; \mbox{ for } \; 1\le i\le m. \eqno (4.1)$$
\begin{defn} For fixed $y_0\in{\bf S}^{n-1}, m\ge 0, z$, and $f(t)$ let us define the family
$Q_m(y_0)=Q_m(y_0,n,f)$ of finite sets of points from ${\bf S}^{n-1}$ by the formula
$$
Q_m(y_0):=\left\{
\begin{array}{lcl}
\{y_0\},\qquad \qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad m=0, \\
\{Y =\{y_1,\ldots,y_m\}\subset {\bf S}^{n-1}: \{y_0\}\cup Y \; \mbox{ satisfies } \; (4.1)\},
\quad m\ge 1.
\end{array}
\right.
$$
Denote $\mu=\mu(n,z,f):=\max\{m: Q_m(y_0)\ne\emptyset\}$. \\
For $0\le m\le\mu$ we define the function $H=H_f$ on the family $Q_m(y_0)$:
$$
H(y_0):=f(1)\quad \mbox{for}\quad m=0,
$$
$$
H(y_0;Y)=H(y_0;y_1,\ldots,y_m):=f(1)+f(y_0\cdot y_1)+\ldots+f(y_0\cdot y_m)
\, \mbox{ for }\, m\ge 1.
$$
Let
$$h_m=h_m(n,z,f):=\sup\limits_{Y\in Q_m(y_0)}\{H(y_0;Y)\},\quad
h_{max}:=\max{\{h_0,h_1,\ldots,h_\mu\}}.$$
\end{defn}
\begin{theorem} Suppose $f\in{\sf G}_n^+. \; $
Then
$$ A(n,\psi) \le \frac {h_{max}(n,\cos{\psi},f)}{c_0}=\frac{1}{c_0}\max\{h_0,h_1,\ldots,h_\mu\}.$$
\end{theorem}
\begin{proof}
Let $X=\{x_1,\ldots,x_M\}\subset{\bf S}^{n-1}\; $ be a spherical $\psi$-code. Since
$f\in{\sf G}_n^+,$ $(3.2)$ yields: $S(X)\ge c_0M^2.$
Denote $J(i):=\{j:f(x_i\cdot x_j)> 0, \; j\neq i\}, \; X(i):=\{x_j: j\in J(i)\}.$
Then
$$S_i(X):=\sum\limits_{j=1}^M {f(x_i\cdot x_j)}\le f(1)+\sum\limits_{j\in J(i)} {f(x_i\cdot x_j)}=H(x_i;X(i))\le h_{max},$$
so then $$S(X)=\sum\limits_{i=1}\limits^M S_i(X)\le Mh_{max}.\eqno (4.2)$$
We have $c_0M^2\le S(X) \le Mh_{max},$ i.e. $c_0M\le h_{max}\; $ as required.
\end{proof}
Note that $h_0=f(1).$ If $f(t)\le 0$ for all $t\in [-1,z]$, then
$\mu(n,z,f)=0,$ i.e. $h_{max}=h_0=f(1).$ Therefore, this theorem yields the Delsarte bound
$M\le f(1)/c_0.$
\noindent{\bf 4-B. The class of functions ${\it \Phi}(t_0,z)$.}
The problem of evaluating $h_{max}$ in the general case looks even more complicated than the upper bound problem for spherical $\psi$-codes: it is not clear how to find $\mu$, or what an optimal arrangement $Y$ looks like.
Here we consider this problem only for a very restrictive class of functions ${\it \Phi}(t_0,z)$. For the bound given by Theorem 1 we need $f\in{\sf G}_n^+.$ However, for the evaluation of $h_m$ we do not need this assumption, so we do not assume that $f\in{\sf G}_n^+.$
\begin{defn} Let real numbers $t_0, z$ satisfy $1> t_0>z\ge 0.$ We denote by
${\it \Phi}(t_0,z)$ the set of functions $f:[-1,1]\to {\bf R}$ such that
$$f(t)\le 0 \; \mbox{ for } \; t\in [-t_0,z]. $$
\end{defn}
Let $f\in{\it \Phi}(t_0,z)$, and let $Y\in Q_m(y_0,n,f)$. Denote
$$e_0:=-y_0,\quad \theta_0:=\arccos{t_0},\quad \theta_i:=\mathop{\rm dist}\nolimits(e_0,y_i) \; \mbox{ for } \; i=1,\ldots,m.$$
(In other words, $e_0$ is the antipodal point to $y_0$.)
It is easy to see that $f(y_0\cdot y_i)>0\; $ only if $\; \theta_i<\theta_0.$
Therefore, \\ {\em $Y$ is a spherical $\psi$-code in the open spherical cap $\mathop{\rm Cap}\nolimits(e_0,\theta_0)$ of center $e_0$ and radius $\theta_0$ with $\pi/2\ge\psi>\theta_0.$}\\
This assumption is quite restrictive; in particular, it yields the convexity property for $Y$.
We use this property in the next subsection.
\noindent{\bf 4-C. Convexity property.}
A subset of ${\bf S}^{n-1}$ is called {\it spherically convex} if it contains, with every two nonantipodal points, the small arc of the great circle containing them. The closure of a convex set is convex and is the intersection of closed hemispheres (see details in \cite{DGK}).
Let $Y=\{y_1,\ldots,y_m\}\subset \mathop{\rm Cap}\nolimits(e_0,\theta_0), \; \theta_0<\pi/2$. Then the convex hull of $Y$ is well defined, and is the intersection of all convex sets containing $Y$.
Denote the convex hull of $Y$ by $\Delta_m=\Delta_m(Y)$.
Recall a definition of a vertex of a convex set:
{\em A point $y\in W$ is called a vertex (extremal point) of a spherically convex closed set $W$ if the set $W\setminus \{y\}$ is spherically convex or, equivalently, there are no points $x,z$ in $W$ for which $y$ is an interior point of the minor arc $\widehat{xz}$ of a great circle connecting $x$ and $z$.}
\begin {theorem} Let $Y=\{y_1,\ldots,y_m\}\subset {\bf S}^{n-1}$ be a spherical $\psi$-code.
Suppose $Y\subset\mathop{\rm Cap}\nolimits(e_0,\theta_0)$, and $0<\theta_0<\psi\le \pi/2.$
Then any $y_k$ is a vertex of $\Delta_m$.
\end {theorem}
\begin{proof} The cases $m=1,2$ are evident. For the case $m=3$ the theorem can easily be proved by contradiction. Indeed, suppose that some point, say $y_2$, is not a vertex of $\Delta_3$. Then, firstly, the set $\Delta_3$ is the arc $\widehat{y_1y_3}$, and, secondly, the point $y_2$ lies on the arc $\widehat{y_1y_3}$. From this it follows that
$\mathop{\rm dist}\nolimits(y_1,y_3)\ge 2\psi$, since $Y$ is a $\psi$-code. On the other hand, by the triangle inequality, we have
$$2\psi\le\mathop{\rm dist}\nolimits(y_1,y_3)\le\mathop{\rm dist}\nolimits(e_0,y_1)+\mathop{\rm dist}\nolimits(e_0,y_3)<2\theta_0<2\psi.$$
This is a contradiction. It remains to prove the theorem for $m\ge4$.
In this paper we need only one fact from spherical trigonometry, namely the {\em law of cosines} (or the {\it cosine theorem}):
$$\cos{\phi} = \cos{\theta_1}\cos{\theta_2}+\sin{\theta_1}\sin{\theta_2}\cos\varphi,$$
where for a spherical triangle $ABC$ the angular lengths of its sides are \\ $\mathop{\rm dist}\nolimits(A,B)=\theta_1, \,
\mathop{\rm dist}\nolimits(A,C)=\theta_2, \, \mathop{\rm dist}\nolimits(B,C)=\phi$, and $\angle{BAC}=\varphi$.
By the assumptions: $$\theta_k=\mathop{\rm dist}\nolimits(y_k,e_0)<\theta_0<\psi \; \mbox { for }\; 1\le k\le m; \quad
\phi_{k,j}:=\mathop{\rm dist}\nolimits(y_k,y_j)\ge \psi,\; k\ne j.$$
Let us prove that no point $y_k$ lies in the relative interior of $\Delta_m$ or of one of its faces of dimension $d,\; 1\le d\le\dim{\Delta_m}$; in other words, every $y_k$ is a vertex of $\Delta_m$. Assume the contrary. Then consider
the great $(n-2)$-sphere $\Omega_k$ such that $y_k\in \Omega_k,$ and $\Omega_k$ is orthogonal to the arc $e_0y_k.$
(Note that $\theta_k>0$; otherwise $y_k=e_0$ and
$\phi_{k,j}=\theta_j\le\theta_0<\psi$, a contradiction.)
The great sphere $\Omega_k$ divides ${\bf S}^{n-1}$ into two closed hemispheres, $H_1$ and $H_2$. Suppose
$e_0$ lies in the interior of $H_1$; then at least one $y_j$ belongs to $H_2$.
Consider the triangle $e_0y_ky_j$ and denote by $\gamma_{k,j}$ the angle $\angle {e_0y_ky_j}$
in this triangle. The law of cosines yields
$$\cos{\theta_j}=\cos{\theta_k}\cos{\phi_{kj}}+\sin{\theta_k}\sin{\phi_{k,j}}\cos{\gamma_{k,j}}$$
Since $y_j \in H_2,$ we have $\gamma_{k,j}\ge 90^\circ,$ and $\cos{\gamma_{k,j}}\le0$ (Fig. 1).
From the conditions of Theorem 2 there follow the inequalities
$$
\sin{\theta_k}>0,\quad \sin\phi_{k,j}>0,\quad \cos\theta_k>0,\quad \cos\theta_j>0.$$
Hence, using the cosine theorem we obtain
$$
\cos\theta_j=\cos\theta_k\cos\phi_{k,j}+\sin\theta_k\sin\phi_{k,j}\cos\gamma_{k,j},$$ $$
0<\cos\theta_j\le\cos\theta_k\cos\phi_{k,j}.$$
From these inequalities and $0<\cos\theta_k<1$ there follow that, firstly,
$$
0<\cos\phi_{k,j}\quad \Bigl(\mbox{i.e. } \; \psi\le\phi_{k,j}<\pi/2\Bigr),
$$
and, secondly, the inequalities
$$
\cos\theta_j<\cos\phi_{k,j}\le\cos\psi.
$$
Therefore, $\theta_j>\psi$. This contradiction completes the proof of Theorem 2.
\end{proof}
\begin{center}
\begin{picture}(320,140)(-160,-70)
\put(-110,0){\circle*{3}}
\put(-110,-20){\circle*{3}}
\put(-60,50){\circle*{3}}
\thicklines
\put(-150,0){\line(1,0){70}}
\thinlines
\put(-110,0){\line(1,1){50}}
\put(-110,-20){\line(0,1){20}}
\put(-106,-27){$e_0$}
\put(-119,7){$y_k$}
\put(-75,50){$y_j$}
\put(-145,-20){$H_1$}
\put(-145,30){$H_2$}
\put(-75,-3){$\Omega_k$}
\put(-118,-65){Fig. 1}
\put(80,-65){Fig. 2}
\qbezier(67,54)(90,65)(113,54)
\qbezier (67,54) (55,47) (48,37)
\qbezier (113,54) (125,47) (132,37)
\qbezier (48,37) (38,24) (35,11)
\qbezier (35,11) (31,-4) (30,-20)
\qbezier (132,37) (142,24) (145,11)
\qbezier (145,11) (149,-4) (150,-20)
\put(90,60){\circle*{4}}
\put(92,64){$e_0$}
\put(62,14){\circle*{4}}
\put(121,10){\circle*{4}}
\put(45,-24){\circle*{4}}
\put(135,-24){\circle*{4}}
\thicklines
\qbezier(30,-20)(90,-40)(150,-20)
\thinlines
\qbezier(30,-20)(90,0)(150,-20)
\qbezier(90,60)(67,35)(45,-25)
\qbezier(90,60)(113,35)(135,-25)
\qbezier(62,14)(90,15)(121,10)
\put(30,-37){$\Pi(y_i)$}
\put(130,-37){$\Pi(y_j)$}
\put(51,16){$y_i$}
\put(125,10){$y_j$}
\put(85,3){$\phi_{i,j}$}
\put(85,-39){$\gamma_{i,j}$}
\put(75,30){$\theta_i$}
\put(99,30){$\theta_j$}
\end{picture}
\end{center}
\noindent{\bf 4-D. Bounds on $\mu$.}
\begin {theorem}
Let $Y=\{y_1,\ldots,y_m\}\subset {\bf S}^{n-1}$ be a spherical $\psi$-code.
Suppose $Y\subset\mathop{\rm Cap}\nolimits(e_0,\theta_0)$, and $\; 0<\psi/2\le\theta_0<\psi\le \pi/2.\; $
Then
$$
m\le A\left(n-1,\arccos{\frac{\cos\psi-\cos^2\theta_0}{\sin^2\theta_0}}\right)
$$
\end {theorem}
\begin{proof}
It is easy to see that the assumption $\; 0<\psi/2\le\theta_0<\psi\le \pi/2\; $ guarantees, firstly, that the right-hand side of the inequality in Theorem 3 is well defined and, secondly, that there is a $Y$ with $m\ge 2$.
If $m\ge 2$, then $y_i\ne e_0.$ Otherwise, $\psi\le\mathop{\rm dist}\nolimits(y_i,y_j)=\mathop{\rm dist}\nolimits(e_0,y_j)=\theta_j<\theta_0$, a contradiction. Therefore,
the projection $\Pi$ from the pole $e_0$ which sends $x \in {\bf S}^{n-1}$ along its meridian to the equator of the sphere is defined for all $y_i$.
Denote $\gamma_{i,j}:=\mathop{\rm dist}\nolimits\left(\Pi(y_i),\Pi(y_j)\right)$ (see Fig. 2). Then from the law of cosines and the inequality $\cos{\phi_{i,j}}\le z=\cos\psi,$ we get
$$\cos{\gamma_{i,j}}=\frac{\cos{\phi_{i,j}}-\cos{\theta_i}\cos{\theta_j}}{\sin{\theta_i}\sin{\theta_j}}
\le
\frac{z-\cos{\theta_i}\cos{\theta_j}}{\sin{\theta_i}\sin{\theta_j}}
$$
Let
$$R(\alpha,\beta)=\frac{z-\cos{\alpha}\cos{\beta}}{\sin{\alpha}\sin{\beta}}, \; \; \mbox{ then } \; \;
\frac{\partial R(\alpha,\beta)}{\partial\alpha}=\frac{\cos{\beta}-z\cos{\alpha}}{\sin^2{\alpha}\sin{\beta}}.$$
We have $\theta_0 < \psi$. Therefore, if $\; 0<\alpha, \beta< \theta_0$, then $ \cos{\beta}> z$. That yields:
${\partial R(\alpha,\beta)}/{\partial\alpha}> 0,$ i.e. $R(\alpha,\beta)$ is a monotone increasing function in $\alpha$. We obtain
$R(\alpha,\beta)<R(\theta_0,\beta)=R(\beta,\theta_0)<R(\theta_0,\theta_0).$
Therefore,
$$\cos{\gamma_{i,j}}\le \frac{z-\cos{\theta_i}\cos{\theta_j}}{\sin{\theta_i}\sin{\theta_j}} <
\frac{z-\cos^2{\theta_0}}{\sin^2{\theta_0}}=\cos\delta.$$
Thus $\Pi(Y)$ is a $\delta$-code on the equator ${\bf S}^{n-2}$. That yields $m\le A(n-1,\delta)$.
\end{proof}
\begin{cor} Suppose $f\in {\it \Phi}(t_0,z)$. If $\; 2t_0^2> z+1,\; $ then
$\; \mu(n,z,f)=1$, otherwise
$$\mu(n,z,f)\le A\Bigl(n-1,\arccos{\frac{z-t_0^2}{1-t_0^2}}\Bigr),$$
\end{cor}
\begin{proof} Let $\cos\psi=z,\; \cos\theta_0=t_0$. Then $2t_0^2 > z+1$ if and only if
$\psi>2\theta_0.$ Clearly that in this case the size of any $\psi$-code in the cap $\mathop{\rm Cap}\nolimits(e_0,\theta_0)$ is at most 1. Otherwise, $\psi\le 2\theta_0$ and this corollary follows from Theorem 3.
\end{proof}
\begin{cor} Suppose $f\in {\it \Phi}(t_0,z)$.
Then $$\; \mu(3,z,f)\le 5.$$
\end{cor}
\begin{proof} Note that
$$T=\frac{z-t_0^2}{1-t_0^2}\le\frac{z-z^2}{1-z^2}=\frac{z}{1+z}<\frac{1}{2}. \quad \mbox{Then} \; \; \delta=\arccos{T}>\pi/3.$$
Thus $\; \mu(3,z,f)\le A(2,\delta)\le 2\pi/\delta<6$.
\end{proof}
\begin{cor} Suppose $f\in {\it \Phi}(t_0,z)$.\\ \\
$(i)\; \, \, $ If $\;t_0> \sqrt{z},\; $ then $\; \mu(4,z,f)\le 4.\\ \\$
$(ii)\; $ If $\; z=1/2,\; t_0\ge 0.6058,\; $ then
$\; \mu(4,z,f)\le 6.\\$
\end{cor}
\begin{proof} Denote by $\varphi_k(M)$ the largest angular separation that can be attained in a spherical code on ${\bf S}^{k-1}$ containing $M$ points. In three dimensions the best codes and the values $\varphi_3(M)$ are presently known for $M\le 12$ and $M=24$ (see \cite{Dan,FeT,SvdW1}).
Sch\"utte and van der Waerden \cite{SvdW1} proved that $$\varphi_3(5)=\varphi_3(6)=90^\circ, \quad
\cos{\varphi_3(7)}=\cot{40^\circ}\cot{80^\circ}, \quad\varphi_3(7)\approx 77.86954^\circ.$$
$(i)$ Since $z-t_0^2<0$, Corollary 1 yields: $\mu(4,z,f)\le A(3,\delta)$, where $\delta>90^\circ.$ We have $\delta>\varphi_3(5).\; $ Thus $\; \mu<5.$
\noindent $(ii)$ Note that for $\; t_0\ge 0.6058,$ $$\arccos{\frac{1/2-t_0^2}{1-t_0^2}}>77.87^\circ.$$
So Corollary 1 implies $\mu(4,1/2,f)\le A(3,77.87^\circ).$
Since $\; 77.87^\circ>\varphi_3(7),\; $ we have $\; A(3,77.87^\circ)<7,\; $ i.e. $\; \mu\le 6.$
\end{proof}
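These numerical inequalities are elementary to verify; for instance, the following short check (assuming a Python environment) compares the angle used in part $(ii)$ with $\varphi_3(7)$:
\begin{verbatim}
import math
t0 = 0.6058
delta = math.degrees(math.acos((0.5 - t0**2) / (1 - t0**2)))
phi_3_7 = math.degrees(math.acos(1 / (math.tan(math.radians(40))
                                      * math.tan(math.radians(80)))))
print(delta, phi_3_7, delta > phi_3_7)  # both near 77.87, and delta is larger
\end{verbatim}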
\noindent{\bf 4-E. Optimization problem.} Let
$$t_0:=\cos\theta_0,\quad z:=\cos\psi, \quad
\cos\delta:=\frac{z-t_0^2}{1-t_0^2}, \quad \mu^*:=A(n-1,\delta).$$
For given $n, \psi, \theta_0, f\in {\it \Phi}(t_0,z), e_0\in {\bf S}^{n-1},$ and $m\le \mu^*$,
the value $h_{m}(n,z,f)$ is the solution of the following optimization problem on ${\bf S}^{n-1}$:
$$
\mbox{\em maximize } \; f(1)+f(-e_0\cdot y_1)+\ldots+f(-e_0\cdot y_m) $$
{\em subject to the constraints}
$$
y_i\in {\bf S}^{n-1}, \; i=1,\ldots,m, \quad \mathop{\rm dist}\nolimits(e_0,y_i)\le\theta_0, \quad \mathop{\rm dist}\nolimits(y_i,y_j)\ge \psi, \; i\ne j.
$$
The dimension of this problem is $(n-1)m\le(n-1)\mu^*.$
If $\mu^*$ is small enough, then for small $n$ one obtains relatively small-dimensional optimization problems for the computation of the values $h_m$.
If, additionally, $f(t)$ is a monotone decreasing function on $[-1,-t_0]$, then in some cases
this problem can be reduced to an $(n-1)$-dimensional optimization problem of a type that can be treated numerically.
\section{Optimal and irreducible sets}
{\bf 5-A. The monotonicity assumption and optimal sets.}
\begin{defn} We denote by
${\it \Phi}^*(z)$ the set of all functions
$f\in \bigcup\limits_{\tau_0>z}{\it \Phi}(\tau_0,z)$
such that
$\; f(t) \; \mbox{ is a monotone decreasing function on the interval }\; [-1,-\tau_0],$
and $f(-1)>0>f(-\tau_0).$
For any $f\in {\it \Phi}^*(z)$, denote $t_0=t_0(f):=\sup\{t\in [\tau_0,1]: f(-t)<0\}$.
\end{defn}
Clearly, if $f\in {\it \Phi}^*(z)$, then
$f\in {\it \Phi}(t_0,z)$, i.e. $f(t)\le 0$ for $t\in [-1,-t_0]$. Moreover, if $f(t)$
is a continuous function on $[-1,-z]$, then $f(-t_0)=0$.
Consider a spherical $\psi$-code $Y=\{y_1,\ldots,y_m\}\subset\mathop{\rm Cap}\nolimits(e_0,\theta_0)\subset {\bf S}^{n-1}$.
Then we have the constraint: $\phi_{i,j}:=\mathop{\rm dist}\nolimits(y_i,y_j)\ge \psi$ for all $i\ne j.$ {\em Denote by
$\Gamma_\psi(Y)$ the graph with the set of vertices $Y$ and the set of edges $y_iy_j$ with $\phi_{i,j}=\psi.$}
\begin{defn} Let $f\in {\it \Phi}^*(z), \; \psi=\arccos(z), \; \theta_0=\arccos(t_0).$
We say that a spherical $\psi$-code $Y=\{y_1,\ldots,y_m\}\subset\mathop{\rm Cap}\nolimits(e_0,\theta_0)\subset {\bf S}^{n-1}$ is optimal for $f$ if $\; H_f(-e_0;Y)=h_m(n,z,f).\; $
If an optimal $Y$ is not unique up to isometry, then among all such sets we call $Y$ optimal if the graph $\Gamma_\psi(Y)$ has the maximal number of edges.
\end{defn}
Let $\theta_k:=\mathop{\rm dist}\nolimits(y_k,e_0)$. Then $H(-e_0;Y)$ can be represented in the form:
$$F_f(\theta_1,\ldots,\theta_m):=H_f(-e_0;Y)=f(1)+f(-\cos{\theta_1})+\ldots+f(-\cos{\theta_m}). $$
Let us call $F(\theta_1,\ldots,\theta_m)=F_f(\theta_1,\ldots,\theta_m)$ {\em the efficient function}. Clearly, if $f\in {\it \Phi}^*(z),$ then the efficient function is {\em a monotone decreasing function in the interval $[0,\theta_0]$ for any variable $\theta_k$}.
\noindent{\bf 5-B. Irreducible sets.}
\begin{defn} Let $0<\theta_0<\psi\le\pi/2$. We say that a spherical $\psi$-code
$Y=\{y_1,\ldots,y_m\}\subset\mathop{\rm Cap}\nolimits(e_0,\theta_0)\subset {\bf S}^{n-1}$ is
irreducible (or jammed) if no $y_k$ can be shifted towards $e_0$ (i.e. so that $\theta_k$ decreases) in such a way that the set $Y'$ obtained after this shift is still a $\psi$-code.
As above, in the case when an irreducible $Y$ is not determined uniquely up to isometry by the $\theta_i$, we say that $Y$ is irreducible if the graph $\Gamma_\psi(Y)$ has the maximal number of edges.
\end{defn}
\begin{prop} Let $f\in {\it \Phi}^*(z)$. Suppose $Y\subset\mathop{\rm Cap}\nolimits(e_0,\theta_0)\subset {\bf S}^{n-1}$ is optimal for $f$. Then $Y$ is irreducible.
\end{prop}
\begin{proof}
The efficient function $F(\theta_1,\ldots,\theta_m)$ increases whenever $\theta_k$ decreases. It follows that $y_k$ cannot be shifted towards $e_0$: otherwise
$H(-e_0;Y)=F(\theta_1,\ldots,\theta_m)$ would increase as $y_k$ tends to $e_0$, contradicting the optimality of the initial set $Y$.
\end{proof}
\begin{lemma} If $Y=\{y_1,\ldots,y_m\}$ is irreducible, then\\
$(i)\; e_0\in \Delta_m=$convex hull of ${Y};\\$
$ (ii)$ If $m>1$, then $\deg{y_i}>0$ for all $y_i\in Y$, where $\deg{y_i}$ denotes the degree of the vertex $y_i$ in the graph $\Gamma_\psi(Y)$.
\end{lemma}
\begin{proof}
\noindent $(i)$ Otherwise the whole set $Y$ can be shifted towards $e_0.$
\noindent $(ii)$ Clearly, if $\phi_{i,j}>\psi$ for all $j\ne i$, then $y_i$ can be shifted towards $e_0$.
\end{proof}
For $m=1$ it follows that $e_0=y_1$, i.e. $h_1=\sup\{F(\theta_1)\}=F(0).$ Thus
$$h_1=f(1)+f(-1). \eqno (5.1)$$
For $m=2$, Lemma 1 implies that $\mathop{\rm dist}\nolimits{(y_1,y_2)}=\psi$, i.e.
$$\Delta_2=y_1y_2 \; \mbox{\em is an arc of length } \psi.\eqno (5.2)$$
Consider $\Delta_m\subset {\bf S}^{n-1}$ of dimension $k, \; \dim{\Delta_m}=k$. Since $\Delta_m$ is a convex set, there exists the great $k$-dimensional sphere ${\bf S}^k$ in ${\bf S}^{n-1}$ containing
$\Delta_m.$
Note that if $\dim{\Delta_m}=1$, then $m=2.$ Indeed, since $\dim{\Delta_m}=1$, it follows that $Y$ belongs to the great circle ${\bf S}^1$. It is clear that in this case $m=2$. (For instance, $m>2$ contradicts Theorem 2 for $n=2$.)
To prove our main results in this section for $n=3,4$ we need the following fact. (For $n=3$, when $\Delta$ is an arc, a proof of this claim is trivial.)
\begin{lemma} Consider in ${\bf S}^{n-1}$ an arc $\omega$ and a regular simplex $\Delta$, both with edge length $\psi,\; \psi\le \pi/2$. Suppose the intersection of $\omega$ and $\Delta$ is not empty. Then at least one of the distances between vertices of $\omega$ and $\Delta$ is less than $\psi$.
\end{lemma}
\begin{proof} We have $\omega=u_1u_2,\; \Delta=v_1v_2\ldots v_k,\; \mathop{\rm dist}\nolimits(u_1,u_2)=\mathop{\rm dist}\nolimits(v_i,v_j)=\psi.$
Assume the contrary. Then $\mathop{\rm dist}\nolimits(u_i,v_j)\ge\psi$ for all $i, j.$ Denote by $U$ the union of the spherical caps of centers $v_i,\; i=1,\ldots,k,\; $ and radius $\psi.$ Let $B$ be the boundary of $U.$
Note that $u_1$ and $u_2$ do not lie inside $U.$ If $\{u_1', u_2'\}=\omega\bigcap B$, then
$\psi=\mathop{\rm dist}\nolimits(u_1,u_2)\ge\mathop{\rm dist}\nolimits(u_1',u_2')$, and $\omega'\bigcap\Delta\ne \emptyset,$ where $\omega'=u_1'u_2'.$
We have the following optimization problem: find an arc $w_1w_2$ of minimal length subject to the constraints
$w_1, w_2 \in B$ and $w_1w_2\bigcap\Delta\neq\emptyset$. It is not hard to prove that $\mathop{\rm dist}\nolimits(w_1,w_2)$ attains its minimum when $w_1$ and $w_2$ are at the distance $\psi$ from all $v_i$, i.e.
$w_1v_1\ldots v_k$ and $w_2v_1\ldots v_k$ are regular simplices with the common facet $\Delta$.
Using this, it can be shown by direct calculation that
$$\cos{\alpha}=\frac{2kz^2-(k-1)z-1}{1+(k-1)z},\quad \alpha=\min{\mathop{\rm dist}\nolimits(w_1,w_2)},\; z=\cos{\psi}\eqno (5.3)$$
We have $\alpha\le\psi$. From $(5.3)$ it follows that $\cos{\alpha}\ge z$ if and only if $z\ge 1$ or $(k+1)z+1\le 0$. This contradicts the assumption $0\le z<1.$
\end{proof}
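Formula $(5.3)$ can also be checked numerically. The following sketch (our own illustration; coordinates are built from the Gram matrix, and $w_1,w_2$ are constructed as the two apexes over the facet $\Delta$, exactly as in the proof) compares $w_1\cdot w_2$ with the right-hand side of $(5.3)$.
\begin{verbatim}
# Check of (5.3): w_1, w_2 are the two apexes over the regular simplex
# v_1 ... v_k (pairwise inner products z) at angular distance psi from all v_i.
import numpy as np

def check_53(k, z, n):
    G = np.full((k, k), z) + (1 - z) * np.eye(k)   # Gram matrix of v_1, ..., v_k
    V = np.hstack([np.linalg.cholesky(G), np.zeros((k, n - k))])  # rows: v_i
    c = V.mean(axis=0)                             # centroid of the simplex
    a = k * z / (1 + (k - 1) * z)                  # w = a*c + h*e, e _|_ span(V)
    h = np.sqrt(1 - a * a * (c @ c))
    e = np.zeros(n); e[k] = 1.0
    w1, w2 = a * c + h * e, a * c - h * e          # dist(w_i, v_j) = psi for all j
    rhs = (2 * k * z**2 - (k - 1) * z - 1) / (1 + (k - 1) * z)
    return w1 @ w2, rhs                            # the two values coincide

print(check_53(k=3, z=0.5, n=5))
\end{verbatim}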
\noindent{\bf 5-C. Irreducible sets in ${\bf S}^2$.}
Now we consider irreducible sets for $n=3$. In this case $\dim{\Delta_m}\le 2.$
\begin{theorem} Suppose $Y$ is irreducible and $\; \dim(\Delta_m)=2.\\$ Then $3\le m\le 5$, and $\Delta_m$ is a spherical regular triangle, rhomb, or equilateral pentagon with edge lengths $\psi.$
\end{theorem}
\begin{proof} From Corollary 2 it follows that $m\le 5.$ On the other hand, $m>2.$ Then
$m=3,\, 4,\, 5.$
Theorem 2 implies that $\Delta_m$ is a convex polygon with vertices $y_1,\ldots,y_m$.
From Lemma 1 it follows that $e_0\in \Delta_m$, and $\deg{y_i}\geqslant 1.$
First let us prove that if $\deg{y_i}\ge 2$ for all $i$, then $\Delta_m$ is an equilateral $m$-gon with edge lengths $\psi.$ Indeed, it is clear for $m=3.$
Lemma 2 implies that two diagonals of $\Delta_m$ of lengths $\psi$ do not intersect each other.
That yields the proof for $m=4.$
When $m=5$, it remains to consider the case where $\Delta_5$ consists of two regular non-overlapping triangles with a common vertex (Fig. 3). This case contradicts the convexity of $\Delta_5$. Indeed, since the angular sum of a spherical triangle is strictly greater than $180^\circ$ and a larger side of a spherical triangle subtends a larger opposite angle, we have
$\angle{y_iy_1y_j}>60^\circ$. Then
$$180^\circ\ge \angle{y_2y_1y_5}=\angle{y_2y_1y_3}+\angle{y_3y_1y_4}+\angle{y_4y_1y_5}>180^\circ,$$ a contradiction.
Now we prove that $\deg{y_i}\ge 2.$
Suppose $\deg{y_1}=1,$ i.e. $\phi_{1,2}=\psi, \; \, \phi_{1,i}>\psi \, $ for $\, i=3,\ldots,m.$ (Recall that $\phi_{i,j}=\mathop{\rm dist}\nolimits(y_i,y_j)$.)
If $e_0\notin y_1y_2$, then after a sufficiently small turn of $y_1$ about $y_2$ towards $e_0$ (Fig. 4) the distance $\theta_1$ decreases, a contradiction. (This turn will be considered in Lemma 3 in more detail.)
\begin{center}
\begin{picture}(320,140)(-160,-70)
\put(-112,44){$y_3$}
\put(-90,-30){$y_1$}
\put(-56,44){$y_4$}
\put(-150,6){$y_2$}
\put(-18,6){$y_5$}
\put(-80,-20){\circle*{4}}
\put(-60,40){\circle*{4}}
\put(-140,0){\circle*{4}}
\put(-100,40){\circle*{4}}
\put(-20,0){\circle*{4}}
\put(-80,-20){\line(-3,1){60}}
\put(-80,-20){\line(-1,3){20}}
\put(-80,-20){\line(1,3){20}}
\put(-80,-20){\line(3,1){60}}
\put(-60,40){\line(1,-1){40}}
\put(-140,0){\line(1,1){40}}
\put(-95,-65){Fig. 3}
\put(80,-65){Fig. 4}
\put(40,30){\circle*{4}}
\put(90,-20){\circle*{6}}
\put(140,30){\circle*{4}}
\put(80,20){\circle*{4}}
\thicklines
\put(90,-20){\line(-1,1){50}}
\thinlines
\put(90,-20){\line(1,1){50}}
\put(40,30){\vector(1,1){13}}
\put(98,-25){$y_2$}
\put(86,18){$e_0$}
\put(25,28){$y_1$}
\put(147,28){$y_3$}
\end{picture}
\end{center}
It remains to consider the case $e_0\in y_1y_2.$
If $\phi_{i,j}=\psi$ where $i>2$ or $j>2$, then $e_0\notin y_iy_j$. Indeed, otherwise we would have two intersecting diagonals of length $\psi.$ Therefore,
$\deg{y_i}\ge 2$ for $2<i\le m$. For $m=3, 4$ this completes the proof. For $m=5$ there is the case where $Q_3=y_3y_4y_5$ is a regular triangle of side length $\psi.$ Note that $y_1y_2$ cannot intersect $Q_3$ (otherwise we again have intersecting diagonals of length $\psi$), so $y_1y_2$ is a side of $\Delta_5$. In this case, as above, after a sufficiently small turn of $Q_3$ about $y_2$ towards $e_0$ the distances $\theta_i, \; i=3,4,5,$ decrease, a contradiction.
\end{proof}
\noindent{\bf 5-D. Rotations and irreducible sets in $n$ dimensions.}
Now we extend these results to $n$ dimensions.\footnote{ In the first version of this paper it was claimed that for $m \ge n$ any vertex of $\Gamma_\psi(Y)$ has degree at least $n-1$.
However, E. Bannai, M. Tagami, and referees of this paper found some gaps in our exposition. Most of them are related to ``degenerate" configurations.
In this paper we need only the case $n=4, m<6$. For this case Bannai and Tagami verified each step of our proof, considered all ``degenerate" configurations, and finally gave a clean and detailed proof (see E. Bannai and M. Tagami: On optimal sets in Musin's
paper ``The kissing number in four dimensions" in the
Proceedings of the COE Workshop on Sphere Packings, November
1-5, 2004, in Fukuoka Japan).
At present this claim for all $n$ can be considered only as a conjecture. In {\bf 5-D} we prove the claim when the $\{y_i\}$ are in ``general position".
I wish to thank Eiichi Bannai, Makoto Tagami, and the anonymous referees for helpful and useful comments.}
Let us consider a rotation $R(\varphi,\Omega)$ on ${\bf S}^{n-1}$ about an $(n-3)$ - dimensional great
sphere $\Omega$ in ${\bf S}^{n-1}.$ Without loss of generality, we may assume that
$$\Omega=\{\vec u=(u_1,\ldots,u_n)\in {\bf R}^n: u_1=u_2=0, \, u_1^2+\ldots+u_n^2=1\}.$$
Denote by $R(\varphi,\Omega)$ the rotation about $\Omega$ that acts in the plane $\{u_i=0, \, i=3,\ldots,n\}$ as the rotation through an angle $\varphi$ about the origin:
$$u'_1=u_1\cos{\varphi}-u_2\sin{\varphi}, \quad u'_2=u_1\sin{\varphi}+u_2\cos{\varphi}, \quad u'_i=u_i, \; i=3, \ldots, n.$$
Let $$H_+=\{\vec u\in {\bf S }^{n-1}: u_2\ge 0\}, \quad H_-=\{\vec u\in {\bf S}^{n-1}: u_2\le 0\},$$ $$Q=\{\vec u\in {\bf S}^{n-1}: u_2=0,\; u_1>0\},\quad \bar Q=\{\vec u\in {\bf S}^{n-1}: u_2=0,\;
u_1\ge 0\}.$$
Note that $H_-$ and $H_+$ are closed hemispheres of ${\bf S}^{n-1},\; \, \bar Q=Q\bigcup\Omega,\; $ and $\bar Q$ is
a hemisphere of the unit sphere $\Omega_2=\{\vec u\in {\bf S}^{n-1}: u_2=0\}$ bounded by $\Omega.$
\begin{lemma} Consider two points $y$ and $e_0$ in ${\bf S}^{n-1}.$ Suppose $y\in Q$ and
$e_0\notin \bar Q.\\$ If $e_0\in H_+$, then any rotation $R(\varphi,\Omega)$ of $y$ with sufficiently small positive $\varphi$ decreases the distance between $y$ and $e_0.\\$
If $e_0\in H_-$, then any rotation $R(\varphi,\Omega)$ of $y$ with sufficiently small negative $\varphi$ decreases the distance between $y$ and $e_0.$
\end{lemma}
\begin{proof} Let $y$ be rotated into the point $y(\varphi)$. If the coordinate expressions of $y$ and $e_0$ are
$$y=(u_1,0,u_3,\ldots,u_n), \quad u_1>0; \qquad e_0=(v_1,v_2,\ldots,v_n), \; \, \mbox{then}$$
$$r(\varphi):=y(\varphi)\cdot e_0=u_1v_1\cos{\varphi}+u_1v_2\sin{\varphi}+u_3v_3+\ldots+u_nv_n.$$
Therefore, $\; r'(\varphi)=-u_1v_1\sin{\varphi}+u_1v_2\cos{\varphi},\; $ i.e. $\; r'(0)=u_1v_2. \;$ Then
$$r'(0)>0 \quad \mbox{iff} \quad v_2>0, \quad \mbox{i.e.}\quad e_0\in \stackrel{\sf o}{H}_+;$$
$$r'(0)<0 \quad \mbox{iff} \quad v_2<0 \quad \mbox{i.e.}\quad e_0\in \stackrel{\sf o}{H}_-.$$
That proves the lemma for $v_2\neq 0$. In the case $v_2=0$, by assumption ($e_0\notin \bar Q$) we have $v_1<0.$ In this case $r'(0)=0$, and $r''(0)=-u_1v_1>0$, i.e. $\varphi=0$ is a minimum point.
This completes the proof.
\end{proof}
\begin{prop} Let $Y$ be irreducible and $m=|Y| \ge n$. Suppose there are no closed great hemispheres $\bar Q$ in ${\bf S}^{n-1}$ such that $\bar Q$ contains $n-1$ points from $Y$ and $e_0$. Then any vertex of $\Gamma_\psi(Y)$ has degree at least $n-1$.
\end{prop}
\begin{proof} Without loss of generality, we may assume that
$$\phi_{1,i}=\psi,\; \, i=2, \ldots, \deg{y_1}+1; \quad \phi_{1,i}>\psi, \; \, i=\deg{y_1}+2, \ldots, m.$$
Suppose $\deg{y_1}< n-1$. Then $\phi_{1,i}>\psi$ for $i=n, \ldots, m.$ Let us consider the great $(n-3)$ - dimensional
sphere $\Omega$ in ${\bf S}^{n-1}$ that contains the points $y_2, \ldots, y_{n-1}.$ Then Lemma 3 implies that a rotation $R(\varphi,\Omega)$ of $y_1$ with sufficiently small $\varphi$ decreases $\theta_1$. This contradicts the irreducibility of $Y$.
\end{proof}
\begin{prop} If $Y$ is irreducible, $|Y|=n, \dim{\Delta_n}=n-1$, then $\deg{y_i} = n-1\; $ for all $\; i=1, \ldots, n.$ In other words, ${\Delta_n}$
is a regular simplex of edge lengths $\psi.$
\end{prop}
\begin{proof} Clearly, $\Delta_n$ is a spherical simplex. Denote by $F_i$ its facets,
$$F_i:=\mathop{\rm conv}\nolimits{\{y_1,\ldots,y_{i-1},y_{i+1},\ldots,y_n \}}.$$
Let for $\sigma\subset I_n:=\{1,\ldots,n\}$
$$F_\sigma:=\bigcap\limits_{i\in\sigma} {F_i}\,.$$
We claim for $ \; i\neq j \; $ that:
$$\mbox{\it If}\; \; e_0\notin F_{\{i,j\}}, \; \mbox{\it then \;} \phi_{i,j}=\psi. \eqno (5.4)$$
Indeed, otherwise Lemma 3 implies that there exists a rotation $R(\varphi,\Omega_{ij})$ of $y_i$ (or of $y_j$ if $e_0 \in F_i$) which decreases $\theta_i$ (respectively, $\theta_j$), where $\Omega_{ij}$ is the great $(n-3)$-dimensional sphere containing $F_{\{i,j\}}$. This contradicts the irreducibility assumption for $Y$.
This yields that if there is no pair $\{i,j\}$ such that $e_0\in F_{\{i,j\}}$, then $\phi_{i,j}=\psi$ for all $i, j$ from $I_n$.
Suppose $e_0\in F_\sigma$, where $\sigma$ has maximal size and $|\sigma|>1$. Let $\bar\sigma=I_n\setminus\sigma$. From $(5.4)$ it follows that
$\phi_{i,j}=\psi\;$ if $\;i\in\bar\sigma\;$ or $\;j\in\bar\sigma.\; $ It remains to prove that $\phi_{i,j}=\psi\;$ for $\;i, j\in\sigma.\;$
Let $\Lambda$ be the intersection of the spheres of centers $y_i, \; i\in \bar\sigma,\; $ and radius $\psi$. Then $\Lambda$ is a sphere in ${\bf S}^{n-1}$ of dimension $|\sigma|-1.$ Note that $F_\sigma=$convex hull of $\{y_i: i\in\bar\sigma\}$, and for any fixed point $x$ from
$F_\sigma$ (in particular for $x=e_0$) the distance $\mathop{\rm dist}\nolimits(x,y)$ takes the same value (depending only on $x$) on the entire set $y\in\Lambda.$ Then
$y_i, \; i\in\sigma,\; $ lie in $\Lambda$ at the same distance from $e_0$. It is clear that $Y$ is irreducible if and only if $y_i, \; i\in\sigma,\; $ in $\Lambda$ are vertices of a regular simplex of edge length $\psi.$
Finally, we have that all edges of $\Delta_n$ are of lengths $\psi$ as required.
\end{proof}
\begin{cor} If $n>3$, then $\Delta_4$ is a regular tetrahedron of edge lengths $\psi.$
\end{cor}
\begin{proof}
Let us show that $\dim{\Delta_4}=3$. Otherwise $\dim{\Delta_4}=2$, and from Theorem 4 it follows that $\Delta_4$ is a rhomb. Suppose $y_1y_3$ is the shorter diagonal of
$\Delta_4$. Then $\phi_{2,4}>\psi$ (see Lemma 2).
Let us consider a sufficiently small turn of the facet $y_1y_2y_3$ round $y_1y_3$. If $e_0\notin y_1y_3$, then this turn decreases either $\theta_4$ (if $e_0\in y_1y_2y_3$) or $\theta_2$, a contradiction. In the case $e_0\in y_1y_3$ any turn of $y_2$ round $y_1y_3$ decreases $\phi_{2,4}$ and does not change $\theta_2$. Obviously, there is a turn such that $\phi_{2,4}$ becomes equal to $\psi.$ This again contradicts the irreducibility of $Y$.
\end{proof}
\noindent{\bf 5-E. Irreducible sets in ${\bf S}^3$.}
\begin{lemma} If $Y\subset {\bf S}^3$ is irreducible, $|Y|=5$, then $\; \deg{y_i}\ge 3\; $
for all $i$.
\end{lemma}
\begin{proof} ({\bf 1})
Let us show that $\dim{\Delta_5}=3$. Otherwise $\dim{\Delta_5}=2$, and from Theorem 4 it follows that $\Delta_5$ is a convex equilateral pentagon. Suppose $y_1y_3$ is the shortest diagonal of $\Delta_5$. We have $\phi_{2,k}>\psi$ for $k>3.$
Suppose $e_0\notin y_1y_3$. If $e_0\in y_1y_2y_3$ then any sufficiently small turn of the facet $y_1y_3y_4y_5$ round $y_1y_3$ decreases $\theta_4$ and $\theta_5$, otherwise it decreases $\theta_2$, a contradiction.
In the case $e_0\in y_1y_3$ any turn of $y_2$ round $y_1y_3$ decreases $\phi_{2,k}$ for $k=4, 5$, and does not change any $\theta_i$. It can be shown in an elementary way that there is a turn such that $\phi_{2,4}$ or $\phi_{2,5}$ becomes equal to $\psi$, a contradiction.
In three dimensions there exist only two combinatorial types of convex polytopes with 5 vertices: (A) and (B) (see Fig. 5). In the case (A) the arc $y_3y_5$ lies inside $\Delta_5$, and for (B): $y_2y_3y_4y_5$ is a facet of $\Delta_5.$
\begin{center}
\begin{picture}(320,150)(-80,-75)
\put(57,-75){Fig. 5}
\put(-10,-65){(A)}
\put(-60,-20){\circle*{4}}
\put(20,-40){\circle*{4}}
\put(0,20){\circle*{4}}
\put(60,40){\circle*{4}}
\put(0,60){\circle*{4}}
\put(-60,-20){\line(4,-1){80}}
\put(-60,-20){\line(3,2){60}}
\put(-60,-20){\line(3,4){60}}
\put(0,20){\line(3,1){60}}
\put(20,-40){\line(1,2){40}}
\put(0,60){\line(0,-1){40}}
\put(0,60){\line(1,-5){20}}
\put(0,60){\line(3,-1){60}}
\put(20,-40){\line(-1,3){20}}
\thinlines
\multiput(-60,-20)(4,2){30}
{\circle*{1}}
\put(-75,-21){$y_5$}
\put(27,-41){$y_2$}
\put(-13,24){$y_4$}
\put(64,44){$y_3$}
\put(3,65){$y_1$}
\put(170,-65){(B)}
\put(130,-20){\circle*{4}}
\put(190,-40){\circle*{4}}
\put(170,20){\circle*{4}}
\put(230,0){\circle*{4}}
\put(170,60){\circle*{4}}
\put(130,-20){\line(3,-1){60}}
\put(130,-20){\line(1,1){40}}
\put(130,-20){\line(1,2){40}}
\put(170,20){\line(3,-1){60}}
\put(190,-40){\line(1,1){40}}
\put(170,60){\line(0,-1){40}}
\put(170,60){\line(1,-5){20}}
\put(170,60){\line(1,-1){60}}
\put(115,-21){$y_5$}
\put(197,-41){$y_2$}
\put(157,24){$y_4$}
\put(234,4){$y_3$}
\put(173,65){$y_1$}
\end{picture}
\end{center}
\noindent ({\bf 2}) By $s_{ij}$ we denote the arc $y_iy_j$, and by $s_{ijk}$ the triangle $y_iy_jy_k.$ Let $\tilde s_{ijk}$ be the intersection of the great $2$-hemisphere $Q_{ijk}$ and $\Delta_5$, where $Q_{ijk}$ contains $y_i, y_j, y_k$ and is bounded by the great circle passing through $y_i, y_j$.
Proposition 2 yields: if there are no $i, j, k$ such that $e_0\in \tilde s_{ijk}$, then
$\deg{y_i}\ge 3$ for all $i$.
It remains to consider all cases $e_0\in \tilde s_{ijk}$. Note that
for (A) $\tilde s_{ijk}\ne s_{ijk}$ only for three cases: $i=1, 2,4;$ where $j=3,\; k=5,$ or $j=5,\; k=3$ ($\tilde s_{i35}=\tilde s_{i53}$).
\noindent ({\bf 3}) Lemma 1 yields that $\deg{y_k}>0$. Now we consider the cases $\deg{y_k}=1, 2$.
\centerline{\em If $\; \deg{y_k}=1,\; \phi_{k,\ell}=\psi,\; $ then $\; e_0\in s_{k\ell}.$}
Indeed, otherwise
there exists a great circle
$\Omega$ in ${\bf S}^3$ such that $\Omega$ contains $y_\ell$, and the great sphere passing through $\Omega$ and $y_k$ does not pass through $e_0$. Then Lemma 3 implies that a rotation $R(\varphi,\Omega)$ of $y_k$ with sufficiently small $\varphi$ decreases $\theta_k$, a contradiction.
Since $\theta_0<\psi$, $e_0$ cannot be a vertex of $\Delta_5.$ Therefore, $e_0$ lies inside $s_{k\ell}$. From this it follows that if, for a given $i$, no arc $s_{ij}$ intersects $s_{k\ell}$, then $\deg{y_i}\ge 2.$
Arguing as above it is easy to prove that
\centerline{\em If $\; \deg{y_k}=2,\; \phi_{k,i}=\phi_{k,j}=\psi,\; $ then $\; e_0\in \tilde s_{ijk}.$}
\noindent ({\bf 4}) Now we prove that $\deg{y_k}\ge 2$ for all $k.$
Assume the contrary: $\deg{y_k}=1$ and, hence, $e_0\in s_{k\ell}.$
a). First we consider the case when $s_{k\ell}$ is an ``external" edge of $\Delta_5$. For the type (A) that means $s_{k\ell}$ differs from $s_{35}$, and for (B) it is not $s_{35}$ or $s_{24}$. Since $\Delta_5$ is convex, there exists a great $2$-sphere $\Omega_2$ passing through $y_k, y_\ell$ such that the 3 other points $y_i, y_j, y_q$ lie inside the hemisphere $H_+$ bounded by $\Omega_2.$ Let $\Omega$ be the great circle in $\Omega_2$ that contains $y_\ell$ and is orthogonal to the arc $s_{k\ell}$. Then (Lemma 3) there exists a small turn of $y_i, y_j, y_q$ round $\Omega$ that simultaneously decreases $\theta_i, \theta_j, \theta_q$, a contradiction.
b). For the type (A) when $\deg{y_3}=1, \; \phi_{3,5}=\psi, \; e_0\in s_{35}$, we claim that $s_{124}$ is a regular triangle with side length $\psi.$ Indeed, from a) it follows that $\deg{y_i}\ge 2$ for $i=1, 2, 4.$ Moreover, if $\deg{y_i}=2$, then $e_0=s_{35}\bigcap s_{124}.$ Therefore, in any case, $\phi_{1,2}=\phi_{1,4}=\phi_{2,4}=\psi.$
We have the arc $s_{35}$ and the regular triangle $s_{124}$, both with edge length $\psi$. Then from Lemma 2 it follows that some $\phi_{i,j}<\psi$, a contradiction.
c). Now for the type (B) consider the case: $\deg{y_3}=1, \; \phi_{3,5}=\psi, \; e_0\in s_{35}$. Then
for $y_2$ we have: $\deg{y_2}=1$ only if $\phi_{2,4}=\psi, \; e_0=s_{24}\bigcap s_{35};\; $ $\deg{y_2}=2$ only if $\phi_{2,4}=\phi_{2,5}=\psi$; and $\phi_{2,4}=\phi_{1,2}=\phi_{2,5}=\psi$ if $\deg{y_2}=3$.
Thus, in any case, $\phi_{2,4}=\psi.$ We have two intersecting diagonals $s_{24}, s_{35}$ of length $\psi.$ Then, by Lemma 2, some distance between vertices is less than $\psi$, which contradicts the assumption that $Y$ is a $\psi$-code. This contradiction concludes the proof that $\deg{y_k}\ge 2$ for all $k$.
\noindent ({\bf 5}) Finally let us prove that $\deg{y_k}\ge 3$ for all $k.$ Assume the converse. Then $\; \deg{y_k}=2, \; e_0\in \tilde s_{ijk},\; $ where $\; \phi_{k,i}=\phi_{k,j}=\psi.$
{\bf Case facet}{\bf:} Let $s_{ijk}$ be a facet of $\Delta_5,$ and $e_0\notin s_{ij}$. By the same argument as in ({\bf 4}a), where $\Omega_2$ is the great sphere containing $s_{ijk}$ and $\Omega$ is the great circle passing through $y_i, y_j,$ we can prove that there exists a shift decreasing $\theta_\ell, \theta_q$ for the two other points $y_\ell, y_q$ from $Y$, a contradiction.
If $e_0\in s_{ij}$, then any turn of $s_{{\ell}q}$ round $\Omega$ does not change $\theta_\ell$ and $\theta_q$. However, if this turn is in the positive direction, then it decreases $\phi_{k,\ell}$ and $\phi_{k,q}$. Clearly, there exists a turn for which $\phi_{k,\ell}$ or $\phi_{k,q}$ becomes equal to $\psi$, a contradiction.
It remains to consider all cases where $s_{ijk}$ is not a facet. Namely, there are the following cases:
$s_{124},\; s_{135}$ (type (A) and type (B)), $\; s_{234}$ (type (B)).
{\bf Case $s_{124}$:} We have $\deg{y_1}=2,\; \phi_{1,2}=\phi_{1,4}=\psi,\; e_0\in s_{124}.$ Consider a small turn of $y_3$ round $s_{24}$ towards $y_1$. If $e_0\notin s_{24}$, then this turn decreases $\theta_3.$ Therefore, the irreducibility yields $\phi_{3,5}=\psi.$ In the case $e_0\in s_{24}$ we have $\theta_3'=\theta_3,$ but $\phi_{1,3}$ decreases. This again implies $\phi_{3,5}=\psi.$ Since $s_{35}$ cannot intersect the regular triangle $s_{124}$ [see Lemma 2, ({\bf 4}b)], we have $\phi_{2,4}>\psi.$
Then $\deg{y_2}=\deg{y_4}=3.$ (Since $ e_0\in s_{124},\; \deg{y_2}=2$ only if $\phi_{2,4}=\psi.$)
Thus we have three isosceles triangles $s_{243}, s_{241}, s_{245}$. Using this and $\phi_{3,5}=\psi,$ we obviously obtain $\phi_{1,i}<\psi$ for $i=3, 5$, a contradiction.
{\bf Case $s_{135}$}(type (B)) is equivalent to the {\bf Case $s_{124}$}.
{\bf Case $s_{135}$}(type (A)){\bf:} This case has two subcases: $\tilde s_{351},\; \tilde s_{153}$.
In the subcase $\tilde s_{135}$ we have
$\deg{y_1}=2, \; \phi_{1,3}=\phi_{1,5}=\psi, \; e_0\in \tilde s_{135}.\\$ If $e_0\notin s_{135}$, then any turn of $y_1$ round $s_{35}$ decreases $\theta_1$ (Lemma 3). Then $e_0\in s_{135}$.
Clearly, any small turn of $y_2$ round $s_{35}$ increases $\phi_{2,4}.$ On the other hand, this turn decreases $\theta_2$ (if $e_0\notin s_{35}$) and $\phi_{1,2}$. Arguing as above, we get a contradiction.
The subcase $\tilde s_{315}$, where $\phi_{3,5}=\psi$, can be proven by the same arguments as Case $s_{124}$.
{\bf Case $s_{234}$}(type (B)){\bf:} This case has two subcases: $\tilde s_{243},\; \tilde s_{234}$.
It is not hard to see that $\tilde s_{243}$ follows from the {\bf Case facet}, and $\tilde s_{234}$ can be proven in the same way as the subcase $\tilde s_{135}$. This concludes the proof.
\end{proof}
Lemma 4 yields that the degree of any vertex of $\Gamma_\psi(Y)$ is not less than 3. This implies that at least one vertex of $\Gamma_\psi(Y)$ has degree 4. Indeed, if all vertices of $\Gamma_\psi(Y)$ are of degree 3, then the sum of the degrees equals 15, i.e. is not an even number.
There exists only one type of $\Gamma_\psi(Y)$ with these conditions (Fig. 6). The lengths of all edges of $\Delta_5$ except $y_2y_4$, $y_3y_5$ are equal to $\psi$.
For fixed $\phi_{2,4}=\alpha, \; \Delta_5$ is uniquely determined up to isometry. Therefore, we have a one-parameter family $P_5(\alpha)$ on ${\bf S}^3.\;$ If $\phi_{3,5}\ge\phi_{2,4}$, then $z\ge\cos{\alpha}\ge 2z-1.$
\begin{center}
\begin{picture}(320,140)(-80,-70)
\put(57,-65){Fig. 6: $P_5(\alpha)$}
\put(10,-20){\circle*{5}}
\put(90,-40){\circle*{5}}
\put(70,20){\circle*{5}}
\put(130,40){\circle*{5}}
\put(70,60){\circle*{5}}
\thicklines
\put(10,-20){\line(4,-1){80}}
\put(10,-20){\line(3,2){60}}
\put(70,20){\line(3,1){60}}
\put(90,-40){\line(1,2){40}}
\put(70,60){\line(-3,-4){60}}
\put(70,60){\line(0,-1){40}}
\put(70,60){\line(1,-5){20}}
\put(70,60){\line(3,-1){60}}
\thinlines
\multiput(90,-40)(-1,3){20}
{\circle*{1}}
\put(70,-14){$\alpha$}
\put(-5,-21){$y_5$}
\put(97,-41){$y_2$}
\put(57,24){$y_4$}
\put(134,44){$y_3$}
\put(73,65){$y_1$}
\end{picture}
\end{center}
Thus Theorem 4, Corollary 4 and Lemma 4 for $n=4$ yield:
\begin{theorem} Let $Y\subset {\bf S}^3$ be an irreducible set, $\; |Y|=m\le 5.\; $ Then $\; \Delta_m \; $
for $\; 2\le m\le 4$ is a regular simplex of edge lengths $\psi$, and $\; \Delta_5$ is isometric to $P_5(\alpha)$ for some
$\alpha\in [\psi,\arccos{(2z-1)}].$
\end{theorem}
\noindent{\bf 5-F. Optimization problem.} We see that if $Y$ is optimal, then in some cases
it is determined up to isometry.
For fixed $ y_i\in {\bf S}^{n-1},\; i=1,\ldots,m;\; $ the function $H$ depends only on a position $y=-y_0=e_0\in {\bf S}^{n-1}.$ Let
$$H_m(y):=f(1)+f(-y\cdot y_1)+\ldots+f(-y\cdot y_m),$$
i.e. $H_m(y)=H(-y;Y).\; $
Thus for $h_m$ we have the following $(n-1)$-dimensional optimization problem:
$$h_m=\max\limits_y{\{H_m(y)\}}$$ {\em subject to the constraint}
$$y\in T(Y,\theta_0):=\{y\in\Delta_m\subset {\bf S}^{n-1}:\; y\cdot y_i\ge t_0,\; i=1,\ldots, m\}. $$
We present an efficient numerical method for this problem in the next section.
\section {On calculations of $h_m$}
In this technical section we explain how to find an upper bound on $\; h_m\; $ for $\; n=4, \; m\le 6$.
Note that Theorem 5 reduces the computation of $h_m$ to a low-dimensional optimization problem
(see {\bf 5-F}). Our first approach to this problem was to apply numerical methods \cite{Mus}. However, this is a nonconvex constrained optimization problem. In this case, the Nelder-Mead simplex method and other local improvement methods cannot guarantee finding a global optimum. It is possible (using estimates of derivatives) to organize the computational process in such a way that it yields a global optimum. However, such solutions are very hard to verify, and some mathematicians do not accept that kind of proof. Fortunately, using a geometric approach, estimates of $h_m$ can be reduced to relatively simple computations.
Throughout this section we use the function $\tilde f(\theta)$ defined for
$f\in {\it\Phi}^*(z)$ by
$$ \tilde f(\theta):=\left\{
\begin{array}{l}
f(-\cos{\theta}) \quad 0\le\theta\le\theta_0=\arccos{t_0} \; \mbox{ (see Definition 4)}\\
-\infty \qquad \quad \; \; \theta>\theta_0
\end{array}
\right.
$$
Since $f\in {\it\Phi}^*(z)$, $\tilde f(\theta)$ is a monotone decreasing function in $\theta$ on $[0,\theta_0]$.
\noindent{\bf 6-A. The case m=2.}
Suppose $m=2$ and $Y$ is optimal for $f\in {\it\Phi}^*(z)$. Then $\Delta_2=y_1y_2$ is an arc of length $\psi, \; e_0\in\Delta_2,$ and $\; \theta_1+\theta_2=\psi,$ where $\theta_i\le\theta_0$ (see Lemma 1 and $(5.2)$). The efficient function $F(\theta_1,\theta_2)=f(1)+
\tilde f(\theta_1)+\tilde f(\theta_2)$
is a symmetric function in $\theta_1, \theta_2.$
We can assume
that $\theta_1\le \theta_2$, then $\theta_1\in [\psi-\theta_0,\psi/2].$
Since $\Theta_2(\theta_1):=\psi-\theta_1$ is a monotone decreasing function, $\tilde f(\Theta_2(\theta_1))$ is a monotone increasing function in $\theta_1.$
Thus for any $\theta_1\in [u,v]\subset [\psi-\theta_0, \psi/2]$ we have
$$ F(\theta_1,\theta_2)\le \Phi_2([u,v]):=f(1)+\tilde f(u)+\tilde f(\psi-v). $$
Let $\; u_1=\psi-\theta_0,\; u_2,\, \ldots,\, u_{N},\; u_{N+1}=\psi/2\; $ be points in $\; [\psi-\theta_0,\psi/2]\; $ such that $\; u_{i+1}=u_i+\varepsilon,\; $ where $\; \varepsilon=(\theta_0-\psi/2)/N.\; $
If $\; \theta_1\in [u_i,u_{i+1}],\; $ then $\; h_2=H(y_0;Y)=F(\theta_1,\theta_2)\le
\Phi_2([u_i,u_{i+1}]). \; $ Thus
$$h_2\le \lambda_2(N,\psi,\theta_0):=\max\limits_{1\le i\le N}
\{\Phi_2(s_i)\}, \; \mbox{ where } \; s_i:=[u_i,u_{i+1}].\; $$
Clearly, $\lambda_2(N,\psi,\theta_0)$ tends to $\, h_2\, $ as $\; N\to\infty\; $ ($\varepsilon\to 0$).
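A minimal implementation sketch of this bound (assuming $f$ is available as a Python function; all names are illustrative):
\begin{verbatim}
# Sketch of the bound h_2 <= lambda_2(N, psi, theta0) from 6-A.
import numpy as np

def lambda_2(f, psi, theta0, N=1000):
    f_tilde = lambda th: f(-np.cos(th))        # tilde f on [0, theta0]
    u = np.linspace(psi - theta0, psi / 2, N + 1)
    # Phi_2([u_i, u_{i+1}]) = f(1) + tilde f(u_i) + tilde f(psi - u_{i+1})
    return max(f(1.0) + f_tilde(u[i]) + f_tilde(psi - u[i + 1])
               for i in range(N))
\end{verbatim}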
As the sketch above illustrates, this gives a very simple method for the calculation of $h_2.$ Now we extend this approach to higher $m$.
\noindent{\bf 6-B. The function $\Theta_k$.}
Suppose we know (up to isometry) optimal $Y=\{y_1, \ldots, y_m\}\subset {\bf S}^{n-1}$. Let us assume that
$\dim{\Delta_m}=n-1$, and $\; V:= $ the convex hull of $\{y_1,\ldots, y_{n-1}\} $ is a facet of $\Delta_m.$
Then $\; \mbox{rank}{\{y_1, \ldots, y_{n-1}\}}=n-1,\; $ and $Y$ belongs to the closed hemisphere $H_+$ bounded by the great sphere $\tilde S$ passing through $V$.
Let us show that any $y=y_+\in H_+$ is uniquely determined by the set of distances $\theta_i=\mathop{\rm dist}\nolimits(y,y_i),\; i=1, \ldots, n-1.$ Indeed, there are at most two solutions: $\; y_+\in H_+\; $ and $\; y_-\in H_-\; $ of the quadratic equation
$$y\cdot y=1 \; \mbox{ with } \; y\cdot y_i=\cos{\theta_i}, \; i=1,\ldots,n-1. \eqno (6.1)$$
Note that $y_+=y_-$ if and only if $y\in \tilde S.$
This implies that $\theta_k,\; k\ge n $ is determined by $\; \theta_i,\; i=1,\ldots, n-1;$
$$ \theta_k=\Theta_k(\theta_1,\ldots,\theta_{n-1}).$$
It is not hard to solve $(6.1)$ and, therefore, to give an explicit expression for $\Theta_k.$
For instance, let $\Delta_n$ be a regular simplex of edge lengths $\pi/3$. (We need this case for $n=3,4$.) Then
\footnote{I am very grateful to referees for these explicit formulas.}
$$
\cos\theta_3=\cos\Theta_3(\theta_1,\theta_2)=
$$
$$
=\frac{1}{3}\left(\cos\theta_1+\cos\theta_2+\sqrt{6-8[\cos\theta_1\cos\theta_2+
(\cos\theta_2-\cos\theta_1)^2]}\right);
$$
$$
\cos\theta_4=\cos\Theta_4(\theta_1,\theta_2,\theta_3)=\frac{1}{4}\Bigl(\cos\theta_1+\cos\theta_2+\cos\theta_3+
$$
$$
\sqrt{10}\sqrt{1+\cos\theta_1\cos\theta_2+\cos\theta_1\cos\theta_3+\cos\theta_2\cos\theta_3
-\frac{3}{2}(\cos^2\theta_1+\cos^2\theta_2+\cos^2\theta_3)}\Bigr).
$$
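These formulas are easy to check numerically. The sketch below (our own illustration; all names are ours) implements $\Theta_3$ and $\Theta_4$ for $z=\cos\psi=1/2$ and cross-checks them against a direct solution of $(6.1)$, with the regular simplex built from its Gram matrix.
\begin{verbatim}
# Sketch: the explicit Theta_3, Theta_4 above (edge length pi/3, i.e. z = 1/2),
# cross-checked against a direct solution of (6.1).
import numpy as np

def Theta3(t1, t2):
    c1, c2 = np.cos(t1), np.cos(t2)
    s = 6 - 8 * (c1 * c2 + (c2 - c1) ** 2)
    return np.arccos((c1 + c2 + np.sqrt(s)) / 3)

def Theta4(t1, t2, t3):
    c = np.cos([t1, t2, t3])
    s = 1 + c[0]*c[1] + c[0]*c[2] + c[1]*c[2] - 1.5 * np.sum(c**2)
    return np.arccos((np.sum(c) + np.sqrt(10) * np.sqrt(s)) / 4)

def theta_n_direct(thetas, z=0.5):
    # Solve (6.1) for y in H_+ and return dist(y, y_n), where y_1, ..., y_n is a
    # regular simplex with pairwise inner products z (rows of a Cholesky factor).
    n = len(thetas) + 1
    G = np.full((n, n), z) + (1 - z) * np.eye(n)
    V = np.linalg.cholesky(G)                       # rows: y_1, ..., y_n
    A, b = V[:n - 1], np.cos(thetas)
    y0 = np.linalg.lstsq(A, b, rcond=None)[0]       # component in span(y_1..y_{n-1})
    e = np.linalg.svd(A)[2][-1]                     # unit normal to that span
    h = np.sqrt(max(1 - y0 @ y0, 0.0))
    y1, y2 = y0 + h * e, y0 - h * e
    y = y1 if y1 @ V[-1] >= y2 @ V[-1] else y2      # pick the solution in H_+
    return np.arccos(np.clip(y @ V[-1], -1, 1))

# e.g. Theta3(0.5, 0.6) agrees with theta_n_direct([0.5, 0.6]), and
# Theta4(0.7, 0.75, 0.8) agrees with theta_n_direct([0.7, 0.75, 0.8]).
\end{verbatim}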
\noindent{\bf 6-C. Extremal points of $\Theta_k$ on $D$.}
Let $\; {\bf{a}}=(a_1,\ldots,a_{n-1}),\; $ where $\; 0<a_i\le \theta_0<\psi.$ (Recall that $ \phi_{i,j}=\mathop{\rm dist}\nolimits(y_i,y_j);\; \cos{\psi}=z; \; \cos{\theta_0}=t_0.$) Now we consider a domain
$D({\bf{a}})$ in $H_+$, where
$$D({\bf{a}})=\{y\in H_+: \; \mathop{\rm dist}\nolimits(y,y_i)\le a_i,\; \, 1\le i\le n-1\}.$$
In other words,
$D({\bf{a}})$ is the intersection of the closed caps
$\mathop{\rm Cap}\nolimits(y_i,a_i)$ in $H_+$:
$$D({\bf{a}})=\bigcap\limits_{i=1}\limits^{n-1} {\mathop{\rm Cap}\nolimits(y_i,a_i)}\bigcap H_+ . $$
Suppose $\dim{D({\bf{a}})}=n-1$. Then $D({\bf{a}})$ has ``vertices", ``edges", and ``$k$-faces" for $k\le n-1.$
Indeed, let $$\sigma\subset I:=\{1, \ldots, n-1\},\quad 0<|\sigma|\le n-1;$$
$$\tilde F_\sigma:=\{y\in D({\bf{a}}):\; \mathop{\rm dist}\nolimits(y,y_i)=a_i \; \, \forall \; i\in\sigma\}.$$
It is easy to prove that $\dim{\tilde F_\sigma}=n-1-|\sigma|;\; \tilde F_\sigma$ belongs to the boundary $B$ of $D({\bf{a}})$; and if $\sigma\subset\sigma'$, then $\tilde F_{\sigma'}\subset \tilde F_\sigma$.
Actually, $D({\bf{a}})$ is combinatorially equivalent to an $(n-1)$-dimensional simplex.
Now we consider the minimum of $\Theta_k(\theta_1,\ldots,\theta_{n-1})$ on $D({\bf{a}})$ for $k\ge n$. In other words, we are looking for a point $p_k({\bf{a}})\in D({\bf{a}})$ such that
$$\mathop{\rm dist}\nolimits(y_k,p_k({\bf{a}}))=\mathop{\rm dist}\nolimits(y_k,D({\bf{a}})).$$
Since $\; \phi_{i,k}\ge \psi>\theta_0,\; $ all $y_k$ lie outside $D({\bf{a}})$. Clearly, $\Theta_k$ achieves its minimum at some point in $B$. Therefore, there is $\sigma\subset I$ such that $$p_k({\bf{a}})\in \tilde F_\sigma \eqno (6.2)$$
Suppose $\sigma=I,$ then $\tilde F_\sigma$ is a vertex of $D({\bf{a}})$. Let us denote this point by $p_*({\bf{a}})$. Note that the function $\Theta_k$ at the point $p_*({\bf{a}})$ is equal to $\Theta_k({\bf{a}})$.
Let $\sigma_k({\bf{a}})$ denote $\sigma\subset I$ of the maximal size such that $\sigma$ satisfies $(6.2)$.
Then for $\sigma_k({\bf{a}})=I,\; p_k({\bf{a}})=p_*({\bf{a}})$, and for $|\sigma_k({\bf{a}})|<n-1, \; p_k({\bf{a}})$ belongs to the open part of $\tilde F_{\sigma_k({\bf{a}})}$.
Consider $n=3$. There are two cases for $p_k({\bf{a}})$ (see Fig. 7): $\; p_3({\bf{a}})=p_*({\bf{a}})$ is the vertex
$\tilde F_{\{1,2\}},$ and $p_4({\bf{a}})$ is the intersection in $H_+$ of the great circle passing through $y_1, \; y_4$ and the circle $\tilde S(y_1,a_1)$ of center $y_1$ and radius $a_1$ ($\tilde F_{\{1\}}\subset \tilde S(y_1,a_1)$).
The same holds for all dimensions.
Denote by $S_\sigma(k)$ the great $|\sigma|$-dimensional sphere passing through $y_i,\; i\in\sigma,$ and $y_k.$ Let $\tilde S(y_i,a_i)$ be the sphere of center $y_i$ and radius $a_i$; and for $\sigma\subset I$
$$\tilde S_\sigma:=\bigcap\limits_{i\in\sigma} \tilde S(y_i,a_i).$$
Denote by $s(\sigma,k)$ the intersection of $S_\sigma(k)$ and $\tilde S_\sigma$ in $H_+$,
$$s(\sigma,k)=S_\sigma(k)\bigcap \tilde S_\sigma\bigcap H_+$$
\begin{center}
\begin{picture}(320,140)(-160,-70)
\put(-150,-30){\circle*{4}}
\put(-60,-30){\circle*{4}}
\put(-130,45){\circle*{4}}
\put(-50,45){\circle*{4}}
\put(-120,10){\circle*{3}}
\put(-109.5,0){\circle*{3}}
\qbezier (-100,-30) (-100,-22) (-102,-15)
\qbezier (-102,-15) (-105,-8) (-109,-1)
\qbezier (-109,-1) (-113,5) (-120,10)
\qbezier (-120,10) (-125,4) (-128,-2)
\qbezier (-128,-2) (-131,-8) (-132,-22)
\put(-132,-22){\line(0,-1){8}}
\put(-150,-30){\line(1,0){90}}
\multiput(-150,-30)(5,3.75){20}
{\circle*{1}}
\put(-155,-40){$y_1$}
\put(-65,-40){$y_2$}
\put(-144,40){$y_3$}
\put(-45,40){$y_4$}
\put(-127,-23){$D({\bf{a}})$}
\put(-170,10){$H_+$}
\put (-99,-20){$\tilde F_{\{1\}}$}
\put (-150,-4){$\tilde F_{\{2\}}$}
\put (-130,15){$p_*({\bf{a}})$}
\put (-104,-3){$p_4({\bf{a}})$}
\put(-118,-65){Fig. 7}
\put(80,-65){Fig. 8}
\qbezier (60,-30) (100,-10) (140,-10)
\qbezier (40,20) (80,40) (120,40)
\qbezier (77,-37) (72,3) (47,43)
\qbezier (130,-25) (125,15) (100,55)
\put(76,5){$E({\bf b},{\bf{a}})$}
\put(27,-36){$\theta_2={ b}_2$}
\put(10,11){$\theta_2=a_2$}
\put(63,-45){$\theta_1={ b}_1$}
\put(115,-33){$\theta_1=a_1$}
\end{picture}
\end{center}
\begin{lemma} Suppose $D({\bf{a}})\ne\emptyset, \; 0<a_i\le \theta_0\; $ for all $i$, and
$\; k\ge n.\; $
Then $(i) \; \; p_k({\bf{a}})\in s(\sigma_k({\bf{a}}),k),\\(ii) $
if $\; s(\sigma,k)\ne\emptyset, \; |\sigma|<n-1$, then $s(\sigma,k)$ consists of the one point $p_k({\bf{a}}).$
\end{lemma}
\begin{proof}
$(i)$ Let $\theta_k^*:=\Theta_k(p_k({\bf{a}}))=\mathop{\rm dist}\nolimits(y_k,p_k({\bf{a}})).$ Since $\Theta_k$ achieves its minimum at $p_k({\bf{a}})$, the sphere $\tilde S(y_k,\theta_k^*)$ touches the sphere
$\tilde S_{\sigma_k({\bf{a}})}$ at $p_k({\bf{a}})$. If a sphere touches the intersection of spheres,
then the touching point belongs to the great sphere passing through the centers of these spheres. Thus
$\; p_k({\bf{a}})\in S_{\sigma_k({\bf{a}})}(k)$.
\noindent $(ii)\; $ Note that $\, s(\sigma,k)\, $ belongs to the intersection in $\, H_+\, $ of the spheres
$\; \tilde S(y_i,a_i), \\ i\in\sigma,$ and $S_\sigma(k)$.
Any intersection of spheres is also a sphere. Since
$$\dim{S_\sigma(k)}+\dim{\tilde S_\sigma}=n-1,$$
this intersection is empty, or is a $0-$dimensional sphere (i.e. 2-points set). In the last case, one point lies in $H_+$, and another one in $H_-.\; $ Therefore, $s(\sigma,k)=\emptyset,$ or
$s(\sigma,k)=\{p\}.$ Denote by $\sigma'$ the maximal size $\sigma'\supset\sigma$ such that
$s(\sigma',k)=\{ p\}.$ It is not hard to see that $\tilde S(y_k,\mathop{\rm dist}\nolimits(y_k,p))$ touches
$\tilde S_{\sigma'}$ at $p$. Thus $\; p=p_k({\bf{a}})$.
\end{proof}
Lemma 5 implies a simple method for calculating the minimum of $\Theta_k$ on $D({\bf{a}})$. For this we can consider $s(\sigma,k), \; \sigma\subset I$, and if $s(\sigma,k)\ne\emptyset,$ then
$s(\sigma,k)=\{p_k({\bf{a}})\},$ and $\Theta_k$ attains its minimum at this point. In the case
when $\Delta_n$ is a simplex we can find the minimum by a very simple method.
\begin{cor} Suppose $|Y|=n, \; 0<a_i\le \theta_0\; $ for all $i$, and $D({\bf{a}})$ lies inside $\Delta_n.$ Then
$$\theta_n\ge\Theta_n(a_1,\ldots,a_{n-1}) \; \mbox{ for all }\; y\in D({\bf{a}}).$$
\end{cor}
\begin{proof} Clearly, $\Delta_n$ is a simplex. Since $D({\bf{a}})$ lies inside $\Delta_n$, for $|\sigma|<n-1$ the intersection of $\tilde S_\sigma$ and $S_\sigma(k)$ is empty. Thus $p_n({\bf{a}})=p_*({\bf{a}}).$
\end{proof}
\noindent{\bf 6-D. Upper bounds on $H_m$.}
Suppose $\; \dim{\Delta_m}=n-1, \; $ and $\; y_1\ldots y_{n-1}$ is a facet of $\Delta_m$.
Then (see {\bf 5-F} for the definition of $H_m$ and $T(Y,\theta_0)$)
$$ H_m(y)=F(\theta_1,\ldots,\theta_{n-1},\Theta_n,\ldots,\Theta_m)=\tilde F_m(\theta_1,\ldots,\theta_{n-1}),$$
where
$$\tilde F_m(\theta_1,\ldots,\theta_{n-1}):=f(1)+\tilde f(\theta_1)+\ldots+\tilde f(\theta_{n-1})+\tilde f(\Theta_n(\theta_1,\ldots,\theta_{n-1}))
+\ldots$$ $+\, \tilde f(\Theta_m(\theta_1,\ldots,\theta_{n-1})).$
\begin{lemma} Suppose $f\in{\it\Phi}^*(z), \;
|Y|=m,\; \dim{\Delta_m}=n-1,\; y_1\ldots y_{n-1}$ is a facet of $\Delta_m, \; \mathop{\rm dist}\nolimits(y_i,y_j)\ge \psi > \theta_0\; $ for $\; i\ne j$,
$\; 0\le{ b}_i<a_i\le\theta_0\; $ for $\; i=1,\ldots,n-1;$
and $\; \Theta_k({\bf{a}})\le\theta_0\; $ for all $\; k\ge n.\; $
If $\; D({\bf{a}})\ne\emptyset,\; $ then
$$H_m(y)\le \Phi_Y({\bf b},{\bf{a}}) \quad \mbox{for any} \quad y\in E({\bf b},{\bf{a}}):=D({\bf{a}})\setminus U({\bf b}),$$
where
$$\Phi_Y({\bf b},{\bf{a}}):=f(1)+\tilde f({ b}_1)+\ldots+\tilde f({ b}_{n-1})+
\tilde f(\Theta_n(p_n({\bf{a}})))+\ldots+\tilde f(\Theta_m(p_m({\bf{a}}))),$$
$$U({\bf b}):=\bigcup\limits_{i=1}^{n-1} \mathop{\rm Cap}\nolimits(y_i,{b}_i).$$
\end{lemma}
\begin{proof} We have for $1\le i\le n-1$ and $y\in E({\bf b},{\bf{a}}):\; \theta_i\ge { b}_i$ (Fig. 8). By the monotonicity assumption this implies
$\tilde f(\theta_i)\le \tilde f({ b}_i).$ On the other hand,
$\; y\in D({\bf{a}}).\; $ Then Lemma 5 yields
$\; \tilde f(\theta_k)\le \tilde f(\Theta_k(p_k({\bf{a}})))\; $ for $\; k\ge n.$
\end{proof}
From Corollary 5 and Lemma 6 we obtain
\begin{cor} Let $|Y|=n$. Suppose $f, \; {\bf a},\; {\bf b},$ and $Y$ satisfy the assumptions of Lemma 6 and Corollary 5. Then for any $\; y\in E({\bf b},{\bf{a}}):$
$$H_m(y)\le f(1)+\tilde f({ b}_1)+\ldots+\tilde f({ b}_{n-1})+
\tilde f(\Theta_n({\bf{a}})).$$
\end{cor}
Let $K(n,\theta_0):=[0,\theta_0]^{n-1},\; $ i.e. $\; K(n,\theta_0)$ is an $(n-1)$-dimensional cube of side length $\theta_0$. Consider for $K(n,\theta_0)$ the cubic grid $L(N)$ of side length $\varepsilon,$ where $\varepsilon=\theta_0/N$ for a given positive integer $N$. Then the grid (tessellation) $L(N)$ consists of $N^{n-1}$ cells,
any cell $c\in L(N)$
is an $(n-1)$-dimensional cube of side length $\varepsilon,$ and for any point $(\theta_1,\ldots,\theta_{n-1})$ in $c$ we have
$${ b}_i(c)\le\theta_i\le a_i(c), \quad a_i(c)={ b}_i(c)+\varepsilon, \quad
i=1,\ldots,n-1.$$
Let $\tilde L(N)$ be the subset of cells $c$ in $L(N)$ such that
$D({\bf{a}}(c))\ne\emptyset.$ There exists $c\in L(N)$ such that $H_m$ attains its maximum on $T(Y,\theta_0)$ at some point in $E({\bf b}(c),{\bf{a}}(c))$. Therefore, Lemma 6 yields
\begin{lemma} Suppose $f$ and $Y$ satisfy the assumptions of Lemma 6, $N$ is a positive integer, and $y\in\Delta_m \;$ is such that $\mathop{\rm dist}\nolimits(y,y_i)\le \theta_0$ for all $i$. Then
$$H_m(y)\le \max\limits_{c\in\tilde L(N)} \{\Phi_Y({\bf b}(c),{\bf{a}}(c))\}$$
\end{lemma}
\noindent{\bf 6-E. Upper bounds on $h_m$.}
Suppose $\Delta_m$ is a regular simplex of edge length $\psi.$
Then the efficient function $F$ is a symmetric function in the variables $\theta_1, \ldots, \theta_m$. Consider this problem only on the domain
$$\Lambda:=\{y\in\Delta_m:\; \psi-\theta_0\le\theta_1\le\theta_2\le \ldots\le\theta_m\le \theta_0\}.$$
Let $L_\Lambda(N)$ be the subset of cells $c$ in $\tilde L(N)$ such that
$c\bigcap\Lambda\ne\emptyset.$ If $c\in L_\Lambda(N)$ is such that $E({\bf b}(c),{\bf{a}}(c))$ lies inside $\Delta_m$,\footnote{Clearly, this holds for all $c\in L_\Lambda(N)$ if $N$ is sufficiently large.} then
we have an explicit expression for $\Phi_m(c):=\Phi_Y({\bf b}(c),{\bf{a}}(c))$ (see Corollary 6). For $n=4$, Theorem 5 implies that $\Delta_m$ is a regular simplex for $m=2, 3, 4.$ Thus from Lemma 7 it follows that
$$h_m\le \lambda_m(N,\psi,\theta_0):=\max\limits_{c\in L_\Lambda(N)}
\{\Phi_m(c)\}.$$
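For the regular-simplex case $m=n$ the resulting computation is particularly simple. The following sketch (our own simplified illustration, using the functions $\tilde f$ and $\Theta_n$ of {\bf 6-B}; cells on which $\Theta_n$ is undefined or exceeds $\theta_0$ contribute nothing and are skipped, and the restriction to $\Lambda$ is omitted, which only weakens the bound) computes an upper bound of the form $\lambda_n(N,\psi,\theta_0)$.
\begin{verbatim}
# Sketch for m = n: the grid bound from Lemma 7 / Corollary 6 for a regular
# simplex Delta_n of edge length psi.  f1 = f(1); f_tilde and Theta_n are the
# functions of 6-B (for n = 4, psi = 60 degrees, Theta_n is Theta4 above).
import itertools
import numpy as np

def lambda_n_grid(f1, f_tilde, Theta_n, n, theta0, N):
    eps = theta0 / N
    best = -np.inf
    for cell in itertools.product(range(N), repeat=n - 1):
        b = np.array(cell, dtype=float) * eps      # lower corner b(c)
        a = b + eps                                # upper corner a(c)
        with np.errstate(invalid='ignore'):
            t = Theta_n(*a)                        # min of Theta_n on D(a) (Cor. 5)
        if np.isnan(t) or t > theta0:              # empty or inadmissible cell
            continue
        best = max(best, f1 + sum(f_tilde(x) for x in b) + f_tilde(t))
    return best
\end{verbatim}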
Now we consider the case $n=4,\; m=5.$ Theorem 5 yields: $\Delta_5$ is isometric to $P_5(\alpha)$ for some $\alpha\in [\psi,\psi':=\arccos{(2z-1)}]$ (see Fig. 6). Let the vertices $y_1, y_2, y_3$ of
$P_5(\alpha)$ be fixed. Then the vertices $y_4(\alpha),\; y_5(\alpha)$ are uniquely determined by $\alpha.$
Note that for any $y\in D(\theta_0,\theta_0,\theta_0)$ the distance
$\theta_4(\alpha):=\mathop{\rm dist}\nolimits(y,y_4(\alpha))$ increases, and $\theta_5(\alpha)$ decreases whenever $\alpha$ increases.
Let $\alpha_1=\psi,\, \alpha_2,\ldots, \alpha_N, \, \alpha_{N+1}=\psi'$ be points in $[\psi,\psi']$ such that
$\alpha_{i+1}=\alpha_i+\epsilon$, where $\epsilon=(\psi'-\psi)/N.$
Then
$$\theta_4(\alpha_i)<\theta_4(\alpha_{i+1}), \quad \theta_5(\alpha_i)>\theta_5(\alpha_{i+1}),$$
so then
$$\tilde f(\theta_4(\alpha_i))>\tilde f(\theta_4(\alpha_{i+1})), \quad \tilde f(\theta_5(\alpha_i))<
\tilde f(\theta_5(\alpha_{i+1})).$$
Combining this with Lemma 7, we get
$$h_5\le \lambda_5(N,\psi,\theta_0):=f(1)+
{\max\limits_{c\in \tilde L(N)}\{R_{1,2,3}(c)+\max\limits_{1\le i\le N}\{R_{4,5}(c,i)\}\}},$$
$$R_{1,2,3}(c)=\tilde f({ b}_1(c))+\tilde f({ b}_2(c))+\tilde f({ b}_3(c)),$$ $$
R_{4,5}(c,i)=\tilde f(\Theta_4(p_4({\bf{a}}(c),\alpha_{i})))+\tilde f(\Theta_5(p_5({\bf{a}}(c),\alpha_{i+1}))), $$
where $p_k({\bf{a}},\alpha)=p_k({\bf{a}})$ with $y_k=y_k(\alpha).$
Clearly, $\lambda_m(N+1,\psi,\theta_0)\le\lambda_m(N,\psi,\theta_0)$. It is not hard to show that
$$h_m=\lambda_m(\psi,\theta_0):=\lim_{N\to\infty}{\lambda_m(N,\psi,\theta_0)}.$$
Finally, let us consider the case $n=4,\; m=6.$ In this case we give an upper bound on $h_6$ by a separate argument.
\begin{lemma}
Let $\; n=4, \; f\in{\it\Phi}^*(z), \; \sqrt{z}>t_0>z,\; $
$ \theta_0'\in [\arccos{\sqrt{z}},\theta_0].$ Then
$$h_6\, \le\, \max{\{\; \tilde f(\theta_0')+\lambda_5(\psi,\theta_0),\;
f(-\sqrt{z})+\lambda_5(\psi,\theta_0')\; \}}.$$
\end{lemma}
\begin{proof}
Let $Y=\{y_1,\ldots, y_6\}\subset C(e_0,\theta_0)\subset{\bf S}^3,\; $ where $Y$ is an optimal $z$-code. We may assume that
$\theta_1\le\theta_2\le\ldots\le\theta_6.\; $
Then from Corollary 3$(i)$ follows that $$\; \theta_0\ge\theta_6\ge\theta_5\ge\arccos{\sqrt{z}}.$$
Let us consider two cases: (a) $\; \theta_0\ge\theta_6\ge\theta_0',\; \, $
(b) $\; \theta_0'\ge\theta_6\ge\arccos{\sqrt{z}}.\\ \\$
(a) We have $\; h_6=H(y_0;y_1,\ldots,y_6)=H(y_0;y_1,\ldots,y_5)+\tilde f(\theta_6),$
$$H(y_0;y_1,\ldots,y_5)\le h_5=\lambda_5(\psi,\theta_0), \;
\quad \tilde f(\theta_6)\le\tilde f(\theta_0'). $$
Then $\; h_6 \le \tilde f(\theta_0')+\lambda_5(\psi,\theta_0). \\ \\$
(b) In this case all $\theta_i\le\theta_0',\; $ i.e. $Y\subset C(e_0,\theta_0')$. Since
$$H(y_0;y_1,\ldots,y_5)\le \lambda_5(\psi,\theta_0'), \;
\quad \tilde f(\theta_6)\le f(-\sqrt{z}), $$
it follows that $\; h_6\le f(-\sqrt{z})+\lambda_5(\psi,\theta_0'). $
\end{proof}
We have proved the following theorem.
\begin{theorem} Suppose $n=4, \; f\in{\it\Phi}^*(z), \; \sqrt{z}>t_0>z>0$,
and $N$ is a positive integer. Then
$\\ \\ (i)\quad h_0=f(1),\quad h_1=f(1)+f(-1);\\ \\
(ii) \quad \! h_m=\lambda_m(\psi,\theta_0)
\le\lambda_m(N,\psi,\theta_0)\; $ for $\; 2\le m\le 5;\\ \\ $
$(iii) \; \; h_6\le\max{\{\tilde f(\theta_0')+\lambda_5(\psi,\theta_0),\, f(-\sqrt{z})+\lambda_5(\psi,\theta_0')\}}\; \, \forall \; \, \theta_0'\in [\arccos{\sqrt{z}},\theta_0].$
\end{theorem}
\noindent{\bf 6-F. Proof of Lemma B.}
First we show that $f_4\in{\it\Phi}^*(1/2)$ (see Fig. 9). Indeed,
the polynomial $f_4$ has two roots on $[-1,1]$: $t_1=-t_0, \; t_0\approx 0.60794, \; t_2=1/2$;
$\; f_4(t)\le 0\;$ for $\;t\in [-t_0,1/2],$ and $f_4$ is a monotone decreasing function on the interval $[-1,-t_0].$ The last property holds because there are no zeros of the derivative
$f'_4(t)$ on $[-1,-t_0]$. Thus, $f_4\in{\it\Phi}^*(1/2)$.
We have $t_0>0.6058.$ Then Corollary 3$(ii)$ gives $\mu\le 6.$ For the calculation of $h_m$ let us apply Theorem 6 with $\psi=\arccos{z}=60^\circ$ and $\theta_0=\arccos{t_0}\approx 52.5588^\circ.$ We get
$$h_0=f(1)=18.774,\quad h_1=f(1)+f(-1)=24.48.$$
$H_2$ achieves its maximum at $\theta_1=30^\circ.$ Then
$$h_2= f(1)+2f(-\cos{30^\circ})\approx 24.8644.$$
For $m=3$ we have
$$h_3=\lambda_3(60^\circ,\theta_0)\approx 24.8345$$
at $\theta_3=\theta_0, \; \theta_1=\theta_2\approx30.0715^\circ.$
The polynomial $H_4$ attains its maximum $$\; h_4\approx 24.818\;$$ at the point with $\; \theta_1=\theta_2\approx 30.2310^\circ,\;\; \theta_3=\theta_4\approx 51.6765^\circ,$ and
$$h_5\approx 24.6856$$
at $\alpha=60^\circ,\;
\theta_1\approx 42.1569^\circ,\; \theta_2=\theta_4=32.3025^\circ, \; \theta_3=\theta_5=\theta_0.$
Let $\theta_0'=50^\circ.$ We have
$\; \tilde f(50^\circ)\approx 0.0906, \; \, \arccos{\sqrt{z}}=45^\circ, \; \, \tilde f(45^\circ)\approx 0.4533,$
$$ \lambda_5(60^\circ,\theta_0)=h_5\approx 24.6856, \quad \; \lambda_5(60^\circ,50^\circ)\approx 23.9181,$$
$$h_6\le \max{\{ \, \tilde f(50^\circ)+h_5,\, \tilde f(45^\circ)+\lambda_5(60^\circ,50^\circ) \, \}}\approx 24.7762<h_2.$$
Thus $\; h_{max}=h_2<25$. By $(4.2)$, we have $S(X)<25M$.
\begin{center}
\begin{picture}(320,200)(-160,-110)
\thinlines
\put(-135,-80){\line(0,1){160}}
\put(135,-80){\line(0,1){160}}
\put(-135,-80){\line(1,0){270}}
\put(-135,80){\line(1,0){270}}
\put(-135,-60){\line(1,0){270}}
\thicklines
\qbezier (-135,54)(-132,51)(-129,45)
\qbezier (-129,45)(-126,37)(-123,27)
\qbezier (-123,27)(-120,18)(-117,8)
\qbezier (-117,8)(-114,-2)(-111,-11)
\qbezier (-111,-11)(-108,-19)(-105,-27)
\qbezier (-105,-27)(-99,-40) (-93,-49)
\qbezier (-93,-49)(-90,-52)(-87,-55)
\qbezier (-87,-55)(-84,-57)(-81,-59)
\qbezier (-81,-59)(-78,-60)(-75,-61)
\qbezier (-75,-61)(-69,-62)(-53,-62)
\qbezier (-53,-62)(-45,-61)(-37,-61)
\qbezier (-37,-61)(-10,-61)(15,-61)
\qbezier (15,-61)(23,-62)(30,-63)
\qbezier (30,-63)(38,-65)(45,-67)
\qbezier (45,-67)(53,-69)(60,-71)
\qbezier (60,-71)(68,-72)(75,-71)
\qbezier (75,-71)(83,-68)(90,-60)
\qbezier (90,-60)(98,-50)(105,-35)
\qbezier (105,-35)(113,-16)(120,8)
\qbezier (120,8)(128,36)(134,69)
\thinlines
\multiput (-120,-80)(15,0){17}
{\line(0,1){2}}
\multiput (-135,-40)(0,20){6}
{\line(1,0){2}}
\put(-143,-90){$-1$}
\put(-119,-90){$-0.8$}
\put(-89,-90){$-0.6$}
\put(-59,-90){$-0.4$}
\put(-29,-90){$-0.2$}
\put(13,-90){$0$}
\put(40,-90){$0.2$}
\put(70,-90){$0.4$}
\put(100,-90){$0.6$}
\put(130,-90){$0.8$}
\put(-143,-62){$0$}
\put(-143,-42){$1$}
\put(-143,-22){$2$}
\put(-143,-2){$3$}
\put(-143,18){$4$}
\put(-143,38){$5$}
\put(-143,58){$6$}
\put(-150,-82){$-1$}
\put(-78,-110){Fig. 9. The graph of the function $f_4(t)$}
\end{picture}
\end{center}
\section {Concluding remarks }
This extension of the Delsarte method can be applied to other dimensions and spherical $\psi$-codes.
The most interesting application is a new proof for the Newton-Gregory problem, $k(3)<13.$ In dimension three the computations of $h_m$ are technically much easier than for $n=4$ (see \cite{Mus2}).
Let
$$f(t) = \frac{2431}{80}t^9 - \frac{1287}{20}t^7 + \frac{18333}{400}t^5 + \frac{343}{40}t^4 - \frac{83}{10}t^3 - \frac{213}{100}t^2+\frac{t}{10} - \frac{1}{200}. $$
Then $f\in{\it\Phi}^*(1/2),\; t_0\approx 0.5907, \; \mu(3,1/2,f)=4,$ and $h_{max}=h_1=12.88.$
The expansion of $f$ in terms of Legendre polynomials $P_k=G_k^{(3)}$ is
$$f = P_0 + 1.6P_1 + 3.48P_2 + 1.65P_3 + 1.96P_4 + 0.1P_5 + 0.32P_9.$$
Since $c_0=1,\; c_i\ge 0,$ we have $ k(3)\le h_{max}=12.88<13.$
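These values are easy to reproduce; for instance, the following short check (our own illustration, using numpy's Legendre routines) recovers the expansion coefficients and the value $h_1=f(1)+f(-1)=12.88$.
\begin{verbatim}
# Check: h_1 = f(1) + f(-1) and the Legendre expansion of f.
import numpy as np

# coefficients of f in the monomial basis, ordered by increasing degree
c = np.array([-1/200, 1/10, -213/100, -83/10, 343/40,
              18333/400, 0, -1287/20, 0, 2431/80])
f = np.polynomial.Polynomial(c)

print(f(1.0) + f(-1.0))                    # 12.88
print(np.polynomial.legendre.poly2leg(c))  # approx. [1, 1.6, 3.48, 1.65, 1.96, 0.1, 0, 0, 0, 0.32]
\end{verbatim}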
Direct application of the method developed in this paper, presumably could lead to some improvements in the upper bounds on kissing numbers in dimensions 9, 10, 16, 17, 18 given in \cite[Table 1.5]{CS}. (``Presumably" because the equality $\; h_{max}=E\; $ is not proven yet.)
In 9 and 10 dimensions Table 1.5 gives: $\\306\le k(9)\le 380,\quad 500\le k(10)\le 595.$\\
Our method gives:\\
$n=\;\,9:\; \deg{f}=11,\; E=h_1=366.7822,\; t_0=0.54;$\\
$n=10:\; \deg{f}=11,\; E=h_1=570.5240,\; t_0=0.586$.\\
For these dimensions there is a good chance to prove that $\\k(9)\le 366,\; k(10)\le 570.$
From the equality $k(3)=12$ it follows that $\varphi_3(13)<60^\circ.$
The method gives \\ $\varphi_3(13)<59.4^\circ$ ($\deg{f}=11$).
The lower bound on $\varphi_3(13)$ is $57.1367^\circ $ \cite{FeT}. Therefore, we have $57.1367^\circ\le\varphi_3(13)<59.4^\circ.$
Using our approach it can be proven that $\varphi_4(25)<59.81^\circ, \; \varphi_4(24) < 60.5^\circ.$
This improves the bounds:
$$\varphi_4(25)<60.79^\circ, \;\; \varphi_4(24) < 61.65^\circ \; \cite{Lev2} \; (\mbox{cf. } \cite{Boyv});
\;\; \varphi_4(24) < 61.47^\circ \; \cite{Boyv};$$
$$ \varphi_4(25)<60.5^\circ, \quad \varphi_4(24) < 61.41^\circ \; \cite{AB2}.$$
Now in these cases we have
$$\quad 57.4988^\circ\ < \varphi_4(25) < 59.81^\circ,
\quad 60^\circ \le \varphi_4(24) < 60.5^\circ. \footnote{The long-standing conjecture: the maximal kissing arrangement in four dimensions is unique up to isometry (in other words, it is the ``24-cell"), and
$\varphi_4(24)=60^\circ$.}
$$
However, for $n=5,6,7$ direct use of this extension of the Delsarte method doesn't give better upper bounds on $k(n)$ than Odlyzko-Sloane's bounds \cite{OdS}. It is an interesting problem to find better methods.
\pagebreak
\centerline{\bf\Large Appendix. An algorithm }
\centerline{\bf\Large for computation suitable polynomials $f(t)$}
In this Appendix we present an algorithm for the computation of ``optimal" \footnote{Open problem: is it true that for given $t_0, d$ this algorithm defines $f$ with minimal $h_{max}$?} polynomials $f$ such that
$f(t)$ is a monotone decreasing function on the interval $[-1,-t_0],$ and
$f(t)\le 0 \; \mbox{ for } \; t\in [-t_0,z], \quad t_0>z\ge 0$. This algorithm is based on our knowledge about the optimal arrangement of points $y_i$ for given $m$. The coefficients $c_k$ can be found via discretization and linear programming; such a method was already employed by Odlyzko and Sloane \cite{OdS} for the same purpose.
Consider a polynomial $f$ represented in the form $f(t)=1+\sum\limits_{k=1}\limits^d c_kG_k^{(n)}(t)$. We have the following constraints for $f$: (C1) $\;\; c_k\ge 0,\;\; 1\le k\le d$;\\ (C2) $\; f(a)>f(b)\;$ for $\; -1\le a<b\le -t_0$;\quad (C3) $\; f(t) \le 0\;$ for $\; -t_0\le t\le z.$
We do not know $e_0$ where $H_m$ attains its maximum, so for evaluation of $h_m$ let us use $e_0=y_c,$ where $y_c$ is the center of $\Delta_m.$ All vertices $y_k$ of $\Delta_m$ are at the distance of $\rho_m$ from $y_c,$ where $$\cos{\rho_m}=\sqrt{(1+(m-1)z)/m}.$$
When $m=2n-2, \; \Delta_m$ presumably is a regular $(n-1)$-dimensional cross-polytope.\footnote {It is also an open problem.} In this case $\; \cos{\rho_m}=\sqrt{z}.$
Let $I_n=\{1,\ldots,n\}\bigcup \{2n-2\}, \;\; m\in I_n, \;\; b_m=-\cos{\rho_m},\; $ then \\ $H_m(y_c)=f(1)+mf(b_m).\;\; $ If $F_0$ is such that $H(y_0;Y) \le E=F_0+f(1),\;$ then (C4) $\; f(b_m)\le F_0/m, \;\; m\in I_n.\;$ Note that $E=F_0+1+c_1+\ldots+c_d=F_0+f(1)$ is a lower estimate of $h_{max}$.
A polynomial $f$ that satisfies (C1-C4) and gives the minimal $E$
can be found by the following
\centerline{\bf Algorithm.}
\noindent Input: $\; n,\; z,\; t_0,\; d,\; N.$
\noindent Output: $\; c_1,\ldots, c_d,\; F_0, \; E.$
\noindent {\it First} replace (C2) and (C3) by a finite set of inequalities at the points\\ $a_j=-1+\epsilon j,\;\; 0\le j \le N, \;\; \epsilon=(1+z)/N.$
\noindent {\it Second} use linear programming to find $F_0, c_1,\ldots, c_d$ so as to minimize \\
$E-1=F_0+\sum\limits_{k=1}\limits^dc_k\;\;$ subject to the constraints
$$c_k\ge 0,\quad 1\le k\le d;\qquad \sum\limits_{k=1}\limits^dc_kG_k^{(n)}(a_j)\ge \sum\limits_{k=1}\limits^dc_kG_k^{(n)}(a_{j+1}), \quad a_j\in [-1,-t_0];$$
$$1+\sum\limits_{k=1}\limits^dc_kG_k^{(n)}(a_j)\le 0,\quad a_j \in [-t_0,z];\quad
1+\sum\limits_{k=1}\limits^dc_kG_k^{(n)}(b_m)\le F_0/m,\quad m\in I_n.$$
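A sketch of this linear program is given below (our own illustration: here $G_k^{(n)}$ is taken to be the Gegenbauer polynomial $C_k^{((n-2)/2)}$ normalized so that $G_k^{(n)}(1)=1$, which for $n=3$ gives the Legendre polynomials, consistently with the identification $P_k=G_k^{(3)}$ above; scipy's \texttt{linprog} is one possible LP solver).
\begin{verbatim}
# Sketch of the Appendix algorithm.  Variables x = (c_1, ..., c_d, F_0);
# minimize F_0 + sum_k c_k subject to the discretized constraints (C1)-(C4).
import numpy as np
from scipy.optimize import linprog
from scipy.special import gegenbauer

def suitable_polynomial(n, z, t0, d, N):
    alpha = (n - 2) / 2.0
    G = [gegenbauer(k, alpha) for k in range(d + 1)]
    Gn = lambda k, t: G[k](t) / G[k](1.0)            # normalized: G_k(1) = 1

    a = np.array([-1 + (1 + z) * j / N for j in range(N + 1)])
    A_ub, b_ub = [], []
    for j in range(N):                               # (C2): f decreasing on [-1, -t0]
        if a[j + 1] <= -t0:
            A_ub.append([Gn(k, a[j + 1]) - Gn(k, a[j]) for k in range(1, d + 1)] + [0.0])
            b_ub.append(0.0)
    for j in range(N + 1):                           # (C3): f(a_j) <= 0 on [-t0, z]
        if -t0 <= a[j] <= z:
            A_ub.append([Gn(k, a[j]) for k in range(1, d + 1)] + [0.0])
            b_ub.append(-1.0)
    I_n = sorted(set(range(1, n + 1)) | {2 * n - 2})
    for m in I_n:                                    # (C4): f(b_m) <= F_0 / m
        bm = -np.sqrt(z) if m == 2 * n - 2 else -np.sqrt((1 + (m - 1) * z) / m)
        A_ub.append([Gn(k, bm) for k in range(1, d + 1)] + [-1.0 / m])
        b_ub.append(-1.0)

    cost = [1.0] * d + [1.0]                         # E - 1 = sum_k c_k + F_0
    bounds = [(0, None)] * d + [(None, None)]        # (C1) and F_0 free
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    c, F0 = res.x[:d], res.x[-1]
    return c, F0, 1 + c.sum() + F0                   # coefficients c_k, F_0, E
\end{verbatim}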
Let us note again that $E \le h_{max}$, and $E = h_{max}$ only if $h_{max} = H_{m_0}(y_c)$ for some $m_0\in I_n.$
\end{document}
\begin{document}
\address{Department of Mathematics, Columbia University, New York, NY 10027}
\email{[email protected]}
\title{K\"ahler-Ricci flow on blowups along submanifolds}
\author{ Bin Guo}
\thanks{}
\maketitle
\begin{abstract}
In this short note, we study the behavior of K\"ahler-Ricci flow on K\"ahler manifolds which contract divisors to smooth submanifolds. We show that the K\"ahler potentials are H\"older continuous and the flow converges sequentially in Gromov-Hausdorff topology to a compact metric space which is homeomorphic to the base manifold.
\end{abstract}
\section{Introduction}
The Ricci flow, introduced by Hamilton (\cite{H}) in 1982, has been a powerful tool in solving problems in geometry and analysis. It deforms any metric with positive Ricci curvature on a real $3$-dimensional manifold to a metric with constant curvature (\cite{H}). By performing surgery through singular times, Perelman (\cite{P}) used Ricci flow to solve the geometrization conjecture for $3$-dimensional manifolds. On the complex side, the Ricci flow preserves the K\"ahler condition (\cite{C}) and is reduced to a scalar equation of Monge-Amp\`ere type, which after suitable normalization converges to a solution of the Calabi conjecture (\cite{Y, C}). The non-K\"ahler analogues of Ricci flow have also generated much interest recently; among them are the Chern-Ricci flow (\cite{TW1}), the Anomaly flow (\cite{PPZ1}), etc., and we refer to \cite{PPZ2} for a survey of the recent development of non-K\"ahler geometric flows.
The analytic minimal model program, laid out in \cite{ST2}, predicts how the K\"ahler-Ricci flow behaves on a projective variety. It is conjectured that the K\"ahler-Ricci flow will either collapse in finite time, or deform any projective variety to its minimal model after finitely many divisorial contractions or flips in the Gromov-Hausdorff topology. There are various results on the finite time collapsing of K\"ahler-Ricci flow, see for example \cite{S1,SSW, PSSW, TWY, ToZh} and references therein. The behavior of K\"ahler-Ricci flow on some small contractions is studied in \cite{S13, SY} and it is shown that the flow forms a continuous path in Gromov-Hausdorff topology. In \cite{SW1,SW2}, Song and Weinkove study the divisorial contractions when the divisor is contracted to discrete points, and it is shown that the flow converges in Gromov-Hausdorff topology to a metric space which is isometric to the metric completion of the base manifold with the smooth limit of the flow outside the divisor, and the flow can be continued on the new space. The main purpose of this note is to generalize their results to divisorial contractions when the divisor is contracted to a higher dimensional subvariety.
Let $Y$ be a K\"ahler manifold and $N\subset Y$ be a complex submanifold of codimension $k\ge 1$. Let $X$ be the K\"ahler manifold obtained by blowing up $Y$ along $N$, $\pi: X\to Y$ be the blown-down map and $E= \pi^{-1}(N)$ be the exceptional divisor in $X$. We consider the (unnormalized) K\"ahler-Ricci flow on $X$:
\begin{equation}\label{eqn:KRF}
\left\{\begin{aligned}
&\frac{\partial \omega}{\partial t} = - \ric(\omega),\\
&\omega(0) = \omega_0,
\end{aligned}\right.
\end{equation}
for a suitable fixed K\"ahler metric $\omega_0$ on $X$. We assume the limit cohomology class satisfies $[\omega_0] + T K_X = [\pi^* \omega_Y]$ for some K\"ahler metric $\omega_Y$ on $Y$, where the maximal existence time (see \cite{TZ}) of the flow \eqref{eqn:KRF} is given by $$T = \sup\{t>0: ~ [\omega_0] + t K_X \text{ is K\"ahler}\}<\infty.$$
We define the reference metrics along the flow $$\hat \omega_t = \frac{T-t}{T} \omega_0 + \frac{t}{T}\pi^*\omega_Y.$$ In the following for notation simplicity we shall denote $\hat \omega_Y = \pi^*\omega_Y$, which is a nonnegative $(1,1)$-form on $X$.
It is well-known that the flow \eqref{eqn:KRF} is equivalent to the following parabolic complex Monge-Amp\`ere equation
\begin{equation}\label{eqn:MA}
\left\{\begin{aligned}
&\frac{\partial\varphi}{\partial t} = \log \frac{(\hat \omega_t + i\partial\bar\partial \varphi)^n}{\Omega},\\
&\varphi( 0 ) = 0,
\end{aligned}\right.
\end{equation}
where $\omega = \hat \omega_t + i\partial\bar\partial \varphi$ satisfies \eqref{eqn:KRF} and $\Omega$ is a smooth volume form satisfying $i\partial\bar\partial \log \Omega = \frac{1}{T}( \hat\omega_Y - \omega_0 )$.
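For the reader's convenience, we recall the standard computation behind this equivalence. If $\varphi$ solves \eqref{eqn:MA} and $\omega = \hat \omega_t + i\partial\bar\partial \varphi$, then, since $\partial_t \hat\omega_t = \frac{1}{T}(\hat\omega_Y - \omega_0)$,
$$\frac{\partial \omega}{\partial t} = \frac{1}{T}(\hat\omega_Y - \omega_0) + i\partial\bar\partial \log\frac{\omega^n}{\Omega} = i\partial\bar\partial \log \omega^n + \Big( \frac{1}{T}(\hat\omega_Y - \omega_0) - i\partial\bar\partial \log\Omega \Big) = -\ric(\omega),$$
where the last equality uses the defining property of $\Omega$ and the local formula $\ric(\omega) = -i\partial\bar\partial \log\det(g_{k\bar l})$, so that $\omega$ indeed satisfies \eqref{eqn:KRF}.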
Our main theorem is on the behavior of the metrics $\omega(t)$ as $t\to T^-$.
\begin{theorem}\label{thm:main}
Let $\pi: X\to Y$ and $\omega_t = \hat \omega_t + i\partial\bar\partial \varphi_t$ be as above. Then there exists a uniform constant $C=C(n,\omega_0,\omega_Y, \pi)>0$ such that the following hold:
\begin{enumerate}[label=(\arabic*)]
\item $\varphi_t$ is uniformly H\"older continuous in $(X,\omega_0)$, i.e. $|\varphi_t(p) - \varphi_t(q)|\le C d_{\omega_0}(p,q)^\delta$, for any $p,q\in X$ and some $\delta\in (0,1)$, and $\varphi_t\xrightarrow{C^\delta(X,\omega_0)} \varphi_T\in PSH(X,\pi^*\omega_Y)\cap C^\delta(X,\omega_0)$. Moreover, $\varphi_T$ descends to a function $\bar \varphi_T\in PSH(Y,\omega_Y)\cap C^{\delta_0}(Y,\omega_Y)$ for some $\delta_0\in (0,1)$.
\item \label{item 2} $\omega_t$ converge weakly to $\omega_T: = \pi^*\omega_Y + i\partial\bar\partial \varphi_T$ as $(1,1)$-currents on $X$ and the convergence is smooth and uniform on any compact subset $K\Subset X\backslash E$.
\item $\mathrm{diam}(X,\omega_t)\le C$ for any $t\in [0,T)$.
\item \label{item 3}for any sequence $t_i\to T^-$, there exists a subsequence $\{t_{i_j}\}$ such that $(X,\omega_{t_{i_j}})$ (as compact metric spaces) converge in Gromov-Hausdorff topology to a compact metric space $(Z,d_Z)$.
\item the metric completion of $(Y\backslash N, \omega_T)$ is isometric to $(Y,d_T)$, where the distance function $d_T$ is induced from $\omega_T$ and defined in \eqref{eqn:dT}. Moreover, there exists an open dense subset $Z^\circ \subset Z$ such that $(Y\backslash N, d_T)$ and $(Z^\circ, d_Z)$ are homeomorphic and locally isometric. Furthermore, $(Z,d_Z)$ is homeomorphic to $(Y,d_T)$.
\end{enumerate}
\end{theorem}
Item \ref{item 2} is known to hold for the K\"ahler-Ricci flow for more general holomorphic maps $\pi:X\to Y$ with $\dim Y = \dim X$ (see e.g. \cite{PSS, SW1, TZ}). We include it in the theorem just for completeness. We remark that Theorem \ref{thm:main} also holds if the base $Y$ has some mild singularities, for example, if the analytic subvariety $N$ is locally of the form $\mathbb C^k\times ( \mathbb C^{n-k}/\mathbb Z_p )$, where $\mathbb Z_p = \{e^{2l\pi i/ p}\}_{l=1}^p\subset S^1$ acts on $\mathbb C^{n-k}$ by $$e^{2l\pi i/p}\cdot (z_{k+1},\ldots, z_n) \to (e^{2l\pi i/p} z_{k+1},\ldots, e^{2l\pi i/p} z_n).$$ The proof combines the techniques of \cite{SW2} with those of this note, so we omit the details.
Lastly, we mention that under the same set-up as in Theorem \ref{thm:main}, the same and even stronger results hold for K\"ahler metrics along the continuity method. More precisely, let $u_t\in PSH(X, \hat \omega_Y + t \omega_0)$ be the solution to the complex Monge-Amp\`ere equations
\begin{equation}\label{eqn:continuity}\omega_t^n = (\hat \omega_Y + t \omega_0 + i\partial\bar\partial u_t )^n = c_te^{F} \omega_0^n,\, \sup u_t = 0, \, t\in (0,1],\end{equation}
where $F$ is a given smooth function on $X$ and $c_t$ is a normalizing constant chosen so that the integrals of both sides are equal. It has been shown in \cite{FGS} that $\mathrm{diam}(X, \omega_t)$ is bounded by a constant $C=C(n,\omega_0, \hat \omega_Y ,F)>0$ and that the Ricci curvature of $\omega_t$ is uniformly bounded below. We can repeat almost identically the proof of Theorem \ref{thm:main} for the equation \eqref{eqn:continuity} to get the same conclusions for $u_t$ as for $\varphi_t$ in Theorem \ref{thm:main}. Furthermore, along the continuity method \eqref{eqn:continuity}, we can improve the Gromov-Hausdorff convergence in Theorem \ref{thm:main} in the sense that the full family (without the need of passing to a subsequence) $(X,\omega_t)$ converges in the GH topology to a compact metric space $(Z,d_Z)$ which is isometric to the metric completion of $(Y\backslash N, \hat\omega_0)$, where $\hat\omega_0$ is the smooth limit of $\omega_t$ on $X\backslash \pi^{-1}(N) = Y\backslash N$. The main advantage in this case is that the Ricci curvature has a uniform lower bound, so we can apply the argument in \cite{DGSW}, in particular Gromov's lemma, to find an almost geodesic connecting any two points away from the singular set $\pi^{-1}(N)$.
\section{Preliminaries}
The following estimates are well-known (\cite{Y, PSS, TZ, SW1}), so we just state the results and omit the proofs.
\begin{lemma}\label{lemma 1.1}
There exists a constant $C>0$ depending only on $(X,\omega_0)$, $(Y,\omega_Y)$ such that
\begin{enumerate}[label=(\roman*)]
\item $\| \varphi\|_{L^\infty(X)}\le C$ for all $t\in [0,T)$,
\item $\dot\varphi := \frac{\partial \varphi}{\partial t}\le C$, which by \eqref{eqn:MA} is equivalent to $\omega^n \le C \Omega$.
\item as $t\to T^-$, $\varphi$ converges to a bounded $\hat \omega_Y$-PSH function $\varphi_T$ and $\omega$ converges weakly to $\omega_T := \hat\omega_Y + i\partial\bar\partial \varphi_T$ in the sense of $(1,1)$-currents on $X$.
\end{enumerate}
\end{lemma}
\begin{lemma}\label{lemma 1.2}
There exists a uniform constant $C>0$ such that
\begin{enumerate}[label=(\roman*)]
\item \label{item lemma 1.2} $\hat \omega_Y \le C \omega$ for all $t\in [0,T)$,
\item for any compact subset $K \Subset X\backslash E$ and any $j\in\mathbb N$, there exists a constant $C_{j,K}>0$ such that $\| \varphi\|_{C^j(K,\omega_0)}\le C_{j,K}$. Therefore the convergences $\omega_t\to \omega_T$ and $\varphi\to \varphi_T$ are smooth on compact subsets of $X\backslash E$, so $\omega_T$ and $\varphi_T$ are both smooth on $X\backslash E$.
\end{enumerate}
\end{lemma}
In the proof of Lemma \ref{lemma 1.2}, we need the following Chern-Lu inequality, as in the proof of the Schwarz lemma (\cite{ST0}):
\begin{equation*}
(\frac{\partial}{\partial t} - \Delta) \log \tr_\omega \hat \omega_Y \le C \tr_\omega \hat \omega_Y,
\end{equation*}
where $C>0$ depends also on the upper bound of the bisectional curvature of $(Y,\omega_Y)$. In turn this implies the following inequality, which will be used later:
\begin{equation}\label{eqn:my 1}
(\frac{\partial}{\partial t} - \Delta) \tr_\omega \hat \omega_Y \le - \frac{|\nabla \tr_\omega \hat \omega_Y|^2}{\tr_\omega\hat \omega_Y} + C (\tr_\omega \hat \omega_Y)^2 \le - c_0 |\nabla \tr_\omega\hat \omega_Y|^2 + C,
\end{equation}
where $c_0 = C^{-1}>0$ is the reciprocal of the constant $C$ in item \ref{item lemma 1.2} of Lemma \ref{lemma 1.2}.
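For the reader's convenience, here is a sketch of how \eqref{eqn:my 1} follows from the Chern-Lu inequality. Writing $u = \tr_\omega\hat\omega_Y$ and using $\Delta\log u = \frac{\Delta u}{u} - \frac{|\nabla u|^2}{u^2}$, we have
$$\Big(\frac{\partial}{\partial t} - \Delta\Big) u \;=\; u\,\Big(\frac{\partial}{\partial t} - \Delta\Big)\log u \;-\; \frac{|\nabla u|^2}{u} \;\le\; C u^2 - \frac{|\nabla u|^2}{u},$$
and since $u = \tr_\omega\hat\omega_Y \le Cn$ by item \ref{item lemma 1.2}, the right-hand side is bounded above by $C - c_0|\nabla u|^2$.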
\subsection{K\"ahler metrics from the blown up} We will construct a smooth function $\sigma_Y$ on $Y$ such that $\sigma_Y = 0$ precisely on $N$. Choose a finite open cover $\{V_\alpha\}_{\alpha= 1}^J$ of $N$ in $Y$ and complex coordinates $\{w_{\alpha,i}\}_{i=1}^n$ on $V_\alpha$ such that $N\cap V_\alpha = \{w_{\alpha, 1} = \cdots = w_{\alpha, k} = 0\}$. We also denote $V_0 = Y\backslash \cup_\alpha \frac{1}{2}\overline{V_\alpha}$ and we may also assume that $V_0\cap N = \emptyset$. Take a partition of unity $\{\theta_\alpha\}_{\alpha = 0}^J$ subordinate to the open cover $\{V_\alpha\}_{\alpha = 0}^J$, and we define a smooth function
$$\sigma_Y = \theta_0\cdot 1 \;+\; \sum_{\alpha = 1}^J \theta_\alpha \cdot \sum_{j=1}^k |w_{\alpha, j}|^2\in C^\infty (Y),$$
and it is straightforward to see from the construction that $\sigma_Y$ vanishes precisely along $N$. Since $\{w_{\alpha,i}\}_{i=1}^k$ are defining functions of $N$, it follows that if $V_\alpha \cap V_\beta\neq \emptyset$, then the function $$f_{\alpha\beta} := \frac{\sum_{j=1}^k |w_{\alpha, j}|^2}{\sum_{j=1}^k |w_{\beta,j}|^2},\quad \text{on }V_\alpha\cap V_\beta$$
is nowhere-vanishing and bounded from above. Since the cover is finite, we have
\begin{equation}\label{eqn:my 2}
0<c\le \inf_{\alpha,\beta} \inf_{y\in V_\alpha \cap V_\beta\neq \emptyset} f_{\alpha\beta}(y)\le \sup_{\alpha,\beta} \sup_{y\in V_\alpha \cap V_\beta\neq \emptyset} f_{\alpha\beta}(y) \le C<\infty.
\end{equation}
We denote by $\sigma_X = \pi^* \sigma_Y$ the pull-back of $\sigma_Y$ to $X$.
\begin{lemma}[see also \cite{PS}]\label{lemma 1.3}
There exists an $\varepsilon_0>0$ such that for all $\varepsilon\in (0, \varepsilon_0]$ the $(1,1)$-form
$$\omega_\varepsilon: = \pi^*\omega_Y + \varepsilon i\partial\bar\partial \log \sigma_X$$
is positive definite on $X\backslash E$ and extends to a smooth K\"ahler metric on $X$.
\end{lemma}
\begin{proof}
We only need to prove the positivity of $\omega_\varepsilon$ near $E$, which is in fact local. So we may assume the map $\pi$ is defined from an open set $U\subset X$ to $V_\alpha$ given by $$w_{\alpha, 1} = z_1,\, w_{\alpha,2} = z_1 z_2,\cdots, w_{\alpha, k} = z_1 z_k,\, w_{\alpha, k+1} = z_{k+1},\cdots, w_{\alpha, n} = z_n,$$
where $\{z_i\}$ are the complex coordinates on $U$ such that $E\cap U = \{z_1 = 0\}$. We lose no generality by assuming that $\omega_Y$ on $V_\alpha$ is just the Euclidean metric $\omega_{\mathbb C^n} = \sum_j i dw_{\alpha, j}\wedge d\bar w_{\alpha, j}$.
We note that on $V_\alpha$ $$\sigma_Y =\Big( \sum_{\beta = 1, V_\beta\cap V_\alpha \neq \emptyset}^J \theta_\beta f_{\beta \alpha}\Big) \cdot \sum_{j=1}^k |w_{\alpha, j}|^2 = : \phi_\alpha \cdot \sum_{j=1}^k |w_{\alpha,j}|^2.$$
From \eqref{eqn:my 2}, we know that $\phi_\alpha$ is a smooth function with a strictly positive lower bound on $V_\alpha$. In particular, $\omega_Y + \varepsilon i\partial\bar\partial \log \phi_\alpha>0$ on $V_\alpha$ for any $0< \varepsilon\le \varepsilon_0\ll 1$.
We calculate
\begin{equation}\label{eqn:pullback}\begin{split}
\pi^* \omega_Y =& \, i\Big[ (1+\sum_{j=2}^k |z_j|^2)\, dz_1\wedge d\bar z_1 + \sum_{j=2}^k (z_1 \bar z_j\, dz_j \wedge d\bar z_1 + \bar z_1 z_j\, dz_1 \wedge d\bar z_j)\\
& \quad + |z_1|^2 \sum_{j=2}^k dz_j \wedge d\bar z_j +\sum_{j=k+1}^n dz_j\wedge d\bar z_j \Big],
\end{split}\end{equation}
and note that on $U$ $$\log \sigma_X = \log \phi_\alpha + \log |z_1|^2 + \log (1+ \sum_{j=2}^k |z_j|^2 ),$$ so on $U\backslash E$ we have
\begin{equation}\label{eqn:pullback 1}
i\partial\bar\partial \log \sigma_X = i\partial\bar\partial \log \phi_\alpha + \frac{\sum_{i, j=2}^k ( (1+|z'|^2)\delta_{ij} - \bar z_i z_j) \sqrt{-1}dz_i \wedge d\bar z_j}{(1+|z'|^2)^2},
\end{equation}
where $z' = (z_2,\ldots, z_k)$; the second term on the RHS is nonnegative and is just the Fubini-Study metric in the coordinates $z'$. By straightforward calculations, we see that if $\varepsilon$ is small enough, the $(1,1)$-form $\pi^* \omega_Y + \varepsilon i\partial\bar\partial \log \sigma_X$ is positive on $X\backslash E$ and extends to a K\"ahler metric on $X$.
\end{proof}
\begin{remark}
Globally from the above calculations we see that $$\omega_\varepsilon = \pi^*\omega_Y + \varepsilon i\partial\bar\partial \log \sigma_X - \varepsilon [E],$$where $[E]$ denotes the current of integration along $E$.
\end{remark}
We denote by $\omega_X = \pi^* \omega_Y + \varepsilon_0 i\partial\bar\partial \log \sigma_X - \varepsilon_0[E]$ the fixed K\"ahler metric obtained from Lemma \ref{lemma 1.3}. The following inequality follows from the local expression of $\pi^* \omega_Y$ in the proof of Lemma \ref{lemma 1.3}.
\begin{lemma}
There exists a uniform constant $C>1$ such that
\begin{equation}\label{eqn:my 3}
C^{-1} \hat \omega_Y \le \omega_X \le \frac{C}{\sigma_X}\hat \omega_Y,
\end{equation}
where the second inequality is understood on $X\backslash E$.
\end{lemma}
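A sketch of \eqref{eqn:my 3}: the first inequality holds simply because $\hat\omega_Y$ is a smooth nonnegative $(1,1)$-form on the compact manifold $X$ while $\omega_X$ is K\"ahler, and away from a neighborhood of $E$ the two sides are uniformly equivalent metrics and $\sigma_X$ is bounded below, so the content of the second inequality is near $E$. There, on a chart $(U,z_i)$ as in the proof of Lemma \ref{lemma 1.3} on which $|z_1|$ and $|z'|$ are bounded, the expression \eqref{eqn:pullback} gives, for a $(1,0)$-tangent vector $v = (v_1, v', v'')$,
$$|v|^2_{\hat\omega_Y} \;=\; |v_1|^2 + \sum_{j=2}^{k} |z_j v_1 + z_1 v_j|^2 + |v''|^2 \;\ge\; c\,|z_1|^2\,|v|^2_{\mathbb C^n},$$
with $c>0$ depending only on these bounds, while $\omega_X$ is uniformly equivalent to $\omega_{\mathbb C^n}$ on $U$ and $\sigma_X\sim|z_1|^2$ there; the second inequality in \eqref{eqn:my 3} follows.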
\section{The proof of the main theorem}
Now we are ready to derive the crucial estimates on $\omega$ along the K\"ahler-Ricci flow \eqref{eqn:KRF}.
\begin{lemma}\label{lemma 1.5}
There exists uniform constants $C>0$ and $\delta\in (0,1)$ such that along the flow \eqref{eqn:KRF} we have
\begin{equation}\label{eqn:main}
\omega\le C \frac{\omega_0}{\sigma_X^{1-\delta}},\quad \text{on }(X\backslash E) \times [0, T).
\end{equation}
\end{lemma}
The proof is almost the same as that of Lemma 2.5 in \cite{SW1}, with minor modifications using Lemma \ref{lemma 1.3}. For completeness, we provide a sketch of the proof.
\begin{proof}
Fix an $\epsilon\in (0,1)$ and define $$Q_\epsilon = \log \tr_{\omega_0} \omega + A \log \big( \sigma_X^{1+\epsilon} \tr_{\hat\omega_Y }\omega \big) - A^2 \varphi,$$
where $A>0$ is a constant to be determined later. First of all, $Q_\epsilon|_{t= 0}\le C$ for a constant $C$ independent of $\epsilon\in (0,1)$, which can be seen from \eqref{eqn:my 3}. Observe that for each time $t_0\in (0,T)$, $\max_X Q_\epsilon$ can only be achieved on $X\backslash E$, since $Q_\epsilon(x)\to -\infty $ as $x\to E$. Thus we assume the maximum of $Q_\epsilon$ is obtained at $(x_0,t_0)$ for some $x_0\in X\backslash E$. From the Chern-Lu inequality the following holds on $X\backslash E$
\begin{align*}
(\frac{\partial}{\partial t} - \Delta) Q_\epsilon \le C \tr_\omega \omega_0 - A \tr_\omega ( A\hat \omega_t + (1+\epsilon)i\partial\bar\partial \log \sigma_X)+ A^2 \log\frac{\Omega}{\omega^n} + C,
\end{align*}
where the constant $C$ depends on the lower bound of the bisectional curvature of $(X,\omega_0)$ and the upper bound of bisectional curvature of $(Y,\omega_Y)$. Since $\hat \omega_t\ge c_1 \hat \omega_Y$ for a uniform $c_1>0$ and any $t\in [0,T)$, by Lemma \ref{lemma 1.3} for $A>0$ large enough $A \hat \omega_t + (1+ \epsilon )i\partial\bar\partial \log \sigma_X \ge c_2 \omega_0$ on $X\backslash E$ for some $c_2>0$. If $A>0$ is taken even larger then at $(x_0,t_0)$, we have
\begin{align*}
0\le (\frac{\partial}{\partial t} - \Delta) Q_\epsilon \le - 2 \tr_\omega \omega_0 + A^2 \log\frac{\Omega}{\omega^n} + C\le - \tr_\omega\omega_0 + C,
\end{align*}
where in the last inequality we use
$$-\tr_\omega \omega_0 + A^2 \log \frac{\Omega}{\omega^n}\le -\tr_\omega \omega_0 + n A^2 \log \tr_\omega \omega_0 + C \le C,$$
as $\log x \le \varepsilon x + C(\varepsilon)$ for any $x\in (0,\infty)$. So we have $\tr_\omega \omega_0(x_0,t_0)\le C$. Then
\begin{equation*}
\tr_{\omega_0}\omega|_{(x_0,t_0)}\le \frac{\omega^n}{\omega_0^n } (\tr_\omega \omega_0)^{n-1}|_{(x_0,t_0)}\le C.
\end{equation*}
Observing from \eqref{eqn:my 3} that $\sigma_X \tr_{\hat \omega_Y} \omega \le C \tr_{\omega_0} \omega$ on $X\backslash E$, we conclude that $\sup_X Q_\epsilon\le C$ for some uniform constant $C>0$. Letting $\epsilon\to 0$, we get
\begin{equation*}
\log \tr_{\omega_0} \omega + A \log \sigma_X \tr_{\hat \omega_Y } \omega \le C,\quad \text{on }X\backslash E \times [0,T).
\end{equation*}
Finally, since $ C \tr_{\hat \omega_Y}\omega \ge \tr_{\omega_0}\omega$, we see from the above that
$$\log \tr_{\omega_0} \omega + \log \sigma_X^A (\tr_{\omega_0} \omega)^A \le C,$$
that is, $(1+A)\log \tr_{\omega_0}\omega \le C - A\log\sigma_X$, so $\tr_{\omega_0}\omega \le C {\sigma_X^{- A/(1+A)}}$ on $X\backslash E$, and we can then take $\delta = \frac{1}{1+A}\in (0,1)$.
\end{proof}
Next we will show the distance function defined by $ \omega_t$ is H\"older-continuous with respect to the fixed metric $(X,\omega_0)$.
\begin{lemma}\label{lemma 1.6}
There exists a uniform constant $C>0$ such that for any $p,q\in X$, it holds that
\begin{equation*}
d_{\omega_t}(p,q)\le C d_{\omega_0}(p,q)^\delta,\quad \forall ~ t\in [0,T),
\end{equation*}
where $\delta \in (0,1)$ is the constant determined in Lemma \ref{lemma 1.5}.
\end{lemma}
\begin{proof}
It suffices to prove the estimate near $E$, say on $T(E)$, a tubular neighborhood of $E$, since $\omega_t$ is uniformly equivalent to $\omega_0$ outside $T(E)$. Choose coordinates charts $\{U_\alpha\}$ covering $T(E)$ and local coordinates $\{z_{\alpha, i}\}_{i=1}^n$ such that $U_\alpha\cap E = \{z_{\alpha,1} = 0\}$. We may assume that the cover is fine enough such that any $p,q\in T(E)$ with $d_{\omega_0}(p,q)\le \frac 1 2 $ must lie in the same $U_\alpha$. Since we have only finitely many such $U_\alpha$, we will work on one of them only and omit the subscript $\alpha$ for simplicity. Furthermore the fixed K\"ahler metric $\omega_0$ is uniformly equivalent to the Euclidean metric $\omega_{\mathbb C^n}$ on $U$, so without loss of generality we assume $\omega_0 = \omega_{\mathbb C^n}$ on $U$. Recall that Lemma \ref{lemma 1.5} implies that on $U\backslash E$ it holds that
\begin{equation}\label{eqn:my 4}
\omega_t \le C \frac{\omega_{\mathbb C^n}}{|z_1|^{2(1-\delta)}},\quad \forall t\in [0,T),
\end{equation} since $\sigma_X \sim |z_1|^2$ on $U$.
Take any two points $p,q\in U$ with $d_{\omega_0}(p,q) = d< \frac 1 4$. We will consider different cases depending on the positions of $p,q$.
\noindent $\bullet$ {\bf Case 1:} $p,q\in E$. Rotating the coordinates if necessary, we may assume $p = 0$ and $q = (0,d,0,\ldots,0)$. We pick two auxiliary points $\tilde p = (d, 0 ,\ldots, 0)$ and $\tilde q = (d,d,0,\ldots, 0)$. From \eqref{eqn:my 4}, we have
\begin{equation*}
d_{\omega_t} (p,\tilde p) \le L_{\omega_t}(\overline{p \tilde p})\le C \int_0^d \frac{1}{r^{1-\delta}}dr \le C d^\delta,
\end{equation*}where $\overline{p\tilde p}$ denotes the (Euclidean) line segment connecting $p$ and $\tilde p$.
Similarly $d_{\omega_t}(q,\tilde q)\le C d^\delta$. On the other hand,
$$d_{\omega_t}(\tilde p,\tilde q) \le L_{\omega_t}(\overline{\tilde p\tilde q}) \le \frac{C}{d^{1-\delta}} L_{\omega_{\mathbb C^n}}(\overline{\tilde p\tilde q}) = C d^\delta.$$ If we denote $\gamma = \overline{p\tilde p} + \overline{\tilde p\tilde q} + \overline{q \tilde q}$ to be the piecewise line segment connecting $p$ and $q$, then we have
\begin{equation*}
d_{\omega_t}(p,q)\le L_{\omega_t}(\gamma)\le C d^\delta = C d_{\omega_0}(p,q)^{\delta}.
\end{equation*}
We remark that $\gamma\subset X\backslash E$, except the two end points $p,q$.
\noindent $\bullet$ {\bf Case 2.} $\min(d_{\omega_0}(p,E), d_{\omega_0}(q,E) )\le d$. The (Euclidean) projections of $p,q$ to $E$, denoted by $p', q'$, respectively, must satisfy $d_{\omega_0}(p',q')\le d$. From the assumption it follows that $d_{\omega_0}(p,p')\le 2d$ and $d_{\omega_0}(q,q')\le 2d$. By similar arguments as above using \eqref{eqn:my 4} we have
$$d_{\omega_t}(p,p')\le C d^\delta,\quad d_{\omega_t}(q,q')\le C d^\delta,$$
and by {\bf Case 1} $d_{\omega_t}(p',q')\le C d_{\omega_0}(p',q')^\delta \le C d^\delta$. By triangle inequality we get the desired estimate $d_{\omega_t}(p,q)\le C d^\delta$.
\noindent $\bullet$ {\bf Case 3.} $\min(d_{\omega_0}(p,E), d_{\omega_0}(q,E) )\ge d$. Every point in the (Euclidean) line segment $\overline{pq}$ has norm of $z_1$-coordinates no less than $d$, therefore
$$d_{\omega_t} (p,q)\le L_{\omega_t} (\overline{pq})\le C d^{-(1-\delta)} L_{\omega_{\mathbb C^n}} (\overline{pq}) = C d^\delta.$$
Combining all the cases above, we finish the proof of the lemma.
\end{proof}
Next we will prove the H\"older continuity of $\varphi_t$ with respect to $(X,\omega_0)$. To begin with, we first prove the gradient estimate of $\Phi:= (T-t)\dot \varphi + \varphi$ with respect to the evolving metrics $(X,\omega_t)$ (c.f. \cite{FGS}).
\begin{lemma}\label{lemma 1.7}
There exists a uniform constant $C>0$ such that
\begin{equation*}
\sup_X |\nabla_{\omega_t} \Phi|_{\omega_t}\le C,\quad \forall ~ t\in [0,T).
\end{equation*}
\end{lemma}
\begin{proof}
Taking $\frac{\partial }{\partial t}$ on both sides of \eqref{eqn:MA}, we get
$$\frac{\partial}{\partial t} \dot\varphi = \Delta \dot\varphi + \frac 1 T \tr_\omega (\hat \omega_Y - \omega_0 ) = \Delta \dot \varphi + \frac{1}{T-t}\tr_\omega \hat\omega_Y + \frac{1}{T-t}\Delta \varphi - \frac{n}{T-t}, $$
where we used the equation $ - \frac 1 T \tr_\omega \omega_0 = - \frac{n}{T-t} + \frac{t}{T(T-t)} \tr_\omega \hat \omega_Y + \frac{1}{T-t} \Delta \varphi $. Then we have the equation
\begin{equation}\label{eqn:my 5}
(\frac{\partial}{\partial t} - \Delta )\Phi = \tr_\omega \hat \omega_Y - n\ge -n.
\end{equation}
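For the reader's convenience, here is the short computation behind \eqref{eqn:my 5}: since $\Phi = (T-t)\dot\varphi + \varphi$,
$$\Big(\frac{\partial}{\partial t} - \Delta\Big)\Phi \;=\; (T-t)\Big(\frac{\partial \dot\varphi}{\partial t} - \Delta\dot\varphi\Big) - \Delta\varphi \;=\; \frac{T-t}{T}\tr_\omega(\hat\omega_Y - \omega_0) - \Delta\varphi,$$
and substituting $\Delta\varphi = \tr_\omega(\omega - \hat\omega_t) = n - \frac{T-t}{T}\tr_\omega\omega_0 - \frac{t}{T}\tr_\omega\hat\omega_Y$ yields $(\frac{\partial}{\partial t}-\Delta)\Phi = \tr_\omega\hat\omega_Y - n\ge -n$, since $\tr_\omega\hat\omega_Y\ge0$.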
By the maximum principle, it follows that $\inf_X \Phi\ge - C$ for some constant depending also on $T$. Recall that $\Phi$ is also bounded above, by Lemma \ref{lemma 1.1}.
Combining \eqref{eqn:my 5} with the Bochner formula, we obtain
\begin{equation*}
(\frac{\partial}{\partial t} - \Delta )|\nabla \Phi|^2_{\omega} = - |\nabla \nabla \Phi|^2 - |\nabla\bar \nabla \Phi| ^2 + 2 Re \innpro{ \nabla \Phi, \bar\nabla \tr_\omega \hat \omega_Y}.
\end{equation*}
Fix a constant $B: = \sup_{X\times [0,T)}|\Phi| + 2$. By direct calculations the following equation holds
{\small
\begin{equation}\label{eqn:my 6}\begin{split}
(\frac{\partial}{\partial t} - \Delta ) \frac{|\nabla \Phi|^2}{ B - \Phi} & = \frac{\ho |\nabla \Phi|^2}{B - \Phi} + \frac{\abs{\nabla \Phi} \ho \Phi }{(B-\Phi)^2} + 2 Re \innpro{ \nabla \log(B-\Phi), \bar \nabla \frac{\abs{\nabla \Phi}}{B-\Phi} }\\
& = \frac{ - \abs{\nabla\nabla \Phi} - \abs{\nabla \bar \nabla \Phi} + 2 Re \innpro{ \nabla \Phi, \bar \nabla \tr_\omega \hat \omega_Y }}{B-\Phi} + \frac{\abs{\nabla \Phi} ( \tr_\omega \hat \omega_Y - n )}{(B-\Phi)^2}\\
& \quad + 2 Re \innpro{ \nabla \log (B-\Phi), \bar \nabla \frac{\abs{\nabla \Phi}}{B- \Phi} }.
\end{split}
\end{equation}
}
From the equation \eqref{eqn:my 1}, we have
\begin{equation}\label{eqn:my 7} \begin{split}
& \ho \frac{\tr_\omega \hat \omega_Y}{ B - \Phi}\\ \le~ & \frac{-c_0 |\nabla \tr_\omega \hat \omega_Y|^2 + C}{ B-\Phi} + \frac{\tr_\omega \hat \omega_Y (\tr_\omega \hat\omega_Y - n)}{(B-\Phi)^2} + 2 Re \innpro{ \nabla \log (B-\Phi), \bar \nabla \frac{\tr_\omega \hat\omega_Y}{ B - \Phi} }.
\end{split}\end{equation}
Denote $$G = \frac{\abs{\nabla \Phi}}{ B - \Phi} + A \frac{\tr_\omega \hat \omega_Y}{ B - \Phi},\quad \text{where } A = 10 c_0^{-1}.$$
By \eqref{eqn:my 6}, \eqref{eqn:my 7} and Cauchy-Schwarz inequality we have
\begin{align*}
& \ho G\\
\le ~ & \frac{ - \abs{\nabla \nabla \Phi} - \abs{\nabla \bar \nabla \Phi} - 9 \abs{\nabla \tr_\omega \hat \omega_Y } }{B-\Phi} + C G + C + 2 Re\innpro{\nabla \log(B-\Phi),\bar\nabla G}.
\end{align*}
Assuming the maximum of $G$ is attained at $(x_0,t_0)$, we may assume that at this point $|\nabla \Phi|\ge A$, otherwise we are done. Then at this point $\ho G \ge 0$ and $\nabla G = 0$, hence we have $2 |\nabla \Phi|\cdot \nabla |\nabla \Phi| = - G \nabla \Phi - A \nabla \tr_\omega \hat \omega_Y$. Taking norms on both sides, we get at $(x_0,t_0)$
\begin{equation}\label{eqn:my 8}
\frac{\abs{G \nabla \Phi + A \nabla \tr_\omega \hat\omega_Y }}{2 \abs{\nabla \Phi}} = 2 \abs{\nabla |\nabla \Phi|}\le \abs{\nabla \nabla \Phi} + \abs{\nabla \bar \nabla \Phi},
\end{equation}
where we used Kato's inequality in the last step. Therefore at $(x_0,t_0)$, we have
{\small
\begin{align*}
0\le& (B-\Phi)^{-1} \bk{ - \frac{1}{2} G^2 + A G \frac{|\nabla \tr_\omega \hat \omega_Y|}{|\nabla \Phi|} + \frac{A^2}{2\abs{\nabla \Phi}} \abs{\nabla \tr_\omega\hat \omega_Y} - 9 \abs{\nabla \tr_\omega \hat \omega_Y} } + C G + C\\
\le & -\frac{G^2}{4 (B-\Phi)} + C G + C,
\end{align*}
}
so at $(x_0,t_0)$, $G \le C$. From this we get the desired bound on $|\nabla \Phi|$.
\end{proof}
An immediate consequence of the gradient bound is the uniform H\"older continuity of $\varphi_t$ on $(X,\omega_0)$.
\begin{corr}\label{cor 1.1}
There exists a uniform constant $C>0$ such that
\begin{equation*}
| \varphi_t(p) - \varphi_t(q) |\le C d_{\omega_0}(p,q)^\delta,\quad \forall p,q\in X, \text{ and }\forall t\in [0,T).
\end{equation*}
\end{corr}
\begin{proof}
Recall, from the definition of $\Phi$, that
$$\Phi_t = (T-t) \dot\varphi + \varphi = (T- t)^2 \frac{\partial}{\partial t} \bk{ \frac{\varphi_t}{T-t} }.$$ By the gradient bound in Lemma \ref{lemma 1.7} and distance estimate in Lemma \ref{lemma 1.6}, for any fixed points $p, q\in X$, we have
\begin{equation*}
| \Phi_t(p) - \Phi_t (q) |\le C d_{\omega_t}(p,q)\le C d_{\omega_0}(p,q)^\delta.
\end{equation*}
So
\begin{equation}\label{eqn:my 10}
\Big| \frac{\partial}{\partial t}\bk{ \frac{\varphi_t}{T-t} }(p) - \frac{\partial}{\partial t}\bk{ \frac{\varphi_t}{T-t} }(q) \Big| \le \frac{C d_{\omega_0}(p,q)^\delta}{(T-t)^2},
\end{equation}
integrating \eqref{eqn:my 10} over $t\in [0,t_1]$ and noting that $\varphi_0 \equiv 0$, we get
\begin{equation*}
\big|\frac{\varphi_{t_1}(p)}{T - t_1} - \frac{\varphi_{t_1}(q)}{T- t_1} \big|\le C d_{\omega_0}(p,q)^\delta \int_0^{t_1} \frac{1}{(T-t)^2}dt= C d_{\omega_0}(p,q)^\delta \frac{t_1}{T(T-t_1)},
\end{equation*}
cancelling the common factor $\frac{1}{T-t_1}$ on both sides and using $\frac{t_1}{T}\le 1$, we get the desired estimate, since $t_1\in (0,T)$ is arbitrary.
\end{proof}
\begin{remark}
By an argument in \cite{Li}, the H\"older continuity of $\varphi_t$ implies that the distance functions satisfy the estimate in Lemma \ref{lemma 1.6}.
\end{remark}
Recall that the exceptional divisor $E$ is a $\mathbb{CP}^{k-1}$-bundle over $N$, and we identify $N$ with the zero section of this bundle. Denote the bundle map by $\hat \pi: E \to N$, which is the restriction of $\pi: X\to Y$ to $E$. From Corollary \ref{cor 1.1}, we see that the limit $\varphi_T\in PSH(X,\hat\omega_Y)$ is also H\"older continuous in $(X,\omega_0)$. Since $\hat\omega_Y |_{\hat\pi^{-1}(y)} = 0$ for any $y\in N$, the restriction of $\varphi_T$ to each compact fiber $\hat\pi^{-1}(y)\cong\mathbb{CP}^{k-1}$ is plurisubharmonic, hence $\varphi_T|_{\hat\pi^{-1}(y)} = \text{const}$ for each $y\in N$. Thus $\varphi_T$ descends to a bounded function in $PSH(Y,\omega_Y)$, which we will still denote by $\varphi_T$. We shall show that $\varphi_T$ is also H\"older continuous in $(Y,\omega_Y)$, with a possibly different H\"older exponent.
\begin{lemma}\label{lemma 1.8}
There exists a uniform constant $C>0$ such that
\begin{equation}\label{eqn:my 11}|\varphi_T(p) - \varphi_T(q) |\le C d_{\omega_Y}(p,q)^{\delta_Y},\quad \forall ~ p,q\in Y,\end{equation}
where $\delta_Y = \min\{\delta (1-\delta),\delta^2\}\in (0,1)$.
\end{lemma}
\begin{proof}
We denote the zero section of the $\mathbb{CP}^{k-1}$-bundle $\hat \pi: E\to N$ by $\hat N$, and it is well-known that $\hat N \cong N$. It suffices to show \eqref{eqn:my 11} for $p,q$ in a fixed tubular neighborhood $T(N)$ of $N$, since on $Y\backslash T(N)$ the metric $\pi^*\omega_Y = \hat \omega_Y$ is equivalent to $\omega_0$, and the estimate follows from Corollary \ref{cor 1.1}.
Choose coordinates charts $(V_\alpha, w_{\alpha, j})$ covering $T(N)$ such that $V_\alpha \cap N = \{ w_{\alpha,1} = \cdots = w_{\alpha, k} = 0 \}$. We also assume that any $p,q\in T(N)$ with $d_{\omega_Y}(p,q)\le 1$ lie in the same $V_\alpha$, if the charts are chosen sufficiently fine. We will work in a fixed chart $(V, w_i)$, and omit the subscript $\alpha$. On this open set $\omega_Y$ is equivalent to the Euclidean metric $\omega_{\mathbb C^n}$ in $(\mathbb C^n, w_i)$, so without loss of generality, we may assume $\omega_Y = \omega_{\mathbb C^n}$ on $V$. The map $\pi: U\to V$ can be locally expressed as \begin{equation}\label{eqn:map pi}w_1 = z_1,\, w_2 = z_1 z_2,\cdots, w_k = z_1 z_k,\, w_{k+1} = z_{k+1},\cdots, w_n = z_n,\end{equation}
where $(U,z_i)$ is an open chart on $X$. The zero section $\hat N$ can be locally expressed as $\hat N\cap U = \{z_1=\cdots = z_k = 0\}$.
We consider different cases depending on the positions of $p,q$ in $V$. Denote $0<d= d_{\omega_Y}(p,q) \le 1/4$.
\noindent $\bullet$ {\bf Case 1:} we assume $p,q\in N$. Take the unique pre-images $\hat p, \hat q\in\hat N$ of $p,q$ under $\hat\pi$, respectively. We know that $\varphi_T(p) = \varphi_T(\hat p)$ and $\varphi_T(q) = \varphi_T(\hat q)$. The line segment $\overline{pq}$ is contained in $N$ and similarly $\overline{\hat p \hat q}$ is contained in $\hat N$ as well. From the local expressions \eqref{eqn:pullback} and \eqref{eqn:pullback 1} of $\omega_X :=\pi^*\omega_Y + \varepsilon_0 i\partial\bar\partial \log \sigma_X$, we conclude that $d_{\omega_Y}(p,q) = L_{\omega_Y}(\overline{pq})$ is comparable to $L_{\omega_X} (\overline{\hat p\hat q})$, which is no less than $c_1 d_{\omega_0}(\hat p,\hat q)$, for some uniform $c_1>0$. Therefore
\begin{align*}
| \varphi_T(p) - \varphi_T(q) | = & |\varphi_T(\hat p) - \varphi_T(\hat q)| \le C d_{\omega_0}(\hat p,\hat q)^\delta\le C d_{\omega_Y}(p,q)^\delta,
\end{align*}
as desired.
\noindent $\bullet$ {\bf Case 2:} if $0<\min\{d_{\omega_Y}(p,N), d_{\omega_Y} (q,N) \}\le 2 d^{1-\delta}$. Take the orthogonal projections of $p$ and $q$ to $N$, $p', q'$ respectively. In other words, $p'$ ($q'$ resp.) has the same $(w_{k+1},\ldots, w_n)$-coordinates as $p$ ($q$ resp.) but the first $k$-coordinates are zero. From the assumption we know that $d_{\omega_Y}(p, p') = L_{\omega_Y}(\overline{pp'})\le 3d^{1-\delta}$ and $d_{\omega_Y}(q, q') = L_{\omega_Y}(\overline{q q'})\le 3d^{1-\delta}$. The pulled-back of the line segment $\overline{pp'}$ under $\pi$ is also a line segment $\overline{\pi^{-1}({p}) \hat p'}$ in $(U,z_i)$ connecting $\pi^{-1}(p)$ and a unique point $\hat p'\in \hat \pi ^{-1}(p')\subset E$, and $\hat p' = (0,\frac{w_2}{w_1},\ldots, \frac{w_k}{w_1},w_{k+1},\ldots, w_n)$, where $w_j$ denotes the $w_j$-coordinate at $p$. It holds that $\varphi_T(p') = \varphi_T(\hat p')$ since $\hat p'$ lies at the fiber over $p'$. Again from the local expressions \eqref{eqn:pullback} and \eqref{eqn:pullback 1} of $\omega_X$, it follows that $L_{\omega_X} (\overline{\pi^{-1}(p) \hat p'})$ is comparable to the length of $\overline{p p'}$ under $\omega_Y$, therefore
$$d_{\omega_0}(\pi^{-1}(p), \hat p')\le C L_{\omega_X} ( \overline{ \pi^{-1}(p) \hat p' } )\le C L_{\omega_Y}(\overline{p p'})\le C d^{1-\delta},$$
from which we derive that
\begin{equation*}
|\varphi_T(p) - \varphi_T(p') | = |\varphi_T(\pi^{-1}(p)) - \varphi_T(\hat p ') |\le C d_{\omega_0}(\pi^{-1}(p), \hat p ' )^\delta \le C d^{\delta_Y}.
\end{equation*}
A similar estimate also holds for $|\varphi_T(\pi^{-1}(q)) - \varphi_T(q') |$. Since $p',q'\in N$ and $d_{\omega_Y}(p',q')\le d$, by {\bf Case 1} we also have $|\varphi_T(p') - \varphi_T(q')|\le C d^\delta$. The desired estimate \eqref{eqn:my 11} in this case then follows from the triangle inequality.
\noindent$\bullet$ {\bf Case 3:} $\min\{d_{\omega_Y} (p,N), d_{\omega_Y}(q, N)\}> 2d^{1-\delta}$. The line segment $\gamma(s) = \overline{pq}$ stays strictly away from $N$; in fact, $\sigma_Y (\gamma(s) ) \ge c\, d^{2(1-\delta)}$. Therefore the pull-back $\hat \gamma(s) = \pi^{-1}(\gamma(s))$ joins $\pi^{-1}(p)$ to $\pi^{-1}(q)$ and $\sigma_X ( \hat\gamma(s) ) \ge c\, d^{2(1-\delta)}$. Since, by \eqref{eqn:my 3}, $\omega_X \le C \frac{\pi^*\omega_Y}{ \sigma_X}$ on $X\backslash E$, we have
$$d_{\omega_0}(\pi^{-1}(p), \pi^{-1}(q)) \le C L_{\omega_X} (\hat\gamma) \le \frac{C}{d^{1-\delta}} L_{\omega_Y}(\gamma) \le C d ^{\delta}.$$
Therefore
{\small
\begin{equation*}
| \varphi_T(p) - \varphi_T(q) | = | \varphi_T(\pi^{-1}(p)) - \varphi_T(\pi^{-1}(q)) |\le C d_{\omega_0}\big( \pi^{-1}(p), \pi^{-1}(q) \big)^\delta\le C d^{\delta^2}\le C d^{\delta_Y},
\end{equation*}
}as desired.
Combining the cases discussed above, we finish the proof of the lemma.
\end{proof}
The positive $(1,1)$-form $\omega_T = \omega_Y + i\partial\bar\partial \varphi_T$ defines a K\"ahler metric $g_T$ on $Y\backslash N$, with the associated function $\tilde d_T: Y\backslash N \times Y\backslash N \to [0,\infty)$ defined by
\begin{equation*}
\tilde d_T(p,q): = \inf \Big\{ \int_{\gamma\backslash N} \sqrt{g_T( \gamma',\gamma' )} |~ \gamma\subset Y \text{ and $\gamma$ joins $p$ to $q$} \Big\}
\end{equation*}
for any $p,q\in Y\backslash N$, where the infimum is taken over all piecewise smooth curves $\gamma$ in $Y$ joining $p$ to $q$ and having only finitely many intersections with $N$. With this distance function, $(Y\backslash N, \tilde d_T)$ becomes a metric space, which may not be complete. We want to extend the distance function to the whole of $Y$. To begin with, we need a trick from \cite{Li}.
\begin{lemma}\label{lemma 1.9}
There exists a uniform constant $C>0$ such that for any $p\in Y\backslash N$ and $r_p = d_{\omega_Y}(p, N)>0$
\begin{equation*}
\tilde d_T(p,q)\le C d_{\omega_Y}(p,q)^{\delta_Y/2},\quad \forall q\in B_{\omega_Y}(p,r_p/2).
\end{equation*}
\end{lemma}
\begin{proof}
The ball $B:=B_{\omega_Y}(p, r_p/2)$ stays strictly away from $N$, so $\omega_T$ is smooth on $B$. The function $d_p(x) = \tilde d_T(p, x)$ is Lipschitz continuous and satisfies $|\nabla d_p|_{\omega_T}\le 1$ a.e. For any $r\le \frac{r_p}{2}$, we have
\begin{align*}
\int_{B_{\omega_Y}(p,r)} |\nabla d_p|_{\omega_Y}^2 \omega_Y^n \le &\int_{B_{\omega_Y}(p,r)} |\nabla d_p|_{\omega_T}^2 (\tr_{\omega_Y} \omega_T )\omega_Y^n \\
\le & \int_{B_{\omega_Y}(p,r)} ( n + \Delta_{\omega_Y} \varphi_T ) \omega_Y^n\\
\le & C r^{2n} + \int_{B_{\omega_Y}(p, 1.5 r)} |\varphi_T(x) - \varphi_T(p)| |\Delta_{\omega_Y} \eta| \omega_Y^n\\
\le & C r^{2n} + C r^{\delta_Y + 2n - 2} \le C r^{2n - 2 + \delta_Y},
\end{align*}
where $\eta$ is a standard cut-off function supported in $B_{\omega_Y}(p, 1.5 r)$, identically equal to $1$ on $B_{\omega_Y}(p,r)$, and satisfying $|\Delta_{\omega_Y} \eta|\le C r^{-2}$. Then by the Poincar\'e inequality and Campanato's lemma (see Theorem 3.1 in \cite{HL}) we get
$$\tilde d_T(p,q) = d_p(q) = | d_p(q) - d_p(p) |\le C d_{\omega_Y}(p,q)^{\delta_Y/2},$$
for any $q\in B_{\omega_Y}(p, r_p/2)$.
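In the chain of inequalities above, the passage from the second to the third line is a sketch of the following integration by parts (with $\Delta_{\omega_Y}\varphi_T$ understood in the sense of distributions), using $n+\Delta_{\omega_Y}\varphi_T = \tr_{\omega_Y}\omega_T\ge 0$ and $\int_Y\Delta_{\omega_Y}\eta\,\omega_Y^n = 0$ to subtract the constant $\varphi_T(p)$:
$$\int_{B_{\omega_Y}(p,r)}(n+\Delta_{\omega_Y}\varphi_T)\,\omega_Y^n \;\le\; \int_{Y}\eta\,(n+\Delta_{\omega_Y}\varphi_T)\,\omega_Y^n \;\le\; Cr^{2n} + \int_{Y}\big(\varphi_T - \varphi_T(p)\big)\,\Delta_{\omega_Y}\eta\;\omega_Y^n;$$
the last line then follows from $|\varphi_T(x)-\varphi_T(p)|\le C r^{\delta_Y}$ on $B_{\omega_Y}(p,1.5r)$, by Lemma \ref{lemma 1.8}, together with $|\Delta_{\omega_Y}\eta|\le Cr^{-2}$.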
\end{proof}
\begin{lemma}\label{lemma 1.10}
There exist constants $C>0$ and $\delta_0\in (0,1)$ such that
\begin{equation}\label{eqn:my 12}
\tilde d_{T}(p,q)\le C d_{\omega_Y}(p,q)^{\delta_0},\quad \forall p, q\in Y\backslash N.
\end{equation}
\end{lemma}
\begin{proof}
We use the same notation as in the proof of Lemma \ref{lemma 1.8}. It suffices to show \eqref{eqn:my 12} for $p,q\in V$ where $V$ is a fixed coordinate chart in $Y$ and recall locally the map $\pi: U\to V$ is given by \eqref{eqn:map pi}. Let $d = d_{\omega_Y}(p,q)<1/4$.
If $\min\{d_{\omega_Y}(p,N), d_{\omega_Y}(q,N)\}> 2d$, then $q\in B_{\omega_Y}(p, \frac 12 d_{\omega_Y}(p,N) )$, and by Lemma \ref{lemma 1.9} it follows that $\tilde d_T(p,q)\le C d^{\delta_Y/2}$. So it only remains to consider the case when the minimum above is $\le 2d$. Let $p',q'\in N\cap V$ be the orthogonal projections (assuming $\omega_Y = \omega_{\mathbb C^n}$) of $p,q$ to $N$, respectively. Then $\max\{d_{\omega_Y}(p, p '), d_{\omega_Y}(q,q')\} \le 3d$ and $d_{\omega_Y}(p',q')\le d$. Choose the unique pre-images $\hat p,\hat q$ of $p', q'$ in the zero section $\hat N\subset E$ of the bundle $\hat \pi: E\to N$, i.e. $\hat \pi(\hat p) = p'$ and $\hat \pi(\hat q) = q'$.
From the local expressions \eqref{eqn:pullback} and \eqref{eqn:pullback 1} of $\omega_X = \pi^*\omega_Y + \varepsilon_0i\partial\bar\partial \log \sigma_X$, it can be shown that $d_{\omega_X}(\hat p,\hat q) \le C d_{\omega_Y} (p',q')\le C d$. We now argue as in the proof of {\bf Case 1} of Lemma \ref{lemma 1.6}, with $p,q$ in that lemma replaced by $\hat p, \hat q$ here. Recall that the piecewise linear path $\gamma$ which connects $\hat p$ and $\hat q$ lies outside $E$, except for the two end points. Furthermore, $\gamma$ is chosen independently of $t\in [0,T)$, and we have
\begin{equation}\label{eqn:my 13}
\int_\gamma \sqrt{ g_t( \gamma',\gamma' ) } = L_{\omega_t}(\gamma) \le C d_{\omega_X}(\hat p,\hat q)^\delta\le C d^\delta.
\end{equation}
Since $g_t \to g_T$ locally smoothly on $\gamma\backslash \{\hat p, \hat q\}$, letting $t\to T^-$ and applying Fatou's lemma to \eqref{eqn:my 13}, we get
\begin{equation*}
\int_\gamma\sqrt{ g_T( \gamma',\gamma' ) } \le C d^{\delta}.
\end{equation*}
Denote the image curve $\gamma_0 = \pi(\gamma)\subset Y$ which joins $p'$ to $q'$ and is contained in $Y\backslash N$ except the end points. It follows then that $L_{\omega_T}(\gamma_0)\le C d^\delta$. The line segment $\gamma_1(s) = \overline{p p'}$ is given by
$$\gamma_1(s) = ( s w_1(p), \cdots, s w_k(p), w_{k+1} (p),\cdots, w_n(p) ) ,\quad s\in [0,1]$$
and its pull-back to $X$, $\hat \gamma_1(s) = \pi^{-1}(\gamma_1(s))$, is locally given by
$$\hat \gamma_1(s) = ( s w_1(p), \frac{w_2(p)}{w_1(p)}, \cdots,\frac{w_{k}(p)}{w_1(p)}, w_{k+1}(p),\ldots, w_{n}(p) ),\quad s\in [0,1].$$
By the estimate in Lemma \ref{lemma 1.5}, it follows that
{\small
\begin{equation*}
\int_{\hat \gamma_1}\sqrt{ g_t( \hat \gamma_1',\hat \gamma_1 ' ) } \le C \int_{\hat \gamma_1} \sqrt{ \frac{g_X(\hat\gamma_1',\hat \gamma_1')}{s^{2(1-\delta)} |\mathbf{w}(p)|^{2(1-\delta)} } } \le C \int_{\hat \gamma_1} \frac{\sqrt{\pi^*\omega_Y( \hat \gamma_1',\hat \gamma_1 ' )}}{ s^{1-\delta} |\mathbf{w}(p)|^{1-\delta} } \le C |\mathbf{w}(p)|^\delta\le C d^\delta,
\end{equation*}
}
where $\mathbf{w}(p) = ( w_1(p),\cdots, w_k(p))$. By Fatou's lemma and letting $t\to T^-$, we get
\begin{equation*}
L_{\omega_T}(\gamma_1) = \int_{\hat \gamma_1} \sqrt{ g_T( \hat \gamma_1' , \hat \gamma_1' ) } \le C d^\delta.
\end{equation*} Similarly, the line segment $\gamma_2 = \overline{q q'}$ also satisfies $L_{\omega_T}(\gamma_2)\le C d^\delta$. Now we define a piecewise smooth curve $$\bar \gamma = \gamma_1 + \gamma_0 + \gamma_2, $$
which joins $p$ to $q$ and lies entirely outside $N$, except the two points $p'$ and $q'$. And combining the estimates above we get
$$L_{\omega_T}(\bar \gamma) = L_{\omega_T}(\gamma_1) + L_{\omega_T}(\gamma_0) + L_{\omega_T}(\gamma_2)\le C d^\delta. $$
Then by definition $$\tilde d_T (p,q)\le L_{\omega_T} (\bar \gamma) \le C d^\delta = C d_{\omega_Y}(p,q)^\delta. $$
From the discussions above, \eqref{eqn:my 12} follows for $\delta_0 = \min(\delta_Y/2, \delta)$.
\end{proof}
We now extend the distance function $\tilde d_T$ to $Y$. For any $p\in Y\backslash N$ and $q\in N$, we define the distance
\begin{equation}\label{eqn:dT}
d_T(p,q):= \lim_{i\to\infty} \tilde d_T(p, q_i),
\end{equation}
where $\{q_i\}\subset Y\backslash N$ is a sequence of points such that $d_{\omega_Y} (q,q_i)\to 0$. We need to justify that $d_T$ is well-defined, i.e. that the limit exists and is independent of the choice of the sequence $\{q_i\}$.
\begin{lemma}\label{lemma 1.11}
The limit in \eqref{eqn:dT} exists and for any other sequence $\{q_i'\}\subset Y\backslash N$ converging to $q$ in $(Y,\omega_Y)$, the following holds
\begin{equation*}
\lim_{i\to \infty} \tilde d_T(p,q_i) = \lim_{i\to \infty} \tilde d_T(p, q_i').
\end{equation*}
\end{lemma}
\begin{proof}
This is in fact an immediate consequence of Lemma \ref{lemma 1.10}. Observe that
\begin{equation*}
| \tilde d_T(p,q_i) - \tilde d_T(p,q_j) | \le \tilde d_T(q_i, q_j) \le C d_{\omega_Y}(q_i,q_j)^{\delta_0}\to 0,\quad \text{as }i,j\to\infty.
\end{equation*}
Thus $\{\tilde d_T(p,q_i)\}_{i=1}^\infty$ is a Cauchy sequence and hence converges. On the other hand, similarly we have
$$|\tilde d_T(p,q_i) - \tilde d_T(p,q_i') |\le C d_{\omega_Y}(q_i,q_i')^{\delta_0}\to 0,\quad \text{as }i\to \infty, $$
and it then follows that the limit is independent of the choice of $\{q_i\}$ converging to $q$.
\end{proof}
We then define the distance between points in $N$ as follows: for any $p,q\in N$
\begin{equation*}
d_T(p,q): = \lim_{i\to \infty} \tilde d_T ( p_i, q_i ),
\end{equation*}
for two sequences $Y\backslash N\supset\{p_i\}\to p$ and $Y\backslash N\supset \{q_i\}\to q$ under $d_{\omega_Y}$. It can be checked, similarly to Lemma \ref{lemma 1.11}, that the limit exists and is independent of the choice of sequences converging to $p$ or $q$. Thus $(Y, d_T)$ is a compact metric space, since $d_T(p,q)\le C d_{\omega_Y}(p,q)^{\delta_0}$ for any $p,q\in Y$, which follows from Lemma \ref{lemma 1.10}.
We now turn to the Gromov-Hausdorff convergence of the flow. The proof is motivated by \cite{RZ} (see also \cite{TWY, GTZ,FGS}).
\begin{lemma}
For any $t_i\to T^-$, there exists a subsequence which we still denote by $\{t_i\}$ such that as compact metric spaces $$(X,\omega_{t_i})\xrightarrow{d_{GH}} (Z,d_Z) $$for some compact metric space $(Z,d_Z)$.
\end{lemma}
\begin{proof}
For any $\epsilon>0$, we choose an $\epsilon$-net $\{x_j\}_{j=1}^{N_{i,\epsilon}}\subset (X,\omega_{t_i})$, in the sense that $d_{\omega_{t_i}}(x_j, x_{j'})>\epsilon$ and the open balls $\{B_{\omega_{t_i}}(x_j, 2\epsilon)\}_{j}$ cover $(X,\omega_{t_i})$. From Lemma \ref{lemma 1.6}, we have
$$\epsilon < d_{\omega_{t_i}} (x_j, x_{j'}) \le C d_{\omega_0} (x_j, x_{j'})^\delta,$$ thus under the fixed metric $d_{\omega_0}$, each pair of distinct points $(x_j, x_{j'})$ from the $\epsilon$-net has distance at least $C^{-1/\delta} \epsilon^{1/\delta}$, so the balls $\{B_{\omega_0}(x_j, C^{-1/\delta} \epsilon^{1/\delta}/2)\}_j $ are disjoint, and for some $c>0$
$$ N_{i,\epsilon}\, c\, \epsilon^{2n/\delta} = \sum_{j=1}^{N_{i,\epsilon}} c\, \epsilon^{2n/\delta} \le \int_{\cup_j B_{\omega_0}(x_j, C^{-1/\delta} \epsilon^{1/\delta}/2)} \omega_0^n \le \int_X \omega_0^n,$$ from which we derive an upper bound $N_{i,\epsilon}\le N_\epsilon$ which is independent of $i$. Then by Gromov's precompactness theorem (\cite{Gr}), there exists a compact metric space $(Z,d_Z)$ such that, up to a subsequence, $(X,\omega_{t_i})\xrightarrow{d_{GH}} (Z,d_Z)$.
\end{proof}
\begin{lemma}\label{lemma 1.13}
There exists an open and dense subset $Z^\circ \subset Z$ such that $(Z^\circ, d_Z)$ and $(Y\backslash N, d_T)$ are homeomorphic and locally isometric.
\end{lemma}
\begin{proof}
For notation convenience we denote $Y^\circ = Y\backslash N$. The maps $\pi_i = \pi: (X,\omega_{t_i}) \to (Y, \omega_Y)$ are Lipschitz by the estimate $\pi^*\omega_Y \le C \omega_{t_i}$ as in (ii) of Lemma \ref{lemma 1.2}. The target space $(Y,\omega_Y)$ is compact, so by Arzela-Ascoli theorem up to a subsequence of $\{t_i\}$, along the GH convergence $(X,\omega_{t_i})\xrightarrow{d_{GH}}(Z,d_Z)$, the maps $\pi_i \xrightarrow{{GH}} \pi_Z$, for some map $\pi_Z : (Z,d_Z)\to (Y,\omega_Y)$, in the sense that for any $(X,\omega_{t_i})\ni x_i\xrightarrow{d_{GH}} z\in Z$, $\pi_i(x_i)\xrightarrow{d_{\omega_Y}} \pi_Z(z)$ in $Y$. $\pi_Z$ is also Lipschitz from $(Z,d_Z)$ to $(Y,\omega_Y)$, i.e. $d_{\omega_Y}(\pi_Z(z_1),\pi_Z(z_2))\le C d_Z(z_1, z_2)$ for any $z_1,z_2\in Z$. We denote $Z^\circ = \pi_Z^{-1}(Y^\circ)$, and we will show that $\pi_Z|_{Z^\circ}: (Z^\circ, d_Z) \to (Y^\circ, d_T)$ is homeomorphic and locally isometric, and $Z^\circ \subset Z$ is open and dense. The openness of $Z^\circ \subset Z$ follows from the continuity of the map $\pi_Z : (Z,d_Z)\to (Y,d_{\omega_Y})$ and the fact that $Y^\circ\subset Y$ is open.
\noindent $\bullet$ {\bf $\pi_Z|_{Z^\circ}$ is injective:} suppose $z_1, z_2\in Z^\circ = \pi_Z^{-1}(Y^\circ)$ are mapped to the same point $y\in Y^\circ$, $\pi_Z(z_1) = \pi_Z(z_2) = y$. Since $(Y^\circ, \omega_T)$ is an incomplete smooth Riemannian manifold and locally in $Y^\circ$, $d_T$ is induced from the Riemannian metric, we can find a small $r = r_y>0$ such that the metric ball $(B_{\omega_T}(y, 2r), \omega_T)$ is geodesically convex. Choose two sequence of points $z_{1,i},z_{2,i}\in (X,\omega_{t_i})$ converging in GH sense to $z_1, z_2\in Z$, respectively. From the convergence of $\pi_i\xrightarrow{GH} \pi_Z$, we obtain $d_{\omega_Y} ( \pi_i(z_{1,i}), \pi_Z(z_1) )\xrightarrow{i\to\infty} 0$ and $d_{\omega_Y} ( \pi_i(z_{2,i}), \pi_Z(z_2) )\xrightarrow{i\to\infty} 0$. By Lemma \ref{lemma 1.10}, the same limits hold with $d_{\omega_Y}$ replaced by $d_T$. In particular this implies that $d_T(\pi_i(z_{1,i}), \pi_{i}(z_{2,i}) )\xrightarrow{i\to\infty} 0$ and both $\pi_{i}(z_{1,i})$ and $\pi_i(z_{2,i})$ lie inside $B_{\omega_T}(y, r/2)$ when $i$ is large enough. We can find $\omega_T$-geodesics $\gamma_i\subset B_{\omega_T} (y, r)$ connecting $\pi_i(z_{1,i})$ and $\pi_i(z_{2,i})$, and by the uniform and smooth convergence of $\omega_{t_i}\to \omega_T$ on $\overline{\pi^{-1}(B_{\omega_T} (y, 2r) ) }$, it follows that
\begin{equation*}
0\le d_{\omega_{t_i}}(z_{1,i}, z_{2,i}) \le L_{\omega_{t_i}} (\hat \gamma_i) \le L_{\omega_T}(\gamma_i) + \epsilon_i = d_{T}( \pi_i(z_{1,i}),\pi_i( z_{2,i}) ) + \epsilon_i \xrightarrow{i\to\infty} 0,
\end{equation*}
where $\hat \gamma_i = \pi^{-1}(\gamma_i)$ is a curve joining $z_{1,i}$ to $z_{2,i}$ and $\{\epsilon_i\}$ is a sequence tending to zero. From the definition of GH convergence we see that $$d_Z(z_1, z_2) = \lim_{i\to\infty}d_{\omega_{t_i}} (z_{1,i}, z_{2,i}) = 0. $$
Hence $z_1 = z_2$ and $\pi_{Z}|_{Z^\circ}$ is injective.
\noindent$\bullet$ {\bf $\pi_{Z} |_{Z^\circ}: (Z^\circ, d_Z) \to (Y^\circ, d_T)$ is a local isometry.} We first explain what the local isometry means. It says that for any $z\in Z^\circ$ and $y = \pi_Z(z)\in Y^\circ$, we can find open sets $z\in U\subset Z^\circ$ and $y\in V\subset Y^\circ$ such that $\pi_Z|_U: (U,d_Z) \to (V,d_T)$ is an isometry.
There exists a small $r = r_y>0$ such that the metric ball $(B_{\omega_T}(y, 3r),\omega_T) \subset Y^\circ$ and is geodesically convex. Take $U = (\pi_Z|_{Z^\circ})^{-1} (B_{\omega_T}(y, r) )$. Since $B_{\omega_T}(y, r)$ is also open in $(Y, \omega_Y)$, it can be seen that $U$ is open in $Z^\circ$ and is a neighborhood of $z\in Z^\circ$. We will show $\pi_Z|_{U}: (U,d_Z) \to ( B_{\omega_T}(y,r),\omega_T )$ is an isometry, i.e. for any $z_1,z_2\in U$, and $y_1 = \pi_Z(z_1)$, $y_2 = \pi_Z(z_2)$, we have $d_Z(z_1, z_2) = d_T(y_1, y_2)$.
We choose sequences of points $z_{1,i},z_{2,i}\in (X,\omega_{t_i})$ converging in GH sense to $z_1, z_2$, respectively, as before. It then follows from $\pi_i \xrightarrow{GH} \pi_Z$ and Lemma \ref{lemma 1.10} that $d_{T}( \pi_i(z_{a,i}), y_a ) \to 0$ as $i\to \infty$, for each $a= 1,2$. In particular when $i$ is large enough, $\pi_i(z_{a,i})\in B_{\omega_T}(y, 1.1 r)$. Choose a minimal $\omega_{t_i}$-geodesic $\hat\gamma_i$ joining $z_{1,i}$ to $z_{2,i}$, and we have $$d_{\omega_{t_i}}(z_{1,i}, z_{2,i}) = L_{\omega_{t_i}}(\hat \gamma_i) \xrightarrow{i\to \infty } d_Z(z_1, z_2). $$ Denote the image $\gamma_i = \pi_i(\hat\gamma_i)$ which is a continuous curve joining $\pi_i(z_{1,i})$ to $\pi_i(z_{2,i})$. If $\gamma_i\subset B_{\omega_T}(y, 3r)$ (for a subsequence of $i$), since $\omega_{t_i} $ converge smoothly and uniformly to $\omega_T$ on the compact subset $\overline{\pi^{-1} ( B_{\omega_T} (y, 3r) ) }$, it follows
\begin{equation*}
d_{T}( \pi_i(z_{1,i}), \pi_i (z_{2,i}) ) \le L_{\omega_T} (\gamma_i) \le L_{\omega_{t_i}} (\hat \gamma_i) + \epsilon_i \xrightarrow{i\to\infty} d_Z(z_1,z_2).
\end{equation*}
In case $\gamma_i\not\subset B_{\omega_T}(y, 3 r)$ for $i$ large enough, we have
\begin{equation*}
d_T( \pi_i(z_{1,i}), \pi_i (z_{2,i}) ) \le 2.5 r \le L_{\omega_T}( \gamma_i\cap B_{\omega_T} (y, 3r) ) \le L_{\omega_{t_i}} (\gamma_i) + \epsilon_i \xrightarrow{i\to\infty} d_Z(z_1, z_2).
\end{equation*}
Observe that $d_T( \pi_i(z_{1,i}), \pi_i (z_{2,i}) )\xrightarrow{i\to \infty} d_{T} (\pi_Z(z_1) , \pi_Z(z_2) ) = d_T(y_1, y_2) $. So by the discussion in both cases, it follows that $d_T(y_1, y_2)\le d_Z(z_1, z_2)$. To see the reverse inequality, by the geodesic convexity of $(B_{\omega_T}(y, 3r), \omega_T )$, we can find minimal $\omega_T$-geodesics $\sigma_i\subset B_{\omega_T}(y, 3r)$ connecting $\pi_i(z_{1,i})$ and $\pi_i(z_{2,i})$ for $i$ large enough. The pulled-back $\hat \sigma_i = \pi^{-1}(\sigma_i)\subset \overline{B_{\omega_T}(y, 3r) }$ joins $z_{1,i}$ to $z_{2,i}$, again by the local smooth convergence of $\omega_{t_i}$ to $\omega_T$, we have
\begin{equation*}
d_{\omega_{t_i}}(z_{1,i}, z_{2,i}) \le L_{\omega_{t_i}} ( \hat \sigma_i )\le L_{\omega_T}( \sigma_i ) + \epsilon_i = d_{T}(\pi_i(z_{1,i}), \pi_i(z_{2,i}) ) + \epsilon_i\xrightarrow{i\to\infty} d_{T} (y_1,y_2),
\end{equation*}
letting $i\to \infty$ we get $d_Z(z_1, z_2)\le d_T(y_1, y_2)$. Thus we show that $d_{Z}(z_1,z_2) = d_T(y_1, y_2)$, as desired.
\noindent $\bullet$ {\bf $\pi_Z|_{Z^\circ}$ is surjective.} This follows from definition. Indeed, for any $y\in Y^\circ$, take $z = z_i = \pi^{-1}(y)\in (X,\omega_{t_i})$, up to a subsequence $z_i\xrightarrow{d_{GH}} z_0\in Z$. Since $\pi_i \xrightarrow{GH} \pi_Z$, we get $d_{\omega_Y} (y, \pi_Z(z_0)) = d_{\omega_Y} ( \pi_i(z_i), \pi_Z(z_0) ) \to 0$ as $i\to \infty$. So $\pi_Z(z_0) = y$ and $z_0\in Z^\circ$ is the pre-image of $y$ under $\pi_Z|_{Z^\circ}$.
Combining the discussions above, we see that $\pi_Z|_{Z^\circ}: (Z^\circ, d_Z) \to (Y^\circ, d_T)$ is a bijection and thus a homeomorphism (noting that the continuity of the maps $\pi_Z|_{Z^\circ}$ and $(\pi_Z|_{Z^\circ})^{-1}$ follow from the local isometry property).
It only remains to show that $Z^\circ \subset Z$ is dense. Suppose not; then there exists a point $z_0\in Z$ such that $B_{d_Z} (z_0,\bar \varepsilon) \subset Z\backslash Z^\circ$ for some $\bar \varepsilon>0$. Choose a sequence of points $x_{i}\in (X,\omega_{t_i})$ such that $x_{i}\xrightarrow{d_{GH}} z_0$. We claim that $d_{\omega_{t_i}}(x_{i}, E) \to 0$ as $i\to \infty$, where $E$ is the exceptional divisor of the blow-down map $\pi: X\to Y$. If not, then $d_{\omega_{t_i}}(x_{i}, E)\ge a_0>0$ for a sequence of large $i$'s; by Lemma \ref{lemma 1.6}, under the fixed metric $\omega_0$ we have $d_{\omega_0} (x_{i}, E)\ge C^{-1/\delta} a_0^{1/\delta}>0$, thus $\{x_{i}\} \subset K$ for some compact subset $K\Subset X\backslash E$. It then follows that $\pi_i(x_{i})\in \pi(K)\Subset Y^\circ$, and this contradicts the fact that $d_{\omega_Y}( \pi_i(x_{i}), \pi_Z(z_0) )\to 0$ and $\pi_Z(z_0)\not\in Y^\circ$. Therefore, we may assume without loss of generality that $x_{i}\in E$ for all $i$. Moreover, from Lemma \ref{lemma SW} below, we may replace $x_{i}\in E$ by the point where the fiber of the $\mathbb{CP}^{k-1}$-bundle $\hat \pi: E\to N$ through $x_i$
meets the zero section $\hat N$. So we can assume in addition that $x_{i}\in \hat N$. Denote the points $y_{i} = \pi_i(x_i)\in N$ and $y_0 = \pi_Z(z_0)\in N$. From $\pi_i\xrightarrow{GH} \pi_Z$ and $x_i\xrightarrow{d_{GH}} z_0$, we have $d_{\omega_Y}(y_i, y_0)\to 0$ as $i\to \infty$.
We may choose a coordinates chart $(V, w_j)$ as before, which is centered at $y_0$ and contains all but finitely many $y_i$, and $N\cap V = \{w_1 = \cdots = w_k = 0\}$. We take an open set $(U,z_j)$ over $(V,w_j)$, such that the map $\pi: U\to V$ is expressed as in \eqref{eqn:map pi}. We fix a point $p\in V\backslash N$ whose $w$-coordinate is $w(p) = (r, 0,\cdots, 0)$ for some $r>0$ to be determined. Take $\hat p = \pi^{-1}(p)$ and its $z$-coordinate is $z(\hat p) = (r,0,\cdots, 0)$. The point(s) $\hat p_i = \hat p\in (X,\omega_{t_i})$ converge (up to a subsequence) in GH sense to some point $p_Z\in Z$, and as above, we have $d_{\omega_Y}(p, \pi_Z(p_Z)) = d_{\omega_Y}(\pi_i(\hat p_i), \pi_Z(p_Z)) \to 0$ as $i\to\infty$, so $p = \pi(p_Z)\in Y^\circ$ and $p_Z\in Z^\circ$. From the assumption we have $d_Z(z_0, p_Z)\ge \bar \varepsilon>0$. On the other hand, by the local expressions \eqref{eqn:pullback} and \eqref{eqn:pullback 1} of $\omega_X = \pi^*\omega_Y + \varepsilon_0 i\partial\bar\partial \log \sigma_X$, we find that line segments $\overline{\hat p \hat z_0} + \overline{\hat z_0 x_i}$ in $(U,z_j)$ have $\omega_X$-length $\le C r + \epsilon_i$ for some sequence $\epsilon_i\to 0$, where we denote $\hat z_0 = \pi^{-1}(p_0)\cap \hat N$, i.e. $\hat z_0$ is the origin in $(U,z_j)$. So $d_{\omega_0}(\hat p_i, x_i)\le C (r + \epsilon_i)$ and by Lemma \ref{lemma 1.6}, $d_{\omega_{t_i}}(\hat p_i, x_i)\le C(r + \epsilon_i )^\delta$. Letting $i\to\infty$ we get $d_Z(p_Z, z_0)\le C r^\delta$. If we choose $r$ small such that $C r^{\delta} = \bar \varepsilon /2$, we would get a contradiction. Therefore $Z^\circ\subset Z$ is dense.
\end{proof}
By exactly the same proof as that of Lemma 3.2 in \cite{SW1}, we have
\begin{lemma}\label{lemma SW}
There is a uniform constant $C>0$ such that
\begin{equation*}
\mathrm{diam}( \hat \pi^{-1}(y), \omega_t )\le C (T-t)^{1/3},\quad \forall t\in [0,T), \text{ and }\forall y\in N.
\end{equation*}
\end{lemma}
That is to say, the diameters of the fibers of $\hat\pi: E\to N$ degenerate at a uniform rate as $O((T-t)^{1/3})$.
\begin{lemma}
The map $\pi_Z : (Z,d_Z) \to (Y,d_T)$ is a homeomorphism.
\end{lemma}
Note that the target space is equipped with the metric $d_T$, not the metric $d_{\omega_Y}$.
\begin{proof}
From Lemma \ref{lemma 1.10}, we get for any $z_1, z_2\in Z$
\begin{equation*}
d_T(\pi_Z(z_1), \pi_Z(z_2) )\le C d_{\omega_Y} ( \pi_Z(z_1),\pi_Z(z_2) )^{\delta_0}\le C d_Z(z_1,z_2)^{\delta_0},
\end{equation*}
so the map $\pi_Z: (Z,d_Z) \to (Y,d_T)$ is continuous.
\noindent$\bullet$ {\bf $\pi_Z$ is injective.} Suppose $z_1,z_2\in Z$ satisfy $\pi_Z(z_1) = \pi_Z(z_2) = y\in Y$. If $y\in Y^\circ$, then $z_1,z_2\in Z^\circ$, and $z_1 = z_2$ by the injectivity of $\pi_Z|_{Z^\circ}$. So we only need to consider the case $y\in Y\backslash Y^\circ = N$, and thus $z_1, z_2 \in Z\backslash Z^\circ$. Pick sequences of points $x_{1,i}, x_{2,i}\in (X,\omega_{t_i})$ converging in the GH sense to $z_1, z_2$, respectively. By similar arguments as in the proof of Lemma \ref{lemma 1.13}, without loss of generality we can assume $x_{1,i}, x_{2,i}\in \hat N\subset E$. Denote $y_{1,i} = \pi(x_{1,i})$ and $y_{2,i} = \pi(x_{2,i})$. We then have
\begin{equation*}
d_{\omega_Y} ( y_{1,i}, y ) = d_{\omega_Y}( \pi_i(x_{1,i}), \pi_Z(z_1) )\xrightarrow{i\to\infty} 0,
\end{equation*}
and similarly $d_{\omega_Y} (y_{2,i}, y )\to 0$ as well, and this implies that $d_{\omega_Y}(y_{1,i}, y_{2,i})\to 0$. Since $x_{1,i}$ and $x_{2,i}$ are both in the zero section $\hat N$, from the local expressions \eqref{eqn:pullback} and \eqref{eqn:pullback 1} of the metric $\omega_X = \pi^*\omega_Y + \varepsilon_0i\partial\bar\partial \log \sigma_X$, we see that $d_{\omega_X} (x_{1,i}, x_{2,i})\to 0$ as $i\to\infty$. Then by Lemma \ref{lemma 1.6} again, we get $d_{\omega_{t_i}}(x_{1,i}, x_{2,i})\le C d_{\omega_X} (x_{1,i}, x_{2,i})^\delta \to 0$. Letting $i\to\infty$ we get $d_Z(z_1, z_2) = 0$, thus $z_1 = z_2$. This proves the injectivity of $\pi_Z$.
\noindent $\bullet$ {\bf $\pi_Z$ is surjective.} This follows from the definition. In fact, we only need to show any $p \in Y\backslash Y^\circ = N$ lies in the image of $\pi_Z$. We fix the point $\hat p\in \hat N$ with $\hat \pi(\hat p) = p$. $\hat p_i = \hat p\in (X,\omega_{t_i})$ converge up to subsequence in GH sense to a point $p_Z\in Z$. Then $d_{\omega_Y} (p, \pi_Z(p_Z)) = d_{\omega_Y} ( \pi_i (\hat p_i), \pi_Z(p_Z) ) \to 0$ by definition of $\pi_i\xrightarrow{GH} \pi_Z$. It then follows that $\pi_Z(p_Z) = p$.
Thus, $\pi_Z: (Z,d_Z) \to (Y, d_T)$ is bijective and continuous. It is also a homeomorphism since $(Z,d_Z)$ is compact.
\end{proof}
\noindent{\bf Acknowledgements} The author would like to thank Professors Duong H. Phong and Jian Song for their constant support, encouragement and inspiring discussions. He also wants to thank Xiangwen Zhang and Teng Fei for many helpful suggestions. This work is supported in part by National Science Foundation grant DMS-1710500.
\end{document}
\begin{document}
\title{Multistage Online Maxmin Allocation of \\ Indivisible Entities\thanks{Research supported by the
Research Grants Council, Hong Kong, China (project no.~16207419).}}
\author{Siu-Wing Cheng\footnote{Department of Computer Science and Engineering,
HKUST, Hong Kong, China.}}
\date{}
\maketitle
\begin{abstract}
We consider an online allocation problem that involves a set $P$ of $n$ players and a set $E$ of $m$ indivisible entities over discrete time steps $1,2,\ldots,\tau$. At each time step $t \in [1,\tau]$, for every entity $e \in E$, there is a restriction list $L_t(e)$ that prescribes the subset of players to whom $e$ can be assigned and a non-negative value $v_t(e,p)$ of $e$ to every player $p \in P$. The sets $P$ and $E$ are fixed beforehand. The sets $L_t(\cdot)$ and values $v_t(\cdot,\cdot)$ are given in an online fashion. An allocation is a distribution of $E$ among $P$, and we are interested in the minimum total value of the entities received by a player according to the allocation. In the static case, it is NP-hard to find an optimal allocation that maximizes this minimum value. On the other hand, $\rho$-approximation algorithms have been developed for certain values of $\rho \in (0,1]$. We propose a $w$-lookahead algorithm for the multistage online maxmin allocation problem for any fixed $w \geqslant 1$ in which the restriction lists and values of entities may change between time steps, and there is a fixed stability reward for an entity to be assigned to the same player from one time step to the next. The objective is to maximize the sum of the minimum values and stability rewards over the time steps $1, 2, \ldots, \tau$. Our algorithm achieves a competitive ratio of $(1-c)\rho$, where $c$ is the positive root of the equation $wc^2 = \rho (w+1)(1-c)$. When $w = 1$, it is greater than $\frac{\rho}{4\rho+2} + \frac{\rho}{10}$, which improves upon the previous ratio of $\frac{\rho}{4\rho+2 - 2^{1-\tau}(2\rho+1)}$ obtained for the case of 1-lookahead.
\end{abstract}
\section{Introduction}
Distributing a set $E$ of indivisible \emph{entities} to a set $P$ of \emph{players} is a very common optimization problem. The problem can model an assignment of non-preemptable computer jobs to machines, a division of tasks among workers, the allocation of classrooms to lectures, etc. The \emph{value} of an entity $e \in E$ to a player $p \in P$ is usually measured by a non-negative real number. In the single-shot case, the problem is to assign the entities to the players in order to optimize some function of the values of entities received by the players. Every entity is assigned to at most one player.
The problem of maximizing the minimum total value of entities assigned to a player is known as the \emph{maxmin fair allocation} or the \emph{Santa Claus} problem. No polynomial-time algorithm can give an approximation ratio less than 2 unless P = NP~\cite{BD05}. An LP relaxation of the problem, called configuration LP, has been developed; although its size is exponential, it can be solved by the ellipsoid method in polynomial time without an explicit construction of the entire LP~\cite{BS06}. A polynomial-time algorithm was developed to round the fractional solution of the configuration LP to obtain an $\Omega(n^{-1/2}\log^{-3} n)$ approximation ratio~\cite{AS10}. Subsequently, the approximation ratio was improved to $\Omega((n\log n)^{-1/2})$~\cite{SS}. A tradeoff was obtained in~\cite{CCK09} between the approximation ratio and the exponent in the polynomial running time:~for any $\eps \geqslant 9\log\log n/\log n$, an $\Omega(n^{-\eps})$-approximate allocation can be computed in $n^{O(1/\eps)}$ time. An important special case, the \emph{restricted} maxmin allocation problem, is that for every entity $e$, the value of $e$ is the same for players who want it and zero for the other players. In this case, the configuration LP can be used to give an~$\Omega(\log\log\log n/\log\log n)$-approximate allocation~\cite{BS06}. Later, it was shown that the approximation ratio can be bounded by a large, unspecified constant~\cite{F,HSS}. Subsequently, for any $\delta \in (0,1)$, the approximation ratio has been improved to $\frac{1}{6 + 2\sqrt{10} + \delta}$ in~\cite{AKS}, $\frac{1}{6+\delta}$ in~\cite{CM18,DRZ18}, and $\frac{1}{4+\delta}$ in~\cite{CM19,DRZ20}.
Recently, there has been interest in solving online optimization problems in a way that balances the optimality at each time step and the stability of the solutions between successive time steps~\cite{BEM18,BCN14,BCNS12,CCDL16,GTW14}. In the context of allocating indivisible entities, the following setting has been proposed in~\cite{BEM18}. The sets of players and entities are fixed over a time horizon $t = 1, 2, \ldots, \tau$. The value of $\tau$ may not be given in advance. At the current time $t$, for every entity $e$, we are given the restriction list $L_t(e)$ of players to whom $e$ can be assigned and the value $v_t(e,p)$ of $e$ for every player $p \in P$. We assume that $v_t(e,p) = 0$ if $p \not\in L_t(e)$. In the strict online setting, no further information is provided. In the $w$-lookahead setting for any $w \geqslant 1$, we are given $L_{t+i}(\cdot)$ and $v_{t+i}(\cdot,\cdot)$ for every $i \in [0,w]$ at time $t$. Note that $L_t(e) \subseteq P$; if $p \not= q$, $v_t(e,p)$ and $v_t(e,q)$ may be different; if $s \not= t$, $L_s(e)$ and $L_t(e)$ may be different and so may $v_s(e,p)$ and $v_t(e,p)$. At the current time $t$, we need to decide irrevocably an allocation $A_t$ of the entities to the players so that the constraints given in $L_t(\cdot)$ are satisfied. The objective is to maximize $\sum_{t=1}^\tau \min_{p \in P} \bigl\{\sum_{(e,p) \in A_t} v_t(e,p)\bigr\} + \Delta \cdot \sum_{t=1}^{\tau-1} \bigl|A_t \cap A_{t+1}\bigr|$, where $\Delta$ is some fixed non-negative value specified by the user. The first term is the sum of the minimum total value of entities assigned to a player at each time $t$. A stability reward of $\Delta$ is given for keeping an entity at the same player between two successive time steps. The second term is the sum of all stability rewards over all entities and all pairs of successive time steps. The following results are obtained in~\cite{BEM18} for the multistage online maxmin allocation problem. Let $\mathcal{A}$ be a $\rho$-approximation algorithm for some $\rho \leqslant 1$ for the single-shot maxmin allocation problem. If $L_t(e) = P$ for every $e \in E$ and every $t \in [1,\tau]$, one can use $\mathcal{A}$ to obtain a competitive ratio of $\frac{\rho}{\rho+1}$. It takes $O(mn+T(m,n))$ time at each time step, where $T(m,n)$ denotes the running time of $\mathcal{A}$. When the restriction lists $L_t(\cdot)$ are arbitrary subsets of $P$, it is impossible to achieve a bounded competitive ratio in the strict online setting. On the other hand, using 1-lookahead, one can obtain a competitive ratio of $\frac{\rho}{4\rho + 2 - 2^{1-\tau}(2\rho+1)}$. It takes $O(mn + T(m,n))$ time at each time step.
Two examples for the multistage online maxmin allocation problem are as follows. Given a set of computing servers and some daily analytic tasks, the goal is to assign the executions of these tasks to the servers so that the minimum utilization of a server is maximized. On each day, a task may only be executable at a particular subset of the servers due to resource requirements and data availability. Moreover, there is a fixed gain in system efficiency by executing the same task at the same server on two successive days. As the allocation of the daily analytic tasks to servers has to be performed quickly, one may choose $\mathcal{A}$ to be a polynomial-time approximation algorithm. Nevertheless, in some planning problems, one may have enough time to solve the single-shot maxmin allocation problem exactly. Consider an example in which a construction company is to produce assignments of engineers to different construction sites on an annual basis. Due to expertise and other considerations, an engineer can only work at a subset of the sites in the coming year. The company wants to maximize the minimum annual progress of a site, and there is a fixed gain in efficiency in keeping an engineer at the same site from one year to the next. If only a moderate number of engineers are involved, there may be enough time to take $\mathcal{A}$ to be an exact algorithm for solving the single-shot maxmin allocation problem.
In this paper, we improve the competitive ratio for the multistage online maxmin allocation problem and generalize to the case of $w$-lookahead for any fixed $w \geqslant1$. We design a new online algorithm that achieves a competitive ratio of $(1-c)\rho$, where $c$ is the positive root of the equation $wc^2 = \rho(w+1)(1-c)$. Our algorithm takes $O(wmn\log (wn) + w\cdot T(m,n))$ time at each time step. The total time spent in invoking $\mathcal{A}$ for the entire time horizon $[1,\tau]$ is $O(\tau \cdot T(m,n))$. If $w = 1$, our competitive ratio is greater than $\frac{\rho}{4\rho+2} + \frac{\rho}{10}$, which is better than the ratio of $\frac{\rho}{4\rho+2 - 2^{1-\tau}(2\rho+1)}$ for the case of 1-lookahead in~\cite{BEM18}.
\section{Notation}
Let $I_t$ denote the input instance at time $t$ which specifies $L_t(\cdot)$ and $v_t(\cdot,\cdot)$. Let $I_{a:b}$ denote the set of input instances $I_a, I_{a+1},\ldots, I_b$. An allocation $C_t$ for $I_t$ is a set of ordered pairs $(e,p)$ for some $e \in E$ and $p \in P$ such that $p \in L_t(e)$ and every $e$ belongs to at most one pair in $C_t$. We use $C_{a:b}$ to denote the set of allocations $C_a,\ldots, C_b$ for the input instances $I_{a:b}$. For every entity~$e$, $C_t[e]$ denotes the assignment of $e$ at time $t$ specified in $C_t$. It is possible that $e$ is unassigned at time $t$. For any interval $[a,b] \subseteq [1,\tau]$ and any entity $e$, we use $C_{a:b}[e]$ to denote the set of assignments $C_a[e],\ldots,C_b[e]$.
An alternative way to view $C_{1:\tau}$ is that it specifies a sequence of disjoint time intervals for every entity $e$. In each time interval, $e$ is assigned to a single player. We call these intervals \emph{assignment intervals}. Our online algorithm generates a set of allocations $C_{1:\tau}$ by specifying these assignment intervals for the entities. Because our algorithm does not know all future instances, it may generate two consecutive assignment intervals in which $e$ is assigned to the same player. Ideally, we would like to merge such a pair of intervals; however, it is more convenient for our analysis to keep them separate. Therefore, we do not assume that an assignment interval is a maximal interval such that $e$ is assigned to the same player, although this would be the case for the optimal offline solution for $I_{1:\tau}$.
Given $C_{1:\tau}$ and any $[a,b] \subseteq [1,\tau]$, the assignment interval endpoints in $C_{a:b}$ refer to the endpoints of the assignment intervals in $C_{1:\tau}$ that lie in $[a,b]$. The assignment intervals in $C_{a:b}$ refer to the assignment intervals in $C_{1:\tau}$ that are contained in $[a,b]$. For every entity $e$, we can similarly interpret the notions of assignment interval endpoints in $C_{a:b}[e]$ and assignment intervals in $C_{a:b}[e]$. Due to the constraints posed by $L_t(\cdot)$, it is possible that an entity $e$ is unassigned at some time step, so there may be a gap between an assignment interval end time and the next assignment interval start time in $C_{1:\tau}[e]$.
Take any set of allocations $C_{1:\tau}$ for $I_{1:\tau}$.
Define the following quantities:
\begin{align*}
\nu(C_t) & = \min_{p \in P} \left\{\sum_{(e,p) \in C_t} v_t(e,p)\right\}, \\
\nu(C_{a:b}) & = \sum_{t=a}^b \nu(C_t), \\
\lambda(C_{t:t+1}[e]) & = \left\{\begin{array}{lcl}
\Delta, & & \mbox{if $[t,t+1]$ is contained in an assignment interval of $e$}; \\
0, & & \mbox{otherwise},
\end{array}\right. \\
\lambda(C_{a:b}[e]) & = \sum_{t=a}^{b-1} \lambda(C_{t:t+1}[e]), \\
\lambda(C_{a:b}) & = \sum_{e \in E} \lambda(C_{a:b}[e]).
\end{align*}
We call $\lambda(C_{a:b})$ the \emph{stability value} of $C_{a:b}$. The value $\lambda(C_{t:t+1}[e])$ is the \emph{stability reward of $e$ from $t$ to $t+1$}.
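For concreteness, the following minimal Python sketch evaluates $\nu(\cdot)$ and $\lambda(\cdot)$ for a toy instance. The data layout (allocations as sets of entity--player pairs, values as dictionaries) is our own illustrative choice and is not prescribed by the problem statement.
\begin{verbatim}
# Minimal sketch: C[t] is a set of (entity, player) pairs,
# v[t][(e, p)] is the non-negative value of e to p at time t.
def nu(C_t, v_t, players):
    # minimum total value received by a player under C_t
    totals = {p: 0.0 for p in players}
    for (e, p) in C_t:
        totals[p] += v_t.get((e, p), 0.0)
    return min(totals.values())

def stability(C, Delta, a, b):
    # lambda(C_{a:b}): Delta per entity kept at the same player
    # between consecutive time steps
    return sum(Delta * len(C[t] & C[t + 1]) for t in range(a, b))

players = ['p1', 'p2']
v = {1: {('e1', 'p1'): 3.0, ('e2', 'p2'): 2.0},
     2: {('e1', 'p1'): 1.0, ('e2', 'p2'): 4.0}}
C = {1: {('e1', 'p1'), ('e2', 'p2')},
     2: {('e1', 'p1'), ('e2', 'p2')}}
total = sum(nu(C[t], v[t], players) for t in (1, 2)) \
        + stability(C, Delta=1.0, a=1, b=2)
print(total)  # 2.0 + 1.0 + 2 * 1.0 = 5.0
\end{verbatim}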
Our online algorithm requires a $w$-lookahead for any fixed $w \geqslant 1$. That is, the input instances $I_{t+i}$ for all $i \in [0,w]$ are given at the current time step $t$.
We assume that $I_{\tau+j}$ for any $j \geqslant 1$ is an empty input instance (i.e., an instance in which $L_{\tau+j}(\cdot)$ are empty sets and $v_{\tau+j}(\cdot,\cdot)$ are zeros) so that we can talk about the $w$-lookahead at $\tau - i$ for any $i \in [0,w-1]$.
\section{Multistage online maxmin allocation}
\subsection{Overview and periods}
We start off by initializing a set $S_{1:\tau}$ of empty allocations. Then, we use a greedy algorithm to update $S_{1:1+w}$ to be a set of allocations that maximize the stability value with respect to $I_{1:1+w}$. We also use $S_{1:1+w}$ to generate the first \emph{period} as follows.
The time step 1 is taken as a default \emph{period start time}. In general, suppose that the current time step $s$ is a period start time. Then, we use a greedy algorithm to compute the assignment intervals for some entities for $I_{s:s+w}$ provided by the $w$-lookahead. This gives an updated $S_{s:s+w}$. Every assignment interval end time in $S_{s:s+w}$ is a \emph{candidate end time}. The time step $s+w$ is a default candidate end time. For every assignment interval start time $i$ in $S_{s+1:s+w}$, $i-1$ is also a candidate end time. (If $s$ is an assignment start time, $s$ does not induce $s-1$ as a candidate end time.) Let $t$ be the smallest candidate end time within $[s,s+w]$. Then, $[s,t]$ is the next period. It is possible that $t = s+w$. It is also possible that $t$ lies inside an assignment interval of an entity $e$ in $S_{1:s+w}[e]$. To determine the allocations for $[s,t]$, we compute a set of allocations $B_{s:t}$ by running the $\rho$-approximation algorithm $\mathcal{A}$ on the instances $I_{s:t}$. By a judicious comparison of $\nu(B_{s:t})$ and $\lambda(S_{s:t})$, we set the allocations $A_{s:t}$ to be $S_{s:t}$ or $B_{s:t}$. $A_s$ will be returned at the current time step $s$; for each future time step $i \in [s+1,t]$, $A_i$ will be returned. The next period start time is $t+1$ and we will repeat the above at that time.
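To make the period construction concrete, the following small Python sketch (with an illustrative representation of assignment intervals as triples, which is our own choice) derives the next period end time from the candidate end times described above.
\begin{verbatim}
def period_end(intervals, s, w):
    # Candidate end times from S_{s:s+w}: every assignment interval
    # end time in [s, s+w], every start time in [s+1, s+w] minus one,
    # and the default s+w.  The period ends at the minimum candidate.
    t = s + w
    for (start, end, _player) in intervals:   # intervals of all entities
        if s <= end <= s + w:
            t = min(t, end)
        if s + 1 <= start <= s + w:
            t = min(t, start - 1)
    return t

# w = 3: one entity assigned during [10, 11], another during [12, 13].
print(period_end([(10, 11, 'p1'), (12, 13, 'p2')], s=10, w=3))  # 11
\end{verbatim}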
There are two main reasons for our improvement over the result in~\cite{BEM18}. First, we do not recompute after some waiting time that is fixed beforehand. The periods are dynamically generated and updated using the allocations produced by a greedy algorithm. This fact allows us to make better use of the stability values offered by these greedy allocations. Second, the greedy allocations and the $\rho$-approximate allocations are also compared in~\cite{BEM18} in determining the allocation for the current time step; however, our comparison is different because it allows us to reap the potential stability reward from the previous period to the current period, and at the same time, the potential stability reward from the current period to the next.
\subsection{Greedy allocations}
\begin{algorithm}[b]
\begin{algorithmic}[1]
\caption{StableAllocate$(a,b)$}
\label{alg:2}
\FOR{every entity $e$}
\IF{no assignment interval in $S_{1:\tau}[e]$ starts before $a$ and contains $a$}
\STATE{$S_{a:b}[e] \leftarrow \mathrm{StableEntity}(e,a,b)$}
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
During the execution of our online algorithm, we maintain a set of allocations $S_{1:\tau}$ that are initially set to be empty allocations. At any time step, the allocations in $S_{1:\tau}$ are possibly empty beyond some future time due to our limited knowledge of the future instances. Suppose that $a$ is the start time of the next period. Our online algorithm will call StableAllocate$(a,a+w)$, which is shown in Algorithm~\ref{alg:2}, to update $S_{a:a+w}$. StableAllocate works by running a greedy algorithm for some of the entities. Given an entity $e$ and an interval $[a,b]$, a greedy algorithm is described in~\cite{BEM18} to compute some assignment intervals of $e$ within $[a,b]$ that have the maximum stability value with respect to the instances $I_{a:b}$. We give the pseudocode, StableEntity, of this algorithm in Algorithm~\ref{alg:2-2}. For every entity $e$, if no assignment interval in $S_{1:\tau}[e]$ starts before $a$ and contains $a$, we call StableEntity$(e,a,a+w)$ to recompute $S_{a:a+w}[e]$. The updated allocations $S_{a:a+w}$ will serve two purposes. First, they will determine the end time of the next period that starts at $a$. Second, they will help us to determine the allocations that will be returned for the next period.
\begin{algorithm}[t]
\begin{algorithmic}[1]
\caption{StableEntity$(e,a,b)$}
\label{alg:2-2}
\STATE{initialize $C_{a:b}[e]$ to be empty allocations}
\STATE{$i \leftarrow a$}
\WHILE{$i \leqslant b$}
\FOR{every player $q$}
\IF{$q \in L_i(e)$}
\STATE{$k_q \leftarrow \max\{ k \in [i,b] : q \in \bigcap_{j=i}^{k} L_j(e)\}$}
\ELSE
\STATE{$k_q \leftarrow 0$}
\ENDIF
\ENDFOR
\STATE{$p \leftarrow \mathrm{argmax}\{k_q : q \in P \}$}
\IF{$k_p \geqslant i$}
\STATE{add $[i,k_p]$ as an assignment interval to $C_{a:b}[e]$ and assign $e$ to $p$ during $[i,k_p]$}
\STATE{$i \leftarrow k_{p} + 1$}
\ELSE
\STATE{$i \leftarrow i + 1$}
\ENDIF
\ENDWHILE
\STATE{return $C_{a:b}[e]$}
\end{algorithmic}
\end{algorithm}
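For readers who prefer runnable code, the following Python sketch is a compact rendering of StableEntity. The representation of the restriction lists ($L[i]$ as the set of players eligible for the entity at time $i$) is our own illustrative choice, and the inner extension loop is the straightforward variant rather than the faster implementation analyzed in the proof of Theorem~\ref{thm:main}.
\begin{verbatim}
def stable_entity(L, a, b):
    # Greedy assignment intervals for one entity over [a, b].
    # L[i] is the set of players to whom the entity may be assigned
    # at time i.  Returns (start, end, player) triples.
    intervals = []
    i = a
    while i <= b:
        best_player, best_k = None, i - 1
        for q in L[i]:                 # players eligible at time i
            k = i
            while k + 1 <= b and q in L[k + 1]:
                k += 1                 # extend while q stays eligible
            if k > best_k:
                best_player, best_k = q, k
        if best_player is not None:    # corresponds to k_p >= i
            intervals.append((i, best_k, best_player))
            i = best_k + 1
        else:
            i += 1                     # entity unassigned at time i
    return intervals

# Entity assignable to 'p1' during [1, 3] and to 'p2' during [3, 5].
L = {1: {'p1'}, 2: {'p1'}, 3: {'p1', 'p2'}, 4: {'p2'}, 5: {'p2'}}
print(stable_entity(L, 1, 5))  # [(1, 3, 'p1'), (4, 5, 'p2')]
\end{verbatim}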
The following result gives some structural conditions under which the stability value of some allocations $Y_{a:b}[e]$ is at least the stability value of some other allocations $X_{a:b}[e]$. The greediness of StableEntity ensures that these conditions are satisfied by its output when compared with any $X_{a:b}[e]$. As a result, Lemma~\ref{lem:1} proves the optimality of the stability value of the output of StableEntity. The optimality of the greedy algorithm was also proved in~\cite{BEM18}, but we make the structural conditions more explicit in Lemma~\ref{lem:1}.
\begin{lemma}
\label{lem:1}
Let $[a,b]$ be any time interval. Let $X_{a:b}[e]$ and $Y_{a:b}[e]$ be two sets of assignment intervals for $e$ that are contained in $[a,b]$. If the following conditions hold, then for every $j \in [0,b-a]$, $\lambda(X_{a:a+j}[e]) \leqslant \lambda(Y_{a:a+j}[e])$.
\begin{enumerate}[{\em (i)}]
\item The first assignment interval in $Y_{a:b}[e]$ starts no later than the first assignment interval in $X_{a:b}[e]$.
\item If the start time of an assignment interval $J$ in $Y_{a:b}[e]$ lies in an assignment interval $J'$ in $X_{a:b}[e]$, the end time of $J$ is not less than the end time of $J'$.
\item For every $t \in [a,b]$, if $e$ is assigned in $X_t[e]$, then $e$ is also assigned in $Y_t[e]$.
\end{enumerate}
\end{lemma}
\begin{proof}
We show that $\lambda(X_{a:a+j}[e]) \leqslant \lambda(Y_{a:a+j}[e])$ for $j \in [0,b-a]$ by induction on $j$. The base case of $j = 0$ is trivial as both $\lambda(X_{a:a}[e])$ and $\lambda(Y_{a:a}[e])$ are zero by definition. Consider $a+j$ for some $j \in [1,b-a]$. There are two cases depending on the value of $\lambda(X_{a+j-1:a+j}[e])$.
Case~1: $\lambda(X_{a+j-1:a+j}[e]) = 0$. Then, $\lambda(X_{a:a+j}[e]) = \lambda(X_{a:a+j-1}[e]) \leqslant \lambda(Y_{a:a+j-1}[e]) \leqslant \lambda(Y_{a:a+j}[e])$.
Case~2: $\lambda(X_{a+j-1:a+j}[e]) = \Delta$. Some assignment interval $J'$ in $X_{a:b}[e]$ contains $[a+j-1,a+j]$ in this case. Let $p$ be the player to whom $e$ is assigned during $J'$. Let $a+i$ be the start time of $J'$. Note that $i \in [0,j-1]$.
If $i = 0$, then $[a,a+j] \subseteq J'$ and $J'$ is the first assignment interval in $X_{a:b}[e]$. By conditions~(i)~and~(ii), the first assignment interval in $Y_{a:b}[e]$ starts at $a$ and ends no earlier than $a+j$. Therefore, $\lambda(X_{a:a+j}[e]) = \lambda(Y_{a:a+j}[e])$.
Suppose that $i > 0$.
Because $e$ is assigned to $p$ at $a+i$ in $X_{a:b}[e]$, by condition~(iii), there exists an assignment interval $J$ in $Y_{a:b}[e]$ that contains $a+i$. So the start time of $J$ is less than or equal to $a+i$. There are two cases.
\begin{itemize}
\item If the end time of $J$ is at least $a+j$, then $\lambda(X_{a+i:a+j}[e]) = \lambda(Y_{a+i:a+j}[e])$ and hence
\begin{align*}
\lambda(X_{a:a+j}[e]) & = \lambda(X_{a:a+i}[e]) + \lambda(X_{a+i:a+j}[e]) \\
& \leqslant \lambda(Y_{a:a+i}[e]) + \lambda(X_{a+i:a+j}[e])
& (\because \text{induction assumption}) \\
& = \lambda(Y_{a:a+i}[e]) + \lambda(Y_{a+i:a+j}[e]) \\
& = \lambda(Y_{a:a+j}[e]).
\end{align*}
\item The other case is that $J$ ends at some time $t \in [a+i,a+j-1]$. By condition~(ii), the start time of $J$ cannot be $a+i$, which means that the start time of $J$ is less than or equal to $a+i-1$. Since $e$ is assigned to $p$ from $a+i$ to $a+j$ in $X_{a:b}[e]$, condition~(iii) implies that $e$ is assigned in $Y_{a:b}[e]$ at every time step in $[a+i,a+j]$. Therefore, there is another assignment interval $K$ in $Y_{a:b}[e]$ that starts at $t+1$. Condition~(ii) implies that the end time of $K$ is at least $a+j$. There is thus a loss of a stability reward of $\Delta$ for $e$ from $t$ to $t+1$ in $Y_{a+i-1:a+j}[e]$, which matches the loss of a stability reward of $\Delta$ for $e$ from $a+i-1$ to $a+i$ in $X_{a+i-1:a+j}[e]$.
As a result, $\lambda(X_{a+i-1:a+j}[e]) = \lambda(Y_{a+i-1:a+j}[e])$.
Hence,
\begin{align*}
\lambda(X_{a:a+j}[e]) & = \lambda(X_{a:a+i-1}[e]) + \lambda(X_{a+i-1:a+j}[e]) \\
& \leqslant \lambda(Y_{a:a+i-1}[e]) + \lambda(X_{a+i-1:a+j}[e])
& (\because \text{induction assumption}) \\
& = \lambda(Y_{a:a+i-1}[e]) + \lambda(Y_{a+i-1:a+j}[e]) \\
& = \lambda(Y_{a:a+j}[e]).
\end{align*}
\end{itemize}
\end{proof}
\subsection{Online Algorithm}
The pseudocode of our online algorithm, MSMaxmin, is shown in Algorithm~\ref{alg:3}. The parameter $c_0$ in line~\ref{code:3} is a real number chosen from the range $(0,1)$ that will be specified later when we analyze the performance of MSMaxmin.
MSMaxmin initializes $S_{1:\tau}$ to be a set of empty allocations and then iteratively computes $A_1,A_2,\ldots,A_\tau$. At the $s$-th time step, MSMaxmin calls StableAllocate$(s,s+w)$ if $s$ is the start time of the next period. (By default, 1 is the start time of the first period.) This call of StableAllocate updates $S_{1:\tau}$ by changing $S_{s:s+w}$. Afterwards, we determine the period end time $t$ using the assignment interval start and end times in $S_{s:s+w}$. The next task is to determine the allocations $A_{s:t}$ for $I_{s:t}$.
\begin{algorithm}[t]
\begin{algorithmic}[1]
\caption{MSMaxmin}
\label{alg:3}
\STATE{$S_{1:\tau} \leftarrow \text{empty allocations}$}
\STATE{$\mathit{period}\_\mathit{start} \leftarrow 1$}
\FOR{$s$ = 1 to $\tau$}
\IF{$s = \mathit{period}\_\mathit{start}$}
\IF{$s > 1$}
\STATE{$[r,s-1] \leftarrow \mathit{period}$ \quad\quad\quad\quad\quad /* $[r,s-1]$ is the previous period */}
\ENDIF
\STATE{$\mathrm{StableAllocate}(s,s+w)$}
\STATE{$t \leftarrow \min\bigl(s+w,\min\{\beta : \text{assignment interval end time $\beta$ in $S_{s:s+w}$}\}\bigr)$} \label{code:1}
\STATE{$t \leftarrow \min\bigl(t, \min\{\beta-1: \text{assignment start time $\beta$ in $S_{s+1:s+w}$}\}\bigr)$} \label{code:1-1}
\STATE{$\mathit{period} \leftarrow [s,t]$ \quad\quad\quad\quad\quad\quad\quad\quad /* $[s,t]$ is the next period */}
\FOR{$j$ = $s$ to $t$} \label{code:1-3}
\STATE{$B_j\leftarrow \text{$\rho$-approximate maxmin allocation for $I_j$}$}
\ENDFOR \label{code:1-4}
\STATE{$L \leftarrow 0$} \label{code:2-1}
\IF{$s > 1$ and $s \leqslant r+w$ and $A_{s-1} = S_{s-1}$}
\STATE{$L \leftarrow \lambda(S_{s-1:s})$}
\ENDIF
\STATE{$R \leftarrow 0$}
\IF{$t < s+w$}
\STATE{$R \leftarrow \lambda(S_{t:t+1})$}
\ENDIF
\IF{$\nu(B_{s:t}) \geqslant L + \lambda(S_{s:t}) + c_0\cdot R$} \label{code:3}
\STATE{$A_{s:t} \leftarrow B_{s:t}$}
\ELSE
\STATE{$A_{s:t} \leftarrow S_{s:t}$}
\ENDIF
\STATE{$\mathit{period}\_\mathit{start} \leftarrow t+1$} \label{code:2}
\ENDIF
\STATE{output $A_s$}
\ENDFOR
\end{algorithmic}
\end{algorithm}
We invoke the $\rho$-approximation algorithm for the single-shot maxmin allocation problem. Specifically, we compute the $\rho$-approximate allocations $B_{s:t}$ for the instances $I_{s:t}$. That is, for each $i \in [s,t]$, $\nu(B_i) \geqslant \rho \cdot\nu(X_i)$ for any allocation $X_i$ for $I_i$. The $\rho$-approximation algorithm may not take the restriction lists into account. Nevertheless, since we assume that $v_i(e,p) = 0$ if $p \not\in L_i(e)$, we can remove such an assignment $(e,p)$ from $B_i$ without affecting $\nu(B_i)$. Therefore, we assume without loss of generality that every $B_i$ respects the restriction lists $L_i(\cdot)$.
We set $A_{s:t}$ to be $S_{s:t}$ or $B_{s:t}$. It is natural to check whether $\lambda(S_{s:t})$ is larger than $\nu(B_{s:t})$. However, if $s \leqslant r+w$ and $A_{s-1} = S_{s-1}$, where $r$ is the start time of the previous period, then $S_s$ was computed at time $r$ and it is possible that $\lambda(S_{s-1:s}[e]) = \Delta$ for some entity $e$. The call StableAllocate$(s,s+w)$ does not invoke StableEntity for such an entity $e$, and so $S_s[e]$ will be preserved. Therefore, if we set $A_{s:t}$ to be $S_{s:t}$, we will gain the stability reward of $\lambda(S_{s-1:s})$. Similarly, if $t < s+w$, then setting $A_{s:t}$ to be $S_{s:t}$ provides the opportunity to gain the stability reward of $\lambda(S_{t:t+1})$ in the future. On the other hand, if $t = s+w$, we do not know $I_{t+1}$ at time $s$, and we do not compute $S_{t+1}$ at time $s$. In this case, after calling StableAllocate at time $s$, all assignment intervals in the current $S_{1:\tau}$ must end at or before $t = s+w$, implying that there is no stability reward in $S_{t:t+1}$ irrespective of how we will set $S_{t+1}$ in the future. Hence, we compare $\nu(B_{s:t})$ with $\lambda(S_{s:t})$ and possibly $\lambda(S_{s-1:s})$ and $\lambda(S_{t:t+1})$ depending on the situation.
\section{Analysis}
Because MSMaxmin calls StableAllocate from time to time to update $S_{1:\tau}$, the set of allocations $S_{a:b}$ for any $[a,b] \subseteq [1,\tau]$ may change over time. To differentiate these allocations computed at different times, we introduce the notation $S_{i|s}$ to denote $S_i$ at the end of the time step $s$, i.e., at the end of the $s$-th iteration of the for-loop in MSMaxmin. Similarly, $S_{a:b|s}$ denotes $S_{a:b}$ at the end of the time step $s$.
First, we give some properties of the start and end times of periods and assignment intervals.
\begin{lemma}
\label{lem:period}
Let $s$ be a period start time. For every entity $e$,
\begin{enumerate}[{\em (i)}]
\item if $\alpha$ is an assignment interval end time in $S_{s:s+w|s}[e]$, then $\alpha$ will remain an assignment interval end time for $e$ in the future and $\alpha$ will be a period end time;
\item if $\alpha$ is an assignment interval start time in $S_{s+1:s+w|s}[e]$, then $\alpha-1$ will be a period end time, $\alpha$ will remain an assignment interval start time for $e$ in the future, and $\alpha$ will be a period start time.
\end{enumerate}
\end{lemma}
\begin{proof}
Take any entity $e$. For any period start time $s$, the call StableAllocate$(s,s+w)$ invokes StableEntity$(e,s,s+w)$ only if $e$ is unassigned at time $s$ in $S_{1:\tau|s-1}[e]$ or $s$ is an assignment interval start time in $S_{1:\tau|s-1}[e]$. Therefore, by the greediness of StableEntity, if a time step $\alpha\in [s,s+w]$ was already determined by some previous call of StableEntity on $e$ as an assignment interval start or end time, that decision will remain the same in this call StableEntity$(e,s,s+w)$. Therefore, if $\alpha$ is an assignment interval end time for $e$ in $[s,s+w]$ after calling StableAllocate$(s,s+w)$, it will remain an assignment interval end time for $e$ in the future. It will also be a candidate end time used in line~\ref{code:1} of MSMaxmin in the $s$-th iteration of the for-loop and thereafter until $\alpha$ becomes the next period end time. We can similarly argue that if $\alpha$ is an assignment interval start time for $e$ in $[s,s+w]$ after calling StableAllocate$(s,s+w)$, it will remain an assignment interval start time for $e$ in the future. Also, $\alpha-1$ will be a candidate end time used in line~\ref{code:1-1} of MSMaxmin until it becomes the next period end time. When this happens, $\alpha$ will be made the subsequent period start time in line~\ref{code:2} of MSMaxmin.
\end{proof}
Next, we show that for every entity $e$, the stability value of any set of allocations $X_{1:t}[e]$ cannot be much larger than that of $S_{1:t|\alpha}[e]$, where $\alpha$ is the largest assignment interval start time in $S_{1:\tau|t}[e]$ that is less than or equal to $t$, provided that $t \leqslant \alpha+w$.
\begin{lemma}
\label{lem:2}
Let $X_{1:\tau}$ be a set of allocations for $I_{1:\tau}$. Let $t$ be any time step. Let $\alpha$ be the largest assignment interval start time in $S_{1:\tau|t}[e]$ that is less than or equal to $t$. If $t \leqslant \alpha+w$, then $\lambda(X_{1:t}[e]) \leqslant \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:t|\alpha}[e])$.
\end{lemma}
\begin{proof}
We prove the lemma by induction on the assignment interval start time $\alpha$. In the base case, $\alpha$ is the smallest assignment interval start time for $e$ computed by StableEntity. It follows from the greediness of StableEntity that $L_s(e) = \emptyset$ for all $s \in [1,\alpha-1]$, which implies that $X_{1:\alpha-1}[e]$ is empty. By Lemma~\ref{lem:period}, $\alpha$ is a period start time, so MSMaxmin calls StableAllocate$(\alpha,\alpha+w)$ at $\alpha$ which calls StableEntity$(e,\alpha,\alpha+w)$. By the greediness of StableEntity and Lemma~\ref{lem:1}, for every time step $t \in [\alpha,\alpha+w]$, $\lambda(X_{\alpha:t}[e]) \leqslant \lambda(S_{\alpha:t|\alpha}[e])$. Hence, $\lambda(X_{1:t}[e]) = \lambda(X_{\alpha:t}[e]) \leqslant \lambda(S_{\alpha:t|\alpha}[e]) =\lambda(S_{1:t|\alpha}[e])$, i.e., the base case is true.
Consider the induction step. Let $\gamma$ be the largest assignment interval start time in $S_{1:\tau|t}[e]$ that is less than or equal to $t$. To prove the lemma, we are only concerned with the case of $t \leqslant \gamma+w$.
By Lemma~\ref{lem:period}, $\gamma$ is a period start time, so MSMaxmin calls StableAllocate$(\gamma,\gamma+w)$ at $\gamma$ which calls StableEntity$(e,\gamma,\gamma+w)$. By the greediness of StableEntity and Lemma~\ref{lem:1}, we get
\begin{equation}
\lambda(X_{\gamma:t}[e]) \leqslant \lambda(S_{\gamma:t|\gamma}[e]). \label{eq:2-4}
\end{equation}
Let $[\alpha,\beta]$ be the assignment interval in $S_{1:\tau|\gamma}[e]$ before $\gamma$. Note that $\beta \leqslant \alpha+w$. By Lemma~\ref{lem:period}, $\alpha$ is a period start time. So MSMaxmin calls StableAllocate$(\alpha,\alpha+w)$ at $\alpha$ which calls StableEntity$(e,\alpha,\alpha+w)$. The call StableEntity$(e,\gamma,\gamma+w)$ at $\gamma$ cannot modify assignment intervals that end before $\gamma$. Also, as StableAllocate does not call StableEntity for $e$ within $[\alpha+1,\beta]$, $\alpha$ is the last time before $\gamma$ at which the assignment intervals for $e$ were updated by a call of StableEntity. Therefore,
\begin{equation}
S_{1:\beta|\alpha}[e] = S_{1:\beta|\gamma}[e]. \label{eq:2-2-1}
\end{equation}
We claim that:
\begin{equation}
\forall\, s \in [\alpha,\gamma-1], \quad \lambda(X_{1:s}[e]) \leqslant \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:s|\alpha}[e]).
\label{eq:2-1}
\end{equation}
For $s \in [\alpha,\beta]$, we have $\lambda(X_{1:s}[e]) \leqslant \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:s|\alpha}[e])$ by the induction assumption. If $\beta+1\leqslant \gamma-1$, by the greediness of StableEntity, it must be the case that $L_s(e) = \emptyset$ for all $s \in [\beta+1,\gamma-1]$ so that StableEntity does not assign $e$ to any player during $[\beta+1,\gamma-1]$. Therefore, $X_{\beta+1:s}[e]$ is empty for all $s \in [\beta+1,\gamma-1]$, which means that $\lambda(X_{1:s}[e]) = \lambda(X_{1:\beta}[e]) \leqslant \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:\beta|\alpha}[e]) = \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:s|\alpha}[e])$. Hence, \eqref{eq:2-1} holds.
Suppose that $\gamma \leqslant \alpha+w$. In this case, we know the instances $I_{\alpha:\gamma}$ at time $\alpha$. By the greediness of StableEntity and Lemma~\ref{lem:1}, we have
\begin{equation}
\lambda(X_{\alpha:\gamma}[e]) \leqslant \lambda(S_{\alpha:\gamma|\alpha}[e]). \label{eq:2-2}
\end{equation}
As $\gamma \leqslant \alpha+w$, after calling StableEntity$(e,\alpha,\alpha+w)$ at $\alpha$, we already know that $\gamma$ is the start time of the next assignment interval for $e$.
Therefore,
there is no stability reward for $e$ from $\beta$ to $\gamma$ in $S_{\alpha:\gamma|\alpha}[e]$.
Then, it follows from \eqref{eq:2-2-1} that
\begin{equation}
\lambda(S_{\alpha:\gamma|\alpha}[e]) = \lambda(S_{\alpha:\gamma|\gamma}[e]). \label{eq:2-3}
\end{equation}
Hence,
\begin{align*}
\lambda(X_{1:t}[e])
& = \lambda(X_{1:\alpha}[e]) + \lambda(X_{\alpha:\gamma}[e]) + \lambda(X_{\gamma:t}[e]) \\
& \leqslant \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:\gamma|\alpha}[e]) + \lambda(S_{\gamma:t|\gamma}[e]) & (\because \eqref{eq:2-4}, \eqref{eq:2-1}, \text{and}~\eqref{eq:2-2}) \\
& = \frac{w+1}{w}\lambda(S_{1:\alpha|\gamma}[e]) + \lambda(S_{\alpha:\gamma|\gamma}[e]) + \lambda(S_{\gamma:t|\gamma}[e]) & (\because \eqref{eq:2-2-1}~\text{and}~\eqref{eq:2-3}) \\
& \leqslant \frac{w+1}{w}\lambda(S_{1:\gamma|\gamma}[e]) + \lambda(S_{\gamma:t|\gamma}[e]).
\end{align*}
Suppose that $\gamma \geqslant \alpha+w+1$. If $\beta < \alpha+w$, there is a gap $[\beta+1,\gamma-1]$ during which StableEntity does not assign $e$ to any player. It means that $L_s(e) = \emptyset$ for all $s \in [\beta+1,\gamma-1]$. Therefore, $e$ is also unassigned in $X_{\beta+1:\gamma-1}$ and we can conclude that
\begin{align*}
\lambda(X_{1:t}[e])
& = \lambda(X_{1:\beta}[e]) +\lambda(X_{\gamma:t}[e]) \\
& \leqslant \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:\beta|\alpha}[e]) + \lambda(S_{\gamma:t|\gamma}[e]) & (\because \eqref{eq:2-4}~\text{and}~\eqref{eq:2-1}) \\
& \leqslant \frac{w+1}{w}\lambda(S_{1:\beta|\gamma}[e]) + \lambda(S_{\gamma:t|\gamma}[e])
& (\because \eqref{eq:2-2-1}) \\
& = \frac{w+1}{w}\lambda(S_{1:\gamma|\gamma}[e]) + \lambda(S_{\gamma:t|\gamma}[e]).
& (\because \forall s \in [\beta+1,\gamma-1],\, L_s(e) = \emptyset)
\end{align*}
The remaining case is that $\beta \geqslant \alpha+w$. At time $\alpha$, MSMaxmin computes assignment intervals up to time $\alpha+w$ only. It follows that $\beta = \alpha+w$, which implies that $\lambda(S_{\alpha:\beta|\alpha}[e]) = w\Delta$. If the interval $[\beta+1,\gamma-1]$ is not empty, $e$ must be unassigned in $X_{\beta+1:\gamma-1}$ as we argued previously. We can thus conclude as in the above that $\lambda(X_{1:t}[e]) \leqslant \frac{w+1}{w}\lambda(S_{1:\gamma|\gamma}[e]) + \lambda(S_{\gamma:t|\gamma}[e])$. Suppose that $[\beta+1,\gamma-1]$ is empty. It means that $\gamma = \beta+1$. Then,
\begin{align*}
& \lambda(X_{1:t}[e]) \\
& = \lambda(X_{1:\beta}[e]) + \lambda(X_{\beta:\beta+1}[e]) + \lambda(X_{\gamma:t}[e]) \\
& \leqslant \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:\beta|\alpha}[e]) + \lambda(X_{\beta:\beta+1}[e]) + \lambda(S_{\gamma:t|\gamma}[e]) & (\because \eqref{eq:2-4}~\text{and}~\eqref{eq:2-1})\\
& \leqslant \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \lambda(S_{\alpha:\beta|\alpha}[e]) + \Delta + \lambda(S_{\gamma:t|\gamma}[e])
& (\because \lambda(X_{\beta:\beta+1}[e]) \leqslant \Delta) \\
& = \frac{w+1}{w}\lambda(S_{1:\alpha|\alpha}[e]) + \frac{w+1}{w}\lambda(S_{\alpha:\beta|\alpha}[e]) + \lambda(S_{\gamma:t|\gamma}[e])
& (\because \lambda(S_{\alpha:\beta|\alpha}[e]) = w\Delta) \\
& = \frac{w+1}{w}\lambda(S_{1:\gamma|\gamma}[e]) + \lambda(S_{\gamma:t|\gamma}[e]).
& (\because \eqref{eq:2-2-1})
\end{align*}
\end{proof}
We are ready to analyze the performance of MSMaxmin. It depends on the parameter $c_0$ in line~\ref{code:3} of MSMaxmin which will be set based on the values of $\rho$ and $w$.
\begin{theorem}
\label{thm:main}
\emph{MSMaxmin} takes $O(wmn\log (wn) + w \cdot T(m,n))$ time at each period start time and $O(m)$ time at any other time step. The total time taken by \emph{MSMaxmin} in running $\mathcal{A}$ for the entire time horizon $[1,\tau]$ is $O(\tau \cdot T(m,n))$. Let $A_{1:\tau}$ be the solution returned by \emph{MSMaxmin}. Then, $\lambda(A_{1:\tau}) + \nu(A_{1:\tau}) \geqslant \frac{wc_0^2}{w+1}\cdot\lambda(O_{1:\tau}) + (1-c_0)\rho \cdot \nu(O_{1:\tau})$, where $O_{1:\tau}$ is the optimal offline solution. Hence, the competitive ratio is $(1-c_0)\rho$, where $c_0$ is the positive root of the equation $wc^2 = \rho (w+1)(1-c)$, that is,
\[
c_0 = \frac{\sqrt{\rho^2(w+1)^2 + 4\rho w(w+1)} - \rho(w+1)}{2w}.
\]
\end{theorem}
\begin{proof}
Let $s$ be the current time step. If $s$ is not a period start time, MSMaxmin spends $O(m)$ time just to output $A_s$. Suppose that $s$ is a period start time. After calling StableAllocate$(s,s+w)$, we obtain $O(wm)$ assignment interval start and end times in $S_{s:s+w|s}$. Selecting the next period end time in lines~\ref{code:1} and~\ref{code:1-1} can be done in $O(wm)$ time. Running $\mathcal{A}$ in lines~\ref{code:1-3}--\ref{code:1-4} takes $O(w \cdot T(m,n))$ time. Lines~\ref{code:2-1}--\ref{code:2} clearly take $O(wm)$ time. It remains to analyze the running time of the call StableAllocate$(s,s+w)$.
Take an entity $e$ for which StableAllocate will call StableEntity. We describe an efficient implementation of StableEntity as follows. For every player $p$, we can construct the maximal interval(s) within $[s,s+w]$ in which $e$ can be assigned to $p$. There are at most $w$ such intervals for $p$. We store these intervals for all players in a priority search tree $T_e$~\cite{mccrieght85}. The tree $T_e$ uses $O(wn)$ space and can be built in $O(wn\log (wn))$ time. Given a time step~$i$, $T_e$ can be queried in $O(\log(wn))$ time to retrieve the interval that contains $i$ and has the largest right endpoint. This capability is exactly what we need for determining $k_p$ in lines~4--11 of StableEntity. It follows that the while-loop in StableEntity takes $O(w\log(wn))$ time. The total running time of StableEntity for $e$ is thus $O(wn\log(wn))$, implying that StableAllocate takes $O(wmn\log(wn))$ time. This completes the running time analysis.
We analyze the competitive ratio as follows. Consider a period $[s,t]$. Let $[r,s-1]$ be the period before $[s,t]$. There are several cases.
\begin{itemize}
\item Case~1: $s > 1$ and $s \leqslant r+w$ and $A_{s-1} = S_{s-1|r}$.
\begin{itemize}
\item Case~1.1: $t < s+w$. MSMaxmin compares $\lambda(S_{s-1:t|s}) + c_0\cdot\lambda(S_{t:t+1|s})$ with $\nu(B_{s:t})$. If $\lambda(S_{s-1:t|s}) + c_0\cdot\lambda(S_{t:t+1|s}) > \nu(B_{s:t})$, MSMaxmin sets $A_{s:t}$ to be $S_{s:t|s}$. Because $s \leqslant r+w$, $A_{s} = S_{s|s}$ allows us to collect the stability reward of $\lambda(S_{s-1:s|s})$, which makes the contribution of $A_{s:t}$ to $\lambda(A_{1:\tau}) + \nu(A_{1:\tau})$ greater than or equal to
\begin{equation}
\lambda(S_{s-1:t|s}) > (1-c_1)\cdot\lambda(S_{s-1:t|s}) - c_0c_1\cdot\lambda(S_{t:t+1|s}) + c_1\cdot\nu(B_{s:t}). \label{eq:thm0}
\end{equation}
If $\lambda(S_{s-1:t|s}) + c_0\cdot\lambda(S_{t:t+1|s}) \leqslant \nu(B_{s:t})$, MSMaxmin sets $A_{s:t}$ to be $B_{s:t}$ and the contribution of $A_{s:t}$ to $\lambda(A_{1:\tau}) + \nu(A_{1:\tau})$ is at least
\begin{equation}
\nu(B_{s:t}) \geqslant (1-c_1)\cdot\lambda(S_{s-1:t|s}) + c_0(1- c_1)\cdot\lambda(S_{t:t+1|s}) + c_1\cdot\nu(B_{s:t}). \label{eq:thm1}
\end{equation}
\item Case~1.2: $t = s+w$. In this case, MSMaxmin compares $\lambda(S_{s-1:t|s})$ with $\nu(B_{s:t})$. If $\lambda(S_{s-1:t|s}) > \nu(B_{s:t})$, the contribution of $A_{s:t} = S_{s:t|s}$ to $\lambda(A_{1:\tau}) + \nu(A_{1:\tau})$ is greater than or equal to
\begin{equation}
\lambda(S_{s-1:t|s}) > (1-c_1)\cdot\lambda(S_{s-1:t|s}) + c_1\cdot\nu(B_{s:t}). \label{eq:thm2}
\end{equation}
If $\lambda(S_{s-1:t|s}) \leqslant \nu(B_{s:t})$, the contribution of $A_{s:t} = B_{s:t}$ is at least
\begin{equation}
\nu(B_{s:t}) \geqslant (1-c_1)\cdot\lambda(S_{s-1:t|s}) + c_1\cdot\nu(B_{s:t}). \label{eq:thm3}
\end{equation}
\end{itemize}
\item Case~2: $s = 1$, or $s = r+w+1$, or $A_{s-1} = B_{s-1}$.
\begin{itemize}
\item Case~2.1: $t < s+w$. MSMaxmin compares $\lambda(S_{s:t|s}) + c_0\cdot\lambda(S_{t:t+1|s})$ with $\nu(B_{s:t})$. If $\lambda(S_{s:t|s}) + c_0\cdot\lambda(S_{t:t+1|s}) > \nu(B_{s:t})$, the contribution of $A_{s:t} = S_{s:t|s}$ is greater than or equal to
\begin{equation}
\lambda(S_{s:t|s}) > (1-c_1)\cdot\lambda(S_{s:t|s}) - c_0c_1\cdot\lambda(S_{t:t+1|s}) + c_1\cdot\nu(B_{s:t}). \label{eq:thm4}
\end{equation}
If $\lambda(S_{s:t|s}) + c_0\cdot\lambda(S_{t:t+1|s}) \leqslant \nu(B_{s:t})$, the contribution of $A_{s:t} = B_{s:t}$ is at least
\begin{equation}
\nu(B_{s:t}) \geqslant (1-c_1)\cdot\lambda(S_{s:t|s}) + c_0(1-c_1)\cdot\lambda(S_{t:t+1|s}) + c_1\cdot\nu(B_{s:t}). \label{eq:thm5}
\end{equation}
\item Case~2.2: $t = s+w$. In this case, MSMaxmin compares $\lambda(S_{s:t|s})$ with $\nu(B_{s:t})$. If $\lambda(S_{s:t|s}) > \nu(B_{s:t})$, the contribution of $A_{s:t} = S_{s:t|s}$ is greater than or equal to
\begin{equation}
\lambda(S_{s:t|s}) > (1-c_1)\cdot\lambda(S_{s:t|s}) + c_1\cdot\nu(B_{s:t}). \label{eq:thm6}
\end{equation}
If $\lambda(S_{s:t|s}) \leqslant \nu(B_{s:t})$, the contribution of $A_{s:t} = B_{s:t}$ is at least
\begin{equation}
\nu(B_{s:t}) \geqslant (1-c_1)\cdot\lambda(S_{s:t|s}) + c_1\cdot\nu(B_{s:t}). \label{eq:thm7}
\end{equation}
\end{itemize}
\end{itemize}
Let $O_{1:\tau}$ be the optimal offline solution for $I_{1:\tau}$.
In the sum of the applications of \eqref{eq:thm0}--\eqref{eq:thm7} to all the periods, the $\nu(\cdot)$ terms sum to $c_1\cdot\nu(B_{1:\tau})$, which is at least $c_1\rho\cdot\nu(O_{1:\tau})$.
Consider the sum of the $\lambda(\cdot)$ terms. Let $[r,s-1]$ and $[s,t]$ be two consecutive periods. If \eqref{eq:thm6} or \eqref{eq:thm7} is applicable to $[r,s-1]$, then $s-1 = r+w$ and one of the inequalities \eqref{eq:thm4}--\eqref{eq:thm7} is applicable to $[s,t]$, implying that the sum of the $\lambda(\cdot)$ terms does not include $\lambda(S_{s-1:s|s})$. Nevertheless, as $s-1 = r+w$, all assignment intervals computed at or before time $r$ do not extend beyond $s-1$. Therefore, $\lambda(S_{s-1:s|s}) = 0$ and there is no harm done. For all other kinds of transition from $s-1$ to $s$, the sum of the $\lambda(\cdot)$ terms includes the stability reward of the entities from $s-1$ to $s$ multiplied by a coefficient that is less than 1. We analyze the smallest coefficient of the $\lambda(\cdot)$ terms as follows.
If \eqref{eq:thm0} or \eqref{eq:thm4} is applicable to $[r,s-1]$, then $s-1 < r+w$ and we get a $-c_0c_1\cdot\lambda(S_{s-1:s|r})$ term. In this case, one of the inequalities~\eqref{eq:thm0}--\eqref{eq:thm3} must be applicable to $[s,t]$, which contains the term $(1-c_1)\cdot\lambda(S_{s-1:s|s})$. We claim that these two terms combine into $(1-c_1-c_0c_1)\cdot\lambda(S_{s-1:s|s})$. Take any entity $e$. If StableEntity is not invoked for $e$ at $s$, then $S_{s-1:s|r}[e] = S_{s-1:s|s}[e]$. If StableEntity is invoked for $e$ at $s$, no assignment interval in $S_{r:r+w|r}[e]$ or $S_{r:r+w|s}[e]$ contains $[s-1,s]$ and so $\lambda(S_{s-1:s|r}[e]) = \lambda(S_{s-1:s|s}[e]) = 0$. This proves our claim.
If \eqref{eq:thm1} or \eqref{eq:thm5} is applicable to $[r,s-1]$, we get the term $c_0(1-c_1)\cdot \lambda(S_{s-1:s|r})$ which is equal to $c_0(1-c_1)\cdot\lambda(S_{s-1:s|s})$ as explained in the previous paragraph.
Among the coefficients of the $\lambda(\cdot)$ terms, the smallest ones are $1-c_1-c_0c_1$ and $c_0(1-c_1)$. Balancing $1-c_1-c_0c_1$ and $c_0(1-c_1)$ gives the relation $c_0+c_1 = 1$. As a result, $\lambda(A_{1:\tau}) + \nu(A_{1:\tau}) \geqslant c_0^2\cdot\lambda(S_{1:\tau|\tau}) + c_1\cdot\nu(B_{1:\tau}) \geqslant c_0^2\cdot\lambda(S_{1:\tau|\tau}) + (1-c_0)\rho \cdot \nu(O_{1:\tau})$. Here, we use the fact that $S_{s:t|s}$ for a period $[s,t]$ will not be changed after $s$, and so $S_{s:t|s} = S_{s:t|\tau}$. By Lemma~\ref{lem:2}, we get $\lambda(A_{1:\tau}) + \nu(A_{1:\tau}) \geqslant \frac{wc_0^2}{w+1} \cdot \lambda(O_{1:\tau}) + (1-c_0)\rho\cdot\nu(O_{1:\tau})$. To maximize the competitive ratio, we balance the coefficients $\frac{wc_0^2}{w+1}$ and $(1-c_0)\rho$. The only positive root of the quadratic equation $wc_0^2 = \rho(w+1)(1-c_0)$ is
\[
\frac{\sqrt{\rho^2(w+1)^2 + 4\rho w(w+1)} - \rho(w+1)}{2w}.
\]
This positive root is less than $\bigl((\rho(w+1) + 2w) - \rho(w+1)\bigr)/(2w) = 1$.
\end{proof}
Suppose that we keep $\rho$ general and set $w = 1$. Then, $c_0 = \sqrt{\rho^2 + 2\rho} - \rho$ and our competitive ratio is $(\rho+1-\sqrt{\rho^2+2\rho})\rho$. To compare with the $\frac{\rho}{4\rho+2}$ bound in~\cite{BEM18}, we consider the difference in the coefficients $\rho+1 - \sqrt{\rho^2+2\rho} - 1/(4\rho+2)$. Treating this as a function of $\rho$, the derivative of this difference is $1 - (\rho+1)(\rho^2+2\rho)^{-1/2} + (2\rho+1)^{-2}$. This derivative is negative for $\rho \in (0,1]$, so the smallest difference is attained at $\rho=1$, where it is roughly $0.2679 - 0.1667 \approx 0.1013 > 0.1$. Therefore, our competitive ratio is greater than $\frac{\rho}{4\rho+2} + \frac{\rho}{10}$.
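The following short Python check (our own verification script, not part of the analysis) evaluates the claim numerically over a grid of $\rho$ values.
\begin{verbatim}
import numpy as np

rho = np.linspace(0.01, 1.0, 100)
c0 = np.sqrt(rho**2 + 2 * rho) - rho      # positive root for w = 1
ours = (1 - c0) * rho                     # competitive ratio (1 - c0) rho
prev = rho / (4 * rho + 2)                # limit of the ratio in [BEM18]
assert np.all(ours > prev + rho / 10)     # claimed improvement
print(ours[-1], prev[-1] + 0.1)           # rho = 1: ~0.2679 vs ~0.2667
\end{verbatim}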
\section{Conclusion}
We presented a $w$-lookahead online algorithm for the multistage online maxmin allocation problem for any fixed $w \geqslant 1$. It is more general than the 1-lookahead online algorithm in the literature~\cite{BEM18}. For the case of $w=1$, our competitive ratio is greater than $\frac{\rho}{4\rho+2} + \frac{\rho}{10}$, which improves upon the previous ratio of $\frac{\rho}{4\rho+2 - 2^{1-\tau}(2\rho+1)}$ in~\cite{BEM18}. It is unclear whether our analysis of MSMaxmin is tight. When we set $A_{s:t}$ to be $S_{s:t}$, we only analyze $\lambda(S_{s:t})$ and ignore $\nu(S_{s:t})$. Conversely, when we set $A_{s:t}$ to be $B_{s:t}$, we only analyze $\nu(B_{s:t})$ and ignore $\lambda(B_{s:t})$. There may be some opportunities for improvement.
\end{document}
|
\begin{document}
\title{Interferometer for force measurement by shortcut-to-adiabatic arms guiding}
\author{A. Rodriguez-Prieto}
\affiliation{Department of Applied Mathematics, University of the Basque Country UPV/EHU, Bilbao, Spain}
\author{S. Mart\'\i nez-Garaot}
\affiliation{Department of Physical Chemistry, University of the Basque Country UPV/EHU, Aptdo. 644, Bilbao, Spain}
\author{I. Lizuain}
\affiliation{Department of Applied Mathematics, University of the Basque Country UPV/EHU, Donostia, Spain}
\author{J. G. Muga}
\affiliation{Department of Physical Chemistry, University of the Basque Country UPV/EHU, Aptdo. 644, Bilbao, Spain}
\begin{abstract}
We propose a compact atom interferometer to measure homogeneous constant forces guiding the arms via
shortcuts to adiabatic paths.
For a given sensitivity, which only depends on the space-time area of the guiding paths,
the cycle time can be made fast without losing visibility.
The atom is driven by spin-dependent trapping potentials moving in opposite directions, complemented by linear and time-dependent
potentials that compensate the trap acceleration. Thus the arm states are adiabatic in the moving frames, and non-adiabatic in the laboratory frame.
The trapping potentials may be anharmonic, e.g. optical lattices,
and the interferometric phase does not depend on the initial motional state or on the pivot point for swaying the linear potentials.
\end{abstract}
\maketitle
\section{Introduction}
Atom interferometry \cite{Berman1997,Cronin2009}
provides a route to precise, quantum-enhanced sensors.
The key idea is to split and later recombine the atom wave function, to detect the differential phase accumulated during the separation, which is sensitive in particular to small potential differences between the arms.
Here we work out a scheme to measure constant forces
using STA-mediated guided interferometry \cite{Dupont-Nivet2016,Navez2016,Palmero2017,MartinezGaraot2018}. STA stands for ``shortcuts to adiabaticity'',
a set of techniques to achieve the results of adiabatic dynamics in shorter times \cite{Torrontegui2013,review2019}, and ``guided'' means that the
atom is driven in moving traps, as in buckets or conveyor belts \cite{Navez2016}, so it is never in free flight.
Guiding, e.g. via moving optical lattices, keeps the atom wavefunction localized with nanoscale spatial resolution, allowing for
precise measurements at ultrashort spatial scale \cite{Kovachi2010,Steffen2012},
whereas the speed-up with respect to slow
adiabatic processes can avoid perturbations and decoherence keeping the visibility and differential phase of adiabatic methods.
Moreover, our STA-mediated interferometer scheme fulfills the ideal goal of giving a motional-state-independent differential phase
with short process times while simultaneously keeping a high sensitivity.
We assume throughout one-particle wavefunctions, either because the interferometer works indeed with single particles or because interactions are negligible.
Similarly to \cite{MartinezGaraot2018} the arms in the current scheme are separated by means of ``spin'', here an alias for ``internal state'', dependent forces.
Operationally the current scheme differs from the one in \cite{MartinezGaraot2018}. There, a fixed harmonic trap was combined with two homogeneous time- and spin-dependent forces to separate first and then
recombine the wavefunction branches of an ion.
Here we use instead two moving spin-dependent traps, not necessarily harmonic, complemented by homogeneous spin- and time-dependent forces to compensate for inertial terms due to the motions of the traps \cite{Torrontegui2011}. This compensation is one of the
ways to implement STA-driven fast transport \cite{review2019}, and can be equivalently found
by invariant-based inverse engineering
\cite{Torrontegui2011}, by the ``fast-forward approach'' \cite{Masuda2010}, or as a local unitary transformation of a non-local counterdiabatic approach \cite{Ibanez2012,Deffner2014}.
An important difference between this work and
\cite{MartinezGaraot2018} is that the differential phase is now independent of the pivot (equipotential) point $x_0$ at which the compensating spin-dependent potentials are applied; see an example of two different pivot positions in the outer columns of Fig. \ref{figu1}.
When the force to be measured acts permanently,
before and during the experiment, the natural choice of placing $x_0$ at the initial equilibrium point of the trap (which depends on the unknown force we want to measure) cancelled the interferometric phase
in \cite{MartinezGaraot2018}.
A rotation of the effectively one-dimensional (1D) trap to let the force act only from the initial time $t=0$ is a formal, but hardly practical solution. The scheme proposed here is free from such difficulties and is also more broadly applicable.
Using arbitrary trapping potentials, rather than harmonic ones, opens the way to applying the proposed scheme
to ultracold neutral atoms where the anharmonicities are usually stronger than for trapped ions. Different realizations are possible,
e.g. in atom chips \cite{Cronin2009}, but we shall discuss optical lattices as a specific example. Interferometers with
two oppositely moving optical lattices to accelerate the arm wavefunctions for a single internal state have been
demonstrated \cite{Muller2009} and studied theoretically \cite{Kovachi2010}. Closer to our goal,
Mandel et al. \cite{Mandel2003} demonstrated transport of the spin-dependent wavefunctions in optical lattices moving in opposite directions
with a scheme proposed in \cite{Jaksch1999},
and Steffen et al. \cite{Steffen2012} built a single-atom interferometer based on a similar setting. To implement our scheme we envision, for each spin, double superlattices composed of an ultra-deep optical lattice as a ``conveyor-belt'' trapping potential \cite{Schrader2001} with negligible tunneling, see \cite{Lu2020} and references therein, while the compensating force may be achieved by a second lattice with much larger periodicity
than the trapping lattice to make it effectively homogeneous for each arm wavefunction.
Factors of ten between the periodicities of two lattices are routinely found playing with the angle
between the crossing beams \cite{Williams2008} and even higher factors are technically possible \cite{Tao2018}.
First we present the main idea of the interferometer and basic relations in Sec. \ref{sec:interferometer} and then the
recipe to move the arm traps and set the time dependence of the compensating forces in Sec. \ref{sec:Phases}.
The theory relies on a transformation to ``moving-frame'' interaction pictures for each arm. An alternative formulation is presented in
Sec. \ref{sec:invariants}
in terms of ``invariants'' which connects the current approach to ``invariant-based inverse engineering'' \cite{Torrontegui2011,Palmero2017,MartinezGaraot2018}. The interferometric phase can then be simply interpreted as the difference between the Lewis-Riesenfeld phases for the arms. This connection enables us to use invariant-based concepts and results, for example to apply techniques to enhance robustness with respect to different noises
\cite{Ruschhaupt2012,Lu2018,Lu2020}.
The paper ends with a discussion on possible applications and open questions.
\section{Basic concept of the interferometer}
\label{sec:interferometer}
Consider an atom with two internal states, ``spin up'' $|\!\uparrow\rangle$, and ``spin down'' $|\!\downarrow\rangle$,
and effective motion in one dimension. A general state at time $t$ is
$
a_{\uparrow} |\!\uparrow \rangle \psi^{\uparrow}(x,t) + a_{\downarrow} |\!\downarrow \rangle
\psi^{\downarrow}(x,t),
$
where $\psi^{\uparrow}(x,t)=\langle x|\psi^\uparrow(t)\rangle$ and $\psi^{\downarrow}(x,t)=\langle x|\psi^\downarrow(t)\rangle$ are the motional states for the two internal levels, in coordinate representation. We assume a prepared state $|\!\uparrow\rangle |\Phi_p\rangle$
from which
a $\pi/2$ pulse \cite{Leibfried2003} produces two equally weighted components with $a_{\uparrow}=a_{\downarrow}=1/2^{1/2}$.
We set time $t=0$ at the end of the $\pi/2$ pulse and, assuming a Lamb-Dicke regime and a fast pulse compared to motional periods,
$\Phi(x,0)\equiv\Phi_{p}(x)=\psi^{\uparrow}(x,0)=\psi^{\downarrow}(x,0)$.
The two branches evolve separately in coordinate space due to spin-dependent forces.
At some final time $t_f$ the complex overlap can be written in polar form as
\begin{equation}
\label{o}
\langle \psi^{\downarrow}(t_f)|\psi^{\uparrow}(t_f) \rangle=e^{i\Delta\varphi(t_f)}|\langle \psi^{\downarrow}(t_f)|\psi^{\uparrow}(t_f) \rangle|.
\end{equation}
A second $\pi/2$ pulse
gives the populations \cite{MartinezGaraot2018}
\begin{equation}
P^{\uparrow\downarrow}(t_f)= \frac{1}{2}\pm \frac{1}{2}{\rm Re} \left [ \langle \psi^{\downarrow}(t_f)|\psi^{\uparrow}(t_f) \rangle \right ],
\end{equation}
where we have neglected the $\pi/2$-pulse duration.
The STA-driving will achieve maximal visibility, i.e., $|\langle \psi^{\downarrow}(t_f)|\psi^{\uparrow}(t_f)\rangle|=1$; note that $|\langle \psi^{\uparrow\downarrow}(t_f)|\Phi(0)\rangle|=1$ is {\it not} required.
Then the populations read
$
P^{\uparrow\downarrow}(t_{f})=\frac{1}{2}\pm\frac{1}{2}\cos[\Delta\varphi(t_{f})].
$
If the differential phase is proportional to a constant force $c$, $\Delta\varphi(t_{f})={\cal S}c$
and the sensitivity ${\cal S}$ is known,
$c$ can be measured
from the populations.
When
$c$, or its deviation from some approximately known value, is expected to be small on the scale of $\pi/{\cal S}$, $c$ is found from the populations using the relevant branch of the arccosine.
More generally, $c$ may be found unambiguously from the periodicity ${2\pi}/{c}$
of the populations
$P^{\uparrow\downarrow}(t_{f})$
as a function of ${\cal S}$ \cite{MartinezGaraot2018}. Measuring the populations requires repetitions in time if the interferometer works with a single particle, or alternatively noninteracting ensembles.
The method to guide the arm wave functions described below
fulfills the hypotheses made so far, namely, the modulus of the overlap (\ref{o}) is one and the differential phase is proportional to $c$. Moreover it will be possible to control the sensitivity ${\cal S}$ and the time $t_f$ of the process independently of $|\Phi(0)\rangle$.
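As a simple illustration of the readout in the small-$c$ regime, where the principal branch of the arccosine applies, one may proceed as in the following Python snippet; the numerical values are made up for illustration only.
\begin{verbatim}
import numpy as np

S = 2.0e6        # sensitivity (rad per force unit), set by the paths
P_up = 0.70      # measured spin-up population after the second pulse

delta_phi = np.arccos(2.0 * P_up - 1.0)  # P_up = 1/2 + (1/2) cos(phi)
c = delta_phi / S                        # inferred force, principal branch
print(c)
\end{verbatim}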
\begin{figure*}
\caption{Four schematic snapshots of the potentials for moving optical lattices and two choices of pivot $x_0$.
We use arbitrary units,
the specific relative values are not intended to be realistic but are rather chosen to better visualize the process.
The corresponding $\alpha(t)$ and $f(t)$ are shown in Fig. \ref{figu2}.\label{figu1}}
\end{figure*}
\begin{figure}
\caption{Typical forms of $\alpha(t)$ (dashed line) and $f(t)$ (solid line) in arbitrary units: $\alpha(t)$ is found by imposing
the boundary conditions (\ref{bc_y}).\label{figu2}}
\end{figure}
\section{How to move the guiding traps}
\label{sec:Phases}
For each spin state we assume a different evolution driven by
the Hamiltonians (Here and in the following, the superscript $\uparrow\downarrow$ in any equation implies that the sign on top in $\mp$ or $\pm$ is for $\uparrow$, whereas the sign on the bottom is for $\downarrow$.)
\begin{equation}
\label{HamiltonianUPDOWN}
H^{\uparrow\downarrow}=\frac{p^{2}}{2m}- cx\mp \left[x-x_{0}(t)\right] f(t)+U[x\mp \alpha(t)].
\end{equation}
The trap potentials ${U}[x \mp \alpha(t)]$
move along opposite trajectories $\alpha^{\uparrow\downarrow}(t)=\pm\alpha(t)$.
We consider trap trajectories that satisfy the boundary conditions
\begin{equation}
\alpha(t_{b})=\dot \alpha(t_{b})=0
\label{bc_y}
\end{equation}
at the boundary times $t_{b}=0, t_{f}$. The dots stand for time derivatives.
Each trap starts and ends at rest, returning to its starting point, which is the same for both traps.
The trap potentials are complemented by spin-dependent linear potentials
$\mp \left[x-x_{0}(t)\right]f(t)$ that cross at the pivot point $x_0(t)$: In a typical experiment $x_0$ will be
constant, however we shall
keep by now formally a more general $x_0(t)$.
The force $f(t)$ will be chosen to compensate inertial terms in the moving frame as discussed below in detail.
Finally,
$c$ is the spin-independent homogeneous-in-space and constant-in-time force that we want to measure,
$m$ is the mass of the atom, and ${p^{2}}/{2m}$ the kinetic energy. Examples of the potential terms in
Eq. (\ref{HamiltonianUPDOWN})
are depicted schematically in Fig. \ref{figu1} for $U$ as an optical lattice potential and for two different pivots.
Let us now reorganize the Hamiltonians (\ref{HamiltonianUPDOWN}) as follows,
\begin{eqnarray}
H^{\uparrow\downarrow}
= \frac{p^{2}}{2m}\mp f(t) x+\widetilde{U}(x\mp\alpha)+\Lambda^{\uparrow\downarrow}(t),
\end{eqnarray}
where we have separated purely time-dependent terms in
\begin{equation}
\Lambda^{\uparrow\downarrow}(t)=\pm f(t)x_{0}(t) \mp c\alpha(t)
\end{equation}
and defined effective trap potentials
$\widetilde{U}$ that include the effect of the force $c$,
\begin{eqnarray}
\widetilde{U}(x\mp \alpha(t))=U(x\mp \alpha(t))- [x \mp \alpha(t)]c.
\end{eqnarray}
To solve the dynamics, it is useful to perform unitary transformations into
``moving-frame interaction pictures''.
Specifically we define the interaction picture wavevectors $|\psi_I^{\uparrow\downarrow}\rangle$ in terms of
Schr\"odinger (laboratory frame) wavevectors $|\psi^{\uparrow\downarrow}\rangle$,
as
\begin{equation}
|\psi_I^{\uparrow\downarrow}\rangle={\cal U}^{\uparrow\downarrow}|\psi^{\uparrow\downarrow}\rangle,\;\;\;\;\;
|\psi^{\uparrow\downarrow}\rangle=({\cal U}^{\uparrow\downarrow})^\dagger |\psi_I^{\uparrow\downarrow}\rangle,
\label{IP}
\end{equation}
where the unitary operator ${\cal U}^{\uparrow\downarrow}$ is constructed by multiplying position and momentum
shift operators,
\begin{equation}
{\cal U}^{\uparrow\downarrow}=e^{\pm i\alpha p/\hbar}e^{\mp i m \dot{\alpha} x/\hbar}.
\label{order}
\end{equation}
Other orderings and therefore interaction pictures are possible but the measurable quantities and the
differential phase are not affected by the different orderings as long as the intermediate calculations are done consistently. Using Eq. (\ref{order})
for each arm, the effective, moving-frame Hamiltonians become
\begin{eqnarray}
H_I^{\uparrow\downarrow}&=&{\cal U}^{\uparrow\downarrow}H^{\uparrow\downarrow}({\cal U}^{\uparrow\downarrow})^\dagger+i\hbar\,\dot{{\cal U}}^{\uparrow\downarrow}({\cal U}^{\uparrow\downarrow})^\dagger
\nonumber\\
&=&\frac{p^2}{2m}+\frac{1}{2}m\dot{\alpha}^2\mp (x\pm\alpha)f(t)+\widetilde{U}(x)
\nonumber\\
&\pm& f(t) x_0(t)\mp c\alpha \pm (x\pm \alpha)m\ddot{\alpha}.
\label{hami0}
\end{eqnarray}
If the applied $f(t)$ satisfies
\begin{equation}
\ddot \alpha(t)=\frac{f(t)}{m},
\label{Newt}
\end{equation}
which can be interpreted as a Newton equation for the trajectory $\alpha(t)$ subjected to the force $f(t)$,
this auxiliary force compensates for inertial effects due to the motion of the
$\widetilde{U}(x\mp \alpha(t))$ potentials in the laboratory frame; note the cancellation of the third and last terms in Eq. (\ref{hami0}). The consequence is that
a stationary state in the moving frame will remain so. Equation (\ref{Newt}) is used inversely, i.e., $f(t)$ is found from a designed $\alpha(t)$, and hereinafter $f(t)$ is always assumed to satisfy Eq. (\ref{Newt}), except in point vii of the final discussion.
To make $f(t_b)$ zero at the boundary times we shall impose, in addition to the boundary conditions (\ref{bc_y}), that
$
\ddot{\alpha}(t_b)=0.
$
Applying Eq. (\ref{Newt}) in Eq. (\ref{hami0}) the moving-frame Hamiltonians take a simple form with a common time- and spin-independent term $H_{I,0}$, and terms $F^{\uparrow\downarrow}(t)$ that depend on time, but not on $x$ or $p$,
\begin{eqnarray}
H_I^{\uparrow\downarrow}&=&H_{I,0}+F^{\uparrow\downarrow}(t),\;
H_{I,0}=\frac{p^2}{2m}+\widetilde{U}(x),
\nonumber\\
F^{\uparrow\downarrow}(t)&=&\frac{1}{2}m\dot{\alpha}^2\pm f(t) x_0(t) \mp c \alpha(t).
\label{structure}
\end{eqnarray}
The resulting structure facilitates the formal solution of the dynamics,
as the time-dependent part only accumulates a phase,
whereas the time-independent part gives a simple evolution operator,
\begin{eqnarray}
|\psi_I^{\uparrow\downarrow}(t)\rangle &=&e^{- i\int_0^t F^{\uparrow\downarrow}(t')dt'/\hbar}
|\psi_{I,0}^{\uparrow\downarrow}(t)\rangle,
\nonumber\\
|\psi_{I,0}^{\uparrow\downarrow}(t)\rangle&=&e^{-iH_{I,0}t/\hbar}|\psi_{I,0}^{\uparrow\downarrow}(0)\rangle.
\end{eqnarray}
As $|\psi_{I,0}^{\uparrow\downarrow}(0)\rangle=|\Phi(0)\rangle$ and $H_{I,0}$ are spin independent,
$|\psi_{I,0}^{\uparrow\downarrow}(t)\rangle=|\Phi(t)\rangle=e^{-iH_{I,0}t/\hbar}|\Phi(0)\rangle$ is also a spin-independent vector.
Noting that $e^{\mp i \alpha p/\hbar}$ shifts the position representation as
$
\langle x| e^{\mp i \alpha p/\hbar}|\Phi\rangle= \Phi(x\mp\alpha),
$
the branch wave functions in the laboratory frame are found by Eq. (\ref{IP}),
\begin{equation}
\psi^{\uparrow\downarrow}(x,t)=e^{\pm im\dot{\alpha} x/\hbar} e^{- i\int_0^t\! F^{\uparrow\downarrow}(t')dt'/\hbar}
\Phi(x\mp\alpha,t).
\label{wfs}
\end{equation}
In particular, at final time $t_f$,
\begin{eqnarray}
\psi^{\uparrow\downarrow}(x,t_f)&=&e^{-im\!\int_0^{t_f}\! \dot{\alpha}^2 dt/2\hbar} e^{\pm i c\!\int_0^{t_f}\! \alpha(t) dt/\hbar}
\nonumber\\
&\times&e^{\mp i\! \int_0^{t_f}\! x_0(t)f(t) dt/\hbar} \Phi(x,t_f).
\label{wfsud}
\end{eqnarray}
For $x_0$ constant but otherwise arbitrary, the overlap in Eq. (\ref{o}) takes a very simple form, since the phase terms $\mp x_0\int_0^{t_f} f(t) dt$ vanish because of Eq. (\ref{Newt}) and the
boundary condition $\dot{\alpha}(t_b)=0$,
\begin{equation}
\langle \psi^{\downarrow}(t_f)|\psi^{\uparrow}(t_f) \rangle=e^{2 i c\int_0^{t_f}\! \alpha(t) dt/\hbar},
\label{simple}
\end{equation}
so that $c$ can be measured from the interferometric differential phase via the
populations as explained before.
The phase is indeed
proportional to $c$,
$
\Delta\varphi(t_f)= c{\cal S},
$
with controllable
sensitivity
\begin{equation}
{\cal S}=\frac{2}{\hbar} \int_0^{t_f}\! \alpha(t) dt,
\label{sensi}
\end{equation}
the space-time area $2\int_0^{t_f} \alpha(t) dt$ in units of $\hbar$ swept between the two trap paths. Because the relative motion of the motional states with respect to $\pm \alpha(t)$ is identical in both arms,
this is the same area between the state centroids for any initial motional state $|\Phi(0)\rangle$.
Thus the interferometric phase and sensitivity are independent of the initial motional state, a robust
``geometrical'' feature of the
proposed interferometer.
$t_f$ can be chosen freely, in particular it can be made short
compared to relevant decoherence times, and $\alpha(t)$ can be manipulated
to change the sensitivity. Examples of how to set $\alpha(t)$ may be found in \cite{MartinezGaraot2018}; the basic idea
is to expand it in some basis, e.g. sines or powers of $t$, with enough terms to satisfy the boundary conditions.
More terms are added if further conditions are imposed, such as a desired value of ${\cal S}$; a small symbolic sketch of this inverse-engineering step is given below.
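For concreteness, here is a minimal symbolic sketch of that step (ours, purely illustrative, using SymPy). It assumes a three-term sine basis, for which $\alpha(t_b)=\ddot{\alpha}(t_b)=0$ hold automatically, and imposes $\dot{\alpha}(t_b)=0$ together with a target sensitivity ${\cal S}_0$; the basis size and library choice are not part of the proposal.
\begin{verbatim}
import sympy as sp

# Sine-basis ansatz: alpha and its second derivative vanish at t = 0, t_f
# automatically, so only alpha'(0) = alpha'(t_f) = 0 and the target
# sensitivity S_0 = (2/hbar) * integral of alpha remain to be imposed.
t, tf, hbar, S0 = sp.symbols("t t_f hbar S_0", positive=True)
a1, a2, a3 = sp.symbols("a1 a2 a3")
alpha = sum(a * sp.sin(n * sp.pi * t / tf) for n, a in [(1, a1), (2, a2), (3, a3)])

eqs = [
    sp.diff(alpha, t).subs(t, 0),                     # alpha'(0) = 0
    sp.diff(alpha, t).subs(t, tf),                    # alpha'(t_f) = 0
    2 / hbar * sp.integrate(alpha, (t, 0, tf)) - S0,  # sensitivity equals S_0
]
sol = sp.solve(eqs, [a1, a2, a3], dict=True)[0]
print(sp.simplify(alpha.subs(sol)))                   # engineered trajectory
\end{verbatim}
The same procedure with a polynomial basis leads to trajectories of the type used in the example below.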
\subsection{Example: Sensitivity for Caesium atom interferometer}
Steffen et al. \cite{Steffen2012} implemented a Caesium atom interferometer which demonstrates some
elements of the current scheme; specifically, the atom wavefunction was split into separated paths
controlled by spin-dependent optical-lattice potentials. A large displacement of $\pm\alpha(t)$ was technically (not fundamentally) limited by the maximum voltage that can be applied to an
electro-optical modulator
\cite{Mandel2003,Steffen2012}. This limitation was circumvented by accumulating elementary operation blocks
which consist of lattice displacements with alternating direction interleaved by $\pi$-pulses
\cite{Mandel2003,Steffen2012}. A single lattice displacement by $\lambda/4$ took 18 $\mu$s.
Figure \ref{figu3} represents a contour plot of the sensitivities (\ref{sensi})
with a four-term polynomial $\alpha(t)$ satisfying the boundary conditions (\ref{bc_y}) and $\ddot{\alpha}(t_b)=0$, with its maximum value $M$ at $t_f/2$, see Fig. \ref{figu2},
\begin{eqnarray}
\alpha(t)&=&\sum_{j=3}^{6} b_{j} \left(\frac{t}{t_{f}}\right)^{j},
\nonumber\\
b_3&=&64 M,\;\;
b_4=-192 M,
\nonumber\\
b_5&=&192 M,\;\;
b_6=-64 M.\label{bs}
\end{eqnarray}
The resulting sensitivity ${\cal S}$ is remarkably simple, namely, ${\cal S}=32 M t_f/(35\hbar)$.
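As a quick symbolic sanity check (ours, not part of the original derivation), the coefficients in Eq. (\ref{bs}) can be verified to obey the boundary conditions, to reach $M$ at $t_f/2$, and to reproduce this value of ${\cal S}$:
\begin{verbatim}
import sympy as sp

# Check that the four-term polynomial satisfies alpha = alpha' = alpha'' = 0
# at t = 0, t_f, reaches M at t_f/2, and gives S = 32*M*t_f/(35*hbar).
t, tf, M, hbar = sp.symbols("t t_f M hbar", positive=True)
b = {3: 64 * M, 4: -192 * M, 5: 192 * M, 6: -64 * M}
alpha = sum(bj * (t / tf) ** j for j, bj in b.items())

for tb in (0, tf):
    assert all(sp.simplify(sp.diff(alpha, t, k).subs(t, tb)) == 0 for k in range(3))
assert sp.simplify(alpha.subs(t, tf / 2) - M) == 0
print(sp.simplify(2 / hbar * sp.integrate(alpha, (t, 0, tf))))  # 32*M*t_f/(35*hbar)
\end{verbatim}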
Note that the scaling of ${\cal S}$
with $t_f$ can be chosen at will by fixing the dependence of $M$ on $t_f$; this amounts to following a line $M(t_f)$ in Fig. \ref{figu3}, for example
$M\propto t_f^\mu$, with $\mu=0,1,2,...$. Assuming a dependence of the order of the elementary displacement in \cite{Steffen2012}
gives $M=\frac{\lambda}{72 \mu s}\frac{t_f}{2}$, see the straight line in Fig. \ref{figu3},
and an ${\cal S}$ that depends quadratically on $t_f$. With the current scheme the trap can be subjected to strong
accelerations without spoiling the visibility since they are compensated. Thus, for a given $t_f$ higher sensitivities can be achieved for faster dependences $M(t_f)$.
Formally there is no limit to how large ${\cal S}(t_f)$ may be. The limit will be set in practice by the
technical limitations imposed by the specific setting to implement $\alpha(t)$ and $f(t)$. For a given, desired sensitivity ${\cal S}_0$, $M(t_f;{\cal S}_0)$ depends, along a given contour in Fig. \ref{figu3}, inversely on $t_f$,
$M(t_f;{\cal S}_0)=35 {\cal S}_0\hbar/(32 t_f)$. If $M$ is technically limited by some maximum value, $t_f$ will be bounded from below accordingly.
\begin{figure}
\caption{Contour plot of the sensitivity ${\cal S}$ in Eq. (\ref{sensi}).}
\end{figure}
\section{Invariants}
\label{sec:invariants}
In this section we will connect the results found so far with
Lewis-Riesenfeld invariants of motion
and the inverse engineering of trap trajectories based on them
\cite{Torrontegui2011,Torrontegui2013}.
A key result is the moving-frame Hamiltonian structure found in
Eq. (\ref{structure}). The Hamiltonian $H_{I,0}$ does not depend on time, and therefore
its expectation value $\langle\Phi(t)|H_{I,0}|\Phi(t)\rangle$ is constant. In the laboratory frame, making use of Eq. (\ref{IP})
this translates into
\begin{equation}
\frac{d}{dt}\langle \psi^{\uparrow\downarrow}(t)|({\cal U}^{\uparrow\downarrow})^\dagger H_{I,0}{\cal U}^{\uparrow\downarrow}|\psi^{\uparrow\downarrow}(t)\rangle=0,
\label{invh0}
\end{equation}
or, in other words,
\begin{equation}
I^{\uparrow\downarrow}\equiv({\cal U}^{\uparrow\downarrow})^\dagger H_{I,0}{\cal U}^{\uparrow\downarrow}
\end{equation}
are ``dynamical'' Lewis-Riesenfeld invariants of motion for, respectively, the branch Hamiltonians
$H^{\uparrow\downarrow}$ in Eq. (\ref{HamiltonianUPDOWN}), supplemented by Eq. (\ref{Newt}) to specify $f(t)$. They satisfy the invariance equations
\begin{equation}
\frac{dI^{\uparrow\downarrow}}{dt}=\frac{\partial I^{\uparrow\downarrow}}{\partial t} + \frac{1}{i\hbar}\left[I^{\uparrow\downarrow}, H^{\uparrow\downarrow}\right]=0.
\end{equation}
These invariants may be calculated explicitly with the aid of Eq. (\ref{order}),
\begin{equation}
I^{\uparrow\downarrow}=\frac{1}{2m}(p\mp m\dot{\alpha})^2+\widetilde{U}(x\mp\alpha),
\end{equation}
and their (constant-in-time) eigenvalues $\lambda_n$ are nothing but the eigenvalues of $H_{I,0}$,
\begin{equation}
\label{statio}
\left[\frac{-\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}}+\widetilde{U}(x)\right]\phi_{n}(x)=\lambda_{n}\phi_{n}(x),
\end{equation}
where the $\phi_n(x)$ are the eigenfunctions of $H_{I,0}$. They form a natural basis to expand $|\Phi(t)\rangle$ as
\begin{eqnarray}
|\Phi(t)\rangle&=&\sum_n e^{-i\lambda_n t/\hbar} |\phi_n\rangle c_n,
\nonumber\\
c_n&=&\langle \phi_n|\Phi(0)\rangle,
\label{superpo}
\end{eqnarray}
in terms of constant coefficients $c_n$.
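For a concrete, purely illustrative way to obtain the $\lambda_n$ and $\phi_n$, Eq. (\ref{statio}) can be diagonalized on a grid; the harmonic form of $U$ and all parameter values in the sketch below are our own assumptions, chosen only to make the example self-contained.
\begin{verbatim}
import numpy as np

# Finite-difference sketch of the stationary equation for an assumed harmonic
# trap U(x) = m*omega^2*x^2/2 tilted by the force c (illustrative parameters).
hbar, m, omega, c = 1.0, 1.0, 1.0, 0.1
N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

U_tilde = 0.5 * m * omega**2 * x**2 - c * x
main = hbar**2 / (m * dx**2) + U_tilde               # diagonal of H_{I,0}
off = -hbar**2 / (2 * m * dx**2) * np.ones(N - 1)    # off-diagonal (kinetic term)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

lam, phi = np.linalg.eigh(H)                         # lambda_n and phi_n on the grid
print(lam[:3])   # close to hbar*omega*(n+1/2) - c**2/(2*m*omega**2)
\end{verbatim}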
The vectors
\begin{equation}
|\psi_n^{\uparrow\downarrow}\rangle\equiv ({\cal U}^{\uparrow\downarrow})^\dagger|\phi_n\rangle
\label{psin}
\end{equation}
are eigenvectors of $I^{\uparrow\downarrow}$ with eigenvalue $\lambda_n$ since
\begin{eqnarray}
I^{\uparrow\downarrow}|\psi_n^{\uparrow\downarrow}\rangle&=&({\cal U}^{\uparrow\downarrow})^\dagger H_{I,0}\,{\cal U}^{\uparrow\downarrow}({\cal U}^{\uparrow\downarrow})^\dagger|\phi_n\rangle
\nonumber\\
&=&\lambda_n ({\cal U}^{\uparrow\downarrow})^\dagger |\phi_n\rangle=\lambda_n|\psi_n^{\uparrow\downarrow}\rangle.
\end{eqnarray}
Using the explicit form of ${\cal U}^{\uparrow\downarrow}$ in Eq. (\ref{order})
their coordinate representation is
\begin{equation}
\psi_n^{\uparrow\downarrow}(x,t)=e^{\pm i m\dot{\alpha}x/\hbar}\phi_n(x\mp \alpha).
\end{equation}
The ``dynamical modes'' are defined as orthogonal solutions of the time-dependent Schr\"odinger equations driven by $H^{\uparrow\downarrow}$ proportional to
these eigenfunctions, $e^{i\theta_n^{\uparrow\downarrow}(t)}\psi_n(x,t)$, where the Lewis-Riesenfeld phases $\theta_n^{\uparrow\downarrow}(t)$
are found from
\begin{equation}
\frac{d \theta^{\uparrow\downarrow}_{n}(t)}{d t}=\frac{1}{\hbar}
\left\langle\psi^{\uparrow\downarrow}_{n}(t)\left|i\hbar\frac{\partial}{\partial t}-H^{\uparrow\downarrow}\right|\psi^{\uparrow\downarrow}_{n}(t)\right\rangle,
\label{previous}
\end{equation}
so that the Schr\"odinger equations are satisfied.
Setting $\theta_n^{\uparrow\downarrow}(0)=0$, an explicit calculation gives, see the Appendix,
\begin{equation}
\theta^{\uparrow\downarrow}_{n}(t)=\frac{-1}{\hbar}\int_{0}^{t}\left[\lambda_{n}+F^{\uparrow\downarrow}(t')\right]dt'.
\label{LR}
\end{equation}
Arbitrary wavefunction solutions of the dynamics $\psi^{\uparrow\downarrow}(t)$ will combine
these elementary solutions with constant coefficients. For the initial state $|\Phi(0)\rangle$,
$\psi^{\uparrow\downarrow}(x,t)=\sum_n e^{i\theta_n^{\uparrow\downarrow}(t)}\psi_n(x,t) c_n$.
Factoring out $n$-independent
phase factors and summing over $n$ as in Eq. (\ref{superpo}),
the expression (\ref{wfs}) and following results for $\psi^{\uparrow\downarrow}(x,t)$ in the main text are exactly recovered.
The interferometric phase from this point of view is nothing but the difference between Lewis-Riesenfeld phases for the arms.
Since the $n$-dependent part cancels out, the result is $n$-independent.
\section{Discussion \label{final}}
We have put forward an STA-mediated atomic interferometer scheme to measure homogeneous constant forces with spin- (internal-state) dependent moving traps to guide the wavefunction components along the two arms.
The approach is robust in different ways:
i) As the process can be made fast,
decoherence effects and perturbations can be mitigated or avoided without necessarily
giving up a required sensitivity. For a caveat on the relation between process time and decoherence see point vi below.
ii) The moving trapping potentials may be anharmonic, so the method may be applied in particular to optical lattices
as conveyor belts to drive the arms.
iii) The initial motional wave function is arbitrary; there is no need to prepare a perfect ground state because the differential
phase is not affected by the initial motional state.
iv) The moving trap potentials are complemented by time-dependent linear potentials that compensate inertial forces
``rocking'' on a pivot point $x_0$.
The differential interferometric phase is simplified and made pivot independent when
$
\int_0^{t_f}\! x_0(t)f(t) dt=0
$
in Eq. (\ref{wfsud}). The integral vanishes when $x_0$ is a constant because of the way $f(t)$ is constructed.
This result is in fact robust with respect to typical forms of $x_0(t)$: A noisy $x_0(t)$ with a zero-mean perturbation around its nominal constant value
will give a vanishing integral as long as the correlation time is short compared to
$t_f$. Another relevant dependence is an undesired linear drift, e.g. $x_0(t)=a +bt$. For the linear term $bt$,
integrating by parts and using the boundary conditions (\ref{bc_y}) gives a vanishing integral too.
It may also be of interest in practice to set a spin-dependent $x_0(t)$. For example, for $x_0^{\uparrow\downarrow}(t)=\pm\alpha(t)$ the resulting integrals $\pm\int_0^{t_f} [\pm\alpha(t)]f(t) dt$ would not vanish but they would give the same phase for both arms, which makes the differential phase again pivot independent.
v) There is ample freedom to choose the trap paths $\pm\alpha(t)$ which are only subjected, apart from technical limitations, to
satisfy some boundary conditions
at initial and final times. This flexibility may be used to change the sensitivity ${\cal S}$. It also makes it possible to achieve fast scalings of ${\cal S}$
with the total time $t_f$, in principle with an arbitrary power of $t_f$,
in contrast to the linear scaling with $t_f$ of Ramsey-Bord\'e interferometers or the $t_f^2$ scaling of a Mach-Zehnder configuration
\cite{McDonald2014}.
vi) Following techniques developed to enhance the robustness of STA approaches \cite{Ruschhaupt2012,review2019},
the freedom in choosing $\alpha(t)$ may be used to make the differential phase robust against specific setting-dependent perturbations, e.g. some particular type of noise relevant for the experimental arrangement. A recent study \cite{Lu2020} analyzes the motional energy excitation of atoms due to noises affecting different moving optical lattice parameters: periodicity, depth, or position. The excitations may be
analyzed in terms of static or dynamical contributions whose relative importance depends on the parameter affected by noise.
Static contributions are defined as those which are independent of the trap trajectory; they just increase with transport time $t_f$, so the strategy to mitigate them is to shorten process times. They are dominant in particular for position noise.
Dynamical contributions depend on the
trap trajectory so they could be mitigated by a good choice of $\alpha(t)$. For ``accordion noise'' of the lattice periodicity they
dominate and give minimal excitation at a certain transport time. For noise in the trap depth there is also a time $t_f$ with minimal excitation
with dynamical terms dominating at shorter transport times and static terms at larger times.
The existence of minima (for some but not all noise types) underlines that the naive expectation that
shorter and shorter times $t_f$ are always beneficial is not necessarily correct. The beneficial effect of shortening the time depends on the noise type and on the time domain.
It also points out that there are no universal recipes,
each noise or perturbation requires a dedicated study. Adapting the analysis in \cite{Lu2020}, which did not include compensating forces, to the current configuration, is left for a separate work.
vii) The arm wavefunctions overlap and differential phase found in Eq. (\ref{simple}) are exact, i.e., no adiabatic approximation has been performed, and there is no need to calculate non-adiabatic corrections. In this regard it is interesting to sketch how this result is found in
the adiabatic, slow motion limit when the compensating force $f(t)$ is {\it not} applied. The calculation would start in Eq. (\ref{hami0}) for
$H_{I}^{\uparrow\downarrow}$. Taking now $f(t)=0$, these moving-frame Hamiltonians cannot be separated into a purely $t$-independent part and purely $t$-dependent terms because of the inertial terms $\pm xm\ddot\alpha$. In the slow-motion limit, however, these terms will be negligible compared to
$\widetilde{U}$ so that
the structure in that limit is again that of a time-independent Hamiltonian and purely time-dependent terms. The corresponding dynamics then lead to Eq. (\ref{simple}), but only as an approximation. In contrast, when the compensation forces $f(t)=\mp m\ddot{\alpha}$
are applied, the dynamics is generally non-adiabatic in the laboratory frame, but adiabatic by construction in the moving frames, a key property that allows us to set
simultaneously short process times and large sensitivities.
We hope that the unique features of the proposed scheme, among them independence of initial state, arbitrary trap potential,
and freedom to choose sensitivity and cycle time, will motivate further work.
The elements necessary to implement the current scheme have been separately demonstrated.
We have paid some attention to the use of oppositely moving spin-dependent optical lattices \cite{Mandel2003,Steffen2012}.
Alternative realizations may be based on the unitary equivalence between the ``local'', position- and $t$-dependent
compensating Hamiltonian terms
$\mp m\ddot{\alpha}(t)x$ and ``counterdiabatic'' momentum- and $t$-dependent terms $\pm p\dot{\alpha}$ \cite{Ibanez2012,review2019}.
While implementing the former in the laboratory is quite generally easier than the latter, the spin-dependent counterdiabatic terms may be realized in systems with either actual or synthetic spin-orbit coupling \cite{Ban2012,Cadez2013,Cadez2014,Chen2018}.
\acknowledgments{We thank D. Leibfried, J. Bollinger, and D. Gu\'ery-Odelin for many useful discussions.
This work was supported by the Basque Country Government (Grant No. IT986-16), and
by PGC2018-101355-B-I00 (MCIU/AEI/FEDER,UE). }
\appendix
\section{Calculation of Eq. (\ref{LR})\label{app2}}
To calculate the Lewis-Riesenfeld phases in Eq. (\ref{LR}) we start from calculating the matrix elements in
Eq. (\ref{previous}).
It proves convenient to write first, using Eqs. (\ref{hami0}) and (\ref{structure}),
\begin{equation}
H^{\uparrow\downarrow}=I^{\uparrow\downarrow}+F^{\uparrow\downarrow}-i\hbar({\cal U}^{\uparrow\downarrow})^\dagger
\dot{\cal U}^{\uparrow\downarrow}.
\end{equation}
Using now Eq. (\ref{psin})
we find that
\begin{eqnarray}
&-&\langle \psi^{\uparrow\downarrow}_n|H^{\uparrow\downarrow}| \psi^{\uparrow\downarrow}_n\rangle/\hbar
\nonumber\\
&=&\frac{-1}{\hbar}(\lambda_n+F^{\uparrow\downarrow})+i\langle \phi_n| \dot{\cal U}^{\uparrow\downarrow}({\cal U}^{\uparrow\downarrow})^\dagger
|\phi_n\rangle,
\label{first}
\end{eqnarray}
whereas, using again Eq. (\ref{psin}) and noting that $\dot{\cal U}^{\uparrow\downarrow}({\cal U}^{\uparrow\downarrow})^\dagger=-{\cal U}^{\uparrow\downarrow}(\dot{\cal U}^{\uparrow\downarrow})^\dagger$,
\begin{equation}
i\langle \psi_n^{\uparrow\downarrow}|\dot{\psi}_n^{\uparrow\downarrow}\rangle=-i \langle \phi_n| \dot{\cal U}^{\uparrow\downarrow}({\cal U}^{\uparrow\downarrow})^\dagger|\phi_n\rangle.
\label{second}
\end{equation}
The right hand side may be calculated explicitly but in any case it is cancelled by the last term in Eq. (\ref{first})
when summing Eqs. (\ref{first}) and (\ref{second}) in Eq. (\ref{previous}). Integrating we get
finally Eq. (\ref{LR}).
\end{document}
\begin{document}
\title{Unsolved Problems in Spectral Graph Theory}
\begin{abstract}
Spectral graph theory is a captivating area of graph theory that employs the eigenvalues
and eigenvectors of matrices associated with graphs to study them. In this paper, we
present a collection of $20$ topics in spectral graph theory, covering a range of open
problems and conjectures. Our focus is primarily on the adjacency matrix of graphs,
and for each topic, we provide a brief historical overview.
\end{abstract}
{\bfseries Key words:} Eigenvalues; Spectral radius; Adjacency matrix; Spectral graph theory
{\bfseries AMS Classifications:} 05C35; 05C50; 15A18
Spectral graph theory is a beautiful branch of graph theory that utilizes the eigenvalues
and eigenvectors of matrices naturally associated with graphs to study them. The primary
objective in spectral graph theory is twofold: firstly, to compute or estimate the eigenvalues
of these matrices and secondly, to establish links between the eigenvalues and the
structural properties of graphs. As it turns out, the spectral perspective is a powerful
tool in the study of graph theory.
Over the past few decades, numerous results and applications in various fields of mathematics
have been obtained through spectral graph theory. However, many open problems and conjectures
in spectral graph theory remain unresolved, necessitating further exploration.
In this paper, we collect $20$ topics in spectral graph theory that include a range of
conjectures and open problems, with a focus primarily on the adjacency matrix of graphs.
Additionally, we provide a brief historical overview of each topic. Inevitably, it is a somewhat
personal perspective on the choice of these problems and conjectures, and is not intended
to be exhaustive; the authors apologize for any omissions.
Let us begin with some definitions and notation. Throughout this paper, we only consider
graphs that are simple (there are no loops or multiple edges), undirected and unweighted.
Given a graph $G$, the \emph{adjacency matrix} $A(G)$ of $G$ is a $(0,1)$-matrix, where the
rows and columns are indexed by the vertices in $V(G)$. The $(i,j)$-entry of $A(G)$ is
equal to $1$ if the vertices $i$ and $j$ are adjacent, and $0$ otherwise. Since $A(G)$ is
a real and symmetric matrix, it has a full set of real eigenvalues which we will denote by
\[
\lambda_1(G)\geq \lambda_2(G)\geq\cdots \geq \lambda_n(G).
\]
Recall that the Laplacian matrix of $G$ is defined as $L(G) := D(G) - A(G)$, where $D(G)$ is
the diagonal matrix whose entries are the degrees of the vertices of $G$. We shall write
$\mu_1(G)\geq\mu_2(G)\geq\cdots\geq\mu_n(G)=0$ for the eigenvalues of $L(G)$. If there is no
danger of ambiguity, for any $1\leq i \leq n$ we write $\lambda_i$ and $\mu_i$ instead of
$\lambda_i(G)$ and $\mu_i(G)$, respectively. We also write $\lambda(G):=\lambda_1(G)$ for short.
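Throughout the paper these spectra can be computed directly; the short Python sketch below (our illustration, using NetworkX and NumPy, with the Petersen graph as an arbitrary example) fixes the ordering conventions used above.
\begin{verbatim}
import networkx as nx
import numpy as np

G = nx.petersen_graph()                     # any example graph
A = nx.to_numpy_array(G)                    # adjacency matrix A(G)
L = np.diag(A.sum(axis=1)) - A              # Laplacian L(G) = D(G) - A(G)

lam = np.sort(np.linalg.eigvalsh(A))[::-1]  # lambda_1 >= ... >= lambda_n
mu = np.sort(np.linalg.eigvalsh(L))[::-1]   # mu_1 >= ... >= mu_n = 0
print(lam[0], mu[-1])                       # lambda(G) = 3 for the Petersen graph
\end{verbatim}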
For a graph $G$, let $\omega(G)$ denote the clique number of $G$, which is defined as the number
of vertices in the largest complete subgraph in $G$. Let $e(G)$ and $\overline{G}$ denote the
number of edges and the complement of $G$, respectively. We use the notations $\delta(G)$,
$\Delta(G)$, and $\overline{d}(G)$ to represent, respectively, the minimum degree, maximum
degree, and average degree of $G$. For two vertex-disjoint graphs $G$ and $H$, we
use $G \vee H$ to denote the join of $G$ and $H$, which is obtained by adding all possible
edges between $G$ and $H$. The \emph{complete split graph} $S_{n,k}$ with parameters $n$
and $k$ is the graph on $n$ vertices obtained from a clique on $k$ vertices and an independent
set on the remaining $n - k$ vertices in which each vertex of the clique is adjacent to each
vertex of the independent set. If there is no special explanation, we use $n$ to denote
the number of vertices in $G$, and $m$ to denote the number of edges in $G$. As usual,
$K_n$, $K_{p,n-p}$, $P_n$ and $C_n$ denote respectively the complete graph, the complete bipartite graph,
the path and the cycle on $n$ vertices. For graph notation and terminology undefined here, we refer
the reader to \cite{Bondy-Murty2008}.
\section{Extensions of two classic inequalities}
\label{sec:classic-inequalities}
\subsection{An extension of Hong's inequality}
In 1988, Hong \cite{H88} proved that $\lambda(G)\leq \sqrt{2m-n+1}$ if $G$ is connected.
In fact, Hong's inequality holds for each graph without isolated vertices. This was
emphasised in \cite{H93} later by Hong himself.
Let $s^+(G)$ ($s^-(G)$) be the sum of the squares of the positive (negative) eigenvalues of
the adjacency matrix $A(G)$ of $G$. Elphick, Farber, Goldberg and Wocjan \cite{EFGW16}
proposed the following conjecture:
\begin{conjecture}[Elphick-Farber-Goldberg-Wocjan \cite{EFGW16}] \label{conj:extension-Hong}
Let $G$ be a connected graph. Then $$\min\{s^+(G),s^-(G)\}\geq n-1.$$
\end{conjecture}
From the above conjecture, one can obtain an extension of Hong's inequality, i.e., $s^+(G)\leq 2m-n+1$.
In fact, the above conjecture has a stronger form: every graph with $\kappa$ components
satisfies $\min\{s^+(G),s^-(G)\}\geq n-\kappa$.
So far, Conjecture \ref{conj:extension-Hong} has been confirmed for special classes of graphs,
such as bipartite graphs, regular graphs, complete $k$-partite graphs \cite{EFGW16}, and
connected graphs with no more than $4$ vertices after blowing up (see \cite{Guo-Wang2022} for details).
A graph $G$ is said to be \emph{hyper-energetic} if the energy $\mathcal{E}(G)=\sum_{i=1}^n|\lambda_i|>2(n-1)$.
Conjecture \ref{conj:extension-Hong} was also confirmed for hyper-energetic graphs \cite[Theorem~10]{EFGW16}.
Since Nikiforov \cite{N07} proved that almost all graphs are hyper-energetic, Conjecture \ref{conj:extension-Hong}
is true for almost all graphs. For more families of graphs supporting Conjecture \ref{conj:extension-Hong}
and results on structural properties, we refer to \cite{ADDGHM23+}.
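A brute-force numerical check (ours, using the NetworkX graph atlas, which contains all graphs on at most $7$ vertices) can be written in a few lines; it should report no violations if the conjecture holds in this range.
\begin{verbatim}
import networkx as nx
import numpy as np

violations = []
for G in nx.graph_atlas_g()[1:]:            # all graphs on at most 7 vertices
    if G.number_of_nodes() < 2 or not nx.is_connected(G):
        continue
    ev = np.linalg.eigvalsh(nx.to_numpy_array(G))
    s_plus = float(np.sum(ev[ev > 1e-9] ** 2))
    s_minus = float(np.sum(ev[ev < -1e-9] ** 2))
    if min(s_plus, s_minus) < G.number_of_nodes() - 1 - 1e-6:
        violations.append(G)
print(len(violations))                      # 0 if the conjecture holds in this range
\end{verbatim}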
\subsection{An extension of Wilf's inequality}
Consider a graph $G$ of order $n$ with clique number $\omega$. In 1986, Wilf \cite{W86} proved that
\begin{equation} \label{eq:Wilf}
\lambda(G) \leq \Big(1 - \frac{1}{\omega}\Big) n.
\end{equation}
Strengthening Wilf's bound, Elphick and Wocjan \cite{Elphick-Wocjan2018} proposed a conjecture
suggesting that \eqref{eq:Wilf} can be improved by substituting $\lambda(G)$ with $\sqrt{s^+(G)}$.
\begin{conjecture}[Elphick-Wocjan \cite{Elphick-Wocjan2018}] \label{conj:enhanced-Wilf-bound}
Let $G$ be a graph of order $n$ with clique number $\omega$. Then
\[
\sqrt{s^+(G)} \leq \Big(1 - \frac{1}{\omega}\Big) n.
\]
\end{conjecture}
This conjecture is tight, for example, for complete regular multipartite graphs. Based on some
numerical experiments, we suspect that equality holds in Conjecture \ref{conj:enhanced-Wilf-bound}
only when $G$ is a complete regular multipartite graph. In \cite{Elphick-Wocjan2018}, Elphick
and Wocjan proved this conjecture for various classes of graphs, including triangle-free graphs,
and for almost all graphs. They also tested against the thousands of named graphs with up to $40$
vertices from the Wolfram Mathematica database, but no counterexamples were found.
Using SageMath software we confirmed this conjecture for graphs having at most $10$ vertices.
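The equality case is easy to check directly; the following spot check (ours, with NetworkX and NumPy) verifies that $\sqrt{s^+(G)}=(1-1/\omega)n$ for the complete regular multipartite graph $K_{4,4,4}$.
\begin{verbatim}
import networkx as nx
import numpy as np

G = nx.complete_multipartite_graph(4, 4, 4)        # omega = 3, n = 12
ev = np.linalg.eigvalsh(nx.to_numpy_array(G))
s_plus = np.sum(ev[ev > 1e-9] ** 2)
omega = max(len(c) for c in nx.find_cliques(G))    # clique number
n = G.number_of_nodes()
print(np.sqrt(s_plus), (1 - 1 / omega) * n)        # both equal 8
\end{verbatim}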
\section{The Bollob\'as-Nikiforov Conjecture}
Another conjecture regarding $\lambda_2$ is one due to Bollob\'as and Nikiforov \cite{BN07},
which can be seen as a spectral Tur\'an-type conjecture. The starting point of extremal graph
theory is Mantel's theorem, which says that every graph on $n$ vertices contains a triangle
if $m>\left\lfloor n^2/4 \right\rfloor$. In 1970, Nosal \cite{N70} proved that $G$ contains
a triangle if $\lambda(G)>\sqrt{m}$. Since Nosal's result implies Mantel's theorem,
it is usually called the spectral Mantel theorem.
Nosal's theorem was generalized by Nikiforov \cite{N02} to the inequality
$\lambda(G)\leq \sqrt{2 (1 - 1/\omega(G)) m}$. It should be mentioned that this
result was implicitly suggested by Edwards and Elphick \cite{EE83}. The above
inequality can imply the classic Tur\'an theorem on cliques, and also Wilf's
inequality \eqref{eq:Wilf}. By introducing $\lambda_2(G)$ as a new parameter,
Bollob\'as and Nikiforov \cite{BN07} proposed a stronger spectral inequality.
\begin{conjecture}[Bollob\'as-Nikiforov \cite{BN07}]\label{Conj-BN}
Let $G$ be a graph on $m$ edges. Then
\[
\lambda^2_1(G)+\lambda^2_2(G)\leq 2\Big(1 - \frac{1}{\omega(G)} \Big) m.
\]
\end{conjecture}
So far, Conjecture \ref{Conj-BN} has been confirmed by Lin, Ning and Wu \cite{LNW21} for the
case $\omega(G)=2$. They proved that for any triangle-free graph $G$ on $m$
edges and without isolated vertices, if $\lambda^2_1(G)+\lambda^2_2(G)=m$ then $G$ is a
blow up of some member in $\{P_2, 2P_2, P_4, P_5\}$. Li, Sun and Yu \cite{Li-Sun-Yu2022}
generalized this result by giving an upper bound of $\lambda_1^{2k} + \lambda_2^{2k}$
for $\{C_{2i+1}\}_{i=1}^k$-free graphs. Additionally, by the result of Ando and
Lin \cite{Ando-Lin2015}, we know that Conjecture \ref{Conj-BN} holds for weakly perfect
graphs, which are graphs with equal clique number and chromatic number.
There is also another conjecture related to Conjecture \ref{Conj-BN} that strengthens it.
\begin{conjecture}[Elphick-Linz-Wocjan \cite{Elphick-Linz-Wocjan2021}] \label{conj:generalization-BN}
Let $G$ be a graph on $m$ edges. Then
\[
\sum_{i=1}^{\ell} \lambda_i^2(G) \leq 2\Big(1 - \frac{1}{\omega(G)} \Big) m,
\]
where $\ell = \min\{n^+(G), \omega (G)\}$, and $n^+(G)$ is the number {\rm (}counting multiplicities{\rm )}
of positive eigenvalues of $A(G)$.
\end{conjecture}
The conjecture was confirmed by Elphick, Linz and Wocjan \cite{Elphick-Linz-Wocjan2021} for
weakly perfect graphs, Kneser graphs, Johnson graphs and two classes of strongly regular graphs.
It is important to note that choosing only $\ell=n^+(G)$ would result in a counterexample
to Conjecture \ref{conj:generalization-BN}. In fact, consider the cycle $C_7$, one can
check that $n^+(C_7) = 3$ and
\[
\lambda_1^2(C_7) + \lambda_2^2(C_7) + \lambda_3^2(C_7) > 2\Big(1 - \frac{1}{\omega(C_7)} \Big) m.
\]
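This can be confirmed numerically in a couple of lines (our check; only NetworkX and NumPy are assumed):
\begin{verbatim}
import networkx as nx
import numpy as np

G = nx.cycle_graph(7)                               # C_7, triangle-free, omega = 2
ev = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))[::-1]
lhs = float(np.sum(ev[:3] ** 2))                    # lambda_1^2 + lambda_2^2 + lambda_3^2
print(lhs, 2 * (1 - 1 / 2) * G.number_of_edges())   # about 7.11 > 7
\end{verbatim}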
\section{Maximum spectral radius of planar (hyper)graphs}
Planar graphs have been extensively studied for a long time. Among various topics in spectral graph
theory, the investigation of the spectral radius of planar graphs is particularly intriguing. This
topic can be traced back at least to Schwenk and Wilson's fundamental question \cite{Schwenk-Wilson1978}:
``What can be said about the eigenvalues of a planar graph?''
In 1988, Hong \cite{H88} established the first significant result that for a planar graph $G$,
the spectral radius satisfies the inequality $\lambda(G)\leq\sqrt{5n-11}$, using Hong's inequality as
mentioned in Section \ref{sec:classic-inequalities}. Subsequently, Cao and Vince \cite{Cao-Vince1993}
improved Hong's bound to $4+\sqrt{3n-9}$, while Hong \cite{Hong1995} himself improved it further
to $2\sqrt{2} + \sqrt{3n - 15/2}$, and Ellingham and Zha \cite{Ellingham2000} to $2 + \sqrt{2n - 6}$.
Additionally, Boots and Royle \cite{Boots-Royle1991} and independently, Cao and Vince \cite{Cao-Vince1993},
conjectured that $K_2\vee P_{n-2}$ attains the maximum spectral radius among all planar graphs
on $n \geq 9$ vertices. Recently, significant progress has been made on the conjecture. Tait
and Tobin \cite{Tait-Tobin2017} confirmed the conjecture for sufficiently large $n$. It is
worth noting that Guiduli announced a proof of the conjecture for large $n$ in his Ph.D.
Thesis (see \cite{G96} and comments from \cite{Tait-Tobin2017}). However, the conjecture
remains open for small values of $n$.
\begin{conjecture}[Boots--Royle 1991 \cite{Boots-Royle1991} and independently Cao--Vince 1993 \cite{Cao-Vince1993}] \label{conj:planar-graphs}
Among all planar graphs on $n\geq 9$ vertices, $K_2\vee P_{n-2}$ attains the maximum spectral radius.
\end{conjecture}
Extending the investigations of graph case, Ellingham, Lu and Wang \cite{ELW22} studied a hypergraph
analog of the Cvetkovi\'c--Rowlinson conjecture which states that among all outerplanar graphs on
$n$ vertices, $K_1 \vee P_{n-1}$ attains the maximum spectral radius. Given a hypergraph $H$,
the \emph{shadow} of $H$, denoted by $\partial (H)$, is the graph $G$ with $V (G) = V (H)$ and
$E(G) = \{uv : uv\in e\ \text{for some}\ e\in E(H)\}$. For a hypergraph $H$, if each edge of $H$
contains precisely $r$ vertices, then $H$ is called \emph{$r$-uniform}. The \emph{spectral radius}
of an $r$-uniform hypergraph $H$ is defined as
\[
r! \max_{ \bm{x}\in\mathbb{R}^n,\,\|\bm{x}\|_r = 1 } \sum_{\{i_1,\ldots,i_r\}\in E(H)} x_{i_1} x_{i_2} \cdots x_{i_r},
\]
where $\mathbb{R}^n$ is the set of real vectors of dimension $n$ and $\|\bm{x}\|_r$ is the
$\ell^r$-norm of $\bm{x}$.
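For small hypergraphs this quantity can be approximated directly from the definition by constrained optimization. The sketch below is our own illustration: the edge set is made up, SciPy's local SLSQP optimizer is used, and several random starts are taken since a local method is not guaranteed to reach the global maximum.
\begin{verbatim}
import math
import numpy as np
from scipy.optimize import minimize

edges = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]           # made-up 3-uniform hypergraph
n, r = 5, 3

def neg_form(x):
    # Negative of r! * sum over edges of the products of entries.
    return -math.factorial(r) * sum(np.prod(x[list(e)]) for e in edges)

cons = {"type": "eq", "fun": lambda x: np.sum(np.abs(x) ** r) - 1.0}
rng = np.random.default_rng(0)
best = -np.inf
for _ in range(20):                                  # several random starts
    x0 = np.abs(rng.normal(size=n))
    x0 /= np.sum(x0 ** r) ** (1 / r)                 # start on the l^r unit sphere
    res = minimize(neg_form, x0, constraints=[cons])
    best = max(best, -res.fun)
print(best)                                          # approximate spectral radius
\end{verbatim}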
A $3$-uniform hypergraph $H$ is called \emph{outerplanar} (\emph{planar}) if its shadow has an
outerplanar (planar) embedding such that each edge of $H$ is the vertex set of an interior
triangular face of the shadow. In \cite{ELW22}, Ellingham, Lu and Wang proved that for sufficiently
large $n$, the $n$-vertex outerplanar $3$-uniform hypergraph of maximum spectral radius is the
unique $3$-uniform hypergraph whose shadow is $K_1 \vee P_{n-1}$. In particular, they proposed
a conjecture that serves as a hypergraph counterpart to Conjecture \ref{conj:planar-graphs}.
\begin{conjecture}[Ellingham-Lu-Wang \cite{ELW22}]
For large enough $n$, the $n$-vertex planar $3$-uniform hypergraph of maximum spectral radius
is the unique hypergraph whose shadow is $K_2\vee P_{n-2}$.
\end{conjecture}
\section{The relationship between Tur\'an theorem and spectral Tur\'an theorem}
A graph $G$ is said to have the \emph{Hereditarily Bounded Property} $P_{t,r}$ if $|E(H)|\leq t\cdot |V(H)| + r$
for any subgraph $H$ of $G$ with $|V(H)|\geq t$. In Guiduli's Ph.D. Thesis \cite{G96}, he proved a tight upper
bound on $\lambda(G)$ for graphs with property $P_{t,r}$.
\begin{theorem}[\cite{G96}]
Let $t\in\mathbb{N}$ and $r\geq -\binom{t+1}{2}$. If $G$ is a graph on $n$ vertices with property
$P_{t,r}$, then
\[
\lambda (G) \leq \sqrt{tn} + \sqrt{t(t+1) + 2r} + \frac{t-1}{2},
\]
and asymptotically,
\[
\lambda(G) \leq \sqrt{tn} + \frac{t-1}{2} + o(1).
\]
Furthermore, the asymptotic bound is tight.
\end{theorem}
A natural question is to ask for a generalization of this property $P_{t,r}$, where the hereditary
bound on the number of edges is not linear.
\begin{problem}[Guiduli \cite{G96}]
If for a graph $G$, $|E(H)|\leq c\cdot |V(H)|^2$ holds for all subgraphs $H$ of $G$,
does it follow that $\lambda(G)\leq 2c\cdot |V(G)|$\,{\rm ?} What can be said if the
exponent $2$ is replaced by some other constant less than $2$\,{\rm ?}
\end{problem}
It is worth noting that $\lambda (G) \leq \sqrt{2c}\cdot |V(G)|$ is a trivial bound from the well-known
inequality $\lambda (G) \leq \sqrt{2 |E(G)|}$. Moreover, the constant $2c$ would be best possible,
as seen by Wilf's inequality. If the answer to this problem were affirmative, then the spectral Erd\H os-Stone-Simonovits
theorem \cite{G96,Nikiforov2009-2} would be a consequence of the Erd\H os-Stone-Simonovits theorem \cite{Erdos1946,Erdos1966},
and Wilf's inequality \eqref{eq:Wilf} would follow from Tur\'an's theorem \cite{Turan1961}.
\section{Tight spectral conditions for a cycle of given length}
In spectral graph theory, it is very natural to ask the following problem: Determine tight spectral radius
conditions for the existence of a cycle of length $\ell$ in a graph of order $n$ for $\ell \in[3,n]$.
This problem has two aspects, i.e., the case of short cycles and the case of long cycles.
For any given graph $H$, denote by $spec(n,H)$ the maximum spectral radius of an $n$-vertex graph $G$
containing no copy of $H$, and by $Spec(n,H)$ the class of extremal graphs $G$ such that
$\lambda(G)=spec(n,H)$. For example, when $\ell=3$, from Nosal's theorem, one can obtain $spec(n,C_3)=\sqrt{\lfloor n^2/4 \rfloor}$.
When $n$ is odd, Nikiforov \cite{Nikiforov2007} showed $Spec(n,C_4)=K_1\vee (\frac{n-1}{2}K_2)$;
for the case where $n$ is even, confirming a conjecture in \cite{Nikiforov2009}, Zhai and Wang \cite{Zhai-Wang2012}
showed that $Spec(n,C_4)=K_1\vee (K_1\cup \frac{n-2}{2}K_2)$. In 2010, Nikiforov \cite{Nikiforov2010}
proposed the following conjecture: every graph $G$ of sufficiently large order $n$ contains a $C_{2k+2}$
if $\lambda(G)\geq \lambda(S^+_{n,k})$, unless $G=S^+_{n,k}$ where $k\geq 2$ and $S^+_{n,k}$ is
obtained from $S_{n,k}$ by adding an edge. Zhai and Lin \cite{ZL20} confirmed this conjecture
for $k=2$. Very recently, Cioab\u{a}, Desai and Tait \cite{CDT22+} announced a
complete proof of Nikiforov's conjecture. However, the problem of determining tight spectral
conditions for cycles of given length $\ell \in[3,n]$ is still wide open. A refined version
of Nikiforov's conjecture can be found in \cite{LN23-1}.
\begin{problem}[A refined version of Nikiforov's even cycle conjecture \cite{LN23-1}]
For any integer $k\geq 3$, determine the infimum $\alpha:=\alpha(k)$ such that every graph of
order $n=\Omega(k^{\alpha})$ {\rm (}where $\Omega(k^{\alpha})$ means there exists some
constant $c$ which is not related to $k$ and $n$, such that $n\geq ck^{\alpha}${\rm )}
satisfying $\lambda(G)>\lambda(S^+_{n,k})$ contains a $C_{2k+2}$.
\end{problem}
Recently, Zhai, Lin, and Shu \cite{Zhai-Lin-Shu2021} investigated the existence of short
consecutive cycles in fixed-size graphs and put forward the following conjecture.
\begin{conjecture}[Zhai-Lin-Shu \cite{Zhai-Lin-Shu2021}]
Let $k$ be a fixed positive integer and $G$ be a graph of sufficiently large size $m$
without isolated vertices. If
\[
\lambda (G) \geq \frac{k-1+\sqrt{4m-k^2+1}}{2},
\]
then $G$ contains a cycle of length $t$ for every $t\leq 2k+2$, unless $G\cong S_{m/k + (k+1)/2, k}$.
\end{conjecture}
For the case $k=2$, the conjecture has been confirmed, see \cite{Zhai-Lin-Shu2021}, \cite{Min-Lou-Huang2021}
and \cite{Sun-Li-Wei2023} for further details.
\section{Nikiforov's problem on consecutive cycles}
A \emph{Hamilton cycle} in a graph $G$ is a cycle passing through all the vertices of $G$. If it exists,
then $G$ is called \emph{Hamiltonian}. Maybe the most famous theorem in Hamiltonian graph theory is
Dirac's theorem \cite{D52}, which states that every graph on $n\geq 3$ vertices has a Hamilton cycle
if every vertex has degree at least $n/2$. In 1971, Bondy \cite{B71} introduced the concept of pancyclicity
of graphs. Let $G$ be a graph on $n$ vertices. We say that $G$ is pancyclic, if $G$ contains each cycle
of length $\ell$, where $\ell\in [3,n]$. Extending Ore's condition \cite{O60}, Bondy \cite{B71} proved
that every Hamiltonian graph on $n$ vertices is pancyclic if $e(G)\geq n^2/4$, unless $G$ is isomorphic
to $K_{n/2, n/2}$ where $n$ is even. If one drops the condition that $G$ is Hamiltonian in Bondy's
theorem, the phenomenon of consecutive cycles still persists, i.e., a graph $G$ contains all cycles of length
$\ell \in [3, \lfloor (n+3)/2 \rfloor]$ if $e(G)\geq n^2/4$. This theorem may be due
to Woodall and, independently, to Kopylov (see also Bollob\'as \cite[Corollary~5.4]{B78}).
In 2008, Nikiforov \cite{N08} considered a spectral analog of the above theorem.
\begin{problem}[Nikiforov \cite{N08}] \label{prob:consecutive-cycles}
What is the maximum $C$ such that for all positive $\varepsilon<C$ and sufficiently large $n$, every
graph $G$ of order $n$ with $\lambda (G) > \sqrt{\lfloor n^2/4 \rfloor}$ contains a cycle of length
$\ell$ for every integer $\ell\leq (C-\varepsilon)n$.
\end{problem}
Nikiforov \cite{N08} first showed $C\geq 1/320$ by the method of successively deleting
the vertex with the least component in the eigenvector corresponding to the spectral radius of a graph. Ning and
Peng \cite{NP20} improved it to $C\geq 1/160$. Later, Zhai and Lin \cite{ZL23} proved some
spectral result for theta graphs, and a direct main corollary is that $C\geq 1/7$. At the
same time, Li and Ning \cite{LN23+} showed that $C\geq 1/4$, by using some ideas from Ramsey
Theory \cite{ALPZ20+} and theorems on parity of cycles in graphs \cite{VZ77}. Li and Ning's
result was immediately used in \cite{ZZ23} to attack another problem in spectral graph theory.
On the other hand, Nikiforov \cite{N08} constructed the class of graphs
$G=K_s\vee (n-s)K_1$ where $s = (3-\sqrt{5})n/4$ from which one can find $C\leq (3-\sqrt{5})/2$.
Till now, Problem \ref{prob:consecutive-cycles} is still open.
\section{Graph toughness from Laplacian eigenvalues}
Let $c(G)$ denote the number of components of a graph $G$. The \emph{toughness} $t(G)$ of $G$ is
defined by
\[
t(G) := \min\left\{\frac{|S|}{c(G-S)}\right\},
\]
in which the minimum is taken over all proper subsets $S \subset V(G)$ such that $c(G - S) > 1$.
A graph $G$ is called $t$-tough if $t(G) \geq t$.
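For small graphs, $t(G)$ can be computed directly from the definition by enumerating all proper vertex subsets; the following brute-force sketch (ours, exponential in the number of vertices, using NetworkX) illustrates this.
\begin{verbatim}
import itertools
import networkx as nx

def toughness(G):
    best = float("inf")
    V = list(G.nodes)
    for k in range(1, len(V)):
        for S in itertools.combinations(V, k):
            H = G.copy()
            H.remove_nodes_from(S)
            comps = nx.number_connected_components(H)
            if comps > 1:                          # only subsets with c(G-S) > 1
                best = min(best, len(S) / comps)
    return best                                    # inf for complete graphs

print(toughness(nx.petersen_graph()))              # 4/3 for the Petersen graph
\end{verbatim}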
In 1973, Chv\'atal \cite{Chvatal1973} introduced the concept of graph toughness, which has close
connections to a variety of graph properties such as connectivity, Hamiltonicity, pancyclicity,
factors, and spanning trees (see \cite{Bauer-Broersma2006}). The study of toughness from
eigenvalues was initiated by Alon \cite{Alon1995}, who showed that for any connected $d$-regular graph $G$,
\[
t(G) > \frac{1}{3} \Big( \frac{d^2}{(d + \lambda')\lambda'} - 1 \Big),
\]
where $\lambda'$ is the second largest absolute eigenvalue. Brouwer \cite{Brouwer1995}
discovered a better bound and showed that $t(G) > d/\lambda' - 2$ for a connected $d$-regular
graph $G$. He mentioned in \cite{Brouwer1995} that the bound might be able to be improved
to $d/\lambda' - 1$ and then proposed the exact conjecture as an open problem
in \cite{Brouwer1995,Brouwer1996}. In 2021, the conjecture has been proved by Gu \cite{Gu2021}.
Recently, Haemers \cite{Haemers2020} proposed studying lower bounds on $t(G)$ in terms
of the eigenvalues of the Laplacian matrix $L(G)$. He also made the following conjecture.
\begin{conjecture}[Haemers \cite{Haemers2020}]\label{conj:tougu-form-Laplacian}
Let $G$ be a connected graph on $n$ vertices with minimum degree $\delta$. Then
\[
t(G) \geq\frac{\mu_{n-1}}{\mu_1 - \delta}.
\]
\end{conjecture}
For a connected $d$-regular graph $G$, this conjecture implies that $t(G)\geq \frac{d-\lambda_2}{-\lambda_n}$,
which is stronger than Brouwer's toughness conjecture. The bound in Conjecture \ref{conj:tougu-form-Laplacian}
is tight in case $G$ is the complete multipartite graph $K_{n_1,\ldots,n_k}$ ($1 < k < n$). Indeed, assuming
$n_1 \geq n_2\geq\cdots\geq n_k$, we have $t(G) = (n-n_1)/n_1$, $\mu_1 = n$ and $\mu_{n-1} = \delta = n - n_1$.
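These values are easy to confirm numerically; the snippet below (our check, with $K_{4,3,2}$ as an arbitrary example) computes $\mu_1$, $\mu_{n-1}$, $\delta$ and the conjectured bound, which here equals $t(G)=(n-n_1)/n_1=5/4$.
\begin{verbatim}
import networkx as nx
import numpy as np

G = nx.complete_multipartite_graph(4, 3, 2)        # n_1 = 4, n = 9
mu = np.sort(np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray()))[::-1]
n = G.number_of_nodes()
delta = min(d for _, d in G.degree())
print(mu[0], mu[n - 2], delta)                     # 9, 5, 5
print(mu[n - 2] / (mu[0] - delta))                 # 1.25 = t(G)
\end{verbatim}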
Let $S\subset V(G)$ be such that $t(G)=|S|/c(G-S)$. It was proved in \cite{Haemers2020} and \cite{Gu-Haemers2022}
that Conjecture \ref{conj:tougu-form-Laplacian} is true in each of the following cases:
\begin{enumerate}
\item[(1)] The complement of $G$ is disconnected;
\item[(2)] All connected components of $G - S$ are singletons;
\item[(3)] The union of some components of $G - S$ has order $(n-|S|)/2$;
\item[(4)] $c(G-S) = 2$.
\end{enumerate}
In \cite{Gu-Haemers2022}, two tight lower bounds for $t(G)$ in terms of the Laplacian eigenvalues were presented,
which provided support for Conjecture \ref{conj:tougu-form-Laplacian}.
\section{Hamilton cycles in pseudo-random graphs}
Finding general conditions which ensure that a graph is Hamiltonian is a central topic in graph
theory, and researchers have devoted many efforts to obtain sufficient conditions for Hamiltonicity.
There is an old and well-known conjecture related to pseudo-random graphs in this area.
A pseudo-random graph with $n$ vertices of edge density $p$ is a graph that behaves like a truly
random graph $G(n, p)$. Spectral techniques are a convenient way to express pseudo-randomness.
An $(n,d,\lambda')$-graph is a $d$-regular graph $G$ on $n$ vertices whose second largest eigenvalue
in absolute value is at most $\lambda'$. It is known that $(n, d, \lambda')$-graphs with small
$\lambda'$ compared to $d$ possess pseudo-random properties. For more details on pseudo-random
graphs, we refer the reader to \cite{Krivelevich-Sudakov2006}. In this area, a well-known conjecture on
Hamilton cycles in an $(n,d,\lambda')$-graph can be found in \cite{Krivelevich-Sudakov2003}.
\begin{conjecture}[Krivelevich-Sudakov \cite{Krivelevich-Sudakov2003}] \label{conj:Hamilton}
There exists an absolute constant $C > 0$ such that any $(n, d, \lambda')$-graph with $d/\lambda' \geq C$
contains a Hamilton cycle.
\end{conjecture}
In \cite{Krivelevich-Sudakov2003}, Krivelevich and Sudakov proved a result that $(n,d,\lambda')$-graphs
are Hamiltonian, provided
\[
\frac{d}{\lambda'} \geq \frac{1000\cdot \log n(\log\log\log n)}{(\log\log n)^2}
\]
for sufficiently large $n$. In recent work by Glock, Correia and Sudakov \cite{Glock-Correia-Sudakov2023},
progress has been made towards Conjecture \ref{conj:Hamilton} in two significant ways. Firstly, they
improved Krivelevich and Sudakov's bound above by showing that for some constant $C>0$,
$d/\lambda' \geq C\cdot (\log n)^{1/3}$ already guarantees Hamiltonicity. Secondly, they established
that for any constant $\alpha>0$, there exists a constant $C>0$ such that any $(n,d,\lambda')$-graph
with $d\geq n^\alpha$ and $d/\lambda' \geq C$ contains a Hamilton cycle.
Let us remark that there exist three additional conjectures that are related to Conjecture \ref{conj:Hamilton}.
The first one has to do with the concept of $f$-connected which is a generalization of the traditional notion
of connectedness. For a graph $G$, a pair $(A,B)$ of proper subsets of $V(G)$ is called a \emph{separation}
of $G$ if $A\cup B = V(G)$ and $G$ has no edge between $A\setminus B$ and $B\setminus A$. Let
$f: \mathbb{N}\setminus \{0\} \to \mathbb{R}$ be a function, $G$ is called \emph{$f$-connected} if every
separation $(A, B)$ of $G$ with $|A\setminus B| \leq |B\setminus A|$ satisfies $|A\cap B| \geq f(|A\setminus B|)$.
In 2006, Brandt, Broersma, Diestel and Kriesell \cite{Brandt-Broersma2006} conjectured that there exists a
function $f(k) = O(k)$ (here $f(k) = O(k)$ means there exists some absolute constant $c>0$ such that
$f(k) < ck$ for large $k$) such that every $f$-connected graph of order $n\geq 3$ is Hamiltonian.
As asserted in \cite{Brandt-Broersma2006}, if this conjecture were true, it would imply Conjecture \ref{conj:Hamilton}.
The second one is related to Chv\'atal's toughness conjecture \cite{Chvatal1973}. In 1973, Chv\'atal
\cite{Chvatal1973} conjectured that there is a constant $t$ such that every $t$-tough graph is Hamiltonian.
In \cite{Brandt-Broersma2006}, the authors demonstrated that the Brandt-Broersma-Diestel-Kriesell's conjecture
\cite{Brandt-Broersma2006} can be derived from Chv\'atal's toughness conjecture. Therefore, if Chv\'atal's
conjecture were proven to be true, it would imply the validity of Conjecture \ref{conj:Hamilton}.
The third conjecture relates to the Laplacian eigenvalues of graphs. Gu (c.f.\,\cite{Gu-Haemers2022})
conjectured that there exists a positive constant $C < 1$ such that if $\mu_{n-1}/\mu_1 \geq C$
and $n \geq 3$, then $G$ contains a Hamilton cycle. It is evident that for a $d$-regular graph
$G$, we have $\mu_{n-i+1}(G) = d - \lambda_i(G)$ for $i=1,2,\ldots,n$. Moreover, it can be
observed that Gu's conjecture above also implies Conjecture \ref{conj:Hamilton}.
\section{A spectral problem on counting subgraphs}
Mantel's theorem, a well-known result, states that a graph with $n$ vertices and $\lfloor n^2/4 \rfloor + 1$
edges contains a triangle. Strengthening Mantel's theorem, Rademacher (c.f.\,\cite{Erdos1955}) proved
that there are at least $\lfloor n/2 \rfloor$ copies of a triangle. Erd\H os \cite{Erdos1962-1,Erdos1962-2}
further generalized the result by proving that if $q < cn$ for some small constant $c$, then
$\lfloor n^2/4 \rfloor + q$ edges guarantees at least $q\lfloor n/2 \rfloor$ triangles. He also
conjectured that the same result holds for $q < n/2$, which was later proved by Lov\'asz and
Simonovits \cite{Lovasz1983}.
In 2010, Mubayi \cite{Mubayi2010} extended these theorems to the class of color-critical graphs,
which are graphs whose chromatic number can be decreased by removing some edges.
Let $T_{n,k}$ denote the Tur\'an graph on $n$ vertices, which is the complete $k$-partite
graph with parts of size $\lfloor n/k \rfloor$ or $\lceil n/k \rceil$.
\begin{theorem}[\cite{Mubayi2010}]
Let $k \geq 2$ and $F$ be a color-critical graph with chromatic number $\chi (F) = k+1$. There
exists $\delta = \delta_F > 0$ such that if $n$ is sufficiently large and $1 \leq q < \delta n$,
then every $n$-vertex graph with $e(T_{n,k}) + q$ edges contains at least $q\cdot c(n,F)$ copies of $F$,
where $c(n,F)$ is the minimum number of copies of $F$ in the graph obtained from $T_{n,k}$ by adding one edge.
\end{theorem}
Motivated by Mubayi's result above, Ning and Zhai \cite{NZ23} proposed to study
the spectral analog of Mubayi's theorem.
\begin{problem}[Ning-Zhai \cite{NZ23}] \label{prob:supersaturation-problem}
{\rm (i)} {\rm (}The general case{\rm )} Find a spectral version of Mubayi's result.
{\rm (ii)} {\rm (}The critical case{\rm )} For $q=1$ {\rm (}where $q$ is defined as in Mubayi's theorem{\rm )},
find the tight spectral versions of Mubayi's result when $F$ is some particular color-critical subgraph,
such as triangle, clique, book, odd cycle or even wheel, etc.
\end{problem}
In \cite{NZ23}, Ning and Zhai studied the fundamental cases of triangles for Problem \ref{prob:supersaturation-problem}.
In \cite{Ning-Zhai2022}, they also studied a special bipartite case, namely the
case of quadrilaterals.
\section{Extreme eigenvalues of nonregular graphs}
Regular graphs are a well-studied class of graphs, but for nonregular graphs where not
all vertices have equal degrees, it is possible to quantify how close they
are to regularity using various measures. One such measure is the difference between
the maximum degree $\Delta(G)$ and the largest eigenvalue $\lambda(G)$ of a graph $G$.
It is a well-known fact that $\lambda(G)\leq\Delta(G)$ for any connected graph $G$,
and equality holds if and only if $G$ is regular. Therefore, we can use the difference
$\Delta(G)-\lambda(G)$ as a measure of irregularity of a nonregular graph $G$. It
is natural to ask how small this difference can be for nonregular graphs. This
question has attracted the interest of many researchers in the past few
decades \cite{Cioaba2007-1,Cioaba2007-2,Nikiforov2018,Shi2007,Shi2009,Stevanovic2004,Zhang2005,Zhang2021}.
Let $\mathcal{G}(n,\Delta)$ denote the set of graphs attaining the maximum spectral radius
among all connected nonregular graphs with $n$ vertices and maximum degree $\Delta$, and let
$\lambda(n, \Delta)$ denote the maximum spectral radius. For a graph $G\in\mathcal{G}(n,\Delta)$,
Liu, Shen and Wang \cite{Liu-Shen-Wang2007} investigated the order of magnitude of
$\Delta-\lambda(G)$, and posed the following conjecture: For each fixed $\Delta$ and
$G\in\mathcal{G}(n,\Delta)$, the limit of $n^2(\Delta-\lambda(G))/(\Delta-1)$ exists. Furthermore,
\[
\lim_{n\to\infty} \frac{n^2(\Delta-\lambda(G))}{\Delta-1} = \pi^2.
\]
This conjecture is trivially true for $\Delta = 2$, and the condition that $\Delta$ is fixed
is crucial (see \cite{Liu2022} for details). Recently, the first author of this paper gave a
negative answer to the above conjecture by showing that the limit superior is at most $\pi^2/2$ (see \cite{Liu2022}).
Although Liu-Shen-Wang's conjecture is not true, we can still ask what is the exact leading
term of $\Delta-\lambda(n,\Delta)$. Based on some numerical experiments and heuristic arguments,
the following conjecture was presented.
\begin{conjecture}[Liu \cite{Liu2022}]
Let $\Delta\geq 3$ be a fixed integer and $G\in\mathcal{G}(n,\Delta)$. Then
the limit of $n^2 (\Delta-\lambda(G))$ always exists. Furthermore,
\begin{enumerate}
\item[$(1)$] if $\Delta$ is odd, then
\[
\lim_{n\to\infty} \frac{n^2(\Delta-\lambda(G))}{\Delta-1} = \frac{\pi^2}{4}.
\]
\item[$(2)$] if $\Delta$ $(\Delta>2)$ is even, then
\[
\lim_{n\to\infty} \frac{n^2 (\Delta-\lambda(G))}{\Delta-2} = \frac{\pi^2}{2}.
\]
\end{enumerate}
\end{conjecture}
For $\Delta=3$ and $\Delta=4$, the conjecture was confirmed by Liu \cite{Liu2022}. However,
for the general $\Delta$, it seems to be difficult to solve it.
On the other hand, it is intuitive that the graphs attaining the maximum spectral radius among all
connected nonregular graphs with prescribed maximum degree must be close to regular graphs. In particular, Liu
and Li \cite{Liu-Li2008} posed the following conjecture: Let $3\leq\Delta\leq n-2$ and $G\in\mathcal{G}(n,\Delta)$.
Then $G$ has degree sequence $(\Delta,\ldots,\Delta,\delta)$, where
\[
\delta =
\begin{cases}
\Delta-1, & n\Delta\ \text{is odd}, \\
\Delta-2, & n\Delta\ \text{is even}.
\end{cases}
\]
Although the statement of the conjecture above seems intuitive, it is challenging to
either prove or disprove it, even for small values of $\Delta$. One reason for the
difficulty of this conjecture, as well as Liu-Shen-Wang's conjecture, is that graphs
with bounded degree are sparse graphs whose spectral radius is bounded by a constant.
Therefore, many tools from spectral graph theory are not effective.
By analyzing the structural properties of the extremal graphs, Liu \cite{Liu2022} confirmed Liu-Li's
conjecture above for small $\Delta$. However, we cannot expect an affirmative answer to Liu-Li's conjecture
for general $\Delta$. In fact, there is some evidence to support the following speculation.
\begin{conjecture}[Liu \cite{Liu2022}] \label{conj:new-degree-sequence}
Let $G\in\mathcal{G}(n,\Delta)$. For each fixed $\Delta$ and sufficiently large $n$,
$G$ has degree sequence $(\Delta,\ldots,\Delta,\delta)$, where
\[
\delta =
\begin{cases}
\Delta-1, & \Delta\ \text{is odd},\ n\ \text{is odd}, \\
1, & \Delta\ \text{is odd},\ n\ \text{is even}, \\
\Delta-2, & \Delta\ \text{is even}.
\end{cases}
\]
\end{conjecture}
Liu confirmed Conjecture \ref{conj:new-degree-sequence} for $\Delta = 3$ and
$\Delta = 4$ in \cite{Liu2022}.
\section{Graphs with a prescribed average degree}
While a significant amount of research has focused on finding the maximum spectral radius
of graphs under specified conditions, there has been relatively less work done on
determining the minimum spectral radius.
For given $n$ and $m$, let $\mathcal{H}(n,m)$ denote the set of connected graphs on $n$
vertices and $m$ edges. In this section, we talk about the graphs in $\mathcal{H}(n,m)$ with
minimum spectral radius. It is a challenging task to identify the exact graph. Nevertheless,
the following two questions, if answered affirmatively, would provide significant structural insights into
the extremal graphs.
In 1993, Hong \cite{H93} posed a problem concerning the maximum degree and minimum degree
of extremal graphs.
\begin{problem}[Hong \cite{H93}] \label{prop:minimum-spectral-radius}
Let $G\in\mathcal{H}(n,m)$ be a connected graph minimizing $\lambda (G) - \overline{d}(G)$.
Is it true that $\Delta (G) - \delta (G) \leq 1$\,{\rm ?}
\end{problem}
Obviously, Problem \ref{prop:minimum-spectral-radius} is true for $m=n-1$
(see \cite{Collatz-Sinogowitz1957,Lovasz-Pelikan1973}); for $m=n$ (the unique
extremal graph is clearly $C_n$); for $m=n+1$ (see \cite{Simic1989}). In \cite{Cioaba2021},
Cioab\u{a} claimed that Problem \ref{prop:minimum-spectral-radius}
is also true when $2m/n$ is an integer. Indeed, if $\overline{d} := 2m/n$
is an integer, then it is always possible to construct a $\overline{d}$-regular
graph which clearly minimizes the quantity $\lambda (G) - \overline{d}(G)$.
In \cite{G96}, Guiduli proposed a conjecture \cite[Conjecture 5.8]{G96} which provides a different
perspective on the extremal graphs. It is worth mentioning that the original conjecture
of Guiduli is not true for small $m$; the following is a modified version of it.
\begin{conjecture}[A modified version of Guiduli's conjecture]
Let $G\in\mathcal{H}(n,m)$ be a connected graph having minimum spectral radius.
Let $r$ be such that $e(T_{n, r-1}) < m \leq e(T_{n, r})$. Then there is a constant
$\alpha >0$ such that $G$ is $r$-colorable for $m=\Omega(n^{\alpha})$.
\end{conjecture}
\section{The Bilu-Linial Conjecture}
A \emph{signed graph} $\Gamma = (G,\sigma)$ is a graph $G = (V,E)$ along with a function $\sigma: E \to \{+1, -1\}$
that assigns a positive or negative sign to each edge. The (unsigned) graph $G$ is said to be the underlying graph
of $\Gamma$, while the function $\sigma$ is referred to as the \emph{signature} of $\Gamma$.
The adjacency matrix $A(\Gamma)$ of a signed graph $\Gamma$ is derived from the adjacency matrix of
its underlying graph $G$ by replacing every $1$ with $-1$ if the corresponding edge in $\Gamma$ is negative.
An important feature of signed graphs is the concept of switching equivalence. Given a signed
graph $\Gamma = (G, \sigma)$ and a subset $U \subseteq V(G)$, let $\Gamma_U$ be the signed graph
obtained from $\Gamma$ by reversing the signs of the edges in the edge boundary of $U$, the set
of edges joining a vertex in $U$ to one not in $U$. The signed graph $\Gamma_U$ is said to
be \emph{switching equivalent} to $\Gamma$. When we talk about the eigenvalues and spectral
radius of a signed graph, we are actually referring to those of the corresponding signed adjacency matrix.
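To make the definitions concrete, the Python sketch below (an illustration only; the helper \texttt{signed\_spectral\_radius} is ours) builds the signed adjacency matrix determined by a signature and computes its spectral radius; in accordance with the result of \cite{Belardo-Cioaba-Koolen-Wang2018} quoted below, the value never exceeds $\rho(G)$.
\begin{verbatim}
import random
import networkx as nx
import numpy as np

def signed_spectral_radius(G, sigma):
    # sigma maps each edge, stored as a frozenset {u, v}, to +1 or -1
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    A = np.zeros((len(nodes), len(nodes)))
    for u, v in G.edges():
        s = sigma[frozenset((u, v))]
        A[idx[u], idx[v]] = s
        A[idx[v], idx[u]] = s
    return float(np.max(np.abs(np.linalg.eigvalsh(A))))

G = nx.petersen_graph()                                  # 3-regular, rho(G) = 3
sigma = {frozenset(e): random.choice((1, -1)) for e in G.edges()}
print(signed_spectral_radius(G, sigma))                  # always <= 3
all_plus = {frozenset(e): 1 for e in G.edges()}
print(signed_spectral_radius(G, all_plus))               # all-positive signature gives 3
\end{verbatim}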
It was proved in \cite{Belardo-Cioaba-Koolen-Wang2018} that for a signed graph $\Gamma = (G, \sigma)$,
the spectral radius $\rho(G,\sigma)$ of $\Gamma$ is at most that of $G$. So, we know that, up to
switching equivalence, the signature leading to the maximal spectral radius is the all-positive
one. A natural question is to identify which signature leads to the minimum spectral radius.
This problem has important connections and consequences in the theory of expander
graphs \cite{Bilu-Linial2006}.
Bilu and Linial \cite{Bilu-Linial2006} proved that every regular graph has a signature
with small spectral radius.
\begin{theorem}[\cite{Bilu-Linial2006}]
Every connected $d$-regular graph has a signature with spectral radius at most
$c \sqrt{d\cdot (\log d)^3}$, where $c > 0$ is some absolute constant.
\end{theorem}
Furthermore, they posed the following conjecture.
\begin{conjecture}[Bilu-Linial \cite{Bilu-Linial2006}]\label{conj:BiluLinial}
Every connected $d$-regular graph $G$ has a signature $\sigma$ with spectral
radius at most $2\sqrt{d-1}$.
\end{conjecture}
If true, this conjecture would provide a way to construct, or at least show the existence of, an
infinite family of Ramanujan graphs, that is, connected $d$-regular graphs with
$\max\{|\lambda_2|, |\lambda_n|\} \leq 2\sqrt{d-1}$. In 2015, Marcus, Spielman and
Srivastava \cite{Marcus-Spielman-Srivastava2015} made significant progress towards
solving the Bilu-Linial conjecture.
\begin{theorem}[\cite{Marcus-Spielman-Srivastava2015}]
Let $G$ be a connected $d$-regular graph. Then there exists a signature $\sigma$ of
$G$ such that the largest eigenvalue of $\Gamma = (G, \sigma)$ is at most $2\sqrt{d-1}$.
\end{theorem}
\begin{remark}
In general, the largest eigenvalue $\lambda_1(\Gamma)$ of $\Gamma$ may not be equal to its
spectral radius because the Perron-Frobenius Theorem is valid only for nonnegative matrices.
To put it simply, the Bilu-Linial conjecture aims to limit all eigenvalues of $\Gamma = (G, \sigma)$
between $-2\sqrt{d-1}$ and $2\sqrt{d-1}$, while the Marcus-Spielman-Srivastava result
shows the existence of a signature where all the eigenvalues of $\Gamma = (G, \sigma)$ are
at most $2\sqrt{d-1}$.
\end{remark}
If the regularity assumption on $G$ is dropped, Gregory\footnote{The original link to Gregory's
work is not available. Here we cite the description from \cite{Belardo-Cioaba-Koolen-Wang2018}.}
considered the following variant of Conjecture
\ref{conj:BiluLinial}.
\begin{conjecture}[Gregory, c.f.\,\cite{Belardo-Cioaba-Koolen-Wang2018}] \label{conj:Gregory}
Let $G$ be a nontrivial graph with maximum degree $\Delta$. Then there exists a signature
$\sigma$ such that $\rho(G,\sigma) < 2\sqrt{\Delta - 1}$.
\end{conjecture}
Furthermore, Belardo, Cioab\u{a}, Koolen and Wang \cite{Belardo-Cioaba-Koolen-Wang2018} posed the
following question whose affirmative answer would imply Conjecture \ref{conj:Gregory}.
\begin{problem}[Belardo-Cioab\u{a}-Koolen-Wang \cite{Belardo-Cioaba-Koolen-Wang2018}]
Let $G$ be a connected graph. Is there a signature $\sigma$ such that $\rho(G,\sigma) < 2\sqrt{\rho(G) - 1}$\,\rm{?}
\end{problem}
\section{Isoperimetric problem in hypercube}
The $d$-dimensional hypercube, denoted by $Q_d$, is a $d$-regular graph on $2^d$ vertices, with each
vertex corresponding to a binary string of length $d$. The adjacency between two vertices in $Q_d$
occurs if and only if they differ in exactly one bit position. Thus, each vertex is connected to
$d$ other vertices, which correspond to the vertices obtained by flipping each of its bits in turn.
In \cite{Bollobas-Lee-Letzter2018}, Bollob\'as, Lee and Letzter studied the maximum eigenvalue of
subgraphs of the hypercube $Q_d$. To be precise, they gave a partial answer to the following question
posed by Fink (c.f.\,\cite{Bollobas-Lee-Letzter2018}) and by Friedman and Tillich \cite{Friedman-Tillich2005}.
\begin{problem}\label{prob:isoperimetric-problem}
What is the maximum of the largest eigenvalue of $Q_d[U]$, where $|U| = m$ and $1\leq m\leq 2^d$\,\rm{?}
\end{problem}
Bounding the maximum eigenvalue of $Q_d[U]$ is closely related to the size of the edge
boundary of $U$. Indeed, since the largest eigenvalue of a graph is at least its average
degree, for $Q_d[U]$, an induced subgraph of $Q_d$, we have $e(Q_d[U]) \leq \lambda(Q_d[U]) \cdot |U|/2$.
Since $Q_d$ is $d$-regular, the edge boundary of $U$ is at least $(d - \lambda(Q_d[U])) \cdot |U|$.
Hence, if we denote the maximum of the largest eigenvalue of $Q_d[U]$ with $|U|=m$ by $\lambda(m)$, then
for every set of $m$ vertices of the hypercube $Q_d$, the edge boundary has size at
least $(d - \lambda(m)) m$. In this sense, Problem \ref{prob:isoperimetric-problem}
can be viewed as a variant of the isoperimetric problem in hypercube.
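The relation between the largest eigenvalue of $Q_d[U]$ and the edge boundary of $U$ can be checked on a small example; the Python sketch below (an illustration only, with the hand-picked set $U$ being a subcube) computes both sides of the bound for $d=4$.
\begin{verbatim}
import itertools
import numpy as np

d = 4
verts = list(itertools.product((0, 1), repeat=d))
idx = {v: i for i, v in enumerate(verts)}

def neighbors(v):
    # flip each coordinate in turn
    return [v[:i] + (1 - v[i],) + v[i + 1:] for i in range(d)]

A = np.zeros((len(verts), len(verts)))
for v in verts:
    for w in neighbors(v):
        A[idx[v], idx[w]] = 1.0

U = [v for v in verts if v[0] == 0]        # a copy of Q_{d-1} on half the vertices
Uset = set(U)
rows = [idx[v] for v in U]
lam = np.linalg.eigvalsh(A[np.ix_(rows, rows)])[-1]
boundary = sum(1 for v in U for w in neighbors(v) if w not in Uset)
print(lam, boundary, (d - lam) * len(U))   # here the bound (d - lam)|U| = 8 is attained
\end{verbatim}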
In \cite{Bollobas-Lee-Letzter2018}, several theorems were proved regarding this problem,
but there are still many open problems.
\section{Minimum spectral radius of $K_{r+1}$-saturated graphs}
A graph $G$ is \emph{$F$-saturated} if $G$ does not contain $F$ as a subgraph but the
addition of any new edge to $G$ creates at least one copy of $F$. In other words, $G$
is $F$-saturated if and only if it is a maximal $F$-free graph. The maximum possible
number of edges in a graph $G$ that is $F$-saturated is known as the Tur\'an number of $F$.
The study of Tur\'an numbers for various families of graphs is a cornerstone of extremal combinatorics.
On the other hand, the minimum number of edges in an $F$-saturated graph with $n$ vertices,
denoted by sat$(n,F)$, is called the \emph{saturation number} of $F$. Saturation numbers were
first studied by Erd\H os, Hajnal and Moon \cite{Erdos1964}. They determined the saturation number
of $K_{r+1}$ and characterized the unique extremal graph, which is $S_{n,r-1}$. For a thorough
account of the results known about saturation numbers, we refer the reader to a nice dynamic
survey \cite{Currie-Faudree2021}.
Similarly to the spectral Tur\'an-type problems for cliques, one can naturally ask whether the spectral
radius version of the Erd\H os-Hajnal-Moon theorem is true. In 2020, Kim, Kim, Kostochka and O \cite{Kim-Kostochka2020}
made the first progress on this problem and posed the following conjecture.
\begin{conjecture}[Kim-Kim-Kostochka-O \cite{Kim-Kostochka2020}]
Let $G$ be a $K_{r+1}$-saturated graph on $n$ vertices. Then $\lambda(G) \geq \lambda (S_{n, r-1})$, with
equality if and only if $G \cong S_{n, r-1}$.
\end{conjecture}
For the cases $r=2$ and $r=3$, the conjecture was confirmed in \cite{Kim-Kostochka2020} and \cite{Kim-Kostochka-O-Shi-Wang2023}
respectively. Generally, it would be interesting to consider the following problem.
\begin{problem}
Given a graph $F$, what is the minimum spectral radius of an $F$-saturated graph on $n$ vertices\,{\rm ?}
\end{problem}
\section{Brouwer's Laplacian spectrum conjecture}
For a graph $G$ on $n$ vertices and $1\leq k \leq n$, let $S_k(G)$ denote the sum of the $k$
largest Laplacian eigenvalues of $G$, that is,
\[
S_k(G) := \sum_{i=1}^k \mu_i(G).
\]
As a variation of the Grone--Merris theorem \cite{Bai2011}, Brouwer \cite{Brouwer-Haemers2012}
proposed the following conjecture.
\begin{conjecture}[Brouwer's conjecture \cite{Brouwer-Haemers2012}]\label{conj:Brouwer}
Let $G$ be a graph of order $n$. Then
\[
S_k(G) \leq e(G) + \binom{k+1}{2},~~k=1,2,\ldots,n.
\]
\end{conjecture}
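A numerical check of the inequality in Conjecture \ref{conj:Brouwer} for a given graph is straightforward; the Python sketch below (an illustration only, using a small numerical tolerance) tests it for all $k=1,\ldots,n$ at once.
\begin{verbatim}
import networkx as nx
import numpy as np

def satisfies_brouwer(G):
    # check S_k(G) <= e(G) + binom(k+1, 2) for every k = 1, ..., n
    L = nx.laplacian_matrix(G).toarray().astype(float)
    mu = sorted(np.linalg.eigvalsh(L), reverse=True)
    m, n = G.number_of_edges(), G.number_of_nodes()
    return all(sum(mu[:k]) <= m + k * (k + 1) / 2 + 1e-8
               for k in range(1, n + 1))

print(satisfies_brouwer(nx.petersen_graph()))
print(satisfies_brouwer(nx.complete_graph(6)))
print(satisfies_brouwer(nx.gnp_random_graph(12, 0.4, seed=1)))
\end{verbatim}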
The progress made on Brouwer's conjecture is worth mentioning. Brouwer himself used computers to verify
the conjecture for all graphs with at most $10$ vertices \cite{Brouwer-Haemers2012}. For $k = 1$, the
conjecture follows from the well-known inequality $\mu_1(G) \leq n$. Haemers, Mohammadian and
Tayfeh-Rezaie \cite{Haemers-Mohammadian-Tayfeh-Rezaie2010} showed that the conjecture is true for
all graphs when $k=2$, and recently the equality was characterized by Li and Guo \cite{Li-Guo2022}.
The cases $k = n-1$ and $k = n$ are straightforward due to the fact that
$S_{n-1}(G) = S_n(G) = 2 e(G) \leq e(G) + \binom{n}{2}$.
Chen \cite{Chen2019} showed that if Conjecture \ref{conj:Brouwer} holds for all graphs when $k = p$,
then it holds for all graphs when $k = n-p-1$ as well, where $1 \leq p \leq (n-1)/2$. Thus,
Conjecture \ref{conj:Brouwer} also holds for all graphs when $k = n-2$ and $k = n-3$.
Rocha and Trevisan \cite{Rocha-Trevisan2014} proved that the conjecture is true for all
$k$ with $1 \leq k \leq \lfloor g/5 \rfloor$, where $g$ is the girth of the graph $G$
(the length of the smallest cycle in $G$). They also showed that the conjecture is true
for a connected graph $G$ having maximum degree $\Delta$, $p$ pendant vertices and $c$
cycles with $\Delta \geq c + p + 4$.
In addition, it has been proved that Brouwer's conjecture is true for several classes of
graphs (for all $k$) such as trees \cite{Haemers-Mohammadian-Tayfeh-Rezaie2010}, unicyclic
graphs \cite{Du-Zhou2012,Wang-Huang-Liu2012}, bicyclic graphs \cite{Du-Zhou2012}, threshold
graphs \cite{Haemers-Mohammadian-Tayfeh-Rezaie2010}, regular graphs \cite{Mayank2010} and
split graphs \cite{Mayank2010}. For more progress on Brouwer's Conjecture, we refer
to \cite{Blinovsky-Speranca2022,Chen2018,Cooper2021,Ganie-Pirzada2016,Ganie-Alghamdi-Pirzada2016,Ganie-Pirzada-Rather-Trevisan2020,Rocha2020}
and the references therein. However, Conjecture \ref{conj:Brouwer} remains open at large.
Recently, Li and Guo \cite{Li-Guo2022} proposed the following full Brouwer's conjecture.
Before continuing, we introduce some notation. For $1\leq k\leq n-1$, let $G_{k,r,s}$ $(r\geq 1, s\geq 0)$
be a graph of order $n=k+r+s$ consisting of a clique of size $k$ and two independent sets $\overline{K}_r$
and $\overline{K}_s$, where each vertex of $K_k$ is adjacent to all vertices in $\overline{K}_r$,
and for each vertex $v_i\in V(\overline{K}_s) = \{v_1,v_2,\ldots,v_s\}$, $N(v_i)\subsetneq V(K_k)$ ($i=1,2,\ldots,s$)
and $N(v_{i+1}) \subseteq N(v_i)$ ($i=1,2,\ldots,s-1$). Obviously, if $G_{k,r,s}$ is connected, then for
$k=1$, $G_{1,r,s}$ is the star $K_{1,n-1}$; for $k=n-1$, $G_{n-1,r,s}$ is the complete graph $K_n$.
\begin{conjecture}[Li-Guo \cite{Li-Guo2022}, The full Brouwer's conjecture] \label{conj:full-Brouwer}
Let $G$ be a graph of order $n$. Then
\[
S_k(G) \leq e(G) + \binom{k+1}{2},~~k=1,2,\ldots,n,
\]
with equality if and only if $G\cong G_{k,r,s}$ $(r\geq 1$, $s\geq 0)$.
\end{conjecture}
In \cite{Li-Guo2022}, the authors confirmed Conjecture \ref{conj:full-Brouwer} for $k\in \{1,2,n-3,n-2,n-1\}$.
\section{The Spectral Gap conjecture}
Given a graph $G$, the \emph{spectral gap} of $G$ is defined as $\lambda_1(G)-\lambda_2(G)$.
Obviously, if $G$ is connected, then $\lambda_1(G) - \lambda_2(G) > 0$.
The spectral gap is primarily investigated for the class of regular graphs, as it is established that
regular graphs with large spectral gap possess high connectivity properties. This property renders them
significant in numerous branches of theoretical computer science (see \cite[pp.\,392--394]{Cvetkovic-Doob-Sachs1995}).
In the opposite direction, Stani\'c \cite{Stanic2013} suggested studying graphs with small spectral gap,
which can be viewed as an adjacency matrix version of Aldous and Fill's problem about maximizing
the relaxation time of a random walk on a connected graph (see \cite{Aksoy-Chung-Tait-Tobin2018}
and \cite{Aldous-Fill2002} for details). In particular, Stani\'c conjectured that the minimum
spectral gap is attained for the double kite graphs. A \emph{double kite} graph $DK(r, s)$ can
be constructed by taking an $(s+2)$-vertex path $P_{s+2}$, two copies of the $r$-vertex complete
graph $K_r$, and identifying one terminal vertex of $P_{s+2}$ with a vertex of one copy of $K_r$
and the other terminal vertex with a vertex of the other copy of $K_r$
(see Fig.\,\ref{fig:double-kite-graph} for an illustration).
\begin{figure}
\caption{The double kite graph $DK(8,5)$}
\label{fig:double-kite-graph}
\end{figure}
\begin{conjecture}[Stani\'c \cite{Stanic2013}]
The spectral gap is minimized for some double kite graph over all connected graphs
with given number of vertices.
\end{conjecture}
The conjecture has been confirmed by Stani\'c \cite{Stanic2013} for connected graphs
with up to $10$ vertices.
The spectral gap and the algebraic connectivity of graphs exhibit certain similarities.
Specifically, for regular graphs, the algebraic connectivity coincides with the spectral
gap, and connected regular graphs of degree $3$ and $4$ with minimum algebraic
connectivity (and therefore, minimum spectral gap) are determined in \cite{Brand-Guiduli-Imrich2007}
and \cite{Abdi-Ghorbani2023} respectively.
If we restrict our study to trees, Jovovi\'c, Koledin and Stani\'c \cite{Jovovic-Koledin-Stanic2018}
conjectured that the spectral gap is minimized for some double comet among all trees.
The \emph{double comet} $C(k, \ell)$ is a tree obtained by attaching $k$ pendant
vertices at one end of the path $P_{\ell}$ and $k$ pendant vertices at the other
end of the same path.
\begin{conjecture}[Jovovi\'c-Koledin-Stani\'c \cite{Jovovic-Koledin-Stanic2018}]\label{conj:spectral-grap-trees}
Among all trees of order $n$, the spectral gap is minimized for some double comet.
\end{conjecture}
The conjecture has been confirmed by computer search for trees with at most $20$
vertices \cite{Jovovic-Koledin-Stanic2018}. In each case there is a unique tree
achieving the minimum spectral gap, and the corresponding trees
are listed in the following table.
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$n$ & $n \leq 8$ & $9 \leq n \leq 11$ & $12 \leq n \leq 15$ & $16 \leq n \leq 20$ \\
\hline
The unique tree & $P_n$ & $C(2, n-4)$ & $C(3, n-6)$ & $C(4, n-8)$ \\
\hline
\end{tabular}
\end{center}
\section{A lower bound on graph energy}
The graph energy is a graph-spectrum-based quantity, introduced by Ivan Gutman in the 1970s.
For a graph $G$ on $n$ vertices, the \emph{energy} $\mathcal{E}(G)$ of $G$ is defined
to be the sum of the absolute values of the eigenvalues of $A(G)$, that is,
\[
\mathcal{E}(G) := \sum_{i=1}^n |\lambda_i(G)|.
\]
This graph invariant is very closely connected to a chemical quantity known as the total $\pi$-electron
energy of conjugated hydrocarbon molecules. For results on graph energy, we refer the reader
to \cite{Li-Shi-Gutman2012}, which is a monograph summarizing the main theorems, applications
and methods regarding the adjacency energy of a graph.
The following conjecture comes from the Written on the Wall (c.f.\,\cite{Aouchiche-Hansen2010}).
\begin{conjecture}[\cite{Aouchiche-Hansen2010}] \label{conj:energy}
Let $G$ be a graph on $n$ vertices with independence number $\alpha$. Then
\[
\sum_{\lambda_i(G)>0} \lambda_i(G) \geq n - \alpha.
\]
\end{conjecture}
Note that if Conjecture \ref{conj:energy} is proven to be true, it would provide us with a concise
lower bound on the energy of graph $G$, as the left-hand side of the above inequality is precisely
equal to $\mathcal{E}(G)/2$. We utilized computer computations to verify Conjecture \ref{conj:energy}
for all graphs having at most $10$ vertices, and we did not find any counterexamples.
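A check of this kind can be carried out along the following lines; the Python sketch below (an illustration only, with the independence number obtained from the maximal cliques of the complement) compares the sum of the positive eigenvalues with $n-\alpha$.
\begin{verbatim}
import networkx as nx
import numpy as np

def energy_vs_independence(G):
    ev = np.linalg.eigvalsh(nx.adjacency_matrix(G).toarray().astype(float))
    pos_sum = float(ev[ev > 1e-9].sum())           # equals E(G)/2
    alpha = max(len(c) for c in nx.find_cliques(nx.complement(G)))
    return pos_sum, G.number_of_nodes() - alpha

print(energy_vs_independence(nx.petersen_graph()))  # (8.0, 6)
print(energy_vs_independence(nx.cycle_graph(7)))
\end{verbatim}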
The well-known inertia bound \cite[Lemma 9.6.3]{Godsil-Royle2001} due to Cvetkovi\'c states that
\[
\alpha (G) \leq \min\{n - n^+, n - n^-\},
\]
where $\alpha(G)$ is the independence number of $G$, $n^+$ and $n^-$ are the numbers (counting multiplicities)
of positive and negative eigenvalues of $A(G)$ respectively. Hence, if Conjecture \ref{conj:energy} is true,
we can promptly derive that
\begin{equation}\label{eq:energe-inertia}
\sum_{\lambda_i(G)>0} \lambda_i(G) \geq \max\{n^+, n^-\},
\end{equation}
which is also a conjecture in \cite{Aouchiche-Hansen2010}. On the other hand, Elphick, Farber, Goldberg
and Wocjan \cite[Lemma 5]{EFGW16} proved that
\[
s^+(G) \geq \frac{\mathcal{E}(G)^2}{4n^+} \quad \text{and} \quad
s^-(G) \geq \frac{\mathcal{E}(G)^2}{4n^-}.
\]
Thus, Elphick pointed out to us that if Conjecture \ref{conj:energy} holds true, it would imply $s^+(G) \geq n^+$
and $s^-(G) \geq n^-$ by \eqref{eq:energe-inertia}, a slightly weaker yet analogous statement
compared to Conjecture \ref{conj:extension-Hong}.
\section{Maximal $\lambda_1 + \lambda_n$ of $K_{r+1}$-free graphs}
Erd\H os put forth a conjecture that any triangle-free graph $G$ on $n$ vertices must contain a set of $n/2$
vertices that span at most $n^2/50$ edges, which is one of his favourite conjectures \cite{Erdos1976,Erdos1997}.
Significant advancements have been made in various directions regarding this conjecture
\cite{Bedenknecht2019,Erdos1994,Keevash-Sudakov2006,Norin-Yepremyan2015}. Most recently, Razborov \cite{Razborov2022}
proved that every triangle-free graph on $n$ vertices has an induced subgraph on $n/2$ vertices with
at most $(27/1024)n^2$ edges.
A problem with similar motivation is to determine $D_2(G)$, the minimum number of edges which have to be
removed to make $G$ bipartite, for a triangle-free graph $G$ on $n$ vertices. A long-standing conjecture
of Erd\H os is that at most $n^2/25$ edges need to be deleted \cite{Erdos1976}. This conjecture has been
studied by several researchers \cite{Alon1996,Erdos-Faudree1988,Erdos-Gyori1992,Shearer1992}, and the
most recent result was obtained by Balogh, Clemen and Lidick\'y \cite{Balogh-Clemen2021}, who proved
that $D_2(G) \leq n^2/23.5$.
According to the Perron-Frobenius theorem, we know that $\lambda_1(G) \geq -\lambda_n(G)$. If $G$ is connected,
then equality holds if and only if $G$ is bipartite. Hence, the difference between $\lambda_1$ and $-\lambda_n$
can be viewed as a measure of how far a graph is from being bipartite. On the other hand, Brandt \cite{Brandt1998}
found a surprising connection between these two conjectures of Erd\H os and the eigenvalues of regular graphs $G$.
It was proved that
\[
\frac{\lambda_1(G) + \lambda_n(G)}{4} \cdot n \leq D_2(G).
\]
Brandt \cite{Brandt1998} also conjectured that
\[
\lambda_1(G) + \lambda_n(G) \leq \frac{4}{25} n
\]
for regular triangle-free graphs $G$ on $n$ vertices. If either of the two conjectures of Erd\H os
were true, it would imply Brandt's conjecture (see \cite{Balogh-Clemen2022} for details).
Towards Brandt's conjecture, it was proved that $\lambda_1(G) + \lambda_n(G) \leq (3-2\sqrt{2}) n$
for regular triangle-free graphs \cite{Brandt1998}, which was shown to hold also in the non-regular
setting by Csikv\'ari \cite{Csikvari2022}. Obviously, the quantity $\lambda_1(G) + \lambda_n(G)$
coincides with the smallest signless Laplacian eigenvalue of $G$ if $G$ is regular. Very recently,
Balogh, Clemen, Lidick\'y, Norin and Volec \cite{Balogh-Clemen2022} proved that for a triangle-free
$n$-vertex graph $G$, the smallest signless Laplacian eigenvalue of $G$ is at most $15n/94$,
which confirms Brandt's conjecture in strong form.
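For a concrete triangle-free example one can compare the smallest signless Laplacian eigenvalue with the bound $15n/94$; the Python sketch below (an illustration only) does so for the Petersen graph, where regularity also makes this eigenvalue equal to $\lambda_1+\lambda_n$.
\begin{verbatim}
import networkx as nx
import numpy as np

G = nx.petersen_graph()                      # triangle-free and 3-regular
A = nx.adjacency_matrix(G).toarray().astype(float)
Q = np.diag(A.sum(axis=1)) + A               # signless Laplacian D + A
qmin = np.linalg.eigvalsh(Q)[0]
adj = np.linalg.eigvalsh(A)
print(qmin, adj[-1] + adj[0])                # both equal 1 for this regular graph
print(15 * G.number_of_nodes() / 94)         # the bound, about 1.596
\end{verbatim}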
In general, the following problem is of interest.
\begin{problem}[Brandt \cite{Brandt1998}]
Let $r\geq 2$. How large can $\lambda_1(G) + \lambda_n(G)$ be if $G$ is a $K_{r+1}$-free graph of order $n$\,\rm{?}
\end{problem}
One can also consider a similar problem for the smallest signless Laplacian eigenvalue of graphs, see
\cite{Lima-Nikiforov-Oliveira2016} and \cite{Oboudi2022} for details.
\section{Problems on other adjacency eigenvalues of graphs}
In this section, our attention will be directed towards the adjacency eigenvalues of graphs,
excluding the largest and smallest eigenvalues.
\subsection{The third and fourth eigenvalues of graphs}
Let $G$ be a connected graph on $n$ vertices. In 1989, Powers \cite{Powers1989} presented
an upper bound for $\lambda_i(G)$ ($1\leq i\leq n/2$) of $G$, just in terms of the order
of $G$, i.e.,
\begin{equation}\label{eq:lambda-i}
\lambda_i(G) \leq \left\lfloor \frac{n}{i} \right\rfloor.
\end{equation}
The inequality above is notable for its simplicity and elegance. The validity of
Inequality \eqref{eq:lambda-i} for $i\leq 2$ is clear (for $\lambda_2$, see \cite{Hong1988}),
but unfortunately, it does not hold for $i\geq 5$ (c.f.\,\cite{Nikiforov2015}).
The validity of Powers' upper bound for $i\in\{3, 4\}$ remained open for a long time.
\begin{problem}[Powers \cite{Powers1989}] \label{prob:Powers}
Let $G$ be a graph of order $n$. Is it true that
\[
\lambda_i(G) \leq \left\lfloor \frac{n}{i} \right\rfloor
\]
for $i=3,4$\,\rm{?}
\end{problem}
Recently, Linz \cite{Linz2023} provided a counterexample to Problem \ref{prob:Powers} for
$i=4$ by constructing a class of graphs with $\lambda_4 > n/4$. Nevertheless, it is still
worth exploring the best possible upper bound for Problem \ref{prob:Powers}, as well as
considering the general question posed by Hong \cite{H93}.
\begin{problem}[Hong \cite{H93}]
Find the best possible lower and upper bounds for the $i$-th eigenvalue of graphs with $n$ vertices.
\end{problem}
Let us remark that Hong's problem is related to areas of combinatorics beyond spectral
graph theory, such as the existence of symmetric Hadamard matrices and Ramsey theory.
For progress on Hong's problem we refer the reader to \cite{Nikiforov2015}.
\subsection{HL-index Conjecture}
Fowler and Pisanski \cite{Fowler-Pisanski2010-1,Fowler-Pisanski2010-2} introduced the notion of the
HL-index of a graph that is related to the HOMO--LUMO separation studied in theoretical chemistry.
This is the gap between the Highest Occupied Molecular Orbital (HOMO) and Lowest Unoccupied Molecular
Orbital (LUMO). Let $G$ be a graph of order $n$. The \emph{HL-index} $R(G)$ of $G$ is defined as
$R(G)=\max\{|\lambda_H(G)|, |\lambda_L(G)|\}$, where
\[
H=\Big\lfloor\frac{n+1}{2}\Big\rfloor,~~
L=\Big\lceil\frac{n+1}{2}\Big\rceil.
\]
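The HL-index is easy to evaluate numerically; the Python sketch below (an illustration only; the helper \texttt{hl\_index} is ours) computes $R(G)$, and for the Heawood graph mentioned below it returns the extremal value $\sqrt{2}$.
\begin{verbatim}
import networkx as nx
import numpy as np

def hl_index(G):
    n = G.number_of_nodes()
    ev = sorted(np.linalg.eigvalsh(nx.adjacency_matrix(G).toarray().astype(float)),
                reverse=True)                   # lambda_1 >= ... >= lambda_n
    H = (n + 1) // 2                            # floor((n+1)/2)
    L = (n + 2) // 2                            # ceil((n+1)/2)
    return max(abs(ev[H - 1]), abs(ev[L - 1]))

print(hl_index(nx.heawood_graph()))             # approximately sqrt(2) = 1.4142...
print(hl_index(nx.petersen_graph()))
\end{verbatim}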
Several bounds for this index have been proposed for some classes of graphs in
\cite{Clemente-Cornaro2016,Fowler-Pisanski2010-1,Fowler-Pisanski2010-2,Jaklic-Fowler-Pisanski2012,Li-Li-Shi-Gutman2013}.
A graph $G$ is said to be \emph{subcubic} if its maximum degree is at most $3$. Fowler and
Pisanski \cite{Fowler-Pisanski2010-1,Fowler-Pisanski2010-2} proved that every subcubic graph
$G$ satisfies $0 \leq R(G) \leq 3$ and that if $G$ is bipartite, then $R(G) \leq \sqrt{3}$.
In 2015, Mohar \cite{Mohar2015} showed that $R(G) \leq \sqrt{2}$ for each subcubic graph $G$,
which improved the results of Fowler and Pisanski. This result is best possible since the
Heawood graph (the bipartite incidence graph of points and lines of the Fano plane) has
HL-index $\sqrt{2}$. In the same paper, Mohar also proposed the following conjecture.
\begin{conjecture}[Mohar \cite{Mohar2015}]\label{conj:HL-index}
If $G$ is a planar subcubic graph, then $R(G)\leq1$.
\end{conjecture}
The conjecture has been verified for planar bipartite graphs in \cite{Mohar2013}. Furthermore,
Mohar \cite{Mohar2016} confirmed Conjecture \ref{conj:HL-index} for all bipartite subcubic graphs
with a single exception of the Heawood graph (or a disjoint union of copies of it).
\section{Principal eigenvectors of graphs}
In the last section, we collect two conjectures on the principal eigenvectors of graphs.
For a connected graph $G$, the Perron-Frobenius theorem implies that $A(G)$ has a unique
unit positive eigenvector corresponding to $\lambda (G)$, which is usually called
the \emph{principal eigenvector} of $G$.
In 2010, Cioab\u{a} \cite{Cioaba2010} presented a necessary and sufficient condition for a
connected graph to be bipartite in terms of its principal eigenvector.
\begin{theorem}[\cite{Cioaba2010}] \label{thm:principaleigenvector}
Let $S$ be an independent set of a connected graph $G$. Then
\[
\sum_{v\in S} x_v^2 \leq \frac{1}{2}
\]
with equality if and only if $G$ is bipartite having $S$ as one color class.
\end{theorem}
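The statement of the theorem is easy to probe numerically; the Python sketch below (an illustration only; the helper names are ours) computes the principal-eigenvector mass $\sum_{v\in S}x_v^2$ of an independent set $S$ for a bipartite and a non-bipartite example.
\begin{verbatim}
import networkx as nx
import numpy as np

def mass_on_set(G, S):
    nodes = list(G.nodes())
    A = nx.adjacency_matrix(G, nodelist=nodes).toarray().astype(float)
    x = np.abs(np.linalg.eigh(A)[1][:, -1])     # unit principal eigenvector
    return float(sum(x[nodes.index(v)] ** 2 for v in S))

print(mass_on_set(nx.cycle_graph(6), [0, 2, 4]))   # a colour class of C_6: exactly 1/2
P = nx.petersen_graph()
S = nx.maximal_independent_set(P, seed=1)
print(len(S), mass_on_set(P, S))                   # strictly below 1/2
\end{verbatim}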
Strengthening Cioab\u{a}'s result, Gregory posed the following conjecture in 2010.
\begin{conjecture}[Gregory, c.f.\,\cite{Cioaba2021}]
Let $G$ be a connected graph with chromatic number $k\geq 2$ and $S$
be an independent set. Then
\[
\sum_{v\in S} x_v^2 \leq \frac{1}{2} - \frac{k-2}{\sqrt{(k-2)^2 + 4(k-1)(n-k+1)}}.
\]
\end{conjecture}
One can check that $S_{n, k-1}$ attains equality. Let $P_r\cdot K_s$ denote the graph
of order $(r+s-1)$ attained by identifying an end vertex of the path $P_r$ to any
vertex of the complete graph $K_s$. This graph $P_r\cdot K_s$ is called a \emph{kite graph}
or a \emph{lollipop graph}.
The following conjecture, which has been presented in different contexts, appears in
several papers \cite{Clark2022,Rucker-Gutman2002,Cioaba2021}.
\begin{conjecture}[R\"ucker-R\"ucker-Gutman \cite{Rucker-Gutman2002}]
Among all connected graphs on $n$ vertices, the graph $P_{n-3}\cdot K_4$
minimizes the $\ell^1$-norm $\|\bm{x}\|_1$ of the principal eigenvector.
\end{conjecture}
The conjecture has been verified to be true for connected graphs with at most
$10$ vertices using SageMath software.
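For a quick comparison one can evaluate $\|\bm{x}\|_1$ directly; the Python sketch below (an illustration only) does so for the kite graph $P_{n-3}\cdot K_4$, available in NetworkX as \texttt{lollipop\_graph}, and for a few other connected graphs of the same order.
\begin{verbatim}
import networkx as nx
import numpy as np

def l1_norm_principal(G):
    A = nx.adjacency_matrix(G).toarray().astype(float)
    x = np.abs(np.linalg.eigh(A)[1][:, -1])     # unit principal eigenvector
    return float(x.sum())

n = 8
print(l1_norm_principal(nx.lollipop_graph(4, n - 4)))   # the kite graph P_{n-3}.K_4
print(l1_norm_principal(nx.path_graph(n)))
print(l1_norm_principal(nx.star_graph(n - 1)))
print(l1_norm_principal(nx.complete_graph(n)))
\end{verbatim}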
\section*{Acknowledgment}
This paper is an invited paper of ORSC. The second author is indebted to ORSC for inviting him
to submit a paper to the Operations Research Transactions (ORT). We would like to thank
Clive Elphick for his useful comments on the connection between Conjecture \ref{conj:extension-Hong}
and Conjecture \ref{conj:energy}. We also thank Xiaofeng Gu for bringing to our attention a related
conjecture in \cite{Gu-Haemers2022} that connects to Conjecture \ref{conj:Hamilton},
and William Linz for sharing the reference \cite{Linz2023} with us.
\end{document}
\begin{document}
\title{
~~\\ Tur\'an numbers and batch codes\thanks{
~Research supported in part by the Hungarian Scientific Research
Fund, OTKA grant T-81493, moreover
by the European Union and Hungary, co-financed
by the European Social Fund through the project T\'AMOP-4.2.2.C-11/1/KONV-2012-0004 -- National Research Center
for Development and Market Introduction of Advanced Information and Communication
Technologies.}}
\author{Csilla Bujt\'as~$^1$
\qquad
Zsolt Tuza~$^{1,2}$\\
\normalsize $^1$~Department of Computer Science and Systems
Technology \\
\normalsize University of Pannonia \\
\normalsize Veszpr\'em, Hungary \\
\normalsize $^2$~Alfr\'ed R\'enyi Institute of Mathematics \\
\normalsize Hungarian Academy of Sciences \\
\normalsize Budapest, Hungary
}
\date{\footnotesize Latest update on \version}
\maketitle
\begin{abstract}
Combinatorial batch codes provide a tool for distributed
data storage, with the feature of keeping privacy during
information retrieval.
Recently, Balachandran and Bhattacharya observed that the problem
of constructing such uniform codes in an economic way can be
formulated as a Tur\'an-type question on hypergraphs.
Here we establish general lower and upper bounds for this
extremal problem, and also for its generalization where
the forbidden family consists of those $r$-uniform hypergraphs
$H$ which satisfy the condition $k\ge |E(H)|> |V(H)|+q$
(for $k>q+r$ and $q> -r$ fixed).
We also prove that, in the given range of parameters,
the considered Tur\'an function is asymptotically
equal to the one restricted to $|E(H)|=k$,
studied by Brown, Erd\H os and T.~S\'os.
Both families contain some $r$-partite members
--- often called the `degenerate case',
characterized by the equality $\lim_{n\to\infty} \ex(n,{\cal F})/n^r=0$
--- and therefore their exact order of growth is not known.
\bsk
\noindent {\bf Keywords:}
Tur\'an number, hypergraph, combinatorial batch code.
\nin \textbf{AMS 2000 Subject Classification:}
05D05,
05C65,
68R05.
\end{abstract}
\bsk
\section{Introduction}
In this paper we study a Tur\'an-type problem on uniform hypergraphs,
which is motivated by optimization of distributed data storage
enabling secure data retrieval under a certain protocol.
\subsection{Terminology}
\paragraph{Hypergraphs.} A \emph{hypergraph} $H$ is a set system with vertex set
$V(H)$ and edge set $E(H)$ where every edge $e \in E(H)$ is a
nonempty subset of $V(H)$. The number of its vertices and edges is
the \emph{order} and the \emph{size} of $H$, respectively.
A hypergraph $H$ is called \emph{$r$-uniform}
if each edge of it contains precisely $r$ vertices.
For short, sometimes we shall use the term \emph{$r$-graph}
for $r$-uniform hypergraphs.
Graphs without loops
are just 2-uniform hypergraphs. A hypergraph $H_1$ is a
\emph{subhypergraph} of $H_2$ if $V(H_1) \subseteq V(H_2)$ and
$E(H_1) \subseteq E(H_2)$ holds, moreover we say that $H_1$ is an
\emph{induced subhypergraph} of $H_2$ if also $E(H_1)=\{e:\, e
\subseteq V(H_1) \enskip \wedge \enskip e \in E(H_2)\}$ holds. In
this paper graphs and hypergraphs are meant to be simple, that is
without loops and multiple edges, unless stated otherwise
explicitly.
\bsk
\paragraph{Tur\'an numbers.} Given hypergraphs $H$ and $F$, $H$ is said to be \emph{$F$-free} if $H$
has no subhypergraph isomorphic to $F$. Similarly, if ${\cal F}$ is a
family of hypergraphs, $H$ is ${\cal F}$-free if it contains no
subhypergraph isomorphic to any member of ${\cal F}$.
In the problems considered here, the family ${\cal F}$ contains
$r$-graphs for a fixed $r \ge 2$ and the property to be
${\cal F}$-free is considered only for $r$-graphs.
In a Tur\'an-type (hypergraph) problem there is a given collection ${\cal F}$ of $r$-uniform
hypergraphs and
the main goal is to determine or to estimate the
\emph{Tur\'an number}
$\ex (n, {\cal F})$ which is the maximum number of edges in an
${\cal F}$-free $r$-uniform hypergraph on $n$ vertices.
In 1941 Tur\'an \cite{Tu} determined
$\ex (n, K_t)$, that is the maximum size of a graph $G$ of order
$n$ such that $G$ contains no complete subgraph on $t$ vertices.
(The special case $t=3$ was already solved in 1907 by Mantel
\cite{Ma}.)
Since then many famous results have been proved (see the
recent surveys \cite{FurS, Keev}), but many problems especially
among the ones concerning hypergraphs seem notoriously hard.
\bsk
\paragraph{Combinatorial batch codes.} The notion of batch code
was introduced by Ishai, Kushilevitz, Ostrovsky
and Sahai \cite{YKOS} to represent the distributed storage of
$m$ items of data on $n$ servers such that any at most $k$
data items are recoverable by submitting at most
$t$ queries to each server.\footnote{In the main part of the literature
notations $n$ and $m$ are used in reversed role. Here the
usual notation of hypergraph Tur\'an problems is applied for CBCs (as done also in \cite{BB}).}
In its combinatorial version \cite{PSW},
`encoding' and `decoding' mean simply that the data items are stored on and read from the
servers. Its basic case, when the parameter $t$
equals 1, can be defined as follows.\footnote{In this
definition the vertices of the hypergraph represent the $n$ servers,
the edges represent the $m$ data items, and an edge contains exactly
those vertices which correspond to the servers storing the data items
represented by the edge. Parameters $k$ and $t=1$ express the
condition that every family of at most $k$ edges has a system of
distinct representatives. Applying Hall's Theorem we obtain the
definition in the form given here.}
\tmz \item
A \emph{combinatorial batch code (CBC-system)} with parameters $(m,k,n)$
is a multihypergraph $H$ of order $n$ and size $m$, such that the union
of any $i$ edges contains at least $i$ vertices for every $1 \le i \le k$.
For given parameters $r,k,n$, satisfying $r\ge 2$ and $r+1\le k \le n$,
let $m(n,r,k)$ denote the maximum number $m$ of edges such
that an $r$-uniform CBC-$(m,k,n)$-system exists.
\etmz
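For small instances the defining property can be tested by brute force. The Python sketch below (an illustration only; it is exponential in $m$ and the helper \texttt{is\_cbc} is ours) checks whether a given edge list forms a CBC with parameter $k$.
\begin{verbatim}
from itertools import combinations

def is_cbc(edges, k):
    # edges: list of frozensets; check that any i <= k of them cover >= i vertices
    for i in range(1, min(k, len(edges)) + 1):
        for E in combinations(edges, i):
            if len(frozenset().union(*E)) < i:
                return False
    return True

# the 4-cycle: m = 4 data items on n = 4 servers, k = 4
C4 = [frozenset(e) for e in ((0, 1), (1, 2), (2, 3), (3, 0))]
print(is_cbc(C4, k=4))          # True

# K_4 minus an edge: 5 edges covering only 4 vertices, so not a CBC for k = 5
K4e = [frozenset(e) for e in ((0, 1), (0, 2), (0, 3), (1, 2), (1, 3))]
print(is_cbc(K4e, k=5))         # False
\end{verbatim}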
Optimization problems on combinatorial batch codes (mainly for the non-uniform case
and under the condition $t=1$) were studied in
\cite{BRR,BKMS,k=4,miamano,cbc-ENDM,cbc-AADM,PSW}.
Recently,
Balachandran and Bhattacharya \cite{BB} formulated the problem of
determining the maximum size of $r$-uniform CBC-systems as a Tur\'an multihypergraph
problem. Clearly, an $r$-uniform multihypergraph $H$ is a CBC-system with parameter $k$ if
and only if
it has no subhypergraph of order $i-1$ and size
exactly $i$ for any $i$ with $r+1 \le i \le k$.
\bsk
\paragraph{A problem of Brown, Erd\H{o}s and T.\ S\'os.}
Brown, Erd\H{o}s and T.\ S\'os started to study the problems where, for
fixed integers $2\le r \le v$ and $k \ge 2$,
all $r$-graphs on $v$ vertices and with at least $k$ edges are
forbidden to occur as a subhypergraph of an $r$-graph
\cite{BES1}.\footnote{On graphs, the problem was first studied by
Dirac in \cite{Di}.}
The maximum
size of such an $r$-graph of order $n$ is denoted by
$f^{(r)}(n,v,k)-1$. A general lower bound on
$f^{(r)}(n,v,k)$ was proved in \cite{BES1} and later further famous results
were given for the cases $v\ge k$
(see, e.g., \cite{RSz, EFR, SS1, SS2, AS}).
In this paper, motivated by the optimization problem on uniform CBCs,
we will study a problem closely related to the case $v \le k$.
\bsk
\paragraph{Our problem setting.}
We shall consider Tur\'an-type problems for the following families of
forbidden subhypergraphs.
The upper index
`$(r )$' in the notation indicates that the family
consists of $r$-graphs.
\tmz
\item ${\cal H}^{(r)}(k,q)= \{H: \, |E(H)|-|V(H)| = q+1 \enskip \wedge \enskip |E(H)| \le k
\}$\\\\
To study ${\cal H}^{(r)}(k,q)$-free hypergraphs, we put the following restrictions on the parameters:
\tmz
\item[$\circ$] $r\ge 2$ \enskip (The problem would be trivial
for the 1-uniform case.)
\item[$\circ$] $k \ge q+r+1$ \enskip ($|E(H)| \le q+r$
would imply
$|V(H)| \le r-1$ and hence
${\cal H}^{(r)}(k,q)=\emptyset$.)
\item[$\circ$] $q \ge -r+1$ \enskip (Negative values can be
allowed for $q$. But if $q\le -r$, the family ${\cal H}^{(r)}(k,q)$
contains an $r$-graph with 1 edge and with at least $r$
vertices, and hence $\ex (n, {\cal H}^{(r)}(k,q))=0$ would follow.)
\etmz
\item ${\cal F}^{(r)}(k,q)= \{H: \, |E(H)|-|V(H)| = q+1 \enskip \wedge \enskip |E(H)|
= k\}$\\\\
In general, $r\ge 2$, \enskip $k \ge q+r+1$ and $k \ge 2$ are
assumed. Here we restrict ourselves to the cases with $q \ge
-r+1$.
Note that ${\cal F}^{(r)}(k,q)$ contains exactly those $r$-graphs which are
forbidden in the Brown-Erd\H{o}s-S\'os problem with $v=k-q-1$,
while ${\cal H}^{(r)}(k,q)= \cup_{i=r+q+1}^k
{\cal F}^{(r)}(i,q)$.
\etmz
Moreover, for ${\cal H}^{(r)}(k,q)$ and
${\cal F}^{(r)}(k,q)$,
the family of multihypergraphs with the same defining
property is denoted by ${\cal H}^{(r)}_M(k,q)$ and
${\cal F}^{(r)}_M(k,q)$, respectively.
When the Tur\'an number relates to the maximum size of a
multihypergraph, the lower index $M$ is used, as well. For
instance, $\ex_M(n, {\cal H}^{(r)}_M(k,q))$ denotes the maximum
number of edges in a multihypergraph such that every $i$ edges
cover at least $i-q-1$ vertices subject to $q+r+1 \le i \le k$.
Note that if $q=-r+1$, already the presence of edges with
multiplicity 2
is forbidden and consequently
$\ex_M(n,{\cal H}^{(r)}_M(k,-r+1))= \ex (n,{\cal H}^{(r)} (k,-r+1))$.
\bsk
The next facts follow immediately from the definitions:
\begin{eqnarray*}
m(n,r,k) &=& \ex_M(n,{\cal H}^{(r)}_M(k,0))\\
\ex (n,{\cal H}^{(r)} (k,q)) \le \ex (n,{\cal F}^{(r)} (k,q)) &=& f^{(r)}(n,k-q-1,k)-1
\le \ex_M (n,{\cal F}^{(r)}_M (k,q))\\
\ex(n,{\cal H}^{(r)}(k,q)) &\le & \ex_M(n,{\cal H}^{(r)}_M(k,q))
\end{eqnarray*}
\subsection{Preliminaries and our results}
The following general lower bound was proved
by Brown, Erd\H{o}s and T.\ S\'os \cite{BES1} for
${\cal F}^{(r)} (k,q)$-free $r$-graphs under the previously given
conditions ($r \ge 2$, $k \ge q+r+1$ and $k \ge 2$).
\begin{eqnarray}
f^{(r)}(n,k-q-1,k)= \Omega( n^{r-1 + \frac{q+r}{k-1}}).
\end{eqnarray}
Paterson, Stinson and Wei \cite{PSW} proved that if $q=0$ but all the
$r$-graphs from ${\cal H}^{(r)}(k,0)$ are forbidden, the lower
bound (1) still remains valid\footnote{For the cases with $k-\lceil \log
k \rceil \le r \le k-1$, Balachandran and Bhattacharya \cite{BB}
proved the better lower bound $m(n,r,k) = \Omega(n^r)$}:
\begin{eqnarray*}
m(n,r,k) \ge \ex(n, {\cal H}^{(r)}(k,0)) = \Omega(n^{r-1 + \frac{r}{k-1}}).
\end{eqnarray*}
We prove in Section 2 that the lower bound (1) can be extended also to our general
case:
\begin{eqnarray}
\ex(n, {\cal H}^{(r)}(k,q))=\Omega( n^{r-1 + \frac{q+r}{k-1}}).
\end{eqnarray}
\bsk
Concerning upper bounds, our main result proved in Section 4 says that
\begin{eqnarray}
\ex(n,{\cal H}^{(r)}(k,q)) = {\cal O} ( n^{r-1+ \frac{1}{\left \lfloor
\frac{k}{q+r+1}\right\rfloor}})
\end{eqnarray}
for every fixed $r\ge 2$ and $k\ge q+r+1$.
The basis of the proof is $r=2$ (graphs), for which the order of
the upper bound follows already from a theorem of Faudree and
Simonovits \cite{FS}; in fact they only forbid a subfamily
of ${\cal F}^{(2)}(k,q)$.
Under the stronger condition of excluding
${\cal H}^{(2)}(k,q)$ instead of ${\cal F}^{(2)}(k,q)$, however, a better and
explicit constant can be derived on the former; and this can in turn
be proved to be valid on the latter as well.
For this reason, we do not simply derive the result from the one
in \cite{FS} but prove the new upper bound in our Theorem \ref{graphs}.
The more general result for
hypergraphs is given in Theorem \ref{hgs}.
In Section 4 we also prove that the same upper bound (3)
is valid for multihypergraphs, in fact not only the orders of these upper bounds are equal but
also the relatively small leading coefficients are the same.
Section 5 is devoted to exploring the connection between the Tur\'an numbers of
${\cal H}^{(r)}(k,q)$ and ${\cal F}^{(r)}(k,q)$.
The general message there is that any later improvement in the estimates
concerning ${\cal H}^{(r)}(k,q)$ will automatically yield an improvement for
${\cal F}^{(r)}(k,q)$ as well, and vice versa.\footnote{Obviously, by this
principle, one should seek upper bounds for ${\cal H}^{(r)}(k,q)$
and lower bounds for ${\cal F}^{(r)}(k,q)$.}
By Theorem \ref{th-d}, if $r=2$ and the parameters $k$ and $q$ are fixed, the difference
is bounded by a constant $d(k,q)$:
$$f^{(2)}(n,k-q-1,k)-\ex(n,{\cal H}^{(2)}(k,q)) \le d(k,q).$$
For $r \ge 3$, by Theorem \ref{th-d-hg} we obtain the upper bound
$$f^{(r)}(n,k-q-1,k)-\ex(n,{\cal H}^{(r)}(k,q))={\cal O}(n^{r-1}),$$
which is somewhat weaker but still strong enough to prove that
the Tur\'an numbers $\ex(n,{\cal F}^{(r)}(k,q))$
and $\ex(n,{\cal H}^{(r)}(k,q))$ have the same order of growth.
On the other hand, the question of sharpness
of Theorem \ref{th-d-hg} remains open:
\bpm \label{diff-inf}
For the triplets\/ $(r,k,q)$ of integers in the range\/ $r\ge 2$,\/
$q \ge -r+1$,\/ and\/ $k \ge q+r+1$, determine the infimum
value\/ $s(r,k,q)$ of
constants\/ $s\ge 0$ such that
$$f^{(r)}(n,k-q-1,k)-\ex(n,{\cal H}^{(r)}(k,q))={\cal O}(n^s)$$
as\/ $n\to\infty$.
\epm
\bcj \label{diff-min}
The infimum\/ $s(r,k,q)$ in Problem \ref{diff-inf} is
attained as minimum.
\ecj
Our Theorem \ref{th-d} shows that $s(2,k,q)=0$ holds
for all pairs $(k,q)$ in the given range, and so
Conjecture \ref{diff-min} is confirmed for $r=2$.
\bsk
At the end of this introductory section, we return to uniform combinatorial batch codes.
The previous upper bound given for $m(n,r,k)$ in
\cite{PSW} was improved recently by Balachandran and
Bhattacharya
\cite{BB}:
\begin{eqnarray}
m(n,r,k) &=& {\cal O}( n^{r- \frac{1}{2^{r-1}}}) \qquad\qquad \mbox{if
\enskip
$3\le r\le k-1-\lceil \log k\rceil$}.
\end{eqnarray}
Our Corollary \ref{cbc} yields a further improvement in the
range $r \le k/2 -1$. Especially, we have
\begin{eqnarray}
m(n,r,k) = {\cal O}\left( n^{r-1+ \frac{1}{\left\lfloor
\frac{k}{r+1}\right\rfloor}}\right).
\end{eqnarray}
Comparing (4) and (5), the difference is significant already
for parameters satisfying $3 \le r = k/2 -1$. For these cases,
(4) gives exponent $r- 1/2^{r-1}$ whilst
our bound (5) yields exponent $r-1/2$.
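As a concrete instance of this comparison, take $r=3$ and $k=8$ (a pair satisfying $3 \le r = k/2-1$): the exponent in (4) is
$$r-\frac{1}{2^{r-1}} = 3-\frac14 = 2.75,$$
while the exponent in (5) is
$$r-1+\frac{1}{\left\lfloor \frac{k}{r+1}\right\rfloor} = 2+\frac{1}{\lfloor 8/4\rfloor} = 2.5.$$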
\section{Lower bound}
In this section we prove a lower bound on $\exsk$
whose order is the same as proved in \cite{BES1} for
$f(n,k-q-1,k)$; that is, for the case when only the subhypergraphs
on exactly $k-q-1$ vertices and with $k$ edges are forbidden.
\begin{Theorem} \label{lower}
For all fixed triplets of integers\/ $r,k,q$
with\/ $r \ge 2$,\/ $q \ge -r+1$ and\/ $k \ge r+q+1$ we have
$$
\exsk = \Omega(n^{r-1+\frac{q+r}{k-1}})
= \Omega(n^{\frac{kr-k+q+1}{k-1}}).
$$
\end{Theorem}
\pf
We apply the probabilistic method. Our proof technique is similar
to those in \cite{BES1} and \cite{PSW}.
We let $p=cn^{-1+\frac{q+r}{k-1}}$, where the constant
$c=c(r,k,q)$ will be chosen later.
Note that the lower bound $-r+1$ on $q$ implies
$pn\ge cn^{\frac{1}{k-1}}$, i.e.\
$pn$ tends to infinity with $n$ whenever $r,k,q$ are constants.
Let $\hnp$ be the random $r$-uniform hypergraph of order $n$
with edge probability $p$.
That is, $\hnp$ has $n$ vertices, and for each $r$-tuple $S$ of
vertices the probability that $S$ is an edge is $p$,
independently of (any decisions on) the other $r$-tuples.
We denote by $E$ the number of edges in $\hnp$, and by $F$
the number of forbidden subhypergraphs in $\hnp$;
by `forbidden' we mean that for some $i\le k$, some $i-q-1$
vertices contain at least $i>0$ edges.
We will estimate the expected value of $E-F$, more precisely
our goal is to show that the inequality
$\bbE(E-F)\ge \bbE(E)/2$ on the expected values is true for a
suitable choice of the constant $c$.
Once $\bbE(E-F)\ge \bbE(E)/2$ is ensured, we obtain that
there exists a (non-random) hypergraph with twice as many
edges as the number of its forbidden subhypergraphs,
hence removing one edge from each of the latter we obtain
a hypergraph with the required structure and with
at least $\bbE(E)/2=\frac{p}{2} {n\choose r}$ edges.
By the additivity of expectation we have
$$
\bbE(E-F) = \bbE(E) - \bbE(F),
$$
moreover it is clear by definition that
\begin{eqnarray} \label{E(E)}
\bbE(E) = p\cdot\! {n\choose r}
= ({\textstyle\frac{1}{r!}}+o(1))\cdot p\cdot n^r
= ({\textstyle\frac{1}{r!}}+o(1))
\cdot c \cdot n^{r-1+\frac{q+r}{k-1}}
\end{eqnarray}
for any fixed $r$ as $n\to\infty$.
Hence we need to find an upper bound on $\bbE(F)$.
We consider the following set $I$ of those values of $i$ for which an
$(i-q-1)$-element vertex
subset is large enough to accommodate some forbidden subhypergraph:
$$I=\left\{i : \enskip i \le {i-q-1\choose r} \quad \wedge \quad q+r+2 \le i \le
k\right\}.$$
It should be noted first that if $I=\emptyset$,
then also ${\cal H}^{(r)}(k,q)=\emptyset$ holds
and hence $\exsk= {n \choose r}$. In this case, the lower bound in the theorem is
trivially valid, as the condition
$k \ge r+q+1 \ge 2$ implies $(q+r)/(k-1) \le 1$.
From now on, we assume that $I \neq \emptyset$.
Consider any $i\in I$.
On any $i-q-1$ vertices the number of ways we can select
$i$ edges is ${{i-q-1\choose r}\choose i}$, and the
probability for each of those selections to be a subhypergraph
of $\hnp$ is exactly $p^{i}$.
Since there are ${n\choose i-q-1}$ ways to select $i-q-1$
vertices, we obtain the following upper bound:
\begin{eqnarray} \label{E(F)}
\bbE(F) & \le & \sum_{i\in I} {{i-q-1\choose r}\choose i} \cdot
p^{ i}\cdot\! {n\choose i-q-1} \nonumber \\
& < & \sum_{i\in I} \frac{ {{i-q-1\choose r}\choose i} }
{(i-q-1)!}
\cdot p^{ i}\, {n^{i-q-1}} \nonumber \\
& < & {\textstyle \left( {\displaystyle \max_{i\in I} } \,
\frac{ {{i-q-1\choose r}\choose i} } {(i-q-1)!}
\right) } \cdot p^k \, n^{k-q-1} \cdot
\sum_{i=q+r+2}^{k} (pn)^{i-k} \nonumber \\
& \le & ( C_{k,q,r} + o(1) ) \cdot c^k \cdot
n^{k-q-1-k\,(1- \frac{q+r}{k-1})} \nonumber \\
& = & ( C_{k,q,r} + o(1) ) \cdot c^k \cdot
n^{r-1+\frac{q+r}{k-1}}
\end{eqnarray}
where $C_{k,q,r}$ abbreviates the maximum value of
$\frac{ {{i-q-1\choose r}\choose i} } {(i-q-1)!}$
taken over the range $I$ of $i$.
Compare the rightmost formula of (\ref{E(E)}) with
(\ref{E(F)}).
The terms in parentheses containing $o(1)$ are essentially
constant, while the main part of (\ref{E(E)})
is \mbox{$c \cdot n^{r-1+\frac{q+r}{k-1}}$} whereas that of
(\ref{E(F)}) grows with $c^k \cdot n^{r-1+\frac{q+r}{k-1}}$.
Thus, choosing $c$ sufficiently small, the required
inequality $\bbE(E-F)\ge \bbE(E)/2$ will hold for
$n$ large.
This completes the proof of the theorem. \qed
\bsk
\brm
It can also be ensured (again by a suitable choice of $c$)
that $\bbE(E-F)/\bbE(E)$ is arbitrarily close to 1.
This is not needed for the proof above, but it may be of
interest in the context of batch codes with specified rate
(cf. e.g. \cite{YKOS}).
\erm
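To get a feeling for the two expectations, one may evaluate (\ref{E(E)}) and the upper bound (\ref{E(F)}) numerically; the Python sketch below (an illustration only, with an arbitrarily chosen constant $c$) does this for one set of parameters.
\begin{verbatim}
from math import comb

def expectations(r, k, q, n, c):
    # E[E] as in (6) and the upper bound on E[F] as in (7)
    p = c * n ** (-1.0 + (q + r) / (k - 1.0))
    EE = p * comb(n, r)
    I = [i for i in range(q + r + 2, k + 1) if i <= comb(i - q - 1, r)]
    EF = sum(comb(comb(i - q - 1, r), i) * p ** i * comb(n, i - q - 1) for i in I)
    return EE, EF

EE, EF = expectations(r=3, k=10, q=0, n=10**4, c=0.1)
print(EE, EF, EF <= EE / 2)     # for this choice of c the deletion argument applies
\end{verbatim}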
\section{Upper bound for graphs}
First we prove an upper bound on ex$(n,{\cal H}^{(2)}(k,q))$.
\thm \label{graphs}
For every three integers\/ $q\ge -1$,\/ $k \ge 2q+6$ and\/ $n \ge
k$, we have
$$ \ex(n,{\cal H}^{(2)}(k,q)) < C\cdot n^{1+ \frac{1}{\left \lfloor
\frac{k}{q+3}\right\rfloor}}+(q+2)n,$$
where\/ $C=(q+2)^{\frac{1}{\left \lfloor
\frac{k}{q+3}\right\rfloor}}$.
\ethm
\pf Introduce the notation $h=\left \lfloor \frac{k}{q+3}\right\rfloor$
and assume for a contradiction that there exists a graph $G$ of order $n$
in which, for every $q+3 \le i \le k$, every $i$ edges cover at least $i-q$
vertices and the number of edges in $G$ is
$$ |E(G)|=m \ge C\cdot n^{1+ \frac{1}{h}}+(q +2)n.$$
Thus, the average degree $\bar{d}(G)=\bar{d}$ satisfies
$$ \bar{d}=\frac{2m}{n} \ge 2C \cdot n^{\frac{1}{h}}+2(q+2).$$
Moreover, every graph of average degree $\bar{d}$ has a subgraph of
minimum degree greater than $\bar{d}/2$.\footnote{Just
delete sequentially the vertices of degree smaller than or equal to
$\bar{d}/2$. After each single step the average degree is
greater than or equal to $\bar{d}$. Hence, finally we obtain a
subgraph of minimum degree greater than $\bar{d}/2$.}
Hence, we have a subgraph $F$ with
minimum degree $\delta(F)= \delta$ such that
\begin{eqnarray}\delta > C \cdot n^{\frac{1}{h}}+q+2.
\end{eqnarray}
\msk
\noindent\underline{{\it Claim A.}}\, The order of $F$ satisfies
$$|V(F)| > \frac{(\delta-q-2)^{h}}{q+2} .$$
\msk
\noindent{\it Proof.} Choose a vertex $x$ of $F$ as a root and construct the
breadth-first search tree (BFS-tree) of $F$ rooted in $x$.
Let $L_i$ denote the set of vertices on the $i$th level of the
BFS-tree, and introduce the notation
$\ell_i=|L_i|$.
The edges of $F$ not belonging to the BFS-tree will be called
additional edges.
First we consider the vertices of the first
$h^*=\left \lfloor \frac{k-q-1}{q+3}\right\rfloor$ levels
and prove that each vertex $v \in L_i$
is incident with at most $q+1$
additional edges, if $0 \le i \le h^*-1$.
Assume to the contrary that there exist $q+2$ such additional
edges and consider the union of paths on the BFS-tree connecting
the end-vertices of these additional edges with the root vertex
$x$.
This means $q+3$ (not necessarily edge-disjoint) paths
each of length at most $h^*$, and at least one of them (the path
between $v$ and $x$) is of length at most $h^*-1$.
They form a tree, let the number of its edges
be denoted by $p$.
Together with the $q+2$ additional
edges we have
$$p+q+2 \le h^*-1+(q+2)h^*+q+2 = (q+3)h^* +q+1 \le k$$
edges, which cover only $p+1$ vertices. This
contradicts the assumed property of $G$. Therefore, we may have
at most $q+1$ additional edges incident with vertex
$v$.
Now, we prove a bound on the number $\ell_i$ of vertices on the $i$th
level if $2 \le i \le h^*$. The sum of the vertex degrees
over the set $L_{i-1}$ cannot be smaller than $ \delta \ell_{i-1}$.
On the other hand, each of these $\ell_{i-1}$ vertices is incident with at most
$q+1$ additional edges, moreover there are
$\ell_{i-1}+\ell_i$ edges of the BFS-tree each of them being incident with exactly one vertex from
$L_{i-1}$.
It follows that
\begin{eqnarray*}
\delta \, \ell_{i-1} & \le & \ell_{i-1}+\ell_i+(q+1)\ell_{i-1}\\
(\delta-q-2)\; \ell_{i-1} &\le & \ell_i,
\end{eqnarray*}
for every $2 \le i \le h^*$.
Since $\ell_1 \ge \delta -q-2$ is also true, the recursive
formula gives
\begin{eqnarray}
|V(F)|\ge \ell_{h^*} \ge (\delta-q-2)^{h^*} \ge \frac{(\delta-q-2)^{h^*}}{q+2}.
\end{eqnarray}
If $h=h^*$, that is if $k \equiv q+1$ or $q+2$ (mod \, $q+3$),
this already proves Claim~A.
In the other case we have $h=h^*+1$ and claim that every
vertex $u\in L_{h-1}$ is incident with at most $q+1$
additional edges whose other end is in $L_{h-2} \cup L_{h-1}$.
To see this, assume for a contradiction that there are at least $q+2$ such edges.
Again, take these $q+2$ additional edges together with
the paths in the BFS-tree connecting
their ends with the root.
In this subgraph we have at most
$(q+3)(h-1) +q+2 <k$ edges,
and the number of edges exceeds the number of covered vertices by $q+1$.
This contradiction shows that there are at most $q+1$ additional edges of the
described type.
A similar argument shows that each $w \in L_h$ is incident with at most $q+1$
additional edges whose other end is in $L_{h-1}$.
Assuming the presence of $q+2$ such edges, we have at most
$h+ (q+2)(h-1)+q+2 \le k$ edges together with the
paths between their ends and the root. Moreover,
this cardinality exceeds the number of covered vertices by $q+1$. Thus, we have a
contradiction, which proves the property stated for $w$.
By these two bounds on the number of additional edges we can
estimate the sum $s$ of vertex degrees over $L_{h-1}$ as follows:
$$ \delta\, \ell_{h-1} \le s \le \ell_{h-1}+\ell_h +
(q+1)\ell_{h-1} + (q+1) \ell_h.$$
Together with (9) this implies
$$ |V(F)|\ge \ell_h \ge \frac{\delta-q-2}{q+2}\; \ell_{h-1} \ge
\frac{(\delta-q-2)^{h}}{q+2},$$
and proves Claim A. \dia
\bsk
Turning to graph $G$,
inequality (8) and Claim A yield the contradiction
$$
n \ge |V(F)| > \frac{\left(C \cdot n^{1/h}\right)^h}{q+2}=n.
$$
Therefore, in an ${\cal H}^{(2)}(k,q)$-free graph the number of edges must be smaller than
$C\cdot n^{1+ 1/h}+(q+2)n$,
as stated in the theorem. \qed
\bc
\label{multi-graphs}
For every three integers\/ $q\ge -1$,\/ $k \ge 2q+6$ and\/ $n \ge
k$, we have
$$ \ex_M(n,{\cal H}^{(2)}_M(k,q)) < C\cdot n^{1+ \frac{1}{\left \lfloor
\frac{k}{q+3}\right\rfloor}}+(q+2)n,$$
where\/ $C=(q+2)^{\frac{1}{\left \lfloor
\frac{k}{q+3}\right\rfloor}}$.
\ec
\pf The BFS-tree of a multigraph $G$ is meant as a simple graph. That
is, if an edge $uv$ has multiplicity $\mu >1$ in $G$, and $uv$ is an
edge in the BFS-tree, then only one edge $uv$ belongs to the tree,
the remaining $\mu -1$ copies are additional edges.
With this setting every detail of the previous proof remains valid for
multigraphs. \qed
\section{Upper bound for hypergraphs}
In this section we study the problem for hypergraphs. The upper bound on
$\ex(n,{\cal H}^{(r)}(k,q))$ will be obtained by using Theorem
\ref{graphs}.
\thm \label{hgs}
Let\/ $n$, $k$, $r$ and\/ $q$ be integers such that\/ $r\ge 2$, $q\ge -r+1$ and $n\ge k \ge
2q+2r+2$, moreover let $C'=(q+r)^{\frac{1}{\left \lfloor
\frac{k}{q+r+1}\right\rfloor}}$.
Then,
$$ \exsk < \frac{2C'}{r!}\cdot n^{r-1+ \frac{1}{\left \lfloor \frac{k}{q+r+1}\right
\rfloor}}+\frac{2(q+r)}{r!}\cdot n^{r-1}. $$
\ethm
\pf Consider an ${\cal H}^{(r)}(k,q)$-free $r$-graph $H$. Let its
order and size be denoted by $n$ and $m$, respectively.
For a set $S\subseteq V(H)$ denote by $d(S)$ the number of edges
of $H$ which contain $S$ entirely. By double counting we have
$$\sum_{S \subset V(H), \enskip |S|=r-2}d(S)= m\;{r \choose r-2},$$
and for the average value $\bar{d}_{r-2}$ of $d(S)$ over the
$(r-2)$-element subsets of $V(H)$
$$
\bar{d}_{r-2}= m\;\frac{{r \choose r-2}}{{n \choose r-2}}
$$
holds. Thus, there exists an $S^* \subset V(H)$ of cardinality $r-2$
satisfying
$$
d(S^*) \ge m\;\frac{{r \choose r-2}}{{n \choose r-2}}.
$$
Deleting the edges which do not contain $S^*$ entirely,
in addition deleting the $r-2$ vertices of $S^*$ from the remaining edges,
we obtain a graph $G$ with $V(G)=V(H)$ and
$$E(G)= \{e \setminus S^*: S^* \subset e \enskip \wedge \enskip e \in E(H)\}, \quad
\quad |E(G)| \ge m\;\frac{{r \choose r-2}}{{n \choose r-2}} .$$
Since every $i$ edges ($i \le k$) cover at least $i-q$ vertices in $H$,
every $i$ edges cover at least $i-q-r+2$ vertices
in $G$.
Moreover, the conditions given in Theorem \ref{graphs} hold for
$n'=n$, $k'=k$ and $q'=q+r-2$. Then, we obtain
\begin{eqnarray}
m\;\frac{{r \choose r-2}}{{n \choose r-2}}
\le |E(G)| <
(q+r)^{\frac{1}{\left \lfloor \frac{k}{q+r+1}\right \rfloor}}
n^{1+ \frac{1}{\left \lfloor \frac{k}{q+r+1}\right \rfloor}}
+(q+r)n,
\end{eqnarray}
from which
$$
m < \frac{2C'}{r!}\; n^{r-1+ \frac{1}{\left \lfloor \frac{k}{q+r+1}\right
\rfloor}}+\frac{2(q+r)}{r!}\cdot n^{r-1}
$$
follows. This implies the same upper bound for $\ex(n,{\cal H}^{(r)}(k,q))$.
\qed
\bsk
The above proof remains valid if the $r$-graph $H$ is allowed
to have multiple edges. The only difference is that
we must refer to Corollary \ref{multi-graphs} instead of Theorem
\ref{graphs}.
Hence, for
multihypergraphs the same upper bound can be stated. In addition,
since $m(n,r,k)=\ex_M(n,{\cal H}^{(r)}_M(k,0))$, we obtain a new upper bound
for the maximum size $m(n,r,k)$ of $r$-uniform CBC-systems with parameters
$n$ and $k$.
\bc \label{multi-hgs}
Let\/ $n$, $k$, $r$ and\/ $q$ be integers such that\/ $r\ge 2$, $q\ge -r+1$ and $n\ge k \ge
2q+2r+2$, moreover let $C'=(q+r)^{\frac{1}{\left \lfloor
\frac{k}{q+r+1}\right\rfloor}}$.
Then,
$$ \ex_M(n,{\cal H}^{(r)}_M(k,q)) < \frac{2C'}{r!}\cdot n^{r-1+ \frac{1}{\left \lfloor \frac{k}{q+r+1}\right
\rfloor}}+\frac{2(q+r)}{r!}\cdot n^{r-1}. $$
\ec
\bc \label{cbc}
Let\/ $n$, $k$, $r$ be integers such that\/ $r\ge 2$ and $n\ge k \ge
2r+2$, moreover let $C''= r^{\frac{1}{\left \lfloor
\frac{k}{r+1}\right\rfloor}}$.
Then,
$$ m(n,r,k) < \frac{2C''}{r!}\cdot n^{r-1+ \frac{1}{\left \lfloor \frac{k}{r+1}\right
\rfloor}}+\frac{2}{(r-1)!}\cdot n^{r-1}. $$
\ec
\section{Asymptotic equality of Tur\'an numbers}
Up to this point we were concerned with the problem of ${\cal H}^{(r)}(k,q)$-free
hypergraphs; it is different from the one studied by
Brown, Erd\H{o}s and T.\ S\'os \cite{BES2,BES1}, where only the
subhypergraphs with exactly $k-q-1$ vertices and $k$ edges are
forbidden.
In this section we show that
$\ex (n,{\cal H}^{(r)}(k,q))$ and $f^{(r)}(n,k-q-1,k)-1$
are asymptotically equal.
For graphs ($r=2$), our result is stronger: there exists a constant upper bound
(depending only on $k$ and $q$) on their difference.
As a consequence, we obtain a new upper bound on
$f^{(2)}(n,v,k)$ subject to $ v \ge (k+4)/2$.
First we prove the following lemma. For fixed parameters $k$, $q$ and
for a given graph $G$, a subgraph $G'$ is said to be \emph{forbidden (for $(k,q)$)} if
$G' \in {\cal H}^{(2)}(k,q)$, moreover $G'$ is \emph{maximal forbidden (for $(k,q)$)}, if it
cannot be extended into a forbidden subgraph of larger order.
\bl \label{lemma-d}
Let $k$ and $q$ be integers such that $q \ge -1$ and $k \ge q+3$,
and let $G$ be a graph of order at least $k-q-1$. If a subgraph $G'\subset G$ is
maximal forbidden for $(k,q)$, then either $G'$ has $k$ edges or
it is the union of one or more components of $G$.
\el
\pf
Assume that $G'$ is a forbidden subgraph of $G$ and
$|E(G')| < k$.
If there exists an edge $uv \in E(G)$ such
that $u \in V(G')$ and $v \in V(G) \setminus V(G')$, then the
subgraph $G''$ obtained by extending $G'$ with the vertex $v$ and
with the edge $uv$ satisfies $|E(G'')|-|V(G'')|=q+1$ and
$|E(G'')|= |E(G')|+1 \le k$. Hence $G''$ is forbidden for
$(k,q)$
and consequently, $G'$ is not
maximal forbidden.
On the other hand, if the subgraph of $G$ which is induced by $V(G')$ contains some
edge $e$ not in $G'$, then with any vertex $v \in V(G) \setminus
V(G')$, the subgraph $G' + e +v$ is forbidden for $(k,q)$ and
again, $G'$ is not a maximal forbidden subgraph.
Therefore, if $G'$ is of order smaller than $k$ and
it is a maximal forbidden subgraph for $(k,q)$, then $G'$
is a component of $G$, or it is the union of some components of
$G$.
\qed
\bsk
Clearly, $f^{(2)}(n,k-q-1,k) \ge \ex (n, {\cal H}^{(2)}(k,q))$.
The
following theorem states that the difference between them is
bounded by a constant, once the parameters $k$ and $q$ are fixed.
\thm \label{th-d}
For every pair $k,q$ of integers satisfying $q \ge -1$ and $k \ge
q+3$ there exists a constant $d=d(k,q)$ such that for every $n
\ge k-q-1$,
$$f^{(2)}(n,k-q-1,k) - \ex (n, {\cal H}^{(2)}(k,q)) \le d.$$
\ethm
\pf For given parameters $k$ and $q$ first define
$z:= \min \{i: \enskip q+3 \le i \le {i-q-1 \choose 2} \}$.
If $k < z$, there is no forbidden subgraph for $(k,q)$ and
consequently,
$f^{(2)}(n,k-q-1,k) = \ex (n, {\cal H}^{(2)}(k,q))= {n \choose 2}$.
Otherwise, $z$ is
the minimum possible size of a subgraph forbidden for $(k,q)$.
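For instance, as one can check directly from this definition, for $q=-1$ the smallest forbidden subgraph is a triangle ($i=3$ is the least value with $2\le i\le{i\choose 2}$), so $z=3$; for $q=0$ one needs $i=5$ edges on $4$ vertices, since $i=3,4$ violate $i\le{i-1\choose 2}$, hence $z=5$.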
By Theorem \ref{lower}
$$\ex (n, {\cal H}^{(2)}(k,q))=\Omega(n^{1+\frac{q+2}{k-1}})$$
holds, thus
there exists an $n_0$ (depending only on $k$ and $q$) such that
for all $n\ge n_0$
\begin{eqnarray*} \label{1-5fej}
\frac{z}{z-q-1}\cdot n \le \ex (n, {\cal H}^{(2)}(k,q)).
\end{eqnarray*}
Consequently, the following finite maximum exists:
\begin{eqnarray} \label{2-5fej}
d= \max \left(\left\{\frac{z}{z-q-1}\cdot n - \ex (n, {\cal H}^{(2)}(k,q))+1:
\enskip n \in {\mathbb{N}}\right\}\cup \{1\}\right) .
\end{eqnarray}
We claim that $d$ is a suitable constant
for our theorem. To prove this, let us consider an ${\cal F}(k,q)$-free graph $G$
on $n$ vertices and with $f^{(2)}(n,k-q-1,k)-1$ edges. If $G$ is
${\cal H}^{(2)}(k,q)$-free as well, $f^{(2)}(n,k-q-1,k)-1$ is equal to $\ex (n,
{\cal H}^{(2)}(k,q))$, and since $d \ge 1$, the theorem holds for $k$, $q$
and $n$.
In the other case, $G$ contains a subgraph $G_1$ maximal
forbidden for $(k,q)$. Clearly, $G_1$ has fewer than $k$ edges,
hence by Lemma \ref{lemma-d}, $G_1$ is an induced subgraph and there is no edge
between $V(G_1)$ and $V(G)\setminus V(G_1)$.
Then, the remaining subgraph $G-G_1$ is either ${\cal H}^{(2)}(k,q)$-free or
contains a subgraph $G_2$ of size smaller than $k$, which is maximal forbidden for
$(k,q)$.
Iterating this procedure, we finally obtain vertex-disjoint maximal
forbidden subgraphs $G_1, \dots, G_j$ and the ${\cal H}^{(2)}(k,q)$-free
subgraph $G'$ induced by $V(G)\setminus \cup_{i=1}^j V(G_i)$, such that each edge of $G$
is contained in exactly one of $G', G_1, \dots, G_j$.
As $q+1 \ge 0$ and $z \le |E(G_i)|\le k-1$ holds for every $1\le i \le j$,
applying Lemma \ref{lemma-d} and noting that $x/(x-q-1)$ is non-increasing in $x$, we obtain
$$
\frac{|E(G_i)|}{|V(G_i)|}= \frac{|E(G_i)|}{|E(G_i)|-q-1} \le
\frac{z}{z-q-1}.
$$
Using the notation $n_1=\sum_{i=1}^j |V(G_i)|$ and
$n_2=|V(G')|=n-n_1$, together with the definition (\ref{2-5fej}) of $d$, we obtain
\begin{eqnarray*}
|E(G)|&=& f^{(2)}(n,k-q-1,k)-1 \le \frac{z}{z-q-1}\cdot n_1 +
\ex (n_2, {\cal H}^{(2)}(k,q))\\
&\le& \ex (n_1, {\cal H}^{(2)}(k,q))+d-1 + \ex (n_2, {\cal H}^{(2)}(k,q))\\
&\le& \ex (n, {\cal H}^{(2)}(k,q))+d-1,
\end{eqnarray*}
which yields
$$
f^{(2)}(n,k-q-1,k)- \ex (n, {\cal H}^{(2)}(k,q)) \le d,
$$
as stated. \qed
\bsk
\bc
Let\/ $v$ and $k$ be integers such that\/
$2 \le v \le k $ and let\/ $C=(k-v+1)^{\frac{1}{\left \lfloor
\frac{k}{k-v+2}\right\rfloor}}$. Then, there exists a constant $D$
such that for every $n$
$$ f^{(2)}(n,v,k) < C\cdot n^{1+ \frac{1}{\left \lfloor
\frac{k}{k-v+2}\right\rfloor}}+(k-v+1)n +D.$$
\ec
\pf Let $q$ denote $k-v-1$. Then, under the given conditions
we have
$-1 \le q \le k-3 $ and $C=(q+2)^{\frac{1}{\left \lfloor
\frac{k}{q+3}\right\rfloor}}$.
Theorems \ref{graphs} and \ref{th-d} immediately imply the
existence of a constant $D$ such that for every $n$
$$ f^{(2)}(n,k-q-1,k) < C\cdot n^{1+ \frac{1}{\left \lfloor
\frac{k}{q+3}\right\rfloor}}+(q+2)n +D.$$
This is equivalent to the statement of the corollary. \qed
\bsk
\thm \label{th-d-hg}
For every four integers\/ $r,k,q$ and $n$ satisfying\/ $r \ge 2$ and\/ $2 \le q +r+1 \le
k \le n$,
$$f^{(r)}(n,k-q-1,k) - \ex (n, {\cal H}^{(r)}(k,q)) \le (k-1) {n-1\choose r-1}$$
holds. Hence, for every fixed\/ $r$, $k$, and\/ $q$ we have
$$f^{(r)}(n,k-q-1,k) = (1+o(1)) \, \ex (n, {\cal H}^{(r)}(k,q)).$$
\ethm
\bsk
\pf
Consider any extremal $r$-graph $H^*$ for ${\cal F}^{(r)}(k,q)$
on the $n$-element vertex set $V$.
By definition, $H^*$ is ${\cal F}^{(r)}(k,q)$-free. If $H^*$ is also
${\cal H}^{(r)}(k,q)$-free, then
$f^{(r)}(n,k-q-1,k) = \ex (n, {\cal H}^{(r)}(k,q))$ holds
and we have nothing to prove.
Otherwise we select the longest possible sequence of
subhypergraphs $H_i\subset H^*$ ($i=1,2,\dots,\ell$)
under the following conditions:
\begin{itemize}
\item
Each $H_i$ is isomorphic to some member of
${\cal H}^{(r)}(k,q) \smin {\cal F}^{(r)}(k,q)$.
\item
Under the previous condition,
$H_1$ is maximal in $H^*$.
\item
Under the previous conditions,
$H_i$ is maximal in $H^*\smin\bigcup_{j=1}^{i-1} H_j$
for each $2\le i\le\ell$.
\end{itemize}
Eventually we obtain an ${\cal H}^{(r)}(k,q)$-free hypergraph from $H^*$
by removing the edges of $H_1,\dots,H_\ell$, that is at most $(k-1)\cdot\ell$ edges, because
each $H_i$ has at most $k-1$ edges; indeed, since $H^*$ is ${\cal F}^{(r)}(k,q)$-free,
any remaining member of ${\cal H}^{(r)}(k,q)$ would belong to
${\cal H}^{(r)}(k,q) \smin {\cal F}^{(r)}(k,q)$ and could be used to extend the sequence.
Thus, the proof will be done if we prove that
$\ell\le{n-1\choose r-1}$ holds.
Let $e_i$ be an arbitrarily chosen edge of $H_i$ and let $f_i$ be
an $(r-1)$-element subset of $e_i$, which we fix (again arbitrarily)
for $i=1,2,\dots,\ell$.
Should $f_i\subset e_j$ hold for some $1\le i< j\le\ell$, the
hypergraph
$H_i\cup\{e_j\}$ would also be isomorphic to some member of ${\cal H}^{(r)}(k,q)$.
This contradicts the choice (maximality) of $H_i$.
Consequently, for all $i=1,2,\dots,\ell$ we have:
\begin{itemize}
\item $|f_i|=r-1$,
\item $|V\smin e_i|=n-r$,
\item $f_i\cap (V\smin e_i)=\emptyset$,
\item $f_i\cap (V\smin e_j)\ne\emptyset$ whenever $1\le i< j\le\ell$.
\end{itemize}
Thus, applying a theorem of Frankl \cite{Fr},\footnote{Set pairs
with prescribed intersection properties can be applied
in many kinds of extremal problems (not only on graphs and hypergraphs).
A detailed account on those methods and results is given in the
two-part survey \cite{T1,T2}.}
the number of set pairs
$(f_i, V\smin e_i)$ is at most ${(r-1)+(n-r)\choose r-1}
= {n-1\choose r-1}$.
\qed
\bsk
\bc
Let\/ $r$, $v$, $k$ be integers such that\/ $r\ge 2$ and
$(k+2r)/2 \le v \le k+r-2 $ and let\/ $C=(k+r-v-1)^{\frac{1}{\left \lfloor
\frac{k}{k+r-v}\right\rfloor}}$. Then,
$$ f^{(r)}(n,v,k) \le \frac{2C}{r!}\cdot n^{r-1+\frac{1}{\left \lfloor
\frac{k}{k+r-v}\right\rfloor}}+ {\cal O}(n^{r-1}).$$
\ec
\end{document}
\begin{document}
\title{On an atom with a magnetic quadrupole moment subject to a harmonic and a linear confining potentials}
\author{I. C. Fonseca}
\email{[email protected]}
\affiliation{Departamento de F\'isica, Universidade Federal da Para\'iba, Caixa Postal 5008, 58051-970, Jo\~ao Pessoa, PB, Brazil.}
\author{Knut Bakke}
\email{[email protected]}
\affiliation{Departamento de F\'isica, Universidade Federal da Para\'iba, Caixa Postal 5008, 58051-970, Jo\~ao Pessoa, PB, Brazil.}
\begin{abstract}
The quantum dynamics of an atom with a magnetic quadrupole moment that interacts with an external field and is subject to harmonic and linear confining potentials is investigated. It is shown that the interaction between the magnetic quadrupole moment and an electric field gives rise to an analogue of the Coulomb potential and, by confining this atom to harmonic and linear confining potentials, a quantum effect characterized by the dependence of the angular frequency on the quantum numbers of the system is obtained. In particular, it is shown that the possible values of the angular frequency associated with the ground state of the system are determined by a third-degree algebraic equation.
\end{abstract}
\keywords{magnetic quadrupole moment, coulomb-type potential, harmonic and linear confining potentials, biconfluent Heun function, bound states}
\pacs{03.65.Ge, 03.65.Vf}
\maketitle
\section{Introduction}
A quantum effect that has been widely investigated in the literature associated with the electric charge, the electric dipole moment and the electric quadrupole moment is the appearance of geometric quantum phases. For instance, the Aharonov-Bohm effect \cite{ab} is related to the electric charge and the He-McKellar-Wilkens effect \cite{hmw,hmw2} is related to a neutral particle with a permanent electric dipole moment. With respect to an electric quadrupole moment, in Ref. \cite{chen} it is shown that the wave function of a neutral particle can acquire a geometric quantum phase that stems from the interaction between the electric quadrupole moment and a magnetic field. Moreover, in Ref. \cite{b7} it is shown that a geometric quantum phase stems from the interaction between an electric field and the electric quadrupole moment, which is analogous to the scalar Aharonov-Bohm effect for a neutral particle \cite{anan,anan2}. Other quantum effects associated with the electric charge, the electric dipole moment and the electric quadrupole moment have been investigated in the context of searching for bound states, such as the Aharonov-Bohm effect for bound states \cite{pesk,fur} and the Landau quantization \cite{landau,lin,bf25}. On the other hand, based on the magnetic multipole expansion, a quantum effect associated with geometric quantum phases for a magnetic monopole is known in the literature as the dual of the Aharonov-Bohm effect \cite{dab,dab2}. Associated with the appearance of geometric quantum phases for a neutral particle with a permanent magnetic dipole moment, the corresponding quantum effect is the Aharonov-Casher effect \cite{ac}. Besides, geometric quantum phases for a neutral particle with a magnetic quadrupole moment were obtained in Ref. \cite{chen} from the interaction between the magnetic quadrupole moment and an electric field by analogy with the electric quadrupole moment case.
In particular, a quantum particle with a magnetic quadrupole moment has attracted several discussions \cite{magq1,magq2,magq3,magq,prc,chen,pra,magq4,magq5,magq6,magq7,magq8,magq10}, for instance, in atomic systems \cite{magq5,magq6}, molecules \cite{magq7,magq2}, by dealing with $P$- and $T$-odd effects in atoms \cite{magq3,magq8,magq10} and chiral anomaly \cite{magq}. In this work, we consider the single particle approximation used in Ref. \cite{prc} in order to deal with a system that consists of a moving atom with a magnetic quadrupole moment that interacts with a nonuniform electric field and is subject to harmonic and linear confining potentials. Harmonic potentials \cite{landau,greiner3,grif} are of great importance in studies of molecular structure and molecules \cite{mol,mol2,mol3,mol4}, two-dimensional quantum rings and quantum dots \cite{tan}. In the relativistic quantum mechanics context, models of the harmonic oscillator have been proposed based on the Klein-Gordon and Dirac equations and became known as the Dirac oscillator \cite{osc1} and the Klein-Gordon oscillator \cite{kgo}. These relativistic harmonic oscillator models have attracted interest in studies of the Jaynes-Cummings model \cite{jay2,osc3}, quantum phase transitions \cite{extra2,extra3} and the Ramsey-interferometry effect \cite{osc6}. Besides, the interest in including a linear confining potential comes from the studies of atomic and molecular physics \cite{linear3a,linear3b,linear3c,linear3d,linear3e,linear3f}, the quantum bouncer \cite{bouncer,bouncer2}, the motion of a quantum particle in a uniform force field \cite{landau,balle} and relativistic quantum mechanics \cite{linear2,linear2a,linear2b,linear2c,linear2d,linear2e,linear2f,eug,scalar2,vercin,mhv}. Hence, from the classical dynamics of a moving particle with a magnetic quadrupole moment that interacts with external fields, we obtain the corresponding Schr\"odinger equation and show that the interaction between the magnetic quadrupole moment and an electric field can give rise to an analogue of the Coulomb potential. Moreover, by confining this atom to harmonic and linear confining potentials, we show that a quantum effect characterized by the dependence of the angular frequency on the quantum numbers of the system is obtained. As a particular case, it is shown that the possible values of the angular frequency associated with the ground state of the system are determined by a third-degree algebraic equation.
The structure of this work is as follows: in section II, we introduce the classical and quantum dynamics of a moving atom with a magnetic quadrupole moment that interacts with external fields; in section III, we discuss a particular case where the interaction between the magnetic quadrupole moment and a radial electric field can give rise to a Coulomb-type potential, and thus investigate the influence of this Coulomb-type potential on the harmonic and linear confining potentials; in section IV, we present our conclusions.
\section{Moving atom with magnetic quadrupole moment in external fields }
In this section, we start by introducing the quantum dynamics of an atom possessing a magnetic quadrupole moment which interacts with external fields. Following Refs. \cite{pra,prc}, the potential energy in the rest frame of the particle, defined by analogy with the classical dynamics of an electric quadrupole moment, is given by
\begin{eqnarray}
U_{m}=-\sum_{i,j}M_{ij}\,\partial_{i}\,B_{j},
\label{1.1}
\end{eqnarray}
where $\vec{B}$ is the magnetic field and $M_{ij}$ is the magnetic quadrupole moment tensor, whose characteristic is that it is a symmetric and traceless tensor \cite{pra,prc}.
Henceforth, let us consider a moving particle possessing a magnetic quadrupole moment. Then, by applying the Lorentz transformation of the electromagnetic field, the magnetic field $\vec{B}$ given in Eq. (\ref{1.1}) must be replaced with $\vec{B}\rightarrow\vec{B}-\frac{1}{c^{2}}\vec{v}\times\vec{E}$ for $v\ll c$ (SI units). Thereby, the Lagrangian of this classical system is written in the form:
\begin{eqnarray}
\mathcal{L}=\frac{1}{2}m\,v^2+\vec{M}\cdot\vec{B}+\frac{1}{c^{2}}\,\vec{v}\cdot\left(\vec{M}\times\vec{E}\right),
\label{1.2}
\end{eqnarray}
where $\vec{E}$ and $\vec{B}$ are the electric and magnetic fields in the laboratory frame, respectively, and we have defined the vector $\vec{M}$ in Eq. (\ref{1.2}) in such a way that its components are determined by $M_{i}=\sum_{j}M_{ij}\,\partial_{j}$ by analogy with the vector $Q_{i}=\sum_{j}Q_{ij}\,\partial_{j}$ defined in Ref. \cite{chen}, where $Q_{ij}$ is the electric quadrupole moment tensor. From this perspective, it is simple to obtain the canonical momentum of this system: $\vec{p}=m\,\vec{v}+\frac{1}{c^{2}}\left(\vec{M}\times\vec{E}\right)$; thus, the Hamiltonian of this system is
\begin{eqnarray}
\mathcal{H}=\frac{1}{2m}\left[\vec{p}-\frac{1}{c^{2}}(\vec{M}\times\vec{E})\right]^2-\vec{M}\cdot\vec{B}.
\label{1.3}
\end{eqnarray}
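For completeness, note that Eq. (\ref{1.3}) follows from the Legendre transform of the Lagrangian (\ref{1.2}):
\begin{eqnarray*}
\mathcal{H}=\vec{p}\cdot\vec{v}-\mathcal{L}=\frac{1}{2}m\,v^{2}-\vec{M}\cdot\vec{B}=\frac{1}{2m}\left[\vec{p}-\frac{1}{c^{2}}\left(\vec{M}\times\vec{E}\right)\right]^{2}-\vec{M}\cdot\vec{B},
\end{eqnarray*}
where in the last step we have used $m\,\vec{v}=\vec{p}-\frac{1}{c^{2}}\left(\vec{M}\times\vec{E}\right)$.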
Let us proceed with the quantization of the Hamiltonian, by replacing the canonical momentum $\vec{p}$ with the Hermitian operator $\hat{p}=-i\hbar\vec{\nabla}$. Thereby, the quantum dynamics of a moving atom with a magnetic quadrupole moment can be described by the Schr\"odinger equation:
\begin{eqnarray}
i\hbar\frac{\partial\psi}{\partial t}=\frac{1}{2m}\left[\hat{p}-\frac{1}{c^{2}}\,\vec{M}\times\vec{E}\right]^{2}\psi-\vec{M}\cdot\vec{B}\,\psi.
\label{1.4}
\end{eqnarray}
Note that we can also extend the discussion about a moving atom with magnetic quadrupole moment in external fields by including a confining potential $V$ into the Schr\"odinger equation above. In this way, the Schr\"odinger equation becomes
\begin{eqnarray}
i\hbar\frac{\partial\psi}{\partial t}=\frac{1}{2m}\left[\hat{p}-\frac{1}{c^{2}}\,\vec{M}\times\vec{E}\right]^{2}\psi-\vec{M}\cdot\vec{B}\,\psi+V\,\psi.
\label{1.5}
\end{eqnarray}
Thereby, we are able to study some quantum effects of a moving atom that possesses a magnetic quadrupole moment which interacts with an electric field subject to a confining potential.
\section{Confinement to harmonic and linear confining potentials}
In this section, we show that the interaction between the magnetic quadrupole moment and a radial electric field can give rise to a Coulomb-like potential, and thus we discuss the behaviour of the system when it is subject to harmonic and linear confining potentials. Let us use the mathematical properties of the magnetic quadrupole tensor $M_{ij}$ and make a simple choice of the components of this tensor:
\begin{eqnarray}
M_{\rho z}=M_{z\rho}=-M,
\label{2.1}
\end{eqnarray}
where $M$ is a constant $\left(M>0\right)$ and all other components of $M_{ij}$ are null. Note that the magnetic quadrupole moment defined by the components given in Eq. (\ref{2.1}) satisfies the condition in which the magnetic quadrupole tensor $M_{ij}$ must be a symmetric and traceless matrix. These mathematical properties of the magnetic quadrupole tensor $M_{ij}$ have also been explored, for instance, in the study of geometric quantum phases, where the tensor $M_{ij}$ is considered to be diagonal \cite{chen}. Besides, let us consider an electric field given by \cite{er}
\begin{eqnarray}
\vec{E}=\frac{\lambda\,\rho}{2}\,\hat{\rho},
\label{2.2}
\end{eqnarray}
where $\lambda=\lambda_{0}/\epsilon_{0}$, $\lambda_{0}$ is a uniform volume electric charge density, $\epsilon_{0}$ is the electric vacuum permittivity, $\rho=\sqrt{x^{2}+y^{2}}$ and $\hat{\rho}$ is a unit vector in the radial direction. The field configuration given in Eq. (\ref{2.2}) was explored in studies of the Landau quantization for neutral particles with a permanent magnetic dipole moment \cite{er}. Note that the Landau quantization \cite{landau} is characterized by the presence of a uniform magnetic field on the path of a charged particle given by a vector potential $\vec{A}=\frac{B\rho}{2}\,\hat{\varphi}$, where $B$ is the intensity of the magnetic field and $\hat{\varphi}$ is a unit vector in the azimuthal direction. In the present case, the effective vector potential given in Eq. (\ref{1.5}) becomes $\vec{A}_{\mathrm{eff}}=\vec{M}\times\vec{E}=-\frac{M\,\lambda}{2}\,\hat{\varphi}$. Although this effective vector potential cannot yield the Landau quantization for a moving atom with a magnetic quadrupole moment, its importance lies in the fact that it contributes to the appearance of an analogue of the Coulomb potential, as we shall see in the following.
Finally, let us confine the system described by Eqs. (\ref{2.1}) and (\ref{2.2}) to harmonic and linear confining potentials, which can be introduced into the Schr\"odinger equation (\ref{1.5}) through
\begin{eqnarray}
V\left(\rho\right)=\frac{1}{2}\,m\,\omega^{2}\,\rho^{2}+\eta\,\rho.
\label{2.3}
\end{eqnarray}
In particular, the linear scalar potential given in Eq. (\ref{2.3}) has been proposed to describe the confinement of quarks \cite{linear,linear1}, since experimental data show that the confinement behaves proportionally to the distance between the quarks \cite{linear4,linear4a,linear4b,linear4c}. It has also been explored in studies of the quark-antiquark interaction as a problem of a relativistic spinless particle which possesses a position-dependent mass, where the mass term acquires a contribution given by an interaction potential that consists of a linear and a harmonic confining potential plus a Coulomb potential term \cite{bah}. Besides, the linear scalar potential given in Eq. (\ref{2.3}) has attracted a great interest in atomic and molecular physics as pointed out in Refs. \cite{linear3a,linear3b,linear3c,linear3d,linear3e,linear3f} and in several discussions of relativistic quantum mechanics \cite{linear2,linear2a,linear2b,linear2c,linear2d,linear2e,linear2f,eug,scalar2,vercin,mhv}.
We simplify our calculations by working with the units $\hbar=c=1$, therefore the Schr\"odinger equation (\ref{1.5}) becomes
\begin{eqnarray}
i\frac{\partial\psi}{\partial t}&=&-\frac{1}{2m}\left[\frac{\partial^{2}}{\partial\rho^{2}}+\frac{1}{\rho}\frac{\partial}{\partial\rho}+\frac{1}{\rho^{2}}\,\frac{\partial^{2}}{\partial\varphi^{2}}+\frac{\partial^{2}}{\partial z^{2}}\right]\psi-i\,\frac{M\lambda}{2m\rho}\,\frac{\partial\psi}{\partial\varphi}+\frac{M^{2}\lambda^{2}}{8m}\,\psi\nonumber\\
[-2mm]\label{1.7}\\[-2mm]
&+&\frac{1}{2}\,m\,\omega^{2}\,\rho^{2}\,\psi+\eta\,\rho\,\psi.\nonumber
\end{eqnarray}
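For clarity, the terms $-i\,\frac{M\lambda}{2m\rho}\,\frac{\partial\psi}{\partial\varphi}$ and $\frac{M^{2}\lambda^{2}}{8m}\,\psi$ in Eq. (\ref{1.7}) stem from expanding the kinetic term of the Schr\"odinger equation (\ref{1.5}) with $\vec{A}_{\mathrm{eff}}=\vec{M}\times\vec{E}=-\frac{M\,\lambda}{2}\,\hat{\varphi}$:
\begin{eqnarray*}
-\frac{1}{m}\,\vec{A}_{\mathrm{eff}}\cdot\hat{p}\,\psi=-i\,\frac{M\lambda}{2m\rho}\,\frac{\partial\psi}{\partial\varphi};\,\,\,\,\,\,\,\frac{1}{2m}\,\vec{A}_{\mathrm{eff}}^{\,2}\,\psi=\frac{M^{2}\lambda^{2}}{8m}\,\psi,
\end{eqnarray*}
where we have used $\hat{p}_{\varphi}=-\frac{i}{\rho}\,\frac{\partial}{\partial\varphi}$ and the fact that $\vec{\nabla}\cdot\vec{A}_{\mathrm{eff}}=0$.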
We can see that the quantum operators $\hat{L}_{z}=-i\partial_{\varphi}$ and $\hat{p}_{z}=-i\partial_{z}$ commute with the Hamiltonian operator on the right-hand side of (\ref{1.7}); then, a particular solution to Eq. (\ref{1.7}) can be written in terms of the eigenvalues of the operators $\hat{p}_{z}$ and $\hat{L}_{z}$:
\begin{eqnarray}
\psi\left(t,\rho,\varphi,z\right)=e^{-i\mathcal{E}t}\,e^{i\,l\,\varphi}\,e^{ikz}\,R\left(\rho\right),
\label{1.8}
\end{eqnarray}
where $l=0,\pm1,\pm2,\ldots$, $k$ is a constant, and $R\left(\rho\right)$ is a function of the radial coordinate. Thereby, substituting the solution (\ref{1.8}) into Eq. (\ref{1.7}), we obtain
\begin{eqnarray}
R''+\frac{1}{\rho}R'-\frac{l^{2}}{\rho^{2}}R-\frac{M\lambda\,l}{\rho}\,R-m^{2}\omega^{2}\rho^{2}\,R-2m\,\eta\,\rho\,R+\zeta^{2}\,R=0,
\label{1.9}
\end{eqnarray}
where we have defined the following parameter:
\begin{eqnarray}
\zeta^{2}&=&2m\mathcal{E}-k^{2}-\frac{M^{2}\lambda^{2}}{4}.
\label{1.10}
\end{eqnarray}
Observe that the fourth term on the left-hand side of Eq. (\ref{1.9}) plays the role of a Coulomb-like potential for $l\neq0$. This term stems from the interaction between the electric field (\ref{2.2}) and the magnetic quadrupole moment given in Eq. (\ref{2.1}) due to the presence of the term proportional to $\vec{A}_{\mathrm{eff}}\cdot\hat{p}=\left(\vec{M}\times\vec{E}\right)\cdot\hat{p}$ in the Schr\"odinger equation (\ref{1.7}). Note that for $l=0$ this term vanishes, hence there is no trace of the interaction between the electric field and the magnetic quadrupole moment, which is of no interest for our discussion. In this way, we focus on $l\neq0$. Thereby, let us make the change of variable $\xi=\sqrt{m\omega}\,\rho$; thus, the radial equation (\ref{1.9}) becomes
\begin{eqnarray}
R''+\frac{1}{\xi}\,R'-\frac{l^{2}}{\xi^{2}}\,R-\frac{\delta}{\xi}\,R-\xi^{2}\,R-\alpha\,\xi\,R+\frac{\zeta^{2}}{m\omega}\,R=0,
\label{1.11}
\end{eqnarray}
where the parameters $\delta$ and $\alpha$ are defined as
\begin{eqnarray}
\delta=\frac{M\,\lambda\,l}{\sqrt{m\omega}};\,\,\,\,\,\,\,\alpha=\frac{2\,m\,\eta}{\left(m\omega\right)^{3/2}}.
\label{1.12}
\end{eqnarray}
By analysing the asymptotic behaviour of the possible solutions to Eq. (\ref{1.11}), we have that the asymptotic behaviour is determined for $\xi\rightarrow0$ and $\xi\rightarrow\infty$ \cite{heun,eug,mhv,vercin}. This allows us to write the function $R\left(\xi\right)$ in terms of an unknown function $H\left(\xi\right)$ in such a way that it is a regular function at the origin:
\begin{eqnarray}
R\left(\xi\right)=e^{-\frac{1}{2}\,\xi^{2}}\,e^{-\frac{\alpha}{2}\,\xi}\,\xi^{\left|l\right|}\,H\left(\xi\right).
\label{1.13}
\end{eqnarray}
Substituting (\ref{1.13}) into (\ref{1.11}), we obtain the following equation:
\begin{eqnarray}
H''+\left[\frac{\theta}{\xi}-\alpha-2\xi\right]H'+\left[g-\frac{\left(\theta\,\alpha+2\delta\right)}{2\xi}\right]H=0,
\label{1.14}
\end{eqnarray}
where
\begin{eqnarray}
g=\frac{\zeta^{2}}{m\omega}+\frac{\alpha^{2}}{4}-2-2\left|l\right|;\,\,\,\,\,\,\,\theta=2\left|l\right|+1.
\label{1.15}
\end{eqnarray}
The function $H\left(\xi\right)$ is a solution to the second order differential equation (\ref{1.14}) and it is known as the biconfluent Heun function \cite{heun,eug,bm,b50,b50a}:
\begin{eqnarray}
H\left(\xi\right)=H\left(2\left|l\right|,\,\alpha,\,\frac{\zeta^{2}}{m\omega}+\frac{\alpha^{2}}{4},\,2\delta,\,\xi\right).
\label{1.16}
\end{eqnarray}
Our goal is to obtain bound state solutions; for this purpose, let us use the Frobenius method \cite{arf,eug,b50,b50a}. Thereby, the solution to Eq. (\ref{1.14}) can be written as a power series expansion around the origin:
\begin{eqnarray}
H\left(\xi\right)=\sum_{j=0}^{\infty}\,c_{j}\,\xi^{j}.
\label{1.17}
\end{eqnarray}
Therefore, substituting the series (\ref{1.17}) into (\ref{1.14}), we obtain the recurrence relation:
\begin{eqnarray}
c_{j+2}=\frac{\left[2\alpha\left(j+1\right)+\theta\alpha+2\delta\right]}{2\left(j+2\right)\,\left(j+1+\theta\right)}\,c_{j+1}-\frac{\left(g-2j\right)}{\left(j+2\right)\,\left(j+1+\theta\right)}\,c_{j}.
\label{1.18}
\end{eqnarray}
By starting with $c_{0}=1$ and using the recurrence relation (\ref{1.18}), we can calculate other coefficients of the power series expansion (\ref{1.17}). As an example, the terms $c_{1}$ and $c_{2}$ are
\begin{eqnarray}
c_{1}&=&\frac{\alpha}{2}+\frac{\delta}{\theta};\nonumber\\
[-2mm]\label{1.19}\\[-2mm]
c_{2}&=&\frac{\left[2\alpha+\theta\alpha+2\delta\right]}{4\left(1+\theta\right)}\left(\frac{\alpha}{2}+\frac{\delta}{\theta}\right)-\frac{g}{2\left(1+\theta\right)}.\nonumber
\end{eqnarray}
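As a quick numerical check, the recurrence relation (\ref{1.18}) can be iterated directly. The short Python sketch below does this for arbitrary illustrative parameter values (all numbers and the function name are assumptions chosen only for illustration); its first three coefficients reproduce $c_{0}=1$ and Eq. (\ref{1.19}):
\begin{verbatim}
# Minimal sketch of the recurrence relation (1.18); parameter values are
# illustrative assumptions only.
def heun_coefficients(alpha, delta, g, theta, n_terms):
    # c_0 = 1 and c_1 = alpha/2 + delta/theta, as in Eq. (1.19).
    c = [1.0, alpha / 2.0 + delta / theta]
    for j in range(n_terms - 2):
        c_next = ((2.0 * alpha * (j + 1) + theta * alpha + 2.0 * delta)
                  / (2.0 * (j + 2) * (j + 1 + theta)) * c[j + 1]
                  - (g - 2.0 * j) / ((j + 2) * (j + 1 + theta)) * c[j])
        c.append(c_next)
    return c

# Example: l = 1, so theta = 2|l| + 1 = 3; alpha, delta, g chosen arbitrarily.
print(heun_coefficients(alpha=0.5, delta=1.0, g=2.0, theta=3.0, n_terms=5)[:3])
\end{verbatim}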
Bound state solutions can be obtained by imposing that the power series expansion (\ref{1.17}) or the biconfluent Heun series becomes a polynomial of degree $n$. From the recurrence relation given in Eq. (\ref{1.18}), we have that the power series expansion (\ref{1.17}) becomes a polynomial of degree $n$ by imposing two conditions \cite{bm,eug,b50,b50a}:
\begin{eqnarray}
g=2n\,\,\,\,\,\,\mathrm{and}\,\,\,\,\,\,c_{n+1}=0,
\label{1.20}
\end{eqnarray}
where $n=1,2,3,\ldots$. By using the expression for $g$ given in Eq. (\ref{1.15}), the condition $g=2n$ yields
\begin{eqnarray}
\mathcal{E}_{n,\,l}=\omega\left[n+\left|l\right|+1\right]-\frac{\eta^{2}}{2m\,\omega^{2}}+\frac{M^{2}\lambda^{2}}{8m}+\frac{k^{2}}{2m}.
\label{1.21}
\end{eqnarray}
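For the reader's convenience, the intermediate step is as follows: substituting $g$ from Eq. (\ref{1.15}), together with the definitions (\ref{1.10}) and (\ref{1.12}), into the condition $g=2n$ gives
\begin{eqnarray*}
2m\mathcal{E}-k^{2}-\frac{M^{2}\lambda^{2}}{4}=m\omega\left(2n+2+2\left|l\right|\right)-\frac{\eta^{2}}{\omega^{2}},
\end{eqnarray*}
which, solved for $\mathcal{E}$, yields Eq. (\ref{1.21}).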
Equation (\ref{1.21}) corresponds to the spectrum of energy of a moving atom with a magnetic quadrupole moment subject to a harmonic potential and a linear confining potential under the influence of an analogue of the Coulomb potential. As we have seen, the Coulomb-type potential stems from the interaction between the magnetic quadrupole moment defined in Eq. (\ref{2.1}) and the radial electric field given in Eq. (\ref{2.2}). As discussed previously, the Coulomb-type potential is defined for all values of the quantum number $l$ that differ from zero, therefore the energy levels (\ref{1.21}) are defined for $l\neq0$, otherwise there is no presence of the Coulomb-type potential.
However, we have not analysed the condition $c_{n+1}=0$ imposed in Eq. (\ref{1.20}). For this purpose, let us assume that the angular frequency $\omega$ can be adjusted in such a way that the condition $c_{n+1}=0$ is satisfied. With this assumption, we have that both conditions imposed in Eq. (\ref{1.20}) are satisfied and we obtain a polynomial solution to the function $H\left(\xi\right)$ given in Eq. (\ref{1.17}). As a consequence, we obtain an expression involving the angular frequency and the quantum numbers $\left\{n,\,l\right\}$ of the system, whose meaning is that only specific values of the angular frequency $\omega$ are allowed and depend on the quantum numbers $\left\{n,\,l\right\}$. Thereby, we label
\begin{eqnarray}
\omega=\omega_{n,\,l}.
\label{1.22}
\end{eqnarray}
This corresponds to a quantum effect characterized by a dependence of the angular frequency of the harmonic potential on the quantum numbers $\left\{n,\,l\right\}$ of the system that stems from the influence of the Coulomb-type potential on the harmonic and linear confining potentials. From the mathematical
point of view, this relation between the angular frequency of the harmonic oscillator and the quantum numbers $\left\{n,\,l\right\}$ results from the fact that the exact solutions to Eq. (\ref{1.16}) are achieved only for specific values of the harmonic oscillator frequency. In recent years, analogue effects of this angular frequency dependence on the quantum numbers of the system have been investigated in different quantum mechanical contexts \cite{bm,eug,b50,b50a,bf40}.
As an example, let us consider $n=1$, which corresponds to the ground state, and analyse the condition $c_{n+1}=0$. For $n=1$, we have $c_{2}=0$. The condition $c_{2}=0$ imposes that the angular frequency $\omega_{1,\,l}$ satisfies the third-degree algebraic equation \cite{b50,b50a,eug}:
\begin{eqnarray}
\omega_{1,\,l}^{3}-\frac{\left(M\,\lambda\,l\right)^{2}}{2m\theta}\,\omega_{1,\,l}^{2}-\frac{\eta\,M\,\lambda\,l}{m\theta}\left(1+\theta\right)\,\omega_{1,\,l}-\frac{\left(2+\theta\right)\,\eta^{2}}{2m}=0.
\label{1.23}
\end{eqnarray}
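Although the closed-form root of Eq. (\ref{1.23}) is cumbersome, the admissible frequency can easily be obtained numerically. A minimal Python sketch is given below; all parameter values are purely illustrative assumptions (in units $\hbar=c=1$), not values taken from any physical system:
\begin{verbatim}
import numpy as np

# Illustrative (assumed) parameters with hbar = c = 1.
m, M, lam, eta, l = 1.0, 0.5, 0.8, 0.3, 1
theta = 2 * abs(l) + 1

# Coefficients of Eq. (1.23): omega^3 + a2*omega^2 + a1*omega + a0 = 0.
a2 = -(M * lam * l) ** 2 / (2 * m * theta)
a1 = -eta * M * lam * l * (1 + theta) / (m * theta)
a0 = -(2 + theta) * eta ** 2 / (2 * m)

roots = np.roots([1.0, a2, a1, a0])
omega_1l = [w.real for w in roots if abs(w.imag) < 1e-12 and w.real > 0]
print(omega_1l)  # allowed angular frequency for the ground state (n = 1)
\end{verbatim}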
Despite Eq. (\ref{1.23}) having at least one real solution, we do not write it because its expression is very long. Moreover, for $n=1$ we have the simplest case of the function $H\left(\xi\right)$ which corresponds to a polynomial of first degree:
\begin{eqnarray}
H_{1,\,l}\left(\xi\right)=1+\left(\frac{m\,\eta}{\left(m\,\omega_{1,\,l}\right)^{3/2}}+\frac{M\,\lambda\,l}{\theta\,\left(m\,\omega_{1,\,l}\right)^{1/2}}\right)\,\xi.
\label{1.24}
\end{eqnarray}
In this way, the general expression for the energy levels (\ref{1.21}) is given by:
\begin{eqnarray}
\mathcal{E}_{n,\,l}=\omega_{n,\,l}\left[n+\left|l\right|+1\right]-\frac{\eta^{2}}{2m\,\omega_{n,\,l}^{2}}+\frac{M^{2}\lambda^{2}}{8m}+\frac{k^{2}}{2m}.
\label{1.25}
\end{eqnarray}
Hence, we have obtained bound state solutions for a moving atom with a magnetic quadrupole moment subject to harmonic and linear confining potentials under the influence of either attractive or repulsive Coulomb-type potentials. Finally, we have seen that the influence of the Coulomb-type potential on the harmonic and the linear confining potentials gives rise to a quantum effect characterized by the dependence of the angular frequency of the harmonic potential on the quantum numbers $\left\{n,\,l\right\}$, whose meaning is that only specific values of the angular frequency of the harmonic potential are allowed in the system in order that a polynomial solution to the function $H\left(\xi\right)$ can be obtained \cite{eug,bm,b50,b50a}.
\section{Conclusions}
We have investigated the influence of a Coulomb-type potential, which stems from the interaction between a radial electric field and the magnetic quadrupole moment of an atom, on the harmonic and linear confining potentials. We have seen that bound state solutions of the Schr\"odinger equation can be obtained for either attractive or repulsive Coulomb-type potentials. Besides, the influence of the Coulomb-type potential on the linear confining potential and the harmonic potential has yielded a quantum effect characterized by the dependence of the angular frequency on the quantum numbers of the system. In particular, we have shown that the possible values of the angular frequency associated with the ground state of the system are determined by a third-degree algebraic equation.
It is worth observing that we have considered relativistic corrections of the field up to $O\left(\frac{v^{2}}{c^{2}}\right)$. An interesting case would be the analysis of this system under the influence of the relativistic corrections which include terms of order $\mathcal{O}\left(\frac{v^{2}}{c^{2}}\right)$. In this case, the relativistic effects can give rise to an effective mass \cite{whw}, which can be of interest in studies of position-dependent mass systems \cite{pdm,pdm2,pdm3,pdm4}. We hope to address these discussions in the near future.
\acknowledgments
The authors would like to thank the Brazilian agencies CNPq and CAPES for financial support.
\begin{thebibliography}{99}
\bibitem{ab} Y. Aharonov and D. Bohm, {\it Significance of electromagnetic potentials in the quantum theory}, Phys. Rev. {\bf115}, 485 (1959).
\bibitem{hmw} X. G. He and B. H. J. McKellar, {\it Topological phase due to electric dipole moment and magnetic monopole interaction}, Phys. Rev. A {\bf47}, 3424 (1993).
\bibitem{hmw2} M. Wilkens, {\it Quantum phase of a moving dipole}, Phys. Rev. Lett. {\bf72}, 5 (1994).
\bibitem{chen} C.-C. Chen, {\it Topological quantum phase and multipole moment of neutral particles}, Phys. Rev. A {\bf51}, 2611 (1995).
\bibitem{b7} K. Bakke, {\it On the interaction between an electric quadrupole moment and electric fields}, Ann. Phys. (Berlin) {\bf524}, 338 (2012).
\bibitem{anan} J. Anandan, {\it Electromagnetic effects in the quantum interference of dipoles}, Phys. Lett. A {\bf138}, 347 (1989).
\bibitem{anan2} J. Anandan, {\it Classical and quantum interaction of the dipole}, Phys. Rev. Lett. {\bf85}, 1354 (2000).
\bibitem{pesk} M. Peshkin and A. Tonomura, \textit{The Aharonov-Bohm Effect} (Springer-Verlag, in: Lecture Notes in Physics, Vol. 340, Berlin, 1989).
\bibitem{fur} C. Furtado, V. B. Bezerra and F. Moraes, {\it Quantum scattering by a magnetic flux screw dislocation}, Phys. Lett. A {\bf289}, 160 (2001).
\bibitem{landau} L. D. Landau and E. M. Lifshitz, \textit{Quantum Mechanics, the nonrelativistic theory, 3rd Ed.} (Pergamon, Oxford, 1977).
\bibitem{lin} L. R. Ribeiro, C. Furtado and J. R. Nascimento, {\it Landau levels analog to electric dipole}, Phys. Lett. A {\bf348}, 135 (2006).
\bibitem{bf25} J. Lemos de Melo, K. Bakke and C. Furtado, {\it Landau quantization for an electric quadrupole moment}, Phys. Scr. {\bf84}, 045023 (2011).
\bibitem{dab} J. P. Dowling, C. Williams and J. D. Franson, {\it Maxwell duality, Lorentz invariance and topological phase}, Phys. Rev. Lett. {\bf83}, 2486 (1999).
\bibitem{dab2} C. Furtado and G. Duarte, {\it Dual Aharonov-Bohm effect}, Phys. Scr. {\bf71}, 7 (2005).
\bibitem{ac} Y. Aharonov and A. Casher, {\it Topological quantum effects for neutral particles}, Phys. Rev. Lett. {\bf53}, 319 (1984).
\bibitem{magq1} I. B. Khriplovich, {\it A bound on the proton electric dipole moment derived from atomic experiments}, Zh. Eksp. Teor. Fiz. {\bf71}, 51 (1976) [Sov. Phys. JETP {\bf44}, 25 (1976)].
\bibitem{magq2} O. P. Sushkov, V. V. Flambaum and I. B. Khriplovich, {\it Possibility of investigating P- and T-odd nuclear forces in atomic and molecular
experiments}, Zh. Eksp. Teor. Fiz. {\bf87}, 1521 (1984) [Sov. Phys. JETP {\bf60}, 873 (1984)].
\bibitem{magq4} V. F. Dmitriev, V. B. Telitsin, V. V. Flambaum and V. A. Dzuba, {\it Core contribution to the nuclear magnetic quadrupole moment}, Phys. Rev. C {\bf54}, 3305 (1996).
\bibitem{pra} H. S. Radt and R. P. Hurst, {\it Magnetic quadrupole polarizability of closed-shell atoms}, Phys. Rev. A {\bf2}, 696 (1970).
\bibitem{magq5} E. Tak\'acs {\it et al}, {\it Polarization measurements on a magnetic quadrupole line in Ne-like barium}, Phys. Rev. A {\bf54}, 1342 (1996).
\bibitem{magq6} S. Majumder and B. P. Das, {\it Relativistic magnetic quadrupole transitions in Be-like ions}, Phys. Rev. A {\bf62}, 042508 (2000).
\bibitem{magq7} U. I. Safronova {\it et al.}, {\it Electric-dipole, electric-quadrupole, magnetic-dipole, and magnetic-quadrupole transitions in the neon isoelectronic sequence}, Phys. Rev. A {\bf64}, 012507 (2001).
\bibitem{magq3} I. B. Khriplovich, {\it Parity Nonconservation in Atomic Phenomena} (Gordon and Breach, London, 1991).
\bibitem{magq8} V. V. Flambaum, D. DeMille and M. G. Kozlov, {\it Time-Reversal Symmetry Violation in Molecules Induced by Nuclear Magnetic Quadrupole Moments}, Phys. Rev. Lett. {\bf113}, 103003 (2014).
\bibitem{magq10} V. V. Flambaum, {\it Spin hedgehog and collective magnetic quadrupole moments induced by parity and time invariance violating interaction}, Phys. Lett. B {\bf320}, 211 (1994).
\bibitem{magq} D. E. Kharzeev, H.-U. Yee and I. Zahed, {\it Anomaly-induced quadrupole moment of the neutron in magnetic field}, Phys. Rev. D. {\bf84}, 037503 (2011).
\bibitem{prc} V. F. Dmitriev, I. B. Khriplovich and V. B. Telitsin, {\it Nuclear magnetic quadrupole moments in the single-particle approximation}, Phys. Rev. C {\bf50}, 2358 (1994).
\bibitem{greiner3} W. Greiner, \textit{Quantum Mechanics: an introduction, Fourth Edition} (Springer, Berlin, 2001).
\bibitem{grif} D. J. Griffiths, {\it Introduction to quantum mechanics, Second Edition}, (Prentice Hall, 2004).
\bibitem{mol} M. Sage and J. Goodisman, {\it Improving on the conventional presentation of molecular vibrations: Advantages of the pseudoharmonic potential and the direct construction of potential energy curves}, Am. J. Phys. {\bf53}, 350 (1985).
\bibitem{mol2} R. J. Le Roy and R. B. Bernstein, {\it Dissociation energy and long-range potential of diatomic molecules from vibrational spacings of higher levels}, J. Chem. Phys. {\bf52}, 3869 (1970).
\bibitem{mol3} S. M. Ikhdair and R. Sever, {\it Exact solutions of the radial Schrödinger equation for some physical potentials}, Cent. Eur. J. Phys. {\bf5}, 516 (2007).
\bibitem{mol4} S. M. Ikhdair and R. Sever, {\it Exact solutions of the D-dimensional Schr\"odinger equation for a ring-shaped pseudoharmonic potential}, Cent. Eur. J. Phys. {\bf6}, 685 (2008).
\bibitem{tan} W.-C. Tan and J. C. Inkson, {\it Magnetization, persistent currents, and their relation in quantum rings and dots}, Phys. Rev. B {\bf60}, 5626 (1999).
\bibitem{osc1} M. Moshinsky and A. Szczepaniak, {\it The Dirac oscillator}, J. Phys. A: Math. Gen. {\bf22}, L817 (1989).
\bibitem{kgo} S. Bruce and P. Minning, {\it The Klein-Gordon oscillator}, Nuovo Cimento A {\bf106}, 711 (1993).
\bibitem{osc3} P. Rozmej and R. Arvieu, {\it The Dirac oscillator: A relativistic version of the Jaynes-Cummings model}, J. Phys. A {\bf32}, 5367 (1999).
\bibitem{jay2} A. Bermudez, M. A. Martin-Delgado and E. Solano, {\it Exact mapping of the 2+1 Dirac oscillator onto the Jaynes-Cummings model: Ion-trap experimental proposal}, Phys. Rev. A {\bf76}, 041801(R) (2007).
\bibitem{extra2} A. Bermudez, M. A. Martin-Delgado and A. Luis, {\it Chirality quantum phase transition in the Dirac oscillator}, Phys. Rev. A {\bf77}, 063815 (2008).
\bibitem{extra3} A. Bermudez, M. A. Martin-Delgado and E. Solano, {\it Mesoscopic superposition states in relativistic Landau levels}, Phys. Rev. Lett. {\bf99}, 123602 (2007).
\bibitem{osc6} A. Bermudez, M. A. Martin-Delgado and A. Luis, {\it Nonrelativistic limit in the 2+1 Dirac oscillator: A Ramsey-interferometry effect}, Phys. Rev. A {\bf77}, 033832 (2008).
\bibitem{linear3a} E. J. Austin, {\it Perturbation theory and Pad\'e approximants for a hydrogen atom in an electric field}, Mol. Phys. {\bf40}, 893 (1980).
\bibitem{linear3b} E. R. Vrscay, {\it Algebraic methods, Bender-Wu formulas, and continued fractions at large order for charmonium}, Phys. Rev. A {\bf31} 2054 (1985).
\bibitem{linear3c} K. Killingbeck, {\it Quantum-mechanical perturbation theory}, Rep. Prog. Phys. {\bf40}, 963 (1977).
\bibitem{linear3d} K. Killingbeck, {\it Perturbation theory without wavefunctions}, Phys. Lett. A {\bf65} 87 (1978).
\bibitem{linear3e} R. P. Saxena and V. S. Varma, {\it Polynomial perturbation of a hydrogen atom}, J. Phys. A: Math. Gen. {\bf15}, L149 (1982).
\bibitem{linear3f} E. Castro and P. Mart\'in, {\it Eigenvalues of the Schrödinger equation with Coulomb potentials plus linear and harmonic radial terms}, J. Phys. A: Math. Gen. {\bf33}, 5321 (2000).
\bibitem{bouncer} R. L. Gibbs, {\it The quantum bouncer}, Am. J. Phys. {\bf43}, 25 (1975).
\bibitem{bouncer2} R. D. Desko and D. J. Bord, {\it The quantum bouncer revisited}, Am. J. Phys. {\bf51}, 82 (1983).
\bibitem{balle} L. E. Ballentine, {\it Quantum Mechanics, a Modern Development} (World Scientific, Singapore, 1998).
\bibitem{linear2} G. Plante and A. F. Antippa, {\it Analytic solution of the Schrödinger equation for the Coulomb-plus-linear potential: I. The wave functions}, J. Math. Phys. {\bf46}, 062108 (2005).
\bibitem{linear2a} J. H. Noble and U. D. Jentschura, {\it Dirac equations with confining potentials}, Int. J. Mod. Phys. A {\bf30}, 1550002 (2015).
\bibitem{linear2b} M. L. Glasser and N. Shawagfeh, {\it Dirac equation for a linear potential}, J. Math. Phys. {\bf25}, 2533 (1984).
\bibitem{linear2c} H. Tezuka, {\it Analytical solutions of the Dirac equation with a scalar linear potential}, AIP Advances {\bf3}, 082135 (2013).
\bibitem{linear2d} J. F. Gunion and L. F. Li, {\it Relativistic treatment of the quark-confinement potential}, Phys. Rev. D {\bf12}, 3583 (1975).
\bibitem{linear2e} P. Leal Ferreira, {\it Two-body Dirac equation with a scalar linear potential}, Phys. Rev. D {\bf38}, 2648 (1988).
\bibitem{linear2f} F. Dom\'inguez-Adame and M. A. Gonz\'alez, {\it Solvable Linear Potentials in the Dirac Equation}, Europhys. Lett. {\bf13}, 193 (1990).
\bibitem{scalar2} G. Soff, B. M\"uller, J. Rafelski and W. Greiner, {\it Solution of the Dirac equation for scalar potentials and its implications in atomic physics}, Z. Naturforsch. A {\bf28}, 1389 (1973).
\bibitem{vercin} A. Ver\'cin, {\it Two anyons in a static, uniform magnetic field: Exact solution}, Phys. Lett. B {\bf260}, 120 (1991).
\bibitem{mhv} J. Myrheim, E. Halvorsen and A. Ver\'cin, {\it Two anyons with Coulomb interaction in a magnetic field}, Phys. Lett. B {\bf278}, 171 (1992).
\bibitem{eug} E. R. Figueiredo Medeiros and E. R. Bezerra de Mello, {\it Relativistic quantum dynamics of a charged particle in cosmic string spacetime in the presence of magnetic field and scalar potential}, Eur. Phys. J. C {\bf72}, 2051 (2012).
\bibitem{er} M. Ericsson and E. Sj\"oqvist, {\it Towards a quantum Hall effect for atoms using electric fields}, Phys. Rev. A {\bf65}, 013607 (2001).
\bibitem{linear} C. L. Chrichfield, {\it Scalar binding of quarks}, Phys. Rev. D {\bf12}, 923 (1975).
\bibitem{linear1} C. L. Chrichfield, {\it Scalar potentials in the Dirac equation}, J. Math. Phys. {\bf17}, 261 (1976).
\bibitem{linear4} R. S. Kaushal, {\it Pion form factor and the quark model for the spectrum of heavy mesons}, Phys. Lett. B {\bf57}, 354 (1975).
\bibitem{linear4a} E. Eichten {\it et al.}, {\it Charmonium: Comparison with experiment}, Phys. Rev. D {\bf21}, 203 (1980).
\bibitem{linear4b} E. Eichten {\it et al.}, {\it Spectrum of charmed quark-antiquark bound states}, Phys. Rev. Lett. {\bf34}, 369 (1975).
\bibitem{linear4c} H. Tezuka, {\it Analytical solution of the Schrodinger equation with linear confinement potential}, J. Phys. A: Math. Gen. {\bf24}, 5267 (1991).
\bibitem{bah} M. K. Bahar and F. Yasuk, {\it Exact Solutions of the Mass-Dependent Klein-Gordon Equation with the Vector Quark-Antiquark Interaction and Harmonic Oscillator Potential}, Advances in High Energy Physics, vol. 2013, Article ID 814985, 6 pages, 2013. Doi:10.1155/2013/814985.
\bibitem{heun} A. Ronveaux (ed.), \textit{Heun's differential equations} (Oxford University Press, Oxford, 1995).
\bibitem{b50} K. Bakke, {\it Bound states for a Coulomb-type potential induced by the interaction between a moving electric quadrupole moment and a magnetic field}, Ann. Phys. (NY) {\bf341}, 86 (2014).
\bibitem{b50a} K. Bakke, {\it Some quantum aspects of a particle with electric quadrupole moment interacting with an electric field subject to confining potentials}, Int. J. Mod. Phys. A {\bf29}, 1450117 (2014).
\bibitem{bm} K. Bakke and F. Moraes, {\it Threading dislocation densities in semiconductor crystals: A geometric approach}, Phys. Lett. A {\bf376}, 2838 (2012).
\bibitem{arf} G. B. Arfken and H. J. Weber, {\it Mathematical Methods for Physicists, sixth edition} (Elsevier Academic Press, New York, 2005).
\bibitem{bf40} K. Bakke and C. Furtado, {\it On the Klein–Gordon oscillator subject to a Coulomb-type potential}, Ann. Phys. (NY) {\bf355}, 48 (2015).
\bibitem{whw} H. Wei, R. Han and X. Wei, {\it Quantum phase of induced dipoles moving in a magnetic field}, Phys. Rev. Lett. {\bf75}, 2071 (1995).
\bibitem{pdm} L. Dekar, L. Chetouani and T. F. Hammann, {\it Wave function for smooth potential and mass step}, Phys. Rev. A {\bf59}, 107 (1999).
\bibitem{pdm2} A. D. Alhaidari, {\it Solutions of the nonrelativistic wave equation with position-dependent effective mass}, Phys. Rev. A {\bf66}, 042116 (2002).
\bibitem{pdm3} A. D. Alhaidari, {\it Solution of the Dirac equation with position-dependent mass in the Coulomb field}, Phys. Lett. A {\bf322}, 72 (2004).
\bibitem{pdm4} L. Serra and E. Lipparini, {\it Spin response of unpolarized quantum dots}, Europhys. Lett. {\bf40}, 667 (1997).
\end{thebibliography}
\end{document}
\begin{document}
\title{Parameterized Matching in the Streaming Model}
\begin{abstract}
We study the problem of parameterized matching in a stream where we want to output matches between a pattern of length $m$ and the last $m$ symbols of the stream before the next symbol arrives. Parameterized matching is a natural generalisation of exact matching where an arbitrary one-to-one relabelling of pattern symbols is allowed. We show how this problem can be solved in constant time per arriving stream symbol and sublinear, near optimal space with high probability. Our results are surprising and important: it has been shown that almost no streaming pattern matching problems can be solved (not even randomised) in less than $\Theta(m)$ space, with exact matching as the only known problem to have a sublinear, near optimal space solution. Here we demonstrate that a similar sublinear, near optimal space solution is achievable for an even more challenging problem. The proof is considerably more complex than that for exact matching.
\end{abstract}
\section{Introduction}
We consider the problem of pattern matching in a stream where we want to output matches between a pattern of length $m$ and the last $m$ symbols of the stream. Each answer must be reported before the next symbol arrives. The problem we consider in this paper is known as \emph{parameterized matching} and is a natural generalisation of exact matching where an arbitrary one-to-one relabelling of the pattern symbols is allowed (one per alignment). For example, if the pattern is \texttt{abbca} then there is a parameterized match with \texttt{bddcb} as we can apply the relabelling \texttt{a}$\rightarrow$\texttt{b}, \texttt{b}$\rightarrow$\texttt{d}, \texttt{c}$\rightarrow$\texttt{c}. There is however no parameterized match with \texttt{bddbb}. We show how this streaming pattern matching problem can be solved in near constant time per arriving stream symbol and sublinear, near optimal, space with high probability. The space used is reduced even further when only a small subset of the symbols are allowed to be relabelled. As discussed in the next section, our results demonstrate a serious push forward in understanding which pattern matching problems can be solved in sublinear space.
\subsection{Background}
Streaming algorithms form a well-studied area, and finding patterns in a stream in particular is a fundamental problem that has received increasing attention over the past few years. It was shown in~\cite{CEPP:2008} that many offline algorithms can be made online (streaming) and deamortised with a $\log m$ factor overhead in the time complexity per arriving symbol in the stream, where $m$ is the length of the pattern. There have also been improvements for specific pattern matching problems but they all have one property in common: space usage is $\Theta(m)$ words. It is not difficult to show that we in fact \emph{need} as much as $\Theta(m)$ space to do pattern matching, unless errors are allowed.
The field of pattern matching in a stream took a significant step forwards in 2009 when it was shown to be possible to solve exact matching using only $O(\log{m})$ words of space and $O(\log{m})$ time per new stream symbol~\cite{Porat:09}. This method, which is based on fingerprints, correctly finds all matches with high probability. The initial approach was subsequently somewhat simplified~\cite{EJS:2010} and then finally improved to run in constant time~\cite{BG:2011} within the same space requirements.
Being able to do exact matching in sublinear space raised the question of what other streaming pattern matching problems can be solved in small space. In~2011 this question was answered for a large set of such problems~\cite{CJPS:2011}. The result was rather gloomy: almost no streaming pattern matching problems can be solved in sublinear space, not even using randomised algorithms. An $\Omega(m)$ space lower bound was given for $L_1$, $L_2$, $L_\infty$, Hamming, edit distance and pattern matching with wildcards as well as for any algorithm that computes the cross-correlation/convolution. So what other pattern matching problems could possibly be solved in small space? It seems that the only hope to find any is by imposing various restrictions on the problem definition. This was indeed done in~\cite{Porat:09} where a solution to $k$-mismatch (exact matching where up to $k$ mismatches are allowed) was given which uses $O(k^2\text{poly}(\log m))$ time per arriving stream symbol and $O(k^3\text{poly}(\log m))$ words of space. The solution involves multiple instances of the exact matching algorithm run in parallel. Note that the space bound approaches $\Theta(m)$ as $k$ increases, so the algorithm is only interesting for sufficiently small $k$. Further, the space bound is very far from the known $\Omega(k)$ lower bound.
We also note that it is straightforward to show that exact matching with $k$~wildcards in the pattern can be solved with the $k$-mismatch algorithm. To our knowledge, no other streaming pattern matching problems have been solved in sublinear space so far.
In this paper we present the first push forward since exact matching by giving a sublinear, near optimal space and near constant time (or constant with a mild restriction on the alphabet) algorithm for parameterized matching in a stream. This natural problem turns out to be significantly more complicated to solve than exact matching and our results provide the first demonstration that small space and time bounds are achievable for a more challenging problem. Note that our space bound, as opposed to $k$-mismatch, is essentially optimal like for exact matching.
One could easily argue that our results are surprising, and yet again the question of what other problems are solvable in sublinear space calls for an answer. In particular, given that restrictions to the problem have to be made, what restrictions should one make to break the $\Omega(m)$ space barrier?
\subsection{Problem definition and related work}\label{sec:probdef}
A pattern $P$ of length $m$ is said to \emph{parameterize match}, or \emph{\mbox{p-match}\xspace} for short, an $m$~length string $S$ if there is an injective (one-to-one) function $f$ such that $S[j]=f(P[j])$ for all $j\in\{0,\dots,m-1\}$.
In our streaming setting, the pattern is known in advance and
the symbols of the stream $T$ arrive one at a time. We use the letter $i$ to denote the index of the latest symbol in the stream. Our task is to output whether there is a \mbox{p-match}\xspace between $P$ and $T[(i-m+1)\ensuremath{,\,\,} i]$ before $T[i+1]$ arrives. The mapping $f$ may be distinct for each $i$.
One may view this matching problem as that of finding matches in a stream encrypted using a substitution cipher. In offline settings, parameterized matching has its origin in finding duplication and plagiarism in software code, although it has since found numerous
other applications. Since the first introduction of the problem, a great deal of work has gone into
its study in both theoretical and practical settings (see
e.g.\@~\cite{Baker:1993,AFM:1994,
Baker:1995,Baker:1996,Baker:1997,HLS:2007}). Notably, in an offline
setting, the exact parameterized matching problem can be solved in
near linear time using a variant~\cite{AFM:1994} of the classic linear
time exact matching algorithm KMP~\cite{Knuth:1977}.
When the sublinear space algorithm for exact matching was given in~\cite{Porat:09},
properties of the periods of strings formed a crucial part of their analysis. However, when considering parameterized matching the period of a string is a much less straightforward concept than it is for exact matching.
For example, it is no longer true that consecutive matches must either be separated by the period of the pattern or be at least $m/2$ symbols apart. This property, which holds for exact but not parameterized matching, allows for an efficient encoding of the positions of the matches. This was crucial to reducing the space requirements of the previous streaming algorithms. Unfortunately, parameterized matches can occur at arbitrary positions in the stream, requiring new insights. This is not the only challenge that we face.
A natural way to match two strings under parameterization is to consider
their \emph{predecessor strings}. For a string $S$, the predecessor
string, denoted $\ensuremath{\textup{pred}}d{S}$, is a string of length $|S|$ such that
$\ensuremath{\textup{pred}}d{S}[j]$ is the distance, counted in numbers of symbols, to the
previous occurrence of the symbol $S[j]$ in $S$. In other words,
$\ensuremath{\textup{pred}}d{S}[j]=d$, where $d$ is the smallest positive value for
which $S[j]=S[j-d]$. Whenever no such $d$ exists, we set
$\ensuremath{\textup{pred}}d{S}[j]=0$.
As an example, if $S=\texttt{aababcca}$ then $\ensuremath{\textup{pred}}d{S}=\texttt{01022014}$.
We can perform parameterized matching offline by only considering predecessor strings using the fundamental fact~\cite{Baker:1993} that two equal length strings $S$ and $S'$ \mbox{p-match}\xspace iff $\ensuremath{\textup{pred}}d{S}=\ensuremath{\textup{pred}}d{S'}$.
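To make these definitions concrete, the predecessor string and the resulting \mbox{p-match}\xspace test can be sketched offline in a few lines of Python (this is only an illustration of the definitions above, with names of our own choosing, and not the streaming algorithm developed in this paper):
\begin{verbatim}
def pred(s):
    # Distance to the previous occurrence of each symbol (0 if none).
    last, out = {}, []
    for j, ch in enumerate(s):
        out.append(j - last[ch] if ch in last else 0)
        last[ch] = j
    return out

def p_match(p, s):
    # Two equal-length strings p-match iff their predecessor strings coincide.
    return len(p) == len(s) and pred(p) == pred(s)

assert pred("aababcca") == [0, 1, 0, 2, 2, 0, 1, 4]
assert p_match("abbca", "bddcb") and not p_match("abbca", "bddbb")
\end{verbatim}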
A plausible approach for our streaming problem would now be to
translate the problem of parameterized matching in a stream to that of
exact matching. This could be achieved by converting both pattern and
stream into their corresponding predecessor strings and maintaining
fingerprints of a sliding window of the translated input. However,
consider the effect on the predecessor string, and hence its
fingerprint, of sliding a window in the stream along by one. The
leftmost symbol $x$, say, will move out of the window and so the
predecessor value of the new leftmost occurrence of $x$ in the new
window will need to be set to \texttt{0} and the corresponding fingerprint
updated. We cannot afford to store the positions of all
characters in a $\Theta(m)$ length window.
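For instance, if the current window is \texttt{abab}, its predecessor string is \texttt{0022}; after sliding by one symbol to, say, \texttt{babc}, it becomes \texttt{0020}: the occurrence of \texttt{a} that previously pointed back to the departed leftmost \texttt{a} is now the first \texttt{a} of the window, so its entry must be reset to \texttt{0}, and locating that occurrence is exactly the positional information we cannot afford to store.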
We will show a matching algorithm that solves these problems and others we
encounter en route using minimal space and in near constant time per arriving symbol. A number of technical innovations are required, including new uses of fingerprinting, a new compressed encoding of the positions of potential matches, a separate deterministic algorithm designed for prefixes of the pattern with small parameterized period as well as the deamortisation of the entire matching process. Section~\ref{sec:overview} gives a more detailed overview of these main hurdles.
\subsection{Our new results}\label{sec:newresults}
Our main result is a fast and space efficient algorithm to solve the streaming parameterized matching problem.
It applies to \emph{dense} alphabets where we assume that both the pattern and streaming text alphabets are $\Sigma=\{0,\dots,|\Sigma|-1\}$. The following theorem is proved over the subsequent sections of this paper.
\begin{theorem}\label{thm:main}
Suppose the pattern and text alphabets are both $\Sigma=\{0,\dots,|\Sigma|-1\}$ and the pattern has length~$m$. There is a randomised algorithm for streaming parameterized matching that takes $O(1)$ worst-case time per character and uses $O(|\Sigma|\log m)$ words of space. The probability that the algorithm outputs correctly at all alignments of an $n$~length text is at least $1-1/n^c$, where $c$ is any constant.
\end{theorem}
To fully appreciate this theorem we also give a nearly matching space lower bound which shows that our solution is optimal within logarithmic factors. The proof is based on communication complexity arguments and is deferred to Appendix~\ref{appendix:space}.
\begin{theorem}
\label{thm:space}
There is a randomised space lower bound of $\Omega(|\Sigma|)$ bits for the streaming parameterized problem, where $\Sigma$ is the pattern alphabet.
\end{theorem}
Parameterized matching is often specified under the assumption that only some symbols are variable (allowed to be relabelled). The mapping $f$ we used in Section~\ref{sec:probdef} has to reflect this constraint. More precisely, let the pattern alphabet be partitioned into fixed symbols $\Sigma_{\textup{fixed}}$ and variable symbols $\Pi$. For $\sigma\in\Sigma_{\textup{fixed}}$, we require that $f(\sigma)=\sigma$. The result from Theorem~\ref{thm:main} can be extended to handle \emph{general} alphabets with arbitrary fixed symbols. The idea is to apply a suitable reduction that was given in~\cite{AFM:1994} (Lemma~2.2) together with the streaming exact matching algorithm of Breslauer and Galil~\cite{BG:2011}, as well as applying a ``filter'' on the text stream, using for instance the
the dictionary of Andersson and Thorup~\cite{AT:2000} based on exponential search trees. The dictionary is used to map text symbols to the variable pattern symbols in $\Pi$. The proof of the following theorem is given in Appendix~\ref{appendix:filter}.
\begin{theorem}
\label{thm:filter}
Suppose $\Pi$ is the set of pattern symbols that can be relabelled under parameterized matching. All other pattern symbols are fixed. Without any constraints on the text alphabet, there is a randomised algorithm for streaming parameterized matching that takes $O(\sqrt{\log{|\Pi|}/\log{\log{|\Pi|}}})$ worst-case time per character and uses $O(|\Pi|\log m)$ words of space, where $m$ is the length of the pattern. The probability that the algorithm outputs correctly at all alignments of an $n$ length text is at least $1-1/n^c$, where $c$ is any constant.
\end{theorem}
As part of the proof of Theorem~\ref{thm:main} we had to develop an algorithm that efficiently solves streaming parameterized matching for patterns with small
\emph{parameterized period}, defined as follows. The parameterized period (\emph{\mbox{p-period}\xspace}) of the pattern $P$, denoted $\rho$, is the smallest positive integer such that $P[0\ensuremath{,\,\,} (m-1-\rho)]$ \mbox{p-match}\xspacees $P[\rho\ensuremath{,\,\,} m-1]$. That is, $\rho$ is the shortest distance by which $P$ must be slid in order to \mbox{p-match}\xspace itself.
Our algorithm is deterministic and is interesting in its own right (see Section~\ref{sec:smallrho} for details). We also provide a matching space lower bound which is detailed in Appendix~\ref{appendix:space}.
\begin{theorem}
\label{thm:kmp-main}
Suppose the pattern and text alphabets are both $\Sigma=\{0,\dots,|\Sigma|-1\}$ and the pattern has \mbox{p-period}\xspace~$\rho$. There is a deterministic algorithm for streaming parameterized matching that takes $O(1)$ worst-case time per character and uses $O(|\Sigma|+\rho)$ words of space. Further, there is a deterministic space lower bound of $\Omega(|\Sigma| + \rho)$ bits.
\end{theorem}
\subsection{Fingerprints}
We will make extensive use of Rabin-Karp style fingerprints of strings
which are defined as follows. Let $S$ be a string over the alphabet $\Sigma$. Let $p > |\Sigma|$ be a prime and choose
$r \in \mathbb{Z}_p$ uniformly at random. The
fingerprint $\phi(S)$ is given by $\phi(S) \defeq \sum_{k=0}^{|S|-1} S[k] r^k \bmod p$. A critical property of the fingerprint function $\phi$ is that the probability of achieving a false positive, $\Pr(\phi(S) = \phi(S') \,\land\, S \ne S')$, is at most $|S|/(p-1)$ (see~\cite{KR:1987, Porat:09} for proofs). Let $n$ denote the total length of the stream. Our randomised algorithm will make $o(n^2)$ (in fact near linear) fingerprint comparisons in total. Therefore, by applying the union bound, for any constant $c$, we can choose $p \in \Theta(n^{c+3})$ so that with probability at least $1-1/n^c$ there will be no false positive matches.
As we assume the RAM model with word size $\Theta(\log{n})$, a fingerprint fits in a constant number of words. We assume that all fingerprint arithmetic is performed within $\mathbb{Z}_p$.
In particular we will take advantage of two fingerprint operations.
\begin{itemize}
\item[\ensuremath{\ominus}] \emph{Splitting:} Given $\phid{S[0 \ensuremath{,\,\,} a]}$, $\phid{S[0 \ensuremath{,\,\,} b]}$ (where $b>a$) and the value of $r^{-a} \bmod p$, we can compute $\phid{S[a+1 \ensuremath{,\,\,} b]} = \phid{S[0 \ensuremath{,\,\,} b]} \ensuremath{\ominus} \phid{S[0 \ensuremath{,\,\,} a]}$ in $O(1)$ time.
\item[\ensuremath{\circledcirc}] \emph{Zeroing:} Let $S, S'$ be two equal length strings such that $S'$ is identical to $S$ except at positions $z\in Z \subseteq [0,|S|-1]$ at which $S'[z]=0$. We write $\phid{S} \ensuremath{\circledcirc} Z$ to denote $\phid{S'}$. Given $\phid{S}$ and $(S[z],\,r^z \bmod p)$ for all $z\in Z$, computing $\phid{S} \ensuremath{\circledcirc} Z$ takes $O(|Z|)$ time.
\end{itemize}
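For concreteness, the following is a minimal Python sketch of the fingerprint function and of the two operations above. The modulus, the choice of $r$ and all identifiers are illustrative only, and the power of $r^{-1}$ used for splitting simply rebases the weights so that the first character of the extracted substring gets weight $r^0$.
\begin{verbatim}
import random

p = (1 << 61) - 1            # an illustrative prime modulus
r = random.randrange(1, p)   # r chosen uniformly at random from Z_p

def phi(S):
    """phi(S) = sum_k S[k] * r^k mod p, for a list of integers S."""
    h, rk = 0, 1
    for x in S:
        h = (h + x * rk) % p
        rk = (rk * r) % p
    return h

def split(phi_prefix_b, phi_prefix_a, a):
    """Given phi(S[0..a]) and phi(S[0..b]) with b > a, return the fingerprint
    of S[a+1..b], rebased so that its first character has weight r^0."""
    return (phi_prefix_b - phi_prefix_a) * pow(r, -(a + 1), p) % p

def zero(phi_S, Z):
    """Z is a list of pairs (S[z], r^z mod p); return the fingerprint of S
    with those positions set to zero, in O(|Z|) time."""
    for value, r_z in Z:
        phi_S = (phi_S - value * r_z) % p
    return phi_S
\end{verbatim}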
\section{Overview, key properties and notation}\label{sec:overview}
The overall idea of our algorithm in Theorem~\ref{thm:main} follows that of previous work on streaming exact matching in small space, however for parameterized matching the situation is much more complex and calls for not only more involved details and methods but also a deep fundamental understanding of the nature of parameterized matching. We will now describe the overall idea, introduce some important notation and at the end of this section we will highlight key facts about parameterized matching that are crucial for our solution.
The main algorithm will try to match the streaming text with various prefixes of the pattern~$P$. Let $\ensuremath{\Sigma_{\textup{P}}}$ denote the pattern alphabet.
We define $\delta = |\ensuremath{\Sigma_{\textup{P}}}|\log{m}$ and
let $P_0$ denote the shortest prefix of $P$ that has
\mbox{p-period}\xspace greater than $3\delta$ (recall the definition of \mbox{p-period}\xspace given above Theorem~\ref{thm:kmp-main}).
We define $s$ prefixes $P_\ell$ of increasing length so that
$|P_{\ell}| = 2^{\ell}|P_0|$ for $\ell \in \{1, \ldots, s-1\}$, where $s \leqslant \lceil \log m \rceil$ is the largest value such that $|P_{s-1}|\leqslant m/2$. The final prefix $P_s$ has length $m-4 \delta$.
For all $\ell$, we define $m_\ell = |P_\ell|$, hence $m_\ell=2m_{\ell-1}$ for $\ell \in \{1, \ldots, s-1\}$.
In order to determine if there is a \mbox{p-match}\xspace between the text and a pattern prefix, we will compare the fingerprints of their predecessor strings (recall that two strings \mbox{p-match}\xspace iff their predecessor strings are the same).
We will need two related (but typically distinct) fingerprint definitions to achieve this. Figure~\ref{fig:fp-output} will be helpful when reading the following definitions which are discussed in an example below.
For any index~$i'$ and~$\ell \in \{0, \ldots, s\}$,
\begin{align*}
\fplong{\ell}(i') \,&\defeq\, \phid{\ensuremath{\textup{pred}}d{T[0 \ensuremath{,\,\,} (i'+m_{\ell}-1)]}}\,, \\
\fpinner{\ell}(i') \,&\defeq\, \phid{\ensuremath{\textup{pred}}d{T[i'\ensuremath{,\,\,} (i'+m_{\ell}-1)]}[m_{\ell-1}\ensuremath{,\,\,} m_{\ell}-1]}\,.
\end{align*}
\insertfigure{phi-fingerprints}{\label{fig:fp-output} The key fingerprints used by the randomised algorithm.
Characters that contribute differently to $\fpinner{\ell}(i')$ and $\fplong{\ell}(i')\ominus\fplong{\ell-1}(i')$ are highlighted.}
For each $\ell \in \{1, \ldots, s\}$ the main algorithm runs a process whose responsibility is to find \mbox{p-match}\xspacees between the text and $P_\ell$ ($P_0$ is handled separately as will be discussed later). The process responsible for $P_\ell$ will ask the process responsible for $P_{\ell-1}$ if it has found any \mbox{p-match}\xspacees, and if so it will try to extend the matches to $P_\ell$. As an example, suppose that the process for $P_{\ell-1}$ finds a match at position $i'$ of the text (refer to Figure~\ref{fig:fp-output}). The process will then store this match along with the fingerprint $\fplong{\ell-1}(i')$ which has been built up as new symbols arrive. The process for $P_\ell$ will be handed this information when the symbol at position $i'+m_\ell-1$ arrives. The task is now to work out if $i'$ is also a matching position with $P_\ell$. With the fingerprint $\fplong{\ell}(i')$ available (built up as new symbols arrive), the process for $P_\ell$ can use fingerprint arithmetic to determine if $i'$ is a matching position. This is one instance where the situation becomes more tricky than one might first think.
As position $i'$ is a \mbox{p-match}\xspace with $P_{\ell-1}$ it suffices to compare the second half of the predecessor string of $P_\ell$ with the second half of the predecessor string of $T[i'\ensuremath{,\,\,} (i'+m_\ell-1)]$. Fingerprints are used for this comparison. It is crucial to understand that $\fplong{\ell}(i')\ensuremath{\ominus} \fplong{\ell-1}(i')$ cannot be used directly here;
some predecessor values of the text might point very far back, namely to some position \emph{before} index $i'$. In Figure~\ref{fig:fp-output} we have shaded the three symbols for which this is true and we have drawn arrows indicating their predecessors. Thus, in order to correctly do the fingerprint comparison we need to set those positions to zero (we want the fingerprint of the predecessor string of the text substring starting at position $i'$, not the beginning of $T$). The fingerprint we defined as $\fpinner{\ell}(i')$ above is the fingerprint we want to compare to the fingerprint of the second half of the predecessor string of $P_\ell$. Using fingerprint operations, we have from the definitions that $\fpinner{\ell}(i') = \big( \fplong{\ell}(i')\ensuremath{\ominus} \fplong{\ell-1}(i') \big) \ensuremath{\circledcirc} \ensuremath{\Delta}_\ell(i')$, where $\ensuremath{\Delta}_\ell(i')$ is the set of positions that have to be set to zero. For a substring of $T$ of length $\Theta(m_{\ell-1})$ consider the subset of positions which occur in $\ensuremath{\Delta}_\ell(i')$ for at least one value of $i'$. Any such position has a predecessor value greater than $m_{\ell-1}$. Therefore, by summing over all distinct symbols we have that the size of this subset is crucially only $O(|\ensuremath{\Sigma_{\textup{P}}}|)$. Thus, we can maintain in small space every position in a suitable length window that will \emph{ever} have to be set to zero.
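As a small sanity check of this identity, the following self-contained Python sketch computes $\fpinner{\ell}(i')$ both directly and via splitting and zeroing on a toy text; the tiny prime, the value of $r$ and all identifiers are illustrative only.
\begin{verbatim}
p, r = 101, 7                      # tiny illustrative prime and r in Z_p

def phi(S):                        # phi(S) = sum_k S[k] * r^k mod p
    return sum(x * pow(r, k, p) for k, x in enumerate(S)) % p

def pred(S, start=0):              # predecessor string of S[start:], window-local
    last, out = {}, []
    for i in range(start, len(S)):
        out.append(i - last[S[i]] if S[i] in last else 0)
        last[S[i]] = i
    return out

T = [0, 1, 2, 0, 3, 1, 0, 2]       # toy text over a dense alphabet
i_, m_prev, m_curr = 2, 3, 6       # i', m_{l-1} and m_l: the window is T[2..7]

# fpinner_l(i') computed directly from the window's own predecessor string.
fp_inner = phi(pred(T[:i_ + m_curr], start=i_)[m_prev:])

# The same value from fplong_l(i') and fplong_{l-1}(i') by splitting and zeroing.
whole = pred(T[:i_ + m_curr])      # predecessor string of the whole text prefix
diff = (phi(whole) - phi(whole[:i_ + m_prev])) * pow(r, -(i_ + m_prev), p) % p
for j in range(i_ + m_prev, i_ + m_curr):
    if whole[j] > j - i_:          # predecessor lies before the window: zero it
        diff = (diff - whole[j] * pow(r, j - (i_ + m_prev), p)) % p
assert diff == fp_inner
\end{verbatim}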
Let us go back to the example where the process for $P_{\ell-1}$ had found a \mbox{p-match}\xspace at position~$i'$. The process stores $i'$ along with the fingerprint $\fplong{\ell-1}(i')$. This information is not needed by the process for $P_\ell$ until $m_{\ell-1}$ text symbols later. During the arrival of these symbols, the process for $P_{\ell-1}$ might detect more \mbox{p-match}\xspacees, in fact many more matches. Their positions and corresponding fingerprints have to be stored until needed by the process for $P_\ell$. We now have a space issue: how do we store this information in small space? To appreciate this question, first consider exact matching. Here matches are known to be either an exact period length apart or very far apart. The matching positions can therefore be represented by an arithmetic progression. Further, the fingerprints associated with the matches in an arithmetic progression can easily be stored succinctly as one can work out each one of the fingerprints from the first one.
For parameterized matching the situation is much more complex: matches can occur more chaotically and, as we have seen above, fingerprints must be updated dynamically to reflect that symbols could be mapped differently in two distinct alignments. Handling these difficulties in small space (and small time complexity) is a main hurdle and is one point at which our work differs significantly from all previous work on streaming matching in small space.
We cope with this space issue in the next section.
\subsection{The structure of parameterized matches}
First recall that an \emph{arithmetic progression} is a sequence of numbers such that the (common) difference between any two successive numbers is constant. We can specify an arithmetic progression by its start number, the common difference and the length of the sequence.
In the next lemma we will see that the positions at which a string $P$ of length $m$ \mbox{p-match}\xspacees a longer string of length $3m/2$ can be stored in small memory: either a matching position belongs to an arithmetic progression or it is one of relatively few positions that can be listed explicitly in $O(|\ensuremath{\Sigma_{\textup{P}}}|)$ space.
The proof of the lemma (consult Figure~\ref{fig:typ-matches}) is deferred to Section~\ref{sec:arithmetic-proof}.
\insertfigure{typ-matches}{\label{fig:typ-matches} Partitioning of positions (\!{\Large$\times$}\!) at which $P$ \mbox{p-match}\xspacees in a $3m/2$ length substring of $T$.}
\begin{lemma}
\label{lem:arithmetic}
Let $X$ be the set of positions at which $P$ \mbox{p-match}\xspacees within a $3m/2$ length substring of $T$. The set $X$ can be partitioned into two sets $Y$ and $\ensuremath{\mathcal{A}}\xspace$ such that $|Y|\leqslant 6|\ensuremath{\Sigma_{\textup{P}}}|$, $\max(Y)<\min(\ensuremath{\mathcal{A}}\xspace)$ and $\ensuremath{\mathcal{A}}\xspace$ is an arithmetic progression with common difference $\rho$, where $\rho$ is the \mbox{p-period}\xspace of $P$.
\end{lemma}
The lemma is incredibly important for the algorithm as it allows us to store all partial matches (that need to be kept in memory before being discarded) in a total of $O(|\ensuremath{\Sigma_{\textup{P}}}|\log m)$ space across all processes. The question of how to store their associated fingerprints remains, but is nicely resolved with the corollary below that follows immediately from the proof of Lemma~\ref{lem:arithmetic}. We can afford to store fingerprints explicitly for the positions that are identified to belong to the set $Y$ from Lemma~\ref{lem:arithmetic}, and for the matching positions in the arithmetic progression $\ensuremath{\mathcal{A}}\xspace$ we can, as for exact matching, work out every fingerprint given the first one.
\begin{corollary}
\label{cor:arithmetic}
For pattern $P$, text $T$ and arithmetic progression $\ensuremath{\mathcal{A}}\xspace$ as specified in Lemma~\ref{lem:arithmetic}, $\ensuremath{\textup{pred}}d{T}[(i+m-\rho) \ensuremath{,\,\,} (i+m-1)]$ is the same for all $i\in\ensuremath{\mathcal{A}}\xspace$.
\end{corollary}
\subsection{Deamortisation}\label{sec:deamortisation}
So far we have described the overall approach but it is of course a major concern how to carry out computations in constant time per arriving symbol. In order to \emph{deamortise} the algorithm, we run a separate process responsible for the pattern prefix $P_0$ that uses the deterministic algorithm of Section~\ref{sec:smallrho} (i.e.~Theorem~\ref{thm:kmp-main}). As $P_0$ has $\mbox{p-period}\xspace$ greater than $3\delta$, the \mbox{p-match}\xspacees it outputs are at least this far apart. This enables the other processes to operate with a small delay: process $P_\ell$ expects process $P_{\ell-1}$ to hand over matches and fingerprints with a small delay, and it will itself hand over matches and fingerprints to $P_{\ell+1}$ with a small delay. One of the reasons for the delays is that processes operate in a round-robin scheme -- one process per arriving symbol. The process that is responsible for $P_s$ (which has length $m-4\delta$) returns matches with a delay of up to $3\delta$ arriving symbols. Hence there is a gap of length $\delta$ in which we can work out if the whole of $P$ matches. To do this we have another process that runs in parallel with all other processes and explicitly checks if any match with $P_s$ can be extended with the remaining $4\delta$ symbols by directly comparing their predecessor values with the last $4\delta$ predecessor values of the pattern. This job is spread out over $\delta$ arriving symbols, hence matches with $P$ are outputted in constant time.
\section{The main algorithm}
We are now in a position to describe the full algorithm of Theorem~\ref{thm:main}. Recall that the algorithm will find \mbox{p-match}\xspacees with
each of the pattern prefixes $P_0,\dots,P_s$ defined in the previous section. If a shorter prefix fails to match at a given
position then there is no need to check matches for longer
prefixes.
Our algorithm runs three main processes concurrently which we label \textup{A}\xspace, \textup{B}\xspace and \textup{C}\xspace.
The term process had a slightly different meaning in the previous section, but hopefully this will cause no confusion.
Each process takes $O(1)$ time per arriving symbol. Recall that both the pattern and text alphabets are $\ensuremath{\Sigma_{\textup{P}}}=\{0,\dots,|\ensuremath{\Sigma_{\textup{P}}}|-1\}$. \textbf{Process~\textup{A}\xspace} finds \mbox{p-match}\xspacees with prefix $P_0$ which are inserted as they occur into a \emph{match queue} $M_0$. \textbf{Process \textup{B}\xspace} finds \mbox{p-match}\xspacees for prefixes $P_1,\dots,P_s$ which are inserted into the match queues $M_1, \ldots, M_s$, respectively.
The \mbox{p-match}\xspacees are inserted with a delay of up to $3\delta$
symbol arrivals after they occur. \textbf{Process~\textup{C}\xspace} finds \mbox{p-match}\xspacees with the whole pattern $P$ which are outputted in constant time as they occur as described in Section~\ref{sec:deamortisation}.
It is crucial for the space usage that the match queues $M_0,M_1, \ldots, M_s$ will be stored in a compressed fashion. The delay in detecting \mbox{p-match}\xspacees with $P_\ell$ in Process \textup{B}\xspace is a consequence of deamortising the work required to find a prefix match, which we spread out over $\Theta(\delta)$ arriving symbols. We can afford to spread out the work in this way because the \mbox{p-period}\xspace of $P_{\ell-1}$ is at least $\delta$ so any \mbox{p-match}\xspacees are at least this far apart.
Throughout this section we assume that $m>14\delta$ so that $m_\ell-m_{\ell-1} \geqslant 3\delta$ for $\ell \in \{1, \ldots, s\}$.
If $m\leqslant 14\delta$, or the \mbox{p-period}\xspace of $P$ is $3\delta$ or less, we use the deterministic algorithm presented in Section~\ref{sec:smallrho} to solve the problem within the required bounds.
\subsection{Process \textup{A}\xspace (finding matches with $P_0$)}
From the definition of $P_0$ we have that if
we remove the final character (giving the string $P[0\ensuremath{,\,\,} m_0-2]$) then its \mbox{p-period}\xspace is at most $3\delta$.
The \mbox{p-period}\xspace of $P_0$ itself could be much larger.
As part of process \textup{A}\xspace we run the deterministic pattern matching algorithm from Section~\ref{sec:smallrho} (see Theorem~\ref{thm:kmp-main}) on $P[0\ensuremath{,\,\,} m_0-2]$. It returns \mbox{p-match}\xspacees in constant time and uses $O(|\ensuremath{\Sigma_{\textup{P}}}|+3\delta)=O(|\ensuremath{\Sigma_{\textup{P}}}|\log m)$ space.
In order to establish matches with the whole of $P_0$ we handle the final character separately.
If the deterministic subroutine reports a match that ends in $T[i-1]$, when $T[i]$ arrives we have a \mbox{p-match}\xspace with $P_0$ if and only if $\ensuremath{\textup{pred}}d{T}[i]=\ensuremath{\textup{pred}}d{P_0}[m_0-1]$ (or $\ensuremath{\textup{pred}}d{T}[i]\geqslant m_0$ if $\ensuremath{\textup{pred}}d{P_0}[m_0-1]=0$). As the alphabet is of the form $\ensuremath{\Sigma_{\textup{P}}}=\{0,\ldots |\ensuremath{\Sigma_{\textup{P}}}|-1\}$, we can compute the value of $\ensuremath{\textup{pred}}d{T}[i]$ in $O(1)$ time by maintaining an array $A$ of length $|\ensuremath{\Sigma_{\textup{P}}}|$ such that for all $\sigma \in \ensuremath{\Sigma_{\textup{P}}}$, $A[\sigma]$ gives the index of the most recent occurrence of symbol $\sigma$.
Whenever Process \textup{A}\xspace finds a match with $P_0$ at position $i'$ of the text, the pair $(i',\fplong{0}(i'))$ is added to a (FIFO) queue $M_0$, which is queried by Process~\textup{B}\xspace when handling prefix $P_1$.
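As a minimal illustration (all identifiers are hypothetical), the last-character check can be written as follows, where \texttt{last\_occ} plays the role of the array $A$ above and is updated by the caller after each arrival.
\begin{verbatim}
def extends_to_P0(i, c, last_occ, m0, pred_P0_last):
    """Given a p-match of P[0..m0-2] ending at T[i-1], decide whether the
    arriving symbol T[i] = c completes a p-match with P_0.  last_occ[c] is the
    index of the most recent occurrence of c (-1 if none) and pred_P0_last is
    pred_{P_0}[m_0 - 1]; the caller sets last_occ[c] = i afterwards."""
    pred_t = 0 if last_occ[c] == -1 else i - last_occ[c]
    return pred_t == pred_P0_last or (pred_P0_last == 0 and pred_t >= m0)
\end{verbatim}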
\subsection{Process \textup{B}\xspace (finding matches with $P_1,\dots,P_s$)}
We split the discussion of the execution of Process~\textup{B}\xspace into $s$ \emph{levels}, $1,\dots,s$.
For each level~$\ell$ the fingerprint $\fpinner{\ell}(i')$ is computed for each position $i'$ at which $P_{\ell-1}$ \mbox{p-match}\xspacees.
Then, as discussed in Section~\ref{sec:overview}, if $\fpinner{\ell}(i') = \phi(\ensuremath{\textup{pred}}d{P_\ell}[m_{\ell-1}\ensuremath{,\,\,} (m_\ell-1)])$, there is also a match with $P_\ell$ at $i'$. The algorithm will in this case add the pair $(i',\fplong{\ell}(i'))$ to the queue $M_\ell$ which is subject to queries by level~$\ell+1$.
To this end we compute $\fplong{\ell}(i')\ensuremath{\ominus}\fplong{\ell-1}(i')$ and $\ensuremath{\Delta}_\ell(i')$, where $\ensuremath{\Delta}_\ell(i')$ contains all the positions which should be zeroed in order to obtain $\fpinner{\ell}(i')$.
In the example of Figure~\ref{fig:fp-output},
$\ensuremath{\Delta}_\ell(i')=\{1,5,7\}$ (the \texttt{d}, \texttt{e} and \texttt{f}, respectively).
In order for process \textup{B}\xspace to spend only constant time per arriving symbol, all its work must be scheduled carefully. The preparation of the $\ensuremath{\Delta}_\ell(i')$ values takes place as a subprocess we name \textup{B}\xspacedelta. Computing $\fplong{\ell}(i')\ensuremath{\ominus}\fplong{\ell-1}(i')$ and establishing matches takes place in another subprocess named \textup{B}\xspacephi. The two subprocesses are run in sequence for each arriving symbol.
We now give their details.
\paragraph{Subprocess \textup{B}\xspacedelta~(prepare zeroing)}
We use a queue $D_\ell$ associated with each level~$\ell$ which contains the most recent $O(|\ensuremath{\Sigma_{\textup{P}}}|)$ positions with predecessor values greater than $m_{\ell-1}$. We will see below that $\ensuremath{\Delta}_\ell(i')$ is a subset of the positions in $D_\ell$ (adjusted to the offset $i'$).
Unfortunately, in the worst case, for an arriving symbol $T[i]$, $i$ could belong to all of the $D_\ell$ queues. Since we can only afford constant time per arriving symbol, we cannot insert $i$ into more than a constant number of queues. The solution is to buffer arriving symbols.
When some $T[i]$ arrives we first check whether $\ensuremath{\textup{pred}}d{T}[i]>m_0$. If so, the pair $(i,\ensuremath{\textup{pred}}d{T}[i])$ is added to a buffer $\ensuremath{\mathcal{B}}\xspace$ to be dealt with later.
Together with the pair we also store the value $r^i \bmod p$ which will be needed to perform the required zeroing operations.
In addition to adding a new element to the buffer $\ensuremath{\mathcal{B}}\xspace$, the Subprocess~\textup{B}\xspacedelta will also process elements from $\ensuremath{\mathcal{B}}\xspace$. If it is not currently processing an element, it will now start doing so by removing an element from $\ensuremath{\mathcal{B}}\xspace$ (unless $\ensuremath{\mathcal{B}}\xspace$ is empty). Call this element $(j,\ensuremath{\textup{pred}}d{T}[j])$. Over the next $s$ arriving symbols the Subprocess~\textup{B}\xspacedelta will do the following. For each of the $s$ levels $\ell$, if $\ensuremath{\textup{pred}}d{T}[j]>m_{\ell-1}$, add $(j,\ensuremath{\textup{pred}}d{T}[j])$ to the queue $D_\ell$. If $D_\ell$ contains more than $12|\ensuremath{\Sigma_{\textup{P}}}|$ elements, discard the oldest.
\paragraph{Subprocess \textup{B}\xspacephi~(establish matches)}
This subprocess schedules the work across the levels in a round-robin fashion by only considering level $\ell = 1 + (i \bmod s)$ when the symbol $T[i]$ arrives. Potential matches may not be reported by this subprocess until up to $3\delta$ arriving symbols after they occur. As $P_{\ell-1}$ has \mbox{p-period}\xspace at least $3\delta$, the processing of potential matches does not overlap.
The Subprocess \textup{B}\xspacephi for level $\ell$ is always in one of two states: either it is \emph{checking} whether a matching position $i'$ for $P_{\ell-1}$ is also a match with $P_\ell$, or it is \emph{idle}. If idle, level~$\ell$ looks into queue $M_{\ell-1}$ which holds matches with $P_{\ell-1}$. If $M_{\ell-1}$ is
non-empty, level~$\ell$ removes an element from $M_{\ell-1}$, call this element $(i',\fplong{\ell-1}(i'))$, and enters the checking state. Once $i>i'+m_\ell+\delta$,
level~$\ell$ will start checking if $i'$ is also a matching position with $P_\ell$. It does so by first computing the fingerprint $\fplong{\ell}(i')\ensuremath{\ominus} \fplong{\ell-1}(i')$,
which by definition equals $\big(\fplong{\ell}(i')-\fplong{\ell-1}(i')\big) r^{-i'-m_{\ell-1}} \bmod p$. We can ensure the fingerprint $\fplong{\ell}(i')$ is always available when needed by maintaining a circular buffer of the most recent $\Theta(\delta)$ fingerprints of the text. Similarly we can obtain $r^{-i'-m_{\ell-1}}\bmod p$ in $O(1)$ time by keeping a buffer of the most recent $\Theta(\delta)$ values of $r^{-i}\bmod p$ along with $r^{-m_{\ell}}\bmod p$ for all $\ell$. \label{page:buff}
Over the next at most $|\ensuremath{\Sigma_{\textup{P}}}|$ arriving symbols for which Subprocess~\textup{B}\xspacephi is considering level~$\ell$ (i.e. those with $\ell = 1 + (i \bmod s)$),
$\fpinner{\ell}(i')$ will be computed from
$\fplong{\ell}(i') \ensuremath{\ominus} \fplong{\ell-1}(i')$ by stepping through the elements of the queue $D_\ell$. For any element $(j,\ensuremath{\textup{pred}}d{T}[j]) \in D_\ell$, we have that $(j-i'-m_{\ell-1}) \in \ensuremath{\Delta}_\ell(i')$ if and only if $\ensuremath{\textup{pred}}d{T}[j]>j-i'$.
Further, as Subprocess~\textup{B}\xspacedelta stored $r^j \bmod p$ with the element in $D_\ell$ and $r^{i'} \bmod p$ is obtained through the circular buffer as above, we can perform the zeroing in $O(1)$ time.
Having computed $\fpinner{\ell}(i')$, we then compare it to $\phi(\ensuremath{\textup{pred}}d{P_\ell}[m_{\ell-1}\ensuremath{,\,\,} (m_\ell-1)])$. If they are equal, we have a \mbox{p-match}\xspace with $P_\ell$ at position $i'$ of the text, and the pair $(i',\fplong{\ell}(i'))$ is added to the queue $M_\ell$. This occurs before $T[i'+m_\ell+3\delta]$ arrives.
\subsection{Correctness, time and space analysis}
The time and space complexity almost follow immediately from the description of our algorithm, but a little more attention is required to verify that the algorithm actually works. In particular one has to show that buffers do not overflow, elements in queues are dealt with before being discarded and every possible match will be found (disregarding the probabilistic error in the fingerprint comparisons). The proof of the next lemma is given in Appendix~\ref{appendix:correctness}.
\begin{lemma}
\label{lem:correctness}
The algorithm described above proves Theorem~\ref{thm:main}.
\end{lemma}
\section{The deterministic matching algorithm}\label{sec:smallrho}
We now describe the deterministic algorithm that solves Theorem~\ref{thm:kmp-main}. Its running time is $O(1)$ time per character and it uses $O(|\ensuremath{\Sigma_{\textup{P}}}|+\rho)$ words of space, where $\rho$ is the parameterized period
of $P$. We require that both the pattern and text alphabets are $\ensuremath{\Sigma_{\textup{P}}}=\{0,\dots,|\ensuremath{\Sigma_{\textup{P}}}|-1\}$.
We first briefly summarise the overall approach of \cite{AFM:1994} which our algorithm follows. It resembles the classic KMP algorithm. When $T[i]$ arrives, the overall goal is to calculate the largest $r$ such that $P[0 \ensuremath{,\,\,} r-1]$ \mbox{p-match}\xspacees $T[(i- r+1)\ensuremath{,\,\,} i]$.
A \mbox{p-match}\xspace occurs iff $r=m$.
When a new text character $T[i+1]$ arrives the algorithm compares
$\ensuremath{\textup{pred}}d{P}[r]$ to $\ensuremath{\textup{pred}}d{T}[i+1]$ in $O(1)$ time to determine whether $P[0 \ensuremath{,\,\,} r]$ \mbox{p-match}\xspacees $T[(i- r+1)\ensuremath{,\,\,} i+1]$.
More precisely, the algorithm checks whether either $\ensuremath{\textup{pred}}d{P}[r]=\ensuremath{\textup{pred}}d{T}[i+1]$, or $\ensuremath{\textup{pred}}d{P}[r]=0 \,\wedge\, \ensuremath{\textup{pred}}d{T}[i+1] > r$. The second case covers the possibility that the previous occurrence in the text was outside the window.
If there is a match, we set $r \leftarrow r +1$ and $i \leftarrow i+1$ and continue with the next text character. If not, we shift the pattern prefix $P[0 \ensuremath{,\,\,} r-1]$ along by its \mbox{p-period}\xspace, denoted $\rho_{r-1}$, so that it is aligned with $T[(i- r+\rho_{r-1}+1)\ensuremath{,\,\,} i]$. This is the next candidate for a \mbox{p-match}\xspace. In the original algorithm, the p-periods of all prefixes are stored in an array of length $m$ called a prefix table.
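To make this loop concrete, the following Python sketch implements the uncompressed version just described: it stores the full predecessor string and prefix table, uses naive preprocessing, and ignores both the $O(|\ensuremath{\Sigma_{\textup{P}}}|+\rho)$ space compression and the deamortisation developed below. All identifiers are illustrative.
\begin{verbatim}
def pred_string(S):
    """Predecessor string: distance back to the previous occurrence, 0 if none."""
    last, out = {}, []
    for i, c in enumerate(S):
        out.append(i - last[c] if c in last else 0)
        last[c] = i
    return out

def p_period_table(P):
    """table[r] = p-period of P[0..r-1], computed naively for this sketch."""
    m, table = len(P), [0] * (len(P) + 1)
    for r in range(1, m + 1):
        for rho in range(1, r + 1):
            if pred_string(P[rho:r]) == pred_string(P[:r - rho]):
                table[r] = rho
                break
    return table

def p_match_positions(P, T):
    """Yield every index i such that P p-matches T[i - len(P) + 1 .. i]."""
    m, pred_P, table = len(P), pred_string(P), p_period_table(P)

    def extends(r, pred_t):
        # Window-relative predecessor of the arriving symbol for a window of
        # length r + 1: occurrences before the window do not count.
        window_pred = pred_t if 0 < pred_t <= r else 0
        return window_pred == pred_P[r]

    last, r = {}, 0
    for i, c in enumerate(T):
        pred_t = i - last[c] if c in last else 0
        last[c] = i
        while r > 0 and not extends(r, pred_t):
            r -= table[r]                  # shift the prefix by its p-period
        if extends(r, pred_t):
            r += 1
        if r == m:
            yield i
            r -= table[m]                  # shift the whole pattern by rho

# Example: "aba" p-matches only the substring "xyx" of "xyxzy".
print(list(p_match_positions("aba", "xyxzy")))     # prints [2]
\end{verbatim}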
The main hurdle we must tackle is to store both a prefix table suitable for parameterized matching as well as an encoding of the pattern in only $O(|\ensuremath{\Sigma_{\textup{P}}}|+\rho)$ space, while still allowing efficient access to both. It is well-known that any string $P$ can be stored in space proportional to its exact period. In Lemma~\ref{lem:pred-const}, which follows from Lemma~\ref{lem:split}, we show an analogous result for $\ensuremath{\textup{pred}}d{P}$.
See Appendix~\ref{appendix:smallrho} for proofs.
\begin{lemma}\label{lem:split}
For any $j\in [\rho]$ there is a constant $k_j$ such that $\ensuremath{\textup{pred}}d{P}[j+k\rho]$ is 0 for $k<k_j$, and $c_j$ for $k\geqslant k_j$, where $c_j>0$ is a constant that depends on $j$.
\end{lemma}
\begin{lemma}
\label{lem:pred-const}
The predecessor string $\ensuremath{\textup{pred}}d{P}$ can be stored in $O(\rho)$ space, where $\rho$ is the \mbox{p-period}\xspace of~$P$. Further, for any $j\in [m]$ we can obtain $\ensuremath{\textup{pred}}d{P}[j]$ from this representation in $O(1)$ time.
\end{lemma}
We now explain how to store the parameterized prefix table in only $O(\rho)$ space, in contrast to $\Theta(m)$ space which a standard prefix table would require. The \mbox{p-period}\xspace $\rho_r$ of $P[0\ensuremath{,\,\,} r]$ is, as a function of $r$, non-decreasing in~$r$. This property enables us to run-length encode the prefix table and store it as a doubly linked list with at most $\rho$ elements, hence using only $O(\rho)$ space. Each element corresponds to an interval of prefix lengths with the same \mbox{p-period}\xspace, and the elements are linked together in increasing order (of the common \mbox{p-period}\xspace).
This representation does not allow $O(1)$ time random access to the \mbox{p-period}\xspace of any prefix, however, for our purposes it will suffice to perform sequential access.
To accelerate computation we also store a second linked list of the indices of the first occurrences of each symbol in $P$ in ascending order, i.e. every $j$ such that $\ensuremath{\textup{pred}}d{P}[j]=0$. This uses $O(|\ensuremath{\Sigma_{\textup{P}}}|)$ space.
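A minimal sketch of these two compressed lists (represented here as plain Python lists rather than linked lists, with illustrative identifiers) is the following, where \texttt{p\_period[r]} denotes the \mbox{p-period}\xspace of the prefix of length $r$.
\begin{verbatim}
def run_length_prefix_table(p_period, m):
    """One (shortest prefix length, common p-period) pair per run.  Since the
    p-period is non-decreasing in the prefix length and never exceeds rho,
    there are at most rho runs."""
    runs = []
    for r in range(1, m + 1):
        if not runs or p_period[r] != runs[-1][1]:
            runs.append((r, p_period[r]))
    return runs

def first_occurrences(pred_P):
    """Indices j with pred_P[j] = 0, in ascending order; O(|Sigma_P|) entries."""
    return [j for j, v in enumerate(pred_P) if v == 0]
\end{verbatim}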
There is a crucial second advantage to compressing the prefix table which is that it allows us to upper bound the number of prefixes of $P$ we need to inspect when a mismatch occurs. When a mismatch occurs in our algorithm, we repeatedly shift the pattern until a \mbox{p-match}\xspace between a text suffix and pattern prefix occurs. Naively it seems that we might have to check many prefixes within the same run.
However, as a consequence of Lemma~\ref{lem:split}
we are assured that if some prefix does not \mbox{p-match}\xspace, every prefix in the same run with $\ensuremath{\textup{pred}}d{P}[j]\neq 0$ will also mismatch (except possibly the longest). Therefore we can skip inspecting these prefixes.
This can be seen by observing (using Lemma~\ref{lem:split}) that for $j$ such that $\rho_j = \rho_{j+1}$, we have $\ensuremath{\textup{pred}}d{P}[j-\rho_j] \in \{0,\,\,\ensuremath{\textup{pred}}d{P}[j]\}$.
By keeping pointers into both linked lists, it is straightforward to find the next prefix to check in $O(1)$ time.
Whenever we perform a pattern shift we move at least one of the pointers to the left. Therefore the total number of pattern shifts performed while processing $T[i]$ is at most $O(|\ensuremath{\Sigma_{\textup{P}}}| + \rho)$. As each pointer only moves to the right by at most one when each $T[i]$ arrives, an amortised time complexity of $O(1)$ per character follows. The space usage is $O(|\ensuremath{\Sigma_{\textup{P}}}| + \rho)$ as claimed, dominated by the linked lists.
We now briefly discuss how to deamortise our solution by applying Galil's KMP deamortisation argument~\cite{Galil:1981}. The main idea is to restrict the algorithm to shift the pattern at most twice when each text character arrives, giving a constant time algorithm. If we have not finished processing $T[i]$ by this point we accept $T[i+1]$ but place it on the end of a buffer, output `no match' and continue processing $T[i]$. The key property is that the number of text arrivals until the next \mbox{p-match}\xspace occurs is at least the length of the buffer. As we shift the pattern up to twice during each arrival we always clear the buffer before (or as) the next \mbox{p-match}\xspace occurs. Further, the size of the buffer is always $O(|\ensuremath{\Sigma_{\textup{P}}}| + \rho)$. This follows from the observation above that the number of pattern shifts required to process a single text character is $O(|\ensuremath{\Sigma_{\textup{P}}}| + \rho)$.
This concludes the description of the algorithm. Combining it with the lower bound result of Appendix~\ref{appendix:space} proves Theorem~\ref{thm:kmp-main}.
\section{The proof of Lemma~\ref{lem:arithmetic}}\label{sec:arithmetic-proof}
In this section we prove the important Lemma~\ref{lem:arithmetic}. Let \ensuremath{i_\text{left}}\xspace denote an arbitrary position in $T$ where $P$ \mbox{p-match}\xspacees. Let $X$ be the set of positions at which $P$ \mbox{p-match}\xspacees within $T[\ensuremath{i_\text{left}}\xspace \ensuremath{,\,\,} (\ensuremath{i_\text{left}}\xspace+3m/2-1)]$. We now prove that there exist disjoint sets $Y$ and $\ensuremath{\mathcal{A}}\xspace$ with the properties set out in the statement of the lemma.
Let $\alpha$ be the smallest integer such that all distinct symbols in $P$ occur in the prefix $P[0\ensuremath{,\,\,}\alpha]$. We begin by showing that $\rho$, the $\mbox{p-period}\xspace$ of $P$, is at least $\alpha/|\Sigma|$. From the minimality of $\alpha$, we have that $P[\alpha]$ is the leftmost occurrence of some symbol. By the definition of the $\mbox{p-period}\xspace$, we have that $P[0 \ensuremath{,\,\,} (m-1-\rho)]$ \mbox{p-match}\xspacees $P[\rho\ensuremath{,\,\,} m-1]$. Under this shift, $P[\alpha]$ (in $P[\rho\ensuremath{,\,\,} m-1]$) is aligned with $P[\alpha-\rho]$ (in $P[0 \ensuremath{,\,\,} (m-1-\rho)]$). Assume that $P[\alpha-\rho]$ is not a leftmost occurrence and let $j<\alpha-\rho$ be the position of its previous occurrence, so that $P[j]=P[\alpha-\rho]$. As a parameterized match occurs, the predecessor values at the aligned positions agree, so $P[\alpha]$ must have a previous occurrence at position $j+\rho<\alpha$, contradicting the fact that $P[\alpha]$ is a leftmost occurrence. Hence $P[\alpha-\rho]$ is also a leftmost occurrence. By repeating this argument we find leftmost occurrences, and hence distinct symbols, at every position $\alpha-k\rho\geqslant 0$ with $k>0$. This immediately implies that $\rho\geqslant \alpha/|\Sigma|$.
We first deal with two simple cases: $\rho > m/8$ or $\alpha\geqslant m/4$ (which implies that $\rho\geqslant m/(4|\Sigma|)$). In these two cases the number of \mbox{p-match}\xspacees is easily upper bounded by $6|\Sigma|$, since any two \mbox{p-match}\xspacees are at least $\rho$ positions apart and all matching positions lie in a window of length $m/2+1$, so all positions can be stored in the set $Y$.
We therefore continue under the assumption that $\alpha< m/4$ and $\rho< m/8$. As $\rho\geqslant \alpha/|\Sigma|$, there are at most $(\alpha+1)/(\alpha/|\Sigma|)\leqslant 2|\Sigma|$ positions from the range $[\ensuremath{i_\text{left}}\xspace \ensuremath{,\,\,} \ensuremath{i_\text{left}}\xspace+\alpha]$ at which $P$ can parameterize match $T$. We can store these positions in the set $Y$. Next we will show that the positions from the range $[(\ensuremath{i_\text{left}}\xspace+\alpha+1) \ensuremath{,\,\,} (\ensuremath{i_\text{left}}\xspace+3m/2-1)]$ at which $P$ parameterize matches $T$ can be represented by the arithmetic progression~$\ensuremath{\mathcal{A}}\xspace$.
First we show that $\rho$ is an \emph{exact period} (not \mbox{p-period}\xspace) of $\ensuremath{\textup{pred}}d{P}[\alpha+1\ensuremath{,\,\,} m-1]$ (but not necessarily the shortest period). Consider arbitrary positions $P[j]$ and $P[j-\rho]$ where $\alpha<j<m-\rho$. By the definition of the $\mbox{p-period}\xspace$, we have that $P[\rho\ensuremath{,\,\,} m-1]$ \mbox{p-match}\xspacees $P[0 \ensuremath{,\,\,} (m-1-\rho)]$ and hence that $\ensuremath{\textup{pred}}d{P[\rho\ensuremath{,\,\,} m-1]}= \ensuremath{\textup{pred}}d{P[0 \ensuremath{,\,\,} (m-1-\rho)]}$.
In particular, $\ensuremath{\textup{pred}}d{P[\rho\ensuremath{,\,\,} m-1]}[j]= \ensuremath{\textup{pred}}d{P[0 \ensuremath{,\,\,} (m-1-\rho)]}[j]=\ensuremath{\textup{pred}}d{P}[j]$,
where the second equality follows because we take the predecessor string of a prefix of $P$.
Also observe that $\ensuremath{\textup{pred}}d{P[\rho\ensuremath{,\,\,} m-1]}[j]$ either equals $0$ or $\ensuremath{\textup{pred}}d{P}[j-\rho]$ by definition. Further, $\ensuremath{\textup{pred}}d{P[0 \ensuremath{,\,\,} (m-1-\rho)]}[j]= \ensuremath{\textup{pred}}d{P}[j] \neq 0$ as $j>\alpha$ and all leftmost occurrences are before $\alpha$.
This implies that $\ensuremath{\textup{pred}}d{P[\rho\ensuremath{,\,\,} m-1]}[j] \neq 0$, hence, as required, $\ensuremath{\textup{pred}}d{P}[j-\rho]=\ensuremath{\textup{pred}}d{P[\rho\ensuremath{,\,\,} m-1]}[j]=\ensuremath{\textup{pred}}d{P[0 \ensuremath{,\,\,} (m-1-\rho)]}[j]=\ensuremath{\textup{pred}}d{P}[j].$
Recall that $P$ \mbox{p-match}\xspacees $T[\ensuremath{i_\text{left}}\xspace \ensuremath{,\,\,} \ensuremath{i_\text{left}}\xspace+m-1]$ so $\ensuremath{\textup{pred}}d{P}=\ensuremath{\textup{pred}}d{T[\ensuremath{i_\text{left}}\xspace \ensuremath{,\,\,} \ensuremath{i_\text{left}}\xspace+m-1]}$ and hence $\rho$ is an exact period of $\ensuremath{\textup{pred}}d{T[\ensuremath{i_\text{left}}\xspace \ensuremath{,\,\,} \ensuremath{i_\text{left}}\xspace+m-1]}[\alpha+1\ensuremath{,\,\,} m-1]$. Let $j\in\{\alpha+1,\dots,m-1\}$
and observe that by definition, $\ensuremath{\textup{pred}}d{T[\ensuremath{i_\text{left}}\xspace \ensuremath{,\,\,} \ensuremath{i_\text{left}}\xspace+m-1]}[j] \in \{ 0,\, \ensuremath{\textup{pred}}d{T}[\ensuremath{i_\text{left}}\xspace +j]\}$.
However, $\ensuremath{\textup{pred}}d{T[\ensuremath{i_\text{left}}\xspace \ensuremath{,\,\,} (\ensuremath{i_\text{left}}\xspace+m-1)]}[j]= \ensuremath{\textup{pred}}d{P}[j] >0$ because $j>\alpha$ and all leftmost occurrences are in $P[0 \ensuremath{,\,\,} \alpha]$. This implies that $\ensuremath{\textup{pred}}d{T[\ensuremath{i_\text{left}}\xspace \ensuremath{,\,\,} (\ensuremath{i_\text{left}}\xspace+m-1)]}[j]=\ensuremath{\textup{pred}}d{T}[\ensuremath{i_\text{left}}\xspace +j]$. As $j$ was arbitrary, we have that $\ensuremath{\textup{pred}}d{T}[(\ensuremath{i_\text{left}}\xspace+\alpha+1) \ensuremath{,\,\,} (\ensuremath{i_\text{left}}\xspace+m-1)]=\ensuremath{\textup{pred}}d{T[\ensuremath{i_\text{left}}\xspace \ensuremath{,\,\,} (\ensuremath{i_\text{left}}\xspace+m-1)]}[\alpha+1\ensuremath{,\,\,} m-1]$ and hence $\rho$ is an exact period of $\ensuremath{\textup{pred}}d{T}[(\ensuremath{i_\text{left}}\xspace+\alpha+1) \ensuremath{,\,\,} (\ensuremath{i_\text{left}}\xspace+m-1)]$.
Let $\ensuremath{i_\text{right}}\xspace$ be the rightmost position in $T[\ensuremath{i_\text{left}}\xspace \ensuremath{,\,\,} \ensuremath{i_\text{left}}\xspace+3m/2-1]$ where $P$ \mbox{p-match}\xspacees. By the same argument as for $\ensuremath{i_\text{left}}\xspace$, we have that $\rho$ is an exact period of $\ensuremath{\textup{pred}}d{T}[(\ensuremath{i_\text{right}}\xspace+\alpha+1) \ensuremath{,\,\,} (\ensuremath{i_\text{right}}\xspace+m-1)]$.
Thus, both $\ensuremath{\textup{pred}}d{T}[(\ensuremath{i_\text{left}}\xspace+\alpha+1) \ensuremath{,\,\,} (\ensuremath{i_\text{left}}\xspace+m-1)]$ and $\ensuremath{\textup{pred}}d{T}[(\ensuremath{i_\text{right}}\xspace+\alpha+1) \ensuremath{,\,\,} (\ensuremath{i_\text{right}}\xspace+m-1)]$ have an exact period of $\rho$. As these two strings overlap by at least $\rho$ characters, we have that $\rho$ is also an exact period of $\ensuremath{\textup{pred}}d{T}[\ensuremath{i_\text{left}}\xspace+\alpha+1 \ensuremath{,\,\,} \ensuremath{i_\text{right}}\xspace+m-1]$.
Let $i\in\{(\ensuremath{i_\text{left}}\xspace+\alpha+1),\dots,\ensuremath{i_\text{right}}\xspace-1\}$ be arbitrary such that $P$ \mbox{p-match}\xspacees $T[i \ensuremath{,\,\,} (i+m-1)]$. We now prove that if $i+\rho<\ensuremath{i_\text{right}}\xspace$ then $P$ \mbox{p-match}\xspacees $T[i+\rho \ensuremath{,\,\,} (i+\rho+m-1)]$. As \mbox{p-match}\xspacees must be at least $\rho$ characters apart this is sufficient to conclude that all remaining matches form an arithmetic progression with common difference $\rho$.
As $\rho$ is an exact period of $\ensuremath{\textup{pred}}d{T}[(\ensuremath{i_\text{left}}\xspace+\alpha+1) \ensuremath{,\,\,} (\ensuremath{i_\text{right}}\xspace+m-1)]$, we have that $\ensuremath{\textup{pred}}d{T}[i \ensuremath{,\,\,} (i+m-1)]=\ensuremath{\textup{pred}}d{T}[i+\rho \ensuremath{,\,\,} (i+\rho+m-1)]$. By definition, this implies that $\ensuremath{\textup{pred}}d{T[i \ensuremath{,\,\,} (i+m-1)]}=\ensuremath{\textup{pred}}d{T[i+\rho \ensuremath{,\,\,} (i+\rho+m-1)]}$ and hence a \mbox{p-match}\xspace also occurs at $i+\rho$. This concludes the proof of Lemma~\ref{lem:arithmetic}.
\section{Space lower bounds}\label{appendix:space}
To complete the picture we give nearly matching space lower bounds which show that our
solutions are optimal to within log factors. The proof is by a
communication complexity argument. In
essence one can show that in the randomised case Alice is able to
transmit any string of length $\Theta(|\ensuremath{\Sigma_{\textup{P}}}|)$ bits to Bob
using a solution to the matching problem by selecting a suitable pattern and streaming text. Similarly in the deterministic case (see below) one can show
that she can send $\Theta(|\ensuremath{\Sigma_{\textup{P}}}|+\rho)$ bits.
\begin{proof}[\prooftheorem{thm:space}]
Consider first a pattern where all symbols are distinct, e.g.\@ $P=\texttt{123456}$. Now let us assume Alice would like to send a bit-string to Bob. She can encode the bit-string as an instance of the parameterized matching problem in the following way. As an example, assume the bit-string is \texttt{01011}. She first creates the first half of a text stream \texttt{aBcDE} where we choose capitals to correspond to \texttt{1} and lower case symbols to correspond to \texttt{0} from the original bit-string. She starts the matching algorithm and runs it until the pattern and the first half of the text have been processed and then sends a snapshot of the memory to Bob. Bob then continues with the second half of the text which is fixed to be the sorted lower case symbols, in this case \texttt{abcde}. Where Bob finds a parameterized match he outputs a \texttt{1} and where he does not, he outputs a \texttt{0}. Thus Alice's bit-string is reproduced by Bob. In general, if we restrict the alphabet size of the pattern to be $|\ensuremath{\Sigma_{\textup{P}}}|$ then Alice can similarly encode a bit-string of length $|\ensuremath{\Sigma_{\textup{P}}}|-1$, and successfully transmit it to Bob, giving us an $\Omega(|\ensuremath{\Sigma_{\textup{P}}}|)$ bit lower bound on the space requirements of any streaming algorithm.
\end{proof}
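The encoding is easy to check on small instances. The following self-contained Python sketch (with illustrative identifiers) reproduces Alice's bit-string, using the fact that, for a pattern whose symbols are all distinct, a \mbox{p-match}\xspace occurs exactly when all symbols in the aligned text window are distinct.
\begin{verbatim}
import string

def alice_half(bits):
    """Encode bit j as the j-th letter, upper case for 1 and lower case for 0."""
    letters = string.ascii_lowercase
    return [letters[j].upper() if b else letters[j] for j, b in enumerate(bits)]

def bob_decodes(bits):
    k = len(bits)
    text = alice_half(bits) + list(string.ascii_lowercase[:k])
    m = k + 1                              # the pattern has k + 1 distinct symbols
    out = []
    for end in range(m - 1, len(text)):    # alignments ending in Bob's half
        window = text[end - m + 1 : end + 1]
        out.append(1 if len(set(window)) == m else 0)   # p-match iff all distinct
    return out

assert bob_decodes([0, 1, 0, 1, 1]) == [0, 1, 0, 1, 1]
\end{verbatim}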
If randomisation is not allowed, the lower bound increases to $\Omega(|\ensuremath{\Sigma_{\textup{P}}}| + \rho)$ bits of space. Here $\rho$ is the parameterized period of the pattern. This bound follows by a similar argument, devising a one-to-one encoding of bit-strings of length $\Theta(\rho)$ into $P[0 \ensuremath{,\,\,} \rho -1]$. The key difference is that with a deterministic algorithm, Bob can enumerate all possible $m$-length texts to recover Alice's bit-string from~$P$.
\section{Correctness proof of the main algorithm}\label{appendix:correctness}
\begin{proof}[\prooflemma{lem:correctness}]
Coupled with the discussion in Section~\ref{sec:overview}, the time and space complexity almost follow immediately from the description. It only remains to show that, at any time, $|\ensuremath{\mathcal{B}}\xspace| \leqslant |\ensuremath{\Sigma_{\textup{P}}}|$. First observe that any symbol $\sigma \in \Sigma_T$ is only inserted into $\ensuremath{\mathcal{B}}\xspace$ when $\ensuremath{\textup{pred}}d{T}[i]>m_0>\delta$ which can only happen at most once in every $\delta = |\ensuremath{\Sigma_{\textup{P}}}| \log{m}$ arriving symbols. Further we remove one element every $s \leqslant \lceil \log m \rceil$ arrivals and in particular remove the $\sigma$ occurrence after at most $|\ensuremath{\mathcal{B}}\xspace| \lceil \log m \rceil$ arrivals. As $\ensuremath{\mathcal{B}}\xspace$ is initially empty, by induction it follows that no symbol occurs more than once in $\ensuremath{\mathcal{B}}\xspace$.
For correctness, it remains to show that we correctly obtain the positions of $\fpinner{\ell}(i')$ from $D_\ell$. It follows from the description that all positions of $\fpinner{\ell}(i')$ correspond to elements inserted into $D_\ell$ at some point. However we need to prove that these elements are present in $D_\ell$ while $\fpinner{\ell}(i')$ is calculated. Any element inserted into $\ensuremath{\mathcal{B}}\xspace$ during $T[i' \ensuremath{,\,\,} (i'+m_{\ell}-1)]$ has cleared the buffer by the end of interval B (which has length $\delta$) by the argument above. Therefore any relevant element has been inserted into $D_\ell$ by the start of interval C, during which we calculate $\fpinner{\ell}(i')$. Any element inserted into $D_\ell$ is at least $m_{\ell-1}$ characters from its predecessor. Therefore, summing over all symbols in the alphabet, there are at most $4|\Sigma_P|$ positions in $T[i' \ensuremath{,\,\,} (i'+2m_{\ell}-1)]$ which are inserted into $D_\ell$. As $D_\ell$ is a FIFO queue of size $12|\Sigma|$, the relevant elements are still present after interval C.
As commented earlier, potential matches in $M_\ell$ are separated by more than $3\delta$ arrivals because $P_{\ell-1}$ has $\mbox{p-period}\xspace$ more than $3\delta$. They are processed within $3\delta$ arrivals so $M_\ell$ does not overflow. This completes the correctness.
\end{proof}
\section{Proof of Theorem~\ref{thm:filter} (general alphabets)} \label{appendix:filter}
Let $\Sigma_T$ denote the text alphabet.
In order to handle general alphabets we perform two reductions in sequence on each arriving text symbol (and on $P$ during preprocessing). The first reduces $\Sigma_P$ and $\Sigma_T$ to each contain only symbols from $\Pi$ and one additional variable symbol (which is different for $P$ and $T$). A suitable such reduction is given in~\cite{AFM:1994} (Lemma 2.2). The reduction is presented for the offline version but immediately generalises by using the constant time exact matching algorithm of Breslauer and Galil~\cite{BG:2011}.
We now define $\ensuremath{\Sigma_{\textup{P}}}'$ to be the pattern alphabet after the first reduction (and $\ensuremath{\Sigma_{\textup{T}}}'$ respectively). Note that $|\ensuremath{\Sigma_{\textup{P}}}'|=|\ensuremath{\Sigma_{\textup{T}}}'|=|\Pi|+1$ and all pattern symbols are variables. However we have no guarantee on the bit representations of the alphabet symbols. Let $T'$ and $P'$ denote the text and pattern after the first reduction. The second reduction now maps each $T'[i]$ into the range $\{0,\ldots,|\ensuremath{\Sigma_{\textup{P}}}'|\}$ as it arrives. The equivalent reduction for the pattern is a simplification which can be performed in preprocessing.
Let the strings $\ensuremath{S}$ and $\ensuremath{S}Filt$ denote the last $m$ characters of the unfiltered (post first reduction) and filtered (post second reduction) stream, respectively. Let $\ensuremath{\Sigma_{\textup{last}}}\subseteq \ensuremath{\Sigma_{\textup{T}}}'$ denote the up to $|\ensuremath{\Sigma_{\textup{P}}}'|+1$ last distinct symbols in $\ensuremath{S}$, hence $|\ensuremath{\Sigma_{\textup{last}}}|$ is never more than $|\ensuremath{\Sigma_{\textup{P}}}'|+1$. Let $\ensuremath{\mathcal{T}}\xspace$ be a dynamic dictionary on $\ensuremath{\Sigma_{\textup{last}}}$ such that a symbol in $\ensuremath{\Sigma_{\textup{T}}}'$ can be looked up, deleted and added in $O(\sqrt{\log{|\ensuremath{\Sigma_{\textup{P}}}'|}/\log{\log{|\ensuremath{\Sigma_{\textup{P}}}'|}}})$ time~\cite{AT:2000}. Every symbol that arrives in the stream is associated with its ``arrival time'', which is an integer that increases by one for every new symbol arriving in the stream. Let $\ensuremath{\mathcal{L}}\xspace$ be an ordered list of the symbols in $\ensuremath{\Sigma_{\textup{last}}}$ (together with their most recent arrival time) such that $\ensuremath{\mathcal{L}}\xspace$ is ordered according to the most recent arrival time. For example,
\begin{equation}
\label{eq:list}
\ensuremath{\mathcal{L}}\xspace = (\texttt{d}, 25),\, (\texttt{b}, 33),\, (\texttt{g}, 58),\, (\texttt{e}, 102)
\end{equation}
means that the symbols \texttt{b}, \texttt{d}, \texttt{e} and \texttt{g} are the last four distinct symbols that appear in $\ensuremath{S}$ (for this example, $|\ensuremath{\Sigma_{\textup{P}}}'|+1\geqslant 4$), where the last \texttt{e} arrived at time~102, the last \texttt{g} arrived at time~58, and so on.
By using appropriate pointers between elements of the dictionary~$\ensuremath{\mathcal{T}}\xspace$ and elements of $\ensuremath{\mathcal{L}}\xspace$ (which could be implemented as a linked list), we can maintain $\ensuremath{\mathcal{T}}\xspace$ and $\ensuremath{\mathcal{L}}\xspace$ with a constant number of dictionary operations per arriving symbol. To see this, take the example in Equation~(\ref{eq:list}) and consider the arrival of a new symbol $x$ at time~103 (following the last symbol \texttt{e}). First we look up $x$ in $\ensuremath{\mathcal{T}}\xspace$ and if $x$ already exists in $\ensuremath{\Sigma_{\textup{last}}}$, move it to the right end of $\ensuremath{\mathcal{L}}\xspace$ by deleting and inserting where needed and update the element to $(x,103)$. Also check that the leftmost element of $\ensuremath{\mathcal{L}}\xspace$ is not a symbol that has been pushed outside of $\ensuremath{S}$ when $x$ arrived. We use its arrival time to determine this and remove the leftmost element accordingly. If the arriving symbol $x$ does not already exist in $\ensuremath{\Sigma_{\textup{last}}}$, then we add $(x,103)$ to the right end of $\ensuremath{\mathcal{L}}\xspace$. To ensure that $\ensuremath{\mathcal{L}}\xspace$ does not contain more than $|\ensuremath{\Sigma_{\textup{P}}}'|+1$ elements, we remove the leftmost element of $\ensuremath{\mathcal{L}}\xspace$ if necessary. We also remove the leftmost symbol if it has been pushed outside of $\ensuremath{S}$. The dictionary $\ensuremath{\mathcal{T}}\xspace$ is of course updated accordingly as well.
Let $\ensuremath{\Sigma_{\textup{filt}}}=\{0,\dots,|\ensuremath{\Sigma_{\textup{P}}}'|\}$ denote the symbols outputted by the filter. We augment the elements of $\ensuremath{\mathcal{L}}\xspace$ to maintain a mapping $\ensuremath{\mathcal{M}}\xspace$ from the symbols in $\ensuremath{\Sigma_{\textup{last}}}$ to distinct symbols in $\ensuremath{\Sigma_{\textup{filt}}}$ as follows. Whenever a new symbol is added to $\ensuremath{\Sigma_{\textup{last}}}$, map it to an unused symbol in $\ensuremath{\Sigma_{\textup{filt}}}$. If no such symbol exists, then use the symbol that is associated with the symbol of $\ensuremath{\Sigma_{\textup{last}}}$ that is to be removed from $\ensuremath{\Sigma_{\textup{last}}}$ (note that $|\ensuremath{\Sigma_{\textup{last}}}|\leqslant |\ensuremath{\Sigma_{\textup{filt}}}|$). The mapping $\ensuremath{\mathcal{M}}\xspace$ specifies the filtered stream: when a symbol $x$ arrives, the filter outputs $\ensuremath{\mathcal{M}}\xspace(x)$. Finding $\ensuremath{\mathcal{M}}\xspace(x)$ and updating $\ensuremath{\mathcal{T}}\xspace$ is done in $O(1)$ time per arriving character, and both the tree $\ensuremath{\mathcal{T}}\xspace$ and the list $\ensuremath{\mathcal{L}}\xspace$ can be stored in $O(|\ensuremath{\Sigma_{\textup{P}}}'|)$ space.
It remains to show that the filtered stream does not induce any false matches or miss a potential match. Suppose first that the number of distinct symbols in $\ensuremath{S}$ is $|\ensuremath{\Sigma_{\textup{P}}}'|$ or fewer. That is, $\ensuremath{\Sigma_{\textup{last}}}$ contains all distinct symbols in $\ensuremath{S}$. Every symbol $x$ in $\ensuremath{S}$ has been replaced by a unique symbol in $\ensuremath{\Sigma_{\textup{filt}}}$ and the construction of the filter ensures that the mapping is one-to-one. Thus, $\ensuremath{\textup{pred}}d{\ensuremath{S}Filt}=\ensuremath{\textup{pred}}d{\ensuremath{S}}$. Suppose second that the number of distinct symbols in $\ensuremath{S}$ is $|\ensuremath{\Sigma_{\textup{P}}}'|+1$ or more. That is, $|\ensuremath{\Sigma_{\textup{last}}}|=|\ensuremath{\Sigma_{\textup{P}}}'|+1$ and therefore $\ensuremath{S}Filt$ contains $|\ensuremath{\Sigma_{\textup{P}}}'|+1$ distinct symbols. Thus, $\ensuremath{\textup{pred}}d{\ensuremath{S}Filt}$ cannot equal $\ensuremath{\textup{pred}}d{P'}$. The claimed result then follows from Theorem~\ref{thm:main}.
\section{Proofs omitted from Section~\ref{sec:smallrho}} \label{appendix:smallrho}
\begin{proof}[\prooflemma{lem:split}]
Let $\rho$ be the \mbox{p-period}\xspace of $P$. We prove the lemma by contradiction. Suppose, for some $j$ and $k$, that $i=j+k\rho$ is a position such that $\ensuremath{\textup{pred}}d{P}[i]=c\geqslant 1$ and $\ensuremath{\textup{pred}}d{P}[i+\rho]=c'\neq c$. Consider Figure~\ref{fig:pattern-rho} for a concrete example, where $\rho=5$, $i=12$, $\ensuremath{\textup{pred}}d{P}[12]=c=4$ and $\ensuremath{\textup{pred}}d{P}[12+5]=c'=3$.
\begin{figure}
\caption{\label{fig:pattern-rho} An example used in the proof of Lemma~\ref{lem:split}: a pattern with \mbox{p-period}\xspace $\rho=5$ in which $\ensuremath{\textup{pred}}d{P}[12]=4$ and $\ensuremath{\textup{pred}}d{P}[17]=3$.}
\end{figure}
Since $\rho$ is a \mbox{p-period}\xspace of $P$, we have that
\begin{equation*}
\ensuremath{\textup{pred}}d{P[\rho\ensuremath{,\,\,} m-1]} = \ensuremath{\textup{pred}}d{P[0\ensuremath{,\,\,} (m-1-\rho)]} \,.
\end{equation*}
Consider the alignment of positions $i+\rho$ and $i$ (positions 17 and~12 in Figure~\ref{fig:pattern-rho}). We have that $\ensuremath{\textup{pred}}d{P[\rho\ensuremath{,\,\,} m-1]}[i]$ is either $c'$ or~0. In either case, it is certainly not $\ensuremath{\textup{pred}}d{P[0\ensuremath{,\,\,} m-1-\rho]}[i]$ which is $c$. Thus, $\rho$ cannot be a \mbox{p-period}\xspace of $P$.
\end{proof}
\begin{proof}[\prooflemma{lem:pred-const}]
By Lemma~\ref{lem:split} we can encode $\ensuremath{\textup{pred}}d{P}$ by storing the two values $k_j$ and $c_j$ for each $j\in [\rho]$. This takes $O(\rho)$ space. The value $\ensuremath{\textup{pred}}d{P}[i]$ is 0 if $\lfloor i/\rho \rfloor<k_{(i\bmod \rho)}$, otherwise it is $c_{(i\bmod \rho)}$.
\end{proof}
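As an illustration, a minimal Python sketch of this encoding (assuming the full predecessor string is available during preprocessing; identifiers are illustrative) is as follows.
\begin{verbatim}
def compress_pred(pred_P, rho):
    """For each residue j in [0, rho), record the first k with
    pred_P[j + k*rho] != 0 (this is k_j) together with that constant value c_j;
    if the residue class is zero throughout, c_j stays 0."""
    m = len(pred_P)
    k_thresh, c_val = [0] * rho, [0] * rho
    for j in range(rho):
        count = (m - j + rho - 1) // rho          # indices j, j+rho, ... below m
        ks = [t for t in range(count) if pred_P[j + t * rho] != 0]
        if ks:
            k_thresh[j], c_val[j] = ks[0], pred_P[j + ks[0] * rho]
        else:
            k_thresh[j], c_val[j] = count, 0
    return k_thresh, c_val

def pred_at(k_thresh, c_val, rho, i):
    """Recover pred_P[i] in O(1) time from the compressed representation."""
    j, t = i % rho, i // rho
    return 0 if t < k_thresh[j] else c_val[j]
\end{verbatim}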
\end{document}
\begin{document}
\title[Divergence property of some generalised Thompson groups]
{Divergence property of the Brown-Thompson groups and braided Thompson groups}
\author{Xiaobing Sheng}
\address{Graduate School of Mathematical Sciences,
The University of Tokyo}
\email{[email protected]
}
\subjclass[2020]{
Primary 20F65,
}
\keywords{
generalised Thompson groups, divergence property}
\date{\today}
\maketitle
\begin{abstract}
Golan and Sapir proved that Thompson's groups $F$, $T$ and $V$ have linear divergence.
In the current paper,
we focus on the divergence property of several generalisations of the Thompson groups.
We first consider the Brown-Thompson groups $F_n$, $T_n$ and $V_n$
(also called Brown-Higman-Thompson group in some other context)
and find that these groups also have linear divergence functions.
We then focus on the braided Thompson groups $BF,$ $\widehat{BF}$ and $\widehat{BV}$ and
prove that these groups have linear divergence.
The case of $BV$ has also been treated independently by Kodama.
\end{abstract}
\tableofcontents
\section{Introduction}
Thompson's groups $F$, $T$, $V$ were first constructed
from a logic point of view
and were later found to have connections
with many other branches of mathematics such as string rewriting systems,
homotopy theory,
combinatorics
and dynamical systems.
$F$ and $T$ can be regarded as subgroups
of the group of homeomorphisms of the circle
while $V$ can be seen as a subgroup of the group of self-homeomorphisms of the Cantor set.
These groups have been generalised to many larger classes since they were defined.
Thompson's group $V$ was first generalised to
a family of finitely presented infinite groups by Higman
in the 1970s,
and the groups $F$, $T$ and $V$ were further extended to infinite families by Brown \cite{MR885095}
and later generalised by Stein \cite{MR1094555};
at the same time,
some of their homological and simplicity results were obtained \cite{MR885095}.
More recently, the braided versions
and the higher dimensional versions have been defined and investigated.
Among these generalisations of Thompson's groups,
most of the early results concern algebraic and topological properties of the groups.
The geometry of these groups has been attracting the interest of more geometric group theorists
in recent years,
partially because of the still open amenability problem of Thompson's group $F.$
We list a selection of the geometric results as follows:
the result on the quadratic Dehn function of $F$ by Guba \cite{MR2104775, MR1750493},
the construction of the $CAT(0)$ cube complexes,
on which the groups $F$, $T$, $V$ act properly by Farley \cite{MR2393179},
some subgroup distortion results by Burillo et al. \cite{MR1670622, MR1806724, MR3091272, MR2452818}
and results by Golan and Sapir \cite{MR3978542}
on the divergence property of the three groups.
A more recent result on the linear divergence of the braided Thompson's group $BV$
has been proved by Kodama \cite{Kodama:2020to}.
\subsection{Background}
This work focuses on one of the central notions in geometric group theory,
``hyperbolicity'' or ``the nature of non-positive curvature'' of groups,
and we consider some generalisations of a particular
class of groups, Thompson's groups.
Thompson's groups are not Gromov hyperbolic,
but the above geometric results on the groups suggest that
these groups might have some form of ``hyperbolicity'' in a coarse sense.
The question is then how to measure this ``hyperbolicity'',
since there are many possible generalisations of the notion to hyperbolic-type properties.
The one that we will be focusing on, in the current paper,
is the divergence property.
The idea of divergence property first appeared
in \textit{Asymptotic invariants of infinite groups} by Gromov \cite{MR1253544},
where he suggested that in non-positively curved spaces,
\textit{One expects,
that (at least after plusifications) `infinitely close' geodesic rays issuing from a point,
either diverge linearly or exponentially}.
The formal definition was first stated by Gersten \cite{MR1302334},
and the notion later proved to be a quasi-isometric invariant of metric spaces.
Intuitively, divergence property measures \textit{how two geodesics in a metric space go apart,}
or in other words,
\textit{how hard it is to connect points on two distinct geodesic rays if the backtracking is not allowed.}
When considering the generalised ``hyperbolicity'' or hyperbolic-type properties
in some particularly ``nice'' spaces (for example, Gromov hyperbolic spaces or CAT(0) spaces),
the condition of super-linear divergence is equivalent to
having cut-points in the asymptotic cones of the groups, or to
the ``Morse'' property.
Enlarging the scope further,
the divergence property also reflects large scale geometric properties of the spaces
and their associated groups.
Since divergence is a quasi-isometry invariant,
it can be used to distinguish quasi-isometry classes of groups.
On the other hand, in his thesis \cite{MR3474592},
Tran investigated the divergence property of finitely presented groups
and further defined relative divergence;
the usual definition of divergence in the sense of Gersten \cite{MR1302334}
gives a lower bound on the divergence functions of all the geodesics in the whole space.
In this paper,
we focus on some generalisations of Thompson's groups
and prove that a similar divergence property holds for the geodesics of these groups.
More precisely,
we prove that the Brown-Thompson groups and the braided Thompson group $BF$
have linear divergence.
The result for $BF$ together with the result for $BV$ in \cite{Kodama:2020to}
can be extended to obtain that
$\widehat{BV}$ and $\widehat{BF}$ also have linear divergence.
\section{Basic definitions and notation}
Thompson's groups $F$, $T$
as well as the Brown-Thompson groups $F_n$, $T_n$
can informally be considered as subgroups of the group of piecewise-linear homeomorphisms of
the unit interval $[0,1]$ and the circle $S^1$, respectively,
while the groups $V$ and $V_n$ can be regarded as
subgroups of the group of homeomorphisms of the Cantor set;
they all have several different interpretations.
Here, we will be following the convention in \cite{MR3978542}
and utilising both combinatorial and analytical definitions.
\begin{definition}[Thompson's groups $F$, $T$]
Thompson's groups $F$, $T$
are the groups of orientation-preserving piecewise-linear homeomorphisms
of the unit interval $ [0,1] $ and of the circle $S^1$, obtained from $[0,1]$
by identifying its endpoints, respectively,
which are differentiable except at finitely many dyadic rationals
and such that the slope on each subinterval is an integer power of $2.$
\end{definition}
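For instance (with one common choice of conventions), the standard generator $x_0$ of $F$ can be taken to be the map
$$x_0(t)=\begin{cases} \frac{t}{2}, & 0\leq t\leq \frac{1}{2},\\ t-\frac{1}{4}, & \frac{1}{2}\leq t\leq \frac{3}{4},\\ 2t-1, & \frac{3}{4}\leq t\leq 1,\end{cases}$$
whose breakpoints $\frac{1}{2}$ and $\frac{3}{4}$ are dyadic rationals and whose slopes $\frac{1}{2}$, $1$ and $2$ are integer powers of $2.$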
\begin{definition}[The Brown-Thompson groups]
The Brown-Thompson groups $F_n$, $T_n$,
are the groups of orientation-preserving piecewise-linear homeomorphisms
of the unit interval $ [0,1] $ and the circle $S^1$, respectively,
which are differentiable except at finitely many $n$-adic rationals
such that the slope of each subinterval is an integer power of $n.$
\end{definition}
\begin{definition}[Thompson's group $V$ and its generalisations $V_n$]
$V$ and $V_n$ are the groups of the right-continuous bijections
from the unit interval $[0,1]$ onto itself
which are differentiable except at finitely many dyadic and $n$-adic rationals, respectively,
and such that the slope on each subinterval is an integer power of $2$ and an integer power of $n,$ respectively.
\end{definition}
From this standard interpretation of the groups,
we define the following notion:
an element $g \in V$ is \textit{supported}
on some interval $(a, b) \subset [0,1]$
if for every point $x \in (a, b)$
we have $g(x) \neq x.$
The collection of such subintervals of $[0,1]$ is called
the \textit{support} of $g.$
The idea can be extended to the Brown-Thompson groups as well.
The Thompson groups have many different definitions
\cite{MR0396769, MR1426438, MR1806724}.
Among these different interpretations of the elements of the Thompson groups,
we will mainly use the tree pair representations.
\subsection{The group elements, tree pair representations}
The tree pair representations come from the fact
that intervals or circles with partitions by dyadic rationals
can be identified with trees.
A pair of rooted $n$-ary trees having the same number of leaves
with identical labelings gives rise to a representation of an element of $F_n$,
a pair of such trees with cyclic labelings
represents an element of $T_n$ \cite{Sheng:2018aa},
and
a pair with labelings
induced from an element of the symmetric group $S_{l(n-1)+1}$ represents an element of $V_n,$
where $l \in \mathbb{N}$ and $l(n-1)+1$ is the number of leaves in each tree of the pair.
Hence, the elements in Thompson's groups and the generalisations
can be represented by such labeled tree pairs.
Such pairs of trees can be used to give
the unique normal form for elements in $F, F_n$
and a standard form for elements in $T, T_n$
with respect to the infinite standard generating sets.
These forms can be used to estimate the word lengths of the elements of these groups
and the bounds of the word lengths of elements in $V, V_n$
by replacing the standard infinite generating sets by the finite ones.
This alternative definition is also given in \cite{MR1806724, MR2452818}.
Below is a more precise description for the case of the group $F.$
For each unit interval $[0,1]$
with finitely many dyadic breakpoints,
the subintervals are of the form
$[ \frac{k}{2^m}, \frac{k+1}{2^m}]$
where $k+1\leq 2^m$ and $m, k \in \mathbb{N}\cup \{0\}$.
A unit interval $[0,1]$ with finitely many dyadic breakpoints is called a
\textit{dyadically subdivided interval}
and can be associated with a rooted finite binary tree
by associating each of these breakpoints with a $2$-caret,
i.e. a rooted binary tree with one vertex as the root and two edges attached to it.
Then we can associate two intervals
having the same number of dyadic breakpoints
with a pair of finite binary trees
with the same number of leaves,
namely, we have associated an element of Thompson's group $F$
with a pair of rooted finite binary trees.
Here the leaves in each tree of a tree pair
correspond to the dyadic subintervals in each $[0,1]$
and they are labeled by natural numbers
to indicate
which leaf in the source tree maps to which leaf in the target.
It is proved in \cite{MR1426438} that
there is a unique reduced tree pair representative for each element in $F.$
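To illustrate (under the convention that the source tree records the subdivision of the domain and the target tree that of the range), the element $x_0$ described above sends the subdivision $\{[0,\frac{1}{2}],[\frac{1}{2},\frac{3}{4}],[\frac{3}{4},1]\}$ to $\{[0,\frac{1}{4}],[\frac{1}{4},\frac{1}{2}],[\frac{1}{2},1]\}$; its source tree is the binary tree whose root caret carries a second caret on its right leaf, and its target tree is the binary tree whose root caret carries a second caret on its left leaf.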
The above construction can be generalised to $T$ and $V$
with some variations on the labelings
as well as to the Brown-Thompson groups $F_n$, $T_n$ and $V_n$
by changing binary trees to the $n$-ary trees.
The Brown-Thompson groups $F_n$, $T_n$ and $V_n$,
like the groups $F,$ $T,$ $V,$
are known to be finitely presented
\cite{MR1426438, MR885095}.
Here we illustrate the tree pair representations of the group elements
in the standard finite generating sets $\mathcal{C}$
to give a brief idea of this form in Figure \ref{fig:generators}.
\begin{figure}
\caption{The infinite generating set for $F_n$, $T_n$ and $V_n$.}
\label{fig:generators}
\end{figure}
\subsection{Tree pairs, the Cantor set and the binary words}
Alternatively,
when we consider such tree pairs,
we consider
the correspondence between a rooted finite binary tree
and a different rooted finite binary tree
with the same number of leaves
which can be regarded as a partial automorphism of the infinite binary tree \cite{MR2952772}
such that only the finite top part from the root of the tree is replaced.
In addition,
since the boundary of the infinite binary tree
can be identified with
the Cantor set,
an element of Thompson's group $V$ can be regarded as
a homeomorphism of the Cantor set
that preserves finitely many of the partitions of the Cantor set.
We consider again the tree pair representations
as partial automorphisms of the rooted infinite binary tree, denoted by $\mathcal{T}_{\infty}.$
In $\mathcal{T}_{\infty},$
each node $\nu$ can be associated with a unique finite path from the root,
and we can read off a binary word from the root along the path to the node $\nu$ as follows:
we identify the root of the binary tree with the empty word $\varepsilon;$
when the path goes down to the lower left vertex through an edge of a $2$-caret,
we append $0$ to the word representing the path,
and when the path goes down to the lower right vertex of the same $2$-caret,
we append $1$ to the word representing the path.
Then, each leaf in a rooted finite binary tree can be identified
with a finite binary word in alphabet $\{0,1\}.$
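For example, with this labelling the finite binary tree consisting of a root caret with a second caret attached to its left leaf has three leaves, labelled from left to right by the words $00$, $01$ and $1.$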
The correspondence between a finite binary tree and another finite binary tree
with the same number of the leaves
can then be regarded as
the correspondence between a finite set of binary words
and another finite set of binary words of the same order.
These lead to the concept of branches.
\subsection{Branches and branches of a tree}
\label{subsec22}
Following the description in Golan and Sapir \cite{MR3978542},
we let the symbol $( \mathrm{T}_{+}, \sigma, \mathrm{T}_{-} )$ denote a tree pair
defining an element $g \in V $,
where $ \mathrm{T}_{+}$ and $\mathrm{T}_{-} $ are rooted finite binary trees
with the same number of the leaves
and $\sigma$ denotes the permutation of the order of the leaves.
\begin{definition}
For every leaf $\gamma_o$ in a finite rooted binary tree $\mathrm{T},$
we associate with it
the path, labelled by a binary word,
from the root to $\gamma_o$, and denote this path by
$\ell_{\gamma_o}(\mathrm{T}).$
We call the path $\ell_{\gamma_o}(\mathrm{T})$ a branch of the tree.
\end{definition}
By identifying a leaf in the finite binary tree
with a finite path
and hence with a finite binary word $\omega_o$ over the alphabet $\{0,1\},$
we can express the correspondence between a leaf labeled $\omega_o$ in the source tree $\mathrm{T}_{+}$
and a leaf labeled $\omega_{\sigma(o)}$ in the target tree $\mathrm{T}_{-}$
of a tree pair $( \mathrm{T}_{+}, \sigma, \mathrm{T}_{-} )$
representing the group element $g$ of $V$
by the correspondence between a finite path $\ell_{\omega_o}(\mathrm{T}_{+})$
in the source tree $\mathrm{T}_{+}$
and a finite path $\ell_{\omega_{\sigma(o)}}(\mathrm{T}_{-})$
in the target tree $\mathrm{T}_{-}.$
\begin{definition}
We denote the correspondence by
$\ell_{\omega_o}(\mathrm{T}_{+}) \mapsto \ell_{\omega_{\sigma(o)}}(\mathrm{T}_{-})$
for $( \mathrm{T}_{+}, \sigma, \mathrm{T}_{-} )$ representing $g \in V$
and call it a branch of tree pair $( \mathrm{T}_{+}, \sigma, \mathrm{T}_{-} )$
or a branch of the group element $g$ that the tree pair represents.
\end{definition}
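Continuing the earlier example (and keeping the same conventions), the element $x_0 \in F \subset V$ has the three branches $\ell_{0}(\mathrm{T}_{+}) \mapsto \ell_{00}(\mathrm{T}_{-}),$ $\ell_{10}(\mathrm{T}_{+}) \mapsto \ell_{01}(\mathrm{T}_{-})$ and $\ell_{11}(\mathrm{T}_{+}) \mapsto \ell_{1}(\mathrm{T}_{-}),$ where $\sigma$ is the trivial permutation.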
We can now generalise the notion as follows:
we denote, similarly, by $( \mathrm{T}_{n(+)}, \sigma, \mathrm{T}_{n(-)} )$
a tree pair, where $\mathrm{T}_n$ is an $n$-ary tree, for an element $g \in \mathcal{G}.$
Here $\mathcal{G}$ can be $F_n$, $T_n$ or $V_n$, where $n \in \mathbb{N}$ and $n \geq 2.$
When $\mathcal{G} = F_n$, $\sigma$ is the trivial permutation;
when $\mathcal{G} = T_n$, $\sigma$ is a cyclic permutation in the cyclic group of order $l(n-1)+1$
where $l \in \mathbb{N};$
and when $\mathcal{G} = V_n$, $\sigma$ is
a permutation in the symmetric group $\mathfrak{S}_{l(n-1)+1}$
for some $l \in \mathbb{N}.$
\begin{definition}
We denote by $\ell_{\omega_{o}}(\mathrm{T}_n)$ the path, labelled by an $n$-ary word,
from the root of the tree $\mathrm{T}_n$ to the leaf $\omega_{o}.$
Similarly, we denote by $\ell_{\omega_{o}}(\mathrm{T}_{n(+)}) \mapsto \ell_{\omega_{\sigma(o)}}(\mathrm{T}_{n(-)})$
a branch of the element $g \in \mathcal{G}.$
\end{definition}
Note that the words corresponding to the leaves are no longer binary words;
they are now $n$-ary words.
The branches of a tree pair representation
provide partial information about a group element.
For instance, when considering the group elements of the Thompson groups
as maps from the unit interval to itself,
a branch of the unique tree pair representing a group element, mapping
a branch of the source tree to a branch of the target tree,
only gives the map between two subintervals,
which reflects the self-similarity properties of the trees.
More precisely, when we identify the unit interval $ [0,1]$ with an $n$-ary tree $\textrm{T}_{n(\pm)}$,
a branch of a tree can be interpreted as a subinterval of $ [0,1]$ with $n$-adic endpoints,
and a branch of a tree pair can be seen as
the correspondence between a subinterval of the source interval $[0,1]$
and a subinterval of the target interval $[0,1].$
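For instance, in the example of $x_0$ above, the branch $\ell_{10} \mapsto \ell_{01}$ records precisely that $x_0$ maps the subinterval $[\frac{1}{2},\frac{3}{4}]$ linearly onto $[\frac{1}{4},\frac{1}{2}].$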
When given the product of two group elements represented by tree pairs,
the support, branches and the tree pair representation of the product
can be interchanged to provide desired information.
We introduce another notation before we connect the dots together.
\begin{definition}
Let $g \in V_n,$
represented by tree pair $(\mathrm{G}_{+}, \sigma_g, \mathrm{G}_{-}).$
We denote by $g_{[ \omega' ]}$ the tree pair constructed as follows:
Take an element $h \in V_n$ represented by the tree pair
$(\mathrm{T}_{+}, \sigma, \mathrm{T}_{-})$ containing a branch
$\ell_{\omega'}(\mathrm{T}_{+}) \mapsto \ell_{\omega'}(\mathrm{T}_{-}).$
We attach the tree $\mathrm{G}_{+}$ to the leaf labeled
$\omega'$ in the branch $\ell_{\omega'}(\mathrm{T}_{+})$
and $\mathrm{G}_{-}$ to the leaf labeled
$\omega'$ in the branch $\ell_{\omega'}(\mathrm{T}_{-}).$
More generally,
let $t \in V_n$ be represented by
$(\mathrm{T}_{+}, \sigma, \mathrm{T}_{-})$
and let $\ell_o(\mathrm{T}_{+}) \mapsto \ell_{\sigma(o)}(\mathrm{T}_{-})$ be some branch of the element $t.$
Then we denote by $g_{[\ell_o(\mathrm{T}_{+}) \mapsto \ell_{\sigma(o)}(\mathrm{T}_{-})]}$
the tree pair
such that the source tree is the tree $\mathrm{T}_{+}$ with $\mathrm{G}_{+}$ attached to the leaf $o$
and the target tree is the tree $\mathrm{T}_{-}$ with $\mathrm{G}_{-}$ attached to the leaf $\sigma(o).$
\end{definition}
For example,
the notations $x_{0_{[u+(n-1)]}}$ and $x_{{n-1}_{[u]}},$
where $u \in \{0, \cdots, n-2\},$
denote the same tree pair representation
and hence represent the same element.
\label{branch}
This simply connects the dynamical interpretation
and the combinatorial interpretation of the group elements of Thompson's groups \cite{MR3091272},
and also provides a method to estimate the changes in the number of carets.
Combining these different interpretations of the elements of the Thompson groups,
we see that
the branches of the tree pair representations of the group elements
keep track of the supports of the group elements
and detect the changes in the word length
when taking products of group elements.
We will mainly use the tree pair representations and the branches
to find the desired path for some later arguments.
\subsection{The estimate of the word lengths via tree pair representations}
In the current section,
we investigate the word lengths of the elements in the Brown-Thompson groups;
it turns out that the word length depends largely on the number of carets
in the tree pair representation.
The \textit{word length} of a group element $g$
with respect to some finite generating set of this group is the length of the shortest word
representing $g$
in this finite generating set
denoted by $| \cdot |$, or $| \cdot |_{\Sigma}$
when the finite generating set $\Sigma$ needs to be specified.
Since the tree pair representations are usually not unique
for a group element in Thompson's groups and their generalisations,
we take the unique reduced tree pair representative for each group element \cite{MR1806724}
and define the following form
with respect to the infinite generating set.
\begin{definition}[\cite{MR2452818,Sheng:2018aa}]
Let the reduced labelled tree pair $(\mathrm{T}_{n(+)}, \sigma, \mathrm{T}_{n(-)} )$ represent
an element $g$ in $T_n$ or in $V_n$
and $\mathrm{T}_{n(+)}$ and $\mathrm{T}_{n(-)}$ each have $i$ carets.
Let $\mathrm{R}$ be the all right tree with $i$ carets,
i.e. a tree constructed by attaching carets consecutively,
each to the rightmost leaf of the previously attached caret.
We write $g$ as a product $\textbf{p}\sigma\textbf{q},$
where:
\begin{enumerate}
\item $\textbf{p}$, a positive word in the infinite generating set of $F_{n}$
of the form $\textbf{p} = x_{i_1}^{r_1}x_{i_2}^{r_2} \cdots x_{i_y}^{r_y} $ where $i_1 < i_2 < \cdots < i_y $,
with tree pair $(\mathrm{T}_{n(+)},\mathrm{id}, \mathrm{R}).$
\item
\begin{itemize}
\item For $g \in F_n,$ $\sigma$ is just the identity $\rm id,$
\item For $g \in T_n$, $\sigma$ is $c_{i-1}^{j},$
the $j$th power of the torsion element $c_{i-1}$ with $1 \leq j < i(n-1)+1$,
which can be represented by a tree pair with two copies of $\mathrm{R}$ with $i$ $n$-carets,
i.e. $(\mathrm{R}, c_{i-1}^{j}, \mathrm{R}).$
\item For $g \in V_n$, $\sigma$ is a permutation in the symmetric group $\mathfrak{S}_{i(n-1)+1}$
which permutes the leaves of the tree pair;
we have $(\mathrm{R}, \sigma, \mathrm{R})$ as the factor in the middle,
\end{itemize}
and
\item $\textbf{q}$, a negative word,
is of the form $x_{j_z}^{-s_z} \cdots x_{j_2}^{-s_2}x_{j_1}^{-s_1}$ where $j_1< j_2 <\cdots < j_z,$
for an element in $F_{n}$ represented by $(\mathrm{R},\mathrm{id}, \mathrm{T}_{n(-)}).$
\end{enumerate}
We call such a product $\textbf{p}\sigma\textbf{q}$, a generalised $\textbf{pcq}$ factorisation.
An example is illustrated in Figure \ref{fig:fig2}.
\end{definition}
\begin{figure}
\caption{The $\textbf{pcq}$ factorisation.}
\label{fig:fig2}
\end{figure}
Note that in a word of the form
$x_{i_1}^{r_1}x_{i_2}^{r_2} \cdots x_{i_y}^{r_y}c_{i-1}^{j} x_{j_z}^{-s_z} \cdots x_{j_2}^{-s_2}x_{j_1}^{-s_1} $ in $T_n$,
where $i_1 < i_2 < \cdots < i_y$ and $j_z >\cdots > j_2 > j_1 $, the $r_k$'s
and $s_l$'s are all positive and $1 \leq j \leq i(n-1)+1$ for the cyclic part $c_{i-1}^{j}$ in the middle,
the factors $x_{i_1}^{r_1}x_{i_2}^{r_2} \cdots x_{i_y}^{r_y}$ and $x_{j_z}^{-s_z} \cdots x_{j_2}^{-s_2}x_{j_1}^{-s_1}$
are words in the standard infinite generating set of $F_n$
(see Figure \ref{fig:generators}).
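As a brief indication of how such a factorisation is obtained (using the standard infinite presentation of $F$ and what we take to be its analogue for $F_n$): in $F$ one has the relations $x_j x_i = x_i x_{j+1}$ for $i < j$, and in $F_n$ the relations $x_j x_i = x_i x_{j+n-1}$ for $i < j$; repeatedly applying these relations allows one to collect the positive generators on the left and the negative generators on the right of a word.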
\subsubsection{The estimate of the word length of elements in $F_n$, $T_n$}
There is a connection between the word length
and the unique reduced tree pair representation of a group element,
obtained by counting the number of building blocks of each $n$-ary tree in the tree pair
(a building block is a rooted $n$-ary tree with one vertex as the root and $n$ edges attached to it,
called an $n$-caret)
and the number of leaves in each tree.
As is proved in Burillo et al. \cite{MR1806724, MR2452818, Sheng:2018aa},
these two notions are interchangeable
and the word length of the group elements in $F$, $F_n$, $T$, $T_n$ can be estimated
by the number of carets of the trees in the tree pair representation as follows.
\begin{lemma}[\cite{MR2452818}]
Let $\omega \in T_n,$
let $N_n(\omega)$ denote the number of carets in each of the trees in the unique tree pair representing $\omega$,
let $|\omega |_n$ denote the word length of $\omega$
with respect to the standard finite generating set $\{x_0, x_1, \cdots, x_{n-1}, c_0\}$ of the group $T_n,$
then there exists a constant $C'$ such that the following estimate is satisfied for any $\omega$,
$$\frac{N_n(\omega)}{C'} \leq | \omega |_n \leq C'N_n(\omega)$$
\label{lem1}
\end{lemma}
Thus the word length can also be estimated by the number of leaves
in each tree of the unique tree pair representation.
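Indeed, since each $n$-caret replaces one leaf by $n$ new leaves, an $n$-ary tree with $i$ carets has exactly $i(n-1)+1$ leaves, so the caret count and the leaf count determine each other.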
\subsubsection{The definition of $V_n$ from a logic viewpoint}
For the group $V,$
the word length of a group element is proved by Birget \cite{MR2104771}
to have a lower bound comparable with the number of leaves and the number of carets,
using a slightly more combinatorial description.
We follow the method from \cite{MR2104771}
and generalise the argument to $V_n,$
which requires the definition of the Thompson groups through coding theory;
this, in fact, coincides with the original definition of Thompson's groups.
We briefly set up the foundation as follows.
Let $A$ be a finite alphabet.
Given any two elements $u, v$ in the set $A^{\ast}$ of words over $A,$
we denote by $u \cdot v$ their concatenation
and say that
$u$ is a prefix of $v$ if and only if $v = ux$ for some $x \in A^{\ast}.$
This provides a partial order on the words.
A \textit{prefix code} over $A$ is defined to be a subset of $A^{\ast}$
such that no element of this set is a strict prefix of any other element of the set.
For $A,$
we define the \textit{right ideal} of $A^{\ast}$ to be $R \subseteq A^{\ast}$
such that $R \cdot A^{\ast} \subseteq R.$
A set $\Gamma \subseteq R$ is called a \textit{right ideal generator} of $R$
if and only if $R = \Gamma \cdot A^{\ast}$.
$R$ is called \textit{essential} if and only if
$R$ has a nonempty intersection with every other right ideal of $A^{\ast}.$
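For example, over $A = \{a, b\}$ the set $\{aa, ab, b\}$ is a finite maximal prefix code, and $\{aa, ab, b\} \cdot A^{\ast}$ is a finitely generated essential right ideal; identifying $a$ with $0$ and $b$ with $1,$ this prefix code corresponds to the finite binary tree with leaves $00$, $01$, $1$ from the earlier example.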
A \textit{right ideal homomorphism} of $A^{\ast} $ is a function $\phi:R_1\rightarrow R_2$
such that $R_1$ and $R_2$ are nonempty right ideals of $A^{\ast}$
and such that for all $u \in R_1$ and for all $x \in A^{\ast} : \phi (u) \cdot x = \phi(ux);$
a \textit{right ideal isomorphism} is a bijective right ideal homomorphism.
An \textit{extension} of a right ideal homomorphism
$\phi:R_1 \rightarrow R_2$ is a right ideal homomorphism
$\Phi : J_1 \rightarrow J_2$ where $J_1, J_2 $ are right ideals
such that $R_i \subseteq J_i$ for $i \in \{1, 2\}$ and $\Phi$ agrees with $\phi$ on $R_1.$
A right ideal homomorphism is said to be \textit{maximal} if and only if it has no strict extension in $A^{\ast}.$
Then we have all the basic ingredients for defining $V$ and its generalisations alternatively.
\begin{definition}[\cite{MR2104771}]
$V$ is the group, acting partially on the set of words $\{a, b\}^{\ast},$
consisting of all maximal isomorphisms
between the finitely generated essential right ideals of $\{a, b\}^{\ast}.$
The group multiplication of two elements of $V$ is
the maximal extension of the composition of the two elements.
\end{definition}
Similarly, we define $V_n$.
\begin{definition}[The generalised Thompson group $V_n$]
$V_n$ is defined as
the group, acting partially on the set of words $\{a_0, a_1, \cdots, a_{n-1} \}^{\ast},$
consisting of all maximal isomorphisms
between the finitely generated essential right ideals of $\{a_0, a_1, \cdots, a_{n-1}\}^{\ast}.$
The group multiplication of two elements of $V_n$ is defined similarly as
the maximal extension of the composition of the two elements.\end{definition}
We mention that $F_n$ can be defined as the subgroup of $V_n$
consisting of all right ideal isomorphisms of $\{a_0, a_1, \cdots, a_{n-1} \}^{\ast}$ that
preserve the dictionary order, as for $F$ in \cite{birget2004groups}.
\begin{definition}[\cite{MR2104771}]
For a right-ideal isomorphism $\phi:P_1A^{\ast} \rightarrow P_2A^{\ast}$ (a bijection),
where $P_1$ and $P_2$ are finite maximal prefix codes,
the restriction $P_1\rightarrow P_2$ of $\phi$ is called the table of $\phi.$
Define the table size $\lVert \phi \rVert$ to be $| P_1| $ $( = | P_2 | ).$
For an element $g \in V$,
the table size $\lVert g \rVert$ of $g$ is defined to be
the table size of the maximally extended right-ideal isomorphism that represents $g$.
\end{definition}
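For instance (continuing the earlier example and identifying $0$, $1$ with $a$, $b$), the element $x_0 \in F \subset V$ is given by the table $\{a, ba, bb\} \rightarrow \{aa, ab, b\}$ determined by its three branches, so its table size is $\lVert x_0 \rVert = 3,$ while the identity has table $\{\varepsilon\} \rightarrow \{\varepsilon\}$ and table size $1.$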
\subsubsection{An estimate of the word length of elements in $V_n$ from the logic viewpoint}
The following lemma indicates that the table size and the word length of an element of $V$ are comparable.
\begin{lemma}[ \cite{MR2104771}]
The table size and the word length of an element $g \in V$ are related as follows:
\begin{enumerate}
\item There are $c_{\Delta}, c'_{\Delta} > 0 $
(depending on the choice of finite generating set $\Delta$)
such that for all $g \in V - \{1\}$:
$c'_{\Delta} \lVert g \rVert \leq | g |_{\Delta} \leq c_{\Delta} \lVert g \rVert \cdot \log_2 \lVert g \rVert . $
\item For almost all $g \in V$, $| g |_{\Delta} > \lVert g \rVert \cdot \log_{2 | \Delta |} \lVert g \rVert .$
\end{enumerate}
``Almost all" means that in the set $ \{g \in V: \lVert g \rVert = n \},$
the subset that does not satisfy the above inequality has a proportion that tends to $0$ exponentially fast as $n \mapsto \infty.$
\end{lemma}
The analogue for $V_n$ is as follows.
\begin{lemma}[The word length of elements in $V_n$]
The table size and the word length of an element $g \in V_n$ are related as follows:
There are $c_{\Delta}, c'_{\Delta} > 0 $
(depending on the choice of finite generating set $\Delta$)
such that for all $g \in V_n - \{1\}$:
$c'_{\Delta} \lVert g \rVert \leq | g |_{\Delta} \leq c_{\Delta} \lVert g \rVert \cdot \log_2 \lVert g \rVert . $
\label{lem210}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem210}]
We extend \cite{MR2104771} to find a canonical factorisation of an element in $V_n$
into a right-ideal automorphism and two elements of $F_n$,
and then prove the latter to have bounded word length.
The idea comes naturally from Thompson's original definition.
Fixing the maximal prefix code
$S_{\lVert g \rVert }$ of some cardinality $l(n-1)+1$
for a group element $g$ of $V_n$,
there exists a unique decomposition
$g = \beta_g\pi_g\alpha_g$
where $\alpha_g, \beta_g \in F_n$ and $\pi_g$ is an automorphism
whose table is a permutation of $S_{\lVert g \rVert }$.
This is because,
by considering the group element as a maximal right-ideal isomorphism $\phi$,
$\alpha_g$ and $\beta_g$ can be uniquely defined as order-preserving bijections,
hence are uniquely determined by the maximal prefix codes.
This factorisation coincides with the $\textbf{pcq}$ factorisation.
Next, we look at the word length of $\alpha_g$ and $\beta_g$.
The linear boundedness of the word length of the group elements in $F_n$
can either be deduced from \cite{MR1806724}
or from a similar argument as in \cite{MR2104771}.
They are essentially the same estimate.
Finally, we focus only on $\pi_g$,
which can be regarded as the table of an automorphism
from the finite maximal prefix code $S_{\lVert g \rVert} $ to itself
and hence is a permutation with a bounded number of transpositions.
The transpositions, defined in a generalised version of the ones in \cite{MR2104771},
give exactly the process of permuting the labels
on the two identical trees in the tree pair representation.
\end{proof}
By the above argument,
we also have a lower bound for estimating the word length of the elements in $V_n$.
\section{Divergence functions of the generalised Thompson's groups (the Brown-Thompson groups)}
\subsection{The Divergence function}
For metric spaces,
there are several different types of divergence functions \cite{MR1302334, Dru_u_2009, MR3978542, MR3474592}.
The intuitive idea of the divergence property is to understand
to what extent two geodesics starting from the same point in a metric space go away from each other.
Here we are taking the convention from \cite{Dru_u_2009}.
The divergence of a group is obtained
from the divergence of metric spaces
by considering the divergence function of the Cayley graph of the group
with respect to the standard finite generating set defined below in Subsection \ref{stg}.
From now on,
we use $| \cdot |$ as the notation for the word length of the group elements.
\begin{definition}[\cite{Dru_u_2009}]
For a geodesic metric space $(X, d ),$ define $div(a,b,c; \delta)$ as the infimum of the lengths of the paths
connecting $a$ and $b$
and avoiding the ball $B(c, \delta r)$ centered at $c$ with radius $\delta r$,
where $a$, $b$, $c$ are points taken arbitrarily in the metric space $X,$
and $r$ is the minimum of the distances between $c$ and $a$ and between $c$ and $b.$
\end{definition}
\begin{definition}[Divergence function \cite{Dru_u_2009}]
The divergence function $Div(m; \delta)$ is the supremum of all $div(a,b,c; \delta)$
over points $a$, $b$, $c$ such that the pair $(a, b)$ is within distance $m$,
i.e. $dist(a, b) \leq m.$
\end{definition}
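As standard illustrations (not specific to the groups studied here): in the Euclidean plane a path between $a$ and $b$ can simply go around the forbidden ball, so $Div(m;\delta)$ grows linearly in $m,$ whereas in the hyperbolic plane any such detour has length growing exponentially in $m.$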
The goal here is to find the shortest path between two points at distance $n$ from the identity
in the Cayley graph of the generalised Thompson groups
and to compare the length of this path to some functions.
\subsection{Divergence property of $F_n, T_n, V_n$ }
As in Subsection \ref{subsec22},
we denote by $\mathcal{G}$ any of the groups $F_n$, $T_n$, $V_n.$
For an element $g \in \mathcal{G}$,
we denote by $\mathcal{N}_{\mathcal{G}}(g)$ the number of leaves in each of the trees in the reduced tree pairs representing $g$ in group $\mathcal{G}$ or $\mathcal{N}(\cdot)$ when the group is clear.
\begin{theorem}
There exist constants $\delta, D > 0$ such that the following holds.
Let $g_1,g_2 \in \mathcal{G} $ be two elements with $\mathcal{N}(g_1), \mathcal{N}(g_2) \geq 3n-2.$
Then there is a path of length at most $D(|g_1|+|g_2|)$ in the Cayley graph $\Gamma = Cay(\mathcal{G}, X)$
which avoids a $ ( \delta\min\{| g_1| , | g_2| \} )$-neighbourhood of the identity
and which has initial vertex $g_1$ and terminal vertex $g_2$.
\label{thm1}
\end{theorem}
We will prove Theorem \ref{thm1} over the course of this section.
\begin{corollary}
The generalised Thompson's groups $F_n$, $T_n$ and $V_n$ have linear divergence.
\label{thm2}
\end{corollary}
\begin{proof}
This is a direct consequence of Theorem \ref{thm1}.
\end{proof}
The next proposition, which can be regarded as an analogue of \cite[Proposition 2.1]{MR3978542},
can be proved either by the estimate on the number of carets as in
\cite{MR3978542, MR2452818, Sheng:2018aa}
or by using the estimate by computation in \cite{MR2104771}
and the analogues of \cite{MR2104771} in the discussion above.
Let $\mathcal{A}_n = \{ x_0, x_1, \cdots, x_{n-1}\}$,
$\mathcal{B}_n = \{ x_0, x_1, \cdots, x_{n-1}, c_0\} $,
$\mathcal{C}_n = \{ x_0, x_1, \cdots, x_{n-1}, c_0, \pi_0\}$
be the \textit{standard generating sets} of $F_n$, $T_n$, $V_n$, respectively,
and let $\mathcal{N}_{\mathcal{G}}(g)$ be the number of leaves as above.
\label{stg}
\begin{proposition}[]
There exist constants $ 0 < c < 1$ and $C > 1$ such that the following holds.
\begin{enumerate}
\item For every $g \in F_n$ we have $c \mathcal{N}_{F_n}(g) \leq | g |_{\mathcal{A}_n} \leq C \mathcal{N}_{F_n}(g)$.
\item For every $g \in T_n$ we have $c \mathcal{N}_{T_n}(g) \leq | g |_{\mathcal{B}_n} \leq C \mathcal{N}_{T_n}(g)$.
\item For every $g \in V_n$ we have $c \mathcal{N}_{V_n}(g) \leq | g |_{\mathcal{C}_n} $.
\end{enumerate}
\label{prop1}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prop1}]
Since the number of carets and the number of leaves of the unique tree pair
representing an element in the Brown-Thompson groups $F_n$ and $T_n$
are interchangeable \cite{MR1806724, Sheng:2018aa},
together with Lemma \ref{lem1},
the first two statements for the groups $F_n$ and $T_n$ follow.
A lower bound for the word length of the group elements of $V_n$ is given
by an argument similar to that of \cite{MR2104771},
as in the proof of Lemma \ref{lem210}.
By defining the \textit{table size} for $V_n,$
we could prove that the table size and the word length of an element in $V_n$ are related,
and hence find a bound.
\end{proof}
The next step is to find out
how concatenating one extra generator may change the number of the carets.
\begin{lemma}
For $g \in \mathcal{G}$ being represented
by some reduced tree pair representation $(\mathrm{G}_{n(+)},\sigma_g, \mathrm{G}_{n(-)})$
with at least three carets in each tree,
let $\bar{\ell}_{\gamma}(\mathrm{G}_{n(-)})$ denote the length of the branch of the target tree $\mathrm{G}_{n(-)},$
where $\gamma$ is an $n$-ary word corresponding a path from the root in $\mathrm{G}_{n(-)}.$
Denote by $(\mathrm{GX_i}_{n(+)},\sigma_{gx_i}, \mathrm{GX_i}_{n(-)})$
the reduced tree pair representation of $gx_i \in \mathcal{G}.$
We then have
$$\mathcal{N}(g) - (n-1) \leq \mathcal{N}(gx_i^{\pm1}) \leq \mathcal{N}(g) + (n-1) $$
where $i \in \{0, \cdots, n-2\}$.
\begin{enumerate}
\item If $\bar{\ell}_{0}(\mathrm{G}_{n(-)}) = 1,$
then $\mathcal{N}(gx_i^{+1}) = \mathcal{N}(g) +(n-1),$
and $\bar{\ell}_{0}(GX_{i}^{+1}) = 1.$
\item If $ \bar{\ell}_{0}(\mathrm{G}_{n(-)}) \neq 1,$
then $\mathcal{N}(gx_i^{+1}) = \mathcal{N}(g),$
or $\mathcal{N}(gx_i^{+1}) = \mathcal{N}(g) - (n-1).$
Moreover, $\bar{\ell}_{i}(\mathrm{GX_i}_{n(-)}) = \bar{\ell}_{i}(\mathrm{G}_{n(-)}) -1.$
\item If $\bar{\ell}_{i}(\mathrm{G}_{n(-)}) = 1$,
then $\mathcal{N}(gx_i^{-1}) = \mathcal{N}(g) +(n-1),$
$\bar{\ell}_{j}(\mathrm{GX_i}_{n(-)}) = 1$ for $j \in \{1, \cdots, n-2 \}.$
\item If $ \bar{\ell}_{i}(\mathrm{G}_{n(-)}) \neq 1,$
then $\mathcal{N}(gx_i^{-1}) = \mathcal{N}(g) $
or $\mathcal{N}(gx_i^{-1}) = \mathcal{N}(g) - (n-1),$
$\bar{\ell}_{n}(\mathrm{GX_i}_{n(-)}) = \bar{\ell}_{n}(\mathrm{G}_{n(-)}).$
\end{enumerate}
\label{lem25}
\end{lemma}
\begin{proof}
The proof is generalised from the arguments in \cite{MR3978542}.
The elements $x_i \in \mathcal{G},$ where $i \in \{0, \cdots, n-2\},$
are the generators of $F_n$ in the finite generating set
with only two carets
in each tree of the tree pair representation in Figure \ref{fig:generators},
i.e. $\mathcal{N}(x_i) = 2 (n-1) +1,$ and they all have the same target tree.
The source tree of the unique tree pair representing $x_i$ has its second caret
attached to the leaf labelled $i$ of $\{0, \cdots, n-1\}$,
and hence has the branches $ik,$ where $k \in \{0, \cdots, n-1\},$ in the source tree.
The inverses $x_i^{-1}$ are represented by the tree pairs
obtained by exchanging the source tree and the target tree in the pair.
The composition $gx_i$ can be represented
by the multiplication of the unique tree pair representations of $g$ and $x_i.$
If $\bar{\ell}_{0}(\mathrm{G}_{n(-)}) = 1,$
the target tree of $g$ has a path with only one edge attached to the first leaf.
Hence the product adds one more $n$-caret to
both the source and the target tree of the tree pair representing $g.$
Alternatively, this is indicated by the branch $\ell_{00}(G_{n(+)}(x_i)) \mapsto \ell_{0i}(G_{n(-)}(x_i)).$
Since the number of carets in $gx_i$ is larger than that in $g,$
the equation in $(1)$ follows.
We have $\bar{\ell}_{0}(\mathrm{GX_i}_{n(-)}) = 1$,
since no extra $n$-caret is added to the $i$th leaf of the target tree of the product.
This proves $(1);$
the arguments for the remaining three cases are similar.
\end{proof}
\begin{corollary}
Let $g \in \mathcal{G}$ be any element that can be represented by
a unique tree pair representation
with at least three carets in each tree.
Then
$ \mathcal{N}(g) < \mathcal{N}(gx_i^{+ m}) \leq \mathcal{N}(g) +m(n-1)-1$
when $\bar{\ell}_{0}(G_{n(-)}^{+1}) = 1$
and $\bar{\ell}_{0^m}(GX_{in(-)}^{+m}) = m+1;$
$ \mathcal{N}(g) < \mathcal{N}(gx_i^{-m}) \leq \mathcal{N}(g) +m(n-1)-1 $
when $\bar{\ell}_{i}(G_{n(-)}^{+1}) = 1.$
and $\bar{\ell}_{i0^{m-1}}(GX_{in(-)}^{-m})$ becomes $m+1.$
\label{cor37}
\end{corollary}
\begin{proof}
This is a direct consequence of the preceding lemma.
\end{proof}
\begin{lemma}
Let $g \in \mathcal{G}$ be an element with the tree pair representation $(\mathrm{G}_{n(+)},\sigma, \mathrm{G}_{n(-)}).$
Let $\ell_{u}(\mathrm{G}_{n(+)})\mapsto \ell_v(\mathrm{G}_{n(-)})$
be a branch of $g$ and let $h$ be an element of $F_n$.
Let $h' = (h)_{[v]}. $
Then $$\mathcal{N}(gh') = \mathcal{N}(g) + \mathcal{N}(h) - 1 .$$
\label{lem27}
\end{lemma}
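Informally (assuming no cancellation of carets occurs in the product), attaching the trees of $h$ at a single leaf of each tree of $g$ replaces that one leaf by the $\mathcal{N}(h)$ leaves of $h,$ which accounts for the count $\mathcal{N}(g) + \mathcal{N}(h) - 1.$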
\begin{proof}
The argument follows from the interpretation of the branches.
According to Subsection \ref{branch},
the unique tree pair representation of
$h' = (h)_{[v]}$ has branches
$\ell_{v\omega_s}{(\mathrm{T}_{n(+)}(h'))} \mapsto \ell_{v\omega_t}{(\mathrm{T}_{n(-)}(h'))},$
where $\omega_s$ and $\omega_t$ are the words
corresponding to the paths in the source tree
and the target tree of the tree pair representing $h$ respectively.
The branch $\ell_{u}{(g)} \mapsto \ell_{v}{(g)}$ in the product $gh'$
takes the source tree attached to the vertex labelled $v$ in the tree pair representation of $h'$ back to the vertex labelled $u$ in the source tree of the tree pair representing $g.$
Then we have branches $\ell_{u\omega_s}{(\mathrm{T}_{n(+)}(gh'))} \mapsto \ell_{v\omega_t}{(\mathrm{T}_{n(-)}(gh'))}$ in the tree pair representation of $gh'.$
We here give a more geometric interpretation of this process
in Figure \ref{fig:fig3}.
\begin{figure}
\caption{$h' = (h)_{[v]}$.}
\label{fig:fig3}
\end{figure}
\end{proof}
\begin{proposition}[]
There exist constants $\delta, D > 0$ and a positive integer $Q$
such that the following holds.
Let $g \in \mathcal{G}$ be an element with $\mathcal{N}(g) \geq 2(n-1)+n = 3n-2 $.
Then there is a path represented by $\omega$
of length at most $ D | g | $ in the Cayley graph $\Gamma = Cay(\mathcal{G}, X)$
which avoids a $\delta |g|$-neighbourhood of the identity
and which has initial vertex $g$ and terminal vertex $g\omega$.
\label{prop3.9}
\end{proposition}
Similarly to the strategy in the proof of \cite{MR3978542},
we want to find a path connecting two points on two geodesics starting from the same point,
for instance the identity $id,$
with bounded length and avoiding a neighbourhood of $id.$
The construction works for $F_n$, $T_n$ and $V_n$.
In the general case,
we associate an element $g_k$ to some $k \in \mathbb{N}$
which is roughly the length of some group element $h$ in $F_n$,
and then find a path from $h$ to $g_k$ whose
length is linear in the word length of the element $h$.
Finally, we take two such elements so that the path between them
has length linear in the positive numbers associated to them
and avoids a ball centred at $id.$
\begin{proof}[Proof of Proposition \ref{prop3.9}]
Let $C$ and $c$ be the constants from Proposition \ref{prop1}.
Pick an element $h$ arbitrarily,
and associate to the word length of $h$ a group element $g_{| h | }$
so that the length of the path $\omega$ between these two group elements in the Cayley graph is linear
in the word length $| h|$
with respect to the standard finite generating set.
For some group element $h$ whose number of carets is at least three
and hence the number of leaves is at least $2(n-1)+n,$
denote by $(\mathrm{H}_{n(+)}, \sigma_h, \mathrm{H}_{n(-)})$ the tree pair representation of $h.$
We want the element to be supported on some interval $ [0, a_{|h|}).$
Hence we concatenate the subpath
$\omega_1 = x_{0}^{2}x_{n-1}^{-1}x_0^{-1}$ to obtain $h_1 = h\omega_1$ if $l_{0}(h) =1$,
otherwise we take $\omega_1 $ to be an empty word.
\begin{figure}
\caption{Adding the first subpath.}
\label{fig4}
\end{figure}
Next, we want to add two subpaths to ensure that the newly created word represents an element in $F_n.$
We concatenate the subpaths $\omega_2$ and $\omega_3$ to $h_1;$
the construction of the two subpaths is as follows.
The subpath $\omega_2 = x_0^{-M\mathcal{N}(h_1)} x_{n-1}x_0^{M\mathcal{N}(h_1)}$
represents an element in $F_n,$
and we take $M \geq 10\frac{C}{c(n-1)}.$
The subpath $\omega_3$ is a word representing
a cyclic element $h_{\omega_3}$ in $T_n < V_n$
such that the tree pair representation of the element $h_{\omega_3}$
is the tree pair
whose source tree is the target tree of the tree pair of $h$
and whose target tree is the source tree of the tree pair of $h,$
i.e. $(\mathrm{H}_{n(-)},\sigma_{\omega_3},\mathrm{H}_{n(+)}),$
where $\sigma_{\omega_3}$
permutes the leaves from $\mathrm{H}_{n(-)}$ to $\mathrm{H}_{n(+)}$ in a cyclic order
and satisfies $\sigma_{\omega_3}(\sigma_{h_1})(\ell_0(h_{\omega_3})) =\ell_0(h_{\omega_3}),$
in other words,
$h_{\omega_3}$ has the branch $\ell_{(\sigma_{\omega_3}(0))} \mapsto \ell_{0}.$
Since $h_{\omega_3} \in T_n,$ its word length is bounded linearly by its number of carets and hence by its number of leaves,
i.e. $| h_{\omega_3} | \leq C \mathcal{N}(h_{\omega_3}).$
By concatenating the paths,
we have $h_{3} = h\omega_1\omega_2\omega_3.$
From the construction,
writing $h_2 = h\omega_1\omega_2,$
we have that
$\mathcal{N}(h_1) \leq \mathcal{N}(h_1) +(m-1)(n-1) -1 \leq \mathcal{N}(h_2) \leq \mathcal{N}(h_3) \leq 2\mathcal{N}(h_1) +(m-1)(n-1) -2.$
The first two inequalities follow from Lemma \ref{lem27},
and the last one from considering the branches whose prefixes consist of $0$s, together with Corollary \ref{cor37}.
The subpath $\omega_4$ represents an element in $F_n$
with the construction $x_0^{Q| h |} x_{n-1}^{-1} x_0^{- Q| h | +1}$
where we take $Q \geq 10\frac{M}{c^2(n-1)}.$
This subpath ensures
$\mathcal{N}(h_3) \leq \mathcal{N}(h_3\omega_4),$
and the element it represents commutes with $ h_3.$
This is because $h_3$ is supported on $[0, \ell_{0}(h_3) ),$
while $\omega_4$ by definition fixes the same interval.
We take the last subpath $\omega_5$ to be the word representing $h_{3}^{-1}$,
where $h_{3} = h\omega_1\omega_2\omega_3.$
Now, we can compute the upper bound of the length of the path $\omega$ as follows:
\begin{align*}
& \lVert \omega \rVert \leq \lVert h_3\rVert + \lVert \omega_4\rVert + \lVert \omega_5\rVert
= 2\lVert h_3\rVert + \lVert \omega_4\rVert \\
&\leq 2(2M\mathcal{N}(h_1) +2\mathcal{N}(h_1)) + (2Q\mathcal{N}(h_1) +2) \\
&\leq 4M\mathcal{N}(h_1) +(2Q+2)\mathcal{N}(h_1) + 2 \\
&\leq (4M+2Q+2) \mathcal{N}(h_1) +2\\
& = \Big(\frac{4M+2Q+2}{c} \Big) | h | +2
\end{align*}
We could take $\delta =\frac{c}{8M},$ since $\lVert \omega \rVert \leq 4M\mathcal{N}(h), $
and $D = \frac{4M+2Q+4}{c},$
so that $\delta | g_{| h|}| \leq \lVert g_{| h|}\omega' \rVert \leq D| g_{| h|}|,$
which proves the linearity of the length of the path from $h$ to $ g_{| h |}.$
\end{proof}
\begin{lemma}
For a prefix $\omega'$ of $\omega$, $| h \omega' | \geq \delta | h |.$
\end{lemma}
\begin{proof}
We consider the changes in the number of carets.
There is no reduction in the number of carets while adding the first subpath:
when adding the first subpath $\omega_1,$
if $\bar{\ell}_0(H_{-}) =1$, then $\mathcal{N}(h\omega_1) > \mathcal{N}(h) $ by Corollary \ref{cor37},
and otherwise we add an empty word and the length is unchanged.
For $\omega_2,$ Lemma \ref{lem25} ensures the increase in the number of carets
and hence in the word length.
The tree pair representation of $h_{\omega_3}$ has the same number of carets as that of $h_1$ in the $\textbf{pcq}$ form.
Besides, $\omega_3$ carries a cyclic permutation,
whose word length is linear in the number of its carets by \cite{Sheng:2018aa};
together with Lemma \ref{lem27},
we have $\mathcal{N}(h_1\omega_2) \leq \mathcal{N}(h_1\omega_2\omega_3) =\mathcal{N}(h_3).$
Since the tree pair representation of $h_{\omega_3}$
has the branch $\ell_{(\sigma_{\omega_3}(0))} \mapsto \ell_{0},$
$h_3$ is supported on some $ [0, a_n).$
The path $h_2$ obtained after adding $\omega_2$ ends with powers of $x_0^{-1};$
we choose $\omega_3$ to be the shortest word representing the element
$h_{\omega_3}$ whose tree pair is symmetric to the tree pair representing $h.$
Since no reduction occurs when concatenating powers of $x_0$ to $h,$
there are also no reductions occurring when concatenating $\omega_3$ to the powers of $x_0^{-1},$
which simply reflects the picture of the product of the tree pairs.
For the subpath represented by the word $\omega_4,$
the argument is the same as the one for the case of adding the subpath represented by $\omega_2.$
As for adding the subpath $\omega_5,$
it is just the inverse of the path $h\omega_1\omega_2\omega_3.$
\end{proof}
With the preceding proposition, we can now prove Theorem \ref{thm1}.
\begin{proof}[Proof of Theorem \ref{thm1}]
By Proposition \ref{prop3.9},
we can pick two elements $g_1$ and $g_2$ arbitrarily;
for each $g_i$ where $i \in\{1, 2\}$,
there is a path $\omega_i$, avoiding some ball centered at the identity
with radius $\delta | g_i|,$
from the element $g_i$ to $x_0^{Q | g_i |} x_{n-1}^{-1} x_0^{- Q | g_i | +1}.$
Assume without loss of generality that $| g_1 | \leq | g_2 |$
and add a path $$p = x_0^{Q |g_1 |-1} x_{n-1} x_{0}^{- Q | g_1-g_2 | +1} x_{n-1}^{-1} x_0^{- Q | g_2 | +1}.$$
Then by the previous lemma and the estimate of the word length,
we have
$$ \delta \min\{| g_1|, | g_2 | \} < cQ \lVert g_1 \rVert \leq | x_0^{Q | g_1 |} x_{n-1}^{-1} x_0^{- Q | g_1\omega'' |} |.$$
\end{proof}
For some undistorted subgroups of Thompson's groups,
the method of finding the comparison paths for the divergence functions is similar,
since the estimate of the word length relies only on the subgroups $F_n$.
Alternatively, we could rely on quasi-isometric embeddings.
Since the Brown-Thompson groups $F_n$, $T_n$ can be embedded quasi-isometrically
into Thompson's groups $F$, $T,$ respectively,
we could consider the quasi-isometric embedding $\phi^{\ast}$ from $T_n$ to $T$ defined in
\cite{Sheng:2018aa},
and pick the shortest path
between the images of two points outside the ball centred at the identity
whose radius is the larger of the two word lengths in $T;$
then the paths that are quasi-isometric to the preimage of this linear path \cite{Sheng:2018aa},
taken back by $\phi^{\ast},$ may also give an estimate.
\section{Divergence functions for the braided Thompson's group}
In \cite{MR3978542}, Golan and Sapir posed the question
whether the divergence property of the braided versions of Thompson's groups
can also be established using the same method.
In this section,
we investigate this generalised version of Thompson's groups,
the braided Thompson groups.
The origin of these groups can be traced back to Brin \cite{MR2364825}
and Dehornoy \cite{MR2105432}; their metric properties are investigated
in \cite{MR2384840, MR2514382},
where the estimate of the word length is slightly different.
\subsection{The braided Thompson groups}
The braided versions of the Thompson groups can be regarded
as an ``Artinification'' of the original Thompson's groups.
Intuitively,
when considering the tree pair representation of the elements of Thompson's groups,
instead of just permuting the labelings of the leaves,
we add ``braids'' to the leaves between the pair:
one replaces the permutations by braids, subject to the braid relations.
The group $BV,$ the larger group $\widehat{BV}$ and the braided analogues of $F,$
namely, $BF$ and $\widehat{BF}$ will be considered in the rest of this paper.
We recall the dyadically subdivided intervals defined in the previous section.
To two dyadically subdivided intervals $I$ and $J$
with the same number of subintervals,
we associate an element in the braid group $B_m,$
where $m$ is the number of the subintervals.
In order to describe the map between the subintervals of $I$ and $J,$
we identify each subinterval with a thickened strand, or an elastic band, in the braid.
\begin{figure}
\caption{An element of the braided Thompson group $BV$.}
\label{fig:fig10}
\end{figure}
Figure \ref{fig:fig10} illustrates one of the group elements
of the braided Thompson group $BV.$
Adding or removing dyadic breakpoints
on the subintervals and the corresponding strands does not change the map.
When there are two such maps,
i.e. a map from $I_1$ to $J_1$ and a map from $I_2$ to $J_2$,
where $I_1$, $J_1$, $I_2$, $J_2$ are dyadically subdivided unit intervals,
$I_1$, $J_1$ have the same number of subdivisions
and $I_2$, $J_2$ have the same number of subdivisions, respectively,
we compose them along $J_1$ and $I_2$ as we would compose two piecewise-linear functions
and then extend the composition to $I_1$ and $J_2$ as in Figure \ref{fig:fig11};
the resulting map again consists of a pair of dyadically subdivided unit intervals
associated to an element in the braid group.
\begin{figure}
\caption{Composing two elements of $BV$.}
\label{fig:fig11}
\end{figure}
\begin{definition}[$BV$]
The braided Thompson group $BV$ is
the group of the piecewise-linear maps
from the set of the dyadically subdivided intervals to itself
associating pairs of intervals with intervals having the same number of breakpoints,
such that the subintervals in the source unit interval are associated
to those in the target as follows,
\begin{itemize}
\item all subintervals in the source and the corresponding ones in the target
are bijectively connected to each other
by gluing a rectangular flat elastic band (or a ribbon) to each pair of them
such that the top of the band is glued to the source subintervals
and the bottom is glued to the target subintervals;
\item the band is not allowed to flip or twist, but can be stretched;
\item the set of the bands can be identified
with the thickened strands of a braid with $m$ strands, $m \in \mathbb{N}$
and can be represented by an element $b$ in the braid group $B_{m}$
where $m$ is the number of the subintervals in both
the source and the target unit intervals.
\end{itemize}
The group operation is defined as follows:
first, compose along the target interval in the
band-connected interval pair
representing the former element
and the source interval
in the pair
representing the latter element;
then extend the composition through the bands
back to the source interval in the pair representing the former element
and also to the target interval in the pair of the latter one.
\label{def41}
\end{definition}
This group can also be regarded as
an extension of Thompson's group $V$ by the ``stable braid group'' with trivial center,
which contains all braid groups \cite{MR2952772}.
Similar to the case in $F$, $T$, $V$ and the Brown-Thompson groups,
there is a more combinatorial interpretation using the tree pair representation
and the braid diagrams for the braided Thompson groups.
These are called tree-braid-tree diagrams,
or the braided version of the $\textbf{pcq}$ factorisation,
where the middle component $\textbf{c}$ comes from
the elements of the braid groups instead of the symmetric groups.
Since for $F$, $T$, $V$ as well as for $F_n$, $T_n$ and $V_n,$
we have a symbol with three tuples
indicating the tree pair representation for a group element,
analogously,
we have a similar symbol $(\mathrm{T}_{+} , \beta, \mathrm{T}_{-} )$ for elements in $BV,$
where $\beta$ is a braid induced by an element in the braid group $B_{m+1}$ on $m+1$ strands.
\begin{figure}
\caption{$\tau_1$, $\sigma_1$, $\phi \in BV$, $\psi \in BF.$}
\label{fig:fig5}
\end{figure}
Following \cite{MR3545879},
we allow the tree-braid-tree diagrams to be transformed into each other
through the equivalence moves in Figure \ref{fig:fig13},
which generate an equivalence relation
and hence allow us to define the composition of the diagrams.
\begin{figure}
\caption{Equivalence moves on tree-braid-tree diagrams.}
\label{fig:fig13}
\end{figure}
When we want to compose two tree-braid-tree diagrams,
we lay the two diagrams vertically,
stacking the first diagram on top of the second diagram,
merge the target tree of the top diagram
and the source tree of the bottom diagram
using the relations on the left side of Figure \ref{fig:fig13},
and then use the relations on the right side to drag the carets up or down
to obtain a new tree-braid-tree diagram.
This operation coincides with the multiplication of the group elements.
Similarly, we define the braided version for the group $F$.
\begin{definition}[$BF$]
The braided Thompson group $BF$ is
the group of the orientation-preserving piecewise-linear isotopies
from the set of the dyadically subdivided intervals to itself
associating pairs of intervals with intervals having the same number of breakpoints,
such that the subintervals in the source unit interval are associated
to the ones in the target as follows,
\begin{itemize}
\item all subintervals in the source and the corresponding ones in the target
are connected together
by gluing a rectangular flat elastic band to each pair of them
such that the top of the band is glued to the source subintervals
and the bottom is glued to the corresponding target subintervals;
\item the band is not allowed to flip or twist, but can be stretched;
\item the set of the bands can be identified
with a thickened pure braid with $m$ strands, $m \in \mathbb{N}$
and can be represented by an element $p$ in the pure braid group $PB_{m}.$
In addition, $m$ is the number of the subintervals in both the source and the target unit intervals.
\end{itemize}
The group operation is defined in the same manner
as the one in the Definition \ref{def41}.
\label{def42}
\end{definition}
Both $BF$ and $BV$ are finitely presented \cite{MR2384840}
and $BV$ has a finite generating set $x_0$, $x_1$, $\sigma_1$ and $\tau_1$
where $x_0$ and $x_1$ come from the standard generating set of $F$
and the other two are induced by the braid relations
and are illustrated on the left half in Figure \ref{fig:fig5}.
These two groups also have infinite generating sets $x_i^{\pm1}$, $\sigma_i^{\pm1}$, $\tau_i^{\pm1},$
where the $x_i^{\pm1}$ are the infinite order generators from the original Thompson's groups
and the $\sigma_i^{\pm1}$, $\tau_i^{\pm1}$ are the infinite sequences of generators induced from the braid groups
\cite{MR2514382, MR3781416},
which can be represented by tree-braid-tree diagrams with a pair of all right trees
and a positive crossing of the strands on the $(i-1)$th and $i$th leaves for the diagram pair representing $\tau_i^{\pm1},$
and a positive crossing of the strands on the $(i-2)$th and $(i-1)$th leaves for the diagram pair representing $\sigma_i^{\pm1}.$
Similarly to the non-braided versions of Thompson's groups,
a set of tree-braid-tree diagrams
under some equivalence relation
has a unique reduced diagram pair,
which corresponds to a standard form.
A more precise description of the equivalence relation on the diagrams
and of the minimal size representatives can be found in \cite{MR1806724}.
Since the estimate on the word length later on will depend largely on
this standard form,
we give a brief introduction on this combinatorial description below.
\begin{definition}[The braided version of $\textbf{pcq}$ form]
Let the reduced labelled tree pair $(\mathrm{T}_{(+)},\beta, \mathrm{T}_{(-)} )$
represent an element $g \in BV$
and $\mathrm{T}_{(+)}$ and $\mathrm{T}_{(-)}$ each have $i$ carets.
Let $\mathrm{R}$ be the all right tree with $i$ carets.
We write $g$ as a product $\textbf{p}\beta\textbf{q}$, where:
\begin{enumerate}
\item $\textbf{p}$, a positive word in the infinite generating set of $F$
of the form
$\textbf{p} = x_{i_1}^{r_1}x_{i_2}^{r_2} \cdots x_{i_y}^{r_y} $
where $i_1 < i_2 < \cdots < i_y, $
with tree pair $(\mathrm{T}_{(+)}, \mathrm{id}, \mathrm{R}).$
\item The $\beta$ part is induced by an element of the braid group on $m$ strands,
which is represented by $(\mathrm{R},\beta,\mathrm{R}),$
and is a word in the infinite generating sets $\{\sigma_i\}$ and $\{\tau_i\}$
which are induced by the elements in $B_{m},$ where
$m = \max\{ \mathcal{N}(\mathrm{T}_{(+)}), \mathcal{N}(\mathrm{T}_{(-)}) \} .$
\item $\textbf{q}$, a negative word,
is of the form $x_{j_z}^{-s_z} \cdots x_{j_2}^{-s_2}x_{j_1}^{-s_1}$
where $j_1< j_2 <\cdots < j_z,$ for an element in $F$
represented by $(\mathrm{R},\mathrm{id}, \mathrm{T}_{(-)}).$
\end{enumerate}
We call such a product $\textbf{p}\beta\textbf{q}$, a $\textbf{pcq}$ factorization for the braided Thompson groups.
\end{definition}
Note that a word of the form
$x_{i_1}^{r_1}x_{i_2}^{r_2} \cdots x_{i_y}^{r_y} \mathrm{id}\, x_{j_z}^{-s_z} \cdots x_{j_2}^{-s_2}x_{j_1}^{-s_1} $ in $BV,$
where $i_1 < i_2 < \cdots < i_y $, $ j_z >\cdots > j_2 > j_1 $,
and the $r_k$'s and $s_l$'s are all positive,
is a word
with respect to the infinite generating set of $F.$
In \cite{MR2384840}, there is a similar notion, the \textit{block}.
By applying reductions to the words in the $\textbf{pcq}$ form
we naturally obtain a block.
This is another standard form of the group elements,
obtained when taking the word $\beta$ to be in right-greedy form \cite{KD08}.
\begin{definition}[Block\cite{MR2384840}]
A word in the infinite generators $x_i^{\pm1}$, $\sigma_i^{\pm1}$, $\tau_i^{\pm1}$ is called a block,
if it is of the form $\omega_1\omega_2\omega_3^{-1}$ where
\begin{enumerate}
\item $\omega_1$ is a positive word as the $\textbf{p}$ part in the $\textbf{pcq}$ representation.
\item Let $N = \max{\{ N(\omega_1), N(\omega_3)\}},$
then there exists an integer $m \geq N +1 $ such that $\omega_2$ is a word
in the generators $\{ \sigma_1, \sigma_2, \cdots, \sigma_{m-2}, \tau_{m-1} \}.$
\item $\omega_3^{-1}$ is a negative word as the $\textbf{q}$ part in the $\textbf{pcq}$ representation.
\end{enumerate}
\end{definition}
Notice that when the $\textbf{c}$ part of the $\textbf{pcq}$ form of some group element
is a word in the generators induced from some braid group,
the word can easily be transformed into a block.
\begin{lemma}
Every group element in the \textbf{pcq} form
can be transformed into a block and back
by finitely many steps of moving letters around.
\end{lemma}
\begin{proof}
The $\textbf{p}$ and $\textbf{q}$ parts are elements of $F$;
they correspond to the first and the third block in the block form
and can possibly be reduced to shorter words.
The middle part of the $\textbf{pcq}$ form, denoted by $\beta$,
is a word in the $\sigma_i^{\pm1}$, $\tau_i^{\pm1}$
induced by elements in $B_{m}$ for some positive integer $m$,
hence, more precisely,
a word in $\{\tau_1, \cdots, \tau_{m-1}\}$ and $\{\sigma_1, \cdots, \sigma_m\}.$
We leave the letters in $\{\sigma_1, \cdots, \sigma_m\}$ where they are
and rewrite the letters from $\{ \tau_2, \cdots, \tau_{m-1} \}$
by reducing the subscripts of the generators, using the relations $D_2$ and $D_1$
from \cite{MR2384840} to move the generators $x_{i}$ around.
Since $\beta$ has only finitely many letters,
the process takes only finitely many steps.
\end{proof}
In the original definition \cite{MR2364825},
there is a ``larger" version of the braided Thompson group
which contains a copy of the group $BV$
and also embeds into $BV$;
it has the last strand unbraided, and we define this group formally below:
\begin{definition}[$\widehat{BV}$]
The braided Thompson group $\widehat{BV}$ is defined as
the group of the piecewise-linear maps
from the set of the infinite dyadically subdivided intervals to itself
such that the first $k$ subintervals in the source interval
are associated to the first $k$ subintervals in the target
as an element in $BV$ in the description of Definition \ref{def41},
where $k \in \mathbb{N}.$
The rest of the subintervals in the source and the target
are associated to each other one by one from the left in order.
\end{definition}
Similarly, we can define the larger version of the braided Thompson groups containing $F.$
\begin{definition}[$\widehat{BF}$]
The braided Thompson group $\widehat{BF}$ is
the group of the orientation-preserving piecewise-linear isotopies
from the set of the infinite dyadically subdivided intervals to itself
such that the first $k$ subintervals in the source interval
are associated to the first $k$ subintervals in the target
as an element in $BF$ in the description of Definition \ref{def42},
where $k \in \mathbb{N}.$
The rest of the subintervals in the source and the target
are associated to each other one by one from the left in order.
\end{definition}
Since the braided versions of Thompson's groups involve both Thompson's groups
and braid groups,
we expect the estimate of the word length
to depend on word-length estimates for both Thompson's groups
and braid groups.
\subsection{Divergence property of the braided Thompson groups $BF$, $BV$}
By considering the group elements of the braided Thompson groups as tree-braid-tree diagrams,
we estimate the word length from a purely combinatorial aspect.
For Thompson's group $F$ and its generalisations $F_n$,
which are torsion-free,
the word length is proportional to the number of carets in the tree pair representation,
whereas for the braid groups,
the word length with respect to the standard generators is
closely related to the number of crossings.
Here we denote by $\mathcal{K}(\cdot)$ the number of crossings of the tree-braid-tree diagram representing a group element.
In \cite{MR2514382}, Burillo et al.\ considered the Garside elements of the braid groups
inside $BV$ in order to give the following estimate.
\begin{lemma}[\cite{MR2514382}]
Let $\omega$ be an element of $BV$
with a reduced tree-braid-tree diagram representative having $n$ leaves, $k$ total crossings,
and at most $s$ crossings between any pair of strands.
Then there exist constants $C_1$, $C_2$, $C_3$, $C_4$ such that the length $| \omega | $ satisfies the following:
$$C_1\max\{n, \sqrt[3]{k}\} \leq C_2 \max\{n, s\} \leq | \omega | \leq C_3 (n+nk) \leq C_4 (n+n^3s).$$
\label{lem2}
\end{lemma}
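To visualise how the two ends of this estimate scale, one can evaluate the four quantities appearing in Lemma \ref{lem2} for sample values of $n$, $k$ and $s$. The sketch below is purely illustrative: the constants $C_1,\dots,C_4$ are omitted (the lemma only asserts their existence), the sample triples are arbitrary, and the output only displays the growth orders of the bounds, not the inequalities themselves.
\begin{verbatim}
# Illustrative sketch: growth orders of the quantities in the word-length
# estimate, with the constants C1..C4 omitted.
def bound_shapes(n, k, s):
    return (max(n, k ** (1 / 3)),   # lower bound shape: max(n, k^{1/3})
            max(n, s),              # lower bound shape: max(n, s)
            n + n * k,              # upper bound shape: n + n k
            n + n ** 3 * s)         # upper bound shape: n + n^3 s

for n, k, s in [(5, 10, 3), (20, 400, 8), (100, 10 ** 4, 15)]:
    print((n, k, s), bound_shapes(n, k, s))
\end{verbatim}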
The Garside elements, which produce the largest number of crossings
in the tree-braid-tree representations of braid elements inside the braided Thompson group, are the ones
that we want to avoid in the estimate of the word length.
\begin{lemma}
Let $g \in BV$ be represented
by some reduced tree pair representation $(\mathrm{G}_{(+)},\sigma_g, \mathrm{G}_{(-)})$
with at least three carets in each tree,
and let $gx_0$ be represented by $(\mathrm{GX_0}_{(+)},\sigma_{gx_0}, \mathrm{GX_0}_{(-)}).$
Let $\bar{\ell}_{\gamma}(\cdot)$ denote the length of the branch $\gamma$ of a tree.
Then we have
$$\mathcal{N}(g) - 1 \leq \mathcal{N}(gx_{0}^{\pm1}) \leq \mathcal{N}(g) + 1 $$
In addition,
\begin{enumerate}
\item If $\bar{\ell}_{0}(\mathrm{G}_{(-)}) = 1,$
then $\mathcal{N}(gx_0) = \mathcal{N}(g) +1$
and $\bar{\ell}_{0}( \mathrm{GX_0}_{(-)}) = 1.$
The number of crossings is at most twice the original number.
\item If $ \bar{\ell}_{0}(\mathrm{G}_{(-)}) \neq 1,$
then $\mathcal{N}(gx_0) = \mathcal{N}(g) $
or $\mathcal{N}(gx_0) = \mathcal{N}(g)-1.$
Moreover,
$\mathcal{N}(gx_0) = \mathcal{N}(g)$
only when $1$ and $01$ are strict prefixes
of branches of the target tree $G_{(-)}$ in the pair representing the element $g.$
There is no change in the number of crossings.
\item If $ \bar{\ell}_{1}(\mathrm{G}_{(-)}) = 1,$
then $\mathcal{N}(gx_0^{-1}) = \mathcal{N}(g) +1$
and $\bar{\ell}_{1}( \mathrm{GX_0^{-1}}_{(-)}) = 1.$
The number of crossings is at most twice the original number.
\item If $ \bar{\ell}_{1}(\mathrm{G}_{(-)}) \neq 1,$
then $\mathcal{N}(gx_0^{-1}) = \mathcal{N}(g) $
or $\mathcal{N}(gx_0^{-1}) = \mathcal{N}(g)-1.$
Moreover,
$\mathcal{N}(gx_0^{-1}) = \mathcal{N}(g)$
only when $10$ and $11$ are strict prefixes
of branches of the target tree $G_{(-)}$ in the pair representing the element $g.$
There is no change in the number of crossings.
\end{enumerate}
\label{lem39}
\end{lemma}
\begin{proof} The proof is a generalisation of the argument in the proof of Lemma \ref{lem25},
and the estimate of the number of crossings is obtained purely by graphical computation.
Take $g$ such that $\bar{\ell}_{0}(\mathrm{G}_{(-)}) = 1;$
the first statement follows from the changes in the carets.
The change in the number of crossings depends on the changes in the strands
under the extra caret added in the product.
Since we have one more caret in each tree in the tree-braid-tree diagram of $gx_0,$
we then have at most twice the original number of crossings.
The remaining three cases follow by similar arguments.
\end{proof}
We hence see that the generators of the braided Thompson groups
coming from the torsion-free part, Thompson's groups,
mainly affect the number of carets of the tree-braid-tree diagrams.
\begin{lemma}
Let $g \in BV$ be represented by some reduced tree pair representation
$(\mathrm{G}_{(+)},\sigma_g, \mathrm{G}_{(-)})$
with at least three carets in each tree.
As before, let $\ell_{\gamma}(\cdot)$ denote the branch of an element
such that $\gamma$ is a branch of the source tree in the reduced tree pair representing $g,$
let $\bar{\ell}_{\gamma}(\cdot)$ denote the length of a branch of a tree,
and
let $\mathcal{K}(\cdot)$ be the number of crossings in the reduced tree pair representation
(or tree-braid-tree diagram)
of a group element. Then
$$\mathcal{N}(g) \leq \mathcal{N}(g\sigma_1^{\pm1}) \leq \mathcal{N}(g) + 1 $$
$$ \mathcal{N}(g\tau_1^{\pm1}) = \mathcal{N}(g) $$
In addition,
\begin{enumerate}
\item If $\bar{\ell}_{0}(G_{(-)}) = 1,$
then $\bar{\ell}_{1}(G_{(-)}) \neq 1,$
$\mathcal{N}(g\sigma_1^{\pm1}) = \mathcal{N}(g)$
and
\begin{enumerate}
\item $\bar{\ell}_{0}( \mathrm{GX_0}_{(-)}) = 1 $ when $\ell_{10}( \mathrm{GX_0}_{(-)})$ is a branch,
$\mathcal{K}(g\sigma_1^{\pm1}) \leq \mathcal{K}(g)+1;$
\item $\bar{\ell}_{0}( \mathrm{GX_0}_{(-)}) \neq 1 $ when $\ell_{10}( \mathrm{GX_0}_{(-)})$ is not a branch,
$\mathcal{K}(g\sigma_1^{\pm1}) \leq 2 (\mathcal{K}(g)-2);$
\end{enumerate}
\item If $\bar{\ell}_{1}(G_{(-)}) = 1,$
then $\bar{\ell}_{0}(G_{(-)}) \neq 1,$
$\mathcal{N}(g\sigma_1^{\pm1}) = \mathcal{N}(g)+1,$
and $\bar{\ell}_{0}( \mathrm{GX_0}_{(-)}) \neq 1.$
Moreover, $\mathcal{K}(g\sigma_1^{\pm1}) \leq \mathcal{K}(g)+2;$
\item The crossing number of $g\tau_1^{\pm1}$ satisfies $\mathcal{K}(g\tau_1^{\pm1}) \leq \frac{\mathcal{K}(g)^2}{4}.$
\end{enumerate}
\end{lemma}
\begin{proof}
When $\bar{\ell}_0(G_{(-)}) =1,$
$\bar{\ell}_{1}(G_{(-)}) \neq 1$ follows from the assumption, and
$\sigma_1^{\pm1}$ takes $0$ to $10,$
takes $10$ to $0$, and takes $11*$ to $11*.$
Hence the number of carets is unchanged through the computation.
Since there is one crossing between the first and second strands in $\sigma_1^{\pm1},$
i.e. the strands on the leaves labelled $0$ and $10,$
to count the crossings of $g\sigma_1^{\pm1}$
we only need to consider how these two strands interact with the other strands.
For $(a),$ the product does not add an extra caret,
and hence the number of crossings only increases by one, coming from $\sigma_1^{\pm1}.$
A similar argument works for case $(b).$
For $(2)$ and $(3),$ we carry out a similar analysis.
\end{proof}
\begin{example}
Some elementary cases of the multiplication of the generators
induced from braid relations $\tau_1$, $\sigma_1$
and the Thompson group generators $x_0$
are illustrated in Figure \ref{fig6} and Figure \ref{fig7}.
\begin{figure}\label{fig6}
\label{fig7}
\end{figure}
\end{example}
\begin{lemma}[]
Let $g \in BV$ be an element with the tree pair representation
$(\mathrm{G}_{(+)},\sigma, \mathrm{G}_{(-)}).$
Let $\ell_{u}(\mathrm{G}_{(+)})\mapsto \ell_v(\mathrm{G}_{(-)})$
be a branch of $g$ and let $h$ be an element of $F$
and $\mathcal{K} (\cdot)$ be the number of crossings of some reduced tree pair.
Let $h' = (h)_{[v]}.$
Then
$$\mathcal{K}(g) \leq \mathcal{K}(gh')\leq \min{\{ (\mathcal{N}(g)-1)\mathcal{N}(h) +\mathcal{K}(g), \mathcal{N}(h)\mathcal{K}(g)\} }.$$
\label{lem312}
\end{lemma}
\begin{proof}
The first inequality can be obtained from the same argument as in Lemma \ref{lem25},
and the second is given by the multiplication of tree-braid-tree diagrams.
\end{proof}
From the above consequences,
we can see that
elements coming from the braid groups inside $BV$
do increase the number of carets when multiplied by the torsion-free elements,
and they also increase the number of crossings;
hence
the changes in the number of crossings are in fact very hard to control
when concatenating elements,
while the changes in the number of carets when adding on elements
are partially inherited
from those in the original copy of Thompson's group $F$ inside $BV.$
Thus, the ``counting arguments" for constructing our paths
will still rely on the ``non-braided" part.
\subsection{Divergence property of the geodesics}
For the braided Thompson groups,
we would like to find a similar path
from one geodesic to another avoiding a ball centered at the identity in their Cayley graphs
with respect to the finite generating sets.
We first want to find an element $g_{\mid h \mid},$
built from elements of $F,$
which corresponds to the element $h$ in $BV,$
such that there is a path connecting the two elements which avoids the ball
whose radius is the word length of $h.$
\begin{proposition}
There exist constants $\delta, D > 0$ and a positive integer $Q$
such that the following holds. Let $g \in BV$ be an element with $\mathcal{N}(g) \geq 4 $.
Then there is a path represented by $\omega$
of length at most $D | g|^4$ in the Cayley graph $\Gamma = Cay(\mathcal{G}, X)$
which avoids a $\delta | g | $-neighbourhood of the identity
and which has initial vertex $g$ and terminal vertex $g\omega.$
\label{prop412}
\end{proposition}
\begin{proof}
Let the estimate be as in Lemma \ref{lem2}.
Let $h \in BV$ be an element whose number of leaves is at least four.
We first want to find a subpath $\omega_1\omega_2$
which makes $h_2 = h\omega_1\omega_2$ supported on some interval $ [0, a_n)$.
Next, we concatenate $h_2$ by another subpath $\omega_3$ to obtain $h_3 = h_2\omega_3$
with the condition that
$h_3$ commutes with $\omega_4$.
This ensures $\mathcal{N}(h_3\omega_4) \geq \mathcal{N}(h_3)$.
We finally obtain $g_{| h|} = h_4 (h_3)^{-1}$ by adding another subpath $\omega_5 = h_3^{-1}$
which by definition can be reduced to be $\omega_4$
and represents an element in $F$ inside $BV$.
Let $h$ be represented as a unique tree-braid-tree diagram
with the notation $(\mathrm{T}_{(+)}, b, \mathrm{T}_{(-)})$.
By definition,
$h$ will either have $\bar{\ell}_{(0)}(h) = 1$ or $\bar{\ell}_{(0)}(h) > 1$.
When $\bar{\ell}_{(0)}(h) = 1$,
we add a subpath $\omega_1 = x_0^2x_1^{-1}x_0^{-1}$ depending on the branches in $h$,
so that $\bar{\ell}_{0}(h\omega_1) > 1;$
otherwise, we keep the original element $h.$
Then we add the second subpath $$\omega_2 = x_0^{-M\mathcal{N}(h_1)} x_{1}x_0^{M\mathcal{N}(h_1)} $$
to $h$ or $h\omega_1$ which is a path in $F$ linear to the length of $h$,
where $M \geq 10\frac{C_4}{C_1}$.
The estimate of the length of $\omega_3$ is the key part in case of the braided Thompson group.
The subpath corresponding to $\omega_3$ is taken to be the shortest word representing an element
with the following tree pair representation $(\mathrm{T}_{(-)}, b', \mathrm{T}_{(+)})$
such that $b'$ takes the strand $b(0)$ to $0,$
i.e. $b'(b(0)) = 0$; note that $b'$ is not necessarily the inverse of $b.$
The path will satisfy the following properties:
\begin{enumerate}
\item $\lVert \omega_3 \rVert \leq C_3\mathcal{N}(h_1)(1+\mathcal{K}(h_1))$
which follows from Lemma \ref{lem2}.
\item $ (M-1)\mathcal{N}(h)
\leq (M-1) \mathcal{N}(h_1\omega_2)
\leq \mathcal{N}(h \omega_1\omega_2\omega_3).$
Moreover,
$ (M-1)\mathcal{K}(h)
\leq (M-1) \mathcal{K}(h_1\omega_2)
\leq \mathcal{K}(h \omega_1\omega_2\omega_3).$
Both sequences of inequalities can be deduced by visualising the products
via the graphical interpretation of the group elements.
For the increase in the number of crossings,
every time we concatenate a prefix of $\omega_2$, which is a word purely in generators of $F$,
from the left, the resulting tree-braid-tree diagram becomes larger, extending below to the right side.
Then both the number of carets and the number of crossings increase linearly,
which ensures that ``Garside elements"
(elements with a far larger number of crossings than the number of carets \cite{MR2514382}) will not appear.
\item $\ell_0(h_1)$ or $0^{\ell_0(h_1)} \mapsto 0^{\ell_0(h_1)}$ is a branch of $h\omega_1\omega_2\omega_3,$
which follows again from the products of tree-braid-tree diagrams.
\end{enumerate}
For $h_1$ supported on $[0, \ell_{0}(h_1) )$,
we add on the fourth subpath $\omega_4 = x_0^{Q| h |} x_{1}^{-1} x_0^{- Q| h | +1}$
where $Q \geq 10\frac{M}{C_{1}^2}$ and then obtain $h_4 = h_3\omega_4$.
The construction by the tree-braid-tree diagrams can be interpreted as follows.
Note that $h_3$ and $\omega_4$ commute on $[0, \ell_{0}(h_1) )$,
since $h_3$ is supported on $[0, \ell_{0}(h_1) )$
while $\omega_4$, by definition, fixes the braids in the same interval.
The difference in $BV$ is that the number of crossings keeps adding up linearly along the path.
Finally, we add on the last subpath $\omega_5 = h_3^{-1}$.
Since $h_3$ and $\omega_4$ commute,
$g_{| h | } = h_4\omega_5 = h_3\omega_4h_3^{-1} = \omega_4$.
We could take $\delta = 8M\mathcal{N}(h)+3Q $
by purely considering the changes in the number of carets,
so we have $ \delta | g_{| h | } | \leq | g_{| h | } \omega' |.$
However,
the upper bound is given by the product of $ \mathcal{N}(g_{| h |})$ and $\mathcal{K}(g_{| h |}) $.
Hence, the path from $h$ to $ g_{| h | } $ has a linear lower bound
and an at most polynomial upper bound.
\end{proof}
\begin{lemma}
For a prefix $\omega'$ of $\omega$ representing an element in $BV,$ $| h \omega' | \geq \delta | h |.$
\end{lemma}
\begin{proof}
In the case of $BV$,
we consider both the changes in the number of carets and the number of crossings.
When adding on the first subpath $\omega_1$:
when $\bar{\ell}_0(h) \neq 1$, we have $\mathcal{N}(h) < \mathcal{N}(h\omega_1) $
and the number of crossings satisfies $\mathcal{K}(h\omega_1) \leq \mathcal{K}(h)$
by Lemma \ref{lem39}; otherwise, we add an empty word and the length is unchanged.
For $\omega_2,$ Lemma \ref{lem312} ensures the increase in the number of carets
and hence in the word length.
The word $\omega_3$ represents a subpath with the same number of carets as $h$ in the $\textbf{pcq}$ form.
Moreover, we choose $\omega_3$ to be the shortest word with a nontrivial braid part,
with word length linear in that of $h.$
We have $ \mathcal{N}(h_1\omega_2) \leq \mathcal{N}(h_1\omega_2\omega_3) = \mathcal{N}(h_3) .$
Since the element represented by $\omega_{3}$ has the branch $\ell_{\sigma_{\omega_3}(0)} \mapsto \ell_{0},$
$h_3$ is supported on some $ [0, a_{|h|}).$
For $\omega_4$, the argument is the same as in the case of adding $\omega_3.$
As for the subpath $\omega_5$,
it is just the inverse of $h\omega_1\omega_2\omega_3.$
\end{proof}
\begin{theorem}
There exist a constant $\delta > 0$
and a polynomial function $p(\cdot)$
such that the following holds. Let $g_1,g_2 \in \mathcal{G} $ be two elements with $\mathcal{N}(g_i) \geq 4.$
Then there is a path of length at most $p(|g_1| + |g_2|)$ in the Cayley graph $\Gamma = Cay(\mathcal{G}, X)$
which avoids a $\delta \min{\{| g_1| , | g_2| \} }$-neighbourhood of the identity
and which has initial vertex $g_1$ and terminal vertex $g_2.$
\end{theorem}
\begin{proof}
For elements with a small number of carets, the linear divergence of the geodesics
can be deduced directly from the computation.
Again, by taking the two paths described in Proposition \ref{prop412}
and connecting them as in the Brown-Thompson group case,
no reductions will occur.
\label{thm415}
\end{proof}
From the above estimate,
we can only say that linear functions are lower bounds for the divergence function of $BV.$
In fact, a stronger result has been proved in \cite{Kodama:2020to}
indicating that the divergence function of $BV$ is also bounded above by linear functions.
\begin{theorem}[Kodama]
$BV$ has linear divergence function.
\end{theorem}
Now we consider the group $BF < BV,$ which is a finitely generated group
with finite generating set
$$\Sigma_{BF} = \{ x_0, x_1, \alpha_{12}, \alpha_{13}, \alpha_{23}, \alpha_{24}, \beta_{12}, \beta_{13},\beta_{23},\beta_{24} \}$$
where the $\alpha_{i,j}$'s and $\beta_{i,j}$'s are in fact deduced from
infinite generating sets coming from the pure braid relations in the braid groups \cite{MR2384840}.
These two infinite families can be written in terms of the infinite generating sets of $BV$,
the $\tau_{i}$'s and $\sigma_i$'s, as follows \cite{MR3781416}:
$$\alpha_{i,j} = \sigma_i\sigma_{i+1}\cdots \sigma_{j-2}\sigma_{j-1}^2 \sigma_{j-2}^{-1} \cdots \sigma_{i}^{-1}$$
$$\beta_{i,j} = \sigma_i\sigma_{i+1}\cdots \sigma_{j-2}\tau_{j-1}^2\sigma_{j-2}^{-1}\cdots \sigma_{i}^{-1}.$$
We take this finite generating set as the standard generating set when we consider the word length of $BF.$
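A minimal sketch (not taken from the cited sources) may make the indexing in the two displayed formulas explicit: the tuple encoding of letters and the helper functions \texttt{alpha} and \texttt{beta} below are our own illustrative conventions, directly expanding $\alpha_{i,j}$ and $\beta_{i,j}$ into words in the $\sigma_i$'s and $\tau_i$'s.
\begin{verbatim}
# Illustrative sketch: expanding the finite generators alpha_{i,j}, beta_{i,j}
# of BF into words in the infinite generating sets {sigma_i}, {tau_i} of BV,
# following the displayed formulas.  A word is a list of (letter, index, exp).

def alpha(i, j):
    # sigma_i ... sigma_{j-2} sigma_{j-1}^2 sigma_{j-2}^{-1} ... sigma_i^{-1}
    head = [('s', m, 1) for m in range(i, j - 1)]
    tail = [('s', m, -1) for m in range(j - 2, i - 1, -1)]
    return head + [('s', j - 1, 2)] + tail

def beta(i, j):
    # sigma_i ... sigma_{j-2} tau_{j-1}^2 sigma_{j-2}^{-1} ... sigma_i^{-1}
    head = [('s', m, 1) for m in range(i, j - 1)]
    tail = [('s', m, -1) for m in range(j - 2, i - 1, -1)]
    return head + [('t', j - 1, 2)] + tail

print(alpha(1, 2))   # [('s', 1, 2)]                       i.e. sigma_1^2
print(alpha(1, 3))   # [('s', 1, 1), ('s', 2, 2), ('s', 1, -1)]
print(beta(2, 4))    # [('s', 2, 1), ('t', 3, 2), ('s', 2, -1)]
\end{verbatim}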
\begin{lemma}
Let $g \in BF$ be an element
with the tree-braid tree diagram pairs $(\mathrm{T}_{n(+)},\sigma, \mathrm{T}_{n(-)})$
and let $u\mapsto v$ be a branch of $g,$
$h$ be an element of $F < BF.$
Let $h' = (h)_{[v]}. $
Then $$\mathcal{N}(gh') = \mathcal{N}(g) + \mathcal{N}(h) - 1 .$$
\label{lem417}
\end{lemma}
\begin{proof}
The proof is similar to the one for Lemma \ref{lem27}.
\end{proof}
\begin{proposition}
There exist constants $\delta, D > 0$
such that the following holds.
Let $g \in BF$ be an element with $\mathcal{N}(g) \geq 4 $.
Then there is a path, represented by a word $\omega$
of length at most $D | g|$ in the Cayley graph
$\Gamma = Cay(\mathcal{G}, X),$ which avoids a
$\delta | g| $-neighbourhood of the identity
and which has initial vertex $g$ and terminal vertex $g\omega$.
\label{prop419}
\end{proposition}
\begin{proof}
Let $h \in BF$ be an element
whose reduced tree-braid-tree diagram pair $(\mathrm{H}_{+}, \sigma_h, \mathrm{H}_{-})$
has at least three carets in each tree.
Again when $\bar{\ell}_{(0)}(h) = 1$,
we add an extra path $\omega_1 = x_0^2x_1^{-1}x_0^{-1}$ and
then add path
$\omega_2 = x_0^{-M\mathcal{N}(h_1)} x_{1}x_0^{M\mathcal{N}(h_1)}$
followed by the shortest word
$\omega_3$ which has the reduced tree-braid-tree diagram pair
$(\mathrm{H}_{-}, id, \mathrm{H}_{+}).$
This works because $BF$ is torsion-free; our paths only rely on the changes in the number of carets.
Next we take
$\omega_4 = x_0^{Q| h |} x_{1}^{-1} x_0^{- Q| h | +1},$
where $Q \geq 10\frac{M}{C_{1}^2}.$
Since the path we are adding on represents an element of $F,$
the number of crossings stays the same; thus, as long as we take $Q$ large enough,
there will be no influence of Garside elements \cite{MR2514382} on the distance.
This can also be seen from the Lemma \ref{lem417}.
The estimate of the length of the path only relies on the generators $x_0$ and $x_1$
and hence on the number of carets.
\end{proof}
\begin{theorem}
$BF$ has linear divergence.
\label{thm420}
\end{theorem}
\begin{proof}
Provided the constant $Q$ is taken large enough,
the concatenation of the two subpaths between the two words representing two elements in $BV$
that reduces the number of crossings will not affect the length of the linear path.
\end{proof}
\begin{remark}
According to \cite{MR2384840, BRIN_2006},
$\widehat{BF}$ and $\widehat{BV}$ are also finitely generated,
where $\widehat{BV}$ is generated by the standard generating set of $F$ together with the $\sigma$-generators,
while $\widehat{BF}$ only requires the standard generating set of $F$
and the $\alpha$-generators induced from the pure braid relations.
In the case of $\widehat{BV}$, the estimate can be made similarly by taking five subpaths
and concatenating them.
The middle subpath corresponding to $\omega_3$, however,
should be modified, since the $\tau$-generators are not in the generating set of $\widehat{BV}.$
In $\widehat{BF},$ the word length can be estimated similarly as in the case of $BF,$
the path only consists of generators from the subgroup isomorphic to $F$
and hence the geodesics all have linear divergence.
\end{remark}
\begin{corollary}
$\widehat{BF}$ and $\widehat{BV}$ have linear divergence.
\end{corollary}
\begin{proof}
The arguments here are analogues of the proofs for $BF$ and $BV$,
focusing on the ``braided" part of the tree-braid-tree diagram pairs.
\end{proof}
\begin{remark}
In the previous sections,
we considered the divergence property of the geodesics in the Cayley graph of braided Thompson groups.
We could also consider more generally
the divergence property of subgroups,
such as the normal subgroups of the braided Thompson groups:
on one hand,
many of the normal subgroups of the braided Thompson groups are investigated in \cite{MR3781416},
and they are still mysterious;
on the other hand,
the connection between the distortion of the finitely generated normal subgroups
of a finitely generated group
and the divergence of both groups has been indicated in \cite{MR3474592},
so we can think of the analogue in the braided Thompson groups.
In Thompson's group $F,$
the proper quotients of $F$ are abelian,
which somehow coincides with the linear divergence property of $F;$
whereas,
in the case of the braided Thompson group,
the surjective map from $BF$ to $F$ provides an exact sequence,
whose kernel, corresponding to the pure braid part of the groups,
provides natural subgroups of $BF.$
However, the kernel is not finitely generated,
and there are also normal subgroups containing the commutator subgroup $[BF, BF]$
whose presentations may be extremely complicated \cite{MR3781416}.
Here the divergence property gives some indication
of the subgroup distortion of the normal subgroups inside $BF$ and $BV.$
\end{remark}
\begin{remark}
At the time of finishing this work,
I was told that Kodama also has a result on the divergence function of the group $BV.$
\end{remark}
\section*{Acknowledgments}
I am very grateful for the help and support from my supervisor Takuya Sakasai
while I was preparing this work and during this hard time,
and I would also like to thank Tomohiro Fukaya and his student Yuya Kodama for helpful discussions.
Last but not least,
I would also like to express my gratitude to Sadayoshi Kojima for helpful discussions and suggestions,
and to the anonymous reviewers for carefully going through the manuscript
and pointing out mistakes and inappropriate descriptions.
\section*{Conflict of interests}
The author declares that there is no conflict of interest.
\section*{Data Availability statement}
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
{}
\end{document}
\begin{document}
\baselineskip=17pt
\title{Pointwise Ergodic Theorems for Higher Levels of Mixing}
\author{Sohail Farhangi\\
Department of Mathematics\\
The Ohio State University\\
Mathematics Building 430\\
Columbus, Ohio 43220\\
E-mail: [email protected]}
\date{}
\maketitle
\renewcommand{\thefootnote}{}
\footnote{2020 \emph{Mathematics Subject Classification}: Primary 37A25, 37A30; Secondary 37A05, 28D05.}
\footnote{\emph{Key words and phrases}: Pointwise Ergodic Theorem, Weak Mixing, Strong Mixing, Wiener-Wintner Theorem}
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\begin{abstract}
We prove strengthenings of the Birkhoff Ergodic Theorem for weakly mixing and strongly mixing measure preserving systems. We show that our pointwise theorem for weakly mixing systems is strictly stronger than the Wiener-Wintner Theorem. We also show that our pointwise Theorems for weakly mixing and strongly mixing systems characterize weakly mixing systems and strongly mixing systems respectively.
\end{abstract}
\section{Introduction}
\hskip 4mm In this section we establish the notation that we use, review some known results related to the Birkhoff Pointwise Ergodic Theorem and state the main theorems of this paper.
Whenever we discuss a measure preserving system (m.p.s.) $(X,\mathscr{B},\mu,T)$, $X$ will be a measurable space, $\mathscr{B}$ will be a $\sigma$-algebra of subsets of $X$, $\mu$ will be a probability measure on $(X,\mathscr{B})$ and $T:X\rightarrow X$ will be a measurable transformation satisfying $\mu(A) = \mu(T^{-1}A)$ for all $A \in \mathscr{B}$. When we work with the Hilbert space $L^2(X,\mu)$, we will let $U:L^2(X,\mu)\rightarrow L^2(X,\mu)$ denote the unitary operator given by $U(f) = f\circ T$. When we work with a m.p.s. of the form $([0,1],\mathscr{B},\mu,T)$, we will assume that $\mathscr{B}$ is the completion of the Borel $\sigma$-algebra. We say that two sequences of complex numbers $(x_n)_{n = 1}^{\infty}$ and $(y_n)_{n = 1}^{\infty}$ are orthogonal if
\begin{equation}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^Nx_n\overline{y_n} = 0.
\end{equation}
Now let us recall the levels of the ergodic hierarchy of mixing that will be used throughout this paper.
\begin{definition}
\label{EHOfMixing}
Let $(X,\mathscr{B},\mu,T)$ be a m.p.s.
\begin{itemize}
\item $(X,\mathscr{B},\mu,T)$ is {\bf ergodic} if for every $A \in \mathscr{B}$ satisfying $\mu(A) = \mu(T^{-1}A)$ we have $\mu(A) \in \{0,1\}$.
\item $(X,\mathscr{B},\mu,T)$ is {\bf weakly mixing} if for every $A,B \in \mathscr{B}$ we have
\begin{equation}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^N|\mu(A\cap T^{-n}B)-\mu(A)\mu(B)| = 0.
\end{equation}
\item $(X,\mathscr{B},\mu,T)$ is {\bf strongly mixing} if for every $A,B \in \mathscr{B}$ we have
\begin{equation}
\lim_{n\rightarrow\infty}\mu(A\cap T^{-n}B) = \mu(A)\mu(B).
\end{equation}
\end{itemize}
\end{definition}
\begin{theorem}[Birkhoff] Let $(X,\mathscr{B},\mu,T)$ be a m.p.s., and let $f \in L^1(X,\mu)$. For a.e. $x \in X$, we have
\label{BET}
\begin{equation}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^Nf(T^nx) = f^*(x),
\end{equation}
\noindent where $f^*(x) \in L^1(X,\mu)$ is such that $f^*(Tx) = f^*(x)$ for a.e. $x \in X$ and $\int_Af^*d\mu = \int_Afd\mu$ for every $A \in \mathscr{B}$ satisfying $A = T^{-1}(A)$. In particular, if $T$ is ergodic, then for a.e. $x \in X$ we have
\begin{equation}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^Nf(T^nx) = \int_Xfd\mu.
\end{equation}
\end{theorem}
The Birkhoff Pointwise Ergodic Theorem can be interpreted as follows. Given an ergodic m.p.s. $(X,\mathscr{B},\mu,T)$ and $f \in L^1(X,\mu)$ satisfying $\int_Xfd\mu = 0$, the sequence $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is orthogonal to the constant sequence $(1)_{n = 1}^{\infty}$. The Wiener-Wintner Theorem is a generalization of the Birkhoff Pointwise Ergodic Theorem for weakly mixing systems and has a similar interpretation.
\begin{theorem}[Wiener-Wintner]\label{WWT} Let $(X,\mathscr{B},\mu,T)$ be a m.p.s. and let $f \in L^1(X,\mu)$. There exists $X' \in \mathscr{B}$ with $\mu(X') = 1$, such that for every $x \in X'$ and any $\lambda \in \mathbb{C}$ with $|\lambda| = 1$ the limit
\begin{equation}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^Nf(T^nx)\lambda^n
\end{equation}
\noindent exists. Furthermore, if $T$ is weakly mixing, then
\begin{equation}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^Nf(T^nx)\lambda^n = \begin{cases}
0 & \text{if }\lambda \neq 1\\
\int_Xfd\mu & \text{if } \lambda = 1
\end{cases}.
\end{equation}
\end{theorem}
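As a quick numerical illustration, not taken from the results above, consider the Bernoulli shift on binary sequences, which is strongly (hence weakly) mixing, together with the mean-zero observable $f(\omega) = (-1)^{\omega_0}$, so that $f(T^n\omega) = (-1)^{\omega_n}$. The weighted averages in Theorem \ref{WWT} can then be approximated for a few values of $\lambda$ on the unit circle; for a weakly mixing system they should all be close to $0$. The parameters, the observable and the sampled values of $\lambda$ below are arbitrary choices made only for the sketch.
\begin{verbatim}
# Numerical illustration (not from the paper): Wiener-Wintner averages for the
# Bernoulli shift, with f(omega) = (-1)^{omega_0}, so f(T^n omega) = (-1)^{omega_n}.
import cmath, random

random.seed(0)
N = 200000
bits = [random.randint(0, 1) for _ in range(N)]
f_orbit = [1.0 if b == 0 else -1.0 for b in bits]       # values f(T^n omega)

for theta in [0.0, 0.5, 0.123456789]:                   # lambda = e^{2 pi i theta}
    lam = cmath.exp(2j * cmath.pi * theta)
    z, total = lam, 0j
    for value in f_orbit:
        total += value * z                               # f(T^n x) * lambda^n
        z *= lam
    print("theta =", theta, "  |average| ~", abs(total) / N)   # all close to 0
\end{verbatim}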
\begin{definition} A bounded sequence of complex numbers $(x_n)_{n = 1}^{\infty}$ is \noindent{\bf Besicovitch Almost Periodic} if for each $\epsilon > 0$ there exists a trigonometric polynomial $P_{\epsilon}$ for which
\begin{equation}
\overline{\lim_{N\rightarrow\infty}}\frac{1}{N}\sum_{n = 1}^N|x_n-P_{\epsilon}(n)| < \epsilon.
\end{equation}
\end{definition}
The Wiener-Wintner Theorem can be interpreted as follows. Given a weakly mixing m.p.s. $(X,\mathscr{B},\mu,T)$ and $f \in L^1(X,\mu)$ satisfying $\int_Xfd\mu = 0$, the sequence $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is orthogonal to any Besicovitch Almost Periodic Sequence $(y_n)_{n = 1}^{\infty}$. Theorem \ref{FirstMainTheorem} is one of the main results of this paper and is a generalization of the Wiener-Wintner Theorem when interpreted in a similar fashion. We require some more preliminaries before we can state Theorem \ref{FirstMainTheorem}.
\begin{definition} Let $(x_n)_{n = 1}^{\infty} \subseteq \mathbb{C}$ satisfy
\begin{equation}
\overline{\lim_{N\rightarrow\infty}}\frac{1}{N}\sum_{n = 1}^N|x_n| < \infty.
\end{equation}
\noindent $(x_n)_{n = 1}^{\infty}$ is a \noindent{\bf weakly mixing sequence} if it satisfies the following condition.
Suppose that for a bounded sequence $(y_n)_{n = 1}^{\infty} \subseteq \mathbb{C}$ there is a strictly increasing sequence $(N_q)_{q = 1}^{\infty} \subseteq \mathbb{N}$ for which
\begin{equation}
\ell(h) := \lim_{q\rightarrow\infty}\frac{1}{N_q}\sum_{n = 1}^{N_q}x_{n+h}\overline{y_n}\text{ exists for every }h \in \mathbb{N}.
\end{equation}
Then we have
\begin{equation}
\lim_{H\rightarrow\infty}\frac{1}{H}\sum_{h = 1}^H|\ell(h)| = 0.
\end{equation}
\end{definition}
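At a finite scale the quantity controlled in this definition can be approximated directly: one replaces the limits by long finite averages, computes $\ell(h)$ for a range of shifts $h$, and then averages $|\ell(h)|$. The sketch below is only an illustration with arbitrary parameters; the random-sign sequence stands in for a weakly mixing sequence and the rotation sequence for a bounded test sequence.
\begin{verbatim}
# Finite-scale illustration (not from the paper): approximate ell(h) by a long
# average and then average |ell(h)| over h; the result should be small.
import cmath, random

random.seed(1)
N, H = 20000, 100
x = [random.choice([-1.0, 1.0]) for _ in range(N + H + 1)]   # sample "weakly mixing" data
alpha = (5 ** 0.5 - 1) / 2                                   # an irrational rotation number
y = [cmath.exp(2j * cmath.pi * alpha * n) for n in range(1, N + 1)]

def ell(h):      # finite-scale stand-in for ell(h) = lim (1/N) sum x_{n+h} conj(y_n)
    return sum(x[n + h] * y[n - 1].conjugate() for n in range(1, N + 1)) / N

print(sum(abs(ell(h)) for h in range(1, H + 1)) / H)         # small
\end{verbatim}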
\begin{theorem}[cf. Theorem \ref{ActualyFirstMainTheorem} in Section 2] Let $(X,\mathscr{B},\mu,T)$ be a weakly mixing m.p.s. and let $f \in L^1(X,\mu)$ satisfy $\int_Xfd\mu = 0$. For a.e. $x \in X$, $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is a weakly mixing sequence.
\label{FirstMainTheorem}
\end{theorem}
\begin{definition}[cf. Definition 3.13 in \cite{TheErdosSumsetPaper}] A bounded sequence of complex numbers $(x_n)_{n = 1}^{\infty}$ is \noindent{\bf compact} if for any $\epsilon > 0$, there exists $K \in \mathbb{N}$ such that
\begin{equation}
\underset{m \in \mathbb{N}}{\text{sup }}\underset{1 \le k \le K}{\text{min }}\overline{\lim_{N\rightarrow\infty}}\frac{1}{N}\sum_{n = 1}^N|x_{n+m}-x_{n+k}|^2 < \epsilon.
\end{equation}
\end{definition}
\begin{lemma}[cf. Lemma 3.19 of \cite{TheErdosSumsetPaper}] If $(w_n)_{n = 1}^{\infty} \subseteq \mathbb{C}$ is a weakly mixing sequence and $(c_n)_{n = 1}^{\infty}$ is a compact sequence, then
\begin{equation}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^Nw_n\overline{c_n} = 0.
\end{equation}
\label{WeakPerpCom}
\end{lemma}
Lemma \ref{WeakPerpCom} is what allows us to interpret Theorem \ref{FirstMainTheorem} in a similar fashion to the Birkhoff Pointwise Ergodic Theorem and the Wiener-Wintner Theorem. Theorem \ref{FirstMainTheorem} asserts that for a weakly mixing m.p.s. $(X,\mathscr{B},\mu,T)$ and $f \in L^1(X,\mu)$ satisfying $\int_Xfd\mu = 0$, the sequence $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is orthogonal to any compact sequence $(y_n)_{n = 1}^{\infty}$. We will see in section $2$ that the class of compact sequences is strictly larger than the class of Besicovitch Almost Periodic Sequences and this is why Theorem \ref{FirstMainTheorem} is more general than the Wiener-Wintner Theorem. The next main result of this paper is an analogue of Theorem \ref{FirstMainTheorem} for strongly mixing measure preserving systems.
\begin{definition}
Let $(x_n)_{n = 1}^{\infty} \subseteq \mathbb{C}$ satisfy
\begin{equation}
\overline{\lim_{N\rightarrow\infty}}\frac{1}{N}\sum_{n = 1}^N|x_n| < \infty.
\end{equation}
\noindent $(x_n)_{n = 1}^{\infty}$ is a \noindent{\bf strongly mixing sequence} if it satisfies the following condition.
Suppose that for a bounded sequence $(y_n)_{n = 1}^{\infty} \subseteq \mathbb{C}$ there is a strictly increasing sequence $(N_q)_{q = 1}^{\infty} \subseteq \mathbb{N}$ for which
\begin{equation}
\ell(h) := \lim_{q\rightarrow\infty}\frac{1}{N_q}\sum_{n = 1}^{N_q}x_{n+h}\overline{y_n}\text{ exists for every }h \in \mathbb{N}.
\end{equation}
Then we have
\begin{equation}
\lim_{h\rightarrow\infty}|\ell(h)| = 0.
\end{equation}
\end{definition}
\begin{theorem}[cf. Theorem \ref{mainStrongMixingTheorem} of Section 3] Let $(X,\mathscr{B},\mu,T)$ be a strongly mixing m.p.s. and let $f \in L^1(X,\mu)$ satisfy $\int_Xfd\mu = 0$. For a.e. $x \in X$, $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is a strongly mixing sequence.
\label{SecondMainTheorem}
\end{theorem}
Proposition \ref{TooTrivial} gives a partial converse to the Birkhoff Pointwise Ergodic Theorem. Proposition \ref{TooTrivial} is well known and an easy consequence of the Dominated Convergence Theorem, but we state it anyway so that we can put some results of this paper in context.
\begin{proposition} Let $([0,1],\mathscr{B},\mu,T)$ be a m.p.s. If for every $f \in L^{\infty}([0,1],\mu)$, there exists $A_f \in \mathscr{B}$ such that $\mu(A_f) = 1$ and for every $x \in A_f$ we have
\begin{equation}
\label{BirkConverse}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^Nf(T^nx) = \int_Xfd\mu,
\end{equation}
\noindent then $T$ is ergodic.
\label{TooTrivial}
\end{proposition}
Proposition \ref{Easy} is a converse to Theorem \ref{FirstMainTheorem} in the same fashion that Proposition \ref{TooTrivial} is a converse to the Birkhoff Ergodic Theorem.
\begin{proposition} Let $(X,\mathscr{B},\mu,T)$ be a m.p.s. If for every $f \in L^{\infty}(X,\mu)$ with $\int_Xfd\mu = 0$ there exists a set $A_f \subseteq X$ satisfying $\mu(A_f) = 1$ such that for every $x \in A_f$ the sequence $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is a weakly mixing sequence, then $T$ is weakly mixing.
\label{Easy}
\end{proposition}
Proposition \ref{Easy} should not come as a surprise since the Wiener-Wintner Theorem is already strong enough to characterize weakly mixing systems. To see this, let us recall that an ergodic m.p.s. $(X,\mathscr{B},\mu,T)$ is weakly mixing if and only if $L^2(X,\mu)$ has no non-constant eigenfunctions with respect to $U$. Now let $(X,\mathscr{B},\mu,T)$ be an ergodic m.p.s. that is not weakly mixing and let $f \in L^2(X,\mu)$ be a non-constant eigenfunction corresponding to the eigenvalue $\lambda$. We see that
\begin{multline}
\lim_{H\rightarrow\infty}\frac{1}{H}\sum_{h = 1}^H|\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^Nf(T^{n+h}x)\lambda^{-n}| \\ = \lim_{H\rightarrow\infty}\frac{1}{H}\sum_{h = 1}^H|f(T^hx)| = \int_X|f|d\mu \neq 0
\end{multline}
\noindent for a.e. $x \in X$. Since any m.p.s. $(X,\mathscr{B},\mu,T)$ satisfying the hypothesis of Proposition \ref{Easy} also satisfies the hypothesis of Proposition \ref{TooTrivial} we see that $(X,\mathscr{B},\mu,T)$ is ergodic, so $(X,\mathscr{B},\mu,T)$ is in fact weakly mixing.
Theorem \ref{StrongMixingConverse} is a converse to Theorem \ref{SecondMainTheorem} in the same way that Proposition \ref{Easy} is a converse for Theorem \ref{FirstMainTheorem}. The proof of Theorem \ref{StrongMixingConverse} can easily be adjusted to give a direct proof of Proposition \ref{Easy} as well, but we will delay the proof of Theorem \ref{StrongMixingConverse} until section 3.
\begin{theorem}[cf. Theorem \ref{ActualStrongMixingConverse} in Section 3] Let $([0,1],\mathscr{B},\mu,T)$ be a m.p.s. If for every $f \in L^{\infty}([0,1],\mu)$ with $\int_{[0,1]}fd\mu = 0$ there exists a set $A_f \subseteq [0,1]$ satisfying $\mu(A_f) = 1$ such that for every $x \in A_f$ the sequence $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is a strongly mixing sequence, then $T$ is strongly mixing.
\label{StrongMixingConverse}
\end{theorem}
\section{A Pointwise Theorem for Weakly Mixing Systems}
\hskip 4mm In this section we prove Theorem \ref{ActualyFirstMainTheorem}, which is Theorem \ref{FirstMainTheorem} for standard measure preserving systems. In section $4$ we will show how to deduce Theorem \ref{FirstMainTheorem} from Theorem \ref{ActualyFirstMainTheorem}. We will also discuss some potential generalizations of Theorem \ref{FirstMainTheorem}.
Before we begin proving Theorem \ref{ActualyFirstMainTheorem} we will show that the class of compact sequences is strictly larger than that of Besicovitch Almost Periodic Sequences. In Example 3.21 of \cite{TheErdosSumsetPaper} a compact sequence orthogonal to any Besicovitch Almost Periodic Sequence was constructed. In Lemma \ref{Example} we give a different class of compact sequences that are orthogonal to all Besicovitch Almost Periodic Sequences.
\begin{lemma} Let $\alpha \in \mathbb{T}$ be irrational, and let $(x_n)_{n = 1}^{\infty}$ be the sequence
$$\alpha, 2\alpha, 2\alpha, 3\alpha, 3\alpha, 3\alpha, \cdots,$$
\noindent which can also be expressed as $x_n = m\alpha$ for $\binom{m}{2} < n \le \binom{m+1}{2}$. If $f \in C(\mathbb{T})$ satisfies $\int_{\mathbb{T}}fdm = 0$, then $\left(f(x_n)\right)_{n = 1}^{\infty}$ is a compact sequence that is orthogonal to any Besicovitch Almost Periodic Sequence.
\label{Example}
\end{lemma}
\noindent{\it Proof.} Using standard approximation arguments, we see that it suffices to show that for any $k \in \mathbb{N}$ the sequence $\left(e^{2\pi i kx_n}\right)_{n = 1}^{\infty}$ is orthogonal to any Besicovitch Almost Periodic Sequence. We note that for any $\beta \in \mathbb{T}\setminus\{0\}$ and $k \in \mathbb{Z}\setminus\{0\}$, we have
\begin{equation}
\overline{\lim_{N\rightarrow\infty}}|\frac{1}{N}\sum_{n = 1}^Ne^{2\pi ikx_n}e^{-2\pi in\beta}|
\end{equation}
\begin{equation}
= \overline{\lim_{N\rightarrow\infty}}|\binom{N}{2}^{-1}\sum_{n = 1}^Ne^{-2\pi i\binom{n}{2}\beta}\sum_{m = 1}^ne^{2\pi ikn\alpha}e^{-2\pi im\beta}|
\end{equation}
\begin{equation}
= \overline{\lim_{N\rightarrow\infty}}|\binom{N}{2}^{-1}\sum_{n = 1}^Ne^{2\pi i(n\alpha-\binom{n}{2}\beta)}\frac{e^{-2\pi i(n+1)\beta}-e^{-2\pi i\beta}}{1-e^{-2\pi i\beta}}|
\end{equation}
\begin{equation}
\le \overline{\lim_{N\rightarrow\infty}}\binom{N}{2}^{-1}\frac{N}{|1-e^{-2\pi i\beta}|} = 0.
\end{equation}
\noindent Similarly, we see that
\begin{equation}
\overline{\lim_{N\rightarrow\infty}}|\frac{1}{N}\sum_{n = 1}^Ne^{2\pi ikx_n}\overline{1}| = \overline{\lim_{N\rightarrow\infty}}|\binom{N}{2}^{-1}\sum_{n = 1}^Nne^{2\pi ikn\alpha}|
\end{equation}
\begin{equation}
\le \overline{\lim_{N\rightarrow\infty}}\binom{N}{2}^{-1}\sum_{j = 1}^N\left|\sum_{n = j}^Ne^{2\pi ikn\alpha}\right| \le \overline{\lim_{N\rightarrow\infty}}\binom{N}{2}^{-1}\frac{2N}{|1-e^{2\pi ik\alpha}|} = 0.
\end{equation}
We now see that a standard approximation argument shows that $(e^{2\pi ikx_n})_{n = 1}^{\infty}$ is orthogonal to any Besicovitch Almost Periodic Sequence.
To see that $\left(f(x_n)\right)_{n = 1}^{\infty}$ is a compact sequence it suffices to note that
\begin{equation}
\underset{m \in \mathbb{N}}{\text{sup }}\underset{1 \le k \le K}{\text{min }}\overline{\lim_{N\rightarrow\infty}}\frac{1}{N}\sum_{n = 1}^N|x_{n+m}-x_{n+k}|^2
\end{equation}
\begin{equation}
\le \underset{m \ge K+1}{\text{sup }}\underset{1 \le k \le K}{\text{min }}\sum_{j = k}^{m-1} \overline{\lim_{N\rightarrow\infty}}\frac{1}{N}\sum_{n = 1}^N|x_{n+j+1}-x_{n+j}|^2
\end{equation}
\begin{equation}
\pushQED{\qed}
= \underset{m \ge K+1}{\text{sup }}\underset{1 \le k \le K}{\text{min }}\sum_{j = k}^{m-1}0 = 0. \qedhere
\popQED
\end{equation}
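The construction in Lemma \ref{Example} is also easy to test numerically at a finite scale. The following sketch is an illustration only: the choice of $\alpha$, the number of blocks and the test frequency $\beta$ are arbitrary, and the finite averages merely approximate the limits appearing in the proof.
\begin{verbatim}
# Numerical check (not from the paper): build x_n = m*alpha for
# binom(m,2) < n <= binom(m+1,2) and estimate the correlation of e^{2 pi i k x_n}
# with a character e^{2 pi i n beta} and with the constant sequence.
import cmath

alpha = 2 ** 0.5 - 1          # an irrational in [0,1)
M = 600                       # number of blocks; N = M(M+1)/2 terms
xs = [m * alpha % 1.0 for m in range(1, M + 1) for _ in range(m)]
N = len(xs)

def corr(beta, k=1):          # (1/N) |sum e^{2 pi i (k x_n - n beta)}|
    return abs(sum(cmath.exp(2j * cmath.pi * (k * x - n * beta))
                   for n, x in enumerate(xs, start=1))) / N

print(corr(0.0))              # against the constant sequence: small
print(corr(0.3))              # against a nontrivial character: small
\end{verbatim}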
We see that in the proof of Lemma \ref{Example}, the sequence $(n\alpha)_{n = 1}^{\infty}$ could be replaced by any uniformly distributed sequence in $[0,1]$, and $(\binom{m}{2})_{m = 1}^{\infty}$ could be replaced by any subpolynomial sequence of integers $(k_m)_{m = 1}^{\infty}$ satisfying $\lim_{m\rightarrow\infty}k_{m+1}-k_m = \infty$, so Lemma \ref{Example} can be used to construct many examples of compact sequences that are not Besicovitch Almost Periodic.
Lemma \ref{Representation} is what will allow us to view bounded sequences of complex numbers as an element of some Hilbert space.
\begin{lemma}[cf. Lemma 3.26 of \cite{TheErdosSumsetPaper}] Let $(a_n)_{n = 1}^{\infty} \subseteq \mathbb{C}$ be bounded. Then there exists a compact metric space $X$, a continuous map $S:X\rightarrow X$, a function $F \in C(X)$, and a point $x \in X$ with a dense orbit under $S$ such that $a_n = F(S^n(x))$ for all $n \in \mathbb{N}$.
\label{Representation}
\end{lemma}
Lemma \ref{keylemma} allows us to convert correlations of sequences to inner products of functions in a Hilbert space, which will then permit us to use Hilbert space Theory to analyze our correlations. Before stating Lemma \ref{keylemma}, let us recall the definition of a generic point.
\begin{definition} Given an ergodic m.p.s. $([0,1],\mathscr{B},\mu,T)$, $x \in [0,1]$ is {\bf generic} if for every $f \in C([0,1])$ we have
\begin{equation}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^Nf(T^nx) = \int_{[0,1]}fd\mu.
\end{equation}
\end{definition}
\begin{lemma}\label{keylemma} Let $([0,1],\mathscr{B},\mu,T)$ be an ergodic m.p.s. Let $(y_n)_{n = 1}^{\infty}$ be a bounded sequence of complex numbers. Let $f \in C([0,1])$ be arbitrary and let $x \in [0,1]$ be a generic point. Let $U:L^2([0,1],\mu)\rightarrow L^2([0,1],\mu)$ be the unitary operator induced by $T$. Let $(N_q)_{q = 1}^{\infty} \subseteq \mathbb{N}$ be any sequence for which
\begin{equation}
\lim_{q\rightarrow\infty}\frac{1}{N_q}\sum_{n = 1}^{N_q}f(T^{n+h}x)\overline{y_n}
\end{equation}
\noindent exists for each $h \in \mathbb{N}$. Then there exists $g \in L^2([0,1],\mu)$ for which
\begin{equation}
\lim_{q\rightarrow\infty}\frac{1}{N_q}\sum_{n = 1}^{N_q}f(T^{n+h}x)\overline{y_n} = \langle U^hf,g\rangle_{\mu}.
\end{equation}
\end{lemma}
\noindent{\it Proof.} By Lemma \ref{Representation}, let $(Y,d)$ be a compact metric space, let $\xi \in C(Y)$, $y \in Y$, and $S:Y\rightarrow Y$ be such that $y_n = \xi(S^ny)$. Let $\nu$ be any weak$^*$ limit point of the sequence
\begin{equation}
\left(\frac{1}{N_q}\sum_{n = 1}^{N_q}\delta_{T^nx,S^ny}\right)_{q = 1}^{\infty},
\end{equation}
and let $(M_q)_{q = 1}^{\infty} \subseteq \mathbb{N}$ be such that
\begin{equation}
\nu = \lim_{q\rightarrow\infty}\frac{1}{M_q}\sum_{n = 1}^{M_q}\delta_{T^nx,S^ny}.
\end{equation}
Let $\tilde{f},\tilde{\xi} \in L^2([0,1]\times Y,\nu)$ be given by $\tilde{f}(x,y) = f(x)$ and $\tilde{\xi}(x,y) = \xi(y)$. Let $V:L^2([0,1]\times Y,\nu)\rightarrow L^2([0,1]\times Y,\nu)$ be the unitary operator induced by $T\times S$. We note that if $h \in L^2([0,1]\times Y,\nu)$ is such that $h(x,y) = k(x)$ for some $k \in L^2([0,1],\mu)$, then the genericity of $x$ gives us
\begin{equation}
\int_{[0,1]\times Y}hd\nu = \int_{[0,1]}kd\mu.
\end{equation}
Let $\tilde{\mu}$ be the probability measure on $([0,1]\times Y,\mathscr{B}\otimes\mathscr{A})$ given by $\tilde{\mu}(B\times A) = \mu(B)\mathbbm{1}_A(y)$ for all $B \in \mathscr{B}$ and $A \in \mathscr{A}$. Since we may identify $L^2([0,1]\times Y,\tilde{\mu})$ with the functions in $L^2([0,1]\times Y,\nu)$ of the form $h(x,y) = k(x)$, let $P:L^2([0,1]\times Y,\nu)\rightarrow L^2([0,1]\times Y,\tilde{\mu})$ denote the orthogonal projection. Let $\tilde{g} = P\tilde{\xi}$, and let $g \in L^2([0,1],\mu)$ be such that $\tilde{g}(x,y) = g(x)$. We now see that for any $h \in \mathbb{N}$ we have
\begin{equation}
\lim_{q\rightarrow\infty}|\frac{1}{N_q}\sum_{n = 1}^{N_q}f(T^{n+h}x)\overline{y_n}| = \lim_{q\rightarrow\infty}|\frac{1}{M_q}\sum_{n = 1}^{M_q}\tilde{f}(T^{n+h}x)\overline{\tilde{\xi}(S^ny)}|
\end{equation}
\begin{equation}
\pushQED{\qed}
= |\int_{[0,1]\times Y} V^h\tilde{f}\overline{\tilde{\xi}}d\nu| = |\langle V^h\tilde{f}, \tilde{\xi}\rangle_{\nu}| = |\langle V^h\tilde{f},\tilde{g}\rangle_{\nu}| = |\langle U^hf,g\rangle_{\mu}|. \qedhere
\popQED
\end{equation}
\begin{lemma} Let $([0,1],\mathscr{B},\mu,T)$ be a weakly mixing m.p.s. For each generic point $x \in [0,1]$ and each $f \in C([0,1])$ with $\int_{[0,1]}fd\mu = 0$, $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is a weakly mixing sequence.
\label{WeakMixingForC(X)}
\end{lemma}
\noindent{\it Proof.} Let $f \in C([0,1])$ be arbitrary and let $(y_n)_{n = 1}^{\infty} \subseteq \mathbb{C}$ be bounded. Let $(N_q)_{q = 1}^{\infty} \subseteq \mathbb{N}$ be any sequence for which
\begin{equation}
\lim_{q\rightarrow\infty}\frac{1}{N_q}\sum_{n = 1}^{N_q}f(T^{n+h}x)\overline{y_n}
\end{equation}
\noindent exists for every $h \in \mathbb{N}$. By Lemma \ref{keylemma}, let $g \in L^2([0,1],\mu)$ be such that
\begin{equation}
\lim_{q\rightarrow\infty}\frac{1}{N_q}\sum_{n = 1}^{N_q}f(T^{n+h}x)\overline{y_n} = \langle U^hf,g\rangle.
\end{equation}
Letting $U:L^2([0,1],\mu)\rightarrow L^2([0,1],\mu)$ denote the unitary operator induced by $T$, we see that $U$ is weakly mixing. It follows that
\begin{equation}
\lim_{H\rightarrow\infty}\frac{1}{H}\sum_{h = 1}^H|\lim_{q\rightarrow\infty}\frac{1}{N_q}\sum_{n = 1}^{N_q}f(T^{n+h}x)\overline{y_n}|
\end{equation}
\begin{equation}
\pushQED{\qed}
= \lim_{H\rightarrow\infty}\frac{1}{H}\sum_{h = 1}^H|\langle U^hf,g\rangle| = 0. \qedhere
\popQED
\end{equation}
\begin{theorem} Let $([0,1],\mathscr{B},\mu,T)$ be a weakly mixing m.p.s. and let $f \in L^1([0,1],\mu)$ satisfy $\int_{[0,1]}fd\mu = 0$. For a.e. $x \in [0,1]$, $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is a weakly mixing sequence.
\label{ActualyFirstMainTheorem}
\end{theorem}
\noindent{\it Proof.} Let $\epsilon > 0$ be arbitrary, and let $g \in C([0,1])$ be such that $||f-g||_1 \le \epsilon$ and $\int_{[0,1]}gd\mu = 0$. Let $X \subseteq [0,1]$ be a set of full measure for which the Birkhoff Pointwise Ergodic Theorem holds for $|f-g|$ along every $x \in X$. We now see that
\begin{equation}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^N|f(T^nx)-g(T^nx)| = \int_{[0,1]}|f-g|d\mu = ||f-g||_1 < \epsilon.
\end{equation}
Now let $(y_n)_{n = 1}^{\infty} \subseteq \mathbb{C}$ be uniformly bounded in norm by $1$. Since a.e. $x \in X$ is a generic point we may use Lemma \ref{WeakMixingForC(X)} to further refine $X$ to another set of full measure $X'$, such that for every $x \in X'$, $\left(g(T^nx)\right)_{n = 1}^{\infty}$ is a weakly mixing sequence. We see that for any $x \in X'$, we have
\begin{multline}
\lim_{H\rightarrow\infty}\frac{1}{H}\sum_{h = 1}^H\overline{\lim_{N\rightarrow\infty}}|\frac{1}{N}\sum_{n = 1}^Nf(T^{n+h}x)\overline{y_n}| \\ \le \lim_{H\rightarrow\infty}\frac{1}{H}\sum_{h = 1}^H\overline{\lim_{N\rightarrow\infty}}|\frac{1}{N}\sum_{n = 1}^Ng(T^{n+h}x)\overline{y_n}| +\epsilon = \epsilon.
\label{Modification}
\end{multline}
\noindent Since $\epsilon > 0$ was arbitrary, we see that $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is a weakly mixing sequence. \qed
One way in which we can try to generalize Theorem \ref{ActualyFirstMainTheorem} is motivated by Theorem \ref{BourgainThm} which is due to Bourgain.
\begin{theorem}[Theorem 1 in \cite{Bourgain'sPolynomialTheorem}] Let $(X,\mathscr{B},\mu,T)$ be a m.p.s. and let $p(x)$ be a polynomial with integer coefficients. If $f \in L^r(X,\mu)$ for some $r > 1$, then
\begin{equation}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^Nf(T^{p(n)}x)
\end{equation}
\noindent exists for a.e. $x \in X$. Furthermore, if $T$ is weakly mixing, then
\begin{equation}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^Nf(T^{p(n)}x) = \int_Xfd\mu
\end{equation}
\noindent for a.e. $x \in X$.
\label{BourgainThm}
\end{theorem}
Theorem \ref{BourgainThm} shows us that the Birkhoff Ergodic Theorem holds along polynomial subsequences if the underlying transformation $T$ is assumed to be weakly mixing. This naturally leads us to ask if Theorem \ref{ActualyFirstMainTheorem} holds for polynomial subsequences.
\begin{question} If $(X,\mathscr{B},\mu,T)$ is a weakly mixing m.p.s., $p(x)$ a polynomial with integer coefficients and $f \in L^r(X,\mu)$ with $r > 1$ is such that $\int_Xfd\mu = 0$, then is $\left(f(T^{p(n)}x)\right)_{n = 1}^{\infty}$ a weakly mixing sequence for a.e. $x \in X$?
\label{PolynomialQuestion}
\end{question}
We see that if $(x_n)_{n = 1}^{\infty}$ is a weakly mixing sequence of complex numbers, and $(y_n)_{n = 1}^{\infty}$ is another sequence of complex numbers for which $d(\{n \in \mathbb{N}\ |\ x_n \neq y_n\})\footnote{For $A \subseteq \mathbb{N}$ the {\bf natural density} of $A$ is given by $$d(A) := \lim_{N\rightarrow\infty}\frac{1}{N}|A\cap[1,N]|$$ provided that the limit exists.} = 0$, then $(y_n)_{n = 1}^{\infty}$ is also weakly mixing. In particular, if $(x_n)_{n = 1}^{\infty}$ is weakly mixing, and $(y_n)_{n = 1}^{\infty}$ is given by $y_n = x_n$ when $n$ is not a square and $y_n = 1$ when $n$ is a square, then $(y_n)_{n = 1}^{\infty}$ is also a weakly mixing sequence, but $(y_{n^2})_{n = 1}^{\infty}$ is the constant sequence, so Question \ref{PolynomialQuestion} does not follow immediately from Theorem \ref{ActualyFirstMainTheorem}. It is well known that if $(X,\mathscr{B},\mu,T)$ is a weakly mixing system, then for any $f,g \in L^2(X,\mu)$ we have
\begin{equation}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^N|\langle U^{n^2}f,g\rangle-\int_Xfd\mu\int_Xgd\mu| = 0.
\end{equation}
Combining this fact with Lemma \ref{keylemma} allows us to prove Proposition \ref{PolynomialProp}, but does not immediately help us resolve Question \ref{PolynomialQuestion}.
\begin{proposition} Let $(X,\mathscr{B},\mu,T)$ be a weakly mixing m.p.s. and let $f \in L^1(X,\mu)$ satisfy $\int_{X}fd\mu = 0$. For a.e. $x \in X$, any bounded sequence of complex numbers $(y_n)_{n = 1}^{\infty}$, and any strictly increasing sequence of natural numbers $(N_q)_{q = 1}^{\infty}$ for which
\begin{equation}
\ell(h) := \lim_{q\rightarrow\infty}\frac{1}{N_q}\sum_{n = 1}^{N_q}f(T^{n+h}x)\overline{y_n}\text{ exists for every }h \in \mathbb{N},
\end{equation}
we have
\begin{equation}
\lim_{H\rightarrow\infty}\frac{1}{H}\sum_{h = 1}^H|\ell(h^2)| = 0.
\end{equation}
\label{PolynomialProp}
\end{proposition}
We also see that Lemma \ref{keylemma} suggests we do not need the m.p.s. $([0,1],\mathscr{B},\mu,T)$ in Theorem \ref{ActualyFirstMainTheorem} to be weakly mixing. In particular, if we work with $L^2([0,1],\mu)$ instead of $L^1([0,1],\mu)$, then it seems that we only need $f \in L^2([0,1],\mu)$ to satisfy
\begin{equation}
\label{WeakMixing}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^N|\langle U^nf,g\rangle-\int_Xfd\mu\int_Xgd\mu| = 0.
\end{equation}
\noindent for every $g \in L^2([0,1],\mu)$. However, it is not obvious whether or not this is the case. In the proof of Theorem \ref{ActualyFirstMainTheorem}, we approximated $f$ by continuous functions, and the condition that every element of $C([0,1]) \subseteq L^2([0,1],\mu)$ satisfies equation \eqref{WeakMixing} is equivalent to the m.p.s. $([0,1],\mathscr{B},\mu,T)$ being weakly mixing. This leads us to Conjecture \ref{HasToBeTrue}.
\begin{conjecture} Let $(X,\mathscr{B},\mu,T)$ be a m.p.s. and let $f \in L^2(X,\mu)$ satisfy
\begin{equation}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^N|\langle U^nf, g\rangle| = 0
\end{equation}
\noindent for every $g \in L^2(X,\mu)$. Then $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is a weakly mixing sequence for a.e. $x \in X$.
\label{HasToBeTrue}
\end{conjecture}
\section{A Pointwise Theorem for Strongly Mixing Systems}
\hskip 4mm In this section we prove Theorem \ref{mainStrongMixingTheorem}, which is Theorem \ref{SecondMainTheorem} for standard measure preserving systems. The method of extending Theorem \ref{mainStrongMixingTheorem} to Theorem \ref{SecondMainTheorem} is the same as the method used in section $4$ to extend Theorem \ref{ActualyFirstMainTheorem} to Theorem \ref{FirstMainTheorem}, so we omit it to save space.
\begin{lemma} Let $([0,1],\mathscr{B},\mu,T)$ be a strongly mixing m.p.s. For every generic point $x \in [0,1]$ and each $f \in C([0,1])$ with $\int_{[0,1]}fd\mu = 0$, $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is a strongly mixing sequence.
\label{StrongMixingLemma}
\end{lemma}
\noindent{\it Proof.} Let $f \in C([0,1])$ be arbitrary and let $(y_n)_{n = 1}^{\infty} \subseteq \mathbb{C}$ be bounded. Let $(N_q)_{q = 1}^{\infty} \subseteq \mathbb{N}$ be any sequence for which
\begin{equation}
\lim_{q\rightarrow\infty}\frac{1}{N_q}\sum_{n = 1}^{N_q}f(T^{n+h}x)\overline{y_n}
\end{equation}
\noindent exists for every $h \in \mathbb{N}$. By Lemma \ref{keylemma}, let $g \in L^2([0,1],\mu)$ be such that
\begin{equation}
\lim_{q\rightarrow\infty}\frac{1}{N_q}\sum_{n = 1}^{N_q}f(T^{n+h}x)\overline{y_n} = \langle U^hf,g\rangle.
\end{equation}
\noindent Letting $U:L^2([0,1],\mu)\rightarrow L^2([0,1],\mu)$ denote the unitary operator induced by $T$, we see that $U$ is strongly mixing. It follows that
\begin{equation}
\pushQED{\qed}
\lim_{h\rightarrow\infty}\lim_{q\rightarrow\infty}\frac{1}{N_q}\sum_{n = 1}^{N_q}f(T^{n+h}x)\overline{y_n} = \lim_{h\rightarrow\infty}\langle U^hf,g\rangle = 0. \qedhere
\popQED
\end{equation}
\begin{theorem}\label{mainStrongMixingTheorem} Let $([0,1],\mathscr{B},\mu,T)$ be a strongly mixing m.p.s. and let $f \in L^1([0,1],\mu)$ satisfy $\int_{[0,1]}fd\mu = 0$. For a.e. $x \in [0,1]$, $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is a strongly mixing sequence.
\end{theorem}
\noindent{\it Proof.} Let $\epsilon > 0$ be arbitrary, and let $g \in C([0,1])$ be such that $||f-g||_1 \le \epsilon$ and $\int_{[0,1]}gd\mu = 0$. Let $X \subseteq [0,1]$ be a set of full measure for which the Birkhoff Pointwise Ergodic Theorem holds for $|f-g|$ along every $x \in X$. We now see that
\begin{equation}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^N|f(T^nx)-g(T^nx)| = \int_X|f-g|d\mu = ||f-g||_1 < \epsilon.
\end{equation}
Now let $(y_n)_{n = 1}^{\infty} \subseteq \mathbb{C}$ be uniformly bounded in norm by $1$. Since a.e. $x \in X$ is a generic point, we may use Lemma \ref{StrongMixingLemma} to further refine $X$ to another set of full measure $X'$, such that for every $x \in X'$, $\left(g(T^nx)\right)_{n = 1}^{\infty}$ is a strongly mixing sequence. We see that for any $x \in X'$, we have
\begin{multline}
\lim_{h\rightarrow\infty}\overline{\lim_{N\rightarrow\infty}}|\frac{1}{N}\sum_{n = 1}^Nf(T^{n+h}x)\overline{y_n}| \\ \le \lim_{h\rightarrow\infty}\overline{\lim_{N\rightarrow\infty}}|\frac{1}{N}\sum_{n = 1}^Ng(T^{n+h}x)\overline{y_n}| +\epsilon = \epsilon.
\label{Modification2}
\end{multline}
\noindent Since $\epsilon > 0$ was arbitrary, we see that $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is a strongly mixing sequence. \qed
In fact, we see that Theorem \ref{mainStrongMixingTheorem} can be used to characterize strongly mixing systems.
\begin{theorem} Let $(X,\mathscr{B},\mu,T)$ be a m.p.s. If for every $f \in L^{\infty}(X,\mu)$ with $\int_Xfd\mu = 0$ there exists a set $A_f \subseteq X$ satisfying $\mu(A_f) = 1$ such that for every $x \in A_f$ the sequence $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is a strongly mixing sequence, then $T$ is strongly mixing.
\label{ActualStrongMixingConverse}
\end{theorem}
\noindent{\it Proof.} Let $A, B \in \mathscr{B}$ both be arbitrary. Let $X' \subseteq X$ be a set of full measure, such that for any $x \in X'$ and any $h \in \mathbb{N}$ we have that $\left(\mathbbm{1}_B(T^nx)-\mu(B)\right)_{n = 1}^{\infty}$, $\left(\mathbbm{1}_A(T^nx)-\mu(A)\right)_{n = 1}^{\infty}$ and $\left(\mathbbm{1}_{T^{-h}(A)\cap B}(T^nx)-\mu(T^{-h}(A)\cap B)\right)_{n = 1}^{\infty}$ are strongly mixing sequences. We note that if $(x_n)_{n = 1}^{\infty} \subseteq \mathbb{C}$ is a strongly mixing sequence, then for any $(N_q)_{q = 1}^{\infty} \subseteq \mathbb{N}$ for which the limits in equation \eqref{MixingImpliesErgodic} exist, we have
\begin{equation}
\label{MixingImpliesErgodic}
\lim_{q\rightarrow\infty}\frac{1}{N_q}\sum_{n = 1}^{N_q}x_n = \lim_{h\rightarrow\infty}\lim_{q\rightarrow\infty}\frac{1}{N_q}\sum_{n = 1}^{N_q}x_{n+h}\overline{1} = 0.
\end{equation}
\noindent In particular, for any $x \in X'$, we have that
\begin{equation}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^N\mathbbm{1}_B(T^nx) = \mu(B)
\end{equation}
\noindent and
\begin{equation}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^N\mathbbm{1}_{T^{-h}(A)\cap B}(T^nx) = \mu(T^{-h}(A)\cap B)
\end{equation}
\noindent for every $h \in \mathbb{N}$. We now see that for any $x \in X'$, we have
\begin{equation}
\lim_{h\rightarrow\infty}\mu(T^{-h}(A)\cap B) = \lim_{h\rightarrow\infty}\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^N\mathbbm{1}_{T^{-h}(A)\cap B}(T^nx)
\end{equation}
\begin{equation}
= \lim_{h\rightarrow\infty}\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^N\mathbbm{1}_{T^{-h}(A)}(T^nx)\mathbbm{1}_{B}(T^nx)
\end{equation}
\begin{equation}
= \lim_{h\rightarrow\infty}\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^N\mathbbm{1}_{A}(T^{n+h}x)\mathbbm{1}_{B}(T^nx)
\end{equation}
\begin{multline}
= \lim_{h\rightarrow\infty}\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^N(\mathbbm{1}_{A}(T^{n+h}x)-\mu(A))\mathbbm{1}_{B}(T^nx)\\ +\lim_{h\rightarrow\infty}\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^N\mu(A)\mathbbm{1}_{B}(T^nx)
\end{multline}
\begin{equation}
\pushQED{\qed}
= 0+\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n = 1}^N\mu(A)\mathbbm{1}_{B}(T^nx) = \mu(A)\mu(B). \qedhere
\popQED
\end{equation}
\section{Concluding Remarks}
\hskip 4mm The definitions of weakly mixing sequences and strongly mixing sequences are quite cumbersome, since we have to pass to a subsequence $(N_q)_{q = 1}^{\infty}$ for which all of the relevant limits exist. To circumvent this, we would like to propose Definition \ref{SupremelyStronglyMixing} as a replacement for the current definition of strongly mixing sequences.
\begin{definition} Let $(x_n)_{n = 1}^{\infty}$ be a sequence of complex numbers for which
\begin{equation}
\overline{\lim_{N\rightarrow\infty}}\frac{1}{N}\sum_{n = 1}^N|x_n| < \infty.
\end{equation}
$(x_n)_{n = 1}^{\infty}$ is a {\bf supremely strongly mixing sequence} if for any bounded sequence of complex numbers $(y_n)_{n = 1}^{\infty}$ we have
\begin{equation}
\lim_{h\rightarrow\infty}\overline{\lim_{N\rightarrow\infty}}|\frac{1}{N}\sum_{n = 1}^Nx_{n+h}\overline{y_n}| = 0.
\end{equation}
\label{SupremelyStronglyMixing}
\end{definition}
It is clear that any supremely strongly mixing sequence is a strongly mixing sequence, and it would be convenient if the two notions were equivalent, as supremely strongly mixing sequences are much simpler to work with. However, Lemma \ref{CounterExample} shows that the only supremely strongly mixing sequences are the trivial ones, namely those for which $\overline{\lim_{N\rightarrow\infty}}\frac{1}{N}\sum_{n = 1}^N|x_n| = 0$.
\begin{lemma}\label{CounterExample} If $(x_n)_{n = 1}^{\infty}$ is any sequence of complex numbers for which
\begin{equation}
\overline{\lim_{N\rightarrow\infty}}\frac{1}{N}\sum_{n = 1}^N|x_n| < \infty,
\end{equation}
\noindent then there exists a bounded sequence of complex numbers $(y_n)_{n = 1}^{\infty}$ for which
\begin{equation}
\overline{\lim_{N\rightarrow\infty}}|\frac{1}{N}\sum_{n = 1}^Nx_{n+h}\overline{y_n}| = \overline{\lim_{N\rightarrow\infty}}\frac{1}{N}\sum_{n = 1}^N|x_n|,
\end{equation}
\noindent for every $h \in \mathbb{N}\cup\{0\}$.
\end{lemma}
\noindent{\it Proof.} Let $(N_k)_{k = 1}^{\infty}$ be such that
\begin{equation}
\overline{\lim_{N\rightarrow\infty}}\frac{1}{N}\sum_{n = 1}^N|x_n| = \lim_{k\rightarrow\infty}\frac{1}{N_k}\sum_{n = 1}^{N_k}|x_n|.
\end{equation}
\noindent By passing to a subsequence, we may assume without loss of generality that
\begin{equation}
\frac{1}{N_{k+1}}\sum_{n = 1}^{N_k}|x_n| < \frac{1}{k}.
\end{equation}
Let $f:\mathbb{N}_0^2\rightarrow\mathbb{N}_0$ be any bijection. For $N_{f(m,h)} < n \le N_{f(m,h)+1}$, let $y_n = \text{sgn}(x_{n+h})$, where $\text{sgn}(z) := z/|z|$ for $z \neq 0$ and $\text{sgn}(0) := 0$, so that $x_{n+h}\overline{y_n} = |x_{n+h}|$ for such $n$. Now let $h \in \mathbb{N}_0$ be arbitrary, and note that
\begin{equation}
\overline{\lim_{N\rightarrow\infty}}\frac{1}{N}\sum_{n = 1}^N|x_n|
\end{equation}
\begin{equation}
\ge \overline{\lim_{N\rightarrow\infty}}|\frac{1}{N}\sum_{n = 1}^Nx_{n+h}\overline{y_n}| \ge \overline{\lim_{m\rightarrow\infty}}|\frac{1}{N_{f(m,h)+1}}\sum_{n = 1}^{N_{f(m,h)+1}}x_{n+h}\overline{y_n}|
\end{equation}
\begin{equation}
= \overline{\lim_{m\rightarrow\infty}}\frac{1}{N_{f(m,h)+1}}\left|\sum_{n = 1}^{N_{f(m,h)}}x_{n+h}\overline{y_n}+\sum_{n = N_{f(m,h)}+1}^{N_{f(m,h)+1}}|x_{n+h}|\right|
\end{equation}
\begin{equation}\ge \overline{\lim_{m\rightarrow\infty}}\frac{1}{N_{f(m,h)+1}}\left(\sum_{n = N_{f(m,h)}+1}^{N_{f(m,h)+1}}|x_{n+h}|-|\sum_{n = 1}^{N_{f(m,h)}}x_{n+h}\overline{y_n}|\right)
\end{equation}
\begin{equation}
\pushQED{\qed}
\ge \overline{\lim_{m\rightarrow\infty}}\left(\frac{1}{N_{f(m,h)+1}}\sum_{n = 1}^{N_{f(m,h)+1}}|x_{n+h}|-\frac{2}{f(m,h)}\right) = \overline{\lim_{N\rightarrow\infty}}\frac{1}{N}\sum_{n = 1}^N|x_n|. \qedhere
\popQED
\end{equation}
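To see the construction at work, consider for instance $x_n = (-1)^n$, so that $\overline{\lim_{N\rightarrow\infty}}\frac{1}{N}\sum_{n = 1}^N|x_n| = 1$, and take $N_k = (k+1)!$, which satisfies the growth condition above since $N_k/N_{k+1} = 1/(k+2) < 1/k$. On a block $N_{f(m,h)} < n \le N_{f(m,h)+1}$ assigned to $h$, the construction gives $y_n = (-1)^{n+h}$ and hence $x_{n+h}\overline{y_n} = 1$ there; since $f(m,h)\rightarrow\infty$ and therefore $N_{f(m,h)}/N_{f(m,h)+1}\rightarrow 0$ as $m\rightarrow\infty$ for fixed $h$, the averages $\frac{1}{N_{f(m,h)+1}}|\sum_{n = 1}^{N_{f(m,h)+1}}x_{n+h}\overline{y_n}|$ tend to $1$ for every fixed $h$, and $(x_n)_{n = 1}^{\infty}$ is indeed not supremely strongly mixing.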
We will now show that Theorem \ref{ActualyFirstMainTheorem} and Theorem \ref{mainStrongMixingTheorem} hold for arbitrary measure preserving systems. Theorem \ref{ExtendingToTheGeneralSituation} is the generalization of Theorem \ref{ActualyFirstMainTheorem} to an arbitrary measure preserving system, and the argument for Theorem \ref{mainStrongMixingTheorem} is identical. For the proof of Theorem \ref{ExtendingToTheGeneralSituation} we will be using vocabulary and notation from chapter 15 of \cite{Royden} that we will not discuss here.
\begin{theorem} Let $(X,\mathscr{B},\mu,T)$ be a weakly mixing m.p.s. and let $f \in L^1(X,\mu)$ satisfy $\int_Xfd\mu = 0$. For a.e. $x \in X$, $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is a weakly mixing sequence.
\label{ExtendingToTheGeneralSituation}
\end{theorem}
\noindent{\it Proof.} Let $\mathscr{A}$ denote the $\sigma$-algebra generated by $\{\{f < q\} : q \in \mathbb{Q}\}$. We see that $\mathscr{A}$ is a countably generated $\sigma$-algebra with respect to which $f$ is measurable. By Carath\'eodory's Theorem (cf. Theorem 15.3.4 of \cite{Royden}) there exists an isomorphism $\Phi$ of $\langle\mathscr{A}, \mu\rangle$ into $\langle\mathscr{L}/\mathscr{N},m\rangle$, where $\mathscr{L}$ is the Lebesgue $\sigma$-algebra on $[0,1]$, $m$ is Lebesgue measure and $\mathscr{N} \subseteq \mathscr{L}$ is the $\sigma$-algebra of null sets. By Proposition $15.2.2$ of \cite{Royden}, there exists $\tilde{f} \in L^1([0,1],m)$ such that for every $q \in \mathbb{Q}$ we have $\{\tilde{f} < q\} = \Phi(\{f < q\})$. By Proposition $15.6.19$ of \cite{Royden}, let $\phi:X\rightarrow[0,1]$ be a measurable transformation for which $\mu(\phi^{-1}(B)\triangle\Phi^{-1}(B)) = 0$ for every $B \in \Phi(\mathscr{A})$. We see that for every $q \in \mathbb{Q}$, the sets $(\tilde{f}\circ\phi)^{-1}((-\infty,q))$ and $f^{-1}((-\infty,q))$ agree up to a $\mu$-null set, so $\tilde{f}\circ\phi = f$ a.e. by the uniqueness portion of Proposition $15.6.19$ of \cite{Royden}. Now note that $\Phi\circ T^{-1}\circ\Phi^{-1}$ is a $\sigma$-isomorphism of $\Phi(\mathscr{A})$ to itself, so yet another application of Proposition $15.6.19$ of \cite{Royden} yields a map $S:[0,1]\rightarrow[0,1]$ for which $S^{-1}(B) = \Phi(T^{-1}(\Phi^{-1}(B)))$ for every $B \in \Phi(\mathscr{A})$. Noting that $\phi^{-1}\circ S^{-1}$ and $T^{-1}\circ \phi^{-1}$ are the same $\sigma$-homomorphism, we can use the uniqueness portion of Proposition $15.6.9$ of \cite{Royden} once again to see that $S\circ \phi = \phi\circ T$. It follows that for some $X' \subseteq X$ with $\mu(X') = 1$ we have that $S^n(\phi(x)) = \phi(T^n(x))$ and $\tilde{f}(\phi(T^n(x))) = f(T^n(x))$ for all $n \in \mathbb{N}$ and all $x \in X'$. By Theorem $1$, let $X'' \subseteq [0,1]$ be such that $\left(\tilde{f}(S^nx)\right)_{n = 1}^{\infty}$ is a weakly mixing sequence for every $x \in X''$. Finally, we see that for any $x \in X'\cap\phi^{-1}(X'')$ we have that $\left(f(T^nx)\right)_{n = 1}^{\infty}$ is a weakly mixing sequence. \qed
\vskip 5mm
\begin{remark}
Lastly, we note that theorems similar to Theorem \ref{ActualyFirstMainTheorem} and Theorem \ref{mainStrongMixingTheorem} can be proven for some other levels of mixing (such as mild mixing, but not K-mixing) using the same techniques. We see that the only difference between the proofs of Theorem \ref{ActualyFirstMainTheorem} and Theorem \ref{mainStrongMixingTheorem} lies in equations \eqref{Modification} and \eqref{Modification2}; in particular, the mode of convergence there is the only difference. By working with convergence along filters, a theorem that simultaneously generalizes Theorems \ref{ActualyFirstMainTheorem} and \ref{mainStrongMixingTheorem} can be proven, and the author plans to include the statement and proof of this theorem in his thesis. It is omitted here since it requires preliminary knowledge about convergence along filters.
\end{remark}
\end{document}
|
\begin{document}
\renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}}
\setcounter{footnote}{-1}
\numberwithin{equation}{section}
\title{A remark on Chevalley's ambiguous class number formulas}
\author{Chia-Fu Yu}
\address{
Institute of Mathematics, Academia Sinica and NCTS (Taipei Office)\\
Astronomy Mathematics Building \\
No. 1, Roosevelt Rd. Sec. 4 \\
Taipei, Taiwan, 10617}
\email{[email protected]}
\date{\today}
\subjclass[2010]{}
\keywords{}
\begin{abstract}
In this note we remark that
Chevalley's ambiguous class number formula
is an immediate consequence of the Hasse norm theorem, the local and
global norm index theorems for cyclic extensions.
\end{abstract}
\maketitle
\section{Introduction}
\label{sec:01}
The ambiguous ideal classes are the Galois invariants of
the class group of a cyclic extension of a number field.
In \cite{chevalley} Chevalley gave a formula for the
ambiguous class number.
Besides Chevalley's original paper, one can also find
Chevalley's ambiguous class number formula
in Gras' book (see \cite[p.~178, p.~180]{gras:cft}).
Lemmermeyer \cite{lemmermeyer:amb} supplied a modern and
elementary proof, as the proof does not seem to appear in most textbooks
of algebraic number theory.
In this note we observe that this formula can be also
deduced very easily from
the Hasse norm theorem, the local and
global norm index theorems for cyclic extensions.
The proofs of these standard theorems can be found in most
textbooks of algebraic number theory, for example in Lang's book
\cite{lang:ant}. We should note that our proof is not completely
independent of the proofs that appeared before, as
the $Q$-machine of Herbrand is also applied in the proof of the local
norm index theorem.
We now state the ambiguous class number formula.
Consider a cyclic extension $K/k$ of number fields
with cyclic Galois group $G=\Gal(K/k)=\langle\sigma\rangle$ of generator $\sigma$.
Denote by $\gro$ and $\grO$ the ring of integers of
$k$ and $K$, respectively.
Let $\infty$ and $\infty_r$
(resp. $\wt \infty$ and $\wt \infty_r$) denote the set
of infinite and real places of $k$
(resp. of $K$), respectively, and $\mathbb A_k$ (resp. $\mathbb A_K$) the adele ring
of $k$ (resp. $K$). Let $r_k:\wt
\infty \to \infty$ denote the restriction to $k$.
A real cycle is a cycle which is supported in the set of real places
\cite[p.~123]{lang:ant}. We may identify a real cycle
with its support, which is a subset of real places.
Suppose that $\wt \grc$ is a real cycle on $K$ which is stable under the
$G$-action. Denote by
\begin{equation}
\label{eq:ClKc}
\mathbb Cl(K,\wt \grc):=\frac{\mathbb A_K^\times}{K^\times \widehat \grO^\times
K_\infty(\wt \grc)^\times}
\end{equation}
the narrow ideal class group of $K$ with respect to $\wt \grc$, where $\widehat
\grO$ is the profinite completion of $\grO$, and
$ K_\infty(\wt \grc)^\times =\{a=(a_w) \in K_\infty^\times \mid a_w
>0 \quad \forall\, w| \wt \grc \}.$
Similarly one defines $\mathbb Cl(k,\grc)$ for any real cycle $\grc$ on $k$.
The group $G$ acts on the finite abelian group $\mathbb Cl(K,\wt \grc)$.
Its $G$-invariant subgroup $\mathbb Cl(K,\wt \grc)^G$ is
called the {\it ambiguous ideal class group} (with respect to $\wt \grc$).
Let $\grc$ be the real cycle on $k$ such that $\infty_r-\grc=r_k(\wt
\infty_r-\wt \grc)$, and let $\grc_0:=r_k(\wt \grc)$.
One has $\grc=\grc_0
\infty_r^c$, where $\infty_r^c$ is the set of real places of $k$ which do
not split completely in $K$. Let
$N_{K/k}$ denote the norm map from $K$ to $k$. The cycle $\grc$ is
determined by the property
$N_{K/k}(K_\infty(\wt \grc)^\times)=k_\infty(\grc)^\times$.
Put
$\gro(\grc)^\times:=\gro^\times \cap
i_\infty^{-1}(k_\infty(\grc)^\times)$, where $i_\infty:k^\times \to
k_\infty^\times$ is the diagonal embedding.
Denote by
$V_f$ the set of finite places of $k$. Let $e(v)$ denote
the ramification index of any place $w$ over $v\in V_f$.
\begin{thm}\label{form_ClG}
One has
\begin{equation}
\label{eq:form_ClG}
\# \mathbb Cl(K,\wt \grc)^G=\frac{\#\mathbb Cl(k,\grc) \prod_{v\in V_f}
e(v)}{[K:k][\gro(\grc)^\times : \gro(\grc)^\times \cap
N_{K/k}(K^\times)]}.
\end{equation}
\end{thm}
When $\wt \grc=\wt \infty_r$, we get the restricted version of the
formula stated in \cite[p.~178]{gras:cft}.
When $\wt \grc=\emptyset$, using the elementary fact
\[ \# \mathbb Cl(k,\infty_r^c)=\frac{h(k)\cdot
2^{|\infty_r^c|}}{[\gro^\times:\gro(\infty_r^c)^\times]}, \]
we get the ordinary version of the formula stated
in \cite[p.~180]{gras:cft}.
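As a simple illustration of Theorem~\ref{form_ClG}, take $k=\mathbb Q$, $K=\mathbb Q(\sqrt{-5})$ and $\wt \grc=\emptyset$. Here $\grc=\infty_r^c$ consists of the single real place of $\mathbb Q$, so $\#\mathbb Cl(k,\grc)=1$ and $\gro(\grc)^\times=\{1\}$, whence the unit index in (\ref{eq:form_ClG}) equals $1$, while exactly the finite places $v=2,5$ ramify, so that $\prod_{v\in V_f}e(v)=4$. Formula (\ref{eq:form_ClG}) then gives
\[ \#\mathbb Cl(K,\emptyset)^G=\frac{1\cdot 4}{2\cdot 1}=2, \]
which agrees with the classical facts that $\mathbb Cl(\mathbb Q(\sqrt{-5}))\simeq \mathbb Z/2\mathbb Z$ and that every class is ambiguous in this case, since $\sigma$ sends each ideal class to its inverse and the class group has exponent $2$.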
\section{Proof of Theorem~\ref{form_ClG}}
\label{sec:02}
Define the norm ideal class group $N(K,\wt \grc)$ by
\begin{equation}
\label{eq:NKc}
N(K,\wt \grc):=\frac{N_{K/k}(\mathbb A_K^\times)}
{N_{K/k}(K^\times \widehat \grO^\times K_\infty(\wt \grc)^\times)}.
\end{equation}
Consider the commutative diagram of two short exact sequences (by
Hilbert's Theorem 90)
\begin{equation}
\label{eq:N_CD}
\begin{CD}
1 @>>> \mathbb A_K^{\times 1-\sigma}\cap U @>>> U @>{N_{K/k}}>>
N_{K/k}(U) @>>> 1 \\
@. @VVV @VVV @VVV \\
1 @>>> \mathbb A_K^{\times 1-\sigma} @>>> \mathbb A_K^\times @>{N_{K/k}}>>
N_{K/k}(\mathbb A_K^\times) @>>> 1, \\
\end{CD}
\end{equation}
where $U=K^\times \widehat \grO^\times K_\infty(\wt \grc)^\times$. The
snake lemma gives the short exact sequence
\begin{equation}
\label{eq:Cl_N}
\begin{CD}
1 @>>> \mathbb Cl(K,\wt \grc)^{1-\sigma} @>>> \mathbb Cl(K,\wt \grc) @>>>
N(K,\wt \grc) @>>> 1
\end{CD}
\end{equation}
as one has an isomorphism $\mathbb A_K^{\times 1-\sigma}/(\mathbb A_K^{\times 1-\sigma}\cap U)\simeq \mathbb Cl(K,\wt \grc)^{1-\sigma}$.
On the other hand we have the short exact sequence
\begin{equation}
\label{eq:ClG}
\begin{CD}
1 @>>> \mathbb Cl(K,\wt \grc)^G @>>> \mathbb Cl(K,\wt \grc) @>>>
\mathbb Cl(K,\wt \grc)^{1-\sigma} @>>> 1,
\end{CD}
\end{equation}
which, together with (\ref{eq:Cl_N}), shows the following result.
\begin{lemma}\label{ClG=N}
We have $\# \mathbb Cl(K,\wt \grc)^G= \# N(K,\wt \grc)$.
\end{lemma}
Define
\[ \mathbb Cl(k,\grc, \grO):=\frac{\mathbb A_k^\times}{k^\times
k_\infty(\grc)^\times N_{K/k}(\widehat \grO^\times)}. \]
\begin{lemma}\label{N_Cl}
The group $N(K,\wt \grc)$ is isomorphic to a subgroup $H\subset
\mathbb Cl(k,\grc, \grO)$ of index $[K:k]$.
\end{lemma}
\begin{proof}
Put $A:=N_{K/k}(\mathbb A_K^\times)$,
$B:=N_{K/k}(K^\times \widehat \grO^\times K_\infty(\wt \grc)^\times)$,
$C:=k^\times$ and $H:=CA/CB$. The group $H$ is a subgroup in
$\mathbb Cl(k,\grc,\grO)$, which is
of index $[K:k]$ by the global norm index theorem
\cite[p.~193]{lang:ant}.
One has $A\cap C=N_{K/k}(K^\times)\subset B$
by the Hasse norm theorem \cite[p.~195]{lang:ant}.
The lemma follows from
\[ N(K,\wt \grc)=A/B = A/\bigl((A\cap C)B\bigr) \simeq CA/CB=H. \qedhere \]
\end{proof}
Consider the exact sequence
\begin{equation}
\label{eq:final_exact}
\begin{CD}
1 @>>> \frac{\gro(\grc)^\times}{\gro(\grc)^\times\cap N(\widehat
\grO^\times)}@>>> \frac{\widehat \gro^\times}{N(\widehat \grO^\times)} @>>>
\mathbb Cl(k,\grc, \grO) @>>> \mathbb Cl(k,\grc) @>>> 1.
\end{CD}
\end{equation}
It is easy to see
$\gro(\grc)^\times\cap N_{K/k}(\widehat \grO^\times)
=\gro(\grc)^\times\cap N_{K/k}(K^\times)$ from the Hasse
norm theorem. The local norm index theorem
\cite[p.~188, Lemma 4]{lang:ant}
gives
\begin{equation}
\label{eq:loc_norm_ind}
\#\left(\frac{\widehat \gro^\times}{N(\widehat \grO^\times)}\right)
=\prod_{v\in V_f} e(v).
\end{equation}
Combining Lemma~\ref{N_Cl}, (\ref{eq:final_exact}) and
(\ref{eq:loc_norm_ind}) we get
\begin{equation}
\label{eq:form_N}
\# N(K,\wt \grc)=\frac{\#\mathbb Cl(k,\grc,\grO)}{[K:k]}=
\frac{\#\mathbb Cl(k,\grc) \prod_{v\in V_f}
e(v)}{[K:k][\gro(\grc)^\times : \gro(\grc)^\times \cap
N_{K/k}(K^\times)]}.
\end{equation}
Theorem~\ref{form_ClG} follows from Lemma~\ref{ClG=N} and
(\ref{eq:form_N}). \qed
\begin{remark}
We do not know whether $\mathbb Cl(K,\wt \grc)^G$ and $N(K,\wt \grc)$ are
isomorphic as abelian groups or whether
there is a natural bijection between
them. When $[K:k]=2$ and $\# \mathbb Cl(K,\wt
\grc)^{1-\sigma}$ is odd, we show that there is a natural
isomorphism
\begin{equation}
\label{eq:Cl=N}
N(K,\wt \grc) \simeq \mathbb Cl(K,\wt \grc)^G.
\end{equation}
The map $1-\sigma: \mathbb Cl(K,\wt \grc) \to \mathbb Cl(K,\wt \grc)^{1-\sigma}$
restricted to $\mathbb Cl(K,\wt \grc)^{1-\sigma}$ is the squaring map
${\rm Sq}$, which is an isomorphism by our assumption.
The inverse of ${\rm Sq}$ defines a section of (\ref{eq:ClG}),
and hence an isomorphism
$\mathbb Cl(K,\wt \grc)\simeq \mathbb Cl(K,\wt \grc)^G\oplus \mathbb Cl(K,\wt
\grc)^{1-\sigma}$. The assertion (\ref{eq:Cl=N}) then follows.
\end{remark}
\section*{Acknowledgments}
The present work was done during the author's stay at the Max-Planck-Institut
f\"ur Mathematik. He is grateful to the Institut for its kind hospitality
and excellent working environment.
The author is partially supported by the grants
MoST 100-2628-M-001-006-MY4 and 103-2918-I-001-009.
\end{document}
|
\begin{document}
\allowdisplaybreaks
\title[Global existence for a system of multiple-speed wave equations]
{Global existence for a system of multiple-speed wave equations
violating the null condition}
\author[K. Hidano]{Kunio Hidano$^*$}
\author[K. Yokoyama]{Kazuyoshi Yokoyama}
\author[D. Zha]{Dongbing Zha$^\dag$}
\renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}}
\footnote[0]{2010\textit{ Mathematics Subject Classification}.
Primary 35L52, 35L15; Secondary 35L72}
\keywords{
Global existence, multiple-speed wave equations.
}
\thanks{
$^*$Partly supported by
the Grant-in-Aid for Scientific Research (C) (No.\,18K03365),
Japan Society for the Promotion of Science (JSPS) }
\thanks{
$^\dag$Supported by National Natural Science Foundation of
China (No.11801068) and
Fundamental Research Funds for
the Central Universities (No.\,2232021G-13)
}
\address{
Department of Mathematics \endgraf
Faculty of Education \endgraf
Mie University \endgraf
1577 Kurima-machiya-cho Tsu \endgraf
Mie Prefecture 514-8507 \endgraf
Japan
}
\email{[email protected]}
\address{
Hokkaido University of Science \endgraf
7-Jo 15-4-1 Maeda, Teine, Sapporo \endgraf
Hokkaido 006-8585 \endgraf
Japan
}
\email{[email protected]}
\address{
Department of Mathematics and \endgraf
Institute for Nonlinear Sciences \endgraf
Donghua University \endgraf
Shanghai 201620 \endgraf
PR China
}
\email{[email protected]}
\maketitle
\begin{abstract}
We discuss the Cauchy problem
for a system of semilinear wave equations
in three space dimensions with
multiple wave speeds.
Though our system does not satisfy the standard null condition,
we show that
it admits a unique global solution
for any small and smooth data.
This generalizes a preceding result due to
Pusateri and Shatah.
The proof is carried out by the energy method
involving a collection of generalized derivatives.
The multiple wave speeds disable the use of
the Lorentz boost operators,
and our proof therefore relies upon the version of
the energy method due to Klainerman and Sideris.
Due to the presence of nonlinear terms
violating the standard null condition,
some of the components of the solution
may have a weaker decay as $t\to\infty$,
which makes it difficult even to establish
a mildly growing (in time) bound for the high energy estimate.
We overcome this difficulty by
relying upon the ghost weight energy estimate of Alinhac and
the Keel-Smith-Sogge type $L^2$ weighted space-time estimate for
derivatives.
\end{abstract}
\section{Introduction}
This paper is concerned with the Cauchy problem for
a system of semilinear wave equations
in three space dimensions of the form
\begin{equation}\label{eq1}
\begin{cases}
\displaystyle{
\partial_t^2 u_1-\Delta u_1
=
F_1(\partial u_1,\partial u_2,\partial u_3),
\,\,t>0,\,x\in{\mathbb R}^3
},\\
\displaystyle{
\partial_t^2 u_2-\Delta u_2
=
F_2(\partial u_1,\partial u_2,\partial u_3),
\,\,t>0,\,x\in{\mathbb R}^3
},\\
\displaystyle{
\partial_t^2 u_3-c_0^2\Delta u_3
=
F_3(\partial u_1,\partial u_2,\partial u_3),
\,\,t>0,\,x\in{\mathbb R}^3
}
\end{cases}
\end{equation}
subject to the initial condition
\begin{equation}\label{data}
(u_i(0),\partial_t u_i(0))
=
(f_i,g_i)
\in
C_0^\infty({\mathbb R}^3)\times C_0^\infty({\mathbb R}^3),\,\,
i=1,2,3,
\end{equation}
where $(u_1,u_2,u_3):(0,\infty)\times{\mathbb R}^3\to{\mathbb R}^3$,
$\partial=(\partial_0,\partial_1,\partial_2,\partial_3)$,
$\partial_0=\partial/\partial t$,
$\partial_i=\partial/\partial x_i$,
and $c_0>0$.
Moreover, $F_1(y)$, $F_2(y)$, and $F_3(y)$
are polynomials in $y\in{\mathbb R}^{12}$
of degree $\geq 2$. That is,
we suppose that
the nonlinear term has the form
\begin{equation}\label{formF}
F_i(\partial u_1,\partial u_2,\partial u_3)
=
F_i^{jk,\alpha\beta}(\partial_\alpha u_j)(\partial_\beta u_k)
+
C_i(\partial u_1,\partial u_2,\partial u_3),\,\,i=1,2,3,
\end{equation}
where $C_i(y)$ is a polynomial in $y\in{\mathbb R}^{12}$
of degree $\geq 3$.
In what follows, we suppose $F_i^{jk,\alpha\beta}=0$ if $j>k$,
without loss of generality.
Here, and in the following discussion as well,
we use the summation convention:
repeated Greek and Roman indices, appearing once
as a subscript and once as a superscript,
are summed from $0$ to $3$ and
from $1$ to $3$, respectively.
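For instance, with this convention the quadratic part of (\ref{formF}) is shorthand for
\begin{equation*}
F_i^{jk,\alpha\beta}(\partial_\alpha u_j)(\partial_\beta u_k)
=
\sum_{j,k=1}^{3}\sum_{\alpha,\beta=0}^{3}
F_i^{jk,\alpha\beta}(\partial_\alpha u_j)(\partial_\beta u_k).
\end{equation*}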
Though our main interest lies in global existence
of small, smooth solutions in the case $c_0\ne 1$,
we first review some of the results for the case $c_0=1$.
It follows from the fundamental result of John and Klainerman \cite{JK}
that the equation (\ref{eq1}) admits a unique
``almost global'' solution for small, smooth data with compact support.
That is, the time interval on which
the local solution exists becomes exponentially large
as the size of initial data gets smaller and smaller.
Almost global existence is the most that one can expect in general.
Indeed, nonexistence of global solutions is known even for
small data.
See, e.g., John \cite{John1981} and Sideris \cite{Sideris1983}
for the scalar equations $\partial_t^2 u-\Delta u=(\partial_t u)^2$
and $\partial_t^2 u-\Delta u=|\nabla u|^2$,
respectively.
On the other hand, if the null condition is satisfied,
that is, for any given $i,j,k$ we have
$F_i^{jk,\alpha\beta}X_\alpha X_\beta\equiv 0$
for all $(X_0,X_1,X_2,X_3)\in{\mathbb R}^4$ satisfying
$X_0^2=X_1^2+X_2^2+X_3^2$,
then it follows from the seminal result of
Christodoulou \cite{Christodoulou} and
Klainerman \cite{KlainermanNull86}
(see also Alinhac \cite[p.\,94]{Al2010}
for a new proof using $L^2$ space-time weighted estimates
for some special derivatives)
that the equation (\ref{eq1}) admits a unique global
solution for small, smooth data.
Christodoulou employed the method of conformal mapping
and Klainerman employed the energy method involving
the generators of the translations,
the Lorentz transformations, and the dilations.
Let us turn our attention to the case $c_0\ne 1$,
which does not seem amenable to the method in \cite{Christodoulou}
or \cite{KlainermanNull86}
because of the presence of multiple wave speeds.
Alternative techniques based on
a smaller collection of generators
have been explored by many authors,
such as Kovalyov \cite{Kov} and Yokoyama \cite{Y}
using pointwise estimates of the fundamental solution,
Klainerman and Sideris \cite{KS} and Sideris and Tu \cite{SiderisTu}
without relying upon pointwise estimates of the fundamental solution,
and Keel, Smith and Sogge \cite{KSS2002}
using $L^2$ space-time weighted estimates for derivatives.
Obviously, the technique in \cite{KSS2002} is applicable to
the Cauchy problem (\ref{eq1})--(\ref{data}) with $c_0\ne 1$
and leads to an almost global existence result.
Moreover, if $c_0\ne 1$ and the null condition in the sense of
\cite{Y}, \cite{SiderisTu}, and \cite{LNS} is satisfied,
that is, we have for any $i=1,2$ and $(j,k)=(1,1)$, $(1,2)$,
and $(2,2)$
\begin{align}
&F_i^{jk,\alpha\beta}X_\alpha X_\beta\equiv 0,\,\,
X\in{\mathcal N}^{(1)},\label{null1}\\
&F_3^{33,\alpha\beta}X_\alpha X_\beta\equiv 0,\,\,
X\in{\mathcal N}^{(c_0)},\label{null2}
\end{align}
then it follows from \cite{Y}, \cite[Remark following Theorem 3.1]{SiderisTu}, and
\cite[Theorem 1.1]{LNS}
that the equation (\ref{eq1}) admits a unique global solution
for small, smooth data.
(We note that as pointed out in \cite{H2004},
the argument of Sideris and Tu is general enough to
handle the nonlinear terms satisfying (\ref{null1})--(\ref{null2}),
although they were not explicitly treated in \cite{SiderisTu}.)
Here, and in the following as well,
we use the notation
\begin{equation}\label{definitionnc}
{\mathcal N}^{(c)}:=
\{
X=(X_0,X_1,X_2,X_3)\in{\mathbb R}^4\,:\,
X_0^2=c^2(X_1^2+X_2^2+X_3^2)
\},\,\,c>0.
\end{equation}
Recently, there have been a lot of activities
in studying systems of wave equations with wider classes of
quadratic nonlinear terms
for which one still enjoys global solutions
for any small, smooth data.
See, e.g., \cite{LR2003}, \cite{Al2006}, \cite{Lind2008},
\cite{KMS}, \cite{HY2017}, and \cite{Keir} for
systems in three space dimensions with equal propagation speeds.
As for (\ref{eq1}) with $c_0\ne 1$,
we easily see that
the condition (\ref{null1})--(\ref{null2}) is sufficient but not necessary
for global existence. Indeed,
setting
$F_1=(\partial_t u_2)^2$,
$F_2=F_3=(\partial_t u_2)(\partial_t u_3)$,
we see that the term $(\partial_t u_2)^2$ violates the condition
(\ref{null1}), but we still obtain global solutions by first
solving the system consisting of the second and the third equations in
(\ref{eq1}) on the basis of the results in
\cite{Y}, \cite{SiderisTu}, and \cite{LNS}
and then regarding the first equation in (\ref{eq1})
just as the inhomogeneous wave equation with the
``source term'' $(\partial_t u_2)^2$.
Interestingly,
using the space-time resonance method,
Pusateri and Shatah \cite{PS} have proved that
global existence of small solutions carries over to
3-component systems with a class of nonlinear terms, say,
$F_1=(\partial_t u_2)^2+(\partial_t u_1)(\partial_t u_3)^2$,
$F_2=(\partial_t u_2)(\partial_t u_3)+
\bigl((\partial_t u_2)^2-|\nabla u_2|^2\bigr)$,
$F_3=(\partial_t u_2)(\partial_t u_3)+(\partial_t u_1)(\partial_t u_2)^2$.
They also mention that
$\partial u_1$ has a weaker decay as $t\to\infty$.
Inspired by their observation,
we like to find 3-component and 2-speed systems with
a wider class of nonlinear terms
for which one still obtains global solutions for small, smooth data.
In particular, we are interested in the case where
$u_1$, which may have a weaker decay, is involved
in quadratic nonlinear terms.
We suppose
\begin{align}
&F_1^{11,\alpha\beta}X_\alpha X_\beta\equiv 0,\,\,
X\in{\mathcal N}^{(1)},\label{assumption1}\\
&F_2^{11,\alpha\beta}X_\alpha X_\beta
=
F_2^{12,\alpha\beta}X_\alpha X_\beta
=
F_2^{22,\alpha\beta}X_\alpha X_\beta\equiv 0,\,\,
X\in{\mathcal N}^{(1)},\label{assumption2}\\
&
F_3^{33,\alpha\beta}X_\alpha X_\beta\equiv 0,\,\,
X\in{\mathcal N}^{(c_0)},\label{assumption3}\\
&
F_2^{13,\alpha\beta}=0
\mbox{\,\,for any $\alpha$, $\beta$},\label{assumption4}\\
&
F_3^{13,\alpha\beta}=0
\mbox{\,\,for any $\alpha$, $\beta$,}\label{assumption5}\\
&
F_3^{11,\alpha\beta}X_\alpha X_\beta
=F_3^{12,\alpha\beta}X_\alpha X_\beta\equiv 0,
\,\,X\in{\mathcal N}^{(1)},\label{assumption6}
\end{align}
which means that,
since the condition (\ref{assumption1}) is weaker than
(\ref{null1}) with $i=1$,
a nonlinear term such as
\begin{equation}\label{modelterm2019502}
F_1=(\partial_t u_2)^2
+
(\partial_t u_1)(\partial_t u_2)
+
\bigl(
(\partial_t u_1)^2-|\nabla u_1|^2
\bigr)
+
C_1(\partial u_1,\partial u_2,\partial u_3)
\end{equation}
is admissible. Also, any cubic term is admissible.
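To make the admissibility of (\ref{modelterm2019502}) explicit, note that the null form $(\partial_t u_1)^2-|\nabla u_1|^2$ corresponds to the choice $F_1^{11,00}=1$, $F_1^{11,jj}=-1$ $(j=1,2,3)$, and $F_1^{11,\alpha\beta}=0$ otherwise, so that
\begin{equation*}
F_1^{11,\alpha\beta}X_\alpha X_\beta
=
X_0^2-X_1^2-X_2^2-X_3^2
=0,\,\,
X\in{\mathcal N}^{(1)},
\end{equation*}
which is exactly (\ref{assumption1}), whereas the terms $(\partial_t u_2)^2$ and $(\partial_t u_1)(\partial_t u_2)$ correspond to the pairs $(j,k)=(2,2)$ and $(1,2)$, on which (\ref{assumption1})--(\ref{assumption6}) impose no restriction for $i=1$.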
On the other hand, we need the restrictive conditions
(\ref{assumption4})--(\ref{assumption6})
in order to obtain a mildly growing bound
for the high energy estimate of $u_2$, $u_3$,
though readers might expect to benefit from
the difference of propagation speeds.
Before stating the main theorem,
we set the notation.
We use the operators
$\Omega_{jk}:=x_j\partial_k-x_k\partial_j$,
$1\leq j<k\leq 3$ and
$S:=t\partial_t+x\cdot\nabla$.
The operators
$\partial_1$, $\partial_2$, $\partial_3$,
$\Omega_{12}$, $\Omega_{23}$, $\Omega_{13}$
and $S$ are denoted by
$Z_1$, $Z_2,\dots,Z_7$, respectively.
For multi-indices $a=(a_1,a_2,\dots,a_7)$,
we set $Z^a:=Z_1^{a_1}Z_2^{a_2}\cdots Z_7^{a_7}$.
Setting
$$
E(v(t);c):=
\frac12
\int_{{\mathbb R}^3}
\bigl(
(\partial_t v(t,x))^2
+
c^2
|\nabla v(t,x)|^2
\bigr)dx
$$
for $c>0$, we define
\begin{equation}
N_{\kappa}(v(t);c)
:=
\biggl(
\sum_{|a|\leq\kappa-1}
E(Z^a v(t);c)
\biggr)^{1/2},\,\,
\kappa\in{\mathbb N}.
\end{equation}
When there is no confusion,
we abbreviate $N_{\kappa}(v(t);c)$ to $N_{\kappa}(v(t))$.
To measure the size of data
$(f,g)$ with $f=(f_1,f_2,f_3)$ and $g=(g_1,g_2,g_3)$,
we use
\begin{equation}
\|(f,g)\|_D
:=
\sum_{i=1}^3
\biggl(
\sum_{|a|=1}^4
\|
\langle x\rangle^{|a|-1}\partial_x^a f_i
\|_{L^2({\mathbb R}^3)}
+
\sum_{|a|=0}^3
\|
\langle x\rangle^{|a|}\partial_x^a g_i
\|_{L^2({\mathbb R}^3)}
\biggr).
\end{equation}
We are in a position to state the main theorem.
\begin{theorem}\label{ourmaintheorem}
Suppose $c_0\ne 1$ in \mbox{$(\ref{eq1})$}
and suppose \mbox{$(\ref{assumption1})$--$(\ref{assumption6})$}.
There exists an $\varepsilon_0>0$ such that
if the initial data
$(f_i,g_i)\in C_0^\infty({\mathbb R}^3)\times C_0^\infty({\mathbb R}^3)$
$(i=1,2,3)$ satisfy
$\|(f,g)\|_D<\varepsilon_0$,
then the Cauchy problem \mbox{$(\ref{eq1})$--$(\ref{data})$} admits
a unique global solution satisfying
\begin{align}
&
N_3(u_1(t))
\leq
C\varepsilon_0(1+t)^\delta,\,\,
N_4(u_1(t))
\leq
C\varepsilon_0(1+t)^{2\delta},\label{growth1}\\
&
N_3(u_2(t)),\,N_3(u_3(t))
\leq
C\varepsilon_0,\,\,
N_4(u_2(t)),\,N_4(u_3(t))
\leq
C\varepsilon_0(1+t)^\delta.\label{growth2}
\end{align}
Here $\delta$ is a small constant such that
$0<\delta<1/24$.
\end{theorem}
\begin{remark}
Using (\ref{ellinfty}), (\ref{2019mninequality}) with $\mu=3$, and (\ref{j3}),
we see that the solution $(u_1,u_2,u_3)$ in
Theorem \ref{ourmaintheorem} satisfies
\begin{equation}
\|\partial u_1(t)\|_{L^\infty({\mathbb R}^3)}
=
O(t^{-1+\delta}),
\,\,
\|\partial u_i(t)\|_{L^\infty({\mathbb R}^3)}
=
O(t^{-1}),\,i=2,3
\end{equation}
as $t\to\infty$.
\end{remark}
\begin{remark}
Suppose that the system (\ref{eq1}) satisfies the assumptions
(\ref{assumption1})--(\ref{assumption6}) in Theorem \ref{ourmaintheorem}.
The referee has kindly pointed out that
if we ignore the third line of (\ref{eq1}) and
remove $u_3$ from the first two lines,
then the remaining 2-component system satisfies
the weak null condition in Alinhac
\cite[see (AA)--(${\overline{\rm AA}}$)]{Al2006}.
Also, if we ignore the first line of (\ref{eq1}) and
remove $u_1$ from the last two lines,
then the remaining 2-component system satisfies
the null condition in Yokoyama \cite{Y} and
Sideris-Tu \cite{SiderisTu}.
The assumptions (\ref{assumption1})--(\ref{assumption6})
are indeed considerably weaker than those in \cite{PS},
but they still seem restrictive.
There arises a natural question as to what extent
we can weaken (\ref{assumption1})--(\ref{assumption6}).
In this regard, one might want to ask whether or not
the 3-component system (\ref{eq1}) admits a unique global solution
for small, smooth data,
if Alinhac's conditions
(AA) and (${\overline{\rm AA}}$) are satisfied
by the 2-component system
derived by removal of $u_3$ from (\ref{eq1})
and
the null condition in \cite{Y} and \cite{SiderisTu} is satisfied
by the 2-component system
derived by removal of $u_1$ from (\ref{eq1}).
This seems to the authors an interesting open question.
\end{remark}
Differently from the space-time resonance method of
Pusateri and Shatah \cite{PS},
the proof of our main theorem employs the method of
Klainerman and Sideris \cite{KS}
which is the energy method involving
the generators of the translations,
the spatial rotations,
and the dilations.
It does not involve the generators of the
hyperbolic rotations,
and has successfully led to results of
global existence of small solutions
under the null condition,
for systems of multiple-speed wave equations
\cite{SiderisTu},
and for the equation of elasticity \cite{Sideris1996}, \cite{Sideris2000}.
Unlike the system considered in Sideris and Tu \cite{SiderisTu},
the system (\ref{eq1}) is permitted to involve the term $(\partial_t u_2)^2$
or $(\partial_t u_1)(\partial_t u_2)$
in the first equation (see (\ref{modelterm2019502}) above),
and the presence of terms violating the null condition
causes a weaker decay of $\partial u_1$ as $t\to\infty$.
Therefore, we must enhance the discussion in \cite{SiderisTu},
although we basically follow their argument
based on the two-energy method.
We recall that the proof of global existence in \cite{SiderisTu}
employed the ``high energy'' estimate
and the ``low energy'' estimate,
allowing the bound in the former estimate to grow mildly in time,
and establishing the uniform (in time) bound in the latter estimate
by virtue of the null condition and the difference of
propagation speeds.
We note that
because of the problem of ``loss of derivatives''
caused by the use of the standard estimation lemma
for the null forms (see \cite[Lemma 5.1]{SiderisTu}),
it is only for the estimate of
the {\it low} energy that
the null condition plays a role in \cite{SiderisTu}.
In the present case,
owing to the weaker decay of $\partial u_1$,
even a mildly growing bound in the high energy estimate is far from trivial.
A similar difficulty already occurred in the proof of
Alinhac \cite{Al2001} for global existence of small solutions
to the null-form quasilinear (scalar) wave equations in {\it two} space dimensions.
(Recall that the time decay rate of solutions in two space dimensions is worse
than in three space dimensions.)
Creating the ghost weight energy method,
he succeeded in employing the null condition
for the purpose of establishing
a mildly growing bound in the {\it high} energy estimate.
(See also \cite{ZhaDCDS} for this matter.)
Alinhac set up his remarkable method by relying upon the generators of the hyperbolic rotations, and we note that
his technique, combined with the method of Klainerman and Sideris,
remains useful without such operators.
See \cite{ZhaDCDS}, \cite{Zha}, and \cite{HZ2019}.
In order to obtain such an estimate
for the high energy,
we can therefore rely upon the ghost weight technique
and utilize a certain $L^2$ space-time weighted norm
for the special derivatives $\partial_j u_1+(x_j/|x|)\partial_t u_1$
along with the estimation lemma (see Lemma \ref{estimationlemma} below),
when handling such a null-form nonlinear term as
$\bigl(
(\partial_t u_1)^2-|\nabla u_1|^2
\bigr)$
(see (\ref{modelterm2019502}) above)
on the region ``far from the origin'', that is,
$\{x\in{\mathbb R}^3\,:\;|x|>(1+t)/2\}$.
Actually, this way of handling the null-form nonlinear term
$\bigl(
(\partial_t u_1)^2-|\nabla u_1|^2
\bigr)$
is effective only on the region ``far from the origin'',
because in the present paper,
the $L^2$ space-time weighted norm for the special derivatives
is employed in combination with the trace-type inequality
with the weight $r^{1-\eta}\langle t-r\rangle^{(1/2)+\eta}$
(see (\ref{interp}) with $\theta=(1/2)-\eta$ below, here $\eta>0$ is small enough)
and the factor $r^{1-\eta}$ no longer yields
the decay factor $t^{-1+\eta}$ on the region ``inside the cone''
$\{x\in{\mathbb R}^3\,:\;|x|<(1+t)/2\}$.
As in \cite{SiderisTu}, inside the cone we therefore give up benefiting from
the special structure that the null-form nonlinear terms enjoy,
and we regard them simply as products of the derivatives,
when considering the high energy estimate of $u_1$.
Because of the growth of the bound even in the low energy estimate
for $u_1$, we then proceed differently from \cite{SiderisTu}.
Namely, we make use of the Keel-Smith-Sogge type $L^2$ weighted norm
for usual derivatives (see Lemma \ref{ksstype} below)
together with the trace-type inequality with weight
$r^{1/2}\langle t-r\rangle$
(see (\ref{interp}) with $\theta=0$ below).
See, e.g., (\ref{highenergyj112019july25}) below.
In this way, such a null-form nonlinear term as
$\bigl(
(\partial_t u_1)^2-|\nabla u_1|^2
\bigr)$
is no longer the hurdle
to establishing a mildly growing bound in the high energy estimate
of $u_1$.
Because of the weaker decay of $\partial u_1$
and the mildly growing bound in the high energy estimate
of $u_2$ (see (\ref{growth2}) above),
the presence of such a term as
$(\partial_t u_1)(\partial_t u_2)$
also causes another difficulty in establishing
a mildly growing bound in the energy estimate of $u_1$.
This is the reason why we use different growth rates
for the high energy and the low energy of $u_1$
(see the factors $(1+t)^{2\delta}$ and $(1+t)^\delta$ in
(\ref{growth1}) above)
for the purpose of closing the argument.
See (\ref{2019july191626})--(\ref{2019july191656}) below.
This paper is organized as follows.
In the next section, we first recall some special properties that
the null-form nonlinear terms enjoy,
and then we recall several key inequalities
that play an important role in our arguments.
Section \ref{sectionmmm} is devoted to obtaining bounds for
certain weighted $L^2({\mathbb R}^3)$-norms of
the second or higher-order derivatives of solutions.
We carry out the energy estimate
and the $L^2$ weighted space-time estimate
in Sections \ref{sectionenergy} and \ref{l2weighted},
using the ghost weight method of Alinhac
and the Keel-Smith-Sogge type estimate,
respectively.
In the final section, we complete the proof of
Theorem \ref{ourmaintheorem} by using the method of continuity.
{\it Acknowledgments.} The problem of global existence for
systems of multiple-speed wave equations
violating the standard null condition was
suggested by Thomas C.\,Sideris at Tohoku University in July, 2017,
for which the authors are very grateful to him.
Special thanks also go to the referee for a valuable comment concerning
the weak null condition in \cite{Al2006}.
\section{Preliminaries}
We need the commutation relations.
Let $[\cdot,\cdot]$ be the commutator:
$[A,B]:=AB-BA$.
It is easy to verify that
\begin{eqnarray}
& &
[Z_i,\Box_c]=0\,\,\,\mbox{for $i=1,\dots,6$},\,\,\,
[S,\Box_c]=-2\Box_c,
\label{eqn:comm1}\\
& &
[Z_j,Z_k]=\sum_{i=1}^7 C^{j,k}_i Z_i,\,\,\,
j,\,k=1,\dots,7,
\label{eqn:comm2}\\
& &
[Z_j,\partial_k]
=
\sum_{i=1}^3 C^{j,k}_i\partial_i,\,\,\,j=1,\dots,7,\,\,k=1,2,3,
\label{eqn:comm3}\\
& &
[Z_j,\partial_t]=0,\,j=1,\dots,6,\quad [S,\partial_t]=-\partial_t
\label{eqn:comm4}.
\end{eqnarray}
Here $\Box_c:=\partial_t^2-c^2\Delta$, and
$C^{j,k}_i$ denotes a constant depending on
$i$, $j$, and $k$.
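For instance, the relation $[S,\Box_c]=-2\Box_c$ in (\ref{eqn:comm1}) follows from $[S,\partial_\alpha]=-\partial_\alpha$ $(\alpha=0,1,2,3)$: indeed,
$[S,\partial_\alpha^2]=[S,\partial_\alpha]\partial_\alpha+\partial_\alpha[S,\partial_\alpha]=-2\partial_\alpha^2$,
and hence
$[S,\Box_c]=[S,\partial_t^2]-c^2[S,\Delta]=-2\partial_t^2+2c^2\Delta=-2\Box_c$.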
The next lemma states that the null form is preserved
under the differentiation.
Recall the definition of ${\mathcal N}^{(c)}$
(see (\ref{definitionnc})).
\begin{lemma}\label{nullpreserved}
Let $c>0$. Suppose that $\{H^{\alpha\beta}\}$ satisfies
\begin{equation}\label{nullformlemma}
H^{\alpha\beta}X_\alpha X_\beta=0
\mbox{\,\,for any $X\in{\mathcal N}^{(c)}$}.
\end{equation}
For any $Z_i$ $(i=1,\dots,7)$, the equality
\begin{align}
Z_i&\bigl(H^{\alpha\beta}
(\partial_\alpha v)(\partial_\beta w)\bigr)\\
&=
H^{\alpha\beta}
(\partial_\alpha Z_i v)
(\partial_\beta w)
+
H^{\alpha\beta}
(\partial_\alpha v)
(\partial_\beta Z_i w)
+
{\tilde H}_i^{\alpha\beta}
(\partial_\alpha v)
(\partial_\beta w)\nonumber
\end{align}
holds with the new coefficients
$\{{\tilde H}_i^{\alpha\beta}\}$
also satisfying $(\ref{nullformlemma})$.
\end{lemma}
See, e.g., \cite[pp.\,91--92]{Al2010} for the proof.
It is possible to show the following lemma
essentially in the same way as in \cite[pp.\,90--91]{Al2010}.
\begin{lemma}\label{estimationlemma}
Suppose that
$\{H^{\alpha\beta}\}$ satisfies $(\ref{nullformlemma})$
for some $c>0$.
With the same $c$ as in $(\ref{nullformlemma})$,
we have for smooth functions $v(t,x)$ and $w(t,x)$
\begin{equation}\label{htd20190718}
|
H^{\alpha\beta}
(\partial_\alpha v)
(\partial_\beta w)
|
\leq
C
\bigl(
|T^{(c)} v|
|\partial w|
+
|\partial v|
|T^{(c)} w|
\bigr).
\end{equation}
\end{lemma}
Here, and in the following, we use the notation
\begin{equation}\label{2019Nov16OK}
|T^{(c)}v|
:=
\biggl(
\sum_{k=1}^3
|T_k^{(c)} v|^2
\biggr)^{1/2},\quad
T^{(c)}_k:=c\partial_k+(x_k/|x|)\partial_t.
\end{equation}
Together with (\ref{htd20190718}),
we will later exploit the fact that
for local solutions $u$,
the special derivatives $T^{(c)}_i u$ have
better space-time $L^2$ integrability,
in addition to
improved time decay property of their $L^\infty({\mathbb R}^3)$ norms
as shown in the following lemma.
\begin{lemma}[Lemma 2.2 of \cite{Zha}]\label{lemma22ofzha}
Let $c>0$. The inequality
\begin{equation}
|T^{(c)}v(t,x)|
\leq
C
\langle t\rangle^{-1}
\biggl(
|\partial_t v(t,x)|
+
\sum_{i=1}^7
|Z_i v(t,x)|
+
\langle ct-r\rangle|\partial_x v(t,x)|
\biggr)
\end{equation}
holds for smooth functions $v(t,x)$.
\end{lemma}
Lemma \ref{lemma22ofzha} is a direct consequence of
identities such as
\begin{equation}
T^{(c)}_1
=
\frac{1}{t}
\biggl(
\frac{x_1}{|x|}S
-
\frac{x_2}{|x|}\Omega_{12}
-
\frac{x_3}{|x|}\Omega_{13}
+
(ct-r)\partial_1
\biggr).
\end{equation}
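Indeed, expanding the right-hand side, the terms containing $\partial_2$ and $\partial_3$ cancel, the coefficient of $\partial_t$ equals $x_1/r$, and the coefficient of $\partial_1$ equals $t^{-1}\bigl((x_1^2+x_2^2+x_3^2)/r+ct-r\bigr)=c$, so the right-hand side is $c\partial_1+(x_1/|x|)\partial_t=T^{(c)}_1$.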
The following lemma is concerned with Sobolev-type
or trace-type inequalities.
With $c>0$, the auxiliary norms
\begin{align}
M_2(v(t);c)
&=
\sum_{{0\leq\alpha\leq 3}\atop{1\leq j\leq 3}}
\|\langle ct-|x|\rangle
\partial_{\alpha j}^2 v(t)\|_{L^2({\Bbb R}^3)},\label{ksm2}\\
M_\mu(v(t);c)
&=\sum_{|a|\leq \mu-2}M_2(Z^a v(t);c),\,\,\mu=3,4,
\end{align}
which appear in the following discussion, play an intermediate role.
We remark that $\partial_t^2$ is absent in the right-hand side of (\ref{ksm2}) above.
We also use the notation
\begin{align}
&
\|v\|_{L_r^\infty L_\omega^p({\mathbb R}^3)}
:=
\sup_{r>0}
\|v(r\cdot)\|_{L^p(S^2)},\\
&
\|v\|_{L_r^2 L_\omega^p({\mathbb R}^3)}
:=
\biggl(
\int_0^\infty \|v(r\cdot)\|_{L^p(S^2)}^2 r^2dr
\biggr)^{1/2}.
\end{align}
\begin{lemma}\label{someinequalities2019aug17}
Let $c>0$.
Suppose that $v$ decays sufficiently fast as $|x|\to\infty$.
The following inequalities hold for $\alpha=0,1,2,3$
\begin{align}
&
\|
\langle ct-r\rangle\partial_\alpha v(t)
\|_{L^6({\mathbb R}^3)}
\leq
C
\bigl(
N_1(v(t))
+
M_2(v(t);c)
\bigr),\label{ell6}\\
&
\langle ct-r\rangle
|\partial_\alpha v(t,x)|
\leq
C
\biggl(
\sum_{|a|\leq 1}N_1(\partial_x^a v(t))
+
\sum_{|a|\leq 1}M_2(\partial_x^a v(t);c)
\biggr).\label{ellinfty}
\end{align}
Moreover, we have
\begin{align}
&
\|r\partial_\alpha v(t)\|_{L_r^\infty L_\omega^4({\mathbb R}^3)}
\leq
C
\sum_{|a|+|b|\leq 1}
\|\partial\partial_x^a\Omega^b v(t)\|_{L^2({\mathbb R}^3)},\label{j2}
\\
&
\langle r\rangle
|\partial_\alpha v(t,x)|
\leq
C\sum_{|a|+|b|\leq 2}
\|\partial\partial_x^a\Omega^b v(t)\|_{L^2({\mathbb R}^3)}.\label{j3}
\end{align}
\end{lemma}
Here, we have used the notation
$\Omega^b:=\Omega_{12}^{b_1}\Omega_{23}^{b_2}\Omega_{13}^{b_3}$
for multi-indices $b=(b_1,b_2,b_3)$.
These inequalities have already been employed in the literature.
For the proof of (\ref{ell6}), see \cite[(2.10)]{H2016}.
For the proof of (\ref{ellinfty}),
see \cite[(37)]{Zha}, \cite[(2.13)]{H2016}.
See \cite[(3.19)]{Sideris2000} for the proof of (\ref{j2}).
Finally, combining \cite[(3.14b)]{Sideris2000}
with the Sobolev embedding $H^2({\mathbb R}^3)\hookrightarrow
L^\infty({\mathbb R}^3)$,
we obtain (\ref{j3}).
We also need the following inequality.
\begin{lemma}
Let $c>0$ and $\alpha=0,1,2,3$.
Suppose that $v$ decays sufficiently fast as $|x|\to\infty$.
For any $\theta$ with $0\leq \theta\leq 1/2$,
there exists a constant $C>0$ such that the inequality
\begin{equation}\label{interp}
r^{(1/2)+\theta}
\langle ct-r\rangle^{1-\theta}
\|\partial_\alpha v(t,r\cdot)\|_{L^4(S^2)}
\leq
C
\biggl(
\sum_{|a|\leq 1}N_1(\Omega^a v(t))
+
M_2(v(t);c)
\biggr)
\end{equation}
holds.
\end{lemma}
Following the proof of \cite[(3.19)]{Sideris2000},
we are able to obtain this inequality for $\theta=1/2$.
The next lemma with
$v=\langle ct-r\rangle\partial_\alpha w$ immediately
yields (\ref{interp}) for $\theta=0$.
We follow the idea in Section 2 of \cite{MNS-JJM2005} and
obtain (\ref{interp}) for $\theta\in (0,1/2)$ by interpolation.
The following trace-type inequality also plays an important role in our proof;
for its proof, see, e.g., \cite[(3.16)]{Sideris2000}.
\begin{lemma}\label{traceinequality2019aug17}
There exists a positive constant $C$ such that
if $v=v(x)$ decays sufficiently fast as $|x|\to\infty$,
then the inequality
\begin{equation}
r^{1/2}
\|v(r\cdot)\|_{L^4(S^2)}
\leq
C\|\nabla v\|_{L^2({\mathbb R}^3)}
\label{eqn:hoshiro}
\end{equation}
holds.
\end{lemma}
Differently from the analysis in Sideris and Tu \cite{SiderisTu},
we need the space-time $L^2$ estimate
because of the growth of the bound not only in the high energy estimate
but also in the low energy estimate.
The following one corresponds to the special case of
\cite[Theorem 2.1]{HWY2012Adv}.
\begin{lemma}\label{ksstype}
Let $c>0$ and $0<\mu<1/2$. Then, there exists a positive constant $C$
depending on $c$ and $\mu$ such that
the inequality
\begin{align}\label{l2spacetime}
(1&+T)^{-2\mu}
\left(
\|
r^{-(3/2)+\mu}w
\|^2_{L^2((0,T)\times{\mathbb R}^3)}
+
\|
r^{-(1/2)+\mu}
\partial w
\|^2_{L^2((0,T)\times{\mathbb R}^3)}
\right)
\\
&
\leq
C\|\partial w(0,\cdot)\|^2_{L^2({\mathbb R}^3)}
+C
\int_0^T\!\!\!\int_{{\mathbb R}^3}
\left(
|\partial w||\Box_c w|
+
\frac{|w||\Box_c w|}{r^{1-2\mu}\langle r\rangle^{2\mu}}
\right)dxdt\nonumber
\end{align}
holds for smooth functions $w(t,x)$
compactly supported in $x$ for any fixed time.
\end{lemma}
See also Appendix of \cite{Ster} and \cite{MS-SIAM} for earlier and related estimates.
At first sight, the above estimate may appear useless
for the proof of global existence,
because of the presence of the factor $(1+T)^{-2\mu}$.
Owing to the useful idea of dyadic decomposition
of the time interval \cite[p.\,363]{Sogge2003}
(see also (\ref{2019aug171715}) below),
the estimate (\ref{l2spacetime}) actually works effectively for the proof of
global existence.
The following was proved by Klainerman and Sideris.
\begin{lemma}[Klainerman-Sideris inequality \cite{KS}]\label{lemmaks}
Let $c>0$. There exists a constant $C>0$ such that
the inequality
\begin{equation}\label{KSineq}
M_2(v(t);c)
\leq
C
\bigl(
N_2(v(t))
+
t
\|\Box_c v(t)\|_{L^2({\mathbb R}^3)}
\bigr)
\end{equation}
holds for smooth functions $v=v(t,x)$
decaying sufficiently fast as $|x|\to\infty$.
\end{lemma}
\section{Bound for $M_\mu(u_1;1)$, $M_\mu(u_2;1)$,
and $M_\mu(u_3;c_0)$}\label{sectionmmm}
We know that for any data
$(f_i,g_i)\in C_0^\infty({\mathbb R}^3)\times C_0^\infty({\mathbb R}^3)$
$(i=1,2,3)$,
the Cauchy problem (\ref{eq1})--(\ref{data}) admits
a unique local (in time) smooth solution which is
compactly supported in $x$ at any fixed time
by virtue of finite speed of propagation.
This section is devoted to the bound for
$M_\mu(u_1;1)$, $M_\mu(u_2;1)$, and $M_\mu(u_3;c_0)$
$(\mu=3,4)$.
Though much influenced by \cite{SiderisTu},
our strategy for establishing their bounds is similar to
the way adopted in \cite[Section 3]{HZ2019}.
In the discussion below,
we use the following quantity for the local solutions
$u=(u_1,u_2,u_3)$:
\begin{align}\label{<<u>>}
\langle&\!\langle u(t)\rangle\!\rangle\\
&
:=
\langle t\rangle^{-\delta}
\|
r\langle t-r\rangle^{1/2}
\partial u_1(t)
\|_{L^\infty({\mathbb R}^3)}
+
\sum_{|a|\leq 1}
\langle t\rangle^{-2\delta}
\|
r\langle t-r\rangle^{1/2}
\partial Z^a u_1(t)
\|_{L^\infty({\mathbb R}^3)}\nonumber\\
&
+
\|
r\langle t-r\rangle^{1/2}
\partial u_2(t)
\|_{L^\infty({\mathbb R}^3)}
+
\sum_{|a|\leq 1}
\langle t\rangle^{-\delta}
\|
r\langle t-r\rangle^{1/2}
\partial Z^a u_2(t)
\|_{L^\infty({\mathbb R}^3)}\nonumber\\
&
+
\|
r\langle c_0t-r\rangle^{1/2}
\partial u_3(t)
\|_{L^\infty({\mathbb R}^3)}
+
\sum_{|a|\leq 1}
\langle t\rangle^{-\delta}
\|
r\langle c_0t-r\rangle^{1/2}
\partial Z^a u_3(t)
\|_{L^\infty({\mathbb R}^3)}\nonumber\\
&
+
\langle t\rangle^{-\delta}
\sum_{|a|\leq 1}
\biggl(
\|
r^{1/2}
\langle t-r\rangle
\partial Z^a u_1(t)
\|_{L_r^\infty L_\omega^4}
+
\sum_{i=1}^7
\|
r^{1/2}Z_i Z^a u_1(t)
\|_{L_r^\infty L_\omega^4}
\biggr)\nonumber\\
&
+
\langle t\rangle^{-2\delta}
\sum_{|a|\leq 2}
\biggl(
\|
r^{1/2}
\langle t-r\rangle
\partial Z^a u_1(t)
\|_{L_r^\infty L_\omega^4}
+
\sum_{i=1}^7
\|
r^{1/2}Z_i Z^a u_1(t)
\|_{L_r^\infty L_\omega^4}
\biggr)
\nonumber\\
&
+
\sum_{|a|\leq 1}
\biggl(
\|
r^{1/2}
\langle t-r\rangle
\partial Z^a u_2(t)
\|_{L_r^\infty L_\omega^4}
+
\sum_{i=1}^7
\|
r^{1/2}Z_i Z^a u_2(t)
\|_{L_r^\infty L_\omega^4}
\biggr)
\nonumber\\
&
+
\langle t\rangle^{-\delta}
\sum_{|a|\leq 2}
\biggl(
\|
r^{1/2}
\langle t-r\rangle
\partial Z^a u_2(t)
\|_{L_r^\infty L_\omega^4}
+
\sum_{i=1}^7
\|
r^{1/2}Z_i Z^a u_2(t)
\|_{L_r^\infty L_\omega^4}
\biggr)
\nonumber\\
&
+
\sum_{|a|\leq 1}
\biggl(
\|
r^{1/2}
\langle c_0t-r\rangle
\partial Z^a u_3(t)
\|_{L_r^\infty L_\omega^4}
+
\sum_{i=1}^7
\|
r^{1/2}Z_i Z^a u_3(t)
\|_{L_r^\infty L_\omega^4}
\biggr)
\nonumber\\
&
+
\langle t\rangle^{-\delta}
\sum_{|a|\leq 2}
\biggl(
\|
r^{1/2}
\langle c_0t-r\rangle
\partial Z^a u_3(t)
\|_{L_r^\infty L_\omega^4}
+
\sum_{i=1}^7
\|
r^{1/2}Z_i Z^a u_3(t)
\|_{L_r^\infty L_\omega^4}
\biggr)
\nonumber\\
&
+
\sum_{|a|\leq 1}
\bigl(
\langle t\rangle^{-\delta}
\|
r\partial Z^a u_1(t)
\|_{L_r^\infty L_\omega^4}
+
\|
r\partial Z^a u_2(t)
\|_{L_r^\infty L_\omega^4}
+
\|
r\partial Z^a u_3(t)
\|_{L_r^\infty L_\omega^4}
\bigr)\nonumber\\
&
+
\langle t\rangle^{-\delta}
\|
\langle t-r\rangle
\partial u_1(t)
\|_{L^\infty({\mathbb R}^3)}
+
\|
\langle t-r\rangle
\partial u_2(t)
\|_{L^\infty({\mathbb R}^3)}
+
\|
\langle c_0t-r\rangle
\partial u_3(t)
\|_{L^\infty({\mathbb R}^3)}\nonumber\\
&
+
\langle t\rangle^{-\delta}
\sum_{|a|\leq 1}
\bigl(
\langle t\rangle^{-\delta}
\|
\langle t-r\rangle
\partial Z^a u_1(t)
\|_{L^\infty({\mathbb R}^3)}
+
\|
\langle t-r\rangle
\partial Z^a u_2(t)
\|_{L^\infty({\mathbb R}^3)}\nonumber\\
&
\hspace{2.3cm}
+
\|
\langle c_0t-r\rangle
\partial Z^a u_3(t)
\|_{L^\infty({\mathbb R}^3)}
\bigr)\nonumber\\
&
+
\sum_{|a|\leq 1}
\bigl(
\langle t\rangle^{-\delta}
\|
\langle t-r\rangle
\partial Z^a u_1(t)
\|_{L^6({\mathbb R}^3)}
+
\|
\langle t-r\rangle
\partial Z^a u_2(t)
\|_{L^6({\mathbb R}^3)}\nonumber\\
&
\hspace{1.3cm}
+
\|
\langle c_0t-r\rangle
\partial Z^a u_3(t)
\|_{L^6({\mathbb R}^3)}
\bigr)
.\nonumber
\end{align}
Using the constant $\delta$ appearing in Theorem \ref{ourmaintheorem},
we also set
\begin{align}
&
{\mathcal M}_\kappa(u(t))
:=
\langle t\rangle^{-\delta}
M_\kappa(u_1(t);1)
+
M_\kappa(u_2(t);1)
+
M_\kappa(u_3(t);c_0),\\
&
{\mathcal N}_\kappa(u(t))
:=
\langle t\rangle^{-\delta}
N_\kappa(u_1(t))
+
N_\kappa(u_2(t))
+
N_\kappa(u_3(t)).
\end{align}
The purpose of this section is to prove the following:
\begin{proposition}\label{2019june25mnineq}
Suppose
\begin{align}
&F_1^{11,\alpha\beta}X_\alpha X_\beta=0,\,
F_2^{11,\alpha\beta}X_\alpha X_\beta
=F_2^{12,\alpha\beta}X_\alpha X_\beta=0,\\
&
\mbox{{\rm and}}\,\,
F_3^{11,\alpha\beta}X_\alpha X_\beta
=F_3^{12,\alpha\beta}X_\alpha X_\beta=0\nonumber
\end{align}
for any $X\in{\mathcal N}^{(1)}$. For $\mu=3,4$, the inequality
\begin{align}\label{mninequality2019aug16}
{\mathcal M}_\mu(u(t))
\leq&
C_{KS}{\mathcal N}_\mu(u(t))
+
C_{31}
\langle\!\langle u(t)\rangle\!\rangle
{\mathcal N}_\mu(u(t))\\
&
+
C_{32}\langle\!\langle u(t)\rangle\!\rangle^2
{\mathcal N}_3(u(t))
+
C_{33}\langle\!\langle u(t)\rangle\!\rangle
{\mathcal M}_\mu(u(t))\nonumber
\end{align}
holds.
Here, $C_{KS}$, $C_{31}$, $C_{32}$, and $C_{33}$
are positive constants.
\end{proposition}
The proof of this proposition is carried out
in the following three subsections.
\subsection{Bound for $M_\mu(u_1;1)$}
We have for $|a|\leq\mu-2$, $\mu=3,4$
\begin{align}\label{equality31}
\Box_1 Z^au_1
=&
\sum\!{}^{'}{\tilde F}_1^{11,\alpha\beta}
(\partial_\alpha Z^{a'}u_1)
(\partial_\beta Z^{a''}u_1)
+
\sum\!{}^{''}{\tilde F}_1^{jk,\alpha\beta}
(\partial_\alpha Z^{a'}u_j)
(\partial_\beta Z^{a''}u_k)\\
&
+
Z^a C_1(\partial u_1,\partial u_2,\partial u_3),\nonumber
\end{align}
where the new coefficients
${\tilde F}_1^{11,\alpha\beta}$ and
${\tilde F}_1^{jk,\alpha\beta}$
(${\tilde F}_1^{jk,\alpha\beta}=0$ if $j>k$)
actually depend also on $a'$ and $a''$.
By $\sum\!{}^{'}$,
we mean the summation over all
$a'$ and $a''$ such that
$|a'|+|a''|\leq |a|$.
By $\sum\!{}^{''}$,
we mean the summation over all such $a'$, $a''$
and all $j$ and $k$ such that $(j,k)\ne (1,1)$;
for the second term on the right-hand side above,
the summation convention
only over the repeated Greek letters $\alpha$ and $\beta$
has been used.
By Lemma \ref{nullpreserved}, we know
\begin{equation}\label{newnull31}
{\tilde F}_1^{11,\alpha\beta}X_\alpha X_\beta
=0,
\,\,
X\in{\mathcal N}^{(1)}.
\end{equation}
We apply Lemma \ref{lemmaks} to
$v=Z^a u_1$, $|a|\leq\mu-2$, $\mu=3,4$.
Taking (\ref{KSineq}) into account,
we need to bound
\begin{align}\label{tj1tj2}
t&\sum\!{}^{'}
\|
{\tilde F}_1^{11,\alpha\beta}
(\partial_\alpha Z^{a'}u_1)
(\partial_\beta Z^{a''}u_1)
\|_{L^2({\mathbb R}^3)}\\
&+
t\sum\!{}^{''}
\|
(\partial Z^{a'}u_j)
(\partial Z^{a''}u_k)
\|_{L^2({\mathbb R}^3)}\nonumber
\end{align}
and
\begin{equation}\label{2019june25cubic1}
t\sum_{i,j,k}\sum\!{}^{'}
\|\partial u_i(t)\|_{L^\infty({\mathbb R}^3)}
\|
(\partial Z^{a'}u_j)
(\partial Z^{a''}u_k)
\|_{L^2({\mathbb R}^3)}.
\end{equation}
In the following discussion,
we utilize the characteristic function $\chi_1$
of the set
$\{x\in{\mathbb R}^3:|x|<(c_*/2)t+1\}$,
where $c_*:=\min\{c_0,1\}$.
We set $\chi_2:=1-\chi_1$.
Just for simplicity,
we omit the dependence of $\chi_1$ and $\chi_2$ on $t$.
Owing to (\ref{<<u>>}), we get
\begin{align}\label{2019may261}
\|&
\chi_1
{\tilde F}^{11,\alpha\beta}_1
(\partial_\alpha Z^{a'}u_1)
(\partial_\beta Z^{a''}u_1)
\|_{L^2({\mathbb R}^3)}\\
&
\leq
C
\|
\chi_1
(\partial Z^{a'}u_1)
(\partial Z^{a''}u_1)
\|_{L^2({\mathbb R}^3)}\nonumber\\
&
\leq
C
\langle t\rangle^{-3/2}
\|
r\langle t-r\rangle^{1/2}
\partial Z^{a'}u_1
\|_{L^\infty({\mathbb R}^3)}
\|
r^{-1}\langle t-r\rangle\partial Z^{a''}u_1
\|_{L^2({\mathbb R}^3)}\nonumber\\
&
\leq
C\langle t\rangle^{-(3/2)+2\delta}
\langle\!\langle u(t)\rangle\!\rangle
(
N_{\mu-1}(u_1(t))
+
M_\mu(u_1(t);1)
).\nonumber
\end{align}
Here we have used the Hardy inequality, as in \cite[(6.27)]{H2004}.
Also, we have assumed $|a'|\leq |a''|$ because the other case can be
handled similarly. Since $|a'|\leq |a''|$ and $|a'|+|a''|\leq |a|\leq\mu-2$ $(\mu=3,4)$,
we have used the fact that $|a'|\leq 1$.
Since the property (\ref{newnull31}) has played no role above,
we also obtain by assuming $|a'|\leq |a''|$
without loss of generality
\begin{align}\label{2019may262}
\|&
\chi_1
(\partial Z^{a'}u_j)
(\partial Z^{a''}u_k)
\|_{L^2({\mathbb R}^3)}\\
&
\leq
C
\langle t\rangle^{-3/2}
\|
r\langle c_jt-r\rangle^{1/2}
\partial_\alpha Z^{a'}u_j
\|_{L^\infty({\mathbb R}^3)}
\|
r^{-1}\langle c_kt-r\rangle\partial_\beta Z^{a''}u_k
\|_{L^2({\mathbb R}^3)}\nonumber\\
&
\leq
C\langle t\rangle^{-(3/2)+2\delta}
\langle\!\langle u(t)\rangle\!\rangle
\sum_{k=1}^3
(
N_{\mu-1}(u_k(t))
+
M_\mu(u_k(t);c_k)
).\nonumber
\end{align}
Here, and in the following as well,
by $c_k$
we mean $c_1=c_2=1$, $c_3=c_0$
(see (\ref{eq1})).
Let us turn our attention to
$|x|>(c_*/2)t+1$.
Using Lemmas \ref{estimationlemma}--\ref{lemma22ofzha}
together with (\ref{newnull31}),
we obtain
\begin{align}\label{2019may263}
\sum&\!{}^{'}
\|
\chi_2
{\tilde F}^{11,\alpha\beta}_1
(\partial_\alpha Z^{a'}u_1)
(\partial_\beta Z^{a''}u_1)
\|_{L^2({\mathbb R}^3)}\\
&
\leq
C\sum_{|a'|+|a''|\leq \mu-2}
\bigl(
\|
\chi_2
|T^{(1)}Z^{a'}u_1|
|\partial Z^{a''}u_1|
\|_{L^2({\mathbb R}^3)}\nonumber\\
&
\hspace{2cm}
+
\|
\chi_2
|\partial Z^{a'}u_1|
|T^{(1)} Z^{a''}u_1|
\|_{L^2({\mathbb R}^3)}
\bigr)\nonumber\\
&
\leq
C\sum_{|a'|+|a''|\leq \mu-2}
\langle t\rangle^{-3/2}
\biggl(
\|
r^{1/2}\partial_t Z^{a'} u_1
\|_{L_r^\infty L_\omega^4}
+
\sum_{i=1}^7
\|
r^{1/2}Z_i Z^{a'} u_1
\|_{L_r^\infty L_\omega^4}\nonumber\\
&
\hspace{2cm}
+
\|
r^{1/2}\langle t-r\rangle\partial_x Z^{a'} u_1
\|_{L_r^\infty L_\omega^4}
\biggr)
\|
\partial Z^{a''} u_1
\|_{L_r^2 L_\omega^4}\nonumber\\
&
\leq
C\langle t\rangle^{-(3/2)+2\delta}
\langle\!\langle u(t)\rangle\!\rangle
N_\mu(u_1(t)).\nonumber
\end{align}
When dealing with
$\|\chi_2(\partial Z^{a'}u_j)(\partial Z^{a''}u_k)\|_{L^2}$
$(1\leq j\leq k\leq 3,\,(j,k)\ne (1,1))$,
we obviously know $k=2$ or $k=3$.
When $|a'|\leq 2$ and $|a''|=0$,
we get
\begin{align}\label{2019may264}
\|&
\chi_2(\partial Z^{a'}u_j)(\partial u_k)
\|_{L^2({\mathbb R}^3)}\\
&
\leq
C
\langle t\rangle^{-1}
\|\partial Z^{a'}u_j\|_{L^2({\mathbb R}^3)}
\|
r\partial u_k
\|_{L^\infty({\mathbb R}^3)}
\leq
C
\langle t\rangle^{-1+\delta}
\langle\!\langle u(t)\rangle\!\rangle
{\mathcal N}_3(u(t)).\nonumber
\end{align}
When $|a'|\leq 1$ and $|a''|\leq 1$,
we get
\begin{align}\label{2019may265}
\|&
\chi_2(\partial Z^{a'}u_j)(\partial Z^{a''}u_k)
\|_{L^2({\mathbb R}^3)}\\
&
\leq
C
\langle t\rangle^{-1}
\|r\partial Z^{a'}u_j\|_{L_r^\infty L_\omega^4}
\|
\partial Z^{a''}u_k
\|_{L_r^2 L_\omega^4}
\nonumber\\
&
\leq
C
\langle t\rangle^{-1+\delta}
\langle\!\langle u(t)\rangle\!\rangle
\bigl(
N_3(u_2(t))
+
N_3(u_3(t))
\bigr).\nonumber
\end{align}
When $|a'|=0$ and $|a''|\leq 2$, we get
\begin{align}\label{2019may266}
\|&
\chi_2(\partial u_j)(\partial Z^{a''}u_k)
\|_{L^2({\mathbb R}^3)}\\
&
\leq
C
\langle t\rangle^{-1}
\|
r\partial u_j
\|_{L^\infty({\mathbb R}^3)}
\|
\partial Z^{a''}u_k
\|_{L^2({\mathbb R}^3)}\nonumber\\
&
\leq
C
\langle t\rangle^{-1+\delta}
\langle\!\langle u(t)\rangle\!\rangle
\bigl(
N_3(u_2(t))
+
N_3(u_3(t))
\bigr).\nonumber
\end{align}
As for (\ref{2019june25cubic1}),
it is easy to get for $|a|\leq \mu-2$, $\mu=3,4$
\begin{equation}\label{2019june25ineqcubic1}
t\sum_{i,j,k}\sum\!{}^{'}
\|\partial u_i(t)\|_{L^\infty({\mathbb R}^3)}
\|
(\partial Z^{a'}u_j)
(\partial Z^{a''}u_k)
\|_{L^2({\mathbb R}^3)}
\leq
C
\langle\!\langle u(t)\rangle\!\rangle^2
{\mathcal N}_{\mu-1}(u(t)).
\end{equation}
Summing up,
we have obtained for $\mu=3,4$
\begin{align}\label{2019june25boundm1}
\langle&t\rangle^{-\delta}
M_\mu(u_1(t);1)\\
&
\leq
C\langle t\rangle^{-\delta}
N_\mu(u_1(t))\nonumber\\
&
+
C\langle t\rangle^{-(1/2)+\delta}
\langle\!\langle u(t)\rangle\!\rangle
\sum_{k=1}^3
\bigl(
N_{\mu-1}(u_k(t))
+
M_\mu(u_k(t);c_k)
\bigr)\nonumber\\
&
+
C\langle t\rangle^{-(1/2)+\delta}
\langle\!\langle u(t)\rangle\!\rangle
N_\mu(u_1(t))
+
C
\bigl(
\langle\!\langle u(t)\rangle\!\rangle
+
\langle\!\langle u(t)\rangle\!\rangle^2
\bigr)
{\mathcal N}_3(u(t)).\nonumber
\end{align}
\subsection{Bound for $M_\mu(u_2;1)$}
As in (\ref{equality31}), we have
\begin{align}\label{equality32}
\Box_1 Z^a u_2
=&
\sum_{{1\leq j\leq k\leq 3}\atop{(j,k)\ne (1,3)}}
\sum\!{}^{'}{\tilde F}_2^{jk,\alpha\beta}
(\partial_\alpha Z^{a'}u_j)
(\partial_\beta Z^{a''}u_k)\\
&
+Z^a C_2(\partial u_1,\partial u_2,\partial u_3),\nonumber
\end{align}
where the new coefficients ${\tilde F}_2^{jk,\alpha\beta}$
actually depend also on $a'$, $a''$.
By Lemma \ref{nullpreserved}, we know
\begin{equation}\label{newnull32}
{\tilde F}_2^{11,\alpha\beta}X_\alpha X_\beta
=
{\tilde F}_2^{12,\alpha\beta}X_\alpha X_\beta
=
{\tilde F}_2^{22,\alpha\beta}X_\alpha X_\beta
=0,
\,\,
X\in{\mathcal N}^{(1)}.
\end{equation}
(In fact, the condition on
${\tilde F}_2^{22,\alpha\beta}$ plays no role in the present section.)
The same computation as in (\ref{2019may261})--(\ref{2019may262}) yields
\begin{align}\label{june19ineq1}
\|&
\chi_1
{\tilde F}_2^{11,\alpha\beta}
(\partial_\alpha Z^{a'} u_1)
(\partial_\beta Z^{a''}u_1)
\|_{L^2({\mathbb R}^3)}\\
&
\leq
C
\langle t\rangle^{-(3/2)+2\delta}
\langle\!\langle u(t)\rangle\!\rangle
\bigl(
N_{\mu-1}(u_1(t))
+
M_\mu(u_1(t);1)
\bigr),\nonumber
\end{align}
\begin{align}\label{june19ineq2}
\|&
\chi_1
{\tilde F}_2^{12,\alpha\beta}
(\partial_\alpha Z^{a'} u_1)
(\partial_\beta Z^{a''}u_2)
\|_{L^2({\mathbb R}^3)}\\
&
\leq
C
\langle t\rangle^{-(3/2)+2\delta}
\langle\!\langle u(t)\rangle\!\rangle
\bigl(
N_{\mu-1}(u_2(t))
+
M_\mu(u_2(t);1)
\bigr)\nonumber\\
&
+
C
\langle t\rangle^{-(3/2)+\delta}
\langle\!\langle u(t)\rangle\!\rangle
\bigl(
N_{\mu-1}(u_1(t))
+
M_\mu(u_1(t);1)
\bigr).\nonumber
\end{align}
On the other hand,
using the property (\ref{newnull32})
of the coefficients ${\tilde F}_2^{11,\alpha\beta}$
and ${\tilde F}_2^{12,\alpha\beta}$,
we get
\begin{equation}\label{june19ineq3}
\|
\chi_2
{\tilde F}_2^{11,\alpha\beta}
(\partial_\alpha Z^{a'} u_1)
(\partial_\beta Z^{a''}u_1)
\|_{L^2({\mathbb R}^3)}
\leq
C
\langle t\rangle^{-(3/2)+2\delta}
\langle\!\langle u(t)\rangle\!\rangle
N_\mu(u_1(t))
\end{equation}
and
\begin{align}\label{june19ineq4}
\|&
\chi_2
{\tilde F}_2^{12,\alpha\beta}
(\partial_\alpha Z^{a'} u_1)
(\partial_\beta Z^{a''}u_2)
\|_{L^2({\mathbb R}^3)}\\
&
\leq
C
\langle t\rangle^{-(3/2)+2\delta}
\langle\!\langle u(t)\rangle\!\rangle
N_\mu(u_2(t))
+
C
\langle t\rangle^{-(3/2)+\delta}
\langle\!\langle u(t)\rangle\!\rangle
N_\mu(u_1(t))\nonumber
\end{align}
as in (\ref{2019may263}).
Therefore, we focus on the terms
with $(j,k)=(2,2)$, $(2,3)$, and
$(3,3)$ on the right-hand side of
(\ref{equality32}).
We have only to show
how to estimate the term with $(j,k)=(2,3)$
because the others can be handled similarly.
When $|a'|=0$ and $|a''|\leq 2$,
we get
\begin{align}\label{june20930}
\|
\chi_1
(\partial u_2)
(\partial Z^{a''}u_3)
\|_{L^2({\mathbb R}^3)}
&\leq
C
\langle t\rangle^{-1}
\|
\langle t-r\rangle
\partial u_2
\|_{L^\infty({\mathbb R}^3)}
\|
\partial Z^{a''}u_3
\|_{L^2({\mathbb R}^3)}\\
&
\leq
C
\langle t\rangle^{-1}
\langle\!\langle u(t)\rangle\!\rangle
N_3(u_3(t)).\nonumber
\end{align}
When $|a'|\leq 1$ and $|a''|\leq 1$,
we get
\begin{align}
\|
\chi_1
(\partial Z^{a'}u_2)
(\partial Z^{a''}u_3)
\|_{L^2({\mathbb R}^3)}
&\leq
C
\langle t\rangle^{-1}
\|
\langle t-r\rangle
\partial Z^{a'}u_2
\|_{L^6({\mathbb R}^3)}
\|
\partial Z^{a''}u_3
\|_{L^3({\mathbb R}^3)}\\
&
\leq
C
\langle t\rangle^{-1}
\langle\!\langle u(t)\rangle\!\rangle
N_3(u_3(t)).\nonumber
\end{align}
Furthermore, we obtain
for $|a'|\leq 2$ and $|a''|=0$
\begin{align}
\|
\chi_1
(\partial Z^{a'}u_2)
(\partial u_3)
\|_{L^2({\mathbb R}^3)}
&\leq
C
\langle t\rangle^{-1}
\|
\partial Z^{a'}u_2
\|_{L^2({\mathbb R}^3)}
\|
\langle c_0t-r\rangle
\partial u_3
\|_{L^\infty({\mathbb R}^3)}\\
&
\leq
C
\langle t\rangle^{-1}
\langle\!\langle u(t)\rangle\!\rangle
N_3(u_2(t)).\nonumber
\end{align}
On the other hand,
repeating the same discussion as in
(\ref{2019may264})--(\ref{2019may266}),
we can obtain
\begin{equation}\label{june20ineq932}
\|
\chi_2
(\partial Z^{a'}u_2)
(\partial Z^{a''}u_3)
\|_{L^2({\mathbb R}^3)}
\leq
C
\langle t\rangle^{-1}
\langle\!\langle u(t)\rangle\!\rangle
\bigl(
N_3(u_2(t))+N_3(u_3(t))
\bigr)
\end{equation}
for $|a'|+|a''|\leq 2$.
The cubic term $Z^a C_2(\partial u_1,\partial u_2,\partial u_3)$
can be handled in the same way as in (\ref{2019june25ineqcubic1}).
Summing up, we have obtained for $\mu=3,4$
\begin{align}\label{2019june25boundm2}
M&_\mu(u_2(t);1)\\
&
\leq
CN_\mu(u_2(t))
+
C\langle t\rangle^{-(1/2)+2\delta}
\langle\!\langle u(t)\rangle\!\rangle
\sum_{k=1}^2
\bigl(
N_\mu(u_k(t))
+
M_\mu(u_k(t);1)
\bigr)\nonumber\\
&
+
C
\bigl(
\langle\!\langle u(t)\rangle\!\rangle
+
\langle\!\langle u(t)\rangle\!\rangle^2
\bigr)
{\mathcal N}_3(u(t)).\nonumber
\end{align}
\subsection{Bound for $M_\mu(u_3;c_0)$}
As in (\ref{equality31}),
we have
\begin{align}\label{equality619}
\Box_{c_0} Z^a u_3
=&
\sum_{{1\leq j\leq k\leq 3}\atop{(j,k)\ne (1,3)}}
\sum\!{}^{'}{\tilde F}_3^{jk,\alpha\beta}
(\partial_\alpha Z^{a'}u_j)
(\partial_\beta Z^{a''}u_k)\\
&
+
Z^a C_3(\partial u_1,\partial u_2,\partial u_3),\nonumber
\end{align}
where the new coefficients above
actually depend on $a'$, $a''$.
By Lemma \ref{nullpreserved},
we have
\begin{align}
&{\tilde F}_3^{11,\alpha\beta}X_\alpha X_\beta
=
{\tilde F}_3^{12,\alpha\beta}X_\alpha X_\beta=0,
\,\,X\in{\mathcal N}^{(1)},\\
&
{\tilde F}_3^{33,\alpha\beta}X_\alpha X_\beta=0,\,\,
X\in{\mathcal N}^{(c_0)}.
\end{align}
(In fact, this condition on ${\tilde F}_3^{33,\alpha\beta}$ plays no role
in the present section.)
The terms with $(j,k)=(1,1)$ and $(1,2)$ on the right-hand side of
(\ref{equality619})
can be handled in the same way as in
(\ref{june19ineq1}), (\ref{june19ineq3}) and
(\ref{june19ineq2}), (\ref{june19ineq4}),
respectively.
Moreover, we can bound the terms
with $(j,k)=(2,2)$, $(2,3)$, and $(3,3)$ on the right-hand side of
(\ref{equality619}) similarly to
(\ref{june20930})--(\ref{june20ineq932}).
The cubic term can be handled in the same way as before.
We have therefore obtained for $\mu=3,4$
\begin{align}\label{2019june25boundm3}
M&_\mu(u_3(t);c_0)\\
&
\leq
CN_\mu(u_3(t))
+
C\langle t\rangle^{-(1/2)+2\delta}
\langle\!\langle u(t)\rangle\!\rangle
\sum_{k=1}^2
\bigl(
N_\mu(u_k(t))
+
M_\mu(u_k(t);1)
\bigr)\nonumber\\
&
+
C
\bigl(
\langle\!\langle u(t)\rangle\!\rangle
+
\langle\!\langle u(t)\rangle\!\rangle^2
\bigr)
{\mathcal N}_3(u(t)).\nonumber
\end{align}
It is obvious that Proposition \ref{2019june25mnineq}
is a direct consequence of
(\ref{2019june25boundm1}),
(\ref{2019june25boundm2}), and
(\ref{2019june25boundm3}).
We have finished the proof. $
\Box$
\section{Energy estimate}\label{sectionenergy}
We carry out the energy estimate by relying upon the ghost weight method of Alinhac
\cite{Al2001}, \cite{Al2010}.
Just in order to make the proof self-contained,
let us start our discussion with some preliminaries.
Let $c>0$, and define
$m^{\alpha\beta}:={\rm diag}(-1,c^2,c^2,c^2)$.
We define the energy-momentum tensor as
\begin{equation}
T^{\alpha\beta}
:=
m^{\alpha\mu}m^{\beta\nu}
(\partial_\mu v)
(\partial_\nu v)
-
\frac12
m^{\alpha\beta}
m^{\mu\nu}
(\partial_\mu v)
(\partial_\nu v).
\end{equation}
A straightforward computation yields
\begin{equation}
\partial_\beta
T^{\alpha\beta}
=
(m^{\alpha\mu}\partial_\mu v)
(-\Box_c v).
\end{equation}
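For the reader's convenience, we record the short computation behind this identity: by the product rule,
\begin{align*}
\partial_\beta T^{\alpha\beta}
=&
m^{\alpha\mu}m^{\beta\nu}
(\partial_\beta\partial_\mu v)(\partial_\nu v)
+
m^{\alpha\mu}m^{\beta\nu}
(\partial_\mu v)(\partial_\beta\partial_\nu v)
-
m^{\alpha\beta}m^{\mu\nu}
(\partial_\beta\partial_\mu v)(\partial_\nu v)\\
=&
(m^{\alpha\mu}\partial_\mu v)
(m^{\beta\nu}\partial_\beta\partial_\nu v)
=
(m^{\alpha\mu}\partial_\mu v)
(-\Box_c v),
\end{align*}
where the first and the third terms in the first line cancel after relabeling the dummy indices, and
$m^{\beta\nu}\partial_\beta\partial_\nu v=-\Box_c v$ for the diagonal $m^{\alpha\beta}$ above.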
In particular, we have
\begin{equation}
\partial_\beta T^{0\beta}
=
(\partial_t v)
(\Box_c v).
\end{equation}
For any $g=g(\rho)\in C^1({\mathbb R})$,
we therefore get
\begin{align}
\partial_\beta
(e^{g(ct-r)}T^{0\beta})
&=
e^{g(ct-r)}g'(ct-r)
(-\omega_\beta)
T^{0\beta}
+
e^{g(ct-r)}\partial_\beta T^{0\beta}\\
&
=
e^{g(ct-r)}
\biggl\{
\frac{c}{2}
g'(ct-r)
\sum_{j=1}^3
(T_j^{(c)}v)^2
+
(\partial_t v)(\Box_c v)
\biggr\}.\nonumber
\end{align}
Here, by $\omega=(\omega_0,\omega_1,\omega_2,\omega_3)$,
we mean
$\omega_0=-c$,
$\omega_j=x_j/|x|$.
As for $T_j^{(c)}$, see (\ref{2019Nov16OK}).
With $0<\eta<1/4$,
we choose
\begin{equation}
g(\rho)
=
-\int_0^\rho
\langle {\tilde \rho}\rangle^{-1-2\eta}
d{\tilde\rho},\,\,
\rho\in{\mathbb R},
\end{equation}
so that $g'(ct-r)=-\langle ct-r\rangle^{-1-2\eta}$.
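Although elementary, it may be worth recording why $g$ is bounded, since this is used just below: for every $\rho\in{\mathbb R}$,
\begin{equation*}
|g(\rho)|
\leq
\int_0^\infty
\langle{\tilde\rho}\rangle^{-1-2\eta}
d{\tilde\rho}
\leq
1
+
\int_1^\infty
{\tilde\rho}^{-1-2\eta}
d{\tilde\rho}
=
1+\frac{1}{2\eta},
\end{equation*}
so that $e^{g(ct-r)}$ is bounded above and below by positive constants depending only on $\eta$.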
Since $g(\rho)$ is a bounded function and
we have
$T^{00}
=
\bigl\{
(\partial_t v)^2
+
c^2|\nabla v|^2
\bigr\}/2$,
we get the key estimate
\begin{align}\label{keyghostweight}
E&(v(t);c)
+
\sum_{j=1}^3
\int_0^t\!\!
\int_{{\mathbb R}^3}
\langle c\tau-r\rangle^{-1-2\eta}
\bigl(
T_j^{(c)}v(\tau,x)
\bigr)^2
d\tau
dx\\
&
\leq
CE(v(0);c)
+
C
\int_0^t\!\!
\int_{{\mathbb R}^3}
|\Box_c v(\tau,x)|
|\partial_t v(\tau,x)|
d\tau
dx\nonumber
\end{align}
for any smooth function $v(t,x)$
decaying sufficiently fast as $|x|\to\infty$.
In the following, we use the notation for $c>0$
\begin{equation}
G(v(t);c)
:=
\biggl(
\sum_{|a|\leq 3}
\sum_{j=1}^3
\int_{{\mathbb R}^3}
\langle ct-r\rangle^{-1-2\eta}
\bigl(
T_j^{(c)} Z^a v(t,x)
\bigr)^2
dx
\biggr)^{1/2}
\end{equation}
associated with (\ref{keyghostweight}) and
\begin{equation}\label{weightlocalenergynorm}
L(v(t))
:=
\biggl(
\sum_{|a|\leq 3}
\bigl(
\|
r^{-5/4}Z^a v(t)
\|_{L^2({\mathbb R}^3)}^2
+
\|
r^{-1/4}
\partial Z^a v(t)
\|_{L^2({\mathbb R}^3)}^2
\bigr)
\biggl)^{1/2}
\end{equation}
associated with (\ref{l2spacetime}).
Recall that we use the notation
$c_1=c_2=1$, $c_3=c_0$ (see (\ref{eq1})).
The purpose of this section is to prove
the following a priori estimate.
\begin{proposition}\label{2019Nov17OK}
Suppose $c_0\ne 1$ in \mbox{$(\ref{eq1})$}
and suppose \mbox{$(\ref{assumption1})$--$(\ref{assumption5})$}.
The unique local $($in time$)$ solution to
{\rm (\ref{eq1})--(\ref{data})}
defined in $(0,T)\times{\mathbb R}^3$ for some $T>0$
satisfies
\begin{align}\label{2019Nov17LowEnergy}
\bigl(&
\langle t\rangle^{-\delta}
N_3(u_1(t))
\bigr)^2
+
N_3(u_2(t))^2
+
N_3(u_3(t))^2\\
&
\leq
C\sum_{k=1}^3 N_3(u_k(0))^2\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal M}_4(u(t))
\biggr)
\sup_{0<t<T}
{\mathcal N}_3(u(t))\nonumber\\
&
\hspace{0.1cm}
+
C\langle\!\langle u\rangle\!\rangle_T^2
\biggl(
\sup_{0<t<T}
{\mathcal N}_3(u(t))
\biggr)^2\nonumber
\end{align}
and
\begin{align}\label{2019Nov17HighEnergy}
\bigl(&
\langle t\rangle^{-2\delta}
N_4(u_1(t))
\bigr)^2
+
\sum_{k=2}^3
\bigl(
\langle t\rangle^{-\delta}
N_4(u_k(t))
\bigr)^2\\
&
\hspace{0.1cm}
+
\langle t\rangle^{-4\delta}
\int_0^t
G(u_1(\tau);1)^2d\tau
+
\sum_{k=2}^3
\langle t\rangle^{-2\delta}
\int_0^t
G(u_k(\tau);c_k)^2d\tau
\nonumber\\
&
\leq
C
\sum_{k=1}^3
N_4(u_k(0))^2\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\int_0^T
\langle\tau\rangle^{-1+2\delta}
\biggl(
\sum_{k=1}^3
L(u_k(\tau))
\biggr)^2d\tau\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
\biggr)
\int_0^T
\langle\tau\rangle^{-1+\eta+4\delta}
\sum_{k=1}^3
G(u_k(\tau);c_k)d\tau\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
\biggr)^2\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T^2
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T}
{\mathcal N}_3(u(t))
\biggr)
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))\nonumber
\end{align}
for $0<t<T$.
$($See $(\ref{definition<<u>>T})$
for the definition of
$\langle\!\langle u\rangle\!\rangle_T$.$)$
\end{proposition}
\subsection{Energy estimate for $u_1$}
Note that (\ref{equality31}) remains valid
for $|a|\leq 3$.
Using (\ref{keyghostweight}) and (\ref{equality31}),
we get for $|a|\leq 3$
\begin{align}\label{u1ghostenergy2019july24}
E&(Z^a u_1(t);1)
+
\sum_{j=1}^3
\int_0^t\!\!\int_{{\mathbb R}^3}
\langle \tau-r\rangle^{-1-2\eta}
\bigl(
T_j^{(1)} Z^a u_1(\tau,x)
\bigr)^2
d\tau dx\\
&
\leq
C
E(Z^a u_1(0);1)
+
C
\sum\!{}^{'}\int_0^t J_{11}\,d\tau
+
C
\sum\!{}^{''}\int_0^t J_{12}\,d\tau
+
C
\int_0^t J_{13}\,d\tau,\nonumber
\end{align}
where
\begin{align}
&J_{11}
=
\|
{\tilde F}_1^{11,\alpha\beta}
(\partial_\alpha Z^{a'}u_1)
(\partial_\beta Z^{a''}u_1)
(\partial_t Z^a u_1)
\|_{L^1({\mathbb R}^3)},\label{2019july23j11}\\
&
J_{12}
=
\|
{\tilde F}_1^{jk,\alpha\beta}
(\partial_\alpha Z^{a'}u_j)
(\partial_\beta Z^{a''}u_k)
(\partial_t Z^a u_1)
\|_{L^1({\mathbb R}^3)},\label{2019july23j12}\\
&
J_{13}
=
\|
\bigl(
Z^a C_1(\partial u_1,\partial u_2,\partial u_3)
\bigr)
(\partial_t Z^a u_1)
\|_{L^1({\mathbb R}^3)}.\label{2019july23j13}
\end{align}
We refer to (\ref{equality31}) for $\sum\!{}^{'}$ and $\sum\!{}^{''}$.
As for $|a|\leq 2$, we have only to repeat
essentially the same argument as before.
Indeed,
as in (\ref{2019may261}) and (\ref{2019may263}) with $\mu=4$,
we obtain for $|a|\leq 2$
\begin{equation}
J_{11}
\leq
C
\langle\tau\rangle^{-(3/2)+3\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\bigl(
N_4(u_1(\tau))
+
M_4(u_1(\tau);1)
\bigr)
\bigl(
\langle\tau\rangle^{-\delta}
N_3(u_1(\tau))
\bigr).
\end{equation}
As in (\ref{2019may262}), (\ref{2019may264})--(\ref{2019may266}),
we get for $|a|\leq 2$, using the notation $c_1=c_2=1$, $c_3=c_0$
\begin{align}
J_{12}
\leq&
C
\langle\tau\rangle^{-(3/2)+3\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\biggl(
\sum_{k=1}^3
\bigl(
N_3(u_k(\tau))
+
M_4(u_k(\tau);c_k)
\bigr)
\biggr)
\\
&
\hspace{2cm}
\times
\bigl(
\langle\tau\rangle^{-\delta}
N_3(u_1(\tau))
\bigr)\nonumber\\
&
+
C
\langle\tau\rangle^{-1+2\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
{\mathcal N}_3(u(\tau))
\bigl(
\langle\tau\rangle^{-\delta}
N_3(u_1(\tau))
\bigr).\nonumber
\end{align}
It is also possible to get for $|a|\leq 2$
\begin{equation}\label{j13lowenergy}
J_{13}
\leq
C
\langle\tau\rangle^{-2+4\delta}
\langle\!\langle u(\tau)\rangle\!\rangle^2
{\mathcal N}_3(u(\tau))
\bigl(
\langle\tau\rangle^{-\delta}
N_3(u_1(\tau))
\bigr).
\end{equation}
Therefore, we may focus on $|a|\leq 3$.
Note that we can no longer rely upon the Hardy inequality
as we have done in (\ref{2019may261}), (\ref{2019may262}).
(Its use would cause the loss of derivatives,
and we could not close the argument.)
As mentioned in the Introduction,
this is one of the places where we need to proceed quite differently
from \cite{SiderisTu}, and we utilize the weighted norm
(\ref{weightlocalenergynorm}) associated with (\ref{l2spacetime}).
Assuming $|a'|\leq |a''|$
(and hence $|a'|\leq 1$)
without loss of generality, we get
\begin{align}\label{highenergyj112019july25}
\|&
\chi_1
(\partial Z^{a'} u_1)
(\partial Z^{a''}u_1)
(\partial_t Z^a u_1)
\|_{L^1({\mathbb R}^3)}\\
&
\leq
C
\langle\tau\rangle^{-1}
\|
r^{1/2}
\langle\tau-r\rangle
\partial Z^{a'}u_1
\|_{L^\infty({\mathbb R}^3)}
\|
r^{-1/4}
\partial Z^{a''}u_1
\|_{L^2({\mathbb R}^3)}
\|
r^{-1/4}
\partial_t Z^a u_1
\|_{L^2({\mathbb R}^3)}\nonumber\\
&
\leq
C
\langle\tau\rangle^{-1+2\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
L(u_1(\tau))^2.\nonumber
\end{align}
Here, the Sobolev embedding
$W^{1,4}(S^2)\hookrightarrow L^\infty(S^2)$ has been used
to bound
$\langle\tau\rangle^{-2\delta}
\|
r^{1/2}
\langle\tau-r\rangle
\partial Z^{a'}u_1
\|_{L^\infty({\mathbb R}^3)}$
by a constant-multiple of
$\langle\!\langle u(\tau)\rangle\!\rangle$.
Similarly, we get for $(j,k)\ne (1,1)$
\begin{align}\label{highenergyj122019july25}
\|&
\chi_1
(\partial Z^{a'} u_j)
(\partial Z^{a''}u_k)
(\partial_t Z^a u_1)
\|_{L^1({\mathbb R}^3)}\\
&
\leq
C
\langle\tau\rangle^{-1+2\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\biggl(
\sum_{k=1}^3
L(u_k(\tau))
\biggr)
L(u_1(\tau)).\nonumber
\end{align}
On the other hand,
as in (\ref{2019may263}),
we employ (\ref{htd20190718}) to get
\begin{align}\label{2019july181530}
\|&
\chi_2
{\tilde F}_1^{11,\alpha\beta}
(\partial_\alpha Z^{a'}u_1)
(\partial_\beta Z^{a''}u_1)
(\partial_t Z^a u_1)
\|_{L^1({\mathbb R}^3)}\\
&
\leq
C
\sum_{|a'|+|a''|\leq 3}
\bigl(
\|
\chi_2
(T^{(1)}Z^{a'}u_1)
(\partial Z^{a''}u_1)
\|_{L^2({\mathbb R}^3)}\nonumber\\
&
\hspace{2.8cm}
+
\|
\chi_2
(\partial Z^{a'}u_1)
(T^{(1)}Z^{a''}u_1)
\|_{L^2({\mathbb R}^3)}
\bigr)N_4(u_1).\nonumber
\end{align}
To continue the estimate of (\ref{2019july181530}),
we may assume $|a'|\leq |a''|$
(hence $|a'|\leq 1$)
by symmetry.
Using simply the $L^\infty({\mathbb R}^3)$ norm
(together with $W^{1,4}(S^2)\hookrightarrow L^\infty(S^2)$)
and the $L^2$ norm
in place of the $L_r^\infty L_\omega^4$ and
the $L_r^2 L_\omega^4$ norms,
we naturally modify the argument in (\ref{2019may263}) to get
\begin{equation}\label{2019july251729}
\|
\chi_2
(T^{(1)}Z^{a'}u_1)
(\partial Z^{a''}u_1)
\|_{L^2({\mathbb R}^3)}
\leq
C
\langle\tau\rangle^{-(3/2)+2\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
N_4(u_1(\tau)).
\end{equation}
Moreover, using (\ref{interp}) with $\theta=(1/2)-\eta$ and $c=1$,
we obtain
\begin{equation}\label{2019july251730}
\|
\chi_2
(\partial Z^{a'}u_1)
(T^{(1)}Z^{a''}u_1)
\|_{L^2({\mathbb R}^3)}
\leq
C
\langle\tau\rangle^{-1+\eta+2\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
G(u_1(\tau);1).
\end{equation}
To handle
\begin{equation}
\sum\!{}^{''}
\|
\chi_2
{\tilde F}^{jk,\alpha\beta}_1
(\partial_\alpha Z^{a'} u_j)
(\partial_\beta Z^{a''} u_k)
(\partial_t Z^a u_1)
\|_{L^1({\mathbb R}^3)},
\end{equation}
we focus on the estimate of
\begin{equation}\label{2019july191626}
\|
\chi_2
(\partial Z^{a'} u_j)
(\partial Z^{a''} u_k)
(\partial_t Z^a u_1)
\|_{L^1({\mathbb R}^3)}
\end{equation}
for $|a|\leq 3$,
$|a'|+|a''|\leq 3$, and
$(j,k)\ne (1,1)$,
because of lack of the null condition
on the coefficients
$\{F_1^{jk,\alpha\beta}\}$ with
$(j,k)\ne (1,1)$.
Unlike (\ref{2019july181530}),
we fully utilize the different growth rates for
the high energy and the low energy of $u_1$.
Without loss of generality,
we may suppose $j\ne 1$ in (\ref{2019july191626}).
When $|a'|=0$ (and hence $|a''|\leq 3$),
we get
\begin{align}\label{2019july191641}
\|&
\chi_2
(\partial u_j)
(\partial Z^{a''} u_k)
(\partial_t Z^a u_1)
\|_{L^1({\mathbb R}^3)}\\
&
\leq
C
\langle\tau\rangle^{-1+4\delta}
\|
r\partial u_j
\|_{L^\infty({\mathbb R}^3)}
\bigl(
\langle\tau\rangle^{-2\delta}
\|
\partial Z^{a''} u_k
\|_{L^2({\mathbb R}^3)}
\bigr)
\bigl(
\langle\tau\rangle^{-2\delta}
N_4(u_1(\tau))
\bigr)\nonumber\\
&
\leq
C
\langle\tau\rangle^{-1+4\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\bigl(
\langle\tau\rangle^{-\delta}
{\mathcal N}_4(u(\tau))
\bigr)
\bigl(
\langle\tau\rangle^{-2\delta}
N_4(u_1(\tau))
\bigr).\nonumber
\end{align}
When $|a'|=1$ (and hence $|a''|\leq 2$),
we employ the $L_r^\infty L_\omega^4$ norm
and the $L_r^2 L_\omega^4$ norm
(together with $W^{1,2}(S^2)\hookrightarrow L^4(S^2)$)
in place of the $L^\infty({\mathbb R}^3)$ norm and
the $L^2({\mathbb R}^3)$ norm,
to get the same bound as in (\ref{2019july191641}).
When $|a'|=2$ (and hence $|a''|\leq 1$),
we obtain
\begin{align}\label{2019july191656}
\|&
\chi_2
(\partial Z^{a'}u_j)
(\partial Z^{a''} u_k)
(\partial_t Z^a u_1)
\|_{L^1({\mathbb R}^3)}\\
&
\leq
C
\langle\tau\rangle^{-1+4\delta}
\bigl(
\langle\tau\rangle^{-\delta}
\|
\partial Z^{a'} u_j
\|_{L_r^2 L_\omega^4}
\bigr)
\bigl(
\langle\tau\rangle^{-\delta}
\|
r\partial Z^{a''}u_k
\|_{L_r^\infty L_\omega^4}
\bigr)
\bigl(
\langle\tau\rangle^{-2\delta}
N_4(u_1(\tau))
\bigr)\nonumber\\
&
\leq
C
\langle\tau\rangle^{-1+4\delta}
\bigl(
\langle\tau\rangle^{-\delta}
{\mathcal N}_4(u(\tau))
\bigr)
\langle\!\langle u(\tau)\rangle\!\rangle
\bigl(
\langle\tau\rangle^{-2\delta}
N_4(u_1(\tau))
\bigr).\nonumber
\end{align}
For $|a'|=3$ (and hence $|a''|=0$),
we employ the $L^2({\mathbb R}^3)$ norm
and the $L^\infty({\mathbb R}^3)$ norm
in place of the $L_r^2 L_\omega^4$ norm
and the $L_r^\infty L_\omega^4$ norm,
to get the same bound as in (\ref{2019july191656}).
It remains to bound (\ref{2019july23j13}) for $|a|\leq 3$.
It is possible to get
\begin{equation}\label{2019july231855}
J_{13}
\leq
C
\langle\tau\rangle^{-2+6\delta}
\langle\!\langle u(\tau)\rangle\!\rangle^2
\bigl(
\langle\tau\rangle^{-\delta}
{\mathcal N}_4(u(\tau))
+
{\mathcal N}_3(u(\tau))
\bigr)
\bigl(
\langle\tau\rangle^{-2\delta}
N_4(u_1(\tau))
\bigr).
\end{equation}
It suffices to handle such a typical cubic term as
$
(\partial_t Z^{a'}u_1)
(\partial_t Z^{a''}u_1)
(\partial_t Z^{a'''}u_1)
$
with
$|a'|+|a''|+|a'''|=3$,
to show (\ref{2019july231855}).
We get
\begin{align}
\biggl(
&
\sum_{|a'|=3}
\|
\chi_1
(\partial_t Z^{a'}u_1)
(\partial_t u_1)^2
\|_{L^2({\mathbb R}^3)}\\
&
\hspace{0.1cm}
+
\sum_{{|a'|=2}\atop{|a''|=1}}
\|
\chi_1
(\partial_t Z^{a'}u_1)
(\partial_t Z^{a''}u_1)
(\partial_t u_1)
\|_{L^2({\mathbb R}^3)}\nonumber\\
&
\hspace{0.1cm}
+
\sum_{{|a'|=|a''|}\atop{=|a'''|=1}}
\|
\chi_1
(\partial_t Z^{a'}u_1)
(\partial_t Z^{a''}u_1)
(\partial_t Z^{a'''}u_1)
\|_{L^2({\mathbb R}^3)}
\biggr)
N_4(u_1)\nonumber\\
&
\leq
C
\langle\tau\rangle^{-2}
\biggl(
\sum_{|a'|=3}
\|
\partial_t Z^{a'}u_1
\|_{L^2({\mathbb R}^3)}
\|\langle\tau-r\rangle
\partial_t u_1
\|_{L^\infty({\mathbb R}^3)}^2
\nonumber\\
&
\hspace{1.4cm}
+
\sum_{{|a'|=2}\atop{|a''|=1}}
\|
\partial_t Z^{a'}u_1
\|_{L^3({\mathbb R}^3)}
\|
\langle\tau-r\rangle
\partial_t Z^{a''}u_1
\|_{L^6({\mathbb R}^3)}
\|
\langle\tau-r\rangle
\partial_t u_1
\|_{L^\infty({\mathbb R}^3)}\nonumber\\
&
\hspace{1.4cm}
+
\sum_{{|a'|=|a''|}\atop{=|a'''|=1}}
\|
\langle\tau-r\rangle
\partial_t Z^{a'}u_1
\|_{L^\infty({\mathbb R}^3)}
\|
\langle\tau-r\rangle
\partial_t Z^{a''}u_1
\|_{L^6({\mathbb R}^3)}\nonumber\\
&
\hspace{7cm}
\times
\|
\partial_t Z^{a'''}u_1
\|_{L^3({\mathbb R}^3)}
\biggr)
N_4(u_1)\nonumber\\
&
\leq
C
\langle\tau\rangle^{-2+6\delta}
\langle\!\langle u(\tau)\rangle\!\rangle^2
\bigl(
\langle\tau\rangle^{-2\delta}
N_4(u_1(\tau))
+
\langle\tau\rangle^{-\delta}
N_3(u_1(\tau))
\bigr)\nonumber\\
&
\hspace{3.4cm}
\times
\bigl(
\langle\tau\rangle^{-2\delta}
N_4(u_1(\tau))
\bigr).\nonumber
\end{align}
We also obtain
\begin{align}
\biggl(
&
\sum_{|a'|=3}
\|
\chi_2
(\partial_t Z^{a'}u_1)
(\partial_t u_1)^2
\|_{L^2({\mathbb R}^3)}\\
&
\hspace{0.1cm}
+
\sum_{{|a'|=2}\atop{|a''|=1}}
\|
\chi_2
(\partial_t Z^{a'}u_1)
(\partial_t Z^{a''}u_1)
(\partial_t u_1)
\|_{L^2({\mathbb R}^3)}\nonumber\\
&
\hspace{0.1cm}
+
\sum_{{|a'|=|a''|}\atop{=|a'''|=1}}
\|
\chi_2
(\partial_t Z^{a'}u_1)
(\partial_t Z^{a''}u_1)
(\partial_t Z^{a'''}u_1)
\|_{L^2({\mathbb R}^3)}
\biggr)
N_4(u_1)\nonumber\\
&
\leq
C
\langle\tau\rangle^{-2}
\biggl(
\sum_{|a'|=3}
\|
\partial_t Z^{a'}u_1
\|_{L^2({\mathbb R}^3)}
\|
r\partial_t u_1
\|_{L^\infty({\mathbb R}^3)}^2\nonumber\\
&
\hspace{0.1cm}
+
\sum_{{|a'|=2}\atop{|a''|=1}}
\|
\partial_t Z^{a'}u_1
\|_{L^2_r L^4_\omega}
\|
r\partial_t Z^{a''}u_1
\|_{L^\infty_r L^4_\omega}
\|
r\partial_t u_1
\|_{L^\infty({\mathbb R}^3)}\nonumber\\
&
\hspace{0.1cm}
+
\sum_{{|a'|=|a''|}\atop{=|a'''|=1}}
\|
\partial_t Z^{a'}u_1
\|_{L^2_r L^\infty_\omega}
\|
r\partial_t Z^{a''}u_1
\|_{L^\infty_r L^4_\omega}
\|
r\partial_t Z^{a'''}u_1
\|_{L^\infty_r L^4_\omega}
\biggr)
N_4(u_1)\nonumber\\
&
\leq
C
\langle\tau\rangle^{-2+6\delta}
\bigl(
\langle\tau\rangle^{-2\delta}
N_4(u_1)
\bigr)^2
\langle\!\langle u(\tau)\rangle\!\rangle^2.\nonumber
\end{align}
With the notation
\begin{equation}\label{definition<<u>>T}
\langle\!\langle u\rangle\!\rangle_T
:=
\sup_{0<t<T}
\langle\!\langle u(t)\rangle\!\rangle,
\end{equation}
summing yields for $|a|\leq 2$
\begin{align}\label{2019aug201631}
\langle&t\rangle^{-2\delta}E(Z^a u_1(t);1)
\\
&
\leq
C
E(Z^a u_1(0);1)\nonumber\\
&
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal M}_4(u(t))
\biggr)
\sup_{0<t<T}
{\mathcal N}_3(u(t))\nonumber\\
&
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
{\mathcal N}_3(u(t))
\biggr)^2
+
C
\langle\!\langle u\rangle\!\rangle_T^2
\biggl(
\sup_{0<t<T}
{\mathcal N}_3(u(t))
\biggr)^2,\nonumber
\end{align}
and for $|a|\leq 3$
\begin{align}\label{2019aug61027}
\langle&t\rangle^{-4\delta}
E(Z^a u_1(t);1)
+
\langle t\rangle^{-4\delta}
\int_0^t
G(u_1(\tau);1)^2
d\tau\\
&
\leq
C
E(Z^a u_1(0);1)\nonumber\\
&
+
C
\langle\!\langle u\rangle\!\rangle_T
\int_0^t
\langle\tau\rangle^{-1+2\delta}
\biggl(
\sum_{k=1}^3
L(u_k(\tau))
\biggr)
L(u_1(\tau))
d\tau\nonumber\\
&
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
\biggr)
\int_0^t
\langle\tau\rangle^{-1+\eta+4\delta}
G(u_1(\tau);1)
d\tau\nonumber\\
&
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
\biggr)^2\nonumber\\
&
+
C
\langle\!\langle u\rangle\!\rangle_T^2
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T}
{\mathcal N}_3(u(t))
\biggr)
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t)).\nonumber
\end{align}
\subsection{Energy estimate for $u_2$.}
As in (\ref{u1ghostenergy2019july24}), we get for $|a|\leq 3$
\begin{align}\label{u2ghostenergy2019july24}
E&(Z^a u_2(t);1)
+
\sum_{j=1}^3
\int_0^t\!\!\int_{{\mathbb R}^3}
\langle \tau-r\rangle^{-1-2\eta}
\bigl(
T_j^{(1)} Z^a u_2(\tau,x)
\bigr)^2
d\tau dx\\
&
\leq
C
E(Z^a u_2(0);1)\nonumber\\
&
\hspace{0.1cm}
+
C
\sum_{{(j,k)=(1,1),}\atop{(1,2),(2,2)}}
\sum\!{}^{'}
\int_0^t J_{21}\,d\tau
+
C
\sum_{{(j,k)=(2,3),}\atop{(3,3)}}
\sum\!{}^{'}
\int_0^t J_{21}\,d\tau
+
C
\int_0^t J_{22}\,d\tau,\nonumber
\end{align}
where we have set
\begin{equation}
J_{21}
=J_{21}^{(j,k)}
:=
\|
{\tilde F}_2^{jk,\alpha\beta}
(\partial_\alpha Z^{a'}u_j)
(\partial_\beta Z^{a''}u_k)
(\partial_t Z^a u_2)
\|_{L^1({\mathbb R}^3)}
\end{equation}
(Note that the summation convention
only for the Greek letters $\alpha$ and $\beta$
has been used above, and
the coefficients ${\tilde F}_2^{jk,\alpha\beta}$ actually
depend also on $a'$, $a''$.),
and
\begin{equation}
J_{22}
:=
\|
\bigl(
Z^a C_2(\partial u_1,\partial u_2,\partial u_3)
\bigr)
(\partial_t Z^a u_2)
\|_{L^1({\mathbb R}^3)}.
\end{equation}
Let us first consider the low energy $|a|\leq 2$.
As in (\ref{june19ineq1})--(\ref{june19ineq2}),
it is possible to obtain
\begin{align}\label{2019july281550}
\|&
\chi_1
(\partial Z^{a'}u_j)
(\partial Z^{a''}u_k)
(\partial_t Z^a u_2)
\|_{L^1({\mathbb R}^3)}\\
&
\leq
C
\langle\tau\rangle^{-(3/2)+4\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\bigl(
{\mathcal N}_3(u(\tau))
+
\langle\tau\rangle^{-\delta}
{\mathcal M}_4(u(\tau))
\bigr)
N_3(u_2(\tau)).\nonumber
\end{align}
On the other hand,
for $(j,k)=(1,1), (1,2)$, and $(2,2)$,
we benefit from the null condition and obtain
\begin{align}\label{2019july281551}
\|&
\chi_2
{\tilde F}_2^{jk,\alpha\beta}
(\partial_\alpha Z^{a'}u_j)
(\partial_\beta Z^{a''}u_k)
(\partial_t Z^a u_2)
\|_{L^1({\mathbb R}^3)}\\
&
\leq
C
\langle\tau\rangle^{-(3/2)+4\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\bigl(
\langle\tau\rangle^{-2\delta}
N_4(u_1(\tau))
+
\langle\tau\rangle^{-\delta}
N_4(u_2(\tau))
\bigr)
N_3(u_2(\tau))\nonumber
\end{align}
as in (\ref{2019may263}).
For $(j,k)=(2,3), (3,3)$,
we divide the set
$\{x\in{\mathbb R}^3\,:\,|x|>(c_*/2)t+1\}$
($c_*=\min\{c_0,1\}$) into
$$
\biggl\{
x\in{\mathbb R}^3\,:\,
\frac{c_*}{2}t+1<|x|<\frac{c_0+1}{2}t+1
\biggr\}
\mbox{ and }
\biggl\{
x\in{\mathbb R}^3\,:\,
|x|>\frac{c_0+1}{2}t+1
\biggr\},
$$
and obtain
for $j=2,3$, $|a'|+|a''|\leq 2$, and $|a|\leq 2$
\begin{align}\label{2019july281626}
\|&
\chi_2
(\partial Z^{a'}u_j)
(\partial Z^{a''}u_3)
(\partial_t Z^a u_2)
\|_{L^1({\mathbb R}^3)}\\
&
\leq
C
\langle\tau\rangle^{-(3/2)}
\|
\partial Z^{a'}u_j
\|_{L^2({\mathbb R}^3)}
\bigl(
\|
r^{1/2}
\langle c_0\tau-r\rangle
\partial Z^{a''}u_3
\|_{L_r^\infty L_\omega^4}
\|
\partial_t Z^a u_2
\|_{L_r^2 L_\omega^4}\nonumber\\
&
\hspace{5cm}
+
\|
\partial Z^{a''}u_3
\|_{L_r^2 L_\omega^4}
\|
r^{1/2}
\langle\tau-r\rangle
\partial_t Z^a u_2
\|_{L_r^\infty L_\omega^4}
\bigr)\nonumber\\
&
\leq
C
\langle\tau\rangle^{-(3/2)+2\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\bigl(
N_3(u_2(\tau))+N_3(u_3(\tau))
\bigr)\nonumber\\
&
\hspace{3.7cm}
\times
\bigl(
\langle\tau\rangle^{-\delta}
N_4(u_2(\tau))
+
\langle\tau\rangle^{-\delta}
N_4(u_3(\tau))
\bigr)\nonumber
\end{align}
by considering the two cases
$c_0<1$ and $c_0>1$, separately.
It is also possible to get for $|a|\leq 2$
\begin{equation}\label{j22lowenergy}
J_{22}
\leq
C
\langle\tau\rangle^{-2+3\delta}
\langle\!\langle u(\tau)\rangle\!\rangle^2
{\mathcal N}_3(u(\tau))
N_3(u_2(\tau)).
\end{equation}
Summing yields for $|a|\leq 2$
\begin{align}\label{2019aug201638}
E&(Z^a u_2(t);1)
\\
&
\leq
C
E(Z^a u_2(0);1)\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal M}_4(u(t))
\biggr)
\sup_{0<t<T}
{\mathcal N}_3(u(t))\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T^2
\biggl(
\sup_{0<t<T}
{\mathcal N}_3(u(t))
\biggr)^2.\nonumber
\end{align}
Let us turn our attention to the high energy $|a|\leq 3$.
Proceeding as in (\ref{highenergyj112019july25})
and (\ref{highenergyj122019july25}),
we get for $|a'|+|a''|\leq 3$
\begin{align}\label{2019july291431}
\|&
\chi_1
(\partial Z^{a'}u_j)
(\partial Z^{a''}u_k)
(\partial_t Z^a u_2)
\|_{L^1({\mathbb R}^3)}\\
&
\leq
C
\langle\tau\rangle^{-1+2\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\biggl(
\sum_{k=1}^3
L(u_k(\tau))
\biggr)
L(u_2(\tau)).\nonumber
\end{align}
On the other hand,
for $(j,k)=(1,1), (1,2), (2,2)$,
we rely upon the null condition to get
\begin{align}\label{2019july291507}
&
\sum_{{(j,k)=(1,1),}\atop{(1,2),(2,2)}}
\|
\chi_2
{\tilde F}_2^{jk,\alpha\beta}
(\partial Z^{a'}u_j)
(\partial Z^{a''}u_k)
(\partial_t Z^a u_2)
\|_{L^1({\mathbb R}^3)}\\
&
\leq
C
\langle\tau\rangle^{-(3/2)+4\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\bigl(
\langle\tau\rangle^{-2\delta}
N_4(u_1(\tau))
+
\langle\tau\rangle^{-\delta}
N_4(u_2(\tau))
\bigr)
N_4(u_2(\tau))\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\tau\rangle^{-1+\eta+2\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\biggl(
\sum_{i=1,2}
G(u_i(\tau);1)
\biggr)
N_4(u_2(\tau))\nonumber
\end{align}
in the same way as in (\ref{2019july181530}),
(\ref{2019july251729}), and (\ref{2019july251730}).
For $(j,k)=(2,3), (3,3)$,
we can no longer rely upon the null condition.
Instead, we rely upon the fact
$\min\{|a'|,|a''|\}\leq 1$
for $|a'|+|a''|\leq 3$.
Proceeding as in (\ref{2019july191641}) and (\ref{2019july191656}),
we then obtain
\begin{align}\label{2019july291525}
&
\sum_{j=2,3}
\|
\chi_2
(\partial Z^{a'}u_j)
(\partial Z^{a''}u_3)
(\partial_t Z^a u_2)
\|_{L^1({\mathbb R}^3)}\\
&
\hspace{0.1cm}
\leq
C
\langle\tau\rangle^{-1+2\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\bigl(
\langle\tau\rangle^{-\delta}
N_4(u_2(\tau))
+
\langle\tau\rangle^{-\delta}
N_4(u_3(\tau))
\bigr)
\langle\tau\rangle^{-\delta}
N_4(u_2(\tau)).\nonumber
\end{align}
Finally, we get for $|a|\leq 3$
\begin{equation}
J_{22}
\leq
C
\langle\tau\rangle^{-2+5\delta}
\langle\!\langle u(\tau)\rangle\!\rangle^2
\bigl(
\langle\tau\rangle^{-\delta}
{\mathcal N}_4(u(\tau))
+
{\mathcal N}_3(u(\tau))
\bigr)
\bigl(
\langle\tau\rangle^{-\delta}
N_4(u_2(\tau))
\bigr)
\end{equation}
in the same way as in (\ref{2019july231855}).
Summing yields for $|a|\leq 3$
\begin{align}\label{2019aug61028}
\langle&t\rangle^{-2\delta}
E(Z^a u_2(t);1)
+
\langle t\rangle^{-2\delta}
\int_0^t
G(u_2(\tau);1)^2
d\tau\\
&
\leq
C
E(Z^a u_2(0);1)\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\int_0^t
\langle\tau\rangle^{-1+2\delta}
\biggl(
\sum_{k=1}^3
L(u_k(\tau))
\biggr)
L(u_2(\tau))d\tau\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
\biggr)
\int_0^t
\langle\tau\rangle^{-1+\eta+3\delta}
\biggl(
\sum_{i=1,2}
G(u_i(\tau);1)
\biggr)
d\tau\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
\biggr)^2\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T^2
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T}
{\mathcal N}_3(u(t))
\biggr)
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t)).\nonumber
\end{align}
\subsection{Energy estimate for $u_3$.}
As in (\ref{u1ghostenergy2019july24}), we get for $|a|\leq 3$
\begin{align}\label{u3ghostenergy2019july24}
E&(Z^a u_3(t);c_0)
+
\sum_{j=1}^3
\int_0^t\!\!\int_{{\mathbb R}^3}
\langle c_0\tau-r\rangle^{-1-2\eta}
\bigl(
T_j^{(c_0)} Z^a u_3(\tau,x)
\bigr)^2
d\tau dx\\
&
\leq
C
E(Z^a u_3(0);c_0)
+
C
\sum_{{(j,k)=(1,1),}\atop{(1,2)}}
\sum\!{}^{'}
\int_0^t J_{31}\,d\tau
+
C
\sum\!{}^{'}
\int_0^t J_{32}\,d\tau\nonumber\\
&
+
C
\sum_{k=2,3}
\sum\!{}^{'}
\int_0^t J_{33}\,d\tau
+
C
\int_0^t J_{34}\,d\tau.\nonumber
\end{align}
Here we have set
\begin{equation}
J_{31}
=J_{31}^{(j,k)}
:=
\|
{\tilde F}_3^{jk,\alpha\beta}
(\partial_\alpha Z^{a'}u_j)
(\partial_\beta Z^{a''}u_k)
(\partial_t Z^a u_3)
\|_{L^1({\mathbb R}^3)},
\end{equation}
(Note that the summation convention
only for the Greek letters $\alpha$ and $\beta$
has been used above.)
\begin{align}
&J_{32}
:=
\|
{\tilde F}_3^{33,\alpha\beta}
(\partial_\alpha Z^{a'}u_3)
(\partial_\beta Z^{a''}u_3)
(\partial_t Z^a u_3)
\|_{L^1({\mathbb R}^3)},\\
&
J_{33}=J_{33}^{(k)}
:=
\|
{\tilde F}_3^{2k,\alpha\beta}
(\partial_\alpha Z^{a'}u_2)
(\partial_\beta Z^{a''}u_k)
(\partial_t Z^a u_3)
\|_{L^1({\mathbb R}^3)},
\end{align}
(Note that
the coefficients ${\tilde F}_3^{jk,\alpha\beta}$ actually
depend also on $a'$, $a''$.),
and
\begin{equation}
J_{34}
:=
\|
\bigl(
Z^a C_3(\partial u_1,\partial u_2,\partial u_3)
\bigr)
(\partial_t Z^a u_3)
\|_{L^1({\mathbb R}^3)}.
\end{equation}
Let us first consider the low energy $|a|\leq 2$.
In the same way as in (\ref{2019july281550})--(\ref{2019july281551}),
we obtain
\begin{equation}
J_{31}
\leq
C
\langle\tau\rangle^{-(3/2)+4\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\bigl(
\langle\tau\rangle^{-\delta}
{\mathcal N}_4(u(\tau))
+
\langle\tau\rangle^{-\delta}
{\mathcal M}_4(u(\tau))
\bigr)
N_3(u_3(\tau)).
\end{equation}
Since $\{{\tilde F}_3^{33,\alpha\beta}\}$ satisfies the null condition
(\ref{assumption3}),
we also get
\begin{equation}
J_{32}
\leq
C
\langle\tau\rangle^{-(3/2)+2\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\bigl(
\langle\tau\rangle^{-\delta}
{\mathcal N}_4(u(\tau))
+
\langle\tau\rangle^{-\delta}
{\mathcal M}_4(u(\tau))
\bigr)
N_3(u_3(\tau)).
\end{equation}
For $J_{33}$, we proceed as in (\ref{2019july281550})
and (\ref{2019july281626}), to get
\begin{equation}
J_{33}
\leq
C
\langle\tau\rangle^{-(3/2)+2\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\bigl(
\langle\tau\rangle^{-\delta}
{\mathcal N}_4(u(\tau))
+
\langle\tau\rangle^{-\delta}
{\mathcal M}_4(u(\tau))
\bigr)
{\mathcal N}_3(u(\tau)).
\end{equation}
It is possible to get for $|a|\leq 2$
\begin{equation}
J_{34}
\leq
C
\langle\tau\rangle^{-2+3\delta}
\langle\!\langle u(\tau)\rangle\!\rangle^2
{\mathcal N}_3(u(\tau))
N_3(u_3(\tau)).
\end{equation}
Summing yields for $|a|\leq 2$
\begin{align}\label{2019aug201641}
E&(Z^a u_3(t);c_0)
\\
&
\leq
C
E(Z^a u_3(0);c_0)\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal M}_4(u(t))
\biggr)
\sup_{0<t<T}
{\mathcal N}_3(u(t))\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T^2
\biggl(
\sup_{0<t<T}{\mathcal N}_3(u(t))
\biggr)^2.\nonumber
\end{align}
As for the high energy $|a|\leq 3$,
we obtain
\begin{align}
J&_{31},\,J_{32}\\
&
\leq
C
\langle\tau\rangle^{-1+2\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\biggl(
\sum_{k=1}^3
L(u_k(\tau))
\biggr)
L(u_3(\tau))\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\tau\rangle^{-(3/2)+4\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\bigl(
\langle\tau\rangle^{-\delta}
{\mathcal N}_4(u(\tau))
\bigr)
N_4(u_3(\tau))\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\tau\rangle^{-1+\eta+2\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\biggl(
\sum_{i=1,2}
G(u_i(\tau);1)
+
G(u_3(\tau);c_0)
\biggr)
N_4(u_3(\tau))\nonumber
\end{align}
in the same way as in (\ref{2019july291431}) and
(\ref{2019july291507}).
Moreover, as in (\ref{2019july291431})
and (\ref{2019july291525}),
we obtain
\begin{align}
J_{33}
\leq&
C
\langle\tau\rangle^{-1+\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\biggl(
\sum_{k=2}^3
L(u_k(\tau))
\biggr)
L(u_3(\tau))
\\
&
+
\langle\tau\rangle^{-1+2\delta}
\langle\!\langle u(\tau)\rangle\!\rangle
\bigl(
\langle\tau\rangle^{-\delta}
N_4(u_2(\tau))
+
\langle\tau\rangle^{-\delta}
N_4(u_3(\tau))
\bigr)
\langle\tau\rangle^{-\delta}
N_4(u_3(\tau)).\nonumber
\end{align}
For $J_{34}$, we easily obtain
\begin{equation}
J_{34}
\leq
C
\langle\tau\rangle^{-2+5\delta}
\langle\!\langle u(\tau)\rangle\!\rangle^2
\bigl(
\langle\tau\rangle^{-\delta}
{\mathcal N}_4(u(\tau))
+
{\mathcal N}_3(u(\tau))
\bigr)
\bigl(
\langle\tau\rangle^{-\delta}
N_4(u_3(\tau))
\bigr).
\end{equation}
Recall the notation $c_1=c_2=1$, $c_3=c_0$.
Summing yields for $|a|\leq 3$
\begin{align}\label{2019aug61029}
\langle&t\rangle^{-2\delta}
E(Z^a u_3(t);c_0)
+
\langle t\rangle^{-2\delta}
\int_0^t
G(u_3(\tau);c_0)^2
d\tau\\
&
\leq
C
E(Z^a u_3(0);c_0)\nonumber\\
&
+
C
\langle\!\langle u\rangle\!\rangle_T
\int_0^t
\langle\tau\rangle^{-1+2\delta}
\biggl(
\sum_{k=1}^3
L(u_k(\tau))
\biggr)
L(u_3(\tau))
d\tau\nonumber\\
&
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
\biggr)
\int_0^t
\langle\tau\rangle^{-1+\eta+3\delta}
\biggl(
\sum_{i=1}^3
G(u_i(\tau);c_i)
\biggr)
d\tau\nonumber\\
&
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
\biggr)^2\nonumber\\
&
+
C
\langle\!\langle u\rangle\!\rangle_T^2
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T}
{\mathcal N}_3(u(t))
\biggr)
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t)).\nonumber
\end{align}
Now we are in a position to complete the proof of
Proposition \ref{2019Nov17OK}.
It is obvious that
the estimate (\ref{2019Nov17LowEnergy})
follows from (\ref{2019aug201631}), (\ref{2019aug201638}),
and (\ref{2019aug201641}).
The high energy estimate
(\ref{2019Nov17HighEnergy})
is a direct consequence of (\ref{2019aug61027}),
(\ref{2019aug61028}), and (\ref{2019aug61029}).
We have finished the proof. $
\Box$
\section{$L^2$ weighted space-time estimates}\label{l2weighted}
The purpose of this section is to prove the following
a priori estimates:
\begin{proposition}
The smooth local $($in time$)$ solution $u=(u_1,u_2,u_3)$ to
$(\ref{eq1})$--$(\ref{data})$
defined in $(0,T)\times{\mathbb R}^3$ for some $T>0$
satisfies the following a priori estimates for all $t\in (0,T):$
\begin{align}\label{2019aug201643}
\langle&t\rangle^{-(1/2)-4\delta}
\int_0^t
L(u_1(\tau))^2d\tau
\\
&
\leq
C
\sum_{|a|\leq 3}
\|
(\partial Z^a u_1)(0)
\|_{L^2({\mathbb R}^3)}^2\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\int_0^t
\langle\tau\rangle^{-1+2\delta}
\biggl(
\sum_{k=1}^3
L(u_k(\tau))
\biggr)
L(u_1(\tau))
d\tau\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
\biggr)
\int_0^t
\langle\tau\rangle^{-1+\eta+4\delta}
G(u_1(\tau);1)
d\tau\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
\biggr)^2\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T^2
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T}
{\mathcal N}_3(u(t))
\biggr)
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t)),\nonumber
\end{align}
\begin{align}\label{2019aug201644}
\langle&t\rangle^{-(1/2)-2\delta}
\int_0^t
L(u_2(\tau))^2
d\tau\\
&
\leq
C
\sum_{|a|\leq 3}
\|
(\partial Z^a u_2)(0)
\|_{L^2({\mathbb R}^3)}^2\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\int_0^t
\biggl(
\sum_{k=1}^3
L(u_k(\tau))
\biggr)
L(u_2(\tau))d\tau\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
\biggr)
\int_0^t
\langle\tau\rangle^{-1+\eta+3\delta}
\biggl(
\sum_{i=1,2}
G(u_i(\tau);1)
\biggr)
d\tau\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
\biggr)^2\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T^2
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T}
{\mathcal N}_3(u(t))
\biggr)
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t)),\nonumber
\end{align}
\begin{align}\label{2019aug201645}
\langle&t\rangle^{-(1/2)-2\delta}
\int_0^t
L(u_3(\tau))^2d\tau\\
&
\leq
C
\sum_{|a|\leq 3}
\|
(\partial Z^a u_3)(0)
\|_{L^2({\mathbb R}^3)}^2\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\int_0^t
\biggl(
\sum_{k=1}^3
L(u_k(\tau))
\biggr)
L(u_3(\tau))
d\tau\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
\biggr)
\int_0^t
\langle\tau\rangle^{-1+\eta+3\delta}
\biggl(
\sum_{i=1}^3
G(u_i(\tau);c_i)
\biggr)
d\tau\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
\biggr)^2\nonumber\\
&
\hspace{0.1cm}
+
C
\langle\!\langle u\rangle\!\rangle_T^2
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T}
{\mathcal N}_3(u(t))
\biggr)
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t)).\nonumber
\end{align}
\end{proposition}
In (\ref{2019aug201645}), we have used the notation
$c_1=c_2=1$, $c_3=c_0$.
The proof of this proposition naturally uses Lemma \ref{ksstype}
with $\mu=1/4$.
With the simple inequality
$r^{2\mu}\langle r\rangle^{-2\mu}\leq 1$,
the contributions from the term
$$
\int_0^T\!\!\!\int_{{\mathbb R}^3}
\frac{|w||\Box_c w|}{r^{1-2\mu}\langle r\rangle^{2\mu}}
dxdt
$$
(see the right-hand side of (\ref{l2spacetime}))
can be handled with use of the Hardy inequality
or the norm (\ref{weightlocalenergynorm}),
and therefore the proof is essentially the same as that of
(\ref{2019aug61027}), (\ref{2019aug61028}),
and (\ref{2019aug61029}).
We may omit the details. $
\Box$
\section{Proof of Theorem \ref{ourmaintheorem}}
Now we are ready to complete the proof of
Theorem \ref{ourmaintheorem} by using the method of
continuity.
By the standard contraction-mapping argument,
it is easy to show that
for any smooth, compactly supported data (\ref{data}),
there exists ${\hat T}>0$
depending on $\|(f,g)\|_D$ such that
the equation (\ref{eq1}) admits
a unique local (in time) solution
$u=(u_1,u_2,u_3)$ defined in
the strip $(0,{\hat T})\times{\mathbb R}^3$
satisfying
$\partial_\alpha Z^a u_i\in
C([0,{\hat T});L^2({\mathbb R}^3))$
($\alpha=0,1,2,3$, $|a|\leq 3$, $i=1,2,3$)
and
${\rm supp}\,u_i(t,\cdot)
\subset
\{x\in{\mathbb R}^3:|x|<R+c^*t\}$
$(i=1,2,3,\,0<t<{\hat T})$.
Here we have set $c^*:=\max\{1,c_0\}$
(see (\ref{eq1}) for $c_0$)
and chosen $R>0$ so that
${\rm supp}\,f_i\cup{\rm supp}\,g_i
\subset\{x\in{\mathbb R}^3:|x|<R\}$,
$i=1,2,3$. Actually, this solution is smooth
in the strip $(0,{\hat T})\times{\mathbb R}^3$,
and it has the important properties
\begin{align}
&
N_\mu(u_1(t)),\,
N_\mu(u_2(t)),\,
N_\mu(u_3(t))
\in
C([0,{\hat T})),\,\,\mu=3,4,\label{2019property1}\\
&
N_4(u_1(0))
+
N_4(u_2(0))
+
N_4(u_3(0))
\leq
C_d\|(f,g)\|_D\label{2019property2}
\end{align}
for a suitable constant $C_d>0$.
We employ the numerical constant $C_{61}$
appearing in (\ref{2019aug171715}) and set
\begin{equation}\label{c*definition}
C^*:=
\max
\biggl\{
2C_d,\,
\frac{2}{3}\sqrt{\frac{4}{3}C_{61}}
\biggr\}
\quad\mbox{so that}\quad
\sqrt{\frac{4}{3}C_{61}}
\leq
\frac{3}{2}C^*.
\end{equation}
On the basis of the properties
(\ref{2019property1})--(\ref{2019property2}),
for the smooth data (\ref{data})
with the support contained in the ball
$\{x\in{\mathbb R}^3:|x|<R\}$,
we can define the non-empty set
of all the numbers $T>0$ such that
there exists a unique smooth solution $u$ to
(\ref{eq1})--(\ref{data}) defined in $(0,T)\times{\mathbb R}^3$
satisfying
\begin{align}
&
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
{\mathcal N}_3(u(t))
\leq
2C^*\|(f,g)\|_D,\label{n4n3estimate}\\
&
\bigcup_{i=1}^3\,{\rm supp}\,u_i(t,\cdot)
\subset
\{x\in{\mathbb R}^3:|x|<R+c^*t\}\label{finitespeedpropagation}
\end{align}
for all $t\in(0,T)$.
We define $T^*\in(0,\infty]$
as the supremum of this non-empty set.
To proceed, we assume
\begin{align}\label{fgsizecondition}
\|(f,g)\|_D
<\varepsilon_0:=\min
\biggl\{
1,\,\frac{1}{8C^*C_{33}C_{60}},\,&
\frac{1}{12C^*C_{60}(C_{31}+2C^*C_{32}C_{60})},\\
&
\frac{1}{2C^*C_{60}C_{62}},\,
\frac{1}{C^*C_{60}C_{63}}
\biggr\}.\nonumber
\end{align}
For the constants appearing above,
see (\ref{mninequality2019aug16}),
(\ref{<<u>>smallerthandata}), and (\ref{2019aug171715}).
We prove
\begin{proposition}\label{propmninequality2019aug17}
Let $u$ be the smooth solution to
$(\ref{eq1}){\rm -}(\ref{data})$
satisfying
$(\ref{n4n3estimate})$ and
$(\ref{finitespeedpropagation})$ for all $t\in (0,T^*)$.
The estimate
\begin{equation}\label{2019mninequality}
{\mathcal M}_\mu(u(t))
\leq
C
{\mathcal N}_\mu(u(t)),
\quad
0<t<T^*
\end{equation}
holds for $\mu=3,4$,
provided that
$\|(f,g)\|_D$ satisfies $(\ref{fgsizecondition})$.
\end{proposition}
\begin{proof}
We proceed closely following the proof of \cite[Proposition 8.1]{HZ2019}.
When the initial data is identically zero and hence
the corresponding solution identically vanishes,
we obviously have (\ref{2019mninequality}).
We may therefore suppose
without loss of generality that
the smooth initial data is not identically zero.
We then have
${\mathcal N}_\mu(u(0))>0$.
Moreover,
we see ${\mathcal N}_\mu(u(t))>0$
for all $t\in (0,T^*)$
by repeating basically the same argument
as in the proof of
Proposition 8.1 in \cite{HZ2019}.
(While the uniqueness theorem of
$C^2$-solutions of John \cite{John1981}, \cite{John1990}
was employed in \cite{HZ2019},
the uniqueness of $H^3\times H^2$-solutions,
which can be shown in the standard way for such systems
of semilinear equations as (\ref{eq1}),
suffices in the present case.)
Therefore, we may suppose without loss of generality
that ${\mathcal N}_\mu(u(t))>0$
for all $t\in [0,T^*)$.
Next, we note the important fact that
${\mathcal M}_\mu(u(t))$ is continuous on the interval
$[0,T^*)$.
This can be easily verified
thanks to the fact that
the smooth solution $u$ satisfies (\ref{finitespeedpropagation})
on the interval $[0,T^*)$ and hence
the uniform continuity of
$\partial_\alpha\partial_x Z^a u_i$
($|a|\leq \mu-2, \alpha=0,\dots,3$)
in such a bounded and closed set as
$\{(t,x):t\in[0,T+\delta],\,|x|\leq
R+c^*t\}$
($\delta$ is a suitable positive constant)
can be utilized in order to show
the continuity of ${\mathcal M}_\mu(u(t))$
at $t=T\in[0,T^*)$.
This is the place where our proof
of Theorem \ref{ourmaintheorem} relies upon the compactness
of the support of data.
Since all the constants appearing in our argument are independent of $R$,
this condition on the support can actually be removed in the standard way.
Now we are ready to prove (\ref{2019mninequality}).
We start with the inequality
$$
{\mathcal M}_\mu(u(t))|_{t=0}\leq
C_{KS}{\mathcal N}_\mu(u(t))|_{t=0}
$$
for the constant $C_{KS}$ appearing in (\ref{mninequality2019aug16}),
which is a direct consequence of (\ref{KSineq}).
(See the second term on the right-hand side of (\ref{KSineq}),
which vanishes at $t=0$.)
Since $\bigl({\mathcal M}_\mu(u(t))/{\mathcal N}_\mu(u(t))\bigr)|_{t=0}
\leq C_{KS}$ and
${\mathcal M}_\mu(u(t))/{\mathcal N}_\mu(u(t))$
is continuous on the interval $[0,T^*)$,
we have ${\mathcal M}_\mu(u(t))/{\mathcal N}_\mu(u(t))
\leq 2C_{KS}$, that is
\begin{equation}\label{2019m2cksn}
{\mathcal M}_\mu(u(t))\leq 2C_{KS}{\mathcal N}_\mu(u(t))
\end{equation}
at least for a short time interval, say,
$[0,{\tilde T}]\subset [0,T^*)$.
It remains to show that (\ref{2019m2cksn}) actually
holds for {\it all} $t\in[0,T^*)$.
Let
\begin{align}
{\bar T}:=
\sup
\{\,T\in(0,T^*)\,:
&
\,
{\mathcal M}_\mu(u(t))\leq 2C_{KS}{\mathcal N}_\mu(u(t))\\
&
\qquad\quad
(\mu=3,4)\,\mbox{for all}\,\,t\in[0,T)
\}.\nonumber
\end{align}
By definition, we know ${\bar T}\leq T^*$.
To show ${\bar T}=T^*$,
we proceed as follows.
By (\ref{<<u>>}), Lemmas \ref{someinequalities2019aug17}
--\ref{traceinequality2019aug17}, and (\ref{n4n3estimate}),
we get for $t\in (0,{\bar T})$
\begin{align}\label{<<u>>smallerthandata}
\langle\!\langle u(t)\rangle\!\rangle
&
\leq
C
\langle t\rangle^{-\delta}
\bigl(
{\mathcal N}_4(u(t))
+
{\mathcal M}_4(u(t))
\bigr)
+
C
\bigl(
{\mathcal N}_3(u(t))
+
{\mathcal M}_3(u(t))
\bigr)\\
&
\leq
C_{60}
\bigl(
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
{\mathcal N}_3(u(t))
\bigr)
\leq
2C^*C_{60}
\|(f,g)\|_D.\nonumber
\end{align}
Here, $C_{60}$ is a suitable positive constant.
Owing to the size condition (\ref{fgsizecondition}),
Proposition \ref{2019june25mnineq}
combined with the last inequality (\ref{<<u>>smallerthandata})
immediately yields for $\mu=3,4$
\begin{equation}\label{mn32inequality}
{\mathcal M}_\mu(u(t))
\leq
\frac32
C_{KS}
{\mathcal N}_\mu(u(t)),
\quad
0<t<{\bar T}.
\end{equation}
Since ${\mathcal M}_\mu(u(t))/{\mathcal N}_\mu(u(t))$
is continuous on the interval $[0,T^*)$,
we have finally arrived at the conclusion
${\bar T}=T^*$.
Indeed, if we assume ${\bar T}<T^*$,
then the estimate (\ref{mn32inequality})
contradicts the definition of ${\bar T}$.
We have finished the proof of
Proposition \ref{propmninequality2019aug17}.
\end{proof}
Now we are going to prove the crucial a priori estimate
\begin{equation}\label{2019crucial1443}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
{\mathcal N}_3(u(t))
\leq
\frac32
C^*\|(f,g)\|_D,
\quad
0<t<T^*.
\end{equation}
This estimate combined with the standard local existence theorem
will immediately imply $T^*=\infty$, i.e., global existence.
Just for simplicity, we use the notation
\begin{align*}
&
{\mathcal G}(t)
:=
\langle t\rangle^{-\delta}
\|
G(u_1(\cdot);1)
\|_{L^2((0,t))}
+
\|
G(u_2(\cdot);1)
\|_{L^2((0,t))}
+
\|
G(u_3(\cdot);c_0)
\|_{L^2((0,t))},\\
&
{\mathcal L}(t)
:=
\langle t\rangle^{-(1/4)}
\bigl(
\langle t\rangle^{-\delta}
\|
L(u_1(\cdot))
\|_{L^2((0,t))}
+
\|
L(u_2(\cdot))
\|_{L^2((0,t))}
+
\|
L(u_3(\cdot))
\|_{L^2((0,t))}
\bigr).
\end{align*}
Without loss of generality, we may suppose $T^*>1$
because we are considering solutions with small data.
It then follows from (\ref{2019Nov17LowEnergy}),
(\ref{2019Nov17HighEnergy}),
(\ref{2019aug201643}),
(\ref{2019aug201644}), and
(\ref{2019aug201645})
that for any $T$ with $1<T<T^*$ we have
\begin{align}\label{2019aug171715}
\biggl(&
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T}
{\mathcal N}_3(u(t))
\biggr)^2\\
&
+
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal G}(t)
\biggr)^2
+
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal L}(t)
\biggr)^2\nonumber\\
&
\leq
C_{61}
\|(f,g)\|_D^2
+
C_{62}
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal L}(t)
\biggr)^2\nonumber\\
&
+
C_{63}
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
\biggr)
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal G}(t)
\biggr)\nonumber\\
&
+
C_{64}
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T}
{\mathcal N}_3(u(t))
\biggr)^2.\nonumber
\end{align}
Here the positive constants $C_{6i}$ $(i=1,\dots,4)$ are independent
of $T$.
We note that, since $\delta$ and $\eta$ are so small,
the idea of decomposing the interval $[1,T]$ dyadically
has played an important role here, as in such previous papers
as \cite[p.\,363]{Sogge2003}, \cite[(122)--(125)]{HZ2019}.
For any $T$ with $T<T^*$,
we easily see
$$
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal G}(t),
\quad
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal L}(t)
<\infty
$$
and it is therefore possible
to move
the second and the third terms
on the right-hand side of (\ref{2019aug171715})
to its left-hand side.
Using the estimate (\ref{<<u>>smallerthandata}),
which holds for all $t\in(0,T^*)$,
and (\ref{fgsizecondition}),
we thereby obtain
\begin{align}
\biggl(&
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T}
{\mathcal N}_3(u(t))
\biggr)^2\\
&
\leq
C_{61}
\|(f,g)\|_D^2\nonumber\\
&
\hspace{0.1cm}
+
\biggl(
\frac{1}{2}C_{63}+C_{64}
\biggr)
\langle\!\langle u\rangle\!\rangle_T
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T}
{\mathcal N}_3(u(t))
\biggr)^2,\nonumber
\end{align}
which immediately implies
\begin{equation}
\frac34
\biggl(
\sup_{0<t<T}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T}
{\mathcal N}_3(u(t))
\biggr)^2
\leq
C_{61}
\|(f,g)\|_D^2
\end{equation}
thanks to (\ref{<<u>>smallerthandata}) and (\ref{fgsizecondition}).
Since $T(<T^*)$ is arbitrary and
the constant $C_{61}$ is independent of $T$,
we finally obtain
\begin{equation}
\sup_{0<t<T^*}
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
\sup_{0<t<T^*}
{\mathcal N}_3(u(t))
\leq
\sqrt{\frac{4}{3}C_{61}}\|(f,g)\|_D
\leq
\frac32
C^*
\|(f,g)\|_D.
\end{equation}
See (\ref{c*definition}).
Now we are in a position to show $T^*=\infty$.
Assume $T^*<\infty$.
By solving (\ref{eq1}) with data
$(u_i(T^*-\delta,x),(\partial_t u_i)(T^*-\delta,x))
\in C_0^\infty({\mathbb R}^3)\times C_0^\infty({\mathbb R}^3)$
given at $t=T^*-\delta$ ($\delta$ is a sufficiently small
positive constant),
we can extend the local solution under consideration
smoothly to a larger strip,
say,
$\{(t,x):\,0<t<{\tilde T},\,x\in{\mathbb R}^3\}$,
where $T^*<{\tilde T}$.
The local solution thereby extended satisfies
\begin{align*}
&
N_\mu(u_1(t)),\,
N_\mu(u_2(t)),\,
N_\mu(u_3(t))
\in
C([0,{\tilde T})),\,\,\mu=3,4,\\
&
\bigcup_{i=1}^3\,{\rm supp}\,u_i(t,\cdot)
\subset
\{x\in{\mathbb R}^3:|x|<R+c^*t\},\quad
0<t<{\tilde T}.
\end{align*}
Since
$\bigl(
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
{\mathcal N}_3(u(t))
\bigr)|_{t=T^*}
\leq (3/2)C^*\|(f,g)\|_D$
by (\ref{2019crucial1443})
and
$
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
{\mathcal N}_3(u(t))
\in
C([0,{\tilde T}))
$,
we see that there exists $T'\in (T^*,{\tilde T}]$
such that
$
\langle t\rangle^{-\delta}
{\mathcal N}_4(u(t))
+
{\mathcal N}_3(u(t))
\leq 2C^*\|(f,g)\|_D$
for all
$t\in (0,T')$,
which contradicts the definition of $T^*$.
Hence we have $T^*=\infty$.
We have finished the proof.$
\Box$
\end{document}
\begin{document}
\title{Quantum mechanics in phase space: \\First order comparison between
the Wigner and the Fermi function}
\author{Giuliano Benenti}
\email{[email protected]}
\affiliation{CNISM, CNR-INFM, and Center for Nonlinear and Complex Systems,
Universit\`a degli Studi dell'Insubria, via Valleggio 11, I-22100 Como, Italy}
\affiliation{Istituto Nazionale di Fisica Nucleare, Sezione di Milano,
via Celoria 16, I-20133 Milano, Italy}
\author{Giuliano Strini}
\email{[email protected]}
\affiliation{Dipartimento di Fisica, Universit\`a degli Studi di Milano,
via Celoria 16, I-20133 Milano, Italy}
\date{\today}
\begin{abstract}
The Fermi $g_F(x,p)$ function provides a phase space description of quantum mechanics
conceptually different from that based on the Wigner function $W(x,p)$.
In this paper, we show that for a peaked wave packet the $g_F(x,p)=0$ curve
approximately corresponds to a phase space contour level of the Wigner function
and provides a satisfactory description of the wave packet's size and shape.
Our results show that the Fermi function is an interesting tool to
investigate quantum fluctuations in the semiclassical regime.
\end{abstract}
\pacs{03.65.-w, 03.65.Sq}
\maketitle
\section{Introduction}
The Wigner phase space representation of
quantum mechanics~\cite{wigner,wignerreview,zachos}
is a very useful and enlightening approach.
It is of practical interest in the description of a broad range
of physical phenomena, including quantum transport processes in
quantum optics~\cite{schleich}
and condensed matter~\cite{jacoboni},
quantum chaos~\cite{saraceno},
quantum complexity~\cite{benenti},
decoherence~\cite{zurek},
quantum computation~\cite{saraceno2,terraneo},
and quantum tomography~\cite{wootters}.
Furthermore, the phase space approach brings
out most clearly the differences and similarities between classical and
quantum mechanics and offers unique insights into the classical limit
of quantum theory~\cite{heller,berry,brumer,davidovich,habib}.
The Wigner phase space distribution function
of a quantum state described by a state vector $|\psi\rangle$ reads
\begin{equation}
W(x,p)\equiv \frac{1}{2\pi\hbar}\int_{-\infty}^\infty
dy e^{-\frac{i}{\hbar}py}
\psi\left(x+\frac{y}{2}\right)
\psi^\star\left(x-\frac{y}{2}\right).
\label{wignerfunction}
\end{equation}
(For the sake of simplicity, we consider the case
of a single particle moving along a straight line).
The Wigner function provides a pictorial phase-space representation of
the abstract notion of a quantum state and allows us to compute the
quantum mechanical expectation values of observables in terms of
phase space-averages.
A different, almost unknown phase space approach is
based on an old paper by Fermi~\cite{fermi}.
As pointed out by Fermi, the state of a quantum system
may be defined in two completely equivalent ways:
by its wave function $\psi(x)=\langle x | \psi\rangle$ or
by measuring a physical quantity $g_F(x,p)$.
Given the measurement outcome $g_F(x,p)=\bar{g}$,
$\psi(x)$ is obtained as solution of the eigenvalue equation
$g_F(x,p)\psi(x)=\bar{g}\psi(x)$, where
$p=-i\hbar {\partial_x}$.
On the other hand, given the wave function $\psi(x)$
it is always possible to find an operator $g_F(x,p)$ such that
\begin{equation}
g_F(x,p) \psi(x)=0.
\label{goperator}
\end{equation}
Using the polar decomposition
\begin{equation}
\psi(x)=R(x)e^{\frac{i}{\hbar} S(x)},
\end{equation}
where $R(x)$ and $S(x)$ are real functions
[$R(x) \ge 0$ for any $x$], it is easy to
check that identity (\ref{goperator}) is fulfilled by taking
\begin{equation}
{g}_F\left(x,-i\hbar \partial_x\right)=
\left[-i\hbar \partial_x -
S'(x)\right]^2+\hbar^2
\frac{R''(x)}{R(x)}.
\label{gfoperator}
\end{equation}
Equation (\ref{goperator}) implies that the corresponding
physical quantity $g_F(x,p)$ takes the value $\bar{g}=0$.
The equation
\begin{equation}
g_F(x,p)=
\left[p- S'(x)\right]^2+\hbar^2
\frac{R''(x)}{R(x)}=0
\label{gfimplicit}
\end{equation}
defines a curve in the two-dimensional phase space.
In other words, as expected from Heisenberg uncertainty principle,
we cannot identify a quantum particle by means of a
phase-space point $(x,p)$
but we need a curve, $g_F(x,p)=0$.
Note that it is also possible to write equation (\ref{gfimplicit}) in the form
\begin{equation}
p_\pm= S'(x)\pm
\sqrt{-\hbar^2\frac{R''(x)}{R(x)}}
=m v_M \pm \sqrt{2mV_Q},
\label{ppmFermi}
\end{equation}
where $m$ is the particle mass, $v_M\equiv \frac{1}{m} S'$
the Madelung velocity~\cite{madelung}, and
$V_Q\equiv -\frac{\hbar^2}{2m}\frac{R''}{R}$
the so-called quantum-mechanical
potential~\cite{bohm}.
Equation (\ref{ppmFermi}) locates two points, $(x,p_+)$ and
$(x,p_-)$, in the phase space for any $x$ such that $R''(x)<0$ and
$R(x)\ne 0$.
The phase space Fermi function $g_F(x,p)$
and the Wigner function $W(x,p)$ are
at first sight unrelated.
In particular, for the Fermi function there is
no interpretation in terms of quasiprobabilities as for
the Wigner function.
On the other hand, for a Gaussian wave packet the
$g_F(x,p)=0$ curve is an ellipse
of area $\pi\hbar$~\cite{strini1,strini2}
and coincides with the phase-space contour level along which
$W(x,p)=W_{\rm max}/e$, with $W_{\rm max}$ equal to the
maximum value of $W$.
Different contour levels of $W$ correspond
to different ``equipotential curves'' $g_{F}={\rm constant}$.
The purpose of the present paper is to show that a
similar relation exists when the Gaussian
shape of the wave packet is modified, provided the wave packet remains
peaked. Finally, we will comment
on the significance of our results in the context of semiclassical
approximations of quantum mechanics.
\section{Gaussian packets}
Let us first consider the Gaussian packet
\begin{equation}
R(x)=G(x)\equiv\frac{1}{\sqrt{\sqrt{\pi}\delta}}
e^{-\frac{(x-x_0)^2}{2\delta^2}},\;\;
S(x)=p_0 x.
\label{eq:gaussian}
\end{equation}
In this case Wigner function (\ref{wignerfunction}) reads
\begin{equation}
W(x,p)=\frac{1}{\pi\hbar}
e^{-\frac{(x-x_0)^2}{\delta^2}-\frac{\delta^2(p-p_0)^2}{\hbar^2}},
\label{Wgaussian}
\end{equation}
while the Fermi function is given by
\begin{equation}
g_F(x,p)=\frac{\hbar^2(x-x_0)^2}{\delta^4}+
(p-p_0)^2-\frac{\hbar^2}{\delta^2}.
\label{gFgaussian}
\end{equation}
It is clear from Eqs.~(\ref{Wgaussian}) and (\ref{gFgaussian})
that for Gaussian packet (\ref{eq:gaussian}) we have
\begin{equation}
W(x,p)=\frac{1}{\pi e\hbar }e^{-\frac{\delta^2}{\hbar^2} g_F(x,p)}.
\label{wgf}
\end{equation}
Therefore,
there is a one to one correspondence between
the ``equipotential curves'' $g_F(x,p)=K$ and
$W(x,p)=C$, with $C=\frac{1}{\pi e\hbar}e^{-\frac{\delta^2}{\hbar^2} K}$.
In particular, the $g_F=0$ curve coincides with the
curve $W=\frac{1}{\pi e\hbar}=\frac{W_{\rm max}}{e}$,
with $W_{\rm max}=W(x_0,p_0)$ maximum value of $W$.
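This one-to-one correspondence is easy to check numerically. The following short script is a minimal sketch (the values of $\hbar$, $\delta$, $x_0$, $p_0$ are arbitrary illustrative choices, not taken from the figures): it evaluates (\ref{Wgaussian}) and (\ref{gFgaussian}) on a phase-space grid and confirms relation (\ref{wgf}) to machine precision.
\begin{verbatim}
import numpy as np

# Illustrative check of Eq. (wgf); hbar, delta, x0, p0 are arbitrary sample values.
hbar, delta, x0, p0 = 1.0, 0.7, 0.3, -0.5

x = np.linspace(-4.0, 4.0, 201)
p = np.linspace(-4.0, 4.0, 201)
X, P = np.meshgrid(x, p)

# Wigner function of the Gaussian packet, Eq. (Wgaussian)
W = np.exp(-(X - x0)**2 / delta**2
           - delta**2 * (P - p0)**2 / hbar**2) / (np.pi * hbar)

# Fermi function of the same packet, Eq. (gFgaussian)
gF = hbar**2 * (X - x0)**2 / delta**4 + (P - p0)**2 - hbar**2 / delta**2

# Right-hand side of Eq. (wgf)
W_from_gF = np.exp(-delta**2 * gF / hbar**2) / (np.pi * np.e * hbar)

print(np.max(np.abs(W - W_from_gF)))   # of the order of machine precision
\end{verbatim}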
\section{Non-Gaussian packets}
We now discuss the relation between the Wigner and the Fermi function
when the wave packet is peaked but not Gaussian.
Assuming a smooth, regular behavior of the packet around its maximum,
we choose the analytic expression
$R(x)=C G(x)[1+P(x)]$, with $P(x)$ polynomial [chosen so that $R(x)\ge 0$
for any $x$] and $C$ normalization constant which is irrelevant
for our purposes, while
$S(x)$ is a polynomial.
The Wigner function is then given by the Fourier transform
(\ref{wignerfunction}) of
\begin{equation}
\begin{array}{c}
\psi\left(x+\frac{y}{2}\right)
\psi^\star\left(x-\frac{y}{2}\right)=
C^2 G\left(x+\frac{y}{2}\right)
G\left(x-\frac{y}{2}\right)
\\
\\
\times \left[1+
P\left(x+\frac{y}{2}\right)+
P\left(x-\frac{y}{2}\right)+
P\left(x+\frac{y}{2}\right) P\left(x-\frac{y}{2}\right)
\right]
\\
\\
\times \, e^{\frac{i}{\hbar}\left[
S\left(x+\frac{y}{2}\right)-
S\left(x-\frac{y}{2}\right)\right]}.
\end{array}
\end{equation}
\subsection{Gaussian $R$}
We computed numerically $W(x,p)$
for several functions $P(x)$, $S(x)$
and found that it strongly depends on $S(x)$, while the dependence
on $P(x)$ is weak, as far as the wave packet remains peaked.
Therefore, we first focus on the case $R(x)=G(x)$.
\begin{figure}
\caption{Plots of the Wigner function (left) and of the
$g_F=0$ curve (right, thick full curves) for $R$ Gaussian and various
$S$.
}
\label{fig1}
\end{figure}
Wigner functions for $P(x)=0$, namely
\begin{equation}
\psi(x)=G(x)e^{\frac{i}{\hbar}S(x)},
\label{GaussianS}
\end{equation}
and several $S(x)$ are shown in Fig.~\ref{fig1} (left plot)
and compared with the corresponding $g_F=0$ curves (right plots).
Even though the wave packets in Fig.~\ref{fig1},
with the exception of the top plot [$S(x)\propto x^2$,
corresponding to a squeezed state],
are far from being Gaussian, the $g_F=0$ curve still
provides a rather satisfactory description of size and
shape of the wave packet in phase space.
This agreement can be explained by the following argument.
If we set $P(x)=0$ and consider the expansion
\begin{equation}
\begin{array}{c}
S\left(x+\frac{y}{2}\right)
-S\left(x-\frac{y}{2}\right)
\\
\\
=S'(x)y+\frac{1}{24}S'''(x)y^3+O(y^5)
\approx S'(x)y,
\end{array}
\end{equation}
we obtain
\begin{equation}
\psi\left(x+\frac{y}{2}\right)
\psi^\star\left(x-\frac{y}{2}\right)
\approx
F(x,y)
e^{\frac{i}{\hbar} S'(x)y},
\label{shift}
\end{equation}
where
$F(x,y)\equiv G\left(x+\frac{y}{2}\right)G\left(x-\frac{y}{2}\right)$.
Therefore, the shift theorem of Fourier transform implies that, if
\begin{equation}
{\cal F}_y[F(x,y)]=W_G(x,p),
\end{equation}
with ${\cal F}_y$ Fourier transform with respect to the $y$-variable,
then
\begin{equation}
{\cal F}_y\left[F(x,y)e^{\frac{i}{\hbar}S'(x)y}\right]=W_G[x,p-S'(x)].
\end{equation}
We can therefore conclude that the Wigner function corresponding to
wave vector (\ref{GaussianS}) reads
\begin{equation}
W(x,p)\approx W_G[x,p-S'(x)]
=\frac{1}{\pi\hbar}
e^{-\frac{(x-x_0)^2}{\delta^2}-\frac{\delta^2 [p-S'(x)]^2}{\hbar^2}}.
\end{equation}
Hence connection (\ref{wgf}) between the Wigner and the Fermi
functions approximately holds around the peak of the wave packet.
The rather good agreement between the $g_F=0$ curve and the contour
level $W=\frac{W_{\rm max}}{e}$ is shown in the right plots of
Fig.~\ref{fig1}.
We can conclude that, for peaked packets, the $g_F=0$ curve is close
to an equipotential curve of the Wigner function, enclosing a phase
space area of the order of Planck's constant.
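The argument above can also be checked numerically. The sketch below (with arbitrary illustrative parameters and a cubic phase $S(x)$; it is not the code used for Fig.~\ref{fig1}) evaluates the Wigner integral (\ref{wignerfunction}) by direct quadrature and compares it with the shifted Gaussian $W_G[x,p-S'(x)]$; the residual difference comes from the neglected $\frac{1}{24}S'''(x)y^3$ term.
\begin{verbatim}
import numpy as np

# Minimal sketch: Wigner function of psi = G(x) exp(i S(x)/hbar) by direct
# quadrature of Eq. (wignerfunction), compared with W_G[x, p - S'(x)].
# All numerical values are arbitrary illustrative choices.
hbar, delta, x0 = 1.0, 1.0, 0.0
S  = lambda x: 0.1 * x**3              # a cubic phase
Sp = lambda x: 0.3 * x**2              # S'(x)
G  = lambda x: np.exp(-(x - x0)**2 / (2 * delta**2)) / np.sqrt(np.sqrt(np.pi) * delta)
psi = lambda x: G(x) * np.exp(1j * S(x) / hbar)

x = np.linspace(-5.0, 5.0, 121)
p = np.linspace(-6.0, 6.0, 121)
y = np.linspace(-20.0, 20.0, 4001)
dy = y[1] - y[0]

W = np.empty((p.size, x.size))
for i, xi in enumerate(x):
    corr = psi(xi + y / 2) * np.conj(psi(xi - y / 2))
    W[:, i] = (np.exp(-1j * np.outer(p, y) / hbar) @ corr).real * dy / (2 * np.pi * hbar)

# shift-theorem prediction: a Gaussian in p centred at S'(x)
W_pred = np.exp(-(x[None, :] - x0)**2 / delta**2
                - delta**2 * (p[:, None] - Sp(x)[None, :])**2 / hbar**2) / (np.pi * hbar)

print(np.max(np.abs(W - W_pred)))      # nonzero: the O(y^3) phase term is neglected
\end{verbatim}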
\subsection{Non-Gaussian $R$, $S=0$}
We have seen numerically that the dependence of the Wigner function
on $P(x)$ is weak, as far as the wave packet remains peaked.
As an example, we consider $P(x)=1+a\frac{(x-x_0)^2}{\delta^2}$, with
$a>0$ so that $R(x)>0$ for any $x$, and $S=0$. For $a<\frac{1}{2}$
the wave function $\psi(x)=R(x)$ has a single maximum at $x=0$
and the $g_F=0$ curve again gives a good representation of
the phase-space size and shape of the wave packet (see the top
plots of Fig.~\ref{fig2}). On the other hand, for $a>\frac{1}{2}$
the wave function exhibits two maxima at
$x_{\pm}=\pm \delta\sqrt{2-\frac{1}{a}}$.
For $a\gg 1$ the two maxima are well separated.
Such a ``cat state'' exhibits non-classical features which impact on
the structure of the Wigner function (see Fig.~\ref{fig2} bottom left,
for $a=10$). Since $W(x,p=0)$ is the autocorrelation function of
$R(x)$, it reaches its maximum at $x=0$. On the other hand,
the marginal $\int dp W(x,p)=|\psi(x)|^2=R^2(x)$ exhibits a minimum
at $x=0$ and this is possible thanks to the negative regions of
$W(x,p)$ (the white regions in Fig.~\ref{fig2} bottom left).
For this cat state the $g_F=0$ curve captures the two peaks
at $x_\pm$ (see Fig.~\ref{fig2} bottom
right) but not the non-classical phase-space structures of the
Wigner function.
\begin{figure}
\caption{Plots of the Wigner function (left) and of the
$g_F=0$ curve (right, thick full curves) for
$P(x)=1+a(x-x_0)^2/\delta^2$ and $S=0$, with $a<\frac{1}{2}$ (top) and $a=10$ (bottom); see text.}
\label{fig2}
\end{figure}
\begin{figure}
\caption{Plots of the Wigner function (left) and of the
$g_F=0$ curve (right) for the quartic oscillator,
with $\hbar \lambda/\omega=0.01$.
}
\label{fig3}
\end{figure}
\section{A dynamical example: the quartic oscillator}
The $g_F=0$ curve is an interesting tool to investigate quantum fluctuations
in the semiclassical region. As far as the wave packet remains peaked,
that is, before the Ehrenfest time scale~\cite{berman},
its size and phase-space shape can be readily derived from the wave
function (or from a semiclassical approximation of the wave function),
without computing the whole Wigner function.
Moreover, the distortion of the wave packet
is directly related to $S'(x)$, that is, to the Madelung's
velocity.
As a numerical illustration
of the capability of the $g_F=0$ curve to capture relevant features of
quantum fluctuations, we follow in Fig.~\ref{fig3} the evolution
of the Wigner and Fermi functions for the quartic oscillator Hamiltonian
\begin{equation}
H=\hbar\omega a^\dagger a + \hbar^2 \lambda (a^\dagger)^2 a^2.
\label{eq:quartic}
\end{equation}
Here, $a=\sqrt{\frac{1}{\hbar m \omega}}(m\omega x + i p)$,
$\omega$ is the frequency of the harmonic part of the oscillator
and $\hbar \lambda$ gives the strength of the nonlinearity.
This model has been widely investigated in the context of quantum
to classical transition~\cite{berman,milburn,angelo,oliveira}
and also used to explain important experimental results~\cite{bloch}.
Model (\ref{eq:quartic}) is integrable, see Refs.~\cite{milburn,oliveira}
for the evolution of classical and quantum phase-space distributions.
Details on the computation of the Fermi function are given in the
Appendix.
In Fig.~\ref{fig3} we compare the evolution of the Wigner function with
the evolution of the $g_F=0$ Fermi function, for the quartic oscillator,
starting from an initial Gaussian wave packet
$|\alpha\rangle$ ($a|\alpha\rangle=\alpha|\alpha\rangle$).
It is clear that, as far
as the wave packet remains peaked, the $g_F=0$ curve reproduces both
size and shape of quantum fluctuations. This is the case for times
smaller than the Ehrenfest time scale
$t_E\sim\frac{1}{\hbar\lambda |\alpha|}$~\cite{berman},
until which the centroid of the wave packet follows a classical
trajectory. For longer times the Wigner function develops interference
fringes, while the $g_F=0$ function splits into several curves.
We point out that the $g_F(x,p)=0$ function (\ref{ppmFermi}) singles out
only two $p$-values (or none) for any $x$. Therefore, it cannot reproduce
the whole phase space structure of the wave packet when the Wigner
function does not exhibit a single peak but a non-monotonous behavior
along $p$ [see the bottom plot of Fig.~\ref{fig3} (left)].
\section{Concluding remarks}
In summary, we have shown that the phase space structure of a peaked
wave packet can be satisfactorily described
by the $g_F=0$ Fermi curve.
In spite of the fact that the Wigner and the Fermi functions
are at first sight two completely unrelated phase space descriptions of
quantum mechanics, a link between them
exists and is based on the shift
theorem of the Fourier transform. This theorem also allows us to
understand the shape of the Wigner function for perturbed
Gaussian packets in terms of the Madelung's velocity.
Our theoretical results, corroborated by numerical simulations
for the quartic oscillator model, show that the Fermi function is
an interesting tool to investigate the phase-space size and shape
of quantum fluctuations in the semiclassical regime.
While the Fermi function $g_F(x,p)$ fully determines the state of
a quantum system, the extension of the results obtained in this
paper to generic states encounters difficulties.
Knowledge of the $g_F(x,p)=0$ curve is in general not sufficient.
The complete determination
of the state of a system requires the extension of this curve
to the complex $p$-plane. That is,
consideration of the complex values of $p_{\pm}$, obtained
from Eq.~(\ref{ppmFermi}) when $R'' >0$, is needed.
We then obtain
$S' = \frac{p_++p_-}{2}$,
$\hbar^2\frac{R''}{R}=
-\left(\frac{p_+-p_-}{2}\right)^2$,
from which the $g_F$ operator (\ref{gfoperator}) and consequently
the wave function $\psi(x)$ are determined.
Any phase space description of quantum mechanics necessarily
involves features beyond classical intuition: negative quasiprobabilities
in the case of the Wigner function, complex momenta for the Fermi
curve.
\appendix
\section{Fermi function for the quartic oscillator}
We consider as initial condition a Gaussian state
(coherent state for the harmonic part of the oscillator),
$|\alpha\rangle = \sum_{n=0}^\infty c_n |n\rangle$,
with $c_n=e^{-\frac{1}{2} |\alpha|^2}\frac{\alpha^n}{\sqrt{n !}}$,
$a|\alpha\rangle = \alpha |\alpha\rangle$, $H|n\rangle = E_n |n\rangle$,
$E_n=\hbar \omega n + \hbar^2 \lambda n(n-1)$. The state of the quartic
oscillator at time $t$ is then given by
$|\psi(t)\rangle = \sum_{n=0}^\infty c_n e^{-\frac{i}{\hbar} E_n t}|n\rangle$.
In the coordinate representation,
\begin{equation}
\begin{array}{c}
\phi_n(x)\equiv \langle x | n \rangle
\\
\\
=
\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}
\frac{1}{2^{n/2}\sqrt{n!}}
H_n\left(\sqrt{\frac{m\omega}{\hbar}} x\right)
e^{-\frac{1}{2}\frac{m \omega}{\hbar} x^2},
\end{array}
\end{equation}
where $H_n$ denotes the $n$-th Hermite polynomial.
In order to compute the Fermi function, we write $\psi=\psi_R+i \psi_I$,
so that
\begin{equation}
\begin{array}{c}
\frac{R''}{R}=-\frac{1}{(\psi_R^2+\psi_I^2)^2}
(\psi_R \psi_R'+\psi_I \psi_I')^2
\\
\\
+\frac{1}{\psi_R^2+\psi_I^2}
\left[(\psi_R')^2+(\psi_I')^2+\psi_R \psi_R''+ \psi_I \psi_I''\right],
\end{array}
\end{equation}
\begin{equation}
S'=\frac{\psi_R \psi_I' - \psi_R' \psi_I}{\psi_R^2+\psi_I^2}.
\end{equation}
Finally, the $g_F=0$ curve is obtained from (\ref{gfimplicit}).
Note that
in computing the derivatives of $\psi_R$ and $\psi_I$, we took advantage
of the relation $H_n'= 2 n H_{n-1}$. This property of Hermite polynomials
allowed us to avoid numerical errors in the computation of the derivatives
$R''$ and $S'$.
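For reference, the computation described above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code: the values of $\alpha$, $t$ and the truncation $n_{\max}$ are arbitrary, and the derivatives are taken here by finite differences instead of the exact relation $H_n'=2nH_{n-1}$.
\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite import hermval
from scipy.special import factorial

# Minimal sketch of the Appendix recipe for the g_F = 0 curve of the quartic
# oscillator.  hbar = m = omega = 1; lam = hbar*lambda/omega; alpha, t, nmax
# are arbitrary illustrative choices.
hbar = m = omega = 1.0
lam, alpha, t, nmax = 0.01, 2.0, 5.0, 40

n = np.arange(nmax)
c = np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt(factorial(n))
E = hbar * omega * n + hbar**2 * lam * n * (n - 1)

x  = np.linspace(-8.0, 8.0, 2001)
xi = np.sqrt(m * omega / hbar) * x
norm = (m * omega / (np.pi * hbar))**0.25 / np.sqrt(2.0**n * factorial(n))
# psi(x,t) = sum_n c_n exp(-i E_n t / hbar) phi_n(x), assembled as a Hermite series
psi = hermval(xi, c * np.exp(-1j * E * t / hbar) * norm) * np.exp(-xi**2 / 2)

d = lambda f: np.gradient(f, x)                      # numerical derivative
psiR, psiI = psi.real, psi.imag
Sp   = (psiR * d(psiI) - d(psiR) * psiI) / (psiR**2 + psiI**2)   # S'(x)
R    = np.abs(psi)
RppR = d(d(R)) / R                                   # R''(x)/R(x)

disc = -hbar**2 * RppR                               # term under the root in Eq. (ppmFermi)
ok = disc > 0
p_plus, p_minus = Sp[ok] + np.sqrt(disc[ok]), Sp[ok] - np.sqrt(disc[ok])
# (x[ok], p_plus) and (x[ok], p_minus) trace the g_F = 0 curve at time t
\end{verbatim}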
\end{document}
\begin{document}
\frenchspacing
\title{Exponential dichotomy for dynamically defined matrix-valued Jacobi operators}
\begin{abstract} We present in this work a proof of the exponential dichotomy for dynamically defined matrix-valued Jacobi operators in $(\mathbb{C}^{l})^{\mathbb{Z}}$, given for each $\omega \in \Omega$ by the law
\[[H_{\omega} \textbf{u}]_{n} := D(T^{n - 1}\omega) \textbf{u}_{n - 1} + D(T^{n}\omega) \textbf{u}_{n + 1} + V(T^{n}\omega) \textbf{u}_{n},\] where $\Omega$ is a compact metric space, $T: \Omega \rightarrow \Omega$ is a minimal homeomorphism and $D, V: \Omega \rightarrow M(l, \mathbb{R})$ are continuous maps with $D(\omega)$ invertible for each $\omega\in\Omega$. Namely, we show that for each $\omega\in\Omega$, the resolvent set of $H_\omega$ corresponds to the subset of $\mathbb{C}$ for which $(T,A_z)$, the $SL(2l,\mathbb{C})$-cocycle induced by the eigenvalue equation $[H_\omega u]_n=zu_n$ at $z\in\mathbb{C}$, is uniformly hyperbolic.
\end{abstract}
\section{Introduction}
The so-called exponential dichotomy for dynamically defined self-adjoint operators consists in identifying their spectra with the subset of $\mathbb{C}$ for which the associated cocycle is non-uniformly hyperbolic; roughly, this means that if a solution to the eigenvalue equation associated with an element of a family of dynamically defined self-adjoint operators, say $\{H_\omega\}$ (where $\omega$ belongs to a compact metric space $\Omega$), at $z$ in the spectrum of $H_\omega$ decays or grows exponentially at $\pm\infty$, then its rate of exponential decay or growth depends on $\omega$ (see Definition~\ref{def.cresc.uni}).
Such relations between differential equations and spectral properties of the corresponding operators were firstly explored in~\cite{sell74,sell76ii,sell76iii,sell78iv}, where the concept of Sacker-Sell spectrum was presented; we also refer to \cite{palmer82} and \cite{sell80}, where the authors have explored similar relations between spectrum and hyperbolicity.
It was in~\cite{jonhson86} that these ideas were first presented for dynamically defined scalar Schr\"odinger operators, for which there is a spectral characterization of the spectrum in terms of the existence of a dominated splitting for the respective cocycle. In \cite{marx14}, there is an extension of this result to dynamically defined scalar Jacobi operators, including also the singular case. In \cite{haro13}, there is another extension of this result, now to matrix-valued operators with dynamically defined potentials, based on the results of \cite{sell76ii}.
In the scope of the theory of dynamical systems, the equivalence between the existence of a dominated splitting and hyperbolicity is a subject of great interest; one of the most important references on this subject is \cite{shub70}. In \cite{pesin07}, a connection between uniform hyperbolicity and the existence of families of invariant cones is presented.
In this work, our main goal is to show a version of the exponential dichotomy for a special class of dynamically defined matrix-valued Jacobi operators. Recall that a matrix-valued Jacobi operator in $l^{2}(\mathbb{Z}; \mathbb{C}^{l})$ is given by the law
\begin{equation}
\label{eq.ope.din.jacobi}
[H \textbf{u}]_{n} := D_{n - 1} \textbf{u}_{n - 1} + D_{n} \textbf{u}_{n + 1} + V_{n} \textbf{u}_{n},
\end{equation}
where $(D_{n})_{n \in \mathbb{Z}}$ and $(V_{n})_{n \in \mathbb{Z}}$ are sequences in $M(l, \mathbb{R})$ (the linear space of $l\times l$ matrices with real entries), with each $V_{n}$ being self-adjoint (see~\cite{marx15}). In~\cite{VieiraCarvalho2}, a characterization of the absolutely continuous spectrum (including multiplicity) of this class of matrix-like operators was presented.
Now, let $\Omega \neq \emptyset$ be an arbitrary set, $T: \Omega \rightarrow \Omega$ be an invertible map,
$l \in \mathbb{N}$ and let $D, V: \Omega \rightarrow M(l, \mathbb{R})$. A dynamically defined matrix-valued Jacobi operator is a family of operators in $(\mathbb{C}^{l})^{\mathbb{Z}}$ given for each $\omega \in \Omega$ by the law
\begin{equation}
\label{eq.ope.din}
[H_{\omega} \textbf{u}]_{n} := D(T^{n - 1}\omega) \textbf{u}_{n - 1} + D(T^{n}\omega) \textbf{u}_{n + 1} + V(T^{n}\omega) \textbf{u}_{n}.
\end{equation}
If, for each $\omega\in\Omega$, $D(\omega)$ is invertible, then for each $z \in \mathbb{C}$ one may define a cocycle related with $H_\omega$ by the law
\begin{equation}
\label{eq.cociclo.din}
\begin{array}{lllll}
(T, A_z): & \Omega \times \mathbb{C}^{2l} & \rightarrow & \Omega \times \mathbb{C}^{2l}\\
&(\omega, \textbf{u}) & \mapsto & (T(\omega), A(z, \omega)\textbf{u}),
\end{array}
\end{equation}
where the map $A_z: \Omega \rightarrow M(2l, \mathbb{C})$ is given by
\begin{equation}
\label{def.func.mat.coci}
A_z(\omega)
:=
\left[
\begin{array}{cc}
D^{-1}(\omega)(z - V(\omega)) & - D^{-1}(\omega) \\
& \\
D(\omega) & 0
\end{array}
\right].
\end{equation}
One also has, associated with the map $A: \Omega \rightarrow M(2l, \mathbb{C})$, the so-called transfer matrices
\begin{equation}
\label{def.mat.trans.erg}
A_{n}(z, \omega) =
\begin{cases}
A_z(T^{n -1}(\omega))A_z(T^{n - 2}(\omega))\cdots A_z(T(\omega))A_z(\omega), & \mbox{ if } n \geq 1, \\
& \\
\mathbb{I}_{2l}, & \mbox{ if } n = 0, \\
& \\
A^{-1}_z(T^{n}(\omega))\cdots A^{-1}_z(T^{-2}(\omega))A^{-1}_z(T^{-1}(\omega)), & \mbox{ if } n \leq - 1.
\end{cases}
\end{equation}
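To fix ideas, the maps \eqref{def.func.mat.coci} and \eqref{def.mat.trans.erg} can be implemented in a few lines. The sketch below is purely illustrative: the choices $\Omega=[0,1)$ (the circle), $T$ an irrational rotation and the particular maps $D$, $V$ are assumptions made here for the example, not taken from the paper.
\begin{verbatim}
import numpy as np

l = 2
alpha = (np.sqrt(5) - 1) / 2                      # rotation number (illustrative)
T    = lambda w: (w + alpha) % 1.0
Tinv = lambda w: (w - alpha) % 1.0

def D(w):   # invertible, real l x l matrix, continuous in omega (sample choice)
    return np.eye(l) + 0.3 * np.cos(2 * np.pi * w) * np.array([[0.0, 1.0], [1.0, 0.0]])

def V(w):   # symmetric, real l x l matrix (sample choice)
    return np.diag([np.cos(2 * np.pi * w), np.sin(2 * np.pi * w)])

def A(z, w):
    """The matrix A_z(omega) of Eq. (def.func.mat.coci)."""
    Dinv = np.linalg.inv(D(w))
    top = np.hstack([Dinv @ (z * np.eye(l) - V(w)), -Dinv])
    bot = np.hstack([D(w), np.zeros((l, l))])
    return np.vstack([top, bot])

def A_n(n, z, w):
    """Transfer matrix A_n(z, omega) of Eq. (def.mat.trans.erg)."""
    M = np.eye(2 * l, dtype=complex)
    if n >= 0:
        for _ in range(n):                        # A(T^{n-1} w) ... A(w)
            M = A(z, w) @ M
            w = T(w)
    else:
        for _ in range(-n):                       # A^{-1}(T^{n} w) ... A^{-1}(T^{-1} w)
            w = Tinv(w)
            M = np.linalg.inv(A(z, w)) @ M
    return M

# cocycle property A_{m+n}(z, w) = A_m(z, T^n w) A_n(z, w), checked numerically
z, w0, n0, m0 = 1.5 + 0.2j, 0.1, 3, 4
lhs = A_n(m0 + n0, z, w0)
rhs = A_n(m0, z, (w0 + n0 * alpha) % 1.0) @ A_n(n0, z, w0)
print(np.max(np.abs(lhs - rhs)))                  # ~ 0 up to rounding
\end{verbatim}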
Each solution to the eigenvalue equation for the operator $H_{\omega}$ at $z\in\mathbb{C}$, that is, each $\mathbf{u}\in(\mathbb{C}^l)^\mathbb{Z}$ that satisfies, for each $n\in\mathbb{Z}$
\begin{equation}\label{eqauto}
[H_\omega\mathbf{u}]_n=z\mathbf{u}_n,
\end{equation}
is associated with an orbit of the cocycle; namely, $\textbf{u}\in(\mathbb{C}^l)^\mathbb{Z}$ is a solution to the eigenvalue equation for $H_{\omega}$ at $z$ if, and only if for every $n \in \mathbb{Z}$,
\begin{equation}
\label{eq.cociclo.trans}
\left[
\begin{array}{c}
\textbf{u}_{n + 1}\\
D(T^{n}\omega) \textbf{u}_{n}
\end{array}
\right]
=
A_{n}(z, \omega)
\left[
\begin{array}{c}
\textbf{u}_{1}\\
D(\omega) \textbf{u}_{0}
\end{array}
\right].
\end{equation}
Note that for each $\omega\in\Omega$ and each $z\in\mathbb{C}$, $A_z(\omega)\in SL(2l,\mathbb{C})$, and so for each $n\in\mathbb{N}$, $A_n(z,\omega)\in SL(2l,\mathbb{C})$.
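For completeness, here is a short verification of this claim (a standard computation, not spelled out in the text): moving the bottom block of $l$ rows above the top one changes the determinant by $(-1)^{l^2}=(-1)^{l}$ and produces a block lower-triangular matrix, so
\[
\det A_z(\omega)
=(-1)^{l}\,
\det\left[
\begin{array}{cc}
D(\omega) & 0\\
D^{-1}(\omega)(z - V(\omega)) & - D^{-1}(\omega)
\end{array}
\right]
=(-1)^{l}\,\det D(\omega)\,\det\bigl(-D^{-1}(\omega)\bigr)
=(-1)^{2l}=1 .
\]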
In general, an $SL(2l,\mathbb{C})$-cocycle over $(\Omega,\mathbb{C}^{2l})$, where $\Omega$ is an arbitrary set, is defined as
\begin{equation*}
\begin{array}{lllll}
(T, A): & \Omega \times \mathbb{C}^{2l} & \rightarrow & \Omega \times \mathbb{C}^{2l}\\
&(\omega, \textbf{u}) & \mapsto & (T(\omega), A(\omega)\textbf{u}),
\end{array}
\end{equation*}
where $T:\Omega\rightarrow \Omega$ is a bijection and the map $A: \Omega \rightarrow SL(2l, \mathbb{C})$ is such that $\Vert A\Vert_\infty<\infty$. The iterations of $(T,A)$ are denoted by $(T,A)^n=(T^n,A_n)$, with $A_n(\omega)$ given by~\eqref{def.mat.trans.erg}.
Now we recall the definition of a uniformly hyperbolic $SL(2l,\mathbb{C})$-cocycle.
\begin{definition}[Uniformly Hyperbolic Cocycle]\label{UHC}
An $SL(2l,\mathbb{C})$-cocycle $(T,A)$ is said to be uniformly hyperbolic, and one denotes this by $(T,A)\in\mathcal{UH}$, if there exist two maps $u, s : \Omega\rightarrow \mathbb{C}^{2l}$
such that:
\begin{enumerate}
\item $u$ and $s$ are $(T,A)$-invariant, that is, for each $\omega\in\Omega$,
\[A(\omega)u(\omega)= u(T(\omega))\qquad \mathrm{and}\qquad A(\omega)s(\omega)= s(T(\omega));
\]
\item there exists $C > 0$, $\lambda > 1$ such that for each $\omega\in\Omega$, $\textbf{v}\in u(\omega)\setminus\{\mathbf{0}\}$ and $\textbf{w}\in s(\omega)\setminus\{\mathbf{0}\}$,
\[\dfrac{\Vert A_{-n}(\omega)\textbf{v}\Vert}{\Vert\textbf{v}\Vert},\dfrac{\Vert A_{n}(\omega)\textbf{w}\Vert}{\Vert\textbf{w}\Vert}\le C\lambda^{-n},\qquad\forall\; n\in\mathbb{N}.\]
\end{enumerate}
Here, $u$ is called the unstable direction and $s$ is called the stable direction of $(T,A)$.
\end{definition}
We note that the constants $C>0$ and $\lambda>1$ do not depend on $\omega\in\Omega$, $\textbf{v}\in u(\omega)\setminus\{\mathbf{0}\}$ and $\textbf{w}\in s(\omega)\setminus\{\mathbf{0}\}$; this is why one calls such cocycles \textit{uniformly hyperbolic}.
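As a toy illustration of Definition~\ref{UHC} (an assumed example with $l=1$, not taken from the text), consider a constant cocycle $A(\omega)\equiv A_0$ with $A_0\in SL(2,\mathbb{R})$ hyperbolic; then $u(\omega)$ and $s(\omega)$ are the eigendirections of $A_0$, and one may take $C=1$ and $\lambda$ equal to the largest eigenvalue:
\begin{verbatim}
import numpy as np

# Constant SL(2,R) cocycle with a hyperbolic matrix: the stable/unstable
# directions are the eigendirections, and the contraction rates of item 2
# of the definition can be read off from the eigenvalues.  (Toy example.)
A0 = np.array([[2.0, 1.0], [1.0, 1.0]])          # det = 1, eigenvalues mu > 1 > 1/mu
eigval, eigvec = np.linalg.eig(A0)
u = eigvec[:, np.argmax(eigval)]                 # unstable direction
s = eigvec[:, np.argmin(eigval)]                 # stable direction
mu = np.max(eigval)

for n in range(1, 6):
    fwd = np.linalg.norm(np.linalg.matrix_power(A0, n) @ s)            # ~ mu**(-n)
    bwd = np.linalg.norm(np.linalg.matrix_power(np.linalg.inv(A0), n) @ u)
    print(n, fwd * mu**n, bwd * mu**n)           # both stay equal to 1 (up to rounding)
\end{verbatim}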
\begin{remark}\label{RUH}
\begin{enumerate}
\item It is a consequence of Definition~\ref{UHC} that there exists an invariant Whitney splitting $\mathbb{C}^{2l}\times\Omega=E^s\oplus E^u$ such that for each $\omega\in\Omega$, $E^s(\omega)=s(\omega)$ and $E^u(\omega)=u(\omega)$ are, respectively, the fibers of the bundles $E^s$ and $E^u$ at $\omega$.
Indeed, if there exists $\omega\in\Omega$ such that $u(\omega) = s(\omega)$, then by $A$-invariance, $s(T^j\omega) = u(T^j\omega)$ for each $j\in\mathbb{Z}$. Then, for each unit vector $\textbf{v}\in s(T^j\omega)$, it follows that for each $n\in\mathbb{N}$,
\[\Vert A_n(T^j\omega)\textbf{v}\Vert< C\lambda^{-n}.
\]
Now, for each unit vector $\textbf{u}\in u(T^{j + n}\omega)$, it follows that for each $n\in\mathbb{N}$,
\[\Vert A_{-n}(T^{j+n}\omega)\textbf{u}\Vert< C\lambda^{-n}.\]
By $A$-invariance, it follows that $A_{-n}(T^{j+n}\omega)\textbf{u}\in u(T^{j}\omega) = s(T^{j}\omega)$. Hence, for each $n\in\mathbb{N}$,
\[\dfrac{\Vert A_n(T^j\omega)A_{-n}(T^{j+n}\omega)\textbf{u}\Vert}{\Vert A_{-n}(T^{j+n}\omega)\textbf{u}\Vert}=\dfrac{\Vert\textbf{u}\Vert}{\Vert A_{-n}(T^{j+n}\omega)\textbf{u}\Vert}>C^{-1}\lambda^n,
\]
which in turn implies, for each $n\in\mathbb{N}$,
\[C^{-1}\lambda^n< C\lambda^{-n}.\]
Thus, one concludes from this contradiction that for each $\omega\in\Omega$, $u(\omega) \neq s(\omega)$.
\item It is another consequence of Definition~\ref{UHC} that if $A:\Omega\rightarrow\mathbb{C}^{2l}$ is continuous, then the fibers $E^s(\omega)$ and $E^u(\omega)$ depend continuously on $\omega\in\Omega$. Now, for the map $B:\Omega\times\mathbb{C}\rightarrow\mathbb{C}^{2l}$ given by the law $B(\omega,z)=A_z(\omega)$, with $A_z(\omega)$ given by \eqref{def.func.mat.coci}, the fibers $E^s(\omega,z)$ and $E^u(\omega,z)$ also depend continuously on $z\in\mathbb{C}$.
\item Moreover, if $\Omega$ is compact, then there exists $\gamma>0$ such that for each $n\in\mathbb{Z}$,
\[\sup_{\omega\in\Omega}\Vert \textbf{s}^\perp(\omega)-\textbf{u}^\perp(\omega)\Vert=\sup_{\omega\in\Omega}\Vert \textbf{s}^\perp(T^n\omega)-\textbf{u}^\perp(T^n\omega)\Vert\ge\gamma,
\] where for each $\omega\in\Omega$, $\textbf{s}^\perp(\omega)$ and $\textbf{u}^\perp(\omega)$ stand for the unitary normal vectors to $s(\omega)$ and $u(\omega)$, respectively. Roughly, this means that the angle between one vector in the stable subspace and another one in the unstable subspace (both at $\omega$) is always positive.
\end{enumerate}
\end{remark}
In order to obtain a version of the aforementioned exponential dichotomy for the family $(H_{\omega})_\omega$ given by~\eqref{eq.ope.din}, with $\Omega$ a compact metric space and $T:\Omega\rightarrow\Omega$ a minimal homeomorphism, we follow the sequence of arguments presented in \cite{zhang13} and also explore some results that characterize minimal supports for spectral measures. Here, we consider the property known as \textit{uniform exponential growth}.
\begin{definition}[Uniform Exponential Growth Condition]
\label{def.cresc.uni}
Let $(T, A)$ be an $SL(2l,\mathbb{C})$-cocycle.
One says that $(T,A)$ satisfies the uniform exponential growth condition, and denotes this by $(T, A) \in \mathcal{UG}$, if there exist constants $\beta > 0$ and $\lambda > 1$ such that for each fixed $\omega \in \Omega$
and each fixed $\textbf{v}\in \mathbb{C}^{2l}\setminus\{\textbf{0}\}$,
\begin{eqnarray}
\label{prop.cresc.exp}
\dfrac{\Vert A_{n}(\omega)\textbf{v}\Vert}{\Vert\textbf{v}\Vert}\geq \beta \lambda^{n},\;\forall\, n\in\mathbb{N},\qquad \mathrm{otherwise}\qquad
\dfrac{\Vert A_{-n}(\omega)\textbf{v}\Vert}{\Vert\textbf{v}\Vert}\geq \beta \lambda^{n},\;\forall\, n\in\mathbb{N}.
\end{eqnarray}
\end{definition}
\begin{remark}\label{lxl}
Assume that $(T, A) \in \mathcal{UG}$ and fix an orthonormal basis $\mathcal{B}:=\left\{\textbf{u}_{1}, \textbf{u}_{2},...\textbf{u}_{2l}\right\}$ of $\mathbb{C}^{2l}$. Since for each $\omega\in\Omega$ and each $n\in\mathbb{N}$, $A_{n}(\omega)\in SL(2l,\mathbb{C})$, at most $l$ vectors in $\mathcal{B}$ satisfy the condition
\begin{equation}
\label{cond.cresc.1}
\left\|A_{n}(\omega)\textbf{v}\right\| \geq \beta \lambda^{n},\qquad \forall\;n\in\mathbb{N}
\end{equation}
(note that these vectors are the same for each $n\in\mathbb{N}$); on the other hand, by using the same reasoning, for each $\omega\in\Omega$
at most $l$ vectors in $\mathcal{B}$ satisfy the condition
\begin{equation}
\label{cond.cresc.2}
\left\|A_{-n}(\omega)\textbf{v}\right\| \geq \beta \lambda^{n},\qquad \forall\;n\in\mathbb{N}.
\end{equation}
Thus, since each vector in $\mathcal{B}$ satisfies \eqref{cond.cresc.1} or \eqref{cond.cresc.2} (but not both simultaneously), the only possibility is that exactly $l$ vectors in $\mathcal{B}$ satisfy \eqref{cond.cresc.1} and the remaining $l$ vectors satisfy \eqref{cond.cresc.2}.
In fact, this reasoning applies to any basis of $\mathbb{C}^{2l}$, including any system of $2l$ linearly independent solutions
to the eigenvalue equation~\eqref{eqauto} for the operator $H_\omega$ at $z\in\mathbb{C}$.
\end{remark}
Actually, the notion presented in Definition~\ref{def.cresc.uni} is equivalent to the notion of uniform hyperbolicity (see Subsection~\ref{UGCEUH} for a proof of this statement).
We are now able to precisely state our main result: if $\Omega$ is a compact metric space and if $T$ is a minimal homeomorphism, then for each $\omega\in\Omega$, $z\in\rho(H_{\omega})$ (where $\rho(H_\omega)$ stands for the resolvent set of $H_\omega$) if, and only if, the cocycle $(T,A_z)$ given by~\eqref{eq.cociclo.din} satisfies the uniform exponential growth condition. This result is a consequence of the following theorems.
\begin{theorem}
\label{prop.espec.2}
Let $(H_{\omega})_{\omega}$ be the family of bounded dynamically defined operators given by~\eqref{eq.ope.din}, where $\Omega$ is a compact metric space, $T: \Omega \rightarrow \Omega$ is a minimal homeomorphism and $D,V:\Omega\rightarrow M(l,\mathbb{R})$ are continuous maps, with $D(\omega)$ invertible for each $\omega\in\Omega$. Then, for each $\omega \in \Omega$,
\[
\rho(H_{\omega}) \subseteq \{z \in \mathbb{C}\mid (T, A_z) \in \mathcal{UG}\},
\]
where the associated cocycle $(T, A_z)$ is given by \eqref{eq.cociclo.din}.
\end{theorem}
\begin{theorem}
\label{prop.espec.3}
Let $(H_{\omega})_{\omega}$ be as in the statement of Theorem~\ref{prop.espec.2}.
Then,
for each $\omega \in \Omega$,
$$
\rho(H_{\omega}) \supseteq \{z \in \mathbb{C}\mid (T, A_z) \in \mathcal{UG}\},
$$
where the associated cocycle $(T, A_z)$ is given by \eqref{eq.cociclo.din}.
\end{theorem}
\begin{remark} Given that $(H_\omega)_\omega$ is a family of self-adjoint operators, it follows that for each $\omega\in\Omega$, $\sigma(H_\omega)$ (the spectrum of $H_\omega$) is a subset of $\mathbb{R}$. Therefore, the result stated in Corollary~\ref{maintheo} is equivalent to the statement that for each $\omega\in\Omega$,\[\sigma(H_{\omega})=\{x \in \mathbb{R}\mid (T, A_x) \notin \mathcal{UG}\}.
\]
\end{remark}
\subsection{Equivalence between uniform hyperbolicity and uniform growth condition}
\label{UGCEUH}
\begin{theorem}\label{UG=UH}
Let $(T,A)$ be an $SL(2l,\mathbb{C})$-cocycle. Then, $(T, A) \in \mathcal{UG}$ if, and only if, $(T,A)\in\mathcal{UH}$.
\end{theorem}
\begin{proof} One just needs to show that if $(T, A) \in \mathcal{UG}$, then $(T, A) \in \mathcal{UH}$, since the other implication follows readily from the definition of an $SL(2l,\mathbb{C})$-cocycle.
Assume that $(T, A) \in \mathcal{UG}$ and fix an orthonormal basis $\mathcal{B}$
of $\mathbb{C}^{2l}$.
Now, let $s:\Omega\rightarrow\mathbb{C}^{2l}$ be given by the law $s(\omega)=\mathrm{span}(\mathcal{B}\setminus\mathcal{B}_\omega)$, where $\mathcal{B}_\omega$ is the set of elements of $\mathcal{B}$ that satisfy~\eqref{cond.cresc.1} (note that, by Remark~\ref{lxl}, $\mathcal{B}_\omega\notin\{\emptyset,\mathcal{B}\}$; in fact, $\#(\mathcal{B}_\omega)=l$). Accordingly, let $u:\Omega\rightarrow\mathbb{C}^{2l}$ be given by the law $u(\omega)=\mathrm{span}(\mathcal{B}_\omega)$.
Let us show that $u$ and $s$ satisfy the conditions stated in Definition~\ref{UHC}.
Namely, since for each $\omega\in\Omega$ and each $n\in\mathbb{N}$, $A_{n}(\omega)\in SL(2l,\mathbb{C})$, it follows from~\eqref{prop.cresc.exp},~\eqref{cond.cresc.1} and~\eqref{cond.cresc.2} that for each $\omega\in\Omega$, $\textbf{v}\in s(\omega)\setminus\{\textbf{0}\}$ and $\textbf{w}\in u(\omega)\setminus\{\textbf{0}\}$,
\begin{equation}\label{cond.hip.uni.}
\dfrac{\Vert A_{-n}(\omega)\textbf{w}\Vert}{\Vert\textbf{w}\Vert},\dfrac{\Vert A_{n}(\omega)\textbf{v}\Vert}{\Vert\textbf{v}\Vert}\le C\lambda^{-n},\qquad\forall\; n\in\mathbb{N},
\end{equation}
where $C:=c\beta^{-1}$, with $c$ a positive constant such that for each $\textbf{u}\in\mathbb{C}^{2l}$, $\Vert \textbf{u}\Vert_1\le c\Vert\textbf{u}\Vert_2$ (such a constant exists, since the norms $\Vert\textbf{u}\Vert_1:=\max\{|u_1|,\ldots,|u_{2l}|\}$ and $\Vert\textbf{u}\Vert_2:=(\sum_j|u_j|^2)^{1/2}$ are equivalent).
This proves condition~2.
Now, suppose that there exist $\omega\in\Omega$ and $\mathbf{0}\neq\textbf{v}\in s(\omega)$ so that $A(\omega)\textbf{v}\notin s(T\omega)$; given that $\mathbb{C}^{2l}=s(T\omega)\oplus u(T\omega)$, one has $A(\omega)\textbf{v}=a\textbf{w}+b\textbf{u}$ for some $\textbf{w}\in u(T\omega)$ and $\textbf{u}\in s(T\omega)$, with $a\neq 0$.
It follows from the definition of $s(T\omega)$ and $u(T\omega)$ that for each $n\in\mathbb{N}$,
\[\dfrac{\Vert A_n(T\omega)A(\omega)\textbf{v}\Vert}{\Vert A(\omega)\textbf{v}\Vert}=\dfrac{\Vert A_n(T\omega)(a\textbf{w}+b\textbf{u})\Vert}{\Vert A(\omega)\textbf{v}\Vert}\ge c_1\lambda^n-c_2\lambda^{-n},\]
with $c_1:=C^{-1}(\Vert a\textbf{w}\Vert/\Vert A(\omega)\textbf{v}\Vert)>0$, $c_2:=C(\Vert b\textbf{u}\Vert/\Vert A(\omega)\textbf{v}\Vert)\ge0$. On the other hand, it follows from~\eqref{cond.hip.uni.} that for each $n\in\mathbb{N}$,
\[\Vert A_{n}(T\omega)A(\omega)\textbf{v}\Vert=\Vert A_{n+1}(\omega)\textbf{v}\Vert\le C\lambda^{-n-1}\Vert\textbf{v}\Vert;\]
thus, for each $n\in\mathbb{N}$,
\[c_1\le \lambda^{-2n}(c_2+c_3),\]
where $c_3:=C\lambda^{-1}(\Vert\textbf{v}\Vert/\Vert A(\omega)\textbf{v}\Vert)>0$.
From this contradiction, one concludes that for each $\omega\in\Omega$, $A(\omega)s(\omega)=s(T\omega)$.
Using the same reasoning, one may prove that for each $\omega\in\Omega$, $A(\omega)u(\omega)=u(T\omega)$. This shows condition~1, and we are done.
\end{proof}
\begin{remark}
One has the following direct consequence of the proof of Theorem~\ref{UG=UH}: if $(T,A)\in\mathcal{UG}$, then for each $\omega\in\Omega$ and each $n\in\mathbb{Z}$, one has
\[
s_{l}\left[A_{n}(\omega)\right] \geq \beta \lambda^{|n|}
\]
and
\[s_{l + 1}\left[A_{n}(\omega)\right] \leq \beta^{-1} \lambda^{-|n|},
\]
where for each $j\in\{1,\ldots,2l\}$, $s_{j}\left[A_{n}(\omega)\right]$ stands for the $j$-th singular value of $A_n(\omega)$.
\end{remark}
The next result, the aforementioned exponential dichotomy for dynamically defined matrix-valued Jacobi operators, is a direct consequence of Theorems~\ref{prop.espec.2},~\ref{prop.espec.3} and~\ref{UG=UH}.
\begin{theorem}\label{maintheo}
Let $(H_{\omega})_{\omega}$ be as in the statement of Theorem~\ref{prop.espec.2}. Then,
for each $\omega \in \Omega$,
$$
\rho(H_{\omega})=\{z \in \mathbb{C}\mid (T, A_z) \in \mathcal{UG}\}=\{z \in \mathbb{C}\mid (T, A_z) \in \mathcal{UH}\},
$$
with the associated cocycle $(T, A_z)$ given by \eqref{eq.cociclo.din}.
\end{theorem}
We organize this paper as follows. In Section~\ref{crescimento} we present a necessary condition for an $SL(2l,\mathbb{C})$-cocycle to satisfy the uniform exponential growth condition (this is Proposition~\ref{prop.caracter}). We also show that the uniform exponential growth condition for the cocycle $(T,A_z)$ is open with respect to $z\in\mathbb{C}$ (this is Proposition~\ref{prop.hip.aberto}).
In Section~\ref{expdic} we present the proofs of Theorems
\ref{prop.espec.2} and
\ref{prop.espec.3}.
\section{$SL(2l,\mathbb{C})$-cocycles and the Uniform Exponential Growth Condition}
\label{crescimento}
\zerarcounters
In this section, we present a necessary condition for a cocycle $(T,A)$ defined over a compact metric space $\Omega$, with $A: \Omega \rightarrow SL(2l, \mathbb{C})$ a continuous map, to satisfy $(T,A)\notin\mathcal{UG}$ (Proposition \ref{prop.caracter}). This result is required in the proof of Theorem~\ref{prop.espec.2}. We adapt some of the ideas employed in \cite{zhang13} for $SL(2, \mathbb{R})$-cocycles.
We also show that the condition $(T,A_z)\in\mathcal{UG}$ is open with respect to $z\in\mathbb{C}$ (Proposition~\ref{prop.hip.aberto}), where $(T,A_z)$ stands for the cocycle defined by~\eqref{def.mat.trans.erg}. This result is required in the proof of Theorem~\ref{prop.espec.3}.
\begin{lemma}
\label{lema.carac.omega.v}
Let $(T, A)$ be a cocycle defined over a compact metric space $\Omega$, with $A: \Omega \rightarrow SL(2l, \mathbb{C})$ a continuous map. If there exist $\epsilon > 0$ and $R \in \mathbb{N}$ such that for every $(\omega, \textbf{v}) \in \Omega \times \mathbb{S}^{2l-1}${\footnote{$\mathbb{S}^{2l-1}:=\{\textbf{v}\in\mathbb{C}^{2l}\mid\Vert\textbf{v}\Vert=1\}$}}, there exists $r \in \mathbb{Z}$ such that $\left|r\right| \leq R$ and
\begin{equation}
\label{eq.matr.trans.l}
\left\| A_{r}(\omega)\textbf{v} \right\| \geq 1 + \epsilon,
\end{equation}
then $(T, A)\in\mathcal{UG}$.
\end{lemma}
\begin{proof}
Let $(\omega, \textbf{v}) \in \Omega \times \mathbb{S}^{2l-1}$ and set
$$
r(\omega, \textbf{v}) := m, \qquad \left|m\right| = \min \left\{\left| r \right| \,:\, \left| r \right| \leq R \ \mbox{ and } r,\ \omega,\ \textbf{v} \ \mbox{ satisfy } \eqref{eq.matr.trans.l} \right\}
$$
(by hypothesis, there exists at least one $r\in\mathbb{Z}$, with $|r|\le R$, for which \eqref{eq.matr.trans.l} is valid for each $(\omega, \textbf{v}) \in \Omega \times \mathbb{S}^{2l-1}$; moreover, $r(\omega, \textbf{v})\neq 0$).
One defines recursively a sequence $(r_{k}, \textbf{v}_{k}, \omega_{k})_{k}$ in $\mathbb{Z} \times \mathbb{S}^{2l-1} \times \Omega$ by the following procedure: for $k = 0$, let $r_{0} = 0$, $\textbf{v}_{0} = \textbf{v}$, and $\omega_{0} = \omega$; for $k > 0$, set
$$
\left\{
\begin{array}{lll}
r_{k} & := & r(\omega_{k - 1}, \textbf{v}_{k - 1}), \\
& & \\
\textbf{v}_{k} & := & \dfrac{A_{r_{k}}(\omega_{k - 1})\textbf{v}_{k - 1}}{\left\| A_{r_{k}}(\omega_{k - 1})\textbf{v}_{k - 1} \right\|}, \\
& & \\
\omega_{k}& := & T^{r_{k}}(\omega_{k - 1}).
\end{array}\right.
$$
Thus, for every $k \in \mathbb{N}$,
$$
\left\| A_{r_{k + 1}}(\omega_{k}) \textbf{v}_{k} \right\| \geq 1 + \epsilon.
$$
Now, note that by the definition of $\textbf{v}_{k}$,
$$
\left\|A_{r_{k + 1}}(\omega_{k}) \frac{A_{r_{k}}(\omega_{k - 1})\textbf{v}_{k - 1}}{\left\| A_{r_{k}}(\omega_{k - 1})\textbf{v}_{k - 1} \right\|}\right\| \geq 1 + \epsilon;
$$
it follows from~\eqref{def.mat.trans.erg} that
$$
A_{r_{k + 1}}(\omega_{k}) A_{r_{k}}(\omega_{k - 1})\textbf{v}_{k - 1}
=
A_{r_{k + 1} + r_{k}}(\omega_{k - 1})\textbf{v}_{k - 1},
$$
and so
$$
\left\|A_{r_{k + 1} + r_{k}}(\omega_{k - 1})\textbf{v}_{k - 1}\right\| \geq (1 + \epsilon)\left\| A_{r_{k}}(\omega_{k - 1})\textbf{v}_{k - 1} \right\|\ge(1 + \epsilon)^{2}.
$$
If one proceeds recursively, one gets, for $0\le j\le k-1$,
\begin{equation}
\label{eq.rec.lk}
\left\|A_{r_{k + 1} + \ldots + r_{k - j}}(\omega_{k - j - 1})\textbf{v}_{k - j - 1}\right\| \geq (1 + \epsilon)^{j + 2}.
\end{equation}
Set, for each $k \in \mathbb{Z}_0^+$, $R_{k} := \sum^{k}_{j = 0}r_{j}$; it follows from relation \eqref{eq.rec.lk} that for each $k\in\mathbb{N}$ and each $1\le j\le k$,
$$
\left\| A_{R_{k} - R_{j - 1}}(\omega_{j - 1})\textbf{v}_{j - 1} \right\| \geq (1 + \epsilon)^{k - j+1},
$$
and in particular for $j=1$, one gets
\begin{equation}\label{crescente}
\left\| A_{R_{k}}(\omega)\textbf{v} \right\| \geq (1 + \epsilon)^{k}.
\end{equation}
It follows from~\eqref{crescente} that there exists $L$ such that for $k > L$, $\left|R_{k}\right| > R$. Indeed, let $M:=\max_{\omega\in\Omega}\max_{|l|\le R}\Vert A_l(\omega)\Vert<\infty$ and set $L:=\max\{k\in\mathbb{N}\mid (1+\epsilon)^{k-1}\le M\}$; then, the assertion follows.
At last, set $k_0:=\max\{k\le L\mid |R_k|\le R\}$. Note that by relation~\eqref{crescente}, $|R_k|\to\infty$; moreover, by the definition of $L$, one has that $R_k>R$ for each $k>k_0$, otherwise $R_k<-R$ for each $k>k_0$. By combining these facts with $\left|r_{k}\right|\le R$, valid for each $k\in\mathbb{N}$, one concludes that there exists a sequence $\{k_j\}$, with $k_1 > k_{0}$, such that $R_{k_{j + 1}} > R_{k_j}$, $R_{k_{j + 1}}- R_{k_j}\le R$, otherwise there exists a sequence $\{l_j\}$, with $l_1>k_0$, such that $R_{l_{j + 1}} < R_{l_j}$, $R_{l_{j}}- R_{l_{j+1}}\le R$.
Suppose initially that $R_{k_{j + 1}} > R_{k_j}$ for every
$j\in\mathbb{N}$. Then,
$R_{k_1}>R\ge R_{k_0}>0$, and for each $n > R_{k_0}$, there exists $Q \in \mathbb{Z}_0^+$ such that $R_{k_Q} \leq n < R_{k_{Q + 1}}$; thus, one can write
\[\Vert A_{R_{k_Q}-n}(T^{n}\omega)\Vert\left\|A_{n}(\omega)\textbf{v}\right\|
\geq \left\| (A_{n - R_{k_Q}} (T^{R_{k_Q}}\omega))^{-1} A_{n}(\omega)\textbf{v}\right\|
=
\left\|A_{R_{k_Q}}(\omega)\textbf{v}\right\|.
\]
Now, given that
the map $A:\Omega\rightarrow SL(2l,\mathbb{C})$ is continuous (with $A(\omega)$ invertible for each $\omega\in\Omega$), $\Omega$ is compact and $n - R_{k_Q} <R_{k_{Q+1}}-R_{k_Q}\leq R$, there exists a constant $D > 0$ such that for each $n>R_{k_0}$ and each $Q\in\mathbb{N}$ so that $n-R_{k_Q}\leq R$, one has
$$
\Vert A_{R_{k_Q}-n} (T^{n}\omega)\Vert\le(\max_{\omega\in\Omega}\Vert A(\omega)^{-1}\Vert)^R=:D.
$$
Therefore,
it follows from the estimates
\[n-R<R_{k_{Q+1}}-R\le R_{k_{Q+1}}-R_{k_0}\le R_{k_Q}+R-R_{k_0}=\sum_{k=k_0}^{k_{Q}}r_{k}+R\le(k_{Q}-k_0+1)R
\]
that $k_{Q} + 2-k_0 > \frac{n}{R}$, and so
$$
\left\|A_{n}(\omega)\textbf{v}\right\|
\geq
G \left\|A_{R_{k_Q}}(\omega)\textbf{v}\right\|
\geq
G(1 + \epsilon)^{k_{Q}}
\geq
G(1 + \epsilon)^{k_{0} + \frac{n}{R} - 2}\ge G(1 + \epsilon)^{\frac{n}{R} - 2},
$$
with $G=D^{-1}$.
Suppose now that $R_{l_{j + 1}} < R_{l_j}$ for every $j\in\mathbb{N}$ (with $l_1 > k_{0}$). In this case, $R_{l_1}<-R\le R_{k_0}<0$, and for each $n\in\mathbb{N}$ such that $-n < R_{k_0}$, there exists $Q \in \mathbb{Z}_0^+$ such that $R_{l_{Q + 1}} < -n \leq R_{l_Q}$. Then, it follows from arguments analogous to those used in the previous case that there exists $\tilde{G}>0$ such that, for each $n\in\mathbb{N}$ so that $-n < R_{k_{0}}$, one has
$$
\left\|A_{-n}(\omega)\textbf{v}\right\|
\geq
\tilde{G}(1 + \epsilon)^{k_{0} + \frac{n}{R} - 2}\ge\tilde{G}(1 + \epsilon)^{\frac{n}{R} - 2}.
$$
Now, set $\lambda:=(1+\epsilon)^{1/R}$ and $C:=\min\{m/\lambda^R,\min\{G,\tilde{G}\}(1+\epsilon)^{-2}\}$, where $m:=\min_{\omega\in\Omega}(\Vert A(\omega)\Vert^{-R})>0$;
then,
it was shown that for each fixed $\omega \in \Omega$ and $\textbf{v} \in \mathbb{S}^{2l-1}$,
\begin{equation*}
\left\|A_{n}(\omega)\textbf{v}\right\| \geq C \lambda^{n},\qquad \forall\,n\in\mathbb{N},
\end{equation*}
otherwise
\begin{equation*}
\left\|A_{-n}(\omega)\textbf{v}\right\| \geq C \lambda^{n},\qquad \forall\,n\in\mathbb{N}.
\end{equation*}
\end{proof}
\begin{proposition}
\label{prop.caracter}
Let $(T, A)$ be a cocycle defined over a compact metric space $\Omega$, with $A: \Omega \rightarrow SL(2l, \mathbb{C})$ a continuous map. If $(T, A) \notin \mathcal{UG}$, then there exist $(\omega,\textbf{v})\in \Omega\times\mathbb{S}^{2l-1}$ such that for each $n\in\mathbb{Z}$,
$$
\left\|A_{n}(\omega)\textbf{v}\right\| \le 1.
$$
\end{proposition}
\begin{proof}
If $(T, A)\notin \mathcal{UG}$, it follows from Lemma \ref{lema.carac.omega.v} that for each $k \in \mathbb{N}$, there exists a pair $(\omega_{k}, \textbf{v}_{k}) \in \Omega \times \mathbb{S}^{2l-1}$ such that for each $r \in \mathbb{Z}$ with $\left| r \right| \leq k$, one has
$$
\left\|A_{r}(\omega_{k})\textbf{v}_{k}\right\| < 1 + \frac{1}{k}.
$$
Since $\Omega\times \mathbb{S}^{2l-1}$ is compact, there exist $(\omega,\textbf{v}) \in \Omega\times\mathbb{S}^{2l-1}$ and sequences $(\omega_{k_{j}})_{j}$ and $(\textbf{v}_{k_{j}})_{j}$ such that
$$
\begin{array}{lll}
\lim_{j \rightarrow \infty} \omega_{k_{j}} & = & \omega, \\
&& \\
\lim_{j \rightarrow \infty} \textbf{v}_{k_{j}} & = & \textbf{v}.
\end{array}
$$
It follows now from the fact that the map $A:\Omega\rightarrow SL(2l,\mathbb{C})$ is continuous that for each $n\in\mathbb{Z}$,
$$
\left\|A_{n}(\omega)\textbf{v}\right\| \leq \lim_{k\to\infty}\Vert A_n(\omega_k)\textbf{v}_k\Vert\le \lim_{k\to\infty}\left(1+\dfrac{1}{k}\right)=1.
$$
\end{proof}
\begin{proposition}
\label{prop.hip.aberto}
Let $z\in\mathbb{C}$ and let $(T, A_z)$ be the cocycle defined over the compact metric space $\Omega$ by the law~\eqref{eq.cociclo.trans}, with $T: \Omega \rightarrow \Omega$ a minimal homeomorphism and $D,V:\Omega\rightarrow M(l,\mathbb{R})$ continuous maps.
If $(T, A_z) \in \mathcal{UG}$, then there exists $\eta>0$ such that for each $z^\prime\in B(z;\eta)$, $(T, A_{z^\prime}) \in \mathcal{UG}$. Consequently, the condition $(T, A_{z})\in\mathcal{UG}$ is open with respect to $z\in\mathbb{C}$.
\end{proposition}
\begin{proof}
Let $z\in\mathbb{C}$ be such that $(T, A_z) \in \mathcal{UG}$. Then, it follows from Definition~\ref{def.cresc.uni} that there exists $n_0=n_0(z)\in\mathbb{N}$ such that for each $(\omega,\textbf{v})\in\Omega\times\mathbb{S}^{2l-1}$,
\[\Vert A_{n}(z,\omega)\textbf{v}\Vert>3/2,\;\forall\; n\ge n_0,\qquad\textrm{otherwise}\qquad \Vert A_{-n}(z,\omega)\textbf{v}\Vert>3/2,\;\forall\; n\ge n_0,
\]
and so, for each $n\ge n_0$ and each $(\omega,\textbf{v})\in\Omega\times\mathbb{S}^{2l-1}$,
\[B_n(z,\omega,\textbf{v}):=\max\{\Vert A_{n}(z,\omega)\textbf{v}\Vert,\Vert A_{-n}(z,\omega)\textbf{v}\Vert\}>3/2.
\]
Fix $n\ge n_0$.
Then, by Definition~\eqref{def.mat.trans.erg} and by the fact that
$D, V : \Omega\rightarrow M (l, \mathbb{R})$ are uniformly continuous maps (with $D(\omega)$ invertible for each $\omega\in \Omega$), it follows that there exists $\eta=\eta(z,n)>0$ such that for each $z^\prime\in B(z;\eta)$ and each $(\omega,\textbf{v})\in\Omega\times\mathbb{S}^{2l-1}$,
\[B_n(z^\prime,\omega,\textbf{v})>3/2;\]
namely, $\{z\in\mathbb{C}\mid\sup_{(\omega,\textbf{v})\in\Omega\times\mathbb{S}^{2l-1}}B_n(z,\omega,\textbf{v})>3/2\}$ is open, given that
\[\mathbb{C}\ni z\mapsto\sup_{(\omega,\textbf{v})\in\Omega\times\mathbb{S}^{2l-1}}B_n(z,\omega,\textbf{v})\in\mathbb{R}\] is a lower-semicontinuous function.
Let $z^\prime\in B(z;\eta)$. The result follows now from Lemma~\ref{lema.carac.omega.v}; let $\epsilon=1/2$, $R=n$ and $r\in\{-n,n\}$ in the statement of the lemma in order to conclude that $(T,A_{z^\prime})\in\mathcal{UG}$.
\end{proof}
\section{Exponential Dichotomy}
\label{expdic}
\zerarcounters
In this section, we present the proofs of Theorems~\ref{prop.espec.2} and~\ref{prop.espec.3}.
\subsection{Proof of Theorem~\ref{prop.espec.2}}
Firstly, we recall the Weyl Criterion, used for characterizing the spectrum of a linear operator defined in a Hilbert space (see, for example,~\cite{cesar09} for a proof).
\begin{proposition}[Weyl Criterion]
\label{crit.weyl}
Let $H: \dom(H) \subset \mathcal{H} \rightarrow \mathcal{H}$ be a linear operator defined in a Hilbert space $\mathcal{H}$ and let $z \in \mathbb{C}$. If there exists a sequence $(\textbf{v}_{n})_{n \in \mathbb{N}}$ of unitary vectors in $\dom(H)$ such that
\[\lim_{n \rightarrow \infty} (H - z)\textbf{v}_{n} = \textbf{0},
\]
then $z \in \sigma(H)$.
\end{proposition}
\begin{lemma}
\label{lema.weyl}
Let $c_{00}(\mathbb{Z}, \mathbb{C}^{l}):=\{\textbf{u}\in(\mathbb{C}^{l})^{\mathbb{Z}}\mid \exists N\in\mathbb{Z}_0^+$ such that $\forall\, |j|>N, \textbf{u}_j=0\}$, and let $(H_{\omega})_{\omega}$ be the family of self-adjoint operators given by \eqref{eq.ope.din}, with $\dom(H_{\omega}) = l^{2}(\mathbb{Z}, \mathbb{C}^{l})$ for every $\omega \in \Omega$. Then, for each $\omega \in \Omega$, $x \in \sigma(H_{\omega})$ if, and only if for every $\epsilon > 0$, there exists a unitary vector $\textbf{u} \in c_{00}(\mathbb{Z}, \mathbb{C}^{l})$ such that
$$
\left\| (H_{\omega} - x)\textbf{u} \right\| < \epsilon.
$$
\end{lemma}
\begin{proof}
Let $\omega\in\Omega$, $\epsilon > 0$, and set $\delta > 0$ such that $0<\frac{2A\delta}{4A - \delta} \leq \epsilon$, where $A:=\sup\{\Vert H_\omega-x\Vert\mid x\in\sigma(H_\omega)\}<\infty$ (since $H_\omega$ is a bounded operator, $\sigma(H_\omega)\subset\mathbb{R}$ is bounded). Let also $x \in \sigma(H_{\omega})$; it follows from Proposition \ref{crit.weyl} that there exists a unitary vector $\textbf{v} \in l^{2}(\mathbb{Z}, \mathbb{C}^{l})$ such that
$$
\left\| (H_{\omega} - x) \textbf{v} \right\| < \frac{\delta}{4}.
$$
For each $q \in \mathbb{N}$, define the vector $\textbf{v}^{(q)} \in l^{2}(\mathbb{Z}, \mathbb{C}^{l})$ by
$$
\textbf{v}^{(q)}_{n} :=
\left\{
\begin{array}{lll}
\textbf{v}_{n}, & n \in [-q, q], \\
& \\
\textbf{0}, & n \notin [-q, q]. \\
\end{array}\right.
$$
Now,
there exists $L \in \mathbb{N}$ such that
$\left\| \textbf{v}^{(L)} - \textbf{v}\right\| < \frac{\delta}{4A}$ and so,
$$
\left\| (H_{\omega} - x) \textbf{v}^{(L)} \right\| < \frac{\delta}{2}.
$$
The result follows now by taking $\textbf{u} := \frac{\textbf{v}^{(L)}}{\left\|\textbf{v}^{(L)}\right\|}$. Namely,
$$
1 = \left\| \textbf{v} \right\| \leq \left\| \textbf{v}^{(L)} \right\| + \left\| \textbf{v}^{(L)} - \textbf{v}\right\| \leq \left\| \textbf{v}^{(L)} \right\| + \frac{\delta}{4A},
$$
from which it follows that $\left\| \textbf{v}^{(L)} \right\| \geq \frac{4A - \delta}{4A}$; then,
$$
\left\| (H_{\omega} - x) \textbf{u} \right\| = \frac{\left\| (H_{\omega} - x) \textbf{v}^{(L)} \right\|}{\left\| \textbf{v}^{(L)} \right\|} \leq \left\| (H_{\omega} - x) \textbf{v}^{(L)} \right\| \left(\frac{4A}{4A - \delta}\right) < \frac{\delta}{2} \left(\frac{4A}{4A - \delta}\right) \leq \epsilon.
$$
The converse follows from Proposition~\ref{crit.weyl}.
\end{proof}
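The truncation idea used in Lemma~\ref{lema.weyl} and in the proof below can be illustrated on a toy example (the scalar free Jacobi operator with $l=1$, $D_n\equiv 1$, $V_n\equiv 0$ -- an assumed example, not from the text): a bounded, non-square-summable solution of the eigenvalue equation becomes a Weyl sequence after truncation and normalization.
\begin{verbatim}
import numpy as np

# Toy illustration (scalar free Jacobi operator: l = 1, D_n = 1, V_n = 0):
# u_n = exp(i*theta*n) solves the eigenvalue equation at z = 2*cos(theta);
# it is bounded but not square-summable, and its truncations, once
# normalized, form a Weyl sequence for z.
theta = 0.7
z = 2 * np.cos(theta)

def residual(L):
    n = np.arange(-L - 2, L + 3)                          # zero-padded window
    u = np.where(np.abs(n) <= L, np.exp(1j * theta * n), 0.0)
    Hu = np.roll(u, 1) + np.roll(u, -1)                   # (Hu)_k = u_{k-1} + u_{k+1}
    return np.linalg.norm(Hu - z * u) / np.linalg.norm(u)

for L in (10, 100, 1000):
    print(L, residual(L))                                 # decays like L**(-1/2)
\end{verbatim}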
\
\begin{proof}[Proof of Theorem~\ref{prop.espec.2}]
For each $\omega\in\Omega$, let $V = (V_n)_{n\in\mathbb{Z}}$ and $D = (D_n)_{n\in\mathbb{Z}}$ be the bilateral sequences of real and symmetric $l\times l$ matrices that define $H_\omega$. Denote by $S:l_\infty(\mathbb{Z}; M (l, \mathbb{C}))\rightarrow l_\infty(\mathbb{Z}; M (l, \mathbb{C}))$ the shift operator and let $\Lambda:=\Lambda_1\times \Lambda_2$, with $\Lambda_1:=\overline{\{S^n(D)\mid n\in\mathbb{N}\}}$ and $\Lambda_2:=\overline{\{S^n(V)\mid n\in\mathbb{N}\}}$ (the closures are taken with respect to the topology of the uniform convergence over $l_\infty(\mathbb{Z}; M (l, \mathbb{C}))$).
Let also $g:\Lambda_{i}\rightarrow M(l, \mathbb{C})$, $i=1,2$, be given by the law $g(B)=B_0$ (that is, $g$ is the evaluation map of the bilateral sequence $B\in\Lambda_{i}$ at entry $n=0$). Then, for each $z\in\mathbb{C}$, one defines the cocycle
\begin{equation*}
\begin{array}{lllll}
(S,P_z): & \Lambda \times \mathbb{C}^{2l} & \rightarrow & \Lambda \times \mathbb{C}^{2l}\\
&(\lambda, \textbf{v}) & \mapsto & (S(\lambda), P_z(\lambda)\textbf{v}),
\end{array}
\end{equation*}
where the map $P_z: \Lambda \rightarrow SL(2l, \mathbb{C})$ is given by the law
\begin{equation*}
P_z(\lambda)
:=
\left[
\begin{array}{cc}
g(B)^{-1}(z - g(C)) & - g(B)^{-1} \\
& \\
g(B) & 0
\end{array}
\right],
\end{equation*}
with $\lambda = (B, C)\in\Lambda_1\times\Lambda_2$.
Naturally, if the cocycle defined by~\eqref{eq.cociclo.din} is such that $(T, A_z)\notin \mathcal{UG}$, then $(S,P_z)\notin \mathcal{UG}$; it follows from Proposition~\ref{prop.caracter} that there exist $\lambda \in \Lambda$ and $\textbf{v} \in \mathbb{S}^{2l-1}$ such that for each $n \in \mathbb{Z}$,
\begin{equation}
\label{sequlim}
\left\|P_{n}(z;\lambda)\textbf{v}\right\| \leq 1,
\end{equation}
where $P_n(z;\lambda)$ is defined as in~\eqref{def.mat.trans.erg}, with $P_z(\lambda)$ replacing $A_z(\omega)$.
Let $(\textbf{u}_{n})_{n \in \mathbb{Z}} \in (\mathbb{C}^{l})^{\mathbb{Z}}$ be the bilateral sequence given by the law
$$
\left[
\begin{array}{c}
\textbf{u}_{n+1}\\
g(S^nB)\textbf{u}_{n}
\end{array}\right]
=
P_{n}(z;\lambda)\textbf{v}.
$$
Denote by $H_\lambda$ the matrix-valued Jacobi operator~\eqref{eq.ope.din.jacobi} associated with $\lambda=(B,C)$. Then, $\textbf{u}=(\textbf{u}_{n})_n$ is a solution to the eigenvalue equation of $H_{\lambda}$ at $z$ and by~\eqref{sequlim}, $\left\|\textbf{u}\right\|_{\infty}<\infty$.
If $(\textbf{u}_{n}) \in l^{2}(\mathbb{Z}, \mathbb{C}^{l})$, then $z \in \sigma(H_{\lambda})$ and we are done. Otherwise, for each $L \in \mathbb{N}$, one may define $\textbf{u}^{(L)}$ by the law
\begin{eqnarray}\label{seqWeyl}
\textbf{u}^{(L)}_{n} =
\left\{
\begin{array}{cc}
\textbf{u}_{n}, & \left| n \right| \leq L, \\
&\\
\textbf{0}, & \left|n\right| > L.
\end{array}\right.
\end{eqnarray}
If $n\notin\{-L-1,-L,-L+1,L-1,L,L+1\}$, since $\textbf{u}$ is a solution to the eigenvalue equation of $H_\lambda$ at $z$, one has
$$
[(H_{\lambda} - z\mathbb{I})\textbf{u}^{(L)}]_{n} = \textbf{0}.
$$
Specifically at the entries $n = \pm (L - 1), \pm L, \pm (L + 1)$, one has
$$
\left\{
\begin{array}{lll}
\left[(H_{\lambda} - z\mathbb{I})\textbf{u}^{(L)}\right]_{\pm(L - 1)} & = & g(S^{\pm(L - 2)}B)\textbf{u}_{\pm(L - 2)} + g(S^{\pm(L - 1)}B)\textbf{u}_{\pm L} \\
&&\\
&+& (z\mathbb{I} - g(S^{\pm(L - 1)}C))\textbf{u}_{\pm(L - 1)}, \\
& & \\
\left[(H_{\lambda} - z\mathbb{I})\textbf{u}^{(L)}\right]_{\pm L} & = & g(S^{\pm(L - 1)}B) \textbf{u}_{\pm(L - 1)} + g(S^{\pm L}C)\textbf{u}_{\pm L}, \\
& & \\
\left[(H_{\lambda} - z\mathbb{I})\textbf{u}^{(L)}\right]_{\pm(L + 1)} & = & g(S^{\pm L}B) \textbf{u}_{\pm L}.
\end{array}\right.
$$
Thus,
one concludes that there exists a constant $K$ such that for each $L \in \mathbb{N}$,
$$
\left\|(H_{\lambda} - z\mathbb{I})\textbf{u}^{(L)}\right\| \leq K
$$
(note that $(V_{n})_{n \in \mathbb{Z}}$ and $(D_{n})_{n \in \mathbb{Z}}$ are uniformly bounded bilateral sequences, and that for each $L\in\mathbb{N}$, $\Vert\textbf{u}^{(L)}\Vert_\infty\le\Vert\textbf{u}\Vert_\infty<\infty$).
Now, by setting $\textbf{w}^{(L)} := \frac{\textbf{u}^{(L)}}{\left\| \textbf{u}^{(L)} \right\|}$, it follows from the last inequality that
$$
\left\|(H_{\lambda} - z\mathbb{I})\textbf{w}^{(L)}\right\| \leq \frac{K}{\left\| \textbf{u}^{(L)} \right\|}.
$$
Since, by hypothesis, $\lim_{L \rightarrow \infty}\left\| \textbf{u}^{(L)} \right\| = \infty$ (given that $\textbf{u}\notin l^2(\mathbb{Z},\mathbb{C}^l)$), it follows from Lemma~\ref{lema.weyl} that $z \in \sigma(H_{\lambda})$.
In the same way, it is possible to show that for each $n\in\mathbb{Z}$, $z \in \sigma(H_{S^n\lambda})$; namely, it is sufficient to note that if one defines the sequence $(\textbf{v}^{(L)}_k)$ by the law $\textbf{v}_{k}^{(L)} := \textbf{u}_{k + n}^{(L)}$, with $\textbf{u}^{(L)}$ given by~\eqref{seqWeyl}, then by following the same reasoning as before, one concludes that $z \in \sigma(H_{S^n\lambda})$.
Recall that we want to show that for an arbitrary $\theta \in \Omega$, $z \in \sigma(H_{\theta})$. By Lemma~\ref{lema.weyl}, it is enough to prove that for each $\epsilon > 0$, there exists a unit vector $\textbf{u}\in c_{00}(\mathbb{Z},\mathbb{C}^l)$ such that $\|(H_{\theta} - z\mathbb{I})\textbf{u}\| \leq \epsilon$.
Note that for each $\epsilon>0$, there exists a sufficiently large $N$ such that $S^N\lambda$ is close enough to the coefficients of $H_\theta$, in the sense that, for each $k\in\mathbb{Z}$,
$$
\begin{array}{lll}
\left\|(S^N(B)-D)_{k}\right\| & < & \dfrac{\epsilon}{8}, \\
& & \\
\left\|(S^N(C)-V)_{k}\right\| & < & \dfrac{\epsilon}{4},
\end{array}
$$
where $D_k=D(T^k\theta)$ and $V_k=V(T^k\theta)$. Thus, for each unit vector $\textbf{v}\in l^2(\mathbb{Z},\mathbb{C}^l)$, one has
$$
\left\|(H_{S^N\lambda} - H_{\theta})\textbf{v}\right\| \leq \dfrac{\epsilon}{2}.
$$
Now, given that $z\in\sigma(H_{S^N\lambda})$, it follows from Lemma~\ref{lema.weyl} that there exists a unit vector $\textbf{u}\in c_{00}(\mathbb{Z},\mathbb{C}^l)$ such that
$$
\left\|(H_{S^N\lambda} - z\mathbb{I})\textbf{u}\right\| \leq \dfrac{\epsilon}{2}.
$$
Finally, by combining the previous estimates, one gets
$$
\left\|(H_{\theta} - z\mathbb{I})\textbf{u}\right\|
\leq
\left\|(H_{S^N\lambda} - H_{\theta})\textbf{u}\right\|
+
\left\|(H_{S^N\lambda} - z\mathbb{I})\textbf{u}\right\|
\leq \epsilon,
$$
and we are done.
\end{proof4}
\subsection{Proof of Theorem~\ref{prop.espec.3}}
Now we prove the converse of Theorem~\ref{prop.espec.2}.
The idea is to show that if there exist $2l$ linearly independent solutions to the eigenvalue equation of $H_\omega$ at $z$
such that half of them belong to $l^2(\mathbb{N},\mathbb{C}^{l})$ and the other half belong to $l^2(\mathbb{Z}_-,\mathbb{C}^{l})$, then $z\in\rho(H_\omega)$. In order to prove this result,
one needs to write the resolvent operator explicitly in its integral form, via the so-called Green function (Proposition \ref{porp.def.green}). Some preparation is required.
We begin with Green's formula. Let $\textbf{u}, \textbf{v} \in (\mathbb{C}^{l})^{\mathbb{Z}}$; then, for all integers $n > m$,
\begin{equation}
\label{eq.green.1}
\sum^{n}_{k = m} \left[\left\langle (H\textbf{u})_{k}, \bar{\textbf{v}}_{k} \right\rangle_{\mathbb{C}^{l}} - \left\langle (H\textbf{v})_{k}, \bar{\textbf{u}}_{k} \right\rangle_{\mathbb{C}^{l}}\right] = W_{[\textbf{u}, \textbf{v}]}(n + 1) - W_{[\textbf{u}, \textbf{v}]}(m),
\end{equation}
where $W$ is the Wronskian of $\textbf{u}$ and $\textbf{v}$, given by
\begin{equation}
\label{eq.wrons1}
W_{[\textbf{u}, \textbf{v}]}(n) := \left\langle D_{n - 1}\textbf{u}_{n}, \bar{\textbf{v}}_{n - 1} \right\rangle_{\mathbb{C}^{l}} - \left\langle D_{n - 1} \textbf{v}_{n}, \bar{\textbf{u}}_{n - 1} \right\rangle_{\mathbb{C}^{l}}.
\end{equation}
If $(A_{n})$ and $(B_{n})$ are sequences in $M(l,\mathbb{C})$, one obtains, by applying the operator to each of their columns, the following version of Green's formula for matrices:
\begin{equation}
\label{wronski.matriz}
\sum^{n}_{k = m} \left[A^{\ast}_{k} H(B)_{k} - H(A)_{k}^{\ast} B_{k}\right] = W_{[A, B]}(n + 1) - W_{[A, B]}(m),
\end{equation}
with
$$
W_{[A, B]}(m) = (A^{\ast}_{m - 1}D_{m - 1}B_{m} - A^{\ast}_{m}D_{m - 1}B_{m - 1}).
$$
In the specific case that $\textbf{u}_n$ and $\textbf{v}_n$ are solutions to the same eigenvalue equation, one obtains the constancy of the Wronskian.
\begin{3}[Constancy of the Wronskian]
\label{lema.const.wronsk}
Let $H$ be the operator given by \eqref{eq.ope.din.jacobi}. If $\textbf{u}, \textbf{v} \in (\mathbb{C}^{l})^{\mathbb{Z}}$ satisfy the eigenvalue equation for $H$
at $z \in \mathbb{C}$, then for each $m,n\in\mathbb{Z}$,
\[W_{[\textbf{u}, \textbf{v}]}(m)=W_{[\textbf{u}, \textbf{v}]}(n).
\]
\end{3}
\begin{proof}
It is enough to apply the Green formula for $\textbf{u}$ and $\textbf{v}$. Namely,
$$
\sum^{n}_{k = m} \left[\left\langle (H\textbf{u})_{k}, \bar{\textbf{v}}_{k} \right\rangle_{\mathbb{C}^{l}} - \left\langle (H\textbf{v})_{k},\bar{\textbf{u}}_{k} \right\rangle_{\mathbb{C}^{l}}\right] = \sum^{n}_{k = m} \left[\left\langle z \textbf{u}_{k},\bar{\textbf{v}}_{k} \right\rangle_{\mathbb{C}^{l}} - \left\langle z \textbf{v}_{k}, \bar{\textbf{u}}_{k} \right\rangle_{\mathbb{C}^{l}}\right] = 0,
$$
from which it follows that
$$
W_{[\textbf{u}, \textbf{v}]}(n + 1) - W_{[\textbf{u}, \textbf{v}]}(m) = 0.
$$
\end{proof}
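For instance (a purely illustrative check, not needed in the sequel), in the scalar case $l=1$ with $D_n\equiv 1$ and $V_n\equiv 0$, the solutions $\textbf{u}_n=w^{n}$ and $\textbf{v}_n=w^{-n}$ of $\textbf{u}_{n+1}+\textbf{u}_{n-1}=z\textbf{u}_{n}$, where $w+w^{-1}=z$ and $w\neq\pm 1$, satisfy
$$
W_{[\textbf{u}, \textbf{v}]}(n)=w^{n}w^{-(n-1)}-w^{-n}w^{n-1}=w-w^{-1}
$$
for every $n\in\mathbb{Z}$, in agreement with Lemma~\ref{lema.const.wronsk}.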
We want to define the resolvent operator in terms of the solutions to the eigenvalue equation.
\begin{3}
\label{lema.green.nd}
Let $H$ be given by \eqref{eq.ope.din.jacobi} and let $(F_{n}^{(+)})_{n\in\mathbb{Z}}, (F_{n}^{(-)})_{n\in\mathbb{Z}}$ be bilateral sequences in $M(l,\mathbb{C})$ whose columns are linearly independent solutions to the eigenvalue equation of $H$ at $z \in \mathbb{C}$. Suppose that
$$
\lim_{n \rightarrow \pm \infty} \left\| F^{(\pm)}_{n} \right\| = 0
$$
and set
$$
Q:= W_{[F^{(+)}, F^{(-)}]}(0) = (F^{(+)}_{0})^{\ast}D_{0}F^{(-)}_{1} - (F^{(+)}_{1})^{\ast}D_{0}F^{(-)}_{0}.
$$
Then, for every $n \in \mathbb{Z}$,
\begin{eqnarray*}
(a) & F^{(+)}_{n}Q^{-1}(F^{(-)}_{n})^{\ast} - F^{(-)}_{n}(Q^{\ast})^{-1}(F^{(+)}_{n})^{\ast}=0;\\
(b) & F^{(+)}_{n}Q^{-1}(F^{(-)}_{n + 1})^{\ast} - F^{(-)}_{n}(Q^{\ast})^{-1}(F^{(+)}_{n + 1})^{\ast} = D^{-1}_{n};\\
(c) & F^{(+)}_{n + 1}Q^{-1}(F^{(-)}_{n})^{\ast} - F^{(-)}_{n + 1}(Q^{\ast})^{-1}(F^{(+)}_{n})^{\ast} = - D^{-1}_{n}.
\end{eqnarray*}
\end{3}
\begin{proof}
By letting $A_{n}=B_{n} = F^{(\pm)}_{n}$ in Green Formula \eqref{wronski.matriz}, it follows that there exists a constant $C_\pm\in M(l,\mathbb{C})$ such that for every $n \in \mathbb{Z}$,
\[
W_{[F^{(\pm)}, F^{(\pm)}]}(n) = ((F^{(\pm)}_{n - 1})^{\ast}D_{n - 1}F^{(\pm)}_{n} - (F_{n}^{(\pm)})^{\ast}D_{n - 1}F_{n - 1}^{(\pm)}) = C_\pm;
\]
since $\lim_{n \rightarrow \pm \infty} \left\| F^{(\pm)}_{n} \right\| = 0$, one has
\[
\lim_{n \rightarrow \pm\infty} W_{[F^{(\pm)}, F^{(\pm)}]}(n) = 0,
\]
and then, $C_\pm = 0$.
Now, if one applies equation~\eqref{wronski.matriz} to the pairs $(F^{(+)},F^{(-)}), (F^{(-)},F^{(+)})$ and $(F^{(-)},F^{(-)})$, one obtains from the constancy of the Wronskian, for each $n \in \mathbb{Z}$, the system
$$
\left\{
\begin{array}{lll}
(F^{(+)}_{n})^{\ast}D_{n}F^{(+)}_{n + 1} - (F^{(+)}_{n + 1})^{\ast}D_{n}F^{(+)}_{n} & = & 0, \\
&&\\
(F^{(+)}_{n})^{\ast}D_{n}F^{(-)}_{n + 1} - (F^{(+)}_{n + 1})^{\ast}D_{n}F^{(-)}_{n} & = & Q, \\
&&\\
(F^{(-)}_{n})^{\ast}D_{n}F^{(+)}_{n + 1} - (F^{(-)}_{n + 1})^{\ast}D_{n}F^{(+)}_{n} & = & -Q^{\ast}, \\
&&\\
(F^{(-)}_{n})^{\ast}D_{n}F^{(-)}_{n + 1} - (F^{(-)}_{n + 1})^{\ast}D_{n}F^{(-)}_{n} & = & 0,
\end{array}\right.
$$
which can be written in the form
\begin{equation}
\label{equa.matri.simpl}
\left[
\begin{array}{cc}
(F^{(+)}_{n})^{\ast} & (F^{(+)}_{n + 1})^{\ast}\\
(F^{(-)}_{n})^{\ast} & (F^{(-)}_{n + 1})^{\ast}
\end{array}
\right]
\left[
\begin{array}{cc}
D_{n} & 0 \\
0 & D_{n}
\end{array}
\right]
\mathbb{J}
\left[
\begin{array}{cc}
F^{(+)}_{n} & F^{(-)}_{n} \\
F^{(+)}_{n + 1} & F^{(-)}_{n + 1}
\end{array}
\right]
=
\mathbb{J}
\left[
\begin{array}{cc}
Q & 0 \\
0 & Q^{\ast}
\end{array}
\right],
\end{equation}
with
$$
\mathbb{J}
=
\left[
\begin{array}{cc}
0 & \mathbb{I} \\
- \mathbb{I} & 0
\end{array}
\right].
$$
Since $\mathbb{J}^{-1} = -\mathbb{J}$ and
$$
\left[
\begin{array}{cc}
Q & 0 \\
0 & Q^{\ast}
\end{array}
\right]^{-1} = \left[
\begin{array}{cc}
Q^{-1} & 0 \\
0 & (Q^{\ast})^{-1}
\end{array}
\right],
$$
by multiplying both members of identity \eqref{equa.matri.simpl} on the left by $\left[
\begin{array}{cc}
Q^{-1} & 0 \\
0 & (Q^{\ast})^{-1}
\end{array}
\right]\mathbb{J}^{-1}$, it follows that
$$
\left(
\left[
\begin{array}{cc}
F^{(+)}_{n} & F^{(-)}_{n} \\
F^{(+)}_{n + 1} & F^{(-)}_{n + 1}
\end{array}
\right]
\right)
\left(
\left[
\begin{array}{cc}
Q^{-1} & 0 \\
0 & (Q^{\ast})^{-1}
\end{array}
\right]
\mathbb{J}^{-1}
\left[
\begin{array}{cc}
(F^{(+)}_{n})^{\ast} & (F^{(+)}_{n + 1})^{\ast}\\
(F^{(-)}_{n})^{\ast} & (F^{(-)}_{n + 1})^{\ast}
\end{array}
\right]
\left[
\begin{array}{cc}
D_{n} & 0 \\
0 & D_{n}
\end{array}
\right]
\mathbb{J}
\right)
=
\left[
\begin{array}{cc}
\mathbb{I} & 0 \\
0 & \mathbb{I}
\end{array}
\right],
$$
since a one-sided inverse of a finite square matrix is automatically two-sided; in other words,
$$
\left[
\begin{array}{cc}
Q^{-1} & 0 \\
0 & (Q^{\ast})^{-1}
\end{array}
\right]
\mathbb{J}^{-1}
\left[
\begin{array}{cc}
(F^{(+)}_{n})^{\ast} & (F^{(+)}_{n + 1})^{\ast}\\
(F^{(-)}_{n})^{\ast} & (F^{(-)}_{n + 1})^{\ast}
\end{array}
\right]
\left[
\begin{array}{cc}
D_{n} & 0 \\
0 & D_{n}
\end{array}
\right]
\mathbb{J}
\;\;\;\; \mbox{and} \;\;\;\;
\left[
\begin{array}{cc}
F^{(+)}_{n} & F^{(-)}_{n} \\
F^{(+)}_{n + 1} & F^{(-)}_{n + 1}
\end{array}
\right]
$$
commute. This identity can be written in the form
$$
\left\{
\begin{array}{lll}
F^{(+)}_{n}Q^{-1}(F^{(-)}_{n})^{\ast} - F^{(-)}_{n}(Q^{\ast})^{-1}(F^{(+)}_{n})^{\ast} & = & 0, \\
&&\\
F^{(+)}_{n}Q^{-1}(F^{(-)}_{n + 1})^{\ast} - F^{(-)}_{n}(Q^{\ast})^{-1}(F^{(+)}_{n + 1})^{\ast} & = & D^{-1}_{n}, \\
&&\\
F^{(+)}_{n + 1}Q^{-1}(F^{(-)}_{n})^{\ast} - F^{(-)}_{n + 1}(Q^{\ast})^{-1}(F^{(+)}_{n})^{\ast} & = & - D^{-1}_{n}, \\
&&\\
F^{(+)}_{n + 1}Q^{-1}(F^{(-)}_{n + 1})^{\ast} - F^{(-)}_{n + 1}(Q^{\ast})^{-1}(F^{(+)}_{n + 1})^{\ast} & = & 0.
\end{array}\right.
$$
\end{proof}
One can obtain from these relations the Green Function of $H$.
\begin{4}
\label{porp.def.green}
Let $H$ be the operator given by \eqref{eq.ope.din.jacobi}, let $z\in\mathbb{C}$ and let $F_{n}^{(+)}, F_{n}^{(-)}$ be sequences of matrices in $M(l,\mathbb{C})$ such that $\lim_{n\to\pm\infty}\Vert F_n^{(\pm)}\Vert=0$ and whose columns are linearly independent solutions to the eigenvalue equation of $H$ at $z$.
Set, for each $p, q \in \mathbb{Z}$,
\begin{equation}\label{eq.resol.green}
G(p, q; z) =
\left\{
\begin{array}{ll}
- F_{p}^{(-)}(z) (Q^{\ast})^{-1} (F_{q}^{(+)})^{\ast}(z), & p \leq q, \\
&\\
- F^{(+)}_{p}(z) Q^{-1} (F^{(-)}_{q})^{\ast}(z), & p > q.
\end{array}\right.
\end{equation}
Then, for each $\textbf{u} \in (\mathbb{C}^{l})^{\mathbb{Z}}$,
\begin{equation*}
\sum_{q} G(p, q; z)\textbf{u}_{q} = ((H - z)^{-1}\textbf{u})_{p};
\end{equation*}
that is, $G$ is the Green Function of the operator $H$ at $z$.
\end{4}
\begin{proof}
One needs to prove that
$$
(H - z)\left(\sum_{q} G(p, q; z)\textbf{u}_{q}\right)_{p \in \mathbb{Z}} = (\textbf{u}_{p})_{p \in \mathbb{Z}}.
$$
It follows from the definition of $G(p, q; z)$ that (we omit the dependence on $z$ in $G(p, q; z)$),
$$
\begin{array}{lll}
(H - z)\left(\sum_{q} G(p, q)\textbf{u}_{q}\right)_{p \in \mathbb{Z}}= & & \left(\sum_{q} D_{p}G(p + 1, q)\textbf{u}_{q}\right)_{p \in \mathbb{Z}}+\\
& & \\
& & \left(\sum_{q} D_{p - 1}G(p - 1, q)\textbf{u}_{q}\right)_{p \in \mathbb{Z}}+ \\
& & \\
& & \left(\sum_{q} (V_{p} - z)G(p, q)\textbf{u}_{q}\right)_{p \in \mathbb{Z}}.
\end{array}
$$
Let $p\in\mathbb{Z}$; one needs to consider the following cases.
\textbf{Case} $q < p - 1$:
\[
\begin{array}{lll}
& D_{p}G(p + 1, q)\textbf{u}_{q} + D_{p - 1}G(p - 1, q)\textbf{u}_{q} + (V_{p} - z)G(p, q)\textbf{u}_{q} = & \\
& & \\
& - D_{p}F_{p + 1}^{(+)}Q^{-1} (F^{(-)}_{q})^{\ast}\textbf{u}_{q} - D_{p - 1}F^{(+)}_{p - 1} Q^{-1} (F^{(-)}_{q})^{\ast}\textbf{u}_{q} - (V_{p} - z) F_{p}^{(+)} Q^{-1}(F^{(-)}_{q})^{\ast}\textbf{u}_{q} = & \\
& & \\
& - \left[ D_{p}F_{p + 1}^{(+)} + D_{p - 1}F_{p - 1}^{(+)} + (V_{p} - z) F_{p}^{(+)} \right]Q^{-1}(F^{(-)}_{q})^{\ast}\textbf{u}_{q} = - \left[ 0 \right] Q^{-1}(F^{(-)}_{q})^{\ast}\textbf{u}_{q} = \textbf{0}. &
\end{array}
\]
\textbf{Case} $q = p - 1$:
\[
\begin{array}{lll}
&D_{p}G(p + 1, q)\textbf{u}_{q} + D_{p - 1}G(p - 1, q)\textbf{u}_{q} + (V_{p} - z)G(p, q)\textbf{u}_{q}=& \\
& & \\
&- D_{p}F^{(+)}_{p + 1} Q^{-1} (F^{(-)}_{q})^{\ast}\textbf{u}_{q} - D_{p - 1}F^{(-)}_{p - 1} (Q^{\ast})^{-1} (F^{(+)}_{q})^{\ast}\textbf{u}_{q} - (V_{p} - z) F^{(+)}_{p} Q^{-1}(F^{(-)}_{q})^{\ast}\textbf{u}_{q}=&\\
& & \\
&- D_{p}F^{(+)}_{p + 1} Q^{-1} (F^{(-)}_{q})^{\ast}\textbf{u}_{q} - D_{p - 1}F^{(+)}_{p - 1}Q^{-1}(F^{(-)}_{q})^{\ast}\textbf{u}_{q} - (V_{p} - z) F^{(+)}_{p}Q^{-1}(F^{(-)}_{q})^{\ast}\textbf{u}_{q}= &\\
&&\\
&- \left[ D_{p}F^{(+)}_{p + 1} + D_{p - 1}F^{(+)}_{p - 1} + (V_{p} - z) F^{(+)}_{p} \right] Q^{-1}(F^{(-)}_{q})^{\ast}\textbf{u}_{q} =- \left[ 0 \right] Q^{-1} (F^{(-)}_{q})^{\ast} \textbf{u}_{q} = \textbf{0},&
\end{array}
\]
where Lemma~\ref{lema.green.nd}-$(a)$ has been applied in the second identity.
\textbf{Case} $q=p$: by applying Lemma~\ref{lema.green.nd}-$(c)$, one has
\[
D_{q}G(q + 1, q)\textbf{u}_{q} + D_{q - 1}G(q - 1, q)\textbf{u}_{q} + (V_{q} - z)G(q, q)\textbf{u}_{q} = \textbf{u}_{q}.
\]
\textbf{Case} $q \geq p + 1$:
\[
D_{p}G(p + 1, q)\textbf{u}_{q} + D_{p - 1}G(p - 1, q)\textbf{u}_{q} + (V_{p} - z)G(p, q)\textbf{u}_{q} = \textbf{0}.
\]
\end{proof}
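As a sanity check of \eqref{eq.resol.green} (an illustration only, in the scalar case $l=1$ with $D_n\equiv 1$, $V_n\equiv 0$ and $z\in\mathbb{R}\setminus[-2,2]$), one may take $F^{(+)}_{n}=w^{n}$ and $F^{(-)}_{n}=w^{-n}$, where $w$ is the root of $w^{2}-zw+1=0$ with $|w|<1$; then $Q=w^{-1}-w$ and \eqref{eq.resol.green} reduces to
$$
G(p, q; z)=\frac{w^{|p-q|}}{w-w^{-1}},
$$
for which indeed $G(q+1,q;z)+G(q-1,q;z)-zG(q,q;z)=\dfrac{2w-z}{w-w^{-1}}=1$, as required in the case $q=p$ above.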
The next step consists in obtaining a minimal support for the trace of the spectral (matrix) measure of $H_\omega$, $\omega\in\Omega$.
We note that the Green functions $G_\omega(j, j; \cdot):\mathbb{C}_{+}\rightarrow M(l,\mathbb{C})$, $j\in\mathbb{Z}$, are matrix-valued Herglotz functions (that is, each $G_\omega(j, j; \cdot)$ is analytic and $\Im G_\omega(j,j;z)>0$ for each $z\in\mathbb{C}_+$; this is a consequence of the fact that $G_\omega$ is the integral kernel of $(H_\omega-z)^{-1}$ and that $\Im (H_\omega-z)^{-1}>0$), from which it follows that for $\kappa$-a.e.~$x\in\mathbb{R}$ (here, $\kappa$ stands for the Lebesgue measure on $\mathbb{R}$),
\[
\lim_{y \downarrow 0}\Im G_\omega(j,j; x \pm iy)<\infty
\]
(see~\cite{gesztesy97}). By Spectral Theorem one has, for each $\textbf{u} \in l^{2}(\mathbb{Z}; \mathbb{C}^{l})$,
\begin{equation}
\label{eq.med.espec}
\left\langle (H_\omega - z)^{-1} \textbf{u}, \textbf{u} \right\rangle = \int \frac{1}{x - z} d\mu^\omega_{\textbf{u}}(x),
\end{equation}
where $\mu^\omega_{\textbf{u}}(\cdot):=\langle\textbf{u},E^\omega(\cdot)\textbf{u}\rangle$ is a finite Borel measure and $E^\omega$ is the resolution of the identity of the operator $H_\omega$.
Let $H: \dom(H)\subset\mathcal{H} \rightarrow \mathcal{H}$ be a self-adjoint operator defined in a separable Hilbert space $\mathcal{H}$, and let $\mathcal{C} = \{\textbf{u}_{1}, \textbf{u}_{2}, \ldots, \textbf{u}_{k}\}\subset\mathcal{H}$. The cyclic subspace of $H$ spanned by $\mathcal{C}$ is the space
\begin{equation*}
\mathcal{H}_{\mathcal{C}}:=\overline{\operatorname{span}\{\cup_{j=1}^k\{H^{n}\textbf{u}_{j}\mid n\in\mathbb{N}\}\}}.
\end{equation*}
One says that $\mathcal{C} = \{\textbf{u}_{1}, \textbf{u}_{2}, \ldots, \textbf{u}_{k}\}$ is a spectral basis for $H$ if the system $\mathcal{C}$ is linearly independent and $\mathcal{H}_{\mathcal{C}}=\mathcal{H}$.
In our setting, the $2l$ canonical vectors $(\textbf{e}_{\alpha,k})_{\alpha=0,1,k = 1,\ldots,l}$ in $(\mathbb{C}^{l})^{\mathbb{Z}}$, where $(\textbf{e}_{\alpha,k})_{n,j} = \delta_{\alpha,n}\delta_{j,k}$, $\alpha=0,1$, form a spectral basis for $H_\omega$ (see~\cite{carmona90}). The matrix
\begin{eqnarray*}
\left(\begin{array}{cc}\mu^\omega_{0,0} & \mu^\omega_{0,1}\\
\mu^\omega_{1,0}&\mu^\omega_{1,1}\end{array}\right),
\end{eqnarray*}
where $\mu^\omega_{\alpha,\beta}=(\mu^\omega_{\textbf{e}_{\alpha, i},\textbf{e}_{\beta, j}})_{1 \leq i,j \leq l}$, $\alpha,\beta=0,1$, is called the spectral (matrix) measure of $H_\omega$.
It can be shown (see~\cite{carmona90}) that the spectral type of the operator $H_\omega$ is given by $\mu^{tr}_\omega=\tr[\mu_{0,0}^\omega+\mu_{1,1}^\omega]$, in the sense that for each $\textbf{u}\in l^2(\mathbb{Z};\mathbb{C}^l)$, $\mu^\omega_{\textbf{u}}$ is absolutely continuous with respect to $\mu^{tr}_\omega$.
\begin{1}[Minimal Support]
\label{def.sup.minimal}
One says that a set $S \subseteq \mathbb{R}$ is a minimal support for the positive and finite Borel measure $\mu$ if
\begin{eqnarray*}\begin{array}{ll}
(i) & \mu (\mathbb{R} \setminus S) = 0; \\
(ii) & S_{0}\subset S,\;\;\; \mu(S_{0}) = 0\qquad \Longrightarrow\qquad \kappa(S_{0}) = 0.
\end{array}\end{eqnarray*}
\end{1}
In other words, a minimal support for $\mu$ is a Borel set on which $\mu$ is concentrated and whose subsets of zero $\mu$-measure necessarily have zero Lebesgue measure. Definition \ref{def.sup.minimal} induces an equivalence relation on $\mathcal{B}(\mathbb{R})$ (the Borel $\sigma$-algebra of $\mathbb{R}$):
$$
S_{1} \sim S_{2} \Longleftrightarrow \kappa(S_{1} \Delta S_{2}) = \mu(S_{1} \Delta S_{2}) = 0,
$$
where $S_{1} \Delta S_{2}:= (S_{1} \setminus S_{2}) \cup (S_{2} \setminus S_{1}) $ is the symmetric difference of $S_{1}$ and $S_{2}$ (see Lemma 2.20 in \cite{gilbert84} for a proof of this statement).
\begin{4}[Minimal Supports for the Spectral Types of $H_\omega$]
\label{porp.sup.ac}
Let $H_\omega$, $\omega\in\Omega$, be given by \eqref{eq.ope.din} and let $G_\omega(z):=G_\omega(0,0;z)+G_\omega(1,1;z)$, where $G_\omega(j,j;z)$ is the Green Function of $H_\omega$ at $z\in\mathbb{C}$ evaluated at $j=0,1$, given by \eqref{eq.resol.green}. Then,
\begin{eqnarray*}
\Sigma_{ac}^\omega &:=& \{x \in \mathbb{R}\mid \exists \lim_{y \downarrow 0} G_\omega(x + iy),\; 0<\pi^{-1}\lim_{y \downarrow 0} \Im[\ln[G_\omega(x + iy)]]<\infty\},\\
\Sigma_{s}^\omega &:=& \{x \in \mathbb{R}\mid \lim_{y \downarrow 0} \Im[\tr[G_\omega(x + iy)]] = \infty \}
\end{eqnarray*}
are minimal supports for the absolutely continuous and singular parts of the spectral measure $\mu^{tr}_\omega$, respectively. Moreover,
\[
\Sigma^\omega:=\Sigma_{ac}^\omega\cup\Sigma_{s}^\omega\]
is a minimal support for the spectral measure $\mu^{tr}_\omega$.
\end{4}
\begin{proof}
See Theorem~6.1 in~\cite{gesztesy97} for details.
\end{proof}
We are now ready to prove the converse of Theorem~\ref{prop.espec.2}.
\
\begin{proof5}
Set
\[\mathcal{A}:=\{x \in \mathbb{R}\mid (T, A_x) \in \mathcal{UG}\}\]
(if $z\in\mathbb{C}\setminus\mathbb{R}$, then $z\in\rho(H_\omega)$, given that $H_\omega$ is self-adjoint for each $\omega\in\Omega$), and let $x\in\mathcal{A}$. Then,
there exist exactly $l$ linearly independent unit vectors,
which we denote by $\{\textbf{u}_1,\ldots,\textbf{u}_l\}$ (we omit their dependence on $x$), such that
$\Vert A_n(x;\omega)\textbf{u}_i\Vert\le C\lambda^{-n}$ for each $i\in\{1,\ldots,l\}$ and each $n\in\mathbb{N}$, and another $l$ linearly independent unit vectors, which we denote by $\{\textbf{v}_1,\ldots,\textbf{v}_l\}$, satisfying
$\Vert A_{-n}(x;\omega)\textbf{v}_i\Vert\le C\lambda^{-n}$ for each $i\in\{1,\ldots,l\}$ and each $n\in\mathbb{N}$ (see the proof of Theorem~\ref{UG=UH}).
Thus, one may define the sequence $(F^{(+)}_{n}(x;\omega))_{n}\in M(l,\mathbb{R})$ in such a way that each column of $(F^{(+)}_{n}(x;\omega)\; D_{n-1}(\omega)F^{(+)}_{n-1}(x;\omega))^t$ corresponds to $A_n(x;\omega)\textbf{u}_i$ for some $i\in\{1,\ldots,l\}$. In the same fashion, one may define the sequence $(F^{(-)}_{n}(x;\omega))_{n}\in M(l,\mathbb{R})$ in such a way that each column of $(F^{(-)}_{n}(x;\omega)\; D_{n-1}(\omega)F^{(-)}_{n-1}(x;\omega))^t$ corresponds to $A_{-n}(x;\omega)\textbf{v}_i$ for some $i\in\{1,\ldots,l\}$. Naturally, $\lim_{n\to\pm\infty}\Vert F^{(\pm)}_{n}(x;\omega)\Vert=0$.
It follows from Proposition~\ref{prop.hip.aberto} that there exists $\epsilon>0$ such that for each $z\in B(x;\epsilon)$, $(T,A_z)\in\mathcal{UG}$. Let $z=x+iy$, with $0<y<\epsilon/2$, and let $\omega\in\Omega$; then, by the continuity of $F^{(\pm)}_{n}(z;\omega)$ with respect to $z$ (for each fixed $n\in\mathbb{Z}$ and $\omega\in\Omega$, with $F^{(\pm)}_{n}(z;\omega)$ defined as above; note also that for each fixed $\omega\in\Omega$, the vectors $\{\textbf{u}_1,\ldots,\textbf{u}_l\}$ and $\{\textbf{v}_1,\ldots,\textbf{v}_l\}$ depend continuously on $z\in\mathbb{C}$, by the continuity of $A_z(\omega)$ with respect to $z$) and since $\lim_{n\to\pm\infty}\Vert F^{(\pm)}_{n}(z;\omega)\Vert=0$, it follows from Proposition~\ref{porp.def.green} that
\begin{eqnarray*}
\lim_{y\downarrow 0}G_\omega(x+iy)&=&\lim_{y\downarrow 0}\sum_{j=0}^1F_{j}^{(-)}(x+iy;\omega) (Q^{\ast}(x+iy;\omega))^{-1} (F_{j}^{(+)})^{\ast}(x+iy;\omega)\\
&=&\sum_{j=0}^1F_{j}^{(-)}(x;\omega)(Q^{t}(x;\omega))^{-1} (F_{j}^{(+)})^{t}(x;\omega)\in M(l,\mathbb{R}),
\end{eqnarray*}
and so
\[\lim_{y\downarrow 0}\Im G_\omega(x+iy)=0.\]
By combining this result with Definition~\ref{def.sup.minimal} and Proposition~\ref{porp.sup.ac}, one concludes that for each $\omega\in\Omega$, $\mu^{tr}_\omega(\mathcal{A})=0$ (since for each $\omega\in\Omega$, $\mathcal{A}\subset \mathbb{R}\setminus\Sigma^\omega$ and $\mu^{tr}_\omega(\mathbb{R}\setminus\Sigma^\omega)=0$). By evoking again Proposition~\ref{prop.hip.aberto}, it follows that for each $x\in\mathcal{A}$ there exists $\epsilon>0$ such that $(x-\epsilon,x+\epsilon)\subset\mathcal{A}$, so for each $\omega\in\Omega$,
\[\mu^{tr}_\omega((x-\epsilon,x+\epsilon))=0.\]
This shows that for each $\omega\in\Omega$, $\mathcal{A}\subset\rho(H_\omega)$ (given that $\sigma(H_\omega)=\{x\in\mathbb{R}\mid\forall\epsilon>0,$ $\mu^{tr}_\omega((x-\epsilon,x+\epsilon))>0\}$), and we are done.
\end{proof5}
\section{Acknowledgments}
F.V. was supported by CAPES (Brazilian agency). SLC thanks Fapemig (Minas Gerais state agency; Universal Project under contract 001/17/CEX-APQ-00352-17) for partial support.
\end{document}
|
\begin{document}
\title{Dynamical Phonon Laser in Coupled Active-Passive Microresonators}
\author{Bing He}
\affiliation{Department of Physics, University of Arkansas, Fayetteville, Arkansas 72701, USA}
\author{Liu Yang }
\affiliation{Harbin Engineering University, College of Automation, Harbin, Heilongjiang 150001, China
}
\author{Min Xiao}
\affiliation{Department of Physics, University of Arkansas, Fayetteville, Arkansas 72701, USA}
\affiliation{National Laboratory of Solid State Microstructures and School of Physics, Nanjing University, Nanjing 210093, China}
\begin{abstract}
Effective transition between the population-inverted optical eigenmodes of two coupled microcavities carrying mechanical oscillation realizes
a phonon analogue of optical two-level laser. By providing an approach that linearizes the dynamical equations of weak nonlinear systems without relying on their steady states, we study such phonon laser action as a realistic dynamical process, which exhibits time-dependent stimulated phonon field amplification especially when one of the cavities is added with optical gain medium. The approach we present explicitly gives the conditions
for the optimum phonon lasing, and thermal noise is found to be capable of facilitating the phonon laser action significantly.
\end{abstract}
\maketitle
\begin{figure}
\caption{(color online) Setup of coupled microcavities with their coupling rate $J$ adjusted by their gap distance. The first cavity carries a mechanical mode. The pump field from the second optical fiber for amplification does not couple to the first cavity. The stimulated transition of phonons takes place between two supermode states $\hat{o}
\end{figure}
Compound structures like coupled microcavities or waveguides constitute large number of interesting systems in optical sciences. An important category
that has recently attracted extensive researches covers those with alternately distributed active (gain) and passive (loss) components, as they can
mimic the parity-time ($\mathcal {PT}$) symmetric quantum mechanics \cite{bender}, a generalization of ordinary quantum mechanics. In addition to the theoretical investigations (see, e.g. \cite{w1,k, sc, ch, oth,arga, ben2, l-l, m1,m2, m3, bhe0}), numerous experiments have demonstrated peculiar features of light transmission in these systems \cite{ex1, ex2,ex3, ex4, ex5, ex6, ex7}. Richer phenomena could manifest if they incorporate other degrees of freedom to form hybrid systems, which have been studied by combining $\mathcal {PT}$ symmetric systems with Kerr nonlinearity \cite{n1,n2,n3,n5,n7,bhe} and mechanical oscillators \cite{jing,oms1,oms2, oms3}.
The device of two coupled microcavities in Fig. 1 can implement phonon laser action \cite{l-4}, as well as in many other systems \cite{l-1,l-2,l-21, l-3,l-5, l-6, l-7,l-71, l-8, l-81, l-9, l-10}.
Here the coupling intensity $J$ of the two microcavities is determined with their adjustable gap. Under a pump drive of the intensity $E$ and frequency $\omega_L$, two eigenmodes or supermodes of different energy levels, as the superpositions of the individual cavity modes, will be built up. If one of the cavities also carries a mechanical oscillation with the frequency $\omega_m$, the cavity supermodes will couple to the associated phonon field in cavity material via radiation pressure. Once there is a population inversion between the cavity supermodes, an amplification of the phonon field will be realized in analog to an optical laser.
A recent study \cite{jing} proposes the enhancement of the phonon lasing by adding optical gain medium into one of the cavities (also see \cite{oms3}
for a continued study in the similar approach). Then the system will have the exact $\mathcal {PT}$ symmetry given the equal gain rate $g$ and loss rate $\gamma$ of the respective cavities, and this $\mathcal {PT}$ symmetric point was predicted to be capable of achieving the best performance of the phonon laser driven by resonant pump \cite{jing}. A prediction like this was made under the assumption that the phonon laser operates in a steady state, in which the expectation values of the cavity modes $\hat{a}_1, \hat{a}_2$ and mechanical mode $\hat{b}$ keep unchanged with time.
However, as we will show below, the phonon laser should operate under a blue detuned pump which leads to no steady state. In the presence of optical gain the similar systems can be fully dynamical. A well-known example is that, at the above mentioned $\mathcal {PT}$ symmetric point $g=\gamma$, the intracavity light fields are totally variable, exhibiting a transition from periodically oscillating to exponentially growing as the cavity coupling $J$ decreases across the exceptional point $J=\gamma$. A slight change of a cavity's size under radiation pressure can hardly make these dynamically evolving fields become time-independent. Properly understanding the concerned phonon laser operation necessitates an approach based on dynamical picture.
To be more specific, the system's dynamical equations read \cite{note}
\begin{eqnarray}
&&\dot{\hat{a}}_1=-(\gamma-ig_m\hat{x})\hat{a}_1-iJ\hat{a}_2 +E e^{-i\Delta t}+\sqrt{2\gamma}\hat{\xi}_p,\\
&&\dot{\hat{a}}_2=g \hat{a}_2-iJ\hat{a}_1 +\sqrt{2 g}\hat{\xi}^\dagger_a,\\
&&\dot{\hat{b}}=-\gamma_m \hat{b}-i\omega_m \hat{b}+ig_m\hat{a}_1^\dagger\hat{a}_1+\sqrt{2\gamma_m} \hat{\xi}_m
\label{equation}
\end{eqnarray}
in a frame co-moving at the frequency $\omega_c$ ($\Delta=\omega_c-\omega_L$) of the two cavities, where $\hat{x}=\hat{b}+\hat{b}^\dagger$ is the dimensionless position operator of the mechanical oscillator damping at the rate $\gamma_m$ and coupled to the passive mode occupation $\hat{a}_1^\dagger\hat{a}_1$ with a constant $g_m=\omega_c x_0/R$ ($x_0$ is the mechanical oscillator's zero-point fluctuation and $R$ is the cavity size). Without a classical steady state it will be impossible to linearize the dynamical equations (1)-(3) following the practice in most other works about quantum optomechanics. Moreover, these equations carry the random drive terms of the dissipation (amplification) noise $\hat{\xi}_p$ ($\hat{\xi}_a$) and the thermal noise $\hat{\xi}_m$, which satisfy the relations $\langle \hat{\xi}_i(t) \hat{\xi}_i^\dagger (t')\rangle=\delta(t-t')$ ($i=p,a$) and $\langle \hat{\xi}_m(t) \hat{\xi}_m^\dagger (t')\rangle=(n_{th}+1)\delta(t-t')$ ($n_{th}$ is the thermal reservoir mean occupation number). The effects of these quantum noises, which are neglected in the previous studies but exist in any concerned quantum dynamical process, should be well clarified. In this work we develop an approach to such quantum dynamical processes. The population inversion of the optical supermodes, as the key to the phonon lasing, will be determined with this approach, which is capable of dealing with the quantum noises that, as shown below, are indispensable.
Our approach makes use of the stochastic Hamiltonian
\begin{eqnarray}
H_{SR}(t)&=&i\big\{\sqrt{2\gamma }(\hat{a}_{1}^{\dagger }\hat{\xi}
_{p}(t)-H.c.)
+\sqrt{2g }(\hat{a}_{2}^{\dagger }\hat{\xi}_{a}^{\dagger }(t)-H.c.)\nonumber\\
&+&\sqrt{2\gamma _{m}}(\hat{b}^{\dagger }\hat{\xi}_{m}(t)-H.c.)\big\}
\end{eqnarray}
in terms of the system-reservoir couplings for the amplification and dissipations in the system (the notation in \cite{bhe} for the amplification part is adopted). The quantum dynamical equations (1)-(3) can be obtained by the small increments $d\hat{o}(t)=U^\dagger(t+dt,t)\hat{o}(t)U(t+dt,t)-\hat{o}(t)$ of the operators $\hat{o}=\hat{a}_1,\hat{a}_2$ and $\hat{b}$, which are under
the evolution $U(t)={\cal T}\exp\{-i\int_0^t d\tau [H_S(\tau)+H_{OM}+H_{SR}(\tau)]\}$ of the total Hamiltonian \cite{book}. The Hamiltonians inside the time-ordered exponential include the part
\begin{eqnarray}
H_S(t)&=&\omega _{c}\hat{a}_{1}^{\dagger }\hat{a}_{1}+\omega _{c}\hat{a}_{2}^{\dagger }\hat{a}_{2}+\omega _{m}\hat{b}^{\dagger }\hat{b}+J(\hat{a}_{1}\hat{a}_{2}^{\dagger }+\hat{a}_{1}^{\dagger }\hat{a}_{2})\nonumber\\
&+&iE(\hat{a}_{1}^{\dagger }e^{-i\omega _{L}t}-\hat{a}_{1}e^{i\omega _{L}t})
\end{eqnarray}
about the cavity coupling plus the external drive, as well as the one $H_{OM}=-g_m\hat{a}_1^\dagger\hat{a}_1(\hat{b}+\hat{b}^\dagger)$ about optomechanical interaction.
We apply an interaction picture with respect to the system Hamiltonian $H_S(t)$, whose action $U_0(t)=\mathcal{T}\exp\{-i\int_0^t d\tau H_S(\tau)\}$ evolves the cavity modes as the exact transformation
\begin{eqnarray}
\left(
\begin{array}{c}
U^\dagger_0\hat{a}_1 U_0\\
U^\dagger_0\hat{a}_2U_0
\end{array}
\right)
&=&\frac{1}{\sqrt{2}}e^{-i\omega_ct}\left(
\begin{array}{c}
\frac{\hat{a}_1+\hat{a}_2}{\sqrt{2}}e^{-iJt}+\frac{\hat{a}_1-\hat{a}
_2}{\sqrt{2}}e^{iJt} \\
\frac{\hat{a}_1+\hat{a}_2}{\sqrt{2}}e^{-iJt}-\frac{\hat{a}_1-\hat{a}_2}{\sqrt{2}}e^{iJt}
\end{array}
\right)\nonumber\\
&+&
\sqrt{2}e^{-i\omega_ct}\left( \begin{array}{c}
E_1(t) \\
E_2(t)
\end{array}
\right), \label{l-solution}
\end{eqnarray}
where
\begin{eqnarray}
E_1(t)&= &\frac{iE}{2\sqrt{2}}(\frac{1}{\Delta+J}e^{-iJt} +\frac{1}{\Delta-J}e^{iJt}\nonumber\\
& -&\frac{2\Delta}{\Delta^2-J^2}e^{i\Delta t}),\nonumber\\
E_2(t)&=&\frac{iE}{\sqrt{2}}\big(\frac{J}{\Delta^2-J^2}e^{i\Delta t}-\frac{J}{\Delta^2-J^2}\cos(Jt)\nonumber\\
&-&i\frac{\Delta}{\Delta^2-J^2}\sin(Jt)\big).
\label{E}
\end{eqnarray}
The optical supermodes $\hat{o}_{1,2}=(\hat{a}_1\pm \hat{a}_2)/\sqrt{2}$ with the energy levels $\omega_c\pm J$ naturally appear in Eq. (\ref{l-solution}). Taking the interaction picture is equivalent to the factorization
\begin{eqnarray}
&&{\cal T}e^{-i\int_0^t d\tau (H_S(\tau)+H_{OM}+H_{SR}(\tau))}\nonumber\\
&=&U_0(t)~{\cal T}e^{-i\int_0^t d\tau U_0^\dagger(\tau)(H_{OM}+H_{SR}(\tau))U_0(\tau)}
\end{eqnarray}
of the evolution operator $U(t)$ \cite{fac}, to have the exact form $U_0^{\dagger}(t)(H_{OM}+H_{SR}(t))U_0(t)$
in one of the time-ordered exponentials above consisting of two parts.
One is in a time-dependent quadratic form plus a mechanical
displacement term and three system-reservoir coupling terms
\begin{eqnarray}
H_1(t)&=&-g_m \{[E_1(t)(\hat{o}^\dagger_1 e^{iJt}+\hat{o}
^\dagger_2 e^{-iJt})+H.c.]\nonumber\\
&+&2|E_1(t)|^2\}(\hat{b}e^{-i\omega_m t}+\hat{b}^\dagger e^{i\omega_m t})\nonumber\\
&+& i\sqrt{g}\{(\hat{o}^\dagger_1 e^{iJt}-\hat{o}^\dagger_2
e^{-iJt}+2E_2^\ast(t))e^{i\omega_c t}\hat{\xi}^\dagger_a-H.c.\}\nonumber\\
&+& i\sqrt{\gamma}\{(\hat{o}^\dagger_1 e^{iJt}+\hat{o}^\dagger_2
e^{-iJt}+2E_1^\ast(t))e^{i\omega_c t}\hat{\xi}_p-H.c.\}\nonumber\\
&+& i\sqrt{2\gamma_m}\{\hat{b}^\dagger e^{i\omega_m t}\hat{\xi}_m(t)-H.c.\},
\label{1}
\end{eqnarray}
and the other is the cubic nonlinear one
\begin{eqnarray}
H_2(t)&=&-\frac{1}{2}g_m(\hat{o}^\dagger_1 e^{iJt}+\hat{o}^\dagger_2 e^{-iJt})(\hat{o}_1 e^{-iJt}+\hat{o}_2
e^{iJt})\nonumber\\
&\times & (\hat{b}e^{-i\omega_m t}+\hat{b}^\dagger e^{i\omega_m t}).
\label{2}
\end{eqnarray}
The terms containing $\hat{o}_1\hat{o}_2^\dagger b^\dagger$ or its conjugate in the second Hamiltonian $H_2(t)$ indicate a transition from the blue supermode $\hat{o}_1$ to the red supermode $\hat{o}_2$ while generating a phonon (see the level scheme in Fig. 1), realizing phonon lasing once the occupation of the blue supermode surpasses that of the red one. The Hamiltonian $H_2(t)$ also gives the resonant transition between the two supermodes at $\omega_m=2J$, i.e. the coefficient of $\hat{o}_1\hat{o}_2^\dagger b^\dagger$ becomes unity, corresponding to the gain spectrum line center of stimulated phonon field \cite{l-4}.
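Explicitly, collecting this term from Eq. (\ref{2}) gives $-\tfrac{g_m}{2}\,e^{i(\omega_m-2J)t}\,\hat{o}^{\dagger}_2\hat{o}_1\hat{b}^{\dagger}+H.c.$, whose phase factor becomes stationary exactly at $\omega_m=2J$.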
Under the simultaneous action of $H_1(t)$ and $H_2(t)$, the supermode populations
\begin{eqnarray}
&&\langle \hat{o}_i^\dagger\hat{o}_i(t)\rangle=\mbox{Tr}_S(\hat{o}_i^\dagger\hat{o}_i\rho_S(t))\nonumber\\
&&~~~~~~~~~~~=\mbox{Tr}_S\big\{\hat{o}_i^\dagger\hat{o}_i\mbox{Tr}_R\big(U(t)\rho_S(0)\rho_R U^{\dagger}(t)\big)\big\},~~
\end{eqnarray}
for $i=1,2$, are predominantly determined by the former. Here $\rho_S(t)$ and $\rho_R$ are the reduced system state and the total reservoir state,
respectively. This can be seen from their following reduction
\begin{eqnarray}
&&\langle \hat{o}_i^\dagger\hat{o}_i(t)\rangle
=\mbox{Tr}_{S,R}\big\{\hat{o}_i^\dagger\hat{o}_i U_0(t)~{\cal T}e^{-i\int_0^t d\tau(H_{1}+H_{2})(\tau)}\nonumber\\
&&~~~~~~~~~~~\times \rho_S(0)\rho_R ~{\cal T}e^{i\int_0^t d\tau(H_{1}+H_{2})(\tau)}U_0^\dagger (t)\big\}\nonumber\\
&\approx&\mbox{Tr}_{S,R}\big\{ U_1^\dagger(t)U_0^\dagger (t)\hat{o}_i^\dagger\hat{o}_i U_0(t)U_1(t)
U_2(t)\rho_S(0)\rho_R U^\dagger_2(t)\big\}\nonumber\\
&=&\mbox{Tr}_{S,R}\big\{ U_1^\dagger(t)U_0^\dagger (t)\hat{o}_i^\dagger\hat{o}_i U_0(t)U_1(t)\rho_S(0)\rho_R\big\},
\label{reduction}
\end{eqnarray}
where $U_l(t)={\cal T}e^{-i\int_0^t d\tau H_{l}(\tau)}$ for $l=1,2$. In Eq. (\ref{reduction}), the relation $U_2(t)\rho_S(0)U_2^\dagger(t)=\rho_S(0)$ for the system's initial state $\rho_S(0)$, the product of a cavity vacuum state $|0\rangle_c$ and a mechanical thermal state, is due to the fact $H_2(t)|0\rangle_c=0$. The approximate equality in Eq. (\ref{reduction}) comes from factorizing the actions of the non-commutative Hamiltonians $H_1(t)$ and $H_2(t)$ as
\begin{eqnarray}
{\cal T}e^{-i\int_0^t d\tau(H_{1}+H_{2})(\tau)}&=&{\cal T}e^{-i\int_0^t d\tau U_2(t,\tau)H_{1}(\tau)U^\dagger_2(t,\tau)}U_2(t)\nonumber\\
&\approx & U_1(t)U_2(t).
\label{app}
\end{eqnarray}
For the experimentally realizable optomechanical systems of weak coupling, the corrections to the
system operators by the unitary operation $U_2(t,\tau)={\cal T}e^{-i\int_\tau^t dt' H_2(t')}$ are in the higher orders of the coefficient $g_m/\omega_m\ll 1$ (see the Supplementary Materials), so that they can be safely neglected, allowing the original form of $H_1(\tau)$ to be used in the time-ordered exponential on the right side of the above equation. This, the only approximation we use in the calculations of the optical supermode populations, is independent of the drive intensity $E$.
While the unitary operation $U_0(t)$ only displaces the supermode operators in Eq. (\ref{reduction}), the action $U_1(t)$ of the Hamiltonian $H_1(t)$ leads to the following dynamical equations \cite{book}
\begin{eqnarray}
\dot{\hat{o}}_1&=& 1/2(g-\gamma)\hat{o}_1+1/2(g+\gamma) e^{2iJt}\hat{o}_2\nonumber\\
&+& ig_m E_1(t)e^{iJt}(\hat{b}
e^{-i\omega_m t}+\hat{b}^\dagger e^{i\omega_m t})\nonumber\\
&+&(\gamma E_1(t)-g E_2(t))e^{iJt}+\hat{n}_1(t),\nonumber\\
\dot{\hat{o}}_2&=& 1/2(g+\gamma)e^{-2iJt}\hat{o}_1+1/2(g-\gamma)\hat{o}_2\nonumber\\
&+&ig_m E_1(t)e^{-iJt}(\hat{b}e^{-i\omega_m t}+\hat{b}^\dagger e^{i\omega_m t})\nonumber\\
&+&(\gamma E_1(t)+g E_2(t))e^{-iJt}+\hat{n}_2(t),\nonumber\\
\dot{\hat{b}}&=& -\gamma_m \hat{b}+ig_m E^\ast_1(t)e^{i\omega_m t}(\hat{o}_1 e^{-iJt}+\hat{o}_2 e^{iJt})\nonumber\\
&+& ig_m E_1(t)e^{i\omega_m t}(\hat{o}
^\dagger_1 e^{iJt}+\hat{o}^\dagger_2 e^{-iJt})\nonumber\\
&+&2i g_m|E_1(t)|^2e^{i\omega_m t}+\hat{n}_3(t)
\label{dynamic}
\end{eqnarray}
for the system operators, where
\begin{eqnarray}
\hat{n}_1(t)&=&\sqrt{g}
e^{iJt}e^{i\omega_c t}\hat{\xi}^\dagger_a(t)+\sqrt{\gamma}e^{iJt}e^{i\omega_c t}\hat{\xi}_p(t),\nonumber\\
\hat{n}_2(t)&=&\sqrt{g} e^{-iJt}e^{i\omega_c t}\hat{\xi}^\dagger_a(t)-\sqrt{\gamma}e^{-iJt}e^{i\omega_c t}\hat{\xi}_p(t),\nonumber\\
\hat{n}_3(t)&=&\sqrt{2\gamma_m}e^{i\omega_m t}\hat{\xi}_m(t).
\label{noise}
\end{eqnarray}
The noise drive terms in Eq. (\ref{noise}) must be included in these equations. For example, in the trivial situation of turning off the pump drive ($E=0$),
the damping of the mechanical mode would result in its ``cooling" to the ground state, i.e. $\langle \hat{b}^\dagger\hat{b}(t)\rangle\rightarrow 0$ as $t\rightarrow \infty$, were there no thermal noise term $\hat{n}_3(t)$ in the last equation of (\ref{dynamic}). The invariant occupation number $\langle \hat{b}^\dagger \hat{b}\rangle$ under such thermal equilibrium is preserved with the complete form $\hat{b}(t)=e^{-\gamma_mt}\hat{b}+\sqrt{2\gamma_m}\int_0^t d\tau e^{-\gamma_m(t-\tau)}e^{i\omega_m\tau}\hat{\xi}_m(\tau)$ of the evolved mechanical mode. The evolved supermodes $\hat{o}_1(t), \hat{o}_2(t)$, on the same footing with $\hat{b}(t)$ in Eq. (\ref{dynamic}), should include the contributions from the quantum noises as well.
The next question is how to evolve the supermodes so that a good population inversion $\Delta N(t)=\langle \hat{o}_1^\dagger\hat{o}_1(t)\rangle-\langle \hat{o}_2^\dagger\hat{o}_2(t)\rangle$ can be achieved. One advantage of our approach is that the conditions for realizing the optimal population inversion can be straightforwardly read from Eq. (\ref{dynamic}), which is an inhomogeneous system of differential equations with the coherent and noise drive terms. The coefficients of $\hat{o}_i$ or $\hat{o}_i^\dagger$ on the right side of the last equation, for examples, are generally the sums of complex exponential functions of $t$ considering the form of $E_1(t)$. These coefficients reflect the intensities of the beam splitter (BS) type coupling in the form $f(t)\hat{o}_i\hat{b}^\dagger+H.c.$ or the squeezing (SQ) type coupling in the form $g(t)\hat{o}^\dagger_i\hat{b}^\dagger+H.c.$, where the exact functions $f(t),g(t)$ can be found from Eq. (\ref{1}). These couplings can be enhanced if a complex exponential function of $t$ in $f(t)$ or $g(t)$ becomes unity.
A significant population inversion will be realized if an SQ coupling between the blue supermode $\hat{o}_1$ and the mechanical mode $\hat{b}$ can be strengthened. Such enhancement will be possible by setting the pump to blue sideband with its detuning $\Delta$ equal to $-\omega_m-J=-3J$ [considering the optimal transition condition $\omega_m=2J$ from Eq. (\ref{2})], reducing the factor $e^{i(\Delta+\omega_m+J)t}$ before $\hat{o}_1^\dagger$ in the last equation of (\ref{dynamic}) to a unity.
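For instance, the stationary part of the coefficient of $\hat{o}_1^\dagger$ in the last equation of (\ref{dynamic}) originates from the $e^{i\Delta t}$ component of $E_1(t)$ in Eq. (\ref{E}); a rough estimate of the resulting effective squeezing rate at $\Delta=-3J$ (with $\omega_m=2J$) is $g_m E|\Delta|/[\sqrt{2}(\Delta^2-J^2)]=3g_m E/(8\sqrt{2}J)$.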
\begin{figure}
\caption{(color online) Population inversion evolutions in a setup with the system parameters $\omega_m=22.8\gamma$, $g_m=5\times 10^{-5}
\end{figure}
To illustrate the general theory, we plot the population inversions in terms of the dimensionless parameters in Fig. 2.
These inversions are numerically calculated with Eq. (\ref{dynamic}). Figures 2(a) and 2(b) show that, under the above mentioned two optimal conditions, the inversions grow with time due to the SQ process. Increased gain rate $g$ and drive intensity $E$ serve as the additional factors to make them go up monotonously further. The inversion in a passive setup ($g=-\gamma$) can increase with time, in addition to reaching the steady states (not shown here) under lower drive intensity $E$ for this passive setup (in the absence of considerably high optical gain, steady states may exist under the condition $g_m|\alpha_i|\ll \gamma$ for a blue detuned drive, where $\alpha_i$ are the average cavity field amplitudes proportional to the drive intensity $E$; see e.g. a proposed setup in \cite{laser1}). The enhanced SQ process heats up the cavity material with increased thermal occupation $\langle \hat{b}^\dagger\hat{b}(t)\rangle$ different from the quantity $|\langle \hat{b}(t)\rangle|^2$, and the very strong light fields after a long period will make the system go beyond the current model of linear amplification and dissipation in accordance with the specific material properties.
\begin{figure}
\caption{(color online) Relations between the realized population inversion at $t=\gamma_m^{-1}
\end{figure}
As comparison we also present two other examples. The first one in Fig. 2(c) is to drive the passive cavity resonantly at $\Delta=0$, having $E_1(t)=iE/(2\sqrt{2}J) (e^{-iJt}-e^{iJt})$. The term with the factor $e^{-iJt}$ in the $E_1(t)$ provides enhanced SQ coupling between $\hat{o}_2$ and $\hat{b}$, while the one with $e^{iJt}$ enhances the BS coupling between $\hat{o}_1$ and $\hat{b}$, showing that the SQ effect will dominate in the end. The other example in Fig. 2(d) has $\Delta=J$, which happens to be one of the resonant points of the coupled system so that $E_1(t)=iE/(4\sqrt{2}J)(e^{-iJt}-2(1+iJt)e^{iJt})$. In this special situation, the extra linearly increasing factor overshadows the effects of the phase factors ($e^{\pm iJt}$) and nonetheless enhances the SQ coupling between $\hat{o}_2$ and $\hat{b}$, to give negative population inversions.
The influence of the cavity coupling intensity on supermode population inversion is illustrated in Fig. 3, showing the relations between the inversion (at the mechanical oscillation lifetime $\gamma^{-1}_m$) and the drive intensity for different $J$. To keep the optimal conditions in Fig. 2(a), the mechanical frequency $\omega_m$ is also assumed to be adjustable in the illustrations. One sees that, given a fixed drive intensity $E$, a lowered coupling $J$ actually increases the population inversion until it becomes small enough to have the two cavities almost decoupled. This is in stark contrast to the prediction of no phonon lasing in the regime $J<(g+\gamma)/2$ by a previous study \cite{jing}. That conclusion is based on the diagonalized form $(\omega_{+}+i\gamma_+)\hat{q}_2^\dagger\hat{q}_2+(\omega_{-}+i\gamma_-)\hat{q}_1^\dagger\hat{q}_1$ of the non-Hermitian Hamiltonian $(\omega_c-i\gamma) \hat{a}_1^\dagger\hat{a}_1+(\omega_c+ig) \hat{a}_2^\dagger\hat{a}_2+J(\hat{a}_1^\dagger \hat{a}_2+\hat{a}_2^\dagger\hat{a}_1)$ widely used in the study
of $\mathcal{PT}$ symmetric optical systems, suggesting that phonons induce
a transition between the modes $\hat{q}_1, \hat{q}_2$ with their gap $\omega_+-\omega_-$ disappearing when $J<(g+\gamma)/2$. In fact, these generally non-orthogonal modes (see more detailed discussion in \cite{ex2}) coincide with the supermodes $\hat{o}_1, \hat{o}_2$ only in a special situation of $g=-\gamma$; see the Supplementary Materials. Similar to the transitions between atomic levels, the action of the Hermitian Hamiltonian $H_2(t)$ can only cause an effective transition between two orthogonal states like $\hat{o}^\dagger_1|0\rangle$ and $\hat{o}^\dagger_2|0\rangle$, and the transition between the non-orthogonal states $\hat{q}^\dagger_1|0\rangle$ and $\hat{q}^\dagger_2|0\rangle$ with $\langle 0|\hat{q}_1\hat{q}_2^\dagger|0\rangle\neq 0$ is forbidden for arbitrary system parameters.
\begin{figure}
\caption{(color online) Thermal noise contribution to the supermode population inversion. The solid curves in (a), (b) and (c) are the portions of those in Fig. 2(a), with $g=\gamma$, $0.1\gamma$ and $-\gamma$, respectively. The dashed curves represent the contributions from the thermal noise drive $\hat{n}
\end{figure}
A unique property of the optical medium is that the quantum noises, which must be considered as mentioned before, can significantly affect the supermode populations. We illustrate this important fact in Fig. 4 showing the proportions of the thermal noise contribution in the results of Fig. 2(a). The detailed calculation of the noise contributions can be found in the Supplementary Materials. It is seen from the comparisons in Fig. 4 that, under the enhanced SQ coupling due to the properly chosen system parameters, the thermal noise acting as a random drive can predominantly contribute to the population inversions.
The contribution is proportional to the thermal occupation number $n_{th}$, a parameter of the environment.
This observation constitutes an interesting feature of the quantum noises which have been seldom discussed for coupled gain-loss systems
\cite{sc, arga, bhe, stable}.
With the above understandings, one will find how well the phonon laser can operate.
In analogy with an optical laser \cite{qo}, the phonon laser dynamical equations similar to those in \cite{l-4} are independently found as
\begin{eqnarray}
\dot{b}_s&=&(-\gamma_m-i\omega_m)b_s-(1/2) ig_m p,\nonumber\\
\dot{p} &=&(1/2) ig_m \Delta N(t) b_s+\big(1/2(g-\gamma)-2 iJ\big )p,
\label{laser}
\end{eqnarray}
where $b_s=\langle \hat{b}_s\rangle$ [the subscript ``$s$" indicates the stimulated phonon mode
to be distinguished from the thermal phonon mode in Eq. (\ref{dynamic})] and $p=\langle \hat{o}_2^\dagger\hat{o}_1\rangle$. Corresponding to the semi-classical treatment of atomic level transitions, by which the atomic levels are described quantum mechanically while the radiations are regarded as classical, we approximate the phonon laser mode in Eq. (\ref{laser}) as a mean field but insert the inversion $\Delta N(t)$ determined in a completely quantum way from Eq. (\ref{dynamic}) into the same equations. The amplification rates of the stimulated phonon field numerically found with the above equations are illustrated in Fig. 5. The threshold drive
intensity $E_{th}$ for realizing phonon field amplification becomes lower with increased gain rate $g$, which is upper bounded in reality due to gain saturation. Under the optimal transition and optimal population inversion condition as in Figs. 2(a)-2(b), adding optical gain medium into one cavity can enhance the phonon lasing further.
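As a rough consistency check (a sketch only, assuming $\omega_m=2J$, $g<\gamma$, and a slowly varying $\Delta N$, so that $p$ can be eliminated adiabatically in a frame rotating at $\omega_m$), Eq. (\ref{laser}) gives an effective mechanical gain rate $G\approx g_m^{2}\Delta N/[2(\gamma-g)]$, so that net amplification of $b_s$ requires $\Delta N\gtrsim 2\gamma_m(\gamma-g)/g_m^{2}$; increasing $g$ toward $\gamma$ lowers this estimate of the threshold (though the adiabatic elimination itself breaks down as $g\to\gamma$), in line with the trend shown in Fig. 5.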
\begin{figure}
\caption{(color online) Amplification of the stimulated phonon field intensity in terms of the ratio between their values at $t=\gamma_m^{-1}
\end{figure}
In summary, we have presented a dynamical approach to the phonon laser model of coupled active-passive resonators, which only uses a single approximation in Eq. (\ref{app}) to make the calculations of the optical supermode populations highly accurate to the system with $g_m\ll \omega_m$. Compared with a previous study based on the assumed steady states for such system \cite{jing}, we find three fundamental differences: (1) the phonon laser should operate under blue-detuned pump rather than the resonant and red-detuned ones considered in \cite{jing}---under blue-detuned drives the phonon laser performance simply betters with increased optical gain instead of reaching the optimum at the balanced gain and loss;
(2) the phonon laser can operate even better in the $\mathcal {PT}$ symmetry broken regime ($J<(g+\gamma)/2$) in contrast to its non-existence predicted in \cite{jing}; (3) under the conditions to realize the optimum lasing, quantum noises can significantly contribute to the supermode population inversion for magnifying the stimulated phonon field.
These features surely exist in the presence of the realistic gain saturation, though we use a model of fixed gain rate to illustrate them more clearly. According to our dynamical picture, the optimum phonon lasing in any similar setup (beyond those carrying optical gain) should be reached by choosing a proper pump detuning $\Delta$ and a suitable cavity coupling $J$, and the added optical gain highlighted in \cite{jing} will not help the laser action unless the pump detuning is within the appropriate range. For the experimentally realizable optomechanical systems ($g_m/\gamma \ll 1$), the approach can be applied to quantum dynamical processes in the blue-detuned regime, where the previously available approach of classical dynamics (see Sec. VIII in \cite{om}) is unable to deal with the problems involving quantum noises. This approach of linearizing the dynamical equations of
weak nonlinear systems without relying on their steady states may be applied to solve other dynamical problems.
M. X. acknowledges partial funding supports from NBRPC (Grant No. 2012CB921804) and NSFC (Grant No. 61435007).
\begin{widetext}
\section*{Supplementary Materials}
\subsection{Approximation for weakly coupled optomechanical systems}
\renewcommand{\theequation}{S-I-\arabic{equation}}
\setcounter{equation}{0}
We start from Eq. (8) in the main text. There taking the interaction picture with respect to the Hamiltonian $H_S(t)$ is equivalent to the
following factorization (see (2.189) in \cite{fac} or the appendices of \cite{fac1,fac2}):
\begin{eqnarray}
U(t)=\mathcal{T}\exp\{-i\int_0^t d\tau H(\tau)\}&=&\underbrace{\mathcal{T}\exp\{-i\int_0^t d\tau H_S(\tau)\}}_{U_0(t)}~\mathcal{T}\exp\{-i\int_0^t d\tau U_0^\dagger(\tau)\{H_{OM}+H_{SR}(\tau)\}U_0(\tau)\}\nonumber\\
&=&\mathcal{T}\exp\{-i\int_0^t d\tau H_S(\tau)\}~\mathcal{T}\exp \big\{-i\int_0^t d\tau \big(H_1(\tau)+H_2(\tau)\big)\big\},
\label{fact}
\end{eqnarray}
where $H_1(t)$ and $H_2(t)$ are given in Eq. (9) and Eq. (10) of the main text, respectively. The supermodes $\hat{o}_1, \hat{o}_2$ appearing in $H_1(t)$ and $H_2(t)$ are the orthogonal eigenstates of the Hermitian Hamiltonian $\omega _{c}\hat{a}_{1}^{\dagger }\hat{a}_{1}+\omega _{c}\hat{a}_{2}^{\dagger }\hat{a}_{2}+J(\hat{a}_{1}\hat{a}_{2}^{\dagger }+\hat{a}_{1}^{\dagger }\hat{a}_{2})$. Then, we take another factorization of the last time-ordered exponential
in (\ref{fact}) as \cite{fac1,fac2}
\begin{eqnarray}
\mathcal{T}\exp \big\{-i\int_0^t d\tau \big(H_1(\tau)+H_2(\tau)\big)\big\}=\underbrace{\mathcal{T}\exp\{-i\int_0^t d\tau U_2(t,\tau)H_1(\tau)U_2^\dagger(t,\tau)\}}_{U_1(t)}~\underbrace{\mathcal{T}\exp\{-i\int_0^t d\tau H_2(\tau)\}}_{U_2(t)},
\end{eqnarray}
where $U_2(t,\tau)=\mathcal{T}\exp\{-i\int_\tau^t dt' H_2(t')\}$. Because of the non-commutativity of the effective Hamiltonian $H_1(t)$ and $H_2(t)$,
the unitary operation $U_2(t,\tau)$ inside the first time-ordered exponential modifies the system
mode operators in $H_1(t)$, e.g.
\begin{eqnarray}
U_2(t,\tau)\hat{o}_1 U^\dagger_2(t,\tau)&=&\hat{o}_1+\big\{\frac{g_m}{2\omega_m}(e^{-i\omega_m t}-e^{-i\omega_m \tau})\hat{o}_1\hat{b}
-\frac{g_m}{2\omega_m}(e^{i\omega_m t}-e^{i\omega_m \tau})\hat{o}_1\hat{b}^\dagger\nonumber\\
&+& \frac{g_m}{2(2J-\omega_m)}(e^{-i(2J-\omega_m) t}-e^{-i(2J-\omega_m) \tau})\hat{o}_2\hat{b}\nonumber\\
&+&\frac{g_m}{2(2J+\omega_m)}(e^{-i(2J+\omega_m) t}-e^{-i(2J+\omega_m) \tau})\hat{o}_2\hat{b}^\dagger\big\}+\cdots
\label{corr}
\end{eqnarray}
From the above expression up to the first order of $g_m$, the corrections of the mode operators by the unitary transformation $U_2(t,\tau)$ are seen to be negligible in the weak coupling regime $g_m\ll \gamma, \omega_m, J$. Even under the resonant condition $\omega_m=2J$, their corrections in the order of the dimensionless quantity $g_m(t-\tau)$ will take effect only after a significantly long period of time $\gamma t$, given a coupling constant $g_m\sim 10^{-5}\gamma$ as in our illustrated examples. Neglecting the corrections in (\ref{corr}) will therefore not affect the results we illustrate in the main text. This neglect of the modification from $H_2(t)$, the only approximation used in the calculation of the supermode populations, is independent of the drive intensity $E$.
With this approximation the supermode populations can be rewritten as
\begin{eqnarray}
\langle \hat{o}_i^\dagger\hat{o}_i(t)\rangle &=&
\mbox {Tr}_{S,R}\big (U_2^\dagger(t)U_1^\dagger(t)U_0^\dagger(t)\hat{o}_i^\dagger\hat{o}_iU_0(t)U_1(t)U_2(t) \rho_S(0)\otimes \rho_R\big)\nonumber\\
&=&\mbox {Tr}_{S,R}\big (U_1^\dagger(t)U_0^\dagger(t)\hat{o}_i^\dagger\hat{o}_iU_0(t)U_1(t) U_2(t)\rho_S(0)\otimes \rho_R U_2^\dagger(t)\big)\nonumber\\
&=&\mbox {Tr}_{S,R}\big (U_1^\dagger(t)U_0^\dagger(t)\hat{o}_i^\dagger\hat{o}_iU_0(t)U_1(t) \rho_S(0)\otimes \rho_R \big),
\end{eqnarray}
where the action $U_2(t)$ leaves the quantum state $\rho_S(0)\otimes\rho_R$ invariant because the initial state $\rho_S(0)$ is the product of a cavity vacuum state and the mechanical thermal state $\sum_{n=0}^\infty n_{th}^n/(1+n_{th})^{n+1}|n\rangle_m\langle n|$, to have $H_2(t)|0\rangle_c=0$ for the cavity vacuum state $|0\rangle_c$. The evolved supermode populations $\langle \hat{o}_i^\dagger\hat{o}_i(t)\rangle$ are only due to the successive actions of $U_0(t)$ and $U_1(t)$.
\subsection{Difference between the optical supermodes and the eigenstates of a non-Hermitian Hamiltonian}
\renewcommand{\theequation}{S-II-\arabic{equation}}
\setcounter{equation}{0}
The model of $\mathcal {PT}$ symmetric quantum mechanics with active-passive coupler is often based on the non-Hermitian Hamiltonian
\begin{eqnarray}
&&H_{PT}=(\omega_c-i\gamma) \hat{a}_1^\dagger\hat{a}_1+(\omega_c+ig) \hat{a}_2^\dagger\hat{a}_2+J(\hat{a}_1^\dagger \hat{a}_2+\hat{a}_2^\dagger\hat{a}_1)\nonumber\\
&=& (\hat{a}_1^\dagger,\hat{a}_2^\dagger)\left(
\begin{array}{cc}
\omega_c-i\gamma & J \\
J & \omega_c+ig
\end{array}
\right)\left(
\begin{array}{c}
\hat{a}_1 \\
\hat{a}_2
\end{array}
\right)\nonumber\\
&=& (\hat{q}_1^\dagger,\hat{q}_2^\dagger)\left(
\begin{array}{cc}
-\frac{i(g+\gamma)+\sqrt{4J^2-(g+\gamma)^2}}{2J} & 1 \\
-\frac{i(g+\gamma)-\sqrt{4J^2-(g+\gamma)^2}}{2J} & 1
\end{array}
\right)^{T,-1}\left(
\begin{array}{cc}
\omega_c-i\gamma & J \\
J & \omega_c+ig
\end{array}
\right)\left(
\begin{array}{cc}
-\frac{i(g+\gamma)+\sqrt{4J^2-(g+\gamma)^2}}{2J} & 1 \\
-\frac{i(g+\gamma)-\sqrt{4J^2-(g+\gamma)^2}}{2J} & 1
\end{array}
\right)^{T}\left(
\begin{array}{c}
\hat{q}_1 \\
\hat{q}_2
\end{array}
\right)\nonumber\\
&=&(\hat{q}_1^\dagger,\hat{q}_2^\dagger)\left(
\begin{array}{cc}
\omega_c-\sqrt{J^2-(\frac{g+\gamma}{2})^2}+\frac{1}{2}(g-\gamma)i & \\
& \omega_c+\sqrt{J^2-(\frac{g+\gamma}{2})^2}+\frac{1}{2}(g-\gamma)i
\end{array}
\right)\left(
\begin{array}{c}
\hat{q}_1 \\
\hat{q}_2
\end{array}
\right)\nonumber\\
&=& \big(\omega_c-\sqrt{J^2-(\frac{g+\gamma}{2})^2}+\frac{1}{2}(g-\gamma)i\big)\hat{q}_1^\dagger\hat{q}_1+\big(\omega_c+\sqrt{J^2-(\frac{g+\gamma}{2})^2}+\frac{1}{2}(g-\gamma)i\big)\hat{q}^\dagger_2\hat{q}_2.
\end{eqnarray}
The notation ``$T,-1$" means first taking the transpose and then the inverse of the matrix. The eigenmodes of the non-Hermitian Hamiltonian $H_{PT}$ take the forms
\begin{eqnarray}
\hat{q}_1&=&-\frac{J}{\sqrt{4J^2-(g+\gamma)^2}}\hat{a}_1-\frac{i(g+\gamma)-\sqrt{4J^2-(g+\gamma)^2}}{2\sqrt{4J^2-(g+\gamma)^2}}\hat{a}_2\nonumber\\
\hat{q}_2&=&\frac{J}{\sqrt{4J^2-(g+\gamma)^2}}\hat{a}_1+\frac{i(g+\gamma)+\sqrt{4J^2-(g+\gamma)^2}}{2\sqrt{4J^2-(g+\gamma)^2}}\hat{a}_2,
\label{eigenstates}
\end{eqnarray}
from the above diagonalization procedure, and the eigenstates $\hat{q}_1^\dagger|0\rangle$ and $\hat{q}_2^\dagger|0\rangle$ are generally non-orthogonal. They can be defined as ``orthogonal" only following the definition of the inner product in $\mathcal {PT}$ symmetric quantum mechanics, i.e. $(\mathcal {PT}\hat{q}^\dagger_2|0\rangle)^T\cdot \hat{q}_1^\dagger|0\rangle=0$ where the $\mathcal {PT}$ transformation is applied to one of the vectors (see, e.g. \cite{pt}). The identification of these two eigenmodes with the orthogonal optical supermodes $\hat{o}_{1,2}=\hat{a}_1\pm \hat{a}_2$ (up to a normalization factor) is generally not true. Only for the passive setup ($g=-\gamma$) can the non-Hermitian Hamiltonian be diagonalized in terms of the orthogonal optical supermodes, i.e.
\begin{eqnarray}
H_{P}&=&(\omega_c-i\gamma) \hat{a}_1^\dagger\hat{a}_1+(\omega_c-i\gamma) \hat{a}_2^\dagger\hat{a}_2+J(\hat{a}_1^\dagger \hat{a}_2+\hat{a}_2^\dagger\hat{a}_1)\nonumber\\
&=& (\omega_c+J-i\gamma)\hat{o}_1^\dagger\hat{o}_1+(\omega_c-J-i\gamma)\hat{o}^\dagger_2\hat{o}_2.
\end{eqnarray}
The addition of the optical gain therefore leads to a significant difference. The eigenvalues of the non-Hermitian Hamiltonian $H_{PT}$ manifest themselves in light transport inside coupled active-passive systems. Using the differential equations
\begin{eqnarray}
&&\frac{d}{dt}a_1=-\gamma a_1-iJ a_2 +E e^{-i\Delta t},\\
&&\frac{d}{dt}a_2=g a_2-iJ a_1
\end{eqnarray}
obtained by taking the averages of Eqs. (1)-(2) of the main text (with $g_m=0$), one finds two transmission resonances (two values of $\Delta$ at which $|a_2(t)|^2$ is peaked) when $J>(g+\gamma)/2$ and a single resonance when $J<(g+\gamma)/2$, from the solution
\begin{eqnarray}
\left(
\begin{array}{c}
a_1(t)\\
a_2(t)
\end{array}
\right)=\int_0^t d\tau\exp\{\left(
\begin{array}{cc}
-\gamma & -iJ \\
-iJ & g
\end{array}
\right)(t-\tau)\}\left(
\begin{array}{c}
E e^{-i\Delta \tau}\\
0
\end{array}
\right).
\end{eqnarray}
These resonances have been observed in experiments \cite{ex6, ex7}, and reflect the real parts of the eigenvalues of the non-Hermitian Hamiltonian $H_{PT}$.
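As a minimal numerical illustration of this behavior (with arbitrary parameter values, not those of the main text), one can integrate the averaged equations above with a drive $E e^{-i\Delta t}$ and scan the detuning, recording the final transmitted intensity:
\begin{verbatim}
# Illustrative parameters only; J > (g+gamma)/2 gives two peaks,
# reflecting the real parts of the eigenvalues of H_PT.
import numpy as np

gamma, g, J, E = 1.0, 0.2, 2.0, 1.0
dt, T = 1e-3, 30.0
M = np.array([[-gamma, -1j * J], [-1j * J, g]])

def final_intensity(Delta):
    a = np.zeros(2, dtype=complex)       # (a1, a2)
    for n in range(int(T / dt)):
        drive = np.array([E * np.exp(-1j * Delta * n * dt), 0.0])
        a = a + dt * (M @ a + drive)     # explicit Euler step
    return abs(a[1]) ** 2

detunings = np.linspace(-2.5 * J, 2.5 * J, 101)
intensities = [final_intensity(d) for d in detunings]
# the maxima over `detunings` sit near the transmission resonances
\end{verbatim}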
When it comes to the phonon-induced transition between coupled cavity modes in the presence of the optomechanical coupling $g_m\neq 0$, one should clarify whether such a transition could take place between the states $\hat{q}_1^\dagger|0\rangle$ and $\hat{q}_2^\dagger|0\rangle$. Since they are non-orthogonal, with an overlap $\langle 0|\hat{q}_1\hat{q}_2^\dagger|0\rangle\neq 0$, a transition between these two eigenmodes, viewed as superpositions of the cavity modes $\hat{a}_1$ and $\hat{a}_2$, is impossible; otherwise a trivial Hamiltonian in the form of the identity operator could already cause automatic transitions between them, contradicting the available observations.
In this sense the action of the Hermitian Hamiltonian $H_{OM}=-g_m\hat{a}_1^\dagger\hat{a}_1(\hat{b}+\hat{b}^\dagger)$ of the optomechanical coupling cannot lead to a transition between two non-orthogonal supermode states. In phonon lasing the stimulated phonon field is amplified due to the effective transition between the orthogonal states $\hat{o}_1^\dagger|0\rangle$ and $\hat{o}_2^\dagger|0\rangle$ under the action of $H_{OM}$.
\subsection{Calculation of the optical supermode populations}
\renewcommand{\theequation}{S-III-\arabic{equation}}
\setcounter{equation}{0}
The main equations for determining the supermode populations, Eq. (14) in the main text, can be found following Eq. (11.2.33) in \cite{book} (in the absence of the amplification part) or Eq. (A4) in \cite{bhe}. Their extended forms, including the differential equations for the conjugate operators, read
\begin{eqnarray}
\frac{d}{dt}\hat{o}_1&=& \frac{1}{2}(g-\gamma)\hat{o}_1+\frac{1}{2}(g+\gamma) e^{2iJt}\hat{o}_2+ ig_m E_1(t)e^{iJt}(\hat{b}
e^{-i\omega_m t}+\hat{b}^\dagger e^{i\omega_m t}) \nonumber\\
&+&\underbrace{\big(\gamma E_1(t)-g E_2(t)\big)e^{iJt}}_{\lambda_1(t)}+\underbrace{\sqrt{g}
e^{iJt}e^{i\omega_c t}\hat{\xi}^\dagger_a(t)+\sqrt{\gamma}
e^{iJt}e^{i\omega_c t}\hat{\xi}_p(t)}_{\hat{n}_1(t)}, \nonumber \\
\frac{d}{dt}\hat{o}^\dagger_1&=& \frac{1}{2}(g-\gamma)\hat{o}^\dagger_1+\frac{1}{2}(g+\gamma) e^{-2iJt}\hat{o}^\dagger_2- ig_m
E^\ast_1(t)e^{-iJt}(\hat{b}e^{-i\omega_m t}+\hat{b}^\dagger e^{i\omega_m t})\nonumber\\
&+& \big(\gamma E^\ast_1(t)-g E^\ast_2(t)\big)e^{-iJt}+\sqrt{g}e^{-iJt}e^{-i\omega_c t}\hat{\xi}_a(t)+
\sqrt{\gamma}e^{-iJt}e^{-i\omega_c t}\hat{\xi}^\dagger_p(t), \nonumber \\
\frac{d}{dt}\hat{o}_2&=& \frac{1}{2}(g+\gamma)e^{-2iJt}\hat{o}_1+\frac{1}{2}(g-\gamma)\hat{o}_2+ig_m E_1(t)e^{-iJt}(\hat{b}e^{-i\omega_m t}+\hat{b}^\dagger e^{i\omega_m t})\nonumber\\
&+& \underbrace{\big(\gamma E_1(t)+g E_2(t)\big)e^{-iJt}}_{\lambda_2(t)}+\underbrace{\sqrt{g} e^{-iJt}e^{i\omega_c t}\hat{\xi}^\dagger_a(t)-\sqrt{\gamma}e^{-iJt}e^{i\omega_c t}\hat{\xi}_p(t)}_{\hat{n}_2(t)}, \nonumber \\
\frac{d}{dt}\hat{o}^\dagger_2&=& \frac{1}{2}(g+\gamma)e^{2iJt}\hat{o}^\dagger_1+\frac{1}{2}(g-\gamma)\hat{o}^\dagger_2-ig_m
E^\ast_1(t)e^{iJt}(\hat{b}e^{-i\omega_m t}+\hat{b}^\dagger e^{i\omega_m t})\nonumber\\
&+& \big(\gamma E^\ast_1(t)+g E^\ast_2(t)\big)e^{iJt}+
\sqrt{g} e^{iJt}e^{-i\omega_c t}\hat{\xi}_a(t)-\sqrt{\gamma}e^{iJt}e^{-i\omega_c t}\hat{\xi}^\dagger_p(t),
\nonumber \\
\frac{d}{dt}\hat{b}&=&-\gamma_m \hat{b}+ig_m E^\ast_1(t)e^{i\omega_m t}(\hat{
o}_1 e^{-iJt}+\hat{o}_2 e^{iJt})+ ig_m E_1(t)e^{i\omega_m t}(\hat{o}
^\dagger_1 e^{iJt}+\hat{o}^\dagger_2 e^{-iJt}) \nonumber \\
&+&\underbrace{\sqrt{2\gamma_m}e^{i\omega_m t}\hat{\xi}_m(t)}_{\hat{n}_3(t)}+\underbrace{2ig_m|E_1(t)|^2e^{i\omega_m t}}_{\lambda_3(t)},
\nonumber \\
\frac{d}{dt}\hat{b}^\dagger&=&-\gamma_m \hat{b}^\dagger -ig_m
E^\ast_1(t)e^{-i\omega_m t}(\hat{o}_1 e^{-iJt}+\hat{o}_2 e^{iJt})-ig_m
E_1(t)e^{-i\omega_m t}(\hat{o}^\dagger_1 e^{iJt}+\hat{o}^\dagger_2 e^{-iJt})
\nonumber \\
&+&\sqrt{2\gamma_m}e^{-i\omega_m t}\hat{\xi}^\dagger_m(t)-2ig_m|E_1(t)|^2e^{-i\omega_m t}, \label{main}
\end{eqnarray}
where $E_1(t), E_2(t)$ are given in Eq. (7) of the main text.
We write the above equations in terms of a $6\times 6$ dynamical matrix $\hat{M}(t)$ as
\begin{eqnarray}
\frac{d}{dt}\hat{\vec{c}}(t)=\hat{M}(t)\hat{\vec{c}}(t)+\vec{\lambda}(t)+\hat{\vec{n}}(t),
\label{equations}
\end{eqnarray}
where
\begin{eqnarray}
\hat{\vec{c}}(t)&=&(\hat{o}_1(t),\hat{o}_1^\dagger(t),\hat{o}_2(t),\hat{
o}_2^\dagger(t),\hat{b}(t),\hat{b}^\dagger(t))^T,\nonumber\\
\vec{\lambda}(t)&=& ( \lambda_1(t), \lambda^\ast_1(t),\lambda_2(t), \lambda^\ast_2(t),\lambda_3(t),\lambda^\ast_3(t))^T,\nonumber\\
\hat{\vec{n}}(t)&=&(\hat{n}_1(t),\hat{n}_1^\dagger(t),\hat{n}_2(t),\hat{n}_2^\dagger(t),\hat{n}
_3(t),\hat{n}_3^\dagger(t))^T.
\end{eqnarray}
The enhancement of beam-splitter type coupling or squeezing type coupling, as we describe in the main text, can be realized by adjusting the elements of the dynamical matrix $\hat{M}(t)$.
The general solution of the above dynamical equations is
\begin{eqnarray}
\hat{\vec{c}}(t)&=&\mathcal{T}\exp \{\int_0^t d\tau \hat{M}(\tau)\}\hat{\vec{c}}
(0)+\int_0^t d\tau \mathcal{T}\exp \{\int_\tau^t dt^{\prime} \hat{M}(t^{\prime})\}\big(\vec{\lambda}(\tau)+\hat{\vec{n}}(\tau)\big)= \hat{\vec{c}}_s(t)+\vec{c}_{ds}(t)+\hat{\vec{c}}_n(t).
\label{sol}
\end{eqnarray}
The operator $\mathcal{T}\exp \{\int_0^t d\tau \hat{M}(\tau)\}$ is numerically calculated as the product $\prod_{i=N-1}^0 (I+\hat{M}(\tau_i)h)$, where the range $[0,t]$ is divided into $N$ pieces with the step size $h$. The step size $h$ is chosen to be sufficiently small so that the matrix product becomes insensitive to it. The time ordered exponential in the general form $\mathcal{T}\exp\{\int_\tau^t dt' \hat{M}(t^{\prime})\}$ can be found with such matrix products and their inverses, and is represented as a $6\times 6$ matrix
\begin{eqnarray}
\mathcal{T}\exp\{\int_\tau^t dt' \hat{M}(t^{\prime})\}= \left(
\begin{array}{cccccc}
d_{11}(t,\tau) & d_{12}(t,\tau) & \cdots & & & d_{16}(t,\tau) \\
d_{21}(t,\tau) & d_{22}(t,\tau) & \cdots & & & d_{26}(t,\tau) \\
\vdots & \vdots & \ddots & & & \vdots \\
d_{61}(t,\tau) & d_{62}(t,\tau) & \cdots & & & d_{66}(t,\tau)
\end{array}
\right).
\label{t-exp}
\end{eqnarray}
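A minimal numerical sketch of this ordered product (with an assumed callable \texttt{M} returning the dynamical matrix $\hat{M}(t)$, not the code used for the figures) reads:
\begin{verbatim}
# T exp{ int_tau^t dt' M(t') } approximated by the ordered product
# prod_i (I + M(tau_i) h), with later times acting from the left.
import numpy as np

def time_ordered_exp(M, tau, t, h=1e-4):
    """M: callable returning the 6x6 dynamical matrix at time t'."""
    D = np.eye(6, dtype=complex)
    n = max(1, int(round((t - tau) / h)))
    for i in range(n):
        D = (np.eye(6, dtype=complex) + M(tau + i * h) * h) @ D
    return D    # the matrix with elements d_{jk}(t, tau)
\end{verbatim}
As stated above, the step size $h$ is reduced until the product becomes insensitive to it.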
There are three terms in the solution (\ref{sol}) of Eq. (\ref{main}). The supermode populations from the first term are obtained by taking the average of $\hat{o}_{i,s}^\dagger\hat{o}_{i,s}(t)$ with respect to the system's initial state $|0\rangle_c\langle 0|\otimes \sum_n\frac{n_{th}^n}{(1+n_{th})^{n+1}}|n\rangle_m\langle n|$, the product of the cavity vacuum state and the mechanical thermal state, where $n_{th}$ is the thermal reservoir mean occupation number. This part of the contribution is found as
\begin{eqnarray}
\langle\hat{o}^\dagger_{1,s}\hat{o}_{1,s}(t)\rangle=
d_{21}(t,0)d_{12}(t,0)+d_{23}(t,0)d_{14}(t,0)+d_{25}(t,0)d_{16}(t,0)(n_{th}+1)+d_{26}(t,0)d_{15}(t,0)n_{th},
\label{s1}
\end{eqnarray}
\begin{eqnarray}
\langle\hat{o}^\dagger_{2,s}\hat{o}_{2,s}(t)\rangle=
d_{41}(t,0)d_{32}(t,0)+d_{43}(t,0)d_{34}(t,0)+d_{45}(t,0)d_{36}(t,0)(n_{th}+1)+d_{46}(t,0)d_{35}(t,0)n_{th}.
\label{s2}
\end{eqnarray}
Together with the displacement terms due to the action of $U_0(t)$, the second term in the solution, which stems from the pure drive $\vec{\lambda}(t)$, gives rise to
the following contribution
\begin{eqnarray}
\langle\hat{o}^\dagger_{1,ds}\hat{o}_{1,ds}(t)\rangle =\big|E_1(t)+E_2(t)+o_{1,ds}(t)\big|^2,
~~~~\langle\hat{o}^\dagger_{2,ds}\hat{o}_{2,ds}(t)\rangle =\big|E_1(t)-E_2(t)+o_{2,ds}(t)\big|^2,
\end{eqnarray}
where
\begin{eqnarray}
o_{1,ds}(t) &=&\int_0^t d\tau \big\{ d_{11}(t,\tau)\big(\gamma E_1(\tau)-g E_2(\tau)\big)e^{iJ\tau}+d_{12}(t,\tau)\big(\gamma E^\ast_1(\tau)-g E^\ast_2(\tau)\big)e^{-iJ\tau}\nonumber\\
&+&d_{13}(t,\tau)\big(\gamma E_1(\tau)+g E_2(\tau)\big)e^{-iJ\tau}+d_{14}(t,\tau)\big(\gamma E^\ast_1(\tau)+g E^\ast_2(\tau)\big)e^{iJ\tau}\nonumber\\
&+&2ig_m d_{15}(t,\tau)|E_2(\tau)|^2e^{i\omega_m \tau}-2ig_md_{16}(t,\tau)|E_2(\tau)|^2e^{-i\omega_m \tau}\big\},
\end{eqnarray}
\begin{eqnarray}
o_{2,ds}(t)&= &\int_0^t d\tau \big\{ d_{31}(t,\tau)\big(\gamma E_1(\tau)-g E_2(\tau)\big)e^{iJ\tau}+d_{32}(t,\tau)\big(\gamma E^\ast_1(\tau)-gE^\ast_2(\tau)\big)e^{-iJ\tau}\nonumber\\
&+&d_{33}(t,\tau)\big(\gamma E_1(\tau)+g E_2(\tau)\big)e^{-iJ\tau}+d_{34}(t,\tau)\big(\gamma E^\ast_1(\tau)+g E^\ast_2(\tau)\big)e^{iJ\tau}\nonumber\\
&+&2ig_m d_{35}(t,\tau)|E_2(\tau)|^2e^{i\omega_m \tau}-2ig_md_{36}(t,\tau)|E_2(\tau)|^2e^{-i\omega_m \tau}\big\}.
\end{eqnarray}
Here the contributions from the drive terms $\vec{\lambda}(t)$ to the cavity modes add to those of $E_1(t)$ and $E_2(t)$, which are obtained without amplification and dissipation, to give the correct ones under the decoherence effects. The effect of the drive terms is more obvious when comparing the results with and without decoherence for $g_m=0$. Finally, the contribution from the noise drive terms $\hat{\vec{n}}(t)$ can be found with the relations $\langle \hat{\xi}_i(t) \hat{\xi}_i^\dagger (t')\rangle=\delta(t-t')$, $\langle \hat{\xi}^\dagger_i(t) \hat{\xi}_i (t')\rangle=0$ for $i=p,a$ and $\langle \hat{\xi}_m(t) \hat{\xi}_m^\dagger (t')\rangle=(n_{th}+1)\delta(t-t')$, $\langle \hat{\xi}^\dagger_m(t) \hat{\xi}_m (t')\rangle=n_{th}\delta(t-t')$ as the expectation values over the reservoir states, to have
\begin{eqnarray}
\langle \hat{o}^\dagger_{1,n}(t)\hat{o}_{1,n}(t)\rangle&=&
\gamma\int_0^t d\tau d_{21}(t,\tau)d_{12}(t,\tau)+g\int_0^t d\tau
d_{22}(t,\tau)d_{11}(t,\tau)+\gamma\int_0^t d\tau
d_{23}(t,\tau)d_{14}(t,\tau) \nonumber \\
&+&g\int_0^t d\tau d_{24}(t,\tau)d_{13}(t,\tau)+2\gamma_m
(n_{th}+1)\int_0^t d\tau d_{25}(t,\tau)d_{16}(t,\tau) \nonumber \\
&+&2\gamma_m n_{th}\int_0^t d\tau d_{26}(t,\tau)d_{15}(t,\tau),
\label{n1}
\end{eqnarray}
\begin{eqnarray}
\langle \hat{o}^\dagger_{2,n}(t)\hat{o}_{2,n}(t)\rangle&=& \gamma\int_0^t d\tau d_{41}(t,\tau)d_{32}(t,\tau)+g\int_0^t d\tau
d_{42}(t,\tau)d_{31}(t,\tau)+\gamma\int_0^t d\tau
d_{43}(t,\tau)d_{34}(t,\tau) \nonumber \\
&+&g\int_0^t d\tau d_{44}(t,\tau)d_{33}(t,\tau)+2\gamma_m
(n_{th}+1)\int_0^t d\tau d_{45}(t,\tau)d_{36}(t,\tau) \nonumber \\
&+&2\gamma_m n_{th}\int_0^t d\tau d_{46}(t,\tau)d_{35}(t,\tau).
\label{n2}
\end{eqnarray}
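As a minimal numerical sketch (with hypothetical array names), the $\tau$ integrals in Eq. (\ref{n1}) can be assembled from the stored matrix elements $d_{jk}(t,\tau)$ sampled on a grid of $\tau$ values; Eq. (\ref{n2}) follows analogously with both row indices shifted by two:
\begin{verbatim}
# d[j, k, :] holds d_{(j+1)(k+1)}(t, taus) with zero-based indices.
import numpy as np

def noise_population_o1(d, taus, gamma, g, gamma_m, n_th):
    integrand = (gamma * d[1, 0] * d[0, 1]
                 + g * d[1, 1] * d[0, 0]
                 + gamma * d[1, 2] * d[0, 3]
                 + g * d[1, 3] * d[0, 2]
                 + 2 * gamma_m * (n_th + 1) * d[1, 4] * d[0, 5]
                 + 2 * gamma_m * n_th * d[1, 5] * d[0, 4])
    # populations are real; tiny imaginary residues are numerical
    return np.real(np.trapz(integrand, taus))
\end{verbatim}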
Adding the three parts of contributions together gives the total supermode populations. The detailed contributions from the different parametric processes or different noises can be identified by the evolutions of the matrix elements $d_{ij}(t,0)$, which also specify the temporal distributions of the general matrix elements $d_{ij}(t,\tau)$. For a passive setup without optical gain, the amplification noise contributions proportional to $g$ in Eqs. (\ref{n1}) and (\ref{n2}) are replaced by contributions from another dissipation noise channel. Under the conditions of Figs. 2(a) and 2(b) of the main text, the term with the factor $(n_{th}+1)$ in Eq. (\ref{n1}) is found to contribute significantly to the blue supermode population.
\end{widetext}
\end{document}
|
\begin{document}
\twocolumn[
\icmltitle{Time-Series Anomaly Detection \\with Implicit Neural Representation}
\icmlsetsymbol{equal}{*}
\begin{icmlauthorlist}
\icmlauthor{Kyeong-Joong Jeong}{equal,yyy}
\icmlauthor{Yong-Min Shin}{equal,yyy}
\end{icmlauthorlist}
\icmlaffiliation{yyy}{Computational Science and Engineering, Yonsei University, Seoul, South Korea}
\icmlcorrespondingauthor{Kyeong-Joong Jeong}{[email protected]}
\icmlcorrespondingauthor{Yong-Min Shin}{[email protected]}
\vskip 0.3in
]
\printAffiliationsAndNotice{\icmlEqualContribution}
\begin{abstract}
Detecting anomalies in multivariate time-series data is essential in many real-world applications. Recently, various deep learning-based approaches have shown considerable improvements in time-series anomaly detection. However, existing methods still have several limitations, such as long training time due to their complex model designs or costly tuning procedures to find optimal hyperparameters (e.g., sliding window length) for a given dataset. In our paper, we propose a novel method called Implicit Neural Representation-based Anomaly Detection (INRAD). Specifically, we train a simple multi-layer perceptron that takes time as input and outputs corresponding values at that time. Then we utilize the representation error as an anomaly score for detecting anomalies. Experiments on five real-world datasets demonstrate that our proposed method outperforms other state-of-the-art methods in performance, training speed, and robustness.
\end{abstract}
\section{Introduction}
Time-series data is frequently used in various real-world systems, especially in multivariate scenarios such as server machines, water treatment plants, spacecraft, etc. Detecting an anomalous event in such time-series data is crucial to managing those systems~\cite{su2019smdomnianomaly,mathur2016swatwadi,hundman2018smapmsl,6684530,blazquez2021review}. To solve this problem, several classical approaches have been developed in the past~\cite{fox1972outliers,zhang2005network,ma2003time,liu2008isolationforest}. However, due to the limited capacity of their approaches, they could not fully capture complex, non-linear, and high-dimensional patterns in the time-series data.
Recently, various unsupervised approaches employing deep learning architectures have been proposed. Such works include adopting architectures such as recurrent neural networks (RNN)~\cite{hundman2018smapmsl}, variational autoencoders (VAE)~\cite{xu2018unsupervised}, generative adversarial networks (GAN)~\cite{madgan}, graph neural networks (GNN)~\cite{gdn}, and combined architectures~\cite{zong2018dagmm,su2019smdomnianomaly,shen2020thoc,audibert2020usad,park2018lstmvae}. These deep learning approaches have brought significant performance improvements in time-series anomaly detection. However, most deep learning-based methods have shown several downsides. First, they require a long training time due to complex calculations, hindering applications where fast and efficient training is needed. Second, they need a significant amount of effort to tune model hyperparameters (e.g., sliding window size) for a given dataset, which can be costly in real-world applications.
\begin{figure}
\caption{Implicit neural representation for multivariate time-series data.}
\label{Figure1INRinTimeseries}
\end{figure}
In our paper, we propose Implicit Neural Representation-based Anomaly Detection (INRAD), a novel approach that performs anomaly detection in multivariate time-series data by adopting implicit neural representation (INR). Figure~\ref{Figure1INRinTimeseries} illustrates the approach of INR in the context of multivariate time-series data. Unlike conventional approaches where the values are passed as input to the model (usually processed via a sliding window, etc.), we directly input time to a multi-layer perceptron (MLP) model. Then the model tries to represent the values at that time, which is done by minimizing a mean-squared loss between the model output and the ground truth values. In other words, we train an MLP model to represent the time-series data itself. Based on our observation that the INR represents abnormal data relatively poorly compared to normal data, we use the representation error as the anomaly score for anomaly detection. Adopting such a simple architecture design using an MLP naturally results in a fast training time. Additionally, we propose a temporal encoding technique that improves the efficiency with which the model represents time-series data, resulting in faster convergence.
In summary, the main contributions of our work are:
\begin{itemize}
\item We propose INRAD, a novel time-series anomaly detection method that only uses a simple MLP which maps time into its corresponding value.
\item We introduce a temporal encoding technique to represent time-series data efficiently.
\item We conduct extensive experiments while using the same set of hyperparameters over all five real-world benchmark datasets. Our experimental results show that our proposed method outperforms previous state-of-the-art methods in terms of not only accuracy, but also training speed in a highly robust manner.
\end{itemize}
\section{Related Work}
In this section, we review previous works for time-series anomaly detection and implicit neural representation.
\subsection{Time-Series Anomaly Detection}
Since the first study on this topic was conducted by \cite{fox1972outliers}, time-series anomaly detection has been a topic of interest over the past decades~\cite{6684530,blazquez2021review}. Traditionally, various methods, including autoregressive moving average (ARMA)~\cite{galeano2006outlier} and autoregressive integrated moving average (ARIMA) model~\cite{zhang2005network}-based approaches, one-class support vector machine-based method~\cite{ma2003time} and isolation-based method \cite{liu2008isolationforest} have been widely introduced for time-series anomaly detection. However, these classical methods either fail to capture complex and non-linear temporal characteristics or are very sensitive to noise, making them infeasible to be applied on real-world datasets.
Recently, various unsupervised deep learning-based approaches have successfully improved performance in complex multivariate time-series anomaly detection tasks. As one of the well-known unsupervised models, autoencoder (AE)-based approaches~\cite{sakurada2014anomaly} capture the non-linearity between variables. Recurrent neural networks (RNNs) are a popular architecture choice used in various methods~\cite{hundman2018smapmsl,malhotra2016lstm} for capturing temporal dynamics of time series data. Generative models are also used in the literature, namely generative adversarial networks~\cite{madgan} and variational autoencoder (VAE)-based approaches~\cite{xu2018unsupervised}. A graph neural network-based approach~\cite{gdn} has also been proposed to capture the complex relationship between variables in the multivariate setting. Furthermore, methodologies combining multiple architectures have also been proposed, such as AE with the Gaussian mixture model~\cite{zong2018dagmm} or AE with GANs~\cite{audibert2020usad}, stochastic RNN with a planar normalizing flow~\cite{su2019smdomnianomaly}, deep support vector data description~\cite{deepsvdd} with dilated RNN~\cite{drnn}, and VAE with long short term memory (LSTM) networks~\cite{park2018lstmvae}.
Despite the remarkable improvements brought by the above deep learning-based approaches, most of them produce good results at the expense of training speed and generalizability. Such long training times, together with costly hyperparameter tuning for each dataset, make them difficult to apply in practical scenarios~\cite{audibert2020usad}.
\subsection{Implicit Neural Representation}
Recently, implicit neural representations (or coordinate-based representations) have gained popularity, mainly in 3D deep learning. Generally, an MLP is trained to represent a single data instance by mapping a coordinate (e.g., $xyz$-coordinates) to the corresponding values of the data. This approach has been proven to have expressive representation capability with memory efficiency. As one of the well-known approaches, occupancy networks~\cite{mescheder2019occunet} train a binary classifier to predict whether a point is inside or outside the data to represent. DeepSDF~\cite{park2019deepsdf} directly regresses a signed distance function that returns a signed distance to the closest surface when the position of a 3D point is given. Instead of occupancy networks or signed distance functions, NeRF~\cite{mildenhall2020nerf} trains an MLP that maps a spatial coordinate and viewing direction to the color and density of the scene to be represented. SIREN~\cite{sitzmann2020siren} proposes using sinusoidal activation functions in MLPs to facilitate high-resolution representations. Since then, various applications, including view synthesis~\cite{martin2021nerfw} and object appearance~\cite{saito2019pifu}, have been widely studied.
However, the application of INR to time-series data has been relatively underdeveloped. Representation of time-varying 3D geometry has been explored~\cite{niemeyer2019occupancyflow}, but they do not investigate multivariate time-series data. Although SIREN~\cite{sitzmann2020siren} showed the capability to represent audio, its focus was limited to the high-quality representation of the input signals. To the best of our knowledge, this is the first work to use INR to solve the problem of time-series anomaly detection.
\section{INRAD Framework}
In this section, we define the problem that we aim to solve, and then we present our proposed INRAD based on the architecture proposed by \cite{sitzmann2020siren}. Next, we describe our newly designed temporal encoding technique in detail. Finally, we describe the loss function to make our model represent input time signals and describe the anomaly score used during the detection procedure. Figure~\ref{Figure2Overview} describes the overview of the proposed method.
\begin{figure*}
\caption{The overview of the proposed Implicit Neural Representation-based Anomaly Detection (INRAD). (a) From the given time-series data, we perform temporal encoding and represent time as a real-valued vector. (b) An MLP using periodic activation functions represents the given data by mapping the time processed by temporal encoding to the corresponding values. (c) After the model converges, we calculate the representation error and use this as the anomaly score for detection.}
\label{Figure2Overview}
\end{figure*}
\subsection{Problem Statement}
In this section, we formally state the problem of multivariate time-series anomaly detection as follows.
We first denote multivariate time-series data as $X = \{({t_1}, \mathbf{x}_{t_1}), ({t_2}, \mathbf{x}_{t_2}), ({t_3}, \mathbf{x}_{t_3}),...,({t_N}, \mathbf{x}_{t_N})\}$, where $t_i$ denotes a timestamp, $\mathbf{x}_{t_i}$ denotes the corresponding values at that timestamp, and $N$ denotes the number of observed values. As we focus on multivariate data, $\mathbf{x}_{t_{i}}$ is a $d$-dimensional vector representing multiple signals. The goal of time-series anomaly detection is to output a sequence, $Y = \{y_{t_1}, y_{t_2}, y_{t_3},...,y_{t_N}\}$, where $y_{t_i} \in \{0, 1\}$ denotes the abnormal or normal status at $t_i$. In general, 1 indicates the abnormal state while 0 indicates the normal state.
\subsection{Implicit Neural Representation of Time-Series Data}
To represent a given time-series data, we adopt the architecture proposed by \cite{sitzmann2020siren}, which leverages periodic functions as activation functions in the MLP model, resulting in a simple yet powerful model capable of representing various signals, including audio. After preprocessing the time coordinate input via an encoding function $\phi$, our aim is to learn a function $f$ that maps the encoded time $\phi(t_i)$ to its corresponding value $\mathbf{x}_{t_i}$ of the data.
We can describe the MLP $f$ by first describing each fully-connected layer and stacking those layers to get the final architecture. Formally, the $l$th fully-connected layer $f_{l}$ with hidden dimension $m_l$ can be generally described as $f_{l}(\mathbf{h}_{l-1}) = \sigma(\mathbf{W}_{l}\mathbf{h}_{l-1} + \mathbf{b}_l)$, where $\mathbf{h}_{l-1} \in \mathbb{R}^{m_{l-1}}$ represents the output of the previous layer $f_{l-1}$, $\mathbf{W}_{l} \in \mathbb{R}^{m_{l} \times m_{l-1}}$ and $\mathbf{b}_{l} \in \mathbb{R}^{m_{l}}$ are learnable weights and biases, respectively, and $\sigma$ is a non-linear activation function. Here, sine functions are used as $\sigma$, which enables accurate representation of various signals. In practice, the pre-activation is multiplied by a scalar $\omega_0$, so that the $l$th layer becomes $f_l(\mathbf{h}_{l-1}) = \sin (\omega_0 \cdot \mathbf{W}_{l}\mathbf{h}_{l-1} + \mathbf{b}_l)$, in order for the input to span multiple periods of the sine function.
Finally, by stacking a total of $L$ layers with an additional linear transformation at the end, we now have our model $f(\phi(t_i)) = \mathbf{W}(f_{L} \circ f_{L-1} \circ \cdots \circ f_1)(\phi(t_i)) + \mathbf{b}$ which maps the input $t_i$ to the output $f(\phi(t_i)) \in \mathbb{R}^{d}$.
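As an illustration, a minimal PyTorch sketch of this architecture (not the released implementation; the layer sizes and $\omega_0$ values anticipate the experimental setup reported later, and the weight initialization is simplified) is:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class SirenLayer(nn.Module):
    # f_l(h) = sin(omega_0 * W h + b)
    def __init__(self, in_dim, out_dim, omega_0):
        super().__init__()
        self.weight = nn.Parameter(
            torch.empty(out_dim, in_dim).uniform_(-1, 1) / in_dim)
        self.bias = nn.Parameter(torch.zeros(out_dim))
        self.omega_0 = omega_0
    def forward(self, h):
        return torch.sin(self.omega_0 * F.linear(h, self.weight)
                         + self.bias)

class INR(nn.Module):
    # maps the encoded time phi(t_i) to a d-dimensional value
    def __init__(self, in_dim, d, hidden=256, n_layers=3,
                 first_omega=3000.0, omega=30.0):
        super().__init__()
        layers = [SirenLayer(in_dim, hidden, first_omega)]
        layers += [SirenLayer(hidden, hidden, omega)
                   for _ in range(n_layers - 1)]
        self.body = nn.Sequential(*layers)
        self.head = nn.Linear(hidden, d)   # final linear map
    def forward(self, t_enc):
        return self.head(self.body(t_enc))
\end{verbatim}
With the temporal encoding introduced next, \texttt{in\_dim} is 6 and \texttt{d} is the number of monitored metrics.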
\subsection{Temporal Encoding}\label{temporal encoding}
As INR has been mainly developed to represent 2D or 3D graphical data, encoding the time coordinate for INR has rarely been studied. Compared to graphical data, for which the number of points in each dimension is fairly limited to around a few thousand, the number of timestamps is generally much larger and varies among different datasets. Also, training and test data need to be considered with regard to their chronological order (training data usually comes first). These observations about real-world time-series data motivate us to design a new encoding strategy such that 1) the difference between $\phi(t_i)$ and $\phi(t_{i+1})$ is not affected by the length of the time sequence, 2) the chronological order between train and test data is preserved after encoding, and 3) timestamps from real-world data are naturally represented using their standard time scale rather than relying on the sequential index of the time-series data. These desired properties are not satisfied by the encoding strategy applied in~\cite{sitzmann2020siren} (which we call vanilla encoding), where coordinates are normalized to the range $[-1, 1]$.
We now describe our temporal encoding, a simple yet effective method which satisfies the conditions mentioned above. The key idea is to directly utilize the timestamp data present in the time-series data (we can assign arbitrary timestamps if none is given). We first represent $t_i$ as a 6-dimensional vector ${\bf k} = [k_{yr}, k_{mon}, k_{day}, k_{hr}, k_{min}, k_{sec}] \in \mathbb{R}^{6}$, whose dimensions represent the year, month, day, hour, minute, and second, respectively. Here, $k_{yr},k_{mon},k_{day},k_{hr},k_{min}$ are all positive integers, while $k_{sec} \in [0,60)$. Note that this can flexibly change depending on the dataset. For instance, if the timestamp does not include minute and second information, we use a 4-dimensional vector (i.e., $[k_{yr},k_{mon},k_{day},k_{hr}]$ and $k_{hr} \in [0, 24)$).
Next, we normalize the vectorized time information. With a pre-defined year $k'_{yr}$, we first set $[k'_{yr},1,1,0,0,0]$ (January 1st 00:00:00 at year $k'_{yr}$) as [-1,-1,-1,-1,-1,-1]. Now, let us represent the current timestamp of interest as $\mathbf{k}^{curr}$. We normalize the $j$-th dimension of the current timestamp $\mathbf{k}^{curr}$ by the following linear equation:
\begin{equation}\label{temporalencoding}
n_j^{curr} = -1 + \dfrac{1 - (-1)}{N_{j}-1} \times (k_j^{curr}-\mathbb{I}(j=1)k'_{yr})
\end{equation}
where $n_j^{curr}$ is the $j$th dimension of the normalized vector $\mathbf{n}^{curr} \in \mathbb{R}^{6}$ and $\mathbb{I}$ is an indicator function. For the values of $N_j$, we set $N_2 = 12, N_3 = 31, N_4 = 24, N_5 = 60, N_6 = 60$ to match the standard clock system. We assume that $N_1$ is pre-defined by the user. In short, we define a temporal encoding function $\phi$ that transforms a scalar $t_i$ into $\mathbf{n}_i$ ($\phi: t_i \mapsto \mathbf{n}_i$). In our method, we will by default use this temporal encoding unless otherwise stated.
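A minimal sketch of this encoding (hypothetical function name; the dataset-dependent choices $k'_{yr}$ and $N_1$ are passed as arguments, with $N_1 \geq 2$ assumed) is:
\begin{verbatim}
import datetime

N_CLOCK = [12, 31, 24, 60, 60]   # N_2..N_6, standard clock system

def temporal_encoding(ts: datetime.datetime,
                      base_year: int, n_years: int):
    # base_year = k'_yr, n_years = N_1 (assumed >= 2)
    k = [ts.year, ts.month, ts.day, ts.hour, ts.minute, ts.second]
    N_all = [n_years] + N_CLOCK
    enc = []
    for j, (kj, Nj) in enumerate(zip(k, N_all), start=1):
        offset = base_year if j == 1 else 0
        enc.append(-1.0 + 2.0 / (Nj - 1) * (kj - offset))
    return enc                    # the 6-dimensional encoded time
\end{verbatim}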
\subsection{Loss Function}
As we aim for the model to represent the input time-series data, we compare the predicted value at each timestamp $t_i$ to its ground-truth value $\mathbf{x}_{t_i}$. Therefore, we minimize the following loss function:
\begin{equation}
\mathcal{L} = \dfrac{1}{N} \sum_{i=1}^{N} ||\mathbf{x}_{t_{i}} - f({\phi}(t_{i}))||^{2}
\end{equation}
where $||\cdot||$ indicates the $\ell_2$ norm of a vector.
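A minimal PyTorch training sketch of this objective (assuming the \texttt{INR} model and encoded timestamps from the previous subsections; early stopping is omitted for brevity) is:
\begin{verbatim}
import torch

def fit(model, t_enc, x, lr=1e-4, max_iter=5000):
    # t_enc: (N, 6) encoded timestamps, x: (N, d) observed values
    opt = torch.optim.Adam(model.parameters(), lr=lr,
                           betas=(0.9, 0.99))
    for _ in range(max_iter):
        opt.zero_grad()
        # mean over timestamps of the squared l2 error
        loss = ((x - model(t_enc)) ** 2).sum(dim=1).mean()
        loss.backward()
        opt.step()
    return model
\end{verbatim}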
\subsection{Anomaly Score and Detection Procedure}
Our proposed representation error-based anomaly detection strategy is built on the observation that values at an anomalous time are difficult to represent, resulting in relatively high representation error. By our approach described above, the given data sequence $X$ is represented by an MLP function $f$. We now perform anomaly detection with this functional representation by defining the representation error as the anomaly score. Formally, the anomaly score $a_{t_i}$ at a specific timestamp $t_i$ is defined as $a_{t_i} = |\mathbf{x}_{t_i} - f({\phi}(t_i))|$, where $|\cdot|$ indicates the $\ell_1$ norm of a vector. Anomalies can be detected by comparing the anomaly score $a_{t_i}$ with the pre-defined threshold $\tau$.
In our approach, we first use the training data to pre-train our model $f$ and then re-train the model to represent the given test data to obtain the representation error as an anomaly score for the detection.
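A corresponding detection sketch (again with assumed variable names, complementing the training sketch above) is:
\begin{verbatim}
import torch

def anomaly_scores(model, t_enc, x):
    # a_{t_i} = l1 norm of the representation error
    with torch.no_grad():
        return (x - model(t_enc)).abs().sum(dim=1)

def detect(scores, tau):
    return (scores > tau).long()   # 1: abnormal, 0: normal
\end{verbatim}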
\section{Experiments}
In this section, we perform various experiments to answer the following research questions:
\begin{itemize}
\item {\bf RQ1:} Does our method outperform various state-of-the-art methods, even with a fixed hyperparameter setting?
\item {\bf RQ2:} How does our proposed temporal encoding affect the performance and convergence time?
\item {\bf RQ3:} Does our method outperform various state-of-the-art methods in terms of training speed?
\item {\bf RQ4:} How does our method behave in different hyperparameter settings?
\end{itemize}
\begin{table}[t]
\centering
\begin{tabular}{r|c|c|c|c} \toprule
Datasets & Train & Test & Features & Anomalies \\\midrule
SMD & 708405 & 708420 & 28$\times$38 & 4.16 (\%)\\
SMAP & 135183 & 427617 & 55$\times$25 & 13.13 (\%)\\
MSL & 58317 & 73729 & 27$\times$55 & 10.72 (\%)\\
SWaT & 496800 & 449919 & 51 & 11.98 (\%)\\
WADI & 1048571 & 172801 & 123 & 5.99 (\%)\\
\bottomrule
\end{tabular}
\caption{Statistics of the datasets used in our experiments.}
\label{TableDatasetStatistics}
\end{table}
\subsection{Dataset}
We use five real-world benchmark datasets for multivariate time-series anomaly detection, SMD~\cite{su2019smdomnianomaly}, SMAP \& MSL~\cite{hundman2018smapmsl}, and SWaT \& WADI~\cite{mathur2016swatwadi}, which contain ground-truth anomalies as labels. Table \ref{TableDatasetStatistics} summarizes the statistics of each dataset, whose details we further describe in the supplementary material.
In our experiments, we directly use the timestamps included in the dataset for SWaT and WADI. We arbitrarily assign timestamps for the other three datasets since no timestamps representing actual-time information are given.
\begin{table*}[t]
\centering
\fontsize{9}{10}\selectfont
\begin{tabular}{r|ccc|ccc|ccc|ccc|ccc} \toprule
& \multicolumn{3}{c}{SMD} & \multicolumn{3}{c}{SMAP} & \multicolumn{3}{c}{MSL} & \multicolumn{3}{c}{SWaT} & \multicolumn{3}{c}{WADI}\\\cmidrule{2-16}
Method & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1\\ \midrule
IF & 59.4 & 85.3 & 0.70 & 44.2 & 51.1 & 0.47 & 56.8 & 67.4 & 0.62 & 96.2 & 73.1 & 0.83 & 62.4 & 61.5 & 0.62\\
LSTM-VAE & 87.0 & 78.8 & 0.83 & 71.6 & 98.8 & 0.83 & 86.0 & 97.6 & 0.91 & 71.2 & 92.6 & 0.80 & 46.3 & 32.2 & 0.38 \\
DAGMM & 67.3 & 84.5 & 0.75 & 63.3 & 99.8 & 0.78 & 75.6 & 98.0 & 0.85 & 82.9 & 76.7 & 0.80 & 22.3 & 19.8 & 0.21\\
OmniAnomaly & 98.1 & 94.4 & 0.96 & 75.9 & 97.6 & 0.85 & 91.4 & 88.9 & 0.90 & 72.2 & 98.3 & 0.83 & 26.5 & 98.0 & 0.41\\
USAD & 93.1 & 94.4 & 0.96 & 77.0 & 98.3 & 0.86 & 88.1 & 97.9 & 0.93 & 98.7 & 74.0 & 0.85 & 64.5 & 32.2 & 0.43\\
THOC & 73.2 & 78.8 & 0.76 & 79.2 & 99.0 & 0.88 & 78.9 & 97.4 & 0.87 & 98.0 & 70.6 & 0.82 & - & - & - \\ \midrule
$\text{INRAD}^{\text{c}}_{\text{van}}$ & 94.7 & 97.8 & {0.96} & 80.0 & 99.3 & 0.89 & 93.6 & 98.1 & {\bf 0.96} & 96.9 & 88.7 & 0.93 & 60.2 & 67.0 & 0.63 \\
$\text{INRAD}^{\text{c}}_{\text{temp}}$ & 98.0 & 98.3 & {\bf 0.98} & 83.2 & 99.1 & 0.90 & 92.1 & 99.0 & 0.95 & 93.0 & 96.3 & 0.95 & 78.4 & 99.9 & 0.88 \\ \midrule
$\text{INRAD}_{\text{van}}$ & 98.0 & 98.6 & {\bf 0.98} & 84.0 & 99.4 & {0.91} & 90.4 & 99.0 & {0.95} & 96.4 & 91.7 & {0.94} & 77.1 & 66.5 & {0.71} \\
$\text{INRAD}_{\text{van}^{*}}$ & 95.0 & 96.4 & {0.95} & 82.6 & 99.3 & {0.90} & 91.7 & 98.7 & {0.95} & 84.2 & 84.7 & {0.84} & 72.4 & 72.8 & {0.73} \\
$\text{INRAD}_{\text{temp}}$ & 98.2 & 97.5 & {\bf 0.98} & 85.8 & 99.5 & {\bf 0.92} & 93.3 & 99.0 & {\bf 0.96} & 95.6 & 98.8 & {\bf 0.97} & 88.9 & 99.1 & {\bf 0.94} \\ \bottomrule
\end{tabular}
\caption{Anomaly detection accuracy results in terms of precision(\%), recall(\%), and F1-score, on five real-world benchmark datasets. $\text{INRAD}_{\text{van}}$, $\text{INRAD}_{\text{van}^{*}}$, and $\text{INRAD}_{\text{temp}}$ adopt the vanilla, vanilla$^{*}$, and temporal encoding, respectively. Also, $\text{INRAD}^{\text{c}}_{\text{van}}$ and $\text{INRAD}^{\text{c}}_{\text{temp}}$ indicate that the experiment was run in the cold-start setting with each encoding.}
\label{TablePerformanceComparison}
\end{table*}
\subsection{Baseline methods}
We demonstrate the performance of our proposed method, INRAD, by comparing with the following six anomaly detection methods:
\begin{itemize}
\item {\bf IF}~\cite{liu2008isolationforest}: Isolation forests (IF) is the most well-known isolation-based anomaly detection method, which focuses on isolating abnormal instances rather than profiling normal instances.
\item {\bf LSTM-VAE}~\cite{park2018lstmvae}: LSTM-VAE uses a series of connected variational autoencoders and long-short-term-memory layers for anomaly detection.
\item {\bf DAGMM}~\cite{zong2018dagmm}: DAGMM is an unsupervised anomaly detection model which utilizes an autoencoder and the Gaussian mixture model in an end-to-end training manner.
\item {\bf OmniAnomaly}~\cite{su2019smdomnianomaly}: OmniAnomaly employs a stochastic recurrent neural network for multivariate time-series anomaly detection to learn robust representations with a stochastic variable connection and planar normalizing flow.
\item {\bf USAD}~\cite{audibert2020usad}: USAD utilizes an encoder-decoder architecture with an adversarial training framework inspired by generative adversarial networks.
\item {\bf THOC}~\cite{shen2020thoc}: THOC combines a dilated recurrent neural network~\cite{drnn} for extracting multi-scale temporal features with the deep support vector data description~\cite{deepsvdd}.
\end{itemize}
\subsection{Evaluation Metrics}
We use precision (P), recall (R), and F1-score (F1) for evaluating time-series anomaly detection methods. Since these performance measures depend on the way the threshold is set on the anomaly scores, previous works proposed strategies such as applying extreme value theory~\cite{siffer2017anomaly} or using a dynamic error over a time window~\cite{hundman2018smapmsl}. However, not all methodologies develop a mechanism to select a threshold in different settings, and many previous works~\cite{audibert2020usad,su2019smdomnianomaly,xu2018unsupervised} adopt the best F1 score for performance comparison, where the optimal global threshold is chosen by trying out all possible thresholds on the detection results. We also use the point-adjust approach~\cite{xu2018unsupervised}, widely used in evaluation~\cite{audibert2020usad,su2019smdomnianomaly,shen2020thoc}. Specifically, if any point in an anomalous segment is correctly detected, other observations in the segment inside the ground truth are also regarded as correctly detected.
Therefore, we adopt the best F1-score (F1 score for short hereafter) and the point-adjust approach for evaluating the anomaly detection performance, so as to directly compare with the aforementioned state-of-the-art methods.
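As an illustration of this protocol, the following sketch (a standard formulation, not the evaluation script used for the reported numbers) implements the point-adjust rule and the best-F1 search over candidate thresholds:
\begin{verbatim}
import numpy as np

def point_adjust(pred, label):
    pred, label = pred.copy(), label.astype(bool)
    i = 0
    while i < len(label):
        if label[i]:
            j = i
            while j < len(label) and label[j]:
                j += 1
            if pred[i:j].any():      # any hit marks the whole segment
                pred[i:j] = 1
            i = j
        else:
            i += 1
    return pred

def best_f1(scores, label):
    best = 0.0
    for tau in np.unique(scores):    # a coarse grid in practice
        pred = point_adjust((scores >= tau).astype(int), label)
        tp = int(((pred == 1) & (label == 1)).sum())
        fp = int(((pred == 1) & (label == 0)).sum())
        fn = int(((pred == 0) & (label == 1)).sum())
        p = tp / (tp + fp + 1e-12)
        r = tp / (tp + fn + 1e-12)
        best = max(best, 2 * p * r / (p + r + 1e-12))
    return best
\end{verbatim}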
\subsection{Hyperparameters and Experimental Setup}
To show the robustness of our proposed method, we conduct experiments using the same hyperparameter setting for all benchmark datasets.
The details of the experimental setting are as follows. For the model architecture, we use a 3-layer MLP with sinusoidal activations and 256 hidden dimensions each (refer to Figure~\ref{Figure2Overview}(b)). Following~\cite{sitzmann2020siren}, we set $\omega_0 = 30$ except for the first layer, where it is set to $3000$. During training, we use the Adam optimizer~\cite{kingma2014adam} with a learning rate of 0.0001 and $(\beta_1, \beta_2) = (0.9, 0.99)$. Additionally, we use early stopping with patience 30. Our code and data are released at https://github.com/KyeongJoong/INRAD.
\subsection{RQ 1. Performance Comparison}
Table~\ref{TablePerformanceComparison} shows the performance comparison results of our proposed method $\text{INRAD}_{\text{temp}}$ and its variants, along with other baseline approaches on five benchmark datasets. We use the reported accuracy values of the baselines (except THOC~\cite{shen2020thoc}) from the previous work \cite{audibert2020usad}, which achieves state-of-the-art performance in an experimental setting identical to ours in terms of datasets, train/test split, and evaluation metrics. Note that results of THOC on the WADI dataset are omitted due to an out-of-memory issue.
Overall, our proposed $\text{INRAD}_{\text{temp}}$ consistently achieves the highest F1 scores over all datasets. In particular, the improvement over the next best method reaches 0.32 in terms of F1 score on the WADI dataset, where most other approaches show relatively low performance. On the other datasets, we still outperform the second-best baseline by 0.02 to 0.12. Considering that the single-hyperparameter-setting restriction is only applied to our method, this shows that our approach can provide superior performance in a highly robust manner on various datasets.
As we adopt the representation error-based detection strategy, our method can also detect anomalies without training data by directly representing the test set. We hypothesize that the test set already contains an overwhelming portion of normal samples, from which the model can still learn the temporal dynamics of normal patterns in the given data without any complex model architectures (e.g., RNN and its variants). To distinguish it from the original method, we denote this variant as $\text{INRAD}^{\text{c}}_{\phi}$ ($\phi=$ 'van' or 'temp' for vanilla and temporal encoding, respectively), whose performance we also investigate. We observe that both $\text{INRAD}^{\text{c}}_{\text{van}}$ and $\text{INRAD}^{\text{c}}_{\text{temp}}$ generally achieve slightly inferior performance to $\text{INRAD}_{\text{temp}}$, with the largest gap on WADI. This result shows that utilizing the training dataset has performance benefits, especially when the training data is much longer than the test data.
\begin{table}[t]
\centering
\fontsize{9}{10}\selectfont
\begin{tabular}{r|c|c|c} \toprule
Method & SMD & SMAP & MSL \\ \midrule
LSTM-VAE & 3.807 & 0.987 & 0.674 \\
OmniAnomaly & 77.32 & 16.66 & 15.55 \\
USAD & 0.278 & 0.034 & 0.029 \\
THOC & 0.299 & 0.07 & 0.066 \\\midrule
INRAD & 0.243 & 0.024 & 0.020 \\ \bottomrule
\end{tabular}
\caption{Comparison of training time (sec) per epoch.}
\label{TableTrainingTimePerEpoch}
\end{table}
\subsection{RQ 2. Effect of temporal encoding}
We study the effect of our temporal encoding method by comparing it with two encoding methods: vanilla and its variant, vanilla$^{*}$. Vanilla encoding normalizes the indices $[1,2, \cdots, M]$ of the training data to $[-1, 1]$, and the indices of the test data are also mapped to $[-1, 1]$. On the other hand, vanilla$^{*}$ encoding is derived from vanilla encoding to preserve the chronological order and the unit interval between training and test data after encoding, by mapping the indices of the test data to the range $[1,\infty)$ while keeping the difference between neighboring encodings consistent with the training data. We denote the variants using vanilla and vanilla$^{*}$ as $\text{INRAD}_{\text{van}}$ and $\text{INRAD}_{\text{van}^{*}}$, respectively.
Table~\ref{TablePerformanceComparison} shows that $\text{INRAD}_{\text{temp}}$ achieves slightly superior performance in general compared to $\text{INRAD}_{\text{van}}$ and $\text{INRAD}_{\text{van}^{*}}$. However, when the dataset becomes long, the performance of vanilla and vanilla$^{*}$ degrades significantly while temporal encoding remains at 0.94, as shown in the case of WADI. The performance gap on WADI becomes even more significant in the cold-start setting, reaching 0.25.
Also, Figure~\ref{Figure3Encoding} compares the convergence time for the representation of the test data between $\text{INRAD}_{\text{van}}$, $\text{INRAD}_{\text{van}^{*}}$, and $\text{INRAD}_{\text{temp}}$ on the MSL and SMAP datasets. The vanilla encoding shows the slowest convergence time, while vanilla$^{*}$ and our temporal encoding show competitive results. This result shows that the representation of test data is learned faster when the time coordinates of training and test data are encoded while preserving their chronological order. Overall, our temporal encoding strategy achieves superior performance and fast convergence compared to the vanilla encoding strategy.
\subsection{RQ 3. Training speed comparison}
Here, we study the training speed of $\text{INRAD}_{\text{temp}}$ and compare it to the other four baselines that show good performance. Table~\ref{TableTrainingTimePerEpoch} summarizes the training time per epoch for $\text{INRAD}_{\text{temp}}$ along with other baseline methods on three benchmark datasets. Specifically, the reported time is the average time across all entities within each dataset (i.e., 28 entities for SMD, 55 for SMAP, and 27 for MSL). The results show that our method achieves the fastest training time, mainly because our method only uses a simple MLP for training without any additional complex modules (e.g., RNNs).
\begin{figure}
\caption{Convergence time (sec) comparison for different encoding techniques.}
\label{Figure3Encoding}
\end{figure}
\subsection{RQ 4. Hyperparameter sensitivity}
In Figure~\ref{Hyperparameter_experiment}, we test the robustness of $\text{INRAD}_{\text{temp}}$ by varying different hyperparameter settings on the MSL dataset. In our experiment, we change the patience of early stopping in the range $\{30, 60, 90, 120, 150\}$, the size of the hidden dimension in the range $\{32, 64, 128, 256, 512\}$, $\omega_0$ of the first layer in the range $\{30, 300, 3000, 30000, 300000\}$, and the number of layers in the range $\{1, 2, 3, 4, 5\}$. Figures~\ref{Hyperparameter_patience},~\ref{Hyperparameter_hidden_dim}, and~\ref{Hyperparameter_num_layers} show that $\text{INRAD}_{\text{temp}}$ achieves high robustness under varying hyperparameter settings. We see that the choice of $\omega_0$ also has a minimal impact on $\text{INRAD}_{\text{temp}}$, as shown in Figure~\ref{Hyperparameter_omega}. This result suggests that the MLP struggles to differentiate neighboring inputs only in the case where $\omega_0$ is extremely low.
\begin{figure}
\caption{Hyperparameter sensitivity of $\text{INRAD}_{\text{temp}}$ on the MSL dataset (early-stopping patience, hidden dimension, $\omega_0$ of the first layer, and number of layers).}
\label{Hyperparameter_patience}
\label{Hyperparameter_hidden_dim}
\label{Hyperparameter_omega}
\label{Hyperparameter_num_layers}
\label{Hyperparameter_experiment}
\end{figure}
\section{Conclusion}
In this paper, we proposed INRAD, a novel implicit neural representation-based method for multivariate time-series anomaly detection, along with a temporal encoding technique. Adopting a simple MLP, which takes time as input and outputs corresponding values to represent a given time-series data, our method detects anomalies by using the representation error as anomaly score. Various experiments on five real-world datasets show that our proposed method achieves state-of-the-art performance in terms of accuracy and training speed while using the same set of hyperparameters. For future work, we can consider additional strategies for online training in order to improve applicability in an environment where prompt anomaly detection is needed.
\appendix
\onecolumn
\section{Baseline Implementation}
We describe the implementation of the baseline methods used in our paper. Isolation forest (IF)~\cite{liu2008isolationforest} is implemented using the scikit-learn library, and we use the source code of THOC~\cite{shen2020thoc} provided by the authors. The other baselines are downloaded from the following links:
\begin{itemize}
\item LSTM-VAE~\cite{park2018lstmvae}:\\ https://github.com/Danyleb/Variational-Lstm-Autoencoder
\item DAGMM~\cite{zong2018dagmm}: \\https://github.com/tnakae/DAGMM
\item OmniAnomaly~\cite{su2019smdomnianomaly}:\\ https://github.com/NetManAIOps/OmniAnomaly
\item USAD~\cite{audibert2020usad}: \\https://github.com/robustml-eurecom/usad
\end{itemize}
\section{Vanilla Encoding}
In this section, we describe the encoding strategy of~\cite{sitzmann2020siren}, which we call vanilla encoding in our main paper. Let us assume that the given dataset has $N$ timestamps with index $i$ (i.e., $X = \{(t_i, \mathbf{x}_{t_i})\}_{i=1}^{N}$). Also, denote the vector of indices as $\mathbf{i} = [1, 2, \cdots, N]$, and let $\mathbf{1} \in \mathbb{R}^{1 \times N}$ be the all-ones vector. Vanilla encoding plainly normalizes each index $i$ to the range $[-1, 1]$ by $\mathbf{i}_{naive} = (2/N) \times \mathbf{i} - \mathbf{1}$.
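For concreteness, a minimal sketch of this mapping (hypothetical function name) is:
\begin{verbatim}
# Vanilla encoding: indices 1..N mapped linearly into [-1, 1].
import numpy as np

def vanilla_encoding(N):
    i = np.arange(1, N + 1, dtype=float)
    return (2.0 / N) * i - 1.0
\end{verbatim}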
\section{Detailed Setting of Temporal Encoding}
In the case of the SWaT and WADI datasets, we use the actual timestamps given in each dataset. On the other hand, in the case of the SMD, MSL, and SMAP datasets, which do not contain such information, we arbitrarily set the timestamps for each sample starting from ``2021-01-01 00:00:00'' with one-minute intervals. We assume that the test data starts directly after the last timestamp of the training set.
Now we describe the details of the pre-defined $k'_{yr}$ and $N_1$. We set $k'_{yr}$ as the year of the first timestamp in the training set, as we assume that our model will not encounter information from before the first sample in the training set. In the case of SWaT and WADI, we set $k'_{yr}$ as 2015 and 2017, respectively, following the given timestamp information as stated earlier. For the other datasets, we set $k'_{yr}$ as 2021, since we set the first timestamp in the training set to ``2021-01-01 00:00:00''. Next, we set $N_1$ to $\gamma + 1$, where $\gamma$ denotes the difference between the years of the earliest and latest observed data. We note, however, that these settings can be chosen flexibly for other settings.
\section{Detailed Description of the Datasets}
SMD~\cite{su2019smdomnianomaly} is a 5-week-long public dataset collected from a large Internet company, containing data from 28 server machines, each monitored by 38 metrics. It is divided into two subsets of equal size, where the first half is the training set and the second half is the testing set. SMAP and MSL~\cite{hundman2018smapmsl} are two real-world, expert-labeled public datasets from NASA. SMAP contains the data from 55 entities monitored by 25 metrics, and MSL contains the data from 27 entities monitored by 55 metrics. SWaT~\cite{mathur2016swatwadi} is collected from a scaled-down real-world industrial water treatment plant that produces filtered water. In SWaT, operational data under normal circumstances are collected for 7 days, and operational data with attack scenarios are collected over 4 days. In WADI~\cite{mathur2016swatwadi}, operational data under normal circumstances are collected for 14 days, and operational data with attack scenarios are collected over 2 days. This dataset is collected from the WADI testbed, an extension of the SWaT testbed.
\end{document}
|
\begin{document}
\title[Remarks on the blow-up of a toy model for the Navier-Stokes equations]{Remarks on the blow-up of solutions to a toy model for the Navier-Stokes equations}
\author[I. Gallagher]{Isabelle Gallagher}
\address[I. Gallagher]
{ Institut de Math{\'e}matiques de Jussieu UMR 7586\\ Universit{\'e} Paris 7\\
175, rue du Chevaleret\\ 75013 Paris\\FRANCE }
\email{[email protected]}
\author[M. Paicu]{Marius Paicu}
\address[M. Paicu]
{ D\'epartement de Math\'ematiques\\ Universit{\'e} Paris 11\\
B\^{a}timent 425\\ 91405 Orsay Cedex\\FRANCE }
\email{[email protected] }
\begin{abstract}
In~\cite{ms}, S. Montgomery-Smith provides a one dimensional model for the three dimensional, incompressible Navier-Stokes
equations, for which he proves the
blow up of solutions associated to a class of large initial data, while the same global existence results as for the Navier-Stokes equations hold for small data. In this note the model is adapted to the case of two and three
space dimensions, with the additional feature that the divergence free condition is preserved. It is checked that the family of initial data constructed in~\cite{cg}, which is arbitrarily large and yet generates a global solution to the Navier-Stokes equations in three space dimensions, actually causes blow up for the toy model --- meaning that the precise structure of the nonlinear term is crucial to
understand the dynamics of large solutions to the Navier-Stokes equations.
\end{abstract}
\keywords {Navier-Stokes equations, blow up}
\maketitle
\section{Introduction}
Consider the Navier-Stokes equations in~$\mathop{\mathbb R\kern 0pt}\nolimits^{d}$, for~$d = 2$ or~3,
\[
{\rm (NS)}\ \left\{
\begin{array}{c}
\partial_{t} u -\Delta u+u\cdot\nabla u =-\nabla p\\
\mbox{div}\: u =0 \\
u_{|t = 0} = u_{0},
\end{array}
\right.
\]
where~$u = (u^1,\dots,u^d)$ is the velocity of an incompressible, viscous, homogeneous fluid evolving in~$\mathop{\mathbb R\kern 0pt}\nolimits^d$, and~$p$ is its pressure. Note that the divergence free condition allows one to recover~$p$ from~$u$ through the formula
$$
-\Delta p = \mbox{div} \: (u\cdot\nabla u).
$$
A formally equivalent formulation for~(NS) can be obtained
by applying the projector onto divergence free vector fields~$\displaystyle
\mathop{\mathbb P\kern 0pt}\nolimits \buildrel\hbox{\footnotesize d{\'e}f}\over = \mbox{Id} - \nabla
\Delta^{-1} \mbox{div}
$ to~(NS):
\[
\left\{
\begin{array}{c}
\partial_{t} u -\Delta u+\mathop{\mathbb P\kern 0pt}\nolimits (u\cdot\nabla u) =0\\
u_{|t = 0} = u_{0} = \mathop{\mathbb P\kern 0pt}\noindentlimits u_0.
\end{array}
\right.
\]
This system has three important features:
(E) (the {\it energy} inequality): the~$L^{2} (\mathop{\mathbb R\kern 0pt}\nolimits^d)$ norm of~$u$ is formally bounded for all times by that of the initial data;
(I) (the {\it incompressibility} condition): the solution satisfies for all times the constraint~$\mbox{div}\: u =0 $;
(S) (the {\it scaling} conservation): if~$u$ is a solution associated with the data~$u_0$, then for any positive~$\lambda$, the rescaled~$u_{\lambda}(t,x) \buildrel\hbox{\footnotesize d{\'e}f}\over = \lambda u(\lambda^{2}t, \lambda x)$
is a solution associated with~$u_{0,\lambda}(x) \buildrel\hbox{\footnotesize d{\'e}f}\over = \lambda u_{0}( \lambda x)$.
Of course the first two properties are related, as (I) is the ingredient enabling one to obtain~(E), due to the special structure of the nonlinear term.
Taking~(E) into account, one can prove the existence of global, possibly non unique, finite energy solutions (see the fundamental work
of J. Leray~\cite{leray}). On the other hand the use of~(S) and a fixed point argument enables one to prove the existence of a unique,
global solution if the initial data is small
in scale-invariant spaces (we will call ``scale-invariant space'' any Banach space~$X$ satisfying~$\|\lambda f(\lambda \cdot)
\|_{X} = \|f\|_{X}$ for all~$\lambda>0$): for instance the homogeneous Sobolev space~$\dot H^{\frac d2 - 1}$,
Besov spaces~$\dot B^{-1 + \frac dp}_{p,\infty}$ for~$p<\infty$ or the space~$BMO^{-1}$. We recall that
$$
\|f\|_{\dot B^{s}_{p,q}} \buildrel\hbox{\footnotesize d{\'e}f}\over = \left\| t^{-\frac s2} \|e^{t\Delta}f\|_{L^p(\mathop{\mathbb R\kern 0pt}\nolimits^d)} \right\|_{L^q(\mathop{\mathbb R\kern 0pt}\nolimits^+;\frac{dt}t)},
$$
and
$$
\|f\|_{BMO^{-1}} \buildrel\hbox{\footnotesize d{\'e}f}\over = \sup_{t > 0}
\biggl( t^{\frac 1 2}\|e^{t\Delta} f\|_{L^\infty}
+\supetage{x\in \mathop{\mathbb R\kern 0pt}\nolimits^d}{R>0} R^{-\frac d 2}
\Bigl(\int_{P(x,R)}| (e^{t\Delta} f) (y)|^2 \,dy\,dt\Bigr)^{\frac 1 2}\biggr) ,
$$
where~$P(x,R)=[0,R^2]\times B(x,R)$
and~$B(x,R)$ denotes the ball of~$\mathop{\mathbb R\kern 0pt}\nolimits^d$ of center~$x$ and radius~$R$.
We refer
respectively to~\cite{fk},\cite{cmp} and~\cite{kt} for proofs of the wellposedness of~(NS) for small data in those spaces. When~$d=2$, the smallness condition may be removed: that has been known since the work of J. Leray (\cite{Leray2D}) in the energy space~$L^2$ (which is scale invariant in two space dimensions), and was proved in~\cite{gp},\cite{g} for larger spaces, provided they are completions of the Schwartz class for the corresponding norm (Besov or~$BMO^{-1}$ norms).
It is well known and rather easy to see that the largest scale invariant Banach space embedded in the space of tempered distributions is~$\dot B^{-1 }_{\infty,\infty}$. In three or more space dimensions, it is not known that global solutions exist for smooth data, arbitrarily large in~$\dot B^{-1 }_{\infty,\infty}$. We will not review here all the progress made in that direction in the past years, but merely recall a few of the main recent achievements concerning the possibility of blow up of large solutions. Recently, D. Li and Ya. Sinai were able in~\cite{lisinai} to prove the blow up in finite time of solutions to the Navier-Stokes equations for complex initial data. We note that, as for the system that we construct in the present paper, the complex Navier-Stokes system does not satisfy any energy inequality. Before that, some numerical evidence was suggested to support the idea of finite time blow up of (NS) (see for instance~\cite{no} or~\cite{grundymclaughlin}).
On the other hand in~\cite{cg} a class of
large initial data was constructed, giving rise to a global, unique solution; this family will be presented below. Another type of example was provided in~\cite{cg2}. It should be noted that in both those examples, the special structure of the equation is crucial to obtain the global wellposedness. In~\cite{ms}, S. Montgomery-Smith suggested a model for~(NS), with the same scale invariance and for which
the same global wellposedness results hold for small data. The interesting feature of the model is that it is possible (see~\cite{ms}) to prove the blow up in finite
time of
some solutions. The model is the following:
$$
{\rm (TNS}_{1}{\rm )} \quad \quad \partial_{t} u -\Delta u = \sqrt{-\Delta} \: (u^{2}) \quad \mbox{in} \: \mathop{\mathbb R\kern 0pt}\nolimits^{+} \times \mathop{\mathbb R\kern 0pt}\nolimits.
$$
The main ingredient of the proof of the existence of blowing-up solutions consists in noticing that if the initial data has a positive Fourier transform, then that positivity is preserved for the solution at all further times. One can then use the Duhamel formulation of the solution and deduce a lower bound for the Fourier transform that blows up in finite time. We will not write more details here as we will be reproducing that computation in Section~\ref{2D}.
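Schematically, the argument rests on the Duhamel formulation of~(TNS$_{1}$) written in Fourier variables (a standard computation, recalled here only for the reader's convenience):
$$
\widehat u(t,\xi) = e^{-t|\xi|^{2}}\, \widehat {u_{0}}(\xi) + \int_{0}^{t} e^{-(t-s)|\xi|^{2}}\, |\xi| \,\widehat{u^{2}}(s,\xi)\, ds,
$$
and since~$\widehat{u^{2}}$ is, up to a positive constant, the convolution~$\widehat u \ast \widehat u$, every term on the right-hand side is nonnegative as soon as~$\widehat {u_{0}}\geq 0$, so the positivity of the Fourier transform is indeed propagated.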
In this paper we adapt the construction of~\cite{ms} to higher space dimensions. In order to have a proper model
in higher dimensions it is important to preserve as many features of the Navier-Stokes equations as possible. Here we will seek to preserve scaling~(S) as well as the divergence free condition~(I) (as we will see, condition~(E) cannot be preserved in our model). This amounts to transforming the nonlinear
term proposed in~\cite{ms} (see Equation~(TNS$_{1}$) above) in such a way as to preserve both the positivity conservation property in Fourier space {\it and} the incompressibility condition. This
is in fact a technicality which may be handled by explicit computations in Fourier space; actually the more interesting aspect of the result we obtain is that the initial data constructed in~\cite{cg} to show the possibility of global solutions associated with arbitrarily large initial data actually generates a blow-up solution for~(TNS$_{3}$). This, combined with the fact that we are also able to
obtain blowing-up solutions in the two dimensional case, indicates that proving a global existence result for arbitrarily large data for~(NS) requires using the energy estimate, or the specific structure of the nonlinear term -- two properties which are discarded in our model.
Let
us state the result proved in this paper.
\begin{theo}\label{ms2D3D}
Let the dimension~$d $ be equal to 2 or 3. There is a bilinear operator~$Q$, which is a~$d$-dimensional matrix of Fourier multipliers of order one, such
that the equation
$$
{\rm (TNS}_{d}{\rm )} \quad \left\{
\begin{array}{c}
\partial_{t} u -\Delta u = Q(u,u) \quad \mbox{in} \: \mathbb{R}^{+} \times \mathbb{R}^{d}\\
\mbox{div}\: u =0 \\
u_{|t = 0} = u_{0}
\end{array}
\right.
$$
satisfies properties (I) and~(S), and such that there is a global, unique solution if the data
is small enough in~$BMO^{-1}$. Moreover there is a family of smooth initial data~$u_{0}$, which may be chosen arbitrarily large
in~$ \dot B^{-1 }_{\infty,\infty}$, such that the associated solution of~(TNS$_{d}$) blows up in all Besov norms, whereas the
associated solution of~(NS) exists globally in time.
\end{theo}
The proof of the theorem is given in the sections below. In Section~\ref{2D}
we deal with the two dimensional case, while the three dimensional case is treated in Section~\ref{3D}: in both cases
we present an alternative to the bilinear term of (NS), which preserves scaling and the divergence free property, while giving rise to solutions blowing up in finite time, for some classes of initial data. The fact that some of those initial data in fact generate a global solution for the three dimensional Navier-Stokes equations is addressed in Section~\ref{exampleC-1}.
\begin{rmk}
We note that the method of proof allows one to construct blowing-up solutions in the hyper-viscous case, that is, for equations of the form
$$
\quad \left\{
\begin{array}{c}
\partial_{t} u -\Delta^{\alpha} u = Q(u,u) \quad \mbox{in} \: \mathbb{R}^{+} \times \mathbb{R}^{d}\\
\mbox{div}\: u =0 \\
u_{|t = 0} = u_{0}
\end{array}
\right.
$$
where $\alpha\geq 1$ and $\Delta^{\alpha}f={\mathcal F}^{-1}(|\xi|^{2\alpha}\hat f(\xi))$. Indeed, the only important feature needed to construct blowing-up solutions by the method of \cite{ms} is that the system, written in the Fourier variable, has a nonnegative symbol and hence preserves the positivity of $\hat u^j(t,\xi)$ whenever $\hat u_0^j(\xi)>0$, for any~$j \in \{1, \dots, d\}$.
\end{rmk}
\begin{rmk} In the two dimensional case, it might seem more natural to work on the vorticity formulation of the equation: in 2D it is well known that the vorticity satisfies a transport-diffusion equation, which provides easily the existence of global solutions for any sufficiently smooth initial data. An example where the vorticity equation is modified (rather than~(NS)) is provided at the end of Section~\ref{2D} below.
\end{rmk}
\section{Proof of the theorem in the two-dimensional case}\label{2D}
In this section we shall construct the quadratic form~$Q$ appearing in the statement of Theorem~\ref{ms2D3D}, which gives rise to blowing-up solutions for the (TNS$_2$) system.
Let us consider a system of the following form:
$$
\quad \left\{
\begin{array}{c}
\partial_t u - \Delta u= {\mathcal Q}(u , u )-\nabla p \\
\mbox{div}\: u=0.
\end{array}
\right.$$
Taking the Leray projection of this equation, we obtain
$$\partial_t u-\Delta u=\mathbb{P} {\mathcal Q} (u ,u ).$$
We wish to follow the idea of the proof of~\cite{ms}, thus to construct~$Q = \mathbb{P} {\mathcal Q} $ as a matrix of Fourier multipliers of order~1, such that the symbol~$\widehat{\mathbb{P} {\mathcal Q} }$ has nonnegative entries, so that the nonlinear term preserves the positivity of the Fourier transform.
We define~$Q (u,u)$ as the vector whose $j$-component is, for~$ j \in \{1,2\},$
\begin{equation} \label{defQ}
\left(Q (u , u ) \right)^j = \sum_i q_{i,j} (D) (u^i u^j),
\end{equation}
and we impose that~$q_{i,j} (D)$
are Fourier multipliers of order~1.
For example, let us simply choose
$$
\widehat {\mathcal Q} (\xi) = |\xi|
{\mathbf 1}_{ \xi_{1} \xi_2<0} \left(
\begin{array}{cc}
1&1\\
1&1 \\
\end{array}
\right).
$$
Recalling that
$$
\widehat {\mathbb{P}}(\xi) =
\left(
\begin{array}{cc}
1-\frac{\xi_1^2}{|\xi|^2}&-\frac{\xi_1\xi_2}{|\xi|^2}\\
-\frac{\xi_1\xi_2}{|\xi|^2}&1-\frac{\xi_2^2}{|\xi|^2} \\
\end{array}
\right),
$$
we easily obtain
$$
\widehat{\mathbb{P} {\mathcal Q} }(\xi) =
{\mathbf 1}_{ \xi_{1} \xi_2<0}\frac{1}{|\xi| }\left(
\begin{array}{cc}
\xi_2^2-\xi_1\xi_2&\xi_2^2-\xi_1\xi_2\\
\xi_1^2-\xi_1\xi_2&\xi_1^2-\xi_2\xi_1\\
\end{array}
\right),
$$
so all the elements of this matrix are positive.
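The following small symbolic computation (an informal check of ours, not part of the argument; the variable names are arbitrary) verifies the entries of~$\widehat{\mathbb{P} {\mathcal Q}}$ displayed above and their sign on the region~$\xi_1\xi_2<0$.
\begin{verbatim}
# Sketch: verify the entries of P(xi) * Qhat(xi) in 2D, where P is the Leray
# projector symbol and Qhat = |xi| * ones(2,2) on the region xi1*xi2 < 0.
import sympy as sp

xi1, xi2 = sp.symbols('xi1 xi2', real=True)
norm2 = xi1**2 + xi2**2
P = sp.Matrix([[1 - xi1**2/norm2, -xi1*xi2/norm2],
               [-xi1*xi2/norm2, 1 - xi2**2/norm2]])   # Leray projector symbol
Qhat = sp.sqrt(norm2) * sp.ones(2, 2)                 # |xi| times the all-ones matrix
PQ = sp.simplify(P * Qhat)
# Expected: (xi2^2 - xi1*xi2)/|xi| on the first row, (xi1^2 - xi1*xi2)/|xi| on the second.
expected = sp.Matrix([[xi2**2 - xi1*xi2]*2, [xi1**2 - xi1*xi2]*2]) / sp.sqrt(norm2)
assert all(sp.simplify(e) == 0 for e in (PQ - expected))
print(PQ.subs({xi1: 1, xi2: -2}))   # all entries > 0 when xi1*xi2 < 0
\end{verbatim}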
The Duhamel formulation of~(TNS$_2$) reads
$$
\widehat u^j(t,\xi)=e^{-t|\xi|^2}\widehat u_0^j(\xi)+ \sum_i \int_0^t e^{-(t-s)|\xi|^2}q_{i,j}(\xi)(\widehat u^i(s)\ast \widehat u^j (s)) \: ds,
$$
where we have denoted by~$q_{i,j}(\xi)$ the matrix elements of~$\widehat{\mathbb{P} {\mathcal Q}}(\xi)$.
It is not difficult to see that all the usual results on the Cauchy problem for the Navier-Stokes equations hold for this system (namely results of~\cite{fk},\cite{cmp} and~\cite{kt} as recalled in the introduction).
Moreover it
is clear that if the Fourier transform~$\widehat u_0$ of the initial data is nonnegative, then that positivity property holds for all times.
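As an illustration of this mechanism (an informal numerical sketch on a truncated frequency grid, with arbitrarily chosen parameters; it is not used in the proof), one can iterate the Duhamel formula for the scalar model~(TNS$_1$) and check that every iterate keeps a nonnegative Fourier transform:
\begin{verbatim}
# Sketch: discrete Duhamel iteration for (TNS_1) on the Fourier side,
#   u_hat(t+dt) = e^{-dt xi^2} u_hat(t) + dt |xi| (u_hat * u_hat)(t),
# starting from nonnegative Fourier data.  Every term is nonnegative,
# so positivity of the Fourier transform is preserved at each step.
import numpy as np

xi = np.arange(-32, 33)                  # truncated frequency grid
dt, nsteps = 1e-3, 200
u_hat = np.exp(-xi**2)                   # nonnegative initial Fourier data
for _ in range(nsteps):
    conv = np.convolve(u_hat, u_hat, mode='same')     # discrete convolution
    u_hat = np.exp(-dt * xi**2) * u_hat + dt * np.abs(xi) * conv
    assert (u_hat >= 0).all()            # positivity is preserved
print(u_hat.max())
\end{verbatim}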
Now let us construct a data generating a solution blowing up in finite time. We will be following closely the argument of~\cite{ms}, and we refer to that article for all the computational details. We start by choosing the initial data~$u_0=(u_0^1, u_0^2)$ such that~$\widehat u_0^1 \geq 0$, and the support of $\widehat u_0^1$ lies in the second and fourth quadrants of the frequency plane, that is the zone where $\xi_1\xi_2<0$; we also suppose
this spectrum is symmetric with respect to zero (and, to fix notation, that the support of~$\widehat u_0^1 $ intersects the set~$|\xi_j| \geq 1/2$, for~$j \in \{1,2\}$). Taking into account the divergence free condition, which states that $\displaystyle \widehat u^2(\xi)=-\frac{\xi_1 \xi_2 }{\xi_2^2 }\widehat u^1(\xi) $, we deduce that $\widehat u^2$ is supported in the same region as $\widehat u^1 $ and is also nonnegative.
Let us denote by~$A$ the~$L^1$ norm of~$u_0$ (which will be assumed to be large enough at the end), and let us write~$u_0 = Aw_0$.
The idea, as in~\cite{ms}, is to prove that for any~$k \in \mathbb{N}$ and~$j \in \{1,2\}$,
\begin{equation} \label{induction}
\widehat u^j (t,\xi) \geq A^{2^{k}} e^{-2^k t} 2^{k-4(2^k-1)} {\mathbf 1}_{t \geq t_k} \widehat w_0^{k,j} (\xi)
\end{equation}
where we have written, $ w_0^{k,j} = (w_0^{0,j})^{2^k} $
and~$ \widehat w_0^0$ is the restriction of~$ \widehat w_0 \: {\mathbf 1}_{|\xi_j| \geq 1/2}$ to the second sector of the plane. Finally the time~$t_k$
is chosen so that~$t_0 = 0$ and~$t_k - t_{k-1} \geq 2^{-2k} \log 2$. Notice that~$\displaystyle \lim_{k \rightarrow \infty} t_k = \log 2^{1/3}. $ The result~(\ref{induction}) is proved by induction. Suppose that~(\ref{induction}) is true for~$k-1$ (it is clearly true for~$k = 0$). Due to the positivity of~$\widehat u_0$, we can write
\begin{eqnarray*}
\widehat u^j(t,\xi) & \geq& \sum_i \int_0^t e^{-(t-s)|\xi|^2}q_{i,j}(\xi)(\widehat u^i(s,\xi)\ast \widehat u^j (s,\xi)) \: ds \\
& \geq& \int_0^t e^{-(t-s)|\xi|^2}q_{j,j}(\xi)(\widehat u^j(s,\xi)\ast \widehat u^j (s,\xi)) \: ds
\end{eqnarray*}
and using the induction assumption, along with the support restriction of~$w_0^{k-1,j}$, we find that
\begin{eqnarray*}
\widehat u^j(t,\xi)
& \geq& \int_0^t e^{-(t-s)|\xi|^2}q_{j,j}(\xi) (A^{2^{k-1}} \alpha_{k-1}(s))^2 \: ds \: \widehat w_0^{k-1,j} \ast \widehat w_0^{k-1,j}(\xi)\\
& \geq& \int_0^t e^{-(t-s)2^{2k}}q_{j,j}(\xi) (A^{2^{k-1}} \alpha_{k-1}(s))^2 \: ds \: \widehat w_0^{k-1,j} \ast \widehat w_0^{k-1,j}(\xi),
\end{eqnarray*}
where~$\alpha_k(t) =2^{k-4(2^k-1)} {\mathbf 1}_{t \geq t_k} $. But $ \widehat w_0^{k-1,j} \ast \widehat w_0^{k-1,j} = \widehat w_0^{k ,j}$, and on the support of~$\widehat w_0^{k ,j}$
we have~$q_{j,j}(\xi) \geq C 2^k$. The induction then follows exactly as in~\cite{ms}.
Once~(\ref{induction}) is obtained, the blow up of all~$\dot B^{s}_{\infty,\infty}$ norms follows directly, noticing that~$u^j(t_\infty)$ can be bounded from below in~$\dot B^{s}_{\infty,\infty}$ by~$C (Ae^{-t_\infty} 2^{-4})^{2^k} 2^{(s+1)k}$, which goes to infinity with~$k$ as soon as~$Ae^{-t_\infty} 2^{-4} >1$. That lower bound is simply due to the fact that (calling~$\Delta_k$ the usual Littlewood-Paley truncation operator entering in the definition of Besov norms)
$$
\|u (t_\infty)\|_{\dot B^{s}_{\infty,\infty}} = \sup_k 2^{ks} \|\Delta_k u (t_\infty) \|_{L^\infty} \geq \sup_k 2^{ks} |\Delta_k u (t_\infty,0)| = \sup_k 2^{ks}
\|\widehat {\Delta_k u} (t_\infty) \|_{L^1}
$$
since~$\widehat {\Delta_k u}(t_\infty)$ is nonnegative.
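To make the divergence concrete (an informal numerical check of ours with the constant~$C$ set to~$1$; it plays no role in the proof), one can tabulate the lower bound~$(Ae^{-t_\infty} 2^{-4})^{2^k} 2^{(s+1)k}$ for a few values of~$A$:
\begin{verbatim}
# Sketch: the lower bound (A e^{-t_inf} 2^{-4})^(2^k) * 2^((s+1)k) on the Besov
# norm at time t_inf diverges with k as soon as A e^{-t_inf}/16 > 1.
import math

t_inf = math.log(2) / 3          # limit of the times t_k used in the induction
s = -1.0                         # any Besov regularity index, here s = -1
for A in (8.0, 32.0):            # A = L^1 norm of the data; blow-up needs A e^{-t_inf}/16 > 1
    base = A * math.exp(-t_inf) / 16
    bounds = [base ** (2 ** k) * 2 ** ((s + 1) * k) for k in range(6)]
    print(f"A={A:5.1f}, base={base:.3f}:", ["%.2e" % b for b in bounds])
# For A = 8 the base is < 1 and the bound is useless; for A = 32 it grows
# double-exponentially in k.
\end{verbatim}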
\begin{rmk} \label{whyitworks}
One can notice that as soon as the matrix~$Q$ has been defined, the computation turns out to be identical to the case studied in~\cite{ms}. In particular the important fact is that~$\widehat u_0$ is nonnegative (and that its support intersects, say, the set~$|\xi_j| \geq 1/2$).
\end{rmk}
\begin{rmk} As explained in the introduction, it seems natural to try to improve the previous example by perturbing the vorticity equation, since that equation is special in two space dimensions.
Let us therefore consider the vorticity $\omega=\partial_1 u^2-\partial_2 u^1$. As is well known, the two dimensional Navier-Stokes equations can simply be written as a transport-diffusion equation on~$\omega$:
$$\partial_t\omega+u \cdot \nabla\omega-\Delta \omega=0,$$
which can also be written, since~$u$ is divergence free,
$$
\partial_t\omega + \partial_1 (u^1\omega) + \partial_2(u^2\omega)- \Delta\omega=0.
$$
Changing the place of the derivatives, and noticing that a derivative of~$u$ has the same scaling as~$\omega$, a model equation for the vorticity equation is simply
$$
\partial_t\omega +\omega^2 - \Delta\omega=0.
$$
This simplified model is a semilinear heat equation for which the blow-up of the solution is well known (see \cite{friedman}, \cite{fujita}). It is also easy to see that the argument of~\cite{ms} is true for this system, which therefore blows up in finite time for large enough initial data with negative Fourier transform. One can note that the equation on $u$ becomes
$$
\partial_t u+\nabla^\perp\Delta^{-1}\big((\mbox{curl}\, u)^2\big)-\Delta u=-\nabla p \quad,\quad \mbox{div}\: u=0,
$$
which blows up but does not preserve the sign of the Fourier transform.
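To illustrate the blow-up mechanism in this simplified model, let us add an elementary (and of course classical) computation: for spatially homogeneous data~$\omega(0)=\omega_0<0$ the diffusion term plays no role and the equation reduces to an ODE,
$$
\partial_t\omega + \omega^2 = 0, \qquad \omega(t)=\frac{\omega_0}{1+t\,\omega_0},
$$
which blows up at time~$t^*=-1/\omega_0>0$; this is consistent with the blow-up results for the semilinear heat equation recalled above.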
\end{rmk}
\section{Proof of the theorem in the three-dimensional case}\label{3D}
The three-dimensional situation follows the lines of the two-dimensional case studied above, though it is slightly more technical. The main step, as in the previous section, consists in finding a three-dimensional matrix~${\mathcal Q}$ such that the Fourier transform of the product~$\mathbb{P}{\mathcal Q}$ has positive coefficients (we recall that~$\mathbb{P}$ denotes the~$L^2$ projection onto divergence free vector fields).
Let us define, similarly to the previous section, the matrix
$$
\widehat {\mathcal Q} (\xi) = |\xi| {\mathbf 1}_{ \xi \in {\mathcal E}} \left(
\begin{array}{ccc}
1&1&1 \\
1&1&1 \\
1&1&1
\end{array}
\right), $$
where~$\displaystyle
{\mathcal E} \buildrel\hbox{\footnotesize d{\'e}f}\over = \left\{\xi \in \mathbb{R}^3, \: \xi_{1}\xi_{2} <0, \: \xi_{1}\xi_{3} <0, \:
| \xi_{2}| < \min(|\xi_1 |,
|\xi_3 |)\right\}.$
We compute easily that
$$ \widehat {\mathbb{P} {\mathcal Q} }(\xi) =
{\mathbf 1}_{ \xi\in {\mathcal E}} |\xi|^{-1} \left(
\begin{array}{ccc}
\xi_2^2 + \xi_3^2 - \xi_1 \xi_2 - \xi_1 \xi_3 &\xi_2^2 + \xi_3^2 - \xi_1 \xi_2 - \xi_1 \xi_3&\xi_2^2 + \xi_3^2 - \xi_1 \xi_2 - \xi_1 \xi_3 \\
\xi_1^2 + \xi_3^2 - \xi_1 \xi_2 - \xi_2 \xi_3&\xi_1^2 + \xi_3^2 - \xi_1 \xi_2 - \xi_2 \xi_3&\xi_1^2 + \xi_3^2 - \xi_1 \xi_2 - \xi_2 \xi_3 \\
\xi_1^2 + \xi_2^2 - \xi_1 \xi_3 - \xi_2 \xi_3&\xi_1^2 + \xi_2^2 - \xi_1 \xi_3 - \xi_2 \xi_3&\xi_1^2 + \xi_2^2 - \xi_1 \xi_3 - \xi_2 \xi_3\end{array}
\right).
$$
Let us consider the sign of the matrix elements of~$\widehat {\mathbb{P} {\mathcal Q} }(\xi) $.
The first line of the above matrix is clearly made of positive scalars, due to the sign condition imposed on the components of~$\xi$.
The components of the second line may be written
$$
\xi_1^2 + \xi_3^2 - \xi_1 \xi_2 - \xi_2 \xi_3 = \xi_1^2 - \xi_1 \xi_2 + \xi_3 (\xi_3 - \xi_2),
$$
which is also positive since either~$ \xi_2 $ and~$\xi_3$ are both positive, in which case~$\xi_3 > \xi_{2}$, or they are both negative in which case~$\xi_3 < \xi_{2}$.
Similarly one has
$$
\xi_1^2 + \xi_2^2 - \xi_1 \xi_3 - \xi_2 \xi_3 = \xi_1^2 + \xi_2^2- \xi_3 (\xi_1 + \xi_2) ,
$$
and either~$\xi_1>0$, $\xi_2<0$, $\xi_3<0$ and~$\xi_1+\xi_2>0$, or~$\xi_1<0$, $\xi_2>0$, $\xi_3>0$ and~$\xi_1+\xi_2<0$. So the third line is also made of positive real numbers.
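As in the two dimensional case, these signs can be double-checked symbolically (an informal verification of ours; the sample points below are arbitrary elements of~${\mathcal E}$):
\begin{verbatim}
# Sketch: entries of P(xi) * Qhat(xi) in 3D and their sign on the region
# E = {xi1*xi2 < 0, xi1*xi3 < 0, |xi2| < min(|xi1|, |xi3|)}.
import sympy as sp

xi = sp.Matrix(sp.symbols('xi1 xi2 xi3', real=True))
n2 = (xi.T * xi)[0]
P = sp.eye(3) - xi * xi.T / n2                 # Leray projector symbol
PQ = sp.simplify(P * (sp.sqrt(n2) * sp.ones(3, 3)))
print(sp.simplify(PQ * sp.sqrt(n2)))           # rows: xi2^2+xi3^2-xi1*xi2-xi1*xi3, etc.
for point in [(3, -1, -2), (-3, 1, 2), (5, -2, -7)]:   # sample points of E
    vals = PQ.subs(dict(zip(xi, point)))
    assert all(v > 0 for v in vals)            # every entry is positive on E
\end{verbatim}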
Now that it has been checked that all coefficients are positive, we just have to follow again the proof of the two dimensional case to obtain the expected result, showing the blow up of solutions to
$$
\partial_t u - \Delta u = Q(u,u), \quad u_{| t = 0} = u_0,
$$
where~$ Q(u,u) = \mathbb{P} {\mathcal Q}(u,u)$ is the vector defined as in~(\ref{defQ}).
We will not write all the details, which are identical to the two-dimensional case, but simply give the form of the initial data, which is summarized in the next proposition.
\begin{prop}\label{data3D}
Let~$u_0$ be a smooth, divergence free vector field such that the components of~$\widehat u_0$ are even, nonnegative functions, such that the support of~$\widehat u_0$ intersects the set~$|\xi_j| \geq 1/2$, for~$j \in \{1,2,3\}$. Then the unique solution to~(TNS$_3$) associated with~$u_0$ blows up in finite time, in all Besov spaces.
\end{prop}
We will not detail the proof of that proposition, as it is identical to the two dimensional case (thus in fact to~\cite{ms}).
Of course one must check that such initial data exist. The simplest way to construct such data is to suppose that only two components are nonvanishing, say~$u_0^1$ and~$u_0^2$, and that the Fourier transform of~$u_0^1$ is supported in the region~$\{\xi_1 \xi_2 <0\}$ while intersecting the set~$|\xi_j| \geq 1/2$. The divergence free condition ensures that the same properties hold for~$u_0^2$ (and~$u_0^3$ is assumed to vanish identically). An explicit example is provided in the next section.
That ends the proof of the ``blowing up'' part of the theorem.
\begin{rmk}
Notice that in that example, the energy inequality~(E) cannot be satisfied, as it would require that~$(Q(u,u) | u)_{L^2} \leq 0$, which cannot hold in our situation since the Fourier transform~$\widehat u$ is nonnegative and the multipliers~$q_{i,j}$ are nonnegative.
\end{rmk}
\section{Examples of arbitrarily large initial data providing a blowing up solution to~(TNS$_3$) and a global solution to~(NS)}\label{exampleC-1}
In this short section, we check that the initial data provided in \cite{cg}, which allows one to obtain large, global solutions for the Navier-Stokes equations, gives rise to a solution blowing up in finite time for the modified three dimensional Navier-Stokes equation constructed in the previous section.
More precisely we have the following result.
\begin{prop}\label{NSvsTNS}
Let~$\phi $ be a function in~${\mathcal S}(\mathbb{R}^3)$, such that~$\widehat\phi\geq 0$, and such that~$\widehat\phi$ is even and is supported in the region~$\{ \xi_{1}\xi_{2} <0\} $, while its support intersects the set~$|\xi_j| \geq 1/2$, for~$j \in \{1,2,3\}$. Let~$\e$ and~$\alpha$ be given in~$ ]0,1[$, and consider the family of initial data
$$u_{0,\varepsilon}(x) = (\partial_{2}\varphi_{\varepsilon}(x),
-\partial_{1}\varphi_{\varepsilon}(x), 0)
$$
where
$$
\varphi_{\varepsilon}(x) = \frac{({-\log \varepsilon})^{\frac15}}
{\e^{1-\alpha}} \cos \left(
\frac{x_{3}}{\e}\right)
(\partial_1\phi)\Bigl(x_{1}, \frac{x_{2}}{\varepsilon^{\alpha}}, x_{3}\Bigr).
$$
Then for~$\e>0$ small enough, the unique solution of~(NS) associated with~$u_{0,\varepsilon}$ is smooth and global in time, whereas the unique solution of~(TNS$_3$) associated with~$u_{0,\varepsilon}$ blows up in finite time, in all Besov norms.
\end{prop}
\begin{rmk} It is proved in~\cite{cg} that such initial data has a large~$\dot B^{-1}_{\infty,\infty}$ norm, in the sense that there is a constant~$C$ such that
$$
C^{-1} ({-\log \varepsilon})^{\frac15}
\leq \|u_{0,\varepsilon}\|_{\dot B^{-1}_{\infty,\infty}}
\leq C ({-\log \varepsilon})^{\frac15}.
$$
\end{rmk}
To prove Proposition~\ref{NSvsTNS}, we notice that the initial data given in the proposition
is a particular case of the family of initial data presented in~\cite{cg}, Theorem~2, which generates a unique, global solution as soon as~$\e$ is small enough (in~\cite{cg} there is no restriction on the support of the Fourier transform and~$\partial_1 \phi$ is simply~$\phi$). So we just have to check that the initial data fits with the requirements of Section~\ref{3D} above, and more precisely that it satisfies the assumptions of Proposition~\ref{data3D}.
Notice that
$$
\widehat \varphi_\varepsilon (\xi)=
\frac{({-\log \varepsilon})^{\frac15}}
{2\e^{1-2\alpha} }
\left(i\xi_1\widehat\phi(\xi_1, \varepsilon ^\alpha \xi_2,\xi_3+\frac 1\varepsilon )+i\xi_1\widehat\phi(\xi_1, \varepsilon ^\alpha \xi_2,\xi_3-\frac 1\varepsilon )\right).
$$
We need to check that $\widehat u^i_{0,\varepsilon }\geq 0$, for $i \in \{1,2,3\}$, and that the Fourier support intersects the set~$|\xi_j| \geq 1/2$.
We have
$$\widehat u_{0,\varepsilon }(\xi)=
\frac{({-\log \varepsilon})^{\frac15}}
{2\e^{1-2\alpha} }
\left(-\xi_1\xi_2\widehat\phi(\xi_1,\varepsilon ^\alpha\xi_2,\xi_3\pm\frac 1\varepsilon ), \xi_1^2\widehat\phi(\xi_1,\varepsilon ^\alpha\xi_2,\xi_3\pm\frac 1\varepsilon ),0\right),$$
and we have clearly the desired properties.
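These two properties can also be checked symbolically, dropping the $\varepsilon$-dependent prefactors and the modulation in~$x_3$, which do not affect signs (an informal sketch of ours; the profile~\texttt{g} below stands for an arbitrary nonnegative even function supported where $\xi_1\xi_2<0$):
\begin{verbatim}
# Sketch: for u_0 = (d_2 phi, -d_1 phi, 0) one has, on the Fourier side,
#   u0_hat = (i xi2, -i xi1, 0) * phi_hat(xi),
# so xi . u0_hat = 0 identically; and if phi = d_1(profile) with profile_hat = g >= 0
# supported in {xi1*xi2 < 0}, the nonzero components are -xi1*xi2*g and xi1^2*g,
# both nonnegative on that support.
import sympy as sp

xi1, xi2, xi3 = sp.symbols('xi1 xi2 xi3', real=True)
g = sp.Function('g')(xi1, xi2, xi3)          # nonnegative profile (assumption)
phi_hat = sp.I * xi1 * g                     # Fourier transform of d_1(profile)
u0_hat = sp.Matrix([sp.I * xi2 * phi_hat, -sp.I * xi1 * phi_hat, 0])
assert sp.simplify(xi1*u0_hat[0] + xi2*u0_hat[1] + xi3*u0_hat[2]) == 0  # div-free
print(sp.simplify(u0_hat))                   # (-xi1*xi2*g, xi1**2*g, 0)
\end{verbatim}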
This ends the proof of the proposition, and of the theorem.
\end{document}
\begin{document}
\title{Convergence of Closed Pseudo-Hermitian Manifolds\footnotetext{\textbf{Keywords}: Convergence, Pseudo-Einstein, Pseudo-Hermitian Ricci Curvature, Bochner Formulae, CR Sobolev inequality, Sasakian Manifold}\footnote{\textbf{MSC 2010}: 53C25, 32V05}}
\author{Shu-Cheng Chang\footnote{Supported in part by the MOST of Taiwan} \and Yuxin Dong\footnote{Supported by NSFC grant No. 11771087, and LMNS, Fudan.} \and Yibin Ren}
\maketitle
\begin{abstract}
Based on a uniform CR Sobolev inequality and Moser iteration, this paper investigates the convergence of closed pseudo-Hermitian manifolds.
In terms of the subelliptic inequality, the set of closed normalized pseudo-Einstein manifolds satisfying some uniform geometric conditions is compact.
Moreover, the set of closed normalized Sasakian $\eta$-Einstein $(2n+1)$-manifolds with Carnot-Carath\'eodory distance bounded from above, volume bounded from below and $L^{n + \frac{1}{2}}$ norm of pseudo-Hermitian curvature bounded is $C^\infty$ compact. As an application, we will deduce some pointed convergence results for complete K\"ahler cones with Sasakian manifolds as their links.
\end{abstract}
\section{Introduction}
The Cheeger-Gromov compactness theorem says that the class of closed Riemannian $n$-manifolds with sectional curvature $|\mbox{Sec}|\leq \Lambda$, volume $\mbox{Vol} \geq V_1$ and diameter $\mbox{diam} \leq d$ is precompact in the $C^{1, \alpha}$ topology for any $ \alpha \in (0,1)$.
Gao \cite{gao1990einstein} studied the compactness of one canonical metric -- the Einstein metric -- and obtained that the class of normalized Einstein $4$-manifolds with injectivity radius $\mbox{inj} \geq i_0$ and diameter $\mbox{diam} \leq D$ is compact in the $C^\infty$ topology.
Anderson \cite{anderson1989ricci} extended Gao's theorem to higher dimensions under a weaker condition -- volume $\mbox{Vol} \geq V_1$.
There are many other compactness theorems under various geometric assumptions which lead to important results (cf. \cite{anderson1990convergence,anderson1992compactness,cheeger1970finiteness,cheeger1997ricci,fukaya2006metruc,gromov181metric,petersen2006riemannian,petersen1997relative,tian1992compact}).
One generalization of Gromov precompactness theorem is to study the convergence of various geometric structures.
In sub-Riemannian geometry, Baudoin, Bonnefont, Garofalo and Munive \cite{baudoin2014volume} have shown the Gromov-Hausdorff precompactness theorem for closed sub-Riemannian manifolds which satisfy the so-called curvature-dimension inequality and have bounded (sub-Riemannian) diameter.
In pseudo-Hermitian geometry,
Chang, Chang, Han and Tie \cite{chang2017pseudo} deduced the CR volume doubling property on pseudo-Hermitian manifolds under uniform conditions on the pseudo-Hermitian Ricci curvature and the pseudo-Hermitian torsion, which also leads to the Gromov-Hausdorff precompactness theorem for pseudo-Hermitian manifolds.
For more general case, one refers to \cite{villani2008optimal}.
Motivated by these results, this paper studies the convergence, with regularity, of closed pseudo-Hermitian manifolds.
Note that a pseudo-Hermitian manifold is determined by a contact form $\theta$ and an almost complex structure $J$ which is defined on the horizontal bundle $HM$ given by $\theta$ and can be canonically extended to the tangent bundle, also denoted by $J$.
A sequence of closed pseudo-Hermitian manifolds $(M_i, H M_i, J_i, \theta_i)$ is called $C^{k, \alpha}$ convergent if there are a manifold $M$, two tensors $\theta \in C^{k,\alpha} (TM , \mathbb{R}), J \in C^{k, \alpha} (TM, TM)$ and diffeomorphisms $\phi_i : M \to M_i$ such that $ \phi_i^* \theta_i \to \theta $ and $ \phi_i^* J_i \to J $ in $C^{k,\alpha}$ topology.
The class of closed pseudo-Hermitian manifolds with uniform bounds on the Carnot-Carath\'eodory diameters, the volumes and the higher-order horizontal derivatives of the pseudo-Hermitian curvatures and pseudo-Hermitian torsions is compact in the $C^\infty$ topology.
The pseudo-Einstein condition leads to some subelliptic inequalities for the pseudo-Hermitian curvature, which weaken the higher-order derivative condition on the pseudo-Hermitian curvature, as follows:
\begin{customthm}{\ref{c-smcptpseudothm}}
Given constants $\kappa_1, \kappa_2, d, V_1, \lambda, \Lambda$ and $q > Q$ where $Q$ is given in \eqref{b-crsobolev}, any sequence of closed connected pseudo-Einstein manifolds with dimension $2n+1 \geq 5$ and
\begin{align*}
|A| \leq \kappa_1, |div A| \leq \kappa_2, ||A||_{S^{\frac{q}{2}}_{2k+4}} \leq \lambda, ||\tilde{R}||_{\frac{q}{2}} \leq \Lambda, \mbox{diam}_{cc} \leq d, \mbox{Vol} \geq V_1
\end{align*}
is $C^{k + 1, \alpha}$ sub-convergent for any $\alpha \in (0,1)$, where $A$ is the pseudo-Hermitian torsion, $\tilde{R}$ is pseudo-Hermitian curvature, $\mbox{diam}_{cc}$ is the Carnot-Carath\'eodory diameter and $\mbox{Vol}$ is the volume.
\end{customthm}
\noindent It is remarkable that the pseudo-Hermitian scalar curvature may not be constant in this theorem and $Q > 2n +2$.
Moreover, although the bounds on $|A|$, $|div A|$ and the pseudo-Einstein condition do not imply a lower bound on the Riemannian Ricci curvature, they give the CR Sobolev inequality of \cite{chang2017pseudo}, which is an important ingredient in geometric analysis.
When the real dimension is 5, the convergence theorem needs fewer derivatives of the pseudo-Hermitian torsion, due to a special Bochner formula for the pseudo-Hermitian torsion (see Theorem \ref{c-pseudocptness}).
A pseudo-Hermitian manifold with vanishing pseudo-Hermitian torsion is Sasakian.
Sasakian geometry is an important branch of contact geometry and gets much attention due to its role in String theory and K\"ahler geometry.
In the last two sections of this paper,
the $C^\infty$ version of Theorem \ref{c-smcptpseudothm} for closed Sasakian pseudo-Einstein $(2n+1)$-manifolds will be improved to $q = 2n +1$ and the pseudo-Einstein condition will be slightly relaxed.
In particular, any sequence of closed Sasakian manifolds with pseudo-Hermitian Ricci curvature bounded from below will have some $C^{1, \alpha}$ sub-convergence.
As an application, we will deduce some pointed convergence of complete K\"ahler cones whose links are Sasakian manifolds (see Corollary \ref{d-corollary-cone} and Corollary \ref{c-corollary-cone}).
\section{Pseudo-Hermitian Geometry and CR Bochner Formulae} \label{sec-basic}
Let's briefly review the pseudo-Hermitian geometry. For details, the readers could refer to \cite{boyer2008sasakian,dragomir2006cr,lee1988psuedo,webster1978pseudo}. A smooth manifold $M$ of real dimension ($2n+1$) is said to be a CR manifold if
there exists a smooth rank $n$ complex subbundle $T_{1,0} M \subset TM \otimes \mathbb{C}$ such that
\begin{gather}
T_{1,0} M \cap T_{0,1} M =0 \\
[\Gamma (T_{1,0} M), \Gamma (T_{1,0} M)] \subset \Gamma (T_{1,0} M) \label{a-integrable}
\end{gather}
where $T_{0,1} M = \overline{T_{1,0} M}$ is the complex conjugate of $T_{1,0} M$.
Equivalently, the CR structure may also be described by the real subbundle $HM = Re \: \{ T_{1,0} M \oplus T_{0,1}M \}$ of $TM$ which carries an almost complex structure $J : HM \rightarrow HM$ defined by $J (X+\overline{X})= i (X-\overline{X})$ for any $X \in T_{1,0} M$.
Since $HM$ is naturally oriented by the almost complex structure $J$, then $M$ is orientable if and only if
there exists a global nowhere vanishing 1-form $\theta$ such that $ HM = Ker (\theta) $.
Any such section $\theta$ is referred to as a pseudo-Hermitian structure on $M$. The Levi form $L_\theta $ of a given pseudo-Hermitian structure is defined by
$$L_\theta (X, Y ) = d \theta (X, J Y) $$
for any $X , Y \in HM$.
An orientable CR manifold $(M, HM, J)$ is called strictly pseudo-convex if $L_\theta$ is positive definite for some $\theta$. It is remarkable that the signature of the Levi form is invariant under the CR conformal transformation $\tilde{\theta} = e^{2 u} \theta$.
When $(M, HM, J)$ is strictly pseudo-convex, there exists a pseudo-Hermitian structure $\theta$ such that $L_\theta$ is positive definite. The quadruple $( M, HM, J, \theta )$ is called a pseudo-Hermitian manifold. This paper is concerned with such pseudo-Hermitian manifolds.
For a pseudo-Hermitian manifold $(M, HM, J, \theta)$, there exists a unique nowhere zero vector field $\xi$, called Reeb vector field, transverse to $HM$ satisfying
$\xi \lrcorner \: \theta =1, \ \xi \lrcorner \: d \theta =0$. There is a decomposition of the tangent bundle $TM$:
\begin{align}
TM = HM \oplus \mathbb{R} \xi \label{a-webstermetric}
\end{align}
which induces the projection $\pi_H : TM \to HM$. Set $G_\theta = \pi_H^* L_\theta$. Since $L_\theta$ is a metric on $HM$, it is natural to define a Riemannian metric
\begin{align}
g_\theta = G_\theta + \theta \otimes \theta
\end{align}
which makes $HM$ and $\mathbb{R} \xi$ orthogonal. Such metric $g_\theta$ is called Webster metric.
In the terminology of foliation geometry, $\mathbb{R} \xi$ provides a one-dimensional Reeb foliation and $HM$ is its horizontal distribution.
By requiring that $J \xi=0$, the almost complex structure $J$ can be extended to an endomorphism of $TM$.
The integrable condition \eqref{a-integrable} guarantees that $g_\theta$ is $J$-invariant.
Clearly $\theta \wedge (d \theta)^n$ differs from the volume form of $g_\theta$ by a constant factor.
Henceforth we will regard it as the volume form and always omit it for simplicity.
On a pseudo-Hermitian manifold, there exists a canonical connection preserving the CR structure and the Webster metric.
\begin{proposition} [\cite{tanaka1975differential,webster1978pseudo}] \label{b-tanakawebster}
Let $(M, HM, J, \theta)$ be a pseudo-Hermitian manifold. Then there is a unique linear connection $\nabla$ on $M$ (called the Tanaka-Webster connection) such that:
\begin{enumerate}[(1)]
\item The horizontal bundle $HM$ is parallel with respect to $\nabla$;
\item $\nabla J=0$, $\nabla g_\theta=0$;
\item \label{a-tor1} The torsion $T_\nabla$ of the connection $\nabla$ is pure, that is, for any $X, Y \in HM$, \label{b-twtorsion}
$$
T_{\nabla} (X, Y)= 2 d \theta (X, Y) \xi \mbox{ and } T_{\nabla} (\xi, J X) + J T_{\nabla} (\xi, X) =0.
$$
\end{enumerate}
\end{proposition}
Note that the torsion on $HM \times HM$ is always nonzero.
The pseudo-Hermitian torsion, denoted by $\tau$, is the $TM$-valued 1-form defined by $\tau(X) = T_{\nabla} (\xi,X)$. Define the tensor $A$ by $A(X, Y) = g_\theta (X, \tau (Y)) $ for any $X, Y \in TM$. The condition \eqref{a-tor1} leads that $A$ is trace-free and symmetric.
A pseudo-Hermitian manifold is called Sasakian if $\tau \equiv 0$. Sasakian geometry is very rich as the odd-dimensional analogue of K\"ahler geometry. We refer the readers to the book \cite{boyer2008sasakian} by C. P. Boyer and K. Galicki.
Let $R$ be the curvature tensor of the Tanaka-Webster connection. As the Riemannian curvature, $R$ satisfies
\begin{align*}
\langle R(X, Y) Z, W \rangle = - \langle R(X, Y) W, Z \rangle = - \langle R(Y, X) Z, W \rangle
\end{align*}
for any $X, Y, Z, W \in \Gamma (TM)$.
Set $e_0= \xi$. Let $\{e_i\}_{i=1}^{2n}$ be a local orthonormal basis of $HM$ restricted on some open set $U$ satisfying $e_{i+n} = J e_i, i = 1, \dots , n$. Then $\{\eta_\alpha = \frac{1}{\sqrt{2}} (e_\alpha - i J e_\alpha) \}_{\alpha=1}^n$ is a unitary frame of $T_{1,0} M |_U$.
Besides $R_{\bar{\alpha} \beta \lambda \bar{\mu}} = \langle R(\eta_\lambda, \eta_{\bar{\mu}}) \eta_\beta, \eta_{\bar{\alpha}} \rangle$, the other parts of $R$ are clear:
\begin{gather}
R_{\bar{\alpha} \beta \bar{\lambda} \bar{\mu}} = 2 i ( A_{\bar{\alpha} \bar{\mu}} \delta_{\beta \bar{\lambda}} - A_{ \bar{\alpha} \bar{\lambda}} \delta_{\beta \bar{\mu}} ) , \\
R_{\bar{\alpha} \beta \lambda \mu} =2 i ( A_{\beta \mu} \delta_{\bar{\alpha} \lambda}- A_{\beta \lambda } \delta_{\bar{\alpha} \mu } ) , \\
R_{\bar{\alpha} \beta 0 \bar{\mu}} = A_{\bar{\alpha} \bar{\mu}, \beta} , \label{b-tor6} \\
R_{ \bar{\alpha} \beta 0 \mu} = - A_{\mu \beta, \bar{\alpha} } , \label{b-tor7}
\end{gather}
where $A_{\mu \beta, \bar{\alpha}}$ are the components of $\nabla A$. In particular, we will set
\begin{align*}
\tilde{R} = R \big|_{T_{0,1} M \otimes T_{1,0} M \otimes T_{1,0} M \otimes T_{0,1} M}
\end{align*}
which is called pseudo-Hermitian curvature.
As a Riemannian manifold, $(M, g_\theta)$ carries the Levi-Civita connection $D$ and the Riemannian curvature $\hat{R}$.
Dragomir and Tomassini \cite{dragomir2006cr} have derived the relationship between $\nabla$ and $D$:
\begin{align}
D = \nabla - (d \theta + A) \otimes \xi + \tau \otimes \theta + 2 \theta \odot J, \label{b-con}
\end{align}
where $2 \theta \odot J = \theta \otimes J + J \otimes \theta$.
They also have deduced the relationship between $R$ and $\hat{R}$: for any $X, Y, Z \in \Gamma(TM)$,
\begin{align*}
\hat{R} (X, Y) Z = & R (X, Y) Z + ( LX \wedge LY ) Z + d \theta (X, Y) J Z \numberthis \label{b-pseudoriemcur} \\
& \quad - g_\theta ( S (X, Y), Z ) \xi + \theta (Z ) S ( X, Y) \\
& \quad - 2 g_\theta ( \theta \wedge \mathcal{O} (X, Y), Z) \xi + 2 \theta ( Z) (\theta \wedge \mathcal{O}) ( X, Y ) ,
\end{align*}
where
\begin{align}
(LX \wedge LY ) Z =& g_\theta (LX, Z) LY - g_\theta (LY, Z) LX , \\
S (X, Y) =& ( \nabla_X \tau ) Y - (\nabla_Y \tau) X , \label{b-formula-s} \\
\mathcal{O} = & \tau^2 + 2 J \tau - \pi_H , \label{b-formula-o} \\
L = & \tau + J .
\end{align}
Hence the first Bianchi identity of $R$ is
\begin{align}
\mathcal{S} \left( R(X, Y) Z \right) = 2 \mathcal{S} \left( d \theta (X, Y) \tau (Z) \right). \label{b-firstbianchi}
\end{align}
where $\mathcal{S}$ stands for the cyclic sum with respect to $X, Y, Z \in \Gamma (HM)$. It implies that $R_{\bar{\alpha} \beta \lambda \bar{\mu}} = R_{\bar{\alpha} \lambda \beta \bar{\mu}}$ which was given by Webster \cite{webster1978pseudo}.
Tanaka \cite{tanaka1975differential} defined the pseudo-Hermitian Ricci operator $R_*$ by
\begin{align}
R_* X = - i \sum_{\lambda=1}^n R(\eta_\lambda, \eta_{\bar{\lambda}}) JX.
\end{align}
Set $R_* \eta_\alpha = R_{\alpha \bar{\beta}} \eta_\beta$. Hence $R_{\alpha \bar{\beta}} = R_{\bar{\beta} \alpha \lambda \bar{\lambda}} = R_{\bar{\beta} \lambda \alpha \bar{\lambda}} $ by the first Bianchi identity.
The pseudo-Hermitian scalar curvature $\rho$ is half of the trace of $R_*$, that is $\rho = R_{\alpha \bar{\alpha}}$.
The following second Bianchi identities were given by Lee in Lemma 2.2 of \cite{lee1988psuedo}. Please note that this paper follows the same exterior algebra conventions as \cite{dragomir2006cr}, which makes some coefficients in the commutation formulas differ from those in \cite{lee1988psuedo}.
\begin{lemma}
The pseudo-Hermitian curvature and torsion satisfy the following identities
\begin{gather}
A_{\alpha \beta, \gamma} = A_{\alpha \gamma, \beta} \label{b-sym-tor} \\
R_{\bar{\alpha} \beta \lambda \bar{\mu}, \gamma} - R_{\bar{\alpha} \beta \gamma \bar{\mu}, \lambda} = 2 i ( A_{\beta \gamma, \bar{\mu}} \delta_{\bar{\alpha} \lambda} + A_{\gamma \beta , \bar{\alpha}} \delta_{\lambda \bar{\mu}} - A_{\beta \lambda, \bar{\mu}} \delta_{\bar{\alpha} \gamma} - A_{\lambda \beta, \bar{\alpha}} \delta_{\gamma \bar{\mu}} ), \label{b-sec-bianchi-1} \\
R_{\bar{\alpha} \beta \lambda \bar{\mu}, 0}= A_{\lambda \beta, \bar{\alpha} \bar{\mu}} + A_{\bar{\alpha} \bar{\mu} , \beta \lambda} + 2 i (A_{\bar{\alpha} \bar{\gamma}} A_{\gamma \lambda } \delta_{\beta \bar{\mu}} - A_{\beta \gamma} A_{\bar{\gamma} \bar{\mu}} \delta_{\bar{\alpha} \lambda} ), \label{b-secbianchi2}
\end{gather}
and the contracted identities:
\begin{gather}
R_{\lambda \bar{\mu}, \gamma} - R_{\gamma \bar{\mu}, \lambda} = 2i (A_{\gamma \alpha, \bar{\alpha}} \delta_{\lambda \bar{\mu}} - A_{\lambda \alpha, \bar{\alpha}} \delta_{\gamma \bar{\mu}}), \\
\rho_\lambda - R_{\lambda \bar{\mu}, \mu} = 2 i (n-1) A_{\lambda \mu, \bar{\mu}}, \label{b-einscalar} \\
R_{\lambda \bar{\mu}, 0} = A_{\lambda \alpha, \bar{\alpha} \bar{\mu}} + A_{\bar{\alpha} \bar{\mu}, \alpha \lambda}, \label{b-ric-reeb} \\
\rho_0 =A_{\lambda \alpha, \bar{\alpha} \bar{\lambda}} + A_{\bar{\alpha} \bar{\lambda}, \alpha \lambda}.
\end{gather}
\end{lemma}
\begin{lemma} \label{b-ricrelation}
Let $\hat{Ric}$ be the Riemannian Ricci tensor.
The relation of $R_*$ and $\hat{Ric}$ is
\begin{align}
\langle \hat{Ric} (X) , Y \rangle=& \langle R_* X, Y \rangle - 2 \langle \pi_H X, \pi_H Y \rangle + (2 n - |\tau|^2) \theta (X) \theta (Y) \label{b-riempseudoric} \\
& - 2 (n-2) A(JX, Y) - \langle (\nabla_\xi \tau) X, Y \rangle + div \tau (X) \theta (Y) + \theta(X) div \tau (Y) \nonumber
\end{align}
for $X, Y \in \Gamma (TM)$ and
\begin{align*}
|\tau|^2 = \sum_{i=1}^{2n} \langle \tau^2 (e_i), e_i \rangle = 2 \sum_{\alpha, \beta =1}^n A_{\alpha \beta} A_{\bar{\alpha} \bar{\beta}}.
\end{align*}
\end{lemma}
\begin{proof}
To prove \eqref{b-riempseudoric}, let's introduce an auxiliary tensor $\mathcal{Q}$ as follows:
\begin{align}
\mathcal{Q} (X) = \sum_{i=1}^{2n} R( X, e_i) e_i.
\end{align}
It suffices to deduce the following identities:
\begin{align}
\langle R_* X, Y \rangle =& \langle \mathcal{Q} (\pi_H X), \pi_H Y \rangle - 2 (n-1) A(J X, Y), \label{b-pseric} \\
\langle \mathcal{Q} (X) , Y \rangle = & \langle \mathcal{Q} (\pi_H X), \pi_H Y \rangle + \theta(X) div \tau (Y) \label{b-ric2} \\
\langle \hat{Ric} (X) , Y \rangle=& \langle \mathcal{Q} (X), Y \rangle - 2 \langle \pi_H X, \pi_H Y \rangle + (2 n - |\tau|^2) \theta (X) \theta (Y) + 2 \langle \tau J X, Y \rangle \label{b-rric} \\
& - \langle (\nabla_\xi \tau) X, Y \rangle + div \tau (X) \theta (Y) \nonumber
\end{align}
For \eqref{b-pseric}, on one hand, since $J X$ is horizontal, we can use the first Bianchi identity \eqref{b-firstbianchi} and obtain
\begin{align*} \numberthis \label{b-9}
-i & \sum_{\alpha=1}^n R(\eta_\alpha, \eta_{\bar{\alpha}}) JX - i \sum_{\alpha=1}^n R(\eta_{\bar{\alpha}}, JX) \eta_\alpha - i \sum_{\alpha=1}^n R(JX, \eta_\alpha) \eta_{\bar{\alpha}} \\
&= - i \sum_{\alpha=1}^n 2 d \theta (\eta_\alpha, \eta_{\bar{\alpha}}) \tau J X - i \sum_{\alpha=1}^n 2 d \theta (\eta_{\bar{\alpha}}, JX) \tau \eta_\alpha - i \sum_{\alpha=1}^n 2 d \theta (JX, \eta_\alpha) \tau \eta_{\bar{\alpha}} \\
& = 2n \tau JX - 2 \sum_{\alpha=1}^n \tau J \bigg( \langle \eta_{\bar{\alpha}}, X \rangle \eta_\alpha + \langle X, \eta_\alpha \rangle \eta_{\bar{\alpha}} \bigg) \\
& = 2 (n-1) \tau JX.
\end{align*}
On the other hand, note that
\begin{align*}\numberthis \label{b-8}
i \sum_{\alpha=1}^n R(\eta_{\bar{\alpha}}, JX) \eta_\alpha + i \sum_{\alpha=1}^n R(JX, \eta_\alpha) \eta_{\bar{\alpha}} = & i \sum_{\alpha=1}^n R(\eta_{\bar{\alpha}}, JX) \eta_\alpha - i \sum_{\alpha=1}^n R(\eta_\alpha, JX) \eta_{\bar{\alpha}} \\
= & \sum_{\alpha=1}^n J \big( R(\eta_{\bar{\alpha}}, JX) \eta_\alpha \big) + J \big( R(\eta_\alpha, JX) \eta_{\bar{\alpha}} \big) \\
= & J \circ \mathcal{Q} (JX)
\end{align*}
Substituting \eqref{b-8} into \eqref{b-9} and replacing $X, Y$ by $JX, JY$, we obtain \eqref{b-pseric}.
The identity \eqref{b-ric2} is due to \eqref{b-pseudoriemcur} and the following calculation:
\begin{align*}
\langle \mathcal{Q} (\xi), \pi_H Y \rangle = & \sum_{i = 1}^{2n} \langle R (\xi, e_i) e_i, \pi_H Y \rangle = \sum_{i = 1}^{2n} \langle \hat{R} (\xi, e_i) e_i, \pi_H Y \rangle \\
=& \sum_{i = 1}^{2n} \langle \hat{R} (e_i, \pi_H Y) \xi, e_i \rangle = \sum_{i=1}^{2n} \langle S (e_i, \pi_H Y) , e_i \rangle = div \tau (Y).
\end{align*}
For \eqref{b-rric}, by \eqref{b-pseudoriemcur} and $e_i \in \Gamma (HM)$, we have
\begin{align*}
\langle \hat{Ric} (X), Y \rangle = &
\sum_{i=1}^{2n} \langle \hat{R}(e_i, X) Y, e_i \rangle + \langle \hat{R} (\xi, X) Y, \xi \rangle \numberthis \label{b-ric1} \\
= & \sum_{i=1}^{2n} \langle R(e_i, X) Y, e_i \rangle + \sum_{i=1}^{2n} \langle (L e_i \wedge L X) Y, e_i \rangle + \sum_{i=1}^{2n} 2 d \theta (e_i, X) \langle J Y, e_i \rangle \\
& + \sum_{i=1}^{2n} \theta(Y) \langle S(e_i,X) , e_i \rangle + \sum_{i=1}^{2n} 2 \theta(Y) \langle (\theta \wedge \mathcal{O}) (e_i, X), e_i \rangle + \langle \hat{R} (\xi, X) Y, \xi \rangle
\end{align*}
Now we examine each term on the right side except the first one. Note that
\begin{align}
\sum_{i=1}^{2n} \langle (L e_i \wedge L X) Y, e_i \rangle = \sum_{i=1}^{2n} \langle L e_i , Y \rangle \langle L X, e_i \rangle - \langle LX, Y \rangle \langle L e_i, e_i \rangle \label{b-1}
\end{align}
On one hand, since $\langle L e_i, Y \rangle = \langle e_i, \tau Y \rangle - \langle e_i, J Y \rangle$,
we find
\begin{align*}
\sum_{i=1}^{2n} \langle L e_i , Y \rangle \langle L X, e_i \rangle =& \langle L X , \tau Y \rangle - \langle L X, J Y \rangle \numberthis \label{b-2} \\
= & \langle \tau X, \tau Y \rangle + \langle JX, \tau Y \rangle - \langle \tau X, J Y \rangle - \langle J X, JY \rangle \\
= & \langle \tau X, \tau Y \rangle - \langle \pi_H X, \pi_H Y \rangle.
\end{align*}
Here the last equation is due to $\tau J + J \tau =0$ which implies
\begin{align*}
\langle J X, \tau Y \rangle = - \langle X, J \tau Y \rangle = \langle X, \tau J Y \rangle = \langle \tau X, J Y \rangle.
\end{align*}
On the other hand,
\begin{align}
\sum_{i=1}^{2n} \langle L e_i, e_i \rangle = trace_{G_\theta} \tau + trace_{G_\theta} J = 0. \label{b-3}
\end{align}
Substituting \eqref{b-2} and \eqref{b-3} into \eqref{b-1}, the result is
\begin{align}
\sum_{i=1}^{2n} \langle (L e_i \wedge L X) Y, e_i \rangle = \langle \tau X, \tau Y \rangle - \langle \pi_H X, \pi_H Y \rangle. \label{b-5}
\end{align}
The third term in \eqref{b-ric1} is due to
\begin{align}
\sum_{i=1}^{2n} 2 d \theta (e_i, X) \langle JY, e_i \rangle = - 2 \langle JX, JY \rangle = -2 \langle \pi_H X, \pi_H Y \rangle. \label{b-6}
\end{align}
The fourth term in \eqref{b-ric1} comes from the formula \eqref{b-formula-s} of $S$ and the parallelism of $HM$ with respect to Tanaka-Webster connection:
\begin{align}
\sum_{i=1}^{2n} \langle S(e_i, X), e_i \rangle = \sum_{i=1}^{2n} \langle (\nabla_{e_i} \tau) X, e_i \rangle - \sum_{i=1}^{2n} \langle (\nabla_X \tau) e_i, e_i \rangle = (div \tau) X. \label{b-7}
\end{align}
For the fifth term in \eqref{b-ric1}, by the formula \eqref{b-formula-o} of $\mathcal{O}$, we have
\begin{align}
\sum_{i=1}^{2n} 2 \langle (\theta \wedge \mathcal{O}) (e_i, X), e_i \rangle = & \sum_{i=1}^{2n} - \langle \theta (X) \mathcal{O} (e_i) , e_i \rangle \label{b-4} \\
= & \sum_{i=1}^{2n} -\theta (X) \langle (\tau^2 + 2 J \tau - \pi_H ) (e_i) , e_i \rangle \nonumber \\
= & \theta (X) (2n -|\tau|^2) \nonumber
\end{align}
which is due to
\begin{align*}
- \sum_{i=1}^{2n} \langle J \tau (e_i), e_i \rangle = \sum_{i=1}^{2n} \langle \tau J e_i , e_i \rangle = \sum_{i=1}^{n} \big( \langle \tau J e_i, e_i \rangle + \langle \tau J^2 e_i, J e_i \rangle \big) = 0.
\end{align*}
For the sixth term in \eqref{b-ric1}, we have
\begin{align*}
\langle \hat{R} (\xi, X) Y, \xi \rangle = & - \langle S(\xi, X), Y \rangle - 2 \langle (\theta \wedge \mathcal{O}) (\xi, X), Y \rangle \\
= & - \langle (\nabla_\xi \tau) X, Y \rangle - \langle \mathcal{O} (X), Y \rangle \\
= & - \langle (\nabla_\xi \tau) X, Y \rangle - \langle \tau X, \tau Y \rangle - 2 \langle J \tau X, Y \rangle + \langle \pi_H X, \pi_H Y \rangle . \numberthis \label{b-10}
\end{align*}
By substituting \eqref{b-5}, \eqref{b-6}, \eqref{b-7}, \eqref{b-4} and \eqref{b-10} to \eqref{b-ric1}, we get \eqref{b-rric}.
For \eqref{b-riempseudoric}, we note that
\begin{align}
\langle \mathcal{Q} (X), Y \rangle = \langle \mathcal{Q} (J^2 X), J^2 Y \rangle + \theta (X) \langle \mathcal{Q} (\xi), \pi_H Y \rangle. \label{b-11}
\end{align}
Using \eqref{b-pseudoriemcur}, the last term can be calculated by
\begin{align}
\langle \mathcal{Q} (\xi), \pi_H Y \rangle &= \sum_{i=1}^{2n} \langle R(\xi , e_i) e_i, \pi_H Y \rangle \label{b-12} \\
&= \sum_{i=1}^{2n} \langle \hat{R} (\xi , e_i) e_i, \pi_H Y \rangle = \sum_{i=1}^{2n} \langle \hat{R} (e_i, \pi_H Y ) \xi , e_i \rangle = div \tau (Y). \nonumber
\end{align}
Substituting \eqref{b-pseric}, \eqref{b-ric2}, \eqref{b-11} and \eqref{b-12} to \eqref{b-rric}, we obtain \eqref{b-riempseudoric}.
\end{proof}
A pseudo-Hermitian structure $\theta$ is called pseudo-Einstein if $R_* = \frac{\rho}{n} G_\theta$.
Since the curvature associated with $\nabla$ contains the pseudo-Hermitian torsion, the pseudo-Einstein structure will heavily depend on it too. For example, by \eqref{b-einscalar}, if $(M, HM, J, \theta)$ is pseudo-Einstein with dimension $2n+ 1 \geq 5$, then the horizontal gradient of $\rho$, the restriction of $\nabla \rho$ on $HM$, is
\begin{align} \label{b-tordiv}
\nabla_b \rho = 2n \mbox{ div } \tau \circ J,
\end{align}
where the horizontal gradient $\nabla_b \rho$ denotes the horizontal restriction of the gradient $\nabla \rho$.
Hence, under the pseudo-Einstein condition, the constancy of the pseudo-Hermitian scalar curvature is equivalent to the vanishing of the horizontal divergence of the pseudo-Hermitian torsion.
The Ricci identity is very useful for deriving Bochner formulae in Riemannian geometry. It has the following analogue for any affine connection.
\begin{lemma} \label{c-lem-ricidentity}
Assume that $ \sigma \in \Gamma ( \otimes^p T^* M ) $ and $ X_1 , \cdots , X_p \in \Gamma (TM) $. Then
\begin{align} \label{c-ricidentity}
& ( \nabla^2 \sigma ) ( X_1 , \cdots , X_p ; X, Y ) - ( \nabla^2 \sigma ) ( X_1 , \cdots , X_p ; Y, X ) \\
&= \sum_{i=1}^{p} \sigma ( X_1 , \cdots , R(X, Y) X_i, \cdots, X_p ) + \big( \nabla_{T_\nabla (X, Y) } \sigma \big) ( X_1 , \cdots , X_p ). \nonumber
\end{align}
where
\begin{align*}
( \nabla^2 \sigma ) ( X_1 , \cdots , X_p ; X, Y ) = (\nabla_Y \nabla \sigma) (X_1, \cdots, X_p; X).
\end{align*}
\end{lemma}
The proof follows directly from the definition and we omit it here. As an application,
Lemma \ref{c-lem-ricidentity} yields the commutations of second derivatives $\nabla^2 A$.
\begin{lemma} \label{c-psetorcom}
The commutations of the derivatives of the pseudo-Hermitian torsion are
\begin{align}
A_{\bar{\alpha} \bar{\beta} , \gamma \lambda } - A_{\bar{\alpha} \bar{\beta}, \lambda \gamma} =& 2 i ( A_{ \rho \gamma } \delta_{\bar{\alpha} \lambda} - A_{\rho \lambda} \delta_{\bar{\alpha} \gamma} ) A_{\bar{\rho} \bar{\beta}} + 2i ( A_{\rho \gamma} \delta_{\bar{\beta} \lambda} -A_{\rho \lambda } \delta_{\bar{\beta} \gamma} ) A_{\bar{\rho} \bar{\alpha}} \label{c-tor1} \\
A_{\bar{\alpha} \bar{\beta} , \bar{\gamma} \bar{\lambda}} - A_{\bar{\alpha} \bar{\beta} , \bar{\lambda} \bar{\gamma}} = & 0 \label{c-tor2} \\
A_{\bar{\alpha} \bar{\beta} , \gamma \bar{\lambda} } - A_{\bar{\alpha} \bar{\beta}, \bar{\lambda} \gamma} =& R_{\rho \bar{\alpha} \gamma \bar{\lambda}} A_{\bar{\rho} \bar{\beta}} + R_{ \rho \bar{\beta} \gamma \bar{\lambda} } A_{\bar{\rho} \bar{\alpha} } + 2i \delta_{\gamma \bar{\lambda}} A_{\bar{\alpha} \bar{\beta}, 0} \label{c-tor3} \\
A_{\bar{\alpha} \bar{\beta} , 0 \gamma} - A_{\bar{\alpha} \bar{\beta}, \gamma 0} = & A_{\gamma \rho , \bar{\alpha} } A_{\bar{\rho} \bar{\beta}} + A_{\gamma \rho , \bar{\beta}} A_{\bar{\rho} \bar{\alpha}} + A_{\gamma \rho} A_{\bar{\alpha} \bar{\beta} , \bar{\rho}} \label{c-tor4} \\
A_{\bar{\alpha} \bar{\beta}, 0 \bar{\gamma}} - A_{ \bar{\alpha} \bar{\beta}, \bar{\gamma} 0 } = & - A_{\bar{\alpha} \bar{\gamma} , \rho } A_{\bar{\rho} \bar{\beta}} - A_{\bar{\beta} \bar{\gamma}, \rho} A_{\bar{\rho} \bar{\alpha}} + A_{\bar{\alpha} \bar{\beta}, \rho } A_{\bar{\gamma} \bar{\rho}} \label{c-tor5}
\end{align}
\end{lemma}
The equation \eqref{c-tor1} has been given by Lee in \cite{lee1988psuedo}, but the coefficients are different due to the different exterior algebra conventions. Now let us deduce the CR Bochner formulae for the pseudo-Hermitian curvature $\tilde{R}$ and for $\nabla_\xi A$.
\begin{lemma} \label{b-lem-bochner-cur}
The CR Bochner formula of $\tilde{R}$ is
\begin{align*} \numberthis \label{c-bochner-cur}
\Delta_b R_{ \bar{\alpha} \beta \lambda \bar{\mu} } =& 2R_{ \beta \bar{\alpha} , \lambda \bar{\mu} } - R_{\rho \bar{\alpha}} R_{\bar{\rho} \beta \lambda \bar{\mu}} + R_{\beta \bar{\rho}} R_{\bar{\alpha} \rho \lambda \bar{\mu}} + R_{\lambda \bar{\rho}} R_{\bar{\alpha} \beta \rho \bar{\mu}} + R_{ \rho \bar{\mu} } R_{\bar{\alpha} \beta \lambda \bar{\rho}} \\
& + 2 R_{\rho \bar{\alpha} \bar{\mu} \gamma} R_{ \bar{\rho} \beta \lambda \bar{\gamma} } + 2 R_{ \bar{\rho} \beta \bar{\mu} \gamma } R_{ \bar{\alpha} \rho \lambda \bar{\gamma} } + 2 R_{ \bar{\rho} \lambda \bar{\mu} \gamma } R_{ \bar{\alpha} \beta \rho \bar{\gamma} } \\
&- (2n +4) i A_{\lambda \beta, \bar{\alpha} \bar{\mu}} + 2n i A_{\bar{\alpha} \bar{\mu} , \beta \lambda} + 4i A_{ \bar{\alpha} \bar{\mu} , \lambda \beta} \\
& - 4i A_{ \bar{\alpha} \bar{\gamma}, \gamma \lambda} \delta_{\beta \bar{\mu}} - 4i A_{\bar{\alpha} \bar{\gamma} , \gamma \beta} \delta_{\bar{\mu} \lambda} + 4i A_{\beta \gamma , \bar{\gamma} \bar{\mu}} \delta_{\bar{\alpha} \lambda} \\
& + (4n+ 8) A_{\rho \lambda} A_{\bar{\alpha} \bar{\rho}} \delta_{\beta \bar{\mu}} + 8n A_{\rho \beta} A_{\bar{\alpha} \bar{\rho}} \delta_{\bar{\mu} \lambda} + (4n-8) A_{\beta \rho} A_{\bar{\rho} \bar{\mu}} \delta_{\bar{\alpha} \lambda} \\
& - 8 A_{\rho \gamma} A_{\bar{\rho} \bar{\gamma}} (\delta_{\bar{\alpha} \gamma} \delta_{\beta \bar{\mu}} + \delta_{\bar{\alpha} \beta} \delta_{\bar{\mu} \lambda}) .
\end{align*}
where $\Delta_b$ is the sub-Laplacian operator.
In particular, if $(M, HM, J, \theta)$ is pseudo-Einstein with dimension $2n+1 \geq 5$, then
\begin{align} \label{c-subcur1}
\Delta_b \tilde{R} = \tilde{R} * \tilde{R} + \nabla_b^2 A * J + A^2,
\end{align}
and
\constantnumber{cst-1}
\begin{align} \label{c-subcur2}
\Delta_b |\tilde{R}| + C_{\ref*{cst-1}} |\tilde{R}|^2 + C_{\ref*{cst-1}} \big( | \nabla^2_b A | + |A^2| \big) \geq 0
\end{align}
where $C_{\ref*{cst-1}} = C_{\ref*{cst-1}} (n)$.
\end{lemma}
\begin{proof}
Since the process is standard, we only outline the schedule.
Due to the second Bianchi identity \eqref{b-sec-bianchi-1} and Lemma \ref{c-ricidentity}, we find
\begin{align*}
R_{ \bar{\alpha} \beta \lambda \bar{\mu}, \bar{\gamma} \gamma } = & \ R_{\beta \bar{\alpha}, \lambda \bar{\mu}} \\
&+ R_{ \rho \bar{\alpha} \bar{\mu} \gamma } R_{ \bar{\rho} \beta \lambda \bar{\gamma} } + R_{ \bar{\rho} \beta \bar{\mu} \gamma } R_{ \bar{\alpha} \rho \lambda \bar{\gamma} } + R_{ \bar{\rho} \lambda \bar{\mu} \gamma } R_{ \bar{\alpha} \beta \rho \bar{\gamma} } + R_{ \rho \bar{\mu} } R_{ \bar{\alpha} \beta \lambda \bar{\rho} } \\
& + 2i ( A_{ \bar{\alpha} \bar{\mu} , \lambda \beta} + A_{\bar{\mu} \bar{\alpha} , \beta \lambda} -A_{ \bar{\alpha} \bar{\gamma}, \lambda \gamma} \delta_{\beta \bar{\mu}} -A_{\bar{\gamma} \bar{\alpha}, \beta \gamma} \delta_{\bar{\mu} \lambda} ) + 2i A_{\beta \gamma , \bar{\gamma} \bar{\mu}} \delta_{\bar{\alpha} \lambda} - 2n i A_{\lambda \beta, \bar{\alpha} \bar{\mu}} \\
& - 2i R_{ \bar{\alpha} \beta \lambda \bar{\mu}, 0 },
\end{align*}
and
\begin{align*}
R_{\bar{\alpha} \beta \lambda \bar{\mu} , \gamma \bar{\gamma}} = R_{ \bar{\alpha} \beta \lambda \bar{\mu} , \bar{\gamma} \gamma } - R_{ \rho \bar{\alpha} } R_{ \bar{\rho} \beta \lambda \bar{\mu} } + R_{\beta \bar{\rho}} R_{\bar{\alpha} \rho \lambda \bar{\mu}} + R_{\lambda \bar{\rho}} R_{ \bar{\alpha} \beta \rho \bar{\mu} } - R_{ \rho \bar{\mu} } R_{ \bar{\alpha} \beta \lambda \bar{\rho} } + 2 n i R_ { \bar{\alpha} \beta \lambda \bar{\mu}, 0} .
\end{align*}
Hence by combining with \eqref{b-secbianchi2}, we have
\begin{align*}
\Delta_b R_{ \bar{\alpha} \beta \lambda \bar{\mu} } =& R_{ \bar{\alpha} \beta \lambda \bar{\mu}, \bar{\gamma} \gamma } + R_{\bar{\alpha} \beta \lambda \bar{\mu} , \gamma \bar{\gamma}} \\
= & 2R_{ \beta \bar{\alpha} , \lambda \bar{\mu} } - R_{\rho \bar{\alpha}} R_{\bar{\rho} \beta \lambda \bar{\mu}} + R_{\beta \bar{\rho}} R_{\bar{\alpha} \rho \lambda \bar{\mu}} + R_{\lambda \bar{\rho}} R_{\bar{\alpha} \beta \rho \bar{\mu}} + R_{ \rho \bar{\mu} } R_{\bar{\alpha} \beta \lambda \bar{\rho}} \\
& + 2 R_{\rho \bar{\alpha} \bar{\mu} \gamma} R_{ \bar{\rho} \beta \lambda \bar{\gamma} } + 2 R_{ \bar{\rho} \beta \bar{\mu} \gamma } R_{ \bar{\alpha} \rho \lambda \bar{\gamma} } + 2 R_{ \bar{\rho} \lambda \bar{\mu} \gamma } R_{ \bar{\alpha} \beta \rho \bar{\gamma} } \\
& - (2n +4) i A_{\lambda \beta, \bar{\alpha} \bar{\mu}} + 2n i A_{\bar{\alpha} \bar{\mu} , \beta \lambda} + 4i A_{ \bar{\alpha} \bar{\mu} , \lambda \beta} - 4i A_{ \bar{\alpha} \bar{\gamma}, \lambda \gamma} \delta_{\beta \bar{\mu}} - 4i A_{\bar{\gamma} \bar{\alpha}, \beta \gamma} \delta_{\bar{\mu} \lambda} \\
& + 4i A_{\beta \gamma , \bar{\gamma} \bar{\mu}} \delta_{\bar{\alpha} \lambda} - (4n -8) (A_{\bar{\alpha} \bar{\gamma}} A_{\gamma \lambda } \delta_{\beta \bar{\mu}} - A_{\beta \gamma} A_{\bar{\gamma} \bar{\mu}} \delta_{\bar{\alpha} \lambda} )
\end{align*}
What remains for \eqref{c-bochner-cur} is to turn $A_{\bar{\alpha} \bar{\gamma}, \lambda \gamma}$ and $A_{\bar{\gamma} \bar{\alpha}, \beta \gamma}$ into divergence form.
This follows from \eqref{c-tor1}.
\end{proof}
A similar argument shows the CR Bochner formula of pseudo-Hermitian torsion.
\begin{lemma} \label{b-lem-bochner-tor}
The CR Bochner formula of $A_{\alpha \beta , 0}$ is
\begin{align} \numberthis \label{c-reeb}
\Delta_b A_{\alpha \beta} = 2 A_{\alpha \gamma, \bar{\gamma} \beta} - 2 (n-2) i A_{\alpha \beta , 0} + 2 R_{\bar{\rho} \alpha \beta \bar{\gamma}} A_{\rho \gamma} + R_{\beta \bar{\rho}} A_{\rho \alpha} - R_{\alpha \bar{\rho}} A_{\rho \beta}.
\end{align}
\end{lemma}
\section{Subelliptic Estimates and CR Sobolev Embedding}
Let's first review Folland-Stein space in \cite{folland1974estimates}. For simplicity, we require that $(M, HM, J, \theta)$ is a closed pseudo-Hermitian manifold with dimension $2n+1$.
For any $f \in C^\infty (M)$, the Folland-Stein norm is given by
\begin{align}
||f||_{S^p_k} = \sum_{l=0}^k ||\nabla^l_b f ||_{p, M} \label{b-sobolevnorm}
\end{align}
where $\nabla^l_b f$ is the restriction of $\nabla^l f$ on the horizontal distribution $HM$ and
\begin{align*}
||\nabla^l_b f||_{p, M} = \left( \int_M \big| \nabla^l_b f \big|^p_{g_\theta} \: \theta \wedge (d \theta)^n \right)^{\frac{1}{p}}.
\end{align*}
The Folland-Stein space $S^p_k (M)$ is the completion of $C^\infty (M)$ under the Folland-Stein norm.
The CR Sobolev inequality also holds (cf. \cite{folland1974estimates}): there exists a constant $C_M$ such that
\begin{align}
\left( \int_M |f|^{\frac{2n +2}{n}} \right)^{\frac{n}{n+1}} \leq C_M \left( \int_M |\nabla_b f|^2 + \int_M |f|^2 \right) . \label{b-sobolevinequality}
\end{align}
But the dependence of $C_M$ on the geometry is not clear, to the best of our knowledge.
Recently Chang, Chang, Han and Tie \cite{chang2017pseudo} have obtained another version of CR Sobolev inequality.
Let's recall the definition of Carnot-Carath\'eodory distance (called CC distance for short).
The CC distance $d_{cc} (x, y)$ of any two points $x, y \in M$ can be measured from the horizontal direction, that is
\begin{align*}
d_{cc} (x, y) = \inf \left\{ \int_0^1 \big| \dot{\gamma} \big| dt \ \bigg| \ \gamma \in C_{x,y} \right\},
\end{align*}
where $C_{x,y}$ is the set of all piecewise $C^1$ curves $\gamma: [0,1] \to M$ satisfying $\dot{\gamma} \in HM$ and $\gamma(0) = x, \gamma(1) = y$.
Such a curve is called a horizontal curve. Clearly the Riemannian distance $d_{Riem} (x, y) \leq d_{cc} (x, y) $.
Strichartz \cite{strichartz1986sub} has shown that if $M$ is complete, then there is at least one length minimizing horizontal curve realizing the CC distance. For any fixed $x \in M$, the CC ball of radius $r$ centered at $x$ is denoted by $B_r (x) = \{ y \in M \: | \: d_{cc} (x, y) < r \} $. Using Varopoulos' argument, one can obtain a Sobolev inequality from heat kernel estimates (also see Theorem 11.4 in \cite{li2012geometric} or \cite{saloff1992elliptic}). Under this framework, Chang and his collaborators \cite{chang2017pseudo} applied the estimates of the heat kernel associated to $\partial_t - \Delta_b$ from \cite{baudoin2014volume} and deduced a CR Sobolev inequality on complete pseudo-Hermitian manifolds.
One can also refer to \cite{chang2016liyau} about the generalized curvature-dimension inequality and Li-Yau gradient estimate on pseudo-Hermitian manifolds.
It is notable that the estimate (3.19) in \cite{chang2017pseudo} is enough to obtain the following CR Sobolev inequality.
\begin{lemma}[CR Sobolev Inequality, Theorem 1.2 in \cite{chang2017pseudo}] \label{b-crsobolevlem}
Let $(M, HM, J, \theta)$ be a complete pseudo-Hermitian $(2n+1)$-manifold with
\begin{enumerate}[(i)]
\item pseudo-Hermitian Ricci operator uniformly bounded from below, that is $R_* \geq \kappa_1 G_\theta$,
\item pseudo-Hermitian torsion and its divergence uniformly bounded, $|A|, |\mbox{div} A| \leq \kappa_2$,
\end{enumerate}
where $\kappa_1$ and $\kappa_2$ are constants. Then there exist $Q=Q(n, \kappa_1, \kappa_2)$ and $C= C(n, \kappa_1, \kappa_2)$ such that for any $x \in M$ and $\partial B_r(x) \neq \emptyset$,
\begin{align}
\left( \int_{B_r (x)} |\phi|^{\frac{2Q}{Q-2}} \right)^{\frac{Q-2}{Q}} \leq C r^2 e^{C r^2} V (B_r (x))^{-\frac{2}{Q}} \int_{B_r (x)} |\nabla_b \phi|^2, \quad \forall \phi \in C^\infty_0 (B_r (x)) . \label{b-crsobolev}
\end{align}
Here $V (B_r (x)) $ is the volume of $B_r (x)$.
\end{lemma}
\begin{remark} \label{c-rmk-1}
When $(M, HM, J, \theta)$ is Sasakian, $Q = 3(n+3)$, which is bigger than $2n +2$ in \eqref{b-sobolevinequality}. So the CR Sobolev inequality \eqref{b-crsobolev} is not sharp.
\end{remark}
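For instance, when $n = 1$, i.e. $\dim M = 5$, one has $Q = 3(n+3) = 12$, so the exponent in \eqref{b-crsobolev} is $\frac{2Q}{Q-2} = \frac{24}{10}$, which is smaller than the exponent $\frac{2n+2}{n} = 4$ in \eqref{b-sobolevinequality}; indeed, since $t \mapsto \frac{2t}{t-2}$ is decreasing for $t > 2$, a larger $Q$ always gives a weaker exponent.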
For $k \in \mathbb{N}$, it is also natural to denote by $\Gamma_k (M)$ the space of all functions on $M$ whose horizontal covariant derivatives of order $\leq k$ are continuous. Its canonical norm is
\begin{align*}
||f||_{\Gamma_k (M)} = \max_{l \leq k} \sup_M |\nabla_b^l f|.
\end{align*}
This idea still works for non-integer $k$ and defines an analogue of the H\"older space, with the distance replaced by the CC distance. One can refer to \cite{dragomir2006cr,folland1974estimates} for details. As with the CR Sobolev inequality, the CR Sobolev embedding theorem is due to Folland and Stein \cite{folland1974estimates}, but the dependence of the embedding constant is not explicit.
This problem can be overcome by the method of Theorem 7.10 on page 155 of \cite{gilbarg1977elliptic}. Let's restate the CR Sobolev embedding lemma.
\begin{lemma} \label{c-lem-sobolev-1}
Suppose that CR Sobolev inequality holds on some CC ball $B_R = B_R (x)$ in a pseudo-Hermitian manifold $(M^{2n+1}, HM, J, \theta)$, that is
\begin{align} \label{c-sobolev-equ-1}
\left( \int_{B_R} |\phi|^{\frac{2 Q}{Q-2}} \right)^{\frac{Q-2}{Q}} \leq C_S \int_{B_R} |\nabla_b \phi|^2, \quad \forall \phi \in C^\infty_0 (B_R).
\end{align}
Then for any $q > Q$, every $S^q_1 (B_R)$-function $u$ with compact support is continuous and
\constantnumber{cst-embed}
\begin{align}
\sup_{B_R} |u| \leq C_{\ref*{cst-embed}} C_S^{\frac{1}{2}} ||\nabla_b u||_{q, B_R} \label{c-sobolev-embed}
\end{align}
where $C_{\ref*{cst-embed}} = C_{\ref*{cst-embed}} (q, Q, V (B_R))$.
\end{lemma}
\begin{proof}
It suffices to prove \eqref{c-sobolev-embed} for $u \in C^\infty_0 (B_R)$.
Without loss of generality, we assume $u \neq 0$.
For any $f \in C^\infty_0 (B_R)$ and $\gamma \geq 1$, the CR Sobolev inequality \eqref{c-sobolev-equ-1} and the H\"older inequality show that
\begin{align}
||f^\gamma||_{Q', B_R} \leq C_S^{\frac{1}{2}} ||\nabla_b f^\gamma||_{2, B_R} = C_S^{\frac{1}{2}} ||\gamma f^{\gamma -1} \nabla_b f||_{2, B_R} \leq C_S^{\frac{1}{2}} \gamma ||f^{\gamma -1}||_{q', B_R} ||\nabla_b f||_{q, B_R} \label{equ-sob1}
\end{align}
where $Q' = \frac{2Q}{Q-2}$ and
$\frac{1}{q'} + \frac{1}{q} = \frac{1}{2}. $
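Since $q > Q$, note that
\begin{align*}
\frac{1}{q'} = \frac{1}{2} - \frac{1}{q} > \frac{1}{2} - \frac{1}{Q} = \frac{1}{Q'},
\end{align*}
that is $q' < Q'$; this guarantees that the ratio $\delta = \frac{Q'}{q'}$ used below is strictly greater than $1$.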
Let's denote
\begin{align*}
\tilde{u} = \frac{u}{||\nabla_b u||_{q, B_R}}
\end{align*}
which enjoys the estimate:
\begin{align}
||\tilde{u}||_{Q', B_R} \leq C_S^{\frac{1}{2}} V_R^{\frac{1}{q'}}. \label{equ-sob2}
\end{align}
Here $V_R = V (B_R)$.
By \eqref{equ-sob1} and H\"older inequality again, we find
\begin{align}
||\tilde{u}||_{\gamma Q', B_R} &\leq C_S^{\frac{1}{2 \gamma}} \gamma^{\frac{1}{\gamma}} ||\tilde{u}||_{(\gamma -1) q', B_R}^{1- \frac{1}{\gamma}} \nonumber \\
& \leq C_S^{\frac{1}{2 \gamma}} \gamma^{\frac{1}{\gamma}} V_R^{\frac{1}{q' \gamma^2}} ||\tilde{u}||_{\gamma q', B_R}^{1- \frac{1}{\gamma}}
\end{align}
Now set $\delta = \frac{Q'}{q'} >1$ and $\gamma = \delta^\nu$. Then using \eqref{equ-sob2}, we have for any $\nu \in \mathbb{N}$
\begin{align*}
||\tilde{u}||_{\delta^\nu Q', B_R} \leq C_S^{\frac{1}{2} \delta^{-\nu}} V_R^{\frac{1}{q'} \delta^{- 2 \nu} } \delta^{\nu \delta^{-\nu}} ||\tilde{u}||_{\delta^{\nu-1} Q', B_R }^{1- \delta^{-\nu}}
\leq C_S^{\frac{1}{2}} (V_R +1)^{\frac{1}{q'}} \delta^a
\end{align*}
where
\begin{align*}
a = \nu \delta^{-\nu} + \sum_{k=0}^{\nu-1} k \delta^{-k} \prod_{i={k+1}}^\nu (1- \delta^{-i}) \leq \sum_{k =1}^\infty k \delta^{-k} = \frac{\delta}{(\delta - 1)^2} < + \infty.
\end{align*}
The proof is finished by taking $\nu \to \infty$.
\end{proof}
Let's denote the CC diameter and the volume of $M$ by $\mbox{diam}_{cc}$ and $\mbox{Vol} (M)$ respectively.
\begin{lemma} \label{c-lem-sobolev-2}
Suppose that $(M^{2n+1}, HM, J, \theta)$ is a closed connected pseudo-Hermitian manifold with
\begin{enumerate}[(i)]
\item $R_* \geq \kappa_1 G_\theta$ and $|A| , |div A| \leq \kappa_2$,
\item $diam_{cc} \leq d$ and $\mbox{Vol} (M) \geq V_1$.
\end{enumerate}
Then for any $q > Q$, any $S^q_1 (M)$ function $u$ is continuous and
\constantnumber{cst-embed-2}
\begin{align}
\sup_M |u| \leq C_{\ref*{cst-embed-2}} ||u||_{S^q_1}
\end{align}
where $C_{\ref*{cst-embed-2}} = C_{\ref*{cst-embed-2}} (n, \kappa_1, \kappa_2, d, V_1, q)$.
\end{lemma}
\begin{proof}
For any $x \in M$, we can choose a CC ball $B_R (x)$ such that $V(B_R(x)) = \frac{V_1}{2}$, which implies that $\partial B_R(x) \neq \emptyset$. Then by Lemma \ref{b-crsobolevlem}, the CR Sobolev inequality holds on $B_R(x)$ with
\begin{align*}
C_S = C R^2 e^{C R^2} V (B_R (x))^{-\frac{2}{Q}} \leq C R^2 e^{C d^2} \left( \frac{V_1}{2} \right)^{-\frac{2}{Q}}
\end{align*}
Let's choose a cutoff function $\eta$ such that
\constantnumber{cutoff}
\begin{align*}
\eta \big|_{B_{\frac{R}{2}} (x)} \equiv 1 , \quad supp \: \eta \subset B_{R} (x), \quad | \nabla_b \eta | \leq \frac{C_{\ref*{cutoff}}}{R},
\end{align*}
where $C_{\ref*{cutoff}}$ is a universal constant.
Then $\eta u$ has compact support in $B_R (x)$ and
\begin{align*}
||\nabla_b (\eta u) ||_{q, B_R (x)} \leq ||\nabla_b u||_{q, M} + \frac{C_{\ref*{cutoff}}}{R} ||u||_{q, M} \leq (1 + \frac{C_{\ref*{cutoff}}}{R}) ||u||_{S^q_1 (M)}.
\end{align*}
Hence Lemma \ref{c-lem-sobolev-1} shows that $u$ is continuous at $x$ and
\begin{align*}
|u(x)|
\leq C_{\ref*{cst-embed}} C_S^{\frac{1}{2}} ||\nabla_b (\eta u)||_{q, B_R(x)}
\end{align*}
which yields the conclusion.
\end{proof}
Next we use Moser iteration to estimate the $L^\infty$ norm of solutions of
\begin{align*}
\Delta_b f + \phi f + \psi \geq 0.
\end{align*}
\begin{lemma} \label{c-moser-1}
Suppose that CR Sobolev inequality holds on some CC ball $B_R$ in a pseudo-Hermitian manifold $(M^{2n+1}, HM, J, \theta)$, that is
\begin{align}
\left( \int_{B_R} |\phi|^{\frac{2Q}{Q-2}} \right)^{\frac{Q-2}{Q}} \leq C_S \int_{B_R} |\nabla_b \phi|^2, \quad \forall \phi \in C^\infty_0 (B_R) . \label{c-crsobolev}
\end{align}
Assume that $\phi, \psi \in L^\frac{q}{2} (B_R) $ for some $q > Q$. Then there exists a constant
\constantnumber{cst-4}
$C_{\ref*{cst-4}} = C_{\ref*{cst-4}} (n, q, Q)$
such that
if $0 \leq f \in Lip (B_R) $ is a weak solution of
\begin{align}
\Delta_b f + \phi f + \psi \geq 0, \label{c-subinequality}
\end{align}
then we have
\begin{align}
\sup_{B_{\frac{R}{2}}} |f| \leq C_{\ref*{cst-4}} \left[ \left( ||\phi||_{\frac{q}{2}, B_R} +1 \right)^{\frac{q}{q-Q}} C_S^{\frac{Q}{q-Q}} + 1 \right] C_S R^{-2} (R^2 + 1) \left[ ||f||_{\frac{Q}{2} , B_R} + || \psi ||_{\frac{q}{2} , B_R} V(B_R) \right] .
\end{align}
\end{lemma}
\begin{proof}
Without loss of generality, we assume that $\psi \neq 0$.
Let $\eta$ be any cutoff function on $B_R$ and
$$u = f +k \quad \mbox{where} \quad k = || \psi||_{\frac{q}{2}, B_R} . $$
Testing \eqref{c-subinequality} with $\eta^2 u^\alpha$ for $\alpha \geq 1$, we get
\begin{align*}
\alpha \int \eta^2 u^{\alpha-1} | \nabla_b u |^2 + \int 2 \eta u^\alpha \langle \nabla_b \eta, \nabla_b u \rangle \leq \int \eta^2 u^\alpha |\psi| + \int \eta^2 u^{\alpha+1} |\phi|,
\end{align*}
which implies that
\begin{align*}
\int | \nabla_b ( \eta u^{\frac{\alpha+1}{2}} ) |^2 \leq \int u^{\alpha +1} |\nabla_b \eta|^2 + \alpha \int \eta^2 u^{\alpha + 1} (|\phi| + \frac{|\psi|}{k})
\end{align*}
By the assumption of CR Sobolev inequality \eqref{c-crsobolev}, we have
\begin{align} \label{c-1}
\left( \int | \eta u^{\frac{\alpha+1}{2} } |^{\frac{2Q}{Q-2}} \right)^{\frac{Q-2}{Q}} \leq C_S \alpha \int \eta^2 u^{\alpha + 1} (|\phi| + \frac{|\psi|}{k}) + C_S \int u^{\alpha +1} | \nabla_b \eta |^2 .
\end{align}
Taking account of H\"older inequality and interpolation inequality, we find
\begin{align*}
\int \eta^2 u^{\alpha + 1} (|\phi| + \frac{|\psi|}{k})
& \leq (|| \phi ||_{\frac{q}{2} , B_R} + 1) || \eta u^{\frac{\alpha+1}{2} } ||_{\frac{2q}{q-2}}^2 \\
& \leq 2 (|| \phi ||_{\frac{q}{2} , B_R} + 1) \left( \epsilon^2 || \eta u^{\frac{\alpha+1}{2}} ||^2_{\frac{2Q}{Q-2}} + \epsilon^{-2 \mu} || \eta u^{\frac{\alpha+1}{2}} ||^2_{2} \right)
\end{align*}
where $\mu= \frac{Q}{q-Q}$.
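The interpolation step above is the standard interpolation between $L^2$ and $L^{\frac{2Q}{Q-2}}$: writing $\frac{q-2}{2q} = \frac{\lambda (Q-2)}{2Q} + \frac{1-\lambda}{2}$ gives $\lambda = \frac{Q}{q}$, and Young's inequality $a^{\lambda} b^{1-\lambda} \leq \epsilon a + \epsilon^{-\frac{\lambda}{1-\lambda}} b$ produces exactly the exponent $\frac{\lambda}{1-\lambda} = \frac{Q}{q-Q} = \mu$; squaring and using $(a+b)^2 \leq 2(a^2+b^2)$ yields the factor $2$ and the powers $\epsilon^2$ and $\epsilon^{-2\mu}$.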
Choosing $\epsilon^2 = 4^{-1} C_S^{-1} \alpha^{-1} ( || \phi ||_{\frac{q}{2} , B_R} + 1 )^{-1}$,
the inequality \eqref{c-1} becomes
\begin{align} \label{c-2}
\left( \int | \eta u^{\frac{\alpha+1}{2} } |^{\frac{2Q}{Q-2}} \right)^{\frac{Q-2}{Q}} & \leq 2^{\mu+2} (||\phi||_{\frac{q}{2}, B_R} +1)^{\mu + 1} C_S^{\mu + 1} \alpha^{\mu + 1} \int \eta^2 u^{\alpha + 1} + 2 C_S \int u^{\alpha + 1} |\nabla_b \eta|^2 \\
& \leq 2 C_S (C_\phi C_S^\mu \alpha^{\mu+1} + 1) \int u^{\alpha + 1} \left( \eta^2 + |\nabla_b \eta|^2 \right) \nonumber
\end{align}
where $C_\phi = 2^{2 \mu + 1} (||\phi||_{\frac{q}{2}, B_R} +1)^{\mu + 1} $.
Let $\frac{R}{2} \leq r_2 < r_1 \leq R$ and choose the cutoff function $\eta$ satisfying
\begin{align*}
\eta \big|_{B_{r_2}} \equiv 1 , \quad supp \: \eta \subset B_{r_1}, \quad | \nabla_b \eta | \leq \frac{C_{\ref*{cutoff}}}{r_1 - r_2},
\end{align*}
where $C_{\ref*{cutoff}}$ is a universal constant.
Applying this in \eqref{c-2}, we obtain
\begin{align} \label{c-estimate-1}
\left( \int_{B_{r_1}} | \eta u^{\frac{\alpha+1}{2} } |^{\frac{2Q}{Q-2}} \right)^{\frac{Q-2}{Q}} \leq 2 C_S (C_\phi C_S^\mu + 1) (R^2 + C_{\ref*{cutoff}}^2) \frac{\alpha^{\mu+1}}{(r_1 - r_2)^2} \int_{B_{r_1}} u^{\alpha + 1}.
\end{align}
Denote
\constantnumber{cst-5}
\begin{align}
C_{\ref*{cst-5}} = 2 C_S (C_\phi C_S^\mu + 1) (R^2 + C_{\ref*{cutoff}}^2)
\end{align}
and define
\begin{align*}
T(p, r) = \left( \int_{B_r} u^p \right)^{\frac{1}{p}}.
\end{align*}
Then \eqref{c-estimate-1} can be rewritten as
\begin{align} \label{c-3}
T ( \chi p, r_2 ) \leq \left( \frac{C_{\ref*{cst-5}} p^{\mu +1} }{(r_1 - r_2)^2} \right)^\frac{1}{p} T (p , r_1), \quad \mbox{for } p \geq 2
\end{align}
where $ \chi= \frac{Q}{Q -2} $. Now let's use Moser iteration.
By taking
$$ p_0 = \frac{Q}{2} \geq 2, \ p_m = \chi^m p_0, \ R_m = \frac{R}{2} + 2^{-m-1} R $$
\eqref{c-3} leads to the following iteration
\constantnumber{cst-6}
\begin{align*}
T( \chi^{m+1} p_0, R_{m+1} ) \leq & C_{\ref*{cst-5}}^{\frac{1}{\chi^m p_0}} p_0^{\frac{\mu+1}{\chi^m p_0}} \chi^{\frac{m(\mu+1)}{\chi^m p_0}} \cdot 4^{ \frac{m+2}{\chi^m p_0} } R^{- \frac{2}{\chi^m p_0}} T (\chi^m p_0 , R_m) \numberthis \label{c-estimate-2} \\
\leq & C_{\ref*{cst-5}}^{\sum_0^m \frac{1}{\chi^k p_0}} p_0^{\sum_0^m \frac{\mu+1}{\chi^k p_0}} \chi^{\sum_0^m \frac{k(\mu+1)}{\chi^k p_0}} \cdot 4^{ \sum_0^m \frac{k+2}{\chi^k p_0} } R^{ \sum_0^m - \frac{2}{\chi^k p_0}} T ( p_0 , R_0) \\
\leq & C_{\ref*{cst-5}} C_{\ref*{cst-6}} R^{- 2} T (p_0 , R_0)
\end{align*}
where $C_{\ref*{cst-6}} = C_{\ref*{cst-6}} (q, Q)$.
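Here all the exponent sums converge as $m \to \infty$: since $\chi = \frac{Q}{Q-2}$ and $p_0 = \frac{Q}{2}$,
\begin{align*}
\sum_{k=0}^{\infty} \frac{1}{\chi^k p_0} = \frac{1}{p_0} \cdot \frac{\chi}{\chi - 1} = \frac{2}{Q} \cdot \frac{Q}{2} = 1 ,
\end{align*}
so the limiting powers of $C_{\ref*{cst-5}}$ and of $R^{-2}$ are exactly $1$, while the remaining factors involving $p_0$, $\chi$ and $4$ are bounded by a constant depending only on $q$ and $Q$, which is absorbed into $C_{\ref*{cst-6}}$.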
Letting $m \to \infty$ and using the expression of $C_{\ref*{cst-5}}$, we finish the proof.
\end{proof}
\begin{lemma} \label{c-moser}
Suppose that $(M^{2n+1}, HM, J, \theta)$ is a closed connected pseudo-Hermitian manifold with
\begin{enumerate}[(i)]
\item $R_* \geq \kappa_1 G_\theta$ and $|A| , |div A| \leq \kappa_2$,
\item $diam_{cc} \leq d$ and $\mbox{Vol} (M) \geq V_1$.
\end{enumerate}
Assume that $\phi, \psi \in L^\frac{q}{2} (M) $ for some $q > Q$.
Then there exists a constant
\constantnumber{cst-3}
$$C_{\ref*{cst-3}} = C_{\ref*{cst-3}} (n, \kappa_1, \kappa_2, d, V_1, q, ||\phi||_{\frac{q}{2}})$$ such that if $0 \leq f \in Lip (M)$ is a weak solution of
\begin{align*}
\Delta_b f + \phi f + \psi \geq 0
\end{align*}
then we have
\begin{align}
\sup_M |f| \leq C_{\ref*{cst-3}} \big( ||f||_{\frac{Q}{2} , M} + || \psi ||_{\frac{q}{2} , M} \big).
\end{align}
\end{lemma}
\begin{proof}
For any point $x \in M$, there is a CC ball $B_R (x)$ such that $V(B_R(x)) = \frac{V_1}{2}$ which implies that $\partial B_R (x) \neq \emptyset$. Moreover, Lemma \ref{b-crsobolevlem} guarantees CR Sobolev inequality on $B_R (x)$ with
\begin{align*}
C_S = C R^2 e^{C R^2} V (B_R (x))^{-\frac{2}{Q}} \leq C R^2 e^{C d^2} \left( \frac{V_1}{2} \right)^{-\frac{2}{Q}}.
\end{align*}
Then the conclusion follows from Lemma \ref{c-moser-1}.
\end{proof}
\section{Convergence of pseudo-Einstein manifolds} \label{sec-higherregularity}
In Riemannian geometry, a sequence $(M_i, g_i)$ of compact Riemannian manifolds converges in $C^{k, \alpha}$ to a compact Riemannian manifold $(M,g)$ if there is a sequence of diffeomorphisms $\phi_i : M \to M_i$ such that $\phi_i^* g_i \to g$ in $C^{k, \alpha}$. By Cheeger's Lemma (cf. Lemma 51 of Chapter 10 in \cite{petersen2006riemannian}) and Peters' method in \cite{peters1987convergence}, one can use induction to obtain the following regularity theorem for convergence.
\begin{theorem} \label{b-riemsmoothconvergence}
For constants $\Lambda, V_1, d> 0$ and nonnegative integer $k$, the space of all closed Riemannian manifolds with
\begin{enumerate}[(1)]
\item $|D^m \hat{R}| \leq \Lambda$ for $m =0, 1, 2, \dots, k, $
\item $\mbox{Vol} \geq V_1, $
\item $\mbox{diam}_{Riem} \leq d, $
\end{enumerate}
is $C^{k +1 ,\alpha}$ precompact for any $\alpha \in (0, 1)$.
\end{theorem}
We want to generalize it to pseudo-Hermitian manifolds.
\begin{definition}
A sequence of closed pseudo-Hermitian manifolds $(M_i, H M_i, J_i, \theta_i)$ is called $C^{k, \alpha}$ convergent if there are a manifold $M$, two tensors $\theta \in C^{k,\alpha} (TM , \mathbb{R}), J \in C^{k, \alpha} (TM, TM)$ and diffeomorphisms $\phi_i : M \to M_i$ such that
\begin{align}
\phi_i^* \theta_i \to \theta, \quad \phi_i^* J_i \to J \quad \mbox{ in $C^{k,\alpha}$ topology}.
\end{align}
\end{definition}
Let's first discuss the convergence of pseudo-Hermitian structures and almost complex structures, which is based on the identities
\begin{align}
D_X \theta (Y) &= d \theta(X, Y) + A (X, Y) = g(JX, Y) + A (X, Y) , \label{c-levi3} \\
D_X J (Y) & = - g (X, Y) \xi - A (X, J Y) \xi - \theta (Y) J \tau (X) + \theta (Y) X , \label{c-levi4}
\end{align}
for any $X, Y \in \Gamma (TM)$, due to \eqref{b-con} with $\nabla \theta = 0 $ and $ \nabla J =0$.
\begin{lemma} \label{c-thm-riemcur}
Given constants $\lambda, V_1, d$, $\lambda_0, \dots, \lambda_{k+1}$ and $\Lambda_0, \dots, \Lambda_k$ for $k \geq 0$, any sequence of closed pseudo-Hermitian $(2n+1)$-manifolds with
\begin{enumerate}[(1)]
\item $|D^m \hat{R}| \leq \Lambda_m$ for $m = 0, 1, 2, \dots, k$, \label{c-condition3}
\item $|D^m A| \leq \lambda_m$ for $m = 0, 1, 2, \dots, k +1$, \label{c-condition4}
\item $\mbox{diam}_{cc} \leq d$,
\item $\mbox{Vol} \geq V_1$, \label{c-condition5}
\end{enumerate}
where $D$ is the Levi-Civita connection associated with the Webster metric, is $C^{k+1, \alpha}$ sub-convergent for any $\alpha \in (0,1)$.
\end{lemma}
\begin{proof}
Due to the relation between the Riemannian distance and the CC distance, Cheeger's finiteness theorem shows that the closed pseudo-Hermitian manifolds satisfying assumptions \eqref{c-condition3}-\eqref{c-condition5} carry only finitely many differentiable structures. Hence it suffices to prove the sub-convergence of a sequence of closed pseudo-Hermitian structures $(M, H_i M, J_i, \theta_i)$ on a fixed manifold $M$ under conditions \eqref{c-condition3}-\eqref{c-condition5}.
Let $(U, x^a)$ be a coordinate chart of $M$. Then we can set
\begin{align*}
\theta_i = \phi_{i; a} d x^a, J_i = J_{i; a}^{b} d x^a \otimes \frac{\partial}{\partial x^b}, g_{\theta_i} = g_{i; ab} d x^a \otimes d x^b, A_i = A_{i; ab} d x^a \otimes d x^b.
\end{align*}
Moreover, the Christoffel symbols of the Levi-Civita connection of $g_{\theta_i}$ are denoted by $\Gamma_{i; ab}^c$. It is obvious that the components of $\xi_i$ satisfy $\xi_{i}^a = g_i^{a b} \phi_{i; b}$ where $(g_i^{a b})$ is the inverse of $(g_{i; ab})$.
By the assumptions, Theorem \ref{b-riemsmoothconvergence} guarantees the $C^{k+1, \alpha}$ convergence of the Riemannian manifolds $(M, g_{\theta_i})$ after passing to a subsequence, still denoted by itself. Hence for any $\beta \in (\alpha, 1)$, the $g_{i; ab}$ are uniformly bounded in $C^{k+1, \beta}$ and then the $\Gamma_{i; ab}^c$ are uniformly bounded in $C^{k, \beta}$. Moreover, $\phi_{i; a} $ and $ J_{i; a}^b$ are uniformly bounded due to $|\theta_{i}|_{g_{\theta_i}} = 1$ and $|J_{i}|_{g_{\theta_i}} = 2n$. The identities \eqref{c-levi3} and \eqref{c-levi4} have the following local expressions:
\begin{align*}
\frac{\partial \phi_{i; a}}{\partial x^b} &= \Gamma_{i; b a}^c \phi_{i; c} + J_{i; b}^c g_{i; c a} + A_{i ; ab} \\
\frac{\partial J_{i; a}^b}{\partial x^c} & =\Gamma_{i; ca}^d J_{i; d}^b - J_{i; a}^d \Gamma_{i; cd}^b - g_{i; ca} g_{i}^{db} \phi_d - J_{i;a}^d A_{i;cd} g_i^{eb} \phi_{i; e} - J_{i; d}^b A_{i; ce} g_i^{de} \phi_{i; a} + \phi_{i;a} \delta_c^b
\end{align*}
which makes $\phi_{i; a}$ and $J_{i; a}^b$ uniformly bounded in $C^1$ and thus in $C^{0, \beta}$. By induction, one can easily show that $\phi_{i; a}$ and $J_{i; a}^b$ are uniformly bounded in $C^{k+1, \beta}$, which is compactly embedded in $C^{k+1, \alpha}$. By choosing a finite cover of $M$ and taking a subsequence of $\theta_i, J_i$, there are $\theta \in C^{k+1, \alpha} (M, T^*M)$ and $J \in C^{k+1, \alpha} (M, T^* M \otimes TM)$ such that $\theta_i \to \theta $ and $ J_i \to J$ in the sense of $C^{k+1, \alpha}$ convergence of components.
\end{proof}
Next we discuss the convergence of closed pseudo-Hermitian manifolds under some regularity conditions on the pseudo-Hermitian curvature and the pseudo-Hermitian torsion. Let's briefly recall the notion of weights of covariant derivatives: for any tensor $\sigma$, we say that the covariant derivative $(\nabla^m \sigma) (X_1, X_2, \cdots, X_m)$ has weight $k$ if there are $k_1$ horizontal vector fields and $k_2$ Reeb vector fields among $X_1, X_2, \cdots, X_m$ such that $ k_1 + 2 k_2 = k $.
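For instance, if $X$ and $Y$ are horizontal vector fields and $\xi$ is the Reeb vector field, then $(\nabla^3 \sigma)(X, \xi, Y)$ has weight $1 + 2 + 1 = 4$; a Reeb derivative counts twice, reflecting the fact that one $\xi$-derivative is controlled by two horizontal derivatives in the estimates below.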
The contraction of two tensors $\sigma_1$ and $\sigma_2$ is denoted by $\sigma_1 * \sigma_2$.
The contraction of one tensor $\sigma$ is denoted by $\mathcal{L} (\sigma)$. For simplicity, we will always omit the tensors $J$, $\theta$, $g_\theta$ and their duals in contractions with other tensors unless otherwise specified, because they are parallel with respect to the Tanaka-Webster connection $\nabla$.
Let's denote
\begin{align*}
||\tilde{R}||_{C^k_\nabla} & = \sum_{m = 0}^k ||\nabla^m \tilde{R}||_{C^0} \\
||A||_{C^k_\nabla} & = \sum_{m = 0}^k ||\nabla^m A||_{C^0}
\end{align*}
where $\nabla$ is the Tanaka-Webster connection.
\begin{lemma} \label{d-lem-estimate}
Suppose that $(M, HM, J, \theta)$ is a pseudo-Hermitian manifold. Then for any integer $k \geq 0$, we have
\begin{enumerate}[(1)]
\item $D^k A$ is bounded by $||A||_{C^k_\nabla}$; \label{d-lem-conclusion-1}
\item $D^k \tilde{R}$ is bounded by $||\tilde{R}||_{C^k_\nabla}$ and $||A||_{C^{k-1}_\nabla}$; \label{d-lem-conclusion-2}
\item $D^k \hat{R}$ is bounded by $||\tilde{R}||_{C^k_\nabla}$ and $||A||_{C^{k+1}_\nabla}$; \label{d-lem-conclusion-3}
\item $D^k A$ is bounded by $||\tilde{R}||_{\Gamma_{2k-2}}$ and $||A||_{\Gamma_{2k}}$; \label{d-lem-conclusion-4}
\item $D^k \hat{R}$ is bounded by $||\tilde{R}||_{\Gamma_{2k}}$ and $||A||_{\Gamma_{2k+2}}$. \label{d-lem-conclusion-5}
\end{enumerate}
\end{lemma}
\begin{proof}
It is obvious for $k =0$. Now assume that $k \geq 1$.
By \eqref{b-con}, for any $\sigma \in \Gamma (\otimes^p T^* M)$ and $X, X_1, \dots X_p \in \Gamma (TM) $, we have
\begin{align*}
D \sigma =& \nabla \sigma + \sigma * d \theta * \xi + \sigma * A * \xi + \sigma * \tau * \theta + \sigma * J * \theta \numberthis \label{c-derivativetensor} \\
= & \nabla \sigma + \sigma * g * g^{-1} * J * \theta + \sigma * g^{-1} * A * \theta + \sigma * J * \theta
\end{align*}
since $d \theta (\cdot, \cdot) = g (J \cdot, \cdot)$, $g(\xi , \cdot) = \theta (\cdot)$ and $A (\cdot, \cdot ) = g (\cdot, \tau ( \cdot ) )$ where $g= g_\theta$ and $g^{-1}$ is its inverse.
We claim that
\begin{align}
D^k \theta & = P_{k-1} (\theta , J ; A) \label{c-devtheta} \\
D^k J & = Q_{k-1} (\theta, J ; A) \label{c-devj} \\
D^k A & = \mathcal{L} (\nabla^k A) + S_{k-1} (\theta, J ; A) \label{c-devtorsion}
\end{align}
where $P_{k-1} (\theta, J; A) , Q_{k-1} (\theta, J; A) , S_{k-1} (\theta, J; A) $ represent the linear combination of $g, \theta, J$ and $\nabla^l A$ with $l \leq k-1$.
We use induction to prove this claim. For the case $k=1$, the identities \eqref{c-levi3} and \eqref{c-levi4} lead to \eqref{c-devtheta} and \eqref{c-devj}.
The identity \eqref{c-devtorsion} follows from \eqref{c-derivativetensor} with $\sigma = A$.
Now assume that the claim holds for all orders $< k$ and consider the case $k$.
Due to \eqref{c-levi3} and \eqref{c-levi4}, we have
\begin{align*}
D^k \theta = & D^{k-1} (g * J + A) = \sum_{i \leq k-1} g * D^i J + D^{k-1} A, \\
D^k J = & D^{k-1} (g * g^{-1} * \theta + g^{-1} * A * J * \theta + \theta * id ) \\
= & \sum_{i \leq k-1} (g * g^{-1} * D^i \theta + D^i \theta * id) + \sum_{i+ j + l = k-1} g^{-1} * D^i A * D^j J * D^l \theta ,
\end{align*}
which, combined with the inductive assumption, yield \eqref{c-devtheta} and \eqref{c-devj}. One can similarly get \eqref{c-devtorsion} by \eqref{c-derivativetensor}.
Hence $D^k \theta, D^k J$ and $D^k A$ are bounded by $||A||_{C^k_{\nabla}}$. This proves conclusion \eqref{d-lem-conclusion-1}.
Similarly, one can easily get
\begin{align}
D^k \tilde{R} = \mathcal{L} (\nabla^k \tilde{R}) + T_{k-1} (\theta, J; A, \tilde{R})
\end{align}
where $T_{k-1} (\theta, J; A , \tilde{R})$ is the linear combination of $g, \theta, J$ and $\nabla^l A, \nabla^l \tilde{R}$ with $l \leq k-1$.
Hence conclusion \eqref{d-lem-conclusion-2} is proved.
The identity \eqref{b-pseudoriemcur} shows that $\hat{R}$ involves $g, \theta, \xi, J, A, \nabla A $ and $ \tilde{R}$, which leads to \eqref{d-lem-conclusion-3}.
Due to the condition \eqref{b-twtorsion} in Proposition \ref{b-tanakawebster}, the identity \eqref{c-ricidentity} shows that
\begin{align*}
& ( \nabla^2 \sigma ) ( X_1 , \cdots , X_p ; \eta_\alpha, \eta_{\bar{\alpha}} ) - ( \nabla^2 \sigma ) ( X_1 , \cdots , X_p ; \eta_{\bar{\alpha}}, \eta_\alpha ) \\
&= \sigma ( X_1 , \cdots , R(\eta_\alpha, \eta_{\bar{\alpha}}) X_i, \cdots, X_p ) + 2 i \big( \nabla_{\xi} \sigma \big) ( X_1 , \cdots , X_p ),
\end{align*}
which yields that
\begin{align*}
\nabla_\xi \sigma = \mathcal{L} (\nabla_b^2 \sigma) + \sigma * \tilde{R}.
\end{align*}
Taking $\sigma = \nabla_b^j \tilde{R}$ and using \eqref{c-ricidentity} again, we get
\begin{align}
\nabla_b^i \nabla_\xi \nabla_b^j \tilde{R} =& \nabla_b^i ( \mathcal{L} (\nabla_b^{j+2} \tilde{R}) + \nabla_b^j \tilde{R} * \tilde{R}) \label{d-weight-cur} \\
= & \mathcal{L} (\nabla_b^{i+j +2} \tilde{R}) + \sum_{k = 0}^i \nabla_b^{j+k} \tilde{R} * \nabla_b^{i - k} \tilde{R}, \nonumber
\end{align}
which yields that $||\tilde{R}||_{C^k_\nabla}$ can be estimated by $||\tilde{R}||_{\Gamma_{2k}}$.
A similar argument for $\nabla_b^i \nabla_\xi \nabla_b^j A$ shows that $||A||_{C^k_\nabla}$ is bounded by $||\tilde{R}||_{\Gamma_{2k-2}}$ and $||A||_{\Gamma_{2k}}$.
Hence the conclusions \eqref{d-lem-conclusion-4} and \eqref{d-lem-conclusion-5} follow from the previous conclusions \eqref{d-lem-conclusion-1} and \eqref{d-lem-conclusion-3}.
\end{proof}
Using Lemma \ref{c-thm-riemcur} and Lemma \ref{d-lem-estimate}, we have the following convergence theorem for closed pseudo-Hermitian manifolds.
\begin{theorem} \label{c-smgeneralthm}
Given constants $d, V_1$, $\lambda$ and $\Lambda$, any sequence of closed connected pseudo-Hermitian manifolds with the same dimension and
\begin{align*}
||\tilde{R}||_{\Gamma_{2k}} \leq \Lambda, \quad ||A||_{\Gamma_{2k+2}} \leq \lambda, \quad \mbox{diam}_{cc} \leq d, \quad \mbox{Vol} \geq V_1
\end{align*}
is $C^{k +1, \alpha}$ sub-convergent for any $\alpha \in (0,1)$.
\end{theorem}
\begin{corollary}
The class of closed connected pseudo-Hermitian manifolds with the same dimension and
\begin{align*}
||\tilde{R}||_{\Gamma_k} \leq \Lambda_k, \quad ||A||_{\Gamma_k} \leq \lambda_k, \quad \mbox{diam}_{cc} \leq d, \quad \mbox{Vol} \geq V_1
\end{align*}
for all integer $k \geq 0$ is $C^\infty$ compact.
\end{corollary}
From an analytical viewpoint in Riemannian geometry, Bochner formulae for geometric tensors often provide their higher-order derivative estimates, for instance for harmonic maps and Einstein metrics.
This idea is also effective for pseudo-Einstein structures, which require fewer derivatives of the pseudo-Hermitian curvature than Theorem \ref{c-smgeneralthm}, as follows:
\begin{theorem} \label{c-smcptpseudothm}
Given constants $\kappa_1, \kappa_2, d, V_1, \lambda $ and $ \Lambda$, any sequence of closed connected pseudo-Einstein manifolds with dimension $2n+1 \geq 5$ and
\begin{align}
|A| \leq \kappa_1, |div A| \leq \kappa_2, || A||_{S^{\frac{q}{2}}_{2k+4}} \leq \lambda, ||\tilde{R}||_{\frac{q}{2}} \leq \Lambda, \mbox{diam}_{cc} \leq d, \mbox{Vol} \geq V_1 \label{c-condthm-1}
\end{align}
for some $q > Q$ where $Q$ is given in \eqref{b-crsobolev},
is $C^{k + 1, \alpha}$ sub-convergent for any $\alpha \in (0,1)$.
\end{theorem}
\begin{proof}
By Theorem \ref{c-smgeneralthm}, it suffices to estimate $||\tilde{R}||_{\Gamma_{2k} }$ and $||A||_{\Gamma_{2k+2} }$ of a closed connected pseudo-Hermitian manifold $(M, HM, J, \theta)$ with \eqref{c-condthm-1}.
According to Lemma \ref{b-crsobolevlem}, the assumptions ensure that the CR Sobolev inequality holds uniformly. Moreover, for $m \leq 2k+2$, the right-hand side of the following equation
\begin{align*}
\Delta_b \nabla_b^m A = \mbox{trace}_{G_\theta} \nabla_b^2 \nabla_b^m A
\end{align*}
is uniformly bounded and thus Lemma \ref{c-moser} guarantees that
\begin{align*}
||A||_{\Gamma_{2k+2}} \leq \lambda' = \lambda' (\kappa_1, \kappa_2, q, d, V_1, \lambda).
\end{align*}
Next, we use induction to prove the uniform bound of $||\tilde{R}||_{\Gamma_{2k} (M)}$.
The case $||\tilde{R}||_{\Gamma_0}$ is easily obtained by the subelliptic inequality \eqref{c-subcur2} and Lemma \ref{c-moser}. Assume that $||\tilde{R}||_{\Gamma_m}$ is uniformly bounded for $m \leq 2k -1 $. For the case $m+1$, we first observe that
\begin{align}
\Delta_b \nabla_b^m \tilde{R} =& \mathcal{L} (\nabla_b^{m+2} A) + \sum_{i + j = m} \left( \nabla_b^i A * \nabla_b^j A + \nabla_b^i \tilde{R} * \nabla_b^j A + \nabla_b^i \tilde{R} * \nabla_b^j \tilde{R} \right) \label{c-sublphgcur} \\
= & \nabla_b^m \tilde{R} * \tilde{R} + \nabla_b^m \tilde{R} * A + B_m \nonumber
\end{align}
due to the identity \eqref{c-ricidentity} and the pseudo-Einstein condition, where $B_m$ contains horizontal derivatives of $A$ of order $\leq m+2$ and horizontal derivatives of $\tilde{R}$ of order $\leq m-1$, and thus is uniformly bounded.
For any $x \in M$, choose a CC ball $B_R (x)$ with volume $\frac{V_1}{2}$.
On one hand, multiplying \eqref{c-sublphgcur} by $\nabla_b^m \tilde{R}$ and integrating over $B_R (x)$ gives
\begin{align*}
\int_{B_R (x)} |\nabla_b^{m+1} \tilde{R}|^2 \leq \Lambda''
\end{align*}
by the induction assumption, where $\Lambda''$ is a constant.
On the other hand, the $(m+1)$-th version of \eqref{c-sublphgcur} leads to a subelliptic inequality for $\nabla_b^{m+1} \tilde{R}$. Hence an argument similar to Lemma \ref{c-moser} gives the $L^\infty$ estimate of $\nabla_b^{m+1} \tilde{R}$.
Then the proof is finished by Theorem \ref{c-smgeneralthm}.
\end{proof}
\begin{corollary} \label{d-cor-einstein}
Given constants $d, V_1$ and $\Lambda$, the class of closed connected Sasakian pseudo-Einstein manifolds with dimension $2n+1 \geq 5$ and
\begin{align}
||\tilde{R}||_{\frac{q}{2}} \leq \Lambda, \quad \mbox{diam}_{cc} \leq d, \quad \mbox{Vol} \geq V_1
\end{align}
for some $q > Q$ where $Q$ is given in \eqref{b-crsobolev}, is $C^\infty$ compact.
\end{corollary}
One can also replace the $S^{\frac{q}{2}}_{2k+4}$ norm condition of pseudo-Hermitian torsion $A$ in Theorem \ref{c-smcptpseudothm} by the $S^q_{2k+3}$ norm due to Lemma \ref{c-lem-sobolev-2}. This can be weakened if the dimension is $5$ and the pseudo-Hermitian scalar curvature is constant.
\begin{theorem} \label{c-pseudocptness}
Given constants $\kappa_1, d, V_1, \lambda $ and $ \Lambda$, any sequence of closed connected pseudo-Einstein manifolds with dimension 5, constant pseudo-Hermitian scalar curvature $\rho$ and
\begin{align}
|A| \leq \kappa_1, ||A||_{S^q_{2k+2}} \leq \lambda, ||\tilde{R}||_{\frac{q}{2}} \leq \Lambda, \mbox{diam}_{cc} \leq d, \mbox{Vol} \geq V_1, \label{c-condthm-2}
\end{align}
for some $q > Q$ where $Q$ is given in \eqref{b-crsobolev},
is $C^{ k+1, \alpha}$ sub-convergent for any $\alpha \in (0,1)$.
\end{theorem}
\begin{proof}
By Lemma \ref{c-thm-riemcur} and Lemma \ref{d-lem-estimate}, it suffices to estimate $||\tilde{R}||_{C^k_\nabla}$ and $||A||_{C^{k+1}_\nabla}$. Suppose that $(M^5, HM, J, \theta)$ is a closed pseudo-Hermitian manifold satisfying the conditions \eqref{c-condthm-2} in this theorem.
By the constancy of the pseudo-Hermitian scalar curvature and the pseudo-Einstein condition, we know that $div A = 0$ due to \eqref{b-tordiv}. Hence Lemma \ref{c-lem-sobolev-2} applies and gives $||A||_{\Gamma_{2k+1}} \leq \lambda'$ where $\lambda' = \lambda' (\kappa_1, d, V_1, q, \lambda)$.
We can use induction to prove $||\tilde{R}||_{\Gamma_{2k}} \leq \Lambda'$ by an argument similar to Theorem \ref{c-smcptpseudothm}, and thus obtain the uniform bound of all derivatives of $\tilde{R}$ of weight $\leq 2k$ by \eqref{d-weight-cur}, which gives the estimate of $||\tilde{R}||_{C^k_\nabla}$.
It is notable that, due to \eqref{c-ricidentity},
\begin{align}
\nabla_b^i \nabla_\xi \nabla_b^j A = \nabla_b^i \left( \nabla_b^{j+2} A + \nabla_b^j A * \tilde{R} \right) = \nabla_b^{i+j+ 2} A + \sum_{l=0}^i \nabla_b^{j+l} A * \nabla_b^{i-l} \tilde{R}, \label{d-commutate-tor}
\end{align}
and then all derivatives of $A$ of weight $\leq 2k+1$ are uniformly bounded. It remains to estimate $\nabla_\xi^{k+1} A$.
One can easily use induction and \eqref{c-reeb} to show that the sub-Laplacian of $\nabla_\xi^{k+1} A$ satisfies
\begin{align*}
\Delta_b \nabla_\xi^{k+1} A = \tilde{R} * \nabla_b^{2 k+2} A + A * \nabla_b^{2k + 2} A + \mathcal{P}_{2k+1}
\end{align*}
where $\mathcal{P}_{2k+1}$ involves derivatives of $A$ of weight $\leq 2k+1$ and derivatives of $\tilde{R}$ of weight $\leq 2k$. The commutation relation \eqref{d-commutate-tor} guarantees that the norm $||\nabla_\xi^{k+1} A||_{q, M}$ is uniformly bounded by the $S^q_{2k+2}$ norm of $A$, and thus so is $| \nabla_\xi^{k+1} A|$ by Lemma \ref{c-moser}.
\end{proof}
\section{Compactness of Sasakian pseudo-Einstein manifolds} \label{sec-sasakian}
As we know, Sasakian pseudo-Einstein manifolds with pseudo-Hermitian scalar curvature $\rho> 0$ are Einstein under D-homothetic transformations. The compactness of Einstein manifolds is well studied (cf. \cite{anderson1989ricci}). This section aims to investigate the other two cases, $\rho =0$ and $ \rho < 0$.
Let's say that a pseudo-Hermitian $(2n+1)$-manifold is normalized if $\rho = \pm n (n+1)$ or $0$.
Our main theorem is as follows:
\begin{theorem} \label{d-sasacptness}
Given constants $d, V_1$ and $\Lambda$, the class of normalized closed Sasakian pseudo-Einstein $(2n+1)$-manifolds with dimension $2n+1 \geq 5$ and
\begin{align}
||\tilde{R}||_{\frac{2n+1}{2}} \leq \Lambda, \quad \mbox{diam}_{cc} \leq d, \quad \mbox{Vol} \geq V_1 \label{f-condition}
\end{align}
is compact in $C^\infty$ topology.
\end{theorem}
Here we only need the $L^{n+\frac{1}{2}}$ norm of $\tilde{R}$, which improves Corollary \ref{d-cor-einstein}.
To prove Theorem \ref{d-sasacptness},
let's first obtain the smooth convergence of the pseudo-Hermitian structures and the almost complex structures from the convergence of the metrics.
\begin{lemma} \label{d-sasastrcpt}
Let $(M_i, HM_i, J_i, \theta_i)$ be a family of closed Sasakian manifolds. Assume that $(M_i , g_{\theta_i})$ converge to $(M, g)$ in $C^\infty$. Then there exist a smooth pseudo-Hermitian structure $\theta$ and a smooth almost complex structure $J$ such that $(M, HM, J, \theta)$ is the limit of a subsequence of $(M_i, HM_i, J_i, \theta_i)$ and is also Sasakian.
\end{lemma}
\begin{proof}
Without loss of generality, assume that $M_i = M$. The proof of Lemma \ref{c-thm-riemcur} shows that there exists a subsequence, also denoted by $\theta_i, J_i$, such that $\theta_i \to \theta$ and $J_i \to J$ in the $C^\infty$ topology of $M$.
The formula \eqref{b-con} shows that
\begin{align}
\tau_i = D^i \xi_i - J_i. \label{d-1}
\end{align}
Since $g_i$ converge to $g$ in $C^\infty$, $D^i$ tends to $D$ in $C^\infty$. Taking the limit in \eqref{d-1}, we find
\begin{align*}
\tau = D \xi - J = \lim_{i \to \infty} (D^i \xi_i - J_i) = \lim_{i \to \infty} \tau_i =0.
\end{align*}
Hence $(M, HM, J, \theta)$ is Sasakian where $HM = \mbox{Ker} \theta$.
\end{proof}
Thus, to prove Theorem \ref{d-sasacptness}, it remains to show the $C^\infty$ convergence of the metrics. By Theorem \ref{b-riemsmoothconvergence}, it suffices to prove uniform bounds on all derivatives of the Riemannian curvature.
Roughly speaking, Theorem A' in \cite{anderson1989ricci} produces the $L^\infty$ estimate of the curvature, and the pseudo-Einstein condition together with the CR Bochner formulae improves the regularity, as in Theorem \ref{c-smcptpseudothm}.
\begin{proof}[Proof of Theorem \ref{d-sasacptness}]
Let $(M, HM, J, \theta)$ be a Sasakian pseudo-Einstein manifold satisfying \eqref{f-condition} in this theorem.
Since the pseudo-Hermitian torsion vanishes, the relation \eqref{b-con} between the Levi-Civita connection $D$ of $g_\theta$ and the Tanaka-Webster connection $\nabla$ becomes
\begin{align}
D= \nabla - d \theta \otimes \xi + 2 \theta \odot J \label{d-levi}
\end{align}
which implies that
\begin{align}
D \theta = d \theta , \quad D d \theta = \theta * g_\theta. \label{d-thetaj}
\end{align}
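Indeed, since $A = 0$ and $\tau = 0$ in the Sasakian case, \eqref{c-levi3} gives $D\theta = d\theta$ directly, while \eqref{c-levi4} reduces to $D_X J(Y) = - g_\theta(X, Y)\xi + \theta(Y) X$; differentiating $d\theta(\cdot, \cdot) = g_\theta(J\cdot, \cdot)$ then yields $D d\theta = \theta * g_\theta$ schematically.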
By Lemma \ref{b-ricrelation}, we find
\begin{align}
D^2 \hat{Ric} = \theta * \theta * g_\theta + d \theta * d \theta \label{d-derici}
\end{align}
which is uniformly bounded. So is $D^{k} \hat{Ric}$ for all $k$.
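Indeed, $D g_\theta = 0$ and, by \eqref{d-thetaj}, each further differentiation of the right-hand side of \eqref{d-derici} again produces only contractions of $\theta$, $d\theta$ and $g_\theta$, whose pointwise norms depend only on $n$; hence, by induction, $D^k \hat{Ric}$ is bounded by a dimensional constant for every $k \geq 2$.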
The proof of Theorem A' in \cite{anderson1989ricci} guarantees that
\constantnumber{cst-7}
\begin{align}
\sup_{M} |\hat{R}|_{g_\theta} \leq C_{\ref*{cst-7}}, \label{d-zeroreg}
\end{align}
where $C_{\ref*{cst-7}} = C_{\ref*{cst-7}} (\Lambda, V_1, d)$.
For the estimate of higher derivatives of $D^k \hat{R}$, we use induction to prove
\begin{align}
\sup_M | D^k \hat{R} |_{g_\theta} \leq \Lambda_k . \label{d-regclaim}
\end{align}
The case $k=0$ has been proved in \eqref{d-zeroreg}. Assume that the estimate \eqref{d-regclaim} holds for all cases $\leq k$.
Now we consider the case $k+1$. Note that
\begin{align} \label{d-subcurreg1}
\Delta D^{k+1} \hat{R} = D^{k+1} \hat{R} * \hat{R} + \sum_{l = 1}^k D^l \hat{R} * D^{k + 1 -l} \hat{R} + D^{k+3} \hat{Ric},
\end{align}
\constantnumber{cst-8}
which implies
\begin{align}
\Delta | D^{k+1} \hat{R} | + C_{\ref*{cst-8}} | D^{k+1} \hat{R} | + C_{\ref*{cst-8}} \geq 0. \label{d-subcurreg2}
\end{align}
where $C_{\ref*{cst-8}} = C_{\ref*{cst-8}} (n, k)$. We apply the Riemannian Sobolev inequality (cf. \cite{anderson1989ricci}) and an argument similar to that of Lemma \ref{c-moser} to \eqref{d-subcurreg2} with $p_0 =2$. The result is
\constantnumber{cst-9}
\begin{align}
\sup_M | D^{k+1} \hat{R} | \leq C_{\ref*{cst-9}} ( ||D^{k+1} \hat{R}||_{2, M} +1 ),
\end{align}
where $C_{\ref*{cst-9}} = C_{\ref*{cst-9}} (k, \Lambda, V_1, d) $. Next we estimate $||D^{k+1} \hat{R}||_{2, M}$. Using Stokes' formula and the $k$-th version of \eqref{d-subcurreg1}, we have
\constantnumber{cst-10}
\begin{align*}
& \int_M | D^{k+1} \hat{R} |^2 = - \int_M \langle \Delta D^k \hat{R}, D^k \hat{R} \rangle \\
& \leq \int_M \langle D^k \hat{R} * \hat{R}, D^k \hat{R} \rangle + \sum_{l =1}^{k-1} \int_M \langle D^l \hat{R} * D^{k-l} \hat{R}, D^k \hat{R} \rangle + \int_M \langle D^{k+2} \hat{Ric} , D^k \hat{R} \rangle \\
& \leq C_{\ref*{cst-10}},
\end{align*}
which gives \eqref{d-regclaim}.
Suppose $(M_i, HM_i, J_i, \theta_i)$ is a sequence of normalized closed connected Sasakian pseudo-Einstein manifolds satisfying \eqref{f-condition}. The estimate \eqref{d-regclaim} shows the uniform bounds of all covariant derivatives of $\hat{R}$.
Hence by Theorem \ref{b-riemsmoothconvergence}, we obtain the $C^\infty$ sub-convergence of the metrics $g_{\theta_i}$. Applying Lemma \ref{d-sasastrcpt}, the structures $\theta_i$ and $J_i$ both converge in $C^\infty$ and the limit $(M, HM, J, \theta)$ is Sasakian pseudo-Einstein.
\end{proof}
\begin{remark}
The proof of Theorem \ref{d-sasacptness} only requires an upper bound on the Riemannian distance with respect to the Webster metric, which seems weaker than one on the Carnot-Carath\'eodory distance. But they are equivalent for normalized Sasakian pseudo-Einstein manifolds (cf. Theorem 3 in \cite{baudoin2014volume}).
\end{remark}
\begin{remark}
There is another generalization of the Einstein notion in Sasakian geometry, which is called Sasakian $\eta$-Einstein (cf. \cite{boyer2006eta}). It means that the Riemannian Ricci curvature satisfies
\begin{align}
\langle \hat{Ric} (X), Y \rangle = \lambda \langle X, Y \rangle + \mu \theta(X) \theta(Y), \mbox{ for } X, Y \in \Gamma (TM) ,
\end{align}
where $\lambda$ and $\mu$ are constants.
The concepts of pseudo-Einstein and $\eta$-Einstein are equivalent in Sasakian geometry by the following lemma. In other words, Theorem \ref{d-sasacptness} gives the compactness of Sasakian $\eta$-Einstein manifolds.
\end{remark}
\begin{lemma}
Let $(M, HM, J, \theta)$ be a Sasakian $(2n+1)$-manifold. Then it is $\eta$-Einstein if and only if it is pseudo-Einstein.
\end{lemma}
\begin{proof}
Since $(M, HM, J, \theta)$ is Sasakian, by \eqref{b-tor6} and \eqref{b-tor7}, $R(\xi, \cdot) \cdot = 0$. Moreover, Lemma \ref{b-ricrelation} shows that for any $X, Y \in \Gamma (TM) $,
\begin{align}
\langle \hat{Ric} (X), Y \rangle = \langle R_* X, Y \rangle - 2 \langle \pi_H X, \pi_H Y \rangle + 2 n \theta (X) \theta (Y). \label{b-sasakianric}
\end{align}
Suppose $(M, HM, J, \theta)$ is $\eta$-Einstein. For any $X, Y \in \Gamma (HM) $, by \eqref{b-sasakianric}, we have
\begin{align*}
\langle R_* X, Y \rangle = \langle \hat{Ric} (X), Y \rangle + 2 \langle X, Y \rangle = (\lambda+2) \langle X, Y \rangle,
\end{align*}
which shows that it is also pseudo-Einstein.
Suppose $(M, HM, J, \theta)$ is pseudo-Einstein. Then the pseudo-Hermitian scalar curvature $\rho$ is constant by \eqref{b-tordiv}. Thus by \eqref{b-sasakianric}, we have for any $X, Y \in \Gamma (TM)$
\begin{align*}
\langle \hat{Ric} (X), Y \rangle &= (\frac{\rho}{n} - 2) \langle \pi_H X, \pi_H Y \rangle + 2 n \theta (X) \theta (Y) \\
& = (\frac{\rho}{n} - 2) \langle X, Y \rangle + (2 n - \frac{\rho}{n} +2 ) \theta (X) \theta (Y),
\end{align*}
which shows that it is $\eta$-Einstein.
\end{proof}
As a simple consequence of Theorem \ref{d-sasacptness}, we can deduce a pointed compactness result for K\"ahler cones. Suppose that $(M, HM, J, \theta)$ is a Sasakian $(2n+1)$-manifold. Its K\"ahler cone is the product manifold $CM = \mathbb{R}_+ \times M$ with metric
\begin{align*}
h = d t^2 + t^{2} g_\theta
\end{align*}
and complex structure
\begin{align*}
\mathfrak{J} = J + dt \otimes (t^{-1} \xi) - (t \theta) \otimes \partial_t,
\end{align*}
where $t$ is the coordinate of $\mathbb{R}_+$ (cf. \cite{boyer19983sasakian}).
The link $\{1\} \times M$ with the induced CR structure is identified with the generator $(M, HM, J, \theta)$.
As is well known, a Sasakian manifold is Einstein if and only if its K\"ahler cone is Ricci-flat.
Actually, one can easily obtain the following relationship between the pseudo-Hermitian Ricci curvature $R_*$ of a Sasakian manifold $(M, HM, J, \theta)$ and the Ricci curvature $\mathfrak{Ric}$ of its K\"ahler cone $(CM, \mathfrak{J}, h)$:
\begin{align}
h ( \mathfrak{Ric} (X), Y ) = t^{-2} G_\theta \big( (R_* - (2n+2)) \pi_{TM} X , \pi_{TM} Y \big), \label{d-ricci-cone}
\end{align}
where $\pi_{TM}$ is the projection from $T(CM)$ to $TM$.
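In particular, with the conventions of \eqref{d-ricci-cone}, the cone $(CM, \mathfrak{J}, h)$ is Ricci-flat if and only if $R_* = (2n+2)\, \mathrm{id}$ on $HM$, so the link of a Ricci-flat K\"ahler cone is automatically pseudo-Einstein.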
By Theorem \ref{d-sasacptness}, we have the following corollary.
\begin{corollary} \label{d-corollary-cone}
Given constants $d, V_1$ and $\Lambda$, the class of complete Ricci-flat K\"ahler cones with dimension $2n+2$ and their Sasakian links satisfying
\begin{align}
||\tilde{R}||_{\frac{2n+1}{2}} \leq \Lambda , \quad \mbox{diam}_{cc} \leq d, \quad \mbox{Vol} \geq V_1
\end{align}
is pointed $C^\infty$ compact.
\end{corollary}
\section{Pseudo-Hermitian Ricci Bounded From Below} \label{sec-pinching}
In this section, we deduce a weaker version of Theorem \ref{c-pseudocptness} which relaxes the pseudo-Einstein condition.
\begin{theorem} \label{c-weakcptthm}
Given $\kappa_1 , d, V_1, \Lambda$ and $p > 2n+1$ for any integer $n \geq 1$, there exists $\epsilon = \epsilon (n, p , d ) > 0$ such that any sequence of closed pseudo-Hermitian manifolds $(M_i, HM_i, J_i, \theta_i)$ with dimension $2n+1$ and
\begin{enumerate}[(1)]
\item $ R_{*,i} \geq - 2 (n+1) \kappa_1 $ ,
\item $ ||A_i||_{S^p_1 (M_i)}, ||\nabla_{\xi_i} A_i||_{p, M_i}, ||A_i^2||_{p, M_i} \leq \epsilon \left( \mbox{Vol} (M_i) \right)^{\frac{1}{p}} $, \label{c-weaktor}
\item $ ||\tilde{R}_i||_{p, M_i} \leq \Lambda $,
\item $\mbox{diam}_{cc} (M_i) \leq d$,
\item $ \mbox{Vol} (M_i) \geq V_1$,
\end{enumerate}
is $C^{1, \alpha}$ sub-convergent for any $\alpha < 1- \frac{2n +1}{p} $.
\end{theorem}
The proof follows from the following theorem (Theorem 1.4) in \cite{petersen1997relative}.
\begin{theorem} \label{c-petersenwei}
Given an integer $n \geq 2$, and numbers $p > \frac{n}{2}, \lambda \leq 0, V_1 >0, d < \infty, \Lambda \leq \infty$, one can find $\varepsilon = \varepsilon (n, p, \lambda, d) > 0$ such that the class of closed Riemannian manifolds with dimension $n$ and
\begin{gather*}
\mbox{Vol} \geq V_1 , \quad diam \leq d , \quad ||R||_{L^p} \leq \Lambda \\
||\max \{ - f(x) + (n -1) \lambda, 0 \}||_{L^p} \leq \varepsilon \left( \mbox{Vol} \right)^{\frac{1}{p}}
\end{gather*}
where $f(x)$ is the smallest eigenvalue of the Riemannian Ricci tensor,
is precompact in $C^\alpha$ topology for any $\alpha < 2 - \frac{n}{p} $.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{c-weakcptthm}]
Let $\nabla^i$ be the Tanaka-Webster connection of $(M_i, HM_i, J_i, \theta_i)$.
Due to the relation \eqref{b-riempseudoric}, the Riemannian Ricci curvature
\begin{align*}
\hat{Ric}_i =
\begin{pmatrix}
R_{*, i} - 2 I_{2n} & 0 \\
0 & 2n
\end{pmatrix}
+
\begin{pmatrix}
(A_i) + (\nabla_\xi^i A_i) & (\mbox{div} A_i)^T \\
\mbox{div} A_i & - |A_i|^2
\end{pmatrix}
\end{align*}
where $(A_i)$ and $(\nabla_\xi^i A_i)$ represent linear combinations of contractions of $\theta$, $J$ and the tensors themselves. Hence the smallest eigenvalue $f_i(x)$ of $\hat{Ric}_i$ at $x \in M_i$ satisfies
\begin{align*}
f_i (x) \geq - 2 (n+1) \kappa_1-2 - C (|A_i| + |A_i|^2 + |\nabla^i A_i|)
\end{align*}
where $C = C(n)$. Let $\lambda = \frac{- 2 (n+1) \kappa_1-2}{2n}$ and then we have
\begin{align*}
\max \{ - f_i (x) + 2n \lambda, 0 \} \leq C (|A_i| + |A_i|^2 + |\nabla^i A_i|).
\end{align*}
Since condition \eqref{c-weaktor} controls the $L^p$ norm of $\nabla^i A_i$,
Theorem \ref{c-petersenwei} applies for sufficiently small $\varepsilon$.
Thus $(M_i, g_{\theta_i})$ will $C^{1,\alpha}$ converge to some Riemannian manifold $(M, g)$.
Due to Lemma \ref{d-lem-estimate}, one can easily check that the first covariant derivative of the pseudo-Hermitian torsion with respect to the Levi-Civita connection has a uniform classical Sobolev $L^p_1$-norm. By the Sobolev embedding theorem, the $C^{0,\beta}$-norm of the pseudo-Hermitian torsion $A_i$ is uniformly bounded for $\beta = 1- \frac{2n +1}{p}$. The proof is finished by an argument similar to Lemma \ref{c-thm-riemcur}.
\end{proof}
The condition \eqref{c-weaktor} of Theorem \ref{c-weakcptthm} holds automatically for Sasakian manifolds, since the pseudo-Hermitian torsion vanishes.
\begin{corollary}
Given $\kappa_1 , d, V_1 , \Lambda$ and $p > 2n +1$ for any positive integer $n$, any sequence of closed Sasakian manifolds with dimension $2n+1$ and
\begin{align*}
R_{*} \geq - 2 (n+1) \kappa_1 , \quad ||\tilde{R}||_{L^p} \leq \Lambda, \quad \mbox{diam}_{cc} \leq d, \quad \mbox{Vol} \geq V_1
\end{align*}
is $C^{1, \alpha}$ sub-convergent for any $\alpha < 1- \frac{2n +1}{p} $.
\end{corollary}
As a consequence, combined with \eqref{d-ricci-cone}, we can deduce the pointed compactness of K\"ahler cones with a lower bound on the Ricci curvature $\mathfrak{Ric}$.
\begin{corollary} \label{c-corollary-cone}
Given constants $\kappa_1, d, V_1, \Lambda$ and $p > 2n +1$ for any positive integer $n$, any sequence of complete K\"ahler cones with dimension $2n+2$, $\mathfrak{Ric} \geq - \kappa_1 t^{-2}$ and their Sasakian links satisfying
\begin{align}
||\tilde{R}||_{L^p} \leq \Lambda, \quad \mbox{diam}_{cc} \leq d, \quad \mbox{Vol} \geq V_1
\end{align}
is $C^{1, \alpha}$ sub-convergent for any $\alpha < 1- \frac{2n +1}{p} $.
\end{corollary}
Shu-Cheng Chang
\emph{Department of Mathematics and Taida Institute for Mathematical Sciences (TIMS)}
\emph{National Taiwan University}
\emph{Taipei, 10617, Taiwan}
[email protected]
Yuxin Dong
\emph{School of Mathematical Sciences}
\emph{Fudan University}
\emph{Shanghai, 200433, P. R. China}
[email protected]
Yibin Ren
\emph{College of Mathematics, Physics and Information Engineering}
\emph{Zhejiang Normal University}
\emph{Jinhua, 321004, Zhejiang, P.R. China}
[email protected]
\end{document}
\begin{document}
\title{Separation axioms in bi-soft Topological Spaces}
\author{Munazza Naz}
\address{Department of Mathematics, Fatima Jinnah Women University, The
Mall, Rawalpindi}
\email{[email protected]}
\author{Muhammad Shabir}
\address{Department of Mathematics, Quaid-i-Azam University, Islamabad}
\email{[email protected]}
\author{Muhammad Irfan Ali}
\address{Department of Mathematics, Islamabad Model College for Boys F-7/3,
Islamabad, Pakistan.}
\email{[email protected]}
\keywords{Bitopological Spaces, Soft Topology, Soft Sets, Soft Open Sets,
Soft Closed Sets, Separation Axioms.}
\subjclass[2000]{Primary 05C38, 15A15; Secondary 05A15, 15A18}
\begin{abstract}
Concept of bi-soft topological spaces is introduced. Several notions of a
soft topological space are generalized to study bi-soft topological spaces.
Separation axioms play a vital role in study of topological spaces. These
concepts have been studied in context of bi-soft topological spaces. There
is a very close relationship between topology and rough set theory. An
application of bi-soft topology is given in rough set theory.
\end{abstract}
\maketitle
\section{Introduction}
Soft set theory, initiated by Molodtsov$^{\text{\cite{Molod}}}$, is a novel
concept and a completely new approach for modeling vagueness and
uncertainty, which occur in real world problems. Applications of soft set
theory in many disciplines and real-life problems, have established their
role in scientific literature. Many researchers are working in this very
important area. Molodtsov suggests many directions for the applications of
soft sets in his seminal paper \cite{Molod}, which include smoothness of
functions, game theory, Riemann integration, Perron integration and theory
of measurement. Some important applications of soft sets are in information
systems and decision making problems can be seen $^{\text{\cite{Maji 1}\cite
{Pie}}}$. These concepts are of utmost importance in artificial intelligence
and computer science. Algebraic structures of soft sets have been discussed
in $^{\text{\cite{Ali 1}, \cite{Ali 2}, \cite{Maji 2}}}$. Concept of Soft
topological spaces is introduced in $^{\text{\cite{Shabir}}}$, where soft
separation axioms have been studied as well. Further contributions to the
same concepts have been added by many authors in $^{\text{\cite{Bashir},
\cite{Cagman}, \cite{Min}}}$.
Topology is an important branch of mathematics. Separation axioms in
topology are among the most beautiful and interesting concepts. Various
generalizations of separation axioms have been studied for generalized
topological spaces. It is interesting to see that when classical notions are
replaced by new generalized concepts, several new results emerge. Kelly$^{
\text{\cite{Kelly}}}$ introduced the concept of bitopological spaces and
studied the separation properties for bitopological spaces. These separation
axioms are actually pair-wise separation axioms. In later years, many
researchers studied bitopological spaces$^{\text{\cite{Kim}, \cite{Lal},
\cite{Lane}, \cite{Murdesh}, \cite{Patty}, \cite{Pervin}, \cite{Reilly},
\cite{Singal}}}$ due to the richness of their structure and potential for
carrying out a wide scope for the generalization of topological results in
bitopological environment. Our present work is also a continuation of this
trend.
In the present paper, the concept of soft topological spaces has been
generalized to initiate the study of bi-soft topological spaces. In section
\ref{Pre}, some preliminary concepts about bitopological spaces and soft
topological spaces are given. Section \ref{Bisofttopo} is devoted for the
study of bi-soft topological spaces. The basic structure of a bi-soft
topological space over an initial universal set $X$, with a fixed set of
parameters has been given. The concept of pair-wise soft separation axioms
for bi-soft topological spaces is studied in section \ref{Sep}. Properties of
pair-wise soft $T_{0}$, $T_{1}$, and $T_{2}-$spaces and their relations with
the corresponding soft $T_{0}$, $T_{1}$, and $T_{2}-$spaces have been
discussed here. Main goal of this paper is to study the implications of
these generalized separation axioms in soft and crisp cases. Several results
in this regards have been presented. This study focuses on question: If a
pair-wise soft $T_{i}-$space $(i=0$, $1$, $2)$, say $(X,\mathcal{T}_{1},
\mathcal{T}_{2},E)$, over a ground set $X$ is given, what can be said about
the situations,
\begin{enumerate}
\item both $(X,\mathcal{T}_{1},E)$ and $(X,\mathcal{T}_{2},E)$\ are soft $
T_{i}-$spaces,
\item $(X,\mathcal{T}_{1}\vee \mathcal{T}_{2},E)$ is a soft $T_{i}-$space,
\item the parameterized bitopological spaces $(X,\mathcal{T}_{1e},\mathcal{T}
_{2e})$ are $T_{i}-$spaces for all $e\in E$,
\item bi-soft subspaces $(Y,\mathcal{T}_{1Y},\mathcal{T}_{2Y},E)\ $are $
T_{i}-$spaces for $\emptyset \neq Y\subset X$.
\end{enumerate}
Furthermore, a characterization theorem is proved for pair-wise soft Hausdorff
spaces. Finally, in section \ref{appli} an application of bi-soft topological
spaces is suggested in rough set theory.
\section{Preliminaries\label{Pre}}
In this section some basic concepts about bitopological spaces and soft
topological spaces are presented.
\begin{definition}
$^{\text{\cite{Kelly}}}$A \textit{bitopological space} is the triplet $(X,
\mathcal{P},\mathcal{Q})$ where $X$ is a non-empty set, $\mathcal{P}$ and $
\mathcal{Q}$ are two topologies on $X$.
\end{definition}
\begin{definition}
$^{\text{\cite{Murdesh}}}$A \textit{bitopological space} $(X,\mathcal{P},
\mathcal{Q})$ is said to be pair-wise $T_{0}$ if for each pair of distinct
points of $X$, there is a $\mathcal{P}$-open set or a $\mathcal{Q}$-open set
containing one of the points, but not the other.
\end{definition}
\begin{definition}
$^{\text{\cite{Reilly}}}$A \textit{bitopological space} $(X,\mathcal{P},
\mathcal{Q})$ is said to be pair-wise $T_{1}$, if for each pair of distinct
points $x$, $y$ there exist $U\in \mathcal{P}$, $V\in \mathcal{Q}$ such that
$x\in U$, $y\notin U$ and $y\in V$, $x\notin V$.
\end{definition}
\begin{definition}
$^{\text{\cite{Kelly}}}$A \textit{bitopological space} $(X,\mathcal{P},
\mathcal{Q})$ is said to be pair-wise $T_{2}$, if given distinct points $x$,
$y\in X$, there exist $U\in \mathcal{P}$, $V\in \mathcal{Q}$ such that $x\in
U$, $y\in V$, $U\cap V=\emptyset $.
\end{definition}
In the following some concepts about soft sets and soft topological spaces
are given.
Let $X$ be an initial universe set and $E$ be the non-empty set of
parameters.
\begin{definition}
$^{\text{\cite{Molod}}}$ Let $X$ be an initial universe and $E$ be a set of
parameters. Let $\mathcal{P}(X)$ denote the power set of $X$ and $A$ be a
non-empty subset of $E$. A pair $(F$,$A)$\ is called a soft set over $X$,
where $F$ is a mapping given by $F:A\rightarrow \mathcal{P}(X)$.
\end{definition}
In other words, a soft set over $X$ is a parametrized family of subsets of
the universe $X$. For $\varepsilon \in A$, $F(\varepsilon )$ may be
considered as the set of $\varepsilon -$approximate elements of the soft set
$(F$,$A)$. Clearly, a soft set is not a set.
\begin{definition}
$^{\text{\cite{Ali 1}}}$ For two soft sets $(F$,$A)\ $and $(G$,$B)$ over a
common universe $X$, we say that $(F$,$A)$ is a \textit{soft subset} of $(G$,
$B)$ if
\begin{enumerate}
\item $\ A\subseteq B$ and
\item $F(e)\subseteq G(e)$, for all $e\in A$.
\end{enumerate}
\end{definition}
We write $(F$,$A)\widetilde{\subset }(G$,$B)$.
$(F$,$A)$ is said to be a soft super set of $(G$,$B)$, if $(G$,$B)$ is a
soft subset of $(F$,$A)$. We denote it by $(F$,$A)\widetilde{\supset }(G$,$
B) $.
\begin{definition}
$^{\text{\cite{Ali 1}}}$A soft set $(F$,$A)$ over $X$ is said to be a
\textit{NULL} soft set denoted by $\Phi $\ if for all $\varepsilon \in A$, $
F(\varepsilon )=$ $\emptyset $ $($null set$)$.
\end{definition}
\begin{definition}
$^{\text{\cite{Ali 1}}}$ A soft set $(F$,$A)$ over $X$ is said to be \textit{
absolute} soft set denoted by $\widetilde{A}$\ if for all $e\in A$, $F(e)=$ $
X$.
\end{definition}
\begin{definition}
$^{\text{\cite{Ali 1}}}$The \textit{Union of two soft sets} $(F$,$E)$ and $
(G $,$E)$ over the common universe $X$\ is the soft set $(H$,$E)$, where $
H(e)=F(e)\cup G(e)$ for all $e\in E$. We write $(F,E)\cup (G,E)=(H,E)$.
\end{definition}
\begin{definition}
$^{\text{\cite{Ali 1}}}$\textit{The intersection} of two soft sets $(F,E)$
and $(G,E)$ over a common universe $X$,\ is a soft set $(H,E)=(F,E)\cap
(G,E) $, defined by $H(e)=F(e)\cap G(e)$ for all $e\in E$.
\end{definition}
\begin{definition}
$^{\text{\cite{Shabir}}}$The difference $(H$,$E)$ of two soft sets $(F$,$E)$
and $(G$,$E)$ over $X$,\ denoted by $(F,E)-(G,E)$, is defined as $
H(e)=F(e)-G(e)$ for all $e\in E$.
\end{definition}
\begin{definition}
$^{\text{\cite{Shabir}}}$Let $(F$,$E)$ be a soft set over $X$ and $x\in X$.
We say that $x\in (F,E)$ read as $x$ belongs to the soft set $(F$,$E)$
whenever $x\in F(\alpha )$ for all $\alpha \in E$.
\end{definition}
Note that for any $x\in X$, $x\notin (F$,$E)$, if $x\notin F(\alpha )$ for
some $\alpha \in E$.
\begin{definition}
$^{\text{\cite{Shabir}}}$Let $Y$ be a non-empty subset of $X$, then $
\widetilde{Y}$ denotes the soft set $(Y,E)$ over $X$ for which $Y(\alpha )=Y$
, for all $\alpha \in E$.
\end{definition}
In particular, $(X,E)$\ will be denoted by $\widetilde{X}$.
\begin{definition}
$^{\text{\cite{Shabir}}}$Let $x\in X$. Then $(x$,$E)$ denotes the soft set
over $X$ for which $x(\alpha )=\{x\}$, for all $\alpha \in E$.
\end{definition}
\begin{definition}
$^{\text{\cite{Shabir}}}$Let $(F$,$E)$ be a soft set over $X$ and $Y$\ be a
non-empty subset of $X$. Then the soft subset of $(F$,$E)$ over $Y$ denoted
by $(^{Y}F$,$E)$, is defined as follows
\begin{equation*}
^{Y}F(\alpha )=Y\cap F(\alpha ),\text{ for all }\alpha \in E
\end{equation*}
In other words $(^{Y}F,E)=\widetilde{Y}\cap (F,E)$.
\end{definition}
\begin{definition}
$^{\text{\cite{Shabir}}}$The\textit{\ complement of a soft set} $(F,E)$ is
denoted by $(F,E)^{c}$ and is defined by $(F,E)^{c}=(F^{c},E)$ where $
F^{c}:E\rightarrow \mathcal{P}(X)$ is a mapping given by
\begin{equation*}
F^{c}(\alpha )=X-F(\alpha )\text{ for all}\ \alpha \in E.
\end{equation*}
\end{definition}
\begin{proposition}
$^{\text{\cite{Shabir}}}$Let $(F,E)$ and$\ (G,E)$ be the soft sets over $X$.
Then
\begin{enumerate}
\item $((F,E)\cup (G,E))^{c}=(F,E)^{c}\cap (G,E)^{c}$,
\item $((F,E)\cap (G,E))^{c}=(F,E)^{c}\cup (G,E)^{c}$.
\end{enumerate}
\end{proposition}
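Both identities can be verified parameter-wise from the classical De Morgan laws: for every $e\in E$,
\begin{equation*}
X-\left( F(e)\cup G(e)\right) =\left( X-F(e)\right) \cap \left( X-G(e)\right) \text{ and }X-\left( F(e)\cap G(e)\right) =\left( X-F(e)\right) \cup \left( X-G(e)\right) .
\end{equation*}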
\begin{definition}
$^{\text{\cite{Shabir}}}$Let $\mathcal{T}$ be the collection of soft sets
over $X$. Then $\mathcal{T}$ is said to be a soft topology on $X$ if
\begin{enumerate}
\item $\Phi $, $\widetilde{X}$ belong to $\mathcal{T}$
\item the union of any number of soft sets in $\mathcal{T}$\ belongs to $
\mathcal{T}$
\item the intersection of any two soft sets in $\mathcal{T}$\ belongs to\ $
\mathcal{T}$.
\end{enumerate}
The triplet $(X$,$\mathcal{T}$,$E)$ is called a soft topological space over $
X$.
\end{definition}
\begin{example}
$^{\text{\cite{Shabir}}}$Let $X=\{h_{1}$,$h_{2}$,$h_{3}\}$, $E=\{e_{1}$,$
e_{2}\}$ and
$\mathcal{T}=\{\Phi $,$\widetilde{X}$,$(F_{1}$,$E)$,$(F_{2}$,$E)$,$(F_{3}$,$
E)$,$(F_{4}$,$E)$, $(F_{5}$,$E)\}$ where $(F_{1}$,$E)$,$(F_{2}$,$E)$,$(F_{3}$
,$E)$,$(F_{4}$,$E)$, and $(F_{5}$,$E)$ are soft sets over $X$, defined as
follows:
\begin{equation*}
\begin{array}[t]{ll}
F_{1}(e_{1})=\{h_{2}\}, & F_{1}(e_{2})=\{h_{1}\}, \\
F_{2}(e_{1})=\{h_{2},h_{3}\}, & F_{2}(e_{2})=\{h_{1},h_{2}\}, \\
F_{3}(e_{1})=\{h_{1},h_{2}\}, & F_{3}(e_{2})=X, \\
F_{4}(e_{1})=\{h_{1},h_{2}\}, & F_{4}(e_{2})=\{h_{1},h_{3}\}, \\
F_{5}(e_{1})=\{h_{2}\}, & F_{5}(e_{2})=\{h_{1},h_{2}\}.
\end{array}
\end{equation*}
Then $\mathcal{T}$\ defines a soft topology on $X$ and hence $(X$,$\mathcal{T
}$,$E)$ is a soft topological space over $X$.
\end{example}
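For a finite family of soft sets, the three axioms can also be verified mechanically. The sketch below is again only illustrative, assuming the dictionary representation and the \texttt{union} and \texttt{intersection} helpers of the previous sketch; since the family is finite, closure under pairwise unions and intersections is enough.
\begin{verbatim}
# Illustrative sketch: check the soft-topology axioms for a finite family T
# of soft sets (a list of dictionaries), reusing union/intersection above.
from itertools import combinations

PHI = {e: frozenset() for e in E}      # the null soft set
XT  = {e: X for e in E}                # the absolute soft set X tilde

def key(S):
    # canonical, hashable form of a soft set
    return tuple(sorted((e, tuple(sorted(S[e]))) for e in E))

def is_soft_topology(T):
    fam = {key(S) for S in T}
    if key(PHI) not in fam or key(XT) not in fam:
        return False
    # for a finite family, closure under pairwise unions and intersections
    # implies closure under arbitrary unions and finite intersections
    return all(key(union(S1, S2)) in fam and key(intersection(S1, S2)) in fam
               for S1, S2 in combinations(T, 2))
\end{verbatim}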
\begin{definition}
$^{\text{\cite{Shabir}}}$Let $(X$,$\mathcal{T}$,$E)$ be a soft topological
space over $X$. Then the members of$\ \mathcal{T}$ are said to be soft open
sets in $X$.
\end{definition}
\begin{definition}
$^{\text{\cite{Shabir}}}$Let $(X$,$\mathcal{T}$,$E)$ be a soft topological
space over $X$. A soft set $(F,E)$ over $X$ is said to be a soft closed set
in $X$, if its relative complement $(F,E)^{c}$ belongs to $\mathcal{T}$.
\end{definition}
\begin{definition}
\label{softcl}$^{\text{\cite{Shabir}}}$Let $(X$,$\mathcal{T}$,$E)$ be a
\textit{soft topological space }over $X$ and $(F$,$E)$ be a soft set over $X$
. Then the \textit{soft closure} of $(F$,$E)$, denoted by $\overline{(F,E)}$
is the intersection of all soft closed super sets of $(F$,$E)$.
\end{definition}
\begin{definition}
$^{\text{\cite{Shabir}}}$Let $(X$,$\mathcal{T}$,$E)$ be a \textit{soft
topological space} over $X$ and $Y$\ be a non-empty subset of $X$. Then
\begin{equation*}
\mathcal{T}_{Y}=\{\text{ }(^{Y}F,E)\text{\ }|\text{ }(F,E)\in \mathcal{T}
\text{\ }\}
\end{equation*}
is said to be the \textit{soft relative topology} on $Y$ and $(Y$,$\mathcal{T
}_{Y}$,$E)$\ is called a \textit{soft subspace} of $(X$,$\mathcal{T}$,$E)$.
\end{definition}
\begin{definition}
$^{\text{\cite{Shabir}}}$Let $(X$,$\mathcal{T}$,$E)$ be a soft topological
space over $X$ and $x$,$y\in X$ be such that $x\neq y$. If there exist soft
open sets $(F$,$E)$ and $(G$,$E)$ such that
"$x\in (F$,$E)$ and$\ y\notin (F$,$E)$" or "$y\in (G$,$E)$ and $x\notin (G$,$
E)$", then $(X$,$\mathcal{T}$,$E)$\ is called a soft $T_{0}-$space.
\end{definition}
\begin{definition}
$^{\text{\cite{Shabir}}}$Let $(X$,$\mathcal{T}$,$E)$ be a soft topological
space over $X$ and $x$,$y\in X$ be such that $x\neq y$. If there exist soft
open sets $(F$,$E)$ and $(G$,$E)$ such that
"$x\in (F$,$E)$ and$\ y\notin (F$,$E)$" and "$y\in (G$,$E)$ and $x\notin (G$,
$E)$", then $(X$,$\mathcal{T}$,$E)$\ is called a soft $T_{1}-$space.
\end{definition}
\begin{definition}
$^{\text{\cite{Shabir}}}$Let $(X$,$\mathcal{T}$,$E)$ be a soft topological
space over $X$ and $x$,$y\in X$ be such that $x\neq y$. If there exist soft
open sets $(F$,$E)$ and $(G$,$E)$ such that
$x\in (F$,$E)$, $y\in (G$,$E)$ and $(F$,$E)\cap (G$,$E)=\Phi $, then $(X$,$
\mathcal{T}$,$E)$\ is called a soft $T_{2}-$space.
\end{definition}
\section{Bi-Soft Topological Spaces\label{Bisofttopo}}
In this section, the study of bi-soft topological spaces is initiated.
\begin{definition}
Let $\mathcal{T}_{1}$ and $\mathcal{T}_{2}$ be two soft topologies on $X$.
Then the quadruple $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is said to be a
bi-soft topological space over $X$.
\end{definition}
\begin{example}
Let $X=\{h_{1},h_{2},h_{3}\}$, $E=\{e_{1},e_{2}\}$. Let
\begin{eqnarray*}
\mathcal{T}_{1} &=&\{\Phi ,\widetilde{X},(F_{1},E),(F_{2},E)\}\text{, and} \\
\mathcal{T}_{2} &=&\{\Phi ,\widetilde{X}
,(G_{1},E),(G_{2},E),(G_{3},E),(G_{4},E)\}\text{,}
\end{eqnarray*}
where $(F_{1},E),(F_{2},E),(G_{1},E),(G_{2},E),(G_{3},E),(G_{4},E)$ are soft
sets over $X$, defined as follows:
\begin{equation*}
\begin{array}{ll}
F_{1}(e_{1})=\{h_{1}\}, & F_{1}(e_{2})=\{h_{1},h_{2}\}, \\
F_{2}(e_{1})=\{h_{1},h_{3}\}, & F_{2}(e_{2})=X,
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{ll}
G_{1}(e_{1})=\{h_{1}\}, & G_{1}(e_{2})=\{h_{2}\}, \\
G_{2}(e_{1})=\{h_{1},h_{2}\}, & G_{2}(e_{2})=\{h_{2}\}, \\
G_{3}(e_{1})=\{h_{2}\}, & G_{3}(e_{2})=\{h_{2}\}, \\
G_{4}(e_{1})=\{\}, & G_{4}(e_{2})=\{h_{2}\}.
\end{array}
\end{equation*}
Then $\mathcal{T}_{1}$\ and $\mathcal{T}_{2}$ are soft topologies on $X$.
Thus $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a \textit{bi-soft
topological space over }$X$.
\end{example}
\begin{proposition}
\label{Propref}Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a bi-soft
topological space over $X$. We define
\begin{eqnarray*}
\mathcal{T}_{1e} &=&\{F(e)|(F,E)\in \mathcal{T}_{1}\} \\
\mathcal{T}_{2e} &=&\{G(e)|(G,E)\in \mathcal{T}_{2}\}
\end{eqnarray*}
for each $e\in E$. Then $(X,\mathcal{T}_{1e},\mathcal{T}_{2e})$ is a \textit{
bitopological space}.
\end{proposition}
\begin{proof}
Follows from the fact that $\mathcal{T}_{1e}$, and $\mathcal{T}_{2e}$ are
topologies on $X$ for each $e\in E$.
\end{proof}
Proposition \ref{Propref}\ shows that corresponding to each parameter $e\in
E $, we have a bitopological space $(X,\mathcal{T}_{1e},\mathcal{T}_{2e})$. Thus a bi-soft topology on $X$ gives
rise to a parameterized family of bitopological spaces.
\begin{example}
Let $X=\{h_{1},h_{2},h_{3}\}$, $E=\{e_{1},e_{2}\}$ and
\begin{eqnarray*}
\mathcal{T}_{1} &=&\{\Phi ,\widetilde{X}
,(F_{1},E),(F_{2},E),(F_{3},E),(F_{4},E),(F_{5},E)\}\text{, and} \\
\mathcal{T}_{2} &=&\{\Phi ,\widetilde{X}
,(G_{1},E),(G_{2},E),(G_{3},E),(G_{4},E)\}\text{,}
\end{eqnarray*}
where $
(F_{1},E),(F_{2},E),(F_{3},E),(F_{4},E),(F_{5},E),(G_{1},E),(G_{2},E),(G_{3},E),
$ and $(G_{4},E)$ are soft sets over $X$, defined as follows:
\begin{equation*}
\begin{array}{ll}
F_{1}(e_{1})=\{h_{2}\}, & F_{1}(e_{2})=\{h_{1}\}, \\
F_{2}(e_{1})=\{h_{2},h_{3}\}, & F_{2}(e_{2})=\{h_{1},h_{2}\}, \\
F_{3}(e_{1})=\{h_{1},h_{2}\}, & F_{3}(e_{2})=X, \\
F_{4}(e_{1})=\{h_{1},h_{2}\}, & F_{4}(e_{2})=\{h_{1},h_{3}\}, \\
F_{5}(e_{1})=\{h_{2}\}, & F_{5}(e_{2})=\{h_{1},h_{2}\},
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{ll}
G_{1}(e_{1})=\{h_{1}\}, & G_{1}(e_{2})=\{h_{2}\}, \\
G_{2}(e_{1})=\{h_{1},h_{2}\}, & G_{2}(e_{2})=\{h_{2}\}, \\
G_{3}(e_{1})=\{h_{2}\}, & G_{3}(e_{2})=\{h_{2}\}, \\
G_{4}(e_{1})=\{\}, & G_{4}(e_{2})=\{h_{2}\}.
\end{array}
\end{equation*}
Then $\mathcal{T}_{1}$\ and $\mathcal{T}_{2}$ are soft topologies on $X$.
Therefore $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a \textit{bi-soft
topological space over }$X$. It can be easily seen that
\begin{eqnarray*}
\mathcal{T}_{1e_{1}} &=&\{\emptyset
,X,\{h_{2}\},\{h_{1},h_{2}\},\{h_{2},h_{3}\}\}, \\
\mathcal{T}_{2e_{1}} &=&\{\emptyset ,X,\{h_{1}\},\{h_{2}\},\{h_{1},h_{2}\}\},
\end{eqnarray*}
and
\begin{eqnarray*}
\mathcal{T}_{1e_{2}} &=&\{\emptyset
,X,\{h_{1}\},\{h_{1},h_{3}\},\{h_{1},h_{2}\}\}, \\
\mathcal{T}_{2e_{2}} &=&\{\emptyset ,X,\{h_{2}\}\},
\end{eqnarray*}
are topologies on $X$. Thus $(X,\mathcal{T}_{1e_{1}},\mathcal{T}_{2e_{1}})$
and $(X,\mathcal{T}_{1e_{2}},\mathcal{T}_{2e_{2}})$ are bitopological spaces
corresponding to the parameters $e_{1}$ and $e_{2}$, respectively.
\end{example}
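In the same representation, the parameterized topologies of Proposition \ref{Propref} are obtained by a one-line projection. The snippet below is illustrative only and assumes the conventions of the earlier sketches.
\begin{verbatim}
# Illustrative sketch: the parameterized topology T_e = { F(e) : (F,E) in T }
# obtained from a soft topology T (a list of soft sets) and a parameter e.
def parametrized_topology(T, e):
    return {F[e] for F in T}

# e.g. parametrized_topology(T1, "e1") recovers the family T_{1e1} listed
# in the example above, once T1 is entered as a list of dictionaries.
\end{verbatim}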
We have seen in \cite{Shabir} that the intersection of two soft topologies
is again a soft topology on $X$, but the union of two soft topologies need
not be a soft topology; examples can be found in \cite{Shabir}. Now
we define the supremum soft topology:
\begin{definition}
Let $(X,\mathcal{T}_{1},E)$ and $(X,\mathcal{T}_{2},E)$\ be two soft
topological spaces over $X.$ Let $\mathcal{T}_{1}\vee \mathcal{T}_{2}$ be
the smallest soft topology on $X$ that contains $\mathcal{T}_{1}\cup
\mathcal{T}_{2}$; it is called the supremum soft topology of $\mathcal{T}_{1}$ and $\mathcal{T}_{2}$.
\end{definition}
\begin{example}
Let $X=\{h_{1},h_{2},h_{3}\}$, $E=\{e_{1},e_{2}\}$. Let
\begin{eqnarray*}
\mathcal{T}_{1} &=&\{\Phi ,\widetilde{X},(F_{1},E),(F_{2},E)\}\text{, and} \\
\mathcal{T}_{2} &=&\{\Phi ,\widetilde{X}
,(G_{1},E),(G_{2},E),(G_{3},E),(G_{4},E)\}\text{,}
\end{eqnarray*}
where $(F_{1},E),(F_{2},E),(G_{1},E),(G_{2},E),(G_{3},E),(G_{4},E)$ are soft
sets over $X$, defined as follows:
\begin{equation*}
\begin{array}{lll}
F_{1}(e_{1})=\{h_{1}\}, & & F_{1}(e_{2})=\{h_{1},h_{2}\}, \\
F_{2}(e_{1})=\{h_{1},h_{3}\}, & & F_{2}(e_{2})=X,
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{lll}
G_{1}(e_{1})=\{h_{1}\}, & & G_{1}(e_{2})=\{h_{2}\}, \\
G_{2}(e_{1})=\{h_{1},h_{2}\}, & & G_{2}(e_{2})=\{h_{2}\}, \\
G_{3}(e_{1})=\{h_{2}\}, & & G_{3}(e_{2})=\{h_{2}\}, \\
G_{4}(e_{1})=\{\}, & & G_{4}(e_{2})=\{h_{2}\}.
\end{array}
\end{equation*}
Then $\mathcal{T}_{1}$\ and $\mathcal{T}_{2}$ are soft topologies on $X$. Now
\begin{equation*}
\mathcal{T}_{1}\vee \mathcal{T}_{2}=\{\Phi ,\widetilde{X}
,(F_{1},E),(F_{2},E),(G_{1},E),(G_{2},E),(G_{3},E),(G_{4},E),(H_{1},E)\}
\end{equation*}
where
\begin{equation*}
\begin{array}{lll}
H_{1}(e_{1})=\{h_{1},h_{2}\}, & & H_{1}(e_{2})=\{h_{1},h_{2}\}.
\end{array}
\end{equation*}
Thus $(X,\mathcal{T}_{1}\vee \mathcal{T}_{2},E)$ is the smallest \textit{
soft topological space over }$X$ that contains $\mathcal{T}_{1}\cup \mathcal{
T}_{2}$.
\end{example}
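Because both the universe and the parameter set are finite in such examples, the supremum soft topology can be computed by closing $\mathcal{T}_{1}\cup \mathcal{T}_{2}$ under pairwise unions and intersections until no new soft set appears: on finite data every member of $\mathcal{T}_{1}\vee \mathcal{T}_{2}$ is a finite union of finite intersections of members of $\mathcal{T}_{1}\cup \mathcal{T}_{2}$. A rough sketch of this closure, reusing the helpers of the earlier sketches, is given below.
\begin{verbatim}
# Illustrative sketch: smallest soft topology containing T1 and T2,
# assuming finite X and E and the helpers union, intersection, key,
# PHI, XT from the previous sketches.
def soft_join(T1, T2):
    fam = {key(S): S for S in list(T1) + list(T2) + [PHI, XT]}
    changed = True
    while changed:                      # fixed-point iteration
        changed = False
        current = list(fam.values())
        for S1 in current:
            for S2 in current:
                for S3 in (union(S1, S2), intersection(S1, S2)):
                    if key(S3) not in fam:
                        fam[key(S3)] = S3
                        changed = True
    return list(fam.values())
\end{verbatim}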
\section{Bi-Soft Separation Axioms\label{Sep}}
In the last section, the concept of bi-soft topological spaces was
introduced. In this section, separation axioms for bi-soft topological spaces
are studied.
\begin{definition}
A bi-soft topological space $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ over $X$
is said to be \textit{pair-wise soft }$T_{0}-$\textit{space} if for every
pair of distinct points $x$,$y\in X$, there is a $\mathcal{T}_{1}-$soft open
set $(F,E)$ such that $x\in (F,E)$ and $y\notin (F,E)$ or a $\mathcal{T}
_{2}- $soft open set $(G,E)$ such that $x\notin (G,E)$ and $y\in (G,E)$.
\end{definition}
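On finite data this separation property can be tested exhaustively. The following sketch is illustrative only; it reuses the soft-set representation of the earlier sketches and reads the pair $x$, $y$ symmetrically (either labelling of the two points may realise the separation), which is how the pairs are treated in the examples below.
\begin{verbatim}
# Illustrative sketch: pair-wise soft T0 test for soft topologies T1, T2
# given as lists of soft sets (dictionaries e -> frozenset).
from itertools import combinations

def belongs(x, F):
    # x belongs to the soft set (F,E) iff x is in F(e) for every parameter e
    return all(x in F[e] for e in E)

def separates(T, a, b):
    # is there a soft open set in T containing a but not b?
    return any(belongs(a, F) and not belongs(b, F) for F in T)

def is_pairwise_soft_T0(T1, T2):
    return all(separates(T1, x, y) or separates(T2, y, x) or
               separates(T1, y, x) or separates(T2, x, y)
               for x, y in combinations(X, 2))
\end{verbatim}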
\begin{example}
\label{Examp2}Let $X$ be any non-empty set and $E$ be a set of parameters.
Consider
\begin{eqnarray*}
\mathcal{T}_{1} &=&\{\Phi ,\widetilde{X}\}\text{ (the soft indiscrete topology
over }X\text{)} \\
\mathcal{T}_{2} &=&\{(F,E)|(F,E)\text{ is a soft set over }X\}\text{ (the
soft discrete topology over }X\text{)}
\end{eqnarray*}
Then $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$\ is a pair-wise soft $T_{0}-$
space.
\end{example}
\begin{proposition}
\label{Prop2}Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a bi-soft
topological space over $X$. If $(X,\mathcal{T}_{1},E)$\ or $(X,\mathcal{T}
_{2},E)$\ is a soft $T_{0}-$space then $(X,\mathcal{T}_{1},\mathcal{T}
_{2},E) $\ is a pair-wise soft $T_{0}-$space.
\begin{proof}
Let $x$,$y\in X$ be such that $x\neq y$. Suppose that $(X,\mathcal{T}_{1},E)$
\ is a soft $T_{0}-$space. Then there exists some $(F,E)\in \mathcal{T}_{1}$
such that $x\in (F,E)$ and$\ y\notin (F,E)$, or $y\in (F,E)$ and $x\notin
(F,E)$; in either case the pair $x$, $y$ is separated by a $\mathcal{T}_{1}-$
soft open set. If instead $(X,\mathcal{T}_{2},E)$\ is a soft $T_{0}-$space,
the same argument produces a separating $\mathcal{T}_{2}-$soft open set.
Relabelling $x$ and $y$ if necessary, the requirement of the definition is
met, and so $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$\ is a
pair-wise soft $T_{0}-$space.
\end{proof}
\end{proposition}
\begin{remark}
The converse of Proposition \ref{Prop2} is not true in general.
\end{remark}
\begin{example}
Let $X=\{h_{1},h_{2},h_{3},h_{4}\}$, $E=\{e_{1},e_{2}\}$ and
\begin{eqnarray*}
\mathcal{T}_{1} &=&\{\Phi ,\widetilde{X},(F,E)\}\text{, and} \\
\mathcal{T}_{2} &=&\{\Phi ,\widetilde{X},(G_{1},E),(G_{2},E),(G_{3},E)\}
\text{,}
\end{eqnarray*}
where $(F,E),(G_{1},E),(G_{2},E),$ and $(G_{3},E)$ are soft sets over $X$,
defined as follows:
\begin{equation*}
\begin{array}{lll}
F(e_{1})=\{h_{1},h_{3}\}, & & F(e_{2})=\{h_{3}\},
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{ll}
G_{1}(e_{1})=\{h_{3},h_{4}\}, & G_{1}(e_{2})=\{h_{1},h_{4}\}, \\
G_{2}(e_{1})=\{h_{2}\}, & G_{2}(e_{2})=\{h_{2}\}, \\
G_{3}(e_{1})=\{h_{2},h_{3},h_{4}\}, & G_{3}(e_{2})=\{h_{1},h_{2},h_{4}\}
\text{.}
\end{array}
\end{equation*}
Then $\mathcal{T}_{1}$\ and $\mathcal{T}_{2}$ are soft topologies on $X$.
Therefore $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a \textit{bi-soft
topological space over }$X$.
Now $h_{1},h_{2}\in X$ and $(G_{2},E)\in \mathcal{T}_{2}$ such that
\begin{equation*}
h_{2}\in (G_{2},E)\text{, }h_{1}\notin (G_{2},E)\text{.}
\end{equation*}
$h_{1},h_{3}\in X$ and $(F,E)\in \mathcal{T}_{1}$ such that
\begin{equation*}
h_{3}\in (F,E)\text{, }h_{1}\notin (F,E)\text{.}
\end{equation*}
$h_{1},h_{4}\in X$ and $(G_{1},E)\in \mathcal{T}_{2}$ such that
\begin{equation*}
h_{4}\in (G_{1},E)\text{, }h_{1}\notin (G_{1},E)\text{.}
\end{equation*}
$h_{2},h_{3}\in X$ and $(G_{2},E)\in \mathcal{T}_{2}$ such that
\begin{equation*}
h_{2}\in (G_{2},E)\text{, }h_{3}\notin (G_{2},E)\text{.}
\end{equation*}
$h_{2},h_{4}\in X$ and $(G_{2},E)\in \mathcal{T}_{2}$ such that
\begin{equation*}
h_{2}\in (G_{2},E)\text{, }h_{4}\notin (G_{2},E)\text{.}
\end{equation*}
Finally $h_{3},h_{4}\in X$ and $(G_{3},E)\in \mathcal{T}_{2}$ such that
\begin{equation*}
h_{4}\in (G_{3},E)\text{, }h_{3}\notin (G_{3},E)\text{.}
\end{equation*}
Thus $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a pair-wise \textit{soft }$
T_{0}-$\textit{\ space over }$X$.
We observe that $h_{1},h_{2}\in X$ and there does not exist any $(F,E)\in
\mathcal{T}_{1}$ such that $h_{1}\in (F,E)$, $h_{2}\notin (F,E)$ or $
h_{2}\in (F,E)$, $h_{1}\notin (F,E)$. Therefore $(X,\mathcal{T}_{1},E)$ is
not a \textit{soft }$T_{0}-$\textit{\ space over }$X$. Similarly $
h_{1},h_{3}\in X$ and there does not exist any $(G,E)\in \mathcal{T}_{2}$
such that $h_{1}\in (G,E)$, $h_{3}\notin (G,E)$ or $h_{3}\in (G,E)$, $
h_{1}\notin (G,E)$ so $(X,\mathcal{T}_{2},E)$ is not a \textit{soft }$T_{0}-$
\textit{\ space} either.
\end{example}
\begin{proposition}
\label{Prop1}Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a bi-soft
topological space over $X$. If $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$\ is a
pair-wise soft $T_{0}-$space then $(X,\mathcal{T}_{1}\vee \mathcal{T}_{2},E)$
\ is a soft $T_{0}-$space.
\begin{proof}
Let $x$,$y\in X$ be such that $x\neq y$. Since $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$\ is a pair-wise soft $T_{0}-$space, there exists some $(F,E)\in
\mathcal{T}_{1}$ such that $x\in (F,E)$ and$\ y\notin (F,E)$ or some $
(G,E)\in \mathcal{T}_{2}$ such that $y\in (G,E)$ and $x\notin (G,E)$. In
either case the separating soft set belongs to $\mathcal{T}_{1}\vee \mathcal{T}_{2}$.\ Hence $
(X,\mathcal{T}_{1}\vee \mathcal{T}_{2},E)$\ is a soft $T_{0}-$space.
\end{proof}
\end{proposition}
\begin{remark}
The converse of Proposition \ref{Prop1}, is not true. This is shown by the
following example:
\end{remark}
\begin{example}
Let $X=\{h_{1},h_{2},h_{3},h_{4}\}$, $E=\{e_{1},e_{2}\}$ and
\begin{eqnarray*}
\mathcal{T}_{1} &=&\{\Phi ,\widetilde{X},(F_{1},E),(F_{2},E)\}\text{, and} \\
\mathcal{T}_{2} &=&\{\Phi ,\widetilde{X},(G,E)\}\text{,}
\end{eqnarray*}
where $(F_{1},E),(F_{2},E),$ and $(G,E)$ are soft sets over $X$, defined as
follows:
\begin{equation*}
\begin{array}{ll}
F_{1}(e_{1})=\{h_{1},h_{4}\}, & F_{1}(e_{2})=\{h_{4}\}, \\
F_{2}(e_{1})=\{h_{4}\}, & F_{2}(e_{2})=\{\},
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{lll}
G(e_{1})=\{h_{2},h_{4}\}, & & G(e_{2})=\{h_{1},h_{2}\}.
\end{array}
\end{equation*}
Then $\mathcal{T}_{1}$\ and $\mathcal{T}_{2}$ are soft topologies on $X$.
Therefore $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a \textit{bi-soft
topological space over }$X$. Now
\begin{equation*}
\mathcal{T}_{1}\vee \mathcal{T}_{2}=\{\Phi ,\widetilde{X}
,(F_{1},E),(F_{2},E),(G,E),(H,E)\}
\end{equation*}
where
\begin{equation*}
\begin{array}{lll}
H(e_{1})=\{h_{1},h_{2},h_{4}\}, & & H(e_{2})=\{h_{1},h_{2},h_{4}\},
\end{array}
\end{equation*}
so $(X,\mathcal{T}_{1}\vee \mathcal{T}_{2},E)$ is a \textit{soft topological
space over }$X$ that contains $\mathcal{T}_{1}\cup \mathcal{T}_{2}$.
For $h_{1},h_{3}\in X$, we cannot find any soft sets $(F,E)\in \mathcal{T}
_{1}$ or $(G,E)\in \mathcal{T}_{2}$ such that
\begin{eqnarray*}
h_{1} &\in &(F,E)\text{, }h_{3}\notin (F,E)\text{ or} \\
h_{3} &\in &(G,E)\text{, }h_{1}\notin (G,E)\text{.}
\end{eqnarray*}
Thus $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is not pair-wise \textit{soft} $
T_{0}-$space.
Now $h_{1},h_{2}\in X$ and $(G,E)\in \mathcal{T}_{2}$ such that
\begin{equation*}
h_{2}\in (G,E)\text{, }h_{1}\notin (G,E)\text{.}
\end{equation*}
$h_{1},h_{3}\in X$ and $(H,E)\in \mathcal{T}_{1}\vee \mathcal{T}_{2}$ such
that
\begin{equation*}
h_{1}\in (H,E)\text{, }h_{3}\notin (H,E)\text{.}
\end{equation*}
$h_{1},h_{4}\in X$ and $(F_{1},E)\in \mathcal{T}_{1}$ such that
\begin{equation*}
h_{4}\in (F_{1},E)\text{, }h_{1}\notin (F_{1},E)\text{.}
\end{equation*}
$h_{2},h_{3}\in X$ and $(G,E)\in \mathcal{T}_{2}$ such that
\begin{equation*}
h_{2}\in (G,E)\text{, }h_{3}\notin (G,E)\text{.}
\end{equation*}
$h_{2},h_{4}\in X$ and $(F_{1},E)\in \mathcal{T}_{1}$ such that
\begin{equation*}
h_{4}\in (F_{1},E)\text{, }h_{2}\notin (F_{1},E)\text{.}
\end{equation*}
Finally $h_{3},h_{4}\in X$ and $(F_{1},E)\in \mathcal{T}_{1}$ such that
\begin{equation*}
h_{4}\in (F_{1},E)\text{, }h_{3}\notin (F_{1},E)\text{.}
\end{equation*}
Thus $(X,\mathcal{T}_{1}\vee \mathcal{T}_{2},E)$ is a \textit{soft }$T_{0}-$
\textit{\ space over }$X$.
\end{example}
\begin{example}
\label{Examp1}Let $X=\{h_{1},h_{2},h_{3},h_{4}\}$, $E=\{e_{1},e_{2}\}$ and
\begin{eqnarray*}
\mathcal{T}_{1} &=&\{\Phi ,\widetilde{X},(F,E)\}\text{, and} \\
\mathcal{T}_{2} &=&\{\Phi ,\widetilde{X},(G_{1},E),(G_{2},E),(G_{3},E)\}
\text{,}
\end{eqnarray*}
where $(F,E),(G_{1},E),(G_{2},E),$ and $(G_{3},E)$ are soft sets over $X$,
defined as follows:
\begin{equation*}
\begin{array}{lll}
F(e_{1})=\{h_{1},h_{3}\}, & & F(e_{2})=\{h_{3}\},
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{lll}
G_{1}(e_{1})=\{h_{3},h_{4}\}, & & G_{1}(e_{2})=\{h_{1},h_{4}\}, \\
G_{2}(e_{1})=\{h_{2}\}, & & G_{2}(e_{2})=\{h_{2}\}, \\
G_{3}(e_{1})=\{h_{2},h_{3},h_{4}\}, & & G_{3}(e_{2})=\{h_{1},h_{2},h_{4}\}
\text{.}
\end{array}
\end{equation*}
Then $\mathcal{T}_{1}$\ and $\mathcal{T}_{2}$ are soft topologies on $X$.
Therefore $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a \textit{bi-soft
topological space over }$X$. Also observe that $(X,\mathcal{T}_{1},\mathcal{T
}_{2},E)$ is a pair-wise soft $T_{0}-$space. Now
\begin{eqnarray*}
\mathcal{T}_{1e_{1}} &=&\{\emptyset ,X,\{h_{1},h_{3}\}\}, \\
\mathcal{T}_{2e_{1}} &=&\{\emptyset
,X,\{h_{2}\},\{h_{3},h_{4}\},\{h_{2},h_{3},h_{4}\}\},
\end{eqnarray*}
and
\begin{eqnarray*}
\mathcal{T}_{1e_{2}} &=&\{\emptyset ,X,\{h_{3}\}\}, \\
\mathcal{T}_{2e_{2}} &=&\{\emptyset
,X,\{h_{2}\},\{h_{1},h_{4}\},\{h_{1},h_{2},h_{4}\}\},
\end{eqnarray*}
are the corresponding parametrized topologies on $X$. Considering the \textit{
bitopological space} $(X,\mathcal{T}_{1e_{2}},\mathcal{T}_{2e_{2}})$, one
can easily see that for the distinct points $h_{1},h_{4}\in X$ no $\mathcal{T
}_{1e_{2}}-$open set and no $\mathcal{T}_{2e_{2}}-$open set contains exactly
one of $h_{1}$ and $h_{4}$. Thus $(X,\mathcal{T}_{1e_{2}},\mathcal{T}
_{2e_{2}})$ is not a pair-wise $T_{0}-$space.
\end{example}
Example \ref{Examp1} shows that the parametrized bitopological spaces need
not be pair-wise $T_{0}$ even if the given bi-soft topological space is a
pair-wise soft $T_{0}-$space. The following proposition provides an
alternative condition which guarantees that the corresponding parameterized
families are pair-wise $T_{0}$.
\begin{proposition}
Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a bi-soft topological space
over $X$. If for every pair of distinct points $x$,$y\in X$ there exists a $
\mathcal{T}_{1}-$soft open set $(F,E)$ such that $x\in (F$,$E)$ and$\ y\in
(F $,$E)^{c}$ or a $\mathcal{T}_{2}-$soft open set $(G,E)$ such that $y\in
(G $,$E)$ and $x\in (G$,$E)^{c}$, then $(X,\mathcal{T}_{1},\mathcal{T}
_{2},E) $ is a pair-wise soft $T_{0}-$space over $X$ and $(X,\mathcal{T}
_{1e},\mathcal{T}_{2e})$\ is a pair-wise $T_{0}-$space for each $e\in E$.
\end{proposition}
\begin{proof}
Let $x$,$y\in X$ be such that $x\neq y$. By hypothesis there exists $(F,E)\in \mathcal{T}_{1}$ such
that $x\in (F,E)$ and$\ y\in (F,E)^{c}$, or $(G,E)\in \mathcal{T}_{2}$ such
that $y\in (G,E)$ and $x\in (G,E)^{c}$. If $y\in (F,E)^{c}$\ then $y\in
(F(e))^{c}$\ for each $e\in E$. This implies that $y\notin F(e)$\ for each $
e\in E$. Therefore $y\notin (F,E)$.\ Similarly we can show that if $x\in
(G,E)^{c}$ then $x\notin (G,E)$.\ Hence $(X,\mathcal{T}_{1},\mathcal{T}
_{2},E)$\ is a pair-wise soft $T_{0}-$space.\ Now for any $e\in E$,\ $(X,
\mathcal{T}_{1e},\mathcal{T}_{2e})$\ is a \textit{bitopological space}. By
above discussion we have $x\in F(e)\in \mathcal{T}_{1e}$ and\ $y\notin F(e)$
or\ $y\in G(e)\in \mathcal{T}_{2e}$ and\ $x\notin G(e)$. Thus $(X,\mathcal{T}
_{1e},\mathcal{T}_{2e})$ is a pair-wise $T_{0}-$space.
\end{proof}
\begin{proposition}
Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a bi-soft topological space
over $X$ and $Y$ be a non-empty subset of $X$. If $(X,\mathcal{T}_{1},
\mathcal{T}_{2},E)$\ is a pair-wise soft $T_{0}-$space then $(Y,\mathcal{T}
_{1Y},\mathcal{T}_{2Y},E)$\ is also a pair-wise soft $T_{0}-$space.
\begin{proof}
Let $x$,$y\in Y$ be such that $x\neq y$. Then there exists some soft set $
(F,E)\in \mathcal{T}_{1}$ or $(G,E)\in \mathcal{T}_{2}$ such that $x\in
(F,E) $ and$\ y\notin (F,E)$ or $y\in (G,E)$ and $x\notin (G,E)$. Suppose
that there exists some soft set $(F,E)\in \mathcal{T}_{1}$ such that $x\in
(F,E)$ and$\ y\notin (F,E)$. Now $x\in Y$\ implies that $x\in \widetilde{Y}$
.\ So $x\in \widetilde{Y}$ and $x\in (F,E)$. Hence $x\in \widetilde{Y}\cap
(F,E)=(^{Y}F,E)$. Consider $y\notin (F,E)$, this means that $y\notin F(e)$
for some $e\in E$.\ Then $y\notin Y\cap F(e)=Y(e)\cap F(e)$.\ Therefore $
y\notin \widetilde{Y}\cap (F,E)=(^{Y}F,E)$. Similarly it can be proved that
if $y\in (G,E)$ and $x\notin (G,E)$ then $y\in (^{Y}G,E)$ and $x\notin
(^{Y}G,E)$.\ Thus $(Y,\mathcal{T}_{1Y},\mathcal{T}_{2Y},E)$\ is also a
pair-wise soft $T_{0}-$space.
\end{proof}
\end{proposition}
\begin{definition}
A bi-soft topological space $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ over $X$
is said to be pair-wise soft $T_{1}-$space if for every pair of distinct
points $x$,$y\in X$, there is a $\mathcal{T}_{1}-$soft open set $(F,E)$ such
that $x\in (F,E)$ and $y\notin (F,E)$ and a $\mathcal{T}_{2}-$soft open set $
(G,E)$ such that $x\notin (G,E)$ and $y\in (G,E)$.
\end{definition}
\begin{example}
\label{Examp3}Let $X=\{h_{1},h_{2},h_{3}\}$, $E=\{e_{1},e_{2}\}$ and
\begin{eqnarray*}
\mathcal{T}_{1} &=&\{\Phi ,\widetilde{X}
,(F_{1},E),(F_{2},E),(F_{3},E),(F_{4},E),(F_{5},E),(F_{6},E),(F_{7},E)\}
\text{, and} \\
\mathcal{T}_{2} &=&\{\Phi ,\widetilde{X}
,(G_{1},E),(G_{2},E),(G_{3},E),(G_{4},E),(G_{5},E),(G_{6},E)\}\text{,}
\end{eqnarray*}
where $(F_{1},E),$ $(F_{2},E),$ $(F_{3},E),$ $(F_{4},E),$ $(F_{5},E),$ $
(F_{6},E),$ $(F_{7},E),$ $(G_{1},E),$ $(G_{2},E),$ $(G_{3},E),$
$(G_{4},E),$ $(G_{5},E)$ and $(G_{6},E)$ are soft sets over $X$, defined as
follows:
\begin{equation*}
\begin{array}{ll}
F_{1}(e_{1})=\{h_{1}\}, & F_{1}(e_{2})=\{h_{1},h_{3}\}, \\
F_{2}(e_{1})=\{h_{3}\}, & F_{2}(e_{2})=\{h_{3}\}, \\
F_{3}(e_{1})=\{h_{1},h_{3}\}, & F_{3}(e_{2})=\{h_{1},h_{3}\}, \\
F_{4}(e_{1})=\{\}, & F_{4}(e_{2})=\{h_{3}\}, \\
F_{5}(e_{1})=\{h_{2},h_{3}\}, & F_{5}(e_{2})=\{h_{2}\}, \\
F_{6}(e_{1})=\{h_{3}\}, & F_{6}(e_{2})=\{\}, \\
F_{7}(e_{1})=\{h_{2},h_{3}\}, & F_{7}(e_{2})=\{h_{2},h_{3}\},
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{lll}
G_{1}(e_{1})=\{h_{2}\}, & & G_{1}(e_{2})=\{h_{2}\}, \\
G_{2}(e_{1})=\{h_{3}\}, & & G_{2}(e_{2})=\{h_{3}\}, \\
G_{3}(e_{1})=\{h_{2},h_{3}\}, & & G_{3}(e_{2})=\{h_{2},h_{3}\}, \\
G_{4}(e_{1})=\{h_{1},h_{2}\}, & & G_{4}(e_{2})=\{h_{1},h_{2}\}, \\
G_{5}(e_{1})=\{h_{1},h_{3}\}, & & G_{5}(e_{2})=\{h_{1},h_{3}\}, \\
G_{6}(e_{1})=\{h_{1}\}, & & G_{6}(e_{2})=\{h_{1}\}.
\end{array}
\end{equation*}
Then $\mathcal{T}_{1}$\ and $\mathcal{T}_{2}$ are soft topologies on $X$.
Therefore $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a \textit{bi-soft
topological space over }$X$. One can easily see that $(X,\mathcal{T}_{1},
\mathcal{T}_{2},E)$ is a pair-wise \textit{soft} $T_{1}-$\textit{space over }
$X$.
\end{example}
\begin{proposition}
\label{Prop4}Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a bi-soft
topological space over $X$. Then $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$\ is
a pair-wise soft $T_{1}-$space if and only if $(X,\mathcal{T}_{1},E)$\ and $
(X,\mathcal{T}_{2},E)$\ are soft $T_{1}-$spaces.
\begin{proof}
Let $x$,$y\in X$ be such that $x\neq y$. Suppose that $(X,\mathcal{T}_{1},E)$
\ and $(X,\mathcal{T}_{2},E)$\ are soft $T_{1}-$spaces. Then there exist
some $(F,E)\in \mathcal{T}_{1}$ and $(G,E)\in \mathcal{T}_{2}$\ such that $
x\in (F,E)$ and$\ y\notin (F,E)$ and $y\in (G,E)$ and $x\notin (G,E)$. This
is exactly the requirement, and so $(X,\mathcal{T}_{1},\mathcal{T}
_{2},E)$\ is a pair-wise soft $T_{1}-$space. Conversely, assume that $(X,
\mathcal{T}_{1},\mathcal{T}_{2},E)$\ is a pair-wise soft $T_{1}-$space. Then
there exist some $(F_{1},E)\in \mathcal{T}_{1}$ and $(G_{1},E)\in \mathcal{T}
_{2}$\ such that $x\in (F_{1},E)$ and $y\notin (F_{1},E)$ and $y\in
(G_{1},E) $ and $x\notin (G_{1},E)$. Applying the hypothesis with the roles of
$x$ and $y$ interchanged, there also exist soft sets $
(F_{2},E)\in \mathcal{T}_{1}$ and $(G_{2},E)\in \mathcal{T}_{2}$\ such that $
y\in (F_{2},E)$ and $x\notin (F_{2},E)$ and $x\in (G_{2},E)$ and $y\notin
(G_{2},E)$. Thus $(X,\mathcal{T}_{1},E)$\ and $(X,\mathcal{T}_{2},E)$\ are
soft $T_{1}-$spaces.
\end{proof}
\end{proposition}
\begin{proposition}
\label{Prop3}Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a bi-soft
topological space over $X$. If $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$\ is a
pair-wise soft $T_{1}-$space then $(X,\mathcal{T}_{1}\vee \mathcal{T}_{2},E)$
\ is also a soft $T_{1}-$space.
\begin{proof}
Let $x$,$y\in X$ be such that $x\neq y$. There exists $(F,E)\in \mathcal{T}
_{1}$ such that $x\in (F,E)$,$\ y\notin (F,E)$ and $(G,E)\in \mathcal{T}_{2}$
such that $y\in (G,E)$ and $x\notin (G,E)$. So $(F,E),$ $(G,E)\in \mathcal{T}
_{1}\vee \mathcal{T}_{2}$ and thus $(X,\mathcal{T}_{1}\vee \mathcal{T}
_{2},E) $\ is a soft $T_{1}-$space.
\end{proof}
\end{proposition}
\begin{remark}
The converse of Proposition \ref{Prop3}, is not true. This is shown by the
following example:
\end{remark}
\begin{example}
Let $X=\{h_{1},h_{2}\}$, $E=\{e_{1},e_{2}\}$ and
\begin{eqnarray*}
\mathcal{T}_{1} &=&\{\Phi ,\widetilde{X},(F,E)\}\text{, and} \\
\mathcal{T}_{2} &=&\{\Phi ,\widetilde{X},(G,E)\}\text{,}
\end{eqnarray*}
where $(F,E),$ and $(G,E)$ are soft sets over $X$, defined as follows:
\begin{equation*}
\begin{array}{lll}
F(e_{1})=\{h_{1}\}, & & F(e_{2})=X, \\
G(e_{1})=X, & & G(e_{2})=\{h_{2}\}\text{.}
\end{array}
\end{equation*}
Then $\mathcal{T}_{1}$\ and $\mathcal{T}_{2}$ are soft topologies on $X$.
Therefore $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a \textit{bi-soft
topological space over }$X$. Both of $(X,\mathcal{T}_{1},E)$ and $(X,
\mathcal{T}_{2},E)$\ are not soft $T_{1}-$spaces over $X$ and so $(X,
\mathcal{T}_{1},\mathcal{T}_{2},E)$ is not a pair-wise soft $T_{1}-$space by
Proposition \ref{Prop4}. Now
\begin{equation*}
\mathcal{T}_{1}\vee \mathcal{T}_{2}=\{\Phi ,\widetilde{X},(F,E),(G,E),(H,E)\}
\end{equation*}
where
\begin{equation*}
\begin{array}{lll}
H(e_{1})=\{h_{1}\}, & & H(e_{2})=\{h_{2}\}\text{.}
\end{array}
\end{equation*}
So $(X,\mathcal{T}_{1}\vee \mathcal{T}_{2},E)$ is a \textit{soft topological
space over }$X$ containing $\mathcal{T}_{1}\cup \mathcal{T}_{2}$. For $
h_{1},h_{2}\in X$, $(F,E)\in \mathcal{T}_{1}$, $(G,E)\in \mathcal{T}_{2}$
such that
\begin{equation*}
h_{1}\in (F,E)\text{, }h_{2}\notin (F,E)\text{ and }h_{2}\in (G,E)\text{, }
h_{1}\notin (G,E)\text{.}
\end{equation*}
Thus $(X,\mathcal{T}_{1}\vee \mathcal{T}_{2},E)$ is a \textit{soft }$T_{1}-$
\textit{space over }$X$.
\end{example}
Consider the following example:
\begin{example}
Let $X=\{h_{1},h_{2}\}$, $E=\{e_{1},e_{2}\}$ and
\begin{eqnarray*}
\mathcal{T}_{1} &=&\{\Phi ,\widetilde{X},(F_{1},E),(F_{2},E),(F_{3},E)\}
\text{, and} \\
\mathcal{T}_{2} &=&\{\Phi ,\widetilde{X},(G_{1},E),(G_{2},E),(G_{3},E)\}
\text{,}
\end{eqnarray*}
where $(F_{1},E),$ $(F_{2},E),$ $(F_{3},E),$ $(G_{1},E),$ $(G_{2},E),$ and $
(G_{3},E)$ are soft sets over $X$, defined as follows:
\begin{equation*}
\begin{array}{ll}
F_{1}(e_{1})=\{h_{2}\}, & F_{1}(e_{2})=X, \\
F_{2}(e_{1})=X, & F_{2}(e_{2})=\{h_{1}\}, \\
F_{3}(e_{1})=\{h_{2}\}, & F_{3}(e_{2})=\{h_{1}\},
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{ll}
G_{1}(e_{1})=\{h_{1}\}, & G_{1}(e_{2})=X, \\
G_{2}(e_{1})=X, & G_{2}(e_{2})=\{h_{2}\}, \\
G_{3}(e_{1})=\{h_{1}\}, & G_{3}(e_{2})=\{h_{2}\}\text{.}
\end{array}
\end{equation*}
Then $\mathcal{T}_{1}$\ and $\mathcal{T}_{2}$ are soft topologies on $X$.
Therefore $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a \textit{bi-soft
topological space over }$X$. Both of $(X,\mathcal{T}_{1},E)$ and $(X,
\mathcal{T}_{2},E)$\ are soft $T_{1}-$spaces over $X$ and so $(X,\mathcal{T}
_{1},\mathcal{T}_{2},E)$ is also a pair-wise soft $T_{1}-$space by
Proposition \ref{Prop4}. Now
\begin{eqnarray*}
\mathcal{T}_{1e_{1}} &=&\{\emptyset ,X,\{h_{2}\}\}, \\
\mathcal{T}_{2e_{1}} &=&\{\emptyset ,X,\{h_{1}\}\},
\end{eqnarray*}
and
\begin{eqnarray*}
\mathcal{T}_{1e_{2}} &=&\{\emptyset ,X,\{h_{1}\}\}, \\
\mathcal{T}_{2e_{2}} &=&\{\emptyset ,X,\{h_{2}\}\},
\end{eqnarray*}
are corresponding parametrized topologies on $X$. Considering the \textit{
bitopological space} $(X,\mathcal{T}_{1e_{1}},\mathcal{T}_{2e_{1}})$, we see
that $h_{1},h_{2}\in X$ and there do not exist any $\mathcal{T}_{1e_{1}}-$
open set $U$ such that $h_{1}\in U$, $h_{2}\notin U$ and $\mathcal{T}
_{2e_{1}}-$open set $V$ such that $h_{2}\in V$, $h_{1}\notin V$. Thus $(X,
\mathcal{T}_{1e_{1}},\mathcal{T}_{2e_{1}})$ is not a pair-wise $T_{1}-$
space. Similarly $(X,\mathcal{T}_{1e_{2}},\mathcal{T}_{2e_{2}})$ is not a
pair-wise $T_{1}-$space either.
\end{example}
The following proposition provides a condition which addresses this problem
for the corresponding parameterized topologies.
\begin{proposition}
Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a bi-soft topological space
over $X$. If for every pair of distinct points $x$,$y\in X$ there exist a $\mathcal{
T}_{1}-$soft open set $(F,E)$ such that $x\in (F$,$E)$, $y\in (F$,$E)^{c}$
and a $\mathcal{T}_{2}-$soft open set $(G,E)$ such that $y\in (G$,$E)$, $
x\in (G$,$E)^{c}$, then $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a
pair-wise soft $T_{1}-$space over $X$ and $(X,\mathcal{T}_{1e},\mathcal{T}
_{2e})$\ is a pair-wise $T_{1}-$space for each $e\in E$.
\end{proposition}
\begin{proof}
Straightforward.
\end{proof}
\begin{proposition}
Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a bi-soft topological space
over $X$ and $Y$ be a non-empty subset of $X$. If $(X,\mathcal{T}_{1},
\mathcal{T}_{2},E)$\ is a pair-wise soft $T_{1}-$space then $(Y,\mathcal{T}
_{1Y},\mathcal{T}_{2Y},E)$\ is also a pair-wise soft $T_{1}-$space.
\begin{proof}
Let $x$,$y\in Y$ be such that $x\neq y$. Then there exist soft sets $
(F,E)\in \mathcal{T}_{1}$ and $(G,E)\in \mathcal{T}_{2}$ such that $x\in
(F,E)$,$\ y\notin (F,E)$ and $y\in (G,E)$, $x\notin (G,E)$. Now $x\in Y$\
implies that $x\in \widetilde{Y}$. Hence $x\in \widetilde{Y}\cap
(F,E)=(^{Y}F,E)$\ where $(F,E)\in \mathcal{T}_{1}$. Consider $y\notin (F,E)$
, this means that $y\notin F(e)$ for some $e\in E$.\ Then $y\notin Y\cap
F(e)=Y(e)\cap F(e)$.\ Therefore $y\notin \widetilde{Y}\cap (F,E)=(^{Y}F,E)$.
Similarly it can also be proved that $y\in (G,E)$ and $x\notin (G,E)$
implies that $y\in (^{Y}G,E)$ and $x\notin (^{Y}G,E)$.\ Thus $(Y,\mathcal{T}
_{1Y},\mathcal{T}_{2Y},E)$\ is also a pair-wise soft $T_{1}-$space.
\end{proof}
\end{proposition}
\begin{proposition}
Every pair-wise soft $T_{1}-$space is also a pair-wise soft $T_{0}-$space.
\begin{proof}
Straightforward.
\end{proof}
\end{proposition}
\begin{example}
The bi-soft topological space of Example \ref{Examp2} is a pair-wise soft $T_{0}-$space which is not a
pair-wise soft $T_{1}-$space over $X$. Another example is given by taking $
X=\{h_{1},h_{2}\}$, $E=\{e_{1},e_{2}\}$ and
\begin{eqnarray*}
\mathcal{T}_{1} &=&\{\Phi ,\widetilde{X},(F,E)\}\text{, and} \\
\mathcal{T}_{2} &=&\{\Phi ,\widetilde{X},(G,E)\}\text{,}
\end{eqnarray*}
where $(F,E),$ and $(G,E)$ are soft sets over $X$, defined as follows:
\begin{equation*}
\begin{array}{ll}
F(e_{1})=X, & F(e_{2})=\{h_{2}\}, \\
G(e_{1})=\{h_{1}\}, & G(e_{2})=X\text{.}
\end{array}
\end{equation*}
Then $\mathcal{T}_{1}$\ and $\mathcal{T}_{2}$ are soft topologies on $X$.
Therefore $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a \textit{bi-soft
topological space over }$X$. Both of $(X,\mathcal{T}_{1},E)$ and $(X,
\mathcal{T}_{2},E)$\ are not soft $T_{1}-$spaces over $X$ and so $(X,
\mathcal{T}_{1},\mathcal{T}_{2},E)$ is not a pair-wise soft $T_{1}-$space by
Proposition \ref{Prop4}, but it is evident that $(X,\mathcal{T}_{1},\mathcal{
T}_{2},E)$ is a pair-wise soft $T_{0}-$space over $X$.
\end{example}
\begin{definition}
A bi-soft topological space $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ over $X$
is said to be pair-wise soft $T_{2}-$space or pair-wise Hausdorff space if
for every pair of distinct points $x$,$y\in X$, there is a $\mathcal{T}_{1}-$
soft open set $(F,E)$ and a $\mathcal{T}_{2}-$soft open set $(G,E)$ such
that $x\in (F,E)$ and $y\in (G,E)$ and $(F,E)\cap (G,E)=\Phi $.
\end{definition}
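For finite examples, pair-wise soft Hausdorffness can likewise be decided by brute force. The sketch below is illustrative only; it reuses \texttt{belongs} from the earlier sketch and checks every ordered pair of distinct points, so that both roles of $x$ and $y$ are covered, in line with the way the examples below treat the pairs.
\begin{verbatim}
# Illustrative sketch: pair-wise soft T2 (Hausdorff) test.
from itertools import permutations

def soft_disjoint(F, G):
    # (F,E) cap (G,E) = Phi  iff  F(e) and G(e) are disjoint for every e
    return all(not (F[e] & G[e]) for e in E)

def is_pairwise_soft_T2(T1, T2):
    return all(
        any(belongs(x, F) and belongs(y, G) and soft_disjoint(F, G)
            for F in T1 for G in T2)
        for x, y in permutations(X, 2))
\end{verbatim}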
\begin{proposition}
Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a bi-soft topological space
over $X$. If $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a pair-wise soft $
T_{2}-$space over $X$ then $(X,\mathcal{T}_{1e},\mathcal{T}_{2e})$\ is a
pair-wise $T_{2}-$space for each $e\in E$.
\end{proposition}
\begin{proof}
Suppose that $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a pair-wise soft $
T_{2}-$space over $X$. For any $e\in E$
\begin{eqnarray*}
\mathcal{T}_{1e} &=&\{F(e)\text{ }|\text{ }(F,E)\in \mathcal{T}_{1}\text{ }\}
\\
\mathcal{T}_{2e} &=&\{G(e)\text{ }|\text{ }(G,E)\in \mathcal{T}_{2}\text{ }
\}.
\end{eqnarray*}
Let $x$,$y\in X$ be such that $x\neq y$, then there exist $(F,E)\in \mathcal{
T}_{1}$, $(G,E)\in \mathcal{T}_{2}$ such that
\begin{equation*}
x\in (F,E)\text{, }y\in (G,E)\text{ and }(F,E)\cap (G,E)=\Phi
\end{equation*}
This implies that
\begin{equation*}
x\in F(e)\in \mathcal{T}_{1e}\text{, }y\in G(e)\in \mathcal{T}_{2e}\text{
and }F(e)\cap G(e)=\emptyset \text{.}
\end{equation*}
Thus $(X,\mathcal{T}_{1e},\mathcal{T}_{2e})$\ is a pair-wise $T_{2}-$space
for each $e\in E$.
\end{proof}
\begin{remark}
If $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a pair-wise soft $T_{2}-$
space over $X$, then $(X,\mathcal{T}_{1},E)$\ and $(X,\mathcal{T}_{2},E)$\
need not be soft $T_{2}-$spaces over $X$.
\end{remark}
\begin{example}
Let $X$ be an infinite set and $E$ be the set of parameters. We define
\begin{eqnarray*}
\mathcal{T}_{1} &=&\{(F,E)|(F,E)\text{ is a soft set over }X\}\text{ (the soft
discrete topology over }X\text{)} \\
\mathcal{T}_{2} &=&\{\Phi \}\cup \{(F,E)|(F,E)\text{ is a soft set over }X
\text{ and }F^{c}(e)\text{ is finite for all }e\in E\}\text{.}
\end{eqnarray*}
Obviously $\mathcal{T}_{1}$ is a soft topology over $X$. We verify that $
\mathcal{T}_{2}$\ is a soft topology as follows:
\begin{enumerate}
\item $\Phi \in \mathcal{T}_{2}$ and $\tilde{X}^{c}=\Phi \Rightarrow $ $
\tilde{X}\in \mathcal{T}_{2}$.
\item Let $\{(F_{i},E)|$ $i\in I$ $\}$ be a collection of soft sets in $
\mathcal{T}_{2}$. If every $(F_{i},E)=\Phi $ then the union is $\Phi \in
\mathcal{T}_{2}$; otherwise, discarding the members equal to $\Phi $ does not
change the union, so we may assume that $F_{i}^{c}(e)$ is finite for all $i\in I$
and all $e\in E$. Then $\underset{i\in I}{\cap }F_{i}^{c}(e)=(\underset{i\in I}{\cup }
F_{i})^{c}(e)$ is also finite. This means that $\underset{i\in I}{\cup }
(F_{i},E)\in \mathcal{T}_{2}$.
\item Let $(F,E)$, $(G,E)\in \mathcal{T}_{2}$. If either of them is $\Phi $
then $(F,E)\cap (G,E)=\Phi \in \mathcal{T}_{2}$; otherwise $F^{c}(e)$ and $
G^{c}(e)$ are finite sets, and so is their union $F^{c}(e)\cup G^{c}(e)$. Thus $
(F\cap G)^{c}(e)$ is finite for all $e\in E$, which shows that $(F,E)\cap
(G,E)\in \mathcal{T}_{2}$.
\end{enumerate}
Then $\mathcal{T}_{1}$ and $\mathcal{T}_{2}$ are soft topologies on $X$. For
any $x,y\in X$ where $x\neq y$, $(x,E)\in \mathcal{T}_{1}$ and $(x,E)^{c}\in
\mathcal{T}_{2}$ such that
\begin{equation*}
x\in (x,E),y\in (x,E)^{c}\text{ and }(x,E)\cap (x,E)^{c}=\Phi \text{.}
\end{equation*}
Thus $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a pair-wise soft $T_{2}-$
space over $X$.
Now, we suppose that there are soft sets $(G_{1},E)$, $(G_{2},E)\in \mathcal{
T}_{2}$ such that
\begin{equation*}
x\in (G_{1},E),y\in (G_{2},E)\text{ and }(G_{1},E)\cap (G_{2},E)=\Phi \text{.
}
\end{equation*}
But then, we must have $(G_{1},E)\tilde{\subset}(G_{2},E)^{c}\Rightarrow
G_{1}(e)\subseteq G_{2}^{c}(e)$ for all $e\in E$, which is not possible
because $(G_{1},E)\neq \Phi $, so $G_{1}(e)$ is cofinite (hence infinite), while $G_{2}^{c}(e)$ is finite. Therefore $(X,
\mathcal{T}_{2},E)$ is not a soft $T_{2}-$space over $X$.
\end{example}
\begin{remark}
If $(X,\mathcal{T}_{1},E)$\ and $(X,\mathcal{T}_{2},E)$\ are soft $T_{2}-$
spaces over $X$, then $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ need not be a
pair-wise soft $T_{2}-$space over $X$.
\end{remark}
\begin{example}
Let $X$ be an infinite set and $E$ be the set of parameters. Let $x\neq y$,
where $x,y\in X$, we define
\begin{gather*}
\mathcal{T}(x)_{1}=\{(F,E)|(F,E)\text{ is a soft set over }X\text{ and }x\in
(F,E)^{c}\}\cup \{(F,E)|(F,E)\text{ is a } \\
\text{soft set over }X\text{ and }F^{c}(e)\text{ is finite for all }e\in
E\}, \\
\mathcal{T}(y)_{2}=\{(G,E)|(G,E)\text{ is a soft set over }X\text{ and }y\in
(G,E)^{c}\}\cup \{(G,E)|(G,E)\text{ is a } \\
\text{soft set over }X\text{ and }G^{c}(e)\text{ is finite for all }e\in
E\}\text{.}
\end{gather*}
We verify that $\mathcal{T}(x)_{1}$ is a soft topology as follows:
\begin{enumerate}
\item $x\notin \Phi \Rightarrow \Phi \in \mathcal{T}(x)_{1}$ and $\tilde{X}
^{c}=\Phi \Rightarrow $ $\tilde{X}\in \mathcal{T}(x)_{1}$.
\item Let $\{(F_{i},E)|$ $i\in I$ $\}$ be a collection of soft sets in $
\mathcal{T}(x)_{1}$. We have the following three cases:
\begin{description}
\item[(i)] If $x\in (F_{i},E)^{c}$ for all $i\in I$ then $x\in \underset{
i\in I}{\cap }(F_{i},E)^{c}$ so, in this case, $\underset{i\in I}{\cup }
(F_{i},E)\in \mathcal{T}(x)_{1}$.
\item[(ii)] If $F_{i}^{c}(e)$ is finite for all $e\in E$ and all $i\in I$, then $\underset
{i\in I}{\cap }F_{i}^{c}(e)=(\underset{i\in I}{\cup }F_{i})^{c}(e)$ is also
finite. This means that $\underset{i\in I}{\cup }(F_{i},E)\in \mathcal{T}
(x)_{1}$.
\item[(iii)] If there exist some $j,k\in I$ such that $x\in (F_{j},E)^{c}$
and $F_{k}^{c}(e)$ is finite for all $e\in E$, then $\underset{i\in
I}{\cap }F_{i}^{c}(e)\ (\subseteq F_{k}^{c}(e))$ is finite for all $e\in E$,
and by definition $\underset{i\in I}{\cup }(F_{i},E)\in \mathcal{T}(x)_{1}$.
\end{description}
\item Let $(F_{1},E)$, $(F_{2},E)\in \mathcal{T}(x)_{1}$. Again we have
the following three cases:
\begin{description}
\item[(i)] If $x\in (F_{1},E)^{c}$ and $x\in (F_{2},E)^{c}$ then $x\in
(F_{1},E)^{c}\cup (F_{2},E)^{c}$ and therefore $(F_{1},E)\cap (F_{2},E)\in
\mathcal{T}(x)_{1}$.
\item[(ii)] If $F_{1}^{c}(e)$ and $F_{2}^{c}(e)$ are finite for all $e\in E$
then their union $F_{1}^{c}(e)\cup F_{2}^{c}(e)$ is also finite. Thus $
(F_{1}\cap F_{2})^{c}(e)$ is finite for all $e\in E$ which shows that $
(F_{1},E)\cap (F_{2},E)\in \mathcal{T}(x)_{1}$.
\item[(iii)] If $x\in (F_{1},E)^{c}$ and $F_{2}^{c}(e)$ is finite for all $
e\in E$ then $x\in F_{1}^{c}(e)\cup F_{2}^{c}(e)=(F_{1}\cap F_{2})^{c}(e)$
and so $x\in ((F_{1},E)\cap (F_{2},E))^{c}$. Thus $(F_{1},E)\cap
(F_{2},E)\in \mathcal{T}(x)_{1}$.
\end{description}
\end{enumerate}
Hence $\mathcal{T}(x)_{1}$ is a soft topology on $X$. For any distinct $p,q\in X$
(relabelling if necessary so that $p\neq x$), $x\in (p,E)^{c}\Rightarrow (p,E)\in \mathcal{T}(x)_{1}$, and
$(p,E)^{c}\in \mathcal{T}(x)_{1}$ since its complement is finite at every parameter, such that
\begin{equation*}
p\in (p,E),q\in (p,E)^{c}\text{ and }(p,E)\cap (p,E)^{c}=\Phi \text{.}
\end{equation*}
Thus $(X,\mathcal{T}(x)_{1},E)$ is a soft $T_{2}-$space over $X$. Similarly $
(X,\mathcal{T}(y)_{2},E)$ is a soft $T_{2}-$space over $X$.
Now, $(X,\mathcal{T}(x)_{1},\mathcal{T}(y)_{2},E)$ is a bi-soft topological
space over $X$. For $x,y\in X$, we cannot find any soft sets $(F,E)\in
\mathcal{T}(x)_{1}$ and $(G,E)\in \mathcal{T}(y)_{2}$ such that
\begin{equation*}
x\in (F,E),y\in (G,E)\text{ and }(F,E)\cap (G,E)=\Phi
\end{equation*}
because $x\in (F,E)$ forces $(F,E)$ to lie in the second part of $\mathcal{T}(x)_{1}$,
so that $F(e)$ is cofinite (hence infinite) for all $e\in E$, and likewise $y\in (G,E)$
forces $G^{c}(e)$ to be finite for all $e\in E$; but $(F,E)\cap (G,E)=\Phi $ implies that $
(F,E)\tilde{\subset}(G,E)^{c}$, that is, $F(e)\subseteq G^{c}(e)$ for all $e\in E$, which is not
possible since $F(e)$ is infinite and $G^{c}(e)$ is finite. Therefore $(X,
\mathcal{T}(x)_{1},\mathcal{T}(y)_{2},E)\ $is not a pair-wise soft $T_{2}-$
space over $X$.
\end{example}
\begin{remark}
If $(X,\mathcal{T}_{1}\vee \mathcal{T}_{2},E)$\ is a soft $T_{2}-$space over $
X$, then $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ need not be a pair-wise soft
$T_{2}-$space over $X$.
\end{remark}
\begin{proposition}
Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a bi-soft topological space
over $X$. If $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$\ is a pair-wise soft $
T_{2}-$space then $(X,\mathcal{T}_{1}\vee \mathcal{T}_{2},E)$\ is also a
soft $T_{2}-$space.
\begin{proof}
Let $x$,$y\in X$ be such that $x\neq y$. There exist $(F,E)\in \mathcal{T}
_{1}$ and $(G,E)\in \mathcal{T}_{2}$ such that $x\in (F,E)$, $y\in (G,E)$
and $(F,E)\cap (G,E)=\Phi $. Clearly $(F,E),(G,E)\in \mathcal{T}
_{1}\vee \mathcal{T}_{2}$.\ Hence $(X,\mathcal{T}_{1}\vee \mathcal{T}_{2},E)$
\ is a soft $T_{2}-$space over $X$.
\end{proof}
\end{proposition}
\begin{example}
Let $X=\{h_{1},h_{2},h_{3}\}$, $E=\{e_{1},e_{2}\}$ and
\begin{eqnarray*}
\mathcal{T}_{1} &=&\{\Phi ,\widetilde{X}
,(F_{1},E),(F_{2},E),(F_{3},E),(F_{4},E)\}\text{, and} \\
\mathcal{T}_{2} &=&\{\Phi ,\widetilde{X},(G_{1},E),(G_{2},E),(G_{3},E)\}
\text{,}
\end{eqnarray*}
where $(F_{1},E),$ $(F_{2},E),$ $(F_{3},E),$ $(F_{4},E),$ $(G_{1},E),$ $
(G_{2},E),$ and $(G_{3},E)$ are soft sets over $X$, defined as follows:
\begin{equation*}
\begin{array}{ll}
F_{1}(e_{1})=\{h_{1}\}, & F_{1}(e_{2})=\{h_{1}\}, \\
F_{2}(e_{1})=\{h_{2}\}, & F_{2}(e_{2})=\{h_{1},h_{2}\}, \\
F_{3}(e_{1})=\{\}, & F_{3}(e_{2})=\{h_{1}\}, \\
F_{4}(e_{1})=\{h_{1},h_{2}\}, & F_{4}(e_{2})=\{h_{1},h_{2}\},
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{ll}
G_{1}(e_{1})=\{h_{3}\}, & G_{1}(e_{2})=\{h_{3}\}, \\
G_{2}(e_{1})=\{h_{2}\}, & G_{2}(e_{2})=\{h_{2}\}, \\
G_{3}(e_{1})=\{h_{2},h_{3}\}, & G_{3}(e_{2})=\{h_{2},h_{3}\}\text{.}
\end{array}
\end{equation*}
Then $\mathcal{T}_{1}$\ and $\mathcal{T}_{2}$ are soft topologies on $X$.
Therefore $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a \textit{bi-soft
topological space over }$X$. One can easily see that $(X,\mathcal{T}_{1},
\mathcal{T}_{2},E)$ is not a pair-wise \textit{soft} $T_{2}-$\textit{space
over }$X$ because $h_{1},h_{2}\in X$, and we cannot find any soft sets $
(F,E)\in \mathcal{T}_{1}$ and $(G,E)\in \mathcal{T}_{2}$ such that
\begin{equation*}
h_{2}\in (F,E)\text{, }h_{1}\in (G,E)\text{ and }(F,E)\cap (G,E)=\Phi \text{.
}
\end{equation*}
Now, we have
\begin{eqnarray*}
\mathcal{T}_{1}\vee \mathcal{T}_{2} &=&\{\Phi ,\widetilde{X}
,(F_{1},E),(F_{2},E),(F_{3},E),(F_{4},E),(G_{1},E),(G_{2},E),(G_{3},E), \\
&&(H_{1},E),(H_{2},E),(H_{3},E)\}
\end{eqnarray*}
where
\begin{equation*}
\begin{array}{ll}
H_{1}(e_{1})=\{h_{1},h_{3}\}, & H_{1}(e_{2})=\{h_{1},h_{3}\}, \\
H_{2}(e_{1})=\{h_{2},h_{3}\}, & H_{2}(e_{2})=X, \\
H_{3}(e_{1})=\{h_{3}\}, & H_{3}(e_{2})=\{h_{1},h_{3}\},
\end{array}
\end{equation*}
so $(X,\mathcal{T}_{1}\vee \mathcal{T}_{2},E)$ is a \textit{soft topological
space over }$X$.
For $h_{1},h_{2}\in X$, $(F_{1},E)$, $(G_{2},E)\in \mathcal{T}_{1}\vee
\mathcal{T}_{2}$ such that
\begin{equation*}
h_{1}\in (F_{1},E)\text{, }h_{2}\in (G_{2},E)\text{ and }(F_{1},E)\cap
(G_{2},E)=\Phi \text{.}
\end{equation*}
For $h_{1},h_{3}\in X$, $(F_{1},E),(G_{3},E)\in \mathcal{T}_{1}\vee \mathcal{
T}_{2}$ such that
\begin{equation*}
h_{1}\in (F_{1},E)\text{, }h_{3}\in (G_{3},E)\text{ and }(F_{1},E)\cap
(G_{3},E)=\Phi \text{.}
\end{equation*}
For $h_{2},h_{3}\in X$, $(G_{2},E),(G_{1},E)\in \mathcal{T}_{1}\vee
\mathcal{T}_{2}$ such that
\begin{equation*}
h_{2}\in (G_{2},E)\text{, }h_{3}\in (G_{1},E)\text{ and }(G_{2},E)\cap
(G_{1},E)=\Phi \text{.}
\end{equation*}
Thus $(X,\mathcal{T}_{1}\vee \mathcal{T}_{2},E)$ is a \textit{soft }$T_{2}-$
\textit{\ space over }$X$.
\end{example}
\begin{proposition}
Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a bi-soft topological space
over $X$ and $Y$ be a non-empty subset of $X$. If $(X,\mathcal{T}_{1},
\mathcal{T}_{2},E)$\ is a pair-wise soft $T_{2}-$space then $(Y,\mathcal{T}
_{1Y},\mathcal{T}_{2Y},E)$\ is also a pair-wise soft $T_{2}-$space.
\end{proposition}
\begin{proof}
Let $x$,$y\in Y$ be such that $x\neq y$. Then there exist soft sets $
(F,E)\in \mathcal{T}_{1}$ and $(G,E)\in \mathcal{T}_{2}$ such that
\begin{equation*}
x\in (F,E),y\in (G,E)\text{ and }(F,E)\cap (G,E)=\Phi \text{.}
\end{equation*}
For each $e\in E$, $x\in F\left( e\right) $, $y\in G\left( e\right) $\ and $
F\left( e\right) \cap G\left( e\right) =\emptyset $. This implies that $x\in
Y\cap F\left( e\right) $, $y\in Y\cap G(e)$ and
\begin{eqnarray*}
^{Y}F(e)\cap ^{Y}G(e) &=&(Y\cap F\left( e\right) )\cap (Y\cap G(e)) \\
&=&Y\cap (F(e)\cap G\left( e\right) ) \\
&=&Y\cap \emptyset =\emptyset \text{.}
\end{eqnarray*}
\ Hence $x\in (^{Y}F,E)\in \mathcal{T}_{1Y},$ $y\in (^{Y}G,E)\in \mathcal{T}
_{2Y}$\ and $(^{Y}F,E)\cap (^{Y}G,E)=\Phi $. Thus $(Y,\mathcal{T}_{1Y},
\mathcal{T}_{2Y},E)$\ is a pair-wise soft $T_{2}-$space.
\end{proof}
\begin{proposition}
\label{Prop5}Every pair-wise soft $T_{2}-$space is also a pair-wise soft $
T_{1}-$space.
\begin{proof}
If $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$\ is a pair-wise soft $T_{2}-$
space and $x$,$y\in X$ be such that $x\neq y$ then there exist soft sets $
(F,E)\in \mathcal{T}_{1}$ and $(G,E)\in \mathcal{T}_{2}$ such that
\begin{equation*}
x\in (F,E),y\in (G,E)\text{ and }(F,E)\cap (G,E)=\Phi \text{.}
\end{equation*}
As $(F,E)\cap (G,E)=\Phi $, so $x\notin (G,E)$ and$\ y\notin (F,E)$. Hence $
(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$\ is a pair-wise soft $T_{1}-$space.
\end{proof}
\end{proposition}
\begin{remark}
The converse of Proposition \ref{Prop5} is not true, i.e., a pair-wise soft $
T_{1}-$space need not be a pair-wise soft $T_{2}-$space.
\end{remark}
\begin{example}
The bi-soft topological space $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ in
Example \ref{Examp3} is a pair-wise soft $T_{1}-$space over $X$ which is not
a pair-wise \textit{soft Hausdorff space} over $X$.
\end{example}
\begin{remark}
For any \textit{soft set }$(F,E)$ over $X$, $\overline{(F,E)}^{\mathcal{T}}$
will be used to denote the \textit{soft closure}$^{\text{Definition \ref
{softcl}}}$ of $(F,E)$ with respect to the soft topological space $(X,
\mathcal{T},E)$ over $X$.
\end{remark}
\begin{theorem}
\label{thm1}Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a bi-soft
topological space over $X$. Then the following are equivalent:
\begin{enumerate}
\item $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a pair-wise soft Hausdorff
space over $X$.
\item For each $x\in X$ and each point $y$ distinct from $x$, there is a soft
set $(F,E)\in \mathcal{T}_{1}$ such that $x\in (F,E)$ and $y\in \tilde{X}-
\overline{(F,E)}^{\mathcal{T}_{2}}$.
\end{enumerate}
\begin{proof}
$(1)\Rightarrow (2):$
Suppose that $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a pair-wise soft
Hausdorff space over $X$ and $x\in X$. For any $y\in X$ such that $y\neq x$,
pair-wise soft Hausdorffness implies that there exist soft sets $(F,E)\in
\mathcal{T}_{1}$ and $(G,E)\in \mathcal{T}_{2}$ such that
\begin{equation*}
x\in (F,E),y\in (G,E)\text{ and }(F,E)\cap (G,E)=\Phi \text{.}
\end{equation*}
So that $(F,E)\tilde{\subset}(G,E)^{c}$. Since $\overline{(F,E)}^{\mathcal{T}
_{2}}$ is the smallest soft closed set in $\mathcal{T}_{2}$ that contains $
(F,E)$ and $(G,E)^{c}$ is a soft closed set in $\mathcal{T}_{2}$ so $
\overline{(F,E)}^{\mathcal{T}_{2}}\tilde{\subset}(G,E)^{c}\Rightarrow (G,E)
\tilde{\subset}(\overline{(F,E)}^{\mathcal{T}_{2}})^{c}$. Thus $y\in (G,E)
\tilde{\subset}(\overline{(F,E)}^{\mathcal{T}_{2}})^{c}$ or $y\in \tilde{X}-
\overline{(F,E)}^{\mathcal{T}_{2}}$.
$(2)\Rightarrow (1):$
Let $x,y\in X$ be such that $x\neq y$. By $(2)$ there is a soft set $
(F,E)\in \mathcal{T}_{1}$ such that $x\in (F,E)$ and $y\in \tilde{X}-
\overline{(F,E)}^{\mathcal{T}_{2}}$. As $\overline{(F,E)}^{\mathcal{T}_{2}}$
is a soft closed set in $\mathcal{T}_{2}$\ so $(G,E)=\tilde{X}-\overline{
(F,E)}^{\mathcal{T}_{2}}\in \mathcal{T}_{2}$. Now $x\in (F,E),y\in (G,E)$
and
\begin{eqnarray*}
(F,E)\cap (G,E) &=&(F,E)\cap (\tilde{X}-\overline{(F,E)}^{\mathcal{T}_{2}})
\\
&\tilde{\subset}&(F,E)\cap (\tilde{X}-(F,E))\text{ \ }\because (F,E)\tilde{
\subset}\overline{(F,E)}^{\mathcal{T}_{2}} \\
&=&\Phi \text{.}
\end{eqnarray*}
Thus $(F,E)\cap (G,E)=\Phi $.
\end{proof}
\end{theorem}
\begin{corollary}
\label{cor1}Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a pair-wise soft $
T_{2}-$space over $X$. Then, for each $x\in X$,
\begin{equation*}
(x,E)=\dbigcap \left\{ \overline{(F,E)}^{\mathcal{T}_{2}}:x\in (F,E)\in
\mathcal{T}_{1}\right\} \text{.}
\end{equation*}
\begin{proof}
Let $x\in X$. The existence of a soft open set $(F,E)\in \mathcal{T}_{1}$
with $x\in (F,E)$ is guaranteed by pair-wise soft Hausdorffness. If $y\in X$ is such that $
y\neq x$ then, by Theorem \ref{thm1},\ there exists a soft set $(F,E)\in
\mathcal{T}_{1}$ such that $x\in (F,E)$ and $y\in \tilde{X}-\overline{(F,E)}
^{\mathcal{T}_{2}}$, so that $y\notin \bar{F}^{\mathcal{T}
_{2}}(e)$ and hence $y\notin \underset{x\in (F,E)\in \mathcal{T}_{1}}{
\dbigcap }(\bar{F}^{\mathcal{T}_{2}}(e))$ for all $e\in E$. Therefore
\begin{equation*}
\dbigcap \left\{ \overline{(F,E)}^{\mathcal{T}_{2}}:x\in (F,E)\in \mathcal{T}
_{1}\right\} \tilde{\subset}(x,E)\text{.}
\end{equation*}
The converse inclusion is obvious, as $x\in (F,E)\tilde{\subset}\overline{(F,E)}^{
\mathcal{T}_{2}}$.
\end{proof}
\end{corollary}
\begin{corollary}
Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a pair-wise soft $T_{2}-$
space over $X$. Then, for each $x\in X$, $(x,E)^{c}\in \mathcal{T}_{i}$ for $
i=1,2$.
\begin{proof}
By Corollary \ref{cor1}
\begin{equation*}
(x,E)^{c}=\dbigcup \left\{ (\overline{(F,E)}^{\mathcal{T}_{2}})^{c}:x\in
(F,E)\in \mathcal{T}_{1}\right\} \text{.}
\end{equation*}
Since $\overline{(F,E)}^{\mathcal{T}_{2}}$ is a soft closed set in $\mathcal{
T}_{2}$ so $(\overline{(F,E)}^{\mathcal{T}_{2}})^{c}\in \mathcal{T}_{2}$ and
by the axiom of a soft topological space $\dbigcup \left\{ (\overline{(F,E)}
^{\mathcal{T}_{2}})^{c}:x\in (F,E)\in \mathcal{T}_{1}\right\} \in \mathcal{T}
_{2}$. Thus $(x,E)^{c}\in \mathcal{T}_{2}$.
A similar argument holds to show $(x,E)^{c}\in \mathcal{T}_{1}$.
\end{proof}
\end{corollary}
\section{Application of bi-soft topological spaces to rough sets\label{appli}
}
Rough set theory introduced by Pawlak \cite{Pawlak} is another mathematical
tool to deal with uncertainty. These concepts have been applied successfully
in various fields \cite{Pawlak 2}. In the present paper a new approach to the
rough approximation of a soft set is given and some properties of the lower and
upper approximations are studied. A bi-soft topological space is used to
granulate the universe of discourse, and a general model of roughness of a
soft set based on bi-soft topological spaces is established.
It is easy to see that for any soft set $\left( F,A\right) $ over a set $X$,
the set of parameters $A$ can be extended to $E$ by defining the following
map $\widetilde{F}:E\rightarrow \mathcal{P}\left( X\right) $
\begin{equation*}
\widetilde{F}\left( e\right) =\left\{
\begin{tabular}{ll}
$F\left( e\right) $ & if $e\in A$ \\
$\emptyset $ & if $e\in E-A$
\end{tabular}
\right.
\end{equation*}
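In the dictionary representation used in the earlier sketches this extension is immediate; the hypothetical helper below simply pads the missing parameters with the empty set.
\begin{verbatim}
# Illustrative helper: extend a soft set F given only on A (a subset of E)
# to a soft set on all of E, as in the map defined above.
def extend(F, A):
    return {e: (F[e] if e in A else frozenset()) for e in E}
\end{verbatim}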
In the following, a technique is developed to find approximations of a soft
set $\left( F,A\right) $ with respect to a bi-soft topological space $(X,
\mathcal{T}_{1},\mathcal{T}_{2},E)$. From here onward every soft set $\left(
F,A\right) $ will be represented by $(\widetilde{F},E)$.
\begin{definition}
Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a bi-soft topological space
over $X$. Then $(X,\mathcal{T}_{1e},\mathcal{T}_{2e})$ is a \textit{
bi-topological space} for each $e\in E$. Given a soft set $(\widetilde{F},E)$
over $X$, two soft sets $(\widetilde{\underline{F}}_{\mathcal{T}_{1},
\mathcal{T}_{2}},E)$ and $(\overline{\widetilde{F}}_{\mathcal{T}_{1},
\mathcal{T}_{2}},E)$\ are defined as:
\begin{eqnarray*}
\widetilde{\underline{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}}(e) &=&\mathcal{T}
_{1e}Int(\widetilde{F}(e))\cap \mathcal{T}_{2e}Int(\widetilde{F}(e)) \\
\overline{\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}}(e) &=&\overline{
\widetilde{F}(e)}^{\mathcal{T}_{1e}}\cup \overline{\widetilde{F}(e)}^{
\mathcal{T}_{2e}}
\end{eqnarray*}
for each $e\in E$, where $\mathcal{T}_{ie}Int(\widetilde{F}(e))$ and $
\overline{\widetilde{F}(e)}^{\mathcal{T}_{ie}}$ denote the interior and
closure of subset $\widetilde{F}(e)$ in the topological space $(X,\mathcal{T}
_{ie})$, respectively. The soft sets $(\widetilde{\underline{F}}_{\mathcal{T}
_{1},\mathcal{T}_{2}},E)$ and $(\overline{\widetilde{F}}_{\mathcal{T}_{1},
\mathcal{T}_{2}},E)$ are called, respectively, the lower approximation and the
upper approximation of the soft set $(\widetilde{F},E)$ with respect to the
bi-soft topological space $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ over $X$.
\end{definition}
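Since each $(X,\mathcal{T}_{ie})$ is a finite topological space in the example that follows, the interiors and closures, and hence both approximations, can be computed directly. The sketch below is illustrative only; it assumes the dictionary representation of soft sets and soft topologies used in the earlier sketches and implements the two displayed formulas.
\begin{verbatim}
# Illustrative sketch: lower and upper approximations of a soft set F with
# respect to a bi-soft topological space (X, T1, T2, E), finite X and E.
def interior(topology, A):
    # union of all open sets contained in A
    out = frozenset()
    for U in topology:
        if U <= A:
            out |= U
    return out

def closure(topology, A):
    # intersection of all closed sets (complements of opens) containing A
    out = frozenset(X)
    for U in topology:
        C = frozenset(X) - U
        if A <= C:
            out &= C
    return out

def approximations(T1, T2, F):
    lower, upper = {}, {}
    for e in E:
        T1e = {S[e] for S in T1}       # parameterized topologies T_{1e}
        T2e = {S[e] for S in T2}       # and T_{2e}
        lower[e] = interior(T1e, F[e]) & interior(T2e, F[e])
        upper[e] = closure(T1e, F[e]) | closure(T2e, F[e])
    return lower, upper
\end{verbatim}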
\begin{definition}
If $(\widetilde{\underline{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E)\tilde{=}(
\overline{\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E)$ (soft equal)
then the soft set $(\widetilde{F},E)$ is said to be definable and otherwise
it is called a bi-soft topological rough set denoted by the pair $(
\widetilde{\underline{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}},\overline{
\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}})$. Further,
\begin{eqnarray*}
pos_{_{\mathcal{T}_{1},\mathcal{T}_{2}}}(\widetilde{F},E) &=&(\widetilde{
\underline{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E); \\
neg_{_{\mathcal{T}_{1},\mathcal{T}_{2}}}(\widetilde{F},E) &=&(\overline{
\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E)^{c}; \\
bnd_{_{\mathcal{T}_{1},\mathcal{T}_{2}}}(\widetilde{F},E) &=&(\overline{
\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E)-(\widetilde{\underline{F}
}_{\mathcal{T}_{1},\mathcal{T}_{2}},E).
\end{eqnarray*}
In order to explain this idea the following example is given:
\end{definition}
\begin{example}
Let $X=\{x_{1},x_{2},x_{3},x_{4},x_{5}\}$ be the set of sample designs of
laptop covers and $E=\{$Red, Green, Blue$\}$ be the set of available colors.
Let us suppose that there are two groups of people. The first group consists of $
3$ members aged $20$, $25$ and $28$, and the second group has members aged $35$
, $42$ and $45$. Both groups are asked to select the covers which they approve
according to their liking. Following the choices they have
made, we obtain $6$ soft sets, given by $(F_{1},E)$, $(F_{2},E)$, $(F_{3},E)$
for the members of first group and $(G_{1},E)$, $(G_{2},E)$, $(G_{3},E)$ for
the members of latter one. Let $\mathcal{T}_{1}$ and $\mathcal{T}_{2}$ be
the soft topologies generated by $(F_{1},E)$, $(F_{2},E)$, $(F_{3},E)$ and $
(G_{1},E)$, $(G_{2},E)$, $(G_{3},E)$,
\begin{eqnarray*}
\mathcal{T}_{1} &=&\{\Phi ,\widetilde{X}
,(F_{1},E),(F_{2},E),(F_{3},E),(F_{4},E),(F_{5},E),(F_{6},E),(F_{7},E),(F_{8},E)\}
\text{, and} \\
\mathcal{T}_{2} &=&\{\Phi ,\widetilde{X},(G_{1},E),(G_{2},E),(G_{3},E)\}
\text{,}
\end{eqnarray*}
where $(F_{1},E),$ $(F_{2},E),$ $(F_{3},E),$ $(F_{4},E),$ $(F_{5},E),$ $
(F_{6},E),$ $(F_{7},E),$ $(F_{8},E),$ $(G_{1},E),$ $(G_{2},E),$ $(G_{3},E)$
are soft sets over $X$, defined as follows:
\begin{equation*}
\begin{array}{lll}
F_{1}(Red)=\{x_{2},x_{4}\}, & F_{1}(Green)=\{x_{1},x_{5}\}, &
F_{1}(Blue)=\{x_{1}\}, \\
F_{2}(Red)=\{x_{1},x_{2},x_{4}\}, & F_{2}(Green)=\{x_{1},x_{2},x_{5}\}, &
F_{2}(Blue)=\{x_{1},x_{3}\}, \\
F_{3}(Red)=\{x_{2}\}, & F_{3}(Green)=\{x_{2}\}, & F_{3}(Blue)=\{x_{2}\}, \\
F_{4}(Red)=\{x_{2}\}, & F_{4}(Green)=\{x_{2}\}, & F_{4}(Blue)=\emptyset , \\
F_{5}(Red)=\{x_{2},x_{4}\}, & F_{5}(Green)=\{x_{1},x_{2},x_{5}\}, &
F_{5}(Blue)=\{x_{1},x_{2}\}, \\
F_{6}(Red)=\{x_{2},x_{4}\}, & F_{6}(Green)=\{x_{1},x_{2},x_{5}\}, &
F_{6}(Blue)=\{x_{1}\}, \\
F_{7}(Red)=\{x_{2}\}, & F_{7}(Green)=\emptyset , & F_{7}(Blue)=\emptyset ,
\\
F_{8}(Red)=\{x_{1},x_{2},x_{4}\}, & F_{8}(Green)=\{x_{1},x_{2},x_{5}\}, &
F_{8}(Blue)=\{x_{1},x_{2},x_{3}\}.
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{lll}
G_{1}(Red)=\{x_{1},x_{2},x_{4}\}, & G_{1}(Green)=\{x_{2},x_{4},x_{5}\}, &
G_{1}(Blue)=\{x_{1},x_{2},x_{3}\}, \\
G_{2}(Red)=\{x_{2},x_{4}\}, & G_{2}(Green)=\{x_{4}\}, & G_{2}(Blue)=\{x_{2}
\}, \\
G_{3}(Red)=\{x_{1}\}, & G_{3}(Green)=\{x_{2},x_{5}\}, & G_{3}(Blue)=
\{x_{1},x_{3}\}.
\end{array}
\end{equation*}
Then $\mathcal{T}_{1}$\ and $\mathcal{T}_{2}$ are soft topologies on $X$.
Thus $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ is a \textit{bi-soft
topological space over }$X$. We have
\begin{eqnarray}
\mathcal{T}_{1Red} &=&\{\emptyset
,X,\{x_{2}\},\{x_{2},x_{4}\},\{x_{1},x_{2},x_{4}\}\}, \\
\mathcal{T}_{2Red} &=&\{\emptyset
,X,\{x_{1}\},\{x_{2},x_{4}\},\{x_{1},x_{2},x_{4}\}\}, \notag
\end{eqnarray}
\begin{eqnarray}
\mathcal{T}_{1Green} &=&\{\emptyset
,X,\{x_{2}\},\{x_{1},x_{5}\},\{x_{1},x_{2},x_{5}\}\}, \\
\mathcal{T}_{2Green} &=&\{\emptyset
,X,\{x_{4}\},\{x_{2},x_{5}\},\{x_{2},x_{4},x_{5}\}\}, \notag
\end{eqnarray}
\begin{eqnarray}
\mathcal{T}_{1Blue} &=&\{\emptyset
,X,\{x_{1}\},\{x_{2}\},\{x_{1},x_{3}\},\{x_{1},x_{2}\},\{x_{1},x_{2},x_{3}\}
\}, \\
\mathcal{T}_{2Blue} &=&\{\emptyset
,X,\{x_{2}\},\{x_{1},x_{3}\},\{x_{1},x_{2},x_{3}\}\}. \notag
\end{eqnarray}
Consider the soft set $(\widetilde{F},E)$ over $X$ that describes the choice
of a random customer Mr. X, whose age is in the range of\ $20-45$, where
\begin{equation*}
\begin{array}{lll}
\widetilde{F}(Red)=\{x_{2},x_{4},x_{5}\}, & \widetilde{F}(Green)=\emptyset ,
& \widetilde{F}(Blue)=\{x_{1},x_{3},x_{4}\}.
\end{array}
\end{equation*}
The lower approximation $(\widetilde{\underline{F}}_{\mathcal{T}_{1},
\mathcal{T}_{2}},E)$ and upper approximation $(\overline{\widetilde{F}}_{
\mathcal{T}_{1},\mathcal{T}_{2}},E)$ of the soft set $(\widetilde{F},E)$
with respect to the bi-soft topological space $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ over $X$ are computed as:
\begin{equation*}
\begin{array}{lll}
\widetilde{\underline{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}}(Red)=
\{x_{2},x_{4}\}, & \widetilde{\underline{F}}_{\mathcal{T}_{1},\mathcal{T}
_{2}}(Green)=\emptyset , & \widetilde{\underline{F}}_{\mathcal{T}_{1},
\mathcal{T}_{2}}(Blue)=\{x_{1},x_{3}\}, \\
\overline{\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}}(Red)=X, &
\overline{\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}}(Green)=\emptyset ,
& \overline{\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}}(Blue)=
\{x_{1},x_{3},x_{4},x_{5}\}.
\end{array}
\end{equation*}
Thus
\begin{eqnarray*}
pos_{_{\mathcal{T}_{1},\mathcal{T}_{2}}}(\widetilde{F},E) &=&(\widetilde{
\underline{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E); \\
&&
\begin{array}{l}
\widetilde{\underline{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}}(Red)=
\{x_{2},x_{4}\}, \\
\widetilde{\underline{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}}(Green)=\emptyset
, \\
\widetilde{\underline{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}}(Blue)=
\{x_{1},x_{3}\},
\end{array}
\\
neg_{_{\mathcal{T}_{1},\mathcal{T}_{2}}}(\widetilde{F},E) &=&(\overline{
\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E)^{c}; \\
&&
\begin{array}{l}
\overline{\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}}^{c}(Red)=
\emptyset , \\
\overline{\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}}^{c}(Green)=X, \\
\overline{\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}}^{c}(Blue)=\{x_{2}
\},
\end{array}
\\
bnd_{_{\mathcal{T}_{1},\mathcal{T}_{2}}}(\widetilde{F},E) &=&(\overline{
\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E)-(\widetilde{\underline{F}
}_{\mathcal{T}_{1},\mathcal{T}_{2}},E); \\
&&
\begin{array}{l}
(\overline{\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}}-\widetilde{
\underline{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}})(Red)=\{x_{1},x_{3},x_{5}\},
\\
(\overline{\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}}-\widetilde{
\underline{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}})(Green)=\emptyset , \\
(\overline{\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}}-\widetilde{
\underline{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}})(Blue)=\{x_{4},x_{5}\},
\end{array}
\end{eqnarray*}
From these approximations, we may assert the following:
\begin{enumerate}
\item For Mr. X, $x_{2}$ or $x_{4}$ will be the best choice in red color, no
design should be selected in green color, and $x_{1}$ and $x_{3}$ should be
preferred if the color choice is blue.
\item No design in red color can be considered a bad choice, no design
can be selected in green color, and $x_{2}$ should not be selected in blue
color.
\item $x_{1}$, $x_{3}$, $x_{5}$ may also be chosen in red color, and $x_{4}$ and
$x_{5}$ are also worth considering in case the color choice is blue.
\end{enumerate}
\end{example}
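The computation in the example above can be checked mechanically. The following small Python sketch (the helper names and the data are ours, written only for illustration) computes interiors and closures in the parametrized topologies $\mathcal{T}_{1Red}$ and $\mathcal{T}_{2Red}$ and combines them parameter-wise. The particular combination used below (intersecting the two interiors for the lower approximation and uniting the two closures for the upper approximation) is an assumption that reproduces the values reported above; the authoritative definition is the one given earlier in the paper.
\begin{verbatim}
# Illustrative sketch (not from the paper): checks the approximations in the example.
# ASSUMPTION: lower approximation = intersection of the two interiors,
#             upper approximation = union of the two closures, parameter by parameter.

X = {'x1', 'x2', 'x3', 'x4', 'x5'}

# Parametrized crisp topologies T_{1 Red} and T_{2 Red} from the example.
T1_red = [set(), set(X), {'x2'}, {'x2', 'x4'}, {'x1', 'x2', 'x4'}]
T2_red = [set(), set(X), {'x1'}, {'x2', 'x4'}, {'x1', 'x2', 'x4'}]

def interior(T, A):
    # union of all open sets of T contained in A
    result = set()
    for U in T:
        if U <= A:
            result |= U
    return result

def closure(T, A):
    # intersection of all closed sets (complements of open sets) containing A
    result = set(X)
    for U in T:
        C = X - U
        if A <= C:
            result &= C
    return result

F_red = {'x2', 'x4', 'x5'}   # \widetilde{F}(Red)

lower = interior(T1_red, F_red) & interior(T2_red, F_red)
upper = closure(T1_red, F_red) | closure(T2_red, F_red)

print(sorted(lower))   # ['x2', 'x4']   -- as in the example
print(sorted(upper))   # all of X       -- as in the example
assert lower <= F_red <= upper          # property 1 of the theorem below
\end{verbatim}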
\begin{theorem}
Let $(X,\mathcal{T}_{1},\mathcal{T}_{2},E)$ be a bi-soft topological space
and $\left( \widetilde{F},E\right) $, $\left( \widetilde{F}_{1},E\right) $, $
\left( \widetilde{F}_{2},E\right) $ be soft sets over $X$. Then:
\begin{equation*}
\begin{tabular}{ll}
$1$ & $(\widetilde{\underline{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E)\tilde{
\subset}(\widetilde{F},E)\tilde{\subset}(\overline{\widetilde{F}}_{\mathcal{T
}_{1},\mathcal{T}_{2}},E),$ \\
$2$ & $\underline{\Phi }_{\mathcal{T}_{1},\mathcal{T}_{2}}\tilde{=}\Phi
\tilde{=}\overline{\Phi }_{\mathcal{T}_{1},\mathcal{T}_{2}},$ \\
$3$ & $\underline{\widetilde{X}}_{\mathcal{T}_{1},\mathcal{T}_{2}}\tilde{=}
\widetilde{X}\tilde{=}\overline{\widetilde{X}}_{\mathcal{T}_{1},\mathcal{T}
_{2}},$ \\
$4$ & $(\underline{\widetilde{F}_{1}\cap \widetilde{F}_{2}}_{_{\mathcal{T}_{1},\mathcal{T}_{2}}},E)\tilde{=}(\underline{\widetilde{F}_{1}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E)\cap (\underline{\widetilde{F}_{2}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E),$ \\
$5$ & $(\overline{\widetilde{F}_{1}\cap \widetilde{F}_{2}}_{_{\mathcal{T}
_{1},\mathcal{T}_{2}}},E)\tilde{\subset}(\overline{\widetilde{F}_{1}}_{
\mathcal{T}_{1},\mathcal{T}_{2}},E)\cap (\overline{\widetilde{F}_{2}}_{
\mathcal{T}_{1},\mathcal{T}_{2}},E),$ \\
$6$ & $(\underline{\widetilde{F}_{1}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E)\cup (\underline{\widetilde{F}_{2}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E)\tilde{\subset}(\underline{\widetilde{F}_{1}\cup \widetilde{F}_{2}}_{_{\mathcal{T}_{1},\mathcal{T}_{2}}},E),$ \\
$7$ & $(\overline{\widetilde{F}_{1}\cup \widetilde{F}_{2}}_{_{\mathcal{T}
_{1},\mathcal{T}_{2}}},E)\tilde{=}(\overline{\widetilde{F}_{1}}_{\mathcal{T}
_{1},\mathcal{T}_{2}},E)\cup (\overline{\widetilde{F}_{2}}_{\mathcal{T}_{1},
\mathcal{T}_{2}},E),$ \\
$8$ & $(\widetilde{F}_{1},E)\tilde{\subset}(\widetilde{F}_{2},E)\Rightarrow (\underline{\widetilde{F}_{1}}_{_{\mathcal{T}_{1},\mathcal{T}_{2}}},E)\tilde{\subset}(\underline{\widetilde{F}_{2}}_{_{\mathcal{T}_{1},\mathcal{T}_{2}}},E),\ (\overline{\widetilde{F}_{1}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E)\tilde{\subset}(\overline{\widetilde{F}_{2}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E),$ \\
$9$ & $(\overline{\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E)\tilde{=}((\underline{\widetilde{F}^{c}}_{_{\mathcal{T}_{1},\mathcal{T}_{2}}},E))^{c},$ \\
$10$ & $((\overline{\widetilde{F}^{c}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E))^{c}\tilde{=}(\widetilde{\underline{F}}_{_{\mathcal{T}_{1},\mathcal{T}_{2}}},E),$ \\
$11$ & $(\underline{(\underline{F}_{_{\mathcal{T}_{1},\mathcal{T}_{2}}})}_{_{
\mathcal{T}_{1},\mathcal{T}_{2}}},E)\tilde{=}(\widetilde{\underline{F}}_{_{
\mathcal{T}_{1},\mathcal{T}_{2}}},E),$ \\
$12$ & $(\overline{(\overline{\widetilde{F}}_{_{\mathcal{T}_{1},\mathcal{T}
_{2}}})}_{_{\mathcal{T}_{1},\mathcal{T}_{2}}},E)\tilde{=}(\overline{
\widetilde{F}}_{\mathcal{T}_{1},\mathcal{T}_{2}},E).$
\end{tabular}
\end{equation*}
\end{theorem}
\begin{conclusion}
The concept of soft topological spaces is generalized to bi-soft topological
spaces. Some basic notions and the inter-relations between the classical and
the generalized concepts have been studied in detail. It is worth mentioning
that the purpose of this paper is to initiate the concept, and there is
ample scope for researchers to carry out further investigations in this
field. This is the beginning of a new generalized structure, and the
separation axioms may be studied further for regular and normal bi-soft
topological spaces, which is our next goal. The topologies induced by
information systems through rough sets$^{\text{\cite{Pawlak}}}$ give rise to
a natural bitopological space on the initial universal set, and this fact
increases the interest in the bitopological environment. The connections
between information systems and soft sets can be another key in the search
for the structure of bi-soft topological spaces in real-world phenomena.
Separation axioms have applications in digital topology$^{\text{\cite{Kong}}}$,
and a hybrid generalization of these axioms may also be of use there.
\end{conclusion}
\begin{summary}
The results on implications are summarized in Figure \ref{Figure 1}:
\end{summary}
\begin{figure}[tbp]
\centering
\includegraphics[width=2.6013in,height=3.0407in]{Doc1.jpg}
\caption{Summary of Results}
\label{Figure 1}
\end{figure}
\end{document}
\begin{document}
\baselineskip = 16pt
\newcommand \ZZ {{\mathbb Z}}
\newcommand \NN {{\mathbb N}}
\newcommand \RR {{\mathbb R}}
\newcommand \PR {{\mathbb P}}
\newcommand \AF {{\mathbb A}}
\newcommand \GG {{\mathbb G}}
\newcommand \QQ {{\mathbb Q}}
\newcommand \bcA {{\mathscr A}}
\newcommand \bcC {{\mathscr C}}
\newcommand \bcD {{\mathscr D}}
\newcommand \bcF {{\mathscr F}}
\newcommand \bcG {{\mathscr G}}
\newcommand \bcH {{\mathscr H}}
\newcommand \bcM {{\mathscr M}}
\newcommand \bcJ {{\mathscr J}}
\newcommand \bcL {{\mathscr L}}
\newcommand \bcO {{\mathscr O}}
\newcommand \bcP {{\mathscr P}}
\newcommand \bcQ {{\mathscr Q}}
\newcommand \bcR {{\mathscr R}}
\newcommand \bcS {{\mathscr S}}
\newcommand \bcU {{\mathscr U}}
\newcommand \bcV {{\mathscr V}}
\newcommand \bcW {{\mathscr W}}
\newcommand \bcX {{\mathscr X}}
\newcommand \bcY {{\mathscr Y}}
\newcommand \bcZ {{\mathscr Z}}
\newcommand \goa {{\mathfrak a}}
\newcommand \gob {{\mathfrak b}}
\newcommand \goc {{\mathfrak c}}
\newcommand \gom {{\mathfrak m}}
\newcommand \gon {{\mathfrak n}}
\newcommand \gop {{\mathfrak p}}
\newcommand \goq {{\mathfrak q}}
\newcommand \goQ {{\mathfrak Q}}
\newcommand \goP {{\mathfrak P}}
\newcommand \goM {{\mathfrak M}}
\newcommand \goN {{\mathfrak N}}
\newcommand \uno {{\mathbbm 1}}
\newcommand \Le {{\mathbbm L}}
\newcommand \Spec {{\rm {Spec}}}
\newcommand \Gr {{\rm {Gr}}}
\newcommand \Pic {{\rm {Pic}}}
\newcommand \Jac {{{J}}}
\newcommand \Alb {{\rm {Alb}}}
\newcommand \Corr {{Corr}}
\newcommand \Chow {{\mathscr C}}
\newcommand \Sym {{\rm {Sym}}}
\newcommand \Prym {{\rm {Prym}}}
\newcommand \cha {{\rm {char}}}
\newcommand \eff {{\rm {eff}}}
\newcommand \tr {{\rm {tr}}}
\newcommand \Tr {{\rm {Tr}}}
\newcommand \pr {{\rm {pr}}}
\newcommand \ev {{\it {ev}}}
\newcommand \cl {{\rm {cl}}}
\newcommand \interior {{\rm {Int}}}
\newcommand \sep {{\rm {sep}}}
\newcommand \td {{\rm {tdeg}}}
\newcommand \alg {{\rm {alg}}}
\newcommand \im {{\rm im}}
\newcommand \gr {{\rm {gr}}}
\newcommand \op {{\rm op}}
\newcommand \Hom {{\rm Hom}}
\newcommand \Hilb {{\rm Hilb}}
\newcommand \Sch {{\mathscr S\! }{\it ch}}
\newcommand \cHilb {{\mathscr H\! }{\it ilb}}
\newcommand \cHom {{\mathscr H\! }{\it om}}
\newcommand \colim {{{\rm colim}\, }}
\newcommand \End {{\rm {End}}}
\newcommand \coker {{\rm {coker}}}
\newcommand \id {{\rm {id}}}
\newcommand \van {{\rm {van}}}
\newcommand \spc {{\rm {sp}}}
\newcommand \Ob {{\rm Ob}}
\newcommand \Aut {{\rm Aut}}
\newcommand \cor {{\rm {cor}}}
\newcommand \Cor {{\it {Corr}}}
\newcommand \res {{\rm {res}}}
\newcommand \red {{\rm{red}}}
\newcommand \Gal {{\rm {Gal}}}
\newcommand \PGL {{\rm {PGL}}}
\newcommand \Bl {{\rm {Bl}}}
\newcommand \Sing {{\rm {Sing}}}
\newcommand \spn {{\rm {span}}}
\newcommand \Nm {{\rm {Nm}}}
\newcommand \inv {{\rm {inv}}}
\newcommand \codim {{\rm {codim}}}
\newcommand \Div{{\rm{Div}}}
\newcommand \sg {{\Sigma }}
\newcommand \DM {{\sf DM}}
\newcommand \Gm {{{\mathbb G}_{\rm m}}}
\newcommand \tame {\rm {tame }}
\newcommand \znak {{\natural }}
\newcommand \lra {\longrightarrow}
\newcommand \hra {\hookrightarrow}
\newcommand \rra {\rightrightarrows}
\newcommand \ord {{\rm {ord}}}
\newcommand \Rat {{\mathscr Rat}}
\newcommand \rd {{\rm {red}}}
\newcommand \bSpec {{\bf {Spec}}}
\newcommand \Proj {{\rm {Proj}}}
\newcommand \pdiv {{\rm {div}}}
\newcommand \CH {{\it {CH}}}
\newcommand \wt {\widetilde }
\newcommand \ac {\acute }
\newcommand \ch {\check }
\newcommand \ol {\overline }
\newcommand \Th {\Theta}
\newcommand \cAb {{\mathscr A\! }{\it b}}
\newenvironment{pf}{\par\noindent{\em Proof}.}{
\framebox(6,6)
\par
}
\newtheorem{theorem}[subsection]{Theorem}
\newtheorem{conjecture}[subsection]{Conjecture}
\newtheorem{proposition}[subsection]{Proposition}
\newtheorem{lemma}[subsection]{Lemma}
\newtheorem{remark}[subsection]{Remark}
\newtheorem{remarks}[subsection]{Remarks}
\newtheorem{definition}[subsection]{Definition}
\newtheorem{corollary}[subsection]{Corollary}
\newtheorem{example}[subsection]{Example}
\newtheorem{examples}[subsection]{Examples}
\title{Algebraic cycles on the Fano variety of lines of a cubic fourfold }
\author{Kalyan Banerjee}
\address{Tata Institute of Fundamental Research, Mumbai, India}
\email{[email protected]}
\begin{abstract}
In this text we prove that if a smooth cubic in $\PR^5$ has its Fano variety of lines birational to the Hilbert scheme of two points on a K3 surface, then there exists a smooth projective curve or a smooth projective surface embedded in the Fano variety, such that the kernel of the push-forward (at the level of zero cycles) induced by the closed embedding is torsion.
\end{abstract}
\maketitle
\section{Introduction}
In the article \cite{BG} the authors discussed the kernel of the push-forward homomorphism at the level of algebraically trivial one cycles modulo rational equivalence, induced by the closed embedding of a smooth hyperplane section of a cubic fourfold into the cubic itself. It was proved that the kernel of this homomorphism is countable when we consider a very general hyperplane section. It was a natural question at that time whether the rationality problem of a very general cubic fourfold can be formulated in terms of this kernel, that is, what the obstruction to rationality is in terms of the kernel.
In \cite{GS}[theorem 7.5] it has been proved that if the ground field $k$ is such that the class of the affine line is not a zero divisor in the Grothendieck ring of varieties over $k$, and if the cubic fourfold is rational, then the Fano variety of lines on the cubic fourfold is birational to the Hilbert scheme of two points on a K3 surface. This is the starting point of this paper. It should be noted, however, that this assumption is false, by \cite{BO}, when the ground field is $\mathbb C$. We start with a birational map from the Hilbert scheme of two points on a K3 surface to the Fano variety of lines and try to formulate its cycle-theoretic consequences in terms of zero cycles on the Fano variety. It is also known, due to Hassett \cite{Ha}, that for a certain class of rational cubic fourfolds the Hilbert scheme is actually isomorphic to the Fano variety of lines.
To proceed in this direction of studying the zero cycles on the Fano variety of lines, we study algebraic cycles on symmetric powers of a smooth projective variety following Collino \cite{Collino}, where the algebraic cycles on symmetric powers of a smooth projective curve were studied. It has been proved in \cite{Collino} that the closed embedding of one symmetric power of a smooth projective curve into a bigger symmetric power of the same curve induces an injective push-forward at the level of Chow groups. We follow this approach and try to understand the algebraic cycles on the blowup of $\Sym^2 S$ along the diagonal, where $S$ is a smooth projective surface. This blowup is the Hilbert scheme of two points on a smooth projective surface. Following Collino's approach, we study a correspondence between the Hilbert scheme of two points on a surface and the strict transform of the surface itself under the blowup along the diagonal. The main result of this paper is the following:
\textit{Let $X$ be a smooth cubic fourfold in $\PR^5$ and let $F(X)$ be the Fano variety of lines on the cubic fourfold $X$. Suppose that the cubic $X$ is such that there exists a birational map from the Hilbert scheme of two points on a fixed K3 surface to the Fano variety of $X$, and that the indeterminacy locus of the rational map $\Hilb^2 S\to F(X)$ is smooth. Then there exists a non-rational curve $D$ in $F(X)$ such that the push-forward induced by the inclusion of $D$ into $F(X)$ has torsion kernel.}
{\small \textbf{Acknowledgements:} The author wishes to thank Vladimir Guletskii for suggesting the idea of studying the relation between torsion elements in Chow groups and non-rationality of cubics in $\PR^5$. The author thanks the anonymous referee for a careful reading of the manuscript and for suggesting relevant modifications. The author is also grateful to Marc Levine for pointing out a technical mistake in the proof of theorem 2.1, and to V.Srinivas and C.Voisin for useful discussions relevant to the theme of the paper. The author is indebted to Mingmin Shen for pointing out a mistake in section 4, and remains grateful to B.Hassett for improving the conclusion of the main theorem. Finally, the author wishes to thank the ISF-UGC grant for funding this project, and the Indian Statistical Institute, Bangalore Center, for its hospitality in hosting this project.}
\section{Algebraic cycles on Hilbert schemes}
In this section we are going to prove the following. Let $S$ be a smooth, projective algebraic surface. Consider the diagonal embedding of $S$ into $\Sym^2 S$. Then the blow up of $\Sym^2 S$ along $S$ is the Hilbert scheme of two points on $S$. Let $\wt{\Sym^2 S}$ denote this Hilbert scheme and let $\wt{S}$ denote the inverse image of $S$ in $\wt{\Sym^2 S}$ (this copy of $S$ is different from the image of the diagonal embedding; it is the image of the closed embedding $s\mapsto [s,p]$). Then the embedding of $\wt{S}$ into $\wt{\Sym^2 S}$ induces an injective push-forward homomorphism at the level of Chow groups of zero cycles.
To prove this, we show that if we blow up $\Sym^n X$ along a subvariety $Z$ which does not contain a copy of $\Sym^m X$ for $m\leq n$, then the closed embedding of the inverse image of $\Sym^m X$ into the blow up of $\Sym^n X$ induces an injective push-forward homomorphism at the level of Chow groups of zero cycles.
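For later use we recall the localisation exact sequence for Chow groups, which is applied repeatedly in the arguments below (see \cite{Fulton}): for a closed subscheme $Z$ of a variety $V$ with open complement $U=V\setminus Z$, the sequence
$$\CH_k(Z)\lra \CH_k(V)\lra \CH_k(U)\lra 0$$
is exact, where the first arrow is the proper push-forward along the closed embedding and the second arrow is the restriction to the open complement.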
\begin{theorem}
\label{theorem 1}
Let $X$ be a smooth projective variety. Let $\wt{\Sym^n X}$ be the blow up of $\Sym^n X$ along a smooth subvariety $Z$, such that the blow-up is a smooth projective variety. Let $\Sym^m X$ intersect $Z$ transversally. Then the closed embedding of the strict transform of $\Sym^m X$ into $\wt{\Sym^n X}$ induces an injective push-forward homomorphism at the level of Chow groups of zero cycles.
\end{theorem}
\begin{proof}
To prove the theorem we follow the approach as in \cite{Collino}. First consider the correspondence $\Gamma$ on $\Sym^n X\times \Sym^m X$ given by
$$(\pi_n\times \pi_m)(Graph(pr))$$
where $pr$ is the projection from $X^n$ to $X^m$ and $\pi_i$ is the natural quotient morphism from $X^i$ to $\Sym^i X$. Let $f$ be the natural morphism from $\wt{\Sym^n X}$ to $\Sym^n X$ and $f'$ be the restriction of $f$ to $\wt{\Sym^m X}$, which is the inverse image of $\Sym^m X$. Consider the correspondence $\Gamma'$ given by $(f\times f')^*(\Gamma)$. Then a computation following \cite{Collino} shows that $\Gamma'_*j_*$ is induced by $(j\times id)^*(\Gamma')$, where $j$ is the closed embedding of $\wt{\Sym^m X}$ into $\wt{\Sym^n X}$. Here we can consider the pull-back by $f\times f'$ and by $j\times id$, because they are local complete intersection morphisms since we are considering monoidal transformations \cite{Fulton}[page 332, section 17.5]. Consider the following diagram.
$$
\diagram
\wt{\Sym^m X}\times \wt{\Sym^m X}\ar[dd]_-{f'\times f'} \ar[rr]^-{j\times id} & & \wt{\Sym^n X}\times \wt{\Sym^m X} \ar[dd]^-{f\times f'} \\ \\
\Sym^m X\times \Sym^m X \ar[rr]^-{i\times id} & & \Sym^n X\times \Sym^m X
\enddiagram
$$
This diagram gives us the following at the level of algebraic cycles.
$$(j\times id)^*(f\times f')^*(\Gamma)=(f'\times f')^*(i\times id)^*(\Gamma)$$
that is equal to
$$(f'\times f')^*(\Delta+Y)$$
by \cite{Collino}, where $Y$ is supported on $\Sym^m X\times \Sym^{m-1}X$, and $\Delta$ is the diagonal of $\Sym^m X\times \Sym^m X$.
Now
$$(f'\times f')^*(\Delta)=\Delta+E_1$$
where
$$E_1\subset(E\times E)\cap (\wt{\Sym^m X}\times \wt{\Sym^m X}) $$
$E$ is the exceptional locus, that is, the pre-image of $Z$.
So then by the Chow moving lemma, we can take the support of a zero cycle away from $E\cap \wt{\Sym^m X}$, which is a proper closed subscheme of $\wt{\Sym^m X}$, and we get
$$\rho^*\Gamma'_*j_*(z)=\rho^*(z+z_1)=\rho^*(z)$$
where $\rho$ is the open embedding of the complement of $ \wt{\Sym^{m-1}X}$ in $\wt{\Sym^m X}$, and $z_1$ is supported on
$\wt{\Sym^{m-1}X}$. Now consider the following commutative diagram with the rows exact.
$$
\xymatrix{
\CH^*(\wt{\Sym^{m-1}X}) \ar[r]^-{j'_{*}} \ar[dd]_-{}
& \CH^*(\wt{\Sym^m X}) \ar[r]^-{\rho^{*}} \ar[dd]_-{j_{*}}
& \CH^*(U) \ar[dd]_-{} \
\\ \\
\CH^*(\wt{\Sym^{m-1}X}) \ar[r]^-{j''_*}
& \CH^*(\wt{\Sym^{n} X}) \ar[r]^-{}
& \CH^*(V)
}
$$
Suppose that $j_*(z)=0$. By the above discussion we have
$\rho^*\Gamma'_*j_*(z)=\rho^*(z)=0$. Hence by the localisation exact sequence there exists $z'$ such that $j'_*(z')=z$. The composition $j_*\circ j'_*$ is the push-forward from $\CH_*(\wt{\Sym^{m-1}X})$ to $\CH_*(\wt{\Sym^n X})$, which kills $z'$, and by induction this push-forward is injective.
So we have $z'=0$ and hence $z=0$. This proves the theorem.
\end{proof}
Let $S$ be a smooth projective algebraic surface. Let us consider the diagonal embedding of $S$ into $\Sym^2 S$ and the embedding of $S$ into $\Sym^2 S$ given by $s\mapsto [s,p]$, where $p$ is a fixed closed point on $S$. Let $\wt{\Sym^2 S}$ be the blow up of $\Sym^2 S$ along the image of the diagonal embedding and let $\wt{S}$ be the strict transform of the other copy of $S$ prescribed by the latter embedding.
\begin{corollary}
The closed embedding of $\wt{S}$ into $\wt{\Sym^2 S}$ induces injective push-forward homomorphism at the level of Chow groups of zero cycles.
\end{corollary}
\begin{proof}
This follows from Theorem \ref{theorem 1}.
\end{proof}
$\wt{\Sym^2 S}$ is nothing but the Hilbert scheme of two points on $S$.
The above corollary tells us in particular that the closed embedding of $\wt{S}$ into the Hilbert scheme of two points on a surface $S$ induces an injective push-forward homomorphism at the level of Chow groups of zero cycles.
Using the same technique we can prove that if we blow up the Hilbert scheme once again and consider the total transform of the surface $\wt{S}$, then the closed embedding of the total transform in the blow-up gives rise to an injective push-forward homomorphism at the level of Chow group of zero cycles.
\section{Blow down of symmetric powers}
In this section we are going to consider blow downs of $\Sym^n X$, for a smooth projective variety $X$, and consider the images of $\Sym^m X$ under the blow down. Then we want to investigate the nature of the push-forward homomorphism induced at the level of Chow groups by the closed embedding of the blow down of $\Sym^m X$ into the blow down of $\Sym^n X$.
\begin{theorem}
\label{theorem2}
Let $\wt{\Sym^n X}$ be a smooth blow down of $\Sym^n X$ and let $\wt{\Sym^m X}$ denote the image of $\Sym^m X$ under this blow down. Then the closed embedding of $\wt{\Sym^m X}$ into $\wt{\Sym^n X}$ induces an injective push-forward homomorphism at the level of Chow groups.
\end{theorem}
\begin{proof}
As before consider the correspondence $\Gamma$ given by $(\pi_n\times \pi_m)(Graph(pr))$, where $pr$ is the projection from $X^n$ to $X^m$ and $\pi_i$ is the natural morphism from $X^i$ to $\Sym^i X$. Let $f$ denote the morphism from $\Sym^n X$ to $\wt{\Sym^n X}$, and $f'$ its restriction to $\Sym^m X$ (with image $\wt{\Sym^m X}$). Consider the correspondence $\Gamma'$ on $\wt{\Sym^n X}\times \wt{\Sym^m X}$ given by $(f\times f')_*(\Gamma)$. Then, following \cite{Collino}, we have that $\Gamma'_*j_*$ is induced by $(j\times id)^*(\Gamma')$, where $j$ is the closed embedding of $\wt{\Sym^m X}$ into $\wt{\Sym^n X}$. Now consider the following Cartesian square.
$$
\diagram
{\Sym^m X}\times {\Sym^m X}\ar[dd]_-{f'\times f'} \ar[rr]^-{i\times id} & & {\Sym^n X}\times {\Sym^m X} \ar[dd]^-{f\times f'} \\ \\
\wt{\Sym^m X}\times \wt{\Sym^m X} \ar[rr]^-{j\times id} & & \wt{\Sym^n X}\times \wt{\Sym^m X}
\enddiagram
$$
Then, since the above square is Cartesian and $f,f'$ are birational, we have that
$$(j\times id)^*\Gamma'=(j\times id)^*(f\times f')_*(\Gamma)$$
$$=(f'\times f')_*(i\times id)^*(\Gamma)=(f'\times f')_*(\Delta+Y)$$
Here $i$ is the closed embedding of $\Sym^m X$ into $\Sym^n X$, $\Delta$ is the diagonal of $\Sym^m X\times \Sym^m X$, and $Y$ is supported on $\Sym^m X\times \Sym^{m-1}X$. Now
$$(f'\times f')_*(\Delta)=\Delta$$
the diagonal of $\wt{\Sym^m X}\times \wt{\Sym^m X}$, and $(f'\times f')_*(Y)$ is supported on $\wt{\Sym^m X}\times \wt{\Sym^{m-1}X}$. The multiplicity of the diagonal of $\wt{\Sym^m X}\times \wt{\Sym^m X}$ is $1$, because the morphism $f$ from $\Sym^n X$ to $\wt{\Sym^n X}$ is birational and the multiplicity of $\Delta$ in $\Sym^m X\times \Sym^m X$ is $1$.
Let $\rho$ be the open embedding of the complement of $\wt{\Sym^{m-1}X}$ into $\wt{\Sym^m X}$. Then the above computation shows that
$$\rho^*\Gamma'_*j_*(z)=\rho^*(z+z_1)=\rho^*(z)$$
where $z_1$ is supported on $\wt{\Sym^{m-1}X}$. Now consider the following commutative square with the rows exact.
$$
\xymatrix{
\CH^*(\wt{\Sym^{m-1}X}) \ar[r]^-{j'_{*}} \ar[dd]_-{}
& \CH^*(\wt{\Sym^m X}) \ar[r]^-{\rho^{*}} \ar[dd]_-{j_{*}}
& \CH^*(U) \ar[dd]_-{} \
\\ \\
\CH^*(\wt{\Sym^{m-1}X}) \ar[r]^-{j''_*}
& \CH^*(\wt{\Sym^{n} X}) \ar[r]^-{}
& \CH^*(V)
}
$$
Suppose that $j_*(z)=0$; then we have $\rho^*\Gamma'_*j_*(z)=0$, which gives us $\rho^*(z)=0$. By the localisation exact sequence we have that $z=j'_*(z')$. By the commutativity of the above diagram we have that
$j''_*(z')=0$, and by the induction hypothesis we get $z'=0$ and hence $z=0$. So the homomorphism $j_*$ is injective.
\end{proof}
\section{Fano variety of lines on a cubic fourfold and algebraic cycles}
In this section we consider cubic fourfolds and the Fano variety of lines on them. Suppose that the Fano variety of lines admits a birational map from the Hilbert scheme of two points on a K3 surface $S$. The precise theorem is as follows.
\begin{theorem}
\label{theorem4}
Let $X$ be a cubic fourfold. Let $F(X)$ denote the Fano variety of lines on $X$. Suppose that there exists a birational, dominant map from $\Hilb^2(S)$, the Hilbert scheme of two points on a fixed surface $S$, to $F(X)$. Let $\wt{S}$ be the strict transform of $S$ under the blow up from $\Sym^2 S$ to $\Hilb^2(S)$, and suppose that it intersects the indeterminacy locus of the above rational map transversally. Then we have a codimension $2$ subvariety $D$ in $F(X)$ such that the kernel of the push-forward from $\CH_0(D)$ to $\CH_0(F(X))$ is torsion.
\end{theorem}
\begin{proof}
Let us consider the rational map from $\Hilb^2(S)$ to $F(X)$. Resolving the indeterminacy of this rational map, we get a regular map
$$\widehat{\Hilb^2(S)}\to F(X)$$
where $\widehat{\Hilb^2(S)}$ denotes the blow up of $\Hilb^2(S)$ along the indeterminacy locus. Suppose that this locus is smooth and intersects a copy of $\wt{S}$ transversally. Then first we prove that the closed embedding $\widehat{S}\to \widehat{\Hilb^2(S)}$ gives an injective push-forward homomorphism at the level of Chow groups of zero cycles; here $\widehat{S}$ is the strict transform of $\wt{S}$ under this blow-up.
Let $f_1$ be the natural morphism from $\widehat{\Hilb^2(S)}$ to $\Hilb^2(S)$ and $f_1'$ be the restriction of it to $\widehat{S}$. Consider the correspondence $\Gamma''$ on $\widehat{\Hilb^2(S)}\times \widehat{S}$ given by $(f_1\times f_1')^*(\Gamma')$. Let $\hat{j}$ denote the closed embedding of $\widehat{S}$ into $\widehat{\Hilb^2(S)}$. Then, as before, we have that $\Gamma''_*\hat{j_*}$ is induced by $(\hat{j}\times id)^*(\Gamma'')$, which is equal to
$$(\hat{j}\times id)^*(f_1\times f_1')^*\Gamma'$$
which is
$$(f'_1\times f'_1)^*(\Delta+Y_1+E_1)$$
where $E_1$ is the intersection of the exceptional locus of the blow up $\Hilb^2(S)\to \Sym^2 S$ with $\wt{S}$ and $Y_1$ is supported on $\wt{S}\times \PR^1$. Then we have
$$(f'_1\times f'_1)^*(\Delta+Y_1+E_1)=\Delta+\hat{E}_2+\hat{E_1}+\hat{Y_1}\;.$$
Here $\hat{E}_2$ is the intersection of the exceptional locus $\hat{E}$ of the blow up $\widehat{\Hilb^2(S)}\to \Hilb^2(S)$ with $\hat{S}$. Then consider the open embedding of the complement of $\hat{Y}$ in $\hat{S}$ (where $\hat{Y}$ is a $\PR^1$-bundle over $\PR^1$, or a rational curve). Call it $\rho$. We have, by the Chow moving lemma,
$$\rho^*\Gamma''_*\hat{j}_*(z)=\rho^*(z+z_1)=\rho^*(z)\;.$$
Here $z_1$ is supported on $\hat{Y}$. Consider the following commutative diagram where the rows are exact.
$$
\xymatrix{
\CH^*(\hat{Y}) \ar[r]^-{j'_{*}} \ar[dd]_-{}
& \CH^*(\hat{S}) \ar[r]^-{\rho^{*}} \ar[dd]_-{\hat{j}_{*}}
& \CH^*(U) \ar[dd]_-{} \
\\ \\
\CH^*(\hat{Y}) \ar[r]^-{j''_*}
& \CH^*(\widehat{\Hilb^2(S)}) \ar[r]^-{}
& \CH^*(V)
}
$$
Suppose that $\hat{j}_*(z)=0$; then we have $\rho^*\Gamma''_*\hat{j}_*(z)=0$, which gives $\rho^*(z)=0$. So there exists $z'$ such that $j'_*(z')=z$. The map $\CH_0(\hat{Y})\to \CH_0(\widehat{\Hilb^2 S})$ is injective. So $z'=0$, hence $z=0$.
Now we have a regular (generically finite) map from $\widehat{\Hilb^2 (S)}$ to $F(X)$ and let $D$ denote the image of $\hat{S}$.
First we prove that $D$ is a surface, that is, the dimension does not drop under the blow-down from $\widehat{\Hilb^2 (S)}$ to $F(X)$.
For that consider the universal family
$$\bcU=\{([x,y],p)|p\in [x,y]\}\subset \Sym^2 S\times S$$
consider its pull-back to $\Hilb^2 S$ and further to $\widehat{\Hilb^2(S)}$. Consider the projection $\bcU\to S$. For $p\in S$, the fiber of the projection over $p$ is $S_p$, that is, a copy of $S$ in $\Sym^2 S$ embedded by $x\mapsto [x,p]$. Now given any $[x,y]$ in $\Sym^2 S$, it belongs to some $S_p$. So the $S_p$'s cover $\Sym^2 S$. Therefore the strict transforms of the $S_p$'s, under the blow up from $\Sym^2 S$ to $\Hilb^2(S)$, cover $\Hilb^2(S)$. Hence there exists one $S_p$ such that $\wt{S_p}$ is not contained in the indeterminacy locus of the rational map from $\Hilb^2(S)$ to $F(X)$. Moreover we can prove that the collection of points $p$ in $S$ such that $S_p$ is contained in the above-mentioned indeterminacy locus is Zariski closed. Similarly the strict transforms $\hat{S_p}$ cover $\widehat{\Hilb^2(S)}$, hence there exists one $\hat{S_p}$ which is not contained in the blow-down locus of the map $\widehat{\Hilb^2(S)}\to F(X)$. The collection of $p$ such that $\hat{S_p}$ is contained in the blow-down locus is Zariski closed. Hence there exists $p$ such that $\wt{S_p}$ is not contained in the indeterminacy locus of the rational map $\Hilb^2(S)\to F(X)$ and also not contained in the center of the blow down $\widehat{\Hilb^2(S)}\to F(X)$. The image of that $\hat{S_p}$ is a surface. That is our $D$.
Then we prove that the kernel of the push-forward homomorphism from $\CH_0(D)$ to $\CH_0(F(X))$ is torsion. So let $h$ be the closed embedding of $D$ into $F(X)$ and $g$ the map from $\widehat{\Hilb^2(S)}$ to $F(X)$ and $g'$ its restriction to $\hat{S}$. Then consider the correspondence $\Gamma'''$ to be $(g\times g')_*(\Gamma'')$. The homomorphism $\Gamma'''_*h_*$ is induced by the cycle $(h\times id)^*(\Gamma''')$, which is
$$(h\times id)^*(g\times g')_*(\Gamma'')$$
and is equal to the fundamental cycle \cite{Fulton}[1.5] associated to (since $g$ is birational and generically finite)
$$(g'\times g')(\hat{j}\times id)^{-1}(\Gamma'')+\hat{E_3}$$
where $\hat{E_3}$ is supported on the product $(D\cap B)\times (D\cap B)$, and $B$ is the image of the blown down variety.
So the above is equal to
$$(d\Delta+(g'\times g')_*(\hat{E}_2+\hat{E_1}+\hat{Y}))+\hat{E_3}$$
where $(g'\times g')(\hat{Y})$ is supported on $D\times V$, where $V$ is the image of the $\PR^1$-bundle $\hat{Y}$, or on $D\times C$, where $C$ is a rational curve (possibly singular) on $D$. So let $\rho$ be the open embedding of the complement of $g'(\hat{Y})$ in $D$. Then, as before, we have that
$$\rho^*\Gamma'''_*h_*(z)=\rho^*(dz+z_1)$$
where $z_1$ is supported on $g'(\hat{Y})$ (this is obtained by Chow moving lemma). Then consider the following commutative diagram with the rows exact.
$$
\xymatrix{
\CH^*(g'(\hat{Y})) \ar[r]^-{h'_{*}} \ar[dd]_-{}
& \CH^*(D) \ar[r]^-{\rho^{*}} \ar[dd]_-{h_{*}}
& \CH^*(U) \ar[dd]_-{} \
\\ \\
\CH^*(g'(\hat{Y})) \ar[r]^-{j''_*}
& \CH^*(F(X)) \ar[r]^-{}
& \CH^*(V)
}
$$
Then, if $h_*(z)=0$, it follows that $d(\rho^*(z))=0$. So we have $h'_*(z')=dz$ for some $z'$. Now either $\CH_0(g'(\hat{Y}))\to \CH_0(F(X))$ is injective, as $V$ is rational, or $\CH_0(g'(\hat{Y}))$ is torsion, since $C$ is rational; this tells us that $z'=0$ or $d'z'=0$ for some nonzero integer $d'$, so $dz=0$ or $dd'z=0$. So the kernel of $h_*$ is torsion.
\end{proof}
Now all these techniques are applicable in the following setup. Let us embed $\Sym^2 S$ into some projective space. Let $C$ be a smooth hyperplane section of $S$ inside $\Sym^2 S$ such that $C$ intersects the diagonal transversally (or does not intersect it). Then consider the strict transform $\wt{C}$ inside $\wt{\Sym^2 S}=\Hilb^2(S)$; this gives us a push-forward homomorphism at the level of Chow groups of zero cycles which has torsion kernel. When we blow up $\Hilb^2 S$, we get the strict transform $\hat{C}$, which again induces a push-forward at the level of Chow groups from $\CH_0(\hat{C})$ to $\CH_0(\widehat{\Hilb^2(S)})$ having torsion kernel. Then, pushing everything down to $F(X)$, we get that the image of $\hat{C}$ inside $F(X)$, say $C'$, is such that the push-forward $\CH_0(C')\to \CH_0(F(X))$ has torsion kernel. Now consider an embedding of $\Sym^2 S$ into $\PR^N$ such that the general hyperplane section of $S$ is non-rational. Now consider the family
$$\bcH=\{([x,y],p,t)|p\in [x,y], [x,y]\in \Sym^2 S\cap H_t\}\subset \Sym^2 S\times S\times {\PR^N}^*$$
Now, given any $p$ in $S$, the fiber of the projection $\bcH\to S$ over $p$ is nothing but the family of hyperplane sections of $S_p$. A general such hyperplane section is non-rational; call it $C_t$. As in the previous Theorem \ref{theorem4}, there exists $p$ such that $\hat{S_p}$ is not contained in the exceptional divisor of the blow up $\widehat{\Hilb^2(S)}\to \Hilb^2(S)$ and also not contained in the center of the blow down of $\widehat{\Hilb^2(S)}$ to $F(X)$. Consider a general hyperplane section of $S_p$ which intersects the diagonal of $\Sym^2 S$ properly. Then the strict transform $\hat{C_t}$ of such a $C_t$ under the two successive blow-ups is non-rational. Also, since $\hat{C_t}$ is not contained in the center of the blow-down of the map $\widehat{\Hilb^2(S)}\to F(X)$, the image of $\hat{C_t}$ under the blow-down remains non-rational. So we have the following theorem:
\begin{theorem}
\label{theorem3}
Let $X$ be a smooth cubic fourfold in $\PR^5$ and let $F(X)$ be the Fano variety of lines on the cubic fourfold $X$. Suppose that the cubic $X$ is such that there exists a birational map from the Hilbert scheme of two points on a fixed K3 surface to the Fano variety of $X$, and that the indeterminacy locus of the rational map $\Hilb^2 S\to F(X)$ is smooth. Then there exists a non-rational curve $D$ in $F(X)$ such that the push-forward induced by the inclusion of $D$ into $F(X)$ has torsion kernel.
\end{theorem}
\subsection{Relation of the above main theorem with the work of Voisin}
\label{Subsec1}
In the recent work of Voisin, \cite{Vo}, it has been proved that a cubic fourfold admits a Chow-theoretic decomposition of the diagonal if and only if it admits a cohomological decomposition of the diagonal. By a Chow-theoretic decomposition we mean that the diagonal on the two-fold product of the cubic $X$ is rationally equivalent to $X\times x+Z$, where $Z$ is supported on $D\times X$ for a proper closed subscheme $D$ of $X$. A cohomological decomposition means that such a decomposition holds at the level of cohomology. It has been proved in \cite{Vo} that the Chow-theoretic decomposition of the diagonal of $X$ is equivalent to the universal triviality of $\CH_0(X)$, meaning that $\CH_0(X_L)$ is isomorphic to $\mathbb Z$ for all field extensions $L$ of $k$.
Let $D$ be the smooth projective surface inside $F(X)$ such that the push-forward induced by the closed embedding $j$ of $D$ into $F(X)$ has torsion kernel. This torsionness actually follows from the fact that the cycle $(j\times id)^*\Gamma'''-d\Delta$ is rationally equivalent to $\hat{E}_2+\hat{E_1}+\hat{Y_1}$, where this latter cycle is supported on $Z\times D$ or $D\times Z'$, with $Z,Z'$ proper Zariski closed subsets in $D$. Let us write $\Gamma=(j\times id)^*(\Gamma''')$ and consider its image under the homomorphism $\CH^2(D\times D)\to \CH_0(D_{k(D)})$, call it $\Gamma_{k(D)}$. Consider also the image of the diagonal under this homomorphism and denote it by $\delta$. Suppose that $\CH_0(D_{k(D)})$ is isomorphic to $\ZZ$. Then
$$\Gamma_{k(D)}-d\delta$$
is rationally equivalent to $nx_{k(D)}$, where $x$ is a closed $k$-point on $X$. The above would mean that the restriction of the cycle
$$\Gamma-d\Delta$$
is rationally equivalent to the restriction of $X\times x$ on $U\times X$, for some Zariski open $U$ inside $X$. Therefore
$$\Gamma-d\Delta$$
is rationally equivalent to $X\times x+Z$, where $Z$ is supported on $D\times X$, with $D$ the complement of $U$ in $X$. Therefore the universal triviality of $\CH_0(D)$ gives the decomposition of $\Gamma-d\Delta$, which in turn gives the torsionness of the kernel of $j_*$.
Therefore we have the following theorem:
\begin{theorem}
Universal triviality of $\CH_0(D)$ and the assumption that the Fano variety $F(X)$ is birational to the Hilbert scheme of two points on a K3 surface, implies the torsionness of the kernel of $j_*$, where $j$ is the closed embedding of $D$ into $F(X)$.
\end{theorem}
Therefore if we can prove that $\ker(j_*)$ is non-torsion, then that would imply that $\CH_0(D)$ is not universally trivial or that the Fano variety is not birational to the Hilbert scheme of two points on a K3 surface.
It is worthwhile to mention that, in the paper \cite{MS}, Shen proved that the universal line correspondence $P$ on $F(X)\times X$ gives rise to a surjective homomorphism from $\CH_0(F(X)_L)$ to $\CH_1(X_L)$ for all field extensions $L/k$. Here we prove something similar. Consider the base change of $D$ and $F(X)$ by $k(D)$; then, as before, we can prove that the embedding of $D_{k(D)}$ into $F(X)_{k(D)}$ induces a push-forward with torsion kernel at the level of Chow groups of zero cycles. This follows from the observation that
$$(j\times id)^*(\Gamma''')_{k(D)}=(j_{k(D)}\times \id_{k(D)})^*(\Gamma'''_{k(D)})=d\Delta_{k(D)}-Y_{k(D)}$$
where $Y$ is supported on $Z\times D$ or $D\times Z'$. So we can say that the push-forward has torsion kernel universally.
\end{document}
\begin{document}
\title{Bidirectional Nested Weighted Automata}
\label{s:intro}
\begin{abstract}
Nested weighted automata (NWA) present a robust and convenient
automata-theoretic formalism for quantitative specifications.
Previous works have considered NWA that processed input words only in the
forward direction.
It is natural to allow the automata to process input words backwards as well,
for example, to measure the maximal or average time between a response and
the preceding request.
We therefore introduce and study bidirectional NWA that can process input
words in both directions.
First, we show that bidirectional NWA can express interesting quantitative
properties that are not expressible by forward-only NWA.
Second, for the fundamental decision problems of emptiness and universality,
we establish decidability and complexity results for the new framework which
match the best-known results for the special case of forward-only NWA.
Thus, for NWA, the increased expressiveness of bidirectionality is
achieved at no additional computational complexity.
This is in stark contrast to the unweighted case, where bidirectional finite
automata are no more expressive but exponentially more succinct than their
forward-only counterparts.
\end{abstract}
\section{Introduction}
We study an extension of nested weighted automata (NWA)~\cite{nested} that can process words in both directions.
We show that this new and natural framework can express many interesting
quantitative properties that the previous formalism could not.
We establish decidability and complexity results of the basic
decision problems for the new framework.
We start with the motivation for quantitative properties,
then describe NWA and our new framework, and finally the contributions.
\noindent{\em Weighted automata}.
Automata-theoretic formalisms provide a natural way to express quantitative
properties of systems.
Weighted automata extend finite automata in that every transition is
assigned an integer, called a weight.
Thus a run of an automaton gives rise to a sequence of weights.
A value function aggregates the sequence of weights into a single value.
For non-deterministic weighted automata, the value of a word
$w$ is the infimum value of all runs over~$w$.
First, weighted automata were studied over finite words with weights
from a semiring, and ring multiplication as value function~\cite{Droste:2009:HWA:1667106},
and later extended to infinite words with limit averaging or supremum as
value function~\cite{Chatterjee08quantitativelanguages,DBLP:journals/corr/abs-1007-4018,Chatterjee:2009:AWA:1789494.1789497}.
While weighted automata over semirings can express several
quantitative properties~\cite{DBLP:journals/jalc/Mohri02}, they cannot
express long-run average properties that weighted automata with limit
averaging can~\cite{Chatterjee08quantitativelanguages}.
However, even weighted automata with limit averaging cannot express
some basic quantitative properties (see~\cite{nested}).
\noindent{\em Nested weighted automata}.
A natural extension of weighted automata is to add nesting,
which leads to \emph{nested weighted automata (NWA)}~\cite{nested}.
A nested weighted automaton consists of a master automaton and a set
of slave automata. The master automaton runs over input infinite words.
At every transition the master can invoke a slave automaton that runs
over a finite subword of the infinite word, starting at the position where
the slave automaton is invoked.
Each slave automaton terminates after a finite number of steps and returns
a value to the master automaton.
Each slave automaton is equipped with a value function for finite words,
and the master automaton aggregates the returned values from slave automata
using a value function for infinite words.
\noindent{\em Advantages of NWA}.
We discuss the various advantages of NWA.
\begin{compactenum}
\item For Boolean finite automata, nested automata are equivalent to the
non-nested counterpart, whereas NWA are strictly more expressive than
non-nested weighted automata~\cite[Example~5]{nested}.
It has been shown in~\cite{nested} that NWA provide a specification framework
where many basic quantitative properties can be expressed, which
cannot be expressed by weighted automata.
\item NWA provide a natural and convenient way to express quantitative
properties.
Every slave automaton computes a subproperty,
which is then combined using the master automaton.
Thus NWA allow to decompose properties conveniently, and provide a natural
framework to study quantitative run-time verification.
\item Finally, subclasses of NWA are equivalent in expressive power with
automata with monitor counters~\cite{nested-sas}, and thus they provide a
robust framework to express quantitative properties.
\end{compactenum}
\noindent{\em Bidirectional NWA.}
Previous works considered slave automata that can only process input
words in the forward direction (forward-only NWA).
However, to specify quantitative properties, it is natural to allow slave automata to run
backwards, for example, to measure the maximal or average time between a response and
the preceding request.
In this work we consider this natural extension of NWA, namely
{\em bidirectional NWA}, where slave automata can process words in the forward
as well as the backward direction.
\noindent{\em Natural properties.}
First, we show that many natural properties can be expressed in the
bidirectional NWA framework.
We present two examples below (details in Section~\ref{s:examples}).
\begin{compactenum}
\item {\em Average energy level.} Consider a quantitative setting where each weight
represents energy gain or consumption, and thus the sum of weights represents
the energy level.
To express the average energy level property, the master automaton has
long-run average as the value function, and at every transition it invokes
a slave automaton that walks backward with sum value function for the weights.
Thus the average energy level property is naturally expressed by NWA with
backward-walking slave automata, while this property is not expressible
by NWA with forward-walking slave automata.
\item {\em Data-consistency property (DCP).}
Consider the data-consistency property (DCP) where the input letters correspond to
reads, writes, null instructions, and commits.
For each read, the distance to the previous commit measures how fresh
the read is with respect to the last commit, and this can be measured with
a backward-walking slave automaton.
For each write, the distance to the next commit measures how fresh
the write is with respect to the following commit, and this can be measured with
a forward-walking slave automaton.
Thus the average freshness, called DCP, is expressed
with bidirectional NWA.
Moreover, the DCP can neither be expressed by NWA with only forward-walking
slave automata nor by NWA with only backward-walking slave automata.
\end{compactenum}
\noindent{\em Our contributions.}
We propose bidirectional NWA as a specification framework for quantitative
properties.
First, we show that the classes of forward-only NWA and backward-only NWA
have incomparable expressiveness, and bidirectional NWA strictly generalize
both classes.
Second, we establish complexity of the emptiness and universality problems
for bidirectional NWA, where we consider the limit-average value function
for the master automaton and for the slave automata we consider standard
value functions for finite words (such as min, max, and variants of sum).
The obtained complexity results coincide with the results for forward-only NWA,
and range from $\mathrm{NLOGSPACE}$-complete and $\PTIME$ to $\PSPACE$-complete and $\EXPSPACE$.
However the proofs for bidirectional NWA are much more involved than forward-only
NWA.
Thus bidirectional NWA have all the advantages of NWA but provide a more expressive
framework for natural quantitative properties. Moreover, the added expressiveness of
bidirectionality is achieved with no increase in the computational complexity
of the decision problems~(Table~\ref{tab:complexity}).
We highlight two significant differences as compared to the unweighted case:
(1)~In the unweighted case bidirectionality does not change expressiveness,
whereas we show for NWA it does; and
(2)~in the unweighted case for deterministic automata bidirectionality leads to
exponential succinctness and increase in complexity of the decision problems,
whereas for NWA bidirectionality does not change the computational complexity.
Thus the combination of nesting and bidirectionality is very interesting in
the weighted automata setting, which we study in this work.
\noindent{\em Related works.}
Quantitative automata and logic have been extensively studied in recent years
in many different contexts~\cite{Droste:2009:HWA:1667106,Chatterjee08quantitativelanguages,boundsInWRegularity,DBLP:journals/jacm/AlmagorBK16}.
The book~\cite{Droste:2009:HWA:1667106} presents an excellent collection of results
of weighted automata on finite words.
Weighted automata on infinite words have been studied in~\cite{Chatterjee08quantitativelanguages,DBLP:journals/corr/abs-1007-4018,DrosteR06}.
Weighted automata over finite words extended with monitor counters have been considered (under the name of
cost register automata) in~\cite{DBLP:conf/lics/AlurDDRY13,copylessCRA}.
A version of nested weighted automata over finite words has been
studied in~\cite{bollig2010pebble}, and nested weighted automata over
infinite words has been studied in~\cite{nested,nestedprob,nested-mfcs}.
Several quantitative logics have also been studied, such as~\cite{BokerCHK14,BouyerMM14,AlmagorBK14}.
However, none of these works consider the rich and expressive formalism of quantitative
properties expressible by NWA with slaves that walk both forward and backward,
retaining decidability of the basic decision problems.
In the main paper, we present the key ideas and main intuitions of the proofs of selected results,
and detailed proofs are relegated to the appendix.
\section{Definitions}
\label{s:definition}
\subsection{Words and automata}
\Paragraph{Words}.
We consider a finite \emph{alphabet} of letters $\Sigma$.
A \emph{word} over $\Sigma$ is a (finite or infinite) sequence of letters from $\Sigma$.
We denote the $i$-th letter of a word $w$ by $w[i]$, and for $i < j$ we
define $w[i,j]$ as the word $w[i] w[i+1] \ldots w[j]$.
The length of a finite word $w$ is denoted by $|w|$; and the length of an infinite word
$w$ is $|w| = \infty$.
For an infinite word $w$, word $w[i,\infty]$ is the suffix of $w$
with first $i-1$ letters removed.
For a finite word $w$ of length $k$, we define the reverse of $w$, denoted by $w^R$,
as the word $w[k] w[k-1] \ldots w[1]$.
\Paragraph{Labeled automata}. For a set $X$, an \emph{$X$-labeled automaton} ${\cal A}$ is a tuple
$\tuple{\Sigma, Q, Q_0, \delta, F, {C}}$, where
(1)~$\Sigma$ is the alphabet,
(2)~$Q$ is a finite set of states,
(3)~$Q_0 \subseteq Q$ is the set of initial states,
(4)~$\delta \subseteq Q \times \Sigma \times Q$ is a transition relation,
(5)~$F$ is a set of accepting states,
and
(6)~${C} : \delta \mapsto X$ is a labeling function.
A labeled automaton $\tuple{\Sigma, Q, \{q_0\}, \delta, F, {C}}$ is
\emph{deterministic} if and only if
$\delta$ is a function from $Q \times \Sigma$ into $Q$
and $Q_0$ is a singleton.
\Paragraph{Semantics of (labeled) automata}.
A \emph{run} $\pi$ of a (labeled) automaton ${\cal A}$ on a word $w$ is a sequence of states
of ${\cal A}$ of length $|w|+1$
such that $\pi[0]$ belongs to the initial states of ${\cal A}$
and for every $0 \leq i \leq |w|-1$ we have $(\pi[i], w[i+1], \pi[i+1])$ is a transition of ${\cal A}$.
A run $\pi$ on a finite word $w$ is \emph{accepting} if and only if the last state $\pi[|w|]$ of the run
is an accepting state of ${\cal A}$.
A run $\pi$ on an infinite word $w$ is \emph{accepting} if and only if some accepting state of ${\cal A}$ occurs
infinitely often in $\pi$.
For an automaton ${\cal A}$ and a word $w$, we define $\mathsf{Acc}(w)$ as the set of accepting runs on $w$.
Note that for deterministic automata, every word $w$ has at most one accepting run ($|\mathsf{Acc}(w)| \leq 1$).
\Paragraph{Weighted automata and their semantics}.
A \emph{weighted automaton} is a $\mathbb{Z}$-labeled automaton, where $\mathbb{Z}$ is the set of integers.
The labels are called \emph{weights}.
We define the semantics of weighted automata in two steps. First, we define the value of a
run. Second, we define the value of a word based on the values of its runs.
To define values of runs, we will consider \emph{value functions} $f$ that
assign real numbers to sequences of integers.
Given a non-empty word $w$, every run $\pi$ of ${\cal A}$ on $w$ defines a sequence of weights
of successive transitions of ${\cal A}$, i.e.,
${C}(\pi)=({C}(\pi[i-1], w[i], \pi[i]))_{1\leq i \leq |w|}$;
and the value $f(\pi)$ of the run $\pi$ is defined as $f({C}(\pi))$.
We denote by $({C}(\pi))[i]$ the weight of the $i$-th transition,
i.e., ${C}(\pi[i-1], w[i], \pi[i])$.
The value of a non-empty word $w$ assigned by the automaton ${\cal A}$, denoted by $\valueL{{\cal A}}(w)$,
is the infimum of the set of values of all {\em accepting} runs;
i.e., $\inf_{\pi \in \mathsf{Acc}(w)} f(\pi)$, and we have the usual semantics that the infimum of the
empty set is infinite, i.e., the value of a word that has no accepting run is infinite.
Every run $\pi$ on the empty word has length $1$ and the sequence ${C}(\pi)$ is empty, hence
we define the value $f(\pi)$ as an external (not a real number) value $\bot$.
Thus, the value of the empty word is either $\bot$, if the empty word is accepted by ${\cal A}$, or $\infty$
otherwise.
To indicate a particular value function $f$ that defines the semantics,
we call a weighted automaton ${\cal A}$ with value function $f$ an $f$-automaton.
\Paragraph{Value functions}.
For finite runs we consider the following classical value functions: for runs of length $n+1$ we have
\begin{compactitem}
\item {\em Max and min:}
$\textsc{Max}(\pi) = \max_{i=1}^n ({C}(\pi))[i]$ and
$\textsc{Min}(\pi) = \min_{i=1}^n ({C}(\pi))[i]$.
\item \emph{Sum and absolute sum:} the sum function
$\textsc{Sum}(\pi) = \sum_{i=1}^{n} ({C}(\pi))[i]$,
the absolute sum
$\textsc{Sum}^+(\pi) = \sum_{i=1}^{n} \mathop{\mathsf{Abs}}(({C}(\pi))[i])$, where $\mathop{\mathsf{Abs}}(x)$ is the absolute value of $x$.
\item \emph{Variants of bounded sum:} we consider a family of functions, called variants of the bounded sum value function $\fBsum{B}$.
Each of these functions returns the sum if all the
partial sums are in the interval $[L,U]$; otherwise there are many possibilities, which lead to multiple variants.
For example, we can require that for all prefixes $\pi'$ of $\pi$ we have $\textsc{Sum}(\pi') \in [L,U]$. We can impose a similar restriction on all suffixes, all infixes etc.
Moreover, if partial sums are not contained in $[L,U]$, a bounded sum can return $\infty$, the first violated bound, etc.
\end{compactitem}
For infinite runs we consider:
\begin{compactitem}
\item {\em Limit average:} $\textsc{LimAvg}(\pi) = \liminf\limits_{k \rightarrow \infty} \frac{1}{k} \cdot \sum_{i=1}^{k} ({C}(\pi))[i]$.
\end{compactitem}
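To make these value functions concrete, the following Python sketch (ours, purely illustrative; the names are not part of the formalism) evaluates the finite-word value functions on a sequence of weights and lists the prefix averages whose limit inferior defines $\textsc{LimAvg}$ on infinite runs.
\begin{verbatim}
# Illustrative sketch: finite-word value functions on a sequence of weights.

def val_max(ws):
    return max(ws)                       # Max

def val_min(ws):
    return min(ws)                       # Min

def val_sum(ws):
    return sum(ws)                       # Sum

def val_abs_sum(ws):
    return sum(abs(w) for w in ws)       # Sum^+ (absolute sum)

def val_bounded_sum(ws, L, U):
    # One variant of bounded sum: return the sum if every prefix sum stays
    # in [L, U], and +infinity as soon as some prefix sum leaves the interval.
    s = 0
    for w in ws:
        s += w
        if not (L <= s <= U):
            return float('inf')
    return s

def prefix_averages(ws):
    # LimAvg of an infinite run is the limit inferior of these prefix averages;
    # on a finite prefix we can only list them.
    out, s = [], 0.0
    for k, w in enumerate(ws, start=1):
        s += w
        out.append(s / k)
    return out

weights = [2, -1, 3, 0, -2]
print(val_max(weights), val_min(weights), val_sum(weights), val_abs_sum(weights))
print(val_bounded_sum(weights, L=-2, U=4))
print(prefix_averages(weights))
\end{verbatim}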
\Paragraph{Silent moves}. Consider a $(\mathbb{Z} \cup \{ \bot\})$-labeled automaton. We consider such an automaton as an extension
of a weighted automaton in which transitions labeled by $\bot$ are \emph{silent}, i.e., they do not contribute to
the value of a run. Formally, for every function $f \in \mathsf{InfVal}$ we define
$\silent{f}$ as the value function that applies $f$ on sequences after removing $\bot$ symbols.
The significance of silent moves is as follows: they allow us to ignore transitions, and thus provide
robustness, as properties can be specified based on desired events rather than steps.
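As an illustration of the semantics above, the following Python sketch (ours; the automaton is a toy example, not taken from the paper) enumerates all runs of a small nondeterministic weighted automaton on a finite word, keeps the accepting ones, applies a value function after dropping silent ($\bot$) weights, and returns the minimum, i.e., the infimum over the finitely many accepting runs; a word with no accepting run gets the value $\infty$. The corner case of an accepting run all of whose weights are silent (value $\bot$) is ignored here.
\begin{verbatim}
import math

BOT = None  # stands for the silent weight (the bottom symbol)

# Transition relation: (state, letter) -> list of (next state, weight) pairs.
delta = {
    ('q0', 'a'): [('q0', 1), ('q1', 0)],
    ('q0', 'b'): [('q0', BOT)],
    ('q1', 'b'): [('q1', 2)],
}
initial = {'q0'}
accepting = {'q1'}

def accepting_runs(word):
    # Yield the weight sequence of every accepting run on the finite word.
    def go(state, i, weights):
        if i == len(word):
            if state in accepting:
                yield weights
            return
        for nxt, w in delta.get((state, word[i]), []):
            yield from go(nxt, i + 1, weights + [w])
    for q in initial:
        yield from go(q, 0, [])

def value(word, f):
    # Value of the word: infimum of f over accepting runs, ignoring silent weights.
    vals = [f([w for w in ws if w is not BOT]) for ws in accepting_runs(word)]
    return min(vals) if vals else math.inf

print(value('aabb', sum))   # 5: the only accepting run has weights [1, 0, 2, 2]
print(value('ab', max))     # 2: the accepting run has weights [0, 2]
print(value('b', sum))      # inf: no accepting run ends in q1
\end{verbatim}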
\subsection{Nested weighted automata}
Nested weighted automata (NWA) were introduced in~\cite{nested} and originally
allowed slave automata to move only forward. The variant we define here allows two types of slave automata, forward-walking and backward-walking.
The original definition of NWA from~\cite{nested} is versatile and hence it
can be seamlessly extended to the case with bidirectional (forward- and backward-walking) slave automata.
We follow the description of~\cite{nested}.
\noindent{\em Informal description.}
A \emph{nested weighted automaton} consists of a labeled automaton over infinite words,
called the \emph{master automaton}, a value function $f$ for infinite words,
and a set of weighted automata over finite words, called \emph{slave automata}.
A nested weighted automaton can be viewed as follows:
given a word, we consider the run of the master automaton on the word,
but the weight of each transition is determined by dynamically running
slave automata; and then the value of a run is obtained using the
value function $f$.
That is, the master automaton proceeds on an input word as a usual automaton,
except that before taking a transition, it starts a slave automaton
corresponding to the label of the current transition.
The slave automaton starts at the current position of the master automaton in the input word
and works on some finite part of it.
There are two types of slave automata: (a)~forward walking, which move forward along the input word (toward higher positions), and
(b)~backward walking, which move towards the beginning of the input word.
Once a slave automaton finishes,
it returns its value to the master automaton, which treats the returned
value as the weight of the current transition that is being executed.
The slave automaton might immediately accept and return value $\bot$,
which corresponds to a \emph{silent transition}, i.e., transition with no weight.
If one of the slave automata rejects, the nested weighted automaton rejects.
We present two examples of properties expressible by NWA.
Additional examples are presented in Section~\ref{s:examples}.
\begin{example}[Average response time and its dual]
\label{ex:ART}
\label{ex:dual-ART}
Consider infinite words over $\{r,g,\#\}$, where $r$ represents
requests, $g$ represents grants, and $\#$ represents idle.
A basic and interesting property is the average number of letters
between a request and the corresponding grant, which represents the
long-run {\em average response time (ART)} of the system.
This property cannot be expressed by a non-nested automaton~\cite{nested}.
ART can be expressed by a deterministic nested weighted automaton,
which basically implements the definition of ART.
This automaton invokes at every request a forward-walking slave automaton with $\textsc{Sum}^+$ value function, which counts the number of events until the following grant.
On the other events the NWA takes silent transitions.
Finally, the master automaton applies $\textsc{LimAvg}$ value function to the values returned by slave automata.
Figure~\ref{fig:ARTAW} presents a run of the NWA computing ART.
\begin{figure}
\caption{Runs of NWA computing ART (above) and AW (below). Each weight of a transition is dynamically computed as the sum of weights of slave automata. The thick arrows depict directions of slave automata. }
\label{fig:ARTAW}
\end{figure}
We define the average workload property (AW), which measures
the average number of pending requests. The average is computed over all positions in a word.
Intuitively, if we pick a position in word $w$ at random, the expected number of pending requests is the average workload of~$w$.
Formally, we define the workload at $i$ in $w$, denoted $wl(w,i)$, as the number of letters $r$ among $w[j,i]$, where $j$ is the last
position in $w[1,i]$ where $g$ occurs or $1$ if such a position does not exist.
The average workload of $w$ is the limit average over all positions $i$ of $wl(w,i)$.
AW can be expressed by a deterministic $(\textsc{LimAvg};\textsc{Sum}^+)$-automaton with backward-walking slave automata.
Basically, the NWA invokes at every position a slave automaton, which counts the number of $r$ letters from its current position to the first position containing the letter $g$, where it terminates. Since slave automata run backwards, each of them computes the workload at the position of its invocation.
Figure~\ref{fig:ARTAW} presents a run of the NWA computing AW.
\end{example}
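\noindent\emph{Illustration}. The following Python sketch computes ART and AW on a finite word under one plausible reading of the definitions above (response time is taken as the number of steps from a request up to the following grant); plain averages over a finite word stand in for the limit averages used on infinite words, and all names are illustrative.
\begin{verbatim}
def art(word):
    # Average response time: for every request, the number of steps
    # until the following grant; assumes every request is eventually
    # granted within the finite word.
    times = []
    for i, a in enumerate(word):
        if a == 'r':
            j = i
            while word[j] != 'g':
                j += 1
            times.append(j - i)
    return sum(times) / len(times)

def aw(word):
    # Average workload: at each position, the number of 'r' letters
    # since the last 'g' (inclusive of the current position).
    loads, last_g = [], 0
    for i, a in enumerate(word):
        if a == 'g':
            last_g = i
        loads.append(word[last_g:i + 1].count('r'))
    return sum(loads) / len(loads)

w = list("r##gr#g##")
print(art(w), aw(w))
\end{verbatim}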
Now, we present a formal definition of NWA and their semantics.
\Paragraph{Nested weighted automata}.
A \emph{nested weighted automaton} (NWA) with bidirectional slave automata is a tuple $\tuple{{\cal A}_{mas}; f; {\mathfrak{B}}_{-m},\ldots, {\mathfrak{B}}_0, \ldots, {\mathfrak{B}}_l}$, with $m,l \in \mathbb{N}$ where
(1)~${\cal A}_{mas}$, called the \emph{master automaton}, is a $\{-m, \ldots, l\}$-labeled automaton over infinite words
(the labels are the indexes of automata ${\mathfrak{B}}_{-m}, \ldots, {\mathfrak{B}}_l$),
(2)~$f$ is a value function on infinite words, called the \emph{master value function}, and
(3)~${\mathfrak{B}}_{-m}, \ldots, {\mathfrak{B}}_l$ are weighted automata over finite words called \emph{slave automata}.
Intuitively, an NWA can be regarded as an $f$-automaton whose weights are dynamically computed at every step by the corresponding slave automaton.
The automata ${\mathfrak{B}}_{-m}, \ldots, {\mathfrak{B}}_{-1}$ (resp., ${\mathfrak{B}}_{1}, \ldots, {\mathfrak{B}}_l$) are called \emph{backward walking}
(resp., \emph{forward walking}) slave automata.
We refer to NWA with both forward and backward walking slave automata as
{\em bidirectional NWA}.
The automaton ${\mathfrak{B}}_0$ immediately accepts and returns no weight; it is used to implement silent transitions.
We define an \emph{$(f;g)$-automaton} as an NWA where the master value function is $f$ and all slave automata are $g$-automata.
\Paragraph{Semantics: runs and values}.
A \emph{run} of $\mathbb{A}$ on an infinite word $w$ is an infinite sequence
$(\Pi, \pi_1, \pi_2, \ldots)$ such that
(1)~$\Pi$ is a run of ${\cal A}_{mas}$ on $w$;
(2)~for every $i>0$ the label $j = {C}(\Pi[i-1], w[i], \Pi[i])$ points to a slave automaton and
(a)~if $j < 0$, then $\pi_i$ is a run of the automaton ${\mathfrak{B}}_j$ on some prefix of the reverse word $(w[1,i])^R$, and
(b)~if $j \geq 0$, then $\pi_i$ is a run of the automaton ${\mathfrak{B}}_j$ on some finite prefix of $w[i,\infty]$.
The run $(\Pi, \pi_1, \pi_2, \ldots)$ is \emph{accepting} if all
runs $\Pi, \pi_1, \pi_2, \ldots$ are accepting (i.e., $\Pi$ satisfies its acceptance
condition and each $\pi_1,\pi_2, \ldots$ ends in an accepting state)
and infinitely many runs of slave automata have length greater than $1$ (the master automaton takes infinitely many non-silent transitions).
The value of the run $(\Pi, \pi_1, \pi_2, \ldots)$ is defined as
$\silent{f}( v(\pi_1) v(\pi_2) \ldots)$, where $v(\pi_i)$ is the value of the run $\pi_i$ in
the corresponding slave automaton, and $\silent{f}$ is the value function that takes its input sequence, removes
$\bot$ symbols and applies $f$ to the remaining sequence.
The value of a word $w$ assigned by the automaton $\mathbb{A}$, denoted by
$\valueL{\mathbb{A}}(w)$, is the infimum of the set of values of all {\em accepting} runs.
We require accepting runs to contain infinitely many non-silent transitions
as $f$ is a value function over infinite sequences, hence the sequence
$v(\pi_1) v(\pi_2) \ldots$ with $\bot$ removed must be infinite.
\Paragraph{Deterministic nested weighted automata}. An NWA $\mathbb{A}$ is \emph{deterministic} if (1)~the master automaton
and all slave automata are deterministic, and (2)~in all slave automata, accepting states have no outgoing transitions.
Intuitively, a slave automaton in an accepting state can choose (non-deterministically) to terminate or continue running; condition (2) removes this source of non-determinism.
\Paragraph{Width of NWA}.
An NWA has \emph{width} $k$ if and only if in every run at every position at most $k$ slave automata are active.
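\noindent\emph{Illustration}. The width of a run can be read off from the spans of positions on which slave automata are active; the following Python sketch (our own, ad-hoc representation of a run as invocation/termination pairs, not part of the formal model) computes it.
\begin{verbatim}
def width(intervals):
    # Each pair (invocation, termination) spans the positions on which
    # a slave automaton is active; backward walkers have
    # termination <= invocation.  The width of the run is the maximal
    # number of spans covering a single position.
    events = []
    for a, b in intervals:
        lo, hi = min(a, b), max(a, b)
        events.append((lo, 1))
        events.append((hi + 1, -1))
    active = best = 0
    for _, d in sorted(events):
        active += d
        best = max(best, active)
    return best

print(width([(1, 4), (3, 6), (5, 2), (7, 7)]))   # -> 3
\end{verbatim}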
\section{Examples}
\label{s:examples}
In this section we present several examples of quantitative properties that
can be expressed with bidirectional NWA.
\begin{example}[Average energy level]
\label{ex:energy}
We consider the average energy level property studied in~\cite{DBLP:journals/tac/ChatterjeeP15,DBLP:journals/corr/BouyerMRLL15}.
Consider $W \in \mathbb{N}$ and an alphabet $\Sigma_W$ consisting of integers from
interval $[-W,W]$.
These letters correspond to the energy change, i.e.,
negative values represent energy consumption whereas positive values
represent energy gain.
For a word $w$ over $\Sigma_W$ we define the energy level at $i$ as the sum
$w[1] + \ldots + w[i]$.
The average energy property (AE) is the limit average of the energy levels
at every position.
For example, the average energy level of $2 (-1) 3 ((-1) 1)^{\omega}$ is $3.5$.
AE can be expressed by a $(\textsc{LimAvg};\textsc{Sum})$-automaton $\mathbb{A}$ with
backward-walking slave automata, but it is not expressible by
$(\textsc{LimAvg};\textsc{Sum})$-automata with forward-walking slave automata.
To express AE, a $(\textsc{LimAvg};\textsc{Sum})$-automaton $\mathbb{A}$ with backward-walking
slave automata invokes at every position a slave automaton, which
runs backward to the beginning of the word and sums up all the letters.
In contrast, $(\textsc{LimAvg};\textsc{Sum})$-automata with forward-walking slave automata
can use finite memory of the master automaton, but finite prefixes influence only finitely many values returned by slave automata and the limit-average value function neglects finite prefixes.
Formally, we can show with a simple pumping argument that for every $(\textsc{LimAvg};\textsc{Sum})$-automaton with
forward-walking slave automata, among words $w_i = 1^i 0^{\omega}$ there exists a pair of words with
the same value. In contrast, all these words have different AE (AE of $w_i$ is $i$).
AE property is often considered in conjunction with bounds on energy values.
Typically, energy should not drop below some threshold, in particular, it should not be negative.
In addition, the energy storage is limited, which motivates the upper bound on the stored energy, where the excess energy is released.
These two restrictions lead to the \emph{interval constraint} on energy levels, i.e., we require the
energy level at every position to belong to a given interval $[L,U]$, which results in a
variant of the bounded sum $\fBsum{L,U}$.
\end{example}
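\noindent\emph{Illustration}. The following Python sketch (finite prefixes and plain averages in place of infinite words and the limit average; names are illustrative) computes the average energy level of a word and the interval-constrained variant of the sum corresponding to $\fBsum{L,U}$.
\begin{verbatim}
from itertools import accumulate

def average_energy(word):
    # Energy level at position i is the sum of the first i letters;
    # the plain average of levels approximates the limit average.
    levels = list(accumulate(word))
    return sum(levels) / len(levels)

def bounded_sum(word, L, U):
    # Interval-constrained variant of the sum: reject (None) as soon
    # as some energy level leaves [L, U]; otherwise return the sum.
    level = 0
    for a in word:
        level += a
        if not (L <= level <= U):
            return None
    return level

prefix = [1, -1] * 6                 # a finite prefix of (1 (-1))^omega
print(average_energy(prefix))        # 0.5
print(bounded_sum(prefix, 0, 2))     # 0
\end{verbatim}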
\begin{example}[Data consistency]
\label{ex:data-consistency}
Consider a database server, which processes instructions grouped into transactions.
There are four instructions: read $r$, write $w$, void $\#$ and commit $c$.
The commit instruction applies all writes, finishes the current transaction and starts a new one.
The read instructions refer to writes applied before the previous commit.
In the presence of multiple clients connected to the database, there are two options to achieve consistency.
One option is to use locks that can limit concurrency. A second approach is \emph{optimistic concurrency} which proceeds without locks,
and then rolls back in case there was a collision between transactions.
In order to limit the number of rollbacks, it is preferred that the read instructions occur shortly after a commit,
while write instructions are followed by the commit instruction as quickly as possible.
Formally, we define (a)~consistency (or freshness) of a read instruction as the number of steps to
the first preceding commit instruction, and (b)~consistency of a write instruction as the number of steps to the following commit instruction.
The data consistency property (DCP) of $w$ is the limit average of consistency of reads and writes in $w$.
DCP is expressed by the following deterministic $(\textsc{LimAvg};\textsc{Sum}^+)$-automaton $\mathbb{A}$ with bidirectional slave automata.
On every read $r$ (resp., write $w$), the NWA $\mathbb{A}$ invokes a slave automaton which walks backward (resp., forward) and counts
the number of steps to the first encountered $c$.
On the remaining instructions $c,\#$, the NWA $\mathbb{A}$ invokes a dummy slave automaton which corresponds to a silent transition.
\end{example}
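\noindent\emph{Illustration}. The following Python sketch computes DCP on a finite word under the definition above; it assumes that every read has a preceding commit and every write a following commit, and uses a plain average in place of the limit average. All names are illustrative.
\begin{verbatim}
def dcp(word):
    # Reads look backward to the nearest preceding commit, writes look
    # forward to the nearest following commit; DCP is approximated by
    # the plain average of these distances.
    dists = []
    for i, a in enumerate(word):
        if a == 'r':                 # distance to the preceding commit
            j = i
            while word[j] != 'c':
                j -= 1
            dists.append(i - j)
        elif a == 'w':               # distance to the following commit
            j = i
            while word[j] != 'c':
                j += 1
            dists.append(j - i)
    return sum(dists) / len(dists)

print(dcp(list("crr#wwc#rwc")))      # -> 1.5
\end{verbatim}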
\begin{example}
\label{ex:regret}
Consider the framework of Example~\ref{ex:data-consistency}.
For every position with read $r$ or write $w$ we define a regret at position $i$ as the minimal distance to
the preceding or the following commit $c$.
Intuitively, the regret corresponds to the number of instructions by which we have to prepone or postpone
the commit to include the instruction at the current position.
We consider the minimal regret property (MR) on words over $\{ r,w,c,\#\}$ defined as the limit average over
positions with $r$ and $w$ of the regret at these positions.
MR can be expressed by a non-deterministic $(\textsc{LimAvg};\textsc{Sum}^+)$-automaton with bidirectional slave automata,
which basically implements the definition of MR (the non-deterministic guess is whether it is the preceding or
the following commit).
The NWA invokes at every $r$ or $w$ position one
of the following two slave automata ${\mathfrak{B}}_B, {\mathfrak{B}}_F$.
The automaton ${\mathfrak{B}}_B$ counts the number of steps to the preceding commit, while
${\mathfrak{B}}_F$ counts the number of steps to the following commit.
\end{example}
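\noindent\emph{Illustration}. Analogously, the following Python sketch computes MR on a finite word; again, the plain average stands in for the limit average, and the names are illustrative.
\begin{verbatim}
def regret(word, i):
    # Minimal distance from position i to a commit, looking both ways.
    back = next((i - j for j in range(i, -1, -1) if word[j] == 'c'),
                float('inf'))
    fwd = next((j - i for j in range(i, len(word)) if word[j] == 'c'),
               float('inf'))
    return min(back, fwd)

def mr(word):
    regs = [regret(word, i) for i, a in enumerate(word) if a in 'rw']
    return sum(regs) / len(regs)

print(mr(list("c#rr##c")))   # (2 + 3) / 2 = 2.5
\end{verbatim}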
\section{Decision questions}
\label{s:decision}
For NWA with bidirectional slave automata, we consider the quantitative counterparts of the fundamental problems of emptiness and universality.
The (quantitative) emptiness and universality problems are defined in the same way for weighted automata and all variants of NWA;
in the following definition ${\cal A}$ denotes either a weighted automaton or an NWA.
\Paragraph{Emptiness and universality}.
Given an automaton ${\cal A}$ and a threshold $\lambda$, the {\em emptiness} (resp. {\em universality}) problem asks
whether there exists a word $w$ with ${\cal L}_{\cal A}(w) \leq \lambda$
(resp., for every word $w$ we have ${\cal L}_{\cal A}(w) \leq \lambda$).
\begin{remark}\label{rem:decision}
The emptiness and universality problems have been studied for forward-only NWA in~\cite{nested}.
\begin{itemize}
\item For NWA the value functions considered for the master automaton are
the infimum (or limit-infimum), the supremum (or limit-supremum), and the limit-average.
For all the decidability results for the infimum (limit-infimum) and the supremum (limit-supremum)
value functions the techniques are similar to unweighted automata~\cite{nested}, which can be easily
adapted to the bidirectional framework.
Hence in the sequel we only focus on bidirectional NWA with the limit-average value function for the
master automaton.
\item Moreover, we study only the emptiness problem for the following reasons.
First, for the deterministic case the emptiness and the universality problems are similar and hence we focus on the emptiness problem.
Second, in the non-deterministic case the universality problem is already undecidable for $\textsc{LimAvg}$-automata even
with no nesting~\cite{DBLP:journals/corr/abs-1006-1492}.
\end{itemize}
\end{remark}
\subsection{The minimum, maximum and bounded sum value functions}
First, we show that
for $g$ being $\textsc{Min}, \textsc{Max}$, or a variant of the bounded sum value function $\fBsum{B} $, the emptiness problem for $(\textsc{LimAvg};g)$-automata with bidirectional slave automata is decidable in $\PSPACE$.
To show that, we prove a stronger result, i.e., every $(\textsc{LimAvg};g)$-automaton can be effectively transformed to a $\textsc{LimAvg}$-automaton of exponential size.
\noindent\emph{Key ideas}. Weighted automata with value functions $\textsc{Min},\textsc{Max},\fBsum{B}$ are close to (non-weighted) finite-state automata.
In particular, these automata have finite range and for each value $\lambda$ from the range, the set of words of value $\lambda$ is regular.
Thus, instead of invoking a slave automaton, the master automaton can non-deterministically pick value $\lambda$ and verify that the value returned by this slave automaton is $\lambda$.
For backward-walking slave automata the guessing can be avoided as the master automaton can simulate (the reverse of) runs of all backward-walking slave automata until the current position.
Thus, we can eliminate slave automata from NWA, i.e., we transform such NWA to weighted automata.
Formally, we show that for $g \in \{\textsc{Min},\textsc{Max},\fBsum{B}\}$,
every $(\textsc{LimAvg};g)$-automaton with bidirectional slave automata can be transformed into an equivalent $\textsc{LimAvg}$-automaton of exponential size.
The emptiness problem for non-deterministic $\textsc{LimAvg}$-automata is in $\mathbb{N}LOGSPACE$ (assuming weights given in unary) and hence
we have the containment part in the following Theorem~\ref{th:regularBF}. The hardness part follows from $\PSPACE$-hardness of the emptiness problem for
$(\textsc{LimAvg};g)$-automata with forward-walking slave automata only~\cite{nested}.
\begin{restatable}{theorem}{RegularForwardAndBackward}
Let $g \in \{\textsc{Min},\textsc{Max},\fBsum{B}\}$.
The emptiness problem for non-deterministic $(\textsc{LimAvg};g)$-automata with bidirectional slave automata is $\PSPACE$-complete.
\label{th:regularBF}
\end{restatable}
\noindent\emph{Note}. The complexity in Theorem~\ref{th:regularBF} does not depend on encoding of weights in slave automata, i.e., the problem is $\PSPACE$-hard even for a fixed set of weights, and
it remains in $\PSPACE$ for weights encoded in binary.
The average energy property from Example~\ref{ex:energy} with bounds on energy levels can be expressed with $(\textsc{LimAvg};\fBsum{B})$-automata.
The emptiness problem for these automata is decidable by Theorem~\ref{th:regularBF}.
\begin{remark}[Parametrized complexity]
If we assume that the size of slave automata in Theorem~\ref{th:regularBF} is bounded by a constant, the complexity
of the emptiness problem drops to $\mathbb{N}LOGSPACE$-complete. $\mathbb{N}LOGSPACE$-hardness follows from $\mathbb{N}LOGSPACE$-hardness of the emptiness problem for
$\textsc{LimAvg}$-automata, which can be considered as a special case of NWA.
\label{rem:parametricRegular}
\end{remark}
The results of this section apply to general bidirectional NWA.
In the following section we consider bidirectional NWA with the sum value function,
where we consider additional restrictions of finite width (Section~\ref{s:finite})
and bounded width (Section~\ref{s:bounded}).
We also justify in Remark~\ref{rem:FinieWidthNatural} that the finite width
restriction is natural.
\section{Finite-width case}
\label{s:finite}
In this section we study NWA satisfying the \emph{finite width} condition.
First, we briefly discuss the finite-width condition and argue that it is a natural restriction.
Next, we show that the emptiness problem for (finite-width) $(\textsc{LimAvg};\textsc{Sum}^+)$-automata with bidirectional slave automata is decidable in $\EXPSPACE$.
We conclude this section with the expressiveness results; we show that classical NWA with forward-walking slave automata and NWA with backward-walking slave automata have incomparable expressive power.
Hence, (finite-width) $(\textsc{LimAvg};\textsc{Sum}^+)$-automata with bidirectional slave automata are strictly more expressive than NWA with one-direction slave automata.
\subsection{The finite-width condition}
\Paragraph{Finite width}. An NWA $\mathbb{A}$ has finite width if and only if in every accepting run of $\mathbb{A}$ at every position at most
finitely many slave automata are active.
Classical NWA with forward-walking slave automata only have finite width.
Indeed, in any run, at any position $i$ at most $i$ slave automata can be active.
\begin{example}
Consider an NWA over $\{a,b\}$ such that the master automaton accepts a single word $a b^{\omega}$ and all slave automata are backward walking and accept words $b^* a$.
All slave automata terminate at the first position of $a b^{\omega}$ and hence this NWA does not have finite width.
\end{example}
The automata expressing properties from Examples~\ref{ex:dual-ART},~\ref{ex:data-consistency}~and~\ref{ex:regret} are
finite-width $(\textsc{LimAvg};\textsc{Sum}^+)$-automata with bidirectional slave automata.
Observe that an NWA does not have finite width if and only if it has an accepting run, in which at some position $i$ infinitely many backward-walking slave automata terminate.
\begin{remark}[Finite width is natural for positive sum]
\label{rem:FinieWidthNatural}
Let $\mathbb{A}$ be a $(\textsc{LimAvg};\textsc{Sum}^+)$-automaton with bidirectional slave automata.
Except for degenerate cases, runs of $\mathbb{A}$ that do not have finite width have infinite value. Indeed,
consider a run $\pi$ and a position $i_0$ at which infinitely many automata are active. Since only finitely many forward-walking slave automata can be active at $i_0$, infinitely many of the active automata are backward-walking, and
hence for some position $i < i_0$, an infinite set $S$ of slave automata terminates at position $i$.
Then, one of the following holds: either the value of this run is
infinite or one of the following two degenerate cases happens:
(a)~The slave automata from $S$ are invoked with zero density (i.e., the long-run average
frequency of invoking these slave automata is zero). This situation represents monitoring with
slave automata happening with vanishing frequency, which is a degenerate case.
(b)~The values returned by the slave automata from $S$ are bounded. It follows that these automata take transitions of non-zero weight only in some finite subword $w[i,j]$ of the input word $w$.
This situation represents monitoring of an infinite sequence in which all events past position $j$ are irrelevant, which is a degenerate case in the infinite-word setting.
\end{remark}
The finite-width property does not depend on weights and hence we can construct an exponential-size B\"{u}chi{} automaton ${\cal A}$, which simulates runs of
a given NWA $\mathbb{A}$. Having ${\cal A}$, we can check whether it has a run corresponding to an accepting run of $\mathbb{A}$, in which infinitely many backward-walking slave automata terminate
at the same position. This check can be done in logarithmic space and hence checking the finite-width property is in $\PSPACE$.
A simple reduction from the non-emptiness problem for NWA shows $\PSPACE$-hardness of checking the finite-width property.
\begin{restatable}{theorem}{FiniteWidthDecidable}
The problem whether a given NWA has finite width is $\PSPACE$-complete.
\label{th:FiniteWidthDecidable}
\end{restatable}
\subsection{The absolute sum value function}
We present the main result on NWA of finite width.
\begin{restatable}{theorem}{ForwardAndBackward}
\label{th:FiniteWidth}
The emptiness problem for finite-width $(\textsc{LimAvg};\textsc{Sum}^+)$-automata with bidirectional slave automata is $\PSPACE$-hard and
it is decidable in $\EXPSPACE$.
\end{restatable}
\noindent\emph{Key ideas}.
$\PSPACE$-hardness follows from $\PSPACE$-hardness of the emptiness problem for $(\textsc{LimAvg};\textsc{Sum}^+)$-automata with forward-walking slave automata only.
Containment in $\EXPSPACE$ is shown by reduction to the bounded-width case, which is shown decidable in the following section (Theorem~\ref{th:BoundedBF}).
We briefly describe this reduction.
Consider a finite-width $(\textsc{LimAvg};\textsc{Sum}^+)$-automaton $\mathbb{A}$ with bidirectional slave automata.
First, we observe that without loss of generality, we can assume that $\mathbb{A}$ is deterministic.
Second, we observe that in every word accepted by $\mathbb{A}$, at almost every position $i$ there exists
a \emph{barrier}, which is a word $u$ such that
(a)~the word $w' = w[1,i] u w[i+1,\infty]$, i.e., $w$ with $u$ inserted at position $i$, is accepted by $\mathbb{A}$, and the runs on $w$ and $w'$ coincide except for positions in $w'$ corresponding to $u$,
(b)~in the run on $w'$, backward-walking slave automata active at the end of $u$ terminate within $u$,
(c)~in the run on $w'$, forward-walking slave automata active at the beginning of $u$ terminate within $u$,
and (d)~$u$ has exponential length.
Basically, active slave automata cannot cross $u$ in $w'$, and in effect the insertion of $u$ bounds the number of active slave automata.
Existence of barriers follows from the finite-width property of $\mathbb{A}$.
We insert barriers in $w$ to reduce the number of active slave automata.
While inserting $u$ at a certain position may increase the partial average,
we show that if at position $i$ in $w$ exponentially many active slave automata accumulate exponential weight after crossing $i$ (some of these slave automata walk forward while others walk backward), then
all partial averages (of values returned by slave automata) in $w'$ are bounded by the corresponding partial averages in $w$.
We conclude that for every word $w$, there exists a word $w'$ such that (i)~at every position at most exponentially many slave automata accumulate exponential values, and
(ii)~the value of $w'$ does not exceed the value of $w$.
Thus, to compute the infimum over all runs of $\mathbb{A}$, we can focus on runs in which at every position at most exponentially many slave automata accumulate exponential value.
Runs of slave automata in which they accumulate bounded (exponential) values can be eliminated as in Theorem~\ref{th:regularBF}, i.e.,
we can construct an exponential-size NWA $\mathbb{A}'$, which simulates $\mathbb{A}$, and whose slave automata run as long as they accumulate at most an exponential (in $|\mathbb{A}|$) value; otherwise they non-deterministically pick the remaining value and the master automaton verifies that the pick is correct.
Therefore, the infimum over all runs of $\mathbb{A}$ coincides with the infimum over all runs of $\mathbb{A}'$ of exponentially bounded width.
\begin{remark}[Parametrized complexity]
If we assume that the size of slave automata in Theorem~\ref{th:FiniteWidth} is bounded by a constant, the complexity
of the emptiness problem drops to $\mathbb{N}LOGSPACE$-complete. $\mathbb{N}LOGSPACE$-hardness follows from $\mathbb{N}LOGSPACE$-hardness of the emptiness problem for
$\textsc{LimAvg}$-automata, which can be viewed as a special case of NWA.
\end{remark}
\subsection{Expressive power}
\label{s:expressionPower}
\newcommand{\mathcal{FB}(\flimavg; \fsum^+)}{\mathcal{FB}(\textsc{LimAvg}; \textsc{Sum}^+)}
\newcommand{\mathcal{F}(\flimavg; \fsum^+)}{\mathcal{F}(\textsc{LimAvg}; \textsc{Sum}^+)}
\newcommand{\mathcal{B}(\flimavg; \fsum^+)}{\mathcal{B}(\textsc{LimAvg}; \textsc{Sum}^+)}
DCP defined in Example~\ref{ex:data-consistency} can be expressed by a deterministic finite-width $(\textsc{LimAvg};\textsc{Sum}^+)$-automaton with bidirectional slave automata.
We show that both forward-walking and backward-walking slave automata are required to express DCP.
That is, we formally show that DCP cannot be expressed by any (non-deterministic) $(\textsc{LimAvg};\textsc{Sum}^+)$-automaton with slave automata walking in one direction only.
\Paragraph{Classes of NWA}.
We define $\mathcal{FB}(\flimavg; \fsum^+)$ as the class of all finite-width $(\textsc{LimAvg};\textsc{Sum}^+)$-automata with bidirectional slave automata.
We define $\mathcal{F}(\flimavg; \fsum^+)$ (resp., $\mathcal{B}(\flimavg; \fsum^+)$) as the subclass of $\mathcal{FB}(\flimavg; \fsum^+)$ consisting of NWA with forward-walking (resp., backward-walking) slave automata only.
We establish that classes $\mathcal{F}(\flimavg; \fsum^+)$ and $\mathcal{B}(\flimavg; \fsum^+)$ have incomparable expressive power and hence they are strictly less expressive than class $\mathcal{FB}(\flimavg; \fsum^+)$.
\noindent\emph{Key ideas}.
Consider a word $w = (c \#^N r^{2K} c \#^{2N} r^K )^{\omega}$ for some large $K$ and a much larger $N$.
An NWA from $\mathcal{B}(\flimavg; \fsum^+)$ computes DCP of $w$ by invoking (non-dummy) slave automata at every $r$ letter and taking silent transitions
on letters $\#,c$. We show that an NWA $\mathbb{A}$ from $\mathcal{F}(\flimavg; \fsum^+)$ cannot invoke the right number of slave automata, even if it uses non-determinism.
More precisely, we show that $\mathbb{A}$ computing DCP has to invoke at most $O(K)$ non-dummy slave automata on average on subwords $c \#^N r^{2K} c \#^{2N} r^K$.
Since $N$ is much bigger than $K$, we conclude that $\mathbb{A}$ has a cycle over $\#$ letters at which it takes only silent transitions and a cycle over $r$ letters on which it increases the multiplicity of active slave automata.
Using these two cycles, we construct a run of value smaller than DCP, which contradicts the assumption that $\mathbb{A}$ computes DCP.
Similarly, we can show that an NWA from $\mathcal{B}(\flimavg; \fsum^+)$
cannot correctly compute DCP of words
of the form $w = (c w^{2K} \#^N c w^K \#^{2N}) ^{\omega}$, while on these words DCP is expressible by an NWA from $\mathcal{F}(\flimavg; \fsum^+)$.
\begin{restatable}{lemma}{IncomparableForwardAndBackward}
(1)~DCP restricted to alphabet $\{r,\#,c\}$ is expressed by an NWA from $\mathcal{B}(\flimavg; \fsum^+)$, but it is not expressible by NWA from $\mathcal{F}(\flimavg; \fsum^+)$.
(2)~DCP restricted to alphabet $\{w,\#,c\}$ is expressed by an NWA from $\mathcal{F}(\flimavg; \fsum^+)$, but it is not expressible by NWA from $\mathcal{B}(\flimavg; \fsum^+)$.
\end{restatable}
The above lemma implies that DCP over alphabet $\{r,w,\#,c\}$ is not expressible by any NWA from $\mathcal{F}(\flimavg; \fsum^+)$ nor from $\mathcal{B}(\flimavg; \fsum^+)$.
In conclusion, we have:
\begin{theorem}
\label{th:expressivness}
(1)~$\mathcal{F}(\flimavg; \fsum^+)$ and $\mathcal{B}(\flimavg; \fsum^+)$ have incomparable expressive power.
(2)~$\mathcal{FB}(\flimavg; \fsum^+)$ are strictly more expressive than $\mathcal{F}(\flimavg; \fsum^+)$ and $\mathcal{B}(\flimavg; \fsum^+)$.
\end{theorem}
\section{Bounded-width case}
\label{s:bounded}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\textsc{AvgE}}{\textsc{AvgE}}
\newcommand{\textsc{Gain}}{\textsc{Gain}}
\newcommand{\nestedA}{\mathbb{A}}
In this section, we study
$(\textsc{LimAvg};\textsc{Sum})$-automata with bidirectional slave automata, which have bounded
width. The bounded width restriction
has been introduced in~\cite{nested-mfcs} to improve the complexity of the emptiness problem and to establish decidability of
the emptiness problem for $(\textsc{LimAvg};\textsc{Sum})$-automata.
NWA considered in~\cite{nested-mfcs} have only forward-walking slave automata, while we extend these results to NWA with bidirectional slave automata.
This extension preserves the complexity bounds from~\cite{nested-mfcs}, i.e., the emptiness problem is in $\PTIME$ for constant width and $\PSPACE$-complete for width given in unary.
The bounded width restriction emerges naturally in examples presented so far. If we bound the number of pending requests, we can express
ART and AW (Examples~\ref{ex:ART}~and~\ref{ex:dual-ART}) by automata of bounded width.
If we bound the number of writes and reads between any two commits, then DCP and MR (Examples~\ref{ex:data-consistency}~and~\ref{ex:regret}) can be expressed by NWA of bounded width.
These natural restrictions lead to more efficient decision procedures.
The decision procedure in this section differs from the one from~\cite{nested-mfcs}.
The key step in the decidability proof from~\cite{nested-mfcs} is establishing the following dichotomy: either the infimum over values of
all words is $-\infty$ or the infimum is realized by \emph{dense} runs. A run is dense if for the values $v_1, v_2, \ldots $ returned by slave automata invoked at positions $1,2, \ldots$ the sequence $\frac{v_i}{i}$ converges to $0$, i.e., values returned by slave automata are sublinear in the positions of their invocation.
Properties of dense runs allow for further reductions, which lead to a decision procedure.
However, we show in the following Example~\ref{ex:runningBounded} that in case of NWA with bidirectional slave automata, dense runs may not attain the infimum of all runs.
\begin{example}
\label{ex:runningBounded}
Consider a $(\textsc{LimAvg};\textsc{Sum})$-automaton $\nestedA$ with bidirectional slave automata over $\Sigma = \{a,b,c\}$.
The NWA $\nestedA$ accepts words $(a b^* c)^{\omega}$ and it works as follows.
On letters $a$, $\nestedA$ invokes a forward-walking slave automaton ${\mathfrak{B}}_a$, which returns the number of the following $b$ letters up to $c$.
On letters $c$, $\nestedA$ invokes a backward-walking slave automaton ${\mathfrak{B}}_c$, which returns the number of the preceding $b$ letters since $a$.
Finally, on $b$ letters, $\nestedA$ invokes a slave automaton ${\mathfrak{B}}_b$, which takes a single transition and returns value $0$.
The NWA $\nestedA$ has width $3$.
We can show that the value of any dense run is $2$.
However, the infimum over values of all words is $1$.
The partial average of the values returned by slave automata on a finite word $u \in (a b^* c)^*$ is (approximately) $2$, while the partial average
over $u a b^N$ is $\frac{2 |u| + N}{|u| + N}$. Therefore, the value of the word
$a b^{n_1} c \ldots a b^{n_i} c \ldots$, which is the limit infimum over partial averages, is $1$ if the sequence $n_1, n_2, \ldots$ grows rapidly (e.g., doubly exponentially).
\end{example}
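\noindent\emph{Illustration}. The following Python sketch reproduces the computation above: for blocks $a b^{n} c$, the slave invoked at $a$ returns $n$, the $n$ slaves at $b$ return $0$, and the slave at $c$ returns $n$; the sketch prints the partial averages of the returned values taken just before each letter $c$, for rapidly growing block lengths $n_i$ (the concrete growth rate is ours, for illustration only).
\begin{verbatim}
def partial_averages_before_c(ns):
    # Yield the partial average of the values returned by slave
    # automata just before each 'c' is read.
    total = count = 0
    for n in ns:
        total += n        # value returned by the slave invoked at 'a'
        count += 1 + n    # that slave plus the n slaves invoked at 'b'
        yield total / count
        total += n        # value returned by the slave invoked at 'c'
        count += 1

ns = [2 ** (2 ** i) for i in range(1, 6)]   # doubly exponential growth
print([round(x, 3) for x in partial_averages_before_c(ns)])
# -> [0.8, 1.043, 1.053, 1.004, 1.0]: these partial averages tend to 1,
# while averaging over whole blocks gives (approximately) 2.
\end{verbatim}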
\noindent\emph{Main ideas}.
In Example~\ref{ex:runningBounded}, the words attaining the least value contain long blocks of letter $b$,
at which the NWA $\nestedA$ is (virtually) in the same state, i.e., it loops in this state.
On letters $b$, the sum of all weights collected by all active slave automata is $2$, i.e.,
each of the automata ${\mathfrak{B}}_a, {\mathfrak{B}}_c$ collects weight $1$ and ${\mathfrak{B}}_b$ collects $0$.
However, in computing limit infimum over partial averages, we pick positions just before letter $c$ as they correspond to the local minima, i.e.,
we compute the partial average over prefixes $u a b^N$, and hence
the weights collected by ${\mathfrak{B}}_c$ do not contribute to this partial average.
Then, the sum of all weights collected by slave automata ${\mathfrak{B}}_a, {\mathfrak{B}}_b$ over a letter $b$ is $1$, which is
equal to the least value of the limit infimum of the partial averages.
In the following, we extend this idea and present a solution of the emptiness problem for all
bounded-width $(\textsc{LimAvg};\textsc{Sum})$-automata with bidirectional slave automata.
We show that the infimum over all words of a given NWA is the least average value over all cycles.
In the following, we define appropriate notions of cycles of NWA and their average with exclusion of some slave automata.
\Paragraph{The graph of $k$-configurations}.
Let $\mathbb{A}$ be a non-deterministic $(\textsc{LimAvg}; \textsc{Sum})$-automaton of width $k$.
We define a \emph{$k$-configuration} of $\mathbb{A}$ as a tuple
$(q; q_1, \ldots, q_k)$ where $q$ is a state of the master automaton, and
each $q_1, \ldots, q_k$ is either a state of a slave automaton of $\mathbb{A}$
or $\bot$.
Given a run of $\mathbb{A}$, we say that $(q; q_1, \ldots, q_k)$ is the $k$-configuration at position $i$ in the run if
$q$ is the state of the master automaton at position $i$ and there are $l \leq k$ active slave automata at position $i$, whose states are $q_1, \ldots, q_l$
ordered by position of invocation (backward-walking slave automata are invoked past position $i$).
If $l < k$, then $q_{l+1}, \ldots, q_{k} = \bot$.
We say that a $k$-configuration $C_2$ is a successor of a $k$-configuration $C_1$ if there exists an accepting run of $\mathbb{A}$ and $i>0$
such that $C_1$ is the $k$-configuration at $i$ and $C_2$ is the $k$-configuration at $i+1$.
The \emph{graph of $k$-configurations} of $\mathbb{A}$ is the set of $k$-configurations of $\mathbb{A}$, which occur infinitely often in some accepting run, with the successor relation.
\begin{figure}
\caption{Pictorial explanation of $\textsc{Gain}
\label{fig:focus}
\end{figure}
\Paragraph{Characteristics of cycles}.
Let $\mathcal{C}$ be a cycle in a graph of $k$-configurations of $\mathbb{A}$.
Let $F$ (resp., $B$) be the set of forward-walking (resp., backward-walking) slave automata which are active throughout $\mathcal{C}$, i.e., which are neither invoked nor terminated within $\mathcal{C}$.
A \emph{focus} $Fc$ (for $\mathcal{C}$) is a downward closed subset of $F$, i.e., it contains all automata from $F$ invoked before some position.
We define a focused gain $\textsc{Gain}(\mathcal{C},Fc)$ as the sum of weights which automata from $Fc$ accumulate over $\mathcal{C}$.
A \emph{restriction} $R$ (for $\mathcal{C}$) is an upward closed subset of $B$, i.e., it contains all automata from $B$ invoked past certain position.
We define an average weight of $\mathcal{C}$ excluding $R$, denoted by $\textsc{AvgE}(\mathcal{C},R)$, as the sum of weights of all transitions of slave automata within $\mathcal{C}$,
except for the transitions of slave automata from $R$, divided by the number of slave automata invoked within $\mathcal{C}$.
Intuitively, a focused gain refers to the value, which forward-walking slave automata invoked before some position $i$, accumulate over
the part of run corresponding to $\mathcal{C}$ (see Figure~\ref{fig:focus}).
If the focused gain $\textsc{Gain}(\mathcal{C},Fc)$ is negative, then by pumping $\mathcal{C}$ we can arbitrarily decrease the partial average of the values of slave automata invoked before $i$.
In consequence, we can construct a run of the value $-\infty$.
Formally, we define condition (*), which implies that there exists a run of value $-\infty$, as follows: (*)~there exists a cycle $\mathcal{C}$ in the graph of $k$-configurations of $\mathbb{A}$ and
a focus $Fc$ such that $\textsc{Gain}(\mathcal{C},Fc) < 0$.
If the focused gain of every cycle is non-negative, we need to examine averages of cycles, while excluding some backward-walking slave automata.
The average weight with restriction corresponds to the partial average of values aggregated over $\mathcal{C}$ by all slave automata invoked before position $j$ (which can be past $\mathcal{C}$).
Backward-walking slave automata in the restriction correspond to automata invoked past $j$, and
hence their values do not contribute to the partial average (up to $i$) (see Figure~\ref{fig:focus}).
In Example~\ref{ex:runningBounded}, we compute the average of slave automata over letters $b$, but we exclude the backward-walking slave automaton invoked at the following letter $c$.
Observe that for any cycle $\mathcal{C}$ and any restriction $R$, having a run containing $\mathcal{C}$ occurring infinitely often,
we can repeat each occurrence of cycle $\mathcal{C}$ sufficiently many times so that the partial average of values of slave automata up to position corresponding to $j$ becomes arbitrarily close to the average $\textsc{AvgE}(\mathcal{C}, R)$.
The resulting run contains a subsequence of partial averages convergent to $\textsc{AvgE}(\mathcal{C}, R)$
and hence its value does not exceed $\textsc{AvgE}(\mathcal{C}, R)$.
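\noindent\emph{Illustration}. For concreteness, the following Python sketch (using an ad-hoc summary of a cycle rather than the formal $k$-configuration graph; all names are ours) computes a focused gain and a restricted average on a cycle of three $b$-letters from Example~\ref{ex:runningBounded}.
\begin{verbatim}
def gain(weights, focus):
    # Sum of the weights accumulated over the cycle by automata in the
    # focus (forward-walking automata active throughout the cycle).
    return sum(weights[b] for b in focus)

def avg_excluding(weights, restriction, invoked):
    # Sum of all slave weights in the cycle except those of automata
    # in the restriction, divided by the number of slave automata
    # invoked within the cycle.
    total = sum(w for b, w in weights.items() if b not in restriction)
    return total / invoked

# Over three 'b'-letters: B_a and B_c each collect weight 3, the three
# B_b automata collect 0, and three slave automata are invoked.
weights = {'Ba': 3, 'Bb1': 0, 'Bb2': 0, 'Bb3': 0, 'Bc': 3}
print(gain(weights, {'Ba'}))               # 3 (non-negative)
print(avg_excluding(weights, {'Bc'}, 3))   # (3+0+0+0)/3 = 1.0
\end{verbatim}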
We can now state our key technical lemma.
This lemma is a direct extension of an intuition behind computing the infimum over values of all words of the NWA $\nestedA$ from Example~\ref{ex:runningBounded}.
\begin{restatable}{lemma}{TechnicalBoundedWidth}
\label{l:techlical-bounded}
Let $\mathbb{A}$ be a $(\textsc{LimAvg};\textsc{Sum})$-automaton of bounded width with bidirectional slave automata.
(1)~If condition (*) holds, then
$\mathbb{A}$ has a run of value $-\infty$.
(2)~If (*) does not hold, then
the infimum $\inf_w \mathbb{A}(w)$ equals
the infimum $\inf_{\mathcal{C} \in \Lambda, R} \textsc{AvgE}(\mathcal{C}, R)$, where
$\Lambda$ is the set of all cycles $\mathcal{C}$ in the graph of $k$-configurations of $\mathbb{A}$.
\end{restatable}
If the width of $\mathbb{A}$ is constant, then the graph of $k$-configurations has polynomial size in $|\mathbb{A}|$ and it can be constructed in polynomial time by employing
reachability checks on the set of all $k$-configurations w.r.t. a relaxation of the successor relation.
Therefore, for every focus $Fc$ and every $k$-configuration $c$ we can check in polynomial time whether there exists a cycle $\mathcal{C}$ such that
$\mathcal{C}[1] = c$ and $\textsc{Gain}(\mathcal{C}, Fc) < 0$.
Thus, condition (1) can be checked in logarithmic space assuming that weights are given in unary. If weights are given in binary, condition (1) can be checked in polynomial time.
Checking condition (2) has the same complexity as condition (1).
If the width $k$ is given in unary in input, the graph of $k$-configurations is exponential in $|\mathbb{A}|$ and
conditions (1) and (2) can be checked in polynomial space.
Weights in this case are logarithmic in the size of the graph and hence changing representation from binary to unary does not affect the (asymptotic) size of the graph.
\begin{restatable}{theorem}{BoundedWidthForwardAndBackward}
\label{th:BoundedBF}
The emptiness problem for $(\textsc{LimAvg};\textsc{Sum})$-automata of width $k$ with bidirectional slave automata is
(a)~$\mathbb{N}LOGSPACE$-complete for constant $k$ and weights given in unary,
(b)~in $\PTIME$ for constant $k$ and weights given in binary, and
(c)~$\PSPACE$-complete for $k$ given in unary.
\end{restatable}
\section{Extensions}
\label{s:extensions}
In this section we briefly discuss some extensions of the model of bidirectional NWA, i.e.,
we discuss the possibility of invoking multiple slave automata in one
transition and two-way walking slave automata.
\begin{figure}
\caption{A run of a two-way slave automaton ${\mathfrak{B}
\label{fig:twoway}
\end{figure}
\Paragraph{Invocation of multiple slave automata}.
The master automaton of an NWA invokes exactly one slave automaton at every transition.
We can generalize the definition of NWA and allow the master automaton to invoke up to some
constant $k$ slave automata at every transition.
We call such a model $k$-NWA.
First note that $k$-NWA contain NWA.
Conversely,
we briefly describe a reduction of the emptiness problem for $k$-NWA to the emptiness problem for NWA.
First, observe that without loss of generality we can assume that a $k$-NWA always invokes exactly $k$ slave automata as it can
always invoke a dummy slave automaton, which immediately accepts.
Invocation of such a slave automaton is equivalent to taking a silent transition.
Next, given a $k$-NWA $\mathbb{A}$ with bidirectional slave automata over the alphabet $\Sigma$, we can construct an NWA $\mathbb{A}'$ with bidirectional slave automata
over the alphabet $\Sigma \cup \{ \# \}$, which accepts words of the form $w[1] \#^{k-1} w[2] \#^{k-1} \ldots$.
The runs of $\mathbb{A}'$ on the word $w[1] \#^{k-1} w[2] \#^{k-1} w[3] \ldots$ correspond to all runs of $\mathbb{A}$ on $w$;
the automaton $\mathbb{A}'$ invokes on the letters $w[i] \#^{k-1}$ exactly the $k$ slave automata which $\mathbb{A}$ invokes at the corresponding transition over the letter $w[i]$.
Observe that the emptiness problems for $\mathbb{A}$ and $\mathbb{A}'$ coincide.
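\noindent\emph{Illustration}. The padding used in this reduction is straightforward; the following Python sketch (illustrative only, with a hypothetical padding letter \texttt{\#}) produces the padded word.
\begin{verbatim}
def pad(word, k, pad_letter='#'):
    # Follow each letter of the original word with k-1 padding letters,
    # so that the k slave automata invoked on one transition of the
    # k-NWA can be invoked on k consecutive transitions of the NWA.
    out = []
    for a in word:
        out.append(a)
        out.extend([pad_letter] * (k - 1))
    return out

print(''.join(pad(list("abc"), 3)))   # a##b##c##
\end{verbatim}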
\Paragraph{Two-way walking slave automata}.
For the ease of presentation we focus on bidirectional NWA where each slave automaton is either
forward walking or backward walking.
However, in general, we can allow slave automata that change direction while running, i.e.,
allow two-way slave automata and obtain the same complexity results.
Observe that in the case of two-way $\textsc{Sum}^+$-automata, we can assume that such an automaton does not visit the same position in the same state twice.
Indeed, such a cycle can be eliminated without increasing the value of the run.
Thus, without loss of generality we assume that every two-way slave automaton visits every position at most $|{\mathfrak{B}}|$ times.
Therefore, instead of invoking a two-way slave $\textsc{Sum}^+$-automaton ${\mathfrak{B}}$, the master automaton invokes
two automata, a forward-walking ${\mathfrak{B}}_f$ and a backward-walking ${\mathfrak{B}}_b$, such that ${\mathfrak{B}}_f$ (resp., ${\mathfrak{B}}_b$)
simulates the run of ${\mathfrak{B}}$ past its invocation position (resp., before its invocation position).
This reduction shows that every $(\textsc{LimAvg};\textsc{Sum}^+)$-automaton $\mathbb{A}$ with two-way walking slave automata is equivalent
to a $2$-NWA with bidirectional slave automata.
This reduction, however, involves an exponential blow-up as the slave automata ${\mathfrak{B}}_f, {\mathfrak{B}}_b$ can have exponential size in $|\mathbb{A}|$.
This follows from the fact that due to reversals each visited position by ${\mathfrak{B}}$ can be visited $|{\mathfrak{B}}|$ times in different states.
To simulate that in one run, ${\mathfrak{B}}_f$ and ${\mathfrak{B}}_b$ have to simulate $|{\mathfrak{B}}|$ instances of ${\mathfrak{B}}$ in different states.
This exponential blow-up can be avoided by dividing ${\mathfrak{B}}_f, {\mathfrak{B}}_b$ into multiple slave automata,
each of which tracks only one loop in the run of ${\mathfrak{B}}$ as shown in Figure~\ref{fig:twoway} with automata ${\mathfrak{B}}_1, \ldots, {\mathfrak{B}}_4$.
Each of these automata has to track at most two instances of ${\mathfrak{B}}$, and hence the construction involves only a quadratic blow-up.
The resulting automaton is an $|\mathbb{A}|$-NWA as up to $\frac{|{\mathfrak{B}}|}{2}$ slave
automata have to be invoked at every position.
Still, the emptiness problems for $|\mathbb{A}|$-NWA and for NWA have the same complexity and hence we conclude that
allowing two-way slave automata does not increase the complexity of the emptiness problem for $(\textsc{LimAvg};\textsc{Sum}^+)$-automata.
\section{Discussion and Conclusion}
\noindent{\em Discussion.}
We established decidability of the emptiness problem for classes of bidirectional NWA, which include all
NWA presented in the examples.
An NWA from Example~\ref{ex:energy} is covered by Theorem~\ref{th:regularBF},
while NWA from Examples~\ref{ex:dual-ART}, \ref{ex:data-consistency} and~\ref{ex:regret} are covered by
Theorem~\ref{th:FiniteWidth}.
The lower bounds in the presented theorems follow from the lower bounds of the special case of forward-only NWA.
The established complexity bounds (Table~\ref{tab:complexity}) coincide with the bounds
for forward-only NWA.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Value & \multirow{2}{*}{Restrictions} & Complexity & Complexity \\
func. $g$ & &
Bidirectional & Forward case \\
\hline
$\textsc{Min}$,$\textsc{Max}$, & \multirow{2}{*}{None} & \textbf{$\PSPACE$-complete} & \multirow{2}{*}{$\PSPACE$-complete~\cite{nested}}\\
$\fBsum{} $ & & (Thm~\ref{th:regularBF}) & \\
\hline
& \multirow{2}{*}{finite} & \textbf{$\PSPACE$-hard} & \multirow{2}{*}{$\PSPACE$-hard} \\
$\textsc{Sum}^+$ & \multirow{2}{*}{width} & \textbf{$\EXPSPACE$} & \multirow{2}{*}{$\EXPSPACE$~\cite{nested}} \\
& & (Thm~\ref{th:FiniteWidth}) & \\
\hline
$\textsc{Sum}^+$, & constant width & \textbf{$\mathbb{N}LOGSPACE$-complete} & \multirow{1}{*}{$\mathbb{N}LOGSPACE$-complete} \\
$\textsc{Sum}$& unary weights & (Thm~\ref{th:BoundedBF}) & \cite{nested-mfcs} \\
\hline
$\textsc{Sum}^+$, & constant width& \textbf{$\PTIME$} & \multirow{2}{*}{$\PTIME$~\cite{nested-mfcs}} \\
$\textsc{Sum}$& binary weights & (Thm~\ref{th:BoundedBF}) & \\
\hline
$\textsc{Sum}^+$, & width given & \textbf{$\PSPACE$-complete} & \multirow{2}{*}{$\PSPACE$-complete~\cite{nested-mfcs}} \\
$\textsc{Sum}$& in unary & (Thm~\ref{th:BoundedBF}) & \\
\hline
\end{tabular}
\caption{The complexity of the emptiness problem for $(\textsc{LimAvg};g)$-automata. The columns describe respectively: the value function $g$,
additional restrictions imposed on the problem, the complexity in the case with bidirectional slave automata, and
the complexity in the previously studied~\cite{nested,nested-mfcs} case with only forward-walking slave automata.
Results presented in this paper are boldfaced. }
\label{tab:complexity}
\end{table}
\noindent{\em Concluding remarks.}
In this work we present bidirectional NWA as a specification formalism for quantitative
properties. In this formalism many natural quantitative properties can be expressed, and
we present decidability and complexity results for the basic decision problems.
There are several interesting directions for future work.
The study of bidirectional NWA with other value functions is an interesting direction.
The second direction of future work is to consider other formalism (such as a logical
framework) which has the same expressive power as bidirectional NWA.
{\footnotesize
\subparagraph*{Acknowledgements.}
This research was supported in part by the Austrian Science Fund (FWF) under grants S11402-N23,S11407-N23 (RiSE/SHiNE) and Z211-N23 (Wittgenstein Award),
ERC Start grant (279307: Graph Games), Vienna Science and Technology Fund (WWTF) through project ICT15-003 and
by the National Science Centre (NCN), Poland under grant 2014/15/D/ST6/04543.
}
\appendix
\section*{Appendix}
In the appendix we recall statements of theorems and lemmas from the main body of the paper keeping their original numbering.
Lemmas introduced in the appendix have subsequent numbers. For this reason, the numbering of theorem and lemmas in
the appendix is mixed.
\section{Proofs from Section~\ref{s:decision}}
\RegularForwardAndBackward*
The $\PSPACE$-hardness part follows from $\PSPACE$-hardness of the emptiness problem for
$(\textsc{LimAvg};g)$-automata with forward-walking slave automata only~\cite{nested}. Therefore, we focus on
the containment in $\PSPACE$. We begin with a definition of a unifying framework of regular value functions.
\Paragraph{Regular weighted automata and regular value functions}.
Following~\cite{nested}, we say that a weighted automaton ${\cal A}$ over finite words is a \emph{regular weighted automaton}
if and only if
there is a finite set of rationals $\{ q_1, \ldots, q_n\} $ and
there are regular languages ${{\cal L}}_1, \ldots, {{\cal L}}_n$
such that
\begin{compactenum}[(i)]
\item every word accepted by ${\cal A}$ belongs to $\bigcup_{1 \leq i \leq n} {{\cal L}}_i$, and
\item for every $w \in {{\cal L}}_{i}$, its value w.r.t. ${\cal A}$ is $q_i$.
\end{compactenum}
A value function $f$ is a \emph{regular value function}
if and only if all $f$-automata are regular weighted automata.
Examples of regular value functions are $\textsc{Min},\textsc{Max}$ and variants of the bounded sum $\fBsum{B}$ with regular conditions, i.e., the partial sum of every prefix, suffix or infix of the run belongs to the interval $[L,U]$, etc.
Having the definition of regular value function, we can easily check whether our variant of the bounded sum is admissible.
We define the \emph{description size} of a given regular weighted automaton ${\cal A}$,
as the size of automata ${\cal A}_1, \ldots, {\cal A}_n$ recognizing languages
${{\cal L}}_1, \ldots, {{\cal L}}_n$ that witness ${\cal A}$ being a regular weighted automaton.
\begin{lemma}
Let $g$ be a regular value function.
Every $(\textsc{LimAvg};g)$-automaton $\mathbb{A}$ with bidirectional slave automata is equivalent to an exponential-size
$\textsc{LimAvg}$-automaton ${\cal A}$. The automaton ${\cal A}$ can be constructed implicitly in polynomial time.
\label{l:regularEquivalent}
\end{lemma}
Since the emptiness problem for $\textsc{LimAvg}$-automata can be solved in $\mathbb{N}LOGSPACE$,
Lemma~\ref{l:regularEquivalent} implies Theorem~\ref{th:regularBF}. Moreover, Remark~\ref{rem:parametricRegular}
follows directly from the construction in the following proof.
\begin{proof}
\newcommand{Q_{SF}}{Q_{SF}}
\newcommand{Q_{SB}}{Q_{SB}}
Let $Q_m$ be the set of states of the master automaton of $\mathbb{A}$.
Since $g$ is a regular value function, every slave automaton ${\mathfrak{B}}$ has a finite range $R_{{\mathfrak{B}}}$ and for each value $v \in R_{{\mathfrak{B}}}$ there exists a finite-state automaton
${\cal A}_{{\mathfrak{B}},v}$ recognizing the set of words of value $v$.
Let $Q_{SF}$ (resp., $Q_{SB}$) be the union of the sets of states of all automata ${\cal A}_{{\mathfrak{B}},v}$, where ${\mathfrak{B}}$ is a forward-walking (resp., backward-walking) slave automaton and $v \in R_{{\mathfrak{B}}}$.
We assume that sets of states of automata ${\cal A}_{{\mathfrak{B}},v}$ are disjoint.
We define a $\textsc{LimAvg}$-automaton ${\cal A}$ with generalized B\"{u}chi{} condition as follows.
The set of states of ${\cal A}$ is $Q_m \times 2^{Q_{SF}} \times 2^{Q_{SF}} \times 2^{Q_{SB}}$.
States of ${\cal A}$ are of the form $(q,F_1, F_2, B)$, whose components have the following roles.
Component $q$ is used to simulate the run of the master automaton.
Components $F_1, F_2$ are used to simulate finite-state automata corresponding to forward-walking slave automata.
Accepting states correspond to termination of a slave automaton and hence they are removed from $F_1, F_2$.
Every newly invoked forward-walking slave automaton is added to component $F_2$, i.e., the automaton ${\cal A}$ picks a transition invoking a slave automaton ${\mathfrak{B}}$ and picks a weight $v$ for this transition,
and then it adds to $F_2$ an initial state of the finite-state automaton ${\cal A}_{{\mathfrak{B}}, v}$. The weight of such a transition is $v$.
If every slave automaton has a finite run, then $F_1$ becomes empty at some point of time.
Then, we move all states from $F_2$ to $F_1$, and put $F_2 = \emptyset$.
Observe that runs of all forward-walking slave automata are finite if and only if $F_1$ is empty infinitely often.
Component $B$ is used to simulate finite-state automata corresponding to backward-walking slave automata.
Accepting states of backward-walking slave automata correspond to their termination.
Since ${\cal A}$ moves in the opposite direction w.r.t. $\mathbb{A}$, the automaton ${\cal A}$
adds to component $B$ some subset of accepting states from $Q_{SB}$.
There is only one position in the run of ${\cal A}$ at which it adds states to $B$.
Whenever a backward-walking slave automaton is invoked in a state $q_i$, we require that
$q_i$ belongs to $B$. This state $q_i$ may be removed from the corresponding set but does not have to.
Removal corresponds to a situation when only a single slave automaton is in the state $q_i$, while leaving $q_i$ in $B$
corresponds to the situation when more than one slave automaton is in state $q_i$.
Due to the construction we have (i)~for every run of $\mathbb{A}$, the automaton ${\cal A}$ has a run of the same value, and conversely
(ii)~for every run of ${\cal A}$, there exists a run of $\mathbb{A}$ of the same value.
\end{proof}
\section{Proofs from Section~\ref{s:finite}}
\newcommand{Q_{\textrm{slv}}}{Q_{\textrm{slv}}}
\newcommand{\confNum}[1]{\mathsf{conf}({#1})}
\newcommand{{\mathsf{mult}}}{{\mathsf{mult}}}
\newcommand{\mathbf{N}}{\mathbf{N}}
\newcommand{\mathbb{C}}{\mathbb{C}}
\subsection{The proof of Theorem~\ref{th:FiniteWidthDecidable}}
\FiniteWidthDecidable*
\begin{proof}
\newcommand{Q_{\textrm{slv}}F}{Q_{slv}^F}
\newcommand{Q_{\textrm{slv}}B}{Q_{slv}^B}
Let $Q_{\textrm{slv}}^F$ (resp., $Q_{\textrm{slv}}^B$) be the set of states of forward-walking (resp., backward-walking) slave automata of $\mathbb{A}$ and let $Q_m$ be the set of states of the master automaton of $\mathbb{A}$.
We define a (generalized) B\"{u}chi{}-automaton (with no weights) ${\cal A}$ as follows.
The set of states of ${\cal A}$ is $Q_m \times 2^{Q_{\textrm{slv}}^F} \times 2^{Q_{\textrm{slv}}^F} \times 2^{Q_{\textrm{slv}}^B} \times 2^{Q_{\textrm{slv}}^B}$.
Initially ${\cal A}$ starts with $(q,\emptyset,\emptyset,\emptyset,\emptyset)$, where $q$ is some initial state of the master automaton.
We use sets of states to simulate runs of slave automata and to ensure that every forward-walking slave automaton runs for finitely many steps.
We treat backward-walking slave automata in a similar way to forward-walking slave automata except that backward-walking slave automata are started at the termination step
of the corresponding slave automata, and they can terminate (which corresponds to invocation) multiple, but finitely many, times.
More precisely, states of ${\cal A}$ are of the form $(q,F_1, F_2, B_1, B_2)$, whose components have the following roles.
Component $q$ is used to simulate the run of the master automaton. Components $F_1, F_2$ are used to simulate forward-walking slave automata.
Accepting states correspond to termination of a slave automaton and hence they are removed from $F_1, F_2$.
A newly invoked forward-walking slave automaton is added to component $F_2$. Thus, if every slave automaton has a finite run, then $F_1$ becomes empty at some point of time.
Then, we move all states from $F_2$ to $F_1$, and put $F_2 = \emptyset$. Observe that runs of all forward-walking slave automata are finite if and only if $F_1$ is empty infinitely often.
Components $B_1, B_2$ are used to simulate backward-walking slave automata.
Component $B_1$ contains the states of slave automata that all finish at the same position, while $B_2$ contains states of other slave automata.
We require that $B_1$ and $B_2$ are disjoint at every position.
Accepting states of backward-walking slave automata correspond to their termination.
However, as the simulating automaton ${\cal A}$ moves in the opposite direction, it guesses
at every step whether some (backward-walking) slave automaton
has terminated at the current position, and it may add some accepting states to $B_1$ or $B_2$.
There is only one position in the run of ${\cal A}$ at which it adds states to $B_1$.
Whenever a backward-walking slave automaton is invoked in a state $q_i$, we require that
$q_i$ belongs to $B_1$ or $B_2$. This state $q_i$ may be removed from the corresponding set but does not have to.
Removal corresponds to a situation when only a single slave automaton is in the state $q_i$, while leaving $q_i$ in $B_1$ or $B_2$
corresponds to the situation when more than one slave automaton is in state $q_i$.
Consider an infinite run $\pi$ of ${\cal A}$, in which
(a)~component $q$ is infinitely often equal to some accepting state of the master automaton,
(b)~component $F_1$ is empty infinitely often, and
(c)~component $B_1$ is never empty and infinitely often contains an initial state of the slave automaton invoked at the current position.
The run $\pi$ corresponds to an accepting run of $\mathbb{A}$ in which infinitely many backward-walking slave automata terminate at the same position (which is the position at which $B_1$ becomes non-empty for the first time).
Conversely, having a run of $\mathbb{A}$ of infinite width, the corresponding run of ${\cal A}$ satisfies (a), (b) and (c).
Checking existence of a run of ${\cal A}$ satisfying (a), (b) and (c) can be done in non-deterministic logarithmic space.
Thus, checking whether $\mathbb{A}$ has infinite width can be done in polynomial space.
We show $\PSPACE$-hardness of checking finite-width, by reduction from the emptiness problem for NWA with forward-walking slave automata only, which is $\PSPACE$-complete~\cite{nested}.
Let $\mathbb{A}$ be an NWA with forward-walking slave automata only over the alphabet $\Sigma$. We extend the alphabet $\Sigma$ by $\$,\#$.
Next, we construct an NWA $\mathbb{A}'$ whose every run has infinite width and which accepts precisely words of the form $\$ w[1] \# w[2] \# \ldots$ such that $\mathbb{A}$ accepts $w[1] w[2] \ldots$.
Basically, forward-walking slave automata of $\mathbb{A}$ ignore letters $\#$ and every transition $(q,a,q')$ of the master automaton of $\mathbb{A}$ is replaced with two transitions
$(q,a,q'_{\#})$ and $(q'_{\#}, \#,q')$. On the first transition, $\mathbb{A}'$ invokes the same automaton as $\mathbb{A}$ and on transitions labeled by $\#$, the automaton $\mathbb{A}'$ invokes a backward-walking slave automaton
that runs until $\$$. Now, if $\mathbb{A}$ does not have an accepting run, $\mathbb{A}'$ does not have an accepting run and it has trivially finite width.
Otherwise, if $\mathbb{A}$ accepts $w$, the automaton $\mathbb{A}'$ accepts $\$ w[1] \# w[2] \# \ldots$ and it has a run of infinite width on it.
Therefore, $\mathbb{A}$ has an accepting run if and only if $\mathbb{A}'$ does not have finite width.
\end{proof}
\subsection{The proof of Theorem~\ref{th:FiniteWidth}}
\ForwardAndBackward*
The $\PSPACE$-hardness part follows from $\PSPACE$-hardness of the emptiness problem for
$(\textsc{LimAvg};\textsc{Sum}^+)$-automata with forward-walking slave automata only~\cite{nested}. Therefore, we focus on
the containment in $\EXPSPACE$.
\Paragraph{Overview}.
The proof is by reduction to the bounded-width case, which is decidable due to Theorem~\ref{th:BoundedBF}.
First, we show that without loss of generality we can assume that a given NWA $\mathbb{A}$ is deterministic (Lemma~\ref{WLOGDeterministicLimAvgSum}).
Next, we define words, called barriers, upon which all active (backward- and forward-walking) slave automata terminate.
We show that for finite-width NWA such words do exist (Lemma~\ref{l:barriersExist}).
Properties of barriers ensure that if at some position $i$ in the input word exponentially many slave automata accumulate
exponential values, then inserting a barrier actually decreases the partial sum of values returned by slave automata (Lemma~\ref{l:barrierDecreasesPartialSum}), and hence we can insert barriers even at infinitely many positions and the value of the resulting word does not exceed the value of the original word.
Thanks to barriers, we can show that for every word $w$, there exists a word $w'$ such that at every position at most exponentially many slave automata aggregate
exponential values and the value of $w'$ does not exceed the value of $w$.
Slave automata which aggregate bounded (exponential) values can be eliminated, i.e., we construct an NWA $\mathbb{A}'$ which simulates $\mathbb{A}$ in such a way that
runs of slave automata that accumulate at most exponential values are compressed into a single transition. Observe that $\mathbb{A}'$ on the word $w'$ as above has exponential width.
Hence, the infimum of $\mathbb{A}$ over all words coincides with the infimum of $\mathbb{A}'$ over words on which it has a run of exponential width.
We can encode the bound on width into $\mathbb{A}'$ and decide the emptiness problem of $\mathbb{A}'$ in non-deterministic logarithmic space (Theorem~\ref{th:BoundedBF}).
The size of $\mathbb{A}'$ is doubly-exponentially bounded in the size of $\mathbb{A}$, and hence the emptiness problem for finite-width $(\textsc{LimAvg};\textsc{Sum}^+)$-automata with bidirectional slave automata is in $\EXPSPACE$.
\Paragraph{Configuration and multiplicities}.
Invocation of a slave automaton in an NWA is a form of a universal transition
in the sense of alternating automata.
We adapt the power-set construction, which is used to convert alternating
automata to non-deterministic automata, to the NWA case.
Given a (non-deterministic) NWA $\mathbb{A}$ with bidirectional slave automata, we define \emph{configurations} and \emph{multiplicities} of $\mathbb{A}$
as follows.
Let $Q_{\textrm{slv}}$ be the disjoint union of the sets of states of all slave automata of $\mathbb{A}$.
For a run of $\mathbb{A}$, we say that $(q_m, A)$ is the \emph{configuration} at position $p$ if
$q_m$ is the state of the master automaton at position $p$
and $A \subseteq Q_{\textrm{slv}}$ is the set of states of slave automata at position $p$.
We denote by $\confNum{\mathbb{A}}$ the number of configurations of $\mathbb{A}$.
We define the \emph{multiplicity} ${\mathsf{mult}}$ at position $p$ as the function
${\mathsf{mult}} : Q_{\textrm{slv}} \mapsto \mathbb{N}$, such that ${\mathsf{mult}}(q)$ specifies the number of
slave automata in the state $q$ at position $p$.
The configuration together with the multiplicity give a complete
description of the state of $\mathbb{A}$ at position $p$.
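In particular, a configuration is a pair of a master state and a subset of $Q_{\textrm{slv}}$, so (writing $Q_{\mathrm{mas}}$ for the set of states of the master automaton, a notation we use only for this rough bound)
\[
\confNum{\mathbb{A}} \;\le\; |Q_{\mathrm{mas}}| \cdot 2^{|Q_{\textrm{slv}}|},
\]
whereas multiplicities are unbounded, which is why a configuration alone does not determine the state of $\mathbb{A}$.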
We observe that
without loss of generality we can assume that NWA are deterministic.
Basically, non-deterministic choices of the master automaton and slave automata can be encoded in the input alphabet.
More precisely, the proof consists of two steps.
First, we define simple runs as follows.
A run of an NWA is \emph{simple} if at every position in the run slave automata that are in the same state take the same transition.
We show that (i)~for every run $(\Pi, \pi_1, \pi_2, \dots)$ of $\mathbb{A}$ there exists
a \emph{simple} run of $\mathbb{A}$ of the value not exceeding the value of $(\Pi, \pi_1, \pi_2, \dots)$.
Second, we show that (ii)~there exists a deterministic $(\textsc{LimAvg}; \textsc{Sum}^+)$-automaton $\mathbb{A}'$ over
an extended alphabet such that
the sets of accepting simple runs of $\mathbb{A}$ and accepting runs of $\mathbb{A}'$
coincide and each run has the same value in both automata.
The proof of the following lemma is virtually the same as in the case of NWA with forward-walking slave automata only~\cite{nested}.
\begin{restatable}{lemma}{WLOGDeterministicLimAvgSum}
\label{WLOGDeterministicLimAvgSum}
Given a $(\textsc{LimAvg}; \textsc{Sum}^+)$-automaton $\mathbb{A}$ over $\Sigma$ with bidirectional slave automata,
(i)~for every run of $\mathbb{A}$, there exists a simple run of at most the same value, and
(ii)~one can compute in polynomial space a deterministic $(\textsc{LimAvg}; \textsc{Sum}^+)$-automaton $\mathbb{A}'$
over an alphabet $\Sigma \times \Gamma$ such that:
(1)~$\inf_{w \in \Sigma^+} \valueL{{\cal A}}(w) = \inf_{w' \in (\Sigma \times \Gamma)^+} \valueL{{\cal A}'}(w')$, and
(2)~$\confNum{\mathbb{A}} = \confNum{\mathbb{A}'}$.
\end{restatable}
\begin{proof}
(i): Consider a run $(\Pi, \pi_1, \pi_2, \dots)$ of $\mathbb{A}$.
Suppose that $\pi_i, \pi_{j}$ are runs of the same slave automaton ${\mathfrak{B}}$ invoked at positions $i$ and $j$, such that both of its instances are
in the same state at position $s$ in the word, i.e., $\pi_i[i'] = \pi_j[j']$, where
$i', j'$ are the positions in $\pi_i, \pi_j$ corresponding to the position $s$ in $w$.
We pick from the suffixes $\pi_i[i', |\pi_i|], \pi_j[j', |\pi_j|]$ the one with the smaller sum, and in case of an equal sum we pick the shorter one.
Then, we change the suffixes of both runs to the picked one.
Such a transformation does not increase the value of the partial sums and
does not introduce infinite runs of slave automata.
Indeed, a run of each slave automaton can be changed by such an operation only finitely many times.
Thus, this transformation can be applied to any pair of slave runs to
obtain a simple run of the value not exceeding the value of $(\Pi, \pi_1, \pi_2, \dots)$.
\newcommand{Q_{\textrm{all}}}{Q_{\textrm{all}}}
(ii):
Without loss of generality, we can assume that for every slave automaton in $\mathbb{A}$
final states have no outgoing transitions.
Let $Q_{\textrm{all}}$ be the disjoint union of
the sets of states of the master automaton and all slave automata of $\mathbb{A}$.
We define $\Gamma$ as the set of all partial functions $h : Q_{\textrm{all}} \mapsto Q_{\textrm{all}}$.
We define a $(\textsc{LimAvg}; \textsc{Sum}^+)$-automaton $\mathbb{A}'$ over the alphabet $\Sigma \times \Gamma$
by modifying only the transition relations and labeling functions of the master automaton and slave automata of $\mathbb{A}$;
the sets of states and accepting states are the same as in the original automata.
The transition relation and the labeling function of the master automaton ${\cal A}_{mas}'$ of
$\mathbb{A}'$ are defined as follows:
$(q, \lpair{a}{h},q')$ is a transition of ${\cal A}_{mas}'$ if and only if $h(q) = q'$ and ${\cal A}_{mas}$ has the transition $(q,a,q')$.
The label of the transition $(q, \lpair{a}{h},q')$ is the same as the label
of the transition $(q,a,q')$ in ${\cal A}_{mas}$. Similarly,
for each slave automaton ${\mathfrak{B}}_i$ in $\mathbb{A}$, the transition relation and the labeling function of the corresponding
slave automaton ${\mathfrak{B}}_i'$ in $\mathbb{A}'$ are defined as follows:
$(q, \lpair{a}{h},q')$ is a transition of ${\mathfrak{B}}_i'$ if and only if $h(q) = q'$ and ${\mathfrak{B}}_i$ has the transition $(q,a,q')$.
The label of the transition $(q, \lpair{a}{h},q')$ is the same as the label
of the transition $(q,a,q')$ in ${\mathfrak{B}}_i$.
First, we see that $\confNum{\mathbb{A}} = \confNum{\mathbb{A}'}$.
Second, observe that the master automaton ${\cal A}_{mas}'$ and all slave automata ${\mathfrak{B}}_i'$
are deterministic. Moreover, since we assumed that for every slave automaton in $\mathbb{A}$
final states have no outgoing transitions, slave automata ${\mathfrak{B}}_i'$ recognize
prefix free languages.
Finally, it follows from the construction that
(i)~every simple run $(\Pi, \pi_1, \pi_2, \dots)$ of $\mathbb{A}$ is a
run of $\mathbb{A}'$ of the same value. Basically, we encode in the input word all transitions in functions $h \in \Gamma$.
The value of each transition is the same by the construction.
Conversely, (ii)~every run $(\Pi, \pi_1, \pi_2, \dots)$ of $\mathbb{A}'$ is a
simple run of $\mathbb{A}$ of the same value. Indeed, the fact that transitions are
directed by functions $h \in \Gamma$ implies that the run is simple.
\end{proof}
In the following definition we introduce \emph{barriers}, which are words on which all active slave automata terminate, i.e., if $u$ is a barrier, then
forward-walking slave automata terminate while reading $u$, and backward-walking slave automata terminate while reading $u^R$ (word $u$ from right to left).
Barriers have additional properties, which allow us to show that if exponentially many slave automata accumulate exponential values,
then inserting a barrier decreases multiplicities of slave automata and does not increase the partial sums of values returned by slave automata.
\begin{definition}[Barriers]
\label{def:barriers}
Let $w$ be an infinite word, $i$ be a position and $k > i$ be the first position such that all backward-walking slave automata invoked past $k$ terminate past $i$.
A finite word $u$ is a barrier at $i$ in $w$ if in word $w' = w[1,i] u w[i+1, \infty]$ we have
\begin{enumerate}[BC1]
\item backward-walking slave automata invoked past position $i+|u|$ terminate past position $i$ (between positions $i+1$ and $i+|u|$)
\item forward-walking slave automata invoked before position $i$ terminate before position $i+|u|$,
\item the configurations at $i$ and $i + |u|$ in $w'$ are the same as the one at $i$ in $w$,
\item the length of $u$ is bounded by $\mathbf{N} = (|Q_s| + 2) \cdot \confNum{\mathbb{A}} \cdot |Q_s|^{2 \cdot |Q_s|}$,
\item
for every state $q$ of some backward-walking (resp., forward-walking) slave automaton,
the multiplicity ${\mathsf{mult}}(q)$ at $i$ in $w'$ (resp., ${\mathsf{mult}}(q)$ at $i+|u|$ in $w'$) is bounded by ${\mathsf{mult}}(q)$ at $i$ in $w$, and
\item every backward-walking (resp., forward-walking) slave automaton active at position $i$,
accumulates over $w[1,i]$ (resp., $w[i,\infty]$) a value greater than or equal to the value accumulated over $w[1,i]u$ (resp., $uw[i+1,\infty]$).
\end{enumerate}
\end{definition}
The above conditions simply state that a barrier terminates all active slave automata and reduces their multiplicities, i.e.,
the multiplicity of backward-walking slave automata, which are invoked in the suffix $w[i+1,\infty]$ is reduced by word $u$ (BC1) and
the multiplicity of forward-walking slave automata invoked in prefix $w[1,i]$ is reduced by word $u$ (BC2).
Property BC3 ensures
that inserting a barrier at position $i$ does not change the run essentially (except for $u$ it only reduces the multiplicities of slave automata).
These properties and the bound on the length of barriers (BC4) allow us to reduce the multiplicity of slave automata along words.
Properties BC5, BC6 are necessary to show that such a reduction of multiplicities does not increase the values of words.
\begin{lemma}
\label{l:barriersExist}
Let $\mathbb{A}$ be a deterministic $(\textsc{LimAvg}; \textsc{Sum}^+)$-automaton of finite width with
bidirectional slave automata. Then, for every word $w$,
at almost every position a barrier exists.
\end{lemma}
\begin{proof}
Let $w$ be a word. We consider the unique run of $\mathbb{A}$ on $w$ and refer to the positions
in $w$ and the corresponding positions in the run, i.e., we refer to ``the configuration at $i$ in $w$'' as the unique configuration
of $\mathbb{A}$ at $i$ while processing word $w$.
We define the \emph{profile} at position $j$ in $w$ as a pair of
the configuration at $j$ and the multiplicities at $j$ bounded by $\mathbf{N}$, defined for every
$q \in Q_s$ as $\min({\mathsf{mult}}(q), \mathbf{N})$.
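Note that, in contrast to multiplicities, profiles range over a finite set: a profile consists of a configuration and a function from $Q_s$ to $\{0, \ldots, \mathbf{N}\}$, so there are at most
\[
\confNum{\mathbb{A}} \cdot (\mathbf{N}+1)^{|Q_s|}
\]
profiles. Hence some profile occurs infinitely often in $w$, which is what makes the choice of the positions $c_0 < c_1$ below possible.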
Let $c_0 < c_1$ be positions in $w$ such that
every profile that occurs infinitely often in $w$ occurs between $c_0$ and $c_1$.
Next, we define $c_2$ as the minimal position past $c_1$ such that every backward-walking slave
automaton invoked past $c_2$ terminates at some position past $c_1$, i.e., any backward-walking slave automaton invoked
past $c_2$ terminates before it reaches $c_1$. The NWA $\mathbb{A}$ has finite width and hence such $c_2$ exists.
We show that there exists a barrier in $w$ for every position $i > c_2$.
\Paragraph{Construction of a barrier $u$ at position $i$}.
Let $i > c_2$. We pick positions $a < i < b$ such that all
slave automata active at $i$ terminate within $w[a,b]$, the profiles at positions $a,i,b$ are the same, and the letters of $w$ at these positions are the same.
Since $i > c_2 > c_1 > c_0$ such $a,b$ exist.
Observe that $w[a,b]$ satisfies the first three conditions of the barrier definition, but it does not have to satisfy the remaining conditions.
Consider word $w[b,i]w[a+1,i]$. It satisfies all barrier conditions except for BC4.
We take $w[b,i]w[a+1,i]$ and transform it into a barrier $u$ by removing certain subwords corresponding to cycles in $\mathbb{A}$, i.e.,
subwords such that at the beginning and at the end of this subword $\mathbb{A}$ is in the same configuration.
Removal of such subwords does not change runs of the master automaton or slave automata in the suffix of the word.
However, to show condition BC5 we need to ensure that the removal operation does not change the profile, i.e.,
the profiles at positions $i$ and $i+|u|$ in $w[1,i] u w[i+1, \infty]$ are the same as the profile at $i$ in $w$.
We define an \emph{extended configuration} in a finite word $x$ at position $p$ as the pair of
the configuration at $p$ and the equivalence relation $R_p$ on states of slave automata active at position $p$
such that $q_1 R_p q_2$ if and only if either
\begin{compactitem}
\item $q_1, q_2$ are states of forward-walking slave automata and
slave automata in states $q_1$ and $q_2$ at position $p$ reach the same
state at the end of word $x$, or
\item $q_1, q_2$ are states of backward-walking slave automata and
slave automata in states $q_1$ and $q_2$ at position $p$ reach the same
state at the beginning of word $x$.
\end{compactitem}
The slave automata that do not reach the end of word $x$ (resp., the beginning of $x$) are in the same equivalence class $\emptyset$.
Given two positions $p < p'$ in $x$ with the same extended configuration, we define
\emph{transformation from $p$ to $p'$} as
a function from the set of equivalence classes of $R_p$ (which is equal to $R_{p'}$) into itself such that
\begin{compactitem}
\item for a state $q$ of a forward-walking slave automaton, the
equivalence class $[q]$ is transformed into a class $[q']$ if some slave automaton in state $q$ at position $p$
reaches a state from $[q']$ at position $p'$, and
\item for a state $q$ of a backward-walking slave automaton, the
equivalence class $[q]$ is transformed into a class $[q']$ if some slave automaton in state $q'$ at position $p'$
reaches a state from $[q]$ at position $p$.
\end{compactitem}
Due to the determinism of $\mathbb{A}$, the transformation is indeed a function, and in fact a permutation.
Now we describe the subword removal process. First, we mark
positions $j_1, \ldots, j_n$ in $w$ at which each of
slave automata active at position $i$ in $w$ terminates.
Observe that these positions belong to the interval $[a,b]$.
Now, starting with word $w[b,i]w[a+1,i]$ we iteratively remove subwords $y$ such that
(a)~the extended configurations at the first and the last position of $y$ are the same,
(b)~the transformation between these positions is the identity, and
(c)~$y$ does not contain any position corresponding to positions $\{j_1, \ldots, j_{n}\} $.
The final word, from which no such subword can be removed, is $u$. We show that $u$ is a barrier.
\Paragraph{Word $u$ is a barrier}.
The positions at which slave automata are terminated are not removed from $w[a,b]$ and hence $u$
satisfies conditions BC1, BC2.
Removal of a word satisfying (a) and (b) preserves profile at every step and hence condition BC3 holds for $u$.
Condition BC4 follows from the fact that there are at most $\confNum{\mathbb{A}} \cdot |Q_s|^{|Q_s|}$
different extended configurations. Transformations are permutations of equivalence classes, i.e., they are permutations of
sets of size at most $|Q_s|$. Therefore, if $k > |Q_s|^{|Q_s|}$, then
among positions $p_1 < \ldots < p_k$ with the same extended configuration there exists a pair such that
the transformation between these positions is the identity.
Thus, words of length at least $\confNum{\mathbb{A}} \cdot |Q_s|^{2 \cdot |Q_s|}$ contain a subword $y$ satisfying
(a) and (b). Furthermore, words of length at least $(|Q_s| + 2) \cdot \confNum{\mathbb{A}} \cdot |Q_s|^{2 \cdot |Q_s|} =
{\mathbf{N}}$ contain a subword $y$ satisfying (a), (b) and (c). Therefore, the length of $u$ is bounded by ${\mathbf{N}}$.
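For the record, the count of extended configurations used above can be justified as follows: an equivalence relation on a set of at most $|Q_s|$ states can be described by a function assigning to each state a representative of its class, so
\[
\#\{\text{extended configurations}\} \;\le\; \confNum{\mathbb{A}} \cdot |Q_s|^{|Q_s|}.
\]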
We show that $u$ satisfies BC5.
Let $q$ be a state of some slave automaton and let $mult_1(q)$ be the multiplicity of $q$ in $w$ at position $i$
and $mult_2(q), mult_3(q)$ be multiplicities of $q$ in $w[1,i] u w[i+1, \infty]$ at positions $i$ and $i+|u|$ respectively.
Observe that (a) and (b) imply that
the profiles at positions $i$ and $i+|u|$ in $w[1,i] u w[i+1, \infty]$ are the same as the profile at $i$ in $w$.
It follows that $\min(mult_1(q), \mathbf{N}) = \min(mult_2(q), \mathbf{N}) = \min(mult_3(q), \mathbf{N})$.
Now, if $q$ is a state of a backward-walking slave automaton, then all such automata active at position $i$
in $w[1,i] u w[i+1, \infty]$ have been invoked in $u$ and hence $mult_2(q) < |u| \leq {\mathbf{N}}$.
Since $mult_2(q) < \mathbf{N}$, we get $mult_2(q) = \min(mult_2(q), \mathbf{N}) = \min(mult_1(q), \mathbf{N}) \leq mult_1(q)$.
Otherwise, similarly, if $q$ is a state of a forward-walking slave automaton, we have $mult_3(q) < \mathbf{N}$ and hence
$mult_3(q) \leq mult_1(q)$. Therefore, condition BC5 holds.
Finally, word $u$ satisfies condition BC6.
Consider a backward-walking slave automaton ${\mathfrak{B}}$ active at position $i$ in $w$.
This automaton terminates before position $a$ and hence it accumulates equal values on subwords
$w[a,i]$ and $w[1,i]w[b,i]w[a+1,i]$. Now, word $u$ results from $w[b,i]w[a+1,i]$ by deletion of certain subwords, while it preserves positions corresponding to
termination of slave automata. Moreover, the deletion process only shortens runs; the transitions taken by slave automata correspond to the transitions in the original run.
Thus, ${\mathfrak{B}}$
accumulates over $w[1,i]u$ a value smaller or equal to the value accumulated over $w[1,i]w[b,i]w[a+1,i]$.
The case of forward-walking slave automata is symmetric.
\end{proof}
Now, we show the key property of barriers, i.e., they decrease the partial sum of values if inserted at a position where
exponentially many slave automata accumulate exponential values. This enables us to reduce the emptiness problem for finite-width NWA to
the bounded-width case.
\begin{lemma}
\label{l:barrierDecreasesPartialSum}
Let $\mathbb{A}$ be a deterministic $(\textsc{LimAvg};\textsc{Sum}^+)$-automaton with bidirectional slave automata.
Let $C$ be the maximal weight of slave automata of $\mathbb{A}$.
Let $w$ be a word and let $u$ be a barrier for $w$ at position $i$.
If more than $2 \cdot |\mathbb{A}| \cdot \mathbf{N}$ slave automata accumulate
the value exceeding $4 \cdot C \cdot \mathbf{N}$ past $i$ in $w$ (i.e., for backward-walking slave automata the value accumulated over $w[1,i]$),
then for almost all $k$, the sum of values returned by slave automata invoked up to $k$ in $w$ is greater than
the sum of values returned by slave automata invoked up to $k+|u|$ in $w[1,i]uw[i+1,\infty]$.
\end{lemma}
\begin{proof}
Let $k$ be the first position past $i$ such that all backward-walking slave automata invoked past $k$ terminate before position $i$.
We show that the partial sum of values returned by slave automata invoked up to $k$ in $w$ is greater than
the partial sum of values returned by slave automata invoked up to $k+|u|$ in $w[1,i]uw[i+1,\infty]$. This argument works for every $k_0 \geq k$.
The partial sum of values returned by slave automata invoked up to $k$ in $w$ consists of the sum of weights:
(1)~accumulated by slave automata before they reach position $i$, and
(2)~accumulated by slave automata after they reach position $i$.
In (1) we include values of backward-walking (resp., forward-walking) slave automata that do not reach position $i$.
Let $A$ be the set of states of slave automata active at position $i$ in $w$.
For $q \in A$, we define ${\mathsf{mult}}(q)$ (resp., $val(q)$) as the multiplicity at $i$ (resp., the value accumulated past $i$) in $w$ by slave automata that are in the state $q$ at $i$.
Observe that (2) = $\sum_{q \in A} mult(q) \cdot val(q)$.
The partial sum of values returned by slave automata invoked up to $k+|u|$ in $w' = w[1,i] u w[i+1, \infty]$ consists of four components, which are the sums of weights
(1')~aggregated by slave automata before they reach any of positions $i, \ldots i+|u|$,
(2'a)~aggregated over positions $i, \ldots i+|u|$ by backward-walking slave automata invoked past $i+|u|$ and forward-walking slave automata invoked before $i$,
(2'b)~aggregated past positions $i, \ldots i+|u|$ by slave automata invoked on positions $i, \ldots i+|u|$, i.e.,
the values aggregated by backward-walking slave automata over $w'[1,i] = w[1,i]$ and
forward-walking slave automata over $w'[i+|u|+1, \infty] = w[i+1,\infty]$, and
(2'c)~aggregated over positions $i, \ldots i+|u|$ by slave automata invoked on positions $i, \ldots i+|u|$.
We observe that (1)$=$(1') and we show that (2) $>$ (2'a)+(2'b)+(2'c).
Observe that (2'c) is bounded by $C \cdot |u|^2$. In (2'a), the multiplicities of slave automata are the same as the multiplicities of
the corresponding slave automata active at $i$ in $w$ while the values they accumulate are bounded by the minimum of
the values accumulated within $w$ and $C \cdot |u|$. Indeed, consider a backward-walking slave automaton, which is in state $q$ at position $i+|u|$ in $w'$.
The multiplicity of such automata is equal to the multiplicity of slave automata that are in state $q$ at position $i$ in $w$.
Moreover, due to condition BC6 satisfied by $u$, the value which such an automaton accumulates along $w'[1,i+|u|]$ is bounded by the value it accumulates at $w[1,i]$.
Also, such an automaton terminates within $|u|$ and hence its value is bounded by $C \cdot |u|$ as well.
The similar reasoning holds for forward-walking slave automata active at position $i$ in $w'$ and hence
(2'a) is bounded by $\sum_{q \in A} {\mathsf{mult}}(q) \cdot \min(C\cdot |u|, val(q))$.
In (2'b), slave automata accumulate the same values as the corresponding slave automata in $w$, but the multiplicities are bounded by $|u|$ and by the multiplicities of the corresponding
slave automata at position $i$ in $w$.
Indeed, consider a backward-walking slave automaton, which is in state $q$ at position $i$ in $w'$.
Since the profile at $i$ in $w$ and $w'$ is the same, the multiplicity ${\mathsf{mult}}'(q)$ of such automata is bounded by the multiplicity of slave automata that are in state $q$ at position $i$ in $w$.
Moreover, all backward-walking slave automata active at position $i$ in $w'$ have been invoked within positions $i, \ldots, i + |u|$ and hence
the sum of their multiplicities is bounded by $|u|$.
Since $w[1,i] = w'[1,i]$, a backward-walking slave automaton in state $q$ at position $i$ in $w'$ accumulates the value $val(q)$ past position $i$, i.e.,
the value accumulated by the same automaton in $w$.
Similar estimates hold for forward-walking slave automata past position $i+|u|$.
Therefore, (2'b) is bounded by $\sum_{q \in A} mult'(q) val(q)$, where $\sum_{q \in A} mult'(q) \leq |u|$.
Due to condition BC5 of barriers, for every $q \in A$ we have ${\mathsf{mult}}'(q) \leq mult(q)$.
Now, we estimate $(2) - (2'a) - (2'b) - (2'c)$, which is at least
$\sum_{q \in A} {\mathsf{mult}}(q) \cdot (val(q) - \min(C\cdot |u|, val(q))) - \sum_{q \in A} {\mathsf{mult}}'(q) \cdot val(q) - C \cdot |u|^2$.
We partition $A$ into $A_1$,
the states of slave automata at position $i$ which accumulate a value of at least $4 \cdot C \cdot \mathbf{N}$ past position $i$ in $w$, and $A_2$, the remaining states from $A$.
Then,
(2) - ((2'a)+(2'b)+(2'c)) $\geq \sum_{q \in A_1} ({\mathsf{mult}}(q) - {\mathsf{mult}}'(q)) \cdot (val(q) - C\cdot |u|) - \sum_{q\in A_1} {\mathsf{mult}}'(q) \cdot C \cdot |u| - \sum_{q \in A_2} {\mathsf{mult}}'(q) \cdot 4 \cdot C \cdot \mathbf{N} - C \cdot |u|^2$.
Since $\sum_{q \in A} {\mathsf{mult}}'(q) \leq |u|$ and $|u| \leq \mathbf{N}$, we get
(2) - ((2'a)+(2'b)+(2'c)) $ \geq \sum_{q \in A_1} ({\mathsf{mult}}(q) - {\mathsf{mult}}'(q)) \cdot (val(q) - C\cdot |u|) - 5 \cdot C \cdot \mathbf{N}^2$.
Recall that $\sum_{q \in A_1} {\mathsf{mult}}(q) > 2 \cdot |\mathbb{A}| \cdot \mathbf{N}$ and for every $q \in A_1$ we have $val(q) \geq 4 \cdot C \cdot \mathbf{N}$.
Therefore, $\sum_{q \in A_1} ({\mathsf{mult}}(q) - {\mathsf{mult}}'(q)) \cdot (val(q) - C\cdot |u|) > 8 \cdot C \cdot \mathbf{N}^2$ and hence (2) $>$ (2'a)+(2'b)+(2'c).
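Spelling out the arithmetic behind the last step:
\[
\text{(2)} - \big(\text{(2'a)}+\text{(2'b)}+\text{(2'c)}\big) \;>\; 8 \cdot C \cdot \mathbf{N}^2 - 5 \cdot C \cdot \mathbf{N}^2 \;=\; 3 \cdot C \cdot \mathbf{N}^2 \;>\; 0.
\]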
This concludes the proof that insertion of a barrier decreases the partial sum.
\end{proof}
Using the above lemma we can reduce the emptiness problem for finite-width $(\textsc{LimAvg};\textsc{Sum}^+)$-automata with bidirectional slave automata
to the bounded-width case.
\begin{lemma}
\label{l:reductionFiniteWidthToBoundedWidth}
Let $\mathbb{A}$ be a deterministic $(\textsc{LimAvg}; \textsc{Sum}^+)$-automaton of finite width with
bidirectional slave automata.
There exists a deterministic $(\textsc{LimAvg}; \textsc{Sum}^+)$-automaton $\mathbb{A}'$ over an extended alphabet $\Sigma_1$
with bidirectional slave automata of width bounded exponentially in $|\mathbb{A}|$,
such that
the emptiness problems for $\mathbb{A}$ and $\mathbb{A}'$ coincide, i.e.,
$\inf_{w \in \Sigma^{\omega}} \valueL{\mathbb{A}}(w) = \inf_{w \in \Sigma_1^{\omega}} \valueL{\mathbb{A}'}(w)$.
The size of $\mathbb{A}'$ is $O(\mathbf{N} \cdot |\mathbb{A}|)$.
\end{lemma}
\begin{proof}
We define an exponential size $(\textsc{LimAvg}; \textsc{Sum}^+)$-automaton $\mathbb{A}'$ with
bidirectional slave automata of width bounded exponentially in $|\mathbb{A}|$ such that $\mathbb{A}'$
simulates a subset of runs of $\mathbb{A}$ which satisfy the following condition (*):
at almost every position $s$,
among slave automata active at $s$, at most $2 \cdot |\mathbb{A}| \cdot \mathbf{N}$ will accumulate a value greater than $4 \cdot C \cdot \mathbf{N}$.
Next, we show that for every run of $\mathbb{A}$ there exists a run satisfying (*) of at most the same value. It follows
that the emptiness problems for $\mathbb{A}$ and $\mathbb{A}'$ coincide.
\Paragraph{Definition of $\mathbb{A}'$}.
The alphabet $\Sigma_1$ consists of $\Sigma$ and additional marking letters described below.
First, the automaton $\mathbb{A}'$ invokes only dummy slave automata before the initial marking.
Past the initial marking, the master automaton keeps track of the number of invoked slave automata and rejects
if the number of slave automata exceeds $2 \cdot |\mathbb{A}| \cdot \mathbf{N}$.
Second, we modify each slave automaton of $\mathbb{A}$ so that it runs only as long as it can still accumulate a value
exceeding $4\cdot C \cdot \mathbf{N}$, where $C$ is the maximal weight among all slave automata of $\mathbb{A}$.
In particular, slave automata which accumulate a value below $4\cdot C \cdot \mathbf{N}$ are invoked for a single transition only.
We encode in the alphabet whether a slave automaton is invoked for a single transition and the weight of this transition.
Moreover, we encode in the alphabet that slave automata in state $q$ ought to terminate, as in the remaining part of the run they
would accumulate a value below $4\cdot C \cdot \mathbf{N}$. The master automaton checks whether the external markings are correct.
These constructions involve a single exponential blow up of the master automaton, slave automata and the alphabet.
Observe that accepting runs of $\mathbb{A}'$ correspond to accepting runs of $\mathbb{A}$, which satisfy condition (*).
The values of the corresponding runs are the same. Now, we need to show that while computing the infimum over all accepting runs of $\mathbb{A}$, we can restrict ourselves to runs satisfying (*).
\Paragraph{The emptiness problems for $\mathbb{A}$ and $\mathbb{A}'$ coincide}.
Since $\mathbb{A}'$ simulates a subset of runs of $\mathbb{A}$ we have
$\inf_{w \in \Sigma^{\omega}} \valueL{\mathbb{A}}(w) \leq \inf_{w \in \Sigma_1^{\omega}} \valueL{\mathbb{A}'}(w)$.
We show how to transform any word $w$ of $\mathbb{A}$ into a word whose unique run satisfies (*), which means that it can be simulated by $\mathbb{A}'$, and whose value does not exceed the value of $w$.
It follows that $\inf_{w \in \Sigma^{\omega}} \valueL{\mathbb{A}}(w) \geq \inf_{w \in \Sigma_1^{\omega}} \valueL{\mathbb{A}'}(w)$.
Let $w$ be a word accepted by $\mathbb{A}$.
Let $s$ be a position such that on every position $p\geq s$ in $w$ a barrier exists (Lemma~\ref{l:barriersExist}).
We start with $i=0$, word $w_0 = w$ and $p_0 = s$. At step $i$, we pick the first position $p > p_i$ at which more than $2 \cdot |\mathbb{A}| \cdot \mathbf{N}$ slave automata accumulate
values exceeding $4 \cdot C \cdot \mathbf{N}$, and we insert a barrier $u_i$ at position $p$.
Barrier $u_i$ exists as a barrier for $w$ at the corresponding position.
Then, we start the next iteration $i+1$ with word $w_{i+1} = w_i[1,p] u_i w_i[p+1, \infty]$ and $p_{i+1} = p +|u_i|$.
Observe that iterations past $i$ change only positions past $p_i$ in words $w_i, w_{i+1}, \ldots$, i.e., all words $w_i, w_{i+1}, \ldots$ share the prefix $w[1,p_i]$.
Thus, there exists the limit word $w_B$, which is the result of the iterative process.
We argue that the resulting word $w_B$ satisfies (*)
and its value does not exceed the value of $w$.
First, we show that $w_B$ satisfies (*).
Let $u_i$ be a barrier at position $p$ in $w_i$ and let $w' = w_i[1,p] u_i w_i[p+1, \infty]$.
The forward-walking slave automata active at $p$ in $w_i$ are terminated within $u_i$ in $w'$ and hence accumulate a value below $C \cdot |u_i| \leq C \cdot \mathbf{N}$ in $w'$, where $C$ is the maximal weight of slave automata of $\mathbb{A}$.
The number of forward-walking slave automata active between positions $p$ and $p+|u_i|$ which are not terminated within $u_i$ is bounded
by $|u_i| \leq \mathbf{N}$. Therefore, at every position between $p$ and $p+|u_i|$ in $w'$,
at most $\mathbf{N}$ forward-walking slave automata accumulate a value exceeding $4\cdot C \cdot \mathbf{N}$.
A similar argument applies to backward-walking slave automata, which shows that at positions $p$ to $p+|u_i|$ condition (*) is satisfied.
It remains to comment on the value of $w_B$. Barriers inserted in the iterative process satisfy conditions of Lemma~\ref{l:barrierDecreasesPartialSum}, and hence
the partial sums decrease from $w_0$ to $w_1, w_2$ and so on. It follows that the partial sums in $w_B$ are bounded by the partial sums in every word $w_0, w_1, \ldots$ and hence
the value of $w_B$ does not exceed the value of $w$.
\end{proof}
\Paragraph{Algorithm}.
Let $\mathbb{A}$ be a non-deterministic finite-width $(\textsc{LimAvg};\textsc{Sum}^+)$-automaton with bidirectional slave automata.
Lemma~\ref{WLOGDeterministicLimAvgSum} reduces the emptiness problem of $\mathbb{A}$ to
the emptiness problem of $\mathbb{A}_d$, which has the same properties as $\mathbb{A}$, but it is deterministic.
The reduction takes exponential time and produces an exponential-size automaton, while it does not change the number of configurations.
Therefore,
the value of $\mathbf{N}$ for $\mathbb{A}$ and $\mathbb{A}_d$ is the same.
Next, Lemma~\ref{l:reductionFiniteWidthToBoundedWidth} reduces the emptiness problem for $\mathbb{A}_d$ to
the emptiness problem of $\mathbb{A}_B$ which is a deterministic $(\textsc{LimAvg};\textsc{Sum}^+)$-automaton with bidirectional slave automata and
the width of $\mathbb{A}_B$ is linear in $\mathbf{N}$, i.e., it is exponential in $|\mathbb{A}|$.
The size of $\mathbb{A}_B$ is $O(\mathbf{N} \cdot |\mathbb{A}_d|)$, i.e., it is exponential in $|\mathbb{A}|$.
Therefore, the emptiness problem for $\mathbb{A}_B$ can be solved in space polynomial in $\mathbf{N} \cdot |\mathbb{A}_B|$ (Theorem~\ref{th:BoundedBF}), i.e.,
in exponential space in the size of $\mathbb{A}$.
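Summing up the bounds used above (a rough summary, with constants suppressed):
\[
|\mathbb{A}_d| = 2^{O(|\mathbb{A}|)}, \qquad \mathbf{N} = 2^{O(|\mathbb{A}|)}, \qquad |\mathbb{A}_B| = O(\mathbf{N} \cdot |\mathbb{A}_d|) = 2^{O(|\mathbb{A}|)},
\]
so space polynomial in $\mathbf{N} \cdot |\mathbb{A}_B|$ is $2^{O(|\mathbb{A}|)}$, i.e., the whole procedure indeed runs in exponential space in $|\mathbb{A}|$.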
\subsection{The proof of Theorem~\ref{th:expressivness}}
In the finite-automata framework, the pumping lemma is a standard tool to show inexpressibility results.
It is difficult to state a pumping lemma for NWA as their state space is infinite.
While there are finitely many configurations of an NWA, the multiplicity of running slave automata is unbounded.
We consider the graph of configurations of an NWA to reduce the infinite-space case to the finite-state case.
Recall that we assume that slave automata do not have outgoing transitions from accepting states. We also assume (without loss of generality) that initial states of slave automata
do not have incoming transitions.
\Paragraph{Graph of configurations}.
Let $\mathbb{A}$ be an NWA with bidirectional slave automata over an alphabet $\Sigma$.
We define the graph of configurations $G$ of an NWA $\mathbb{A}$ as a $\Sigma$-labeled graph whose nodes are configurations of $\mathbb{A}$.
For $a \in \Sigma$, there exists an $a$-labeled edge from configuration $(q_m, A_1)$ to $(q_m', A_2)$ if and only if
\begin{compactitem}
\item the master automaton of $\mathbb{A}$ has a transition $(q_m,a,q_m')$ invoking a slave automaton ${\mathfrak{B}}$ in an initial state $q_I$,
\item for $i=1,2$, states $A_i$ are partitioned into states of backward-walking slave automata $A_i^B$ and forward-walking slave automata $A_i^F$,
\item there exists a function $h_F$, which maps all non-final states from $A_1^F$ onto $A_2^F \setminus \{q_I\}$, such that
for every $q \in dom(h_F)$ the tuple $(q,a,h_F(q))$ is a transition of some slave automaton,
\item there exists a function $h_B$, which maps all non-final states from $A_2^B$ onto $A_1^B \setminus \{q_I\}$, such that
for every $q \in dom(h_B)$ the tuple $(q,a,h_B(q))$ is a transition of some slave automaton, and
\item if ${\mathfrak{B}}$ is forward-walking, then $q_I \in A_2^F$; otherwise $q_I \in A_1^B$.
\end{compactitem}
Paths in $G$ are related to simple runs of $\mathbb{A}$.
Consider a simple run of $\mathbb{A}$. Its sequence of configurations is an infinite path in $G$.
Conversely, given a path $\pi$ in $G$ starting in the initial configuration $(q_I, \emptyset)$, we can specify regular conditions ensuring
(a)~existence of a simple run corresponding to $\pi$ and
(b)~existence of a simple accepting run corresponding to $\pi$.
Regularity of these conditions follows from the fact that unweighted parts of runs of $\mathbb{A}$ can be simulated by an alternating B\"{u}chi{} automaton.
This automaton checks whether runs of all slave automata are finite; this suffices to ensure that a path corresponds to a valid simple run.
It can also check the acceptance condition, i.e., whether runs of all slave automata terminate in accepting states and the master automaton visits some accepting state infinitely often.
The graph of configurations of $\mathbb{A}$ enables us to construct accepting runs of $\mathbb{A}$ with desired properties.
Having an accepting run of $\mathbb{A}$ with a sequence of configurations $\alpha \beta \gamma$
such that $\beta$ is a cycle in the graph of configurations of $\mathbb{A}$, we know that for every $n>0$ there exists an accepting run of $\mathbb{A}$ whose sequence of configurations is
$\alpha \beta^n \gamma$. This is a key observation used in the following lemma.
\IncomparableForwardAndBackward*
\begin{proof}
\newcommand{\confSeq}[1]{\sigma[#1]}
\newcommand{\mathcal{O}}{\mathcal{O}}
We show (i) in detail and next we comment on (ii).
Suppose that $\mathbb{A}$ is a $(\textsc{LimAvg};\textsc{Sum}^+)$-automaton with forward-walking slave automata which computes DCP (Example~\ref{ex:data-consistency}) restricted to the alphabet $\{r,\#,c\}$.
First, we show that in the graph of configurations of $\mathbb{A}$ there exist two cycles $\tau_r, \tau_\#$ such that
$\tau_r$ is an $r$-labeled cycle in which at least one (non-dummy) slave automaton is invoked, and
$\tau_{\#}$ is a $\#$-labeled cycle in which $\mathbb{A}$ takes only silent transitions, i.e., it invokes only slave automata that immediately accept, returning no value.
Next, we use $\tau_r, \tau_{\#}$ to construct a run
$\pi_0$ on some word $u$ such that the value of $\pi_0$ is smaller than DCP of $u$, which contradicts
the assumption that $\mathbb{A}$ computes DCP.
\Paragraph{Existence of $\tau_r$ and $\tau_{\#}$}.
Let $K$ be greater than the number of configurations of $\mathbb{A}$ and let $N > 15 K^2$.
To simplify the calculations, we denote by $\mathcal{O}(2K)$ some natural number from interval $[0,2K]$.
For example, we write $N + \frac{3K-1}{2} = N + \mathcal{O}(2K)$.
Consider a word $w = (c \#^N r^{2K} c \#^{2N} r^K )^{\omega}$. DCP of $w$ is $\frac{4}{3}\cdot N+ \mathcal{O}(2K)$.
Let $\pi$ be a run of $\mathbb{A}$ on $w$ of the value $\frac{4}{3}\cdot N + \mathcal{O}(2K)$.
We can assume that $\pi$ is simple (Lemma~\ref{WLOGDeterministicLimAvgSum}).
In every block $r^{2K}$ in $w$, there exist positions $i_1 < i_2$ such that
the configurations in $\pi$ at $i_1$ and $i_2$ are the same and $i_2 - i_1 > K$. We remove these parts of $\pi$.
The resulting sequence $\pi'$ is a run of $\mathbb{A}$ on some word
$w' = c \#^N r^{L_1} c \#^{2N} r^K c \#^N r^{L_2} c \#^{2N} r^K \ldots$ such that $L_1, L_2, \ldots$ are at most $K-1$.
Observe that DCP of $w'$ is at least $\frac{3}{2}\cdot N$ and hence the value of $\pi'$ is at least $\frac{3}{2}\cdot N$.
However, the partial sums of the values returned by slave automata in $\pi'$ are bounded by
the corresponding partial sums in $\pi$. Therefore, the value of $\pi'$ increases due to the fact that the removed parts of $\pi$
contain invocations of slave automata returning small values, and removal of these parts of $\pi$ increases the partial averages.
It follows that infinitely often at least one (non-dummy) slave automaton is invoked over the block $r^{2K}$.
Consider the sequence of configurations $\confSeq{\pi}$ of run $\pi$.
There exist infinitely many subsequences $\tau$ of $\confSeq{\pi}$, which correspond to
transitions over letters $r$ and satisfy: (A1)~the first and the last configuration of $\tau$
is the same, (A2)~along $\tau$ at least one slave automaton is invoked and (A3)~the length of $\tau$ is bounded by $K$.
Such a sequence corresponds to an $r$-labeled cycle in the graph of configurations of $\mathbb{A}$ of length at most $K$.
There are finitely many such cycles and hence there exists a cycle $\tau_{r}$ satisfying condition (A1), (A2) and (A3)
which occurs infinitely often in $\confSeq{\pi}$.
In a similar way, we show that $\confSeq{\pi}$ contains infinitely often a subsequence $\tau_{\#}$, which corresponds to transitions over letters $\#$,
such that
(B1)~the first and the last configuration of $\tau_{\#}$ is the same,
(B2)~slave automata invoked along $\tau_{\#}$ return no value (correspond to silent transitions), and
(B3)~the length of $\tau_{\#}$ is bounded by $K$.
To see that, we divide runs $\pi$ and $\pi'$ into blocks separated by letter $c$.
In transformation from $\pi$ to $\pi'$,
the average number of invocations of (non-dummy) slave automata per block decreases by at most $\frac{2K}{3}$.
Yet, the value of $\pi'$ increases by at least $\frac{1}{6}N - \mathcal{O}(2K)$ w.r.t. the value of $\pi$.
Therefore, the average number of invoked slave automata per block
cannot exceed $9 K$ in $\pi$.
It follows that at most $14 \cdot K$ non-dummy slave automata are invoked on average in a block of $N$ letters $\#$.
Thus, there exist infinitely many occurrences of subsequences of $\confSeq{\pi}$ satisfying (B1), (B2) and (B3), and hence
there exists a $\#$-labeled cycle $\tau_{\#}$ satisfying conditions (B1), (B2) and (B3), which occurs infinitely often in $\confSeq{\pi}$.
Observe that there exist infinitely many subwords $c \#^N r^{2K} c \#^{2N} r^K$ of $w$ such that both
$\tau_{\#}$ and $\tau_{r}$ occur at the corresponding positions in $\confSeq{\pi}$. Thus, there exists a path $\alpha_A$
in the graph of configurations of $\mathbb{A}$ such that $\alpha_A$ leads from the last configuration of $\tau_{\#}$ to the first configuration of $\tau_r$ over letters $\#, r$.
Moreover, all slave automata in $\pi$ terminate after a finite number of steps, while $\tau_{\#}$ and $\tau_r$ occur infinitely often. Therefore,
there exists a path $\alpha_B$ from the first configuration of $\tau_{r}$ to the last configuration of $\tau_{\#}$ over letters $\#,r,c$ such that
(C1)~at least one transition is over letter $c$, and
(C2)~all slave automata active at the first configuration of $\alpha_B$ are terminated before the end of $\alpha_B$,
(C3)~the master automaton of $\mathbb{A}$ visits an accepting state within $\alpha_B$.
Let $u_A$ (resp., $u_B$) be a subword of $w$ at which configurations of $\pi$ form the sequence $\alpha_A$ (resp., $\alpha_B$).
Next, we show the construction of $\pi_0$ using $\tau_r, \tau_{\#}$, $\alpha_A$ and $\alpha_B$.
\Paragraph{The construction of $\pi_0$}.
Let $M,L$ be natural numbers, which we fix later.
We define $\pi_0$ as some simple accepting run that corresponds to the sequence of configurations $\alpha_0 ((\tau_{\#})^L \alpha_A (\tau_r)^M \alpha_B)^{\omega}$,
where $\alpha_0$ is a sequence of configurations from an initial configuration to the first configuration of $\tau_{\#}$.
Such a run exists as we can ensure that at positions corresponding to $\alpha_B$ all slave automata terminate in accepting states and the master automaton visits an accepting state.
Let $u_0$ be a word at which there exists a run with the sequence of configurations $\alpha_0$.
We define $u = u_0 (\#^{L\cdot |\tau_{\#}|} u_A r^{M \cdot |\tau_r|} u_B )^{\omega}$.
The run $\pi_0$ is an accepting run on $u$.
Observe that DCP of $u$ exceeds $|\tau_{\#}| \cdot L$.
However, we show that the value of $\pi_0$ is smaller.
Run $\pi_0$ is a lasso and its limit average is the average of the cycle, which corresponds to
the average of $\alpha_B (\tau_{\#})^L \alpha_A (\tau_r)^M \alpha_B$ excluding values of slave automata invoked in the second occurrence of $\alpha_B$.
The non-dummy slave automata are invoked only in $\alpha_B$ and in $\alpha_A (\tau_r)^M$.
All slave automata invoked within this cycle terminate by the end of it, and hence
(a)~the values of slave automata invoked in $\alpha_B$ are bounded by the length of the cycle multiplied by $C$, the maximal weight of $\mathbb{A}$, i.e.,
$S_1 = C \cdot (|\alpha_B| + |\alpha_A| + L\cdot |\tau_{\#}| + M \cdot |\tau_r|)$, and
(b)~the values of slave automata invoked in $\alpha_A (\tau_r)^M$ are bounded by
$S_2 = C \cdot (|\alpha_A| + |\alpha_B| + M \cdot |\tau_r|)$.
We have $S_1 > S_2$, however there are at most $|\alpha_B|$ slave automata invoked in $\alpha_B$, which accumulate value at most $S_1$.
The remaining slave automata are invoked in $\alpha_A (\tau_r)^M$ and there are at least $M$ of them.
Thus, the average value of the cycle is at most $ \frac{S_1 \cdot |\alpha_B| + S_2 \cdot M}{|\alpha_B| + M}$.
Now, for $M = 2 \cdot C \cdot |\alpha_B| \cdot |\tau_{\#}|$, we have
$\frac{S_1 \cdot |\alpha_B| }{|\alpha_B| + M} < |\alpha_B| + |\alpha_A| + \frac{L}{2} + C\cdot |\tau_r|\cdot |\alpha_B|$ and
$\frac{S_2 \cdot M}{|\alpha_B| + M} < S_2$.
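For completeness, here is a direct verification of the first estimate (it uses only $|\tau_{\#}| \geq 1$ and $|\alpha_B| \geq 1$, the latter being guaranteed by (C1)): since $M = 2 \cdot C \cdot |\alpha_B| \cdot |\tau_{\#}|$,
\[
\frac{S_1 \cdot |\alpha_B|}{|\alpha_B| + M} \;\le\; \frac{S_1 \cdot |\alpha_B|}{M} \;=\; \frac{|\alpha_B| + |\alpha_A|}{2\cdot|\tau_{\#}|} + \frac{L}{2} + C\cdot |\tau_r|\cdot |\alpha_B| \;<\; |\alpha_B| + |\alpha_A| + \frac{L}{2} + C\cdot |\tau_r|\cdot |\alpha_B|,
\]
and the second estimate is immediate because $\frac{M}{|\alpha_B| + M} < 1$.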
Let $L > 2 \cdot (S_2 + |\alpha_B| + |\alpha_A| + C\cdot |\tau_r|\cdot |\alpha_B|)$, then the average of the cycle, which is bounded by
$\frac{S_1 \cdot |\alpha_B|+ S_2 \cdot M}{M+ |\alpha_B|} $, is smaller than $L$.
However, DCP of $u$ exceeds $L$, which contradicts the fact that $\mathbb{A}$ computes DCP.
\Paragraph{Backward-walking slave automata}.
The proof for backward-walking slave automata is similar.
We consider numbers $K,N$ and a word $w = (c\, r^{2K} \#^N c\, r^K \#^{2N})^{\omega}$; we show that there exist cycles
$\tau_r', \tau_{\#}'$ in $\mathbb{A}$, with properties similar to those of $\tau_r, \tau_{\#}$ from the forward case.
Moreover, there exist sequences of configurations $\alpha_A'$ from the last configuration of $\tau_r'$ to the first configuration of $\tau_{\#}'$, and
$\alpha_B'$ from the last configuration of $\tau_{\#}'$ to the first configuration of $\tau_{r}'$, with properties similar to (C1), (C2) and (C3).
To show (C2) we use the fact that $\tau_r'$ and $\tau_{\#}'$ occur infinitely often and $\mathbb{A}$ has \textbf{finite-width}, and hence for every position
$i$ there exists a position $j>i$ such that every (backward-walking) slave automaton active at position $j$ terminates before $i$ (i.e., at some position within $[i,j]$).
Next, we construct from $\tau_r',\tau_{\#}',\alpha_A', \alpha_B'$ a run $\pi_0'$ of a value lower than DCP of the corresponding word.
The construction is virtually the same as in the forward case.
\end{proof}
\section{Proofs from Section~\ref{s:bounded}}
\TechnicalBoundedWidth*
\begin{proof}
\Paragraph{(1)}: Assume that (*) is satisfied.
Consider an accepting run $\pi$ with configuration $\mathcal{C}[1]$ occurring infinitely often.
Let $i$ be a position at which configuration $\mathcal{C}[1]$ occurs.
Let $i' < i$ be the last position at which any automaton from $Fc$ is invoked.
Consider the run resulting from inserting cycle $\mathcal{C}$ repeated $N$ times at position $i$ in $\pi$.
The only slave automata active past position $i + |\mathcal{C}|$ which have been invoked before $i'$ are the automata from $Fc$.
Therefore, the partial sum of values returned by slave automata up to position $i'$ decreases by at least
$(N-1) \cdot \textsc{Gain}(\mathcal{C}, Fc) + C < - (N-1) +C$, where $C$ is the value of forward slave automata invoked before $i'$, which terminate in $\mathcal{C}$.
The number of slave automata invoked before $i'$ does not change and hence
by picking $N$ large enough we can decrease the partial average up to $i'$ arbitrarily.
We can apply such a pumping step at every position with configuration $\mathcal{C}[1]$ obtaining a run whose limit infimum of partial averages diverges to $-\infty$.
\Paragraph{(2)}: $(\Rightarrow)$:
Assume that there exists a cycle $\mathcal{C}$ in the graph of configurations of $\mathbb{A}$ and
a restriction $R$ such that $\textsc{AvgE}(\mathcal{C}, R)$ and
there exists an accepting run $\pi$ with configuration $\mathcal{C}[1]$ occurring infinitely often.
Let $i$ be a position at which configuration $\mathcal{C}[1]$ occurs.
We insert $\mathcal{C}$ at position $i$ and obtain run $\pi'$.
Let $i'$ be the last position in $\pi'$ such that $i' \geq i + |\mathcal{C}|$ and all automata active at position $i$, which are not in $R$, are invoked before $i'$.
Due to the presence of backward-walking slave automata, $i'$ can be strictly greater than $i+ |\mathcal{C}|$.
Consider the run resulting from inserting cycle $\mathcal{C}$ repeated $N$ times at position $i$ in $\pi'$.
Then, the partial average up to position $i' + N |\mathcal{C}|$ is given by the expression
$\frac{a + N\cdot p - \Delta}{b + N \cdot q}$, where
\begin{compactitem}
\item $a$ is the partial sum of values returned by slave automata invoked up to $i'$ in run $\pi$,
\item $b$ is the number of slave automata invoked up to $i'$ in run $\pi$,
\item $\frac{p}{q} = \textsc{AvgE}(\mathcal{C}, R)$ and $q$ is the number of slave automata invoked in $\mathcal{C}$, and
\item $\Delta$ is the value accumulated by backward-walking slave automata invoked past $i'+N |\mathcal{C}|$, which terminate
within interval $[i + N |\mathcal{C}|, i + (N+1) |\mathcal{C}|]$.
\end{compactitem}
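Since $a$ and $b$ do not depend on $N$ and $\Delta$ stays bounded (it concerns at most width-many backward-walking slave automata over an interval of length $|\mathcal{C}|$), the inserted copies of the cycle dominate the partial average:
\[
\lim_{N\to\infty} \frac{a + N\cdot p - \Delta}{b + N \cdot q} \;=\; \frac{p}{q} \;=\; \textsc{AvgE}(\mathcal{C}, R).
\]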
Now, by taking $N$ large enough we can bring the partial average arbitrarily close to $\frac{p}{q} = \textsc{AvgE}(\mathcal{C}, R)$.
Using that and simple iteration, we can construct an accepting run of the value $\textsc{AvgE}(\mathcal{C}, R)$.
$(\Leftarrow)$: Assume that $\mathbb{A}$ does not satisfy (*) from (1).
It follows that in every accepting run of $\mathbb{A}$, for almost every position $i$,
the slave automata invoked before $i$ accumulate past $i$ a total value exceeding $D$,
defined as $-C \cdot k^2 \cdot \conf{\mathbb{A}}$, where
$C$ is the maximal weight occurring in $\mathbb{A}$ and $\conf{\mathbb{A}}$
is the number of configurations of $\mathbb{A}$.
Indeed, if there exists a position $i$ at which this fails, then there exists a cycle past $i$ which we can pump to lower the sum of values returned by slave automata invoked before $i$.
Hence, the existence of infinitely many such positions $i$ implies that condition (*) holds.
Let $\pi$ be an accepting run of $\mathbb{A}$ of value $\lambda$.
Consider $\epsilon > 0$. There exists a prefix of $\pi$ up to position $i$ such that
the partial average of values returned by slave automata up to $i$ is at most $\lambda + \epsilon$,
the sum of values accumulated by slave automata invoked before $i$ exceeds $D$, and
the number of slave automata invoked before $i$ exceeds $\frac{-D}{\epsilon}$.
Then, the partial average of the values accumulated by slave automata invoked before $i$ within positions $1, \ldots, i$
is at most $\lambda + 2\cdot \epsilon$.
Now, we can decompose the prefix up to $i$ into simple cycles one by one, i.e., having a prefix $\tau$ of $\pi$ up to position $i$,
we pick a simple cycle, remove it from $\tau$ and repeat the process. We terminate when we end up with a run $\tau_E$ which has no simple cycle to remove;
the remaining run $\tau_E$ has length bounded by the number of configurations and therefore its sum of values is greater than $D$.
Thus, the partial average of the values accumulated by slave automata invoked before $i$ within positions $1, \ldots, i$ equals (a)~
the weighted average of average weights of simple cycles excluding backward-walking slave automata invoked past $i$,
plus (b)~the contribution of $\tau_E$, which is bounded from below by $D \cdot \frac{\epsilon}{-D} = -\epsilon$.
It follows that there exist a simple cycle $\mathcal{C}$ and a restriction $R$ such that $\textsc{AvgE}(\mathcal{C}, R) \leq \lambda + 3 \cdot \epsilon$;
otherwise the weighted average in (a) exceeds $\lambda + 3 \cdot \epsilon$, which contradicts the choice of prefix of $\pi$.
However, there are infinitely many $\epsilon >0$, while there are finitely many simple cycles. Therefore,
there exist a simple cycle $\mathcal{C}$ and a restriction $R$ such that $\textsc{AvgE}(\mathcal{C}, R) \leq \lambda$.
\end{proof}
\BoundedWidthForwardAndBackward*
\begin{proof}
It suffices to show that conditions from Lemma~\ref{l:techlical-bounded} can be checked (a)~in logarithmic space for constant $k$ and unary weights,
(b)~polynomial time for constant $k$ and binary weights,
and (c)~polynomial space for $k$ given in unary.
Observe that these conditions reduce to weighted reachability, which can be computed in logarithmic space in the size of the graph of $k$-configurations of $\mathbb{A}$, provided that weights can fit in logarithmic space.
Otherwise, if weights are represented in binary and are of length greater than logarithmic in the size of the graph, weighted reachability can be implemented using Dijkstra's algorithm in polynomial time.
The size of the graph of $k$-configurations is polynomial in the size of $\mathbb{A}$ and exponential in $k$. Thus,
the graph of $k$-configurations of $\mathbb{A}$ has polynomial size if $k$ is constant, and exponential size if $k$ is given in unary.
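Concretely (a rough count, with $Q_{\mathrm{mas}}$ denoting the states of the master automaton and each of the $k$ slave components ranging over $Q_s \cup \{\bot\}$), the number of $k$-configurations is at most
\[
|Q_{\mathrm{mas}}| \cdot (|Q_s| + 1)^{k},
\]
which is polynomial in $|\mathbb{A}|$ for constant $k$ and exponential when $k$ is given in unary.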
Finally, we comment on how to compute the successor relation.
We define a presuccessor relation $R$ on $k$-configurations as follows.
We have $(q; q_1, \ldots, q_k) R (q'; q_1', \ldots, q_k')$ if and only if for some $a \in \Sigma$
the master automaton of $\mathbb{A}$ has a transition $(q,a,q')$ invoking a slave automaton ${\mathfrak{B}}$ in an initial state $q_I$, and
for every component $j \in \{1, \ldots, k\}$ one of the following holds
\begin{compactitem}
\item $q_j$ is a non-final state of a forward-walking slave automaton and $(q_j,a,q_j')$ is a transition of this automaton,
\item $q_j$ is a final state of a forward-walking slave automaton, and $q_j' = \bot$ or $q_j' = q_I$,
\item $q_j'$ is a non-final state of a backward-walking slave automaton and $(q_j',a,q_j)$ is a transition of this automaton,
\item $q_j'$ is a final state of a backward-walking slave automaton, and $q_j = \bot$ or $q_j = q_I$,
\end{compactitem}
Moreover, if ${\mathfrak{B}}$ is a forward-walking (resp., backward-walking) slave automaton, then for exactly one component $j$ we have $q_I = q_j'$ (resp., $q_I = q_j$).
Observe that $R$ encodes a local consistency of transitions of the master and slave automata. A sequence of $k$-configurations consistent with $R$ satisfying
the following conditions (a) and (b) corresponds to an accepting simple run. These conditions are:
(a)~the master automaton visits one of its accepting states infinitely often, and (b)~every slave automaton terminates after finitely many steps.
Now observe that the successor relation defined in Section~\ref{s:bounded} is the presuccessor relation restricted to $k$-configurations $C$, which are
(1)~reachable through $R$ from the initial configuration and (2)~a cycle w.r.t. $R$ satisfying conditions (a) and (b) is reachable through $R$ from $C$.
Indeed, for such configurations $C_1, C_2$ satisfying $C_1 R C_2$ there exists a sequence of $k$-configurations consistent with $R$, which corresponds to an accepting run.
It follows that the successor relation in the graph of $k$-configurations can be computed based on reachability w.r.t. presuccessor relation, which is computable in logarithmic space.
\end{proof}
\end{document}